"We almost went out of business several times."

Founders do not usually talk about their companies' near-death experiences. But Jen-Hsun Huang, the boss of Nvidia, has no reason to be coy. His firm, which develops microprocessors and related software, is on a winning streak. In the past quarter its revenue increased by 55 percent, reaching $2.2 billion, and in the past 12 months its share price has almost quadrupled.

A big part of Nvidia's success stems from rapidly growing demand for its chips, known as graphics processing units (GPUs), which turn personal computers into fast gaming devices. But GPUs also have new destinations, notably data centers, where artificial-intelligence (AI) programs gobble up the vast quantities of computing power they provide.

Soaring sales of these chips are the clearest sign yet of a secular shift in information technology. The architecture of computing is fragmenting because of the slowing of Moore's law, which until recently meant that the power of computing would double roughly every two years, and because of the rapid rise of cloud computing and AI. The implications for the semiconductor industry, and for Intel, its dominant company, are profound.

Things were straightforward when Moore's law, named after Gordon Moore, a founder of Intel, was still in full swing. Whether in PCs or in servers (souped-up computers in data centers), one kind of microprocessor, known as a "central processing unit" (CPU), could deal with most "workloads," as classes of computing tasks are called. Because Intel made the most powerful CPUs, it came to rule not only the market for PC processors (it has a market share of about 80 percent) but the one for servers, where it has an almost complete monopoly. In 2016 it had revenue of nearly $60 billion.

This unipolar world is starting to crumble. Processors are no longer improving quickly enough to handle, for instance, machine learning and other AI applications, which consume more number-crunching power than entire data centers did just a few years ago. Intel's customers, such as Google, Microsoft and other operators of big data centers, are opting for more and more specialized processors from other companies and are designing their own to boot.

Nvidia's GPUs are one example. They were created to carry out the massive, complex computations required by interactive video games. GPUs have hundreds or even thousands of specialized "cores" (the "brains" of a processor), all working in parallel, whereas CPUs have only a few powerful ones that tackle computing tasks sequentially. Nvidia's latest processors boast 3,584 cores; Intel's server CPUs have a maximum of 28.

The company's lucky break came in the midst of one of its near-death experiences during the global financial crisis. It discovered that hedge funds and research institutes were using its chips for new purposes, such as calculating complex investment and climate models. It developed a coding language, called CUDA, that helps its customers program its processors for different tasks. When cloud computing, big data and AI gathered momentum a few years ago, Nvidia's chips were just what was needed.
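
To make the last two paragraphs concrete, here is a minimal, illustrative CUDA sketch; the kernel name, the data and the scaling factor are invented for the example rather than drawn from any firm's code. Where a CPU would march through a million numbers one after another, the GPU hands each number to its own lightweight thread and lets thousands of cores chew through them at once; CUDA is the language that lets programmers express work in that form.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread handles exactly one element of the array.
    __global__ void scale_values(const float* in, float* out, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i] * factor;
    }

    int main() {
        const int n = 1 << 20;                    // about a million data points
        const size_t bytes = n * sizeof(float);

        // Prepare the data on the CPU ("host") side.
        float* h_in = (float*)malloc(bytes);
        float* h_out = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) h_in[i] = 1.0f;

        // Copy it to the GPU ("device"), where the cores can reach it.
        float *d_in, *d_out;
        cudaMalloc((void**)&d_in, bytes);
        cudaMalloc((void**)&d_out, bytes);
        cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements in parallel.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        scale_values<<<blocks, threads>>>(d_in, d_out, 1.05f, n);

        // Copy the results back and tidy up.
        cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
        printf("first result: %f\n", h_out[0]);

        cudaFree(d_in); cudaFree(d_out);
        free(h_in); free(h_out);
        return 0;
    }

The same pattern, copying data to the chip, launching a kernel across thousands of threads and copying the results back, underlies the investment and climate models mentioned above as much as it does the training of neural networks.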

Every online giant uses Nvidia GPUs to give its AI services the ability to ingest reams of data, ranging from medical images to human speech. The firm's revenue from selling chips to data-center operators trebled in the past financial year, to $296 million.

And GPUs are only one sort of "accelerator," as such specialized processors are known. The range is expanding as cloud-computing firms mix and match chips to make their operations more efficient and stay ahead of the competition. "Finding the right tool for the right job" is how Urs Hölzle, in charge of technical infrastructure at Google, describes the trade-off between flexibility, speed and cost.

At one end of the range are ASICs, an acronym for "application-specific integrated circuits." As the term suggests, they are hard-wired for one purpose and are the fastest on the menu as well as the most energy-efficient.

At the other extreme are field-programmable gate arrays (FPGAs). These can be reprogrammed after manufacture, which gives greater flexibility; that is why, even though they are tricky to handle, Microsoft has added them to many of its servers. "We now have more FPGAs than any other organization in the world," said Mark Russinovich, chief technology officer at Azure, the firm's computing cloud.

Instead of making ASICs or FPGAs, Intel has focused in recent years on making its CPUs ever more powerful. Nobody expects conventional processors to lose their jobs anytime soon. Yet the quickening rise of accelerators appears to be bad news for the company, said Alan Priestley of Gartner, an IT consultancy: the more computing happens on them, the less is done on CPUs.

One answer is to catch up by making acquisitions. In 2015 Intel bought Altera, a maker of FPGAs, for $16.7 billion. In August 2016 it paid more than $400 million for Nervana, a start-up that is developing specialized AI systems.

The firm said it sees specialized processors as an opportunity, not a threat. New computing workloads have often started out being handled on specialized processors, said Diane Bryant, who runs Intel's data-center business, only to be "pulled into the CPU" later.