From the series The telecommunications battle
At the end of 2025, Nvidia, the Californian company founded and led by Jensen Huang, and producer of over 80% of the chips powering artificial intelligence (AI) development, was the most valuable company in the world. Its market capitalisation reached $4.532 trillion: 12% more than Apple, the next highest, and ten times Nvidia's own level of three years earlier, when ChatGPT made its international debut.
The AI gold rush has turned into an astonishing windfall for the businesses selling the necessary hardware: the picks and shovels of the day. Nvidia's revenue for 2025 is expected to exceed $210 billion, an eightfold increase in five years. Net profits over the past three years will surpass $180 billion, more than 50% of total revenue.
Analysts of the ongoing competition explain that three conditions are required to achieve good results in AI: vast computing power, a good analysis model, and a huge amount of data. Under these circumstances, a shortage of chips (used in data-centre servers) means more time needed to train models and the risk of arriving late to market.
Risks and recent developments for Nvidia
A recent editorial in the Financial Times notes that, after two years of experimentation and implementation, 2026 will be the year of financial assessment for AI: "Some tech giants, including Alphabet, Amazon, and Microsoft, will continue to deploy AI effectively to cut costs and improve existing services that already reach billions of people. But some insurgent AI start-ups, such as OpenAI and Anthropic [...] still need to convince investors they can build competitive moats around their own businesses" (Financial Times, January 3rd, 2026). The Economist is harsher: Sam Altman (OpenAI's CEO) "is like a juggler on a unicycle. [...] To keep his audience rapt, he has thrown ever more balls into the air" (The Economist, December 29th, 2025).
The fact that the major US tech groups are Nvidia's main customers, accounting for roughly 50% of purchases, creates the risk of a drop in demand. Google and Amazon, moreover, have long since begun developing their own processors: TPUs (Tensor Processing Units) for the former, Trainium and Inferentia for the latter. Both companies intend not only to use these chips internally but also to commercialise them, as they are already doing with Anthropic and Meta. Taiwan's TSMC is expected to produce 3.2 million TPUs for Google this year.
Nvidia nonetheless retains a number of advantages. First, there is its CUDA software, which has been around for twenty years and is the platform most widely used by developers worldwide. The company also has a series of agreements with Nokia, Synopsys, and Groq for the design of internal data-centre networks and for the development of new chips. Then there is the Chinese market, which Trump has reopened to Nvidia's previous-generation H200 chips. Finally, there is diversification.
Huang delivered the opening speech at the Consumer Electronics Show in Las Vegas at the beginning of the year. Detailing dozens of agreements with robotics, automotive, and machine-tool companies (Boston Dynamics, Caterpillar, Hitachi, Siemens, LG, Daimler), he announced the arrival of Physical AI: chips capable of teaching autonomous behaviour to systems. According to Huang, "the ChatGPT moment of robotics has arrived". Meanwhile, a new AI chip called Vera Rubin, five times more powerful than those currently in use, will soon reach the market. Though it is difficult to make predictions about the future, Huang has in the past successfully anticipated a number of trends, such as the convergence of science and gaming.
GPUs
The Thinking Machine is a book about Huang and Nvidia by the American journalist Stephen Witt. It is the story of how a niche vendor of video game hardware became the most valuable company in the world: the story of "a stubborn entrepreneur [...] a propulsive, mercurial, brilliant, and extraordinarily dedicated man [...] whose familiarity with the inner workings of electronic circuitry approaches a kind of intimacy. [...] He does not always win, but when he does, he wins big".
Huang was born in Taiwan in 1963 and moved to the US at the age of ten, where he graduated in electronic engineering from Oregon State University. He was hired by AMD and then by LSI Logic, where he met Chris Malachowsky and Curtis Priem, chip designers working at Sun Microsystems. In 1989, their fruitful collaboration led to the launch of SunGX, a line of 3D graphics processors that powered the workstations of scientists, animators, and CAD modellers. The trio proposed that Sun create a more cost-effective chip for use in video games. The response was scornful: "we are at the service of science, not gamers".
Nvidia, the company the trio founded in 1993, started out creating graphics cards for video games. Nvidia is a fabless company: it designs chips but outsources their production. NV1, its first board, completed in 1995, was produced by Italy's STMicroelectronics. The name Nvidia combines NV ("new version") with invidia, the Latin word for the envy the founders intended to arouse in others. When the company was founded, there were 45 graphics card manufacturers in North America. Huang did not indulge in science fiction but read business books to stay ahead of the competition.
In 1999, Nvidia handed production over to the Taiwanese company TSMC and launched the GeForce card; its marketing manager called it a GPU (Graphics Processing Unit), and graphics accelerators in general would soon take this name. The first GeForce products were installed in Microsoft's Xbox console; later, the Tegra chip ended up in Nintendo's consoles. Its power consumption was low, and the Japanese company chose it because it permitted gamers to detach the Switch from its base and play "[...] for hours while hiding under the covers from their parents", writes Witt.
Science
In 1999, Nvidia went public, but with the bursting of the dot-com bubble its shares lost 90% of their value. At Huang's urging, people who envisaged using graphics cards' computing power for science were invited to join Nvidia; among them was Bill Dally, former head of Stanford's computer science department. Under his direction, a working group was set up to apply GPUs to parallel computing, which divides a task into pieces executed simultaneously rather than one after another. In 2006, this work led to the creation of CUDA (Compute Unified Device Architecture).
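The idea behind this kind of parallelism can be sketched in a few lines. The toy below is not Nvidia's CUDA API; it simply uses Python with NumPy's vectorised operations standing in for the thousands of GPU cores applying the same instruction to many data elements at once:

```python
import numpy as np

# A million "tasks": apply the same scale-and-offset operation to each value
# (think of adjusting the brightness of a million pixels).
data = np.random.rand(1_000_000)

# Sequential processing: one element at a time, as a single CPU core would.
sequential = [x * 2.0 + 1.0 for x in data]

# Data-parallel style: one operation over the whole array at once --
# the pattern a GPU executes across thousands of cores simultaneously.
parallel = data * 2.0 + 1.0

# Both routes compute exactly the same result; only the execution model differs.
print(np.allclose(sequential, parallel))
```

On real hardware, the second form is what CUDA exposes to programmers: the same small function ("kernel") launched over millions of data elements in parallel.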
At the University of Toronto, Professor Geoffrey Hinton, along with his assistants Alex Krizhevsky and Ilya Sutskever, was working on software inspired by the neural networks of the human brain. They assigned a computer the task of "seeing" and distinguishing images, using Nvidia GPUs for the calculations, which involve interpreting sequences of numbers representing the colours of individual pixels.
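What "sequences of numbers representing pixels" means can be made concrete with a toy example (a made-up 2x2 image, not the actual data pipeline Hinton's group used):

```python
import numpy as np

# A tiny 2x2 colour image: each pixel is three numbers (red, green, blue),
# each between 0 and 255.
image = np.array([
    [[255, 0, 0], [0, 255, 0]],      # a red pixel, a green pixel
    [[0, 0, 255], [255, 255, 255]],  # a blue pixel, a white pixel
], dtype=np.uint8)

# What the neural network actually receives: a flat sequence of numbers,
# conventionally rescaled to the 0-1 range for training.
sequence = image.astype(np.float32).flatten() / 255.0

print(sequence.shape)  # 12 numbers: 4 pixels x 3 colour channels
```

A photograph of a cat, to the network, is nothing more than a much longer version of this sequence; "seeing" means learning which numerical patterns distinguish one category from another.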
In 2012, they won a competition run at Stanford to interpret ImageNet, a database of 15 million images divided into 22,000 categories, built up over time through manual labelling via Amazon Mechanical Turk. In essence, the Toronto trio had found that video game cards, the GPUs used for parallel computing, were faster than other processors at training neural networks.
In 2017, the Nobel Prizes for Physics and Chemistry were awarded to groups of researchers who had used Nvidia cards to create three-dimensional models in their research, which involved large stellar collisions and cryo-electron microscopy. It was a great contribution to science, but it did not generate profits. Those came instead from the demand of cryptocurrency miners. "For a time, Nvidia's stock traded in parallel with bitcoin's price", writes Witt, who adds: "Huang was too much of a businessman to explicitly discourage miners from purchasing his GPUs, [...] Nvidia's official position on crypto was silence".
The arrival of generative AI
A decade ago, a group of Google researchers was studying language; the starting assumption was that the relationships between words, or chunks of them (called tokens), could be mapped through the statistical weights governing their associations. The mechanism they developed, called the transformer, failed to attract Google's interest. The researchers dispersed, and some ended up at OpenAI, founded in 2015 by Elon Musk and Sam Altman but by then run solely by the latter.
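The intuition that word associations can be captured statistically is easy to illustrate, though the toy below is a drastically simplified bigram counter, not the transformer's learned attention weights:

```python
from collections import Counter, defaultdict

# A toy corpus split into tokens; real models train on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token: a crude numerical
# "map" of word associations, the role played in a transformer by
# weights learned during training.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(token):
    """Return the statistically most likely next token."""
    return follows[token].most_common(1)[0][0]

print(predict("the"))  # -> "cat" ("cat" follows "the" twice; "mat" and "fish" once each)
```

A transformer replaces these raw counts with weights tuned across the whole context of a sentence, but the underlying bet is the same: language can be predicted from the statistics of token associations.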
At OpenAI, Sutskever, one of the Toronto trio, took up the idea, and the start-up began working on GPT (Generative Pre-trained Transformer), on hardware made up of servers running Nvidia graphics cards. GPT-1, launched in 2018, was trained on a collection of 7,000 self-published e-books. The results were terrible, but a start-up could afford risks and blunders that would be unthinkable for a large company like Google. The model nonetheless demonstrated that GPTs could work, provided they were trained on far more material.
GPT-3 was trained on the whole of Wikipedia, The New York Times archive dating back to 1851, and numerous other web pages. ChatGPT, built on this family of models and launched in late 2022, passed 800 million users within three years: generative AI had become a global phenomenon. Since 2023, Nvidia's revenue from data centres has consistently exceeded that from video game chips.
"The intersection of parallel computing and neural networks", writes Witt, "two fringe strains of computer science, starved of investment, hated – no, detested – by industry and researchers alike had somehow unified to form a thriving, sprawling entity [...] Huang called it 'luck, founded by vision'."