From the series The telecommunications battle
There are two interwoven and contrasting trends in the American economy. On the one hand, we are witnessing steady growth in the value of securities linked to the furious race towards artificial intelligence (AI), which could lead to a financial bubble; on the other, those same huge investments in AI are themselves driving an increase in GDP.
In the first week of November, a downward correction wiped $1.2 trillion off the stock-market value of many technology securities. Jamie Dimon, CEO of JPMorgan, the biggest American bank, predicts a one-in-three probability of a collapse, albeit not an imminent one. "As I see it," he states, "artificial intelligence is real and, all in all, it will pay off [...] just as happened in the past in the case of automobiles and television sets". Those products, however, also saw many company bankruptcies; hence, part of the money invested in AI will probably be thrown away [Il Sole 24 Ore, October 10th].
The Financial Times reports that in October the IMF's chief economist said the AI investment boom, already one of the biggest movements of capital in modern history, has helped the US avoid a sharp slowdown. "Investment in software and information-processing equipment," adds The Wall Street Journal, "drove most of America's GDP growth in the first half of 2025, according to federal data. [...] The construction of each hub of data centres still requires hundreds of millions of dollars' worth of labour, cable, piping, copper, power hook-ups, cooling systems, backup generators, and more" [November 5th].
Nvidia and OpenAI
Nvidia is the Californian company which produces the chips most commonly used in the data centres where AI models are trained. It was also the first company in history to exceed a market capitalisation of $5 trillion: its notional value is twice that of the top 40 groups listed on the Paris stock exchange. Nvidia's growth has been very swift: its turnover rose from $16.7 billion in 2021 (when its graphics cards were mainly used for online games) to $130 billion in 2025, while over the same period its profit grew sixteenfold, to $72 billion. A political veto forbids it from selling its most advanced GPUs (graphics processing units) in China, and a further critical issue is that more than 50% of its sales are concentrated among just five major clients: Microsoft, Amazon, Google, Meta, and Oracle. In short, the race towards AI is both the strength and the weakness of Nvidia, as it is for OpenAI as well.
OpenAI was born in 2015 on the initiative of Elon Musk and Sam Altman as a non-profit, with the aim of developing artificial-intelligence models while, at the same time, keeping them under human control. Today the company, run by Altman, sells ChatGPT (which has 800 million users worldwide) and raises funds with the aim of turning them into future profits. Musk has accused Altman of betraying the founding principles, but has in turn set up xAI, itself a commercial company.
2025 was a very active year for OpenAI. It began in February with the presentation at the White House of the Stargate plan, a multiyear plan to invest $500 billion in data centres, to be realised in collaboration with Oracle and Japan's SoftBank, which has just announced that it wants to finance OpenAI with $30 billion. The year then continued with the announcement of the many agreements summarised below, which led to OpenAI being valued at $500 billion.
A plate of spaghetti
OpenAI signed an initial agreement with Nvidia in September, under which it will purchase, over the course of several years, a deployment of systems totalling at least 10 GW of computing power, the equivalent of four to five million GPUs. In turn, the letter of intent between the two companies commits Nvidia to investing $100 billion in its client's business.
Nvidia’s main competitor is the Californian AMD. OpenAI has also signed a multiyear agreement with this company, from which it intends to buy 6 GW worth of chips. Later, and under certain conditions, OpenAI might also take a 10% stake in its supplier.
Last spring, OpenAI concluded a five-year agreement to lease computing capacity in the data centres of CoreWeave, a company founded as a host for cryptocurrency businesses and a customer of Nvidia, which is also one of its shareholders. Broadcom, by contrast, designs chips tailored to the specific needs of companies such as Google and Meta; OpenAI has ordered chips from it for inference tasks, i.e., the processing that powers user interactions. Finally, an agreement was reached with Amazon, providing for the lease of computing capacity for seven years and the integration of AI services for its cloud-computing customers.
The first big tech to bet on OpenAI was Microsoft, which has invested $13 billion in recent years. Given the above-mentioned agreements, their relationship is no longer exclusive, but Microsoft will continue to supply space in its data centres and, until 2032, will retain the right to offer OpenAI's solutions to its clients, as well as a 27% stake in OpenAI's capital. The spider web of relations outlined in the box also includes the participation of Amazon, Google, Nvidia, and Microsoft in the financing of Anthropic, OpenAI's main competitor.
At this point, two aspects need to be highlighted. The first is that OpenAI and Anthropic have turnovers of $13 billion and $9 billion respectively and will both be making losses for at least the next three years, while the investment commitments made by OpenAI alone amount to $1.5 trillion, according to the Financial Times's calculations. The second aspect is that all of these agreements are circular in nature: client and supplier, financier and debtor, are often one and the same, leading to self-generated demand.
The dot-com bubble
Many commentators are now pointing to the bursting of the dot-com bubble in the spring of 2000. In the second half of the 1990s, equipment manufacturers such as Nortel, Lucent, and Cisco sold their equipment on credit to clients building networks, in anticipation of a rapid surge in Internet growth.
Between 1996 and 2001, telecommunications groups spent over $400 billion to build 80 million miles of fibre-optic cabling, and in Europe there were as many as twenty different connection cables linking the main cities. The fees for their use dropped to a tenth within three years. Between 2000 and the following year, the revenue of the world’s seven leading telecommunications equipment suppliers fell by 22%, and their stock market value plummeted by 73%, from $1,263 billion to $339 billion, leading to tens of thousands of redundancies.
The Economist recalls that the adoption of technologies takes time: in the 1940s only 23% of American farmers had a tractor, and in 2010 one third of American companies did not yet have a website. Today, the Census Bureau observes that only 5-6% of American companies use AI tools. And yet, the weekly magazine recalls: "After its 19th-century railway mania, Britain was left with track, tunnels, and bridges; much of this serves passengers today. Bits and bytes still whizz through the fibre-optic networks built in the dot-com years" [September 13th].
Chips, data centres, and electricity
The investments made in the US by the main technological groups will exceed $400 billion this year. Even though they are wealthy companies, Oracle, Meta, and Alphabet (Google) have resorted to debt in order to cover part of these investments.
Taiwan's TSMC, which produces chips for Nvidia and AMD, plans to invest $165 billion in the US: by 2030 it will build three advanced chip factories and two finishing sites in Arizona. In Indiana, South Korea's SK Hynix will build a memory-chip factory to supply Nvidia GPUs, which Foxconn will integrate into servers in Texas. In this way, the circle of future American autonomy in data-centre hardware would be completed.
Regarding the data centres themselves, it should be said that the US already has 5,400 of them, i.e., half of the world’s total; in spite of this, in the first six months of 2025, these centres still attracted 35% of the country’s capital investment in construction, three times the relative weight they had in the previous four-year period.
It is also worth noting that Microsoft has reached an agreement with Constellation to use, for the next twenty years, electricity from the Unit 1 reactor of Three Mile Island, the nuclear power plant made infamous by the 1979 accident. Among the commitments reportedly made by Japan during Donald Trump's visit is the construction of four AP1000 nuclear plants in the United States.
Electricity is essential for the growth of AI activities. In the US its main source, for the moment, is gas, which covers 40% of demand. The new power plants announced so far would supply 85 GW of capacity. World orders for turbines, whose main producers are Mitsubishi, GE Vernova, and Siemens, are increasing; American orders have grown from 200 to 470 units a year.
Jensen Huang, the founder of Nvidia, argues that China will win the AI war partly because of its lower cost of electricity: the Chinese price, discounted for those who use nationally produced chips, is 5.6 cents per kWh against America's 9.1 cents. China has low-cost and reliably distributed electricity because it invested early in its high-voltage transmission grids.
Lotta Comunista, November 2025