Can OpenAI Afford Its $1 Trillion AI Ambition?
OpenAI is embarking on one of the most ambitious infrastructure builds in tech history — but analysts warn the company may have over-promised while underestimating the cost and complexity of delivering its vision.
The Scale of the Promise
OpenAI has struck major deals with Nvidia, Broadcom and Advanced Micro Devices (AMD) to deliver as much as 26 gigawatts (GW) of compute capacity, a figure likened to the power demand of an entire U.S. state at peak.
According to estimates from Citi analysts, bringing one GW of compute online now demands roughly $50 billion, which covers hardware, energy infrastructure and data-centre construction. That arithmetic places OpenAI’s near-term spending requirement at around $1.3 trillion by 2030.
Internally, CEO Sam Altman is reportedly weighing a longer-term target of 250 GW by 2033, which extrapolates to an astonishing $12.5 trillion in investment.
The Financial Disconnect
The challenge? OpenAI’s projected revenue through 2030 is far smaller than its planned spending. Citi predicts about $163 billion in revenue by 2030 — a fraction of the required outlay.
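The arithmetic behind these figures is straightforward multiplication; a minimal sketch, using only the analyst estimates cited above (Citi's ~$50 billion per GW, the 26 GW of announced deals, the reported 250 GW target, and the $163 billion revenue forecast), looks like this:

```python
# Back-of-the-envelope figures; all inputs are rough analyst
# projections quoted in the article, not firm numbers.
COST_PER_GW = 50e9                 # ~$50B to bring 1 GW of compute online (Citi)

near_term_gw = 26                  # capacity pledged via the Nvidia/Broadcom/AMD deals
near_term_cost = near_term_gw * COST_PER_GW      # ~$1.3 trillion

long_term_gw = 250                 # reported 2033 target
long_term_cost = long_term_gw * COST_PER_GW      # ~$12.5 trillion

revenue_2030 = 163e9               # Citi's revenue forecast through 2030
financing_gap = near_term_cost - revenue_2030    # ~$1.14 trillion shortfall

print(f"Near-term build-out:  ${near_term_cost / 1e12:.2f}T")
print(f"Long-term build-out:  ${long_term_cost / 1e12:.2f}T")
print(f"Gap vs. 2030 revenue: ${financing_gap / 1e12:.2f}T")
```

Even on these generous assumptions, projected revenue covers barely an eighth of the near-term outlay, which is the disconnect analysts keep pointing to.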
In short: OpenAI may face a massive financing gap. Analysts are warning that the compute build-out might be driven more by hype than by realistic economics — raising concerns of a possible AI investment bubble.
Infrastructure and Energy Constraints
Beyond money, OpenAI’s ambitions also face physical and logistical limits. Deploying tens of gigawatts of compute capacity isn’t just about buying chips; it also involves erecting massive data centres, building out power and cooling systems, and securing high-capacity energy supply.
Some analysts warn that even U.S. power infrastructure may struggle to scale fast enough to support such consumption, which could impair OpenAI’s ability to monetize its investment.
Why Chip-makers Are Watching
Should OpenAI succeed, the ripple effect would be huge. For instance, Nvidia alone could garner up to $500 billion in revenue from its deal with OpenAI; Broadcom could see more than $100 billion.
Thus, many of OpenAI’s partners now have a lot riding on its ability to deliver. Their fortunes — and to an extent the broader AI infrastructure market — are tied to whether the company can scale as promised.
The Strategic Angle
Why is OpenAI chasing such scale? One reason is to cement its leadership in AI and secure the hardware, chips and data centers needed for ever-larger models and global deployment. By locking in long-term compute capacity, it aims to bypass shortages, control costs and maintain model-training scale.
But locking in that capacity, at these prices, also means growing both technical risk (will the compute deliver as expected?) and financial risk (can the business support it?).
What Comes Next
For OpenAI the key questions now are:
Can it raise or allocate the trillions required without collapsing its business?
Will demand for AI services scale fast enough to absorb such compute?
Can energy and infrastructure grow in sync with the compute appetite?
Will chip-supply, manufacturing and data-centre build-out keep pace?
If the answers are positive, OpenAI may well reshape the AI infrastructure world. If not, it could become a cautionary tale of ambition outpacing economics.
As one analyst put it:
“OpenAI … has the power to crash the global economy for a decade or take us all to the promised land, and right now we don’t know which is in the cards.”
For now, the company is chasing a vision of “nuclear-grade” compute scale — but the gap between promise and practical delivery remains wide.