The next era of artificial intelligence isn’t being decided by who writes the cleverest model architecture. It’s being decided by who can afford to run the machines.
A report says OpenAI is targeting roughly $600 billion in total compute spending through 2030, a number so large it effectively reframes what “building AI” means. This isn’t startup economics anymore. This is infrastructure economics — closer to building telecom networks or power grids than shipping software updates.
And it arrives at the same moment OpenAI is reportedly laying groundwork for an eventual IPO that could value the company at up to $1 trillion.
The headline number: $600B in compute through 2030
Compute is the fuel behind everything modern AI does:
- training new frontier models
- running them at scale (“inference”) for consumers and businesses
- storing, moving, and securing vast data pipelines
- building the tools, agents, and products that sit on top of the models
A $600B target tells you the direction: OpenAI expects its future to be constrained less by “ideas” and more by power, chips, data centers, and bandwidth.
OpenAI’s recent financial snapshot: big revenue, even bigger costs
The same report says OpenAI’s 2025 revenue totaled about $13 billion, beating its own projection of $10 billion. It also spent roughly $8 billion during the year, below a $9 billion target.
That’s the important truth about AI economics right now: revenue can grow fast, but compute costs can grow right behind it — and sometimes faster.
In fact, another report cited in the same coverage said OpenAI told investors that inference-related expenses quadrupled in 2025, pushing adjusted gross margin down to about 33% from 40% in 2024.
Translation: the more people use the product, the more expensive it becomes to serve them — unless efficiency improves.
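As a back-of-envelope check, the reported figures are internally consistent with exactly that squeeze. The sketch below makes one simplifying assumption (inference is treated as essentially all of the cost of revenue, since the coverage gives no detailed cost breakdown) and solves for the one number not reported, 2024 revenue:

```python
# Back-of-envelope margin math from the reported figures.
# Simplifying assumption: inference is treated as essentially the whole
# cost of revenue; the coverage does not break costs out in detail.

rev_2025 = 13.0      # reported 2025 revenue, $B
margin_2025 = 0.33   # reported 2025 adjusted gross margin
margin_2024 = 0.40   # reported 2024 adjusted gross margin

cost_2025 = rev_2025 * (1 - margin_2025)  # cost of revenue, 2025
cost_2024 = cost_2025 / 4                 # "inference expenses quadrupled"
implied_rev_2024 = cost_2024 / (1 - margin_2024)

print(f"2025 cost of revenue: ${cost_2025:.1f}B")        # ~$8.7B
print(f"implied 2024 revenue: ${implied_rev_2024:.1f}B")  # ~$3.6B
# Costs grew ~4x while revenue grew ~3.6x (13 / 3.6), so margin slid
# from 40% to 33% even though revenue more than tripled.
```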
The “real” market is inference — and it’s brutal
Training models gets headlines. Inference gets invoices.
Inference is the always-on cost of answering questions, generating images, writing code, summarizing contracts, running agents, and powering enterprise workflows. If inference costs scale faster than revenue, even a wildly popular AI product can feel financially heavy.
This is why companies obsess over:
- cheaper cost per query
- faster inference kernels
- model distillation and routing
- specialized hardware and better networking
- deploying the right model for the right job
The “win” in AI isn’t only who has the smartest model — it’s who has the smartest unit economics.
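To make "smart unit economics" concrete, here is a minimal sketch of the cost-aware model routing mentioned in the list above. Every model name, price, and the routing rule itself are hypothetical, invented purely for illustration:

```python
# Hypothetical cost-aware routing: serve easy queries with a small
# distilled model and reserve the expensive frontier model for hard ones.
# All prices and names below are invented for illustration.

PRICE_PER_1K_TOKENS = {
    "small-distilled": 0.0005,  # $ per 1,000 output tokens (hypothetical)
    "frontier": 0.0150,
}

def route(query: str) -> str:
    """Toy rule: only long queries get the big model. Real routers use
    learned difficulty estimates, not string length."""
    return "frontier" if len(query) > 500 else "small-distilled"

def cost_per_query(query: str, output_tokens: int = 300) -> float:
    model = route(query)
    return PRICE_PER_1K_TOKENS[model] * output_tokens / 1000

# If 80% of traffic is simple enough for the distilled model, blended
# cost per query drops ~4x versus sending everything to the big model.
blended = 0.8 * cost_per_query("short question") + 0.2 * cost_per_query("x" * 600)
frontier_only = cost_per_query("x" * 600)
print(f"blended: ${blended:.5f}, frontier-only: ${frontier_only:.5f}")
```

Multiply a per-query delta like that by billions of requests a day, and it becomes the difference between a financially heavy product and a profitable one.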
Nvidia is the gravitational center — and may invest $30B
Another detail circulating alongside the compute-spend story: Nvidia is reportedly close to finalizing a $30 billion investment in OpenAI as part of a fundraising round in which OpenAI is said to be seeking more than $100 billion at a valuation of around $830 billion.
Whether or not the deal lands exactly as described, the strategic shape is clear:
- OpenAI needs enormous compute
- Nvidia supplies much of the compute ecosystem
- money raised for AI often cycles straight back into chips and infrastructure
This creates a power loop where the leading model builders and the leading chipmakers become deeply interdependent — and the rest of the market has to compete inside that reality.
The $280B revenue target — and why it matters
Another reported figure: OpenAI expects more than $280 billion in total revenue by 2030, split roughly evenly between consumer and enterprise businesses.
If true, that implies OpenAI is aiming to become something bigger than a chatbot company — more like a global AI platform with two engines:
- Consumer: subscriptions, assistant features, creative tools, personal agents
- Enterprise: copilots, internal agents, API usage, custom deployments, security and governance layers
But that revenue goal also highlights the challenge: to support that scale, the infrastructure must scale — and the compute spend is the price of entry.
Sam Altman’s bigger vision: $1.4T and “30 gigawatts”
This all connects to a larger idea OpenAI leadership has floated publicly: a commitment to building compute capacity on an almost surreal scale, including mentions of 30 gigawatts of computing resources and a total spending ambition of around $1.4 trillion.
To give a sense of the magnitude: gigawatts are power-plant numbers. That kind of capacity is closer to national infrastructure than traditional tech.
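A quick worked calculation shows why. The 30 GW figure comes from the reported remarks; the comparison point is a standard reference value, since a large nuclear reactor produces roughly 1 GW:

```python
# Rough energy math for 30 GW of compute running around the clock.
capacity_gw = 30
hours_per_year = 24 * 365              # 8,760 hours

annual_twh = capacity_gw * hours_per_year / 1000  # GWh -> TWh
print(f"{annual_twh:.0f} TWh/year")    # ~263 TWh/year at full utilization
# That is roughly the annual electricity consumption of a mid-sized
# country, and on the order of 30 large nuclear reactors running flat out.
```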
The AI frontier is becoming an electricity story.
What this means for the AI industry
A $600B compute plan through 2030 signals three major shifts:
1) AI is consolidating around capital + infrastructure
Frontier AI may become a game that only a handful of players can afford. Not because others can’t innovate — but because they can’t finance the hardware, power contracts, and data center footprints required to stay at the leading edge.
2) The supply chain becomes strategy
Chips, memory, networking, cooling, grid access, and construction timelines become competitive advantages. So do relationships with cloud providers and sovereign infrastructure partners.
3) Regulation and competition scrutiny will intensify
When the biggest model lab and the biggest chip supplier grow more tightly connected, governments will ask hard questions about concentration, pricing power, access, and national security.
Bottom line
OpenAI’s reported $600 billion compute spend through 2030 is a reality check for anyone still thinking AI is “just software.”
The next wave of AI dominance will be decided by:
- who can secure the most compute, reliably
- who can run it profitably (inference economics)
- who can turn that infrastructure into products people pay for at massive scale
The AI race is no longer only a model race. It’s an industrial race.