OpenAI and Nvidia’s $100 Billion Compute Pact Ignites the AI Power Race

OpenAI and Nvidia’s $100 Billion Compute Pact
  • Massive compute commitment: OpenAI and Nvidia announced a letter of intent to deploy 10 gigawatts of AI datacenters, representing millions of Nvidia GPUs, to power OpenAI’s next‑generation models.

  • Record‑breaking investment: Nvidia intends to invest up to $100 billion in OpenAI as each gigawatt comes online, with the first gigawatt scheduled for the second half of 2026.

  • Compute arms race heats up: The move signals an escalating competition among tech giants for AI compute infrastructure, following recent megadeals with Microsoft, Oracle, SoftBank and others.

A Compute Deal Measured in Gigawatts

On 22 September 2025, OpenAI and Nvidia unveiled a partnership that dwarfs any AI compute arrangement to date. In a joint announcement published on OpenAI’s website, the companies confirmed that they will deploy at least 10 gigawatts of Nvidia systems over the coming years, creating what they describe as an “AI factory” for OpenAI’s next‑generation models. Nvidia founder and CEO Jensen Huang framed the agreement as the “next leap forward” for artificial intelligence, equating it to building an electricity plant for the intelligence age. OpenAI co‑founder Sam Altman reinforced that sentiment, noting that the economy of the future will be anchored by compute, and that the new datacenters will enable AI breakthroughs to scale to millions.

The scale of the deal is unprecedented. Ten gigawatts of continuous power is roughly the output of ten typical nuclear reactors. For perspective, dedicated AI datacenters worldwide are estimated to draw on the order of tens of gigawatts today, so by partnering with Nvidia to build dedicated AI factories, OpenAI would meaningfully expand the sector’s compute footprint on its own. Each “gigawatt‑class” facility would house hundreds of thousands of Nvidia GPUs, specialized networking gear and optimized cooling systems. Nvidia’s forthcoming Vera Rubin platform will power the first gigawatt, slated to come online in the second half of 2026.
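A back‑of‑envelope sketch makes these figures concrete. The per‑GPU power draw and the facility overhead factor below are assumptions for illustration, not numbers from the announcement:

```python
# Rough estimate of how many GPUs a "gigawatt-class" facility could house.
# WATTS_PER_GPU and PUE are illustrative assumptions, not announced figures.

FACILITY_POWER_W = 1e9   # 1 GW of total facility power
WATTS_PER_GPU = 1400     # assumed per-GPU draw incl. networking share
PUE = 1.2                # assumed power usage effectiveness (cooling/overhead)

# Power left for compute after cooling and distribution overhead
it_power_w = FACILITY_POWER_W / PUE
gpus_per_facility = int(it_power_w / WATTS_PER_GPU)

print(f"~{gpus_per_facility:,} GPUs per gigawatt-class facility")
print(f"~{gpus_per_facility * 10:,} GPUs across the full 10 GW")
```

Under these assumptions, a single gigawatt supports hundreds of thousands of GPUs, and the full 10 GW lands in the millions, consistent with the companies' own framing.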

Why It Matters

The agreement reorders the competitive landscape of AI infrastructure. OpenAI has long relied on Microsoft’s Azure for its compute needs, and earlier this year the startup inked a deal with Oracle to co‑locate models on its cloud. Nvidia’s entrance adds a third heavyweight to the mix—and one with skin in both hardware and capital. The chipmaker plans to invest up to $100 billion in OpenAI as each gigawatt of capacity is deployed. This infusion gives OpenAI flexibility to build bespoke data centers, rather than renting from cloud providers, and ensures preferential access to Nvidia’s GPUs amid global shortages.

For Nvidia, the partnership locks in a major customer just as competing chipmakers like AMD, Intel and startups such as Tenstorrent chase the booming AI accelerator market. Securing OpenAI’s pipeline could guarantee demand for millions of high‑margin GPUs over several years. The tie‑up also helps Nvidia diversify beyond hyperscale customers and into the high‑growth realm of AI factories. Experts note that Nvidia has historically supplied the compute inside Microsoft Azure for OpenAI’s GPT models; now it will own a share of the infrastructure itself.

A New Economics of Intelligence

The commitment to 10 GW signals that AI developers view compute as a strategic resource on par with talent or data. In his announcement, OpenAI president Greg Brockman said that deploying 10 gigawatts of compute will push back the frontier of intelligence and “scale the benefits of this technology to everyone”. At current electricity prices, running 10 GW of datacenters continuously could cost billions per year in energy alone. Critics worry that such power demands may exacerbate grid strain and environmental impacts. OpenAI maintains that the datacenters will leverage renewable energy and that the partnership includes planning for power‑capacity build‑outs.
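The "billions per year" energy figure follows directly from the arithmetic. The electricity price below is an assumed wholesale industrial rate; actual rates vary widely by region and contract:

```python
# Annual energy-cost sketch for running 10 GW of datacenters continuously.
# PRICE_PER_KWH is an assumption; industrial rates vary widely.

CAPACITY_GW = 10
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.05                 # assumed $0.05/kWh wholesale rate

kwh_per_year = CAPACITY_GW * 1e6 * HOURS_PER_YEAR   # 1 GW = 1e6 kW
annual_cost = kwh_per_year * PRICE_PER_KWH

print(f"~{kwh_per_year / 1e9:.0f} TWh per year")
print(f"~${annual_cost / 1e9:.1f}B per year in electricity alone")
```

Even at this conservative rate, the electricity bill alone exceeds $4 billion a year, before hardware, cooling infrastructure, or grid upgrades.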

“Ten gigawatts is more than a moonshot—it’s a bet that the next frontier of AI is powered by entire power plants.”

The partnership’s scale is best appreciated visually. The bar chart below imagines how OpenAI and Nvidia might roll out capacity over the next five years, culminating in 10 gigawatts of compute. Although the exact timeline is not public, the companies have indicated that the first gigawatt will arrive in 2026.

[Bar chart: hypothetical year‑by‑year roll‑out of compute capacity, reaching 10 GW]
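One way to picture such a roll‑out is a simple cumulative schedule. The year‑by‑year figures below are purely illustrative; only the first gigawatt in 2026 has been announced:

```python
# Hypothetical roll-out schedule, reaching 10 GW by 2030.
# Only the 2026 first gigawatt is public; the rest is an illustration.

rollout_gw = {2026: 1, 2027: 2, 2028: 2, 2029: 2, 2030: 3}

cumulative = 0
for year, added in rollout_gw.items():
    cumulative += added
    bar = "#" * cumulative
    print(f"{year}  {bar:<10}  {cumulative} GW cumulative")
```

Any real schedule will depend on chip supply, construction timelines and, not least, how quickly grid capacity can be secured.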

The Road Ahead

While OpenAI’s compute ambitions grab headlines, the partnership raises questions about concentration of AI infrastructure. Regulators have already voiced concerns about the dominance of a handful of companies in both hardware and software. Building proprietary AI factories could further entrench incumbents, making it harder for smaller labs or open‑source communities to compete. The deal also reignites debates about AI’s energy footprint: ten gigawatts of power dedicated to machine learning will require massive investments in renewable generation and grid upgrades. Environmental groups urge transparency on energy sources, cooling technologies and efficiency measures.

From a developer perspective, the expanded compute may translate into faster training times, longer context windows and multimodal capabilities. OpenAI is working on successors to GPT‑4o and the rumored GPT‑5, models that may require orders of magnitude more parameters and training data. By locking in Nvidia as a strategic partner, OpenAI ensures alignment between hardware roadmap and model design. Observers expect the partnership to accelerate the co‑evolution of chips and algorithms, optimizing performance and reducing latency.

Competitive Response

It is unlikely that rivals will sit idle. Amazon’s AWS has been investing heavily in its Trainium and Inferentia chips, while Google continues to iterate on its TPU architecture. Startups like Cerebras Systems, Graphcore and Groq pitch specialized accelerators boasting higher throughput or energy efficiency. Microsoft, already an investor in OpenAI, may expand its own datacenter build‑outs or deepen ties with AMD to hedge against Nvidia’s dominance. Oracle, SoftBank and the Stargate consortium—OpenAI’s other infrastructure partners—have yet to announce how they will respond.

FAQs

Why did OpenAI choose Nvidia as its compute partner?
OpenAI’s early models were trained on Nvidia hardware, and the company credits Nvidia’s GPUs for the breakthroughs that enabled ChatGPT. Nvidia’s willingness to invest up to $100 billion and tailor hardware and software to OpenAI’s needs sealed the deal.

How much power is 10 gigawatts?
Ten gigawatts is roughly the output of ten typical nuclear reactors. Running such capacity 24/7 would require extensive renewable generation and efficient cooling systems.

What will the extra compute enable?
The additional compute could enable larger models with longer context windows and faster inference. However, OpenAI has not confirmed any specific model names or release dates.

What about the environmental impact?
OpenAI says it will use renewable energy and advanced cooling, but critics worry that the sheer scale of power consumption may strain grids and increase carbon emissions.

What does the deal mean for smaller AI labs?
The deal underscores the growing gap between well‑funded AI giants and independent researchers. Open‑source models and more efficient algorithms will be critical to democratizing access to AI.