NVIDIA, OpenAI Partner on 100 Billion Dollar AI Data Center

Generated by AI Agent · Ticker Buzz
Wednesday, Sep 24, 2025, 3:13 am ET · 4 min read

Aime Summary

- NVIDIA and OpenAI announced a partnership of up to $100B to build 10 gigawatts of AI data-center capacity, marking a new era in AI infrastructure.

- The Vera Rubin platform, featuring 30-petaflop NVFP4 Rubin CPX GPUs and disaggregated inference technology, will power the project, boosting efficiency by roughly 50%.

- NVIDIA cements its 85-90% share of the AI chip market, while OpenAI targets $200B in revenue by 2030, driven by ChatGPT and its Agent businesses.

- The deal creates a $500B AI ecosystem, with NVIDIA’s stock surging 3.93% as investors bet on computational hegemony in the AI race.

On September 22, 2025, a groundbreaking announcement from Silicon Valley sent shockwaves through the global tech industry. Chip giant NVIDIA and AI leader OpenAI revealed a strategic partnership, with NVIDIA committing up to 100 billion dollars to help build AI super data centers totaling 10 gigawatts of capacity. This investment not only surpasses the annual GDP of many countries but also marks the beginning of a new era in AI infrastructure, characterized by hundred-billion-dollar commitments.

The scale of this project is immense. Ten gigawatts is equivalent to the electricity consumed by roughly 8 million American households. In AI terms, it corresponds to roughly 4 to 5 million GPUs, about equal to NVIDIA's total GPU shipments for 2025 and double the amount shipped in 2024. The project will be executed in phases, with the first gigawatt scheduled for deployment in the second half of 2026, backed by an initial 10 billion dollar investment from NVIDIA. Subsequent funding will be injected as each additional gigawatt comes online, so that investment and construction progress in tandem. The cost structure, as previously disclosed, estimates that building a single gigawatt of data-center capacity will require 50 billion to 60 billion dollars, with approximately 35 billion dollars of that going toward NVIDIA's chips and systems. This translates to at least 350 billion dollars in core revenue for NVIDIA over the project's lifetime.
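The cost figures above imply some simple arithmetic; a quick back-of-the-envelope sketch (all values are the article's estimates, not official disclosures):

```python
# Back-of-the-envelope check of the buildout figures quoted above.
# All numbers are the article's estimates, not company guidance.
PHASES = 10                    # planned deployment phases for the buildout
COST_PER_PHASE_LOW = 50e9      # est. build cost per phase, USD (low end)
COST_PER_PHASE_HIGH = 60e9     # est. build cost per phase, USD (high end)
NVIDIA_SHARE_PER_PHASE = 35e9  # est. spend on NVIDIA chips/systems per phase

total_low = PHASES * COST_PER_PHASE_LOW
total_high = PHASES * COST_PER_PHASE_HIGH
nvidia_revenue = PHASES * NVIDIA_SHARE_PER_PHASE

print(f"Total buildout: ${total_low / 1e9:.0f}B-${total_high / 1e9:.0f}B")
print(f"NVIDIA core revenue: ${nvidia_revenue / 1e9:.0f}B")
```

Multiplying the per-phase NVIDIA spend by ten phases reproduces the article's "at least 350 billion dollars" figure.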

Driving this massive investment is OpenAI's growing demand for computational power. ChatGPT, OpenAI's flagship product, already boasts more than 700 million weekly active users, and the company is set to launch several new, computationally intensive products in the coming weeks. Industry forecasts predict that OpenAI's revenue will reach 13 billion dollars in 2025, more than triple the previous year's 4 billion dollars. Looking further ahead, OpenAI aims for 200 billion dollars in revenue by 2030, surpassing the current annual revenues of both NVIDIA and Meta.

The technological backbone of this AI revolution is NVIDIA's recently unveiled Vera Rubin platform. Unlike traditional single-chip designs, Vera Rubin is a comprehensive system architecture that integrates CPUs, GPUs, and specialized accelerators. It is specifically engineered for complex AI tasks such as processing million-token code libraries and generating long videos. The platform's flagship product, the Rubin CPX GPU, offers 30 petaflops of NVFP4 computing power and is equipped with 128GB of GDDR7 memory; its attention-mechanism processing capability is three times that of the previous Blackwell platform. In a single rack configuration, the Vera Rubin NVL144 system can deliver 8 exaflops of NVFP4 computing power, 100TB of high-speed memory, and 1.7 PB/s of memory bandwidth, equivalent to the combined power of 500,000 high-performance PCs.

One of the most revolutionary aspects of the Vera Rubin platform is its "disaggregated inference" (decoupled serving) technology. By separating the context-processing stage from the generation stage of AI models, the platform can enhance computational efficiency by 50% while maintaining accuracy; in scalable scenarios, this can yield a 30-50x return on investment. The platform is set to become a key weapon for OpenAI in its pursuit of "superintelligence." All of the data centers will run on the latest Vera Rubin platform, supporting not only the training of next-generation large models but also a wide range of AI applications from code generation to digital twins. OpenAI's co-founder emphasized that the AI systems built on NVIDIA's platform already serve hundreds of millions of users, and that the new deployment will push the boundaries of intelligence to new dimensions.
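The split between the two stages can be sketched conceptually. Everything below (function names, the toy cache) is illustrative, not an actual NVIDIA or OpenAI API: the point is only that the compute-bound prompt stage and the bandwidth-bound generation stage become separate workers that can be scheduled on different hardware.

```python
# Conceptual sketch of disaggregated ("decoupled") inference: the
# context (prefill) stage and the generation (decode) stage run as
# separate workers, each placed on hardware suited to its bottleneck.
# All names here are illustrative placeholders.

def prefill(prompt_tokens):
    """Compute-bound stage: process the entire prompt once and return
    a stand-in for the KV cache handed off to the decode stage."""
    return {"kv_cache": list(prompt_tokens)}

def decode(state, max_new_tokens):
    """Memory-bandwidth-bound stage: generate tokens one at a time,
    reading and extending the cache on every step."""
    out = []
    for i in range(max_new_tokens):
        token = f"tok{i}"              # placeholder for a sampled token
        out.append(token)
        state["kv_cache"].append(token)  # cache grows each step
    return out

state = prefill(["hello", "world"])
tokens = decode(state, max_new_tokens=3)
print(tokens)  # ['tok0', 'tok1', 'tok2']
```

Because the two stages scale differently (prefill with prompt length, decode with generated tokens), running them on separate pools lets each be provisioned independently, which is where the claimed efficiency gains come from.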

The 100 billion dollar investment is just the tip of the iceberg for OpenAI's expansion plans. The company has reportedly signed a 5-year, 300 billion dollar contract with Oracle for AI computing power and plans to spend an additional 100 billion dollars over the next five years leasing backup servers from cloud service providers. Combined with the current investment from NVIDIA, OpenAI's known large-scale transactions total 500 billion dollars. This aggressive spending is tied to OpenAI's unusual cost structure, which projects that research and development expenses, driven primarily by computing costs, will account for nearly 50% of total revenue by 2030. That ratio is far higher than at tech giants like Amazon and Microsoft, and exceeds even that of Meta, long known for heavy R&D spending at roughly a quarter of revenue. Despite the high costs, OpenAI expects substantial returns: ChatGPT alone is projected to generate 90 billion dollars in revenue by 2030, with Agent-related businesses contributing another 90 billion dollars, which together with other business lines would push total revenue past the 200 billion dollar mark.
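The commitment and revenue totals above are straightforward sums; a quick check (all figures are the article's reported estimates):

```python
# Summing OpenAI's known compute commitments and 2030 revenue
# projections as reported in the article (not company guidance).
oracle_contract   = 300e9   # 5-year Oracle compute deal
nvidia_investment = 100e9   # NVIDIA commitment in this partnership
cloud_leasing     = 100e9   # backup-server leases over five years
total_commitments = oracle_contract + nvidia_investment + cloud_leasing

chatgpt_2030 = 90e9         # projected ChatGPT revenue by 2030
agents_2030  = 90e9         # projected Agent-business revenue by 2030
named_revenue_2030 = chatgpt_2030 + agents_2030

print(f"Known commitments: ${total_commitments / 1e9:.0f}B")
print(f"Named 2030 revenue lines: ${named_revenue_2030 / 1e9:.0f}B")
```

Note that the two named revenue lines sum to $180B, so the projected crossing of the $200B mark depends on OpenAI's remaining businesses contributing the balance.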

For NVIDIA, this investment is a strategic move that ensures a steady flow of revenue. The investment will be recycled as OpenAI purchases NVIDIA's equipment, creating a seamless financial loop. More importantly, as OpenAI explores the possibility of developing its own chips, this investment secures NVIDIA's position as the dominant player in the AI chip market, holding an 85-90% market share. The partnership will also create synergies with OpenAI's existing collaborations with Microsoft, Oracle, and SoftBank, forming a multi-layered system of computational guarantees. OpenAI has explicitly stated that NVIDIA will be its preferred strategic partner for expanding its "AI factory," with both parties working together to optimize models and hardware roadmaps.

This hundred-billion-dollar collaboration is part of a broader strategy by NVIDIA to expand its influence. Just a week prior, NVIDIA announced a 5 billion dollar investment in Intel to co-develop "Intel x86 with RTX" chips, along with a 2.5 billion dollar investment in building AI infrastructure in the UK and the acquisition of part of Enfabrica's team and technology. These moves form NVIDIA's "computational hegemony blueprint," aimed at integrating CPU giants, cloud service providers, and AI application developers into an ecosystem centered on NVIDIA. The market responded positively to the news: NVIDIA's stock price surged 3.93% in a single day, adding approximately 170 billion dollars to its market capitalization and nearing the 4.5 trillion dollar mark. The rally reflected investor recognition of the value of AI infrastructure as competition among large models intensifies and computational power becomes the key determinant of industry dynamics.

As competitors like Huawei race to catch up in the supernode arena, NVIDIA's dual investment of capital and technology has once again widened the gap. NVIDIA's CEO highlighted the decade of mutual growth between NVIDIA and OpenAI, from the first DGX supercomputer to the breakthrough of ChatGPT, and emphasized that this deployment will usher in a new era of intelligence. OpenAI's CEO echoed the sentiment, stating that computational infrastructure will be the foundation of the future economy.
