NVDA Nears Record Highs on $100B OpenAI Pact—What Could Go Wrong?

Written by Gavin Maguire
Monday, Sep 22, 2025 12:54 pm ET
Aime Summary

- NVIDIA and OpenAI signed a letter of intent to build at least 10 GW of AI data-center capacity on NVIDIA systems, with NVIDIA intending to invest up to $100B and the first 1 GW targeted for deployment in the second half of 2026.

- The partnership ensures OpenAI priority access to GPUs and networking while securing NVIDIA's multi-year demand pipeline and software integration.

- Risks include vendor-financing scrutiny: because NVIDIA's investment is tied to deployment milestones, the deal raises concerns that NVIDIA is subsidizing a key customer.

- The deal reinforces compute scarcity narratives, pushing NVDA near record highs as markets bet on execution despite permitting and supply-chain challenges.

NVIDIA and OpenAI have signed a letter of intent to build at least 10 gigawatts of AI data-center capacity on NVIDIA systems—effectively millions of GPUs—with the first 1 GW slated for 2H26 on the Vera Rubin platform. To support the rollout (compute, data centers, and power), NVIDIA intends to invest up to $100B in OpenAI, disbursed in stages as each gigawatt is deployed. The LOI also commits both sides to co-optimize roadmaps—OpenAI’s model and infrastructure software tuned to NVIDIA’s hardware and networking stack—positioning NVIDIA as a preferred compute partner for OpenAI’s AI factory build-out.

Functionally, this is capacity insurance at unprecedented scale: OpenAI secures priority access to GPUs and high-throughput networking when scarcity is still the rule, and NVIDIA locks in a multi-year demand pipeline while shaping the software that runs on its silicon. The deployment is meant to be phased and multi-site, which spreads execution risk across timelines and geographies but also extends the window in which power, permitting, and supply-chain frictions can bite.

One cause for concern is the optics of vendor financing. Because NVIDIA’s investment is tied to deployment milestones, investors will ask whether the company is, in effect, subsidizing a marquee customer to buy NVIDIA gear. The strategic logic is clear—align incentives, secure volume, deepen moats—but it introduces capital-intensity and concentration risk that will put more scrutiny on returns, governance rights, and cancellation clauses once definitive terms are filed.

Context matters: rumors circulating around Sept 2 already suggested NVIDIA had “won” incremental OpenAI business, even as OpenAI pursued other capacity sources and explored custom accelerators. Today’s LOI formalizes NVIDIA’s central role without eliminating multi-sourcing; OpenAI still has relationships across the ecosystem, and that will shape NVIDIA’s ultimate share of wallet.

Execution is the bridge from headline to revenue. Gigawatt-scale AI factories require vast power commitments, advanced packaging and interconnect at volume, and fast rack-and-stack cycles. Slippage on any of those fronts would push revenue recognition to the right. Conversely, early proof points—site announcements, secured power purchase agreements, firm purchase orders, and disclosed GPU counts coming online—would validate the curve and reduce the “vendor-financing” worry.

For markets, the takeaway is simple: the compute scarcity thesis just got another giant vote of confidence. With NVDA up ~3% and accounting for roughly 7% of the S&P 500's weight, the headline is helping push the tape to fresh highs. The stock is preparing to challenge its recent all-time high around $184 this afternoon; sustaining momentum likely hinges on clarity around definitive terms, deployment cadence, and initial site power timelines. Big story today, heavy lifting tomorrow—and the market, for now, is happy to pay for the promise.
