NVIDIA CEO: The memory bandwidth is great for inferencing

Wednesday, Jul 16, 2025, 3:29 am ET · 1 min read

NVIDIA Corp. has secured a significant regulatory win, with the U.S. government approving the resumption of sales of its H20 AI chip in China. This move is expected to add billions to NVIDIA's revenue this year and restore its ability to fulfill orders it had previously written off due to government restrictions [1].

The H20 chip, designed to meet the demands of China's AI infrastructure market, was previously restricted due to national security concerns. The recent approval allows NVIDIA to file for licenses to restart deliveries, unlocking an estimated $10–$12 billion in delayed orders [4]. These orders, tied to major cloud giants like Alibaba, Tencent, and Baidu, are expected to boost fiscal 2026 earnings by 8–10% [4].

NVIDIA's CEO, Jensen Huang, highlighted the chip's memory bandwidth during a recent visit to China. The H20's post-sanction design balances regulatory constraints with technical capability, featuring GDDR7 memory with around 1,200 GB/s of bandwidth [4]. That bandwidth is reportedly sufficient for 80% of China's AI workloads, particularly the inference tasks critical to autonomous vehicles, smart manufacturing, and real-time data processing [4].

The H20 chip's performance and ecosystem support make it a strong contender in the competitive Chinese AI market. While competitors such as Huawei's Ascend 910D (a domestic rival) and AMD's MI300X boast higher memory bandwidth, the H20's 350W TDP makes it more power-efficient for dense data centers [4]. NVIDIA's CUDA ecosystem, used by 80% of global AI developers, remains difficult for enterprise customers to replace [4].

The resumption of H20 chip sales in China marks a pivotal moment for NVIDIA. It positions the company to capitalize on the $32 billion AI infrastructure market in China and solidify its leadership in enterprise and cloud AI hardware [4]. With China's AI infrastructure spend projected to grow at a 28% CAGR through 2027, driven by state-backed supercomputing projects and corporate investments in large language models, NVIDIA is well-positioned to capture a significant portion of this demand [4].
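The compound-growth projection cited above can be sketched numerically. As a minimal illustration, assume the $32 billion figure is the 2025 base and that spending compounds annually at 28% (the article does not state the base year; both are assumptions here):

```python
# Sketch of the article's CAGR projection, under assumed inputs:
# $32B base in 2025 (assumption), 28% compound annual growth.

BASE_YEAR = 2025
BASE_SPEND_B = 32.0   # AI infrastructure spend, in billions of USD
CAGR = 0.28           # 28% compound annual growth rate

def projected_spend(year: int) -> float:
    """Projected market size in $B for a given year."""
    return BASE_SPEND_B * (1 + CAGR) ** (year - BASE_YEAR)

for year in range(BASE_YEAR, 2028):
    print(f"{year}: ${projected_spend(year):.1f}B")
# 2025: $32.0B → 2026: $41.0B → 2027: $52.4B
```

Under those assumptions, the market grows by roughly $20 billion over two years, which is the scale of opportunity the article describes.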

References:
[1] https://news.bloomberglaw.com/ip-law/nvidia-to-resume-h20-ai-chip-sales-to-china-in-us-reversal-2
[2] https://www.tomshardware.com/pc-components/gpus/nvidia-reportedly-preparing-rtx-6000d-for-chinese-market-to-comply-with-u-s-export-controls-fabricated-on-tsmc-n4-featuring-gddr7-memory-capable-of-delivering-1-100-gb-s-of-bidirectional-bandwidth
[3] https://www.ainvest.com/news/nvidia-ceo-attend-china-supply-chain-expo-cctv-reports-2507/
[4] https://www.ainvest.com/news/nvidia-h20-chip-approval-catalyst-dominance-china-ai-infrastructure-boom-2507/
