NVIDIA CEO Jensen Huang's recent request that SK Hynix pull forward production of its HBM4 chips by six months highlights the booming demand for high-performance, energy-efficient memory crucial to advancing AI systems. The chips were originally slated for release in the second half of 2025; the accelerated timeline reflects NVIDIA's mounting pressure to satisfy the insatiable appetite for AI training and inference capacity, spurred by tech giants like OpenAI, Microsoft, and Meta.
SK Hynix, a leading supplier of HBM chips for NVIDIA's AI GPUs, is under increasing strain from formidable rivals Samsung and Micron. Samsung has made notable progress in securing agreements to supply improved HBM3E products and plans to manufacture HBM4 next year. Micron, which alongside SK Hynix supplies NVIDIA with HBM3E chips, is also rapidly advancing its own HBM4 production capabilities.
This heightened competition underscores the pivotal role HBM technology plays in supporting NVIDIA's formidable grip on the global AI chip market, where it commands an 80-90% share. The anticipated Rubin AI GPU, set to incorporate HBM4, aims to sustain NVIDIA's leadership amid unprecedented demand.
The escalating need for HBM, which has been characterized as a "printing press" for profit given its indispensable role in AI-driven data center operations, shows no sign of slowing. Goldman Sachs has revised its projections upward, predicting the HBM market will expand to $30 billion by 2026, driven by growing AI server shipments and higher HBM densities per AI GPU. Major players like SK Hynix are poised to continue capitalizing on this trend as AI infrastructure evolves.