When One Box of Memory Costs More Than a House: The DRAM Supercycle Has Only Just Begun

Written by David Feng
Tuesday, Jan 6, 2026, 8:31 pm ET · 1 min read

Aime Summary

- Global DRAM prices surged over 100% since July 2025, surpassing $500,000 per 100-module bulk purchase box.

- NVIDIA’s Jensen Huang highlighted AI’s shift to context memory at CES 2026, driving explosive demand for storage.

- His proposed context memory architecture integrates storage with GPUs, reducing latency and boosting efficiency.

- Storage stocks surged as Huang projected 320 exabytes of incremental SSD demand from AI’s evolving needs.

The global DRAM (memory) market is experiencing what many are calling the strongest price surge in history. Since July 2025, DRAM prices have skyrocketed, with most categories rising by more than 100%. A single bulk purchase box (100 modules) is now worth more than $500,000, more expensive than many residential properties!

Capital markets reacted immediately, and storage stocks surged across the board: leading names jumped 27.5% and 16.8%, while Korea's memory giants Samsung Electronics and SK Hynix also posted strong gains.

Jensen Huang’s Speech Ignites Explosive Demand for Memory Chips

At CES 2026, NVIDIA CEO Jensen Huang stated that AI inference bottlenecks are shifting from computation to **context memory**. As AI usage scales, complex tasks involving multi-turn conversations and multi-step reasoning generate massive volumes of contextual data. Traditional networked storage architectures are simply too inefficient to serve it, requiring a fundamental redesign.

Specifically, for every new token generated, GPUs must reread the entire historical context from memory. As conversations lengthen, storage requirements grow linearly with context length, quickly overwhelming bandwidth and power budgets. If each token requires 30KB of memory, a 100,000-token context needs 3GB of storage; at 100 million tokens, demand explodes to roughly 3TB (3,000GB).
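The scaling is easy to sanity-check. A minimal sketch using the article's 30KB-per-token figure (the function name and the use of decimal units are mine):

```python
# Back-of-the-envelope context-memory footprint using the article's
# 30 KB-per-token figure. Decimal units: 1 GB = 10^9 bytes.

BYTES_PER_TOKEN = 30 * 1000  # 30 KB of context state per generated token

def context_footprint_gb(num_tokens: int) -> float:
    """Total context storage in GB for a given context length."""
    return num_tokens * BYTES_PER_TOKEN / 1e9

for tokens in (100_000, 1_000_000, 100_000_000):
    print(f"{tokens:>11,} tokens -> {context_footprint_gb(tokens):,.0f} GB")

# Output:
#     100,000 tokens -> 3 GB
#   1,000,000 tokens -> 30 GB
# 100,000,000 tokens -> 3,000 GB  (~3 TB)
```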

As context length continues to grow, legacy storage architectures become insufficient. Jensen Huang therefore introduced a **context memory architecture**, which essentially places storage directly inside the rack, physically adjacent to the GPUs and managed by BlueField-4. This eliminates repeated data retrieval from remote storage servers.
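To see why adjacency matters, consider the throughput ceiling when every new token forces a full reread of a 3GB context. The bandwidth figures below are illustrative assumptions, not numbers from the keynote:

```python
# Upper bound on decode throughput if each token triggers a full
# re-read of the context. Bandwidths are illustrative assumptions.

context_gb = 3.0     # 100,000-token context at 30 KB/token (article's figures)
network_gb_s = 12.5  # assumed ~100 GbE effective throughput to remote storage
hbm_gb_s = 8_000.0   # assumed HBM-class bandwidth for GPU-adjacent memory

print(f"Remote refetch per token: {network_gb_s / context_gb:.1f} tokens/s max")
print(f"GPU-adjacent context:     {hbm_gb_s / context_gb:,.0f} tokens/s max")

# Remote refetch per token: 4.2 tokens/s max
# GPU-adjacent context:     2,667 tokens/s max
```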

Historically, memory chips were viewed as highly cyclical, attracting only short-term capital. Today, future demand can be modeled bottom-up from **rack deployment**. According to Huang, each GPU requires roughly 16TB of incremental storage (9,600TB per rack ÷ 576 GPUs ≈ 16.6TB). If storage racks become standard for enterprise customers, 20 million accelerator chips (GPUs + ASICs) would translate into **320 exabytes of incremental SSD demand**.

Global NAND capacity currently stands at about 1,000 exabytes. An incremental 320 exabytes therefore represents **roughly 30% incremental demand** relative to current supply, enough to materially impact NAND pricing.
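Reproducing the arithmetic behind both paragraphs (all inputs are the article's figures; interpreting the 576 divisor as accelerators per rack is my reading, and only the computation is added):

```python
# Huang's rack-deployment demand model, as quoted in the article.

rack_storage_tb = 9_600    # incremental storage per rack (TB)
gpus_per_rack = 576        # accelerators served by one rack (assumed reading)
accelerators = 20_000_000  # projected GPUs + ASICs in the field
nand_capacity_eb = 1_000   # current global NAND capacity (EB)

per_gpu_tb = rack_storage_tb / gpus_per_rack      # 9,600 / 576 ≈ 16.7 TB
per_gpu_rounded = 16                              # the article rounds to 16 TB
demand_eb = accelerators * per_gpu_rounded / 1e6  # 1 EB = 10^6 TB -> 320 EB
share = demand_eb / nand_capacity_eb              # 320 / 1,000 = 32%

print(f"Per-GPU storage:    {per_gpu_tb:.1f} TB (rounded to {per_gpu_rounded} TB)")
print(f"Incremental demand: {demand_eb:,.0f} EB")
print(f"Share of capacity:  {share:.0%}")

# Per-GPU storage:    16.7 TB (rounded to 16 TB)
# Incremental demand: 320 EB
# Share of capacity:  32%
```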

Looking ahead, AI is expected to evolve from simple chatbots into intelligent collaborative agents that understand the physical world, reason continuously, and call tools to complete tasks. This transformation requires ever-larger context windows and faster cross-node data sharing, driving sustained, high-growth demand for memory chips.
