SK Hynix Completes HBM4 Development, Sets Up Production System
By Ainvest
Sunday, September 14, 2025, 6:27 pm ET · 1 min read
SK Hynix, a leading South Korean semiconductor company, has announced the completion of development of HBM4, the latest generation of high-bandwidth memory (HBM), together with the establishment of a mass-production system for the chips, marking a significant advancement in the memory technology landscape.

HBM (High Bandwidth Memory) is built by stacking DRAM dies vertically, delivering higher speed and performance than conventional DRAM. The technology is in particularly high demand for AI applications, with Nvidia being SK Hynix's key customer for these memory chips [1].
The new generation offers several enhancements over its predecessor, HBM3E. HBM4 doubles the number of input/output channels to 2,048, effectively doubling the bandwidth, and improves power efficiency by more than 40 percent, making it a more energy-efficient solution for data centers and AI customers [2]. SK Hynix has also raised HBM4's maximum per-pin speed to 10 gigabits per second, surpassing the 8 gigabits per second baseline set by the JEDEC standard.
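For scale, the quoted figures imply the following peak per-stack bandwidth. This is a back-of-envelope sketch: the 2,048-bit interface width and the per-pin speeds (10 Gb/s from SK Hynix, 8 Gb/s from the JEDEC baseline) come from the article; the conversion to gigabytes per second is our own arithmetic.

```python
def stack_bandwidth_gbytes(width_bits: int, pin_speed_gbps: float) -> float:
    """Peak per-stack bandwidth in GB/s.

    width_bits     -- total interface width in bits (HBM4: 2,048)
    pin_speed_gbps -- per-pin data rate in gigabits per second
    """
    # bits/s across the whole interface, divided by 8 to get bytes/s
    return width_bits * pin_speed_gbps / 8


# SK Hynix's quoted 10 Gb/s per pin
hbm4_sk = stack_bandwidth_gbytes(2048, 10.0)
# JEDEC's 8 Gb/s baseline for comparison
hbm4_jedec = stack_bandwidth_gbytes(2048, 8.0)

print(f"HBM4 @ 10 Gb/s: {hbm4_sk:.0f} GB/s per stack")     # 2560 GB/s
print(f"HBM4 @ 8 Gb/s:  {hbm4_jedec:.0f} GB/s per stack")  # 2048 GB/s
```

At the quoted 10 Gb/s, a single HBM4 stack would peak at roughly 2.56 TB/s, versus about 2 TB/s at the JEDEC baseline rate.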
The development of HBM4 used SK Hynix's latest Gen 5 10-nanometer DRAM and an advanced mass reflow molded underfill (MR-MUF) packaging technique, which dissipates heat effectively and minimizes chip warping, ensuring the stability and reliability of the memory chips during mass production [2].
Kim Ju-seon, president and head of AI infrastructure at SK Hynix, stated, "HBM4, the first to officially enter mass production globally, is a breakthrough product that addresses critical technological challenges in the AI era. We will continue to supply the highest-quality, high-performance memory products the market demands as we evolve into a full-stack AI memory provider."
This development is expected to significantly impact the AI memory market, with SK Hynix citing potential performance improvements of up to 69 percent for AI services. By adopting HBM4, data centers can ease data bottlenecks and reduce power costs [2].
SK Hynix's commitment to innovation and customer satisfaction is evident in this latest achievement. The company's focus on performance, energy efficiency, and reliability positions it as a key player in the AI memory market.