At CES 2025, South Korean semiconductor giant SK hynix unveiled its newly developed 16-layer stacked High Bandwidth Memory (HBM4) for the first time. This breakthrough marks a significant advancement in HBM technology, offering enhanced support for AI, high-performance computing, and data centers—applications that demand extremely high memory bandwidth.

Compared to its predecessor, HBM3E, SK hynix's HBM4 increases the number of stacked DRAM dies from 12 to 16 layers, substantially boosting both memory capacity and data transfer speeds within the same footprint. According to the company, a single HBM4 package can deliver up to 36GB of capacity with bandwidth exceeding 1.2TB/s, while also improving power efficiency. The design leverages advanced Through-Silicon Via (TSV) and hybrid bonding technologies to ensure signal integrity and thermal performance.

The launch of HBM4 not only reinforces SK hynix's leadership in the premium memory market but also signals a new trend toward tighter co-design between AI processors and memory systems. As generative AI models drive surging demand for computational power, high-performance memory has become a critical bottleneck. SK hynix stated that HBM4 is expected to enter mass production in the second half of 2025 and will likely be integrated into next-generation AI accelerators and GPUs.
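As a rough sanity check on the quoted bandwidth figure, peak HBM bandwidth is simply bus width (in bytes) multiplied by the per-pin data rate. The sketch below assumes a 2048-bit interface for HBM4 (double HBM3E's 1024-bit bus, in line with the JEDEC direction widely reported for HBM4); the specific pin rate used is an illustrative assumption, not a figure from the article.

```python
# Back-of-envelope check of the >1.2TB/s HBM4 bandwidth claim.
# Assumption (not stated in the article): HBM4 uses a 2048-bit bus,
# double the 1024-bit interface of HBM3E.

def hbm_bandwidth_gbps(bus_width_bits: int, pin_rate_gtps: float) -> float:
    """Peak bandwidth in GB/s = bus width in bytes * per-pin rate in GT/s."""
    return (bus_width_bits / 8) * pin_rate_gtps

# With a hypothetical 2048-bit bus, a per-pin rate of ~4.8 GT/s
# already lands at roughly the 1.2 TB/s the article cites.
bw = hbm_bandwidth_gbps(2048, 4.8)
print(f"{bw:.0f} GB/s")  # 1229 GB/s ≈ 1.2 TB/s
```

The same formula explains why widening the interface matters as much as raising pin speed: doubling the bus width doubles peak bandwidth at an unchanged per-pin rate.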
Original article by admin. If reposting, please credit the source: https://avine.cn/9127.html