Recently, Samsung Electronics announced it is accelerating the development of customized High Bandwidth Memory (HBM) to meet the growing demand for advanced memory solutions in artificial intelligence (AI) and high-performance computing (HPC). Unlike traditional DRAM, HBM stacks multiple DRAM dies vertically and uses Through-Silicon Via (TSV) technology to enable ultra-fast interconnects, significantly boosting data bandwidth and energy efficiency. As AI model training demands higher memory bandwidth and capacity, customers are increasingly seeking tailored HBM solutions, such as interface protocols, power management features, or packaging formats optimized for specific AI accelerators.

Samsung stated it is collaborating closely with several leading global AI chip designers to co-develop customized HBM products aligned with their hardware architectures and application scenarios. This approach not only enhances overall system performance but also shortens time-to-market. Additionally, Samsung plans to expand production capacity for its upcoming HBM4 products and aims to begin mass production in 2025 to address urgent market demand for next-generation HBM technology. Analysts note that Samsung's push into customized HBM signals a strategic shift among memory makers, from standardized offerings toward customer-centric solutions, which will play a pivotal role in the future competition for AI infrastructure.
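To put HBM's bandwidth advantage in concrete terms, the sketch below computes theoretical peak bandwidth from bus width and per-pin data rate. The comparison uses publicly cited figures for a single HBM3 stack (1024-bit interface at 6.4 Gb/s per pin) versus a single DDR5 channel (64-bit at the same pin rate); these are illustrative reference numbers, not Samsung's customized-product specifications.

```python
def peak_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Theoretical peak bandwidth in GB/s: bus width (bits) x per-pin rate (Gb/s) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

# One HBM3 stack: 1024-bit wide interface enabled by TSV stacking
hbm3_stack = peak_bandwidth_gbps(1024, 6.4)   # 819.2 GB/s

# One DDR5 channel: conventional 64-bit interface at the same pin speed
ddr5_channel = peak_bandwidth_gbps(64, 6.4)   # 51.2 GB/s

print(f"HBM3 stack:   {hbm3_stack:.1f} GB/s")
print(f"DDR5 channel: {ddr5_channel:.1f} GB/s")
print(f"Ratio:        {hbm3_stack / ddr5_channel:.0f}x")
```

The 16x gap comes almost entirely from interface width: vertical die stacking with TSVs makes a 1024-bit bus practical, which a planar DRAM package cannot match.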
Original article by admin. If reprinting, please credit the source: https://avine.cn/19005.html