Recently, NVIDIA officially unveiled its next-generation GPUs, the B200 and the GB200 Superchip, based on the new Blackwell architecture, marking a significant leap forward in AI computing capability. Compared with the previous Hopper architecture, Blackwell delivers substantial improvements in performance, energy efficiency, and scalability. The B200 GPU packs 208 billion transistors, built on TSMC's advanced 4NP process, and NVIDIA claims up to 30x higher AI inference performance at FP4 precision alongside significantly better power efficiency. Notably, the GB200 Superchip integrates two B200 GPUs with one Grace CPU via NVIDIA's high-speed NVLink interconnect, delivering unprecedented compute power for large-scale model training and inference.

These new GPUs are designed not only for data centers and cloud providers but also to accelerate applications in generative AI, scientific computing, and autonomous driving. NVIDIA CEO Jensen Huang described the Blackwell architecture as the "engine of AI factories," poised to drive global upgrades in AI infrastructure. As AI models continue to grow in size and complexity, demand for high-performance hardware is surging. Industry leaders including Microsoft, Google, and Amazon have already announced plans to deploy Blackwell-based systems, signaling the platform's potential as the foundation of next-generation AI infrastructure.
Original article by admin. If reposting, please credit the source: https://avine.cn/9169.html