“Supermicro continues to work closely with Intel and Habana Labs to deliver a range of server solutions supporting Arctic Sound-M and Gaudi2 that address the demanding needs of organisations that require highly efficient media delivery and AI training,” said Charles Liang, president and CEO, Supermicro.
Supermicro brings new technologies to market quickly through its Building Block Solutions approach to system design. New GPUs and acceleration technology can be dropped into existing designs or, when higher-performing components demand it, an existing design can be quickly adapted.
"Supermicro is helping deliver advanced AI and media processing with systems that leverage our latest Gaudi2 and Arctic Sound-M accelerators,” said Sandra Rivera, executive vice president and general manager of the Datacenter and AI Group, Intel. "Supermicro’s Gaudi AI Training Server will accelerate deep learning training in some of the fastest growing workloads in the datacentre.”
Supermicro systems with the Arctic Sound-M GPUs will address the requirements of cloud gaming, media transcoding and streaming, virtual desktop infrastructure (VDI), simulation and visualisation, machine learning, and content creation.
With the industry’s first hardware AV1 encoder and an open-source media software stack, the Arctic Sound-M dramatically improves performance compared with software-only video transcoding and delivery solutions, and also includes acceleration functions for VDI environments.
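To illustrate the kind of workload the hardware encoder offloads, the sketch below shows a GPU-accelerated AV1 transcode using FFmpeg's Intel Quick Sync Video (QSV) path. This is a minimal, hypothetical invocation, assuming an FFmpeg build with QSV/oneVPL support and a driver that exposes the AV1 encoder; the filenames and bitrate are placeholders.

```shell
# Hypothetical example: decode an H.264 input and re-encode it to AV1,
# keeping both stages on the GPU via Intel QSV (assumes FFmpeg built
# with --enable-libvpl and an ATS-M-class device with AV1 encode).
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 \
       -c:v av1_qsv -b:v 3M output.mkv
```

A software-only equivalent would replace `av1_qsv` with a CPU encoder such as `libaom-av1`, which is the comparison the performance claim above refers to.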
The Intel Arctic Sound-M GPUs will initially be available in the 2U 2-node single-processor Intel system with three GPUs per node, the 4U 10-GPU system, and the CloudDC server, with additional systems to be announced later this year. Supermicro’s AI Training Servers include dual 3rd Gen Intel Xeon Scalable processors and eight Habana Gaudi2 accelerators for high-performance AI training environments.
The Habana Labs Gaudi2 is intended for a range of workloads, including vision applications such as image classification and object detection, Natural Language Processing (NLP) models, and recommendation systems.
The AI Training Server will be the first commercial implementation of the new Habana Gaudi2 (HL-225) in an 8U chassis. Combined with dual 3rd Gen Intel Xeon Scalable processors and up to 8TB of DRAM, the server accelerates AI training to new performance levels. Its 24 hot-swappable drive bays provide ample local high-performance storage, keeping significant amounts of I/O within the server.
Scaling with the Habana Gaudi2 is designed to be straightforward. Using RDMA over Converged Ethernet (RoCE), each Gaudi2 accelerator can communicate with the other accelerators in the same server at 700 GB/sec, and with Gaudi2 accelerators in other servers at an aggregate 2.4 Tb/sec. In addition, each server contains six QSFP-DD ports for easy scale-out to larger models and data sets.
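As a back-of-the-envelope check on the scale-out figure, the aggregate bandwidth follows from the six QSFP-DD ports, assuming each port runs at 400 Gb/s (a common QSFP-DD rate; the port speed is an assumption, not stated in the announcement):

```python
# Hypothetical sanity check: aggregate scale-out bandwidth per server,
# assuming 6 QSFP-DD ports at 400 Gb/s each.
ports = 6
gbps_per_port = 400                  # assumed per-port rate in gigabits/sec
total_gbps = ports * gbps_per_port   # aggregate in gigabits/sec
total_tbps = total_gbps / 1000       # convert to terabits/sec
print(total_tbps)                    # 2.4 (Tb/s)
```

Note the units: the inter-server figure is in terabits per second, while the intra-server accelerator-to-accelerator figure is quoted in gigabytes per second.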