Supermicro NGC-Ready systems will allow customers to train AI models using NVIDIA V100 Tensor Core GPUs and to perform inference using NVIDIA T4 Tensor Core GPUs.
NGC hosts GPU-optimised software containers for deep learning, machine learning and HPC applications, along with pre-trained models and SDKs. These can be run anywhere Supermicro NGC-Ready systems are deployed: in data centres, in the cloud, in edge micro-datacentres, or in distributed remote locations as environment-resilient, secured NVIDIA-Ready for Edge servers powered by the NVIDIA EGX intelligent edge platform.
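In practice, running an NGC container on such a system follows the standard NGC workflow: pull an image from NVIDIA's container registry (nvcr.io) and launch it with GPU access. A minimal sketch is below; the framework and tag shown are representative examples, and the exact tag should be taken from the NGC catalogue.

```shell
# Pull a GPU-optimised deep learning container from the NGC registry.
# The tag (here a hypothetical monthly release) should be chosen from
# the NGC catalogue for the framework you need.
docker pull nvcr.io/nvidia/pytorch:24.01-py3

# Launch the container with access to all NVIDIA GPUs on the host
# (requires the NVIDIA Container Toolkit to be installed).
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:24.01-py3
```

The same commands work unchanged whether the host is a data-centre server or an edge system, which is the portability the NGC-Ready validation is meant to guarantee.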
Commenting on the announcement, Charles Liang, CEO and president of Supermicro, said, “With support for fast networking and storage, as well as NVIDIA GPUs, our Supermicro NGC-Ready systems are the most scalable and reliable servers to support AI. Customers can run their AI infrastructure with the highest ROI.”
Supermicro claims to have the broadest portfolio of NGC-Ready servers optimised for data centre and cloud deployments, and is looking to expand it further. The company also offers five validated NGC-Ready for Edge (EGX) servers optimised for edge inferencing applications.
“NVIDIA’s container registry, NGC, enables superior performance for deep learning frameworks and pre-trained AI models with state-of-the-art accuracy,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “The NGC-Ready systems from Supermicro can deliver users the performance they need to train larger models and provide low latency inference to make critical, real-time business decisions.”
Supermicro offers multi-GPU optimised thermal designs that provide high levels of both performance and reliability for AI, deep learning, and HPC applications. With 1U, 2U, 4U, and 10U rackmount NVIDIA GPU systems, as well as GPU blade modules for its 8U SuperBlade enclosure, the company claims one of the industry's broadest ranges of GPU-optimised form factors.