GPUs are increasingly used to provide AI inferencing at the edge, where size, weight and power (SWaP) are key considerations. These embedded MXM graphics modules are designed to deliver the high compute performance required to transform data at the edge into actionable intelligence, and they come in a standard form factor for systems integrators, ISVs and OEMs, increasing choice in both power and performance.
“The new embedded MXM graphics modules provide the perfect balance between size, weight and power for edge applications, where the demand for more processing power continues to increase,” said Zane Tsai, director of platform product center, ADLINK. “Leveraging NVIDIA’s GPUs based on the Turing architecture, our customers can now increase their edge processing performance with ruggedized modules that are fit for any environment, while remaining inside their SWaP envelope.”
ADLINK’s embedded MXM graphics modules can accelerate edge computing and edge AI across a range of compute-intensive applications, particularly in harsh or environmentally challenging settings such as those with limited or no ventilation, or with exposure to corrosives. Examples include medical imaging, industrial automation, biometric access control, autonomous mobile robots, transportation, and aerospace and defense. The need for high-performance, low-power GPU modules is increasingly critical as AI at the edge becomes more prevalent.
The ADLINK embedded MXM graphics modules:
● Provide acceleration with NVIDIA CUDA, Tensor and RT Cores
● Are one-fifth the size of full-height, full-length PCI Express graphics cards
● Offer more than three times the lifecycle of non-embedded graphics cards
● Consume as little as 50 watts of power