At roughly the size of a stick of chewing gum, the InferX X1M board packs high-performance inference capabilities into a low-power M.2 form factor for space- and power-constrained applications such as robotic vision, industrial, security, and retail analytics.
"With the general availability of our X1M board, customers designing edge servers and industrial vision systems can now incorporate superior AI inference capabilities with high accuracy, high throughput and low power on complex models," explained Dana McCarty, Vice President of Sales and Marketing for Flex Logix's Inference Products. "By incorporating an X1M board, customers can not only design exciting new AI capabilities into their systems, but they also have a faster path to production ramp versus designing their own custom card."
Featuring Flex Logix's InferX X1 edge inference accelerator, the InferX X1M board offers some of the most efficient AI inference acceleration available for advanced edge AI workloads such as YOLOv5. The board has been optimised for large models and megapixel images at batch=1. This provides customers with the high-performance, low-power object detection and other high-resolution image processing capabilities needed for edge servers and industrial vision systems.
To help its customers get to market quickly, Flex Logix also provides a suite of software tools to accompany the boards. These include tools to port trained ONNX models to run on the X1M, and a simple runtime framework to support inference processing on both Linux and Windows.
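The article does not publish the X1M SDK's actual API, so the following is only a hypothetical sketch of the kind of port-then-run workflow it describes: a porting step that takes a trained ONNX model, and a runtime session that executes it at batch=1. All names here (`port_onnx_model`, `CompiledModel`, `X1MSession`) are illustrative stand-ins, not the real Flex Logix tools.

```python
# Hypothetical sketch only: the real Flex Logix SDK API is not public
# in this article, so these names are illustrative stand-ins.
from dataclasses import dataclass


@dataclass
class CompiledModel:
    source: str      # path to the trained ONNX model that was ported
    batch_size: int  # the X1M is optimised for batch=1 workloads


def port_onnx_model(onnx_path: str, batch_size: int = 1) -> CompiledModel:
    """Stand-in for the vendor tool that ports a trained ONNX model."""
    if not onnx_path.endswith(".onnx"):
        raise ValueError("expected a trained ONNX model file")
    return CompiledModel(source=onnx_path, batch_size=batch_size)


class X1MSession:
    """Stand-in for the runtime framework (Linux/Windows) that runs inference."""

    def __init__(self, model: CompiledModel):
        self.model = model

    def run(self, frame: object) -> dict:
        # Real hardware would return detections for the input frame;
        # this stub just echoes the session's configuration.
        return {"model": self.model.source, "batch": self.model.batch_size}


model = port_onnx_model("yolov5s.onnx")
session = X1MSession(model)
result = session.run(frame=object())
print(result)  # → {'model': 'yolov5s.onnx', 'batch': 1}
```

The batch=1 default mirrors the optimisation target mentioned above; a real deployment would feed camera frames into `run` instead of a placeholder object.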
The software tools also include an InferX X1 driver with external APIs that let applications easily configure and deploy models, as well as internal APIs handling the low-level functions that control and monitor the X1M board.
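The split between an application-facing external API and a low-level internal API can be pictured as follows. This is purely an illustration of that layering; none of these class or method names come from the actual InferX X1 driver, whose interfaces the article does not detail.

```python
# Hypothetical illustration of the external/internal API split described
# above; all names are invented for this sketch, not the real driver.


class _X1MInternal:
    """Internal API surface: low-level board control and monitoring."""

    def __init__(self):
        self._power_state = "idle"

    def set_power_state(self, state: str) -> None:
        self._power_state = state

    def read_temperature_c(self) -> float:
        return 45.0  # placeholder telemetry, not a real sensor reading


class X1MDevice:
    """External API surface: lets applications configure and deploy models."""

    def __init__(self):
        self._internal = _X1MInternal()  # internal layer hidden from callers
        self._model = None

    def deploy_model(self, path: str) -> None:
        self._internal.set_power_state("active")
        self._model = path

    def status(self) -> dict:
        return {
            "model": self._model,
            "temp_c": self._internal.read_temperature_c(),
        }


dev = X1MDevice()
dev.deploy_model("yolov5s.onnx")
print(dev.status()["model"])  # → yolov5s.onnx
```

The design point being illustrated is encapsulation: applications touch only the external surface (`deploy_model`, `status`), while power and telemetry plumbing stays behind the internal layer.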