The announcement marks the delivery of the industry’s first NN compiler implementation for higher performance with a low memory footprint on NXP’s i.MX RT crossover MCUs. Developed by Facebook, Glow can integrate target-specific optimisations, and NXP has leveraged this ability with NN operator libraries for Arm Cortex-M cores and the Cadence Tensilica HiFi 4 DSP to maximise the inferencing performance of its i.MX RT685, i.MX RT1050 and i.MX RT1060 MCUs.
In addition, this capability is merged into NXP’s eIQ Machine Learning Software Development Environment, which is freely available within NXP’s MCUXpresso SDK.
Facebook introduced Glow (the Graph Lowering NN compiler) in 2018 as an open source community project, with the goal of providing optimisations to accelerate neural network performance on a range of hardware platforms.
As an NN compiler, Glow takes in an unoptimised neural network and generates highly optimised code. Running this compiled code directly, as Glow makes possible, greatly reduces processing and memory requirements. NXP has taken an active role within the Glow open source community to help drive broad acceptance of new Glow features.
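To illustrate the ahead-of-time flow described above, the sketch below shows roughly how a model might be compiled into a static bundle with Glow’s model-compiler tool and then invoked from C on a microcontroller. The model name (lenet_mnist), the buffer sizes and offsets, and the exact entry-point prototype are illustrative assumptions; the symbols actually emitted, and the precise command-line options, depend on the model and on the Glow/eIQ version in use.

    /*
     * A minimal sketch of Glow's ahead-of-time (AOT) bundle flow -- not NXP's
     * exact eIQ workflow. On the host, the model is first compiled into a
     * static bundle (object file + weights + header), for example:
     *
     *   model-compiler -backend=CPU -model=lenet_mnist.onnx -emit-bundle=bundle
     *
     * The generated header declares buffer sizes, tensor offsets and an entry
     * point named after the model. The prototype and macros below mirror that
     * convention and are assumptions for illustration only.
     */
    #include <stdint.h>
    #include <string.h>

    /* Illustrative sizes -- in practice these come from the generated header. */
    #define CONSTANT_MEM_SIZE    (1724672)
    #define MUTABLE_MEM_SIZE     (3200)
    #define ACTIVATIONS_MEM_SIZE (57600)

    /* Assumed bundle entry point (normally declared in the generated
     * lenet_mnist.h and resolved by linking against lenet_mnist.o). */
    void lenet_mnist(uint8_t *constantWeight, uint8_t *mutableWeight,
                     uint8_t *activations);

    /* Statically allocated buffers, as is typical on an MCU without a heap. */
    static uint8_t constant_weights[CONSTANT_MEM_SIZE]; /* model weights      */
    static uint8_t mutable_buffer[MUTABLE_MEM_SIZE];    /* inputs and outputs */
    static uint8_t activations[ACTIVATIONS_MEM_SIZE];   /* scratch memory     */

    int run_inference(const uint8_t *input, size_t input_len)
    {
        /* Copy the input tensor to its offset inside the mutable buffer
         * (offset 0 assumed here; the real offset is a macro in the header). */
        memcpy(mutable_buffer, input, input_len);

        /* Run the compiled network; results land back in the mutable buffer. */
        lenet_mnist(constant_weights, mutable_buffer, activations);
        return 0;
    }

Because all memory is reserved at compile time and the graph is lowered to straight-line code, there is no interpreter or model parser on the device, which is where the processing and memory savings come from.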
“The standard, out-of-the-box version of Glow from GitHub is device-agnostic to give users the flexibility to compile neural network models for basic architectures of interest, including the Arm Cortex-A and Cortex-M cores, as well as RISC-V architectures,” said Dwarak Rajagopal, Software Engineering Manager at Facebook. “By using purpose-built software libraries that exploit the compute elements of their MCUs and delivering a 2-3x performance increase, NXP has demonstrated the wide-ranging benefits of using the Glow NN compiler for machine learning applications, from high-end cloud-based machines to low-cost embedded platforms.”
With the demand for ML applications expected to increase significantly in the years ahead, consumer device manufacturers and embedded IoT developers will need optimised ML frameworks for low-power edge embedded applications using MCUs.
“NXP is driving the enablement of machine learning capabilities on edge devices, leveraging the robust capabilities of our highly integrated i.MX application processors and high performance i.MX RT crossover MCUs with our eIQ ML software framework,” said Ron Martino, senior vice president and general manager, NXP Semiconductors. “The addition of Glow support for our i.MX RT series of crossover MCUs allows our customers to compile deep neural network models and give their applications a competitive advantage.”
NXP’s edge intelligence (eIQ) environment for ML is a comprehensive toolkit that provides the building blocks developers need to implement ML in edge devices. With Glow merged into eIQ software, ML developers now have a high-performance framework that scales across NXP’s edge processing solutions, including the i.MX RT crossover MCUs and i.MX 8 application processors.
Customers will be better equipped to develop ML applications such as voice, object recognition and facial recognition on i.MX RT MCUs and i.MX application processors.