The reference design enables ultra-low-power Internet of Things (IoT) devices to implement hearing and vision, showcasing the MAX78000 low-power microcontroller with its neural network accelerator for audio and video inference.
The system also contains the MAX32666 ultra-low-power Bluetooth microcontroller and two MAX9867 audio CODECs, all delivered in an ultra-compact form factor. It demonstrates that AI applications such as facial identification and keyword recognition can now be embedded in low-power, cost-sensitive products such as wearables and IoT devices.
AI applications require intensive computation, usually performed in the cloud or on expensive, power-hungry processors that only fit in applications with large power budgets, such as self-driving cars. The MAXREFDES178# camera cube, however, demonstrates how AI can now live on a low power budget, enabling time- and safety-critical applications to operate on even the smallest of batteries.
The MAX78000's AI accelerator cuts the power of AI inference by up to 1,000x for vision and hearing applications compared with other embedded solutions. According to Maxim, AI inference running on the MAXREFDES178# also shows dramatic latency improvements, executing more than 100x faster than on a conventional embedded microcontroller.
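For context, the sketch below shows, in rough outline, how an inference is typically invoked on the MAX78000's CNN accelerator. It is a minimal illustration, not the MAXREFDES178# firmware: it assumes the cnn_* driver functions and the cnn_time completion counter produced by Maxim's ai8x-synthesis tooling in the MAX78000 SDK, and the clock-selection macros, the load_input() helper and the output buffer size are illustrative assumptions.

#include "mxc.h"   /* Maxim SDK device header (assumed include name) */
#include "cnn.h"   /* auto-generated CNN driver from ai8x-synthesis */

static uint32_t cnn_output[CNN_NUM_OUTPUTS]; /* size taken from the generated header (assumption) */

extern void load_input(void); /* application-specific: copies a camera frame or audio window into CNN memory */

int main(void)
{
    /* Power up and clock the CNN accelerator (macro names are assumptions) */
    cnn_enable(MXC_S_GCR_PCLKDIV_CNNCLKSEL_PCLK, MXC_S_GCR_PCLKDIV_CNNCLKDIV_DIV1);

    cnn_init();          /* put the CNN state machine into a known state */
    cnn_load_weights();  /* load the trained kernels into accelerator memory */
    cnn_load_bias();     /* load bias values */
    cnn_configure();     /* program the per-layer configuration */

    load_input();        /* feed one input (image or audio) to the accelerator */
    cnn_start();         /* start the hardware inference */

    while (cnn_time == 0) {
        __WFI();         /* sleep until the CNN completion interrupt fires */
    }

    cnn_unload(cnn_output); /* copy the raw class scores back to the CPU */

    /* A softmax or argmax step would normally follow to pick the
       detected keyword or face ID from cnn_output. */
    return 0;
}

Because the convolution work runs in the dedicated accelerator rather than in a software loop on the CPU core, the microcontroller can sleep while the inference completes, which is where the power and latency advantages cited above come from.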
The compact form factor of the camera cube at 1.6in x 1.7in x 1.5in (41mm x 44mm x 39mm) shows that AI can be implemented in wearables and other space-constrained IoT applications.
The MAX78000 solution itself is up to 50 percent smaller than the next-smallest GPU-based processor and does not require additional components such as external memory or complex power supplies to implement cost-effective AI inference.