Enhancements include support for the Lattice Propel design environment for embedded processor-based development and the TensorFlow Lite deep-learning framework for on-device inferencing. The new version includes the Lattice sensAI Studio design environment for end-to-end ML model training, validation, and compilation.
Using sensAI 4.0, developers can employ a simple drag-and-drop interface to build FPGA designs with a RISC-V processor and a CNN acceleration engine to enable much easier implementation of ML applications on power-constrained Edge devices.
Demand is growing for low-power AI/ML inferencing in applications such as object detection and classification. AI/ML models can be trained to support applications for a range of devices that require low-power operation at the Edge, including security and surveillance cameras, industrial robots, and consumer robotics and toys. The Lattice sensAI solution stack helps developers create AI/ML applications that run on flexible, low power Lattice FPGAs.
“With support for TensorFlow Lite and the new Lattice sensAI Studio, it’s now easier for developers to leverage our sensAI stack to create AI/ML applications capable of running on battery-powered Edge devices,” said Hussein Osman, Marketing Director, Lattice.
Enhancements to the Lattice sensAI solution stack 4.0 include:
- TensorFlow Lite – support for the framework reduces power consumption and increases data co-processing performance in AI/ML inferencing applications.
- Lattice Propel – the stack supports the Propel environment’s GUI and command-line tools to create, analyze, compile, and debug both the hardware and software design of an FPGA-based processor system.
- Lattice sensAI Studio – a GUI-based tool for training, validating, and compiling ML models optimized for Lattice FPGAs.
- Improved performance – by leveraging advances in ML model compression and pruning, sensAI 4.0 can support image processing at 60 FPS with QVGA resolution or 30 FPS with VGA resolution.