According to the two companies, this integration will enable developers to use the Imagimob platform to create production-ready deep learning tinyML applications, and to optimise and deploy the ML models on the NDP120 with a single click.
The combined Imagimob-Syntiant solution supports a range of applications, such as sound event detection, keyword spotting, fall detection, anomaly detection, gesture detection and many other use cases.
“The collaboration with Syntiant will be very valuable for our customers because it allows them to quickly develop and deploy powerful, production-ready deep learning models on the Syntiant NDP120,” explained Anders Hardebring, CEO and co-founder at Imagimob. “We see a lot of market demand in sound event detection, fall detection and anomaly detection.”
The Imagimob platform also includes a built-in fall detection starter project, comprising an annotated dataset with video metadata and a pre-trained ML model (in h5 format) that uses IMU data to detect when a person wearing a belt-mounted device falls. Any developer can use the fall detection model and improve it by collecting more data.
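For illustration only, the sketch below shows how a pre-trained h5 model of this kind could be loaded with Keras and applied to a window of IMU samples. The file name, window length, channel count, and output layout are assumptions for the example, not details of the actual starter project.

```python
import numpy as np
import tensorflow as tf

# Assumed names and shapes -- the real starter project defines its own
# model file, window length and IMU channel order.
MODEL_PATH = "fall_detection.h5"   # pre-trained Keras model in h5 format
WINDOW_LEN = 50                    # IMU samples per inference window (assumed)
NUM_CHANNELS = 6                   # e.g. 3-axis accelerometer + 3-axis gyroscope (assumed)

# Load the pre-trained model shipped with the starter project.
model = tf.keras.models.load_model(MODEL_PATH)

def detect_fall(imu_window: np.ndarray, threshold: float = 0.5) -> bool:
    """Run one inference on a (WINDOW_LEN, NUM_CHANNELS) window of IMU data."""
    batch = imu_window[np.newaxis, ...].astype(np.float32)  # add batch dimension
    score = float(model.predict(batch, verbose=0)[0, -1])   # assumed: last output is the fall probability
    return score >= threshold

# Example call with random data standing in for a real belt-mounted IMU stream.
window = np.random.randn(WINDOW_LEN, NUM_CHANNELS)
print("Fall detected:", detect_fall(window))
```

In practice the same model would be exported through the Imagimob tooling and deployed to the NDP120 rather than run in Keras on a host machine; the snippet only illustrates the kind of windowed IMU inference the starter project performs.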
“Pairing our NDP120 with the Imagimob platform will enable developers to quickly and easily deploy deep learning models,” said Kurt Busch, CEO of Syntiant. “Studies suggest that there are more than 35 million falls in the US alone that require some kind of medical attention, so there is significant opportunity for applications across both consumer and industrial use cases.”
The NDP120 has been designed to bring highly accurate always-on voice and sensor neural processing to all types of consumer and industrial products. Packaged with the Syntiant Core 2, the company’s second-generation, highly flexible deep neural network, the NDP120 supports the “OK Google” and “Hey Google” hotwords at under 280 µW and can run multiple applications simultaneously at under 1 mW.
Imagimob AI is an end-to-end development platform for machine learning on edge devices, allowing developers to go from data collection to deployment on an edge device in a matter of minutes. Imagimob AI is used by many customers to build production-ready models for a range of use cases including audio, gesture recognition, human motion, predictive maintenance, and material detection.