Fluent.ai’s suite of speech-to-intent technologies has been ported and optimised for CEVA’s low power audio and sensor hub DSPs, providing a high performance solution for companies looking to integrate intelligent voice activation and control into their wearables, consumer devices and IoT products.
Fluent.ai provides embedded, noise robust and multilingual speech understanding solutions capable of running fully offline on small footprint and low power devices. Fluent.ai technology can support any language and accent, enabling users to speak to their devices in their native language, naturally.
CEVA’s audio and sensor hub DSPs, including the CEVA-X2, CEVA-BX1, CEVA-BX2 and SensPro family, will enable the full suite of speech-to-intent technologies to run in an always-on mode. These DSPs can also run other software and algorithms that enhance the performance and feature set, including ClearVox front-end noise reduction, MotionEngine for sensor fusion and the SenslinQ framework for contextual awareness.
According to Vikrant Tomar, Founder and CTO of Fluent.ai, "Voice activation and control is emerging as one of the most sought-after technologies in an increasingly contactless world, and together we're bringing a cost-effective and highly accurate edge AI solution that can understand intent from speech, even in the noisiest environments."
“Fluent.ai’s speech-to-intent technology with multilanguage support running on our DSPs is ideal for power-constrained intelligent devices where voice is the primary user interface,” said Moshe Sheier, Vice President of Marketing at CEVA. “Having all the speech processing take place on the edge device ensures privacy of the data, low latency and instantaneous response. Together, we are lowering the entry barriers for adding high quality, naturally spoken voice control to any device.”
Rather than using the traditional cloud-based approach of transcribing speech to text and then applying natural language processing to extract meaning, Fluent.ai has developed an end-to-end spoken language understanding technology that extracts intent directly from the input speech alone.
This approach allows Fluent.ai to design speech understanding models that are much smaller in size yet remain highly accurate even in noisy environments. Fluent.ai systems can recognize thousands of intents with models only hundreds of kilobytes in size. Furthermore, the company's ability to build multiple languages into a single model means that users can switch seamlessly between languages when interacting with their device, without needing to change language settings in between.
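To illustrate the end-to-end idea described above, the following is a minimal, hypothetical sketch of a model that maps acoustic feature frames directly to an intent label, with no intermediate text transcript. All names, dimensions and layer choices here (AudioToIntent, NUM_INTENTS, FEATURE_DIM, a GRU encoder) are illustrative assumptions and do not represent Fluent.ai's actual architecture or software.

```python
# Minimal sketch of a direct speech-to-intent classifier (illustrative only;
# not Fluent.ai's implementation). Requires PyTorch.
import torch
import torch.nn as nn

NUM_INTENTS = 1000   # e.g. "turn_on_lights", "set_timer", ... (assumed figure)
FEATURE_DIM = 40     # per-frame acoustic features, e.g. log-mel filterbanks

class AudioToIntent(nn.Module):
    """Maps a sequence of acoustic feature frames straight to intent logits,
    skipping the transcribe-then-parse pipeline entirely."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(FEATURE_DIM, 128, batch_first=True)
        self.classifier = nn.Linear(128, NUM_INTENTS)

    def forward(self, frames):                # frames: (batch, time, FEATURE_DIM)
        _, hidden = self.encoder(frames)      # hidden: (1, batch, 128)
        return self.classifier(hidden[-1])    # intent logits: (batch, NUM_INTENTS)

model = AudioToIntent()
logits = model(torch.randn(1, 200, FEATURE_DIM))  # ~2 s of audio at 10 ms frames
intent_id = logits.argmax(dim=-1)                 # index of the predicted intent
```

Because the model outputs an intent class directly, there is no text decoder or separate language-understanding stage to store or run, which is one reason such models can be kept small enough for always-on, on-device use.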
CEVA's scalable audio and sensor hub DSPs are optimised for sound processing applications ranging from always-on voice control up to multi-sensor fusion. They have been specifically designed to tackle multi-microphone speech processing use cases, high-quality audio playback and post-processing, and on-device sound neural network implementations. In addition, a large third-party ecosystem of audio/voice software, hardware and development tools companies has optimised its solutions for CEVA DSPs, covering a wide array of use cases and applications.