The platform drives extremely efficient and intelligent edge devices for the Artificial Intelligence of Things (AIoT) solutions and services market, which is forecast to be worth over $1trn by 2030.
A hyper-efficient yet powerful neural processing system architected for embedded Edge AI applications, it adds efficient 8-bit processing alongside advanced capabilities such as time-domain convolutions and vision transformer acceleration, taking edge devices from perception towards cognition.
The second generation of Akida now includes Temporal Event-based Neural Net (TENN) spatio-temporal convolutions that supercharge the processing of raw, time-continuous streaming data, such as video analytics, target tracking, audio classification, analysis of MRI and CT scans for vital-sign prediction, and time-series analytics used in forecasting and predictive maintenance.
According to BrainChip, these capabilities are critically needed in industrial, automotive, digital health, smart home and smart city applications. TENNs allow for radically simpler implementations by consuming raw data directly from sensors, drastically reducing model size and the number of operations performed while maintaining very high accuracy. This can shrink design cycles and dramatically lower the cost of development.
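BrainChip has not published TENN internals in this announcement, but the core idea of consuming raw sensor data sample by sample can be sketched with a standard causal temporal convolution. The minimal example below is purely illustrative (the class name and kernel values are this sketch's own, not Akida's API): each incoming raw sample updates the output incrementally via a small ring buffer, with no pre-processing pass over the whole signal.

```python
from collections import deque

class CausalTemporalConv:
    """Illustrative causal 1-D convolution over a raw sample stream.

    Each new sample updates the output incrementally, loosely
    mirroring how a temporal convolution can consume raw sensor
    data directly, one sample at a time. Kernel values are arbitrary.
    """

    def __init__(self, kernel):
        self.kernel = list(kernel)
        # Ring buffer holding the most recent len(kernel) samples.
        self.window = deque([0.0] * len(kernel), maxlen=len(kernel))

    def step(self, sample):
        """Push one raw sample and return the current filter output."""
        self.window.append(sample)
        # Newest sample pairs with kernel[0], oldest with kernel[-1].
        return sum(k * s for k, s in zip(self.kernel, reversed(self.window)))

# Feed a short stream of raw samples through a 3-tap filter.
conv = CausalTemporalConv([0.5, 0.3, 0.2])
outputs = [conv.step(x) for x in [1.0, 2.0, 3.0, 4.0]]
```

Because the state is just the last few samples, this streaming formulation needs constant memory per step, which is the property that makes direct-from-sensor processing attractive on constrained edge devices.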
Another addition to the second generation of Akida is acceleration of Vision Transformers (ViT), a leading-edge neural network architecture that has been shown to perform extremely well on various computer vision tasks, such as image classification, object detection, and semantic segmentation.
This powerful acceleration, combined with Akida's ability to process multiple layers simultaneously and hardware support for skip connections, allows it to self-manage the execution of complex networks like ResNet-50 entirely in the neural processor without CPU intervention, minimising system load.
The Akida IP platform can learn on-device for continuous improvement and data-less customisation, improving security and privacy.
This, combined with the efficiency and performance available, enables very differentiated solutions that until now have not been possible. These include secure, small-form-factor devices such as hearables and wearables that take raw audio input, and medical devices that monitor heart rate, respiratory rate and other vitals while consuming only microwatts of power. The same platform scales up to HD-resolution vision solutions delivered through high-value, battery-operated or fanless devices, enabling applications ranging from surveillance systems and factory management to augmented reality.
“Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device,” said Sean Hehir, BrainChip CEO. “By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience.”
Akida's software and tooling further simplify the development and deployment of solutions and services. Features include:
- An efficient runtime engine that autonomously manages model acceleration, completely transparent to the developer.
- MetaTF software that developers can use with their preferred framework, such as TensorFlow/Keras, or development platform, such as Edge Impulse, to easily develop, tune, and deploy AI solutions.
- Support for all types of Convolutional Neural Networks (CNN), Deep Learning Networks (DNN) and Vision Transformer networks (ViT), as well as Spiking Neural Networks (SNNs), future-proofing designs as models become more advanced.
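The Spiking Neural Networks mentioned above are built from event-driven neurons rather than continuously evaluated activations. As a concept sketch only (this is the textbook leaky integrate-and-fire model with arbitrary parameters, not Akida's neuron implementation), the following shows the basic mechanism: membrane potential integrates input, leaks over time, and emits a discrete spike event when it crosses a threshold.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Simulate a leaky integrate-and-fire neuron, the basic unit of a
    spiking neural network. Potential integrates input, decays by the
    leak factor each step, and fires a spike (1) on crossing the
    threshold, then resets. Parameter values are illustrative only."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current   # leaky integration
        if potential >= threshold:
            spikes.append(1)                     # fire an event
            potential = 0.0                      # reset after spiking
        else:
            spikes.append(0)                     # stay silent
    return spikes

# Weak inputs accumulate until a spike; a strong input fires at once.
spike_train = lif_neuron([0.4, 0.4, 0.4, 0.0, 1.2])
```

Because computation happens only when spikes occur, event-based networks of this kind can stay idle between events, which is the source of their power-efficiency appeal at the edge.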
Akida comes with a Models Zoo and a burgeoning ecosystem of software, tools, and model vendors, as well as IP, SoC, foundry and system integrator partners.
BrainChip is currently engaged with early adopters on the second-generation IP platform, with general availability to follow in Q3 2023.