The 2nd generation Akida platform is designed for extremely energy-efficient processing of complex neural network models on Edge devices. Support for 8-bit weights, 8-bit activations, and long-range skip connections expands the range of models that can be accelerated entirely in Akida hardware.
With Cloud compute requirements for AI increasing exponentially, compounded by the growth of Generative AI, the move towards Hybrid AI solutions demands more capable and efficient compute at the Edge.
The introduction of Temporal Event-Based Neural Networks (TENNs) has revolutionised advanced sequential processing of multi-dimensional streaming and time-series data, radically reducing model size and improving performance, both important considerations for Edge devices.
Reducing model size and improving compute density by an order of magnitude allows more capable AI use cases to run closer to the sensor, in a more secure fashion.
Combined with hardware acceleration of Vision Transformer (ViT) models, which boosts vision performance, this unlocks the potential to create game-changing Edge devices that process advanced vision and video applications in milliwatts, and audio and other similar applications in microwatts, at the sensor.
“Generative AI and LLMs at the Edge are key to intelligent situational awareness in verticals from manufacturing to healthcare to defence,” said Jean-Luc Chatelain, MD of Verax Capital Advisors and former MD and Global CTO at Accenture Applied Intelligence. “Disruptive innovation like BrainChip TENNs support Vision Transformers built on the foundation of neuromorphic principles, can deliver compelling solutions in ultra-low power, small form factor devices at the Edge, without compromising accuracy.”
Second generation MetaTF software will enable developers to evaluate the capabilities of Akida, and to optimise and customise their designs when architecting their System on a Chip (SoC) and the accompanying software solutions. In addition to TensorFlow, MetaTF will support ONNX, allowing greater compatibility across frameworks including PyTorch.
Akida’s fully digital, customisable, event-based neural processing solution is intended for advanced intelligent sensing, medical monitoring and prediction, and high-end video-object detection among many other applications. Along with its extreme efficiency, accuracy and performance, Akida also has the ability to securely learn on-device without the need for cloud retraining.
“This is a significant step in BrainChip’s vision to bring unprecedented AI processing power to Edge devices, untethered from the cloud,” said Sean Hehir, CEO, BrainChip. “With Akida’s 2nd generation in advanced engagements with target customers, and MetaTF enabling early evaluation for a broader market, we are excited to accelerate the market towards the promise of Edge AI”.