These GAIA-based accelerators have been designed to enable energy-efficient transformers and large language models (LLMs) on resource-constrained devices while supporting ongoing investment in existing convolutional and recurrent neural networks.
With the Synabro SDK, developers will have access to a unified development flow to optimise pre-trained models for execution on GAIA-based devices. The combined solution offers a highly flexible general-purpose neural processing engine that is scalable from smart sensors to edge-of-network infrastructure applications.
GAIA’s memory, neural engine, and internal communication architecture offer high utilisation, further increasing efficiency within an already ultra-low energy envelope. Enabling users to mix and scale neural engines, CPU architectures, and core counts with an elastic memory structure opens the addressable application space for mission-critical, real-time edge applications, from tens of giga operations per second (GOPS) to hundreds of trillions of operations per second (TOPS).
The Synabro SDK allows developers to migrate pre-trained models to GAIA-based hardware quickly and intuitively. Furthermore, advanced quantization, pruning, and sparsity techniques enable users to extend the Pareto curve by optimising for accuracy, latency, and memory size.
“The AiM Future team has dedicated years to tackling intricate consumer electronics demands while also studying the automotive and industrial sectors,” said CEO ChangSoo Kim. “We have closely listened to our customers and have developed cutting-edge technology to maintain our position at the forefront of this rapidly evolving market.”
AiM Future is currently licensing GAIA and Synabro to strategic lead partners, with general availability expected in Q4 2024.