With support from A-list investors including Kleiner Perkins, Mayfield, and Fidelity, the company has announced the formal availability of its expansive line of IP solutions, which are customisable for any workload or application.
Formed by the same team that designed Marvell’s ThunderX2 server chips, Akeana offers a variety of IP solutions, including microcontrollers, Android clusters, AI vector cores and subsystems, and compute clusters for networking and data centres.
Akeana intends to offer licensing options and processors that fill and exceed current performance gaps, and to that end has released three processor lines and SoC IP in tandem with the company's formal launch. All of these products are ready for customer delivery:
Akeana 100 Series: a line of highly configurable processors built on 32-bit RISC-V cores, supporting applications from embedded microcontrollers to edge gateways and personal computing devices.
Akeana 1000 Series: a processor line with 64-bit RISC-V cores and an MMU to support rich operating systems while maintaining low power and a small die area. These processors support in-order or out-of-order pipelines, multi-threading, and the vector, hypervisor, and other extensions that are part of recent and upcoming RISC-V profiles, as well as optional AI computation extensions.
Akeana 5000 Series: a line of extreme-performance processors with 64-bit RISC-V cores optimised for demanding applications in next-generation devices, laptops, data centres, and cloud infrastructure. These processors are compatible with the Akeana 1000 Series but deliver much higher single-thread performance.
Processor System IP: a collection of IP blocks needed for the creation of processor SoCs, including a Coherent Cluster Cache, I/O MMU, and Interrupt Controller IPs. In addition, Akeana provides Scalable Mesh and Coherence Hub IP (compatible with AMBA CHI) to build large coherent compute subsystems for data centres and other use cases.
AI Matrix computation engine: designed to offload Matrix Multiply operations for AI acceleration. Configurable in size and supporting various data types, it may be attached to the coherent cluster cache block like a core for optimal data sharing.
“Our team has a proven track record of designing server chips, and we are now applying that expertise to the broader semiconductor market as we formally go to market,” said Rabin Sugumar, Akeana CEO. “With our rich portfolio of customisable cores and special security, debug, RAS, and telemetry features, we provide our customers with unparalleled performance, observability, and reliability.”
Akeana is a member of the RISC-V Board of Directors and is also participating in the RISE project to accelerate the availability of software for RISC-V.