Arteris NoC tiling innovation helps to accelerate designs for AI applications


Arteris, a provider of system IP that accelerates system-on-chip (SoC) creation, has announced an evolution of its network-on-chip (NoC) IP products.


With new tiling capabilities and extended mesh topology support, the updated IP helps accelerate the development of Artificial Intelligence (AI) and Machine Learning (ML) compute in SoC designs.

This new functionality enables design teams to scale compute performance by more than 10 times while meeting project schedules as well as power, performance and area (PPA) goals.

Network-on-chip tiling is an emerging trend in SoC design. An evolutionary approach, it uses proven, robust NoC IP to facilitate scaling, condense design time, speed testing and reduce design risk. It allows SoC architects to create modular, scalable designs by replicating soft tiles across the chip. Each soft tile represents a self-contained functional unit, enabling faster integration, verification and optimization.
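As a rough mental model only, the tiling idea can be pictured as stamping one tile definition across a mesh grid and wiring each copy to its neighbours. The sketch below is not Arteris tooling, and the names (SoftTile, MeshNoC) are hypothetical; it simply illustrates how replicating a self-contained tile scales a design without reworking the interconnect.

```python
# Purely illustrative sketch of NoC tiling: one tile definition is
# replicated across a mesh, and each copy links to its neighbours.
# SoftTile and MeshNoC are hypothetical names, not Arteris APIs.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SoftTile:
    """A self-contained functional unit: compute plus its local NoC interfaces."""
    name: str
    niu_count: int  # network interface units bundled inside the tile

@dataclass
class MeshNoC:
    rows: int
    cols: int
    tiles: dict = field(default_factory=dict)   # (row, col) -> SoftTile
    links: list = field(default_factory=list)   # neighbouring (row, col) pairs

    def replicate(self, template: SoftTile) -> None:
        """Stamp the same tile at every mesh coordinate (the 'tiling' step)."""
        for r in range(self.rows):
            for c in range(self.cols):
                self.tiles[(r, c)] = SoftTile(f"{template.name}_{r}_{c}",
                                              template.niu_count)
                if r > 0:
                    self.links.append(((r - 1, c), (r, c)))  # vertical mesh link
                if c > 0:
                    self.links.append(((r, c - 1), (r, c)))  # horizontal mesh link

# Scaling compute then means growing the mesh, not redesigning the interconnect.
mesh = MeshNoC(rows=4, cols=4)
mesh.replicate(SoftTile(name="npu_tile", niu_count=3))
print(len(mesh.tiles), "tiles,", len(mesh.links), "mesh links")  # 16 tiles, 24 mesh links
```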

According to the company, tiling coupled with mesh topologies within Arteris’ flagship NoC IP products, FlexNoC and Ncore, will be transformative for the ever-growing inclusion of AI compute into most SoCs.

AI-enabled systems are growing in size and complexity, yet they can be scaled quickly by adding soft tiles without disrupting the entire SoC design. The combination of tiling and mesh topologies can further reduce auxiliary processing unit (XPU) sub-system design time and overall SoC connectivity execution time by up to 50% compared with manually integrated, non-tiled designs.

The first iteration of NoC tiling organises Network Interface Units (NIUs) into modular, repeatable blocks, improving scalability, efficiency and reliability in SoC designs. The resulting designs support increasingly large and advanced AI compute for fast-growing, sophisticated AI workloads: vision, ML models, Deep Learning (DL), Natural Language Processing (NLP) including Large Language Models (LLMs), and Generative AI (GAI), for both training and inference, including at the edge.

“Arteris is continuously innovating, and this revolutionary NoC soft tiling functionality supported by large mesh topologies is an advancement in SoC design technology,” said K. Charles Janac, president and CEO of Arteris. “Our customers, who are already building leading-edge AI-powered SoCs, are further empowered to accelerate the development of much larger and more complex AI systems with greater efficiency, all while staying within their project timeline and PPA targets.”

The FlexNoC and Ncore NoC IP products, which offer expanded AI support via tiling and extended mesh topology capabilities, are now available to early-access customers and partners.