AI platform enables customisable packaging solutions for AI applications


Sarcina Technology, a semiconductor packaging specialist, has announced the launch of an AI platform for creating advanced AI packaging solutions tailored to specific customer requirements.

[Image: Sarcina Technology's semiconductor packaging platform. Credit: Sarcina]

Built on ASE’s FOCoS-CL (Fan-Out Chip-on-Substrate, Chip-Last) assembly technology, the platform includes an interposer that supports chiplets using UCIe-A die-to-die interconnects, allowing it to deliver cost-effective, customisable solutions.

Sarcina Technology said that it was looking to ‘push the boundaries of AI computing system development’ by providing a platform that enables more efficient, scalable, configurable and cost-effective semiconductor packaging solutions for AI applications.

As AI workloads continue to evolve, there is a need for increasingly sophisticated packaging solutions capable of supporting higher computational demands. Sarcina’s interposer packaging technology integrates memory solutions with high-efficiency interconnects.

According to Dr. Larry Zu, CEO of Sarcina Technology, "Six years ago, after prototyping a 2.5D silicon TSV interposer package that integrated one ASIC and two HBMs, we predicted this technology would enable highly complex compute solutions. Today, this vision is becoming a reality, driven by RDL die-to-die interconnects like UCIe."

"With FOCoS assembly technology, we are entering a new era of AI computing,” added Zu. He continued, “Our AI platform offers greater efficiency and customisation, with the lowest cost in the industry for generative AI chips. This ensures that our customers stay competitive in the rapidly evolving AI landscape."

The Sarcina team has developed an interposer with up to a 64-bit data interface per module, achieving data rates of up to 32 GT/s per lane. This delivers the highest UCIe-A performance in terms of both bandwidth and data rate specified by the UCIe 2.0 standard. To further enhance data transfer throughput, multiple modules can be arranged in parallel along the silicon die edge. There is also a choice between LPDDR5X/6 packaged memory chips and HBMs.
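
To put those figures in context, a quick back-of-the-envelope calculation (not from Sarcina's announcement; the module counts below are purely illustrative) shows the raw throughput a 64-bit module at 32 GT/s implies, and how parallel modules along the die edge would scale it:

    # Rough UCIe-A module bandwidth estimate (illustrative assumptions:
    # 64 single-bit lanes per module, 32 GT/s per lane, raw rate with
    # no protocol or coding overhead).
    LANES_PER_MODULE = 64
    RATE_GT_S = 32

    raw_gbit_s = LANES_PER_MODULE * RATE_GT_S   # 2048 Gbit/s per module
    raw_gbyte_s = raw_gbit_s / 8                # 256 GB/s per module

    # Hypothetical module counts arranged along the silicon die edge:
    for modules in (1, 2, 4):
        print(f"{modules} module(s): ~{modules * raw_gbyte_s:.0f} GB/s raw")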

Sarcina has developed expertise in designing high-power, high-performance semiconductor packages, which allows semiconductor startups to focus on developing efficient algorithms for generative AI and edge AI training without needing an expensive post-silicon design and manufacturing team.

Startups will be able to simply develop their silicon and pass it to Sarcina for post-silicon packaging, streamlining the process and reducing costs while maintaining high performance.

Sarcina’s die-to-die interposer solution enables AI customers to use chiplets to form large silicon areas, supporting high-performance computing with satisfactory wafer yields. This large package design allows for the integration of more memory, which is crucial for generative AI applications that require rapid, parallel data processing.
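
The yield argument here is that defects scale with die area, so several small chiplets are more likely to come out usable than one monolithic die of the same total silicon area. A minimal sketch using the classic Poisson yield model, with an assumed defect density and illustrative die sizes (not Sarcina figures):

    import math

    # Poisson yield model: Y = exp(-D * A), with D in defects/cm^2
    # and A the die area in cm^2. D and both areas are assumptions.
    D = 0.2                   # assumed defect density, defects/cm^2

    monolithic_area = 8.0     # one large 800 mm^2 die
    chiplet_area = 2.0        # a 200 mm^2 chiplet (four give the same area)

    y_monolithic = math.exp(-D * monolithic_area)   # ~20% good dies
    y_per_chiplet = math.exp(-D * chiplet_area)     # ~67% good dies

    # Bad chiplets are discarded before assembly (known-good die),
    # so usable-silicon cost scales far better than the monolithic case.
    print(f"monolithic yield:  {y_monolithic:.1%}")
    print(f"per-chiplet yield: {y_per_chiplet:.1%}")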

Key features of the Sarcina AI platform:

Cost-effective chiplet design: A lower-cost alternative to expensive monolithic SoC solutions.

Fast die-to-die interconnection with UCIe-A standard: Up to 64-bit data interface per module and 32 GT/s transmission speed per lane. Supports multi-module configurations, interconnect redundancy and side-band configurations as specified by UCIe 2.0 standards.

FOCoS-CL advanced packaging technology: A cost-effective replacement for costly 2.5D TSV (Through-Silicon Via) silicon interposer technology, as well as other expensive solutions like silicon bridge die with fan-out RDL interconnections.

LPDDR5X/6 and HBM options: Provides superior memory bandwidth and efficiency to support a variety of AI workloads. LPDDR6 memory also incorporates 3D stacking technology similar to HBM, achieving data rates of approximately 10 GT/s (see the bandwidth sketch after this list).

Scalable package size: Supports package sizes up to 100 mm × 100 mm, enabling scalability for diverse AI applications.

Power specifications: Below 500 W with forced-air cooling and up to 1000 W with liquid cooling, offering flexible deployment options.

Memory integration: Supports up to 20 LPDDR5X/6 memory chips or up to 8 HBM3E chips, facilitating high-speed data processing for AI workloads.
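
As a rough comparison of the two memory options above (chip counts taken from the feature list; the per-chip bus widths and exact pin rates are assumptions for illustration, not Sarcina specifications):

    def aggregate_gb_s(chips, rate_gt_s, bus_bits):
        """Raw aggregate bandwidth in GB/s: chips * rate * width / 8."""
        return chips * rate_gt_s * bus_bits / 8

    # 20 LPDDR6 chips at ~10 GT/s, assuming a x32 bus per chip:
    lpddr = aggregate_gb_s(chips=20, rate_gt_s=10, bus_bits=32)    # ~800 GB/s

    # 8 HBM3E stacks, assuming ~9.6 GT/s on the standard 1024-bit interface:
    hbm = aggregate_gb_s(chips=8, rate_gt_s=9.6, bus_bits=1024)    # ~9830 GB/s

    print(f"LPDDR6 option: ~{lpddr:.0f} GB/s aggregate")
    print(f"HBM3E option:  ~{hbm:.0f} GB/s aggregate")

On these assumptions the HBM3E option delivers roughly an order of magnitude more aggregate bandwidth, which suits the rapid, parallel data processing that generative AI workloads demand, while the LPDDR5X/6 option trades bandwidth for lower cost and power.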