The two companies are working to enable wireless systems engineers to use AI-based autoencoders to compress Channel State Information (CSI) data, thereby significantly reducing fronthaul traffic and bandwidth requirements.
Engineers working on 5G and 6G wireless communications systems will now be able to ensure user data integrity and maintain system reliability and performance standards while reducing costs.
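The idea behind autoencoder-based CSI compression is that the device compresses its channel estimate to a short codeword before sending it over the fronthaul, and the network side reconstructs it. The sketch below illustrates only the data flow and the resulting payload reduction; the dimensions, latent size, and weights are illustrative assumptions (real systems train the encoder and decoder on recorded CSI data, and neither the sizes nor the weights here come from the article).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CSI dimensions (assumed, not from the article):
# 32 antenna ports x 13 subbands, complex-valued channel estimates.
N_ANT, N_SUB = 32, 13
csi = rng.standard_normal((N_ANT, N_SUB)) + 1j * rng.standard_normal((N_ANT, N_SUB))

# Flatten to a real vector (real and imaginary parts stacked): 832 values.
x = np.concatenate([csi.real.ravel(), csi.imag.ravel()])

LATENT = 64  # assumed compressed codeword length

# Untrained stand-ins for the encoder/decoder weights; a trained
# autoencoder would learn these from CSI datasets.
W_enc = rng.standard_normal((LATENT, x.size)) / np.sqrt(x.size)
W_dec = np.linalg.pinv(W_enc)

z = W_enc @ x      # device side: compress before the fronthaul link
x_hat = W_dec @ z  # network side: reconstruct the CSI estimate

ratio = x.size / z.size
print(f"fronthaul payload reduced by {ratio:.1f}x")  # prints "fronthaul payload reduced by 13.0x"
```

Sending 64 values instead of 832 is what reduces fronthaul traffic; the engineering trade-off is reconstruction fidelity versus compression ratio, which is why deployment-grade implementations are trained and validated against real channel data.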
“The collaboration between MathWorks and Altera enables organisations to harness the power of AI for a wide range of 5G and 6G wireless communications applications, from 5G RAN to advanced driver-assistance systems (ADAS),” explained Mike Fitton, vice president and GM, Vertical Markets at Altera. “By utilising our FPGA AI suite and MathWorks software, developers can streamline their workflow from algorithm design to hardware implementation, ensuring their AI-based wireless systems meet the rigorous demands of modern applications.”
MathWorks offers a tool suite that enhances AI and wireless development, particularly for Altera FPGAs. Its Deep Learning HDL Toolbox has been specifically developed to address the needs of engineers looking to implement deep learning networks on FPGA hardware.
Leveraging the capabilities of the HDL Coder, this toolbox enables users to customise, build, and deploy an efficient, high-performance Deep Learning Processor IP Core, significantly enhancing performance and flexibility in wireless applications by supporting standard networks and layers.
"AI-enabled compression is a powerful technology for the telecommunications industry," said MathWorks Principal Product Manager Houman Zarrinkoub. “MathWorks software offers a robust foundation for AI and wireless development. By integrating our tools with Altera's FPGA technologies, wireless engineers can efficiently create high-performance AI applications and advanced 5G and 6G wireless systems."
The FPGA AI Suite offers push-button generation of custom AI inference accelerator IP on Altera FPGAs via the OpenVINO toolkit, working from pre-trained AI models built in popular industry frameworks. It further helps FPGA developers integrate AI inference accelerator IP seamlessly into FPGA designs by using best-in-class Quartus Prime Software FPGA flows.
Combining the Deep Learning Toolbox with the OpenVINO toolkit creates a more streamlined path for developers to optimise AI inference on Altera FPGAs.