The card, the industry's first low-profile adaptable accelerator with PCIe Gen 4 support, will benefit from Zebra's high-throughput capabilities, enabling the Alveo U50 to compute convolutional neural networks more efficiently.
This is the latest in a series of Zebra-enhanced Xilinx boards that enable inference acceleration for a wide variety of AI applications; the others include the Alveo U200 and Alveo U250.
“The level of acceleration that Zebra brings to our Alveo cards puts CPU and GPU accelerators to shame,” said Ramine Roane, Xilinx’s vice president of marketing. “Combined with Zebra, Alveo U50 meets the flexibility and performance needs of AI workloads and offers high throughput and low latency performance advantages to any deployment.”
According to Mipsology, its Zero Effort IP creates the first plug-and-play FPGA solution, delivering broad application flexibility, a longer lifespan, and lower power and cost. It leverages existing skill sets and eliminates the need for FPGA expertise, making the Alveo U50 as easy to use for deep learning inference acceleration as a CPU or GPU.
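Mipsology does not publish integration details in this announcement, but the plug-and-play claim is easiest to picture against a standard framework inference script. The sketch below is ordinary PyTorch with an arbitrarily chosen model (ResNet-50) and a dummy input, used purely for illustration; per the company's description, a script like this is the kind of workload Zebra aims to accelerate on the Alveo U50 without any FPGA-specific changes to the code.

```python
import torch
import torchvision.models as models

# Standard CNN inference in PyTorch; nothing here is FPGA-specific.
model = models.resnet50(weights=None)  # weights=None keeps the sketch self-contained
model.eval()

# One dummy image batch: batch size 1, 3 channels, 224 x 224 pixels.
dummy_input = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    output = model(dummy_input)

# Index of the highest-scoring class for the dummy input.
print(output.argmax(dim=1).item())
```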
“Zebra delivers the highest possible performance and ease-of-use for inference acceleration,” said Ludo Larzul, Mipsology’s founder and chief executive officer. “With the Alveo U50, Xilinx and Mipsology are providing AI application developers with a card that excels across multiple apps and in every development environment.”
According to the company, the high performance and long life expectancy of Zebra-powered FPGAs make them better suited than either CPUs or GPUs to accelerate neural network inference, both in the data centre and in large industrial AI applications including robotics, smart cities, image processing and video analytics, healthcare, retail, driver-assist cars and video surveillance.
According to Mipsology, they also extend the lifetime of neural network solutions by doubling FPGA performance every year on the same silicon and across FPGA generations.