The high-performance model is deployed on the ZCU104 board, based on the Xilinx Zynq UltraScale+ MPSoC device, and leverages the company's deep learning processor unit (DPU), a soft-IP tensor accelerator powerful enough to run a wide variety of neural networks, including models for disease classification and detection.
The collaboratively developed solution uses an open-source model that runs on a Python programming platform on the Xilinx Zynq UltraScale+ MPSoC device, so researchers can adapt it to different application-specific requirements. Makers of medical diagnostic and clinical equipment, as well as healthcare service providers, can use the open-source design to rapidly develop and deploy trained models for many clinical and radiological applications in a mobile, portable, or point-of-care edge device, with the option to scale using the cloud.
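As a rough illustration of what the edge side of such a design looks like, the following sketch runs a compiled classification model on the DPU through the Vitis AI Runtime (VART) Python API; the model file name, tensor data type, and preprocessing are illustrative assumptions, not details of the released design.

    # Hypothetical sketch: DPU inference with the Vitis AI Runtime (VART) Python API.
    import numpy as np
    import vart
    import xir

    def get_dpu_subgraph(graph):
        # A compiled .xmodel contains one DPU subgraph alongside CPU subgraphs.
        subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
        return [s for s in subgraphs
                if s.has_attr("device") and s.get_attr("device").upper() == "DPU"][0]

    graph = xir.Graph.deserialize("xray_classifier.xmodel")   # placeholder model file
    runner = vart.Runner.create_runner(get_dpu_subgraph(graph), "run")

    # Allocate buffers matching the model's input and output tensors.
    in_tensor = runner.get_input_tensors()[0]
    out_tensor = runner.get_output_tensors()[0]
    input_data = np.zeros(tuple(in_tensor.dims), dtype=np.int8)    # preprocessed X-ray batch;
    output_data = np.zeros(tuple(out_tensor.dims), dtype=np.int8)  # dtype depends on quantization

    # Submit the batch to the DPU and wait for the result.
    job_id = runner.execute_async([input_data], [output_data])
    runner.wait(job_id)
    print("raw output:", output_data.flatten()[:5])

In a design along these lines, the .xmodel produced by the Vitis AI compiler is the artifact a cloud deployment would push to the board, so the on-device inference code can stay unchanged when the model is retrained.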
“AI is one of the fastest-growing and highest-demand application areas of healthcare, so we’re excited to share this adaptable, open-source solution with the industry,” said Kapil Shankar, vice president of marketing and business development, Core Markets Group at Xilinx. “The cost-effective solution offers low latency, power efficiency, and scalability. Plus, as the model can be easily adapted to similar clinical and diagnostic applications, medical equipment makers and healthcare providers are empowered to swiftly develop future clinical and radiological applications using the reference design kit.”
The solution’s artificial intelligence (AI) model is trained using Amazon SageMaker and is deployed from cloud to edge using AWS IoT Greengrass, enabling remote machine learning (ML) model updates, geographically distributed inference, and the ability to scale across remote networks and large geographies.
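As a rough illustration of that cloud-to-edge flow, the sketch below trains a model with the SageMaker Python SDK and then pushes an updated model component to an edge device through AWS IoT Greengrass v2; the training script, IAM role ARN, S3 path, thing name, and component name are placeholders rather than the project's actual resources.

    # Illustrative cloud-side sketch: SageMaker training plus a Greengrass v2 deployment.
    import boto3
    from sagemaker.pytorch import PyTorch

    role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

    # Train the X-ray classification model on Amazon SageMaker.
    estimator = PyTorch(
        entry_point="train.py",             # hypothetical training script
        role=role,
        instance_count=1,
        instance_type="ml.p3.2xlarge",
        framework_version="1.8.1",
        py_version="py36",
        hyperparameters={"epochs": 30, "batch-size": 32},
    )
    estimator.fit({"training": "s3://example-bucket/xray-dataset/"})  # placeholder S3 path

    # Push the retrained model to the edge as an AWS IoT Greengrass v2 deployment.
    greengrass = boto3.client("greengrassv2")
    greengrass.create_deployment(
        targetArn="arn:aws:iot:us-west-2:123456789012:thing/zcu104-edge-node",  # placeholder thing
        deploymentName="xray-model-update",
        components={"com.example.XrayInference": {"componentVersion": "1.0.0"}},  # placeholder component
    )

Because deployments like this target a thing or thing group rather than a single board, the same update can be rolled out across geographically distributed devices, which is what enables the remote model updates and fleet-wide scaling described above.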
Dirk Didascalou, vice president of IoT at Amazon Web Services, said, “Amazon SageMaker enabled Xilinx and Spline.AI to develop a high-quality solution that can support highly accurate clinical diagnostics using low-cost medical appliances. The integration of AWS IoT Greengrass allows physicians to easily upload X-ray images to the cloud without the need for a physical medical device, enabling them to extend the delivery of care to more remote locations.”
The solution has already been used for a pneumonia and COVID-19 detection system, providing high levels of accuracy and low inference latency. The development team leveraged over 30,000 curated and labelled pneumonia images and 500 COVID-19 images to train the deep learning models. The data is made available for public research by healthcare and research institutions such as the National Institutes of Health (NIH), Stanford University, and MIT, as well as by other hospitals and clinics around the world.