Exploiting the processing power of the CEVA-XM4 imaging and vision DSP, the CDNN is claimed to enable embedded systems to perform deep learning tasks 3x faster than the leading GPU-based systems while consuming 30x less power and requiring 15x less memory bandwidth. For example, running a Deep Neural Network-based pedestrian detection algorithm on a 28nm process requires less than 30mW for a 1080p 30fps video stream.
Key to CDNN's performance, low power and low memory bandwidth is the CEVA Network Generator, a proprietary automated technology that converts a customer's network structure and weights into a slim, customised network model for real-time use. The resulting model runs faster and consumes less power and memory bandwidth, with less than 1% degradation in accuracy compared to the original network.
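CEVA has not published how the Network Generator slims a model, so the following is purely an illustrative sketch of one generic technique used for this kind of trade-off: post-training 8-bit weight quantisation, which cuts weight storage and memory traffic by 4x at the cost of a small, bounded reconstruction error. All function names here are hypothetical and are not part of any CEVA API.

```python
import numpy as np

# Hypothetical sketch only: symmetric per-tensor int8 quantisation,
# one common way to shrink a trained network's weights with minimal
# accuracy loss. It does not represent CEVA's actual method.

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto int8 using a single per-tensor scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # stand-in layer weights

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is one quarter of float32 storage
print(w.nbytes // q.nbytes)  # 4

# the worst-case error is bounded by half a quantisation step,
# i.e. under 0.4% of the largest weight magnitude
rel_err = np.abs(w - w_hat).max() / np.abs(w).max()
print(rel_err < 0.01)  # True
```

The per-tensor scale keeps the scheme simple; production tools typically refine this with per-channel scales or retraining to stay within tight accuracy budgets such as the sub-1% figure CEVA quotes.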
Eran Briman, vice president of marketing at CEVA, said: “Our new Deep Neural Network framework for the CEVA-XM4 is the first of its kind in the embedded industry, providing a significant step forward for developers looking to implement viable deep learning algorithms within power-constrained embedded systems.”
The CDNN software framework is supplied as source code, extending the CEVA-XM4's existing Application Developer Kit, and includes real-time example models for image classification, localisation and object recognition. Target uses include object and scene recognition, ADAS, artificial intelligence, video analytics, augmented reality, virtual reality and similar computer vision applications.