Designed specifically for the R-Car system-on-chip (SoC) devices from Renesas, these tools make it possible, from the initial stage of software development, to rapidly develop network models that deliver highly accurate object recognition and take full advantage of the performance of the R-Car. This reduces post-development rework and thereby helps shorten development cycles.
“Renesas continues to create integrated development environments that enable customers to adopt the ‘software-first’ approach,” said Hirofumi Kawaguchi, Vice President of the Automotive Software Development Division at Renesas. “By supporting the development of deep learning models tailored to R-Car, we are helping our customers to build AD and ADAS solutions, while also reducing the time to market and development costs.”
“GENESIS for R-Car, a cloud-based evaluation environment that we built jointly with Renesas, allows engineers to evaluate and select devices earlier in the development cycle, and it has already been used by many customers,” said Satoshi Miki, CEO of Fixstars. “We will continue to develop new technologies to accelerate machine learning operations (MLOps) that can be used to maintain the latest versions of software in automotive applications.”
Autonomous driving (AD) and advanced driver-assistance system (ADAS) applications use deep learning to achieve highly accurate object recognition, but deep learning inference requires massive amounts of computation and memory. The models and executable programmes used in automotive applications therefore need to be optimised for the target automotive SoC, since achieving real-time processing with limited arithmetic units and memory resources is challenging.
In addition, the process from software evaluation to verification must be accelerated, and updates need to be applied repeatedly to improve accuracy and performance.
Renesas and Fixstars have developed the following tools designed to meet these needs.
- R-Car Neural Architecture Search (NAS) tool for generating network models optimised for R-Car
This tool generates deep learning network models that efficiently utilise the CNN (convolutional neural network) accelerator, DSP, and memory on the R-Car device. This allows engineers to rapidly develop lightweight network models that achieve highly accurate object recognition and fast processing times, even without deep knowledge of or experience with the R-Car architecture (the hardware-aware search idea is sketched after this list).
- R-Car DNN Compiler for compiling network models for R-Car
This compiler converts the optimised network models into programmes that make full use of the performance potential of R-Car. The generated programmes run quickly on the CNN IP, and the compiler also performs memory optimisation so that the high-speed but limited-capacity SRAM delivers its maximum performance (the memory-planning idea behind this is illustrated in a second sketch after this list).
- R-Car DNN Simulator for fast simulation of compiled programmes
This simulator can be used to rapidly verify the operation of programmes on a PC rather than on the actual R-Car chip, and it produces the same operation results that R-Car itself would produce. If recognition accuracy is degraded while models are being made more lightweight and programmes are being optimised, engineers can feed that finding straight back into model development, thereby shortening development cycles (a sketch of this verification step also follows the list).
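
To make the hardware-aware search idea behind the NAS tool more concrete, here is a minimal illustrative sketch in Python. It is not the R-Car NAS tool or its API: the candidate space, the parameter and MAC estimates, the budgets, and the accuracy proxy are all invented for illustration. A real flow would train and validate each candidate and use measured latency and memory figures for the target device.

```python
"""Illustrative hardware-aware architecture search (random search).

This is NOT the R-Car NAS tool. It only sketches the general idea of
searching for a network configuration that fits a device's compute and
memory budget; the candidate space, cost estimates, budgets, and the
accuracy proxy are all invented for this example.
"""
import random
from dataclasses import dataclass
from typing import Optional


@dataclass
class Candidate:
    depth: int   # number of convolution blocks
    width: int   # channels per block
    kernel: int  # square kernel size


def estimate_params(c: Candidate) -> int:
    # Rough weight count of a plain conv stack with constant width (bias ignored).
    return c.depth * c.width * c.width * c.kernel * c.kernel


def estimate_macs(c: Candidate, feat: int = 128) -> int:
    # Rough multiply-accumulate count for a feat x feat feature map.
    return estimate_params(c) * feat * feat


def proxy_accuracy(c: Candidate) -> float:
    # Placeholder for a real measurement (training plus validation);
    # here it simply rewards capacity with diminishing returns.
    return 1.0 - 1.0 / (1.0 + 0.01 * c.depth * c.width)


def random_search(trials: int, param_budget: int, mac_budget: int) -> Optional[Candidate]:
    best, best_score = None, -1.0
    for _ in range(trials):
        c = Candidate(depth=random.randint(2, 12),
                      width=random.choice([16, 32, 64, 128]),
                      kernel=random.choice([1, 3, 5]))
        # Discard candidates that would not fit the (hypothetical) device budget.
        if estimate_params(c) > param_budget or estimate_macs(c) > mac_budget:
            continue
        score = proxy_accuracy(c)
        if score > best_score:
            best, best_score = c, score
    return best


if __name__ == "__main__":
    print(random_search(trials=500, param_budget=2_000_000, mac_budget=5_000_000_000))
```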
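
The compiler's memory optimisation can be pictured as splitting each layer's work into stripes small enough that the working data stays inside on-chip SRAM. The sketch below is a generic illustration of that tiling idea only, not the behaviour of the R-Car DNN Compiler; the SRAM size, layer shapes, and byte counts are assumptions.

```python
"""Illustrative tile-size selection so that working data fits in on-chip SRAM.

Generic sketch of the memory-planning idea only; the SRAM budget, layer
shapes, and bytes-per-element figure below are invented, not taken from
any R-Car device or from the R-Car DNN Compiler.
"""

SRAM_BYTES = 2 * 1024 * 1024   # hypothetical on-chip SRAM budget
BYTES_PER_ELEM = 1             # e.g. int8 activations


def tile_bytes(rows: int, cols: int, in_ch: int, out_ch: int) -> int:
    # Working set for one horizontal stripe: input tile plus output tile.
    return rows * cols * (in_ch + out_ch) * BYTES_PER_ELEM


def choose_tile_rows(height: int, width: int, in_ch: int, out_ch: int) -> int:
    """Largest stripe height whose working set still fits in SRAM."""
    for rows in range(height, 0, -1):
        if tile_bytes(rows, width, in_ch, out_ch) <= SRAM_BYTES:
            return rows
    raise ValueError("even a single row does not fit the SRAM budget")


if __name__ == "__main__":
    # Hypothetical layer: 512 x 512 feature map, 64 input and 128 output channels.
    rows = choose_tile_rows(height=512, width=512, in_ch=64, out_ch=128)
    print(f"process the layer in stripes of {rows} rows")
```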
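
Finally, the kind of check the simulator enables, comparing the optimised programme's outputs on a PC against the original model, can be sketched as follows. The run_reference and run_simulated functions are hypothetical stand-ins for "run the original model" and "run the compiled programme in the simulator", and the tolerance and agreement threshold are assumptions, not values from the R-Car DNN Simulator.

```python
"""Illustrative check of simulated results against a reference model.

run_reference() and run_simulated() are hypothetical placeholders; in a
real flow they would invoke the original model and the PC simulation of
the compiled programme, respectively.
"""
import numpy as np


def run_reference(batch: np.ndarray) -> np.ndarray:
    # Placeholder: output of the original (e.g. floating-point) model.
    rng = np.random.default_rng(0)
    return rng.random((batch.shape[0], 10))


def run_simulated(batch: np.ndarray) -> np.ndarray:
    # Placeholder: output of the optimised programme run in a simulator,
    # with a small offset standing in for quantisation effects.
    rng = np.random.default_rng(0)
    return rng.random((batch.shape[0], 10)) + 1e-4


def compare(batch: np.ndarray, atol: float = 1e-2) -> None:
    ref, sim = run_reference(batch), run_simulated(batch)
    # Element-wise error and top-1 agreement between the two runs.
    max_err = float(np.max(np.abs(ref - sim)))
    agreement = float(np.mean(ref.argmax(axis=1) == sim.argmax(axis=1)))
    print(f"max abs error: {max_err:.4g}, top-1 agreement: {agreement:.1%}")
    if max_err > atol or agreement < 0.99:
        print("accuracy regression -> feed the result back into model development")


if __name__ == "__main__":
    compare(np.zeros((32, 3, 224, 224), dtype=np.float32))
```

The point of such a check is the feedback loop the announcement describes: if accuracy drops during model slimming or programme optimisation, the result can be fed back into model development without waiting for the actual silicon.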