Digital imaging technologies bring better performance to embedded systems
Intelligent vision systems are relatively new; they provide a link between the harder domain of modern technology and the softer world we live in.
Bridging this divide matters because we rely increasingly on technology. As intelligent vision is introduced and adopted, systems become better equipped to 'understand' the real world and therefore better able to adapt to it. While the ability to capture images of everyday objects and interpret them 'in context' removes many barriers to using technology to aid modern life, enabling embedded systems to identify objects accurately from captured images remains a significant challenge.
The use cases for smarter vision are expanding, driving innovation from OEMs eager to offer the right solutions. Perhaps the highest profile use case is security; the days of human monitoring could well be numbered, as embedded systems can now 'learn' how to identify suspicious objects and patterns in crowds, even to the point of identifying individuals and tracking them through multiple locations.
Other applications benefiting from advances in technology include industrial automation, where greater operator support is leading to higher productivity. Automatic inspection of goods is one example in which vision systems are now making decisions that were once the remit of an operator.
Emerging applications include Advanced Driver Assistance Systems (ADAS), in which multiple cameras provide 'surround vision', as well as collision and lane departure warnings. It is common for these kinds of innovation to begin life in high end models, but to migrate rapidly to mainstream cars as the cost of adoption drops.
In addition to ADAS, there are application areas that may have even larger audiences, such as vision systems for changing the way we communicate and the way in which medical care is delivered. Both areas are being shaped by a closer collaboration between people and technology, and are expected to benefit from smarter vision systems. For example, surgeons already rely on cameras to relay information from within a patient when conducting keyhole surgery; with smarter vision systems, it may be possible for the surgeon to be located remotely or even for the machine to become the 'surgeon'.
Internet connectivity has influenced the way cameras – and in particular security cameras in closed circuit systems – have evolved. It is now possible to view and control a camera remotely over a (secure) internet link. However, the 'intelligence' is still firmly in the head of the operator.
Smarter vision systems bring a new dimension to this, as demonstrated by the formation of the Embedded Vision Alliance, an industry association dedicated to furthering this new paradigm. Its aim is to provide a link between the vast amount of computer vision research available and the development of practical embedded systems.
Vision solutions have, in the past, relied heavily on the quality of the optics in the camera to provide higher quality video images. In smarter vision systems, powerful processing will largely obviate the need for expensive cameras and lenses in today's essentially analogue systems. Instead, low cost digital cameras will provide high levels of raw image data that can be processed by fast and efficient embedded processors. Many of the necessary algorithms are being developed collaboratively to create resources such as the OpenCV Library (www.opencv.org). This offers more than 2500 algorithms written in C, C++, Java and Python, with complexity ranging from image filters to motion detection.
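To give a flavour of the kind of building block the OpenCV Library provides, the sketch below uses simple frame differencing to flag motion in a video stream. It is a minimal illustration only; the capture source, blur kernel and threshold values are arbitrary choices for the example rather than recommendations.

```cpp
// Minimal motion-detection sketch using OpenCV frame differencing.
// Capture index, blur kernel and threshold are illustrative values only.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);          // default camera; a file or stream URL could be used instead
    if (!cap.isOpened()) {
        std::cerr << "Could not open video source\n";
        return 1;
    }

    cv::Mat frame, gray, prevGray, diff, mask;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);   // suppress sensor noise

        if (!prevGray.empty()) {
            cv::absdiff(gray, prevGray, diff);             // pixel-wise change between frames
            cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);
            if (cv::countNonZero(mask) > 0.01 * mask.total()) // crude 1% 'motion' criterion
                std::cout << "Motion detected in frame\n";
        }
        prevGray = gray.clone();
    }
    return 0;
}
```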
This paradigm shift from high cost analogue to low cost all digital solutions will enable more cameras to be deployed in more applications, all relying predominantly on the processors and algorithms being developed by OEMs and aided by industry efforts such as the Embedded Vision Alliance and OpenCV Library.
At the heart of this evolution is the ability to integrate immense amounts of digital signal processing, general purpose processing and hard/soft IP in a single flexible platform, in a way that is both cost and power efficient.
Smarter vision systems will take many forms and, while the foundation blocks will remain the same – dsps, embedded processors and supporting IP – each solution will need to be 'tuned' to the specific application.
Developing bespoke solutions from discrete devices is, of course, possible, but high volume deployment is best served by an SoC. However, developing an SoC is expensive and complex, which means it is often only viable if it targets a very high volume application, such as a portable consumer device. Creating an SoC that meets the needs of many smart vision systems could be difficult and will necessitate a certain amount of redundancy in the design. Using a programmable platform, however, overcomes this problem.
The Zynq-7000 All Programmable SoC integrates an ARM Cortex-A9 dual core processing subsystem alongside Xilinx's high performance fpga fabric, allowing it to be adapted to the needs of various smart vision systems. Fig 1 shows how a Zynq SoC compares with a multichip architecture in a multicamera ADAS application providing such features as blind spot detection, lane departure warning, pedestrian detection and a 360° view when parking.
Fig 2 shows a generic signal flow for a smart vision system and compares the Zynq device against an SoC integrating dsp and gpu (graphics processing unit) cores, illustrating how the processing subsystem and fpga logic can work together to provide a more flexible and versatile platform. The extensibility of the Zynq SoC, using the fpga logic, also provides protection against hitting a performance 'ceiling' if additional compute power is required.
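As a rough outline of that generic flow, the sketch below separates the capture, pre-processing, analytics and decision stages in plain C++. The function names are hypothetical placeholders rather than any vendor API; in a Zynq-style partition, a pixel-level stage such as the pre-processing step is typically the first candidate for offload into the fpga logic, while the analytics and decision logic stay on the ARM cores.

```cpp
// Hypothetical outline of a generic smart vision signal flow.
// Function names are placeholders, not part of any vendor API.
#include <opencv2/opencv.hpp>

cv::Mat preProcess(const cv::Mat& raw)        // noise reduction, colour conversion
{                                             // a natural candidate for fpga offload
    cv::Mat gray, filtered;
    cv::cvtColor(raw, gray, cv::COLOR_BGR2GRAY);
    cv::medianBlur(gray, filtered, 3);
    return filtered;
}

bool analyse(const cv::Mat& img)              // feature extraction / classification (placeholder)
{
    return cv::countNonZero(img > 200) > 500; // e.g. 'bright object present'
}

void act(bool objectFound)                    // decision / actuation stage (placeholder)
{
    if (objectFound) { /* raise alarm, log event, steer actuator, ... */ }
}

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frame;
    while (cap.read(frame))
        act(analyse(preProcess(frame)));      // capture -> pre-process -> analytics -> decision
}
```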
The success of smarter vision systems will rely heavily on embedded system developers having the right technologies to meet the performance and flexibility requirements, something the industry appreciates and is working hard to achieve.
In support of this, Xilinx has partnered with system design specialists, including MathWorks and National Instruments, to accelerate system level design.
MathWorks recently released a guided workflow for Zynq-7000 SoCs in its R2013b release, which enables design engineers to create and model their algorithms in MATLAB and Simulink, partition their designs between hardware and software, and automatically target, integrate, debug and test those models on Xilinx design platforms.
Similarly, engineers can abstract away the complexity of traditional RTL design using LabVIEW in conjunction with National Instruments' RIO hardware platform, which uses Zynq SoCs.
A programmable platform such as the Zynq-7000 SoC, supported by leading edge tools, dedicated design suites and open source solutions like the OpenCV Library, and backed by the efforts of the Embedded Vision Alliance, will help ensure developers have access to best in class resources with which to achieve their design goals.
As an Embedded Vision Alliance Platinum Member, Xilinx has already synthesised 30 of the most widely used embedded vision algorithms from the OpenCV library for its Zynq SoC platform using the Vivado HLS tool within the Vivado Design Suite, allowing developers to make processor/logic design trade-offs at a system level.
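For a flavour of what such a trade-off looks like in practice, the fragment below is an illustrative sketch only, not one of the functions Xilinx has synthesised: a simple binary-threshold kernel written in synthesisable C++ with a Vivado HLS pipeline directive, so the same routine can run in software on the ARM cores or be moved into the fpga fabric if more throughput is needed. The image dimensions and function name are assumptions made for the example.

```cpp
// Illustrative HLS-style threshold kernel; names and sizes are assumptions,
// and this is not one of the OpenCV functions Xilinx has synthesised.
#include <cstdint>

#define WIDTH  640
#define HEIGHT 480

// Turns a greyscale image into a binary mask; written so Vivado HLS can
// pipeline the loop to roughly one pixel per clock when mapped to fpga logic.
void threshold_kernel(const uint8_t in[HEIGHT * WIDTH],
                      uint8_t out[HEIGHT * WIDTH],
                      uint8_t level)
{
    for (int i = 0; i < HEIGHT * WIDTH; i++) {
#pragma HLS PIPELINE II=1
        out[i] = (in[i] > level) ? 255 : 0;
    }
}
```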
This level of performance and flexibility will be key to the evolution of smarter vision systems, providing OEMs with the ideal platform on which to build smarter solutions.
Giles Peckham is director of marketing, EMEA, for Xilinx.