The future is often further away than it seems. Opening an International Foundation of Robotics Research (IFRR) colloquium on the road to self-driving cars last year, Henrik Christensen, professor of computer science at the University of California at San Diego, pointed to the work of Ernst Dickmanns, who pioneered the concept of the self-driving car in the late 1980s, culminating in a demonstration close to Charles de Gaulle airport in 1994.
A victim of successive AI winters, the work fell by the wayside in European R&D circles in the mid-1990s, and it took another decade before self-driving cars once again seemed possible.
The US then became the new home of autonomous driving thanks to DARPA's hosting of two Grand Challenge competitions, though the first of those to be won was claimed by a team led by German-born engineer Sebastian Thrun, who now heads a small-aircraft maker.
The question now is how many more decades it might take to achieve practical autonomy, where the driver is no longer responsible for what happens in a moving vehicle. Edwin Olson, professor of electrical engineering and computer science at the University of Michigan and CEO of May Mobility, wrote three years ago about what seems to be the equivalent of Moore’s Law for autonomous vehicles. In this case, the metric is “miles per disengagement”: how far a vehicle can go before a human has to intervene because the software or hardware has failed, even if only momentarily. He set the human performance level at 100 million miles. Though capability seems to be doubling every year, the lines do not intersect until well into the 2030s.
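As a rough sanity check on that timeline, here is a minimal back-of-envelope sketch. The starting figure of 10,000 miles per disengagement is an assumption, broadly in line with the better disengagement reports filed with the California DMV in the late 2010s; only the annual doubling and the 100 million mile target come from Olson's argument.

```python
import math

# Assumed starting point: ~10,000 miles per disengagement around 2019
# (an illustrative figure, not one quoted by Olson).
start_year = 2019
start_miles = 10_000
human_level = 100_000_000   # Olson's human-performance benchmark

# With capability doubling every year, count the doublings needed.
doublings = math.log2(human_level / start_miles)   # ~13.3
print(f"{doublings:.1f} doublings -> crossover around {start_year + math.ceil(doublings)}")
```

Run as written, this lands the crossover in the early-to-mid 2030s, consistent with Olson's projection.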
Vehicle design
The second question is how this influences vehicle design. Does the industry go through its own AI winter on the way there, leading manufacturers to drop their programmes, or is the scene now set for an evolutionary path in which gradual enhancements lead to a point where full autonomy is just one incremental change at the end of many others?
That second path is the one organisations such as Toyota are currently following, pursuing the twin projects Chauffeur and Guardian.
Speaking at the IFRR colloquium, Wolfram Burgard, vice president for automated driving technology at the Toyota Research Institute, described one example in which a vehicle automatically accelerates out of a potential three-car collision caused by two cars slightly behind it converging on the same lane.
“The Chauffeur and Guardian applications share a large amount of technology,” Burgard explained. “Prediction and planning remain largely the same. Though for Guardian you also have to understand the intent of the user. By working on Guardian we are able to accelerate the development of Chauffeur.”
Other manufacturers and market analysts seem to agree. Speaking during one of a series of technology forecasts last autumn, ABI Research principal analyst James Hodgson said: “Autonomous driving is an opportunity that is going to start with more of a whimper than a bang. Fortunately for automakers they have a second path to market: semi-autonomous.”
This, he argued, has led to the emergence of what has become known as Level 2+ autonomy: in effect, more advanced driver assistance in which the vehicle can take control but the human behind the wheel remains engaged at all times. That contrasts with Level 3, where the car might control the accelerator, steering and brakes along a stretch of road without the driver having to monitor it continuously. The inclusion of Level 2+ lets automakers “monetise some of their sizeable investments so far”, he added.
AImotive senior vice president Péter Kovács agrees, though he notes that for specific applications manufacturers are looking to higher levels of autonomy. “L2+ seems to be the sweet spot that manufacturers see as a realistic target. But there is a lot of buzz about L3 for traffic jam assistance and eventually highway driving. L4 is also a hot topic in terms of automated valet parking.”
Ambarella significantly increased the processing power of its latest generation of silicon to handle the space between L2 and L4. “The performance is 42 times higher than the current product,” claims Lazaar Louis, director of product management and marketing at the chipmaker.
Processing power
One big reason for the seemingly slow progress on autonomy – though not necessarily all that sluggish in the context of when work on autonomous vehicles started – is the sheer quantity of processing power that will be needed, which drove Olson’s prediction.
The crossover point comes once throughput surpasses 1 trillion operations per second, delivered at much greater levels of electrical efficiency than is possible today.
A further demand on peak processing comes from the need for redundancy in implementations where the driver is not actively driving for long periods. Kovács says AImotive expects to use dual SoCs for reasons of reliability as well as to handle more complex situations. “For L3 highway modes you have to put a lot of effort into seeing ahead far enough and you have to be able to deal with situations where it’s requesting a handover back to the driver, but the driver does not take over.”
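The failure mode Kovács describes, a takeover request that the driver never answers, is one reason the fallback logic has to be engineered explicitly. The sketch below is a hypothetical, heavily simplified state machine for that handover; the mode names and the 10-second deadline are invented for illustration.

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    HANDOVER_REQUESTED = auto()
    DRIVER_IN_CONTROL = auto()
    MINIMAL_RISK = auto()   # e.g. slow to a controlled stop

HANDOVER_DEADLINE_S = 10.0  # assumed time budget for the driver to respond

def step(mode: Mode, driver_hands_on: bool, elapsed_s: float) -> Mode:
    """One tick of a simplified L3 handover state machine."""
    if mode is Mode.HANDOVER_REQUESTED:
        if driver_hands_on:
            return Mode.DRIVER_IN_CONTROL
        if elapsed_s > HANDOVER_DEADLINE_S:
            return Mode.MINIMAL_RISK   # the driver never took over
    return mode
```

A second SoC earns its keep here: if the primary fails mid-handover, something still has to execute the minimal-risk manoeuvre.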
Another driving force for more processing power in the near future is the ability to run more complex functions in the background.
“OEMs are implementing some features and then expect to bring others in over time, using over-the-air updates. They want to be able to run the next-generation software in shadow mode, so they plan in some headroom,” Louis says.
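Shadow mode follows a simple pattern: the candidate stack runs on the spare headroom, its outputs are logged and compared, but only the production stack actuates. A minimal sketch, with all function and object names hypothetical:

```python
def control_cycle(sensor_frame, production_stack, shadow_stack, logger):
    """Run both stacks on the same input; only the production output drives."""
    command = production_stack.plan(sensor_frame)
    shadow_command = shadow_stack.plan(sensor_frame)   # evaluated, never actuated
    if shadow_command != command:
        # Disagreements become labelled data for validating the new software.
        logger.record(sensor_frame, command, shadow_command)
    return command   # only this reaches the actuators
```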
Though there were questions early in the development of these systems as to how much of the expected processor performance would be dedicated to machine learning, the trend seems to be in favour of AI-heavy implementations.
“We are seeing a strong move to AI-based compute,” Louis notes. “Rules-based approaches have drawbacks in addressing corner cases.”
Kovács points out that building safety cases under the ISO 26262 standard is difficult if AI components are purely responsible for key decisions, so there remains a need for algorithmic modules to confirm manoeuvres. However, AImotive sees the same push for higher levels of machine learning. “When it comes to the quality of performance point of view, letting the machine do more and more is definitely the way,” he says.
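One way to read that division of labour: the learned planner proposes, and a deterministic module that a safety case can reason about confirms. The sketch below is a hypothetical illustration of that pattern; the thresholds are invented and far simpler than anything a real ISO 26262 argument would cover.

```python
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    min_gap_ahead_m: float     # smallest predicted gap to the car in front
    min_gap_behind_m: float    # smallest predicted gap to the car behind
    peak_lateral_accel: float  # m/s^2 over the planned trajectory

def confirm(m: Manoeuvre) -> bool:
    """Deterministic checks with explicit, auditable thresholds."""
    return (m.min_gap_ahead_m >= 20.0
            and m.min_gap_behind_m >= 10.0
            and abs(m.peak_lateral_accel) <= 3.0)

# proposal = learned_planner.propose(scene)   # ML suggests a lane change...
# if confirm(proposal): execute(proposal)     # ...rules sign it off
```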
As Dickmanns found in his early work, reducing the amount of computation spent processing sensor input is essential to keeping the workload manageable. As a result, there is a growing focus on ensuring the TOPS in the platform are not wasted, particularly on the most processing-hungry inputs: the cameras. A shift to higher resolutions will only increase the burden.
“If you want to process the whole image it’s practically linear with the number of pixels. With multiple 8-megapixel cameras you want to make your processing smarter than processing all the pixels, including the sky, with a single neural network,” Kovács says.
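The arithmetic behind that point is straightforward. Processing scales roughly linearly with pixel count, so cropping uninformative regions before inference saves proportionally; the 40 per cent croppable share below is an assumed figure for illustration.

```python
full_res = 3840 * 2160            # one 8-megapixel camera (~8.3 MP)
cameras = 6
croppable_fraction = 0.40         # assumed share of sky, bonnet, margins

pixels_all = full_res * cameras
pixels_roi = int(pixels_all * (1 - croppable_fraction))
print(f"all pixels: {pixels_all/1e6:.1f} MP per frame set")
print(f"ROI only:   {pixels_roi/1e6:.1f} MP ({croppable_fraction:.0%} saved)")
```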
Sensor modalities
Some efficiencies come from using a mixture of sensor modalities. Though Tesla has declared it favours camera-only setups, manufacturers seem to be consolidating around the combination of cameras and radar for L2+ systems, with Lidar added for higher levels of autonomy.
Louis argues that improved signal processing, combined with the better angular resolution of recent radar chips, can deliver Lidar-equivalent performance.
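Angular resolution matters because it sets how finely a radar can separate objects across the beam: cross-range resolution is roughly range multiplied by angular resolution in radians. The resolution figures below are assumptions chosen to illustrate the scale of the difference, not specifications of any particular part.

```python
import math

RANGE_M = 150   # evaluation distance

for label, deg in [("conventional radar", 4.0),
                   ("imaging radar", 1.0),
                   ("Lidar", 0.1)]:
    cross_range = RANGE_M * math.radians(deg)
    print(f"{label:18s}: ~{cross_range:.1f} m cross-range at {RANGE_M} m")
```

At those assumed figures, a conventional radar cannot tell two cars in adjacent lanes apart at motorway distances, which is the gap the newer chips are closing.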
For some, one advantage of adding Lidar is the extra degree of redundancy it provides, which could be vital for L3 and above; as a result, many teams continue to expect to use it alongside radar.
In principle, it is possible to devolve many recognition functions to the sensor modules themselves. However, teams have found that relying on individual sensors to build object lists can be counterproductive: the sensors can wind up providing contradictory information, with the system not knowing which is correct. Because of this, performing sensor fusion relatively early in the pipeline can make more sense.
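A toy example of the contradiction problem, with all numbers invented: two per-sensor object lists report the same car at different ranges, and a late-fusion stage has no principled way to pick one. Fusing the underlying measurements earlier, weighted by each sensor's known variance, produces a single estimate instead.

```python
def fuse(z1, var1, z2, var2):
    """Variance-weighted average of two measurements of one quantity."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * z1 + w2 * z2) / (w1 + w2)

camera_range_m = 42.0   # cameras estimate range poorly (assumed var ~25 m^2)
radar_range_m = 55.0    # radar ranges well (assumed var ~1 m^2)

# Late fusion: two contradictory object lists, no way to arbitrate.
print("camera says", camera_range_m, "m; radar says", radar_range_m, "m")

# Earlier fusion: one estimate, dominated by the more trustworthy sensor.
print("fused estimate:", round(fuse(camera_range_m, 25.0, radar_range_m, 1.0), 1), "m")
```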
On the way to full autonomy, there will doubtless be further swings in the balance between AI and algorithmic processing. For the moment, at least, carmakers appear to have a commercially viable route to taking over control from human drivers gradually, rather than risking a sudden change in how people use their vehicles.