Those worried about autonomous vehicles have always pointed to the safety aspect, asking just how safe they can be. And that’s the crux of the matter. While the trials undertaken by a range of companies have, in the main, been without incident, there have been accidents. The Tempe accident, however, is the first fatal one.
Human drivers have a known level of fallibility, and this is accepted by almost everyone. Autonomous vehicles, by contrast, are expected to be perfect. The question is whether they can be.
At the top level – level 5 – autonomous vehicles will, according to the SAE, be expected to undertake all aspects of driving under all road and environmental conditions and to do so without the need for human intervention. And yet we know that technology is fallible.
It all comes down to serious amounts of computing power and the ability of software to take the various inputs – video, radar, sensors and so on – and make a decision.
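To make the idea concrete, here is a minimal, purely illustrative sketch of that fusion step: several sensor inputs are weighed together and reduced to a single driving decision. Every name, threshold and decision rule below is hypothetical, not any vendor's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str            # e.g. "camera", "radar", "lidar" (hypothetical labels)
    obstacle_detected: bool
    distance_m: float      # estimated distance to nearest obstacle, in metres
    confidence: float      # detector's self-reported confidence, 0.0 .. 1.0

def decide(readings: list[SensorReading], speed_mps: float) -> str:
    """Fuse readings into one of 'continue', 'slow' or 'brake'.

    Hypothetical rule: ignore low-confidence detections, then compare the
    nearest credible obstacle against a rough stopping distance.
    """
    # Rough stopping distance: 1 s reaction time plus braking at ~5 m/s^2.
    stopping_distance = speed_mps * 1.0 + (speed_mps ** 2) / (2 * 5.0)

    credible = [r for r in readings if r.obstacle_detected and r.confidence >= 0.5]
    if not credible:
        return "continue"

    nearest = min(r.distance_m for r in credible)
    if nearest <= stopping_distance:
        return "brake"
    if nearest <= 2 * stopping_distance:
        return "slow"
    return "continue"
```

Even this toy version shows where the difficulty lies: the thresholds, the confidence cut-off and the handling of conflicting sensors are all judgement calls baked into software, and any one of them can be wrong.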
This was a topic of discussion at the Electronics Design Show Conference a couple of years ago. One contributor asked those in the room who had never written buggy software to raise their hands. Not a single hand went up.
Will the accident delay the development of autonomous vehicles? While it might in the short term, it’s unlikely to stop them in the long run.
The Tempe accident invites parallels with the introduction of the first motor vehicles. Then, drivers were required to have someone walking ahead of them, waving a red flag. Today, autonomous vehicles have so-called ‘safety drivers’ sitting in them during trials. One of the questions that US investigators will no doubt ask is what role the safety driver was playing. Did it involve the latter-day equivalent of ‘waving a red flag’?