Could one accident kill the robot car?

Like most technology, it must overcome its first inevitable failure

A Ford Transit van, driven by a robotic autonomous device at the automaker’s proving grounds in Michigan.


Picturing the scene is almost disturbingly easy. It will be all over the rolling 24-hour news channels when the day, inevitably, comes. The scrolling news ticker at the bottom of the screen will carry the sensationalist words ‘robot car triggers fatal accident’ while a serious-faced correspondent gives us the upsetting details as emergency crews scurry in the background.

There is little or no doubt that this will eventually happen. Self-driving, robotically controlled cars are coming to our roads, and their arrival is inexorable. Everyone, from Audi to Ford, from Google to the US Army is working on them and the technology is marching ahead with all of the speed we have become used to in the digital age.

Self-driving cars have made the species leap from the wandering, bumbling robots that failed, almost comically, to complete the US Defense Advanced Research Projects Agency (DARPA) challenges a decade ago to the point where a Google driverless Toyota Prius recently drove its occupant, a man suffering from total sight loss, to Taco Bell for a burrito.

The problem is that, for all the whizz-bang impressiveness of the technology, technology is only as perfect and as flawed as the person who programmed it and the factory it was built in. Which means that inevitably, at some stage, a computer-controlled car will make a mistake or suffer a software glitch and have a crash. It has happened already, when a Google car ventured on to a road that wasn’t fully mapped on its GPS system and got confused. The only damage done that day was to the car’s panels, but given the nature of car accidents, that luck cannot last once driverless cars become more widespread on our roads.

When the fatal day arrives, what will be our reaction? We live in an age where public opinion seems to act on a hair-trigger, especially when it comes to risk and safety. When a self-driving car gets tangled in a fatal collision, the headline will not read “Inevitable statistical likelihood occurs.” It will be “Killer robot car wipes out family.” And there is just the possibility that such an outcry could delay or even derail the self-driving car project, even though the benefits to safety and the environment from turning over control to the computers are almost incalculable. Indeed Google’s Sebastian Thrun said in 2010 that “more than 1.2-million lives are lost every year in road traffic accidents. We believe our technology has the potential to cut that number, perhaps by as much as half.”

The problem is that we, as humans and especially as consumers of media, don’t see the big statistical number. If self-driving cars can save 600,000 lives worldwide, that’s brilliant. But if just one robot car causes an accident that kills a small child, especially in this country (tragedies are always worse the closer to home they occur) then they will be vilified as evil; Terminators with four wheels.

Mary Cummings (herself a former US Air Force fighter pilot) and Jason Ryan are professors at the Massachusetts Institute of Technology (MIT) and, according to their recent position paper “the idea of a machine killing a human, even accidentally, will likely not resonate with the general public. Indeed, there has been recent intense media and public campaigns against autonomous weaponized military robots. These issues will likely also be raised as significant concerns once driverless, and especially driver assisted, technology is either responsible for a fatality or a serious accident that receives intense media attention.

“Furthermore, the chain of legal responsibility for driverless or driver assistance technologies is not clear as well as what basic form of licensing should be required for operation. Manufacturers and regulatory agencies of driverless technologies bear the responsibility of not only considering the technological ramifications of this technology, but also the socio-technical aspects, which at this point, has not been satisfactorily addressed.”

Jon Bentley, a former producer of the Top Gear television programme and now a presenter on The Gadget Show, believes, though, that those developing the technology will already be aware of these concerns and will be prepared for them.

“I think that it will certainly have to be thought through, but then I think that it’s already being thought through because the advantages of a self driving car will exceed the disadvantages. It’s not as if the way we drive now is especially safe. I just think that somehow people will have anticipated this problem before it arises. All of these cars so far still have a driver on board who is supposed to be able to take over in the event of it malfunctioning, and I think until it is all thought through and resolved that will be the case.

“It’s not as though things like aeroplanes don’t suffer from manufacturing malfunctions and so on, so every product you buy already has the potential to kill, does it not? I mean, people get strangled by their duvets, so as this sort of problem is dealt with by other products, so it will be dealt with by self-driving cars.”

The aircraft comparison is the one most often drawn on when discussing the possibility of driverless cars. After all, aeroplanes have been flying on autopilot since the 1930s, and advances such as the Instrument Landing System (ILS) can even put a plane smoothly onto the tarmac without a human so much as touching the controls. So why shouldn’t similar systems be able to cope with the weather, bumpy roads and traffic lights that we encounter on the roads?

It’s a fair question, but Cummings and Ryan note that autopilots are not infallible either, and in fact contributed to some well-known aviation disasters, including the loss of Air France 447 off the coast of South America. Indeed, the US Federal Aviation Administration (FAA) this year released an advisory notice asking pilots to spend more time in positive control of their aircraft and rely less on autopilot, as they were effectively becoming de-skilled by letting the computer do all the work. There are similar concerns for drivers: as more trust is placed in the automated systems, drivers could become more and more distant from the process of driving. When that happens, any mistake made by the computer will be compounded by a driver who is distracted, inattentive and frankly unfit to be taking emergency control.

“At this point in time, we are in a very tenuous period where we are attempting to transition new and unproven technologies into a complex sociotechnical system with significant variation in human ability,” say Cummings and Ryan. “In addition, public perception is fast becoming a major obstacle but is surmountable. To this end, great care should be taken in the experimentation with and implementation of driverless technology as an ill-timed serious accident could have unanticipated public backlash, which could affect other robotic industries as well.”

It’s worth remembering, after all, that much though aircraft may already rely on automation to fly (and land) safely, there is always a highly trained, highly paid human with hundreds of hours of highly regulated experience waiting to take control. The same cannot be said for the vast majority of cars, whoever or whatever is holding the wheel.