Crashes not the fault of the driverless car, says Google – it’s other drivers

The human factor is proving the trickiest obstacle for makers of self-driving cars


Google, a leader in efforts to create driverless cars, has run into an odd safety conundrum: humans.

Last month, as one of Google’s self-driving cars approached a crosswalk, it did what it was supposed to do when it slowed to allow a pedestrian to cross, prompting its “safety driver” to apply the brakes. The pedestrian was fine, but not so much Google’s car, which was hit from behind by a human-driven sedan.

Google’s fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules.

One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward looking for the advantage, in the process paralysing Google’s robot.

It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing the makers of automated cars is blending them into a world where humans don’t live by the book.

"The real problem is that the cars are too safe," said Donald Norman, director of the Design Lab at the University of California, San Diego, who studies autonomous vehicles. "They have to learn to be aggressive in the right amount, and the right amount depends on the culture."

Traffic wrecks and deaths could well plummet in a world without any drivers, as some researchers predict. But wide use of self-driving cars is still many years away, and testers are still sorting out hypothetical risks – such as hackers – and real-world challenges, such as what happens when an autonomous car breaks down on the highway.

For now there is the more immediate problem of blending robots and humans. Already cars from several auto makers have technology that can warn or even take over for a driver, whether through advanced cruise control or brakes that apply themselves. Uber is working on self-driving car technology, and Google expanded its tests in July to Austin, Texas.

Google cars make regular use of quick evasive manoeuvres and exercise caution in ways that are commendably careful but also out of step with other vehicles on the road.

"[A Google car] is always going to follow the rules; I mean almost to a point where human drivers get in the car and are like, 'Why is the car doing that?'," said Tom Supple, a Google safety driver during a recent test drive on the streets near Google's Silicon Valley headquarters.

Fender-benders

Since 2009, Google cars have been in 16 crashes, mostly fender-benders, and in every single case, the company says, a human was at fault. This includes the rear-ender crash on August 20th, described above, that was recently reported by Google.

The Google car slowed for a pedestrian, then the Google employee manually applied the brakes. The car was hit from behind, and the employee ended up in the emergency room with mild whiplash.

Google’s report on the incident added another twist: while the safety driver did the right thing by applying the brakes, if the autonomous car had been left alone, it might have braked less hard and travelled closer to the crosswalk, giving the car behind a little more room to stop. Would that have prevented the collision? Google says it’s impossible to know.

There was a single case in which Google said the company was responsible for a crash. It happened in August 2011, when a Google car collided with another moving vehicle. But, remarkably, the Google car was being piloted at the time by an employee. Another human at fault.

Humans and machines, it seems, can be an imperfect mix. Take lane-departure technology, which uses a beep or steering-wheel vibration to warn a driver if the car drifts into another lane. A 2012 insurance industry study surprised researchers when it found that cars with these systems experienced a slightly higher crash rate than cars without them.

Bill Windsor, a safety expert with Nationwide Insurance, said that drivers irritated by the beep sometimes simply turn the system off, which illustrates the gulf between the way humans actually behave and the way the cars interpret that behaviour. The car beeps when the driver drifts into another lane, but in reality the driver often intends to change lanes and has simply not signalled; irked by the beep, the driver shuts the technology off.

Windsor recently experienced first hand one of the challenges arising from the clash between sophisticated car technology and human behaviour. He was on a road trip in his new Volvo, which comes equipped with “adaptive cruise control”. The technology causes the car to adapt its speeds automatically when traffic conditions warrant.

But the technology, like Google’s car, drives by the book. It leaves what is considered a safe distance between itself and the car ahead. That also happens to be enough space for a car in an adjoining lane to squeeze into, and, Windsor said, other drivers often tried.

Dmitri Dolgov, head of software for Google's Self-Driving Car Project, said that one thing he had learned from the project was that human drivers needed to be "less idiotic".

On a recent outing with New York Times journalists, the Google driverless car made two evasive manoeuvres that simultaneously displayed how the car erred on the cautious side but also how jarring that experience could be.

In one manoeuvre, it swerved sharply in a residential neighbourhood to avoid a car that was parked so awkwardly that the Google sensors couldn’t tell whether it might pull into traffic.

More jarring for human passengers was a manoeuvre the Google car undertook as it approached a red light in moderate traffic. The laser system mounted on top of the driverless car sensed that a vehicle coming from the other direction was approaching the red light at a higher-than-safe speed. The Google car immediately jerked to the right in case it had to avoid a collision. But the oncoming driver was just doing what humans so often do: not approaching a red light cautiously enough (though the driver did stop well in time).

Courtney Hohne, a spokeswoman for the Google project, said current testing was focusing on "smoothing out" the relationship between the car's software and humans. For example, at four-way stops, the program lets the car inch forward, as the rest of us might, asserting its turn while looking for signs that it is being allowed to go.

Eye contact

The way humans often deal with these situations is that “they make eye contact. On the fly, they make agreements about who has the right of way”, said John Lee, a professor of industrial and systems engineering and an expert in driver safety and automation at the University of Wisconsin. But “where are the eyes in an autonomous vehicle?” he asked.

But Donald Norman, from the Design Lab in San Diego, after years of urging caution on driverless cars, now backs quick adoption because, he says, motorists are increasingly distracted by cellphones and other in-car technology.

Witness, for example, the behaviour of Sena Zorlu, a co-founder of a Sunnyvale, California, analytics company, who recently saw a Google self-driving car at a red light in Mountain View. She could not resist the temptation to grab her phone and take a picture.

"I don't usually play with my phone while I'm driving. But it was right next to me, so I had to seize that opportunity," said Zorlu, who posted the picture to her Instagram feed.

© 2015 New York Times News Service