For robots to be better drivers, they must be more human
IBM patents system to help self-driving cars learn from insights and instincts of people
It seems appropriate that, as I write these words, this week marks Leon Kowalski’s birthday. Who is Leon Kowalski? Actually, the apposite question is who was Leon Kowalski?
Despite having a date of birth of April 10th, 2017, Leon never actually existed. He was the product of the febrile imaginations of screenwriters Hampton Fancher and David Peoples, and director Ridley Scott. Leon is a character in the epochal 1982 sci-fi film Blade Runner, played with unblinking detachment by actor Brion James. He’s a replicant – an android so life-like that his fictional creators boast he is “more human than human”. Unfortunately, he’s also psychotically murderous and so meets a sticky, early end.
Why is it appropriate? Because to become better at doing the things we need them to do, or desire them to do, computers are going to have to become more like us. More human. That is the very subject of a patent filed this week by IBM, which wants to create a self-driving car system which can learn, not only from its own mistakes and successes, but also by observing what humans do and learning from them.
IBM says that it is working to “understand and model human behaviour, such as reaction times and next likely action based on past observations. As computational neuroscientists, we draw on our understanding of biological cognition and behaviour generation in the brain. With that background, these cognitive computing models can help human drivers and autonomous vehicles better share the road.”
The computing giant claims that turning a computer into a better driver involves some very complex systems, and also means teaching it to communicate with other, human, drivers. For instance, anyone who tows a caravan knows that it’s just good manners to pull over occasionally and let faster traffic pass. How do you teach a computer good manners? You show it what humans do and it learns from us how to be better than us.
This, says Wendy Belluomini, head of the IBM Research Lab in Ireland, is why it’s so important to let computers learn from humans “in the wild” rather than trying to pre-programme everything in a lab beforehand. “For any machine-learning system, more data in real-life situations is best. So ideally data would be gathered over a large population of cars and drivers on many different road situations,” Belluomini told The Irish Times.
“For example, older drivers may have trouble at night. But there may be some less obvious things that could be learned by looking at a large population of drivers in a large set of road situations. Maybe, and this is just making something up, male drivers under 35 have more accidents when it is raining. The implication would be that it may be safer to hand back control of the car to a female driver when it is raining, but if the driver is a male under 35 the automated system should continue. This learning would occur over time with a population of drivers.”
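Belluomini’s made-up example amounts to a learned handover rule: given what the system knows about the driver and the conditions, should it return control? As a purely illustrative sketch (the factors, thresholds and function name here are invented for this article, not taken from IBM’s patent), that kind of rule might look like this:

```python
# Hypothetical sketch of a learned handover rule. All factors and
# thresholds below are invented for illustration, echoing the made-up
# example in the interview -- they are not from IBM's patent.

def should_hand_back_control(driver_age, driver_sex, is_raining, is_night):
    """Return True if the human is judged the safer driver right now."""
    # Invented rule from the article's made-up example: male drivers
    # under 35 have more accidents in the rain, so keep automation on.
    if is_raining and driver_sex == "male" and driver_age < 35:
        return False
    # Invented rule: older drivers may have trouble at night.
    if is_night and driver_age >= 70:
        return False
    return True

print(should_hand_back_control(30, "male", is_raining=True, is_night=False))    # False
print(should_hand_back_control(40, "female", is_raining=True, is_night=False))  # True
```

The point is that no engineer writes these rules by hand; the system would derive them from observing a large population of drivers over time.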
But does this mean that robot cars will learn our bad habits too? After all, Blade Runner’s Tyrell Corporation didn’t design its robots to be killers; they learned that from humans.
Thankfully, Belluomini doesn’t think so: “I think the best way to do it in this kind of system is based on what actually results in an accident, since avoiding accidents is the goal. This gives an unambiguous good/bad result to train on. You could try to get into more subtle things about driver behaviour but as you said, it would be difficult. With a large enough population of vehicles you should get enough accident data to train on.”
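The training signal Belluomini describes is easy to picture: every logged drive gets an unambiguous label, accident or no accident, and the fleet pools those labels to estimate risk per road situation. A minimal, purely illustrative sketch (the data and situation labels below are invented, not real fleet data):

```python
# Illustrative sketch of training on an unambiguous accident/no-accident
# signal, as described in the interview. The log entries are invented.
from collections import defaultdict

# (situation, ended_in_accident) pairs pooled from a fleet of cars
fleet_log = [
    ("rain", True), ("rain", False), ("rain", True), ("rain", False),
    ("dry", False), ("dry", False), ("dry", True), ("dry", False),
    ("night", True), ("night", False), ("night", False), ("night", False),
]

def accident_rates(log):
    """Estimate the accident rate for each road situation."""
    totals, accidents = defaultdict(int), defaultdict(int)
    for situation, crashed in log:
        totals[situation] += 1
        accidents[situation] += crashed  # True counts as 1, False as 0
    return {s: accidents[s] / totals[s] for s in totals}

print(accident_rates(fleet_log))  # {'rain': 0.5, 'dry': 0.25, 'night': 0.25}
```

A real system would feed far richer signals into a learning model, but the principle is the same: accidents give a clean good/bad outcome to train against, and more cars on the road mean more data.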
The more computer-controlled cars there are on the road, the faster their driving skills improve: the IBM patent includes car-to-car communication not just so that one car can warn another of an obstacle ahead, but also so that they can compare notes on how to deal with that obstacle and other similar situations.
Instant communication between vehicles is also crucial for systems such as road-trains, where groups of cars run at high speeds in impossibly close proximity to each other, so close that a human’s reaction speed would be far too slow to prevent an accident. A computer can keep up, though, and so a long chain of cars, knowing that they are heading for the same destination, can group together, saving fuel through the improved aerodynamics of a group of cars running close together, and saving space on the roads too.
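A rough back-of-the-envelope calculation shows why human reaction times rule platooning out. Assuming a motorway speed of 120km/h, a typical human reaction time of about 1.5 seconds, and a vehicle-to-vehicle link responding in about 50 milliseconds (all illustrative figures, not from the IBM patent):

```python
# Back-of-the-envelope sketch of platooning headway. The speed and
# reaction-time figures are illustrative assumptions only.

def reaction_distance(speed_kmh, reaction_s):
    """Distance in metres travelled before braking even begins."""
    return speed_kmh / 3.6 * reaction_s  # convert km/h to m/s

speed = 120  # motorway speed, km/h
human = reaction_distance(speed, 1.5)     # typical human reaction time
machine = reaction_distance(speed, 0.05)  # fast vehicle-to-vehicle link

print(f"human:   {human:.1f} m")    # 50.0 m
print(f"machine: {machine:.1f} m")  # 1.7 m
```

At a human’s reaction time the car covers roughly 50 metres before braking even begins; over a fast data link it covers less than two, which is why a computer can safely hold gaps no human driver could.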
How long before the fleets of self-learning, self-driving cars are on our roads? Blade Runner famously predicted that 2019 would be the year a moody Harrison Ford would be chasing runaway robots around a rainy Los Angeles, but the prediction for robotic cars is running, oh, a good two years behind that. “It’s hard to say,” says Belluomini. “Every car maker has their own predictions, but some time between 2021 and 2025 is the current thinking. I think it will be longer, both to get to the level of reliability needed and to clear regulations.” Plus, we have to train up Harrison Ford to be able to chase down these robot cars when they go rogue . . .