IBM and UCD in drive to perfect intelligent in-car assistant

Risk-analysing navigation system could remove need for in-car computer screens

There's a minor irony here: as I sit into the passenger seat of the Toyota Prius, I'm fronting up to a screen the size of a decent household television. If this is a tablet, I'd hate to see the size of the glass of water needed to swallow it, and my driver for this outing, Giovanni, notices my surprise. “Yes, we decided with the screen to go big or go home,” he says as we nose the car out of the front gates of University College Dublin (UCD) and into traffic.

Why irony? After all, in-car screens are becoming ever more CinemaScope in size, so perhaps this vast screen, which obscures all of the left-hand side of the dashboard, is just a logical next step? Not so, because this is part of a joint research project between UCD and IBM to develop an artificially intelligent in-car companion which could at the very least reduce our reliance on in-car screens and has the potential to eliminate them altogether. With Apple currently having to answer multiple court cases over in-car distractions, this first spin in a prototype couldn't be more timely.

Wendy Belluomini, director of IBM's research lab in Ireland, told The Irish Times that the difference between this and other predictive navigation systems is that it learns and thinks about you, what you do and where you like to go. “Google Maps is pretty good at predicting how long it's going to take you to go from one place to another, once you've put in a destination. It's doing that based on population – it has all these phones that are sending data back to Google saying: ‘Well, this is how fast or slow I'm going right now.’

“But that doesn’t know anything about me, where I might tend to be going, or whether the fact that it’s slow at a particular point matters to me. So that’s the different part. So there’s the population part, which any app developer with enough users can collect, because it’s just not that hard. But then there’s the customisation and learning part, which is not easy.

“This is the vision part, for what we’re trying to do for driver assistance. We’ve spoken before about the fact that we’re not trying to do autonomous cars: we’re trying to create vehicles that are much more fun to be in; that help you with all the difficulties that are unpleasant; and that protect you from various dangers.”

Driver data

Martin Mevissen, also from IBM, describes how this new in-car assistant works. “What we're doing between UCD and IBM is researching and building a completely new in-car companion, a system that will interact with the driver, that basically takes into account contextual information from outside the car – from the environment, information about congestion, about local regulations, traffic restrictions, that sort of thing.

“But it also takes data from the driver in the car – like what is the driver doing, what is the history of the driver, what are the trips that the driver usually takes, at what time of the day, weekdays, weekend – and, based on all of that information, this companion will interact with the driver, identify the different risks that there might be in this particular situation for this particular person, and then try to direct the driver to mitigate these risks.

“So you have the situation where the car is bombarded with information, from the environment – weather conditions, traffic conditions – and the context of the driver, including their physical condition – are they elderly, are they healthy, is it a new driver?”

The data, says Robert Shorten, UCD professor of control engineering and decision science, “comes from a variety of sources, and it acts almost like a guardian angel”. “So you don't switch it on or off; it's always there, watching and trying to anticipate what you're doing, and if it senses risk then it will provide assistance. If it senses that you're going to make a wrong turn, it'll prompt you.

“So, basically, we have what we call a route-parsing engine which sits underneath the cognitive functions. So it has data about the car, about the driver and it uses that to build predictive models based on the context of the journey, what you’re trying to do.

“So you’re driving along the N11, for instance, it knows that you might be going to UCD, or into town or maybe to the RDS, and once it has a confident picture built up, based on previous data, it then starts to use that to filter the data coming into the car. So the car thinks you’re going to UCD, it starts to take into account what it knows about the location, about you, about things like is your eyesight good, is your hearing good.

“It’s always on, but it doesn’t interact until it perceives a risk to you, so it’s very much risk-driven, and that’s in a general notion of risk. It’s not just safety, but it’s also convenience. For instance, it can see that you’re in a one-way system, and it knows that if you go wrong, that’s a bigger hold up for you, so it’ll wake up and say: ‘Hey Bob, you’re in a one-way system, so would you like some help finding your way through it?’

“It could be something as simple as I’m driving home and it knows that all the schools along my route are closing at that time, so it knows there will be local congestion, so it might suggest a different route.”
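The internals of the IBM/UCD system haven't been published, but the behaviour Shorten describes – score likely destinations from your trip history and the roads driven so far, then speak up only when the prediction is confident and a known risk applies – can be sketched in a few lines. Everything below (the place names, the map data, the 0.6 threshold) is invented purely for illustration:

```python
# Toy illustration of a "route-parsing engine" with risk-driven prompting.
# All names, map data and thresholds are invented; the real IBM/UCD
# system is not public and will differ substantially.
from collections import Counter

# Trip history: destinations of past journeys at this time of day.
history = ["UCD", "UCD", "town", "UCD", "RDS", "UCD", "town"]

# Roads observed so far on today's journey, plus hypothetical map data
# listing which roads each known route passes through, in order.
route_so_far = ["Pembroke Park", "N11"]
routes = {
    "UCD":  ["Pembroke Park", "N11", "Stillorgan Road"],
    "town": ["Pembroke Park", "Baggot Street"],
    "RDS":  ["Pembroke Park", "Merrion Road"],
}

def predict(history, route_so_far, routes):
    """Return (destination, confidence) among destinations whose route
    is consistent with the roads driven so far, weighted by how often
    the driver has gone there before."""
    counts = Counter(history)
    consistent = {d: counts[d] for d, r in routes.items()
                  if r[:len(route_so_far)] == route_so_far}
    total = sum(consistent.values())
    if not total:
        return None, 0.0
    best = max(consistent, key=consistent.get)
    return best, consistent[best] / total

destination, confidence = predict(history, route_so_far, routes)

# Risk-driven interaction: stay silent unless the prediction is
# confident AND a relevant risk is known for that destination.
known_risks = {
    "UCD": "through-roads closed - use the Stillorgan Road entrance",
}
if confidence > 0.6 and destination in known_risks:
    print(f"Heading to {destination}? Heads up: {known_risks[destination]}")
```

With this toy data, only the UCD route matches the roads driven so far, so the assistant predicts UCD with full confidence and offers the entrance warning; had the history been ambiguous, or no risk been known, it would have stayed silent – the "always on, but only interacts on risk" behaviour described above.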

Computer learning

There are two fundamental differences between this and other augmented sat-nav systems coming on to the market that aim to offer predictive and learning functions. The first is that the system taps into IBM’s Watson cognitive-computing technology through its APIs. Remember Deep Blue, the computer that became the first to beat a reigning world chess champion? Watson is its grandchild, and if you consider the constantly improving nature of computing power, that should give you some idea of the fearsome processing grunt at this system’s disposal.

The second is that it is designed, potentially, to rid the car interior of all those bright, distracting screens. IBM is aware that driver distraction is now seen in the same light as drink and drugs, and that it won't be long before staring at a screen and tapping keys is outlawed. So the assistant is designed to be a little more Star Trek-like and converse with you in natural language, cutting through idioms, colloquialisms and accents to do so.

Its potential to improve in-car safety, as well as convenience, is manifest. “We speak to a lot of car-makers about partial autonomy, and what they’re mostly focusing on right now is to keep you from crashing into things,” says Belluomini.

“I’m thrilled that that is there, and it makes us all safer, but it’s a fairly low-level function. It’s ‘In the next 10 seconds or less I am going to hit a thing.’ It’s not solving the next level up, which is about not getting into that situation in the first place.

“So I see this as very complementary to all the safety things that the car industry is doing, because to bring the accident rate to zero, you can only get so far with a system that detects an imminent collision and says: ‘Stop.’ To get to the next level you’ve got to get to thinking: ‘Well, maybe I shouldn’t have been there in the first place.’

“People get off planes from the US and start driving on the wrong side of the road for them. So I can see quite a big appeal for systems such as this from rental-car companies, to give people a little help when they’re just off a 10-hour flight, driving on the other side of the road. They could use some assistance.

“We’re not solving any of the sensor problems here. We could tie into that more, but we’re trying to do what we can do without adding more inputs. That layer that’s already there is all about sensors, and real-time data and fast reactions. What we’re doing is more a contextual level of where should you be. So let’s not put you into a situation where you’re going to be in danger.”

The enormous screen mounted in front of my seat in the Prius was, then, not strictly necessary for the running of the system, but there to show how it was working and to show some of the parameters it was using to make its decisions.

Robotic voice

We swung down Pembroke Park to a notional address used by the prototype system (and where you’d most certainly need a job with a major US tech firm to be able to afford a house). From there, the in-car assistant began working out from the direction in which we drove where we were most likely going.

After a few seconds, it chimed in with a female voice, offering some helpful hints about our destination (advising us to use UCD’s Stillorgan Road entrance to avoid closed-off through-roads). It was a simple enough test, using only limited data and telling me something I already knew. It also spoke, rather disappointingly, in an obviously robotic voice, which spoiled the illusion of carrying on a conversation with the car and had me responding in a matching, clipped tone – which rather defeats the point.

The potential is clear, though. If a robotic in-car assistant can be developed to the point where you talk to it and it talks to you as if there is a recognisable human in the car with you, then one really could imagine a screen-free future for car interiors.

If it can be as useful and as predictive as UCD and IBM claim, then it could at least move navigation and infotainment systems beyond merely telling you where you’re going and playing your tunes. And it needn’t be expensive, nor should it need decades more of research and development.

“It’s not an insurmountable thing,” says Belluomini. “It’s not a prohibitive cost, because it’s essentially data analytics using existing sensors – you don’t need to put a $10,000 Lidar [laser] sensor on the roof. We’re in discussions with car-makers right now, but nothing I can really talk about. If someone said ‘Go’ today, well, the 2020 cycle of new cars is now closed, so it would likely have to be the next cycle; but if you were really determined to do it – throw out the production cycles – then it would be ready in two to three years.”