From self-driving crashes to national power cuts: the future of cyber attacks

Hackers have an array of tools for attacking individuals, companies and countries

The future is heading towards us pell-mell, bringing new opportunities for hackers to exploit. Based on what we see today, we can even predict where those opportunities will lie. Hackers already have a multiplicity of weapons at their disposal, and as one starts to enumerate the forms an attack might take, it becomes clear how much more complex the world has become since 1986, when the first PC viruses appeared.

So what might the hacks of two or three decades hence look like? Let’s look at a few examples.

Scenario one: Speaking in tongues

You’re at home, your front door fitted with a ‘smart lock’ which you can control with an app on your smartphone: it even works with voice commands, so all you have to say is “Unlock the door” and it does so. As you browse on your laptop, you’re shown a targeted ad on a social network, with some puppies running around. You like puppies. You turn up the volume to hear it better – and hear your front door unlock.

How? The advert, targeted specifically at you, included a "Dolphin attack": voice instructions encoded at an ultrasonic frequency that you can't hear, but which your phone detects perfectly. A team from China's Zhejiang University showed that such ultrasonic 'speech' instructions could be used to dial phone numbers, open websites and generally make smart devices obey spoken commands, without their owners realising any command has been given.
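
To make the mechanics concrete: the attack works by amplitude-modulating an ordinary voice command onto an ultrasonic carrier. The carrier itself is inaudible, but the nonlinearity of a phone's microphone hardware demodulates the envelope back into the audible band, where the voice assistant picks it up. The sketch below, in Python with illustrative parameter values, is a simplified rendering of the signal-processing idea, not the researchers' actual code.

```python
import numpy as np

SAMPLE_RATE = 192_000  # Hz; high enough to represent an ultrasonic carrier

def ultrasonic_modulate(voice: np.ndarray, carrier_hz: float = 30_000.0) -> np.ndarray:
    """Amplitude-modulate a voice-command waveform onto an ultrasonic carrier.

    Humans can't hear the carrier, but a microphone's nonlinear response
    demodulates the envelope back into the audible band, where the phone's
    voice assistant interprets it as a spoken command.
    """
    t = np.arange(len(voice)) / SAMPLE_RATE
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    envelope = voice / np.max(np.abs(voice))   # normalise the command to [-1, 1]
    return 0.5 * (1.0 + envelope) * carrier    # classic AM: (1 + m(t)) * c(t)

# Placeholder 'command': one second of a 300 Hz tone standing in for speech
voice = np.sin(2 * np.pi * 300 * np.arange(SAMPLE_RATE) / SAMPLE_RATE)
inaudible_command = ultrasonic_modulate(voice)
```

Playing the result back requires a speaker that can reproduce frequencies well above 20kHz – one practical limit on how far the attack can reach.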


A slightly modified version of this scenario – using a different approach – runs thus. You’re at home with your phone on your table, and an advert comes on the radio; it seems to be for a horror film, with distorted voices. Silently, your phone activates and opens a website containing a zero-day exploit that subverts it.

In this case, the hack is enabled by the voice assistants in Apple or Android phones, which are able to distinguish commands when humans hear only distortion. It's the sort of subversion that could be useful to a government targeting specific people, or just to hackers looking to cause trouble.

A less harmful version was used by Burger King in April 2017 in a 15-second television advert in the United States: a character in the ad leaned into the camera and said, "Okay Google, what is the Whopper burger?" The "Okay Google" phrase triggered Google Home devices and Android phones in millions of American homes, which promptly read their owners Google's top factual result – the Wikipedia entry. Google tweaked its systems to block the request from that ad. But Burger King re-recorded it with different voices, and once more Google devices responded. It was a real-time evolution of a hacking technique that has plenty of room to grow.

Scenario two: Driven to distraction

You’re a passenger in a new self-driving car, which is approaching a stop sign at a busy intersection. Ahead, cars are zooming across the intersection at right angles to you. The car, following the rules of the road, will have to stop and wait its turn. You’re expecting it to do so, as it has done at every other junction today and previously.

The car carries on through the stop sign, and you’re seriously injured in the resulting crash. Why didn’t the car stop? Because someone messed with the stop sign by putting a couple of stickers on it. They look unremarkable to human eyes, but to the machine learning system deciding the car’s movements, they changed the sign so radically that it no longer recognised a stop sign at all.

A team of researchers from the universities of California, Washington, Michigan and Stony Brook demonstrated that a few carefully placed stickers could get a neural network which normally recognised stop signs to misclassify one as an advisory speed limit sign instead.

Machine learning systems, often also called artificial intelligence systems, use neural networks – which mimic the behaviour of neurons in the brain – to process inputs and generate outputs. The systems are entirely reliant on the data they’re trained on – so a machine learning system that goes wrong, or gets hacked, is going to be a deep puzzle to fix.
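
The stop-sign stickers are one instance of a broader technique known as adversarial examples. The researchers' physical sticker attack is more involved, but the core idea can be sketched with the textbook fast gradient sign method: nudge every pixel a tiny amount in whichever direction most increases the classifier's error. Here is a minimal PyTorch sketch, assuming a generic image classifier; it illustrates the general technique, not the stop-sign team's algorithm.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module, image: torch.Tensor,
                 true_label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Fast gradient sign method: shift each pixel by +/- epsilon in the
    direction that most increases the loss. The change is imperceptible to
    people, but can flip the network's classification."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)  # how wrong is the model?
    loss.backward()                                   # gradient of loss w.r.t. pixels
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()       # keep valid pixel values
```

What made the sticker attack so unsettling is the extra constraint the researchers satisfied: their perturbation had to survive printing, weather, distance and viewing angle, not just sit in a carefully prepared digital image.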

Scenario three: Not so smart now

You’re in a city which has introduced ‘smart meters’ for electricity monitoring. They are internet-connected devices which monitor power usage inside the home, reporting back to the power utility as often as every half hour. For the utilities, installing them means they don’t have to send expensive humans to remote locations to read meters every six months. It also lets them offer variable tariffs, and switch customers who default on their payments to prepayment. In extremis, the meters can support rolling power cuts if there is a serious supply shortage.

For security, the meters communicate with the utilities using cryptographically signed messages: each end can check the other is who it says it is by verifying the digital signature on every command and response against the sender’s public key.
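
A minimal sketch of that sign-and-verify pattern, using Python's cryptography library with Ed25519 keys – the key scheme and command format here are illustrative assumptions, not what any particular meter standard specifies:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The utility signs each command with its private key; the meter holds only
# the matching public key and refuses anything that fails verification.
utility_key = Ed25519PrivateKey.generate()
meter_trusted_key = utility_key.public_key()

command = b"DISCONNECT meter=12345 at=2030-01-01T18:00Z"  # illustrative format
signature = utility_key.sign(command)

try:
    meter_trusted_key.verify(signature, command)
    print("signature valid: execute command")
except InvalidSignature:
    print("signature invalid: reject command")
```

The scheme is only as strong as the key handling around it: an attacker who steals or substitutes the signing key can issue 'valid' disconnect commands to every meter at once – one route to the blackout described next.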

It’s late in the evening, in winter. Suddenly, the power goes off. You look out of the window of your high-rise flat: the entire city is dark, including the streetlights. Yet there’s nothing obvious going on, and there were no warnings on the news. Your phone doesn’t have any service either, because the cell towers rely on mains electricity – diesel backups aren’t used in cities. It’s starting to get cold.

Prof Ross Anderson at Cambridge University is certain that smart meters amount to "a significant new cyber-vulnerability". In a 2010 paper written with his colleague Shailendra Fuloria, he argues that any cyber attacker targeting a country would aim to knock out the electricity supply – "the cyber equivalent of a nuclear strike". In a sense, it's the opposite of a neutron bomb, which kills people but leaves buildings standing; a smart meter attack would disable the infrastructure, but (mostly) leave people unhurt.

Such a tactic would hardly be the choice of an amateur hacker. But a nation state might choose to do it as an alternative to a direct attack, with the advantage that cyberattacks don’t leave missile trails back to their source.

Of course, disrupting a country through its smart meters is just an idea in an academic paper at the moment. But so too, once, was cryptographic ransomware with anonymous payment. Future nation-state attacks are likely to seek out connected infrastructure: far easier to help your opponent’s systems fail than to attack them directly. The cyber wars of the future might see a country surrender without a shot being fired; the conquered country might not even be sure who to surrender to.

Protect and survive

When I spoke to ex-hackers and security experts whose job is to protect against their successors, the message was consistent. You can’t stop people trying to break in; you should probably accept that some of them will succeed. The answer is to look at what happens once someone has broken into your system: how safe is it then? Can you control where an intruder goes? Can you watch as they try to traverse the network? Can you shut down their access – which may, in effect, mean shutting down access for your own users too?

Is there any good news? It may be too soon to say. We know that we can’t make every system secure, and that hackers’ curiosity will never abate. But car crashes were far more lethal before regulation forced manufacturers to include safety features. Software is still a comparatively young industry, one where it’s easier to gain plaudits for making something than for making it secure.

Perhaps the data breaches we see in modern hacking are like the ozone hole: something which can be fixed by collective effort. The worry is that they are actually more like climate change – and that everyone will wait for others to take decisive action, while making only their own small positive contribution.

This is an edited extract from Cyber Wars by Charles Arthur, reproduced with permission from Kogan Page (£14.99, koganpage.com)