Stephen Hawking’s warning on the end of the human race is wrong
Dick Ahlstrom: ‘Creativity is the preserve of the human mind and not the motherboard of a computer’
Prof Stephen Hawking: “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” Photograph: John Stillwell/PA
‘This is the basic premise behind the Terminator film franchise.’ Above, Arnold Schwarzenegger with a robot during the “Terminator 3” photo call at the Cannes Film Festival 2003. Photograph: Scott Barbour/Getty Images
The eminent theoretical physicist Stephen Hawking raised interesting questions about the power of computer-based artificial intelligence (AI) to deliver good or harm, depending on how it is used. It was surprising, however, to hear him tell BBC interviewer Rory Cellan-Jones that the development of full AI “could spell the end of the human race”.
Hawking suffers from a form of motor neuron disease and so knows better than most the value that technology, and in particular AI-enhanced technology, brings to his life. The software that controls the computer he uses to write, and to communicate via his trademark robotic voice, uses machine learning to predict what he wants to say next, greatly speeding up his ability to string sentences into pages and pages into books.
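The kind of word prediction described above can be sketched, in a hugely simplified form, as a frequency model that learns which word most often follows each word. This is a hypothetical illustration of the idea, not the actual software Hawking uses.

```python
from collections import defaultdict, Counter

# Toy bigram model: count which word follows each word in training text,
# then predict the most frequent follower. Real predictive-text software
# is far more sophisticated; this only illustrates the principle.
class NextWordPredictor:
    def __init__(self):
        self.following = defaultdict(Counter)

    def learn(self, text):
        words = text.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.following[current][nxt] += 1

    def predict(self, word):
        counts = self.following[word.lower()]
        if not counts:
            return None  # never seen this word before
        return counts.most_common(1)[0][0]

predictor = NextWordPredictor()
predictor.learn("the universe in a nutshell and the universe in brief")
print(predictor.predict("universe"))  # prints "in"
```

The "intelligence" here is nothing more than bookkeeping: the program repeats whatever pattern occurred most often in what it has already seen.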
His physical affliction must also inform his views about the limitations presented by biological life. He comments on the risk that an advanced robotic system could “take off on its own, and re-design itself at an ever increasing rate”. Hawking suggests that AI would allow the computer to act on its own and outpace human thought. “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” he said.
This is the basic premise behind the Terminator film franchise. Humans devise and build Skynet, an autonomous missile defence system, but unfortunately include a bit too much AI. Skynet learns faster and faster until, at 2.14am on August 29th, 1997, it becomes “self-aware”. It immediately recognises humans as a direct threat and unleashes all the world’s nuclear weapons in an attempt to wipe out the human race.
I can’t say this film prompted Hawking’s somewhat pessimistic view of our ability to compete against machines, but I am fully satisfied he has got it wrong.
Computers can now easily beat most human chess players, carry out computations at blinding speed and anticipate what might interest us as we browse the internet. Don’t, however, ask the chess-playing machine to work like a calculator, or the preference software to move a pawn or a rook.
Computers are inherently stupid, and AI was developed as a way of making them seem somewhat less so. It is human cleverness that actually achieves the capabilities an observer of a chess-playing computer might interpret as “intelligence”.
In a way it is the pinnacle of human vanity to believe we could ever create a computer that could process information, assess options and act on decisions in a way anything like that of a human brain.
AI in use almost always includes a way for the computer program to “learn”, either from experience or from error. This leaves it with a limited intelligence – something along the lines of “the common things occur most commonly, so if it looks the same, act the same”, or “with two options, if one is wrong the other must be correct”.
Computer experts involved in much more advanced AI projects will reject my oversimplification. And perhaps they can see a further horizon where it might be possible to think about machine self-awareness; in this I defer to their greater understanding of things.
Yet the very same biological evolution that means we change only slowly over time has left us with large brains that can learn extremely quickly and adjust to changing situations rapidly. Creativity is the preserve of the human mind and not the motherboard of a computer. Your laptop may have intelligence of a sort, but the key word here is artificial.
Dick Ahlstrom is Science Editor