Is artificial intelligence the greatest threat to humankind?

AI could lead to human extinction, says philosopher Nick Bostrom


The idea of computers taking over the world has long been with us in science fiction. But now serious thinkers are saying it's not a matter of if, but when. The physicist Stephen Hawking recently joined other scientists in warning about complacency over the rise of artificial intelligence.

As Nick Bostrom of the modestly titled Future of Humanity Institute and Oxford Martin School at Oxford University warns: "As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence."

Forget about global warming and nuclear Armageddon, then: spawn of Google is gonna get us. Or as Bostrom, a philosopher and the author of Superintelligence: Paths, Dangers, Strategies, puts it (providing today's cheery idea): “Superintelligence is possibly the most important and most daunting challenge humanity has ever faced.”

How long have we got before a machine becomes superintelligent?

Nick Bostrom: "Nobody knows. In an opinion survey we did of AI experts, we found a median view that there was a 50 per cent probability of human-level machine intelligence being developed by mid-century. But there is a great deal of uncertainty around that: it could happen much sooner or much later.


“Instead of thinking in terms of some particular year, we must learn to think in terms of a probability distribution spread out over a wide range of possible arrival dates.”

What risk does superintelligence pose?

“It would pose existential risks; that is to say, it could threaten human extinction and the destruction of our long-term potential to realise a cosmically valuable future.”

Is it possible to achieve a “controlled detonation”, and at what point would you want to do one? 

“By a ‘controlled detonation’ I mean that if there is at some point going to be an intelligence explosion – a rapid transition to machine superintelligence – then we should try to set up the initial conditions in such a way that the outcome is beneficial rather than destructive. I think that in principle this is possible, though in practice it looks hard to achieve.

“In a saner world, we would hold off on triggering an intelligence explosion until we were really certain that we had solved the control problem. Perhaps after we thought we had the solution, we would wait a couple more generations just to allow time for somebody to spot a flaw in the solution.

“But the actual world seems unlikely to allow for such a pause, which is why it is important to start working on the control problem now, even though it is difficult to do so before we know much about what kind of system will eventually become superintelligent.”

If superintelligence turns out to be benign, it still seems to change things irrevocably for humans. Does it devalue us as a species as we’ll no longer be “top of the pile”? 

“That’s something we could live with. Very few of us as individuals are top of the pile among humans, and presumably far out there in the universe there are alien intelligences that are far greater than any human mind. If the worst that happens is that we are forced to scale back a little further on our conceitedness, I would count that as a resounding success.”

Why do you say “presumably” there’s alien superintelligence? 

“If the universe is infinite, as many cosmologists now believe, then there is certainly superintelligence out there – in fact infinitely many of them in basically all possible varieties. But the closest one may well be far outside the part of the universe that will ever be able to causally interact with our descendants.”

Given the risks, should artificial intelligence be regulated by government?

“Perhaps at some time in the future, but I don’t see that anything could currently be done in that direction that would actually be helpful. Much more groundwork needs to be done first to develop the necessary concepts and understanding. What would make sense at present is to try to make research progress on the technical control problem.”

To what extent have we already yielded control over our fate to technology?

“The human species has never been in control of its destiny. Different groups of humans have been going about their different businesses, pursuing their various and sometimes conflicting goals. The resulting trajectory of global technological and economic development has come about without much global co-ordination and long-term planning, and almost entirely without any concern for the ultimate fate of humanity.

“Picture a school bus accelerating down a mountain road, full of quibbling and carousing kids. That is humanity. But if we look towards the front, we see that the driver’s seat is empty.”

ASK A SAGE

Question: "What if the hokey-cokey really is what it's all about?"

Viktor Frankl replies: "Ultimately, man should not ask what the meaning of his life is, but rather he must recognise that it is he who is asked. In a word, each man is questioned by life; and he can only answer to life by answering for his own life; to life he can only respond by being responsible."

philosophy@irishtimes.com

Twitter @JoeHumphreys42