Is artificial intelligence the greatest threat to humankind?

AI could lead to human extinction, says philosopher Nick Bostrom

Paranoid about androids? Serious thinkers are warning about the rise of artificial intelligence. Photograph: Carsten Koall/AFP/Getty Images

Fri, Aug 29, 2014, 01:00

The idea of computers taking over the world has long been with us in science fiction. But now serious thinkers are saying it’s not a matter of if, but when. The physicist Stephen Hawking recently joined other scientists in warning about complacency over the rise of artificial intelligence.

As Nick Bostrom of the modestly titled Future of Humanity Institute and Oxford Martin School at Oxford University warns: “As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.”

Forget about global warming and nuclear Armageddon, then: spawn of Google is gonna get us. Or as Bostrom, a philosopher and the author of Superintelligence: Paths, Dangers, Strategies, puts it (providing today’s cheery idea): “Superintelligence is possibly the most important and most daunting challenge humanity has ever faced.”

 

How long have we got before a machine becomes superintelligent?

Nick Bostrom: “Nobody knows. In an opinion survey we did of AI experts, we found a median view that there was a 50 per cent probability of human-level machine intelligence being developed by mid-century. But there is a great deal of uncertainty around that: it could happen much sooner or much later.

“Instead of thinking in terms of some particular year, we must learn to think in terms of a probability distribution spread out over a wide range of possible arrival dates.”

 

What risk does superintelligence pose?

“It would pose existential risks; that is to say, it could threaten human extinction and the destruction of our long-term potential to realise a cosmically valuable future.”

 

Is it possible to achieve a “controlled detonation”, and at what point would you want to do one?

“By a ‘controlled detonation’ I mean that if there is at some point going to be an intelligence explosion – a rapid transition to machine superintelligence – then we should try to set up the initial conditions in such a way that the outcome is beneficial rather than destructive. I think that in principle this is possible, though in practice it looks hard to achieve.

“In a saner world, we would hold off on triggering an intelligence explosion until we were really certain that we had solved the control problem. Perhaps after we thought we had the solution, we would wait a couple more generations just to allow time for somebody to spot a flaw in the solution.

“But the actual world seems unlikely to allow for such a pause, which is why it is important to start working on the control problem now, even though it is difficult to do so before we know much about what kind of system will eventually become superintelligent.”
