Artificial intelligence is a real threat to the human race

Once the stuff of science fiction, ‘smart’ machines are getting smarter all the time

Humans are starting to create machines that can make decisions like us but which don’t have morality and probably never will.

Ebola sounds like the stuff of nightmares. Bird flu and SARS also send shivers down my spine. But I’ll tell you what scares me most: artificial intelligence.

With enough resources, humans can stop the first three. The last, which humans are creating, could soon become unstoppable.

Before we get into what could possibly go wrong, let me first explain what artificial intelligence is. Actually skip that. I’ll let someone else explain it: grab an iPhone and ask Siri about the weather or stocks. Or tell her: “I’m drunk.”

Her answers are artificially intelligent.

Right now these artificially intelligent machines are pretty cute and innocent, but as they are given more power in society, it may not take long for them to spiral out of control.

In the beginning, the glitches will be small but eventful. Maybe a rogue computer momentarily derails the stock market, causing billions in damage. Or a driverless car freezes on the highway because a software update goes awry.

But the upheavals can escalate quickly and become scarier and even cataclysmic.

Imagine how a medical robot, originally programmed to treat cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.

Proponents of artificial intelligence argue that these things would never happen and that programmers are going to build safeguards.

But let’s be realistic: it took nearly a half-century for programmers to stop computers from crashing every time you wanted to check your email. What makes them think they can manage armies of quasi-intelligent robots?

I’m not alone in my fear. Silicon Valley’s resident futurist, Elon Musk, recently said artificial intelligence is “potentially more dangerous than nukes”. Stephen Hawking, one of the smartest people on earth, wrote that successful AI “would be the biggest event in human history. Unfortunately, it might also be the last.”

There is a long list of computer experts and science fiction writers also fearful of a rogue robot-infested future.

Two main problems with artificial intelligence lead people like Musk and Hawking to worry. The first, more near-future fear is that we are starting to create machines that can make decisions like humans, but these machines don’t have morality and probably never will.

The second, which is a longer way off, is that once we build systems that are as intelligent as humans, these intelligent machines will be able to build smarter machines, often referred to as superintelligence. That, experts say, is when things could really spiral out of control, as the rate of growth and expansion of machines would increase exponentially. We can’t build safeguards into something that we haven’t built ourselves.

“We humans steer the future not because we’re the strongest beings on the planet or the fastest, but because we are the smartest,” said James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era. “So when there is something smarter than us on the planet, it will rule over us on the planet.”

Perhaps the scariest scenario is one in which these technologies will be used by the military. It’s not hard to imagine countries engaged in an arms race to build machines that can kill.

Bonnie Docherty, a lecturer on law at Harvard University and a senior researcher at Human Rights Watch, says that the race to build autonomous weapons with artificial intelligence – which is already underway – is reminiscent of the early days of the race to build nuclear weapons. She believes that treaties should be put in place now before we get to a point where machines are killing people on the battlefield.

“If this type of technology is not stopped now, it will lead to an arms race,” says Docherty, who has written several reports on the dangers of killer robots. “If one state develops it, then another state will develop it. And machines that lack morality and mortality should not be given power to kill.”

So how do we ensure that all these doomsday situations don’t come to pass? In some instances, it is likely that we won’t be able to stop them.

But we can hinder some of the potential chaos by following the lead of Google. Earlier this year when the search engine giant acquired DeepMind, a neuroscience-inspired, artificial intelligence company based in London, the two companies put together an artificial intelligence safety and ethics board that aims to ensure these technologies are developed safely.

Demis Hassabis, founder and chief executive of DeepMind, said in a video interview that anyone building artificial intelligence, including governments and companies, should do the same thing.

“They should definitely be thinking about the ethical consequences of what they do,” Hassabis said. “Way ahead of time.” – (New York Times News Service)
