Artificial intelligence may be more humane than people
Do we overrate human thought when we presume AI would lack compassion?
Would robots that can think for themselves choose to be altruistic, compassionate and fair?
Artificial Intelligence (AI) appears to be suffering from an image crisis. Many of the most vocal commentators seem to believe it will ultimately cause more harm than good. Fear of new technology is nothing new, but it doesn’t help when thought leaders like Elon Musk join the prophets of doom. But guess what? Sometimes even Elon Musk is wrong. There, I said it.
I struggled to find consensus on an antonym for AI. So we’re calling it natural intelligence. That is, the stuff that’s supposed to be crammed into our brains, making us top of the food chain. But it’s overrated. And the fact that so many have blindly concluded AI will be the death of civilisation as we know it is one part humanity’s inclination to fear the unknown, and three parts The Terminator movies. Thanks, Arnie.
If AI does learn how to self-evolve and, therefore, think for itself, who is to say it wouldn’t develop a consciousness that was genuinely altruistic, compassionate and fair-minded? Right now, the people building AI do so with unconscious bias and limited intelligence, and are frequently driven by personal gain rather than the welfare of others. That’s why the greatest achievements have come from corporate entities like Facebook, which uses it for targeted advertising, photo tagging and news feeds. Microsoft and Apple need AI to make their digital assistants, Cortana and Siri, wow us by turning on the immersion.
Google is among the hardest at work in its efforts to create the kind of self-teaching AI that might one day outsmart us all. It recently promoted one of its own whizz kids to be the new lead of its AI division. While not a kid at 50 years of age, Jeff Dean had been impressing his co-workers at Google with his engineering skills since 1999. So he was an obvious choice.
A position like this at a company like Google isn’t one of those “made-up” titles like vice-president of customer development or chief innovation officer. AI strategy is at the heart of everything the company does. So if anyone is to inadvertently cause a machine-driven apocalypse, it’ll be these guys.
We’re not there yet though. AI’s greatest screw-ups have also come from the corporate sector. In 2015, Google’s photo-organising product tagged some images of black people as gorillas.
News that a robot figured out how to autonomously assemble an Ikea chair without malfunctioning, like most humans do, is kind of impressive. But it’s not enough to send us running for the hills. Researchers at Nanyang Technological University in Singapore used a couple of bog-standard industrial robot arms with force sensors and a 3D camera to build a robot that had a Stefan Ikea chair assembled in 20 minutes.
It was programmed to build the chair. It knew no other option than this. So, given the choice, would a conscious machine decide not to help a human in distress assemble a chair? No one is wildly speculating on the possibility that robots that can think for themselves might choose to be altruistic, compassionate and fair. That they might protect the most vulnerable in society, distribute wealth equally, and put criminal, narcissistic, incompetent leaders of the free world, for example, into recovery treatment rather than a jail cell, which is what humans would consider doing first.
Machines vs myopia
There are already solutions to many of the world’s ills – wealth inequality, environmental damage, racial and cultural discrimination etc – at our disposal. We as a species choose not to implement them because of the potential negative impacts – financial loss, lost time, not to mention apathy – they might have on us as individuals in the short term. Machines might not be so myopic.
Of course, taking a cold, rational approach to decision-making isn’t necessarily the best idea for society’s ills either. The debate really centres on what constitutes consciousness. Can a robot develop a sense of itself – and of those around it – while continuing to deliver a purely logic-based approach to “choice”? Were this the case, artificial decision-making could decide eugenics is back in vogue.
All humans have managed to achieve thus far, though, is a kind of organised chaos. People will stop at a red light and wait till it’s green before driving through an intersection. But in the back of everyone’s mind is the knowledge that it would take very little for civil society to fall apart and have us all at each other’s throats. That’s why so many take comfort in organised religion, as it offers answers to many of our questions. They might not be the right answers, but sometimes living a lie is easier than accepting the harsh reality that we have little or no control over our lives.
From what I can tell, machines aren’t big on chaos either. They prefer order, logic and fully formed Ikea chairs. At a recent talk he gave in Austin, Texas, Elon Musk said: “Smart people who know they’re smart have a tendency to define themselves by their intelligence, meaning they don’t like the idea that machines could ever be smarter than them.” I’m no psychologist, but Musk himself happens to be a smart man who is clearly aware of his own intelligence. The engineer doth protest too much, methinks.