Cyborg society

With artificial intelligence widely predicted to transform everyday life by 2025, and machines to match human savvy by 2045, are robots likely to end up being our friends, or something rather more sinister?


In a workshop above Dublin’s Science Gallery, a group of children are tinkering with robots they’ve built themselves. Their ages range from eight to 12, their focus spilling over into chaotic creativity. Fingers dip into boxes brimming with mechanical parts. Programming code is clicked and dragged across laptop screens. Backpacks and lunch boxes lie forgotten, discarded in heaps across the room.

This is the midway point of a week-long robot camp that culminates with bouts of robotic sumo-wrestling. Each lesson is about tackling a new challenge: tomorrow it’s sensors, today it’s gears. That means robot drag-racing, uphill climbs and obstacle courses. But most kids are too immersed in fine-tuning their creations to even notice who’s winning.

It’s basic stuff – the parts are an advanced version of Lego called Mindstorms – but bringing these robots to life is a fun way of combining maths, physics, engineering and computer science in one interactive stream. That’s the vision behind Colmac Robotics, an ed-tech start-up founded by university students Niall McCormick and Colmán Munnelly.

The pair are developing their own grassroots initiative to inspire future engineers and raise Ireland’s profile in the world of robotics. They’ve organised 16 robot camps since last year and are about to launch their own series of educational products for schools: kits that come with everything you need to build a robot, apart from a screwdriver and a soldering iron. The idea is that if just a fraction of these budding developers stick with robotics, they could one day be spearheading an industry estimated to be worth up to $6 trillion per year by 2025.

“When you hear people like Bill Gates saying that robotics is the next big thing, you know there’s something to it,” says McCormick, an engineering student at NUI Galway. “Google recently bought six robotic companies in six days. Amazon is developing delivery drones, aiming to have them [operational] in a few years’ time.

“South Korea’s ministry of information said that every Korean household will have a robot helper by 2020. But the industry is developing at such a phenomenal rate that it’s difficult to make predictions. An awful lot can happen in just one year.”

Ireland has the potential to play a big part in that landscape, McCormick adds, but the time to act is now.

Of course, the history of robot design has been beset with unrealistic expectations. Android butlers were promised as far back as the 1950s, when a machine first beat a human in chess and a robot revolution appeared imminent. At the time, developers of artificial intelligence assumed that once computers could handle complex challenges such as logic and algebra, then simple tasks like movement and perception would be easy to crack. But they were wrong.

This is called Moravec’s paradox: things that are difficult for humans (such as playing chess) are often easy for machines, whereas things that are easy for humans (like recognising faces or catching a ball) are difficult for machines. Basic human skills that seem effortless to us have in fact evolved over millions of years; they’re essentially design improvements that have come about through the process of natural selection. The sensory processing of an infant, it turns out, is an extraordinary feat of engineering that’s difficult to replicate.

Machines, however, are starting to catch up. The current state of robot technology is comparable to where personal computers were in the 1980s. But it’s becoming cheaper, faster and more powerful every year. After entering the consumer market through toys (like Furbys) and spreading to household appliances (the autonomous vacuum cleaner), robots are now starting to edge their way into the workforce.

Computerised ‘journalists’ can already generate news stories on homicides, business developments and earthquakes. Healthcare delivery bots can navigate hospital hallways to ferry meals and medication, blood samples and bed linen. ‘Telepresence’ robots (effectively screens on wheels) offer roaming access to offices and art galleries from anywhere in the world.

PackBots have been deployed in war zones to dismantle bombs and scope out locations humans can’t reach, while automated drones are revolutionising warfare both in the air and on the ground. You can even find robots milking cows on Irish farms or driving cars across Californian highways.

Earlier this month, US think-tank Pew Research Center compiled a report by interviewing 2,551 figures in the tech industry – the majority of whom agreed that robots and artificial intelligence would transform everyday life by 2025. They were deeply divided, however, over whether this would have positive or negative implications for society.

This is an issue being explored by Michael Osborne, an associate professor in machine learning at the University of Oxford, whose probabilistic algorithms have aided the detection of planets in distant solar systems. Last year, Osborne co-authored a study investigating whether machine learning may automate certain jobs out of existence. The results estimated that 47 per cent of the US workforce could be replaced by computerisation and machines in the next two decades.

“One of the most common findings of our study was that the more educated and highly skilled you are, the less susceptible your job is,” says Osborne. “The obvious consequence of that is that if this technology takes root in society, it will be the low-skilled, low-paid jobs that will be removed – creating a ‘rich get richer’ effect that could cause great inequality. I think the real question, as we see accelerated rates of technological change, is whether we’ll be able to keep up and find new uses for human labour faster than it is being made redundant by technology.”
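
The Oxford study attached a probability of computerisation to hundreds of real occupations using a far richer statistical model and dataset than anything shown here. The sketch below is only a toy illustration of the general idea: a handful of invented occupations are scored as more automatable the less they rely on skills that are widely considered hard to mechanise (creativity, social intelligence, dexterity). The names, attribute values and weights are all made up for demonstration.

```python
# Toy illustration only: rank made-up occupations by a crude "automatability" score.
# The occupations, attribute values and weights are invented; the real Oxford study
# used a detailed statistical model over rich occupational data.
occupations = {
    # name: (creativity, social_intelligence, perception_and_dexterity), each 0-1
    "telemarketer":       (0.1, 0.3, 0.1),
    "truck driver":       (0.1, 0.2, 0.6),
    "paralegal":          (0.3, 0.4, 0.2),
    "primary teacher":    (0.6, 0.9, 0.5),
    "research scientist": (0.9, 0.6, 0.4),
}

def automatability(attrs):
    """Higher score = fewer of the hard-to-automate skills, so easier to automate."""
    creativity, social, dexterity = attrs
    bottleneck = (creativity + social + dexterity) / 3.0
    return 1.0 - bottleneck

for name, attrs in sorted(occupations.items(),
                          key=lambda kv: automatability(kv[1]),
                          reverse=True):
    print(f"{name:18s} {automatability(attrs):.2f}")
```

By construction, the routine, low-skilled roles float to the top of this ranking – the same pattern Osborne describes, though reached here by a deliberately crude rule rather than his study’s method.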

These kinds of fears date back to the Luddites and the first introduction of industrial machinery, but automation is now on the cusp of an economic revolution unlike anything seen before. Baxter, for example, is a new general-purpose robot that costs about the same as a typical production worker’s salary in the US. It communicates through facial expressions and can quickly ‘learn’ a set of simple actions – such as folding a T-shirt or catching a fast-moving object – by imitating humans. Though Baxter is packaged as a complementary tool to work alongside people, it’s a clear indication of how fast robot design is improving.

Driverless cars are expected to be the first autonomous machines we will interact with on an everyday basis – and, so far, testing has shown that they are much better at driving than humans. This has obvious consequences for the transport industry as, according to Osborne, almost all logistical tasks could soon be automated: that means the driving of taxis, forklifts, trucks, tractors and cargo handlers.

An exponential rise in processing power has also enabled machines to increasingly outperform humans at cognitive tasks in terms of time, cost and accuracy. “Recently we’ve seen algorithms substituting for paralegals, for example, as they’re better able to dig through large amounts of files to find particular cases of interest,” says Osborne.

“They’ve also been substituting for translators: another thing that, for a long time, was thought to be beyond the realm of automation.”
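
To make the paralegal example concrete: document-review software typically ranks files by how strongly they match the terms a lawyer is searching for. The sketch below is a minimal, illustrative version of that idea, scoring a few made-up case summaries against a query with simple TF-IDF-style weighting; real e-discovery systems are far more sophisticated, and the documents and query here are purely invented.

```python
import math
import re
from collections import Counter

# Tiny, made-up corpus standing in for a firm's case files (illustrative only).
documents = {
    "case_001": "Contract dispute over late delivery of manufacturing equipment.",
    "case_002": "Employment claim alleging unfair dismissal after factory automation.",
    "case_003": "Patent infringement suit concerning a robotic assembly arm.",
}

def tokenise(text):
    return re.findall(r"[a-z]+", text.lower())

def rank_documents(query, docs):
    """Rank documents by a simple TF-IDF-style relevance score for the query."""
    doc_tokens = {name: Counter(tokenise(text)) for name, text in docs.items()}
    n_docs = len(docs)
    scores = {}
    for name, counts in doc_tokens.items():
        score = 0.0
        for term in tokenise(query):
            tf = counts[term]                                    # term frequency
            df = sum(1 for c in doc_tokens.values() if term in c)  # document frequency
            if tf and df:
                score += tf * math.log(n_docs / df)  # rarer terms weigh more
        scores[name] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_documents("automation dismissal", documents))
```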

Yet there are plenty of issues still holding robot technology back: practical constraints such as battery life, manufacturing complexity and machine perception.

The reason warehouses, hospitals and airports are likely to feature automated workers long before the home or office is that those environments are clearly structured. Machines in these places perform the same repetitive tasks with little ability to make decisions or exhibit intelligence. That’s a far cry from being able to tidy a teenager’s bedroom or tell the difference between a dirty saucepan and a flowerpot. Roles that require creativity and social intelligence also remain outside the realm of robots. But Osborne believes these “bottlenecks to computerisation” are temporary.

“This quite complex human ability to understand what other humans are thinking and then respond accordingly is probably not re-creatable with an algorithm over the next 20 years or so,” he says. “I think we will eventually overcome those hurdles but it will require a much better understanding of human intelligence. Progress on that front is not really something we can do within machine learning. It’s going to come from the development of efforts within cognitive science and neuroscience.”

He adds later: “Humans are nothing but a machine, essentially. There’s no reason why we can’t reproduce our behaviours on another type of machine or platform.”

One path towards realising that vision is a robot that can contribute to a better design of itself, learning in the same way a child does and gradually evolving like a species. This possibility is being explored by Hod Lipson, director of Cornell University’s Creative Machines Lab.

His team is developing self-aware robots: machines that can figure out how to walk, develop a sense of what they look like and even learn to self-replicate.

“It’s not a case of sitting down and designing a robot from scratch, but rather designing a framework whereby the robot can learn over its lifetime – maybe even to the point where robots don’t just learn but transfer [knowledge] from one robot to another, enabling larger scale learning,” he says. “Robots will be able to change their body plan over time, evolving literally. We’re already beginning to see the seeds of it.”
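
One common way to implement the “evolving, literally” that Lipson describes is evolutionary search: candidate designs are mutated, tested against a fitness measure and selected over many generations. The sketch below is a minimal, generic version of that loop; the three design parameters and the fitness function are invented stand-ins for what would, in practice, be a physics simulation or a trial on real hardware.

```python
import random

# Minimal sketch of an evolutionary design loop. The "design" is just three numbers
# (imagined here as leg length, gait frequency and stride); the fitness function is
# an invented stand-in for measuring how far a simulated or physical robot walks.

def fitness(design):
    leg_length, gait_frequency, stride = design
    # Toy objective: prefer a moderate leg length and a matched stride/frequency.
    return -(leg_length - 0.7) ** 2 - (gait_frequency * stride - 1.0) ** 2

def mutate(design, scale=0.1):
    """Return a slightly perturbed copy of a design."""
    return tuple(p + random.gauss(0.0, scale) for p in design)

# Start from a random population of candidate designs.
population = [(random.random(), random.random(), random.random()) for _ in range(20)]

for generation in range(100):
    # Keep the fittest half, refill the population with mutated copies of survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print("best design:", [round(p, 2) for p in best], "fitness:", round(fitness(best), 3))
```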

A recent breakthrough in this field is origami-inspired self-assembling robots. A team of researchers from the Massachusetts Institute of Technology (MIT) and Harvard University have developed miniature machines that can fold themselves from flat paper into 3-D shapes that walk away. In the future, these robots could be sent through collapsed buildings or tunnels before assembling into their functional form.

But Lipson is currently investigating whether a machine can go even further – developing the ability to understand what someone is thinking by observing what they say and do. In time, he expects machines to emulate empathy and develop common sense, eventually outstripping human intelligence.

“I believe there’s nothing that robots will not be able to do,” he says. “The idea of emotions in machines is a long and controversial discussion in its own right but I think there’s nothing fundamental that prevents that from being realised.

“It might be implemented in a different way; it might end up being a very different kind of process than human emotions, but nevertheless I think it’s possible to emulate. We’re definitely not talking about the next 10 to 15 years, but if we’re talking about the next 100 years, then there’s no doubt about it: it will be achievable.”

Lipson is not alone in this view. The Future of Humanity Institute is a leading research centre in Oxford that considers ‘existential threats’ to human civilisation. One of them is robots.

Last year, they posed a simple question to experts in AI – by what year do you think there is a 50 per cent chance that machines will achieve human-level intelligence? The median answer was 2045. But the subsequent transition from human-level intelligence to so-called super-intelligence could be rapid.

So if robots could gradually evolve as a ‘species’ more advanced than humans, what’s to become of us? Will machines view people as an inefficient, illogical nuisance that only gets in the way of world peace?

Lipson laughs. “There are a lot of hopes and fears with robots, but I don’t think that’s something that’s going to be happening within our lifetime. In the long term, AI is a very powerful technology. Personally, I think it has far more benefits than threats but it’s definitely a discussion to be had. I just don’t have the answer.”

Some techno-optimists envision robots as therapeutic tools and exercise coaches – companions capable of providing us with physical and emotional support.

In Japan, where the ageing population is growing faster than in any other country, robots are being specifically developed to care for the elderly. The question of why we would prefer a machine to do these things for us is something Kathleen Richardson considers in her forthcoming book, An Anthropology of Robots and AI: Annihilation Anxiety and Machines. She’s a senior research fellow in the ethics of robotics at Leicester’s De Montfort University and has based the book on her time in robot labs at MIT.

“What’s interesting about how people predict technology is that they want to completely blur the boundary between human and non-human,” she says.

"That's the ultimate fantasy of Silicon Valley because it's so advantageous for them. But in reality, machines are constantly showing us just how distinctive we are. You can go to any robotics lab where people are trying to design intelligent machines and see how much they struggle. There isn't some system like HAL 9000 from 2001: A Space Odyssey. They're confronted by the realities of how complex and unique the human being is."

Richardson believes contemporary approaches to AI will never achieve genuine artificial intelligence because they are fundamentally flawed. The idea that a robot might be able to experience emotions or act as an alternative to a human being, she says, reflects a profound crisis of attachment in society and downgrades our notion of human life to something merely mechanical.

“You can simulate aspects of the human experience in machines but you can’t actually recreate human subjectivity,” she says.

“From the moment you arrive in the world to the moment you die, you’re learning, feeling and accumulating experiences as a whole while evolving through the relationships around you. To imagine that multiplicity being simulated in an artificial entity is a gross misunderstanding and oversimplification of what it means to be human.”