Preparing for the era of artificial intelligence

A new project aims to provide insight and guidance on the future impact of AI


Each artificial intelligence (AI) technology is developed to solve a particular problem. Once it solves that problem, however, it's no longer considered AI; it becomes the norm. Everything from simple calculators and computers to innovations such as voice recognition on phones is now just an everyday tool that would hardly be considered all that “intelligent”. It's almost as if the term AI refers to the science and engineering of making computers do things they can't do yet.

Or so suggests Prof Peter Stone of the University of Texas, chair of the panel of academic and industrial thinkers who co-authored the first offering from the One Hundred Year Study on Artificial Intelligence, an ongoing project hosted by Stanford University to help inform society on how best to manage and respond to developments in smart software, sensors and machines.

“AI is not one thing that can be clearly defined,” says Stone. “Whenever a high-profile news story breaks [like when a Google-developed computing system beat one of the top human players at the 2,500-year-old game of strategy known as Go earlier this year], there are headlines suggesting AI has reached a breakthrough. But it is a misconception to think that such landmarks in one AI technology have any wider significance. Just because we have cars that can drive themselves does not mean we can apply this innovation to some other AI-related goal.”

Nervous inspiration

A more informative definition serves as the first line of the newly published report, “Artificial Intelligence and Life in 2030”, which describes AI as “a science and a set of computational technologies that are inspired by – but typically operate quite differently from – the ways people use their nervous systems and bodies to sense, learn, reason, and take action.”

This 50-page report is the collaborative work of 16 leading AI experts from industry and academia who spent a year investigating the advances in AI likely to be seen in a typical North American city between now and 2030. The report serves as more than an AI almanac, though. It also addresses the various economic, political and ethical questions that could arise if and when major new technologies become part of everyday life.

Until very recently, many of the questions surrounding AI and society were of the “what if” variety. In the past 15 years, however, AI has become increasingly ubiquitous. The hypothetical is now real.

“The study is broken into eight sections which the panel devised based on where it believed AI would likely have the greatest societal impact in the coming years,” says Stone.

These include: transportation, home/service robots, healthcare, education, entertainment, low-resource communities, public safety and security, and employment and the workplace.

“Some of the upcoming transformations in transportation are already on most people’s radars,” he says. “However, I think people are less aware of the potential transformations in home robotics, natural language dialogue systems, and healthcare. It’s possible that we will begin to see home and service robots that go far beyond vacuuming our floors, with the ability to understand human speech, visually identify people and objects, and manipulate objects with arms and grippers.

Healthcare possibilities

“In healthcare, we may see AI-powered assistants helping doctors match symptoms to known diseases, AI systems that use clinical, environmental, and social data from millions of patients to develop more personalised diagnoses and treatments, and an expansion in automated systems performing surgery and caring for patients.

“The ability to interact with machines conversationally, rather than by typing narrowly defined commands and pushing buttons, will have a profound impact on the way we live.”

But with this come the many implications of more robotics in everyday life. Who, for example, would be to blame if a self-driving car were to crash or an intelligent medical device were to perform a procedure incorrectly? Or what if an AI application were found guilty of racial discrimination or financial misconduct?

“It’s conceivable that AI used in public safety and security applications could exacerbate and codify some of the biases inherent in human decision-making, for example by training from data that reflects these biases,” says Stone. “But if carefully developed and deployed, AI technologies could also mitigate these biases. The goal of the report is to give a balanced and realistic view of both the positive and negative potential impacts of AI technologies.”

One of the other key goals of the report, which is the first in a planned series that will carry on for at least a century, is to bring to light legitimate expertise in AI – a field where fact and fiction are frequently muddled. “The public’s understanding of this subject often comes from science fiction writers and Hollywood movies,” says Stone. “We’re trying to put forth the opinions of actual AI experts.”