Putting humans centre stage in the AI race

After years of AI being the hot topic, technology companies are realising that ethics in AI is the real hot-button issue


Microsoft is out to prove that it is not just "an AI-first company" but an "AI-first company with a heart and a conscience". Following on from the creation of the Artificial Intelligence and Research Group last September – led by 20-year Microsoft veteran Harry Shum – it is leading with the tagline "amplifying human ingenuity with intelligent technology".

At a recent event in London designed to showcase new and existing AI and machine learning research and products, Shum reminded the world that Microsoft has been investing in this area for a long time, quoting Bill Gates’ vision of general artificial intelligence where “computers could one day see, hear, talk, and understand human beings”.

Shum declared that AI has “so much potential to improve our lives” while cautioning that “with great opportunities come problems we need to think through such as jobs and security”. It was Microsoft’s moment to acknowledge the very real fears surrounding developments in artificial intelligence and how they will affect us all in the near future.

Given that many of us fear our jobs will be replaced by robots or AI, there is good reason for large technology companies like Microsoft to take this into account when setting out their AI milestones. A recent survey carried out by PwC found that 46 per cent of the public believe that AI will have a negative impact by "taking away jobs", and these fears are not necessarily baseless: a 2015 study by researchers at Oxford University and Deloitte, for example, predicted that 35 per cent of current jobs in the UK are at risk of automation within the next 20 years.

“Inevitably some existing jobs will be replaced,” conceded Shum, before adding that we “should also look at new opportunities created by new technologies”. There will be supervised learning opportunities as AIs require humans to train them and new jobs that we have never even thought of, he explained, but “[the future of jobs] is an issue that we, as AI practitioners, as well as policy makers and government, have to think through”.

Eric Horvitz, managing director of Microsoft Research, weighed in on this matter: "It is easier to imagine current jobs disappearing than to imagine new jobs being created – this is a human bias.

“That said, we need to be cautious going forward. There is a risk of disruptions over time because there are some jobs that will become things a machine could do well – driving a truck on a long stretch of highway, for example,” he added.

“What percentage of the US population are truck drivers? It’s high! So, we need to collect more data on automation, job distribution and the workforce.”

Horvitz also addressed the need for ethics and transparency in the development of new AI technologies, reminding the audience of the scourge of algorithmic bias, which arises when machine learning systems are trained, usually unintentionally, on biased datasets. Examples of these biases have surfaced in everyday consumer tech products ranging from search engine results to photo recognition apps.

“How do researchers build robust systems that work in the open world after being trained in a lab world?” he asked. In the past, he added, some researchers were so excited to get machine learning out as viable products that they did not consider the potential biases and skews in the data, leading to biased outcomes. Examples have been seen in the criminal justice system, where software has exhibited racist outcomes when calculating the risk of reoffending.
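To see how easily that happens, consider a minimal sketch in Python. Everything here is an illustrative assumption, with entirely synthetic data and made-up group labels, not taken from any real system: a model trained on historically skewed labels simply learns to reproduce the skew.

```python
# A minimal, hypothetical sketch of how bias in training labels propagates
# into a model's decisions. The data is synthetic and the "groups" are
# illustrative assumptions, not drawn from any real product or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One protected attribute (group 0 / group 1) and one legitimate feature.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Historical labels: group 1 was held to a harsher standard, so the
# "ground truth" we train on is itself biased against that group.
biased_threshold = np.where(group == 1, 0.5, 0.0)
label = (skill > biased_threshold).astype(int)

# Train on the biased labels, with the group included as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# The model reproduces the historical skew: far fewer positive
# predictions for group 1, even at the same underlying skill level.
for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: positive prediction rate = {rate:.2f}")
```

Nothing in the training pipeline is malicious; the bias rides in silently on the labels, which is exactly why Horvitz argues for auditing data before shipping a system.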

“We need to move people together to take AI to the next level. We’ve coordinated and got buy-in to tackle a set of key aspirations together,” said Horvitz, explaining that the long-term goal was attaining more general AI, or AI that doesn’t just do one thing well but that can interact with and anticipate human behaviour to “augment the human and complement our strengths”.

To this end, Horvitz announced Microsoft Research AI (MSRAI), a new research and incubation hub within Microsoft Research involving over 100 researchers and focusing on, among other areas, ethical design and accessibility. One of the flagship products is Seeing AI, a free iOS app for the visually impaired that narrates the world around the user; it can recognise facial expressions, read printed text, and identify products and paper currency, translating everything into an audible format.

This new incubation hub has also established an internal ethics board, known as the Aether (AI and Ethics in Engineering and Research) Advisory Panel. This is in addition to the Partnership on AI, of which Microsoft is a founding member alongside Google, IBM and Facebook; this is a wider industry alliance with similar societal and ethical goals.

After years of AI being the hot topic, technology companies are realising that ethics in AI is the real hot-button issue. Microsoft, though, is already considered an ethical company, as we were reminded by Chris Bishop, director of the Microsoft Research Lab in Cambridge: it has made this year's list of the World's Most Ethical Companies, its seventh year in a row on the list.

Bishop also reminded the audience that, despite all the talk of AI-infused everyday technologies, many of us do not realise that we are smack in the middle of a paradigm shift: “I think we’re seeing a profound transformation – one as big as when it all started with the first computer program,” he said.

“We are moving from software that is handcrafted to software that learns from data.”

Bishop was present to mark the 20th anniversary of the Cambridge Research Lab, which Shum said has had a "profound impact" by contributing to many Microsoft products, including the AI assistant Cortana and Skype Translator. However, the project with the greatest societal impact may be InnerEye, software that uses computer vision and machine learning to help doctors deliver more effective cancer treatments.

Richard Lowe, software engineer on the Medical Image Analysis team at Microsoft Research Cambridge, introduced InnerEye by reminding the audience that one in two people over the age of 60 will develop cancer at some point in their life, with over half of these requiring radiotherapy. It is a depressing statistic, but one that underlines the need for AI in medical technology, not just in making our smartphones smarter.

Surprisingly, the technology behind InnerEye began with the Kinect motion-sensing accessory for the Xbox games console and went on to be used in the development of the HoloLens mixed reality (MR) headset. The same technology can trace accurate outlines of human organs and cancerous tumours on CT and MRI scans. This matters because targeted radiotherapy is not only more effective; it also avoids blasting nearby healthy organs with radiation designed to kill cancer cells.

For oncologists, this means that instead of spending hours manually mapping out a patient’s tumour in 3D from hundreds of individual 2D images, they can let InnerEye automate the process.

"This manual operation is inaccurate, time-consuming and expensive," explained Antonio Criminisi, principal researcher at the Cambridge Lab, whose machine learning research has led to this breakthrough in medical image analysis. "The AI algorithm has been pre-trained to recognise and segment tumours on many similar images from past patients."

Criminisi also explained that oncologists can step in at any time to change the work done by the algorithm if needed, as well as carry out regular quality checks, all of which will lead to more accurate results in the future as the AI learns from this feedback.
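That correction loop can be sketched too, again with toy data and an illustrative model rather than InnerEye's actual retraining pipeline: each clinician-corrected contour is folded back into the training set before the model is refit.

```python
# A self-contained sketch of the feedback loop Criminisi describes:
# clinician corrections become new training data, so accuracy improves
# over time. Data and model here are toy assumptions, not InnerEye's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

def new_patient():
    """One synthetic 'scan' of 1,000 voxels with a known true tumour mask."""
    intensity = rng.normal(0, 1, (1000, 1))
    true_mask = intensity[:, 0] > 1.0        # bright voxels are 'tumour'
    return intensity, true_mask

X, y = new_patient()                          # initial training data
model = LogisticRegression().fit(X, y)

for week in range(3):
    scan, truth = new_patient()
    proposed = model.predict(scan)            # AI's proposed contour
    before = (proposed == truth).mean()
    corrected = truth                         # oncologist fixes any errors
    X = np.vstack([X, scan])                  # corrections feed back in...
    y = np.concatenate([y, corrected])
    model.fit(X, y)                           # ...and the model is refit
    after = (model.predict(scan) == truth).mean()
    print(f"week {week}: voxel accuracy {before:.3f} -> {after:.3f}")
```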

InnerEye is currently undergoing clinical trials in several locations around the UK.

“I often say artificial intelligence is a sleeping giant when it comes to healthcare,” said Horvitz, who went on to state that AI-driven predictive modelling of hospital readmission rates represents a $17.5 billion cost opportunity in the US alone. “By building a system that could predict the patients at highest risk of readmission, they would get the care they need.”
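As a rough illustration of what such a system might look like (synthetic data, made-up features, not any real hospital's model), a simple classifier can rank discharged patients by predicted readmission risk and flag the highest-risk group for follow-up care.

```python
# A minimal, hypothetical sketch of readmission-risk modelling of the kind
# Horvitz describes. The features, thresholds and synthetic data are
# illustrative assumptions, not any real hospital's model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5_000

# Toy features for each discharged patient.
age = rng.integers(20, 95, n)
prior_admissions = rng.poisson(1.0, n)
length_of_stay = rng.integers(1, 30, n)

# Synthetic ground truth: risk rises with age, history and long stays.
logit = -5 + 0.03 * age + 0.6 * prior_admissions + 0.05 * length_of_stay
readmitted = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, prior_admissions, length_of_stay])
X_tr, X_te, y_tr, y_te = train_test_split(X, readmitted, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Flag the highest-risk discharges for follow-up care, as Horvitz suggests.
risk = model.predict_proba(X_te)[:, 1]
flagged = risk > np.quantile(risk, 0.9)   # top 10% of predicted risk
print(f"flagged {flagged.sum()} patients; actual readmission rate "
      f"among them: {y_te[flagged].mean():.2f} vs overall {y_te.mean():.2f}")
```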

Beyond healthcare, Emma Williams, general manager for Bing, reminded us that designing AI systems will require a whole new approach: "We are used to designing for the web, and user engagement is well understood. With AI we will need to invent new user models and design patterns."

To this end, there are now 200 people from various disciplines working on creating new AI design principles: “We need diversity of thought and background so we have been creating a global team – 50/50 male and female – with diverse backgrounds including PhDs in psychology, sociology, anthropology, and even musicians who have been designing the voices of our bots in Cortana.”

“There is societal angst around AI that is deeply wired into our brains: it stems from fear of another dominant species that is sentient,” said Williams. “But humans are the hero. Humans are at the centre of what we’re trying to create at Microsoft.”

These AI systems, she explained, will be designed to adapt to humans and human behaviour, taking into account the emotional, irrational and other non-logical aspects of how we act. It sounds exciting, but one wonders how AI will learn to adapt to our memory lapses, moods and mortal machinations when so many people fail to understand each other. Perhaps the next iteration of Cortana will hear the grumpy reluctance in our voices when we ask her to find some healthy recipes and go ahead and order some Domino's instead.