For many people living with autism, social interactions can be like being in a country where you don’t speak the language. What neurotypical people take for granted – interpreting cues from body language, tones and facial expressions, establishing a rapport with eye contact – can be a challenge for people on the autism spectrum. Experiences vary across the spectrum, of course, but for those who do face these challenges, social life can be isolating.
Dr Ned Sahin may have a solution to at least some of these issues – and artificial intelligence plays a large role.
“You might have a tremendous amount of power in your brain, but not be able to communicate with others. Imagine if you didn’t speak the language, everyone was yelling at you, facing backwards, you didn’t know who to listen to, and everything about you felt just a little bit off,” he said.
“Just imagine if I could give you an AI that would outsource or near source some of the complex challenges such as determining when someone is angry or bored, or help look towards someone, pay attention when they are speaking and get the right information.”
It’s not a theoretical device; Dr Sahin has built one using Google Glass. The wearable device has been through clinical trials and is now being sold to schools. It uses facial detection and analysis to recognise emotions, turning the process into a video game: the wearer earns points for making eye contact with a teacher, or for correctly guessing whether someone is happy or angry.
“We’re using facial detection and analysis to decode facial emotions and turn that into a video game. We have about 10 different apps at different stages, commercialised and under development that are a wearable life coach on your shoulder, on your head, interposed between you and the rest of reality, but not blocking you from actually being part of reality,” he said.
“AI is doing the heavy lifting. When it feels like a video game, it taps into natural motivational structures that children have and teaches them the skills that will get them through the biggest two gateways in life, which is a romantic partnership and a job.”
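The game mechanic described above can be sketched in a few lines. This is purely illustrative: the function name, point values and round structure are invented for the sake of the example and are not Brain Power’s actual implementation.

```python
# Hypothetical sketch of the gamified loop: the system detects an emotion on
# a face, the wearer guesses it, and points are awarded for correct guesses
# and for time spent making eye contact. All names and values are assumptions.

EMOTION_POINTS = 10      # assumed reward for a correct emotion guess
EYE_CONTACT_POINTS = 5   # assumed reward per second of eye contact

def score_round(detected_emotion, guessed_emotion, eye_contact_seconds):
    """Return the points earned in one round of the emotion-guessing game."""
    points = 0
    if guessed_emotion == detected_emotion:
        points += EMOTION_POINTS
    points += EYE_CONTACT_POINTS * int(eye_contact_seconds)
    return points

print(score_round("happy", "happy", 3))  # correct guess plus 3s of eye contact
print(score_round("angry", "happy", 0))  # wrong guess, no eye contact
```

The point of the design, as Dr Sahin notes, is that the scoring taps into the same motivational structures as any video game while the AI quietly does the detection work.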
While Google provides the hardware, the computing behind the scenes comes through Amazon Web Services (AWS). The global giant has been doubling down on machine learning and artificial intelligence, opening up powerful tools to smaller companies and organisations at a more competitive cost than in the past. At its annual re:Invent conference, the company announced everything from a custom-designed chip to technology that can speed up the training of AI models. The end result? Amazon hopes to democratise the technology, accelerating its rollout across every industry as it becomes easier and cheaper for companies and organisations to build it into their products.
In the meantime, the movement for AI for good continues. Phone maker Huawei has also dipped its toe into the water with a new app that signs a select number of story books for deaf children, helping to teach them to read. Announced at the start of December, the app will translate a book into sign language through the Mate 20 Pro's camera, using an onscreen avatar to sign the story as the printed words are highlighted.
"We created StorySign to help make it possible for families with deaf children to enjoy an enriched story time," said Andrew Garrihy, chief marketing officer, Huawei western Europe. "We hope that by raising awareness of deaf literacy issues, people will be encouraged to donate to or support one of the fantastic charity partners we are working with across Europe."
Closer to home, Cork’s INFANT Research Centre has been using AI to help improve outcomes for newborn babies. Researchers in the centre developed an algorithm that helps detect seizures in newborns, interpreting EEG readings at the same level as a human expert. The software can be integrated into existing bedside monitors, limiting the amount of equipment necessary around a child’s bedside and providing doctors with valuable clinical data. It has been a major win for the treatment of newborns, and looks set to be rolled out globally once the clinical trials have been published. The project won an AI award last month, one of several Irish artificial-intelligence projects to be honoured.
However, while AI has enormous potential for good, there are issues ahead and there needs to be some careful consideration about its impact.
Vasi Philomin, director of software engineering with AWS, said knowing the limitations of the services is important for effective use.
“You’ve got to understand what the capabilities of the service actually are and then try to use it appropriately in those cases,” he said.
In the case of AWS’s services that use facial recognition, for example, each result comes with a confidence score indicating how likely the system believes it is to be correct. The higher the score, the more reliable the result.
“In the real world, when you see how these things are used, these services are used as a sort of filter to handle the massive amounts of data out there and narrow the field down for a human to take a look and make a decision in the end,” he explained. “We shouldn’t forget there’s a human in the loop, especially for things that are serious.”
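The “filter, then hand off to a human” pattern Philomin describes can be sketched simply. This is an illustrative stand-in, not AWS’s actual API: real services return far richer responses, but the triage logic is the same – keep only results above a confidence threshold for human review.

```python
# Illustrative sketch of confidence-based triage: a machine pre-filters a
# large pool of detections so a human only reviews the high-confidence ones.
# The data shape (label, confidence in 0-100) is a simplified assumption.

def triage(detections, threshold=90.0):
    """Split detections into those queued for human review and those discarded."""
    for_review = [d for d in detections if d[1] >= threshold]
    discarded = [d for d in detections if d[1] < threshold]
    return for_review, discarded

detections = [
    ("possible match: person A", 99.2),
    ("possible match: person B", 61.5),
    ("possible match: person C", 93.0),
]
review, dropped = triage(detections)
print(review)   # high-confidence results a human will examine
print(dropped)  # low-confidence results filtered out
```

The crucial design point, as Philomin stresses, is that the machine narrows the field; the final decision on anything serious still belongs to a person.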
AWS’s approach is to keep the models for AI and machine learning in the cloud, something Philomin said would improve them over time and perhaps even work out some of the biases that could creep into data.
“It’s important to have good-quality diverse data for training and that’s something we strive to do with our services,” he said. “We continuously improve the services, and customers who are using them see the improvements without having to do anything on their side.”
But trusting businesses and the tech industry to police their own actions may be a step too far for some. The industry is littered with cases where the limitations of services were not taken into account, and there are fears that society will bear the brunt of the consequences.
According to Microsoft Ireland managing director Cathriona Hallahan, AI can be a force for good – but governments need to take a hand in steering its course. "It should be a partnership between man and machine and not one or the other. Humans need to stay in control of who gets to define how that technology should be used," she said. "It should be a coalition with industry, between public and private sectors and government to come together and say 'how should we regulate this and who should have control?' It shouldn't be left in the hands of industry to do that alone."
Dr Sahin has a clear view of the impending impact of AI. “Every useful technology will be used for evil and for good. It’s desperately important for those of us who are doing good, unassailably, to push forward. That doesn’t mean ignore the concerns around ethics; it means take them as seriously as possible,” he said. “Consider it, do good, and know what that means. If we worry about the sky falling in, about the data we give off now being used against us in the future, we won’t make progress.”