
Research using AI to help support mental health during the pandemic

At Ulster University, important research into mental health and wellbeing is aiding both Samaritans and those who need a daily helping hand

The coronavirus pandemic has undoubtedly unleashed the biggest public health crisis in decades. However, many experts argue that another, equally devastating, crisis is happening in parallel – that of mental health.

Researchers at the School of Computing at Ulster University (UU) are now working closely with mental health professionals to see how data science and AI can alleviate the increasing pressure on already-stretched mental health services. With lockdowns, movement restrictions, and social distancing causing unprecedented anxiety and isolation, they say machine learning can help them understand how best to meet these changing mental health and wellbeing needs as the pandemic continues.

UU has been working closely with Samaritans for more than five years now, across Ireland as well as the UK. By applying machine learning, a form of artificial intelligence (AI), to anonymised call data, the researchers have already helped identify a number of distinct caller types who use their helpline services. Earlier this year they were asked to explore the impact the pandemic has had on both call duration and volume, so that Samaritans could gain a clearer understanding of how user needs had been affected by Covid-19.
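
The article does not detail the technique behind this, but clustering anonymised call records is one common way of surfacing caller types. The sketch below is purely illustrative: the features (hour of call, call duration in minutes) and the use of k-means are assumptions made for the sake of example, not a description of the UU team’s actual pipeline.

```python
# Illustrative sketch only: hypothetical anonymised call records described by
# [hour_of_day, duration_minutes], grouped into broad "caller types" with
# k-means. Feature choice and algorithm are assumptions, not the UU method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

calls = np.array([
    [2, 45], [3, 50], [9, 5], [10, 8], [22, 30],
    [23, 35], [1, 60], [11, 4], [21, 40], [8, 6],
])

# Scale features so hour and duration contribute comparably to distances
scaled = StandardScaler().fit_transform(calls)

# Group the calls into three candidate caller types
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)
for label in sorted(set(model.labels_)):
    cluster = calls[model.labels_ == label]
    print(f"Type {label}: avg hour {cluster[:, 0].mean():.1f}, "
          f"avg duration {cluster[:, 1].mean():.1f} min")
```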

According to the university’s professor of computer science, Maurice Mulvenna, the results were somewhat surprising. While the volume of calls remained relatively steady throughout the lockdown months, the timing and duration of calls were markedly different.


“We looked at data for two different months, pre-lockdown and during lockdown, and we could see that caller behaviour changed. More people were calling much earlier in the morning than they had before, and people were also staying on the phone for much longer than they would have before,” Mulvenna explains.

Indeed, they saw that short calls all but disappeared; callers were much more likely to spend 30 minutes or even longer on the phone to one of the helpline volunteers.
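
A before-and-after comparison of this kind can be sketched in a few lines of analysis code. The snippet below is illustrative only, assuming a hypothetical anonymised call log with “period”, “hour” and “duration_min” columns rather than the Samaritans’ real data.

```python
# Illustrative sketch only: compare call volume, call timing and call length
# between a hypothetical pre-lockdown month and a lockdown month.
import pandas as pd

calls = pd.DataFrame({
    "period": ["pre", "pre", "pre", "lockdown", "lockdown", "lockdown"],
    "hour": [14, 20, 22, 6, 7, 23],          # hour of day the call started
    "duration_min": [5, 12, 18, 35, 42, 50],  # length of the call in minutes
})

summary = calls.groupby("period").agg(
    call_volume=("duration_min", "size"),
    median_hour=("hour", "median"),
    median_duration=("duration_min", "median"),
)
print(summary)
```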

“People were at home and needed someone to speak to. They can’t just walk down the street and chat to somebody else about how they’re feeling,” Mulvenna points out.

This valuable empirical evidence and analysis will be used by Samaritans to better understand and meet the needs of callers as the pandemic progresses. As the project continues into 2021, the data and knowledge derived from it will feed into a digital “dashboard” to help the organisation understand in real time how it can evolve and adapt its service to support callers in these unprecedented times.

AI for good

Another project the UU team has pioneered is “ChatPal”, a new “chatbot” app designed to support mental health and wellbeing. Co-developed with researchers from Cork Institute of Technology (CIT) as well as mental health professionals, the app is available 24 hours a day, seven days a week, and allows users to “talk” to a chatbot driven by a form of AI.

Originally envisioned as a lifeline for people who were particularly isolated or living in rural communities, the chatbot wasn’t due to launch until next year. However, its release was expedited once it became clear what impact lockdowns and movement restrictions were having on people’s mental health, explains Dr Raymond Bond, reader in data analytics at UU.

ChatPal’s dialogues were co-created by health professionals from a range of disciplines, along with computer scientists and designers. Bond explains that the goal is to provide psychoeducation and support self-management, not to diagnose or treat mental health issues.

“We know the limitations of AI right now and it’s important to ensure harmful or irresponsible advice isn’t given. The current prototype simply uses a positive psychology framework to coach people through developing positive emotions in their life and engender engagement with their work or hobbies and develop quality relationships, as well as to foster meaning in their life and ambition,” he says.

ChatPal may be deliberately simple, but Bond acknowledges that while the technology remains in its relative infancy, the rapid development of AI means that interactions between humans and artificial intelligence will become increasingly sophisticated.

“For example, people currently use smart speakers using a type of command language to ‘command’ the device to turn on the lights or play music. Right now the conversation is very much command-driven and the conversations are very short, but as time goes on this will become more and more bidirectional and will be more immersive – think of a smart speaker that leads the conversation and will ask you how your day was,” he says. “It may help support that feeling of social connection for people and reduce feelings of social isolation.”

Bond admits that chatbot technologies are still very limited, but the team will continue to work on the next version, which will further incorporate the needs identified by users. They are also running a trial looking at how the app is used “in the wild”, in which the system anonymously logs the interactions of users who choose to download the app – this will help with quality control and with understanding user engagement and behaviour, he explains.
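
The article doesn’t specify how that logging works. As a rough illustration only, an anonymous trial of this kind might record events against a random install identifier rather than any personal details; the sketch below is hypothetical and not a description of ChatPal’s actual system.

```python
# Purely hypothetical sketch of anonymous interaction logging: events are
# recorded against a random install identifier with no personal details,
# then summarised into a simple engagement measure (messages per install).
import uuid
from collections import Counter
from datetime import datetime, timezone

event_log = []  # in a real deployment this would be stored server-side

def log_event(install_id: str, event: str) -> None:
    """Append an anonymised interaction event to the log."""
    event_log.append({
        "install": install_id,
        "event": event,
        "at": datetime.now(timezone.utc).isoformat(),
    })

# Two simulated anonymous installs interacting with the chatbot
a, b = uuid.uuid4().hex, uuid.uuid4().hex
for event in ("session_start", "message_sent", "message_sent", "session_end"):
    log_event(a, event)
for event in ("session_start", "message_sent", "session_end"):
    log_event(b, event)

messages_per_install = Counter(
    row["install"] for row in event_log if row["event"] == "message_sent"
)
print(messages_per_install)  # messages sent per anonymous install
```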

But chatbots are used more widely than you might think; research has shown that, for someone accessing an online suicide prevention web chat, as many as five interactions could be with a chatbot before they are referred to an actual person. Certainly, the use of chatbots is not a 21st-century development – Mulvenna explains that the first rudimentary chatbot appeared 55 years ago.

“In 1965 a German American researcher created what was arguably the first chatbot, Eliza. It was very simple, it just listened to whatever you typed into the computer and would respond. There was no AI involved; it would use some built-in rules to rework sentences in order to form a conversation.”
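The reworking Mulvenna describes can be captured in a handful of pattern rules that turn the user’s own words back into a question. The Python sketch below is a loose illustration in the spirit of Eliza, far simpler than the original 1960s program.

```python
# A tiny Eliza-style illustration: no AI, just a few built-in pattern rules
# that rework the user's own sentence into a reply.
import re

RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi need (.+)", re.IGNORECASE), "What would it mean to you to have {0}?"),
]

def respond(text: str) -> str:
    """Return a rule-based reply, or a neutral prompt if no rule matches."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."

print(respond("I am worried about work"))  # -> How long have you been worried about work?
print(respond("I feel quite low today"))   # -> Why do you feel quite low today?
```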

Scientists realised that people were happy to continue “talking” to Eliza, even though they understood “she” was just a computer program. “The ability to conduct a conversation and the apparent empathy was the important aspect of this,” says Mulvenna. “So, if you are building a chatbot, that’s a primary consideration. People like to tell their stories and feel like they are being heard; it’s a very human trait.”

Bond agrees, noting that people humanise everyday objects and pets. “So why wouldn’t they talk to a computer that talks back to them?”

Mental health chatbots pose a unique challenge, admits Bond. “We are trying to balance what AI can do reliably well, what users actually want, and what mental health professionals would support. There is a delicate balancing act that needs to be done there – perhaps there is no point in giving users what they want if the mental health professionals aren’t willing to endorse it, and there is no point in trying to meet a particular need using AI if indeed AI doesn’t yet have the dependable capability to reliably meet that need.”

That’s why, as part of the ChatPal project, the UU team surveyed more than 180 mental health experts and professionals. Mulvenna says they were pleasantly surprised to find that the vast majority embraced the concept and value of the project, with 75 per cent saying they would support the use of an app such as ChatPal. This endorsement gave the researchers a lot of comfort, he says.

“We had thought there might be some resistance, but it just showed that these healthcare professionals are largely on board. It’s part of the realisation that it’s not just computerised mental health supports or human-based counselling; it’s a blend of the two, and people can choose what they want.”