
How ethical is it to use AI in insurance?

There are benefits to consumers and insurers in the greater use of big data and AI in insurance, including a greater ability to understand and model risks

Artificial intelligence (AI) and its seemingly endless applications across every sector, industry and technology have been in the news again of late with the release of ChatGPT. While innovations such as these cause a splash – and sometimes a little fear – the use of AI in industry has been around for some time. In the insurance industry, AI is used to assess risks and claims – but has AI inherited bias from its human creators, and is it ethical to use it?

AI and Insurance

The insurance industry has a deep heritage in data analytics, with data long collected and processed to inform underwriting decisions, price policies, settle claims and prevent fraud, says Jean Rea, partner, applied intelligence, KPMG. “It is therefore not surprising that there are many opportunities arising for the use of big data and analytics, including AI, in insurance.

“There are a broad range of uses of big data and AI across the insurance value chain – for example, they can be used to improve product offerings for consumers, develop more targeted and personalised marketing campaigns, improve customer experience through automating and digitalising the customer journey, and so on.”


There are many benefits to both consumers and insurers in the greater use of big data and AI in insurance, says Rea, including a greater ability to understand, model and assess risks, the development of new or enhanced products, automation, and a reduced cost of serving customers. “Big data and AI will enable insurers to further enhance risk-assessment capabilities, and therefore consumers who were previously perceived as higher risk, such as younger drivers, may have greater access to more affordable insurance.


“AI can help facilitate the development of more novel insurance products, such as usage-based insurance products – for example, the use of vehicle telematics devices for pay-how-you-drive or pay-as-you-drive products. Such products are more tailored to consumers’ needs.” Robotic process automation and optical character recognition can be used to streamline processes such as underwriting and claims, improving customer experience and reducing costs.
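To make the pay-as-you-drive idea concrete, the sketch below shows one highly simplified way a telematics-based premium adjustment could work. The base premium, factor weights and driving metrics are invented for illustration and are not drawn from any real insurer’s pricing model.

```python
# Illustrative sketch only: a simplified usage-based pricing rule.
# The base premium, weights and telematics fields are hypothetical
# and do not reflect any real insurer's model.

def usage_based_premium(base_premium: float,
                        km_driven: float,
                        harsh_braking_per_100km: float,
                        night_driving_share: float) -> float:
    """Adjust a base premium using telematics-style driving data."""
    # Fewer kilometres driven means less exposure to risk
    # (capped relative to an assumed 15,000 km annual norm).
    mileage_factor = min(km_driven / 15_000, 1.5)
    # Smoother braking and less night driving earn a lower loading.
    behaviour_factor = (1.0
                        + 0.05 * harsh_braking_per_100km
                        + 0.2 * night_driving_share)
    return round(base_premium * mileage_factor * behaviour_factor, 2)

# A cautious low-mileage driver versus a high-mileage, harsher driver.
print(usage_based_premium(800, km_driven=6_000,
                          harsh_braking_per_100km=0.5, night_driving_share=0.1))
print(usage_based_premium(800, km_driven=20_000,
                          harsh_braking_per_100km=3.0, night_driving_share=0.4))
```

Under these invented weights, the low-mileage driver’s premium falls well below the base while the high-mileage driver’s rises above it, which is the tailoring Rea describes.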

“Similarly, the use of natural language processing, voice recognition and chatbots can facilitate better communication and access to insurance services. In addition, claims processes can be improved. For example, image recognition can be used to automate and speed up the processing of damage-related claims, and drones can be used for remote claims inspections. This will allow claims to be paid faster and cost less to settle, which should in turn reduce premiums.”

The downsides

Rea says some of the other challenges with AI in insurance relate to the complexity and potential lack of transparency or explainability of AI algorithms, particularly where the use cases involved could have a material impact on consumer outcomes or on firms themselves. In such cases, heightened governance and oversight of algorithms can help mitigate these challenges. “Similar to other approaches that rely on data to build and parameterise models, biases can be inherent in AI.

“There are many forms of bias, and they can be introduced throughout the model development cycle. For example, the data on which models are built and trained may not be representative of the model’s intended purpose, and hence biased; the variables used in the model, or complex combinations of them inherent in the model, could be closely linked to discriminating factors (known as proxy bias); and the biases of the model developers could be reflected in the model design and build.”
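Proxy bias in particular lends itself to a simple illustration. The Python sketch below uses entirely synthetic data and a hypothetical “postcode risk score” to show how a permitted rating feature can quietly stand in for a protected attribute; real fairness audits are considerably more sophisticated.

```python
# Illustrative sketch only: a crude proxy-bias check on synthetic data.
# 'postcode_risk_score' and the protected attribute 'group' are
# hypothetical; real audits use richer fairness metrics.
import random

random.seed(0)

# Synthetic portfolio: the pricing feature is allowed in the model,
# the protected attribute is not.
portfolio = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Assume group B tends to live in higher-scored postcodes, so the
    # feature quietly encodes group membership.
    postcode_risk_score = random.gauss(0.6 if group == "B" else 0.4, 0.1)
    portfolio.append((group, postcode_risk_score))

def mean_score(label: str) -> float:
    scores = [score for group, score in portfolio if group == label]
    return sum(scores) / len(scores)

gap = mean_score("B") - mean_score("A")
print(f"Mean risk score, group B minus group A: {gap:.3f}")
# A persistent gap like this flags the feature for proxy-bias review,
# even though 'group' never enters the pricing model directly.
```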

Inherited and inherent bias

Prof Cal B Muckley, chair in operational risk, banking & finance, at University College Dublin (UCD), agrees there is potential for AI and machine-learning algorithms to carry bias, whether inherent (proxy) or inherited.

Inherited bias arises when, for example, an algorithm is trained on mortgage-decision data from recent decades, says Prof Muckley. “It may be that the loan officers making those decisions were biased against a certain demographic, and their decisions could reflect a higher rate of loan declines than would have occurred had that discrimination not existed. You’re training an AI model on all of these decisions, and the machine-learning algorithm is built to pick up patterns. It may very well pick that bias up and make it harder for that demographic, or, even worse, exacerbate and extrapolate it.”
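The following sketch illustrates Prof Muckley’s point with entirely synthetic data: when the historical decisions a model learns from are biased, the bias is already present in the training labels. The income threshold, group labels and decline rate are all invented for illustration.

```python
# Illustrative sketch only: synthetic "historical" loan decisions with a
# built-in bias against group B. All names, thresholds and rates here
# are invented; the point is that these labels are what a model learns from.
import random

random.seed(1)

def historical_decision(income: float, group: str) -> bool:
    # Hypothetical past loan officers: the same income threshold applies
    # to everyone, but group B applicants are also declined 30% of the
    # time regardless of merit - the discrimination to be inherited.
    approve = income > 40_000
    if group == "B" and random.random() < 0.3:
        approve = False
    return approve

# Generate the biased historical record a model would be trained on.
applicants = [(random.uniform(20_000, 80_000), random.choice(["A", "B"]))
              for _ in range(20_000)]
labels = [historical_decision(income, group) for income, group in applicants]

# Approval rates in the training labels, by group.
for target in ("A", "B"):
    decisions = [ok for (_, group), ok in zip(applicants, labels)
                 if group == target]
    rate = sum(decisions) / len(decisions)
    print(f"Group {target}: historical approval rate {rate:.1%}")
# Any model fitted to these labels can reproduce group B's lower approval
# rate, because the pattern is genuinely present in its training data.
```

Running the sketch shows group B approved markedly less often than group A for the same income distribution, which is exactly the pattern a machine-learning algorithm would pick up and perpetuate.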

We’ll probably never be able to eliminate prejudicial estimates, whether by people or AI or any combination of the two, says Prof Muckley. “With AI, once you do bring it in, it opens the door to reducing discrimination that may have been there undetected for many years.”

Edel Corrigan

Edel Corrigan is a contributor to The Irish Times