How should AI be regulated in insurance?
The risks and concerns around artificial intelligence put pressure on policymakers and regulators to step in with a variety of initiatives. What is the best regulatory approach for AI in insurance? What are the risks posed by AI and what should insurers do to mitigate them?
‘AI risks are not new’
Issues around the lack of transparency and sustainability, discrimination, bias and fairness are not new to insurance, experts have said, and can be addressed by enhancing transparency and existing governance frameworks. But could the complexity of AI pose more risks?
According to Maryland Insurance Commissioner Kathleen Birrane, one of the main concerns is around errors in decision making.
“For regulatory supervisors, probably their core consideration is whether decisions that are made by the industry using AI systems meet the regulatory standards that apply to those decisions. Are decisions inaccurate, arbitrary and capricious? Are they skewed in a way that adversely impacts protected classes or otherwise results in their illegal discrimination?” she said during a webinar organised by thinktank the Geneva Association on 2 November.
Julian Arevalo, senior expert on financial innovation at the European Insurance and Occupational Pensions Authority, highlighted the increasing use of datasets as a risk. For instance, texts and videos were not considered in the past but are now included in decision making.
Reliance on third parties is another important risk, he noted: a small number of big players in the market offer generative AI models to insurers but keep customers in the dark about how their models are trained.
Arevalo said: “For example, the company that develops the foundation models does not tell you how the data, and which data, is processed. This [raises a] number of risks that need to be mitigated.”
For Christopher Cates, director of legal affairs at Canadian insurer Intact Financial Corporation, the primary risk is losing customers’ trust if they are not treated in a “fair and equitable manner”.
“If we make inaccurate decisions or blindly trust a system that we don't understand, there's a real risk that our customers will not really trust us to take care of them when they need us. That's the core of insurance,” he said.
Is insurance ‘high risk’?
One of the most notable developments in AI is the EU’s AI Act, with draft rules currently being negotiated by EU institutions in a process called trilogue.
This is a ‘horizontal’ framework, meaning it applies across sectors and is not specific to insurance. It follows a risk-based approach, differentiating between AI applications that potentially have low, limited, high or unacceptable risks.
According to a report by the Geneva Association, while the original proposal of the AI Act did not explicitly include AI applications in insurance, an updated version added AI applications used for risk assessment and pricing decisions in life and health insurance as high risk.
If insurance applications are considered ‘high risk’, the sector would be subject to specific regulatory requirements for aspects such as risk management systems, data governance and management practices, said the GA.
But Arevalo argued negotiations are still ongoing and the situation may change.
He said while Eiopa welcomes the AI Act, the insurance sector is already highly regulated, and therefore “has certain specificities that make the regulation of artificial intelligence more practical [and] more desirable at a sectoral level, rather than at a cross-sectoral level”.
He added: “This doesn't mean that Eiopa doesn't see the risks linked to AI… we see the need to provide further guidance on how existing regulation applies specifically, in an AI context, to address the specific risks caused by this technology.”
Principles-based approach to AI
Rather than developing specific regulatory requirements for the use of AI in insurance, Singapore prefers providing principles and guidance. The Monetary Authority of Singapore developed the Fairness, Ethics, Accountability and Transparency principles in 2018. These principles serve as high-level guidance for the responsible use of AI in the financial sector, including insurance.
Mike Wong, director and head of the insurance division at MAS, said the FEAT principles are set up so insurers can operationalise the governance of AI “in a more systematic way”, adding: “We need to promote trust and confidence in the use of AI, not just within insurance, but especially within consumers and policyholders.”
To provide practical guidance on the implementation of the FEAT principles, MAS created the Veritas consortium, with participants from across the financial sector, to develop actionable methodologies for implementing the principles.
The Veritas project contains three phases, Wong said. For phases 1 and 2, he explained the focus was on the FEAT principles and a methodology to implement them.
For the latest phase, he said the regulator had worked with consortium members on integrating the Veritas methodology into existing risk management frameworks, and on documenting potential challenges and lessons learned from these experiences.
“We wanted this documentation to serve as reference for future integration cycles between organisations,” Wong said.
“As the technology evolves, and the challenges and complexity presented by AI deepen, MAS will continuously assess whether there is a need to move beyond principles.”
Principles and expectations equals responsible deployment?
In the US, each state is individually responsible for insurance regulation, with the National Association of Insurance Commissioners providing a convening platform for state regulators.
Birrane explained that regulators have adopted a measured, principles-based approach, balancing support for innovation with the need for responsible development and deployment, and consumer protection.
She said the next step would be to develop specific guidance, in the form of a bulletin, to set expectations regarding the use of AI systems. Two rounds of consultations on a draft bulletin took place this year, and a virtual public meeting has been scheduled for 16 November.
“[By] recognising some of the challenges and the risks associated with these methods about the scope of issues, the speed of issues, we do expect carriers to adopt controls and procedures to mitigate those risks,” Birrane said.
“The rigour of the controls aligns with the degree of risks that a consumer can be harmed. Our approach in this document is principles-based; the bulletin does not dictate a specific content or specific standards, and it doesn't extend beyond the interpretive application of existing laws to AI methods.”
‘AI risk comes from smaller, non-insurance players’
Cates pushed back against the idea that AI risks sit chiefly with large, regulated financial institutions on the grounds that any error there would have a significant impact.
“We have a series of checks and balances because we've been regulated. We have supervisory authorities that we have strong relationships with. We also understand that if you make a small [AI] error - let's say in underwriting - it can have a very fundamental impact on a line of business around product profitability,” he said.
He believes the risks posed by AI are more likely to come from “smaller, less regulated players”, given that “anyone in society can get open source or near open source Gen AI tools”, such as deepfake generators that produce synthetic video and images.
Cates concluded: “These are all basically attracting attention from regulators and are things that came out of smaller unregulated entities. These are not coming out of large institutional players.”
What are your thoughts? How should AI be regulated in insurance?