The Evolution of Requirements for Insurtech Under the EU AI Act

August 29, 2023
Authored by
Airlie Hilliard
Senior Researcher at Holistic AI

As the EU AI Act has taken shape through multiple rounds of legislative fine-tuning, the insurance sector has increasingly found itself under the microscope.

Significant progress has been made on the landmark bill in recent months, with the European Parliament's leading committees adopting the text on 11 May 2023 ahead of the plenary vote on 14 June 2023, where it was passed by a majority.

Since then, two Trilogues have been held, in which the three EU institutions – the European Commission, the Council, and the Parliament – have begun to negotiate their positions. This signals the beginning of the concretisation of the Act's rules and associated requirements.

These requirements have undergone significant changes since the first version was introduced by the European Commission, with EU member states proposing several compromise texts. The regulation of insurtech – technology used to drive innovation and automate processes in the insurance industry – has been a particular area of focus.

In this blog post, we outline how the requirements for AI in insurance have evolved and what this means for insurance providers using AI in their business.

Key takeaways:

  • Insurance was not originally considered high-risk under the European Commission’s initial proposal for the EU AI Act.
  • European Parliament committees have gone back and forth on the insurance practices and types that would be considered high-risk during the consultation process.
  • The amended text adopted by the European Parliament considers AI used to make or influence decisions about eligibility for health and life insurance high-risk.
  • Insurance providers using AI for this purpose will have to comply with the requirements for high-risk AI systems, including the establishment of a risk management system, data governance and transparency provisions, and mechanisms to facilitate human oversight.

The initial proposal: The text of the European Commission (April 2021)

With its risk-based approach, the EU AI Act imposes obligations that are proportionate to the risk posed by a system, with high-risk AI systems subject to the most stringent obligations. However, the initial text proposed by the European Commission in April 2021 made no mention of insurance practices, other than noting that notified bodies should take out appropriate liability insurance for their conformity assessment activities. This meant that the use of AI in insurance would not have been subject to the requirements for high-risk systems and would instead likely only have been subject to other requirements, such as transparency obligations.

Insurance as a high-risk system: The text of the Slovenian Presidency (November 2021)

It did not take long for the AI Act to zero in on insurtech. The Compromise Text of the Slovenian Presidency, published in November 2021, added insurance as a high-risk application of AI, inserting it into Annex III – which lists high-risk applications of AI – under systems used to determine access to and enjoyment of private and public services and benefits. Here, the Presidency added that AI systems used for insurance premium setting, underwriting, and claims assessments were to be considered high-risk, regardless of the type of insurance they were being used for.

European Parliament amendments (January-June 2022)

In January 2022, the European Parliament’s Committee on the Environment, Public Health and Food Safety published a draft opinion, which proposed that AI systems used in health, healthcare, long-term care, and health insurance that are not already covered by Regulation (EU) 2017/745 – which regulates medical devices – should be considered high-risk. This would include systems that have a direct or indirect effect on health or use sensitive data, as well as AI-driven administrative or management systems used in healthcare settings and by health insurance providers if they process sensitive health data.

Following this, the Committee on Legal Affairs published a draft opinion in March 2022, building on the Slovenian Presidency’s approach. Here, AI systems used to assess insurance premiums, claims, or insurance risk would be considered high-risk. Also in March 2022, the Committee on Industry, Research and Energy took a similar approach, but targeted only AI systems used to determine insurance premiums.

However, the focus for insurance practices was narrowed in the April 2022 draft report by the Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs: only AI systems used to make or assist decisions about eligibility for health and life insurance were considered high-risk, with no mention of underwriting or premium setting. This position changed again in June 2022 with the publication of a subsequent draft report, which proposed that AI systems used for the assessment of insurance risk, the determination and setting of premiums, or the conduct of underwriting or claims assessment be considered high-risk, with the exception of AI systems used for low-value property insurance. The proposal also recommended that AI systems used in health insurance processes be considered high-risk, widening the previously narrowed scope.

Text of the Czech Presidency (October-November 2022)

Having previously removed any provisions relating to insurance or insurtech, the Czech Presidency re-added insurance to the list of high-risk use cases in its fourth presidency text, published in October 2022. In this text, it was proposed that AI systems used in the risk assessment and pricing of health and life insurance products, except systems put into service by SMEs, would be considered high-risk. This position was held for the Czech Presidency’s Preparation for Coreper text, Draft General Approach, and Adopted General Approach, all released in November 2022.

The Adopted Text: Moving towards final requirements for insurance (June 2023)

After the extensive consultation period, the European Parliament voted on an amended text in June 2023, which refocused efforts to regulate insurance on AI systems used to make or influence decisions about eligibility for health and life insurance.
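
To illustrate how this narrowed scope could play out in practice, below is a minimal Python sketch of the kind of first-pass screening check an insurer might run over its AI inventory. It is not prescribed by the Act; the AISystemRecord fields, category labels, and likely_high_risk function are hypothetical and purely illustrative.

```python
# Illustrative only: a rough screening check against the Parliament's adopted
# position (June 2023), under which AI used to make or influence decisions
# about eligibility for health and life insurance is high-risk. All field
# names and categories below are assumptions for illustration.
from dataclasses import dataclass

HIGH_RISK_INSURANCE_USES = {
    ("health", "eligibility"),
    ("life", "eligibility"),
}

@dataclass
class AISystemRecord:
    name: str
    insurance_type: str        # e.g. "health", "life", "property"
    decision_role: str         # e.g. "eligibility", "premium_setting", "claims"
    influences_decision: bool  # True if the system makes or influences the decision

def likely_high_risk(system: AISystemRecord) -> bool:
    """First-pass triage: does the system appear to fall in the high-risk scope?"""
    return (
        system.influences_decision
        and (system.insurance_type, system.decision_role) in HIGH_RISK_INSURANCE_USES
    )

# Example: a model that influences health-insurance eligibility decisions
model = AISystemRecord("eligibility-scorer-v2", "health", "eligibility", True)
print(likely_high_risk(model))  # True -> the Article 9-15 obligations would apply
```

A check like this would only be an internal triage step; legal review would still be needed to confirm whether a given system actually falls within Annex III.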

Seven requirements for high-risk systems

While the exact insurance practices and insurance types that are considered high-risk applications of AI are still up for debate and may evolve as Trilogues continue, it is clear that at least some insurance practices and forms of insurtech will be considered high-risk under the AI Act and will consequently have to comply with the most stringent obligations.

Although obligations can vary by the type of entity, there are, in general, seven requirements for high-risk systems outlined in Articles 9 to 15:

  • The establishment of a continuous and iterative risk management system throughout the entire lifecycle of the AI system (Article 9).
  • The establishment of data governance practices to ensure the appropriate training, validation, and testing for the system’s intended purpose (Article 10).
  • The creation of technical documentation before the system is put onto the market (Article 11).
  • Automatic recording of events to facilitate record-keeping (Article 12) – a minimal logging sketch follows this list.
  • Transparency and provision of information to users should be considered during the development of the system (Article 13).
  • Appropriate human oversight to prevent or minimise risks to health, safety, or fundamental rights (Article 14).
  • An appropriate level of accuracy, robustness and cybersecurity that is maintained throughout the lifecycle of the system (Article 15).
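
As a concrete illustration of the record-keeping obligation (Article 12), below is a minimal, hypothetical Python sketch of automatic event logging for an insurance eligibility model. The logger name, file path, and event fields are illustrative assumptions rather than requirements set out in the Act.

```python
# Hypothetical sketch: log one structured, timestamped record per automated
# decision so that individual outcomes can be traced and audited later.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("eligibility_model_audit")
logging.basicConfig(filename="eligibility_audit.log", level=logging.INFO)

def log_decision_event(model_version: str, applicant_id: str,
                       inputs: dict, score: float, decision: str) -> None:
    """Append one timestamped, structured record per automated decision."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "applicant_id": applicant_id,
        "inputs": inputs,        # features the model actually received
        "score": score,          # raw model output
        "decision": decision,    # outcome passed downstream
    }
    logger.info(json.dumps(event))

# Example: record a single eligibility decision
log_decision_event("v2.3.1", "A-10492", {"age": 41, "smoker": False}, 0.87, "eligible")
```

Structured records of this kind make it possible to reconstruct how a particular decision was reached, which is the practical aim of the logging requirement.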

Prepare for the EU AI Act with Holistic AI

The implementation of the final version of the EU AI Act is on the horizon – and that means the insurance sector, and insurtech companies in particular, will soon have to adapt to a dramatic new legislative paradigm. With a maximum fine of 40 million euros or 7% of global turnover outlined in the latest version of the Act, non-compliance is not an option for those using AI in their operations.

Holistic AI are governance, risk and compliance experts, and we are here to help you prepare early. Learn how.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
