The new AI Act establishes an obligation for deployers of certain high-risk AI systems to conduct a “fundamental rights impact assessment” (“FRIA”). This will have a significant impact on insurance companies that use AI systems for risk assessment and pricing of life and health insurance products, as well as for creditworthiness assessments.
The EU Parliament recently approved the Artificial Intelligence Regulation (“AI Act”). This regulation (expected to be formally approved around April) aims to regulate the development, deployment and use of AI systems within the EU. It categorizes these systems into different risk levels, imposing stricter obligations on higher-risk AI systems. The AI Act also prohibits certain uses of AI (e.g., systems designed to manipulate behaviour or exploit vulnerabilities).
Insurance companies making use of AI systems will have to comply with several obligations. For instance, they will need to have an AI governance program in place, affix the CE mark and comply with transparency obligations, among other requirements, and, in some cases, conduct a FRIA.
A FRIA is an assessment that deployers* must conduct before deploying certain high-risk AI systems for the first time. The aim of the FRIA is for the deployer to identify the specific risks to the fundamental rights of the people affected and the measures to be taken if those risks materialize. This assessment must include: (i) a description of the deployer's processes in which the high-risk AI system will be used in line with its intended purpose; (ii) the period of time and the frequency with which the system is intended to be used; (iii) the categories of natural persons and groups likely to be affected; (iv) the specific risks of harm likely to have an impact on those categories; (v) a description of the implementation of human oversight measures; and (vi) the measures to be taken where those risks materialize.
The FRIA has to be performed by the deployers of certain high-risk AI systems, including (i) AI systems that evaluate the creditworthiness of natural persons or establish their credit score (except for systems used to detect financial fraud); and (ii) AI systems used for risk assessment and pricing in relation to natural persons for life and health insurance.
The AI Act will therefore have a major impact on the insurance sector, as companies operating in this area may use these kinds of systems in their daily activities. There is no doubt that AI can be very helpful for calculating life and health insurance premiums, but these companies must also safeguard the fundamental rights of individuals. In fact, the AI Act names banking and insurance entities as examples of companies that should carry out a FRIA before implementing this kind of AI system.
Although the FRIA needs to be performed only before deploying the system for the first time, the deployer must update it if any element changes or is no longer up to date. In similar cases, the deployer may rely on previously conducted FRIAs or on existing impact assessments carried out by the provider of the system. In addition, the FRIA can form part of, and complement, a data protection impact assessment (“DPIA”) under Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation).
The deployer also has to notify the market surveillance authority of the results of the FRIA (in Spain, the Statute of the Spanish Artificial Intelligence Supervisory Agency has already been approved). As part of this notification, a questionnaire will have to be completed through an automated tool to be developed by the AI authority.
As to how the FRIA should be carried out, there are several options, depending on how AI-related obligations are structured within the company. The option that may make the most sense for insurance companies is to carry out the FRIA together with the DPIA, as there may be many synergies to leverage. This way, the data protection officer and the privacy team can also be involved.
In addition, insurance companies already have procedures in place to carry out DPIAs. Integrating FRIAs into the same process could be less burdensome and require fewer resources.
Finally, FRIAs should be aligned with the insurance company's AI governance program. Very often, the risks for individuals (e.g., the existence of biases or discrimination) will already be covered by that program.
Even though the formal obligation will not apply for a couple of years, the sooner the FRIA process is ready, the better. This way, the implementation of the AI Act will be smoother and the company will be in a position to demonstrate compliance.
*A deployer is the natural or legal person using an AI system under its authority in the course of a professional activity, and may be different from the developer or distributor of the system.
Authored by Gonzalo F. Gállego, Juan Ramón Robles, and