EU AI Act, AI literacy and insurance

The AI Act aims to ensure a high level of protection for fundamental rights, health, and safety, while existing insurance sector legislation, which already applies to the use of AI by supervised insurance entities, establishes conduct-of-business principles and prudential objectives.

The AI Act follows a risk-based approach, classifying AI systems into four risk levels: unacceptable, high, limited, and minimal risk.

In the insurance sector, the AI Act identifies as high-risk those AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.

The AI Act establishes a comprehensive set of requirements that providers and deployers of high-risk AI systems will need to comply with.

  • Providers must ensure training, validation, and testing data sets follow data
    governance practices to detect and prevent biases that could lead to discrimination.
  • Providers should ensure that the AI system undergoes a conformity assessment
    based on internal control to ensure compliance with the AI Act.
  • Deployers should conduct a fundamental rights impact assessment prior to the first use of the AI system.
  • Providers must ensure AI systems are designed to be transparent enough for users to interpret a system’s output and use it appropriately.
  • Providers should register themselves and the AI system in the EU database.
  • If they identify a serious incident, providers and deployers must report it to the
    relevant authority (deployers must also inform the provider) and take corrective
    action or suspend the use of the system.

For the remaining AI systems that are not considered to be high-risk, the AI Act establishes some minimum transparency requirements, the need to promote staff AI literacy, and the development of voluntary codes of conduct.

The EU AI Act entered into force on August 1, 2024, with its requirements taking effect under a staggered timeline. The majority of its provisions will apply from August 2, 2026. However, the AI literacy provisions started to apply as early as February 2, 2025.

This means that, as of February 2, 2025, providers and deployers of AI systems must take measures to ensure, to the best of their ability, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf.

These measures should take into account the staff's technical knowledge, experience, education, and training, as well as the context in which the AI systems are to be used and the persons or groups of persons on whom they are to be used.

AI literacy means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of the AI Act, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.

A few key considerations from my side:

  1. Do not underestimate this requirement—it’s more than just a box-ticking exercise.
  2. Different roles in your organization require different levels of AI literacy.
  3. AI literacy isn’t just about compliance; it can also drive innovation within your organization.

Contact me if you want to better understand the AI Act's implications for insurance and its interaction with insurance regulation.