
In March 2024 the European Union Parliament passed what is widely regarded as the most comprehensive artificial intelligence legislation to date.
The nearly 900 pages of text are designed to make the future development of AI technology more human-centric and transparent. The Act also prohibits certain AI applications outright within all EU member states.
The EU AI Act identifies seven key principles that underpin responsible AI development and deployment:
- Human Agency and Oversight: AI systems must remain under human control and not operate autonomously in a manner that undermines human decision-making.
- Technical Robustness and Safety: AI systems must be designed to be resilient and secure, minimizing the risk of failures or misuse.
- Privacy and Data Governance: AI systems must comply with data protection laws and protect personal data.
- Transparency: AI operations should be understandable and explainable, enabling users to trust the technology.
- Diversity, Non-discrimination, and Fairness: AI must be designed to be inclusive, avoiding any form of discrimination.
- Societal and Environmental Well-being: AI systems should contribute positively to society and the environment.
- Accountability: Clear accountability structures must be in place to address any adverse outcomes associated with AI.
How is Artificial Intelligence Defined?
The legislation defines artificial intelligence as a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments. The definition is deliberately broad so that it can encompass technologies that do not yet exist, giving the law wide latitude to regulate a variety of current and future technologies deployed in the EU.
AI’s growing influence in various sectors has highlighted several critical areas where regulation is essential. These areas include:
- Discrimination and Bias: AI systems can perpetuate or even exacerbate bias, leading to unfair practices.
- Privacy Violations: AI’s capability to process vast amounts of data raises concerns about personal privacy.
- Intellectual Property Issues: The use of data for AI training can infringe on intellectual property rights.
- Hallucinations: AI systems can produce incorrect or misleading outputs, leading to potentially dangerous situations.
Provisions for AI Risk Management
The EU AI Act introduces several significant provisions that emphasize risk management and the protection of fundamental rights. The Act aims to foster innovation by establishing clear rules that enhance transparency and harmonization, thereby supporting the responsible development, deployment, and marketing of AI tools.
The Act separates the types of AI into four risk levels:
- Level One: Poses no risk and is virtually unregulated. An example of a Level One risk would be a spam filter.
- Level Two: Requires minimal oversight and simply demands transparency. An example of a Level Two risk would be a deepfake, meaning the generation or manipulation of images and voices. These require a label or disclaimer to the viewer unless the deepfake is clearly obvious to any reasonable person.
- Level Three (High Risk): AI used in sectors like transportation, healthcare, safety and law enforcement poses significant risks and requires strict oversight. Companies must conduct rigorous risk assessments; provide high-quality data to demonstrate risk mitigation and bias elimination; and maintain detailed logs and documentation for regulators.
- Level Four (Unacceptable): AI activities like social scoring (e.g., China’s system for scoring behavior) and facial recognition are banned under the Act due to their privacy risks. In China, scores are adjusted based on behaviors deemed acceptable or unacceptable by the government, which can lead to denied loans or travel.
While Level Four risks are generally forbidden, there are some exceptions. For example, law enforcement may be allowed to use facial recognition technology in limited situations, such as during an active kidnapping or terrorist threat. Additionally, the law includes carve-outs for military, defense and national security agencies.
The Near Future
The EU AI Act is expected to be enforced within the next two to three years. While the EU Parliament has approved it, member countries still need to ratify it, a formality expected to pass quickly. Companies developing or using AI in the EU, especially at Level Two or higher risks, should begin the compliance process now.
The Act establishes an AI office to oversee enforcement, with local agencies in each member country. Non-compliance penalties will be tied to annual revenue, with fines increasing based on the severity of violations, from providing misleading information to breaking prohibited rules.
Though the Act applies to EU-based businesses, there is hope that its reach will expand globally.
The rationale:
- First, the European Union is a robust economy, and foreign companies will choose to comply with the regulations rather than lose the revenue the EU market offers.
- Second, the so-called Brussels effect: the idea that because the EU has led the way with groundbreaking legislation, other countries will recognize the benefits and use the Act as a model for their own AI legislation.
We saw this after the EU tackled data privacy with the implementation of the General Data Protection Regulation (GDPR) in 2018. Currently, more than a dozen countries around the world have GDPR-like data privacy laws.
Industry Feedback on the EU AI Act
Our analysis and discussions with cyber insurers suggest the EU AI Act is unlikely to have an immediate impact on current policy coverage.
Insurer One: “The AI Act focuses on classifying AI systems by risk and enforcing safety, transparency and quality measures. As it targets business conduct, it doesn’t directly affect existing cyber policies.”
Insurer Two: “We don’t restrict AI-related risks and see the Act as a slow-moving influence, similar to GDPR. Although the regulation may reshape the risk landscape over time, current civil liability exposures are already addressed by our policy wordings.”
The Impact of the EU AI Act on Clients
The EU AI Act introduces key considerations for businesses and brokers, impacting AI risk management and insurance coverage:
- Increased Cyber Risks: High-risk AI systems may attract more cybercriminal activity. Clients should focus on robust cyber defenses, including AI-specific threat assessments, updated response plans and cyber liability coverage.
- Compliance and Insurance: Non-compliance with the Act can lead to significant fines, similar to GDPR. Clients may need enhanced coverage for regulatory fines, legal defense and data breaches tied to AI.
- Data Governance Liabilities: The Act emphasizes strict data governance. Poor practices could result in liabilities, such as discrimination or erroneous AI outputs. Coverage for errors, omissions and product liability should be considered.
- Human Oversight: Mandated human oversight of AI systems may introduce inefficiencies. Policies should cover risks related to human error and operational disruptions.
- Future-Proofing Coverage: Regular policy reviews and proactive risk management are essential as AI technology and regulations evolve.
NFP is well-positioned to guide clients through the complexities of the EU AI Act, ensuring they are protected from emerging AI-related risks.
Why NFP, an Aon company?
At NFP, our specialists stay ahead of AI regulatory changes, including the EU AI Act. We offer tailored cyber insurance for AI-related risks and guide you through the impact on your operations.
Our Management, Cyber, and Professional Liability practice offers specialized insurance solutions in management liability, cyber liability and professional liability. With over 25 years of expertise, we provide coverage for directors and officers, cyber risks, fiduciary and employment practices, professional liability, crime, ransom and more.
Whether inside or outside the EU, our global expertise, risk assessments and consulting services ensure you avoid costly penalties and stay compliant.