Compliance Resource Center

EU AI Act Compliance Hub

Everything you need to prepare for the August 2, 2026 deadline. Risk classifications, requirements, timelines, and practical guidance.


What is the EU AI Act?

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It sets rules for AI systems based on their risk level and applies to any organization whose AI systems are placed on the EU market or affect people in the EU, regardless of where the organization is based.

Penalties are steep: fines for the most serious violations can reach €35 million or 7% of global annual revenue, whichever is higher. For a company with €1 billion in annual revenue, for example, the 7% ceiling works out to a potential €70 million fine. The Phase 2 compliance deadline is August 2, 2026.

Key Compliance Dates

August 2024: Act Enters into Force
The EU AI Act officially became law.

February 2025: Prohibited AI Practices
Bans on social scoring, emotion recognition in workplaces, and other prohibited practices took effect.

August 2025: GPAI Model Obligations
General-purpose AI model providers must comply with transparency requirements.

August 2026: Full High-Risk Compliance
High-risk AI systems must be fully compliant. This is the critical deadline for most businesses.

August 2027: Embedded AI Products
AI embedded in regulated products (medical devices, etc.) must comply.

Understanding AI Risk Tiers

The EU AI Act classifies AI systems into four risk categories, each with different requirements.

Prohibited: Banned AI Practices
AI systems that pose unacceptable risks to people's safety and rights.

  • Social scoring by governments
  • Emotion recognition in workplaces and schools
  • Biometric categorization by sensitive attributes
  • Predictive policing based on profiling
  • Untargeted scraping of facial images from the internet to build facial recognition databases

Cannot be deployed in the EU

High Risk: Strictly Regulated
AI systems affecting fundamental rights, safety, or critical decisions.

  • Recruitment and HR decisions
  • Credit scoring and lending
  • Education assessment
  • Law enforcement
  • Critical infrastructure management
  • Medical devices and diagnostics

Extensive compliance required

Limited Risk: Transparency Required
AI systems that interact with people and must disclose that AI is being used.

  • Chatbots and virtual assistants
  • AI-generated content
  • Emotion recognition (where permitted)
  • Deepfakes and other synthetic media

Must disclose AI use to users

Minimal Risk: Voluntary Compliance
AI systems with low risk to rights or safety.

  • Spam filters
  • Video game AI
  • Inventory management
  • Content recommendation (non-profiling)

No mandatory requirements
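
For teams cataloguing their systems, the sketch below shows one way these four tiers could be represented in an internal AI inventory. It is a minimal Python illustration, assuming each system is tagged with a simple use-case label; the example mapping is illustrative only, not a legal classification.

```python
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # cannot be deployed in the EU
    HIGH = "high"               # extensive compliance required
    LIMITED = "limited"         # transparency obligations
    MINIMAL = "minimal"         # no mandatory requirements

# Illustrative mapping of example use cases to tiers, mirroring the
# categories above; real classification needs case-by-case review.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.PROHIBITED,
    "workplace_emotion_recognition": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> Optional[RiskTier]:
    """Look up an example use case; None means it needs manual review."""
    return EXAMPLE_USE_CASES.get(use_case)

if __name__ == "__main__":
    for case in ("credit_scoring", "customer_chatbot", "unreviewed_tool"):
        tier = classify(case)
        print(f"{case}: {tier.value if tier else 'needs manual review'}")
```

A lookup like this only makes gaps visible: anything not in the inventory, or returning no tier, is a candidate for the kind of assessment described at the end of this page.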

High-Risk AI Obligations

If your AI system is classified as high-risk, you must meet these requirements.

1. Risk Management System: Establish and maintain a risk management system throughout the AI system's lifecycle.
2. Data Governance: Ensure training, validation, and testing data sets meet quality criteria and are examined for possible biases.
3. Technical Documentation: Maintain detailed technical documentation demonstrating compliance before placing the system on the market.
4. Record Keeping: Automatically generate logs enabling traceability of the AI system's functioning (see the logging sketch after this list).
5. Transparency: Design the system so that users can interpret its output and use it appropriately.
6. Human Oversight: Allow effective oversight by humans, including the ability to override or stop the system.
7. Accuracy & Robustness: Achieve appropriate levels of accuracy, robustness, and cybersecurity.
8. Conformity Assessment: Undergo a conformity assessment before deployment and obtain CE marking.
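
To make the record-keeping obligation (item 4) concrete, here is a minimal sketch of an append-only decision log. It assumes a hypothetical log_decision helper writing JSON-lines records; the field names, file path, and override flag are illustrative examples, not a prescribed format.

```python
import json
import uuid
from datetime import datetime, timezone

# Illustrative append-only log for decision traceability (item 4).
LOG_PATH = "ai_decision_log.jsonl"

def log_decision(system_id: str, model_version: str, inputs: dict,
                 output: str, human_override: bool = False) -> str:
    """Append one traceable record per AI decision and return its ID."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        # Flagging overrides supports human-oversight reviews (item 6).
        "human_override": human_override,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_id"]

# Example: a recruitment-screening system logging one decision.
if __name__ == "__main__":
    rid = log_decision(
        system_id="cv-screening-v2",
        model_version="2.3.1",
        inputs={"candidate_id": "anon-123", "role": "data-analyst"},
        output="advance_to_interview",
    )
    print("logged record", rid)
```

Structured records like these are what make the traceability and oversight obligations auditable in practice, whatever logging stack you actually use.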

Not sure if you're compliant?

Our 2-week assessment will classify your AI systems, identify gaps, and provide a clear roadmap to compliance.