EU AI Act
Compliance Hub
Everything you need to prepare for the August 2, 2026 deadline. Risk classifications, requirements, timelines, and practical guidance.
What is the EU AI Act?
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It establishes rules for AI systems based on their risk level and applies to any organization placing AI systems on the EU market or whose AI outputs affect people in the EU, regardless of where the organization is based.
Key Compliance Dates
Act Enters into Force
The EU AI Act officially became law on August 1, 2024.
Prohibited AI Practices
Bans on social scoring, emotion recognition in workplaces, and other prohibited practices took effect on February 2, 2025.
GPAI Model Obligations
General-purpose AI (GPAI) model providers must comply with transparency requirements from August 2, 2025.
Full High-Risk Compliance
High-risk AI systems must be fully compliant by August 2, 2026. This is the critical deadline for most businesses.
Embedded AI Products
AI embedded in regulated products (medical devices, machinery, etc.) must comply by August 2, 2027.
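The phased timeline above can be encoded so a compliance tracker can count down to each milestone. This is an illustrative sketch; the dictionary keys and helper name are our own, only the dates come from the Act's schedule.

```python
from datetime import date

# Key EU AI Act milestones, as set out in the Act's phased timeline.
MILESTONES = {
    "entry_into_force": date(2024, 8, 1),
    "prohibited_practices": date(2025, 2, 2),
    "gpai_obligations": date(2025, 8, 2),
    "high_risk_compliance": date(2026, 8, 2),
    "embedded_ai_products": date(2027, 8, 2),
}

def days_until(milestone: str, today: date) -> int:
    """Days remaining until a milestone (negative if already passed)."""
    return (MILESTONES[milestone] - today).days
```

For example, `days_until("high_risk_compliance", date.today())` gives the number of days an organization has left before the critical August 2, 2026 deadline.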
Understanding AI Risk Tiers
The EU AI Act classifies AI systems into four risk categories, each with different requirements.
Unacceptable Risk: Banned AI Practices
AI systems that pose unacceptable risks to people's safety and rights.
- Social scoring by governments
- Emotion recognition in workplaces/schools
- Biometric categorization by sensitive attributes
- Predictive policing based on profiling
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
High Risk: Strictly Regulated
AI systems affecting fundamental rights, safety, or critical decisions.
- Recruitment and HR decisions
- Credit scoring and lending
- Education assessment
- Law enforcement
- Critical infrastructure management
- Medical devices and diagnostics
Limited Risk: Transparency Required
AI systems that interact with people and must disclose that AI is in use.
- Chatbots and virtual assistants
- AI-generated content
- Emotion recognition (where permitted)
- Deepfakes (must be labeled as AI-generated or manipulated)
Minimal Risk: Voluntary Compliance
AI systems with low risk to rights or safety.
- Spam filters
- Video game AI
- Inventory management
- Content recommendation (non-profiling)
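The four tiers above lend themselves to a first-pass triage of an AI inventory. The sketch below is a hypothetical simplification: the use-case labels and tier sets are ours, and real classification requires legal analysis of Annex III and the prohibited-practices list, not a lookup table.

```python
# Illustrative use-case labels only; not the Act's legal definitions.
PROHIBITED = {"social_scoring", "workplace_emotion_recognition",
              "predictive_policing_profiling"}
HIGH_RISK = {"recruitment", "credit_scoring", "education_assessment",
             "law_enforcement", "critical_infrastructure", "medical_diagnostics"}
LIMITED_RISK = {"chatbot", "ai_generated_content", "deepfake"}

def risk_tier(use_case: str) -> str:
    """Map a simplified use-case label to one of the Act's four risk tiers."""
    if use_case in PROHIBITED:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"  # everything else: voluntary codes of conduct
```

A triage like this is useful for scoping an assessment, e.g. deciding which systems in an inventory need the full high-risk obligations reviewed first.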
High-Risk AI Obligations
If your AI system is classified as high-risk, you must meet these requirements.
Risk Management System
Establish and maintain a risk management system throughout the AI system's lifecycle.
Data Governance
Ensure training, validation, and testing data sets meet quality criteria and are examined for possible biases.
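Bias examination in practice starts with simple checks on the data itself. Below is a minimal sketch of one such check, flagging groups that are under-represented in a training set; the `key` field and 10% threshold are assumptions for illustration, and the Act's data governance duties go far beyond this.

```python
from collections import Counter

def underrepresented_groups(records, key="group", min_share=0.1):
    """Return groups whose share of the data set falls below min_share.

    records: iterable of dicts, each carrying a group label under `key`
    (a hypothetical schema for this sketch).
    """
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return [g for g, n in counts.items() if n / total < min_share]
```

A check like this would run as part of a data-governance pipeline before each training run, with flagged groups triggering manual review.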
Technical Documentation
Maintain detailed technical documentation demonstrating compliance before placing on market.
Record Keeping
Automatically generate logs enabling traceability of AI system functioning.
Transparency
Design systems so that users can interpret their output and use it appropriately.
Human Oversight
Allow effective oversight by humans, including ability to override or stop the system.
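One concrete shape human oversight can take is a stop mechanism the system consults before acting on any decision. The class and method names below are illustrative, not terms from the Act; this is a sketch of the pattern, not a compliance implementation.

```python
import threading

class OversightGate:
    """A stop flag a human reviewer can set at any time (hypothetical design)."""

    def __init__(self):
        self._stopped = threading.Event()

    def stop(self):
        """Human operator halts the system."""
        self._stopped.set()

    def resume(self):
        """Human operator allows the system to continue."""
        self._stopped.clear()

    def approve(self, decision):
        """Pass a decision through only while the system is not stopped."""
        if self._stopped.is_set():
            raise RuntimeError("system halted by human overseer")
        return decision
```

Routing every automated decision through `approve()` gives the overseer a single point at which the system can be stopped, which is the "override or stop" capability the obligation describes.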
Accuracy & Robustness
Achieve appropriate levels of accuracy, robustness, and cybersecurity.
Conformity Assessment
Undergo a conformity assessment before placing the system on the market or putting it into service, and affix CE marking.
Not sure if you're compliant?
Our 2-week assessment will classify your AI systems, identify gaps, and provide a clear roadmap to compliance.