EU AI Act: High-Risk AI Systems — The Complete Compliance Guide

August 2, 2026 marked the deadline for high-risk AI systems under the EU AI Act — the world’s first comprehensive AI regulation. Organizations deploying or developing AI in credit scoring, recruitment, critical infrastructure, biometric identification, or other high-risk areas must now demonstrate compliance with risk management, data governance, transparency, and human oversight requirements. The penalties for non-compliance rank among the highest in EU regulation: up to EUR 35 million or 7% of global annual turnover, whichever is higher.
This guide covers which AI systems are classified as high-risk, what compliance requires, the conformity assessment process, how the AI Act interacts with DORA and GDPR, and a practical preparation roadmap for organizations affected by the August 2026 deadline.
Which AI Systems Are High-Risk?
The AI Act classifies AI systems into four risk tiers: unacceptable (prohibited), high, limited, and minimal. The high-risk use cases are listed in Annex III:
- Biometric identification and categorization: Real-time and retrospective (“post”) remote biometric identification of natural persons
- Critical infrastructure management: AI for the management and operation of critical digital infrastructure, road traffic, and water, gas, heating, and electricity supply
- Education and vocational training: AI for determining access to education, assessing learning outcomes, or monitoring prohibited behavior during exams
- Employment and worker management: AI for recruitment, screening, evaluating candidates, making decisions on promotions or terminations, and task allocation based on behavior or traits
- Access to essential services: AI for creditworthiness assessment and credit scoring, risk assessment and pricing in life and health insurance, and evaluating eligibility for essential public benefits and services
- Law enforcement: AI for individual risk assessment, polygraph-like tools, evidence evaluation, crime prediction, and profiling
- Migration and border control: AI for examining visa applications, assessing security risks, and verifying document authenticity
- Administration of justice: AI for applying the law to facts and for alternative dispute resolution
Additionally, AI systems used as safety components in products covered by EU harmonized legislation (medical devices, machinery, vehicles) are automatically classified as high-risk.
Compliance Requirements for High-Risk AI
Risk Management System (Article 9)
Establish a continuous risk management process covering the entire AI system lifecycle: identify and analyze known and reasonably foreseeable risks, evaluate risks against the system’s intended purpose and conditions of use, implement appropriate mitigation measures, and test the system for compliance with requirements. Risk management is not a one-time assessment — it must be maintained and updated throughout the system’s operation.
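Operationalizing Article 9 usually means maintaining a living risk register with scheduled reviews. The sketch below shows one minimal way to model that in Python; the class names, severity scale, and 90-day review cycle are illustrative assumptions, not terms from the Act.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskStatus(Enum):
    IDENTIFIED = "identified"
    MITIGATED = "mitigated"
    ACCEPTED = "accepted"    # residual risk judged acceptable

@dataclass
class Risk:
    description: str
    severity: int            # e.g. 1 (low) to 5 (critical); scale is a policy choice
    likelihood: int          # e.g. 1 (rare) to 5 (frequent)
    mitigation: str = ""
    status: RiskStatus = RiskStatus.IDENTIFIED
    last_reviewed: date = field(default_factory=date.today)

@dataclass
class RiskRegister:
    system_name: str
    risks: list[Risk] = field(default_factory=list)

    def overdue_reviews(self, max_age_days: int = 90) -> list[Risk]:
        """Risks overdue for periodic review; supports the continuous
        (not one-time) character of the Article 9 process."""
        today = date.today()
        return [r for r in self.risks
                if (today - r.last_reviewed).days > max_age_days]
```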
Data Governance (Article 10)
Training, validation, and testing datasets must meet quality criteria: relevance and representativeness (does the data reflect the population the AI will serve?), accuracy and completeness, freedom from errors and bias, and appropriate statistical properties. Data governance is critical for demonstrating that the AI system does not produce discriminatory outcomes. Financial institutions must connect this with existing BCBS 239 data quality frameworks.
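As one concrete example of a representativeness check, the sketch below computes the largest gap in positive-outcome rates between groups in a training set (a demographic parity measure). The column names, data, and the 0.2 threshold are assumptions for illustration; the AI Act prescribes no specific metric or threshold.

```python
import pandas as pd

def selection_rate_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in positive-outcome rates between any two groups.
    A large gap is a signal for deeper bias analysis, not proof of discrimination."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Illustrative training data for a credit-approval model
data = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "approved": [1, 0, 1, 1, 0, 0],
})
gap = selection_rate_gap(data, "age_band", "approved")
if gap > 0.2:  # threshold is a policy choice, not an AI Act figure
    print(f"Selection-rate gap {gap:.2f} exceeds policy threshold; investigate.")
```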
Technical Documentation (Article 11 + Annex IV)
Comprehensive documentation covering: general system description (intended purpose, developers, version), detailed system description (architecture, algorithms, training methodology), monitoring and functioning (logging, calibration, human oversight), risk management documentation, and applicable standards and the EU declaration of conformity.
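Teams often keep a machine-readable skeleton of this documentation alongside the model so it stays versioned with the code. A sketch of what that could look like, with section names paraphrasing Annex IV and all values as placeholders:

```python
# Illustrative Annex IV documentation skeleton; every value is a placeholder.
annex_iv_doc = {
    "general_description": {
        "intended_purpose": "Creditworthiness assessment for consumer loans",
        "provider": "Example Bank AG",
        "version": "2.3.1",
    },
    "detailed_description": {
        "architecture": "Gradient-boosted trees",
        "training_methodology": "Supervised learning on 2019-2024 loan outcomes",
    },
    "monitoring_and_functioning": {
        "logging": "JSON audit log per decision (see Article 12)",
        "human_oversight": "Analyst review queue for declined applications",
    },
    "risk_management": "Link to Article 9 risk register",
    "standards_applied": ["ISO/IEC 42001"],  # example entry; verify applicability
}
```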
Record Keeping (Article 12)
High-risk AI systems must enable automatic logging of events throughout their operation: input data references, system outputs and decisions, anomalies and errors detected, and user identification. Logs must be retained for at least 6 months (longer where sector regulation requires it, such as DORA or MaRisk).
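A minimal sketch of decision-level audit logging as JSON lines, assuming a Python stack. The field names are illustrative; Article 12 prescribes logging capabilities, not a schema, and retention periods and access controls would be handled separately.

```python
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("ai_audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("decisions.jsonl"))  # retention managed elsewhere

def log_decision(input_ref: str, output: str, user_id: str,
                 anomaly: str | None = None) -> None:
    """One JSON line per decision: input reference, output, user, anomalies."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_ref": input_ref,   # a reference, not raw personal data
        "output": output,
        "user_id": user_id,
        "anomaly": anomaly,
    }))

log_decision("application-7f3a", "declined", "analyst-42")
```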
Transparency (Article 13)
High-risk AI systems must be designed to allow deployers to interpret outputs and use them appropriately. Deployers must be informed about: system capabilities and limitations, the level of accuracy and expected error rates, known risks and foreseeable misuse scenarios, and human oversight measures. For AI systems that interact with individuals, the person must be informed that they are interacting with an AI system.
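One practical way to serve the deployer-facing part of this obligation is to ship interpretation context with every output rather than burying it in a manual. A sketch, with all wording and figures as placeholders:

```python
def explain_output(score: float) -> dict:
    """Package a model output with the interpretation context Article 13
    says deployers need. All figures and wording are illustrative."""
    return {
        "score": score,
        "meaning": "Estimated probability of default; higher means riskier",
        "known_limitations": "Not validated for applicants without credit history",
        "expected_error": "AUC 0.82 on 2024 holdout (placeholder figure)",
        "oversight_hint": "Route scores between 0.4 and 0.6 to manual review",
    }

print(explain_output(0.57)["oversight_hint"])
```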
Human Oversight (Article 14)
High-risk AI systems must be designed to allow effective human oversight during their period of use: the ability to fully understand the system’s capabilities and limitations, to correctly interpret its output, to decide not to use the system or to override its output, and to intervene in or interrupt its operation. This goes beyond governance committees — it requires operational mechanisms for real-time human intervention.
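A minimal sketch of such an operational mechanism: a gate that prevents any model recommendation from taking effect until a named reviewer accepts or overrides it. The types and field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    recommendation: str          # model output, e.g. "decline"
    confidence: float
    final: str | None = None     # set only by a human reviewer
    overridden_by: str | None = None

def finalize(decision: Decision, reviewer: str,
             override: str | None = None) -> Decision:
    """No decision takes effect without a named human reviewer, who can
    accept the recommendation or substitute their own outcome."""
    decision.final = override or decision.recommendation
    if override and override != decision.recommendation:
        decision.overridden_by = reviewer
    return decision

d = finalize(Decision("decline", 0.91), reviewer="analyst-42", override="refer")
print(d.final, d.overridden_by)  # refer analyst-42
```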
Conformity Assessment
Before placing a high-risk AI system on the market or into service, a conformity assessment must be completed:
- Self-assessment (most high-risk systems): The provider assesses compliance against essential requirements, prepares technical documentation, establishes a quality management system, and draws up the EU declaration of conformity.
- Third-party assessment: A notified body evaluates the system against the requirements and issues a conformity certificate. This applies to remote biometric identification systems where harmonised standards are not applied in full, and to AI safety components in products already subject to third-party conformity assessment under Annex I sectoral legislation.
AI Act and DORA: Interaction for Financial Institutions
For financial institutions, the AI Act and DORA create complementary obligations. DORA governs ICT risk management broadly, including the operational resilience of AI systems as ICT assets. The AI Act adds specific requirements for AI system design, testing, documentation, and oversight. Existing model risk management frameworks (MaRisk, the ECB Guide to Internal Models) provide a foundation: the model validation and documentation they require maps to roughly 60–70% of AI Act requirements. The remaining gaps, chiefly bias testing, transparency toward affected persons, and operational human oversight mechanisms, must be closed separately.
Penalties
- Prohibited AI practices: Up to EUR 35 million or 7% of global annual turnover
- High-risk system violations: Up to EUR 15 million or 3% of global annual turnover
- Incorrect information to authorities: Up to EUR 7.5 million or 1.5% of global annual turnover
For financial institutions, these apply on top of existing sector penalties from BaFin or ECB. The compounding effect makes AI Act non-compliance in financial services particularly expensive.
Implementation Roadmap
- Month 1–2: AI inventory — identify all AI and ML systems in use across the organization. Classify each by AI Act risk category, paying special attention to systems making decisions about individuals (credit, employment, insurance). A minimal inventory sketch follows this list.
- Month 2–4: Gap assessment — for each high-risk system, evaluate current compliance against Articles 9–15. Document gaps in risk management, data governance, documentation, logging, transparency, and human oversight.
- Month 4–8: Remediation — implement missing controls. Build bias testing into model validation processes. Add transparency mechanisms for affected persons. Design human oversight interfaces for high-risk systems.
- Month 8–12: Conformity assessment — complete self-assessment or engage notified body. Prepare technical documentation per Annex IV. Establish quality management system. Draw up EU declaration of conformity.
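The inventory record referenced in the month 1–2 step could be as simple as the Python sketch below. The fields and example entries are illustrative assumptions; real classification requires legal review of each use case against Annex III.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    owner: str                    # accountable business unit
    decides_about_individuals: bool
    annex_iii_area: str | None    # e.g. "employment"; None if out of scope
    tier: RiskTier

# Illustrative entries only
inventory = [
    AISystem("cv-screening", "HR", True, "employment", RiskTier.HIGH),
    AISystem("spam-filter", "IT", False, None, RiskTier.MINIMAL),
]

# High-risk systems feed the month 2-4 gap assessment
gap_assessment_queue = [s for s in inventory if s.tier is RiskTier.HIGH]
```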
Frequently Asked Questions
Does the AI Act apply to internal AI tools?
Yes, if the internal tool falls into a high-risk category. An AI system used for internal recruitment decisions is high-risk regardless of whether it is developed in-house or purchased. The classification depends on the use case, not the deployment model.
How does the AI Act interact with GDPR?
The AI Act and GDPR are complementary. GDPR governs the processing of personal data (including by AI systems). The AI Act adds requirements specific to AI system design and operation. Where AI systems process personal data, both regulations apply simultaneously. Key intersection points: data quality (AI Act) and lawful processing (GDPR), transparency (both require it, for different reasons), and automated decision-making (GDPR Article 22 + AI Act human oversight).
What is the timeline for full AI Act compliance?
Prohibited AI practices: banned since February 2025. General-purpose AI model obligations: applicable since August 2025. High-risk AI obligations: applicable since August 2, 2026, with an extended deadline of August 2, 2027 for high-risk AI embedded in products regulated under Annex I sectoral legislation. The practical timeline for organizations: if you deploy high-risk AI systems and have not started compliance preparation, you are already behind. Begin with inventory and gap assessment immediately.
Do we need to register our AI systems?
Yes. Providers of high-risk AI systems must register them in the EU database for high-risk AI systems before placing them on the market or putting them into service; deployers that are public authorities must register their use as well. The registration includes information about the provider, the system, its intended purpose, and its conformity assessment status.