
EU AI Act High Risk: What companies must implement by August 2026
Meta title: EU AI Act High Risk – Obligations until August 2026
Meta description: High-risk AI under the EU AI Act: all obligations, deadlines and sanctions up to August 2026. Conformity assessment, documentation and an implementation roadmap for companies.
6 months to the deadline — what needs to happen now
Things get serious on August 2, 2026: from that date, the full requirements of the EU AI Act for high-risk AI systems apply. By then, companies that develop, sell or use AI systems must have completed their conformity assessments, finalized the technical documentation, affixed the CE marking and registered their systems in the EU database.
Anyone who fails to do this risks fines of up to 35 million euros or 7% of annual global turnover.
This article shows what high-risk AI systems are, what specific obligations apply and how companies should use the remaining time.
1. What are high-risk AI systems?
The EU AI Act defines two ways in which an AI system is classified as high risk:
Path 1: Security component in regulated products (Article 6 Paragraph 1)
AI systems embedded as a safety component in products covered by the EU harmonization legislation listed in Annex I, such as medical devices, machinery, toys, elevators or aviation technology. An extended transition period applies to these systems: until August 2, 2027.
Path 2: Stand-alone high-risk areas (Article 6 (2) + Annex III)
AI systems in the following eight areas always count as high risk, unless they do not pose a significant risk to health, safety or fundamental rights:

- Biometrics (e.g. remote biometric identification)
- Critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to essential private and public services
- Law enforcement
- Migration, asylum and border control
- Administration of justice and democratic processes
Recommended action: Create a complete inventory of all AI systems used in the company. Check each system against Annex III. Document the assessment, even if you come to the conclusion that a system is *not* high risk (Art. 6(3) requires a documented justification).
2. The timeline: What applies from when?
The EU AI Act entered into force on August 1, 2024. The obligations take effect in stages:
In parallel, the Federal Cabinet passed the KI-MIG (AI Measures and Innovation Act) in February 2026, the German implementing law for the AI Act. Among other things, it regulates national market surveillance and specifies sanctions.
Recommended action: Work backwards from August 2, 2026. Experience shows that a conformity assessment takes 3 to 6 months. Anyone who has not started by March 2026 will hardly meet the deadline.
3. Conformity assessment: The core of compliance
The conformity assessment is the key process for demonstrating that a high-risk AI system meets all requirements of the AI Act. For most Annex III systems, an internal conformity assessment is sufficient (Article 43(2)). Exception: for AI systems for remote biometric identification, a third-party assessment by a notified body is required.
What does the assessment include?
- Quality management system (Art. 17) — documented processes for development, operation and monitoring
- Technical documentation (Art. 11) — full description of the system, its purpose, architecture and performance metrics
- Risk management (Art. 9) — continuous process for identifying and mitigating risks
- Data governance (Art. 10) — quality, representativeness and bias checks of training, validation and test data
- Record-keeping (Art. 12) — automatic logging during operation
- CE marking (Art. 48) — physical or digital affixing
- EU database registration (Art. 49) — entry in the public EU database for high-risk AI
Recommended action: Start with a gap analysis: which of the seven requirements do you already meet (e.g. through existing ISO 27001 or ISO 9001 processes)? Where are the gaps? Prioritize based on effort and risk.
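A gap analysis can start as nothing more than a mapping from the seven requirements to controls you already operate. The ISO mappings below are illustrative assumptions for a fictional company, not a verified crosswalk:

```python
# Illustrative mapping: AI Act requirement -> existing controls that may cover it.
requirements = {
    "Quality management (Art. 17)":       ["ISO 9001 QMS"],
    "Technical documentation (Art. 11)":  [],
    "Risk management (Art. 9)":           ["ISO 27001 risk process"],
    "Data governance (Art. 10)":          [],
    "Record-keeping (Art. 12)":           ["ISO 27001 logging controls"],
    "CE marking (Art. 48)":               [],
    "EU database registration (Art. 49)": [],
}

# A requirement with no mapped control is a gap to prioritize.
gaps = [req for req, covered_by in requirements.items() if not covered_by]
print(f"{len(gaps)} of {len(requirements)} requirements have no existing coverage")
for g in gaps:
    print("GAP:", g)
```

Even this crude view makes the prioritization discussion concrete: gaps with no existing control anchor point usually need the longest lead time.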
4. Technical documentation: What has to go in
Article 11 in conjunction with Annex IV of the AI Act defines the minimum contents of the technical documentation. It must be created before the system is placed on the market and then retained for 10 years.
Mandatory content (excerpt)
- General description of the AI system and its purpose
- Detailed description of the development methodology and training procedure
- Information about training, validation and test datasets
- Metrics for assessing accuracy, robustness and cybersecurity
- Description of human oversight measures
- Description of the risk management system
- Records of changes throughout the lifecycle
The documentation must be designed in such a way that market surveillance authorities can assess the conformity of the system.
Recommended action: Use existing documentation standards (e.g. ISO/IEC 42001 for AI management systems) as a basis. Automate documentation as much as possible: with multiple AI systems, manual maintenance quickly becomes a bottleneck.
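One way to bootstrap that automation is generating a skeleton per system, so every document starts with the same mandatory sections. The section titles below paraphrase the Annex IV list above; this is a sketch, not the legal wording:

```python
# Sketch: generate a Markdown skeleton for the technical documentation.
# Section titles paraphrase Annex IV; consult the legal text for the exact scope.
ANNEX_IV_SECTIONS = [
    "General description of the AI system and its purpose",
    "Development methodology and training procedure",
    "Training, validation and test datasets",
    "Accuracy, robustness and cybersecurity metrics",
    "Human oversight measures",
    "Risk management system",
    "Records of changes throughout the lifecycle",
]

def doc_skeleton(system_name: str) -> str:
    """Return a Markdown skeleton with one section per mandatory topic."""
    lines = [f"# Technical documentation: {system_name}", ""]
    for i, section in enumerate(ANNEX_IV_SECTIONS, start=1):
        lines += [f"## {i}. {section}", "", "_TODO_", ""]
    return "\n".join(lines)

skeleton = doc_skeleton("CV screening assistant")
print(skeleton.count("## "))  # 7 -> one heading per mandatory section
```

Generating the scaffold from one source of truth keeps section numbering consistent across all systems in the inventory.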
5. Risk management: Not a one-off project
The risk management system under Art. 9 must cover the entire life cycle of a high-risk AI system, from conception through development to operation and decommissioning.
Core requirements
- Risk identification: systematic recording of known and foreseeable risks to health, safety and fundamental rights
- Risk assessment: analysis of the likelihood and severity of potential harm, taking into account intended use *and* reasonably foreseeable misuse
- Risk reduction: technical and organizational measures to bring identified risks down to an acceptable level
- Residual risks: documentation and communication of remaining risks to operators and users
- Testing: appropriate testing procedures to validate the risk reduction measures, including testing under real-world conditions (Article 60)
Recommended action: Integrate AI risk management into your existing enterprise risk management. Isolated AI risk assessments lead to redundancies and blind spots. ENISA has published [guidelines on AI cybersecurity](https://www.enisa.europa.eu/topics/artificial-intelligence).
6. Human Oversight: Human stays in the loop
Article 14 requires that high-risk AI systems be designed so that they can be effectively supervised by natural persons. This is more than an emergency stop button.
What this means in concrete terms
- Operators must appoint competent people to oversee the system
- These people must understand the system's capabilities and limitations
- They must be able to correctly interpret the system's output
- They must be able to override or disregard the system's decisions
- They must be able to interrupt system operation (stop function)
- In automated decision-making in the HR area, the AI must not make autonomous personnel decisions
Recommended action: Define clear roles and escalation paths. Train the supervising personnel and document the training. Log cases in which humans intervene; this provides evidence of effective oversight and a data basis for system improvements.
7. Transparency and information obligations
High-risk AI systems are subject to extensive transparency requirements:
- Informing operators: providers must give operators clear instructions for use (Article 13) describing the intended purpose, performance metrics, known risks and human oversight requirements
- Informing affected persons: individuals subject to a decision by a high-risk AI system have the right to an explanation of the decision (Article 86)
- Labeling: AI-generated content must be recognizable as such
- Registration: entry in the publicly accessible EU database (Article 71) with information about the system, the provider and the conformity assessment
Recommended action: Create standardized fact sheets for each high-risk AI system. Use the database registration templates provided by the EU Commission. Check where your GDPR data protection impact assessments (DPIAs) already overlap with the transparency requirements.
8. Post-market monitoring and incident reporting
The obligations do not end with the conformity assessment. Article 72 requires a post-market monitoring system that is proportionate to the nature and risks of the AI system.
Ongoing duties
- Continuous monitoring: systematic monitoring of system performance in production
- Incident reporting: serious incidents must be reported to the responsible market surveillance authority within 15 days (Art. 73)
- Corrective actions: if the system deviates from expected behavior, providers must act immediately, including recalling it from the market if necessary
- Updates and re-evaluation: significant changes to the system require a new conformity assessment
Recommended action: Implement automated monitoring pipelines that perform drift detection and flag performance degradation and bias shifts in real time. Define clear thresholds above which escalation and reporting are triggered.
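The threshold logic at the core of such a pipeline can be very small. In this sketch, the metric names and threshold values are illustrative assumptions, not values prescribed by the AI Act:

```python
# Minimal sketch of threshold-based escalation for post-market monitoring.
# Metric names and thresholds are assumptions; calibrate them per system.
THRESHOLDS = {
    "accuracy_drop": 0.05,  # absolute drop vs. the documented baseline
    "drift_score": 0.30,    # e.g. population stability index on input features
    "bias_gap": 0.10,       # max. error-rate difference between groups
}

def check_metrics(metrics: dict[str, float]) -> list[str]:
    """Return the metrics that breach their escalation threshold."""
    return [name for name, value in metrics.items()
            if value > THRESHOLDS.get(name, float("inf"))]

breaches = check_metrics({"accuracy_drop": 0.08, "drift_score": 0.12, "bias_gap": 0.02})
print(breaches)  # ['accuracy_drop'] -> triggers escalation, possibly an Art. 73 report
```

Keeping the thresholds in configuration rather than code makes it easy to show auditors when and why an escalation criterion was changed.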
9. Sanctions: What threatens in the event of violations
The EU AI Act provides for a staggered sanctions regime:
The higher amount is decisive. Adjusted upper limits apply to SMEs and start-ups.
In addition to fines, market surveillance authorities can:
- Order the market withdrawal of non-compliant AI systems
- Prohibit their operation
- Issue public warnings
The German KI-MIG specifies the responsibilities: The Federal Network Agency is expected to act as the lead market surveillance authority.
Recommended action:Don’t treat AI Act compliance as a “nice-to-have.” The sanctions are comparable to the GDPR — and enforcement will come. Anyone who acts proactively saves costs in the long term and avoids damage to their reputation.
10. Implementation roadmap: This is how you proceed
Six months is a short time. The following sequence has proven successful in practice:
Phase 1: Inventory (immediate)
- Create a complete AI inventory of all systems that could fall under the AI Act
- Carry out and document the risk classification in accordance with Article 6
- Clarify responsibilities: who is the provider, who is the operator?
Phase 2: Gap analysis (March 2026)
- Map existing processes against the AI Act requirements
- Identify synergies with ISO 27001, ISO 42001 and the GDPR
- Create a compliance roadmap with milestones
Phase 3: Implementation (April-June 2026)
- Set up or expand the risk management system
- Create the technical documentation
- Define human oversight processes and train the people involved
- Implement the monitoring infrastructure
Phase 4: Conformity assessment (June–July 2026)
- Carry out the internal conformity assessment
- Draw up the EU declaration of conformity
- Affix the CE marking
- Register the system in the EU database
Phase 5: Ongoing operations (from August 2026)
- Activate post-market monitoring
- Test the incident reporting processes
- Schedule regular re-evaluations
How ADVISORI supports
ADVISORI accompanies companies from inventory to ongoing compliance — with a team that combines AI expertise, regulatory know-how and technical implementation expertise.
- [KI Compliance](https://advisori.de/dienste/digitale-transformation/ki-kuenstliche-intelligenz/ki-compliance): assessment of your AI systems under the EU AI Act, risk categorization and a compliance roadmap
- [EU AI Act Advice](https://eu-ai-act.advisori.de/): end-to-end support from classification to conformity assessment
- [AI Governance](https://www.advisori.de/services/ki-governance-en): building an AI governance framework with registers, roles and monitoring
- [AI Security](https://www.advisori.de/dienste/ai-security): systematic analysis of your AI architecture for vulnerabilities and compliance gaps
- [Data protection for AI](https://advisori.de/dienste/digitale-transformation/ki-kuenstliche-intelligenz/datenschutz-fuer-ki): GDPR-compliant AI implementation and data protection impact assessments
With Synthara, the ADVISORI multi-agent AI platform, we automate central compliance processes: from automated AI inventory through continuous risk assessment to post-market monitoring, vendor-independent and with over 1,500 interfaces.
FAQ: Frequently asked questions about the EU AI Act High Risk
What is a high-risk AI system under the EU AI Act?
A high-risk AI system is either a security component in a regulated product (Annex I) or a standalone system in one of the eight high-risk areas listed in Annex III — including biometrics, critical infrastructure, HR, education and law enforcement. The classification is carried out in accordance with Article 6 of the AI Regulation.
When do companies have to meet the high-risk requirements?
The obligations for standalone high-risk AI systems (Annex III) apply from August 2, 2026. For high-risk systems in regulated products (Annex I), an extended deadline applies until August 2, 2027.
How much does a violation of the EU AI Act cost?
Violations of the high-risk obligations can be punished with fines of up to 15 million euros or 3% of global annual turnover. For prohibited AI practices, the upper limit rises to 35 million euros or 7% of turnover.
Do I need an external conformity assessment?
In most cases, an internal conformity assessment is sufficient. An external audit by a notified body is only mandatory for remote biometric identification systems (Article 43).
How is the AI Act different from the GDPR?
The GDPR protects personal data; the AI Act regulates AI systems regardless of whether personal data is processed. The two regulations overlap for AI systems that do process personal data: in that case both sets of rules apply in parallel. A DPIA under the GDPR can cover parts of AI Act compliance.
What is the AI-MIG?
The AI Measures and Innovation Act (KI-MIG) is the German implementing law for the EU AI Act. It was approved by the Federal Cabinet in February 2026 and regulates, among other things, national market surveillance, responsibilities and sanction mechanisms. More about this in the [ADVISORI article on KI-MIG](https://www.advisori.de/blog/ki-mig-ai-act-durchfuehrungsgesetz-unternehmen).
Does the AI Act also apply to AI systems that were in operation before August 2026?
Yes, with a caveat: AI systems placed on the market or put into operation before August 2, 2026 must meet the requirements if they are significantly changed after that date (Article 111).
How can I check whether my AI system is high risk?
First, check whether your system falls into one of the areas of Annex III. If so, document the assessment. If you believe that there is no significant risk despite the system falling under Annex III, you must justify this in writing and register the system in the EU database (Art. 6(3)). The EU Commission has published [guidelines on the practical implementation of Article 6](https://artificialintelligenceact.eu/article/6/).
Sources and further links
- [EU AI Act full text (German)](https://ai-act-law.eu/de/)
- [Annex III — High-risk AI systems](https://ai-act-law.eu/de/anhang/3/)
- [Article 6 — Classification rules](https://ai-act-law.eu/de/artikel/6/)
- [EU Commission: Implementation Timeline](https://artificialintelligenceact.eu/implementation-timeline/)
- [EU Commission AI Act Service Desk](https://ai-act-service-desk.ec.europa.eu/)
- [ENISA: AI cybersecurity](https://www.enisa.europa.eu/topics/artificial-intelligence)
- [EU Commission: Shaping the digital future](https://digital-strategy.ec.europa.eu/de/policies/regulatory-framework-ai)
*This article was published on February 28, 2026 and reflects the current status of EU AI Act implementation. For an individual assessment of your AI systems, contact the [ADVISORI team for AI compliance](https://advisori.de/dienste/digitale-transformation/ki-kuenstliche-intelligenz/ki-compliance).*