Our expertise in the systematic classification of AI systems under the EU AI Act enables precise compliance strategies. From initial categorization to continuous reassessment — for secure and compliant AI innovation.
Our clients trust our expertise in digital transformation, compliance, and risk management
30 Minutes • Non-binding • Immediately available
Or contact us directly:
High-risk AI systems under Annex III must meet all requirements by August 2, 2026. For high-risk systems in regulated products (Annex I), an extended deadline applies until August 2, 2027. Start your classification now.
We follow a five-step process that combines technical analysis with regulatory expertise — from system capture to ongoing governance.
Capture all AI systems and use cases
Systematic assessment against Article 5 (prohibitions), Article 6 (high-risk), Article 50 (transparency)
Detailed review of Annex III areas and exception criteria
Classification documentation with audit trail
Establish trigger-based reassessment upon system changes (see the sketch below)
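To make the process concrete, here is a minimal Python sketch of how a classification record covering all five steps might be structured; the class and field names are our own illustration, not part of the EU AI Act or any specific tool:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record mirroring the five steps above: an inventory
# entry (step 1), the screening outcome (step 2), the Annex III
# review (step 3), an audit trail (step 4), and the triggers that
# force a reassessment (step 5).
@dataclass
class ClassificationRecord:
    system_name: str                    # step 1: AI inventory entry
    screening_result: str               # step 2: Art. 5 / 6 / 50 outcome
    annex_iii_area: str | None = None   # step 3: matched Annex III area, if any
    audit_trail: list[str] = field(default_factory=list)  # step 4
    reassessment_triggers: tuple[str, ...] = (            # step 5
        "substantial modification",
        "new intended purpose",
        "regulatory update",
    )

    def log(self, entry: str) -> None:
        """Append a dated entry to the audit trail (step 4)."""
        self.audit_trail.append(f"{date.today().isoformat()}: {entry}")
```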
"Precise system classification is the cornerstone of intelligent AI compliance. Our strategic approach transforms regulatory requirements into competitive advantages and enables risk-optimized innovation."

Head of Digital Transformation
Expertise & Experience:
11+ years of experience; degree in Applied Computer Science; strategic planning and management of AI projects; cybersecurity; secure software development; AI.
We offer you tailored solutions for your digital transformation
Complete categorization of your AI systems according to the four risk levels of the EU AI Act. Assessment of high-risk criteria under Article 6 including Annex I and Annex III analysis.
Establishment of a framework for ongoing reassessment upon system changes, new use cases, or regulatory updates.
Choose the area that fits your requirements
The EU AI Act defines concrete compliance obligations for various AI systems. We support you in fully implementing all measures needed to comply with the new European AI regulation.
The EU AI Act imposes extensive documentation requirements on AI systems. We support you in systematically fulfilling all documentation obligations for legally compliant AI development and use.
Article 72 of the EU AI Act requires providers of high-risk AI systems to establish a post-market monitoring system. We support you in implementation: from systematic data collection and automatic logging to timely incident reporting to the market surveillance authority.
Our AI risk assessment supports you in the systematic analysis and classification of your AI systems in accordance with EU AI Act Article 9. From AI inventory through risk analysis to a continuous risk management system across the entire lifecycle.
The EU AI Act classifies AI systems into four categories: Unacceptable risk (Article 5) — prohibited practices such as social scoring, manipulative techniques, and real-time remote biometric identification in public spaces. High risk (Article 6) — systems in regulated products (Annex I) or in eight sensitive areas (Annex III) such as biometrics, critical infrastructure, employment, or law enforcement. Limited risk (Article 50) — systems with transparency obligations, such as chatbots or deepfake generators. Minimal risk — all other systems without special regulatory requirements.
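As a simple illustration, the four levels can be represented as an enumeration keyed to their anchor provisions (the names below are our shorthand, not statutory terms):

```python
from enum import Enum

# Shorthand mapping of the four risk levels to their anchor provisions.
class RiskLevel(Enum):
    UNACCEPTABLE = "Article 5: prohibited practices"
    HIGH = "Article 6: Annex I / Annex III systems"
    LIMITED = "Article 50: transparency obligations"
    MINIMAL = "no specific obligations"
```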
Article 6 defines two pathways: First, systems integrated as safety components in products under Annex I, e.g., medical devices, machinery, or lifts. These require third-party conformity assessment. Second, standalone systems in the eight areas of Annex III, from biometrics through HR management to administration of justice. An Annex III system can be exempted from high-risk classification if it does not pose a significant risk to health, safety, or fundamental rights (Article 6(3)).
Annex III lists eight high-risk areas: 1) Biometrics — remote identification and emotion recognition. 2) Critical infrastructure — transport, energy, water, digital networks. 3) Education and vocational training — exam assessment and access decisions. 4) Employment — recruitment, performance evaluation, task allocation. 5) Access to services — creditworthiness, insurance, emergency services. 6) Law enforcement — evidence evaluation, risk analysis. 7) Migration and border control — document verification, risk assessment. 8) Administration of justice and democratic processes.
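For screening tooling, the eight areas can be kept as a simple lookup table (labels abbreviated from the summary above):

```python
# The eight Annex III areas, abbreviated from the summary above.
ANNEX_III_AREAS = {
    1: "Biometrics",
    2: "Critical infrastructure",
    3: "Education and vocational training",
    4: "Employment",
    5: "Access to essential services",
    6: "Law enforcement",
    7: "Migration and border control",
    8: "Administration of justice and democratic processes",
}
```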
An Annex III system is not high-risk if it does not pose a significant risk. Four criteria support the exception: The system performs a narrow procedural task. It only improves the result of a completed human activity. It detects patterns without replacing human assessment. Or it serves only to prepare an assessment. Important: Profiling systems are always high-risk — the exception never applies to them. The provider must document the assessment before placing the system on the market.
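A minimal Python sketch of this check, assuming boolean flags for the four criteria (the flag names are our own, not statutory terms):

```python
# Article 6(3) sketch: any one of the four criteria can lift the
# high-risk presumption, but profiling always keeps it in place.
def article_6_3_exemption(
    narrow_procedural_task: bool,
    improves_completed_human_activity: bool,
    detects_patterns_without_replacing_assessment: bool,
    only_prepares_an_assessment: bool,
    performs_profiling: bool,
) -> bool:
    if performs_profiling:  # the exception never applies to profiling
        return False
    return any([
        narrow_procedural_task,
        improves_completed_human_activity,
        detects_patterns_without_replacing_assessment,
        only_prepares_an_assessment,
    ])
```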
Five steps: 1) Check whether the system falls under the prohibitions of Article 5 (social scoring, manipulative AI, mass biometric surveillance). 2) Check whether it is a safety component in a product under Annex I. 3) Check whether the use case falls under one of the eight Annex III areas. 4) If yes, examine the exception criteria under Article 6(3). 5) Check transparency obligations under Article 50 for limited-risk systems. Document every step; the authority can request the assessment.
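The five steps translate naturally into a decision cascade. A hedged Python sketch, where each dictionary key is our own placeholder standing in for the detailed legal analysis behind it:

```python
# Decision cascade for the five steps; each flag is a placeholder for
# the underlying legal analysis, not a substitute for it.
def classify(system: dict) -> str:
    if system.get("prohibited_practice"):        # step 1: Article 5
        return "unacceptable: prohibited"
    if system.get("annex_i_safety_component"):   # step 2: Annex I
        return "high-risk (Annex I)"
    if system.get("annex_iii_area"):             # step 3: Annex III
        if system.get("art_6_3_exemption") and not system.get("profiling"):
            return "not high-risk (Article 6(3))"  # step 4
        return "high-risk (Annex III)"
    if system.get("transparency_duty"):          # step 5: Article 50
        return "limited risk"
    return "minimal risk"

# Example: a profiling system in an Annex III area stays high-risk.
assert classify({"annex_iii_area": 4, "profiling": True}) == "high-risk (Annex III)"
```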
High-risk systems must meet the following requirements: risk management system (Article 9), data governance for training and test data (Article 10), technical documentation (Article 11), automatic logging (Article 12), transparency and user information (Article 13), human oversight (Article 14), accuracy, robustness, and cybersecurity (Article 15). Additionally: quality management system (Article 17), EU declaration of conformity (Article 47), and conformity assessment before placing on the market.
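For tracking implementation status, these requirements can be kept as a simple checklist structure (our own abbreviation of the articles listed above):

```python
# High-risk requirements as a checklist, abbreviated from the text above.
HIGH_RISK_REQUIREMENTS = {
    "Art. 9": "risk management system",
    "Art. 10": "data governance for training and test data",
    "Art. 11": "technical documentation",
    "Art. 12": "automatic logging",
    "Art. 13": "transparency and user information",
    "Art. 14": "human oversight",
    "Art. 15": "accuracy, robustness, cybersecurity",
    "Art. 17": "quality management system",
    "Art. 47": "EU declaration of conformity",
}
```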
General purpose AI models (GPAI) fall under Chapter V of the EU AI Act, a separate regulatory regime alongside the four-level classification. Obligations: technical documentation, transparency towards downstream providers, copyright compliance, and a summary of the training content. GPAI models with systemic risk (cumulative training compute above 10^25 FLOP) must additionally conduct model evaluations and adversarial testing and implement cybersecurity measures. GPAI obligations apply from August 2, 2025.
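The compute threshold itself is easy to express; a sketch, assuming cumulative training compute is known in floating point operations:

```python
# Presumption of systemic risk: cumulative training compute above
# 10^25 floating point operations (Article 51).
SYSTEMIC_RISK_FLOP = 10 ** 25

def presumed_systemic_risk(training_flop: float) -> bool:
    return training_flop > SYSTEMIC_RISK_FLOP
```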
Discover how we support companies in their digital transformation
Klöckner & Co
Digital Transformation in Steel Trading

Siemens
Smart Manufacturing Solutions for Maximum Value Creation

Festo
Intelligent Networking for Future-Proof Production Systems

Bosch
AI Process Optimization for Improved Production Efficiency

Is your organization ready for the next step into the digital future? Contact us for a personal consultation.
Schedule a strategic consultation with our experts now
30 Minutes • Non-binding • Immediately available
Direct hotline for decision-makers
Strategic inquiries via email
For complex inquiries or if you want to provide specific information in advance