Our AI risk assessment supports you in the systematic analysis and classification of your AI systems in accordance with EU AI Act Article 9. From AI inventory through risk analysis to a continuous risk management system across the entire lifecycle.
Our clients trust our expertise in digital transformation, compliance, and risk management
30 Minutes • Non-binding • Immediately available
Or contact us directly:
The EU AI Act requires a systematic and documented risk assessment for all high-risk AI systems. Article 9 requirements apply from 2 August 2026. Inadequate risk assessment can result in fines of up to EUR 15 million.
We pursue a structured, evidence-based approach to AI risk assessment that connects technical depth with the regulatory requirements of the EU AI Act.
AI system inventory and initial risk scoping
Detailed risk analysis with quantitative assessment models
Risk classification according to the four EU AI Act categories
Development of tailored risk mitigation strategies
Implementation and validation of risk control measures
"Our structured Risk Assessment approach enables companies to advance AI innovations responsibly while simultaneously meeting the highest compliance standards. Risk management as an enabler for AI excellence."

Head of Digital Transformation
Expertise & Experience:
11+ years of experience, degree in Applied Computer Science, strategic planning and management of AI projects, cybersecurity, secure software development, AI
We offer you tailored solutions for your digital transformation
Complete assessment of all risk dimensions of your AI systems with focus on EU AI Act Article 9 compliance requirements.
Development of targeted strategies for risk minimization and control, tailored to your specific AI applications and regulatory requirements.
Choose the area that fits your requirements
The EU AI Act compliance requirements define concrete obligations for various AI systems. We support you in the complete implementation of all necessary measures to comply with the new European AI regulation.
The EU AI Act imposes extensive documentation requirements on AI systems. We support you in systematically fulfilling all documentation obligations for legally compliant AI development and use.
Article 72 of the EU AI Act requires providers of high-risk AI systems to establish a post-market monitoring system. We support you in implementation: from systematic data collection and automatic logging to timely incident reporting to the market surveillance authority.
Our expertise in the systematic classification of AI systems under the EU AI Act enables precise compliance strategies. From initial categorization to continuous reassessment — for secure and compliant AI innovation.
Article 9 of the EU AI Act mandates that a documented risk management system be established, implemented, and maintained for every high-risk AI system. This system must cover the entire lifecycle of the AI system, from development through deployment to decommissioning. It includes identification and analysis of known and reasonably foreseeable risks, evaluation of these risks under intended use and reasonably foreseeable misuse, and adoption of appropriate risk mitigation measures. Regular systematic review and updating is mandatory.
The AI risk assessment under Article 9 targets providers and addresses the technical risk analysis of the AI system itself: health, safety, and fundamental rights. The fundamental rights impact assessment under Article 27 is an obligation for deployers of high-risk AI systems. It evaluates the concrete effects of AI deployment on fundamental rights in the specific application context, including discrimination risks and impacts on particular groups of persons. Both assessments are complementary and required for complete compliance.
The EU AI Act defines four risk levels: Unacceptable risk (Article 5) — prohibited AI practices such as social scoring or real-time biometric identification in public spaces. High risk (Article 6, Annex III) — AI systems in sensitive areas such as employment, credit scoring, law enforcement, or critical infrastructure. Limited risk (Article 50) — systems with transparency obligations such as chatbots or deepfake generators. Minimal risk — no special requirements, for example spam filters or AI-powered video games.
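To make the four tiers tangible, the following minimal Python sketch maps the example use cases named above to their risk level. The enum, its values, and the mapping are our own illustration, not terminology from the Act, and no substitute for a legal classification of a concrete system.

```python
# Illustrative sketch of the four EU AI Act risk tiers.
# Names and the example mapping are hypothetical.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high risk (Article 6, Annex III)"
    LIMITED = "transparency obligations (Article 50)"
    MINIMAL = "no special requirements"


# Hypothetical examples drawn from the categories described above.
EXAMPLE_CLASSIFICATION = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.value}")
```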
A systematic AI risk assessment comprises: First, an inventory of all AI systems deployed within the organization. Second, classification of each system based on the risk criteria of Article 6 and Annex III. Third, identification and analysis of known and foreseeable risks to health, safety, and fundamental rights. Fourth, evaluation of risks under intended use and reasonably foreseeable misuse. Fifth, definition and implementation of risk mitigation measures. Sixth, establishment of a continuous monitoring and update process.
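As an illustration of how these six steps can be tracked in practice, here is a minimal Python sketch of an assessment checklist. The dataclass, its field names, and the helper function are assumptions of ours, not a format prescribed by the EU AI Act.

```python
# Minimal sketch of the six assessment steps as an ordered checklist.
# Structure and field names are illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class AssessmentStep:
    name: str
    completed: bool = False
    evidence: list[str] = field(default_factory=list)  # e.g. document references


ASSESSMENT_STEPS = [
    AssessmentStep("Inventory of all deployed AI systems"),
    AssessmentStep("Classification per Article 6 and Annex III"),
    AssessmentStep("Identification and analysis of known and foreseeable risks"),
    AssessmentStep("Evaluation under intended use and foreseeable misuse"),
    AssessmentStep("Definition and implementation of mitigation measures"),
    AssessmentStep("Continuous monitoring and update process"),
]


def open_steps(steps: list[AssessmentStep]) -> list[str]:
    """Return the names of steps that still lack documented completion."""
    return [s.name for s in steps if not s.completed]


print(open_steps(ASSESSMENT_STEPS))  # all six steps are initially open
```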
Prohibitions for AI systems with unacceptable risk have applied since 2 February 2025. Requirements for high-risk AI systems under Annex III, including the risk assessment under Article 9, take effect on 2 August 2026. The EU Digital Omnibus Act may extend the deadline for Annex III systems to 2 December 2027. Companies should still begin risk assessments early, as implementing a complete risk management system typically requires several months of preparation.
Violations of high-risk AI system requirements, including missing or inadequate risk assessments, can result in fines of up to EUR 15 million or 3 percent of worldwide annual turnover, whichever is higher. For prohibited AI practices, the maximum penalty rises to EUR 35 million or 7 percent of annual turnover. Graduated maximum amounts apply for SMEs and startups. Enforcement in Germany is handled by the Bundesnetzagentur as the competent market surveillance authority.
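The "whichever is higher" rule means the 3 percent threshold only exceeds the fixed amount above EUR 500 million in turnover; the short Python sketch below works through two hypothetical cases. The function name and figures are ours, and the graduated SME and startup caps are deliberately not modeled.

```python
# Penalty ceiling for violations of high-risk obligations: EUR 15 million
# or 3 percent of worldwide annual turnover, whichever is higher.
# Illustrative sketch only; SME/startup caps are not modeled.
def max_fine_high_risk(annual_turnover_eur: float) -> float:
    return max(15_000_000.0, 0.03 * annual_turnover_eur)


# EUR 200 million turnover: 3 % = EUR 6 million, so the fixed
# EUR 15 million ceiling applies.
print(max_fine_high_risk(200_000_000))    # 15000000.0

# EUR 1 billion turnover: 3 % = EUR 30 million, which exceeds the
# fixed amount and becomes the ceiling.
print(max_fine_high_risk(1_000_000_000))  # 30000000.0
```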
The EU AI Act clearly distinguishes between providers (developers) and deployers (users) of AI systems. Providers must implement a comprehensive risk management system under Article 9 covering technical risks of the system itself. Deployers of high-risk AI systems must use the system as intended under Article 26 and conduct a fundamental rights impact assessment under Article 27. If a deployer substantially modifies the AI system or markets it under their own name, they become a provider and assume full provider obligations.
Discover how we support companies in their digital transformation
Klöckner & Co
Digital Transformation in Steel Trading

Siemens
Smart Manufacturing Solutions for Maximum Value Creation

Festo
Intelligent Networking for Future-Proof Production Systems

Bosch
AI Process Optimization for Improved Production Efficiency

Is your organization ready for the next step into the digital future? Contact us for a personal consultation.
Schedule a strategic consultation with our experts now
30 Minutes • Non-binding • Immediately available
Direct hotline for decision-makers
Strategic inquiries via email
For complex inquiries or if you want to provide specific information in advance