ADVISORI FTC GmbH

Transformation. Innovation. Security.

Office Address

Kaiserstraße 44

60329 Frankfurt am Main

Germany


Contact

info@advisori.de · +49 69 913 113-01

Mon-Fri: 9:00 AM - 6:00 PM


© 2024 ADVISORI FTC GmbH. All rights reserved.

Systematic risk management for high-risk AI systems under Article 9

EU AI Act Risk Assessment

Our AI risk assessment supports you in the systematic analysis and classification of your AI systems in accordance with EU AI Act Article 9. From AI inventory through risk analysis to a continuous risk management system across the entire lifecycle.

  • ✓ Systematic risk assessment of all AI systems under EU AI Act Article 9
  • ✓ Classification into the four risk categories with documented rationale
  • ✓ Fundamental rights impact assessment under Article 27 for deployers
  • ✓ Continuous risk management system with lifecycle monitoring

Your strategic success starts here

Our clients trust our expertise in digital transformation, compliance, and risk management

30 Minutes • Non-binding • Immediately available

For optimal preparation of your strategy session:

  • Your strategic goals and objectives
  • Desired business outcomes and ROI
  • Steps already taken

Or contact us directly:

info@advisori.de · +49 69 913 113-01

Certifications, Partners and more...

ISO 9001 Certified · ISO 27001 Certified · ISO 14001 Certified · BeyondTrust Partner · BVMW Bundesverband member · Mitigant Partner · Google Partner · Top 100 Innovator · Microsoft Azure · Amazon Web Services

What does the EU AI Act require for AI system risk assessment?

Our expertise in AI risk assessment

  • Specialization in AI risk assessment under EU AI Act Article 9
  • Experience with risk management frameworks for high-risk AI systems
  • Proven methodology for AI risk analysis and classification
  • Integration of regulatory, technical, and ethical risk perspectives
⚠ Regulatory Update

The EU AI Act requires a systematic and documented risk assessment for all high-risk AI systems. Article 9 requirements apply from 2 August 2026. Inadequate risk assessment can result in fines of up to EUR 15 million.

ADVISORI in Numbers

11+

Years of Experience

120+

Employees

520+

Projects

We pursue a structured, evidence-based approach to AI risk assessment that aligns technical complexity with the regulatory requirements of the EU AI Act.

Our Approach:

  • AI system inventory and initial risk scoping
  • Detailed risk analysis with quantitative assessment models
  • Risk classification according to the four EU AI Act categories
  • Development of tailored risk mitigation strategies
  • Implementation and validation of risk control measures

"Our structured Risk Assessment approach enables companies to advance AI innovations responsibly while simultaneously meeting the highest compliance standards. Risk management as an enabler for AI excellence."

Asan Stefanski

Head of Digital Transformation

Expertise & Experience:

11+ years of experience, Applied Computer Science degree, Strategic planning and management of AI projects, Cyber Security, Secure Software Development, AI

LinkedIn Profile

Our Services

We offer you tailored solutions for your digital transformation

AI risk analysis and classification

Complete assessment of all risk dimensions of your AI systems with focus on EU AI Act Article 9 compliance requirements.

  • Systematic risk identification across the entire AI lifecycle
  • Impact assessment for various affected groups
  • Likelihood and impact scoring according to ISO 31000
  • Risk heat map and prioritization of action areas
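The likelihood-and-impact scoring mentioned above can be sketched as a simple matrix calculation. The band thresholds and labels below are illustrative assumptions, not values prescribed by ISO 31000 or the EU AI Act; each organization calibrates its own scales:

```python
from dataclasses import dataclass

@dataclass
class RiskScore:
    """Hypothetical 5x5 likelihood/impact score in the spirit of ISO 31000."""
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Classic multiplicative risk score: likelihood x impact.
        return self.likelihood * self.impact

    @property
    def band(self) -> str:
        # Example heat-map banding; real thresholds are set per organization.
        if self.score >= 15:
            return "critical"
        if self.score >= 8:
            return "high"
        if self.score >= 4:
            return "medium"
        return "low"

# A component that misfires occasionally (3) with severe consequences (5).
print(RiskScore(likelihood=3, impact=5).band)  # critical
```

Scores like these feed directly into the heat map and the prioritization of action areas.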

Risk mitigation strategy and control measures

Development of targeted strategies for risk minimization and control, tailored to your specific AI applications and regulatory requirements.

  • Tailored risk control measures
  • Cost-benefit analysis of various mitigation options
  • Implementation roadmap with priority matrix
  • Effectiveness monitoring and success tracking

Our Competencies in EU AI Act Risk Classification

Choose the area that fits your requirements

EU AI Act Compliance Requirements

The EU AI Act compliance requirements define concrete obligations for various AI systems. We support you in the complete implementation of all necessary measures to comply with the new European AI regulation.

EU AI Act Documentation Requirements

The EU AI Act imposes extensive documentation requirements on AI systems. We support you in systematically fulfilling all documentation obligations for legally compliant AI development and use.

EU AI Act Monitoring Systems

Article 72 of the EU AI Act requires providers of high-risk AI systems to establish a post-market monitoring system. We support you in implementation: from systematic data collection and automatic logging to timely incident reporting to the market surveillance authority.

EU AI Act System Classification

Our expertise in the systematic classification of AI systems under the EU AI Act enables precise compliance strategies. From initial categorization to continuous reassessment — for secure and compliant AI innovation.

Frequently Asked Questions about EU AI Act Risk Assessment

What does Article 9 of the EU AI Act require for risk management systems?

Article 9 of the EU AI Act mandates that a documented risk management system must be established, implemented, and maintained for every high-risk AI system. This system must cover the entire lifecycle of the AI system — from development through deployment to decommissioning. It includes identification and analysis of known and reasonably foreseeable risks, evaluation of these risks under intended use and reasonably foreseeable misuse, and adoption of appropriate risk mitigation measures. Regular systematic review and updating is mandatory.

How does AI risk assessment differ from the fundamental rights impact assessment under Article 27?

The AI risk assessment under Article 9 targets providers and addresses the technical risk analysis of the AI system itself — health, safety, and fundamental rights. The fundamental rights impact assessment under Article 27 is an obligation for deployers of high-risk AI systems. It evaluates the concrete effects of AI deployment on fundamental rights in the specific application context, including discrimination risks and impacts on particular groups of persons. Both assessments are complementary and required for complete compliance.

What risk categories does the EU AI Act define for AI systems?

The EU AI Act defines four risk levels: Unacceptable risk (Article 5) — prohibited AI practices such as social scoring or real-time biometric identification in public spaces. High risk (Article 6, Annex III) — AI systems in sensitive areas such as employment, credit scoring, law enforcement, or critical infrastructure. Limited risk (Article 50) — systems with transparency obligations such as chatbots or deepfake generators. Minimal risk — no special requirements, for example spam filters or AI-powered video games.
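As a rough illustration, the four tiers can be modeled as a lookup from example use cases (taken from the text above) to their category. The mapping is a toy assumption for demonstration; real classification requires legal analysis of Article 5, Article 6 with Annex III, and Article 50:

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk levels, labeled with their legal basis."""
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high risk (Article 6, Annex III)"
    LIMITED = "transparency obligations (Article 50)"
    MINIMAL = "no special requirements"

# Illustrative examples from the text above — not a classification tool.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

print(EXAMPLE_USE_CASES["credit scoring"].value)  # high risk (Article 6, Annex III)
```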

What steps does a systematic AI risk assessment under the EU AI Act involve?

A systematic AI risk assessment comprises: First, an inventory of all AI systems deployed within the organization. Second, classification of each system based on the risk criteria of Article 6 and Annex III. Third, identification and analysis of known and foreseeable risks to health, safety, and fundamental rights. Fourth, evaluation of risks under intended use and reasonably foreseeable misuse. Fifth, definition and implementation of risk mitigation measures. Sixth, establishment of a continuous monitoring and update process.

When must companies conduct risk assessments under the EU AI Act?

Prohibitions for AI systems with unacceptable risk have applied since 2 February 2025. Requirements for high-risk AI systems under Annex III — including the risk assessment under Article 9 — take effect on 2 August 2026. The EU Digital Omnibus Act may extend the deadline for Annex III systems to 2 December 2027. Companies should still begin risk assessments early, as implementing a complete risk management system typically requires several months of preparation.

What penalties apply for missing AI risk assessments?

Violations of high-risk AI system requirements — including missing or inadequate risk assessments — can result in fines of up to EUR 15 million or 3 percent of worldwide annual turnover, whichever is higher. For prohibited AI practices, the maximum penalty rises to EUR 35 million or 7 percent of annual turnover. Graduated maximum amounts apply for SMEs and startups. Enforcement in Germany is handled by the Bundesnetzagentur as the competent market surveillance authority.
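The "whichever is higher" rule can be illustrated with a short calculation using the figures above. This is an arithmetic sketch only, not legal advice, and it ignores the graduated caps for SMEs and startups:

```python
def max_fine(annual_turnover_eur: float, prohibited_practice: bool = False) -> float:
    """Upper fine ceiling: the higher of a fixed amount and a turnover share.

    High-risk violations: EUR 15M or 3% of worldwide annual turnover.
    Prohibited practices: EUR 35M or 7% of worldwide annual turnover.
    """
    fixed, pct = (35_000_000, 0.07) if prohibited_practice else (15_000_000, 0.03)
    return max(fixed, annual_turnover_eur * pct)

# EUR 1 billion turnover: 3% = EUR 30M, which exceeds the EUR 15M floor.
print(max_fine(1_000_000_000))      # 30000000.0
# EUR 100M turnover, prohibited practice: 7% = EUR 7M, so the EUR 35M floor applies.
print(max_fine(100_000_000, True))  # 35000000
```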

What is the difference between provider and deployer risk assessment obligations?

The EU AI Act clearly distinguishes between providers (developers) and deployers (users) of AI systems. Providers must implement a comprehensive risk management system under Article 9 covering technical risks of the system itself. Deployers of high-risk AI systems must use the system as intended under Article 26 and conduct a fundamental rights impact assessment under Article 27. If a deployer substantially modifies the AI system or markets it under their own name, they become a provider and assume full provider obligations.

Success Stories

Discover how we support companies in their digital transformation

Digitalization in Steel Trading

Klöckner & Co

Digital Transformation in Steel Trading

Case Study

Results

  • Over 2 billion euros in annual revenue through digital channels
  • Goal to achieve 60% of revenue online by 2022
  • Improved customer satisfaction through automated processes

AI-Powered Manufacturing Optimization

Siemens

Smart Manufacturing Solutions for Maximum Value Creation

Case Study

Results

  • Significant increase in production performance
  • Reduction of downtime and production costs
  • Improved sustainability through more efficient resource utilization

AI Automation in Production

Festo

Intelligent Networking for Future-Proof Production Systems

Case Study

Results

  • Improved production speed and flexibility
  • Reduced manufacturing costs through more efficient resource utilization
  • Increased customer satisfaction through personalized products

Generative AI in Manufacturing

Bosch

AI Process Optimization for Improved Production Efficiency

Case Study

Results

  • Reduction of AI application implementation time to just a few weeks
  • Improvement in product quality through early defect detection
  • Increased manufacturing efficiency through reduced downtime

Let's Work Together!

Is your organization ready for the next step into the digital future? Contact us for a personal consultation.


Ready for the next step?

Schedule a strategic consultation with our experts now

30 Minutes • Non-binding • Immediately available

For optimal preparation of your strategy session:

  • Your strategic goals and challenges
  • Desired business outcomes and ROI expectations
  • Current compliance and risk situation
  • Stakeholders and decision-makers in the project

Prefer direct contact?

Direct hotline for decision-makers

Strategic inquiries via email

Detailed Project Inquiry

For complex inquiries or if you want to provide specific information in advance
