
ADVISORI FTC GmbH

Transformation. Innovation. Security.

Office Address

Kaiserstraße 44

60329 Frankfurt am Main

Germany

View on map

Contact

info@advisori.de | +49 69 913 113-01

Mon-Fri: 9:00 AM - 6:00 PM


© 2024 ADVISORI FTC GmbH. All rights reserved.

Precise AI System Classification for EU AI Act Compliance

EU AI Act System Classification

Our expertise in the systematic classification of AI systems under the EU AI Act enables precise compliance strategies. From initial categorization to continuous reassessment — for secure and compliant AI innovation.

  • ✓ Precise classification according to EU AI Act risk categories
  • ✓ Strategic advisory for cost-optimized compliance pathways
  • ✓ Continuous reassessment upon system updates
  • ✓ Integration into existing AI governance frameworks

Your strategic success starts here

Our clients trust our expertise in digital transformation, compliance, and risk management

30 Minutes • Non-binding • Immediately available

For optimal preparation of your strategy session:

  • Your strategic goals and objectives
  • Desired business outcomes and ROI
  • Steps already taken

Or contact us directly:

info@advisori.de | +49 69 913 113-01

Certifications, Partners and more...

ISO 9001 Certified • ISO 27001 Certified • ISO 14001 Certified • BeyondTrust Partner • BVMW Bundesverband Member • Mitigant Partner • Google Partner • Top 100 Innovator • Microsoft Azure • Amazon Web Services

How is your AI system classified under the EU AI Act?

Why ADVISORI for AI classification?

  • Experience with complex AI landscapes in regulated industries
  • Detailed knowledge of all 8 Annex III areas and exception criteria
  • Proven methodology for multi-use-case systems
  • Monitoring framework for dynamic reassessment
⚠ Deadline approaching

High-risk AI systems under Annex III must meet all requirements by August 2, 2026. For high-risk systems embedded in regulated products (Annex I), an extended transition period applies until August 2, 2027. Start your classification now.

ADVISORI in Numbers

11+

Years of Experience

120+

Employees

520+

Projects

We follow a five-step process that combines technical analysis with regulatory expertise — from system capture to ongoing governance.

Our Approach:

1. Capture all AI systems and use cases
2. Systematic assessment against Article 5 (prohibitions), Article 6 (high-risk), and Article 50 (transparency)
3. Detailed review of Annex III areas and exception criteria
4. Classification documentation with audit trail
5. Establish trigger-based reassessment upon system changes

"Precise system classification is the cornerstone of intelligent AI compliance. Our strategic approach transforms regulatory requirements into competitive advantages and enables risk-optimized innovation."

Asan Stefanski

Head of Digital Transformation

Expertise & Experience:

11+ years of experience, Applied Computer Science degree, Strategic planning and management of AI projects, Cyber Security, Secure Software Development, AI

LinkedIn Profile

Our Services

We offer you tailored solutions for your digital transformation

Systematic AI Classification

Complete categorization of your AI systems according to the four risk levels of the EU AI Act. Assessment of high-risk criteria under Article 6 including Annex I and Annex III analysis.

  • AI inventory with use case mapping
  • Classification under Article 5, Article 6, and Article 50
  • Exception assessment under Article 6(3)
  • Documentation for regulatory inquiries

Dynamic Reassessment

Establishment of a framework for ongoing reassessment upon system changes, new use cases, or regulatory updates.

  • Automated detection of classification-relevant changes
  • Trigger-based reassessment processes
  • Impact assessment for category changes
  • Stakeholder communication and update management

Our Competencies in EU AI Act Risk Classification

Choose the area that fits your requirements

EU AI Act Compliance Requirements

The EU AI Act compliance requirements define concrete obligations for various AI systems. We support you in the complete implementation of all necessary measures to comply with the new European AI regulation.

EU AI Act Documentation Requirements

The EU AI Act imposes extensive documentation requirements on AI systems. We support you in systematically fulfilling all documentation obligations for legally compliant AI development and use.

EU AI Act Monitoring Systems

Article 72 of the EU AI Act requires providers of high-risk AI systems to establish a post-market monitoring system. We support you in implementation: from systematic data collection and automatic logging to timely incident reporting to the market surveillance authority.

EU AI Act Risk Assessment

Our AI risk assessment supports you in the systematic analysis and classification of your AI systems in accordance with EU AI Act Article 9. From AI inventory through risk analysis to a continuous risk management system across the entire lifecycle.

Frequently Asked Questions about EU AI Act System Classification

What four risk levels does the EU AI Act distinguish?

The EU AI Act classifies AI systems into four categories: Unacceptable risk (Article 5) — prohibited practices such as social scoring, manipulative techniques, and real-time remote biometric identification in public spaces. High risk (Article 6) — systems in regulated products (Annex I) or in eight sensitive areas (Annex III) such as biometrics, critical infrastructure, employment, or law enforcement. Limited risk (Article 50) — systems with transparency obligations, such as chatbots or deepfake generators. Minimal risk — all other systems without special regulatory requirements.
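The four-level scheme above can be kept as a simple machine-readable structure. A minimal sketch in Python; the example use-case mapping is illustrative only, not a legal determination:

```python
from enum import Enum

class RiskLevel(Enum):
    """The four EU AI Act risk levels described above."""
    UNACCEPTABLE = "Article 5: prohibited practices"
    HIGH = "Article 6: high-risk requirements apply"
    LIMITED = "Article 50: transparency obligations"
    MINIMAL = "no special regulatory requirements"

# Illustrative examples only; real classification requires
# the full assessment described in this section.
EXAMPLES = {
    "social scoring by public authorities": RiskLevel.UNACCEPTABLE,
    "CV screening for recruitment": RiskLevel.HIGH,
    "customer service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

for use_case, level in EXAMPLES.items():
    print(f"{use_case} -> {level.name}: {level.value}")
```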

How does high-risk classification under Article 6 work?

Article 6 defines two pathways. First, systems integrated as safety components in products under Annex I, e.g. medical devices, machinery, or lifts; these require third-party conformity assessment. Second, standalone systems in the eight areas of Annex III, from biometrics through HR management to the administration of justice. An Annex III system can be exempted from high-risk classification if it does not pose a significant risk to health, safety, or fundamental rights (Article 6(3)).

What eight areas does Annex III of the EU AI Act cover?

Annex III lists eight high-risk areas: 1) Biometrics — remote identification and emotion recognition. 2) Critical infrastructure — transport, energy, water, digital networks. 3) Education and vocational training — exam assessment and access decisions. 4) Employment — recruitment, performance evaluation, task allocation. 5) Access to services — creditworthiness, insurance, emergency services. 6) Law enforcement — evidence evaluation, risk analysis. 7) Migration and border control — document verification, risk assessment. 8) Administration of justice and democratic processes.
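For inventory work, the eight areas lend themselves to a lookup table. A sketch, with area names condensed from the list above:

```python
# The eight Annex III high-risk areas, keyed by point number.
ANNEX_III_AREAS = {
    1: "Biometrics (remote identification, emotion recognition)",
    2: "Critical infrastructure (transport, energy, water, digital networks)",
    3: "Education and vocational training",
    4: "Employment (recruitment, performance evaluation, task allocation)",
    5: "Access to essential services (creditworthiness, insurance, emergency services)",
    6: "Law enforcement (evidence evaluation, risk analysis)",
    7: "Migration and border control (document verification, risk assessment)",
    8: "Administration of justice and democratic processes",
}

def annex_iii_area(number: int) -> str:
    """Return the Annex III area description for a point number (1-8)."""
    if number not in ANNEX_III_AREAS:
        raise ValueError(f"Annex III has areas 1-8, got {number}")
    return ANNEX_III_AREAS[number]
```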

When does the exception under Article 6(3) apply?

An Annex III system is not high-risk if it does not pose a significant risk. Four criteria support the exception: The system performs a narrow procedural task. It only improves the result of a completed human activity. It detects patterns without replacing human assessment. Or it serves only to prepare an assessment. Important: Profiling systems are always high-risk — the exception never applies to them. The provider must document the assessment before placing the system on the market.
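The exception logic above is mechanical enough to express as a predicate. A sketch; the parameter names are our own, but the four criteria and the profiling carve-out follow the text:

```python
def article_6_3_exception_applies(
    narrow_procedural_task: bool,
    improves_completed_human_activity: bool,
    detects_patterns_without_replacing_human_assessment: bool,
    only_prepares_an_assessment: bool,
    involves_profiling: bool,
) -> bool:
    """Rough Article 6(3) derogation check for an Annex III system."""
    if involves_profiling:
        # Profiling systems are always high-risk; the exception never applies.
        return False
    # Any one of the four criteria can support the exception.
    return any([
        narrow_procedural_task,
        improves_completed_human_activity,
        detects_patterns_without_replacing_human_assessment,
        only_prepares_an_assessment,
    ])
```

Note that even where the predicate returns true, the provider must still document the assessment before placing the system on the market.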

How do I classify my AI system step by step?

Five steps: 1) Check whether the system falls under the prohibitions of Article 5 (social scoring, manipulative AI, mass biometric surveillance). 2) Check whether it is a safety component in a product under Annex I. 3) Check whether the use case falls under one of the 8 Annex III areas. 4) If yes, examine the exception criteria under Article 6(3). 5) Check transparency obligations under Article 50 for limited-risk systems. Document every step; the authority can request the assessment.
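The five steps form a decision cascade, checked in order. A sketch, with boolean inputs standing in for the real legal analysis at each step:

```python
def classify_ai_system(
    prohibited_practice: bool,
    annex_i_safety_component: bool,
    in_annex_iii_area: bool,
    article_6_3_exception: bool,
    transparency_relevant: bool,
) -> str:
    """Walk the five classification steps in order; first match wins."""
    if prohibited_practice:                              # Step 1: Article 5
        return "unacceptable risk: prohibited (Article 5)"
    if annex_i_safety_component:                         # Step 2: Annex I
        return "high risk (Article 6, Annex I)"
    if in_annex_iii_area and not article_6_3_exception:  # Steps 3-4: Annex III
        return "high risk (Article 6, Annex III)"
    if transparency_relevant:                            # Step 5: Article 50
        return "limited risk (Article 50 transparency obligations)"
    return "minimal risk"

# Example: a customer-facing chatbot outside Annex I/III
print(classify_ai_system(False, False, False, False, True))
```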

What obligations arise from high-risk classification?

High-risk systems must meet the following requirements: risk management system (Article 9), data governance for training and test data (Article 10), technical documentation (Article 11), automatic logging (Article 12), transparency and user information (Article 13), human oversight (Article 14), accuracy, robustness, and cybersecurity (Article 15). Additionally: quality management system (Article 17), EU declaration of conformity (Article 47), and conformity assessment before placing on the market.
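For compliance tracking, the article-by-article obligations above map naturally onto a checklist. A sketch:

```python
# High-risk obligations by EU AI Act article, as listed above.
HIGH_RISK_OBLIGATIONS = {
    9: "risk management system",
    10: "data governance for training and test data",
    11: "technical documentation",
    12: "automatic logging",
    13: "transparency and user information",
    14: "human oversight",
    15: "accuracy, robustness, and cybersecurity",
    17: "quality management system",
    47: "EU declaration of conformity",
}

def open_obligations(completed_articles: set) -> list:
    """Return the obligations not yet evidenced, as 'Article N: ...' strings."""
    return [
        f"Article {article}: {obligation}"
        for article, obligation in sorted(HIGH_RISK_OBLIGATIONS.items())
        if article not in completed_articles
    ]
```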

How are GPAI models like GPT or Gemini classified?

General-purpose AI models (GPAI) fall under Chapter V of the EU AI Act, a separate regulatory regime alongside the four-level classification. Obligations: technical documentation, transparency towards downstream providers, copyright compliance, and a summary of the training data. GPAI models with systemic risk (from 10^25 FLOPs of training compute) must additionally conduct model evaluations and adversarial testing and implement cybersecurity measures. GPAI obligations apply from August 2, 2025.
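The 10^25 FLOPs threshold can be sanity-checked against a common training-compute rule of thumb of roughly 6 FLOPs per parameter per training token; both the heuristic and the example model size below are illustrative assumptions, not figures from the Act:

```python
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # systemic-risk presumption threshold

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token (heuristic)."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if training compute meets or exceeds the 10^25 FLOPs threshold."""
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

# Illustrative: a 100B-parameter model trained on 20T tokens -> 1.2e25 FLOPs
flops = estimate_training_flops(100e9, 20e12)
print(presumed_systemic_risk(flops))  # True
```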

Success Stories

Discover how we support companies in their digital transformation

Digitalization in Steel Trading

Klöckner & Co

Digital Transformation in Steel Trading

Case Study

Results

Over 2 billion euros in annual revenue through digital channels
Goal to achieve 60% of revenue online by 2022
Improved customer satisfaction through automated processes

AI-Powered Manufacturing Optimization

Siemens

Smart Manufacturing Solutions for Maximum Value Creation

Case Study

Results

Significant increase in production performance
Reduction of downtime and production costs
Improved sustainability through more efficient resource utilization

AI Automation in Production

Festo

Intelligent Networking for Future-Proof Production Systems

Case Study

Results

Improved production speed and flexibility
Reduced manufacturing costs through more efficient resource utilization
Increased customer satisfaction through personalized products

Generative AI in Manufacturing

Bosch

AI Process Optimization for Improved Production Efficiency

Case Study

Results

Reduction of AI application implementation time to just a few weeks
Improvement in product quality through early defect detection
Increased manufacturing efficiency through reduced downtime

Let's Work Together!

Is your organization ready for the next step into the digital future? Contact us for a personal consultation.


Ready for the next step?

Schedule a strategic consultation with our experts now

30 Minutes • Non-binding • Immediately available

For optimal preparation of your strategy session:

Your strategic goals and challenges
Desired business outcomes and ROI expectations
Current compliance and risk situation
Stakeholders and decision-makers in the project

Prefer direct contact?

Direct hotline for decision-makers

Strategic inquiries via email

Detailed Project Inquiry

For complex inquiries or if you want to provide specific information in advance
