The EU AI Act (Regulation (EU) 2024/1689) requires organizations to achieve compliance for high-risk AI systems by August 2026 — with fines of up to €35 million or 7% of annual turnover. Prohibitions on manipulative AI and social scoring have already been in effect since February 2025. ADVISORI combines AI transformation and regulatory expertise under one roof: we classify your AI systems, build your governance framework, and guide you to audit-ready compliance — on time and with a practical focus.
Our clients trust our expertise in digital transformation, compliance, and risk management
30 Minutes • Non-binding • Immediately available
Or contact us directly:
We guide you through a proven step-by-step model from initial assessment to ongoing monitoring.
AI Inventory: Recording all AI systems, models, and use cases within the organization — including purchased SaaS solutions with embedded AI and internally developed models
Risk Classification: Categorizing each system into the four tiers of the EU AI Act (unacceptable, high, limited, minimal risk) with documented justification and assignment to Annex I or III
Gap Analysis: Systematic comparison against the requirements of Art. 9–15 (risk management, data quality, documentation, logging, transparency, human oversight, robustness) — result: prioritized action plan
Governance Setup: Establishing an AI Management System with defined roles (AI Officer, Risk Owner), approval processes, documentation standards, and integration into existing ISMS and data protection management
Implementation & Conformity: Implementing technical and organizational measures, creating documentation per Annex IV, conducting the conformity assessment, and preparing the EU Declaration of Conformity
Post-Market Monitoring & Audit: Establishing an ongoing monitoring system per Art. 72, conducting regular internal audits, implementing incident reporting processes, and performing annual compliance reviews with adjustments to new guidelines and standards
We offer you tailored solutions for your digital transformation
Systematic recording of all AI systems within your organization — from chatbots and scoring models to automated decision-making systems. Each system is classified into one of the four risk tiers based on the criteria of the EU AI Act. Deliverables: complete AI register, classification documentation with justification per system, prioritization matrix for high-risk systems under Annex III and Annex I. This forms the basis for all subsequent compliance measures.
Detailed comparison of your AI systems against the requirements of Regulation (EU) 2024/1689: risk management (Art. 9), data quality (Art. 10), technical documentation (Art. 11), logging obligations (Art. 12), transparency (Art. 13), human oversight (Art. 14), and accuracy/robustness (Art. 15). Deliverables: gap report per high-risk system, prioritized list of measures, timeline with milestones through August 2026, and estimated effort per measure.
Establishing an AI Management System (AIMS) with clearly defined roles, responsibilities, and processes — modeled on ISO/IEC 42001. Definition of AI approval processes, risk assessment cycles, documentation obligations, and escalation paths. Integration with existing ISMS (ISO 27001), data protection management (GDPR), and quality management. Deliverables: governance handbook, role matrix (AI Officer, Risk Manager, Data Owner), process map, KPI framework for AI compliance.
Creation of complete technical documentation per Annex IV of the EU AI Act for high-risk AI systems: system description, design specifications, training and test data, performance metrics, risk management measures, and validation results. Preparation for the conformity assessment — conducted internally or by notified bodies. Deliverables: technical dossier per system, Declaration of Conformity, audit trail for regulatory inquiries.
Specialized advisory services for providers and operators of General Purpose AI (GPAI): implementation of transparency obligations under Art. 53, creation of model cards, copyright compliance, and training data summaries. For GPAI models with systemic risk (>10^25 FLOPs): adversarial testing, red teaming, model evaluation, and incident reporting to the EU AI Office. Deliverables: GPAI compliance checklist, model card, risk assessment, incident response plan.
Practical training programs to fulfill the AI competence obligation (Art. 4): executive briefings for management and supervisory boards, workshops for specialist departments, and technical deep dives for development teams. Regular internal audits to verify compliance. Establishment of a post-market monitoring system for high-risk AI in accordance with Art. 72. Deliverables: training materials, audit reports, monitoring dashboard, annual compliance report.
Looking for a complete overview of all our services?
View Complete Service Overview
Discover our specialized areas of digital transformation
Development and implementation of AI-supported strategies for your company's digital transformation to secure sustainable competitive advantages.
Establish a robust data foundation as the basis for growth and efficiency through strategic data management and comprehensive data governance.
Precisely determine your digital maturity level, identify potential in industry comparison, and derive targeted measures for your successful digital future.
Foster a sustainable innovation culture and systematically transform ideas into marketable digital products and services for your competitive advantage.
Maximize the value of your technology investments through expert consulting in the selection, customization, and seamless implementation of optimal software solutions for your business processes.
Transform your data into strategic capital: From data preparation through Business Intelligence to Advanced Analytics and innovative data products – for measurable business success.
Increase efficiency and reduce costs through intelligent automation and optimization of your business processes for maximum productivity.
Leverage the potential of AI safely and in regulatory compliance, from strategy through security to compliance.
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and is being applied in stages: since 2 February 2025, AI systems with unacceptable risk have been prohibited — including social scoring, manipulative AI, and real-time remote biometric identification. Since 2 August 2025, transparency obligations for GPAI models apply. From 2 August 2026, high-risk AI systems under Annex III (including biometrics, education, employment, and credit scoring) must be fully compliant. High-risk systems under Annex I (product safety) have until 2 August 2027. For certain AI in large-scale EU IT systems, an extended deadline of 31 December 2030 applies.
Annex III of the EU AI Act defines eight high-risk areas: (1) Biometric identification and categorization of persons, (2) Management and operation of critical infrastructure (energy, transport, water, gas), (3) General and vocational education (access, assessment, exam monitoring), (4) Employment and human resources management (candidate selection, promotion, termination), (5) Access to essential services (credit scoring, insurance, social benefits), (6) Law enforcement (risk assessment, lie detection, evidence analysis), (7) Migration and border control (visa applications, asylum procedures), (8) Administration of justice and democratic processes. In addition, AI systems embedded in products with CE marking fall under Annex I.
Providers of high-risk AI systems must meet comprehensive requirements under the EU AI Act: a risk management system covering the entire lifecycle (Art. 9), quality requirements for training, validation, and test datasets (Art. 10), complete technical documentation per Annex IV (Art. 11), automatic logging capability (Art. 12), transparency and provision of information to operators (Art. 13), measures for human oversight (Art. 14), and accuracy, robustness, and cybersecurity (Art. 15). Prior to placing on the market, a conformity assessment must be conducted and an EU Declaration of Conformity must be issued.
The fines under the EU AI Act are structured in three tiers: up to €35 million or 7% of global annual turnover (whichever is higher) for the use of prohibited AI practices; up to €15 million or 3% of turnover for violations of obligations relating to high-risk AI systems or GPAI models; and up to €7.5 million or 1% of turnover for providing false or incomplete information to authorities. For SMEs and start-ups, the lower of the two amounts in each tier applies. Fines are also reduced for natural persons not acting in a commercial capacity.
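The "whichever is higher" rule behind each tier reduces to a one-line calculation. A minimal sketch; the function name and the turnover figure are illustrative assumptions:

```python
def fine_cap(fixed_eur: float, turnover_share: float, annual_turnover_eur: float) -> float:
    """Upper bound of a fine tier: the fixed amount or the turnover-based
    amount, whichever is higher (for SMEs and start-ups, the lower applies)."""
    return max(fixed_eur, turnover_share * annual_turnover_eur)

# Tier 1 (prohibited practices): EUR 35 million or 7% of global annual turnover.
# At EUR 1 billion turnover, the 7% share (EUR 70 million) dominates.
print(fine_cap(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```

For a company with €100 million turnover, 7% is only €7 million, so the €35 million fixed cap would apply instead.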
GPAI models such as GPT-4, Claude, or Gemini have been subject to their own obligations since August 2025: providers must create technical documentation, publish a summary of training data, comply with EU copyright law, and cooperate with downstream providers. GPAI models with systemic risk (threshold: training compute exceeding 10^25 FLOPs) have additional obligations: model evaluation according to the state of the art, adversarial testing and red teaming, assessment and mitigation of systemic risks, cybersecurity measures, and incident reporting to the EU AI Office. The AI Office oversees compliance directly at EU level.
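The systemic-risk presumption is a simple threshold test on cumulative training compute. A minimal sketch, assuming the strictly-greater-than reading of the 10^25 FLOPs threshold in Art. 51:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold for GPAI systemic risk

def presumed_systemic_risk(training_flops: float) -> bool:
    """A GPAI model is presumed to pose systemic risk when its cumulative
    training compute strictly exceeds the 10^25 FLOPs threshold."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(3e25))  # True: evaluation, red teaming, reporting apply
print(presumed_systemic_risk(1e24))  # False: baseline GPAI obligations only
```

Note that the Commission can also designate a model as having systemic risk on other grounds, so the compute check is a presumption, not the whole test.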
The EU AI Act, the GDPR, and the NIS 2 Directive address different aspects with some overlap: The GDPR protects personal data — the AI Act regulates AI systems regardless of whether they process personal data. NIS 2 focuses on cybersecurity for critical infrastructure — the AI Act additionally requires robustness and accuracy specifically for AI. The Cyber Resilience Act (CRA) governs product security for connected devices and overlaps with the AI Act in the area of embedded AI. The EU Product Liability Directive extends liability to AI-related damages. Organizations need an integrated compliance strategy that covers all regulatory frameworks — which is exactly what ADVISORI provides from a single source.
Since 2 February 2025, providers and operators of AI systems must ensure that their staff possess sufficient AI competence (Art. 4 EU AI Act). This encompasses technical knowledge, an understanding of regulatory requirements, and awareness of associated risks. The obligation applies regardless of risk class — including organizations that only deploy AI systems with minimal risk. In practice, this means training programs for employees who develop, operate, or make decisions about AI, plus documentation of the qualification measures taken. ADVISORI offers tailored training formats for this purpose — from 90-minute board briefings to multi-day hands-on workshops for development teams.
ADVISORI is one of the few consulting firms that combines AI transformation and regulatory expertise under one roof. As an ISO 27001/9001/14001-certified organization, we bring proven expertise in information security, risk management, and compliance. Our consultants are familiar with the interfaces to GDPR, NIS 2, DORA, and CRA and develop integrated compliance strategies. With our own multi-agent AI platform Synthara, we not only implement AI from a regulatory perspective but also use it operationally — giving us firsthand knowledge of requirements from both the provider and operator perspective. As a Microsoft, AWS, and Google Cloud partner, we cover all relevant AI technology stacks. This sets us apart from pure legal advisory firms or IT service providers without hands-on AI experience.
Discover how we support companies in their digital transformation
Bosch
AI-driven process optimization for improved production efficiency

Festo
Intelligent connectivity for future-ready production systems

Siemens
Smart manufacturing solutions for maximum value creation

Klöckner & Co
Digitalization in steel trading

Is your organization ready for the next step into the digital future? Contact us for a personal consultation.
Schedule a strategic consultation with our experts now
30 Minutes • Non-binding • Immediately available
Direct hotline for decision-makers
Strategic inquiries via email
For complex inquiries or if you want to provide specific information in advance
Discover our latest articles, expert insights, and practical guides on EU AI Act compliance

The July 2025 revision of the ECB guide obliges banks to strategically realign their internal models. Key points: 1) Artificial intelligence and machine learning are permitted, but only in explainable form and under strict governance. 2) Top management bears explicit responsibility for the quality and compliance of all models. 3) CRR3 requirements and climate risks must be proactively integrated into credit, market, and counterparty risk models. 4) Approved model changes must be implemented within three months, which requires agile IT architectures and automated validation processes. Institutions that build up explainable-AI capabilities, robust ESG databases, and modular systems early on turn the tightened requirements into a sustainable competitive advantage.

Turn your AI from an opaque black box into a transparent, trustworthy business partner.

AI is fundamentally changing software architecture. Recognize the risks, from "black box" behavior to hidden costs, and learn how to design well-thought-out architectures for robust AI systems. Secure your future viability now.

The seven-hour ChatGPT outage of 10 June 2025 highlights the critical risks of centralized AI services for German companies.

AI risks such as prompt injection and tool poisoning threaten your business. Protect your intellectual property with an MCP security architecture. A practical guide to implementation in your own company.

Live hacking demonstrations show it with shocking ease: AI assistants can be manipulated with harmless-looking messages.