
EU AI Act Enforcement: How Brussels Will Audit and Penalize AI Providers — and What This Means for Your Company
Imagine a Tuesday morning: your AI project manager walks into the office with an email from the EU Commission. Your company uses ChatGPT for automated customer inquiries and Gemini for internal document analysis, tools that are now deeply integrated into your business processes. The email informs you that the Commission has initiated an investigation against the GPAI model provider you are using. They want to know: What data have you used? How do you integrate the model? Have you assessed the risks?
Does this still sound like the future? Not anymore. Since March 12, 2026, the mechanism for how Brussels will do exactly that is on the table — concrete and precise.
What the EU Commission Published on March 12, 2026
The European Commission published a draft implementing regulation on March 12, 2026 (Reference: Ares(2026)2709234), which describes for the first time in full detail how the Commission will investigate and penalize GPAI model providers. The public consultation runs until April 9, 2026, with formal adoption planned for Q2 2026.
This draft does not create new substantive obligations — it describes the procedural engine behind the EU AI Act. It answers the question that all stakeholders have been asking: What does an EU audit look like in practice?
Timeline Overview
- August 2025: GPAI obligations for model providers came into effect
- March 12, 2026: Draft implementing regulation published
- Q2 2026: Formal adoption of the implementing regulation
- August 2, 2026: Full enforcement powers of the EU Commission active (new models)
- August 2, 2027: Full enforcement also for models that came to market before August 2025
The clock is ticking. Those who have not yet started structuring their AI governance have about four months until the first high-risk wave.
How the Enforcement Mechanism Works
The draft implementing regulation is divided into three central areas:
1. Evaluations: Access to Code, Weights, and Infrastructure
Chapter II of the draft regulates how the Commission can technically evaluate models. Article 92 of the AI Act gives the Commission the right to access models directly. What this access can specifically cover is far-reaching:
- Access via APIs
- Internal access to the model
- Access to source code
- Access to model weights
- Access to hosting infrastructure
- Access to "inspect and modify system state" — at the same level as the provider's internal employees
Particularly noteworthy: The Commission can require providers to disable their own monitoring systems that would track what the regulator does during the audit. A provider therefore cannot monitor what the auditors see.
2. Independent Experts: Strict Independence Rules
When the Commission engages external experts for an evaluation (as provided for in Article 92(2) of the AI Act), strict rules apply:
- 12-month lookback: All contractual relationships between the expert and the provider in the 12 months before the evaluation are reviewed
- Shared ownership, management, or resources are disqualifying
- Experts must remain independent for the entire duration of their appointment
- Strict confidentiality obligations according to Article 78 of the AI Act
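The 12-month lookback is essentially a date comparison. A minimal sketch of such a check, assuming the window is measured as 365 days before the evaluation start (the day-level arithmetic here is an illustrative assumption, not defined in the draft):

```python
from datetime import date, timedelta

LOOKBACK = timedelta(days=365)  # the draft's 12-month review window

def within_lookback(contract_end: date, evaluation_start: date) -> bool:
    """True if a contractual tie between expert and provider ended within
    the 12 months before the evaluation starts, and would thus be reviewed."""
    return contract_end >= evaluation_start - LOOKBACK

eval_start = date(2026, 9, 1)
print(within_lookback(date(2026, 3, 1), eval_start))   # contract ended 6 months before
print(within_lookback(date(2024, 6, 30), eval_start))  # well outside the window
```

Note that the lookback only triggers a review of the relationship; whether it actually disqualifies the expert is a separate assessment.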
3. Procedures: Interim Measures Before Formal Decision
Chapter III is particularly relevant for companies. The Commission can take measures even before formal proceedings are opened:
- Request information from providers and order preliminary measures
- In case of imminent danger (health, safety risks): Provisional suspension of a model from the market — even before a formal decision has been made
This means: A GPAI model that your company uses can potentially be removed from the market while proceedings are still ongoing.
What This Means for Companies Using GPAI Models
Here lies a common misconception: Many companies believe the EU AI Act only affects providers of large language models — OpenAI, Google, Anthropic, Meta. This is wrong.
As a deployer (user) of a GPAI model, you are not directly the target of GPAI enforcement proceedings. But you are indirectly affected:
Scenario 1: The Provider is Being Audited
OpenAI or Google is under EU investigation. The Commission requests information about all downstream integrations. You must be able to demonstrate how you use the model, what risk assessments you have conducted, and whether you meet the corresponding AI Act obligations for high-risk use cases.
Scenario 2: The Model is Provisionally Suspended
An interim measure decision stops the provider's API. Your production system fails. Do you have a business continuity plan for AI outages?
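A business continuity plan for this scenario comes down to having an ordered list of fallbacks. A minimal sketch of a failover wrapper, where `suspended_api` and `local_fallback` are hypothetical stand-ins for a real GPAI API and an alternative model:

```python
from typing import Callable, Sequence

class AllProvidersFailed(Exception):
    pass

def complete_with_failover(prompt: str, providers: Sequence[Callable[[str], str]]) -> str:
    """Try each provider in order; return the first successful response."""
    errors: list[Exception] = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # network error, HTTP 503, provisional suspension, ...
            errors.append(exc)
    raise AllProvidersFailed(f"all {len(providers)} providers failed: {errors}")

# Usage: the primary simulates a suspended API, the fallback answers locally.
def suspended_api(prompt: str) -> str:
    raise RuntimeError("model provisionally suspended")

def local_fallback(prompt: str) -> str:
    return f"[local model] {prompt}"

print(complete_with_failover("summarize ticket", [suspended_api, local_fallback]))
# → [local model] summarize ticket
```

In production, the same idea extends to queuing requests for later processing or degrading gracefully to a non-AI workflow; the point is that the fallback chain exists before the suspension happens.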
Scenario 3: Your Own High-Risk Application Comes Under Scrutiny
Are you using a GPAI model in a high-risk context (e.g., personnel decisions, credit scoring, critical infrastructure)? Then the high-risk obligations of the AI Act apply from August 2026 — regardless of whether the base model itself is classified as GPAI.
Fine Framework: What's at Stake
For GPAI model providers, Article 101(1) of the AI Act provides:
- 3% of global annual revenue or
- 15 million euros — whichever is higher
For companies that use the model in high-risk applications and violate the AI Act, sanctions can amount to up to 30 million euros or 6% of global annual revenue.
For comparison: GDPR fines are often used as a benchmark. The AI Act operates in similar dimensions, but with a much more complex compliance framework.
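The "whichever is higher" rule means the 15 million euro floor dominates until global annual revenue exceeds 500 million euros. A quick sketch of the Article 101(1) cap for providers:

```python
def gpai_provider_fine_cap(global_annual_revenue_eur: float) -> float:
    """Upper fine bound under Article 101(1) AI Act:
    3% of global annual revenue or EUR 15 million, whichever is higher."""
    return max(0.03 * global_annual_revenue_eur, 15_000_000.0)

# A provider with EUR 2 billion revenue: 3% = EUR 60M, above the EUR 15M floor.
print(gpai_provider_fine_cap(2_000_000_000))  # → 60000000.0
# A smaller provider with EUR 100M revenue: 3% = EUR 3M, so the floor applies.
print(gpai_provider_fine_cap(100_000_000))    # → 15000000.0
```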
What Companies Must Do Now
The good news: The draft is public, the procedure is known, and the deadlines are clear. Those who act now have time. Those who wait do not.
Immediate Measures (by Q2 2026)
- Create AI inventory: Document all GPAI models in use (ChatGPT, Gemini, Claude, Copilot, etc.)
- Classify use cases: Which of your AI uses fall under high-risk according to Annex III of the AI Act?
- Contract check: What do your contracts with GPAI providers regulate regarding data protection, liability, and compliance?
- Risk assessment: Which AI applications in your company could be classified as high-risk?
- Establish AI Governance Framework: Who is responsible for AI compliance in your company?
- Train employees: AI Literacy is already mandatory under the AI Act
- Technical documentation: For high-risk applications, complete documentation according to Article 11 AI Act
- Business Continuity Planning: What happens if a GPAI provider API fails or is suspended?
- Incident Response: Process for the case of an EU investigation affecting your provider
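An AI inventory does not need to be elaborate to be useful during an investigation. A minimal sketch of what such a record could look like; the field names and example entries are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str
    provider: str
    use_case: str
    high_risk: bool               # your own Annex III classification
    risk_assessment_done: bool = False
    contract_reviewed: bool = False

inventory = [
    AIToolRecord("ChatGPT", "OpenAI", "customer inquiries", high_risk=False),
    AIToolRecord("Gemini", "Google", "document analysis", high_risk=False),
    AIToolRecord("CV screening", "Vendor X", "personnel decisions", high_risk=True),
]

# Gaps needing attention before August 2026: high-risk uses without a
# completed risk assessment.
gaps = [t.name for t in inventory if t.high_risk and not t.risk_assessment_done]
print(gaps)  # → ['CV screening']
```

Even a spreadsheet with these columns covers the first three checklist items; the structured form simply makes gap reports like the one above trivial to generate.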
Structural (Ongoing)
- Establish monitoring: Continuous observation of AI Act enforcement developments
- Follow Code of Practice: The GPAI Code of Practice influences requirements for providers — and thus indirectly your tool selection
- Keep governance documentation current: The Commission can request documentation about your AI usage during an investigation
The ADVISORI Approach: AI Governance as Competitive Advantage
ADVISORI supports companies in building a robust AI Governance Framework — not as a pure compliance project, but as a strategic foundation for the responsible use of AI.
Our approach includes:
- AI Act Gap Analysis: Where do you stand today, what is missing until August 2026?
- Risk assessment and classification: Systematic categorization of all AI applications
- Governance Framework Design: Roles, responsibilities, processes for AI compliance
- Technical documentation: Support in creating all required documents
- Training and AI Literacy: From executive level to specialist departments
Companies that build AI Governance early have a clear advantage: They can leverage AI potential without incurring regulatory risks — and can respond confidently in the event of an investigation.
More about the ADVISORI approach: AI Governance Consulting
FAQ: EU AI Act GPAI Enforcement
Am I as a company directly affected by GPAI enforcement proceedings if I only use ChatGPT?
Not directly — the enforcement proceedings are directed against the provider (e.g., OpenAI). But as a deployer, you have your own obligations: For high-risk applications, you must meet AI Act requirements, and you can be asked to cooperate during an investigation. Additionally, you bear the operational risk if a provider API is temporarily suspended.
From when can the EU Commission actually impose fines?
For new GPAI models from August 2, 2026. For models that were already on the market before August 2, 2025, an extended deadline until August 2, 2027 applies. The implementing regulation (in consultation until April 2026) will formally establish the exact procedural framework.
What does the GPAI Code of Practice mean for my company?
The Code of Practice is primarily relevant for GPAI model providers. For companies using models, it is an important indicator: It shows what requirements are placed on providers — and thus what information and guarantees you should demand when selecting providers. Providers who have signed the Code of Practice signal compliance readiness.
Do we as an SME really need to act now, or is this only relevant for large companies?
The AI Act applies regardless of company size — however, with some accommodations for SMEs (e.g., reduced fees for registrations, sandboxes). What matters is not company size, but the area of application: An SME using AI for personnel decisions falls under high-risk obligations. Act now — the deadlines apply to everyone.