KI-MIG adopted: What the AI Act Implementation Act means for companies

February 23, 2026
15 min read

On February 11, 2026, the Federal Cabinet approved the draft of the AI Market Surveillance and Innovation Promotion Act (KI-MIG). With that, the abstract discussion about AI regulation in Germany has taken concrete form - with clear responsibilities, defined deadlines and severe sanctions. For companies that develop, provide or use AI systems, the operational implementation phase begins now. Anyone who has not established a resilient AI governance structure by August 2026 risks not only fines of up to 35 million euros but also the loss of market access.

This article classifies the KI-MIG, explains the new supervisory architecture, analyzes the obligations for various company roles and provides concrete recommendations for action for C-level and compliance officers.

What is the KI-MIG – and why is it coming now?

The KI-MIG is the national implementing law for the European AI Regulation (EU AI Act), which came into force on August 2, 2024. Although the EU regulation applies directly in all member states, it leaves essential implementation questions to the national legislators: Which authority monitors the market? Who is responsible for fines? How are regulatory sandboxes organized? The KI-MIG answers exactly these questions.

The full name — AI Market Surveillance and Innovation Promotion Act — already reveals the dual strategy: On the one hand, the law is intended to organize market surveillance and sanction violations, and on the other hand, to promote innovation and support companies in implementation. Federal Digital Minister Dr. Karsten Wildberger put it this way at the cabinet meeting: "We are implementing European guidelines with maximum openness to innovation and creating lean AI supervision with a clear view of the needs of the economy."

There is a simple reason why the KI-MIG is coming now: the EU AI Regulation obliges each member state to designate at least one notifying authority and one market surveillance authority by August 2, 2025. Germany has de facto missed this deadline - only the cabinet decision of February 2026 creates the legal basis. The draft now goes to the Bundesrat and Bundestag. Given the full applicability of the AI Act from August 2, 2026, there is little room for parliamentary delay.

The EU AI Act as a framework: risk levels and obligations at a glance

To understand the KI-MIG, you need to know the EU AI Act, whose national implementation it regulates. The European AI Regulation follows a risk-based approach that divides AI systems into four categories. This classification determines which obligations apply to a company.

Prohibited AI practices include systems that are considered incompatible with core European values. These include manipulative techniques that impair people's judgment, social scoring systems by authorities and certain forms of real-time biometric surveillance in public spaces. The ban on these practices has been in effect since February 2, 2025 and is now also enforceable nationally through the KI-MIG.

High-risk AI systems form the regulatory focus. This category includes AI systems in safety-critical products (medical devices, machines, vehicles) as well as in sensitive application areas such as human resources management, creditworthiness assessment, law enforcement or critical infrastructure. These systems are subject to comprehensive requirements: technical documentation, risk management, data governance, logging, human oversight and conformity assessments. The full obligations for high-risk AI systems become enforceable on August 2, 2026.

AI systems with limited risk are subject to transparency obligations. Chatbots must be labeled as AI, deepfakes must be recognizable as synthetically generated. These obligations also apply from August 2026.

AI systems with minimal risk, such as spam filters or AI-supported game logic, remain largely unregulated but are encouraged to follow voluntary codes of conduct.

In addition, the AI Act regulates general-purpose AI models (GPAI) such as large language models. Their providers have been subject to their own transparency and documentation obligations since August 2, 2025. Models with systemic risk - i.e. particularly capable models above certain thresholds - face stricter requirements, including adversarial testing and incident reporting.

For companies that want to view the AI Act in the context of the broader 2026 regulatory wave of NIS2, DORA, AI Act and CRA, it quickly becomes clear: the regulatory requirements are converging. Anyone who is already implementing NIS2 and DORA is well placed to efficiently integrate AI governance.

The new supervisory architecture: Federal Network Agency as the central AI authority

The heart of the KI-MIG is the definition of the national AI supervisory structure. The federal government has decided on a hybrid approach that builds on existing structures and is intended to avoid double regulation.

The Federal Network Agency as the center

The Federal Network Agency (BNetzA) takes on a triple role: it becomes the central coordination and competence center, the market surveillance authority and the notifying authority for AI in Germany. In this way, it bundles AI expertise at the federal level and becomes the central point of contact for companies that place AI systems on the German market or operate them there.

The decision in favor of the Federal Network Agency was not without political controversy. Critics pointed out that the authority has so far been known primarily as a regulator of telecommunications, postal services and energy. The federal government, however, argues that the BNetzA already has extensive experience in market surveillance of digital services and has systematically built up AI expertise since the AI Act came into force. The Bitkom President Dr. Ralf Wintergerst described the decision as a “pragmatic approach,” but at the same time warned that the authority, with the planned 60 additional positions, could only fulfill its tasks if it itself made extensive use of AI.

Sectoral responsibilities remain intact

In addition to the BNetzA, existing specialist authorities remain responsible in their respective sectors. BaFin continues to monitor AI systems in the financial sector, data protection authorities retain responsibility for data protection-related AI aspects, and state market surveillance authorities remain responsible for AI in products that already fall under harmonized EU product regulations.

The principle behind it: Companies should, if possible, stick to their known official contact (one-stop shop principle). A medical device manufacturer that develops an AI-supported diagnostic system continues to contact the authority responsible for medical devices - but this now also covers the AI-specific requirements of the AI Act.

This creates a particularly complex regulatory landscape for financial institutions that use AI: In addition to the AI Act, DORA requirements and BaFin regulations also apply to AI. An integrated view of these regulations is essential.

AI service desk for SMEs and start-ups

A notable element of the KI-MIG is the planned AI service desk at the Federal Network Agency. It is intended to serve as a low-threshold contact point, especially for small and medium-sized companies and start-ups. The idea: companies without their own compliance department in particular need practical guidance when classifying their AI systems and implementing the requirements. Whether the service desk will deliver the necessary level of detail and speed of response in practice remains to be seen - experiences with comparable advisory services during the introduction of the GDPR were mixed.

Regulatory Sandboxes: Innovation in a legally secure framework

One of the most discussed elements of the EU AI Act is the so-called regulatory sandbox - in German: KI-Reallabor (AI real-world laboratory). Each member state is obliged to set up at least one national regulatory sandbox by August 2, 2026. The KI-MIG assigns this task to the Federal Network Agency.

The concept is as simple as it is ambitious: companies can develop and test innovative AI applications in a controlled, legally secure framework - under official supervision, but with reduced regulatory hurdles. The sandbox provides a protected space in which new technologies can be tested before they are subject to the full compliance requirements of the market.

Regulatory sandboxes offer several advantages for companies. First, they create legal certainty in the development phase: those who test in the sandbox can rely on official support and feedback instead of operating in a regulatory gray area. Second, they accelerate market access, because the knowledge gained in the sandbox can feed directly into the conformity assessment. Third, they enable a structured dialogue between innovators and regulators that helps both sides - companies with compliance, and authorities with understanding new technologies.

In its statement on the KI-MIG, Bitkom demanded that access barriers to the regulatory sandboxes be kept low, that procedures be fully digital and that a reliable official response be guaranteed within 30 days. Whether the KI-MIG will meet these requirements in its final version depends on the parliamentary procedure.

For companies that work with or develop high-risk AI systems, the active use of real-world laboratories makes strategic sense. It makes it possible to validate compliance requirements at an early stage and accelerate market entry - especially in industries such as financial services, healthcare or mobility, where AI innovation and strict regulation come together.

Sanctions and fines: What threatens violations

The KI-MIG translates the sanctions framework of the EU AI Act into German administrative law. The fines follow the familiar GDPR model - but the maximum amounts are in some cases even higher.

The three levels of sanctions

Violations involving prohibited AI practices are punished most severely: up to 35 million euros or 7 percent of global annual turnover, whichever is higher. This sanction level covers, for example, the use of manipulative AI systems or prohibited forms of biometric surveillance.

Violations of the requirements for high-risk AI systems, as well as violations of the obligations for GPAI models, can be punished with up to 15 million euros or 3 percent of global annual turnover. This includes, for example, missing conformity assessments, inadequate technical documentation or a lack of risk management.

False or misleading information provided to authorities — for example in declarations of conformity or reporting obligations — can result in fines of up to 7.5 million euros or 1 percent of annual turnover.

Reduced upper limits apply to SMEs

The AI Act provides for reduced maximum fines for small and medium-sized companies and start-ups: for them, the lower of the two amounts (absolute or turnover-related) applies. This eases the financial exposure but does not change the material obligations. An SME, too, must fully document its high-risk AI systems and have them assessed for conformity.

Enforceability from August 2026

Crucial for practice: the sanctions only become fully enforceable once the AI Act is fully applicable from August 2, 2026. The bans - and thus the sanctioning options for prohibited AI practices - have been in effect since February 2025, but until now a national enforcement structure was missing. The KI-MIG closes this gap.

At the same time, the Federal Digital Ministry is campaigning in Brussels for extensions of the deadlines for the applicability of the high-risk requirements. It is currently unclear whether and to what extent such extensions will be granted. Companies should treat August 2026 as a binding deadline - any extension would be a bonus, not a planning factor.

Obligations by role: providers, operators, importers

The AI Act differentiates obligations depending on the role a company plays in the AI value chain. The KI-MIG adopts this role logic and makes it enforceable nationally. For strategic planning in companies, it is therefore crucial to correctly determine your own role.

Provider

Providers are companies that develop or have developed an AI system and bring it onto the market under their own name. They bear the most comprehensive responsibilities: risk management, technical documentation, data governance, conformity assessment, EU declaration of conformity, CE marking (for high-risk systems), post-market monitoring and incident reporting. Providers must also establish a quality management system and keep records for at least ten years.

Operator (Deployer)

Operators are companies that use AI systems - for example, a company that uses a third-party AI-based recruiting tool. Operators must ensure that the system is used in accordance with the instructions for use, that human supervision is ensured by qualified personnel, that the input data corresponds to the intended use and that they report abnormalities to the provider and the authority. Particularly relevant: Operators of high-risk AI systems must carry out a fundamental rights impact assessment before putting the system into operation.

Importers and distributors

Importers who place AI systems from third countries on the EU market must ensure that the provider has carried out the conformity assessment and that the technical documentation is available. Distributors must check that the CE marking is present. Both roles can, under certain circumstances, turn into the provider role - for example, if a company sells an imported AI system under its own name or substantially modifies it.

Role clarity as the first compliance measure

In practice, the roles often blur. A company that fine-tunes a pre-trained AI model from a third-party provider with its own data and offers it as an independent product becomes a provider itself - with all the associated obligations. The first and most important compliance measure is therefore systematic clarification: What role do we play for which AI system? This role determination should be documented for every AI system in the company.
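The role logic above can be captured as a first-pass triage. The helper below is a hypothetical simplification of the article's role descriptions - a real determination requires legal analysis - but it illustrates the key point that one company can hold several roles for the same AI system.

```python
# Hypothetical first-pass helper for the AI Act role logic described above.
# Simplified reading of the article, not a legal test.

def ai_act_roles(markets_under_own_name: bool, imports_into_eu: bool,
                 distributes: bool, uses_in_operations: bool) -> list[str]:
    """Return every AI Act role a company holds for a given AI system."""
    roles = []
    # Placing a system on the market under one's own name triggers provider
    # status -- even for an imported or fine-tuned third-party system.
    if markets_under_own_name:
        roles.append("provider")
    if imports_into_eu:
        roles.append("importer")
    if distributes:
        roles.append("distributor")
    if uses_in_operations:
        roles.append("deployer")
    return roles or ["role unclear: review needed"]

# A company that sells a fine-tuned third-party model under its own brand
# and also uses it internally holds two roles at once:
print(ai_act_roles(True, False, False, True))  # ['provider', 'deployer']
```

Documenting the output of such a triage per AI system is a lightweight way to satisfy the role-clarification step described above.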

Transition periods: The timetable until full applicability

The EU AI Regulation will not become fully applicable all at once, but will follow a staggered schedule. This schedule is crucial for operational planning.

Since February 2, 2025: The bans on AI practices posing unacceptable risk and the AI literacy requirements set out in Article 4 of the Regulation apply. Companies must ensure that all employees working with AI systems have an appropriate level of competence.

Since August 2, 2025: The obligations for providers of general-purpose AI models (GPAI) apply. Among other things, they must create technical documentation, provide information for downstream providers and have a policy for compliance with EU copyright law.

From August 2, 2026: The AI Act will be fully applicable, including all high-risk requirements and national enforcement mechanisms. By the same date, the regulatory sandboxes and the market surveillance authorities must be operational.

From August 2, 2027: The requirements apply to high-risk AI systems that are integrated as safety components in products covered by harmonized EU product regulations (such as medical devices, machines or toys). These systems therefore get an additional year of transition.

Transition periods apply to existing AI systems that are already on the market before August 2, 2026 - but only if they are not significantly changed. A material change in purpose or functionality triggers full compliance obligations. Companies should therefore inventory their existing AI systems and check whether planned updates qualify as significant changes.
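The staggered schedule above lends itself to a simple lookup. The sketch below encodes the milestone dates from this article in a hypothetical helper that answers "which obligations already apply on a given date" - useful, for instance, when reviewing an inventory of existing systems against the timeline.

```python
from datetime import date

# Milestone dates and summaries as stated in the article above;
# the lookup helper itself is a hypothetical illustration.
MILESTONES: dict[date, str] = {
    date(2025, 2, 2): "bans on prohibited practices; AI literacy duty (Art. 4)",
    date(2025, 8, 2): "obligations for providers of GPAI models",
    date(2026, 8, 2): "full applicability incl. high-risk requirements",
    date(2027, 8, 2): "high-risk AI as safety components in harmonized products",
}

def obligations_in_force(today: date) -> list[str]:
    """List every milestone already applicable on the given date."""
    return [desc for d, desc in sorted(MILESTONES.items()) if d <= today]

print(obligations_in_force(date(2026, 3, 1)))
# in March 2026: the bans/literacy duty and the GPAI obligations apply,
# while the high-risk requirements are not yet enforceable
```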

What companies need to do specifically now

The time until August 2026 is short. Five months may seem like a long time in the political debate, but it is short for implementing an AI governance framework in a larger company. The following actions should now be a priority.

1. Create AI inventory

The first step is to take a complete inventory of all AI systems in the company - both self-developed and purchased. For each system, the following should be recorded: purpose, data used, provider or internal development team, groups of people affected, responsible person and - crucially - the risk category according to the AI Act. Without this inventory, targeted compliance planning is impossible.
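The fields listed above translate naturally into a structured inventory record. The sketch below is one possible shape for such a record; the field names, the example system and the vendor name are hypothetical illustrations, not a prescribed format.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    """Risk tiers of the AI Act, plus a marker for not-yet-classified systems."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"
    UNCLASSIFIED = "unclassified"

@dataclass
class AISystemRecord:
    """One entry in the company-wide AI inventory (fields per the article)."""
    name: str
    purpose: str
    data_used: str
    provider_or_team: str          # external provider or internal dev team
    affected_groups: list[str]
    responsible_person: str
    risk_category: RiskCategory = RiskCategory.UNCLASSIFIED

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="CV screening tool",
        purpose="Pre-selection of job applicants",
        data_used="Applicant CVs and cover letters",
        provider_or_team="ExampleVendor GmbH",   # hypothetical vendor
        affected_groups=["job applicants"],
        responsible_person="Head of HR",
        risk_category=RiskCategory.HIGH_RISK,    # HR is a high-risk area per the article
    ),
]
```

Defaulting new entries to UNCLASSIFIED makes unfinished classification work visible - a simple filter over the inventory then yields the backlog for step 2.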

2. Assign risk categories

Based on the inventory, all AI systems must be classified into the risk categories of the AI Act. This assignment requires legal and technical understanding and should not be left to the IT department alone. An interdisciplinary team from legal, compliance, IT and the respective departments is recommended.
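A rough pre-sorting can support, but never replace, that interdisciplinary review. The sketch below uses the area lists mentioned in this article as a hypothetical triage; a binding classification follows the AI Act itself (notably its high-risk annex) and needs legal sign-off for every system.

```python
# Hypothetical first-pass triage based on the area lists in this article.
# Every result still requires interdisciplinary legal/technical review.

HIGH_RISK_AREAS = {
    "human resources management", "creditworthiness assessment",
    "law enforcement", "critical infrastructure",
    "medical devices", "machines", "vehicles",
}
TRANSPARENCY_AREAS = {"chatbot", "deepfake generation"}

def triage_risk(application_area: str) -> str:
    """Rough pre-classification of an AI system by its application area."""
    area = application_area.strip().lower()
    if area in HIGH_RISK_AREAS:
        return "high-risk: full obligations, escalate to legal/compliance"
    if area in TRANSPARENCY_AREAS:
        return "limited risk: transparency obligations apply"
    return "minimal risk or unclear: document and schedule review"

print(triage_risk("Human Resources Management"))
# a recruiting tool lands in the high-risk bucket and is escalated
```

The deliberately cautious fallback ("unclear: document and schedule review") reflects the article's point: classification must not be left to the IT department alone.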

3. Build governance structures

Companies need clear responsibilities for AI. This means: a defined AI governance role (e.g. an AI governance officer), established processes for evaluating and releasing new AI systems, and integration of AI governance into existing compliance structures. Anyone who already operates an ISMS according to ISO 27001 or risk management according to DORA can build on these structures. Linking NIS2 and AI governance offers significant synergy potential here.

4. Prepare technical documentation and conformity assessment

For high-risk AI systems, providers must create comprehensive technical documentation covering, among other things, system architecture, training data, performance metrics and risk management outcomes. The conformity assessment - depending on the application, as a self-assessment or by a notified body - requires lead time. Anyone who doesn't start documentation until July 2026 will likely miss the deadline.

5. Adjust supplier management

Companies that purchase AI systems from third parties must adapt their procurement processes. Contracts should include clauses obliging the provider to comply with AI Act requirements, including providing technical documentation, carrying out conformity assessments and responding to requests from authorities. Existing contracts should be checked for compatibility with the new requirements.

6. Ensure AI literacy in the company

The obligation for AI competence (Article 4 AI Act), which has been in force since February 2025, is often underestimated in practice. Companies must ensure that everyone working with AI systems — from development to operation to use — has an adequate understanding of the technology and its risks. This not only affects technical teams, but also specialists and managers. Training programs should be documented and regularly updated.

The KI-MIG in the context of the European regulatory landscape

The KI-MIG does not stand alone. It is part of a broader European regulatory wave that began in 2024 and will peak in 2026. With NIS2, DORA and the Cyber Resilience Act (CRA), further sets of rules are coming into force alongside the AI Act that affect all companies using or offering digital technologies.

The convergence of these sets of rules is no coincidence. It reflects the basic European understanding that digital innovation can only be sustainable on a basis of security, transparency and responsibility. For companies, this convergence means that anyone who looks at regulation in isolation - AI Act here, NIS2 there, DORA separately - multiplies their compliance effort. On the other hand, those who build an integrated governance architecture can use synergies and meet regulatory requirements more efficiently.

For example: The risk management that the AI Act requires for high-risk AI systems can be integrated into existing risk management frameworks already built for DORA or NIS2. Technical documentation for AI systems can tie in with the security documentation required by the CRA. And the incident reporting obligations of the AI Act can be embedded into existing security incident reporting processes.

If you would like to learn more about the interaction of these sets of rules, you will find a comprehensive overview in our pillar article on the 2026 regulatory wave.

Critical classification: opportunities and risks of AI-MIG

The KI-MIG deserves a differentiated assessment. On the plus side are the clear assignment of responsibilities, the decision to forgo additional national requirements (no gold-plating) and the explicit innovation mandate for the Federal Network Agency. The regulatory sandboxes and the AI service desk are the right approaches to making regulation workable in practice.

On the risk side, several points remain open. Firstly, staffing: 60 additional positions at the Federal Network Agency for supervising the entire German AI market is ambitiously low. For comparison: the Irish data protection authority, which acts as the lead supervisory authority for numerous US tech companies in Europe, was massively expanded to enforce the GDPR - and is still considered chronically understaffed.

Secondly, conformity assessment: in many cases, high-risk AI systems must be assessed by independent testing bodies (notified bodies). These bodies must first be accredited and notified - a process that experience shows takes months. Bitkom rightly warns of a bottleneck effect, similar to the one under the Medical Device Regulation, which delayed innovation there.

Thirdly, the delay: Germany is late. While other member states have already passed their implementing laws or are about to do so, the parliamentary process is just beginning in Germany. This creates uncertainty for companies operating across the EU - especially for cross-border AI systems, which could be subject to different supervisory structures in different member states.

Recommendations for action for C-level and compliance officers

The regulatory situation is clear: AI governance is no longer an optional exercise, but a mandatory program. This results in three strategic imperatives for decision-makers.

First: Put AI governance on the board agenda. The AI Act makes AI compliance a matter for top management - not just because of the fine levels, but because the obligations are structural in nature. Risk management, human oversight and conformity assessment cannot be delegated without management defining the framework. A professional AI governance framework is the basis for every compliant AI deployment.

Second: Treat regulation as a competitive advantage. Companies that establish resilient AI governance early position themselves as trustworthy partners - towards customers, business partners and regulators. In a world in which AI-generated content, automated decisions and algorithmic systems face increasing scrutiny, verifiable compliance becomes a differentiator.

Third: Think integrated, not isolated. AI governance does not belong in a silo but in the existing compliance architecture - alongside data protection, information security and risk management. The regulatory requirements of the AI Act, NIS2, DORA and CRA overlap in significant ways. An integrated AI governance strategy saves resources and creates consistency.

Frequently asked questions about the KI-MIG

What is the KI-MIG and what does the abbreviation stand for?

KI-MIG stands for AI Market Surveillance and Innovation Promotion Act. It is the German implementing law for the EU AI Regulation (AI Act) and regulates the national supervisory structure, the responsibilities of the authorities, the fine procedure and the establishment of regulatory sandboxes. The Federal Cabinet passed the draft law on February 11, 2026; it is now going through the parliamentary procedure in the Bundesrat and Bundestag.

Which authority is responsible for AI supervision in Germany?

The Federal Network Agency will become the central market surveillance authority, coordination body and notifying authority for AI in Germany. In addition, sectoral authorities remain responsible in their respective areas - such as BaFin for the financial sector. This means that companies generally retain their known official contact person.

How high are the fines for violations of the AI Act?

The fines are staggered: up to 35 million euros or 7 percent of global annual turnover for prohibited AI practices, up to 15 million euros or 3 percent for violations of high-risk requirements and up to 7.5 million euros or 1 percent for false information to authorities. The lower amounts apply to SMEs and start-ups.

When will the new AI rules apply in Germany?

The bans on unacceptably risky AI practices have been in effect since February 2025. The full requirements of the AI Act — including high-risk obligations — will become enforceable from August 2, 2026. An extended deadline until August 2027 applies to high-risk AI in harmonized products (e.g. medical devices).

What are regulatory sandboxes and how can companies use them?

Regulatory sandboxes (AI real-world laboratories) are controlled test environments in which companies can develop and test innovative AI applications under regulatory supervision - with reduced regulatory hurdles. The Federal Network Agency will set up at least one national sandbox by August 2026. Companies can apply to take part; the exact access requirements will be determined in the legislative process.

Would you like to know where your company stands on AI governance and what needs to be done by August 2026? Our experts will help you inventory your AI systems, assess risk and build an AI governance framework that meets the requirements of the AI Act - pragmatic, actionable and tailored to your industry.

Contact us without obligation!

📖 Also read:AI compliance as a competitive factor: How AI Act & ISO 42001 strengthen your market position
