
NIS2 meets AI: Why AI governance is now mandatory
The underestimated risk: AI under NIS2
Most companies treat NIS2 as a pure IT security issue. Firewall, SIEM, incident response — the usual suspects. In doing so, they overlook the biggest risk: artificial intelligence.
Every AI system in your company — whether ChatGPT, Copilot, an internal ML pipeline or an AI-powered CRM — is an ICT system within the meaning of NIS2. And that means the same risk management requirements apply to these systems as to the rest of your IT infrastructure.
At the same time, the high-risk obligations of the EU AI Act will come into force from August 2026. Companies face a double burden that they can only manage with an integrated framework: AI governance.
Where NIS2 and AI overlap
1. ICT risk management (§30 NIS2UmsuCG)
NIS2 requires comprehensive risk management for all ICT systems. AI systems introduce specific risks that classic IT risk analyses do not cover: hallucinations, bias, model drift, prompt injection and uncontrolled data access.
2. Incident Reporting
If an AI system makes incorrect decisions, leaks confidential data or is manipulated, is that a reportable security incident? In many cases: yes. The 24-hour reporting deadline applies here as well.
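The 24-hour window leaves little room for manual handling. A minimal sketch of computing that deadline from the detection time of an AI-related incident; the function name and workflow are illustrative, not from any official schema:

```python
from datetime import datetime, timedelta, timezone

def nis2_early_warning_deadline(detected_at: datetime) -> datetime:
    """Latest submission time for the early warning: 24 hours after detection.

    Illustrative helper, not part of any official NIS2 tooling.
    """
    return detected_at + timedelta(hours=24)

detected = datetime(2026, 3, 1, 9, 30, tzinfo=timezone.utc)
print(nis2_early_warning_deadline(detected).isoformat())
# → 2026-03-02T09:30:00+00:00
```

Wiring a helper like this into your incident-response tooling makes the deadline visible the moment an AI incident is logged, instead of being computed by hand under pressure.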
3. Supply Chain Security
ChatGPT is a third-party service. Copilot is a third-party provider. Every external AI API is part of your supply chain — and must therefore be assessed under NIS2. Have you conducted a risk analysis for OpenAI, Google or Anthropic as a supplier?
4. Training requirement
NIS2 requires cybersecurity training for senior management. The EU AI Act requires AI literacy training (Article 4, in force since February 2025). Two laws, one message: anyone who uses AI must know what they are doing.
From August 2026: The high-risk AI obligations
On February 11, 2026, the Federal Cabinet passed the AI Market Surveillance and Innovation Promotion Act (KI-MIG). The Federal Network Agency (BNetzA) will become the central supervisory authority for AI in Germany.
From August 2026, the full obligations for high-risk AI systems apply:
• Mandatory risk management system for every high-risk AI system
• Technical documentation and logging
• Human Oversight – human control must be guaranteed
• Transparency obligations towards users
• Regular review and monitoring
High-risk AI includes: AI in personnel selection, credit scoring, biometric identification, critical infrastructure, education and law enforcement.
Shadow AI: The biggest NIS2 breach that no one is reporting
According to a recent Gartner study (February 2026), over 50% of employees use private GenAI accounts for work tasks. Each of these cases is potentially:
• A data breach (GDPR)
• An ICT risk as defined by NIS2
• A violation of the AI competence requirement of the EU AI Act
If your sales department enters customer data into ChatGPT, your HR department pre-selects applications with a private AI tool, or your developers use code from uncontrolled AI sources — then you have a NIS2 problem that cannot be solved with firewalls alone.
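One practical starting point for surfacing Shadow AI is scanning proxy or DNS logs for traffic to known public GenAI endpoints. A minimal sketch, assuming a simple CSV log format (timestamp, user, host) and an illustrative domain list — real proxies and a real blocklist will differ:

```python
import csv
import io

# Illustrative list of public GenAI endpoints; extend for your environment.
GENAI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com", "claude.ai"}

def find_shadow_ai(log_csv: str) -> list[tuple[str, str]]:
    """Return (user, host) pairs where the host is a known GenAI service."""
    hits = []
    for timestamp, user, host in csv.reader(io.StringIO(log_csv)):
        if host in GENAI_DOMAINS:
            hits.append((user, host))
    return hits

sample = "2026-02-10T08:01,alice,chatgpt.com\n2026-02-10T08:02,bob,intranet.local"
print(find_shadow_ai(sample))  # → [('alice', 'chatgpt.com')]
```

A scan like this only detects usage; the findings still need to feed into the AI inventory and usage guidelines described below, rather than ending in ad-hoc blocking.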
AI Governance: A Framework for NIS2 + AI Act
The solution is not a second compliance system in addition to the ISMS, but rather an extension: AI Governance integrates AI-specific risks into your existing risk management.
What an integrated NIS2+AI framework includes:
• AI inventory: Which AI systems are in use? (Also capture Shadow AI)
• Risk assessment: Classification according to EU AI Act risk classes + NIS2 relevance
• Usage guidelines: what is permitted in which AI tool, and what is prohibited outright
• Reporting processes: Integrate AI incidents into the NIS2 reporting structure
• Supplier evaluation: assess OpenAI, Google, Microsoft as third-party ICT providers
• Training: Combined NIS2 + AI competency training
• Monitoring: Ongoing monitoring of AI outputs and data flows
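The first two framework steps (inventory and risk assessment) can be sketched as a simple two-axis classification: EU AI Act risk class plus NIS2 relevance. The class names, field names and example entries below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    vendor: str            # third-party vendors feed the supplier evaluation
    use_case: str
    ai_act_class: str      # "minimal" | "limited" | "high" | "prohibited"
    nis2_relevant: bool    # touches ICT systems in scope of NIS2?

inventory = [
    AISystem("ChatGPT (private accounts)", "OpenAI", "ad-hoc drafting", "limited", True),
    AISystem("CV screening model", "internal", "personnel selection", "high", True),
]

# Systems needing the full high-risk controls (risk mgmt, logging, oversight):
high_risk = [s.name for s in inventory if s.ai_act_class == "high" and s.nis2_relevant]
print(high_risk)  # → ['CV screening model']
```

Even a flat list like this makes the overlap visible: the same entry drives both the AI Act classification and the NIS2 supplier and risk reviews, which is the point of an integrated framework.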
ADVISORI does not build a parallel system; it extends your existing ISMS with an AI annex. This saves 30-50% compared to building from scratch. More about our AI governance approach.
Frequently asked questions
Do we need to declare AI systems in the NIS2 registration portal?
Not directly during registration. But AI systems must be included in your risk management, which is checked during BSI audits.
Is ChatGPT a high-risk AI system?
ChatGPT itself is not, but the way you use it can be high-risk. If you use ChatGPT for personnel selection, credit decisions or medical advice, that application falls into the high-risk category.
Who is responsible for AI compliance — CISO or AI officer?
Ideally both in coordination. The CISO is responsible for ICT security (NIS2), an AI officer or AI representative is responsible for the specific AI Act duties. ADVISORI recommends an integrated governance structure.
Conclusion: NIS2 without AI Governance is incomplete
Anyone who implements NIS2 without addressing AI risks has a gap in risk management — and with it a liability problem. The combination of NIS2 (now) and EU AI Act (from August 2026) requires an integrated approach.
ADVISORI supports you in integrating NIS2 compliance and AI governance: ISO 27001 certified, experienced with DORA and NIS2, with its own AI platform. Arrange a free initial consultation.
📖 Also read: NIS2 in medium-sized companies: The 10 most expensive mistakes in implementation
📖 Also read: AI compliance as a competitive factor: How AI Act & ISO 42001 strengthen your market position
Ready to put your knowledge into action?
This article has given you food for thought. Let's take the next step together and discover how our expertise in AI Governance can lead your project to success.
Get informed with no obligation & discover your potential.