Shadow AI: Why uncontrolled AI use is your biggest compliance risk


23 February 2026
15 min read


Consider the following scenario: A sales rep copies a customer list with sales data into ChatGPT to generate personalized offers. A developer feeds GitHub Copilot with proprietary source code. A controller uploads quarterly figures to an AI tool to create forecasts. None of these employees act maliciously - they all want to work more productively. But none of these tools have been approved by IT, none have been checked for data protection purposes, and none appear in the AI inventory.

Welcome to the world of Shadow AI.

What begins as a productivity hack by individual employees is developing into the biggest compliance risk many companies will face in 2026. With the EU AI Act, NIS2 and the GDPR, three regulatory frameworks are colliding with a reality in which AI use has long been outpacing every governance structure.

What is Shadow AI — and why is it more than Shadow IT 2.0?

Shadow AI describes the use of AI tools and services by employees without the knowledge, approval or control of the IT department. The term is deliberately based on “Shadow IT” — the phenomenon that employees use cloud services, apps or hardware on their own initiative. But Shadow AI goes further in several ways.

Classic shadow IT usually concerns infrastructure: a team uses Trello instead of the official project management tool, someone saves files in a private Dropbox. Annoying, but usually manageable. Shadow AI, on the other hand, actively processes data - often highly sensitive data. AI models such as ChatGPT, Google Gemini, Claude or Midjourney accept input, process it on external servers and potentially store it for training future models. This means: Every entry is potentially a data transfer to third parties.

Added to this is the speed at which Shadow AI is spreading. According to a study by Software AG, 54 percent of knowledge workers in Germany already use AI systems without official approval. Globally, the 2025 "State of Shadow AI Report" from Reco found that small companies have on average 269 unauthorized AI tools per 1,000 employees in use. An investigation by CybSafe and the National Cybersecurity Alliance revealed that 38 percent of employees share confidential data with AI platforms that are outside the company's control.

These numbers make it clear: Shadow AI is not a fringe phenomenon. It is the norm in most organizations.

The Samsung incident: A wake-up call for every company

The most well-known Shadow AI incident occurred at Samsung in spring 2023. Three separate data leaks occurred within just 20 days of the internal release of ChatGPT. Samsung engineers uploaded proprietary source code, internal test logs, and even meeting notes to the language model — with the intention of doing their jobs more efficiently.

The consequence: Samsung banned ChatGPT and other AI chatbots for all employees. But the damage was already done. The uploaded data was on OpenAI's servers and could potentially be used to train future models. For a company whose market position is based on technological advances, this is a worst-case scenario.

The Samsung case is not an isolated incident, but rather exemplary. According to the Stanford HAI AI Index Report, a total of 233 AI-related security incidents were documented in 2024 alone, many of them related to uncontrolled AI use. And according to IBM's 2025 "Cost of a Data Breach Report", data breaches associated with Shadow AI cost an average of $4.63 million, which is $670,000 more than traditional incidents.

Why Shadow AI is now becoming a compliance problem

Until recently, Shadow AI was primarily an IT security issue. But with the 2026 wave of regulations from the EU AI Act, NIS2, DORA and CRA, it will become a tangible compliance risk - with potentially existence-threatening consequences.

EU AI Act: No compliance without AI inventory

The EU AI Act, which has been coming into force in stages since August 2025, requires companies that use AI systems to maintain a complete inventory of all AI applications in use. This inventory is the basis for risk classification: prohibited practices, high-risk systems, limited-risk and minimal-risk applications must be identified and treated accordingly.

Shadow AI makes exactly that impossible. If IT doesn't know which AI tools are in use, they can't create an inventory or perform a risk assessment. The result: The company violates the basic requirements of the EU AI Act without realizing it.

It becomes particularly critical with high-risk applications. If an AI tool is used in recruiting - for example, to pre-filter applications - strict requirements for transparency, human oversight and documentation apply. If an HR employee uses an unapproved AI tool for this, the company is acting unlawfully.

In addition, since February 2025 the EU AI Act has imposed an AI competence requirement: all employees who work with AI must receive appropriate training. Without knowledge of actual AI usage, this requirement cannot be met.

NIS2: Shadow AI as an uncontrolled risk

The NIS2 directive, which has been transposed into national law since December 2025, requires affected companies to implement comprehensive risk management for IT security. This includes identifying all relevant risks, implementing appropriate security measures and being able to respond quickly to incidents.

Shadow AI undermines each of these points. Unapproved AI tools represent unknown attack vectors. Data flows through uncontrolled channels. In the event of an incident, audit trails and logs are missing. And the required reporting requirement for security incidents within 24 hours is difficult to comply with if the company does not even know where the data is being processed.

As ADVISORI has already explained in the context of NIS2 and AI governance, AI risks are an integral part of NIS2 risk management. Shadow AI is the blind spot that undermines the entire risk management.

GDPR: data transfer without a legal basis

Every input into an external AI tool is potentially data processing within the meaning of the GDPR - and for US providers, data transfer to a third country. Without appropriate protective measures (standard contractual clauses, binding corporate rules or adequacy decision), this transfer is unlawful.

When an employee enters customer data, employee data or other personal information into ChatGPT, OpenAI processes that data on servers governed by US law. There is no data processing agreement, no data protection impact assessment, no information for the data subjects. That's several GDPR violations at once.

The possible fines: up to 20 million euros or 4 percent of global annual turnover. But the damage to reputation can be even more serious.

The five biggest Shadow AI risks at a glance

1. Uncontrolled data outflow

The most obvious risk: Confidential information — customer data, financial figures, source code, strategy documents — ends up in the hands of external AI providers. Without data loss prevention (DLP) for AI channels, companies have no control over what data leaves the organization.

2. Compliance violations without knowledge

Shadow AI leads to violations of the EU AI Act, NIS2 and the GDPR that management is not even aware of. Management is liable nonetheless: NIS2 provides for the personal liability of executives.

3. Loss of Intellectual Property

When proprietary code, trade secrets, or research results are incorporated into AI models, intellectual property is potentially compromised. The Samsung case shows how quickly this can happen.

4. Bad business decisions

AI-generated analyses and recommendations based on unvalidated models can lead to incorrect decisions. Without quality control and validation of outputs, the risks range from giving a customer incorrect advice to flawed financial forecasts.

5. Attack surfaces for cybercriminals

Shadow AI tools significantly expand an organization's attack surface. Untested browser extensions, unsecured API connections and uncontrolled data flows offer attackers new entry points. Cybercriminals can also use the data leaked via Shadow AI to construct highly personalized phishing attacks.

Why bans don't work

Many companies' first reaction to Shadow AI is a blanket ban: "AI tools are not permitted, effective immediately." Samsung did this, as did numerous banks, authorities and corporations.

But bans don't solve the problem - they just shift it. Studies show that employees continue to use AI tools despite bans, but then on private devices, via private networks or via detours that are even more difficult to control. The result: Shadow AI becomes even more invisible.

The reason is understandable: AI tools offer real productivity gains. Anyone who has ever experienced how ChatGPT summarizes a report, formulates an email or analyzes data in seconds will no longer want to be without it. A ban that offers no alternative is simply ignored.

The better strategy: enablement instead of bans. Companies that provide their employees with safe, approved AI alternatives and define clear usage guidelines reduce Shadow AI far more effectively than those that rely on restrictions.

The Path to Shadow AI Control: A Five-Step Model

Stage 1: Establish visibility — the AI inventory

The first and most important step is to create transparency. Which AI tools are actually used in your company? This question can only be answered through a combination of technical and organizational measures.

Technically: Leverage Cloud Access Security Brokers (CASB), DLP solutions and network monitoring to identify AI-related traffic. Analyze SaaS usage data, browser extensions and API calls. Modern tools can automatically detect and categorize AI services.

Organizationally: Conduct anonymous employee surveys. Don't ask, "Are you using banned tools?" Ask: "What tools help you in your work?" Experience shows that when the approach is non-punitive, employees respond surprisingly openly.

The result is a complete AI inventory — the basis for any further action while fulfilling a key EU AI Act requirement.
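As an illustration, the merged inventory could be kept as simple structured records that combine the technical and survey findings. The `AIToolRecord` type and its fields below are a hypothetical sketch, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIToolRecord:
    """One entry in the AI inventory (fields are illustrative)."""
    name: str
    domain: str
    source: str  # e.g. "network-scan" or "survey"

def merge_inventory(*sources):
    """Merge detection sources into one inventory, deduplicating by domain."""
    seen = {}
    for records in sources:
        for rec in records:
            # First sighting wins; later sources only contribute new domains.
            seen.setdefault(rec.domain, rec)
    return list(seen.values())

scanned = [AIToolRecord("ChatGPT", "api.openai.com", "network-scan")]
surveyed = [
    AIToolRecord("ChatGPT", "api.openai.com", "survey"),
    AIToolRecord("Claude", "claude.ai", "survey"),
]
inventory = merge_inventory(scanned, surveyed)
print(len(inventory))  # 2 unique tools
```

Deduplicating by domain keeps one record per service, no matter how many channels reported it.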

Stage 2: Conduct risk assessment

Not all Shadow AI use is equally risky. An employee using ChatGPT to write a birthday speech poses a different risk than one uploading customer data. Evaluate each identified usage by:

  • Data sensitivity: Are personal data, trade secrets or regulated information being processed?
  • Regulatory relevance: Does the use fall under the EU AI Act (especially the high-risk categories)?
  • Provider risk: Where is the data processed? Is there a data processing agreement? What are the provider's data protection practices?
  • Business risk: What consequences would a data leak or a wrong decision have?
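A minimal sketch of how these four criteria could be combined into a coarse risk tier. The weights and thresholds are illustrative assumptions, not a normative scoring scheme:

```python
def risk_tier(data_sensitive, regulated_use, provider_unvetted, high_impact):
    """Combine the four assessment dimensions into a coarse risk tier.

    Each argument is a boolean flag from the assessment checklist;
    the weighting below is an illustrative choice.
    """
    score = (3 * data_sensitive      # personal data / trade secrets
             + 3 * regulated_use     # EU AI Act high-risk category
             + 2 * provider_unvetted # no DPA, unknown data location
             + 2 * high_impact)      # severe consequences on failure
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Birthday-speech use: nothing sensitive, nothing regulated.
print(risk_tier(False, False, False, False))  # low
# Customer data uploaded to an unvetted US provider.
print(risk_tier(True, False, True, True))     # high
```

Even a crude tiering like this makes the inventory actionable: "high" entries get immediate attention, "low" entries only a policy reminder.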

Stage 3: Establish AI policies and governance

Based on the risk assessment, you develop an AI usage policy that clearly defines:

  • Which AI tools are approved for which purposes?
  • Which data is allowed to be entered into AI tools - and which is not?
  • What approval processes apply to new AI tools?
  • How are AI-generated outputs validated and documented?
  • Which training courses are mandatory?

This policy should be pragmatic. A 50-page policy that no one reads defeats the purpose. Better: short, clear rules with concrete examples, supplemented by an easily accessible list of approved tools.
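Such a short, rule-based policy can even be expressed as machine-checkable data. The sketch below assumes a hypothetical allowlist; the tool names, purposes and data classes are invented examples:

```python
# Data classes ordered from least to most sensitive (illustrative).
DATA_CLASSES = ["public", "internal", "confidential", "personal"]

# Hypothetical allowlist: tool -> approved purposes and highest permitted data class.
ALLOWED_TOOLS = {
    "azure-openai": {"purposes": {"drafting", "summarization", "coding"},
                     "max_data_class": "confidential"},
    "deepl":        {"purposes": {"translation"},
                     "max_data_class": "internal"},
}

def is_permitted(tool, purpose, data_class):
    """Check a planned AI use against the usage policy."""
    entry = ALLOWED_TOOLS.get(tool)
    if entry is None:
        return False  # tool is not on the allowlist at all
    if purpose not in entry["purposes"]:
        return False
    return DATA_CLASSES.index(data_class) <= DATA_CLASSES.index(entry["max_data_class"])

print(is_permitted("azure-openai", "summarization", "internal"))  # True
print(is_permitted("azure-openai", "summarization", "personal"))  # False
print(is_permitted("chatgpt-free", "drafting", "public"))         # False: not approved
```

Keeping the policy as data means the approved-tools list, the intranet page and any automated checks all draw on the same source.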

Stage 4: Provide safe alternatives

The most effective lever against Shadow AI is an attractive official offer. Companies should provide their employees with AI tools that:

  • Run in a controlled environment (e.g. Azure OpenAI Service, self-hosted models)
  • Offer contractual data protection guarantees (data processing agreement, EU data processing)
  • Are integrated into the AI inventory and monitoring
  • Are at least as capable as the shadow alternatives

Building a company's own AI platform is an investment that pays off several times over: through productivity gains, risk reduction and regulatory compliance. Platforms like ki.advisori.de show what this can look like in practice.

Stage 5: Continuous monitoring and governance

Shadow AI is not a one-time project, but an ongoing process. New AI tools appear weekly, employees discover new use cases, and regulatory requirements continue to evolve. Therefore, establish:

  • Regular scans of the SaaS services in use and of network traffic
  • Quarterly updates of the AI inventory
  • Ongoing training on AI competence (an EU AI Act requirement)
  • An AI governance process for evaluating and approving new tools
  • KPIs and reporting for management

The Role of Management: Personal Liability as a Motivator

An aspect that still receives too little attention in many companies: management is personally liable for compliance with NIS2 requirements. If a Shadow AI-related security incident occurs and management has not taken appropriate measures, managing directors and board members face personal consequences in addition to corporate fines.

This means: Shadow AI belongs on the agenda of every board meeting. It is not enough to delegate the issue to the IT department. CISOs, CDOs and IT leaders need a clear mandate — and the necessary resources — to effectively address Shadow AI.

Shadow AI in medium-sized businesses: Particularly at risk, particularly affected

While corporations are increasingly building dedicated AI governance teams, Shadow AI is hitting SMEs particularly hard. The reasons:

Fewer resources: Medium-sized companies rarely have specialized AI governance expertise. The IT department is often fully occupied with day-to-day business.

Higher relative usage: The Reco report shows that smaller companies have proportionately more Shadow AI tools per employee than large corporations - while at the same time having lower monitoring capacity.

Regulatory complexity: EU AI Act, NIS2, GDPR - the regulatory landscape overwhelms many medium-sized companies. Understanding the interplay of these regulations and implementing them operationally requires expertise that is often not available in-house.

Know-how dependency: Medium-sized manufacturing companies in particular, whose market position rests on technical know-how, risk losing their most valuable asset through Shadow AI.

This is where external advice can make the crucial difference. A specialized AI governance consultancy helps to understand the regulatory requirements, build a pragmatic governance framework and implement the right technical and organizational measures.

Practical tips: Immediate measures against Shadow AI

You don't have to wait for the perfect governance framework. You can implement these measures immediately:

1. Perform a quick scan: Analyze your network traffic for connections to known AI services (api.openai.com, bard.google.com, claude.ai, etc.). This gives you an initial overview of the extent of Shadow AI usage.
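A rough version of such a quick scan could simply count hits against known AI endpoints in proxy or firewall logs. The log format below is a simplified assumption; real formats vary by product:

```python
import re
from collections import Counter

# Known AI endpoints to look for (extend as needed).
AI_DOMAINS = {"api.openai.com", "bard.google.com", "claude.ai", "api.anthropic.com"}

def scan_proxy_log(lines):
    """Count requests per known AI domain in proxy-log lines.

    Assumes each line contains full URLs; adapt the pattern to your log format.
    """
    hits = Counter()
    host_pattern = re.compile(r"https?://([^/\s]+)")
    for line in lines:
        for host in host_pattern.findall(line):
            if host.lower() in AI_DOMAINS:
                hits[host.lower()] += 1
    return hits

log = [
    "10.0.0.5 - GET https://api.openai.com/v1/chat/completions 200",
    "10.0.0.7 - GET https://claude.ai/api/chat 200",
    "10.0.0.5 - GET https://example.com/index.html 200",
]
print(scan_proxy_log(log))  # one hit each for api.openai.com and claude.ai
```

Even this crude count answers the first governance question: which AI services are actually being reached from the corporate network, and how often.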

2. Start an awareness campaign: Inform your employees about the risks - without moralizing. Show them the Samsung case. Explain that the company is not against AI, but for safe AI use.

3. Create an "Allowed AI" list: Define a whitelist of AI tools that may be used for specific use cases. This is better than an endless list of bans.

4. Introduce data classification: Clearly define which categories of data must never be entered into external AI tools: personal data, trade secrets, regulated information, financial figures prior to publication.
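Such a classification rule can be backed by a lightweight pre-check that scans prompts for blocked data categories before they leave the company. The patterns below are deliberately simple illustrations, not production-grade DLP:

```python
import re

# Illustrative patterns for data that must never reach external AI tools.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN":          re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def check_prompt(text):
    """Return the blocked data categories found in a prompt before it is sent out."""
    return [label for label, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]

prompt = "Write a reminder to max.mustermann@example.com about invoice 4711."
violations = check_prompt(prompt)
if violations:
    print("Blocked:", violations)  # Blocked: ['email address']
```

A real deployment would sit in a proxy or browser plug-in and cover far more categories, but even this shape makes the classification policy enforceable rather than merely declarative.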

5. Name an AI contact person: Designate a person or team as the point of contact for AI-related questions. Employees who know whom to ask are less likely to reach for uncontrolled tools.

6. Initiate a contract review: Check the terms of use and privacy policies of the AI tools already in use. Where is the data located? Is it used for training? Is there a data processing agreement?

The strategic view: Shadow AI as an opportunity

As paradoxical as it sounds: Shadow AI shows you where the productivity potential lies in your company. When employees use AI tools on their own initiative, they do so because they see a concrete benefit. These use cases are valuable tips for your AI strategy.

Instead of viewing Shadow AI merely as a threat, companies should use it as an innovation indicator. Which departments are experimenting with AI? For which tasks? Which tools are preferred? This information feeds directly into the development of a company-wide AI strategy.

The key is balance: enabling innovation, controlling risks, ensuring compliance. That is exactly the challenge AI governance addresses.

Conclusion: Act now — before the regulator does

Shadow AI is not a hypothetical future scenario. It’s the reality in over 80 percent of companies — including yours. Every day confidential data flows into uncontrolled AI systems, every day the compliance risk grows.

With the EU AI Act, NIS2 and stricter GDPR enforcement, the consequences are increasing dramatically. Millions in fines, personal management liability and irreparable reputational damage are no longer theoretical risks - they are the logical consequence of inaction.

The good news: Shadow AI can be controlled. Not through bans, but through smart governance, safe alternatives and a corporate culture that promotes responsible AI use. The first step is visibility. The second is action.

Don't wait for the first incident. Be proactive.

Next step: getting a handle on Shadow AI

Want to know how big your Shadow AI risk is — and how you can control it? Our experts analyze your current situation, identify areas of action and develop a pragmatic action plan.

Contact us without obligation!

📖 Also read:AI testing & strategy: When AI models know that they will be evaluated - roadmap included + paper for download

