
Your AI provider becomes a supply chain risk overnight: Why AI governance is now a top priority
Monday morning, 8:47 a.m. Your CISO calls: "Claude is banned. The Pentagon has classified Anthropic as a security risk — on par with Huawei and Kaspersky. Lockheed Martin is already cutting ties. Microsoft is reviewing its options."
Your credit analysis, your compliance reporting, your internal knowledge management — everything runs through a single large language model. And suddenly that is exactly the problem.
This is not a thought experiment. This happened last week. And it affects every company that uses AI in business operations — including yours.
What happened - and what the headlines leave out
On March 6, 2026, the US Department of Defense officially classified Anthropic as a "supply chain risk" - a designation previously reserved exclusively for companies from adversarial states. The consequence: every defense contractor must prove that it does not work with Anthropic. The impact extends far beyond the Pentagon.
The background: Anthropic signed a $200 million contract with the Pentagon in July 2025. Claude was the first major AI model deployed in the US military's classified networks. Anthropic had drawn two red lines: no use for mass surveillance of American citizens, and no use in fully autonomous weapon systems without human control.
The Pentagon wanted these restrictions removed — not because it specifically planned to use Claude for surveillance or autonomous weapons, but because it rejected the principle that a private company can dictate to the military what a licensed technology may be used for.
Anthropic refused. CEO Dario Amodei wrote that his company could not give in "in good conscience." Trump responded on Truth Social, declaring the company a threat to national security. That same evening, OpenAI CEO Sam Altman announced that OpenAI would take over the vacated contract.
And this is where it becomes relevant for European companies: Lockheed Martin commented on the news with a shrug. "We expect minimal impacts as Lockheed Martin is not dependent on any single LLM vendor." The Pentagon, on the other hand, needs six months to replace Claude.
If you only have one AI provider, you have a single point of failure. If you have several, news like this earns nothing more than a shrug.
Why European companies cannot afford to sit back and relax
The obvious response: "This only applies to Pentagon contracts. We don't do business with the US military." That is legally true - but it falls far short. The Anthropic case shows a pattern that transfers to German companies in three scenarios.
Scenario 1: Regulatory shock
The high-risk obligations of the EU AI Act apply from August 2026. The German KI-MIG transposes the regulation into national law. Both sets of rules can mean that AI providers are suddenly no longer allowed to serve certain use cases - or that their models are classified as non-compliant. What happens to your processes if your only LLM is no longer approved for high-risk applications overnight?
Scenario 2: Geopolitical domino effect
What happens if the EU classifies a Chinese AI model as a security risk? Or if a US model comes under CLOUD Act scrutiny - for example, because US authorities can access European customer data? The Anthropic case has shown that the escalation from "everything is normal" to "supply chain risk" takes exactly one week.
Scenario 3: Business policy risks
AI providers change their terms of use. They restrict use cases, raise prices drastically or discontinue model versions. OpenAI has changed API prices and terms multiple times in the past. Anthropic has just shown that a provider can put its own ethical guidelines above customer contracts — which may be right in principle, but is an operational risk for dependent companies.
Chatham House gets to the point: the Anthropic case is an "inflection point where enterprise AI governance, vendor strategy, and national security policy collided." And this collision will happen again.
The problem has a name — AI vendor lock-in
Vendor lock-in with AI is more fundamental than with classic cloud infrastructure. With cloud, you can move containers, migrate databases, abstract APIs. Things look different with AI:
Prompts are model-specific. A prompt optimized for GPT-4 produces different results on Claude or Gemini. Fine-tuning is not transferable. RAG pipelines are calibrated to a vendor's embedding models. Evaluation benchmarks are model-dependent.
Switching costs increase exponentially with depth of integration. Anyone who only uses AI for chat can switch tomorrow. Anyone who has integrated AI into credit decisions, compliance checks or customer service workflows will need weeks to months — plus revalidation, regulatory review and employee training.
There is no industry standard for AI portability. While cloud workloads are largely portable via Kubernetes, Terraform and standardized APIs, no comparable framework exists for LLM workflows. Every change is a migration project.
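To make the portability gap concrete, here is a minimal sketch of what happens to a RAG index after a vendor switch. The provider names and embedding dimensions are illustrative assumptions, not real API values:

```python
# Minimal sketch: RAG indexes are not portable between embedding models.
# The dimensions (1536 vs. 1024) and "provider A/B" are illustrative assumptions.
import numpy as np

index_a = np.random.rand(10_000, 1536)  # document index built with provider A's embeddings
query_b = np.random.rand(1024)          # the same query, embedded with provider B's model

try:
    scores = index_a @ query_b          # dot-product retrieval
except ValueError as err:
    # Dimension mismatch: after a vendor switch, the entire corpus must be
    # re-embedded and every retrieval benchmark re-run.
    print(f"Index unusable after vendor switch: {err}")
```

Even if the dimensions happened to match, the two vector spaces would still be incompatible; the mismatch just makes the failure visible immediately instead of silently degrading retrieval quality.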
The proof is provided by the Anthropic case itself: Lockheed Martin - multi-vendor strategy - comments calmly. The Pentagon - deeply integrated with Claude - needs half a year. The difference is not technology. The difference is architecture.
A recent study shows that 22.3 percent of German companies are already actively looking for European cloud alternatives - driven by sovereignty concerns and cost pressure. The same logic applies to AI providers, just with greater urgency.
The solution: AI governance as a strategic framework

AI governance is not a compliance checkbox. It is an operational management tool that protects your company from exactly the risks that the Anthropic case revealed.
A resilient AI governance framework rests on five pillars:
1. Vendor diversification
At least two LLM providers for business-critical applications. No process should depend on a single model. The fallback strategy must be documented, tested and reviewed quarterly - not as an emergency plan in a drawer, but as a lived process.
2. Technical abstraction layer
An LLM router or orchestrator between your applications and the models. Prompts are formulated independently of the model and are only adapted to the respective model in the abstraction layer. If provider A fails, the system automatically routes to provider B - without departments noticing anything.
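What such a layer can look like in a few dozen lines: a minimal sketch, assuming two hypothetical providers vendor_a and vendor_b. In production, the call functions would wrap the vendors' real SDKs and the prompt templates would come from your evaluation process:

```python
# Minimal sketch of an LLM router with automatic failover.
# vendor_a / vendor_b and their call functions are hypothetical stand-ins
# for real SDK or HTTP clients.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    render_prompt: Callable[[str], str]  # model-specific prompt adaptation
    call: Callable[[str], str]           # wraps the vendor's API

class LLMRouter:
    """Applications talk to the router; only the router knows the vendors."""

    def __init__(self, providers: list[Provider]):
        self.providers = providers  # ordered by preference

    def complete(self, task: str) -> str:
        errors = []
        for p in self.providers:
            try:
                # Tasks are stored model-neutrally and only translated into
                # vendor-specific prompts at this layer.
                return p.call(p.render_prompt(task))
            except Exception as err:  # timeout, HTTP 5xx, terminated contract
                errors.append(f"{p.name}: {err}")
        raise RuntimeError(f"All providers failed: {errors}")

def vendor_a_call(prompt: str) -> str:
    raise TimeoutError("vendor_a is unreachable")  # simulate an outage

router = LLMRouter([
    Provider("vendor_a", lambda t: f"[A-style] {t}", vendor_a_call),
    Provider("vendor_b", lambda t: f"[B-style] {t}", lambda p: f"stub answer to: {p}"),
])
print(router.complete("Summarize the credit report."))  # transparently served by vendor_b
```

The design choice that matters: applications depend on the router's interface, never on a vendor SDK, so a provider change is a configuration change rather than a migration project.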
3. Risk classification
Not every AI deployment has the same risk profile. A criticality matrix shows: which processes run via which LLM? What is the maximum tolerable downtime? Which regulatory requirements apply? Processes in the "high" and "critical" categories require a multi-vendor architecture.
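Such a matrix can live as structured data next to your process inventory; a sketch, in which the process names, thresholds and categories are illustrative assumptions:

```python
# Minimal sketch of a criticality matrix; all entries are illustrative.
from dataclasses import dataclass

@dataclass
class AIProcess:
    name: str
    provider: str            # which LLM serves this process today
    criticality: str         # "low" | "medium" | "high" | "critical"
    max_downtime_hours: int  # maximum tolerable outage
    regulation: str          # e.g. "EU AI Act high-risk", "DORA", "-"

portfolio = [
    AIProcess("credit analysis",  "vendor_a", "critical", 4,  "EU AI Act high-risk"),
    AIProcess("internal chatbot", "vendor_a", "low",      72, "-"),
]

# Rule from the text: "high" and "critical" processes need a multi-vendor setup.
for p in portfolio:
    if p.criticality in ("high", "critical"):
        print(f"{p.name}: multi-vendor architecture required (currently only {p.provider})")
```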
4. Exit strategy per provider
Documented for each AI provider: maximum changeover time, estimated costs, responsible person, prepared migration steps. If you only answer these questions once the vendor has become a problem, it is too late — that is what the Pentagon just demonstrated.
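Kept as structured data rather than a slide deck, such a record stays reviewable and testable. A sketch, in which every value is an illustrative assumption:

```python
# Minimal sketch of an exit-strategy record per provider; all values illustrative.
from dataclasses import dataclass, field

@dataclass
class ExitStrategy:
    provider: str
    max_migration_weeks: int   # maximum changeover time
    estimated_cost_eur: int    # estimated migration cost
    owner: str                 # responsible person: a name, not a team
    migration_steps: list[str] = field(default_factory=list)

exit_vendor_a = ExitStrategy(
    provider="vendor_a",
    max_migration_weeks=8,
    estimated_cost_eur=250_000,
    owner="J. Doe (Head of AI Platform)",
    migration_steps=[
        "Re-route traffic to vendor_b via the abstraction layer",
        "Re-embed RAG corpora, re-run evaluation benchmarks",
        "Revalidate high-risk processes with compliance",
    ],
)
```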
5. Regulatory monitoring
EU AI Act, NIS2, industry-specific requirements such as DORA (for financial service providers) and BSI requirements are constantly evolving. AI governance must proactively track these developments and incorporate them into provider evaluations — not reactively after the next regulatory shock occurs.
The kicker: NIS2 and DORA already make supply chain risk management mandatory, and AI providers explicitly fall under the supply chain. Anyone who has not integrated their AI providers into their existing risk management may already be violating regulatory requirements - and will only notice when it is too late.
Checklist: 7 questions that every CIO must answer now
Before you set up a comprehensive AI governance framework, start with these seven questions. If more than two of them are left unanswered, you have a problem.
1. How many LLM providers is our AI usage spread across?
If the answer is "one," you have a concentration risk.
2. What happens if provider X goes down tomorrow?
Not theoretically — concretely. Which processes stop? For how long?
3. Are our AI workflows designed to be model-independent?
Or are prompts, embeddings and evaluations optimized for a single model?
4. Have we documented an exit strategy for each AI vendor?
Not "we would just switch," but a concrete migration plan.
5. Who is responsible for AI governance?
A name. Not "still being clarified," not "the IT department."
6. Do our AI contracts meet the requirements of NIS2 Article 21?
Supply chain risk management is mandatory there. AI providers are part of the supply chain.
7. Can we switch AI providers within 72 hours?
Lockheed Martin can. The Pentagon cannot. Where do you stand?
What this means specifically for your AI strategy
The Anthropic case is not an isolated event. It marks the beginning of a new phase in which AI providers can become geopolitical and regulatory pawns — just as cloud providers and semiconductor manufacturers already are.
Companies that act now will be in a significantly better position in twelve months:
Multi-LLM architecture as standard — not as a nice-to-have, but as a basic architectural decision for every new AI project.
Establish an AI governance board — staffed by the CIO, CISO, legal and department representatives. Quarterly meetings, documented decisions.
Quarterly AI vendor risk analysis — analogous to the existing IT supplier review, but with AI-specific criteria (model availability, regulatory compliance, geopolitical risks).
Integration into the existing ISMS — AI governance is not a parallel process, but an extension of your ISO 27001 management system and your NIS2 framework.
Conclusion: Lockheed or Pentagon — which one are you?
On March 6, 2026, it became clear who was prepared and who was not. Lockheed Martin: "Minimal impact." The Pentagon: "It will take six months."
The question is not whether something similar will happen to your AI provider. The question is whether you will then be one of the companies that shrugs and carries on - or one of those that has to plan for six months of standstill.
AI governance is not bureaucracy. It's your insurance against the next Monday morning call.
---
About ADVISORI: We support banks, insurance companies and medium-sized companies in building resilient AI governance frameworks - from risk classification to multi-LLM architectures to NIS2-compliant supply chain control. [Talk to us →]
---
Timeline: Anthropic Pentagon Crisis
| Date | Event |
|---|---|
| July 2025 | Anthropic signs a $200 million contract with the Pentagon. Claude is deployed in classified networks. |
| February 2026 | The Pentagon demands removal of the safety guardrails (mass surveillance, autonomous weapons). |
| February 28 | Deadline expires. Anthropic refuses. CEO Amodei: "In good conscience we cannot give in." |
| March 1 | Trump on Truth Social: Anthropic is a "threat to national security." |
| March 6 | The Pentagon officially classifies Anthropic as a supply chain risk — on par with Huawei/Kaspersky. |
| March 6 | OpenAI steps in and takes over the Pentagon deal with softer conditions. |
| March 7 | Anthropic announces a lawsuit. Microsoft separates military from commercial Claude use. |
Sources
NPR: "Pentagon labels AI company Anthropic a supply chain risk" (03/06/2026)
Reuters: “US draws up strict new AI guidelines amid anthropic clash” (03/07/2026)
Reuters: "Anthropic courted the Pentagon. Here's why it walked away" (03/04/2026)
CNBC: “Anthropic and the Pentagon are back at the negotiating table” (03/05/2026)
Chatham House: "Anthropic's feud with the Pentagon reveals the limits of AI governance" (03/04/2026)
Handelsblatt: “Pentagon vs. Anthropic – The first big power struggle of the AI era” (March 6, 2026)