Anthropic Responds to U.S. Government’s Supply Chain Risk Designation — Announces Legal Appeal
Published: February 28, 2026 | Source: Official Anthropic Statement & Public Records

📌 Executive Summary
At 09:24 AM Beijing Time on February 28, 2026, Anthropic issued a top-pinned public statement on X (formerly Twitter), formally responding to U.S. Secretary of Defense Pete Hegseth’s designation of the company as a “supply chain national security risk.” The declaration — simultaneously published on anthropic.com — marks Anthropic’s first official rebuttal and confirms its intent to file a legal challenge in federal court.
This unprecedented move follows President Trump’s February 27 directive on Truth Social ordering all federal agencies to cease use of Anthropic technologies, subject to a six-month transition period ending August 2026.
⚖️ Key Legal & Strategic Points
🔹 No Formal Notification Received
Anthropic confirmed it had not received any official communication from the Department of Defense or the White House prior to Hegseth’s public X post. The entire escalation unfolded via social media — bypassing standard interagency notification protocols.
🔹 Overreach Claim Under 10 U.S.C. § 3252
Hegseth’s order prohibits all defense contractors, suppliers, and partners from engaging in any commercial activity with Anthropic. Anthropic counters that this exceeds statutory authority:
The law permits restrictions only on Claude’s use within DoD contracts, not across a contractor’s broader commercial operations.
✅ Practical Implications if a Ruling Favors Anthropic:
– ✅ Individual users & non-defense commercial customers: Unaffected
– ✅ Defense contractors: May continue using Claude for non-DoD clients and internal R&D, provided it’s outside active DoD contract scope
🔹 Unprecedented Use of “Supply Chain Risk” Label
This is the first known instance of the U.S. government applying the “supply chain national security risk” label to a U.S.-headquartered AI company; the designation has historically been reserved for foreign entities such as Huawei. Anthropic describes the action as “unprecedented,” a characterization consistent with publicly available records.

🧩 Core Dispute: Ethical Guardrails vs. Military Autonomy
The root conflict centers on two contractual red lines Anthropic insists upon in its DoD engagements:
- Prohibition on mass domestic surveillance of U.S. citizens
- Ban on fully autonomous weapons systems (i.e., lethal decisions without human oversight)
While OpenAI, Google, and xAI have accepted DoD’s “all lawful uses” clause, Anthropic remains the sole major AI firm enforcing these ethical constraints — despite being the only frontier AI company granted access to classified military networks.
💬 Industry Solidarity: OpenAI CEO Sam Altman affirmed shared principles, and OpenAI co-founder Ilya Sutskever stated: “Anthropic’s refusal to compromise is excellent — and OpenAI’s alignment on these boundaries matters deeply.”

📊 Business & Regulatory Outlook
- Transition Timeline: Six-month wind-down window (Feb 27 – Aug 31, 2026)
- Financial Impact: Estimated $200M direct contract loss — modest relative to Anthropic’s ~$14B annual revenue and $380B valuation
- IPO Plans: Still on track for 2026; legal outcome will heavily influence investor sentiment and enterprise client confidence
- Legal Uncertainty: Enforceability of Hegseth’s broad commercial prohibition remains untested — pending judicial review

📣 Official Statement Excerpt
“We believe this designation is legally unsound and would set a dangerous precedent for any U.S. technology company negotiating with the government.”
— Anthropic, February 28, 2026
Statement translated by Claude, reviewed for accuracy.
Original source: https://x.com/AnthropicAI/status/2027555481699446918