Anthropic sues US government over supply chain risk designation

Published: March 9, 2026 at 11:45 AM EDT
3 min read
Source: Engadget

Lawsuit overview

Anthropic has filed a lawsuit to prevent the Pentagon from adding the company to a national‑security blocklist. The suit argues that the Department of Defense’s “supply chain risk” designation is unlawful and violates Anthropic’s free‑speech and due‑process rights.

“These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the company said in a statement published by Reuters【https://www.reuters.com/world/anthropic-sues-block-pentagon-blacklisting-over-ai-use-restrictions-2026-03-09/】.

Anthropic’s spokesperson added:

“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners. We will continue to pursue every path toward resolution, including dialogue with the government.”

The complaint characterizes the government’s actions as an “unprecedented and unlawful … campaign of retaliation,” noting that “no federal statute authorizes the actions taken here.”

Government actions and timeline

Pentagon’s supply‑chain risk designation

In late February, the Department of Defense (DoD) and Defense Secretary Pete Hegseth pressured Anthropic to remove certain safeguards from its AI systems. CEO Dario Amodei made clear the company would not allow its model to be used for mass surveillance or autonomous weapons.

When the February 27 deadline passed, Amodei refused to comply【https://www.engadget.com/ai/anthropic-refuses-to-bow-to-pentagon-despite-hegseths-threats-085553126.html】, prompting Hegseth to threaten the supply‑chain risk designation and to announce the cancellation of Anthropic’s $200 million contract. The same day, former President Donald Trump ordered all federal agencies to cease using Anthropic’s services【https://www.engadget.com/ai/trump-orders-federal-agencies-to-drop-anthropic-services-amid-pentagon-feud-222029306.html】.

According to the lawsuit, Anthropic had nonetheless agreed to “collaborate with the Department on an orderly transition to another AI provider willing to meet its demands.”

Anthropic’s response

Anthropic maintains that it is willing to work with the DoD on a transition but will not compromise on its core safety principles. The company emphasizes its commitment to protecting national security while defending its right to free speech and due process.

OpenAI’s parallel deal

Safety principles in the DoD contract

OpenAI quickly struck its own agreement with the DoD. CEO Sam Altman highlighted two safety principles that mirror Anthropic’s concerns: prohibitions on domestic mass surveillance and human responsibility for the use of force, including autonomous weapon systems. The contract explicitly states that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals”【https://www.engadget.com/ai/openai-will-amend-defense-department-deal-to-prevent-mass-surveillance-in-the-us-050637400.html】.

Internal backlash

Following the deal, OpenAI’s head of robotics hardware resigned【https://www.engadget.com/ai/openais-robotics-hardware-lead-resigns-following-deal-with-the-department-of-defense-195918599.html】. Caitlin Kalinowski wrote on X that “surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”

