Pentagon Designates Anthropic Supply Chain Risk Over AI Military Dispute

Published: February 27, 2026 at 11:57 PM EST
3 min read

Source: The Hacker News

Ravie Lakshmanan
Feb 28, 2026 · National Security / Artificial Intelligence


Anthropic responded on Friday after U.S. Secretary of Defense Pete Hegseth directed the Pentagon to label the AI startup a “supply chain risk.”

“This action follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model, Claude: the mass domestic surveillance of Americans and fully autonomous weapons,” the company said.

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.”

Political and Military Directives

In a post on Truth Social, U.S. President Donald Trump announced an order for all federal agencies to phase out Anthropic technology within six months. A subsequent X post from Hegseth mandated that all contractors, suppliers, and partners doing business with the U.S. military cease any “commercial activity with Anthropic” effective immediately.

“In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply Chain Risk to National Security,” Hegseth wrote.

Anthropic’s Position

Anthropic argued that its contracts should not facilitate mass domestic surveillance or the development of autonomous weapons.

“We support the use of AI for lawful foreign intelligence and counter‑intelligence missions,” Anthropic noted. “But using these systems for mass domestic surveillance is incompatible with democratic values. AI‑driven mass surveillance presents serious, novel risks to our fundamental liberties.”

The company also criticized the Department of War’s position that, as part of building an “AI‑first” warfighting force, it will work only with AI firms that permit “any lawful use” of their technology, stripping out whatever safeguards those firms have in place.

“Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological ‘tuning’ that interferes with their ability to provide objectively truthful responses to user prompts,” a Pentagon memorandum reads.

“The Department must also utilize models free from usage‑policy constraints that may limit lawful military applications.”

Anthropic described the designation as legally unsound, warning that it would set a dangerous precedent for any American company negotiating with the government. The company noted that a supply‑chain‑risk designation under 10 U.S.C. § 3252 applies only to the use of Claude in DoW contracts and cannot affect Claude’s availability to other customers.

Industry Reaction


Hundreds of employees at Google and OpenAI signed an open letter urging their companies to stand with Anthropic in its clash with the Pentagon over military applications for AI tools like Claude.

OpenAI’s Stance

The standoff coincides with OpenAI CEO Sam Altman’s announcement that OpenAI has reached an agreement with the U.S. Department of Defense (DoD) to deploy its models on a classified network, along with his request that the DoD extend those same terms to all AI companies.

“AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman said.

