Anthropic boss rejects Pentagon demand to drop AI safeguards

Published: February 26, 2026 at 10:54 PM EST

Source: BBC Technology


Anthropic’s Stance on Pentagon Demands

Anthropic chief executive Dario Amodei said the company would rather not work with the U.S. Department of Defense (DoD) than agree to uses of its AI technology that could “undermine, rather than defend, democratic values.”

The comments followed a meeting with Secretary of Defense Pete Hegseth, during which the Pentagon demanded that Anthropic accept “any lawful use” of its tools. The meeting ended with a threat to remove Anthropic from the DoD’s supply chain.

“These threats do not change our position: we cannot in good conscience accede to their request,” Amodei said.

Specific Use‑Case Concerns

Anthropic is concerned about two potential applications of its AI system Claude:

  1. Mass domestic surveillance
  2. Fully autonomous weapons

Amodei stated that such use cases have never been included in Anthropic’s contracts with the Department of War (the name used for the Defense Department under an executive order signed by former President Donald Trump) and should not be added now.

“Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider,” he added.

Anthropic’s Response to Revised Contract Language

An Anthropic spokeswoman said the company had received updated contract wording from the DoD, but that it represented “virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”

“New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” she said. “Despite recent public statements, these narrow safeguards have been the crux of our negotiations for months.”

A Defense Department representative could not be reached for comment.

Pentagon Reactions

  • Emil Michael, U.S. Undersecretary for Defense, attacked Amodei on X, accusing him of trying to “personally control the US Military” and risking national safety.
  • A Pentagon official told the BBC that if Anthropic does not comply, Hegseth would invoke the Defense Production Act, which allows the president to compel a company to meet defense needs.
  • Hegseth also threatened to label Anthropic a “supply chain risk,” effectively barring the company from government contracts.
  • A former DoD official described Hegseth’s grounds for these measures as “extremely flimsy.”

History of Tensions

A source familiar with the negotiations said tensions between Anthropic and the Pentagon “go back several months,” predating public knowledge that Claude was used in a U.S. operation to seize Venezuelan President Nicolás Maduro.

Anthropic’s Position on AI Ethics

In a company blog post, Amodei explained that AI can be used to “assemble scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.”

  • Lawful intelligence: Anthropic supports AI for lawful foreign intelligence and counter‑intelligence missions.
  • Domestic surveillance: “Using these systems for mass domestic surveillance is incompatible with democratic values.”

Autonomous Weapons

Amodei argued that today’s most advanced AI systems are not reliable enough to power fully autonomous weapons:

“We will not knowingly provide a product that puts America’s warfighters and civilians at risk. Without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.”

Anthropic has offered to work directly with the Department of War on R&D to improve reliability, but the offer has not been accepted.

