Claude won't be allowed to engage in mass surveillance or power fully autonomous weapons — Anthropic refuses to lower AI guardrails for the Pentagon
Source: Tom’s Hardware
Statement Overview
Anthropic has issued a statement refusing to lower guardrails for Claude at the request of the Department of Defense. The Pentagon gave Anthropic until Friday to comply or have its $200 million contract cancelled, with possible additional repercussions such as being designated a supply‑chain risk. The company let the deadline pass, and its CEO Dario Amodei said it “cannot in good conscience” accept the DoD’s demands.
Points of Contention
Mass Domestic Surveillance
Anthropic argues that mass monitoring of American citizens is inherently undemocratic and undermines individual liberty. The company adds that AI‑led surveillance is dangerous and is only possible because legal precedent has not yet caught up with the technology.
Fully Autonomous Weapons
Anthropic states that frontier AI is not ready for use in fully autonomous weapons because it lacks human‑like judgment. “We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” said Amodei. Partially unmanned weapons are described as “vital to the defense of the democracy,” but AI cannot yet be trusted to select and engage targets on its own.
Anthropic’s Proposed Path Forward
Anthropic offered to conduct R&D aimed at making AI reliable enough to be trusted with autonomously engaging targets, but the DoD turned down the proposal. The company noted that such systems “need to be deployed with proper guardrails, which don’t exist today,” referring to the current inability of any AI model to match the judgment of an experienced soldier.
Anthropic labels both points as “exceptions” within its otherwise vocal support for working with the Pentagon. Throughout the statement, the company reiterates its desire to “continue to serve the Department and our warfighters—with our two requested safeguards in place.”