Anthropic CEO Says AI Company 'Cannot In Good Conscience Accede' To Pentagon
Source: Slashdot
Background
Anthropic CEO Dario Amodei said the company “cannot in good conscience accede” to the Pentagon’s demands to allow wider use of its technology. The maker of the AI chatbot Claude issued a statement indicating it is not walking away from negotiations, but that new contract language from the Defense Department “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”
Pentagon’s Position
The Pentagon’s top spokesman reiterated that the military wants to use Anthropic’s AI technology in lawful ways and will not let the company dictate any limits, setting a Friday deadline for Anthropic to agree to its demands. Sean Parnell said on social media that the Pentagon “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”
Anthropic’s Response
Anthropic’s policies prevent its models, such as Claude, from being used for mass surveillance or in fully autonomous weapons. It is the only one of its peers (Google, OpenAI, and Elon Musk’s xAI all have Pentagon contracts) refusing to supply its technology to a new U.S. military internal network.
Implications
Parnell stated the Pentagon wants to “use Anthropic’s model for all lawful purposes” but did not provide details. He warned that limits on use of the technology could jeopardize critical military operations and added, “We will not let ANY company dictate the terms regarding how we make operational decisions.”
In a post on X, Parnell gave Anthropic until 5:01 PM ET on Friday to decide; otherwise, the Pentagon will terminate the partnership and deem the company a supply-chain risk for the Department of Defense.