US threatens Anthropic with deadline in dispute on AI safeguards
Source: BBC Technology
Threat to Anthropic
U.S. Secretary of Defense Pete Hegseth vowed to remove Anthropic from the Department of Defense supply chain if the company declined to allow its artificial‑intelligence (AI) technology to be used across military applications. A senior Pentagon official said Anthropic had until Friday evening to comply with the Department’s request. If the company did not agree, Hegseth said the Defense Production Act would be invoked, potentially compelling Anthropic executives to permit unrestricted Pentagon use on national‑security grounds and labeling the firm a supply‑chain risk.
Discussion with Anthropic
The threat was issued during a Pentagon meeting that Hegseth had demanded with Anthropic CEO Dario Amodei. According to a source familiar with the talks, the tone was cordial, but Amodei outlined Anthropic’s red lines:
- Involvement in autonomous kinetic operations where AI makes final targeting decisions without human intervention.
- Use of Anthropic tools for mass domestic surveillance.
The Pentagon official clarified that the current dispute is unrelated to autonomous weapons or mass surveillance.
Anthropic’s Response
Anthropic issued a statement saying it continued “good‑faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.” A company spokesperson noted that Amodei “expressed appreciation for the Department’s work and thanked the Secretary for his service” during the meeting.
Other AI Companies with Pentagon Contracts
Anthropic, the maker of the Claude chatbot, was one of four AI firms awarded Pentagon contracts last summer. The other recipients included:
- OpenAI (creator of ChatGPT)
- xAI (Elon Musk’s company behind the Grok chatbot)
Each contract was valued at up to $200 million (£148 million).
Safety Track Record and Controversies
Anthropic positions itself as a safety‑focused AI developer, regularly publishing safety reports on its products. A report from the previous year acknowledged that its technology had been “weaponised” by hackers for sophisticated cyber‑attacks.
The company’s reputation faced scrutiny after reports that the U.S. military used Claude during the operation that led to the capture of former Venezuelan President Nicolás Maduro in January.