Trump bans Anthropic AI from federal agencies after firm refuses to unlock capabilities — Anthropic cites risks of autonomous military applications, mass domestic surveillance
Source: Tom’s Hardware
Trump’s Directive to Federal Agencies
President Donald Trump ordered every U.S. federal agency to stop using technology from AI company Anthropic on Friday, February 27. He posted the directive to Truth Social at 3:47 PM ET, more than an hour before the Pentagon’s 5:01 PM ET deadline for Anthropic to comply with its demands.
“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS,” Trump wrote on Truth Social, adding that he was directing every U.S. federal agency to “IMMEDIATELY CEASE” all use of Anthropic’s technology.
Anthropic’s Stance
After months of private talks that collapsed into a public standoff, Anthropic CEO Dario Amodei said the company “cannot in good conscience accede” to the Department of Defense’s terms. The company cited ethical concerns over autonomous military applications and the risk of mass domestic surveillance.
Pentagon’s Response
The Pentagon threatened to invoke the Korean War‑era Defense Production Act to compel Anthropic’s compliance and warned it would label the company a “supply chain risk,” a designation usually reserved for firms from adversarial nations, such as China’s Huawei.
Impact on Defense Contractors
- Claude, Anthropic’s flagship model, was the only AI model approved for use in classified military systems.
- Palantir, a defense software firm that uses Claude for its most sensitive government contracts, will need to find a replacement quickly.
- OpenAI CEO Sam Altman said he shares Anthropic’s position on ethical “red lines” for autonomous weapons, complicating OpenAI’s candidacy as a direct replacement.