Trump moves to ban Anthropic from the US government

Published: February 28, 2026 at 03:00 PM EST
5 min read

Source: Ars Technica

The Defense Department pressured Anthropic to drop restrictions on how its AI can be used by the military.

Anthropic CEO Dario Amodei on Tuesday, July 25, 2023, during a hearing on AI held by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.
Credit: Getty Images | Bloomberg

Trump’s Directive

US President Donald Trump announced Friday that he was instructing every federal agency to “immediately cease” use of Anthropic’s AI tools. The move comes after Anthropic and top officials clashed for weeks over military applications of artificial intelligence.

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG‑ARM the Department of War,” Trump said in a post on Truth Social.

Trump said there would be a six‑month phase‑out period for agencies using Anthropic, allowing time for further negotiations between the government and the AI startup.

The Pentagon and Anthropic did not immediately respond to requests for comment.

The Department of Defense has sought to change the terms of a deal struck with Anthropic and other companies last July to eliminate restrictions on how AI can be deployed and instead permit “all lawful use” of the technology. Anthropic objected, claiming that such a change could allow AI to be used to fully control lethal autonomous weapons or to conduct mass surveillance on U.S. citizens.

The Pentagon does not currently use AI in these ways and has said it has no plans to do so. However, top Trump‑administration officials have voiced opposition to a civilian tech company dictating military use of such an important technology.

Anthropic was the first major AI lab to work with the U.S. military, through a $200 million deal signed with the Pentagon last year. It created several custom models known as Claude Gov that have fewer restrictions than its regular ones. Google, OpenAI, and xAI signed similar deals around the same time, but Anthropic is the only AI company currently working with classified systems.

Claude Gov in Military Use

Anthropic’s model is available for classified military work through platforms provided by Palantir and Amazon’s cloud services. Claude Gov is currently used for routine tasks—writing reports, summarizing documents—as well as for intelligence analysis and military planning, according to a source familiar with the situation who spoke to WIRED on condition of anonymity.

In recent years, Silicon Valley has moved from largely avoiding defense work to increasingly embracing it, with some companies becoming full‑blown military contractors. The fight between Anthropic and the Pentagon is now testing the limits of that shift. This week, several hundred workers from OpenAI and Google signed an open letter supporting Anthropic and criticizing their own companies’ decisions to remove restrictions on military use of AI.

In a memo sent to OpenAI staff, CEO Sam Altman said the company agreed with Anthropic and viewed mass surveillance and fully autonomous weapons as a “red line.” Altman added that OpenAI would try to negotiate a deal with the Pentagon that would let it continue working with the military, as reported by The Wall Street Journal.

The public spat between the Pentagon and Anthropic began after Axios reported that U.S. military leaders used Claude to assist in planning an operation to capture Venezuela’s president, Nicolás Maduro. After the operation, an employee at Palantir relayed concerns from an Anthropic staffer to U.S. military leaders about how its models had been used. Anthropic has denied ever raising concerns or interfering with the Pentagon’s use of its technology.

Escalation

The dispute has escalated, with officials publicly trading barbs with the AI company on social media.

  • Defense Secretary Pete Hegseth met with Anthropic’s CEO, Dario Amodei, earlier this week. He gave the company until Friday to commit to changing the terms of its contract to allow “all lawful use” of its models. Hegseth praised Anthropic’s products during the meeting and said the Department of Defense wanted to continue working with Anthropic, according to a source familiar with the interaction.
  • Some experts say the dispute boils down to a clash over “vibes” rather than concrete disagreements over how artificial intelligence should be deployed.

“This is such an unnecessary dispute, in my opinion,” says Michael Horowitz, an expert on military use of AI and former Deputy Assistant Secretary for Emerging Technologies at the Pentagon. “It is theoretical use cases that are not on the table for now.”

Horowitz notes that Anthropic has supported all of the ways the Department of Defense has proposed using its technology thus far. “My sense is that the Pentagon and Anthropic agree at present about the use cases where the technology is not ready for prime time,” he adds.

Anthropic was founded on the idea that AI should be built with safety at its core. In January, Amodei penned a blog post about the risks of powerful artificial intelligence that touched upon the dangers of fully autonomous AI‑controlled weapons.

“These weapons also have legitimate uses in the defense of democracy,” Amodei wrote. “But they are a dangerous weapon to wield.”

Additional reporting by Paresh Dave.

This story originally appeared at WIRED.com.

About the Author

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business, and culture.
