Anthropic and the Pentagon

Published: March 6, 2026 at 12:07 PM EST

Source: Schneier on Security

OpenAI In, Anthropic Out: The Pentagon’s AI Supplier Shift

Summary:
OpenAI has replaced Anthropic as the AI supplier to the U.S. Department of Defense (DoD). The change follows a week of high‑profile political pressure on big‑tech firms and a broader debate over the existential risks of advanced AI.


1. Background

  • Anthropic’s stance: The company insisted that its models could not be used for “mass surveillance” or “fully autonomous weapons.”
  • Political reaction: Defense Secretary Pete Hegseth dismissed those provisions as “woke.”

2. The Turning Point

  • Trump’s order (Friday evening): Federal agencies were instructed to stop using Anthropic models.
  • OpenAI’s response: Within hours, OpenAI struck a deal with the administration to provide classified government systems with AI, potentially securing hundreds of millions of dollars in contracts.

3. Market Implications

  • Free‑market principle: In a competitive economy, firms should be free to sell and buy what they want, subject to existing federal contracting rules.

  • Pentagon’s threats: The department’s “vindictive” threats are the outlier in this scenario.

  • Commoditization of AI models:

    • Top‑tier offerings (Anthropic, OpenAI, Google) now have comparable performance.
    • Improvements are incremental, with each new model offering only modest gains every few months.
    • Users judge any one provider's model to be the best only about 60% of the time, a virtual tie among providers.
  • Branding matters:

    • Anthropic, led by Dario Amodei, markets itself as a “moral and trustworthy” AI provider—a positioning that carries market value.
    • OpenAI’s CEO Sam Altman has pledged to uphold similar safety principles, though how this will play out amid the current rhetoric remains unclear.

4. Strategic Posturing

  • Anthropic’s loss vs. reputational gain:

    • Publicly opposing the Pentagon may be worth the forfeited contracts for Anthropic.
    • Conversely, OpenAI’s involvement could entangle it in political controversy, potentially harming its brand with civil‑libertarian and corporate customers.
  • Pentagon’s alternatives:

    • Even without a big‑tech partner, the DoD can deploy dozens of open‑weight models (publicly available parameters) that are often licensed permissively for government use.

5. Anthropic’s Prior Commitments

  • $200M defense partnership (2025): Anthropic entered a multi‑year agreement with the DoD worth up to $200 million.

  • Palantir collaboration (2024): A partnership with the surveillance firm further tied Anthropic to government‑related projects.

  • Amodei’s statements:

    • His public essay on AI risk repeatedly invokes “democracy” and “autocracy,” while sidestepping the specifics of federal collaboration.
    • He frames AI as a tool for “robust military superiority” on behalf of democratic nations confronting autocratic threats—a vision that assumes a shared commitment to public wellbeing and democratic control.

6. Pentagon Requirements

  • Unique buyer profile: The DoD purchases lethal systems (tanks, artillery, grenades) that lack ethical guardrails.

  • Automation trajectory: Weapons are moving toward greater automation, raising ethical and safety concerns.

  • Normal market dynamics:

    • The Pentagon can set product specifications, decide whether to meet them, and choose suppliers accordingly—standard procurement practice.
  • Designation as a “supply‑chain risk”:

    • The Trump administration labeled Anthropic a national‑security risk—a status previously reserved for foreign firms.
    • This blocks not only direct government contracts but also contracts with Anthropic’s contractors and suppliers.
  • Potential invocation of the Defense Production Act (DPA):

    • The DPA could force Anthropic to remove contractual safety provisions or fundamentally alter its models.
  • Ongoing legal battles: Lawsuits are expected to clarify the limits of these governmental actions.

7. The Future of Autonomous Weapons

  • Historical context:

    • Autonomous weapons have long existed (e.g., 1980s Phalanx CIWS, mechanical bear traps).
    • The world continues to debate the ethics of land mines and similar technologies.
  • Current reality:

    • Modern drones can locate and engage targets without direct human input.
    • AI will inevitably be used for military purposes, just as every other technology has been throughout history.

Bottom line: The OpenAI‑Anthropic swap illustrates the intersection of market forces, political pressure, and the evolving ethics of AI in warfare. While the Pentagon’s procurement needs are unique, the broader debate over AI safety, corporate responsibility, and national security is only intensifying. The coming weeks will likely see further legal challenges and policy clarifications that shape how AI will be integrated into defense systems moving forward.

Essay

The lesson here should not be that one company in our rapacious capitalist system is more moral than another, or that one corporate hero can stand in the way of government’s adopting AI as technologies of war, surveillance, or repression. Unfortunately, we don’t live in a world where such barriers are permanent or even particularly sturdy.

Instead, the lesson is about the importance of democratic structures and the urgent need for their renovation in the US. If the Defense Department is demanding the use of AI for mass surveillance or autonomous warfare that we, the public, find unacceptable, that should tell us we need to pass new legal restrictions on those military activities. If we are uncomfortable with the force of government being applied to dictate how and when companies yield to unsafe applications of their products, we should strengthen the legal protections around government procurement.

The Pentagon should maximize its war‑fighting capabilities, subject to the law. And private companies like Anthropic should posture to gain consumer and buyer confidence. But we should not rest on our laurels, thinking that either is doing so in the public’s interest.

This essay was written with Nathan E. Sanders, and originally appeared in The Guardian.

