Employees at Google and OpenAI support Anthropic’s Pentagon stand in open letter
Source: TechCrunch
Background
Anthropic is in a stalemate with the United States Department of War over the Pentagon’s request for unrestricted access to the company’s AI technology. As the Pentagon’s Friday‑afternoon deadline approaches, more than 300 Google employees and over 60 OpenAI employees have signed an open letter urging their leaders to support Anthropic and refuse the unilateral use of its models.
The Open Letter
The letter calls on executives at Google and OpenAI to:
- Uphold Anthropic’s red lines against domestic mass surveillance and fully autonomous weaponry.
- “Put aside their differences and stand together” to refuse the Department of War’s current demands.
“They’re trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.” – Open letter
The signatories argue that mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression, while autonomous weapons raise profound ethical concerns.
Company Reactions
OpenAI
In a CNBC interview on Friday morning, OpenAI CEO Sam Altman said he “doesn’t personally think the Pentagon should be threatening DPA against these companies.” An OpenAI spokesperson later confirmed that the company shares Anthropic’s red lines against autonomous weapons and mass surveillance.
Google
Google has not issued an official statement, but Chief Scientist Jeff Dean, speaking as an individual, posted on X:
“Mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression. Surveillance systems are prone to misuse for political or discriminatory purposes.”
— Jeff Dean on X (February 25, 2026)
Pentagon Demands
Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that if the company does not concede, the Pentagon could:
- Declare Anthropic a “supply chain risk,” or
- Invoke the Defense Production Act (DPA) to force compliance.
Anthropic’s response, posted on Thursday, reiterated its stance:
“These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security. Regardless, these threats do not change our position: we cannot in good conscience accede to their request.”
— Anthropic statement
Context
- The military already uses X’s Grok, Google’s Gemini, and OpenAI’s ChatGPT for unclassified tasks and is negotiating to extend access to classified work.
- Anthropic maintains an existing partnership with the Pentagon but insists its AI must not be used for mass domestic surveillance or fully autonomous weaponry.