OpenAI’s Sam Altman announces Pentagon deal with ‘technical safeguards’
Source: TechCrunch
Background
OpenAI CEO Sam Altman announced late on Friday that his company has reached an agreement allowing the Department of Defense to use its AI models in the department’s classified network.
The Pentagon had previously pushed AI companies, including Anthropic, to allow their models to be used “for all lawful purposes.”[1] Anthropic sought to draw a red line around mass domestic surveillance and fully autonomous weapons.
In a lengthy statement released Thursday, Anthropic CEO Dario Amodei said the company “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” but argued that “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”[2]
More than 60 OpenAI employees and 300 Google employees signed an open letter this week asking their employers to support Anthropic’s position.[3]
After Anthropic and the Pentagon failed to reach an agreement, President Donald Trump criticized the “Left‑wing nut jobs at Anthropic” in a social‑media post that also directed federal agencies to stop using the company’s products after a six‑month phase‑out period.[4]
Secretary of Defense Pete Hegseth later claimed Anthropic was trying to “seize veto power over the operational decisions of the United States military” and designated Anthropic as a supply‑chain risk, stating:
“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”[5]
Anthropic responded that it had “not yet received direct communication from the Department of War or the White House on the status of our negotiations,” but insisted it would “challenge any supply‑chain risk designation in court.”[6]
OpenAI Pentagon Agreement
Altman posted on X that OpenAI’s new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic. He highlighted two safety principles:
- Prohibition on domestic mass surveillance
- Human responsibility for the use of force, including autonomous weapon systems
Altman said the Department of War (DoW) agrees with these principles, reflects them in law and policy, and that OpenAI will “build technical safeguards to ensure our models behave as they should.” OpenAI engineers will be deployed to the Pentagon to help with model safety.
“We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.”
Fortune’s Sharon Goldman reported that Altman told OpenAI employees at an all‑hands meeting the government will allow the company to build its own “safety stack” to prevent misuse, and that “if the model refuses to do a task, then the government would not force OpenAI to make it do that task.”[7]
Reactions
- Anthropic: Continues to seek a court challenge to the supply‑chain risk designation and has not received direct communication from the DoW or the White House.
- OpenAI staff: More than 60 employees signed the open letter supporting Anthropic’s stance, indicating internal concern over the Pentagon deal.
- Political figures: President Trump publicly condemned Anthropic and called for a halt to its use by federal agencies.
References
1. “Pentagon pushes AI companies, including Anthropic, to allow their models to be used for ‘all lawful purposes.’” – TechCrunch
2. Anthropic statement – Anthropic News
3. Open letter signed by OpenAI and Google employees – TechCrunch
4. Trump’s social‑media post – TechCrunch
6. Anthropic’s response – Anthropic News