OpenAI reveals more details about its agreement with the Pentagon
Source: TechCrunch
Background
After negotiations between Anthropic and the Pentagon fell through, President Donald Trump directed federal agencies to stop using Anthropic’s technology after a six‑month transition period, and Secretary of Defense Pete Hegseth designated the AI company as a supply‑chain risk.
OpenAI’s Agreement with the Department of Defense
OpenAI quickly announced that it had reached a deal of its own to deploy its models in classified environments. The company published a blog post outlining its approach, which identified three areas where OpenAI’s models cannot be used:
- Mass domestic surveillance
- Autonomous weapon systems
- High‑stakes automated decisions (e.g., “social credit” systems)
OpenAI argued that, unlike other AI companies that have “reduced or removed their safety guardrails and relied primarily on usage policies,” its agreement protects these red lines through a “more expansive, multi‑layered approach.”
“We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections,” the blog said. “This is all in addition to the strong existing protections in U.S. law.”
The post added, “We don’t know why Anthropic could not reach this deal, and we hope that they and more labs will consider it.”
Safeguards and Deployment Architecture
OpenAI’s head of national security partnerships, Katrina Mulligan, emphasized that the deal’s safeguards rest on deployment architecture rather than contract language: because access is limited to a cloud API, the company says its models cannot be integrated directly into weapons systems, sensors, or other operational hardware.
Criticism and Responses
Techdirt’s Mike Masnick claimed the deal “absolutely does allow for domestic surveillance,” because it states that the collection of private data will comply with Executive Order 12333 (and other laws). Masnick described that order as “how the NSA hides its domestic surveillance by capturing communications by tapping into lines outside the US even if it contains info from/on US persons.”
In a LinkedIn post, Mulligan argued that the discussion around the contract language assumes “the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single usage policy provision in a single contract with the Department of War.” She countered, “Deployment architecture matters more than contract language … By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware.”
Altman addressed questions on X, admitting the agreement had been rushed and acknowledging the resulting backlash, which included Anthropic’s Claude overtaking OpenAI’s ChatGPT in Apple’s App Store shortly after the dispute.
“We really wanted to de‑escalate things, and we thought the deal on offer was good,” Altman said. “If we are right and this does lead to a de‑escalation between the DoW and the industry, we will look like geniuses… If not, we will continue to be characterized as … rushed and uncareful.”
Industry Impact
The controversy highlighted differing approaches among AI labs to government contracts and raised questions about transparency, safety safeguards, and the role of deployment architecture in preventing misuse of advanced AI models.