Our agreement with the Department of War

Published: February 28, 2026 at 07:30 AM EST

Source: OpenAI Blog

Agreement with the Pentagon

Yesterday we reached an agreement with the Pentagon to deploy advanced AI systems in classified environments, and we asked that the same terms also be made available to all AI companies.

We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s. Here’s why.

Our Red Lines

We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs:

  • No use of OpenAI technology for mass domestic surveillance.
  • No use of OpenAI technology to direct autonomous weapons systems.
  • No use of OpenAI technology for high‑stakes automated decisions (e.g., “social credit” systems).

Other AI labs have reduced or removed their safety guardrails and relied on usage policies as their primary safeguard in national‑security deployments. We think our approach better protects against unacceptable use.

Multi‑Layered Protection

In our agreement, we protect our red lines through a more expansive, multi‑layered approach. We:

  • Retain full discretion over our safety stack.
  • Deploy via cloud only.
  • Keep cleared OpenAI personnel in the loop.
  • Include strong contractual protections.

All of this is in addition to the strong existing protections in U.S. law.

We believe strongly in democracy.
Given the importance of this technology, we believe that the only good path forward requires deep collaboration between AI efforts and the democratic process. We also believe our technology will introduce new risks, and we want the people defending the United States to have the best tools.

Our Agreement Includes

1. Deployment Architecture

  • Cloud‑only deployment with a safety stack that we run, incorporating the principles above and others.
  • We are not providing the DoW with “guardrails off” or non‑safety‑trained models, nor are we deploying our models on edge devices (where they could potentially be used to power autonomous lethal weapons).
  • Our deployment architecture enables us to independently verify that these red lines are not crossed, including running and updating classifiers.
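
To make the “cloud‑only, classifier‑gated” idea concrete, here is a minimal, purely hypothetical sketch in Python. It is not OpenAI’s actual safety stack; the classifier, category names, keyword rules, and function names below are all illustrative assumptions, standing in for far more capable model‑based classifiers that a provider can update on its own schedule.

```python
# Hypothetical sketch only -- NOT OpenAI's safety stack. It illustrates the
# general pattern described above: in a cloud-only deployment, every request
# passes through a provider-run policy classifier before it ever reaches the
# model, and the provider can update that classifier independently.
from __future__ import annotations

from dataclasses import dataclass, field

# Illustrative red-line categories (names invented for this sketch).
RED_LINES = {
    "mass_domestic_surveillance",
    "autonomous_weapons_control",
    "high_stakes_automated_decision",
}


@dataclass
class PolicyClassifier:
    """Toy stand-in for a provider-updated classifier (real ones would be model-based)."""
    blocked_patterns: dict[str, str] = field(default_factory=lambda: {
        "track all citizens": "mass_domestic_surveillance",
        "fire without an operator": "autonomous_weapons_control",
    })

    def classify(self, prompt: str) -> str | None:
        """Return the red-line category a prompt falls under, or None if allowed."""
        lowered = prompt.lower()
        for pattern, category in self.blocked_patterns.items():
            if pattern in lowered:
                return category
        return None

    def update(self, pattern: str, category: str) -> None:
        """The provider can tighten the classifier at any time, without touching the model."""
        if category not in RED_LINES:
            raise ValueError(f"unknown red-line category: {category}")
        self.blocked_patterns[pattern] = category


def model_generate(prompt: str) -> str:
    """Placeholder for the model call that lives behind the same cloud boundary."""
    return f"[model output for: {prompt!r}]"


def handle_request(prompt: str, classifier: PolicyClassifier) -> str:
    """Cloud-side request path: classify first, call the model only if no red line is hit."""
    violation = classifier.classify(prompt)
    if violation is not None:
        return f"REFUSED: request falls under red line '{violation}'"
    return model_generate(prompt)


if __name__ == "__main__":
    clf = PolicyClassifier()
    print(handle_request("Summarize logistics options for the training exercise", clf))
    print(handle_request("Track all citizens' phone locations in real time", clf))
```

Because the gate runs on the provider’s side of the cloud boundary, updating it (the `update` method in this sketch) never requires shipping anything to an edge device, which is the property the bullets above rely on.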

2. Contract Language

The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well‑established safety and oversight protocols.
The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high‑stakes decisions that require approval by a human decision‑maker under the same authorities.
Per DoD Directive 3000.09 (dated January 25, 2023), any use of AI in autonomous and semi‑autonomous systems must undergo rigorous verification, validation, and testing to ensure such systems perform as intended in realistic environments before deployment.

For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign‑intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information, consistent with these authorities. The system shall also not be used for domestic law‑enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

3. AI‑Expert Involvement

We will have cleared forward‑deployed OpenAI engineers helping the government, with cleared safety and alignment researchers in the loop.

FAQ

Why are you doing this?

  1. The U.S. military needs strong AI models to support its mission, especially given growing threats from adversaries integrating AI into their systems.
  2. We initially refrained from a classified‑deployment contract because our safeguards and systems were not ready. We have since worked hard to ensure a classified deployment can happen with safeguards that keep red lines intact.
  3. We remain unwilling to remove key technical safeguards to enhance performance on national‑security work—that is not the correct approach to supporting the U.S. military.
  4. We also want to de‑escalate tensions between the DoW and U.S. AI labs. A good future requires real, deep collaboration between government and AI labs. As part of our deal, we asked that the same terms be made available to all AI labs and that the government try to resolve issues with Anthropic; the current state is a very bad way to kick off the next phase of collaboration.

Why could you reach a deal when Anthropic could not? Did you sign the deal they wouldn’t?

Based on what we know, our contract provides better guarantees and more responsible safeguards than earlier agreements, including Anthropic’s original contract. Our red lines are more enforceable because deployment is limited to cloud‑only (not at the edge), our safety stack remains intact, and cleared OpenAI personnel stay in the loop. We do not know why Anthropic could not reach this deal, and we hope they and more labs will consider it.

Do you think Anthropic should be designated as a “supply chain risk”?

No. We have made our position on this clear to the government.

Will this deal enable the Department of War to use OpenAI models to power autonomous weapons?

No. Based on our safety stack, cloud‑only deployment, contract language, and existing laws, regulations, and policies, we are confident this cannot happen. OpenAI personnel will also be in the loop for additional assurance.

Will this deal enable the Department of War to use OpenAI models to conduct mass surveillance?

No. As with autonomous weapons, our safety stack, cloud‑only deployment, contract language, and the existing laws that heavily restrict the DoW from domestic surveillance make us confident this cannot happen. We will also have OpenAI personnel in the loop for additional assurance.

Can you be compelled to surveil U.S. persons?

No. Based on our safety stack, the contract language, and existing laws that heavily restrict the DoW from domestic surveillance, we are confident that this cannot happen. We will also have OpenAI personnel in the loop for additional assurance.

Do you have to deploy models without a safety stack?

No. We retain full control over the safety stack we deploy and will not deploy without safety guardrails. In addition, our safety and alignment researchers will be in the loop and help improve systems over time. We know that other AI labs have reduced model guardrails and relied on usage policies as the primary safeguard, but we think our layered approach better protects against unacceptable use.

What happens if the government violates the terms of the contract?

As with any contract, we could terminate it if the counter‑party violates the terms. We don’t expect that to happen.

What if the government changes the law or existing DoW policies?

Our contract explicitly references the surveillance and autonomous‑weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.

Additional context from Anthropic

In their post, Anthropic states two of their red lines (we share the same two, plus a third: automated high‑stakes decision making) and explains why they do not believe those red lines would be upheld under the contracts they had seen from the DoW at that time. Below is why we believe those same red lines would hold in our contract:

  • Mass domestic surveillance – It was clear in our interactions that the DoW considers mass domestic surveillance illegal and has no plans to use our technology for this purpose. We made it explicit in our contract that such use is not covered under lawful use.
  • Fully autonomous weapons – The cloud‑deployment surface covered in our contract would not permit powering fully autonomous weapons, as this would require edge deployment.

In addition to these protections, our contract offers additional layered safeguards, including our safety stack and OpenAI technical experts in the loop.
