Our Agreement with the Department of War
Our agreement includes:
Deployment architecture
- Cloud‑only deployment with a safety stack that we run, including the principles above.
- No “guardrails off” or non‑safety‑trained models are provided, and models are not deployed on edge devices (which could enable autonomous lethal weapons).
- The architecture enables independent verification that redlines are not crossed, including running and updating classifiers (see the sketch below).
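To make the cloud‑only pattern concrete, here is a minimal sketch of how a server‑side safety gate of this kind can work. Everything in it (ClassifierVerdict, run_classifiers, gated_completion, the 0.5 threshold, and the toy classifiers) is hypothetical and is not OpenAI's actual safety stack; it only illustrates that classifiers can screen both requests and responses inside the cloud boundary.

```python
# Hypothetical sketch of a cloud-side safety gate; names and thresholds
# are illustrative, not OpenAI's actual safety stack.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ClassifierVerdict:
    """Result of scoring one piece of text against one redline classifier."""
    name: str
    score: float       # estimated probability the text crosses this redline
    threshold: float   # score at or above which the text is blocked

    @property
    def flagged(self) -> bool:
        return self.score >= self.threshold

def run_classifiers(text: str,
                    classifiers: dict[str, Callable[[str], float]],
                    threshold: float = 0.5) -> list[ClassifierVerdict]:
    """Score text against every classifier currently deployed server-side."""
    return [ClassifierVerdict(name, scorer(text), threshold)
            for name, scorer in classifiers.items()]

def gated_completion(prompt: str,
                     model: Callable[[str], str],
                     classifiers: dict[str, Callable[[str], float]]) -> str:
    """Call the model only if the request passes; release output only if it passes."""
    # Screen the request before it reaches the model.
    for verdict in run_classifiers(prompt, classifiers):
        if verdict.flagged:
            return f"request refused ({verdict.name})"
    output = model(prompt)
    # Screen the response before it leaves the cloud boundary.
    for verdict in run_classifiers(output, classifiers):
        if verdict.flagged:
            return f"response withheld ({verdict.name})"
    return output

if __name__ == "__main__":
    # Toy stand-ins: a keyword "classifier" and an echo "model".
    classifiers = {
        "mass-surveillance": lambda t: 1.0 if "track all" in t.lower() else 0.0,
    }
    model = lambda p: f"model output for: {p}"
    print(gated_completion("summarize this logistics report", model, classifiers))
    print(gated_completion("track all citizens in real time", model, classifiers))
```

Because the classifiers live server‑side, they can be retrained, re‑thresholded, and redeployed independently of the model and of any client, which is what makes independent verification and updating possible under a cloud‑only deployment.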
Our contract – relevant language
The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well‑established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to make other high‑stakes decisions that require approval by a human decision‑maker under the same authorities.
Per DoD Directive 3000.09 (dated 25 January 2023), any use of AI in autonomous and semi‑autonomous systems must undergo rigorous verification, validation, and testing to ensure those systems perform as intended in realistic environments before deployment.
For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign‑intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information, consistent with these authorities. The system shall also not be used for domestic law‑enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
AI‑expert involvement
- Cleared, forward‑deployed OpenAI engineers will assist the government.
- Cleared safety and alignment researchers will remain in the loop.
FAQ
Why are you doing this?
- The U.S. military needs strong AI models to support its mission amid growing threats from adversaries integrating AI.
- We delayed a classified‑deployment contract until our safeguards were ready.
- We refuse to remove key technical safeguards for national‑security work.
- We aim to de‑escalate tensions between the DoW and U.S. AI labs and promote deep collaboration.
- As part of the deal, we asked that the same terms be made available to all AI labs and that the government address issues with Anthropic.
How were you able to reach a deal when Anthropic could not? Did you sign a deal they wouldn’t?
- Our contract provides stronger guarantees and more responsible safeguards than earlier agreements, including Anthropic’s.
- Redlines are more enforceable because deployment is cloud‑only, the safety stack stays active, and cleared OpenAI personnel stay involved.
- We do not know why Anthropic could not reach a similar deal.
Should Anthropic be designated as a “supply chain risk”?
- No. We have communicated this position clearly to the government.
Will this deal enable the Department of War to use OpenAI models to power autonomous weapons?
- No. The safety stack, cloud‑only deployment, contract language, and existing laws prevent this. OpenAI personnel will also be in the loop for additional assurance.
Will this deal enable the Department of War to conduct mass surveillance?
- No. The contract explicitly prohibits unconstrained monitoring of U.S. persons’ private information. All deployments are subject to the safeguards described above.
Does the deal affect U.S. persons?
- No. Our safety stack, contract language, and existing laws heavily restrict the DoW’s ability to conduct domestic surveillance. OpenAI personnel will remain involved for assurance.
Do you have to deploy models without a safety stack?
- No. We retain full control over the safety stack and will not deploy without safety guardrails. Safety and alignment researchers will stay in the loop and help improve systems over time.
What happens if the government violates the terms of the contract?
- We could terminate the contract if the counterparty breaches its terms. We do not expect this to happen.
What if the government changes the law or existing DoW policies?
- The contract references surveillance and autonomous‑weapons laws and policies as they exist today. Even if those laws or policies later change, use of our systems must continue to meet the standards in force at signing, as reflected in the agreement.
Red‑Line Protections (compared with Anthropic’s stance)
Anthropic identified two red lines (mass domestic surveillance and fully autonomous weapons). We share those and add a third: automated high‑stakes decision making. Our contract upholds all three:
- Mass domestic surveillance – The DoW considers this illegal and will not use the system for that purpose; the contract makes this explicit.
- Fully autonomous weapons – Cloud‑only deployment prevents our models from powering fully autonomous weapons, which would require edge deployment.
- Automated high‑stakes decision making – The contract language above bars the AI System from making high‑stakes decisions that require approval by a human decision‑maker.
In addition to these protections, our contract includes layered safeguards such as our safety stack and OpenAI technical experts in the loop.