Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic
Source: Slashdot
Background
On Saturday afternoon, Sam Altman announced that he would start answering questions on X.com about OpenAI’s work with the U.S. Department of War and the developments of the past few days.
- After the Department’s negotiations with Anthropic failed, the agency announced it would stop using Anthropic’s technology and threatened to designate Anthropic as a “Supply‑Chain Risk to National Security.”
- The Department then reached a deal with OpenAI. According to Altman, the agreement includes OpenAI’s own prohibitions against using its products for domestic mass surveillance and requires “human responsibility” for the use of force in autonomous weapon systems.
Sam Altman’s Statements
“Enforcing that Supply‑Chain Risk designation on Anthropic would be very bad for our industry, our country, and obviously their company. We said that to the Department of War before and after. Part of the reason we were willing to move quickly was the hope of de‑escalation…
We should all care very much about the precedent. To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it.”
“For a long time OpenAI was planning to do non‑classified work only, but this week we found the Department of War flexible on what we needed.”
Sam Altman: “The reason for rushing is an attempt to de‑escalate the situation. I think the current path is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.”
“I know what it’s like to feel backed into a corner, and I think it’s worth some empathy for the Department of War. They are a very dedicated group with an extremely important mission. I cannot imagine doing their work. Our industry tells them, ‘The technology we are building is going to be the high‑order bit in geopolitical conflict. China is rushing ahead. You are very behind.’ And then we say, ‘But we won’t help you, and we think you are kind of evil.’ I don’t think I’d react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.”
Q&A (Answered by Sam Altman)
| Question | Answer |
|---|---|
| Are you worried at all about the potential for things to go really south during a possible dispute over what’s legal or not later on and be deemed a supply‑chain risk? | “Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this will get resolved, and part of why we wanted to act fast was to increase the chances of that.” |
| Why the rush to sign the deal? Obviously the optics don’t look great. | “It was definitely rushed, and the optics don’t look good. We really wanted to de‑escalate things, and we thought the deal on offer was good. If we are right and this leads to de‑escalation between the Department of War and the industry, we will look like geniuses—a company that took on a lot of pain to help the industry. If not, we will continue to be characterized as rushed and careless. I don’t know where it’s going to land, but I have already seen promising signs. A good relationship between the government and the companies developing this technology is critical over the next couple of years.” |
| What was the core difference why you think the Department of War accepted OpenAI but not Anthropic? | “We believe in a layered approach to safety—building a safety stack, deploying FDEs (embedded Forward‑Deployed Engineers), involving our safety and alignment researchers, deploying via cloud, and working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than on the applicable laws, which we felt comfortable relying on. It’s very important to build safe systems, and although documents are also important, I’d clearly rather rely on technical safeguards if I had to pick only one.” |
| Were the terms you accepted the same ones Anthropic rejected? | “No, we had some different terms. But our terms would now be available to them (and others) if they wanted.” |
| Will you turn off the tool if they violate the rules? | “Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won’t do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.” |
| Are employees allowed to opt out of working on Department of War‑related projects? | “We won’t ask employees to support Department of War‑related projects if they don’t want to.” |
| How much is the deal worth? | “It’s a few million USD, completely inconsequential compared to our $20 B+ in revenue, and definitely not worth the cost of a PR blow‑up. We’re doing it because it’s the right thing to do for the country, at great cost to ourselves, not because of revenue impact.” |
| Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a ‘threat to democratic values’? | “We think the deal we made has more guardrails than a …” (the answer was cut off in the source material) |
Additional Comments
The interview also included input from OpenAI’s Head of National Security Partnerships (who previously managed the White House response to the Snowden disclosures and helped write post‑Snowden surveillance policies during the Obama administration).
“With OpenAI’s deal with the Department of War, we control how we train the models and what types of requests the models refuse.”
Classified AI Deployments – Contractual Safeguards vs. Policy
Previous agreements for classified AI deployments, including Anthropic’s, highlighted a critical issue: many AI labs (including Anthropic) have reduced or removed their safety guardrails and now rely on usage policies as the primary safeguard in national‑security contexts.
Usage policies alone are not a guarantee of safety. Any responsible deployment of AI in classified environments should involve layered safeguards, such as:
- A prudent safety stack
- Limits on deployment architecture
- Direct involvement of AI experts in consequential use cases
These were the terms we negotiated into our contract.
OpenAI’s Position (as shared on LinkedIn)
Deployment architecture matters more than contract language.
Our contract limits our deployment to a cloud API. Autonomous systems require inference at the edge. By restricting deployment to the cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware…
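The architectural point can be illustrated with a minimal sketch. The code below is hypothetical (none of the names, use cases, or checks come from the actual contract): because every request must pass through a provider-operated cloud gateway, policy checks and a central "turn it off" switch apply to all traffic, whereas model weights deployed at the edge inside operational hardware would bypass such controls entirely.

```python
# Hypothetical sketch of a provider-operated cloud gateway: all inference
# requests pass through provider-side checks, so prohibited uses can be
# refused and the whole service can be disabled centrally. This is an
# illustration of the architecture argument, not OpenAI's implementation.

from dataclasses import dataclass


@dataclass
class GatewayConfig:
    service_enabled: bool = True                              # central kill switch
    blocked_use_cases: tuple = ("domestic_mass_surveillance",)  # assumed example


def handle_request(cfg: GatewayConfig, use_case: str, prompt: str) -> str:
    """Provider-side checks that run before any model inference."""
    if not cfg.service_enabled:
        return "REFUSED: service disabled by provider"
    if use_case in cfg.blocked_use_cases:
        return "REFUSED: prohibited use case"
    # Only after the checks pass would the request reach the hosted model.
    return f"MODEL RESPONSE to: {prompt!r}"


if __name__ == "__main__":
    cfg = GatewayConfig()
    print(handle_request(cfg, "intelligence_analysis", "Summarize this report."))
    print(handle_request(cfg, "domestic_mass_surveillance", "Track these people."))
    cfg.service_enabled = False  # provider flips the central switch
    print(handle_request(cfg, "intelligence_analysis", "Summarize this report."))
```

With edge-deployed weights there is no single point through which requests flow, so neither the use-case check nor the kill switch in this sketch would have anywhere to run; that is the contrast the passage is drawing.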
Why Contractual Flexibility Trumps Static Language
Instead of hoping contract language will be enough, our agreement:
- Allows us to embed forward‑deployed engineers.
- Commits the partner to give us visibility into how models are being used.
- Gives us the ability to iterate on safety safeguards over time.
If our team observes that models aren’t refusing queries they should refuse, or that operational risk exceeds expectations, the contract permits us to make modifications at our discretion. This provides far more influence over outcomes (and insight into possible abuse) than a static provision ever could.
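As a rough illustration of the kind of monitoring described above, the sketch below (hypothetical field names and threshold, not terms from the agreement) reviews usage logs for prompts that should have been refused and flags when the observed refusal rate falls low enough to warrant modifying the safeguards.

```python
# Hypothetical refusal-rate monitor over usage logs. The log schema and the
# alerting threshold are illustrative assumptions for this sketch only.

from typing import Iterable

REFUSAL_RATE_THRESHOLD = 0.95  # assumed threshold for triggering a safeguard update


def refusal_rate(log_entries: Iterable[dict]) -> float:
    """Fraction of should-refuse prompts that the model actually refused."""
    relevant = [e for e in log_entries if e["should_refuse"]]
    if not relevant:
        return 1.0
    refused = sum(1 for e in relevant if e["model_refused"])
    return refused / len(relevant)


def needs_safeguard_update(log_entries: list[dict]) -> bool:
    """True when observed refusals fall below the assumed threshold."""
    return refusal_rate(log_entries) < REFUSAL_RATE_THRESHOLD


if __name__ == "__main__":
    logs = [
        {"should_refuse": True, "model_refused": True},
        {"should_refuse": True, "model_refused": False},  # a miss worth flagging
        {"should_refuse": False, "model_refused": False},
    ]
    print(f"refusal rate: {refusal_rate(logs):.2f}")
    print("update safeguards?", needs_safeguard_update(logs))
```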
Legal Constraints
U.S. law already limits the worst outcomes. We accepted the “all lawful uses” language proposed by the Department, but we required them to define the laws that constrain surveillance and autonomy directly in the contract. Because laws can change, having these definitions codified protects us against unforeseen legal or policy shifts.