OpenAI changes deal with US military after backlash

Published: March 3, 2026, 06:51 AM EST
4 min read

Source: BBC Technology

OpenAI revises its Pentagon agreement

OpenAI says it is making changes to the “opportunistic and sloppy” deal it struck with the U.S. government over the use of its technology in classified military operations. The move raises questions about how AI is used in war and how much power rests with governments and private companies.

In a statement on Saturday, OpenAI said its agreement with the Pentagon had “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s” [https://openai.com/index/our-agreement-with-the-department-of-war/]. On Monday, CEO Sam Altman posted on X that further changes were being made, including ensuring the system would not be “intentionally used for domestic surveillance of U.S. persons and nationals”.

As part of the new amendments, intelligence agencies such as the National Security Agency would also be unable to use OpenAI’s system without a “follow‑on modification” to the contract. Altman added that the company had made a mistake by rushing “to get this out on Friday”.

“The issues are super complex, and demand clear communication,” he said. “We were genuinely trying to de‑escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

OpenAI has faced a backlash from users since announcing its work with the Pentagon. Day‑over‑day uninstalls of the ChatGPT mobile app reportedly surged by 295% on Saturday, compared with a typical 9% [https://techcrunch.com/2026/03/02/chatgpt-uninstalls-surged-by-295-after-dod-deal/]. Meanwhile, Anthropic’s Claude rose to the top of Apple’s App Store rankings, where it still remains [https://apps.apple.com/us/iphone/charts].

Claude was blacklisted by the Trump administration after Anthropic refused to drop a corporate “red‑line” principle that its technology should not be used to create fully autonomous weapons. Despite this, it has since emerged that Claude was used in the U.S.–Israel war with Iran just hours after Trump’s ban [https://www.wsj.com/livecoverage/iran-strikes-2026/card/u-s-strikes-in-middle-east-use-anthropic-hours-after-trump-ban-ozNO0iClZpfpL7K7ElJ2?gaa_at=eafs&gaa_n=AWEtsqd1PRr0Da9XAC_JX9iHAqWPudILMGUWGu5JqI-QFVBMW3FiO9ity8OR0K_xrdI%3D&gaa_ts=69a6c3d6&gaa_sig=6ZRQN1M_BA4y75uzp-tQ67wO3Mwi3OK0yy3jDPjRzn85uiEZTQCppQ0u-CDg4QFCaaVJoMdVdqum_ezzSmCJ4w%3D%3D]. The Pentagon declined to comment on its dealings with Anthropic.

How AI is used by the military

AI is employed in a number of ways in the military, from streamlining logistics to rapidly processing large volumes of information.

The U.S., Ukraine, and NATO all use technology from Palantir, an American company that provides data‑analytics tools to government customers for intelligence gathering, surveillance, counter‑terrorism, and military purposes. The UK Ministry of Defence recently signed a £240m contract with the firm.

At the end of last year, the BBC spoke to some of those involved in integrating Palantir’s AI‑powered defence platform Maven into NATO. The software brings together a huge range of military information—from satellite data to intelligence reports—which can then be analysed by commercial AI systems such as Claude to help make “faster, more efficient, and ultimately more lethal decisions where that’s appropriate”, said Louis Mosley, head of Palantir’s UK operations.

[Image: A phone displaying the white OpenAI logo, with the American flag in the background. Credit: Getty Images]

[Image: A satellite image of military vehicles, each marked with a purple box. Credit: BBC/Palantir]

A screenshot from a demo of Palantir’s AI system.

Large language models can make mistakes, or even fabricate information—known as “hallucinating”. Lieutenant Colonel Amanda Gustave, chief data officer for NATO’s Task Force Maven, stressed that there is human oversight, adding that they are “always introducing a human in the loop” and that it “would never be the case” that an AI would “make a decision for us”.
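
In software terms, the “human in the loop” safeguard Lt Col Gustave describes means the model’s output is treated only as a recommendation, with no effect until a person explicitly approves it. Below is a minimal, purely illustrative Python sketch of such a gate; every name in it (recommend_actions, operator_approves) is hypothetical and not drawn from Maven or any real system.

```python
# Illustrative "human in the loop" gate: the AI may recommend, never decide.
# All names and data here are hypothetical, not from Maven or any real system.

def recommend_actions(intel_reports: list[str]) -> list[str]:
    """Stand-in for an AI model that ranks candidate actions from raw reports."""
    # A real system would call a model here; this sketch just sorts the input.
    return sorted(intel_reports)

def operator_approves(recommendation: str) -> bool:
    """A human operator must explicitly confirm each recommendation."""
    answer = input(f"Approve '{recommendation}'? [y/N] ")
    return answer.strip().lower() == "y"

def run_with_human_in_the_loop(intel_reports: list[str]) -> list[str]:
    approved = []
    for rec in recommend_actions(intel_reports):
        # Nothing proceeds without an affirmative human sign-off.
        if operator_approves(rec):
            approved.append(rec)
    return approved

if __name__ == "__main__":
    print(run_with_human_in_the_loop(["reposition unit B", "refuel convoy A"]))
```

The design point echoed in Lt Col Gustave’s comments is that the approval step cannot be bypassed: a model’s mistakes or hallucinations become, at worst, rejected suggestions rather than actions.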

Palantir, unlike Anthropic, does not support a blanket ban on autonomous weapons but says there should be a “human in the loop”. Professor Mariarosaria Taddeo of Oxford University told the BBC that with Anthropic out of the Pentagon, “the most safety‑conscious actor” was now “out from the room”. “That is a real problem,” she added.
