OpenAI and Google employees rush to Anthropic’s defense in DOD lawsuit
Source: TechCrunch
Overview
More than 30 OpenAI and Google DeepMind employees filed a statement on Monday supporting Anthropic’s lawsuit against the U.S. Defense Department after the federal agency labeled the AI firm a supply‑chain risk, according to court filings.
“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry,” the brief reads, and its signatories include Google DeepMind chief scientist Jeff Dean.
Pentagon’s designation
Late last week, the Pentagon labeled Anthropic a supply‑chain risk—typically reserved for foreign adversaries—after the AI firm refused to allow the Department of Defense to use its technology for mass surveillance of Americans or for autonomously firing weapons. The DOD argued that it should be able to use AI for any “lawful” purpose and not be constrained by a private contractor.
Amicus brief
The amicus brief in support of Anthropic appeared on the docket a few hours after the Claude maker filed two lawsuits against the DOD and other federal agencies. Wired was the first outlet to report the news.
In the court filing, the Google and OpenAI employees argue that if the Pentagon was “no longer satisfied with the agreed‑upon terms of its contract with Anthropic,” the agency could have “simply canceled the contract and purchased the services of another leading AI company.”
The DOD did, in fact, sign a deal with OpenAI shortly after designating Anthropic a supply‑chain risk—a move many of the ChatGPT maker’s employees protested.
“If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the brief reads. “And it will chill open deliberation in our field about the risks and benefits of today’s AI systems.”
Industry response
The filing also affirms that the concerns behind Anthropic’s stated red lines are legitimate and warrant strong guardrails. In the absence of public law governing AI use, the brief argues, the contractual and technical restrictions developers impose on their systems are a critical safeguard against catastrophic misuse.
Many of the employees who signed the statement have also, in recent weeks, signed open letters urging the DOD to withdraw the label and calling on their companies’ leaders to support Anthropic and to refuse unilateral government use of their AI systems.