AI vs. the Pentagon: killer robots, mass surveillance, and red lines
Source: The Verge
Washington, DC – U.S. Secretary of War Pete Hegseth (C) speaks during a meeting of the Cabinet as President Donald Trump (L) and Commerce Secretary Howard Lutnick (R) listen in the Cabinet Room of the White House on January 29, 2026. The meeting comes as the Senate prepares a vote on a spending package to avoid another government shutdown, while Democrats push for funding for the Department of Homeland Security. (Photo by Win McNamee / Getty Images)
Can AI firms set limits on how and where the military uses their models? Anthropic is in heated negotiations with the Pentagon after refusing to comply with new contract terms that would require it to loosen guardrails on its AI models, permitting “any lawful use,” including mass surveillance of Americans and fully autonomous lethal weapons.
Pentagon CTO Emil Michael is pushing for Anthropic to be designated a “supply‑chain risk” if it doesn’t comply—a label usually reserved for national‑security threats. Anthropic’s rivals OpenAI and xAI have reportedly agreed to the new terms. Even after a White House meeting with Defense Secretary Pete Hegseth, Anthropic CEO Dario Amodei remains firm, stating that “threats do not change our position: we cannot in good conscience accede to their request.”
Follow along for the latest updates on the clash between AI companies and the Pentagon.
We don’t have to have unsupervised killer robots
Anthropic argues that granting the military unrestricted access to its models would cross an ethical line, especially when it comes to autonomous lethal systems. The company maintains that AI should be deployed under strict human oversight to prevent unintended escalation or violations of international humanitarian law.
Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance
- Red line: Anthropic will not remove safeguards that prohibit the use of its technology for mass surveillance of U.S. citizens or for fully autonomous weapons.
- Rationale: The company cites moral responsibility and potential legal liabilities under both U.S. and international law.
- Outcome so far: The Pentagon has threatened to label Anthropic a “supply‑chain risk,” which could limit the firm’s ability to secure government contracts and affect its reputation in the broader tech ecosystem.
Pete Hegseth’s Pentagon AI “bro squad” includes a former Uber executive and a private‑equity billionaire
The Pentagon’s AI advisory team, assembled by Secretary Hegseth, blends industry veterans with defense officials:
| Member | Background |
|---|---|
| Emil Michael | Former Uber executive; now Pentagon CTO |
| [Name Redacted] | Private‑equity billionaire with investments in AI startups |
| [Name Redacted] | Former senior official at the Department of Defense, now AI policy lead |
The group’s mandate is to accelerate the integration of advanced AI into defense systems while navigating ethical and security concerns.
Inside Anthropic’s existential negotiations with the Pentagon
- Negotiation timeline:
  - January 2026: Pentagon issues revised contract terms.
  - Mid‑January: Anthropic’s leadership rejects the “any lawful use” clause.
  - Late January: White House meeting with Secretary Hegseth; no concession from Anthropic.
- Key sticking points:
  - Guardrails: Anthropic insists on maintaining usage restrictions for surveillance and lethal autonomous weapons.
  - Liability: The company seeks clear legal protections against misuse of its models.
  - Transparency: Anthropic demands a public statement from the Pentagon outlining how the technology will be employed.
- Potential implications:
  - If Anthropic is labeled a supply‑chain risk, other AI firms may adopt similar refusals, prompting a broader industry debate over the balance between national security and ethical AI deployment.
  - Conversely, a compromise could set a precedent for future contracts, potentially eroding industry‑wide safeguards.
For ongoing coverage, stay tuned to this page.