U.S. military leaders meet with Anthropic to argue against Claude safeguards
Source: Hacker News
Background
U.S. military leaders, including Defense Secretary Pete Hegseth, met with executives from the artificial‑intelligence firm Anthropic on Tuesday to discuss a dispute over the government’s use of the company’s large language model, Claude. Hegseth gave Anthropic CEO Dario Amodei until the end of the day on Friday to accept the Department of Defense’s (DoD) terms or face penalties, as reported by Axios.
Anthropic positions itself as one of the most safety‑forward AI companies, but it has been at odds with the Pentagon for weeks over how the military may employ Claude. While U.S. defense officials seek unfettered access to Claude’s capabilities, Anthropic has reportedly resisted allowing its model to be used for mass surveillance or autonomous weapons that could kill without human input, according to the Wall Street Journal.
Negotiations
- DoD demands: Full access to Claude for all lawful military purposes.
- Anthropic’s stance: Refusal to enable uses that conflict with its safety policies, such as autonomous lethal systems or mass surveillance.
- Potential penalties: The DoD has threatened punitive measures, including canceling a large contract and designating Anthropic a “supply chain risk” (Axios).
The outcome of these talks could set a precedent for how the AI industry responds to government demands for military applications—a long‑standing point of contention among researchers and ethical‑AI advocates.
Industry Context
- In July of last year, the DoD struck deals with several major AI firms—Anthropic, Google, and OpenAI—offering contracts worth up to $200 million. Until recently, Claude was the only model cleared for use in classified military systems.
- On Monday, the DoD signed a separate agreement allowing the use of Elon Musk’s xAI chatbot in classified environments, despite recent backlash over the model’s generation of non‑consensual sexualized images of children (Axios).
- Both xAI and OpenAI have reportedly agreed to the government’s terms. According to the Washington Post, a defense official said OpenAI allowed its model to be used for “all lawful purposes.” OpenAI has not commented on the agreement.
Political Implications
- The meeting follows a reported incident in which the U.S. military used Claude to assist in the capture of Venezuelan leader Nicolás Maduro.
- The Trump administration has pushed for rapid AI integration into the armed forces, with President Donald Trump repeatedly pledging that the United States will win a global AI arms race.
- Emil Michael, the Pentagon’s chief technology officer and former Uber executive, has publicly urged Anthropic to “cross the Rubicon” and accept the DoD’s terms, stating: “If someone wants to make money from the government, those guardrails ought to be tuned for our use cases—so long as they’re lawful.” (Defense Scoop).
Anthropic’s Position and Funding
- Dario Amodei has long advocated for stronger AI regulation. Anthropic backs a political action committee that pushes for robust safeguards on artificial intelligence.
- Amodei opposed Trump during the 2024 presidential campaign, and Anthropic has hired several former Biden staffers. According to the Wall Street Journal, this political alignment contributed to a pro‑Trump venture‑capital firm withdrawing its investment earlier this year.
Ethical Concerns
The DoD has poured billions of dollars into AI‑enabled technologies, ranging from unmanned aerial drones to automated targeting systems. The rapid advancement of these tools raises pressing ethical questions about delegating lethal decision‑making to AI. The issue is no longer theoretical; the war in Ukraine has already featured deadly semiautonomous drones that can operate without human control (New York Times).