Anthropic Just Told You Who They Are. Believe Them.
Source: Dev.to
Three things happened on February 24. Separately, they’re news. Together, they’re a strategy reveal most people will miss.
The Pentagon Ultimatum
Defense Secretary Pete Hegseth met with Dario Amodei and essentially said: give us unrestricted Claude access by Friday, or we will designate Anthropic a supply‑chain risk and invoke the Defense Production Act—a Korean War‑era law that lets the government conscript private companies into national‑security production, whether they want to or not.
Claude is already the only AI model running on classified U.S. defense systems, deployed through Palantir, and was reportedly used in the operation to capture Venezuela’s Maduro. The Pentagon doesn’t want to replace Claude; it wants to unleash it for mass surveillance, autonomous weapons, and the full menu of capabilities. Anthropic’s usage policy explicitly bans both.
So Anthropic faces a choice no amount of fundraising prepared it for: comply and torch the “responsible AI” brand, or resist and get conscripted anyway.
Revised Responsible Scaling Policy (RSP 3.0)
On the same day, Anthropic released RSP 3.0, the new version of its Responsible Scaling Policy. The previous version had a hard line: don’t train more powerful models unless safety measures are confirmed first. That line is gone. The new version says development will be delayed only if leadership believes Anthropic leads the AI race and catastrophic risks are significant.
Chief Scientist Jared Kaplan explained the logic: “If competitors are blazing ahead, pausing wouldn’t help—it would result in a less safe world.” The condition for slowing down now requires two things to be true simultaneously, and one of them—“we’re in the lead”—is something Anthropic can always argue against, given the presence of OpenAI, Google, and others.
This isn’t merely a policy update; it’s a permission structure.
Enterprise Plugin Launch and Market Reaction
Anthropic also launched Claude Cowork integrations with Google Workspace, Slack, DocuSign, FactSet, LegalZoom, SimilarWeb, WordPress, S&P Global, MSCI, and more. The rollout includes private plugin marketplaces and multi‑step workflows across Excel and PowerPoint with context passing between apps.
Market reaction was immediate:
- Salesforce +4%
- Thomson Reuters +11%
- FactSet +6%
- Intapp +7.1%
- S&P 500 up 0.77% to 6,890.07
- Nasdaq up 1.04% to 22,863.68
Weeks earlier, Anthropic’s legal plugin had triggered an $830 billion global software sell‑off; this time the same company catalyzed a recovery.
Strategic Implications
The pattern is clear:
- Raise capital – Anthropic closed a $30 billion raise at a roughly $380 billion valuation, with annualized revenue of $14 billion (Claude Code alone contributes $2.5 billion).
- Loosen constraints – The safety guardrails were relaxed on the same day the funding arrived.
- Expand in every direction – Military access, enterprise plugins, developer tools—all simultaneously.
Anthropic isn’t pivoting from safety; it’s graduating from it. “We were the responsible ones” becomes past tense, a credential on a résumé rather than an operating principle. Investors didn’t write checks for restraint; they wrote them for market dominance.
Doug O’Laughlin (SemiAnalysis) noted that 4% of public GitHub commits are now authored by Claude Code, projected to exceed 20% by the end of 2026. Liam Ottley’s side‑by‑side testing showed Claude Code beating OpenClaw on every metric that matters for real business automation—terminal access, filesystem integration, API orchestration. The enterprise plugin launch is therefore infrastructure for making Claude the default operating layer for knowledge work.
Outlook
The Pentagon situation will likely resolve quietly—some version of expanded access with nominal policy language that lets both sides save face. The safety policy change appears permanent. The real battle will be fought in the enterprise arena, not against governments but against Microsoft and Google for control of the AI workflow layer.
Anthropic told you exactly who they are on February 24. The question isn’t whether you believe them; it’s whether you’ve updated your model of the world accordingly.
The “safe AI company” era has ended. What comes next is simply an AI company—with very, very good technology and $30 billion in pressure to use it.