Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports
Source: TechCrunch
Anthropic is accusing three Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—of creating more than 24,000 fake accounts to interact with its Claude model. Through these accounts the labs allegedly generated over 16 million exchanges using a technique called “distillation,” targeting Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.
Distillation attacks
DeepSeek
- Generated >150,000 exchanges focused on foundational logic, alignment, and safe alternatives to policy‑sensitive queries.
- Previously released R1, an open‑source reasoning model that matched frontier labs’ models at a fraction of the cost, and is expected to launch DeepSeek V4, which reportedly outperforms Claude and OpenAI’s ChatGPT in coding.
Moonshot AI
- Produced >3.4 million exchanges targeting agentic reasoning, tool use, coding, data analysis, computer‑use agents, and computer vision.
- Recently released the open‑source model Kimi K2.5 and a coding agent.
MiniMax
- Conducted 13 million exchanges aimed at agentic coding, tool use, and orchestration.
- Anthropic observed MiniMax redirecting nearly half of its traffic to the latest Claude model upon its launch, in an apparent attempt to siphon its capabilities.
Anthropic says it will continue investing in defenses that make distillation attacks harder to execute and easier to identify, calling for “a coordinated response across the AI industry, cloud providers, and policymakers.”
Export‑control debate
The attacks coincide with ongoing discussions about U.S. export controls on advanced AI chips. The Trump administration recently allowed companies such as Nvidia to export advanced AI chips (e.g., the H200) to China. Critics argue that loosening controls boosts China’s AI computing capacity at a critical moment in the global AI race.
Anthropic notes that the scale of extraction performed by DeepSeek, MiniMax, and Moonshot “requires access to advanced chips.” It argues that “restricted chip access limits both direct model training and the scale of illicit distillation,” reinforcing the rationale for tighter export controls.
Industry reaction
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator and co‑founder of CrowdStrike, told TechCrunch:
“It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact. This should give us even more compelling reasons to refuse to sell any AI chips to any of these companies, which would only advantage them further.”
Anthropic warns that distillation not only threatens U.S. AI leadership but also creates national‑security risks:
“Anthropic and other U.S. companies build systems that prevent state and non‑state actors from using AI to, for example, develop bioweapons or carry out malicious cyber activities. Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely.”
The blog post also highlights the risk of authoritarian governments deploying frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance—risks amplified when such models are open‑sourced.