Anthropic accuses three Chinese AI labs of abusing Claude to improve their own models

Published: February 23, 2026 at 03:52 PM EST
2 min read
Source: Engadget

Anthropic is issuing a call to action against AI “distillation attacks” after accusing three AI companies of misusing its Claude chatbot. On its website, Anthropic claimed that DeepSeek, Moonshot, and MiniMax have been conducting “industrial‑scale campaigns…to illicitly extract Claude’s capabilities to improve their own models.”

Distillation attacks

In the AI world, distillation refers to training a smaller or less capable model on the outputs of a more powerful one. While distillation can be a legitimate technique, Anthropic warned that it can also be used nefariously. The company says the three Chinese firms were responsible for more than 16 million exchanges with Claude through approximately 24,000 fraudulent accounts. According to Anthropic, these competing companies were using Claude as a shortcut to develop more advanced AI models, potentially circumventing certain safeguards.
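To illustrate the legitimate version of the technique described above: a common formulation of distillation minimizes the divergence between a teacher model's "softened" output distribution and the student's. The sketch below (an illustration of the general method, not anything specific to Claude or the accused companies) computes that loss in plain Python:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; higher T yields a softer distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions:
    # the student is penalized for diverging from the teacher's outputs.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 0.5, -1.0]
# A student that matches the teacher exactly incurs zero loss.
print(round(distillation_loss(teacher, teacher), 6))       # 0.0
# A mismatched student incurs a positive loss to minimize.
print(distillation_loss(teacher, [0.0, 0.0, 0.0]) > 0)     # True
```

In an "attack" scenario, the teacher's logits are unavailable; instead, the text responses themselves are used as training targets, which is why large volumes of API exchanges, like the 16 million Anthropic cites, are the telltale sign.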

Evidence linking the campaigns

Anthropic stated that it linked each distillation attack campaign to the specific companies with “high confidence” by analyzing:

  • IP address correlation
  • Request metadata
  • Infrastructure indicators

The findings were corroborated by other players in the AI industry who have observed similar behavior.

Anthropic’s response

Anthropic announced that it will upgrade its system to make distillation attacks harder to execute and easier to detect. The company emphasized that it is taking steps to protect Claude’s capabilities and safeguard its users.

While pointing fingers at the other firms, Anthropic is also facing a lawsuit from music publishers who allege that the company used illegal copies of songs to train Claude.

This article originally appeared on Engadget.
