Anthropic: Chinese AI firms created 24,000 fraudulent accounts for distillation attacks

Published: February 23, 2026 at 5:52 p.m. EST
6 min read

Source: Mashable Tech

Anthropic accuses DeepSeek and two other Chinese AI firms of large‑scale “distillation attacks”

DeepSeek logo on a mobile phone screen with the Chinese flag in the background
Credit: Jonathan Raa / NurPhoto via Getty Images

By Timothy Beck Werth, Tech Editor, Mashable

Timothy Beck Werth is the Tech Editor at Mashable, where he leads coverage and assignments for the Tech and Shopping verticals. He has over 15 years of experience as a journalist and editor, covering consumer technology, smart‑home gadgets, and men’s grooming and style products. Previously, he was Managing Editor and Site Director of SPY.com, a men’s product‑review site, and has written for GQ, The Daily Beast, Gear Patrol, and The Awl.




Anthropic is accusing three Chinese artificial‑intelligence companies of “industrial‑scale campaigns” to illicitly extract its technology using distillation attacks. According to Anthropic, the firms created 24,000 fraudulent accounts to hide these efforts.

In a blog post detailing the attacks, Anthropic named the three AI labs:

  • DeepSeek – maker of the popular DeepSeek AI models
  • Moonshot
  • MiniMax

Anthropic framed the attacks as a national‑security issue, stating:

“We have identified industrial‑scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to illicitly extract Claude’s capabilities to improve their own models. These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions.”

Earlier accusations

In January, OpenAI also accused DeepSeek of using distillation attacks to effectively steal its technology. The reaction was largely mockery rather than sympathy: AI companies have long insisted they are entitled to train on copyrighted works without permission or payment, and critics called the complaint contradictory, since the same firms treat their own intellectual property as off‑limits for training while employing those very methods on everyone else’s work.

“You can’t be expected to have a successful AI program when every single article, book, or anything else that you’ve read or studied, you’re supposed to pay for,” said former President Donald Trump at an AI event in July 2025. “When a person reads a book or an article, you’ve gained great knowledge. That does not mean that you’re violating copyright laws or have to make deals with every content provider.” He added, “China’s not doing it.”

This double standard puts AI companies in an awkward position, forcing them to defend their intellectual‑property claims while engaging in similar behavior themselves.



What are distillation attacks?

Distillation is a common training technique for large language models, but it can also be used to effectively reverse‑engineer aspects of a rival’s technology. In a distillation attack, researchers run variations of the same prompts against a target model at scale, record how it responds, and use those responses to train their own model.

As Anthropic explained in its blog post:

“Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”
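Conceptually, the two‑step recipe is simple: query the target model at scale, then train a cheaper model to imitate the recorded responses. The toy sketch below illustrates the idea only; all names are hypothetical, `teacher()` stands in for a commercial model’s API, and simple memorization stands in for the fine‑tuning a real attack would perform.

```python
# Toy sketch of distillation: a cheap "student" learns to imitate a
# "teacher" model purely from recorded prompt/response pairs.

def teacher(prompt: str) -> str:
    """Stand-in for the large target model being queried."""
    answers = {
        "capital of france": "Paris",
        "capital of japan": "Tokyo",
        "capital of italy": "Rome",
    }
    for key, value in answers.items():
        if key in prompt.lower():
            return value
    return "I don't know."

def collect_transcripts(prompts):
    """Step 1: query the teacher at scale and record its responses."""
    return [(p, teacher(p)) for p in prompts]

class Student:
    """Step 2: 'train' a cheaper model on the teacher's transcripts.

    Memorization stands in for fine-tuning; a real attack would train an
    LLM on millions of such exchanges.
    """
    def __init__(self, transcripts):
        self.memory = dict(transcripts)

    def respond(self, prompt: str) -> str:
        return self.memory.get(prompt, "I don't know.")

probes = [
    "What is the capital of France?",
    "What is the capital of Japan?",
]
student = Student(collect_transcripts(probes))
print(student.respond("What is the capital of France?"))  # Paris
```

The point of the sketch is that the student never sees the teacher’s weights or training data; its capability comes entirely from the teacher’s outputs, which is why terms of service, rather than technical barriers, are often the only line of defense.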

Chinese companies have a reputation for flagrantly ignoring intellectual‑property treaties and copyright laws, and reverse‑engineering technology from Western companies. However, while Anthropic says the distillation attacks it uncovered violated its terms of service, it’s not clear that they violated any international laws, or what remedy Anthropic has besides suspending the violating accounts.

To prevent attacks like this, Anthropic called for cooperation between AI companies, government agencies, and other stakeholders.

AI companies like Anthropic, xAI, Meta, and OpenAI are in the midst of one of the largest spending booms in history, pouring tens of billions of dollars into AI infrastructure, data centers, and research and development. If foreign competitors can cheaply recreate that LLM technology through distillation, they gain a clear advantage over the U.S. firms footing the bill.

“These campaigns are growing in intensity and sophistication,” the blog post reads. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”

Mashable reached out to Anthropic with questions about the distillation attacks, and we’ll update this article if we receive a response.

Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.




