White House memo claims mass AI theft by Chinese firms

Published: April 23, 2026 at 07:13 PM EDT
2 min read

Source: BBC Technology

Image: Michael Kratsios, the White House director of science and technology policy, speaking at a podium with an American flag behind him. (EPA)

White House memo on AI theft

The White House announced that it will work more closely with U.S. artificial intelligence (AI) firms to combat “industrial‑scale campaigns” by foreign actors seeking to steal advances in the technology.

Michael Kratsios, director of the White House Office of Science and Technology Policy, wrote in an internal memo that the administration had new information indicating that “foreign entities, principally based in China” were exploiting American firms. Through a process known as distillation, these firms copy AI technology developed by U.S. companies.

Planned White House actions

Kratsios outlined four steps to “avoid and halt malicious exploitation”:

  1. Share more information with U.S. AI companies about the tactics employed and actors involved in distillation campaigns.
  2. Improve coordination with companies to fight the attacks.
  3. Develop best practices to identify, mitigate, and remediate distillation.
  4. Explore accountability mechanisms for foreign actors.

The memo did not detail specific plans for action against entities found to be undertaking distillation of U.S. AI technology. A White House spokesperson declined to comment beyond the memo.

China’s response

A spokesperson for China’s embassy in Washington, D.C., took issue with what they described as “the unjustified suppression of Chinese companies by the U.S.” The spokesperson said:

“China is not only the world’s factory but is also becoming the world’s innovation lab. China’s development is the result of its own dedication and effort as well as international cooperation that delivers mutual benefits.”

How distillation campaigns work

Distillation campaigns are carried out by firms that operate thousands of individual accounts for a given AI chatbot or tool, allowing them to appear as normal users. These accounts then make coordinated attempts to “jailbreak” the service or otherwise extract information about the underlying AI models that is not meant to be public. The harvested data is then used to build and train the firm’s own AI models.
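The technique the memo alludes to resembles knowledge distillation from machine-learning research, in which a "student" model is trained to mimic a "teacher" model's output distribution. Below is a minimal, illustrative sketch of the core loss involved; the function names, temperature value, and logits are assumptions for demonstration, not anything described in the memo.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimising this quantity trains the student to reproduce the
    teacher's behaviour -- the core idea of knowledge distillation.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Illustrative values: a student that matches the teacher incurs zero
# loss; a mismatched student incurs a positive loss.
teacher = [3.0, 1.0, 0.2]
aligned = distillation_loss(teacher, [3.0, 1.0, 0.2])   # 0.0
mismatch = distillation_loss(teacher, [0.2, 1.0, 3.0])  # > 0
```

In a campaign of the kind the memo describes, the "teacher" outputs would be responses harvested at scale from a commercial model's public interface rather than logits from a model the attacker owns.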

“As methods to detect and mitigate industrial‑scale distillation grow more sophisticated, foreign entities who build their AI capabilities on such fragile foundations should have little confidence in the integrity and reliability of the models they produce,” Kratsios said.
