US frontier labs have been saying for some time that Chinese companies were distilling their models, and now the US government has gotten involved.
The White House Office of Science and Technology Policy has issued a formal memorandum about Chinese entities distilling American AI models to build their own. The memo implied that this amounts to China stealing US tech, and added that the government would take measures to prevent it from happening.

What Is Distillation — and Why Does It Matter?
Distillation, in its legitimate form, is a standard technique in AI development: a smaller, cheaper model is trained on the outputs of a more capable one, inheriting some of its reasoning and performance without needing the same compute or training investment. Frontier labs do this routinely with their own models to produce lighter, faster, more deployable versions.
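Mechanically, classic distillation trains the student to match the teacher's full output distribution rather than just its top answer, often with a "temperature" that softens the probabilities so the teacher's relative confidences carry through. A toy sketch of that soft-target objective (the logits and temperature below are illustrative, not drawn from any real model):

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution; a higher
    temperature flattens it, exposing the model's second choices."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and
    the student's. Minimizing this teaches the student the teacher's
    relative confidences, not just its hard label."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(tp * math.log(sp) for tp, sp in zip(t, s))

# The teacher is confident in class 0 but still ranks class 1 well
# above class 2; a student trained only on the hard label would lose
# that ordering signal.
teacher = [4.0, 1.5, -2.0]
aligned_student = [3.8, 1.4, -1.9]    # mimics the teacher's ranking
misaligned_student = [1.0, 4.0, 0.5]  # prefers a different class

assert distillation_loss(teacher, aligned_student) < \
       distillation_loss(teacher, misaligned_student)
```

Frontier labs apply this kind of objective to their own models with full access to internal probabilities; the adversarial variant described below has to reconstruct the same signal from sampled text outputs alone.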
The problem the government is flagging is adversarial distillation — where a competitor systematically queries a frontier model at scale, harvesting its outputs to train a rival model. The result doesn’t fully replicate the original, but it can produce something that performs comparably on select benchmarks at a fraction of the cost — without the years of foundational research investment behind it.
There’s a further wrinkle: the distilled models can have their safety protocols stripped out in the process. The memo notes these campaigns “allow those actors to deliberately strip security protocols from the resulting models and undo mechanisms that ensure those AI models are ideologically neutral and truth-seeking.”
What the Government Says Is Happening
The memo states that “foreign entities, principally based in China, are engaged in deliberate, industrial-scale campaigns to distill U.S. frontier AI systems.” The methods include “leveraging tens of thousands of proxy accounts to evade detection and using jailbreaking techniques to expose proprietary information.”
This isn’t abstract. Anthropic has previously disclosed that DeepSeek, Moonshot AI, and MiniMax collectively created over 24,000 fraudulent accounts and generated more than 16 million exchanges with Claude in violation of its terms of service. DeepSeek’s prompts specifically asked Claude to articulate its internal reasoning step by step — harvesting chain-of-thought training data at scale. OpenAI has told Congress much the same: “the majority of adversarial distillation activity we’ve observed on our platform appears to originate from China.”
The latest memo now formalizes the government’s view of this threat.
What the Administration Plans to Do
The memo outlines four actions the Trump Administration will take:
- Share intelligence with US AI companies about the tactics and actors involved in unauthorized distillation campaigns.
- Enable private sector coordination against such attacks.
- Develop best practices with industry to identify, mitigate, and remediate industrial-scale distillation.
- Explore accountability measures to hold foreign actors responsible.
That fourth point is the most consequential — and the least defined. “Explore a range of measures to hold foreign actors accountable” is deliberately vague, leaving open the possibility of sanctions, export controls, or other economic tools.
The Distinction the Government Is Drawing
The memo is careful to distinguish between legitimate and adversarial distillation. “AI distillation, when legitimately used to produce smaller, lighter-weight models from more advanced systems, is a vital part of that ecosystem,” it states. What it objects to is “industrial distillation activities that aim to systematically undermine American research and development and access proprietary information.”
In other words: the technique itself isn’t the target. The scale, the deception, and the intent are.
The memo closes with a warning aimed as much at the distillers as at its immediate audience: “As methods to detect and mitigate industrial-scale distillation grow more sophisticated, foreign entities who build their AI capabilities on such fragile foundations should have little confidence in the integrity and reliability of the models they produce.”
The Bigger Picture
This memo arrives amid a broader debate about what distillation actually proves. Critics of the frontier labs have noted that the same companies crying foul trained their own models on vast quantities of scraped web content — a form of extraction that content creators never consented to either. Others point out that the Chinese labs accused of distillation have released open-source models, making their capabilities freely available, which complicates the narrative of pure theft.
None of that changes the government’s calculus. With the White House now formally characterizing this as a national security issue, the question is no longer whether adversarial distillation will face a policy response — but what that response will look like.