Hypocrite Or Victim? The Backlash To Anthropic’s Attack On Chinese AI Labs Over Distillation, Explained

Anthropic’s disclosure that DeepSeek, Moonshot AI, and MiniMax used tens of thousands of fake accounts to extract capabilities from its Claude models was always going to provoke a reaction. What perhaps surprised some observers was how quickly the backlash turned not on the Chinese labs, but on Anthropic itself.

On X, a wave of critics — including Elon Musk — accused Anthropic of hypocrisy. “Anthropic is guilty of stealing training data at massive scale and has had to pay multi-billion dollar settlements for their theft. This is just a fact,” Musk wrote. The argument, bluntly put: how can a company that built its business on the unlicensed work of millions of authors, coders, and web publishers claim the moral high ground when a competitor does something similar to it?

It’s a charge that deserves to be taken seriously — even if the full picture is more complicated than the critics suggest.

The Case Against Anthropic

The critics’ strongest argument is the most basic one: Anthropic trained its models on vast quantities of data created by other people, without asking permission and without paying them. Anthropic, like most frontier AI labs, has faced lawsuits from authors who allege their books were used without consent, and has been party to significant legal settlements as a result. The data that powers Claude — the very product the Chinese labs are accused of stealing from — was itself assembled, critics argue, through a form of large-scale extraction.

There is also a pointed economic argument. The Chinese labs, whatever else they did, actually paid Anthropic for API access. The millions of writers and web publishers whose work was scraped to train Claude were given no such consideration. If anything, the distillation campaigns were a more commercially legitimate form of extraction than the original training data collection — at least someone was paying someone.

Then there is the open-source dimension. DeepSeek, Moonshot AI, and MiniMax have all released open models, making their capabilities freely available to researchers, developers, and the broader public. Anthropic keeps Claude firmly closed. So even if one accepts that the Chinese labs extracted value from Anthropic’s model, the argument goes, they redistributed it in a way that benefits the global AI community. Anthropic’s original training data extraction, by contrast, enriched a private company and its investors.

Critics also raise the legal dimension more pointedly: Anthropic and other frontier labs have been accused not merely of scraping publicly available data, but of specifically using pirated books and other copyrighted works that were never legally made available in the first place. If that is true, the company is not just in a morally grey area — it may have broken the law in multiple jurisdictions.

The Case For Anthropic

Anthropic and its defenders would push back on several of these points, and not without force.

The most straightforward response is contractual. Anthropic’s terms of service explicitly prohibit using its API to train competing models. The Chinese labs agreed to those terms and then violated them at industrial scale, using fake accounts and proxy services specifically designed to evade detection. Whatever one thinks of how AI companies assembled their training data, no individual author ever signed a contract with Anthropic agreeing to conditions and then systematically circumvented them. The violation here is not just ethical — it is a deliberate breach of a legal agreement.

There is also a meaningful difference in the nature and scale of harm. Web scraping for training data, however controversial, arguably did not damage the economic position of any single author in a direct, measurable way. The distillation campaigns, by contrast, allowed Chinese labs to build directly competing products at a fraction of the research and development cost — effectively leaving years of expensive work on Anthropic’s balance sheet while capturing the benefits without compensation. That is a concrete, targeted commercial injury.

The geopolitical argument is harder to dismiss than critics tend to acknowledge. Anthropic’s disclosure was not just a complaint about lost revenue. The company argues that illicitly distilled models are likely to have safety guardrails stripped out, and that Chinese labs could feed those unprotected capabilities into military, intelligence, and surveillance infrastructure. Whether or not one agrees with Anthropic’s politics or business practices, the prospect of frontier AI capabilities — with safety measures removed — flowing into authoritarian governments’ offensive cyber and surveillance programs is a distinct and serious concern that has nothing to do with copyright law.

A Genuine Tension, Not A Simple Answer

The honest conclusion is that both sides have real points, and the attempt to reduce this to a simple story of hero or hypocrite obscures more than it reveals.

The critics are right that there is something uncomfortable about a company that built its value on unlicensed data now invoking legal and moral arguments to protect that value from others. The AI industry’s relationship with training data has been legally murky and ethically contested from the beginning, and Anthropic is not exempt from that critique.

But the critics are wrong to suggest this makes Anthropic’s complaints invalid. Two wrongs do not negate each other. If we think large-scale data extraction without consent is problematic — and there are good reasons to think so — that principle applies to everyone, including the Chinese labs that extracted from Claude. The fact that Anthropic may have done something similar does not license others to do it to Anthropic.

What the backlash really reflects is a broader unresolved tension at the heart of the AI industry: an industry built substantially on other people’s work, without clear legal or ethical frameworks governing that process, is now trying to erect clear legal and ethical frameworks to protect its own outputs. Those two positions are not easily reconciled, and Anthropic — for all the merit in its national security arguments — has not yet fully squared that circle.