They’re Scaring People With Dubious Studies For Regulatory Capture: Yann LeCun On Anthropic’s Chinese Hacking Claims

Yesterday, Anthropic published a detailed report on how its AI models were used by Chinese hackers to infiltrate several companies, but not everyone is taking these claims at face value.

Meta’s Chief AI Scientist Yann LeCun has hit out at Anthropic’s study, calling it a ploy for regulatory capture. “We believe this is the first documented case of a large-scale AI cyberattack executed without substantial human intervention. It has significant implications for cybersecurity in the age of AI agents,” Anthropic said on X while sharing the results of the study.

The post caught the attention of Chris Murphy, the US Senator from Connecticut. “Guys wake the f up. This is going to destroy us – sooner than we think – if we don’t make AI regulation a national priority tomorrow,” he posted.

Yann LeCun wasn’t impressed with this take. “You’re being played by people who want regulatory capture,” he posted in response. “They are scaring everyone with dubious studies so that open source models are regulated out of existence,” he added.

LeCun has called out Anthropic’s claims before. In March this year, he said that Anthropic CEO Dario Amodei’s idea of a country of geniuses in a datacenter was “completely BS”. “We are not going to get to human-level AI by just scaling up LLMs,” LeCun had said. “This is just not going to happen. There’s no way, okay, absolutely no way. And whatever you can hear from some of my more adventurous colleagues, it’s not going to happen within the next two years. There’s absolutely no way in hell. The idea that we were going to have, you know, a country of geniuses in the data center — that’s completely BS,” he said.

Interestingly, the world’s top open-source models are currently Chinese, and Anthropic has often trained its guns on China. Anthropic CEO Dario Amodei had earlier hinted that he supported US export controls on chips to China. “As we go to hundreds of thousands and millions of chips, there’s two possible futures. In one of those futures, the U.S. and its allies are able to provision that many chips fast enough and because of the export controls on chips to China and because Chinese Huawei chips are inferior, China cannot get to that scale. There’s another world where both sides get to that scale, (where there will be parity between US and China),” he had said in January this year after DeepSeek’s release.

Amodei later said that DeepSeek had no safety blocks against generating harmful information, and he has played down the importance of open source in AI, claiming that open models still need lots of computing power to run. Anthropic, for its part, has long billed itself as a safety-focused lab, and regularly publishes research on new findings about how AI models work.

Yann LeCun, though, seems to think this is all part of a grand plan. With his latest post, he seemed to be insinuating that Anthropic emphasizes safety through regular reports and disclosures in order to push governments toward regulating AI models, which would favour larger players like itself and stifle competition. While it’s impossible to tell what Anthropic’s plans are (and it does seem important to keep researching how models work to be adequately prepared for any potential downsides of the new technology), there are prominent and vocal voices in Silicon Valley who seem to doubt its true intentions.

Posted in AI