AI researchers are being paid eye-popping sums to move from one company to another (Meta has reportedly been offering $100 million bonuses and $250 million pay packages), but it’s still a mystery what these employees bring to the table that’s so valuable. Anthropic CEO Dario Amodei, however, has shed some light on the kinds of secrets in AI that could possibly justify these salaries.
On the Cheeky Pint podcast with Stripe co-founder John Collison, Amodei was asked how companies could maintain a lead in AI when new developments arrive weekly and researchers jump ship all the time. “The pharma industry, they protect their secrets with patents,” Collison told Amodei. “Wall Street, where they also have hundred-million-dollar secrets that (can be) just a very simple idea. Renaissance Technologies is the hedge fund that just very successfully locked up its employees. How do you manage to maintain a commercial lead in the current AI environment?” he asked.

“Yeah, so there are some things that are like that,” Amodei replied, hinting that there were some easy tricks that AI insiders knew about. “But I think more and more as the field matures, it starts to be more about know-how and ability to build complex kind of objects. So some of the ideas we work with are simple. But I would say the simple ideas, the ones that are like, oh twiddle this element of the transformer or something, those tend to be independently discovered, and anyone knows them before too long. But there are things like, oh man, this thing is actually really hard to implement from an engineering sense and we have it implemented. Or this thing is just kind of a pain to do, or there’s a know-how to doing it,” he added.
There are plenty of secrets in the AI space. The biggest are the closed-source model weights: the billions of numbers that together define an AI model. These weights can fit on a small storage device, yet can take tens of millions of dollars of compute to generate during training. They are so closely guarded that OpenAI considered putting them in a bullet-proof room in the company’s early days. If the weights were leaked or stolen, the tens of millions of dollars a company had spent on training would essentially go to waste, since anyone with access to those numbers could independently run the proprietary models the company had spent time and effort developing.
The other secrets are around training and inference techniques, and the know-how transmitted between researchers. It’s likely this knowledge that Mark Zuckerberg was after when he poached researchers from OpenAI and other labs for $100 million or more. Meta’s own Llama 4 model had been a disappointment, and Meta had watched several Chinese companies surpass it in the open-model space. Meta likely hopes to claw its way back to the top by hiring top researchers from other companies and betting they’ll bring their knowledge with them. But as Amodei says, these skills might not transfer easily to other labs: engineering implementations of complex systems can get incredibly nuanced, with very few people knowing what makes the whole system tick. And if AI is already at a stage where this is the secret sauce, poaching a handful of researchers might not be enough to rebuild a complex model afresh at a new company.