Geoffrey Hinton is continuing to draw comparisons between freely available AI models and nuclear bombs.
Geoffrey Hinton, who won the Nobel Prize in Physics in 2024 and is known as the Godfather of Deep Learning, has said that releasing foundation-model weights is akin to making nuclear material freely available. Hinton had previously said that open-sourcing big models was like selling nuclear weapons at the electronics retailer RadioShack.
“If you ask, why doesn’t Alabama have a (nuclear) bomb? It’s because you need fissile material, and it’s hard to get fissile material. It takes a lot of time and energy to produce the fissile material. Once you have the fissile material, it’s much easier to make a bomb,” he said in an interview.
“And so the government clearly doesn’t want fissile material to be out there. You can’t go on eBay and buy some fissile material. That’s why we don’t have lots of little atomic bombs belonging to tiny states. So if you ask, what’s the equivalent for these big chatbots, the equivalent is a foundation model that’s been trained, maybe using a hundred million dollars, maybe a billion dollars. It’s been trained on lots of data. It’s got a huge amount of competence,” he continued.
“If you release the weights of that model, you can now fine tune it to all sorts of bad things. So I think it’s crazy to release the weights of these big models, because they are our main constraint on bad actors. And Meta’s now done it, and other people have followed suit. So it’s too late now — the cat’s out of the bag, but it was a crazy move,” he said.
Hinton has also recently said that AI has begun to behave deceptively, and has learnt to lie to human users. His concern is that open-sourcing these weights will allow anyone to fine-tune a model for nefarious purposes: the foundation model may cost $100 million to train, but making it freely available for anyone to modify could pose grave dangers from adversarial actors.
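Hinton's point turns on how low the barrier becomes once the weights are public: the expensive pre-training is already done, and adapting the released model to a new objective takes commodity hardware and a handful of lines of code. As a rough, hypothetical illustration (the model identifier and training file below are placeholders, not any specific release), a parameter-efficient fine-tuning run with open-source Hugging Face libraries could look roughly like this:

```python
# Illustrative sketch only: shows how little code is needed to start
# fine-tuning a released open-weights model. "open-org/foundation-7b"
# and "custom_corpus.txt" are placeholder names, not real artifacts.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_id = "open-org/foundation-7b"  # hypothetical open-weights checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # many causal LMs ship without a pad token

# Attach small low-rank adapters so the full model never has to be retrained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# Any text corpus can steer the model's behaviour from here.
data = load_dataset("text", data_files={"train": "custom_corpus.txt"})["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
).train()
```

The hundreds of millions of dollars of pre-training Hinton refers to are already baked into the downloaded weights; a script like the one above only nudges their behaviour, which is exactly why he regards the weights themselves as the critical safeguard.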
Hinton is one of the leading figures in AI, but not everyone shares his concerns. Yann LeCun, Chief AI Scientist at Meta, has said that the dangers of AI have been "incredibly inflated" to the point of distortion, and added that AI will itself have no desire to dominate humans. Two distinct schools of thought are emerging on AI safety, but with powerful models having been open-sourced for nearly a year now and no significant negative fallout reported so far, humanity can be cautiously optimistic that powerful AI models might not end up being as dangerous as many believe.