Open-Sourcing Big AI Models Is Like Selling Nuclear Weapons At RadioShack: Geoffrey Hinton

Open-source AI models are becoming increasingly capable and commonplace, and while this is being cheered on by the developer community, not everyone is comfortable with the trend.

Nobel Prize-winning AI researcher Geoffrey Hinton has cautioned against making powerful AI models open-source. “I think open-sourcing big (AI) models is like being able to buy nuclear weapons at RadioShack. Do you guys still remember what RadioShack was? Maybe not,” he laughed. RadioShack was a popular chain of electronics stores where hobbyists could buy components for their projects.

Hinton seemed to be saying that handing a powerful AI model to the public at large, much as electronics were once sold at RadioShack, could have unintended consequences. He also called for regulation of the AI industry.

“I think if you want regulations, the most important regulation would be to not open source big models…it’s crazy to open source these big models because bad actors can then fine-tune them for all sorts of bad things,” he said.

Open-source models are those that share their “weights”, the numerical values of their parameters, with users, which allows anyone to modify them as they choose. GPT-2, for instance, is an open-source model, but OpenAI has chosen not to open-source its later iterations, such as GPT-3 and GPT-4: these models can only be accessed through ChatGPT’s interface or through APIs, and only their outputs are made available to users. OpenAI places restrictions on the questions users can ask these models, and the models have been trained not to assist users with illegal activities. These guardrails are meant to prevent bad actors from using the models for unintended purposes.

But not every company has followed OpenAI’s closed-source approach. Meta, for instance, has released a series of models named Llama, which aren’t fully open source because their training data hasn’t been shared, but whose weights can still be downloaded and modified. In China, Alibaba released the QwQ series of models, which have surprised many with their capabilities. Such open-weight models allow users to fine-tune them for specific tasks. Geoffrey Hinton appears to believe that this puts too much power into people’s hands, and that these capabilities could end up being used for nefarious purposes.

There is, however, an argument to be made for open-source models as well. If all powerful models were closed-source, they would remain accessible only to a handful of companies, which could then manipulate them and use them as they wished. Concentrating that much power in so few hands could arguably be worse than letting it be more broadly distributed. Also, there have so far been no serious downsides to open-source models: models like Llama have been in use for a while, but haven’t yet been widely used for nefarious purposes. But Geoffrey Hinton is one of the pioneers of the field, and is known as one of the godfathers of AI. If he believes that models are going to become exponentially more powerful, it might be wise to consider the downsides of open-source models that are released for all to use.