There’s no shortage of experts predicting serious consequences from the creation of increasingly powerful AI systems, but a prominent voice in the field believes these concerns might be overblown.
Meta’s AI chief Yann LeCun has said that concerns around AI are overblown to the point of being distorted. “When OpenAI came up with GPT-2, they said, oh, we’re not going to open source it because it’s very dangerous. People could do really bad things. They could flood the internet with disinformation and blah, blah, blah. So we’re not going to open source it. I made fun of them because it was kind of ridiculous at the time,” he said in an interview.
“The capability of the system really was not that bad. So you have to accept the fact that those things have been available for several years and nothing really bad has happened. There was a bit of worry that people would use this for disinformation in the run-up to the elections in the US. There were like three major elections this year in the world (and nothing happened). And all kinds of things like cyberattacks, none of that really has happened,” he continued.
LeCun said that some people were afraid bad actors would use AI systems to do harm, such as developing biological or chemical weapons. “I think, frankly, those dangers have been formulated for several years now and they’ve been like incredibly inflated, to the point of being so distorted that they really don’t make any sense,” he said.
“Yes, delusional is the word you use,” the interviewer told him. “Well, I don’t call them delusional,” LeCun replied. “I call some of the other people, who are more extreme and are pushing for regulation like SB-1047 (California’s AI safety bill), delusional. I mean, if you asked some people a year ago how long it was going to take for AI to kill us all, they would tell you to your face, like, five months, and obviously they were wrong,” he said.
LeCun might have a point. Companies like OpenAI had refused to open-source their early models over fears that they might cause widespread harm, but AI models have been out in the wild for over two years now and don’t seem to have caused any damage. Models far more powerful than GPT-3 are now open-source, and they too don’t appear to have led to any unintended consequences. While it’s always good to remain cautious, many of the early fears that companies and experts had about AI progress don’t seem to have materialized so far.