Much has been written about how Google let OpenAI take the lead in AI in spite of having the best AI research talent working for it, but a Nobel Prize winner — and a former Google employee — has an interesting take on the situation.
Nobel Prize-winning researcher Geoffrey Hinton has said that Google lost the lead in AI because it acted responsibly and held back its AI models over safety concerns. He said that Google was forced to enter the AI race after Microsoft invested in OpenAI. Once OpenAI began releasing powerful AI models such as ChatGPT, Google had no choice but to release its own models to compete.
“When I was at Google, they had a big lead (in AI),” Hinton said at a public appearance. “Around 2017 and for a few years after that, after they’d done the Transformer — which they published and probably now regret that they published — they had much more advanced chatbots than anybody else,” he added.
Researchers at Google had published the seminal Transformer paper in 2017. The Transformer architecture was initially designed for translation, but researchers found that it was also very useful for building language models. OpenAI built on Google’s Transformer to release ChatGPT in late 2022.
Hinton, however, says that Google too had built large language models before OpenAI, but had chosen not to release them. “(Google) was extremely responsible in not releasing them because they’d seen what happened at Microsoft. When Microsoft released a chatbot, it started spewing racist hate speech very quickly. Google obviously had a good reputation and didn’t want to ruin it. So it wasn’t for ethical reasons that they didn’t release these things. (It was) because they didn’t want to ruin their reputation. But they behaved very responsibly,” Hinton said.
Hinton was referring to ‘Tay’, a chatbot Microsoft released in 2016 that was trained on social media conversations. Users bombarded it with objectionable content, which caused the model to make objectionable remarks of its own, and Microsoft was forced to take it down. Google had its own misgivings over AI — six months before ChatGPT was released, a Google engineer claimed that a chatbot the company was developing internally “had become a person” with feelings. Google suspended the engineer after he took these concerns public, and never released the model.
But OpenAI went ahead and released ChatGPT in late 2022. “As soon as OpenAI made a deal with Microsoft, (Google couldn’t sit back anymore),” Hinton said. “They had to compete and release stuff and have chatbots out there. So I think Google behaved very well when they could afford to. But once you get in a capitalist system, when you get profit-driven companies, particularly ones run by CEOs who have stock options that depend on what they do in the next quarter, you’re going to get short-term profits dominating everything else. And that’s exactly what we’re seeing at OpenAI. So OpenAI is an experiment that’s been running in real time on AI safety versus profits,” Hinton explained.
Hinton seemed to be saying that Google had already created high-quality AI models, but had chosen not to release them because of the negative implications such a powerful technology might have. OpenAI, after securing Microsoft’s backing in 2019, had no such compunctions, and released ChatGPT in 2022. This forced Google’s hand: like Microsoft, Google was a publicly traded company, and it felt compelled to release its own models to compete. And this interesting interplay between Google, Microsoft and OpenAI ultimately led to the AI revolution that we see today.