Sold My Startup To Google For Financial Security For Son With Learning Disabilities: Geoffrey Hinton

In the lore of Silicon Valley, founders are often depicted as relentless visionaries, driven by an unwavering belief in their technology’s potential to change the world. Geoffrey Hinton, one of the godfathers of AI, certainly fits the visionary mold. However, a candid revelation from the man himself paints a far more personal and relatable picture of his journey from a university professor to a highly sought-after tech guru at Google. His motivation, it turns out, was not just academic curiosity but a father’s profound desire to secure his son’s future.

“I have a son who has learning difficulties, and to be sure he would never be out on the street, I needed to get several million dollars,” Hinton explained on a podcast. “I wasn’t going to get that as an academic. I tried; I taught a Coursera course in the hope that I’d make lots of money that way, but there was no money in that.”

This pressing personal need led him to a pivotal decision. “So, I figured out the only way to get millions of dollars was to sell myself to a big company,” he stated. The opportunity arose from a breakthrough moment in AI history. “When I was 65, fortunately for me, I had two brilliant students who produced something called AlexNet, which was a neural net that was very good at recognizing objects in images. So, Ilya [Sutskever] and Alex [Krizhevsky] and I set up a little company and auctioned it. We actually set up an auction where we had a number of big companies bidding for us.”

The auction culminated in a landmark acquisition by Google. “Google ultimately ended up acquiring our technology,” Hinton recalled. “They were very nice to me. They said pretty much I could do what I liked.” During his tenure, he made significant contributions, including his work on “distillation,” a technique for transferring knowledge from a large AI model to a smaller one, which he notes “worked really well, and that’s now used all the time.” It was this deep technical work, he says, that led to a profound realization about the future of intelligence and his now-public concerns about AI safety.

Hinton’s reasons for eventually leaving Google were as straightforward as his reasons for joining. “The main reason I left Google was because I was 75 and I wanted to retire,” he said. “[Though] I’ve done a very bad job of that,” he admitted, referring to his many appearances on podcasts and at events over the last few years. He is quick to clarify that his departure was not due to any disagreement with the tech giant. In fact, he praises Google’s caution. “I think Google actually behaved very responsibly. When they had these big chatbots, they didn’t release them, possibly because they were worried about their reputation,” he observed. “OpenAI didn’t have a reputation, so they could afford to take the gamble.”

Hinton’s story offers a compelling, multi-layered perspective on the trajectory of modern AI. His initial, deeply personal motivation underscores the stark financial chasm between academic research and corporate tech, a gap that has driven many top minds into the private sector. The 2013 acquisition of his company, DNNresearch, for a reported $44 million was a watershed moment, signaling the immense value corporations placed on deep learning talent and technology. This trend has only accelerated, with tech giants engaging in an ongoing, high-stakes battle for AI experts.

Furthermore, Hinton’s reflection on the differing strategies of Google and OpenAI encapsulates the core tension in the AI race: the established incumbent’s “responsibility” versus the disruptive upstart’s willingness to “take the gamble.” This dynamic came into sharp focus with OpenAI’s release of ChatGPT in late 2022, which prompted a “code red” at Google and a subsequent scramble to release its own chatbot, Bard, which has since become Gemini. And Hinton’s journey, from seeking financial security to becoming one of the most vocal proponents of AI safety, serves as a powerful narrative about the unforeseen consequences of technological advancement and the evolving responsibilities of its creators.
