Google & Anthropic Are Responsible About AI Safety, Meta & OpenAI Less So: Geoffrey Hinton

Geoffrey Hinton has been warning anyone who’d listen about the dangers of developing superintelligent AI, but he believes that some companies are paying more attention than others.

The “Godfather of AI” and Nobel Prize winner didn’t mince words in a recent interview when asked about the AI industry’s approach to safety. Using a striking analogy, Hinton described an existential threat that he believes demands far more urgency than it is getting, and he singled out specific companies for both praise and criticism over how they are handling the challenge.

“Suppose that some telescope had seen an alien invasion fleet that was going to get here in about 10 years, we would be scared and we would be doing stuff about it,” Hinton said. “Well, that’s what we have. We’re constructing these aliens, but they’re going to get here in about 10 years and they’re going to be smarter than us. We should be thinking very, very hard: How are we going to coexist with these things?”

When it comes to which companies are taking that question seriously, Hinton drew clear distinctions. “I think both Dario Amodei and Demis Hassabis and also Jeff Dean, they all take safety fairly seriously,” he said, referring to the CEOs of Anthropic and Google DeepMind, and Google’s Chief Scientist, respectively. “Obviously, they’re involved in a big commercial competition too, so it’s difficult, but they will understand the existential threat that when AI gets superintelligent, it might just replace us. So they worry about it a bit.”

His assessment of other major players was far less charitable. “I think that some companies are less responsible than others. So for example, I think Meta isn’t particularly responsible,” Hinton said bluntly. “OpenAI was founded to be responsible about this, but it gets less responsible every day and their best safety researchers are all leaving or have left.”

His final verdict was unequivocal: “I think Anthropic and Google are somewhat concerned with safety and the other companies less so.”

Hinton’s comments arrive at a pivotal moment for AI development. His criticism of OpenAI aligns with a well-documented exodus of safety-focused researchers from the company, including co-founder Ilya Sutskever and safety team members Jan Leike and Daniel Kokotajlo, the latter two of whom have publicly voiced concerns about the company’s direction. Meanwhile, Meta has faced scrutiny for its open-weight approach to releasing powerful AI models, which critics argue could accelerate risks by making advanced capabilities widely available without adequate safeguards. Hinton’s praise for Google comes with context of its own: he spent a decade at the company before departing in 2023, so he likely knows it well.

The competitive dynamics Hinton acknowledges, in which companies race to build more powerful systems while weighing safety concerns, underscore a fundamental tension in the industry. As AI capabilities rapidly advance, his alien invasion analogy serves as a stark reminder that the timeline for solving alignment and safety challenges may be shorter than the timeline for achieving superintelligence itself.