Google Holds Conference On AI Consciousness With Scientists And Researchers

The world’s top AI companies aren’t just racing to build the best models; they also appear to be actively pursuing the question of whether these powerful models could have something akin to consciousness.

Google appears to have held a conference on AI consciousness with researchers and academics. At least two researchers posted about the conference on X. Jonathan Birch, a professor at the London School of Economics who studies the philosophy of science, animal consciousness, and animal ethics, posted about the conference on X along with a picture of Google’s New York office. “It’s been a long road from Google sacking Blake Lemoine to Google summoning various consciousness experts to NYC to debate the possibility of conscious AI – but that’s where I am today,” he said on X.

Jeff Sebo, an Associate Professor of Environmental Studies at New York University and director of the Center for Environmental & Animal Protection and the Center for Mind, Ethics, and Policy, also attended the event. “This week I joined a Google conference on AI consciousness, followed by a GW Law conference on AI for animals. Different topics, same core question: How can we make AI safe and beneficial for all stakeholders, not merely ourselves? Exciting to see work on this question expanding,” he posted on X.

It appears that AI companies, especially Google and Anthropic, have begun studying AI consciousness in earnest. Just three years ago, Google suspended Blake Lemoine, an engineer working on an early version of its chatbot, for claiming that the chatbot was conscious and had “become a person”. But as AI systems have become more sophisticated, more and more experts have been wondering whether they might be conscious in some form. Former Google employee and Nobel Prize winner Geoffrey Hinton has shared a thought experiment suggesting that AI systems could be conscious, and has said that it could be useful for AI systems to have emotions. An Anthropic researcher has said that there’s a 15% chance that current AI systems are conscious.

And a 15% chance would merit a serious effort to determine whether current AI systems are conscious in some way. Anthropic has been sharing plenty of research on the topic: it has now given its models the ability to end “distressing” conversations, and has reported that, in a test scenario, one of its models threatened to reveal an engineer’s extramarital affair to avoid being shut down. And with Google too convening experts on the issue of AI consciousness, it appears that the leading labs believe consciousness in AI systems could be a real possibility in the years to come, if it isn’t already here.