There has been much speculation about how LLMs compare to the human mind, and a Google DeepMind researcher has offered an interesting analogy for these AI tools.
Murray Shanahan, a principal scientist at DeepMind, has offered a thought-provoking perspective on large language models (LLMs). In a discussion, he referred to them as “exotic mind-like entities,” a term he coined in one of his papers. This description captures both the intriguing similarities and the fundamental differences between LLMs and human minds. His careful use of the term “mind-like,” as opposed to simply “minds,” highlights the ongoing debate about the nature of consciousness and intelligence in artificial systems.

“In one of my papers,” Shanahan explains, “I used the phrase ‘exotic mind-like entities’ to describe large language models.” He continues, “So I think that they are exotic mind-like entities… they are kind of mind-like, increasingly mind-like.” He clarifies his word choice: “There’s a very important reason for using the hyphen (between ‘mind’ and ‘like’), which is because I want to hedge my bets as to whether they really qualify as minds.” This cautious approach allows him flexibility: “So I can wriggle out of that problem by just using ‘mind-like’.”
Shanahan further elaborates on their exotic nature: “They’re exotic because they’re not like us (apart from language use). In other respects they’re disembodied, for a start.” He also touches on the complex question of self-awareness in these systems: “There’s really weird conceptions of selfhood that are applicable to them, maybe.” He concludes: “So they are quite exotic entities as well. We just don’t have the right kind of conceptual framework or vocabulary for talking about these exotic mind-like entities yet, you know. We’re working on it, and the more they are around us, the more we’ll develop new kinds of ways of talking and thinking about them.”
Shanahan’s hesitation to label LLMs definitively as “minds” reflects a broader scientific and philosophical debate. While LLMs demonstrate impressive linguistic abilities, their understanding of the world and their capacity for subjective experience remain open questions. Their disembodied nature and their reliance on statistical patterns, rather than lived experience, further distinguish them from human minds.
The rapid advancement of LLMs necessitates the development of new conceptual tools and vocabulary. As these entities become more integrated into our lives, understanding their nature and capabilities becomes increasingly critical. Shanahan’s “exotic mind-like entities” provides a starting point for this crucial conversation, acknowledging both the similarities and the profound differences between artificial and human intelligence. The ongoing development of these AI systems will undoubtedly challenge our existing understanding of intelligence — and force us to reconsider what it means to have a mind.