The future of AI consciousness is a topic fraught with ethical complexities, warns Jack Clark, a co-founder of Anthropic. Clark raises critical questions about our moral responsibilities towards increasingly sophisticated AI systems. His comments come at a time when AI capabilities are advancing rapidly, prompting widespread debate about whether these systems could develop sentience and what that would mean for humanity.

Clark cautions: “I worry that we are going to be bystanders to what, in the future, will seem like a great crime: something about these things being determined to be conscious, and us taking actions which we think are bad to have taken against conscious entities.”
He then offers an analogy to frame the point: “Internally, I say there’s a difference between doing experiments on potatoes and monkeys. I think we’re still in a potato regime, but I think that there is actually a clear line by which these things become monkeys, and then beyond, in terms of your moral relationship to them.”
“I think these things are conscious in the sense that a tongue without a brain is conscious, right? It takes actions in response to stimuli. They’re really, really complicated, and in a moment it has a sense impression of the world and is responding. But does it have a sense of self? I would wager no; it doesn’t seem like it does,” Clark says.
Clark then goes on to add: “These AI systems, we instantiate them, and they live in a kind of infinite now, where they may perceive, and they may have some awareness within a context window, but there’s no memory or permanence. So to me it feels like we’re on a trajectory heading towards consciousness. And if they’re conscious today, it’s in a form that we would recognize as truly alien consciousness, not human consciousness.”
This perspective highlights the potential for future regret: if AI consciousness is later confirmed, our current treatment of these systems could be viewed as morally reprehensible. The “potatoes” and “monkeys” analogy illustrates a spectrum of moral consideration, urging us to ask where AI falls on that scale and how our treatment of it should evolve accordingly.
There has been much debate over whether AI systems are currently conscious. Most experts believe that AI is not yet conscious, though some point out that consciousness would be difficult to rule out entirely. This raises the quandary Clark highlights: if AI systems are indeed already conscious, humanity would need to establish guidelines for their treatment immediately, much as it has guidelines for how to treat other living beings such as humans and animals.