AI Is A Mysterious New Creature Of Our Own Making: Anthropic Co-founder Jack Clark

More and more tech leaders are referring to AI in increasingly sentient terms.

Jack Clark, co-founder of Anthropic and one of the leading voices in AI safety research, recently offered a striking metaphor that captures the unease many insiders feel about artificial intelligence. Speaking with the gravity of someone who has spent years developing cutting-edge AI systems, Clark drew a parallel between childhood fears and humanity’s current relationship with the technology reshaping our world.

“Remember, as many of you have done, being a child, after the lights turned out, I would look around my bedroom and I would see shapes in the darkness, and I would become afraid,” Clark began. “I would be afraid that these shapes were creatures I did not understand that wanted to do me harm. And so I turned the light on, and when I turned the light on, I would be relieved because the creatures turned out to be a pile of clothes on a chair, or a bookshelf or a lampshade.”

But Clark’s story takes a darker turn. “Now this year, we have aged from that story, and the room is our planet. But when we turn the light on, we find ourselves gazing upon creatures in the form of powerful and somewhat unpredictable AI systems. And there are many people who desperately want us to believe that these creatures are nothing but a pile of clothes on a chair, and they want us to turn the light off and go back to sleep.”

The Anthropic co-founder warned of growing efforts to downplay AI’s significance: “In fact, some people are even starting to spend tremendous amounts of money to convince you of this. That’s not an artificial intelligence about to go into a hard takeoff. It’s just a tool that will be put to work in our economy. It’s just a machine and machines are things that we master. But make no mistake, what we’re dealing with here is a real mysterious creature and like all the best fairytales, the creature is one of our own making.”

Clark’s conclusion was stark: “I am worried. I think just to raise the stakes in this game, you are guaranteed to lose if you believe the creature isn’t real. Your only chance of winning is to see it for what it is. And the central challenge for all of us is characterizing these strange creatures now around us and ensuring the world sees them as they are, not as people wish them to be.”

Clark’s remarks come at a pivotal moment in AI development. Systems like Gemini 3, GPT-5, Claude, and others have demonstrated capabilities that even their creators struggle to fully explain—a phenomenon researchers call “emergent abilities.” Recent debates over AI safety have intensified, with some executives advocating for slower development while others push for rapid deployment. OpenAI CEO Sam Altman has spoken about AGI (artificial general intelligence) as an approaching reality, while figures like Yann LeCun have dismissed existential AI risks as overblown. Meanwhile, governments worldwide are scrambling to regulate systems they barely understand. Clark’s metaphor suggests a middle path: neither dismissing AI as mundane tooling nor succumbing to panic, but rather cultivating the clear-eyed understanding necessary to navigate what may be humanity’s most consequential technological threshold. The question is whether we’ll heed his warning before it’s too late to shape these “creatures” responsibly.
