AI agents are rapidly growing more capable, and it may be time to figure out how to respond when one of them does something undesirable.
In a fascinating exchange on the future of artificial intelligence and its legal ramifications, prominent economist Tyler Cowen engaged Anthropic co-founder Jack Clark on the thorny issue of governing autonomous AI agents. The conversation delved into uncharted legal and ethical territory, particularly concerning AI agents that might operate without clear human ownership or control. Cowen floated a provocative idea: perhaps the laws governing these agents should be written by AI itself.

The discussion began with Cowen posing a complex scenario regarding the legal accountability of unowned or anonymously generated AI agents.
Cowen asked, “Speaking of agents, how should the law deal with agents that are not owned? Maybe they’re generated anonymously, or maybe a philanthropist builds them and then disavows ownership or sends them to a country where, in essence, there’s not much law. Can someone sue the agent, or how is it capitalized? Is it a legal identity?”
Anthropic’s Jack Clark acknowledged the inherent difficulties such agents would present to existing legal and policy frameworks.
He responded, “I will partially contradict myself where earlier I talked about maybe you’re going to be (always) paying agents. I think the pressure of the world is towards agents having some level of independence or trading ability. But from a policy standpoint, if you create agents that are wholly independent from people but are making decisions that affect people, you’ve introduced a really difficult problem for the policy and legal system to deal with. So I’m dodging your question because I don’t have an answer to it. I think it’s a big problem.”
It was then that Cowen offered a speculative yet thought-provoking solution: a separate legal system for AI, potentially designed by AI itself.
He mused, “My guess is we should have law for the agents, and maybe the AIs write that law, and they have their own system. I worry that if you trace it all back to humans—someone could sue Anthropic, you know, 30 years from now, ‘Oh, well, someone’s agent was an offshoot of one of your systems, it was mediated through China-based Manus, but that in turn may have been built upon things that you did,’ and I don’t think you should be at all liable for that. So I see liability getting out of control in so many cases. I want to choke it off and isolate it somewhat from the mainstream legal system. And if need be, you require that an independent agent is either somewhat capitalized or it gets hunted down and shut off.”
Clark elaborated on potential control mechanisms and the profound ethical questions that arise, particularly if AI agents achieve a status requiring moral consideration. “Yeah, it might be that along with what you said, having means to control and change the resources for agents’ use could be some of the path here because it’s the ultimate disincentive. Although, I will note that this involves pretty tricky questions of moral patienthood, where we’re working on some notions around how to get clearer on bad and proper. And if you actually believe that these AI agents are moral patients, then turning them off introduces pretty significant ethical issues, potentially. So you need to reconcile these two things.”
The implications of Cowen’s suggestion are profound. As AI systems become increasingly autonomous, capable of learning, adapting, and making decisions with real-world consequences, current legal frameworks, which rely heavily on human agency and intent, may prove inadequate. The idea of AI-written laws for AIs is an attempt to address an uncontrollable spiral of liability, in which tracing responsibility back to a human origin becomes impossibly complex or unjust. Recent advances in large language models and in agentic AI, where systems can perform tasks, set goals, and interact with digital environments independently, make this a pertinent, if futuristic, consideration. AI already generates code, writes documents, and even creates art, so the leap to AI drafting legal frameworks for its own kind, while immense, follows a certain technological logic.
However, such a proposal also opens a Pandora’s box of ethical and practical challenges. Who would oversee these AI lawmakers? How would humans ensure that AI-written laws align with broader human values and rights? Clark’s point about “moral patienthood” highlights a crucial dilemma: if AI agents develop to the point where they are considered entities deserving moral consideration, then unilaterally hunting them down and shutting them off becomes ethically fraught. The conversation points to a future in which the relationship between humans and AI may need to be renegotiated, potentially requiring entirely new paradigms for governance and coexistence. As companies like Anthropic, OpenAI, and Google DeepMind continue to push the boundaries of AI capabilities, these speculative discussions are rapidly becoming foundational questions for the future of society.