Giving AI Final Goals Works Better Than Creating Detailed And Specific Workflows: Claude Code Creator Boris Cherny

Computer programmers tend to treat AI models as simple ‘functions’ within a larger program, decomposing tasks into small, tightly controlled components. But AI models might work better when used in a counterintuitive way.

Boris Cherny, the creator of Claude Code at Anthropic, has a piece of advice that cuts against how most engineers instinctively approach AI: stop trying to control every step. In a recent interview, Cherny laid out a principle he believes produces consistently better results — give the model a goal and get out of the way.

“Don’t try to box the model in,” Cherny says. “I think a lot of people’s instinct when they build on the model is they try to make it behave in a very particular way. They think: this is a component of a bigger system. I think some examples of this are people layering very strict workflows on the model.”

The temptation, he explains, is understandable. Developers are trained to decompose problems, so they build elaborate orchestration systems that tell the model exactly what to do and when. But Cherny argues this instinct backfires:

“For example, to say: you must do step one, then step two, then step three, and you have this very fancy orchestrator doing this. But actually, almost always you get better results if you just give the model tools. You give it a goal and you let it figure it out. I think a year ago you actually needed a lot of the scaffolding, but nowadays you don’t really need it.”
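The contrast Cherny draws can be sketched in code. The snippet below is illustrative only: `fake_model` is a stub standing in for a real LLM API call, and the tool set is hypothetical, not Claude Code's actual implementation.

```python
# Hypothetical sketch: a fixed step-one-then-step-two orchestrator versus a
# goal-directed loop where the model chooses its own next tool call.

def rigid_workflow(task):
    # The orchestrator decides the steps up front: the model is boxed in.
    plan = ["parse", "search", "edit"]  # fixed sequence, decided in advance
    return [f"{step}:{task}" for step in plan]

def agentic_loop(goal, tools, call_model, max_turns=10):
    # Goal-directed loop: the model picks which tool to run next, and the
    # loop ends only when the model declares itself done.
    transcript = [("goal", goal)]
    for _ in range(max_turns):
        action, arg = call_model(transcript, list(tools))
        if action == "done":
            return arg
        result = tools[action](arg)          # run the tool the model chose
        transcript.append((action, result))  # feed the result back
    raise RuntimeError("agent did not finish within max_turns")

# Toy stand-ins so the sketch runs end to end.
tools = {
    "grep": lambda pattern: f"matches for {pattern}",
    "read": lambda path: f"contents of {path}",
}

def fake_model(transcript, tool_names):
    # Pretend model: greps first, then reads a file, then finishes.
    turns = len(transcript)
    if turns == 1:
        return "grep", "TODO"
    if turns == 2:
        return "read", "main.py"
    return "done", "patch ready"

print(agentic_loop("fix the TODO in main.py", tools, fake_model))
# prints: patch ready
```

The key difference is where the control flow lives: in `rigid_workflow` the sequence is hard-coded by the developer, while in `agentic_loop` the model itself decides what to do next, which is the behaviour Cherny says modern models handle well without scaffolding.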

His recommendation is a shift in mindset — from curating a model’s every move to equipping it and stepping back:

“I don’t know what to call this principle, but it’s something like: ask not what the model can do for you. Just think about how do you give the model the tools to do things? Don’t try to over-curate it. Don’t try to put it into a box. Don’t try to give it a bunch of context upfront. Give it a tool so that it can get the context it needs. You’re just going to get better results.”

The broader implication is significant. For years, the dominant paradigm in AI product development has been prompt engineering and rigid workflow design — carefully scripted sequences designed to prevent the model from going off-track. Cherny is saying that era is largely over. The models have caught up to the point where the scaffolding is now a constraint, not a safeguard.

This aligns with a broader pattern in how Anthropic itself builds with Claude. Cherny has previously described how early versions of Claude Code used RAG — a structured retrieval approach — before they abandoned it in favour of agentic search, where the model dynamically fetches what it needs using standard tools like grep and glob. The result outperformed the more controlled system by a significant margin.
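The agentic-search idea can be made concrete with a small sketch. These are pure-Python stand-ins for grep- and glob-style tools, not Claude Code's actual implementations: instead of pre-computing a retrieval index, the model is handed search tools and fetches context lazily, on demand.

```python
# Illustrative stand-ins for the grep/glob tools an agent could call to
# find its own context, rather than receiving pre-retrieved chunks upfront.
import fnmatch
import os
import re

def glob_tool(root, pattern):
    # Return file paths under `root` whose names match a shell-style pattern.
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if fnmatch.fnmatch(name, pattern):
                hits.append(os.path.join(dirpath, name))
    return hits

def grep_tool(paths, pattern):
    # Return (path, line_number, line) tuples for lines matching a regex.
    rx = re.compile(pattern)
    matches = []
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            for i, line in enumerate(f, 1):
                if rx.search(line):
                    matches.append((path, i, line.rstrip()))
    return matches
```

In use, the model would first call `glob_tool` to narrow down candidate files, then `grep_tool` to pull only the lines it needs, so the context it sees is whatever it chose to look at rather than whatever a retrieval pipeline guessed in advance.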

The principle is also reflected in how Claude Code itself is used in practice. A Google Principal Engineer recently noted that Claude Code built in an hour what her team had spent a year building — she gave it a description of the problem, not a step-by-step process. And Cherny himself no longer opens an IDE, instead shipping entire pull requests by pointing Claude at a goal.

For businesses building on top of AI, the lesson is practical and immediate: the engineering effort spent building elaborate control structures around models may be producing worse outcomes than simply trusting the model with better tools. The instinct to manage AI like a deterministic system is, increasingly, a liability.
