Worried That We Could Sleepwalk Into A Crisis By Giving Autonomous AI Agents Too Much Access: OpenAI CEO Sam Altman

AI agents of all kinds are taking off, with users giving AI systems access to their computers, but all this adoption could carry risks down the line.

At a recent OpenAI town hall, CEO Sam Altman shared a surprisingly candid reflection on his own experience with AI agents, and expressed concern about a pattern of behavior he’s observed both in himself and across the industry. Speaking about Codex, OpenAI’s AI coding system, Altman described how quickly his initial caution evaporated in the face of the tool’s convenience and apparent reliability.

“One of the things that surprised me personally, and I think has surprised many of us here, is when I first started using Codex, I said, look, I don’t know how this is going to go, but for sure I’m not going to give this thing complete unsupervised access to my computer. I was so confident in that,” Altman said.

“And I lasted about two hours. And then I was like, you know what, it seems very reasonable. The agent seems to really do reasonable things. I hate having to approve these commands every time. I’m just going to turn it on for a little bit and see what happens. And I never turned full access off. And I think other people have had a similar thing,” he added.

This personal anecdote led Altman to articulate a broader concern about the industry’s trajectory. “The general worry I have is that the power and convenience of these are so high and the failure rates—maybe catastrophic failures will happen—but the rates are so low that we are going to kind of slide into this ‘YOLO,’ and hopefully it’ll be okay.”

Altman went on to describe how this dynamic could create compounding risks as AI systems become more sophisticated. “And then as the models get more capable, and harder to understand everything they’re doing, if there’s a misalignment in the model, if there’s some sort of complex problem that emerges over weeks or months of usage and you put some security vulnerability into something you’re making—you can have various opinions on this curve of how crazy sci-fi you want to get with the AI being misaligned or whatever.”

“But I think what’s going to happen is the pressure to adopt these tools, to use them, not just the pressure, the delight and the power of them is going to be so great that people aren’t—that people get pulled along into sort of not thinking enough about the complexity of how they’re running these things,” he continued.

His conclusion was stark: “The general worry I have is that capabilities are going to rise very steeply. We’re going to get used to how the models work at a certain level and decide we trust them, and without building very good—I’ll call it big picture security infrastructure around it—we will sleepwalk into something.”

Altman’s concerns arrive at a pivotal moment for AI agents. Anthropic recently launched its Claude Cowork capability, which allows Claude to control computers directly. Agentic browsers like OpenAI’s Atlas and Perplexity’s Comet give AI access to the user’s browser. More recently, Clawdbot, which accesses a user’s entire machine and takes decisions on their behalf, has gone viral. The pattern Altman identifies, initial caution dissolving within hours, may be playing out across millions of users as these tools proliferate. His warning suggests that the AI industry’s greatest near-term risk may not be a dramatic, science-fiction-style catastrophe, but rather a gradual erosion of security practices driven by tools that work just well enough, just often enough, to make vigilance feel unnecessary until it’s too late.