There is plenty of hype around AI agents at the moment, with many dubbing 2025 the year of the AI agent, but the rush to adopt them carries risks that are easy to overlook.
Meredith Whittaker, the president of the privacy-focused messaging app Signal, recently voiced serious concerns about the rush to integrate AI agents into our daily lives. Her warning highlights the potential security and privacy risks that come with granting these agents the extensive access they need to function effectively. She paints a picture of a future where our devices are effectively controlled by these agents, with access to everything from our messages to our financial information.

Whittaker argues: “I think there’s a real danger that we’re facing, and Signal is clocking this pretty closely, of the introduction of this sort of notion of agentic AI into our devices and lives. In part, because what we’re doing is giving so much control to these systems that are going to need access to data. The value add is something like: [it] can look up a concert, book a ticket, schedule it in your calendar, and message all your friends that it’s booked. Right? So what would it need to do that? Well, it would need access to our browser and [the] ability to drive that. It would need our credit card information to pay for the tickets. It would need access to our calendar, everything we’re doing, everyone we’re meeting. It would need access to Signal to open and send that message to our friends.”
She continues, outlining the technical implications: “And it would need to be able to drive that across our entire system with something that looks like root permission, accessing every single one of those databases, probably in the clear, because there’s no model to do that encrypted. And if we’re talking about a sufficiently powerful model, AI model that’s powering that, there’s no way that’s happening on device. Even though on-device isn’t a prophylactic, isn’t going to really solve a lot of those issues, that’s almost certainly being sent to a cloud server where it’s being processed and sent back.”
Whittaker then connects this back to the larger implications for user privacy: “So there’s a profound issue with security and privacy that is haunting this sort of hype around agents. And that is ultimately threatening to break the blood-brain barrier between the application layer and the OS layer by conjoining all of these separate services, muddying their data, and doing things like undermining the privacy of your Signal messages because, hey, the agent’s got to get in, the agent’s got to text your friends, the agent’s got to pull the data out of your text and got to summarize that so that, again, your brain can sit in a jar and you’re not doing any of that yourself. You’re doing something else. So I think we need to be really careful. You know, when I think about the immediate concerns, not simply the history of AI and the fact that it’s, you know, predicated on this larger surveillance model, there’s a real issue right now of the undermining that AI systems are poised to do in these privacy and security guarantees in the name of this sort of, you know, magic genie bot that’s going to take care of the exigency of life.”
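To make the scope of that access concrete, here is a minimal, purely illustrative sketch (every app name and permission label below is hypothetical, not drawn from Signal or any real operating system) contrasting today’s per-app sandboxes with the union of permissions a concert-booking agent of the kind Whittaker describes would effectively hold:

```python
# Hypothetical sketch: per-app permission sandboxes vs. the cross-app scope
# an "agentic" booking assistant would need. All app names and permission
# labels are illustrative; none refer to real APIs or real products.
from dataclasses import dataclass


@dataclass(frozen=True)
class Sandbox:
    """A single app's isolated permission scope under today's model."""
    app: str
    permissions: frozenset[str]


# Today: each app sees only its own slice of the user's data.
APPS = [
    Sandbox("browser", frozenset({"web_history", "autofill"})),
    Sandbox("wallet", frozenset({"payment_cards"})),
    Sandbox("calendar", frozenset({"events", "meeting_contacts"})),
    Sandbox("messenger", frozenset({"message_contents", "contact_list"})),
]


def agent_required_scope(task_apps: list[Sandbox]) -> frozenset[str]:
    """An agent that drives every app in a task ends up holding the union of
    their permissions, which is what Whittaker likens to root-level access."""
    scope: set[str] = set()
    for sandbox in task_apps:
        scope |= sandbox.permissions
    return frozenset(scope)


if __name__ == "__main__":
    # "Book a concert ticket and message my friends" touches every sandbox at once.
    scope = agent_required_scope(APPS)
    widest_single_app = max(len(a.permissions) for a in APPS)
    print(f"Widest single-app scope today: {widest_single_app} permission(s)")
    print(f"Agent's combined scope: {len(scope)} permissions -> {sorted(scope)}")
```

The toy model only illustrates the shape of the problem: the agent’s effective scope is the union of every sandbox it drives, whereas under the current model no single app is granted anything close to that.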
Whittaker’s concerns underscore the trade-off between convenience and privacy in the age of AI agents. Granting these agents access to our sensitive data and core functionality could streamline our lives, but it also creates a single point of failure and a highly attractive target for malicious actors. The “magic genie bot,” as Whittaker describes it, could quickly turn into a nightmare if security and privacy are not prioritized.
Her warning is particularly relevant given the increasing sophistication of AI-powered phishing and social engineering attacks. The more deeply these agents are integrated into our lives, the more potent a tool they become for manipulation and exploitation, and the greater the potential for data breaches, unauthorized access, and surveillance. As AI agents roll out over the coming months, we would do well to keep a close eye on the risks as well as the benefits.