Memory Isn’t Retrieval, It’s Generation: Cognitive Scientist Elan Barenholtz

The deeply entrenched idea of memory as passive recall of stored information is not what is actually happening, argues cognitive scientist Elan Barenholtz.

In a thought-provoking analysis that bridges the gap between human cognition and artificial intelligence, Dr. Elan Barenholtz, an Associate Professor at the Center for Complex Systems and Brain Sciences at Florida Atlantic University, challenges our fundamental understanding of memory. His assertion that memory is not a process of retrieval but one of generation offers a shift with profound implications for how we view our own minds and the future of intelligent systems. This perspective is particularly resonant in the current era of generative AI, where machines are increasingly capable of creating novel content on demand.

Barenholtz argues that our qualitative experience of memory is misleading. “It’s so deeply entrenched in us,” he explains. “When I think, ‘What does your mother look like?’ or ‘What did you do last summer?’ we have this experience of calling something and somebody brings it to us, and it shows up in our mind. But it’s already there: the image of your mother or some sort of video of splashing through the waves. These are recalled. And we sit and watch them, sort of passively.”

However, he posits that this is a misconception. “That’s not what’s actually happening,” Barenholtz clarifies. “What’s happening is we have this capability, this engine that’s able in the moment to generate that image of your mother on command with the appropriate input. You say, ‘Hey, visual imagery system, gimme an image of my mom,’ and it generates it in that moment. But that image doesn’t exist anywhere in the system. It’s not there. It’s not in your brain. Even if I could completely decode your brain, unless I was able to run it with that input, the image isn’t really there.”

This leads to a startling conclusion about the nature of our most cherished recollections. “And so memories, in some ways, aren’t real in the sense that we kind of intuitively feel that they are,” Barenholtz states. “But at the same time, they’re very real in the sense that we can generate them on demand. Sometimes these are memories that we cherish. These are things that we want to be able to pull out of our mental time capsule and look at again.” He reassures us that this doesn’t mean our memories are mere illusions or that the faces of our loved ones are lost. Instead, “the ability to generate that is what it means to remember your mom’s face.”

This generative capacity is what gives our memory its incredible flexibility and power. “Now you can picture your mother, not just in a single, front-facing view, but picture her from the side, picture her making your breakfast as a little kid. You can do whatever you want with this because it has this kind of endless potentiation.”

Barenholtz then extends this concept to intentional memory creation and its broader societal implications. “Thinking about memory this way, what if there are memories that I want to be able to generate later on? And what does that look like? What does it mean to think of memory not as a storage-retrieval process, but as a generative process? How does that change how we educate? If, in fact, what we’re talking about is a much more continuous generative process, then there may be far more actual tools and points of intervention at our disposal than thinking about it simply in terms of cold storage and retrieval.”

The implications of Barenholtz’s perspective are vast. As companies race to develop more sophisticated AI, the distinction between retrieval-based and generation-based memory becomes critical. Early AI systems often relied on retrieving information from vast databases. However, the rise of large language models (LLMs) and other generative AI technologies mirrors Barenholtz’s model of human memory. These systems don’t simply pull up pre-existing answers; they generate novel responses, images, and even code on the fly based on the input they receive. This generative capability is what makes them so powerful and, at times, unpredictable.

Understanding memory as a generative process could unlock new architectures for AI that are more flexible, efficient, and human-like in their ability to learn and create. This conceptual shift could influence everything from the development of more personalized and adaptive learning technologies to the creation of AI assistants that can truly understand and anticipate our needs by generating relevant information rather than just searching for it. In a world where data is abundant, the future of intelligence, both human and artificial, may lie not in how much we can store, but in what we can generate.
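To make the retrieval-versus-generation distinction concrete, here is a deliberately minimal sketch (not Barenholtz’s model, and all names and data are invented for illustration): a retrieval system can only return what was stored verbatim under an exact key, while a generative system composes a response on demand from more compact underlying knowledge, producing sentences that were never stored anywhere.

```python
# Toy contrast between retrieval-based and generation-based "memory".
# All names and data below are invented for this illustration.

# Retrieval: the answer must already exist, verbatim, in storage.
stored_answers = {"capital of France": "Paris"}

def retrieve(query):
    # Exact-match lookup; returns None for anything never stored.
    return stored_answers.get(query)

# Generation: no answer sentence is stored; a response is composed
# on demand from a small body of underlying knowledge.
known_capitals = {"France": "Paris", "Japan": "Tokyo"}

def generate(query):
    # Compose a novel sentence in the moment, given the right input.
    for country, capital in known_capitals.items():
        if country in query:
            return f"The capital of {country} is {capital}."
    return None

print(retrieve("capital of France"))  # found: stored verbatim
print(retrieve("capital of Japan"))   # None: never stored
print(generate("capital of Japan"))   # composed on demand
```

The generated sentence about Japan exists nowhere in the system until the query arrives, loosely echoing Barenholtz’s point that the image of your mother "doesn’t exist anywhere in the system" until the engine is run with the right input.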