AI has already let almost anyone write software through vibe coding, and it may soon enable something similar for science: vibe research.
NVIDIA CEO Jensen Huang has outlined a future where scientists can have natural language conversations with proteins, asking them about their properties and behaviors as easily as we query AI chatbots today. Speaking about the transformative potential of AI in scientific research, Huang painted a picture that sounds fantastical now but follows the same trajectory that has already revolutionized how we interact with images and text.

“We should be able to talk to our protein in the future,” Huang explained. “What are you, how would you behave? Are you soluble? How do you behave in temperature, high temperature? How do you behave in different types of liquids, in different context? How would you react to this particular chemical? How would you bind to it? You could literally talk to a protein in the future.”
The NVIDIA chief acknowledged the seemingly far-fetched nature of the concept before grounding it in present-day reality. “Now, what I just described sounds a little ridiculous right now, but as you know, you could talk to an image today. You just go up to an image and, what are you? I’m a picture of a cat. What kind of cat are you? Can you move? And all of a sudden, the image turns into a video.”
Huang emphasized that this represents a fundamental shift in how we interact with data. “Notice this is the world that we’re in now. Not only were you processing data, we understand the data that we’re processing, and the implications of that in the field of drug discovery or material sciences or any other form of sciences, really quite profound.”
The vision Huang describes is already taking shape across computational biology. Google DeepMind’s AlphaFold has revolutionized protein structure prediction, solving a 50-year-old challenge by accurately predicting the 3D structures of proteins from their amino acid sequences. The technology has mapped over 200 million protein structures, covering nearly every protein known to science. Meta has developed ESMFold with similar capabilities, while AI models are now being applied to predict protein-protein interactions, design novel proteins from scratch, and accelerate antibody discovery for drug development.
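Those predicted structures are already programmatically accessible: the public AlphaFold Protein Structure Database exposes a REST endpoint keyed by UniProt accession. The sketch below (which assumes network access; the accession P68871, human hemoglobin subunit beta, is just an illustrative example) builds the request URL and fetches the prediction metadata for one protein.

```python
import json
import urllib.request

# Public AlphaFold DB REST endpoint, keyed by UniProt accession.
ALPHAFOLD_API = "https://alphafold.ebi.ac.uk/api/prediction/{accession}"

def prediction_url(accession: str) -> str:
    """Build the AlphaFold DB prediction URL for a UniProt accession."""
    return ALPHAFOLD_API.format(accession=accession)

def fetch_prediction(accession: str) -> list:
    """Fetch predicted-structure metadata (JSON) for one protein."""
    with urllib.request.urlopen(prediction_url(accession), timeout=30) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # P68871 = human hemoglobin subunit beta (illustrative accession).
    entries = fetch_prediction("P68871")
    print(entries[0].get("pdbUrl"))
```

This is retrieval, not conversation, but it shows how far the plumbing already reaches: one HTTP call returns a machine-readable structure that a future conversational layer could reason over.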
What Huang envisions takes this several steps further: moving from static prediction to dynamic, conversational interfaces where researchers can query proteins about their behavior under various conditions without running extensive laboratory experiments. This “vibe research” approach could democratize scientific inquiry in the same way that large language models have made coding accessible to non-programmers. In drug discovery, materials science, and synthetic biology, such interfaces could compress years of experimental work into conversational exchanges, fundamentally reshaping the pace and accessibility of scientific discovery.
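Until such interfaces exist, some of the questions Huang lists already have crude, non-AI baselines. His “Are you soluble?” has a classic rough proxy in the Kyte-Doolittle GRAVY score (grand average of hydropathy): a negative score suggests a more hydrophilic, and thus often more soluble, protein. The sketch below is a minimal illustration of that heuristic, not a real solubility predictor; the ten-residue sequence is made up for demonstration.

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids.
KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def gravy(sequence: str) -> float:
    """Grand average of hydropathy: mean Kyte-Doolittle value per residue."""
    values = [KYTE_DOOLITTLE[aa] for aa in sequence.upper()]
    return sum(values) / len(values)

if __name__ == "__main__":
    seq = "MKTAYIAKQR"  # illustrative fragment, not a real protein
    score = gravy(seq)
    # Negative GRAVY leans hydrophilic; positive leans hydrophobic.
    leaning = "hydrophilic" if score < 0 else "hydrophobic"
    print(f"GRAVY = {score:.2f} -> {leaning}")
```

The gulf between a one-line heuristic like this and a protein you can interrogate in natural language is exactly the gap Huang expects AI to close.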