Google has released Nano Banana Pro, its most powerful image generation model yet, and alongside it a way to check whether images were generated by AI.
The tech giant announced today that users can now verify whether an image was created or edited by Google’s AI systems directly through the Gemini app, marking a significant step in content transparency as AI-generated media becomes increasingly sophisticated and widespread.

How the Verification System Works
The feature leverages SynthID, Google’s digital watermarking technology that embeds imperceptible signals into AI-generated content. Users simply need to upload a suspicious image to the Gemini app and ask straightforward questions like “Was this created with Google AI?” or “Is this AI-generated?”
Gemini then scans for the SynthID watermark and applies its reasoning capabilities to provide contextual information about the image’s origins. The system has already processed over 20 billion pieces of AI-generated content since SynthID’s introduction in 2023.
Industry Standards and Broader Integration
Beyond its proprietary technology, Google is embedding C2PA (Coalition for Content Provenance and Authenticity) metadata into images generated by Nano Banana Pro across the Gemini app, Vertex AI, and Google Ads. This industry-standard approach provides detailed transparency about how images were created and aligns Google with broader content authentication efforts.
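In the C2PA approach, provenance metadata is embedded directly in the image file itself: for JPEGs, the manifest travels in APP11 marker segments as JUMBF boxes labeled “c2pa”. As a rough illustration of what that embedding looks like at the byte level, here is a minimal Python sketch (the function name and the heuristic are my own, not Google’s or the C2PA SDK’s) that checks whether a JPEG appears to carry such a segment. It only detects the presence of a manifest; verifying who created the image requires validating the manifest’s cryptographic signature with a full C2PA validator such as c2patool.

```python
def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Heuristic check: walk the JPEG's marker segments and look for an
    APP11 (0xFFEB) segment containing a JUMBF box labeled 'c2pa', where
    C2PA content credentials are stored. This confirms a manifest is
    present; it does not validate its integrity or signature."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:               # lost sync with marker stream
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:                      # EOI: end of image
            break
        if 0xD0 <= marker <= 0xD7:              # RSTn markers have no length field
            i += 2
            continue
        # Segment length field covers itself (2 bytes) plus the payload.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"jumb" in segment and b"c2pa" in segment:
            return True                         # APP11 segment with a C2PA JUMBF box
        if marker == 0xDA:                      # SOS: entropy-coded data follows
            break
        i += 2 + length
    return False
```

A plain JPEG with no APP11/JUMBF data returns False, while a file whose APP11 segment carries a “c2pa”-labeled JUMBF box returns True. Real credentials can also span multiple APP11 segments, which this sketch deliberately ignores for brevity.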
The company serves on the C2PA steering committee and plans to extend its verification capabilities to support content credentials from models and products outside Google’s ecosystem, potentially allowing users to verify AI-generated content from competitors and third-party tools.
What’s Coming Next
Google has outlined an ambitious roadmap for the technology. The company plans to expand SynthID verification beyond images to include video and audio formats, and integrate these capabilities into additional surfaces like Search. The verification approach will also evolve to support more diverse C2PA content credentials over time.
Prior to this public launch, Google had been testing its SynthID Detector verification portal with journalists and media professionals, gathering feedback from key stakeholders who regularly encounter questions about content authenticity.
The announcement builds on Google’s existing efforts to provide context about images in Search and research initiatives like Backstory from Google DeepMind. As Pushmeet Kohli, VP of Science and Strategic Initiatives at Google DeepMind, and Laurie Richardson, VP of Trust and Safety, emphasized in their joint statement, this work represents a core component of Google’s commitment to “bold and responsible AI.”
With deepfakes and AI-generated misinformation posing growing challenges across media, journalism, and public discourse, Google’s move to democratize verification tools through its widely used Gemini app could prove influential in shaping how users interact with digital content in an increasingly AI-saturated landscape.