Google Rolls Out New Visual AI Search Mode Powered by Gemini 2.5
AI Mode in Google Search

Google has announced a major update to AI Mode in Search, bringing a new level of visual exploration and conversational shopping to users. With this rollout, searching is no longer limited to typing keywords or applying filters — users can now combine natural language, images, and context to refine their search results in a more intuitive, multimodal way.

Making Visual Exploration More Natural

The update introduces the ability to ask questions conversationally and receive a rich set of visual-first results. For example, if a user searches for “maximalist bedroom design ideas”, Google’s AI Mode generates visuals that match the request. From there, refinements like “show me darker tones with bold prints” are handled seamlessly.

The multimodal feature also enables searches starting with image uploads or photos, giving users new ways to explore when words alone fall short.

Conversational Shopping with AI

The AI Mode update is tightly integrated with Google’s Shopping Graph, which holds over 50 billion product listings, refreshed with more than 2 billion updates per hour. Instead of filtering through long product menus, users can describe what they want in plain language.

The system intelligently returns shoppable results, complete with product details like reviews, colorways, deals, and availability — and users can click directly through to retailer sites.
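To make the flow concrete, here is a minimal sketch of how a plain-language request might be mapped onto structured product filters. Google has not published how AI Mode queries the Shopping Graph, so the parsing rules, product records, and function names below are invented purely for illustration.

```python
import re

# Toy stand-in for a product catalog; a real Shopping Graph holds
# billions of listings with far richer attributes.
PRODUCTS = [
    {"name": "Barrel jeans A", "color": "ecru", "price": 55, "rating": 4.6},
    {"name": "Barrel jeans B", "color": "black", "price": 80, "rating": 4.2},
    {"name": "Straight jeans", "color": "blue", "price": 40, "rating": 4.8},
]

def parse_request(text):
    """Pull a price cap and a color out of a natural-language request."""
    price = re.search(r"under \$?(\d+)", text)
    color = next((c for c in ("ecru", "black", "blue") if c in text), None)
    return {"max_price": int(price.group(1)) if price else None, "color": color}

def shoppable_results(text):
    """Filter the catalog by the parsed constraints, best-rated first."""
    f = parse_request(text)
    hits = [p for p in PRODUCTS
            if (f["max_price"] is None or p["price"] <= f["max_price"])
            and (f["color"] is None or p["color"] == f["color"])]
    return sorted(hits, key=lambda p: -p["rating"])

print(shoppable_results("ecru barrel jeans under $70"))
# → [{'name': 'Barrel jeans A', 'color': 'ecru', 'price': 55, 'rating': 4.6}]
```

The point of the sketch is the shape of the pipeline: free-form text in, structured constraints extracted, ranked shoppable results out.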

Powered by Gemini 2.5 and Visual Search Fan-Out

At the core of this experience are Gemini 2.5's multimodal capabilities and Google's new "visual search fan-out" technique. Unlike traditional image search, which primarily identifies the main subject, this method performs a comprehensive analysis of an image, including subtle and secondary details.

The system then runs multiple background queries to better understand the visual context in relation to the user's natural-language prompts. Developers may recognize this as an advanced query-expansion strategy applied specifically to images, improving both the precision and the relevance of results.
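The fan-out idea can be sketched as ordinary query expansion. The internals of Google's implementation are not public, so the function names, the image "analysis" (faked here as a list of labels), and the query templates below are illustrative assumptions only.

```python
def describe_image(image_labels):
    """Stand-in for a multimodal model's image analysis: the first label
    plays the main subject, the rest play secondary details."""
    return {"subject": image_labels[0], "details": image_labels[1:]}

def fan_out_queries(description, user_prompt):
    """Expand one visual query into several background sub-queries by
    combining the main subject, each secondary detail, and the user's
    natural-language refinement."""
    subject = description["subject"]
    queries = [f"{subject} {user_prompt}"]
    for detail in description["details"]:
        queries.append(f"{subject} with {detail} {user_prompt}")
    return queries

# Example: a bedroom photo whose analysis surfaced two secondary details.
labels = ["maximalist bedroom", "velvet headboard", "patterned wallpaper"]
for q in fan_out_queries(describe_image(labels), "darker tones"):
    print(q)
# → maximalist bedroom darker tones
# → maximalist bedroom with velvet headboard darker tones
# → maximalist bedroom with patterned wallpaper darker tones
```

Each sub-query would then be executed in the background and the results merged, which is what lets a single photo plus a short refinement surface matches for details the user never typed.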

On mobile, users can even dive deeper by searching within a single image, asking follow-up questions in context — effectively making search feel like an interactive dialogue.

Availability

The updated AI Mode in Search is rolling out this week in English in the U.S. While currently consumer-focused, the underlying technologies — multimodal query handling, visual context analysis, and search fan-out — may open new doors for developer-facing APIs and applications in the future.