Google’s AI image generator, Nano Banana, is moving deeper into the company’s ecosystem[1], showing up this week in both NotebookLM and Google Search, with Photos integration expected soon. What began as a Gemini feature for creative image generation is now taking shape as a wider visual tool across Google’s apps.

New Visual Options in NotebookLM

NotebookLM, Google’s AI assistant for note-taking and research, is getting a notable visual boost. The platform’s Video Overviews, which turn text summaries into short explainers, can now include automatically generated illustrations. These visuals are powered by Nano Banana and appear in one of several new artistic styles, including Watercolor, Papercraft, Anime, Whiteboard, Retro Print, Heritage, and Classic.

Alongside these new looks, users can choose how much detail they want in a video. A new “Brief” format delivers concise summaries, while the existing “Explainer” format provides in-depth coverage. Each video can be fine-tuned by editing focus points or presentation style within the customization panel.

These additions are first arriving for paid AI Pro users, with free access expected soon. The update positions NotebookLM not just as a note organizer but as a lightweight content studio for educational or creative work.

Image Creation Comes to Search

At the same time, Nano Banana is being added to Google Search through Lens and AI Mode. Within the Google app, users can open Lens, select the new Create mode marked by a yellow banana icon, and instantly edit or transform photos. They can capture something with the camera, describe how they want to modify it, or start from scratch by typing a prompt in AI Mode.

The rollout begins in English for users in the United States and India, with additional regions and languages planned. Google says the goal is to make visual creativity accessible directly inside Search without requiring users to open Gemini or any external editor.

Part of a Broader AI Shift

Although Nano Banana started as a playful experiment, it has quickly become one of Google’s most visible AI tools. In its first few months, it handled hundreds of millions of image edits and generated wide interest across social platforms. Its expansion into Lens and NotebookLM reflects Google’s growing effort to link generative AI features across its core products, from summarization in NotebookLM to visual creation in Search.

Within Lens, users can modify or recreate photos in real time, while AI Mode allows the same capabilities using text input. Together, they blur the line between searching, editing, and imagining. The company has also hinted that similar tools may appear in Circle to Search and Photos later this year.

What’s Next

By embedding Nano Banana in multiple apps, Google appears to be aligning its creative and productivity tools under a single AI model. The rollout strengthens the connection between Gemini’s generative systems and the everyday Google experience. As more users gain access, the feature could evolve from a novelty into an integrated visual engine that supports both creative exploration and practical image editing.
