Google’s updated its Gemini app (and website) to make image generation a bit more intuitive, and for once, what I previously wrote off as a novelty might now actually be a viable Photoshop alternative. There’s still some typical AI junk, but the new model, tested under the name “nano banana” and now live for all Gemini users as Gemini 2.5 Flash Image, does a lot to let you fine-tune an image to your liking. Everything still has a watermark and “made with AI” warnings in the metadata, but get ready to be a lot more discerning about whether a photo is real or not—the new Gemini blurs those lines more than ever before.
Google Gemini is now better at editing real photos
What makes the updated model so special is a focus on maintaining details across multiple photos. Now, instead of essentially generating from scratch each time you ask the Gemini app for a photo, it can carry over parts of either a source photo or a previously generated image and only change what you ask it to. There are two big reasons why that matters, and ironically, one of them actually means using less AI.
For instance, let’s say you have a photo of yourself wearing a red shirt, but you want it to be blue. Previously, you had two options: You either had to take the image into Photoshop yourself and tweak it manually, or use it as a prompt for AI and keep generating until you got something that looked close enough to the original photo, but now with the shirt in blue. With the changes in nano banana, Google’s fine-tuned its model so that it leaves most of your image alone, and only changes the shirt.

Credit: Michelle Ehrhardt, Google
As an example, here’s that exact situation, with a couple photos of me. Notice how the model maintains fine details like the frizz of my hair or my specific facial expression and pose. It’s not perfect, and you’ll notice that my skin actually looks a little smoother in the edited version, but with the new updates, Gemini is now able to determine what I mean by “shirt” and focus most of its edits on that. I will say the shirt also looks a little unnatural, specifically around my right shoulder, but I also didn’t give Gemini much to work with in my prompt. That’s where the next big change comes in.
Use Gemini to edit the same result multiple times
This is where the real trick is. Whether an image is entirely AI-generated or not, you can now use previously generated images as a base for future generations. In other words, if Gemini didn’t get something quite right the first time, you can ask it to try again until it does.
To give you an idea of what that looks like, here’s the same photo of me in the blue shirt, but now with polka dots added in, to better match the red shirt from the original photo.

Credit: Michelle Ehrhardt, Google
And here’s an entirely AI-generated image of a cat, which I had Gemini change to orange.

Credit: Google
This is huge for AI image generation. Previously, when asking Gemini to make small tweaks to content it’s already generated, you would essentially get brand new photos each time, as is the case with these dogs wearing hats.

Credit: Google
Now, though, you can have the app iterate on the same photo multiple times, which means that if the initial result looks unconvincing, you have a chance to fix it. To me, that takes this from being a novelty—where you essentially have to spin a wheel with each generation and hope it lands on something useful—to a genuine Photoshop threat.
Google suggests, for instance, that you could use this to see how you’d look if you lived in a different decade, or had a different career. I’ll admit that the results look convincing enough to work for casual posts, especially if you upload a real photo as context. Here’s me standing next to the real life Mona Lisa, but re-imagined as an artist.

Credit: Michelle Ehrhardt, Google
That’s not strictly realistic (why is there a second Mona Lisa next to me?), but I could see a certain type of person getting enough of a hoot out of it that they flood social media with posts like it. Spend some time iterating on it, and you could probably even make it look like I just went to the Louvre.
But if you’re an AI skeptic like me, there is still one saving grace that shows the model has a little room to grow.
Combining photos is still not quite right
While the new Gemini updates make iterating on existing photos much more viable, asking it to generate new content, where it can’t rely too much on a source photo, still gives you a noticeable AI sheen. One of the additional features Google announced with this update was the ability to use Gemini to combine multiple source photos into one. But while the other changes mostly involve making small tweaks to existing photos, this one still requires the AI to make up a lot in order to put the photos together, and that’s where you’re most likely to run into the same old problems.

Credit: Michelle Ehrhardt, Google
For instance, following one of Google’s suggested examples, I uploaded a photo of myself and my cat to Gemini, and asked it to make a photo of us cuddling together. But whereas the other tests I did with this update looked a lot like the source photos, the result here gave me a version of myself in a too-tight shirt, with too-shiny hair, cuddling a too-chunky cat. The broad strokes were right—my face still looks mostly like myself, my cat’s fur pattern is roughly intact, and the couch even has the right color and general shape. But on top of some small inconsistencies with, say, the folds on the couch, or my dimples, or the lamp in the background (which seems to have two poles), anyone who’s met my cat knows she’s not that big. The photo also just has that Vaseline-like, over-processed look that’s endemic to AI.
To a degree, that’s to be expected. I didn’t upload too many photos, and certainly none of me or my cat in the poses presented in the AI image. The AI had no way of knowing how we would look from different angles, especially since my selfie was just a headshot. But my result does suggest that when the AI runs out of useful source info and needs to intuit how a scene should look, it still runs into familiar problems that make it pretty easy to distinguish from photos made without AI. I could probably make the AI photo more realistic if I uploaded source photos closer to what Gemini wanted to generate, sure, but then I have to wonder: what would even be the point of involving AI in the editing process?
At any rate, I can confidently say that making advanced AI edits look convincing will still take a good bit of human intervention.
Get ready for a blend of AI and reality
Gemini’s new updates are, to me, most impressive when used for smaller tweaks, which is really where I think the threat to Photoshop comes in. I like to think I have a knack for spotting AI-generated photos, but on a quick scroll, I’m not sure that image of me in a blue shirt would raise any alarm bells.
What does that mean? Well, for one, it means free AI tools are finally at the point where a natural language prompt can accomplish what might have taken a few minutes to do by hand before. Adobe has already said it plans to incorporate nano banana into Photoshop, but be prepared for further changes to traditionally untouchable apps as AI progresses. For the small stuff, at least, AI really can threaten your traditional workflow.
For people who aren’t content creators, expect to have to develop an even more discerning eye about what is and isn’t real online. While completely AI-fabricated images are often still pretty easy to spot, and more realistic edits can be mostly innocuous (nobody’s gonna care about the color of my shirt), Gemini’s updates now make it easier than ever to blend reality with just a little bit of untruth. Here’s an image I had the new Gemini make of Taylor Swift in a red baseball cap, if you catch my drift.

Credit: Google
While we wait to see how this plays out, it’s a good time to remember that if an image does set off your alarm bells, Gemini puts an AI watermark in the lower left corner of all of its results, and marks photos generated with it in their metadata, which you can see on both iPhone and Android by swiping up on a downloaded photo. There are ways to scrub metadata, but because the most convincing edits are likely to use real photos as their sources (I did for the Taylor Swift one above), as a last resort, you can also run a Google reverse image search to try to find the unaltered original. Be careful out there.