You may not have heard about new AI image generation models as much lately, but that doesn’t mean that innovation in the field has stagnated: it’s quite the opposite. FLUX.1-dev immediately overshadowed the famous Stable Diffusion line of image generation models, while leading AI labs have released models such as Seedream, Ideogram, and Qwen-Image. Google also joined the action with Imagen 4. But all of those image models are vastly overshadowed by ChatGPT’s free image generation support in March 2025. After going organically viral on social media with the Make me into Studio Ghibli prompt, ChatGPT became the new benchmark for how most people perceive AI-generated images, for better or for worse. The model has its own image “style” for common use cases, which make it easy to identify that ChatGPT made it. (View Highlight)
Of note, gpt-image-1, the technical name of the underlying image generation model, is an autoregressive model. While most image generation models are diffusion-based to reduce the amount of compute needed to train and generate from such models, gpt-image-1 works by generating tokens in the same way that ChatGPT generates the next token, then decoding them into an image. It’s extremely slow at about 30 seconds to generate each image at the highest quality (the default in ChatGPT), but it’s hard for most people to argue with free. (View Highlight)
In August 2025, a new mysterious text-to-image model appeared on LMArena: a model code-named “nano-banana”. This model was eventually publically released by Google as Gemini 2.5 Flash Image, an image generation model that works natively with their Gemini 2.5 Flash model. Unlike Imagen 4, it is indeed autoregressive, generating 1,290 tokens per image. After Nano Banana’s popularity pushed the Gemini app to the top of the mobile App Stores, Google eventually made Nano Banana the colloquial name for the model as it’s definitely more catchy than “Gemini 2.5 Flash Image”. (View Highlight)
Personally, I care little about what leaderboards say which image generation AI looks the best. What I do care about is how well the AI adheres to the prompt I provide: if the model can’t follow the requirements I desire for the image—my requirements are often specific—then the model is a nonstarter for my use cases. At the least, if the model does have strong prompt adherence, any “looking bad” aspect can be fixed with prompt engineering and/or traditional image editing pipelines. After running Nano Banana though its paces with my comically complex prompts, I can confirm that thanks to Nano Banana’s robust text encoder, it has such extremely strong prompt adherence that Google has understated how well it works. (View Highlight)
Working with the Gemini API is a pain and requires annoying image encoding/decoding boilerplate, so I wrote and open-sourced a Python package: gemimg, a lightweight wrapper around Gemini API’s Nano Banana endpoint that lets you generate images with a simple prompt, in addition to handling cases such as image input along with text prompts. (View Highlight)
I chose to use the Gemini API directly despite protests from my wallet for three reasons: a) web UIs to LLMs often have system prompts that interfere with user inputs and can give inconsistent output b) using the API will not show a visible watermark in the generated image, and c) I have some prompts in mind that are…inconvenient to put into a typical image generation UI. (View Highlight)
Now, we can try another one of Nano Banana’s touted features: editing. Image editing, where the prompt targets specific areas of the image while leaving everything else as unchanged as possible, has been difficult with diffusion-based models until very recently with Flux Kontext. Autoregressive models in theory should have an easier time doing so as it has a better understanding of tweaking specific tokens that correspond to areas of the image. (View Highlight)
One of the most compelling-but-underdiscussed use cases of modern image generation models is being able to put the subject of an input image into another scene. For open-weights image generation models, it’s possible to “train” the models to learn a specific subject or person even if they are not notable enough to be in the original training dataset using a technique such as finetuning the model with a LoRA using only a few sample images of your desired subject. Training a LoRA is not only very computationally intensive/expensive, but it also requires care and precision and is not guaranteed to work—speaking from experience. Meanwhile, if Nano Banana can achieve the same subject consistency without requiring a LoRA, that opens up many fun oppertunities. (View Highlight)
Back in the days of Stable Diffusion, the use of prompt engineering buzzwords such as hyperrealistic, trending on artstation, and award-winning to generate “better” images in light of weak prompt text encoders were very controversial because it was difficult both subjectively and intuitively to determine if they actually generated better pictures. Obama shaking Ugly Sonic’s hand would be a historic event. What would happen if it were covered by The New York Times? I added Pulitzer-prize-winning cover photo for the The New York Times to the previous prompt: (View Highlight)
There are two noteworthy things in the prior two examples: the use of a Markdown dashed list to indicate rules when editing, and the fact that specifying Pulitzer-prize-winning cover photo for the The New York Times. as a buzzword did indeed improve the composition of the output image. (View Highlight)
Many don’t know how image generating models actually encode text. In the case of the original Stable Diffusion, it used CLIP, whose text encoder open-sourced by OpenAI in 2021 which unexpectedly paved the way for modern AI image generation. It is extremely primitive relative to modern standards for transformer-based text encoding, and only has a context limit of 77 tokens: a couple sentences, which is sufficient for the image captions it was trained on but not nuanced input. Some modern image generators use T5, an even older experimental text encoder released by Google that supports 512 tokens. Although modern image models can compensate for the age of these text encoders through robust data annotation during training the underlying image models, the text encoders cannot compensate for highly nuanced text inputs that fall outside the domain of general image captions. (View Highlight)
marquee feature of Gemini 2.5 Flash is its support for agentic coding pipelines; to accomplish this, the model must be trained on extensive amounts of Markdown (which define code repository READMEs and agentic behaviors in AGENTS.md) and JSON (which is used for structured output/function calling/MCP routing). Additionally, Gemini 2.5 Flash was also explictly trained to understand objects within images, giving it the ability to create nuanced segmentation masks. Nano Banana’s multimodal encoder, as an extension of Gemini 2.5 Flash, should in theory be able to leverage these properties to handle prompts beyond the typical image-caption-esque prompts. That’s not to mention the vast annotated image training datasets Google owns as a byproduct of Google Images and likely trained Nano Banana upon, which should allow it to semantically differentiate between an image that is Pulitzer Prize winning and one that isn’t, as with similar buzzwords. (View Highlight)
Another thing about the text encoder is how the model generated unique relevant text in the image without being given the text within the prompt itself: we should test this further. If the base text encoder is indeed trained for agentic purposes, it should at-minimum be able to generate an image of code. (View Highlight)
An alternate explanation for text-in-image generation in Nano Banana would be the presence of prompt augmentation or a prompt rewriter, both of which are used to orient a prompt to generate more aligned images. Tampering with the user prompt is common with image generation APIs and aren’t an issue unless used poorly (which caused a PR debacle for Gemini last year), but it can be very annoying for testing. One way to verify if it’s present is to use adversarial prompt injection to get the model to output the prompt itself, e.g. if the prompt is being rewritten, asking it to generate the text “before” the prompt should get it to output the original prompt. (View Highlight)
This section turned out unexpectedly long, but it’s enough to conclude that Nano Banana definitely has indications of benefitting from being trained on more than just image captions. Some aspects of Nano Banana’s system prompt imply the presence of a prompt rewriter, but if there is indeed a rewriter, I am skeptical it is triggering in this scenario, which implies that Nano Banana’s text generation is indeed linked to its strong base text encoder. But just how large and complex can we make these prompts and have Nano Banana adhere to them? (View Highlight)
Nano Banana supports a context window of 32,768 tokens: orders of magnitude above T5’s 512 tokens and CLIP’s 77 tokens. The intent of this large context window for Nano Banana is for multiturn conversations in Gemini where you can chat back-and-forth with the LLM on image edits. Given Nano Banana’s prompt adherence on small complex prompts, how well does the model handle larger-but-still-complex prompts? (View Highlight)
A similar approach that does have a practical use is providing structured, extremely granular descriptions of objects for Nano Banana to render. What if we provided Nano Banana a JSON description of a person with extremely specific details, such as hair volume, fingernail length, and calf size? As with prompt buzzwords, JSON prompting AI models is a very controversial topic since images are not typically captioned with JSON, but there’s only one way to find out. I wrote a prompt augmentation pipeline of my own that takes in a user-input description of a quirky human character, e.g. generate a male Mage who is 30-years old and likes playing electric guitar, and outputs a very long and detailed JSON object representing that character with a strong emphasis on unique character design. 3 But generating a Mage is boring, so I asked my script to generate a male character that is an equal combination of a Paladin, a Pirate, and a Starbucks Barista: the resulting JSON is her (View Highlight)
Surprisingly, Nano Banana is terrible at style transfer even with prompt engineering shenanigans, which is not the case with any other modern image editing model. I suspect that the autoregressive properties that allow Nano Banana’s excellent text editing make it too resistant to changing styles. That said, creating a new image in the style of Studio Ghibli does in fact work as expected, and creating a new image using the character provided in the input image with the specified style (as opposed to a style transfer) has occasional success. (View Highlight)
Speaking of that, Nano Banana has essentially no restrictions on intellectual property as the examples throughout this blog post have made evident. (View Highlight)
Another note is moderation of generated images, particularly around NSFW content, which always important to check if your application uses untrusted user input. As with most image generation APIs, moderation is done against both the text prompt and the raw generated image. That said, while running my standard test suite for new image generation models, I found that Nano Banana is surprisingly one of the more lenient AI APIs. With some deliberate prompts, I can confirm that it is possible to generate NSFW images through Nano Banana—obviously I cannot provide examples. (View Highlight)
’ve spent a very large amount of time overall with Nano Banana and although it has a lot of promise, some may ask why I am writing about how to use it to create highly-specific high-quality images during a time where generative AI has threatened creative jobs. The reason is that information asymmetry between what generative image AI can and can’t do has only grown in recent months: many still think that ChatGPT is the only way to generate images and that all AI-generated images are wavy AI slop with a piss yellow filter. The only way to counter this perception is though evidence and reproducibility. That is why not only am I releasing Jupyter Notebooks detailing the image generation pipeline for each image in this blog post, but why I also included the prompts in this blog post proper; I apologize that it padded the length of the post to 26 minutes, but it’s important to show that these image generations are as advertised and not the result of AI boosterism. You can copy these prompts and paste them into AI Studio and get similar results, or even hack and iterate on them to find new things. Most of the prompting techniques in this blog post are already well-known by AI engineers far more skilled than myself, and turning a blind eye won’t stop people from using generative image AI in this manner. (View Highlight)