How I Use ChatGPT Image Generation in June 2025
The GPT-4o image gen playbook I wish I had 2 months ago
Hey guys,
It’s been 2 months since ChatGPT image generation was released, and since then we’ve generated multiple top-performing ads with it. After spending countless hours playing around with the tool, here are a few thoughts on how I’m using it for ad generation…
But first, here’s what I’ve come across in AI this week:
ChatGPT dropped ‘Record’, a feature that transcribes and summarizes audio recordings like meetings, brainstorms, or voice notes.
This guy dropped a Full 5-Hour N8N Course on X
This guy shared a Prompt That Stops ChatGPT Hallucinating (I haven’t tried it yet, but it looks cool)
How I’m Using ChatGPT Image Generation in June 2025
Unrealistic AI visuals are working
Some of our highest performing ads so far have come from generating intentionally unrealistic visuals. I actually think this is one of the strongest use cases for image gen today.
For example, overly cracked lips for lip balm or a foot covered in flames for nerve pain cream. You could do this in Midjourney previously, but being able to add products and text natively makes it way more powerful now.

Prompting (the easy way)
I prompt in 2 different ways - the easy way and the hard way. The easy way is picking a good reference ad and pairing it with a really simple prompt. By a good reference ad, I don’t mean good from a design perspective - I mean one that my product shot slots into easily. Less guesswork for the model = more consistent outputs.
If you pick a good reference ad, you actually don’t need a detailed prompt to get a good output from GPT 4o.
Prompting (the hard way)
The other way takes longer, but gives better outputs (at least from a design perspective). I typically use it when I want to make more premium-looking statics (shadows, gradients, layering, etc.). I ask Claude to write a graphic design brief that recreates a reference ad in extreme detail, then paste the brief into GPT. If the output doesn’t come out well, I screenshot it, give it back to Claude, and ask it to rewrite the prompt, fixing the error. Keep looping until you get exactly what you want. Which prompting method I use depends on the type of ad I’m trying to create.

My Claude graphic design prompt
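If you’re scripting this against the APIs rather than doing it by hand, the brief → render → fix loop can be sketched like this. Everything here is a hypothetical stand-in: `claude`, `gpt_image`, and `accept` are placeholders for your actual Anthropic call, your actual image-gen call, and your “does this look right?” check (which in practice is usually just you eyeballing the result).

```python
def iterate_brief(claude, gpt_image, accept, reference, max_rounds=3):
    """Sketch of the brief -> render -> fix loop, assuming you supply
    the three callables yourself (none of these are real API names)."""
    # Round 1: have Claude write an extremely detailed design brief.
    brief = claude(
        "Write a graphic design brief to recreate this reference ad "
        f"in extreme detail: {reference}"
    )
    image = gpt_image(brief)
    rounds = 1
    # Keep feeding the flawed output back until it passes (or we give up).
    while not accept(image) and rounds < max_rounds:
        brief = claude(f"The output had errors. Rewrite the brief, fixing them: {brief}")
        image = gpt_image(brief)
        rounds += 1
    return image
```

The injectable callables keep the loop itself trivial to swap between providers - the structure is the method, not any particular SDK.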
Always generate multiple versions
For every ad you make on GPT, open 2 more tabs and copy/paste the prompt in again to give you 3 generations in the same amount of time. Then just pick the best one. I’ve found that you can get away with 3, but 4+ at a time just slows the model down. If you generate multiple images and still can’t get it to work, go back and iterate your prompt.
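If you’re working through the API instead of browser tabs, the same trick is a few lines of Python. A minimal sketch, where `generate` is a hypothetical stand-in for whatever image-gen call you use - you still pick the winner by eye:

```python
from concurrent.futures import ThreadPoolExecutor

def best_of_n(generate, prompt, n=3):
    """Fire the same prompt n times in parallel - the scripted version
    of the 'open 2 more tabs' trick. Returns all n candidates."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(generate, prompt) for _ in range(n)]
        return [f.result() for f in futures]
```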

Use Image Gen for product variants
This prompt lets you take an ad and generate variants for different flavors, colors, or designs in seconds. Keep everything that's working - the layout, the model, the text style - and just swap out the product-specific elements.
We've used this to create tons of iterations on winning ads for almost no extra work.
ChatGPT prompt for color/flavor/design variants:
"Take this ad image and update it to match the [new variation]. Change the packaging, background, and supporting visuals to reflect the [new variation] while keeping the layout, model, and text style the same."
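When you’re batching a whole flavor or color lineup, it’s worth filling that template programmatically so every variant prompt stays word-for-word consistent. A small sketch (the flavor names are just examples):

```python
def variant_prompt(new_variation):
    """Fill the variant-swap template for one flavor/color/design."""
    return (
        f"Take this ad image and update it to match the {new_variation}. "
        "Change the packaging, background, and supporting visuals to reflect "
        f"the {new_variation} while keeping the layout, model, and text style the same."
    )

# One winning ad -> a whole batch of variant prompts:
batch = [variant_prompt(v) for v in ["mango flavor", "berry flavor", "mint flavor"]]
```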
If you’re still struggling with text/labelling…
AI models aren't fully there yet with text generation/placement, and there are still some products that they flat out struggle with. So if you're running into constant issues, it may not be a prompting problem.
A quick solution if you're struggling with text (not on the product): make ads without text and add it elsewhere (Figma, Photoshop) as a temporary workaround while we wait for the models to get better.
And if you’re struggling with text on the product, think about how you can create image gen ads that don't directly involve the product.
Honestly, in most cases you're better off generating the image in GPT and the text in another platform.
It's easy to one-shot a pretty product image - it's difficult (and unreliable) to one-shot good text spacing and sizing.
I try to be objective about the ads we make in GPT. A lot of the ads that went viral when image gen first came out objectively suck - they just look cool. So make sure the ads you’re generating are actually good ads, not just something novel :)
Announcements:
We kicked off the AI Creative Strategist Blueprint program last week and the first live session was PACKED. We had 300+ marketers in there and the sessions lasted for over 2 hours!
If you’re still thinking about joining, it’s not too late. We have 15 live sessions to go (including tons of special bonus sessions). Plus, all recordings are made available to catch up on what you’ve missed so far. Click here to enrol today!
Enjoyed this email? Forward it to a friend who’d find it valuable!
And if someone forwarded this to you, here’s a link to sign up. I send tips on how to make more winning ads with the help of AI straight to your inbox every week!
See you next week,
Alex