A hat, a haircut, a tattoo: Experimenting with AI image generator Blunge.ai
Creating images for my upcoming book chapter in the Entrepreneur's Manifesto using generative AI tool Blunge.ai, and evaluating its ethics
As many of you know, I’m writing the chapter on AI and Data for the upcoming “Entrepreneur’s Manifesto” book by Action Takers Publishing. (Shout out to Lynda Sunshine West and Sally Larkin Green ☕️.) I’m submitting my chapter tomorrow and the book will be published early in 2026. (More info on my published and future books is here.)
In the part of my chapter where I talk about choosing AI tools, I refer to the “hat, haircut, tattoo” analogy originated by James Clear in his 2024-02-01 newsletter and explained nicely by Russ Miles in his recent article On Developer Platform Experiments. I thought it might be good to have an illustration for this analogy in my book chapter. I did a search and didn’t find any suitable images for this analogy that I could reuse.
As my readers know, until now I have avoided AI image generation tools due to ethical concerns.
But thanks to Caitlin McColl 🇨🇦 and her Notes relating to this collaborative post brokered by Sam Illingworth (to which she, I, and five others contributed)1, I recently discovered Blunge.ai. This looked like a great opportunity to try it. (See this Blunge blog post for more on their approach to AI ethics.)
Blunge is based in Sydney, Australia. They offer a free plan which allows limited generations of public images that can’t be used commercially. Their site lets you try a few images without even creating an account at first, which I really appreciated. Everything I write about in this article was done without creating a login.
First, I chose a style. Here’s a partial screenshot of the available styles:
My requirements:
I want my book chapter images to be simple so they are legible if someone is reading the ebook on a small screen, such as a mobile device.
I’m assuming the hard copies of the Entrepreneur’s Manifesto book will be printed only in black & white, so I don’t need or want color.
I chose Blunge’s Style, the simple B&W outline format at upper left.
First iteration
Prompt: “create a composite image with 3 adjacent tiles in a horizontal row. the three tiles should show a hat, a haircut, and a tattoo. all images should be non-gendered and simple.”
Result:
Verdict: It got the layout and style right, but the hat and haircut both looked masculine-coded (even though I had specified ‘non-gendered’). So I tried to correct this.
Second iteration
Prompt: “make the hat and the haircut images gender-neutral”
Result:
Verdict: I think that works! Maybe a caption would help? (Yes, I could add words myself with an image editing tool, but I wanted to see what the tool could do with text.)
Third iteration
Prompt: “put the words Hat, Haircut, and Tattoo in a simple sans serif font, centered underneath the 3 tiles”
Result:
Verdict: Oh nooo … the text is perfectly fine, but instead of keeping the 3 images I was happy with, it regenerated the tiles and gave me back three gendered tile images 😖 (At least, I have never yet seen that nautical heart tattoo on a woman.)
Maybe I hadn’t selected my previous image in the prompt box, and it defaulted to regenerating? It obviously remembered the gist of my previous instructions, though. I clicked the handy delete icon (trash can) and selected generated image #2 as input.
Fourth iteration
Prompt: “Use the tile images in this version - do not regenerate the tile images - and add the words "Hat", "Haircut", and "Tattoo" in a simple sans serif font, each centered underneath its respective tile.”
Result:
Verdict: I think that works. What do you think?
Fifth Iteration
For fun, I wanted to see what would happen if I chose to apply a different style. I picked the simple Vintage Collage Objects style (the cat with the open mouth in the big image above).
Prompt: “Apply the newly selected style to this image. Keep the gender-neutral hat, haircut, and tattoo tile images and the words underneath.”
Result:
Verdict: Blunge did a good job keeping the tile images and words I wanted, and these do look nicer (although not what I expected from the cat thumbnail image above). But the colors and background won’t work for my book chapter.
Experiment Wrap-Up
At this point, Blunge gave me the message: “You’ve used up all your credits. Start your free trial now to generate more images for free and get access to video generation and more.” Fair enough.
All of these experiments were completed in under an hour, and most of that time went into writing up what I was doing here, not into running the tool! The image generations generally completed within 10 seconds or so. Granted, these were simple styles and simple requests, but it was free, and I’m pretty happy with what I got for my purposes.
And if their ethics positioning is accurate, I wasn’t exploiting any human artists by using Blunge. I was still using up computing resources, though, so my experiment had a bit of data center impact. And I was a bit disappointed with needing to reprompt to avoid gender bias - but, unfortunately, that’s pretty common among genAI tools. (Note to self: “gender-neutral” worked better than “non-gendered”.)
I’m still going to rely mostly on searches for existing images (when I think a suitable one might exist) and on collaborations with human artists.
What’s Next?
To use this image (or others) in my book chapter, I need a paid Blunge plan. Fortunately, at $10/month, the base paid plan is affordable even on my ‘ethical shoestring’ budget. That plan will let me generate 50 images/month. That should be more than enough for my books in progress, and even my Substack posts. The next plan up at $20/month allows 100 images, which I don’t think I need, and video generation, which I don’t care about. And they offer a 14-day free trial, which at this point will carry me into my 2026 budget year before I have to start paying.
So my next step is to sign up for an account, start my free trial, and save this “hat, haircut, tattoo” image as PNG and SVG for the book chapter. Then I may experiment with generating other simple illustrations … like maybe the chainsaw analogy from my Everyday Ethical AI book, which I’m also referencing in this chapter. I might also try generating nicer branding images for my newsletters.
Open Questions
Along with my free trial, I’m also going to investigate Blunge.ai’s ethical position more closely. I’m a little concerned that:
Their blog post doesn’t say whether their underlying image generation model was trained only on non-copyrighted works.
It does say that artists can upload their own images and train their own model, which would be ethical sourcing. That doesn’t fit my needs, though; I’m not an artist. It’s also unlikely that they are actually doing a full retrain of a model on only one artist’s works; more likely, they are developing a style model.
Their image generation widget shows a choice of models to be used:
Nano Banana Pro x Blunge (Gemini 3 Pro / Nano Banana 2)
GPT Image x Blunge (GPT Image 1)
Nano Banana (Gemini 2.5 Flash Image / Nano Banana 1)
GPT Image (GPT Image 1)
Nano Banana Pro (Gemini 3 Pro Image / Nano Banana 2)
GPT Image (mini)
Imagen
By extension, any image generated in Blunge with a model based on Nano Banana, GPT Image, or Imagen will be unethically sourced, since OpenAI and Google did not ethically source the training data for those models. (This article by Genevieve Smith-Nunes has more info on ethical considerations for Nano Banana2.)
I’ve messaged Blunge with inquiries. For now, I’m not using this image in my book chapter. If it turns out that there are ethically-sourced model options, I’ll try regenerating this image. Once I have answers, I’ll update my ethical shoestring list accordingly.
I’ll write more in 2026 about my further adventures with Blunge. I invite you to subscribe to stay informed, and let me know if this kind of post is interesting to you!