The creations of AI art are truly dreamlike, which is to say, they’re only interesting if they’re yours. The endless scroll of a MidJourney Discord server is an index of desires, dreams, whims, and commercial needs, all compliantly rendered by the machine into artlike images. You can see the wishes the genie grants but you don’t know why these things matter to the wisher. A woman on a beach with green hair, ultra detailed. Men in suits, no beards (“BEARDS”, read the genie, and drew several). A stoat Roman Emperor.

I don’t know what my fellow users thought of my prompts. “A wise old owl telling stories to other woodland creatures, by Norman Rockwell”. “An illustration by Norman Rockwell of a delighted crowd leaving a cinema”. “A Norman Rockwell picture of a beautiful woman disembarking from a ship carrying an old suitcase”. Who was this person with an obsession with Norman, as Lana Del Rey put it, Fucking Rockwell?

It was me. I was writing a presentation about stories, for a marketing conference, and I decided to illustrate it with AI ‘art’. AI is a dangerous thing for a presenter like me to have, but I have it. And I thought the best way to understand it was to try and actually use it for something, rather than laughing at what it can’t do.

(“It can’t draw hands!” Mate, I read comics in the 1990s. Hands aren’t a dealbreaker.)

That doesn’t answer the question: “Why Norman Rockwell?” The answer to that was the first thing I learned. Working with AI image generation is a constant string of compromises. In theory your canvas is unlimited. In reality I hit an issue very quickly: consistency.

I wanted my 12-slide presentation to look consistent – the same artistic style on each slide. There are lots of ways you can constrain your AI generator to try and achieve this consistency – it’s why so many prompts are vast columns of technical or simply hopeful descriptors – “Unreal Engine, Highly Detailed, 8K, photorealistic” and on and on. Maybe a careful process of A/B testing revealed which to use in which order, but I doubt it – magpie accumulation of advice seems more likely.

That stuff is meant for things which look like photos or modern videogames. Not my bag – I wanted a vintage illustration style, something which would make my eventual audience think about stories. And there are not many illustrators whose style MidJourney gets right. Which isn’t a problem – the approximations it makes are often malformed but sometimes have their own charm. Except that trying to get it to stick to a style was a fool’s errand, particularly when the style was itself a botched imitation.

So there are even fewer illustrators whose style MidJourney can do twice in a row. I ran my first two prompts through a dozen styles – asking it to ape named artists, generic styles, eras, and more – and ended up with one pair of pictures which looked, at a distance, like the same artist might have drawn them.

Norman Fucking Rockwell.

Still, Rockwell fitted the bill. Immediately recognisable, old-timey, immensely famous, vastly exploited commercially – and long-dead, so I didn’t feel the guilt about biting his style I’d have felt for a living illustrator.

But even before I’d worked on a single slide I’d made compromises. Rockwell also has conservative overtones – not a fair reflection of his own views as I understand it, but he painted the society he lived in, in particular ways, and those ways have resonances which I don’t necessarily want in my own work. Fortunately, the Fake-Rockwell style held when I asked it to diversify the people it was “painting”. Still, he wouldn’t have been my first choice.

This was my main lesson in trying to put AI image generation to use. You’re not a “creator”. You work with what the machine gives you.

MidJourney – the image generator I was using – offers a quartet of pictures for each prompt, each of which you can ask for variations on, or “upscale” to get a higher-definition, more finished piece. So your input as a ‘patron’ happens at each end of the process: first in working on the prompt you give it, second in selecting from the output.

This process of selection involved choosing the best option, obviously. But it also involved a large element of self-deception, as you worked to persuade yourself that the interpretation the AI had made approximated what you thought you had in mind. AI image-making is a test, again and again, of how far you’re willing to take “the perfect is the enemy of the good”.

For instance, I wanted a picture of a reporter excitedly hammering at the keys of a vintage typewriter. I ran the prompt, with variations, again and again, and the same issue came up, again and again. MidJourney doesn’t ‘know’ what a reporter or a typewriter is. Most of its typewriters are photographed from the front. So are most of its reporters. Again and again it gave me a parade of nincompoops trying to write on typewriters whose keys faced away from them. At least the infinite monkeys were facing the right way.

Finally, after multiple attempts, I got a fellow facing the right way at a typewriter. The only problem – I instantly hated him. He looks like a right prick. I decided to use him, but I was mentally doing the calculus – do I mention it? Am I going to frame the slide more negatively because of him?

What I was doing didn’t feel anything like creativity, and it certainly didn’t feel like magic. It didn’t feel much like curation, either – the work of collage and juxtaposition to create an overall impression more powerful than the parts.

It felt a little like briefing – throwing something you can’t do out into the ether for those who can do it and waiting to see what they come up with. Sometimes it’s things you’d never have thought of. But there’s no real ‘they’ on the other end, no set of choices you can ask about, worry about or enthuse over.

Partly to assuage my guilt about using AI images at all, on the day I decided to try it out I commissioned a human to do an illustration for a fundraiser I’m doing. I got the roughs back while I was wrestling with Norman Fucking AI Rockwell’s typewriter goons, and the excitement of seeing them and thinking “oh that’s cool” or “hmm, why that?” doesn’t compare.

What my AI project felt most like was something that’s become extremely familiar to me, and probably to you, over the last 25 years. It felt like searching on Google, or using hashtags on LinkedIn, or trying to discover something new on Spotify, or writing SEO copy. It felt like a cross between negotiation and problem solving. The act of trying to get something out of an algorithm that is enormously complex but fundamentally doesn’t understand you (or anyone). It just felt like the internet.

Negotiating with algorithms is inescapable, but it has its own satisfaction. It’s a skill. You see it when you talk to SEO specialists, who have rewired themselves to view language and writing in different ways from the rest of us, like surfers looking out at the sea. I myself am quietly proud of how good I got at using Spotify to discover new and weird stuff. The new generation of generative AI tools is already creating its own breed of wranglers, part horse whisperer, part snake-oil salesman, ready to convince the wary or greedy that their promised land is just a prompt away.

For the rest of us, a future of tinkering awaits. Douglas Adams predicted it in The Hitch-Hiker’s Guide to the Galaxy, with the onboard drinks computer that can make a million million beverages but can’t quite synthesise tea. In the face of generative AI, but also of the web in general, we are all Arthur Dent just wanting his nice cup of tea, constantly tweaking the request, then settling for what the machine comes up with instead.

(Don’t look too closely at the dog’s legs, that’s my advice.)

APPENDIX AND EDIT: But is it any good? Well, see for yourself and don’t zoom in too much. But on some level I think the question’s meaningless. The lack of conscious creative choices removes many of the grounds on which criticism operates. Art is what an artist declares as art – but there’s no artist here (as I say, using MidJourney does not feel like making art, any more than using a search engine feels like making art).

It doesn’t remove all critical grounds, though. AI-generated images, like procedurally generated NFTs, can act as objects that operate in art-like ways, particularly in the marketplace, and be critiqued as such. But not just in the marketplace – this stuff can create some kind of emotional response. I’m using my Fake Rockwells to try and generate a low-level response myself, operating at the level of “vibe”, a kind of visual muzak. 

I won’t know if they work on that level until I actually give the presentation, near the end of this month. That’s when I can take these highfalutin’ stock photos and definitively say – yes, these were good. After thinking about this over lunch, I went back and removed most references to ‘art’ from the body of this piece.