Obsolescence Redux

Last night – on Walpurgisnacht, of all nights – an article from last month began making the rounds suggesting that “AI” had been used to generate preproduction art for the upcoming Hellboy movie. This took Hellboy creator Mike Mignola by surprise; by this morning the Mignolas had spoken with the studio, and the film’s director, Brian Taylor, had taken to the website formerly known as Twitter to clarify that there is “exactly *ZERO* AI used” in the film.

Which is a relief for Hellboy fans and a cautionary tale for all of us not to get too het up about news too quickly. BUT it doesn’t really address the larger problem, which is that the guy (Jonathan Yunger, President of Millennium Films) still probably did say the thing about using “AI” to create preproduction art; he was just referring to a different movie.

I have been taken to task in the past for my unflinching stance against what is commonly called “AI art.” I have heard all of the usual claims that it is “harmless,” that it is “inevitable,” and so on. And many of the arguments that I have heard were made by other creatives, in good faith, simply coming from places of misunderstanding.

So, let me be clear: My stance is that so-called “AI” has no place in any creative endeavor and “AI” art, writing, music, etc. cannot ever be ethically used in a commercial venture in its current form. The reasons for this range from the ethical to the personal, but the ethical reasons alone are more than enough.

For starters, though, we need to understand what we’re talking about. The people pushing this technology scored a major PR coup when they got everyone to adopt “AI” as the preferred terminology, which has muddied the waters as to what the actual threats are here, and what is at stake.

So-called “AI art” (or writing, or music, etc.) has no actual “intelligence” in it. It is more accurately called “procedurally generated,” which is what I’ll mostly be calling it for the rest of this post. Such procedurally generated images or text or songs are made using essentially the same kind of statistical pattern-matching that recommends movies to you on Netflix or decides what posts and ads you’ll see on Facebook, just operating at a much larger scale.

It does not know things, and it cannot find them out. It takes the patterns in its training data, measures which patterns usually appear near which others across a staggeringly large number of examples, and then churns out new output using that information. Nothing more. There are no robot overlords coming from this to wipe us out, Terminator-style. There are simply the same overlords we’ve always had – greedy corporations wanting to take more from us and give us less.
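To make that concrete, here is a toy sketch of purely procedural text generation – a Markov chain that “writes” by doing nothing but tallying which word usually follows which. This is my own illustration, not how any production model actually works (real systems use neural networks trained at vastly greater scale), but the underlying point is the same: statistics in, statistics out, no understanding anywhere.

```python
import random

def train(corpus, order=1):
    """Build a table mapping each word to the words observed to follow it."""
    words = corpus.split()
    table = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        table.setdefault(key, []).append(words[i + order])
    return table

def generate(table, length=10, seed=0):
    """Emit words by repeatedly sampling what 'usually comes next'."""
    rng = random.Random(seed)
    key = rng.choice(list(table))  # start from a random observed context
    out = list(key)
    for _ in range(length):
        followers = table.get(tuple(out[-len(key):]))
        if not followers:  # dead end: this context only appeared at the end
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Feed it any corpus and it will produce plausible-looking word sequences with nothing behind them; the “large language models” being sold today are enormously more sophisticated statistically, but no more “intelligent” in kind.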

Nor is the problem with these procedurally generated products the same problem as with automation in general. Automation (self-checkout stations replacing human checkers at the grocery store, say) has its own concerns, but they are different from the concerns attendant to “AI art.” We are still a long way from automation being something artists need to worry about, and before we get there, we have these other problems to address.

The major ethical concerns relating to procedurally generated art, writing, music, what-have-you, as I understand them, are threefold:

1. “AI art” is theft. Pure and simple. Whether we’re talking about Midjourney or ChatGPT or whatever, these “large language models,” as they are sometimes called, have to be “trained” on existing art (or writing, or whatever). And that “training” is actually just copying. It is reproducing – in whole or in part – an image or some text or a piece of music without paying for the rights to do so. And that’s the thing. When an artist creates a piece of art, they own that piece of art (unless they are doing it work-for-hire under contract, in which case the person who hired them owns it). It can thereafter only be used commercially if the artist sells the rights for that use, but they continue to own the drawing (or whatever) itself. When these “large language models” are “trained” on a piece of existing artwork, they are copying that artwork and re-using it in a way for which they have not acquired the rights. And this is a commercial breach, even if you’re just using Midjourney or whatever to make silly pictures to share on Facebook, because these “large language models” are, themselves, money-making ventures, even without the resulting product being used in an additional commercial capacity. Therefore, “AI art” is labor theft, pure and simple, and can never be ethically used unless a “large language model” were developed that was “trained” solely on art properly licensed for that purpose.

2. “AI art” is environmentally catastrophic. While actually using something like ChatGPT to generate a page of text takes relatively little energy, “training” these “large language models” consumes absolutely mind-boggling quantities of water and electricity, much like the various blockchain scams that came before (indeed, the hype playbook of “AI art” matches that of NFTs nearly exactly). And while there are lots of things that consume large sums of water and electricity, one must ask whether the end result is worth the cost and, in this case, it absolutely is not.

3. “AI art” is a tool of labor exploitation. I said earlier that artists, writers, etc. were a long way from needing to worry about being automated out of existence. And that’s true. But they’re already being exploited, and these “large language models” are already making it worse. “I was able to make 3000 creature designs in an hour,” Yunger said in that article linked above. Artists, designers, writers, etc. are already underpaid and overworked, and if procedurally generated assets can be used, those artists will be (and already are) expected to work faster and for less, with the output of a “large language model” used as justification.

Those are the primary ethical concerns surrounding procedurally generated art, writing, etc. But there’s still a personal reason why I will never truck with “AI art” or what-have-you, one that would not change even if all of those ethical considerations went away.

Sadly, seemingly lost to the internet ether is a pithy reply on social media which sums it up nicely. “Why would I bother to read something that nobody could be bothered to write?” it asks, in essence. And that’s what it ultimately boils down to.

Art – whether it be writing, music, film, or visual arts of other kinds – is not a commodity. It should not be something we merely consume or use. We come to it for meaning, for inspiration, for connection, for transcendence, even just for entertainment, and all of those things require that it be a two-way street running between people; between writer and reader, musician and listener, artist and viewer.

Our lives are fleeting and our time here precious. We already have to waste too much of it for various reasons. I have no desire to waste any of it “consuming” art that nobody wanted to actually create.

(From “Hellboy: The Crooked Man,” art by Richard Corben.)