Big game developers like Ubisoft, who were interested in NFTs at first, have thankfully stopped bringing them up. Now, some people hope that the "make it so uncool that no one will touch it" strategy can be used to stop another trend: the fast-developing AI image generators that make fake, flattering portraits of our friends and stills from fake David Lynch Warhammer movies.
I believe they will be disappointed. AI "art" is not going away.
In several ways, NFTs and AI art are diametrically opposed: NFTs promise that each piece of digital artwork can be a one-of-a-kind, valuable commodity, while AI art threatens to devalue digital art by flooding the internet with an infinite supply of it. I don't believe most people care whether Jimmy Fallon keeps all those awful NFT ape pictures, but AI images are so cheap and quick to produce that it's become difficult not to see more and more of them. You've probably seen AI-generated pictures on social media in the last year.
And I doubt it's just a passing craze. Whereas blockchain investment is chastised for generating unnecessary waste, AI art is bemoaned for endangering the livelihoods of illustrators. But everyone recognizes the utility of a machine that converts words into images. Even if you object to it on principle, it's difficult to resist giving it a go. If someone claims to have a machine that can create images of anything, how can you not want to put that claim to the test at least once?
The way we interact with these machine learning algorithms reminds me of how adults tease reactions out of toddlers, delighting in their responses to new things and pointing to anything that might indicate they've understood us. A pleasantly uncanny feeling arises when an image generator appears to "get" what we've asked for: it's difficult to believe that a computer program successfully translated a complex idea like "John Oliver looking lovingly at his cabbage having realized he's falling in love" into an image, but there it is, undeniably on the screen in front of us.
That, I believe, is what makes AI art so insulting to so many people. It's not just the automation of labor that seems obscene; it's the automation of creative work. Something truly human has been twisted into a party trick.
The good news and bad news for humanity is that the sleight of hand is easy to detect: image generators can't accomplish anything unless they're trained on piles of human-made artwork and images, and in some instances that training is done without the permission of the artists whose work was used. Indeed, the popular Lensa AI portrait builder regularly reproduced garbled signatures: the mangled corpses of the signatures of the real artists whose work it was fed.
If you ask me, one early attempt to rescue AI art from this criticism falls flat. The claim is that AI art generators, which mine online artist portfolios for training material, are "simply doing what human artists do" by "learning" from existing artwork. Sure, people learn by imitating and building on the work of others, but I don't take seriously the anthropomorphizing of algorithms that trawl millions of images as if they were living people who just happen to be incredibly fast at going to art school. Even if some of the pictures look like they were made by humans, it is far too soon to ascribe human traits to silicon chips just because they can make pictures of cats appear on demand.
What interests me about AI-generated images is that they rarely appear to be human-made. One manifestation of machine learning's inhumanity is its lack of self-consciousness. AI art generators do not cry over their mistakes, nor do they feel bored or upset by their inability to render hands in Euclidean space. They can't assess their own work, at least not in any manner a human can understand, and that fearlessness results in astonishing visuals: ones we've never seen before, which some artists use as inspiration.