It’s August 2022, and by now you’ve no doubt read (or more likely seen) something about AI art. Whether it’s random jokes made for Twitter or paintings that look like they were made by actual human beings, artificial intelligence’s ability to create art has exploded onto the scene over the last few months, and while this has been great news for shitposts and fans of tech, it has also raised a number of important questions and concerns.
If you haven’t read or seen anything about the subject, AI art—at least as it exists in the state we know it today—is, as Ahmed Elgammal writing in American Scientist so neatly puts it, made when “artists write algorithms not to follow a set of rules, but to ‘learn’ a specific aesthetic by analyzing thousands of images. The algorithm then tries to generate new images in adherence to the aesthetics it has learned.”
From a user’s perspective, this is most often done by entering a text prompt: you type something like “wizard standing on a hillside under a rainbow”, and an AI will attempt to give you a fairly decent approximation of that in image form. You could also type “Spongebob grieving for Batman’s parents” and get something just as close to what you had in mind.
Basically, we now live in a world where machines have been fed millions upon millions of pieces of human endeavour, and are now using the cumulative data they’ve amassed to create their own works. This has been fun for casual users and interesting for tech enthusiasts, sure, but it has also created an ethical and copyright black hole, where everyone from artists to lawyers to engineers has very strong opinions on what this all means, for their jobs and for the nature of art itself.
Given my interests here, I’ve spoken with a number of professional artists for this piece, all of them working in video games, films, and television, and many are concerned for the future of art jobs in the entertainment business. “Artists’ skills were already undervalued before this technology; I fear this will compound that even more”, Jeanette (not their actual name), a concept artist who has worked at several major AAA publishers, tells me.
Bruce (again, not their real name), an artist who has worked on a bunch of award-winning indie hits, says “The endgame of a potential employer is not to make my job easier, it’s to replace me, or to reduce all my years spent honing my craft into a boring-ass machine learning pilot, where I’m trained to vaguely direct an equivalent software in hundreds of different directions until by chance it spits out an asset we could feasibly use in a game”.
“I can’t think of many worse hells to wind up in for my career. Experientially and morally.”
“I don’t think this tech will hurt any established, ‘big deal’ concept artists and illustrators as much as the low level ones”, says RJ Palmer, who has worked for Ubisoft and also on the film Detective Pikachu. “I could easily envision a scenario where using AI a single artist or art director could take the place of 5-10 entry level artists. The tech is fairly basic (but still impressive) right now but it’s advancing so fast. The unfortunate reality of this industry is that speed is favoured over quality so often that a cleaned up, ‘good enough’ AI-generated image could suffice for a lot of needs.”
“I have seen a lot of self-published authors and such say how great it will be that they don’t have to hire an artist”, Palmer says. “Doing that kind of work for small creators is how a lot of us got our start as professional artists. So as an artist seeing this attitude grow gives me concern for the next generation of artists being able to find consistent entry level work.”
The worry over young, upcoming and part-time artists is one shared by Karla Ortiz, who has worked for Ubisoft, Marvel and HBO. “The technology is not quite there yet in terms of a finalized product”, she tells Kotaku. “No matter how good it looks initially, it still requires professionals to fix the errors the AI generates. It also seems to be legally murky territory, enough to scare many major companies.”
“However, it does yield results that will be ‘good enough’ for some, especially those less careful companies who offer lower wages for creative work. Because the end result is ‘good enough’, I think we could see a lot of loss of entry level and less visible jobs. This would affect not just illustrators, but photographers, graphic designers, models, or pretty much any job that requires visuals. That could all potentially be outsourced to AI.”
Travis Wright, a veteran entertainment industry artist, tells me “There’ll always be a need for someone who can give an art director exactly what they want, particularly character design, but with how quickly these algorithms have improved in just six months, it’s scary, and I can totally see indie horror games, card games and tabletop role-playing games benefiting from using AI over paying an artist.”
AI-generated art is already being used in commercial spaces
Websites like The Atlantic have already been spotted using AI-generated artwork at the top of articles, a space normally reserved for photos (taken by humans and paid for) or artwork (made by humans and paid for). Waypoint have also covered a (terrible) video game made using AI-generated assets.
Jon Juárez, an artist who has worked with Square Enix and Microsoft, agrees that some companies and clients will only be too happy to make use of AI art. “Many authors see this as a great advantage, because this harvesting process offers the possibility of manipulating falsely copyright-free solutions immediately, otherwise they would take days to arrive at the same place, or simply would never arrive”, he says. “If a large company sees an image or an idea that can be useful to them, they just have to enter it into the system and obtain mimetic results in seconds, they will not need to pay the artist for that image. These platforms are washing machines of intellectual property.”
“Intellectual property will no longer have value for small authors, because you will not be able to make a Star Wars movie, but Disney will be able to use your work for their movie. If AI ends up being an Aleph of narratives, the Aleph is going to be privatized and shielded by patents.”
Which brings us to our next point of contention. Calling them a “washing machine of intellectual property” is definitely one way of putting the legal concerns surrounding these art generators. Simply put, as we often see with technology that has advanced faster than the law can keep up, there is no definitive, binding stance on the copyright issues at the heart of machines chewing up human art then spitting out artificial compilations of what they’ve learned.
In February, the US Copyright Office “refused to grant a copyright” for a piece of art made by AI, saying that “human authorship is a prerequisite to copyright protection”. That case is now being appealed to a federal court, however, because the AI’s creator thinks that, having programmed the machine, he should be able to claim copyright over the works it produces. Even when a decision is ultimately reached in this case, it will take a lot more time and cases for a firmer legal consensus to form around the subject.
But what is that work the AI’s creator is claiming, if not simply a casserole made from art created by actual human artists, who are not being paid or even credited for their contributions? Juárez says one of the major platforms “has used one of my images, subject to copyright, without my consent. It’s already inside the system, the program can use it to mimic my style and the damage is irreparable”.
“In many of the results there have been traces of watermarks and signatures; these programs are explicitly designed with the function of removing such marks that can circumvent intellectual property”, Juárez adds. He’s referring to examples of AI-generated artworks appearing to have signatures in their corners, suggesting that, while drawing from the pieces they were fed, they have tried to either erase or imperfectly copy the signatures as well.
Not everyone I spoke with is as downbeat on the copyright implications of these machines, however. Frank (not their real name), an artist who has worked on several blockbuster AAA console titles, tells me “People steal our art all the time. I don’t know how many client meetings I’ve been in where they show me some artist-I-know’s work and say ‘Make it like that’”.
“It’s the highly unfortunate result of doing what we do. When you do it on a high level, people try to find ways to rip it off and duplicate it. AI is just another way that’s going to inevitably happen. I do question the ethics of it for sure, but currently it does a piss-poor job of actually pulling off what I do, and shit, if it does figure it out that’s going to save me so much time [laughs]. Go ahead AI, learn how to paint like me really well so I can just adjust it a bit and turn that in and then go take a nap because the world stinks and every day is hell.”
Floris Didden, art director at Karakter, an Emmy Award-winning studio (Game of Thrones), tells me something similar. “The nature of art-generating AIs doesn’t bother me as much as it seems to bother many artists”, he says. “We all look at each other’s work for inspiration on style, execution, ideas, subjects, etc., and mixing it with our own ideas in some way to hopefully create something that can stand on its own. To my mind the programmer is doing the same thing through the use of the AI they created. I’m not saying there’s no originality but let’s not pretend we don’t massively feed off each other.”
“I don’t think legally speaking your copyright was violated when your art was fed into an AI, but I do think morally they owe you something. If you train an AI to perfectly match a specific artist’s style, I think that obviously violates the artist’s rights somehow, if not their copyright. I just don’t know how to legally enforce that.”
Not everything about AI art is an ethical and copyright battleground, though. For all the discord surrounding their creation (and creations), the machines spitting these images out are themselves mere tools, and in the right hands, tools can be useful.
“There are tremendous benefits to the tech for artists as well, which is part of why it’s such a headache”, Palmer says. “In the same way that a non-artist can now create an image, an artist can too, which can be fine-tuned and enhanced through their sensibilities and training. I have had access to DALL-E 2 and it’s fun to see how far you can push it into creating things that don’t have a great 1:1 representation in real life (though it is currently not very good at this). Having it come up with loose compositions, color patterns, lighting, etc. can all be very cool for getting inspiration.”
Ortiz is equally enthused—and conflicted—by the practical possibilities for artists. “For me personally, I could see myself utilizing AI generated imagery for initial visual references and inspiration”, she says. “What if I wish to paint an object in a particular light scheme, or require a specific texture in a specific shape? AI would be an invaluable tool to assist me in my artworks! For some artists, AI would be an absolute game-changer, allowing them to have nearly immediate references to further inspire and potentially inform.”
Didden is another who sees AI art as having a practical benefit. “I’m a concept artist and art director and fundamentally I think design is about solving problems, and more specifically the problems of other humans”, he says. “To do this you need to understand the constraints of the project, have ways of generating solutions, and be able to recognize when you hit on the right one. I always thought that as a concept artist you basically just needed problem-solving skills, some way to visualize your solution, and a dose of good taste (whatever that is). So for a designer, I think AI-generated art is going to be just another tool to use.”
Beyond the immediate concerns and potential uses for working artists, there are larger forces at play, and questions (sorry to bring this up on a video game website, considering how tiring our scene’s own conversations can be) about the nature of art, and work, and working in art. What does it say about this point in human history that we have people working toward, and championing, the use of artificial intelligence to create art? As though it were something that needed to be industrialised, the latest front in a seemingly never-ending struggle between workers and machines?
The reason, of course, is that there are, as there always are, financial considerations at the heart of this movement, some of which mix in the same circles as so many other dystopian technological creations (which care more about the tech itself and its possible uses than about any ethical, environmental or industrial concerns), like cryptocurrency and NFTs. OpenAI, the lab behind DALL-E, was co-founded by Elon Musk, and there have already been million-dollar sales of NFT artworks generated by artificial intelligence. And that’s just the start.
“Stable Diffusion is planning to make profit out of ‘private’ models for customers, profiting from creating [a] general infrastructure layer, and currently some of their lead developers are utilizing AI generated imagery for sale”, Ortiz says. “Both DALL-E and Midjourney have subscription models as well.”
“Some of these companies’ current and potential profits are directly linked, via obscure data sets, to hundreds and thousands of copyrighted creative works from all kinds of creative professionals”, she adds. “That alone is chilling, but to also have no way to opt out of these tools, especially once your work has been used to train an AI, concerns me as an artist very much. I know the coming legal battles will change the landscape. All I can hope for is that the law will move quickly to protect our creative livelihoods, while simultaneously allowing for these new technologies to grow in a way that is beneficial to us all, not just a handful of companies and developers.”
Most ludicrously, there now exists a marketplace called PromptBase, designed solely to sell “prompts”, which are the inputs used to actually generate AI images. Surprising nobody, this marketplace is already rife with copyrighted works, ranging from pop culture characters to branded sneakers.
At the heart of this entire conundrum looms the false equivalency of even calling what an AI generates “art”. Art is inherently human. Its ability to draw upon and inspire our emotions is perhaps the most defining thing that separates us from other animals. (Sorry, opposable thumbs.) It is defined specifically as “a diverse range of human activity, and resulting product, that involves creative or imaginative talent expressive of technical proficiency, beauty, emotional power, or conceptual ideas”.
A machine is not creating art. A machine, even one as advanced as the AI we’re talking about here, is crunching data. There is no perspective to AI art, no inspiration, nothing it is trying to communicate. It’s a compilation playlist built by an algorithm, spinning out an endless number of remixes and cover songs. The fact so many people are getting bogged down comparing AI art to the creations of human beings, as though the former is doing anything but adhering to an algorithm, plays right into the hands of those championing this mimicry, because it sets AI creations on a level playing field they don’t deserve.
Swedish artist Simon Stålenhag perhaps summed it up better than anyone when he said last week “What I don’t like about AI tech is not that it can produce brand new 70s rock hits like ‘Keep On Glowing, You Mad Jewel’ by Fink Ployd, but how it reveals that that kind of derivative, generated goo is what our new tech lords are hoping to feed us in their vision of the future”.
“I think AI art, just like NFTs, is a technology that just amplifies all the shit I hate with being an artist in this feudal capitalist dystopia, where every promising new tool always ends up in the hands of the least imaginative and most exploitative and unscrupulous people.”