The AI art debate: Excitement, Fear and Ethics
Issue 6: A look at the ethical debate surrounding AI-generated imagery and its impact on artists.
First off, apologies for skipping a week! I was pushing to get this piece written, and it kept on wanting to be bigger and bigger. It still only contains about half of what I want to talk about on this topic, but it’s a start. I was also busy last-minute packing for our annual trip to the Philippines for Christmas, which is where this issue is coming to you from. So, mabuhay, happy holidays and if you find this piece interesting, share it with a friend or buy me a coffee. Thanks so much.
Like many people, I’ve been watching the explosion of AI tools unfold over the course of 2022. Midjourney, Stable Diffusion, DALL·E, ChatGPT and many others are tools that use machine learning models (which I’ll call “AI” for simplicity and SEO) trained on massive datasets to generate unique images or text based on a combination of human-provided text prompts, parameters, or images.
When it works well, it feels like magic. It feels too easy.
A culture war
Since DALL·E and Midjourney captured mainstream attention, there’s been an intense debate at the intersection of the art and tech worlds, centered primarily on the negative impact AI-generated imagery has on working artists. The debate feels completely polarized, with loud voices on each side mostly shouting past each other. It feels like “Artists vs Tech bros” in much the same way the NFT discourse from last year did (and with a lot of overlapping voices).
Not to be all “Centrist” about it, but I feel somewhat caught in the middle of these camps. A kind of cautious optimism with a huge list of caveats and concerns, I guess. I’ve used Midjourney to generate over 4000 images, some of which I love, many of which are terrible, but I also think a lot of the industry’s practices are pretty gross and problematic right now.
I’m going to do my best to summarize a few of the points I think are valid and deserve deeper discussion, as well as pointing out some others I feel are baseless or misguided. This is not an exhaustive list of the issues here, I’ll probably write more about this in future.
What is art?
Somehow this is both the best and worst place to start. Trying to define art is an interesting theoretical exercise with a lot of value, but I believe chasing a shared definition of art is pretty pointless, and it inevitably ends up being more exclusionary than inclusive.
I hope we can all agree that art can be many things to many people, and is, above all else, subjective. It’s fine that two people don’t agree on whether a banana taped to a wall is “Art” or not.
A piece of art can be created with one intent in mind (or none at all), and interpreted with another point of view. It can be aesthetically pleasing, or ugly. It can be a labor-intensive act of honed and practiced craft, or it can be a quick sketch of a fleeting idea. It can be an expression of randomness. I don’t think something needs to have been created with a specific intent, or be difficult or time-intensive to create to be considered art.
Many people (definitely not all) seem to concede that AI-generated images can be considered a form of art in many cases, but the more contentious question is: in cases like this, who is the artist or author? The human, the AI, both (but in what roles and to what extent), or none of the above?
Authorship & credit
I don’t consider myself an artist. I’m a designer, and that’s how I make my living. Of course like a lot of people, I’ve created things I consider to be art or artistic in nature before. I’ve written music, but don’t call myself a musician. I’ve taken photos, but I don’t call myself a photographer. I cook every day, but I’m not a chef.
I create images using Midjourney, but I am not an artist.
There’s a very reasonable argument to be made that AI-generated imagery should be treated differently and distinctly when it comes to authorship and credit. Nothing is gained from tricking people into thinking you were more involved in the creation of something than you actually were.
Readers who’ve been following me for a bit will know I use generated imagery more often than not as editorial-type images for my articles, rather than free stock photos. I enjoy the process, and the kind of messed-up, imperfect aesthetics.
I’ve gone through a few iterations to try and figure out what feels like a reasonable, transparent, and fair way of crediting these. In conversation, I’ve tried to talk about it different ways with people, and ended up with a spectrum of feelings depending on the language I used.
“Some art I made” feels the most scummy.
“An image I generated” is a little better, but still not transparent without context.
“An image Midjourney generated based on my prompting and iteration” is just overly mechanical, and who talks like that?
As with changing how you talk about anything (transitioning from saying “guys” to “folks”, for example), it’s going to feel weird and awkward to start with, but will become more natural over time.
Luckily, outside of conversation, credits don’t need to sound conversational.
The line I’m currently using in that context is:
Image by Midjourney, directed by the author.
Some previous attempts were “Image by the author, generated with Midjourney” implying Midjourney is simply a tool, in the way that a 3D artist might say “Image by the author, sculpted in Zbrush and textured in Substance Painter”. I pivoted away from this because I strongly believe these aren’t even close to equivalent. Art-direction feels like a closer parallel, especially as tools like “inpainting” which allow you to replace specific parts of an image, and finer-grained prompt iteration continue to develop.
The discussion should continue; people should experiment with what feels good to them and doesn’t make others feel devalued. Whatever the conclusion, at the moment “By [your name]” alone is, in my opinion, unethical and should be discouraged regardless of your enthusiasm for, or skepticism of, the space.
Continuing that line of thought, at the moment I believe AI-imagery should have specific bounded AI categories, tags, or areas on any website that shows art, and certainly in any sort of art awards. The idea of comparing art in a competition setting and deciding one piece of art is 'better' than another is kind of baffling to me, but ultimately it helps and supports artists. Artists as a group of people have an outsized and vastly under-appreciated positive impact on society that isn’t reflected enough in compensation or visibility.
We shouldn’t need to keep having the debate that sparked up around the Midjourney piece that won Jason Allen a few hundred dollars in the Colorado State Fair art competition. Start with an explicit “AI Art” category and go from there.
We often do this with photography, and sometimes (though less and less commonly) digital art. This way, the people who care about it can care about it, and the rest can more easily ignore it.
Saturating the market
The other major concern is that images generated from prompts that include artists’ names (such as “in the style of Greg Rutkowski”) are then published on the web and crawled, leeching SEO away from the actual artists and confusing people searching for actual art by the actual human person Greg Rutkowski. For many artists, that discoverability can be a primary source of income.
“It’s been just a month. What about in a year? I probably won’t be able to find my work out there because [the internet] will be flooded with AI art. That’s concerning.” — Greg Rutkowski, interview with MIT Technology Review
Unethical data usage
By far the biggest issue, in my opinion, is the ethics of how source data is gathered and used. Right now, many models fully or partially rely on datasets such as LAION-5B for their source data. LAION, a non-profit, builds enormous datasets of billions of images and corresponding text descriptions, assembled from the alt text and links in web pages crawled by Common Crawl, another non-profit. The explicit intent is for these datasets to be used for research purposes, not commercial application. Companies in this space are adding “Research” to their names in a transparent attempt to get around this, but it’s clear they’re monetizing and plan to profit from the application of these models.
Given the sheer size of these datasets, they contain enormous numbers of copyrighted images, including work by living, working artists (and otherwise).
The images Common Crawl has scraped, and LAION has exposed, were originally published by their respective authors with a huge range of intended uses, but it’s safe to say that “training machine learning models to put themselves out of a job” isn’t a common one.
It’s common to see artists imply that the generated art is plagiarizing or stealing their work. It’s likened to “searching Google Images” and passing off the work as your own. I think this is an overly reductive and inaccurate way of describing how machine learning models generate images, but that’s not particularly important, as it doesn’t really affect the solution.
I think the only truly ethical way forward for AI imagery is to create an open training dataset that people have to explicitly opt in to. The financial side would need to be worked out, but perhaps there are small payments for initially submitting images to be used as training data, and maybe some kind of royalties system based on prompts that explicitly use artists’ names (and much, much higher royalties for anything used in commercial work)? This could be scaled with some kind of CC-BY-type license that is explicitly for use in AI training data, exposed through image metadata somehow.
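To make that opt-in idea concrete, here’s a tiny sketch of how a dataset builder might honor it. To be clear, the license tag, the record shape, and the field names below are all hypothetical assumptions for illustration; no such standard exists today.

```python
# Hypothetical sketch: filter scraped image records down to those whose
# (assumed) metadata explicitly opts in to AI training use.
# The license tag "CC-BY-AITRAIN" and the record shape are invented here.

OPT_IN_LICENSES = {"CC-BY-AITRAIN"}  # hypothetical AI-training license tag


def opted_in(record: dict) -> bool:
    """Return True only if the image's metadata explicitly opts in."""
    meta = record.get("metadata", {})
    return meta.get("license") in OPT_IN_LICENSES


def build_training_set(records: list[dict]) -> list[dict]:
    # The default is exclusion: anything without an explicit opt-in is dropped.
    return [r for r in records if opted_in(r)]


records = [
    {"url": "https://example.com/a.png",
     "metadata": {"license": "CC-BY-AITRAIN", "author": "Artist A"}},
    {"url": "https://example.com/b.png",
     "metadata": {"license": "All rights reserved"}},
    {"url": "https://example.com/c.png", "metadata": {}},  # no metadata: excluded
]

training_set = build_training_set(records)
```

The important design choice is the default: with no metadata at all, an image stays out of the training set, which is the opposite of how today’s scraped datasets behave.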
Perhaps a system like this augments a more curated open data set that excludes data from sources such as Behance, Dribbble, ArtStation, DeviantArt, etc, but can still learn things like “this is what a human looks like”, “this is what a tree looks like”, and that sort of thing.
Note: ArtStation has recently been under fire from artists for not doing enough to protect their interests. ArtStation responded with a “NoAI” tagging initiative, which allows users to opt out of having their artwork used to train models, but an opt-out approach doesn’t go far enough, and the community is letting them know that.
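For contrast with the opt-in approach I’m arguing for, here’s what a “NoAI”-style opt-out filter looks like in the same sketch form. Again, the tag name and record shape are assumptions for illustration, not ArtStation’s actual implementation:

```python
# Hypothetical sketch of an opt-out ("NoAI"-style) filter: the default is
# inclusion, and only explicitly tagged images are dropped.
# Field names are invented for illustration.

def respects_opt_out(record: dict) -> bool:
    return "noai" not in record.get("tags", [])


records = [
    {"url": "https://example.com/a.png", "tags": ["noai"]},  # artist opted out
    {"url": "https://example.com/b.png", "tags": []},        # untagged: included
    {"url": "https://example.com/c.png"},                    # no tags at all: included
]

kept = [r for r in records if respects_opt_out(r)]
```

Notice how the burden flips: any image whose author never heard about the tag, or published before it existed, gets swept in by default. That default is exactly why opt-out feels insufficient to so many artists.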
This space is moving fast. Tech advancements can be exciting (especially ones that feel like magic), but people and entire industries can get hurt along the way. In many cases, such as with the transition from coal & gas to renewable energy, the tech advancements can be largely net-positive. It’s easy to justify the benefits of combatting climate change vs the human cost of job losses, or industry profits.
The story here isn’t the same though. The people losing out are artists, and replacing artists en masse with machines is a net negative for humanity. We should think about ways to use this technology to augment and enhance tools, to help create new aesthetics, but with an eye to keeping people at the centre, with all their imagination, intent, and complexity intact.
Overall I would say people should consider the following:
Let companies like Midjourney, LAION, and StabilityAI know that an alternative, ethical dataset needs to be invested in. If you work in this space, advocate for it, build it. It’s an interesting problem to be solved.
Credit and label AI-generated imagery appropriately, when using for non-commercial use. Don’t oversell your part in the process.
Don’t prompt using artists names. Period.
Listen to each other. Be respectful when faced with criticism, and give criticism with good intent. It’s not enough to just say “I don’t see the ethical issue here”. It’s not always about you. If others are saying there is an ethical issue, that’s valid and you should consider listening better or reading more.
Look for ways to help people bridge from AI art to traditional art. In the way Guitar Hero helped inspire many people to learn to play guitar, there’s an opportunity here to help people onboard from AI generation to actual techniques. Deconstruct the aesthetics of AI images: what works, what doesn’t, and how you might approach the same image using other techniques.
Support artists you enjoy. Buy prints, share their work, pay for commissions. Support the Concept Art Association, which is lobbying to help protect artists from AI technologies.
Check out the significantly less controversial AI art by Refik Anadol. This isn’t text-to-image generation, but visualising non-visual data in unique ways to form abstract images.
Supernova hosted a fireside chat moderated by Dan Mall on the future of design tokens. Donna Vitan, Jina Anne, Kaelig Deloumeau-Prigent, and Nathan Curtis chat about their visions for the future of design tokens and how individuals can put them into practice.