Artists, AI, and the Courts: Two Cases and a Complicated Issue


Article by Dave Cabrera

Most relevant to us at RenderHub, millions of users are at this moment creating new art with generative AI models like Midjourney and Stable Diffusion- models trained on databases of millions of pieces of art scraped from all over the internet.
And more people creating new art is great, but herein lies the catch. Most of the millions of people whose images are used in the databases on which AI models train- perhaps their personal photos uploaded to a sharing site like Pinterest, perhaps art they created themselves and uploaded to a gallery- never gave permission for their property to be used in this way. It just happened one day, and it isn't reversible.
We can't really say "they should have known" here; before generative AI very recently reached this level of quality and availability, most people could not have even imagined this scenario in the first place. The idea that an artist should simply never post art anywhere unless they're willing to let anyone do anything with it is both infeasible and culturally self-destructive. For an artist who needs to be seen to advance their career, but who doesn't want to be copied for profit, there is no reasonable opt-out.
And wait a second: these are for-profit companies charging users for a product that they're training on millions of other people's data without those people's permission. In today's world- where there's big money in the business of data- what is the difference between the highly profitable methods of the big AI companies and mass theft?
That's a legal question that nobody has the answer to just yet, but American courts are working on it. There are both livelihoods at stake and a lot of money to be made on the backs of those livelihoods, so expect a long, drawn-out and costly legal fight before we have any real answers.
Andersen Vs. Stability

The plaintiffs here are cartoonist Sarah Andersen and two fellow artists, who have brought a class-action suit against Stability AI and other generative AI companies. Their position is straightforward: by training their models on millions of indiscriminately scraped images without permission from anyone involved in their creation, the artists allege, these companies are knowingly and directly profiting from the unpaid and unwilling collaboration of thousands of unwitting individuals.
In response to the suit, the AI companies asked for a dismissal, noting that the lawsuit never names or directly identifies any particular work by the artists that the models allegedly used, and arguing that the models' output bears no resemblance to the artists' work.
It is worth noting that upon typing her own name into Stable Diffusion, artist and plaintiff Sarah Andersen got a result clearly drawn from the unique character who appears in her viral webcomics. It's also trivially easy to find homespun AI models, and prompt libraries for existing models, trained directly on particular artists' work and created specifically to pump out style copies of that work. (Including Andersen's!)
Stable Diffusion is open-source and people are going to do what they please with it, but does nobody bear any responsibility when someone uses it to build a machine that turns out good-enough ripoffs of a living, working artist?
Judge William Orrick did indeed dismiss the suit, unconvinced by its lack of hard evidence and doubtful that imitating an artist via text prompt can, by itself, constitute copyright infringement. That said, his issues were largely with the quality of the lawsuit itself, and he left the artists the option of making their case again with a stronger, more specific complaint.
This isn't a conclusive ruling on the subject- the dismissal sounds more like the judge wants the plaintiffs to give him a more convincing argument- and as the door remains open, surely more people than just these three artists will make their case before America's courts.
Thaler Vs. Perlmutter
Now while you're pondering that, here's a second question: say you build yourself an AI- for our purposes let's say it's not trained on anybody's copyrighted data- and the AI spits out a new image. For purposes of copyright, who owns that image? Is the machine itself the creator? Does it hold the copyright?
That's what computer scientist Stephen Thaler argued to the US Copyright Office when he attempted to copyright his Creativity Machine's output, a piece called "A Recent Entrance to Paradise." The Creativity Machine is a system of neural networks designed to crank out unique creative work with minimal input from its human creator, and "Paradise" is one of its early pieces. Thaler- a firm believer that his AIs possess the same degree of consciousness that a human being does- makes his copyright claim not for himself but on behalf of the machine, naming it as the work's author.
The Copyright Office asserts in its rejection that "human authorship" is specifically required to claim copyright; trying to assign copyright to a machine is a non-starter under the law. A photo is still the photographer's copyright, not the camera's, after all. Thaler calls that "speciesism".
Thaler's beliefs are pretty far out there, and his crusade is unlikely to succeed. But as the technology continues to advance, others will certainly try to make the same argument. Perhaps the day will even come when the Copyright Office has to reconsider its requirement of human authorship.
To be continued
This is a complicated issue- even more so once we get the court system involved- and one without an easy answer. The fact is, nobody knows how the fight between AI and artists will turn out. Nobody even has an idea of where it's going! The only thing we can really do is wait for the next case.
