AI Art Generators Face Legal Challenges As Their Ethical Shortfalls Continue To Surface

After months of pushback and criticism from artists across a wide swath of industries, Stability AI, the creator of the popular AI art tool Stable Diffusion, has been hit with a pair of lawsuits over copyright violations that could change how the company does business. Most notably, photography giant Getty Images announced this week that it had started legal proceedings alleging “Stability AI unlawfully copied and processed millions of images protected by copyright.”
Stability AI was also named late last week in a California class action suit that alleges “direct copyright infringement, vicarious copyright infringement related to forgeries, violations of the Digital Millennium Copyright Act (DMCA), violation of class members’ rights of publicity, breach of contract related to the DeviantArt Terms of Service, and various violations of California’s unfair competition laws.”
Late last year, artist Darek Zabrocki, whose resume includes Planet of the Apes, Netflix’s Love, Death + Robots, Sonic The Hedgehog 2 and major video game franchises like Call of Duty: Modern Warfare, Assassin’s Creed and The Witcher, found that his artwork had been used in training data as well. Zabrocki told Paste that he has always been excited about incorporating new technologies into his work. But while he finds the tech behind Stable Diffusion promising, he says the underlying ethics are completely off.
“AI generators are trained on human art and human creations. It takes artists’ work without their consent in order to make ‘new’ pictures or to ‘copy’ a style of an artist to feature that on an AI-generated picture,” Zabrocki told Paste via email. “Taking a chunk of artists’ work, making a mish-mash of these art pieces in order to spit out a new piece within a minute is what makes AI problematic, and it seriously violates the copyright laws on many levels. Especially if such ‘art’ is going to be used commercially.”
In September 2022, a company known as Spawning created the site HaveIBeenTrained.com to search the LAION-5B image set, a collection of 5.8 billion images that has been used to train popular AI art models. Shortly thereafter, Ars Technica reported that a California-based AI artist who goes by the name Lapine discovered private medical record photos taken by her doctor in 2013 referenced in the LAION-5B set. In December, Stability AI announced it would allow artists to remove their work from the training dataset for an upcoming Stable Diffusion 3.0 release.