Monday, December 23, 2024

Artists Score Major Win in Copyright Case Against AI Art Generators

Artists suing the companies behind generative artificial intelligence art generators have cleared a major hurdle in a first-of-its-kind lawsuit over the uncompensated and unauthorized use of billions of images downloaded from the internet to train AI systems, with a federal judge allowing key claims to move forward.

U.S. District Judge William Orrick on Monday advanced all copyright infringement and trademark claims in a pivotal win for artists. He found that Stable Diffusion, Stability’s AI tool that can create hyperrealistic images in response to a prompt of just a few words, may have been “built to a significant extent on copyrighted works” and created with the intent to “facilitate” infringement. The order could entangle in the litigation any AI company that incorporated the model into its products.

Claims against the companies for breach of contract and unjust enrichment, plus violations of the Digital Millennium Copyright Act for removal of information identifying intellectual property, were dismissed. The case will move forward to discovery, where the artists could uncover information about the way in which the AI firms harvested the copyrighted material that was then used to train large language models.

Karla Ortiz, who brought the lawsuit, has worked on projects like Black Panther, Avengers: Infinity War and Thor: Ragnarok and is credited with coming up with the main character design for Doctor Strange. Amid the rise of AI tools in the production pipeline, concept artists like Ortiz are taking stock of further displacement down the road if the tech advances and courts side with AI firms on certain intellectual property questions posed by the tools.

Widespread adoption of AI in the movie­making process will depend largely on how courts rule on novel legal issues raised by the tech. Among the few considerations holding back further deployment of the tech is the specter of a court ruling that the use of copyrighted materials to train AI systems constitutes copyright infringement. Another factor is that AI-generated works are not eligible for copyright protection.

The lawsuit, filed last year, revolves around the LAION dataset, which was built using five billion images that were allegedly scraped from the internet and utilized by Stability and Runway to create Stable Diffusion. It implicated Midjourney, which trained its AI system using the model, as well as DeviantArt for using the model in DreamUp, an image generation tool.

In their bids for dismissal, Stability and Runway challenged the artists’ arguments that they induced copyright infringement and that the Stable Diffusion models are themselves infringing works. Under this theory, the companies induce infringement whenever a third party uses the models they distribute, exposing them to potentially massive damages.

Siding with the artists, Orrick concluded that they sufficiently alleged that Stable Diffusion is built off of copyrighted material and that the “way the product operates necessarily invokes copies or protected elements of those works.” In a finding that could spell trouble for AI companies that used the model, he said that Stability and Runway could have promoted copyright infringement and that Stable Diffusion was “created to facilitate that infringement by design.”

When it dismissed infringement claims last year, the court found that the theory of the case was “unclear” as to whether copies of training images are stored in Stable Diffusion and then utilized by DeviantArt and Midjourney. It pointed to the defense’s arguments that it’s impossible for billions of images “to be compressed into an active program,” like Stable Diffusion.

Following the dismissal, the artists amended one of the prongs of their lawsuit to claim that Midjourney separately trained its product on the LAION dataset and that it incorporates Stable Diffusion into its own product.

In another loss for the AI companies, the court rebuffed arguments that the lawsuit must identify specific, individual works that each of the artists who filed the complaint alleges were used for training. 

“Given the unique facts of this case – including the size of the LAION datasets and the nature of defendants’ products, including the added allegations disputing the transparency of the ‘open source’ software at the heart of Stable Diffusion – that level of detail is not required for plaintiffs to state their claims,” the order stated.

In a May hearing, DeviantArt warned that several other companies would be sued if the artists’ infringement claims against firms that simply utilized Stable Diffusion, and had no part in creating it, survive dismissal.

“The havoc that would be wreaked by allowing this to proceed against DeviantArt is hard to state,” said Andy Gass, a lawyer for the company. “Here, we really have an innumerable number of parties no differently situated than [us] that would be subject to a claim.”

Gass added that DeviantArt “didn’t develop any gen AI models” and that “all [it’s] alleged to have done is take StabilityAI’s Stable Diffusion model, download it, upload it and offer a version,” DreamUp, to users.

The court also stressed that Midjourney produced images similar to artists’ works when their names were used as prompts. This, along with claims that the company published images incorporating plaintiffs’ names on a site showcasing the capability of its tool, served as the basis for allowing trademark claims to move forward. The court said that whether a consumer would be misled by the company’s actions into believing that the artists endorsed its product can be tested at a later stage of the case.

In a thread on Discord, the platform where Midjourney operates, chief executive David Holz posted the names of roughly 4,700 artists whose work he said its AI tool can replicate. This followed Stability chief executive Prem Akkaraju saying that the company downloaded troves of images from the internet and compressed them in a way that can “recreate” any of those images.

In discovery, lawyers for the artists are expected to pursue information related to how Stability and Runway built Stable Diffusion and the LAION dataset. They represent Sarah Andersen, Kelly McKernan and Ortiz, among several others.
