AI’s Creative Ambitions: A Case Review of Thaler v. Perlmutter (2023)

By: Stella B. Haynes Kiehn

Is it possible for AI to achieve genuine creativity? Inventor and self-dubbed “AI Director” Dr. Stephen Thaler (“Thaler”) has been attempting to prove to the U.S. Copyright Office for the past several years not only that AI can be creative, but also that AI can create works capable of meeting copyright standards.

On November 3, 2018, Thaler filed an application to register a copyright claim for the work A Recent Entrance to Paradise. Though Thaler filed the application himself, he listed “The Creativity Machine” as the author of the work and himself as the copyright claimant. According to Thaler, A Recent Entrance to Paradise was drawn and named by the Creativity Machine, an AI program. The artwork “depicts a three-track railway heading into what appears to be a leafy, partly pixelated tunnel.” In his copyright application, Thaler noted that A Recent Entrance to Paradise “was autonomously created by a computer algorithm running on a machine” and that he was “seeking to register this computer-generated work as a work-for-hire to the owner of the Creativity Machine.”

The U.S. Copyright Office denied Thaler’s application primarily on the grounds that his work lacked the human authorship necessary to support a copyright claim. On a second request for reconsideration of refusal, “Thaler did not assert that the Work was created with contribution from a human author … [but that] the Office’s human authorship requirement is unconstitutional and unsupported by case law.” The U.S. Copyright Office once again denied the application. Upon receiving this decision, Thaler appealed the ruling to the U.S. District Court for the District of Columbia.

On appeal, Judge Beryl A. Howell reiterated that “human authorship is an essential part of a valid copyright claim.” Notably, Section 101 of the Copyright Act requires that a work have an “author” to be eligible for copyright. Drawing upon decades of Supreme Court case law, the Court concluded that the author must be human, for three primary reasons.

First, the Court stated that the Framers adopted the Copyright Clause of the U.S. Constitution to incentivize the creation of original works of authorship. This incentive is often financial, and non-human actors, unlike human authors, do not require financial incentives to create. “Copyright was therefore not designed to reach” artificial intelligence systems.

Second, the Court pointed to the legislative history of the Copyright Act of 1976 as evidence against Thaler’s copyright claim. The Court looked to the Copyright Act of 1909’s provision that only a “person” could “secure copyright” for a work. Additionally, the Court found that the legislative history of the Copyright Act of 1976 fails to indicate that Congress intended to extend authorship to nonhuman actors, such as AI. To the contrary, the congressional reports stated that Congress sought to incorporate the “original work of authorship” standard “without change.”

Finally, the Court noted that case law has “consistently recognized” the human authorship requirement. The decision pointed to the U.S. Supreme Court’s 1884 opinion in Burrow-Giles Lithographic Company v. Sarony as support for that requirement. That case, which upheld authorship rights for photographers, found it significant that the human creator, not the camera, “conceived of and designed the image and then used the camera to capture the image.”

Ultimately, this decision is consistent with recent case law and administrative opinions on this topic. In mid-2024, the Copyright Office plans to issue guidance on AI and copyright issues, in response to a survey of AI industry professionals, copyright applicants, and legal professionals. One of Thaler’s main supporters in this legal battle is Ryan Abbott, a professor of law and health sciences at the University of Surrey in the UK and a prominent AI litigant. Abbott is the creator of the Artificial Inventor Project, a group of intellectual property lawyers and an AI scientist working on IP rights for AI-generated outputs. The Artificial Inventor Project is currently working on several other cases for Thaler, including attempts to patent two of the Creativity Machine’s other “authored” works. While the District Court’s decision seems to mark the end of Thaler’s quest to copyright A Recent Entrance to Paradise, the fight for AI authorship rights in copyright is only beginning.

Liability, Authorship, & Symmetrical Causation in AI-Generated Outputs

By: Jacob Alhadeff

Copyright has insufficiently analyzed causation for both authorship and liability because, until now, causation was relatively obvious. If someone creates a painting, then they caused the work and receive authorial rights. If the painting turned out to depict Mickey Mouse, then the painter may be liable for an infringing reproduction. However, recent technological advances have challenged the element of causation in both authorship and infringement. In response, recent law and scholarship have begun to address these issues. But because they have addressed causation in isolation, current analyses have provided logically or ethically insufficient answers. In other words, authorial causation has ignored potential implications for an entity’s infringement liability, and vice versa. Regardless of how the law responds, generative AI will require copyright to explore and enumerate the previously assumed causation analyses for both infringement and authorship. This blog explores how generative AI exposes the logical inconsistencies that result from analyzing authorial causation without analyzing causation for infringing reproductions.

Generative AI largely requires the following process: (1) an original artist creates works, (2) a developer trains an AI model on these works, and (3) an end-user prompts the AI to generate an output, such as “a mouse in the style of Walt Disney.” This generative AI process presents a novel challenge for copyright in determining who or what caused the output because generative AI challenges conventional notions of creation.

Causing Infringement

Andersen et al. recently filed a complaint against Stability AI, developer of one of the most popular text-to-art foundation models. This class action alleges that Stability AI is directly liable for infringing reproductions that result from end-user prompted generations. However, in a recent decision more closely analyzing causation and volition in infringement, the Ninth Circuit found that “direct liability must be premised on conduct that can reasonably be described as the direct cause of infringement.” Stability AI should not be found directly liable for infringing these artists’ copyrights, in part because Stability AI cannot reasonably be said to be the direct cause of infringement. Such a finding would be similar to holding Google liable for reproducing images of Mickey Mouse on people’s computer screens when they search for “Mickey Mouse.”

This lawsuit is particularly relevant since end-users have prompted thousands of generations that include the phrase “Mickey Mouse,” and many appear substantially similar to Disney’s Mickey. If thousands of end-users have intentionally prompted the AI to generate Mickey Mouse, then what volitional conduct can most reasonably be described as the direct cause of infringement? It is clearly the end-user’s. However, what if the end-user simply prompted “a cartoon mouse” and the AI generated an infringing image of Mickey? Here, the end-user may not have intended to generate Mickey, and reasonable notions of fairness may not find the end-user to be the most direct cause of infringement. However, copyright is a strict liability tort, meaning that liability attaches regardless of a reproducer’s intent. Therefore, unless copyright applies an intentional or negligence theory of direct liability, which it should not, whoever or whatever is liable for infringing outputs will be liable for both of the infringing outputs: “Mickey Mouse” and “a cartoon mouse.” Such an outcome not only feels deeply unfair, but it is also unreasonable to say that the end-user is the direct cause of infringement when prompting “a cartoon mouse,” and vice versa.

Courts called to answer similar questions have recently grappled with these same issues of volition and causation. Generally, courts have been hesitant to find companies liable for actions that cannot reasonably be deemed volitional conduct causing infringement. The court in Cartoon Network, for example, found that “volition is an important element of direct liability.” In LoopNet, the court found that “the Copyright Act… requires conduct by a person who causes in some meaningful way an infringement.” In this way, the law has so far mirrored our prior intuitions of fairness. Legal scholarship has noted that when copyright law has grappled with novel technology, it has found that causation in infringement requires volition that “can never be satisfied by machines.” This reasoning, as applied to generative AI, may mean that an AI company should not normally be directly liable for outputs that infringe the reproduction right.

Causing Authorship

This causation analysis has also begun for authorship rights. One copyright scholar, Shyamkrishna Balganesh, compellingly argues that copyright law should explicitly enumerate a causal analysis for granting authorship rights. Such an analysis would follow tort law’s two-step causation analysis: (1) creation in fact and (2) legal creation. Aviv Gaon surveys authorial options in The Future of Copyright in the Age of AI, writing that some favor assigning authorship to the end-user prompter or the AI developer, some would treat outputs as joint works, and others would even attribute authorship to the AI itself. The simplest legal option would be to treat AI like a tool and grant authorship to the end-user. This is exactly how the law responded when photography challenged conventional notions of creativity and authorship. Opponents of finding photographers to be authors argued that photography was “merely mechanical, with no place for… originality.” The Supreme Court in Burrow-Giles instead found that the photographer “gives effect to the idea” and is the work’s “mastermind” deserving of copyright.

However, treating AI like a conventional tool is an inconsistent oversimplification in the current context. Not only is it often less apt to call an end-user prompter the ‘mastermind’ of the output, but AI presents a more attenuated causation analysis that should not result in a copyright for all AI generations. As an extreme example, recent AIs are employing other AIs as replicable agents. In these circumstances, a single prompt could catalyze one AI to automatically employ other AI agents to generate numerous potentially creative or infringing outputs. Here, the most closely linked human input would be a prompt that could not be said to have masterminded or caused the many resultant expressive outputs. Under Balganesh’s framework, no human could reasonably be found to be the factual or legal cause of the output. Such use cases will further challenge the law’s notions of foreseeability as reasonable causation becomes increasingly attenuated.

Importantly, in the face of this ongoing debate and scholarship, the Copyright Office recently made its determination on authorship for AI-generated works. In February 2023, the U.S. Copyright Office amended its decision regarding Kristina Kashtanova’s comic book, Zarya of the Dawn, stating that the exclusively AI-generated content is not copyrightable. Ms. Kashtanova created her comic book using Midjourney, a text-to-art AI, to generate much of the visual art involved. The Copyright Office stated that her “selection, coordination, and arrangement” of AI-generated images is copyrightable, but not the images themselves. The Office’s decision means that all exclusively AI-generated content, like natural phenomena, is not the type of content copyright protects and is freely accessible to all. The Office’s decision was based on its interpretation that “it was Midjourney—not Kashtanova—that originated the ‘traditional elements of authorship.’” The Office’s decision is appropriate policy, but when analyzed in conjunction with the current law on causation in infringement, it is inconsistent and may result in an asymmetrical allocation of the rights and duties that attend creation. Relevantly, how can a machine that is incapable of volition originate art? This is one of many ontological paradoxes that AI will present to the law.

Symmetrically Analyzing Causation

Two things are apparent. First, there is a beautiful symmetry here: AI generations are uncopyrightable, and the machines originating such works likewise lack sufficient volition to infringe. If such a system persists, then copyright law may not play a major role in generative AI, though this is doubtful. Second, such inconsistencies inevitably result from causation analyses that, for mechanically analogous actions, analyze only one of infringement or authorship. Instead, I propose that copyright law symmetrically analyze mechanically analogous causation for both authorship and infringement of the reproduction right. Since copyright law has only recently begun analyzing causation, it is understandable, and potentially desirable, that the law does not yet require this symmetrical causation. After all, the elements of authorship and infringement are usefully different. However, what has been consistent throughout copyright is that when an author creates, they both risk an infringing reproduction and stand to gain authorship rights. In other words, by painting, a painter may create a valuable copyrightable work, but they also may paint an infringing reproduction of Mickey Mouse. Asymmetrical causation for AI art could be analogized to the painter receiving authorship rights while the company that made the paintbrush is held liable for the painter’s infringing reproductions. Such a result would not incentivize the painter to avoid infringement, and would thereby improperly balance the risks and benefits of creation. Ultimately, if the law decides that either the end-user or the AI company is the author, then the other entity should not be asymmetrically liable for infringing reproductions. Otherwise, the result will be ethically and logically inconsistent. After all, as Antony Honoré wrote in Responsibility and Fault, in our outcome-based society and legal system, we receive potential benefit from, and are responsible for the harms reasonably connected to, our actions.

AI Art: Infringement is Not the Answer

By: Jacob Alhadeff

In the early 2000s, courts determined that the emerging technology of peer-to-peer “file-sharing” was massively infringing and categorically abolished its use. Federal appellate courts and the Supreme Court found that Napster, Aimster, and Grokster were secondarily liable for the reproductions of their users. Each of these companies facilitated or instructed their users on how to share verbatim copies of media files with millions of other people online. In this nascent internet, users were able to download each other’s music and movies virtually for free. In response, the courts held these companies liable for the infringements of their users and, in so doing, functionally destroyed that form of peer-to-peer “file-sharing.” File-sharing and AI are not analogous, but multiple recent lawsuits present a similarly existential question for AI art companies. Courts should not find AI art companies massively infringing and risk fundamentally undermining these text-to-art AIs.

Text-to-art AI, aka generative art or AI art, allows users to type in a simple phrase, such as “a happy lawyer,” and the AI will generate a nightmarish representation of this law student’s desired future. 

Currently, this AI art functions only because (1) billions of original human authors throughout history have created art that has been posted online, (2) companies such as Stability AI (“Stable Diffusion”) or OpenAI (“Dall-E”) download and copy these images to train their AI, and (3) end-users prompt the AI, which then generates an image that corresponds to the input text. Due to the large data requirements, all three of these steps are necessary for the technology, and finding either the second or third step generally infringing poses an existential threat to AI art.

In a recent class action filed against Stability AI, et al. (“Stable Diffusion”), plaintiffs allege that Stable Diffusion directly and vicariously infringed the artists’ copyrights through both the training of the AI and the generation of derivative images, i.e., steps 2 and 3 above. Answering each of these claims requires complex legal analyses. However, functionally, a finding of infringement on any of these counts threatens to fundamentally undermine the viability of text-to-art AI technology. Therefore, regardless of the legal analysis (which likely points in the same direction anyway), courts should not find Stable Diffusion liable for infringement, because doing so would contravene the constitutionally enumerated purpose of copyright—to incentivize the progress of the arts.

In general, artists have potential copyright infringement claims against AI art companies (1) for downloading their art to train their AI and (2) for the AI’s substantially similar generations that the end-user prompts. In the conventional text-to-art AI context, these AI art companies should not be found liable for infringement in either instance because doing so would undermine the progress of the arts. However, a finding of non-infringement leaves conventional artists with unaddressed cognizable harms. Neither of these two potential outcomes is ideal.

How courts answer these questions will shape how AI art and artists function in this brave new world of artistry. However, copyright infringement, the primary mode of redress that copyright protection offers, does not effectively balance the interests of the primary stakeholders. Instead of relying on the courts, Congress should create an AI Copyright Act that protects conventional artistry, ensures AI Art’s viability, and curbs its greatest harms. 

Finding AI Art Infringing Would Undermine the Underlying Technology

A finding of infringement for the underlying training or the outputs undermines AI Art for many reasons: copyright’s large statutory damages, the low bar for granting someone a copyright, that works are retroactively copyrightable, the length of copyright, and the volume of images the AI generates and needs for training.

First, copyright provides statutory damages of $750 to $30,000 per infringed work, and up to $150,000 per work if the infringement is willful. Determining the statutory value of each infringement is likely moot because of the massive volume of potential infringements. Moreover, if infringement is found, AI art companies would likely be enjoined from functioning, as occurred in the “file-sharing” cases of the early 2000s.

Second, the threshold for a copyrightable work is incredibly low, so it is likely that many of the billions of images used in Stable Diffusion’s training data are copyrightable. In Feist, the Supreme Court wrote, “the requisite level of creativity is extremely low [to receive copyright]; even a slight amount will suffice. The vast majority of works make the grade quite easily.” This incredibly low bar means that each of us likely creates several copyrightable works every day. 

Third, copyright attaches automatically upon a work’s creation, meaning that the law does not require the plaintiff to have registered their work with the Copyright Office for the work to be protected. Therefore, an author can register their copyright after they are made aware of an infringement and still have a valid claim. If these companies were found liable, then anyone with a marginally creative image in a training set would have a potentially valid claim against a generative art company.

Fourth, the copyright monopoly lasts for 70 years after the death of the author. Therefore, many of the copyrights in the training set have not lapsed. Retroactive copyright registration combined with the extensive duration of copyrightability means that few of the training images are likely in the public domain. In other words, “virtually all datasets that will be created for ML [Machine Learning] will contain copyrighted materials.”

Finally, as discussed earlier, the two bases for infringement claims against the AI art companies are (1) copying to train the AI and (2) copying in the resultant end generation. These bases would likely result in billions and millions of potential claims, respectively. First, Stable Diffusion is trained on approximately 5.85 billion images downloaded from the internet. Given the four characteristics of copyright above, it is likely that if infringement were found, many or all of the copyright owners of these images would have a claim against AI art companies. Second, regarding infringement of end generations, OpenAI has suggested that Dall-E produces millions of generations every day. If AI art companies were found liable for infringing outputs, then any generation found to be substantially similar to an artist’s copyrighted original would be the basis of another claim. This would open them up to innumerable infringement claims every day.
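The scale of that exposure can be made concrete with a back-of-the-envelope calculation. This is purely illustrative and assumes, counterfactually, that every training image supported a valid claim; the figures are the ones quoted above:

```python
# Back-of-the-envelope damages exposure, purely illustrative.
# Figures from the text: ~5.85 billion training images and statutory
# damages of $750 to $30,000 per work ($150,000 if willful).

TRAINING_IMAGES = 5_850_000_000
STATUTORY_MIN = 750              # dollars per infringed work
STATUTORY_MAX_WILLFUL = 150_000  # dollars per work, willful infringement

# If the statutory minimum applied to every training image:
min_exposure = TRAINING_IMAGES * STATUTORY_MIN
print(f"Exposure at the statutory minimum: ${min_exposure:,}")

# Even if only one image in a million supported a willful claim:
rare_claims = TRAINING_IMAGES // 1_000_000
print(f"Exposure on 1-in-a-million willful claims: ${rare_claims * STATUTORY_MAX_WILLFUL:,}")
```

At the statutory minimum alone the figure is roughly $4.4 trillion, which is the arithmetic behind characterizing the exposure in trillions of dollars.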

At the same time, generative art is highly non-deterministic, meaning that, on its face, it is hard to know what the AI will generate before it is generated. The AI’s emergent properties, combined with the subjective and fact-specific “substantial similarity” analysis of infringement, do not lend themselves to an AI Art company ensuring that end-generations are non-infringing. More simply, from a technical perspective, it would be near-impossible for an AI art company to guarantee that their generations do not infringe on another’s work. 
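This non-determinism can be illustrated in miniature. The toy function below is not a real diffusion model; it simply shows how output varies with a hidden random seed even when the prompt is identical, which is why a provider cannot enumerate its outputs in advance to screen them for substantial similarity:

```python
import random

def toy_generate(prompt: str, seed: int) -> list[float]:
    """Stand-in for a text-to-image model: the output depends on the
    prompt AND a random seed, so identical prompts can yield different
    'images' (here, fake pixel values)."""
    rng = random.Random(f"{prompt}:{seed}")
    return [round(rng.random(), 3) for _ in range(4)]

# Same prompt, different seeds -> different outputs the provider
# could not have screened ahead of time.
a = toy_generate("a cartoon mouse", seed=1)
b = toy_generate("a cartoon mouse", seed=2)
print("same prompt, identical output?", a == b)
```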

Finding AI art companies liable for infringement may open them up to trillions of dollars in potential copyright lawsuits or they may simply be enjoined from functioning.

An AI Copyright Act

Instead, Congress should create an AI Copyright Act. Technology forcing a reevaluation of copyright law is not new. In 1998, Congress passed the Digital Millennium Copyright Act (DMCA) to fulfill U.S. treaty obligations under the World Intellectual Property Organization (WIPO), reduce piracy, and facilitate e-commerce. While the DMCA’s overly broad application may have stifled research and free speech, it provides an example of Congress recognizing copyright’s limitations in addressing technological change and responding legislatively. What was true in 1998 is true today.

Finding infringement for a necessary aspect of text-to-art AI may fundamentally undermine the technology and run counter to the constitutionally enumerated purpose of copyright—“to promote the progress of science and useful arts.” On the other hand, finding no infringement leaves these cognizably harmed artists without remedy. Therefore, Congress should enact an AI Copyright Act that balances the interests of conventional artists, technological development, and the public. This legislation should aim to curb the greatest harms posed by text-to-art AI through a safe harbor system like that in the DMCA. 

First AI Art Generator Lawsuit Hits the Courts

By: HR Fitzmorris

Your social media accounts may have recently been inundated with spookily elegant renderings of your once-familiar friends’ faces. Or, if you’re on a particular side of the internet, you may have seen any number of info-graphics scolding users for contributing to the devaluation of flesh and blood artists’ livelihoods. What you may not have seen is news of the recent class-action lawsuit filed on behalf of artists who are unhappy with technological advances that, in their view, were ‘advanced’ through art theft.

The Complaint

In the first-of-its-kind proposed class action, named plaintiffs allege copyright infringement, asking for damages to the tune of one billion dollars. Specifically, artists allege that the named AI companies downloaded and fed billions of copyrighted images into their AI software to ‘train’ the artificial intelligence software to create its own digital ‘art.’ In addition to damages, the plaintiffs have asked the court to issue an injunction preventing the AI companies from using artists’ work without permission and requiring the companies to seek appropriate licensing in the future.

The Plaintiffs

The named plaintiffs, who will represent the pool of affected artists if the class is certified by the court, are Sarah Andersen, a popular webcomic artist; Kelly McKernan, who specializes in colorful watercolor and acryla gouache paintings; and Karla Ortiz, a professional concept artist with clients such as Wizards of the Coast and Ubisoft.

In a New York Times opinion piece about the appropriation of her art by both the Alt-Right and artificial intelligence art generators, Ms. Andersen stated, “[t]he notion that someone could type my name into a generator and produce an image in my style immediately disturbed me.” She also explains that the appropriation made her “feel violated” by the way the AI stripped her artwork of its personal meaning and of her human mark that she honed and defined through the “complex culmination of [her] education, the comics [she] devoured as a child and the many small choices that make up the sum of [her] life.” Clearly, for these artists, there is more at stake than the threat to their livelihoods.

The Defendants

The plaintiffs named four entities as defendants in the suit: Stability AI Ltd., Stability AI, Inc., Midjourney, Inc., and DeviantArt, Inc. Each of these companies has a hand in creating, hosting, or perpetuating the use of engines that use AI to create art.

The Legal Issues

The Stable Diffusion engine, for example, is described as a “deep learning, text-to-image model” that anyone can use “to generate detailed images conditioned on text descriptions.” In layperson’s terms, users input text (such as an artist’s name or a specific medium) to generate images with those attributes. This is the heart of the issue. In order to do this, the tool (and others like it) must be “trained,” which involves, in the words of Plaintiff Sarah Andersen:

[B]uil[ding] on collections of images known as “data sets,” from which a detailed map of the data set’s contents, the “model,” is formed by finding the connections among images and between images and words. Images and text are linked in the data set, so the model learns how to associate words with images. It can then make a new image based on the words you type in.

Stable Diffusion was built using a dataset that contained somewhere in the neighborhood of six billion images culled from the internet without regard to intellectual property and copyright laws or creator consent. Additionally, these companies are not building these engines out of the goodness of their hearts; they stand to make immense revenue. Stability AI, for example, is currently valued at approximately $1 billion.

The suit, which was filed in the Northern District of California, alleges violations of federal as well as state copyright laws, including “direct copyright infringement, vicarious copyright infringement related to forgeries, violations of the Digital Millennium Copyright Act (DMCA), violation of class members’ rights of publicity, breach of contract related to the DeviantArt Terms of Service, and various violations of California’s unfair competition laws.” The crucial argument for the plaintiffs is that “[e]very output image from the system is derived exclusively from the latent images, which are copies of copyrighted images. For these reasons, every hybrid image is necessarily a derivative work.” (emphasis added).

The defendant companies, though, will likely argue that some version of the “fair use doctrine” protects their activity. To prevail, the defendants must prove that their use of the images was sufficiently “transformative”—unlikely to be confused for, or usurp the market for, the original artwork. 

Whatever the court decides, this type of intersection between art and technology will likely remain a hotbed of intellectual and legal debate as artificial intelligence continues to grow in prevalence and accessibility.

AI Art “In the Style of” & Contributory Liability

By: Jacob Alhadeff

Greg Rutkowski illustrates fantastical images for games such as Dungeons & Dragons and Magic: The Gathering. Rutkowski’s name has been used thousands of times in generative art platforms such as Stable Diffusion and Dall-E, flooding the internet with thousands of works in his style. For example, type in “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski,” and Stable Diffusion will output something similar to Rutkowski’s actual work. Rutkowski is now reasonably concerned that his work will be drowned out by these hundreds of thousands of emulations, ultimately preventing customers from finding his work online.

Examples of images generated by Dream Studio (Stable Diffusion) in Rutkowski’s style.

These machine learning algorithms are trained using freely available information, which is largely a good thing. However, it may feel unfair that an artist’s copyrighted images are freely copied to train their potential replacement. Ultimately, nothing these algorithms or their owners are doing is copyright infringement, and there are many good reasons for this. However, in certain exceptional circumstances, like Rutkowski’s, it may seem that copyright law insufficiently protects human creation and unreasonably prioritizes computer generation.

A primary reason Rutkowski has no legal recourse is that the entity that trains its AI on Rutkowski’s copyrighted work is not the one generating the emulating art. Instead, thousands of end-users are collectively causing Rutkowski harm. Since distinct entities cause the aggregate harm, there is no infringement. By contrast, if Stable Diffusion had verbatim copied Rutkowski’s work to train its AI and then itself generated hundreds of thousands of look-alikes, this would likely be an infringement and unlikely to be fair. Understanding the importance of this separation is best done by walking through the process of text-to-art generation and analyzing each entity’s role in it.

Text-to-Image Copyright Analysis

To briefly summarize this process: billions of original human artists throughout history have created art that has been posted online. Then a group like Common Crawl scrapes those billions of images and their textual pairs from billions of web pages for public use. Later, a non-profit such as LAION creates a massive dataset that includes internet indexes and similarity scores between text and images. Subsequently, a company such as Stability AI trains its text-to-art generator, Stable Diffusion, on these text-image pairs. Notably, when a text-to-art generator uses the LAION database, it is not necessarily downloading the images themselves to train the AI. Finally, when the end-user goes to Dream Studio and types in the phrase “a mouse in the style of Walt Disney,” the AI generates unique images of Mickey Mouse.
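The distributed roles just described can be made concrete with a toy sketch of the pipeline. Every function name, URL, and value here is an illustrative stand-in, not any entity's actual code or data:

```python
# Toy sketch of the text-to-art pipeline; all names and data are
# illustrative stand-ins, not any company's real code or dataset.

def scrape_web() -> list[tuple[str, str]]:
    """Common Crawl-style step: collect (image URL, caption) pairs
    from art that original artists posted online."""
    return [("https://example.com/wizard.png", "wizard fights a dragon")]

def build_dataset(pairs: list[tuple[str, str]]) -> list[dict]:
    """LAION-style step: index the pairs with text-image similarity
    scores. Note the dataset holds URLs, not the images themselves."""
    return [{"url": u, "caption": c, "similarity": 0.9} for u, c in pairs]

def train_model(dataset: list[dict]) -> dict:
    """AI-company step: fetch the indexed images and learn word-image
    associations (reduced here to a count of examples seen)."""
    return {"examples_seen": len(dataset)}

def generate(model: dict, prompt: str) -> str:
    """End-user step: prompt the trained model for a novel image."""
    return f"<novel image conditioned on '{prompt}'>"

model = train_model(build_dataset(scrape_web()))
print(generate(model, "a mouse in the style of Walt Disney"))
```

The point of the sketch is structural: each step is performed by a different entity, which is why the copyright analysis treats the original artist, the AI company, and the end-user separately.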

Examples of images generated by Dream Studio (Stable Diffusion) using the phrase “a mouse in the style of Walt Disney”

These distributed roles complicate our copyright analysis, but for now we will limit our discussion of copyright liability to three primary entities: (1) the original artist, (2) the Text-to-Image AI Company, and (3) the end-user.

The Text-to-Image AI Company likely has copied Rutkowski’s work. If the company actually downloads the images from the dataset to train its AI, then there is verbatim intermediate copying of potentially billions of copyrightable images. However, this is likely fair use because the generative AI provides what a court would consider a public benefit and transforms the purpose and character of the original art. This reasoning is demonstrated by Kelly v. Arriba, where an image search engine’s use of thumbnail images was held transformative and fair partly because of the public benefit of searchable images and the transformed purpose for that art: searching versus viewing. Here, the purpose of the original art was to be viewed by humans, and the Text-to-Image AI Company has transformatively used the art to be “read” by machines to train an AI. The public benefit of text-to-art AI is the ability to create complex and novel art by simply typing a few words into a prompt. The company’s use is all the more likely to be fair because the public never sees these downloaded images, meaning the copying has not directly impacted the market for the copyrighted originals.

The individual end-user is any person who prompts the AI to generate works “in the style of Greg Rutkowski.” The end-user, however, has not copied Rutkowski’s art, because copyright’s idea-expression distinction means that Rutkowski’s style is not copyrightable. The end-user simply typed a short prompt into Stable Diffusion’s UI. While the resulting images of wizards fighting dragons may resemble Rutkowski’s work, they are likely not substantially similar enough to be deemed infringing copies. Therefore, the end-user likewise did not unfairly infringe Rutkowski’s copyright.

Secondary Liability & AI Copyright

Generative AI portends dramatic social and economic change for many, and copyright will necessarily respond to these changes. Copyright could change to protect Rutkowski in different ways, but many of these potential changes would result in either a complete overhaul of copyright law or the functional elimination of generative art, neither of which is desirable. One minor alteration that could give Rutkowski, and other artists like him, slightly more protection is a creative expansion of contributory liability in copyright. One infringes contributorily by intentionally inducing or encouraging direct infringement.

Dall-E has actively encouraged end-users to generate art “in the style of” particular artists. So not only are these text-to-art AI companies verbatim copying artists’ works, but they are then also encouraging users to emulate those artists. At present, this does not give rise to contributory liability, and it is frequently innocuous: style is not copyrightable because ideas are not copyrightable, which is good for artistic freedom and creation. Still, while end-users are not directly copying these artists’ work when Dall-E encourages them to flood the internet with AI art in Rutkowski’s style, it feels as though copyright law should offer Rutkowski slightly more protection.

[Images: Dall-E generations for the prompts “An astronaut riding a horse in the style of Andy Warhol” and “A painting of a fox in the style of Claude Monet.”]

Contributory liability could offer this modicum of protection if, and only if, it were expanded to cover circumstances where the copying fairly occurred at the hands of the contributor, but not the thousands of end-users. As previously stated, the end-users are not directly infringing Rutkowski’s copyright, so under current law Dall-E has not contributorily infringed. However, there has never been a contributory copyright case like this one, where the contributing entity itself verbatim copied the copyrighted work, albeit fairly, while the end user did not. As such, copyright’s flexibility and policy-oriented nature could permit a unique carveout for such protection.

Analyzing Dall-E’s potential contributory liability is more complicated than it sounds, particularly because of the quintessential modern contributory liability case, MGM v. Grokster, which involved intentionally instructing users on how to file-share millions of songs. Moreover, Sony v. Universal would rightly protect Dall-E generally, given the many similarities between the two situations. In that case, the court found Sony not liable for copyright infringement for the sale of VHS recorders that facilitated direct copying of TV programming, because the technology had “commercially significant non-infringing uses.” Finally, regardless of Rutkowski’s theoretical likelihood of success, expanding contributory liability in this way would at least stop companies such as Dall-E from advertising that their generations are a great way to emulate, or copy, an artist’s work that the companies themselves initially copied.

This article has been premised on the idea that the end-users aren’t copying, but what if they are? It is clear that Rutkowski’s work was not directly infringed by the wizard fighting the dragon, but what about “a mouse in the style of Walt Disney?” How about “a yellow cartoon bear with a red shirt” or “a yellow bear in the style of A. A. Milne?” How similar does an end-user’s generation need to be before Disney could sue for direct infringement? What if there were hundreds of thousands of unique AI-generated Mickey Mouse emulations flooding the internet, and Twitter trolls were harassing Disney instead of Rutkowski? Of course, each individual generation would require an individual infringement analysis. Maybe the “yellow cartoon bear with a red shirt” is not substantially similar to Winnie the Pooh, but the “mouse in the style of Walt Disney” could be. These determinations would impact a generative AI’s potential contributory liability in such a claim. Whatever copyright judges and lawmakers decide, the law will need to find creative solutions that carefully balance the interests of artists and technological innovation.
