Liability, Authorship, & Symmetrical Causation in AI-Generated Outputs

By: Jacob Alhadeff

Copyright has insufficiently analyzed causation for both authorship and liability because, until now, causation was relatively obvious. If someone creates a painting, then they caused the work and receive authorial rights. If it turned out that the painting was of Mickey Mouse, then that painter may be liable for an infringing reproduction. However, recent technological advances have challenged the element of causation in both authorship and infringement. In response, recent law and scholarship have begun to address these issues. However, because they have addressed causation in isolation, current analysis has provided logically or ethically insufficient answers. In other words, authorial causation has ignored potential implications for an entity’s infringement liability, and vice-versa. Regardless of how the law responds, generative AI will require copyright to explore and enumerate the previously assumed causation analyses for both infringement and authorship. This blog explores how generative AI exposes the logical inconsistencies that result from analyzing authorial causation without analyzing causation for infringing reproductions.

Generative AI largely requires the following process: (1) an original artist creates works, (2) a developer trains an AI model on these works, and (3) an end-user prompts the AI to generate an output, such as “a mouse in the style of Walt Disney.” This generative AI process presents a novel challenge for copyright in determining who or what caused the output because generative AI challenges conventional notions of creation.

Causing Infringement

Andersen et al. recently filed a complaint against Stability AI, maker of one of the most popular text-to-art foundation models. This class action alleges that Stability AI is directly liable for infringing reproductions that result from end-user prompted generations. However, in a recent decision more closely analyzing causation and volition in infringement, the Ninth Circuit found that “direct liability must be premised on conduct that can reasonably be described as the direct cause of infringement.” Stability AI should not be found directly liable for infringing these artists’ copyrights, in part because Stability AI cannot reasonably be said to be the direct cause of infringement. Such a finding would be similar to holding Google liable for reproducing images of Mickey Mouse on people’s computer screens when they search for “Mickey Mouse.”

This lawsuit is particularly relevant since end-users have prompted thousands of generations that include the phrase “Mickey Mouse,” and many appear substantially similar to Disney’s Mickey. If thousands of end-users have intentionally prompted the AI to generate Mickey Mouse, then what volitional conduct can most reasonably be described as the direct cause of infringement? It is clearly the end-user’s. However, what if the end-user simply prompted “a cartoon mouse” and the AI generated an infringing image of Mickey? Here, the end-user may not have intended to generate Mickey, and reasonable notions of fairness may not find the end-user to be the most direct cause of infringement. However, copyright is a strict liability tort, meaning that liability attaches regardless of a reproducer’s intent. Therefore, unless copyright applies an intent or negligence theory for direct liability, which it should not, whoever or whatever is liable for infringing outputs will be liable for both infringing outputs: “Mickey Mouse” and “a cartoon mouse.” Such an outcome not only feels deeply unfair; it is also unreasonable to say that the end-user is the direct cause of infringement when prompting “a cartoon mouse,” and vice versa.

Courts called to answer similar questions have recently grappled with these same issues of volition and causation. Generally, courts have been hesitant to find companies liable for actions that cannot reasonably be deemed volitional conduct causing infringement. The court in Cartoon Network, for example, found that “volition is an important element of direct liability.” In the LoopNet case, the court found that “the Copyright Act… requires conduct by a person who causes in some meaningful way an infringement.” In this way, the law has so far mirrored our prior intuitions of fairness. Legal scholarship has noted that when copyright law has grappled with novel technology, it has found that causation in infringement requires volition that “can never be satisfied by machines.” This reasoning, as applied to generative AI, may mean that an AI company should not normally be directly liable for outputs that infringe the reproduction right.

Causing Authorship

This causation analysis has also begun for authorship rights. One copyright scholar, Shyamkrishna Balganesh, compellingly argues that copyright law should explicitly enumerate a causal analysis for granting authorship rights. Such an analysis would follow tort law’s two-step causation analysis: (1) creation in fact and (2) legal creation. Aviv Gaon surveys authorial options in The Future of Copyright in the Age of AI, noting that some favor assigning authorship to the end-user prompter, some the AI developer, some would treat outputs as joint works, and some would even attribute authorship to the AI itself. The simplest legal option would be to treat AI like a tool and grant authorship to the end-user. This is exactly how the law responded when photography challenged conventional notions of creativity and authorship. Opponents of finding photographers to be authors argued that photography was “merely mechanical, with no place for… originality.” The Supreme Court in Burrow-Giles instead found that the photographer “gives effect to the idea” and is the work’s “mastermind” deserving of copyright.

However, treating AI like a conventional tool is an inconsistent oversimplification in the current context. Not only is it often less apt to say that an end-user prompter is the ‘mastermind’ of the output, but AI presents a more attenuated causation analysis that should not result in a copyright for all AI-generations. As an extreme example, recent AI systems employ other AIs as agents. In these circumstances, a single prompt could catalyze one AI to automatically employ other AI agents to generate numerous potentially creative or infringing outputs. Here, the most closely linked human input would be a prompt that could not be said to have masterminded or caused the many resultant expressive outputs. Under Balganesh’s framework, no human could reasonably be found to be the factual or legal cause of the output. Such use-cases will further challenge the law’s notions of foreseeability as reasonable causation becomes increasingly attenuated.

Importantly, in the face of this ongoing debate and scholarship, the Copyright Office recently made its determination on authorship for AI-generated works. In February 2023, the US Copyright Office amended its decision regarding Kristina Kashtanova’s comic book, Zarya of the Dawn, stating that the exclusively AI-generated content is not copyrightable. Ms. Kashtanova created her comic book using Midjourney, a text-to-art AI, to generate much of the visual art involved. The Copyright Office stated that her “selection, coordination, and arrangement” of AI-generated images is copyrightable, but not the images themselves. The Office’s decision means that all exclusively AI-generated content, like natural phenomena, is not the type of content copyright protects and is freely accessible to all. The decision was based on the Office’s interpretation that “it was Midjourney—not Kashtanova—that originated the ‘traditional elements of authorship.’” The Office’s decision is appropriate policy, but when analyzed in conjunction with the current law on causation in infringement, it is inconsistent and may result in an asymmetrical allocation of the rights and duties that attend creation. Notably, how can a machine that is incapable of volition originate art? This is one of many ontological paradoxes that AI will present to law.

Symmetrically Analyzing Causation

Two things are apparent. First, there is a beautiful symmetry in AI-generations being uncopyrightable while the machines originating such works lack sufficient volition to infringe. If such a system persists, then copyright law may not play a major role in generative AI, though this is doubtful. Second, such inconsistencies inevitably result from causation analyses that examine mechanically analogous actions through only one of infringement or authorship. Instead, I propose that copyright law symmetrically analyze mechanically analogous causation for both authorship and infringement of the reproduction right. Since copyright law has only recently begun analyzing causation, it is understandable, and potentially desirable, that the law does not yet require this symmetrical causation. After all, the elements of authorship and infringement are usefully different. However, what has been consistent throughout copyright is that when an author creates, they risk an infringing reproduction and stand to gain authorship rights. In other words, by painting, a painter may create a valuable copyrightable work, but they also may paint an infringing reproduction of Mickey Mouse. Asymmetrical causation for AI art could be analogized to the painter receiving authorship rights while the company that made the paintbrush is held liable for the painter’s infringing reproductions. Such a result would not incentivize a painter to avoid infringement, thereby improperly balancing the risks and benefits of creation. Ultimately, if the law decides either the end-user or the AI company is the author, then the other entity should not be asymmetrically liable for infringing reproductions. Otherwise, the result will be ethically and logically inconsistent. After all, as Antony Honoré wrote in Responsibility and Fault, in our outcome-based society and legal system, we receive potential benefit from, and are responsible for the harms reasonably connected to, our actions.

The Importance of Artwork Authentication in the Digital Era

By: Lauren Liu

The digital era has gifted the art world with new mediums, increased access to audiences, and innovative platforms for art exhibitions and transactions. For artists, the internet has provided greater access to the art marketplace and modern tools for bolder digital creation. However, some may also consider these changes troublesome. One of the biggest concerns in the art world is the threat of forgery. Although forgeries existed long before the digital era, modern technology has given art forgers and those who sell their products more temptation and opportunities to create and sell their forged works.

Now, more than ever, emerging artists need to protect themselves from forgeries, making the authentication of artworks increasingly crucial.

One important element of authenticating artwork is provenance: the documentation that outlines a particular art piece’s creator and history. A signed certificate of authenticity is one of the most common forms of provenance. For such documentation to establish authenticity, it should include the work’s title, date of creation, medium, dimensions, and appraisal value. For example, a museum exhibit catalog listing an individual as the owner of the particular work of art in question would constitute valid provenance. By contrast, a bare list of previous owners’ names usually does not. Art purchasers should obtain full names and contact information for the current and previous owners to help verify the authenticity of the artwork in question. A “good provenance” is often taken as an indication of authenticity, because the longer the chain of ownership, the more likely that the artwork is authentic. Prominent or well-attended exhibitions of a picture are also taken as not only indications of value but also some evidence of authenticity and ownership, the logic being that an artwork would not be frequently displayed if its authenticity were questionable or if there were a dispute as to ownership. Provenance, even if not usable in court as evidence of authenticity or ownership, may still be admissible to oppose a new claim of ownership under the legal doctrine of laches (prejudice caused by a claimant’s undue delay in coming forth with a claim).

Another popular method of authentication is the examination of the artist’s signature. Technology now allows fairly easy investigation of artworks and signatures via computerized databases and photographs, which can gather large samples of an artist’s works and signatures for comparison. When creating signatures, artists should consider using a hand signature that is legible and distinct from their legal signature. Such a signature can later serve as a brand logo that makes artwork recognizable, and handwriting it makes it harder for forgers to replicate. Furthermore, artists should consider signing all works upon completion, preferably before the paint dries. By doing this, the artist essentially embeds the signature into the work. Artists should also sign in the same medium as the artwork to avoid the suspicion that the signature was forged or added later by another person.

Authenticating art is important and worthwhile, especially for any artist who wants to build a recognizable brand and protect their reputation and livelihood. Understandably, the prevalence of digital art theft, fakes, forgeries, art scams, fraudulent art sales, and falsified certificates of authenticity can be discouraging. However, methods of authentication can help prevent the likelihood of such violations. While art thieves, plagiarists, and scammers continue to evolve as quickly as technology does, artists can also protect themselves using their own creativity and following legal advice on authentication.

AI Art: Infringement is Not the Answer

By: Jacob Alhadeff

In the early 2000s, courts determined that the emerging technology of peer-to-peer “file-sharing” was massively infringing and effectively abolished its use. The Ninth Circuit, Seventh Circuit, and Supreme Court found that Napster, Aimster, and Grokster, respectively, were secondarily liable for the reproductions of their users. Each of these companies facilitated or instructed their users on how to share verbatim copies of media files with millions of other people online. In this nascent internet, users were able to download each other’s music and movies virtually for free. In response, the courts held these companies liable for the infringements of their users. In so doing, they functionally destroyed that form of peer-to-peer “file-sharing.” File-sharing and AI are not analogous in every respect, but multiple recent lawsuits present a similarly existential question for AI art companies. Courts should not find AI art companies massively infringing and risk fundamentally undermining these text-to-art AIs.


Text-to-art AI, aka generative art or AI art, allows users to type in a simple phrase, such as “a happy lawyer,” and the AI will generate a nightmarish representation of this law student’s desired future. 

Currently, this AI art functions only because (1) billions of original human authors throughout history have created art that has been posted online, (2) companies such as Stability AI (“Stable Diffusion”) or OpenAI (“Dall-E”) download and copy these images to train their AI, and (3) end-users prompt the AI, which then generates an image that corresponds to the input text. Due to the large data requirements, all three of these steps are necessary for the technology, and finding either the second or third step generally infringing poses an existential threat to AI art.

In a recent class action filed against Stability AI et al. (“Stable Diffusion”), plaintiffs allege that Stable Diffusion directly and vicariously infringed the artists’ copyrights through both the training of the AI and the generation of derivative images, i.e., steps 2 and 3 above. Answering each of these claims requires complex legal analyses. However, functionally, a finding of infringement on any of these counts threatens to fundamentally undermine the viability of text-to-art AI technology. Therefore, regardless of the legal analysis (which likely points in the same direction anyway), courts should not find Stable Diffusion liable for infringement because doing so would contravene the constitutionally enumerated purpose of copyright—to incentivize the progress of the arts.

In general, artists have potential copyright infringement claims against AI art companies (1) for downloading their art to train their AI and (2) for the AI’s substantially similar generations that end-users prompt. In the conventional text-to-art AI context, these AI art companies should not be found liable for infringement in either instance because doing so would undermine the progress of the arts. However, a finding of non-infringement leaves conventional artists with unaddressed cognizable harms. Neither of these two potential outcomes is ideal.

How courts answer these questions will shape how AI art and artists function in this brave new world of artistry. However, copyright infringement, the primary mode of redress that copyright protection offers, does not effectively balance the interests of the primary stakeholders. Instead of relying on the courts, Congress should create an AI Copyright Act that protects conventional artistry, ensures AI Art’s viability, and curbs its greatest harms. 

Finding AI Art Infringing Would Undermine the Underlying Technology

A finding of infringement for the underlying training or the outputs undermines AI art for many reasons: copyright’s large statutory damages, the low bar for copyrightability, the fact that works are retroactively copyrightable, the length of copyright protection, and the volume of images the AI generates and needs for training.

First, copyright provides statutory damages of $750 to $30,000 and up to $150,000 if the infringement is willful. Determining the statutory value of each infringement is likely moot because of the massive volume of potential infringements. Moreover, it is likely that if infringement is found, AI art companies would be enjoined from functioning, as occurred in the “file-sharing” cases of the early 2000s. 

Second, the threshold for a copyrightable work is incredibly low, so it is likely that many of the billions of images used in Stable Diffusion’s training data are copyrightable. In Feist, the Supreme Court wrote, “the requisite level of creativity is extremely low [to receive copyright]; even a slight amount will suffice. The vast majority of works make the grade quite easily.” This incredibly low bar means that each of us likely creates several copyrightable works every day. 

Third, works are retroactively copyrightable, meaning that the law does not require the plaintiff to have registered their work with the Copyright Office to receive their exclusive monopoly. Therefore, an author can register their copyright after they are made aware of an infringement and still have a valid claim. If these companies were found liable, then anyone with a marginally creative image in a training set would have a potentially valid claim against a generative art company.

Fourth, the copyright monopoly lasts for 70 years after the death of the author. Therefore, many of the copyrights in the training set have not lapsed. Retroactive copyright registration combined with the extensive duration of copyrightability means that few of the training images are likely in the public domain. In other words, “virtually all datasets that will be created for ML [Machine Learning] will contain copyrighted materials.”

Finally, as discussed earlier, the two bases for infringement claims against the AI art companies are (1) copying to train the AI and (2) copying in the resultant end generation. Each basis would likely result in billions or millions of potential claims, respectively. First, Stable Diffusion is trained on approximately 5.85 billion images which they downloaded from the internet. Given these four characteristics of copyright, it is likely that if infringement were found, many or all of the copyright owners of these images would then have a claim against AI art companies. Second, regarding infringement of end generations, Dall-E has suggested that their AI produces millions of generations every day. If AI art companies were found liable for infringing outputs, then any generation that was found to be substantially similar to an artist’s copyrighted original would be the basis of another claim against Dall-E. This would open them up to innumerable infringement claims every day. 
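The scale of this exposure is easy to sketch. As a back-of-envelope illustration, assuming (unrealistically) that every image in the training set supported a claim at the $750 statutory minimum, the floor alone runs into the trillions:

```python
# Back-of-envelope sketch of potential statutory exposure from training data.
# Assumption: every training image supports a claim at the statutory minimum,
# which is unrealistic; figures are taken from this post and 17 U.S.C. § 504(c).
TRAINING_IMAGES = 5_850_000_000    # approximate images in the training set
STATUTORY_MINIMUM_USD = 750        # per-work statutory damages floor

exposure = TRAINING_IMAGES * STATUTORY_MINIMUM_USD
print(f"${exposure:,}")  # $4,387,500,000,000 (over four trillion dollars)
```

Even at the minimum, the hypothetical exposure dwarfs the value of any AI company, which is why an injunction, rather than damages, is the likelier practical outcome.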


At the same time, generative art is highly non-deterministic, meaning that, on its face, it is hard to know what the AI will generate before it is generated. The AI’s emergent properties, combined with the subjective and fact-specific “substantial similarity” analysis of infringement, do not lend themselves to an AI Art company ensuring that end-generations are non-infringing. More simply, from a technical perspective, it would be near-impossible for an AI art company to guarantee that their generations do not infringe on another’s work. 

Finding AI art companies liable for infringement may open them up to trillions of dollars in potential copyright liability, or they may simply be enjoined from functioning.

An AI Copyright Act

Instead, Congress should create an AI Copyright Act. Technology forcing a reevaluation of copyright law is not new. In 1998, Congress passed the DMCA (Digital Millennium Copyright Act) to fulfill their WIPO (World Intellectual Property Organization) treaty obligations, reduce piracy, and facilitate e-commerce. While the DMCA’s overly broad application may have stifled research and free speech, it does provide an example of Congress recognizing copyright’s limitations in addressing technological change and responding legislatively. What was true in 1998 is true today. 

Finding infringement for a necessary aspect of text-to-art AI may fundamentally undermine the technology and run counter to the constitutionally enumerated purpose of copyright—“to promote the progress of science and useful arts.” On the other hand, finding no infringement leaves these cognizably harmed artists without remedy. Therefore, Congress should enact an AI Copyright Act that balances the interests of conventional artists, technological development, and the public. This legislation should aim to curb the greatest harms posed by text-to-art AI through a safe harbor system like that in the DMCA. 

AI Art “In the Style of” & Contributory Liability

By: Jacob Alhadeff

Greg Rutkowski illustrates fantastical images for games such as Dungeons & Dragons and Magic: The Gathering. Rutkowski’s name has been used thousands of times in generative art platforms, such as Stable Diffusion and Dall-E, flooding the internet with thousands of works in his style. For example, type in “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski,” and Stable Diffusion will output something similar to Rutkowski’s actual work. Rutkowski is now reasonably concerned that his work will be drowned out by these hundreds of thousands of emulations, ultimately preventing customers from being able to find his work online.


Examples of images generated by Dream Studio (Stable Diffusion) in Rutkowski’s style.

These machine learning algorithms are trained using freely available information, which is largely a good thing. However, it may feel unfair that an artist’s copyrighted images are freely copied to train their potential replacement. Ultimately, nothing these algorithms or their owners are doing is copyright infringement, and there are many good reasons for this. However, in certain exceptional circumstances, like Rutkowski’s, it may seem like copyright law insufficiently protects human creation and unreasonably prioritizes computer generation.

A primary reason why Rutkowski has no legal recourse is that the entity that trains its AI on Rutkowski’s copyrighted work is not the person generating the emulating art. Instead, thousands of end-users are collectively causing Rutkowski harm. Because the harm is caused in aggregate by distinct entities, there is no infringement. By contrast, if Stable Diffusion itself verbatim copied Rutkowski’s work to train its AI and then generated hundreds of thousands of look-alikes, that would likely be an unfair infringement. The importance of this separation is best seen by walking through the process of text-to-art generation and analyzing each entity’s role in it.

Text-to-Image Copyright Analysis

To briefly summarize this process: billions of original human artists throughout history have created art that has been posted online. Then a group like Common Crawl scrapes those billions of images and their textual pairs from billions of web pages for public use. Later, a non-profit such as LAION creates a massive dataset that includes internet indexes and similarity scores between text and images. Subsequently, a company such as Stability AI trains its text-to-art generator, Stable Diffusion, on these text-image pairs. Notably, when a text-to-art generator uses the LAION database, it is not necessarily downloading the images themselves to train the AI. Finally, when the end-user goes to Dream Studio and types in the phrase “a mouse in the style of Walt Disney,” the AI generates unique images of Mickey Mouse.

Examples of images generated by Dream Studio (Stable Diffusion) using the phrase “a mouse in the style of Walt Disney”

These several distributed roles complicate our copyright analysis, but for now, we will limit our discussion of copyright liability to three primary entities: (1) the original artist, (2) the Text-to-Image AI Company, and (3) the end-user. 

The Text-to-Image Company likely has copied Rutkowski’s work. If the Text-to-Image company actually downloads the images from the dataset to train its AI, then there is verbatim intermediate copying of potentially billions of copyrightable images. However, this is likely fair use because the generative AI provides what the court would consider a public benefit and has transformed the purpose and character of the original art. This reasoning is demonstrated by Kelly v. Arriba, where an image search’s use of thumbnail images was determined to be transformative and fair partly because of the public benefit provided by the ability to search images and the transformed purpose for that art, searching versus viewing. Here, the purpose of the original art was to be viewed by humans, and the Text-to-Image AI Company has transformatively used the art to be “read” by machines to train an AI. The public benefit of text-to-art AI is the ability to create complex and novel art by simply typing a few words into a prompt. It is more likely that the Generative AI’s use is fair because the public does not see these downloaded images, which means that they have not directly impacted the market for the copyrighted originals. 

The individual end-user is any person that prompts the AI to generate hundreds of thousands of works “in the style of Greg Rutkowski.” However, the end-user has not copied Rutkowski’s art because copyright’s idea-expression distinction means that Rutkowski’s style is not copyrightable. The end-user simply typed 10 words into Stable Diffusion’s UI. While the images of wizards fighting dragons may seem similar to Rutkowski’s work, they may not be substantially similar enough to be deemed infringing copies. Therefore, the end-user similarly didn’t unfairly infringe on Rutkowski’s copyright.

Secondary Liability & AI Copyright

Generative AI portends dramatic social and economic change for many, and copyright will necessarily respond to these changes. Copyright could change to protect Rutkowski in different ways, but many of these potential changes would result in either a complete overhaul of copyright law or the functional elimination of generative art, neither of which is desirable. One minor alteration that could give Rutkowski, and other artists like him, slightly more protection is a creative expansion of contributory liability in copyright. One infringes contributorily by intentionally inducing or encouraging direct infringement.

Dall-E has actively encouraged end-users to generate art “in the style of” particular artists. So not only are these text-to-art AI companies verbatim copying artists’ works, but they are then also encouraging users to emulate those artists’ work. At present, this does not give rise to contributory liability, and it is frequently innocuous. Style is not copyrightable because ideas are not copyrightable, which is a good thing for artistic freedom and creation. So, while the work of these artists is not being directly copied by end-users when Dall-E encourages users to flood the internet with AI art in Rutkowski’s style, it feels like copyright law should offer Rutkowski slightly more protection.

An astronaut riding a horse in the style of Andy Warhol.
A painting of a fox in the style of Claude Monet.

Contributory liability could offer this modicum of protection if, and only if, it expanded to include circumstances where the contributor itself copied fairly but the thousands of end-users did not copy at all. As previously stated, the end-users are not directly infringing Rutkowski’s copyright, so under current law, Dall-E cannot be contributorily liable. However, there has never been a contributory copyright case such as this one, where the contributing entity itself verbatim copied the copyrighted work, albeit fairly, but the end-user did not. As such, copyright’s flexibility and policy-oriented nature could permit a unique carveout for such protection.

Analyzing the potential contributory liability of Dall-E is more complicated than it sounds, particularly because of the quintessential modern contributory liability case, MGM v. Grokster, which involved intentionally instructing users on how to file-share millions of songs. Moreover, Sony v. Universal would likely protect Dall-E generally, given the many similarities between the two situations. In that case, the court found Sony not liable for copyright infringement for the sale of VHS recorders, which facilitated direct copying of TV programming, because the technology had “commercially significant non-infringing uses.” Finally, regardless of Rutkowski’s theoretical likelihood of success, if contributory liability were expanded in this way, then it would at least stop companies such as Dall-E from advertising the fact that their generations are a great way to emulate, or copy, an artist’s work that they themselves initially copied.

This article has been premised on the idea that the end-users aren’t copying, but what if they are? It is clear that Rutkowski’s work was not directly infringed by the wizard fighting the dragon, but what about “a mouse in the style of Walt Disney?” How about “a yellow cartoon bear with a red shirt” or “a yellow bear in the style of A. A. Milne?” How similar does an end-user’s generation need to be for Disney to sue over an end-user’s direct infringement? What if there were hundreds of thousands of unique AI-generated Mickey Mouse emulations flooding the internet, and Twitter trolls were harassing Disney instead of Rutkowski? Of course, each individual generation would require an individual infringement analysis. Maybe the “yellow cartoon bear with a red shirt” is not substantially similar to Winnie the Pooh, but the “mouse in the style of Walt Disney” could be. These determinations would impact a generative AI’s potential contributory liability in such a claim. Whatever copyright judges and lawmakers decide, the law will need to find creative solutions that carefully balance the interests of artists and technological innovation. 
