Liability, Authorship, & Symmetrical Causation in AI-Generated Outputs

By: Jacob Alhadeff

Copyright has insufficiently analyzed causation for both authorship and liability because, until now, causation was relatively obvious. If someone creates a painting, then they caused the work and receive authorial rights. If it turns out that the painting is of Mickey Mouse, then that painter may be liable for an infringing reproduction. However, recent technological advances have challenged the element of causation in both authorship and infringement. Recent law and scholarship have begun to address these issues, but because they have addressed causation in isolation, current analyses have provided logically or ethically insufficient answers. In other words, analyses of authorial causation have ignored the implications for an entity’s infringement liability, and vice versa. Regardless of how the law responds, generative AI will require copyright to explore and enumerate the previously assumed causation analyses for both infringement and authorship. This blog explores how generative AI exposes the logical inconsistencies that result from analyzing authorial causation without also analyzing causation for infringing reproductions.

Generative AI largely requires the following process: (1) an original artist creates works, (2) a developer trains an AI model on these works, and (3) an end-user prompts the AI to generate an output, such as “a mouse in the style of Walt Disney.” This generative AI process presents a novel challenge for copyright in determining who or what caused the output because generative AI challenges conventional notions of creation.
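
For concreteness, step (3) can be reduced to a few lines of code. The sketch below is illustrative only: it assumes the open-source diffusers library and a publicly available Stable Diffusion checkpoint, not any particular party’s actual system.

    # A minimal sketch of step (3): an end-user prompting a text-to-image model.
    # Illustrative only; assumes the open-source diffusers library and a public
    # Stable Diffusion checkpoint, not any specific company's product.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    image = pipe("a mouse in the style of Walt Disney").images[0]  # the end-user's prompt
    image.save("output.png")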

Causing Infringement

Andersen et al. recently filed a complaint against Stability AI, maker of one of the most popular text-to-art foundation models. This class action alleges that Stability AI is directly liable for infringing reproductions that result from end-user-prompted generations. However, in a recent decision more closely analyzing causation and volition in infringement, the Ninth Circuit found that “direct liability must be premised on conduct that can reasonably be described as the direct cause of infringement.” Stability AI should not be found directly liable for infringing these artists’ copyrights, in part because Stability AI cannot reasonably be said to be the direct cause of infringement. Such a finding would be similar to holding Google liable for reproducing images of Mickey Mouse on people’s computer screens when they search for “Mickey Mouse.”

This lawsuit is particularly relevant since end-users have prompted thousands of generations that include the phrase “Mickey Mouse,” and many appear substantially similar to Disney’s Mickey. If thousands of end-users have intentionally prompted the AI to generate Mickey Mouse, then whose volitional conduct can most reasonably be described as the direct cause of infringement? It is clearly the end-user’s. However, what if the end-user simply prompted “a cartoon mouse” and the AI generated an infringing image of Mickey? Here, the end-user may not have intended to generate Mickey, and reasonable notions of fairness may not locate the most direct cause of infringement in the end-user. However, copyright is a strict liability tort, meaning that liability attaches regardless of a reproducer’s intent. Therefore, unless copyright applies an intent or negligence theory to direct liability, which it should not, whoever or whatever is liable for infringing outputs will be liable for both infringing outputs, “Mickey Mouse” and “a cartoon mouse” alike. Such an outcome not only feels deeply unfair; it is also unreasonable to say that the end-user is the direct cause of infringement when prompting “a cartoon mouse,” or that the AI is when the end-user deliberately prompts “Mickey Mouse.”

Courts called to answer similar questions have recently grappled with these same issues of volition and causation. Generally, courts have been hesitant to hold companies liable for actions that cannot reasonably be deemed volitional conduct causing infringement. The court in Cartoon Network, for example, found that “volition is an important element of direct liability.” In LoopNet, the court found that “the Copyright Act… requires conduct by a person who causes in some meaningful way an infringement.” In this way, the law has so far mirrored our prior intuitions of fairness. Legal scholarship has noted that when copyright law has grappled with novel technology, it has found that causation in infringement requires volition that “can never be satisfied by machines.” This reasoning, as applied to generative AI, may mean that an AI company should not normally be directly liable for outputs that infringe the reproduction right.

Causing Authorship

This causation analysis has also begun for authorship rights. One copyright scholar, Shyamkrishna Balganesh, compellingly argues that copyright law should explicitly enumerate a causal analysis for granting authorship rights. Such an analysis would follow tort law’s two-step causation framework: (1) creation in fact and (2) legal creation. Aviv Gaon surveys authorial options in The Future of Copyright in the Age of AI, writing that some favor assigning authorship to the end-user prompter or the AI developer, some would treat outputs as joint works, and some would even attribute authorship to the AI itself. The simplest legal option would be to treat AI like a tool and grant authorship to the end-user. This is exactly how the law responded when photography challenged conventional notions of creativity and authorship. Opponents of treating photographers as authors argued that photography was “merely mechanical, with no place for… originality.” The Supreme Court in Burrow-Giles instead found that the photographer “gives effect to the idea” and is the work’s “mastermind” deserving of copyright.

However, treating AI like a conventional tool is an oversimplification in the current context. Not only is it often less apt to call an end-user prompter the ‘mastermind’ of the output, but AI presents a more attenuated causal chain that should not result in a copyright for every AI generation. As an extreme example, recent AI systems employ other AIs as replicable agents. In these circumstances, a single prompt could catalyze one AI to automatically employ other AI agents to generate numerous potentially creative, or infringing, outputs. Here, the most closely linked human input would be a prompt that cannot be said to have masterminded or caused the many resultant expressive outputs. Under Balganesh’s framework, no human could reasonably be found to be the factual or legal cause of the output. Such use cases will further challenge the law’s notions of foreseeability as reasonable causation becomes increasingly attenuated.

Importantly, in the face of this ongoing debate and scholarship, the Copyright Office recently made its determination on authorship for AI-generated works. In February 2023, the US Copyright Office amended its decision regarding Kristina Kashtanova’s comic book, Zarya of the Dawn, stating that the exclusively AI-generated content is not copyrightable. Ms. Kashtanova created her comic book using Midjourney, a text-to-art AI, to generate much of the visual art involved. The Copyright Office stated that her “selection, coordination, and arrangement” of AI-generated images is copyrightable, but the images themselves are not. The Office’s decision means that all exclusively AI-generated content, like natural phenomena, is not the type of content copyright protects and is freely accessible to all. The decision was based on the Office’s interpretation that “it was Midjourney—not Kashtanova—that originated the ‘traditional elements of authorship.’” The Office’s decision is appropriate policy, but when analyzed in conjunction with the current law on causation in infringement, it is inconsistent and may result in an asymmetrical allocation of the rights and duties that attend creation. Relevantly, how can a machine that is incapable of volition originate art? This is one of many ontological paradoxes that AI will present to the law.

Symmetrically Analyzing Causation

Two things are apparent. First, there is a beautiful symmetry: AI generations are uncopyrightable, and the machines originating those works symmetrically lack sufficient volition to infringe. If such a system persists, then copyright law may not play a major role in generative AI, though this is doubtful. Second, such inconsistencies inevitably result from causation analyses that examine mechanically analogous actions through only one lens, infringement or authorship. Instead, I propose that copyright law symmetrically analyze mechanically analogous causation for both authorship and infringement of the reproduction right. Since copyright law has only recently begun analyzing causation, it is understandable, and potentially desirable, that the law does not yet require this symmetry. After all, the elements of authorship and infringement are usefully different. However, what has been consistent throughout copyright is that when an author creates, they assume both the risk of an infringing reproduction and the prospect of authorship rights. In other words, by painting, a painter may create a valuable copyrightable work, but they may also paint an infringing reproduction of Mickey Mouse. Asymmetrical causation for AI art would be akin to the painter receiving authorship rights while the company that made the paintbrush is held liable for the painter’s infringing reproductions. Such a result would not incentivize the painter to avoid infringement, and it would improperly balance the risks and benefits of creation. Ultimately, if the law decides that either the end-user or the AI company is the author, then the other entity should not be asymmetrically liable for infringing reproductions. Otherwise, the result will be ethically and logically inconsistent. After all, as Antony Honoré wrote in Responsibility and Fault, in our outcome-based society and legal system, we receive the potential benefits of, and are responsible for the harms reasonably connected to, our actions.

Regulating Emerging Technology: How Can Regulators Get a Grasp on AI?

By: Chisup Kim

Uses of Artificial Intelligence (“AI”), such as ChatGPT, are fascinating experiments with the potential to morph their users’ parameters, requests, and questions into answers. However, as malleable as these AIs are to user requests, governments and regulators have not shown the same flexibility in governing this new technology. Countries have taken drastically different approaches to AI regulation. For example, on April 11, 2023, China announced that AI products developed in China must undergo a security assessment to ensure that content upholds “Chinese socialist values and do[es] not generate content that suggests regime subversion, violence or pornography, or disrupt[ion of] economic or social order.” Italy took an even more cautious stance, outright banning ChatGPT. Yet domestically, in stark contrast to the decisive action taken by other countries, the Biden Administration has only begun vaguely examining whether there should be rules for AI tools.

In the United States, prospective AI regulators seem more focused on the application of AI tools to specific industries. For example, the Equal Employment Opportunity Commission (“EEOC”) has begun an initiative to examine whether AI in employment decisions complies with federal civil rights laws. On autonomous vehicles, while the National Highway Traffic Safety Administration (“NHTSA”) has not yet given autonomous vehicles a green-light exemption from occupant safety standards, it does maintain a web page open to a future with automated vehicles. Meanwhile, as regulators are still trying to grasp this technology, AI is entering every industry and field in some capacity. TechCrunch chronicled the various AI applications from Y Combinator’s Winter Demo Day; its partial list included an AI document editor, SEC-compliance robo-advisors, a generative AI photographer for e-commerce, automated sales emails, an AI receptionist to answer missed calls for small companies, and many more. While the EEOC and NHTSA have taken proactive steps in their own respective fields, we may need a more proactive and overarching approach to the widespread applications of AI.

Much as it did with its proactive GDPR privacy regulation, the EU has proposed a regulatory framework for AI. The framework identifies a list of high-risk applications of AI and creates more stringent obligations for those applications, with tempered regulations for limited-risk and no-risk applications. Applications identified as high-risk include the use of AI in critical infrastructure, education or vocational training, law enforcement, and the administration of justice. High-risk applications would require adequate risk assessment and mitigation, logging of data with traceability, and clear notice and information provided to the user. Chatbots are considered limited-risk but must give users adequate notice that they are interacting with a machine. Lastly, the vast majority of AI applications are likely to fall under the “no risk” bucket of harmless uses, such as video games or spam filters.

If U.S. regulators fail to create a comprehensive regulatory framework for AI, they will likely fall behind on this issue, much as they have fallen behind on privacy. With privacy, for example, the vacuum of guidance and self-regulating bodies forced many states and foreign countries to begin adopting GDPR-like regulations. The current initiatives by the EEOC and NHTSA are laudable, but these agencies seem to be waiting for actual harm to occur before taking proactive steps to regulate the industry. Last year, for example, NHTSA found that the Tesla Autopilot system, among other driver-assistance systems, was linked to nearly 400 crashes in the United States, including six fatal accidents. Waiting for the technology to come to us did not work for privacy regulation; we should not wait for AI technology to arrive either.

AI Art: Infringement is Not the Answer

By: Jacob Alhadeff

In the early 2000s, courts determined that the emerging technology of peer-to-peer “file-sharing” was massively infringing and categorically abolished its use. Federal courts, including the Ninth Circuit and the Supreme Court, found that Napster, Aimster, and Grokster were secondarily liable for the reproductions of their users. Each of these companies facilitated or instructed their users on how to share verbatim copies of media files with millions of other people online. On this nascent internet, users were able to download each other’s music and movies virtually for free. In response, the courts held these companies liable for the infringements of their users, and in so doing they functionally destroyed that form of peer-to-peer “file-sharing.” File-sharing and AI are not analogous, but multiple recent lawsuits present a similarly existential question for AI art companies. Courts should not find AI art companies massively infringing and risk fundamentally undermining these text-to-art AIs.

Text-to-art AI, aka generative art or AI art, allows users to type in a simple phrase, such as “a happy lawyer,” and the AI will generate a nightmarish representation of this law student’s desired future. 

Currently, this AI art functions only because (1) billions of original human authors throughout history have created art that has been posted online, (2) companies such as Stability AI (“Stable Diffusion”) or OpenAI (“DALL-E”) download and copy these images to train their AI, and (3) end-users prompt the AI, which then generates an image that corresponds to the input text. Due to the large data requirements, all three of these steps are necessary for the technology, and finding either the second or third step generally infringing poses an existential threat to AI art.
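
To make step (2), the contested copying, concrete, here is a minimal sketch of how captioned images might be gathered for training. The URLs and directory layout are hypothetical; real datasets such as LAION pair billions of image URLs with captions scraped from the web.

    # Hypothetical sketch of step (2): downloading captioned images to build
    # a training set. URLs and paths are placeholders, not a real dataset.
    import os
    import requests

    pairs = [
        ("https://example.com/art/0001.jpg", "oil painting of a harbor at dusk"),
        ("https://example.com/art/0002.jpg", "ink sketch of a sleeping cat"),
    ]

    os.makedirs("train", exist_ok=True)
    for i, (url, caption) in enumerate(pairs):
        image_bytes = requests.get(url, timeout=10).content  # the allegedly infringing copy
        with open(f"train/{i:06d}.jpg", "wb") as f:
            f.write(image_bytes)
        with open(f"train/{i:06d}.txt", "w") as f:
            f.write(caption)  # caption stored alongside the image for training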

In a recent class action filed against Stability AI et al. (“Stable Diffusion”), plaintiffs allege that Stable Diffusion directly and vicariously infringed the artists’ copyrights through both the training of the AI and the generation of derivative images, i.e., steps 2 and 3 above. Answering each of these claims requires complex legal analysis. Functionally, however, a finding of infringement on any of these counts threatens to fundamentally undermine the viability of text-to-art AI technology. Therefore, regardless of the legal analysis (which likely points in the same direction anyway), courts should not find Stable Diffusion liable for infringement, because doing so would contravene the constitutionally enumerated purpose of copyright—to incentivize the progress of the arts.

In general, artists have potential copyright infringement claims against AI art companies (1) for downloading their art to train the AI and (2) for the AI’s substantially similar generations that end-users prompt. In the conventional text-to-art AI context, these AI art companies should not be found liable for infringement in either instance, because doing so would undermine the progress of the arts. However, a finding of non-infringement leaves conventional artists with unaddressed cognizable harms. Neither of these two potential outcomes is ideal.

How courts answer these questions will shape how AI art and artists function in this brave new world of artistry. However, an infringement suit, the primary mode of redress that copyright offers, does not effectively balance the interests of the primary stakeholders. Instead of relying on the courts, Congress should create an AI Copyright Act that protects conventional artistry, ensures AI art’s viability, and curbs its greatest harms.

Finding AI Art Infringing Would Undermine the Underlying Technology

A finding of infringement for the underlying training or the outputs would undermine AI art for many reasons: copyright’s large statutory damages, the low bar for granting someone a copyright, the retroactive registrability of copyrightable works, the length of the copyright term, and the volume of images the AI generates and needs for training.

First, copyright provides statutory damages of $750 to $30,000 per infringed work, and up to $150,000 if the infringement is willful. Determining the statutory value of each infringement is likely moot because of the massive volume of potential infringements. Moreover, if infringement were found, AI art companies would likely be enjoined from functioning, as occurred in the “file-sharing” cases of the early 2000s.

Second, the threshold for a copyrightable work is incredibly low, so it is likely that many of the billions of images used in Stable Diffusion’s training data are copyrightable. In Feist, the Supreme Court wrote, “the requisite level of creativity is extremely low [to receive copyright]; even a slight amount will suffice. The vast majority of works make the grade quite easily.” This incredibly low bar means that each of us likely creates several copyrightable works every day. 

Third, works are retroactively registrable, meaning that the law does not require a plaintiff to have registered their work with the Copyright Office before the infringement in order to receive their exclusive monopoly. Therefore, an author can register their copyright after they become aware of an infringement and still have a valid claim. If these companies were found liable, then anyone with a marginally creative image in a training set would have a potentially valid claim against a generative art company.

Fourth, the copyright monopoly lasts for 70 years after the death of the author. Therefore, many of the copyrights in the training set have not lapsed. Retroactive copyright registration combined with the extensive duration of copyrightability means that few of the training images are likely in the public domain. In other words, “virtually all datasets that will be created for ML [Machine Learning] will contain copyrighted materials.”

Finally, as discussed earlier, the two bases for infringement claims against AI art companies are (1) copying to train the AI and (2) copying in the resultant end generation. These bases would likely result in billions and millions of potential claims, respectively. First, Stable Diffusion was trained on approximately 5.85 billion images downloaded from the internet. Given the four characteristics of copyright above, it is likely that if infringement were found, many or all of the copyright owners of these images would have a claim against AI art companies. Second, regarding infringement by end generations, OpenAI has suggested that DALL-E produces millions of generations every day. If AI art companies were found liable for infringing outputs, then any generation found substantially similar to an artist’s copyrighted original would be the basis of another claim, opening companies up to innumerable infringement claims every day.
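
A back-of-envelope calculation, using only the figures above and the statutory ranges of 17 U.S.C. § 504(c), illustrates the scale. It assumes, unrealistically, that every training image is a separately registered, infringed work; the point is order of magnitude, not a damages model.

    # Back-of-envelope statutory-damages exposure, using the figures in the text.
    # Assumes (unrealistically) that every training image is a separately
    # registered, infringed work.
    TRAINING_IMAGES = 5_850_000_000        # ~5.85 billion images in the training set
    MIN_AWARD, MAX_AWARD = 750, 30_000     # per-work statutory range, 17 U.S.C. § 504(c)
    WILLFUL_CAP = 150_000                  # per-work cap for willful infringement

    print(f"minimum:  ${TRAINING_IMAGES * MIN_AWARD:,}")    # ≈ $4.4 trillion
    print(f"maximum:  ${TRAINING_IMAGES * MAX_AWARD:,}")    # ≈ $175.5 trillion
    print(f"willful:  ${TRAINING_IMAGES * WILLFUL_CAP:,}")  # ≈ $877.5 trillion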

At the same time, generative art is highly non-deterministic, meaning that, on its face, it is hard to know what the AI will generate before it is generated. The AI’s emergent properties, combined with the subjective and fact-specific “substantial similarity” analysis of infringement, make it difficult for an AI art company to ensure that end generations are non-infringing. More simply, from a technical perspective, it would be near impossible for an AI art company to guarantee that its generations do not infringe on another’s work.
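
The non-determinism is easy to demonstrate: the same prompt run with different random seeds yields different images. A sketch, again assuming the open-source diffusers library and a GPU:

    # Sketch of non-determinism: one prompt, three seeds, three different images.
    # Illustrative only; assumes the open-source diffusers library and a GPU.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

    prompt = "a cartoon mouse"
    for seed in (1, 2, 3):
        generator = torch.Generator("cuda").manual_seed(seed)
        # Without pinning a seed, the provider cannot know in advance whether any
        # given generation will resemble a copyrighted character.
        pipe(prompt, generator=generator).images[0].save(f"mouse_seed_{seed}.png")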

Finding AI art companies liable for infringement could expose them to trillions of dollars in potential copyright liability, or they may simply be enjoined from functioning.

An AI Copyright Act

Instead, Congress should create an AI Copyright Act. Technology forcing a reevaluation of copyright law is not new. In 1998, Congress passed the Digital Millennium Copyright Act (DMCA) to fulfill its World Intellectual Property Organization (WIPO) treaty obligations, reduce piracy, and facilitate e-commerce. While the DMCA’s overly broad application may have stifled research and free speech, it provides an example of Congress recognizing copyright’s limitations in addressing technological change and responding legislatively. What was true in 1998 is true today.

Finding infringement for a necessary aspect of text-to-art AI may fundamentally undermine the technology and run counter to the constitutionally enumerated purpose of copyright—“to promote the progress of science and useful arts.” On the other hand, finding no infringement leaves these cognizably harmed artists without remedy. Therefore, Congress should enact an AI Copyright Act that balances the interests of conventional artists, technological development, and the public. This legislation should aim to curb the greatest harms posed by text-to-art AI through a safe harbor system like that in the DMCA. 

The War on Forgery: An Exploration into Current Technologies Used to Catch Art Fraud

By: Zachary Finn

The field of art authentication has been revolutionized by several new technologies designed to spot fake art. Supposedly, up to fifty percent of all artworks on the market are fake, forged, or misattributed. Forgery, the act of making, exploiting, selling, and peddling fake art, has become one of the most lucrative businesses in the world. According to the US Department of Justice and UNESCO, art forgery and laundering has been the third highest-grossing criminal commerce in the world over the last 40 years, just behind drugs and weapons. As technology has developed over the years, so have methods to detect fake and forged art. Many of these new technologies have successfully entered the fight against art crime, but they also raise legal implications worth considering.

One of the most encouraging technologies is spectroscopy, which analyzes the chemical composition of an artwork and compares it to the known composition of genuine works from the same period. Spectroscopists test for the presence of specific elements and molecules in the pigments used to create works of art. For example, scientists use mass spectrometry to identify whether lead is present in certain artworks. Throughout early art history, lead was popularly used in paintings, but after its toxic qualities were discovered, the art world was quick to remove lead from its palette. Older paintings are therefore identifiable through this technology because lead-based pigments have become extremely rare. Using spectrometry, an examiner can spot a forged or fake painting by testing for the presence of lead or other comparable elements and molecules: if a Da Vinci is without lead, it is almost certainly a fake. Mass spectrometry requires samples from an artwork, however, which may cause damage. This can create legal disputes over the damage and restoration of the artwork, especially since most of the artworks being tested have historical and cultural significance.

Similar to spectrometry, X-ray fluorescence is another technology that analyzes the elemental composition of art. With this technology, X-ray beams are shone on an artwork, causing atoms in the pigments to emit secondary X-rays. These rays identify the specific elements present, and experts can then determine whether those elements are consistent with materials used in works from the same period. Forgers develop methods of painting over less valuable but still old artworks to create more valuable fakes. The advantage of X-ray fluorescence is that it penetrates layers of paint, giving scientists and art historians the ability to examine the underlying layers of an artwork. Like mass spectrometry, X-ray fluorescence raises legal considerations because it can potentially damage the artwork in question. On top of this, as with most of these technologies, a legal question of admissibility for evidentiary purposes emerges: courts and juries will have to weigh the credibility of experts and these technologies.

Continuing with scientific technology, multispectral imaging uses specialized cameras to capture images of an art piece in different wavelengths of light, allowing examiners to identify inconsistencies that can be indicative of forgery. The cameras use different imaging techniques, including ultraviolet and infrared light. UV imaging reveals varnishes, touch-ups, and overpainting, while infrared exposes details such as underlying paint layers. A big advantage of this tool is that it is non-invasive and does not alter an artwork’s composition. Delicate and rare artworks may be susceptible to damage from other types of testing, so this technology can be especially useful in the war against art forgery. However, it also raises legal questions involving expert opinions and declarations, as imaging results remain open to interpretation and different experts may reach different conclusions about a work’s composition.

In the “most tech-savvy” way to detect forgery, artificial intelligence and machine learning algorithms analyze large databases of both genuine and fake art, extracting patterns and features that distinguish the two. In a research development at Case Western Reserve University, this technology “combines data from the precise, three-dimensional mapping of a painting’s surface with analysis through artificial intelligence — a computer system based on the human brain and nervous system that can learn to identify and compare patterns.” In one study, AI and machine learning were able to spot forged art with greater than 95% accuracy. A key advantage of using AI and machine learning against art forgery is that large amounts of data can be analyzed quickly and efficiently, expediting the spotting of potential forgeries compared to other methods. And as the technology evolves, the algorithms can be updated and retrained to improve accuracy. However, legal issues involving privacy arise as AI sifts through large datasets that may contain private or unconsented information.
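
To illustrate the idea (and only the idea; this is emphatically not the Case Western system), a genuine-versus-fake classifier might look like the sketch below, where each artwork has already been reduced to a numeric feature vector, such as brushstroke statistics from a surface scan. The file names and features are hypothetical.

    # Toy sketch of ML-based forgery detection; not any published system.
    # Assumes each artwork was already reduced to a feature vector (e.g.,
    # brushstroke statistics from a 3-D surface scan) saved in .npy files.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X = np.load("stroke_features.npy")   # shape: (n_artworks, n_features)
    y = np.load("labels.npy")            # 1 = genuine, 0 = forgery

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.1%}")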

The art world has been plagued by forgery and fake artworks for centuries, but new technologies such as spectroscopy, X-ray fluorescence, multispectral imaging, AI, and machine learning have revolutionized the way experts fight this war. It will be exciting to see what other technologies emerge in the coming years, as well as which paintings are discovered to be just fake copies.

Talking to Machines – The Legal Implications of ChatGPT

By: Stephanie Ngo

Chat Generative Pre-trained Transformer, known as ChatGPT, was launched on November 30, 2022. The program has since taken the world by storm with its articulate answers and detailed responses to a multitude of questions. A quick Google search of “chat gpt” returns approximately 171 million results. Similarly, in the first five days after launch, more than a million people signed up to test the chatbot, according to OpenAI’s president, Greg Brockman. But with new technology come legal issues that require legal solutions. As ChatGPT continues to grow in popularity, it is now more important than ever to discuss how such a smart system could affect the legal field.

What is Artificial Intelligence? 

Artificial intelligence (AI), per John McCarthy, a world-renowned computer scientist at Stanford University, is “the science and engineering of making intelligent machines, especially intelligent computer programs, that can be used to understand human intelligence.” The first successful AI program was written in 1951 to play a game of checkers, but the idea of “robots” taking on human-like characteristics has been traced back even earlier. Recently, it has been predicted that AI, although prominent now, will permeate the daily lives of individuals by 2025 and seep into various business sectors. Today, the buzz around AI stems from the fast-growing influx of emerging technologies, and from how AI can be integrated with current technology to innovate products like self-driving cars, electronic medical records, and personal assistants. Many are aware of what “Siri” is, and consumers’ expectation that Siri will soon become all-knowing is what continues to push the field of AI to develop at such fast speeds.

What is ChatGPT? 

ChatGPT is a chatbot that uses a large language model trained by OpenAI, an AI research and deployment company founded in 2015 and dedicated to ensuring that artificial intelligence benefits all of humanity. ChatGPT was trained on data from books and other written materials to generate natural, conversational responses, as if a human had written the reply. Chatbots are not a recent invention. In 2019, Salesforce reported that twenty-three percent of service organizations used AI chatbots; by 2021, it reported that the figure was closer to thirty-eight percent, a sixty-seven percent increase since its 2018 report. Their effectiveness, however, left many consumers wishing for a faster, smarter way to get accurate answers.

In comes ChatGPT, which has been hailed as the “best artificial intelligence chatbot ever released to the general public” by New York Times technology columnist Kevin Roose. ChatGPT’s ability to answer extremely convoluted questions, explain scientific concepts, and even debug large amounts of code is indicative of just how far chatbots have advanced since their creation. Prior to ChatGPT, answers from chatbots were taken with a grain of salt because of their inaccurate, roundabout responses, likely generated from a template. ChatGPT, while still imperfect and slightly outdated (its knowledge is restricted to information from before 2021), is being used in ways that some argue could impact many occupations and render certain inventions obsolete.
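
Under the hood, an application can reach the same family of models programmatically. A minimal sketch using OpenAI’s Python library (the pre-1.0 interface; the API key and prompt are placeholders):

    # Minimal sketch of querying the model behind ChatGPT via OpenAI's Python
    # library (pre-1.0 interface). The API key and prompt are placeholders.
    import openai

    openai.api_key = "YOUR_API_KEY"

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Summarize the GDPR in one sentence."}],
    )
    print(response.choices[0].message.content)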

The Legal Issues with ChatGPT

ChatGPT has widespread applicability and has been touted as rivaling Google in its usage. Since the beta launch in November, there have been countless stories from people in various occupations about ChatGPT’s different use cases. Teachers can use ChatGPT to draft quiz questions. Job seekers can use it to draft and revise cover letters and resumes. Doctors have used the chatbot to diagnose a patient, write letters to insurance companies, and even perform certain medical examinations.

On the other hand, ChatGPT has its downsides. One of the main arguments against ChatGPT is that the chatbot’s responses are so natural that students may use it to shirk their homework or plagiarize. To combat academic dishonesty and misinformation, OpenAI has begun work on accompanying software, training a classifier to distinguish between AI-written text and human-written text. OpenAI has noted that, while not wholly reliable, the classifier will become more reliable the longer it is trained.
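
To make “training a classifier” concrete, here is a toy version of the idea; it is emphatically not OpenAI’s actual classifier, and the two training passages are placeholders standing in for a large labeled corpus.

    # Toy provenance classifier: human-written vs. AI-written text. An
    # illustration of the concept only, not OpenAI's classifier; the tiny
    # training corpus is a placeholder.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "i stayed up way too late finishing this essay, sorry for the typos!!",
        "In conclusion, the aforementioned factors collectively demonstrate the thesis.",
    ]
    labels = [0, 1]  # 0 = human-written, 1 = AI-written (toy labels)

    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(texts, labels)
    print(classifier.predict(["The results collectively demonstrate the conclusion."]))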

Another argument that has arisen involves intellectual property issues. Is the material that ChatGPT produces legal to use? In a similar situation, a different artificial intelligence program, Stable Diffusion, was trained to replicate an artist’s style of illustration and create new artwork based upon the user’s prompt. The artist was concerned that the program’s creations would be associated with her name because the training used her artwork.

Because the technology is so new, case law addressing this specific issue is limited. In January 2023, Getty Images, a popular stock photo company, commenced legal proceedings against Stability AI, the creator of Stable Diffusion, in the High Court of Justice in London, claiming Stability AI had infringed intellectual property rights in content owned or represented by Getty Images, absent a license and to the detriment of the content creators. A group of artists has also filed a class-action lawsuit against companies with AI art tools, including Stability AI, alleging the violation of the rights of millions of artists. As for ChatGPT itself, when asked about potential legal issues, the chatbot stated that “there should not be any legal issues” as long as it is used according to the terms and conditions set by the company and with the appropriate permissions and licenses, if any.
Last, but certainly not least, ChatGPT is unable to assess whether it complies with state privacy laws or the European Union’s General Data Protection Regulation (GDPR). The GDPR is known by many as the gold standard of privacy regulation, and ChatGPT’s lack of compliance with it, or with any privacy law, could have serious consequences if a user feeds ChatGPT sensitive information. OpenAI’s privacy policy does state that the company may collect any information a user communicates through the feature, so it is important for anyone using ChatGPT to pause and think about the impact of sharing information with the chatbot before proceeding. As ChatGPT improves and advances, the legal implications are likely to only grow in turn.