Artificial Desires: How AI is Shaping our Consumption of Pornography

By: Devina Stone

The rise of Artificial Intelligence (AI) in creating pornography has introduced novel challenges, threatened a growing move toward ethical creation and consumption, and remains mostly ungoverned by law. The law must address the accountability of those using AI to exploit likenesses and content produced by legitimate creators. For now, the best option for victims seeking justice may be the right of publicity, which allows civil action against these perpetrators, though plaintiffs may face significant challenges.

The world of adult entertainment has long presented ethical issues, and the pornography industry is frequently perceived as vulgar and indecent. Statistics, however, suggest that most of us consume it anyway: 92% of men and 60% of women report consuming some form of pornography monthly, whether visual, auditory, or written. PornHub, the most popular video website, sees 42 billion visits in a year.

In the past decade, Millennials have become the largest share of adults worldwide, and Gen Z has emerged as a generation of ethically motivated, progressive young adults. Social issues have taken the forefront in marketing, politics, and nearly every other aspect of our lives. This recent wave of social consciousness has influenced how the adult entertainment industry approaches pornography creation and consumption. Ethically focused creators have emerged, producing pornography rooted in consent, where actors are regularly tested for STDs and paid fairly, and intimacy coordinators monitor actors’ wellbeing and treatment. One such director, Erika Lust, states that “[p]orn forms part of a healthy sexual experience…[it] can also be artistic and beautiful.” Apps like Quinn and Dipsea and streaming platforms like Make Love Not Porn have attracted people who find traditional pornography too graphic, unrealistic, and crass. Queer stories have surpassed “girl on girl” videos, and diversity in race, gender, and sexuality has taken a front seat. Liberated female creators have taken control of their own narratives, with sites like OnlyFans allowing women to produce the content they want, free of the pressure of a director, a set, or others’ expectations. Currently, OnlyFans boasts 2.1 million creators and 500,000 new viewers every day.

It would seem, then, that the porn industry is in the midst of a tectonic shift toward sensitivity. The rise of AI, however, threatens this shift. AI has presented ethical concerns from the start, from privacy and surveillance, to bias and discrimination, to the role of human judgment. Add the increased challenge of sexual content into the mix, and the potential use of AI is downright worrying. Ethically, human porn creators consent to the acts they participate in; AI isn’t real, so there is no consent. One can effectively order the sexual content one wants and have it delivered, a transaction that in no way reflects how sexual experiences occur in the real world. Moreover, this directs traffic away from legitimate, ethical creators and toward the free, easily accessible content created by generative AI.

An AI user can request the creation of pornography that uses the faces of real people, from celebrities to children. The result is content that is, at best, deeply embarrassing for the subject and, at worst, downright illegal. Not to mention that AI must learn from existing content on the web, so it inevitably incorporates the content and faces of existing porn actors without their consent.

The law is trying to catch up, but it is lagging behind. First came deepfakes: manipulated media in which real faces are convincingly pasted onto a video or photo of someone else. Only after a deepfake of Taylor Swift engaging in sexual acts went viral did the DEFIANCE Act appear. Passed by the Senate this July, the DEFIANCE Act allows victims of deepfake pornography to file civil suits against perpetrators. Criminal penalties are left to the states: some have passed new laws, some have expanded existing law, and others have yet to legislate. This progress offers hope for victims of deepfake pornography by putting power back in their hands.

The more obscure issue of AI learning from existing content, without the consent of creators, and using actual faces and bodies to create “fake” content is harder to legislate. Most laws regarding nonconsensual pornography, and even the new deepfake legislation, focus on a single identifiable victim. This means there is no penalty for AI users or developers when an AI model uses existing content, against the wishes of the person pictured, to create new, unrecognizable content. Not only is this a violation of privacy and choice, but it allows for the creation of content that would not otherwise exist, like rape pornography and child sex abuse material, which risks encouraging real-world offenses of the same kind.

Exposure to violent pornography has a profound and tangible impact. Teen boys who reported consuming sexually violent content were two to three times more likely to perpetrate “teen dating violence” against their real-world partners. Consumption of “sexually aggressive pornography contributes to increased hostility toward women, acceptance of rape myths, decreased empathy, and compassion for victims and an increased acceptance of physical violence toward women.”

Legislators and law enforcement around the country have begun pushing for legislation to criminalize the creation of this content, including the Child Exploitation and Artificial Intelligence Expert Commission Act of 2024, which would create a commission to explore the issue and propose “appropriate safety measures and updates to existing laws.” But this solution still ignores adult creators whose content is used to create nonconsensual AI-generated pornographic videos and images.

The right of publicity “allows individuals to control the commercial exploitation of their identity and reap the rewards associated with their fame or notoriety by requiring others to obtain permission (and pay) to use their name, image, or likeness.” Cases like White v. Samsung Electronics have allowed the right of publicity to apply even when the unpermitted use of a likeness is not identical to, but strongly suggestive of, a particular person. In White, a robot with blonde hair and a long gown turning letters on a game-show set was found to violate Vanna White’s right of publicity. However, generative AI may produce a result that is not recognizable as any given person, in which case the right of publicity may not apply.

Moreover, creative content featuring celebrities has sometimes been held not to violate the right of publicity when courts balance the use of celebrity images against the creator’s right to expression under the First Amendment. Tiger Woods, for example, could not sue an artist who painted and sold images of him, because the work was substantially creative. AI users who create sexually explicit content could potentially use loopholes like this to evade civil liability for their use of real faces.

Today, there is no way to prevent such exploitation. But the law is an ever-evolving body, and hopefully the vigor with which legislators have brought forth the DEFIANCE Act and the Child Exploitation and Artificial Intelligence Expert Commission Act will continue, and creators will soon be protected.

Talking to Machines – The Legal Implications of ChatGPT

By: Stephanie Ngo

Chat Generative Pre-trained Transformer, known as ChatGPT, was launched on November 30, 2022. The program has since taken the world by storm with its articulate, detailed responses to a multitude of questions. A quick Google search for “chat gpt” amasses approximately 171 million results. In the first five days after launch, more than a million people signed up to test the chatbot, according to OpenAI’s president, Greg Brockman. But new technology brings legal issues that require legal solutions. As ChatGPT continues to grow in popularity, it is now more important than ever to discuss how such a smart system could affect the legal field.

What is Artificial Intelligence? 

Artificial intelligence (AI), per John McCarthy, a world-renowned computer scientist at Stanford University, is “the science and engineering of making intelligent machines, especially intelligent computer programs, that can be used to understand human intelligence.” The first successful AI program was written in 1951 to play checkers, but the idea of “robots” taking on human-like characteristics has been traced back even earlier. It has been predicted that AI, already prominent, will permeate the daily lives of individuals by 2025 and seep into various business sectors. Today, the buzz around AI stems from the fast-growing influx of emerging technologies and from how AI can be integrated with current technology to innovate products like self-driving cars, electronic medical records, and personal assistants. Many are aware of what “Siri” is, and consumers’ expectation that Siri will soon become all-knowing continues to push the field of AI to develop at such fast speeds.

What is ChatGPT? 

ChatGPT is a chatbot that uses a large language model trained by OpenAI, an AI research and deployment company founded in 2015 and dedicated to ensuring that artificial intelligence benefits all of humanity. ChatGPT was trained on data from books and other written materials to generate natural, conversational responses, as if a human had written the reply. Chatbots are not a recent invention. In 2019, Salesforce reported that twenty-three percent of service organizations used AI chatbots; by 2021, the figure was closer to thirty-eight percent, a sixty-seven percent increase since its 2018 report. Their effectiveness, however, left many consumers wishing for a faster, smarter way of getting accurate answers.

In comes ChatGPT, hailed as the “best artificial intelligence chatbot ever released to the general public” by New York Times technology columnist Kevin Roose. ChatGPT’s ability to answer extremely convoluted questions, explain scientific concepts, or even debug large amounts of code is indicative of just how far chatbots have advanced since their creation. Before ChatGPT, answers from chatbots were taken with a grain of salt because of their inaccurate, roundabout responses, likely programmed from a template. ChatGPT, while still imperfect and slightly outdated (its knowledge is restricted to information from before 2021), is being used in ways that some argue could impact many occupations and render certain inventions obsolete.

The Legal Issues with ChatGPT

ChatGPT has widespread applicability and has been touted as rivaling Google in its usage. Since the beta launch in November, there have been countless stories from people in various occupations about ChatGPT’s different use cases. Teachers can use it to draft quiz questions. Job seekers can use it to draft and revise cover letters and resumes. Doctors have used the chatbot to diagnose patients, write letters to insurance companies, and even assist with certain medical examinations.

On the other hand, ChatGPT has its downsides. One of the main arguments against ChatGPT is that the chatbot’s responses are so natural that students may use it to shirk their homework or plagiarize. To combat academic dishonesty and misinformation, OpenAI has begun work on accompanying software, training a classifier to distinguish between AI-written and human-written text. OpenAI has noted that, while not wholly reliable, the classifier will become more reliable the longer it is trained.

Another argument that has arisen involves intellectual property issues. Is the material that ChatGPT produces legal to use? In a similar situation, a different artificial intelligence program, Stable Diffusion, was trained to replicate an artist’s style of illustration and create new artwork based upon the user’s prompt. The artist was concerned that the program’s creations would be associated with her name because the training used her artwork.

Because the technology is so new, case law addressing this specific issue is limited. In January 2023, Getty Images, a popular stock photo company, commenced legal proceedings against Stability AI, the creator of Stable Diffusion, in the High Court of Justice in London, claiming Stability AI had infringed intellectual property rights in content owned or represented by Getty Images, absent a license and to the detriment of the content creators. A group of artists has also filed a class-action lawsuit against companies with AI art tools, including Stability AI, alleging the violation of the rights of millions of artists. As for ChatGPT, when asked about potential legal issues, the chatbot stated that “there should not be any legal issues” as long as it is used according to the terms and conditions set by the company and with the appropriate permissions and licenses, if any.

Last, but certainly not least, ChatGPT is unable to assess whether it complies with state privacy laws or with the European Union’s General Data Protection Regulation (GDPR), known by many as the gold standard of privacy regulations. ChatGPT’s lack of compliance with the GDPR or any privacy law could have serious consequences if a user feeds it sensitive information. OpenAI’s privacy policy states that the company may collect any information a user communicates through the feature, so anyone using ChatGPT should pause and consider the impact of sharing information with the chatbot before proceeding. As ChatGPT improves and advances, the legal implications are likely only to grow in turn.

AI Art “In the Style of” & Contributory Liability

By: Jacob Alhadeff

Greg Rutkowski illustrates fantastical images for games such as Dungeons & Dragons and Magic the Gathering. Rutkowski’s name has been used thousands of times in generative art platforms, such as Stable Diffusion and Dall-E, flooding the internet with thousands of works in his style. For example, type in “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski,” and Stable Diffusion will output something similar to Rutkowski’s actual work. Rutkowski is now reasonably concerned that his work will be drowned out by these hundreds of thousands of emulations, ultimately preventing customers from being able to find his work online. 


Examples of images generated by Dream Studio (Stable Diffusion) in Rutkowski’s style.

These machine learning algorithms are trained on freely available information, which is largely a good thing. Still, it may feel unfair that an artist’s copyrighted images are freely copied to train his potential replacement. Ultimately, nothing these algorithms or their owners are doing constitutes copyright infringement, and there are many good reasons for this. In certain exceptional circumstances, however, like Rutkowski’s, it may seem that copyright law insufficiently protects human creation and unreasonably prioritizes computer generation.

A primary reason Rutkowski has no legal recourse is that the entity that trains its AI on Rutkowski’s copyrighted work is not the one generating the emulating art. Instead, thousands of end-users collectively cause Rutkowski harm. Because distinct entities cause the aggregate harm, there is no infringement. By contrast, if Stable Diffusion verbatim copied Rutkowski’s work to train its AI and then itself generated hundreds of thousands of look-alikes, that would likely be unfair infringement. The importance of this separation is best seen by walking through the process of text-to-art generation and analyzing each actor’s role.

Text-to-Image Copyright Analysis

To summarize the process: billions of human artists throughout history have created art that has been posted online. A group like Common Crawl scrapes those billions of images and their textual pairs from billions of web pages for public use. A non-profit such as LAION then creates a massive dataset that includes internet indexes and similarity scores between text and images. Subsequently, a company such as Stable Diffusion trains its text-to-art AI generator on these text-image pairs. Notably, when a text-to-art generator uses the LAION dataset, it is not necessarily downloading the images themselves to train its AI. Finally, when the end-user goes to Dream Studio and types the phrase “a mouse in the style of Walt Disney,” the AI generates unique images of Mickey Mouse.
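The distributed roles described above can be sketched in miniature. The toy Python below is purely illustrative: every function name, URL, and data value is hypothetical, and real systems like Common Crawl, LAION, and Stable Diffusion operate on billions of image-text pairs with actual neural networks rather than a stub vocabulary. The structural point is what matters for the copyright analysis: the scraping and training (the copying) happen in different hands than the prompting (the generation).

```python
# Hypothetical sketch of the four roles in the text-to-image pipeline.
# Nothing here reflects any real API; it only models the division of labor.

def scrape_image_text_pairs(web_pages):
    """Role 1 (crawler, e.g. Common Crawl): collect image URLs and captions."""
    return [(page["image_url"], page["caption"]) for page in web_pages]

def build_dataset(pairs):
    """Role 2 (dataset curator, e.g. LAION): index each pair with a
    text-image similarity score. The dataset stores URLs and scores,
    not the images themselves."""
    return [{"url": url, "caption": cap, "similarity": 0.9} for url, cap in pairs]

def train_generator(dataset):
    """Role 3 (AI company): train on the text-image pairs. A caption
    vocabulary stands in for actual model training."""
    vocabulary = {word for row in dataset for word in row["caption"].split()}

    def generate(prompt):
        # Role 4 (end-user): type a prompt, receive a novel output.
        known = [w for w in prompt.split() if w in vocabulary]
        return f"<image conditioned on: {' '.join(known)}>"

    return generate

# Hypothetical scraped pages standing in for the open web.
web_pages = [
    {"image_url": "https://example.com/wizard.png", "caption": "wizard fights dragon"},
    {"image_url": "https://example.com/orb.png", "caption": "glowing orb of fire"},
]
generate = train_generator(build_dataset(scrape_image_text_pairs(web_pages)))
print(generate("wizard with glowing orb fights dragon"))
```

Note that only `train_generator` ever touches the scraped material, while the end-user supplies nothing but a short prompt, which mirrors why the aggregate harm is split across distinct entities.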

Examples of images generated by Dream Studio (Stable Diffusion) using the phrase “a mouse in the style of Walt Disney”

These several distributed roles complicate our copyright analysis, but for now, we will limit our discussion of copyright liability to three primary entities: (1) the original artist, (2) the Text-to-Image AI Company, and (3) the end-user. 

The Text-to-Image Company has likely copied Rutkowski’s work. If it actually downloads the images from the dataset to train its AI, then there is verbatim intermediate copying of potentially billions of copyrightable images. However, this is likely fair use, because the generative AI provides what a court would consider a public benefit and has transformed the purpose and character of the original art. This reasoning is demonstrated by Kelly v. Arriba Soft, where an image search engine’s use of thumbnail images was held transformative and fair, partly because of the public benefit of searchable images and the transformed purpose of the art: searching versus viewing. Here, the purpose of the original art was to be viewed by humans, and the Text-to-Image Company has transformatively used the art to be “read” by machines to train an AI. The public benefit of text-to-art AI is the ability to create complex and novel art by typing a few words into a prompt. The use is more likely fair because the public never sees the downloaded images, which means they have not directly impacted the market for the copyrighted originals.

The end-user is any person who prompts the AI to generate works “in the style of Greg Rutkowski”; collectively, end-users have generated hundreds of thousands of them. But the end-user has not copied Rutkowski’s art, because copyright’s idea-expression distinction means that Rutkowski’s style is not copyrightable. The end-user simply typed a short phrase into Stable Diffusion’s UI. While the resulting images of wizards fighting dragons may seem similar to Rutkowski’s work, they may not be substantially similar enough to be deemed infringing copies. Therefore, the end-user likewise has not infringed Rutkowski’s copyright.

Secondary Liability & AI Copyright

Generative AI portends dramatic social and economic change for many, and copyright will necessarily respond to these changes. Copyright could change to protect Rutkowski in different ways, but many of these potential changes would result in either a complete overhaul of copyright law or the functional elimination of generative art, neither of which is desirable. One minor alteration that could give Rutkowski, and other artists like him, slightly more protection is a creative expansion of contributory liability in copyright. One infringes contributorily by intentionally inducing or encouraging direct infringement.

Dall-E has actively encouraged end-users to generate art “in the style of” particular artists. So not only are these text-to-art AI companies verbatim copying artists’ works, they are also encouraging users to emulate those works. At present, this is not contributory liability, and it is frequently innocuous: style is not copyrightable because ideas are not copyrightable, which is good for artistic freedom and creation. Still, when Dall-E encourages users to flood the internet with AI art in Rutkowski’s style, even though end-users are not directly copying his work, it feels like copyright law should offer Rutkowski slightly more protection.

An astronaut riding a horse in the style of Andy Warhol.
A painting of a fox in the style of Claude Monet.

Contributory liability could offer this modicum of protection if, and only if, it expanded to include circumstances where the copying was done fairly by the contributor but not at all by the thousands of end-users. As previously stated, the end-users are not directly infringing Rutkowski’s copyright, so under current law Dall-E is not contributorily liable. However, there has never been a contributory copyright case like this one, where the contributing entity itself verbatim copied the copyrighted work, albeit fairly, but the end-user did not copy at all. As such, copyright’s flexibility and policy-oriented nature could permit a unique carveout for such protection.

Analyzing Dall-E’s potential contributory liability is more complicated than it sounds, particularly because of the quintessential modern contributory liability case, MGM v. Grokster, which involved intentionally instructing users on how to file-share millions of songs. Moreover, Sony v. Universal would rightfully protect Dall-E generally, given the many similarities between the two situations: there, the court found Sony not liable for copyright infringement in the sale of VHS recorders, which facilitated direct copying of TV programming, because the technology had “commercially significant non-infringing uses.” Finally, regardless of Rutkowski’s theoretical likelihood of success, if contributory liability were expanded in this way, it would at least stop companies such as Dall-E from advertising that their generations are a great way to emulate, or copy, an artist’s work that the companies themselves initially copied.

This article has been premised on the idea that the end-users aren’t copying, but what if they are? It is clear that Rutkowski’s work was not directly infringed by the wizard fighting the dragon, but what about “a mouse in the style of Walt Disney?” How about “a yellow cartoon bear with a red shirt” or “a yellow bear in the style of A. A. Milne?” How similar does an end-user’s generation need to be for Disney to sue over an end-user’s direct infringement? What if there were hundreds of thousands of unique AI-generated Mickey Mouse emulations flooding the internet, and Twitter trolls were harassing Disney instead of Rutkowski? Of course, each individual generation would require an individual infringement analysis. Maybe the “yellow cartoon bear with a red shirt” is not substantially similar to Winnie the Pooh, but the “mouse in the style of Walt Disney” could be. These determinations would impact a generative AI’s potential contributory liability in such a claim. Whatever copyright judges and lawmakers decide, the law will need to find creative solutions that carefully balance the interests of artists and technological innovation. 
