Post-Dobbs: A Whole New World of Privacy Law

By: Enny Olaleye

Last summer, the United States was rocked by the U.S. Supreme Court’s ruling in Dobbs v. Jackson Women’s Health Organization, a landmark decision striking down the constitutional right to abortion and overruling both Roe v. Wade and Planned Parenthood v. Casey. In its wake, the Dobbs decision left many questioning whether their most sensitive information—information relating to their reproductive health care—would remain private. Dobbs set in motion a web of state laws that make having, providing, or aiding and abetting the provision of abortion a criminal offense, and many now fear that enforcing those laws will require data tracking. Private groups and state agencies, from the health tech sector to the hospitality industry, may be asked to turn over data as a form of cooperation or as part of the prosecution of these new crimes.

Thus, the question arises: Exactly how much of my information is actually private?

When determining one’s respective right to privacy, it is important to consider what “privacy” actually is. Ultimately, the scope of privacy is wide-ranging. Some may consider the term by its literal definition, where privacy is the quality or state of being apart from company or observation. Alternatively, some may conceptualize privacy a bit further and view it as a dignitary right focused on knowledge someone may or may not possess about a person. Others may not view privacy by its definition at all, but rather cement their views in the belief that a person’s private information should be free from public scrutiny and that all people have a right to be left alone.

Regardless of one’s opinions on privacy, it is important to understand that, with respect to the U.S. Constitution, you have no explicitly recognized right to privacy.

How could that be possible? Some may point to the First Amendment, which preserves a person’s rights of speech and assembly, or to the Fourth Amendment, which restricts the government’s intrusion into people’s private property and belongings. However, these amendments protect specific aspects of privacy tied to freedom and liberty, with the goal of limiting government interference; they do not constitute an explicit, overarching constitutional right to privacy. While the right to privacy is not specifically listed in the Constitution, the Supreme Court has recognized it as an outgrowth of protections for individual liberty.

In Griswold v. Connecticut, the Supreme Court concluded that people have privacy rights that prevent the government from forbidding married couples from using contraception. That ruling first identified people’s right to independently control the most personal aspects of their lives—thus creating an implicit right to privacy. Later, in Roe v. Wade, the Court extended this right of privacy to include a woman’s right to have an abortion, holding that “the right of decisional privacy is based in the Constitution’s assurance that people cannot be ‘deprived of life, liberty or property, without due process of law.’” The Roe decision rested largely on the notion that the 14th Amendment contains an implicit right to privacy and, more generally, protects against state interference in a person’s private decisions. However, the Dobbs ruling has now dismissed this precedent, with the implicit right of privacy no longer extending to abortion. With a 6-3 majority, the Court reasoned that abortion lacked due process protection because it was not mentioned in the Constitution and was outlawed in many states at the time of the Roe decision.

Fast forward to today—some government entities have attempted to make progress in preserving an individual’s privacy, particularly in relation to their healthcare. The Biden administration released an executive order aimed at protecting access to abortion and treatment for pregnancy complications. Additionally, the Federal Trade Commission has started to implement federal privacy rules for consumer data, citing “a need to protect people’s right to seek healthcare information.” However, most of this progress centers on a misconception that “privacy” and “data protection” are the same thing. 

So, let’s set the record straight: privacy and data protection are not the same thing. 

While data protection does stem from the right to privacy, it mainly focuses on ensuring that data has been fairly processed. With the concept of privacy constantly intertwined with freedom and liberty over the past few decades, it can be difficult for people to fully grasp exactly which of their information is private. The Dobbs majority pointed out a distinction between privacy and liberty, noting that “as to precedent, citing a broad array of cases, the Court found support for a constitutional ‘right of personal privacy.’ But Roe conflated the right to shield information from disclosure and to make and implement important personal decisions without governmental interference.”

There is a valid concern that personal information, ranging from instant messages and location history to third-party app usage and digital records, can end up being subpoenaed or sold to law enforcement. In response to the Dobbs decision, the U.S. Department of Health and Human Services issued guidance stating that unless a state law “expressly requires” reporting on certain health conditions, the HIPAA exemption for disclosure to law enforcement would not apply. However, some people may not realize that app privacy agreements and HIPAA medical privacy rules do not automatically protect information against subpoenas. And data brokers will not hesitate to sell to the highest bidder any and all personal information they can access.

“So now what?”

Ultimately, the Dobbs decision serves as a rather harsh reminder of just how valuable our privacy is, and what can happen if we lose it. As some of us have already realized, companies, governments, and even our peers are incredibly interested in our private lives. With respect to protecting reproductive freedom, it is imperative to establish federal privacy laws that protect information related to health care from being handed over to law enforcement unless doing so is absolutely necessary to avert substantial public harm. While it is unfortunate that individuals are placed in positions where they are solely responsible for protecting themselves against corporate or governmental surveillance, everyone must remain vigilant and aware of where their information is going.

First AI Art Generator Lawsuit Hits the Courts

By: HR Fitzmorris

Your social media accounts may have recently been inundated with spookily elegant renderings of your once-familiar friends’ faces. Or, if you’re on a particular side of the internet, you may have seen any number of info-graphics scolding users for contributing to the devaluation of flesh-and-blood artists’ livelihoods. What you may not have seen is news of the recent class-action lawsuit filed on behalf of artists who are unhappy with technological advances that, in their view, were ‘advanced’ through art theft.

The Complaint

In the first-of-its-kind proposed class action, named plaintiffs allege copyright infringement, asking for damages to the tune of one billion dollars. Specifically, artists allege that the named AI companies downloaded and fed billions of copyrighted images into their AI software to ‘train’ the artificial intelligence software to create its own digital ‘art.’ In addition to damages, the plaintiffs have asked the court to issue an injunction preventing the AI companies from using artists’ work without permission and requiring the companies to seek appropriate licensing in the future.

The Plaintiffs

The named plaintiffs, who will represent the pool of affected artists if the class is certified by the court, are Sarah Andersen, a popular webcomic artist; Kelly McKernan, who specializes in colorful watercolor and acryla gouache paintings; and Karla Ortiz, a professional concept artist with clients such as Wizards of the Coast and Ubisoft.

In a New York Times opinion piece about the appropriation of her art by both the Alt-Right and artificial intelligence art generators, Ms. Andersen stated, “[t]he notion that someone could type my name into a generator and produce an image in my style immediately disturbed me.” She also explained that the appropriation made her “feel violated” by the way the AI stripped her artwork of its personal meaning and of her human mark that she honed and defined through the “complex culmination of [her] education, the comics [she] devoured as a child and the many small choices that make up the sum of [her] life.” Clearly, for these artists, there is more at stake than the threat to their livelihoods.

The Defendants

The plaintiffs named four entities as defendants in the suit: Stability AI Ltd., Stability AI, Inc., Midjourney, Inc., and DeviantArt, Inc. Each of these companies has a hand in creating, hosting, or perpetuating the use of engines that use AI to create art.

The Legal Issues

The Stable Diffusion engine, for example, is described as a “deep learning, text-to-image model” that anyone can use “to generate detailed images conditioned on text descriptions.” In layperson’s terms, users input text (such as an artist’s name or a specific medium) to generate images with those attributes. This is the heart of the issue. In order to do this, the tool (and others like it) must be “trained,” which involves, in the words of Plaintiff Sarah Andersen:

[B]uil[ding] on collections of images known as “data sets,” from which a detailed map of the data set’s contents, the “model,” is formed by finding the connections among images and between images and words. Images and text are linked in the data set, so the model learns how to associate words with images. It can then make a new image based on the words you type in.

Stable Diffusion was built using a dataset containing somewhere in the neighborhood of six billion images culled from the internet without regard to intellectual property and copyright laws or creator consent. Additionally, these companies are not building these engines out of the goodness of their hearts; they stand to make immense amounts of money. Stability AI, for example, is currently valued at approximately $1 billion.

The suit, which was filed in the Northern District of California, alleges violations of federal and state law, including “direct copyright infringement, vicarious copyright infringement related to forgeries, violations of the Digital Millennium Copyright Act (DMCA), violation of class members’ rights of publicity, breach of contract related to the DeviantArt Terms of Service, and various violations of California’s unfair competition laws.” The crucial argument for the plaintiffs is that “[e]very output image from the system is derived exclusively from the latent images, which are copies of copyrighted images. For these reasons, every hybrid image is necessarily a derivative work.” (emphasis added).

The defendant companies, though, will likely argue that some version of the “fair use doctrine” protects their activity. To prevail, the defendants must prove that their use of the images was sufficiently “transformative”—unlikely to be confused for, or usurp the market for, the original artwork. 

Whatever the court decides, this type of intersection between art and technology will likely remain a hotbed of intellectual and legal debate as artificial intelligence continues to grow in prevalence and accessibility.

Administrative Agencies & Their Role in Technological Regulation

By: Chi Kim

On January 7, 2023, Kevin McCarthy became Speaker of the House after his colleagues in the House of Representatives held fifteen separate voting sessions. The spectacle was equal parts impressive and depressing, given the inability of our current elected officials to achieve results for even seemingly mundane decisions. While many liberal observers may have rejoiced at the chaos, the fifteen votes are emblematic of a broader trend of inefficiency within the legislative branch and political processes, especially when tackling more fluid concepts and problems within the technology sector. Creating regulations requires large amounts of information, lobbying, and time to convince policymakers with inflexible positions and procedures of the merits of regulating fluid and emerging technologies. In addition to the typical policy lag, the timeline for proposed technological regulations is further lengthened by the following intrinsic and extrinsic factors.

Intrinsically, Congress is not equipped to handle technological regulation by design. Although our most recent Congress is younger than its predecessor by one year, this small change alone is a historical anomaly. The 118th Congress is the third oldest since 1789, and the average age of its members has generally been climbing since the early 1980s. The average ages in the Senate and House are 63.9 and 57.5, respectively. While this could be the result of modern medical advancements, the increasing age of our elected officials bodes ill for the hope that our policymakers will understand the technology they are regulating. Remember, for instance, the famous Facebook hearings? Even the generally unpopular Mark Zuckerberg looked relatable when forced into the position of explaining a new technology to an older person. Beyond the general lack of subject matter expertise, congressional officials cannot invest the requisite time to learn about these issues while also tackling persistent fights over voting rights legislation, labor and supply chain constraints from international pressures, and a looming recession creeping closer layoff by layoff.

Extrinsically, big tech still has a massive voice within our congressional chambers. During the 2020 election cycle, fifteen major tech companies, including Amazon, Facebook, Google, Microsoft, and Oracle, spent $96.3 million to influence forthcoming bills like the National Defense Authorization Act, the Fairness for High Skilled Immigrants Act, and the CHIPS for America Act. While Congress receives input from stakeholders, that input often arrives framed to serve the companies’ political positions.

Despite our political gridlock, the American government is not completely unarmed against big tech. In political law, “hydraulics” is the concept that political energy is never destroyed but rather manifests in new forms, finding new gaps and openings within the regulatory or political landscape, much like water does on earth. In the context of the technological landscape, the responsibility for passing regulations has flowed to administrative bodies. The Federal Trade Commission (FTC), for example, influences technology policy in a number of different ways. The FTC recently filed a lawsuit against data broker Kochava Inc. for selling geolocation data from millions of mobile devices. If the FTC is successful, such a ruling would likely affect the data broker industry as a whole. Notably, the FTC’s leadership shapes the policy direction advanced by the agency. President Biden appointed Alvaro Bedoya as FTC Commissioner; Bedoya previously served as the founding director of the Center on Privacy and Technology at Georgetown University Law Center, where he worked at the intersection of privacy and civil rights. Additionally, as of the writing of this article, the FTC is accepting public comments on a proposed rule to ban non-compete clauses, a rule intended to increase worker earnings and create more competition among big tech. While administrative agencies have their own procedural “policy lags,” the FTC can still actively tackle issues while receiving input from internal and external industry experts without being directly tainted by lobbying efforts.

Law and technology are often portrayed as incompatible ideas — rising technology meeting archaic regulations. However, policymakers should realize that law and technology are not so different: both policymaking and technology development require troubleshooting and iteration over time. Yet unlike the software engineers in the companies they regulate, policymakers do not have endless opportunities to sandbox their regulations before fully staking their political careers and capital. The responsibility for making such regulations has often flowed to administrative agencies, which can take measured steps toward the daunting task of regulating big tech companies. Congress should build on administrative agency efforts by passing bills based on the failures or successes of agency actions. Doing so could result in more relevant and longer-lasting technology regulations.

Cannabis Patents in Federal Courts

By: Yixin Bao

Introduction

Technology impacts almost every industry, and the cannabis industry is no exception. The United States Patent and Trademark Office (“USPTO”) grants multitudes of cannabis patents each year, covering technology to process and cultivate cannabis plants as well as medical uses of cannabis in the treatment of diseases. As states continue to legalize cannabis, disputes about whether a federal court should apply the illegality doctrine to cannabis-related patents will likely become more prevalent.

Background

Traditionally, the USPTO has not prohibited the filing of patents related to cannabis. In fact, the number of cannabis-related patent filings has continued to increase in recent years. This increase seems tied to more advanced technologies, the rising medical and recreational use of cannabis, and a trend favoring legalization of cannabis on a state-by-state level. Twenty-one states have acted to legalize recreational marijuana, and even more states have legalized the medical use of marijuana. Nevertheless, in most circumstances, marijuana and marijuana-related products are still considered illegal at the federal level. Because the legalization of cannabis and marijuana is a relatively recent occurrence, there has unsurprisingly been limited cannabis patent litigation.

With the expectation of increased litigation over cannabis patents, the question becomes whether the illegality doctrine should apply to cannabis patents in federal court, where marijuana and cannabis are Schedule I controlled substances under the Controlled Substances Act. The illegality doctrine traces to Everet v. Williams, also known as “the Highwayman’s Case,” a 1725 case in an English court. There, the court refused to enforce a contract between two highwaymen to share the spoils of their robberies. As Lord Mansfield later put it, “No court will lend its aid to a man who founds his cause of action upon an immoral or an illegal act.” The illegality doctrine rests on the belief that a person should not be able to benefit from his or her own wrongdoing.

Discussion

This question of whether the illegality doctrine should apply to cannabis patents in federal court is being raised more and more often in the legal profession. For example, according to several Goodwin Procter LLP attorneys, including Rob Cerwinski, Brett Schuman, Daniel Mello, and Nikhil Sethi, the uptick in cannabis-related patenting activity in recent years might lead to a potential cannabis patent “war.” These attorneys argue that a federal court should not apply the doctrine because these patents are not the fruit of a crime. There is a big difference between the private agreement between the two criminals in the Highwayman’s Case and patent rights granted by the USPTO. Many cannabis patent holders are pharmaceutical companies and research institutions, not criminals. Even the U.S. government holds a cannabis patent: the U.S. Department of Health and Human Services has a patent on certain components of marijuana, the non-psychoactive cannabinoids, for their potential use in protecting the brain from damage caused by certain diseases. These holders’ businesses are legal, and the illegality doctrine should not apply to them.

A second reason the illegality doctrine should not apply is that patent rights themselves do not violate federal drug laws. A patent confers the right to exclude others from making or using the invention, which is different from a right granting the owner permission to make or sell the invention.

Last but not least, if a federal court decides to apply the illegality doctrine to cannabis patents, it will be in direct conflict with the USPTO, the agency that serves as the national patent office and trademark registration authority for the United States.

Future

While marijuana remains illegal under federal law, a large majority of the public seems to favor federal legalization of recreational and medical marijuana, according to a CBC News poll published in 2022. As technologies advance, public support grows, and states continue to legalize cannabis, the dispute over whether a federal court should apply the illegality doctrine to these patents could become more prevalent.

AI Art “In the Style of” & Contributory Liability

By: Jacob Alhadeff

Greg Rutkowski illustrates fantastical images for games such as Dungeons & Dragons and Magic the Gathering. Rutkowski’s name has been used thousands of times in generative art platforms, such as Stable Diffusion and Dall-E, flooding the internet with thousands of works in his style. For example, type in “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski,” and Stable Diffusion will output something similar to Rutkowski’s actual work. Rutkowski is now reasonably concerned that his work will be drowned out by these hundreds of thousands of emulations, ultimately preventing customers from being able to find his work online. 

Examples of images generated by Dream Studio (Stable Diffusion) in Rutkowski’s style.

These machine learning algorithms are trained using freely available information, which is largely a good thing. However, it may feel unfair that an artist’s copyrighted images are freely copied to train their potential replacement. Ultimately, nothing these algorithms or their owners are doing is copyright infringement, and there are many good reasons for this. However, in certain exceptional circumstances, like Rutkowski’s, it may seem like copyright law insufficiently protects human creation and unreasonably prioritizes computer generation.

A primary reason Rutkowski has no legal recourse is that the entity that trains its AI on Rutkowski’s copyrighted work is not the person generating the emulating art. Instead, thousands of end-users are collectively causing Rutkowski harm. Because distinct entities cause the aggregate harm, there is no infringement. By contrast, if Stable Diffusion itself verbatim copied Rutkowski’s work to train its AI and then generated hundreds of thousands of look-alikes, that would likely be unfair infringement. The importance of this separation is best seen by walking through the process of text-to-art generation and analyzing each participant’s role.

Text-to-Image Copyright Analysis

To briefly summarize this process: billions of original human artists throughout history have created art that has been posted online. A group like Common Crawl scrapes those billions of images and their textual pairs from billions of web pages for public use. A non-profit such as LAION then creates a massive dataset that includes internet indexes and similarity scores between text and images. Subsequently, a company such as Stability AI trains its text-to-art generator, Stable Diffusion, on these text-image pairs. Notably, when a text-to-art generator uses the LAION database, it is not necessarily downloading the images themselves to train the AI. Finally, when the end user goes to Dream Studio and types in the phrase “a mouse in the style of Walt Disney,” the AI generates unique images of Mickey Mouse.
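The idea that a model “learns how to associate words with images” from text-image pairs can be loosely illustrated with a toy sketch. This is purely illustrative: the dataset, labels, and word-counting scheme below are invented for the example, and real diffusion models learn continuous embeddings from billions of pairs rather than simple word counts.

```python
# Toy sketch of learning word-image associations from caption/label pairs.
# Hypothetical data; real text-to-image models do not work this way.
from collections import defaultdict

# Hypothetical text-image pairs standing in for a scraped dataset,
# with each caption paired to a coarse image label.
dataset = [
    ("wizard fights dragon", "fantasy_art"),
    ("glowing orb of magic", "fantasy_art"),
    ("cartoon mouse", "cartoon"),
    ("yellow cartoon bear", "cartoon"),
]

def build_model(pairs):
    """Map each caption word to counts of the image labels it appears with."""
    model = defaultdict(lambda: defaultdict(int))
    for caption, label in pairs:
        for word in caption.split():
            model[word][label] += 1
    return model

def best_label(model, prompt):
    """Pick the image label whose associated words best match the prompt."""
    scores = defaultdict(int)
    for word in prompt.split():
        for label, count in model[word].items():
            scores[label] += count
    return max(scores, key=scores.get) if scores else None

model = build_model(dataset)
print(best_label(model, "wizard with glowing magic"))  # fantasy_art
```

The point of the sketch is the indirection the article describes: the “model” retains statistical associations between words and imagery, not the images themselves, which is why the copying question centers on the training step rather than on what the end user types.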

Examples of images generated by Dream Studio (Stable Diffusion) using the phrase “a mouse in the style of Walt Disney”

These several distributed roles complicate our copyright analysis, but for now, we will limit our discussion of copyright liability to three primary entities: (1) the original artist, (2) the Text-to-Image AI Company, and (3) the end-user. 

The Text-to-Image Company likely has copied Rutkowski’s work. If the Text-to-Image company actually downloads the images from the dataset to train its AI, then there is verbatim intermediate copying of potentially billions of copyrightable images. However, this is likely fair use because the generative AI provides what a court would consider a public benefit and has transformed the purpose and character of the original art. This reasoning is demonstrated by Kelly v. Arriba, where an image search’s use of thumbnail images was held transformative and fair partly because of the public benefit provided by the ability to search images and the transformed purpose for that art: searching versus viewing. Here, the purpose of the original art was to be viewed by humans, and the Text-to-Image AI Company has transformatively used the art to be “read” by machines to train an AI. The public benefit of text-to-art AI is the ability to create complex and novel art by simply typing a few words into a prompt. The generative AI’s use is more likely fair because the public never sees the downloaded images, meaning they have not directly impacted the market for the copyrighted originals.

The individual end-users are the people who, collectively, prompt the AI to generate hundreds of thousands of works “in the style of Greg Rutkowski.” An end-user, however, has not copied Rutkowski’s art, because copyright’s idea-expression distinction means that Rutkowski’s style is not copyrightable. The end-user simply typed a short prompt into Stable Diffusion’s UI. While the resulting images of wizards fighting dragons may seem similar to Rutkowski’s work, they may not be substantially similar enough to be deemed infringing copies. Therefore, the end-user likewise has not unfairly infringed Rutkowski’s copyright.

Secondary Liability & AI Copyright

Generative AI portends dramatic social and economic change for many, and copyright will necessarily respond to these changes. Copyright could change to protect Rutkowski in different ways, but many of these potential changes would result in either a complete overhaul of copyright law or the functional elimination of generative art, neither of which is desirable. One minor alteration that could give Rutkowski, and other artists like him, slightly more protection is a creative expansion of contributory liability in copyright. One infringes contributorily by intentionally inducing or encouraging direct infringement.

Dall-E has actively encouraged end-users to generate art “in the style of” artists. So not only are these text-to-art AI companies verbatim copying artists’ works, they are also encouraging users to emulate those artists’ work. At present, this is not considered contributory infringement, and it is frequently innocuous. Style is not copyrightable because ideas are not copyrightable, which is a good thing for artistic freedom and creation. Still, when Dall-E encourages users to flood the internet with AI art in Rutkowski’s style, even though end-users are not directly copying his work, it feels like copyright law should offer Rutkowski slightly more protection.

An astronaut riding a horse in the style of Andy Warhol.
A painting of a fox in the style of Claude Monet.

Contributory liability could offer this modicum of protection if, and only if, it expanded to include circumstances where the copying was done fairly by the contributor but not at all by the thousands of end-users. As previously stated, the end-users are not directly infringing Rutkowski’s copyright, so under current law, Dall-E has not contributorily infringed. However, there has never been a contributory copyright case like this one, where the contributing entity itself verbatim copied the copyrighted work, albeit fairly, but the end user did not copy at all. As such, copyright’s flexibility and policy-oriented nature could permit a unique carveout for such protection.

Analyzing Dall-E’s potential contributory liability is more complicated than it sounds, particularly because of the quintessential modern contributory liability case, MGM v. Grokster, which involved intentionally instructing users on how to file-share millions of songs. Moreover, Sony v. Universal would likely protect Dall-E generally, given the many similarities between the two situations. In that case, the court found Sony not liable for copyright infringement for the sale of VHS recorders, which facilitated direct copying of TV programming, because the technology had “commercially significant non-infringing uses.” Finally, regardless of Rutkowski’s theoretical likelihood of success, if contributory liability were expanded in this way, it would at least stop companies such as Dall-E from advertising that their generations are a great way to emulate, or copy, an artist’s work that the companies themselves initially copied.

This article has been premised on the idea that the end-users aren’t copying, but what if they are? It is clear that Rutkowski’s work was not directly infringed by the wizard fighting the dragon, but what about “a mouse in the style of Walt Disney?” How about “a yellow cartoon bear with a red shirt” or “a yellow bear in the style of A. A. Milne?” How similar does an end-user’s generation need to be for Disney to sue over an end-user’s direct infringement? What if there were hundreds of thousands of unique AI-generated Mickey Mouse emulations flooding the internet, and Twitter trolls were harassing Disney instead of Rutkowski? Of course, each individual generation would require an individual infringement analysis. Maybe the “yellow cartoon bear with a red shirt” is not substantially similar to Winnie the Pooh, but the “mouse in the style of Walt Disney” could be. These determinations would impact a generative AI’s potential contributory liability in such a claim. Whatever copyright judges and lawmakers decide, the law will need to find creative solutions that carefully balance the interests of artists and technological innovation. 
