(A.I.) Drake, The Weeknd, and the Future of Music

By: Melissa Torres

A new song titled “Heart on My Sleeve” went viral this month before being taken down by streaming services. The song racked up 600,000 Spotify streams, 275,000 YouTube views, and 15 million TikTok views in the two weeks it was available. 

Created by an anonymous TikTok user, @ghostwriter977, the song uses generative AI to mimic the voices of Drake and The Weeknd. It also features a signature tagline from music producer Metro Boomin.

Generative AI is a technology gaining popularity for its ability to produce realistic images, audio, and text. However, concerns have been raised about its potential negative implications, particularly in the music industry, where its impact falls most directly on artists.

Universal Music Group (UMG) caught wind of the song and had the original version removed from platforms due to copyright infringement. 

UMG, the label representing these artists, claims that the Metro Boomin producer tag at the beginning of the song is an unauthorized sample. YouTube spokesperson Jack Malon says, “We removed the video after receiving a valid copyright notification for a sample included in the video. Whether or not the video was generated using artificial intelligence does not impact our legal responsibility to provide a pathway for rights holders to remove content that allegedly infringes their copyrighted expression.”

While UMG was able to have the song removed based on the unauthorized sample of the producer tagline, the takedown leaves unanswered the legal questions surrounding the use of AI-generated voices.

In “Heart on My Sleeve”, it is unclear exactly which elements of the song were created by the TikTok user. While the lyrics, instrumental beat, and melody may have been created by the individual, the vocals were created by AI. This creates a legal issue as the vocals sound like they’re from Drake and The Weeknd, but are not actually a direct copy of anything. 

These issues may be addressed by the courts for the first time, as initial lawsuits involving these technologies have been filed. In January, Andersen et al. filed a class-action lawsuit raising copyright infringement claims. The complaint asserts that the defendants directly infringed the plaintiffs’ copyrights by using the plaintiffs’ works to train the models and by creating unauthorized derivative works and reproductions of the plaintiffs’ works in connection with the images generated using these tools.

While music labels argue that a license is required because the AI’s output is based on preexisting musical works, proponents of AI maintain that using such data falls under the fair use exception in copyright law. Under the four fair use factors, AI advocates claim the resulting works are transformative, meaning they are not substantially similar to the original musical works and have no impact on the market for them.

As of now, there are no regulations regarding what training data AI can and cannot use. Last March, the US Copyright Office released new guidance on how to register literary, musical, and artistic works made with AI. The new guidance states that copyright will be determined on a case-by-case basis based on how the AI tool operates and how it was used to create the final piece or work. 

In further attempts to protect artists, UMG urged all streaming services to block access from AI services that might be using the music on their platforms to train their algorithms. UMG claims that “the training of generative AI using our artists’ music…represents both a breach of our agreements and a violation of copyright law… as well as the availability of infringing content created with generative AI on DSPs…” 

Moreover, the Entertainment Industry Coalition announced the Human Artistry Campaign, in hopes of ensuring that AI technologies are developed and used in ways that support, rather than replace, human culture and artistry. Along with the campaign, the group outlined principles advocating AI best practices, emphasizing respect for artists, their work, and their personas; transparency; and adherence to existing law, including copyright and intellectual property.

Regardless, numerous AI-generated covers have gone viral on social media, including Beyoncé’s “Cuff It” featuring Rihanna’s vocals and the Plain White T’s’ “Hey There Delilah” featuring Kanye West’s vocals. More recently, the musician Grimes shared her support for AI-generated music, tweeting that she would split royalties 50/50 on any successful AI-generated song that uses her voice. “Feel free to use my voice without penalty,” she tweeted, “I think it’s cool to be fused [with] a machine and I like the idea of open sourcing all art and killing copyright.”

As UMG states, it “begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.”

While the music industry and lawyers scramble to address concerns presented by generative AI, it is clear that “this is just the beginning” as @ghostwriter977 ominously noted under the original TikTok posting of the song. 

Tick-Tock, TikTok: Could the Party Actually Stop?

By: Enny Olaleye

Over the past couple of years, the United States government has expressed several concerns about the potential security risks associated with TikTok, a popular social media application that has amassed over 1 billion active users every month. Some officials have suggested that TikTok could be used by the Chinese government to collect data on American citizens or to spread propaganda.

According to TikTok, the social media app serves as a platform for users to “come together to learn, be entertained, and grow their business, as they continue to create, discover and connect with a broader global community.” TikTok now ranks among the largest social networks worldwide, coming in third after Facebook and Instagram. Its popularity extends to the United States: approximately 150 million Americans, nearly half of the U.S. population, are active users of the app.

As a result of the app’s widespread popularity in the United States, government officials have become wary about the potential risks the social media app poses. In August 2020, former President Donald Trump issued an executive order that would have effectively banned TikTok in the US unless it was sold to an American company within 45 days. This order cited concerns about national security and the app’s handling of user data, as the US government was worried that user data could be accessed by the Chinese government. Additionally, the US government accused TikTok of censoring content that would be unfavorable to the Chinese Communist Party and allowing misinformation to spread on its platform. 

However, the ban was temporarily blocked by a federal judge and the Biden administration later put the order on hold to conduct a broader review of security risks posed by Chinese technology. In June 2021, President Biden signed an executive order that expanded the scope of previous orders related to Chinese-owned apps and other technology. The order aimed to protect US citizens’ sensitive data and prevent the Chinese government from gaining access to that data through apps like TikTok. In September 2021, the Biden administration announced that it would not pursue a ban on TikTok, but it would continue to monitor the app’s potential security risks.

In response, TikTok has repeatedly denied these allegations, saying that it stores U.S. user data on servers located in the U.S. and that it has never provided data to the Chinese government. The company has also taken steps to distance itself from its Chinese parent company, ByteDance, by appointing a U.S. CEO and creating a U.S.-based data center.

Unfortunately, TikTok’s actions have done little to appease the concerns raised by federal officials and have only led legislators to double down on taking action against the platform. As of March 2023, the Biden administration has called on TikTok’s parent company, ByteDance, to either sell the app or face a possible ban in the United States because of concerns about data privacy and national security. The White House has also signaled its support for draft legislation in the House of Representatives that would allow the federal government to regulate or ban technology produced by some foreign countries, including TikTok. The federal government and multiple states have also already banned TikTok on government devices.

Thus, the question arises: “Can the federal government actually do that?” 

Well—not necessarily. First and foremost, the U.S. government does not have the authority to ban speech, as free speech is a right guaranteed to citizens under the First Amendment of the U.S. Constitution. Posts on TikTok are protected by the First Amendment since they are a form of speech. While the Biden Administration has replaced the Trump order with one that provides a more solid legal ground for potential action, it still cannot override the First Amendment protections afforded to speech on TikTok.

Further, if this issue is brought to the courts, proponents of a TikTok ban will most likely claim the national security risks posed by the app are self-evident. Proponents of a ban believe that TikTok’s relationship to China, together with Chinese law requiring companies to cooperate with requests from Beijing’s security and intelligence services, creates obvious security problems. However, that line of reasoning is unlikely to find support with federal judges, who will be weighing the potential security risks against the imposition of real-world restrictions on the rights of 150 million Americans to post and exercise free speech on an extremely popular platform.

Even if judges were to rule that a TikTok ban is neutral when it comes to content and viewpoint, the government would still have to prove that the remedy is narrowly tailored to serve, at a minimum, a “significant government interest,” in order to justify a ban and the corresponding restriction on speech. To ensure narrow tailoring, the Supreme Court developed the standard of strict scrutiny when reviewing free speech cases. To satisfy strict scrutiny, the government must show that the law meets a compelling government interest and that the regulation is being implemented using the least restrictive means. However, narrow tailoring is not confined to strict scrutiny cases, as seen in McCullen v. Coakley, where the Court determined that a Massachusetts law regulating protests outside abortion clinics was not content based and, thus, not subject to strict scrutiny.

As of April 2023, there has been no federal legislation passed that would permit an outright ban of TikTok in the United States. While the First Amendment likely limits the government’s ability to ban the app outright, it could still target TikTok’s ability to conduct U.S.-based financial transactions. That includes potential restrictions on its relationship with Apple and Google’s mobile app stores, which would severely hamper TikTok’s growth. By targeting conduct instead of speech, such a restriction would be outside of the First Amendment’s protections.

Regardless, amid all this government action, one thing has become apparent. As the federal government escalates its efforts against TikTok, it’s coming up against a stark reality: even a politically united Washington may not have the regulatory and legal powers to wipe TikTok off American phones.

Graffiti Art and Related Legal Issues in Washington

By: Yixin Bao

Graffiti is a type of visual communication written or painted on a surface, usually without permission from the property owner and often in public view. Some see graffiti as antisocial behavior used to gain public attention, especially when it is created by a member of a street gang. Others, however, treat graffiti as a type of expression and an art form.

Starting in the 1960s, graffiti became a popular form of art in the United States. In New York, young people started to use spray paint to leave their signatures on public spaces, mostly on city walls and subway cars. For example, the artist TAKI 183 became famous for his frequent illegal tagging and eventually came to be known as one of the “forefathers” of graffiti. While TAKI 183, whose real name is Demetrius, never considered himself an artist, he left his name and street number on hundreds of surfaces in New York City, making him a part of the history of American graffiti. Demetrius said: “I think a lot of what the graffiti movement spawned, early on, was just vandalism and defacement. But later on, real artists started doing it, and it did become a true art form.” As the art form grew, graffiti became more than lettering. Abstract and complex compositions, with additional color and lines, were incorporated alongside the text. This evolution also brought commercial success for these artists.

Some graffiti artworks may qualify for protection as visual works under copyright law. Copyright is a form of intellectual property that protects original works of authorship. The work must be original and fixed in a tangible medium of expression. The fundamental exclusive rights of a copyright owner are the right to reproduce, the right to prepare derivative works, the right to distribute, and the right to public display and performance. Like other art forms, a graffiti work that meets these requirements can be protected under copyright. For example, Keith Haring’s famous street art in the New York City subway, white-chalk drawings of dancing figures on black advertising panels, is protected under copyright law because the works are original and fixed on the subway panels. However, not all graffiti qualifies for copyright protection. Some graffiti, such as short phrases and single words, is too simple to be considered protectable artwork.

Locally, graffiti is generally illegal if it is created without permission from the surface’s owner. According to Washington state law, graffiti is a gross misdemeanor. Under RCW 9A.48.090, a person is guilty of malicious mischief in the third degree if he or she writes, paints, or draws a mark of any type on any public or private building unless he or she has gained the permission of the owner of the property. 

Controversies surrounding graffiti art have persisted. In Washington state, graffiti is everywhere on bridges, walls, and traffic signs. From 2015 to 2017, state transportation officials spent more than $600,000 to remove graffiti, and that figure rose to $1.4 million between 2019 and 2021. However, when city officials quietly painted over a tunnel full of graffiti in Washington Heights, some residents accused them of “whitewashing” the culture and the history of the neighborhood. The comments show how the community holds different stances on the issue of graffiti. In 2021, individuals brought a lawsuit challenging Seattle’s graffiti ordinance. Four people were arrested and jailed for writing easy-to-clean political messages on temporary barricades. They filed a complaint alleging that “SPD only select[s] to enforce the ordinance when views are expressed that do not align with their own.” None of the four, however, were ever prosecuted for the graffiti.

Graffiti is a form of artistic expression and can bring positive outcomes to a community. At the same time, graffiti created without consent is illegal and considered vandalism. As a standard practice, graffiti artists should seek the property owner’s consent before creating their artwork. Additionally, if a work meets the qualifications, including the originality and fixation requirements, it should be safeguarded under copyright law as a form of artistic expression. Given the ambiguity between graffiti and artistic expression, graffiti artists should always exercise caution and be mindful of the context and legality of their artistic endeavors in public spaces.

Art Mishaps: Who Foots the Bill?

By: Nicholas Lipperd

One misstep at a museum social hour was all it took to destroy a $42,000 sculpture. Seconds after a museum patron accidentally bumped the pedestal, Jeff Koons’s porcelain “balloon dog” sculpture lay shattered on the floor. As onlookers watched in horror, the person who bumped it surely had one thing racing through her mind: will I have to pay for this? It was surely the same question asked by the parents of the twelve-year-old who tripped and accidentally put a fist through a $1.5 million painting in Taiwan. Exploring both the practical effects and the legal theories that apply to museum-patron mishaps, this article concludes that patrons have little reason to worry.

The majority of mishaps involving art end up being covered by insurance, but relying on insurance is rarely a straightforward process. As damaged-art claims rise, the incentive for insurance companies to make claims easy to pursue continually shrinks. Further concerns arise if the insurance contract contains terms disclaiming damage from patrons in certain instances. What if the museum is displaying the art for sale on consignment and does not obtain insurance, hoping to save a few pennies? This is certainly an option for museums, though states like Washington impose strict liability for damage on museums when selling art on consignment. While insurance removes most of the worry over museum mishaps, it is not a foolproof solution.

Even if museums lack the safety net of insurance coverage, patrons likely need not fear the price tag of accidental damage. Any claims based on such damage will be governed by state tort law because museum patrons have traditionally been considered invitees. While many states have moved past such rigid categories in tort law with respect to third-party harm on public land, the categorization of invitee is still important to understand why liability will not likely fall on a museum patron.

A public invitee is a person who is invited to the property for a purpose for which the land is held open to the public. A museum thus owes a duty of care to museum patrons as invitees, and the museum is liable for injuries and damages caused by the condition of the museum. In layperson’s terms, this means that if a museum failed to properly secure a priceless sculpture and a patron bumped it, it is the museum and not the patron who is responsible. This protection may not hold when the patron specifically recognizes a danger and fails to heed it, is trespassing, acts intentionally, or is otherwise acting negligently. The responsible museum-goer need not worry. Yet, these exceptions to invitee protection call into question a few problematic situations.

If a patron’s actions in damaging art are truly intentional, there are not many defenses available. This is not particularly controversial; if one intends to destroy art, one should be held responsible. But when the action is intentional but the consequences are not, what then? The outcome may be uncertain. In one comical example, a museum janitor thought a contemporary art exhibit was simply trash and consequently “cleaned up” the exhibit by throwing it away. Luckily, the actions were viewed as an honest mistake by the museum, and she was not responsible for the cost. 

If museums have interactive exhibits, the patron is acting intentionally when interacting with the exhibit. When such exhibits invite the patron to physically engage with the art past merely pushing a button, greater risk of damage is inherent. Common sense would dictate that a patron who, hypothetically, breaks a lever on a piece of interactive art after being invited to push said lever, has not intentionally broken anything, despite the act being intentional. One legal theory that protects the patron here parallels the personal injury defense of assumption of risk. The museum is responsible for setting up any interactive exhibit and understands that the risk of damage is increased when inviting patrons to interact. While this protects patrons who act reasonably in such exhibits, a negligence standard may still be applied to their actions in fact-specific circumstances. 

Negligence may pose the most risk to museum patrons just as it does in many other social settings: when alcohol is present. It is increasingly common for museums to host special mixers or functions where alcohol is provided or available. “I just had one too many” is not a valid excuse in any setting and especially not at a museum. A patron’s actions will be judged as either responsible or negligent when compared to a sober adult in the same setting. While commercial hosts can be held liable for damages caused by the intoxication of the persons they serve if those persons are apparently under the influence of alcohol, this is fact-specific and not a protection to be relied upon when the liability for tens of thousands of dollars of damage may be called into question.

So if you plan on enjoying a nice afternoon at the museum, you shouldn’t spend much time worrying about covering the exorbitant cost of an unfortunate mishap. However, should you consider visiting a new interactive exhibit at your local glass museum after a few happy hour drinks, more caution is certainly warranted.

Liability, Authorship, & Symmetrical Causation in AI-Generated Outputs

By: Jacob Alhadeff

Copyright has insufficiently analyzed causation for both authorship and liability because, until now, causation was relatively obvious. If someone creates a painting, then they caused the work and receive authorial rights. If it turned out that the painting was of Mickey Mouse, then that painter may be liable for an infringing reproduction. However, recent technological advances have challenged the element of causation in both authorship and infringement. In response, recent law and scholarship have begun to address these issues. However, because they have addressed causation in isolation, current analysis has provided logically or ethically insufficient answers. In other words, authorial causation has ignored potential implications for an entity’s infringement liability, and vice versa. Regardless of how the law responds, generative AI will require copyright to explore and enumerate the previously assumed causation analyses for both infringement and authorship. This blog explores how generative AI exposes the logical inconsistencies that result from analyzing authorial causation without analyzing causation for infringing reproductions.

Generative AI largely requires the following process: (1) an original artist creates works, (2) a developer trains an AI model on these works, and (3) an end-user prompts the AI to generate an output, such as “a mouse in the style of Walt Disney.” This generative AI process presents a novel challenge for copyright in determining who or what caused the output because generative AI challenges conventional notions of creation.

Causing Infringement

Andersen et al. recently filed a complaint against Stability AI, one of the most popular text-to-art foundation models. This class action alleges that Stability AI is directly liable for infringing works that result from end-user-prompted generations. However, in a recent decision more closely analyzing causation and volition in infringement, the Ninth Circuit found that “direct liability must be premised on conduct that can reasonably be described as the direct cause of infringement.” Stability AI should not be found directly liable for infringing these artists’ copyright, in part because Stability AI cannot reasonably be said to be the direct cause of infringement. Such a finding would be similar to holding Google liable for reproducing images of Mickey Mouse on people’s computer screens when they search for “Mickey Mouse.”

This lawsuit is particularly relevant since end-users have prompted thousands of generations that include the phrase “Mickey Mouse,” and many appear substantially similar to Disney’s Mickey. If thousands of end-users have intentionally prompted the AI to generate Mickey Mouse, then what volitional conduct can most reasonably be described as the direct cause of infringement? It is clearly the end-user’s. However, what if the end-user simply prompted “a cartoon mouse” and the AI generated an infringing image of Mickey? Here, the end-user may not have intended to generate Mickey, and reasonable notions of fairness may not find the end-user to be the most direct cause of infringement. However, copyright is a strict liability tort, meaning that liability attaches regardless of a reproducer’s intent. Therefore, unless copyright applies an intentional or a negligence theory for direct liability, which it should not, whoever or whatever is liable for infringing outputs will be liable for both infringing outputs: the one prompted by “Mickey Mouse” and the one prompted by “a cartoon mouse.” Such an outcome not only feels deeply unfair, but it is unreasonable to say that the end-user is the direct cause of infringement when prompting “a cartoon mouse,” and vice versa.

Cases called to answer similar questions have recently grappled with these same issues of volition and causation. Generally, courts have been hesitant to find companies liable for actions that are not reasonably deemed volitional conduct causing infringement. The court in Cartoon Network, for example, found that “volition is an important element of direct liability.” In the Loopnet case, the court found that “the Copyright Act… requires conduct by a person who causes in some meaningful way an infringement.” In this way, the law has so far mirrored our prior intuitions of fairness. Legal scholarship has noted that when copyright law has grappled with novel technology, it has found that causation in infringement requires volition that “can never be satisfied by machines.” This reasoning, as applied to generative AI, may mean that an AI company should not normally be directly liable for outputs that infringe the reproduction right.

Causing Authorship

This causation analysis has also begun for authorship rights. One copyright scholar compellingly argues that copyright law should explicitly enumerate a causal analysis for granting authorship rights. Such an analysis would follow tort law’s two-step causation analysis: (1) creation in fact and (2) legal creation. Aviv Gaon surveys authorial options in The Future of Copyright in the Age of AI, writing that some favor assigning authorship to the end-user prompter or the AI developer, others favor treating outputs as joint works, and still others would attribute authorship to the AI itself. The simplest legal option would be to treat AI like a tool and grant authorship to the end-user. This is exactly how the law responded when photography challenged conventional notions of creativity and authorship. Opponents of finding photographers to be authors argued that photography was “merely mechanical, with no place for… originality.” The Supreme Court in Burrow-Giles instead found that the photographer “gives effect to the idea” and is the work’s “mastermind” deserving of copyright.

However, treating AI like a conventional tool is an inconsistent oversimplification in the current context. Not only is it often less apt to say that an end-user prompter is the ‘mastermind’ of the output, but AI presents a more attenuated causation analysis that should not result in a copyright for all AI-generations. As an extreme example, recent AIs are employing other AIs as replicable agents. In these circumstances, a single prompt could catalyze one AI to automatically employ other AI agents to generate numerous potentially creative or infringing outputs. Here, the most closely linked human input would be a prompt that could not be said to have masterminded or caused the many resultant expressive outputs. Under Balganesh’s framework, no human could reasonably be found to be the factual or legal cause of the output. Such use-cases will further challenge the law’s notions of foreseeability as reasonable causation becomes increasingly attenuated.

Importantly, in the face of this ongoing debate and scholarship, the Copyright Office recently made its determination on authorship for AI-generated works. In February 2023, the US Copyright Office amended its decision regarding Kristina Kashtanova’s comic book, Zarya of the Dawn, stating that the exclusively AI-generated content is not copyrightable. Ms. Kashtanova created her comic book using Midjourney, a text-to-art AI, to generate much of the visual art involved. The Copyright Office stated that her “selection, coordination, and arrangement” of AI-generated images are copyrightable, but not the images themselves. The Office’s decision means that all exclusively AI-generated content, like natural phenomena, is not the type of content copyright protects and is freely accessible to all. The Office’s decision was based on its interpretation that “it was Midjourney—not Kashtanova—that originated the ‘traditional elements of authorship.’” The Office’s decision is appropriate policy, but when analyzed in conjunction with the current law on causation in infringement, it is inconsistent and may result in an asymmetrical allocation of the rights and duties that attend creation. Relevantly, how can a machine that is incapable of volition originate art? This is one of many ontological paradoxes that AI will present to law.

Symmetrically Analyzing Causation

Two things are apparent. First, there is a beautiful symmetry in AI-generations being uncopyrightable while the machines originating such works symmetrically lack sufficient volition to infringe. If such a system persists, then copyright law may not play a major role in generative AI, though this is doubtful. Second, such inconsistencies inevitably result from causation analyses of mechanically analogous actions that analyze only one of infringement or authorship. Instead, I propose that copyright law symmetrically analyze mechanically analogous causation for both authorship and infringement of the reproduction right. Since copyright law has only recently begun analyzing causation, it is reasonable, and potentially desirable, that the law does not require this symmetrical causation. After all, the elements of authorship and infringement are usefully different. However, what has been consistent throughout copyright is that when an author creates, they risk an infringing reproduction and stand to gain the benefits of authorship rights. In other words, by painting, a painter may create a valuable copyrightable work, but they also may paint an infringing reproduction of Mickey Mouse. Asymmetrical causation for AI art could be analogized to the painter receiving authorship rights while the company that made the paintbrush is held liable for the painter’s infringing reproductions. Such a result would not incentivize the painter to avoid infringement, thereby improperly balancing the risks and benefits of creation. Ultimately, if the law decides that either the end-user or the AI company is the author, then the other entity should not be asymmetrically liable for infringing reproductions. Otherwise, the result will be ethically and logically inconsistent. After all, as Antony Honoré wrote in Responsibility and Fault, in our outcome-based society and legal system, we receive potential benefit from and are responsible for the harms reasonably connected to our actions.