The NO FAKES Act: Protecting Actors’ Likeness by Expanding Copyrightable Protections


By: Dillon Koch

If you spend a few minutes scrolling through TikTok, you’re sure to come across Plankton belting out an 80s pop hit, or Harry Styles duetting a Taylor Swift song (ostensibly written about their iconic fling). However, the Bikini Bottom villain and Dunkirk’s bravest soldier never actually made those videos – they were created using generative Artificial Intelligence (AI).

With the rise in accessible AI tools, effectively anyone has the power to make anyone say anything—just ask Tom Hanks and Gayle King, who were victims of generative AI when they “appeared” in commercials for dental services and weight loss products. However, neither Hanks nor King ever filmed, recorded, or otherwise approved either campaign. While generative AI can lead to harmless viral content on social media, it can also cause real issues for performers who never authorized these performances.  

Producers Proposed Unlimited Access to Background Actors

In what seemed to be the nexus for the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) strike, initial bargaining proposals from the Alliance of Motion Picture and Television Producers (AMPTP) indicated that studios were interested in minimizing costs by maximizing how background actors would be cast, compensated… and created.

According to union members, AMPTP proposed terms that would give studios nearly unlimited use of low-cost background actors. Under the proposal, SAG-AFTRA background performers could be scanned and paid for a single day’s work, while the companies would own that scan, the performer’s image, and their likeness. AMPTP sought the ability to use a performer’s scan and likeness for an unlimited period, without explicit consent or compensation for individual project usage. Such terms could significantly affect union members’ financial stability and workforce participation, and the proposal helped spark the longest strike in SAG-AFTRA’s history.

Now, in an effort to preserve their craft while maintaining the ability to work and survive, members of SAG-AFTRA aim to obtain AI protections similar to those won by the Writers Guild of America (WGA): putting creative humans at the forefront of film and television, and leveraging AI as a tool rather than an inherent creative force.

Popular Copyright Theory Is Favorable Toward Performers, But Jurisprudence Disagrees

Prominent theories of copyright seem to indicate that performers should be entitled to rights over their likeness and expression of themselves. Hegelian and Kantian perspectives indicate that personhood, autonomy, and self-expression should be protected by copyright. The intuition is that the best way of providing control over resources is to recognize property rights, and a person’s property interest is strongest in the resources that entangle their personality. An extension of this theory posits that property provides a mechanism for self-definition, personal expression, and dignity of an individual person.

Even the statutory definition of copyrightable works merely “includes” the well-established categories of eligible works, indicating that there is room to expand copyrightability beyond the eight named categories as interpretation allows. While jurisprudence has not recognized new categories within the definition of “works,” the drafters of the Constitution seemed to acknowledge that creativity’s broad scope could lead to unforeseen innovative creations.

While it seems like a no-brainer that a performer should maintain rights to their likeness and expression, courts have established clear jurisprudence to the contrary. In the seminal case Garcia v. Google, the court held that a performance was not copyrightable by the performer, which resulted in the plaintiff’s likeness being used in anti-Muslim propaganda without her explicit knowledge or consent. The dissent believed that Garcia was effectively “bamboozled,” arguing that the majority’s decision “robs performers and other creative talent of rights Congress gave them.” However, the Ninth Circuit ultimately held that, although the film’s treatment of Garcia was “blasphemous,” her performance was not sufficiently creative to constitute authorship. Even if she were granted authorship, she was not responsible for the fixation of her performance and therefore could not obtain copyright protection.

So, despite the philosophy that performers should be able to obtain copyrights in order to protect their personhood and rights to self-expression, courts have maintained that performers are generally not eligible for copyrights in their performances. However, with increasing wariness around generative AI, this may be the perfect moment for Congress and the courts to add an additional category to copyrightable works: likeness.

Introducing a New Category of Copyrightable Material

In a seemingly rare act of bipartisanship, four members of Congress have drafted the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. Senators Coons, Blackburn, Klobuchar, and Tillis aim to combat AI’s “unique challenges that make it easier than ever to use someone’s voice, image, or likeness without their consent” through policies regulating the use and impact of generative AI. The Senators aim to defend individual rights and abide by First Amendment rights to free speech, without stifling innovation and creativity.

The NO FAKES Act would prevent a person from producing or distributing an unauthorized AI-generated replica of an individual to perform in a recording without the explicit consent of the individual being replicated. The person creating or sharing the replication, as well as the platform knowingly hosting it, would be liable for the damages caused by the AI-generated fake. 

SAG-AFTRA has applauded the proposal, acknowledging that the rise in accessible generative AI tools has increased the opportunity to exploit the voices and likenesses of union actors without their knowledge, consent, or fair compensation. Especially after WGA reached its resolution with studios, SAG-AFTRA seems more hopeful than it was at the beginning of the summer. If WGA can receive copyright and employment protections from AI-generated content, why shouldn’t performers be entitled to the same?

It seems, with the proposal of NO FAKES, Congress hopes to increase protections around generative AI by broadening copyright-like protections to include performers’ likenesses. As SAG-AFTRA President Fran Drescher noted, “A performer’s voice and their appearance are all part of their unique essence, and it’s not ok when those are used without their permission,” which is directly aligned with copyright principles of protecting personhood and self-expression.

While the bill doesn’t mention copyright specifically, NO FAKES sends a signal that likeness and performance are essential creative works that should receive federal protection. This seems likely to have a direct impact on future copyright infringement claims. As more actors file claims against unauthorized use of their likeness, government leaders are eager to impose uniform limitations on AI. The NO FAKES bill ultimately provides the perfect opportunity to protect autonomy and creativity, finally aligning with copyright principles by expanding creative protection to actors.

The Complexities of Racism in AI Art

By: Imaad Huda

AI generative art is a recent advance in the field of consumer and social artificial intelligence. Anybody can write a few words into a program, and, within seconds, the AI will generate an image that roughly depicts that prompt. AI generative art can incorporate a number of artistic styles to develop digital art without somebody lifting a pen. While many users are simply fascinated with art being created by their computers, few are aware of how the AI generates its images and the implications of what it produces. Now that AI art programs have made their way into consumer hands, users have noticed stereotypical and racialized depictions in their auto-generated images. Entering prompts that incorporate types of employment, education, and history often produces images that incorporate racial bias. As AI becomes more mainstream, racist and sexist depictions by AI will only serve to entrench long-standing stereotypes, and the lack of a legal standard will only make the matter worse.

Quantifying the Racism 

Leonardo Nicoletti and Dina Bass of Bloomberg note that generative images take “human” biases to the extreme. In a generated set of 5,000 images from the Stable Diffusion AI, depictions of prompts for people with higher-paying jobs were compared to those for people with lower-paying jobs. The result was an overrepresentation of people of color in lower-paying jobs. Prompts including “fast-food worker” yielded an image of a darker-skinned person seventy percent of the time, even though Bloomberg noted that seventy percent of fast-food workers are white. Meanwhile, prompts for higher-paying jobs, such as “CEO” and “lawyer,” generated images of people with lighter skin at a rate of over eighty percent, roughly proportional to the eighty percent of people who hold those jobs. Stable Diffusion showed the most bias when depicting occupations for women, “amplify[ing] both gender and racial stereotypes.” Among all generations for high-paying jobs, only one image, that of a judge, depicted a person of color. Commercial facial-recognition software, including tools specifically designed to identify the genders of people, had “the lowest accuracy on darker skinned people,” presenting a problem when this software is “implemented for healthcare and law enforcement.”

Stable Diffusion was also biased when depicting criminality. For prompts of “inmate,” the AI generated a person of color eighty percent of the time, when only half of the inmates in the U.S. are people of color. Bloomberg notes that the rates for generating criminals could be skewed by the racial bias of U.S. “policing and sentencing” mechanisms.

The Legality

Is racism in AI legal? The answer is complicated for a number of reasons. The law surrounding AI generative imaging is new. In 2021, the Federal Trade Commission (FTC) declared the use of discriminatory algorithms to make automated decisions illegal, citing opportunities for “jobs, housing, education, or banking.” New York City has also enacted its own Local Law 144, which requires that AI tools undergo a “bias audit” before aiding in employment decisions. The National Law Review states that a bias audit includes a calculation of the “rate at which individuals in a category are either selected to move on or assigned a classification” by the hiring tool. The law also states that audits “include historical data in their analysis,” and the results of the audit “must be made publicly available.”
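The selection-rate calculation a bias audit describes can be made concrete with a short sketch. The example below is purely illustrative, not the statutory audit methodology: the category labels, data, and function names are invented, and real audits under Local Law 144 also cover intersectional categories and historical data.

```python
# Illustrative sketch: for each demographic category, compute the rate at
# which a hiring tool "selects" candidates, then compare each category's
# rate to the most-selected category's (an "impact ratio"). All data here
# is hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (category, selected: bool) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for category, was_selected in decisions:
        totals[category] += 1
        if was_selected:
            selected[category] += 1
    return {c: selected[c] / totals[c] for c in totals}

def impact_ratios(rates):
    """Each category's selection rate divided by the highest rate."""
    best = max(rates.values())
    return {c: r / best for c, r in rates.items()}

# Hypothetical audit data: category A advanced 80 of 100 times,
# category B advanced 50 of 100 times.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 50 + [("B", False)] * 50
rates = selection_rates(decisions)
ratios = impact_ratios(rates)
```

Under the EEOC’s “four-fifths” rule of thumb, an impact ratio below 0.8, like category B’s 0.625 in this hypothetical, would flag potential adverse impact.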

The advancement of anti-racism laws regulating AI tools represents progress. However, how these laws pertain to AI art has yet to be seen. Laws concerning AI-generated art are currently focused on theft, as AI art often copies the originality and stylistic choices of human artists. The racial depictions of AI art have not been addressed legally but could perpetuate stereotypes when used in an educational context, which the FTC prohibits under its 2021 declaration. Judges and lawmakers may not yet see AI art’s contribution to systemic racism as a legal issue that could stand in the courtroom.

What’s The Solution?

The bias in generated art results from its algorithm, which, depending on the user’s prompt, pulls together images that match a description and style and develops them into a new image. From multiple prompts from many different users and the data available on the internet, the algorithm continuously produces these images. Almost a decade ago, Google postponed a consumer AI search program because images of black people were being filtered into searches for “gorillas” and “monkeys.” The reason for this, according to former Google employees, was that Google had not trained its AI with enough images of black people. The problem in this case, again, could be a lack of representation, from too few AI-algorithm employees of color to inadequate representation in the data sets being used to generate images. However, a simple fix to increase representation is not so easy. AI computing is built on models that already exist; a new model will be based on an older model, and the biases present in the older algorithm may stand. As issues with machines get more complicated, so do the solutions. Derogatory depictions should not be allowed to stand in the absence of a legal standard, and lawmakers should take the necessary measures to end AI discrimination before it becomes a true social problem.

Even Better Than The Real Thing: U2 and Brambilla Bring Elvis to Life

By: Bella Hood

Thanks to social media, U2’s visual performance at the Las Vegas Sphere is one few can claim they aren’t at a minimum tickled by. The ominous round structure, standing 366 feet tall and 516 feet wide, seats nearly 18,000 people. A creative project of James Dolan, the executive chair of Madison Square Garden and owner of the New York Knicks and Rangers, the novel entertainment venue was completed in September 2023 on an astounding $2.3B budget.

U2 holds the honor of christening the venue with a multi-month residency that has been so well-received that the band just announced 11 additional shows to occur in January and February of 2024, for a total of 36 performances. Perhaps surprisingly, while droves of middle-aged suburbanites filed in to scratch their 80s Rock nostalgic itch, the music took a backseat to the immersive visual experience encompassing a 16K wraparound LED screen.

Several songs into U2’s performance, a 4-minute whimsical display of hundreds of images of Elvis Presley engulfs the venue and transcends all existing mental perceptions of The King.

An artist known for his elaborate re-contextualizations of popular and found imagery, as well as his pioneering use of digital imaging technologies in video installation and art, Marco Brambilla leveraged AI to portray Elvis in a fantastical, sci-fi-esque light. He fed clips from over 30 Elvis movies into the 2022 text-to-image model Stable Diffusion, the more realistic-looking sibling of DALL-E 2.

U2 and Elvis may sound like an odd coupling, but the band’s lead singer, Bono, has been a vocal supporter of the icon for decades. In fact, U2’s lyrics are sprinkled with allusions to The King of Rock and Roll and even patent references at times, including the song titled “Elvis Presley and America.”

Regardless of how famous a musician or band may be, one cannot use just any person’s likeness on a whim. Failure to obtain permission from that person, or their estate, can result in a potential right of publicity claim. While many aspects of entertainment law involve overlapping state and federal government oversight, this issue is largely state-specific. According to the American Bar Association, the modern test requires two elements:

  • the defendant, without permission, has used some aspect of the plaintiff’s identity or persona in such a way that the plaintiff is identifiable from the defendant’s use; and
  • the defendant’s use is likely to cause damage to the commercial value of that persona.

Without a doubt, Elvis’ estate is well-versed in likeness laws. In 1981, his ex-wife, Priscilla Presley, established Elvis Presley Enterprises, Inc. (EPE). Currently, Authentic Brands Group (ABG) owns roughly 85% of EPE, with the remainder belonging to The King’s only descendant, Lisa Marie Presley. The estate does not shy away from the legal system, vigorously protecting the cultural icon’s legacy.

Past targets of EPE include wedding chapels in Las Vegas, Nevada, gun manufacturer Beretta (headquartered near Milan, Italy), and a nightclub in Houston, Texas called the Velvet Elvis.

This begs the question: how was Brambilla able to create, and U2 able to display, an entire video montage of hundreds of versions of Elvis for the entire length of the song “Even Better Than The Real Thing”? Despite speaking to multiple outlets, Brambilla has yet to confirm or deny whether he had permission to use over 12,000 film samples of Elvis’s performances.

In looking at the propensity to sue over likeness or otherwise, one should consider the parties involved. ABG was valued at $12.7 billion in 2021 after nearly going public and also owns the intellectual property rights of Marilyn Monroe, Muhammad Ali, and Shaquille O’Neal.

Between the behemoth’s unlimited legal resources, the Sphere’s already infamous reputation, and U2’s success thus far with the residency, it seems unlikely at this point that Authentic Brands Group could be unaware of the Elvis tribute. Therefore, if ABG wanted to send a cease and desist, they would have done so by now. Even if a lawsuit were imminent, ABG would be hard-pressed to demonstrate that U2 and Brambilla’s portrayal of Elvis is even remotely damaging to his commercial persona.

Move Fast and Break Things: Ethical Concerns in AI

By: Taylor Dumaine

In Jurassic Park, Dr. Ian Malcolm famously admonished the park’s creator by saying, “your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” Technological advancement for the sake of advancement alone ignores the genuine negative effects that advancement could cause or contribute to. The negative externalities of technological advancement have often been overlooked or ignored. There is also often a reliance on federal and state governments to regulate industry rather than self-regulation or ethics standards. That reliance has become especially pronounced in the AI and generative AI spaces. Government regulation of AI is far outpaced by the technology’s rapid development, hindering the government’s ability to address ethical issues adequately.

Relying on government regulation is a copout for large tech companies. Congress’s record on technology regulation is poor at best, with most bills failing to become law and those that do being insufficient to regulate effectively. The United States still does not have a national privacy law, and there is little political will to pass one. The increasingly octogenarian Congress does not have the best track record of understanding basic technological concepts, let alone the increasingly complicated technologies, such as AI, that it is tasked with regulating. During Senate testimony regarding the Cambridge Analytica scandal, Meta CEO Mark Zuckerberg had to explain some pretty rudimentary internet concepts.

Earlier this year, OpenAI CEO Sam Altman called for government regulation of AI in testimony before Congress. Altman also carries a backpack around that would allow him to remotely detonate ChatGPT datacenters in the scenario where the generative AI goes rogue. While by no means a perfect example of ethics in the AI space, Altman seems at least aware of the risks of his technology. Yet he relies on the federal government to regulate his technology rather than engaging in any meaningful self-regulation.

In contrast to Altman, David Holz, founder and CEO of Midjourney, an image-generation AI program, is wary of regulation, saying in an interview with Forbes, “You have to balance the freedom to do something with the freedom to be protected. The technology itself isn’t the problem. It’s like water. Water can be dangerous, you can drown in it. But it’s also essential. We don’t want to ban water just to avoid the dangerous parts.” Holz highlights that his goal is to promote imagination and is less concerned with how that goal may impact people so long as others benefit. This thinking is common in tech spaces.

Even the serious issues in generative AI, such as copyright infringement, seem almost mundane when faced with facial recognition tools such as Clearview AI. Dubbed “The Technology Facebook and Google Didn’t Dare Release,” these facial recognition tools have the disturbing ability to recognize faces across the internet. Clearview AI specifically has raised serious Fourth and Fifth Amendment concerns regarding police use of the software. Surprisingly, the large tech companies Apple, Google, and Facebook, having acquired facial recognition technology of their own and recognized its dangers, served as de facto gatekeepers of this technology for over a decade. Facebook was subject to a $650 million lawsuit related to its use of facial recognition on the platform. Clearview AI’s CEO Hoan Ton-That has no ethical qualms regarding the technology he is creating and marketing specifically to law enforcement. Clearview AI is backed by Peter Thiel, who founded Palantir, which has its own issues regarding police and government surveillance. The potential integration of the two companies could result in an Orwellian situation. Clearview AI thus represents a worst-case scenario for tech without ethical limits, the effects of which have already been disastrous.

Law students, medical students, and Ph.D. students are all required to take an ethics class at some point. Many self-taught programmers do not incorporate ethics classes or study into their learning. There are very real and important ethical concerns when it comes to technology development. In an age, culture, and society that values advancement without taking the time to consider the negative ramifications, it is unlikely that society’s concern over ethics in technology will change much. In a perfect scenario, government regulation would be swift, well-informed, and effective to protect against the dangers of AI. With the rate of technological innovation, it is hard to stay proactive in the ethics space, but that does not mean there should be no attempt to. Arguing for a professional ethics standard in computer science and software engineering is not without its own serious problems and would be almost entirely impossible to implement. However, by creating a culture where ethical concerns are not just valued but considered in the development of new technology, we can hopefully avoid a Jurassic Park scenario.

Distress in the West: The Collapse of the Pac-12 Conference

By: Mayel Andres Tapia-Fregoso

2023 marks the end of the 108-year legacy of the Pac-12 athletic conference as we know it. One by one, the conference’s top athletic programs have abandoned the conference to secure their own financial futures after a decade of risky network gambles, scandals, and significant losses in revenue. 

The Pac-12 Cements its Power Five Status

In July 2009, the presidents of the Pac-12 (known as the Pac-10 until 2011) agreed to an 11-year contract with Larry Scott to serve as the commissioner of the Pac-10. Scott took control of the Pac-10 after presiding over a successful six-year run as CEO of the Women’s Tennis Association. The Pac-10 hired Scott with the goal of boosting its national brand and securing a television contract comparable to those obtained by rival Power Five conferences, the most well-recognized and highest-earning athletic conferences in the National Collegiate Athletic Association (NCAA). In 2008, another Power Five conference, the Southeastern Conference (SEC), had agreed to a $2 billion, 15-year agreement with ESPN to exclusively broadcast football and men’s and women’s basketball. Meanwhile, the Big Ten conference, in partnership with Fox, launched the “Big Ten Network.” To capitalize on the rapidly growing market for college sports, the Pac-10 desperately needed to expand beyond the west coast to bring its content to viewers across the nation. However, Scott had an even more ambitious plan for the Pac-10: securing a lucrative television deal and starting a Pac-10 network, modeled after the Big Ten Network but owned entirely by the conference.

In 2010, the Pac-10 announced the additions of Colorado and Utah, the top athletic programs in the mountain states. Scott hoped these acquisitions would expand the conference’s reach across nine television markets and increase its annual television earnings. In 2011, the newly rebranded Pac-12 agreed to a record-setting 12-year, $2.7 billion television contract with ESPN and Fox, earning the conference approximately $225 million per year. Fox and ESPN split the rights to most of the Pac-12’s football and men’s basketball games. Scott preserved the remaining football and men’s basketball games, non-revenue sports, and Olympic sports for his newly created Pac-12 Network. Unlike the Big Ten Network, which shared its ownership with Fox, the Pac-12 was the sole owner of its network. By choosing not to partner with a major network like ESPN or Fox, Scott relied heavily on his ability to sell the Pac-12’s limited product to cable providers like DirecTV and Comcast. After this deal, it appeared that the Pac-12 would become the nation’s next marquee conference.

Some Gambles Just Don’t Pay Off

When the Pac-12 agreed to the record-setting contract with ESPN and Fox, it jumped from fifth among conferences in television revenue to first. The conference even retained full ownership of its network, with the promise that it would deliver its sports content to subscribers across the nation and enhance the schools’ ability to recruit athletes at the national level.

When the Pac-12 Network launched in 2012, Scott asked cable providers to pay $0.80 per subscriber. All providers except one agreed to Scott’s terms. DirecTV, one of the nation’s largest television carriers with over 20 million subscribers, two million in southern California alone, refused to pay Scott’s asking price. At the same time, the SEC member schools began to reap the benefits of their significant investment in their football programs, resulting in seven consecutive national championships from 2006 to 2012. Building off that success, in 2014, the SEC partnered with ESPN to launch the “SEC Network.” The SEC’s success on the field and its partnership with ESPN also allowed it to charge providers like DirecTV $1.30 per subscriber.

Meanwhile, the Pac-12’s product, and with it its leverage over DirecTV, suffered. A Pac-12 school has not won a football national championship since USC in 2004, nor won the NCAA college basketball tournament since Arizona in 1997. From 2017 to 2022, Pac-12 schools spent an average of $681,404 on football recruiting expenses, behind the Big Ten and SEC, which spent $798,122 and $1,176,055, respectively. To add insult to injury, several schools became embroiled in recruiting violations and other scandals, like the infamous Reggie Bush scandal, that led the NCAA to hand down numerous sanctions and postseason bans against Pac-12 schools.

In 2014, the Pac-12 Network reached 11 million paying subscribers. In comparison, the Big Ten and SEC Networks reached 57 million and 67 million households, respectively. Without DirecTV subscribers, the Pac-12 could not compete with the other conferences within its own markets, including the lucrative Southern California market, home to two Pac-12 schools: UCLA and USC.

While the Pac-12 struggled, Larry Scott’s salary continued climbing, reaching $5.4 million in 2019 and making him the highest-paid commissioner in the NCAA. During the 2018-19 season, the SEC, in partnership with ESPN, distributed $45 million to its member schools. The Big Ten, in partnership with Fox, distributed $55 million to its schools, while the Pac-12, without a partner, distributed $32 million to its schools. Despite all these troubles, the worst was yet to come. The Covid-19 pandemic had serious financial consequences for the Pac-12’s members. Schools like Arizona and Utah reported losses in excess of $60 million, leading to discontinued sports programs, significant budget cuts, and layoffs. In 2021, amid these financial troubles, the Pac-12 parted ways with Larry Scott. With the television contract expiring after the 2023 season and a breakdown in negotiations with DirecTV, the situation in the Pac-12 became dire.

Abandon Ship!

It became clear to many member schools, especially those in the largest television markets, that remaining in the Pac-12 conference was no longer in their best interest. On June 30, 2022, UCLA and USC, two flagship members of the Pac-12 conference for almost a century, announced they would be joining the Big Ten conference effective after the 2023 football season. With the loss of the Pac-12’s largest market, Southern California, it was only a matter of time before the conference collapsed. Oregon, Washington, California, Stanford, Colorado, Utah, Arizona, and Arizona State have all left the conference to join other Power Five conferences beginning in 2024. Oregon State (OSU) and Washington State (WSU) are the only remaining members of the Pac-12. In response, OSU and WSU have filed a lawsuit against the Pac-12 to strip the 10 departing member schools’ voting rights, pursuant to the Pac-12’s bylaws. OSU and WSU hope to gain control of the conference’s remaining assets, valued at $42.7 million as of 2022, to prevent the departing schools from taking the Pac-12’s reserve funds on their way out. They also hope to lure members of lesser conferences by offering them membership in a Power Five conference and money from the Pac-12’s remaining assets, in hopes of rebuilding the gutted conference. However, a recent lawsuit requiring the Pac-12 to pay $72 million to Comcast, after conference executives overreported subscriber numbers, will likely exhaust most of the conference’s assets.

Larry Scott and the Pac-12 elected to launch their network without a partner in hopes of maximizing profits. Instead, the Pac-12’s legacy will be mired in scandal, greed, and failure to its schools, fans, and, most importantly, its student-athletes.