Massive Copyright Lawsuit Threatens to Monopolize Reggaeton Music

By: Mayel Tapia-Fregoso

In the United States, copyright law protects original works of authorship that are fixed in a tangible medium. A song can receive two separate copyrights. First, a song can be copyrighted as a musical composition, including its lyrics. Second, the sound recording of that musical composition can also be copyrighted. While entire songs and their sound recordings can be copyrighted, small groupings of notes within a musical work are typically not protected. Recently, a copyright infringement lawsuit filed in the Central District of California named reggaeton stars Bad Bunny, Karol G, Daddy Yankee, and more than one hundred other artists as defendants, and it has the potential to rock the music world and expand copyright protections for musical works.

Browne v. Donalds

On April 1, 2021, Steely and Clevie Productions filed a copyright infringement lawsuit against more than one hundred defendants, alleging that their music infringed on Steely and Clevie’s copyrighted works. In the complaint, Steely and Clevie—a Jamaican reggae duo—allege that the defendants sampled the “dembow” rhythm without permission. Sampling occurs when an artist takes a portion of another artist’s sound recording and incorporates it into a new recording of their own song. To avoid a copyright infringement lawsuit, artists must obtain permission from the copyright owner of the song and the copyright owner of the sound recording they wish to sample. Unlike the compulsory licensing regime that governs song covers, the copyright holders of the musical work and the sound recording may refuse to license them.

In 1989, Steely and Clevie released the song “Fish Market,” which first featured the dembow rhythm. In 1990, they collaborated with artist Shabba Ranks, incorporating the beat into his song “Dem Bow,” which gave the dembow rhythm its moniker. Later that year, another artist, Dennis “the Menace” Haliburton, incorporated that same beat in his song “Pounder Riddim.” According to Steely and Clevie’s complaint, the defendants have sampled and “mathematically copied” the dembow rhythm from “Pounder Riddim” for decades. Many artists in the Dominican Republic would later adopt the dembow drum pattern, which became the foundation of reggaeton and Latin American pop music. Some of reggaeton’s biggest hit songs—Daddy Yankee’s “Gasolina,” Bad Bunny’s “Tití Me Preguntó,” and Karol G & Peso Pluma’s “Qlona”—are among thousands of songs that incorporate an iteration of the iconic rhythm.

Attorneys for Steely and Clevie argue that reggaeton artists never obtained a license for the “distinctive drum pattern that has become the foundation of the entire genre.” They claim that the industry has exploited the rhythm and generated revenue from the infringing works. Attorneys for the defendants counter that Steely and Clevie seek to “monopolize” the reggaeton genre by “claiming exclusive rights to the rhythm and other unprotectable elements” shared by all reggaeton songs. The presiding judge, Andre Birotte Jr., has expressed concern about the “stifling” effect that a verdict for the plaintiffs could have on the music industry. The judge is now tasked with ruling on the defendants’ motion to dismiss.

Can Dembow Be Copyrighted?

The lawsuit poses a number of questions. First, is the dembow rhythm protectable under copyright law? Under the Copyright Act of 1976, artists, composers, and publishers can copyright musical compositions and sound recordings. In a musical composition, the lyrics and the melody are typically protectable. The melody includes “the order and rhythm of pitches that make up the main melody line of a piece of music.” In most cases, however, the sequence of rhythms and the “groove” of a song lie outside the protections of copyright law. Likewise, a song’s arrangement and structure are not copyrightable because two songs with the same structure may sound different. In rare cases, though, a rhythm can be copyrighted if the plaintiff can prove that it is “substantially unique or original.” Courts have spent decades balancing the interests of copyright holders against those of the creative community to encourage the production of art.

Why Bring This Case Now?

Second, were Steely and Clevie truly the first to create the dembow rhythm, or simply the first to “record the popular Jamaican street beat” in a fixed medium? In Jamaica and Latin America, it is common for artists to borrow and sample instrumental tracks without the threat of litigation. Early reggaeton artists in Puerto Rico were inspired by “Jamaica’s tradition of using popular instrumentals to propel new, live, and local performances.” Although Steely and Clevie’s “Fish Market” was the first song to “fix” the dembow instrumental rhythm, it may be impossible to determine whether “Fish Market” inspired early reggaeton artists or whether they were first inspired by the Jamaican music scene that was so popular throughout Latin America. In reggaeton’s early years, the genre had little economic value. By 2023, though, reggaeton music was responsible for billions of streams, helping propel Latin music to become the fourth most popular music genre in the world by stream volume. Now, more than 30 years after the release of “Fish Market,” Steely and Clevie seek to capitalize on reggaeton’s billion-dollar industry.

The Case for Steely and Clevie

Steely and Clevie’s supporters argue that the Jamaican duo and the other Jamaican artists who inspired the highly successful genre deserve recognition for their contributions to reggaeton music. These proponents argue that, at its core, this case is about Black artists’ music being exploited for profit. Reggaeton historian Katelina Eccleston believes that the tradition of reuse in Jamaican and Latin music should not preclude artists like Steely and Clevie from receiving songwriting credit. In her view, Jamaican genres have amassed worldwide popularity but “lack economic parity” with reggaeton because, across the Americas, the artists with lighter skin complexions who dominate reggaeton are given greater privileges. Eccleston says, “Everybody wants Jamaican music and culture, but they don’t want to make sure Jamaicans can eat.”

If Steely and Clevie prevail, more lawsuits alleging infringement for appropriating popular rhythms will likely follow, and the nuanced cultural dimensions of this case will only deepen the confusion over which musical elements in a composition are protectable.

Girlboss, Gaslight, Gatekeep: The New Frontier of Antitrust Regulation

By: Karina Paup Byrnes

On November 1, 2022, the Digital Markets Act (DMA) entered into force in the European Union (EU). The legislation aims to comprehensively address antitrust concerns by regulating the conduct of large tech companies in the digital sector. The DMA represents the new frontier of antitrust legislation: no prior law has strived to regulate the digital market so thoroughly. The Act’s purpose is to foster more equitable practices in the marketplace, such as greater competition, choice, innovation, and consumer benefits.

The DMA specifically targets six “gatekeeper” organizations, the tech companies the EU has identified as the most influential and impactful in the digital marketplace: Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft. All but ByteDance, the Chinese parent company of TikTok, are based in the United States, and all six are now subject to a number of new compliance standards. The response to the regulations has been mixed at best, and companies have expressed concern about the unclear legal landscape they must navigate to comply with the new rules. The DMA’s large-scale regulatory impact is the first of its kind. While EU regulators are still navigating the DMA’s complex framework, the gatekeepers are scrambling to respond to the Act and to shape the future of a new generation of antitrust regulation.

The regulation of consumer and user privacy is a core purpose of the DMA. The Act imposes limitations on gatekeeper companies by mandating that personal data may be acquired only with a user’s explicit consent. These guidelines are designed to provide transparency in how user data is collected, managed, and recorded. What is less clear, however, is how gatekeepers will satisfy the requirement. Gatekeeper companies’ software relies heavily on unrestricted access to user data, and the DMA threatens to unravel years of software development. Gatekeepers must now pivot to protecting consumer data rather than profiting from the collection of unregulated user data.

Additionally, the DMA is intended to equalize the tech marketplace by requiring gatekeepers to open their closed software ecosystems, giving smaller companies access to the software. In a closed software ecosystem, a company retains full control over the software system; an open ecosystem, by contrast, is free for anyone to use and places no restrictions on which apps may be installed. It is no surprise that Apple has expressed concerns about how it will be able to comply without breaching secure information. Apple is pushing back against its status as a “gatekeeper” company, as Samsung successfully did, in an effort to curtail the number of DMA regulations it must comply with. In response, the EU Internal Market Commissioner dismissed Apple’s concerns and believes that the DMA can “foster innovation without compromising on security and privacy.” The Commissioner believes that iPhone users should be able to benefit from a wider range of competitive services.

While the gatekeepers are unhappy about complying with the DMA, the companies the Act is intended to support are voicing frustrations of their own about enforcement of the gatekeepers’ compliance. In January 2024, a coalition of smaller tech companies penned an open letter stating that the six gatekeepers “have either failed to engage in a dialogue with third parties or have presented solutions falling short of compliance with the DMA.” These companies believe that smaller tech competitors and consumers have been ignored, leaving them “in the dark” about how the gatekeepers are adapting their policies to meet the DMA’s standards. The smaller companies’ statements reflect significant concern over whether the EU will strongly enforce compliance, as well as worries that the gatekeepers are not being as forthcoming as they should be about changes to their software access and user data protections.

The DMA requires the six gatekeepers to be in full compliance with the Act by March 7, 2024. Only then can there be a true assessment of how these companies have pivoted to conform to the guidelines, or of what further legal action they may pursue to escape compliance. Compelling these companies to take a user-first stance on their software operations could bring great change to how people view their rights as consumers. Whether the Act increases the competitiveness of the digital marketplace will likely be a development that everyone in the tech industry monitors, as a more equitable digital sector could yield many benefits for consumers. Regardless of the outcome, the passage of the DMA shows that there is strong interest in supervising the growing business of user data collection and the power that large tech companies wield.

Time to Face the Music: A.I. Music Copyright Infringement Battle Makes It to Court

By: Mackenzie Kinsella

AI-generated music has been a point of contention in copyright law since the release of “Heart on My Sleeve,” a song performed by AI-generated imitations of Drake’s and The Weeknd’s voices. The song racked up millions of listens on TikTok, Spotify, Apple Music, and other streaming platforms before it was taken down at the request of Universal, Drake and The Weeknd’s label. “Heart on My Sleeve” became an alarming indication of the potential copyright and right of publicity issues surrounding the use of AI in music. After the song’s release and the pressure to take it down, Universal urged various platforms to stop AI companies from using such AI-generated tracks and has even gone so far as to explore different legal avenues to prevent the use of AI-generated music. Publishers and streaming platforms could argue that AI companies have infringed on their copyrights; however, AI companies and users are anticipated to raise a fair use defense.

Now, AI is problematically being used to reproduce the lyrics of iconic songs like Katy Perry’s “Roar,” Gloria Gaynor’s “I Will Survive,” and the Rolling Stones’ “You Can’t Always Get What You Want.” In response, Universal and other music publishers have filed a lawsuit accusing the AI company Anthropic PBC of “training” its AI models to mimic copyrighted music.

Universal Music Group (UMG) v. Anthropic 

Universal Music Publishing, Concord, and ABKCO have sued Anthropic in a Tennessee federal court for exploiting lyrics that each music publisher controls in the training of Anthropic’s AI chatbot, Claude 2. UMG alleges that because Claude 2 generates identical or nearly identical copies of copyrighted lyrics, Anthropic has violated the publishers’ rights by using lyrics from at least 500 songs without any licenses. UMG v. Anthropic and other similar cases signal a point of contention over the copyright obligations of AI companies. Music publishers, like other copyright owners, argue that tech companies must obtain permission to use copyright-protected works when training AI models. The publishers also accuse Anthropic of profiting from the infringement and of removing copyright management information from Claude 2’s output. Furthermore, UMG claims that a market for licensing copyrighted works already exists and is currently used by music streaming platforms, like Spotify, that license artists’ work.

UMG has asked the court to issue a preliminary injunction barring Anthropic’s current AI model from generating outputs that disseminate UMG’s lyrics and prohibiting the use of those lyrics in training any of Anthropic’s future models. Without a preliminary injunction, Anthropic’s alleged copying and continued use of Claude could cause irreparable harm to the licensing market for lyrics and could erode the relationships between publishers, licensees, and the songwriters these music publishers represent.

What Does This Case Mean for AI and Copyright Law Today?

The decision in UMG v. Anthropic could change the entire landscape of music copyright law because it is one of the first cases to address the use of song lyrics in AI models. Whatever the outcome, the case could set a precedent for other AI cases. A win for Anthropic would be a substantial win for the AI industry as a whole and would establish a precedent against this type of copyright claim within the music industry. Furthermore, a finding in favor of Anthropic could establish that “fair learning,” the fair use of copyrighted materials for the sake of training AI models in order to advance AI development, is appropriate. However, if the court finds in favor of UMG, the ruling could signal to AI developers that they should be more cautious in their treatment of copyrighted material they have not licensed. Anthropic, which has financial partnerships with Google and Amazon, could become an example of the consequences of AI development; Google and Amazon have invested $300 million and $4 billion, respectively. Thus, the outcome of this case may affect how major tech companies evaluate their investments in AI startups.

Your Face Says It All: The FTC Sends a Warning and Rite Aid Settles Down

By: Caroline Dolan

If someone were to glance at your face, they wouldn’t necessarily know whether you won big in Vegas or are silently battling a gambling addiction. As you stroll down the street, your face can conceal many a secret, even a lucrative side hustle. While facial recognition (“FR”) software is not a new innovation, deep pockets are pouring staggering sums of money into the FR market. Last year, the market was valued globally at $5.98 billion, and it is projected to grow at a compound annual growth rate of 14.9% through 2030. This rapid and bold deployment of facial recognition technology may therefore make our faces more revealing than ever, transforming them into our most valuable—yet vulnerable—asset.
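
To put that growth-rate claim in concrete terms, the short sketch below simply compounds the reported $5.98 billion figure at 14.9% per year. The seven-year horizon (roughly 2023 through 2030) is an assumption made for illustration, not a figure taken from the cited forecast.

```python
# Rough projection of the facial recognition market, assuming the reported
# $5.98 billion base value and a 14.9% compound annual growth rate (CAGR).
base_value_billion = 5.98   # reported global market value "last year"
cagr = 0.149                # projected compound annual growth rate
years = 7                   # assumed horizon, roughly 2023 through 2030

projected = base_value_billion * (1 + cagr) ** years
print(f"Projected market value: ${projected:.1f} billion")  # about $15.8 billion
```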

A Technical Summary for Non-Techies

Facial recognition uses software to assess similarities between faces and determine whether they likely belong to the same person. Facial characterization goes further, classifying a face based on individual characteristics like gender, facial expression, and age. Through deep learning AI, artificial neural networks mimic how our brains process data. The neural network consists of layers of algorithms that process and learn from training data, such as images or text, and eventually develop the ability to identify features and make comparisons.
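
For readers curious about what “making comparisons” looks like in practice, the minimal sketch below assumes a hypothetical trained network has already converted each face image into a numeric embedding vector; the software then scores how similar two vectors are and applies a threshold. The function names, the 128-dimension embedding size, and the 0.6 threshold are all illustrative assumptions, not details of any particular FR product.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how alike two face embeddings are (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(embedding_a: np.ndarray, embedding_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Flag a 'match' when the similarity score clears a tuned threshold."""
    return cosine_similarity(embedding_a, embedding_b) >= threshold

# Two hypothetical 128-dimensional embeddings produced by a trained network.
probe = np.random.rand(128)
watchlist_entry = np.random.rand(128)
print(is_match(probe, watchlist_entry))
```

The interesting work, of course, happens inside the network that produces the embeddings; the comparison step itself is simple arithmetic.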

However, when the dataset used to train an FR model is unrepresentative of different genders and races, the result is a biased algorithm. Training data that is skewed toward certain features creates a critical weak spot in a model’s capabilities and can result in “overfitting,” wherein the machine learning model performs well on the training data but poorly on data that differs from the data it was trained on. For example, a model trained on data that is biased toward images of men with Western features will likely struggle to make accurate determinations about images of East Asian women.
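
To illustrate why a single overall accuracy figure can mask exactly this kind of disparity, the sketch below computes false-positive rates separately for each demographic group in a labeled test set. The group names and error counts are invented for illustration; they merely echo the pattern of uneven performance described above.

```python
from collections import defaultdict

def false_positive_rate_by_group(predictions):
    """
    predictions: list of (demographic_group, predicted_match, true_match) tuples.
    Returns the false-positive rate per demographic group, exposing disparities
    that a single aggregate accuracy number would hide.
    """
    negatives = defaultdict(int)        # people who should NOT have been flagged
    false_positives = defaultdict(int)  # of those, how many were flagged anyway
    for group, predicted, actual in predictions:
        if not actual:
            negatives[group] += 1
            if predicted:
                false_positives[group] += 1
    return {g: false_positives[g] / negatives[g] for g in negatives if negatives[g]}

# Invented example: two groups of 100 non-matching faces with very different error rates.
results = [("group_a", True, False)] * 2 + [("group_a", False, False)] * 98 \
        + [("group_b", True, False)] * 15 + [("group_b", False, False)] * 85
print(false_positive_rate_by_group(results))  # {'group_a': 0.02, 'group_b': 0.15}
```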

Data collection and curation pose their own set of challenges, and selection bias is a constant risk whether training data is collected from a proprietary large language model (“LLM”), which requires customers to purchase a license with restrictions, or from an open-source LLM, which is freely available and provides flexibility. Ensuring that training data represents a variety of demographics requires awareness of AI ethics, intentionality, and potentially federal regulation.

The FTC Cracks Down

In December of 2023, Rite Aid settled with the FTC following the agency’s complaint alleging that the company’s deployment of FR software was reckless and lacked reasonable safeguards, resulting in false identifications and foreseeable harm. Between 2012 and 2020, Rite Aid employed an AI FR program to monitor shoppers without their knowledge and flag “persons of interest.” Those whose faces were deemed a match to one in the company’s “watchlist database” were confronted by employees, searched, and often publicly humiliated before being expelled from the store. 

The agency’s complaint under Section 5 of the FTC Act asserted that Rite Aid recklessly overlooked the risk that its FR software would misidentify people based on gender, race, or other demographics. The FTC stated that “Rite Aid’s facial recognition technology was more likely to generate false positives in stores located in predominantly Black and Asian neighborhoods than in predominantly white communities, where 80% of Rite Aid stores are located.” This conduct also violated Rite Aid’s 2010 Security Order, which required the company to oversee its third-party software providers.

The recent settlement prohibits Rite Aid from implementing AI FR technology for five years. It also requires the company to destroy all data that the system has collected. The FTC’s stipulated Order imposes various comprehensive safeguards on “facial recognition or analysis systems,” defined as “an Automated Biometric Security or Surveillance System that analyzes . . . images, descriptions, recordings . . . of or related to an individual’s face to generate an Output.” If Rite Aid later seeks to implement an Automated Biometric Security or Surveillance System, the company must adhere to numerous monitoring, public notice, and data deletion requirements based on the “volume and sensitivity” of the data. Given that Rite Aid filed for Chapter 11 bankruptcy in October 2023, the settlement is pending approval by the bankruptcy court while the FTC’s proposed consent Order goes through public notice and comment.

Facing the Future

Going forward, it is expected that the FTC will remain “vigilant in protecting the public from unfair biometric surveillance and unfair data security practices.” Meanwhile, companies may be incentivized to embrace AI ethics as a new component of “Environmental, Social, and Corporate Governance” while legislators wrestle with how to ensure that automated decision-making technologies evolve responsibly and do not perpetuate discrimination and harm.

2023: A Roller Coaster Towards Unionization for Game Developers?

By: Kevin Vu

No doubt, 2023 was a “blockbuster year for video games.” From the Game Awards breaking viewership records, to the long-anticipated Baldur’s Gate 3 winning several awards, including Game of the Year, to the redemption of Cyberpunk 2077, it’s evident that 2023 will be celebrated for its many great releases. But one little-told story of gaming in 2023 is the massive wave of layoffs that swept across many developers. Perhaps the layoffs were inevitable, given the enormous costs that top video games incur and the fact that some notable games generated only half as much revenue as anticipated.

But there may be an even more fundamental reason for this rollercoaster of a year in gaming. Tech, the umbrella industry for gaming, has historically been resistant to unionization. As layoffs continue across the tech industry, the call for unionization has grown louder and louder. With the gaming industry celebrating one of its most consequential years, it’s time to ask whether unionization would ultimately benefit the industry.

Reasons to Unionize

Traditional reasons for unionization include higher wages, a safer workplace, job stability, and collective bargaining. Both tech developers and game developers have traditionally earned six-figure salaries, which takes the wage factor largely off the table. The remaining factors, however, suggest that the gaming industry should unionize. Riot Games, Activision-Blizzard, and other companies in the video game space are notorious for workplace harassment. A union can advocate for those workers, push for greater enforcement of workplace harassment and discrimination laws, and ultimately help create a culture where workplace harassment is no longer the norm. And with gaming companies notorious for their long hours (dubbed “crunch”), negotiating for better conditions through unions seems obvious. But perhaps the most compelling reason is the widespread layoffs of 2023, as unions can help secure better severance pay for employees transitioning to other endeavors.

Reasons Not to Unionize

However, various arguments have emerged against unionization in gaming, including the rapid development of technology, the blurred lines between management and workers, and the risk of stifling the creative process. Ultimately, though, many of those arguments seem strained. One popular emerging technology, virtual reality, has many of its roots in video game development. That technology has since had various successes in helping doctors, patients, incarcerated individuals, and many others. Now, the rapid development of technology seems to threaten game developers themselves. Companies are beginning to use generative AI for their video games, whether for voice acting or promotional art. Indeed, some developers are now promising to use artificial intelligence to develop games, too. Using the advancement of technology as a reason to stymie the workers who helped create that technology seems backhanded at best.

On an even more fundamental level, shifting to generative technology to develop video games seems counterintuitive, given that video games are a creative product. What creativity exists in AI? This year in games should tell companies that developers are needed and should be treasured. Baldur’s Gate 3, 2023’s Game of the Year, spent nearly three years in early access, during which developers continued to work on the game while the public played it before its official release. Zelda: Tears of the Kingdom, a runner-up for that same award, was essentially finished nearly a year before release, with that final year spent polishing the game. Cyberpunk 2077, a game with a tumultuous start, won 2023’s Best Ongoing Game Award because its developers ultimately believed in their product. In an industry where some of the biggest games are passion projects made by small teams, justifying anti-union sentiment by citing creativity while turning to technology that stifles that very creativity is disingenuous.

What Now?

It seems evident that video game developers should seriously consider unionization. Despite a big year in gaming releases, the industry is still threatened by layoffs, and crunch conditions persist. Video game unionization is not new, either. The first multi-department video game union, which included developers, emerged in 2023. Quality assurance workers, the individuals who test games to help polish them for release, have also begun unionizing. Other creatives in the video game space, like voice actors, have taken collective action as well. Unions have been effective in these creative spaces and in addressing technology; for example, the Writers Guild of America’s strike ended on favorable terms for screenwriters, including limits on the use of AI. Ultimately, video game developers should look at their industry and ask whether the current climate is sustainable.