Girlboss, Gaslight, Gatekeep: The New Frontier of Antitrust Regulation

By: Karina Paup Byrnes

On November 1, 2022, the European Union’s (EU) Digital Markets Act (DMA) entered into force. This legislation aims to comprehensively address antitrust concerns by regulating how large tech companies operate in the digital sector. The DMA represents the new frontier of antitrust legislation, for no prior law has attempted to regulate the digital market so thoroughly. The Act’s purpose is to foster more equitable practices in the marketplace, including greater competition, choice, innovation, and consumer benefits.

The DMA specifically targets six “gatekeeper” organizations, the tech companies the EU has identified as the most influential and impactful in the digital marketplace: Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft. All but ByteDance, the Chinese parent company of TikTok, are based in the United States, and all six are now subject to a number of new compliance standards. The response to the regulations has been mixed at best, and companies have expressed concern about the unclear legal landscape created by these new laws. The DMA’s large-scale regulatory impact is the first of its kind. While EU regulators are still navigating the DMA’s complex framework, the gatekeepers are scrambling to respond to the DMA and to shape the future of a new generation of antitrust legislation.

The regulation of consumer and user privacy is a core purpose of the DMA. The Act imposes limitations on gatekeeper companies by mandating that personal data be acquired only with a user’s explicit consent. These guidelines are designed to provide transparency in how user data is collected, managed, and recorded. What is less clear, however, is how gatekeepers will satisfy this requirement. Gatekeepers’ software relies heavily on unrestricted access to user data, and the DMA threatens to unravel years of software development. Gatekeepers must now pivot to protect consumer data rather than profit from collecting it without restriction.

Additionally, the DMA is intended to equalize the tech marketplace by requiring gatekeepers to open their closed software ecosystems, giving smaller companies access to that software. A closed software ecosystem is one in which a company retains full control over a software system, as opposed to an open ecosystem, which is free for anyone to use and places no restrictions on the apps that may be installed on it. It is no surprise that Apple has expressed concerns about how it will be able to comply without exposing secure information. Apple is pushing back against its designation as a “gatekeeper” company, as Samsung successfully did, in an effort to curtail the number of DMA regulations it must comply with. In response, the EU Internal Market Commissioner dismissed Apple’s concerns, saying the DMA can “foster innovation without compromising on security and privacy.” The Commissioner believes that iPhone users should be able to benefit from a wider range of competitive services.

While gatekeepers are unhappy about complying with the DMA, the companies the Act is intended to support are voicing frustrations of their own about how that compliance is being enforced. In January 2024, a coalition of smaller tech companies penned an open letter stating that the six gatekeepers “have either failed to engage in a dialogue with third parties or have presented solutions falling short of compliance with the DMA.” These companies believe that smaller tech competitors and consumers have been ignored, left “in the dark” about how the gatekeepers are adapting their policies to meet the DMA’s standards. Their statements reflect significant concern over whether the EU will strongly enforce compliance, as well as worries that the gatekeepers are not being as forthcoming as they should be about changes to their software access and user data protections.

The DMA requires the six gatekeepers to be in full compliance with the Act by March 7, 2024. Only then can there be a true assessment of how these companies have conformed to the guidelines, or of what further legal action they may pursue to escape compliance. Compelling these companies to take a user-first stance in their software operations could meaningfully change how people view their rights as consumers. Whether the Act increases the competitiveness of the digital marketplace will also be a development that everyone in the tech industry monitors, as a more equitable digital sector could bring many benefits to consumers. Regardless of the outcome, the successful passage of the DMA shows a strong interest in supervising the growing industry of user data collection and the power that large tech companies wield.

Misery on Montlake: How New State Bill Could Support Huskies (and Coug) NIL

By: Sam W. Kuper

While “misery” may be a bit facetious to describe a program and fanbase that just experienced their most successful season in recent memory, there is little doubt that nothing has gone UW football’s way since it kicked off the national championship game on January 8th. In the span of a week, UW lost the national championship game, its coaching staff, and countless starters on offense and defense—with many lost to the depths of the “transfer portal.” Even a few years ago, this turnover would have been completely unheard of—but that is the current reality of collegiate athletics. However, recently proposed Washington State legislation may make it easier for UW to recruit and retain prospective student athletes (PSAs) in the Name, Image, and Likeness (NIL) and transfer portal era.

Payment to the Players 

“Nowhere else in America can businesses get away with agreeing not to pay their workers a fair market rate on the theory that their product is defined by not paying their workers a fair market rate . . . . The NCAA is not above the law.” Justice Kavanaugh’s concurring opinion in NCAA v. Alston (2021) signaled to the NCAA that it would likely lose any future antitrust challenges to its longtime ban on student athlete compensation. While the Alston decision was limited to education-related non-cash compensation, the NCAA quickly recognized the changing landscape and announced an interim NIL policy just nine days after the decision’s release.

To the distress of ex-collegiate athletes like Reggie Bush, the interim policy now allows student athletes to receive cash or non-cash compensation for the use of their NIL through activities like endorsements, signings, and appearances. But NIL agreements must still be based on the market value that each player brings to the deal—ostensibly prohibiting NIL agreements that are “pay-for-play” (compensation solely for playing sports at the university) or compensation that is contingent on enrollment at a particular school (known as a “recruiting inducement”). 

The “Wild West”

Due to NIL and the introduction of the transfer portal in 2018—which eliminated the requirement for players to serve a “year of residence” before becoming eligible after transferring schools—student athletes now have the agency to move to schools where they will both play and earn more. Facilitated mostly through what are known as “NIL Collectives”—independent entities that pool the money of two or more influential supporters or alumni of certain institutions—student athletes now have legal access to the type of money traditionally reserved for professionals (schools themselves are still prohibited from directly paying athletes for their NIL).

As House subcommittee chair Gus Bilirakis (R-Fla.) stated during an NIL hearing on January 18th, “the sudden transition to NIL has enabled a wild west environment where pay-for-play is rampant.” By NCAA rule, collectives are barred not only from contacting PSAs or issuing them recruiting inducements, but also from communicating with coaching staffs or institutions regarding recruiting lists or watch lists. Florida State University’s football program, for example, was penalized by the NCAA earlier this year for coordinating an improper recruiting inducement worth $15,000 per month to a PSA. But despite reports of pervasive rule breaking by coaches, athletes, and collectives alike, this has been the only significant punishment the NCAA has handed down under its NIL rules. As one anonymous Pac-12 football coach stated regarding conversations about pay-for-play deals with PSAs and their parents, “We’ll tell them those are against NCAA rules, but you know how it works… it’s basically NFL free agency money.” Colorado football coach Deion Sanders even publicly asserted in an interview, “Fifty [thousand]!?…Fifty will get you a walk-on these days. … [A]in’t never seen nothing like it.” NCAA leaders have cited a lack of evidence as the reason for not pursuing more cases concerning NIL violations. In reality, it is likely the murkiness of the interim policy, combined with the absence of a federal law and the inconsistency of state laws, that makes it almost impossible for the NCAA to enforce its own rules on pay-for-play and improper inducements.

Ongoing Development of Federal and State NIL Laws

During the aforementioned January 18th hearing, Congressman Bilirakis proposed the FAIR College Sports Act, which aims to federally address the compliance and standardization issues in NIL by creating a nongovernmental oversight board that would dictate NIL rules. But despite some support from current student athletes, the proposal also received significant pushback in the testimony of current UCLA quarterback and NIL star Chase Griffin. For now, standardized federal legislation may still be a long way off.

While the NCAA has announced proposed changes to the NIL interim policy to address these issues, to date, collegiate institutions have mostly been guided by their own states’ laws for NIL compliance. Thirty-two states (not including Washington) have passed legislation largely modeled after California’s “Fair Pay to Play Act,” passed two years prior to the Alston decision. The advantage of these laws, operating alongside the NCAA’s interim policy, comes down to one crucial aspect: many of them allow school personnel to work directly with students to facilitate deals without running afoul of state ethics laws. This provides a significant advantage in both recruiting and player development, as it allows coaches and athletics departments to “basically barter on behalf of athletes.”

Proposed Washington State Bill

In a tongue-in-cheek opening to a January 16th hearing on his proposed bill, UW grad and State Senator Javier Valdez acknowledged the concern voiced to him by UW and WSU athletics administrators, saying, “[a]s much as I would love to have a bill about the transfer portal . . . this is actually about NIL.” The bill would amend the current ethics guidelines to allow state employees to communicate directly with student athletes about NIL opportunities and to help organize NIL deals with collectives like Montlake Futures. Testifying in support of the bill, UW Chief Compliance Officer Kiley Strong noted, “[W]e want to avoid putting our student athletes at a disadvantage and to ensure that recruits want to continue competing at schools in Washington State.” During the hearing, UW reported that 150 students disclosed NIL deals to the school in 2023.

Future of NIL?

NIL rulemaking and regulation are clearly still in flux. Countless issues deserve their own debate: the legality of 501(c)(3) NIL collectives; whether these unprecedented payments, made mainly to male athletes, comply with Title IX; predatory NIL deals; the effect of transferring on preexisting NIL contracts; and whether student athletes should be considered employees (for example, a University of Alabama sophomore recently had to forfeit $1.5 million in PGA Tour winnings due to his “amateur” status). The NCAA, the states, and the federal government all must wrestle with these problems going forward. One thing is certain: all student athletes deserve the opportunities that NIL presents. They pour their time, effort, and heart into representing their respective institutions. To quote Chris Mulick, the Washington State University representative who testified in support of the bill: “The star quarterback will always be found, but the tennis player may not be, and we think . . . this bill could really help us grow into that.” As of now, the proposed legislation has passed to the Rules Committee for a second reading.

Time to Face the Music: A.I. Music Copyright Infringement Battle Makes It to Court

By: Mackenzie Kinsella

AI-generated music has been a point of contention in copyright law since the release of “Heart on My Sleeve,” a song performed by AI-generated imitations of Drake’s and The Weeknd’s voices. The song racked up millions of listens on TikTok, Spotify, Apple Music, and other streaming platforms before it was taken down at the request of Universal, Drake and The Weeknd’s label. “Heart on My Sleeve” became an alarming indication of the potential copyright and right of publicity issues surrounding the use of AI in music. After the song’s release and the pressure to take it down, Universal urged various platforms to stop AI companies from using these AI-generated tracks, and it has even gone so far as to explore different legal avenues to prevent the use of AI-generated music. Publishers and streaming platforms could argue that AI companies have infringed their copyrights; however, AI companies and users are expected to raise a fair use defense.

Now, AI is problematically being used to generate the lyrics of iconic songs like Katy Perry’s “Roar,” Gloria Gaynor’s “I Will Survive,” and the Rolling Stones’ “You Can’t Always Get What You Want.” In response, Universal and other music publishers have filed a lawsuit accusing the AI company Anthropic PBC of “training” its AI models to mimic copyrighted music.

Universal Music Group (UMG) v. Anthropic 

Universal Music Publishing, Concord, and ABKCO have sued Anthropic in a Tennessee federal court for exploiting lyrics that each music publisher controls in the training of Anthropic’s AI chatbot, Claude 2. UMG alleges that because Claude 2 generates identical or nearly identical copies of copyrighted lyrics, Anthropic violated the publishers’ rights by using lyrics from at least 500 songs without any licenses. UMG v. Anthropic and other similar cases signal a point of contention over the copyright obligations of AI companies. Music publishers, like other copyright owners, argue that tech companies must obtain permission to use copyright-protected works to train AI models. The publishers also accuse Anthropic of profiting from the infringement and of removing copyright management information from Claude 2’s output. Furthermore, UMG claims that there is already an existing market for licensing copyrighted works, one currently served by music streaming platforms, like Spotify, that license artists’ work.

UMG has asked the court to issue a preliminary injunction preventing Anthropic’s current AI model from generating outputs that disseminate UMG’s lyrics and barring those lyrics from being used to train any of Anthropic’s future models. Without a preliminary injunction, UMG argues, Anthropic’s alleged copying and continued use of Claude could cause irreparable harm to the licensing market for lyrics and could erode the relationships between publishers, licensees, and the songwriters these music publishers represent.

What Does This Case Mean for AI and Copyright Law Today?

The decision in UMG v. Anthropic could change the entire landscape of music copyright law because it is one of the first cases to address the use of song lyrics in AI models. Whatever the outcome, the case could set a precedent for other AI cases. A win for Anthropic could be a substantial win for the AI industry as a whole and would establish a precedent against this type of copyright regulation of AI within the music industry. Furthermore, a finding in favor of Anthropic could establish that “fair learning” (the fair use of copyrightable materials for the sake of training AI models to further AI development) is appropriate. However, if the court finds in favor of UMG, the ruling could signal to AI developers that they must become more cautious in their treatment of copyrighted material they have not licensed. Anthropic, which has financial partnerships with Google and Amazon, could become an example of the consequences that AI developers may face. Google and Amazon have invested $300 million and $4 billion, respectively. Thus, the outcome of this case may affect how major tech companies evaluate their investments in AI startups.

Your Face Says It All: The FTC Sends a Warning and Rite Aid Settles Down

By: Caroline Dolan

If someone were to glance at your face, they wouldn’t necessarily know if you won big in Vegas or if you’re silently battling a gambling addiction. When you stroll down the street, your face can conceal many a secret, even a lucrative side hustle. While facial recognition (“FR”) software is not a new innovation, deep pockets are pouring a staggering amount of money into the FR market. Last year, the market was valued globally at $5.98 billion and is projected to grow at a compound annual growth rate of 14.9% through 2030. This rapid and bold deployment of facial recognition technology may therefore make our faces more revealing than ever, transforming them into our most valuable—yet vulnerable—asset.
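For a rough sense of scale, compounding that 2023 valuation at 14.9% per year over the seven years through 2030 implies

\[
\$5.98\ \text{billion} \times (1.149)^{7} \approx \$15.8\ \text{billion}.
\]

This back-of-the-envelope figure is offered only to illustrate the cited growth rate, not as the underlying report’s own 2030 projection, which may use a different baseline or horizon.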

A Technical Summary for Non-Techies

Facial recognition uses software to assess similarities between faces and make match determinations. Facial characterization further classifies a face based on individual characteristics like gender, facial expression, and age. Through deep learning AI, artificial neural networks mimic how our brains process data. The neural network consists of layers of algorithms that process and learn from training data, like images or text, and eventually develop the ability to identify features and make comparisons.
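To make the matching step concrete, the sketch below shows the comparison stage in simplified form, assuming a trained network has already converted each photo into a numeric “embedding” vector. The vectors, the threshold, and the function names are illustrative assumptions, not any vendor’s actual implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how alike two face embeddings are (closer to 1.0 = more similar)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """Declare a 'match' when the similarity score clears a tunable threshold."""
    return cosine_similarity(probe, enrolled) >= threshold

# Hypothetical embeddings a trained network might produce for two photos.
probe_face = np.array([0.12, 0.85, 0.31, 0.44])
watchlist_face = np.array([0.10, 0.80, 0.35, 0.46])

print(is_match(probe_face, watchlist_face))  # True: similarity is about 0.998
```

The threshold is a design choice: raising it reduces false matches but increases missed ones, which is precisely the trade-off at issue when these systems are pointed at the public.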

However, when the dataset used to train the FR model is unrepresentative of different genders and races, a biased algorithm is created. Training data that is biased toward certain features creates a critical weak spot in a model’s capabilities and can result in “overfitting,” wherein the machine learning model performs well on the training data but poorly on data that differs from the data on which it was trained. For example, a model trained on data that is biased toward images of men with Western features will likely struggle to make accurate determinations on images of East Asian women.
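Here is a minimal sketch of how such a disparity can be surfaced in practice, assuming you already have the system’s match decisions and the ground truth, grouped by demographic. The group names and records below are invented purely for illustration.

```python
from collections import defaultdict

# Each record: (demographic_group, system_said_match, actually_a_match).
# Invented data purely to illustrate the audit, not real results.
results = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

false_positives = defaultdict(int)
non_matches = defaultdict(int)

for group, predicted_match, is_true_match in results:
    if not is_true_match:              # only true non-matches can produce false positives
        non_matches[group] += 1
        if predicted_match:
            false_positives[group] += 1

for group, total in non_matches.items():
    rate = false_positives[group] / total
    print(f"{group}: false-positive rate {rate:.0%}")
# Prints 50% for group_a and 67% for group_b; a gap like this across
# demographic groups is the kind of disparity the FTC described at Rite Aid.
```

Auditing error rates per group rather than in the aggregate is what exposes the weakness that biased training data creates.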

Data collection and curation pose their own set of challenges, but selection bias is a constant risk whether training data is collected from a proprietary large language model (“LLM”), which requires customers to purchase a license with restrictions, or from an open-source LLM, which is freely available and provides flexibility. Ensuring that training data represents a variety of demographics requires AI ethics awareness, intentionality, and potentially federal regulation.

The FTC Cracks Down

In December of 2023, Rite Aid settled with the FTC following the agency’s complaint alleging that the company’s deployment of FR software was reckless and lacked reasonable safeguards, resulting in false identifications and foreseeable harm. Between 2012 and 2020, Rite Aid employed an AI FR program to monitor shoppers without their knowledge and flag “persons of interest.” Those whose faces were deemed a match to one in the company’s “watchlist database” were confronted by employees, searched, and often publicly humiliated before being expelled from the store. 

The agency’s complaint under section 5 of the FTC Act asserted that Rite Aid recklessly overlooked the risk that its FR software would misidentify people based on gender, race, or other demographics. The FTC stated that “Rite Aid’s facial recognition technology was more likely to generate false positives in stores located in predominantly Black and Asian neighborhoods than in predominantly white communities, where 80% of Rite Aid stores are located.” This also violated Rite Aid’s 2010 Security Order, which required the company to oversee its third-party software providers.

The recent settlement prohibits Rite Aid from implementing AI FR technology for five years. It also requires the company to destroy all data that the system has collected. The FTC’s stipulated Order imposes various comprehensive safeguards on “facial recognition or analysis systems,” defined as “an Automated Biometric Security or Surveillance System that analyzes . . . images, descriptions, recordings . . . of or related to an individual’s face to generate an Output.” If Rite Aid later seeks to implement an Automated Biometric Security or Surveillance System, the company must adhere to numerous monitoring, public notice, and data deletion requirements based on the “volume and sensitivity” of the data. Given that Rite Aid filed for Chapter 11 bankruptcy in October of 2023, the settlement is pending approval by the bankruptcy court while the FTC’s proposed consent Order goes through public notice and comment.

Facing the Future

Going forward, it is expected that the FTC will remain “vigilant in protecting the public from unfair biometric surveillance and unfair data security practices.” Meanwhile, companies may be incentivized to embrace AI ethics as a new component of “Environmental, Social, and Corporate Governance” while legislators wrestle with how to ensure that automated decision-making technologies evolve responsibly and do not perpetuate discrimination and harm.

2023: A Roller Coaster Towards Unionization for Game Developers?

By: Kevin Vu

No doubt, 2023 has been a “blockbuster year for video games.” From the Game Awards breaking viewership records, to the long-anticipated Baldur’s Gate 3 winning several awards, including “Game of the Year,” to the redemption of Cyberpunk 2077, it’s evident that 2023 will be celebrated for its many great releases. But one little-told story of gaming in 2023 is the massive number of layoffs across many developers. Perhaps layoffs were inevitable, given the enormous costs that top video games incur and the fact that some notable games generated only half as much revenue as anticipated.

But there may be an even more fundamental reason for this rollercoaster of a year in gaming. Tech, the umbrella industry for gaming, has historically been resistant to unionization. As layoffs continue across the tech industry, the call for unionization has grown louder and louder. With the gaming industry celebrating one of its most consequential years, it’s time to ask whether unionization would ultimately benefit the industry.

Reasons to Unionize

Traditional reasons for unionization include higher wages, a safer workplace, job stability, and collective bargaining. Both tech developers and game developers have traditionally made six-figure salaries, eliminating the high-wage factor. However, the remaining factors suggest that the gaming industry should unionize. Riot Games, Activision-Blizzard, and other companies within the video game space are notorious for workplace harassment. A union can advocate for those workers, lead to greater enforcement of workplace harassment and discrimination laws, and ultimately help create and facilitate a culture where workplace harassment is no longer the norm. And with gaming companies notorious for their long hours (dubbed “crunch” time), negotiating for better conditions through unions seems obvious. But perhaps the most compelling reason is the widespread layoffs of 2023, as unions can help secure better severance pay as employees transition to other endeavors.

Reasons Not to Unionize

However, various arguments have emerged against unionization in gaming, including the rapid development of technology, blurred lines between management and workers, and the fear of stifling the creative process. Ultimately, though, many of those reasons seem strained. One of the most popular emerging technologies, virtual reality, has many of its roots in video game development. That technology has since seen various successes in helping doctors, patients, incarcerated individuals, and many others. Now, the rapid development of technology seems to threaten game developers themselves. Companies are beginning to use generative AI for their video games, whether for voice acting or promotional art. Indeed, some developers are now promising to use artificial intelligence to develop games, too. Using the advancement of technology as a reason to stymie the workers who helped create that technology seems backhanded at best.

On an even more fundamental level, shifting to generative technology to develop video games seems counterintuitive, given that video games are a creative product. What creativity exists with AI? This year in games should tell companies that developers are needed and should be treasured. Baldur’s Gate 3, 2023’s Game of the Year, spent nearly three years in early access, where developers continued to work on the game as the public played it before its official release. Zelda: Tears of the Kingdom, a runner-up for that same award, was largely complete nearly a year before release, with that final year spent polishing the game. Cyberpunk 2077, a game with a tumultuous start, won 2023’s Best Ongoing Game Award because the developers ultimately believed in their product. In an industry where some of the biggest games are passion projects made by small teams, trying to justify anti-unionization sentiment by citing creativity while using technology that stifles that very creativity is disingenuous.

What Now?

It seems evident that video game developers should seriously consider unionization. Despite a big year of gaming releases, the industry is still threatened by layoffs, and crunch conditions persist. Video game unionization is not new, either. The first multi-department video game union, which included developers, emerged in 2023. Quality assurance workers, individuals who test games to help ship a more polished product, have also begun unionizing. Other creatives in the video game space, like voice actors, have taken collective action as well. Unions have been effective in these creative spaces and in addressing technology; for example, the Writers Guild of America’s strike ended on favorable terms for screenwriters, including limits on the use of AI. Ultimately, video game developers should look at their industry and ask whether the current climate is sustainable.