Diarra v. FIFA: A Clash Between Global Sports Governance and Individual Labor Rights

By: Tavis McClain

When a footballer challenges the most powerful governing body in sports, the world pays attention. Lassana Diarra, a talented midfielder, entered into a stand-off with the very institution meant to uphold the spirit of the game. The case has incited intense debate on fairness, freedom, and the balance of power in professional football.

Introduction

Diarra v. FIFA challenges the foundational structures of sports governance. This dispute raises crucial questions about how international sports bodies govern, how athletes assert their rights, and where the boundaries lie between sporting rules and fundamental labor protections. The situation is emblematic of a deeper tension in modern sport: the power of centralized global institutions like FIFA versus the rights of individuals navigating the professional system. It offers a revealing lens into how the governance of international sport must adapt to legal, ethical, and human rights standards.

Background

Lassana Diarra, a well-regarded French international, terminated his contract with Russian club Lokomotiv Moscow in 2014. Diarra claimed the club had breached the contract by failing to pay his wages and creating a hostile work environment. However, FIFA’s Dispute Resolution Chamber (DRC) ruled that Diarra had no just cause for the termination, as there was insufficient evidence of a breach by Lokomotiv Moscow. FIFA subsequently ordered Diarra to pay compensation and imposed a global playing ban until it was paid—a punishment that effectively paused his career and forced him out of the game.

FIFA’s Ruling and CAS Appeal

Diarra appealed to the Court of Arbitration for Sport (CAS), which largely upheld FIFA’s decision. CAS decisions are generally considered binding on FIFA and its member institutions. The CAS ruling emphasized the principle of contractual stability in football—a cornerstone of FIFA’s regulations, designed to prevent players and clubs from unilaterally breaching agreements. Diarra subsequently challenged the ban in the French civil courts, arguing that enforcing FIFA’s decision within a national jurisdiction violated his fundamental labor rights, particularly his right to work.

Global Sports Governance Under Scrutiny

FIFA, as the global governing body of football, enforces a centralized dispute resolution system designed to streamline legal matters and preserve uniformity across jurisdictions. This case, however, exposes the limitations of that model:

  • Enforcement Without Borders: FIFA’s global ban extended beyond Russia, restricting Diarra’s ability to play in any league worldwide. This reveals how sports governing bodies can bypass national labor protections through international enforcement.
  • Lack of Worker Protections: FIFA’s mechanisms often prioritize contractual order over employee rights, a model that may not always align with domestic labor standards, especially within the EU.
  • Limited Transparency and Appeal: The CAS arbitration system, while designed for expediency, is often criticized for its lack of transparency, limited recourse, and the asymmetry in power between athletes and governing bodies.

What This Means for Labor Rights in Sport

  1. The Right to Just Cause
    The case demonstrates how difficult it can be for players to assert just cause in contract disputes, especially when proving workplace mistreatment or unpaid wages. It raises questions about the fairness of a system that holds players to stricter standards than their employers: a player must prove that terminating the contract was justified, while the club faces no comparable evidentiary burden.
  2. Access to National Legal Systems
    Diarra’s recourse to the French courts signaled that national jurisdictions can and will challenge the authority of sports bodies when fundamental rights are at stake. This could set a precedent for athletes bypassing arbitration where labor rights are seen to be compromised. If FIFA does not provide relief when players’ labor rights are violated, players will seek alternative avenues of redress. It is therefore in the interest of both FIFA and the players to resolve these conflicts within the sport’s own system, reducing transaction costs and promoting transparency.
  3. Rethinking the Role of CAS
    As the de facto “supreme court” of sports, CAS must adapt its structure to better balance institutional interests with individual protections; the current structure favors the former over the latter. Reforms might include more transparent hearings, greater independence, and recognition of fundamental labor norms. Such changes could go a long way toward remedying the issues footballers now face.

Conclusion

Diarra v. FIFA exposed a fault line between the old world of insular sports governance and the new reality where labor rights and ethical governance matter more than ever. For international sport to remain credible and fair, its legal infrastructure must shift toward transparency, equity, and respect for the individual.

Bassnectar, Settled and Still Spinning: What #MeToo Justice Leaves Behind

By: Jacqueline Purmort-LaBue

The Bassnectar case will not be going to trial. Earlier this year, Bassnectar, born Lorin Ashton, reached a private settlement with three women who had accused him of sexually abusing them when they were underage. The announcement comes shortly after his motion to dismiss was denied late last year. 

Background

The dubstep DJ and music producer became a well-known celebrity in the electronic dance music (“EDM”) scene after releasing Divergent Spectrum, his first album to hit the Billboard charts. After four years of touring and hosting specially curated Bassnectar festivals, Ashton went on to play almost exclusively at well-known commercial festivals such as Bonnaroo, Electric Daisy Carnival, Electric Forest Festival, Lollapalooza, and Okeechobee.

In mid-2020, Ashton announced that he was stepping back from music amidst numerous accusations of sexual misconduct that surfaced on social media. The following year, Rachel Ramsbottom and Alexis Bowling filed suit, claiming to be survivors of sex trafficking and child pornography. Additionally, the lawsuit named Ashton’s management and production companies, his record label, and his charity as “knowing participants or beneficiaries” of such acts. 

Selective Justice in the #MeToo Era

Ashton is not the only celebrity to face accusations like these. With the rise of the #MeToo movement in 2017, many women have come forward to share their experiences as survivors of sexual violence in the workplace. Although a multitude of male celebrities have been accused of sexual misconduct both within and outside the EDM community, only a handful have faced criminal or civil charges in court. Well-known names like Harvey Weinstein, Bill Cosby, R. Kelly, Kevin Spacey, and Sean “Diddy” Combs are among that handful.

Many of these cases are still in progress. Some defendants have been convicted; others have had those convictions overturned. Only 2.5% of perpetrators will go to prison for their crimes. Frequently, survivors settle privately before a lawsuit is ever filed. The Rape, Abuse, and Incest National Network (“RAINN”) has reported that although one in every six women in the U.S. experiences rape or attempted rape, five in six women who are raped do not report it. The primary reason for underreporting is a lack of trust in the policing and legal systems.

The Bassnectar settlement includes a confidentiality agreement, meaning that there will be no details released about the settlement, no public vindication for the survivors, and no finding of wrongdoing on the part of Ashton. This raises critical legal questions: Who gets held accountable, and who gets to walk away? Why does justice still feel so selective in the post-#MeToo era?

Cancelled or Still Cashing In?

Since the @EvidenceAgainstBassnectar Instagram account went live in 2020, fans have grappled with the allegations. At the time, they debated whether or not to retire their clothing featuring the famed Bassnectar bassdrop symbol, with some creating altered versions meant to symbolize the community moving forward without Ashton. One fan even started a petition, which has garnered nearly 2,000 signatures, calling for Ashton to give the bassdrop back to the fans and be held accountable for his actions. Many fans have elected to remove or cover up their bassdrop tattoos.

The ripple effect of the allegations hasn’t just affected the fans. In October 2023, two shows were canceled at Harrah’s Cherokee Center in Asheville, North Carolina. The Gateway Center Arena in College Park, Georgia, also canceled two of his shows scheduled for April 2024 after an investigation into the allegations against Ashton. 

Ashton has spoken out publicly against cancel culture, calling it a form of “domestic terrorism” following the numerous cancellations of his shows. Since his “comeback” launched in 2023, Bassnectar has played a pair of sold-out shows in Las Vegas in October, and another pair of sold-out shows in New York City on New Year’s Eve. Clearly, not all his fans believe the allegations brought by Ramsbottom and Bowling. 

This mixed public response highlights the complexity of accountability in the digital age. While venues and some fans have taken decisive steps to distance themselves from Ashton, others continue to support him, filling arenas and defending his legacy. The split raises broader questions: Can a fan base truly separate art from artist? Is “cancel culture” a meaningful mechanism for justice, or simply a temporary disruption? As Ashton’s career presses on despite serious allegations, the Bassnectar case forces us to reckon with what accountability and fairness for survivors look like when public opinion is so starkly divided.

Lipsticks and Lawsuits: The Legal Consequences of Virtual Glam

By: Penny Pathanaporn

Introduction

Have you ever had a shade match done at Sephora by a sales associate or used a virtual try-on tool on a cosmetics website to visualize how a certain lipstick might look on your features? These tools are integral to the shopping experience; they help shoppers like you and me decide which products to add to our carts and which products to skip. But what if I told you that these tools could also raise important legal questions relating to biometric data collection? 

Overview of U.S. Biometric Privacy Laws 

In the United States, only state-level legislation specifically addressing biometric privacy has been enacted; no federal law currently does so. Since 2023, at least eleven states have introduced legislation to regulate the collection of biometric data by private companies, but only three states—Washington, Texas, and Illinois—have enacted laws regulating biometric privacy. Of these laws, Illinois’ Biometric Information Privacy Act (BIPA) is the most robust, as it allows plaintiffs to bring private lawsuits for BIPA violations and claim statutory damages.

While Washington’s My Health My Data Act (MHMDA) also allows plaintiffs to bring private lawsuits, plaintiffs can only claim actual damages, which are calculated based on the degree of loss or harm a plaintiff experiences. Unfortunately, in cases of non-consensual data collection, actual damages can be fairly difficult to prove. Texas’ biometric privacy law—the Capture or Use of Biometric Identifier Act (CUBI)—is also fairly limited in scope: it covers only the collection of biometric information for commercial use and does not provide individuals a private right of action.

What is BIPA?

Private entities that conduct business in Illinois are subject to BIPA, whether or not they are incorporated or headquartered in the state. While “person[s], partnership[s], corporation[s], limited liability compan[ies] . . . [and] other group[s]” constitute private entities under BIPA, state and local governments, governmental agents, and government contractors do not. Under BIPA, identifiers such as “fingerprints, voice prints, retina scans, hand scans, or face geometry” constitute biometric information.

Generally, BIPA prohibits private entities from selling or deriving profits from individuals’ biometric data. Additionally, before collecting biometric information, BIPA requires private entities to (1) inform individuals of the type of data being obtained, (2) provide individuals with written information on why the data is being collected and the duration for which the data will be stored, and (3) acquire individuals’ consent in writing. 

Charlotte Tilbury Beauty Class Action Lawsuit

From 2019 to 2023, Charlotte Tilbury Beauty—a cosmetics company—offered virtual try-on tools such as “Foundation Shade Finder,” “Highlight Shade Finder,” and “Blush Finder” on its website. When using the virtual try-on tools, consumers were prompted to enable camera access and allow the website to scan their faces in real time before digital makeup effects were rendered.

In 2022, consumers with ties to Illinois filed a class action lawsuit against Charlotte Tilbury Beauty, alleging that the company violated BIPA by collecting biometric information without prior consent. Plaintiffs claimed that when using the virtual try-on tools, the cosmetics company’s website failed to inform or disclose to them that their facial geometry scans were being captured, archived, and used. 

In 2024, Charlotte Tilbury Beauty reached a $2.925 million settlement. As part of the settlement, individual plaintiffs may be entitled to compensation ranging from $700 to $1,100. Interestingly, settlement amounts for biometric data privacy cases can reach as high as $650 million, as seen in the class action lawsuit against Facebook.

E.L.F. Beauty Class Action Lawsuit

Similar to Charlotte Tilbury Beauty, another cosmetics company, E.L.F. Beauty, has also recently come under legal scrutiny for its virtual try-on tool. Consumers of E.L.F. Beauty filed a class action lawsuit against the company in 2024, alleging that it collected, saved, and used their facial geometry through the virtual try-on tool without obtaining consumer consent. The U.S. District Court for the Northern District of Illinois, Eastern Division, allowed the lawsuit to proceed by denying E.L.F. Beauty’s motion to compel arbitration.

Although the outcome of this case remains uncertain, the class action lawsuits filed against both Charlotte Tilbury Beauty and E.L.F. Beauty show that cosmetics companies must proceed with caution when conducting business in states with robust biometric privacy laws.

BIPA Amendment: A Silver Lining? 

Class action lawsuits arising from BIPA violations can be quite costly for private companies, especially if statutory damages are calculated per violation. The Illinois legislature alleviated this concern by amending BIPA in August 2024. Under the amendment, BIPA violations are calculated per individual rather than per instance of data collection, meaning that each plaintiff is entitled to only one award of statutory damages. Statutory damages are set by statute and do not depend on the degree of loss or harm a plaintiff experiences.
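
To see what is at stake, consider a rough back-of-the-envelope comparison of the two accrual theories. BIPA sets liquidated damages at $1,000 per negligent violation (and $5,000 per intentional or reckless one); the class size and scan counts in the sketch below are purely hypothetical.

# Back-of-the-envelope BIPA exposure under the two accrual theories.
# All inputs are hypothetical and for illustration only.

CLASS_SIZE = 10_000        # hypothetical number of class members
SCANS_PER_PERSON = 50      # hypothetical face scans per class member
NEGLIGENT_DAMAGES = 1_000  # statutory damages per negligent violation

# Pre-amendment theory: a violation accrues with every scan, so
# exposure scales with each instance of data collection.
per_instance_exposure = CLASS_SIZE * SCANS_PER_PERSON * NEGLIGENT_DAMAGES

# Post-amendment rule: one award per individual, however many scans.
per_individual_exposure = CLASS_SIZE * NEGLIGENT_DAMAGES

print(f"Per-instance exposure:   ${per_instance_exposure:,}")    # $500,000,000
print(f"Per-individual exposure: ${per_individual_exposure:,}")  # $10,000,000

At numbers like these, the amendment shrinks theoretical exposure fifty-fold, which is why the retroactivity question discussed below matters so much.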

Although the amendment provides a silver lining for private entities such as Charlotte Tilbury Beauty and E.L.F. Beauty, significant uncertainties remain in BIPA-related litigation. Judges in the Northern District of Illinois have expressed contrasting views on whether the amendment should be applied retroactively.

For many private entities, BIPA-related litigation still poses many risks. Companies that violated BIPA before the amendment may be liable for each individual instance of biometric data collection. This uncertainty may well be one of the key factors that pushed Charlotte Tilbury Beauty into such a hefty settlement agreement.

The Future of the Cosmetics Industry

Given how expensive litigation can be, private companies operating in states with robust biometric privacy laws should tread carefully before implementing tools that capture or archive consumers’ biometric information. Many websites already use scrollable Terms and Conditions that require consumers to check a box or provide an electronic signature to confirm that they consent to the terms. Because virtual try-on tools are integral to the beauty industry, cosmetics companies might consider implementing similar consent mechanisms to continue offering these services. Such mechanisms will not only protect companies from potential liability but also enable consumers to make informed choices when shopping for beauty products.
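
As a thought experiment, such a consent mechanism might look something like the minimal sketch below, which mirrors BIPA’s three prerequisites described earlier. The class, field, and function names are hypothetical illustrations, not any company’s actual implementation.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BiometricConsent:
    """Hypothetical consent record mirroring BIPA's three prerequisites."""
    user_id: str
    data_type: str               # (1) what is collected, e.g. "facial geometry scan"
    purpose: str                 # (2) why it is collected
    retention_days: int          # (2) disclosed storage duration
    signed_at: datetime | None   # (3) None until the user signs electronically

def start_face_scan(consent: BiometricConsent) -> str:
    """Refuse to activate the virtual try-on camera without written consent."""
    if consent.signed_at is None:
        raise PermissionError(
            f"Face scan blocked: no written consent on file for {consent.user_id}."
        )
    delete_by = consent.signed_at + timedelta(days=consent.retention_days)
    return (f"Scanning {consent.data_type} for '{consent.purpose}'; "
            f"data scheduled for deletion by {delete_by:%Y-%m-%d}.")

The essential design point is ordering: disclosure and written consent come first, and the scan cannot start without them.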

#BeautyIndustry #BiometricPrivacy #BIPA

How Section 230 Fails to Address the Modern Internet

By: Matthew Bellavia

When asked under oath during one of many congressional hearings, Mark Zuckerberg said:

“Senator, we consider ourselves to be a platform for all ideas.”

While this statement sounds like mere corporate virtue-signaling, it constitutes much more. When Section 230 of the Communications Decency Act was enacted in 1996, the prevailing vision of the internet was a neutral space where users could post ideas—a passive message board. Nearly three decades later, this vision fails to capture the modern internet. Today, social media platforms not only host content but actively control what is shown to users and which posts go viral. These decisions, often made through proprietary and secret recommendation algorithms, shape what content users see and how widely it spreads. An updated, nuanced legal framework that recognizes the active role platforms play in amplifying content is necessary to improve transparency and accountability.

Section 230 Legal Framework

Section 230 was enacted in response to conflicting court decisions on platform liability. In Cubby, Inc. v. CompuServe, Inc., an online information service that provided subscribers with access to thousands of sites and over 100 forums was found not liable for libel because it did not and could not review content on the forums before it was posted. By contrast, in Stratton Oakmont, Inc. v. Prodigy Servs. Co., an online bulletin board provider that selectively moderated its content was found to be akin to a publisher and was held liable for defamatory postings. Clearly, there was a need for a more definitive rule. As a result, 47 U.S.C. § 230(c)(1) states:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The law was designed to encourage internet growth by protecting platforms from liability for user-generated content while allowing them to moderate in good faith. Section 230(c)(2), the “Good Samaritan” provision, specifically protects platforms that remove objectionable content. The statute distinguishes platforms from “information content providers”—those responsible, in whole or in part, for creating or developing information. Platforms materially contributing to content may lose their immunity.

Prominent Cases After Section 230

Courts interpreting Section 230 have generally reinforced broad platform immunity, prioritizing the interests of innovation and free speech at the expense of accountability. In Zeran v. AOL (1997), the plaintiff’s personal contact information was maliciously and repeatedly posted on AOL forums alongside offensive merchandise related to the Oklahoma City bombing. Despite multiple notifications, AOL failed to promptly remove the content, and the plaintiff received death threats and harassment calls. The Fourth Circuit found AOL not liable because of broad Section 230 protection, even after notice was given of the harmful content. 

In contrast, Fair Housing Council v. Roommates.com (2008) held that platforms lose immunity when requiring users to input illegal content, as Roommates.com did by prompting discriminatory preferences. Recently, the Supreme Court in Gonzalez v. Google (2023) declined to rule on whether algorithmic recommendations constitute content development under Section 230.

Algorithmic Promotion and Co-Authorship

Modern platforms do not display content chronologically; they algorithmically rank and prioritize posts based on engagement metrics and user behavior. TikTok’s “For You” page curates individualized feeds via machine learning. YouTube’s autoplay and “Up Next” queues automatically recommend videos, and recommendations account for 70% of all views on the site. Facebook similarly uses proprietary signals to prioritize its News Feed.
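
A deliberately simplified sketch illustrates what ranking by engagement means in practice. The signals and weights below are invented for illustration only; real platforms use far more features, but the structural point is the same: someone chooses what the feed optimizes for.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int
    shares: int
    comments: int
    predicted_watch_seconds: float  # model-estimated dwell time

def engagement_score(post: Post) -> float:
    # Weighting shares and comments above likes is itself a design
    # decision that shapes which posts go viral; these weights are invented.
    return (1.0 * post.likes
            + 5.0 * post.shares
            + 3.0 * post.comments
            + 0.5 * post.predicted_watch_seconds)

def rank_feed(posts: list[Post]) -> list[Post]:
    # A chronological feed would sort by timestamp; an algorithmic feed
    # sorts by whatever score the platform chooses to maximize.
    return sorted(posts, key=engagement_score, reverse=True)

Every constant in engagement_score is an editorial judgment, which is precisely the critics’ point below.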

Critics argue that algorithm design reflects editorial choices rather than passive, neutral functions. Sites actively choose which content to amplify based on revenue-driven decisions, which impact the financial interests of the platforms and their respective creators. Alternatively, these sites could argue that their implementation of personalized ranking and the various tools offered to control content feeds suggest that users take a more active role in their own curation.

Legal Implications – When Does Immunity Break?

If platforms are found to be co-authors or material contributors, the consequences could be significant. Under Section 230, immunity is lost when a platform is deemed to have helped “create or develop” unlawful content. Courts have struggled with what that means, but algorithmic editing or targeted amplification might tip the scales. One could argue that using algorithms that predictably promote harmful content could constitute content development, especially if the platform profits from the activity. Moreover, platforms monetizing harmful content via advertising may be seen as active participants rather than neutral intermediaries. The Roommates.com decision already established that platforms that require or solicit unlawful content can lose immunity. Could algorithmic design, predictably amplifying harmful content, be the next frontier?

Potential Intermediate Standards

Section 230 is commonly referred to as “The Twenty-Six Words That Created the Internet.” A full repeal of the law would destroy the current online ecosystem: media companies simply do not have the infrastructure or resources to moderate all the content posted. YouTube, for example, receives 500 hours of uploaded content every minute, and the recent adoption and explosion of AI has compounded the problem. Instead of repeal, intermediate reforms could bring the law up to date. The EU’s Digital Services Act, for example, already imposes obligations on platforms to mitigate the risks of algorithmic recommendations. Other alternatives include conditioning immunity on algorithmic transparency or limiting immunity for the distribution of harmful content via algorithmic design.

Practicality

Tech companies argue that narrowing Section 230 would cause over-moderation and chill innovation and free speech, a concern that aligns with recent movements away from proactive moderation and fact-checking. Critics respond that platforms already wield considerable power, touching all aspects of society. Requiring transparency into algorithmic content delivery could help courts evaluate when platforms cross into co-authorship. However, this is not something media companies are likely to agree to without a fight.

Conclusion

The internet that Section 230 was designed for is long gone. Today, algorithms blur the publisher-platform distinction by enabling sites to curate, promote, and profit from the content they choose to amplify. While sites provide some tools for users to control their feeds, platforms still take a far more active role in curation than the drafters could have contemplated in 1996. As litigation around algorithmic content grows, Section 230 must evolve to recognize the active role platforms play in shaping content and to increase transparency and accountability.

#Section230 #PlatformImmunity #SocialMedia #WJLTA

GenAI in the Courtroom: Transformative Tool or Dangerous Shortcut?

By: Dustin Lennon-Jones

What happens when your legal advocate isn’t human? In an age where generative AI can write essays, compose music, and even mimic human speech with startling realism, some have begun to wonder: can it argue a case in court? As AI-powered services become more sophisticated, the legal world finds itself at a crossroads: how far should generative AI be allowed to go in the courtroom? The complex, evolving relationship between AI and the law raises important questions about the ethical use of AI in the legal community and the fine line between innovation and deception.

GenAI as your lawyer?

Jerome Dewald was representing himself in a dispute with a former employer, but due to his previous battle with throat cancer, he was having trouble articulating himself in court. With his case on appeal to a New York State appellate court, Dewald applied for and was granted permission to play a recorded video in place of standard oral argument. 

However, when the video began, the five-judge panel was instead greeted by a man who looked nothing like Dewald. When one judge asked whether the man was his lawyer, Dewald responded that he had generated the video and that the man in it was not real. Associate Justice Sallie Manzanet-Daniels was not pleased, rebuking Dewald for misleading the court and attempting to promote his own business.

Dewald had utilized the services of Tavus, a San Francisco-based generative AI start-up. The product Dewald attempted to use allows users to upload a video of themselves talking, at which point the program generates a “photo-realistic replica” of the user. This digital alter ego can be fed a script, which it then reads aloud in the user’s voice. However, Dewald was unsuccessful in creating a satisfactory replica of himself and settled on a stock avatar.

Though not utilized by Dewald, Tavus’ other product has the potential to be even more problematic. The conversational video interface (CVI) operates much in the same way as the replica program but also has the ability to engage in a conversation on its own. According to Tavus, the CVI allows a user to build an AI agent that “feel[s] like talking to an actual person.” This means that, theoretically at least, AI agents could be used to participate in oral arguments and respond to a judge’s questioning. 

Legal Framework for the use of AI

Under New York law, parties can appear and participate in civil actions personally or be represented by a licensed attorney. Had Dewald used the CVI software, this would clearly have been unlawful. Since the AI avatar is neither a party nor a licensed attorney, it cannot appear in court. The use of a script-reading avatar is less clear. 

Judge Manzanet-Daniels seemed to take issue most with the fact that the court had been misled, telling Dewald that it “would have been nice to know that when you made your application.” This may indicate, as Dr. Adam Wandt has speculated, that if Dewald had disclosed his intent to use an AI avatar, his application would have been denied.

Some courts have embraced this disclosure approach. One trial court in New York requires that any document prepared with the use of AI and filed in court contain a statement identifying the portions drafted by AI and certifying that a human reviewed it for accuracy. The Western District of North Carolina took the opposite stance, requiring a certification that no AI was used in the preparation of court filings. While not requiring disclosure, the Eastern District of Missouri warns litigants that they are responsible for content generated by AI.

As is often the case, new technology is advancing faster than the rules that regulate it. Dewald’s use of the Tavus service is part of a growing trend of AI misuse that underscores the need for guidance. In one such case, two lawyers who used ChatGPT to perform legal research were fined $5,000 after it made up non-existent cases, which they then cited. Michael Cohen, the former personal attorney of Donald Trump, used Google’s AI service Bard, which similarly hallucinated cases that Cohen cited in a motion. And in perhaps the most egregious case, the FTC fined DoNotPay, a legal information and “self-help” company, $193,000 for advertising “robot lawyers” who could replace humans in drafting legal documents. Unsurprisingly, the FTC found the service to be ineffective.

AI is Inevitable

While perhaps lacking a place in the courtroom, AI is becoming an increasingly embraced tool. In a survey of legal professionals by Thomson Reuters, 72% of respondents viewed AI as a force for good in the profession, and half of responding law firms said that exploring and implementing potential uses of AI was their top priority. The potential benefits could be game-changing: at the current rate of adoption, AI could save an average of 200 hours per person in 2025, allowing lawyers to spend more time on expertise-driven tasks and business development, or simply to have more time for themselves.

The Arizona Supreme Court is leading the way in reaping some of these benefits. In March, the court rolled out a new AI spokesperson program using a service similar to the one used by Dewald. The justice who authored a given opinion drafts a script, and the resulting video is published to the court’s website so that the public can have an easy-to-understand explanation of the result of a case. Court spokesman Alberto Rodriguez said this cut what used to be an hours-long process down to just 30 minutes.

Conclusion

For better or for worse, the Pandora’s box of generative AI is open. Its potential to save lawyers vast amounts of time and increase accessibility for the public has already been demonstrated. However, cases such as Dewald’s and DoNotPay’s serve as an important reminder of its limitations. Generative AI is a useful tool in a lawyer’s belt, but it is not a replacement for the lawyer.

#WJLTA #GenerativeAI #ethics #robotlawyer