Lipsticks and Lawsuits: The Legal Consequences of Virtual Glam

By: Penny Pathanaporn

Introduction

Have you ever had a shade match done at Sephora by a sales associate or used a virtual try-on tool on a cosmetics website to visualize how a certain lipstick might look on your features? These tools are integral to the shopping experience; they help shoppers like you and me decide which products to add to our carts and which products to skip. But what if I told you that these tools could also raise important legal questions relating to biometric data collection? 

Overview of U.S. Biometric Privacy Laws 

In the United States, only state-level legislation specifically addresses biometric privacy; no federal law currently does so. Since 2023, at least eleven states have introduced legislation to regulate the collection of biometric data by private companies. However, only three states—Washington, Texas, and Illinois—have enacted laws governing biometric privacy, with Washington enacting two such laws. Of these four laws, Illinois’ Biometric Information Privacy Act (BIPA) is the most robust, as it allows plaintiffs to bring private lawsuits for BIPA violations and claim statutory damages.

While Washington’s My Health My Data Act (MHMDA) also allows plaintiffs to bring private lawsuits, plaintiffs can only claim actual damages, which are calculated based on the degree of loss or harm a plaintiff experiences. Unfortunately, in cases of non-consensual data collection, actual damages can be fairly difficult to prove. Texas’ biometric privacy law, the Capture or Use of Biometric Identifier Act (CUBI), is also fairly limited in scope: CUBI only covers the collection of biometric identifiers for commercial purposes and does not provide individuals with a private right of action.

What is BIPA?

Private entities that conduct business in Illinois are subject to BIPA, regardless of whether they are incorporated or headquartered in the state. While “person[s], partnership[s], corporation[s], limited liability compan[ies] . . . [and] other group[s]” constitute private entities under BIPA, state and local governments, governmental agents, and government contractors do not. Under BIPA, the following identifiers are protected: “fingerprints, voice prints, retina scans, hand scans, or face geometry.”

Generally, BIPA prohibits private entities from selling or deriving profits from individuals’ biometric data. Additionally, before collecting biometric information, BIPA requires private entities to (1) inform individuals of the type of data being obtained, (2) provide individuals with written information on why the data is being collected and the duration for which the data will be stored, and (3) acquire individuals’ consent in writing. 

Charlotte Tilbury Beauty Class Action Lawsuit

From 2019 to 2023, Charlotte Tilbury Beauty—a cosmetics company—offered virtual try-on tools such as “Foundation Shade Finder,” “Highlight Shade Finder,” and “Blush Finder” on its website. When using the virtual try-on tools, consumers were prompted to enable camera access and allow the website to scan their faces in real time before digital makeup effects were rendered.

In 2022, consumers with ties to Illinois filed a class action lawsuit against Charlotte Tilbury Beauty, alleging that the company violated BIPA by collecting biometric information without prior consent. Plaintiffs claimed that when using the virtual try-on tools, the cosmetics company’s website failed to inform or disclose to them that their facial geometry scans were being captured, archived, and used. 

In 2024, Charlotte Tilbury Beauty reached a $2.925 million settlement. As part of the settlement, individual plaintiffs may be entitled to compensation ranging from $700 to $1,100. Interestingly, settlement amounts for biometric data privacy cases can reach as high as $650 million, as seen in the class action lawsuit against Facebook.

E.L.F. Beauty Class Action Lawsuit

Similar to Charlotte Tilbury Beauty, another cosmetics company, E.L.F. Beauty, has recently come under legal scrutiny for its virtual try-on tool. Consumers of E.L.F. Beauty filed a class action lawsuit against the company in 2024. Plaintiffs alleged that the beauty company collected, saved, and used their facial geometry through the virtual try-on tool without obtaining consumer consent. The U.S. District Court for the Northern District of Illinois, Eastern Division, allowed the lawsuit to proceed by denying E.L.F. Beauty’s request to compel arbitration.

Although the outcome of this case remains uncertain, the class action lawsuits filed against both Charlotte Tilbury Beauty and E.L.F. Beauty show that cosmetics companies must proceed with caution when conducting business in states with robust biometric privacy laws.

BIPA Amendment: A Silver Lining? 

Class action lawsuits arising from BIPA violations can be quite costly for private companies, especially when statutory damages are calculated per violation. Statutory damages are set by statute rather than by the degree of loss or harm a plaintiff experiences; BIPA fixes them at $1,000 per negligent violation and $5,000 per intentional or reckless violation. The Illinois legislature alleviated this concern by amending BIPA in August 2024. Under the amendment, violations are generally counted per individual rather than per instance of data collection: a plaintiff whose face was scanned by the same tool a hundred times is now entitled to a single award of statutory damages rather than a hundred.

Although the amendment provides a silver lining for private entities such as Charlotte Tilbury Beauty and E.L.F. Beauty, significant uncertainties remain when it comes to BIPA-related litigation. Judges in the Northern District of Illinois have expressed contrasting views on whether the BIPA amendment should apply retroactively.

For many private entities, BIPA-related litigation still poses many risks. Companies that violated BIPA before the amendment may be liable for each individual instance of biometric data collection. This uncertainty could perhaps be one of the key factors that pushed Charlotte Tilbury Beauty to enter into a hefty settlement agreement.

The Future of the Cosmetics Industry

Given how expensive litigation can be, private companies operating in states with robust biometric privacy laws should tread carefully before implementing tools that capture or archive consumers’ biometric information. Many websites already use scrollable Terms and Conditions that require consumers to check a box or provide an electronic signature to confirm that they consent to the terms. Because virtual try-on tools are integral to the beauty industry, cosmetics companies might consider implementing consent mechanisms to continue offering these services. Such mechanisms will not only protect companies from potential liability but will also enable consumers to make informed choices when shopping for beauty products.  
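For illustration, consider what such a consent gate might look like in a browser-based try-on tool. This is a minimal, hypothetical sketch rather than any company’s actual implementation: the disclosure text, the retention period, and the /api/consent endpoint are all assumptions. The pattern, however, tracks BIPA’s three requirements of disclosing what is collected, explaining why and for how long, and obtaining written (here, electronic) consent before any capture begins.

```typescript
// Hypothetical BIPA-style consent gate for a virtual try-on tool.
// All names and the disclosure text are illustrative assumptions.

interface BiometricConsent {
  userId: string;
  disclosureVersion: string; // which disclosure text the user saw
  consentedAt: string;       // ISO timestamp of the electronic consent
}

const DISCLOSURE =
  "This tool scans your facial geometry to render virtual makeup. " +
  "Scans are used only for this try-on session and are deleted " +
  "within 24 hours. Do you consent to this collection?";

async function startTryOn(userId: string): Promise<MediaStream | null> {
  // 1. Disclose and require an affirmative act. A production UI would
  //    use a checkbox plus an electronic signature, not confirm().
  const agreed = window.confirm(DISCLOSURE);
  if (!agreed) return null; // no consent, no camera, no scan

  // 2. Record the consent before any biometric capture occurs.
  const consent: BiometricConsent = {
    userId,
    disclosureVersion: "2025-01",
    consentedAt: new Date().toISOString(),
  };
  await fetch("/api/consent", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(consent),
  });

  // 3. Only now request camera access for the face scan.
  return navigator.mediaDevices.getUserMedia({ video: true });
}
```

The key design choice is ordering: the camera is never requested, and no facial geometry is ever processed, until a consent record exists, which is precisely the step the plaintiffs in the Charlotte Tilbury and E.L.F. suits alleged was missing.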

#BeautyIndustry #BiometricPrivacy #BIPA

How Section 230 Fails to Address the Modern Internet

By: Matthew Bellavia

When asked under oath during one of many congressional hearings, Mark Zuckerberg said:

“Senator, we consider ourselves to be a platform for all ideas.”

While this statement sounds like mere corporate virtue-signaling, it constitutes much more. When Section 230 of the Communications Decency Act was enacted in 1996, the prevailing vision of the internet was a neutral space where users could post ideas—a passive message board. Nearly three decades later, that vision no longer describes the modern internet. Today, social media platforms not only host content but actively control, through proprietary and secret recommendation algorithms, what content users see, which posts go viral, and how widely content spreads. An updated, nuanced legal framework that recognizes the active role platforms play in amplifying content is necessary to improve transparency and accountability.

Section 230 Legal Framework

Section 230 was enacted in response to conflicting court decisions on platform liability. In Cubby, Inc. v. CompuServe, Inc., an online information service that provided subscribers with access to thousands of sites and over 100 forums was found not liable for libel because it did not and could not review content on the forums before it was posted. By contrast, in Stratton Oakmont, Inc. v. Prodigy Servs. Co., an online bulletin board provider was found akin to a publisher because it selectively moderated its content, and it was held liable for defamatory postings. Clearly, there was a need for a more definitive rule. As a result, Congress enacted 47 U.S.C. § 230(c)(1), which states:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The law was designed to encourage internet growth by protecting platforms from liability for user-generated content while allowing them to moderate in good faith. Section 230(c)(2), the “Good Samaritan” provision, specifically protects platforms that remove objectionable content. The statute distinguishes platforms from “information content providers”—those responsible, in whole or in part, for creating or developing information. Platforms materially contributing to content may lose their immunity.

Prominent Cases After Section 230

Courts interpreting Section 230 have generally reinforced broad platform immunity, prioritizing the interests of innovation and free speech at the expense of accountability. In Zeran v. AOL (1997), the plaintiff’s personal contact information was maliciously and repeatedly posted on AOL forums alongside offensive merchandise related to the Oklahoma City bombing. Despite multiple notifications, AOL failed to promptly remove the content, and the plaintiff received death threats and harassment calls. The Fourth Circuit found AOL not liable because of broad Section 230 protection, even after notice was given of the harmful content. 

In contrast, Fair Housing Council v. Roommates.com (2008) held that platforms lose immunity when requiring users to input illegal content, as Roommates.com did by prompting discriminatory preferences. Recently, the Supreme Court in Gonzalez v. Google (2023) declined to rule on whether algorithmic recommendations constitute content development under Section 230.

Algorithmic Promotion and Co-Authorship

Modern platforms do not show content chronologically but algorithmically rank and prioritize posts based on engagement metrics and user behavior. TikTok’s “For You” page curates individualized feeds via machine learning. YouTube’s autoplay and “Up Next” queues automatically recommend videos, and recommendations make up 70% of all views on the site. Facebook similarly uses proprietary signals to prioritize its News Feed.
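To make this concrete, the sketch below shows a deliberately simplified, hypothetical engagement-weighted ranker. It is not any platform’s actual algorithm; the signals and weights are invented for illustration. Its point is structural: someone must decide which signals count and how much, and every one of those constants changes what users see.

```typescript
// Hypothetical engagement-weighted feed ranking (illustrative only).
// The weights below are judgment calls: raising the share weight or
// softening the age penalty changes which posts reach users.

interface Post {
  id: string;
  likes: number;
  comments: number;
  shares: number;
  watchTimeSeconds: number;
  ageHours: number;
}

function score(post: Post): number {
  const engagement =
    1.0 * post.likes +
    3.0 * post.comments +        // comments weighted above likes
    5.0 * post.shares +          // shares weighted highest
    0.1 * post.watchTimeSeconds;
  // Decay older posts so the feed favors fresh, high-velocity content.
  return engagement / Math.pow(post.ageHours + 2, 1.5);
}

function rankFeed(posts: Post[]): Post[] {
  return [...posts].sort((a, b) => score(b) - score(a));
}
```

A chronological feed, by contrast, would sort on ageHours alone; the gap between those two approaches is where the co-authorship debate lives.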

Critics argue that algorithm design reflects editorial choices rather than passive, neutral functions. Sites actively choose which content to amplify based on revenue-driven decisions, which impact the financial interests of the platforms and their respective creators. In response, these sites could argue that personalized ranking and the various tools offered to control content feeds mean users take a more active role in their own curation.

Legal Implications – When Does Immunity Break?

If platforms are found to be co-authors or material contributors, the consequences could be significant. Under Section 230, immunity is lost when a platform is deemed to have helped “create or develop” unlawful content. Courts have struggled with what that means, but algorithmic editing or targeted amplification might tip the scales. One could argue that using algorithms that predictably promote harmful content could constitute content development, especially if the platform profits from the activity. Moreover, platforms monetizing harmful content via advertising may be seen as active participants rather than neutral intermediaries. The Roommates.com decision already established that platforms that require or solicit unlawful content can lose immunity. Could algorithmic design, predictably amplifying harmful content, be the next frontier?

Potential Intermediate Standards

Section 230 is commonly referred to as “The Twenty-Six Words That Created the Internet.” A full repeal of the law would destroy the current online ecosystem. Media companies simply do not have the infrastructure or resources to moderate all the content posted; YouTube, for example, receives 500 hours of uploaded content every minute. The recent adoption and explosion of AI have only added to this problem. Instead of repeal, intermediate reforms could bring the law up to date. For example, the EU’s Digital Services Act already imposes obligations on platforms to mitigate the risks of algorithmic recommendations. Other alternatives include conditioning immunity on algorithmic transparency or limiting immunity for the distribution of harmful content via algorithmic design.

Practicality

Tech companies argue that narrowing Section 230 would cause over-moderation and chill innovation and free speech. This concern aligns with recent industry movements away from proactive moderation and fact-checking. Critics respond that platforms already wield considerable power, touching all aspects of society. Requiring transparency into algorithmic content delivery could help evaluate when platforms cross into co-authorship. However, this is not something media companies are likely to agree to without a fight.

Conclusion

The internet that Section 230 was designed for is long gone. Today, algorithms blur the publisher-platform distinction by enabling sites to curate, promote, and profit from content they choose. While sites provide some tools for users to control their content, they still take a far more active role in curation than the drafters ever could have contemplated in 1996. As litigation around algorithmic content grows, Section 230 must evolve to recognize the active role platforms play in their content to increase transparency and accountability. 

#Section230 #PlatformImmunity #SocialMedia #WJLTA

GenAI in the Courtroom: Transformative Tool or Dangerous Shortcut?

By: Dustin Lennon-Jones

What happens when your legal advocate isn’t human? In an age where generative AI can write essays, compose music, and even mimic human speech with startling realism, some have begun to wonder: can it argue a case in court? As AI-powered services become more sophisticated, the legal world finds itself at a crossroads: how far should generative AI be allowed to go in the courtroom? The complex, evolving relationship between AI and the law raises important questions about the ethical use of AI in the legal community and the fine line between innovation and deception.

GenAI as your lawyer?

Jerome Dewald was representing himself in a dispute with a former employer, but due to his previous battle with throat cancer, he was having trouble articulating himself in court. With his case on appeal to a New York State appellate court, Dewald applied for and was granted permission to play a recorded video in place of standard oral argument. 

However, when the video began, the five-judge panel was instead greeted by a man who looked nothing like Dewald. When one judge asked him if the man was his lawyer, Dewald responded that he had generated the video and that the man in it was not real. Associate Justice Sallie Manzanet-Daniels was not pleased, rebuking Dewald for misleading the court and attempting to promote his own business.

Dewald had utilized the services of Tavus, a San Francisco-based generative AI start-up. The product Dewald attempted to use allows users to upload a video of themselves talking, at which point the program generates a “photo-realistic replica” of the user. This digital alter ego can be fed a script, which it then reads aloud in the user’s voice. However, Dewald was unsuccessful at creating a satisfactory replica of himself and settled on a stock avatar.

Though not utilized by Dewald, Tavus’ other product has the potential to be even more problematic. The conversational video interface (CVI) operates much in the same way as the replica program but also has the ability to engage in a conversation on its own. According to Tavus, the CVI allows a user to build an AI agent that “feel[s] like talking to an actual person.” This means that, theoretically at least, AI agents could be used to participate in oral arguments and respond to a judge’s questioning. 

Legal Framework for the use of AI

Under New York law, parties may appear and participate in civil actions personally or be represented by a licensed attorney. Had Dewald used the CVI software, doing so would clearly have been unlawful: since the AI avatar is neither a party nor a licensed attorney, it cannot appear in court. The use of a script-reading avatar is less clear.

Judge Manzanet-Daniels seemed to most take issue with the fact that the court had been misled by Dewald, telling him that it “would have been nice to know that when you made your application.” This may indicate, as Dr. Adam Wandt has speculated, that if Dewald had disclosed his intent to use an AI avatar, his application would have been denied. 

Some courts have embraced this disclosure approach. One trial court in New York requires that any document filed with the court that was prepared with the use of AI contain a statement identifying the portions drafted by AI and certifying that a human reviewed it for accuracy. The Western District of North Carolina took the opposite stance, requiring a certification that no AI was used in the preparation of court filings. While not requiring disclosure, the Eastern District of Missouri warns litigants that they are responsible for content generated by AI.

As is often the case, new technology is advancing faster than the rules that regulate it. Dewald’s use of the Tavus service is part of a growing trend of AI misuse that underscores the need for guidance. In one such case, two lawyers who used ChatGPT to perform legal research were fined $5,000 after it made up non-existent cases, which they then cited. Michael Cohen, the former personal attorney of Donald Trump, used Google’s AI service Bard, which similarly hallucinated cases that Cohen cited in a motion. And in perhaps the most egregious case, the FTC fined DoNotPay, a legal information and “self-help” company, $193,000 for advertising “robot lawyers” who could replace humans in drafting legal documents. Unsurprisingly, the FTC found the service to be ineffective.

AI is Inevitable

While perhaps lacking a place in the courtroom, AI is becoming an increasingly embraced tool. In a survey of legal professionals by Thomson Reuters, 72% of respondents viewed AI as a force for good in the profession. Half of the responding law firms said that exploring and implementing potential uses of AI was their top priority. The potential benefits could be game-changing. At the current rate of adoption, AI could save an average of 200 hours per person in 2025. This could allow lawyers to spend more time on expertise-driven tasks and business development, or simply reclaim more time for themselves.

The Arizona Supreme Court is leading the way in reaping some of these benefits. In March, it rolled out a new AI spokesperson program using a service similar to the one Dewald used. The justice who authored a given opinion drafts a script, which the AI spokesperson then delivers in a video published to the court’s website, giving the public an easy-to-understand explanation of the results of a case. Court spokesman Alberto Rodriguez said this cut what used to be an hours-long process down to just 30 minutes.

Conclusion

For better or for worse, the Pandora’s box of generative AI is open. Its potential to save lawyers vast amounts of time and increase accessibility for the public has already been demonstrated. However, cases such as Dewald’s and DoNotPay’s serve as an important reminder of the technology’s limitations. Generative AI is a useful tool on a lawyer’s belt, but it is not a replacement for the lawyer itself.

#WJLTA #GenerativeAI #ethics #robotlawyer

Streaming the USA 2026 FIFA World Cup: How Tech Innovations Are Shaping the Viewing Experience

By: Santi Pedraza Arenas

Introduction

The FIFA World Cup, one of the most-viewed events worldwide, is set to return to American soil in 2026. With matches spanning the United States, Canada, and Mexico, excitement is already building. While tens of thousands of fans will pack stadiums from New York to Seattle, millions more will be watching from the comfort of their homes, devices in hand. Because in-person tickets are limited and prices are expected to reflect overwhelming demand, most fans will turn to streaming services to catch every moment of the action. This growing reliance on digital broadcasts has been a driving force behind rapid advancements in streaming technology, pushing developers to create more immersive and seamless viewing experiences than ever before. However, these innovations come with legal and regulatory challenges as companies compete to stake their claims in the evolving digital sports media landscape. This blog will explore the technological advancements shaping the World Cup streaming experience and the legal questions that come with them.

Technology That Brings the Stadium to You

In preparation for 2026, tech companies and broadcasters are making bold moves to deliver a seamless, high-impact digital viewing experience. A standout example is Lenovo’s recent partnership with FIFA as the official technology sponsor. This deal is representative of the industry’s pivot toward high-performance streaming experiences. Lenovo’s involvement goes beyond branding; its hardware, servers, and IT infrastructure will be essential to powering the event’s broadcast backbone, enabling fans around the globe to access ultra-HD streams in 4K and even 8K resolution. More than just delivering clearer video, these streams will feature innovations in real-time overlays, dynamic camera switching, and potentially AI-driven analytics that let viewers interact with the game in ways never before possible.

But these immersive tools go far beyond visual quality. They reflect a growing effort to recreate the social and emotional aspects of live stadium viewing within digital platforms. FIFA is actively exploring “watch together” features that synchronize streams across users in real time, allowing fans to cheer, react, and chat together virtually. These experiences may include live discussion panels, personalized stat overlays, or interactive reaction features that transform passive watching into a shared, social event. However, delivering that level of interactivity introduces not only technical challenges but also complex legal questions. As the line between viewer and participant continues to blur, the legal framework governing digital sports broadcasting is being tested in new and profound ways.
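As a concrete illustration of the technical side, below is a minimal, hypothetical sketch of how a “watch together” feature might keep streams in step. It is not FIFA’s or any broadcaster’s actual design: it assumes a browser video player, a WebSocket relay shared by the group, and roughly synchronized clocks, all of which real systems handle with far more sophistication.

```typescript
// Hypothetical "watch together" synchronization sketch (illustrative).
// A host broadcasts its playback position once per second; viewers
// seek their own players whenever they drift too far out of step.

const DRIFT_TOLERANCE_S = 0.5; // max allowed drift between viewers

interface SyncMessage {
  positionSeconds: number; // host's playback position
  sentAt: number;          // host clock, ms since epoch (assumes synced clocks)
}

function runHost(video: HTMLVideoElement, socket: WebSocket): void {
  setInterval(() => {
    const msg: SyncMessage = {
      positionSeconds: video.currentTime,
      sentAt: Date.now(),
    };
    socket.send(JSON.stringify(msg));
  }, 1000);
}

function runViewer(video: HTMLVideoElement, socket: WebSocket): void {
  socket.onmessage = (event) => {
    const msg: SyncMessage = JSON.parse(event.data);
    // Estimate where the host is *now*, accounting for transit delay.
    const elapsedSeconds = (Date.now() - msg.sentAt) / 1000;
    const target = msg.positionSeconds + elapsedSeconds;
    if (Math.abs(video.currentTime - target) > DRIFT_TOLERANCE_S) {
      video.currentTime = target; // seek to re-align with the host
    }
  };
}
```

Even this toy version hints at the legal questions that follow: the relay sees every viewer’s activity in real time, and layering chat, reactions, or stat overlays on top of it multiplies the data being collected.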

The Legal Pitfalls of the Streaming Gold Rush 

Behind every technological innovation in streaming lies a competitive race for control over digital rights. Major tech firms and media companies are aggressively pursuing exclusive deals, hoping to capture massive viewership and the accompanying revenue. But exclusivity can come at a price. When Disney, Fox, and Warner Bros. proposed a joint streaming service called Venu Sports, rival companies like FuboTV quickly filed antitrust claims, arguing that the joint venture would unfairly limit competition by locking out smaller players. The backlash was swift and effective, and the resulting legal pressure led to the shelving of the Venu Sports project entirely. Legally, this was not just a business dispute; it raised serious and unresolved questions about what constitutes fair competition in the modern streaming ecosystem. As digital platforms become the new stadiums, the rules that govern them are still being written.

At the same time, the legal challenges do not stop with streaming rights alone. The nature of the content being streamed is evolving, bringing new legal complications. Today’s sports broadcasts are enhanced by real-time data: player tracking, ball trajectory, biometric readings, and even environmental sensor inputs. This data is not simply gathered and stored; it is transformed into live, interactive content. That transformation raises crucial legal questions: Is the data proprietary? If so, who owns it: the league, the broadcaster, the player, or the technology provider? These questions are no longer theoretical. They touch on active issues in data privacy law, intellectual property, and even athletes’ rights. For instance, biometric data collected during a match could be subject to the European Union’s General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), even if the player is performing in a U.S. stadium like Lumen Field. This could restrict the interactive streams designed for fans, but it would also protect players’ rights to benefit from their own data. AI-generated features derived from that data, such as predictive models or auto-generated highlight reels, complicate things further by challenging traditional copyright frameworks, since the content is synthesized rather than authored.

Conclusion

Looking ahead, the future of sports streaming will hinge on how well the industry balances bold innovation with regulatory responsibility. Technologies like augmented reality, virtual reality, and sophisticated AI analytics are likely to become core elements of the viewing experience. As that happens, stakeholders such as tech firms, broadcasters, leagues, and lawmakers will need to collaborate more closely to shape legal structures that support innovation without compromising individual rights. Global regulatory trends, from GDPR in Europe to potential new federal data laws in the United States, will likely steer this conversation. These regulations are expected to clarify what qualifies as “sensitive data,” reinforce consent standards, and ensure transparency in how user and player data are collected and used.

In conclusion, the 2026 FIFA World Cup will not only be a showcase of athletic excellence but also a critical testing ground for the next generation of digital media innovations. The World Cup will invite all stakeholders to engage in an ongoing dialogue about fairness, privacy, and the future of sports entertainment. As we stand on the cusp of this new era, the key challenge will be to harness the power of technology in a manner that elevates the fan experience, drives positive innovation, and upholds the legal and ethical standards necessary to protect both individual rights and the integrity of the game.

#WJLTA #UWLAW #FIFAWorldCup #EntertainmentLaw

Exploring the Legal Implications of Online Therapy with BetterHelp

By: Jonah Haseley

As discussing mental health has become more normalized, advertisements for mental health apps like BetterHelp have become ubiquitous. BetterHelp spent over $100 million advertising its services in 2023 and had over 400,000 users as of the fourth quarter of 2024. With its growing societal footprint, BetterHelp has also faced growing legal scrutiny, particularly regarding privacy concerns.

Privacy Concerns and FTC Investigation

In a 2023 investigation, the Federal Trade Commission (FTC) alleged that BetterHelp shared sensitive patient information with advertisers, despite telling its users that it would never do so. BetterHelp settled with the FTC, but the settlement only applies to users who signed up between 2017 and 2020. Though the settlement was worth $7.8 million, each class member was only entitled to $10. The company did not admit wrongdoing and maintains that it never shared sensitive patient data with advertisers.

In addition to the FTC investigation, BetterHelp has also been subject to multiple class action lawsuits. These lawsuits echo the FTC’s allegations of deceptive privacy practices, and the cases are pending in the U.S. District Court for the Northern District of California.  

Industry Problems

The problem with telehealth data privacy is not limited to BetterHelp. A recent study evaluated 50 telehealth startups and found that 49 of them shared sensitive patient data with big tech companies like Google and Facebook. The shared data includes not only names and email addresses but also prescriptions and answers to medical intake forms.

Governing Laws

With all of these issues in the industry, it is worth considering what laws constrain companies like BetterHelp. Unsurprisingly, the FTC considered BetterHelp’s representation that it was HIPAA (Health Insurance Portability and Accountability Act) compliant to be a deceptive trade practice. One of BetterHelp’s responses was that it recently received its HITRUST (Health Information Trust Alliance) certification. One could forgive a consumer for being confused by these acronyms. HITRUST is not a law; it is a cybersecurity framework adopted by private organizations. HIPAA, by contrast, is a federal statute protecting patient privacy.

While HIPAA is binding, it does not include a private cause of action, which would allow patients harmed by the unlawful release of their information to sue under the statute. But alternatives to a private cause of action exist. One option is to file a complaint with the Department of Health and Human Services’ Office for Civil Rights (DHHS OCR). However, the White House has been rapidly dismantling civil rights offices throughout the federal government, so it is unclear whether federal investigation will remain a viable option. These closures are not limited to civil rights offices, either; for example, the White House recently closed the entire Seattle Regional Office of the DHHS. Without a private cause of action and with administrative remedies threatened, consumers face increasing risks to their health data.

Potential Benefits of Telehealth

While the implications of tech companies using confidential information obtained from therapy to target advertisements are terrifying, BetterHelp makes its case to the public that its service is a social good. The company argues that online therapy increases access for those in rural areas and makes treatment more financially accessible. Assuming the company is correct, the alternative to telehealth for some may be no treatment at all, which for those in need is not much of a choice. The status quo stems from the federal government’s failure to meaningfully regulate the industry. While comprehensive government regulation may not be feasible, a more realistic approach would be to give consumers a legal remedy for the misuse of their data, incentivizing the industry to change its behavior.

#BetterHelp #telehealth #therapy