#Sponsored or #Deceptive? Understanding the FTC’s Rule on Influencer Ads

By: Penny Pathanaporn

Introduction

Have you ever noticed the endless stream of brand endorsements flooding your social media feed? Maybe you’d never even considered buying that gadget or outfit, but after watching a few influencer hauls and product reviews, you suddenly find yourself engaging in overconsumption. 

While endorsement content may be enticing enough to make you click “add to cart,” it also raises important questions: just how truthful—and lawful—are these advertisements? To answer that question, we must examine the legislation that governs marketing practices, as enforced by the Federal Trade Commission (FTC).

Legal Framework and FTC Authority 

Under Section 5 of the Federal Trade Commission Act (15 USC 45), commercial entities are prohibited from engaging in ‘‘unfair or deceptive acts or practices . . . .’’ According to the FTC, a practice is considered deceptive if three elements are met: (1) there is “a representation, omission or practice that is likely to mislead the consumer,” (2) the representation, omission or practice is directed toward a “consumer [who is] acting reasonably,” and (3) the representation, omission, or practice is likely to impact the consumer’s decision regarding the product. 

In an effort to further regulate deceptive marketing practices, the FTC implemented a new rule on August 14, 2024: the Trade Regulation Rule on the Use of Consumer Reviews and Testimonials. Under this new rule, commercial entities are prohibited from, among other practices, “selling or purchasing fake consumer reviews or testimonials, buying positive or negative consumer reviews . . . [, and] creating a company-controlled review website that falsely purports to provide independent reviews . . . .” In addition, the rule bars “insiders [from] creating consumer reviews or testimonials without clearly disclosing their relationships.” 

Given how readily traditional advertising has evolved into influencer marketing, it is no surprise that the FTC introduced this rule to directly address the shift in modern-day promotional tactics.

Revolve Class Action Lawsuit

Within the last month or so, Revolve—a fashion retailer—has been hit with a $50 million class action lawsuit. The plaintiffs allege that Revolve’s marketing practices do not comply with Section 5 of the Federal Trade Commission Act (15 USC 45). In the lawsuit, the plaintiffs claim that Revolve allowed influencers to promote its products on social media without disclosing that these endorsements were paid partnerships, or that the influencers had preexisting relationships with the brand. The plaintiffs further claim that this marketing approach misled consumers into making purchases they might have reconsidered if they had known about the true nature of the endorsements.

Shein Class Action Lawsuit 

Similar to Revolve, Shein—a fast fashion retailer with a global online presence—was recently named in a class action lawsuit alleging violations of the Federal Trade Commission Act. The plaintiffs allege that Shein paid influencers to endorse its products on social media without clearly disclosing their financial relationships with the brand. 

According to the lawsuit, Shein allegedly depended on these influencers to portray themselves as regular shoppers or genuine supporters of the brand. This marketing strategy allegedly involved concealing sponsorship disclosures within hashtag-heavy captions or leaving the disclosures out altogether. Like the plaintiffs in the Revolve lawsuit, the plaintiffs here assert that they would have reconsidered their Shein purchases had they been aware of the true nature of the endorsements.  

FTC Guidance: What Brands and Businesses Can Do to Prevent Liability 

In direct response to the rise in influencer marketing, the FTC has published guidelines on how brands and influencers can collaborate while ensuring compliance with U.S. consumer protection laws. Per the guidelines, the FTC advises influencers to always “disclose when [they] have any financial, employment, personal, or family relationship with a brand.” This means that, whether the influencer was paid to promote the brand or merely gifted free products, the influencer must still make the appropriate disclosures to remain legally compliant. 

In regard to disclosure placements, the FTC emphasizes that disclosures should be easily noticeable by consumers. For example, the FTC discourages placing the disclosures within a list of hashtags or links; instead, disclosures should appear directly alongside the message of endorsement. For video content, the FTC recommends including disclosures in the video itself in addition to the accompanying caption. As for language, disclosures should be written in clear and simple terms—ranging from direct acknowledgments of brand partnerships to shorter hashtags like “#sponsored” or “#ad.”

Lastly, it is crucial for brands and influencers alike to understand that, although brand endorsements may be published abroad, U.S. consumer protection laws will still apply if it is “reasonably foreseeable that the post will affect U.S. consumers.” 

Conclusion

Influencer marketing represents a modern form of advertising—one that is both highly accessible and incredibly personal, blurring the line between genuine content and paid promotion. Left unchecked, influencer marketing—which involves consistent and personal engagement with consumers—can easily distort consumer purchasing decisions. The FTC’s new rule and guidelines help protect consumer rights while giving companies and influencers the freedom to develop their brands, honor their creativity, and grow their businesses.

#FTC #SocialMediaMarketing #AdDisclosure #WJLTA

Is the Take It Down Act Enough?

By: Lindsey Vickers

Last week, legislators united to pass the Take It Down Act, a bill introduced by Texas Senator Ted Cruz. The bill’s next stop will be President Trump’s desk, where many expect he will sign it into law. (Melania is certainly rooting for the bill.) 

But in a world of deepfakes and alternative realities, is the Take It Down Act truly enough to protect the public from the negative consequences of deepfakes? 

What is the Take It Down Act? 

The Take It Down Act is federal legislation that mirrors a growing number of state laws. The law aims to ban “nonconsensual online publication of intimate visual depictions of individuals.” 

You might have read that and thought, “huh?” Yeah, me too. In essence, the Take It Down Act criminalizes what have come to be known as “deepfakes.” This internet-era term refers to computer-generated media that depict things that didn’t actually happen or that distort things that did happen into a twisted, alternate reality. Deepfakes can take the form of audio, video, or images—all types of media that spread easily online. 

The worst part? Deepfakes can be virtually indistinguishable from reality. 

While the term “deepfake” dates back to 2017, the issue only reached a real boiling point when Taylor Swift became the subject of a deepfake scandal just over a year ago. Users took to a Microsoft AI technology to create nonconsensual deepfake nude images of Swift. With Swift’s touch of star power, the depictions of her, and other nonconsensual deepfakes, became a hot national issue. 

But people across the country have been the subject of non-consensual deepfake porn, ranging from high school teens to a woman whose image was manipulated into porn by a coworker.

However, this is not the only nefarious use of deepfakes. Audio deepfakes, for example, have been used to scam people out of millions of dollars, automate realistic robocalls, and even demand ransom from unsuspecting parents.

What Does The Take It Down Act Do? 

The Take It Down Act’s purview is limited to deepfake pornography, mirroring state laws. It does not provide a cause of action, or a way for people to bring a legal claim, for all deepfakes. Instead, it criminalizes only nonconsensual “intimate imagery,” which essentially means pornographic or nude images—making it a crime to share, or threaten to share, such images. 

The act puts the onus to remove nonconsensual intimate depictions on the websites where the depictions are posted. However, the burden to remove them only kicks in after the website owner receives notice (aka a person has complained about a nonconsensual deepfake). 

Why Doesn’t the Take It Down Act Conflict with Section 230?  

In general, providers of interactive internet services are exempt from liability for posts by third parties, which includes deepfakes like those targeted by the Take It Down Act. This promotes innovation and protects the free internet.

But there are a couple of caveats. That’s where Section 230 comes in. Section 230 was enacted in 1996 as part of the Communications Decency Act. The act initially aimed to regulate obscene speech online, but most of it was quickly struck down by the Supreme Court as overbroad and violative of the First Amendment.

Section 230, though, remained intact. This section broadly exempts websites and internet service providers from civil liability—things like torts—for content posted by third parties. So you can’t sue a website provider for defamatory statements posted by another person; you couldn’t, say, go after Yelp for someone else’s bad review of your restaurant. 

However, Section 230 contains an exception for crimes. This means that internet service providers can still be held liable for criminal activity on their platforms. For example, a classifieds list or housing search engine could be held liable for violating the Fair Housing Act by asking users to answer questions that may be discriminatory. 

The Take It Down Act makes posting nonconsensual deepfakes a crime, meaning it falls under the exception carved out of Section 230. 

Is the Take It Down Act Enough? 

The Take It Down Act poses some free speech concerns, but it also has other pitfalls. Namely, it targets only one category of nefarious deepfakes: intimate images. The law fails to protect against, for example, someone making a deepfake TikTok inspired by your fit check and using it to spawn a confusing, illegitimate competitor account. 

While this might sound pie-in-the-sky, it’s a legal problem many states have taken steps to solve through what’s called the “Right of Publicity.” These laws protect a person’s persona, which includes their name, image, and likeness. 

Some state Right of Publicity laws, including Washington’s, provide all citizens a right to their name, image, and likeness. Others only offer protections to celebrities, whose persona has commercial value. Regardless, these laws offer a more expansive framework to combat AI deepfakes of all sorts, not just the pornographic kinds. 

So, while the Take It Down Act does put some worries at bay, it leaves many citizens unprotected from non-sexual uses of their identities—despite the presence of a clear alternative through a federal Right of Publicity law.

#takeitdownact #newlegislation #deepfake

Your Face Is a Ticket: The Legal Risks of Facial Recognition at Concerts and Stadiums

By Jonah M. Haseley

Traditionally, people carried paper tickets to concerts, sports games, and other venues. By today’s standards, that seems quaint. But the rise of biometric data, particularly facial recognition technology, is allowing your face to become your ticket. 

The Rise of Facial Recognition in Live Events

Venues are beginning to use facial recognition to expedite entry and augment security. The New York Mets introduced facial recognition at Citi Field, allowing fans to enter without traditional tickets. The NFL has deployed facial recognition to control access to restricted areas within stadiums.

Legal Challenges and Privacy Concerns

While facial recognition offers convenience, it also raises legal and privacy issues. In 2024, a federal judge dismissed a lawsuit against Madison Square Garden, which used facial recognition to identify and ban individuals who were in litigation against the company. The court found that the practice did not violate existing privacy laws, despite calling it “objectionable” in its order dismissing the lawsuit. 

Not all venues have followed this tech trend. In 2023, more than 100 artists and venues pledged to not use facial recognition at their events, citing concerns over civil liberties and privacy. This resistance exemplifies unease within the entertainment industry about biometric surveillance.

Biometric Privacy Laws: A Patchwork of Protections

The legal landscape for biometric data in the U.S. is fragmented. Only a few states—such as Illinois, Texas, and Washington—have enacted comprehensive biometric privacy laws that require informed consent for the collection and use of facial data. Illinois’s Biometric Information Privacy Act (BIPA) is perhaps the strongest of these, allowing private individuals to sue for violations.

However, most states offer no such protections, meaning that depending on where a venue is located, concertgoers may unknowingly surrender biometric data. This lack of consistency leaves fans vulnerable and venues with unclear obligations.

The Need for Transparency and Consent

A major issue with the use of facial recognition technology is the lack of transparency. Venues often fail to disclose that they are collecting biometric data. Venues sometimes obtain consent first, but often the consent is buried in complex terms and conditions.

One notable example occurred at a Taylor Swift concert, where facial recognition was used to scan attendees for known stalkers. Stalking is a real problem, and celebrities like Taylor Swift have legitimate security concerns. But fans were unaware that their images were being captured and analyzed, raising ethical and legal questions about covert surveillance at public events.

The Path Forward

As facial recognition becomes more common at live event venues, lawmakers should enact clear, nationwide rules that protect individuals’ privacy rights and regulate how biometric data is collected, stored, and used. Venues must also take responsibility by being transparent about their practices, obtaining clear, informed consent, and securing the data they collect. People deserve to have their data protected and their privacy respected. By implementing stronger legal protections and ethical standards, we can ensure that attending a concert remains about watching the performance—not about being watched.

#FacialRecognition #PrivacyRights #ConcertTech #BiometricData

Breaking the Game: The Legal Fallout of the EA-FIFA Divorce

By: Santi Pedrazas Arenas

I. Introduction

For nearly three decades, the FIFA video game series stood as both a cultural phenomenon and a revenue juggernaut, melding the world’s most popular sport with cutting‑edge digital technology. Yet on May 10th, 2022, Electronic Arts (EA) and Fédération Internationale de Football Association (FIFA) announced that their long‑running licensing agreement would not be renewed at the end of that year. This departure was far more than a simple rebranding exercise; it reflected a complex tug‑of‑war over intellectual property (“IP”) rights, brand equity, and digital distribution in an age when gaming companies increasingly rival traditional sports institutions in global influence.

Beyond the headlines, the EA‑FIFA breakup offers a rich case study in contract negotiations, trademark strategy, and the evolving contours of digital IP. By examining the key legal fault lines, from licensing fees and player likenesses to trademark dilution and collective bargaining with player unions, we can trace how tech giants assert greater autonomy over digital assets once held by legacy organizations. 

II. Background: A $20 Billion Partnership

EA first partnered with FIFA in 1993, releasing FIFA International Soccer for the Sega Genesis and Super Nintendo Entertainment System. Over the ensuing years, the franchise evolved into EA’s flagship title, particularly following the introduction of FIFA Ultimate Team (FUT) in 2009, a game mode that grew to dominate the company’s monetization strategy. By the time of the split announcement, the FIFA franchise accounted for a significant share of EA’s financial success.

Under the terms of the licensing deal, FIFA granted EA exclusive rights to use its trademark, official competition names (including the World Cup), and related branding elements. In return, reports suggested that annual licensing fees ran into the billions of dollars per World Cup cycle. Meanwhile, EA negotiated separate agreements with player associations (FIFPro), major leagues (Premier League, LaLiga, Bundesliga), and individual clubs to secure likeness rights, kits, and stadiums — a sprawling web of sublicenses that gave the series its authenticity.

This dual‑track licensing approach meant that while FIFA owned the name, EA controlled the experience. As digital distribution overtook physical sales, EA began to question the value of the FIFA trademark itself. The core gameplay, player likenesses, leagues, and clubs that fans cared about were secured through separate agreements and remained intact regardless of the FIFA name. In this context, the branding offered by FIFA was increasingly seen as symbolic rather than essential. For EA, long-term value lay in recurring in‑game revenues from microtransactions and content updates, not in legacy naming rights. This shift in perspective helped set the stage for license renegotiations in 2022.

III. The Licensing Dispute: FIFA vs. EA

At the heart of the breakup lay a disagreement over the value and scope of FIFA’s trademark. Reports indicate that FIFA sought over $1 billion for a renewed naming‑rights deal covering the next World Cup cycle, a figure EA deemed unjustifiable in light of its digital‑first business model. EA countered with a proposal that would have granted it broader rights to digital and streaming content, global mobile distribution, and extended sublicensing flexibility, terms FIFA ultimately refused to grant.

When negotiations collapsed in May 2022, both parties publicly assured fans that the split would be “amicable,” but behind the scenes, lawyers scrambled to untangle overlapping rights before the December 2022 deadline. 

IV. Who Owns the Game? A Legal Anatomy of the Split

A. Trademark Law

Under U.S. and international trademark principles, a mark grants its owner the exclusive right to use a brand identifier in commerce. FIFA’s insistence on preserving exclusive control over “FIFA” threatened to limit EA’s ability to leverage the brand in new digital arenas. In contrast, EA holds registered trademarks for “EA Sports,” “FUT,” and related subbrands. The split has tested consumer confusion doctrine, raising important questions about whether fans can distinguish EA Sports FC from FIFA-branded games, and whether EA’s longstanding association with the mark will dilute FIFA’s goodwill. 

B. Collective Licensing & Player Likenesses

Crucially, EA’s separate agreements with FIFPro conferred rights to more than 17,000 player likenesses, independent of the FIFA deal. This collective bargaining arrangement allowed EA to continue featuring top athletes even as the FIFA name disappeared. From a contract‑law perspective, these parallel licenses insulated EA against the fallout of a single counterparty walkout, showcasing a best practice in risk diversification for IP‑heavy ventures.

V. Conclusion

The end of the EA‑FIFA partnership marks more than the sunset of an era; it signals a tectonic shift in how IP, branding, and digital distribution intersect in sports entertainment. By dissecting the legal anatomy of the split, from the high‑stakes trademark negotiations and contract‑law intricacies, we glimpse the future battlegrounds where tech companies and traditional institutions will fight for control. As virtual sports become ever more immersive and monetized, law will play a pivotal role in defining the balance of power. Can governing bodies adapt to digital‑first licensing models? And will new stars emerge amid the legal skirmishes over fan engagement and metaverse extensions? For lawyers, technologists, and gamers alike, the story of EA Sports FC versus FIFA is just the opening whistle in a game whose final outcome remains to be determined.

Hargis v. Pacifica: The Case with Potential to Shape AI’s Legal Future 

By: Miranda Glisson

The internet has made it incredibly easy for people to find, copy, and paste others’ photography. But what are the legal protections available for photographers? How likely is it that artists, well-known or novice, can find every unlawful use of their copyrighted work? In a groundbreaking case, photographer Scott Hargis made history with a record-setting damages award for the unauthorized use of his photographs. 

Introduction 

Hargis is an architecture and interiors photographer, living in the San Francisco Bay Area, with worldwide clientele. Hargis was hired by Atria Management Company to take photos of several senior living facilities. Another company, Pacifica Senior Living Management, then acquired the senior living properties from Atria and used 42 of Hargis’s photos depicting these properties on its website, without obtaining Hargis’ permission. Hargis’ agent informed Pacifica that those photo licenses were not transferable from Atria to Pacifica, and representatives of Hargis asked Pacifica to take Hargis’ images off of its website. However, Pacifica refused on multiple occasions, and Hargis brought suit against Pacifica for copyright infringement.

Willful Copyright Infringement 

Statutory damages are damages awarded by a judge or jury in a copyright infringement suit to a copyright owner. The amount of statutory damages awarded to a copyright owner when copyright infringement is found depends on whether the infringement is considered innocent or willful. A court may find innocent infringement when the defendant, or infringer, can demonstrate they were “not aware and had no reason to believe that the activity constituted an infringement.” However, innocent infringement cannot be found when there was a proper copyright notice on the work, as in Hargis v. Pacifica Senior Living Management. 

Willful infringement does not require that the defendant have actual knowledge of their infringing actions. Rather, it requires only a showing by a preponderance of the evidence that the infringer “acted with reckless disregard for, or willful blindness to the copyright holder’s rights.” If copyright infringement is found and determined to be innocent infringement, the statutory maximum is $30,000 per copyrighted work infringed upon. The statutory maximum for willful infringement is much larger at $150,000 per copyrighted work. However, even if willful infringement is found, the fact-finder must still determine the amount of statutory damages to award the plaintiff, between the minimum of $750 (also the minimum for innocent infringement) and the maximum of $150,000 for willful infringement. 

Hargis v. Pacifica Senior Living Management: $6.3 Million Jury Verdict 

Legitimate copyright infringement cases often end in settlement instead of going to trial. However, in Hargis v. Pacifica Senior Living Management, Pacifica refused to settle, leading to one of the largest jury verdicts ever for copyright infringement of photographs. A jury in the United States District Court for the Central District of California found that Pacifica infringed all 42 of Hargis’ photographs. The evidence supported a finding of willful infringement because Pacifica ignored Hargis’ request for payment and refused to take the photos off its website for a year and a half after the suit was filed. The jury found each infringement willful and awarded the maximum statutory amount of $150,000 for each of the 42 photographs, resulting in a $6.3 million verdict.
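The arithmetic behind the verdict is simple per-work multiplication. A minimal sketch in Python (the dollar figures come from the statutory ranges discussed above; the function name is my own, chosen for illustration):

```python
# Statutory damages per infringed work under U.S. copyright law:
STATUTORY_MIN = 750        # floor for any infringement (including innocent)
INNOCENT_MAX = 30_000      # ceiling for ordinary (non-willful) infringement
WILLFUL_MAX = 150_000      # ceiling when infringement is found willful

def statutory_damages(works_infringed: int, per_work_award: int) -> int:
    """Total statutory damages for a given per-work award."""
    if not STATUTORY_MIN <= per_work_award <= WILLFUL_MAX:
        raise ValueError("per-work award must fall within the statutory range")
    return works_infringed * per_work_award

# The Hargis jury awarded the willful maximum for each of 42 photographs:
total = statutory_damages(42, WILLFUL_MAX)
print(f"${total:,}")  # $6,300,000
```

Had the jury instead awarded only the $750 floor per photograph, the same 42 works would have yielded just $31,500, which shows how much discretion the fact-finder has within the statutory range.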

Protection of Photographers Works in the Growing World of AI 

In 2019, Copytrack, a global company that enforces image rights, investigated how many photographers’ images are stolen on the internet. They estimated that more than 2.5 billion images are stolen daily. Hargis v. Pacifica Senior Living Management demonstrates how seriously U.S. courts view the infringement of photographs and the financial consequences that unlawful uses of copyrighted works can bring. Now, in the ever-growing world of AI, more lawsuits are emerging in which copyright owners claim that AI companies infringed their copyrights by using the owners’ images to train AI models.  

With the favorable result for Hargis and his images, willful use of copyrighted images has the potential to cost AI companies millions, maybe billions, as AI models reportedly need to see between 200 and 600 images of a particular concept before they can replicate it. Further, training a model from scratch, or fine-tuning one, can still require thousands of data points. With so many works used to train AI models, developers of these models could be on the hook for massive damages depending on how willful the use of the copyrighted work is found to be. How courts and companies will approach this problem in the future is unknown; however, it has the potential to cause ripple effects in AI development.
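To give a sense of scale, here is a back-of-the-envelope sketch of statutory-damages exposure for a training set. The 1,000-image figure is purely hypothetical, and actual liability would depend on how many registered works are found infringed, whether the use is deemed fair or willful, and many other factors:

```python
# Per-work statutory ceilings (see the damages discussion above):
INNOCENT_MAX = 30_000    # non-willful infringement
WILLFUL_MAX = 150_000    # willful infringement

def exposure_range(works_infringed: int) -> tuple[int, int]:
    """Rough (low, high) statutory-damages exposure if every work
    is awarded the innocent vs. willful per-work maximum."""
    return works_infringed * INNOCENT_MAX, works_infringed * WILLFUL_MAX

# Hypothetical: a model fine-tuned on 1,000 registered copyrighted images
low, high = exposure_range(1_000)
print(f"${low:,} to ${high:,}")  # $30,000,000 to $150,000,000
```

Even at this modest hypothetical scale, the exposure reaches nine figures, which is why the willfulness finding in Hargis matters so much to AI developers.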

Conclusion 

Hargis v. Pacifica Senior Living Management sets a powerful precedent for protecting photographers’ rights in the growing digital era and demonstrates the severe financial consequences of infringement, especially willful infringement. Photographs and other copyrighted works remain exposed to misuse, and as courts begin to evaluate AI’s use of copyrighted material, the lessons from Hargis v. Pacifica Senior Living Management may play an instrumental role in decision-making and serve as a warning to infringers.