Kids Are Hooked On Microtransactions – Now What?

By Wolf Chivers

Predatory Game Design

Did you know that video game addiction is a condition recognized by the World Health Organization? As more and more games have implemented microtransactions, countries around the world have started considering whether those games should be regulated as a form of gambling. Certainly, some people spend incredible amounts of money on in-game microtransactions, especially in the form of loot boxes that provide randomized in-game benefits in exchange for real money. So when parents hear that their kids are potentially getting addicted to video game gambling, what is likely to happen? Lawsuits—lots and lots of lawsuits.

Identifying Claims

What exactly aggrieved parents might claim in those lawsuits, however, is not as clear as it might seem at first glance. Many countries in the world are considering regulating loot boxes as some form of gambling, but have not yet explicitly done so, which narrows the options for parents. What is left? Is it negligence on the part of the game companies? Maybe one could argue that the companies have ignored known risks in their design, but the companies are not making these games by reckless mistake. The core of some of the lawsuits is that the companies are intentionally making the games as addictive as possible. If so, it seems like some sort of intentional tort. However, most of the classic intentional torts that someone might come up with at first glance—assault, battery, trespass, and the like—do not seem intuitively to fit. 

If the claim is not negligence and not obviously an intentional tort, it might seem to leave plaintiffs in an awkward spot. The games represent a peculiar intersection between the fact that the games are fun, widely-enjoyed activities that are harmless in moderation, and the fact that they are also designed to be addictive and can cause great harm when abused. In short, the game practices have many of the same issues as conventional gambling, target a much younger demographic, and lack equal regulatory oversight. If game companies say “This isn’t gambling; all we did is make a fun game,” are there legal theories plaintiffs could still use? There are at least two that the plaintiffs are currently claiming—that the games are defectively designed and that the companies failed to provide adequate warnings of the safety risks. 

Design Defect

The strongest claim available to the plaintiffs in these lawsuits may be in product liability; that is, alleging that the games were defectively designed. A design defect is a flaw in a product that was produced and works exactly as intended (in contrast to a manufacturing defect), yet nonetheless harms consumers. To show a design defect, the plaintiffs must at least show that the product posed a foreseeable risk of danger to a consumer using it for its intended purpose. That much is likely easy; the possible harms of video game addiction are wide-ranging and encompass everything from financial harm to long-term mental health challenges.

In some jurisdictions, however, the plaintiffs also must show that there was a reasonable alternative design, both practically and economically, which is potentially more difficult. If the companies have designed the games to be as addictive as possible, and therefore bring in as much money as possible, any less-addictive alternative designs are logically likely to bring in less money. There are, however, at least some alternatives that seem eminently reasonable; Epic Games settled with the FTC over allegations that, among other things, Fortnite players were tricked into spending money simply by confusing button layouts. Additionally, even if the games were to adopt less-addictive designs, they would likely still be worth billions of dollars, which certainly seems to indicate that they could be less addictive without being significantly less economically viable.

Failure to Warn

Game companies may rebut a design defect argument by suggesting that the games’ designs are not defective or inherently harmful—there are plenty of people who play them safely, and consumers presumably want the games to be as fun as possible. The addictiveness, the argument might run, is a feature, not a bug, distinct from something like a brake failure in a vehicle line. If so, plaintiffs could fall back on a claim that the companies failed to warn consumers of the inherent risks.

A failure to warn claim alleges that the company failed to provide adequate warnings about the risks of using a product, or instructions on how to do so safely. Game companies are no strangers to the process of supplying warnings; for instance, game consoles already have extensive, in-depth Health and Safety sections in their manuals that cover a broad range of known risks. If the plaintiffs can show that the game companies intentionally designed their games to be addictive, the failure to provide warnings to that effect supports a strong claim.

The two common defenses in failure to warn cases are that the risk was obvious, or that the misuse was unforeseeable, neither of which seems likely to work to the game companies’ benefit. First, at least in some cases the purpose of the design was to trick people into spending money, in which case at least some of the risks are patently non-obvious. Second, the use to which the consumers put the games, i.e. playing them, was not only entirely foreseeable, it was the whole point. A failure to warn claim thus seems likely to succeed.

Conclusion

Even though design defects and failures to warn seem like strong claims, they also seem vaguely unsatisfying given the degree of harm and manipulation involved. At least one man spent over $16,000 in approximately a year and may have ruined his life. Anyone might easily feel wronged if they later discovered the games were designed to suck that much money out of them, and that example involved a mature adult. If, as alleged, game companies are intentionally getting impressionable children addicted to their games to milk as much money out of them and their parents as humanly possible, with no regard for the harms it may cause or the long-term damage to the children, lawsuits over whether there were enough warnings in the manual seem trite.

Given the current state of regulation, however, it is not entirely clear what else those impacted by the games’ predatory designs ought to do. There is also no easy answer for regulators, since the games likely should not be outlawed entirely. Even so, at a bare minimum, children should not be the target audience. Minimum age laws akin to restrictions on casinos or other forms of gambling would at least reduce the risk to some of the most vulnerable, and might provide a clear legal framework for plaintiffs to use if game companies continue targeting children.

#WJLTA #videogames #microtransactions #lootboxes #gambling #productliability #addiction

#Sponsored or #Deceptive? Understanding the FTC’s Rule on Influencer Ads

By: Penny Pathanaporn

Introduction

Have you ever noticed the endless stream of brand endorsements flooding your social media feed? Maybe you’d never even considered buying that gadget or outfit, but after watching a few influencer hauls and product reviews, you suddenly find yourself engaging in overconsumption. 

While endorsement content may be enticing enough to make you click “add to cart,” it also raises important questions: just how truthful—and lawful—are these advertisements? To answer that question, we must examine the legislation that governs marketing practices, as enforced by the Federal Trade Commission (FTC).

Legal Framework and FTC Authority 

Under Section 5 of the Federal Trade Commission Act (15 U.S.C. § 45), commercial entities are prohibited from engaging in “unfair or deceptive acts or practices . . . .” According to the FTC, a practice is considered deceptive if three elements are met: (1) there is “a representation, omission or practice that is likely to mislead the consumer,” (2) the representation, omission, or practice is directed toward a “consumer [who is] acting reasonably,” and (3) the representation, omission, or practice is likely to impact the consumer’s decision regarding the product.

In an effort to further regulate deceptive marketing practices, the FTC implemented a new rule on August 14, 2024: the Trade Regulation Rule on the Use of Consumer Reviews and Testimonials. Under this new rule, commercial entities are prohibited from, among other practices, “selling or purchasing fake consumer reviews or testimonials, buying positive or negative consumer reviews . . . [, and] creating a company-controlled review website that falsely purports to provide independent reviews . . . .” In addition, the rule bars “insiders [from] creating consumer reviews or testimonials without clearly disclosing their relationships.”

Given how readily traditional advertising has evolved into influencer marketing, it is no surprise that the FTC introduced this rule to directly address the shift in modern-day promotional tactics.

Revolve Class Action Lawsuit

Within the last month or so, Revolve—a fashion retailer—has been hit with a $50 million class action lawsuit. The plaintiffs allege that Revolve’s marketing practices do not comply with Section 5 of the Federal Trade Commission Act. In the lawsuit, the plaintiffs claim that Revolve allowed influencers to promote its products on social media without disclosing that these endorsements were paid partnerships, or that the influencers had preexisting relationships with the brand. The plaintiffs further claim that this marketing approach misled consumers into making purchases they might have reconsidered had they known about the true nature of the endorsements.

Shein Class Action Lawsuit 

Similar to Revolve, Shein—a fast fashion retailer with a global online presence—was recently named in a class action lawsuit alleging violations of the Federal Trade Commission Act. The plaintiffs allege that Shein paid influencers to endorse its products on social media without clearly disclosing their financial relationships.

According to the lawsuit, Shein allegedly depended on these influencers to portray themselves as regular shoppers or genuine supporters of the brand. This marketing strategy allegedly involved concealing sponsorship disclosures within hashtag-heavy captions or leaving the disclosures out altogether. Like the plaintiffs in the Revolve lawsuit, the plaintiffs here assert that they would have reconsidered their Shein purchases had they been aware of the true nature of the endorsements.  

FTC Guidance: What Brands and Businesses Can Do to Prevent Liability 

In direct response to the rise in influencer marketing, the FTC has published guidelines on how brands and influencers can collaborate while ensuring compliance with U.S. consumer protection laws. Per the guidelines, the FTC advises influencers to always “disclose when [they] have any financial, employment, personal, or family relationship with a brand.” This means that, whether the influencer was paid to promote the brand or merely gifted free products, the influencer must still make the appropriate disclosures to remain legally compliant. 

In regard to disclosure placements, the FTC emphasizes that disclosures should be easily noticeable by consumers. For example, the FTC discourages placing the disclosures within a list of hashtags or links; instead, disclosures should appear directly alongside the message of endorsement. For video content, the FTC recommends including disclosures in the video itself in addition to the accompanying caption. As for language, disclosures should be written in clear and simple terms—ranging from direct acknowledgments of brand partnerships to shorter hashtags like “#sponsored” or “#ad.”

Lastly, it is crucial for brands and influencers alike to understand that, although brand endorsements may be published abroad, U.S. consumer protection laws will still apply if it is “reasonably foreseeable that the post will affect U.S. consumers.” 

Conclusion

Influencer marketing represents a modern form of advertising—one that is both highly accessible and incredibly personal, blurring the line between genuine content and paid promotion. Left unchecked, influencer marketing, with its constant and personal engagement with consumers, can easily distort purchasing decisions and fuel overconsumption. The FTC’s new rule and guidelines help protect consumer rights while giving companies and influencers the freedom to develop their brands, honor their creativity, and grow their businesses.

#FTC #SocialMediaMarketing #AdDisclosure #WJLTA

Is the Take It Down Act Enough?

By: Lindsey Vickers

Last week, legislators united to pass the Take It Down Act, a bill introduced by Texas Senator Ted Cruz. The bill’s next stop will be President Trump’s desk, where many expect he will sign it into law. (Melania is certainly rooting for the bill.)

But in a world of deepfakes and alternative realities, is the Take It Down Act truly enough to protect the public from the negative consequences of deepfakes? 

What is the Take It Down Act? 

The Take It Down Act is federal legislation that mirrors a growing number of state laws. The law aims to ban “nonconsensual online publication of intimate visual depictions of individuals.” 

You might have read that and thought, “huh?” Yeah, me too. In essence, the Take It Down Act is criminalizing what have come to be known as “deepfakes.” This internet-era term refers to computer-generated media that depict things that didn’t actually happen or that distort things that did happen into a twisted, alternate reality. Deepfakes can be audio, video, or images—in essence, many types of media that easily spread online. 

The worst part? Deepfakes can be virtually indistinguishable from reality. 

While the term “deepfake” dates back to 2017, the issue only reached a real boiling point when Taylor Swift became the subject of a deepfake scandal just over a year ago. Users exploited a Microsoft AI tool to create nonconsensual deepfakes of Swift in the form of nude images. With Swift’s star power, the depictions of her, and nonconsensual deepfakes more broadly, became a hot national issue.

But people across the country have been the subjects of nonconsensual deepfake porn, ranging from high school teens to a woman whose image was manipulated into porn by a coworker.

Pornography, however, is not the only nefarious use of deepfakes. Audio deepfakes, for example, have been used to scam people out of millions of dollars, automate realistic robocalls, and even demand ransom from unsuspecting parents.

What Does The Take It Down Act Do? 

The Take It Down Act’s purview is limited to deepfake pornography and mirrors state laws. It does not provide a cause of action (a way for people to bring a legal claim) for all deepfakes. Instead, it only criminalizes nonconsensual “intimate imagery,” which essentially means pornographic or nude images. In essence, it criminalizes the act of sharing, or threatening to share, nonconsensual intimate images.

The act puts the onus to remove nonconsensual intimate depictions on the websites where the depictions are posted. However, the burden to remove them only kicks in after the website owner receives notice (that is, after a person has complained about a nonconsensual deepfake).

Why Doesn’t the Take It Down Act Conflict with Section 230?  

In general, providers of interactive internet services are exempt from liability for posts by third parties, including deepfakes like those targeted by the Take It Down Act. This promotes innovation and protects the free internet.

But there are a couple of caveats, and that’s where Section 230 comes in. Section 230 was enacted in 1996 as part of the Communications Decency Act. The act initially aimed to regulate obscene speech online, but most of it was quickly struck down by the Supreme Court as overbroad and in violation of the First Amendment.

Section 230, though, remained intact. It exempts websites and internet service providers only from civil liability, or things like torts. So you can’t sue a website provider for defamatory statements posted by another person; you couldn’t, say, go after Yelp for another person’s bad review of your restaurant.

However, Section 230 contains an exception for crimes. This means that internet service providers can still be held liable for criminal activity on their platforms. For example, a classifieds site or housing search engine could be held liable for violating the Fair Housing Act by asking users to answer questions that may be discriminatory.

The Take It Down Act makes posting nonconsensual deepfakes a crime, meaning it falls under the exception carved out of Section 230. 

Is the Take It Down Act Enough? 

The Take It Down Act poses some free speech concerns, but also has other pitfalls. Namely, it only targets one category of nefarious deepfakes: intimate images. The law fails to provide protections against people making a deepfake TikTok inspired by your fit check, for example, and using it to spawn a confusing, illegitimate competitor account. 

While this might sound pie-in-the-sky, it’s a legal problem many states have taken steps to solve through what’s called the “Right of Publicity.” These laws protect a person’s persona, which includes their name, image, and likeness.

Some state Right of Publicity laws, including Washington’s, provide all citizens a right to their name, image, and likeness. Others only offer protections to celebrities, whose persona has commercial value. Regardless, these laws offer a more expansive framework to combat AI deepfakes of all sorts, not just the pornographic kinds. 

So, while the Take It Down Act keeps some worries at bay, it leaves many citizens unprotected from non-sexual uses of their identities, despite a clear path forward in the form of a federal Right of Publicity law.

#takeitdownact #newlegislation #deepfake

Your Face Is a Ticket: The Legal Risks of Facial Recognition at Concerts and Stadiums

By Jonah M. Haseley

Traditionally, people carried paper tickets to concerts, sports games, and other venues. By today’s standards, that seems quaint. But the rise of biometric data, particularly facial recognition technology, is allowing your face to become your ticket. 

The Rise of Facial Recognition in Live Events

Venues are beginning to use facial recognition to expedite entry and augment security. The New York Mets introduced facial recognition at Citi Field, allowing fans to enter without traditional tickets. The NFL has deployed facial recognition to control access to restricted areas within stadiums.

Legal Challenges and Privacy Concerns

While facial recognition offers convenience, it also raises legal and privacy issues. In 2024, a federal judge dismissed a lawsuit against Madison Square Garden, which used facial recognition to identify and ban individuals who were in litigation against the company. The court found that the practice did not violate existing privacy laws, despite calling it “objectionable” in its order dismissing the lawsuit. 

Not all venues have followed this tech trend. In 2023, more than 100 artists and venues pledged to not use facial recognition at their events, citing concerns over civil liberties and privacy. This resistance exemplifies unease within the entertainment industry about biometric surveillance.

Biometric Privacy Laws: A Patchwork of Protections

The legal landscape for biometric data in the U.S. is fragmented. Only a few states—such as Illinois, Texas, and Washington—have enacted comprehensive biometric privacy laws that require informed consent for the collection and use of facial data. Illinois’s Biometric Information Privacy Act (BIPA) is perhaps the strongest of these, allowing private individuals to sue for violations.

However, most states offer no such protections, meaning that depending on where a venue is located, concertgoers may unknowingly surrender biometric data. This lack of consistency leaves fans vulnerable and venues with unclear obligations.

The Need for Transparency and Consent

A major issue with the use of facial recognition technology is the lack of transparency. Venues often fail to disclose that they are collecting biometric data. Venues sometimes obtain consent first, but often the consent is buried in complex terms and conditions.

One notable example occurred at a Taylor Swift concert, where facial recognition was used to scan attendees for known stalkers. Stalking is a real problem, and celebrities like Taylor Swift have legitimate security concerns. But fans were unaware that their images were being captured and analyzed, raising ethical and legal questions about covert surveillance at public events.

The Path Forward

As facial recognition becomes more common at live event venues, lawmakers should enact clear, nationwide rules that protect individuals’ privacy rights and regulate how biometric data is collected, stored, and used. Venues must also take responsibility by being transparent about their practices, obtaining clear, informed consent, and securing the data they collect. People deserve to have their data protected and their privacy respected. By implementing stronger legal protections and ethical standards, we can ensure that attending a concert remains about watching the performance—not about being watched.

#FacialRecognition #PrivacyRights #ConcertTech #BiometricData

Breaking the Game: The Legal Fallout of the EA-FIFA Divorce

By: Santi Pedrazas Arenas

I. Introduction

For nearly three decades, the FIFA video game series stood as both a cultural phenomenon and a revenue juggernaut, melding the world’s most popular sport with cutting‑edge digital technology. Yet on May 10th, 2022, Electronic Arts (EA) and Fédération Internationale de Football Association (FIFA) announced that their long‑running licensing agreement would not be renewed at the end of that year. This departure was far more than a simple rebranding exercise; it reflected a complex tug‑of‑war over intellectual property (“IP”) rights, brand equity, and digital distribution in an age when gaming companies increasingly rival traditional sports institutions in global influence.

Beyond the headlines, the EA‑FIFA breakup offers a rich case study in contract negotiations, trademark strategy, and the evolving contours of digital IP. By examining the key legal fault lines, from licensing fees and player likenesses to trademark dilution and collective bargaining with player unions, we can trace how tech giants assert greater autonomy over digital assets once held by legacy organizations. 

II. Background: A $20 Billion Partnership

EA first partnered with FIFA in 1993, releasing FIFA International Soccer for the Sega Genesis and Super Nintendo Entertainment System. Over the ensuing years, the franchise evolved into EA’s flagship title, particularly following the introduction of FIFA Ultimate Team (FUT) in 2009, a game mode that grew to dominate the company’s monetization strategy. By the time of the split announcement, “FIFA 23” accounted for a significant share of EA’s financial success.

Under the terms of the licensing deal, FIFA granted EA exclusive rights to use its trademark, official competition names (including the World Cup), and related branding elements. In return, reports suggested that annual licensing fees ran into the billions of dollars per World Cup cycle. Meanwhile, EA negotiated separate agreements with player associations (FIFPro), major leagues (Premier League, LaLiga, Bundesliga), and individual clubs to secure likeness rights, kits, and stadiums — a sprawling web of sublicenses that gave the series its authenticity.

This dual‑track licensing approach meant that while FIFA owned the name, EA controlled the experience. As digital distribution overtook physical sales, EA began to question the value of the FIFA trademark itself. The core gameplay, player likenesses, leagues, and clubs that fans cared about were secured through separate agreements and remained intact regardless of the FIFA name. In this context, the branding offered by FIFA was increasingly seen as symbolic rather than essential. For EA, long-term value lay in recurring in‑game revenues from microtransactions and content updates, not in legacy naming rights. This shift in perspective helped set the stage for license renegotiations in 2022.

III. The Licensing Dispute: FIFA vs. EA

At the heart of the breakup lay a disagreement over the value and scope of FIFA’s trademark. Reports indicate that FIFA sought over $1 billion for a renewed naming‑rights deal covering the next World Cup cycle, a figure EA deemed unjustifiable in light of its digital‑first business model. EA countered with a proposal that would have granted it broader rights to digital and streaming content, global mobile distribution, and extended sublicensing flexibility, terms FIFA ultimately refused to grant.

When negotiations collapsed in May 2022, both parties publicly assured fans that the split would be “amicable,” but behind the scenes, lawyers scrambled to untangle overlapping rights before the December 2022 deadline. 

IV. Who Owns the Game? A Legal Anatomy of the Split

A. Trademark Law

Under U.S. and international trademark principles, a mark grants its owner the exclusive right to use a brand identifier in commerce. FIFA’s insistence on preserving exclusive control over “FIFA” threatened to limit EA’s ability to leverage the brand in new digital arenas. In contrast, EA holds registered trademarks for “EA Sports,” “FUT,” and related subbrands. The split has tested consumer confusion doctrine, raising important questions about whether fans can distinguish EA Sports FC from FIFA-branded games, and whether EA’s longstanding association with the FIFA name could dilute FIFA’s goodwill.

B. Collective Licensing & Player Likenesses

Crucially, EA’s separate agreements with FIFPro conferred rights to more than 17,000 player likenesses, independent of the FIFA deal. This collective bargaining arrangement allowed EA to continue featuring top athletes even as the FIFA name disappeared. From a contract‑law perspective, these parallel licenses insulated EA against the fallout of a single counterparty walkout, showcasing a best practice in risk diversification for IP‑heavy ventures.

V. Conclusion

The end of the EA‑FIFA partnership marks more than the sunset of an era; it signals a tectonic shift in how IP, branding, and digital distribution intersect in sports entertainment. By dissecting the legal anatomy of the split, from the high‑stakes trademark negotiations and contract‑law intricacies, we glimpse the future battlegrounds where tech companies and traditional institutions will fight for control. As virtual sports become ever more immersive and monetized, law will play a pivotal role in defining the balance of power. Can governing bodies adapt to digital‑first licensing models? And will new stars emerge amid the legal skirmishes over fan engagement and metaverse extensions? For lawyers, technologists, and gamers alike, the story of EA Sports FC versus FIFA is just the opening whistle in a game whose final outcome remains to be determined.