Training AI on Trauma: The Exploitation Risks of Child Abuse Imagery in Machine Learning

By Olivia Bravo

In October 2023, Steven Anderegg, a 43-year-old Wisconsinite, was indicted for knowingly producing at least one visual depiction of a minor engaging in sexually explicit conduct. Anderegg allegedly used a text-to-image generative artificial intelligence (GenAI) model called Stable Diffusion to “create thousands of realistic images of prepubescent minors.” In May 2024, Anderegg became the first person in the U.S. criminally charged with generating and distributing AI-created child sexual abuse material (CSAM). His case became a turning point for U.S. authorities, underscoring the legal and ethical challenges AI-generated CSAM poses and highlighting the need for clearer policies and enforcement strategies as regulators continue to grapple with AI-generated explicit content.

What is CSAM?

Child Sexual Abuse Material (CSAM), or “child pornography,” is any visual depiction of sexually explicit conduct involving a person under 18 years old. Due to rapid technological advances, online child sexual exploitation and victimization have increased in scale and complexity. One of the central legal challenges of this new technological age is the use of AI, which is problematic in two ways: (1) offenders can use AI tools to create CSAM, and (2) AI models are being trained on CSAM.

Legal Precedents and Challenges

Under U.S. federal law, CSAM is considered illegal contraband and is not protected under the First Amendment. Statutes such as 18 U.S.C. §§ 2251, 2252, and 2252A criminalize the production, possession, and distribution of such material through any means of interstate or foreign commerce.

However, AI-generated CSAM introduces legal complexities. Because some synthetic images do not involve identifiable victims, they may fall outside the scope of laws written before the advent of generative models. This raises questions about whether such material qualifies as illegal “depictions,” and how harm is defined in the absence of a real child.

To address emerging risks, lawmakers have begun to update and expand relevant legislation.

Despite these efforts, no comprehensive federal framework yet exists to regulate the use of CSAM in AI training datasets or the creation of AI-generated abuse imagery. As the technology rapidly evolves, regulators face growing pressure to close these legal gaps while balancing free expression and innovation.

How AI Changes the Game

What is AI model training, and how is it affected by CSAM? An AI model is a combination of algorithms and the data used to train those algorithms so that the model can make accurate predictions in response to user queries. “AI model training” refers to the process of feeding the model massive amounts of data, examining the results, and adjusting the model to improve its accuracy and efficacy. But what happens when these models are trained on exploitative images of children found in a public dataset?
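
For readers less familiar with machine learning, the sketch below is a minimal, hypothetical illustration of that feedback loop, written in Python with the widely used PyTorch library on made-up toy data rather than any real model’s pipeline. The point it illustrates is simple: whatever examples sit in the training dataset, benign or abusive, are folded into the model through this same repeated cycle.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for a scraped dataset: 1,000 examples with 64 features each.
features = torch.randn(1000, 64)
labels = torch.randint(0, 2, (1000,))
loader = DataLoader(TensorDataset(features, labels), batch_size=32, shuffle=True)

# A tiny model; real text-to-image systems have billions of parameters.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                        # repeated passes over the data
    for batch_x, batch_y in loader:
        predictions = model(batch_x)          # the model is fed the data
        loss = loss_fn(predictions, batch_y)  # the results are examined
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                      # the model is tweaked to improve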

An investigation by the Stanford Internet Observatory (SIO) revealed hundreds of known images of CSAM in an open dataset (LAION-5B) used to train popular AI models such as Stable Diffusion, the same text-to-image generator Steven Anderegg used to create hyper-realistic images of children. This creation of images from text illustrates the power of generative AI. GenAI enables the creation of fake imagery, including synthetic media, digital forgery, and, in this case, CSAM. It allows offenders to create hyper-realistic sexual abuse material depicting the victimization of children, material that can then feed back into the datasets used to train AI models. A July 2024 report by the Internet Watch Foundation (IWF) found that since October 2023 there has been an increase in AI-generated CSAM on the clear web, with more images uploaded onto the dark web and more images in Category A, the most severe category of abuse, indicating that perpetrators are increasingly able to generate complex ‘hardcore’ scenarios. “AI-generated imagery of child sexual abuse has progressed at such an accelerated rate that the IWF is now seeing the first realistic examples of AI videos depicting the sexual abuse of children.”

Conclusion

Steven Anderegg may have been the first person in the U.S. prosecuted for generating AI-created child sexual abuse material, but he will not be the last. His case shows how the technological advances brought on by AI force us to rethink both harm and the accountability we bear as users of these platforms. As generative AI becomes more powerful and accessible, the risk of its misuse to produce, circulate, and train future models on CSAM escalates. For lawmakers, this means crafting forward-looking policies that not only criminalize synthetic abuse content but also prevent its proliferation through stricter oversight of training data and AI development practices.

#CSAM #ChildProtection #AITraining #WJLTA 

A Legal Hail Mary: Georgia Man Sues NFL Over Draft Disappointment

By: Joseph Valcazar

Every year, the National Football League (NFL) hosts its draft, an opportunity for each of the 32 teams to select players who have just finished their time at the collegiate level in hopes of building the next Super Bowl-winning roster. As a result, every year over 200 young men have their professional football dreams realized in what becomes one of the most pivotal moments of their lives. Sometimes teams make a pick that surprises fans and experts — selecting a player earlier or later than expected. Dedicated fans are quick to voice their opinions of these choices, making their thoughts known on social media or to fellow fans, but a fan filing a lawsuit in response to a draft pick would sound outlandish. That is, until this year, when one man filed an intriguing lawsuit against the NFL in the aftermath of the draft, and it revolves around one player: Shedeur Sanders.

Shedeur the Football Player

Shedeur Sanders, the son of NFL Hall of Famer Deion Sanders, made a name for himself as a quarterback during his college career. As the starting quarterback for the University of Colorado, Shedeur helped take a program that had won just one game in 2022 to nine wins and a bowl game appearance in 2024. In his senior season, Shedeur broke multiple school records and won the Johnny Unitas Golden Arm Award, which is given to the nation’s top upperclassman quarterback. This collegiate success showed that he could play the sport at a high level, and a future in the NFL seemed inevitable, leading experts and fans alike to believe Shedeur was a lock to be an early pick in this year’s draft. All of this discourse made the actual events of the draft shocking for everyone.

Draft Disaster

Sports talking heads love to make predictions, especially about where they think specific players will be drafted. For a time, Shedeur was considered a contender for the number one pick. Over time, the general consensus shifted to quarterback Cam Ward being drafted number one overall (which is what ultimately happened). Leading up to the draft, analysts still had Shedeur as the second quarterback to be selected. In early April, just a few weeks before the draft was set to begin, the New Orleans Saints — who held the 9th pick — were the betting favorites to select Shedeur. Even falling to the second round was viewed as a surprising “slip” for the highly discussed prospect.

No one could have predicted what would transpire during the draft. Shedeur was not the second quarterback selected, or the third … or the fourth … or even the fifth; he was the sixth quarterback drafted, selected with pick number 144 in the fifth round (of seven) by the Cleveland Browns. Instantly, questions started being asked: how could this have happened?

For one Shedeur Sanders fan, only one thing could explain what happened … collusion.

The “Lawsuit”

An anonymous man from Georgia has filed a lawsuit in federal court against the NFL, alleging collusive antitrust violations, civil rights violations, and personal emotional distress. To top it all off, he is asking for a formal apology from the NFL and $100 million in damages for the harm the NFL’s actions caused his “emotional well-being.” The basis of these claims? Reportedly leaked statements from NFL personnel that Shedeur “tanked interviews,” “wasn’t prepared,” and was “too cocky” during his pre-draft meetings. Describing these statements as “slanderous,” the John Doe plaintiff believes they reflect the NFL’s bias and intent to harm Shedeur.

At this point, you may be saying to yourself, “Does this all seem a bit ridiculous?” And you would be correct. The claims presented here are shaky at best. The plaintiff likely lacks standing to bring any of the claims presented. Standing requires a party to have some kind of connection to the harm being challenged. Federal courts apply a three-part test to determine a party’s standing:

  • Injury in Fact: The injury suffered is concrete, particularized, and actual or imminent;
  • Causal Connection: The injury is fairly traceable to the defendant’s conduct; and
  • Redressability: It must be likely, not merely speculative, that a favorable decision will redress the injury.

The plaintiff in this instance is likely to struggle to meet the injury-in-fact element. To show that the injury was particularized, the plaintiff would have to show that he was affected in a “personal and individual way.” Neither the antitrust claim nor the civil rights claim meets this standard; there is no personal connection that ties these claims to the anonymous plaintiff. If anyone could bring these types of claims, it would be Shedeur himself, as he would be the one who suffered a particularized injury.

Unsurprisingly, the man’s emotional distress claims also face critical standing issues. While this claim at least relates to a “harm” suffered by the plaintiff, he will be grasping at straws to establish a causal connection between Shedeur’s drop in the draft and any alleged emotional distress. While sports are known for their passionate, die-hard fans — I can admit that my Sunday moods are often influenced by the result of my favorite team’s game — claiming that the spot at which a specific player was drafted (he still made it to the NFL) has resulted in “trauma” and “psychological harm” is a stretch, and one that a federal court is unlikely to be a fan of.

An interesting procedural note is that this lawsuit was filed in forma pauperis (Latin for “as a poor person”), which allows an indigent plaintiff to sue without incurring court costs. This is a request made to the court, which has the discretion to grant or deny it. Typically, the natural next step once a complaint has been filed is for the named defendants to file any available pre-answer motions or an answer. But in an in forma pauperis matter, the court is statutorily required to dismiss a claim at any time if the suit is frivolous or malicious. A lawsuit is frivolous when a claim lacks any basis in law or fact. This means the court could dismiss the lawsuit without any action from the NFL.

The court has granted the plaintiff’s in forma pauperis request. The judge has also directed the clerk to submit the matter for a frivolity determination consistent with the federal statute. Eugene Volokh, a UCLA law professor, believes the court will determine this case to be frivolous, and it’s hard to disagree. The claims are highly speculative and lack concrete, substantiated evidence.

Conclusion

Every once in a while, you see a miracle Hail Mary play at the end of an NFL game. Describing this lawsuit as a Hail Mary would be generous. By all accounts, this case is set up to be tossed without the NFL having to lift a finger. Still, it will make a fun footnote when looking back on one of the most talked-about and controversial NFL draft stories. Shedeur will compete for the starting position this coming season, and for the sake of one Georgia man’s psychological well-being, I hope he wins it.

#NFLDraft #SportsLaw #ShedeurSanders #WJLTA

Kids Are Hooked On Microtransactions – Now What?

By Wolf Chivers

Predatory Game Design

Did you know that video game addiction is a condition recognized by the World Health Organization? As more and more games have implemented microtransactions, countries around the world have started considering whether those games should be regulated as a form of gambling. Certainly, people sometimes spend incredible amounts of money on in-game microtransactions, especially in the form of loot boxes that provide randomized in-game benefits in exchange for actual money. So when parents hear that their kids are potentially getting addicted to video game gambling, what is likely to happen? Lawsuits—lots and lots of lawsuits.

Identifying Claims

What exactly aggrieved parents might claim in those lawsuits, however, is not as clear as it might seem at first glance. Many countries in the world are considering regulating loot boxes as some form of gambling, but have not yet explicitly done so, which narrows the options for parents. What is left? Is it negligence on the part of the game companies? Maybe one could argue that the companies have ignored known risks in their design, but the companies are not making these games by reckless mistake. The core of some of the lawsuits is that the companies are intentionally making the games as addictive as possible. If so, it seems like some sort of intentional tort. However, most of the classic intentional torts that someone might come up with at first glance—assault, battery, trespass, and the like—do not seem intuitively to fit. 

If the claim is not negligence and not obviously an intentional tort, it might seem to leave plaintiffs in an awkward spot. The games represent a peculiar intersection between the fact that the games are fun, widely-enjoyed activities that are harmless in moderation, and the fact that they are also designed to be addictive and can cause great harm when abused. In short, the game practices have many of the same issues as conventional gambling, target a much younger demographic, and lack equal regulatory oversight. If game companies say “This isn’t gambling; all we did was make a fun game,” are there legal theories plaintiffs could still use? There are at least two that the plaintiffs are currently claiming—that the games are defectively designed and that the companies failed to provide adequate warnings of the safety risks.

Design Defect

The strongest claim available to the plaintiffs in these lawsuits may be in product liability; that is, alleging that the games were defectively designed. A design defect is a flaw in a product that was produced as intended (in contrast with a manufacturing defect) and works as intended, yet still causes harm to consumers. To show a design defect, the plaintiffs must at least show that the product posed a foreseeable risk of danger to a consumer using it for its intended purpose. That much is likely easy; the possible harms of video game addiction are wide-ranging and encompass everything from financial harm to long-term mental health challenges.

In some jurisdictions, however, the plaintiffs also must show that there was a reasonable alternative design, both practically and economically, which is potentially more difficult. If the companies have designed the games to be as addictive as possible, and therefore bring in as much money as possible, any less-addictive alternative designs are logically likely to bring in less money. There are, however, at least some alternatives that seem eminently reasonable; Epic Games settled with the FTC over allegations that, among other things, Fortnite players were tricked into spending money simply by confusing button layouts. Additionally, even if the games were to adopt less-addictive designs, they would likely still be worth billions of dollars, which certainly seems to indicate that they could be less addictive without being significantly less economically viable.

Failure to Warn

Game companies may rebut a design defect argument by suggesting that the games’ designs are not defective or inherently harmful—there are plenty of people who play them safely, and consumers presumably want the games to be as fun as possible. The addictiveness, the argument might run, is a feature, not a bug, distinct from something like a brake failure in a vehicle line. If so, plaintiffs could fall back on a claim that the companies failed to warn consumers of the inherent risks.

A failure to warn claim alleges that the company failed to provide adequate warnings about the risks of using a product, or instructions on how to do so safely. Game companies are no strangers to the process of supplying warnings; for instance, game consoles already have extensive, in-depth Health and Safety sections in their manuals that cover a broad range of known risks. If the plaintiffs can show that the game companies intentionally designed their games to be addictive, the failure to provide warnings to that effect makes for a strong claim.

The two common defenses in failure to warn cases are that the risk was obvious, or that the misuse was unforeseeable, neither of which seems likely to work to the game companies’ benefit. First, at least in some cases the purpose of the design was to trick people into spending money, in which case at least some of the risks are patently non-obvious. Second, the use to which the consumers put the games, i.e. playing them, was not only entirely foreseeable, it was the whole point. A failure to warn claim thus seems likely to succeed.

Conclusion

Even though design defects and failures to warn seem like strong claims, they also seem vaguely unsatisfying given the degree of harm and manipulation involved. At least one man spent over $16,000 in approximately a year and may have ruined his life. Anyone might easily feel wronged if they later discovered the games were designed to suck that much money out of them, and that example involved a mature adult. If, as alleged, game companies are intentionally getting impressionable children addicted to their games to milk as much money out of them and their parents as humanly possible, with no regard for the harms it may cause or the long-term damage to the children, lawsuits over whether there were enough warnings in the manual seem trite.

Given the current state of regulation, however, it is not entirely clear what else those impacted by the games’ predatory designs ought to do. There is also no easy answer for regulators, since the games likely should not be outlawed entirely. Even so, at a bare minimum, children should not be the target audience. Minimum age laws akin to restrictions on casinos or other forms of gambling would at least reduce the risk to some of the most vulnerable, and might provide a clear legal framework for plaintiffs to use if game companies continue targeting children.

#WJLTA #videogames #microtransactions #lootboxes #gambling #productliability #addiction

#Sponsored or #Deceptive? Understanding the FTC’s Rule on Influencer Ads

By: Penny Pathanaporn

Introduction

Have you ever noticed the endless stream of brand endorsements flooding your social media feed? Maybe you’d never even considered buying that gadget or outfit, but after watching a few influencer hauls and product reviews, you suddenly find yourself engaging in overconsumption. 

While endorsement content may be enticing enough to make you click “add to cart,” it also raises important questions: just how truthful—and lawful—are these advertisements? To answer that question, we must examine the legislation that governs marketing practices, as enforced by the Federal Trade Commission (FTC).

Legal Framework and FTC Authority 

Under Section 5 of the Federal Trade Commission Act (15 U.S.C. § 45), commercial entities are prohibited from engaging in “unfair or deceptive acts or practices . . . .” According to the FTC, a practice is considered deceptive if three elements are met: (1) there is “a representation, omission or practice that is likely to mislead the consumer,” (2) the representation, omission, or practice is directed toward a “consumer [who is] acting reasonably,” and (3) the representation, omission, or practice is likely to impact the consumer’s decision regarding the product.

In an effort to further regulate deceptive marketing practices, the FTC announced a final rule on August 14, 2024: the Trade Regulation Rule on the Use of Consumer Reviews and Testimonials. Under this rule, commercial entities are prohibited from, among other practices, “selling or purchasing fake consumer reviews or testimonials, buying positive or negative consumer reviews . . . [, and] creating a company-controlled review website that falsely purports to provide independent reviews . . . .” In addition, the rule bars “insiders [from] creating consumer reviews or testimonials without clearly disclosing their relationships.”

Given how readily traditional advertising has evolved into influencer marketing, it is no surprise that the FTC introduced this rule to directly address the shift in modern-day promotional tactics.

Revolve Class Action Lawsuit

Within the last month or so, Revolve—a fashion retailer—has been hit with a $50 million class action lawsuit. The plaintiffs allege that Revolve’s marketing practices do not comply with Section 5 of the Federal Trade Commission Act (15 U.S.C. § 45). In the lawsuit, the plaintiffs claim that Revolve allowed influencers to promote its products on social media without disclosing that these endorsements were paid partnerships, or that the influencers had preexisting relationships with the brand. The plaintiffs further claim that this marketing approach misled consumers into making purchases they might have reconsidered if they had known about the true nature of the endorsements.

Shein Class Action Lawsuit 

Similar to Revolve, Shein—a fast fashion retailer with a global online presence—was recently named in a class action lawsuit alleging violations of the Federal Trade Commission Act. The plaintiffs allege that Shein paid influencers to endorse its products on social media without clearly disclosing their financial relationships.

According to the lawsuit, Shein allegedly depended on these influencers to portray themselves as regular shoppers or genuine supporters of the brand. This marketing strategy allegedly involved concealing sponsorship disclosures within hashtag-heavy captions or leaving the disclosures out altogether. Like the plaintiffs in the Revolve lawsuit, the plaintiffs here assert that they would have reconsidered their Shein purchases had they been aware of the true nature of the endorsements.  

FTC Guidance: What Brands and Businesses Can Do to Prevent Liability 

In direct response to the rise in influencer marketing, the FTC has published guidelines on how brands and influencers can collaborate while ensuring compliance with U.S. consumer protection laws. Per the guidelines, the FTC advises influencers to always “disclose when [they] have any financial, employment, personal, or family relationship with a brand.” This means that, whether the influencer was paid to promote the brand or merely gifted free products, the influencer must still make the appropriate disclosures to remain legally compliant. 

In regard to disclosure placements, the FTC emphasizes that disclosures should be easily noticeable by consumers. For example, the FTC discourages placing the disclosures within a list of hashtags or links; instead, disclosures should appear directly alongside the message of endorsement. For video content, the FTC recommends including disclosures in the video itself in addition to the accompanying caption. As for language, disclosures should be written in clear and simple terms—ranging from direct acknowledgments of brand partnerships to shorter hashtags like “#sponsored” or “#ad.”

Lastly, it is crucial for brands and influencers alike to understand that, although brand endorsements may be published abroad, U.S. consumer protection laws will still apply if it is “reasonably foreseeable that the post will affect U.S. consumers.” 

Conclusion

Influencer marketing represents a modern form of advertising—one that is both highly accessible and deeply personal, blurring the line between genuine content and paid promotion. Left unchecked, influencer marketing—which involves consistent and personal engagement with consumers—can easily mislead consumers and encourage overconsumption. The FTC’s new rule and guidelines help protect consumer rights while giving companies and influencers the freedom to develop their brands, honor their creativity, and grow their businesses.

#FTC #SocialMediaMarketing #AdDisclosure #WJLTA

Is the Take It Down Act Enough?

By: Lindsey Vickers

Last week, legislators united to pass the Take It Down Act, a bill introduced by Texas Senator Ted Cruz. The bill’s next stop will be President Trump’s desk, where many expect he will sign it into law. (Melania is certainly rooting for the bill.)

But in a world of deepfakes and alternative realities, is the Take It Down Act truly enough to protect the public from the negative consequences of deepfakes? 

What is the Take It Down Act? 

The Take It Down Act is federal legislation that mirrors a growing number of state laws. The law aims to ban “nonconsensual online publication of intimate visual depictions of individuals.” 

You might have read that and thought, “huh?” Yeah, me too. In essence, the Take It Down Act criminalizes what have come to be known as “deepfakes.” This internet-era term refers to computer-generated media that depict things that didn’t actually happen, or that distort things that did happen into a twisted, alternate reality. Deepfakes can be audio, video, or images—in short, many types of media that spread easily online.

The worst part? Deepfakes can be virtually indistinguishable from reality. 

While the term “deepfake” dates back to 2017, the issue only reached a real boiling point when Taylor Swift became the subject of a deepfake scandal just over a year ago. Users took to a Microsoft AI tool to create nonconsensual deepfakes of Swift in the form of nude images. With Swift’s star power behind it, the depictions of her, and nonconsensual deepfakes more broadly, became a hot national issue.

But people across the country have been the subjects of nonconsensual deepfake porn, ranging from high school teens to a woman whose image was manipulated into pornography by a coworker.

However, this is not the only nefarious use of deepfakes. Audio deepfakes, for example, have been used to scam people out of millions of dollars, automate realistic robocalls, and even demand ransom from unsuspecting parents.

What Does The Take It Down Act Do? 

The Take It Down Act’s purview is limited to deepfake pornography and mimics state laws. It does not provide a cause of action, or a way for people to bring a legal claim, for all deepfakes. Instead, it only criminalizes nonconsensual “intimate imagery,” which essentially means porn or nude images. In short, it criminalizes sharing, or threatening to share, nonconsensual intimate images.

The act puts the onus to remove nonconsensual intimate depictions on the websites where the depictions are posted. However, the burden to remove them only kicks in after the website owner receives notice (aka a person has complained about a nonconsensual deepfake). 

Why Doesn’t the Take It Down Act Conflict with Section 230?  

In general, providers of interactive internet services are shielded from liability for posts by third parties, which include deepfakes like those targeted by the Take It Down Act. This promotes innovation and protects the free internet.

But there are a couple of caveats, and that’s where Section 230 comes in. Section 230 was enacted in 1996 as part of the Communications Decency Act. That act initially aimed to regulate obscene speech online, but most of it was quickly struck down by the Supreme Court as overbroad and in violation of the First Amendment.

Section 230, though, remained intact. It broadly exempts providers from liability, but only from civil liability, such as tort claims. So you can’t sue a website or internet service provider for defamatory statements posted by another person; you couldn’t, say, go after Yelp for someone else’s bad review of your restaurant.

However, Section 230 contains an exception for crimes, meaning that internet service providers can still be held liable for criminal activity. For example, a classifieds site or housing search engine could be held liable for violating the Fair Housing Act by requiring users to answer questions that may be discriminatory.

The Take It Down Act makes posting nonconsensual deepfakes a crime, meaning it falls under the exception carved out of Section 230. 

Is the Take It Down Act Enough? 

The Take It Down Act raises some free speech concerns, but it also has other pitfalls. Namely, it only targets one category of nefarious deepfakes: intimate images. The law provides no protection against, for example, someone making a deepfake TikTok inspired by your fit check and using it to spawn a confusing, illegitimate competitor account.

While this might sound far-fetched, it’s a legal problem many states have taken steps to solve through what’s called the “Right of Publicity.” These laws protect a person’s persona, which includes their name, image, and likeness.

Some state Right of Publicity laws, including Washington’s, provide all citizens a right to their name, image, and likeness. Others only offer protections to celebrities whose personas have commercial value. Regardless, these laws offer a more expansive framework for combating AI deepfakes of all sorts, not just the pornographic kind.

So, while the Take It Down Act does put some worries at bay, it leaves many citizens unprotected from non-sexual uses of their identities—despite the presence of a clear alternative through a federal Right of Publicity law.

#takeitdownact #newlegislation #deepfake