
By Olivia Bravo
In October 2023, Steven Anderegg, a 43-year-old Wisconsinite, was indicted for knowingly producing at least one visual depiction of a minor engaging in sexually explicit conduct. Anderegg allegedly used a text-to-image generative artificial intelligence (GenAI) model called Stable Diffusion to “create thousands of realistic images of prepubescent minors.” In May 2024, he became the first person in the U.S. criminally charged for generating and distributing AI-created child sexual abuse material (CSAM). His case marked a turning point for U.S. authorities, underscoring the legal and ethical challenges AI-generated CSAM poses and highlighting the need for clearer policies and enforcement strategies as regulators continue to grapple with AI-generated explicit content.
What is CSAM?
Child Sexual Abuse Material (CSAM), also referred to as “child pornography,” is any visual depiction of sexually explicit conduct involving a person under 18 years old. Rapid technological advances have increased the scale and complexity of online child sexual exploitation and victimization, and the use of AI is one of the defining legal challenges of this new technological age. It is problematic in two ways: (1) offenders can use AI tools to create CSAM, and (2) AI models are being trained on CSAM.
Legal Precedents and Challenges
Under U.S. federal law, CSAM is considered illegal contraband and is not protected by the First Amendment. Statutes such as 18 U.S.C. §§ 2251, 2252, and 2252A criminalize the production, possession, and distribution of such material through any means of interstate or foreign commerce.
However, AI-generated CSAM introduces legal complexities. Because some synthetic images do not involve identifiable victims, they may fall outside the scope of laws written before the advent of generative models. This raises questions about whether such material qualifies as illegal “depictions,” and how harm is defined in the absence of a real child.
To address emerging risks, lawmakers have begun to update and expand relevant legislation:
- The PROTECT Act of 2003 (S. 151), formally the Prosecutorial Remedies and Other Tools to end the Exploitation of Children Today Act, extended federal prohibitions to computer-generated imagery of child sexual abuse, closing loopholes for non-photographic, “synthetic” CSAM.
- The Children’s Online Privacy Protection Act (COPPA), whose implementing rule regulators moved to update in January 2024, imposes stricter consent requirements and limits how online platforms may collect and share children’s data, though it does not directly address AI model training.
- California’s Leading Ethical AI Development (LEAD) for Kids Act (AB 1064) goes a step further by targeting AI model development: it mandates parental consent for training AI on children’s data and requires developers to conduct risk assessments of potential harms.
Despite these efforts, no comprehensive federal framework yet exists to regulate the use of CSAM in AI training datasets or the creation of AI-generated abuse imagery. As the technology rapidly evolves, regulators face growing pressure to close these legal gaps while balancing free expression and innovation.
How AI Changes the Game
What is AI model training, and how is it affected by CSAM? An AI model is both a set of algorithms and the data used to train those algorithms so that the model can make accurate predictions in response to user queries. “AI model training” refers to the process in which the model is fed massive amounts of data, its outputs are evaluated, and its parameters are adjusted to improve accuracy and efficacy. But what happens when these models are trained on exploitative images of children found in a public dataset?
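To make that feed–evaluate–adjust cycle concrete, here is a minimal sketch in Python using a toy linear model and a handful of made-up data points; every name and number below is illustrative rather than drawn from any real system. Production generative models run the same loop over billions of examples with neural networks in place of two parameters.

```python
# Minimal sketch of the train/evaluate/adjust loop described above.
# A toy linear model y = w * x + b is fit to a handful of points;
# real generative models repeat the same cycle over billions of examples.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (input, target) pairs

w, b = 0.0, 0.0          # model parameters, initially untrained
learning_rate = 0.01

for epoch in range(1000):
    grad_w = grad_b = 0.0
    for x, y in data:
        pred = w * x + b                  # 1. feed data through the model
        error = pred - y                  # 2. examine the result
        grad_w += 2 * error * x / len(data)
        grad_b += 2 * error / len(data)
    w -= learning_rate * grad_w           # 3. tweak parameters to reduce error
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")    # roughly w ≈ 2 for this toy data
```

The key point for the discussion that follows is that the model’s behavior is shaped entirely by whatever data enters this loop: if abusive imagery is present in the training set, its patterns are absorbed along with everything else.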
An investigation by the Stanford Internet Observatory (SIO) revealed hundreds of known images of CSAM in an open dataset (LAION-5B) used to train popular AI models such as Stable Diffusion, the same text-to-image generator Steven Anderegg used to create hyper-realistic images of children. Creating images from text in this way is an example of generative AI (GenAI), which enables the production of fake imagery, including synthetic media, digital forgery, and, in this case, CSAM. GenAI allows offenders to create hyper-realistic sexual abuse material depicting the victimization of children, and that material can then be fed back into AI training datasets.

A July 2024 report by the Internet Watch Foundation (IWF) found that since October 2023 AI-generated CSAM has increased on the clear web, more images have been uploaded to the dark web, and the imagery has grown more severe, with more Category A abuse material, indicating that perpetrators are increasingly able to generate complex ‘hardcore’ scenarios. “AI-generated imagery of child sexual abuse has progressed at such an accelerated rate that the IWF is now seeing the first realistic examples of AI videos depicting the sexual abuse of children.”
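Audits like the SIO’s generally work by matching image hashes against vetted lists of known material maintained by child-safety organizations. The sketch below shows the general shape of that screening step under stated assumptions: the blocklist file and image directory names are hypothetical, and real pipelines rely on vetted hash lists from organizations such as NCMEC or the IWF and on perceptual hashes (e.g., PhotoDNA) that survive resizing and re-encoding, rather than the exact-match cryptographic hashes used here for simplicity.

```python
# Hedged sketch: screening a dataset against a blocklist of known-abuse hashes.
# "blocklist_hashes.txt" and "dataset_images/" are hypothetical placeholders;
# production systems use perceptual hashing and vetted, access-controlled lists,
# not plain SHA-256 exact matching.

import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def screen_dataset(image_dir: Path, blocklist_path: Path) -> list[Path]:
    """Return paths whose hashes appear in the blocklist, so they can be
    excluded from training data and reported as the law requires."""
    blocklist = set(blocklist_path.read_text().split())
    return [p for p in image_dir.iterdir()
            if p.is_file() and sha256_of_file(p) in blocklist]

if __name__ == "__main__":
    flagged = screen_dataset(Path("dataset_images"), Path("blocklist_hashes.txt"))
    print(f"{len(flagged)} file(s) matched the known-abuse hash list")
```

Exact-match hashing misses edited or re-encoded copies, which is one reason curating web-scale datasets like LAION-5B is hard and why known material can slip into training corpora despite screening.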
Conclusion
Steven Anderegg may have been the first person in the U.S. prosecuted for generating AI-created child sexual abuse material, but he will not be the last. The technological advances brought on by AI force us to rethink harm and the accountability we bear as users of these platforms. As generative AI becomes more powerful and accessible, the risk that it will be misused to produce CSAM, circulate it, and train future models on it only grows. For lawmakers, this means crafting forward-looking policies that not only criminalize synthetic abuse content but also prevent its proliferation through stricter oversight of training data and AI development practices.
#CSAM #ChildProtection #AITraining #WJLTA