Alice in Algorithm-land: Legal recourse for victims of content-recommendation rabbit holes

By: Cameron Eldridge

There was a time, early in the life of social media, when the content of your feed revealed only whom you followed: friends, family, preferred news networks, favorite TV shows, or bands. Today, content-recommendation algorithms, which were once used only for advertising, are the backbone of social media platforms, determining what users see and when they see it.

The content-recommendation algorithms used by Facebook, Instagram, Twitter, and TikTok have one goal: maximizing user engagement, which means showing users whatever will keep them looking. This can benefit users, when liking one video of an adorable baby animal means being fed more of them. But it can also be dangerous: a single interaction with content about mental illness or a terrorist organization can trigger the algorithm to send users spiraling down a rabbit hole, slowly distorting how they view themselves and how they interact with the world. Unfortunately, because of Section 230, when users or their loved ones fall victim to these rabbit holes, they are often left with no one to legally blame.
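To make that feedback loop concrete, the toy Python sketch below simulates an engagement-maximizing recommender. It is an illustration of the general dynamic described above, not any platform's actual ranking system; the topics, weights, and update rule are invented for the example.

```python
import random
from collections import Counter

# Toy sketch only -- not any platform's actual ranking system. It illustrates
# how a feed optimized purely for engagement can spiral: one interaction with
# a sensitive topic raises that topic's weight, so it fills more of the next
# feed, making further interactions with it even more likely.

TOPICS = ["pets", "cooking", "sports", "sensitive"]  # hypothetical categories


def build_feed(weights: Counter, size: int = 10) -> list[str]:
    """Sample a feed, favoring topics the user has engaged with before."""
    return random.choices(list(weights), weights=list(weights.values()), k=size)


def simulate(rounds: int = 6) -> None:
    weights = Counter({topic: 1.0 for topic in TOPICS})
    for r in range(rounds):
        feed = build_feed(weights)
        # Round 0: the user lingers on a single "sensitive" post.
        # Later rounds: they engage with whatever the feed serves up most.
        engaged = "sensitive" if r == 0 else Counter(feed).most_common(1)[0][0]
        weights[engaged] *= 2.0  # engagement-maximizing update
        share = feed.count("sensitive") / len(feed)
        print(f"round {r}: sensitive content is {share:.0%} of the feed")


if __name__ == "__main__":
    simulate()
```

Run for a handful of rounds, the single early interaction with the sensitive topic quickly comes to dominate the simulated feed, which is the essence of the rabbit-hole effect.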

Shattering the Section 230 shield

Section 230(c)(1) of the Communications Decency Act immunizes “interactive computer services” like social media platforms from liability for publishing content created by another party. Historically, Section 230 has served as a shield protecting social media platforms from any and all liability for harmful videos, comments, and posts made on their platforms. So when a Louisiana teen’s family sues Meta because she killed herself after being fed content about suicide and self-harm, or when the family of a ten-year-old who choked themselves to death while participating in a TikTok challenge sues TikTok, the companies can avoid any consequences. If victims of the algorithm want any chance at holding social media platforms accountable, they’ll need a more creative legal strategy than content-based attacks.

A flaw in the design

A recent products liability claim against Meta, brought by the Social Media Victims Law Center on behalf of plaintiff Alexis Spence, attempts to hold Instagram accountable by arguing that Instagram’s feed and explore features are defective by design. Spence, who began using Instagram at age eleven and now, at twenty, suffers from severe mental illness, claims that these design features are the but-for cause of her injuries. While it is too early to tell how Spence’s case will pan out, there is some supporting precedent in another recent case, Lemmon v. Snap, Inc. The court there held that Section 230 did not bar a claim against Snapchat for foreseeable injuries resulting from its ‘speed filter,’ another design-based theory.

Another promising strategy currently being tested is an attack on the recommendation algorithm itself. Next month, the Supreme Court will hear argument in Gonzalez v. Google, in which University of Washington Law Professor Eric Schnapper will raise the question of whether Section 230 protects platforms when they make targeted recommendations of information, or only when they engage in traditional editorial functions like publishing or withdrawing content.

Gonzalez was brought on behalf of Nohemi Gonzalez, a 23-year-old U.S. citizen who was studying in Paris in November 2015 when she was murdered in one of a series of violent ISIS attacks that killed more than a hundred people. The complaint alleges that YouTube not only unknowingly published hundreds of ISIS recruitment videos but also affirmatively recommended those videos to users, and that these recommendations go beyond the traditional editorial functions of a publisher that Section 230 textually protects.

Many in the tech world fear that altering Section 230 protections in the way Gonzalez seeks would render social media platforms legally impossible to operate. How would an app like TikTok, which is built almost entirely on its content-recommendation algorithm, continue to function if it could be held liable for that algorithm’s every consequence? A ruling against Google would certainly change social media platforms as we know them, but it may also force them to take more responsibility for the kind of rabbit holes they send users down. While this would pose a financial and logistical burden, it is one that tech companies like Meta and Google probably can, and should, bear.