
By: Matthew Bellavia
When asked under oath during one of many congressional hearings, Mark Zuckerberg said:
“Senator, we consider ourselves to be a platform for all ideas.”
While this statement sounds like mere corporate virtue-signaling, it carries far greater legal significance. When Section 230 of the Communications Decency Act was enacted in 1996, the prevailing vision of the internet was a neutral space where users could post ideas—a passive message board. Nearly three decades later, that vision no longer describes the modern internet. Today, social media platforms do not merely host content; through proprietary and secret recommendation algorithms, they actively control what content users see and which posts go viral. An updated, nuanced legal framework that recognizes the active role platforms play in amplifying content is necessary to improve transparency and accountability.
Section 230 Legal Framework
Section 230 was enacted in response to conflicting court decisions on platform liability. In Cubby, Inc. v. CompuServe, Inc., an online information service that provided subscribers with access to thousands of sites and over 100 forums was found not liable for libel because it did not, and could not, review content on the forums before it was posted. By contrast, in Stratton Oakmont, Inc. v. Prodigy Servs. Co., an online bulletin board provider was treated as a publisher because it selectively moderated its content, and it was therefore held liable for defamatory postings. Clearly, a more definitive rule was needed. As a result, Congress enacted 47 U.S.C. § 230(c)(1), which states:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The law was designed to encourage internet growth by protecting platforms from liability for user-generated content while allowing them to moderate in good faith. Section 230(c)(2), the “Good Samaritan” provision, specifically protects platforms that remove objectionable content. The statute distinguishes platforms from “information content providers”—those responsible, in whole or in part, for creating or developing information. Platforms materially contributing to content may lose their immunity.
Prominent Cases After Section 230
Courts interpreting Section 230 have generally reinforced broad platform immunity, prioritizing innovation and free speech at the expense of accountability. In Zeran v. AOL (1997), the plaintiff’s personal contact information was maliciously and repeatedly posted on AOL forums in advertisements for offensive merchandise related to the Oklahoma City bombing. Despite multiple notifications, AOL failed to promptly remove the content, and the plaintiff received death threats and harassing calls. The Fourth Circuit nonetheless found AOL not liable, holding that Section 230’s broad protection applied even after AOL had notice of the harmful content.
In contrast, Fair Housing Council v. Roommates.com (2008) held that a platform loses immunity when it requires users to supply unlawful content, as Roommates.com did by prompting subscribers to state discriminatory housing preferences. More recently, in Gonzalez v. Google (2023), the Supreme Court declined to decide whether algorithmic recommendations constitute content development under Section 230.
Algorithmic Promotion and Co-Authorship
Modern platforms do not show content chronologically; they algorithmically rank and prioritize posts based on engagement metrics and user behavior. TikTok’s “For You” page curates individualized feeds via machine learning. YouTube’s autoplay and “Up Next” queues automatically recommend videos, and recommendations make up 70% of all views on the site. Facebook similarly uses proprietary signals to rank its News Feed.
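To make the point concrete, the minimal sketch below shows how an engagement-weighted feed ranker might work. It is a purely hypothetical illustration: the Post fields, the weights, and the functions are invented for this post and do not reflect any platform’s actual system.

    # Hypothetical engagement-weighted ranking, for illustration only.
    # The weights are invented; real platforms use proprietary signals.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        likes: int
        shares: int
        watch_seconds: float

    # Design choice: how much each signal counts toward "relevance."
    WEIGHTS = {"likes": 1.0, "shares": 3.0, "watch_seconds": 0.1}

    def score(post: Post) -> float:
        return (WEIGHTS["likes"] * post.likes
                + WEIGHTS["shares"] * post.shares
                + WEIGHTS["watch_seconds"] * post.watch_seconds)

    def rank_feed(posts: list[Post]) -> list[Post]:
        # Posts are shown in descending score order, not chronologically.
        return sorted(posts, key=score, reverse=True)

Even in this toy example, deciding that a share counts three times as much as a like is a judgment made by the platform, which is precisely the kind of editorial choice critics describe.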
Critics argue that algorithm design reflects editorial choices rather than passive, neutral functions: platforms actively choose which content to amplify, and those choices are driven by revenue and the financial interests of the platforms and their creators. Platforms could respond that personalized ranking and the feed-control tools they offer give users a more active role in curating their own content.
Legal Implications – When Does Immunity Break?
If platforms are found to be co-authors or material contributors, the consequences could be significant. Under Section 230, immunity is lost when a platform is deemed to have helped “create or develop” unlawful content. Courts have struggled with what that means, but algorithmic editing or targeted amplification might tip the scales. One could argue that using algorithms that predictably promote harmful content could constitute content development, especially if the platform profits from the activity. Moreover, platforms monetizing harmful content via advertising may be seen as active participants rather than neutral intermediaries. The Roommates.com decision already established that platforms that require or solicit unlawful content can lose immunity. Could algorithmic design, predictably amplifying harmful content, be the next frontier?
Potential Intermediate Standards
Section 230 is commonly referred to as “The Twenty-Six Words That Created the Internet.” A full repeal of the law would destroy the current online ecosystem; platforms simply do not have the infrastructure or resources to moderate everything that is posted. YouTube, for example, receives 500 hours of uploaded content every minute, and the recent explosion of AI-generated content has only compounded the problem. Instead of repeal, intermediate reforms could bring the law up to date. The EU’s Digital Services Act, for example, already imposes obligations on platforms to mitigate the risks of algorithmic recommendations. Other options include conditioning immunity on algorithmic transparency or limiting immunity for the algorithmic distribution of harmful content.
Practicality
Tech companies argue that narrowing Section 230 would lead to over-moderation and chill innovation and free speech, a concern that aligns with the industry’s recent movement away from proactive moderation and fact-checking. Critics respond that platforms already wield considerable power, touching nearly every aspect of society. Requiring transparency into algorithmic content delivery could help courts and regulators evaluate when platforms cross into co-authorship, though it is not something media companies are likely to accept without a fight.
Conclusion
The internet that Section 230 was designed for is long gone. Today, algorithms blur the publisher-platform distinction by enabling sites to curate, promote, and profit from the content they choose to amplify. While sites provide some tools for users to control their feeds, platforms still take a far more active role in curation than the drafters could have contemplated in 1996. As litigation around algorithmic content grows, Section 230 must evolve to recognize the active role platforms play in shaping content and to increase transparency and accountability.
#Section230 #PlatformImmunity #SocialMedia #WJLTA