Who’s Your Drug Dealer? Snapchat and Section 230 Under Scrutiny

By: Caroline Dolan

While Snapchat may no longer feel as trendy as it once was, the social media platform is alive and well. However, the same cannot be said for all of its adolescent users. Snapchat’s unique filters and features attract 406 million daily active users, but grieving parents have dubbed the app “a haven for drug trafficking.” Numerous parents are seeking justice for their children, who used Snapchat to purchase drugs that, unbeknownst to them, were laced with fentanyl. While Section 230 would normally immunize a social media platform from civil liability and provide grounds for dismissal, a Los Angeles judge has denied Snapchat’s invocation of Section 230 immunity and overruled twelve of its sixteen demurrers. In other words, the judge has determined that the causes of action asserted by the Plaintiffs have merit and will continue through the litigation process.

A Snapshot of Section 230 

The Communications Decency Act (“CDA”) of 1996 was passed in light of the internet’s rise and Congress’s desire to protect children from exposure to dangerous content, particularly pornography. However, out of fear that platforms would overly censor themselves to avoid liability under the CDA, Congress also passed the Internet Freedom and Family Empowerment Act, better known as Section 230. Section 230(c) governs the liability of providers of an “interactive computer service” and states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, a provider of an interactive computer service like Twitter or Facebook cannot be held civilly liable for what you or your friend post. Nor can it be held liable for voluntarily moderating content in good faith. While Section 230 does not shield platforms from federal criminal law, electronic communications privacy law, or intellectual property claims, it remains a wide-reaching shield, and online platforms rarely hesitate to invoke it.

The Suit Against Snap

Represented by the Social Media Victims Law Center, the Plaintiffs asserted that Snap’s product features and business choices resulted in the foreseeable serious injury and death of their children. The Plaintiffs alleged that Snap’s automatic message deletion, location mapping, and “quick-add” features create an inherently dangerous app and enable kids to connect with adult predators. The parents contend that Snap has long been aware that the app is an “open air drug market,” yet has failed to implement any meaningful changes to improve age and identity verification or to prevent foreseeable harms. Notably, the Plaintiffs did not allege that Snap is liable for failing to eliminate or moderate the content of the third parties selling drugs, but rather that the “feature-packed social media app” facilitates an unreasonably dangerous avenue for strangers to contact vulnerable adolescents.

Snap demurred to all sixteen alleged causes of action and invoked general immunity under Section 230. It advocated for an extremely broad reading of the statute, asserting a “but for”/“based on”/“flows from” construction wherein, “if the alleged harm flows from the content provided by third parties, Section 230 applies.” In Snap’s view, it is entitled to Section 230 immunity because the Plaintiffs’ children would not have been injured but for the content posted by the third-party drug dealers.

The Ninth Circuit’s three-prong test, established in Barnes v. Yahoo!, Inc., extends Section 230 immunity to “(1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a state law cause of action, as a publisher or speaker (3) of information provided by another information content provider.”

In Neville v. Snap, Inc., the judge agreed with Snap that, as a (1) “interactive computer service,” it would be absolved of liability as a (2) “publisher or speaker” of any (3) information posted by third-party users. However, the judge concluded that the Plaintiffs had not sought to treat Snap as a “publisher or speaker.” Rather, the allegations centered on a purportedly unreasonably dangerous and defective product. The judge recognized the unsettled bounds of Section 230 but nonetheless found that the claims related to Snap’s product and business decisions were “independent … of the drug sellers’ posted content” and beyond Snap’s “incidental editorial functions” (e.g., choosing to publish, remove, or edit content), which courts have consistently held are protected under Section 230.

Snapchat On The Docket

Section 230 does not have the same meaning or relevance it did when it was enacted nearly thirty years ago. Yet it continues to bear on pressing issues related to AI technology, national security, and public health and safety. The Supreme Court has so far sidestepped these questions but may soon be forced to define the statute’s limits more clearly. Neville v. Snap, Inc. may help clarify the outer bounds of Section 230 as well as provide justice and solace to the victims’ families.

Skepticism: Should “The Nine Greatest Experts on the Internet” Be Taking More Social Media Cases?

By: Kevin Vu

Last year, during oral argument in the cases concerning Section 230 of the Communications Decency Act (“Section 230”), Justice Kagan opined that the United States Supreme Court is not “the nine greatest experts on the internet.”  Despite that observation, and despite ultimately resolving those cases without defining Section 230’s scope, the Court granted certiorari this year in four cases that again concern the regulation of social media.  Two of those cases, Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton, concern newly passed state laws in Florida and Texas that purport to regulate social media companies.  Additionally, the Court recently heard oral arguments in two other cases, O’Connor-Ratcliff v. Garnier and Lindke v. Freed, concerning whether elected officials can block members of the public on social media.  These cases, together with Justice Kagan’s observation, raise the question of why the Court would be inclined to answer questions about the internet and social media companies despite being self-admitted non-experts.

  1. The current cases.

To provide some background, the Court is considering four such cases this term.  The NetChoice cases concern whether Florida and Texas laws can prohibit social media companies from censoring users or refusing to give a platform to political candidates.  The two laws differ slightly: the Florida law would require social media platforms to provide a rationale for removing content or censoring individuals on their platforms, while the Texas law would bar large platforms, like Facebook, from engaging in “viewpoint discrimination.”  The Court is considering whether those state laws violate the companies’ First Amendment rights.

The other two cases, O’Connor-Ratcliff and Lindke, concern whether an elected official blocking users on social media violates those users’ First Amendment rights.  A version of this question has reached the Court before: then-President Trump blocked users on his Twitter account, but the Court dismissed the resulting case as moot after Trump left office.

  2. Possible reasons the Court is considering social media cases.

Several reasons could explain the Court’s decision to consider the NetChoice and elected-officials cases.  First, unlike the Section 230 cases, this term’s cases implicate First Amendment rights.  The Section 230 cases asked whether plaintiffs could hold social media companies liable for content users posted on their websites, and did not present any such First Amendment issues.  Because First Amendment questions come with well-established case law and principles, the Court may be more willing to wade into how its precedent applies to giant social media companies, especially as the other branches of government have failed to regulate those companies.

As social media websites continue to be leveraged to spread misinformation, and as distrust in those platforms grows, the Court could see an opportunity to weigh in on questions the other branches of the federal government have failed to address.  Bipartisan bills regulating social media have been introduced, but those efforts have languished, especially as uncertainty looms over potential government shutdowns.  Congress’s inaction, however, is not the only sign that the Court wants to consider cases regarding social media companies and the internet.  This past month, the Court lifted a lower court’s restriction on the Biden administration, which had alerted social media companies to content that violated the companies’ own policies.  Several state Attorneys General and social media users had sued the administration, arguing that it suppressed disfavored political speech, such as claims of election fraud and information about the COVID-19 pandemic.  In allowing the administration to continue contacting social media companies in that way, the Court could be signaling its interest in cases involving those companies.

Finally, the Court’s public rationale for granting review weighs in favor of taking these social media cases.  The Court ordinarily grants review only to cases that could have “national significance,” that “might harmonize conflicting decisions in the federal Circuit courts,” or that “could have precedential value.”  These social media cases arguably meet all three criteria.  First, they raise questions of national significance:  Can states regulate the speech of giant social media companies?  If states can regulate these national and international companies, what happens when separate states impose different restrictions or requirements on them?  And can elected officials restrict a user’s access to their social media accounts?  Those issues will have a profound impact on how social media companies regulate their multimillion-user platforms.

Second, with regard to the NetChoice cases, the federal circuit courts have issued conflicting decisions: the Eleventh Circuit struck down the Florida law, while the Fifth Circuit upheld the Texas law.  Because those laws are similar in nature, the Court must resolve the NetChoice cases to determine whether a state can instruct social media companies on how to regulate, or not regulate, their content and users.

And third, these cases are likely to have a profound effect on state policy and legislative decisions with national reach.  For example, last year the Court held that California could forbid the in-state sale of pork produced in a cruel manner.  That pork case raises implications similar to the NetChoice cases: whether a single state can effectively regulate an entire national industry.  All of these cases will also provide precedential value, as the Court has been presented with questions of first impression about how the First Amendment applies in various social media contexts.  The Court’s decisions will shape how social media companies are run.

Ultimately, these are the kinds of cases and questions the Court must answer for prudential reasons.  As the public grows skeptical of social media companies, decisive action is needed.  The inaction of the other branches of the federal government, along with the state actions at issue in some of these cases, presents the following question: which branch of government should be taking action?