Post-mortem privacy rights: dying with indignity in the digital age

By: Nicole Buckley

Last Friday night, my roommate struck up a conversation with me as he heated rice on the stovetop. “Did you hear about the murder-suicide in Pennsylvania yesterday? Some guy shot his neighbors, then went back inside his house, grabbed an AR-15, and sprayed their bodies with bullets. A home security system caught all of it.” I paused, then: “Wait. Are you saying you clicked on a video and watched two people die?” Indeed, he had.

Although gruesome content is de jure banned on major social networking sites like Facebook and Twitter, other platforms are far more permissive about what they allow users to post, view, and share. On forum-style websites like Reddit, Gab, and Bitchute, “free speech” often serves as a dog whistle for graphic, violent, or hateful content. Rather than removing or obscuring that content, these “free speech” platforms tend to become breeding grounds for the fetishization of extreme violence. The video my roommate elected to click on is one example among many.

At present, the legal community is laser-focused on so-called “free speech” platforms, Section 230 of the Communications Act of 1934, and election-related disinformation. Admittedly, I am, too. However, questions about post-mortem privacy in the digital age—for now lurking beneath the surface of popular debate—seem primed for the spotlight in years to come. If alternative platforms position themselves as ready and willing to host videos depicting murder, what is the legal community to make of it? How far does Section 230 extend? If users are unable to get platforms to remove the content, could they bring a tort claim against other users on behalf of the decedent(s) depicted? Some of these questions are easier to answer than others. The second is genuinely difficult; the last is likely to produce a simple response: no.

Ironically, even though privacy rights appear to be an invention tailored to curtail the intrusion of technology into various segments of society, privacy law lags behind advancements in the digital age. Thus, despite public opinion surveys suggesting a general desire to expand privacy rights online, expanding those rights has proved difficult. This is largely because an individual’s privacy rights are thought to terminate upon death. Salting the wound is the common law majority view that a decedent’s estate may not bring an action to protect the decedent’s privacy interests. As a result, depictions of gruesome deaths can be extraordinarily difficult to remove from the internet.

The debate around internet-era post-mortem privacy rights has been brewing for over a decade. After a teenager died tragically in a car accident, photographs of her decapitated body were plastered across the internet. Although the decedent’s family pursued legal recourse, eventually settling with the law enforcement agency responsible for leaking the photographs, a quick search on Reddit reveals that the images are still in circulation. Two interesting points emerged from this case. First, the California Court of Appeal never reached the privacy claim on the merits because the family accepted an out-of-court settlement. Although the court appeared willing to entertain a privacy discussion, the issue was never litigated, and the law thus remains unclear. Second, prior to the settlement, the law enforcement agency had relied on the First Amendment as a defense.

The collision of Section 230, the First Amendment, and post-mortem privacy is inevitable. First Amendment jurisprudence, often thought to be in tension with privacy rights, permits platforms that host graphic content to claim “newsworthiness” as a legal defense. Additionally, Section 230 absolves internet platforms of liability for failing to remove most user-generated content. Not only are decedents disadvantaged under the common law of privacy, but constitutional jurisprudence and federal regulatory law also fail to provide litigants with adequate legal recourse.

Thus, the theoretically available avenues to relief appear sealed shut: decedents’ estates cannot easily sue the people who shared content depicting a loved one’s death, nor can they get at the platforms hosting the content. As Danielle Keats Citron points out, the available private responses meant to curb online abuse are wholly inadequate. Yet all hope is not lost; there is still time to broaden the scholarly debate about private rights in the digital age. After all, the common law is not inflexible, and debate around Section 230 is presently reaching a fever pitch. As a new generation of lawyers enters practice, it is imperative that we consider online privacy rights against their origin story: a mechanism developed not only to check technology, but to balance its uses with the good of society. If death is to retain its dignity in the digital age, private rights of action must be given a greater opportunity to succeed.
