
By: Lindsey Vickers
Last week, legislators united to pass the Take It Down Act, a bill introduced by Texas Senator Ted Cruz. The bill’s next stop is President Trump’s desk, where many expect he will sign it into law. (Melania is certainly rooting for the bill.)
But in a world of deepfakes and alternative realities, is the Take It Down Act truly enough to protect the public from the negative consequences of deepfakes?
What is the Take It Down Act?
The Take It Down Act is federal legislation that mirrors a growing number of state laws. The law aims to ban “nonconsensual online publication of intimate visual depictions of individuals.”
You might have read that and thought, “huh?” Yeah, me too. In essence, the Take It Down Act criminalizes what have come to be known as “deepfakes.” This internet-era term refers to computer-generated media that depict things that never actually happened, or that distort things that did happen into a twisted, alternate reality. Deepfakes can be audio, video, or images—in short, many types of media that spread easily online.
The worst part? Deepfakes can be virtually indistinguishable from reality.
While the term “deepfake” dates back to 2017, the issue only reached a boiling point in the media when Taylor Swift became the subject of a deepfake scandal just over a year ago. Users turned to a Microsoft AI tool to create nonconsensual nude images of Swift. With her star power behind the issue, the depictions of her, and nonconsensual deepfakes more broadly, became a hot national topic.
But people across the country have been the targets of nonconsensual deepfake porn, from high school teens to a woman whose image was manipulated into pornography by a coworker.
However, this is not the only nefarious use of deepfakes. Audio deepfakes, for example, have been used to scam people out of millions of dollars, automate realistic robocalls, and even demand ransom from unsuspecting parents.
What Does The Take It Down Act Do?
The Take It Down Act’s purview is limited to deepfake pornography, mirroring the state laws that preceded it. It does not provide a cause of action, or a way for people to bring a legal claim, for all deepfakes. Instead, it criminalizes only nonconsensual “intimate imagery,” which essentially means porn or nude images. In other words, it makes sharing, or threatening to share, nonconsensual intimate images a crime.
The act puts the onus to remove nonconsensual intimate depictions on the websites where the depictions are posted. However, the burden to remove them only kicks in after the website owner receives notice (aka a person has complained about a nonconsensual deepfake).
Why Doesn’t the Take It Down Act Conflict with Section 230?
In general, providers of interactive computer services are exempt from liability for content posted by third parties, including deepfakes like those targeted by the Take It Down Act. This promotes innovation and protects the free internet.
That exemption comes from Section 230, though it carries a couple of caveats. Section 230 was enacted in 1996 as part of the Communications Decency Act. The act initially aimed to regulate obscene and indecent speech online, but most of it was quickly struck down by the Supreme Court as overbroad and in violation of the First Amendment.
Section 230, though, remained intact. This section of the act broadly exempts providers from liability, but it only protects websites and internet service providers from civil liability, or things like torts. So, you can’t sue a website provider for defamatory statements posted by another person. You couldn’t, say, go after Yelp for another user’s bad review of your restaurant.
However, Section 230 contains an exception for crimes, meaning internet service providers can still be held liable for criminal activity. For example, a classifieds site or housing search engine could be held liable for violating the Fair Housing Act by asking users to answer questions that may be discriminatory.
The Take It Down Act makes posting nonconsensual deepfakes a crime, meaning it falls under the exception carved out of Section 230.
Is the Take It Down Act Enough?
The Take It Down Act raises some free speech concerns, but it has other pitfalls as well. Namely, it targets only one category of nefarious deepfakes: intimate images. The law provides no protection against, say, someone making a deepfake TikTok inspired by your fit check and using it to spawn a confusing, illegitimate competitor account.
While this might sound pie-in-the-sky, it’s a legal problem many states have taken steps to solve through what’s called the “Right of Publicity.” These laws protect a person’s persona, which includes their name, image, and likeness.
Some state Right of Publicity laws, including Washington’s, provide all citizens a right to their name, image, and likeness. Others only offer protections to celebrities, whose persona has commercial value. Regardless, these laws offer a more expansive framework to combat AI deepfakes of all sorts, not just the pornographic kinds.
So, while the Take It Down Act puts some worries to rest, it leaves many citizens unprotected from non-sexual misuse of their identities, despite a clear alternative in the form of a federal Right of Publicity law.
#takeitdownact #newlegislation #deepfake