The Reality of Deepfakes: The Dark Side of Technology

By: Kayleigh McNiel

We’ve all seen the viral Tom Cruise deepfake or played around with the face-swapping Snapchat filters. But the dark reality of deepfake technology is far more terrifying than an ever-youthful Top Gun star.

Deepfakes are images and videos digitally altered using artificial intelligence (AI) and machine learning algorithms to superimpose one person’s face seamlessly onto another’s. They can be incredibly realistic and nearly impossible to detect with the naked eye. Many websites and apps allow anyone with access to a computer to produce images and videos of someone saying or doing something that never actually happened.

While lawmakers and the media have focused their concerns on the potential impact of political deepfakes, nearly all deepfakes online are actually non-consensual porn targeting women. Gaps in the law and easy access to deepfake technology have created a perfect storm in which anyone can make their most perverse fantasy come to life at the expense of real people.

The Tech Behind The Fakes

Deepfakes are created using generative adversarial networks (GANs): two machine learning algorithms, an image generator and an image discriminator, that work in tandem to create and refine the fakes. The process begins by feeding both algorithms the same source data, i.e., images, video, or even audio. The generator then iteratively creates new samples of the target until the discriminator can no longer tell whether a generated image is a real image of the target or a fake.
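For the technically curious, the adversarial loop described above can be sketched in a few dozen lines of code. Below is a minimal, hypothetical illustration in Python using the PyTorch library; random tensors stand in for real source images, and the toy network sizes bear no resemblance to production deepfake pipelines, which add face detection, alignment, and far larger models.

    # Minimal sketch of a GAN training loop (illustrative only).
    # Random tensors stand in for real face images.
    import torch
    import torch.nn as nn

    LATENT_DIM = 64    # size of the random noise vector fed to the generator
    IMG_DIM = 28 * 28  # flattened image size (toy resolution)

    # Generator: maps random noise to a synthetic "image".
    generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 256), nn.ReLU(),
        nn.Linear(256, IMG_DIM), nn.Tanh(),
    )

    # Discriminator: scores how likely an image is to be real.
    discriminator = nn.Sequential(
        nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    for step in range(1000):
        real = torch.randn(32, IMG_DIM)  # placeholder for real source images

        # 1) Train the discriminator to separate real from generated images.
        fake = generator(torch.randn(32, LATENT_DIM)).detach()
        d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
                  + loss_fn(discriminator(fake), torch.zeros(32, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # 2) Train the generator to fool the discriminator.
        noise = torch.randn(32, LATENT_DIM)
        g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

Training stops, in principle, when the discriminator’s guesses are no better than chance, which is exactly the point at which the fakes have become convincing.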

Historically, creating a truly realistic, quality deepfake required dozens of images of a person who bore enough similarity to the original subject. That changed in July 2022, when Samsung developed MegaPortraits, a technique that creates high-resolution deepfakes from a single image. Now, highly realistic deepfakes can be made from a single innocuous selfie posted online.

With advancements in technology, detecting deepfakes has become increasingly difficult. In response, researchers have raced to develop more accurate detection tools. For example, in July 2022 computer scientists at the University of California, Riverside created a program that detects manipulated facial expressions in videos and images with up to 99% accuracy. While promising, there is still a long way to go before this or similar detection tools are widely available to law enforcement, consumer protection agencies, and the public.
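Published detectors differ in their details, but many boil down to a binary classifier that scores each frame of a video as real or manipulated. The sketch below is a simplified, hypothetical illustration (again in Python with PyTorch, and not the UC Riverside system); a real detector would have to be trained on large labeled datasets of authentic and manipulated footage before its scores meant anything.

    # Simplified, hypothetical sketch of frame-level deepfake detection:
    # a binary classifier scores each video frame as real or manipulated.
    import torch
    import torch.nn as nn

    # Toy convolutional classifier over 64x64 RGB frames (untrained here).
    detector = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 1), nn.Sigmoid(),  # probability frame is fake
    )

    def score_video(frames: torch.Tensor) -> float:
        """Average per-frame 'fake' probability for frames shaped
        (num_frames, 3, 64, 64); higher means more likely manipulated."""
        with torch.no_grad():
            return detector(frames).mean().item()

    # Example: 8 random tensors stand in for a decoded video clip.
    print(score_video(torch.randn(8, 3, 64, 64)))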

The Dark Side of Deepfakes

Realistic deepfakes pose an enormous risk to politicians and fair elections. Deepfakes have already surfaced of high-profile politicians engaging in acts designed to undermine their credibility. In March 2022, Russian hackers posted to Ukrainian news outlets and social media a deepfake video of Ukrainian President Volodymyr Zelenskyy telling his soldiers to surrender. While the video was quickly debunked, it demonstrates how this technology is likely to become a standard tactic used by adversaries to interfere in politics.

While political deepfakes do pose a very real danger to our democratic institutions, the technology is currently primarily used to victimize women. A 2019 report by Deeptrace confirmed that 96% of all deepfakes online are actually non-consensual porn targeting women, and the number of such deepfakes is rapidly growing. Cybersecurity firm Sensity reports that the volume of deepfakes online nearly doubles every six months, largely due to the increasing availability of cheap and easy deepfake technology. Free face-swapping software found in apps like Deepnude, Deepswap, and FaceMagic is commonly used to create deepfake porn. Scammers have even begun using these in extortion schemes, threatening to release the fake videos to victims’ families, friends, and employers unless they pay up.

Having your likeness stolen and used to perform degrading sex acts without your consent is becoming a disturbing reality for celebrities and women in the public eye. A quick Google search reveals nearly a dozen websites with hundreds of deepfake porn videos using the faces of celebrities like Emma Watson, Gal Gadot, and Maisie Williams, among many others. Earlier this month, Twitch streamer Atrioc was forced to apologize after he accidentally revealed he used a website dedicated to sharing deepfake porn of popular female streamers, many of whom he is friends with in real life.

While celebrities are most at risk, there are websites (which I will not name here) specifically designed for men to create non-consensual deepfake porn of the women in their lives. In one case, an anonymous user released an AI bot on the messaging app Telegram that rapidly generated thousands of deepfakes of women and underage girls from photos uploaded by men seeking revenge. Although the bot is no longer publicly active, an investigation by Sensity found that its deepfakes were shared over 100,000 times before it was reported to the platform.

To add insult to injury, women who speak out against revenge porn are often the targets of relentless online harassment. Kate Isaacs, a 30-year-old woman from the UK, became the victim of deepfake porn after she successfully campaigned for Pornhub to remove nearly 10 million non-consensual and child porn videos. Afterwards, she was subjected to humiliating and terrifying harassment from men who “felt they were entitled to non-consensual porn.” They posted her work and home addresses online and threatened to follow her, rape her, and then post the video of it on Pornhub. Shortly thereafter, deepfake porn videos of her began to circulate online.

Many victims of deepfake and revenge porn are forced to shut down their social media accounts and minimize their online presence to avoid further harassment and embarrassment. It is somewhat ironic that the men who seek to silence women by creating and sharing these videos often do so under the guise of the First Amendment. The dangers of deepfakes are undeniable, but women have largely been left to fend for themselves.

Our Legal System Is Not Ready for This

The combination of a lack of awareness and the difficulty of detecting deepfakes creates a significant challenge for victims when reporting these crimes. Most law enforcement agencies lack the training and software to confirm that a video is a deepfake. Even if law enforcement can prove it is a forgery, significant damage is already done by the time they do so. People have already seen what they believe to be the victim engaging in degrading sex acts. Those images can never be unseen and will continue to damage victims’ reputations, relationships, and mental health.

The legal system has been slow to react to the threat women face from deepfake porn. While 48 states and Washington, D.C. finally have laws against the creation and distribution of non-consensual “revenge” pornography, only three have specifically banned deepfake porn. In 2018, proposed federal deepfake legislation died in the Senate. The state laws prohibiting deepfakes will likely face huge hurdles from First Amendment and personal jurisdiction challenges:

  • In 2019, Texas became the first state to ban deepfakes, but only those intended to influence elections.
  • Also in 2019, Virginia amended its “revenge porn” statute to include deepfakes. 
  • In 2020, California prohibited the creation of deepfakes within 60 days of an election and for unauthorized use in pornography.
  • Also in 2020, New York passed a law protecting a person’s likenesses from unauthorized commercial use as well as non-consensual deepfake pornography.

In states without laws against deepfakes, victims will be forced to find relief through a patchwork of consumer privacy protection, defamation, and revenge porn laws. Notably, many states’ revenge porn laws do not apply to deepfakes because the victim’s body is not actually being portrayed.

Biometric privacy laws could be used to combat deepfake porn in states like Illinois, Texas, Washington, New York, and Arkansas, where residents can file a civil claim against those who use their faceprints, facial mapping, or identifiable images without their consent. Similarly, defamation claims could potentially be brought against the creators of deepfake porn. 

Even if a clearly applicable law exists, bringing any civil claim requires the victim to be able to prove the identity of the video’s creator. This can be incredibly challenging when websites and apps allow users to upload videos with near total anonymity. The bottom line is that current laws do little to deter deepfake creators from continuing to victimize women for their own pleasure. 

What Are Tech Platforms Doing To Fix the Problem They Created?

The tech platforms on which deepfakes are widely shared are also largely shielded from legal liability under Section 230 of the Communications Decency Act. Without any consequences, it has been difficult to get platforms to address the impact that content shared on their sites has on people’s lives. Still, some have taken action against deepfakes. In 2018, both Reddit and Pornhub banned deepfake porn, categorizing it as inherently non-consensual. The following year, Discord banned the sale of Deepnude, an app designed to remove clothing from women (yes—only women) in photos. Apple removed the Telegram deepfake bot from iOS for violating its guidelines. Pornhub and YouPorn both redirect users searching for deepfakes to a warning that they’re searching for potentially illegal and abusive sexual material. Users are then provided with directions on how to request the removal of content and resources for victims. Telegram, on the other hand, has never publicly commented on the bot and has never identified its creator.

While these efforts are promising, more still needs to be done. Tech companies, lawmakers, and communities must work together to regulate the use of deepfake technology.

If you or someone you know has been the victim of online sex abuse, you are not alone. Support is available through the Cyber Civil Rights Initiative online or via their 24-hour hotline at 1-844-878-2274.
