Across Nations, Across Identities: Why Deepfake Victims are Left Without Remedies

By: Hanan Fathima

When a deepfake video of former President Barack Obama appeared in 2018, the public was stunned. This was not just clever editing; it was a wake-up call. Deepfakes are highly realistic AI-generated media that imitate a person’s appearance and voice using technologies like generative adversarial networks (GANs), and they have become so convincing that they are often indistinguishable from authentic content. We have entered a digital era in which every piece of media demands scrupulous scrutiny, raising urgent questions about regulation and justice. Jurisdictions have adopted varying approaches to deepfake regulation: countries like the US and UK and EU member states emphasize international legal frameworks for deepfakes, while countries like China and Russia prefer digital sovereignty. A key challenge is navigating the resulting jurisdictional gaps in deepfake laws and regulations.

The Global Surge in Deepfake-Driven Crimes

Deepfake phishing and fraud cases have escalated at an alarming rate, with a reported 3,000% surge since 2022. In 2024, an attempt to create deepfake content occurred every five minutes. This sharp escalation is alarming given deepfakes’ potential to manipulate election outcomes, fabricate non-consensual pornographic content, and facilitate sextortion scams. Deepfake criminals exploit gaps in cross-border legal systems, which allow them to evade liability and continue their schemes with reduced risk. Because national laws are misaligned and international frameworks remain limited, victims of deepfake crimes face an uphill battle for justice. Combined with limited judicial precedent, tracing and prosecuting offenders has proved a massive challenge for many countries.

When Crime Crosses Borders and Laws Don’t

One striking example is a Hong Kong deepfake fraud case in which scammers impersonated a company’s chief financial officer on a video conference call using AI-generated video, duping an employee into transferring HK$200 million (~US$25 million). Investigators uncovered a complex web of stolen identities and bank accounts spread across multiple countries, complicating the tracing and recovery of funds. The case underscores the need for international cooperation, standardized laws and regulations, and robust legal frameworks for AI-related deepfake crimes in order to effectively combat the growing threat of deepfake fraud.

At the national level, there have been efforts to address these challenges. One example is the U.S. federal TAKE IT DOWN Act of 2025, which criminalizes the distribution of non-consensual intimate deepfake images and mandates prompt removal upon request. States like Tennessee have enacted the ELVIS Act of 2024, which protects individuals against unauthorized use of their voice and likeness in deepfake content, while Texas and Minnesota have introduced laws criminalizing election-related deepfakes to preserve democratic integrity. Similarly, Singapore passed the Elections (Integrity of Online Advertising) (Amendment) Bill to guard against misinformation during election periods. China’s Deep Synthesis Regulation regulates deepfake technology and services, placing responsibility on both platform providers and end-users.

On an international scale, the European Union’s AI Act is among the first comprehensive legal frameworks to tackle AI-generated content. It emphasizes transparency and accountability, requiring that AI-manipulated media be labelled rather than banned outright.

However, these laws are region-specific, so prosecuting foreign perpetrators depends on international and regional cooperation frameworks such as mutual legal assistance treaties (MLATs) and multilateral partnerships. A robust framework must incorporate cross-border mechanisms, such as provisions for extraterritorial jurisdiction and standardized enforcement protocols, to address jurisdictional gaps in deepfake crimes. These mechanisms could take the form of explicit cooperation protocols under conventions like the UN Cybercrime Convention, strict timelines for MLAT procedures, and regional agreements on joint investigations and evidence-sharing.

How Slow International Processes Enable Offender Impunity

The lack of concrete laws, and thus of concrete relief mechanisms, means victims of deepfake crimes face multiple barriers to justice. When cases involve multiple jurisdictions, investigations and prosecutions often rely on Mutual Legal Assistance Treaty (MLAT) processes. The United Nations Office on Drugs and Crime (2018) defines mutual legal assistance as “a process by which states seek and provide assistance in gathering evidence for use in criminal cases.” MLATs are the primary mechanism for cross-border cooperation in criminal proceedings, but their processes are slow and cumbersome, leaving victims waiting on international investigations and prosecutions. The process also carries its own limitations, including human rights concerns, conflicting national interests, and data privacy issues. According to Interpol’s Africa Cyberthreat Assessment Report 2025, mutual legal assistance requests can take months, severely delaying justice and often allowing offenders to escape international accountability.

Differing legal standards and enforcement mechanisms across countries make criminal proceedings for deepfake crimes difficult. Similarly, the cloud platforms and social media companies hosting deepfake content may be registered in countries with weak regulations or limited international cooperation, making it harder for authorities to remove content or obtain evidence.

The Human Cost of Delayed Justice

The psychological and social impacts on victims are profound. The maxim “justice delayed is justice denied” is particularly relevant: delays in legal recourse mean the victim’s suffering is prolonged, often manifesting as reputational harm, long-term mental health problems, and career setbacks. As a result, victims of cross-border deepfake crimes may hesitate to report or pursue legal action, deterred further by language, cultural, or economic barriers. Poor transparency in enforcement creates mistrust in international legal systems and marginalizes victims, weakening deterrence.

Evolving International Law on Cross-Border Jurisdiction

There have been years of opinion and debate over the application of international law to cybercrimes and whether it conflicts with cyber sovereignty. The Council of Europe’s 2024 AI Policy Summit highlighted the need for global cooperation in law enforcement investigations and prosecutions and reaffirmed the role of cooperation channels like MLATs. Calls for a multilateral AI research institute were made in the 2024 UN Security Council debate on AI governance. More recently, the 2025 AI Action Summit focused on AI research, the technology’s transformative capability, and its regulation; discussion of cybercrime and jurisdiction was limited.

In 2024, the UN Convention Against Cybercrime addressed AI-based cybercrimes, including deepfakes, emphasizing electronic evidence-sharing between countries and cooperation between states on extradition requests and mutual legal assistance. The convention also allows states to establish jurisdiction over offences committed against their nationals regardless of where the offence occurred. However, implementation challenges persist, as a number of nations, including the United States, have yet to ratify the convention.

Towards a Coherent Cross-Border Response

Addressing the complex jurisdictional challenges posed by cross-border deepfake crimes requires a multi-faceted approach that combines legal reform, international collaboration, technological innovation, and victim-centered mechanisms. First, Mutual Legal Assistance Treaties (MLATs) must be streamlined with standardized request formats, clearer evidentiary requirements, and dedicated cybercrime units to reduce delays. Second, national authorities need stronger digital forensic and AI-detection capabilities, including investment in deepfake-verification tools such as blockchain-based tracing techniques. Third, generative AI platforms must be held accountable through mandated detection systems and prompt takedown obligations; because these rules currently vary by region, platforms do not face the same responsibilities everywhere, underscoring the need for consistent global standards.

Fourth, nations must play an active role in multilateral initiatives and bilateral agreements targeting cross-border cybercrime, supporting global governance frameworks that address extraterritorial jurisdiction over crimes like deepfakes. While the United States, the UK, EU members, and Japan are active participants in international AI governance initiatives, many developing countries are excluded from these discussions, and Russia and China have resisted UN cybercrime treaties on sovereignty grounds. Notably, despite being a global leader in AI innovation, the US has also not ratified the 2024 UN Convention Against Cybercrime. Finally, a victim-centered approach, delivered through legal aid services and compensation mechanisms, is essential to ensure that victims are not left to navigate these complex jurisdictional challenges alone.

While deepfake technology has the potential to drive innovation and creativity, its rampant misuse has opened unprecedented avenues for crimes that transcend national borders and challenge existing legal systems. Bridging these jurisdictional and technological gaps is essential to building a resilient international legal framework capable of combating deepfake-related crimes and offering proper recourse to victims.

The Reality of Deepfakes: The Dark Side of Technology

By: Kayleigh McNiel

We’ve all seen the viral Tom Cruise deepfake or played around with face-swapping Snapchat filters. But the dark reality of deepfake technology is far more terrifying than an ever-youthful Top Gun star.

Deepfakes are images and videos digitally altered using artificial intelligence (AI) and machine learning algorithms to superimpose one person’s face seamlessly onto another’s. They can be incredibly realistic and nearly impossible to detect with the naked eye. Many websites and apps allow anyone with a computer to produce images and videos of someone saying or doing something that never actually happened.

While lawmakers and the media have focused their concerns on the potential impact of political deepfakes, nearly all deepfakes online are actually non-consensual porn targeting women. Gaps in the law and easy access to deepfake technology have created a perfect storm, where anyone can make their most perverse fantasy come to life, at the expense of real people.

The Tech Behind The Fakes

Deepfakes are created using generative adversarial networks (GANs), which pit two machine learning models, an image generator and an image discriminator, against each other to create and refine the fakes. The process begins by feeding both models the same source data, i.e., images, video, or even audio. The generator then iteratively creates new samples of the target, and the discriminator judges them, until the discriminator can no longer tell whether a generated image is a real image of the target or a fake.
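The adversarial loop is easier to see in code. Below is a toy sketch in Python using PyTorch; the tiny fully connected networks, the flattened 784-pixel “images,” and the random stand-in training batch are illustrative assumptions, not any real deepfake pipeline, which would use deep convolutional networks trained on large face datasets.

```python
import torch
import torch.nn as nn

# Toy generator: maps random noise to a flattened 28x28 "image".
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
# Toy discriminator: scores whether an image looks real (1) or fake (0).
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, 64)

    # 1. Train the discriminator to separate real images from generated ones.
    fake_images = G(noise).detach()  # detach: don't update G on this pass
    d_loss = loss_fn(D(real_images), torch.ones(batch, 1)) + \
             loss_fn(D(fake_images), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to fool the discriminator into scoring fakes as real.
    g_loss = loss_fn(D(G(noise)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# One step on a random stand-in batch (scaled to [-1, 1] to match Tanh output).
train_step(torch.rand(32, 784) * 2 - 1)
```

The key design point is the alternation: the discriminator is updated on frozen generator output, then the generator is updated on the discriminator’s feedback, and repeating these two steps is what drives the fakes toward realism.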

Historically, creating a truly realistic, high-quality deepfake required dozens of images of a person similar enough to the original subject. That changed in July 2022, when Samsung developed MegaPortraits, a technique that creates high-resolution deepfakes from a single image. Now, a highly realistic deepfake can be made from a single innocuous selfie posted online.

With advancements in technology, detecting deepfakes has become increasingly difficult. In response, researchers have raced to develop more accurate detection tools. In July 2022, for example, computer scientists at the University of California, Riverside created a program that detects manipulated facial expressions in videos and images with up to 99% accuracy. While promising, there is still a long way to go before this or similar detection tools are widely available to law enforcement, consumer protection agencies, and the public.
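The UC Riverside program itself is not publicly released, so the sketch below is only a schematic of how a learned detector is typically wired up, assuming Python with PyTorch and torchvision. The ResNet-18 backbone, the two-class head, and the untrained placeholder weights are illustrative assumptions, not the researchers’ actual method; a usable detector would first need training on labeled authentic and manipulated face crops.

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

# Schematic detector: a stock ResNet-18 with a two-class head
# (real vs. manipulated). Weights here are untrained placeholders;
# a real detector would be trained on labeled face crops.
detector = models.resnet18(weights=None)
detector.fc = nn.Linear(detector.fc.in_features, 2)
detector.eval()

# Standard ImageNet-style preprocessing for the backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def manipulation_score(face_crop: Image.Image) -> float:
    """Return the model's estimated probability that a face crop is manipulated."""
    x = preprocess(face_crop).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        probs = torch.softmax(detector(x), dim=1)
    return probs[0, 1].item()  # class index 1 = "manipulated"

# Usage (path is a placeholder): score one extracted video frame.
# print(manipulation_score(Image.open("frame_0001.png").convert("RGB")))
```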

The Dark Side of Deepfakes

Realistic deepfakes pose an enormous risk to politicians and fair elections. Deepfakes have already surfaced of high-profile politicians engaging in acts designed to undermine their credibility. In March 2022, Russian hackers posted to Ukrainian news outlets and social media a deepfake video of Ukrainian President Volodymyr Zelenskyy telling his soldiers to surrender. While the video was quickly debunked, it demonstrates how this technology is likely to become a standard tactic for adversaries seeking to interfere in politics.

While political deepfakes do pose a very real danger to our democratic institutions, the technology is currently used primarily to victimize women. A 2019 report by Deeptrace found that 96% of all deepfakes online are non-consensual porn targeting women, and the number of such deepfakes is growing rapidly. Cybersecurity firm Sensity reports that the volume of deepfakes online nearly doubles every six months, largely due to the increasing availability of cheap and easy deepfake technology. Free face-swapping tools on apps like Deepnude, Deepswap, and FaceMagic are commonly used to create deepfake porn. Scammers have even begun using these in extortion schemes, threatening to release the fake videos to victims’ family, friends, and employers unless they pay up.

Having your likeness stolen and used to depict degrading sex acts without your consent is becoming a disturbing reality for celebrities and women in the public eye. A quick Google search reveals nearly a dozen websites with hundreds of deepfake porn videos using the faces of celebrities like Emma Watson, Gal Gadot, and Maisie Williams, among many others. Earlier this month, Twitch streamer Atrioc was forced to apologize after he accidentally revealed he had used a website dedicated to sharing deepfake porn of popular female streamers, many of whom he is friends with in real life.

While celebrities are most at risk, there are websites (which I will not name here) specifically designed for men to create non-consensual deepfake porn of the women in their lives. In one case, an anonymous user released an AI bot on the messaging app Telegram that rapidly generated thousands of deepfakes of women and underage girls from photos uploaded by men seeking revenge. Though the bot is no longer publicly active, an investigation by Sensity found that its deepfakes were shared over 100,000 times before the bot was reported to the platform.

To add insult to injury, women who speak out against revenge porn are often targets of relentless online harassment. Kate Isaacs, a 30-year-old woman from the UK, became a victim of deepfake porn after she successfully campaigned for Pornhub to remove nearly 10 million non-consensual and child porn videos. Afterwards, she was subjected to humiliating and terrifying harassment from men who “felt they were entitled to non-consensual porn.” They posted her work and home addresses online and threatened to follow her, rape her, and post the video of it on Pornhub. Shortly thereafter, deepfake porn videos of her began to circulate online.

Many victims of deepfake and revenge porn are forced to shut down their social media accounts and minimize their online presence to avoid further harassment and embarrassment. It is somewhat ironic that the men who seek to silence women by creating and sharing these videos often do so under the guise of the First Amendment. The dangers of deepfakes are undeniable, but women have largely been left to fend for themselves.

Our Legal System Is Not Ready for This

The combination of a lack of awareness and the difficulty of detecting deepfakes creates a significant challenge for victims when reporting. Most law enforcement agencies lack the training and software to confirm that a video is a deepfake. Even if law enforcement can prove a video is a forgery, significant damage is done by the time they do: people have already seen what they believe to be the victim engaging in degrading sex acts. Those images can never be unseen and will continue to damage victims’ reputations, relationships, and mental health.

The legal system has been slow to react to the threat women face from deepfake porn. While 48 states and Washington, D.C. finally have laws against the creation and distribution of non-consensual “revenge” pornography, only three have specifically banned deepfake porn. In 2018, proposed federal deepfake legislation died in the Senate. The state laws prohibiting deepfakes will likely face significant First Amendment and personal jurisdiction challenges:

  • In 2019, Texas became the first state to ban deepfakes, but only those intended to influence elections.
  • Also in 2019, Virginia amended its “revenge porn” statute to include deepfakes. 
  • In 2020, California prohibited the creation of deepfakes within 60 days of an election and for unauthorized use in pornography.
  • Also in 2020, New York passed a law protecting a person’s likeness from unauthorized commercial use as well as from non-consensual deepfake pornography.

In states without laws against deepfakes, victims are forced to seek relief through a patchwork of consumer privacy, defamation, and revenge porn laws. Notably, many states’ revenge porn laws do not apply to deepfakes because the victim’s actual body is not being portrayed.

Biometric privacy laws could be used to combat deepfake porn in states like Illinois, Texas, Washington, New York, and Arkansas, where residents can file a civil claim against those who use their faceprints, facial mapping, or identifiable images without their consent. Similarly, defamation claims could potentially be brought against the creators of deepfake porn. 

Even if a clearly applicable law exists, bringing a civil claim requires the victim to prove the identity of the video’s creator. This can be incredibly challenging when websites and apps allow users to upload videos with near-total anonymity. The bottom line is that current laws do little to deter deepfake creators from continuing to victimize women for their own pleasure.

What Are Tech Platforms Doing To Fix the Problem They Created?

Meanwhile, the tech platforms on which deepfakes are widely shared are completely shielded from legal liability under Section 230 of the Communications Decency Act. Without consequences, it has been difficult to get platforms to address the impact that content shared on their sites has on people’s lives. Still, some have taken action against deepfakes. In 2018, both Reddit and Pornhub banned deepfake porn, categorizing it as inherently non-consensual. The following year, Discord banned the sale of Deepnude, an app designed to remove clothing from women (yes, only women) in photos. Apple removed the Telegram deepfake bot from iOS for violating its guidelines. Pornhub and YouPorn both redirect users searching for deepfakes to a warning that they are searching for potentially illegal and abusive sexual material, then provide directions for requesting removal of content and resources for victims. Telegram, by contrast, has never publicly commented on the bot or identified its creator.

While these efforts are promising, more still needs to be done. Tech companies, lawmakers, and communities must work together to regulate the use of deepfake technology.

If you or someone you know has been the victim of online sex abuse, you are not alone. Support is available through the Cyber Civil Rights Initiative online or via their 24-hour hotline at 1-844-878-2274.