
By: Hanan Fathima
When a deepfake video of former President Barack Obama appeared in 2018, the public was stunned. This was not just clever editing, but a wake-up call. Deepfakes are highly realistic AI-generated media that imitate a person’s appearance and voice through technologies like generative adversarial networks (GANs), and they have become so convincing that they are often indistinguishable from authentic content. We have entered a digital era in which every piece of media demands scrupulous scrutiny, raising questions about regulation and justice in a digital age. Jurisdictions have adopted varying approaches to deepfake regulation: countries like the US and UK, along with EU members, emphasize international frameworks for deepfakes, while countries like China and Russia prefer digital sovereignty. A key challenge is navigating the jurisdictional gaps in deepfake laws and regulations.
The Global Surge in Deepfake-Driven Crimes
Deepfake phishing and fraud cases have escalated at an alarming rate, recording a reported 3,000% surge since 2022. In 2024, an attempt to create deepfake content occurred every five minutes. This sharp escalation in global deepfake activity is alarming, particularly given the potential for deepfakes to manipulate election outcomes, fabricate non-consensual pornographic content, and facilitate “sextortion” scams. Deepfake criminals exploit gaps in cross-border legal systems, and these gaps allow them to evade liability and continue their schemes with reduced risk. Because national laws are misaligned and international frameworks remain limited, victims of deepfake crimes face an uphill battle for justice. Combined with limited judicial precedent, tracing and prosecuting offenders has proved a massive challenge for many countries.
When Crime Crosses Borders and Laws Don’t
One striking example is a Hong Kong deepfake fraud case in which scammers impersonated a company’s chief financial officer using AI-generated video on a conference call, duping an employee into transferring HK$200 million (~US$25 million). Investigators uncovered a complex web of stolen identities and bank accounts spread across multiple countries, complicating the tracing and recovery of funds. The case underscores the need for international cooperation, standardized laws and regulations, and robust legal frameworks for AI-related deepfake crimes in order to effectively combat the growing threat of deepfake fraud.
At the national level, there have been efforts to address these challenges. One example is the U.S. federal TAKE IT DOWN Act of 2025, which criminalizes the distribution of non-consensual intimate deepfake images and mandates prompt removal upon request. States like Tennessee have enacted the ELVIS Act of 2024, which protects individuals against unauthorized use of their voice and likeness in deepfake content, while Texas and Minnesota have introduced laws criminalizing election-related deepfakes to preserve democratic integrity. Similarly, Singapore passed the Elections (Integrity of Online Advertising) (Amendment) Bill to safeguard against misinformation during election periods. China’s Deep Synthesis Regulation, in force since 2023, regulates deepfake technology and services, placing responsibility on both platform providers and end-users.
At the supranational level, the European Union’s AI Act is among the first comprehensive legal frameworks to tackle AI-generated content. It calls for transparency and accountability, emphasizing the labelling of AI-manipulated media rather than outright bans.
However, these laws remain region-specific and thus rely on international and regional cooperation frameworks, such as mutual legal assistance treaties (MLATs) and multilateral partnerships, for prosecuting foreign perpetrators. A robust framework must incorporate cross-border mechanisms, such as provisions for extraterritorial jurisdiction and standardized enforcement protocols, to address jurisdictional gaps in deepfake crimes. These mechanisms could take the form of explicit cooperation protocols under conventions like the UN Cybercrime Convention, strict timelines for MLAT procedures, and regional agreements on joint investigations and evidence-sharing.
How Slow International Processes Enable Offender Impunity
The lack of concrete laws, and thus of concrete relief mechanisms, means victims of deepfake crimes face multiple barriers to accessing justice. When cases involve multiple jurisdictions, investigations and prosecutions often rely on MLAT processes. Mutual legal assistance is “a process by which states seek and provide assistance in gathering evidence for use in criminal cases,” as defined by the United Nations Office on Drugs and Crime (2018), and MLATs are the primary mechanism for cross-border cooperation in criminal proceedings. Unfortunately, victims may experience delays in international investigations and prosecutions because MLAT processes are slow and cumbersome. The process also has its own limitations, including human rights concerns, conflicting national interests, and data privacy issues. According to Interpol’s Africa Cyberthreat Assessment Report 2025, requests for mutual legal assistance can take months, severely delaying justice and often allowing offenders to escape international accountability.
Differing legal standards and enforcement mechanisms across countries further complicate criminal proceedings for deepfake offences. Similarly, the cloud platforms and social media companies hosting deepfake content may be registered in countries with weak regulations or limited international cooperation, making it harder for authorities to remove content or obtain evidence.
The Human Cost of Delayed Justice
The psychological and social impacts on victims are profound. The maxim “justice delayed is justice denied” is particularly relevant here: delays in legal recourse prolong the victim’s suffering, often manifesting as reputational harm, long-term mental health problems, and career setbacks. As a result, victims of cross-border deepfake crimes may hesitate to report or pursue legal action, and they are further deterred by language, cultural, and economic barriers. Poor transparency in enforcement breeds mistrust in international legal systems, marginalizes victims, and weakens deterrence.
Evolving International Law on Cross-Border Jurisdiction
The application of international law to cybercrime, and whether it conflicts with cyber sovereignty, has been debated for years. The Council of Europe’s 2024 AI Policy Summit highlighted the need for global cooperation in the investigative and prosecutorial activities of law enforcement and reaffirmed the role of cooperation channels like MLATs. Calls for a multilateral AI research institute were made during the 2024 UN Security Council debate on AI governance. More recently, the 2025 AI Action Summit focused on AI research, the technology’s transformative capabilities, and its regulation; discussion of cybercrime and its jurisdiction was limited.
In 2024, the UN Convention Against Cybercrime addressed AI-enabled cybercrimes, including deepfakes, emphasizing electronic evidence-sharing between countries and cooperation between states on extradition requests and mutual legal assistance. The convention also allows states to establish jurisdiction over offences committed against their nationals regardless of where the offence occurred. However, challenges in implementation persist, as a number of nations, including the United States, have yet to ratify the convention.
Towards a Coherent Cross-Border Response
Addressing the complex jurisdictional challenges posed by cross-border deepfake crimes requires a multi-faceted approach that combines legal reform, international collaboration, technological innovation, and victim-centered mechanisms. First, MLATs must be streamlined with standardized request formats, clearer evidentiary requirements, and dedicated cybercrime units to reduce delays. Second, national authorities need stronger digital forensics and AI-detection capabilities, including investment in deepfake-verification tools such as blockchain-based provenance tracing. Third, generative AI platforms must be held accountable, with mandated detection systems and prompt takedown obligations; because these rules currently vary by region, platforms do not face the same responsibilities everywhere, underscoring the need for all countries to adopt consistent platform standards. Fourth, nations must play an active role in multilateral initiatives and bilateral agreements targeting cross-border cybercrime, supporting global frameworks that address extraterritorial jurisdiction over cybercrimes like deepfakes. While countries such as the United States, the UK, EU members, and Japan are active participants in international AI governance initiatives, many developing countries are excluded from these discussions, and countries like Russia and China have resisted Western-led cybercrime instruments such as the Budapest Convention, citing sovereignty concerns. Notably, despite being a global leader in AI innovation, the US has not ratified the 2024 UN Convention Against Cybercrime. Finally, a victim-centered approach, delivered through legal aid services and compensation mechanisms, is essential to ensure that victims are not left to navigate these complex jurisdictional challenges alone.
While deepfake technology has the potential to drive innovation and creativity, its rampant misuse has opened unprecedented avenues for crimes that transcend national borders and challenge existing legal systems. Bridging these jurisdictional and technological gaps is essential to building a resilient international legal framework capable of combating deepfake-related crimes and offering proper recourse for victims.