Driving Change: Washington State Legislature Considers New Regulations for ALPRs


By: Anusha Nasrulai

The Washington state legislature is currently considering the Driver Privacy Act, a bill regulating automated license plate readers (ALPRs). ALPRs are cameras that capture a vehicle’s license plate information along with location and time data. Currently, many agencies’ retention and sharing of ALPR data is subject only to internal policy, if any exists. The surveillance capabilities of ALPR systems have profound consequences, such as chilling the exercise of civil liberties and invading the privacy of vulnerable individuals, including immigrants or people who come to Washington to access reproductive or gender-affirming health care. The Driver Privacy Act presents an opportunity to raise the floor of privacy protections above what the federal and Washington State Constitutions currently afford.

The Driver Privacy Act limits authorized uses of ALPRs to law enforcement, parking and toll enforcement, and transportation agencies. The current bill also sets warrant thresholds, a 21-day data retention period, and auditing requirements for agencies. Additionally, the bill places restrictions on accessing and sharing ALPR data and creates civil and criminal liability for violations under the Act. The bill specifically prohibits ALPR use around protected activities, including the exercise of First Amendment rights and access to healthcare.

Advocates are calling on lawmakers to strengthen the bill by further limiting the data retention period and prohibiting third-party vendors and agencies from sharing ALPR data without a warrant.

Eyes on Washington

ALPRs have faced increased scrutiny in Washington state in the past year. In October 2025, the UW Center for Human Rights released a report exposing how immigration enforcement and other out-of-state law enforcement access data from ALPR systems operated by Washington agencies. This is despite the Keep Washington Working Act and the Shield Law, which are intended to limit local law enforcement from assisting federal immigration enforcement and to “protect[] people in Washington from civil and criminal actions in other states that restrict or criminalize reproductive and gender-affirming care.” Many municipalities have halted their use or procurement of ALPRs, at least until the state passes guidance. Washington is now at a turning point: it can either implement guardrails that protect individuals’ privacy from undue government surveillance or pass legislation that sanctions expanded ALPR use across the state.

Flock Safety, a widely adopted ALPR vendor in Washington, has faced “growing controversy” for enabling federal agencies and other jurisdictions to access ALPR data collected by public agencies. As of this June, 80 cities, six counties, and three tribes in Washington use Flock cameras. While Flock Safety has received nationwide attention recently, other prominent ALPR vendors are also used in Washington, including Vigilant Solutions (Motorola) and Axon (formerly Taser).

How ALPR data is used, stored, and shared matters because these cameras capture more than license plate information. Law enforcement uses ALPRs during real-time investigations by checking a vehicle’s license plate information against a “hot list” of vehicles associated with an investigation or reported crime. The information collected by ALPRs can be cross-referenced with other law enforcement or public agency databases to identify the individuals to whom vehicles are registered. Law enforcement can also search historical ALPR data to track the direction, speed, and travel patterns of a vehicle. In aggregate, ALPR data can reveal sensitive information about the places an individual frequents and their travel patterns. Furthermore, ALPR photos can capture the likeness of drivers, passengers, and nearby surroundings.
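To make those mechanics concrete, the sketch below shows, in minimal Python, the two basic operations described above: a real-time hot-list check and a historical travel-pattern query. The record layout, plate values, and hot list are hypothetical illustrations, not any vendor’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    plate: str           # plate text extracted by the camera
    camera_id: str       # which camera captured the read
    timestamp: datetime  # when the vehicle passed
    lat: float           # where the camera is located
    lon: float

# Hypothetical "hot list" of plates tied to active investigations.
HOT_LIST = {"ABC1234", "XYZ9876"}

def check_hot_list(read: PlateRead) -> bool:
    """Real-time check: flag a captured plate that appears on the hot list."""
    return read.plate in HOT_LIST

def travel_history(reads: list[PlateRead], plate: str) -> list[PlateRead]:
    """Historical query: every time and place one plate was seen.
    Sorted in time order, this is the travel-pattern data that
    raises the aggregation concerns discussed above."""
    return sorted((r for r in reads if r.plate == plate),
                  key=lambda r: r.timestamp)
```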

Constitutional Concerns

The specific protections provided by the Driver Privacy Act are guided by Constitutional principles. The surveillance capabilities of ALPRs can have a “chilling effect” on the exercise of Constitutionally protected activities. Awareness of constant surveillance may alter or deter people from exercising their protected rights of expression, association, and religion. These concerns are valid given historic law enforcement surveillance of political rallies, protests, and places of worship.

ALPRs also implicate the Constitutional right to privacy. The Supreme Court has not required a warrant for law enforcement to collect and search license plate information because there is a diminished expectation of privacy due to the systematic regulation of vehicles and the fact that drivers’ movements take place on public roads. However, the Court has also addressed, in United States v. Jones, Carpenter v. United States, and Kyllo v. United States, whether law enforcement’s use of emerging surveillance technologies infringes on Fourth Amendment protections.

The Supreme Court has found that a warrant is required before installing a GPS tracker or obtaining cell site location information (CSLI) to track an individual’s long-term movements. Justice Sotomayor wrote a concurring opinion in Jones highlighting how emerging technologies have enhanced law enforcement surveillance capabilities without physical intrusion: “GPS monitoring generates a precise, comprehensive record of a person’s public movements that reflects a wealth of detail about her familial, political, professional, religious, and sexual associations.” The aggregation of ALPR data to reveal historical travel patterns raises concerns similar to those articulated in the Jones concurrence. Similarly, the Court in Carpenter was concerned by how time-stamped CSLI data provides an intimate view into a person’s life and is cheaper and more easily accessible than other surveillance strategies. The same concerns apply to retained, historical ALPR data.

The Supreme Court recognized in Kyllo that people’s Fourth Amendment protections should not be left to “the mercy of advancing technology.” Law enforcement use of sense-enhancing technology in the form of infrared scanners to collect information “that could not otherwise have been obtained without physical ‘intrusion into a constitutionally protected area,’” was found to be a search subject to Fourth Amendment protections. While cars have lesser Constitutional privacy protections than homes, modern ALPR systems with embedded AI also provide law enforcement with extra-sensory capabilities that may implicate the Fourth Amendment.

Federal courts have yet to conclude that police use of ALPRs violates Fourth Amendment search and seizure requirements. The Washington State Constitution, however, provides an affirmative right to privacy, enshrined in Article I, Section 7. Washington courts have interpreted this provision to create a higher standard for lawful searches and seizures. As of December 2024, 18 states already have ALPR laws, with more states considering ALPR legislation. Washington is now contemplating joining them.

Where the Bill Stands Now

The Driver Privacy Act has already passed the Senate with amendments. The House Civil Rights & Judiciary Committee will hold a public hearing on the amended bill on February 18th.

The Driver Privacy Act would not only regulate the use of ALPRs in Washington state but also create meaningful privacy protections for all Washingtonians.

Reclaiming Urban Housing: A Case Study on Regulating Online Platforms

By: Matt Unutzer

Sky-high rents are a defining feature of modern urban life. Among the many forces blamed for rising housing costs, one issue has drawn sustained regulatory attention: the conversion of long-term housing into short-term rentals (STRs) listed on platforms such as Airbnb and VRBO. Critics argue that when apartments and homes are diverted into the short-term market, overall housing supply shrinks, placing upward pressure on rents and home prices. In response, cities across the country have spent the last decade experimenting with new regulatory frameworks aimed at curbing the perceived housing impacts of STR proliferation. The following sections examine how Washington D.C., Santa Monica, and New York City regulate short-term rentals, and, in doing so, illustrate the boundaries of regulating online platforms.

Washington D.C.’s Host-Liability Model

Most cities regulating short-term rentals utilize a common approach: placing compliance obligations on individual property owners and enforcing violations through traditional municipal oversight. Washington D.C. exemplifies this default model.

Washington, D.C.’s short-term rental law requires hosts to register with the city and obtain a short-term rental license before offering a unit for rent. Hosts are generally limited to operating a single short-term rental associated with their primary residence. Operating without a license or offering an unregistered unit may result in civil penalties or license suspension.

Enforcement authority rests with the city’s Department of Consumer and Regulatory Affairs, which investigates violations through complaints, audits, and reviews of booking activity. The city bears responsibility for identifying noncompliant listings and linking them to individual hosts; penalties are then imposed directly on the hosts who violate the law.

This regulatory model imposes limited duties on booking platforms. Platforms are not required to independently verify license status before allowing a listing to appear; further, these booking services may only be fined for processing a booking when the city has already identified the underlying listing as non-compliant and sent the platform notice. Platforms are required to submit periodic reports to the city identifying short-term rental transactions and associated host identity information to aid the city in identifying unlicensed STRs.

This host-based enforcement model places significant administrative demands on the city’s enforcement entity, requiring the city to identify noncompliant listings, trace them to individual operators, and pursue penalties. Furthermore, because unlawful listings may remain active until discovered, this approach does not guarantee the reduction in short-term rental activity that the regulatory framework seeks to achieve.

Santa Monica’s Platform-Liability Model

In response to the administrative burdens and enforcement limitations associated with a traditional host-based enforcement model, some cities have adopted regulatory frameworks that shift liability for unlicensed STR bookings upstream to the platforms themselves. Santa Monica represents one of the clearest examples of this model.

Santa Monica’s short-term rental ordinance requires hosts to obtain a city-issued license before offering a short-term rental and provides for a municipal registry of all licensed STR hosts. The ordinance makes it unlawful for a booking platform to complete a short-term rental transaction for any host that does not appear on the City’s registry, attaching civil fines for each such transaction.

In contrast to a host-based enforcement model, this regulatory framework has proved successful in realizing desired STR reductions. However, imposing fines on the platforms themselves raises the question of how far municipalities may go in regulating the online platforms that operate in their communities.

That question was addressed in HomeAway.com, Inc. v. City of Santa Monica, where short-term rental platforms Airbnb and HomeAway.com challenged the ordinance, claiming immunity from the fines under Section 230(c)(1) of the Communications Decency Act. Section 230(c)(1) provides online platforms immunity for content posted by third parties. In so doing, it draws a line between the platforms themselves and the third-party “publisher or speaker” of the content. In the platforms’ view, Santa Monica’s ordinance effectively established platform liability for the third-party listing content hosted on the platform.

The Ninth Circuit rejected this argument, holding that the ordinance did not impose liability for publishing or failing to remove third-party content, but instead regulated the platforms’ own commercial conduct by imposing fines when the platforms completed booking transactions for short-term rentals of unregistered properties.

While the courts have upheld Santa Monica’s use of platform liability as a lawful enforcement mechanism, the platform-liability model does not substantially reduce the administrative burden borne by the city. Enforcement still requires the city to identify individual non-compliant transactions and pursue penalties against the platforms that facilitated them.

New York City’s Affirmative Duty to Verify Model

The most aggressive iteration of STR regulation laws is found in New York City’s Local Law 18. Local Law 18, enacted on January 9th, 2022, establishes an automated STR registration verification system. First, an STR host is required to register with the city, which assigns them an STR registration number. Second, the ordinance provides for an electronic verification portal where platforms must submit a prospective host’s STR registration number and receive a confirmation code prior to processing a booking with that host. The ordinance also includes a mandatory reporting requirement directing STR platforms to submit an inventory of all STR transactions completed each month and certify that they received a confirmation code from the city’s verification portal prior to each booking.
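As a rough sketch of where that verification step sits in a platform’s booking flow, consider the following Python; the portal URL, field names, and response format are hypothetical stand-ins, since the ordinance’s actual API specification is not reproduced here.

```python
import requests

# Hypothetical stand-in for the city's electronic verification portal.
VERIFICATION_PORTAL = "https://example-portal.nyc.gov/str/verify"

def verify_then_book(host_registration_number: str, booking: dict) -> dict:
    """Obtain a confirmation code from the portal *before* processing
    the booking, mirroring the ordinance's mandated verification step."""
    resp = requests.post(
        VERIFICATION_PORTAL,
        json={"registration_number": host_registration_number},
        timeout=10,
    )
    resp.raise_for_status()
    code = resp.json().get("confirmation_code")
    if not code:
        # Without a confirmation code, the booking may not proceed.
        raise PermissionError("Verification failed; booking blocked.")
    # The code is retained for the platform's monthly transaction report.
    booking["confirmation_code"] = code
    return booking
```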

This innovative regulatory framework automates compliance, ensuring the desired reduction in STRs is realized while minimizing the administrative burden of enforcement. However, this verification-based model has not yet been directly evaluated under Section 230. Curiously, Airbnb has not chosen to challenge the law under Section 230, instead largely complying with the regulatory regime and focusing its efforts on lobbying. Perhaps the platform has “read the tea leaves” of past lawsuits, such as the aforementioned Santa Monica suit, and determined that when liability is tied to a commercial transaction, platforms cannot claim Section 230 immunity.

There are, however, material differences between the two frameworks. In Santa Monica, liability attaches when a platform completes a booking for a host who is not registered in the City’s STR registry. In New York City, by contrast, liability attaches because the platform failed to perform a mandated verification step prior to the booking, regardless of the host’s registration status. It remains an open question whether this structural shift, which ties liability to a platform’s screening process rather than to underlying host noncompliance, moves closer to treating platforms as “publishers” in a manner that implicates Section 230’s protections.

Conclusion

The ultimate impact of short-term rentals on local housing supply remains unsettled. What is clear, however, is that cities across the country are responding to growing concerns about the effects of STR platforms like Airbnb on housing supply. The result is an ongoing, nationwide case study on how local governments can regulate both short-term rentals and the online platforms that facilitate them. As municipalities continue to experiment with regulatory regimes, the legal boundaries emerging from these efforts may influence the future of platform regulation far beyond the housing context.

#ShortTermRentals #HousingPolicy #PlatformRegulation

Jody Allen and the Future of the Seahawks: A Week of Legal Confusion

By: Thomas Oatridge

Media Reports and Conflicting Narratives About a Seahawks Sale

Just days before Super Bowl LX between the Seattle Seahawks and the New England Patriots was set to kick off, it was reported that the Seattle franchise would be back on the market for the first time in nearly three decades, in a deal estimated to close at around $7 to $8 billion. Paul Allen, the Seahawks’ longtime owner and a co-founder of Microsoft, passed away in 2018. Prior to his death, Allen established a trust encompassing most of his assets and appointed his sister, Jody Allen, to be the personal representative of the estate and trustee to oversee the eventual sale of the trust’s assets, including the Seattle Seahawks. Although it is widely understood that the trust documents do not impose a specific timeline for selling the team, ESPN reported that the franchise would soon be put on the market. The Paul G. Allen Trust promptly issued a statement dismissing the report as rumor and stating unequivocally that “the team is not for sale.” Adding to the speculation, the Wall Street Journal reported that the NFL had issued the Seahawks a $5 million fine for being out of compliance with ownership requirements. However, NFL Commissioner Roger Goodell denied those allegations shortly after the reports surfaced.

Days after the initial story broke, Yahoo Sports released an article outlining the confusion, while simultaneously creating more of it by reporting contradictory statements about Washington State trust and estate law. The article opens by asserting that the estate mandates the Seahawks will “eventually” be sold. It then quotes a local Seattle sportswriter who claims that “when the estate comes around and says, ‘you got to sell the team,’ she has to sell the team,” because “her job is to carry out the will of the estate.” Yet the same article also reports that the estate’s governing documents never set a specific timeline for selling its assets. The week leading up to the Super Bowl has underscored the need to ask more precise legal questions, rather than accepting the latest rumor as a statement of law.

The Legal Pressure Point: NFL Ownership Rules

To frame our legal analysis and fairly characterize Yahoo Sports’ interpretation, it is important to point out the key legal risk the Paul G. Allen Trust assumes by deferring the sale of the Seahawks. The National Football League’s bylaws are clear and unambiguous regarding ownership structure, mandating that the majority stakeholder be an individual rather than a trust. Additionally, all controlling owners must maintain a 30% ownership stake in their respective teams. It is possible that this contractual obligation to the league will trigger a sale of the team earlier than the trustee of the Paul G. Allen Trust, Jody Allen, would otherwise prefer. The aforementioned stories by ESPN and the Wall Street Journal may in fact be pointing to this as the likely outcome, especially given the recent announcement that the estate agreed to sell the Trail Blazers to the majority stakeholder of the Carolina Hurricanes for $4 billion.

Does Washington State Law Require an Immediate Sale?

Contractual obligations to the NFL are only part of the legal picture. In accordance with Paul Allen’s will, his sister Jody was appointed personal representative to probate his estate and was also given the role of trustee of the Paul G. Allen Trust. Trust and estate law must therefore be considered to properly understand this situation. Under the Revised Code of Washington (RCW), a trust is created by the transfer of property to a trustee to carry out the terms of the trust. Personal representatives and trustees owe functionally identical fiduciary duties, such as administering the trust solely in the interests of the beneficiaries, keeping the beneficiaries reasonably informed, managing assets properly, and avoiding self-dealing for personal benefit. In a 2022 interview, Jody Allen indicated the estate could take 10–20 years to unwind due to its complexity and size. Thus, if there is no reason to doubt the validity of this claim and no established deadline for the sale of the trust’s assets, it is hard to say what would trigger a breach of fiduciary duty if the Seahawks are not sold within the NFL’s preferred timeline. Furthermore, given that Jody Allen is both the personal representative of the estate and the trustee of the Paul G. Allen Trust, it is unlikely the estate will “come knocking” to force her to sell the team either.

When a Sale Could Become Legally Problematic Under Washington State Law

There is, however, a scenario in which Jody Allen could be found in breach of her fiduciary duties as personal representative and trustee. According to Yahoo Sports, their source discussed a rumor of “Allen and a bunch of her affluent friends at Seattle-based companies Microsoft and Amazon coming in and buying the team from her brother’s trust.” If this rumor turns out to be true, Jody Allen could expose herself to the risk of breaching her fiduciary duties through self-dealing. Self-dealing occurs when a trustee enters into a sale, encumbrance, or other transaction involving the investment or management of trust property for the trustee’s own personal account, or one that is otherwise affected by a conflict between the trustee’s fiduciary and personal interests. In 2018, a Washington State appeals court affirmed a lower court’s decision to block a personal representative’s sale of estate assets to himself because it breached his fiduciary duties via self-dealing. However, if Jody Allen decides to move forward with a sale of the Seahawks to herself, Washington State law allows three exceptions to this doctrine: waiver by the trust instrument, waiver by the beneficiaries, or permission from the court.

Conclusion

At present, there is no indication that Jody Allen or the Paul G. Allen Trust is under any immediate legal obligation to sell the Seattle Seahawks. If a sale occurs in the near term, it is more likely to stem from contractual obligations to the NFL than from any requirement imposed by Washington State law. Absent meaningful pressure from the NFL, the timing of any sale remains largely within the discretion of Jody Allen as trustee of the Paul G. Allen Trust.

#Seahawks #JodyAllen #TrustAndEstateLaw #WJLTA

Beyond the Billable Hour: How AI is Forcing Legal Pricing Reform

By: Joyce Jia

Pricing reform to replace billable hours has long been debated in the legal industry. Yet as software companies increasingly shift toward outcome-based pricing with AI agents’ assistance, charging only when measurable value is delivered, the legal profession remains anchored in time-based billing and has been slow to translate technological adoption into pricing change. The recently released Thomson Reuters Institute’s 2026 Report on the State of the US Legal Market (“2026 Legal Market Report”) revealed that average law firm spending on technology grew “an astonishing 9.7% … over the already record growth of 2024,” while “a full 90% of all legal dollars still flow through standard hourly rate arrangements.” This growing disconnect between technological investment and monetization reflects not merely a billing challenge, but a deeper crisis in how legal value is defined, allocated, and captured in the AI era.

How Did We Get Here?

The billable hours system wasn’t always dominant. As documented by Thomson Reuters Institute’s James W. Jones, hourly billing emerged in the 20th century but remained relatively peripheral until the 1970s, when the rapid growth of corporate in-house legal departments demanded standardized fees and greater transparency from outside counsels’ previously “amorphous” billing practices. The logic was straightforward: time equaled work, work equaled measurable productivity, and productivity justified legal spending for in-house departments (and conversely, profitability for law firms).

That logic, however, is increasingly strained. As AI creates what Clio CEO Jack Newton describes as a “structural incompatibility” with hourly billing, the revenue model built on time becomes increasingly unjustifiable. According to Thomson Reuters’ 2025 Legal Department Operations Index, corporate legal departments face mounting pressure to “do more with less.” Nearly three-quarters of respondents plan to deploy advanced technology to automate legal tasks and reduce costs, while one-quarter are expanding their use of alternative fee arrangements (AFAs) to optimize operations and control costs. As the 2026 Legal Market Report observes, general counsels now scrutinize matter budgets line by line. Seeing their own teams leverage AI to perform routine work “at a fraction of the cost,” they question why outside counsels charging premium hourly rates are not delivering comparable efficiencies. Unsurprisingly, corporate legal departments have led their outside firms in AI adoption since 2022.

Is AI a “Margin Eroder or Growth Accelerator”?  

Research by Professor Nancy Rapoport and Legal Decoder founder Joseph Tiano frames this tension as a central paradox of AI adoption. When an attorney completes a discovery review using AI in 8 hours instead of 40, firm revenue for that work could theoretically drop by 80 percent under the hourly model, even as client outcomes improve. This appears to be a productivity trap: AI-driven efficiency directly cannibalizing revenue. But this framing is overly narrow. With careful design, restructuring billing models around technology-enabled premiums need not shrink revenue; instead, it can enhance productivity while strengthening client trust through greater transparency and efficiency. It also enables a more equitable sharing of the benefits of technological advancement and a more deliberate allocation of the risks inherent in legal matters.
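The arithmetic behind that trap is worth making explicit. Here is a back-of-the-envelope sketch, assuming a purely hypothetical $500 hourly rate:

```python
RATE = 500  # hypothetical hourly rate, for illustration only

revenue_manual = 40 * RATE   # $20,000 for the 40-hour review
revenue_with_ai = 8 * RATE   # $4,000 for the same review done in 8 hours

drop = 1 - revenue_with_ai / revenue_manual
print(f"Hourly-model revenue falls by {drop:.0%}")  # prints: 80%

# An outcome-based fee decouples revenue from hours: any fixed fee
# between $4,000 and $20,000 leaves the firm better off than AI-era
# hourly billing while still saving the client money.
```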

Recapturing the Lost Value of Legal Inefficiencies

According to the Thomson Reuters Institute’s 2023 research on billing practices, the average law firm partner writes down over 300 hours annually, nearly $190,000 in lost potential fees. These write-offs typically involve learning curves in unfamiliar legal areas, time-intensive research, drafting various documents and meeting notes, or correcting associates’ work. Partners often decline to bill clients for such work when it exceeds anticipated time expectations, even though it remains billable in principle. This is precisely where AI excels. By reducing inefficiencies and accelerating routine tasks, AI allows firms to recapture written-off value while offering clients more predictable budgets and higher-quality outputs. 

Justifying Higher Hourly Rates Through AI-Enhanced Value

Paradoxically, AI may also support higher hourly rates for certain categories of legal work. As Rapoport and Tiano argue, AI enables lawyers to deliver “unprecedented insights” through deeper, more comprehensive, and more reliable analysis. By rapidly synthesizing historical case data, identifying patterns, and predicting outcomes, AI may elevate legal judgment in ways that time and cost constraints previously rendered impractical. In this context, premium rates can remain justifiable for complex, strategic work where human judgment and client relationships prove irreplaceable.

Extending Contingency (Outcome-Based) Fees Beyond Litigation

Beyond traditional litigation contingency fees, Rapoport and Tiano identify “disputes, enforcement actions, or complex transactions” as areas ripe for outcome-based pricing, where firms can “shoulder more risk for greater upside.” The term “disputes” may be understood broadly to encompass arbitration, debt collection, and employment-related conflicts, such as discrimination or wage claims.

An even more underexplored application lies in regulatory compliance, a domain characterized by binary and verifiable outcomes. Unlike litigation success or transactional value, compliance outcomes present even clearer metrics: GDPR compliance versus violation, SOX compliance versus deficiency, patent prosecution approval versus rejection. This creates opportunities for compliance-as-a-service models that charge for compliance or certification outcomes rather than hours worked. Where AI enables systematic, scalable review, risk allocation becomes explicit: the firm guarantees compliance, and the client pays a premium above hourly equivalents for that assurance.
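One way to picture that explicit risk allocation is as a simple fee function. The sketch below is a hypothetical illustration of the idea; the premium multiplier and all figures are invented for the example.

```python
def compliance_fee(hours_worked: float, hourly_rate: float,
                   certified: bool, premium: float = 1.5) -> float:
    """Outcome-based compliance pricing: the client pays a premium
    over the hourly equivalent only if certification is achieved;
    otherwise the firm absorbs its own costs and bears the risk."""
    return hours_worked * hourly_rate * premium if certified else 0.0

# A hypothetical GDPR certification engagement: 100 hours at $400/hour
# would bill $40,000 hourly, but $60,000 under the outcome model.
print(compliance_fee(100, 400, certified=True))   # 60000.0
print(compliance_fee(100, 400, certified=False))  # 0.0
```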

New Revenue Streams in the AI Era

The rise of data-driven AI also creates entirely new categories of legal work. As Rapoport and Tiano identify, “AI governance policy and advisories, algorithmic bias audits, data privacy by design” all represent emerging and durable revenue streams. Moreover, as AI regulatory frameworks continue to evolve across jurisdictions, clients will increasingly seek counsel for these specialized services, where interdisciplinary expertise at the intersection of law and technology, combined with sound professional judgment and strategic foresight, remains indispensable for navigating both compliance obligations and long-term risk.

The Hybrid Solution: Tiered Value Frameworks

Forward-thinking firms are increasingly experimenting with hybrid AFAs that blend fixed fees, subscriptions, outcome-based pricing, and legacy hourly billing into tiered value offerings. Ultimately, the legal industry’s pricing transformation is not solely about technology. It is about candidly sharing the gains created by technology and confronting how risk should be allocated when AI reshapes legal work.

As AI simultaneously frees lawyers’ time and creates new revenue opportunities, law firms face a defining challenge: articulating, quantifying, and operationalizing a value-and-risk allocation framework capable of replacing the billable hour and sustaining the economics of legal practice for the next generation.

Across Nations, Across Identities: Why Deepfake Victims are Left Without Remedies

By: Hanan Fathima

When a deepfake video of former President Barack Obama appeared in 2018, the public was stunned: this was not just clever editing, but a wake-up call. AI-generated content has become hyper-realistic and often indistinguishable from non-AI-generated content. Deepfakes are highly realistic AI-generated media that can imitate a person’s appearance and voice through technologies like generative adversarial networks (GANs). We have entered a digital era in which every piece of media demands scrupulous scrutiny, raising questions about regulation and justice. Different jurisdictions have adopted varying approaches to deepfake regulation, with countries like the US, the UK, and EU member states emphasizing international frameworks for deepfakes, while countries like China and Russia prefer digital sovereignty. A key challenge is navigating the jurisdictional gaps in deepfake laws and regulations.

The Global Surge in Deepfake-Driven Crimes

Deepfake phishing and fraud cases have escalated at an alarming rate, recording a 3000% surge since 2022. In 2024, attempts to create deepfake content occurred every five minutes. This sharp escalation in global deepfake activity is alarming, particularly given the potential for deepfakes to manipulate election outcomes, fabricate non-consensual pornographic content, and facilitate sextortion scams. Deepfake criminals exploit gaps in cross-border legal systems, which allow them to evade liability and continue their schemes with reduced risk. Because national laws are misaligned and international frameworks remain limited, victims of deepfake crimes face an uphill battle for justice. Combined with limited judicial precedent, tracing and prosecuting offenders has proved a massive challenge for many countries.

When Crime Crosses Borders and Laws Don’t

One striking example is a Hong Kong deepfake fraud case in which scammers impersonated a company’s chief financial officer using an AI-generated video in a conference call, duping an employee into transferring HK$200 million (~US$25 million). Investigators uncovered a complex web of stolen identities and bank accounts spread across multiple countries, complicating the tracing and recovery of funds. This case underscores the need for international cooperation, standardized laws and regulations, and a robust legal framework for AI-related deepfake crimes in order to effectively combat the growing threat of deepfake fraud.

At a national level, there have been efforts to address these challenges. One example is the U.S. federal TAKE IT DOWN Act 2025, which criminalizes the distribution of non-consensual private deepfake images and mandates prompt removal upon request. States like Tennessee have enacted the ELVIS Act 2024, which protects individuals against use of their voice and likeness in deepfake content, while Texas and Minnesota have introduced laws criminalizing election-related deepfakes to preserve democratic integrity. Similarly, Singapore passed the Elections (Integrity of Online Advertising) (Amendment) Bill to safeguard against misinformation during election periods. China’s Deep Synthesis Regulation 2025 regulates deepfake technology and services, placing responsibility on both platform providers and end-users.

On an international scale, the European Union’s AI Act is among the first comprehensive legal frameworks to tackle AI-generated content. It calls for transparency and accountability, and emphasizes labelling AI-manipulated media rather than outright bans.

However, these laws are region-specific and thus rely on international and regional cooperation frameworks like MLATs and multilateral partnerships for prosecuting foreign perpetrators. A robust framework must incorporate cross-border mechanisms such as provisions for extraterritorial jurisdiction and standardized enforcement protocols to address jurisdictional gaps in deepfake crimes. These mechanisms could take the form of explicit cooperation protocols under conventions like the UN Cybercrime Convention, with strict timelines for MLAT procedures, and regional agreements on joint investigations and evidence-sharing.

How Slow International Processes Enable Offender Impunity

The lack of concrete laws, and thus of concrete relief mechanisms, means victims of deepfake crimes face multiple barriers in accessing justice. When cases involve multiple jurisdictions, investigations and prosecutions often rely on Mutual Legal Assistance Treaty (MLAT) processes. Mutual Legal Assistance is “a process by which states seek and provide assistance in gathering evidence for use in criminal cases,” as defined by the United Nations Office on Drugs and Crime (2018). MLATs are the primary mechanism for cross-border cooperation in criminal proceedings. Unfortunately, victims may experience delays in international investigations and prosecutions due to the slow and cumbersome processes associated with MLATs. Moreover, the process has its own limitations, such as human rights concerns, conflicting national interests, and data privacy issues. According to the Interpol Africa Cyberthreat Assessment Report 2025, requests for Mutual Legal Assistance (MLA) can take months, severely delaying justice and often allowing offenders to escape international accountability.

Differing legal standards and enforcement mechanisms across countries make criminal proceedings related to deepfake crimes difficult. On a similar note, cloud platforms and social media companies hosting deepfake content may be registered in countries with weak regulations or limited international cooperation, making it harder for authorities to remove content or obtain evidence.

The Human Cost of Delayed Justice

The psychological and social impacts on victims are profound. The maxim “justice delayed is justice denied” is particularly relevant: delays in legal recourse mean the victim’s suffering is prolonged, often in the form of reputational harm, long-term mental health problems, and career setbacks. Victims of cross-border deepfake crimes may therefore hesitate to report or pursue legal action, and they are further deterred by language, cultural, or economic barriers. Poor transparency in enforcement creates mistrust in international legal systems and marginalizes victims, weakening deterrence.

Evolving International Law on Cross-Border Jurisdiction

There have been years of opinions and debates over the application of international law to cybercrime and whether it conflicts with cyber sovereignty. The Council of Europe’s 2024 AI Policy Summit highlighted the need for global cooperation in law enforcement investigations and prosecutions and reaffirmed the role of cooperation channels like MLATs. Calls for a multilateral AI research institute were made in the 2024 UN Security Council debate on AI governance. More recently, at the 2025 AI Action Summit, discussions focused on research, the transformative capability of AI, and the regulation of such technology; discussion of cybercrime and jurisdiction was limited.

In 2024, the UN Convention Against Cybercrime addressed AI-based cybercrimes, including deepfakes, emphasizing electronic evidence sharing between countries and cooperation between states on extradition requests and Mutual Legal Assistance. The convention also allows states to establish jurisdiction over offences committed against their nationals regardless of where the offense occurred. However, challenges in implementation persist, as a number of nations, including the United States, have yet to ratify the convention.

Towards a Coherent Cross-Border Response

Addressing the complex jurisdictional challenges posed by cross-border deepfake crimes requires a multi-faceted approach that combines legal reforms, international collaboration, technological innovations, and victim-centered mechanisms. First, Mutual Legal Assistance Treaties (MLATs) must be streamlined with standardized request formats, clearer evidentiary requirements, and dedicated cybercrime units to reduce delays. Second, national authorities need stronger digital forensic and AI-detection capabilities, including investment in deepfake-verification tools like blockchain-based tracing techniques. Third, generative AI platforms must be held accountable, with mandates for detection systems and prompt takedown obligations. However, since these rules vary regionally, platforms do not face the same responsibilities everywhere, underscoring the need for all countries to adopt consistent standards for platforms. Fourth, nations must play an active role in multilateral initiatives and bilateral agreements targeting cross-border cybercrime, supporting the creation of global governance frameworks for extraterritorial jurisdiction over cybercrimes like deepfakes. While countries like the United States, the UK, EU members, and Japan are active participants in international AI governance initiatives, many developing countries are excluded from these discussions. Countries like Russia and China have also resisted UN cybercrime treaties, citing sovereignty values. Notably, despite being a global leader in AI innovation, the US has also not ratified the 2024 UN Convention Against Cybercrime. Finally, a victim-centered approach, through legal aid services and compensation mechanisms, is essential to ensure that victims are not left to navigate these complex jurisdictional challenges alone.

While deepfake technology has the potential to drive innovation and creativity, its rampant misuse has led to unprecedented avenues for crimes that transcend national borders and challenge existing legal systems. Bridging these jurisdictional and technological gaps is essential for building a resilient and robust international legal framework that is capable of combating deepfake-related crimes and offering proper recourse for victims.