Reclaiming Urban Housing: A Case Study on Regulating Online Platforms

By: Matt Unutzer

Sky-high rents are a defining feature of modern urban life. Among the many forces blamed for rising housing costs, one issue has drawn sustained regulatory attention: the conversion of long-term housing into short-term rentals (STRs) listed on platforms such as Airbnb and VRBO. Critics argue that when apartments and homes are diverted into the short-term market, overall housing supply shrinks, placing upward pressure on rents and home prices. In response, cities across the country have spent the last decade experimenting with new regulatory frameworks aimed at curbing the perceived housing impacts of STR proliferation. The following sections examine how Washington D.C., Santa Monica, and New York City regulate short-term rentals, and, in doing so, illustrate the boundaries of regulating online platforms.

Washington D.C.’s Host-Liability Model

Most cities regulating short-term rentals take a common approach: placing compliance obligations on individual property owners and policing violations through traditional municipal oversight. Washington D.C. exemplifies this default model.

Washington, D.C.’s short-term rental law requires hosts to register with the city and obtain a short-term rental license before offering a unit for rent. Hosts are generally limited to operating a single short-term rental associated with their primary residence. Operating without a license or offering an unregistered unit may result in civil penalties or license suspension.

Enforcement authority rests with the city’s Department of Consumer and Regulatory Affairs, which investigates violations through complaints, audits, and reviews of booking activity. The city bears responsibility for identifying noncompliant listings and linking them to individual hosts; once a violation is established, penalties are imposed directly on the offending host.

This regulatory model imposes limited duties on booking platforms. Platforms are not required to independently verify license status before allowing a listing to appear; further, these booking services may only be fined for processing a booking when the city has already identified the underlying listing as non-compliant and sent the platform notice. Platforms are required to submit periodic reports to the city identifying short-term rental transactions and associated host identity information to aid the city in identifying unlicensed STRs.

This host-based enforcement model places significant administrative demands on the city’s enforcement entity, requiring the city to identify noncompliant listings, trace them to individual operators, and pursue penalties. Furthermore, because unlawful listings may remain active until discovered, this approach does not guarantee the reduction in short-term rental activity that the regulatory framework seeks to achieve.

Santa Monica’s Platform-Liability Model

In response to the administrative burdens and enforcement limitations associated with a traditional host-based enforcement model, some cities have adopted regulatory frameworks that shift liability for unlicensed STR bookings upstream to the platforms themselves. Santa Monica represents one of the clearest examples of this model.

Santa Monica’s short-term rental ordinance requires hosts to obtain a city-issued license before offering a short-term rental and provides for a municipal registry of all licensed STR hosts. The ordinance makes it unlawful for a booking platform to complete a short-term rental transaction for any host that does not appear on the City’s registry, attaching civil fines for each such transaction.

In contrast to the host-based enforcement model, this framework has proved more effective at realizing the desired STR reductions. However, imposing fines on the platforms themselves raises the question of how far municipalities may go in regulating the online platforms that operate in their communities.

That question was addressed in HomeAway.com, Inc. v. City of Santa Monica, where short-term rental platforms Airbnb and HomeAway.com challenged the ordinance, claiming immunity from its fines under Section 230(c)(1) of the Communications Decency Act. Section 230(c)(1) provides that an online platform may not be treated as the “publisher or speaker” of content posted by third parties, drawing a line between the platform itself and the third parties who supply the content it hosts. In the platforms’ view, Santa Monica’s ordinance effectively imposed liability on them for the third-party listing content they hosted.

The Ninth Circuit rejected this argument, holding that the ordinance did not impose liability for publishing or failing to remove third-party content, but instead regulated the platforms’ own commercial conduct by imposing fines when the platforms completed booking transactions for short-term rentals of unregistered properties.

While the courts have upheld Santa Monica’s use of platform liability as a lawful enforcement mechanism, the platform-liability model does not substantially reduce the administrative burden borne by the city. Enforcement still requires the city to identify individual non-compliant transactions and pursue penalties against the platforms that facilitated them.

New York City’s Affirmative Duty to Verify Model

The most aggressive iteration of STR regulation is found in New York City’s Local Law 18. Enacted on January 9, 2022, Local Law 18 establishes an automated STR registration verification system. First, an STR host must register with the city, which assigns the host an STR registration number. Second, the ordinance provides for an electronic verification portal through which platforms must submit a prospective host’s STR registration number and receive a confirmation code before processing a booking with that host. The ordinance also imposes a mandatory reporting requirement directing STR platforms to submit an inventory of all STR transactions completed each month and to certify that they received a confirmation code from the city’s verification portal prior to each booking.
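
To make the mechanics concrete, the sketch below illustrates the kind of pre-booking check Local Law 18 contemplates. It is a hypothetical illustration only: the portal URL, field names, and response format are assumptions, not the city’s actual system.

```python
# Hypothetical sketch of a Local Law 18-style pre-booking check.
# The endpoint, request fields, and response shape are assumptions for illustration.
import requests

VERIFICATION_PORTAL = "https://example.nyc.gov/str/verify"   # placeholder endpoint

def verify_host(registration_number: str, listing_id: str) -> str | None:
    """Submit the host's STR registration number and return the city's confirmation
    code, or None if verification fails (in which case the booking must not proceed)."""
    resp = requests.post(
        VERIFICATION_PORTAL,
        json={"registration_number": registration_number, "listing_id": listing_id},
        timeout=10,
    )
    if resp.status_code == 200:
        return resp.json().get("confirmation_code")
    return None

def process_booking(registration_number: str, listing_id: str) -> dict:
    code = verify_host(registration_number, listing_id)
    if code is None:
        raise ValueError("No confirmation code issued; booking blocked under Local Law 18.")
    # Retain the code: monthly reports must certify it was obtained before each booking.
    return {"listing_id": listing_id, "confirmation_code": code, "status": "booked"}
```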

This innovative regulatory framework automates compliance, helping to realize the desired reduction in STRs while minimizing the administrative burden of enforcement. However, this verification-based model has not yet been directly evaluated under Section 230. Curiously, Airbnb has chosen not to challenge the law on Section 230 grounds and has instead largely complied with the regulatory regime, focusing its efforts on lobbying. Perhaps the platform has “read the tea leaves” of past lawsuits, such as the aforementioned Santa Monica suit, and determined that when liability is tied to a commercial transaction, platforms cannot claim Section 230 immunity.

There are, however, material differences between the two frameworks. In Santa Monica, liability attaches when a platform completes a booking for a host who is not registered in the City’s STR registry. In New York City, by contrast, liability attaches when the platform fails to perform a mandated verification step prior to the booking, regardless of the host’s registration status. It remains an open question whether this structural shift, which ties liability to a platform’s screening process rather than to underlying host noncompliance, moves closer to treating platforms as “publishers” in a manner that implicates Section 230’s protections.

Conclusion

The ultimate impact of short-term rentals on local housing supply remains unsettled. What is clear, however, is that cities across the country are responding to growing concerns about the effects of STR platforms like Airbnb on housing supply. The result is an ongoing, nationwide case study on how local governments can regulate both short-term rentals and the online platforms that facilitate them. As municipalities continue to experiment with regulatory regimes, the legal boundaries emerging from these efforts may influence the future of platform regulation far beyond the housing context.

#ShortTermRentals #HousingPolicy #PlatformRegulation

Jody Allen and the Future of the Seahawks: A Week of Legal Confusion

By: Thomas Oatridge

Media Reports and Conflicting Narratives About a Seahawks Sale

Just days before Super Bowl LX between the Seattle Seahawks and the New England Patriots was set to kick off, it was reported that the Seattle franchise would be back on the market after nearly three decades, in a sale estimated to close at around $7 to $8 billion. Paul Allen, the Seattle Seahawks’ longtime owner and a co-founder of Microsoft, passed away in 2018. Prior to his death, Allen established a trust encompassing most of his assets and appointed his sister, Jody Allen, as personal representative of the estate and trustee to oversee the eventual sale of the trust’s assets, including the Seattle Seahawks. Although it is widely understood that the trust documents do not impose a specific timeline for selling the team, ESPN reported that the franchise would soon be put on the market. The Paul G. Allen Trust promptly issued a statement dismissing the report as rumor and stating unequivocally that “the team is not for sale.” Adding to the speculation, the Wall Street Journal reported that the NFL issued the Seahawks a $5 million fine for being out of compliance with ownership requirements. However, NFL Commissioner Roger Goodell denied such allegations shortly after these reports surfaced.

Days after the initial story broke, Yahoo Sports released an article outlining the confusion while simultaneously adding to it by reporting contradictory statements about Washington State trust and estate law. The article opens by asserting that the estate mandates the Seahawks will “eventually” be sold. It then quotes a local Seattle sportswriter who claims that “when the estate comes around and says, ‘you got to sell the team,’ she has to sell the team,” because “her job is to carry out the will of the estate.” Yet, as the same article had reported only moments earlier, the estate’s governing documents never set a specific timeline for selling its assets. The week leading up to the Super Bowl has underscored the need to ask more precise legal questions rather than accepting the latest rumor as a statement of law.

The Legal Pressure Point: NFL Ownership Rules

To frame our legal analysis and fairly characterize Yahoo Sports’ interpretation, it is important to identify the key legal risk the Paul G. Allen Trust assumes by deferring the sale of the Seahawks. The National Football League’s bylaws are clear and unambiguous regarding ownership structure, mandating that the majority stakeholder be an individual rather than a trust. Additionally, all controlling owners must maintain at least a 30% ownership stake in their respective teams. It is possible that this contractual obligation to the league will trigger a sale of the team earlier than the trustee of the Paul G. Allen Trust, Jody Allen, would have otherwise preferred. The aforementioned stories by ESPN and the Wall Street Journal may in fact be pointing to this as the likely outcome, especially given the recent announcement that the estate agreed to sell the Trail Blazers to the majority stakeholder of the Carolina Hurricanes for $4 billion.

Does Washington State Law Require an Immediate Sale?

Contractual obligations to the NFL are only part of the legal picture. In accordance with Paul Allen’s will, his sister Jody was appointed personal representative to probate his estate. She was also named trustee of the Paul G. Allen Trust. Trust and estate law must therefore be considered to properly understand this situation. Under the Revised Code of Washington (RCW), a trust is created by the transfer of property to a trustee to carry out the terms of the trust. Personal representatives and trustees owe functionally identical fiduciary duties, such as administering the trust solely in the interests of the beneficiaries, keeping the beneficiaries reasonably informed, managing assets prudently, and avoiding self-dealing for personal benefit. In a 2022 interview, Jody Allen indicated the estate could take 10–20 years to unwind due to its complexity and size. If there is no reason to doubt that claim and no established deadline for the sale of the trust’s assets, it is hard to identify what would trigger a breach of fiduciary duty if the Seahawks are not sold within the NFL’s preferred timeline. Furthermore, given that Jody Allen is both the personal representative of the estate and the trustee of the Paul G. Allen Trust, it is unlikely the estate will “come knocking” to force her to sell the team.

When a Sale Could Become Legally Problematic Under Washington State Law

There is, however, a scenario in which Jody Allen could be found in breach of her fiduciary duty as personal representative of the estate and trustee. According to Yahoo Sports, their source discussed a rumor of “Allen and a bunch of her affluent friends at Seattle-based companies Microsoft and Amazon coming in and buying the team from her brother’s trust.” If this rumor turns out to be true, Jody could open herself up to the risk of breaching her fiduciary duties through self-dealing. Self-dealing occurs when a trustee enters into a sale, encumbrance, or other transaction involving the investment or management of trust property for the trustee’s own personal account, or which is otherwise affected by a conflict between the trustee’s fiduciary and personal interests. In 2018, a Washington State appeals court affirmed a lower court’s decision to block the sale of estate assets by a personal representative to himself because it breached his fiduciary duties via self-dealing. However, if Jody Allen decides to move forward with a sale of the Seahawks to herself, Washington State law recognizes three exceptions to this doctrine: waiver by the trust instrument, waiver by the beneficiaries, or permission from the court.

Conclusion

At present, there is no indication that Jody Allen or the Paul G. Allen Trust is under any immediate legal obligation to sell the Seattle Seahawks. If a sale occurs in the near term, it is more likely to stem from contractual obligations to the NFL than from any requirement imposed by Washington State law. Absent meaningful pressure from the NFL, the timing of any sale remains largely within the discretion of Jody Allen as trustee of the Paul G. Allen Trust.

#Seahawks #JodyAllen #TrustAndEstateLaw #WJLTA

Beyond the Billable Hour: How AI is Forcing Legal Pricing Reform

By: Joyce Jia

Pricing reform to replace billable hours has long been debated in the legal industry. Yet as software companies increasingly shift toward outcome-based pricing for work performed with AI agents’ assistance—charging only when measurable value is delivered—the legal profession remains anchored in time-based billing and has been slow to translate technological adoption into pricing change. The Thomson Reuters Institute’s recently released 2026 Report on the State of the US Legal Market (“2026 Legal Market Report”) revealed that average law firm spending on technology grew “an astonishing 9.7% … over the already record growth of 2024,” while “a full 90% of all legal dollars still flow through standard hourly rate arrangements.” This growing disconnect between technological investment and monetization reflects not merely a billing challenge, but a deeper crisis in how legal value is defined, allocated, and captured in the AI era.

How Did We Get Here?

The billable-hour system wasn’t always dominant. As documented by Thomson Reuters Institute’s James W. Jones, hourly billing emerged in the 20th century but remained relatively peripheral until the 1970s, when the rapid growth of corporate in-house legal departments demanded standardized fees and greater transparency from outside counsel’s previously “amorphous” billing practices. The logic was straightforward: time equaled work, work equaled measurable productivity, and productivity justified legal spending for in-house departments (and conversely, profitability for law firms).

That logic, however, is increasingly strained. As AI creates what Clio CEO Jack Newton describes as a “structural incompatibility” with time-based billing, the revenue model built on time becomes increasingly difficult to justify. According to Thomson Reuters’ 2025 Legal Department Operations Index, corporate legal departments face mounting pressure to “do more with less.” Nearly three-quarters of respondents plan to deploy advanced technology to automate legal tasks and reduce costs, while one-quarter are expanding their use of alternative fee arrangements (AFAs) to optimize operations and control costs. As the 2026 Legal Market Report observes, general counsel now scrutinize matter budgets line by line. Seeing their own teams leverage AI to perform routine work “at a fraction of the cost,” they question why outside counsel charging premium hourly rates are not delivering comparable efficiencies. Unsurprisingly, corporate legal departments have led their outside firms in AI adoption since 2022.

Is AI a “Margin Eroder or Growth Accelerator”?  

Research by Professor Nancy Rapoport and Legal Decoder founder Joseph Tiano frames this tension as a central paradox of AI adoption. When an attorney completes a discovery review using AI in 8 hours instead of 40, firm revenue for that matter could theoretically drop by 80 percent under the hourly model even as client outcomes improve. This appears to be a productivity trap: AI-driven efficiency directly cannibalizing revenue. But this framing is overly narrow. With careful design, restructuring billing models around technology-enabled premiums need not shrink revenue; instead, it can enhance productivity while strengthening client trust through greater transparency and efficiency. It also enables a more equitable sharing of the benefits of technological advancement and a more deliberate allocation of the risks inherent in legal matters.
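
A back-of-the-envelope sketch, using purely hypothetical rates and fees, illustrates the arithmetic behind this trap and how a hybrid fee could share the efficiency gain rather than simply cannibalizing revenue:

```python
# Back-of-the-envelope illustration of the "productivity trap" described above.
# All figures are hypothetical assumptions, not taken from the article's sources.

HOURLY_RATE = 500            # assumed blended billing rate, USD per hour

manual_hours = 40            # discovery review performed without AI
ai_hours = 8                 # the same review completed with AI assistance

manual_revenue = HOURLY_RATE * manual_hours   # 40 h x $500 = $20,000
ai_revenue = HOURLY_RATE * ai_hours           # 8 h  x $500 = $4,000

drop = 1 - ai_revenue / manual_revenue
print(f"Hourly revenue falls by {drop:.0%} even though the client gets the same result.")

# A hybrid alternative: bill the reduced hours plus a fixed "technology-enabled" premium,
# so the efficiency gain is shared between firm and client.
tech_premium = 10_000        # hypothetical fixed premium for AI-assisted review
hybrid_revenue = ai_revenue + tech_premium
print(f"Hybrid fee: ${hybrid_revenue:,} vs. ${manual_revenue:,} under the pure hourly model.")
```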

Recapturing the Lost Value of Legal Inefficiencies

According to the Thomson Reuters Institute’s 2023 research on billing practices, the average law firm partner writes down over 300 hours annually, nearly $190,000 in lost potential fees. These write-offs typically involve learning curves in unfamiliar legal areas, time-intensive research, drafting various documents and meeting notes, or correcting associates’ work. Partners often decline to bill clients for such work when it exceeds anticipated time expectations, even though it remains billable in principle. This is precisely where AI excels. By reducing inefficiencies and accelerating routine tasks, AI allows firms to recapture written-off value while offering clients more predictable budgets and higher-quality outputs. 

Justifying Higher Hourly Rates Through AI-Enhanced Value

Paradoxically, AI may also support higher hourly rates for certain categories of legal work. As Rapoport and Tiano argue, AI enables lawyers to deliver “unprecedented insights” through deeper, more comprehensive, and more reliable analysis. By rapidly synthesizing historical case data, identifying patterns, and predicting outcomes, AI may elevate legal judgment in ways that time and cost constraints previously rendered impractical. In this context, premium rates can remain justifiable for complex, strategic work where human judgment and client relationships prove irreplaceable.

Extending Contingency (Outcome-Based) Fees Beyond Litigation

Beyond traditional litigation contingency fees, Rapoport and Tiano identify “disputes, enforcement actions, or complex transactions” as areas ripe for outcome-based pricing, where firms can “shoulder more risk for greater upside.” The term “disputes” may be understood broadly to encompass arbitration, debt collection, and employment-related conflicts, such as discrimination or wage claims.

An even more underexplored application lies in regulatory compliance, a domain characterized by binary and verifiable outcomes. Unlike litigation success or transactional value, compliance outcomes present even clearer metrics: GDPR compliance versus violation, SOX compliance versus deficiency, patent prosecution approval versus rejection. This creates opportunities for compliance-as-a-service models that charge for compliance or certification outcomes rather than hours worked. Where AI enables systematic, scalable review, risk allocation becomes explicit: the firm guarantees compliance, and the client pays a premium above hourly equivalents for that assurance.

New Revenue Streams in the AI Era

The rise of data-driven AI also creates entirely new categories of legal work. As Rapoport and Tiano identify, “AI governance policy and advisories, algorithmic bias audits, data privacy by design” all represent emerging and durable revenue streams. Moreover, as AI regulatory frameworks continue to evolve across jurisdictions, clients will increasingly seek counsel for these specialized services, where interdisciplinary expertise at the intersection of law and technology, combined with sound professional judgment and strategic foresight, remains indispensable for navigating both compliance obligations and long-term risk.

The Hybrid Solution: Tiered Value Frameworks

Forward-thinking firms are increasingly experimenting with hybrid AFAs that blend fixed fees, subscriptions, outcome-based pricing, and legacy hourly billing into tiered value offerings. Ultimately, the legal industry’s pricing transformation is not solely about technology. It is about candidly sharing the gains created by technology and confronting how risk should be allocated when AI reshapes legal work.

As AI simultaneously frees lawyers’ time and creates new revenue opportunities, law firms face a defining challenge: articulating, quantifying, and operationalizing a value-and-risk allocation framework capable of replacing the billable hour and sustaining the economics of legal practice for the next generation.

Across Nations, Across Identities: Why Deepfake Victims are Left Without Remedies

By: Hanan Fathima

When a deepfake video of former President Barack Obama appeared in 2018, the public was stunned—this was not just clever editing, but a wake-up call. AI-generated content has become hyper-realistic and often indistinguishable from authentic media. Deepfakes are highly realistic AI-generated content that can imitate a person’s appearance and voice through technologies like generative adversarial networks (GANs). We have entered an era in which every piece of media demands scrupulous scrutiny, raising questions about regulation and justice in the digital age. Different jurisdictions have adopted varying approaches to deepfake regulation, with countries like the US, the UK, and EU members emphasizing international legal cooperation on deepfakes, while countries like China and Russia prefer digital sovereignty. A key challenge is navigating the jurisdictional gaps in deepfake laws and regulations.

The Global Surge in Deepfake-Driven Crimes

Deepfake phishing and fraud cases have escalated at an alarming rate, recording a 3000% surge since 2022. In 2024, attempts to create deepfake content occurred every five minutes. This sharp escalation in global deepfake activity is particularly troubling given the potential for deepfakes to manipulate election outcomes, fabricate non-consensual pornographic content, and facilitate sextortion scams. Deepfake criminals exploit gaps in cross-border legal systems, which allow them to evade liability and continue their schemes with reduced risk. Because national laws are misaligned and international frameworks remain limited, victims of deepfake crimes face an uphill battle for justice. Combined with limited judicial precedent, tracing and prosecuting offenders has proved to be a massive challenge for many countries.

When Crime Crosses Borders and Laws Don’t

One striking example is a Hong Kong deepfake fraud case in which scammers impersonated a company’s chief financial officer using an AI-generated video in a conference call, duping an employee into transferring HK$200 million (~US$25 million). Investigators uncovered a complex web of stolen identities and bank accounts spread across multiple countries, complicating the tracing and recovery of funds. This case underscores the need for international cooperation, standardized laws and regulations, and robust legal frameworks for AI-related deepfake crimes in order to effectively combat the growing threat of deepfake fraud.

At the national level, there have been efforts to address these challenges. One example is the U.S. federal TAKE IT DOWN Act of 2025, which criminalizes the distribution of non-consensual intimate deepfake images and mandates prompt removal upon request. At the state level, Tennessee has enacted the ELVIS Act of 2024, which protects individuals against use of their voice and likeness in deepfake content, while Texas and Minnesota have introduced laws criminalizing election-related deepfakes to preserve democratic integrity. Similarly, Singapore passed the Elections (Integrity of Online Advertising) (Amendment) Bill to safeguard against misinformation during election periods. China’s Deep Synthesis Regulation regulates deepfake technology and services, placing responsibility on both platform providers and end-users.

On an international scale, the European Union’s AI Act is among the first comprehensive legal frameworks to tackle AI-generated content. It calls for transparency and accountability, and it emphasizes labelling AI-manipulated media rather than imposing outright bans.

However, these laws are region-specific and thus rely on international and regional cooperation frameworks, such as mutual legal assistance treaties (MLATs) and multilateral partnerships, for prosecuting foreign perpetrators. A robust framework must incorporate cross-border mechanisms, such as provisions for extraterritorial jurisdiction and standardized enforcement protocols, to address jurisdictional gaps in deepfake crimes. These mechanisms could take the form of explicit cooperation protocols under conventions like the UN Cybercrime Convention, strict timelines for MLAT procedures, and regional agreements on joint investigations and evidence-sharing.

How Slow International Processes Enable Offender Impunity

The lack of concrete laws, and thus of concrete relief mechanisms, means victims of deepfake crimes face multiple barriers to accessing justice. When cases involve multiple jurisdictions, investigations and prosecutions often rely on Mutual Legal Assistance Treaty (MLAT) processes. Mutual legal assistance is “a process by which states seek and provide assistance in gathering evidence for use in criminal cases,” as defined by the United Nations Office on Drugs and Crime (2018). The MLAT is the primary mechanism used for cross-border cooperation in criminal proceedings. Unfortunately, victims may experience delays in international investigations and prosecutions due to the slow and cumbersome processes associated with MLATs. Moreover, the process has its own set of limitations, such as human rights concerns, conflicting national interests, and data privacy issues. According to Interpol’s Africa Cyberthreat Assessment Report 2025, requests for mutual legal assistance (MLA) can take months, severely delaying justice and often allowing offenders to escape international accountability.

Differing legal standards and enforcement mechanisms across countries make criminal proceedings related to deepfake crimes difficult. On a similar note, cloud platforms and social media companies hosting deepfake content may be registered in countries with weak regulations or limited international cooperation, making it harder for authorities to remove content or obtain evidence.

The Human Cost of Delayed Justice

The psychological and social impacts on victims are profound. The maxim “justice delayed is justice denied” is particularly relevant—delays in legal recourse mean the victim’s suffering is prolonged, often manifesting as reputational harm, long-term mental health problems, and career setbacks. As a result, victims of cross-border deepfake crimes may hesitate to report or pursue legal action, and they are further deterred by language, cultural, or economic barriers. Poor transparency in enforcement creates mistrust in international legal systems and marginalizes victims, weakening deterrence.

Evolving International Law on Cross-Border Jurisdiction

There have been years of debate over how international law applies to cybercrime and whether it conflicts with cyber sovereignty. The Council of Europe’s 2024 AI Policy Summit highlighted the need for global cooperation in law enforcement investigations and prosecutions and reaffirmed the role of cooperation channels like MLATs. Calls for a multilateral AI research institute were made in the 2024 UN Security Council debate on AI governance. More recently, discussions at the 2025 AI Action Summit focused on research, the transformative capability of AI, and the regulation of such technology; discussion of cybercrime and jurisdiction was limited.

In 2024, the UN Convention Against Cybercrime addressed AI-based cybercrimes, including deepfakes, emphasizing electronic evidence sharing between countries and cooperation between states on extradition requests and mutual legal assistance. The convention also allows states to establish jurisdiction over offences committed against their nationals regardless of where the offense occurred. However, challenges in implementation persist, as a number of nations, including the United States, have yet to ratify the convention.

Towards a Coherent Cross-Border Response

Addressing the complex jurisdictional challenges posed by cross-border deepfake crimes requires a multi-faceted approach that combines legal reforms, international collaboration, technological innovation, and victim-centered mechanisms. First, Mutual Legal Assistance Treaties (MLATs) must be streamlined with standardized request formats, clearer evidentiary requirements, and dedicated cybercrime units to reduce delays. Second, national authorities need stronger digital forensic and AI-detection capabilities, including investment in deepfake-verification tools such as blockchain-based tracing techniques. Third, generative AI platforms must be held accountable, with mandates for detection systems and prompt takedown obligations; however, because these rules vary by region, platforms do not face the same responsibilities everywhere, underscoring the need for countries to adopt consistent standards. Fourth, nations must play an active role in multilateral initiatives and bilateral agreements targeting cross-border cybercrime, supporting the creation of global governance frameworks that address the extraterritorial jurisdiction of cybercrimes like deepfakes. While countries like the United States, the UK, EU members, and Japan are active participants in international AI governance initiatives, many developing countries are excluded from these discussions. Countries like Russia and China have also resisted UN cybercrime treaties, citing sovereignty concerns. Notably, despite being a global leader in AI innovation, the US has also not ratified the 2024 UN Convention Against Cybercrime. Finally, a victim-centered approach, through legal aid services and compensation mechanisms, is essential to ensure that victims are not left to navigate these complex jurisdictional challenges alone.

While deepfake technology has the potential to drive innovation and creativity, its rampant misuse has led to unprecedented avenues for crimes that transcend national borders and challenge existing legal systems. Bridging these jurisdictional and technological gaps is essential for building a resilient and robust international legal framework that is capable of combating deepfake-related crimes and offering proper recourse for victims.

The IP Confidentiality Crisis: Why Your Patent Drafts Could Be Training Your Competitor’s AI

By Francis Yoon

The Ultimate Act of Discretion

The process of drafting a patent application is the ultimate act of discretion. Before an invention is filed, its core design, methodology, and advantages are protected as confidential trade secret information. Today, a powerful new tool promises to revolutionize this process: generative AI and large language models (LLMs). These models can instantly transform complex invention disclosures into structured patent claims, saving countless hours. However, when legal professionals feed highly sensitive information into public LLMs like ChatGPT or Gemini, they unwittingly expose their clients’ most valuable intellectual property (IP) to an unprecedented security risk. This convenience can create a massive, invisible information leak, turning a law firm’s desktop into a prime data source for the very AI models they rely on.

The Black Box: How Confidentiality is Broken

The core danger lies in how these AI systems learn and the resulting threat to patent novelty under 35 U.S.C. § 102, which requires that an invention be new and not previously known or publicly disclosed. When a user submits text to a public LLM, that input often becomes part of the model’s training data or is used to improve its services. Confidential patent information fed into the model for drafting assistance may be logged, analyzed, and integrated into the model’s knowledge base. This risk is formalized in the providers’ terms of service.

While enterprise-level accounts offered by companies like OpenAI or Google typically promise not to use customer input for training by default, free or standard professional tiers usually lack this guarantee unless users proactively opt out. If a lawyer uses a personal subscription to draft a patent claim, they may inadvertently transmit a client’s IP directly to a third-party server, violating their professional duty of care and duty of confidentiality, while also potentially exposing their firm to a professional malpractice claim. This conflict establishes the central legal issue: the reliance on public AI creates a massive “Black Box” problem. The invention is disclosed to an opaque system whose ultimate use of that data is neither verifiable nor auditable by the user.

The Novelty Risk: AI as Inadvertent Prior Art

Beyond breaching confidentiality, this practice also fundamentally endangers patentability by jeopardizing the invention’s novelty. Novelty is a fundamental requirement for patentability, which is the legal status an invention must achieve to receive patent protection. The most critical risk is inadvertent public disclosure, which creates prior art—any evidence that an invention is already known or publicly available—and thus invalidates the patent. Once an invention’s confidential details are used to train a widely accessible public model, it may no longer be considered “new” or “secret.” This action could be interpreted as a public disclosure—the invention’s core teaching has been shared with a third party (the AI system) under terms that do not guarantee perpetual confidentiality. This could destroy the invention’s novelty and the potential for trade secret protection. Furthermore, generative AI can be prompted to generate vast amounts of plausible technical variations based on a limited technical disclosure. If these AI-generated outputs are published, they can become valid prior art. A human inventor’s subsequent application may be rejected because the AI has, in theory, already publicly disclosed a similar concept, rendering the human’s invention unpatentable as non-novel or obvious.

The Legal Hot Potato: IP vs. Contract

When confidentiality is breached through a public AI model, recovering the invention is extremely difficult. If a client’s trade secret is exposed, the client loses the protection entirely, as the secret is no longer “not generally known.” Suing the LLM provider for trade secret misappropriation requires proving that the provider improperly acquired the secret and used it against the owner’s interests. This is challenging because the provider’s legal team can argue the input was authorized under the contractual terms accepted by the user. The attorney who entered the prompt is typically held liable for the breach of confidence. However, the firm has no clear recourse against the LLM provider, as the provider’s liability is severely limited by contract. Often, these liability-limiting clauses cap damages at a minimal amount or specifically disclaim liability for consequential damages, like intellectual property loss. The fragmentation of this liability leaves the inventor exposed while the AI company is shielded by its own terms.

To combat this systemic problem, legal scholars have advocated for imposing a duty of loyalty on tech companies, forcing them to legally prioritize user confidentiality above their own financial interests. This echoes the mandates found in modern privacy law, such as the California Consumer Privacy Act’s rules on the consumers’ right to access information about automated decision-making technology.

Mitigating the Risk: A Confidentiality and Novelty Checklist

Legal teams should adopt a “trust-nothing” protocol to utilize generative AI responsibly. They should implement clear guidelines prohibiting the use of public LLMs to generate, summarize, or analyze any client or company information that constitutes a trade secret or that could become prior art if disclosed.

Crucially, professionals should never submit a confidential invention disclosure to an AI system before filing a formal provisional patent application with the relevant patent office. A provisional patent application allows inventors to establish an official priority date without submitting a formal patent claim, protecting the invention’s novelty before any exposure to external AI infrastructure.

To safely leverage AI internally, firms should invest in closed AI systems; these systems should be proprietary or securely containerized environments where data transfer and training are fully isolated and auditable. Furthermore, to ensure confidentiality, these systems should utilize edge computing, where processing is done directly on the local device, and federated learning, a method that trains the model using data across many decentralized devices without moving the raw data itself (the original, unprocessed data). This approach keeps the raw technical details strictly within the corporate firewall, preventing the inadvertent creation of prior art.
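
For readers curious how federated learning keeps raw data local, the toy sketch below, which assumes a simple linear model and illustrative random data, shows that only model weights are exchanged while each office’s documents stay on its own machines:

```python
# Toy sketch of federated averaging, assuming a simple linear model and NumPy only.
# The point: each office trains on its own data locally and shares ONLY model weights;
# the raw, confidential material never leaves the local environment.
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=50):
    """Train locally on one office's data (here, a toy least-squares objective)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
global_w = np.zeros(3)

# Each "device" holds its own confidential data; only weight vectors are exchanged.
offices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

for _round in range(10):
    local_weights = [local_update(global_w, X, y) for X, y in offices]
    global_w = np.mean(local_weights, axis=0)   # federated averaging step

print("Aggregated model weights:", global_w)
```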

For necessary exploratory research using public models, firms should implement strict data anonymization and generalization processes. This involves removing or replacing all names, key dates, values, and novel terminology before submission, a practice sometimes called tokenization in the data-security sense (replacing sensitive values with non-identifying placeholders), not to be confused with the tokenization by which AI models break text into units for processing.
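
As a rough illustration of such a pre-submission pass, the sketch below shows a naive, regex-based redaction step; the patterns, names, and placeholders are hypothetical, and a real workflow would add named-entity recognition tools and human review:

```python
# Minimal sketch of a pre-submission redaction pass using simple regular expressions.
# Patterns and placeholders are illustrative only; production use would be far stricter.
import re

REDACTIONS = {
    r"\b(Acme\s+Corp\.?|AcmeCo)\b": "[CLIENT]",          # hypothetical client name
    r"\b\d{1,2}/\d{1,2}/\d{2,4}\b": "[DATE]",            # calendar dates
    r"\$\s?\d[\d,]*(?:\.\d+)?": "[VALUE]",               # dollar figures
    r"\b[A-Z][a-z]+ [A-Z][a-z]+\b": "[NAME]",            # naive person-name pattern
}

def redact(text: str) -> str:
    """Replace obviously identifying strings before any text is sent to a public model."""
    for pattern, placeholder in REDACTIONS.items():
        text = re.sub(pattern, placeholder, text)
    return text

draft = "On 3/14/2024, Jane Doe of Acme Corp disclosed a coating process worth $2,000,000."
print(redact(draft))
# -> "On [DATE], [NAME] of [CLIENT] disclosed a coating process worth [VALUE]."
```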

Finally, firms should mandate rigorous review of contractual best practices for AI vendors to ensure indemnification and written guarantees that input data will not be used for training purposes. Indemnification is crucial; it requires the AI vendor to compensate the law firm or client for any loss or damage incurred if the vendor’s technology (or its failure to secure the data) results in a breach of confidence or patent invalidation. Firms should demand explicit clauses confirming that input data will not be logged, retained, or used for model training, and defining vendor liability that extends beyond simple fee refunds to cover the substantial financial harm caused by the loss of IP rights.

Conclusion

The promise of AI to expedite the patent drafting pipeline is undeniable, but the current ethical landscape presents a fundamental challenge to the confidentiality required to preserve patentability. Until legal frameworks universally impose a duty of loyalty on AI providers, the responsibility falls squarely on the professional to protect the client’s IP. The future of intellectual property requires vigilance: innovation should be accelerated by AI, but never disclosed by it.

#IP-Security #Patent-Risk #AICrisis