Jody Allen and the Future of the Seahawks: A Week of Legal Confusion

By: Thomas Oatridge

Media Reports and Conflicting Narratives About a Seahawks Sale

Just days before Super Bowl LX between the Seattle Seahawks and the New England Patriots was set to kick off, it was reported that the Seattle franchise would be back on the market for the first time in nearly three decades, in a deal estimated to close at around $7 to $8 billion. Paul Allen, the Seahawks’ longtime owner and a co-founder of Microsoft, passed away in 2018. Prior to his death, Allen established a trust encompassing most of his assets and appointed his sister, Jody Allen, as personal representative of the estate and trustee to oversee the eventual sale of the trust’s assets, including the Seahawks. Although it is widely understood that the trust documents do not impose a specific timeline for selling the team, ESPN reported that the franchise would soon be put on the market. The Paul G. Allen Trust promptly issued a statement dismissing the report as rumor and stating unequivocally that “the team is not for sale.” Adding to the speculation, the Wall Street Journal reported that the NFL had fined the Seahawks $5 million for being out of compliance with league ownership requirements. NFL Commissioner Roger Goodell, however, denied these allegations shortly after the reports surfaced.

Days after this initial story broke, Yahoo Sports released an article outlining the confusion, while simultaneously adding to it by reporting contradictory statements about Washington State trust and estate law. The article opens by asserting that the estate mandates the Seahawks will “eventually” be sold. It then quotes a local Seattle sportswriter who claims that “when the estate comes around and says, ‘you got to sell the team,’ she has to sell the team,” because “her job is to carry out the will of the estate.” Yet, as the same article acknowledged just moments earlier, the estate’s governing documents never set a specific timeline for selling its assets. The week leading up to the Super Bowl has underscored the need to ask more precise legal questions, rather than accepting the latest rumor as a statement of law.

The Legal Pressure Point: NFL Ownership Rules

To frame our legal analysis and fairly characterize Yahoo Sports’ interpretation, it’s important to point out the key legal risk the Paul G. Allen Trust assumes by deferring the sale of the Seahawks. The National Football League’s bylaws are clear and unambiguous regarding ownership structure, mandating that the majority stakeholder be an individual rather than a trust. Additionally, all controlling owners must maintain a 30% ownership stake in their respective team. It is possible that this contractual obligation to the league will trigger a sale of the team earlier than the trustee of the Paul G. Allen Trust, Jody Allen, would have otherwise preferred. The aforementioned stories by ESPN and the Wall Street Journal may in fact be pointing to this as the likely outcome, especially given the recent announcement that the estate agreed to sell the Trail Blazers to the majority stakeholder of the Carolina Hurricanes for $4 billion.

Does Washington State Law Require an Immediate Sale?

Contractual obligations to the NFL are only part of the legal picture. In accordance with Paul Allen’s will, his sister Jody was assigned as personal representative to properly probate his estate. She was also given the role of trustee of the Paul G. Allen Trust. Therefore, trust and estate law must be considered to properly understand this situation. Under the Revised Code of Washington (RCW), a trust is created by the transfer of property to a trustee to carry out the terms of the trust. Personal representatives and trustees must fulfill functionally identical fiduciary duties, such as administering the trust solely in the interests of the beneficiaries, keeping the beneficiaries reasonably informed, managing assets properly, and avoiding self-dealing for personal benefit. In a 2022 interview, Jody Allen indicated the estate could take 10–20 years to unwind due to its complexity and size. Thus, if there is no reason to doubt the validity of this claim and no established deadline for the sale of the trust’s assets, it is hard to say what would trigger a breach of fiduciary duty to the trust if the Seahawks are not sold within the NFL’s preferred timeline. Furthermore, given that Jody Allen is both the personal representative of the estate and the trustee of the Paul G. Allen Trust, it is unlikely the estate will “come knocking” to force Jody to sell the team either.

When a Sale Could Become Legally Problematic Under Washington State Law

There is, however, a scenario where Jody Allen could be found in breach of her fiduciary duty as personal representative of the estate and trustee. According to Yahoo Sports, their source discussed a rumor of “Allen and a bunch of her affluent friends at Seattle-based companies Microsoft and Amazon coming in and buying the team from her brother’s trust.” If this rumor turns out to be true, Jody could open herself up to the risk of breaching her fiduciary duties through self-dealing. This occurs when a trustee enters into a sale, encumbrance, or other transaction involving the investment or management of trust property for the trustee’s own personal account or which is otherwise affected by a conflict between the trustee’s fiduciary and personal interests. In 2018, a Washington State appeals court affirmed a lower court’s decision to block the sale of estate assets by a personal representative to himself because it breached his fiduciary duties via self-dealing. However, if Jody Allen decides to move forward with the sale of the Seahawks to herself, Washington State law allows for three exceptions to this doctrine which include waiver by the trust instrument, waiver by the beneficiaries, or permission from the court.

Conclusion

At present, there is no indication that Jody Allen or the Paul G. Allen Trust is under any immediate legal obligation to sell the Seattle Seahawks. If a sale occurs in the near term, it is more likely to stem from contractual obligations to the NFL than from any requirement imposed by Washington State law. Absent meaningful pressure from the NFL, the timing of any sale remains largely within the discretion of Jody Allen as trustee of the Paul G. Allen Trust.

#Seahawks #JodyAllen #TrustAndEstateLaw #WJLTA

Beyond the Billable Hour: How AI is Forcing Legal Pricing Reform

By: Joyce Jia

Pricing reform to replace billable hours has long been debated in the legal industry. Yet as software companies increasingly shift toward outcome-based pricing with AI agents’ assistance—charging only when measurable value is delivered—the legal profession remains anchored in time-based billing and has been slow to translate technological adoption into pricing change. The recently released Thomson Reuters Institute’s 2026 Report on the State of the US Legal Market (“2026 Legal Market Report”) revealed that average law firm spending on technology grew “an astonishing 9.7% … over the already record growth of 2024,” while “a full 90% of all legal dollars still flow through standard hourly rate arrangements.” This growing disconnect between technological investment and monetization reflects not merely a billing challenge, but a deeper crisis in how legal value is defined, allocated, and captured in the AI era.

How Did We Get Here?

The billable hours system wasn’t always dominant. As documented by Thomson Reuters Institute’s James W. Jones, hourly billing emerged in the 20th century but remained relatively peripheral until the 1970s, when the rapid growth of corporate in-house legal departments demanded standardized fees and greater transparency from outside counsels’ previously “amorphous” billing practices. The logic was straightforward: time equaled work, work equaled measurable productivity, and productivity justified legal spending for in-house departments (and conversely, profitability for law firms).

That logic, however, is increasingly strained. As AI exposes what Clio CEO Jack Newton describes as a “structural incompatibility,” the revenue model built on time becomes increasingly difficult to justify. According to Thomson Reuters’ 2025 Legal Department Operations Index, corporate legal departments face mounting pressure to “do more with less.” Nearly three-quarters of respondents plan to deploy advanced technology to automate legal tasks and reduce costs, while one-quarter are expanding their use of alternative fee arrangements (AFAs) to optimize operations and control costs. As the 2026 Legal Market Report observes, general counsels now scrutinize matter budgets line by line. Seeing their own teams leverage AI to perform routine work “at a fraction of the cost,” they question why outside counsel charging premium hourly rates are not delivering comparable efficiencies. Unsurprisingly, corporate legal departments have led their outside firms in AI adoption since 2022.

Is AI a “Margin Eroder or Growth Accelerator”?  

Research by Professor Nancy Rapoport and Legal Decoder founder Joseph Tiano frames this tension as a central paradox of AI adoption. When an attorney completes a discovery review using AI in 8 hours instead of 40, firm revenue under the hourly model could theoretically drop by 80 percent even as client outcomes improve. This is the apparent productivity trap: AI-driven efficiency directly cannibalizing revenue. But this framing is overly narrow. With careful design, restructuring billing models around technology-enabled premiums need not shrink revenue; instead, it can enhance productivity while strengthening client trust through greater transparency and efficiency. It also enables a more equitable sharing of the benefits of technological advancement and a more deliberate allocation of the risks inherent in legal matters.
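To make the arithmetic of this trap concrete, here is a minimal sketch. Only the 40-hour versus 8-hour comparison comes from the example above; the $500 hourly rate is an assumption chosen purely for illustration.

```python
# Illustrative arithmetic only: the hourly rate is hypothetical.
HOURLY_RATE = 500   # assumed billing rate in dollars

hours_manual = 40   # discovery review without AI (from the example above)
hours_with_ai = 8   # the same review completed with AI assistance

revenue_manual = hours_manual * HOURLY_RATE     # $20,000
revenue_with_ai = hours_with_ai * HOURLY_RATE   # $4,000

drop = 1 - revenue_with_ai / revenue_manual
print(f"Revenue falls {drop:.0%} under pure hourly billing")  # 80%
```

Note that the percentage drop is independent of the assumed rate: it follows directly from the ratio of hours, which is why the trap applies at any price point under pure time-based billing.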

Recapturing the Lost Value of Legal Inefficiencies

According to the Thomson Reuters Institute’s 2023 research on billing practices, the average law firm partner writes down over 300 hours annually, nearly $190,000 in lost potential fees. These write-offs typically involve learning curves in unfamiliar legal areas, time-intensive research, drafting various documents and meeting notes, or correcting associates’ work. Partners often decline to bill clients for such work when it exceeds anticipated time expectations, even though it remains billable in principle. This is precisely where AI excels. By reducing inefficiencies and accelerating routine tasks, AI allows firms to recapture written-off value while offering clients more predictable budgets and higher-quality outputs. 
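As a quick sanity check on those averages, the two cited figures imply an effective partner rate of roughly $633 per hour. This is a back-of-the-envelope calculation derived from the numbers above, not a figure stated in the report itself.

```python
# Back-of-the-envelope check of the cited Thomson Reuters averages:
# 300+ written-down hours worth nearly $190,000 per partner per year.
written_down_hours = 300
lost_fees = 190_000

implied_rate = lost_fees / written_down_hours
print(f"Implied effective partner rate: ${implied_rate:,.0f}/hour")  # ~$633
```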

Justifying Higher Hourly Rates Through AI-Enhanced Value

Paradoxically, AI may also support higher hourly rates for certain categories of legal work. As Rapoport and Tiano argue, AI enables lawyers to deliver “unprecedented insights” through deeper, more comprehensive, and more reliable analysis. By rapidly synthesizing historical case data, identifying patterns, and predicting outcomes, AI may elevate legal judgment in ways that time and cost constraints previously rendered impractical. In this context, premium rates can remain justifiable for complex, strategic work where human judgment and client relationships prove irreplaceable.

Extending Contingency (Outcome-Based) Fees Beyond Litigation

Beyond traditional litigation contingency fees, Rapoport and Tiano identify “disputes, enforcement actions, or complex transactions” as areas ripe for outcome-based pricing, where firms can “shoulder more risk for greater upside.” The term “disputes” may be understood broadly to encompass arbitration, debt collection, and employment-related conflicts, such as discrimination or wage claims.

An even more underexplored application lies in regulatory compliance, a domain characterized by binary and verifiable outcomes. Unlike litigation success or transactional value, compliance outcomes present even clearer metrics: GDPR compliance versus violation, SOX compliance versus deficiency, patent prosecution approval versus rejection. This creates opportunities for compliance-as-a-service models that charge for compliance or certification outcomes rather than hours worked. Where AI enables systematic, scalable review, risk allocation becomes explicit: the firm guarantees compliance, and the client pays a premium above hourly equivalents for that assurance.

New Revenue Streams in the AI Era

The rise of data-driven AI also creates entirely new categories of legal work. As Rapoport and Tiano identify, “AI governance policy and advisories, algorithmic bias audits, data privacy by design” all represent emerging and durable revenue streams. Moreover, as AI regulatory frameworks continue to evolve across jurisdictions, clients will increasingly seek counsel for these specialized services, where interdisciplinary expertise at the intersection of law and technology, combined with sound professional judgment and strategic foresight, remains indispensable for navigating both compliance obligations and long-term risk.

The Hybrid Solution: Tiered Value Frameworks

Forward-thinking firms are increasingly experimenting with hybrid AFAs that blend fixed fees, subscriptions, outcome-based pricing, and legacy hourly billing into tiered value offerings. Ultimately, the legal industry’s pricing transformation is not solely about technology. It is about candidly sharing the gains created by technology and confronting how risk should be allocated when AI reshapes legal work.

As AI simultaneously frees lawyers’ time and creates new revenue opportunities, law firms face a defining challenge: articulating, quantifying, and operationalizing a value-and-risk allocation framework capable of replacing the billable hour and sustaining the economics of legal practice for the next generation.

Across Nations, Across Identities: Why Deepfake Victims are Left Without Remedies

By: Hanan Fathima

When a deepfake video of former President Barack Obama appeared in 2018, the public was stunned—this was not just clever editing, but a wake-up call. AI-generated content had become hyper-realistic and often indistinguishable from non-AI-generated content. Deepfakes are highly realistic AI-generated media that can imitate a person’s appearance and voice through technologies like generative adversarial networks (GANs). We’ve entered a digital era where every piece of media demands scrupulous scrutiny, raising questions about regulation and justice. Different jurisdictions have adopted varying approaches to deepfake regulation: countries like the US, the UK, and EU member states emphasize international frameworks, while countries like China and Russia prefer digital sovereignty. A key challenge is navigating the jurisdictional gaps in deepfake laws and regulations.

The Global Surge in Deepfake-Driven Crimes

Deepfake phishing and fraud cases have escalated at an alarming rate, recording a 3,000% surge since 2022. In 2024, attempts to create deepfake content occurred every five minutes. This sharp escalation in global deepfake activity is alarming, particularly given the potential for deepfakes to manipulate election outcomes, fabricate non-consensual pornographic content, and facilitate sextortion scams. Deepfake criminals exploit gaps in cross-border legal systems, which allow them to evade liability and continue their schemes with reduced risk. Because national laws are misaligned and international frameworks remain limited, victims of deepfake crimes face an uphill battle for justice. Combined with limited judicial precedent, tracing and prosecuting offenders has proved a massive challenge for many countries.

When Crime Crosses Borders and Laws Don’t

One striking example is a Hong Kong deepfake fraud case in which scammers impersonated a company’s chief financial officer using an AI-generated video in a conference call, duping an employee into transferring HK$200 million (~US$25 million). Investigators uncovered a complex web of stolen identities and bank accounts spread across multiple countries, complicating the tracing and recovery of funds. This case underscores the need for international cooperation, standardized laws and regulations, and robust legal frameworks for AI-related deepfake crimes in order to effectively combat the growing threat of deepfake fraud.

At a national level, there have been efforts to address these challenges. An example is the U.S. federal TAKE IT DOWN Act of 2025, which criminalizes the distribution of non-consensual private deepfake images and mandates prompt removal upon request. States like Tennessee have enacted the ELVIS Act of 2024, which protects individuals against use of their voice and likeness in deepfake content, while Texas and Minnesota have introduced laws criminalizing election-related deepfakes to preserve democratic integrity. Similarly, Singapore passed the Elections (Integrity of Online Advertising) (Amendment) Bill to safeguard against misinformation during the election period. China’s Deep Synthesis Regulation of 2025 regulates deepfake technology and services, placing responsibility on both platform providers and end-users.

On an international scale, the European Union’s AI Act is among the first comprehensive legal frameworks to tackle AI-generated content. It calls for transparency and accountability, and emphasizes labelling AI-manipulated media rather than outright bans.

However, these laws are region-specific and thus rely on international and regional cooperation frameworks like MLATs and multilateral partnerships for prosecuting foreign perpetrators. A robust framework must incorporate cross-border mechanisms such as provisions for extraterritorial jurisdiction and standardized enforcement protocols to address jurisdictional gaps in deepfake crimes. These mechanisms could take the form of explicit cooperation protocols under conventions like the UN Cybercrime Convention, with strict timelines for MLAT procedures, and regional agreements on joint investigations and evidence-sharing.

How Slow International Processes Enable Offender Impunity

The lack of concrete laws, and thus concrete relief mechanisms, means victims of deepfake crimes face multiple barriers in their ability to access justice. When cases involve multiple jurisdictions, investigations and prosecutions often rely on Mutual Legal Assistance Treaty (MLAT) processes. Mutual Legal Assistance is “a process by which states seek and provide assistance in gathering evidence for use in criminal cases,” as defined by the United Nations Office on Drugs and Crime (2018). MLATs are the primary mechanism for cross-border cooperation in criminal proceedings. Unfortunately, victims may experience delays in international investigations and prosecutions due to the slow and cumbersome processes associated with MLATs. Moreover, the process has its own set of limitations, such as human rights concerns, conflicting national interests, and data privacy issues. According to the Interpol Africa Cyberthreat Assessment Report 2025, requests for Mutual Legal Assistance (MLA) can take months, severely delaying justice and often allowing offenders to escape international accountability.

Differing legal standards and enforcement mechanisms across countries make criminal proceedings related to deepfake crimes difficult. On a similar note, cloud platforms and social media companies hosting deepfake content may be registered in countries with weak regulations or limited international cooperation, making it harder for authorities to remove content or obtain evidence.

The Human Cost of Delayed Justice

The psychological and social impacts on victims are profound. The maxim “justice delayed is justice denied” is particularly relevant—delays in legal recourse mean the victim’s suffering is prolonged. This often manifests as reputational harm, long-term mental health issues, and career setbacks. Thus, victims of cross-border deepfake crimes may hesitate to report or pursue legal action, further deterred by language, cultural, or economic barriers. Poor transparency in enforcement creates mistrust in international legal systems and marginalizes victims, weakening deterrence.

Evolving International Law on Cross-Border Jurisdiction

There have been years of opinions and debates over the application of international law for cybercrimes and whether it conflicts with cyber sovereignty. The Council of Europe’s 2024 AI Policy Summit highlighted the need for global cooperation in investigation and prosecutorial activities of law enforcement and reaffirmed the role of cooperation channels like MLATs. Calls for a multilateral AI research institute were made in the 2024 UN Security Council debate on AI governance. Recently, in the 2025 AI Action Summit, discussions were focused on research and the transformative capability of AI, and the regulation of such technology. Discussion on cybercrimes and its jurisdiction was limited.

In 2024, the UN Convention Against Cybercrime addressed AI-based cybercrimes, including deepfakes, emphasizing electronic evidence sharing between countries and cooperation between states on extradition requests and Mutual Legal Assistance. The convention also allows states to establish jurisdiction over offences committed against their nationals regardless of where the offense occurred. However, challenges in implementation persist, as a number of nations, including the United States, have yet to ratify the convention.

Towards a Coherent Cross-Border Response

Addressing the complex jurisdictional challenges posed by cross-border deepfake crimes requires a multi-faceted approach that combines legal reforms, international collaboration, technological innovations, and victim-centered mechanisms. First, Mutual Legal Assistance Treaties (MLATs) must be streamlined with standardized request formats, clearer evidentiary requirements, and dedicated cybercrime units to reduce delays. Second, national authorities need stronger digital forensic and AI-detection capabilities, including investing in deepfake-verification tools like blockchain-based tracing techniques. Third, generative AI platforms must be held accountable, with mandates for detection systems and prompt takedown obligations. However, since these rules vary regionally, platforms do not face the same responsibilities everywhere, underscoring the need for all countries to adopt consistent standards for platforms. Fourth, nations must play an active role in multilateral initiatives and bilateral agreements targeting cross-border cybercrime, supporting the creation of global governance frameworks governing extraterritorial jurisdiction over cybercrimes like deepfakes. While countries like the United States, the UK, EU member states, and Japan are active participants in international AI governance initiatives, many developing countries are excluded from these discussions. Countries like Russia and China have also resisted UN cybercrime treaties, citing sovereignty values. Notably, despite being a global leader in AI innovation, the US has also not ratified the 2024 UN Convention Against Cybercrime. Finally, a victim-centered approach, through legal aid services and compensation mechanisms, is essential to ensure that victims are not left to navigate these complex jurisdictional challenges alone.

While deepfake technology has the potential to drive innovation and creativity, its rampant misuse has led to unprecedented avenues for crimes that transcend national borders and challenge existing legal systems. Bridging these jurisdictional and technological gaps is essential for building a resilient and robust international legal framework that is capable of combating deepfake-related crimes and offering proper recourse for victims.

The IP Confidentiality Crisis: Why Your Patent Drafts Could Be Training Your Competitor’s AI

By Francis Yoon

The Ultimate Act of Discretion

The process of drafting a patent application is the ultimate act of discretion. Before an invention is filed, its core design, methodology, and advantages are protected as confidential trade secret information. Today, a powerful new tool promises to revolutionize this process: generative AI and LLMs. These models can instantly transform complex invention disclosures into structured patent claims, saving countless hours. However, when legal professionals feed highly sensitive information into public LLMs like ChatGPT or Gemini, they unwittingly expose their clients’ most valuable intellectual property (IP) to an unprecedented security risk. This convenience can create a massive, invisible information leak, turning a law firm’s desktop into a prime data source for the very AI models they rely on.

The Black Box: How Confidentiality is Broken

The core danger lies in how these AI systems learn and the resulting threat to patent novelty governed under 35 U.S.C. § 102(b), which mandates that an invention be new and not previously known or publicly disclosed. When a user submits text to a public LLM, that input often becomes part of the model’s training data or is used to improve its services. Confidential patent information fed into the model for drafting assistance may be logged, analyzed, and integrated into the model’s knowledge base. This risk is formalized in the provider’s terms of service.

While enterprise-level accounts offered by companies like OpenAI or Google typically promise not to use customer input for training by default, free or standard professional tiers usually lack this guarantee unless users proactively opt out. If a lawyer uses a personal subscription to draft a patent claim, they may inadvertently transmit a client’s IP directly to a third-party server, violating their professional duties of care and confidentiality, while also potentially exposing their firm to a professional malpractice claim. This conflict establishes the central legal issue: reliance on public AI creates a massive “Black Box” problem. The invention is disclosed to an opaque system whose ultimate use of that data is neither verifiable nor auditable by the user.

The Novelty Risk: AI as Inadvertent Prior Art

Beyond breaching confidentiality, this practice also fundamentally endangers patentability by jeopardizing the invention’s novelty. Novelty is a fundamental requirement for patentability, which is the legal status an invention must achieve to receive patent protection. The most critical risk is inadvertent public disclosure, which creates prior art—any evidence that an invention is already known or publicly available—and thus invalidates the patent. Once an invention’s confidential details are used to train a widely accessible public model, it may no longer be considered “new” or “secret.” This action could be interpreted as a public disclosure—the invention’s core teaching has been shared with a third party (the AI system) under terms that do not guarantee perpetual confidentiality. This could destroy the invention’s novelty and the potential for trade secret protection. Furthermore, generative AI can be prompted to generate vast amounts of plausible technical variations based on a limited technical disclosure. If these AI-generated outputs are published, they can become valid prior art. A human inventor’s subsequent application may be rejected because the AI has, in theory, already publicly disclosed a similar concept, rendering the human’s invention unpatentable as non-novel or obvious.

The Legal Hot Potato: IP vs. Contract

When confidentiality is breached through a public AI model, recovering the invention is extremely difficult. If a client’s trade secret is exposed, the client loses the protection entirely, as the secret is no longer “not generally known.” Suing the LLM provider for trade secret misappropriation requires proving that the provider improperly acquired the secret and used it against the owner’s interests. This is challenging because the provider’s legal team can argue the input was authorized under the contractual terms accepted by the user. The attorney who entered the prompt is typically held liable for the breach of confidence. However, the firm has no clear recourse against the LLM provider, as the provider’s liability is severely limited by contract. Often, these liability-limiting clauses cap damages at a minimal amount or specifically disclaim liability for consequential damages, like intellectual property loss. The fragmentation of this liability leaves the inventor exposed while the AI company is shielded by its own terms.

To combat this systemic problem, legal scholars have advocated for imposing a duty of loyalty on tech companies, forcing them to legally prioritize user confidentiality above their own financial interests. This echoes the mandates found in modern privacy law, such as the California Consumer Privacy Act’s rules on the consumers’ right to access information about automated decision-making technology.

Mitigating the Risk: A Confidentiality and Novelty Checklist

Legal teams should adopt a “trust-nothing” protocol to utilize generative AI responsibly. They should implement clear guidelines prohibiting the use of public LLMs for generating, summarizing, or analyzing any client or company information that qualifies as prior art or a trade secret.

Crucially, professionals should never submit a confidential invention disclosure to an AI system before filing a formal provisional patent application with the relevant patent office. A provisional patent application allows inventors to establish an official priority date without submitting a formal patent claim, protecting the invention’s novelty before any exposure to external AI infrastructure.

To safely leverage AI internally, firms should invest in closed AI systems; these systems should be proprietary or securely containerized environments where data transfer and training are fully isolated and auditable. Furthermore, to ensure confidentiality, these systems should utilize edge computing, where processing is done directly on the local device, and federated learning, a method that trains the model across many decentralized devices without moving the raw data itself. This approach keeps the raw technical details strictly within the corporate firewall, preventing the inadvertent creation of prior art.

For necessary exploratory research using public models, firms should implement strict data anonymization and generalization processes. This involves removing or replacing all names, key dates, values, and novel terminologies before submission, substituting generic placeholders for the confidential details so that no identifying or invention-specific information ever reaches the external model.
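As an illustration of what such a redaction step might look like in its simplest form, the sketch below uses naive regular-expression patterns. The patterns, placeholder labels, and sample sentence are all hypothetical; any production anonymization pipeline would be far more robust (handling name variants, context, and structured metadata).

```python
import re

# A minimal redaction sketch, assuming a simple pattern-based approach.
# Patterns, placeholders, and the sample text are all hypothetical.
PATTERNS = {
    r"\b[A-Z][a-z]+ [A-Z][a-z]+\b": "[NAME]",   # naive "First Last" names
    r"\b\d{1,2}/\d{1,2}/\d{4}\b": "[DATE]",     # dates like 3/14/2024
    r"\$\d[\d,]*(?:\.\d+)?\b": "[VALUE]",       # dollar amounts
}

def redact(text: str) -> str:
    """Replace names, dates, and dollar values with generic placeholders."""
    for pattern, placeholder in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

sample = "Jane Doe disclosed the alloy process on 3/14/2024 for $1,200,000."
print(redact(sample))
# [NAME] disclosed the alloy process on [DATE] for [VALUE].
```

Even with such a filter in place, generalization of the underlying technical description remains the harder task, since novel terminology itself can identify an invention.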

Finally, firms should mandate rigorous review of contractual best practices for AI vendors to ensure indemnification and written guarantees that input data will not be used for training purposes. Indemnification is crucial; it requires the AI vendor to compensate the law firm or client for any loss or damage incurred if the vendor’s technology (or its failure to secure the data) results in a breach of confidence or patent invalidation. Firms should demand explicit clauses confirming that input data will not be logged, retained, or used for model training, and defining vendor liability that extends beyond simple fee refunds to cover the substantial financial harm caused by the loss of IP rights.

Conclusion

The promise of AI to expedite the patent drafting pipeline is undeniable, but the current ethical landscape presents a fundamental challenge to the confidentiality required to preserve patentability. Until legal frameworks universally impose a duty of loyalty on AI providers, the responsibility falls squarely on the professional to protect the client’s IP. The future of intellectual property requires vigilance: innovation should be accelerated by AI, but never disclosed by it.

#IP-Security #Patent-Risk #AICrisis

Software’s Fourth Pricing Revolution Emerging for AI Agents


By: Joyce Jia

The Fourth Pricing Revolution: Outcome-Based Pricing

In August 2024, customer experience (CX) software company Zendesk made a stunning announcement: customers would pay only for issues resolved “from start to finish” by its AI agent. No resolution, or an escalation to a human? No charge. Meanwhile, Zendesk’s competitor Sierra, a conversational AI startup, introduced its own outcome-based pricing tied to metrics like resolved support conversations or successful upsells.

Zendesk claims to be the first in the CX industry to adopt outcome-based pricing powered by AI, but it may have already fallen behind: Intercom launched a similar model in 2023 for its “Fin” AI chatbot, charging enterprise customers $0.99 each time the bot successfully resolves an end-user query.
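The per-resolution model reduces to a simple billing rule, sketched below. The field names, the escalation rule, and the $0.99 rate as applied here are illustrative assumptions rather than any vendor's published billing logic.

```python
from dataclasses import dataclass

RATE_PER_RESOLUTION = 0.99  # Fin-style per-resolution rate (illustrative)

@dataclass
class Conversation:
    resolved_by_ai: bool      # did the bot resolve it end to end?
    escalated_to_human: bool  # assume any human hand-off voids the charge

def monthly_bill(conversations) -> float:
    """Charge only for conversations the AI resolved without escalation."""
    billable = sum(
        1 for c in conversations
        if c.resolved_by_ai and not c.escalated_to_human
    )
    return round(billable * RATE_PER_RESOLUTION, 2)

convos = [
    Conversation(resolved_by_ai=True, escalated_to_human=False),
    Conversation(resolved_by_ai=True, escalated_to_human=True),   # escalated: free
    Conversation(resolved_by_ai=False, escalated_to_human=True),  # unresolved: free
]
print(monthly_bill(convos))  # → 0.99
```

Even this toy version shows where the contractual pressure lands: everything turns on how `resolved_by_ai` gets determined.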

The shift toward outcome-based pricing represents the fourth major pricing revolution in software. The first revolution began in the 1980s and 1990s with seat-based licenses for shrink-wrapped boxes, where customers paid a one-time flat fee for software ownership without automatic version upgrades. The second revolution emerged in the 2000s when industry pricing transitioned to SaaS subscriptions, converting software into a recurring operational expense with continuous updates. The third revolution came in the 2010s with consumption-based cloud pricing, tying costs directly to actual resource usage. The fourth and current revolution is outcome-based pricing, where customers are charged only when measurable value is delivered, rather than for licenses purchased or resources consumed.

In fact, the shift to outcome-based pricing extends far beyond AI customer support, spanning AI-driven sectors from CRM platforms like Salesforce to AI legal tech (EvenUp), fintech (Chargeflow), fraud prevention (Riskified and iDenfy), and healthcare AI agents. These companies are experimenting with pure outcome-based pricing or with hybrid models that combine traditional flat fees and usage-based charges with outcome-based components. Recent tech industry analysis shows that seat-based pricing for AI products dropped from 21% to 15% of companies in just one year, while hybrid pricing rose sharply from 27% to 41%.

Historical Precedent from Legal Practice: Contingency Fees

Outcome-based contracting isn’t a novel concept. It has been growing for over a decade in other industries. In the legal field, professionals have long worked with its equivalent in the form of contingency fees. Since the 19th century, lawyers in the United States have been compensated based on results: earning a percentage of the recovery only if they successfully settle or win a case. However, this model has been accompanied by strict guardrails. Under ABA Model Rule 1.5(c), contingency fee agreements must be in writing and clearly explain both the qualifying outcome and calculation method. Additionally, contingency arrangements are prohibited in certain matters, such as criminal defense and domestic relations cases. 

Beyond professional ethical concerns, the key principle is straightforward: when compensation hinges on outcomes, the law demands heightened transparency and well-defined terms. AI vendors adopting outcome-based pricing should expect similar guardrails to develop, ensuring both contract enforceability and customer trust. This requirement stems from traditional contract law, not AI-specific regulation. 

The Critical Legal Question: Defining “Outcome”

One of the biggest challenges in outcome-based pricing is contract clarity. Contract law requires essential terms to be clearly defined; if those terms are too vague to be determined with reasonable certainty, the agreement may be unenforceable. When this principle is applied to AI agents, one critical question arises: how do you precisely and fairly define a “successful” outcome?

The answer can be perplexing. Depending on the nature of the AI product, multiple layers can contribute to delivering an “outcome,” such as internal infrastructure or workflows, external market conditions, marketing efforts, or third-party dependencies. These complicating factors make it hard to assign clear ownership of results or to establish precise payment triggers, especially when the “outcome” is delivered over an extended period.

The venture capital firm Andreessen Horowitz recently conducted a survey highlighting the issue: 47% of enterprise buyers struggle to define measurable outcomes, 25% find it difficult to agree on how value should be attributed to an AI tool or model, and another 24% note that outcomes often depend on factors outside the AI vendor’s control.

These are not just operational challenges. They raise a real legal question about whether the contract terms are enforceable under the law. 

Consider these scenarios that illustrate the difficulty:

  • What happens if the outcome is only partially achieved?
  • What if the AI agent resolves the issue but too slowly, leaving the user frustrated despite a technically successful outcome?
  • What if an AI chatbot closes a conversation successfully, but the customer returns later with a complaint?
  • What if a user ends the chat session without explicitly confirming whether the issue was resolved?
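These ambiguities are exactly what a well-drafted pricing schedule must resolve in advance. The sketch below translates one hypothetical set of answers into payment-trigger logic; the thresholds, field names, and the half-fee rule are illustrative assumptions, not drawn from any real vendor agreement.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical contract schedule rendered as payment-trigger logic.
MAX_RESOLUTION_MINUTES = 30  # slower than this counts only as partial success
REOPEN_WINDOW_DAYS = 14      # a complaint within this window voids the charge

@dataclass
class Session:
    resolved: Optional[bool]              # None = user left without confirming
    minutes_to_resolve: float
    reopened_after_days: Optional[float]  # None = never reopened

def billable_fraction(s: Session) -> float:
    """Return the fraction of the outcome fee owed for one session."""
    if s.resolved is None:
        return 0.0  # unconfirmed outcome: this schedule says no charge
    if not s.resolved:
        return 0.0
    if s.reopened_after_days is not None and s.reopened_after_days <= REOPEN_WINDOW_DAYS:
        return 0.0  # customer came back with a complaint inside the window
    if s.minutes_to_resolve > MAX_RESOLUTION_MINUTES:
        return 0.5  # technically resolved, but too slowly: partial fee
    return 1.0

print(billable_fraction(Session(resolved=True, minutes_to_resolve=12,
                                reopened_after_days=None)))  # → 1.0
```

Whether a court would treat any particular set of such thresholds as reasonably certain is precisely the open legal question; the value of writing them down is that the parties argue about the rules before billing, not after.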

As Kyle Poyar, a SaaS pricing expert and author of an influential newsletter on pricing strategy and product-led growth, observed: 

“Most products are just not built in a way that they own the outcome from beginning to end and can prove the incrementality to customers. I think true success-based pricing will remain rare. I do think people will tap into the concept of success-based pricing to market their products. They’ll be calling themselves ‘success based’ but really charge based on the amount of work that’s completed by a combination of AI and software.”

Legal Implications for the Future

Like AI agents themselves, outcome-based AI pricing is evolving at breakneck speed. The blossoming of this new pricing model challenges contract implementation and requires existing contract terms to adapt once again to accommodate new forms of value creation and innovative business models.

The scenarios above are just a few examples, but they underscore the importance of attorneys working closely with engineering and business teams to meticulously identify potential conflicts and articulate key contract terms grounded in clear metrics and KPIs that objectively define successful outcomes. 

“Outcome” can mean different things to different parties, and its definitional ambiguity can create misaligned incentives: buyers may underreport value, while vendors may game metrics to overstate performance. These dynamics will inevitably lead to disputes. AI vendors that have adopted or plan to adopt outcome-based pricing must develop robust frameworks addressing contract definiteness and attribution standards before disputes arise. Without these safeguards, we will likely see a wave of conflicts over vague terms, unenforceable agreements, and unmet expectations on both sides as AI agents surge.