Beyond the Billable Hour: How AI is Forcing Legal Pricing Reform

By: Joyce Jia

Pricing reform to replace billable hours has long been debated in the legal industry. Yet as software companies increasingly shift toward outcome-based pricing powered by AI agents—charging only when measurable value is delivered—the legal profession remains anchored in time-based billing and has been slow to translate technological adoption into pricing change. The Thomson Reuters Institute’s recently released 2026 Report on the State of the US Legal Market (“2026 Legal Market Report”) revealed that average law firm spending on technology grew “an astonishing 9.7% … over the already record growth of 2024,” while “a full 90% of all legal dollars still flow through standard hourly rate arrangements.” This growing disconnect between technological investment and monetization reflects not merely a billing challenge, but a deeper crisis in how legal value is defined, allocated, and captured in the AI era.

How Did We Get Here?

The billable-hour system wasn’t always dominant. As documented by Thomson Reuters Institute’s James W. Jones, hourly billing emerged in the 20th century but remained relatively peripheral until the 1970s, when the rapid growth of corporate in-house legal departments demanded standardized fees and greater transparency from outside counsel’s previously “amorphous” billing practices. The logic was straightforward: time equaled work, work equaled measurable productivity, and productivity justified legal spending for in-house departments (and, conversely, profitability for law firms).

That logic, however, is increasingly strained. AI has exposed what Clio CEO Jack Newton describes as a “structural incompatibility”: as technology compresses the time legal work requires, a revenue model built on time becomes increasingly difficult to justify. According to Thomson Reuters’ 2025 Legal Department Operations Index, corporate legal departments face mounting pressure to “do more with less.” Nearly three-quarters of respondents plan to deploy advanced technology to automate legal tasks and reduce costs, while one-quarter are expanding their use of alternative fee arrangements (AFAs) to optimize operations and control costs. As the 2026 Legal Market Report observes, general counsel now scrutinize matter budgets line by line. Seeing their own teams leverage AI to perform routine work “at a fraction of the cost,” they question why outside counsel charging premium hourly rates are not delivering comparable efficiencies. Unsurprisingly, corporate legal departments have led their outside firms in AI adoption since 2022.

Is AI a “Margin Eroder or Growth Accelerator”?  

Research by Professor Nancy Rapoport and Legal Decoder founder Joseph Tiano frames this tension as a central paradox of AI adoption. When an attorney completes a discovery review using AI in 8 hours instead of 40, firm revenue on that matter could theoretically drop by 80 percent under the hourly model even as client outcomes improve. This looks like a productivity trap: AI-driven efficiency directly cannibalizing revenue. But that framing is overly narrow. With careful design, restructuring billing models around technology-enabled premiums need not shrink revenue; it can enhance productivity while strengthening client trust through greater transparency and efficiency. It also enables a more equitable sharing of the benefits of technological advancement and a more deliberate allocation of the risks inherent in legal matters.
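To make the arithmetic concrete, here is a minimal sketch with hypothetical numbers (the rate, hours, and fee split below are illustrative assumptions, not figures from Rapoport and Tiano’s research): under pure hourly billing the efficiency gain flows entirely out of firm revenue, whereas a simple hybrid fee can split the savings between firm and client.

```python
# Hypothetical numbers for illustration only; not drawn from the cited research.
HOURLY_RATE = 500          # USD per attorney hour (assumption)
HOURS_MANUAL = 40          # discovery review without AI
HOURS_WITH_AI = 8          # discovery review with AI assistance

# Pure hourly billing: revenue falls in direct proportion to hours saved.
revenue_manual = HOURLY_RATE * HOURS_MANUAL        # $20,000
revenue_hourly_ai = HOURLY_RATE * HOURS_WITH_AI    # $4,000 -> an 80% drop

# One hypothetical hybrid alternative: bill the reduced hours plus a fixed
# "efficiency premium" that splits the avoided fees between firm and client.
savings = revenue_manual - revenue_hourly_ai       # $16,000 of avoided fees
firm_share = 0.5                                   # negotiated split (assumption)
revenue_hybrid = revenue_hourly_ai + firm_share * savings   # $12,000

print(f"Hourly, no AI:   ${revenue_manual:,.0f}")
print(f"Hourly, with AI: ${revenue_hourly_ai:,.0f}")
print(f"Hybrid fee:      ${revenue_hybrid:,.0f} "
      f"(client still saves ${savings * (1 - firm_share):,.0f})")
```

The split percentage is purely a placeholder; the point is that the fee structure, not the technology, determines who captures the efficiency gain.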

Recapturing the Lost Value of Legal Inefficiencies

According to the Thomson Reuters Institute’s 2023 research on billing practices, the average law firm partner writes down over 300 hours annually, nearly $190,000 in lost potential fees. These write-offs typically involve learning curves in unfamiliar legal areas, time-intensive research, drafting various documents and meeting notes, or correcting associates’ work. Partners often decline to bill clients for such work when it exceeds anticipated time expectations, even though it remains billable in principle. This is precisely where AI excels. By reducing inefficiencies and accelerating routine tasks, AI allows firms to recapture written-off value while offering clients more predictable budgets and higher-quality outputs. 

Justifying Higher Hourly Rates Through AI-Enhanced Value

Paradoxically, AI may also support higher hourly rates for certain categories of legal work. As Rapoport and Tiano argue, AI enables lawyers to deliver “unprecedented insights” through deeper, more comprehensive, and more reliable analysis. By rapidly synthesizing historical case data, identifying patterns, and predicting outcomes, AI may elevate legal judgment in ways that time and cost constraints previously rendered impractical. In this context, premium rates can remain justifiable for complex, strategic work where human judgment and client relationships prove irreplaceable.

Extending Contingency (Outcome-Based) Fees Beyond Litigation

Beyond traditional litigation contingency fees, Rapoport and Tiano identify “disputes, enforcement actions, or complex transactions” as areas ripe for outcome-based pricing, where firms can “shoulder more risk for greater upside.” The term “disputes” may be understood broadly to encompass arbitration, debt collection, and employment-related conflicts, such as discrimination or wage claims.

An even more underexplored application lies in regulatory compliance, a domain characterized by binary and verifiable outcomes. Unlike litigation success or transactional value, compliance presents clear metrics: GDPR compliance versus violation, SOX compliance versus deficiency, patent prosecution approval versus rejection. This creates opportunities for compliance-as-a-service models that charge for compliance or certification outcomes rather than hours worked. Where AI enables systematic, scalable review, risk allocation becomes explicit: the firm guarantees compliance, and the client pays a premium above hourly equivalents for that assurance.

New Revenue Streams in the AI Era

The rise of data-driven AI also creates entirely new categories of legal work. As Rapoport and Tiano identify, “AI governance policy and advisories, algorithmic bias audits, data privacy by design” all represent emerging and durable revenue streams. Moreover, as AI regulatory frameworks continue to evolve across jurisdictions, clients will increasingly seek counsel for these specialized services, where interdisciplinary expertise at the intersection of law and technology, combined with sound professional judgment and strategic foresight, remains indispensable for navigating both compliance obligations and long-term risk.

The Hybrid Solution: Tiered Value Frameworks

Forward-thinking firms are increasingly experimenting with hybrid AFAs that blend fixed fees, subscriptions, outcome-based pricing, and legacy hourly billing into tiered value offerings. Ultimately, the legal industry’s pricing transformation is not solely about technology. It is about candidly sharing the gains created by technology and confronting how risk should be allocated when AI reshapes legal work.

As AI simultaneously frees lawyers’ time and creates new revenue opportunities, law firms face a defining challenge: articulating, quantifying, and operationalizing a value-and-risk allocation framework capable of replacing the billable hour and sustaining the economics of legal practice for the next generation.

AI’s Creative Ambitions: A Case Review of Thaler v. Perlmutter (2023)

By: Stella B. Haynes Kiehn

Is it possible for AI to achieve genuine creativity? Inventor and self-dubbed “AI Director” Dr. Stephen Thaler (“Thaler”) has spent the past several years attempting to prove to the U.S. Copyright Office that not only can AI be creative, but that AI can create works capable of meeting copyright standards.

On November 3, 2018, Thaler filed an application to register a copyright claim for the work A Recent Entrance to Paradise. Although Thaler filed the application himself, he listed “The Creativity Machine” as the author of the work and himself as the copyright claimant. According to Thaler, A Recent Entrance to Paradise was drawn and named by the Creativity Machine, an AI program. The artwork “depicts a three-track railway heading into what appears to be a leafy, partly pixelated tunnel.” In his copyright application, Thaler noted that A Recent Entrance to Paradise “was autonomously created by a computer algorithm running on a machine” and that he was “seeking to register this computer-generated work as a work-for-hire to the owner of the Creativity Machine.”

The U.S. Copyright Office denied Thaler’s application primarily on the grounds that his work lacked the human authorship necessary to support a copyright claim. On a second request for reconsideration of refusal, “Thaler did not assert that the Work was created with contribution from a human author … [but that] the Office’s human authorship requirement is unconstitutional and unsupported by case law.” The U.S. Copyright Office once again denied the application. Upon receiving this decision, Thaler appealed the ruling to the U.S. District Court for the District of Columbia.

On appeal, Judge Beryl A. Howell reiterated that “human authorship is an essential part of a valid copyright claim.” Notably, Section 101 of the Copyright Act requires that a work have an “author” to be eligible for copyright. Drawing upon decades of Supreme Court case law, the Court concluded that the author must be human, for three primary reasons.

First, the Court stated that the government adopted the Copyright Clause of the U.S. Constitution to incentivize the creation of uniquely original works of authorship. This incentivization is often financial, and non-human actors, unlike human authors, do not require financial incentives to create. “Copyright was therefore not designed to reach” artificial intelligence systems.

Second, the Court pointed to the legislative history of the Copyright Act of 1976 as evidence against Thaler’s copyright claim. The Court looked to the Copyright Act of 1909’s provision that only a “person” could “secure copyright” for a work. Additionally, the Court found that the legislative history of the Copyright Act of 1976 fails to indicate that Congress intended to extend authorship to nonhuman actors, such as AI. To the contrary, the congressional reports stated that Congress sought to incorporate the “original work of authorship” standard “without change.”

Finally, the Court noted that case law has “consistently recognized” the human authorship requirement. The decision pointed to the U.S. Supreme Court’s 1884 opinion in Burrow-Giles Lithographic Company v. Sarony as support for a human-only authorship requirement. That case, upholding authorship rights for photographers, found it significant that the human creator, not the camera, “conceived of and designed the image and then used the camera to capture the image.”

Ultimately, this decision is consistent with recent case law and administrative opinions on this topic. The Copyright Office plans to issue guidance on AI and copyright issues in mid-2024, in response to a survey of AI industry professionals, copyright applicants, and legal professionals. One of Thaler’s main supporters in this legal battle is Ryan Abbott, a professor of law and health sciences at the University of Surrey in the UK and a prominent AI litigant. Abbott is the creator of the Artificial Inventor Project—a group of intellectual property lawyers and an AI scientist working on IP rights for AI-generated outputs. The Artificial Inventor Project is currently working on several other cases for Thaler, including attempts to patent two of the Creativity Machine’s other “authored” works. While the District Court’s decision seems to mark the end of Thaler’s quest to copyright A Recent Entrance to Paradise, the fight for AI authorship rights in copyright appears to be only beginning.

Your Face Says It All: The FTC Sends a Warning and Rite Aid Settles Down

By: Caroline Dolan

If someone were to glance at your face, they wouldn’t necessarily know if you won big in Vegas or if you’re silently battling a gambling addiction. When you stroll down the street, your face can conceal many a secret, even a lucrative side hustle. While facial recognition (“FR”) software is not a new innovation, deep pockets are pouring staggering sums of money into the FR market. Last year, the market was valued globally at $5.98 billion and is projected to grow at a compound annual growth rate of 14.9% through 2030. This rapid and bold deployment of facial recognition technology may make our faces more revealing than ever, transforming them into our most valuable—yet vulnerable—asset.

A Technical Summary for Non-Techies

Facial recognition uses software to assess similarities between faces and make identification determinations. Facial characterization goes further, classifying a face based on individual characteristics like gender, facial expression, and age. Through deep learning, artificial neural networks mimic how our brains process data. The network consists of layers of algorithms that process and learn from training data, like images or text, and eventually develop the ability to identify features and make comparisons.
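As a rough illustration of that comparison step, the sketch below assumes an embedding-based pipeline: a trained network converts each face image into a numeric vector, and two faces are treated as a match when their vectors are sufficiently similar. The vectors, values, and threshold here are made up for illustration; real FR systems are considerably more complex.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, ranging roughly from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real system these vectors would come from a trained neural network that
# maps a face image to an embedding; here they are made-up stand-ins.
probe_face = np.array([0.12, 0.85, -0.33, 0.40])
watchlist_face = np.array([0.10, 0.80, -0.35, 0.45])

MATCH_THRESHOLD = 0.9  # hypothetical; vendors tune this trade-off themselves
score = cosine_similarity(probe_face, watchlist_face)
print(f"similarity = {score:.3f}, match = {score >= MATCH_THRESHOLD}")
```

Everything downstream, including who ends up flagged as a “person of interest,” turns on how well those embeddings generalize and where that threshold is set.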

However, when the dataset used to train the FR model is unrepresentative of different genders and races, the result is a biased algorithm. Training data skewed toward certain features creates a critical weak spot in a model’s capabilities and can result in “overfitting,” wherein the machine learning model performs well on the training data but poorly on data that differs from what it was trained on. For example, a model trained mostly on images of men with Western features will likely struggle to make accurate determinations on images of East Asian women.
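One simple way to surface this weak spot, sketched below with entirely invented data, is to compute a model’s accuracy separately for each demographic group in a held-out test set; a large gap between groups signals the kind of bias described above.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, predicted_match, true_match).
# Real audits would use far larger, carefully labeled test sets.
results = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, True), ("group_b", False, True),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in results:
    total[group] += 1
    correct[group] += int(predicted == actual)

for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"{group}: accuracy = {accuracy:.0%} over {total[group]} samples")
```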

Data collection and curation pose their own set of challenges, but selection bias is a constant risk whether training data is collected from a proprietary large language model (“LLM”), which requires customers to purchase a license with restrictions, or from an open-source LLM, which is freely available and provides flexibility. Ensuring that training data represents a variety of demographics requires AI ethics awareness, intentionality, and potentially federal regulation.

The FTC Cracks Down

In December of 2023, Rite Aid settled with the FTC following the agency’s complaint alleging that the company’s deployment of FR software was reckless and lacked reasonable safeguards, resulting in false identifications and foreseeable harm. Between 2012 and 2020, Rite Aid employed an AI FR program to monitor shoppers without their knowledge and flag “persons of interest.” Those whose faces were deemed a match to one in the company’s “watchlist database” were confronted by employees, searched, and often publicly humiliated before being expelled from the store. 

The agency’s complaint under Section 5 of the FTC Act asserted that Rite Aid recklessly overlooked the risk that its FR software would misidentify people based on gender, race, or other demographics. The FTC stated that “Rite Aid’s facial recognition technology was more likely to generate false positives in stores located in predominantly Black and Asian neighborhoods than in predominantly white communities, where 80% of Rite Aid stores are located.” This conduct also violated Rite Aid’s 2010 Security Order, which required the company to oversee its third-party software providers.

The recent settlement prohibits Rite Aid from implementing AI FR technology for five years. It also requires the company to destroy all data that the system has collected. The FTC’s stipulated Order imposes various comprehensive safeguards on “facial recognition or analysis systems,” defined as “an Automated Biometric Security or Surveillance System that analyzes . . . images, descriptions, recordings . . . of or related to an individual’s face to generate an Output.” If Rite Aid later seeks to implement an Automated Biometric Security or Surveillance System, the company must adhere to numerous forms of monitoring, public notices, and data deletion requirements based on the “volume and sensitivity” of the data. Given that Rite Aid filed Chapter 11 bankruptcy in October of 2023, the settlement is pending approval by the bankruptcy court while the FTC’s proposed consent Order goes through public notice and comment.

Facing the Future

Going forward, it is expected that the FTC will remain “vigilant in protecting the public from unfair biometric surveillance and unfair data security practices.” Meanwhile, companies may be incentivized to embrace AI ethics as a new component of “Environmental, Social, and Corporate Governance” while legislators wrestle with how to ensure that automated decision-making technologies evolve responsibly and do not perpetuate discrimination and harm.

2023: A Roller Coaster Toward Unionization for Game Developers?

By: Kevin Vu

No doubt, 2023 has been a “blockbuster year for video games.” From the Game Awards breaking viewership records to the long-anticipated Baldur’s Gate 3 winning several awards, including Game of the Year, and the redemption of Cyberpunk 2077, it’s evident that 2023 will be celebrated for its many great releases. But one little-told story of gaming in 2023 is the massive wave of layoffs that hit many developers. Perhaps the layoffs were inevitable, given the enormous costs that top video games incur and the fact that some notable games generated only half as much revenue as anticipated.

But there may be an even more fundamental reason for this roller coaster of a year in gaming. Tech, the umbrella industry for gaming, has historically been resistant to unionization. As layoffs continue across the tech industry, the call for unionization has grown louder and louder. With the gaming industry celebrating one of its most consequential years, it’s time to ask whether unionization would ultimately benefit the industry.

Reasons to Unionize

Traditional reasons for unionization include higher wages, a safer workplace, job stability, and collective bargaining. Both tech developers and game developers have traditionally earned six-figure salaries, which largely takes the wage factor off the table. However, the remaining factors all suggest that the gaming industry should unionize. Riot Games, Activision Blizzard, and other companies within the video game space are notorious for workplace harassment. A union can help advocate for those workers, lead to greater enforcement of workplace harassment and discrimination laws, and ultimately help create a culture where workplace harassment is no longer the norm. And with gaming companies notorious for their long hours (dubbed “crunch”), negotiating for better conditions through unions seems obvious. But perhaps the most compelling reason is the widespread layoffs of 2023, as unions can help secure better severance pay as employees transition to other endeavors.

Reasons Not to Unionize

However, various arguments have emerged against unionization in gaming, including the rapid development of technology, blurred lines between management and workers, and fears of stifling the creative process. Ultimately, though, many of those arguments seem strained. One popular emerging technology, virtual reality, has many of its roots in video game development. That technology has since had various successes in helping doctors, patients, incarcerated individuals, and many others. Now, the rapid development of technology seems to threaten game developers themselves. Companies are beginning to use generative AI in their video games, whether for voice acting or promotional art. Indeed, some developers are now promising to use artificial intelligence to develop games, too. Using the advancement of technology as a reason to stymie the workers who helped create that technology seems backhanded at best.

On an even more fundamental level, shifting to generative technology to develop video games seems counterintuitive, given that video games are a creative product. What creativity exists with AI? This year in games should be telling companies that developers are needed and should be treasured. Baldur’s Gate 3, 2023’s Game of the Year, spent nearly three years in early access, where developers continued to work on the game as the public played it before its official release. Zelda: Tears of the Kingdom, a runner-up for that same award, was reportedly finished nearly a year before release, with that final year spent polishing the game. Cyberpunk 2077, a game with a tumultuous start, won 2023’s Best Ongoing Game award because its developers ultimately believed in their product. In an industry where some of the biggest games are passion projects made by small teams, justifying anti-unionization sentiment by citing creativity while adopting technology that stifles that very creativity is disingenuous.

What Now?

It seems evident that video game developers should seriously consider unionization. Despite a big year in gaming releases, the industry is still threatened by layoffs, and crunch conditions persist. Video game unionization is not new, either. The first multi-department video game union, which included developers, emerged in 2023. Quality assurance workers, the people who test games to help ship a more polished product, have also begun unionizing. Other creatives in the video game space, like voice actors, have taken collective action as well. Unions have been effective in these creative spaces and in addressing technology. For example, the Writers Guild of America’s strike ended on favorable terms for screenwriters, including limits on the use of AI. Ultimately, video game developers should look at their industry and ask whether the current climate is sustainable.

The Complexities of Racism in AI Art

By: Imaad Huda

AI generative art is a recent advance in consumer and social artificial intelligence. Anybody can type a few words into a program, and, within seconds, the AI will generate an image that roughly depicts that prompt. AI generative art can incorporate a number of artistic styles to produce digital art without anyone lifting a pen. While many users are simply fascinated by art being created by their computers, few are aware of how the AI generates its images or of the implications of what it produces. Now that AI art programs have made their way into consumer hands, users have noticed stereotypical and racialized depictions in their auto-generated images. Prompts that reference employment, education, or history often produce images that reflect racial bias. As AI becomes more mainstream, racist and sexist depictions by AI will only serve to entrench long-standing stereotypes, and the lack of a legal standard will only make the matter worse.

Quantifying the Racism 

Leonardo Nicoletti and Dina Bass of Bloomberg note that generative images take “human” biases to the extreme. In an analysis of 5,000 images generated with the Stable Diffusion AI, depictions produced by prompts for people with higher-paying jobs were compared with those for people with lower-paying jobs. The result was an overrepresentation of people of color for lower-paying jobs. Prompts including “fast-food workers” yielded an image of a darker-skinned person seventy percent of the time, even though Bloomberg noted that seventy percent of fast-food workers are white. Meanwhile, prompts for higher-paying jobs, such as “CEO” and “lawyer,” generated images of people with lighter skin more than eighty percent of the time, roughly proportional to the eighty percent of people who actually hold those jobs. Stable Diffusion showed the most bias when depicting occupations held by women, “amplify[ing] both gender and racial stereotypes.” Among all generations for high-paying jobs, only one image, that of a judge, was of a person of color. Commercial facial-recognition software, a tool specifically designed to identify people’s gender, had “the lowest accuracy on darker skinned people,” presenting a problem when such software is “implemented for healthcare and law enforcement.”
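The kind of tally underlying these findings is straightforward to reproduce. The sketch below uses invented labels rather than Bloomberg’s data and simply counts how often each skin-tone category appears among the images generated for a given prompt.

```python
from collections import Counter

# Hypothetical per-image labels; Bloomberg's actual dataset, prompts, and
# skin-tone categories differ from this toy example.
generated = {
    "fast-food worker": ["darker", "darker", "lighter", "darker", "darker"],
    "CEO": ["lighter", "lighter", "lighter", "darker", "lighter"],
}

for prompt, labels in generated.items():
    counts = Counter(labels)
    n = len(labels)
    rates = ", ".join(f"{tone}: {count / n:.0%}" for tone, count in counts.most_common())
    print(f"{prompt}: {rates}")
```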

Stable Diffusion was also biased in its depictions of criminality. For the prompt “inmate,” the AI generated a person of color eighty percent of the time, even though only half of the inmates in the U.S. are people of color. Bloomberg notes that the rates for generating criminals could be skewed by racial bias in U.S. “policing and sentencing” mechanisms.

The Legality

Is racism in AI legal? The answer is complicated for a number of reasons, not least because the law surrounding AI generative imaging is new. In 2021, the Federal Trade Commission (FTC) declared the use of discriminatory algorithms to make automated decisions illegal, citing opportunities for “jobs, housing, education, or banking.” New York City has also enacted its own Local Law 144, which requires that AI tools undergo a “bias audit” before aiding in employment decisions. The National Law Review states that a bias audit includes a calculation of the “rate at which individuals in a category are either selected to move on or assigned a classification” by the hiring tool. The law also states that audits must “include historical data in their analysis,” and the results of the audit “must be made publicly available.”
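As a rough illustration of the calculation that language describes (not the official audit methodology under Local Law 144), a selection-rate comparison across categories might look like the following, with invented numbers:

```python
# Hypothetical applicant counts for an automated screening tool.
# Local Law 144's actual audit requirements are more detailed than this sketch.
screened = {"category_a": 200, "category_b": 180}
advanced = {"category_a": 90, "category_b": 45}

selection_rates = {c: advanced[c] / screened[c] for c in screened}
best_rate = max(selection_rates.values())

for category, rate in selection_rates.items():
    impact_ratio = rate / best_rate  # each category's rate relative to the highest
    print(f"{category}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f}")
```

A low impact ratio for one category relative to another is the kind of disparity such an audit is meant to surface and publish.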

The advancement of anti-racism laws regulating AI tools represents progress. However, how these laws pertain to AI art has yet to be seen. Laws concerning AI-generated art are currently focused on theft, as AI art often copies the originality and stylistic choices of human artists. The racial depictions of AI art have not yet been addressed legally, but they could perpetuate stereotypes when used in an educational context, which the FTC prohibits under its 2021 declaration. Judges and lawmakers may not yet see AI art’s contribution to systemic racism as a legal issue that could stand in the courtroom.

What’s The Solution?

The bias in generated art stems from its algorithm, which, depending on the user’s prompt, pulls together images that match a description and style and develops them into a new image. Drawing on prompts from many different users and on the data available across the internet, the algorithm continuously produces these images. Almost a decade ago, Google postponed its consumer AI search program because images of Black people were being filtered into searches for “gorillas” and “monkeys.” The reason for this, according to former Google employees, was that Google had not trained its AI with enough images of Black people. The problem here, again, could be a lack of representation, from too few employees of color working on AI algorithms to inadequate representation in the datasets used to generate images. However, a simple fix to increase representation is not so easy. AI computing is built on models that already exist; a new model will be based on an older model, and the biases present in the older algorithm may persist. As issues with machines get more complicated, so do the solutions. Derogatory depictions should not be allowed to stand in the absence of a legal standard, and lawmakers should take the necessary measures to end AI discrimination before it becomes a true social problem.