Protecting Privacy in Libraries as AI Adoption Accelerates

By: Anusha Nasrulai

Like picking what movie to watch, what restaurant to eat at, or where to go on vacation, what we read next is often recommended to us by personalization algorithms. Social media and reading platforms such as Goodreads already process user data to generate recommended content. Recently, the library catalog browsing app Libby announced its own book recommendation feature, Inspire Me.

Inspire Me recommends books to users based on their prompts or titles previously saved in the app. Originally announced as an optional feature, Inspire Me now appears prominently at the top of the home screen when users open the Libby app. It recommends books available through the catalogs of the libraries with which users have linked accounts. When the feature was first announced, users and libraries pushed back, voicing concerns about forced AI adoption and diminished patron privacy. OverDrive, Libby’s parent company, states that readers’ personally identifying data and reading activity are not provided to the AI model.

Libraries work with vendor platforms, distributors, and publishers to deliver library services, particularly for e-materials. Despite popular backlash, vendors are expanding development of AI integrations. OverDrive CEO Steve Potash has announced goals to use AI to “match users to content across its platforms,” which also include the streaming platform Kanopy and the K-12 education platform Sora. Other subscription vendors, such as OCLC, EBSCO, and Clarivate, have introduced AI features for content recommendation, enhanced search, text summaries, and AI-generated research assistants. Beyond externally marketed AI tools, vendors are incorporating AI into their internal workflows for “building, improving, and refining products.” Libraries must now balance their duty to protect patron data and privacy against their mission to provide access to digital resources.

Legal Regulations

The integration of AI by vendor platforms poses new privacy considerations for libraries. AI introduces new risk points at data collection, processing, training, and deployment.

The United States currently has no comprehensive AI or data privacy laws. Instead, states have passed dozens of laws regulating certain AI use cases. To date, six states have passed cross-sectoral AI governance laws that apply to commercial entities. Vendors are likely subject to state-level AI and data privacy laws that target commercial entities. Libraries can leverage legal regulations to negotiate with vendors for stronger privacy protections. Trends in AI regulation show that states are increasingly passing and updating AI legislation amid legal challenges and an absence of federal regulation.

AI Governance and Contracting

In light of legal uncertainties, contracts and licenses are a key opportunity for imposing guardrails on AI use. These agreements address how vendors and third parties can collect, process, and disclose user data.

More often than not, vendor agreements do not explicitly disclose internal use of AI tools or AI model training. Library Futures, a research and policy organization, and its staff attorney Layla Maurer have presented on this issue, flagging that broad language around operational mechanisms and data usage may permit vendors to train and deploy AI models using patron and institutional data. When reviewing vendor contracts for AI usage, libraries should focus on:

  • The vendor’s rights around data use and sharing, including with third parties. Use of patron data for “analytics” or “development and improvement of services” may include AI training.
  • References to third-party applications or tools, processors, or contractors necessary to carry out services under the agreement.
  • Whether there is a defined data retention period and what happens to patron data when the contract ends.

Libraries can strengthen contract terms by including language requiring compliance with applicable federal and state laws, as well as with industry standards such as ISO and NIST. In addition, libraries may negotiate with vendors to:

  • Define user rights to data, including the right to opt out of nonessential data collection and the right to delete their data.
  • Limit secondary uses of data, including for training internal or external AI tools.
  • Disclose third-party partners and whether data is shared with or sold to third parties.
  • Conduct privacy and security audits.
  • Establish a data retention period and a protocol for destroying data at the end of that period.

As said by attorney Layla Maurer, “Updating contract language to allow flexibility around software development needs while retaining safeguards for what the licensee… wants to protect is not just an expeditious way to reach an agreement with a software vendor, it’s also a strategy that helps ensure the licensee can continue to safely use the software despite future legislative changes provided the vendor updates their software in a manner consistent with the intent of the legislation.”

Future-proofing

Digital lending and digital services are a popular means of accessing library materials, but they also raise new challenges for protecting patron privacy. As AI becomes embedded in these services, libraries need to adopt AI guardrails in contracting to manage the harms and opportunities related to AI use in libraries, particularly around privacy.

Beyond the Billable Hour: How AI is Forcing Legal Pricing Reform

By: Joyce Jia

Pricing reform to replace the billable hour has long been debated in the legal industry. Yet as software companies increasingly shift toward outcome-based pricing for AI agents—charging only when measurable value is delivered—the legal profession remains anchored in time-based billing and has been slow to translate technological adoption into pricing change. The Thomson Reuters Institute’s recently released 2026 Report on the State of the US Legal Market (“2026 Legal Market Report”) revealed that average law firm spending on technology grew “an astonishing 9.7% … over the already record growth of 2024,” while “a full 90% of all legal dollars still flow through standard hourly rate arrangements.” This growing disconnect between technological investment and monetization reflects not merely a billing challenge, but a deeper crisis in how legal value is defined, allocated, and captured in the AI era.

How Did We Get Here?

The billable hours system wasn’t always dominant. As documented by Thomson Reuters Institute’s James W. Jones, hourly billing emerged in the 20th century but remained relatively peripheral until the 1970s, when the rapid growth of corporate in-house legal departments demanded standardized fees and greater transparency from outside counsels’ previously “amorphous” billing practices. The logic was straightforward: time equaled work, work equaled measurable productivity, and productivity justified legal spending for in-house departments (and conversely, profitability for law firms).

That logic, however, is increasingly strained. As AI creates what Clio CEO Jack Newton describes as a “structural incompatibility,” the revenue model built on time becomes increasingly hard to justify. According to Thomson Reuters’ 2025 Legal Department Operations Index, corporate legal departments face mounting pressure to “do more with less.” Nearly three-quarters of respondents plan to deploy advanced technology to automate legal tasks and reduce costs, while one-quarter are expanding their use of alternative fee arrangements (AFAs) to optimize operations and control costs. As the 2026 Legal Market Report observes, general counsel now scrutinize matter budgets line by line. Seeing their own teams leverage AI to perform routine work “at a fraction of the cost,” they question why outside counsel charging premium hourly rates are not delivering comparable efficiencies. Unsurprisingly, corporate legal departments have led their outside firms in AI adoption since 2022.

Is AI a “Margin Eroder or Growth Accelerator”?  

Research by Professor Nancy Rapoport and Legal Decoder founder Joseph Tiano frames this tension as a central paradox of AI adoption. When an attorney completes a discovery review using AI in 8 hours instead of 40, firm revenue under the hourly model could theoretically drop by 80 percent even as client outcomes improve. This looks like a productivity trap: AI-driven efficiency directly cannibalizing revenue. But that framing is overly narrow. With careful design, restructuring billing models around technology-enabled premiums need not shrink revenue; instead, it can enhance productivity while strengthening client trust through greater transparency and efficiency. It also enables a more equitable sharing of the benefits of technological advancement and a more deliberate allocation of the risks inherent in legal matters.
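The arithmetic behind this paradox, and one way out of it, can be sketched in a few lines. The figures below are purely hypothetical (a $500 hourly rate and a $12,000 fixed fee are illustrative assumptions, not numbers from the research cited above); they show how a fixed fee priced between the pre-AI and post-AI hourly bills can leave both the client and the firm better off:

```python
# Toy illustration of the "productivity trap": the same matter billed
# hourly before and after AI speeds up the work. All figures hypothetical.
HOURLY_RATE = 500  # assumed partner rate, USD

def hourly_revenue(hours, rate=HOURLY_RATE):
    return hours * rate

before = hourly_revenue(40)  # manual discovery review
after = hourly_revenue(8)    # the same task completed with AI assistance

drop = (before - after) / before
print(f"Hourly revenue falls from ${before:,} to ${after:,} ({drop:.0%} drop)")

# An outcome-based alternative: a fixed fee set below the old hourly
# bill (so the client saves) but above the new hourly bill (so the
# firm keeps a margin on its efficiency gain).
fixed_fee = 12_000
client_savings = before - fixed_fee    # vs. the pre-AI hourly bill
firm_margin_gain = fixed_fee - after   # vs. the post-AI hourly bill
print(f"Fixed fee ${fixed_fee:,}: client saves ${client_savings:,}, "
      f"firm gains ${firm_margin_gain:,} over post-AI hourly billing")
```

Any fee between the two hourly figures splits the efficiency gain; where it lands is precisely the value-and-risk allocation question the rest of this piece takes up.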

Recapturing the Lost Value of Legal Inefficiencies

According to the Thomson Reuters Institute’s 2023 research on billing practices, the average law firm partner writes down over 300 hours annually, nearly $190,000 in lost potential fees. These write-offs typically involve learning curves in unfamiliar legal areas, time-intensive research, drafting various documents and meeting notes, or correcting associates’ work. Partners often decline to bill clients for such work when it exceeds anticipated time expectations, even though it remains billable in principle. This is precisely where AI excels. By reducing inefficiencies and accelerating routine tasks, AI allows firms to recapture written-off value while offering clients more predictable budgets and higher-quality outputs. 

Justifying Higher Hourly Rates Through AI-Enhanced Value

Paradoxically, AI may also support higher hourly rates for certain categories of legal work. As Rapoport and Tiano argue, AI enables lawyers to deliver “unprecedented insights” through deeper, more comprehensive, and more reliable analysis. By rapidly synthesizing historical case data, identifying patterns, and predicting outcomes, AI may elevate legal judgment in ways that time and cost constraints previously rendered impractical. In this context, premium rates can remain justifiable for complex, strategic work where human judgment and client relationships prove irreplaceable.

Extending Contingency (Outcome-Based) Fees Beyond Litigation

Beyond traditional litigation contingency fees, Rapoport and Tiano identify “disputes, enforcement actions, or complex transactions” as areas ripe for outcome-based pricing, where firms can “shoulder more risk for greater upside.” The term “disputes” may be understood broadly to encompass arbitration, debt collection, and employment-related conflicts, such as discrimination or wage claims.

An even more underexplored application lies in regulatory compliance, a domain characterized by binary, verifiable outcomes. Unlike litigation success or transactional value, compliance presents even clearer metrics: GDPR compliance versus violation, SOX compliance versus deficiency, patent prosecution approval versus rejection. This creates opportunities for compliance-as-a-service models that charge for compliance or certification outcomes rather than hours worked. Where AI enables systematic, scalable review, risk allocation becomes explicit: the firm guarantees compliance, and the client pays a premium above hourly equivalents for that assurance.

New Revenue Streams in the AI Era

The rise of data-driven AI also creates entirely new categories of legal work. As Rapoport and Tiano identify, “AI governance policy and advisories, algorithmic bias audits, data privacy by design” all represent emerging and durable revenue streams. Moreover, as AI regulatory frameworks continue to evolve across jurisdictions, clients will increasingly seek counsel for these specialized services, where interdisciplinary expertise at the intersection of law and technology, combined with sound professional judgment and strategic foresight, remains indispensable for navigating both compliance obligations and long-term risk.

The Hybrid Solution: Tiered Value Frameworks

Forward-thinking firms are increasingly experimenting with hybrid AFAs that blend fixed fees, subscriptions, outcome-based pricing, and legacy hourly billing into tiered value offerings. Ultimately, the legal industry’s pricing transformation is not solely about technology. It is about candidly sharing the gains created by technology and confronting how risk should be allocated when AI reshapes legal work.

As AI simultaneously frees lawyers’ time and creates new revenue opportunities, law firms face a defining challenge: articulating, quantifying, and operationalizing a value-and-risk allocation framework capable of replacing the billable hour and sustaining the economics of legal practice for the next generation.

AI’s Creative Ambitions: A Case Review of Thaler v. Perlmutter (2023)

By: Stella B. Haynes Kiehn

Is it possible for AI to achieve genuine creativity? Inventor and self-dubbed “AI Director” Dr. Stephen Thaler (“Thaler”) has spent the past several years attempting to prove to the U.S. Copyright Office not only that AI can be creative, but also that AI can create works capable of meeting copyright standards.

On November 3, 2018, Thaler filed an application to register a copyright claim for the work A Recent Entrance to Paradise. In the application, Thaler listed “The Creativity Machine” as the author of the work and himself as the copyright claimant. According to Thaler, A Recent Entrance to Paradise was drawn and named by the Creativity Machine, an AI program. The artwork “depicts a three-track railway heading into what appears to be a leafy, partly pixelated tunnel.” In his application, Thaler noted that the work “was autonomously created by a computer algorithm running on a machine” and that he was “seeking to register this computer-generated work as a work-for-hire to the owner of the Creativity Machine.”

The U.S. Copyright Office denied Thaler’s application primarily on the grounds that his work lacked the human authorship necessary to support a copyright claim. On a second request for reconsideration of refusal, “Thaler did not assert that the Work was created with contribution from a human author … [but that] the Office’s human authorship requirement is unconstitutional and unsupported by case law.” The U.S. Copyright Office once again denied the application. Upon receiving this decision, Thaler appealed the ruling to the U.S. District Court for the District of Columbia.

On appeal, Judge Beryl A. Howell reiterated that “human authorship is an essential part of a valid copyright claim.” Notably, Section 101 of the Copyright Act requires that a work have an “author” to be eligible for copyright. Drawing upon decades of Supreme Court case law, the Court concluded that the author must be human, for three primary reasons.

First, the Court stated that the Copyright Clause of the U.S. Constitution was adopted to incentivize the creation of uniquely original works of authorship. This incentivization is often financial, and non-human actors, unlike human authors, do not require financial incentives to create. “Copyright was therefore not designed to reach” artificial intelligence systems.

Second, the Court pointed to the legislative history of the Copyright Act of 1976 as evidence against Thaler’s copyright claim. The Court looked to the Copyright Act of 1909’s provision that only a “person” could “secure copyright” for a work. Additionally, the Court found that the legislative history of the Copyright Act of 1976 fails to indicate that Congress intended to extend authorship to nonhuman actors, such as AI. To the contrary, the congressional reports stated that Congress sought to incorporate the “original work of authorship” standard “without change.”

Finally, the Court noted that case law has “consistently recognized” the human authorship requirement, pointing to the U.S. Supreme Court’s 1884 opinion in Burrow-Giles Lithographic Company v. Sarony. That case, which upheld authorship rights for photographers, found it significant that the human creator, not the camera, “conceived of and designed the image and then used the camera to capture the image.”

Ultimately, this decision is consistent with recent case law and administrative opinions on the topic. In mid-2024, the Copyright Office plans to issue guidance on AI and copyright issues in response to a survey of AI industry professionals, copyright applicants, and legal professionals. In relation to the Creativity Machine, one of Thaler’s main supporters in this legal battle is Ryan Abbott, a professor of law and health sciences at the University of Surrey in the UK, and a prominent AI litigant. Abbott is the creator of the Artificial Inventor Project—a group of intellectual property lawyers and an AI scientist working on IP rights for AI-generated outputs. The Artificial Inventor Project is currently working on several other cases for Thaler, including attempting to patent two of the Creativity Machine’s other “authored” works. While the District Court’s decision seems to mark the end of Thaler’s quest to copyright A Recent Entrance to Paradise, the fight for AI authorship rights in copyright appears to be only beginning.

Your Face Says It All: The FTC Sends a Warning and Rite Aid Settles Down

By: Caroline Dolan

If someone were to glance at your face, they wouldn’t necessarily know whether you won big in Vegas or are silently battling a gambling addiction. When you stroll down the street, your face can conceal many a secret, even a lucrative side hustle. While facial recognition (“FR”) software is not a new innovation, deep pockets are pouring a staggering amount of money into the FR market. Last year, the market was valued globally at $5.98 billion, and it is projected to grow at a compound annual growth rate of 14.9% through 2030. This rapid and bold deployment of facial recognition technology may therefore make our faces more revealing than ever, transforming them into our most valuable—yet vulnerable—asset.

A Technical Summary for Non-Techies

Facial recognition uses software to assess similarities between faces and determine whether two images depict the same person. Facial characterization goes further, classifying a face based on individual characteristics like gender, facial expression, and age. Through deep learning, artificial neural networks mimic how our brains process data. A neural network consists of layers of algorithms that process and learn from training data, such as images or text, and eventually develop the ability to identify features and make comparisons.

However, when the dataset used to train an FR model is unrepresentative of different genders and races, the result is a biased algorithm. Training data skewed toward certain features creates a critical weak spot in a model’s capabilities and can result in “overfitting,” wherein the machine learning model performs well on the training data but poorly on data that differs from the data on which it was trained. For example, a model trained mostly on images of men with Western features will likely struggle to make accurate determinations on images of East Asian women.
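To make that failure mode concrete, here is a deliberately simplified, hypothetical sketch. Real FR systems use deep neural networks, not a one-dimensional threshold, and all of the numbers below are invented; the point is only the mechanism: a model fit to training data dominated by one group can perform perfectly on that group and no better than chance on an underrepresented one.

```python
# Toy illustration (NOT a real facial recognition system): a 1-D
# "classifier" learns a match/no-match threshold from training data.
def train_threshold(samples):
    """samples: list of (similarity_score, label); label 1 = "match"."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    # The midpoint between class means becomes the decision boundary.
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, samples):
    correct = sum((x > threshold) == bool(y) for x, y in samples)
    return correct / len(samples)

# Group A (well represented): "match" scores cluster near 3, "no match" near 1.
group_a = [(3.0, 1), (3.2, 1), (2.8, 1), (1.0, 0), (0.9, 0), (1.1, 0)]
# Group B (underrepresented): the same two classes cluster lower overall.
group_b = [(1.4, 1), (1.5, 1), (1.3, 1), (0.2, 0), (0.3, 0), (0.1, 0)]

# The training set is dominated by Group A -- the bias in the data.
train = group_a * 10 + group_b[:1]
t = train_threshold(train)

print(f"Group A accuracy: {accuracy(t, group_a):.0%}")  # perfect
print(f"Group B accuracy: {accuracy(t, group_b):.0%}")  # every "match" missed
```

The learned threshold sits where Group A's classes separate, so every genuine match in Group B falls below it: the model has overfit to the overrepresented group, which is the same dynamic behind demographically skewed false positives in deployed systems.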

Data collection and curation pose their own set of challenges, but selection bias is a constant risk whether training data comes from a proprietary large language model (“LLM”), which requires customers to purchase a license with restrictions, or from an open-source LLM, which is freely available and provides flexibility. Ensuring that training data represents a variety of demographics requires AI ethics awareness, intentionality, and potentially federal regulation.

The FTC Cracks Down

In December of 2023, Rite Aid settled with the FTC following the agency’s complaint alleging that the company’s deployment of FR software was reckless and lacked reasonable safeguards, resulting in false identifications and foreseeable harm. Between 2012 and 2020, Rite Aid employed an AI FR program to monitor shoppers without their knowledge and flag “persons of interest.” Those whose faces were deemed a match to one in the company’s “watchlist database” were confronted by employees, searched, and often publicly humiliated before being expelled from the store. 

The agency’s complaint under Section 5 of the FTC Act asserted that Rite Aid recklessly overlooked the risk that its FR software would misidentify people based on gender, race, or other demographics. The FTC stated that “Rite Aid’s facial recognition technology was more likely to generate false positives in stores located in predominantly Black and Asian neighborhoods than in predominantly white communities, where 80% of Rite Aid stores are located.” This conduct also violated Rite Aid’s 2010 Security Order, which required the company to oversee its third-party software providers.

The recent settlement prohibits Rite Aid from implementing AI FR technology for five years. It also requires the company to destroy all data that the system has collected. The FTC’s stipulated Order imposes various comprehensive safeguards on “facial recognition or analysis systems,” defined as “an Automated Biometric Security or Surveillance System that analyzes . . . images, descriptions, recordings . . . of or related to an individual’s face to generate an Output.” If Rite Aid later seeks to implement an Automated Biometric Security or Surveillance System, the company must adhere to numerous forms of monitoring, public notices, and data deletion requirements based on the “volume and sensitivity” of the data. Given that Rite Aid filed Chapter 11 bankruptcy in October of 2023, the settlement is pending approval by the bankruptcy court while the FTC’s proposed consent Order goes through public notice and comment.

Facing the Future

Going forward, it is expected that the FTC will remain “vigilant in protecting the public from unfair biometric surveillance and unfair data security practices.” Meanwhile, companies may be incentivized to embrace AI ethics as a new component of “Environmental, Social, and Corporate Governance” while legislators wrestle with how to ensure that automated decision-making technologies evolve responsibly and do not perpetuate discrimination and harm.

2023, A Roller Coaster Towards Unionization for Game Developers?

By: Kevin Vu

No doubt, 2023 has been a “blockbuster year for video games.” From the Game Awards breaking viewership records, to the long-anticipated Baldur’s Gate 3 winning several awards, including Game of the Year, to the redemption of Cyberpunk 2077, it’s evident that 2023 will be celebrated for its many great releases. But one little-told story of gaming in 2023 is the massive wave of layoffs that has hit many developers. Perhaps layoffs were inevitable, given the enormous costs that top video games incur and the fact that some notable games generated only half as much revenue as anticipated.

But there may be an even more fundamental reason for this rollercoaster of a year in gaming. Tech, the umbrella industry for gaming, has historically been resistant to unionization. As layoffs continue across the tech industry, calls for unionization have grown louder and louder. With the gaming industry celebrating one of its most consequential years, it’s time to ask whether unionization would ultimately benefit the industry.

Reasons to Unionize

Traditional reasons for unionization include higher wages, a safer workplace, job stability, and collective bargaining. Both tech developers and game developers have traditionally earned six-figure salaries, which diminishes the wage factor. The remaining factors, however, suggest that the gaming industry should unionize. Riot Games, Activision Blizzard, and other companies in the video game space are notorious for workplace harassment. A union can help advocate for those workers, lead to greater enforcement of workplace harassment and discrimination laws, and ultimately help create and sustain a culture where workplace harassment is no longer the norm. And with gaming companies notorious for their long hours (dubbed “crunch”), negotiating for better conditions through unions seems obvious. But perhaps the most compelling reason is the widespread layoffs of 2023, as unions can help secure better severance pay as employees transition to other endeavors.

Reasons Not to Unionize

However, various arguments have emerged against unionization in gaming, including the rapid development of technology, blurred lines between management and workers, and fears of stifling the creative process. Ultimately, though, many of these arguments seem strained. One popular emerging technology, virtual reality, has many of its roots in video game development. That technology has since seen successes in helping doctors, patients, incarcerated individuals, and many others. Now, the rapid development of technology seems to threaten game developers themselves. Companies are beginning to use generative AI for their video games, whether for voice acting or promotional art. Indeed, some developers are now promising to use artificial intelligence to develop games, too. Using the advancement of technology as a reason to stymie the workers who helped create that technology seems backhanded at best.

On an even more fundamental level, shifting to generative technology to develop video games seems counterintuitive, given that video games are a creative product. What creativity exists within AI? This year in games should be telling companies that developers are needed and should be treasured. Baldur’s Gate 3, 2023’s Game of the Year, spent nearly three years in early access, where developers continued to refine the game as the public played it before its official release. Zelda: Tears of the Kingdom, a runner-up for the same award, was content-complete nearly a year before release, with that final year spent polishing the game. Cyberpunk 2077, a game with a tumultuous start, won 2023’s Best Ongoing Game award because its developers ultimately believed in their product. In an industry where some of the biggest games are passion projects made by small teams, citing creativity to justify anti-union sentiment while adopting technology that stifles that very creativity is disingenuous.

What Now?

It seems evident that video game developers should seriously consider unionization. Despite a big year in gaming releases, the industry remains threatened by layoffs, and crunch conditions persist. Video game unionization is not new, either. The first multi-department video game union, which included developers, emerged in 2023. Quality assurance workers, who test games to deliver a more polished product, have also begun unionizing. Other creatives in the video game space, like voice actors, have taken collective action as well. Unions have been effective in these creative spaces and in addressing technology. For example, the Writers Guild of America’s strike ended in favorable terms for screenwriters, including limits on the use of AI. Ultimately, video game developers should look at their industry and ask whether the current climate is sustainable.