The IP Confidentiality Crisis: Why Your Patent Drafts Could Be Training Your Competitor’s AI

By Francis Yoon

The Ultimate Act of Discretion

The process of drafting a patent application is the ultimate act of discretion. Before an invention is filed, its core design, methodology, and advantages are protected as confidential trade secret information. Today, a powerful new tool promises to revolutionize this process: generative AI and large language models (LLMs). These models can instantly transform complex invention disclosures into structured patent claims, saving countless hours. However, when legal professionals feed highly sensitive information into public LLMs like ChatGPT or Gemini, they unwittingly expose their clients’ most valuable intellectual property (IP) to an unprecedented security risk. This convenience can create a massive, invisible information leak, turning a law firm’s desktop into a prime data source for the very AI models the firm relies on.

The Black Box: How Confidentiality is Broken

The core danger lies in how these AI systems learn and the resulting threat to patent novelty under 35 U.S.C. § 102, which requires that an invention be new and not previously known or publicly disclosed. When a user submits text to a public LLM, that input often becomes part of the model’s training data or is used to improve its services. Confidential patent information fed into the model for drafting assistance may be logged, analyzed, and integrated into the model’s knowledge base. This risk is formalized in the provider’s terms of service.

While enterprise-level accounts offered by companies like OpenAI or Google typically promise not to use customer input for training by default, free or standard professional tiers usually lack this guarantee unless users proactively opt out. If a lawyer uses a personal subscription to draft a patent claim, they may inadvertently transmit a client’s IP directly to a third-party server, violating their professional duties of care and confidentiality and potentially exposing their firm to a malpractice claim. This conflict establishes the central legal issue: reliance on public AI creates a massive “Black Box” problem. The invention is disclosed to an opaque system whose ultimate use of that data is neither verifiable nor auditable by the user.

The Novelty Risk: AI as Inadvertent Prior Art

Beyond breaching confidentiality, this practice fundamentally endangers patentability by jeopardizing the invention’s novelty. Novelty is a fundamental requirement for patentability, the legal status an invention must achieve to receive patent protection. The most critical risk is inadvertent public disclosure, which creates prior art—any evidence that an invention is already known or publicly available—and thus invalidates the patent. Once an invention’s confidential details are used to train a widely accessible public model, it may no longer be considered “new” or “secret.” This action could be interpreted as a public disclosure: the invention’s core teaching has been shared with a third party (the AI system) under terms that do not guarantee perpetual confidentiality. This could destroy both the invention’s novelty and the potential for trade secret protection. Furthermore, generative AI can be prompted to generate vast numbers of plausible technical variations from a limited technical disclosure. If these AI-generated outputs are published, they can become valid prior art. A human inventor’s subsequent application may then be rejected because the AI has, in theory, already publicly disclosed a similar concept, rendering the human’s invention unpatentable as non-novel or obvious.

The Legal Hot Potato: IP vs. Contract

When confidentiality is breached through a public AI model, recovering the invention is extremely difficult. If a client’s trade secret is exposed, the client loses the protection entirely, as the secret is no longer “not generally known.” Suing the LLM provider for trade secret misappropriation requires proving that the provider improperly acquired the secret and used it against the owner’s interests. This is challenging because the provider’s legal team can argue the input was authorized under the contractual terms accepted by the user. The attorney who entered the prompt is typically held liable for the breach of confidence. However, the firm has no clear recourse against the LLM provider, as the provider’s liability is severely limited by contract. Often, these liability-limiting clauses cap damages at a minimal amount or specifically disclaim liability for consequential damages, like intellectual property loss. The fragmentation of this liability leaves the inventor exposed while the AI company is shielded by its own terms.

To combat this systemic problem, legal scholars have advocated for imposing a duty of loyalty on tech companies, forcing them to legally prioritize user confidentiality above their own financial interests. This echoes the mandates found in modern privacy law, such as the California Consumer Privacy Act’s rules on the consumers’ right to access information about automated decision-making technology.

Mitigating the Risk: A Confidentiality and Novelty Checklist

Legal teams should adopt a “trust-nothing” protocol to utilize generative AI responsibly. They should implement clear guidelines prohibiting the use of public LLMs for generating, summarizing, or analyzing any client or company information that constitutes a trade secret or an as-yet-undisclosed invention.

Crucially, professionals should never submit a confidential invention disclosure to an AI system before filing a formal provisional patent application with the relevant patent office. A provisional patent application allows inventors to establish an official priority date without submitting a formal patent claim, protecting the invention’s novelty before any exposure to external AI infrastructure.

To safely leverage AI internally, firms should invest in closed AI systems: proprietary or securely containerized environments where data transfer and training are fully isolated and auditable. To further ensure confidentiality, these systems should utilize edge computing, where processing is done directly on the local device, and federated learning, a method that trains the model across many decentralized devices without moving the raw (original, unprocessed) data itself. This approach keeps the raw technical details strictly within the corporate firewall, preventing the inadvertent creation of prior art.
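
To make this concrete, the following is a minimal sketch of the federated-averaging pattern described above, assuming NumPy and entirely synthetic data; it is illustrative only, not a production training pipeline. Each office computes an update on documents that never leave its machine, and the coordinator averages only the resulting weights.

```python
import numpy as np

def local_update(weights: np.ndarray, local_data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One site's training step: the confidential documents (encoded here as a toy
    feature matrix with a target column) stay local; only updated weights are returned."""
    X, y = local_data[:, :-1], local_data[:, -1]
    grad = X.T @ (X @ weights - y) / len(y)   # gradient of a least-squares objective
    return weights - lr * grad

def federated_average(site_weights: list) -> np.ndarray:
    """The coordinator sees only weight vectors, never the raw technical details."""
    return np.mean(site_weights, axis=0)

# Example: three offices, each holding its own private data
rng = np.random.default_rng(0)
weights = np.zeros(4)
offices = [rng.normal(size=(50, 5)) for _ in range(3)]
weights = federated_average([local_update(weights, office) for office in offices])
```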

For necessary exploratory research using public models, firms should implement strict data anonymization and generalization processes. This involves removing or replacing all names, key dates, values, and novel terminology before submission, so that nothing identifying is ever tokenized (tokenization being the process by which AI models break down and interpret text).
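
A minimal sketch of what such a scrubbing step could look like; the regular expressions and the project codename are hypothetical placeholders, and any real workflow would still require human review before text leaves the firm.

```python
import re

# Hypothetical redaction rules: dates, key measurements, and an internal codename.
REDACTIONS = [
    (re.compile(r"\b\d{4}-\d{2}-\d{2}\b"), "[DATE]"),
    (re.compile(r"\b\d+(?:\.\d+)?\s?(?:nm|mg|GHz|%)"), "[VALUE]"),
    (re.compile(r"\bProjectNightingale\w*\b"), "[CODENAME]"),
]

def scrub(text: str) -> str:
    """Replace identifying dates, values, and codenames before any external submission."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("ProjectNightingale achieved 3.2 GHz switching on 2024-11-02."))
# -> "[CODENAME] achieved [VALUE] switching on [DATE]."
```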

Finally, firms should mandate rigorous review of contractual best practices for AI vendors to ensure indemnification and written guarantees that input data will not be used for training purposes. Indemnification is crucial; it requires the AI vendor to compensate the law firm or client for any loss or damage incurred if the vendor’s technology (or its failure to secure the data) results in a breach of confidence or patent invalidation. Firms should demand explicit clauses confirming that input data will not be logged, retained, or used for model training, and defining vendor liability that extends beyond simple fee refunds to cover the substantial financial harm caused by the loss of IP rights.

Conclusion

The promise of AI to expedite the patent drafting pipeline is undeniable, but the current ethical landscape presents a fundamental challenge to the confidentiality required to preserve patentability. Until legal frameworks universally impose a duty of loyalty on AI providers, the responsibility falls squarely on the professional to protect the client’s IP. The future of intellectual property requires vigilance: innovation should be accelerated by AI, but never disclosed by it.

#IP-Security #Patent-Risk #AICrisis

Software’s Fourth Pricing Revolution Emerging for AI Agents


By: Joyce Jia

The Fourth Pricing Revolution: Outcome-Based Pricing

In August 2024, customer experience (CX) software company Zendesk made a stunning announcement: customers would pay only for issues resolved “from start to finish” by its AI agent. No resolution, or an escalation to a human? No charge. Meanwhile, Zendesk’s competitor Sierra, a conversational AI startup, introduced its own outcome-based pricing tied to metrics like resolved support conversations or successful upsells.

Zendesk claims to be the first in the CX industry to adopt outcome-based pricing powered by AI, but it seems to have already fallen behind: Intercom launched a similar model in 2023 for its “Fin” AI chatbot, charging enterprise customers $0.99 only when the bot successfully resolves an end-user query.

The shift toward outcome-based pricing represents the fourth major pricing revolution in software. The first revolution began in the 1980s-1990s with seat-based licenses for shrink-wrapped boxes, where customers paid a one-time flat fee for software ownership without automatic version upgrades. The second revolution emerged in the 2000s when industry pricing transitioned to SaaS subscriptions, converting software into a recurring operational expense with continuous updates. The third revolution came in the 2010s with consumption-based cloud pricing, tying costs directly to actual resource usage. The fourth and current revolution is outcome-based pricing, where customers are charged only when measurable value is delivered, rather than for licenses purchased or resources consumed.

In fact, the shift to outcome-based pricing extends far beyond AI customer support, spanning AI-driven sectors from CRM platforms like Salesforce to AI legal tech (EvenUp), fintech (Chargeflow), fraud prevention (Riskified and iDenfy), and healthcare AI agents. These companies are experimenting with pure outcome-based pricing or hybrid models that combine traditional flat fees and usage-based charges with outcome-based components. Recent tech industry analysis shows seat-based pricing for AI products dropped from 21% to 15% of companies in just one year, while hybrid pricing increased significantly, from 27% to 41%.

Historical Precedent from Legal Practice: Contingency Fees

Outcome-based contracting isn’t a novel concept. It has been growing for over a decade in other industries. In the legal field, professionals have long worked with its equivalent in the form of contingency fees. Since the 19th century, lawyers in the United States have been compensated based on results: earning a percentage of the recovery only if they successfully settle or win a case. However, this model has been accompanied by strict guardrails. Under ABA Model Rule 1.5(c), contingency fee agreements must be in writing and clearly explain both the qualifying outcome and calculation method. Additionally, contingency arrangements are prohibited in certain matters, such as criminal defense and domestic relations cases. 

Beyond professional ethical concerns, the key principle is straightforward: when compensation hinges on outcomes, the law demands heightened transparency and well-defined terms. AI vendors adopting outcome-based pricing should expect similar guardrails to develop, ensuring both contract enforceability and customer trust. This requirement stems from traditional contract law, not AI-specific regulation. 

The Critical Legal Question: Defining “Outcome”

One of the biggest challenges in outcome-based pricing is contract clarity. Contract law requires essential terms to be clearly defined. If such terms are vague or cannot be determined with reasonable certainty, the agreement may be unenforceable. Applied to AI agents, one critical question arises: how do you precisely and fairly define a “successful” outcome?

The answer can be perplexing. Depending on the nature of the AI product, multiple layers can contribute to delivering an “outcome,” such as internal infrastructure or workflows, external market conditions, marketing efforts, or third-party dependencies. These complicating factors make it hard to assign clear ownership of results or to establish precise payment triggers, especially when the “outcome” is delivered over an extended period.

The venture capital firm Andreessen Horowitz recently conducted a survey highlighting the issue: 47% of enterprise buyers struggle to define measurable outcomes, 25% find it difficult to agree on how value should be attributed to an AI tool or model, and another 24% note that outcomes often depend on factors outside the AI vendor’s control.

These are not just operational challenges. They raise a real legal question about whether the contract terms are enforceable under the law. 

Consider these scenarios that illustrate the difficulty (a minimal billing sketch follows the list):

  • What happens if the outcome is only partially achieved?
  • What if the AI agent resolves the issue but too slowly, leaving the user frustrated despite a technically successful outcome?
  • What if an AI chatbot closes a conversation successfully, but the customer returns later with a complaint?
  • What if a user ends the chat session without explicitly confirming whether the issue was resolved?
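
To see why these edge cases matter contractually, consider a minimal sketch of a per-resolution billing trigger. Every field name and threshold below is hypothetical; the point is that each one would have to be pinned down explicitly in the agreement for the payment term to be reasonably certain.

```python
from dataclasses import dataclass
from datetime import timedelta
from typing import Optional

@dataclass
class Resolution:
    confirmed_by_user: bool             # did the end user explicitly confirm the fix?
    escalated_to_human: bool            # any human hand-off voids the per-resolution charge
    handle_time: timedelta              # the "resolved, but too slowly" edge case
    days_until_reopened: Optional[int]  # None if the customer never came back

def billable(resolution: Resolution,
             max_handle: timedelta = timedelta(minutes=15),
             reopen_window_days: int = 7) -> bool:
    """Charge only for a confirmed, timely, durable resolution owned end to end by the AI."""
    durable = (resolution.days_until_reopened is None
               or resolution.days_until_reopened > reopen_window_days)
    return (resolution.confirmed_by_user
            and not resolution.escalated_to_human
            and resolution.handle_time <= max_handle
            and durable)
```

Even this toy version has to take a position on silent session endings, slow resolutions, and reopened tickets, which is precisely where buyers and vendors tend to disagree.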

As Kyle Poyar, a SaaS pricing expert and author of an influential newsletter on pricing strategy and product-led growth, observed: 

“Most products are just not built in a way that they own the outcome from beginning to end and can prove the incrementality to customers. I think true success-based pricing will remain rare. I do think people will tap into the concept of success-based pricing to market their products. They’ll be calling themselves ‘success based’ but really charge based on the amount of work that’s completed by a combination of AI and software.”

Legal Implications for the Future

Like AI agents themselves, outcome-based AI pricing is evolving at breakneck speed. The blossoming of this new pricing model presents a challenge for contract implementation and requires existing contract terms to adapt once again to accommodate new forms of value creation and innovative business models.

The scenarios above are just a few examples, but they underscore the importance of attorneys working closely with engineering and business teams to meticulously identify potential conflicts and articulate key contract terms grounded in clear metrics and KPIs that objectively define successful outcomes. 

“Outcome” can mean different things to different parties, and its definitional ambiguity can create misaligned incentives. Buyers may underreport value while vendors game metrics to overstate performance. These dynamics will inevitably lead to disputes. AI vendors that have adopted or plan to adopt outcome-based pricing must develop robust frameworks addressing contract definiteness and attribution standards before disputes arise. Without these safeguards, we will likely see a wave of conflicts over vague terms, unenforceable agreements, and unmet expectations on both sides as AI agents surge.

From Bad to Worse: How a Smartphone App Went from Cargo Inspections to Asylum Seekers to Self-Deportation

By: Claire Kenneally

CBP One: From Cattle to Children 

CBP One, launched in October 2022, was a mobile app designed to expedite cargo inspections and entry into the United States for commercial truck drivers, aircraft operators, bus operators, and seaplane pilots transporting perishable goods. It allowed those entities to schedule an inspection with U.S. Customs and Border Protection (CBP).

In 2023, the Biden administration expanded the app’s use to asylum seekers. CBP One was designated as the only acceptable way to apply for asylum at the border. To schedule an appointment, asylum seekers had to be in northern or central Mexico. After creating an account, they could request an appointment. Appointments were distributed at random the following day, a practice criticized for “creating a lottery system based on chance.”

Asylum is legal protection for individuals fleeing persecution based on their religion, race, nationality, political opinion, or membership in a particular social group. Asylum is recognized under both international law and U.S. law. If granted, the asylee may live and work in the United States indefinitely. Asylum seekers apply either at a port of entry along the border or after entering the U.S.

Texas National Guard on Rio Grande, Preventing Asylum Seekers from Entering by Lauren Villagram

The Human Cost of Faulty Technology

In researching this article, I found too many stories to count of CBP One’s dysfunction leaving migrants—already months into perilous and exhausting journeys—unable to claim their legal right to apply for asylum.

CBP One was plagued with regular glitches, errors, and crashes that were never adequately resolved. The app regularly refused to accept required photo uploads, miscalibrated geolocations, and froze, causing users to miss that day’s available appointments.

Many described the lack of appointments once they created accounts. Migrants in Matamoros were reported waiting up to six months in shelters, rented rooms, or on the street. Asylum seekers often travel by foot to the Mexican-American border and arrive with little more than the clothes on their backs. Most could not afford to pay for months in a hotel, and many experienced racism at the hands of Mexican nationals who refused to employ them. Waiting weeks, let alone months, was unsustainable.

Other stories pointed out the inaccessibility of relying on a smartphone app to secure an appointment with CBP. To use it, “people need[ed] a compatible mobile device . . . a strong internet connection, resources to pay for data, electricity to charge their devices, tech literacy, and other conditions that place the most vulnerable migrants at a disadvantage.”

Those with older phones often could not comply with the required photo-upload, and were told to purchase a new phone if they wanted a shot at an appointment. This unexpected financial burden often proved impossible. 

For those with limited technological literacy (the ability to use and understand technology), the 21-step registration system operated as a barrier to entry. Similarly, people with intellectual or physical disabilities were often unable to use the app. 

A Place to Charge a Cell Phone [Was] a High Priority for Migrants Relying on Their Phone to Access the App by Alicia Fernández

The app was available only in English, Spanish, and Haitian Creole, and it was plagued by racist facial-recognition software that failed to register the faces of dark-skinned or Black applicants.

A Pressure Cooker at the Border 

The CBP One bottleneck pushed Mexican border towns to their breaking points, resulting in exceedingly dangerous and unhealthy conditions for migrants waiting for appointments. Doctors Without Borders reported a 70% increase in cases of sexual violence against migrants in one border town last year.

Centro, a Mexican asylum seeker, waited at a shelter for nine months to receive an appointment. One of her three children has asthma and had to sleep on the floor. “He has been hospitalized three times because of the cold,” she says.

Shelters quickly filled to capacity. Migrants were then forced to sleep on the streets. There, they were subject to extortion, harassment, and assault at the hands of cartel members and military police. They routinely had no access to running water, electricity, hot food, or protection from the elements. 

Mexican Migrants at the Juventud 2000 Shelter in Tijuana, Waiting for a CBP One Appointment  by Aimee Melo

One Harmful App Replaced by Another

On his inauguration day in January 2025, Donald Trump cancelled that afternoon’s CBP One appointments and ended asylum seekers’ use of the app. He also revoked the legal status (“parole”) of the 985,000 migrants who had successfully entered the US after obtaining CBP One appointments.

Later that spring, the Trump administration’s rollout of a new app, CBP Home, offered no relief for the migrants at the border. Instead, the app greets asylum seekers currently in the U.S. by explaining the benefits of self-deportation. 

Self-deportation through CBP Home purports to provide immigrants with a free flight, a $1,000 stipend, and the ability to reenter the United States “legally” in the future. To pay for these promises, the Trump administration moved funds earmarked for refugee resettlement to cover the flights and stipends.

But the law does not recognize this self-deportation scheme. The Marshall Project notes that “migrants who leave through [Trump’s self-deportation] process may unwittingly trigger reentry barriers—a possibility that self-deportation posters and marketing materials do not mention.”

The $1,000 stipend also has no legal backing, as there is no law authorizing payments to undocumented immigrants—meaning that there are no legal repercussions if the administration refuses to send the stipend to someone once they have left the country.

And where in this new app does an asylum seeker in Mexico book an appointment with CBP to request asylum? Nowhere. That process has simply been erased.

Colombian migrant Margelis Tinoco, 48, cries after her CBP One appointment was canceled at the Paso del Norte international bridge in Ciudad Juarez, Mexico, on the border with the U.S., Monday, Jan. 20, 2025, the inauguration day of U.S. President Donald Trump by Christian Chavez

CBP One Data: Now Used to Fuel Deportations 

When CBP One was active, asylum seekers were required to input a variety of information about themselves to create an account. There was no way to “opt out” of sharing the information the government demanded. 

Now, using data obtained from those original profiles, DHS has begun stalking and harassing asylum seekers currently in the United States, urging them to “self-deport” with the new CBP Home app. This raises frightening ethical questions about data privacy, government surveillance, and how the information we provide can be weaponized against us. 

Screenshot of the Department of Homeland Security’s webpage on CBP Home, taken by the author on November 12, 2025.

A Bleak New Reality 

As of November 2025, the Trump administration has shut down asylum-seeking at the U.S.-Mexico border, trapped migrants in Mexico, and left those in the U.S. fearing their own data will be used to deport them. What began as a flawed app has escalated into something more sinister and predatory, and it requires all of us to pay attention and advocate for change.

#AsylumCrisis #DigitalBorders #WJLTA

The AI-Powered Smart Home: From Convenience to Privacy Trap

By: Francis Yoon

The “smart home” revolution promised a seamless life: a programmable coffee maker, lights that dim automatically, and a thermostat that learns your comfort level. These items that connect and exchange data with other devices and the cloud are collectively known as the Internet of Things (IoT). That was the old “smart.” The modern AI-powered smart home is an AI-enabled, data-centric habitat: a pervasive ecosystem of sensors, microphones, and cameras whose primary function is not just automation, but data extraction.

Consider this: A voice assistant records you ordering medication late at night; a smart thermostat notes a sudden, prolonged drop in energy use; a smart watch tracks erratic sleep patterns. Separately, these are minor details, but when AI algorithms combine them, they can infer sensitive data (a new chronic illness, a major life event, or a precise work schedule). The potential for this detailed inference is highlighted by privacy advocates who note that even smart meter energy data reveals intimate details about home habits, like showering and sleeping.
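
A toy sketch of that kind of cross-device inference; the signal names and thresholds are invented for illustration, but they show how individually innocuous readings can combine into a sensitive conclusion that no single device ever collected.

```python
# Hypothetical signals aggregated from separate devices into one household profile.
signals = {
    "late_night_pharmacy_order": True,   # voice assistant transcript
    "daytime_energy_drop_days": 12,      # smart thermostat / smart meter
    "avg_sleep_interruptions": 4.2,      # smart watch
}

def infer_health_event(s: dict) -> bool:
    """None of these readings is sensitive alone; together they suggest a health event."""
    return (s["late_night_pharmacy_order"]
            and s["daytime_energy_drop_days"] >= 7
            and s["avg_sleep_interruptions"] > 3)

print(infer_health_event(signals))  # True
```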

This inferred data is the real trap. It is highly personal and potentially discriminatory if used by insurers or targeted advertisers, all while being entirely invisible to the homeowner.

The core danger of modern smart homes is not the collection of a voice command, but the AI-powered inference that follows.

The Danger of Data Inference and the Black Box

This process of data collection is housed within a legal “Black Box”: AI systems that make highly sensitive decisions about individuals without revealing the underlying logic.

Manufacturers claim the AI models and algorithms that create these inferences are protected as proprietary trade secrets. This directly conflicts with a core tenet of modern data protection law: a user’s right to access information about the logic behind how and why the AI made a certain decision or inference about them. This legal conflict between transparency and corporate intellectual property is the subject of intense debate.

Furthermore, your home data is shared across a fragmented ecosystem that includes the device maker, the voice assistant platform (e.g., Amazon, Google), and third-party app developers. When a data breach occurs, or a harmful inference is made, liability for any resulting damage is so fractured that no single entity takes responsibility, leaving the consumer without recourse. This lack of clear accountability is a major flaw in current AI and IoT legal frameworks.

The stakes are real. The Federal Trade Commission (FTC) took action against Amazon for violating the Children’s Online Privacy Protection Act (COPPA) by illegally retaining children’s voice recordings to train its AI algorithm, even after parents requested deletion. This resulted in a $25 million settlement and a prohibition on using the unlawfully retained data to train its algorithms, further showing how data maximalism (collecting and keeping everything) can be prioritized over legal and ethical privacy obligations.

Privacy-by-Design: Aligning Ethics with IP Strategy

The legal landscape is struggling to keep pace, relying on outdated concepts like “Consent,” which is meaningless when buried in a 5,000-word Terms of Service for a $50 smart plug. Consumer reports confirm that pervasive data collection is a widespread concern that requires proactive consumer steps.

The solution should be to shift the burden from the consumer to the manufacturer by mandating Privacy-by-Design (PbD). This concept, already explicitly required by Article 25 of the EU’s General Data Protection Regulation (GDPR), demands that privacy be the default setting, built into the technology, ensuring that “by default, only personal data which are necessary for each specific purpose… are processed,” with regard to the amount of data collected and the extent of their processing.

To make this framework actionable and commercially viable, it should be interwoven with Intellectual Property (IP) strategy.

The technical mandate for data minimization is to use Edge AI/local processing—meaning raw, sensitive data must be processed on the device itself, not in the cloud. Only necessary, protected data should be transmitted. This technical shift should be incentivized by an IP strategy that rewards patents protecting Privacy-Enhancing Technologies (PETs), such as techniques that allow AI models to be trained across many devices without ever moving the user’s raw data (federated learning), or methods that obscure individual data points with statistical noise (differential privacy).
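
As one concrete illustration of such a PET, the sketch below (assuming NumPy; the epsilon and sensitivity values are illustrative, not recommendations) adds calibrated Laplace noise to a statistic computed on the device, so the number reported to the cloud does not pin down any single household reading.

```python
import numpy as np

def privatized_average(readings: list, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
    """Compute the average locally, then add Laplace noise scaled to sensitivity/epsilon
    before anything leaves the home network. Sensitivity must reflect how much one
    reading can move the statistic, which depends on the data's bounds."""
    true_value = float(np.mean(readings))
    noise = float(np.random.laplace(loc=0.0, scale=sensitivity / epsilon))
    return true_value + noise

hourly_occupancy = [0, 0, 1, 1, 1, 0, 1, 1]   # hypothetical on-device readings
print(privatized_average(hourly_occupancy))    # noisy aggregate, safer to report
```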

For transparency and auditability, manufacturers should be required to provide Granular Control & Logs (simple, mandatory interfaces showing what data is being collected and why, with logs that can be easily audited by regulators). The corresponding IP Strategy should require mandatory disclosure by conditioning the granting of IP protection for AI models on a partial, audited disclosure of their function, thereby eliminating the “Black Box” defense against regulatory inquiry. New laws are making these transparency measures, including machine-readable labeling and comprehensive logging, mandatory for certain high-risk AI systems.
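
A minimal sketch of what one entry in such an audit log could look like; the field names are hypothetical, and a real system would add integrity protection, but the core idea is that every collection event is tied to a stated purpose a regulator can review.

```python
import json
import time

def log_collection(device_id: str, data_field: str, purpose: str,
                   path: str = "collection_audit.log") -> None:
    """Append one auditable record of what was collected and why."""
    entry = {
        "timestamp": time.time(),
        "device": device_id,
        "field": data_field,    # e.g. "indoor_temperature"
        "purpose": purpose,     # e.g. "heating schedule optimization"
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_collection("thermostat-01", "indoor_temperature", "heating schedule optimization")
```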

Furthermore, the security mandate should require End-to-End Encryption (E2EE)—a security method that ensures only the communicating parties can read a message—for all data, along with a guaranteed lifecycle of security updates and patches for every device sold. This should be backed by a product liability shift in law that treats a product that failed to provide security updates as a “defective product,” creating a powerful legal incentive for manufacturers to maintain their devices. The need for this is supported by official guidance encouraging manufacturers to adopt a security-by-design-and-default mindset.
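
A minimal sketch of the end-to-end pattern using the PyNaCl library (the device and hub here are hypothetical parties): the cloud relay in the middle only ever handles ciphertext, so a breach there exposes nothing readable.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each party generates a key pair on its own hardware; only public keys are exchanged.
device_key = PrivateKey.generate()
hub_key = PrivateKey.generate()

# The sensor encrypts a reading so that only the hub can decrypt it.
sending_box = Box(device_key, hub_key.public_key)
ciphertext = sending_box.encrypt(b'{"thermostat_c": 21.5, "occupancy": true}')

# Anything in between (cloud relay, vendor servers) sees only ciphertext.
receiving_box = Box(hub_key, device_key.public_key)
print(receiving_box.decrypt(ciphertext))
```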

A Call for Fiduciary Duty and Mandatory Standards

For AI-powered smart homes to be a benefit, not a threat, the law should evolve beyond the current model of consumer consent, which has proven meaningless when privacy obligations are buried in massive Terms of Service agreements. The EU AI Act, for instance, is already moving toward a risk-based legal framework by listing prohibited practices like cognitive behavioral manipulation and social scoring, which are highly relevant to pervasive smart home AI. To this same end, we should implement two major safeguards.

Legislation should introduce minimum technical security and privacy standards for all smart devices before they can be sold (a digital equivalent of safety standards for electrical wiring). The default setting on a new smart device should be the most private one, not the one that maximizes data collection.

Additionally, smart home companies should be held to a fiduciary duty of care toward the users of their products. This legal concept, typically applied to doctors or financial advisors, would require them to place the user’s interests and loyalty above the company’s financial interests in matters concerning data and security. This would force companies to legally act in the best interest of the user, regardless of what a user “consents” to in a convoluted contract. This single shift, supported by seminal legal scholarship, would fundamentally alter the incentives, forcing companies to design for privacy, as their primary legal duty would be to protect the user’s data, not to maximize its commercial value.

Overall, the battle for privacy is increasingly fought on the digital ground of our own homes. The AI-powered smart home doesn’t just automate our lives; it digitizes our intimacy. It is time to enforce a technical and legal framework that ensures innovation serves our well-being, not just corporate profit. The architecture of a truly smart home must start with privacy at its foundation.

#smart-home #privacy-trap #AI-governance

E-Lending Challenges and Libraries’ Mission to Ensure Information Access for All

By: Anusha Seyed Nasrulai

Library services have transformed from being administered primarily in the physical library space to providing library card holders with access to a broad range of digital materials, including ebooks, audiobooks, research, music, film, and more. When digital materials first entered the market, they presented great opportunities to increase the availability and accessibility of library collections. Libraries have adjusted their acquisitions and curation efforts to accommodate an increased demand for digital materials. At the same time, publishers and vendors have repackaged their products to drive profits, responding to that demand by raising ebook costs to exorbitant rates. Libraries are “typically required to pay 3–4 times the consumer price for an ebook or audiobook license of a popular title.” Many publishers have also replaced perpetual licenses with time-limited licenses. Publishers further control the market by restricting “how many copies libraries can have, who they can lend to, and how long they (and their patrons) can keep the books.” This has led to library budgets being consumed by licensing costs.

The e-lending marketplace presents multiple challenges to libraries’ longstanding commitment to ensure access to information for all. Digital materials are many patrons’ primary method of accessing information. For example, digital formats are essential resources to patrons with “vision impairment, dyslexia, and other physical or learning needs.”

Libraries are at the whim of the power wielded by vendors controlling access to vital digital materials. About five companies control publishing and dominate the industry for licensing digital materials to libraries. Some companies have business enterprises beyond academic information, including the use and sale of personal and financial information. Thomson Reuters and RELX Group (parent companies of Westlaw and LexisNexis) not only dominate the legal research market, but they also own some of the largest news and academic databases and are data brokers that sell to private entities and law enforcement agencies. Sarah Lamdan, former CUNY law librarian and professor, now ALA director, described the digital information market landscape as a monopoly of information markets, which raises significant ethical and privacy concerns.

Libraries’ Response to Market Shifts

The rest of this article examines the implications of the market shift to digital materials for libraries and their patrons, focusing on ownership rights, open source projects, and patron privacy. In response to vendors’ overwhelming control of the digital information marketplace, libraries and researchers are developing solutions to ensure information access for all.

Ownership Rights

Libraries hold ownership rights and control lending access over physical books under the right of first sale. The “first sale” doctrine (17 U.S.C. § 109(a)) “gives the owners of copyrighted works the rights to sell, lend, or share their copies without having to obtain permission or pay fees.” However, this ownership doctrine does not control digital transmissions—including ebook acquisitions. Publishers create license agreements in partnership with vendors, who then license the materials to libraries. Margaret Chon, Law Professor at Seattle University, argues that high prices and restrictive lending practices undermine the special position libraries have historically held in the copyright system as institutions protecting and facilitating public access to copyrighted works.

Without copyright reform, libraries are often at the mercy of vendors’ licensing models. In response, libraries have developed comprehensive strategies to negotiate with vendor providers and to select vendors that align with their mission. Still, “the contract-law focused world of copyright for digital content is much more heavily weighted to the benefit of publishers and to the greatest extent possible.” Therefore, libraries have sought legal reform as one solution to address the modern digital information marketplace.

ReadersFirst is an organization of almost 300 libraries dedicated to maintaining open and free access to ebooks as collections are increasingly digitized. ReadersFirst advocates for ebook legislation to prevent content restrictions, prohibitively high license prices, and the use of licenses to circumvent important copyright doctrines, such as fair use. This past summer, Connecticut passed an ebook bill, and other states have introduced similar legislation. The bill will be carefully watched, as similar legislation in Maryland and New York has been undone by copyright challenges.

Open Source Projects

During the COVID-19 pandemic, Internet Archive launched the National Emergency Library (NEL). NEL was a continuation of a previous online project where scans of physical library books were “checked out” to people as though they were physical books. In Hachette v. Internet Archive, publishers successfully challenged NEL’s temporary lifting of the one-person-limit on lending. Though this case did not involve a traditional library, it does call into question whether controlled digital lending practices by libraries are vulnerable. 

To protect library projects that expand access to digital materials, new industry standards are being proposed. Controlled digital lending (CDL) protections allow libraries to lend, preserve, and archive digital materials. Currently, a new NISO consensus framework is being developed to support CDL in libraries, with the goal of expanding “understanding of CDL as a natural extension of existing rights held and practices undertaken by libraries for content they legally hold.”

The ability to curate and share open source resources furthers libraries’ goal of ensuring information access for all. An important example of library open source projects is the research guide: a curated collection of high-quality, relevant resources on a given topic. Resources include articles, books, media, databases, special collections, exhibits, and programs. Kara Phillips, director of the Seattle University Law Library, stated that research guides “respond to important issues so that patrons can find reliable, authoritative information… [to] support democracy, rule of law, and the legal system.”

Patron Privacy

As vendors adapt to the competitive digital information marketplace, the change in business models has increased their appetite for patron data. As Roxanne Shirazi, a research librarian at CUNY, puts it, “[a]s lenders, library vendors do not end their relationships with libraries when they complete a sale. Instead, as streaming content providers, vendors become embedded in libraries. They are able to follow library patrons’ research activities, storing data about how people are using their services.”

There are only a handful of states that protect readers’ data outside of libraries. For example, California’s Reader Privacy Act safeguards readers’ data when they access physical books or ebooks. Therefore, ensuring patron privacy and holding vendors accountable to ALA privacy standards are central to libraries’ mission.

The Path Forward for Libraries

Librarians and other stakeholders are organizing to address the profound problems that have arisen from changes in the e-lending market. In providing guidance regarding digital access, the American Library Association states, “[i]n order to have a functional democracy, we must have informed citizens. Libraries are an essential part of the national information infrastructure, providing people with access and opportunities for participation in the digital environment, especially those who might otherwise be excluded.”