Protecting Privacy in Libraries as AI Adoption Accelerates

By: Anusha Nasrulai

Like picking what movie to watch, what restaurant to eat at, or where to go on vacation, what we read next is often recommended to us by personalization algorithms. Social media and reading platforms such as Goodreads already process user data to generate recommended content. Recently, the library catalog browsing app Libby announced its own book recommendation feature, Inspire Me.

Inspire Me by Libby recommends books to users based on their own prompts or titles previously saved in the app. Originally announced as an optional feature, Inspire Me now features prominently at the top of the home screen when users open the Libby app. The feature recommends books available through the catalogs of the libraries with which users have linked accounts. When the feature was first announced, users and libraries pushed back, voicing concerns about forced AI adoption and diminished patron privacy. OverDrive, Libby’s parent company, states that readers’ personally identifying data and reading activity are not provided to the AI model.

Libraries work with vendor platforms, distributors, and publishers to deliver library services, particularly for e-materials. Despite popular backlash, vendors are expanding development of AI integrations. OverDrive CEO Steve Potash has announced goals to use AI to “match users to content across its platforms,” which also include the streaming platform Kanopy and the K-12 education platform Sora. Other subscription vendors, such as OCLC, EBSCO, and Clarivate, have introduced AI features for content recommendation, enhanced search, text summarization, and AI-generated research assistance. Beyond externally marketed AI tools, vendors are incorporating AI into their internal workflows for “building, improving, and refining products.” Libraries must now balance their duty to protect patron data and privacy against their mission to provide access to digital resources.

Legal Regulations

The integration of AI by vendor platforms poses new privacy considerations for libraries. AI introduces new risk points at data collection, processing, training, and deployment.

The United States currently has no comprehensive AI or data privacy law. Instead, states have passed dozens of laws regulating particular AI use cases. To date, six states have passed cross-sectoral AI governance laws that apply to commercial entities, and vendors are likely subject to state-level AI and data privacy laws targeting commercial entities. Libraries can leverage these regulations to negotiate stronger privacy protections with vendors. Trends in AI regulation show that states are increasingly passing and updating AI legislation amid legal challenges and an absence of federal regulation.

AI Governance and Contracting

In light of legal uncertainties, contracts and licenses are a key opportunity for imposing guardrails on AI use. These agreements address how vendors and third parties can collect, process, and disclose user data.

Often, vendor agreements do not explicitly disclose internal use of AI tools or AI model training. Library Futures, a research and policy organization, and its staff attorney, Layla Maurer, have presented on this issue, flagging that broad language around operational mechanisms and data usage may permit vendors to train and deploy AI models using patron and institutional data. When reviewing vendor contracts for AI usage, libraries should focus on:

  • Vendor’s rights around data use and sharing, including with third parties. Use of patron data for “analytics” or “development and improvement of services” may include AI training.
  • References to third-party applications or tools, processors, or contractors necessary to carry out services under the agreement.
  • Whether there is a defined data retention period and what happens to patron data when the contract ends.

Libraries can strengthen contract terms by including language requiring compliance with applicable federal and state laws, as well as with industry standards such as ISO and NIST. In addition, libraries may negotiate with vendors to:

  • Define user rights to data, including the right to opt out of nonessential data collection and the right to delete their data.
  • Limit secondary uses of data, including for training internal or external AI tools.
  • Disclose third-party partners and whether data is shared or sold to third parties.
  • Conduct privacy and security audits.
  • Establish a data retention period and a protocol for destroying data at the end of the retention period.

As attorney Layla Maurer put it, “Updating contract language to allow flexibility around software development needs while retaining safeguards for what the licensee… wants to protect is not just an expeditious way to reach an agreement with a software vendor, it’s also a strategy that helps ensure the licensee can continue to safely use the software despite future legislative changes provided the vendor updates their software in a manner consistent with the intent of the legislation.”

Future-proofing

Digital lending and digital services are a popular means of accessing library materials, but they also raise new challenges for protecting patron privacy. As AI becomes embedded in these services, libraries need to adopt AI guardrails in contracting to manage the harms and opportunities of AI use in libraries, particularly around privacy.

Malpractice Risks Associated with AI-Generated Clinical Notes

By: Matt Unutzer

Artificial intelligence is quietly reshaping one of healthcare’s most foundational documents: the medical record. Increasingly, “ambient AI” systems, such as Microsoft’s Nuance DAX, record patient encounters and generate clinical notes that are incorporated into the patient’s official medical record. These tools promise to reduce administrative burden and allow physicians to focus on patient care. They also raise questions about how AI-generated clinical notes may affect malpractice risk when errors contribute to patient harm.

Traditional Clinical Documentation Baseline

For decades, physicians have documented patient encounters directly within electronic health record (EHR) systems such as Epic. During or immediately after a visit, clinicians enter notes summarizing the interaction, clinical findings, and treatment decisions. These notes are then reviewed and signed in accordance with federal Medicare regulations and incorporated into the patient’s official medical record.

While these records often follow standardized formats, such as Subjective, Objective, Assessment, and Plan (SOAP), to promote consistency, the practice is not without error. Moreover, clinical documentation in EHR systems is a major driver of physician burnout, with documentation accounting for more than half of physicians’ time on shift. In response to these challenges, physicians began leveraging a new technology: the medical AI scribe. With initial adoption in 2023 and 2024 and widespread adoption in 2025 and 2026, medical AI scribes have rocketed onto the scene.

What are Medical AI Scribes?

Medical AI scribes like Nuance DAX are often integrated directly into EHR systems and convert real-time physician–patient interactions into draft clinical notes through a multi-step process. Standard procedure typically requires physicians to obtain the patient’s consent to use the technology prior to the clinical interaction. Once consent has been given and the interaction is underway, the system uses ambient listening technology to capture the conversation during the patient visit and converts the audio into a verbatim transcript through speech recognition. That transcript is then processed by machine learning models that identify clinically relevant information, which is organized into familiar formats such as SOAP. The output is a draft clinical note that is reviewed and electronically signed by the physician before it is uploaded to the patient’s medical record.

Early Impacts of Medical AI Scribes

Despite the relatively recent widespread adoption of the technology, studies suggest that it is already having a meaningful impact on administrative burnout, documentation accuracy, and patient engagement.

While the rapid adoption of medical AI scribes highlights their utility, it has also sparked concern regarding the accuracy of the notes they generate and the risks of utilizing the technology at scale. One 2025 study found that medical AI scribes made “clinically significant errors” in the draft notes they generated. Furthermore, the risks of AI hallucinations, omissions, or confabulations remain present in medical AI scribe outputs. Despite significant improvements in the technology, its rapid and widespread proliferation raises the question: what liability do physicians have for injuries resulting from erroneous clinical notes generated by medical AI scribes?

Liability for Patient Harm Resulting from Erroneous Clinical Notes

Legal frameworks that govern medical malpractice standards are largely the product of state tort law, with differing legal standards in each jurisdiction. Furthermore, given the relatively novel nature of medical AI scribes, appellate courts have had little exposure to the technology. Despite these challenges, existing legal principles paint a clear picture of cognizable malpractice exposure resulting from the use of medical AI scribes.

Medical malpractice claims must typically satisfy four general elements: (1) the physician owed a duty of care to the patient, (2) the physician breached that duty of care, (3) the breach caused the patient’s injury, and (4) the patient suffered legally cognizable damage resulting from that injury.

Courts across the country analyzing injuries resulting from incomplete or inaccurate medical records have held that physicians have a professional duty of care to prepare competent medical records. When physicians fail to accurately document clinically relevant information in a patient’s medical record, that failure may constitute breach. Likewise, courts analyzing causation have found that where a documentation failure results in patient harm, such as a wrong-site or unnecessary surgery, the causation element may be satisfied. Thus, under existing malpractice law, physicians may risk liability for malpractice if they fail to prepare competent clinical notes and that failure results in patient harm. 

The utilization of medical AI scribes does not alter the fundamental allocation of legal responsibility in clinical care. Federal Medicare regulation dictates that medical record entries must be reviewed and signed by the person responsible for the patient’s care. These signatures identify the physician responsible for the clinical note, regardless of whether AI assisted in its generation.

Practical Implications and Safeguards Against Liability

A malpractice claim stemming from AI-generated clinical notes requires a narrow chain of events: the AI must produce a clinically significant error, the physician must fail to identify and correct it, the resulting documentation must fall below the applicable standard of care, and that error must cause patient harm. In any individual case, this sequence is unlikely, meaning the overall risk of liability remains low. However, the risk is not zero. When scaled across a high volume of patient encounters, even infrequent errors can compound exposure to malpractice liability. This risk is heightened in settings with low continuity of care, where downstream providers rely heavily on prior documentation. 

Accordingly, physicians seeking to benefit from medical AI scribes while minimizing exposure to malpractice liability should treat them as assistive tools. By avoiding excessive reliance on their outputs and carefully reviewing clinical notes before electronically signing or uploading them to patients’ EHRs, physicians can leverage this technology while upholding their professional duties.

#AmbientAI #MedicalMalpractice #WJLTA

Silicon’s New Signature: The Shift from Patentable Circuits to Sonic Trademarks

By: Francis Yoon

For decades, the intellectual property of the hardware world has been defined by physical chips and their structural protection rather than by the experience they produce. However, in 2026, the value of a device is shifting from how it is built to how its performance feels. As Gallium Nitride (GaN) pushes the physics of the possible into the mainstream, the hardware industry is discovering that while a patent protects the circuit, it can no longer protect the soul of the machine.

The GaN Inflection and the Pursuit of Analog Warmth

GaN is a material that allows electrons to move with significantly more energy and speed than traditional silicon. In the high-stakes world of the 2026 power electronics market, this is the difference between a sterile digital signal and a rich, linear resonance.

In the audio sector, this is a tonal revolution. Because GaN switches up to four times faster than silicon, reaching slew rates of 150 V/ns, it virtually eliminates the distortion inherent to switching devices that has plagued digital amplifiers for years. The result is what audiophiles describe as “shockingly good sound” and “crisper treble” from GaN systems, with the tech finally providing the “analog warmth” the digital age lacked.

Image taken from: GaN Talk Blog

As seen in the waveform comparison above, the GaN Field-Effect Transistor (GaN FET) eliminates the high-frequency “ringing” (the purple oscillations) found in traditional Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs), resulting in a significantly smoother and more linear output.

The Double Wall of Modern Patentability

As GaN matures into a $3.32 billion market this year, the path to protecting these innovations through patents is becoming increasingly fraught. Inventors are hitting two distinct legal hurdles when dealing with the United States Patent and Trademark Office (USPTO). First, as GaN becomes an industry standard, inventors frequently face “obviousness” rejections, where the transition from silicon to GaN is characterized as a predictable substitution of materials rather than a patentable breakthrough.

Second, innovators often face a hurdle in which the Patent Office labels their specific hardware designs as simply a “natural phenomenon” of the material itself rather than a unique, human-made invention. This creates a significant identity problem: there is no legal safe zone for protecting the aesthetic output of hardware through patent law alone.

The Category 30 Strategic Exit

If the internal circuit is becoming harder to patent, the solution lies in protecting the external output. On February 10, 2026, the USPTO provided the industry with a quiet but massive strategic exit through the modernization of the Trademark Design Search Code Manual. The introduction of Category 30, specifically 30.02.06, which provides specific search codes for non-traditional marks like machine-generated tones and musical sounds, allows hardware firms to plant a different kind of flag.

While a patent on a GaN heterostructure expires after 20 years, a trademark on the unique startup chime or the specific tonal texture enabled by that hardware can last indefinitely. By shifting the IP focus from the atom to the audio, companies can secure long-term brand equity that survives the commodity cycle of the semiconductor market. This pivot allows engineers to turn a functional byproduct into a registrable asset.

Suggestions for the Next Era of Intellectual Property

As hardware identity becomes as valuable as hardware functionality, we risk a future where trademark law prevents innovation. A viable resolution to this gridlock requires a clear path forward for regulators and practitioners. The adoption of a Technical De Minimis safe harbor for hardware output would allow the USPTO and the courts to protect brand identity without granting a monopoly over the natural music of innovation. Just as copyright law’s fair use doctrine accommodates transformative use, trademark law should distinguish between a branding-heavy sonic logo and the functional, incidental sounds inherent to a material’s physics. A GaN-based hum or a mechanical click should remain a shared resource.

If you are a hardware developer, the 2026 playbook seems to require a hybrid approach: use patents for the truly non-obvious structural leaps, but use Category 30 to tether the sensory experience of your engineering to your brand. In 2026, the goal of IP should be ensuring that the resulting sound remains free for the next generation of creators to build upon.

#Hardware-IP #GaN #Trademarks #USPTO #WJLTA

Is the SEC Ready for the AI IPO Era?

By: Joyce Jia

As explored in The LLM Public Offering: Why One S-1 Filing Will Reshape AI’s Governance, the anticipated IPOs of Anthropic or OpenAI will mark a constitutional moment for AI governance, the first time a frontier AI company must submit its business model to the full discipline of SEC disclosure. The natural question that follows is whether the SEC is institutionally prepared for that moment. On balance, the answer is yes, though with important qualifications. The agency enters the AI IPO era with a well-tested disclosure architecture, a clear institutional commitment to extending it to AI, and a track record of adapting to new technology sectors over time. What remains is the more difficult task of translating that commitment into sector-specific guidance.

A Framework Built to Adapt

The SEC’s core disclosure architecture rests on durable, technology-neutral foundations. The materiality doctrine articulated in TSC Industries v. Northway compels disclosure of information a reasonable investor would consider important in making an investment decision. That principle is broad enough to reach the material risks posed by frontier AI companies: accumulated deficits and timelines to profitability, training data legal exposure, critical dependencies on cloud and semiconductor infrastructure, and, arguably most consequential, the systemic risk that a failure or material change in a foundational LLM could propagate downstream across the ecosystem of agent companies and enterprise applications built upon it.

The SEC’s Commitment: A Path Forward

The SEC has signaled its intention to extend its disclosure framework to AI through sustained institutional engagement. The agency’s Investor Advisory Committee (IAC) has kept AI disclosure on its agenda since the Biden Administration, most recently issuing a recommendation on December 4, 2025, for the Commission to develop “comprehensive guidance” establishing an AI-related disclosure framework for issuers (“IAC’s Recommendation”). That recommendation drew on a panel discussion held on March 6, 2025, convening asset managers, governance experts, and AI practitioners. 

The IAC’s Recommendation centers on three pillars: requiring issuers to define “artificial intelligence,” disclose board oversight of AI deployment, and report separately on AI’s material effects on internal operations and consumer-facing products. Notably, the IAC recommended integrating these requirements into existing Regulation S-K items rather than creating a standalone AI disclosure chapter, recognizing that the existing framework is flexible enough to accommodate AI-specific risks without a full rulemaking cycle. As the cybersecurity disclosure precedent discussed below illustrates, that choice underscores the SEC’s proven capacity to extend its framework incrementally to emerging technology risks.

The Cybersecurity Rule Shows the Way

The SEC’s 2023 cybersecurity disclosure rule, which created Form 8-K Item 1.05 for material cyber incidents and Regulation S-K Item 106 for annual governance and risk management disclosure, built on over a decade of iterative, non-binding guidance: the Division of Corporation Finance’s 2011 Disclosure Guidance on cybersecurity, followed by the SEC’s 2018 Interpretive Guidance on Public Company Cybersecurity Disclosures. That progression, from staff guidance to Commission statement to final rulemaking, produced a disclosure architecture that companies were able to absorb with minimal friction and that investors found decision-useful.

The AI disclosure gap calls for the same iterative approach: beginning with staff guidance clarifying how existing Regulation S-K items 101, 103, 106, and 303 apply to AI-specific risks, then evolving toward more targeted requirements as the technology and its disclosure challenges become better understood. Initiated now, that sequence could yield formal guidance well ahead of the next generation of frontier AI IPOs. 

A Strong Framework with Key Uncertainties

Research by the AI Disclosures Project at the Social Science Research Council, analyzing more than 7,800 Form 8-K filings by public companies on AI between 2022 and 2025, found that roughly two-thirds of all AI-related disclosures are positive in nature, while risk disclosures, model failures, and safety guardrail changes are systematically underrepresented. Most of these filings appear under Item 8.01, a voluntary catch-all category, suggesting companies are genuinely uncertain both where to report AI events and when they cross the materiality threshold. That uncertainty is precisely what targeted SEC guidance is designed to resolve, and it underscores why the IAC’s Recommendation for issuer-facing clarity is a necessary next step.

The SEC enters the AI IPO era better prepared than the headlines suggest. Its governing principles are sound, its institutional commitment is evident, and the path forward is well-lit by the cybersecurity precedent. The remaining question is one of timing: whether the guidance arrives before the first S-1 does, or after.

#SECDisclosure #IACRecommendation #AIGovernance

When Worship Goes Online: The Hidden Copyright Risks for Churches

By: Daniel Eum

The COVID-19 pandemic, lasting from roughly late 2019 to mid-2022, led institutions and governments to adopt measures to combat the spread of the virus, such as mandating that residents remain in their homes. Religious gatherings, including church services, were not exempt from stay-at-home orders. As a result, many churches resorted to livestreaming their services rather than holding them in person. Even as the chaos of the pandemic began to settle and restrictions on in-person gatherings eased, churches continued to provide online services. By mid-2021, 80% of churches offered hybrid services, combining in-person and remote options, and one study shows that 91% of churches were livestreaming their services by 2024. In other words, livestreaming became common practice for churches post-COVID. However, churches that livestream services containing copyrighted music likely engage in unlicensed public performances and reproductions, exposing themselves to copyright liability absent proper licensing.

Copyright Protection for Religious Works and the § 110(3) Exemption

Services feature a variety of content, including songs commonly referred to as “praise” or “worship” music. Religious works, including songs, are fully eligible for copyright protection under federal law. In United Christian Scientists v. Christian Science Bd. of Directors, First Church of Christ, Scientist, a 1987 case from the U.S. Court of Appeals for the D.C. Circuit, the court held that “a grant of a copyright on a religious work poses no constitutional difficulty” and “[r]eligious works are eligible for protection under general copyright laws.” Despite this, a narrow exemption for religious services exists under 17 U.S.C. § 110(3), which provides that “performance of a nondramatic literary or musical work or of a dramatico-musical work of a religious nature, or display of a work, in the course of services at a place of worship or other religious assembly” does not infringe copyright. In other words, while religious works are entitled to copyright protection, religious institutions may perform such works without infringement during services.

The Limits of § 110(3)

However, this statutory exemption for religious services was drafted with in-person worship in mind and provides limited guidance for modern online practices. The exemption critically lacks any language authorizing transmissions, such as livestreaming or broadcasting services online to remote viewers, unlike the educational exemptions in § 110(2), which extensively cover digital transmissions with detailed technological safeguards. This omission strongly suggests that Congress did not intend the religious exemption to extend to online broadcasting. In other words, this exemption is strictly limited to performances and displays occurring during actual worship services at places of worship. In Worldwide Church of God v. Philadelphia Church of God, Inc., the Ninth Circuit emphasized that this privilege is “narrowly limited” to performance or display “in the course of services at a place of worship or other religious assembly.” As such, livestreaming services do not fall within the statutory exemption, and churches may face potential liability for reproduction, distribution, and digital performance of copyrighted materials.

In Simpleville Music v. Mizell, a federal district court in Alabama held that when a radio station broadcasts a church service containing copyrighted music, the broadcast constitutes a separate performance not occurring at a place of worship, requiring royalty payments. Although there are very few, if any, reported post-COVID cases specifically addressing livestreaming church services, these cases indicate that churches expose themselves to copyright liability by livestreaming copyrighted “praise” or “worship” songs.

How Churches Might Mitigate Copyright Risk Today

Nonetheless, a large majority of modern worship songs are covered by Christian Copyright Licensing International (CCLI), which simplifies copyright compliance by providing comprehensive and cost-effective licenses that empower churches and organizations to legally stream a wealth of creative works. As a result, many churches rely on such licensing regimes when livestreaming services to mitigate potential liability. Unless Congress updates § 110(3) to account for digital worship, churches that livestream services will remain dependent on licensing regimes to avoid copyright liability—despite engaging in what many view as a modern extension of traditional religious practice.

#WJLTA #CopyrightLaw #DigitalWorship