Malpractice Risks Associated with AI Generated Clinical Notes

By: Matt Unutzer

Artificial intelligence is quietly reshaping one of healthcare’s most foundational documents: the medical record. Increasingly, “ambient AI” systems, such as Microsoft’s Nuance DAX, record patient encounters and generate clinical notes that are incorporated into the patient’s official medical record. These tools promise to reduce administrative burden and allow physicians to focus on patient care. They also raise questions about how AI-generated clinical notes may affect malpractice risk when errors contribute to patient harm.

Traditional Clinical Documentation Baseline

For decades, physicians have documented patient encounters directly within electronic health record (EHR) systems such as Epic. During or immediately after a visit, clinicians enter notes summarizing the interaction, clinical findings, and treatment decisions. These notes are then reviewed and signed in accordance with federal Medicare regulations and incorporated into the patient’s official medical record.

While these records often follow standardized formats, such as Subjective, Objective, Assessment, and Plan (SOAP), to promote consistency, the practice is not without error. Moreover, clinical documentation in EHR systems is a well-documented driver of physician burnout, with some estimates suggesting it consumes more than half of a physician's time on shift. In response to these challenges, physicians began leveraging a new technology: the medical AI scribe. With initial adoption in 2023 and 2024 and widespread adoption in 2025 and 2026, medical AI scribes have rocketed onto the scene.

What are Medical AI Scribes?

Medical AI scribes like Nuance DAX are often integrated directly into EHR systems and function by converting real-time physician–patient interactions into draft clinical notes through a multi-step process. Standard procedure typically requires physicians to obtain the patient's consent to use the technology prior to the clinical interaction. Once consent has been given and the interaction is underway, the system uses ambient listening technology to capture the conversation during the patient visit and converts the audio into a verbatim transcript through speech recognition. That transcript is then processed by machine learning models that identify clinically relevant information, which is organized into familiar formats such as SOAP. The output is a draft clinical note that is reviewed and electronically signed by the physician before it is uploaded to the patient's medical record.
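The workflow just described can be sketched in code. The following Python sketch is purely illustrative and assumes nothing about Nuance DAX's actual architecture: the names `DraftNote`, `draft_from_transcript`, and `sign_and_file` are hypothetical, and simple keyword routing stands in for the real speech-recognition and machine-learning steps. What it is meant to show is the structural point the liability analysis turns on: the AI produces only an unsigned draft, and the note enters the record solely through the physician's review and signature.

```python
# Illustrative sketch only; all names are hypothetical, not any vendor's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftNote:
    """A SOAP-format draft note; unsigned until a physician reviews it."""
    subjective: str
    objective: str
    assessment: str
    plan: str
    signed_by: Optional[str] = None

def draft_from_transcript(transcript: str) -> DraftNote:
    """Stand-in for the ML step mapping a verbatim transcript to a SOAP draft.
    A real system would use speech recognition and language models; here,
    lines tagged S/O/A/P are simply routed into the matching section."""
    sections = {"S": [], "O": [], "A": [], "P": []}
    for line in transcript.splitlines():
        tag, _, text = line.partition(":")
        if tag.strip().upper() in sections:
            sections[tag.strip().upper()].append(text.strip())
    return DraftNote(
        subjective=" ".join(sections["S"]),
        objective=" ".join(sections["O"]),
        assessment=" ".join(sections["A"]),
        plan=" ".join(sections["P"]),
    )

def sign_and_file(note: DraftNote, physician: str, reviewed: bool) -> DraftNote:
    """The draft enters the medical record only after review and signature."""
    if not reviewed:
        raise ValueError("Draft must be reviewed before signing")
    note.signed_by = physician
    return note
```

In this sketch, `sign_and_file` refuses an unreviewed draft, mirroring the regulatory requirement that the responsible physician review and sign each entry before it becomes part of the record.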

Early Impacts of Medical AI Scribes

Despite the relatively recent widespread adoption of the technology, studies suggest that it is already having a meaningful impact on administrative burden, documentation accuracy, and patient engagement.

While the rapid adoption of medical AI scribes highlights their utility, it has also sparked concern regarding the accuracy of the notes they generate and the risks of utilizing the technology at scale. One 2025 study found that medical AI scribes made “clinically significant errors” in the draft notes they generated. Furthermore, the risks of AI hallucinations, omissions, or confabulations remain present in medical AI scribe outputs. Despite significant improvements in the technology, its rapid and widespread proliferation raises the question: what liability do physicians have for injuries resulting from erroneous clinical notes generated by medical AI scribes?

Liability for Patient Harm Resulting from Erroneous Clinical Notes

Legal frameworks that govern medical malpractice standards are largely the product of state tort law, with differing legal standards in each jurisdiction. Furthermore, given the relatively novel nature of medical AI scribes, appellate courts have had little exposure to the technology. Despite these challenges, existing legal principles paint a clear picture of cognizable malpractice exposure resulting from the use of medical AI scribes.

Medical malpractice claims must typically satisfy four general elements: (1) the physician owed a duty of care to the patient, (2) the physician breached that duty of care, (3) the breach caused the patient’s injury, and (4) the patient suffered legally cognizable damage resulting from that injury.

Courts across the country analyzing injuries resulting from incomplete or inaccurate medical records have held that physicians have a professional duty of care to prepare competent medical records. When physicians fail to accurately document clinically relevant information in a patient’s medical record, that failure may constitute breach. Likewise, courts analyzing causation have found that where a documentation failure results in patient harm, such as a wrong-site or unnecessary surgery, the causation element may be satisfied. Thus, under existing malpractice law, physicians may risk liability for malpractice if they fail to prepare competent clinical notes and that failure results in patient harm. 

The use of medical AI scribes does not alter the fundamental allocation of legal responsibility in clinical care. Federal Medicare regulations dictate that medical record entries must be reviewed and signed by the person responsible for the patient's care. These signatures identify the physician responsible for the clinical note, regardless of whether AI assisted in its generation.

Practical Implications and Safeguards Against Liability

A malpractice claim stemming from AI-generated clinical notes requires a narrow chain of events: the AI must produce a clinically significant error, the physician must fail to identify and correct it, the resulting documentation must fall below the applicable standard of care, and that error must cause patient harm. In any individual case, this sequence is unlikely, meaning the overall risk of liability remains low. However, the risk is not zero. When scaled across a high volume of patient encounters, even infrequent errors can compound exposure to malpractice liability. This risk is heightened in settings with low continuity of care, where downstream providers rely heavily on prior documentation. 

Accordingly, physicians seeking to benefit from medical AI scribes while minimizing exposure to malpractice liability should treat them as assistive tools. By avoiding excessive reliance on their outputs and carefully reviewing clinical notes before electronically signing or uploading them to patients' EHRs, physicians can leverage this technology while upholding their professional duties.

#AmbientAI #MedicalMalpractice #WJLTA