
By: Nirmalya Chaudhuri
Introduction
In November 2024, a study reported that ChatGPT, acting on its own, performed better at diagnosing diseases than trained doctors in the US, with a reported 90% success rate. While singing paeans to Artificial Intelligence (AI) in almost every field of human activity has now become par for the course, the prospect of “AI doctors” raises important questions about patient privacy and, more broadly, about the doctor-patient relationship itself.
Crisis of Confidentiality: From Ethics to Law
The delicate nature of the doctor-patient relationship was beautifully expounded in the eighteenth-century writings of John Gregory and Thomas Percival, two pioneering figures in the field of medical ethics. As Gregory and Percival noted, doctors inevitably gain access to the private lives of their patients, and this naturally requires them to exercise great discretion in dealing with the personal information entrusted to them. In certain fields, such as psychotherapy, this is all the more important, which has led to a strong movement (and its acceptance by some courts in the US) to carve out a “psychotherapist-patient privilege” exception under the Federal Rules of Evidence governing confidential communications.
Today, the narrative on this issue may well shift from a purely ethical viewpoint to a more legalistic perspective on data protection and privacy. It is, therefore, not surprising that the General Data Protection Regulation (GDPR) in the European Union classifies data concerning health among the “special categories of personal data” (commonly described as sensitive personal data), to which heightened standards and safeguards apply. When patients interact directly with an “AI doctor,” sensitive personal data relating not only to their health but also to their private lives would be disclosed.
In the absence of strong legal regulation of the processing of such data, it is possible that patient data could be used for purposes other than providing patient care. These concerns are exacerbated by the fact that generative AI may “learn” or “train” itself on such sensitive patient data, leading to outcomes that are both unforeseeable and potentially undesirable for the patient. A possible counter-argument is that the patient’s explicit consent would be a prerequisite before any health data is divulged to the AI doctor. However, a wealth of secondary literature questions whether patients truly understand what they are consenting to, or whether they instead treat consent as a routine, box-ticking exercise.
Judicial Opinion
Case law on this topic is non-existent, presumably because the issue of “AI doctors” is novel. However, courts have historically trodden cautiously where the disclosure of patient records to third parties is concerned. The Ohio Supreme Court, in Biddle v. Warren General Hospital, went so far as to suggest that unauthorized disclosure of patient records to a third party amounted to an independent tort in itself, irrespective of how that information was utilized by the third party. It is, of course, true that the court was dealing with a case in which the patients had not consented to the disclosure of their data. In the UK case of Source Informatics, the Court of Appeal allowed patient records to be passed to a third party. It is worth noting, however, that the patient data in that case had been anonymized before disclosure, which considerably weakened the data-protection argument.
These cases would prove to be of limited application if and when the debate on AI doctors reaches the stage of litigation. In these circumstances, the courts would then be deciding whether patients can consent to the disclosure of non-anonymized sensitive personal data to an AI entity and the extent to which the AI tool can use that information.
Conclusion
In the absence of a federal data protection law in the US, there is a possibility that legal regulation of the wide-ranging use of AI doctors would be State-specific and fragmented. Even more importantly, a lack of legal regulation would raise serious questions about whether AI doctors are indeed “practicing” medicine, which would directly determine whether they are bound by the professional obligations ordinarily applicable to doctors, such as the preservation of the confidentiality of patient data. A possible solution could lie in a conclusive determination of the legal status of AI doctors, and of the techno-legal accountability standards, such as purpose limitation and data minimization, to which they would be subject. While AI can potentially bring great benefits to medical science, one must ensure that confidentiality, privacy, and the protection of personal data are not sacrificed at the altar of convenience and diagnostic efficiency.



