AI Doctors: Muddying The Waters of Patient Privacy?

By: Nirmalya Chaudhuri

Introduction

In November 2024, a study reported that ChatGPT, acting on its own, diagnosed diseases more accurately than trained doctors in the US, with a roughly 90% success rate. While singing the praises of Artificial Intelligence (AI) in almost every field of human activity has become par for the course, the prospect of “AI doctors” raises important questions about patient privacy and, more broadly, the doctor-patient relationship.

Crisis of Confidentiality: From Ethics to Law

The delicate nature of the doctor-patient relationship was beautifully expounded in the eighteenth-century writings of John Gregory and Thomas Percival, two pioneering figures in the field of medical ethics. As Gregory and Percival noted, doctors inevitably gain access to the private lives of their patients, and this naturally requires them to exercise great discretion in dealing with the personal information entrusted to them. In certain fields, such as psychotherapy, this is all the more important, which has led to a strong movement (and its acceptance by some courts in the US) to carve out a “psychotherapist-patient privilege” under the Federal Rules of Evidence governing confidential communications.

Today, the narrative on this issue may well shift from a purely ethical viewpoint to a more legalistic perspective on data protection and privacy. It is, therefore, not surprising that the General Data Protection Regulation (GDPR) in the European Union classifies “health data” as one of the categories of “sensitive personal data” to which heightened standards and safeguards apply. When patients interact directly with an “AI doctor,” sensitive personal data relating not only to their health but also to their private lives would be disclosed.

In the absence of strong legal regulations for processing such data, it is possible that patient data could be used for purposes other than providing patient care. The concerns are exacerbated by the fact that generative AI may “learn” or “train” itself using such sensitive patient data, leading to outcomes that are both unforeseeable and perhaps undesirable for the patient. A possible counter-argument is that explicit patient consent would be a prerequisite before any health data is divulged to the AI doctor; however, there is a wealth of secondary literature questioning whether patients truly understand what they are consenting to, or instead treat consent as a routine, box-ticking exercise.

Judicial opinion

Case law on this topic is non-existent, presumably because the issue of “AI doctors” is novel. In the past, however, courts have trodden cautiously where divulging patient records to third parties is concerned. The Ohio Supreme Court, in Biddle v. Warren General Hospital, went so far as to suggest that unauthorized disclosure of patient records to a third party was an independent tort in itself, irrespective of how that information was used by the third party. It is, of course, true that the court was dealing with a case in which the patients had not consented to the disclosure of their data. In the UK case of Source Informatics, the Court of Appeal allowed patient records to be passed to a third party. In that case, however, it is worth noting that the patient data was anonymized before disclosure, making the data-protection argument that much weaker.

These cases would prove to be of limited application if and when the debate on AI doctors reaches the stage of litigation. In these circumstances, the courts would then be deciding whether patients can consent to the disclosure of non-anonymized sensitive personal data to an AI entity and the extent to which the AI tool can use that information. 

Conclusion

In the absence of a federal data protection law in the US, there is a possibility that legal regulation of the wide-ranging use of AI doctors will be State-specific and fragmented. Even more importantly, a lack of legal regulation raises serious questions about whether AI doctors are indeed “practicing” medicine, which would directly determine whether they are bound by the professional obligations ordinarily applicable to doctors, such as preserving the confidentiality of patient data. A possible solution could lie in a conclusive determination of the legal status of AI doctors and of the techno-legal accountability standards, such as purpose limitation and data minimization, to which they would be subject. While AI can potentially bring great benefits to medical science, one must ensure that confidentiality, privacy, and the protection of personal data are not sacrificed at the altar of convenience and diagnostic efficiency.

Hide Your Info: Exploring the Lackluster Protection of HIPAA

By: Zach Finn

The Health Insurance Portability and Accountability Act (HIPAA) was enacted in 1996 and has since become a touchstone for protecting the confidentiality and security of personal health information in the United States.

Or, so we thought. The rise of technology has changed the way information is stored and shared. Biomedical databases store high volumes of information, ranging from personal identifiers and medical records to individual genetic sequences, exemplified by 23andMe’s and Ancestry’s storage of genetic information. Large datasets and biobanks (collections of biological samples, such as blood, together with health information) create access to a plethora of quality human data, which proves valuable in medical research, clinical trials, and understanding genomics. But at what cost?

HIPAA requires medical and genetic information to be anonymized before it is distributed or shared with third parties outside the relationship of medical providers and patients. Technology has created a loophole in HIPAA through re-identification, a process that allows third parties to match medical information back to specific individuals using open-source data. Re-identification, as of now, disarms HIPAA, rendering de-identified (anonymized) medical information essentially unprotected from parties who obtain personal biodata this way.

HIPAA establishes national standards for protecting the privacy and confidentiality of individuals’ personal health information (PHI). It requires covered entities to provide individuals with notice when sharing a person’s genetic information. HIPAA is violated when a covered entity discloses personal, identifiable health information without the consent of the patient. These covered entities include healthcare providers, health plans, and healthcare clearinghouses. Technology gives entities the ability to de-identify and anonymize large data sets in order to share health information while remaining in compliance with HIPAA. Anonymization removes personal identifiers such as names, addresses, dates of birth, and other critical identifiers. HIPAA sets out requirements for what must be de-identified, and once anonymized, personal health information is shareable and HIPAA compliant.
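To make the anonymization step concrete, here is a minimal, illustrative Python sketch of stripping direct identifiers from a patient record. The field names and the list of identifiers are hypothetical examples, not the full set of identifiers enumerated by HIPAA’s de-identification standards.

```python
# Minimal sketch of identifier removal (illustrative only).
# The record fields and the identifier list below are hypothetical examples,
# not the complete set of identifiers specified by HIPAA.

DIRECT_IDENTIFIERS = {"name", "address", "date_of_birth", "ssn", "email", "phone"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {field: value for field, value in record.items()
            if field not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "date_of_birth": "1984-03-12",
    "zip_code": "02139",
    "gender": "F",
    "diagnosis": "Type 2 diabetes",
}

print(deidentify(patient))
# {'zip_code': '02139', 'gender': 'F', 'diagnosis': 'Type 2 diabetes'}
```

Note that even after the direct identifiers are gone, quasi-identifiers such as ZIP code and gender remain, which is exactly what re-identification exploits.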

Re-identification is the process by which materials and data stored in biobanks can be linked back to the names of the individuals from whom they were derived. This is done by taking public information and re-matching it to the anonymized data. It sounds difficult, but one study concluded that 99.98% of Americans could be correctly re-identified in any dataset using 15 demographic attributes such as age, gender, and marital status. For example, in the 1990s, one could purchase the Cambridge, MA voter registration list for $20 and link it to a public version of the state’s hospital discharge database to reveal the persons associated with many clinical diagnoses.
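The linkage attack described above can be illustrated with a small Python sketch. The two toy datasets and their attribute names are invented for illustration; a real attack would use far larger public records and more quasi-identifiers.

```python
# Illustrative sketch of a linkage ("re-identification") attack, assuming a
# public record with names and an "anonymized" medical dataset. All rows and
# field names are invented for this example.

public_records = [
    {"name": "Jane Doe", "zip_code": "02139", "gender": "F", "age": 41},
    {"name": "John Roe", "zip_code": "02139", "gender": "M", "age": 67},
]

anonymized_medical = [
    {"zip_code": "02139", "gender": "F", "age": 41, "diagnosis": "Type 2 diabetes"},
    {"zip_code": "02139", "gender": "M", "age": 67, "diagnosis": "Hypertension"},
]

QUASI_IDENTIFIERS = ("zip_code", "gender", "age")

def reidentify(public, medical, keys=QUASI_IDENTIFIERS):
    """Match 'anonymized' rows back to named individuals on shared attributes."""
    matches = []
    for med in medical:
        for person in public:
            if all(person[k] == med[k] for k in keys):
                matches.append({"name": person["name"], **med})
    return matches

for row in reidentify(public_records, anonymized_medical):
    print(row)
# Each diagnosis is now linked back to a name, despite the "anonymization."
```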

HIPAA has yet to catch up with technological innovation. The requirements for compliant anonymization lack the sophistication and protective measures needed to combat the expanding use of re-identification practices. HIPAA’s Privacy Rule does not restrict the use or disclosure of de-identified health information, since it is no longer considered protected health information. This means that any re-identification of this formerly protected information is not subject to HIPAA. This ultimately demonstrates HIPAA’s weak protective measures and the alarming ease with which third parties can access our genetic and medical information.

Re-identification of HIPAA-compliant anonymized information is not a violation of the statute. We must consider reforming HIPAA to acknowledge technology’s ability to bypass its security measures. One way an individual can protect the privacy of his or her genetic and medical information is by not consenting to its sharing or storage. Covered entities must give notice and obtain consent before de-identifying and sharing biobank data. However, this comes at the price of stifling research, clinical trials, and genomics. Hopefully we can find a balance between confidentiality and sharing private information, but it starts with drafting laws that actually protect our personal and most private information!

Alexa: Are You Going to Testify Against Me?

By: Melissa Torres

Life seems pretty great in a world where we can turn lights off, play music, and close the blinds by simply speaking it into existence. But what happens when your conversations or home noises are used against you in a criminal investigation?

Smart speakers, such as Google Home and Amazon Alexa, are marketed as great tech gifts and the perfect addition to any home. A smart speaker is a speaker that can be controlled with your voice through a “virtual assistant.” It can answer questions, perform various automated tasks, and control other compatible smart devices once you activate it with its “wake word.”

According to Amazon.com, in order for a device to start recording, the user has to awaken the device by saying the default word, “Alexa.” The website states, “You’ll always know when Alexa is recording and sending your request to Amazon’s secure cloud because a blue light indicator will appear or an audio tone will sound on your Echo device.” Unless the wake word is used, the device does not listen to any other part of your conversations as a result of built-in technology called “keyword spotting”, according to Amazon.

Similarly, Google states, “Google Assistant is designed to wait in standby mode until it detects an activation, like when it hears ‘Hey Google.’ The status indicator on your device will let you know when Google Assistant is activated. When in standby mode, it won’t send what you’re saying to Google servers or anyone else.” 
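The wake-word flow that Amazon and Google describe can be summarized in a short, purely illustrative Python sketch. Real devices run an on-device acoustic model over audio frames; here, as a stand-in, “audio” is modeled as plain text, and none of the names below reflect Amazon’s or Google’s actual code.

```python
# Purely illustrative sketch of the "keyword spotting" flow described above.
# Real devices analyze audio frames on-device; here "audio" is plain text,
# and these function names are hypothetical, not Amazon's or Google's code.

WAKE_WORD = "alexa"

def detect_wake_word(utterance: str) -> bool:
    """Stand-in for an on-device keyword-spotting model."""
    return WAKE_WORD in utterance.lower()

def handle_utterances(utterances):
    """Only utterances containing the wake word are flagged for transmission."""
    sent_to_cloud = []
    for utterance in utterances:
        if detect_wake_word(utterance):
            print("indicator on: recording")   # analogue of the blue light / tone
            sent_to_cloud.append(utterance)    # only this request leaves the device
        # everything else is discarded locally, per the companies' descriptions
    return sent_to_cloud

conversation = [
    "Did you see the game last night?",
    "Alexa, what's the weather tomorrow?",
    "Let's order pizza later.",
]
print(handle_utterances(conversation))
# prints the recording indicator once, then ["Alexa, what's the weather tomorrow?"]
```

The concern raised below is that, in practice, the device may not always behave the way this idealized flow suggests.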

Consumers consent to being recorded when they willingly enter a contract with these smart devices by clicking “I agree to the terms and conditions.” However, most people assume this applies only to recordings made after the “wake word” is spoken. Despite assurances from tech giants that these devices do not record without being prompted, there have been many reports suggesting otherwise. And in recent years, these smart devices have garnered attention after being called on as star witnesses in murder investigations.

In October 2022, someone fatally shot two researchers before setting fire to the apartment they were found in. According to the report, Kansas police believe the killer was inside the apartment with the duo for several hours, including before and after their deaths. Investigators found an Amazon Alexa device inside the apartment and filed a search warrant for access to the device’s cloud storage, hoping it may have recorded clues as to who was responsible for the murders. If the police obtain relevant information, they may be able to use it in court, depending on how this evidence is classified.

Under the Federal Rules of Evidence, all relevant evidence is admissible unless another rule specifies otherwise. Specifically, statements that are considered hearsay are not admissible unless an exception applies. Hearsay is an out-of-court statement offered to prove the truth of the matter asserted. Although these devices technically do produce statements, courts have held that a statement is something uttered by a person, not a machine. However, there is an important distinction between computer-stored and computer-generated data. Computer-stored data entered by a human has the potential to be hearsay, while computer-generated data produced without the assistance or input of a person is not considered hearsay. The question of how these statements will be classified, and whether they will be permitted in court, is up to the judge.

This isn’t the first time police have requested data from a smart speaker during a murder investigation. In 2019, Florida police obtained search warrants for an Amazon Echo device, believing it may have captured crucial information surrounding an alleged argument at a man’s home that ended in his girlfriend’s death. In 2017, a New Hampshire judge ordered Amazon to turn over two days of Amazon Echo recordings in a case where two women were murdered in their home. In these previous cases, the parties consented to handing over the data held on these devices without resistance. In 2015, however, Amazon pushed back when Arkansas authorities requested data in a case involving a dead man found floating in a hot tub. Amazon explained that while it did not intend to obstruct the investigation, it also sought to protect its consumers’ First Amendment rights.

According to the complaint, Amazon’s legal team wrote, “At the heart of that First Amendment protection is the right to browse and purchase expressive materials anonymously, without fear of government discovery,” later explaining that the protections for Amazon Alexa were twofold: “The responses may contain expressive material, such as a podcast, an audiobook, or music requested by the user. Second, the response itself constitutes Amazon’s First Amendment-protected speech.” Ultimately, the Arkansas court never decided on the issue as the implicated individual offered up the information himself.      

Thus, a question remains unanswered: exactly how much privacy can we reasonably expect when installing a smart speaker? As previously mentioned, these smart speakers have been known to activate without the use of a “wake word,” potentially capturing damning conversations. Without a specified legal standard, there is currently not much consumers can do to protect their private information from being shared, fueling the worry that these devices can be used against them. Tech companies like Amazon and Google suggest going into the settings and turning off the microphone when you aren’t using it, but that requires trusting the company to actually honor those settings. Users also have the option to review and delete recordings, but again, this requires trusting the company. The only sure way to protect yourself from these devices is simply not to purchase them. If you can’t bring yourself to do that, be sure to unplug the devices when you’re not using them. Otherwise, it’s possible these smart speakers may be used as evidence against you in court.

The Cellphone: Our Best Helper or an Illegal Recorder? 

By: Lauren Liu

We have all experienced that shocking moment when we realized that the advertisement or post appearing on our screen happens to be the exact topic that we talked about in a very private conversation. Although we did not Google or browse that topic on the internet, somehow, that idea of upgrading our laptop or buying that new pair of shoes slipped into our browser and started waving at us from across the screen. We are in awe, and can even feel violated.

Such an experience has become so common that we forget how much the browsers and apps we use track us, and how much our cellphones are listening in on our every conversation. The revelations from Thomas le Bonniec, a former contract consultant for Apple, have raised further concerns for customers. According to le Bonniec, Apple created a quagmire for itself involving many ethical and legal issues, including Siri’s eavesdropping. In many instances, iPhones record users’ private conversations without their awareness and without any activation of Siri, the assistant that listens to users’ voice commands and assists with their needs. The problem stems from the fact that every smartphone, whether an iPhone or an Android device, is a sophisticated tracking device with very sensitive microphones that can capture audio from the user, or even anyone in the vicinity. Furthermore, with 4G LTE and its bandwidth, these recordings can be stored and uploaded to the seller’s database without the knowledge or consent of the owner. Le Bonniec mentioned Apple’s explanation that these recordings were gathered into Apple’s database for analytics and transcription improvements. However, his revelation of Apple’s internal operations still caused many privacy concerns among customers and raised potential legal issues.

In response to such concerns, companies created long consent forms for customers to sign before purchasing the product. The legal definition of consent is that a person with sufficient mental capacity and understanding of the situation voluntarily and willfully agrees to a proposition. Based on that definition, a majority of customers could not have validly consented, because most of them do not read or fully understand the content of these forms when they sign them. More specifically, with respect to Siri, customers often do not clearly understand what Siri listens to or how their iPhones record their conversations. Most ordinary iPhone users assume that Apple evaluates voice commands and questions only after they activate Siri for specific commands.

Federal law (18 U.S.C. § 2511) requires one-party consent, which means that a person can record a phone call or conversation so long as that person is a party to the conversation. If a person is not a party to the conversation, he or she can record only if at least one party consents and has full knowledge that the communication is being recorded. Most state laws follow the federal approach. It remains an open question whether Apple or Siri should be legally considered a party to a conversation, but based on common sense, most consumers would likely think not. Furthermore, it remains unclear whether signing a consent form without a comprehensive understanding of its content constitutes valid consent. Thus, even if a customer signs such a form, it remains possible that he or she has not consented to being recorded.

In addition to learning about the law, consumers should also ask questions regarding potentially illegal recordings by electronic devices. How much private information is obtained? What confidentiality agreements were in place, and what oversight was implemented? Are actual audio recordings retained, and if so, for how long? With so much ambiguity still remaining, these questions can at least begin the process of addressing consumers’ concerns and reducing potential legal disputes for sellers.

Careful! Big Brother is Watching (or rather Listening)

By: Enny Olaleye

Earlier this year, social media users may have been surprised to see #LiveListen trending on websites such as Twitter and TikTok. The hashtag refers to one of Apple’s newest innovations, Live Listen, an accessibility feature designed to help the hearing-impaired by letting users turn their electronic devices (iPhone, iPad, etc.) into a microphone that sends sound to their AirPods. However, what Apple intended as a simple new feature quickly turned into a social media craze, as Apple users discovered that they could use the function to eavesdrop on other people’s conversations.

Activating the Live Listen feature is as easy as opening your iPhone’s settings application. Once activated, the Live Listen feature allows users to hear conversations more clearly, by tuning out any background noise present. With your AirPods in your ears and your iPhone near the person you are trying to hear, Live Listen will transmit the audio to your AirPods. While navigating this new feature, users soon found out that when their AirPods were connected, they were able to listen in on any conversations happening in the room the iPhone was placed in—even when they were in a different room from the device. Live Listen remains active until the AirPods are put back in their case or disconnected from their mobile device. This feature means that, even if the connected iPhone or iPad is hidden somewhere out of sight, it can still clearly pick up conversations within the same room. 

Social media users began to label this new advancement as a “game-changer,” publicly sharing the different ways they planned to use this feature to eavesdrop on their friends, partners, and even their employers.

Thus, the question arises: Are AirPods our newest security threat? 

When you think of the word “wiretapping,” or what is commonly referred to as “eavesdropping,” you may imagine a black-and-white scene with a bunch of men in suits huddled around a clunker of a machine, wearing oversized headphones and looking intently into the distance. Well, thanks to Ring cameras, high-definition drones, and of course smartphones, wiretapping laws have greatly expanded from what they were back in the days of drama-filled, black-and-white criminal television shows. The Electronic Communications Privacy Act of 1986 (ECPA) made it a federal crime to engage in, possess, use, or disclose information obtained through illegal wiretapping or electronic eavesdropping. This statute applies to any face-to-face conversation, email, text, phone call, or “electronic communication” that is reasonably expected to be private.

“But—I don’t plan to record the conversation; I just want to listen in.” Still…no. 

Aside from the literal act of using AirPods as a wiretapping device, the ECPA considers it a felony to intentionally intercept electronic communication, which covers setting up your AirPods to listen in on private conversations. Further, the ECPA also considers it a felony to attempt to intercept an electronic communication, which includes the mere act of attempting to set up the Live Listen feature for the purpose of listening in on a reasonably private conversation. Regardless of whether you are recording or just listening in, the consequences of even attempting to wiretap or eavesdrop include imprisonment of up to five years (if criminal intent can be proven) and a fine of up to $250,000.

With the advancement of technology showing no signs of slowing, a further question arises: if your peers can so easily listen in on your conversations, what does that mean for those with more resources and power?

Electronic surveillance, whether through AirPods or government-funded access to encryption tools, is fundamentally at odds with personal privacy. Under the Fourth Amendment, government agencies must obtain a warrant, approved by a judge, before engaging in wiretapping or electronic surveillance. However, while government agencies are required to secure a warrant, their requests for wiretaps are almost never turned down by judges. Once authorized, both wiretapping and electronic eavesdropping enable the government to monitor and record conversations and activities without revealing the presence of government listening devices.

Legislation concerning wiretapping and privacy rights continuously lags behind the fast-paced advancement of technology. Even so, products as simple as AirPods and iPhones will never be tagged as security threats, simply because they already exist everywhere. The old adage that “Big Brother is Watching You” is slowly coming to fruition, as user privacy can be bypassed at our own fingertips. While the expansion of electronic surveillance was originally meant to reduce serious violent crimes after 9/11, it has only led to heightened violations of privacy rights in the United States.

“So now what?” 

Well, simply put, in most circumstances, listening in on conversations that are “reasonably expected” to be private, without the consent of those participating in the conversation, will most likely constitute a federal crime. Thus, activating Live Listen and using it outside its designated role as an accessibility feature is not a good idea. As for protecting yourself and your information, that is a bit more difficult. Avoiding the entire “surveillance economy” by not using Apple products or avoiding Google and Twitter is just very unlikely (I still haven’t been able to give up Amazon Prime). However, taking action can be as small as browsing only over secure connections (with the little lock in the address bar) or as large as pressing your state’s representatives to pass legislation centered on protecting our individual privacy rights. Either is a step in the right direction.

The bottom line is this: without the assurance that our private communications are, indeed, private, privacy rights will continue to be glossed over, and decisions based upon free will and personal choice will slowly be replaced by decisions rooted in prudence and fear.