Sorry Grandma . . . ChatGPT Says You’re Healthy: The Growing Prevalence of AI in Insurance Claim Denials

By: Joseph Valcazar

As of 2024, 32.8 million Americans received health insurance through a Medicare Advantage plan, accounting for over half of all Medicare recipients. Medicare covers some of the most vulnerable members of the populace, including senior citizens aged 65 and older, individuals with disabilities, and those with end-stage renal disease. It should come as no surprise that these groups rely on insurance to cover necessary treatments that would otherwise be too costly. Even with coverage, 13.6% of a Medicare family’s total expenses are health-related; for non-Medicare families, the figure is 6.5%. Now, health insurance carriers are integrating AI-driven predictive models to calculate care plans, raising concerns among medical professionals that patients are being denied necessary care and prompting legal action.

What is Medicare Advantage?

Traditional Medicare encompasses inpatient treatment through Medicare Part A and outpatient treatment through Medicare Part B. Eligible Medicare recipients are automatically enrolled in Part A coverage, while Part B coverage is voluntary. Those who choose to participate in Part B pay a monthly premium determined by household income.

In 1997, Congress passed the Balanced Budget Act (BBA), which introduced Medicare Part C, later named Medicare Advantage (MA). The BBA permitted the Centers for Medicare & Medicaid Services (CMS) to contract with private health insurance carriers to provide health insurance plans to eligible Medicare recipients. In turn, MA participants would receive full Part A and Part B coverage, just as they would under traditional Medicare, but through a private insurance carrier (think UnitedHealthcare, Blue Cross Blue Shield, etc.). In addition, MA plans could offer supplemental benefits not offered under traditional Medicare, such as dental and vision coverage or gym memberships.

However, under Medicare Advantage, these private companies control all MA-related claims, determining how much of received or expected care is covered. This is where the controversial nH Predict model enters the picture.

The nH Predict Model

Created by NaviHealth (now owned by UnitedHealth Group), the nH Predict model is designed to predict post-acute care needs. Post-acute care refers to follow-up treatment after a severe injury, illness, or surgery. The most common post-acute treatments involve stays at skilled nursing facilities (SNFs) and care from home health agencies (HHAs).

Investigations of the nH Predict model indicate that it has become “increasingly influential in decisions about patient care and coverage.” While the specifics of the model are unknown, nH Predict functions by drawing on databases containing millions of medical records, evaluating demographic information such as age, preexisting health conditions, and other factors to determine custom care plans, including the duration of treatments.
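Though nH Predict’s internals are not public, the general shape of such a system can be sketched. The Python sketch below is purely hypothetical: it assumes a generic regression model trained on historical patient records to estimate a covered length of stay, which is how the lawsuits describe the model functioning; it is not UnitedHealth’s actual code, features, or data.

```python
# Hypothetical sketch only: nH Predict's internals are not public.
# Assumes a generic regressor that estimates covered days of
# post-acute care from a handful of demographic/clinical features.
from sklearn.ensemble import GradientBoostingRegressor

# Toy historical records: [age, preexisting conditions, mobility score]
X_train = [
    [72, 1, 8],
    [80, 3, 4],
    [68, 0, 9],
    [85, 4, 3],
]
y_train = [18, 45, 12, 60]  # days of skilled-nursing care actually used

model = GradientBoostingRegressor().fit(X_train, y_train)

# A new patient is reduced to the same handful of features, and the
# model's prediction, rather than the treating physician's judgment,
# can become the cap on covered care.
new_patient = [[74, 2, 5]]
print(f"Predicted covered stay: {model.predict(new_patient)[0]:.0f} days")
```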

The use of predictive models has drawn concern from medical professionals and patients alike, who worry that increasing reliance on such models fails to account for the unique individual factors that contribute to a patient’s recovery, leading to inaccurate results. An ongoing class action lawsuit claims the nH Predict model has a 90% error rate. The lawsuit also accuses UnitedHealthcare of knowing about this error rate and still using the model to override treating physicians’ determinations.

Class Action Lawsuits

Since the model’s creation, multiple health insurance providers have integrated the error-prone nH Predict model into their claims processes. Many MA patients have filed federal class action lawsuits against major health insurance companies, including UnitedHealthcare and Humana, alleging breach of contract, breach of the implied covenant of good faith and fair dealing, and unjust enrichment. The plaintiffs claim that by using the faulty nH Predict model, these companies have unfairly denied claims, directly and proximately causing their damages.

In one claim against UnitedHealthcare, Dale Henry Tetzloff, a 74-year-old MA recipient, suffered a stroke that required hospital admission. Mr. Tetzloff’s doctor recommended he seek post-acute treatment at an SNF for 100 days. After 20 days of treatment at an SNF, he was informed by UnitedHealthcare that any further treatment would not be covered. It took two separate appeals before a UnitedHealthcare doctor reviewed Mr. Tetzloff’s medical records and concluded additional recovery time was needed. Yet, after 20 more days at the SNF, Mr. Tetzloff was again informed that any further post-acute care had been denied. This time, even with an opposing opinion from Mr. Tetzloff’s doctor, UnitedHealthcare refused to reverse its decision. As a result, Mr. Tetzloff was required to pay out-of-pocket expenses totaling $70,000 to receive the necessary treatment.

These lawsuits shine a spotlight on the ethical and legal ambiguities of AI in its current state. The legal system is not well equipped to respond swiftly to complex new technological advancements. When a court has the opportunity to hear a case on an emerging issue, it is placed in a position to serve as a voice of authority. A ruling in the plaintiffs’ favor would act as a deterrent to similar future conduct, providing the legislature an additional buffer as it tackles the unenviable task of regulating this new technology.

The fact is, Mr. Tetzloff’s story is not unique, and the implications of these lawsuits are apparent: people’s quality of life is on the line. The outcome of these lawsuits, and the response from the government, will help shape how AI is integrated into the healthcare industry and others like it.

The Government’s Initial Reactions

The federal government has begun to respond to these concerns. On January 1, 2024, the Department of Health and Human Services enacted new rules requiring specialized health care professionals to review any denial involving a determination of a service’s medical necessity, a change viewed as fixing “a big hole” in managing the use of AI predictive models.

More recently, on September 28, 2024, California passed SB 1120, which requires health care service plans that use AI to determine medical necessity to comply with specific requirements. The objective of this new legislation is to increase the transparency of these models, prevent discrimination, and limit the supplantation of health care providers’ decision-making.

The introduction of AI into the healthcare industry is novel, and further reactions from governments at the state and federal levels are likely to follow.

Conclusion

Proponents of AI predictive models believe that these systems will speed up the claims process, detect unusual billing patterns, and allow health insurance companies to make more accurate risk assessments. In turn, this will allow these companies to utilize their resources more efficiently and offer better treatment plans. But at what cost to the insured? If AI proves to be as reliable as its proponents believe, then perhaps a future exists where predictive models are commonplace and serve to benefit not only the insurance companies but those covered as well. However, many of these models are in their infancy. Relying on their outputs, especially when the health and wellbeing of individuals is involved, is a slippery slope that can, and has, harmed people physically and financially.

“Hey Chatbot, Who Owns Your Words?”: A Look into ChatGPT and Issues of Authorship

By: Zachary Finn

Unless you have lived under a rock since last December, you have heard of the infamous ChatGPT. Chat Generative Pre-trained Transformer (“ChatGPT”) is an AI-powered chatbot which uses adaptive, human-like responses to answer questions, converse, write stories, and engage with input transmitted by its user. Chatbots are becoming increasingly popular in many industries and can be found on the web, social media platforms, messaging apps, and other digital services. The world of artificial intelligence sits on the precipice of innovation and exponential technological discovery. Because of this, the law has lagged behind in interpreting critical issues that have emerged from chatbots like ChatGPT. One issue that has arisen at the intersection of AI chatbot technology and law is that of copyright and intellectual property over a chatbot’s generated work. The only thing that may be predictable about the copyright of an AI’s work is that (sadly) ChatGPT likely does not own its labor.

To understand how ChatGPT figures into the realm of copyright and intellectual property, it is first important to understand the foundations and algorithms that give chatbot machines life. A chatbot is an artificial intelligence program designed to simulate conversation with human users. OpenAI developed ChatGPT to converse with users, typically through text or voice-based interactions. Chatbots are used in a variety of ways, such as customer service, conversation, information gathering, and language learning. ChatGPT is programmed to understand user contributions and respond with appropriate and relevant information. These inputs are sent by human users, and a chatbot’s response is often based on machine learning algorithms or on a predefined script. Machine learning algorithms are methods by which an AI system functions, generally predicting output values from given input data. In lay terms, a system learns from previous human inputs to generate a more accurate response.
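As a toy illustration of that idea, the hedged Python sketch below trains a tiny classifier to map user inputs to response categories. It is not how ChatGPT itself works (ChatGPT relies on a large neural network), and the example inputs and labels are invented purely for illustration.

```python
# Minimal, hypothetical sketch of "learning from previous inputs":
# a tiny intent classifier, not ChatGPT's actual architecture.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Past human inputs paired with the kind of response they should trigger
examples = [
    ("what were george washington's teeth made of", "history_fact"),
    ("who was the first president", "history_fact"),
    ("tell me a story about a dragon", "storytelling"),
    ("how do i say hello in french", "language_help"),
]
texts, labels = zip(*examples)

# The model learns a mapping from input text to an output category
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# A new, unseen input is handled using what was learned from prior ones
print(model.predict(["what was abraham lincoln's hat made of"])[0])
```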

The ChatGPT process proceeds as follows (a minimal code sketch of the same round trip appears after the list):

1. A human individual inputs data, such as a question or statement: “What were George Washington’s teeth made of?”

2. The chatbot reads the data and uses machine learning algorithms and its underlying model to generate a response.

3. ChatGPT’s response is relayed back to the user in a discussion-like manner: “Contrary to popular belief, Washington’s dentures were not made of wood, but rather a combination of materials that were common for dentures at the time, including human and animal teeth, ivory, and metal springs. Some of Washington’s dentures also reportedly included teeth from his own slaves” (This response was generated by my personal inquiry with ChatGPT).
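Programmatically, the same input-to-response round trip can be made through OpenAI’s published Python client. The sketch below is illustrative: the model name is an example, and an API key is assumed to be configured in the environment.

```python
# Sketch of the input -> model -> response round trip using
# OpenAI's official Python client (the openai package).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[
        {"role": "user",
         "content": "What were George Washington's teeth made of?"},
    ],
)

# The generated reply comes back in the first choice's message
print(response.choices[0].message.content)
```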

So, who ultimately owns content produced by ChatGPT and other AI platforms? Is it the human user? OpenAI or the system developers? Or, does artificial intelligence have its own property rights?

Copyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression. This is codified in the Copyright Act of 1976, which provides the framework for copyright law. As to the element of authorship, anyone who creates an original fixed work, like taking a photograph, writing a blog, or even creating software, becomes the author and owner of that work. Corporations and people other than a work’s creator can also be owners, through co-ownership or when a work is made for hire (which provides that works created by an employee within the scope of employment are owned by the employer). Ownership can also be transferred by contract.

In a recent Ninth Circuit decision, the appellate court held that for a work to be protected by copyright, it must be the product of creative authorship by a human author. In that case, Naruto v. Slater, where a monkey ran off with an individual’s camera and took a plethora of selfies, the court concluded that the monkey had no rights in the selfies because copyright does not extend to animals or other nonhumans. Similarly, § 313.2 of the Compendium of U.S. Copyright Office Practices states that the U.S. Copyright Office will not register works produced by nature, animals, the divine, the supernatural, etc. In the case of AI, a court would likely apply this rule, along with any precedent dealing with similar fact patterns involving computer-generated outputs.

Absent human authorship, a work is not entitled to copyright protection. Therefore, AI-created work, like the output manufactured by ChatGPT, will plausibly be considered part of the public domain upon creation. If not, such works will likely be seen as derivative works of the information on which the AI based its creation. A derivative work is “a work based on or derived from one or more already existing works.” This raises a new issue: whether the materials used by an AI are derived from algorithms created by companies like OpenAI, or from users who influence a bot’s generated response, like when someone investigates George Washington’s teeth. Luckily, OpenAI addresses the question via its terms and agreements, which contractually allocate ownership of content produced by ChatGPT.

However, without a contract allocating authorship rights, the law has yet to address the intellectual property rights of works produced by chatbots. One wonders when an issue like this will present itself to a court for resolution, and whether, when that time comes, AI chatbots will have the conversational skills and intellect to argue for ownership of their words.

No One Should Own Exclusively AI Generated Art

By: Jacob Alhadeff

On February 14, 2022, the Copyright Review Board (CRB) rejected physicist Stephen Thaler’s claim to a copyright in his algorithm’s “authorship” because a “human being did not create the work.” On September 15, 2022, Kris Kashtanova received a copyright for their comic book Zarya of the Dawn, in which all of the art was AI-generated but Kashtanova created the other aspects of the book. The difference in treatment likely comes down to questions of originality and authorship, and simply that one work required human creativity while the other was effectively the work of a computer. Though these legal arguments are compelling in themselves, a necessary and implicit policy rationale seldom explicitly recognized by the law deserves highlighting: the relationship between work and incentives. Here, copyright incentivizes Kashtanova’s creative human work while reasonably denying that incentive to Thaler’s exclusively AI-generated art.

AI art, also known as generative art, uses machine learning (ML) algorithms that have been trained on billions of images, frequently drawn from licensed training sets and from images publicly available on the internet. The images these algorithms use are frequently copyrighted or copyrightable. Users then type in a phrase, “carrot parrot,” for example, and a unique image is generated in seconds. Creating novel art can now be as simple as an image search on Google. This technology has been in the works for many years, but recently, platforms like DALL-E, Midjourney, and Stable Diffusion scaled their training data from millions to billions of images, and the emergent result was a dramatically better output. Reflecting that momentum, on October 17, 2022, Stability AI, the company behind Stable Diffusion, announced the completion of a $101M seed round at a $1B valuation. Sequoia Capital then posted a blog suggesting that generative AI could create “trillions of dollars of economic value.” The future of generative AI looms large, and at the very least it promises to expose unexplored ambiguities in copyright.
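To make the user-side mechanics concrete, a text-to-image call to an open model can be sketched as below. This uses Hugging Face’s diffusers library with a public Stable Diffusion checkpoint; it illustrates the general workflow, not the proprietary internals of DALL-E or Midjourney.

```python
# Illustrative text-to-image sketch using Hugging Face's diffusers
# library and a public Stable Diffusion checkpoint (requires a GPU).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The user's entire creative contribution is the prompt itself
image = pipe("carrot parrot").images[0]
image.save("carrot_parrot.png")
```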

Functionally, there are two primary entities in generative art that may be incentivized through copyright: the programmer and the user. The programmer may have spent many hours writing and training the algorithm so that it can quickly create novel works of art. The user of the algorithm, on the other hand, is “the person who provides the necessary arrangements,” basically the person who prompts the program with a phrase. Granting either of these entities a copyright to exclusively generated art ineffectively balances incentives and ignores the purpose of copyright.

Incentives and the Purpose of Copyright 

Copyright’s purpose is to “promote the Progress of Science and useful Arts.” The constitutional basis for copyright is therefore explicitly utilitarian. The Supreme Court has expanded on this language, suggesting that copyright’s purpose is to (1) “motivate the creative activity of authors and inventors by the provision of a special reward” and (2) “to stimulate artistic creativity for the general public good.” Justice Ginsburg found that copyright’s dual purposes are mutually reinforcing because the public is served through copyright’s individual incentive. This mirrors James Madison’s claim regarding copyright, that “the public good fully coincides in both cases [copyright and patent] with the claims of individuals.” At its core, copyright is a monopoly-based incentive to create art to further public welfare. This incentive is at least implicitly predicated on the notion that creating valuable creative works is not easy, and therefore requires or deserves incentivizing. If improper law and policy are adopted, generative AI has the potential to throw a wrench into this balancing of incentives.

The now rightfully defunct “sweat of the brow” copyright standard awarded a copyright partly on the basis of the amount of work that went into a creation. One reason “sweat of the brow” was flawed was that it meant facts themselves could be copyrighted if it took substantial work to attain those facts. The ability to copyright a fact “did not lend itself to support[ing] [] the public interest” and the standard was discarded. Though improper, the underlying concept was not entirely baseless. If the constitutional purpose of copyright is to provide incentives to artists for public benefit, then copyright law must balance incentives, which implicitly balances work versus reward.

Incentives are not absolute; they are contextual and must at least tacitly recognize the difficulty of the act the incentive intends to induce. ‘Energy in’ must be somewhat commensurate with ‘value out’; otherwise, the incentive structure is misaligned. This balancing of incentives is one of the reasons why a perpetual copyright is unconstitutional. If a copyright holder retains this monopoly right too long after a work’s initial creation, the holder is rent-seeking, and the incentive that copyright provides far overshadows the public benefit. Rent-seeking is growing one’s wealth without “creating new wealth,” which has pernicious societal effects. For this reason, courts have determined that no amount of creativity, originality, or work merits an infinite monopoly on a creative work.

Exclusively Generated Art Should Enter the Public Domain

Neither the user nor the programmer should receive a copyright for exclusively generated art, in part because doing so would misalign incentives. To be overly reductive, incentivizing someone to dedicate their life to an artistic craft requires a substantial incentive, such as a copyright. By contrast, if the effort required to create the art is effectively null (typing a prompt into a generative AI), then the incentive required to promote the useful art is effectively null. As such, the law should not hesitate to reduce or eliminate the incentive for someone who types five words into a generative AI and provides a public benefit by creating exclusively generated art. Importantly, this reasoning excludes artists’ creations that use generative AI as a tool or a component of their work; these artists’ works deserve copyright’s protection. Given that, without any guarantee of copyright protection, over 1.5 million users are creating 2 million images a day using DALL-E, current evidence suggests that generative art users are not concerned about a monopoly on the economic returns from their creations. Lawmakers should not be concerned either.

The owners of a generative AI algorithm should not receive a copyright for every work generated by their algorithm. Some in the intellectual property field suggest that AI-generated art should be copyrightable because, without protection, there will be a “chilling effect on investment in automated systems.” The argument is basically that if the owner of a generative art algorithm cannot hold a monopoly on the generated art, there will be insufficient incentive to continue investing in automated systems. This ignores the concept of Software as a Service and the present reality that machine learning algorithms already contribute effectively to lucrative business models without any guarantee of copyright protection. Relevantly, Stability AI is valued at $1B.

Further, a world where an algorithm’s owners automatically hold a valid copyright claim could completely undermine the market for art. Just as no amount of work can justify a perpetual copyright, no amount of work could justify a handful of entities with machine learning algorithms copyrighting a substantial proportion of modern artistic creation. While generative art may simply become another tool for artistry, it is conceivable that someday the output of the world’s human artists will not compare to the volume of work produced by ML algorithms. Lawmakers should not reduce artistic markets to whoever can create or purchase the most effective machine-learning algorithms.