“Hey Chatbot, Who Owns Your Words?”: A Look into ChatGPT and Issues of Authorship

By: Zachary Finn

Unless you have been living under a rock, you have heard of ChatGPT, which has taken the world by storm since last December. ChatGPT (short for Chat Generative Pre-trained Transformer) is an AI-powered chatbot that generates adaptive, human-like responses to answer questions, converse, write stories, and otherwise engage with its user’s input. Chatbots are becoming increasingly popular across many industries and can be found on the web, social media platforms, messaging apps, and other digital services. The world of artificial intelligence sits on the precipice of innovation and exponential technological discovery. Because of this, the law has lagged behind in interpreting the critical issues that have emerged from chatbots like ChatGPT. One issue that has arisen at the intersection of AI-chatbot technology and law is that of copyright and intellectual property in a chatbot’s generated work. The only thing that may be predictable about the copyright status of an AI’s work is that (sadly) ChatGPT likely does not own its own labor.

To understand how ChatGPT figures into the realm of copyright and intellectual property, it is important to first understand the foundations and algorithms that give chatbots life. A chatbot is an artificial intelligence program designed to simulate conversation with human users. OpenAI developed ChatGPT to converse with users, typically through text- or voice-based interactions. Chatbots are used in a variety of ways, such as customer service, conversation, information gathering, and language learning. ChatGPT is programmed to understand user inputs and respond with appropriate and relevant information. These inputs are sent by human users, and a chatbot’s response is typically based either on machine learning algorithms or on a predefined script. Machine learning algorithms are the methods by which an AI system functions, generally predicting output values from given input data. In lay terms, the system learns from previous human inputs to generate more accurate responses.
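To make that idea concrete, below is a toy sketch in Python of “learning” from prior text. It is nothing like the scale or architecture of ChatGPT’s actual model, which OpenAI has not published, but it shows the basic pattern of predicting an output from patterns in input data:

```python
# Toy illustration of "learning" from input data: a bigram model that
# predicts the next word by tallying which word most often followed it
# in the training text. The training text here is purely illustrative.
from collections import Counter, defaultdict

training_text = (
    "the dentures were made of ivory . "
    "the dentures were held by metal springs ."
)

# Count how often each word follows each other word.
following = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training data."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict_next("dentures"))  # -> "were", learned from the examples
print(predict_next("metal"))     # -> "springs"
```

ChatGPT does something conceptually similar at an enormous scale: its model was trained on vast amounts of text to predict likely continuations, which is what makes its responses read as human-like.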

The ChatGPT process goes as follows (a short code sketch of the exchange appears after these steps):

1. A human user inputs data, such as a question or statement: “What were George Washington’s teeth made of?”

2. The chatbot reads the data and uses its machine learning algorithms and substantial computing power to generate a response.

3. ChatGPT’s response is relayed back to the user in a discussion-like manner: “Contrary to popular belief, Washington’s dentures were not made of wood, but rather a combination of materials that were common for dentures at the time, including human and animal teeth, ivory, and metal springs. Some of Washington’s dentures also reportedly included teeth from his own slaves” (This response was generated by my personal inquiry with ChatGPT).
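For the technically curious, the same three-step exchange can be reproduced programmatically. The sketch below is illustrative only: it assumes the `openai` Python package (the pre-1.0 interface) and a valid API key, and the model name is an assumption; it does not claim to show how the ChatGPT website itself is built.

```python
# Minimal sketch of the input -> model -> response loop, assuming the
# `openai` Python package (pre-1.0 interface) and an API key.
import openai

openai.api_key = "sk-..."  # placeholder; a real key is required

# Step 1: the human user inputs data.
user_input = "What were George Washington's teeth made of?"

# Step 2: the model processes the input and generates a response.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name; others exist
    messages=[{"role": "user", "content": user_input}],
)

# Step 3: the response is relayed back in a discussion-like manner.
print(response["choices"][0]["message"]["content"])
```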

So, who ultimately owns content produced by ChatGPT and other AI platforms? Is it the human user? OpenAI or the system developers? Or, does artificial intelligence have its own property rights?

Copyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression. This is codified in the Copyright Act of 1976, which provides the framework for copyright law. As for the element of authorship, anyone who creates an original fixed work, like taking a photograph, writing a blog, or even creating software, becomes the author and owner of that work. Corporations and people other than a work’s creator can also be owners, through co-ownership or when a work is made for hire (under which works created by an employee within the scope of employment are owned by the employer). Ownership can also be transferred by contract.

In a recent Ninth Circuit decision, the appellate court held that for a work to be protected by copyright, it must be the product of creative authorship by a human author. In that case, Naruto v. Slater, where a monkey ran off with a photographer’s camera and took a plethora of selfies, the court concluded that the monkey had no rights in the selfies because copyright does not extend to animals or other nonhumans. Section 313.2 of the Compendium of U.S. Copyright Office Practices likewise states that the U.S. Copyright Office will not register works produced by nature, animals, the divine, or the supernatural. In the case of AI, a court would likely apply this rule, along with any precedent dealing with similar fact patterns involving computer-generated output.

Absent human authorship, a work is not entitled to copyright protection. Therefore, AI-created works, like the text manufactured by ChatGPT, will plausibly be considered public domain works upon creation. If not, they will likely be seen as derivative works of the information on which the AI based its creation. A derivative work is “a work based on or derived from one or more already existing works.” This raises a new question: whether the materials an AI draws on derive from the algorithms created by companies like OpenAI, or from the users who shape a bot’s generated response, like someone investigating George Washington’s teeth. Luckily for OpenAI, the company addresses this through its terms and agreements, which allocate ownership of the content ChatGPT produces.

However, absent a contract allocating authorship rights, the law has yet to address the intellectual property rights in works produced by chatbots. One wonders when an issue like this will present itself to a court for resolution, and whether, when that time comes, AI chatbots will have the conversational skills and intellect to argue for ownership of their own words.

Are rap lyrics protected as free speech or could they send you to prison?

By: Aminat Sanusi

Over the past couple of decades, rap music has become part of mainstream culture and cultivated a billion-dollar industry. That means more eyes are on every move the artists make, and sometimes those artists get into trouble with the law. In recent years, prosecutors have increasingly used artists’ lyrics as evidence at trial to prosecute the crimes charged against them. The practice has drawn outcry and concern, both because of First Amendment rights and because it disproportionately affects Black, Indigenous, and people of color (BIPOC). The Supreme Court has yet to rule on whether rap lyrics are protected speech under the First Amendment, so prosecutors in many states continue to use them as damning evidence in criminal trials.

How has the history of the First Amendment intertwined with artistic expression?

In a 1992 decision, Dawson v. Delaware, the Supreme Court held that it is unconstitutional to use protected speech as evidence when that speech is irrelevant to the case. This set a heightened evidentiary standard for different forms of protected speech. Yet that standard is applied unevenly: rap and hip-hop music in criminal trials has been treated as inherently incriminating, even though country music also uses vulgar language and invokes violent, graphic imagery. Prosecutors in these criminal cases sometimes use rap videos in addition to lyrics to try to convince juries that the artists more than likely committed the crime because they portrayed gang-related activities in their videos.

The majority of artists in the rap and hip-hop industries are members of the BIPOC community, and when they face criminal charges, prosecutors often use their lyrics as evidence that they committed the crimes. Rap lyrics are known for vulgar language and for mentions of drug and alcohol use, gang-related activities, and other criminal activity. However, musical lyrics are considered artistic expression and are protected under the First Amendment, which guarantees freedom of speech, freedom of the press, and the free exercise of religion. The argument against using rap lyrics to incriminate hip-hop and rap artists is that lyrics are constitutionally protected free speech.

Should artists be fearful of expressing their experiences in their music because it could be used against them in a court of law?

In 2022, Grammy-award-winning hip-hop artist Jeffery Lamar Williams, famously known as “Young Thug,” was charged with conspiracy and street gang activity under Georgia’s Racketeer Influenced and Corrupt Organizations (RICO) Act. Mr. Williams is an African American man who raps about his life experiences, including lyrics covering gang activity, drug and alcohol use, and living in poverty. The main pieces of evidence used in Mr. Williams’ case were his rap lyrics. Many well-known artists and celebrities have condemned this practice. They believe that prosecutors should not use Mr. Williams’ lyrics as evidence against him because, in addition to his music being protected speech under the First Amendment, the lyrics have no relevance to the crimes for which he is being prosecuted. The prosecutor in Mr. Williams’ case counters that the lyrics are not protected speech because they are relevant to the charges he faces. Mr. Williams’ trial has only just commenced, and it is not yet clear whether his rap lyrics will be admitted in full or in limited form.

Another unfortunate example of rap lyrics used as evidence is that of former rapper McKinley “Mac” Phipps Jr., who was convicted of manslaughter in 2001 after prosecutors recited his rap lyrics in court. He served twenty-one years of a thirty-year sentence and was released in 2021. Mac was arrested when rap and hip-hop were first beginning to play a huge role in mainstream culture, at a time when few people paid attention to the prejudicial use of an artist’s music against them in court proceedings.

What does the future hold?

In 2022, California Governor Gavin Newsom signed the Decriminalizing Artistic Expression Act into law, which limits the use of rap lyrics as evidence in criminal trials. The Act does not completely exclude the use of lyrics against artists in court proceedings; rather, it provides that the lyrics must have substantive relevance to the crimes charged and must not be unduly prejudicial. The Act encompasses visual art, performance art, poetry, literature, film, and other media. If prosecutors in California wish to use rap lyrics as evidence, they must show that the lyrics were written around the time of the crime and that they bear some specific similarity to the crime itself. This new law will have an immense effect on the way California prosecutors pursue rap and hip-hop artists accused of criminal activity and on how rap lyrics or music videos may be used as an admission of guilt.

Additionally, this past year the New York legislature introduced a bill, similar to California’s law, that would limit the admissibility of a defendant’s artistic expression against that defendant in a criminal proceeding. Known as the Rap Music on Trial bill, it would not completely ban the use of rap lyrics in a criminal proceeding, but it would require a showing that the lyrics carry literal, rather than merely figurative or fictional, meaning. The bill has passed the New York State Senate but still awaits a vote in the New York State Assembly. The United States House of Representatives has also recently introduced legislation, the Restoring Artistic Protection (RAP) Act, which would limit the admissibility of lyrics as evidence in criminal cases. Co-sponsors such as Representative Jamaal Bowman of New York have said they support the bill because it would keep talented artists from being imprisoned for expressing their experiences, and because it would reassure artists who are afraid to express their creativity lest their art be used against them in a criminal case. The RAP Act currently has ten co-sponsors and has yet to be voted out of committee and brought to the floor. Hopefully, with these legislative changes, rappers and singers will no longer be charged with crimes simply because of the artistic expression in their lyrics and videos.

Major Tuddy’s Major Trademark Issue

By: Kelton McLeod

On January 1st, 2023, the Washington Commanders unveiled a new mascot, Major Tuddy, a tall humanoid hog wearing a military-inspired helmet and a perpetual grin. The unveiling was met with a healthy mix of derision and confusion, which is about as well as anything one would expect to come from the Dan Snyder-owned Commanders. While even casual football fans might understand that Major Tuddy is named after the slang term for a touchdown, some are confused about why the Washington, D.C.-based football team would want a hog as the new symbol of its organization, while others are weighing trademark litigation over it.

From the 1980s into the 1990s, Washington had the best offensive line in all of professional football. These men, including the likes of Joe Jacoby and Mark May, were known as The Hogs. The Hogs were, and remain, an important piece of Washington’s history, having helped the team win its only three Super Bowls. But despite bursting onto the scene behind a sign emblazoned with “Let’s Get Hog Wild,” Major Tuddy has not left many fans, players, or former players very excited.

Instead of being a triumph at the end of a lackluster season (the Commanders were eliminated from playoff contention the day Major Tuddy was revealed), Major Tuddy has proved to be yet another point of controversy for the Commanders organization. In fact, some of the original Hogs, including Jacoby and May, are so unenthused with the new mascot that they issued a statement prior to Major Tuddy’s announcement distancing themselves from Dan Snyder (the Commanders’ current majority owner) and referencing potential legal action related to the new hog. Members of the original Hogs (joined by John Riggins, Fred Dean, and Doc Walker) created O-Line Entertainment, LLC, and in July of 2022, O-Line Entertainment filed federal trademark applications for “Hogs” and “Original Hogs,” as the terms relate to professional football paraphernalia and merchandise. O-Line Entertainment sees Major Tuddy as a potential infringement of its marks, an outright attempt to capitalize on and commercialize the work of the Hogs of the ’80s and ’90s, and an attempt to confuse fans.

While the Commanders imply they have no intention of financially capitalizing on Hogs as a mark, O-Line Entertainment has a real shot at excluding the Commanders from even trying. Section 43(a) of the Lanham Act creates a statutory cause of action for trademark infringement, under which “any person who . . . uses in commerce any word, term, name, symbol, or device, . . . likely to cause confusion, or to cause mistake, or to deceive as to the affiliation, connection, or association of such person with another person . . . shall be liable in a civil action by any person who believes that he or she is or is likely to be damaged by such act.” Trademarks exist to help consumers identify the source of a good. So, if someone uses another’s mark, or something substantially similar, in a way that causes confusion as to the source or sponsorship of a good, that party is liable for trademark infringement. In this case, it is hard to dispute the source of Major Tuddy’s swine heritage, as even his official team profile references the offensive line of the ’80s and ’90s. O-Line Entertainment would need to prove only a likelihood of confusion between its products and those of the Commanders organization, not that actual confusion has occurred. Under current law, there is a real prospect of O-Line Entertainment doing so.

While O-Line Entertainment’s trademark registrations have yet to be granted, and the Commanders organization could attempt to invalidate them, doing so likely would not be worth the problems it would cause. The Commanders are already in the midst of a congressional investigation, and Dan Snyder’s tenure as owner might not withstand the additional bad press. While it is hard to expect Dan Snyder and the Commanders’ leadership team, a group known for trying and failing to hide behind the best parts of the franchise’s legacy, to handle themselves in a morally upstanding way, how they choose to handle the marketing and merchandising of their new mascot could mean another long and protracted trademark dispute in which they lack the moral high ground.

The Cellphone: Our Best Helper or an Illegal Recorder? 

By: Lauren Liu

We have all experienced that shocking moment of realizing that an advertisement or post appearing on our screen concerns the exact topic we discussed in a very private conversation. Although we never Googled or browsed the topic on the internet, somehow that idea of upgrading our laptop or buying that new pair of shoes slipped into our browser and started waving at us from across the screen. We are left in awe, and can even feel violated.

Such experiences have become so common that we forget how much our browsers and apps track us, and how much our cellphones may be listening in on our conversations. The revelations of Thomas le Bonniec, a former contractor for Apple, have only heightened these concerns. According to le Bonniec, Apple created a quagmire for itself involving many ethical and legal issues, including Siri’s eavesdropping. In many instances, iPhones record users’ private conversations without their awareness and without any activation of Siri, the assistant that listens for users’ vocal commands and assists with their needs. The problem stems from the fact that every smartphone, iPhone and Android alike, is a sophisticated tracking device with very sensitive microphones that can capture audio from the user, or even from anyone in the vicinity. Furthermore, with the bandwidth of 4G LTE, these recordings can be stored and uploaded to the seller’s database without the owner’s knowledge or consent. Le Bonniec noted Apple’s explanation that the recordings were gathered into its database to improve analytics and transcription. Even so, his account of Apple’s internal operations raised serious privacy concerns among customers and potential legal issues.

In response to such concerns, companies created long consent forms for customers to sign before using their products. The legal definition of consent is that a person with sufficient mental capacity and understanding of the situation voluntarily and willfully agrees to a proposition. Under that definition, a majority of customers arguably cannot validly consent, because most who sign these forms do not read or fully understand their contents. More specifically, with respect to Siri, customers often do not clearly understand what Siri listens to or how their iPhones record their conversations. Most ordinary iPhone users assume that Apple evaluates voice commands and questions only after they activate Siri for a specific command.

Federal law (18 U.S.C. § 2511) requires one-party consent, meaning a person can record a phone call or conversation so long as that person is a party to it. A person who is not a party to the conversation can record only if at least one party consents with full knowledge that the communication is being recorded. Most state laws follow the federal approach. Whether Apple or Siri should be legally considered a party to a conversation remains an open question, but as a matter of common sense, most consumers would likely think not. Furthermore, it remains unclear whether signing a consent form without a comprehensive understanding of its contents constitutes valid consent. Thus, even if a customer signs such a form, it remains possible that he or she has not consented to being recorded.

In addition to learning about the law, consumers should also ask questions regarding potentially illegal recordings by electronic devices. How much private information is obtained? What confidentiality agreements were in place, and what oversight was implemented? Are actual audio recordings retained, and if so, for how long? With so much ambiguity still remaining, these questions can at least begin the process of addressing consumers’ concerns and reducing potential legal disputes for sellers.

Should the Police Be Able To Arrest You For Your Face?

By: Kyle Kennedy

As technology has continued to evolve, law enforcement has begun to employ advances like facial recognition in its everyday work. This has included issuing arrest warrants based exclusively on facial recognition matches.

Randall Reid was on his way to Thanksgiving dinner at his mother’s house in Georgia when he was arrested and jailed for stealing $10,000 worth of Chanel and Louis Vuitton handbags in a New Orleans suburb. The only problem was that Reid had never even been to Louisiana. That did not stop him from spending nearly a week in jail, because a facial recognition tool had matched him to surveillance footage of the suspect in the case. Reid was arrested and held despite being about 40 pounds lighter than the suspect on the tape.

This is not the first case of facial recognition technology leading to a wrongful arrest. Nijeer Parks spent ten days in jail after an incorrect facial match and began to feel pressure to accept a plea deal because of his prior criminal record. Faced with the pressure of battling the court system, often under the weight of prior charges, “[d]efense attorneys and legal experts say some people wrongly accused by facial recognition agree to plea deals.” Robert Williams has had multiple strokes since his release from a 30-hour stint in jail after he was incorrectly matched to video of a suspect robbing a watch store. These are far from the only instances of inaccurate facial recognition leading to wrongful arrest, and they raise two important questions: can a facial recognition match be the basis of probable cause to substantiate a warrant? And if so, should it be?

Facial recognition is broadly used to accomplish two tasks: verification and identification. Verification, also called one-to-one matching, is used to confirm a person’s identity, as when logging into a smartphone or a banking app. Identification, or one-to-many matching, is when software compares an unknown face to a large database of known faces. Identification can be used on cooperative subjects who consent to having their faces scanned, or on uncooperative subjects whose images are captured without their knowledge, such as from surveillance footage. Accuracy is much lower for uncooperative subjects whose facial images were captured in the real world: one algorithm had a 0.1% error rate when matching high-quality mugshots, but that rate climbed to 9.3% when matching images captured “in the wild.” Accuracy also varies by vendor; one top algorithm achieved an identification accuracy of 87% at a sporting venue while another vendor’s software managed a dismal 40%. And even the most accurate algorithms tended to have “higher false positive rates in women, African-Americans, and particularly African-American women.”
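To illustrate, one-to-many identification is conceptually a nearest-neighbor search: software converts each face image into a numeric “embedding” vector and compares the unknown face against every face in the database. Here is a simplified sketch, assuming the embeddings have already been produced by some face-encoder model (the names and numbers are hypothetical):

```python
# Simplified sketch of one-to-many identification: compare an unknown
# face embedding against a gallery of known embeddings and return the
# closest match with its similarity score.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score in [-1, 1]; higher means more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical gallery of known faces: name -> embedding vector. Real
# systems hold millions of mugshot embeddings produced by a trained
# face-encoder network; these numbers are purely illustrative.
gallery = {
    "person_a": np.array([0.9, 0.1, 0.3]),
    "person_b": np.array([0.2, 0.8, 0.5]),
}

def identify(probe: np.ndarray) -> tuple:
    """One-to-many matching: return the closest gallery entry and its score."""
    return max(
        ((name, cosine_similarity(probe, emb)) for name, emb in gallery.items()),
        key=lambda pair: pair[1],
    )

probe = np.array([0.85, 0.15, 0.35])  # embedding of a surveillance still
print(identify(probe))                # ('person_a', ~0.996)
```

Note that `identify` always returns someone: the closest face in the gallery is reported as the best match even if the true suspect is not in the database at all, which is exactly how an innocent person like Randall Reid can surface as a hit.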

Probable cause differs for arrest warrants and search warrants. For an arrest warrant, probable cause is interpreted according to a flexible reasonableness standard based on a totality of the circumstances. For search warrants, there is probable cause when there is a fair probability there is evidence of a crime in the place to be searched. Based on the issues with accuracy and the frequency of inaccurate facial matches leading to wrongful arrests, a one-to-many match acquired through use of a facial-recognition algorithm cannot substantiate probable cause for a warrant without further evidence. This is especially true where the algorithm is used on images or video captured from uncooperative individuals ‘in the wild’, which would be the case when officers are trying to find unknown suspects with surveillance footage. A facial match with corroborating evidence linking a location to the individual matched might be able to substantiate a search warrant. On its face this might be less concerning due to the lower risk of wrongful arrests, but wrongfully or erroneously substantiated search warrants lead to many issues of their own.

The greatest danger posed by facial recognition inaccuracies is false positives, because they lead to false accusations against otherwise innocent individuals. In response, some researchers have proposed imposing confidence thresholds on facial recognition algorithms to reduce the rate of false positives. One study found that a set of algorithms failed to return a match 4.7% of the time when no threshold was imposed, but that the rate jumped to 35% when a 99% confidence threshold was imposed. This means that roughly 30% of the time the algorithms identified an individual, they did so at a confidence level below 99%. Many law enforcement departments that use facial recognition technology impose no confidence thresholds on potential matches. And while confidence thresholds reduce the risk of false accusations, they still leave the door open for problematic uses of the technology: they are a band-aid solution that is difficult to enforce externally and that allows the continued use of invasive surveillance technology.
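To see how a threshold changes that behavior, here is a continuation of the earlier hypothetical sketch. Candidate matches below the cutoff are discarded, trading more “no match” results for fewer false accusations; the scores and cutoff are illustrative, not drawn from any real system:

```python
# Sketch of a confidence threshold: suppress any candidate match whose
# similarity score falls below the cutoff. Illustrative values only.
def identify_with_threshold(scores: dict, threshold: float = 0.99):
    """Return the best candidate only if its score clears the cutoff."""
    best_name, best_score = max(scores.items(), key=lambda pair: pair[1])
    return best_name if best_score >= threshold else None  # None = "no match"

# Hypothetical similarity scores from a one-to-many search.
candidate_scores = {"person_a": 0.97, "person_b": 0.52}

print(identify_with_threshold(candidate_scores, threshold=0.0))   # 'person_a'
print(identify_with_threshold(candidate_scores, threshold=0.99))  # None
```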

Overall, one-to-many matches of uncooperative faces acquired through facial recognition algorithms are neither accurate nor reliable enough to be the sole basis of probable cause substantiating a warrant. These algorithms are susceptible to high error rates that vary unpredictably with many factors, including the race and gender of the target individual. Additionally, the use of facial matching as the sole basis for arrest warrants has led to many wrongful arrests with long-term consequences for arrestees, like those described above. The evidence of inaccuracy and the history of wrongful arrests make it clear that a facial recognition match should not be the legal basis for probable cause absent corroborating evidence. The remaining questions are to what degree a facial recognition match must be corroborated to substantiate a search warrant, whether police should be liable for harms stemming from inaccurate matches, and if so, what duty of care police have in determining whether a given match is accurate. Beyond these open questions, the need for regulation of facial recognition and other surveillance technologies used by law enforcement has never been more apparent.