Talking to Machines – The Legal Implications of ChatGPT

By: Stephanie Ngo

Chat Generative Pre-trained Transformer, known as ChatGPT, was launched on November 30, 2022. The program has since taken the world by storm with its articulate, detailed responses to a multitude of questions. A quick Google search of “chat gpt” returns approximately 171 million results. Within the first five days of launch, more than a million people had signed up to test the chatbot, according to OpenAI’s president, Greg Brockman. But new technology brings legal issues that require legal solutions. As ChatGPT continues to grow in popularity, it is now more important than ever to discuss how such a smart system could affect the legal field.

What is Artificial Intelligence? 

Artificial intelligence (AI), per John McCarthy, a world-renowned computer scientist at Stanford University, is “the science and engineering of making intelligent machines, especially intelligent computer programs, that can be used to understand human intelligence.” The first successful AI program was written in 1951 to play a game of checkers, but the idea of “robots” taking on human-like characteristics has been traced back even earlier. It has recently been predicted that AI, although already prominent, will permeate individuals’ daily lives and seep into various business sectors by 2025. Today, the buzz around AI stems from the fast-growing influx of emerging technologies and from how AI can be integrated with current technology to create products like self-driving cars, electronic medical records, and personal assistants. Many are aware of what “Siri” is, and consumers’ expectation that Siri will soon become all-knowing is what continues to push the field of AI to develop at such fast speeds.

What is ChatGPT? 

ChatGPT is a chatbot built on a large language model trained by OpenAI, an AI research and deployment company founded in 2015 and dedicated to ensuring that artificial intelligence benefits all of humanity. ChatGPT was trained on data from books and other written materials to generate natural, conversational responses, as if a human had written the reply. Chatbots are not a recent invention. In 2019, Salesforce reported that twenty-three percent of service organizations used AI chatbots. In 2021, Salesforce reported that the figure was closer to thirty-eight percent, a sixty-seven percent increase since its 2018 report. The effectiveness of those earlier chatbots, however, left many consumers wishing for a faster, smarter way of getting accurate answers.

In comes ChatGPT, which New York Times technology columnist Kevin Roose has hailed as the “best artificial intelligence chatbot ever released to the general public.” ChatGPT’s ability to answer extremely convoluted questions, explain scientific concepts, and even debug large amounts of code shows just how far chatbots have advanced since their creation. Before ChatGPT, answers from chatbots were taken with a grain of salt because of their inaccurate, roundabout responses, which were likely generated from a template. ChatGPT, while still imperfect and slightly outdated (its knowledge is restricted to information from before 2021), is being used in ways that some argue could affect many different occupations and render certain inventions obsolete.

The Legal Issues with ChatGPT

ChatGPT has widespread applicability and has been touted as a rival to Google. Since the beta launch in November, there have been countless stories from people in various occupations about ChatGPT’s different use cases. Teachers can use ChatGPT to draft quiz questions. Job seekers can use it to draft and revise cover letters and resumes. Doctors have used the chatbot to diagnose a patient, write letters to insurance companies, and even take certain medical exams.

On the other hand, ChatGPT has its downsides. One of the main arguments against ChatGPT is that the chatbot’s responses are so natural that students may use it to shirk their homework or plagiarize. To combat academic dishonesty and misinformation, OpenAI has begun work on accompanying software, training a classifier to distinguish between AI-written and human-written text. OpenAI has noted that while the classifier is not wholly reliable, it should become more reliable the longer it is trained.

Another argument that has arisen involves intellectual property. Is the material that ChatGPT produces legal to use? In a similar situation, a different artificial intelligence program, Stable Diffusion, was trained to replicate an artist’s style of illustration and create new artwork based on a user’s prompt. The artist was concerned that the program’s creations would be associated with her name because the training used her artwork.

Because the technology is so new, case law addressing this specific issue is limited. In January 2023, Getty Images, a popular stock photo company, commenced legal proceedings against Stability AI, the creator of Stable Diffusion, in the High Court of Justice in London, claiming Stability AI had infringed intellectual property rights in content owned or represented by Getty Images, without a license and to the detriment of the content creators. A group of artists has also filed a class-action lawsuit against companies behind AI art tools, including Stability AI, alleging violations of the rights of millions of artists. As for ChatGPT itself, when asked about potential legal issues, the chatbot stated that “there should not be any legal issues” as long as it is used according to the terms and conditions set by the company and with any necessary permissions and licenses.

Last, but certainly not least, ChatGPT cannot assess whether it is compliant with state privacy laws or with the European Union’s General Data Protection Regulation (GDPR), which many regard as the gold standard of privacy regulation. ChatGPT’s lack of demonstrated compliance with the GDPR, or with any privacy law, could have serious consequences if a user feeds the chatbot sensitive information. OpenAI’s privacy policy states that the company may collect any information a user communicates through the feature, so anyone using ChatGPT should pause and consider the impact of sharing information with the chatbot before proceeding. As ChatGPT improves and advances, the legal implications are likely only to grow in turn.

Deepfakes – A Disastrous Merger of AI and Porn


By David O’Hair

First appearing on Reddit, a new trend called “deepfakes” has captured the public’s attention with one of the internet’s oldest promises – nude celebrity photos. The appearance of intimate celebrity images online is nothing new in and of itself. A 2014 hack exposed hundreds of nude celebrity images, while Gawker notoriously posted Hulk Hogan’s sex tape.

However, deepfakes present a novel issue: the images, and often videos, of the celebrities are fake – but the underlying porn is real. Deepfakes use artificial intelligence mixed with facial-mapping software to essentially copy and paste someone’s face into preexisting porn content. The software is so sophisticated that the content it creates can be virtually indistinguishable from an authentic porn video featuring a specific celebrity. Celebrities are often the victims of deepfakes because deepfakes require massive amounts of “raw footage” to import into the pornographic video. Chances are a celebrity has more time collected on video than the average person, but non-public figures can be the victims of deepfakes too. Continue reading

The Key to the YouTube Advertisement Crisis: an Improved AI

By Derk Westermeyer

A little over four years ago, comedian Ethan Klein uploaded the first video to his YouTube channel, h3h3productions. That video was about how people use toilet paper. While this type of comedy may not be for everyone, Ethan’s channel has largely been a success. Since that first video, Ethan has uploaded hundreds more videos to his channel, a large portion of which generate millions of views each. Continue reading

Man or Machine? EU Considering “Rights for Robots”

By Grady Hepworth

Isaac Asimov’s 1942 short story “Runaround” is credited with creating the famous “Three Laws of Robotics.” Asimov’s Laws, although fictional (and most recently featured in the 2004 motion picture I, Robot), require robots to i) not hurt humans, ii) obey humans, and iii) protect themselves only when doing so wouldn’t conflict with the first two rules. However, the European Union (“EU”) made headlines this month when it took steps toward making Asimov’s Laws a reality.
Continue reading

What Can a Foul-Mouthed Twitter Troll and a Board Game Playing Robot Tell Us About Artificial Intelligence’s Ramifications for the Legal System?

By Jeff Bess

Rapid technological development in the digital age has disrupted countless industries and fundamentally reshaped many aspects of modern life. Many of these technologies also present legal challenges, ranging from constitutional privacy concerns stemming from government surveillance to ongoing employment law disputes over the use of independent contractors by companies like Uber. A perhaps even greater disruptor – to both the law and society in general – is found in the emerging field of Artificial Intelligence. There have been numerous scholarly inquiries into the theoretical challenges of creating a moral and legal framework to govern Artificial Intelligence technologies, but recent accomplishments in the field can provide clues as to how the technology’s direction will inform the legal rules it requires. Continue reading