Is AI Good in Moderation?

By: Chisup Kim

In 2016, Microsoft released Tay, an artificial-intelligence chatbot on Twitter that became smarter as users interacted with it. Unfortunately, the experiment did not last long: some Twitter users coordinated a barrage of inappropriate tweets at Tay to force the chatbot to parrot racist and sexist content. Within hours of going online, Tay tweeted racial slurs, support for Gamergate, and other incredibly offensive positions. Last week, Microsoft returned to the AI space by launching a new AI-powered Bing search engine in partnership with OpenAI, the developers of ChatGPT. Unlike Tay, the Bing Search AI is designed as a high-powered assistant that summarizes relevant articles or provides related products (e.g., recommending an umbrella for sale alongside a rain forecast). While many news outlets have focused on whether the Bing AI chatbot is sentient, the humanization of an AI-powered assistant raises new questions about the liability that the AI’s recommendations could create.

Content moderation is not a technically easy task. While search engines provide suggestions based on statistics, their engineers also run parallel algorithms to “detect adult or offensive content.” However, these rules may not cover more implicitly nefarious searches. For example, a search engine would likely limit or ban explicit searches for child pornography. However, a user may type, for example, “children in swimsuits” to get around those parameters, while simultaneously influencing the overall algorithm. While the influence may not be as direct or as extensive as it was with Tay on Twitter, machine learning algorithms incorporate user behavior into their future outputs, tainting the search experience for the originally intended audience. In this example, search results tainted by the perverted could affect the results for a parent looking to buy an actual swimsuit for their child, surfacing photos depicting inappropriate poses. Around five years ago, Bing was criticized for suggesting racist and provocative images of children that were likely influenced by the searches of a few nefarious users. Content moderation is not an issue that lives just with the algorithm or just with its users; it is a complex relationship between both parties that online platforms and their engineers must consider.
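To make the engineering problem concrete, below is a minimal, hypothetical sketch of query-side screening in Python. Everything in it (the term lists, the function name, the three-way verdict) is an illustrative assumption rather than any search engine’s actual pipeline; real systems rely on large learned classifiers rather than hand-written lists, but the two-tier idea of blocking explicit queries while quarantining implicit ones from feedback signals is the same.

```python
# Hypothetical sketch of query-side moderation. An explicit blocklist stops
# queries outright, while a second pass flags implicitly risky queries for
# human review and keeps them out of the feedback signals that retrain
# ranking models. All terms below are illustrative placeholders.

EXPLICIT_BLOCKLIST = {"banned term a", "banned term b"}
IMPLICIT_RISK_PATTERNS = {
    # anchor token -> co-occurring tokens that together warrant review
    "children": {"swimsuits", "poses"},
}

def screen_query(query: str) -> str:
    """Return 'block', 'review', or 'allow' for a raw search query."""
    tokens = set(query.lower().split())
    if tokens & EXPLICIT_BLOCKLIST:
        return "block"  # never served, never logged as a learning signal
    for anchor, risky in IMPLICIT_RISK_PATTERNS.items():
        if anchor in tokens and tokens & risky:
            return "review"  # served cautiously, excluded from model feedback
    return "allow"

print(screen_query("children in swimsuits"))  # -> review
print(screen_query("buy an umbrella"))        # -> allow
```

The key design choice the sketch illustrates is that a “review” query can still be answered while being excluded from the data that shapes future rankings, which is one way to keep a few bad actors from tainting results for everyone else.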

Furthermore, the humanization of a recommendation service that alters how third-party content is provided may lead to further liability for the online platform. The University of Washington’s own Professor Eric Schnapper is involved in Gonzalez v. Google, which examines whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make algorithmically targeted recommendations of third-party content. Section 230 currently immunizes most online platforms that qualify as an “interactive computer service” from being treated as a “publisher or speaker” of third-party information or content. The Gonzalez plaintiffs are challenging Google on the grounds that YouTube’s algorithmic recommendation system led some users to be recruited into ISIS, and ultimately contributed to the death of Nohemi Gonzalez in the 2015 terrorist attacks in Paris. After the first day of arguments, the Supreme Court Justices seemed concerned about “creating a world of lawsuits” by attaching liability to recommendation-based services. Whatever the result of the lawsuit, the interactive nature of search-engine-based assistants creates more of a relationship between the user and the search engine. Scrutiny of how content is provided has also appeared in other administrative and legislative contexts, such as the SEC’s 2021 research into the gamification of stock trading and California’s restrictions on the types of content designs allowed on websites intended for children. If Google’s AI LaMDA could pass the famous Turing Test and appear sentient (even if it technically is not), would the company bear more responsibility for the results of a seemingly sentient service, or would more responsibility fall on the user’s responses?

From my perspective, it depends on the role that search engines give their AI-powered assistants. As long as these assistants are just answering questions and providing pertinent, related recommendations without taking demonstrable steps to guide the conversation, search engines’ suggestions may still be protected as harmless recommendations. However, engineers need to remain vigilant about how user interaction in the macroenvironment may influence an AI and its underlying algorithm, as seen with Microsoft’s Twitter chatbot Tay or with some of Bing’s controversial suggestions. Queries sent with covert nefarious intent should be closely monitored so as not to influence the experience of the general user. AI can be an incredible tool, but online search platforms should be cognizant of the rising issues of how to properly moderate content and how to display it to their users.

“Hey Chatbot, Who Owns Your Words?”: A Look into ChatGPT and Issues of Authorship

By: Zachary Finn

Unless you have been living under a rock, you have watched our world become captivated since last December by the now-famous ChatGPT. Chat Generative Pre-trained Transformer (“ChatGPT”) is an AI-powered chatbot that uses adaptive, human-like responses to answer questions, converse, write stories, and engage with input transmitted by its user. Chatbots are becoming increasingly popular in many industries and can be found on the web, social media platforms, messaging apps, and other digital services. The world of artificial intelligence sits on the precipice of innovation and exponential technological discovery. Because of this, the law has lagged behind in interpreting critical issues that have emerged from chatbots like ChatGPT. One issue that has arisen at the intersection of AI chatbot technology and law is that of copyright and intellectual property in a chatbot’s generated work. The only thing that may be predictable about the copyright of an AI’s work is that (sadly) ChatGPT likely does not own its labor.

To understand how ChatGPT figures into the realm of copyright and intellectual property, it is important to first understand the foundations and algorithms that give chatbot machines life. A chatbot is an artificial intelligence program designed to simulate conversation with human users. OpenAI developed ChatGPT to converse with users, typically through text- or voice-based interactions. Chatbots are used in a variety of ways, such as user services, conversation, information gathering, and language learning. ChatGPT is programmed to understand user contributions and respond with appropriate and relevant information. These inputs are sent by human users, and a chatbot’s response is often based on machine learning algorithms or on a predefined script. Machine learning algorithms are the methods by which an AI system functions, generally predicting output values from given input data. In lay terms, the system learns from previous human inputs to generate more accurate responses.
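As a rough illustration of the simpler end of that spectrum, consider a purely scripted chatbot. The toy Python sketch below is an assumption for illustration only; ChatGPT itself generates responses from a learned language model rather than a lookup table, but the loop of user input in, generated text out has the same shape.

```python
# A toy scripted chatbot: recognized inputs map to canned responses.
# ChatGPT instead samples responses from a learned language model, but
# the basic loop (user input in, response text out) is the same.

SCRIPT = {
    "hello": "Hi there! How can I help you?",
    "who are you": "I am a very small scripted chatbot.",
}

def respond(user_input: str) -> str:
    # Normalize the input, then fall back when no scripted answer exists.
    key = user_input.lower().strip(" ?!.")
    return SCRIPT.get(key, "Sorry, I don't have a scripted answer for that.")

print(respond("Hello!"))       # -> "Hi there! How can I help you?"
print(respond("What's new?"))  # -> fallback response
```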

The ChatGPT process goes as follows (a minimal code sketch appears after the list):

1. A human individual inputs data, such as a question or statement: “What were George Washington’s teeth made of?”

2. The chatbot reads the data and uses machine learning algorithms and its powerful processing to generate a response.

3. ChatGPT’s response is relayed back to the user in a discussion-like manner: “Contrary to popular belief, Washington’s dentures were not made of wood, but rather a combination of materials that were common for dentures at the time, including human and animal teeth, ivory, and metal springs. Some of Washington’s dentures also reportedly included teeth from his own slaves” (This response was generated by my personal inquiry with ChatGPT).
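For readers who want to see those three steps in code, here is a minimal sketch using OpenAI’s public Python client, in the API style current when this piece was written. The model name and placeholder key are assumptions; this shows only the public request-and-response interface, not OpenAI’s internal process.

```python
# Minimal sketch of the three steps above using OpenAI's Python client.
# The model name and API key below are placeholder assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# Step 1: the human user inputs data, such as a question.
question = "What were George Washington's teeth made of?"

# Step 2: the model generates a response from that input.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question}],
)

# Step 3: the response is relayed back in a discussion-like manner.
print(response["choices"][0]["message"]["content"])
```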

So, who ultimately owns content produced by ChatGPT and other AI platforms? Is it the human user? OpenAI or the system developers? Or, does artificial intelligence have its own property rights?

Copyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression. This is codified in the Copyright Act of 1976, which provides the framework for copyright law. As to the element of authorship, anyone who creates an original fixed work, like taking a photograph, writing a blog, or even creating software, becomes the author and owner of that work. Corporations and people other than a work’s creator can also be owners, through co-ownership or when a work is made for hire (which allows works created by an employee within the scope of employment to be owned by the employer). Ownership can also be transferred by contract.

In a recent decision, the Ninth Circuit held that for a work to be protected by copyright, it must be the product of creative authorship by a human author. In Naruto v. Slater, where a monkey ran off with an individual’s camera and took a plethora of selfies, the court concluded that the monkey had no rights in the selfies because copyright does not extend to animals or other nonhumans. Section 313.2 of the Compendium of U.S. Copyright Office Practices likewise states that the U.S. Copyright Office will not register works produced by nature, animals, the divine, or the supernatural. In the case of AI, a court would likely apply this rule, along with any precedent dealing with similar fact patterns involving computer-generated outputs.

Absent human authorship, a work is not entitled to copyright protection. Therefore, AI-created works, like the labor manufactured by ChatGPT, will plausibly be considered works in the public domain upon creation. If not, they will likely be seen as derivative works of the information on which the AI based its creation. A derivative work is “a work based on or derived from one or more already existing works.” This fashions a new issue as to whether the materials used by an AI are derived from algorithms created by companies like OpenAI, or from users who influence a bot’s generated response, as when someone investigates George Washington’s teeth. Luckily for OpenAI, the company acknowledges via its terms and agreements that it has ownership over content produced by ChatGPT.

However, without a contract allocating authorship rights, the law has yet to address intellectual property rights in works produced by chatbots. One wonders when an issue like this will present itself to a court for codification into law, and whether, when that time comes, AI chatbots will have the conversational skills and intellect to argue for ownership of their words.