Is AI Good in Moderation?

By: Chisup Kim

In 2016, Microsoft released Tay, an artificial-intelligence chatbot on Twitter that was designed to become smarter as users interacted with it. Unfortunately, the experiment did not last long: a group of Twitter users coordinated a barrage of inappropriate tweets at Tay that forced the chatbot to parrot racist and sexist content. Within hours of going online, Tay was tweeting racial slurs, support for Gamergate, and other deeply offensive positions. Last week, Microsoft returned to the AI space by launching a new AI-powered Bing search engine in partnership with OpenAI, the developers of ChatGPT. Unlike Tay, the Bing Search AI is designed as a powerful assistant that summarizes relevant articles or surfaces related products (e.g., recommending an umbrella for sale alongside a rain forecast). While much of the news coverage has focused on whether the Bing AI chatbot is sentient, the humanization of an AI-powered assistant raises new questions about the liability its recommendations could create. 

Content moderation is not an easy task technically. While search engines provide suggestions based on statistics, their engineers also run parallel algorithms to “detect adult or offensive content.” However, these rules may not catch searches whose harmful intent is only implicit. A search engine will likely limit or ban explicit searches for child pornography, but a user might instead type something like “children in swimsuits” to get around those parameters, simultaneously influencing the overall algorithm. While the influence may not be as direct or as extreme as it was with Tay on Twitter, machine learning algorithms incorporate user behavior into their future outputs, which can taint the search experience for the originally intended audience. In this example, results tainted by a handful of bad actors could confront a parent shopping for an actual swimsuit for their child with photos depicting inappropriate poses. Around five years ago, Bing was criticized for suggesting racist and provocative images of children, suggestions likely influenced by the searches of a few nefarious users. Content moderation is not an issue that lives solely with the algorithm or solely with its users; it is a complex relationship between the two that online platforms and their engineers must account for. 
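
To make that feedback dynamic concrete, here is a deliberately simplified Python sketch. It is not how Bing or any real search engine works; the “model” is just a counter of which results users click for a given query, but it shows how a small, coordinated group of users can change the top suggestion that every later user sees.

```python
# Toy sketch of a feedback-driven suggestion model (purely illustrative; no real
# engine works this way). Clicks are fed back into the model, so a handful of
# coordinated bad actors can shift what is suggested to everyone else.
from collections import Counter, defaultdict

click_counts: dict[str, Counter] = defaultdict(Counter)  # query -> counts of clicked results


def record_click(query: str, result_id: str) -> None:
    """Feed a user's click back into the suggestion model."""
    click_counts[query.lower()][result_id] += 1


def top_suggestion(query: str) -> str | None:
    """Return the most-clicked result for a query, as a naive 'recommendation'."""
    counts = click_counts.get(query.lower())
    return counts.most_common(1)[0][0] if counts else None


# A parent's benign shopping click...
record_click("children in swimsuits", "retailer_swimsuit_listing")

# ...is quickly outvoted by a small, coordinated group of bad actors.
for _ in range(5):
    record_click("children in swimsuits", "inappropriate_image")

print(top_suggestion("children in swimsuits"))  # the inappropriate result now ranks first
```

Real ranking systems rely on far richer signals and trained models, but the underlying dynamic, user behavior feeding back into what other users are shown, is the same one that caught up with Tay.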

Furthermore, humanizing a recommendation service that shapes how third-party content is delivered may expose the online platform to further liability. The University of Washington’s own Professor Eric Schnapper is involved in the Gonzalez v. Google case, which examines whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make algorithmically targeted recommendations of third-party content. Section 230 currently immunizes most online platforms that qualify as an “interactive computer service” from being treated as the “publisher or speaker” of third-party information or content. The Gonzalez plaintiffs are challenging Google on the grounds that YouTube’s algorithmic recommendation system helped recruit users into ISIS and ultimately contributed to the death of Nohemi Gonzalez in the 2015 terrorist attacks in Paris. During oral arguments, the Supreme Court Justices seemed concerned about “creating a world of lawsuits” by attaching liability to recommendation-based services. No matter the result of this lawsuit, the interactive nature of search-engine-based assistants creates a closer relationship between the user and the search engine. Scrutiny of how content is delivered has also appeared in other administrative and legislative contexts, such as the SEC’s 2021 inquiry into the gamification of stock trading and California’s restrictions on the design of websites intended for children. If Google’s LaMDA could pass the famous Turing Test and appear sentient (even if it technically is not), would the company bear more responsibility for the results of a seemingly sentient service, or would more responsibility shift to the user’s own responses? 

From my perspective, it depends on the role that search engines give their AI-powered assistants. As long as these assistants simply answer questions and provide pertinent, related recommendations without actively steering the conversation, their suggestions may still be protected as harmless recommendations. However, engineers need to remain vigilant about how user interaction in the broader environment can influence an AI and its underlying algorithm, as seen with Microsoft’s Twitter chatbot Tay and with some of Bing’s controversial suggestions. Queries sent with covert malicious intent should be closely monitored so that they do not influence the experience of the general user. AI can be an incredible tool, but online search platforms should be cognizant of the rising issues of how to properly moderate content and how to display it to their users. 
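
As a rough illustration of that last point, the hypothetical sketch below quarantines suspicious queries from the feedback pipeline: they may still be answered (or blocked outright), but they never become training signal that shapes suggestions for other users. The blocklist, the risk_score heuristic, and the threshold are all invented for illustration and stand in for real classifiers and policies.

```python
# Minimal sketch of a hypothetical moderation gate in front of a feedback pipeline.
# Explicit queries are blocked; implicitly risky queries are answered but excluded
# from the data that trains future suggestions; only benign traffic is learned from.

BLOCKED_TERMS = {"explicitly banned phrase"}  # placeholder for an explicit-content blocklist


def risk_score(query: str) -> float:
    """Stand-in for a learned classifier scoring implicit or contextual risk (0.0-1.0)."""
    q = query.lower()
    return 0.9 if ("children" in q and "swimsuit" in q) else 0.1


def handle_query(query: str, feedback_log: list[str]) -> str:
    q = query.lower()
    if any(term in q for term in BLOCKED_TERMS):
        return "blocked"                      # explicit queries are refused outright
    if risk_score(query) > 0.5:
        return "served_without_learning"      # answered, but kept out of the feedback loop
    feedback_log.append(query)                # only benign traffic shapes future suggestions
    return "served"


log: list[str] = []
print(handle_query("children in swimsuits photos", log))  # served_without_learning
print(handle_query("toddler swimsuit size 4", log))       # served
print(log)                                                # only the benign query is learned from
```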
