Is AI Good in Moderation?

By: Chisup Kim

In 2016, Microsoft released Tay, an artificial-intelligence chatbot on Twitter designed to become smarter as users interacted with it. Unfortunately, the experiment did not last long: some Twitter users coordinated a barrage of inappropriate tweets at Tay, and within hours of going online the chatbot was parroting racial slurs, support for Gamergate, and other incredibly offensive positions. Last week, Microsoft returned to the AI space by launching a new AI-powered Bing search engine in partnership with OpenAI, the developers of ChatGPT. Unlike Tay, the Bing Search AI is designed as a highly-powered assistant that summarizes relevant articles or provides related products (e.g., recommending an umbrella for sale alongside a rain forecast). While many news outlets and platforms are focused on reporting whether the Bing AI chatbot is sentient, the humanization of an AI-powered assistant creates new questions about the liability that could arise from the AI’s recommendations. 

Content moderation is not a technically easy task. While search engines provide suggestions based on statistics, search engine engineers also run parallel algorithms to “detect adult or offensive content.” However, these rules may not cover more nefariously implicit searches. For example, a search engine would likely limit or ban explicit searches for child pornography, but a user might instead type “children in swimsuits” to get around those parameters, simultaneously influencing the overall algorithm. While the influence may not be as direct or as extensive as it was with Tay on Twitter, machine learning algorithms incorporate user behavior into their future outputs, tainting the search experience for the originally intended audience. In this example, search results tainted by a few perverted users could affect the results for a parent looking to buy an actual swimsuit for their child, surfacing photos depicting inappropriate poses. Around five years ago, Bing was criticized for suggesting racist and provocative images of children, suggestions likely influenced by the searches of a few nefarious users. Content moderation is not an issue that lives just with the algorithm or just with its users, but rather a complex relationship between both parties that online platforms and their engineers must consider. 

Furthermore, the humanization of a recommendation service that alters how third-party content is provided may lead to further liability for the online platform. The University of Washington’s own Professor Eric Schnapper is involved in the Gonzalez v. Google case, which examines whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make algorithmically targeted recommendations of a third-party content provider. Section 230 currently immunizes most online platforms that are considered an “interactive computer service” from being treated as a “publisher or speaker” of third-party information or content. The Gonzalez plaintiff is challenging Google on the grounds that YouTube’s algorithmic recommendation system led some users to be recruited into ISIS, and ultimately to the death of Nohemi Gonzalez in the 2015 terrorist attacks in Paris. After the first day of arguments, the Supreme Court Justices seemed concerned about “creating a world of lawsuits” by attaching liability to recommendation-based services. No matter the result of this lawsuit, the interactive nature of search-engine-based assistants creates more of a relationship between the user and the search engine. Scrutiny of how content is provided has appeared in other administrative and legislative contexts, such as the SEC’s 2021 research into the gamification of stock trading and California’s restrictions on the types of content designs on websites intended for children. If Google’s AI LaMDA could pass the famous Turing Test and appear sentient (even if it technically is not), would the tech company bear more responsibility for the results of a seemingly sentient service, or would more responsibility shift to the user’s responses? 

From my perspective, it depends on the role that search engines give their AI-powered assistants. As long as these assistants merely answer questions and provide pertinent, related recommendations without taking demonstrable steps to guide the conversation, search engines’ suggestions may still be protected as harmless recommendations. However, engineers need to remain vigilant about how user interaction in the macroenvironment may influence AI and its underlying algorithm, as seen with Microsoft’s Twitter chatbot Tay or with some of Bing’s controversial suggestions. Queries sent with covert nefariousness should be closely monitored so as not to influence the experience of the general user. AI can be an incredible tool, but online search platforms should be cognizant of the rising issues of how to properly moderate content and how to display it to users. 

Litigation Funding in IP: Caveat Emptor

By: Nicholas Lipperd

Gambling seems to be an American tradition, from prop bets on the Super Bowl to riding stocks through bear and bull markets. The highest-stakes gambling is done by investment firms, some of whom are finding profitable bets in civil court cases. The process of Third-Party Litigation Funding (“TPLF”) is simple enough on its face: a funder pays a non-recourse sum either to the client directly or to the law firm representing the client. In exchange, subject to the agreed-upon terms, the funder receives a portion of any damages awarded. Thus, TPLF is no more than a third party placing a bet on the client’s case, somewhat similar to the choice a law firm makes when taking a case on contingency. With $2.8 billion invested in the practice in 2021, TPLF seems to be a betting scheme that is paying off.

TPLF is expanding from business litigation into patent litigation. Since the creation of the Federal Circuit, damages in patent infringement cases have skyrocketed, drawing increased attention to patent cases, and TPLF is no exception. The emergence of third-party funding in patent litigation could allow individuals who previously could not have afforded it to assert their patent rights. Unlike agreements made directly with law firms, third-party funding is not controlled by the American Bar Association’s (“ABA”) Rules of Professional Conduct for lawyers. While this creates some concern in any TPLF case, the lack of protection in patent cases is unique: the funder can obtain the rights to the patent(s) from their clients. 

As previously mentioned, a barrier many patent owners face when attempting to assert their rights is the cost of litigation. While patent litigation cases rarely proceed to a bench trial, these cases typically take three to five years to complete. Intrinsically linked to this timeline is the price tag. Infringement cases with over $25 million in damages at risk may run a median of $4 million in litigation costs, and for cases with less than $1 million at risk, the median litigation cost sits at just under $1 million. A simple look at the risk-versus-reward tradeoff paints a discouraging picture for the plaintiff. Regardless of the expected damages, the cost of litigation is an undeniably large factor, and one that leads to many cases being settled within a year rather than tried on their merits. 

When the client cannot afford to pay the price of litigation yet intends to assert the patent rights, TPLF creates an opportunity to pursue the case. Plaintiff-side funding seems like a simple win-win for the client. Yet as with anything, the devil is in the details. A funder is naturally motivated to see a return on its investment, so a client looking at the deal must look past the surface. Not all funding is arranged on a non-recourse model, leaving clients no better off if the case becomes a loser than if they had chosen a billable-hour payment structure with the firm. TPLF often does not cover awards of attorney’s fees, so clients may be on the hook for more than they realize. Litigation funding arrangements often come with “off-ramps” for the investor should the case take an unexpected turn or the funder stop believing in its merits. This means the funder may be able to stop funding the client at certain stages of litigation, leaving the client or the firm without funding for the remaining stages. 

TPLF has often been described as the “wild west” of funding cases. In part, this is due to the lack of regulations surrounding the practice. It is also due to the fact that third-party funders are not constrained by the ABA’s Rules of Professional Conduct like attorneys are. 

Attorneys may not abdicate their Rule 1.1 duty of competency, yet TPLF creates tension with this duty. A third-party funder has made an investment in a case, and like any diligent investor, the funder likely wants to track the case closely to ensure it continues to align with the funder’s financial interest. Should the funder attempt to exert control over the case to protect this interest, it is up to the lawyer to resist. This may leave the client in trouble: should the disagreement grow large enough, the client could see their funding removed as the funder exercises a contractual “off-ramp.”

Perhaps the ABA rule most implicated in TPLF structures is Rule 1.6, the Duty of Confidentiality. This rule, along with the related Federal Rule of Evidence 502 regarding attorney-client privilege, creates friction with TPLF arrangements. Funders want as much information as possible before and during the funding of a case in order to gauge the strength of their current and future investments. While all disclosures must be clearly sanctioned by the client, when do these disclosures waive attorney-client privilege? The majority of lower courts hold that communications about the case with a third-party funder fall under work-product protection, a question the Supreme Court has not answered and one the Federal Circuit danced around. (See In re Nimitz Techs. LLC (Fed. Cir. 2022), denying a writ of mandamus that sought to prevent the District Court from reviewing litigation funding materials in camera.)

While ABA Rule 1.5 prevents firms from charging unreasonable fees, including contingency rates, it does not prevent a third-party funder from doing so. A funder can bargain with a client for whatever portion of the damages award it likes, and the funder has much more bargaining power than the client. This bargaining power may be used for more than simply leveraging a larger percentage contingency fee than a firm could charge.

While damages in patent cases are high enough to attract TPLF, it is more than money that may draw third-party funders to patent litigation. Often, the patent itself is just as valuable as the damages award. Rule 1.8(i) prevents attorneys from acquiring a proprietary interest in the cause of action or subject matter of litigation. Again, this does not apply to third-party funders, who are free to include any and all patent rights in the contract to fund a client’s case. 

Patent owners may face a catch-22-style choice in this scenario. If they cannot afford to stake the cost of litigation and no firm sees value in taking the case on contingency, owners may turn to TPLF. Yet if funders see more value in the patents than in the potential litigation damages, patent owners must make a hard choice. Obtaining funding gives a patent owner a one-time shot at a large damages award, but, win or lose, the owner will no longer own the patent and will lose any potential revenue stream it would produce. Yet if owners cannot assert their patent rights in court, what value do their patents hold? Patent owners have promoted the progress of science and useful arts in the United States, and in exchange they receive limited monopolies in the form of their patents. These monopolies are not so easily maintained and defended, though. Patent owners must carefully consider their options when looking to assert their rights. Patent litigation is expensive and lengthy, and while having TPLF cover all litigation costs may seem like a sterling option, owners must dig deeper to fully understand the trade-offs. Could they lose funding support halfway through litigation? Would funding the case be worth giving up their patent rights? TPLF is still newly emerging in patent litigation, but for potential clients the message is already clear: in deciding to take on a third-party funder, let the buyer beware.

“Hey Chatbot, Who Owns your Words?”: A look into ChatGPT and Issues of Authorship

By: Zachary Finn

Unless you have been living under a rock, you have heard of the now-famous ChatGPT, which has swept the world since last December. ChatGPT (Chat Generative Pre-trained Transformer) is an AI-powered chatbot that uses adaptive, human-like responses to answer questions, converse, write stories, and engage with input transmitted by its user. Chatbots are becoming increasingly popular in many industries and can be found on the web, social media platforms, messaging apps, and other digital services. The world of artificial intelligence sits on the precipice of innovation and exponential technological discovery. Because of this, the law has lagged behind in interpreting critical issues that have emerged from chatbots like ChatGPT. One issue arising at the intersection of AI-chatbot technology and law is that of copyright and intellectual property in a chatbot’s generated work. The only thing that may be predictable about the copyright of an AI’s work is that (sadly) ChatGPT likely does not own its labor. 

To understand how ChatGPT figures into the realm of copyright and intellectual property, it is first important to understand the foundations and algorithms that give chatbot machines life. A chatbot is an artificial intelligence program designed to simulate conversation with human users. OpenAI developed ChatGPT to converse with users, typically through text- or voice-based interactions. Chatbots are used in a variety of ways, such as customer service, conversation, information gathering, and language learning. ChatGPT is programmed to understand user contributions and respond with appropriate and relevant information. These inputs are sent by human users, and a chatbot’s response is often based on machine learning algorithms or on a predefined script. Machine learning algorithms are the methods by which an AI system functions, generally predicting output values from given input data. In lay terms, the system learns from previous human inputs to generate more accurate responses. 

The ChatGPT process goes as follows:

1. A human individual inputs data, such as a question or statement: “What were George Washington’s teeth made of?”

2. The Chatbot reads the data and uses machine learning, algorithms, and its powerful processor to generate a response.

3. ChatGPT’s response is relayed back to the user in a discussion-like manner: “Contrary to popular belief, Washington’s dentures were not made of wood, but rather a combination of materials that were common for dentures at the time, including human and animal teeth, ivory, and metal springs. Some of Washington’s dentures also reportedly included teeth from his own slaves.” (This response was generated by my personal inquiry with ChatGPT.)
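The three steps above can be sketched as a simple request-response loop. This is a hedged, minimal illustration of the general chatbot pattern, not OpenAI’s actual implementation; the canned lookup table below stands in for the machine learning model a real system would use.

```python
# Minimal sketch of a chatbot request-response loop.
# Illustration only: a real system replaces the lookup table
# with a large language model that predicts likely next tokens.

def generate_response(user_input: str) -> str:
    """Stand-in for the model: map a known input to a canned response."""
    canned = {
        "What were George Washington's teeth made of?":
            "A combination of human and animal teeth, ivory, and metal springs.",
    }
    return canned.get(user_input, "I'm not sure about that.")

def chat(user_input: str) -> str:
    # Step 1: receive the user's input.
    # Step 2: generate a response from learned patterns (simulated here).
    # Step 3: relay the response back in a discussion-like manner.
    return generate_response(user_input)

print(chat("What were George Washington's teeth made of?"))
```

The point of the sketch is structural: the user supplies the input, and the system's "learning" lives entirely inside `generate_response`, which is why authorship of the output is so hard to pin on any one party.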

So, who ultimately owns content produced by ChatGPT and other AI platforms? Is it the human user? OpenAI or the system developers? Or, does artificial intelligence have its own property rights?

Copyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression. This is codified in the Copyright Act of 1976, which provides the framework for copyright law. As to the element of authorship, anyone who creates an original fixed work, like taking a photograph, writing a blog, or even creating software, becomes the author and owner of that work. Corporations and people other than a work’s creator can also be owners, through co-ownership or when a work is made for hire (the doctrine under which works created by an employee within the scope of employment are owned by the employer). Ownership can also be transferred by contract.

In a recent decision, the Ninth Circuit held that for a work to be protected by copyright, it must be the product of creative authorship by a human author. In that case, Naruto v. Slater, where a monkey ran off with an individual’s camera and took a plethora of selfies, the court concluded that the monkey had no rights in the selfies because copyright does not extend to animals or nonhumans. Likewise, §313.2 of the Compendium of U.S. Copyright Office Practices states that the Copyright Office will not register works produced by nature, animals, the divine, the supernatural, and the like. In the case of AI, a court would likely apply this rule, as well as any precedent dealing with similar fact patterns involving computer-generated outputs.

Absent human authorship, a work is not entitled to copyright protection. Therefore, AI-created work, like the labor manufactured by ChatGPT, will plausibly enter the public domain upon creation. If not, such works will likely be seen as derivative works of the information on which the AI based its creation. A derivative work is “a work based on or derived from one or more already existing works.” This raises a new issue: whether the materials used by an AI are derived from algorithms created by companies like OpenAI, or from the users who influence a bot’s generated response, like someone investigating George Washington’s teeth. Luckily for OpenAI, the company has addressed the question by contract: its terms and agreements expressly allocate ownership of content produced by ChatGPT.

However, absent such a contract allocating authorship rights, the law has yet to address intellectual property rights in works produced by chatbots. One wonders when an issue like this will present itself to a court for codification into law, and whether, when that time comes, AI chatbots will have the conversational skills and intellect to argue for ownership of their own words.

Major Tuddy’s Major Trademark Issue

By: Kelton McLeod

On January 1st, 2023, the Washington Commanders unveiled a new mascot: Major Tuddy, a tall humanoid hog wearing a military-inspired helmet and a perpetual grin. The unveiling was met with a healthy mix of derision and confusion, about as well as anything one would expect to come from the Dan Snyder-owned Commanders. While even casual football fans might understand that Major Tuddy is named after the slang term for a touchdown, some are confused as to why the Washington, D.C.-based football team would want a hog as the new symbol of its organization, while others are weighing a trademark lawsuit because of it. 

From the 1980s into the 1990s, Washington had the best offensive line in all of professional football. These men, including the likes of Joe Jacoby and Mark May, were known as The Hogs. The Hogs were, and remain, an important piece of Washington’s history, helping the team win its only three Super Bowls. But despite bursting onto the scene with a sign emblazoned “Let’s Get Hog Wild,” Major Tuddy has not left many fans, players, or former players very excited. 

Instead of being a triumph at the end of a lackluster season (the Commanders were eliminated from playoff contention the day Major Tuddy was revealed), Major Tuddy has proved to be yet another point of controversy for the Commanders organization. In fact, some of the original Hogs, including Joe Jacoby and Mark May, are so unenthused with the new mascot that they issued a statement prior to Major Tuddy’s announcement distancing themselves from Dan Snyder (the Commanders’ current majority owner) and referencing potential legal action related to the new Hog. The members of the original Hogs (joined by John Riggins, Fred Dean, and Doc Walker) created O-Line Entertainment, LLC, and in July of 2022, O-Line Entertainment filed federal trademark applications for ‘Hogs’ and ‘Original Hogs’ as the terms relate to professional football paraphernalia and merchandise. O-Line Entertainment sees Major Tuddy as a potential infringement of its mark, an outright attempt to capitalize on and commercialize the work of the Hogs of the 80s and 90s, and an attempt to confuse fans. 

While the Commanders imply they have no intention to financially capitalize on Hogs as a mark, O-Line Entertainment has a real shot at being able to exclude the Commanders from even trying. The Lanham Act §43(a) creates a statutory cause of action for trademark infringement, under which “any person who . . . uses in commerce any word, term, name, symbol, or device, . . . likely to cause confusion, or to cause mistake, or to deceive as to the affiliation, connection, or association of such person with another person . . . shall be liable in a civil action by any person who believes that he is or is likely to be damaged by such act.” Trademarks exist to help consumers identify the source of a good. So, if someone attempts to use another’s mark, or something substantially similar, in a way that causes confusion as to the source or sponsorship of a good, that party is liable for trademark infringement. In this case, it is hard to refute the source of Major Tuddy’s swine heritage, as even his official team profile references the offensive line of the 80s and 90s. O-Line Entertainment would need to prove only that a likelihood of confusion exists between its products and those of the Commanders organization, not that actual confusion has occurred. Under current law, there is a real prospect of O-Line Entertainment doing so.

While O-Line Entertainment’s trademark registration has yet to be granted, and the Commanders organization could attempt to invalidate it, doing so likely would not be worth the problems it would cause. The Commanders are already in the midst of a Congressional investigation, and Dan Snyder’s tenure as owner might not survive more bad press. While it is hard to expect Dan Snyder and the Commanders’ leadership team–a group known for trying and failing to hide behind the best parts of their team’s legacy–to handle themselves in a morally upstanding way, how they choose to handle the marketing and merchandising of their new mascot could mean another long and protracted trademark dispute in which they lack the moral high ground. 

Should the Police Be Able To Arrest You For Your Face?

By: Kyle Kennedy

As technology has continued to evolve, law enforcement has begun to employ technological advances like facial recognition in its everyday pursuits. This has included issuing arrest warrants based exclusively on facial recognition matches.

Randall Reid was on his way to Thanksgiving dinner at his mother’s house in Georgia when he was arrested and jailed for allegedly stealing $10,000 worth of Chanel and Louis Vuitton handbags in a New Orleans suburb. The only problem was that Reid had never even been to Louisiana. That did not stop Reid from spending nearly a week in jail because a facial recognition tool had matched him to surveillance footage of the suspect in the case. Reid was arrested and held despite being about 40 pounds lighter than the suspect on the tape. 

This was not the first case of facial recognition technology leading to a wrongful arrest. Nijeer Parks spent 10 days in jail after an incorrect facial match and began to feel pressure to accept a plea deal because of his prior criminal record. Faced with the pressure of battling the court system, often compounded by the weight of prior charges, “[d]efense attorneys and legal experts say some people wrongly accused by facial recognition agree to plea deals.” Robert Williams has had multiple strokes since he was released from a 30-hour stint in jail after being incorrectly matched to video of a suspect robbing a watch store. These are far from the only instances of inaccurate facial recognition leading to wrongful arrest, and they raise two important questions: can a facial recognition match be the basis of probable cause to substantiate a warrant? And if so, should it be? 

Facial recognition is broadly used to accomplish two tasks: verification and identification. Verification, also called one-to-one matching, confirms a person’s identity, as when logging into a smartphone or a banking app. Identification, or one-to-many matching, compares an unknown face against a large database of known faces. Identification can be used on cooperative subjects who consent to having their faces scanned, or on uncooperative subjects whose faces are captured without their knowledge or consent. The accuracy of identification is much lower for uncooperative subjects whose facial images were captured in the real world. One algorithm had a 0.1% error rate when matching to high-quality mugshots, but this rate climbed to 9.3% when matching to images captured ‘in the wild’. Accuracy also varies by vendor: one top algorithm achieved an identification accuracy of 87% at a sporting venue while the accuracy of another vendor’s software was a dismal 40%. Beyond vendor variation, even the most accurate algorithms tended to have “higher false positive rates in women, African-Americans, and particularly African-American women.” 
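To make one-to-many matching concrete, here is a minimal sketch of identification against a database of known faces. It is an illustration only: real systems compare high-dimensional embeddings produced by neural networks, and the tiny vectors, names, and 0.9 threshold below are invented for this example.

```python
import math

# Simplified sketch of one-to-many face identification.
# Real systems compare high-dimensional embeddings from a neural network;
# the 3-dimensional vectors and 0.9 threshold here are purely illustrative.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.9):
    """Return (name, score) of the best match, or (None, score) if no
    candidate clears the similarity threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

known_faces = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}
# A low-quality "in the wild" capture produces a noisier probe vector,
# which is where false positives against innocent people arise.
print(identify([0.88, 0.12, 0.31], known_faces))
```

Note that the algorithm always has a "best" candidate; whether that candidate is reported as a match depends entirely on where the threshold is set, which is why threshold policy matters so much for wrongful arrests.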

Probable cause differs for arrest warrants and search warrants. For an arrest warrant, probable cause is interpreted according to a flexible reasonableness standard based on the totality of the circumstances. For a search warrant, there is probable cause when there is a fair probability that evidence of a crime will be found in the place to be searched. Based on the issues with accuracy and the frequency of inaccurate facial matches leading to wrongful arrests, a one-to-many match acquired through a facial recognition algorithm cannot substantiate probable cause for a warrant without further evidence. This is especially true where the algorithm is applied to images or video captured from uncooperative individuals ‘in the wild’, which is exactly the case when officers try to identify unknown suspects from surveillance footage. A facial match with corroborating evidence linking the matched individual to a location might be able to substantiate a search warrant. On its face this might seem less concerning given the lower risk of wrongful arrests, but wrongfully or erroneously substantiated search warrants lead to many issues of their own.

The greatest danger posed by facial recognition inaccuracies is false positives, because they lead to false accusations against innocent individuals. In response, some scholars have proposed imposing confidence thresholds on facial recognition algorithms to reduce the rate of false positives. One study found that a set of algorithms failed to return a match 4.7% of the time when no threshold was imposed, but that the rate jumped to 35% when a 99% confidence threshold was imposed. This means that roughly 30% of the time the algorithms identified an individual, the match was made at a confidence level below 99%. Many law enforcement departments that use facial recognition technology do not impose confidence thresholds on potential matches. While confidence thresholds reduce the risk of false accusations, they still leave the door open for problematic uses of the technology by law enforcement. Confidence thresholds are a band-aid solution, difficult to enforce externally, that allow the continued use of invasive surveillance technology.
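The arithmetic behind that 30% figure can be made explicit, using only the two rates quoted from the study above:

```python
# Reproducing the arithmetic from the study quoted above.
no_match_unthresholded = 0.047  # no match returned when no threshold was imposed
no_match_at_99 = 0.35           # no match returned under a 99% confidence threshold

# The matches suppressed by the threshold are exactly those that would have
# been returned at a confidence level below 99%.
low_confidence_matches = no_match_at_99 - no_match_unthresholded
print(f"{low_confidence_matches:.1%} of queries returned a match below 99% confidence")
# prints roughly 30%
```

In other words, nearly a third of the matches these algorithms produced, matches that could anchor an arrest warrant, would not survive a 99% confidence requirement.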

Overall, one-to-many matches of uncooperative faces acquired through facial recognition algorithms are neither accurate nor reliable enough to be the sole basis of probable cause substantiating a warrant. These algorithms are susceptible to high error rates that vary unpredictably based on many factors, including the race and gender of the target individual. Additionally, the use of facial matching as the sole basis for arrest warrants has led to many wrongful arrests with long-term consequences for arrestees, like those mentioned above. The evidence of inaccuracy and the history of wrongful arrests stemming from inaccurate facial matches make it clear that a facial recognition match should not be the legal basis for probable cause absent corroborating evidence. The remaining questions are to what degree a facial recognition match must be corroborated to substantiate a search warrant, whether police should be liable for harms stemming from inaccurate facial matches, and, if so, what duty of care the police have in determining whether a given facial recognition match is accurate. Beyond these open inquiries, the need for regulation surrounding the use of facial recognition and other surveillance technologies by law enforcement has never been more apparent.