AI’s Creative Ambitions: A Case Review of Thaler v. Perlmutter (2023)

By: Stella B. Haynes Kiehn

Is it possible for AI to achieve genuine creativity? Inventor and self-dubbed “AI Director” Dr. Stephen Thaler (“Thaler”) has spent the past several years attempting to prove to the U.S. Copyright Office not only that AI can be creative, but also that it can create works capable of meeting copyright standards.

On November 3, 2018, Thaler filed an application to register a copyright claim for the work A Recent Entrance to Paradise. In the application, Thaler listed “The Creativity Machine” as the author of the work and himself as the copyright claimant. According to Thaler, A Recent Entrance to Paradise was drawn and named by the Creativity Machine, an AI program. The artwork “depicts a three-track railway heading into what appears to be a leafy, partly pixelated tunnel.” In his copyright application, Thaler noted that A Recent Entrance to Paradise “was autonomously created by a computer algorithm running on a machine” and that he was “seeking to register this computer-generated work as a work-for-hire to the owner of the Creativity Machine.”

The U.S. Copyright Office denied Thaler’s application primarily on the grounds that his work lacked the human authorship necessary to support a copyright claim. On a second request for reconsideration of refusal, “Thaler did not assert that the Work was created with contribution from a human author … [but that] the Office’s human authorship requirement is unconstitutional and unsupported by case law.” The U.S. Copyright Office once again denied the application. Upon receiving this decision, Thaler appealed the ruling to the U.S. District Court for the District of Columbia.

On appeal, Judge Beryl A. Howell reiterated that “human authorship is an essential part of a valid copyright claim.” Notably, Section 101 of the Copyright Act requires that a work have an “author” to be eligible for copyright. Drawing upon decades of Supreme Court case law, the Court concluded that the author must be human, for three primary reasons.

First, the Court explained that the Copyright Clause of the U.S. Constitution was adopted to incentivize the creation of uniquely original works of authorship. That incentive is often financial, and non-human actors, unlike human authors, do not require financial incentives to create. “Copyright was therefore not designed to reach” artificial intelligence systems.

Second, the Court pointed to the legislative history of the Copyright Act of 1976 as evidence against Thaler’s copyright claim. The Court looked to the Copyright Act of 1909’s provision that only a “person” could “secure copyright” for a work. Additionally, the Court found that the legislative history of the Copyright Act of 1976 fails to indicate that Congress intended to extend authorship to nonhuman actors, such as AI. To the contrary, the congressional reports stated that Congress sought to incorporate the “original work of authorship” standard “without change.”

Finally, the Court noted that case law has “consistently recognized” the human authorship requirement. The decision pointed to the U.S. Supreme Court’s 1884 opinion in Burrow-Giles Lithographic Company v. Sarony as support for a human-only authorship requirement. That case, which upheld authorship rights for photographers, found it significant that the human creator, not the camera, “conceived of and designed the image and then used the camera to capture the image.”

Ultimately, this decision is consistent with recent case law and administrative opinions on this topic. In mid-2024, the Copyright Office plans to issue guidance on AI and copyright issues in response to a survey of AI industry professionals, copyright applicants, and legal professionals. One of Thaler’s main supporters in this legal battle is Ryan Abbott, a professor of law and health sciences at the University of Surrey in the UK and a prominent AI litigant. Abbott is the creator of the Artificial Inventor Project, a group of intellectual property lawyers and an AI scientist working on IP rights for AI-generated outputs. The Artificial Inventor Project is currently working on several other cases for Thaler, including attempts to patent two of the Creativity Machine’s other “authored” works. While the District Court’s decision seems to mark the end of Thaler’s quest to copyright A Recent Entrance to Paradise, the fight for AI authorship rights in copyright appears to be only beginning.

Your Face Says It All: The FTC Sends a Warning and Rite Aid Settles Down

By: Caroline Dolan

If someone were to glance at your face, they wouldn’t necessarily know whether you won big in Vegas or are silently battling a gambling addiction. When you stroll down the street, your face can conceal many a secret, even a lucrative side hustle. While facial recognition (“FR”) software is not a new innovation, deep pockets are pouring staggering amounts of money into the FR market. Last year, the market was valued globally at $5.98 billion and is projected to grow at a compound annual growth rate of 14.9% through 2030. This rapid and bold deployment of facial recognition technology may therefore make our faces more revealing than ever, transforming them into our most valuable, yet vulnerable, asset.

A Technical Summary for Non-Techies

Facial recognition uses software to assess similarities between faces and determine whether two images likely depict the same person. Facial characterization goes further, classifying a face based on individual characteristics like gender, facial expression, and age. Through deep learning, artificial neural networks mimic how our brains process data. A neural network consists of layers of algorithms that process and learn from training data, like images or text, and eventually develop the ability to identify features and make comparisons.
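At its core, the comparison step can be pictured as measuring how close two learned feature vectors (“embeddings”) are to one another. The sketch below is a minimal illustration, not any vendor’s actual system: the embedding vectors and the match threshold are hypothetical stand-ins for whatever a particular trained network would produce.

```python
# Minimal sketch of the comparison step in facial recognition.
# Assumptions: embeddings are feature vectors produced by some trained
# neural network (hypothetical here); the threshold value is illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, ranging from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(probe: np.ndarray, reference: np.ndarray, threshold: float = 0.6) -> bool:
    """Declare a 'match' when similarity clears a tuned threshold.
    Raising the threshold trades false positives for false negatives."""
    return cosine_similarity(probe, reference) >= threshold

# Illustrative vectors standing in for embeddings of two face images.
print(is_match(np.array([0.2, 0.9, 0.4]), np.array([0.25, 0.85, 0.35])))
```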

However, when the dataset used to train an FR model is unrepresentative of different genders and races, a biased algorithm results. Training data that is skewed toward certain features creates a critical weak spot in a model’s capabilities and can result in “overfitting,” wherein the machine learning model performs well on the training data but poorly on data that differs from what it was trained on. For example, a model trained on data dominated by images of men with Western features will likely struggle to make accurate determinations about images of East Asian women.
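One way to surface this weak spot is to break accuracy out by demographic group rather than reporting a single aggregate number. The snippet below is a generic sketch with invented labels, not an evaluation of any actual FR product.

```python
# Generic sketch: compare accuracy across demographic groups.
# The predictions, labels, and group tags below are invented for illustration.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return per-group accuracy so gaps hidden by the aggregate show up."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

rates = accuracy_by_group(
    predictions=["match", "no-match", "match", "no-match"],
    labels=["match", "no-match", "no-match", "no-match"],
    groups=["group_a", "group_a", "group_b", "group_b"],
)
print(rates)  # e.g. {'group_a': 1.0, 'group_b': 0.5}
```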

Data collection and curation pose their own set of challenges, but selection bias is a constant risk whether training data is collected from a proprietary large language model (“LLM”), which requires customers to purchase a license with restrictions, or from an open-source LLM, which is freely available and provides flexibility. Ensuring that training data represents a variety of demographics requires AI ethics awareness, intentionality, and potentially federal regulation.

The FTC Cracks Down

In December of 2023, Rite Aid settled with the FTC following the agency’s complaint alleging that the company’s deployment of FR software was reckless and lacked reasonable safeguards, resulting in false identifications and foreseeable harm. Between 2012 and 2020, Rite Aid employed an AI FR program to monitor shoppers without their knowledge and flag “persons of interest.” Those whose faces were deemed a match to one in the company’s “watchlist database” were confronted by employees, searched, and often publicly humiliated before being expelled from the store. 

The agency’s complaint under section 5 of the FTC Act asserted that Rite Aid recklessly overlooked the risk that its FR software would misidentify people based on gender, race, or other demographics. The FTC stated that “Rite Aid’s facial recognition technology was more likely to generate false positives in stores located in predominantly Black and Asian neighborhoods than in predominantly white communities, where 80% of Rite Aid stores are located.” This also violated Rite Aid’s 2010 Security Order which required the company to oversee its third-party software providers.  

The recent settlement prohibits Rite Aid from implementing AI FR technology for five years. It also requires the company to destroy all data that the system has collected. The FTC’s stipulated Order imposes various comprehensive safeguards on “facial recognition or analysis systems,” defined as “an Automated Biometric Security or Surveillance System that analyzes . . . images, descriptions, recordings . . . of or related to an individual’s face to generate an Output.” If Rite Aid later seeks to implement an Automated Biometric Security or Surveillance System, the company must adhere to numerous forms of monitoring, public notices, and data deletion requirements based on the “volume and sensitivity” of the data. Given that Rite Aid filed Chapter 11 bankruptcy in October of 2023, the settlement is pending approval by the bankruptcy court while the FTC’s proposed consent Order goes through public notice and comment.

Facing the Future

Going forward, it is expected that the FTC will remain “vigilant in protecting the public from unfair biometric surveillance and unfair data security practices.” Meanwhile, companies may be incentivized to embrace AI ethics as a new component of “Environmental, Social, and Corporate Governance” while legislators wrestle with how to ensure that automated decision-making technologies evolve responsibly and do not perpetuate discrimination and harm.

2023: A Roller Coaster Towards Unionization for Game Developers?

By: Kevin Vu

No doubt, 2023 has been a “blockbuster year for video games.” From the Game Awards breaking viewership records, to the long-anticipated Baldur’s Gate 3 winning several awards, including “Game of the Year,” to the redemption of Cyberpunk 2077, it’s evident that 2023 will be celebrated for its many great releases. But one little-told story of gaming in 2023 is the massive wave of layoffs that has hit many developers. Perhaps layoffs were inevitable, given the enormous costs that top video games incur and the fact that some notable games generated only half as much revenue as anticipated.

But there may be an even more fundamental reason for this rollercoaster of a year in gaming. Tech, the umbrella industry for gaming, has historically been resistant to unionization. As layoffs continue in the tech industry, the call to unionization has grown louder and louder. With the gaming industry celebrating one of its most consequential years, it’s time to ask whether unionization would ultimately benefit the industry.

Reasons to Unionize

Traditional reasons for unionization include higher wages, a safer workplace, job stabilization, and collective bargaining. Traditionally, both tech developers and game developers have earned six-figure salaries, which takes the wage factor largely off the table. The remaining factors, however, suggest that the gaming industry should unionize. Riot Games, Activision Blizzard, and other companies within the video game space are notorious for workplace harassment. A union can help advocate for those workers, lead to greater enforcement of workplace harassment and discrimination laws, and ultimately help create a culture where workplace harassment is no longer the norm. And with gaming companies notorious for their long hours (dubbed “crunch”), negotiating for better conditions through unions seems obvious. But perhaps the most pressing reason is the widespread layoffs of 2023, as unions can help secure better severance pay for employees transitioning to other endeavors.

Reasons Not to Unionize

However, various arguments have emerged against unionization in gaming, including the rapid development of technology, blurred lines between management and workers, and fears of stifling the creative process. Ultimately, though, many of those arguments seem strained. One of the most popular emerging technologies, virtual reality, has much of its roots in video game development. That technology has since seen success in helping doctors, patients, incarcerated individuals, and many others. Now, the rapid development of technology seems to threaten game developers themselves. Companies are beginning to use generative AI for their video games, whether for voice acting or promotional art. Indeed, some developers are now promising to use artificial intelligence to develop games, too. Using the advancement of technology as a reason to stymie the workers who helped create that technology seems backhanded at best.

On an even more fundamental level, shifting to generative technology to develop video games seems counterintuitive, given that video games are a creative product. What creativity exists within AI? This year in games should be telling companies that developers are needed and should be treasured. Baldur’s Gate 3, 2023’s Game of the Year, spent nearly three years in early access, where developers continued to work on the game as the public played it before its official release. Zelda: Tears of the Kingdom, a runner-up for that same award, was content-complete nearly a year before release, with that final year spent polishing the game. Cyberpunk 2077, a game with a tumultuous start, won 2023’s Best Ongoing Game Award because its developers ultimately believed in their product. In an industry where some of the biggest games are passion projects made by small teams, justifying anti-union sentiment by appealing to creativity while adopting technology that stifles that very creativity is disingenuous.

What Now?

It seems evident that video game developers should seriously consider unionization. Despite a big year in gaming releases, the industry remains threatened by layoffs, and crunch conditions persist. Video game unionization is not new, either. The first multi-department video game union, which included developers, emerged in 2023. Quality assurance workers, who test games to help deliver a more polished product, have also begun unionizing. Other creatives in the video game space, like voice actors, have taken collective action as well. Unions have been effective in these creative spaces and in addressing technology. For example, the Writers Guild of America’s strike ended on favorable terms for screenwriters, including limits on the use of AI. Ultimately, video game developers should look at their industry and ask whether the current climate is sustainable.

The Complexities of Racism in AI Art

By: Imaad Huda

AI generative art is a recent advance in consumer and social artificial intelligence. Anybody can type a few words into a program, and, within seconds, the AI will generate an image that roughly depicts that prompt. AI generative art can incorporate a number of artistic styles to produce digital art without anyone lifting a pen. While many users are simply fascinated by art created by their computers, few are aware of how the AI generates its images or of the implications of what it produces. Now that AI art programs have made their way into consumer hands, users have noticed stereotypical and racialized depictions in their auto-generated images. Prompts that touch on employment, education, and history often produce images that reflect racial bias. As AI becomes more mainstream, racist and sexist depictions by AI will only entrench long-standing stereotypes, and the lack of a legal standard will only make the matter worse.

Quantifying the Racism 

Leonardo Nicoletti and Dina Bass of Bloomberg note that the generative images take “human” biases to the extreme. In an analysis of 5,000 images generated with the Stable Diffusion AI, depictions produced by prompts for people with higher-paying jobs were compared to depictions of people with lower-paying jobs. The result was an overrepresentation of people of color in lower-paying jobs. Prompts such as “fast-food worker” yielded an image of a darker-skinned person seventy percent of the time, even though Bloomberg noted that seventy percent of fast-food workers are white. Meanwhile, prompts for higher-paying jobs, such as “CEO” and “lawyer,” generated images of people with lighter skin at a rate of over eighty percent, potentially proportional to the eighty percent of people who hold those jobs. When it came to occupations, Stable Diffusion showed the most bias when depicting occupations held by women, “amplify[ing] both gender and racial stereotypes.” Among all generations for high-paying jobs, only one image, that of a judge, depicted a person of color. Commercial facial-recognition software designed to identify people’s gender had “the lowest accuracy on darker skinned people,” presenting a problem when such software is “implemented for healthcare and law enforcement.”
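The kind of tally Bloomberg describes amounts to counting how often each skin-tone label appears for each prompt and comparing those shares to real-world statistics. The sketch below uses invented records purely to illustrate the arithmetic; it is not Bloomberg’s methodology or data.

```python
# Illustrative tally of skin-tone shares per prompt (records are invented;
# Bloomberg classified thousands of real Stable Diffusion outputs).
from collections import Counter, defaultdict

generations = [
    {"prompt": "fast-food worker", "skin_tone": "darker"},
    {"prompt": "fast-food worker", "skin_tone": "darker"},
    {"prompt": "fast-food worker", "skin_tone": "lighter"},
    {"prompt": "CEO", "skin_tone": "lighter"},
    {"prompt": "CEO", "skin_tone": "lighter"},
    {"prompt": "CEO", "skin_tone": "darker"},
]

counts = defaultdict(Counter)
for g in generations:
    counts[g["prompt"]][g["skin_tone"]] += 1

for prompt, tones in counts.items():
    total = sum(tones.values())
    shares = {tone: round(n / total, 2) for tone, n in tones.items()}
    print(prompt, shares)

# Comparing these shares against labor statistics (e.g., the share of
# fast-food workers who are white) is what exposes the skew described above.
```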

Stable Diffusion was also biased in its depictions of criminality. For the prompt “inmate,” the AI generated a person of color eighty percent of the time, even though only half of the inmates in the U.S. are people of color. Bloomberg notes that the rates for generating criminals could be skewed by racial bias in U.S. “policing and sentencing” mechanisms.

The Legality

Is racism in AI legal? The answer is complicated for a number of reasons. The law surrounding AI generative imaging is new. In 2021, the Federal Trade Commission (FTC) declared the use of discriminatory algorithms to make automated decisions illegal, citing opportunities for “jobs, housing, education, or banking.” New York City has also enacted its own Local Law 144, which requires that AI tools undergo a “bias audit” before aiding in employment decisions. The National Law Review states that a bias audit includes a calculation of the “rate at which individuals in a category are either selected to move on or assigned a classification” by the hiring tool. The law also requires that audits “include historical data in their analysis” and that the results of the audit “must be made publicly available.”
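In rough terms, the “rate” the National Law Review describes is a selection rate computed per category, which bias auditors then compare across categories, often as an impact ratio relative to the highest-scoring category. A minimal sketch with invented numbers follows; the statute and its implementing rules define the actual categories, data requirements, and reporting format.

```python
# Rough sketch of a Local Law 144-style selection-rate calculation.
# All figures are invented; the law's implementing rules govern real audits.

applicants = {  # category -> (number assessed by the tool, number selected to advance)
    "category_a": (200, 120),
    "category_b": (150, 60),
}

selection_rates = {
    cat: selected / assessed
    for cat, (assessed, selected) in applicants.items()
}
highest = max(selection_rates.values())

# Impact ratio: each category's selection rate relative to the highest one
# (1.0 means parity; lower values flag potential disparate impact).
impact_ratios = {cat: rate / highest for cat, rate in selection_rates.items()}

print(selection_rates)  # {'category_a': 0.6, 'category_b': 0.4}
print(impact_ratios)    # {'category_a': 1.0, 'category_b': 0.666...}
```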

The advancement of anti-racism laws regulating AI tools represents progress. However, how these laws pertain to AI art has yet to be seen. Laws concerning AI-generated art currently focus on theft, as AI art often copies the originality and stylistic choices of human artists. The racial depictions in AI art have not been addressed legally, but they could perpetuate stereotypes when used in an educational context, which the FTC prohibits under its 2021 declaration. Judges and lawmakers may not yet see AI art’s contribution to systemic racism as a legal issue that could stand in the courtroom.

What’s The Solution?

The bias in generated art results from its algorithm, which, depending on the user’s prompt, draws on images that match a description and style to develop a new image. From the many prompts submitted by different users and the data available on the internet, the algorithm continuously produces these images. Almost a decade ago, Google pulled back a consumer AI image-search feature because images of Black people were being filtered into searches for “gorillas” and “monkeys.” The reason for this, according to former Google employees, was that Google had not trained its AI with enough images of Black people. The problem in this case, again, could be a lack of representation, from too few employees of color working on AI algorithms to inadequate representation in the data sets used to generate images. However, a simple fix to increase representation is not so easy. AI computing is built on models that already exist; a new model will be based on an older model, and the biases present in the older algorithm may persist. As issues with machines get more complicated, so do the solutions. Derogatory depictions should not be allowed to stand in the absence of a legal standard, and lawmakers should take the necessary measures to end AI discrimination before it becomes a true social problem.

Move Fast and Break Things: Ethical Concerns in AI

By: Taylor Dumaine

In Jurassic Park, Dr. Ian Malcolm famously admonished the park’s creator: “your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” Technological advancement for the sake of advancement alone ignores the genuine negative effects that advancement can cause or contribute to. The negative externalities of technological advancement have often been overlooked or ignored. There is also often a reliance on federal and state governments to regulate industry rather than on self-regulation or ethics standards. That reliance has become especially pronounced in the AI and generative AI spaces. Government regulation of AI is far outpaced by the technology’s rapid development, hindering the government’s ability to address ethical issues adequately.

Relying on government regulation is a copout for large tech companies. Congress’s record on technology regulation is poor at best, with most bills failing to become law and those that do pass proving insufficient to regulate effectively. The United States still does not have a national privacy law, and there is little political will to pass one. The increasingly octogenarian Congress does not have the best track record of understanding basic technology concepts, let alone the increasingly complicated technologies, such as AI, that it is tasked with regulating. During Senate testimony regarding the Cambridge Analytica scandal, Meta CEO Mark Zuckerberg had to explain some fairly rudimentary internet concepts.

Earlier this year, OpenAI CEO Sam Altman called for government regulation of AI in testimony before Congress. Altman also reportedly carries a backpack that would allow him to remotely detonate ChatGPT’s data centers should the generative AI go rogue. While by no means a perfect example of ethics in the AI space, Altman seems at least to be aware of the risks of his technology. Still, Altman relies on the federal government to regulate his technology rather than engaging in any meaningful self-regulation.

In contrast to Altman, David Holz, founder and CEO of Midjourney, an AI image-generation program, is wary of regulation, saying in an interview with Forbes: “You have to balance the freedom to do something with the freedom to be protected. The technology itself isn’t the problem. It’s like water. Water can be dangerous, you can drown in it. But it’s also essential. We don’t want to ban water just to avoid the dangerous parts.” Holz’s stated goal is to promote imagination, and he appears less concerned with how pursuing that goal may affect people so long as others benefit. This thinking is common in tech spaces.

Even the serious issues in generative AI, such as copyright infringement, seem almost mundane when compared with facial recognition tools such as Clearview AI. Dubbed “The Technology Facebook and Google Didn’t Dare Release,” these facial recognition tools have the disturbing ability to recognize faces across the internet. Clearview AI specifically has raised serious Fourth and Fifth Amendment concerns regarding police use of the software. Surprisingly, the large tech companies, Apple, Google, and Facebook, served as de facto gatekeepers of this technology for over a decade: having acquired facial recognition technology of their own, they recognized its dangers and declined to release it. Facebook was subject to a $650 million lawsuit related to its use of facial recognition on the platform. Clearview AI’s CEO Hoan Ton-That has no ethical qualms about the technology he is creating and marketing specifically to law enforcement. Clearview AI is backed by Peter Thiel, who co-founded Palantir, a company with its own issues regarding police and government surveillance. The potential integration of the two companies could result in an Orwellian situation. Clearview AI thus represents a worst-case scenario for tech without ethical limits, the effects of which have already been disastrous.

Law students, medical students, and Ph.D. students are all required to take an ethics class at some point. Many self-taught programmers, by contrast, do not incorporate ethics coursework or study into their learning. There are very real and important ethical concerns when it comes to technology development. In an age, culture, and society that values advancement without taking the time to consider the negative ramifications, it is unlikely that society’s concern over ethics in technology will change much. In a perfect scenario, government regulation would be swift, well-informed, and effective in protecting against the dangers of AI. With the rate of technological innovation, it is hard to stay proactive on ethics, but that does not mean there should be no attempt. Arguing for a professional ethics standard in computer science and software engineering is not without its own serious problems and would be almost impossible to implement. However, by creating a culture where ethical concerns are not just valued but actually considered in the development of new technology, we can hopefully avoid a Jurassic Park scenario.