The Complexities of Racism in AI Art

By: Imaad Huda

AI generative art is a recent advance in consumer and social artificial intelligence. Anybody can type a few words into a program, and, within seconds, the AI will generate an image that roughly depicts that prompt. Generative art tools can imitate a range of artistic styles to produce digital art without anybody lifting a pen. While many users are simply fascinated by art created by their computers, few are aware of how the AI generates its images or of the implications of what it produces. Now that AI art programs have made their way into consumer hands, users have noticed stereotypical and racialized depictions in their auto-generated images. Prompts that reference employment, education, or history often produce images that reflect racial bias. As AI becomes more mainstream, racist and sexist depictions by AI will only serve to entrench long-standing stereotypes, and the lack of a legal standard will only make the matter worse. 
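To make that workflow concrete, here is a minimal sketch of prompt-to-image generation, assuming the open-source Hugging Face diffusers library and the publicly released Stable Diffusion v1.5 checkpoint; the model name and prompt are illustrative choices, not drawn from any particular program discussed here.

```python
# Minimal sketch of text-to-image generation, assuming the Hugging Face
# "diffusers" library. The checkpoint name and prompt are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly released checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU makes generation take seconds, not minutes

# A few words in, an image out: the model renders its best guess at the prompt.
image = pipe("a portrait of a CEO").images[0]
image.save("ceo.png")
```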

Quantifying the Racism 

Leonardo Nicoletti and Dina Bass of Bloomberg note that generative images take “human” biases to the extreme. In an analysis of more than 5,000 images generated with the Stable Diffusion AI, depictions of people with higher-paying jobs were compared to depictions of people with lower-paying jobs. The result was an overrepresentation of people of color in the lower-paying jobs. Prompts including “fast-food worker” yielded an image of a darker-skinned person seventy percent of the time, even though Bloomberg noted that seventy percent of fast-food workers are white. Meanwhile, prompts for higher-paying jobs, such as “CEO” and “lawyer,” generated images of people with lighter skin at a rate of over eighty percent, roughly proportional to the eighty percent of people who hold those jobs. Stable Diffusion showed the most bias when depicting occupations associated with women, “amplify[ing] both gender and racial stereotypes.” Among all the images generated for high-paying jobs, only one, that of a judge, depicted a person of color. Relatedly, commercial facial-recognition software used to classify gender has shown “the lowest accuracy on darker skinned people,” a serious problem when such software is “implemented for healthcare and law enforcement.” 
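For readers curious how such an audit works in practice, here is a hedged sketch: generate a batch of images for each occupation prompt, classify the apparent skin tone of each, and compare the resulting rates. The checkpoint name, prompts, and the crude brightness-based classify_skin_tone stand-in are all illustrative assumptions; a real audit, like Bloomberg’s, would grade detected faces on a validated skin-tone scale.

```python
# Hedged sketch of an occupation-bias audit like Bloomberg's: generate
# many images per prompt, classify apparent skin tone, compare the rates.
from collections import Counter

import numpy as np
from diffusers import StableDiffusionPipeline

def classify_skin_tone(image):
    # Crude illustrative proxy only (average brightness of the image).
    # A real audit would detect faces and grade them on a validated
    # skin-tone scale rather than using whole-image brightness.
    gray = np.asarray(image.convert("L"))
    return "lighter" if gray.mean() > 128 else "darker"

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

prompts = ["a photo of a fast-food worker", "a photo of a CEO"]
tallies = {prompt: Counter() for prompt in prompts}

for prompt in prompts:
    for _ in range(50):  # Bloomberg generated hundreds per occupation
        image = pipe(prompt).images[0]
        tallies[prompt][classify_skin_tone(image)] += 1

for prompt, tally in tallies.items():
    total = sum(tally.values())
    print(prompt, {tone: round(n / total, 2) for tone, n in tally.items()})
```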

Stable Diffusion was also biased when depicting criminality. For the prompt “inmate,” the AI generated a person of color eighty percent of the time, even though only half of the inmates in the U.S. are people of color. Bloomberg notes that even these rates could be skewed by the racial bias of U.S. “policing and sentencing” mechanisms. 

The Legality

Is racism in AI legal? The answer is complicated for a number of reasons, not least because the law surrounding AI generative imaging is new. In 2021, the Federal Trade Commission (FTC) declared the use of discriminatory algorithms to make automated decisions illegal, citing opportunities for “jobs, housing, education, or banking.” New York City has also enacted its own Local Law 144, which requires that AI tools undergo a “bias audit” before aiding in employment decisions. The National Law Review states that a bias audit includes a calculation of the “rate at which individuals in a category are either selected to move on or assigned a classification” by the hiring tool. The law also states that audits “include historical data in their analysis” and that the results of the audit “must be made publicly available.” 
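As a rough illustration of the calculation the National Law Review describes, the sketch below computes per-category selection rates, along with the impact ratios bias auditors typically report; the candidate outcomes and category labels are invented for illustration and do not reflect the rule’s full methodology.

```python
# Minimal sketch of the selection-rate calculation at the heart of a
# Local Law 144 bias audit: the rate at which individuals in each
# demographic category are selected to move forward by a hiring tool.
# The outcome data below is invented for illustration.
from collections import defaultdict

# (category, selected?) pairs -- hypothetical screening outcomes
outcomes = [
    ("category_a", True), ("category_a", False), ("category_a", True),
    ("category_b", False), ("category_b", False), ("category_b", True),
]

totals = defaultdict(int)
selected = defaultdict(int)
for category, was_selected in outcomes:
    totals[category] += 1
    selected[category] += was_selected

rates = {c: selected[c] / totals[c] for c in totals}
print("selection rates:", rates)

# Audits also commonly report an impact ratio: each category's selection
# rate divided by the rate of the most-selected category.
best = max(rates.values())
print("impact ratios:", {c: rate / best for c, rate in rates.items()})
```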

The advancement of anti-racism laws regulating AI tools represents progress. How these laws pertain to AI art, however, has yet to be seen. Laws concerning AI-generated art currently focus on theft, as AI art often copies the originality and stylistic choices of human artists. The racial depictions in AI art have not yet been addressed legally, but they could perpetuate stereotypes when used in an educational context, which the FTC prohibits under its 2021 declaration. Judges and lawmakers may not yet see AI art’s contribution to systemic racism as a legal issue that could stand in the courtroom. 

What’s The Solution?

The bias in generated art stems from the underlying algorithm, which is trained on vast collections of captioned images gathered from the internet and, given a user’s prompt, draws on those learned associations of description and style to compose a new image. Fed by prompts from many different users and by the data available online, the algorithm produces these images continuously. Almost a decade ago, Google had to disable certain image labels in its consumer photo software after its image-recognition AI tagged photos of black people as “gorillas” and “monkeys.” According to former Google employees, the cause was that Google had not trained its AI on enough images of black people.

The problem here, again, may be a lack of representation, from too few people of color among the engineers building these algorithms to inadequate representation in the data sets used to generate images. A simple fix that increases representation, however, is not so easy. AI systems are built on models that already exist; a new model is typically initialized from an older one, and the biases present in the older model may persist, as the sketch below illustrates. As issues with machines get more complicated, so do the solutions. Derogatory depictions should not be allowed to stand in the absence of a legal standard, and lawmakers should take the necessary measures to end AI discrimination before it becomes a true social problem.
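As a minimal sketch of that inheritance, assuming the Hugging Face diffusers library and the publicly released Stable Diffusion v1.5 checkpoint (both illustrative choices, not named in this article): loading a new pipeline from an existing checkpoint carries the older model’s weights, and whatever biases they encode, straight into the new system.

```python
# Minimal sketch of model inheritance, assuming the Hugging Face
# "diffusers" library: new systems typically start from an existing
# pretrained checkpoint rather than training from scratch.
from diffusers import StableDiffusionPipeline

# Load the older model's weights as-is; nothing here removes its biases.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Fine-tuning would adjust these inherited parameters on new data, but
# it begins from, rather than replaces, what the base model learned.
n_params = sum(p.numel() for p in pipe.unet.parameters())
print(f"{n_params:,} parameters inherited from the base checkpoint")
```

Unless developers actively retrain or filter, the older model’s learned associations become the new model’s starting point.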
