Are Robots Really Running the Job Market?

By: Penny Pathanaporn

Today, artificial intelligence (AI) has pervaded nearly every aspect of our daily lives. Residents and visitors alike in California, Arizona, and now Texas can experience what it feels like to ride in a self-driving taxi. Over the past few years, internet users have flocked to ChatGPT for assistance on matters both serious and trivial, from creating a travel itinerary to drafting a work email. More recently, Elon Musk announced the highly anticipated development of Tesla Optimus, a humanoid robot designed to perform everyday tasks such as grocery shopping. Given the role AI plays in making our lives more convenient, it is no surprise that it has been integrated into the employment sector to make the hiring process more efficient.

How is AI used in the employment sector?

Artificial intelligence has been used to supplement several workplace procedures. For example, AI can help employers screen candidates by filtering application materials for specific experiences or keywords. Employers have also used AI to review recorded interviews throughout the hiring process, monitor employees’ computer activity, track employees’ locations, and determine who gets promoted or laid off.

The problem with AI: furthering institutional biases 

Despite its ability to streamline employment processes, AI is far from perfect, and its use can lead to harmful outcomes. How well an AI model performs depends on factors such as the data it has been fed and the techniques used to train it. Unfortunately, the data entered into AI models typically reflects institutional biases that exist in our society today. For instance, an AI hiring model formerly developed by Amazon was trained on a dataset consisting mostly of men. Consequently, the algorithm demonstrated a preference for applications containing terms more commonly used by men. Once the bias was discovered during development, Amazon ceased all work on the project.

Additionally, AI models trained to prioritize key traits such as optimism, extroversion, or the ability to work well under pressure may inadvertently disadvantage candidates with disabilities, or candidates from cultural demographics that do not emphasize those traits. Accordingly, employers using AI tools in their hiring practices run the risk of committing employment discrimination based on sex, race, national origin, age, disability, and other protected characteristics.

Legal Implications of AI Usage in the Employment Sector

Both federal and state laws prohibit disparate treatment discrimination and disparate impact discrimination. Disparate treatment is intentional discrimination against protected groups, while disparate impact arises from facially neutral policies that disproportionately harm protected demographics.

Although employers who use AI tools in good faith may not engage in disparate treatment discrimination, their practices can still amount to disparate impact discrimination. For example, as the Amazon model showed, AI hiring tools trained on biased datasets are likely to prefer traits that do not correspond with certain protected groups, disproportionately harming minority candidates.

When determining whether hiring practices may disproportionately impact protected demographics, the Equal Employment Opportunity Commission (EEOC) recommends that employers apply the “four-fifths” rule: if the proportion of candidates selected from one demographic is “substantially” different from the proportion selected from another, the hiring practice may be discriminatory. A difference is considered “substantial” when the ratio of the lower selection rate to the higher one falls below 80% (four-fifths).
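The four-fifths comparison itself is simple arithmetic. The following Python sketch, using made-up applicant numbers for illustration (not from any real case), shows how the selection rates and their ratio are computed:

```python
# Hypothetical illustration of the EEOC "four-fifths" rule.
# All applicant and selection counts below are invented for the example.

def selection_rate(selected, applicants):
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def four_fifths_ratio(rate_a, rate_b):
    """Ratio of the lower selection rate to the higher one."""
    lower, higher = sorted([rate_a, rate_b])
    return lower / higher

# Example: 48 of 80 male applicants selected (60%),
#          12 of 40 female applicants selected (30%).
men = selection_rate(48, 80)           # 0.60
women = selection_rate(12, 40)         # 0.30
ratio = four_fifths_ratio(men, women)  # 0.30 / 0.60 = 0.50

# A ratio below 0.80 flags a "substantially" different selection rate.
print(f"ratio = {ratio:.2f}; flagged: {ratio < 0.80}")
```

Here the ratio of 0.50 falls well below the 0.80 benchmark, so the hypothetical practice would warrant closer scrutiny. The rule is a screening heuristic, not a legal conclusion in itself.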

Nevertheless, employers may be permitted to use hiring practices that disproportionately impact certain protected groups if they can demonstrate that those practices are (1) job related and (2) consistent with business necessity. For instance, if the job applied for, and the employer’s business, genuinely require a fitness exam, the fact that more men than women pass the exam may not amount to disparate impact discrimination. Even so, employers should adhere to the least discriminatory practice available in all circumstances. Lastly, employers should be especially cautious about AI usage because they can still be held liable for discrimination even when the AI tools are owned or managed by third parties.

The Crackdown on AI Use in Employment Practices 

The rapid development of AI technology has undoubtedly led to a rise in AI-related lawsuits in the employment sector. In May 2022, the EEOC filed a lawsuit against iTutorGroup, a tutoring company, claiming that its use of AI in hiring violated the Age Discrimination in Employment Act of 1967 (ADEA). In August 2023, the EEOC and iTutorGroup settled the case, marking the first settlement of an AI-discrimination lawsuit.

There is also an ongoing AI-discrimination class action lawsuit in federal court against a company named Workday. The plaintiff, Derek Mobley, alleges that Workday’s use of AI software to screen applicants resulted in discrimination based on age, race, and disability status. Although the court has not issued a final judgment, its decision to allow the case to proceed as a class action should signal to employers that they must remain vigilant when it comes to AI use.

Looking Towards the Future 

Today, employers are strongly advised to engage third parties or external experts to audit their AI tools for possible discrimination. Additionally, both Congress and state legislatures have begun taking legislative action to minimize the discriminatory impact of AI, such as introducing bills that would require employers to notify candidates when AI is used.

Ultimately, the outputs of AI platforms merely reflect the biases and prejudices that already exist in our society. Laws and policies can help detect and minimize the amplification of these biases through AI. But perhaps grassroots advocacy can also provide an alternative avenue for promoting just AI usage.

#EEOC #AI #employmentlaw 
