Is Discrimination Fair? The FTC’s Failure to Regulate Big Tech

By: Gabrielle Ayala-Montgomery

In the age of technological innovation, minorities fight to end discrimination both offline and online. Biased outcomes produced by technology have material consequences for minorities, as technological advancements replicate social inequalities and discrimination. Vulnerable populations are denied fair housing, education, loans, and employment because of biased data; the commercial use of biased data, however, raises concerns for all consumers.

The FTC should use its authority to deter the economic harms that arise when companies’ practices unfairly and disproportionately impact minorities. First, the FTC should bring enforcement actions against companies whose “unfair practices” disproportionately affect minority consumers. Second, the FTC should conduct rulemaking to expand its definition of “unfair practices” to encompass business practices that unfairly affect minorities.

The FTC Breakdown

The FTC protects consumers from deceptive or unfair business practices and investigates alleged violations of federal laws and FTC regulations. Section 5 of the FTC Act prohibits “unfair or deceptive acts or practices in or affecting commerce.” The Act vests the FTC with broad authority to bring enforcement actions against businesses to protect consumers against unfair or deceptive practices. Section 18 enables the FTC to promulgate regulations to prevent unfair or deceptive practices. After the FTC issues a rule, it may seek penalties for unfair practices that violate it.

The FTC explicitly defined its standards for determining whether a practice is deceptive but left the question of whether a practice is unfair for courts to interpret. In response, courts have developed a three-factor test to identify an unfair practice. The test asks: (1) whether the practice injures consumers; (2) whether it violates established public policy; and (3) whether it is unethical or unscrupulous.

Biased Practices in High Tech

An algorithm is a process or set of rules, often involving mathematical calculations on data, that produces outputs to help humans make decisions. Algorithmic bias (also called machine learning bias or AI bias) is a systematic error in the coding, collection, or selection of data that produces unintended or unanticipated discriminatory results. Algorithmic bias is perpetuated when programmers train algorithms on patterns found in historical data. Humans then use these biased results to make decisions that are systematically prejudiced against minorities.

Data and surveillance are now big business. However, some companies have failed to ensure their AI products are fair to consumers and free from impermissible bias. For example, Amazon had to scrap a recruiting tool because the system rated job candidates in a gender-biased manner. The AI models trained themselves on resume data compiled over the previous ten years, submitted primarily by men. Thus, Amazon’s recruiting tool taught itself that male candidates were preferable.
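The dynamic behind the Amazon episode can be sketched with a toy scoring model. Everything below is hypothetical illustration, not Amazon’s actual system or data: a model that learns word weights from a skewed hiring history will reproduce that history’s bias.

```python
from collections import Counter

# Hypothetical training data (illustration only): past hiring outcomes
# skewed by a history of hiring mostly men.
hired = [
    "software engineer chess club captain",
    "software engineer rugby team captain",
]
rejected = [
    "software engineer women's chess club captain",
    "software engineer women's coding society",
]

# "Train": count how often each word co-occurred with each outcome.
hired_counts = Counter(w for r in hired for w in r.split())
rejected_counts = Counter(w for r in rejected for w in r.split())

def score(resume: str) -> int:
    # A word scores positively if it appeared in past hires,
    # negatively if it appeared in past rejections.
    return sum(hired_counts[w] - rejected_counts[w] for w in resume.split())

# Two candidates with identical credentials; one resume mentions
# a women's organization, which the skewed history penalizes.
print(score("software engineer chess club captain"))           # higher
print(score("software engineer women's chess club captain"))   # lower
```

No one told this model to prefer men; the preference falls out of the historical data it was given, which is the general mechanism of algorithmic bias described above.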

The National Institute of Standards and Technology (NIST) examined the tech industry’s leading facial recognition technology (FRT) algorithms, including 189 algorithms voluntarily submitted by 99 companies, academic institutions, and other developers. The algorithms came from tech companies and surveillance contractors, including Idemia, Intel, Microsoft, Panasonic, SenseTime, and Vigilant Solutions. NIST found “empirical evidence” that many FRT algorithms exhibited “demographic differentials” that can worsen their accuracy based on a person’s age, gender, or race. Some algorithms produced no such errors, while other software was up to 100 times more likely to return an error for a person of color than for a white individual. Overall, middle-aged white men generally benefited from the highest FRT accuracy rates, or the fewest errors. Such bias in algorithms can emanate from unrepresentative or incomplete training data or from reliance on flawed information that reflects historical inequalities. If left unchecked, biased algorithms can lead to decisions that have a collective, disparate impact on specific groups of people even without the programmer’s intention to discriminate.
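NIST’s “demographic differential” finding is, at bottom, a claim about per-group error rates. The sketch below shows that measurement on invented data; the group labels and results are hypothetical and are not NIST’s data or methodology, which involved millions of labeled photo pairs.

```python
from collections import defaultdict

# Hypothetical verification results: (group, matched_correctly) pairs.
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def error_rates(results):
    """Fraction of incorrect matches per demographic group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

rates = error_rates(results)
# A "demographic differential" is precisely this kind of gap between groups:
differential = rates["group_b"] / rates["group_a"]
print(rates, differential)
```

An audit like NIST’s asks whether this ratio is close to 1 for every pair of groups; a differential of 100, as reported for some algorithms, means one group bears the errors almost entirely.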

Companies’ targeted advertising systems have been used to exclude people of color from seeing ads for homes based on their “ethnic affinity.” For example, Facebook and other tech companies settled with civil rights groups for participating in illegal discrimination in their advertising of housing, employment, and loans. Targeted marketing policies and practices by tech companies permitted advertisers to exclude marginalized groups from seeing specific ads. As a condition of the settlement, Facebook agreed to establish a separate advertising portal for creating housing, employment, and credit (“HEC”) ads on Facebook, Instagram, and Messenger that does not allow advertisers to exclude consumers based on gender, age, or “multicultural affinity.” However, research demonstrates Facebook’s ad system can still unintentionally skew ad delivery based on demographics. These online advertising practices contribute to the systemic inequality minorities face in income, housing, and wealth.

Expanding “Unfair Practices”

What if Facebook had not reexamined its practices, or Amazon had kept its biased recruiting tech? No single piece of data protection legislation exists in the United States to prevent biased data and surveillance technology. Instead, the country has a patchwork of laws at the federal, state, and municipal levels. Sen. Ron Wyden, D-Ore., plans to update and reintroduce his Algorithmic Accountability Act of 2019, a bill designed to fight AI bias by requiring tech companies to audit their AI systems for discrimination. The bill would direct the FTC to require companies to study and fix flawed algorithms that result in inaccurate, unfair, biased, or discriminatory decisions. Passage of an Algorithmic Accountability Act would reduce decisions based on biased algorithms. Rather than waiting on Congress, however, the FTC may apply existing laws and policy options to unfair and discriminatory practices in the digital world.

In the absence of federal legislation, the FTC Act could be used to protect consumers against unfair practices that are biased against minorities. For the purposes of Section 5, purposeful or negligent practices that disproportionately impact minorities should be treated as (1) unethical or unscrupulous, (2) violations of public policy, and (3) actual harm to consumers. The FTC should prioritize enforcement of valid claims of unfair commercial practices that disproportionately impact minorities.

New Rulemaking Group May Provide Hope for Prevention

Following criticism that the FTC had failed to adequately use its authority to address consumer protection harms, the agency announced a new rulemaking group. This group will allow the FTC to take a strategic and harmonized approach to rulemaking across its different authorities and mission areas. With the new group in place, the FTC is poised to strengthen existing rules and undertake new rulemakings that further interpret unfair practices. Chairwoman Rebecca Kelly Slaughter stated, “I believe that we can and must use our rulemaking authority to deliver effective deterrence for the novel harms of the digital economy.” Under this new rulemaking group, the FTC can meaningfully protect and educate consumers about harmful commercial practices by expanding its interpretation of “unfair” to include commercial practices biased against minorities.

The FTC needs to protect consumers by protecting minorities. Current laws do not adequately address biased data or practices that produce discriminatory outcomes for minorities. Without legislation or federal administrative action, high-tech companies will continue developing systems, intentionally or unintentionally, biased against minorities such as people of color, women, immigrants, the incarcerated and formerly incarcerated, activists, and others. The FTC has the authority to effectively deter business practices that unfairly and disproportionately impact minorities by bringing enforcement actions against them. Further, the FTC should explicitly expand its interpretation of “unfair practices” to encompass practices disproportionately affecting minorities.
