Your Face Says It All: The FTC Sends a Warning and Rite Aid Settles Down

By: Caroline Dolan

If someone were to glance at your face, they wouldn’t necessarily know whether you won big in Vegas or are silently battling a gambling addiction. When you stroll down the street, your face can conceal many a secret, even a lucrative side hustle. While facial recognition (“FR”) software is not a new innovation, deep pockets are pouring staggering sums into the FR market. Last year, the global market was valued at $5.98 billion, and it is projected to grow at a compound annual growth rate of 14.9% through 2030. This rapid and bold deployment of facial recognition technology may make our faces more revealing than ever, transforming them into our most valuable, and most vulnerable, asset.

A Technical Summary for Non-Techies

Facial recognition software measures the similarity between faces to determine whether two images depict the same person. Facial characterization goes a step further, classifying a face by individual attributes such as gender, facial expression, and age. Both rely on deep learning, in which artificial neural networks loosely mimic how our brains process information. A neural network consists of layers of interconnected nodes that process and learn from training data, such as images or text, eventually developing the ability to extract facial features and make comparisons.
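
To make that concrete, here is a minimal sketch in Python of how a recognition decision typically reduces to comparing fixed-length “embedding” vectors produced by a trained network. The embed_face stand-in and the 0.6 threshold are illustrative assumptions, not any vendor’s actual pipeline.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a trained neural network that maps a face image
    to a fixed-length embedding vector. Real systems use deep models
    trained on large labeled datasets; this just returns a
    deterministic pseudo-random vector per image."""
    rng = np.random.default_rng(int(image.sum() * 1e6) % (2**32))
    vec = rng.standard_normal(128)
    return vec / np.linalg.norm(vec)  # unit-normalize the embedding

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two unit-normalized embeddings (1.0 = identical)."""
    return float(np.dot(a, b))

def is_match(img_a: np.ndarray, img_b: np.ndarray,
             threshold: float = 0.6) -> bool:
    """Declare a 'match' when embedding similarity clears the threshold;
    the threshold trades false positives against false negatives."""
    return cosine_similarity(embed_face(img_a), embed_face(img_b)) >= threshold

# Toy usage with random stand-in "images"
face_a = np.random.rand(112, 112, 3)
face_b = np.random.rand(112, 112, 3)
print(is_match(face_a, face_b))
```

The threshold is the key dial: set it too low in a retail “watchlist” setting and the system will flag innocent shoppers as matches, precisely the kind of false positive at issue in the Rite Aid case discussed below.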

However, when the dataset used to train an FR model is unrepresentative of different genders and races, the resulting algorithm is biased. Training data skewed toward certain features creates a critical weak spot in a model’s capabilities and can result in “overfitting,” in which the model performs well on its training data but poorly on data that differs from what it was trained on. For example, a model trained mostly on images of men with Western features will likely struggle to make accurate determinations about images of East Asian women.
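
One common way to surface this skew is to report error rates per demographic group rather than a single aggregate accuracy figure. The sketch below uses invented labels and two hypothetical groups purely to illustrate the audit; no particular dataset or fairness library is assumed.

```python
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of true non-matches the system wrongly flags as matches."""
    negatives = y_true == 0
    if negatives.sum() == 0:
        return float("nan")
    return float((y_pred[negatives] == 1).mean())

# Made-up evaluation data: 1 = flagged as a "match", 0 = not flagged
y_true = np.array([0, 0, 0, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1, 0, 1, 1, 0])
group  = np.array(["A", "B", "A", "A", "B", "B", "A", "A", "B", "A"])

# Break the error rate out by group to expose disparities that an
# overall accuracy number would hide
for g in np.unique(group):
    mask = group == g
    print(f"group {g}: false positive rate = "
          f"{false_positive_rate(y_true[mask], y_pred[mask]):.2f}")
```

On this toy data the aggregate numbers look tolerable, but the per-group breakdown shows every false positive landing on group B, which is the shape of the disparity the FTC alleged in the Rite Aid matter.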

Data collection and curation pose their own set of challenges, and selection bias is a constant risk whether training data is sourced through a proprietary AI model, which requires customers to purchase a license subject to restrictions, or through an open-source model, which is freely available and offers more flexibility. Ensuring that training data represents a variety of demographics requires awareness of AI ethics, intentionality, and potentially federal regulation.
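
That intentionality can start with something as simple as measuring who is actually in the training set before training begins. The following sketch, with hypothetical group labels and an arbitrary 10% floor, flags groups that are underrepresented in the data.

```python
from collections import Counter

def flag_underrepresented(labels: list[str], min_share: float = 0.10) -> list[str]:
    """Return demographic groups whose share of the training set falls
    below min_share -- a signal to collect more data before training."""
    counts = Counter(labels)
    total = len(labels)
    return [g for g, n in counts.items() if n / total < min_share]

# Hypothetical per-image demographic annotations
train_labels = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50

print(flag_underrepresented(train_labels))  # ['group_c'] at 5% of the data
```

In practice the floor would be calibrated to the demographics of the deployment population rather than a fixed constant, but even this crude check would have revealed the kind of skew described above.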

The FTC Cracks Down

In December 2023, Rite Aid settled with the FTC following the agency’s complaint alleging that the company’s deployment of FR software was reckless and lacked reasonable safeguards, resulting in false identifications and foreseeable harm. Between 2012 and 2020, Rite Aid used an AI FR program to monitor shoppers without their knowledge and flag “persons of interest.” Shoppers whose faces were deemed to match an entry in the company’s “watchlist database” were confronted by employees, searched, and often publicly humiliated before being expelled from the store.

The agency’s complaint under Section 5 of the FTC Act asserted that Rite Aid recklessly overlooked the risk that its FR software would misidentify people based on gender, race, or other demographics. The FTC stated that “Rite Aid’s facial recognition technology was more likely to generate false positives in stores located in predominantly Black and Asian neighborhoods than in predominantly white communities, where 80% of Rite Aid stores are located.” This conduct also violated Rite Aid’s 2010 Security Order, which required the company to oversee its third-party software providers.

The recent settlement prohibits Rite Aid from implementing AI FR technology for five years and requires the company to destroy all data the system collected. The FTC’s stipulated Order imposes comprehensive safeguards on “facial recognition or analysis systems,” defined as “an Automated Biometric Security or Surveillance System that analyzes . . . images, descriptions, recordings . . . of or related to an individual’s face to generate an Output.” If Rite Aid later seeks to implement an Automated Biometric Security or Surveillance System, the company must adhere to monitoring, public notice, and data deletion requirements keyed to the “volume and sensitivity” of the data. Because Rite Aid filed for Chapter 11 bankruptcy in October 2023, the settlement is pending approval by the bankruptcy court while the FTC’s proposed consent Order goes through public notice and comment.

Facing the Future

Going forward, the FTC is expected to remain “vigilant in protecting the public from unfair biometric surveillance and unfair data security practices.” Meanwhile, companies may be incentivized to embrace AI ethics as a new component of “Environmental, Social, and Corporate Governance,” while legislators wrestle with how to ensure that automated decision-making technologies evolve responsibly and do not perpetuate discrimination and harm.
