Algorithmic Injustice: The Increased Prevalence of Biased Code in Courts and Law Enforcement


By: Noelle Symanski

Algorithms and artificial intelligence, machine learning in particular, have seen increasing use across many professional fields. We use this technology to run Google searches, swipe on online dating profiles, predict stock prices, and even control traffic lights. As algorithms become ubiquitous, these programs have also made their way into the criminal justice system. Law enforcement agencies and courts have begun using the technology to drive practices such as suspect identification and sentencing.

It is tempting to view a computer algorithm as an objective piece of code that spits out an unbiased result. The reality is far different. In late 2016, Amazon launched a facial recognition tool called “Rekognition.” In 2018, the ACLU found that the program incorrectly matched 28 members of Congress to mugshots of people who had been arrested. The research indicated that “[n]early 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress.” On its own, that flaw is alarming.

Even more worrisome, Amazon has pitched the use of Rekognition to U.S. Immigration and Customs Enforcement (ICE). This is troubling because innocent individuals may be held in border detention centers or even deported because flawed technology flagged them as undocumented. If ICE begins using this technology before critical errors and biases are addressed, people will face miscarriages of justice resulting from technologies marketed by private corporations. In addition, Amazon has heavily pitched Rekognition to state and local law enforcement agencies. While Amazon is the big-data giant responsible for creating the defective program, it is governments’ responsibility to decline to use the technology. It is unclear whether Amazon will be held liable for bringing a faulty product to market, but it is clear that some government agencies have been quick to cozy up to the data giant and have begun Rekognition pilot programs.
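Some quick arithmetic makes the ACLU’s figures concrete. The sketch below uses assumed raw counts chosen only to be consistent with the reported percentages (they are not the ACLU’s published data) to show that people of color were falsely matched at roughly twice their share of Congress.

```python
# Illustrative arithmetic only: the counts below are assumptions chosen to be
# consistent with the percentages reported by the ACLU, not their raw data.
false_matches_total = 28                  # members of Congress falsely matched
false_matches_people_of_color = 11        # assumed count ("nearly 40 percent")
share_of_congress_people_of_color = 0.20  # reported share of Congress

share_of_false_matches = false_matches_people_of_color / false_matches_total
overrepresentation = share_of_false_matches / share_of_congress_people_of_color

print(f"Share of false matches: {share_of_false_matches:.0%}")  # ~39%
print(f"Overrepresentation factor: {overrepresentation:.1f}x")  # ~2.0x
```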

Orlando Police are already using algorithms and machine learning for law enforcement purposes, and automated-decision algorithms are used throughout the criminal justice system in the form of pre-trial risk-assessment tools and post-trial sentencing algorithms. As Jason Tashea wrote in Wired, “courts and corrections departments around the US use algorithms to determine a defendant’s ‘risk’, which ranges from the probability that an individual will commit another crime to the likelihood a defendant will appear for his or her court date.” Governments do not write these algorithms themselves, and the general public is typically not allowed to see how they operate because they are proprietary to the companies that sell them and are “black boxed.” An algorithm is “black boxed” when data goes in, is processed, and a result comes out, but outsiders have no access to the methods by which the data was processed. This poses a particularly difficult challenge for defense attorneys, who must explain to the court why their client is not “high-risk” without being able to see or understand the program that labeled the client as such.

In State v. Loomis, defendant Eric Loomis challenged the lack of transparency in the risk-assessment algorithm used in sentencing him to six years in prison, alleging that the assessment violated his right to Due Process. The Supreme Court of Wisconsin held that the use of the risk assessment at sentencing was not a Due Process violation because the data entered into the program was either publicly available or provided by the defendant. Loomis is just the beginning. As the use of algorithms in sentencing becomes more widespread, individuals in other states are likely to bring Due Process and Equal Protection claims.
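To see what “black boxing” looks like from the defense table, consider the hypothetical sketch below. The questionnaire fields, weights, and threshold are invented for illustration and do not describe any real proprietary tool; the point is simply that counsel can observe the inputs and the label, but not the scoring logic in between.

```python
# Hypothetical illustration of a "black boxed" risk-assessment tool.
# The fields, weights, and threshold are invented; real proprietary tools
# do not disclose theirs, which is the transparency problem at issue.

def proprietary_risk_score(answers: dict) -> str:
    """Defense counsel sees only the inputs and the returned label."""
    # --- Inside the black box: weights the vendor does not disclose ---
    hidden_weights = {"age": -0.05, "prior_arrests": 0.4, "unemployed": 0.3}
    raw = sum(hidden_weights[k] * answers.get(k, 0) for k in hidden_weights)
    # ------------------------------------------------------------------
    return "high risk" if raw > 1.0 else "low risk"

# What the court and the defendant can observe:
answers = {"age": 23, "prior_arrests": 4, "unemployed": 1}
print(proprietary_risk_score(answers))  # a label, with no visible reasoning
```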

Not only do these algorithms lack transparency, but the risk scores and sentencing algorithms are racially biased. ProPublica conducted a study that collected the risk scores of 7,000 individuals in Florida and “checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the algorithm.” The study then tested the risk scores while controlling for criminal history and recidivism in order to isolate the effect of race. The findings indicated that “[b]lack defendants were often predicted to be at a higher risk of recidivism than they actually were. [The] analysis found that black defendants who did not recidivate over a two-year period were nearly twice as likely to be misclassified as higher risk compared to their white counterparts (45 percent vs. 23 percent).”
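The comparison ProPublica describes can be sketched in a few lines of Python: among defendants who did not reoffend within two years, compute how often each group was nonetheless labeled higher risk. The records below are synthetic stand-ins, not ProPublica’s Broward County data, so the output only illustrates the shape of the calculation.

```python
# Synthetic stand-in records: (race, labeled_higher_risk, reoffended_in_2_years)
records = [
    ("black", True, False), ("black", True, False), ("black", False, False),
    ("white", True, False), ("white", False, False), ("white", False, False),
]

def false_positive_rate(records, race):
    """Among non-recidivists of a given race, the share labeled higher risk."""
    non_recidivists = [r for r in records if r[0] == race and not r[2]]
    flagged = [r for r in non_recidivists if r[1]]
    return len(flagged) / len(non_recidivists)

for race in ("black", "white"):
    print(race, f"{false_positive_rate(records, race):.0%}")
# ProPublica reported roughly 45 percent vs. 23 percent on the real data.
```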

Some cities have already begun to combat the effects of these discriminatory algorithms. In January 2018, New York City enacted a bill creating a task force charged with developing procedures to determine whether automated decision systems have a disproportionate impact on the basis of race, gender, religion, sexual orientation, and other protected characteristics. More cities and states need to join this effort and examine their own automated-decision algorithms for evidence of discrimination. To combat algorithmic bias in the media, Nick Diakopoulos suggests that “[c]omputational journalists in particular should get in the habit of thinking deeply about what the side effects of their algorithmic choices may be and what might be correlated with any criteria their algorithms use to make decisions.” Local courts using algorithms to make sentencing decisions should likewise pause to consider the impact of the sentencing programs used in their jurisdictions, as illustrated in the sketch below. Courts need to ask the companies they purchase sentencing programs from to be transparent about their algorithmic choices. If there is evidence of bias in the code, courts should not purchase the programs; judges should instead sentence using traditional methods while remaining cognizant of their own implicit biases.
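As one rough illustration of what such an examination might involve, the sketch below compares how often two groups receive the favorable outcome (not being labeled high risk) and flags large gaps. The 0.8 threshold is borrowed loosely from the four-fifths rule used in employment-discrimination analysis; it is an assumption of this sketch, not a standard the New York City bill prescribes.

```python
# A minimal disparity check, not a substitute for a full audit.
def favorable_rate(high_risk_labels: list) -> float:
    """Share of the group NOT labeled high risk."""
    return sum(not label for label in high_risk_labels) / len(high_risk_labels)

# Hypothetical "high risk" labels produced by a tool for two demographic groups:
group_a = [True, True, False, True, False]    # 40% favorable
group_b = [True, False, False, False, False]  # 80% favorable

ratio = favorable_rate(group_a) / favorable_rate(group_b)
print(f"Favorable-outcome ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:  # threshold borrowed from the four-fifths rule (assumption)
    print("Flag for review: possible disparate impact.")
```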

Advances in algorithms, artificial intelligence, and machine learning are crucial for our society. This technology will likely improve the quality of life for individuals around the world. The technology needs to continue to be developed, and it is important that companies continue to create innovative products. As computer technology becomes more ingrained in our lives, however, private corporations should ensure that the code they create benefits all community members. Companies should make their employees aware of the impact of implicit biases and ensure that their products are tested for these biases before they go to market.

Interested in changing the culture embedded in algorithms and AI? Try reaching out to the Algorithmic Justice League or lobbying your state legislature and city council to pass laws that discourage algorithmic discrimination.
