By Michael Huggins
The film Minority Report tells the story of a future society that uses technology to predict who will commit crimes. Before a crime can occur, the Pre-Crime police department uses those predictions to capture the would-be offender. Specifically, the department relies on knowledge acquired from three pre-cognitive beings to predict the time and place of each crime. This 2002 film continues to spark intellectual and ethical curiosity in the minds of many science-fiction fans. But Minority Report is just that: science fiction. Or is it?
Pre-crime technology is not science fiction. Many police departments around the country use predictive policing (without pre-cognitive beings) to predict which individuals might be more likely to commit crimes and where those crimes are more likely to occur. Modern-day predictive policing builds on traditional digital policing: it starts with historical maps, then layers on analysis that gives officers custom insight into how to adjust their own behavior. One example is ESRI’s ArcMap, which turns historical crime reports and officer interactions into a historical map. PredPol, one of the leading predictive-mapping companies, claims to predict crime ahead of time: its system displays hotspots indicating where and when crime is likely to occur, drawing on historical data about reported crime. According to Larry Samuels, CEO of PredPol, the purpose of predictive policing technology is to “get ahead of the crime.” But this so-called efficient system comes with a potential caveat: a disproportionate impact on people of color.
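The core mechanic described above is simple to sketch. PredPol's actual model is proprietary, so the following is only a minimal illustration of the general idea: count historical reported crimes per map grid cell and flag the highest-count cells as "hotspots." The cell names and report records here are hypothetical.

```python
from collections import Counter

# Hypothetical historical data: (grid_cell, reported_crime) records.
# Real predictive-policing systems are proprietary; this sketch only
# shows the general idea of ranking map cells by past reported crime.
reports = [
    ("cell_3", "theft"), ("cell_3", "assault"), ("cell_1", "theft"),
    ("cell_3", "theft"), ("cell_2", "vandalism"),
]

# Tally how many reports each grid cell has accumulated.
counts = Counter(cell for cell, _ in reports)

def hotspots(counts, top_n=2):
    """Return the top_n grid cells ranked by historical report volume."""
    return [cell for cell, _ in counts.most_common(top_n)]

print(hotspots(counts))  # the cells with the most past reports come first
```

Note that the ranking depends entirely on what was *reported and recorded*, not on how much crime actually occurred — which is exactly where the criticism in the next paragraph begins.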
The problem with predictive policing is that the data it learns from are generated by systemic police practices that have negatively impacted people of color for decades. Systems that rely on this historical crime data will continue to reflect those practices. Using an algorithm to predict crime will not only disproportionately impact people of color; it could also cause significant procedural due process violations. If the algorithm correctly predicts a crime, but the underlying data reflect biased policing, has the computer deprived an individual of due process under the law? Is the computer practicing discriminatory intent? Proponents of predictive policing believe these systems will produce unbiased policing, but in practice they concentrate further police surveillance on communities that have already been overpoliced. In addition to amplifying police bias against people of color, predictive policing may erode the few Fourth Amendment protections that exist for Americans.
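The feedback loop described above can be made concrete with a toy simulation. This is a hypothetical illustration, not any vendor's algorithm: every neighborhood has the *same* true crime rate, but patrols are dispatched to the cell with the most recorded incidents, and crime is only recorded where a patrol is present. One cell starts with a slightly inflated history, standing in for decades of over-policing.

```python
import random

random.seed(0)  # make the toy run reproducible

TRUE_RATE = 0.5                 # identical underlying crime rate everywhere
cells = ["A", "B", "C", "D"]
recorded = {"A": 5, "B": 1, "C": 1, "D": 1}  # "A" starts with a biased history

for day in range(200):
    # Dispatch the patrol to the current "hotspot" (most recorded incidents).
    patrolled = max(cells, key=lambda c: recorded[c])
    # Crime happens everywhere at the same rate, but it is only
    # *recorded* in the cell where the patrol is watching.
    if random.random() < TRUE_RATE:
        recorded[patrolled] += 1

print(recorded)  # cell A's lead keeps growing; B, C, D stay frozen
```

Because only the patrolled cell can add to its record, the initial disparity is self-reinforcing: the system "confirms" its own prediction no matter what is actually happening in the other neighborhoods.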
Under the Fourth Amendment, reasonable suspicion of a crime allows the police to briefly detain someone, and officers generally need probable cause to make an arrest. But the Fourth Amendment was not designed with predictive policing and big data in mind. Big-data collection for predictive policing lets the police build a case against an individual over time, even though collecting that data may not itself be illegal.
Predictive policing is here. But there may be ways to prevent mass constitutional violations. Privacy activists should discourage police departments from purchasing this technology. Where departments already use it, activists should encourage them not to treat an algorithm's answer as the sole truth about a particular individual. Legal scholars should also re-imagine the language of the Fourth Amendment. We do not live in the age of Minority Report. But if we are not careful, a future pre-crime police department might be at our doorstep.
Image Source: Buddy TV