Bias in law enforcement has long been a problem in America. The killing of George Floyd, an unarmed Black man, by Minneapolis police officers in May 2020 brought renewed attention to this fact, sparking waves of protest across the country and highlighting the ways in which those who are meant to “serve and protect” us do not serve all members of society equally.

With the rise of artificial intelligence (AI), a slew of new machine learning tools promises to help protect us, using data to track, quickly and precisely, those who may commit a crime before it happens. Past information about crime can serve as training material for machine learning algorithms to predict future crimes, and police departments are allocating resources toward prevention based on those predictions. The tools themselves, however, present a problem: the data used to “teach” these software systems is embedded with bias and serves only to reinforce inequality.

Here’s how: Black people are more likely than white people to be reported for a crime—whether the reporter is white or Black. This leads to Black neighborhoods being marked as “high risk” at a disproportionate rate.

Using data as a tool for policing is not new; departments have been doing it since the 1990s to help decide which communities are at “high risk.” If they knew where the most crime happened, the thinking went, police could put more resources into policing those areas.

The logic, however, is faulty: if more police are dispatched to a certain neighborhood, it follows that more crime will be recorded there. The result is a feedback loop that provides a skewed picture of where crime is actually taking place. (Another issue at hand is the allocation of police resources rather than social services. There is much debate, for instance, about whether the role of police in certain poor, Black neighborhoods tends to create a “police state” environment in which residents do not feel safe, and there are strong arguments that more funding for mental health or other social services would better serve these communities.) When machine learning algorithms are fed this “data” to train their predictive systems, they replicate the bias, reinforcing false ideas about which neighborhoods are “high risk.”
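To make the feedback loop concrete, here is a minimal, hypothetical sketch in Python: a toy model invented purely for illustration (the neighborhood names, rates, and reallocation rule are all assumptions, not any department’s or vendor’s actual system). Two neighborhoods generate identical real crime; the only asymmetry is a single extra report in one of them, yet reallocating patrols toward recorded crime steadily concentrates both patrols and recorded crime in that neighborhood.

```python
# Toy simulation, illustrative only (not any real predictive-policing system):
# neighborhoods A and B generate the SAME number of real incidents each round.
# Recorded crime depends on how many patrols are present to observe it, and
# each round one patrol unit is shifted toward whichever area has more
# recorded crime: the "data-driven" step.

TRUE_INCIDENTS = 100          # real incidents per round, identical in both areas
DETECTION_PER_PATROL = 0.04   # share of incidents each patrol unit observes
TOTAL_PATROLS = 20

patrols = {"A": 10, "B": 10}
recorded = {"A": 1, "B": 0}   # a single extra report in A seeds the loop

for rnd in range(1, 11):
    for area in ("A", "B"):
        detection_rate = min(1.0, patrols[area] * DETECTION_PER_PATROL)
        recorded[area] += round(TRUE_INCIDENTS * detection_rate)
    # Reallocation: move one unit toward the apparent "hot spot."
    hot, cold = ("A", "B") if recorded["A"] >= recorded["B"] else ("B", "A")
    if patrols[cold] > 0:
        patrols[hot] += 1
        patrols[cold] -= 1
    print(f"round {rnd:2d}  patrols A/B = {patrols['A']}/{patrols['B']}  "
          f"recorded A/B = {recorded['A']}/{recorded['B']}")

# Both areas have identical real crime, yet A's recorded total pulls further
# ahead every round, and a model trained on this record will call A "high risk."
```

The point of the sketch is that the recorded data never corrects itself: once deployment is skewed, the “evidence” of higher risk keeps accumulating in the over-policed area.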

Another problem with this approach is its reliance on past information. While the past may offer clues to future behavior, it takes no account of the potential for rehabilitation; it reinforces negative views and continues to punish those who have already paid their debt.

Police departments across the globe are using these software programs to pinpoint crime. Dozens of American tech companies sell this type of software to law enforcement agencies; one startup, Voyager Labs, collects social media information, including Facebook posts, emojis, and friend lists, and analyzes it to make connections, even cross-referencing it with private data to create a “holistic” profile that can be used to identify people who pose “risks.”

Inaccuracy and Bias Embedded in AI Systems

Automated-policing approaches are often inaccurate. In a 2018 trial conducted by the London Metropolitan Police, facial recognition software flagged 104 previously unknown people as suspected of committing crimes. Only two of the 104 matches were accurate.

“From the moment a police officer wrongly identifies a suspect until the moment the officer realizes their error, significant coercive action can take place: the suspect can be arrested, brought to a police station and detained. It can be terrifying, with irreversible consequences, including human rights violations,” Edward Santow writes in The Australian Quarterly.

Facial recognition and related systems have also demonstrated bias against people of color. In one egregious example, Facebook’s AI labeled Black men “primates,” which the company told the BBC “was clearly an unacceptable error.”

Lack of Human Oversight in Automated Processes

Automated systems remove human oversight. As law enforcement agencies increasingly rely on these deep learning tools, the tools themselves take on an authority of their own, and their predictions often go unquestioned. This has resulted in what Kate Crawford and Jason Schultz, in their report “AI Systems as State Actors,” call an “accountability gap,” which “may result in both state and private human employees having less knowledge or direct involvement in the specific decisions that cause harm.”

The tools themselves can come from various sources: created “in-house” by government agencies, developed by contractors, or even donated, Crawford and Schultz point out. With these various configurations, there is little clarity about who should be held accountable when the systems fail.

A new project by Columbia University, undertaken with the AI Now Institute, the New York University School of Law’s Center on Race, Inequality, and the Law, and the Electronic Frontier Foundation, recently set out “to conduct an examination of current United States courtroom litigation where the use of algorithms by government was central to the rights and liberties at issue in the case.” The researchers focused on cases in which government agencies were already using AI, in the areas of Medicaid and disability benefits, public teacher evaluations, and criminal risk assessments, and examined how the systems were used by the humans involved. The authors concluded:

These AI systems were implemented without meaningful training, support, or oversight, and without any specific protections for recipients. This was due in part to the fact that they were adopted to produce cost savings and standardization under a monolithic technology-procurement model, which rarely takes constitutional liability concerns into account.

The focus of the algorithms was itself biased: built to cut budgets, they targeted those most likely to need support. “Thus, an algorithmic system itself, optimized to cut costs without consideration of legal or policy concerns, created the core constitutional problems that ultimately decided the lawsuits.” Like “traveling sales representatives,” the authors remarked, these automated tools carry information from one location to another, applying it to new populations and increasing the potential for bias to skew the results.

“As AI systems rely more on deep learning, potentially becoming more autonomous and inscrutable, the accountability gap for constitutional violations threatens to become broader and deeper.”

This raises the question: How should the software companies themselves be held accountable? When automated systems are given free rein and human oversight falls away, should tech companies assume responsibility for how their products are used? The law is still unclear on this point.

“When challenged, many state governments have disclaimed any knowledge or ability to understand, explain, or remedy problems created by AI systems that they have procured from third parties,” Crawford and Schultz argue. “The general position has been ‘we cannot be responsible for something we don’t understand.’ This means that algorithmic systems are contributing to the process of government decision making without any mechanisms of accountability or liability.”

A failure to address this accountability gap should mean a halt in the use of these tools.

The Surveillance State

For all of the glaring human rights problems with automated policing in America, we live in a country whose Constitution builds in protections against the abuse of police power. In countries without such protections, automated policing technology can be put to far worse purposes. In China, for instance, facial recognition is used for everything from purchases to traffic regulation, and the surveillance images are stored. “China sells its facial recognition technology to authoritarian governments who wish to track their own citizens. This Chinese tech is relatively inexpensive to acquire and works quite well, being employed furtively, without public detection or uproar,” writes Maria Stefania Cataleta in a report for the East-West Center.

Thankfully, some law enforcement agencies are taking these concerns seriously. In September 2021, for instance, the Toronto Police Services Board announced it would draft a policy to govern the use of AI technology. Damning reports on the Chicago Police Department have likewise led it to suspend its use of predictive policing. All law enforcement agencies should take this issue seriously: it could mean the difference between putting a guilty person behind bars and imprisoning an innocent one.


Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

AQ: Australian Quarterly, Vol. 91, No. 4, A Cause for Celebration: A Paradigm Shift in Macroeconomics is Underway (October–December 2020), pp. 10–17
Australian Institute of Policy and Science
Identifying Systemic Bias in the Acquisition of Machine Learning Decision Aids for Law Enforcement Applications, Jan. 1, 2021
RAND Corporation
Humane Artificial Intelligence: The Fragility of Human Rights Facing AI, Jan. 1, 2020
East-West Center