The ethics of police using technology to predict future crimes

Using computer models to determine where crime is most likely to occur could reinforce police biases about neighbourhoods with ethnic or racial minorities

When London’s first professional police force was established in 1829 through Robert Peel’s Metropolitan Police Act, officers were expected to adhere to nine principles of policing. Among them were the ideas that the absence of crime should be the ultimate measure of success; that persuasion, advice and warning, not force, are the preferred means; and that public respect, admiration and approval are essential to ethical and effective policing. Since then, police agencies have experimented with many strategies to improve their effectiveness. Among recent strategies, predictive policing is receiving much attention and has already been adopted, in one form or another, by many police agencies. How well does it stand up to these ideals of effective and ethical policing?

Predictive policing involves the use and analysis of data to predict where crimes are most likely to occur in the future. Various kinds of data are used—including numbers, types and locations of past arrests, public reports of crime, requests for police assistance, neighbourhood features (such as bars and bus stops) and, in some cases, weather patterns and lunar cycles. Social media activity has also been incorporated into some models to generate real-time indicators of potential crime. Computer models assign and adjust weights to different variables, assess data for patterns and leading indicators, and produce predictive maps showing where crime is most likely to occur. Based on the predictions, police agencies can increase patrols and allocate other resources to meet anticipated risks.
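To make those mechanics concrete, here is a minimal sketch of such a model in Python. It is an illustration under assumed conditions, not any vendor’s actual system: the grid cells, features and labels are synthetic, and a simple logistic regression stands in for whatever proprietary model an agency might use.

```python
# Minimal sketch of a hot-spot scoring model: a logistic regression
# that scores city grid cells by predicted crime risk. All features
# and data here are synthetic illustrations, not a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One row per grid cell: [past arrests, public crime reports,
# calls for service, nearby bars/bus stops].
X = rng.poisson(lam=[3.0, 2.0, 4.0, 1.0], size=(500, 4))

# Synthetic label: whether a crime was recorded in the cell the
# following week (a noisy function of the features, for illustration).
y = (X @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(size=500)) > 3.0

model = LogisticRegression(max_iter=1000).fit(X, y)

# The fitted coefficients play the role of the adjusted variable
# weights described above; the scores become the "predictive map"
# used to direct patrols toward the highest-risk cells.
risk = model.predict_proba(X)[:, 1]
print("Ten highest-risk grid cells:", np.argsort(risk)[-10:])
```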

Does it work? The evidence is mixed, but the future looks promising. Some studies suggest that predictive policing has no effect on crime, and some analysts claim that, while crime might decline initially, improvements are often short-lived. Other studies conclude that well-designed predictive policing models reduce crime significantly and that the improvements persist. As the models and data sources improve, predictive policing is likely to become more effective.

Still, a major concern is that predictive policing models will reinforce and exacerbate biases in crime data and police practice. When models draw on flawed or inappropriate data, they may recommend increasing police activity in neighbourhoods with higher proportions of ethnic or racial minorities—not because the risk of crime is higher, but because the input data are biased. If so, our expectation that citizens be treated with equal concern and respect would be in jeopardy.

A simulation study focused on drug offences in Oakland found that a predictive policing model’s algorithm would recommend sending officers mainly to African American and Latino neighbourhoods, even though, according to the U.S. National Survey on Drug Use and Health, rates of drug use are essentially the same across Oakland neighbourhoods. The model uses data on previous arrests by location, which are at least as likely to capture patterns in police activity as the actual prevalence of drug-related offences. That is, the data reflect who got arrested, not necessarily who was involved in drugs. Because Oakland police have focused disproportionate attention on African American and Latino neighbourhoods in the past, using arrest data simply reinforces bias towards those neighbourhoods.
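The feedback loop is easy to demonstrate. The toy simulation below, with invented parameters rather than Oakland data, gives two neighbourhoods identical underlying offence rates but a biased arrest history, then allocates patrols in proportion to recorded arrests: the initial bias persists, because arrests track patrol presence rather than offending.

```python
# Toy simulation of the arrest-data feedback loop: two neighbourhoods
# with identical offence rates, but patrols allocated by past arrests.
# All numbers are illustrative assumptions, not real crime data.
import numpy as np

rng = np.random.default_rng(1)
offence_rate = np.array([0.05, 0.05])  # identical in both neighbourhoods
arrests = np.array([10.0, 30.0])       # biased historical arrest record

for week in range(52):
    # Allocate 100 patrol-hours in proportion to recorded arrests.
    patrols = 100 * arrests / arrests.sum()
    # New arrests scale with patrol presence, not just with offending.
    arrests += rng.poisson(patrols * offence_rate)

# Despite equal offence rates, the 25/75 patrol split persists: the
# data record who was arrested, not who offended.
print("Patrol shares after a year:", (arrests / arrests.sum()).round(2))
```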

Predictive policing can also influence the attitudes and behaviour of officers assigned to high-risk neighbourhoods in ways that heighten danger for both residents and police. An officer who witnesses a nervous 16-year-old cutting through a backyard in a neighbourhood deemed “low-risk” might dismiss it as nothing more than a kid taking a shortcut to school. The officer is anchored to the predictive model’s conclusion that the neighbourhood is low-risk, and so views subsequent activity through a more positive lens. But in a neighbourhood labelled “high-risk,” the same officer might assume that a 16-year-old backyard-cutter has committed a crime and that she needs to intervene. She is anchored to the model’s “high-risk” conclusion and views subsequent activity through a negative lens. Mutual suspicion and tension between police and residents of certain neighbourhoods are not new, but predictive policing risks giving that tension and its consequences the undeserved appearance of scientific and statistical validity.

Even if predictive policing improves, contributing to lower crime and avoiding the most obvious kinds of bias, broader accountability will be a problem. As models are fed larger datasets and conduct increasingly opaque analyses, the ability of police agencies to explain and provide reasons for their activities, and our ability to assess the fairness of predictive policing’s results, will diminish. Although accountability for decisions made by artificial intelligence is not a problem unique to predictive policing, the need for police agencies to maintain public support for their activities and very existence means the stakes for them are especially high. Indeed, the use of force in a liberal democracy requires justification in terms of reasons that the public can understand and could accept.

Robert Peel was clearly correct in recognizing that the ability of police to perform their duties depends on public approval and cooperation. The links he made between effective and ethical policing should be acknowledged and adherence to the spirit of his nine principles maintained. Less clear is whether predictive policing can overcome its limitations and become both an effective and ethical tool for police.


Dan Munro is a Visiting Scholar and Director of Policy Projects in the Innovation Policy Lab at the Munk School of Global Affairs at the University of Toronto. Listen to The Ethics Lab on Ottawa Today with Mark Sutcliffe, Thursdays at 11 EST. @dk_munro
