The deaths of Oscar Grant III, Eric Garner, Michael Brown, and scores of others at the hands of law enforcement over the past few years have shaken Americans’ faith in their police departments. According to a 2014 Gallup poll, 44 percent of Americans said they have only some or very little confidence in the police.

It does not bode well for democracy when a population so widely distrusts those sworn to serve and protect it.

So how did we get here, and what can we do about it? Part of the answer has to do with law-enforcement methods. For decades, police forces in many cities have relied on strategies such as broken-windows policing and stop-and-frisk. The idea was to crack down on minor offenses in order to discourage more serious crimes. But decisions about where and when to deploy these methods were based as much on an officer's observations and hunches (some would argue more) as on hard data about previous crimes.

Such policies—once highly praised—are now falling out of favor. Critics maintain that they drive officers to unfairly (and in most cases unwittingly) target minorities. Those biases, real and perceived, are the fulcrum of our national distrust.

But a new model is emerging. Over the past two decades, the National Institute of Justice has awarded $24 million to organizations to map crime and develop so-called predictive policing: using data to determine where future crimes might occur and patrolling those areas proactively.

One front-runner in this field is Hitachi Data Systems, which has developed a software system called Predictive Crime Analytics. In addition to crime and arrest reports, PCA can continuously comb license-plate scans, weather and traffic reports, security camera footage, and Twitter. It layers those data sets onto a digital map so officers can monitor patterns in real time. Instead of following a hunch about who looks suspicious, officers can rely on algorithms to anticipate and act on trends in the data. “PCA is anonymous,” says Mark Jules, who led the system’s development at Hitachi. “It shows you an area. It does not tell you to look for a person.”
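Hitachi has not published PCA's internals, but the basic idea of place-based hotspot prediction can be sketched in a few lines. The Python below is an illustrative toy, not Hitachi's method: it bins past incidents into grid cells on a map and scores each cell by its incident count. The cell size and the helper names are invented for this example.

```python
from collections import Counter

# Toy sketch of grid-based hotspot scoring -- an illustration of the
# general predictive-policing idea, NOT Hitachi's actual PCA algorithm.
# Each incident is a (latitude, longitude) pair; the city is divided into
# a grid, and cells with the most recorded incidents score highest.

CELL_SIZE = 0.005  # grid cell size in degrees (roughly 500 m); an assumed value

def cell_for(lat: float, lon: float) -> tuple[int, int]:
    """Map a coordinate to its grid cell."""
    return (int(lat / CELL_SIZE), int(lon / CELL_SIZE))

def hotspot_scores(incidents: list[tuple[float, float]]) -> Counter:
    """Count incidents per cell; higher counts flag cells for extra patrols."""
    return Counter(cell_for(lat, lon) for lat, lon in incidents)

# Example: three incidents, two of which fall in the same cell.
incidents = [(37.8044, -122.2712), (37.8041, -122.2709), (37.8100, -122.2600)]
for cell, score in hotspot_scores(incidents).most_common(2):
    print(f"cell {cell}: {score} incident(s)")
```

A real system would weight recent incidents more heavily and fold in the other data streams, but the output is the same in kind: an area on a map, not a person.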

In theory, predictive policing could add more analytical oversight to old-school police work. It could also allow law enforcement to monitor every part of a city in one fell swoop. But to realize those benefits, the data fed into the system needs to be neutral in the first place. “It is tempting to think that you put data into a system and out comes an unbiased analysis on the other end,” says Rachel Levinson-Waldman, senior counsel at New York University’s Brennan Center for Justice. “But the data is coming from somewhere, and there is a concern these algorithmic models will re-create biased or racially disparate patterns.”
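Her point is easy to demonstrate. In the toy simulation below (all numbers invented), two neighborhoods have identical true crime rates, but one starts with more recorded incidents simply because it was patrolled more heavily. If patrols are then allocated according to recorded crime, the skew never corrects itself.

```python
# Toy illustration of the feedback loop Levinson-Waldman describes.
# Neighborhoods A and B have the SAME underlying crime rate, but A was
# historically patrolled twice as heavily, so twice as many crimes
# were *recorded* there. All figures are invented for this sketch.

recorded = {"A": 100, "B": 50}  # historical records, skewed by past patrol levels

for year in range(3):
    total = sum(recorded.values())
    # Allocate 10 patrols in proportion to recorded crime...
    patrols = {n: round(10 * c / total) for n, c in recorded.items()}
    # ...and more patrols mean more crimes observed and recorded,
    # even though the underlying rates are identical.
    for n in recorded:
        recorded[n] += patrols[n] * 5
    print(year, patrols, recorded)
```

Run it and neighborhood A keeps drawing roughly twice the patrols every year; the model faithfully reproduces the bias it was fed.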

That’s a valid concern, but the only way to know if data can enforce justice without reinforcing bias is to try. So try we must.

Next year, a beta version of PCA will launch in six yet-to-be-determined cities. (Washington, D.C., is rumored to be one.) PCA can’t ensure that mistakes won’t still happen, but for citizens who feel under siege, it should come as some relief that local police, one hopes, will no longer target them unjustly.

This article was originally published in the January/February 2016 issue of Popular Science.