In 1975 the late legal philosopher Ronald Dworkin wrote about an imaginary super-judge he named Hercules. Possessed of “superhuman skill, learning, patience and acumen,” Dworkin wrote, Judge Hercules would always come to a just decision.
Humans are fallible, judges included.
Algorithms could bring the dream of an infallible judge closer to reality.
Could algorithms help improve the criminal justice system? In some courtrooms, judges already consult risk-assessment algorithms, which use information about a defendant's background to predict how likely they are to reoffend.
Proponents of risk-assessment algorithms believe that they offer a more scientific approach to sentencing and could take bias out of the process. But others say that the algorithms reinforce bias and violate due process, because the private companies that design them don't reveal how their scores are calculated.
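What does such a tool actually compute? The commercial products are proprietary, but risk scores of this kind are typically statistical models rather than hand-written rules. The sketch below is purely illustrative: the feature names and weights are invented for this example and do not reflect any real product.

```python
import math

# Hypothetical weights, invented for illustration only.
# Real risk-assessment tools keep their factors and weights secret.
WEIGHTS = {
    "prior_convictions": 0.35,    # more priors -> higher score
    "age_at_first_arrest": -0.04, # older at first arrest -> lower score
    "employed": -0.5,             # 1 = currently employed
}
BIAS = -1.0

def risk_score(features):
    """Logistic-regression-style estimate of reoffense risk, between 0 and 1."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

defendant = {"prior_convictions": 3, "age_at_first_arrest": 19, "employed": 0}
score = risk_score(defendant)
```

Even this toy version shows why critics worry: the weights encode value judgments (how much should a prior conviction count against someone?), and if they stay secret, a defendant has no way to challenge them.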
Algorithms impact our lives in myriad ways, and increasingly so. There are the little things, like the content of our Facebook feeds or the movies Netflix recommends we watch, but algorithms also affect us in bigger ways, like determining our credit scores, whether or not we qualify for loans, or whether our resume gets seen by a potential employer.
“Just” is not a word that many associate with the American criminal justice system these days. According to a 2015 Gallup poll, confidence in the criminal justice system has fallen to an abysmal 23%. Confidence in the police is comparatively high at 52%, but that’s its lowest point in 22 years. The system has been beset by controversy, from stop-and-frisk to kids-for-cash, from Ferguson to the nationwide Black Lives Matter movement. Shows like the podcast Serial and, more recently, the Netflix series Making a Murderer tell tales of the (possibly) wrongly convicted and have left many wondering: could I be next?
Dworkin’s Judge Hercules was a didactic tool, but what if there could be morally flawless judges, cops, and district attorneys, free from the biases and human subjectivity that wreak so much havoc on American lives?
With the help of artificial intelligence, that may not be far-fetched.
Parole boards in at least fifteen states already rely on software to determine whether prisoners should be eligible for parole. This comes after a 2011 Israeli study showed that parole board decisions often hinged on factors as arbitrary as whether board members had eaten lunch yet. In West Virginia, felony convicts are required to undergo computerized risk assessments, which judges then use to determine appropriate sentencing.
A UC Davis researcher recently published a paper proposing “discretionless policing,” a system in which decisions about motor vehicle stops are made by A.I. systems rather than officers, thus eliminating racial and other biases in traffic stops.
Maybe the question shouldn’t be whether we use algorithms in courts, but how we use them. ProPublica reports that these tools were originally intended to give judges guidance about treatment options and probation plans in order to keep people from reoffending. Some jurisdictions, such as Napa County in California, continue to use the tools for this purpose rather than for sentencing.