Like the clairvoyants in the film “Minority Report,” computer-generated risk assessment algorithms aim to predict the likelihood that someone will commit crime in the future.
These risk assessment tools are used to determine whether someone awaits trial in jail or goes home, what their sentence will be, and whether parole is granted, among other decisions. Proponents argue that by replacing ad hoc human intuition with scientifically backed methods of predicting future offending, we can lower incarceration rates without affecting public safety. But critics are skeptical of turning such important decisions over to computers, and worry that such tools will entrench or even worsen racial inequalities in criminal justice.
Bail reform is long overdue: on any given day, almost 500,000 presumed-innocent people are sitting in jail awaiting trial. It’s estimated that 9 out of 10 of these people are detained due to an inability to pay bail. Even a few days in jail can result in loss of employment and a destabilization of life. Furthermore, those detained pretrial are at a significant disadvantage in their case. They are more likely to plead guilty, receive lengthy sentences, and accrue significant court debt.
The central goals of the Harris-Paul bail reform bill — ensuring that no one is detained simply because they are poor, and restoring a presumption of release for most defendants — are laudable. However, it’s naïve to assume that these goals can be achieved simply by adopting a risk assessment algorithm. Risk assessments are tools: no more, and no less. Their impacts depend on how they are used.
In New Jersey, the use of pretrial risk assessment contributed to a dramatic decrease in the number of people detained pretrial. In Lucas County, Ohio, by contrast, the pretrial detention rate actually increased after risk assessment was adopted. In Kentucky, a law making pretrial risk assessment mandatory led to a small decrease in the detention rate, but the effect dissipated over time as judges returned to their previous habits. Despite a clearly worded statute declaring a presumption of release without money bail for all low- and moderate-risk defendants, judges ignored this presumption more often than not.
Ultimately, risk assessment algorithms are used by human beings, within a particular institutional environment. Ensuring that risk assessment will help reduce our outrageously overcrowded jail populations requires thinking carefully about context, incentives, and the details of implementation. These details may vary from place to place, but a few simple guidelines will help.
First, risk assessment tools can help identify a group of defendants for immediate release after arrest, with no bail hearing necessary. By circumventing the idiosyncratic whims of the judge for defendants who have no history of serious or violent crime, we can accelerate the process of release.
Second, no one should be detained solely on the basis of the risk algorithm. A proper hearing, with evidence examined and defense counsel present, is necessary. It’s a sad truth that in many parts of the country, people lose their freedom on the basis of a hearing that lasts only one minute.
Finally, judges need to be held accountable for how they use the risk assessments, particularly in jurisdictions where they are elected by popular vote. This requires public transparency about how risk scores are being used and what their impacts are. Such transparency allows the community to evaluate whether the goals of the bail reform movement are being met.
Megan Stevenson is an Assistant Professor of Law at George Mason University Scalia Law School. Her research uses advanced econometric techniques to evaluate criminal law and policy in areas such as bail, pretrial detention, juvenile justice, and risk assessment.