As more states adopt risk assessment tools to gauge defendants' likelihood of recidivism, questions about the tools' efficacy, ethics, and racial bias continue to mount
The Texas Alliance for Safe Communities today issued the following press release outlining the growing concerns surrounding the use of risk assessment tools in determining the fate of criminal defendants.
Risk assessment tools are computer algorithms that estimate the statistical likelihood that a defendant will re-offend or fail to appear in court while out of jail, assigning each defendant a score based on that risk. These scores are used to determine whether a defendant may be released from jail, as well as the conditions of release. In recent years, it has become increasingly evident that risk assessment tools perform far worse in practice than in the abstract. The pitfalls of using such tools include:
- They have led to an increase in failure-to-appear rates and pretrial crime;
- They are no more accurate at predicting risk than the average person with no criminal justice experience;
- They are privately owned, yet costly to taxpayers and relatively unproven;
- They may actually increase incarceration rates;
- They remove jurisdiction from seasoned, elected judges; and
- They are inherently biased against minority defendants.
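To make concrete what these tools do, the score-and-threshold process described above can be sketched as a simple logistic model. This is an illustration only, not any vendor's actual formula; the feature names, weights, and threshold below are entirely hypothetical:

```python
import math

# Hypothetical weights -- real tools use proprietary, far more complex models.
WEIGHTS = {"prior_arrests": 0.35, "prior_fta": 0.50, "age_under_25": 0.40}
BIAS = -2.0

def risk_score(features: dict) -> float:
    """Logistic score in [0, 1]: modeled probability of re-offense or FTA."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def release_decision(features: dict, threshold: float = 0.5) -> str:
    """Map the numeric score to a coarse release recommendation."""
    return "detain" if risk_score(features) >= threshold else "release"

defendant = {"prior_arrests": 2, "prior_fta": 1, "age_under_25": 1}
print(round(risk_score(defendant), 3), release_decision(defendant))
```

The point of the sketch is that a single numeric cutoff, not a judge, ends up driving the detain-or-release recommendation.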
As more states consider implementing risk assessment tools, it has become increasingly evident that these computer algorithms would be tremendously detrimental to Texans across the state.
The Dangers of Risk Assessment Tools
In Kentucky, Risk Assessment Tools Have Increased Failure-To-Appear Rates And Pretrial Crime. “Using rich data on more than one million criminal cases, the paper shows that a 2011 [Kentucky] law making risk assessment a mandatory part of the bail decision led to a significant change in bail setting practice, but only a small increase in pretrial release…Furthermore, the increase in releases was not cost-free: failures-to-appear and pretrial crime increased as well.” (Megan Stevenson, “Assessing Risk Assessment In Action,” Minnesota Law Review Vol. 103, 8/29/17)
According To A Study In Science Advances, Risk Assessment Algorithms Were No More Successful In Predicting Recidivism Or Risk Than The Average Person With Little To No Criminal Justice Experience. “In January 2018, an undergraduate computer-studies major and her advisor published a study that challenged many commonly-held beliefs about the relative accuracy of human intuition and algorithmic predictions. Using an experimental method that was explicitly set up as a horse race between survey respondents and algorithmic risk assessment models, they found no evidence that algorithms are more accurate in predicting recidivism than human beings…The authors found no evidence that any of the algorithms could outperform the predictions of a random group of online respondents.” (Megan Stevenson, “Assessing Risk Assessment In Action,” Minnesota Law Review Vol. 103, 8/29/17)
Risk Assessment Tools Are Still Relatively New And Unproven – Texas Can’t Afford To Gamble Billions Of Dollars To Overhaul A System At The Expense Of Taxpayers And Public Safety. “Risk assessment in practice is different from risk assessment in the abstract, and its impacts depend on context and details of implementation. If indeed risk assessment is capable of producing large benefits, it will take research and experimentation to learn how to achieve them. Such a process would be evidence-based criminal justice at its best: not a flocking towards methods that bear the glossy veneer of science, but a careful and iterative evaluation of what works and what does not.” (Megan Stevenson, “Assessing Risk Assessment In Action,” Minnesota Law Review Vol. 103, 8/29/17)
Researchers At Rice University Have Found That Risk Assessment Tools May Increase Incarceration Rates. (Robert Werth, “Theorizing the Performative Effects of Penal Risk Technologies,” Rice University, 5/3/18)
Risk Assessment Tools Unfairly Judge Defendants Not Only On The Crimes They Have Committed, But Also On Crimes They Haven’t Yet Committed. “There’s a second component I’m suggesting that perhaps what makes it difficult for risk assessment to reduce mass incarceration and mass supervision is how they operate — they actually make everybody who’s assessed risky. There is no possibility for zero risk… In theory, risk assessments had good motivations behind them to reduce biases, but all they do is reproduce those biases in policing and prosecution. And more problematically, they give these instruments the gloss, the promise of not being biased, and they hide the bias that already constituted them.” (Keri Blakinger, “Rice researcher finds risk assessment tools may increase incarceration rates,” Houston Chronicle, 6/25/18)
Risk Assessment Tools Have Stripped Authority From Elected Judges, Who Have Traditionally Determined Whether Or Not A Defendant Is Low Risk. “Before these instruments were popular, these decisions were made with clinical or subjective judgments by personnel. That’s a little more binary — someone is a risk or they’re not. Risk assessments have changed the question from ‘is this person dangerous or not’ to ‘how dangerous is this person.’ At some level that’s a very profound shift. So what I’m arguing is that risk-assessment instruments are part of our background that feed into the idea that criminals and offenders are inherently dangerous beings who deserve what they get.” (Keri Blakinger, “Rice researcher finds risk assessment tools may increase incarceration rates,” Houston Chronicle, 6/25/18)
Risk Assessment Tools Have Proven To Be Biased Against African-Americans. “We also turned up significant racial disparities, just as Holder feared. In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways. The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants. White defendants were mislabeled as low risk more often than black defendants.” (Julia Angwin and Jeff Larson, “Machine Bias,” ProPublica, 5/23/16)
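The disparity ProPublica describes is a difference in error rates broken out by race: among people who did not re-offend, how many were wrongly flagged as high risk? The comparison can be sketched as below. The records here are invented toy data chosen so the disparity mirrors the “almost twice the rate” finding; ProPublica analyzed real COMPAS records.

```python
# Toy records: (group, predicted_high_risk, actually_reoffended).
# Invented for illustration only -- not ProPublica's data.
records = [
    ("black", True,  False), ("black", True,  False), ("black", True,  True),
    ("black", False, False), ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", False, True),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in `group` wrongly flagged as high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

# With this toy data, the black false-positive rate is double the white rate.
print(false_positive_rate("black"), false_positive_rate("white"))
```

A tool can match overall accuracy across groups while still distributing its mistakes unevenly; that is precisely the pattern the quote describes.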
Artificial Intelligence Can Amplify Human Biases, Potentially Doubling Down On Already-Present Discrimination Problems In The Criminal Justice System. “The most powerful algorithms being used today ‘haven’t been optimized for any definition of fairness,’ says Deirdre Mulligan, an associate professor at the University of California at Berkeley who studies ethics in technology.” (Jonathan Vanian, “Unmasking A.I.’s Bias Problem,” Fortune, 6/25/18)