Risk assessment algorithms lie at the heart of criminal justice reform efforts to tackle mass incarceration. The newest application of risk tools centers on the pretrial stage as a means to reduce both reliance upon wealth-based bail systems and rates of pretrial detention. Yet the ability of risk assessment to achieve the reform movement's goals will be challenged if the risk tools do not perform equitably for minorities. To date, little is known about the racial fairness of these algorithms as they are used in the field. This Article offers an original empirical study of a popular risk assessment tool to evaluate its race-based performance. The case study is novel in employing a two-sample design with large datasets from two demographically distinct jurisdictions, one with a supermajority white population and the other a supermajority Black population.
Statistical analyses examine whether, in these jurisdictions, the algorithmic risk tool results in disparate impact, exhibits test bias, or displays differential validity in the form of unequal performance metrics for white versus Black defendants. The study's results inform the broader knowledge base about risk assessment practices in the field and contribute to the debate over algorithmic fairness in an important setting where one's liberty interests may be infringed despite never having been adjudicated guilty of any crime.
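To make the abstract's analytic framing concrete, the following is a minimal illustrative sketch, not the Article's actual methodology or data, of how group-wise checks of this kind (a disparate impact comparison of high-risk classification rates and differential validity metrics such as AUC and false positive rates) might be computed. The column names, the risk-score scale, the cutoff, and the synthetic data are all assumptions introduced purely for illustration.

```python
# Hypothetical sketch of group-wise fairness checks of the kind described above.
# Synthetic data; column names, score scale, and cutoff are assumed for illustration.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: risk score (0-10), race, and observed pretrial failure.
n = 5000
df = pd.DataFrame({
    "race": rng.choice(["white", "Black"], size=n),
    "risk_score": rng.integers(0, 11, size=n),
})
df["failure"] = rng.binomial(1, 0.1 + 0.03 * df["risk_score"])

HIGH_RISK_CUTOFF = 6  # assumed threshold for a "high risk" classification

results = {}
for race, grp in df.groupby("race"):
    high = grp["risk_score"] >= HIGH_RISK_CUTOFF
    no_failure = grp["failure"] == 0
    results[race] = {
        # Disparate impact input: share of the group classified as high risk.
        "high_risk_rate": high.mean(),
        # Differential validity: does the score rank-order outcomes equally well?
        "auc": roc_auc_score(grp["failure"], grp["risk_score"]),
        # False positive rate: flagged high risk despite no observed failure.
        "fpr": (high & no_failure).sum() / no_failure.sum(),
    }

summary = pd.DataFrame(results).T
# Disparate impact ratio: four-fifths-rule-style comparison of selection rates.
di_ratio = summary.loc["Black", "high_risk_rate"] / summary.loc["white", "high_risk_rate"]
print(summary.round(3))
print(f"Disparate impact ratio (Black/white high-risk rates): {di_ratio:.2f}")
```

In an actual validation study, the synthetic frame would be replaced with observed risk scores and pretrial outcomes, and differences in these metrics across racial groups would be tested for statistical significance rather than merely tabulated.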