The Hidden Inconsistencies Introduced by Predictive Algorithms in Judicial Decision Making

Travis Greene*, Galit Shmueli, Jan Fell, Ching-Fu Lin, Mark Lantry Shope, Han-Wei Liu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Algorithms, from simple automation to machine learning, have been introduced into judicial contexts ostensibly to increase the consistency and efficiency of legal decision making. In this paper, we describe four types of inconsistencies introduced by risk prediction algorithms. These inconsistencies threaten to violate the principle of treating similar cases similarly and often arise from the need to operationalize legal concepts and human behavior into the specific measures used to build and evaluate predictive algorithms. These inconsistencies, however, are likely to remain hidden from the algorithms' end users: judges, parole officers, lawyers, and other decision makers. We describe each inconsistency and its sources, and propose possible indicators and solutions. We also consider the inconsistencies arising from algorithm use in light of current trends toward more autonomous algorithms and less human-understandable behavioral big data. We conclude by discussing judges' and lawyers' duties of technological ("algorithmic") competence and call for greater alignment between the evaluation of predictive algorithms and the corresponding judicial goals.
Original language: English
Journal: Big Data and Society
State: Submitted - 1 Dec 2020
