TY - JOUR
T1 - The Hidden Inconsistencies Introduced by Predictive Algorithms in Judicial Decision Making
AU - Greene, Travis
AU - Shmueli, Galit
AU - Fell, Jan
AU - Lin, Ching-Fu
AU - Shope, Mark Lantry
AU - Liu, Han-Wei
PY - 2020/12/1
Y1 - 2020/12/1
AB - Algorithms, from simple automation to machine learning, have been introduced into judicial contexts ostensibly to increase the consistency and efficiency of legal decision making. In this paper, we describe four types of inconsistencies introduced by risk prediction algorithms. These inconsistencies threaten to violate the principle of treating similar cases similarly, and they often arise from the need to operationalize legal concepts and human behavior into specific measures that enable the building and evaluation of predictive algorithms. These inconsistencies, however, are likely to be hidden from their end users: judges, parole officers, lawyers, and other decision makers. We describe the inconsistencies and their sources, and we propose possible indicators and solutions. We also consider the issue of algorithm-induced inconsistencies in light of current trends towards more autonomous algorithms and less human-understandable behavioral big data. We conclude by discussing judges' and lawyers' duties of technological ("algorithmic") competence, and we call for greater alignment between the evaluation of predictive algorithms and corresponding judicial goals.
UR - https://arxiv.org/abs/2012.00289
M3 - Article
SN - 2053-9517
JO - Big Data & Society
JF - Big Data & Society
ER -