The Hidden Inconsistencies Introduced by Predictive Algorithms in Judicial Decision Making

Travis Greene*, Galit Shmueli, Jan Fell, Ching-Fu Lin, Mark Lantry Shope, Han-Wei Liu

*Corresponding author for this work

Research output: Article, peer-reviewed

Abstract

Algorithms, from simple automation to machine learning, have been introduced into judicial contexts ostensibly to increase the consistency and efficiency of legal decision making. In this paper, we describe four types of inconsistencies introduced by risk prediction algorithms. These inconsistencies threaten to violate the principle of treating similar cases similarly and often arise from the need to operationalize legal concepts and human behavior into specific measures that enable the building and evaluation of predictive algorithms. These inconsistencies, however, are likely to be hidden from their end-users: judges, parole officers, lawyers, and other decision-makers. We describe these inconsistencies and their sources, and propose possible indicators and solutions. We also consider algorithm-induced inconsistencies in light of current trends towards more autonomous algorithms and less human-understandable behavioral big data. We conclude by discussing judges' and lawyers' duties of technological ("algorithmic") competence and by calling for greater alignment between the evaluation of predictive algorithms and corresponding judicial goals.
Original language: English
Journal: Big Data and Society
Publication status: Submitted - 1 December 2020
