TY - GEN
T1 - Playing the blame game with robots
AU - Kneer, Markus
AU - Stuart, Michael T.
N1 - Publisher Copyright:
© 2021 ACM.
PY - 2021/3/8
Y1 - 2021/3/8
AB - Recent research shows, somewhat astonishingly, that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]-[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that people ascribe moral blame to AI systems because they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system runs a risk of poisoning people by using a novel type of fertilizer. Manipulating the computational (or quasi-cognitive) abilities of the AI system in a between-subjects design, we tested people's willingness to ascribe knowledge of a substantial risk of harm (i.e., recklessness) and blame to the AI system. Furthermore, we investigated whether the ascription of recklessness and blame to the AI system would influence the perceived blameworthiness of the system's user (or owner). In an experiment with 347 participants, we found (i) that people are willing to ascribe blame to AI systems in contexts of recklessness, (ii) that blame ascriptions depend strongly on the willingness to attribute recklessness, and (iii) that the latter, in turn, depends on the perceived "cognitive" capacities of the system. Furthermore, our results suggest (iv) that the higher the computational sophistication of the AI system, the more blame is shifted from the human user to the AI system.
KW - Artificial intelligence
KW - Ethics of AI
KW - Mens rea
KW - Moral judgment
KW - Recklessness
KW - Theory of mind
UR - http://www.scopus.com/inward/record.url?scp=85102750105&partnerID=8YFLogxK
DO - 10.1145/3434074.3447202
M3 - Conference contribution
AN - SCOPUS:85102750105
T3 - ACM/IEEE International Conference on Human-Robot Interaction
SP - 407
EP - 411
BT - HRI 2021 - Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction
PB - IEEE Computer Society
T2 - 2021 ACM/IEEE International Conference on Human-Robot Interaction, HRI 2021
Y2 - 8 March 2021 through 11 March 2021
ER -