On How to Build a Moral Machine

Authors: Paul Bello, Selmer Bringsjord

Affiliations:
1. Human and Bioengineered Systems Division - Code 341, Office of Naval Research, 875 N. Randolph St., Arlington, VA 22203, USA
2. Departments of Cognitive Science, Computer Science and the Lally School of Management, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
Abstract: Herein we make a plea to machine ethicists for the inclusion of constraints on their theories consistent with empirical data on human moral cognition. As philosophers, we clearly lack widely accepted solutions to issues regarding the existence of free will, the nature of persons, and firm conditions on moral agency/patienthood, all of which are indispensable concepts to be deployed by any machine able to make moral judgments. No agreement seems forthcoming on these matters, and we don't hold out hope for machines that can both always do the right thing (on some general ethic) and produce explanations for their behavior that would be understandable to a human confederate. Our tentative solution involves understanding the folk concepts associated with our moral intuitions regarding these matters, and how they might be dependent upon the nature of human cognitive architecture. It is in this spirit that we begin to explore the complexities inherent in human moral judgment via computational theories of the human cognitive architecture, rather than under the extreme constraints imposed by the rational-actor models assumed throughout much of the literature on philosophical ethics. After discussing the various advantages and challenges of taking this particular perspective on the development of artificial moral agents, we computationally explore a case study of human intuitions about the self and causal responsibility. We hypothesize that a significant portion of the variance in reported intuitions for this case might be explained by appeal to an interplay between the human ability to mindread and the way that knowledge is organized conceptually in the cognitive system. In the present paper, we build on a pre-existing computational model of mindreading (Bello et al. 2007) by adding constraints related to psychological distance (Trope and Liberman 2010), a well-established psychological theory of conceptual organization. Our initial results suggest that studies of folk concepts involved in moral intuitions lead us to an enriched understanding of cognitive architecture and to a more systematic method for interpreting the data generated by such studies.