A Three-Dimensional Motivation Model of Algorithm Aversion
Cite this article: ZHANG Yuyan, XU Liying, YU Feng, DING Xiaojun, WU Jiahua, ZHAO Liang. A three-dimensional motivation model of algorithm aversion[J]. Advances in Psychological Science, 2022, 30(5): 1093-1105. DOI: 10.3724/SP.J.1042.2022.01093
Authors: ZHANG Yuyan, XU Liying, YU Feng, DING Xiaojun, WU Jiahua, ZHAO Liang
Affiliations: 1. Department of Psychology, School of Philosophy, Wuhan University, Wuhan 430072, China; 2. Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; 3. Department of Philosophy, School of Humanities and Social Science, Xi'an Jiaotong University, Xi'an 710049, China; 4. Department of Publishing Science, School of Information Management, Wuhan University, Wuhan 430072, China
Funding: Young Scholars Project of the National Social Science Fund of China (20CZX059); Young Scientists Fund of the National Natural Science Foundation of China (72101132); China Postdoctoral Science Foundation General Program (2021M701960)
Abstract: Algorithm aversion refers to the phenomenon that people prefer human decisions even though algorithms can usually make more accurate decisions than humans. The three-dimensional motivation model of algorithm aversion identifies its three main causes: doubt about algorithm agents, the lack of moral standing, and the annihilation of human uniqueness. These correspond to three psychological motivations (trust, responsibility, and control) and, in turn, to three feasible ways of reducing algorithm aversion: increasing human trust in algorithms, strengthening the agent responsibility of algorithms, and exploring personalized algorithm design to highlight human control over algorithmic decisions. Future research could examine the boundary conditions and other possible motivations of algorithm aversion from a more social perspective.

Keywords: algorithmic decision-making; algorithm aversion; psychological motivation; human-robot interaction
Received: 2021-07-08

A three-dimensional motivation model of algorithm aversion
ZHANG Yuyan, XU Liying, YU Feng, DING Xiaojun, WU Jiahua, ZHAO Liang. A three-dimensional motivation model of algorithm aversion[J]. Advances in Psychological Science, 2022, 30(5): 1093-1105. DOI: 10.3724/SP.J.1042.2022.01093
Authors: ZHANG Yuyan, XU Liying, YU Feng, DING Xiaojun, WU Jiahua, ZHAO Liang
Affiliation: 1. Department of Psychology, School of Philosophy, Wuhan University, Wuhan 430072, China; 2. Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; 3. Department of Philosophy, School of Humanities and Social Science, Xi'an Jiaotong University, Xi'an 710049, China; 4. Department of Publishing Science, School of Information Management, Wuhan University, Wuhan 430072, China
Abstract: In recent years, algorithmic decision-making has rapidly penetrated human social life by virtue of its speed, accuracy, objectivity, and broad applicability. However, although algorithms often perform better, people are reluctant to rely on algorithmic decisions instead of human decisions, a phenomenon known as algorithm aversion. The three-dimensional motivation model of algorithm aversion summarizes its three main causes: doubt about algorithm agents, the lack of moral standing, and the annihilation of human uniqueness. The model reconstructs the intuitive reasoning people follow when confronted with algorithmic decisions, that is, a series of progressive questions they are expected to ask. First, are algorithms capable of making decisions? The answer is often negative: people usually doubt and distrust an algorithm's ability, which produces algorithm aversion. This is the trust/doubt motivation. Second, even if algorithms are capable of making decisions, do they benefit individuals? The answer is usually negative as well. Algorithms fail to benefit individuals because people tend to shift responsibility when making decisions, and an algorithm's lack of moral standing and of the capacity to take responsibility makes it useless as a target of such shifting. The second motivation of algorithm aversion is therefore responsibility-taking/shifting. Third, even if algorithms can make decisions, be trusted, and bear moral responsibility, do algorithmic decisions affect human beings positively? The answer is again negative, because algorithmic decision-making makes people feel a loss of control; the resulting annihilation of human identity produces a perception of dehumanization and ultimately leads people to reject algorithms. This is the control/loss-of-control motivation.
Given these motivations, increasing human trust in algorithms, strengthening algorithm agents' responsibility, and exploring personalized algorithms that highlight human control over algorithmic decisions should be three feasible ways of weakening algorithm aversion. Future research could further explore the boundary conditions and other possible motivations of algorithm aversion from a more social perspective, such as the need for cognitive closure and psychological connection.
Keywords: algorithmic decision-making; algorithm aversion; psychological motivation; human-robot interaction