
Fairness perceptions of artificial intelligence decision-making
Cite this article: JIANG Luyuan, CAO Limei, QIN Xin, TAN Ling, CHEN Chen, PENG Xiaofei. Fairness perceptions of artificial intelligence decision-making[J]. Advances in Psychological Science, 2022, 30(5): 1078-1092. DOI: 10.3724/SP.J.1042.2022.01078
Authors: JIANG Luyuan  CAO Limei  QIN Xin  TAN Ling  CHEN Chen  PENG Xiaofei
Affiliation: 1. School of Business, Sun Yat-sen University, Guangzhou 510275, China; 2. School of Management, Guangdong University of Technology, Guangzhou 510520, China
Funding: National Natural Science Foundation of China (Integrated Project; grants 92146003, 71872190, 71702202, and 71802203); Young Scholars Program of the Ministry of Education Changjiang Scholars Award Program; Fundamental Research Funds for the Central Universities (19wkpy17)
Abstract: Inequality is a foremost challenge for global social and economic development and a core obstacle to achieving the global sustainable development goals. Artificial intelligence (AI) offers a new way to mitigate inequality and promote social fairness. However, recent research has found that even when AI decisions are objectively fair and accurate, individuals may still perceive them as relatively unfair. Accordingly, a growing number of studies in recent years have begun to examine the factors that influence fairness perceptions of AI decision-making. This research remains scattered, however, marked by inconsistent paradigms, unclear theorizing, and unresolved mechanisms, which both hinders cross-disciplinary dialogue and keeps researchers and practitioners from forming a systematic understanding of fairness perceptions of AI decision-making. Through a systematic review, existing research can be divided into two categories: (1) research on fairness perceptions of decisions made by AI alone, which focuses on how AI characteristics and individual characteristics affect individuals' fairness perceptions of AI decisions; and (2) research on fairness perceptions in AI-human dyadic decision-making, which focuses on comparing individuals' fairness perceptions of AI decisions with those of human decisions. Building on this review, future research can further explore directions such as the affective mechanisms that shape fairness perceptions of AI decision-making.

Keywords: artificial intelligence  algorithm  fairness  decision-making
Received: 2021-07-13

Extended abstract: Inequality is one of the biggest challenges facing global social and economic development and has the potential to impede the goals of global sustainable development. One way to reduce such inequality is to use artificial intelligence (AI) for decision-making. However, recent research has found that even though AI can be more accurate and free of personal bias, people are generally averse to AI decision-making and perceive it as less fair. Given the theoretical and practical importance of fairness perceptions of AI decision-making, a growing number of researchers have recently begun investigating how individuals form fairness perceptions of AI decision-making. However, existing research is generally scattered and disorganized, which has limited researchers' and practitioners' ability to understand fairness perceptions of AI decision-making from a conceptual and systematic perspective. Thus, this review first divides the relevant research into two categories based on the type of decision maker. The first category is fairness perception research in which AI is the decision maker. Drawing upon moral foundations theory, fairness heuristic theory, and fairness theory, these studies explain how AI characteristics (i.e., transparency, controllability, rules, and appropriateness) and individual characteristics (i.e., demographics, personality, and values) affect individuals' fairness perceptions. Existing research reveals three main cognitive mechanisms underlying the relationship between AI or individual characteristics and individuals' fairness perceptions of AI decision-making: (a) individual characteristics and AI appropriateness affect individuals' fairness perceptions via their moral intuition; (b) AI transparency affects individuals' fairness perceptions via their perceived understandability; and (c) AI controllability affects individuals' fairness perceptions via the fulfillment of individuals' needs.
The second category is fairness perception research that compares AI and humans as decision makers. Based on the computers-are-social-actors (CASA) hypothesis, the algorithm reductionism perspective, and the machine heuristic model, these studies explain how individuals' differing perceptions of the attributes of AI versus humans (i.e., mechanistic vs. social attributes, simplified vs. complex attributes, objective vs. subjective attributes) affect their fairness perceptions, and they have yielded some inconsistent findings. Specifically, some studies found that individuals perceive AI decision makers as more mechanical (i.e., lacking emotion and human touch) and more simplified (i.e., decontextualized) than human decision makers, which leads individuals to perceive decisions made by humans as fairer than those made by AI. However, other studies found that individuals regard AI decision makers as more objective (i.e., consistent, neutral, and free of responsibility) than human decision makers, which leads individuals to perceive decisions made by AI as fairer than those made by humans. In addition, a small number of studies found no significant difference in individuals' fairness perceptions between AI and human decision makers. Such mixed findings suggest that individuals' fairness perceptions of decision-making may depend on the specific attributes of AI that individuals perceive in different contexts. Based on this systematic review, we propose five promising directions for future research to help expand the fairness perception literature in the context of AI decision-making.
That is, (a) exploring the affective mechanisms underlying the relationship between AI or individual characteristics and individuals' fairness perceptions of AI decision-making; (b) exploring the antecedents of interactional fairness perceptions of AI decision-making; (c) exploring fairness perceptions when robotic AI is the decision maker; (d) clarifying the boundary conditions under which AI decision-making is considered fairer than human decision-making, and vice versa; and (e) exploring fairness perceptions when AI and humans make decisions jointly. We hope this review contributes to both the theoretical and the practical understanding of individuals' fairness perceptions of AI decision-making.