Similar Documents
18 similar documents found.
1.
Algorithms are widely used in decision making, but compared with decisions made by humans, algorithmic decisions, even when identical in content, more readily polarize individual reactions; this is approach and avoidance of algorithmic decision making. Approach means that individuals judge an algorithm's decisions to be fairer, less biased and discriminatory, and more trustworthy and acceptable than a human's; avoidance is the opposite. The process-motivation theory of approach and avoidance toward algorithmic decision making explains this phenomenon: it identifies three stages that human-algorithm interaction passes through, namely initial behavioral interaction, building parasocial relationships, and forming identity, and describes how cognitive, relational, and existential motivations at each stage trigger approach or avoidance responses. Future research could examine how perceived humanness and intergroup perception affect approach and avoidance of algorithmic decisions, and take a more social perspective in exploring the reversal of these reactions and other possible psychological motivations.

2.
Algorithmic decision making is a conscious, purposeful creative act of its designers. The design path by which engineers build artificial moral agents through algorithmic models faces a threefold ethical dilemma of "knowledge," "emotion," and "volition," and the application of such products raises the philosophical problem of subjectivity: decisions formed by the algorithm, its designers, and its users in different spatiotemporal contexts create the difficult problem of apportioning responsibility among subjects. From the standpoint of the philosophy of design, design thinking grounded purely in the self or purely in the other cannot adequately resolve the subjectivity problem raised by algorithmic decision making. Levinas's ethics of the Other avoids an objectifying relation between designer and user, but absolute otherness easily leads to the designer's loss of self and thus weakens the designer's innovative consciousness. A critically tempered, moderate notion of otherness helps resolve the opposition between the responsibility to innovate and social responsibility, thereby partially solving the subjectivity problem in algorithmic decision making and clarifying the strengthening of subjectivity as the designer's responsibility in algorithmic decision making.

3.
Using algorithms to assist or replace human decision makers in the workplace is now commonplace, yet people exhibit algorithm aversion. Through four progressive experiments across different workplace scenarios, this study compared people's attitudes toward decisions made by human versus algorithmic decision makers and explored the underlying mechanism and boundary conditions. The results show that in workplace contexts, compared with human decision makers, people rated algorithmic decisions lower on permissibility, liking, and willingness to use, exhibiting "algorithm aversion." The underlying psychological mechanism is that people perceive an algorithmic decision maker's decisions as less transparent than a human's (Experiments 2-3). Further research found that when algorithms were given anthropomorphic features, people's aversion to algorithmic decisions was reversed and their acceptance increased (Experiment 4). These findings help us better understand people's reactions to algorithmic decision making and offer implications for promoting intelligent social governance and guiding the ethical use of algorithms.

4.
Sun Weiping (孙伟平), 《哲学研究》 (Philosophical Research), 2023(3): 46-55, 126-127
An algorithm is a strategic mechanism and operating procedure that takes data as its basic resource and problem solving or task completion as its goal; it is the "central nervous system" and "soul" of artificial intelligence, and it is not itself "value neutral." When data are collected, stored, and analyzed via techniques such as machine learning to make automated decisions, the standpoint of those who design and write the algorithm, the source and accuracy of the data that support and train it, the value load and value choices embedded in it, the behavioral tendencies of particular value subjects based on it, and the autonomous evaluation and decision making of intelligent systems can all produce some degree of algorithmic discrimination and impede the realization of social justice. Compared with familiar forms of social discrimination, algorithmic discrimination is broader and more diverse, more precise and targeted, and more hidden and "ingenious." Only by establishing the status of all people as value subjects in the process of social intelligentization, embedding the value of justice into intelligent algorithms, and building dynamic mechanisms of evaluation and oversight can algorithmic discrimination be properly regulated and a just social order reconstructed for the intelligent age.

5.
A Theoretical Review of Psychological Factors in Risk Decision Making (cited 5 times: 1 self-citation, 4 by others)
Theories of risk decision making offer two typical explanations of whether people choose risky behavior. One attributes risk decisions to basic processes shared by all people, that is, "colder" psychological and cognitive processes; these theories hold that risky choices arise from basic human psychological and perceptual mechanisms. The other attributes risk decisions to "hotter" emotional and motivational processes; these theories hold that situational and personality factors strengthen the motivation for risk taking and give rise to individual differences in risk decisions. This paper systematically reviews the different theories behind these two explanations.

6.
Algorithms not only mine valuable information hidden in data but also guide and advise people's actions, and in many cases can already replace human decision and action. As algorithms increasingly participate in human activities and take over all kinds of complex tasks, they have become a key nexus regulating the relations among people, machines, and society. The ethical problems algorithms raise will therefore become ever more prominent. Clarifying what algorithm ethics means is the precondition for rational inquiry. Based on the process of algorithm use, its problem domain can be divided into three aspects: the autonomy of algorithms, their application scenarios, and the dilemma of attributing responsibility. On this basis we can analyze the ethical risks of algorithms more effectively.

7.
Xu Liying (许丽颖), Yu Feng (喻丰), Peng Kaiping (彭凯平), 《心理学报》 (Acta Psychologica Sinica), 2022, 54(9): 1076-1092
Algorithmic discrimination is common, and people's reactions to it deserve attention. Six progressive experiments compared people's desire to morally punish algorithmic versus human discrimination across different types of discrimination scenarios and explored the underlying mechanism and boundary conditions. The results show that people desire less moral punishment for algorithmic than for human discrimination (Experiments 1-6); the underlying mechanism is that people see algorithms (compared with humans) as having less free will (Experiments 2-4); and the stronger an individual's anthropomorphizing tendency, or the more anthropomorphic the algorithm, the stronger the desire to punish the algorithm morally (Experiments 5-6). These findings help us better understand people's reactions to algorithmic discrimination and offer implications for moral punishment after algorithms err.

8.
Two Functions of Empathy in Prosocial Behavioral Decision Making (cited 11 times: 0 self-citations, 11 by others)
Empathy serves both a motivational and an informational function in prosocial behavioral decision making. Hoffman holds that empathic distress not only promotes prosocial behavior as a prosocial moral motive but also activates the observer's moral principles and thereby induces prosocial behavior. Batson emphasizes that empathy not only strengthens the motivation to relieve another's plight but also carries information about how much one values the other's welfare and wants that plight relieved. The motivational function depends on the situation that evokes empathy, whereas the informational function has a stable dispositional tendency and is more durable than the motivational function; working together, the two functions make empathy more adaptive in prosocial behavioral decision making. The theory of empathy's functions has implications for moral education.

9.
The "nonverbal" moral context of modern clinical nursing decision making comprises three main aspects: moral pluralism, patient centeredness, and independence of responsibility. Neglecting this "nonverbal" moral context has left nursing decisions lacking public credibility. Constructing an ethical model for clinical nursing decision making enables decision makers to take the "nonverbal" moral context fully into account and facilitates the effective resolution of moral disputes.


11.
Despite abundant literature theorizing societal implications of algorithmic decision making, relatively little is known about the conditions that lead to the acceptance or rejection of algorithmically generated insights by individual users of decision aids. More specifically, recent findings of algorithm aversion—the reluctance of human forecasters to use superior but imperfect algorithms—raise questions about whether joint human-algorithm decision making is feasible in practice. In this paper, we systematically review the topic of algorithm aversion as it appears in 61 peer-reviewed articles between 1950 and 2018 and follow its conceptual trail across disciplines. We categorize and report on the proposed causes and solutions of algorithm aversion in five themes: expectations and expertise, decision autonomy, incentivization, cognitive compatibility, and divergent rationalities. Although each of the presented themes addresses distinct features of an algorithmic decision aid, human users of the decision aid, and/or the decision making environment, apparent interdependencies are highlighted. We conclude that resolving algorithm aversion requires an updated research program with an emphasis on theory integration. We provide a number of empirical questions that can be immediately carried forth by the behavioral decision making community.

12.
Why do consumers embrace some algorithms and find others objectionable? The moral relevance of the domain in which an algorithm operates plays a role. The authors find that consumers believe that algorithms are more likely to use maximization (i.e., attempting to maximize some measured outcome) as a decision-making strategy than human decision makers (Study 1). Consumers find this consequentialist decision strategy to be objectionable in morally relevant tradeoffs and disapprove of algorithms making morally relevant tradeoffs as a result (Studies 2, 3a, & 3b). Consumers also object to human employees making morally relevant tradeoffs when they are trained to make decisions by maximizing outcomes, consistent with the notion that their objections to algorithmic decision makers stem from concerns about maximization (Study 4). The results provide insight into why consumers object to some consumer-relevant algorithms while adopting others.

13.

The central research question guiding this grounded theory study was: How do religiously committed Thai Protestant Christians and Thai Buddhists perceive their motivation for making moral decisions? Data for this grounded theory study were obtained through personal interviews with 24 participants willing to share their thoughts and experiences of moral motivation. Participants were adult Thai individuals who self-identify as religiously committed to Theravada Buddhism or Protestant Christianity. Although motivations were mixed and overlapped, both Buddhist and Christian participants were motivated by four predominant moral motivations: happiness and peace, karma or karma-like belief, a feeling of kreng jai (an emotion of deference and avoidance of conflict), and a concern for others. Two other less prominent categories of moral motivation were also found: duty to moral law, and a regard for a divine person. Evidence was found that religio-cultural factors have a strong impact on moral reasoning and moral motivation.

14.
It is argued here that the question of whether compatibilism is true is irrelevant to metaphysical questions about the nature of human decision‐making processes—for example, the question of whether or not humans have free will—except in a very trivial and metaphysically uninteresting way. In addition, it is argued that two other questions—namely, the conceptual‐analysis question of what free will is and the question that asks which kinds of freedom are required for moral responsibility—are also essentially irrelevant to metaphysical questions about the nature of human beings.

15.
People frequently escalate their commitment to failing endeavors. Explanations for such behavior typically involve loss aversion, failure to recognize other alternatives, and concerns with justifying prior actions; all of these factors produce recommitment to previous decisions with the goal of erasing losses and vindicating these decisions. Solutions to escalation of commitment have therefore focused on external oversight and divided responsibility during decision making to attenuate loss aversion, blindness to alternatives, and justification biases. However, these solutions require substantial resources and have additional adverse effects. The present studies tested an alternative method for de-escalating commitment: activating broad motivations for growth and advancement (promotion). This approach should reduce concerns with loss and increase perceptions of alternatives, thereby attenuating justification motives. In two studies featuring hypothetical financial decisions, activating promotion motivations reduced recommitment to poorly performing investments as compared with both not activating any additional motivations and activating motivations for safety and security (prevention).

16.
The question I raise is whether Mark Balaguer’s event-causal libertarianism can withstand the disappearing agent objection. The concern is that with the causal role of the events antecedent to a decision already given, nothing settles whether the decision occurs, and so the agent does not settle whether the decision occurs. Thus it would seem that on this view the agent will not have the control in making decisions required for moral responsibility. I examine whether Balaguer’s position has the resources to answer this objection.

17.
What does autonomy mean from a moral point of view? Throughout Western history, autonomy has had no less than four different meanings. The first is political: the capacity of old cities and modern states to give themselves their own laws. The second is metaphysical, and was introduced by Kant in the second half of the 18th century. In this meaning, autonomy is understood as an intrinsic characteristic of all rational beings. Opposed to this is the legal meaning, in which actions are called autonomous when performed with due information and competency and without coercion. This last meaning, the most frequently used in bioethics, is primarily legal instead of moral. Is there a proper moral meaning of the word autonomy? If so, this would be a fourth meaning. Acts can only be called moral when they are postconventional (using the terminology coined by Lawrence Kohlberg), inner-directed (as expressed by David Riesman), and responsible (according to Hannah Arendt). Such acts are autonomous in this new, fourth, and, to my mind, only properly moral meaning. The goal of ethics cannot be other than forming human beings capable of making autonomous and responsible decisions, and doing so because they think this is their duty and not because of any other nonmoral motivation, like comfort, convenience, or satisfaction. The goal of ethics is to promote postconventional and mature human beings. This was what Socrates tried to do with the young people of Athens. And it is also the objective of every course of ethics and of any process of training.

18.
Roger R. Adams, Zygon, 2020, 55(2): 430-443
Technologies for human germ-line modification may soon enable humanity to create new types of human beings. Decisions about use of this power entail an unprecedented combination of difficulties: the stakes are immense, the unknowns are daunting, and moral principles are called into question. Evolved morality is not a sure basis for these decisions, both because of its inherent imperfections and because genetic engineering could eventually change humans’ innate cognitive mechanisms. Nevertheless, consensus is needed on moral values relevant to germ-line modification. These values could be based on characteristics of human beings that would remain constant regardless of revised genomes.

