Similar Literature
19 similar results found.
1.
Two experiments examined people's moral judgments of humans and intelligent machines in personal versus impersonal moral dilemmas. Results showed that (1) in impersonal moral dilemmas, people applied the same moral standards to intelligent machines and to humans, giving identical moral evaluations (blame, permissibility, rightness) of both expected and actual behavior; and (2) in personal moral dilemmas, people applied different standards: compared with humans, more people wanted intelligent machines to make utilitarian choices, and they evaluated machines' utilitarian behavior more favorably.

2.
Dialect is a new direction for the indigenization of psychological research. This study examined how language type (dialect vs. Mandarin) affects group and individual social decision-making. Experiment 1, using an adapted public goods game, found that under the dialect condition trust and cooperation levels, self-reported positive emotion, and heart-rate change were greater, with no significant difference in skin conductance; under the Mandarin condition, trust fully mediated the relationship between self-reported negative emotion and cooperative behavior. Experiment 2, using an adapted ultimatum game, found that for the unfair ¥2-¥8 offer the acceptance rate was significantly higher in the dialect condition than in the Mandarin condition, and that fairer offers were accepted more often. The results indicate that dialect influences people's cognition, emotion, and decision behavior, offering a new perspective for research on language and decision-making.

3.
With the broad application of artificial intelligence, the question of intelligent machines' responsibility has drawn growing attention. Many scholars hold that intelligent machines lack the capacity for moral responsibility and are therefore not moral agents. Challenges to machines' moral autonomy and intentionality raised from the other-oriented view of responsibility do not, however, negate their capacity for moral responsibility. From the perspective of other-expectation-based moral responsibility, intelligent machines are denied such capacity because of people's expectation bias, intelligence bias, and status bias. If intelligent machines are to gain their due place in the responsibility network of modern technological society, we need to hold realistic expectations of their moral responsibility and set aside the problems facing classical responsibility ethics, such as the retrospective stance, context mismatch, and anthropocentrism, thereby generating a new kind of other-expectation-based moral agent.

4.
The Effect of Emotion on the Quality of Crisis Decision-Making
杨继平  郑建君 《心理学报》2009,41(6):481-491
To explore how emotion affects decision quality in crisis situations, the study induced emotions with film clips and used laboratory experiments to examine how emotion, together with gender and task difficulty, influences crisis decision-making. Results: (1) as task difficulty increased, decision time increased significantly, with a significant two-way interaction between gender and emotion type; (2) in crisis situations, men generated new solutions at a markedly higher rate than women; (3) men were significantly more confident in the crisis decision process than women, with a significant emotion × difficulty interaction; (4) satisfaction with one's own decision outcome showed a significant three-way gender × emotion × difficulty interaction.

5.
With undergraduates as participants, a 3 (monetary stimulus: none, reward, punishment) × 2 (role: decision-maker, bystander) between-subjects design used the process-dissociation procedure to estimate utilitarian and deontological inclinations in moral decision-making, examining how monetary stimuli and decision role affect moral decisions. Results: (1) monetary stimuli did not affect deontological inclination for either decision-makers or bystanders; (2) decision-makers showed stronger deontological inclination than bystanders; (3) decision-makers showed stronger utilitarian inclination than bystanders only under monetary reward.

6.
This study surveyed 136 graduating students from three majors, measuring interest-major fit with an Interest Inventory and career decision-making self-efficacy with the Career Decision-Making Self-Efficacy Scale (CDMSE). Results: (1) interest-major fit influences students' career decision-making, with higher-fit students reporting higher career decision-making self-efficacy; (2) career decision-making self-efficacy differed by gender, with women reporting higher self-efficacy.

7.
Algorithm aversion refers to the phenomenon that people prefer human decisions even though algorithms usually decide more accurately. A three-dimensional motivational account of algorithm aversion identifies three main causes: doubt about algorithmic agency, lack of moral standing, and erasure of human uniqueness. These correspond to the psychological motives of trust, responsibility, and control, and to three possible remedies: increasing human trust in algorithms, strengthening algorithms' accountability, and exploring personalized algorithm design that highlights human control over algorithmic decisions. Future research could take a more social perspective on the boundary conditions and other possible motives of algorithm aversion.

8.
霍荣棉 《心理科学》2014,37(3):710-715
In interpersonal interaction, trust as a decision process is influenced by many factors, and the decision logic of trust conflicts across research contexts. Taking appropriateness as the framework for trust decisions, and the interaction relationship as the variable that activates normative goals, this study analyzed how the presence or absence of an expectation of continued interaction affects trust behavior and through what mechanism. Results: (1) trust was higher when the relationship was expected to continue, and dropped significantly when that expectation disappeared; (2) participants' perceived importance of relationship continuity correlated significantly with their trust decisions; (3) when the situational goal was not activated, trust propensity correlated significantly with trust decisions; when it was activated, the correlation was not significant.

9.
李晓明  傅小兰  王新超 《心理科学》2012,35(6):1429-1434
This study introduced empathy, an important moral emotion, into the issue-contingent model to explore its role in the effect of moral intensity on ethical decision-making in enterprises. Using a scenario-based method, 256 randomly selected MBA students read scenarios describing a hypothetical actor's behavior and answered questions measuring ethical decision-making (moral recognition, moral judgment, and moral intention), empathic responses, perceived moral intensity, and trait empathy. Results: (1) empathic responses mediated the effect of moral intensity on ethical decision-making; (2) trait empathic concern influenced ethical decision-making through perceived moral intensity and empathic responses; (3) magnitude of consequences, social consensus, and probability of effect influenced moral judgment and moral intention through distinct mechanisms.

10.
Decision confidence is an individual's subjective evaluation of the correctness of a decision, a metacognitive experience of the decision process. Confidence calibration refers to the match between confidence level and actual decision accuracy, indexed by the correlation between confidence and accuracy or by the area under the Type II ROC curve (Aroc). Prior research has found that rating decision confidence enhances metacognitive monitoring of current or subsequent decisions, but it remains unclear whether this effect depends on an individual's calibration. By comparing perceptual decisions made with and without confidence ratings, this study examined how calibration (Aroc) affects metacognitive monitoring. Results: (1) compared with the no-rating condition, confidence rating significantly lengthened decision reaction time and increased decision accuracy (p < 0.005); (2) Aroc correlated positively with the accuracy gain between conditions (r = 0.25, p = 0.034), and the high-Aroc group showed a significantly larger accuracy gain than the low-Aroc group (p < 0.05). These results indicate that adding confidence ratings to perceptual decisions enhances metacognitive monitoring, reflected in longer decision times and higher accuracy, and that the size of this effect depends on calibration: the better calibrated the individual, the stronger the monitoring benefit.

11.
Robert M. Geraci 《Zygon》2007,42(4):961-980
In science-fiction literature and film, human beings simultaneously feel fear and allure in the presence of intelligent machines, an experience that approximates the numinous experience as described in 1917 by Rudolf Otto. Otto believed that two chief elements characterize the numinous experience: the mysterium tremendum and the fascinans. Briefly, the mysterium tremendum is the fear of God's wholly other nature and the fascinans is the allure of God's saving grace. Science-fiction representations of robots and artificially intelligent computers follow this logic of threatening otherness and soteriological promise. Science fiction offers empirical support for Anne Foerst's claim that human beings experience fear and fascination in the presence of advanced robots from the Massachusetts Institute of Technology AI Lab. The human reaction to intelligent machines shows that human beings in many respects have elevated those machines to divine status. This machine apotheosis, an interesting cultural event for the history of religions, may—despite Foerst's rosy interpretation—threaten traditional Christian theologies.

12.
许丽颖  喻丰  彭凯平 《心理学报》2022,54(9):1076-1092
算法歧视屡见不鲜, 人们对其有何反应值得关注。6个递进实验比较了不同类型歧视情境下人们对算法歧视和人类歧视的道德惩罚欲, 并探讨其潜在机制和边界条件。结果发现:相对于人类歧视, 人们对算法歧视的道德惩罚欲更少(实验1~6), 潜在机制是人们认为算法(与人类相比)更缺乏自由意志(实验2~4), 且个体拟人化倾向越强或者算法越拟人化, 人们对算法的道德惩罚欲越强(实验5~6)。研究结果有助于更好地理解人们对算法歧视的反应, 并为算法犯错后的道德惩罚提供启示。  相似文献   

13.
The trustworthiness (or otherwise) of AI has been much in discussion of late, not least because of the recent publication of the EU Guidelines for Trustworthy AI. Discussions range from how we might make people trust AI to AI being not possible to trust, with many points in between. In this article, we question whether or not these discussions somewhat miss the point, which is that people are going ahead and basically doing their own thing anyway, and that we should probably help them. Acknowledging that trust is a heuristic that is widely used by humans in a range of situations, we lean on the literature concerning how humans make trust decisions, to arrive at a general model of how people might consider trust in AI (and other artefacts) for specific purposes in a human world. We then use a series of thought experiments and observations of trust and trustworthiness, to illustrate the use of the model in taking a functionalist perspective on trust decisions, including with machines. Our hope is that this forms a useful basis upon which to develop intelligent systems in a way that considers how and when people may trust them, and in doing so empowers people to make better trust decisions about AI.

14.
This article reviews the reasons scholars hold that driverless cars and many other AI equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped machines can be addressed by the kind of ethical choices made by human beings for millennia. Ergo, there is little need to teach machines ethics even if this could be done in the first place. Finally, the article points out that it is a grievous error to draw on extreme outlier scenarios—such as the Trolley narratives—as a basis for conceptualizing the ethical issues at hand.

15.
"Timeflow" defines time from the perspective of behavioral experience: the temporal course of experience an individual perceives during a current activity. The dimensions shaping timeflow include the shaping of the external setting, the coordination of one's bodily movements, initial perception of the activity's rules, gradual immersion of goal-related emotion, and deeper association with cultural meaning. The main factors influencing an individual's timeflow are perception and emotion, and timeflow in turn affects consumption experience and the experience of well-being. Future research should examine how the "double-kink" time value function affects people's behavioral decisions and verify the coupling among the five dimensions of timeflow.

16.
Artificial intelligence (AI) is a revolutionary and overwhelming technology that has yet to mature. While profoundly changing and shaping people and society, AI also splits into its own opposites and develops into a new external alien force. As the basic technical support of the entire society, intelligent technology entails the overt or covert domination of human beings, who are becoming the “vassals” and “slaves” of this high-speed intelligent social system. Various intelligent systems are constantly replacing human work, so that the “digital poor” gradually lose the opportunities and values offered by labor and hence are excluded by the global economic and social system, rendering their existence empty and absurd. The rapid development of intelligent robots has blurred the boundary between humans and machines and had a strong impact on the nature of man and his position as a conscious agent, making “What is man?” and the human-machine relationship prominent issues for our times, challenging the commonplaces of philosophy. We must face up to the existing or imminent risk of alienation, expand our theoretical horizons, innovate theories of alienation in the era of intelligence, take constructive action in terms of the construction of an ideal society and the evolution of man himself, build an ecological system for the joint evolution and growth of human beings and intelligent machines, and achieve the liberation of man and his all-round, free development.

17.
In this essay I discuss a novel engineering ethics class that has the potential to significantly decrease the likelihood that students (and professionals) will inadvertently or unintentionally act unethically in the future. This class is different from standard engineering ethics classes in that it focuses on the issue of why people act unethically and how students (and professionals) can avoid a variety of hurdles to ethical behavior. I do not deny that it is important for students to develop cogent moral reasoning and ethical decision-making as taught in traditional college-level ethics classes, but as an educator, I aim to help students apply moral reasoning in specific, real-life situations so they are able to make ethical decisions and act ethically in their academic careers and after they graduate. Research in moral psychology provides evidence that many seemingly irrelevant situational factors affect the moral judgment of most moral agents and frequently lead agents to unintentionally or inadvertently act wrongly. I argue that, in addition to teaching college students moral reasoning and ethical decision-making, it is important to: 1. Teach students about psychological and situational factors that affect people’s ethical judgments/behaviors in the sometimes stressful, emotion-laden environment of the workplace; 2. Guide students to engage in critical reflection about the sorts of situations they personally might find ethically challenging before they encounter those situations; and 3. Provide students with strategies to help them avoid future unethical behavior when they encounter these situations in school and in the workplace.

18.
Is gratitude a moral affect?
Gratitude is conceptualized as a moral affect that is analogous to other moral emotions such as empathy and guilt. Gratitude has 3 functions that can be conceptualized as morally relevant: (a) a moral barometer function (i.e., it is a response to the perception that one has been the beneficiary of another person's moral actions); (b) a moral motive function (i.e., it motivates the grateful person to behave prosocially toward the benefactor and other people); and (c) a moral reinforcer function (i.e., when expressed, it encourages benefactors to behave morally in the future). The personality and social factors that are associated with gratitude are also consistent with a conceptualization of gratitude as an affect that is relevant to people's cognitions and behaviors in the moral domain.

19.
Inequality is the foremost challenge for global social and economic development and a core obstacle to achieving the global sustainable development goals. Artificial intelligence (AI) offers new ways to mitigate inequality and promote social fairness. However, recent research finds that even when AI decisions are objectively fair and accurate, individuals may still perceive them as less fair. Accordingly, a growing body of research has begun to examine the factors that influence the perceived fairness of AI decisions. This research remains scattered, with inconsistent paradigms, unclear theory, and unexplained mechanisms, which hinders cross-disciplinary dialogue and prevents researchers and practitioners from forming a systematic understanding of fairness perceptions of AI decisions. A systematic review divides existing work into two streams: (1) research on fairness perceptions of AI-only decisions, focusing on how AI characteristics and individual characteristics shape perceived fairness; and (2) research on fairness perceptions of AI-human dyadic decisions, focusing on differences between perceived fairness of AI decisions and of human decisions. Building on this review, future research could further explore, among other directions, the emotional mechanisms underlying fairness perceptions of AI decisions.
