Similar Literature
20 similar records found.
1.
Research on facial expression recognition has long centered on the structural features of the face itself, but recent studies have found that expression recognition is also influenced by the context in which the face appears (e.g., language, body context, natural and social scenes), and this contextual influence is especially strong when the expressions to be recognized resemble one another. This article first reviews recent research on how contexts such as language, body action, and natural and social scenes affect individuals' recognition of facial expressions; it then analyzes how factors such as cultural background, age, and anxiety level modulate this contextual effect; finally, it argues that future research should attend to child participants, extend the range of emotion categories studied, and examine facial emotion perception in real-life settings.

2.
This study used a facial emotion detection task with participants screened for high versus low trait anxiety via the State-Trait Anxiety Inventory, to examine how scenes influence the processing of facial expressions of different valence and intensity, and the role trait anxiety plays in this influence. The results showed: (1) The scene's influence on emotion detection differed across expression valence. For happy expressions, at the 100%, 80%, and 20% intensity levels, detection accuracy was significantly higher when the scene was emotionally congruent with the face than when it was incongruent; for fearful expressions, higher accuracy under congruent than incongruent conditions was found at the 80%, 60%, 40%, and 20% intensity levels. (2) For the high trait-anxiety group, detection accuracy did not differ significantly between congruent and incongruent conditions, i.e., no significant scene effect; the low trait-anxiety group, by contrast, showed a significant scene effect. These results indicate that: (1) detection of both happy and fearful faces of low emotional intensity is more susceptible to scene influence; (2) scenes more readily influence the detection of medium-intensity fearful faces than medium-intensity happy faces; (3) trait anxiety moderates the scene effect on facial emotion detection, with high trait-anxious individuals being less influenced by scene information during emotion recognition.
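The "scene effect" reported above is simply the difference in detection accuracy between scene-congruent and scene-incongruent trials at each intensity level. A minimal sketch of that computation, using invented accuracy values purely for illustration (the study's actual data are not reproduced here):

```python
# Hypothetical detection accuracies (proportion correct) for one expression
# type; keys are (intensity_percent, congruency). All numbers are invented.
accuracy = {
    (100, "congruent"): 0.95, (100, "incongruent"): 0.88,
    (80, "congruent"): 0.90,  (80, "incongruent"): 0.81,
    (20, "congruent"): 0.62,  (20, "incongruent"): 0.55,
}

def scene_effect(acc, intensity):
    """Congruent-minus-incongruent accuracy at one intensity level."""
    return acc[(intensity, "congruent")] - acc[(intensity, "incongruent")]

for level in (100, 80, 20):
    print(f"{level}%: scene effect = {scene_effect(accuracy, level):+.2f}")
```

A positive difference at a given level corresponds to the congruency advantage the abstract describes; a group showing near-zero differences (as reported for high trait-anxious participants) shows no scene effect.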

3.
Abstract: The multichannel (multisensory) effect in facial emotion recognition refers to the combined influence on expression recognition of the various channels involved in stimulus processing. Using behavioral experiments, event-related potentials, and brain imaging, researchers have shown that body expressions, emotional voices, and specific odors can systematically influence the recognition of emotion from facial expressions. A series of studies has explored the time course, underlying mechanisms, and associated brain regions of these multichannel effects. Future research could integrate brain-network techniques and new methods from other disciplines to examine in finer detail the role played by the physical properties of the information carried by each channel. Keywords: facial expression; multichannel effect; facial emotion recognition; body expression; emotional voice; olfactory signal

4.
Both faces and body postures are important carriers of emotional information in everyday interaction, yet the latter have received far less research attention. Using event-related potentials (ERPs), this study examined the time course with which adults process fearful and neutral body postures and compared it with ERP results for faces bearing the same expressions. As with emotional faces, the brain processed emotional body postures rapidly: fearful and neutral postures were differentiated as early as the P1 stage, and body-posture images elicited a larger occipital P1 than face images. Compared with emotional faces, emotional body postures elicited smaller-amplitude, shorter-latency N170 and VPP components; these two components distinguished fearful from neutral faces but not fearful from neutral postures, suggesting that at the middle stage of emotional processing the brain's advantage for body postures is smaller than its advantage for faces. Finally, at the late stage, the P3 distinguished both the emotion carrier and the emotion category, with large effect sizes for both main effects, reflecting deeper processing of emotional information at this stage. These findings suggest that emotional body postures and faces are processed similarly, but that the brain may hold an early-stage advantage (in the P1 time window) for body postures over faces. Studying emotional faces and body postures together should deepen our understanding of how the emotional brain works and help identify additional emotion-related biomarkers for the clinical diagnosis of brain dysfunction in psychiatric patients with emotional-cognitive impairments.

5.
To test the role of contextual information in facial expression processing and recognition, two experiments examined how the emotionality and self-relevance of verbal context influence the processing of neutral faces and of fearful faces of varying intensity. Neutral faces were rated higher in valence in positive contexts and higher in arousal in self-relevant contexts, and fearful emotion in faces was detected more easily in negative contexts. Thus, the context effect in facial expression processing manifests as the induction and enhancement of emotion for neutral faces, and as the facilitation of judgments of faces of varying emotional intensity when context and face are emotionally congruent.

6.
Using a delayed matching task with university students, this study examined how gender and race influence face recognition. Results: (1) both male and female participants recognized female faces faster and more accurately than male faces; (2) the female-face recognition advantage differed by race, with Caucasian female faces recognized more accurately but Chinese female faces recognized more quickly; (3) face race was an important factor underlying the female-face recognition advantage. The findings suggest that face recognition is jointly shaped by face race, face gender, and participant gender.

7.
The interaction between emotion and language processing
The interplay between emotion and language processing has attracted growing research attention. On one hand, emotion strongly influences language processing: (1) emotional information embedded in language affects its processing, including the processing of emotion words, emotional sentences, and emotional discourse; (2) emotional background affects language processing, including one's own emotional state (e.g., depressed, anxious, or happy mood) as well as backgrounds induced by external cues such as emotional prosody/context, emotional pictures, and music; (3) internalized patterns of emotional responding (e.g., body posture, facial expression) affect language processing. On the other hand, language also shapes emotion processing: (1) semantic concepts affect emotion perception; (2) verbal instruction plays a role in emotion learning; (3) language plays a role in emotion regulation. Future research should probe the internal mechanisms of this interaction and link basic research with educational and clinical applications.

8.
Revealing the cognitive and neural mechanisms of emotional face processing has long been a central topic in psychology and social neuroscience. Previous studies have mainly used single facial expressions to induce or present emotion, while the perception and experience of group emotion has received very little attention; as the principal means of expressing group emotion, crowd facial expressions urgently deserve study. This project will therefore use crowds of faces as group-emotion stimuli, combining event-related potentials (ERP), functional magnetic resonance imaging (fMRI), and transcranial magnetic stimulation (TMS) with behavioral experiments to identify the temporal dynamics and brain-activation patterns of crowd-expression processing across emotional information (valence and intensity), orientation (frontal, profile, inverted), completeness (partial vs. whole presentation), and spatial-frequency content (intact, high-frequency, low-frequency). This should yield a comprehensive picture of the general principles of group-emotion recognition and has practical implications for optimizing social interaction.

9.
韩磊, 马娟, 焦亭, 高峰强, 郭永玉, 王鹏. Acta Psychologica Sinica (《心理学报》), 2010, 42(2): 271-278
Shyness is closely related to social cognition, and face recognition is an important social-cognitive function in everyday life. Existing electrophysiological research on shyness has mostly examined how expression valence and the old/new status of faces affect face processing in shy individuals, overlooking possible cognitive-neural differences in a more basic ability: distinguishing faces from objects. This study therefore used ERPs with a GO/NoGo face-object recognition task to examine the N170 component of face structural encoding in 17 shy and 17 non-shy university students. Non-shy students showed a processing advantage for face structure: when recognizing faces, their N170 amplitude was significantly larger than that of shy students, whereas no group difference appeared for objects. The N170 was face-specific, with faces eliciting significantly larger N170 amplitudes than objects, and face recognition showed a right-hemisphere processing advantage in the N170.

10.
Thirty-six participants (20 women, 16 men) viewed five emotional expressions (happy, surprised, sad, disgusted, angry) at four intensity levels (5%, 10%, 20%, 30%), and recognition accuracy was recorded for neutral and emotional faces presented in the left or right visual field, to examine gender differences in the lateralization of emotional valence. Results: positive expressions were recognized more accurately than negative ones; high-intensity expressions were recognized more accurately than low-intensity ones; and female participants recognized negative faces more accurately in the left visual field but positive faces more accurately in the right visual field, indicating a valence-lateralization effect in women.

11.
胡治国, 刘宏艳. Journal of Psychological Science (《心理科学》), 2015, (5): 1087-1094
Accurately recognizing facial expressions is important for successful social interaction, and such recognition is influenced by emotional context. This review first describes how emotional context can enhance facial expression recognition, chiefly through within-modality emotional congruency effects and cross-modal emotional integration; it then describes how context can hinder recognition, chiefly through emotional conflict and semantic interference effects; next it covers contextual influences on the recognition of neutral and ambiguous faces, chiefly context-driven emotion induction and subliminal affective priming; finally it summarizes the existing research and offers suggestions for future work.

12.
Recognition of emotional facial expressions is a central area in the psychology of emotion. This study presents two experiments. The first analyzed recognition accuracy for the basic emotions of happiness, anger, fear, sadness, surprise, and disgust: thirty pictures (5 per emotion) were displayed to 96 participants, and recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy. Information either congruent or incongruent with a facial expression was displayed before the expression pictures were presented. The results showed that congruent information improved facial expression recognition, whereas incongruent information impaired it.

13.
Faces and bodies are typically encountered simultaneously, yet little research has explored the visual processing of the full person. Specifically, it is unknown whether the face and body are perceived as distinct components or as an integrated, gestalt-like unit. To examine this question, we investigated whether emotional face-body composites are processed in a holistic-like manner by using a variant of the composite face task, a measure of holistic processing. Participants judged facial expressions combined with emotionally congruent or incongruent bodies that have been shown to influence the recognition of emotion from the face. Critically, the faces were either aligned with the body in a natural position or misaligned in a manner that breaks the ecological person form. Converging data from 3 experiments confirm that breaking the person form reduces the facilitating influence of congruent body context as well as the impeding influence of incongruent body context on the recognition of emotion from the face. These results show that faces and bodies are processed as a single unit and support the notion of a composite person effect analogous to the classic effect described for faces.
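The composite-person effect in this paradigm is usually quantified as the shrinkage of the body-congruency effect when face and body are misaligned. A minimal sketch of that arithmetic, with invented accuracy values standing in for real data:

```python
# Invented accuracies for face-emotion judgments (proportion correct);
# keys are (congruency, alignment). All numbers are illustrative only.
acc = {
    ("congruent", "aligned"):      0.92,
    ("incongruent", "aligned"):    0.74,
    ("congruent", "misaligned"):   0.88,
    ("incongruent", "misaligned"): 0.82,
}

def congruency_effect(acc, alignment):
    """Accuracy benefit of a congruent over an incongruent body."""
    return acc[("congruent", alignment)] - acc[("incongruent", alignment)]

# Composite-person effect: the congruency effect shrinks when the natural
# person form is broken by misaligning the face and the body.
composite_effect = (congruency_effect(acc, "aligned")
                    - congruency_effect(acc, "misaligned"))
print(f"composite effect = {composite_effect:+.2f}")
```

A positive composite effect (congruency matters more for aligned than misaligned stimuli) is the signature of holistic-like processing the abstract reports.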

14.
Recognition of facial expressions has traditionally been investigated by presenting facial expressions without any context information. However, we rarely encounter an isolated facial expression; usually, we perceive a person's facial reaction as part of the surrounding context. In the present study, we addressed the question of whether emotional scenes influence the explicit recognition of facial expressions. In three experiments, participants were required to categorize facial expressions (disgust, fear, happiness) that were shown against backgrounds of natural scenes with either a congruent or an incongruent emotional significance. A significant interaction was found between facial expressions and the emotional content of the scenes, showing a response advantage for facial expressions accompanied by congruent scenes. This advantage was robust against increasing task load. Taken together, the results show that the surrounding scene is an important factor in recognizing facial expressions.

15.
Emotional influences on memory for events have long been documented, yet surprisingly little is known about how emotional signals conveyed by contextual cues influence memory for face identity. This study investigated how positively and negatively valenced contextual emotion cues conveyed by body expressions or background scenes influence face memory. The results provide evidence of emotional context influence on face recognition memory and show that faces encoded in emotional (either fearful or happy) contexts (either the body or the background scene) are less well recognized than faces encoded in neutral contexts, and that this effect is larger for body context than for scene context. The findings are compatible with the hypothesis that emotional signals in visual scenes trigger orienting responses, which may lead to less elaborate processing of featural details such as the identity of a face, in turn resulting in decreased face recognition memory.

16.
侠牧, 李雪榴, 叶春, 李红. Advances in Psychological Science (《心理科学进展》), 2014, 22(10): 1556-1563
The main ERP components of facial expression processing are the P1 (80-120 ms), the N170 (120-200 ms), the early posterior negativity (EPN, 200-300 ms), and the late positive potential (LPP, beyond 300 ms). These components index different stages of expression processing and carry distinct psychological meanings. The P1 is sensitive only to threatening expressions (fear, disgust, anger), reflecting rapid, automatic detection of threatening faces. The N170 relates to the encoding of the structural information of expressions and is likewise automatic. The EPN reflects selective attention to emotional information; it is emotion-general, being modulated by both emotional scene pictures and expression stimuli, and can operate automatically under certain conditions. The LPP reflects higher-level cognitive processing of emotional information and is more susceptible to attentional control. Building on these characterizations, future research should ask: (1) Is the P1 modulated by the degree of threat an expression conveys? (2) Which top-down factors influence the N170? (3) Are expression stimuli that modulate the EPN but not the N170 being treated as ordinary emotional stimuli? (4) Can the LPP elicited by expressions operate automatically? (5) Do different basic expression categories (e.g., fear vs. disgust) have category-specific ERP components?
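In practice, each of these components is quantified as the mean amplitude of the ERP waveform inside its time window. A hedged sketch of that extraction on a simulated single-channel trace (random noise stands in for a real recording; only the window boundaries come from the text above):

```python
import numpy as np

# Simulated single-channel ERP epoch: 700 ms sampled at 1000 Hz,
# i.e., one amplitude value per millisecond from 0 to 699 ms.
rng = np.random.default_rng(0)
erp = rng.normal(size=700)

windows = {            # component time windows from the review (ms)
    "P1":   (80, 120),
    "N170": (120, 200),
    "EPN":  (200, 300),
    "LPP":  (300, 700),
}

def mean_amplitude(signal, window, sfreq=1000):
    """Mean amplitude of `signal` within a (start_ms, end_ms) window."""
    start, end = (int(t * sfreq / 1000) for t in window)
    return float(signal[start:end].mean())

for name, (start, end) in windows.items():
    amp = mean_amplitude(erp, (start, end))
    print(f"{name}: {amp:+.3f} (window {start}-{end} ms)")
```

Real ERP pipelines (e.g., averaging over trials and electrodes, baseline correction) add steps omitted here; the sketch only illustrates the time-window logic.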

17.
The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body posing fear). Adults and 8-year-olds made more errors and had longer reaction times on incongruent trials than on congruent trials when judging sad versus fearful facial expressions, an effect that was larger in 8-year-olds. The congruency effect was reduced when faces and bodies were misaligned, providing some evidence for holistic processing. Neither adults nor 8-year-olds were affected by congruency when judging sad versus happy expressions. Evidence that congruency effects vary with age and with similarity of emotional expressions is consistent with dimensional theories and "emotional seed" models of emotion perception.

18.
This study demonstrates that when people attempt to identify a facial expression of emotion (FEE) by haptically exploring a 3D facemask, they are affected by viewing a simultaneous, task-irrelevant visual FEE portrayed by another person. In comparison to a control condition, where visual noise was presented, the visual FEE facilitated haptic identification when congruent (visual and haptic FEEs same category). When the visual and haptic FEEs were incongruent, haptic identification was impaired, and error responses shifted toward the visually depicted emotion. In contrast, visual emotion labels that matched or mismatched the haptic FEE category produced no such effects. The findings indicate that vision and touch interact in FEE recognition at a level where featural invariants of the emotional category (cf. precise facial geometry or general concepts) are processed, even when the visual and haptic FEEs are not attributable to a common source. Processing mechanisms behind these effects are considered.

19.
The present aim was to investigate how emotional expressions presented on an unattended channel affect the recognition of attended emotional expressions. In Experiments 1 and 2, facial and vocal expressions were simultaneously presented as stimulus combinations. The emotions (happiness, anger, or emotional neutrality) expressed by the face and voice were either congruent or incongruent. Subjects were asked to attend either to the visual (Experiment 1) or auditory (Experiment 2) channel and recognise the emotional expression. The results showed that the ignored emotional expressions significantly affected the processing of attended signals, as measured by recognition accuracy and response speed. In general, attended signals were recognised more accurately and faster in congruent than in incongruent combinations. In Experiment 3, the possibility of perceptual-level integration was eliminated by presenting the response-relevant and response-irrelevant signals separated in time. In this situation, emotional information presented on the nonattended channel ceased to affect the processing of emotional signals on the attended channel. The present results are interpreted as evidence that facial and vocal emotional signals are integrated at the perceptual level of information processing, not at later response-selection stages.

20.
This investigation examined whether impairment in configural processing could explain deficits in face emotion recognition in people with Parkinson's disease (PD). Stimuli from the Radboud Faces Database were used to compare recognition of four negative emotion expressions by older adults with PD (n = 16) and matched controls (n = 17). Participants were tasked with categorizing emotional expressions from upright and inverted whole faces and facial composites; it is difficult to derive configural information from these two types of stimuli, so featural processing should play a larger-than-usual role in accurate recognition of emotional expressions. We found that the PD group was impaired relative to controls in recognizing anger, disgust, and fearful expressions in upright faces. Further, consistent with a configural processing deficit, participants with PD showed no composite effect when attempting to identify facial expressions of anger, disgust, and fear. A face inversion effect, however, was observed in the performance of all participants in both the whole-faces and facial-composites tasks. These findings can be explained in terms of a configural processing deficit if it is assumed that the disruption caused by facial composites was specific to configural processing, whereas inversion reduced performance by making it difficult to derive both featural and configural information from faces.

