Similar Literature
20 similar documents retrieved.
1.
侠牧  李雪榴  叶春  李红 《心理科学进展》2014,22(10):1556-1563
The ERP components involved in facial expression processing mainly include the P1 (80-120 ms), the N170 (120-200 ms), the early posterior negativity (EPN, 200-300 ms), and the late positive potential (LPP, beyond 300 ms). These components index different stages of expression processing and carry different psychological meanings. The P1 is sensitive only to threat-related expressions (fear, disgust, and anger), reflects rapid detection of threatening faces, and has the character of automatic processing. The N170 is related to encoding the structural information of expressions and is likewise automatic. The EPN reflects selective attention to emotional information and is general across emotions: both emotional scene pictures and expression stimuli modulate it, and under certain conditions it too shows automatic processing. The LPP reflects higher-level cognitive processing of emotional information and is more readily influenced by attentional control. Building on what is known about these components, future research should address the following questions: (1) Is the P1 modulated by how threatening an expression is? (2) Which top-down factors influence the N170? (3) Are expression stimuli that do not modulate the N170 but do modulate the EPN treated simply as ordinary emotional stimuli? (4) Can the LPP elicited by expression processing show automatic-processing properties? (5) Do different basic expression categories (e.g., fear and disgust) have category-specific ERP components?
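The time windows listed above lend themselves to a simple mean-amplitude quantification, which is how such components are commonly scored. The sketch below is a minimal illustration, assuming epoched EEG stored as a NumPy array of shape (trials, channels, samples); the array contents, the 500 Hz sampling rate, the -100 ms epoch onset, and the 800 ms upper bound used for the LPP are illustrative assumptions, not values from the reviewed studies.

import numpy as np

# Hypothetical epoched data: trials x channels x samples,
# epoch running from -100 ms to 800 ms at 500 Hz (illustrative values).
sfreq = 500.0
epoch_start_ms = -100.0
epochs = np.random.randn(120, 64, int((800 - epoch_start_ms) / 1000 * sfreq))

# Component windows (ms) as summarized in the review; the LPP has no fixed
# upper bound in the review, so 800 ms is used here only for illustration.
windows = {"P1": (80, 120), "N170": (120, 200), "EPN": (200, 300), "LPP": (300, 800)}

def mean_amplitude(data, t0_ms, t1_ms):
    # Average voltage across trials and the latency window, one value per channel.
    i0 = int((t0_ms - epoch_start_ms) / 1000 * sfreq)
    i1 = int((t1_ms - epoch_start_ms) / 1000 * sfreq)
    return data[:, :, i0:i1].mean(axis=(0, 2))

for name, (t0, t1) in windows.items():
    amps = mean_amplitude(epochs, t0, t1)
    print(f"{name} ({t0}-{t1} ms): per-channel mean amplitude, shape {amps.shape}")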

2.
The body and the face are important cues for expressing and recognizing emotion. Compared with facial expressions, the processing of bodily expressions is notable for supplementing emotional information, for conveying motion and action information, and for generating adaptive behavior. The neural substrates of emotional body and face processing may be adjacent or partially overlapping, but they are also partly dissociable; the EBA, FBA, SPL, and IPL are among the brain regions specifically associated with bodily expression processing. Future work should systematically investigate the neural bases underlying the processing of facial, bodily, and vocal emotion cues, explore cross-cultural differences in bodily emotion processing, and examine how patients with emotional disorders process bodily expressions.

3.
An experiment examined how facial expression type, intensity, and processing mode affect emotion understanding in children with autism. Twenty children with autism took part in the formal experiment. The results showed that: (1) children with autism differed in their recognition of different facial expressions; (2) recognition accuracy for whole-face expressions was significantly higher than for part-face expressions; (3) recognition accuracy for high-intensity expressions was significantly higher than for low-intensity expressions; and (4) processing mode and expression intensity interacted: under whole-face processing, recognition accuracy for 100%-intensity expressions was significantly higher than for 50%-intensity expressions, whereas under part-based processing the two intensities did not differ significantly. In conclusion, emotion understanding in children with autism is influenced by facial expression type, intensity, and processing mode.

4.
Facial expressions are emotional signals conveyed by the movements of muscles around the eyes, mouth, and other facial regions. Correctly recognizing facial expressions is an important means by which infants communicate with the outside world and supports the development of emotion cognition. Infants' recognition of basic facial expressions develops asynchronously: positively valenced expressions are recognized earlier than negatively valenced ones, with infants able to discriminate positive expressions at 2 months and different negative expressions at 4-6 months; emotion perception develops earlier than emotion understanding, with 7-month-olds already showing rudimentary emotion perception while 12-month-olds still cannot accurately distinguish the emotional meanings conveyed by different negative expressions. Infants' expression recognition is shaped jointly by environmental and cognitive factors, reflecting the progression from the activation of basic emotions to the formation of emotion schemas.

5.
This paper reviews recent research on impaired processing of emotional facial expressions in schizophrenia and discusses the nature of this impairment and how it has been explained, for example whether it is a general or a specific deficit and how it relates to clinical symptoms and cognitive characteristics. Comparative analyses suggest that the impairment of emotional facial expression perception in schizophrenia may combine a deficit in facial information processing with difficulty in perceiving emotional information. The paper also introduces rehabilitation training studies conducted abroad on facial expression recognition and identification in schizophrenia, as well as recent studies of the underlying neurophysiological mechanisms using cognitive neuroscience techniques such as event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI).

6.
A large body of research has confirmed that individuals with autism spectrum disorder (ASD) show atypical facial expression processing. This paper reviews the latest research progress on facial expression processing in ASD and argues that these atypicalities may be influenced by factors such as expression intensity, the way expressions are presented, and individual differences among individuals with ASD, drawing on the face-gaze patterns of individuals with ASD and their physiological mechanisms...

7.
Preliminary development of a facial expression picture system for research on emotional disorders
Objective: To develop a facial expression picture system for research on emotional disorders, providing standardized stimulus materials for the study of abnormal emotion and broadening the range of materials available for such research. Methods: Photographs of 11 facial expressions (joy, surprise, contempt, disgust, anger, fear, sadness, calm, interest, shame, and pain) were taken from 85 posers at three intensity levels (strong, medium, weak). After two rounds of screening, 40 raters judged each picture for category, intensity, pleasantness, and dominance. Results: A representative set of 520 pictures covering 8 facial expressions at different intensity levels was obtained (167 joy, 78 surprise, 67 contempt, 29 disgust, 46 anger, 19 fear, 65 sadness, and 49 calm), each with its own agreement rate, intensity, pleasantness, and dominance scores. Conclusion: The facial expression picture system provides well-standardized stimulus materials for future research on emotional disorders; it is the first to provide contempt expression pictures; and gender may influence the recognition of expression intensity for disgust and fear.
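The per-picture statistics reported above (agreement rate plus mean intensity, pleasantness, and dominance) can be computed with a short aggregation over the raw rating records. The sketch below is a minimal illustration under assumed data: the column names, labels, and numbers are hypothetical and do not come from the published picture system.

import pandas as pd

# Hypothetical rating records: one row per (rater, picture) judgment.
ratings = pd.DataFrame({
    "picture":      ["p01", "p01", "p01", "p02", "p02", "p02"],
    "intended":     ["joy", "joy", "joy", "fear", "fear", "fear"],
    "chosen":       ["joy", "joy", "surprise", "fear", "fear", "fear"],
    "intensity":    [6, 7, 5, 4, 5, 4],
    "pleasantness": [7, 8, 6, 2, 3, 2],
    "dominance":    [5, 5, 4, 3, 4, 3],
})

# Agreement rate: proportion of raters whose chosen label matches the intended category.
agreement = (ratings.assign(hit=ratings["chosen"] == ratings["intended"])
                    .groupby("picture")["hit"].mean())

# Mean intensity, pleasantness, and dominance per picture.
means = ratings.groupby("picture")[["intensity", "pleasantness", "dominance"]].mean()

print(means.assign(agreement=agreement))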

8.
This study examined the influence of group information on facial expression recognition in three experiments. The results showed that: (1) the emotional state of surrounding faces affected recognition of the target face's emotion, with significantly shorter reaction times and higher recognition accuracy when the two were emotionally congruent than when they were incongruent; (2) group information modulated the influence of the surrounding faces' emotion on the target face and thereby affected expression recognition. Specifically, in the group condition, recognition of the target expression was affected by the emotional state of the surrounding faces: compared with the incongruent case, when the surrounding and target emotions were congruent, and thus matched the expectation of emotional consistency among group members that observers build from perceptual cues, expression recognition was more accurate and faster; in the non-group condition, observers were not affected by the surrounding faces' emotional state. These findings indicate that people can recognize facial emotion on the basis of the social relations among interacting individuals: when a group is present, they form the expectation that group members share the same emotion, which in turn affects facial expression recognition.

9.
To test the role of contextual information in the processing and recognition of facial expressions, two experiments examined how the emotional content and self-relevance of contextual information affect the processing of neutral faces and of fearful faces of varying intensity. The results showed that neutral faces were rated as more pleasant in positive contexts and as more arousing in self-relevant contexts, and that fear in faces was detected more readily in negative contexts. Thus, the context effect in facial expression processing manifests as the induction and enhancement of emotion for neutral faces, and as a facilitation of judgments of faces of different emotional intensities when face and context are emotionally congruent.

10.
This study used EEG in a mixed factorial design to record ERPs while expert and novice police officers identified the facial expressions and emotional body language of suspects telling the truth or lying. The results showed that, for the P300 component in the 350-550 ms window, expert officers showed more positive amplitudes than novice officers for truthful facial and emotional body-language stimuli and for deceptive emotional body language, with the differences located over frontal and central regions. For the late slow wave in the 550-800 ms window, the effect reversed for facial expressions shown while suspects were lying: novices showed more positive amplitudes than experts, with the difference located over occipital sites. These results suggest that expert officers with long-term interrogation experience are markedly stronger than novices in the cognitive processes of analyzing and judging suspects' truthful and deceptive expressions and attend more to suspects' emotional body language, whereas novice officers, although somewhat sensitive to deceptive behavior, remain at the stage of sustained attention to suspects' facial expressions and fail to reach correct judgments.

11.
白鹭  毛伟宾  王蕊  张文海 《心理学报》2017,(9):1172-1183
This study used disgusted and fearful facial expressions, two negative emotions with relatively low perceived similarity, and provided five emotional label options to reduce the facilitating effect of verbal context on face recognition. Two experiments investigated how natural scenes and body actions influence the recognition of facial expressions, with the aim of examining how emotional congruence between facial expression and natural scene affects emotional face recognition and scene processing, and how adding body actions whose emotion conflicts with the scene might affect expression recognition. The results showed that: (1) even with the increased number of emotional label options, the emotion of the natural scene still significantly influenced expression recognition; (2) when the facial expression and the natural scene were emotionally incongruent, face recognition depended more on processing of the scene, so the scene was processed more deeply; and (3) body actions interfered to some extent with the scene's influence on expression recognition, but the natural scene still played an important role in recognizing emotional facial expressions.

12.
Using photographs of Chinese athletes' expressions after winning or losing a point, this study compared the processing of facial expressions and body postures with behavioral and EEG measures. Experiment 1 examined the valence and intensity of winning and losing faces and bodies; Experiment 2 examined the emotion categories attributed to the pictures (neutral, happy, sad, angry, fearful, disgusted); and Experiment 3 used EEG to compare the neural correlates of winning and losing emotions. The behavioral results of the three experiments showed that body information discriminated the valence of winning versus losing better than facial information, and that body postures conveyed relatively uniform emotional content whereas facial expressions conveyed relatively complex and varied content. The EEG results showed that emotional information from the body was detected by the brain earlier, reflected in the N170 component, whereas the emotion effect for facial expressions was reflected in the EPN component. At a later processing stage, winning expressions elicited a larger LPP than losing expressions in both the face and the body conditions. These results indicate that the brain evaluates and classifies the emotion of body postures at multiple stages, providing evidence for the behavioral finding that bodies discriminate valence well.

13.
Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding facial speech, emotional expression, and gaze direction or, alternatively, according to facial speech, emotional expression, and gaze direction while disregarding facial identity. Reaction times showed that children and adults were able to direct their attention selectively to facial identity despite variations of other kinds of face information, but when sorting according to facial speech and emotional expression, they were unable to ignore facial identity. In contrast, gaze direction could be processed independently of facial identity in all age groups. Apart from shorter reaction times and fewer classification errors, no substantial change in processing facial information was found to be correlated with age. We conclude that adult-like face processing routes are employed from 5 years of age onward.

14.
We used the Remember–Know procedure (Tulving, 1985) to test the behavioural expression of memory following indirect and direct forms of emotional processing at encoding. Participants (N=32) viewed a series of facial expressions (happy, fearful, angry, and neutral) while performing tasks involving either indirect (gender discrimination) or direct (emotion discrimination) emotion processing. After a delay, participants completed a surprise recognition memory test. Our results revealed that indirect encoding of emotion produced enhanced memory for fearful faces whereas direct encoding of emotion produced enhanced memory for angry faces. In contrast, happy faces were better remembered than neutral faces after both indirect and direct encoding tasks. These findings suggest that fearful and angry faces benefit from a recollective advantage when they are encoded in a way that is consistent with the predictive nature of their threat. We propose that the broad memory advantage for happy faces may reflect a form of cognitive flexibility that is specific to positive emotions.

16.
The present study examined whether information processing bias against emotional facial expressions is present among individuals with social anxiety. College students with high (high social anxiety group; n = 26) and low social anxiety (low social anxiety group; n = 26) performed three different types of working memory tasks: (a) ordering positive and negative facial expressions according to the intensity of emotion; (b) ordering pictures of faces according to age; and (c) ordering geometric shapes according to size. The high social anxiety group performed significantly more poorly than the low social anxiety group on the facial expression task, but not on the other two tasks with the nonemotional stimuli. These results suggest that high social anxiety interferes with processing of emotionally charged facial expressions.

17.
The present study investigated whether facial expressions of emotion presented outside conscious awareness elicit evaluative responses as assessed in affective priming. Participants were asked to evaluate pleasant and unpleasant target words that were preceded by masked or unmasked schematic (Experiment 1) or photographic faces (Experiments 1 and 2) with happy or angry expressions. They were either required to perform the target evaluation only, or to perform the target evaluation and to name the emotion expressed by the face prime. The prime-target interval was 300 ms in Experiment 1 and 80 ms in Experiment 2. Naming performance confirmed the effectiveness of the masking procedure. Affective priming was evident after unmasked primes in tasks that required naming of the facial expression for both schematic and photographic faces, and after unmasked primes in tasks that did not require naming for photographic faces. No affective priming was found after masked primes. The present study failed to provide evidence for affective priming with masked face primes; however, it indicates that voluntary attention to the primes enhances affective priming.
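In designs like this, the priming effect is typically quantified as the reaction time difference between trials with emotionally incongruent and congruent prime-target pairs, computed separately for masked and unmasked primes. The sketch below is a minimal illustration of that calculation; the trial records and numbers are made up for the example and are not data from this study.

import numpy as np

# Hypothetical trial log for one participant in an affective priming task:
# each trial records the masking condition, prime-target congruence, and RT (ms).
trials = [
    {"masked": False, "congruent": True,  "rt": 612},
    {"masked": False, "congruent": False, "rt": 655},
    {"masked": True,  "congruent": True,  "rt": 640},
    {"masked": True,  "congruent": False, "rt": 643},
]

def priming_effect(trials, masked):
    # Mean RT for incongruent minus congruent trials within one masking condition;
    # a positive value indicates affective priming.
    congruent = [t["rt"] for t in trials if t["masked"] == masked and t["congruent"]]
    incongruent = [t["rt"] for t in trials if t["masked"] == masked and not t["congruent"]]
    return np.mean(incongruent) - np.mean(congruent)

print("Unmasked priming effect (ms):", priming_effect(trials, masked=False))
print("Masked priming effect (ms):  ", priming_effect(trials, masked=True))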

18.
19.
We tested Ekman's (2003) suggestion that movements of a small number of reliable facial muscles are particularly trustworthy cues to experienced emotion because they tend to be difficult to produce voluntarily. On the basis of theoretical predictions, we identified two subsets of facial action units (AUs): reliable AUs and versatile AUs. A survey on the controllability of facial AUs confirmed that reliable AUs indeed seem more difficult to control than versatile AUs, although the distinction between the two sets of AUs should be understood as a difference in degree of controllability rather than a discrete categorization. Professional actors enacted a series of emotional states using method acting techniques, and their facial expressions were rated by independent judges. The effect of the two subsets of AUs (reliable AUs and versatile AUs) on identification of the emotion conveyed, its perceived authenticity, and perceived intensity was investigated. Activation of the reliable AUs had a stronger effect than that of versatile AUs on the identification, perceived authenticity, and perceived intensity of the emotion expressed. We found little evidence, however, for specific links between individual AUs and particular emotion categories. We conclude that reliable AUs may indeed convey trustworthy information about emotional processes but that most of these AUs are likely to be shared by several emotions rather than providing information about specific emotions. This study also suggests that the issue of reliable facial muscles may generalize beyond the Duchenne smile.

20.
The recognition and interpretation of emotional information (e.g., about happiness) has been shown to elicit, amongst other bodily reactions, spontaneous facial expressions in accordance with the relevant emotion (e.g., a smile). Theories of embodied cognition act on the assumption that such embodied simulations are not merely an accessory but a crucial factor in the processing of emotional information. While several studies have confirmed the importance of facial motor resonance during the initial recognition of emotional information, its role at later stages of processing, such as during memory for emotional content, remains unexplored. The present study bridges this gap by exploring the impact of facial motor resonance on the retrieval of emotional stimuli. In a novel approach, the specific effects of embodied simulations were investigated at different stages of emotional memory processing (during encoding and/or retrieval). Eighty participants underwent a memory task involving emotional and neutral words consisting of an encoding and a retrieval phase. Depending on the experimental condition, facial muscles were blocked by a hardening facial mask either during encoding, during retrieval, during both encoding and retrieval, or were left free to resonate (control). The results demonstrate that not only initial recognition but also memory of emotional items benefits from embodied simulations occurring during their encoding and retrieval.

