Similar Articles
20 similar articles retrieved.
1.
The emotionally enhanced memory (EEM) effect is influenced by both the arousal and the valence of a stimulus. Kensinger and colleagues proposed that arousal-dependent EEM is linked to automatic processing, whereas valence-dependent EEM is linked to controlled processing, but existing research does not provide sufficient evidence for this hypothesis. Across three experiments combining a study-recognition paradigm with a divided attention (DA) paradigm, this study examined the encoding-stage mechanisms of arousal-dependent and valence-dependent EEM under two attention conditions. Recognition of neutral, positive non-arousing, and negative non-arousing words was significantly better under full attention than under divided attention; recognition of positive arousing words was likewise significantly better under full attention, whereas recognition of negative arousing words did not differ between the two attention conditions. These results indicate that valence-dependent EEM is associated with controlled processing, and that arousal-dependent EEM is not always associated with automatic processing but is further moderated by valence: for negative stimuli, arousal-dependent EEM is linked to automatic processing, whereas for positive stimuli it is linked to controlled processing.

2.
The body and the face are important cues for expressing and recognizing emotion. Compared with facial expressions, bodily expression processing is distinguished by its role in compensating for emotional information, conveying motion and action information, and generating adaptive behavior. The neural substrates of emotional body and face processing may be adjacent or partially overlapping, yet they are also dissociable; the EBA, FBA, SPL, and IPL are brain regions specifically associated with bodily expression processing. Future work should systematically investigate the neural basis of processing facial, bodily, and vocal emotional cues, explore cross-cultural differences in bodily emotion processing, and examine bodily expression processing in patients with emotional disorders.

3.
With 103 undergraduates as participants, a sentence-unscrambling task was used to prime either emotion suppression or emotion expression under positive and negative mood states, and signal detection theory was used to measure perceptual sensitivity to positive and negative facial expressions. The results show that: (1) sensitivity to facial expressions exhibits a mood-congruence effect: in a negative mood, people are more sensitive to negative expressions, a significant difference (p = 0.002); in a positive mood, people are more sensitive to positive expressions, although this was only marginal (p = 0.700). (2) Automatic suppression of emotion reduces emotional experience and affects the mood-congruence effect on expression sensitivity: when automatic suppression is primed, people become less sensitive to both positive and negative expressions.
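For reference, the sketch below shows how perceptual sensitivity is typically quantified under signal detection theory as the d' index, computed from hit and false-alarm counts in a yes/no expression-judgment task. The log-linear correction, function name, and counts are illustrative assumptions and are not taken from the study itself.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' with a log-linear correction so that
    hit/false-alarm rates of exactly 0 or 1 do not yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Illustrative counts for one participant judging negative expressions
print(round(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38), 2))
```

In a design like the one described, such a sensitivity score would be computed separately for positive and negative expressions within each mood and regulation condition before comparing conditions.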

4.
This paper reviews recent research on impaired processing of emotional facial expressions in schizophrenia, discussing the nature of the impairment and how it has been interpreted, for example whether it is a general or a specific deficit and how it relates to clinical symptoms and cognitive characteristics. Comparative analysis suggests that the impairment of emotional facial expression perception in schizophrenia may combine a disturbance of facial information processing with difficulty in perceiving emotional information. The paper also introduces rehabilitation training studies abroad targeting facial expression recognition and identification in schizophrenia, as well as recent research on the underlying neurophysiological mechanisms using cognitive neuroscience techniques such as event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI).

5.
An experimental approach was used to examine how facial expression type, intensity, and processing mode affect emotion understanding in children with autism. Twenty children with autism took part in the formal experiment. The results show that: (1) children with autism differ in their recognition of different facial expressions; (2) their recognition accuracy for whole-face expressions is significantly better than for partial-face expressions; (3) their recognition accuracy for high-intensity expressions is significantly better than for low-intensity expressions; and (4) processing mode and expression intensity interact: under whole-face processing, recognition accuracy for 100%-intensity expressions is significantly higher than for 50%-intensity expressions, whereas under partial-face processing the difference between 100% and 50% intensity is not significant. In conclusion, emotion understanding in children with autism is influenced by facial expression type, intensity, and processing mode.

6.
Uncovering the cognitive and neural mechanisms of emotional face processing has long been a central topic in psychology and social neuroscience. Previous studies have mainly used individual facial expressions to induce or present emotion, while the perception and experience of group emotion has received very little attention, even though group facial expressions are the primary way group emotion is conveyed and urgently deserve closer study. This project will therefore use group facial (face-crowd) expressions as group-emotion stimuli and, combining behavioral research with event-related potentials (ERP), functional magnetic resonance imaging (fMRI), and transcranial magnetic stimulation (TMS), attempt to characterize the temporal dynamics and brain activation patterns of group facial expression processing in terms of emotional information (valence and intensity), orientation (frontal, profile, inverted), completeness (partial vs. whole presentation), and spatial frequency (full-spectrum, high-frequency, low-frequency). This will contribute to a comprehensive, in-depth understanding of the general principles of group emotion recognition and has practical significance for better optimizing social interaction.

7.
Automatic, strategic, and controlled processing in prospective memory retrieval
赵晋全  杨治良 《心理科学》2002,25(5):523-526
The concept of "quasi-consciousness" is introduced to describe a state that is not accessible to consciousness yet still requires attentional resources. Based on conscious controlled processing, quasi-conscious strategic processing, and unconscious automatic processing, a three-process automatic-activation model of prospective memory retrieval is proposed.

8.
王亚鹏  董奇 《心理科学》2006,29(6):1512-1514
This paper surveys the brain mechanisms of emotion processing and the current state of research from three perspectives: the valence of emotion, the recognition of facial expressions, and the induction of emotion, each together with its functional neuroimaging findings. Existing results suggest that cortical regions overlap substantially when processing emotions of different valence; research on facial expression recognition indicates that distinct neural circuits modulate responses to different facial expressions; and research on induced emotion shows that the anterior cingulate cortex plays a very important role in representing experimentally induced emotion. The paper concludes by noting some open problems in current emotion research and the significance of pursuing research on the brain mechanisms of emotion in China.

9.
Brain mechanisms of emotion processing and cognitive control in depression
A negative bias in cognitive processing is a major cognitive factor leading to depression and a stable trait of the disorder. This bias is related to enhanced bottom-up emotional processing of negative stimuli, manifested as hyperactivation of regions such as the amygdala and fusiform gyrus, and to insufficient top-down cognitive control, manifested as hypoactivation of regions such as the dorsolateral prefrontal cortex and anterior cingulate cortex. Building on previous research, this paper proposes a theoretical hypothesis about the brain mechanisms of emotion processing and cognitive control in depression: depression is mediated by a vicious cycle formed by the interaction between hyperactivation of emotion-processing regions and reduced function of cognitive-control regions. Testing this hypothesis still faces problems concerning research content, materials, and techniques, which may be directions for future research.

10.
Processing of temporal-order information: automatic or controlled?
From the perspective of long-term memory, and using stories as experimental materials, this study examined how temporal-order information is processed and whether a modality effect exists. The results show a visual-auditory modality effect in the processing of temporal-order information; the mechanism of this effect originates in memory and is related to the mode of processing. The three attributes of temporal-order information are processed differently: order tends to be processed automatically; position tends to be processed automatically for visual information, whereas for auditory information it is processed automatically when order coding is available but through controlled processing when it is not; and interval is processed in a controlled manner.

11.
12.
The authors investigated children's ability to recognize emotions from the information available in the lower, middle, or upper face. School-age children were shown partial or complete facial expressions and asked to say whether they corresponded to a given emotion (anger, fear, surprise, or disgust). The results indicate that 5-year-olds were able to recognize fear, anger, and surprise from partial facial expressions. Fear was better recognized from information located in the upper face than from information located in the lower face. A similar pattern of results was found for anger, but only in girls. Recognition improved between 5 and 10 years of age for surprise and anger, but not for fear and disgust.

13.
    
The author's purpose was to examine children's recognition of emotional facial expressions by comparing two types of stimuli: photographs and drawings. The author aimed to investigate whether drawings could be considered a more evocative material than photographs, as a function of age and emotion. Five- and 7-year-old children were presented with photographs and drawings displaying facial expressions of 4 basic emotions (i.e., happiness, sadness, anger, and fear) and were asked to perform a matching task by pointing to the face corresponding to the target emotion labeled by the experimenter. The photographs were selected from the Radboud Faces Database, and the drawings were designed on the basis of both the facial components involved in the expression of these emotions and the graphic cues children tend to use when asked to depict these emotions in their own drawings. The results show that drawings are better recognized than photographs for sadness, anger, and fear (with no difference for happiness, due to a ceiling effect), and that the difference between the two types of stimuli tends to be larger for 5-year-olds than for 7-year-olds. These results are discussed in view of their implications, both for future research and for practical application.

14.
People who explain why ambiguous faces are expressing anger perceive and remember those faces as angrier than do people who explain why the same faces are expressing sadness. This phenomenon may be explained by a two-stage process in which language decomposes a facial configuration into its component features, which are then reintegrated with emotion categories available in the emotion explanation itself. This configural-decomposition hypothesis is consistent with experimental results showing that the explanation effect is attenuated when configural face processing is impaired (e.g., when the faces are inverted). Ironically, although people explain emotional expressions to make more accurate attributions, the process of explanation itself can decrease accuracy by leading to perceptual assimilation of the expressions to the emotions being explained.

15.
The effect of the emotional quality of study-phase background music on subsequent recall for happy and sad facial expressions was investigated. Undergraduates (N = 48) viewed a series of line drawings depicting a happy or sad child in a variety of environments that were each accompanied by happy or sad music. Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were recalled as sad when sad music was previously heard, and that sad faces were recalled as happy when happy music was previously heard. Overall, the results indicated that when recalling a scene, the emotional tone is set by an integration of stimulus features from several modalities.

16.
    
We investigated whether and how emotional facial expressions affect sustained attention in face tracking. In a multiple-identity and object tracking paradigm, participants tracked multiple target faces that continuously moved around together with several distractor faces, and subsequently reported where each target face had moved to. The emotional expression (angry, happy, and neutral) of the target and distractor faces was manipulated. Tracking performance was better when the target faces were angry rather than neutral, whereas angry distractor faces did not affect tracking. The effect persisted when the angry faces were presented upside-down and when surface features of the faces were irrelevant to the ongoing task. There was only suggestive and weak evidence for a facilitatory effect of happy targets and a distraction effect of happy distractors in comparison to neutral faces. The results show that angry expressions on the target faces can facilitate sustained attention on the targets via increased vigilance, yet this effect likely depends on both emotional information and visual features of the angry faces.

17.
A new algorithm for multidimensional scaling analysis of sorting data and hierarchical-sorting data is tested by applying it to facial expressions of emotion. We construct maps in “facial expression space” for two sets of still photographs: the I-FEEL series (expressions displayed spontaneously by infants and young children), and a subset of the Lightfoot series (posed expressions, all from one actress). The analysis avoids potential artefacts by fitting a map directly to the subject's judgments, rather than transforming the data into a matrix of estimated dissimilarities as an intermediate step. The results for both stimulus sets display an improvement in the extent to which they agree with existing maps. Some points emerge about the limitations of sorting data and the need for caution when interpreting MDS configurations derived from them.
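For context, the sketch below illustrates the conventional two-step route that this abstract says its new algorithm avoids: sorting data are first collapsed into an estimated dissimilarity matrix (here, the proportion of participants who did not place two photographs in the same pile), which is then submitted to an off-the-shelf MDS routine. The toy sorting data, pile structure, and library choice are illustrative assumptions, not the authors' method or data.

```python
import numpy as np
from sklearn.manifold import MDS

# Toy sorting data: each row is one participant's partition of 6 face photos
# into piles (same label = sorted into the same pile). Purely illustrative.
sorts = np.array([
    [0, 0, 1, 1, 2, 2],
    [0, 1, 1, 2, 2, 0],
    [0, 0, 0, 1, 1, 2],
])

n_items = sorts.shape[1]
# Conventional intermediate step: estimate dissimilarity as the proportion
# of participants who did NOT put a given pair of photos in the same pile.
co_occurrence = np.zeros((n_items, n_items))
for labels in sorts:
    co_occurrence += (labels[:, None] == labels[None, :])
dissimilarity = 1.0 - co_occurrence / len(sorts)
np.fill_diagonal(dissimilarity, 0.0)

# Fit a 2-D configuration ("facial expression space") to the dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print(coords)
```

The abstract's point is that collapsing sorts into such an intermediate matrix can introduce artefacts; its algorithm instead fits the configuration to the sorting judgments directly.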

18.
    
Children's ability to distinguish between enjoyment and non-enjoyment smiles was investigated by presenting participants with short video excerpts of smiles. Enjoyment smiles differed from non-enjoyment smiles by greater symmetry and by appearance changes produced in the eye region by the Cheek Raiser action. The results indicate that 6- and 7-year-old children are able to detect these differences and to interpret them with above chance-level accuracy. Sensitivity was higher for the symmetry of the smiles than for the appearance changes produced in the eye region and improved in later childhood. Copyright © 2009 John Wiley & Sons, Ltd.

19.
The six basic emotions (disgust, anger, fear, happiness, sadness, and surprise) have long been considered discrete categories that serve as the primary units of the emotion system, yet recent evidence has indicated underlying connections among them. Here we tested the underlying relationships among the six basic emotions using a perceptual learning procedure, a technique with the potential to causally change participants' emotion detection ability. We found that training on detecting a facial expression improved performance not only on the trained expression but also on other expressions. Such a transfer effect was consistently demonstrated between disgust and anger detection as well as between fear and surprise detection in two experiments (Experiment 1A, n = 70; Experiment 1B, n = 42). Notably, training on any of the six emotions could improve happiness detection, while sadness detection could only be improved by training on sadness itself, suggesting the uniqueness of happiness and sadness. In an emotion recognition test using a large sample of Chinese participants (n = 1748), the confusion between disgust and anger as well as between fear and surprise was further confirmed. Taken together, our study demonstrates that the “basic” emotions share some common psychological components, which might be the more basic units of the emotion system.

20.
Factors influencing the mood-congruence effect
庄锦英 《心理科学》2006,29(5):1104-1106
This study examined the essential characteristics of the mood-congruence effect and the factors that influence it. Pictures of varying ambiguity served as the judgment task, with mood state, personality traits, and processing mode as independent variables. The results show that personality influences the mood-congruence effect indirectly through its effect on mood, that automatic processing exhibits a "positivity bias", and that varying the picture features did not significantly change the mood-congruence effect.
