Similar Literature
18 similar records found
1.
Language often accompanies facial expressions, and emotion perception is to some extent constructed through language, so language plays an important role in recognizing emotional expressions. However, it remains unclear whether emotion words also influence the recognition of briefly presented facial expressions (i.e., artificial micro-expressions). Using briefly presented emotion words and facial expressions in a Stroop-like interference task and an affective priming task, this study examined the influence of background emotion words on facial expression recognition. The results showed a significant main effect of the valence congruence between emotion words and facial expressions, indicating that background emotion words affect the recognition of briefly presented facial expressions. These findings deepen our understanding of the mechanisms underlying micro-expression recognition.
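The congruency effect reported above is, at its core, a comparison of responses on valence-congruent versus valence-incongruent trials. A minimal, purely illustrative sketch of that comparison (the trial records and values are invented, not data from the study):

```python
# Illustrative only: compute a Stroop-like congruency effect from simulated trials.
# Each trial records the valence of the background word, the valence of the
# briefly presented face, and whether the response was correct.
trials = [
    {"word": "positive", "face": "positive", "correct": True},
    {"word": "positive", "face": "negative", "correct": False},
    {"word": "negative", "face": "negative", "correct": True},
    {"word": "negative", "face": "positive", "correct": True},
    {"word": "negative", "face": "negative", "correct": True},
    {"word": "positive", "face": "negative", "correct": False},
]

def accuracy(subset):
    """Proportion of correct responses in a list of trials."""
    return sum(t["correct"] for t in subset) / len(subset)

congruent = [t for t in trials if t["word"] == t["face"]]
incongruent = [t for t in trials if t["word"] != t["face"]]

# The congruency effect is the accuracy advantage when valences match.
effect = accuracy(congruent) - accuracy(incongruent)
print(f"congruent: {accuracy(congruent):.2f}, "
      f"incongruent: {accuracy(incongruent):.2f}, effect: {effect:.2f}")
```

The same split applies to reaction times; a main effect of congruence means `effect` differs reliably from zero across participants.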

2.
Abstract: The multichannel effect in facial emotion recognition refers to the combined influence on facial expression recognition of all the sensory channels involved in stimulus processing. Using behavioral experiments, event-related potentials, and neuroimaging, researchers have examined the multiple channels through which information is acquired: body expressions, emotional voices, and specific odors can systematically influence the recognition of facial emotion. A series of studies has explored the time course of these multichannel effects, their underlying mechanisms, and the brain regions they activate. Future research could integrate brain-network techniques and new methods from other disciplines to examine in finer detail the role played by the physical properties of the information carried by each channel. Keywords: facial expression; multichannel effect; facial emotion recognition; body expression; emotional voice; olfactory signal

3.
Research on facial expression recognition has long centered on the structural features of the face itself, but recent studies have found that recognition is also affected by the context in which the face appears (e.g., language, body context, natural and social scenes), and the influence of context is especially strong when the expressions to be recognized are similar. This article first reviews recent research on how contexts such as language, body movements, natural scenes, and social scenes influence individuals' recognition of facial expressions; it then analyzes how cultural background, age, anxiety level, and other factors modulate these context effects; finally, it argues that future research should study child participants, extend the range of emotion categories examined, and attend to facial emotion perception in real-life settings.

4.
This study used a facial emotion detection task, screening participants for high and low trait anxiety with the State-Trait Anxiety Inventory, to examine how scenes influence the processing of facial expressions of different valences and intensities, and to explore the role trait anxiety plays. The results showed: (1) The influence of scenes on emotion detection differed by expression valence. For happy expressions, at the 100%, 80%, and 20% intensity levels, detection accuracy was significantly higher when scene and facial emotion were congruent than when incongruent; for fearful expressions, higher accuracy in the congruent condition was found at the 80%, 60%, 40%, and 20% intensity levels. (2) For the high-trait-anxiety group, detection accuracy did not differ significantly between congruent and incongruent conditions, i.e., no significant scene effect appeared; the low-trait-anxiety group showed a significant difference, i.e., a significant scene effect. These results indicate that: (1) for low-intensity expressions, detection of both happy and fearful faces is more susceptible to scene influence; (2) compared with medium-intensity happy faces, scenes more readily affect the detection of medium-intensity fearful faces; (3) trait anxiety moderates the influence of scenes on facial emotion detection, with high-trait-anxiety individuals being less affected by scene information during emotion recognition.

5.
To test the role of contextual information in the processing and recognition of facial expressions, two experiments examined how the emotionality and self-relevance of contextual information affect the processing of neutral faces and of fearful faces of varying intensity. The results showed that neutral faces were rated higher in valence in positive contexts and higher in arousal in self-relevant contexts, and that fear in faces was more easily detected in negative contexts. Thus, the context effect in facial expression processing manifests as the induction and enhancement of emotion for neutral faces, and as the facilitation of judgments of faces of varying emotional intensity when the emotions are congruent.

6.
韦程耀  赵冬梅 《心理科学进展》2012,20(10):1614-1622
近年来面部表情的跨文化研究显示出更多的跨文化一致性和差异性证据。自发面部表情的表达和识别、组内优势效应以及面部表情信息的上下不对称性已成为该领域的研究热点。方言理论、中国民间模型和EMPATH模型从三种不同的角度对面部表情跨文化研究的结果进行了理论解释。而表达规则和解码规则以及语言效应是面部表情跨文化表达与识别的重要影响因素。今后, 面部表情跨文化的表达和识别研究应更加关注面部表情特征信息和影响因素这两个方面。  相似文献   

7.
This study explored the influence of group information on facial expression recognition through three experiments. The results showed: (1) The emotional states of surrounding faces affect recognition of a target face's emotion: reaction times were significantly shorter, and recognition accuracy higher, when the two were emotionally congruent than when incongruent. (2) Group information moderates the influence of surrounding faces' emotions on the target face, and thereby affects expression recognition. Specifically, in the group condition, recognition of the target expression was affected by the emotional states of surrounding faces: when surrounding and target emotions were congruent, matching the expectation of emotional consistency among group members that individuals form from perceptual cues, recognition was more accurate and faster than when they were incongruent; in the non-group condition, individuals were unaffected by the surrounding faces' emotional states. These results indicate that individuals can recognize facial emotion based on the social relations among interacting people: when a group is present, they form an expectation that group members' emotions are consistent, which in turn influences facial expression recognition.

8.
To investigate autistic children's existing ability, and its limits, in identifying angry and happy facial expressions at low (10%, 30%), medium (40%, 60%), and high (70%, 90%) intensities, an expression-labeling paradigm was used, with 3D synthesized facial expression stimuli of different intensities presented on a computer via E-prime. Ten autistic children, ten typically developing children, and ten children with intellectual disabilities took part. The results showed that autistic children had a facial expression recognition deficit at low intensities, with recognition accuracy significantly lower than that of both children with intellectual disabilities and typically developing children; autistic children's accuracy was positively correlated with expression intensity, with higher accuracy at greater intensities; and at low intensity, autistic children recognized happy expressions more accurately than angry ones, whereas at medium and high intensities a significant anger-superiority effect emerged.
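The reported positive correlation between expression intensity and recognition accuracy is an ordinary Pearson correlation over the intensity levels. A self-contained sketch (the intensity levels mirror the design above; the accuracy values are invented for illustration):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Intensity levels from the design (in percent); accuracies are hypothetical.
intensity = [10, 30, 40, 60, 70, 90]
accuracy = [0.35, 0.48, 0.55, 0.70, 0.78, 0.88]

r = pearson(intensity, accuracy)
print(f"r = {r:.3f}")  # strongly positive for near-monotonic data like these
```

A strongly positive `r` is what the abstract summarizes as "the greater the intensity, the higher the accuracy."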

9.
白鹭  毛伟宾  王蕊  张文海 《心理学报》2017,(9):1172-1183
Using disgusted and fearful facial expressions, two negative emotions of low perceived similarity, and providing five emotion-word label options to reduce the facilitating effect of verbal context on face recognition, this study conducted two experiments on the influence of natural scenes and body movements on facial expression recognition. It examined how emotional congruence between facial expressions and natural scenes affects the recognition of emotional faces and the processing of the scenes, as well as the possible influence on expression recognition of adding body movements whose emotion conflicts with the natural scene. The results showed: (1) even with the increased number of emotion-word label options, the effect of the natural scene's emotion on expression recognition remained significant; (2) when the emotions of face and scene were incongruent, face recognition depended more on processing of the natural scene, so the scene was processed more deeply; (3) body movements interfere to some extent with the scene's influence on expression recognition, but natural scenes still play an important role in recognizing emotional facial expressions.

10.
Facial expression processing is an important topic in the psychology of emotion and a focus of cognitive neuroscience. Using facial expression pictures of different emotional valences as materials and high school students as participants, this study adopted a recognition-judgment paradigm to examine how high school students recognize and judge emotional faces. The results showed that the level of face processing had no significant effect on participants' expression judgments; the two emotional polarities did affect participants' recognition of the expression pictures; and participants judged positive emotions faster than negative ones.

11.
Facial emotional expressions can serve both as emotional stimuli and as communicative signals. The research reported here was conducted to illustrate how responses to both roles of facial emotional expressions unfold over time. As an emotion elicitor, a facial emotional expression (e.g., a disgusted face) activates a response that is similar to responses to other emotional stimuli of the same valence (e.g., a dirty, nonflushed toilet). As an emotion messenger, the same facial expression (e.g., a disgusted face) serves as a communicative signal by also activating the knowledge that the sender is experiencing a specific emotion (e.g., the sender feels disgusted). By varying the duration of exposure to disgusted, fearful, angry, and neutral faces in two subliminal-priming studies, we demonstrated that responses to faces as emotion elicitors occur prior to responses to faces as emotion messengers, and that both types of responses may unfold unconsciously.

12.
Research has largely neglected the effects of gaze direction cues on the perception of facial expressions of emotion. It was hypothesized that when gaze direction matches the underlying behavioral intent (approach-avoidance) communicated by an emotional expression, the perception of that emotion would be enhanced (i.e., shared signal hypothesis). Specifically, the authors expected that (a) direct gaze would enhance the perception of approach-oriented emotions (anger and joy) and (b) averted eye gaze would enhance the perception of avoidance-oriented emotions (fear and sadness). Three studies supported this hypothesis. Study 1 examined emotional trait attributions made to neutral faces. Study 2 examined ratings of ambiguous facial blends of anger and fear. Study 3 examined the influence of gaze on the perception of highly prototypical expressions.

13.
Previous choice reaction time studies have provided consistent evidence for faster recognition of positive (e.g., happy) than negative (e.g., disgusted) facial expressions. A predominance of positive emotions in normal contexts may partly explain this effect. The present study used pleasant and unpleasant odors to test whether emotional context affects the happy face advantage. Results from 2 experiments indicated that happiness was recognized faster than disgust in a pleasant context, but this advantage disappeared in an unpleasant context because of the slow recognition of happy faces. Odors may modulate the functioning of those emotion-related brain structures that participate in the formation of the perceptual representations of the facial expressions and in the generation of the conceptual knowledge associated with the signaled emotion.

14.
Emotional tears tend to increase perceived sadness in facial expressions. However, it is unclear whether tears would still be seen as an indicator of sadness when a tearful face is observed in an emotional context (e.g., a touching moment during a wedding ceremony). We examine the influence of context on the sadness enhancement effect of tears in three studies. In Study 1, participants evaluated tearful or tearless expressions presented without body postures, with emotionally neutral postures, or with emotionally congruent postures (i.e., postures indicating the same emotion as the face). The results show that the presence of tears increases the perceived sadness of faces regardless of context. Similar results are found in Studies 2 and 3, which used visual scenes and written scenarios as contexts, respectively. Our findings demonstrate that tears on faces reliably indicate sadness, even in the presence of contextual information that suggests non-sadness emotions.

15.
This study investigates the discrimination accuracy of emotional stimuli in subjects with major depression compared with healthy controls, using photographs of facial expressions of varying emotional intensities. The sample included 88 unmedicated male and female subjects, aged 18-56 years, with major depressive disorder (n = 44) or no psychiatric illness (n = 44), who judged the emotion of 200 facial pictures displaying an expression between 10% (90% neutral) and 80% (nuanced) emotion. Stimuli were presented in 10% increments to generate a range of intensities, each displayed for a 500-ms duration. Compared with healthy volunteers, depressed subjects showed very good recognition accuracy for sad faces but impaired recognition accuracy for other emotions (e.g., harsh and surprised expressions) of subtle emotional intensity. Recognition accuracy improved for both groups as a function of increased intensity on all emotions. Finally, as depressive symptoms increased, recognition accuracy increased for sad faces but decreased for surprised faces. Overall, depressed subjects showed an impaired ability to accurately identify subtle facial expressions, indicating that depressive symptoms influence the accuracy of emotion recognition.
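Stimuli graded in 10% steps of emotional intensity, as in this design, are typically produced by morphing between a neutral and a fully emotional photograph of the same face. A minimal sketch of the per-pixel linear blend behind such a series (grayscale images as flat pixel lists; the pixel values are invented):

```python
def blend(neutral, emotional, intensity):
    """Linearly interpolate pixel values: 0.0 -> fully neutral, 1.0 -> fully emotional."""
    assert 0.0 <= intensity <= 1.0
    return [(1 - intensity) * n + intensity * e for n, e in zip(neutral, emotional)]

# Tiny "images" as flat grayscale pixel lists (hypothetical values).
neutral_face = [100, 120, 130, 110]
emotional_face = [140, 80, 150, 90]

# A 10%-step intensity series, as in the study design (10% .. 90%).
series = {round(i * 0.1, 1): blend(neutral_face, emotional_face, i * 0.1)
          for i in range(1, 10)}
print(series[0.1])  # 10% emotion: pixel values stay close to the neutral image
```

Real morphing software also warps facial geometry between the two endpoint images, but the intensity parameter plays the same interpolating role as here.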

16.
In a sample of 325 college students, we examined how context influences judgments of facial expressions of emotion, using a newly developed facial affect recognition task in which emotional faces are superimposed upon emotional and neutral contexts. This research used a larger sample size than previous studies, included more emotions, varied the intensity level of the expressed emotion to avoid potential ceiling effects from very easy recognition, did not explicitly direct attention to the context, and aimed to understand how recognition is influenced by non-facial information, both situationally-relevant and situationally-irrelevant. Both accuracy and RT varied as a function of context. For all facial expressions of emotion other than happiness, accuracy increased when the emotion of the face and context matched, and decreased when they mismatched. For all emotions, participants responded faster when the emotion of the face and image matched and slower when they mismatched. Results suggest that the judgment of the facial expression is itself influenced by the contextual information instead of both being judged independently and then combined. Additionally, the results have implications for developing models of facial affect recognition and indicate that there are factors other than the face that can influence facial affect recognition judgments.

17.
Facial expression and gaze perception are thought to share brain mechanisms but behavioural interactions, especially from gaze-cueing paradigms, are inconsistent. We conducted a series of gaze-cueing studies using dynamic facial cues to examine orienting across different emotional expression and task conditions, including face inversion. Across experiments, at a short stimulus-onset asynchrony (SOA) we observed both an expression effect (i.e., faster responses when the face was emotional versus neutral) and a cue validity effect (i.e., faster responses when the target was gazed-at), but no interaction between validity and emotion. Results from face inversion suggest that the emotion effect may have been due to both facial expression and stimulus motion. At longer SOAs, validity and emotion interacted such that cueing by emotional faces, fearful faces in particular, was enhanced relative to neutral faces. These results converge with a growing body of evidence that suggests that gaze and expression are initially processed independently and interact at later stages to direct attentional orienting.

18.
This study explored a bidirectional impact on the recognition accuracy of various facial expressions deriving from both the observer and sender in a sample of Chinese participants. A facial manipulation task was used to examine the ability of an observer's facial feedback to modulate the recognition of various facial expressions. Furthermore, the effect of a sender's facial expression with an open or closed mouth on recognition accuracy was investigated. The results showed that only recognition accuracy of a sad facial expression was influenced simultaneously by bidirectional sources from a sender and observer. Moreover, the impact of the unidirectional cue of a sender's facial feature (i.e., mouth openness) on happy and neutral faces was found to influence the recognition accuracy of these faces, but not the observer's bodily state. These findings indicate that the bidirectional impact derived from an observer and sender on facial expression recognition accuracy differs for emotional and neutral expressions.

