Similar Articles
1.
白鹭  毛伟宾  王蕊  张文海 《心理学报》2017,(9):1172-1183
This study used facial expressions of disgust and fear, two negative emotions with relatively low perceived similarity, as materials, and provided five emotional language labels to reduce the facilitating effect of verbal context on face recognition. Two experiments examined the influence of natural scenes and body actions on facial expression recognition, aiming to investigate how emotional congruence between facial expressions and natural scenes affects the recognition of emotional faces and the processing of natural scenes, and how adding body actions whose emotion conflicts with that of the natural scene may affect facial expression recognition. The results showed that: (1) even with an increased number of emotional language label options, the emotion of the natural scene still significantly influenced facial expression recognition; (2) when the emotions of the facial expression and the natural scene were incongruent, face recognition relied more heavily on processing of the natural scene, so the scene was processed more deeply; (3) body actions interfered to some extent with the influence of natural scenes on facial expression recognition, but natural scenes still played an important role in the recognition of emotional facial expressions.

2.
胡治国  刘宏艳 《心理科学》2015,(5):1087-1094
Accurately recognizing facial expressions is important for successful social interaction, and facial expression recognition is influenced by emotional context. This paper first reviews the facilitating effects of emotional context on facial expression recognition, mainly the emotional congruence effect within the visual channel and the cross-channel emotional integration effect; it then reviews the interfering effects of emotional context, mainly the emotional conflict effect and the semantic interference effect; next, it reviews the influence of emotional context on the recognition of neutral and ambiguous faces, mainly the context-induced emotion effect and the subliminal affective priming effect; finally, it summarizes and analyzes the existing research and offers suggestions for future work.

3.
Emotional tears tend to increase perceived sadness in facial expressions. However, it is unclear whether tears would still be seen as an indicator of sadness when a tearful face is observed in an emotional context (e.g., a touching moment during a wedding ceremony). We examined the influence of context on the sadness-enhancement effect of tears in three studies. In Study 1, participants evaluated tearful or tearless expressions presented without body postures, with emotionally neutral postures, or with emotionally congruent postures (i.e., postures indicating the same emotion as the face). The results showed that the presence of tears increased the perceived sadness of faces regardless of context. Similar results were found in Studies 2 and 3, which used visual scenes and written scenarios as contexts, respectively. Our findings demonstrate that tears on faces reliably signal sadness, even in the presence of contextual information suggesting emotions other than sadness.

4.
The present aim was to investigate how emotional expressions presented on an unattended channel affect the recognition of attended emotional expressions. In Experiments 1 and 2, facial and vocal expressions were presented simultaneously as stimulus combinations. The emotions (happiness, anger, or emotional neutrality) expressed by the face and voice were either congruent or incongruent. Subjects were asked to attend either to the visual (Experiment 1) or the auditory (Experiment 2) channel and to recognise the emotional expression. The results showed that the ignored emotional expressions significantly affected the processing of attended signals, as measured by recognition accuracy and response speed. In general, attended signals were recognised more accurately and faster in congruent than in incongruent combinations. In Experiment 3, the possibility of perceptual-level integration was eliminated by presenting the response-relevant and response-irrelevant signals separated in time. In this situation, emotional information presented on the nonattended channel ceased to affect the processing of emotional signals on the attended channel. The present results are interpreted as evidence for the view that facial and vocal emotional signals are integrated at the perceptual level of information processing and not at later response-selection stages.

5.
Using a facial emotion detection task, this study screened participants with high and low trait anxiety via the State-Trait Anxiety Inventory to examine how scenes influence the processing of facial expressions of different emotional valence and intensity, and to explore the role trait anxiety plays in this process. The results showed that: (1) the influence of scenes on emotion detection differed across facial expressions of different valence: for happy expressions, at the 100%, 80%, and 20% intensity levels, detection accuracy was significantly higher when the scene and face were emotionally congruent than when they were incongruent; for fearful expressions, higher detection accuracy in the congruent than in the incongruent condition was found at the 80%, 60%, 40%, and 20% intensity levels. (2) For the high trait anxiety group, detection accuracy did not differ significantly between the congruent and incongruent conditions, i.e., no significant scene effect appeared; for the low trait anxiety group, the difference was significant, i.e., a clear scene effect emerged. These results indicate that: (1) for facial expressions of low emotional intensity, the detection of both happy and fearful faces is more susceptible to scene influence; (2) scenes influence the detection of medium-intensity fearful faces more readily than that of medium-intensity happy faces; (3) trait anxiety moderates the influence of scenes on facial emotion detection, with high trait anxiety individuals being less affected by scene information during emotion recognition.

6.
Whether emotional information from facial expressions can be processed unconsciously is still controversial; this debate is partially due to ambiguities in distinguishing the unconscious–conscious boundary and to possible contributions from low-level (rather than emotional) properties. To avoid these possible confounding factors, we adopted an affective-priming paradigm with the continuous flash suppression (CFS) method in order to render an emotional face invisible. After presenting an invisible face (prime) with either positive or negative valence under CFS, a visible word (target) with an emotionally congruent or incongruent valence was presented. The participants were required to judge the emotional valence (positive or negative) of the target. The face prime was presented for either a short (200 ms, Exp. 1) or a long (1,000 ms, Exp. 2) duration in order to test whether the priming effect would vary with the prime duration. The consistent priming effects across the two priming durations showed that, as compared to their incongruent counterparts, congruent facial expressions can facilitate emotional judgments of subsequent words. These results suggest that the emotional information from facial expressions can be processed unconsciously.

7.
The present electromyographic study is a first step toward shedding light on the involvement of affective processes in congruent and incongruent facial reactions to facial expressions. Further, empathy was investigated as a potential mediator underlying the modulation of facial reactions to emotional faces in a competitive, a cooperative, and a neutral setting. Results revealed less congruent reactions to happy expressions, and even incongruent reactions to sad and angry expressions, in the competition condition, whereas virtually no differences occurred between the neutral and the cooperation conditions. Effects on congruent reactions were mediated by cognitive empathy, indicating that the state of empathy plays an important role in the situational modulation of congruent reactions. Further, incongruent reactions to sad and angry faces in the competition setting were mediated by the emotional reaction of joy, supporting the assumption that incongruent facial reactions are mainly based on affective processes. Additionally, strategic processes (specifically, the goal to create and maintain a smooth, harmonious interaction) were found to influence facial reactions in a cooperative mindset. Further studies are now needed to test the generalizability of these effects.

8.
The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body posing fear). Adults and 8-year-olds made more errors and had longer reaction times on incongruent trials than on congruent trials when judging sad versus fearful facial expressions, an effect that was larger in 8-year-olds. The congruency effect was reduced when faces and bodies were misaligned, providing some evidence for holistic processing. Neither adults nor 8-year-olds were affected by congruency when judging sad versus happy expressions. Evidence that congruency effects vary with age and with similarity of emotional expressions is consistent with dimensional theories and "emotional seed" models of emotion perception.

9.
Rapid visual categorization of objects in briefly flashed natural scenes is influenced by the surrounding context. The neural correlates underlying reduced categorization performance in response to incongruent object/context associations remain unclear and were investigated in the present study using fMRI. Participants were instructed to categorize objects in briefly presented scenes (exposure duration = 100 ms). Half of the scenes consisted of objects pasted into an expected (congruent) context, whereas for the other half, objects were embedded in incongruent contexts. Object categorization was more accurate and faster in congruent relative to incongruent scenes. Moreover, the two types of scenes elicited different patterns of cerebral activation. In particular, the processing of incongruent scenes induced increased activation in the parahippocampal cortex, as well as in the right frontal cortex. This higher activity may indicate additional neural processing of the novel (non-experienced) contextual associations inherent to the incongruent scenes. Moreover, our results suggest that the locus of the object-categorization impairment due to contextual incongruence is the right anterior parahippocampal cortex: activity in this region was correlated with the increase in reaction time observed for incongruent scenes. Representations of associations between objects and their usual context of appearance might be encoded in the right anterior parahippocampal cortex.

10.
The current study aimed to extend understanding of the early development of spontaneous facial reactions to observed facial expressions. Forty-six 9- to 10-month-old infants observed video clips of dynamic human facial expressions artificially created with morphing technology. The infants' facial responses were recorded, and movements of facial action unit 12 (lip-corner raising, associated with happiness) and facial action unit 4 (brow-lowering, associated with anger) were visually evaluated by multiple naïve raters. Results showed that (1) infants make congruent, observable facial responses to facial expressions, and (2) these specific facial responses are enhanced during repeated observation of the same emotional expressions. These results suggest the presence of observable congruent facial responses in the first year of life, and that they appear to be influenced by contextual information, such as repeated presentation of the target emotional expressions.

11.
Research on the relationship between context and facial expressions generally assumes a unidirectional effect of context on expressions. However, according to the model of the meaning of emotion expressions in context (MEEC) the effect should be bidirectional. The present research tested the effect of emotion expression on the interpretation of scenes. A total of 380 participants either (a) rated facial expressions with regard to the likely appraisal of the eliciting situation by the emoter, (b) appraised the scenes alone or (c) appraised scenes shown together with the expressions they supposedly elicited. The findings strongly supported the MEEC. When a scene was combined with an expression signalling a situation that is undesirable, or high in locus of control or sudden, the participants appraised the scene correspondingly. Thus, the meaning of scenes is malleable and affected by the way that people are seen to react to them.

12.
Research on facial expression recognition has long centered on the structural features of the face itself, but recent studies have found that facial expression recognition is also influenced by the surrounding context (e.g., language, body context, natural and social scenes), and context exerts an even greater influence when the faces to be recognized have similar expressions. This paper first reviews and analyzes recent research on how contexts such as language, body actions, natural scenes, and social scenes influence individuals' recognition of facial expressions; it then analyzes how factors such as cultural background, age, and anxiety level modulate these contextual effects; finally, it emphasizes that future research should attend to child participants, extend the range of emotion categories studied, and examine facial emotion perception in real life.

13.
Emotions can be recognized whether conveyed by facial expressions, linguistic cues (semantics), or prosody (voice tone). However, few studies have empirically documented the extent to which multi-modal emotion perception differs from uni-modal emotion perception. Here, we tested whether emotion recognition is more accurate for multi-modal stimuli by presenting stimuli with different combinations of facial, semantic, and prosodic cues. Participants judged the emotion conveyed by short utterances in six channel conditions. Results indicated that emotion recognition is significantly better in response to multi-modal versus uni-modal stimuli. When stimuli contained only one emotional channel, recognition tended to be higher in the visual modality (i.e., facial expressions, semantic information conveyed by text) than in the auditory modality (prosody), although this pattern was not uniform across emotion categories. The advantage for multi-modal recognition may reflect the automatic integration of congruent emotional information across channels which enhances the accessibility of emotion-related knowledge in memory.

14.
Emotional influences on memory for events have long been documented, yet surprisingly little is known about how emotional signals conveyed by contextual cues influence memory for face identity. This study investigated how positively and negatively valenced contextual emotion cues, conveyed by body expressions or background scenes, influence face memory. The results provide evidence of an emotional-context influence on face recognition memory: faces encoded in emotional (either fearful or happy) contexts (either the body or the background scene) are less well recognized than faces encoded in neutral contexts, and this effect is larger for body context than for scene context. The findings are compatible with the hypothesis that emotional signals in visual scenes trigger orienting responses, which may lead to less elaborate processing of featural details such as the identity of a face, in turn resulting in decreased face recognition memory.

15.
We examined whether the facial expressions of performers influence the emotional connotations of sung materials, and whether attention is implicated in the audio-visual integration of affective cues. In Experiment 1, participants judged the emotional valence of audio-visual presentations of sung intervals. Performances were edited such that the auditory and visual information conveyed congruent or incongruent affective connotations. In the single-task condition, participants judged the emotional connotation of the sung intervals. In the dual-task condition, participants judged the emotional connotation of the intervals while performing a secondary task. Judgements were influenced by both melodic cues and facial expressions, and the effects were undiminished by the secondary task. Experiment 2 involved identical conditions, but participants were instructed to base their judgements on the auditory information alone. Again, facial expressions influenced judgements, and the effect was undiminished by the secondary task. The results suggest that visual aspects of music performance are registered automatically and preattentively and are integrated with auditory cues.

16.
We investigated whether emotional information from facial expression and hand-movement quality is integrated when identifying the expression of a compound stimulus showing a static facial expression combined with emotionally expressive dynamic manual actions. The emotions (happiness, neutrality, and anger) expressed by the face and hands were either congruent or incongruent. In Experiment 1, participants judged whether the stimulus person was happy, neutral, or angry. Judgments were based mainly on the facial expressions but were affected by the manual expressions to some extent. In Experiment 2, participants were instructed to base their judgment on the facial expression only. An effect of hand-movement expressive quality was nevertheless observed for happy facial expressions. The results are consistent with the proposal that the perception of facial expressions of emotion can be affected by the expressive qualities of hand movements.

17.
Emotion influences memory in many ways. For example, when a mood-dependent processing shift is operative, happy moods promote global processing and sad moods direct attention to local features of complex visual stimuli. We hypothesized that an emotional context associated with to-be-learned facial stimuli could preferentially promote global or local processing. At learning, faces with neutral expressions were paired with a narrative providing either a happy or a sad context. At test, faces were presented in an upright or inverted orientation, emphasizing configural or analytical processing, respectively. A recognition advantage was found for upright faces learned in happy contexts relative to those learned in sad contexts, whereas recognition was better for inverted faces learned in sad contexts than for those learned in happy contexts. We thus infer that a positive emotional context prompted more effective storage of holistic, configural, or global facial information, whereas a negative emotional context prompted relatively more effective storage of local or feature-based facial information.

18.
It has generally been assumed that high-level cognitive and emotional processes are based on amodal conceptual information. In contrast, however, "embodied simulation" theory states that the perception of an emotional signal can trigger a simulation of the related state in the motor, somatosensory, and affective systems. To study the effect of social context on the mimicry effect predicted by the "embodied simulation" theory, we recorded the electromyographic (EMG) activity of participants when looking at emotional facial expressions. We observed an increase in embodied responses when the participants were exposed to a context involving social valence before seeing the emotional facial expressions. An examination of the dynamic EMG activity induced by two socially relevant emotional expressions (namely joy and anger) revealed enhanced EMG responses of the facial muscles associated with the related social prime (either positive or negative). These results are discussed within the general framework of embodiment theory.

19.
The influence of emotional prosody on the evaluation of emotional facial expressions was investigated in an event-related brain potential (ERP) study using a priming paradigm, the facial affective decision task. Emotional prosodic fragments of short (200-msec) and medium (400-msec) duration were presented as primes, followed by an emotionally related or unrelated facial expression (or facial grimace, which does not resemble an emotion). Participants judged whether or not the facial expression represented an emotion. ERP results revealed an N400-like differentiation for emotionally related prime-target pairs when compared with unrelated prime-target pairs. Faces preceded by prosodic primes of medium length led to a normal priming effect (larger negativity for unrelated than for related prime-target pairs), but the reverse ERP pattern (larger negativity for related than for unrelated prime-target pairs) was observed for faces preceded by short prosodic primes. These results demonstrate that brief exposure to prosodic cues can establish a meaningful emotional context that influences related facial processing; however, this context does not always lead to a processing advantage when prosodic information is very short in duration.
