Similar Articles
1.
We examined 7-month-old infants' processing of emotionally congruent and incongruent face-voice pairs using ERP measures. Infants watched facial expressions (happy or angry) and, after a delay of 400 ms, heard a word spoken with a prosody that was either emotionally congruent or incongruent with the face being presented. The ERP data revealed that the amplitude of a negative component and a subsequent positive component in infants' ERPs varied as a function of crossmodal emotional congruity. An emotionally incongruent prosody elicited a larger negative component in infants' ERPs than did an emotionally congruent prosody. Conversely, the amplitude of infants' positive component was larger to emotionally congruent than to incongruent prosody. Previous work has shown that an attenuation of the negative component and an enhancement of the later positive component in infants' ERPs reflect the recognition of an item. Thus, the current findings suggest that 7-month-olds integrate emotional information across modalities and recognize common affect in the face and voice.

2.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between the congruent and baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted for the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.

3.
Previous research has shown that redundant information in faces and voices leads to faster emotional categorization compared to incongruent emotional information, even when attending to only one modality. The aim of the present study was to test whether these crossmodal effects are predominantly due to a response conflict rather than to interference at earlier (e.g., perceptual) processing stages. In Experiment 1, participants had to categorize the valence and rate the intensity of happy, sad, angry and neutral unimodal or bimodal face–voice stimuli. They were asked to rate either the facial or vocal expression and ignore the emotion expressed in the other modality. Participants responded faster and more accurately to emotionally congruent compared to incongruent face–voice pairs in both the Attend Face and the Attend Voice conditions. Moreover, when attending to faces, emotionally congruent bimodal stimuli were processed more efficiently than unimodal visual stimuli. To study the role of a possible response conflict, Experiment 2 used a modified paradigm in which emotional and response conflicts were disentangled. Incongruency effects were significant even in the absence of response conflicts. The results suggest that emotional signals available through different sensory channels are automatically combined prior to response selection.

4.
Recognition of facial expressions has traditionally been investigated by presenting facial expressions without any context information. However, we rarely encounter an isolated facial expression; usually, we perceive a person's facial reaction as part of the surrounding context. In the present study, we addressed the question of whether emotional scenes influence the explicit recognition of facial expressions. In three experiments, participants were required to categorize facial expressions (disgust, fear, happiness) that were shown against backgrounds of natural scenes with either a congruent or an incongruent emotional significance. A significant interaction was found between facial expressions and the emotional content of the scenes, showing a response advantage for facial expressions accompanied by congruent scenes. This advantage was robust against increasing task load. Taken together, the results show that the surrounding scene is an important factor in recognizing facial expressions.

5.
We investigated whether emotional information from facial expression and hand movement quality was integrated when identifying the expression of a compound stimulus showing a static facial expression combined with emotionally expressive dynamic manual actions. The emotions (happiness, neutrality, and anger) expressed by the face and hands were either congruent or incongruent. In Experiment 1, the participants judged whether the stimulus person was happy, neutral, or angry. Judgments were based mainly on the facial expressions, but were affected by the manual expressions to some extent. In Experiment 2, the participants were instructed to base their judgment on the facial expression only. An effect of hand movement expressive quality was observed for happy facial expressions. The results are consistent with the proposal that the perception of facial expressions of emotion can be affected by the expressive qualities of hand movements.

6.
Whether emotional information from facial expressions can be processed unconsciously is still controversial; this debate is partially due to ambiguities in distinguishing the unconscious–conscious boundary and to possible contributions from low-level (rather than emotional) properties. To avoid these possible confounding factors, we adopted an affective-priming paradigm with the continuous flash suppression (CFS) method in order to render an emotional face invisible. After presenting an invisible face (prime) with either positive or negative valence under CFS, a visible word (target) with an emotionally congruent or incongruent valence was presented. The participants were required to judge the emotional valence (positive or negative) of the target. The face prime was presented for either a short (200 ms, Exp. 1) or a long (1,000 ms, Exp. 2) duration in order to test whether the priming effect would vary with the prime duration. The consistent priming effects across the two priming durations showed that, as compared to their incongruent counterparts, congruent facial expressions can facilitate emotional judgments of subsequent words. These results suggest that the emotional information from facial expressions can be processed unconsciously.

7.
白鹭, 毛伟宾, 王蕊, 张文海. 《心理学报》, 2017, (9): 1172-1183
This study used disgusted and fearful facial expressions, two negative emotions with low perceptual similarity, as materials, and provided five emotional verbal labels to reduce the facilitating effect of verbal context on face recognition. Two experiments investigated the influence of natural scenes and body actions on facial expression recognition, examining how emotional congruence between facial expressions and natural scenes affects the recognition of emotional faces and the processing of the scenes, and how adding body actions whose emotion conflicts with that of the natural scene might affect facial expression recognition. The results showed that (1) even with an increased number of emotional verbal label options, the emotion of the natural scene still had a significant influence on facial expression recognition; (2) when the emotions of the facial expression and the natural scene were incongruent, face recognition relied more heavily on processing of the natural scene, resulting in deeper processing of the scene; and (3) body actions interfered to some extent with the influence of natural scenes on facial expression recognition, but natural scenes still played an important role in recognizing emotional facial expressions.

8.
Verbal framing effects have been widely studied, but little is known about how people react to multiple framing cues in risk communication, where verbal messages are often accompanied by facial and vocal cues. We examined joint and differential effects of verbal, facial, and vocal framing on risk preference in hypothetical monetary and life–death situations. In the multiple framing condition with the factorial design (2 verbal frames × 2 vocal tones × 4 basic facial expressions × 2 task domains), each scenario was presented auditorily with a written message on a photo of the messenger's face. Compared with verbal framing effects resulting in preference reversal, multiple frames made risky choice more consistent and shifted risk preference without reversal. Moreover, a positive tone of voice increased risk‐seeking preference in women. When the valence of facial and vocal cues was incongruent with verbal frame, verbal framing effects were significant. In contrast, when the affect cues were congruent with verbal frame, framing effects disappeared. These results suggest that verbal framing is given higher priority when other affect cues are incongruent. Further analysis revealed that participants were more risk‐averse when positive affect cues (positive tone or facial expressions) were congruently paired with a positive verbal frame whereas participants were more risk‐seeking when positive affect cues were incongruent with the verbal frame. In contrast, for negative affect cues, congruency promoted risk‐seeking tendency whereas incongruency increased risk‐aversion. Overall, the results show that facial and vocal cues interact with verbal framing and significantly affect risk communication.

9.
Recognition of emotional facial expressions is a central area in the psychology of emotion. This study presents two experiments. The first experiment analyzed recognition accuracy for the basic emotions of happiness, anger, fear, sadness, surprise, and disgust. Thirty pictures (five for each emotion) were displayed to 96 participants to assess recognition accuracy. The results showed that recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy. Information that was either congruent or incongruent with a facial expression was displayed before the pictures of facial expressions were presented. The results of the second experiment showed that congruent information improved facial expression recognition, whereas incongruent information impaired it.

10.
We examined interference effects of emotionally associated background colours during fast valence categorisations of negative, neutral and positive expressions. According to implicitly learned colour-emotion associations, facial expressions were presented with colours that either matched the valence of these expressions or did not. Experiment 1 included infrequent non-matching trials and Experiment 2 a balanced ratio of matching and non-matching trials. Besides general modulatory effects of contextual features on the processing of facial expressions, we found differential effects depending on the valence of the target facial expressions. Whereas performance accuracy was mainly affected for neutral expressions, performance speed was specifically modulated by emotional expressions, indicating some susceptibility of emotional expressions to contextual features. Experiment 3 used two further colour-emotion combinations, but revealed only marginal interference effects, most likely due to missing colour-emotion associations. The results are discussed with respect to the inherent processing demands of emotional and neutral expressions and their susceptibility to contextual interference.

11.
Two experiments using identical stimuli were run to determine whether the vocal expression of emotion affects the speed with which listeners can identify emotion words. Sentences were spoken in an emotional tone of voice (Happy, Disgusted, or Petrified), or in a Neutral tone of voice. Participants made speeded lexical decisions about the word or pseudoword in sentence-final position. Critical stimuli were emotion words that were either semantically congruent or incongruent with the tone of voice of the sentence. Experiment 1, with randomised presentation of tone of voice, showed no effect of congruence or incongruence. Experiment 2, with blocked presentation of tone of voice, did show such effects: Reaction times for congruent trials were faster than those for baseline trials and incongruent trials. Results are discussed in terms of expectation (e.g., Kitayama, 1990, 1991, 1996) and emotional connotation, and implications for models of word recognition are considered.

12.
The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body posing fear). Adults and 8-year-olds made more errors and had longer reaction times on incongruent trials than on congruent trials when judging sad versus fearful facial expressions, an effect that was larger in 8-year-olds. The congruency effect was reduced when faces and bodies were misaligned, providing some evidence for holistic processing. Neither adults nor 8-year-olds were affected by congruency when judging sad versus happy expressions. Evidence that congruency effects vary with age and with similarity of emotional expressions is consistent with dimensional theories and "emotional seed" models of emotion perception.

13.
Faces and bodies are typically encountered simultaneously, yet little research has explored the visual processing of the full person. Specifically, it is unknown whether the face and body are perceived as distinct components or as an integrated, gestalt-like unit. To examine this question, we investigated whether emotional face-body composites are processed in a holistic-like manner by using a variant of the composite face task, a measure of holistic processing. Participants judged facial expressions combined with emotionally congruent or incongruent bodies that have been shown to influence the recognition of emotion from the face. Critically, the faces were either aligned with the body in a natural position or misaligned in a manner that breaks the ecological person form. Converging data from 3 experiments confirm that breaking the person form reduces the facilitating influence of congruent body context as well as the impeding influence of incongruent body context on the recognition of emotion from the face. These results show that faces and bodies are processed as a single unit and support the notion of a composite person effect analogous to the classic effect described for faces.

14.
胡治国, 刘宏艳. 《心理科学》, 2015, (5): 1087-1094
Accurate recognition of facial expressions is of great importance for successful social interaction, and such recognition is influenced by emotional context. This review first describes the facilitating effects of emotional context on facial expression recognition, manifested mainly as the emotional congruency effect within the visual modality and crossmodal emotional integration effects. It then describes the impeding effects of emotional context, manifested mainly as the emotional conflict effect and semantic interference effects. Next, it reviews the influence of emotional context on the recognition of neutral and ambiguous faces, manifested mainly as context-induced emotion effects and subliminal affective priming effects. Finally, it summarizes and analyzes the existing research and offers suggestions for future studies.

15.
The current study examined whether the interaction between emotion and executive control (EC) is modulated by how the emotional information is processed, namely whether it is explicitly processed, implicitly processed or passively viewed. In each trial, a negative or neutral picture preceded an arrow-flanker stimulus that was congruent or incongruent. Incongruent stimuli are known to recruit EC. Explicit processing of the pictures (Experiment 1a), which required responding to their emotional content, resulted in emotional interference for congruent but not for incongruent stimuli. Similar effects were shown for the passive viewing condition (Experiment 2). In contrast, implicit processing (Experiment 1b), which required responding to non-emotional content, resulted in emotional interference for both congruent and incongruent stimuli. Thus, our findings indicate that implicit emotional processing affects performance independently of EC recruitment. In contrast, explicit emotional processing and passive viewing of emotional pictures lead to reduced emotional interference when EC is recruited.

16.
We examined whether the facial expressions of performers influence the emotional connotations of sung materials, and whether attention is implicated in the audio-visual integration of affective cues. In Experiment 1, participants judged the emotional valence of audio-visual presentations of sung intervals. Performances were edited such that auditory and visual information conveyed congruent or incongruent affective connotations. In the single-task condition, participants judged the emotional connotation of sung intervals. In the dual-task condition, participants judged the emotional connotation of intervals while performing a secondary task. Judgements were influenced by melodic cues and facial expressions, and these effects were undiminished by the secondary task. Experiment 2 involved identical conditions, but participants were instructed to base judgements on auditory information alone. Again, facial expressions influenced judgements, and the effect was undiminished by the secondary task. The results suggest that visual aspects of music performance are automatically and preattentively registered and integrated with auditory cues.

17.
The biological predisposition to resonate emotionally with another person is regarded as a critical aspect of social interaction. There are, however, situations in which the emotional response to others is discordant with their emotional experience. Using event-related potentials, the present study investigated the neural underpinnings of this phenomenon, termed "counterempathy." Participants played a card game under the belief that they were playing jointly with another player who sat in an adjoining room and whose smiles and frowns in response to winning or losing in the game they could observe on a computer screen. Depending upon the experimental setting, the other player's facial expressions conveyed either of two opposing values to the participant. In the empathic setting, his emotional expressions were congruent with the participant's outcome (win or loss), whereas in the counterempathic setting, they indicated incongruent outcomes. Results revealed a reversed pattern of brain responses to facial expressions between congruent and incongruent conditions at ~170 ms (N170) over the temporal cortex. That is, the N170 was sensitive to frowns in the congruent condition and to smiles in the incongruent condition, both of which indicated losses for the participant. Furthermore, frowns in the incongruent condition yielded a larger medial frontal negativity (MFN) over the medial prefrontal cortex, which correlated with the subjective pleasantness of one's own winning in the incongruent condition. These findings demonstrate (1) that counterempathic responses are associated with modulation of early sensory processing of emotional cues, (2) that the MFN is sensitive to the detection of another person's loss during positive inequity, and (3) that the MFN is associated with a pleasant feeling during positive inequity, possibly related to "Schadenfreude."

18.
In the present study we considered the two factors that have been advocated as playing a role in emotional attention: perception of gaze direction and facial expression of emotions. Participants performed an oculomotor task in which they had to make a saccade towards one of the two lateral targets, depending on the colour of the fixation dot that appeared at the centre of the computer screen. At different time intervals (stimulus onset asynchronies, SOAs: 50, 100, 150 ms) following the onset of the dot, a picture of a human face (gazing either to the right or to the left) was presented at the centre of the screen. The gaze direction of the face could be congruent or incongruent with respect to the location of the target, and the expression could be neutral or angry. In Experiment 1 the facial expressions were presented randomly in a single block, whereas in Experiment 2 they were shown in separate blocks. Latencies for correct saccades and the percentage of errors (saccade direction errors) were considered in the analyses. Results showed that incongruent trials produced a significantly higher percentage of saccade direction errors than congruent trials, confirming that gaze direction, even when task-irrelevant, interferes with the accuracy of the observer's oculomotor behaviour. The angry expression was found to hold attention longer than the neutral one, producing delayed saccade latencies. This was particularly evident at the 100 ms SOA and for incongruent trials. Emotional faces may thus exert a modulatory effect on overt attention mechanisms.
