Similar Literature
20 similar articles retrieved (search time: 15 ms).
1.
The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body posing fear). Adults and 8-year-olds made more errors and had longer reaction times on incongruent trials than on congruent trials when judging sad versus fearful facial expressions, an effect that was larger in 8-year-olds. The congruency effect was reduced when faces and bodies were misaligned, providing some evidence for holistic processing. Neither adults nor 8-year-olds were affected by congruency when judging sad versus happy expressions. Evidence that congruency effects vary with age and with similarity of emotional expressions is consistent with dimensional theories and "emotional seed" models of emotion perception.

2.
People can discriminate cheaters from cooperators by their appearance. However, successful cheater detection can be thwarted by a posed smile, which cheaters display with greater emotional intensity than cooperators. The present study investigated the underlying neural and cognitive mechanisms of a posed smile, which cheaters use to conceal their anti-social attitude, in terms of hemifacial asymmetries of emotional expressions. Raters (50 women and 50 men) performed trustworthiness judgments on composite faces of cheaters and cooperators, operationally defined by the number of deceptions in an economic game. The left–left composites of cheaters were judged to be more trustworthy than the right–right composites when the models posed a happy expression. This left-hemiface advantage for the happy expression was not observed for cooperators. In addition, the left-hemiface advantage of cheaters disappeared for the angry expression. These results suggest that cheaters used the left hemiface, which is connected to the emotional side of the brain (i.e., the right hemisphere), more effectively than the right hemiface to conceal their anti-social attitude.

3.
Older adults perceive less intense negative emotion in facial expressions compared to younger counterparts. Prior research has also demonstrated that mood alters facial emotion perception. Nevertheless, there is little evidence evaluating the interactive effects of age and mood on emotion perception. This study investigated the effects of sad mood on younger and older adults' perception of emotional and neutral faces. Participants rated the intensity of stimuli while listening to sad music and in silence. Measures of mood were administered. Younger and older participants rated sad faces as displaying stronger sadness when they experienced sad mood. While younger participants showed no influence of sad mood on happiness ratings of happy faces, older adults rated happy faces as conveying less happiness when they experienced sad mood. This study demonstrates how emotion perception can change when a controlled mood induction procedure is used to alter mood in younger and older participants.

4.
Facial electromyographic (EMG) activity at the corrugator and zygomatic muscle regions was recorded from 37 subjects while they posed happy and sad facial expressions. Analysis showed that while a happy facial expression was posed, mean EMG activity at the left zygomatic muscle region was the highest, followed by the right zygomatic, left corrugator, and right corrugator muscle regions. While a sad facial expression was posed, mean EMG activity at the left corrugator muscle region was the highest, followed by the right corrugator, left zygomatic, and right zygomatic muscle regions. Further analysis indicated that facial EMG activity was stronger on the left side of the face than on the right while posing both happy and sad expressions.
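The laterality comparison described in this abstract amounts to ranking region means and pooling left-side versus right-side activity. A minimal sketch follows; the region names mirror the abstract, but the amplitude values are hypothetical placeholders, not data from the study:

```python
from statistics import mean

# Hypothetical per-trial mean EMG amplitudes (arbitrary units) recorded at
# each muscle region while a happy expression was posed; illustrative only.
emg_happy = {
    "left_zygomatic":   [12.1, 13.4, 11.8],
    "right_zygomatic":  [10.2, 11.0, 9.9],
    "left_corrugator":  [4.1, 3.8, 4.5],
    "right_corrugator": [3.2, 3.0, 3.6],
}

# Mean activity per region, ranked from strongest to weakest
region_means = {region: mean(vals) for region, vals in emg_happy.items()}
ranking = sorted(region_means, key=region_means.get, reverse=True)

# Pooled left-side minus pooled right-side activity: a positive value
# indicates stronger left-hemiface activity, the pattern the study reports
left = mean(v for r, vals in emg_happy.items() if r.startswith("left") for v in vals)
right = mean(v for r, vals in emg_happy.items() if r.startswith("right") for v in vals)
asymmetry = left - right
```

With these placeholder values the ranking reproduces the ordering reported for happy expressions (left zygomatic highest, right corrugator lowest) and the asymmetry is positive.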

5.
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced the visual perception of facial expressions. We presented a sound clip of laughter simultaneously with a happy, a neutral, or a sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of the happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces, laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distractor faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a reexamination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may be similarly context dependent.

6.
Facial expressions are critical for effective social communication, and as such may be processed by the visual system even when it might be advantageous to ignore them. Previous research has shown that categorising emotional words was impaired when faces of a conflicting valence were simultaneously presented. In the present study, we examined whether emotional word categorisation would also be impaired when faces of the same (negative) valence but different emotional category (either angry, sad or fearful) were simultaneously presented. Behavioural results provided evidence for involuntary processing of basic emotional facial expression category, with slower word categorisation when the face and word categories were incongruent (e.g., angry word and sad face) than congruent (e.g., angry word and angry face). Event-related potentials (ERPs) time-locked to the presentation of the word–face pairs also revealed that emotional category congruency effects were evident from approximately 170 ms after stimulus onset.
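The behavioural congruency effect reported here, and in several of the other abstracts on this page, is simply the mean reaction time on incongruent trials minus the mean on congruent trials. A toy sketch with fabricated reaction times, for illustration only:

```python
from statistics import mean

# Fabricated reaction times (ms); not data from the study
congruent_rts = [612, 598, 640, 605]    # e.g., angry word + angry face
incongruent_rts = [655, 661, 648, 670]  # e.g., angry word + sad face

# A positive difference indicates interference from the incongruent face
congruency_effect = mean(incongruent_rts) - mean(congruent_rts)
```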

8.
Emotional facial expressions are often asymmetrical, with the left half of the face typically displaying the stronger affective intensity cues. During facial perception, however, most right-handed individuals are biased toward facial affect cues projecting to their own left visual hemifield. Consequently, mirror-reversed faces are typically rated as more emotionally intense than when presented normally. Mirror-reversal permits the most intense side of the expresser's face to project to the visual hemifield biased for processing facial affect cues. This study replicated the mirror-reversal effect in 21 men and 49 women (aged 18-52 years) using a videotaped free-viewing presentation, but also showed that the effect of facial orientation is moderated by the sex of the perceiver. The mirror-reversal effect was significant for men but not for women, suggesting possible sex differences in the cerebral organization of systems for facial perception.

9.
We investigated the source of the visual search advantage of some emotional facial expressions. An emotional face target (happy, surprised, disgusted, fearful, angry, or sad) was presented in an array of neutral faces. Detection was faster for happy targets, with angry and, especially, sad targets detected more poorly. Physical image properties (e.g., luminance) were ruled out as a potential source of these differences in visual search. In contrast, the search advantage is partly due to facilitated processing of affective content, as shown by an emotion identification task. Happy expressions were identified faster than the other expressions and were less likely to be confused with neutral faces, whereas misjudgements occurred more often for angry and sad expressions. Nevertheless, the distinctiveness of some local features (e.g., teeth) that are consistently associated with emotional expressions plays the strongest role in the search advantage pattern. When the contribution of these features to visual search was factored out statistically, the advantage disappeared.

10.
There is evidence that specific regions of the face such as the eyes are particularly relevant for the decoding of emotional expressions, but it has not been examined whether observers' scan paths vary for facial expressions with different emotional content. In this study, eye-tracking was used to monitor the scanning behavior of healthy participants while looking at different facial expressions. Locations of fixations and their durations were recorded, and a dominance ratio (i.e., eyes and mouth relative to the rest of the face) was calculated. Across all emotional expressions, initial fixations were most frequently directed to either the eyes or the mouth. For sad facial expressions in particular, participants directed the initial fixation to the eyes more frequently than for any other expression. For happy facial expressions, participants fixated the mouth region for a longer time across all trials. For fearful and neutral facial expressions, the dominance ratio indicated that the eyes and mouth are equally important. However, in sad and angry facial expressions, the eyes received more attention than the mouth. These results confirm the relevance of the eyes and mouth in emotional decoding, but they also demonstrate that not all facial expressions with different emotional content are decoded equally. Our data suggest that people look at the regions that are most characteristic of each emotion.

11.
Much research on emotional facial expression employs posed expressions and expressive subjects. To test the generalizability of this research to more spontaneous expressions of both expressive and nonexpressive posers, subjects engaged in happy, sad, angry, and neutral imagery, and voluntarily posed happy, sad, and angry facial expressions while facial muscle activity (brow, cheek, and mouth regions) and autonomic activity (skin resistance and heart period) were recorded. Subjects were classified as expressive or nonexpressive on the basis of the intensity of their posed expressions. The posed and imagery-induced expressions were similar, but not identical. Brow activity present in the imagery-induced sad expressions was weak or absent in the posed ones. Both nonexpressive and expressive subjects demonstrated similar heart rate acceleration during emotional imagery and demonstrated similar posed and imagery-induced happy expressions, but nonexpressive subjects showed little facial activity during both their posed and imagery-induced sad and angry expressions. The implications of these findings are discussed.

12.
People implicitly associate different emotions with different locations in left‐right space. Which aspects of emotion do they spatialize, and why? Across many studies people spatialize emotional valence, mapping positive emotions onto their dominant side of space and negative emotions onto their non‐dominant side, consistent with theories of metaphorical mental representation. Yet other results suggest a conflicting mapping of emotional intensity (a.k.a., emotional magnitude), according to which people associate more intense emotions with the right and less intense emotions with the left — regardless of their valence; this pattern has been interpreted as support for a domain‐general system for representing magnitudes. To resolve the apparent contradiction between these mappings, we first tested whether people implicitly map either valence or intensity onto left‐right space, depending on which dimension of emotion they attend to (Experiments 1a, b). When asked to judge emotional valence, participants showed the predicted valence mapping. However, when asked to judge emotional intensity, participants showed no systematic intensity mapping. We then tested an alternative explanation of findings previously interpreted as evidence for an intensity mapping (Experiments 2a, b). These results suggest that previous findings may reflect a left‐right mapping of spatial magnitude (i.e., the size of a salient feature of the stimuli) rather than emotion. People implicitly spatialize emotional valence, but, at present, there is no clear evidence for an implicit lateral mapping of emotional intensity. These findings support metaphor theory and challenge the proposal that mental magnitudes are represented by a domain‐general metric that extends to the domain of emotion.

13.
Sixteen clinically depressed patients and sixteen healthy controls were presented with a set of emotional facial expressions and were asked to identify the emotion portrayed by each face. They were subsequently given a recognition memory test for these faces. There was no difference between the groups in their ability to identify emotion from faces. All participants identified emotional expressions more accurately than neutral expressions, with happy expressions being identified most accurately. During the recognition memory phase the depressed patients demonstrated superior memory for sad expressions, and inferior memory for happy expressions, relative to neutral expressions. Conversely, the controls demonstrated superior memory for happy expressions, and inferior memory for sad expressions, relative to neutral expressions. These results are discussed in terms of the cognitive model of depression proposed by Williams, Watts, MacLeod, and Mathews (1997).

14.
Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and tested preschoolers' discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum "felt the same" or "felt different." In the identification task, images were presented individually and participants were asked to label the emotion displayed on the face (e.g., "Does she look happy or sad?"). Results suggest that 3.5-year-olds have the same category boundary as adults. They were more likely to report that the image pairs felt "different" at the image pair that crossed the category boundary. These results suggest that 3.5-year-olds perceive happy and sad emotional facial expressions categorically as adults do. Categorizing emotional expressions is advantageous for children if it allows them to use social information faster and more efficiently.
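A category boundary like the one inferred in this study can be estimated from identification data as the morph level at which the proportion of "happy" responses crosses 50%. A minimal sketch, using linear interpolation between morph steps; the morph levels and response proportions below are made up for illustration:

```python
def category_boundary(morph_levels, p_happy):
    """Return the morph level at which the proportion of "happy"
    identifications crosses 0.5, by linear interpolation between the
    two bracketing morph steps; None if no crossing is found."""
    points = list(zip(morph_levels, p_happy))
    for (x0, p0), (x1, p1) in zip(points, points[1:]):
        if (p0 - 0.5) * (p1 - 0.5) <= 0 and p0 != p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    return None

# Hypothetical identification data: morph level 0 = fully happy,
# 100 = fully sad; p_happy = proportion of "happy" responses
levels = [0, 20, 40, 60, 80, 100]
p_happy = [0.95, 0.90, 0.70, 0.30, 0.10, 0.05]
boundary = category_boundary(levels, p_happy)
```

With these placeholder proportions the crossing lies between morph steps 40 and 60, and the interpolated boundary falls at the continuum midpoint.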

15.
Human faces are among the most important visual stimuli that we encounter at all ages. This importance partly stems from the face as a conveyer of information on the emotional state of other individuals. Previous research has demonstrated specific scanning patterns in response to threat-related compared to non-threat-related emotional expressions. This study investigated how visual scanning patterns toward faces which display different emotional expressions develop during infancy. The visual scanning patterns of 4-month-old and 7-month-old infants and adults when looking at threat-related (i.e., angry and fearful) versus non-threat-related (i.e., happy, sad, and neutral) emotional faces were examined. We found that infants as well as adults displayed an avoidant looking pattern in response to threat-related emotional expressions, with reduced dwell times and relatively fewer fixations to the inner features of the face. In addition, adults showed a pattern of eye contact avoidance when looking at threat-related emotional expressions that was not yet present in infants. Thus, whereas a general avoidant reaction to threat-related facial expressions appears to be present from very early in life, the avoidance of eye contact might be a learned response toward others' anger and fear that emerges later during development.

16.
This experiment was designed to assess the differential impact of initially presenting affective information to the left versus right hemisphere on both the perception of and response to the input. Nineteen right-handed subjects were presented with faces expressing happiness and sadness. Each face was presented twice to each visual field for an 8-sec duration. The electro-oculogram (EOG) was monitored and fed back to subjects to train them to keep their eyes focused on the central fixation point as well as to eliminate trials confounded by eye movement artifact. Following each slide presentation, subjects rated the intensity of the emotional expression depicted in the face and their emotional reaction to the face on a series of 7-point rating scales. Subjects reported perceiving more happiness in response to stimuli initially presented to the left hemisphere (right visual field) compared to presentations of the identical faces to the right hemisphere (left visual field). This effect was predominantly a function of ratings on sad faces. A similar, albeit less robust, effect was found on self-ratings of happiness (the degree to which the face elicited the emotion in the viewer). These data challenge the view that the right hemisphere is uniquely involved in all emotional behavior. The implications of these findings for theories concerning the lateralization of emotional behavior are discussed.

17.
Forty subjects viewed 10 pictures of facial expressions of emotion while they experienced a happy mood and 10 pictures while they experienced a sad mood. Later, while re-experiencing either a happy or sad mood, they were tested for recognition of these 20 target pictures intermixed with 20 distractors. Recognition of the 10 pictures seen earlier in a disparate mood was impaired significantly when they were presented at testing to the right hemisphere, but not when presented to the left. The right hemisphere appears to store the subject's mood as an integral part of a memory representation for an emotionally expressive face. When that face is presented at testing to the right hemisphere, recognition depends on whether the subject's test mood matches the mood stored in the representation. In contrast, the left hemisphere appears to store the subject's mood separately from encoded visual information about a face, and so recognition of a face presented at testing to the left hemisphere is unaffected by changes in mood.

18.
The effect of the emotional quality of study-phase background music on subsequent recall for happy and sad facial expressions was investigated. Undergraduates (N = 48) viewed a series of line drawings depicting a happy or sad child in a variety of environments that were each accompanied by happy or sad music. Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were recalled as sad when sad music was previously heard, and that sad faces were recalled as happy when happy music was previously heard. Overall, the results indicated that when recalling a scene, the emotional tone is set by an integration of stimulus features from several modalities.

19.
A signal-detection task was used to assess sex differences in emotional face recognition under conditions of uncertainty. Computer images of Ekman faces showing sad, angry, happy, and fearful emotional states were presented for 50 ms to thirty-six men and thirty-seven women. All participants monitored for presentation of either happy, angry, or sad emotional expressions in three separate blocks. Happy faces were the most easily discriminated. Sad and angry expressions were most often mistaken for each other. Analyses of d' values, hit rates, and reaction times all yielded similar results, with no sex differences for any of the measures.
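The d' values analysed in this study follow the standard signal-detection definition: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch; the log-linear correction for extreme rates is a common convention, not necessarily the one the study used:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Adds 0.5 to each cell (log-linear correction) so that hit or
    false-alarm rates of exactly 0 or 1 do not make z undefined.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

For example, an observer who is at chance (equal hit and false-alarm rates) gets d' = 0, and d' grows as hits increase and false alarms decrease.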

20.
To examine how individuals with high trait anxiety process emotional stimuli at the pre-attentive stage and to clarify their emotional bias, this study used a deviant–standard reverse oddball paradigm to investigate the effect of trait anxiety on the pre-attentive processing of facial expressions. Results showed that in the low trait-anxiety group, sad faces elicited a significantly larger early expression-related mismatch negativity (EMMN) than happy faces, whereas in the high trait-anxiety group, the early EMMN did not differ between happy and sad faces. Moreover, the EMMN amplitude elicited by happy faces was significantly larger in the high trait-anxiety group than in the low trait-anxiety group. These results indicate that personality traits are an important factor in the pre-attentive processing of facial expressions. Unlike typical participants, individuals with high trait anxiety show a similar pre-attentive processing pattern for happy and sad faces and may have difficulty distinguishing effectively between them.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号