Similar Articles
1.
North American (Canadian) and Indian observers were shown photographs of six facial emotions: happiness, sadness, fear, anger, surprise, and disgust, expressed by American Caucasian and Indian subjects. Observers were asked to judge each photograph, on a 7-point scale, for the degree of (a) distinctiveness (freedom from blending with other emotion categories), (b) pleasantness-unpleasantness, and (c) arousal-nonarousal of the expressed facial emotion. The results showed a significant Observer × Expressor × Emotion interaction for the distinctiveness judgement. Fearful and angry expressions in Indian faces, in comparison to Caucasian faces, were judged as less distinctly identifiable by observers of both cultural origins. Indian observers rated these two emotion expressions as more distinctive than did North Americans, irrespective of the culture of the expressor. In addition, Indian observers judged fearful and angry expressions as more unpleasant than did North Americans. Caucasians, in comparison to Indians, were judged to show more arousal in most of the emotion expressions.

2.
To date, little evidence is available on how emotional facial expression is decoded, specifically whether a bottom-up (data-driven) or a top-down (schema-driven) approach better explains the decoding of emotions from facial expression. A study (conducted with N = 20 subjects each in Germany and Italy) is reported in which decoders judged emotions from photographs of facial expressions. Stimuli represented a selection of photographs depicting both single muscular movements (action units) in an otherwise neutral face and combinations of such action units. Results indicate that the meaning of action units often changes with context; only a few single action units transmit a specific emotional meaning, which they retain when presented in context. The results are replicated to a large degree across decoder samples in both nations, implying fundamental mechanisms of emotion decoding.

3.
In two studies, subjects judged a set of facial expressions of emotion by either providing labels of their own choice to describe the stimuli (free-choice condition), choosing a label from a list of emotion words, or choosing a story from a list of emotion stories (fixed-choice conditions). In the free-choice condition, levels of agreement between subjects on the predicted emotion categories for six basic emotions were significantly greater than chance levels, and comparable to those shown in fixed-choice studies. As predicted, there was little to no agreement on a verbal label for contempt. Agreement on contempt was greatly improved when subjects were allowed to identify the expression in terms of an antecedent event for that emotion rather than in terms of a single verbal label, a finding that could not be attributed to the methodological artifact of exclusion in a fixed-choice paradigm. These findings support two conclusions: (1) that the labels used in fixed-choice paradigms accurately reflect the verbal categories people use when freely labeling facial expressions of emotion, and (2) that lexically ambiguous emotions, such as contempt, are understood in terms of their situational meanings. This research was supported in part by a Research Scientist Award from the National Institute of Mental Health (MH 06091) to Paul Ekman.

4.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists' emotional movements extended beyond the vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotion identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were identified poorly, yet accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but were equivalent in speech. Collectively, these findings point to broad commonalities in the facial cues to emotion in speech and song, as well as differences in their perception and acoustic-motor production.

5.
Since the introduction of empirical methods for studying facial expression, the interpretation of infant facial expressions has generated much debate. The premise of this article is that action tendencies of approach and withdrawal constitute a core organizational feature of emotion in humans, promoting coherence of behavior, facial signaling, and physiological responses. The approach/withdrawal framework can provide a taxonomy of contexts and the neurobehavioral framework for the systematic, empirical study of individual differences in expression, physiology, and behavior within individuals as well as across contexts over time. By adopting this framework in developmental work on basic emotion processes, it may be possible to better understand the behavioral principles governing facial displays, and how individual differences in them are related to physiology and behavioral function in context.

6.
Individuals can quickly and effortlessly recognize facial expressions, which is critical for social perception and emotion regulation. This sensitivity to even slight facial changes could result in unstable percepts of an individual's expression over time. The visual system must therefore balance accuracy with maintaining perceptual stability. However, previous research has focused on our sensitivity to changing expressions, and the mechanism behind expression stability remains an open question. Recent results demonstrate that perception of facial identity is systematically biased toward recently seen visual input. This positive perceptual pull, or serial dependence, may help stabilize perceived expression. To test this, observers judged random facial expression morphs ranging from happy to sad to angry. We found a pull in perceived expression toward previously seen expressions, but only when the 1-back and current face had similar identities. Our results are consistent with the existence of the continuity field for expression, a specialized mechanism that promotes the stability of emotion perception, which could help facilitate social interactions and emotion regulation.

7.
Emotions can be recognized whether conveyed by facial expressions, linguistic cues (semantics), or prosody (voice tone). However, few studies have empirically documented the extent to which multi-modal emotion perception differs from uni-modal emotion perception. Here, we tested whether emotion recognition is more accurate for multi-modal stimuli by presenting stimuli with different combinations of facial, semantic, and prosodic cues. Participants judged the emotion conveyed by short utterances in six channel conditions. Results indicated that emotion recognition is significantly better in response to multi-modal than to uni-modal stimuli. When stimuli contained only one emotional channel, recognition tended to be higher in the visual modality (i.e., facial expressions, semantic information conveyed by text) than in the auditory modality (prosody), although this pattern was not uniform across emotion categories. The advantage for multi-modal recognition may reflect the automatic integration of congruent emotional information across channels, which enhances the accessibility of emotion-related knowledge in memory.

8.
The influence of emotional prosody on the evaluation of emotional facial expressions was investigated in an event-related brain potential (ERP) study using a priming paradigm, the facial affective decision task. Emotional prosodic fragments of short (200-msec) and medium (400-msec) duration were presented as primes, followed by an emotionally related or unrelated facial expression (or a facial grimace, which does not resemble an emotion). Participants judged whether or not the facial expression represented an emotion. ERP results revealed an N400-like differentiation for emotionally related prime-target pairs when compared with unrelated prime-target pairs. Faces preceded by prosodic primes of medium length led to a normal priming effect (larger negativity for unrelated than for related prime-target pairs), but the reverse ERP pattern (larger negativity for related than for unrelated prime-target pairs) was observed for faces preceded by short prosodic primes. These results demonstrate that brief exposure to prosodic cues can establish a meaningful emotional context that influences related facial processing; however, this context does not always lead to a processing advantage when prosodic information is very short in duration.

9.
Attending versus ignoring a stimulus can later determine how it will be affectively evaluated. Here, we asked whether attentional states could also modulate subsequent sensitivity to facial expressions of emotion. In a dual-task procedure, participants first rapidly searched for a gender-defined face among two briefly displayed neutral faces. Then a test face with the previously attended or ignored face's identity was presented, and participants judged whether it was emotionally expressive (happy, angry, or fearful) or neutral. Intensity of expression in the test face was varied so that an expression detection threshold could be determined. When fearful or angry expressions were judged, expression sensitivity was worse for faces bearing the same identity as a previously ignored versus attended face. When happy expressions were judged, sensitivity was unaffected by prior attention. These data support the notion that the motivational value of stimuli may be reduced by processes associated with selective ignoring.

10.
This study examined whether approach–avoidance-related behaviour elicited by facial affect is moderated by the presence of an observer-irrelevant trigger that may influence the observer's attributions of the actor's emotion. Participants were shown happy, disgusted, and neutral facial expressions. Half of these were presented with a plausible trigger of the expression (a drink). Approach–avoidance-related behaviour was indexed explicitly through a questionnaire (measuring intentions) and implicitly through a manikin version of the affective Simon task (measuring automatic behavioural tendencies). In the absence of an observer-irrelevant trigger, participants expressed the intention to avoid disgusted and to approach happy facial expressions. Participants also showed a stronger approach tendency towards happy than towards disgusted facial expressions. The presence of the observer-irrelevant trigger had a moderating effect, decreasing the intention to approach happy and to avoid disgusted expressions. The trigger had no moderating effect on the automatic approach–avoidance tendencies. Thus, the influence of an observer-irrelevant trigger appears to reflect a controlled rather than an automatic process.

11.
In 3 experiments, we investigate how anxiety influences interpretation of ambiguous facial expressions of emotion. Specifically, we examine whether anxiety modulates the effect of contextual cues on interpretation. Participants saw ambiguous facial expressions. Simultaneously, positive or negative contextual information appeared on the screen. Participants judged whether each expression was positive or negative. We examined the impact of verbal and visual contextual cues on participants' judgements. We used 3 different anxiety induction procedures and measured levels of trait anxiety (Experiment 2). Results showed that high state anxiety resulted in greater use of contextual information in the interpretation of the facial expressions. Trait anxiety was associated with mood-congruent effects on interpretation, but not greater use of contextual information.

12.
The common conceptual understanding of emotions is that they are multi-componential, including subjective feelings, appraisals, psychophysiological activation, action tendencies, and motor expressions. Emotion perception, however, has traditionally been studied in terms of emotion labels, such as "happy", which do not clearly indicate whether one, some, or all emotion components are perceived. We examine whether emotion percepts are multi-componential and extend previous research by using more ecologically valid, dynamic, and multimodal stimuli and an alternative response measure. The results demonstrate that observers can reliably infer multiple types of information (subjective feelings, appraisals, action tendencies, and social messages) from complex emotion expressions. Furthermore, this finding appears to be robust to changes in response items. The results are discussed in light of their implications for research on emotion perception.

13.
Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of children (N = 64, aged 5–10 years old) who freely labeled the emotion conveyed by static and dynamic facial expressions found no advantage of dynamic over static expressions; in fact, reliable differences favored static expressions. An alternative explanation of gradual improvement with age is that children's emotional categories change during development from a small number of broad emotion categories to a larger number of narrower categories—a pattern found here with both static and dynamic expressions.

14.
Do people always interpret a facial expression as communicating a single emotion (e.g., the anger face as only angry), or is that interpretation malleable? The current study investigated preschoolers' (N = 60; 3–4 years) and adults' (N = 20) categorization of facial expressions. On each of five trials, participants selected from an array of 10 facial expressions (an open-mouthed, high-arousal expression and a closed-mouthed, low-arousal expression each for happiness, sadness, anger, fear, and disgust) all those that displayed the target emotion. Children's interpretation of facial expressions was malleable: 48% of children who selected the fear, anger, sadness, and disgust faces for the "correct" category also selected these same faces for another emotion category; 47% of adults did so for the sadness and disgust faces. The emotion children and adults attribute to facial expressions is influenced by the emotion category for which they are looking.

15.
Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual—and not just conceptual—processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

16.
It is commonly assumed that threatening expressions are perceptually prioritised, possessing the ability to automatically capture and hold attention. Recent evidence suggests that this prioritisation depends on the task relevance of emotion, in the case of attention holding and for fearful expressions. Using a hybrid attentional blink (AB) and repetition blindness (RB) paradigm, we investigated whether task relevance also affects prioritisation through attention capture and perceptual salience, and whether these effects generalise to angry expressions. Participants judged either the emotion (relevant condition) or the gender (irrelevant condition) of two target facial stimuli (fearful, angry, or neutral) embedded in a stream of distractors. Attention holding and capture were operationalised as modulation of AB deficits by first-target (T1) and second-target (T2) expression. Perceptual salience was operationalised as RB modulation. When emotion was task-relevant (Experiment 1; N = 29), fearful expressions captured and held attention and were more perceptually salient than neutral expressions. Angry expressions captured attention but were less perceptually salient and less capable of holding attention than fearful and neutral expressions. When emotion was task-irrelevant (Experiment 2; N = 30), only the fearful attention-capture and perceptual-salience effects remained significant. Our findings highlight the importance for threat-prioritisation research of heeding both the type of threat and the type of prioritisation investigated.

17.
This study investigated whether observers' facial reactions to the emotional facial expressions of others represent an affective or a cognitive response to these emotional expressions. Three hypotheses were contrasted: (1) facial reactions to emotional facial expressions are due to mimicry as part of an affective empathic reaction; (2) facial reactions to emotional facial expressions are a reflection of shared affect due to emotion induction; and (3) facial reactions to emotional facial expressions are determined by cognitive load depending on task difficulty. Two experiments were conducted varying type of task, presentation of stimuli, and task difficulty. The results show that depending on the nature of the rating task, facial reactions to facial expressions may be either affective or cognitive. Specifically, evidence for facial mimicry was found only when individuals made judgements regarding the valence of an emotional facial expression. Other types of judgements regarding facial expressions did not seem to elicit mimicry but may lead to facial responses related to cognitive load.

18.
Two studies concerned the relation between facial expression, cognitive induction of mood, and perception of mood in women undergraduates. In Exp. 1, 20 subjects were randomly assigned to a group instructed to make exaggerated facial expressions (Demand Group) and 20 subjects to a group given no such instructions (Nondemand Group). All subjects completed a modified Velten (1968) elation- and depression-induction sequence. Ratings of depression on the Multiple Affect Adjective Checklist (MAACL) increased during the depression condition and decreased during the elation condition. Subjects in the Demand Group made more facial expressions than those in the Nondemand Group, as shown by electromyographic measures of the zygomatic and corrugator muscles and by corresponding action-unit measures from visual scoring using the Facial Action Scoring System. Subjects in the Demand Group also rated their depression as more severe during the depression slides than did the other group; no such effect was noted during the elation condition. In Exp. 2, 16 women were randomly assigned to a group instructed to make facial expressions contradictory to those expected on the depression and elation tasks (Contradictory Expression Group), and another 16 women to a group given no instructions about facial expressions (Nondemand Group). All subjects completed the depression- and elation-induction sequence described in Exp. 1. No differences between groups were found on the depression ratings (MAACL) for either the depression induction or the elation induction, but both groups rated depression higher after the depression condition and lower after the elation condition. Electromyographic and facial action scores verified that subjects in the Contradictory Expression Group made the requested contradictory facial expressions during the mood-induction sequences. It was concluded that the primary influence on emotion came from the cognitive mood-induction sequences; facial expressions appeared to modify emotion only in the case of depression, which was exacerbated by frowning. A contradictory facial expression did not affect the rating of an emotion.

19.
Izard and Haynes question our findings and claims for discovery because they did not consider the difference between a one-to-one and a one-to-many relationship between a sign (the facial expression) and what it signifies (a message about emotion). Clarifying this matter not only shows that the disagreement between us is more apparent than real, but, more importantly, highlights what remains to be discovered about which emotional states are signaled by which facial expressions.
