Similar Articles
A total of 20 similar articles were found (search time: 8 ms).
1.
In earlier work, the authors analyzed emotion portrayals by professional actors separately for facial expression, vocal expression, gestures, and body movements. In a secondary analysis of the combined data set for all these modalities, the authors now examine to what extent actors use prototypical multimodal configurations of expressive actions to portray different emotions, as predicted by basic emotion theories claiming that expressions are produced by fixed neuromotor affect programs. Although several coherent unimodal clusters are identified, the results show only 3 multimodal clusters: agitation, resignation, and joyful surprise, with only the latter being specific to a particular emotion. Finding variable expressions rather than prototypical patterns seems consistent with the notion that emotional expression is differentially driven by the results of sequential appraisal checks, as postulated by componential appraisal theories.

2.
Observers are remarkably consistent in attributing particular emotions to particular facial expressions, at least in Western societies. Here, we suggest that this consistency is an instance of the fundamental attribution error. We therefore hypothesized that a small variation in the procedure of the recognition study, which emphasizes situational information, would change the participants' attributions. In two studies, participants were asked to judge whether a prototypical "emotional facial expression" was more plausibly associated with a social-communicative situation (one involving communication to another person) or with an equally emotional but nonsocial situation. Participants were more likely to associate each facial display with the social than with the nonsocial situation. This result was found across all emotions presented (happiness, fear, disgust, anger, and sadness) and for both Spanish and Canadian participants.

3.
Facial emotional expressions can serve both as emotional stimuli and as communicative signals. The research reported here was conducted to illustrate how responses to both roles of facial emotional expressions unfold over time. As an emotion elicitor, a facial emotional expression (e.g., a disgusted face) activates a response that is similar to responses to other emotional stimuli of the same valence (e.g., a dirty, unflushed toilet). As an emotion messenger, the same facial expression (e.g., a disgusted face) serves as a communicative signal by also activating the knowledge that the sender is experiencing a specific emotion (e.g., the sender feels disgusted). By varying the duration of exposure to disgusted, fearful, angry, and neutral faces in two subliminal-priming studies, we demonstrated that responses to faces as emotion elicitors occur prior to responses to faces as emotion messengers, and that both types of responses may unfold unconsciously.

4.
Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain–behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = –.51) and memory (r = –.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

5.
A review of Metaphors of Anger, Pride and Love: A Lexical Approach to the Structure of Concepts by Z. Kövecses (1986). Philadelphia/Amsterdam: John Benjamins, pp. vii + 147. ISBN 1-55619-009-3 (US); ISBN 90-272-2558-3 (Europe). $34.

6.
People from Asian cultures are more influenced by context in their visual processing than people from Western cultures. In this study, we examined how these cultural differences in context processing affect how people interpret facial emotions. We found that younger Koreans were more influenced than younger Americans by emotional background pictures when rating the emotion of a central face, especially those younger Koreans with low self-rated stress. In contrast, among older adults, neither Koreans nor Americans showed significant influences of context in their face emotion ratings. These findings suggest that cultural differences in reliance on context to interpret others' emotions depend on perceptual integration processes that decline with age, leading to fewer cultural differences in perception among older adults than among younger adults. Furthermore, when asked to recall the background pictures, younger participants recalled more negative pictures than positive pictures, whereas older participants recalled similar numbers of positive and negative pictures. These age differences in the valence of memory were consistent across culture.

7.
Shame, embarrassment, compassion, and contempt have been considered candidates for the status of basic emotions on the grounds that each has a recognisable facial expression. In two studies (N=88, N=60) on recognition of these four facial expressions, observers showed moderate agreement on the predicted emotion when assessed with forced choice (58%; 42%), but low agreement when assessed with free labelling (18%; 16%). Thus, even though some observers endorsed the predicted emotion when it was presented in a list, over 80% spontaneously interpreted these faces in a way other than the predicted emotion.

8.
Philosophical Studies - Are the categorical laws of ontology metaphysically contingent? I do not intend to give a full answer to this question in this paper. But I shall give a partial answer to...

9.
The purpose of this study is to explore whether subjects exposed to stimuli of facial expressions respond with facial electromyographic (EMG) reactions consistent with the hypothesis that facial expressions are contagious. This study further examines whether males and females differ in facial EMG intensity. Two experiments demonstrated that subjects responded with facial EMG activity over the corrugator supercilii, the zygomatic major, the lateral frontalis, the depressor supercilii, and the levator labii muscle regions to stimuli of sad, angry, fearful, surprised, disgusted, and happy faces that, to a large extent, were consistent with the hypothesis that facial expressions are contagious. Aspects of gender differences reported in earlier studies were found, indicating a tendency for females to respond with more pronounced facial EMG intensity.

10.
We investigated whether moral violations involving harm selectively elicit anger, whereas purity violations selectively elicit disgust, as predicted by the Moral Foundations Theory (MFT). We analysed participants’ spontaneous facial expressions as they listened to scenarios depicting moral violations of harm and purity. As predicted by MFT, anger reactions were elicited more frequently by harmful than by impure actions. However, violations of purity elicited more smiling reactions and expressions of anger than of disgust. This effect was found both in a classic set of scenarios and in a new set in which the different kinds of violations were matched on weirdness. Overall, these findings are at odds with predictions derived from MFT and provide support for “monist” accounts that posit harm at the basis of all moral violations. However, we found that smiles were differentially linked to purity violations, which leaves open the possibility of distinct moral modules.

11.
There is substantial evidence to suggest that deafness is associated with delays in emotion understanding, which has been attributed to delays in language acquisition and opportunities to converse. However, studies addressing the ability to recognise facial expressions of emotion have produced equivocal findings. The two experiments presented here attempt to clarify emotion recognition in deaf children by considering two aspects: the role of motion and the role of intensity in deaf children’s emotion recognition. In Study 1, 26 deaf children were compared to 26 age-matched hearing controls on a computerised facial emotion recognition task involving static and dynamic expressions of 6 emotions. Eighteen of the deaf and 18 age-matched hearing controls additionally took part in Study 2, involving the presentation of the same 6 emotions at varying intensities. Study 1 showed that deaf children’s emotion recognition was better in the dynamic rather than static condition, whereas the hearing children showed no difference in performance between the two conditions. In Study 2, the deaf children performed no differently from the hearing controls, showing improved recognition rates with increasing rates of intensity. With the exception of disgust, no differences in individual emotions were found. These findings highlight the importance of using ecologically valid stimuli to assess emotion recognition.

12.
The authors investigated whether differences in facial stimuli could explain the inconsistencies in the facial attractiveness literature regarding whether adults prefer more masculine- or more feminine-looking male faces. Their results demonstrated that use of a female average to dimorphically transform a male facial average produced stimuli that did not accurately reflect the relationship between masculinity and attractiveness. In contrast, use of averages of masculine males and averages of feminine males produced stimuli that did accurately reflect the relationship between masculinity and attractiveness. Their findings suggest that masculinity contributes more to male facial attractiveness than does femininity, but future research should investigate how various combinations of facial cues contribute to male facial attractiveness.

13.
In 6 experiments, the authors investigated whether attention orienting by gaze direction is modulated by the emotional expression (neutral, happy, angry, or fearful) on the face. The results showed a clear spatial cuing effect by gaze direction but no effect by facial expression. In addition, it was shown that the cuing effect was stronger with schematic faces than with real faces, that gaze cuing could be achieved at very short stimulus onset asynchronies (14 ms), and that there was no evidence for a difference in the strength of cuing triggered by static gaze cues and by cues involving apparent motion of the pupils. In sum, the results suggest that in normal, healthy adults, eye direction processing for attention shifts is independent of facial expression analysis.

14.
Do people always interpret a facial expression as communicating a single emotion (e.g., the anger face as only angry) or is that interpretation malleable? The current study investigated preschoolers' (N = 60; 3-4 years) and adults' (N = 20) categorization of facial expressions. On each of five trials, participants selected from an array of 10 facial expressions (an open-mouthed, high-arousal expression and a closed-mouthed, low-arousal expression each for happiness, sadness, anger, fear, and disgust) all those that displayed the target emotion. Children's interpretation of facial expressions was malleable: 48% of children who selected the fear, anger, sadness, and disgust faces for the "correct" category also selected these same faces for another emotion category; 47% of adults did so for the sadness and disgust faces. The emotion children and adults attribute to facial expressions is influenced by the emotion category for which they are looking.

15.
16.
Self-deception is an important construct in social, personality, and clinical literatures. Although historical and clinical views of self-deception have regarded it as defensive in nature and operation, modern views of this individual difference variable instead highlight its apparent benefits to subjective mental health. The present four studies reinforce the latter view by showing that self-deception predicts positive priming effects, but not negative priming effects, in reaction time tasks sensitive to individual differences in affective priming. In all studies, individuals higher in self-deception displayed stronger positive priming effects, defined in terms of facilitation with two positive stimuli in a consecutive sequence, but self-deception did not predict negative priming effects in the same tasks. Importantly, these effects occurred both in tasks that called for the retrieval of self-knowledge (Study 1) and those that did not (Studies 2–4). This broad pattern supports substantive views of self-deception rather than views narrowly focused on self-presentation processes. Implications for understanding self-deception are discussed.

17.
Past research using two levels of reward has shown that higher-value items are remembered better than lower-value items, and this enhancement is assumed to be driven by an effect of reward value. In the present study, multiple levels of reward were used to test the influence of reward salience on memory. Using a value-learning procedure, words were associated with reward values, and memory for these words was later tested with free recall. Critically, multiple reward levels were used, allowing us to test two specific hypotheses about how rewards can influence memory: (a) higher-value items are remembered better than lower-value items (reward value hypothesis), and (b) the highest- and lowest-value items are remembered best and intermediate-value items are remembered worst, following a U-shaped relationship between value and memory (reward salience hypothesis). In two experiments we observed a U-shaped relationship between reward value and memory, supporting the notion that memory is enhanced due to reward salience, and not purely through reward value.

18.
The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body posing fear). Adults and 8-year-olds made more errors and had longer reaction times on incongruent trials than on congruent trials when judging sad versus fearful facial expressions, an effect that was larger in 8-year-olds. The congruency effect was reduced when faces and bodies were misaligned, providing some evidence for holistic processing. Neither adults nor 8-year-olds were affected by congruency when judging sad versus happy expressions. Evidence that congruency effects vary with age and with similarity of emotional expressions is consistent with dimensional theories and "emotional seed" models of emotion perception.

19.
Two studies provided evidence that bolsters the Marsh, Adams, and Kleck hypothesis that the morphology of certain emotion expressions reflects an evolved adaptation to mimic babies or mature adults. Study 1 found differences in emotion expressions' resemblance to babies using objective indices of babyfaceness provided by connectionist models that are impervious to overlapping cultural stereotypes about babies and the emotions. Study 2 not only replicated parallels between impressions of certain emotions and babies versus adults but also showed that objective indices of babyfaceness partially mediated impressions of the emotion expressions. Babyface effects were independent of strong effects of attractiveness, and babyfaceness did not mediate impressions of happy expressions, to which the evolutionary hypothesis would not apply.

20.