Similar Literature
20 similar documents found (search time: 31 ms)
1.
The purpose of this study is to explore whether subjects exposed to stimuli of facial expressions respond with facial electromyographic (EMG) reactions consistent with the hypothesis that facial expressions are contagious. This study further examines whether males and females differ in facial EMG intensity. Two experiments demonstrated that subjects responded with facial EMG activity over the corrugator supercilii, the zygomatic major, the lateral frontalis, the depressor supercilii, and the levator labii muscle regions to stimuli of sad, angry, fearful, surprised, disgusted, and happy faces that, to a large extent, was consistent with the hypothesis that facial expressions are contagious. Aspects of gender differences reported in earlier studies were found, indicating a tendency for females to respond with more pronounced facial EMG intensity.

2.
Observers are remarkably consistent in attributing particular emotions to particular facial expressions, at least in Western societies. Here, we suggest that this consistency is an instance of the fundamental attribution error. We therefore hypothesized that a small variation in the procedure of the recognition study, one that emphasizes situational information, would change the participants' attributions. In two studies, participants were asked to judge whether a prototypical "emotional facial expression" was more plausibly associated with a social-communicative situation (one involving communication to another person) or with an equally emotional but nonsocial situation. Participants were more likely to associate each facial display with the social than with the nonsocial situation. This result held across all emotions presented (happiness, fear, disgust, anger, and sadness) and for both Spanish and Canadian participants.

3.
Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain–behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = –.51) and memory (r = –.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

4.
This study investigates the discrimination accuracy of emotional stimuli in subjects with major depression compared with healthy controls using photographs of facial expressions of varying emotional intensities. The sample included 88 unmedicated male and female subjects, aged 18-56 years, with major depressive disorder (n = 44) or no psychiatric illness (n = 44), who judged the emotion of 200 facial pictures displaying an expression between 10% (90% neutral) and 80% (nuanced) emotion. Stimuli were presented in 10% increments to generate a range of intensities, each presented for a 500-ms duration. Compared with healthy volunteers, depressed subjects showed very good recognition accuracy for sad faces but impaired recognition accuracy for other emotions (e.g., harsh, surprise, and sad expressions) of subtle emotional intensity. Recognition accuracy improved for both groups as a function of increased intensity on all emotions. Finally, as depressive symptoms increased, recognition accuracy increased for sad faces, but decreased for surprised faces. Moreover, depressed subjects showed an impaired ability to accurately identify subtle facial expressions, indicating that depressive symptoms influence accuracy of emotional recognition.

5.
Do people always interpret a facial expression as communicating a single emotion (e.g., the anger face as only angry) or is that interpretation malleable? The current study investigated preschoolers' (N = 60; 3-4 years) and adults' (N = 20) categorization of facial expressions. On each of five trials, participants selected from an array of 10 facial expressions (an open-mouthed, high arousal expression and a closed-mouthed, low arousal expression each for happiness, sadness, anger, fear, and disgust) all those that displayed the target emotion. Children's interpretation of facial expressions was malleable: 48% of children who selected the fear, anger, sadness, and disgust faces for the "correct" category also selected these same faces for another emotion category; 47% of adults did so for the sadness and disgust faces. The emotion children and adults attribute to facial expressions is influenced by the emotion category for which they are looking.

6.
We investigated whether moral violations involving harm selectively elicit anger, whereas purity violations selectively elicit disgust, as predicted by the Moral Foundations Theory (MFT). We analysed participants' spontaneous facial expressions as they listened to scenarios depicting moral violations of harm and purity. As predicted by MFT, anger reactions were elicited more frequently by harmful than by impure actions. However, violations of purity elicited more smiling reactions and expressions of anger than of disgust. This effect was found both in a classic set of scenarios and in a new set in which the different kinds of violations were matched on weirdness. Overall, these findings are at odds with predictions derived from MFT and provide support for "monist" accounts that posit harm at the basis of all moral violations. However, we found that smiles were differentially linked to purity violations, which leaves open the possibility of distinct moral modules.

7.

This paper describes a method to measure the sensitivity of an individual to different facial expressions. It shows that individual participants are more sensitive to happy than to fearful expressions and that the differences are statistically significant under the model-comparison approach. Sensitivity is measured by asking participants to discriminate between an emotional facial expression and a neutral expression of the same face. The expression was diluted to different degrees by combining it in different proportions with the neutral expression using morphing software. Sensitivity is defined as the proportion of neutral expression in a stimulus at which participants discriminate the emotional expression on 75% of presentations. Individuals could reliably discriminate happy expressions diluted with a greater proportion of the neutral expression than was required for discrimination of fearful expressions. This tells us that individual participants are more sensitive to happy than to fearful expressions. Sensitivity is equivalent when measured in two different testing sessions, and greater sensitivity to happy expressions is maintained with short stimulus durations and with stimuli generated using different morphing software. The greater sensitivity to happy than to fearful expressions was affected at smaller image sizes for some participants. Application of the approach with clinical populations, as well as the relative contribution of perceptual and affective processing to facial expression recognition, is discussed.
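To make the 75% criterion concrete, here is a minimal sketch of how such a threshold could be estimated from discrimination data; the morph levels, response proportions, and logistic fit below are illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, alpha, beta):
    """Logistic psychometric function rising from chance (0.5) to perfect (1.0).
    x: proportion of emotional expression in the morph (0 = fully neutral).
    alpha: the level at which performance is midway, i.e. 0.75; beta: slope."""
    return 0.5 + 0.5 / (1.0 + np.exp(-(x - alpha) / beta))

# Hypothetical proportions correct at each morph level (happy vs. neutral).
levels = np.array([0.05, 0.10, 0.20, 0.30, 0.50, 0.70])
p_correct = np.array([0.52, 0.60, 0.74, 0.85, 0.96, 0.99])

(alpha, beta), _ = curve_fit(psychometric, levels, p_correct, p0=[0.2, 0.1])

# With this parameterization, alpha is the 75%-correct threshold: the minimum
# proportion of emotional content needed for reliable discrimination, so
# (1 - alpha) is the paper's "proportion of neutral expression" measure.
print(f"75% threshold: {alpha:.3f} emotional / {1 - alpha:.3f} neutral")

On this reading, a more sensitive observer tolerates a larger neutral proportion (larger 1 − alpha) while still discriminating the expression on 75% of presentations.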


8.
The recognition of emotional facial expressions is often subject to contextual influence, particularly when the face and the context convey similar emotions. We investigated whether spontaneous, incidental affective theory of mind inferences made while reading vignettes describing social situations would produce context effects on the identification of same-valenced emotions (Experiment 1) as well as differently-valenced emotions (Experiment 2) conveyed by subsequently presented faces. Crucially, we found an effect of context on reaction times in both experiments while, in line with previous work, we found evidence for a context effect on accuracy only in Experiment 1. This demonstrates that affective theory of mind inferences made at the pragmatic level of a text can automatically, contextually influence the perceptual processing of emotional facial expressions in a separate task even when those emotions are of a distinctive valence. Thus, our novel findings suggest that language acts as a contextual influence to the recognition of emotional facial expressions for both same and different valences.

9.
Facial emotional expressions can serve both as emotional stimuli and as communicative signals. The research reported here was conducted to illustrate how responses to both roles of facial emotional expressions unfold over time. As an emotion elicitor, a facial emotional expression (e.g., a disgusted face) activates a response that is similar to responses to other emotional stimuli of the same valence (e.g., a dirty, nonflushed toilet). As an emotion messenger, the same facial expression (e.g., a disgusted face) serves as a communicative signal by also activating the knowledge that the sender is experiencing a specific emotion (e.g., the sender feels disgusted). By varying the duration of exposure to disgusted, fearful, angry, and neutral faces in two subliminal-priming studies, we demonstrated that responses to faces as emotion elicitors occur prior to responses to faces as emotion messengers, and that both types of responses may unfold unconsciously.

10.
The current study aimed to extend the understanding of the early development of spontaneous facial reactions toward observed facial expressions. Forty-six 9- to 10-month-old infants observed video clips of dynamic human facial expressions that were artificially created with morphing technology. The infants' facial responses were recorded, and the movements of facial action unit 12 (i.e., lip-corner raising, associated with happiness) and facial action unit 4 (i.e., brow-lowering, associated with anger) were visually evaluated by multiple naïve raters. Results showed that (1) infants make congruent, observable facial responses to facial expressions, and (2) these specific facial responses are enhanced during repeated observation of the same emotional expressions. These results suggest the presence of observable congruent facial responses in the first year of life, and that they appear to be influenced by contextual information, such as the repeated presentation of the target emotional expressions.

11.
Four experiments probed the nature of categorical perception (CP) for facial expressions. A model based on naming alone failed to accurately predict performance on these tasks. The data are instead consistent with an extension of the category adjustment model (Huttenlocher et al., 2000), in which the generation of a verbal code (e.g., "happy") activated knowledge of the expression category's range and central tendency (prototype) in memory, which was retained as veridical perceptual memory faded. Further support for a memory bias toward the category center came from a consistently asymmetric pattern of within-category errors. Verbal interference in the retention interval selectively removed CP for facial expressions under blocked, but not under randomized, presentation conditions. However, verbal interference at encoding removed CP even under randomized conditions, and these effects were shown to extend even to caricatured expressions, which lie outside the normal range of expression categories.
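As background for the category adjustment account tested above, the model's core claim can be written as a precision-weighted compromise between the fading perceptual trace and the category prototype; this is a standard rendering of Huttenlocher et al.'s proposal, not notation taken from the abstract:

\hat{x} = \lambda m + (1 - \lambda)\,\mu, \qquad \lambda = \frac{\tau^2}{\tau^2 + \sigma_m^2}

where m is the remembered expression, \mu the category prototype (e.g., the central "happy" face), \tau^2 the dispersion of the category, and \sigma_m^2 the uncertainty of the perceptual memory. As the veridical trace fades during the retention interval, \sigma_m^2 grows, \lambda shrinks, and judgments are drawn toward the prototype, consistent with the asymmetric, center-biased error pattern reported.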

12.
The different assumptions made by discrete and componential emotion theories about the nature of the facial expression of emotion and the underlying mechanisms are reviewed. Explicit and implicit predictions are derived from each model. It is argued that experimental expression-production paradigms rather than recognition studies are required to critically test these differential predictions. Data from a large-scale actor portrayal study are reported to demonstrate the utility of this approach. The frequencies with which 12 professional actors use major facial muscle actions individually and in combination to express 14 major emotions show little evidence for emotion-specific prototypical affect programs. Rather, the results encourage empirical investigation of componential emotion model predictions of dynamic configurations of appraisal-driven adaptive facial actions.

13.
Emotion theorists assume certain facial displays to convey information about the expresser's emotional state. In contrast, behavioral ecologists assume them to indicate behavioral intentions or action requests. To test these contrasting positions, over 2,000 online participants were presented with facial expressions and asked what they revealed: feeling states, behavioral intentions, or action requests. The majority of the observers chose feeling states as the message of facial expressions of disgust, fear, sadness, happiness, and surprise, supporting the emotions view. Only the anger display tended to elicit more choices of behavioral intention or action request, partially supporting the behavioral ecology view. The results support the view that facial expressions communicate emotions, with emotions being multicomponential phenomena that comprise feelings, intentions, and wishes.

14.
We compared the ability of angry and neutral faces to drive oculomotor behaviour as a test of the widespread claim that emotional information is automatically prioritized when competing for attention. Participants were required to make a saccade to a colour singleton; photos of angry or neutral faces appeared amongst other objects within the array and were completely irrelevant to the task. Eye-tracking measures indicate that faces drive oculomotor behaviour in a bottom-up fashion; however, angry faces are no more likely to capture the eyes than neutral faces are. Saccade latencies suggest that capture occurs via reflexive saccades and that the outcome of competition between salient items (colour singletons and faces) may be subject to fluctuations in attentional control. Indeed, although angry and neutral faces captured the eyes reflexively on a portion of trials, participants successfully maintained goal-relevant oculomotor behaviour on the majority of trials. We outline potential cognitive and brain mechanisms underlying oculomotor capture by faces.

15.
There is evidence that men and women display differences in both cognitive and affective functions. Recent studies have examined the processing of emotions in males and females. However, the findings are inconclusive, possibly as a result of methodological differences. The aim of this study was to investigate the perception of emotional facial expressions in men and women. Video clips of neutral faces, gradually morphing into full-blown expressions, were used. This allowed us to examine both accuracy and sensitivity in labelling emotional facial expressions. Furthermore, all participants completed an anxiety and a depression rating scale. Research participants were 40 female students and 28 male students. Results revealed that men were less accurate, as well as less sensitive, in labelling facial expressions. Thus, men show an overall worse performance than women on a task measuring the processing of emotional faces. This result is discussed in relation to recent findings.

16.
While there is an extensive literature on the tendency to mimic emotional expressions in adults, it is unclear how this skill emerges and develops over time. Specifically, it is unclear whether infants mimic discrete emotion-related facial actions, whether their facial displays are moderated by contextual cues and whether infants' emotional mimicry is constrained by developmental changes in the ability to discriminate emotions. We therefore investigate these questions using Baby-FACS to code infants' facial displays and eye-movement tracking to examine infants' looking times at facial expressions. Three-, 7-, and 12-month-old participants were exposed to dynamic facial expressions (joy, anger, fear, disgust, sadness) of a virtual model which either looked at the infant or had an averted gaze. Infants did not match emotion-specific facial actions shown by the model, but they produced valence-congruent facial responses to the distinct expressions. Furthermore, only the 7- and 12-month-olds displayed negative responses to the model's negative expressions and they looked more at areas of the face recruiting facial actions involved in specific expressions. Our results suggest that valence-congruent expressions emerge in infancy during a period where the decoding of facial expressions becomes increasingly sensitive to the social signal value of emotions.

17.
The origins of the appearances of anger and fear facial expressions are not well understood. In three studies, the authors tested the hypothesis that such origins might lie in the expressions' resemblance to mature and babyish faces, respectively. In Study 1, faces expressing anger and fear were judged to physically resemble mature and babyish faces. Study 2 indicated that characteristics associated specifically with babyishness are attributed to persons showing fear, whereas characteristics associated with maturity are attributed to persons showing anger. In Study 3, composite faces were used to minimize the possibility that the attributions were based on associations to the anger and fear emotions alone rather than to the physical resemblance of the expressions to static facial appearance cues. These results suggest that fear and anger expressions may serve socially adaptive purposes for those who show them, similar to the social adaptations associated with a babyish or mature facial appearance.

18.
Shame, embarrassment, compassion, and contempt have been considered candidates for the status of basic emotions on the grounds that each has a recognisable facial expression. In two studies (N=88, N=60) on recognition of these four facial expressions, observers showed moderate agreement on the predicted emotion when assessed with forced choice (58%; 42%), but low agreement when assessed with free labelling (18%; 16%). Thus, even though some observers endorsed the predicted emotion when it was presented in a list, over 80% spontaneously interpreted these faces in a way other than the predicted emotion.

19.
Previous studies have revealed that decoding of facial expressions is a specific component of face comprehension and that semantic information might be processed separately from the basic stage of face perception. In order to explore event-related potentials (ERPs) related to recognition of facial expressions and the effect of the semantic content of the stimulus, we analyzed 20 normal subjects. Faces with three prototypical emotional expressions (fear, happiness, and sadness) and with three morphed expressions were presented in random order. The neutral stimuli represented the control condition. Whereas ERP profiles were similar with respect to an early negative ERP (N170), differences in peak amplitude were observed later between incongruous (morphed) expressions and congruous (prototypical) ones. In fact, the results demonstrated that the emotional morphed faces elicited a negative peak at about 360 ms, mainly distributed over the posterior site. The electrophysiological activity observed may represent a specific cognitive process underlying decoding of facial expressions in case of semantic anomaly detection. The evidence is in favor of the similarity of this negative deflection with the N400 ERP effect elicited in linguistic tasks. A domain-specific semantic module is proposed to explain these results.

20.
The present study was designed to examine the operation of depression-specific biases in the identification or labeling of facial expression of emotions. Participants diagnosed with major depression and social phobia and control participants were presented with faces that expressed increasing degrees of emotional intensity, slowly changing from a neutral to a full-intensity happy, sad, or angry expression. The authors assessed individual differences in the intensity of facial expression of emotion that was required for the participants to accurately identify the emotion being expressed. The depressed participants required significantly greater intensity of emotion than did the social phobic and the control participants to correctly identify happy expressions and less intensity to identify sad than angry expressions. In contrast, social phobic participants needed less intensity to correctly identify the angry expressions than did the depressed and control participants and less intensity to identify angry than sad expressions. Implications of these results for interpersonal functioning in depression and social phobia are discussed.
