Similar articles
(20 similar articles found)
1.
Synthetic images of facial expression were used to assess whether judges can correctly recognize emotions exclusively on the basis of configurations of facial muscle movements. A first study showed that static, synthetic images modeled after a series of photographs that are widely used in facial expression research yielded recognition rates and confusion patterns comparable to posed photos. In a second study, animated synthetic images were used to examine whether schematic facial expressions consisting entirely of theoretically postulated facial muscle configurations can be correctly recognized. Recognition rates for the synthetic expressions were far above chance, and the confusion patterns were comparable to those obtained with posed photos. In addition, the effect of static versus dynamic presentation of the expressions was studied. Dynamic presentation increased overall recognition accuracy and reduced confusions between unrelated emotions.

2.
Abrupt discontinuities in recognizing categories of emotion are found for the labelling of consciously perceived facial expressions. This has been taken to imply that, at a conscious level, we perceive facial expressions categorically. We investigated whether the abrupt discontinuities found in categorization for conscious recognition would be replaced by a graded transition for subthreshold stimuli. Fifteen volunteers participated in two experiments, in which participants viewed faces morphed from 100% fear to 100% disgust along seven increments. In Experiment A, target faces were presented for 30 ms, in Experiment B for 170 ms. Participants made two-alternative forced-choice decisions between fear and disgust. Results for the 30 ms presentation time indicated a significant linear trend between degree of morphing and classification of the images. Results for 170 ms presentation time followed the higher order function found in studies of categorical perception. These results provide preliminary evidence for separate processes underlying conscious and nonconscious perception of facial expressions of emotion.
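The contrast between a graded and a categorical response profile can be made concrete by fitting both a linear and a sigmoidal function to the classification proportions across the morph continuum. A minimal sketch in Python, using made-up proportions for the seven increments (illustrative values only, not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Seven morph increments from 100% fear (0.0) to 100% disgust (1.0)
morph = np.linspace(0.0, 1.0, 7)

# Hypothetical proportions of "disgust" responses (illustrative only)
p_30ms = np.array([0.10, 0.22, 0.38, 0.50, 0.63, 0.78, 0.90])   # graded
p_170ms = np.array([0.05, 0.07, 0.12, 0.55, 0.90, 0.94, 0.97])  # step-like

def linear(x, a, b):
    return a * x + b

def logistic(x, x0, k):
    # Sigmoid with category boundary at x0 and steepness k
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

for label, p in [("30 ms", p_30ms), ("170 ms", p_170ms)]:
    (a, b), _ = curve_fit(linear, morph, p)
    (x0, k), _ = curve_fit(logistic, morph, p, p0=[0.5, 5.0])
    sse_lin = np.sum((linear(morph, a, b) - p) ** 2)
    sse_log = np.sum((logistic(morph, x0, k) - p) ** 2)
    print(f"{label}: SSE linear = {sse_lin:.4f}, SSE logistic = {sse_log:.4f}")
```

A better linear fit corresponds to the graded 30 ms (subthreshold) pattern, while a better logistic fit corresponds to the higher-order function reported at 170 ms.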

3.
Unconscious facial reactions to emotional facial expressions
Studies reveal that when people are exposed to emotional facial expressions, they spontaneously react with distinct facial electromyographic (EMG) reactions in emotion-relevant facial muscles. These reactions reflect, in part, a tendency to mimic the facial stimuli. We investigated whether corresponding facial reactions can be elicited when people are unconsciously exposed to happy and angry facial expressions. Through use of the backward-masking technique, the subjects were prevented from consciously perceiving 30-ms exposures of happy, neutral, and angry target faces, which immediately were followed and masked by neutral faces. Despite the fact that exposure to happy and angry faces was unconscious, the subjects reacted with distinct facial muscle reactions that corresponded to the happy and angry stimulus faces. Our results show that both positive and negative emotional reactions can be unconsciously evoked, and particularly that important aspects of emotional face-to-face communication can occur on an unconscious level.

4.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers’ response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
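The equal-step continua described here can be approximated with pixel-wise linear interpolation between two aligned end-point images; true morphing also warps facial landmarks, so this is only a simplified sketch (file names are placeholders):

```python
import numpy as np
from PIL import Image

def make_continuum(img_a_path, img_b_path, n_steps=11):
    """Blend two aligned end-point images into n_steps equal increments.

    Real morphing also warps facial geometry between landmark points;
    pixel-wise blending is a simplified stand-in for illustration.
    """
    a = np.asarray(Image.open(img_a_path).convert("L"), dtype=np.float64)
    b = np.asarray(Image.open(img_b_path).convert("L"), dtype=np.float64)
    frames = []
    for w in np.linspace(0.0, 1.0, n_steps):
        blend = (1.0 - w) * a + w * b   # linear change in equal-sized steps
        frames.append(Image.fromarray(blend.astype(np.uint8)))
    return frames

# Placeholder file names; the originals were photo exemplars of expressions.
# frames = make_continuum("expression_a.png", "expression_b.png")
```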

5.
Backward masking is a popular method of preventing awareness of facial expressions, but concerns have been expressed as to the effectiveness of masking in previous research, which may have resulted in unjustified claims of unconscious processing. We examined the minimum presentation time for discrimination of fearful, angry, happy and neutral faces in a backward masking task using both objective sensitivity measures, based on signal detection analysis, and subjective awareness ratings. Results from two experiments showed that, for all expressions, the mean sensitivity and the sensitivity scores of most individual participants were above chance at presentation times of 20 ms. Awareness ratings for happy, fearful and angry expressions also exceeded baseline ratings from 20 ms onwards. Overall sensitivity in both experiments was greatest for happy expressions, which is in agreement with previous reports. The results support the possibility of incomplete masking in earlier studies that used masking to prevent awareness of facial expressions.
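Objective sensitivity in such masking tasks is conventionally indexed by signal-detection d′ computed from hit and false-alarm rates. A sketch with hypothetical counts (the log-linear correction used here is a common choice, not necessarily the authors' exact procedure):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity with a log-linear correction
    to avoid infinite z-scores at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for a 20 ms masked fear-vs-neutral discrimination
print(d_prime(hits=34, misses=14, false_alarms=12, correct_rejections=36))
# A d' reliably above 0 indicates above-chance sensitivity despite masking
```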

6.
This study investigated whether sensitivity to and evaluation of facial expressions varied with repeated exposure to non-prototypical facial expressions for a short presentation time. A morphed facial expression was presented for 500 ms repeatedly, and participants were required to indicate whether each facial expression was happy or angry. We manipulated the distribution of presentations of the morphed facial expressions for each facial stimulus. Some of the individuals depicted in the facial stimuli expressed anger frequently (i.e., anger-prone individuals), while the others expressed happiness frequently (i.e., happiness-prone individuals). After being exposed to the faces of anger-prone individuals, the participants became less sensitive to those individuals’ angry faces. Further, after being exposed to the faces of happiness-prone individuals, the participants became less sensitive to those individuals’ happy faces. We also found a relative increase in the social desirability of happiness-prone individuals after exposure to the facial stimuli.

7.
This study investigated whether observers' facial reactions to the emotional facial expressions of others represent an affective or a cognitive response to these emotional expressions. Three hypotheses were contrasted: (1) facial reactions to emotional facial expressions are due to mimicry as part of an affective empathic reaction; (2) facial reactions to emotional facial expressions are a reflection of shared affect due to emotion induction; and (3) facial reactions to emotional facial expressions are determined by cognitive load depending on task difficulty. Two experiments were conducted varying type of task, presentation of stimuli, and task difficulty. The results show that depending on the nature of the rating task, facial reactions to facial expressions may be either affective or cognitive. Specifically, evidence for facial mimicry was only found when individuals made judgements regarding the valence of an emotional facial expression. Other types of judgements regarding facial expressions did not seem to elicit mimicry but may lead to facial responses related to cognitive load.

8.
Nonverbal "accents": cultural differences in facial expressions of emotion
We report evidence for nonverbal "accents," subtle differences in the appearance of facial expressions of emotion across cultures. Participants viewed photographs of Japanese nationals and Japanese Americans in which posers' muscle movements were standardized to eliminate differences in expressions, cultural or otherwise. Participants guessed the nationality of posers displaying emotional expressions at above-chance levels, and with greater accuracy than they judged the nationality of the same posers displaying neutral expressions. These findings indicate that facial expressions of emotion can contain nonverbal accents that identify the expresser's nationality or culture. Cultural differences are intensified during the act of expressing emotion, rather than residing only in facial features or other static elements of appearance. This evidence suggests that extreme positions regarding the universality of emotional expressions are incomplete.

9.
The current study investigated the effects of presentation time and fixation to expression-specific diagnostic features on emotion discrimination performance, in a backward masking task. While no differences were found when stimuli were presented for 16.67 ms, differences between facial emotions emerged beyond the happy-superiority effect at presentation times as early as 50 ms. Happy expressions were best discriminated, followed by neutral and disgusted, then surprised, and finally fearful expressions presented for 50 and 100 ms. While performance was not improved by the use of expression-specific diagnostic facial features, performance increased with presentation time for all emotions. Results support the idea of an integration of facial features (holistic processing) varying as a function of emotion and presentation time.

10.
Previous research has demonstrated that even brief exposures to facial expressions of emotions elicit facial mimicry in receivers in the form of corresponding facial muscle movements. As well, vocal and verbal patterns of speakers converge in conversations, a type of vocal mimicry. There is also evidence of cross-modal mimicry in which emotional vocalizations elicit corresponding facial muscle activity. Further, empathic capacity has been associated with an enhanced tendency towards facial mimicry as well as verbal synchrony. We investigated a type of potential cross-modal mimicry in a simulated dyadic situation. Specifically, we examined the influence of facial expressions of happy, sad, and neutral emotions on the vocal pitch of receivers, and its potential association with empathy. Results indicated that whereas both mean pitch and pitch variability varied somewhat in the predicted directions, empathy was correlated with the difference in pitch variability between speaking to the sad and the neutral faces. The discussion of results considers the dimensional nature of emotional vocalizations and possible future directions.
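Mean pitch and pitch variability can be read off a fundamental-frequency track. A minimal sketch using librosa's YIN tracker on a synthesized voiced signal (the tracker and all values here are stand-ins, not the authors' pipeline):

```python
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 2.0, int(sr * 2.0), endpoint=False)
# Synthesized voiced signal with a slowly wavering 120 Hz fundamental
f0_true = 120 + 10 * np.sin(2 * np.pi * 0.5 * t)
phase = 2 * np.pi * np.cumsum(f0_true) / sr
y = np.sin(phase).astype(np.float32)

# YIN fundamental-frequency track, then summary statistics per utterance
f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)
print(f"mean pitch: {np.nanmean(f0):.1f} Hz, pitch SD: {np.nanstd(f0):.1f} Hz")
```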

11.
The aim of this experiment was to study the identification of the facial expressions of six emotions in French-speaking québécois subjects. Two methods of stimulus presentation were used. The results showed high identification levels, comparable to those reported by researchers working with various cultures. The simultaneous presentation of a facial expression and of the same face with a neutral expression had no effect on the subjects' accuracy of judgment. Female subjects had a higher identification level for disgust than male subjects. Finally, the analysis of the distribution of judgment errors partially confirmed previous data concerning confusions between emotions.

12.
Recent studies have shown that cueing eye gaze can affect the processing of visual information, a phenomenon called the gaze-orienting effect (visual-GOE). Emerging evidence has shown that cueing eye gaze also affects the processing of auditory information (auditory-GOE). However, it is unclear whether the auditory-GOE is modulated by emotion. We conducted three behavioural experiments to investigate whether cueing eye gaze influenced orientation judgements of a sound, and whether the effect was modulated by facial expressions. The study used four facial expressions (angry, fearful, happy, and neutral), manipulated the display type of the facial expressions, and varied the sequence of gaze and emotional expressions. Participants were required to judge the sound's orientation after viewing the facial expressions and gaze cues. The results showed that the orientation judgement of the sound was influenced by gaze direction in all three experiments: judgements were faster when the face was oriented towards the target location (congruent trials) than when it was oriented away from the target location (incongruent trials). Emotional modulation of the auditory-GOE was observed only when the gaze shift was followed by the facial expression (Experiment 3); the auditory-GOE was significantly greater for angry faces than for neutral faces. These findings indicate that the auditory-GOE is a widespread social phenomenon and that it is modulated by facial expression. A gaze shift before the presentation of the emotion was the key factor enabling emotional modulation in an auditory-target gaze-orienting task. Our findings suggest that the integration of facial expressions and eye gaze is context-dependent.

13.
Facial electromyographic (EMG) activity at the corrugator and zygomatic muscle regions was recorded from 37 subjects while they posed happy and sad facial expressions. Analysis showed that while a happy facial expression was posed, the mean EMG activity at the left zygomatic muscle region was the highest, followed by the right zygomatic, left corrugator, and right corrugator muscle regions. While a sad facial expression was posed, the mean EMG activity at the left corrugator muscle region was the highest, followed by the right corrugator, left zygomatic, and right zygomatic muscle regions. Further analysis indicated that the power of facial EMG activity on the left side of the face was stronger than on the right side while posing both happy and sad expressions.
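The laterality comparison amounts to averaging rectified EMG per muscle site and contrasting the left against the right side. A sketch over simulated signals, with amplitudes chosen to mirror the reported left-side dominance (all channel names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated rectified EMG traces (microvolts) for one posed happy expression;
# the left-side channels are given slightly higher amplitude for illustration.
channels = {
    "left_zygomatic": 1.3, "right_zygomatic": 1.1,
    "left_corrugator": 0.6, "right_corrugator": 0.5,
}
emg = {name: np.abs(rng.normal(loc=mu, scale=0.2, size=2000))
       for name, mu in channels.items()}

# Mean rectified activity per muscle region, ranked highest first
means = {name: sig.mean() for name, sig in emg.items()}
for name, m in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{name:>16s}: {m:.3f} uV")

left = means["left_zygomatic"] + means["left_corrugator"]
right = means["right_zygomatic"] + means["right_corrugator"]
print(f"left/right ratio: {left / right:.2f}  (> 1 = left-side dominance)")
```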

14.
The current study aimed to extend the understanding of the early development of spontaneous facial reactions toward observed facial expressions. Forty-six 9- to 10-month-old infants observed video clips of dynamic human facial expressions that were artificially created with morphing technology. The infants’ facial responses were recorded, and the movements of facial action unit 12 (i.e., lip-corner raising, associated with happiness) and facial action unit 4 (i.e., brow-lowering, associated with anger) were visually evaluated by multiple naïve raters. Results showed that (1) infants make congruent, observable facial responses to facial expressions, and (2) these specific facial responses are enhanced during repeated observation of the same emotional expressions. These results suggest the presence of observable congruent facial responses in the first year of life, and that they appear to be influenced by contextual information, such as the repeated presentation of the target emotional expressions.

15.
This study examined possible effects of aging on the lateralization of stimulus-evoked emotional facial muscle activity. Older participants (mean age 68.4 years) and younger participants (mean age 26.4 years) viewed slides of positive, neutral, or negative emotional content. While participants viewed the slides, bilateral electromyographic (EMG) recordings were obtained from the skin surface over the zygomatic and corrugator facial muscles. The participants also made ratings of experienced emotional valence and arousal. Expected patterns of subjective experience and asymmetrical EMG activity were found in response to target stimuli. Greater corrugator muscle activity occurred during presentation of negative stimuli, whereas greater zygomatic muscle activity occurred during presentation of positive stimuli. Consistent with right-hemisphere specialization theories of emotion, left-sided facial EMG activity was consistently greater than that of the right side during presentation of emotional stimuli. However, neither subjective ratings nor EMG patterns showed a significant effect of age group. Such similar patterns of emotional response in the two groups suggest that the aging process does not produce marked changes in stimulus-evoked emotional experience or in the pattern, magnitude, or lateralization of facial muscle activity associated with emotional states.

16.
Facial expressions such as smiling or frowning are normally followed by, and often aim at, the observation of corresponding facial expressions in social counterparts. Given this contingency between one’s own and other persons’ facial expressions, the production of such facial actions might be the subject of so-called action–effect compatibility effects. In the present Experiment 1, we confirmed this assumption. Participants were required to smile or frown. The generation of these expressions was harder when participants produced predictable feedback from a virtual counterpart that was incompatible with their own facial expression; for example, smiling produced the presentation of a frowning face. The results of Experiment 2 revealed that this effect vanishes with inverted faces as action feedback, which shows that the phenomenon is bound to the instantaneous emotional interpretation of the feedback. These results comply with the assumption that the generation of facial expressions is controlled by an anticipation of these expressions’ effects in the social environment.

17.
The effects of Parkinson's disease (PD) on spontaneous and posed facial activity and on the control of facial muscles were assessed by comparing 22 PD patients with 22 controls. Facial activity was analysed using the Facial Action Coding System (FACS; Ekman & Friesen, 1978). As predicted, PD patients showed reduced levels of spontaneous and posed facial expression in reaction to unpleasant odours compared to controls. PD patients were less successful than controls in masking or intensifying negative facial expressions. PD patients were also less able than controls to imitate specific facial muscle movements, but did not differ in the ability to pose emotional facial expressions. These results suggest that not only is spontaneous facial activity disturbed in PD, but also to some degree the ability to pose facial expressions, to mask facial expressions with other expressions, and to deliberately move specific muscles in the face.

18.
The objectives of this study were to propose a method of presenting dynamic facial expressions to experimental subjects, in order to investigate human perception of an avatar's facial expressions at different levels of emotional intensity. The investigation concerned how perception varies according to the strength of the facial expression, as well as according to the avatar's gender. To accomplish these goals, we generated a male and a female virtual avatar with five levels of intensity of happiness and anger using a morphing technique. We then recruited 16 normal healthy subjects and measured each subject's emotional reaction by scoring affective arousal and valence after showing them the avatar's face. Through this study, we were able to investigate the human perceptual characteristics evoked by male and female avatars' graduated facial expressions of happiness and anger. In addition, we identified that a virtual avatar's facial expression can affect human emotion in different ways according to the avatar's gender and the intensity of its facial expressions. However, virtual faces also showed some limitations: because they are not real, subjects recognized the expressions well but were not influenced by them to the same extent. Although a virtual avatar has some limitations in conveying emotion through facial expressions, this study is significant in showing that a new potential exists to manipulate emotional intensity by controlling a virtual avatar's facial expression linearly using a morphing technique. It is therefore predicted that this technique may be used for assessing the emotional characteristics of humans, and may be of particular benefit for work with people with emotional disorders through the presentation of dynamic expressions of various emotional intensities.

19.
Background: Neuroanatomical evidence suggests that the human brain has dedicated pathways to rapidly process threatening stimuli. This processing bias for threat was examined using the repetition blindness (RB) paradigm. RB (i.e., failure to report the second instance of an identical stimulus rapidly following the first) has been established for words, objects and faces but not, to date, facial expressions. Methods: 78 (Study 1) and 62 (Study 2) participants identified repeated and different, threatening and non-threatening emotional facial expressions in rapid serial visual presentation (RSVP) streams. Results: In Study 1, repeated facial expressions produced more RB than different expressions. RB was attenuated for threatening expressions. In Study 2, attenuation of RB for threatening expressions was replicated. Additionally, semantically related but non-identical threatening expressions reduced RB relative to non-threatening stimuli. Conclusions: These findings suggest that the threat bias is apparent in the temporal processing of facial expressions, and expands the RB paradigm by demonstrating that identical facial expressions are susceptible to the effect.

20.
The mirror neuron system (MNS) has been mooted as a crucial component underlying human social cognition. Initial evidence based on functional magnetic resonance imaging (fMRI) suggests that the MNS plays a role in emotion classification, but further confirmation and convergent evidence is needed. This study employed electroencephalography (EEG) to examine modulations in the mu rhythm associated with the inference of emotions from facial expressions. It was hypothesised that mu suppression would be associated with classifying the emotion portrayed by facial expressions. Nineteen participants viewed pictures of facial expressions or emotion words and were asked to either match the stimulus to an emotion word or to passively observe. Mu activity following stimulus presentation was localised using a 3-D distributed inverse solution, and compared between conditions. Subtractive logic was used to isolate the specific processes of interest. Comparisons of source localisation images between conditions revealed that there was mu suppression associated with recognising emotion from faces, thereby supporting our hypothesis. Further analyses confirmed that those effects were not due to activity associated with the motor response or the observation of facial expressions, offering further support for the hypotheses. This study provides important convergent evidence for the involvement of the MNS in the inference of emotion from facial expressions.
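Mu suppression is commonly quantified as the log ratio of 8–13 Hz power in the task condition relative to baseline. A sketch over simulated central-electrode EEG using scipy (band limits and condition labels are assumptions, not taken from the paper):

```python
import numpy as np
from scipy.signal import welch

fs = 250  # sampling rate in Hz
rng = np.random.default_rng(1)

def mu_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density within the mu band via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

t = np.arange(0, 10, 1 / fs)
# Simulated EEG: baseline has a strong 10 Hz mu rhythm; the
# emotion-classification condition has it attenuated (suppressed).
baseline = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
task = 0.8 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

suppression = np.log(mu_power(task, fs) / mu_power(baseline, fs))
print(f"mu suppression index: {suppression:.2f}  (negative = suppression)")
```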
