Similar Articles
20 similar articles found.
1.
Several studies have reported impairment in the recognition of facial expressions of disgust in patients with Huntington's disease (HD) and preclinical carriers of the HD gene. The aim of this study was to establish whether impairment for disgust in HD patients extended to include the ability to express the emotion on their own faces. Eleven patients with HD and 11 age- and education-matched healthy controls participated in three tasks concerned with the expression of emotions. One task assessed the spontaneous production of disgust-like facial expressions during the smelling of offensive odorants. A second assessed the production of posed facial expressions during deliberate attempts to communicate emotion. The third task evaluated HD patients’ ability to imitate the specific facial configurations associated with each emotion. Foul odours induced fewer disgust-like facial reactions in HD patients than in controls, and patients’ posed facial expressions of disgust were less accurate than the posed disgust expressions of controls. The effect was selective to disgust; patients had no difficulty posing expressions of other emotions. These impairments were not explained by compromised muscle control: HD patients had no difficulty imitating the facial movements required to display disgust. Viewed together with evidence of difficulty in other aspects of disgust in HD, the findings suggest that a common substrate might participate in both the processing and the expression of this emotion.

2.
The aim of this study was to investigate the causes of the own-race advantage in facial expression perception. In Experiment 1, we investigated Western Caucasian and Chinese participants’ perception and categorization of facial expressions of six basic emotions that included two pairs of confusable expressions (fear and surprise; anger and disgust). Participants were slightly better at identifying facial expressions posed by own-race members (mainly for anger and disgust). In Experiment 2, we asked whether the own-race advantage was due to differences in the holistic processing of facial expressions. Participants viewed composite faces in which the upper part of one expression was combined with the lower part of a different expression. The upper and lower parts of the composite faces were either aligned or misaligned. Both Chinese and Caucasian participants were better at identifying the facial expressions from the misaligned images, showing interference in recognizing the parts of the expressions created by holistic perception of the aligned composite images. However, this interference from holistic processing was equivalent across expressions of own-race and other-race faces in both groups of participants. Whilst the own-race advantage in recognizing facial expressions does seem to reflect the confusability of certain emotions, it cannot be explained by differences in holistic processing.

3.
Some theories of emotion emphasise a close relationship between interoception and subjective experiences of emotion. In this study, we used facial expressions to examine whether interoceptive sensibility modulated emotional experience in a social context. Interoceptive sensibility was measured using the heartbeat detection task. To estimate individual emotional sensitivity, we made morphed photos that ranged between a neutral and an emotional facial expression (i.e., anger, sadness, disgust, and happiness). Recognition rates of particular emotions from these photos were calculated and considered as emotional sensitivity thresholds. Our results indicate that participants with accurate interoceptive awareness are sensitive to the emotions of others, especially for expressions of sadness and happiness. We also found that false responses to sad faces were closely related to an individual's degree of social anxiety. These results suggest that interoceptive awareness modulates the intensity of the subjective experience of emotion and affects individual traits related to emotion processing.
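The morph-continuum procedure described above can be sketched in a few lines (a simplified pixel cross-dissolve on hypothetical arrays; studies of this kind typically use landmark-based morphing software rather than raw interpolation, and the function names here are illustrative):

```python
import numpy as np

def morph_continuum(neutral, emotional, steps=10):
    """Linear cross-dissolve between a neutral and an emotional face image,
    returning steps+1 frames from 0% to 100% emotional intensity."""
    return [(1 - a) * neutral + a * emotional
            for a in np.linspace(0.0, 1.0, steps + 1)]

def sensitivity_threshold(responses):
    """Lowest morph intensity (0..1) at which the target emotion was recognised.

    `responses` maps intensity -> True/False (recognised or not)."""
    recognised = sorted(a for a, hit in responses.items() if hit)
    return recognised[0] if recognised else None

# Toy example with 2x2 'images'
neutral = np.zeros((2, 2))
emotional = np.ones((2, 2))
frames = morph_continuum(neutral, emotional, steps=4)
print(len(frames))       # 5 frames: 0%, 25%, 50%, 75%, 100%
print(frames[2].mean())  # mid-morph intensity 0.5
```

A lower threshold then corresponds to higher emotional sensitivity, which is how the recognition rates in the study are interpreted.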

4.
There is substantial evidence to suggest that deafness is associated with delays in emotion understanding, which has been attributed to delays in language acquisition and opportunities to converse. However, studies addressing the ability to recognise facial expressions of emotion have produced equivocal findings. The two experiments presented here attempt to clarify emotion recognition in deaf children by considering two aspects: the role of motion and the role of intensity in deaf children’s emotion recognition. In Study 1, 26 deaf children were compared to 26 age-matched hearing controls on a computerised facial emotion recognition task involving static and dynamic expressions of 6 emotions. Eighteen of the deaf children and 18 age-matched hearing controls additionally took part in Study 2, involving the presentation of the same 6 emotions at varying intensities. Study 1 showed that deaf children’s emotion recognition was better in the dynamic rather than static condition, whereas the hearing children showed no difference in performance between the two conditions. In Study 2, the deaf children performed no differently from the hearing controls, showing improved recognition rates with increasing expression intensity. With the exception of disgust, no differences in individual emotions were found. These findings highlight the importance of using ecologically valid stimuli to assess emotion recognition.

5.
Dissociable neural systems for recognizing emotions
This study tested the hypothesis that the recognition of emotions would draw upon anatomically separable brain regions, depending on whether the stimuli were static or explicitly conveyed information regarding actions. We investigated the hypothesis in a rare subject with extensive bilateral brain lesions, patient B., by administering tasks that assessed recognition and naming of emotions from visual and verbal stimuli, some of which depicted actions and some of which did not. B. could not recognize any primary emotion other than happiness, when emotions were shown as static images or given as single verbal labels. By contrast, with the notable exception of disgust, he correctly recognized primary emotions from dynamic displays of facial expressions as well as from stories that described actions. Our findings are consistent with the idea that information about actions is processed in occipitoparietal and dorsal frontal cortices, all of which are intact in B.'s brain. Such information subsequently would be linked to knowledge about emotions that depends on structures mapping somatic states, many of which are also intact in B.'s brain. However, one of these somatosensory structures, the insula, is bilaterally damaged, perhaps accounting for B.'s uniformly impaired recognition of disgust (from both static and action stimuli). Other structures that are damaged in B.'s brain, including bilateral inferior and anterior temporal lobe and medial frontal cortices, appear to be critical for linking perception of static stimuli to recognition of emotions. Thus the retrieval of knowledge regarding emotions draws upon widely distributed and partly distinct sets of neural structures, depending on the attributes of the stimulus.

6.
Posed stimuli dominate the study of nonverbal communication of emotion, but concerns have been raised that the use of posed stimuli may inflate recognition accuracy relative to spontaneous expressions. Here, we compare recognition of emotions from spontaneous expressions with that of matched posed stimuli. Participants made forced-choice judgments about the expressed emotion and whether the expression was spontaneous, and rated expressions on intensity (Experiments 1 and 2) and prototypicality (Experiment 2). Listeners were able to accurately infer emotions from both posed and spontaneous expressions, from auditory, visual, and audiovisual cues. Furthermore, perceived intensity and prototypicality were found to play a role in the accurate recognition of emotion, particularly from spontaneous expressions. Our findings demonstrate that perceivers can reliably recognise emotions from spontaneous expressions, and that depending on the comparison set, recognition levels can even be equivalent to those of posed stimulus sets.

7.
We investigated whether emotional information from facial expression and hand movement quality was integrated when identifying the expression of a compound stimulus showing a static facial expression combined with emotionally expressive dynamic manual actions. The emotions (happiness, neutrality, and anger) expressed by the face and hands were either congruent or incongruent. In Experiment 1, the participants judged whether the stimulus person was happy, neutral, or angry. Judgments were mainly based on the facial expressions, but were affected by manual expressions to some extent. In Experiment 2, the participants were instructed to base their judgment on the facial expression only. An effect of hand movement expressive quality was observed for happy facial expressions. The results conform with the proposal that perception of facial expressions of emotions can be affected by the expressive qualities of hand movements.

8.
The aim of this experiment was to study the identification of the facial expressions of six emotions in French-speaking Québécois subjects. Two methods of stimuli presentation were used. The results showed high identification levels, comparable to those obtained by researchers working with various other cultures. The simultaneous presentation of a facial expression and of the same face with a neutral expression had no effect on the subjects' accuracy of judgment. Female subjects had a higher identification level for disgust than male subjects. Finally, the analysis of the distribution of judgment errors partially confirmed previous data concerning confusions between emotions.

9.
Following Yik and Russell (1999), a judgement paradigm was used to examine to what extent differential accuracy of recognition of facial expressions allows evaluation of the well-foundedness of different theoretical views on emotional expression. Observers judged photos showing facial expressions of seven emotions on the basis of: (1) discrete emotion categories; (2) social message types; (3) appraisal results; or (4) action tendencies, and rated their confidence in making choices. Emotion categories and appraisals were judged significantly more accurately and confidently than messages or action tendencies. These results do not support claims of primacy for message or action tendency views of facial expression. Based on a componential model of emotion, it is suggested that judges can infer components from categories and vice versa.

10.
In order to investigate the role of facial movement in the recognition of emotions, faces were covered with black makeup and white spots. Video recordings of such faces were played back so that only the white spots were visible. The results demonstrated that moving displays of happiness, sadness, fear, surprise, anger and disgust were recognized more accurately than static displays of the white spots at the apex of the expressions. This indicated that facial motion, in the absence of information about the shape and position of facial features, is informative about these basic emotions. Normally illuminated dynamic displays of these expressions, however, were recognized more accurately than displays of moving spots. The relative effectiveness of upper and lower facial areas for the recognition of these six emotions was also investigated using normally illuminated and spots-only displays. In both instances the results indicated that different facial regions are more informative for different emotions. The movement patterns characterizing the various emotional expressions as well as common confusions between emotions are also discussed.

11.
In this study, we presented computer‐morphing animations of the facial expressions of six emotions to 43 subjects and asked them to evaluate the naturalness of the rate of change of each expression. The results showed that the naturalness of the expressions depended on the velocity of change, and the patterns for the four velocities differed with the emotions. Principal component analysis of the data extracted the structures that underlie the evaluation of dynamic facial expressions, which differed from previously reported structures for static expressions in some aspects. These results suggest that the representations of facial expressions include not only static but also dynamic properties.

12.
There is evidence that facial expressions are perceived holistically and featurally. The composite task is a direct measure of holistic processing (although the absence of a composite effect implies the use of other types of processing). Most composite task studies have used static images, despite the fact that movement is an important aspect of facial expressions and there is some evidence that movement may facilitate recognition. We created static and dynamic composites, in which emotions were reliably identified from each half of the face. The magnitude of the composite effect was similar for static and dynamic expressions identified from the top half (anger, sadness and surprise) but was reduced in dynamic as compared to static expressions identified from the bottom half (fear, disgust and joy). Thus, any advantage in recognising dynamic over static expressions is not likely to stem from enhanced holistic processing, rather motion may emphasise or disambiguate diagnostic featural information.

13.
Perceptual advantages for own-race compared to other-race faces have been demonstrated for the recognition of facial identity and expression. However, these effects have not been investigated in the same study with measures that can determine the extent of cross-cultural agreement as well as differences. To address this issue, we used a photo sorting task in which Chinese and Caucasian participants were asked to sort photographs of Chinese or Caucasian faces by identity or by expression. This paradigm matched the task demands of identity and expression recognition and avoided constrained forced-choice or verbal labelling requirements. Other-race effects of comparable magnitude were found across the identity and expression tasks. Caucasian participants made more confusion errors for the identities and expressions of Chinese than Caucasian faces, while Chinese participants made more confusion errors for the identities and expressions of Caucasian than Chinese faces. However, analyses of the patterns of responses across groups of participants revealed a considerable amount of underlying cross-cultural agreement. These findings suggest that widely repeated claims that members of other cultures “all look the same” overstate the cultural differences.

14.
Research on emotion recognition has been dominated by studies of photographs of facial expressions. A full understanding of emotion perception and its neural substrate will require investigations that employ dynamic displays and means of expression other than the face. Our aims were: (i) to develop a set of dynamic and static whole-body expressions of basic emotions for systematic investigations of clinical populations, and for use in functional-imaging studies; (ii) to assess forced-choice emotion-classification performance with these stimuli relative to the results of previous studies; and (iii) to test the hypotheses that more exaggerated whole-body movements would produce (a) more accurate emotion classification and (b) higher ratings of emotional intensity. Ten actors portrayed 5 emotions (anger, disgust, fear, happiness, and sadness) at 3 levels of exaggeration, with their faces covered. Two identical sets of 150 emotion portrayals (full-light and point-light) were created from the same digital footage, along with corresponding static images of the 'peak' of each emotion portrayal. Recognition tasks confirmed previous findings that basic emotions are readily identifiable from body movements, even when static form information is minimised by use of point-light displays, and that static full-light and even point-light displays can convey identifiable emotions, though rather less efficiently than dynamic displays. Recognition success differed for individual emotions, corroborating earlier results about the importance of distinguishing differences in movement characteristics for different emotional expressions. The patterns of misclassifications were in keeping with earlier findings on emotional clustering.
Exaggeration of body movement (a) enhanced recognition accuracy, especially for the dynamic point-light displays, but notably not for sadness, and (b) produced higher emotional-intensity ratings, regardless of lighting condition, for movies but to a lesser extent for stills, indicating that intensity judgments of body gestures rely more on movement (or form-from-movement) than static form information.

15.
Past studies found that, for preschoolers, a story specifying a situational cause and behavioural consequence is a better cue to fear and disgust than is the facial expression of those two emotions, but the facial expressions used were static. Two studies (Study 1: N = 68, 36–68 months; Study 2: N = 72, 49–90 months) tested whether this effect could be reversed when the expressions were dynamic and included facial, postural, and vocal cues. Children freely labelled emotions in three conditions: story, still face, and dynamic expression. Story remained a better cue than still face or dynamic expression for fear and disgust and also for the later emerging emotions of embarrassment and pride.

16.
There is substantial evidence for facial emotion recognition (FER) deficits in autism spectrum disorder (ASD). The extent of this impairment, however, remains unclear, and there is some suggestion that clinical groups might benefit from the use of dynamic rather than static images. High-functioning individuals with ASD (n = 36) and typically developing controls (n = 36) completed a computerised FER task involving static and dynamic expressions of the six basic emotions. The ASD group showed poorer overall performance in identifying anger and disgust and were disadvantaged by dynamic (relative to static) stimuli when presented with sad expressions. Among both groups, however, dynamic stimuli appeared to improve recognition of anger. This research provides further evidence of specific impairment in the recognition of negative emotions in ASD, but argues against any broad advantages associated with the use of dynamic displays.

17.
Two experiments were conducted to explore whether representational momentum (RM) emerges in the perception of dynamic facial expression and whether the velocity of change affects the size of the effect. Participants observed short morphing animations of facial expressions from neutral to one of the six basic emotions. Immediately afterward, they were asked to select the last images perceived. The results of the experiments revealed that the RM effect emerged for dynamic facial expressions of emotion: The last images of dynamic stimuli that an observer perceived were of a facial configuration showing stronger emotional intensity than the image actually presented. The more the velocity increased, the more the perceptual image of facial expression intensified. This perceptual enhancement suggests that dynamic information facilitates shape processing in facial expression, which leads to the efficient detection of other people's emotional changes from their faces.

18.
Recognition of emotional facial expressions is a central area in the psychology of emotion. This study presents two experiments. The first experiment analyzed recognition accuracy for basic emotions including happiness, anger, fear, sadness, surprise, and disgust. Thirty pictures (five for each emotion) were displayed to 96 participants to assess recognition accuracy. The results showed that recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy. Information either congruent or incongruent with a facial expression was displayed before presenting pictures of facial expressions. The results of the second experiment showed that congruent information improved facial expression recognition, whereas incongruent information impaired such recognition.
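The per-emotion accuracy analysis described above amounts to a simple tally of correct choices over trials. A minimal sketch, using made-up response records (the trial pairs below are illustrative, not the study's data):

```python
from collections import defaultdict

def accuracy_by_emotion(trials):
    """Per-emotion recognition accuracy.

    `trials` is a list of (shown_emotion, chosen_emotion) pairs;
    a trial counts as correct when the chosen label matches the shown one."""
    hits, totals = defaultdict(int), defaultdict(int)
    for shown, chosen in trials:
        totals[shown] += 1
        hits[shown] += (shown == chosen)
    return {emotion: hits[emotion] / totals[emotion] for emotion in totals}

# Hypothetical responses from one participant
trials = [("happiness", "happiness"), ("happiness", "happiness"),
          ("fear", "surprise"), ("fear", "fear")]
print(accuracy_by_emotion(trials))  # → {'happiness': 1.0, 'fear': 0.5}
```

Comparing such per-emotion accuracies between congruent and incongruent context conditions would then reproduce the contrast reported in the second experiment.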

19.
Sui Xue, Ren Yantao. Acta Psychologica Sinica (《心理学报》), 2007, 39(1), 64–70
Eye-movement recording was used to examine the online processing involved in facial expression recognition. Basic facial expressions can be divided into three valence categories: positive, neutral, and negative. Experiment 1 examined undergraduates' basic pattern of online processing when identifying these three types of expression; Experiment 2 used a masking technique to examine how much the information from different facial regions contributes to expression recognition. The results showed that: (1) participants' online processing shared a common pattern across expression types, with an inverted-V-shaped scan path; (2) recognition differed significantly across expression valences, in both behavioural and eye-movement measures; (3) masking different facial regions fundamentally changed eye-movement patterns and affected the reaction time and accuracy of expression recognition; (4) expression recognition depended to different degrees on information from different facial regions, with eye-region information playing the larger role. These results suggest that the online processing of facial expressions of different valences shares a common pattern but consumes different amounts of mental resources, and that expression recognition relies unequally on different facial regions, most heavily on information from the eyes.

20.
Introduction: Psychopaths in whom reduced interpersonal and affective abilities dominate are characterized by hypofunction of the right hemisphere, while psychopaths in whom impulsivity and antisocial behavior dominate are characterized by hyperfunction of the left hemisphere. The assumption is that this interhemispheric imbalance in psychopaths will also be reflected in the recognition of facial emotional expressions.
Objective: To examine the lateralization of facial expressions of positive and negative emotions, as well as the processing of facial expressions of emotions, in criminal and non-criminal psychopaths.
Participants: 48 male participants aged 24–40 were voluntarily recruited from the psychiatric hospital in Nis, Serbia.
Stimuli: 48 black-and-white photographs, presented in two separate tasks with central and lateral exposition, were used for stimulation.
Results: Reduced recognition of the facial expression of surprise is related to criminality and not necessarily to psychopathy, whereas reduced recognition of the facial expression of fear is related to psychopathy but not criminality. The valence-specific hypothesis was not confirmed for positive and negative emotions in criminal and non-criminal psychopaths and non-psychopaths, but positive emotions were shown to be processed equally well in both hemispheres, whereas negative emotions were processed more successfully in the left hemisphere.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)