Similar Literature
20 similar references found.
1.
This study was designed to test three competing hypotheses (impaired configural processing; impaired Theory of Mind; atypical amygdala functioning) to explain the basic facial expression recognition profile of adults with autism spectrum disorders (ASD). In Experiment 1 the Ekman and Friesen (1976) series were presented upright and inverted. Individuals with ASD were significantly less accurate than controls at recognising upright facial expressions of fear, sadness and disgust and their pattern of errors suggested some configural processing difficulties. Impaired recognition of inverted facial expressions suggested some additional difficulties processing the facial features. Unexpectedly, the clinical group misidentified fear as anger. In Experiment 2 feature processing of facial expressions was investigated by presenting stimuli in a piecemeal fashion, starting with either just the eyes or the mouth. Individuals with ASD were impaired at recognising fear from the eyes and disgust from the mouth; they also confused fearful eyes as being angry. The findings are discussed in terms of the three competing hypotheses tested.

2.
Functional magnetic resonance imaging (fMRI) of the human brain was used to compare changes in amygdala activity associated with viewing facial expressions of fear and anger. Pictures of human faces bearing expressions of fear or anger, as well as faces with neutral expressions, were presented to 8 healthy participants. The blood oxygen-level dependent (BOLD) fMRI signal within the dorsal amygdala was significantly greater to Fear versus Anger, in a direct contrast. Significant BOLD signal changes in the ventral amygdala were observed in contrasts of Fear versus Neutral expressions and, in a more spatially circumscribed region, to Anger versus Neutral expressions. Thus, activity in the amygdala is greater to fearful facial expressions when contrasted with either neutral or angry faces. Furthermore, directly contrasting fear with angry faces highlighted involvement of the dorsal amygdaloid region.
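To make the reported "direct contrast" concrete, here is a minimal sketch (Python, assuming one mean BOLD estimate per participant and condition within an amygdala region of interest) of a paired Fear-versus-Anger contrast; the array values and variable names are invented placeholders, not the study's data or analysis pipeline.

```python
# Minimal sketch: paired contrast of condition-wise BOLD estimates across
# participants (Fear vs Anger) within a region of interest.
# All numbers below are hypothetical placeholders.
import numpy as np
from scipy import stats

# One mean BOLD estimate per participant and condition (n = 8, matching the
# abstract's sample size; values are invented).
bold_fear  = np.array([0.42, 0.38, 0.51, 0.47, 0.35, 0.44, 0.40, 0.49])
bold_anger = np.array([0.31, 0.29, 0.40, 0.35, 0.30, 0.33, 0.28, 0.37])

# Direct contrast: paired t-test on Fear minus Anger within the ROI.
t_stat, p_value = stats.ttest_rel(bold_fear, bold_anger)
print(f"Fear > Anger: t = {t_stat:.2f}, p = {p_value:.4f}")
```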

3.
The processing of emotional expressions is fundamental for normal socialisation and interaction. Reduced responsiveness to the expressions of sadness and fear has been implicated in the development of psychopathy (R. J. R. Blair, 1995). The current study investigates the sensitivity of children with psychopathic tendencies to facial expressions. Children with psychopathic tendencies and a comparison group, as defined by the Psychopathy Screening Device (PSD; P. J. Frick & R. D. Hare, in press), were presented with a cinematic display of a standardised set of facial expressions that depicted sadness, happiness, anger, disgust, fear, and surprise. Participants observed as these facial expressions slowly evolved through 20 successive frames of increasing intensity. The children with psychopathic tendencies presented with selective impairments: they needed significantly more stages before they could successfully recognise the sad expressions and, even when the fearful expressions were at full intensity, they were significantly more likely to mistake them for another expression. These results are interpreted with reference to an amygdala and empathy impairment explanation of psychopathy.

4.
This investigation used adaptation aftereffects to examine developmental changes in the perception of facial expressions. Previous studies have shown that adults’ perceptions of ambiguous facial expressions are biased following adaptation to intense expressions. These expression aftereffects are strong when the adapting and probe expressions share the same facial identity but are mitigated when they are posed by different identities. We extended these findings by comparing expression aftereffects and categorical boundaries in adults versus 5- to 9-year-olds (n = 20/group). Children displayed adult-like aftereffects and categorical boundaries for happy/sad by 7 years of age and for fear/anger by 9 years of age. These findings suggest that both children and adults perceive expressions according to malleable dimensions in which representations of facial expression are partially integrated with facial identity.
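The "categorical boundaries" compared here are typically estimated by fitting a psychometric function along a morph continuum; the sketch below illustrates that idea under the assumption of a happy/sad morph axis with invented response proportions (it is not the authors' analysis code).

```python
# Minimal sketch: estimating a category boundary on a happy/sad morph
# continuum by fitting a logistic psychometric function. Data are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """Proportion of 'sad' responses as a function of morph level."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

morph_level = np.linspace(0, 100, 11)            # 0 = fully happy, 100 = fully sad
p_sad = np.array([0.02, 0.03, 0.05, 0.10, 0.25,  # hypothetical response rates
                  0.50, 0.78, 0.90, 0.95, 0.98, 0.99])

(boundary, slope), _ = curve_fit(logistic, morph_level, p_sad, p0=[50, 0.1])
print(f"Estimated boundary: {boundary:.1f}% sad (slope {slope:.3f})")
# An adaptation aftereffect shows up as a shift of this boundary after
# adapting to an intense expression.
```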

5.
Two tasks, one mapping the recognition of unfamiliar face identity and the other the identification of six facial expressions, were unilaterally presented to field-dependent and field-independent individuals of both genders. Regardless of sex, field-independent subjects showed faster response times (RTs) in the left visual field (LVF) for face identity and for the identification of disgust and fear, and faster RTs in the right visual field (RVF) for the identification of anger. A trend toward LVF superiority was found over the whole sample for the facial expression task; this effect was still present when the face identity task was partialled out, indicating the independence of the underlying mechanisms.

6.
The present study explored the influence of facial emotional expressions on preschoolers' identity recognition, analyzed using a two-alternative forced-choice matching task. A decrement was observed in children's performance with emotional faces compared with neutral faces, both when a happy emotional expression remained unchanged between the target face and the test faces and when the expression changed from happy to neutral or from neutral to happy between the target and the test faces (Experiment 1). Negative emotional expressions (i.e. fear and anger) also interfered with children's identity recognition (Experiment 2). The obtained evidence suggests that in preschool-age children, facial emotional expressions are processed in interaction with, rather than independently from, the encoding of facial identity information. The results are discussed in relation to relevant research conducted with adults and children.

7.
Functional neuroimaging and lesion-based neuropsychological experiments have demonstrated the human amygdala's role in recognition of certain emotions signaled by sensory stimuli, notably, fear and anger in facial expressions. We examined recognition of two emotional dimensions, arousal and valence, in a rare subject with complete, bilateral damage restricted to the amygdala. Recognition of emotional arousal was impaired for facial expressions, words, and sentences that depicted unpleasant emotions, especially in regard to fear and anger. However, recognition of emotional valence was normal. The findings suggest that the amygdala plays a critical role in knowledge concerning the arousal of negative emotions, a function that may explain the impaired recognition of fear and anger in patients with bilateral amygdala damage, and one that is consistent with the amygdala's role in processing stimuli related to threat and danger.

8.
Exposure to fearful facial expressions enhances vision at low spatial-frequencies and impairs vision at high spatial-frequencies. This perceptual trade-off is thought to be a consequence of a fear-related activation of the magnocellular visual pathway to the amygdala. In this study we examined the generality of the effect of emotion on low-level visual perception by assessing participants' orientation sensitivity to low and high spatial-frequency targets following exposure to disgust, fear, and neutral facial expressions. The results revealed that exposure to fear and disgust expressions has opposing effects on early vision: fearful expressions enhanced low spatial-frequency vision and impaired high spatial-frequency vision, while disgust expressions, like neutral expressions, impaired low spatial-frequency vision and enhanced high spatial-frequency vision. Thus we show that the effect of exposure to fear on visual perception is not a general emotional effect, but rather one that may depend on amygdala activation, or one that may be specific to fear.
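For readers unfamiliar with how low and high spatial-frequency stimuli are typically produced, the sketch below shows one common approach, Gaussian low-pass and complementary high-pass filtering of a grayscale image; the random array stands in for an actual face photograph and the cutoff values are arbitrary assumptions, not those used in the study.

```python
# Minimal sketch: low- and high-spatial-frequency versions of a grayscale
# stimulus via Gaussian filtering. The random image is a stand-in for a face.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
face = rng.random((256, 256))                       # placeholder grayscale image

low_sf  = gaussian_filter(face, sigma=8)            # keep only coarse structure
high_sf = face - gaussian_filter(face, sigma=2)     # keep only fine detail

print(low_sf.shape, high_sf.shape)
```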

9.
The processing of several important aspects of a human face was investigated in a single patient (LZ), who had a large infarct of the right hemisphere involving the parietal and temporal lobes, with extensions into the frontal region. LZ showed selective problems with recognizing emotional expressions, whereas she was flawless in recognizing gender, familiarity, and identity. She was very poor at recognizing negative facial expressions (fear, disgust, anger, sadness), but scored as well as the controls on the positive facial expression of happiness. However, in two experiments using both static and dynamic face stimuli, we showed that LZ also did not have a proper notion of what a facial expression of happiness looks like, and could not adequately apply this label. We conclude that the proper recognition of both negative and positive facial expressions relies on the right hemisphere, and that the left hemisphere produces a default state resulting in a bias towards evaluating expressions as happy. We discuss the implications of the current findings for the main models that aim to explain hemispheric specializations for the processing of positive and negative emotions.

10.
A growing body of evidence from humans and other animals suggests the amygdala may be a critical neural substrate for emotional processing. In particular, recent studies have shown that damage to the human amygdala impairs the normal appraisal of social signals of emotion, primarily those of fear. However, effective social communication depends on both the ability to receive (emotional appraisal) and the ability to send (emotional expression) signals of emotional state. Although the role of the amygdala in the appraisal of emotion is well established, its importance for the production of emotional expressions is unknown. We report a case study of a patient with bilateral amygdaloid damage who, despite a severe deficit in interpreting facial expressions of emotion including fear, exhibits an intact ability to express this and other basic emotions. This dissociation suggests that a single neural module does not support all aspects of the social communication of emotional state.

11.
The contribution of encoding deficits to verbal-learning difficulties known to be associated with temporal lobe dysfunction was investigated. Auditory and visual false-recognition tasks, which assess verbal encoding strategies, were given to patients with left-temporal-lobe (LTL) and right-temporal-lobe (RTL) surgical excisions and to a group of normal controls (NC). On both auditory and visual tasks, LTL patients made significantly more false-recognition errors than the other subjects on related, but not unrelated, words in the test list. The findings indicate that LTL patients are able to initially encode verbal material and that a breakdown in information processing occurs at a later stage. On the auditory tasks, the performance of RTL patients did not differ from that of NC subjects. However, on the visual tasks, RTL patients, as compared to both LTL and NC subjects, made fewer false-recognition errors. The performance of RTL patients, in contrast to LTL patients, could be interpreted as a reduced encoding of the visual attribute of verbal material. Another possible explanation considered was difficulty in familiarity discrimination.

12.
Caricaturing facial expressions
The physical differences between facial expressions (e.g. fear) and a reference norm (e.g. a neutral expression) were altered to produce photographic-quality caricatures. In Experiment 1, participants rated caricatures of fear, happiness and sadness for the intensity of these three emotions; a second group of participants rated how 'face-like' the caricatures appeared. With increasing levels of exaggeration the caricatures were rated as more emotionally intense, but less 'face-like'. Experiment 2 demonstrated a similar relationship between emotional intensity and level of caricature for six different facial expressions. Experiments 3 and 4 compared intensity ratings of facial expression caricatures prepared relative to a selection of reference norms - a neutral expression, an average expression, or a different facial expression (e.g. anger caricatured relative to fear). Each norm produced a linear relationship between caricature and rated intensity of emotion; this finding is inconsistent with two-dimensional models of the perceptual representation of facial expression. An exemplar-based multidimensional model is proposed as an alternative account.
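The caricaturing operation described above amounts to scaling the difference between an expression and a reference norm; below is a minimal sketch in terms of hypothetical facial-landmark coordinates (in the actual studies such a shape change drives image warping to yield photographic-quality caricatures).

```python
# Minimal sketch: expression caricaturing as exaggeration of landmark
# displacements relative to a reference norm (e.g., a neutral face).
# The (x, y) landmark coordinates below are hypothetical.
import numpy as np

def caricature(expression_pts: np.ndarray, norm_pts: np.ndarray,
               level: float) -> np.ndarray:
    """Exaggerate the difference between an expression and a norm.

    level = 0.0 reproduces the norm, 1.0 the original expression,
    and values > 1.0 give increasingly caricatured expressions.
    """
    return norm_pts + level * (expression_pts - norm_pts)

norm = np.array([[100.0, 120.0], [150.0, 120.0], [125.0, 180.0]])  # neutral
fear = np.array([[ 98.0, 110.0], [152.0, 110.0], [125.0, 190.0]])  # fearful

print(caricature(fear, norm, 1.5))   # 150% caricature of the fearful shape
```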

13.
It has been argued that critical functions of the human amygdala are to modulate the moment-to-moment vigilance level and to enhance the processing and consolidation of memories of emotionally arousing material. In this functional magnetic resonance imaging study, pictures of human faces bearing fearful, angry, and happy expressions were presented to nine healthy volunteers using a backward masking procedure with neutral facial expressions as masks. Activation of the left and right amygdala in response to the masked fearful faces (compared to neutral faces) was significantly correlated with the number of fearful faces detected. In addition, right but not left amygdala activation in response to the masked angry faces was significantly related to the number of angry faces detected. The present findings underscore the role of the amygdala in the detection and consolidation of memory for marginally perceptible threatening facial expressions.

14.
Recent studies have shown that reaction times to expressions of anger with averted gaze and fear with direct gaze appear slower than those to direct anger and averted fear. Such findings have been explained by appealing to the notion of gaze/expression congruence, with aversion (avoidance) associated with fear and directness (approach) associated with anger. The current study examined reactions to briefly presented direct and averted faces displaying expressions of fear and anger. Participants were shown four blocked series of faces; each block contained an equal mix of two facial expressions (neutral plus either fear or anger) presented at one viewpoint (either full face or three-quarter leftward facing). Participants were instructed to make rapid responses classifying the expressions as either neutral or expressive. Initial analysis of reaction time distributions showed differences in distribution shape, with reactions to averted anger and direct fear showing greater skew than those to direct anger and averted fear. Computational modelling, using a diffusion model of decision making and reaction time, showed a difference in the rate of information accrual, with more rapid rates of accrual when viewpoint and expression were congruent. This analysis supports the notion of signal congruence as a mechanism through which gaze and viewpoint affect our responses to facial expressions.
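A minimal simulation sketch of the diffusion-model idea follows, showing how a higher rate of information accrual (drift) produces faster and less skewed reaction-time distributions; the parameter values and the mapping of conditions to drift rates are illustrative assumptions, not the fitted values from the study.

```python
# Minimal sketch: a two-boundary random-walk (diffusion) decision process.
# Higher drift = faster information accrual = quicker, less skewed RTs.
import numpy as np

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001,
                 non_decision=0.3, n_trials=500, seed=0):
    rng = np.random.default_rng(seed)
    rts = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:                      # accumulate evidence
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + non_decision)                  # add non-decision time
    return np.array(rts)

congruent   = simulate_ddm(drift=2.0)   # e.g., direct anger / averted fear
incongruent = simulate_ddm(drift=1.0)   # e.g., averted anger / direct fear

print(f"mean RT congruent:   {congruent.mean():.3f} s")
print(f"mean RT incongruent: {incongruent.mean():.3f} s")
```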

15.
There remains conflict in the literature about the lateralisation of affective face perception. Some studies have reported a right hemisphere advantage irrespective of valence, whereas others have found a left hemisphere advantage for positive, and a right hemisphere advantage for negative, emotion. Differences in injury aetiology and chronicity, proportion of male participants, participant age, and the number of emotions used within a perception task may contribute to these contradictory findings. The present study therefore controlled and/or directly examined the influence of these possible moderators. Right brain-damaged (RBD; n = 17), left brain-damaged (LBD; n = 17), and healthy control (HC; n = 34) participants completed two face perception tasks (identification and discrimination). No group differences in facial expression perception according to valence were found. Across emotions, the RBD group was less accurate than the HC group; however, RBD and LBD group performance did not differ. The lack of difference between RBD and LBD groups indicates that both hemispheres are involved in positive and negative expression perception. The inclusion of older adults and the well-defined chronicity range of the brain-damaged participants may have moderated these findings. Participant sex and general face perception ability did not influence performance. Furthermore, while the RBD group was less accurate than the LBD group when the identification task tested two emotions, performance of the two groups was indistinguishable when the number of emotions increased (four or six). This suggests that task demand moderates a study’s ability to find hemispheric differences in the perception of facial emotion.

16.
Eighteen temporal lobectomy patients (9 left, LTL; 9 right, RTL) were administered four verbal tasks, an Affective Implicit Task, a Neutral Implicit Task, an Affective Explicit Task, and a Neutral Explicit Task. For the Affective and Neutral Implicit Tasks, participants were timed while reading aloud passages with affective or neutral content, respectively, as quickly as possible, but not so quickly that they did not understand. A target verbal passage was repeated three times; this target passage was alternated with other previously unread passages, and all passages had the same number of words. The Explicit Affective and Neutral Tasks were administered at the end of testing, and consisted of multiple choice questions regarding passage content. Verbal priming effects in terms of improved reading speed with repetition for the target but not non-target passages were found for patients with both left and right temporal lobectomies. As in the Burton, Rabin et al. [Burton, L., Rabin, L., Vardy, S.B., Frohlich, J., Wyatt, G., Dimitri, D., Constante, S., Guterman, E. (2004). Gender differences in implicit and explicit memory for affective passages. Brain and Cognition, 54(3), 218-224] normative study, there were no interactions between this priming effect and affective/neutral content. For the explicit tasks, items from the repeated passages were remembered better than the unrepeated passages, and there was a trend for information from the affective passages to be remembered better than the neutral passages, similar to the normative pattern. The RTL group did not show the normative pattern of slower reading speed for affective compared to neutral passages that the LTL group showed. Thus, the present findings support the idea that intact right medial temporal structures are important for affective content to influence some aspects of verbal processing.

17.
The purpose of this study was to determine the extent to which lobectomy affects the ability to discriminate facial identity or facial expression. Fifteen right temporal, 15 left temporal, 5 right frontal, and 4 left frontal lobectomy patients, pair-matched for age, sex, and education to normal control subjects, participated in this study. Tasks included a Facial Identity Matching Task and a Facial Affect Matching Task. The lobectomized patients as a whole were significantly impaired on both tasks (22% decrement in performance). The patients made twice as many errors resulting from perseveration of the response-set of the first condition (identity or emotion matching) into the second condition. The site of lobectomy did not influence general performance on any one task or selective performance on any subset of affective categories. It was concluded that all four brain regions play a significant and equal role in face processing, and that circuits more specifically dedicated to visual face processing, which are responsible for hemispheric dominance effects and affect/identity dissociations, are probably located more posteriorly in the brain. Finally, it was concluded that perseveration of an acquired habit may, under specific conditions, characterize temporal lobe dysfunction just as much as frontal lobe dysfunction.

18.
Three studies examined the nature of the contributions of each hemisphere to the processing of facial expressions and facial identity. A pair of faces, the members of which differed in either expression or identity, was presented to the right or left visual field. Subjects were required to compare the members of the pair to each other (Experiments 1 and 2) or to a previously presented sample (Experiment 3). The results revealed that both face and expression perception show a left visual field (LVF) superiority, although the two tasks could be differentiated in terms of overall processing time and the interaction of laterality differences with sex. No clear-cut differences in laterality emerged for the processing of positive and negative expressions.

19.
Very few large-scale studies have focused on emotional facial expression recognition (FER) in 3-year-olds, an age of rapid social and language development. We studied FER in 808 healthy 3-year-olds using verbal and nonverbal computerized tasks for four basic emotions (happiness, sadness, anger, and fear). Three-year-olds showed differential performance on the verbal and nonverbal FER tasks, especially with respect to fear. That is to say, fear was one of the most accurately recognized facial expressions when matched nonverbally and the least accurately recognized facial expression when labeled verbally. Sex did not influence emotion-matching or emotion-labeling performance after adjusting for basic matching or labeling ability. Three-year-olds made systematic errors in emotion-labeling. Namely, happy expressions were often confused with fearful expressions, whereas negative expressions were often confused with other negative expressions. Together, these findings suggest that 3-year-olds' FER skills strongly depend on task specifications. Importantly, fear was the most sensitive facial expression in this regard. Finally, in line with previous studies, we found that recognized emotion categories are initially broad, including emotions of the same valence, as reflected in the nonrandom errors of 3-year-olds.

20.
Using eye-tracking, this experiment examined the visual scanning patterns of children with autism when viewing facial expression images with different informational features, in order to explore the mechanisms of expression processing in autism. The results showed that: (1) when viewing real-face expressions, children with autism showed no significant difference in whole-face scanning time between happy and sad expressions, both of which exceeded the scanning times for angry and fearful expressions, and whole-face scanning time for anger exceeded that for fear; (2) when viewing real-face expressions, the children showed no difference in scanning time between the eye and mouth regions for happy and sad expressions, but spent significantly more time on the mouth region than on the eye region for fearful and angry expressions; (3) weakening the informational features affected the children's visual scanning of facial expressions: as the degree of weakening increased, scanning time on the mouth region increased while scanning time on the eye region decreased. Based on these results, it is argued that expression processing in children with autism is driven mainly by the informational features of the expression; as those features change, the children adjust their visual attention in order to process the expression adequately.
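As an illustration of the scan-time measure used above, here is a minimal sketch of summing fixation durations per area of interest (eye region vs mouth region); the AOI boxes and fixation records are hypothetical, not the study's data.

```python
# Minimal sketch: total dwell (scan) time per area of interest (AOI) computed
# from fixation records. AOI boxes and fixations below are hypothetical.
from collections import defaultdict

# Each AOI is an axis-aligned box: (x_min, y_min, x_max, y_max) in pixels.
aois = {"eyes": (80, 60, 240, 120), "mouth": (110, 200, 210, 260)}

# Each fixation: (x, y, duration_ms).
fixations = [(150, 90, 310), (160, 95, 250), (155, 230, 420), (300, 300, 180)]

dwell = defaultdict(float)
for x, y, dur in fixations:
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            dwell[name] += dur

print(dict(dwell))   # total scan time (ms) per AOI
```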
