Similar documents
20 similar documents found.
1.
Previous studies have shown inconsistent findings regarding the contribution of different prefrontal regions to emotion recognition. Moreover, the hemispheric lateralization hypothesis posits that the right hemisphere is dominant for processing all emotions regardless of affective valence, whereas the valence specificity hypothesis posits that the left hemisphere is specialized for processing positive emotions while the right hemisphere is specialized for negative emotions. However, recent findings suggest that the evidence for such lateralization has been less consistent. In this study, we investigated recognition of fear, surprise, happiness, sadness, disgust, and anger in 30 patients with focal prefrontal cortex lesions and 30 control subjects. We also examined the impact of lesion laterality on recognition of the six basic emotions. The results showed that, compared to control subjects, the frontal subgroups were impaired in recognizing the three negative basic emotions of fear, sadness, and anger, regardless of lesion laterality. Therefore, our findings did not establish that each hemisphere is specialized for processing specific emotions. Moreover, a voxel-based lesion–symptom mapping analysis showed that recognition of fear, sadness, and anger draws on a partially common, bilaterally distributed prefrontal network.

2.
《Brain and cognition》2014,84(3):252-261
Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person’s true feeling state by producing a brief facial blend of emotion, i.e. a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower more so than upper facial emotions are perceived best when presented to the viewer’s left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer’s left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person’s left ear, which also avoids the social stigma of eye-to-eye contact, one’s ability to decode facial expressions should be enhanced.

3.
Valence biases in attention allocation were assessed after remembering positive or negative personal events that were either still emotionally hot or to which the person had already adapted psychologically. Differences regarding the current state of psychological adjustment were manipulated experimentally by instructing participants to recall distant vs. recent events (Experiment 1) or affectively hot events vs. events to which the person had accommodated already (Experiment 2). Valence biases in affective processing were measured with a valence search task. Processes of emotional counter-regulation (i.e., attention allocation to stimuli of opposite valence to the emotional event) were elicited by remembering affectively hot events, whereas congruency effects (i.e., attention allocation to stimuli of the same valence as the emotional event) were obtained for events for which a final appraisal had already been established. The results of our study help to resolve conflicting findings from the literature regarding congruent vs. incongruent effects of remembering emotional events on affective processing. We discuss implications of our findings for the conception of emotions and for the dynamics of emotion regulation processes.

4.
Valence and arousal are thought to be the primary dimensions of human emotion. However, the degree to which valence and arousal interact in determining brain responses to emotional pictures is still elusive. This functional MRI study aimed to delineate neural systems responding to valence and arousal, and their interaction. We measured neural activation in healthy females (N = 23) to affective pictures using a 2 (Valence) × 2 (Arousal) design. Results showed that arousal was preferentially processed by the middle temporal gyrus, hippocampus and ventrolateral prefrontal cortex. Regions responding to negative valence included visual and lateral prefrontal areas, whereas positive valence activated middle temporal and orbitofrontal regions. Importantly, distinct arousal-by-valence interactions were present in the anterior insula (negative pictures), and in the occipital cortex, parahippocampal gyrus and posterior cingulate (positive pictures). These data demonstrate that the brain not only differentiates between valence and arousal but also responds to specific combinations of these two, thereby highlighting the sophisticated nature of emotion processing in (female) human subjects.
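As an illustration of how a 2 (Valence) × 2 (Arousal) factorial design yields main-effect and interaction contrasts, here is a minimal sketch with made-up condition maps; the variable names and toy data are assumptions for illustration, not the authors' analysis pipeline.

```python
import numpy as np

# Hypothetical condition-mean activation maps (one value per voxel),
# e.g. averaged beta estimates for each cell of the 2 x 2 design.
rng = np.random.default_rng(0)
n_voxels = 10000
neg_high = rng.normal(size=n_voxels)   # negative valence, high arousal
neg_low = rng.normal(size=n_voxels)    # negative valence, low arousal
pos_high = rng.normal(size=n_voxels)   # positive valence, high arousal
pos_low = rng.normal(size=n_voxels)    # positive valence, low arousal

# Standard factorial contrasts: two main effects and the interaction.
arousal_effect = (neg_high + pos_high) - (neg_low + pos_low)
valence_effect = (neg_high + neg_low) - (pos_high + pos_low)
interaction = (neg_high - neg_low) - (pos_high - pos_low)

# Voxels with a large interaction term respond to specific combinations
# of valence and arousal rather than to either dimension alone.
print("strongest interaction at voxel", int(np.argmax(np.abs(interaction))))
```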

5.
《Brain and cognition》2010,72(3):387-396
Valence and arousal are thought to be the primary dimensions of human emotion. However, the degree to which valence and arousal interact in determining brain responses to emotional pictures is still elusive. This functional MRI study aimed to delineate neural systems responding to valence and arousal, and their interaction. We measured neural activation in healthy females (N = 23) to affective pictures using a 2 (Valence) × 2 (Arousal) design. Results showed that arousal was preferentially processed by the middle temporal gyrus, hippocampus and ventrolateral prefrontal cortex. Regions responding to negative valence included visual and lateral prefrontal areas, whereas positive valence activated middle temporal and orbitofrontal regions. Importantly, distinct arousal-by-valence interactions were present in the anterior insula (negative pictures), and in the occipital cortex, parahippocampal gyrus and posterior cingulate (positive pictures). These data demonstrate that the brain not only differentiates between valence and arousal but also responds to specific combinations of these two, thereby highlighting the sophisticated nature of emotion processing in (female) human subjects.

6.
The present study investigated whether counter-regulation in affective processing is triggered by emotions. Automatic attention allocation to valent stimuli was measured in the context of positive and negative affective states. Valence biases were assessed by comparing the detection of positive versus negative words in a visual search task (Experiment 1) or by comparing interference effects of positive and negative distractor words in an emotional Stroop task (Experiment 2). Imagining a hypothetical emotional situation (Experiment 1) or watching romantic versus depressing movie clips (Experiment 2) increased attention allocation to stimuli that were opposite in valence to the current emotional state. Counter-regulation is assumed to reflect a basic mechanism underlying implicit emotion regulation.
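A small hedged sketch of how a valence bias of this kind is commonly quantified from reaction times, e.g. faster detection of positive than negative targets in a search task; the numbers and variable names below are illustrative assumptions, not the authors' data or analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-trial search reaction times (ms), keyed by target valence.
rt_positive = rng.normal(620, 80, size=100)
rt_negative = rng.normal(650, 80, size=100)

# A positive index means faster detection of positive than negative targets,
# i.e. attention allocated preferentially to positive material.
valence_bias = rt_negative.mean() - rt_positive.mean()
print(f"valence bias: {valence_bias:.1f} ms")
```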

7.
Hemisphere differences in conscious and unconscious word reading
Hemisphere differences in word reading were examined using explicit and implicit processing measures. In an inclusion task, which indexes both conscious (explicit) and unconscious (implicit) word reading processes, participants were briefly presented with a word in either the right or the left visual field and were asked to use this word to complete a three-letter word stem. In an exclusion task, which estimates unconscious word reading, participants completed the word stem with any word other than the prime word. Experiment 1 showed that words presented to either visual field were processed in very similar ways in both tasks, with the exception that words in the right visual field (left hemisphere) were more readily accessible for conscious report. Experiment 2 indicated that unconsciously processed words are shared between the hemispheres, as similar results were obtained when either the same or the opposite visual field received the word stem. Experiment 3 demonstrated that this sharing between hemispheres is cortically mediated by testing a split-brain patient. These results suggest that the left hemisphere advantage for word reading holds only for explicit measures; unconscious word reading is much more balanced between the hemispheres.
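Inclusion/exclusion tasks of this kind are commonly analysed with process-dissociation estimates of conscious and unconscious contributions. The abstract does not give the authors' formulas, so the sketch below is an assumed, standard analysis with hypothetical completion rates, shown only to make the inclusion/exclusion logic concrete.

```python
def process_dissociation(p_inclusion: float, p_exclusion: float) -> tuple[float, float]:
    """Assumed standard estimates: inclusion = C + U*(1 - C) and
    exclusion = U*(1 - C), so C = inclusion - exclusion and
    U = exclusion / (1 - C)."""
    conscious = p_inclusion - p_exclusion
    unconscious = p_exclusion / (1 - conscious) if conscious < 1 else float("nan")
    return conscious, unconscious

# Hypothetical stem-completion rates with the primed word per visual field
# (illustrative numbers only, not results from the study).
for field, (inc, exc) in {"RVF/LH": (0.55, 0.30), "LVF/RH": (0.45, 0.32)}.items():
    c, u = process_dissociation(inc, exc)
    print(f"{field}: conscious = {c:.2f}, unconscious = {u:.2f}")
```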

8.
Although the right hemisphere is thought to be preferentially involved in visuospatial processing, the specialization of the two hemispheres with respect to object identification is unclear. The present study investigated the effects of hemifield presentation on object and word identification by presenting objects (Experiment 1) and words (Experiment 2) in a rapid visual stream of distracters. In Experiment 1, object images presented in the left visual field (i.e., to the right hemisphere) were identified with shorter display times. In addition, the left visual field advantage was greater for inverted objects. In Experiment 2, words presented in the right visual field (i.e., to the left hemisphere) under similar conditions were identified with shorter display times. These results support the idea that the right hemisphere is specialized with regard to object identification.

9.
Deficits in facial emotion recognition occur frequently after stroke, with adverse social and behavioural consequences. The aim of this study was to investigate the neural underpinnings of the recognition of emotional expressions, in particular of the distinct basic emotions (anger, disgust, fear, happiness, sadness and surprise). A group of 110 ischaemic stroke patients with lesions in (sub)cortical areas of the cerebrum was included. Emotion recognition was assessed with the Ekman 60 Faces Test of the FEEST. Patient data were compared to data of 162 matched healthy controls (HCs). For the patients, whole brain voxel-based lesion–symptom mapping (VLSM) on 3-Tesla MRI images was performed. Results showed that patients performed significantly worse than HCs on overall recognition of emotions, and specifically on disgust, fear, sadness and surprise. VLSM showed significant lesion–symptom associations for the FEEST total in the right fronto-temporal region. Additionally, VLSM for the distinct emotions revealed, apart from overlapping brain regions (insula, putamen and Rolandic operculum), regions related to specific emotions: middle and superior temporal gyrus (anger); caudate nucleus (disgust); superior corona radiata white matter tract, superior longitudinal fasciculus and middle frontal gyrus (happiness); and inferior frontal gyrus (sadness). Our findings help in understanding how lesions in specific brain regions can selectively affect the recognition of the basic emotions.
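A minimal sketch of the mass-univariate logic behind voxel-based lesion–symptom mapping (VLSM): at each voxel, patients are split by lesion status and their behavioural scores are compared. The toy data, variable names, and thresholds below are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients, n_voxels = 110, 5000

# Hypothetical inputs: binary lesion maps (1 = voxel lesioned) and one
# behavioural score per patient (e.g. an emotion-recognition total).
lesions = rng.integers(0, 2, size=(n_patients, n_voxels))
scores = rng.normal(loc=50, scale=10, size=n_patients)

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    lesioned = scores[lesions[:, v] == 1]
    intact = scores[lesions[:, v] == 0]
    if len(lesioned) >= 5 and len(intact) >= 5:  # assumed minimum-overlap criterion
        # Lower scores in the lesioned group yield negative t values here.
        t_map[v], _ = stats.ttest_ind(lesioned, intact, equal_var=False)

# Strongly negative t values mark candidate lesion-symptom associations;
# a real analysis would add multiple-comparison correction.
print("most negative t:", np.nanmin(t_map))
```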

10.
Three experiments were conducted in order to validate 56 musical excerpts that conveyed four intended emotions (happiness, sadness, threat and peacefulness). In Experiment 1, the musical clips were rated in terms of how clearly the intended emotion was portrayed, and for valence and arousal. In Experiment 2, a gating paradigm was used to evaluate the time course of emotion recognition. In Experiment 3, a dissimilarity judgement task and multidimensional scaling analysis were used to probe emotional content without emotional labels. The results showed that emotions are easily recognised and discriminated on the basis of valence and arousal, and with relative immediacy: happy and sad excerpts were identified after the presentation of fewer than three musical events. Without labelling, emotion discrimination remained highly accurate and could be mapped onto energetic and tense dimensions. The present study provides suitable musical material for research on emotions.
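A hedged sketch of how dissimilarity judgements for musical excerpts can be embedded in a low-dimensional space with multidimensional scaling, onto axes that may then be interpreted as energetic and tense dimensions; the toy dissimilarity matrix is an assumption, and scikit-learn's MDS is used purely for illustration.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
n_excerpts = 56

# Hypothetical pairwise dissimilarity judgements (symmetric, zero diagonal),
# e.g. averaged ratings of how different two excerpts feel.
d = rng.random((n_excerpts, n_excerpts))
dissim = (d + d.T) / 2
np.fill_diagonal(dissim, 0.0)

# Embed the excerpts in two dimensions from the precomputed dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
print(coords.shape)  # (56, 2): one 2-D coordinate per excerpt
```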

11.
This research investigates the hemispheric processing of anaphors when readers activate multiple antecedents. Participants read texts promoting an anaphoric inference and performed a lexical decision task on inference-related target words that were consistent (Experiment 1) or inconsistent (Experiment 2) with the text. These targets were preceded by constrained or less constraining text and were presented to participants' right visual field (left hemisphere) or to their left visual field (right hemisphere). In Experiment 1, both hemispheres showed facilitation for consistent antecedents, and the right hemisphere showed an advantage over the left hemisphere in processing antecedents preceded by less constrained text. In Experiment 2, only the left hemisphere showed negative facilitation for inconsistent antecedents. When readers comprehend text with multiple antecedents, both hemispheres process consistent information, the left hemisphere inhibits inconsistent information, and the right hemisphere processes less constrained information.

12.
There is considerable debate regarding the extent to which limbic regions respond differentially to items with different valences (positive or negative) or to different stimulus types (pictures or words). In the present event-related fMRI study, 21 participants viewed words and pictures that were neutral, negative, or positive. Negative and positive items were equated on arousal. The participants rated each item for whether it depicted or described something animate or inanimate or something common or uncommon. For both pictures and words, the amygdala, dorsomedial prefrontal cortex (PFC), and ventromedial PFC responded equally to all high-arousal items, regardless of valence. Laterality effects in the amygdala were based on the stimulus type (word = left, picture = bilateral). Valence effects were most apparent when the individuals processed pictures, and the results revealed a lateral/medial distinction within the PFC: the lateral PFC responded differentially to negative items, whereas the medial PFC was more engaged during the processing of positive pictures.

13.
Introduction: Psychopaths with predominantly reduced interpersonal and affective ability are characterized by hypofunction of the right hemisphere, while psychopaths with predominant impulsivity and antisocial behavior are characterized by hyperfunction of the left hemisphere. The assumption is that this interhemispheric imbalance in psychopaths will also be reflected in the recognition of facial emotional expressions. Objective: To examine the lateralization of facial expressions of positive and negative emotions, as well as the processing of facial expressions of emotions, in criminal and non-criminal psychopaths. Participants: 48 male participants aged 24–40 were voluntarily recruited from the psychiatric hospital in Nis, Serbia. Stimuli: 48 black-and-white photographs were used in two separate tasks, with central and lateral presentation. Results: Criminality is related to reduced recognition of the facial expression of surprise and not necessarily to psychopathy, whereas reduced recognition of the facial expression of fear is related to psychopathy but not criminality. The valence-specific hypothesis was not confirmed for positive and negative emotions in criminal and non-criminal psychopaths and non-psychopaths, but positive emotions were processed equally well in both hemispheres, whereas negative emotions were processed more successfully in the left hemisphere.

14.
To deny others’ humanity is one of the most heinous forms of intergroup prejudice. Given evidence that perceiving various forms of complexity in outgroup members reduces intergroup prejudice, we investigated across three experiments whether the novel dimension of emotional complexity, or outgroup members’ joint experience of mixed-valence emotions, would also reduce their dehumanisation. Experiment 1 found that perceiving fictitious aliens’ experience of the same primary emotions (e.g. sadness) presented in mixed vs. non-mixed valence pairs led to reduced prejudice via attenuated dehumanisation, i.e. attribution of uniquely human emotions. Experiment 2 confirmed these results, using an unfamiliar real-world group as an outgroup target. Experiment 3 used a familiar outgroup and found generally similar effects, reducing social distance through reduced dehumanisation. These processes suggest that an alternate route to reduced dehumanising of outgroups might involve presenting mixed valence emotions.

15.
16.
《Acta psychologica》2013,142(2):273-277
The body-specificity hypothesis (Casasanto, 2009) associates positive emotional valence with the space surrounding the dominant hand, and negative valence with the space surrounding the non-dominant hand. This effect has been found not only for manual responses, but also for the left and right sides. In the present study, we investigated whether this compatibility effect still shows when hand and side carry incongruent information, and whether it is then related to hand or to side. We conducted two experiments which used an incongruent hand–response key assignment, that is, participants had their hands crossed. Participants were instructed to respond with their right vs. left hand (Experiment 1) or with the right vs. left key (Experiment 2). In both experiments, a compatibility effect related to hand emerged, indicating that the association between hand and valence overrides the one between side and valence when hand and side carry contradicting information.

17.
Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person’s true feeling state by producing a brief facial blend of emotion, i.e. a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower more so than upper facial emotions are perceived best when presented to the viewer’s left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer’s left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person’s left ear, which also avoids the social stigma of eye-to-eye contact, one’s ability to decode facial expressions should be enhanced.

18.
People implicitly associate different emotions with different locations in left‐right space. Which aspects of emotion do they spatialize, and why? Across many studies people spatialize emotional valence, mapping positive emotions onto their dominant side of space and negative emotions onto their non‐dominant side, consistent with theories of metaphorical mental representation. Yet other results suggest a conflicting mapping of emotional intensity (a.k.a., emotional magnitude), according to which people associate more intense emotions with the right and less intense emotions with the left, regardless of their valence; this pattern has been interpreted as support for a domain‐general system for representing magnitudes. To resolve the apparent contradiction between these mappings, we first tested whether people implicitly map either valence or intensity onto left‐right space, depending on which dimension of emotion they attend to (Experiments 1a, b). When asked to judge emotional valence, participants showed the predicted valence mapping. However, when asked to judge emotional intensity, participants showed no systematic intensity mapping. We then tested an alternative explanation of findings previously interpreted as evidence for an intensity mapping (Experiments 2a, b). These results suggest that previous findings may reflect a left‐right mapping of spatial magnitude (i.e., the size of a salient feature of the stimuli) rather than emotion. People implicitly spatialize emotional valence, but, at present, there is no clear evidence for an implicit lateral mapping of emotional intensity. These findings support metaphor theory and challenge the proposal that mental magnitudes are represented by a domain‐general metric that extends to the domain of emotion.

19.
The aim of this study was to investigate the effect of a stroke event on people's ability to recognize basic emotions. In particular, we tested the hypothesis that right brain-damaged (RBD) patients would show poorer emotion recognition than left brain-damaged (LBD) patients and healthy controls. To investigate this, the FEEL Test (Facially Expressed Emotion Labeling) was used, a computer-based psychometric test that assesses the ability to recognize facially displayed basic emotions via a forced-choice paradigm. We examined 24 patients after a stroke event (13 RBD, 11 LBD) and compared them with a matched group of healthy controls (HC, n=29). Results showed that the stroke patients performed significantly worse on the FEEL Test than the HCs (p<.001). This deficit was especially evident for negative emotions (fear, anger, sadness, and disgust). In contrast to other studies, we did not find any significant differences between RBD and LBD patients in their ability to recognize emotions. These results indicate that a stroke event has a negative effect on the recognition of facially displayed emotions, but that this effect does not appear to depend on the side of the brain damage.

20.
The recognition of emotional facial expressions is often subject to contextual influence, particularly when the face and the context convey similar emotions. We investigated whether spontaneous, incidental affective theory of mind inferences made while reading vignettes describing social situations would produce context effects on the identification of same-valenced emotions (Experiment 1) as well as differently-valenced emotions (Experiment 2) conveyed by subsequently presented faces. Crucially, we found an effect of context on reaction times in both experiments while, in line with previous work, we found evidence for a context effect on accuracy only in Experiment 1. This demonstrates that affective theory of mind inferences made at the pragmatic level of a text can automatically, contextually influence the perceptual processing of emotional facial expressions in a separate task even when those emotions are of a distinctive valence. Thus, our novel findings suggest that language acts as a contextual influence on the recognition of emotional facial expressions for both same and different valences.
