Similar Articles
1.
Dichotic listening (DL) is a valuable tool for studying emotional brain lateralization. Regarding the perception of sadness and anger through affective prosody, the main finding has been a left ear advantage (LEA) for sad prosody but contradictory data for angry prosody. Regarding mood induced in the laboratory, its consequences for DL were a diminished right ear advantage (REA) after the induction of sadness and an increased REA after the induction of anger. The overall results fit the approach-withdrawal motivational model of emotional processing, pointing to sadness as a right-hemisphere emotion, whereas anger is processed bilaterally or even in the left hemisphere, depending on the subject's preferred mode of expression. On the other hand, studies of DL in clinically depressed patients found an abnormally large REA in verbal DL tasks, which was predictive of the therapeutic response to pharmacological treatment. However, the mobilization of the available left-hemisphere resources in these responders (reflected in a higher REA) would indicate remission of the episode but would not ensure the absence of new relapses.

2.
In a previous study of the comprehension of linguistic prosody in brain-damaged subjects, S. R. Grant and W. O. Dingwall (1984. The role of the right hemisphere in processing linguistic prosody, presentation at the Academy of Aphasia, 1984) demonstrated that the right hemisphere (RH) of nonaphasic patients plays a prominent role in the processing of stress and intonation. The present study examines laterality for affective and linguistic prosody using the dichotic listening paradigm. Both types of prosody elicited a significant left ear advantage. This advantage was more pronounced for affective than for linguistic prosody. These findings strongly support previously documented evidence of RH involvement in the processing of affective prosody (R. G. Ley & M. P. Bryden, 1982. A dissociation of right and left hemispheric effects for recognizing emotional tone and verbal content, Brain and Cognition, 1, 3-9). They also provide support for the previously mentioned demonstration of RH involvement in the processing of linguistic intonation (S. Blumstein & W. E. Cooper, 1974. Hemispheric processing of intonation contours, Cortex, 10, 146-158; Grant & Dingwall, 1984).

3.
Zheng Zhiwei, Huang Xianjun, Zhang Qin. Acta Psychologica Sinica, 2013, 45(4): 427-437
Using a prosody/lexical interference paradigm and a delayed matching task, two ERP experiments examined whether and how emotional prosody in spoken Mandarin modulates the recognition of emotional words. In Experiment 1, the different types of emotional prosody were presented in separate blocks; the ERP results showed that emotional words whose valence was incongruent with the emotional prosody elicited more negative-going P200, N300, and N400 components than valence-congruent emotional words. In Experiment 2, the different types of emotional prosody were presented in random order, and the valence-congruence effect persisted. The results indicate that emotional prosody can modulate emotional word recognition, mainly by facilitating both the phonological encoding and the semantic processing of emotional words.

4.
Emotionally intoned sentences (happy, sad, angry, and neutral voices) were dichotically paired with monotone sentences. A left ear advantage was found for recognizing emotional intonation, while a simultaneous right ear advantage was found for recognizing the verbal content of the sentences. The results indicate a right hemispheric superiority in recognizing emotional stimuli. These findings are most reasonably attributed to differential lateralization of emotional functions, rather than to subject strategy effects. No evidence was found to support a hypothesis that each hemisphere is involved in processing different types of emotion.

5.
Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of “happy” and “sad” were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of “happy” and “sad” were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.

6.
The effect of attention on cerebral dominance and the asymmetry between left and right ears was investigated using a selective listening task. Right-handed subjects were presented with simultaneous dichotic speech messages; they shadowed one message in either the right or left ear and at the same time tapped with either the right or the left hand when they heard a specified target word in either message. The ear asymmetry was shown only when subjects' attention was focused on some other aspect of the task: they tapped to more targets in the right ear, but only when these came in the non-shadowed message; they made more shadowing errors with the left ear message, but chiefly for non-target words. The verbal response of shadowing showed the right ear dominance more clearly than the manual response of tapping. Tapping with the left hand interfered more with shadowing than tapping with the right hand, but there was little correlation between the degree of hand and of ear asymmetry over individual subjects. The results support the idea that the right ear dominance is primarily a quantitative difference in the distribution of attention to left and right ear inputs reaching the left hemisphere speech areas. This affects both the efficiency of speech perception and the degree of response competition between simultaneous verbal and manual responses.

7.
Sixteen right-handed adult males with localized insult to either the right or left hemisphere and five control subjects without brain damage read aloud target sentences embedded in paragraphs, while intoning their voices in either a declarative, interrogative, happy, or sad mode. Acoustical analysis of the speech wave was performed. Right-anterior (pre-Rolandic) and right-central (pre- and post-Rolandic) brain-damaged patients spoke with less pitch variation and restricted intonational range across emotional and nonemotional domains, while patients with right posterior (post-Rolandic) damage had exaggerated pitch variation and intonational range across both domains. No such deficits were found in patients with left posterior damage, whose prosody was similar to that of normal control subjects. It is suggested that damage to the right hemisphere alone may result in a primary disturbance of speech prosody that may be independent of the disturbances in affect often noted in right-brain-damaged populations.

8.
The majority of evidence on social anxiety (SA)-linked attentional biases to threat comes from research using facial expressions. Emotions are, however, communicated through other channels, such as voice. Despite its importance in the interpretation of social cues, emotional prosody processing in SA has been barely explored. This study investigated whether SA is associated with enhanced processing of task-irrelevant angry prosody. Fifty-three participants with high and low SA performed a dichotic listening task in which pairs of male/female voices were presented, one to each ear, with either the same or different prosody (neutral or angry). Participants were instructed to focus on either the left or right ear and to identify the speaker’s gender in the attended side. Our main results show that, once attended, task-irrelevant angry prosody elicits greater interference than does neutral prosody. Surprisingly, high socially anxious participants were less prone to distraction from attended-angry (compared to attended-neutral) prosody than were low socially anxious individuals. These findings emphasise the importance of examining SA-related biases across modalities.

9.
Lateralization of verbal and affective processes was investigated in P-dyslexic, L-dyslexic and normal children with the aid of a dichotic listening task. The children were asked to detect either the presence of a specific target word or of words spoken in a specific emotional tone of voice. The number of correct responses and reaction time were recorded. For monitoring words, an overall right ear advantage was obtained. However, further tests showed no significant ear advantage for P-types, and a right ear advantage for L-types and controls. For emotions, an overall left ear advantage was obtained that was less robust than the word-effect. The results of the word task are in support of previous findings concerning differences between P- and L-dyslexics in verbal processing according to the balance model of dyslexia. However, dyslexic children do not differ from controls on processing of emotional prosody although certain task variables may have affected this result.

10.
BACKGROUND AND OBJECTIVES: Whereas injury to the left hemisphere induces aphasia, injury to the right hemisphere's perisylvian region induces an impairment of emotional speech prosody (affective aprosodia). Left-sided medial frontal lesions are associated with reduced verbal fluency with relatively intact comprehension and repetition (transcortical motor aphasia), but persistent affective prosodic defects associated with right medial frontal lesions have not been described. METHODS: We assessed the prosody of a man who sustained a right medial frontal cerebral infarction seven years prior. RESULTS: While propositional speech expression was normal including syntactic prosody, the patient was impaired at expressing emotions using prosody. His comprehension and repetition of prosody were also impaired but less so than expression. CONCLUSIONS: Right medial frontal lesions can induce an affective aprosodia that primarily impairs expression.

11.
The majority of studies have demonstrated a right hemisphere (RH) advantage for the perception of emotions. Other studies have found that the involvement of each hemisphere is valence specific, with the RH better at perceiving negative emotions and the left hemisphere (LH) better at perceiving positive emotions [Reuter-Lorenz, P., & Davidson, R.J. (1981). Differential contributions of the 2 cerebral hemispheres to the perception of happy and sad faces. Neuropsychologia, 19, 609-613]. To account for valence laterality effects in emotion perception we propose an 'expectancy' hypothesis which suggests that valence effects are obtained when the top-down expectancy to perceive an emotion outweighs the strength of bottom-up perceptual information enabling the discrimination of an emotion. A dichotic listening task was used to examine alternative explanations of valence effects in emotion perception. Emotional sentences (spoken in a happy or sad tone of voice), and morphed-happy and morphed-sad sentences (which blended a neutral version of the sentence with the pitch of the emotion sentence) were paired with neutral versions of each sentence and presented dichotically. A control condition was also used, consisting of two identical neutral sentences presented dichotically, with one channel arriving before the other by 7 ms. In support of the RH hypothesis there was a left ear advantage for the perception of sad and happy emotional sentences. However, morphed sentences showed no ear advantage, suggesting that the RH is specialised for the perception of genuine emotions and that a laterality effect may be a useful tool for the detection of fake emotion. Finally, for the control condition we obtained an interaction between the expected emotion and the effect of ear lead. Participants tended to select the ear that received the sentence first when they expected a 'sad' sentence, but not when they expected a 'happy' sentence. The results are discussed in relation to the different theoretical explanations of valence laterality effects in emotion perception.

12.
Dichotic listening performance for different classes of speech sounds was examined under conditions of controlled attention. Consideration of the complex of target item and competing item demonstrated that, in general, targets were more accurately identified when the competing item shared no relevant features with it and less accurately identified when the competing item shared place, voice, or manner with the target item. Nasals as well as stops demonstrated a significant right-ear advantage (REA). False alarm rates were very similar for left and right attentional conditions, whereas intrusions from the right ear while attending to the left were far more common than intrusions from the left while attending to the right. Attention is viewed as serving to select the stimuli that will be reported, but at a late stage, and only after the right ear perceptual advantage has had its effect. A model of dichotic listening performance is proposed in which both the ease of localizing the item and the strength of evidence for the presence of the item are relevant factors.

13.
Accurately recognizing emotional prosody in speech is essential for social interaction. This study used functional near-infrared spectroscopy (fNIRS) to explore cortical activity during the processing of angry, fearful, and happy prosody under explicit and implicit emotion-processing conditions. The results showed that the brain regions specifically engaged by angry, fearful, and happy prosody were the left frontal pole/orbitofrontal cortex, the right supramarginal gyrus, and the left inferior frontal gyrus, respectively, with the right supramarginal gyrus modulated by both emotion and task. In addition, the right middle temporal gyrus, inferior temporal gyrus, and temporal pole were activated significantly more strongly in the explicit than in the implicit emotion task. These results partially support the hierarchical model of emotional prosody, but they call into question its third level, namely the claim that fine-grained frontal processing of vocal emotional information requires an explicit emotion-processing task.

14.
Facial electromyographic (EMG) activity at the corrugator and zygomatic muscle regions was recorded from 37 subjects while they posed happy and sad facial expressions. Analysis showed that, while a happy facial expression was posed, the mean EMG activity at the left zygomatic muscle region was the highest, followed by the right zygomatic, left corrugator, and right corrugator muscle regions. While a sad facial expression was posed, the mean EMG activity at the left corrugator muscle region was the highest, followed by the right corrugator, left zygomatic, and right zygomatic muscle regions. Further analysis indicated that facial EMG activity was stronger on the left side of the face than on the right side while posing both happy and sad expressions.

15.
Posers were requested to produce happy and sad emotional expressions, deliberately accentuated on the left and right sides of the face. Raters judged the emotional intensity of expressions when presented in original and mirror-reverse orientation. Left-side-accentuated sad expressions were rated as more intense than right-side-accentuated sad expressions. Raters were biased to judge expressions as more intense when the accentuated side was to their left. The findings indicated that the perceiver bias in weighting information from the side of the face in left hemispace extends to judgments of emotional intensity.

16.
Averaged evoked potentials (AEP) to verbal (digits) and nonverbal (clicks) auditory stimuli were recorded from left and right temporal leads in ten right-handed subjects. With dichotic presentation, there was no significant difference in accuracy of report of the clicks heard in each ear, but significantly more digits were identified correctly from the right ear than from the left. Dichotic verbal stimuli elicited AEP whose early components were of greater amplitude, and whose later components were of shorter latency, from the left hemisphere than from the right. No consistent latency or amplitude differences were observed between AEP from the left and right hemispheres when clicks were presented dichotically.

17.
In a dichotic listening experiment, white noise was played to one ear and either music or poetry to the other ear. Subjects rated the stimuli on each of three dimensions. The results showed that both music and poetry were judged as more pleasant when heard at the left ear than at the right ear. In addition, the music, but not the poetry, was perceived as more soothing at the left ear. The findings are discussed in relation to other indications in the literature that the left and right cerebral hemispheres differ in their emotional make-up.

18.
In a pitch discrimination task, subjects were faster and more accurate in judging low-frequency sounds when these stimuli were presented to the left ear than to the right ear. In contrast, a right-ear advantage was found with high-frequency sounds. The effect was in terms of relative frequency rather than absolute frequency, suggesting that it arose from postsensory mechanisms. A similar laterality effect has been reported in visual perception with stimuli varying in spatial frequency. These multimodal laterality effects may reflect a general computational difference between the two cerebral hemispheres, with the left hemisphere biased toward processing high-frequency information and the right hemisphere biased toward processing low-frequency information.

19.
Which brain regions are associated with recognition of emotional prosody? Are these distinct from those for recognition of facial expression? These issues were investigated by mapping the overlaps of co-registered lesions from 66 brain-damaged participants as a function of their performance in rating basic emotions. It was found that recognizing emotions from prosody draws on the right frontoparietal operculum, the bilateral frontal pole, and the left frontal operculum. Recognizing emotions from prosody and facial expressions draws on the right frontoparietal cortex, which may be important in reconstructing aspects of the emotion signaled by the stimulus. Furthermore, there were regions in the left and right temporal lobes that contributed disproportionately to recognition of emotion from faces or prosody, respectively.
