Similar Documents
20 similar documents found (search time: 15 ms)
1.
In order to investigate the lateralization of emotional speech we recorded the brain responses to three emotional intonations in two conditions: "normal" speech and "prosodic" speech (speech with no linguistic meaning, but retaining the slow prosodic modulations of speech). Participants listened to semantically neutral sentences spoken with a positive, neutral, or negative intonation in both conditions and judged how positive, negative, or neutral the intonation was on a five-point scale. Core peri-sylvian language areas, as well as some frontal and subcortical areas, were activated bilaterally in the normal speech condition. In contrast, a bilateral fronto-opercular region was active when participants listened to prosodic speech. Positive and negative intonations elicited a bilateral fronto-temporal and subcortical pattern in the normal speech condition, and more frontal activation in the prosodic speech condition. The current results call into question an exclusive right-hemisphere lateralization of emotional prosody and expand patient data on the functional role of the basal ganglia during the perception of emotional prosody.

2.
The functional specificity of different brain areas recruited in auditory language processing was investigated by means of event-related functional magnetic resonance imaging (fMRI) while subjects listened to speech input varying in the presence or absence of semantic and syntactic information. There were two sentence conditions containing syntactic structure, i.e., normal speech (consisting of function and content words), syntactic speech (consisting of function words and pseudowords), and two word-list conditions, i.e., real words and pseudowords. The processing of auditory language, in general, correlates with significant activation in the primary auditory cortices and in adjacent compartments of the superior temporal gyrus bilaterally. Processing of normal speech appeared to have a special status, as no frontal activation was observed in this case but was seen in the three other conditions. This difference may point toward a certain automaticity of the linguistic processes used during normal speech comprehension. When considering the three other conditions, we found that these were correlated with activation in both left and right frontal cortices. An increase of activation in the planum polare bilaterally and in the deep portion of the left frontal operculum was found exclusively when syntactic processes were in focus. Thus, the present data may be taken to suggest an involvement of the left frontal and bilateral temporal cortex when processing syntactic information during comprehension.

3.
The functional specificity of different brain regions recruited in auditory language processing was investigated by means of event-related functional magnetic resonance imaging (fMRI) while subjects listened to speech input varying in the presence or absence of semantic and syntactic information. There were two sentence conditions containing syntactic structure, i.e., normal speech (consisting of function and content words), syntactic speech (consisting of function words and pseudowords), and two word-list conditions, i.e., real words and pseudowords. The processing of auditory language, in general, correlates with significant activation in the primary auditory cortices and in adjacent compartments of the superior temporal gyrus bilaterally. Processing of normal speech appeared to have a special status, as no frontal activation was observed in this case but was seen in the other three conditions. This difference may point toward a certain automaticity of the linguistic processes used during normal speech comprehension. When considering the three other conditions, we found that these were correlated with activation in both left and right frontal cortices. An increase of activation in the planum polare bilaterally and in the deep portion of the left frontal operculum was found exclusively when syntactic processes were in focus. Thus, the present data may be taken to suggest an involvement of the left frontal and bilateral temporal cortex when processing syntactic information during comprehension.

4.
Aphasia is a major health problem facing humanity today. Melodic Intonation Therapy (MIT) is regarded as one of the effective treatments for aphasia. Traditional MIT emphasizes strict procedures and materials, whereas adapted versions are adjusted to the individual patient; both can improve spontaneous speech production, speech repetition, and naming in people with aphasia. Research has further shown that MIT not only increases activation in relevant brain regions of people with aphasia, but can also improve their speech function by acting on the associated neural structures. Future research needs to further clarify the therapy's mechanisms of intervention and its applicability to aphasia in Mandarin speakers.

5.
Current production studies present a mixed view of right hemisphere-damaged (RHD) patients' ability to produce normal sentence intonation. The present study characterized the sentence intonation of RHD patients, focusing on a greater number of acoustic parameters than past work, and relying on more naturally elicited speech samples through use of a story completion task. Eight RHD speakers and seven nonneurological control subjects produced declarative and imperative sentences as well as yes-no and wh-questions. Slope of F0 change, linearity of pitch contour, and variance of F0 points were calculated for each utterance as a whole, as well as for the preterminal and the terminal contour separately. RHD contours were less linear and flatter in F0 decline than those of normal controls for the declarative sentences. The patients' yes-no questions also differed from normal productions, displaying smaller F0 dispersion around a mean F0. Preterminal range values were more restricted in patients' utterances of yes-no questions, while terminal properties differed between groups for three of the four sentence types examined. The present results suggest some disturbance in the patients' ability to manipulate fundamental frequency across sentential domains. These data are discussed in terms of current theories of a general dysprosody in RHD patients.
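To make these three acoustic parameters concrete, here is a minimal sketch, assuming an F0 track has already been extracted from each utterance (the function name, the 10 ms frame step, and the example contour are illustrative, not taken from the study):

```python
import numpy as np

def f0_contour_measures(f0_hz, frame_step_s=0.01):
    """Slope, linearity, and variance of an F0 contour.

    f0_hz: 1-D array of F0 values (Hz), one per voiced frame.
    frame_step_s: time between frames (illustrative: 10 ms).
    """
    t = np.arange(len(f0_hz)) * frame_step_s
    # Slope of F0 change: least-squares line fit over the utterance.
    slope, intercept = np.polyfit(t, f0_hz, 1)
    fitted = slope * t + intercept
    # Linearity: R^2 of the line fit; more linear contours approach 1.
    ss_res = np.sum((f0_hz - fitted) ** 2)
    ss_tot = np.sum((f0_hz - np.mean(f0_hz)) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    # Variance of F0 points: dispersion around the mean F0.
    f0_variance = np.var(f0_hz)
    return slope, r_squared, f0_variance

# Illustrative declarative contour: gradual decline from 220 to 180 Hz.
declarative = np.linspace(220.0, 180.0, 150) + np.random.randn(150)
print(f0_contour_measures(declarative))
```

On this scheme, the "flatter F0 decline" reported for RHD declaratives would show up as a slope closer to zero, and the reduced linearity as a lower R^2.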

6.
Prosody, or speech melody, subserves linguistic (e.g., question intonation) and emotional functions in speech communication. Findings from lesion studies and imaging experiments suggest that, depending on function or acoustic stimulus structure, prosodic speech components are differentially processed in the right and left hemispheres. This direct current (DC) potential study investigated the linguistic processing of digitally manipulated pitch contours of sentences that carried an emotional or neutral intonation. Discrimination of linguistic prosody was better for neutral stimuli than for happily or fearfully spoken sentences. Brain activation was increased during the processing of happy sentences as compared to neutral utterances. Neither neutral nor emotional stimuli evoked lateralized processing in the left or right hemisphere, indicating bilateral mechanisms of linguistic processing for pitch direction. Acoustic stimulus analysis suggested that prosodic components related to emotional intonation, such as pitch variability, interfered with linguistic processing of pitch course direction.

7.
Text cues facilitate the perception of spoken sentences to which they are semantically related (Zekveld, Rudner, et al., 2011). In this study, semantically related and unrelated cues preceding sentences evoked more activation in middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) than nonword cues, regardless of acoustic quality (speech in noise or speech in quiet). Larger verbal working memory (WM) capacity (reading span) was associated with greater intelligibility benefit obtained from related cues, with less speech-related activation in the left superior temporal gyrus and left anterior IFG, and with more activation in right medial frontal cortex for related versus unrelated cues. Better ability to comprehend masked text was associated with greater ability to disregard unrelated cues, and with more activation in left angular gyrus (AG). We conclude that individual differences in cognitive abilities are related to activation in a speech-sensitive network including left MTG, IFG and AG during cued speech perception.

8.
Sentence comprehension is a complex task that involves both language-specific processing components and general cognitive resources. Comprehension can be made more difficult by increasing the syntactic complexity or the presentation rate of a sentence, but it is unclear whether the same neural mechanism underlies both of these effects. In the current study, we used event-related functional magnetic resonance imaging (fMRI) to monitor neural activity while participants heard sentences containing a subject-relative or object-relative center-embedded clause presented at three different speech rates. Syntactically complex object-relative sentences activated left inferior frontal cortex across presentation rates, whereas sentences presented at a rapid rate recruited frontal brain regions such as anterior cingulate and premotor cortex, regardless of syntactic complexity. These results suggest that dissociable components of a large-scale neural network support the processing of syntactic complexity and speech presented at a rapid rate during auditory sentence processing.

9.
The possibility of hemisphere interaction in the processing of spoken language was studied in two dichotic listening experiments. The stimulus material consisted of six CV syllable triplets each spoken with each one of six intonation contours. In Experiment I, 15 aphasic patients, 8 patients with unilateral right hemisphere lesions, and 10 normal controls were asked to identify the four components of a dichotic item from a multiple-choice (MC) set comprising all possible CV triplets and intonation contours. In Experiment II, 30 normal subjects were required to identify either the right or left ear stimulus alone from an MC set comprising the right and left ear stimulus together with the two wrong combinations of right ear CV triplet with left ear intonation and vice versa. It is concluded from the results that the left hemisphere is capable of processing both phonetic and intonational information and that there is neither the necessity nor the tendency for right hemisphere participation in the perception of spoken language.
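As a rough illustration of how a dichotic item of this kind can be assembled, here is a minimal sketch that pairs two onset-aligned stimuli, one per stereo channel (the pure tones stand in for the study's recorded CV-triplet stimuli; all names and values are illustrative):

```python
import numpy as np
from scipy.io import wavfile

sr = 16000
t = np.arange(int(0.8 * sr)) / sr
# Stand-ins for two recorded stimuli; in the experiment these would be
# natural CV-triplet recordings, each with one of six intonation contours.
left = 0.5 * np.sin(2 * np.pi * 200 * t)    # e.g., triplet A, contour 1
right = 0.5 * np.sin(2 * np.pi * 300 * t)   # e.g., triplet B, contour 2
# A dichotic item: a different stimulus in each ear, presented simultaneously.
stereo = np.column_stack([left, right]).astype(np.float32)
wavfile.write("dichotic_item.wav", sr, stereo)
```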

10.
This Nigerian study replicated the work of Bonvillian et al. (1979) on the effects of rate, intonation, and sentence length on children's sentence imitation. Nursery school children (N = 12; M age = 5 years, 0.5 months) were asked to imitate sentences that varied in rate of presentation, intonation, and length. Results revealed better imitation of shorter sentences than longer ones, of sentences read at a rate nearer the children's normal speech rate than those read at a faster or slower rate, and of sentences read with normal intonation than those read with flat intonation. These findings replicated those of Bonvillian et al., while indicating an even stronger effect of intonation in the Nigerian sample, because its effect was not limited to long sentences but affected all sentences. Adults' utterances to children are often slow, with exaggerated intonation. The present findings suggest that such modifications in adults' speech facilitate children's language comprehension.

11.
Musically tone-deaf individuals have psychophysical deficits in detecting pitch changes, yet their discrimination of intonation contours in speech appears to be normal. One hypothesis for this dissociation is that intonation contours use coarse pitch contrasts which exceed the pitch-change detection thresholds of tone-deaf individuals (Peretz & Hyde, 2003). We test this idea by presenting intonation contours for discrimination, both in the context of the original sentences in which they occur and in a "pure" form dissociated from any phonetic context. The pure form consists of gliding-pitch analogs of the original intonation contours which exactly follow their pattern of pitch and timing. If the spared intonation perception of tone-deaf individuals is due to the coarse pitch contrasts of intonation, then such individuals should discriminate the original sentences and the gliding-pitch analogs equally well. In contrast, we find that discrimination of the gliding-pitch analogs is severely degraded. Thus it appears that the dissociation between spoken and musical pitch perception in tone-deaf individuals is due to a deficit at a higher level than simple pitch-change detection.

12.
Accurate recognition of emotional prosody in speech is essential for social interaction. Using functional near-infrared spectroscopy (fNIRS), this study explored cortical activity during the processing of angry, fearful, and happy prosody under explicit and implicit emotion-processing conditions. The results showed that the regions specifically engaged by angry, fearful, and happy prosody were, respectively, the left frontal pole/orbitofrontal cortex, the right supramarginal gyrus, and the left inferior frontal gyrus, with the right supramarginal gyrus modulated by both emotion and task. In addition, the right middle temporal gyrus, inferior temporal gyrus, and temporal pole were activated significantly more strongly in the explicit emotion task than in the implicit task. These findings partly support the hierarchical model of emotional prosody, while challenging its third level, namely the claim that fine-grained frontal processing of emotional information in speech requires an explicit emotion-processing task.

13.
Brain and Cognition, 2006, 60(3): 310–313
Musically tone-deaf individuals have psychophysical deficits in detecting pitch changes, yet their discrimination of intonation contours in speech appears to be normal. One hypothesis for this dissociation is that intonation contours use coarse pitch contrasts which exceed the pitch-change detection thresholds of tone-deaf individuals (Peretz & Hyde, 2003). We test this idea by presenting intonation contours for discrimination, both in the context of the original sentences in which they occur and in a "pure" form dissociated from any phonetic context. The pure form consists of gliding-pitch analogs of the original intonation contours which exactly follow their pattern of pitch and timing. If the spared intonation perception of tone-deaf individuals is due to the coarse pitch contrasts of intonation, then such individuals should discriminate the original sentences and the gliding-pitch analogs equally well. In contrast, we find that discrimination of the gliding-pitch analogs is severely degraded. Thus it appears that the dissociation between spoken and musical pitch perception in tone-deaf individuals is due to a deficit at a higher level than simple pitch-change detection.
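To illustrate what a gliding-pitch analog might look like in practice, here is a minimal sketch that synthesizes a tone whose pitch exactly follows a given F0 trajectory (the contour values, duration, and file name are invented for illustration; the authors' actual stimulus-construction procedure is not specified in this abstract):

```python
import numpy as np
from scipy.io import wavfile

def gliding_pitch_analog(f0_hz, dur_s, sr=16000):
    """Synthesize a tone that glides along a given F0 contour.

    f0_hz: F0 values (Hz) sampled at regular intervals over the contour.
    dur_s: duration of the original utterance in seconds.
    """
    n = int(dur_s * sr)
    # Resample the contour to one F0 value per output sample.
    f0_per_sample = np.interp(np.linspace(0, 1, n),
                              np.linspace(0, 1, len(f0_hz)), f0_hz)
    # Integrate instantaneous frequency to get the sinusoid's phase,
    # so the tone follows the contour's pattern of pitch and timing.
    phase = 2 * np.pi * np.cumsum(f0_per_sample) / sr
    tone = 0.5 * np.sin(phase)
    return (tone * 32767).astype(np.int16)

# Hypothetical question-like contour: a final rise from 180 to 260 Hz.
contour = np.concatenate([np.full(40, 180.0), np.linspace(180.0, 260.0, 20)])
wavfile.write("analog.wav", 16000, gliding_pitch_analog(contour, dur_s=1.2))
```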

14.
Military Psychology, 2013, 25(2): 73–89
The comprehension of narrowband digital speech with bit errors was tested using a sentence verification task. The difficulty of the verification task was varied by using predicates that were either strongly or weakly related to the subjects (e.g., A toad has warts./A toad has eyes.). The test conditions included unprocessed speech and speech processed using a 2,400 bits/sec linear predictive coding (LPC) voice-processing algorithm with random bit error rates of 0%, 2%, and 5%. In general, response accuracy decreased and reaction time (RT) increased with LPC processing and with increasing bit error rates. Weakly related true sentences and strongly related false sentences were more difficult than strongly related true sentences and weakly related false sentences, respectively. Interactions between sentence type and speech processing conditions are discussed. The longer time taken to react to degraded speech has implications for performance in military combat situations where split-second decisions are required. The higher error rates with degraded sentences that contain little contextual information are particularly relevant to policy conversations that use a varied vocabulary.
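For readers unfamiliar with how random bit errors are simulated in a coded speech channel, here is a minimal sketch that flips each bit of a bitstream independently with a given probability (the function and variable names are illustrative; the study's actual channel simulation is not described in this abstract):

```python
import numpy as np

def inject_bit_errors(bitstream, error_rate, rng=None):
    """Flip each bit independently with probability error_rate.

    bitstream: 1-D array of 0/1 values, e.g., 2,400 bits per second of
               coded speech (the LPC-10 standard uses 54-bit frames).
    error_rate: 0.0, 0.02, or 0.05 for the 0%, 2%, and 5% conditions.
    """
    rng = rng or np.random.default_rng()
    flips = rng.random(bitstream.shape) < error_rate
    return np.bitwise_xor(bitstream, flips.astype(bitstream.dtype))

# One second of a hypothetical 2,400 bit/s stream at a 2% error rate.
stream = np.random.default_rng(0).integers(0, 2, size=2400, dtype=np.uint8)
corrupted = inject_bit_errors(stream, error_rate=0.02)
print("bits flipped:", int(np.sum(stream != corrupted)))
```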

15.
Regional cerebral blood flow (rCBF) was measured by the xenon-133 inhalation method in 10 cerebrally healthy subjects at rest and during linguistic activation tests. These consisted of a comprehension test (binaural listening to a narrative text) and a speech test (making sentences from a list of words presented orally at 30-s intervals). The comprehension task induced a moderate increase in the mean right CBF and in both inferior parietal areas, whereas the speech test resulted in a diffuse increase in the mean CBF of both hemispheres, predominating regionally in both inferior parietal, left opercular, and right upper motor and premotor areas. It is proposed that the activation pattern induced by linguistic stimulation depends not only on specific factors, such as the syntactic and semantic aspects of language, but also on the content of the material presented and the attention required by the test situation.

16.
In this paper we report the results of an experiment in which subjects read syntactically unambiguous and ambiguous sentences which were disambiguated after several words to the less likely possibility. Understanding such sentences involves building an initial structure, inhibiting the non-preferred structure, detecting that later input is incompatible with the initial structure, and reactivating the alternative structure. The ambiguous sentences activated four areas more than the unambiguous sentences. These areas are the left inferior frontal gyrus (IFG), the right basal ganglia (BG), the right posterior dorsal cerebellum (CB) and the left median superior frontal gyrus (SFG). The left IFG is normally activated when syntactic processing complexity is increased and probably supports that function in the current study as well. We discuss four hypotheses concerning how these areas may support comprehension of syntactically ambiguous sentences. (1) The left IFG, right CB and BG could support articulatory rehearsal used to support the processing of ambiguous sentences. This seems unlikely since the activation pattern associated with articulatory rehearsal in other studies is not similar to that seen here. (2) The CB acts as an error detector in motor processing. Error detection is important for recognizing that the wrong sentence structure has been chosen initially. (3) The BG acts to select and sequence movements in the motor domain, and in cognitive domains it may serve to inhibit competing and completed plans, which is not unlike inhibiting the initially non-preferred structure or "unchoosing" the initial choice when incompatible syntactic input is received. (4) The left median SFG is relevant for the evaluation of plausibility. Evaluating the plausibility of the two possibilities provides an important basis for choosing between them. The notion of the use of domain-general cognitive processes to support a linguistic process is in line with recent suggestions that a given area may subserve a specific cognitive task because it carries out an appropriate sort of computation rather than because it supports a specific cognitive domain.

17.
Research on Prosodic Features
This paper reviews a series of studies on the prosodic features of Mandarin Chinese from the perspectives of perception, cognition, and corpus analysis. (1) Perception of prosodic features: using methods from experimental psychology and analyses of perceptually annotated corpora, we studied Mandarin intonation, pitch declination and downstep, and the perceptually distinguishable prosodic levels in sentences and discourse together with their associated acoustic cues. The results support the two-line model of Mandarin intonation and the existence of sentence-level pitch declination, and show that the perceptually distinguishable prosodic boundaries in discourse are the clause, the sentence, and the paragraph, along with their perceptual acoustic correlates. (2) The relationship between prosodic features and other linguistic structures: based on annotated corpora, conventional statistical methods were used to study the distribution of default sentence stress and the relationship between discourse information structure and stress, and decision-tree methods were used to derive rules that locate prosodic phrase boundaries and focus from textual information (see the sketch below). (3) The role of prosodic features in discourse comprehension: experimental psychology methods and EEG measures were used to study how prosody affects discourse information integration and reference resolution, revealing the underlying cognitive and neural mechanisms. The practical and theoretical implications of these findings for speech engineering, phonetic theory, and psycholinguistic research are discussed.
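As a rough illustration of the decision-tree approach mentioned in point (2), here is a minimal sketch of training a classifier to predict whether a word boundary is a prosodic phrase boundary (the features and toy data are invented; the original work used text-derived features from an annotated Mandarin corpus):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative text-derived features at each word boundary:
# [syllables since last break, part-of-speech class id, punctuation follows]
X = np.array([[1, 0, 0], [4, 2, 0], [7, 1, 1], [2, 0, 0],
              [6, 2, 1], [3, 1, 0], [8, 0, 1], [5, 2, 0]])
# Label: 1 if annotators marked a prosodic phrase boundary here.
y = np.array([0, 0, 1, 0, 1, 0, 1, 0])

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# Predict for a new boundary: 6 syllables since the last break,
# POS class 1, followed by punctuation.
print(clf.predict([[6, 1, 1]]))
```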

18.
Functional magnetic resonance imaging was used to investigate the neural correlates of passive listening, habitual speech and two modified speech patterns (simulated stuttering and prolonged speech) in stuttering and nonstuttering adults. Within-group comparisons revealed increased right hemisphere biased activation of speech-related regions during the simulated stuttered and prolonged speech tasks, relative to the habitual speech task, in the stuttering group. No significant activation differences were observed within the nonstuttering participants during these speech conditions. Between-group comparisons revealed less left superior temporal gyrus activation in stutterers during habitual speech and increased right inferior frontal gyrus activation during simulated stuttering relative to nonstutterers. Stutterers were also found to have increased activation in the left middle and superior temporal gyri and right insula, primary motor cortex and supplementary motor cortex during the passive listening condition relative to nonstutterers. The results provide further evidence for the presence of functional deficiencies underlying auditory processing, motor planning and execution in people who stutter, with these differences being affected by speech manner.

19.
This study investigated the neural mechanisms and hemispheric lateralization of the bottom-up acoustic-phonetic processing of pitch information in speech. Passive listening and active judgment tasks activated the temporal and frontal lobes, respectively, and the activation showed a clear right-hemisphere advantage in the temporal pole, the superior temporal gyrus, and the orbital part of the inferior frontal gyrus. The results indicate that bottom-up acoustic-phonetic processing of pitch information in speech is primarily a right-hemisphere function, and that pitch information in speech and non-speech signals may share similar processing mechanisms, supporting the theory proposed by Gandour et al.

20.
Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate older listeners’ ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether verbal working memory predicts older adults’ ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) affected the speed of recognition. Contextual facilitation was modulated by older listeners’ verbal working memory (measured with a backward digit span task) and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners’ immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.
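To make the moderation analysis concrete, here is a minimal sketch of testing whether the effect of contextual probability on response speed varies with working-memory span, using a simple interaction regression on simulated data (the variable names, model, and data are illustrative, not the study's actual dataset or analysis):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    # Gradient contextual probability of the target word (from ratings).
    "context_prob": rng.uniform(0, 1, n),
    # Backward digit span as a verbal working-memory measure.
    "wm_span": rng.integers(3, 9, n),
})
# Simulate faster responses (smaller RT) with higher contextual
# probability, more so for listeners with larger spans.
df["rt_ms"] = (900 - 80 * df["context_prob"]
               - 15 * df["context_prob"] * df["wm_span"]
               + rng.normal(0, 40, n))

# The context_prob:wm_span term tests the moderation described above.
model = smf.ols("rt_ms ~ context_prob * wm_span", data=df).fit()
print(model.summary().tables[1])
```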
