Similar Documents
1.
Models of both speech perception and speech production typically postulate a processing level that involves some form of phonological processing. There is disagreement, however, on the question of whether there are separate phonological systems for speech input versus speech output. We review a range of neuroscientific data indicating that input and output phonological systems partially overlap. An important anatomical site of overlap appears to be the left posterior superior temporal gyrus. We then present the results of a new event-related functional magnetic resonance imaging (fMRI) experiment in which participants were asked to listen to and then (covertly) produce speech. In each participant, we found two regions in the left posterior superior temporal gyrus that responded to both the perception and production components of the task, suggesting that there is overlap in the neural systems that participate in phonological aspects of speech perception and speech production. The implications of our findings for neural models of verbal working memory are also discussed.

2.
This paper presents evidence for a new model of the functional anatomy of speech/language (Hickok & Poeppel, 2000) which has, at its core, three central claims: (1) neural systems supporting the perception of sublexical aspects of speech are essentially bilaterally organized in posterior superior temporal lobe regions; (2) neural systems supporting the production of phonemic aspects of speech comprise a network of predominantly left-hemisphere systems which includes not only frontal regions but also superior temporal lobe regions; and (3) the neural systems supporting speech perception and production partially overlap in the left superior temporal lobe. This model, which postulates nonidentical but partially overlapping systems involved in the perception and production of speech, explains why psycho- and neurolinguistic evidence is mixed on the question of whether input and output phonological systems involve a common network or distinct networks.

3.
Sentence comprehension is a complex task that involves both language-specific processing components and general cognitive resources. Comprehension can be made more difficult by increasing the syntactic complexity or the presentation rate of a sentence, but it is unclear whether the same neural mechanism underlies both of these effects. In the current study, we used event-related functional magnetic resonance imaging (fMRI) to monitor neural activity while participants heard sentences containing a subject-relative or object-relative center-embedded clause presented at three different speech rates. Syntactically complex object-relative sentences activated left inferior frontal cortex across presentation rates, whereas sentences presented at a rapid rate recruited frontal brain regions such as anterior cingulate and premotor cortex, regardless of syntactic complexity. These results suggest that dissociable components of a large-scale neural network support the processing of syntactic complexity and speech presented at a rapid rate during auditory sentence processing.
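The rate manipulation described in this abstract is easy to prototype. The sketch below is not the authors' stimulus pipeline; the sampling rate, compression factors, and use of naive FFT resampling are assumptions chosen purely for illustration of how rate conditions can be generated:

```python
import numpy as np
from scipy.signal import resample

fs = 16000                       # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 220 * t)  # stand-in for a recorded sentence

# Three hypothetical rate conditions: normal, moderately fast, rapid.
for factor in (1.0, 0.75, 0.5):
    y = resample(x, int(len(x) * factor))
    print(f"compression factor {factor:.2f}: {len(y) / fs:.2f} s at playback")

# Caveat: naive resampling shifts pitch along with rate; rate studies
# typically use pitch-preserving time compression (e.g., a phase vocoder).
```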

4.
Speech imagery not only plays an important role in the brain's pre-processing mechanisms, but is also a current focus of research in the brain-computer interface (BCI) field. Compared with normal speech production, speech imagery shows considerable similarity in its theoretical models, activated brain regions, and neural conduction pathways. However, the neural mechanisms of speech imagery in people with speech disorders, and of imagining meaningful words and sentences, differ from those of normal speech production. Given the complexity of the human speech system, research on the neural mechanisms of speech imagery still faces a series of challenges. Future research could further explore tools for assessing the quality of speech imagery and neural decoding paradigms, brain control circuits, activation pathways, the mechanisms of speech imagery in people with speech disorders, and the neural signals involved in imagining words and sentences, thereby providing a basis for effectively improving BCI recognition rates and facilitating communication for people with speech disorders.

5.
Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics) nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients, their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension during acute left versus right hemisphere deactivation in Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions: baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.
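The four-alternative picture-matching design implies a simple trial-level scoring scheme. A minimal sketch of how errors might be broken down by foil type, using entirely hypothetical responses rather than the study's data:

```python
from collections import Counter

# Hypothetical error trials from one injection condition; each entry
# records which foil the patient pointed to instead of the target.
errors = ["semantic", "semantic", "phonemic", "semantic", "semantic",
          "unrelated", "semantic", "phonemic"]

counts = Counter(errors)
total = len(errors)
for foil in ("semantic", "phonemic", "unrelated"):
    print(f"{foil}: {counts[foil]}/{total} ({100 * counts[foil] / total:.0f}%)")
```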

6.
Candidate brain regions constituting a neural network for preattentive phonetic perception were identified with fMRI and multivariate multiple regression of the imaging data. Stimuli contrasted along speech/nonspeech, acoustic-complexity, and phonetic-complexity dimensions (three levels each), and a natural/synthetic dimension. Activity in seven distributed brain regions correlated with the speech and speech-complexity dimensions, including five left-sided foci [posterior superior temporal gyrus (STG), angular gyrus, ventral occipitotemporal cortex, inferior/posterior supramarginal gyrus, and middle frontal gyrus (MFG)] and two right-sided foci (posterior STG and anterior insula). Only the left MFG discriminated natural from synthetic speech. The data also supported a parallel rather than serial model of auditory speech and nonspeech perception.
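As a rough illustration of the analysis named above, the sketch below runs a multivariate multiple regression with numpy. The condition counts, regressor coding, and random data are assumptions for illustration, not the study's design matrix:

```python
import numpy as np

# Hypothetical data: responses for 18 stimulus conditions in 500 voxels.
# Columns of X code the stimulus dimensions described in the abstract.
rng = np.random.default_rng(0)
n_conditions, n_voxels = 18, 500

X = np.column_stack([
    np.ones(n_conditions),                  # intercept
    rng.integers(0, 2, n_conditions),       # speech (1) vs. nonspeech (0)
    rng.integers(1, 4, n_conditions),       # acoustic complexity: 1-3
    rng.integers(1, 4, n_conditions),       # phonetic complexity: 1-3
    rng.integers(0, 2, n_conditions),       # natural (1) vs. synthetic (0)
]).astype(float)

Y = rng.standard_normal((n_conditions, n_voxels))  # voxel responses

# Multivariate multiple regression: one least-squares fit gives a
# coefficient for every (regressor, voxel) pair simultaneously.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)  # B has shape (5, n_voxels)

# Voxels whose activity scales with phonetic complexity
phonetic_betas = B[3]
print("strongest phonetic-complexity voxel:", np.argmax(np.abs(phonetic_betas)))
```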

7.
The functional neuroanatomy of speech perception has been difficult to characterize. Part of the difficulty, we suggest, stems from the fact that the neural systems supporting 'speech perception' vary as a function of the task. Specifically, the set of cognitive and neural systems involved in performing traditional laboratory speech perception tasks, such as syllable discrimination or identification, only partially overlap those involved in speech perception as it occurs during natural language comprehension. In this review, we argue that cortical fields in the posterior-superior temporal lobe, bilaterally, constitute the primary substrate for constructing sound-based representations of speech, and that these sound-based representations interface with different supramodal systems in a task-dependent manner. Tasks that require access to the mental lexicon (i.e. accessing meaning-based representations) rely on auditory-to-meaning interface systems in the cortex in the vicinity of the left temporal-parietal-occipital junction. Tasks that require explicit access to speech segments rely on auditory-motor interface systems in the left frontal and parietal lobes. This auditory-motor interface system also appears to be recruited in phonological working memory.

8.
Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found evidence for spatially distributed processing of speech and environmental sounds across a substantial extent of the temporal cortices. Most importantly, regions previously reported as selective for speech over environmental sounds also contained distributed information. The results indicate that temporal cortices supporting complex auditory processing, including regions previously described as speech-selective, are in fact highly heterogeneous.

9.
The anatomy of auditory word processing: individual variability
This study used functional magnetic resonance imaging (fMRI) to investigate the neural substrate underlying the processing of single words, comparing activation patterns across subjects and within individuals. In a word repetition task, subjects repeated single words aloud with instructions not to move their jaws. In a control condition involving reverse speech, subjects heard a digitally reversed speech token and said aloud the word "crime." The averaged fMRI results showed activation in the left posterior temporal and inferior frontal regions and in the supplementary motor area, similar to previous PET studies. However, the individual subject data revealed variability in the location of the temporal and frontal activation. Although these results support previous imaging studies, demonstrating an averaged localization of auditory word processing in the posterior superior temporal gyrus (STG), they are more consistent with traditional neuropsychological data, which suggest both a typical posterior STG localization and substantial individual variability. By using careful head restraint and movement analysis and correction methods, the present study further demonstrates the feasibility of using overt articulation in fMRI experiments.

10.
Emotional expression and how it is lateralized across the two sides of the face may influence how we detect audiovisual speech. To investigate how these components interact we conducted experiments comparing the perception of sentences expressed with happy, sad, and neutral emotions. In addition we isolated the facial asymmetries for affective and speech processing by independently testing the two sides of a talker's face. These asymmetrical differences were exaggerated using dynamic facial chimeras in which left- or right-face halves were paired with their mirror image during speech production. Results suggest that there are facial asymmetries in audiovisual speech such that the right side of the face and right-facial chimeras supported better speech perception than their left-face counterparts. Affective information was also found to be critical in that happy expressions tended to improve speech performance on both sides of the face relative to all other emotions, whereas sad emotions generally inhibited visual speech information, particularly from the left side of the face. The results suggest that approach information may facilitate visual and auditory speech detection.
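The chimera manipulation is straightforward to implement on video frames. A minimal array-based sketch; it assumes the image's left columns correspond to one side of the displayed face, and the exact left/right mapping to the talker's anatomy depends on camera mirroring:

```python
import numpy as np

def make_chimera(frame: np.ndarray, side: str = "right") -> np.ndarray:
    """Pair one half of a face image with its own mirror image.

    frame: (height, width[, channels]) array for one video frame.
    side:  which half of the displayed face to duplicate.
    """
    mid = frame.shape[1] // 2
    if side == "right":
        half = frame[:, mid:]
        return np.concatenate([half[:, ::-1], half], axis=1)
    half = frame[:, :mid]
    return np.concatenate([half, half[:, ::-1]], axis=1)

# Toy 4x4 "frame": left half zeros, right half ones.
frame = np.concatenate([np.zeros((4, 2)), np.ones((4, 2))], axis=1)
print(make_chimera(frame, "right"))  # all ones: right half mirrored
print(make_chimera(frame, "left"))   # all zeros: left half mirrored
```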

12.
13.
Functional magnetic resonance imaging was used to investigate the neural correlates of passive listening, habitual speech, and two modified speech patterns (simulated stuttering and prolonged speech) in stuttering and nonstuttering adults. Within-group comparisons revealed increased right-hemisphere-biased activation of speech-related regions during the simulated stuttering and prolonged speech tasks, relative to the habitual speech task, in the stuttering group. No significant activation differences were observed within the nonstuttering participants across these speech conditions. Between-group comparisons revealed less left superior temporal gyrus activation in stutterers during habitual speech and increased right inferior frontal gyrus activation during simulated stuttering relative to nonstutterers. Stutterers also showed increased activation in the left middle and superior temporal gyri and in the right insula, primary motor cortex, and supplementary motor cortex during the passive listening condition relative to nonstutterers. The results provide further evidence of functional deficiencies in auditory processing, motor planning, and execution in people who stutter, with these differences modulated by speech manner.

14.
The analysis of pure word deafness (PWD) suggests that speech perception, construed as the integration of acoustic information to yield representations that enter into the linguistic computational system, (i) is separable in a modular sense from other aspects of auditory cognition and (ii) is mediated by the posterior superior temporal cortex in both hemispheres. PWD data are consistent with neuropsychological and neuroimaging evidence in a manner that suggests that the speech code is analyzed bilaterally. The typical lateralization associated with language processing is a property of the computational system that acts beyond the analysis of the input signal. The hypothesis of bilateral mediation of the speech code does not imply that both sides execute the same computation. It is proposed that the speech signal is asymmetrically analyzed in the time domain, with left-hemisphere mechanisms preferentially extracting information over shorter (25–50 ms) temporal integration windows and right-hemisphere mechanisms over longer (150–250 ms) windows.
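The asymmetric temporal-window proposal has a direct signal-processing analogue: short analysis windows buy temporal precision at the cost of spectral precision, and vice versa. A minimal sketch of that trade-off; the 30 ms and 200 ms values are picked from within the ranges quoted above, and this illustrates windowed analysis, not a model of cortical computation:

```python
import numpy as np
from scipy.signal import stft

fs = 16000  # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
# Toy signal: a slow 4 Hz "syllabic" modulation of a 1 kHz carrier.
x = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)

for label, win_ms in [("short, left-like", 30), ("long, right-like", 200)]:
    nperseg = int(fs * win_ms / 1000)
    f, tt, Z = stft(x, fs=fs, nperseg=nperseg)
    print(f"{label}: window = {win_ms} ms -> "
          f"time step ~{(tt[1] - tt[0]) * 1000:.0f} ms, "
          f"frequency resolution ~{f[1] - f[0]:.0f} Hz")
```

Running this shows the 30 ms window resolving time in ~15 ms steps but frequency only to ~33 Hz, while the 200 ms window resolves frequency to ~5 Hz at a coarse ~100 ms time step.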

15.
The effects of viewing the face of the talker (visual speech) on the processing of clearly presented, intact auditory stimuli were investigated using two measures likely to be sensitive to the articulatory motor actions produced in speaking. The aim of these experiments was to highlight the need for accounts of audio-visual (AV) speech effects that explicitly consider the properties of articulated action. The first experiment employed a syllable-monitoring task in which participants were required to monitor for target syllables within foreign carrier phrases. An AV effect was found: seeing a talker's moving face (moving face condition) supported more accurate recognition (hits and correct rejections) of spoken syllables than auditory-only presentations with a still face (still face condition). The second experiment examined the processing of spoken phrases by investigating whether an AV effect would be found for estimates of phrase duration. Two effects of seeing the moving face of the talker were found. First, the moving face condition yielded significantly longer duration estimates than the still face auditory-only condition. Second, estimates of auditory duration made in the moving face condition reliably correlated with the actual durations, whereas those made in the still face auditory condition did not. The third experiment was carried out to determine whether the stronger correlation between estimated and actual duration in the moving face condition might have been due to generic properties of AV presentation. Experiment 3 employed the procedures of the second experiment but used stimuli that were not perceived as speech although they possessed the same timing cues as the speech stimuli of Experiment 2. Simply presenting both auditory and visual timing information did not result in more reliable duration estimates. Further, when released from the speech context (used in Experiment 2), duration estimates for the auditory-only stimuli were significantly correlated with actual durations. In all, these results demonstrate that visual speech can assist in the analysis of clearly presented auditory stimuli in tasks concerned with information provided by viewing the production of an utterance. We suggest that these findings are consistent with a processing link between perception and action such that viewing a talker speaking activates speech motor schemas in the perceiver.
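The key statistic in the duration experiments is the correlation between estimated and actual phrase durations. A minimal sketch with made-up numbers, not the study's data, showing how the two conditions' reliability would be compared:

```python
import numpy as np

# Hypothetical per-phrase data (seconds): actual durations and one
# participant's estimates in the two presentation conditions.
actual      = np.array([1.8, 2.3, 2.9, 3.4, 4.1])
moving_face = np.array([2.0, 2.6, 3.1, 3.7, 4.5])  # tracks the actual values
still_face  = np.array([3.0, 2.6, 3.1, 2.5, 2.9])  # noisy, roughly flat

for name, est in [("moving face", moving_face), ("still face", still_face)]:
    r = np.corrcoef(actual, est)[0, 1]
    print(f"{name}: mean estimate {est.mean():.2f} s, r = {r:.2f}")
```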

16.
Speech production is inextricably linked to speech perception, yet the two are usually investigated in isolation. In this study, we employed a verbal-repetition task to identify the neural substrates of speech processing with both ends, perception and production, simultaneously active, using functional MRI. Subjects verbally repeated auditory stimuli containing an ambiguous vowel sound that could be perceived as either a word or a pseudoword depending on the interpretation of the vowel. We found that verbal repetition commonly activated the audition-articulation interface bilaterally at the Sylvian fissures and superior temporal sulci. Contrasting word versus pseudoword trials revealed neural activity unique to word repetition in the left posterior middle temporal areas, and activity unique to pseudoword repetition in the left inferior frontal gyrus. These findings imply that the two tasks are carried out using different speech codes: an articulation-based code for pseudowords and an acoustic-phonetic code for words. The results also support the dual-stream model and imitative learning of vocabulary.

17.
The main theoretical debate in the field of speech perception is the opposition between auditory theories and motor theories, and it centers on whether speech perception requires the mediation of motor representations. Research on the brain mechanisms of speech perception helps clarify the issue. Studies of these mechanisms show that speech perception mainly activates posterior auditory cortical areas, including the dorsal superior temporal cortex (the transverse temporal gyrus and planum temporale) and its lateral regions (the superior temporal gyrus and superior temporal sulcus), whereas anterior motor cortices related to speech production show no consistent pattern of activation. Motor representations related to speech production influence speech perception mainly through top-down feedback mechanisms in certain special task situations, and may not be necessary for normal speech perception.

18.
A number of studies reported that developmental dyslexics are impaired in speech perception, especially for speech signals consisting of rapid auditory transitions. These studies mostly made use of a categorical-perception task with synthetic-speech samples. In this study, we show that deficits in the perception of synthetic speech do not generalise to the perception of more naturally sounding speech, even if the same experimental paradigm is used. This contrasts with the assumption that dyslexics are impaired in the perception of rapid auditory transitions.

19.
Context has been found to have a profound effect on the recognition of social stimuli and correlated brain activation. The present study was designed to determine whether knowledge about emotional authenticity influences the recognition of emotion expressed through speech intonation. In an fMRI experiment, participants classified emotionally expressive speech as sad, happy, angry, or fearful. On some trials, stimuli were cued as either authentic or play-acted in order to manipulate participants' top-down beliefs about authenticity, and these labels were presented both congruently and incongruently with the emotional authenticity of the stimulus. Contrasting authentic versus play-acted stimuli during uncued trials indicated that play-acted stimuli spontaneously up-regulate activity in the auditory cortex and in regions associated with emotional speech processing. In addition, a clear interaction effect of cue and stimulus authenticity showed up-regulation in the posterior superior temporal sulcus and the anterior cingulate cortex, indicating that cueing had an impact on the perception of authenticity. In particular, when a cue indicating an authentic stimulus was followed by a play-acted stimulus, additional activation occurred in the temporoparietal junction, probably pointing to an increased load on perspective taking in such trials. While actual authenticity has a significant impact on brain activation, individual beliefs about stimulus authenticity can additionally modulate the brain response to differences in emotionally expressive speech.
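The cue-by-stimulus-authenticity interaction reported above follows standard 2 x 2 contrast logic. A minimal sketch with hypothetical condition means (arbitrary units, not the study's parameter estimates):

```python
import numpy as np

# Hypothetical mean responses in one region:
# rows = cue (authentic, play-acted), columns = stimulus (authentic, play-acted).
# The elevated cell (authentic cue, play-acted stimulus) mirrors the
# incongruent-trial effect described in the abstract.
means = np.array([[1.0, 1.8],   # authentic cue
                  [1.2, 1.3]])  # play-acted cue

cue_effect = means[0].mean() - means[1].mean()
stim_effect = means[:, 0].mean() - means[:, 1].mean()
# Interaction: does the stimulus effect differ across cues?
interaction = (means[0, 0] - means[0, 1]) - (means[1, 0] - means[1, 1])
print(f"cue main effect: {cue_effect:+.2f}")
print(f"stimulus main effect: {stim_effect:+.2f}")
print(f"interaction: {interaction:+.2f}")
```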

20.
The relation between oral movement control and speech
A large series of neurological patients, selected solely on the basis that they had damage restricted to one hemisphere of the brain, was given a variety of tests of basic speech and praxic function. Within the left-damaged group, patients were further identified as aphasic or nonaphasic based on preexisting standard tests of aphasia. Subgroups of aphasics were studied on the basis of lesion location rather than aphasia type. The focus of the study was the relation between the production of speech and nonspeech oral movements, particularly across anterior and posterior lesions. Reproduction of single nonverbal oral movements and of single isolated speech sounds were found to be very highly correlated, and both depended selectively on the left anterior region of the brain. This same region was critically important for rapid repeated articulation of a syllable, suggesting that it mediates control at some "unit" level of movement, in a phenomenological sense, for both speech and nonspeech movements. Other "speech" regions in the left hemisphere appeared to be dispensable for the production of single oral movements, whether verbal or nonverbal. However, for most aphasic patients, an area in the left posterior region was inferred to be essential for the production of multiple oral movements, whether nonverbal or verbal, suggesting a critical role in the accurate selection of movements. Within the posterior region, there was further differentiation for multisyllabic speech into a parietal system, which appeared to mediate primarily praxic function, and a temporal system, which appeared to mediate verbal-echolalic function. Aphasias from anterior and posterior lesions resembled "Broca's" and "Wernicke's" aphasia only insofar as they differed in fluency, with anterior aphasics clearly less fluent. Tests of speech comprehension did not differentiate the groups. It is suggested that classifying aphasic patients by lesion location rather than aphasic typology might yield a view of functional subsystems different from those commonly accepted.
