Similar Articles
20 similar articles found.
1.
This paper presents evidence for a new model of the functional anatomy of speech/language (Hickok & Poeppel, 2000) which has, at its core, three central claims: (1) Neural systems supporting the perception of sublexical aspects of speech are essentially bilaterally organized in posterior superior temporal lobe regions; (2) neural systems supporting the production of phonemic aspects of speech comprise a network of predominantly left-hemisphere systems which includes not only frontal regions, but also superior temporal lobe regions; and (3) the neural systems supporting speech perception and production partially overlap in the left superior temporal lobe. This model, which postulates nonidentical but partially overlapping systems involved in the perception and production of speech, explains why psycho- and neurolinguistic evidence is mixed regarding the question of whether input and output phonological systems involve a common network or distinct networks.

2.
Speech production is inextricably linked to speech perception, yet the two are usually investigated in isolation. In this study, we employed a verbal-repetition task to identify, using functional MRI, the neural substrates of speech processing while both perception and production were engaged simultaneously. Subjects verbally repeated auditory stimuli containing an ambiguous vowel sound that could be perceived as either a word or a pseudoword depending on the interpretation of the vowel. We found that verbal repetition commonly activated the audition-articulation interface bilaterally at the Sylvian fissures and superior temporal sulci. Contrasting word-versus-pseudoword trials revealed neural activity unique to word repetition in the left posterior middle temporal areas and activity unique to pseudoword repetition in the left inferior frontal gyrus. These findings imply that the two tasks are carried out using different speech codes: an articulation-based code for pseudowords and an acoustic-phonetic code for words. The findings also support the dual-stream model and imitative learning of vocabulary.

3.
Mitterer H, Ernestus M. Cognition, 2008, 109(1): 168-173.
This study reports a shadowing experiment, in which one has to repeat a speech stimulus as fast as possible. We tested claims about a direct link between perception and production based on speech gestures, and obtained two types of counterevidence. First, shadowing is not slowed down by a gestural mismatch between stimulus and response. Second, phonetic detail is more likely to be imitated in a shadowing task if it is phonologically relevant. This is consistent with the idea that speech perception and speech production are only loosely coupled, on an abstract phonological level.

4.
Speech imagery not only plays an important role in the brain's preparatory processing mechanisms, but is also a current focus of research in the brain-computer interface (BCI) field. Compared with normal speech production, speech imagery shares many similarities in its theoretical models, activated brain regions, and neural transmission pathways. However, the neural mechanisms of speech imagery in people with speech disorders, and of imagining meaningful words and sentences, differ from those of normal speech production. Given the complexity of the human speech system, research on the neural mechanisms of speech imagery still faces a series of challenges. Future research could further explore quality-assessment tools and neural decoding paradigms for speech imagery, brain control circuits, activation pathways, speech-imagery mechanisms in people with speech disorders, and the neural signals underlying imagined words and sentences, providing a basis for effectively improving BCI recognition rates and facilitating communication for people with speech disorders.

5.
Models of both speech perception and speech production typically postulate a processing level that involves some form of phonological processing. There is disagreement, however, on the question of whether there are separate phonological systems for speech input versus speech output. We review a range of neuroscientific data that indicate that input and output phonological systems partially overlap. An important anatomical site of overlap appears to be the left posterior superior temporal gyrus. We then present the results of a new event-related functional magnetic resonance imaging (fMRI) experiment in which participants were asked to listen to and then (covertly) produce speech. In each participant, we found two regions in the left posterior superior temporal gyrus that responded both to the perception and production components of the task, suggesting that there is overlap in the neural systems that participate in phonological aspects of speech perception and speech production. The implications for neural models of verbal working memory are also discussed in connection with our findings.

6.
Emotional expression and how it is lateralized across the two sides of the face may influence how we detect audiovisual speech. To investigate how these components interact, we conducted experiments comparing the perception of sentences expressed with happy, sad, and neutral emotions. In addition, we isolated the facial asymmetries for affective and speech processing by independently testing the two sides of a talker's face. These asymmetrical differences were exaggerated using dynamic facial chimeras in which left- or right-face halves were paired with their mirror image during speech production. Results suggest that there are facial asymmetries in audiovisual speech such that the right side of the face and right-facial chimeras supported better speech perception than their left-face counterparts. Affective information was also found to be critical, in that happy expressions tended to improve speech performance on both sides of the face relative to all other emotions, whereas sad emotions generally inhibited visual speech information, particularly from the left side of the face. The results suggest that approach information may facilitate visual and auditory speech detection.

7.
Sentence comprehension is a complex task that involves both language-specific processing components and general cognitive resources. Comprehension can be made more difficult by increasing the syntactic complexity or the presentation rate of a sentence, but it is unclear whether the same neural mechanism underlies both of these effects. In the current study, we used event-related functional magnetic resonance imaging (fMRI) to monitor neural activity while participants heard sentences containing a subject-relative or object-relative center-embedded clause presented at three different speech rates. Syntactically complex object-relative sentences activated left inferior frontal cortex across presentation rates, whereas sentences presented at a rapid rate recruited frontal brain regions such as anterior cingulate and premotor cortex, regardless of syntactic complexity. These results suggest that dissociable components of a large-scale neural network support the processing of syntactic complexity and speech presented at a rapid rate during auditory sentence processing.

8.
Speech sound disorders (SSD) are the largest group of communication disorders observed in children. One explanation for these disorders is that children with SSD fail to form stable phonological representations when acquiring the speech sound system of their language, due to poor phonological memory (PM). The goal of this study was to examine PM in individuals with histories of SSD using functional MR imaging (fMRI). Participants were six right-handed adolescents with a history of early childhood SSD and seven right-handed matched controls with no history of speech and language disorders. We performed an fMRI study using an overt non-word repetition (NWR) task. Right-lateralized hypoactivation in the inferior frontal gyrus and middle temporal gyrus was observed. The former suggests a deficit in the phonological processing loop supporting PM, while the latter may indicate a deficit in speech perception; both are cognitive processes involved in speech production. Bilateral hyperactivation observed in the pre- and supplementary motor cortex, inferior parietal cortex, supramarginal gyrus, and cerebellum raised the possibility of compensatory increases in cognitive effort or reliance on the other components of the articulatory rehearsal network and phonological store. These findings may be interpreted to support the hypothesis that individuals with SSD have a deficit in PM, and to suggest the involvement of compensatory mechanisms to counteract dysfunction of the normal network.

9.
In the past, the nature of the compositional units proposed for spoken language has largely diverged from the types of control units pursued in the domains of other skilled motor tasks. A classic source of evidence as to the units structuring speech has been the patterns observed in speech errors ("slips of the tongue"). The present study reports, for the first time, on kinematic data from tongue and lip movements during speech errors elicited in the laboratory using a repetition task. Our data are consistent with the hypothesis that speech production results from the assembly of dynamically defined action units, or gestures, in a linguistically structured environment. The experimental results support both the presence of gestural units and the dynamical properties of these units and their coordination. This study of speech articulation shows that it is possible to develop a principled account of spoken language within a more general theory of action.

10.
We examine the mechanisms that support interaction between lexical, phonological, and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated by lexical frequency. Previous research has demonstrated that during lexical access, the processing of high-frequency words is facilitated; in contrast, during phonetic encoding, the properties of low-frequency words are enhanced. These contrasting effects provide the opportunity to distinguish two theoretical perspectives on how interaction between processing levels can be increased. A theory in which cascading activation is used to increase interaction predicts that the facilitation of high-frequency words will enhance their influence on the phonetic properties of speech errors. Alternatively, if interaction is increased by integrating levels of representation, the phonetics of speech errors will reflect the retrieval of enhanced phonetic properties for low-frequency words. Using a novel statistical analysis method, we show that in experimentally induced speech errors, low-frequency targets and outcomes exhibit enhanced phonetic processing. We sketch an interactive model of lexical, phonological, and phonetic processing that accounts for the conflicting effects of lexical frequency on lexical access and phonetic processing.
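To make the cascading-activation prediction concrete, here is a minimal toy sketch in Python. It is not the authors' model: the three-level structure, update rule, and all parameter values are illustrative assumptions. Under pure cascading, a frequency boost at the lexical level flows downstream, so high-frequency words would show stronger phonetic activation, the opposite of the low-frequency enhancement the study reports.

def cascade(lexical_input, steps=10, w_lp=0.5, w_pp=0.5, decay=0.4):
    """Final phonetic activation after cascading lexical -> phonological -> phonetic."""
    lex = phono = phon = 0.0
    for _ in range(steps):
        lex += lexical_input - decay * lex      # lexical access
        phono += w_lp * lex - decay * phono     # activation cascades before selection
        phon += w_pp * phono - decay * phon     # phonetic encoding inherits any boost
    return phon

# Hypothetical frequency effect: facilitated access for high-frequency words.
print(cascade(lexical_input=1.2))   # high-frequency word: larger phonetic activation
print(cascade(lexical_input=1.0))   # low-frequency word: smaller phonetic activation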

11.
This work is a systematic, cross-linguistic examination of speech errors in English, Hindi, Japanese, Spanish and Turkish. It first describes a methodology for the generation of parallel corpora of error data, then uses these data to examine three general hypotheses about the relationship between language structure and the speech production system. All of the following hypotheses were supported by the data. Languages are equally complex: no overall differences were found in the numbers of errors made by speakers of the five languages in the study. Languages are processed in similar ways: English-based generalizations about language production were tested to see to what extent they would hold true across languages, and it was found that, to a large degree, languages follow similar patterns. However, the relative numbers of phonological anticipations and perseverations in other languages did not follow the English pattern. Languages differ in that speech errors tend to cluster around loci of complexity within each language: languages such as Turkish and Spanish, which have more inflectional morphology, exhibit more errors involving inflected forms, while languages such as Japanese, with rich systems of closed-class forms, tend to have more errors involving closed-class items.

12.
Ozdemir R, Roelofs A, Levelt WJ. Cognition, 2007, 105(2): 457-465.
Disagreement exists about how speakers monitor their internal speech. Production-based accounts assume that self-monitoring mechanisms exist within the production system, whereas comprehension-based accounts assume that monitoring is achieved through the speech comprehension system. Comprehension-based accounts predict perception-specific effects, like the perceptual uniqueness-point effect, in the monitoring of internal speech. We ran an extensive experiment testing this prediction using internal phoneme monitoring and picture naming tasks. Our results show an effect of the perceptual uniqueness point of a word in internal phoneme monitoring in the absence of such an effect in picture naming. These results support comprehension-based accounts of the monitoring of internal speech.

13.
There is no consensus regarding the fundamental phonetic units that underlie speech production. There is, however, general agreement that the frequency of occurrence of these units is a significant factor, and investigators often use the effects of manipulating frequency to support the importance of particular units. Studies of pseudoword production have been used to show the importance of sublexical units such as initial syllables, phonemes, and biphones. However, it is not clear that these units play the same role when the production of pseudowords is compared to the production of real words. In this study, participants overtly repeated real words and pseudowords matched for length, complexity, and initial-syllable frequency while undergoing functional magnetic resonance imaging. Compared to real words, production of pseudowords produced greater activation in much of the speech production network, including bilateral inferior frontal cortex, precentral gyri, and supplementary motor areas, as well as left superior temporal cortex and anterior insula. Only the right middle frontal gyrus showed greater activation for real words than for pseudowords. Compared to a no-speech control condition, production of pseudowords or real words activated all of the areas shown to comprise the speech production network. Our data, in conjunction with previous studies, suggest that the unit identified as the basic unit of speech production is influenced by the nature of the speech being studied, i.e., real words compared to other real words, pseudowords compared to other pseudowords, or real words compared to pseudowords.

14.
The neural mechanisms underlying the spontaneous, stimulus-independent emergence of intentions and decisions to act are poorly understood. Using a neurobiologically realistic model of frontal and temporal areas of the brain, we simulated the learning of perception–action circuits for speech and hand-related actions and subsequently observed their spontaneous behaviour. Noise-driven accumulation of reverberant activity in these circuits leads to their spontaneous ignition and partial-to-full activation, which we interpret, respectively, as model correlates of action intention emergence and action decision-and-execution. Importantly, activity emerged first in higher-association prefrontal and temporal cortices, subsequently spreading to secondary and finally primary sensorimotor model-areas, hence reproducing the dynamics of cortical correlates of voluntary action revealed by readiness-potential and verb-generation experiments. This model for the first time explains the cortical origins and topography of endogenous action decisions, and the natural emergence of functional specialisation in the cortex, as mechanistic consequences of neurobiological principles, anatomical structure and sensorimotor experience.
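As a rough intuition pump for the noise-driven ignition dynamics described above, here is a minimal toy sketch in Python. It is not the authors' neurobiologically detailed model: the single-unit reduction, the two thresholds, and all parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Leaky accumulator with recurrent (reverberant) self-excitation plus noise.
# "Partial ignition" and "full ignition" are read off two thresholds,
# interpreted here, following the abstract, as intention emergence and
# decision-and-execution, respectively.
leak, recurrent, noise_sd = 0.010, 0.012, 0.02
theta_intention, theta_action = 0.5, 1.0

x, intention_t, action_t = 0.0, None, None
for t in range(20000):
    x += (recurrent - leak) * x + noise_sd * rng.standard_normal()
    x = max(x, 0.0)                       # activity cannot go negative
    if intention_t is None and x >= theta_intention:
        intention_t = t                   # partial ignition: intention emerges
    if x >= theta_action:
        action_t = t                      # full ignition: decision and execution
        break

print(f"intention at t={intention_t}, action at t={action_t}")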

15.
This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.

16.
To understand the neural basis of human speech control, extensive research has been done using a variety of methodologies in a range of experimental models. Nevertheless, several critical questions about learned vocal motor control remain open. One of them is the mechanism(s) by which neurotransmitters, such as dopamine, modulate speech and song production. In this review, we bring together two fields of investigation of dopamine's action on voice control: in humans and in songbirds, which share similar behavioral and neural mechanisms for speech and song production. While human studies investigating the role of dopamine in speech control are limited to reports in neurological patients, research on dopaminergic modulation of bird song control has recently expanded our views on how this system might be organized. We discuss the parallels between bird song and human speech from the perspective of dopaminergic control, and outline important differences between these species.

17.
Native speakers of a language are often unable to consciously perceive, and show altered neural responses to, phonemic contrasts not present in their language. This study examined whether speakers of dialects of the same language with different phoneme inventories also show measurably different neural responses to contrasts not present in their dialect. Speakers with (n=11) and without (n=11) an American English I/E (pin/pen) vowel merger in speech production were asked to discriminate perceptually between minimal pairs of words that contrasted in the critical vowel merger and minimal pairs of control words while their event-related potentials (ERPs) were recorded. Compared with unmerged-dialect speakers, merged-dialect speakers were less able to make behavioral discriminations and exhibited a reduced late positive ERP component (LPC) effect to incongruent merger vowel stimuli. These results indicate that, between dialects of a single language, behavioral response differences may reflect neural differences related to conscious phonological decision processes.

18.
Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found evidence for spatially distributed processing of speech and environmental sounds across a substantial extent of the temporal cortices. Most importantly, regions previously reported as selective for speech over environmental sounds also contained distributed information. The results indicate that the temporal cortices supporting complex auditory processing, including regions previously described as speech-selective, are in fact highly heterogeneous.

19.
Language-users reduce words in predictable contexts. Previous research indicates that reduction may be stored in lexical representation if a word is often reduced. Because representation influences production regardless of context, production should be biased by how often each word has been reduced in the speaker’s prior experience. This study investigates whether speakers have a context-independent bias to reduce low-informativity words, which are usually predictable and therefore usually reduced. Content word durations were extracted from the Buckeye and Switchboard speech corpora, and analyzed for probabilistic reduction effects using a language model based on spontaneous speech in the Fisher corpus. The analysis supported the hypothesis: low-informativity words have shorter durations, even when the effects of local contextual predictability, frequency, speech rate, and several other variables are controlled for. Additional models that compared word types against only other words of the same segmental length further supported this conclusion. Words that usually appear in predictable contexts are reduced in all contexts, even those in which they are unpredictable. The result supports representational models in which reduction is stored, and where sufficiently frequent reduction biases later production. The finding provides new evidence that probabilistic reduction interacts with lexical representation.
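The informativity measure at the heart of this analysis can be sketched as a word's average surprisal under a language model. Below is a minimal illustration in Python; the bigram model, toy corpus, and absence of smoothing are simplifying assumptions, not the study's actual Fisher-corpus language model.

import math
from collections import Counter

# Toy corpus standing in for spontaneous speech.
corpus = "i kind of think that kind of works you know".split()

bigrams = Counter(zip(corpus, corpus[1:]))
context_counts = Counter(corpus[:-1])

def informativity(word):
    """Mean surprisal of `word` given its preceding word, in bits."""
    surprisals = []
    for (prev, w), count in bigrams.items():
        if w == word:
            p = count / context_counts[prev]   # P(word | previous word)
            surprisals.extend([-math.log2(p)] * count)
    return sum(surprisals) / len(surprisals)

# A word that is predictable in its usual contexts has low informativity;
# the study finds such words are shorter in all contexts, even unpredictable ones.
print(informativity("of"))    # "of" always follows "kind" here, so 0.0 bits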

20.
Purpose: Adults who stutter speak more fluently during choral speech contexts than they do during solo speech contexts. The underlying mechanisms for this effect remain unclear, however. In this study, we examined the extent to which the choral speech effect depended on presentation of intact temporal speech cues. We also examined whether speakers who stutter followed choral signals more closely than typical speakers did.
Method: 8 adults who stuttered and 8 adults who did not stutter read 60 sentences aloud during a solo speaking condition and three choral speaking conditions (240 total sentences), two of which featured either temporally altered or indeterminate word duration patterns. Effects of these manipulations on speech fluency, rate, and temporal entrainment with the choral speech signal were assessed.
Results: Adults who stutter spoke more fluently in all choral speaking conditions than they did when speaking solo. They also spoke slower and exhibited closer temporal entrainment with the choral signal during the mid- to late-stages of sentence production than the adults who did not stutter. Both groups entrained more closely with unaltered choral signals than they did with altered choral signals.
Conclusions: Findings suggest that adults who stutter make greater use of speech-related information in choral signals when talking than adults with typical fluency do. The presence of fluency facilitation during temporally altered choral speech and conversation babble, however, suggests that temporal/gestural cueing alone cannot account for fluency facilitation in speakers who stutter. Other potential fluency enhancing mechanisms are discussed.
Educational Objectives: The reader will be able to (a) summarize competing views on stuttering as a speech timing disorder, (b) describe the extent to which adults who stutter depend on an accurate rendering of temporal information in order to benefit from choral speech, and (c) discuss possible explanations for fluency facilitation in the presence of inaccurate or indeterminate temporal cues.
