Similar Documents
20 similar documents found (search time: 62 ms)
1.
Hickok G, Poeppel D. Cognition, 2004, 92(1-2): 67-99
Despite intensive work on language-brain relations, and a fairly impressive accumulation of knowledge over the last several decades, there has been little progress in developing large-scale models of the functional anatomy of language that integrate neuropsychological, neuroimaging, and psycholinguistic data. Drawing on relatively recent developments in the cortical organization of vision, and on data from a variety of sources, we propose a new framework for understanding aspects of the functional anatomy of language which moves towards remedying this situation. The framework posits that early cortical stages of speech perception involve auditory fields in the superior temporal gyrus bilaterally (although asymmetrically). This cortical processing system then diverges into two broad processing streams, a ventral stream, which is involved in mapping sound onto meaning, and a dorsal stream, which is involved in mapping sound onto articulatory-based representations. The ventral stream projects ventro-laterally toward inferior posterior temporal cortex (posterior middle temporal gyrus) which serves as an interface between sound-based representations of speech in the superior temporal gyrus (again bilaterally) and widely distributed conceptual representations. The dorsal stream projects dorso-posteriorly involving a region in the posterior Sylvian fissure at the parietal-temporal boundary (area Spt), and ultimately projecting to frontal regions. This network provides a mechanism for the development and maintenance of "parity" between auditory and motor representations of speech. Although the proposed dorsal stream represents a very tight connection between processes involved in speech perception and speech production, it does not appear to be a critical component of the speech perception process under normal (ecologically natural) listening conditions, that is, when speech input is mapped onto a conceptual representation. 
We also propose some degree of bi-directionality in both the dorsal and ventral pathways. We discuss some recent empirical tests of this framework that utilize a range of methods. We also show how damage to different components of this framework can account for the major symptom clusters of the fluent aphasias, and discuss some recent evidence concerning how sentence-level processing might be integrated into the framework.

2.
We describe a patient with phonological alexia caused by a small hemorrhage in the posterior-inferior portion of the left temporal lobe. The lesion induced a highly selective impairment of phonological reading without concomitant oral language deficits other than anomia for objects presented in the visual and tactile modalities. We propose that an intact dorsal pathway from inferior visual association areas to Wernicke's area via the angular gyrus could mediate reading by the lexical route, while damage to a ventral pathway disrupted the patient's ability to read nonwords. We suggest further that although visually and tactually presented objects could be recognized and both verbally and nonverbally identified, they could not be named because of a disconnection from the area of word representations.

3.
Past research has established that listeners can accommodate a wide range of talkers in understanding language. How this adjustment operates, however, is a matter of debate. Here, listeners were exposed to spoken words from a speaker of an American English dialect in which the vowel /ae/ is raised before /g/, but not before /k/. Results from two experiments showed that listeners' identification of /k/-final words like back (which are unaffected by the dialect) was facilitated by prior exposure to their dialect-affected /g/-final counterparts, e.g., bag. This facilitation occurred because the competition between interpretations, e.g., bag or back, while hearing the initial portion of the input [bae], was mitigated by the reduced probability for the input to correspond to bag as produced by this talker. Thus, adaptation to an accent is not just a matter of adjusting the speech signal as it is being heard; adaptation involves dynamic adjustment of the representations stored in the lexicon, according to the characteristics of the speaker or the context.

4.
We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated by lexical frequency. Previous research has demonstrated that during lexical access, the processing of high frequency words is facilitated; in contrast, during phonetic encoding, the properties of low frequency words are enhanced. These contrasting effects provide the opportunity to distinguish two theoretical perspectives on how interaction between processing levels can be increased. A theory in which cascading activation is used to increase interaction predicts that the facilitation of high frequency words will enhance their influence on the phonetic properties of speech errors. Alternatively, if interaction is increased by integrating levels of representation, the phonetics of speech errors will reflect the retrieval of enhanced phonetic properties for low frequency words. Utilizing a novel statistical analysis method, we show that in experimentally induced speech errors low lexical frequency targets and outcomes exhibit enhanced phonetic processing. We sketch an interactive model of lexical, phonological and phonetic processing that accounts for the conflicting effects of lexical frequency on lexical access and phonetic processing.

5.
Speech sound disorders (SSD) are the largest group of communication disorders observed in children. One explanation for these disorders is that children with SSD fail to form stable phonological representations when acquiring the speech sound system of their language due to poor phonological memory (PM). The goal of this study was to examine PM in individuals with histories of SSD employing functional MR imaging (fMRI). Participants were six right-handed adolescents with a history of early childhood SSD and seven right-handed matched controls with no history of speech and language disorders. We performed an fMRI study using an overt non-word repetition (NWR) task. Right-lateralized hypoactivation in the inferior frontal gyrus and middle temporal gyrus was observed. The former suggests a deficit in the phonological processing loop supporting PM, while the latter may indicate a deficit in speech perception. Both are cognitive processes involved in speech production. Bilateral hyperactivation observed in the pre- and supplementary motor cortex, inferior parietal cortex, supramarginal gyrus, and cerebellum raised the possibility of compensatory increases in cognitive effort or reliance on the other components of the articulatory rehearsal network and phonological store. These findings may be interpreted to support the hypothesis that individuals with SSD may have a deficit in PM and to suggest the involvement of compensatory mechanisms to counteract dysfunction of the normal network.

6.
A series of three experiments examined children's sensitivity to probabilistic phonotactic structure as reflected in the relative frequencies with which speech sounds occur and co-occur in American English. Children, ages 2½ and 3½ years, participated in a nonword repetition task that examined their sensitivity to the frequency of individual phonetic segments and to the frequency of combinations of segments. After partialling out ease of articulation and lexical variables, both groups of children repeated higher phonotactic frequency nonwords more accurately than they did lower phonotactic frequency nonwords, suggesting sensitivity to phoneme frequency. In addition, sensitivity to individual phonetic segments increased with age. Finally, older children, but not younger children, were sensitive to the frequency of larger (diphone) units. These results suggest not only that young children are sensitive to fine-grained acoustic-phonetic information in the developing lexicon but also that sensitivity to all aspects of the sound structure increases over development. Implications for the acoustic nature of both developing and mature lexical representations are discussed.
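The positional segment- and biphone-frequency measures discussed above can be illustrated with a small sketch. This is a toy computation over a hypothetical lexicon (one character stands for one phoneme), not the scoring procedure actually used in the study:

```python
from collections import Counter

def phonotactic_scores(lexicon, word):
    """Average positional phoneme frequency and positional biphone frequency
    of `word`, relative to a lexicon of phoneme strings."""
    seg_counts = Counter()  # (position, phoneme) -> count across the lexicon
    bi_counts = Counter()   # (position, biphone) -> count across the lexicon
    for w in lexicon:
        for i, p in enumerate(w):
            seg_counts[(i, p)] += 1
        for i in range(len(w) - 1):
            bi_counts[(i, w[i:i + 2])] += 1
    n = len(lexicon)
    seg = sum(seg_counts[(i, p)] for i, p in enumerate(word)) / (n * len(word))
    biphones = [word[i:i + 2] for i in range(len(word) - 1)]
    bi = (sum(bi_counts[(i, b)] for i, b in enumerate(biphones))
          / (n * len(biphones)) if biphones else 0.0)
    return seg, bi

# Hypothetical toy lexicon of phoneme strings
lexicon = ["kat", "kab", "bat", "bag", "tak"]
high = phonotactic_scores(lexicon, "kat")  # frequent segments and biphones
low = phonotactic_scores(lexicon, "gap")   # rare segments and biphones
```

Nonwords built from high-count positional segments and diphones (like "kat" here) score higher on both measures than nonwords built from rare ones, which is the contrast the repetition task manipulates.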

7.
The functional neuroanatomy of speech perception has been difficult to characterize. Part of the difficulty, we suggest, stems from the fact that the neural systems supporting 'speech perception' vary as a function of the task. Specifically, the set of cognitive and neural systems involved in performing traditional laboratory speech perception tasks, such as syllable discrimination or identification, only partially overlap those involved in speech perception as it occurs during natural language comprehension. In this review, we argue that cortical fields in the posterior-superior temporal lobe, bilaterally, constitute the primary substrate for constructing sound-based representations of speech, and that these sound-based representations interface with different supramodal systems in a task-dependent manner. Tasks that require access to the mental lexicon (i.e. accessing meaning-based representations) rely on auditory-to-meaning interface systems in the cortex in the vicinity of the left temporal-parietal-occipital junction. Tasks that require explicit access to speech segments rely on auditory-motor interface systems in the left frontal and parietal lobes. This auditory-motor interface system also appears to be recruited in phonological working memory.

8.
Word recognition is generally assumed to be achieved via competition in the mental lexicon between phonetically similar word forms. However, this process has so far been examined only in the context of auditory phonetic similarity. In the present study, we investigated whether the influence of word-form similarity on word recognition holds in the visual modality and with the patterns of visual phonetic similarity. Deaf and hearing participants identified isolated spoken words presented visually on a video monitor. On the basis of computational modeling of the lexicon from visual confusion matrices of visual speech syllables, words were chosen to vary in visual phonetic distinctiveness, ranging from visually unambiguous (lexical equivalence class [LEC] size of 1) to highly confusable (LEC size greater than 10). Identification accuracy was found to be highly related to the word LEC size and frequency of occurrence in English. Deaf and hearing participants did not differ in their sensitivity to word LEC size and frequency. The results indicate that visual spoken word recognition shows strong similarities with its auditory counterpart in that the same dependencies on lexical similarity and word frequency are found to influence visual speech recognition accuracy. In particular, the results suggest that stimulus-based lexical distinctiveness is a valid construct to describe the underlying machinery of both visual and auditory spoken word recognition.
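The lexical equivalence class idea above can be sketched as follows: words whose phonemes collapse to the same sequence of visually indistinguishable classes (visemes) belong to one LEC. The viseme groupings here are hypothetical simplifications; real groupings are derived empirically from confusion matrices of visual speech syllables:

```python
from collections import defaultdict

# Hypothetical, highly simplified viseme classes: phonemes that look alike
# on the lips map to the same symbol. Real classes come from empirical
# confusion matrices, not from a hand-written table like this one.
VISEME = {"p": "B", "b": "B", "m": "B",
          "f": "F", "v": "F",
          "t": "T", "d": "T", "s": "T", "z": "T", "n": "T",
          "k": "K", "g": "K",
          "a": "A", "e": "E", "i": "I", "o": "O", "u": "U"}

def lec_sizes(lexicon):
    """Group words whose viseme transcriptions are identical;
    return the LEC size for each word."""
    classes = defaultdict(list)
    for w in lexicon:
        key = "".join(VISEME.get(p, p) for p in w)
        classes[key].append(w)
    return {w: len(classes["".join(VISEME.get(p, p) for p in w)])
            for w in lexicon}

sizes = lec_sizes(["pat", "bat", "mat", "kit", "fan"])
# "pat"/"bat"/"mat" all collapse to the viseme string "BAT",
# so each has LEC size 3; "kit" and "fan" are visually unambiguous here
```

A word with LEC size 1 is visually unambiguous, while large LECs predict poorer identification accuracy, matching the distinctiveness manipulation described in the abstract.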

9.
Candidate brain regions constituting a neural network for preattentive phonetic perception were identified with fMRI and multivariate multiple regression of imaging data. Stimuli contrasted along speech/nonspeech, acoustic, or phonetic complexity (three levels each) and natural/synthetic dimensions. Seven distributed brain regions' activity correlated with speech and speech complexity dimensions, including five left-sided foci [posterior superior temporal gyrus (STG), angular gyrus, ventral occipitotemporal cortex, inferior/posterior supramarginal gyrus, and middle frontal gyrus (MFG)] and two right-sided foci (posterior STG and anterior insula). Only the left MFG discriminated natural and synthetic speech. The data also supported a parallel rather than serial model of auditory speech and nonspeech perception.

10.
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively few studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and the activation of targets in sparse and dense neighborhoods was compared. Neighborhood density effects in the auditory-primed lexical decision task were then compared to previous reports of native deaf signers who were only processing sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, increased inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.
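The neighborhood density construct that recurs in these abstracts is conventionally defined over one-phoneme edits: a word's neighbors are all lexical items reachable by a single phoneme substitution, addition, or deletion. A minimal sketch under that standard definition, with a hypothetical toy lexicon:

```python
def is_neighbor(a, b):
    """True if b differs from a by exactly one phoneme
    substitution, addition, or deletion."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if la == lb:  # substitution: same length, exactly one mismatch
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(la - lb) == 1:  # addition/deletion: drop one symbol from the longer
        s, t = (a, b) if la < lb else (b, a)
        return any(t[:i] + t[i + 1:] == s for i in range(len(t)))
    return False

def density(word, lexicon):
    """Neighborhood density: number of lexical neighbors of `word`."""
    return sum(is_neighbor(word, w) for w in lexicon)

# Toy phoneme-string lexicon (one character = one phoneme)
lexicon = ["kat", "bat", "kit", "at", "kats", "dog"]
# "kat" -> neighbors bat (substitution), kit (substitution),
#          at (deletion), kats (addition): density 4
```

Words in dense neighborhoods have many such competitors; the reversed density effects reported above concern the sign-language analogues of this count (handshape and location neighbors).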

11.
Frost R. Cognition, 1991, 39(3): 195-214
When an amplitude-modulated noise generated from a spoken word is presented simultaneously with the word's printed version, the noise sounds more speechlike. This auditory illusion, obtained by Frost, Repp, and Katz (1988), suggests that subjects detect correspondences between speech amplitude envelopes and printed stimuli. The present study investigated whether the speech envelope is assembled from the printed word or whether it is lexically addressed. In two experiments subjects were presented with speech-plus-noise and with noise-only trials, and were required to detect the speech in the noise. The auditory stimuli were accompanied by matching or non-matching Hebrew print, which was unvoweled in Experiment 1 and voweled in Experiment 2. The stimuli of both experiments consisted of high-frequency words, low-frequency words, and non-words. The results demonstrated that matching print caused a strong bias to detect speech in the noise when the stimuli were either high- or low-frequency words, whereas no bias was found for non-words. The bias effect for words or non-words was not affected by spelling-to-sound regularity; that is, similar effects were obtained in the voweled and the unvoweled conditions. These results suggest that the amplitude envelope of the word is not assembled from the print. Rather, it is addressed directly from the printed word and retrieved from the mental lexicon. Since amplitude envelopes are contingent on detailed phonetic structures, this outcome suggests that representations of words in the mental lexicon are not only phonological but also phonetic in character.

12.
Goldrick M, Rapp B. Cognition, 2007, 102(2): 219-260
Theories of spoken word production generally assume a distinction between at least two types of phonological processes and representations: lexical phonological processes that recover relatively arbitrary aspects of word forms from long-term memory and post-lexical phonological processes that specify the predictable aspects of phonological representations. In this work we examine the spoken production of two brain-damaged individuals. We use their differential patterns of accuracy across the tasks of spoken naming and repetition to establish that they suffer from distinct deficits originating fairly selectively within lexical or post-lexical processes. Independent and detailed analyses of their spoken productions reveal contrasting patterns that provide clear support for a distinction between two types of phonological representations: those that lack syllabic and featural information and are sensitive to lexical factors such as lexical frequency and neighborhood density, and those that include syllabic and featural information and are sensitive to detailed properties of phonological structure such as phoneme frequency and syllabic constituency.

13.
Textbooks dealing with the anatomical representation of language in the human brain display two language-related zones, Broca's area and Wernicke's area, connected by a single dorsal fiber tract, the arcuate fascicle. This classical model is incomplete. Modern imaging techniques have identified a second long association tract between the temporal and prefrontal language zones, taking a ventral course along the extreme capsule. This newly identified ventral tract connects brain regions needed for language comprehension, while the well-known arcuate fascicle is used for "sensorimotor mapping" during speech production. More than 130 years ago, Carl Wernicke already described a ventral connection for language, almost identical to the present results, but during scientific debate in the following decades either its function or its existence was rejected. This article tells the story of how this knowledge was lost, how the ventral connection, and in consequence the dual system, fits into current hypotheses, and how language relates to other systems.

14.
Giroux I, Rey A. Cognitive Science, 2009, 33(2): 260-272
Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (i.e., Serial Recurrent Networks: Elman, 1990; and Parser: Perruchet & Vinter, 1998) in an experiment where we compare the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with Parser's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes.
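The bracketing strategy mentioned above can be sketched with transitional probabilities: a boundary is posited wherever the TP between adjacent syllables dips relative to its neighbors. This is a toy illustration over a hypothetical stream built from three two-syllable "words", not either of the models actually tested in the study:

```python
from collections import Counter

def transitional_probs(stream):
    """TP(y|x) = freq(xy) / freq(x), computed over adjacent syllables."""
    left = Counter(stream[:-1])
    pairs = Counter(zip(stream, stream[1:]))
    return {p: c / left[p[0]] for p, c in pairs.items()}

def segment(stream):
    """Bracketing strategy: insert a word boundary at every transition
    whose TP is a strict local minimum (a dip relative to both neighbors)."""
    tps = transitional_probs(stream)
    seq = [tps[(a, b)] for a, b in zip(stream, stream[1:])]
    cuts = [i + 1 for i in range(1, len(seq) - 1)
            if seq[i] < seq[i - 1] and seq[i] < seq[i + 1]]
    return [stream[s:e] for s, e in zip([0] + cuts, cuts + [len(stream)])]

# Hypothetical continuous stream: the "words" tu-pi, go-la, bi-da
# concatenated without pauses, so only the statistics mark boundaries
stream = "tu pi go la bi da tu pi bi da go la tu pi".split()
words = segment(stream)
# every recovered chunk is one of the three embedded two-syllable words
```

Within-word transitions here are deterministic (TP = 1) while cross-word transitions are not, so the dips fall exactly at the word boundaries; clustering models like Parser instead build lexical chunks directly rather than detecting dips.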

15.
Mapping from acoustic signals to lexical representations is a complex process mediated by a number of different levels of representation. This paper reviews properties of the phonetic and phonological levels, and hypotheses about how category structure is represented at each of these levels, and evaluates these hypotheses in light of relevant electrophysiological studies of phonetics and phonology. The paper examines evidence for two alternative views of how infant phonetic representations develop into adult representations, a structure-changing view and a structure-adding view, and suggests that each may be better suited to different kinds of phonetic categories. Electrophysiological results are beginning to provide information about phonological representations, but less is known about how the more abstract representations at this level could be coded in the brain.

16.
While there is evidence that talker-specific details are encoded in the phonetics of the lexicon (Kraljic, Samuel, & Brennan, Psychological Science, 19(4): 332-338, 2008; Logan, Lively, & Pisoni, Journal of the Acoustical Society of America, 89(2): 874-886, 1991) and in sentence processing (Nygaard & Pisoni, Perception & Psychophysics, 60(3): 355-376, 1998), it is unclear whether categorical linguistic patterns are also represented in terms of talker-specific details. The present study provides evidence that adult learners form talker-independent representations for productive linguistic patterns. Participants were able to generalize a novel linguistic pattern to unfamiliar talkers. Learners were exposed to spoken words that conformed to a pattern in which the vowels of a word agree in place of articulation, referred to as vowel harmony. All items were presented in the voice of a single talker. Participants were tested on items produced by both the familiar talker and an unfamiliar talker. Participants generalized the pattern to novel talkers when the talkers spoke with a familiar accent (Experiment 1), as well as with an unfamiliar accent (Experiment 2). Learners showed a small advantage for talker familiarity when the words were familiar, but not when the words were novel. These results are consistent with a theory of language processing in which the lexicon stores fine-grained, talker-specific phonetic details, but productive linguistic processes are subject to abstract, talker-independent representations.

17.
To examine the influence of age and reading proficiency on the development of the spoken language network, we tested 6- and 9-year-old children listening to native and foreign sentences in a slow event-related fMRI paradigm. We observed a stable organization of the peri-sylvian areas during this time period, with a left dominance in the superior temporal sulcus and inferior frontal region. A year of reading instruction was nevertheless sufficient to increase activation in regions involved in phonological representations (posterior superior temporal region) and sentence integration (temporal pole and pars orbitalis). A top-down activation of the left inferior temporal cortex surrounding the visual word form area was also observed, but only in 9-year-olds (3 years of reading practice) listening to their native language. These results emphasize how a successful cultural practice, reading, slots into the biological constraints of the innate spoken language network.

18.
Berent I, Steriade D, Lennertz T, Vaknin V. Cognition, 2007, 104(3): 591-630
Are speakers equipped with preferences concerning grammatical structures that are absent in their language? We examine this question by investigating the sensitivity of English speakers to the sonority of onset clusters. Linguistic research suggests that certain onset clusters are universally preferred (e.g., bd > lb). We demonstrate that such preferences modulate the perception of unattested onsets by English speakers: monosyllabic auditory nonwords with onsets that are universally dispreferred (e.g., lbif) are more likely to be classified as disyllabic and misperceived as identical to their disyllabic counterparts (e.g., lebif) compared to onsets that are relatively preferred across languages (e.g., bdif). Consequently, dispreferred onsets benefit from priming by their epenthetic counterpart (e.g., lebif-lbif) as much as they benefit from identity priming (e.g., lbif-lbif). A similar pattern of misperception (e.g., lbif → lebif) was observed among speakers of Russian, where clusters of this type occur. But unlike English speakers, Russian speakers perceived these clusters accurately on most trials, suggesting that the perceptual illusions of English speakers are partly due to their linguistic experience, rather than phonetic confusion alone. Further evidence against a purely phonetic explanation for our results is offered by the capacity of English speakers to perceive such onsets accurately under conditions that encourage precise phonetic encoding. The perceptual illusions of English speakers are also irreducible to several statistical properties of the English lexicon. The systematic misperception of universally dispreferred onsets might reflect their ill-formedness in the grammars of all speakers, irrespective of linguistic experience. Such universal grammatical preferences implicate constraints on language learning.

19.
Many believe that the ability to understand the actions of others is made possible by mirror neurons and a network of brain areas known as the action-observation network (AON). Despite nearly two decades of research into mirror neurons and the AON, however, there is little evidence that they enable the inference of the intention of observed actions. Instead, theories of action selection during action execution indicate that a ventral pathway, linking middle temporal gyrus with the anterior inferior frontal gyrus, might encode these abstract features during action observation. Here I propose that action understanding requires more than merely the AON, and might be achieved through interactions between a ventral pathway and the dorsal AON.

20.
In a neuroimaging study focusing on young bilinguals, we explored the brains of bilingual and monolingual babies across two age groups (younger: 4-6 months; older: 10-12 months), using fNIRS in a new event-related design, as babies processed linguistic phonetic (Native English, Non-Native Hindi) and nonlinguistic Tone stimuli. We found that phonetic processing in bilingual and monolingual babies is accomplished with the same language-specific brain areas classically observed in adults, including the left superior temporal gyrus (associated with phonetic processing) and the left inferior frontal cortex (associated with the search and retrieval of information about meanings, and syntactic and phonological patterning), with intriguing developmental timing differences: left superior temporal gyrus activation was observed early and remained stably active over time, while left inferior frontal cortex showed a greater increase in neural activation in older babies, notably at the precise age when babies enter the universal first-word milestone, thus revealing a first-time focal brain correlate that may mediate a universal behavioral milestone in early human language acquisition. A difference was observed in the older bilingual babies' resilient neural and behavioral sensitivity to Non-Native phonetic contrasts at a time when monolingual babies can no longer make such discriminations. We advance the "Perceptual Wedge Hypothesis" as one possible explanation for how exposure to more than one language may alter neural and language processing in ways that we suggest are advantageous to language users. The brains of bilinguals and multilinguals may provide the most powerful window into the full neural "extent and variability" that our human species' language processing brain areas could potentially achieve.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号