Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
The human voice is the most familiar and important sound in the human auditory environment, conveying a wealth of socially relevant information. Analogous to visual face processing, the brain processes voices in a specialized manner. Using electrophysiology, neuroimaging, and other techniques, researchers have identified brain regions that respond selectively to voices, namely the temporal voice areas (TVA), and have found similar voice-selective regions in non-human animals. Voice processing mainly involves the processing of speech, emotional, and identity information, corresponding to three neural pathways that are mutually independent yet interacting. Researchers have proposed the dual-pathway model, the multi-stage model, and the integrative model to account for the speech, emotional, and identity processing of voices, respectively. Future research should examine whether the specificity of voice processing can be explained by selective processing of particular acoustic features, and should further investigate the neural mechanisms of voice processing in special populations (e.g. individuals with autism or schizophrenia).

2.
Normal development of cortical function depends on adequate input of external sensory information. Because of early auditory deprivation, individuals with congenital hearing impairment often show abnormal cortical function: degraded function of the primary auditory cortex, weakened functional connectivity between primary and secondary auditory cortices, and cross-modal functional reorganization of the secondary auditory cortex. After hearing is restored, this cortical reorganization persists, and speech processing requires compensation from additional higher-level cognitive resources. Existing research on the long-term cortical plasticity mechanisms after hearing restoration, the mechanisms of speech processing in complex acoustic environments, and the distinctive features of Chinese-language processing remains limited and merits further study.

3.
Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found evidence for spatially distributed processing of speech and environmental sounds across a substantial extent of the temporal cortices. Most importantly, regions previously reported as selective for speech over environmental sounds also contained distributed information. The results indicate that temporal cortices supporting complex auditory processing, including regions previously described as speech-selective, are in fact highly heterogeneous.

4.
Speech Perception Within an Auditory Cognitive Science Framework   (Total citations: 1; self-citations: 0; citations by others: 1)
ABSTRACT: The complexities of the acoustic speech signal pose many significant challenges for listeners. Although perceiving speech begins with auditory processing, investigation of speech perception has progressed mostly independently of study of the auditory system. Nevertheless, a growing body of evidence demonstrates that cross-fertilization between the two areas of research can be productive. We briefly describe research bridging the study of general auditory processing and speech perception, showing that the latter is constrained and influenced by operating characteristics of the auditory system and that our understanding of the processes involved in speech perception is enhanced by study within a more general framework. The disconnect between the two areas of research has stunted the development of a truly interdisciplinary science, but there is an opportunity for great strides in understanding with the development of an integrated field of auditory cognitive science.

5.
The functional specificity of different brain areas recruited in auditory language processing was investigated by means of event-related functional magnetic resonance imaging (fMRI) while subjects listened to speech input varying in the presence or absence of semantic and syntactic information. There were two sentence conditions containing syntactic structure, i.e., normal speech (consisting of function and content words), syntactic speech (consisting of function words and pseudowords), and two word-list conditions, i.e., real words and pseudowords. The processing of auditory language, in general, correlates with significant activation in the primary auditory cortices and in adjacent compartments of the superior temporal gyrus bilaterally. Processing of normal speech appeared to have a special status, as no frontal activation was observed in this case but was seen in the three other conditions. This difference may point toward a certain automaticity of the linguistic processes used during normal speech comprehension. When considering the three other conditions, we found that these were correlated with activation in both left and right frontal cortices. An increase of activation in the planum polare bilaterally and in the deep portion of the left frontal operculum was found exclusively when syntactic processes were in focus. Thus, the present data may be taken to suggest an involvement of the left frontal and bilateral temporal cortex when processing syntactic information during comprehension.

7.
Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing the visual components of speech facilitate speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown to be associated with impairments in reading and spelling (i.e. developmental dyslexia), but visual aspects of phoneme processing have not been investigated in individuals with such deficits. The present study analyzed the passive visual Mismatch Response (vMMR) in school children with and without developmental dyslexia in response to video-recorded mouth movements pronouncing syllables silently. Our results reveal that both groups of children showed processing of visual speech stimuli, but with different scalp distributions. Children without developmental dyslexia showed a vMMR with a typical posterior distribution. In contrast, children with developmental dyslexia showed a vMMR with an anterior distribution, which was even more pronounced in children with severe phonological deficits and very low spelling abilities. As anterior scalp distributions are typically reported for auditory speech processing, the anterior vMMR of children with developmental dyslexia might reflect an attempt to anticipate potentially upcoming auditory speech information in order to support phonological processing, which has been shown to be deficient in children with developmental dyslexia.

8.
Are speech-specific processes localized in dedicated cortical regions or do they emerge from developmental plasticity in the connections among non-dedicated regions? Here we claim that all the brain regions activated by the processing of auditory speech can be re-classified according to whether they respond to non-verbal environmental sounds, pitch changes, unfamiliar melodies, or conceptual processes. We therefore argue that speech-specific processing emerges from differential demands on auditory and conceptual processes that are shared by speech and non-speech stimuli. This has implications for domain- vs. process-specific cognitive models, and for the relative importance of segregation and integration in functional anatomy.

9.
The stimulus suffix effect (SSE) was examined with short sequences of words and meaningful nonspeech sounds. In agreement with previous findings, the SSE for word sequences was obtained with a speech, but not a nonspeech, suffix. The reverse was true for sounds. The results contribute further evidence for a functional distinction between speech and nonspeech processing mechanisms in auditory memory.

10.
Under many conditions auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when the auditory input is familiar (e.g. human speech, pre-familiarized sounds, etc.). The current study extends this research by examining how auditory input affects 8- and 14-month-olds' performance on individuation tasks. The results of the current study indicate that both unfamiliar sounds and words interfered with infants' performance on an individuation task, with cross-modal interference effects being numerically stronger for unfamiliar sounds. The effects of auditory input on a variety of lexical tasks are discussed.

11.
This paper reviews a number of studies done by the authors and others, who have utilized various averaged electroencephalic response (AER) techniques to study speech and language processing. Pertinent studies are described in detail. A relatively new AER technique, auditory brainstem responses (ABR), is described and its usefulness in studying auditory processing activity related to speech and language is outlined. In addition, a series of ABR studies that have demonstrated significant male-female differences in ABR auditory processing abilities is presented, and the relevance of these data to already established male-female differences in language, hearing, and cognitive abilities is discussed.

12.
Kim J, Davis C, Krins P. Cognition, 2004, 93(1): B39-B47
This study investigated the linguistic processing of visual speech (video of a talker's utterance without audio) by determining whether it has the capacity to prime subsequently presented word and nonword targets. The priming procedure is well suited for investigating whether speech perception is amodal, since visual speech primes can be used with targets presented in different modalities. To this end, a series of priming experiments was conducted using several tasks. It was found that visually spoken words (for which overt identification was poor) acted as reliable primes for repeated target words in the naming, written lexical decision, and auditory lexical decision tasks. These visual speech primes did not produce associative or reliable form priming. The lack of form priming suggests that the repetition priming effect was constrained by lexical-level processes. The finding of priming in all tasks is consistent with the view that similar processes operate in both visual and auditory speech processing.

13.
Two new experimental operations were used to distinguish between auditory and phonetic levels of processing in speech perception: the first based on reaction time data in speeded classification tasks with synthetic speech stimuli, and the second based on average evoked potentials recorded concurrently in the same tasks. Each of four experiments compared the processing of two different dimensions of the same synthetic consonant-vowel syllables. When a phonetic dimension was compared to an auditory dimension, different patterns of results were obtained in both the reaction time and evoked potential data. No such differences were obtained for isolated acoustic components of the phonetic dimension or for two purely auditory dimensions. Together with other recent evidence, the present results constitute additional converging operations on the distinction between auditory and phonetic processes in speech perception and on the idea that phonetic processing involves mechanisms that are lateralized in one cerebral hemisphere.

14.
The auditory brainstem response is a non-invasive technique for examining the neural activity of the auditory brainstem as it processes sound signals, and in recent years it has been widely used to explore the neural basis of speech perception. Related research has mainly focused on characterizing brainstem activity during phonological encoding and its developmental patterns in adults and typically developing children, and on investigating the phonological encoding deficits and their neural signatures in developmental dyslexia and other language impairments. Building on existing work, future applications of this technique to speech perception research should focus on the interaction mechanisms between low-level phonological encoding and higher-level speech processing, and on the underlying neural basis of dyslexia.

15.
Neurological and behavioral findings indicate that atypical auditory processing characterizes autism. The present study tested the hypothesis that auditory processing is less domain-specific in autism than in typical development. Participants with autism and controls completed a pitch sequence discrimination task in which same/different judgments of music and/or speech stimulus pairs were made. A signal detection analysis showed no difference in pitch sensitivity across conditions in the autism group, while controls exhibited significantly poorer performance in conditions incorporating speech. The results are largely consistent with perceptual theories of autism, which propose that a processing bias towards featural/low-level information characterizes the disorder, and they also support the notion that such individuals exhibit selective attention to a limited number of simultaneously presented cues.

16.
Recent research suggests an auditory temporal deficit as a possible contributing factor to poor phonemic awareness skills. This study investigated the relationship between auditory temporal processing of nonspeech sounds and phonological awareness ability in children with a reading disability, aged 8-12 years, using Tallal's tone-order judgement task. Normal performance on the tone-order task was established for 36 normal readers. Forty-two children with developmental reading disability were then subdivided by their performance on the tone-order task. Average and poor tone-order subgroups were then compared on their ability to process speech sounds and visual symbols, and on phonological awareness and reading. The presence of a tone-order deficit did not relate to performance on the order processing of speech sounds, to poorer phonological awareness or to more severe reading difficulties. In particular, there was no evidence of a group by interstimulus interval interaction, as previously described in the literature, and thus little support for a general auditory temporal processing difficulty as an underlying problem in poor readers. In this study, deficient order judgement on a nonverbal auditory temporal order task (tone task) did not underlie phonological awareness or reading difficulties.

17.
Apparent changes in auditory scenes often go unnoticed. This change deafness phenomenon was examined in auditory scenes composed of human voices. In two experiments, listeners were required to detect changes between two auditory scenes comprising two, three, or four talkers who voiced four-syllable words. In change trials, one of the voices in the first scene was randomly selected and replaced with a new word. The rationale was that the higher stimulus familiarity conferred by human voices compared to other everyday sounds, together with encoding and memory advantages for verbal stimuli and the modular processing of speech in auditory processing, should improve change detection efficiency, so the change deafness phenomenon should not be observed when listeners are explicitly required to detect the obvious changes. Contrary to this prediction, change deafness was reliably observed in the three- and four-talker conditions. This indicates that change deafness occurs even for highly familiar stimuli, suggesting a limited ability to perceptually organize auditory scenes comprising even a relatively small number of voices (three or four).

18.
The present study investigated whether auditory temporal processing deficits are related to the presence and/or the severity of periventricular brain injury and the reading difficulties experienced by extremely low birthweight (ELBW: birthweight <1000 g) children. Results indicate that ELBW children with mild or severe brain lesions obtained significantly lower scores on a test requiring auditory temporal order judgments than ELBW children without periventricular brain injury or children who were full-term. Structural equation modeling indicated that a model in which auditory temporal processing deficits predicted speech sound discrimination and phonological processing ability provided a better fit for the data than did a second model, which hypothesized that auditory temporal processing deficits are associated with poor reading abilities through a working memory deficit. These findings suggest that an impairment in auditory temporal processing may contribute to the reading difficulties experienced by ELBW children.

19.
Speech production is inextricably linked to speech perception, yet the two are usually investigated in isolation. In this study, we employed a verbal-repetition task to identify the neural substrates of speech processing with both perception and production engaged simultaneously, using functional MRI. Subjects verbally repeated auditory stimuli containing an ambiguous vowel sound that could be perceived as either a word or a pseudoword depending on the interpretation of the vowel. We found that verbal repetition commonly activated the audition-articulation interface bilaterally at the Sylvian fissures and superior temporal sulci. Contrasting word-versus-pseudoword trials revealed neural activity unique to word repetition in the left posterior middle temporal areas and activity unique to pseudoword repetition in the left inferior frontal gyrus. These findings imply that the tasks are carried out using different speech codes: an articulation-based code for pseudowords and an acoustic-phonetic code for words. The findings also support the dual-stream model and imitative learning of vocabulary.

20.
Sentence comprehension is a complex task that involves both language-specific processing components and general cognitive resources. Comprehension can be made more difficult by increasing the syntactic complexity or the presentation rate of a sentence, but it is unclear whether the same neural mechanism underlies both of these effects. In the current study, we used event-related functional magnetic resonance imaging (fMRI) to monitor neural activity while participants heard sentences containing a subject-relative or object-relative center-embedded clause presented at three different speech rates. Syntactically complex object-relative sentences activated left inferior frontal cortex across presentation rates, whereas sentences presented at a rapid rate recruited frontal brain regions such as anterior cingulate and premotor cortex, regardless of syntactic complexity. These results suggest that dissociable components of a large-scale neural network support the processing of syntactic complexity and speech presented at a rapid rate during auditory sentence processing.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号