Similar Articles
1.
We examine the evidence that speech and musical sounds exploit different acoustic cues: speech is highly dependent on rapidly changing broadband sounds, whereas tonal patterns tend to be slower, although small and precise changes in frequency are important. We argue that the auditory cortices in the two hemispheres are relatively specialized, such that temporal resolution is better in left auditory cortical areas and spectral resolution is better in right auditory cortical areas. We propose that cortical asymmetries might have developed as a general solution to the need to optimize processing of the acoustic environment in both temporal and frequency domains.
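
To make the trade-off concrete, here is a minimal Python sketch (illustrative only; signal and window parameters are assumed, not taken from the article) showing that a short analysis window localizes a click in time but merges nearby tones, while a long window does the reverse:

```python
# Illustration (not from the article): the time-frequency trade-off that the
# authors argue the two auditory cortices resolve by dividing labour.
import numpy as np
from scipy.signal import stft

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
sig = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 520 * t)  # two close tones
sig[int(0.25 * fs)] += 5.0                                       # a brief click

for nperseg in (64, 2048):                  # short vs long analysis window
    f, tt, Z = stft(sig, fs=fs, nperseg=nperseg)
    print(f"window={nperseg:4d} samples: "
          f"freq bin = {f[1]:6.1f} Hz, time step = {(tt[1] - tt[0]) * 1e3:5.1f} ms")
# window=  64: bins ~250 Hz apart (tones merge), 2 ms steps (click localized)
# window=2048: bins ~7.8 Hz apart (tones resolved), 64 ms steps (click smeared)
```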

2.
Recent research suggests an auditory temporal deficit as a possible contributing factor to poor phonemic awareness skills. This study investigated the relationship between auditory temporal processing of nonspeech sounds and phonological awareness ability in children with a reading disability, aged 8-12 years, using Tallal's tone-order judgement task. Normal performance on the tone-order task was established for 36 normal readers. Forty-two children with developmental reading disability were then subdivided by their performance on the tone-order task. Average and poor tone-order subgroups were then compared on their ability to process speech sounds and visual symbols, and on phonological awareness and reading. The presence of a tone-order deficit was not related to performance on order processing of speech sounds, to poorer phonological awareness, or to more severe reading difficulties. In particular, there was no evidence of a group × interstimulus-interval interaction, as previously described in the literature, and thus little support for a general auditory temporal processing difficulty as an underlying problem in poor readers. In this study, deficient order judgement on a nonverbal auditory temporal-order task (tone task) did not underlie phonological awareness or reading difficulties.
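
For readers unfamiliar with the paradigm, the following Python sketch generates a two-tone order-judgement trial in the spirit of Tallal's task; the tone frequencies, duration, and ISIs are illustrative assumptions, not the study's exact parameters:

```python
# Sketch of a two-tone order-judgement trial (Tallal-style). Tone frequencies,
# duration, and ISI values here are assumptions for illustration.
import numpy as np

FS = 44100

def tone(freq_hz, dur_s, fs=FS):
    t = np.arange(int(dur_s * fs)) / fs
    return np.sin(2 * np.pi * freq_hz * t)

def order_trial(first, second, isi_s, dur_s=0.075, fs=FS):
    """Two brief tones separated by a silent interstimulus interval (ISI);
    the listener reports the order, e.g. 'high-low' vs 'low-high'."""
    freqs = {"low": 100.0, "high": 305.0}
    gap = np.zeros(int(isi_s * fs))
    return np.concatenate([tone(freqs[first], dur_s),
                           gap,
                           tone(freqs[second], dur_s)])

# Short ISIs are the putatively hard cases under a temporal-deficit account:
for isi in (0.008, 0.050, 0.400):
    audio = order_trial("high", "low", isi)
    print(f"ISI {isi * 1000:5.1f} ms -> {len(audio)} samples")
```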

3.
In this paper we examine the evidence for human brain areas dedicated to visual or auditory word form processing by comparing cortical activation for auditory word repetition, reading, picture naming, and environmental sound naming. Both reading and auditory word repetition activated left-lateralised regions in the frontal operculum (Broca's area), posterior superior temporal gyrus (Wernicke's area), posterior inferior temporal cortex, and a region in the mid superior temporal sulcus, relative to baseline conditions that controlled for sensory input and motor output processing. In addition, auditory word repetition increased activation in a lateral region of the left mid superior temporal gyrus, but critically, this area is not specific to auditory word processing; it is also activated in response to environmental sounds. There were no reading-specific activations, even in the areas previously claimed as visual word form areas: activations were either common to reading and auditory word repetition or common to reading and picture naming. We conclude that there is no current evidence for cortical sites dedicated to visual or auditory word form processing.

4.
The physiological processes underlying the segregation of concurrent sounds were investigated through the use of event-related brain potentials. The stimuli were complex sounds containing multiple harmonics, one of which could be mistuned so that it was no longer an integer multiple of the fundamental. Perception of concurrent auditory objects increased with degree of mistuning and was accompanied by negative and positive waves that peaked at 180 and 400 ms poststimulus, respectively. The negative wave, referred to as object-related negativity, was present during passive listening, but the positive wave was not. These findings indicate bottom-up and top-down influences during auditory scene analysis. Brain electrical source analyses showed that distinguishing simultaneous auditory objects involved a widely distributed neural network that included auditory cortices, the medial temporal lobe, and posterior association cortices.
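
The stimulus construction is easy to reproduce. Below is a minimal Python sketch of a harmonic complex whose third harmonic can be mistuned by a given percentage (fundamental, harmonic count, and mistuning values are assumed for illustration):

```python
# Harmonic complex with one mistunable partial, as described in the abstract.
# f0, number of harmonics, and mistuning percentages are assumed values.
import numpy as np

def harmonic_complex(f0=220.0, n_harmonics=10, mistuned_k=3,
                     mistune_pct=8.0, dur_s=0.4, fs=44100):
    t = np.arange(int(dur_s * fs)) / fs
    sig = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f = k * f0
        if k == mistuned_k:
            f *= 1 + mistune_pct / 100.0   # no longer an integer multiple of f0
        sig += np.sin(2 * np.pi * f * t)
    return sig / n_harmonics

# Perception of a second concurrent object increases with mistuning:
for pct in (0, 4, 8, 16):
    harmonic_complex(mistune_pct=pct)
    print(f"{pct:2d}% mistuning -> 3rd partial at {3 * 220 * (1 + pct / 100):6.1f} Hz")
```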

5.
The functional specificity of different brain regions recruited in auditory language processing was investigated by means of event-related functional magnetic resonance imaging (fMRI) while subjects listened to speech input varying in the presence or absence of semantic and syntactic information. There were two sentence conditions containing syntactic structure, i.e., normal speech (consisting of function and content words), syntactic speech (consisting of function words and pseudowords), and two word-list conditions, i.e., real words and pseudowords. The processing of auditory language, in general, correlates with significant activation in the primary auditory cortices and in adjacent compartments of the superior temporal gyrus bilaterally. Processing of normal speech appeared to have a special status, as no frontal activation was observed in this case but was seen in the other three conditions. This difference may point toward a certain automaticity of the linguistic processes used during normal speech comprehension. When considering the three other conditions, we found that these were correlated with activation in both left and right frontal cortices. An increase of activation in the planum polare bilaterally and in the deep portion of the left frontal operculum was found exclusively when syntactic processes were in focus. Thus, the present data may be taken to suggest an involvement of the left frontal and bilateral temporal cortex when processing syntactic information during comprehension.

6.
Are speech-specific processes localized in dedicated cortical regions or do they emerge from developmental plasticity in the connections among non-dedicated regions? Here we claim that all the brain regions activated by the processing of auditory speech can be re-classified according to whether they respond to non-verbal environmental sounds, pitch changes, unfamiliar melodies, or conceptual processes. We therefore argue that speech-specific processing emerges from differential demands on auditory and conceptual processes that are shared by speech and non-speech stimuli. This has implications for domain- vs. process-specific cognitive models, and for the relative importance of segregation and integration in functional anatomy.

7.
In 10 right-handed subjects, auditory evoked responses (AERs) were recorded from left and right temporal and parietal scalp regions during simple discrimination responses to binaurally presented pairs of synthetic speech sounds ranging perceptually from /ba/ to /da/. A late positive component (P3) in the AER was found to reflect the categorical or phonetic analysis of the stop consonants, with only left scalp sites averaging significantly different responses between acoustic and phonetic comparisons. The result is interpreted as evidence of hemispheric differences in the processing of speech with respect to the level of processing accessed by the particular information-processing task.

8.
In studies on auditory speech perception, participants are often asked to perform active tasks, e.g., deciding whether the perceived sound is a speech sound or not. However, information about the stimulus, inherent in such tasks, may induce expectations that cause altered activations not only in the auditory cortex, but also in frontal areas such as the inferior frontal gyrus (IFG) and motor cortices, even in the absence of an explicit task. To investigate this, we presented spectral mixes of a flute sound and either vowels or specific musical-instrument sounds (e.g., trumpet) in an fMRI study, in combination with three different instructions. The instructions revealed either no information about the stimulus features, or explicit information about either the musical-instrument or the vowel features. The results demonstrated that, besides an involvement of posterior temporal areas, stimulus expectancy modulated in particular a network comprising the IFG and premotor cortices during this passive listening task.
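
The abstract does not specify how the spectral mixes were constructed, so the sketch below shows just one plausible approach, linear interpolation of STFT magnitude spectra, purely as an illustration:

```python
# Hypothetical "spectral mix": blend the magnitude spectra of two sounds while
# keeping the first sound's phase. This is an assumed method, not the study's.
import numpy as np
from scipy.signal import stft, istft

def spectral_mix(a, b, w=0.5, fs=44100, nperseg=1024):
    """Blend magnitude spectra of a and b (0 <= w <= 1), keeping a's phase."""
    n = min(len(a), len(b))
    _, _, A = stft(a[:n], fs=fs, nperseg=nperseg)
    _, _, B = stft(b[:n], fs=fs, nperseg=nperseg)
    mag = (1 - w) * np.abs(A) + w * np.abs(B)
    _, mix = istft(mag * np.exp(1j * np.angle(A)), fs=fs, nperseg=nperseg)
    return mix

fs = 44100
t = np.arange(0, 0.5, 1 / fs)
flute_like = np.sin(2 * np.pi * 440 * t)                       # stand-in "flute"
vowel_like = sum(np.sin(2 * np.pi * f * t) for f in (300, 800, 1200))
hybrid = spectral_mix(flute_like, vowel_like, w=0.5, fs=fs)
print(hybrid.shape)
```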

9.
Speech sounds can be classified on the basis of their underlying articulators or on the basis of the acoustic characteristics resulting from particular articulatory positions. Research in speech perception suggests that distinctive features are based on both articulatory and acoustic information. In recent years, neuroelectric and neuromagnetic investigations provided evidence for the brain's early sensitivity to distinctive features and their acoustic consequences, particularly for place of articulation distinctions. Here, we compare English consonants in a Mismatch Field design across two broad and distinct places of articulation - labial and coronal - and provide further evidence that early evoked auditory responses are sensitive to these features. We further add to the findings of asymmetric consonant processing, although we do not find support for coronal underspecification. Labial glides (Experiment 1) and fricatives (Experiment 2) elicited larger Mismatch responses than their coronal counterparts. Interestingly, their M100 dipoles differed along the anterior/posterior dimension in the auditory cortex that has previously been found to spatially reflect place of articulation differences. Our results are discussed with respect to acoustic and articulatory bases of featural speech sound classifications and with respect to a model that maps distinctive phonetic features onto long-term representations of speech sounds.  相似文献   

10.
Auditory imagery has begun to attract attention in recent years; the relevant research covers three types: auditory imagery for speech sounds, for musical sounds, and for environmental sounds. This paper reviews cognitive-neuroscience studies of the brain regions activated by these three types of auditory imagery, compares the similarities and differences between the brain regions engaged by auditory imagery and those engaged by auditory perception, and outlines directions for future research on auditory imagery.

11.
Unlike visual and tactile stimuli, auditory signals that allow perception of timbre, pitch and localization are temporal. To process these, the auditory nervous system must either possess specialized neural machinery for analyzing temporal input, or transform the initial responses into patterns that are spatially distributed across its sensory epithelium. The former hypothesis, which postulates the existence of structures that facilitate temporal processing, is most popular. However, I argue that the cochlea transforms sound into spatiotemporal response patterns on the auditory nerve and central auditory stages; and that a unified computational framework exists for central auditory, visual and other sensory processing. Specifically, I explain how four fundamental concepts in visual processing play analogous roles in auditory processing.
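
As a toy illustration of this claim, the Python sketch below passes a sound through a bank of bandpass filters, turning a purely temporal waveform into a spatiotemporal (channel x time) pattern; a realistic cochlear model would use gammatone filters, but simple Butterworth bands suffice to show the idea:

```python
# Toy "cochlea": a filterbank maps a 1-D waveform onto a place x time pattern.
# Channel spacing and bandwidths are assumed; not a quantitative cochlear model.
import numpy as np
from scipy.signal import butter, sosfilt

def filterbank(sig, fs, center_freqs):
    channels = []
    for fc in center_freqs:
        lo, hi = fc / 2 ** 0.25, fc * 2 ** 0.25       # half-octave band
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        channels.append(np.abs(sosfilt(sos, sig)))    # rectified output per channel
    return np.array(channels)                         # shape: (place, time)

fs = 16000
t = np.arange(0, 0.3, 1 / fs)
chirp = np.sin(2 * np.pi * (300 + 2000 * t) * t)      # rising pitch over time
cfs = np.geomspace(200, 4000, 16)                     # tonotopic axis
pattern = filterbank(chirp, fs, cfs)
print(pattern.shape)  # (16, 4800): frequency 'place' now unfolds across time
```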

12.
Integrating face and voice in person perception
Integration of information from face and voice plays a central role in our social interactions. It has mostly been studied in the context of audiovisual speech perception; integration of affective or identity information has received comparatively little scientific attention. Here, we review behavioural and neuroimaging studies of face-voice integration in the context of person perception. Clear evidence for interference between facial and vocal information has been observed during affect recognition or identity processing. Integration effects on cerebral activity are apparent both at the level of heteromodal cortical regions of convergence, particularly the bilateral posterior superior temporal sulcus (pSTS), and at 'unimodal' levels of sensory processing. Whether the latter reflects feedback mechanisms or direct crosstalk between auditory and visual cortices is as yet unclear.

13.
The auditory brainstem response is a non-invasive technique for examining neural activity while the auditory brainstem processes sound signals, and it has recently been widely used to explore the neural basis of speech perception. Research to date has concentrated on characterizing brainstem activity and its developmental trajectory during phonological encoding in adults and typically developing children, and on probing the phonological-encoding deficits and their neural signatures in developmental dyslexia and other language impairments. Building on this work, the key directions for future applications of the technique in speech-perception research will be the interaction between low-level phonological encoding and higher-level speech processing, and the underlying neural basis of reading disorders.

14.
Contrasting linguistic and nonlinguistic processing has been of interest to many researchers with different scientific, theoretical, or clinical questions. However, previous work on this type of comparative analysis and experimentation has been limited. In particular, little is known about the differences and similarities between the perceptual, cognitive, and neural processing of nonverbal environmental sounds and that of speech sounds. With the aim of contrasting verbal and nonverbal processing in the auditory modality, we developed a new on-line measure that can be administered to subjects from different clinical, neurological, or sociocultural groups. This is an on-line task of sound-to-picture matching, in which the sounds are either environmental sounds or their linguistic equivalents, and which is controlled for potential task and item confounds across the two sound types. Here, we describe the design and development of our measure and report norming data for healthy subjects from two adult age groups: younger adults (18–24 years of age) and older adults (54–78 years of age). We also outline other populations to which the test has been or is being administered. In addition to the results reported here, the test can be useful to other researchers who are interested in systematically contrasting verbal and nonverbal auditory processing in other populations.

15.
Grammatical-specific language impairment (G-SLI) in children, arguably, provides evidence for the existence of a specialised grammatical sub-system in the brain, necessary for normal language development. Some researchers challenge this, claiming that domain-general, low-level auditory deficits, specific to rapid processing, cause phonological deficits and thereby SLI. We investigate this possibility by testing the auditory discrimination abilities of G-SLI children for speech and non-speech sounds, at varying presentation rates, and controlling for the effects of age and language on performance. For non-speech formant transitions, 69% of the G-SLI children showed normal auditory processing, whereas for the same acoustic information in speech, only 31% did so. For rapidly presented tones, 46% of the G-SLI children performed normally. Auditory performance with speech and non-speech sounds differentiated the G-SLI children from their age-matched controls, whereas speed of processing did not. The G-SLI children evinced no relationship between their auditory and phonological/grammatical abilities. We found no consistent evidence that a deficit in processing rapid acoustic information causes or maintains G-SLI. The findings, from at least those G-SLI children who do not exhibit any auditory deficits, provide further evidence supporting the existence of a primary domain-specific deficit underlying G-SLI.

16.
Hickok G, Poeppel D. Cognition, 2004, 92(1-2): 67-99
Despite intensive work on language-brain relations, and a fairly impressive accumulation of knowledge over the last several decades, there has been little progress in developing large-scale models of the functional anatomy of language that integrate neuropsychological, neuroimaging, and psycholinguistic data. Drawing on relatively recent developments in the cortical organization of vision, and on data from a variety of sources, we propose a new framework for understanding aspects of the functional anatomy of language which moves towards remedying this situation. The framework posits that early cortical stages of speech perception involve auditory fields in the superior temporal gyrus bilaterally (although asymmetrically). This cortical processing system then diverges into two broad processing streams, a ventral stream, which is involved in mapping sound onto meaning, and a dorsal stream, which is involved in mapping sound onto articulatory-based representations. The ventral stream projects ventro-laterally toward inferior posterior temporal cortex (posterior middle temporal gyrus) which serves as an interface between sound-based representations of speech in the superior temporal gyrus (again bilaterally) and widely distributed conceptual representations. The dorsal stream projects dorso-posteriorly involving a region in the posterior Sylvian fissure at the parietal-temporal boundary (area Spt), and ultimately projecting to frontal regions. This network provides a mechanism for the development and maintenance of "parity" between auditory and motor representations of speech. Although the proposed dorsal stream represents a very tight connection between processes involved in speech perception and speech production, it does not appear to be a critical component of the speech perception process under normal (ecologically natural) listening conditions, that is, when speech input is mapped onto a conceptual representation. We also propose some degree of bi-directionality in both the dorsal and ventral pathways. We discuss some recent empirical tests of this framework that utilize a range of methods. We also show how damage to different components of this framework can account for the major symptom clusters of the fluent aphasias, and discuss some recent evidence concerning how sentence-level processing might be integrated into the framework.

17.
When people receive information from different sensory channels, it is usually processed first in separate, dedicated brain regions and then integrated in multisensory areas. Previous neuroimaging studies of audiovisual integration in speech perception indicate that visual and auditory information can influence each other; the key region for their integration is the left posterior superior temporal sulcus, and the integration effect is constrained by temporal and spatial factors. Future research should develop more appropriate experimental paradigms and data-analysis methods to investigate the brain mechanisms of integrative processing, and extend multisensory-integration research into more complex domains.

18.
Recent event-related potential (ERP) and functional magnetic resonance imaging (fMRI) studies suggest that novelty processing may be involved in processes that recognize the meaning of a novel sound, during which widespread cortical regions including the right prefrontal cortex are engaged. However, it remains unclear how those cortical regions are functionally integrated during novelty processing. Because theta oscillation has been assumed to have a crucial role in memory operations, we examined local and inter-regional neural synchrony of theta band activity during novelty processing. Fifteen right-handed healthy university students participated in this study. Subjects performed an auditory novelty oddball task consisting of a random sequence of three stimulus types: target (a 1000 Hz pure tone), novel (familiar environmental sounds such as a dog bark, a buzzer, or a car crash), and standard (a 950 Hz pure tone). Event-related spectral perturbation (ERSP) and the phase-locking value (PLV) were measured from scalp EEG during the task. Non-parametric statistical tests were applied to test for significant differences between novel and target stimuli in ERSP and PLV. The novelty P3 showed significantly higher amplitude and shorter latency than the target P3 in frontocentral regions. Overall, theta activity was significantly higher for novel stimuli than for target stimuli, and the difference in theta power was most significant in the right frontal region. This right frontal theta activity was accompanied by phase synchronization with the left temporal region. Our results imply that theta phase synchronization between right frontal and left temporal regions underlies the retrieval of memory traces for unexpected but familiar sounds from long-term memory, in addition to working memory retrieval or novelty encoding.
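
The PLV mentioned above has a standard definition: band-pass each channel, extract instantaneous phases via the Hilbert transform, and take the magnitude of the trial-averaged unit phasor of the phase difference. A self-contained Python sketch, with synthetic data standing in for the right-frontal and left-temporal channels:

```python
# Phase-locking value (PLV) in the theta band (4-8 Hz). The data here are
# synthetic stand-ins, not the study's EEG recordings.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def plv(x, y, fs, band=(4.0, 8.0)):
    """x, y: arrays of shape (n_trials, n_samples). Returns PLV per sample."""
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    phx = np.angle(hilbert(sosfiltfilt(sos, x, axis=-1), axis=-1))
    phy = np.angle(hilbert(sosfiltfilt(sos, y, axis=-1), axis=-1))
    return np.abs(np.mean(np.exp(1j * (phx - phy)), axis=0))  # |mean phasor|

rng = np.random.default_rng(0)
fs, n_trials, n_samp = 250, 40, 500
t = np.arange(n_samp) / fs
theta = np.sin(2 * np.pi * 6 * t)                     # shared 6 Hz component
x = theta + 0.5 * rng.standard_normal((n_trials, n_samp))
y = theta + 0.5 * rng.standard_normal((n_trials, n_samp))
print(f"mean theta PLV: {plv(x, y, fs).mean():.2f}")  # near 1 = phase-locked
```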

19.
刘文理 (Liu Wenli) & 乐国安 (Yue Guo'an). 《心理学报》 (Acta Psychologica Sinica), 2012, 44(5): 585-594
Using a priming paradigm with native Chinese listeners, this study examined whether non-speech sounds influence the perception of speech sounds. Experiment 1 examined the effect of pure tones on the perception of a consonant category continuum and found that the tones shifted perception along the continuum, showing a spectral contrast effect. Experiment 2 examined the effects of pure tones and complex tones on vowel perception and found that tones matching a vowel's formant frequencies speeded identification of that vowel, showing a priming effect. Both experiments found that non-speech sounds can influence the perception of speech sounds, indicating that speech perception involves a pre-linguistic stage of spectral feature analysis, consistent with the auditory theory of speech perception.
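
As an illustration of the Experiment 2 stimuli, the sketch below builds a non-speech prime from sinusoids at a vowel's first two formant frequencies; the formant values are assumed (roughly /a/-like), since the abstract does not give them:

```python
# Hypothetical complex-tone prime: sinusoids at assumed F1/F2 formant values.
import numpy as np

def formant_tone_prime(formants_hz=(800.0, 1200.0), dur_s=0.2, fs=44100):
    """Complex tone whose components sit at the vowel's formant frequencies."""
    t = np.arange(int(dur_s * fs)) / fs
    return sum(np.sin(2 * np.pi * f * t) for f in formants_hz) / len(formants_hz)

prime = formant_tone_prime()   # per the abstract, such a prime speeds
print(prime.shape)             # identification of the matching vowel
```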
