Similar Documents
20 similar documents found (search time: 15 ms)
1.
This fMRI study investigated phonological vs. auditory temporal processing in developmental dyslexia by means of a German vowel length discrimination paradigm (Groth, Lachmann, Riecker, Muthmann, & Steinbrink, 2011). Behavioral and fMRI data were collected from dyslexics and controls while they performed same-different judgments of vowel duration in two experimental conditions. In the temporal but not the phonological condition, hemodynamic brain activation was observed bilaterally within the anterior insular cortices in both groups and within the left inferior frontal gyrus (IFG) in controls, indicating that the left IFG and the anterior insular cortices are part of a neural network involved in temporal auditory processing. Group subtraction analyses did not demonstrate significant effects. However, in a subgroup analysis, participants who performed poorly in the temporal condition (all dyslexic) showed decreased activation of the insular cortices and the left IFG, suggesting that this processing network might form the neural basis of temporal auditory processing deficits in dyslexia.
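For readers unfamiliar with how group subtraction and subgroup comparisons of this kind are typically computed, the following is a minimal Python sketch: a second-level two-sample test on per-participant ROI contrast values. The group sizes, simulated beta values, and the accuracy-based split are invented for illustration and are not the study's actual data or pipeline.

```python
import numpy as np
from scipy import stats

# Illustrative second-level "group subtraction": two-sample t-test on
# per-participant contrast values (e.g., temporal > phonological betas)
# extracted from a region of interest such as the left IFG.
rng = np.random.default_rng(0)
controls = rng.normal(loc=0.8, scale=0.5, size=16)   # simulated ROI betas, controls
dyslexics = rng.normal(loc=0.5, scale=0.5, size=16)  # simulated ROI betas, dyslexics

t, p = stats.ttest_ind(controls, dyslexics, equal_var=False)  # Welch t-test
print(f"group subtraction (controls - dyslexics): t = {t:.2f}, p = {p:.3f}")

# Subgroup analysis: split all participants by behavioral accuracy
# in the temporal condition and compare ROI betas across the split.
accuracy = rng.uniform(0.5, 1.0, size=32)            # simulated task accuracy
betas = np.concatenate([controls, dyslexics])
low = betas[accuracy < np.median(accuracy)]
high = betas[accuracy >= np.median(accuracy)]
t2, p2 = stats.ttest_ind(high, low, equal_var=False)
print(f"high vs. low performers: t = {t2:.2f}, p = {p2:.3f}")
```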

2.
Three variant forms of subcortical aphasia in Chinese stroke patients
Five right-handed patients with subcortical aphasia involving left-hemisphere subcortical lesion sites underwent CT scans. By etiology, two cases were infarctions and the other three were hemorrhages. Two of the patients showed involvement of the anterior limb of the internal capsule and of the basal ganglia, with an anterior superior white-matter lesion extension. In both cases slow, scanty, dysarthric speech was noted; one had markedly impaired auditory comprehension, and the other was only partially impaired. The third patient showed involvement of the posterior limb of the internal capsule and of the thalamus, with a posterior paraventricular white-matter lesion extension. He had poor auditory comprehension, echolalia, and fluent speech. The last two patients showed involvement of the internal capsule, the basal ganglia, and the thalamus, with an anterior and posterior paraventricular white-matter lesion extension. These two had poor auditory comprehension with nonfluent, scanty spontaneous speech; their speech sounds were nonsensical monosyllabic words, in a pattern similar to that of global aphasia. All patients had lasting right hemiplegia.

3.
Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitate speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown to be associated with impairments in reading and spelling (i.e., developmental dyslexia), but visual aspects of phoneme processing have not been investigated in individuals with such deficits. The present study analyzed the passive visual Mismatch Response (vMMR) in school children with and without developmental dyslexia in response to video-recorded mouth movements pronouncing syllables silently. Our results reveal that both groups of children showed processing of visual speech stimuli, but with different scalp distributions. Children without developmental dyslexia showed a vMMR with typical posterior distribution. In contrast, children with developmental dyslexia showed a vMMR with anterior distribution, which was even more pronounced in children with severe phonological deficits and very low spelling abilities. As anterior scalp distributions are typically reported for auditory speech processing, the anterior vMMR of children with developmental dyslexia might reflect an attempt to anticipate potentially upcoming auditory speech information in order to support phonological processing, which has been shown to be deficient in children with developmental dyslexia.

4.
In a first experiment, we recorded event-related potentials (ERPs) to "the" followed by meaningful words (Story) versus "the" followed by nonsense syllables (Nonse). Left and right lateral anterior positivities (LAPs) were seen from the onset of "the" up to 200 ms in both conditions. Beyond 200 ms after the onset of "the", the left and right LAPs continued for "the" in the Story condition, but were replaced by a negativity in the Nonse condition. In a second experiment, ERPs were recorded to "the" in the Story and Nonse contexts mixed together under two different task instructions (attend to the auditory stimuli versus ignore the auditory stimuli). The same pattern of findings as in Experiment 1 was observed for the Story and Nonse contexts when the participants attended to the auditory stimuli. Ignoring the auditory stimuli led to an attenuation of the right LAP, supporting the hypothesis that it is an index of discourse processing.

5.
Over the years, a large body of work on the brain basis of language comprehension has accumulated, paving the way for the formulation of a comprehensive model. The model proposed here describes the functional neuroanatomy of the different processing steps from auditory perception to comprehension as located in different gray matter brain regions. It also specifies the information flow between these regions, taking into account white matter fiber tract connections. Bottom-up, input-driven processes proceeding from the auditory cortex to the anterior superior temporal cortex and from there to the prefrontal cortex, as well as top-down, controlled and predictive processes from the prefrontal cortex back to the temporal cortex are proposed to constitute the cortical language circuit.

6.
Seven aphasic patients with circumscribed left basal ganglia infarctions were investigated within the first 15 days after their strokes. Five showed transcortical motor aphasia initially. Two patients suffered from anterior choroidal artery infarction. As this vessel does not contribute to cortical supply, cortical malfunction probably cannot account for the language deficits. Patients with infarctions in the supply area of the anterior lenticulostriate arteries became fluent with frequent phonemic and semantic paraphasias resembling Wernicke's aphasia. Three of the four patients showed transiently more pronounced deficits in auditory than in written-language comprehension.

7.
The neuronal system that processes and transfers auditory information to the higher motor areas was investigated using fMRI. Two different types of internal modulation of auditory pacing (1 Hz) were combined in a 2×2 condition experiment, and the activation was compared with that under visual guidance. The bilateral anterior portion of BA22 (ant-BA22) and the left BA41/42 were more extensively activated by the combined modulation condition under the auditory cue than under the visual cue. Among the four auditory conditions with or without the two types of internal modulation, activation in the ant-BA22 was augmented by the combined modulation condition only on the left side. The left ant-BA22 may be especially involved in integrating the external auditory cue with internal modulation, while activation on the right side did not depend on complexity. The role of the left BA41/42 in motor regulation may be more specific to the processing of an auditory cue than that of its right-side counterpart. These two areas in the left temporal lobe may be organized as a subsystem that handles the timing of complex movements under auditory cues, while the higher motor areas in the frontal lobe support both sensory modalities for the cue. This architecture may be considered ‘audio-motor control’, analogous to the visuo-motor control of the fronto-parietal network.
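As a rough illustration of how main effects and the interaction in a 2×2 condition design like this can be contrasted, here is a minimal NumPy sketch. The factor labels and the percent-signal-change values are hypothetical, not the study's results.

```python
import numpy as np

# Illustrative 2x2 factorial contrast on ROI activation means
# (factors and values are hypothetical, not the study's data).
# Rows: modulation type A absent/present; columns: type B absent/present.
means = np.array([[0.2, 0.5],
                  [0.4, 1.1]])   # simulated % signal change in left ant-BA22

main_A = means[1, :].mean() - means[0, :].mean()          # main effect of A
main_B = means[:, 1].mean() - means[:, 0].mean()          # main effect of B
interaction = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])

print(f"main effect A: {main_A:.2f}")
print(f"main effect B: {main_B:.2f}")
print(f"A x B interaction (combined-modulation augmentation): {interaction:.2f}")
```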

8.
Brief tonal stimuli and spoken sentences were utilized to examine whether adolescents (aged 14;3-18;1) with specific language impairments (SLI) exhibit atypical neural activity for rapid auditory processing of non-linguistic stimuli and linguistic processing of verb-agreement and semantic constraints. Further, we examined whether the behavioral and electrophysiological indices for rapid auditory processing were correlated with those for linguistic processing. Fifteen adolescents with SLI and fifteen adolescents with normal language met strict criteria for displaying consistent diagnoses from kindergarten through the eighth grade. The findings provide evidence that auditory processing for non-linguistic stimuli is atypical in a significant number of adolescents with SLI compared to peers with normal language and indicate that reduced efficiency in auditory processing in SLI is more vulnerable to rapid rates (200 ms ISI) of stimuli presentation (indexed by reduced accuracy, a tendency for longer RTs, reduced N100 over right anterior sites, and reduced amplitude P300). Many adolescents with SLI displayed reduced behavioral accuracy for detecting verb-agreement violations and semantic anomalies, along with less robust P600s elicited by verb-agreement violations. The results indicate that ERPs elicited by morphosyntactic aspects of language processing are atypical in many adolescents with SLI. Additionally, correlational analyses between behavioral and electrophysiological indices of processing non-linguistic stimuli and verb-agreement violations suggest that the integrity of neural functions for auditory processing may only account for a small proportion of the variance in morphosyntactic processing in some adolescents.
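The phrase "a small proportion of the variance" refers to the squared correlation between the two indices. The sketch below, using invented per-participant vectors (a hypothetical right-anterior N100 amplitude and verb-agreement accuracy), shows how such a correlation and its r-squared would be computed; it is not the study's analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant indices (not the study's data):
# N100 amplitude at right anterior sites for rapid-rate tones, and
# accuracy for detecting verb-agreement violations.
rng = np.random.default_rng(1)
n100_amp = rng.normal(size=30)
agreement_acc = 0.3 * n100_amp + rng.normal(size=30)  # weak built-in relation

r, p = stats.pearsonr(n100_amp, agreement_acc)
print(f"r = {r:.2f}, p = {p:.3f}, variance explained r^2 = {r**2:.2f}")
```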

9.
Temporal sequencing of verbal materials (digits, words, and geometric forms) presented in two sensory modalities (auditory and visual) was examined in three groups of subjects (Broca's aphasics with left anterior lesions, patients with right hemisphere lesions, and normal controls). Each subject was asked to point to a set of stimuli in the same sequence as presented by the examiner. Results indicated that patients with left hemisphere lesions were more impaired on all tasks than the patients with right hemisphere lesions, who in turn were impaired compared to normal controls. Response to auditory presentation was superior to response to visual presentation. Also, digits were the easiest for all groups, and words were easier than geometric forms. Of special interest was the finding that right hemisphere lesions are associated with impairment of verbal temporal sequencing under either auditory or visual presentation.

10.
In a simple auditory rhyming paradigm requiring a button-press response (rhyme/nonrhyme) to the second word (target) of each spoken stimulus pair, both the early (P50, N120, P200, N240) and late (CNV, N400, P300) components of the ERP waveform evidenced considerable change from middle childhood to adulthood. In addition, behavioral accuracy and reaction time improved with increasing age. In contrast, the size, distribution and latency of each of several rhyming effects (including the posterior N400 rhyming effect, a left hemisphere anterior rhyming effect, and early rhyming effects on P50 latency, N120 latency and P200 amplitude) remained constant from age 7 to adulthood. These results indicate that the neurocognitive networks involved in processing auditory rhyme information, as indexed by the present task, are well established and have an adult-like organization at least by the age of 7.
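A rhyming effect such as the posterior N400 effect described here is commonly quantified as a nonrhyme-minus-rhyme difference wave averaged over a time window. The sketch below uses simulated single-channel epochs and an assumed 300-500 ms window; the sampling rate, trial counts, and window are illustrative, not taken from the paper.

```python
import numpy as np

# Simulated single-channel (posterior-site) ERP epochs (trials x samples),
# 500 Hz sampling, epoched from -100 to 900 ms around target-word onset.
fs = 500
times = np.arange(-0.1, 0.9, 1 / fs)
rng = np.random.default_rng(2)
rhyme_epochs = rng.normal(size=(60, times.size))
nonrhyme_epochs = rng.normal(size=(60, times.size)) - 1.0  # more negative N400

# Average across trials, then take the difference wave (nonrhyme - rhyme).
difference_wave = nonrhyme_epochs.mean(axis=0) - rhyme_epochs.mean(axis=0)

# Mean amplitude in an assumed N400 window (300-500 ms).
window = (times >= 0.3) & (times <= 0.5)
n400_rhyming_effect = difference_wave[window].mean()
print(f"N400 rhyming effect (mean amplitude, 300-500 ms): {n400_rhyming_effect:.2f} uV")
```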

11.
Using fMRI, we investigated the functional organization of prefrontal cortex (PFC) as participants briefly thought of a single just-experienced item (i.e., refreshed an active representation). The results of six studies, and a meta-analysis including previous studies, identified regions in left dorsolateral, anterior, and ventrolateral PFC associated in varying degrees with refreshing different types of information (visual and auditory words, drawings, patterns, people, places, or locations). In addition, activity increased in anterior cingulate with selection demands and in orbitofrontal cortex when a nonselected item was emotionally salient, consistent with a role for these areas in cognitive control (e.g., overcoming “mental rubbernecking”). We also found evidence that presenting emotional information disrupted an anterior component of the refresh circuit. We suggest that refreshing accounts for some neural activity observed in more complex tasks, such as working memory, long-term memory, and problem solving, and that its disruption (e.g., from aging or emotion) could have a broad impact.

12.
In an fMRI study with 12 participants, we used two tasks, word reversal and rhyme judgment, based on pairs of natural speech stimuli to study the neural correlates of manipulating auditory imagery under taxing conditions. Both tasks engaged the left anterior superior temporal gyrus, reflecting previously established perceptual mechanisms. Engagement of the left inferior frontal gyrus in both tasks relative to baseline could only be revealed by applying small volume corrections to the region of interest, suggesting that phonological segmentation played only a minor role and providing further support for a factorial dissociation of rhyming and segmentation in phonological awareness. Most importantly, subtraction of rhyme judgment from word reversal revealed activation of the parietal lobes bilaterally and the right inferior frontal cortex, suggesting that the dynamic manipulation of auditory imagery involved in mental reversal of words engages mechanisms similar to those involved in visuospatial working memory and mental rotation. Reversing spoken items thus seems to be a matter of mind twisting rather than tongue twisting, supporting a link between language processing and the manipulation of mental imagery.

13.
A large body of literature implicates the amygdala in Pavlovian fear conditioning. In this study, we examined the contribution of individual amygdaloid nuclei to contextual and auditory fear conditioning in rats. Prior to fear conditioning, rats received a large electrolytic lesion of the amygdala in one hemisphere, and a nucleus-specific neurotoxic lesion in the contralateral hemisphere. Neurotoxic lesions targeted either the lateral nucleus (LA), basolateral and basomedial nuclei (basal nuclei), or central nucleus (CE) of the amygdala. LA and CE lesions attenuated freezing to both contextual and auditory conditional stimuli (CSs). Lesions of the basal nuclei produced deficits in contextual and auditory fear conditioning only when the damage extended into the anterior divisions of the basal nuclei; damage limited to the posterior divisions of the basal nuclei did not significantly impair conditioning to either the auditory or the contextual CS. These effects were typically not lateralized, although neurotoxic lesions of the posterior divisions of the basal nuclei had greater effects on contextual fear conditioning when the contralateral electrolytic lesion was placed in the right hemisphere. These results indicate that there is significant overlap within the amygdala in the neural pathways mediating fear conditioning to contextual and acoustic CSs, and that these forms of learning are not anatomically dissociable at the level of amygdaloid nuclei.

14.
A common assumption is that phonetic sounds initiate unique processing in the superior temporal gyri and sulci (STG/STS). The anatomical areas subserving these processes are also implicated in the processing of non-phonetic stimuli such as musical instrument sounds. The differential processing of phonetic and non-phonetic sounds was investigated in this study by applying a “sound-morphing” paradigm, in which the presence of phonetic features was parametrically varied, creating a step-wise transition from a non-phonetic sound into a phonetic sound. The stimuli were presented in an event-related fMRI design, and the fMRI-BOLD data were analysed using parametric contrasts. The results showed a higher sensitivity for sounds containing phonetic features compared to non-phonetic sounds in the middle part of the STG and in the anterior part of the planum temporale (PT) bilaterally. Although the same areas were involved in the processing of non-phonetic sounds, a difference was evident in the STG, where activation increased with the number of phonetic features in the sounds. The results indicate a stimulus-driven, bottom-up process that utilizes general auditory resources in the secondary auditory cortex, depending on specific phonetic features in the sounds.
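Parametric contrasts of this kind model the BOLD signal as a function of a graded stimulus property, here the number of phonetic features in the morph. The sketch below builds a hypothetical parametric-modulation regressor and convolves it with a simple gamma-based HRF; the morph levels, event timings, TR, and HRF shape are assumptions, not the study's design.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0
n_scans = 200
frame_times = np.arange(n_scans) * TR

def hrf(t):
    """Simple double-gamma hemodynamic response function (canonical-style)."""
    peak = gamma.pdf(t, 6)
    undershoot = gamma.pdf(t, 16)
    return peak - 0.35 * undershoot

# Hypothetical event onsets cycling through morph levels 0..4
# (0 = non-phonetic, 4 = fully phonetic).
onsets = np.arange(10, 390, 20.0)             # seconds
levels = np.arange(len(onsets)) % 5

# Build a parametric modulator sampled at the scan times (mean-centred levels).
modulator = np.zeros(n_scans)
for onset, level in zip(onsets, levels):
    scan = int(onset / TR)
    modulator[scan] += level - levels.mean()

# Convolve with the HRF to obtain the parametric regressor for the GLM.
hrf_kernel = hrf(np.arange(0, 32, TR))
parametric_regressor = np.convolve(modulator, hrf_kernel)[:n_scans]
print(parametric_regressor[:10].round(3))
```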

15.
Numerous studies have provided clues about the ontogeny of lateralization of auditory processing in humans, but most have employed specific subtypes of stimuli and/or have assessed responses in discrete temporal windows. The present study used near-infrared spectroscopy (NIRS) to establish changes in hemodynamic activity in the neocortex of preverbal infants (aged 4–11 months) while they were exposed to two distinct types of complex auditory stimuli (full sentences and musical phrases). Measurements were taken from bilateral temporal regions, including both anterior and posterior superior temporal gyri. When the infant sample was treated as a homogenous group, no significant effects emerged for stimulus type. However, when infants’ hemodynamic responses were categorized according to their overall changes in volume, two very clear neurophysiological patterns emerged. A high-responder group showed a pattern of early and increasing activation, primarily in the left hemisphere, similar to that observed in comparable studies with adults. In contrast, a low-responder group showed a pattern of gradual decreases in activation over time. Although age did track with responder type, no significant differences between these groups emerged for stimulus type, suggesting that the high- versus low-responder characterization generalizes across classes of auditory stimuli. These results highlight a new way to conceptualize the variable cortical blood flow patterns that are frequently observed across infants and stimuli, with hemodynamic response volumes potentially serving as an early indicator of developmental changes in auditory-processing sensitivity.
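One plausible way to operationalize the "overall change in volume" of a hemodynamic response is to integrate the oxy-hemoglobin change over the post-stimulus window and median-split infants on that value. The sketch below illustrates this idea with simulated NIRS traces; the sampling rate, window length, and group sizes are invented, and this is not the paper's actual categorization procedure.

```python
import numpy as np

# Simulated oxy-hemoglobin time courses (infants x samples), 10 Hz sampling,
# 0-20 s after stimulus onset. Values are illustrative.
fs = 10
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(3)
hbo = rng.normal(scale=0.1, size=(24, t.size))
hbo[:12] += 0.5 * np.sin(np.pi * t / 20)      # half the infants respond strongly

# "Response volume": approximate area under the HbO curve, per infant
# (simple Riemann sum over the post-stimulus window).
response_volume = hbo.sum(axis=1) / fs

# Median split into high- and low-responder groups.
high = response_volume >= np.median(response_volume)
print(f"high responders: {high.sum()}, low responders: {(~high).sum()}")
print(f"mean volume high = {response_volume[high].mean():.2f}, "
      f"low = {response_volume[~high].mean():.2f}")
```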

16.
In two experiments, the 2-deoxyglucose metabolic mapping technique was used to examine the hypothesis that a stimulus of one modality (a light) will begin to activate the sensory cortex of a stimulus of another modality (a tone) with which it has been repeatedly paired. Adult gerbils received repeated presentations of either a light alone or the light paired with a tone known to affect 2DG labeling patterns in the auditory cortex. Intermittent footshock was included on a pseudo-random basis to maintain arousal in the subjects. One day after training, each gerbil was injected with 2DG and either received repeated presentations of the light only or was simply exposed to the training context. Analysis of the auditory cortex revealed no differences in overall metabolic activity between the groups. However, in both experiments, the light that had previously been paired with the tone changed the relative activity of the cortical subfields compared to the light not previously paired with the tone. Specifically, the results indicate greater activity in the anterior auditory field (AAF; Experiments 1 and 2) and the posterior fields (DP/VP; Experiment 2) relative to the primary field AI in response to the light that had previously been paired with the tone during training. Gerbils that were only placed in the context during the 2DG session, or that received unpaired presentations of the light and tone during training, did not show this shift in relative labeling between the subfields. Because no differences in overall activity of the auditory cortex were found, we conclude that the shift in relative labeling between the subfields reflects, on average, both an increase in activity of fields AAF and DP/VP and a concomitant decrease in AI activity in response to the light stimulus. The results have implications for our understanding both of brain learning mechanisms in general and of the potential functions of auditory cortex subfields in particular.

17.
Speech sounds can be classified on the basis of their underlying articulators or on the basis of the acoustic characteristics resulting from particular articulatory positions. Research in speech perception suggests that distinctive features are based on both articulatory and acoustic information. In recent years, neuroelectric and neuromagnetic investigations provided evidence for the brain's early sensitivity to distinctive features and their acoustic consequences, particularly for place of articulation distinctions. Here, we compare English consonants in a Mismatch Field design across two broad and distinct places of articulation - labial and coronal - and provide further evidence that early evoked auditory responses are sensitive to these features. We further add to the findings of asymmetric consonant processing, although we do not find support for coronal underspecification. Labial glides (Experiment 1) and fricatives (Experiment 2) elicited larger Mismatch responses than their coronal counterparts. Interestingly, their M100 dipoles differed along the anterior/posterior dimension in the auditory cortex that has previously been found to spatially reflect place of articulation differences. Our results are discussed with respect to acoustic and articulatory bases of featural speech sound classifications and with respect to a model that maps distinctive phonetic features onto long-term representations of speech sounds.

18.
Empirical work is reviewed that correlates the presence or absence of various parts of the auditory evoked potential with the disappearance and reemergence of auditory sensation during induction of and recovery from anesthesia. As a result, the hypothesis is generated that the electrophysiological correlate of auditory sensation is whatever neural activity generates the middle latency waves of the auditory evoked potential. This activity occurs from 20 to 80 ms poststimulus in the primary and secondary areas of the auditory cortex. Evidence is presented suggesting that earlier and later waves in the auditory evoked potential do not covary with auditory sensation (as opposed to auditory perception), and it is therefore suggested that they are possibly not the electrophysiological correlates of sensation.

19.
This research uses fMRI to understand the role of eight cortical regions in a relatively complex information-processing task. Modality of input (visual versus auditory) and modality of output (manual versus vocal) are manipulated. Two perceptual regions (auditory cortex and fusiform gyrus) only reflected perceptual encoding. Two motor regions were involved in information rehearsal as well as programming of overt actions. Two cortical regions (parietal and prefrontal) performed processing (retrieval and representational change) independent of input and output modality. The final two regions (anterior cingulate and caudate) were involved in control of cognition independent of modality of input or output and content of the material. An information-processing model, based on the ACT-R theory, is described that predicts the BOLD response in these regions. Different modules in the theory vary in the degree to which they are modality-specific and the degree to which they are involved in central versus peripheral cognitive processes.
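Model-based prediction of the BOLD response from module activity, as in the ACT-R approach described here, is typically done by convolving a module's moment-to-moment demand (busy vs. idle) with a hemodynamic response function and comparing the result with the observed regional signal. The sketch below is a generic illustration with invented engagement times and a gamma-shaped HRF, not the model reported in the paper.

```python
import numpy as np
from scipy.stats import gamma

dt = 0.05                       # model time step in seconds
duration = 60.0
t = np.arange(0, duration, dt)

# Hypothetical demand function for one module (e.g., a retrieval module):
# 1 while the module is busy, 0 otherwise.
demand = np.zeros_like(t)
for start, length in [(2.0, 0.8), (10.0, 1.2), (25.0, 0.6), (40.0, 1.0)]:
    demand[(t >= start) & (t < start + length)] = 1.0

# Gamma-shaped hemodynamic response (shape and scale are illustrative).
hrf = gamma.pdf(np.arange(0, 30, dt), a=6, scale=1.0)

# Predicted BOLD response for the region associated with this module.
predicted_bold = np.convolve(demand, hrf)[: t.size] * dt

# Compare to an observed (here simulated) time course via correlation.
rng = np.random.default_rng(4)
observed = predicted_bold + rng.normal(scale=0.02, size=t.size)
r = np.corrcoef(predicted_bold, observed)[0, 1]
print(f"prediction-observation correlation: r = {r:.2f}")
```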

20.
The human voice is one of the principal conveyers of social and affective communication. Recent neuroimaging studies have suggested that observing pain in others activates neural representations similar to those from the first-hand experience of pain; however, studies on pain expressions in the auditory channel are lacking. We conducted a functional magnetic resonance imaging study to examine brain responses to emotional exclamations of others’ pain. The control condition comprised positive (e.g., laughing) or negative (e.g., snoring) stimuli of the human voice that were not associated with pain and suffering. Compared to these control stimuli, pain-related exclamations elicited increased activation in the superior and middle temporal gyri, left insula, secondary somatosensory cortices, thalamus, and right cerebellum, as well as deactivation in the anterior cingulate cortex. The left anterior insular and thalamic activations correlated significantly with the Empathic Concern subscale of the Interpersonal Reactivity Index. Thus, the brain regions involved in hearing others’ pain are similar to those activated in the empathic processing of visual stimuli. Additionally, the findings emphasise the modulating role of interindividual differences in affective empathy.
