Similar Articles
20 similar articles found (search time: 31 ms)
1.
Cortical function and related cognitive, language, and communication skills are genetically influenced. The auditory brainstem response to speech is linked to language skill, reading ability, cognitive skills, and speech‐in‐noise perception; however, the impact of shared genetic and environmental factors on the response has not been investigated. We assessed auditory brainstem responses to speech presented in quiet and background noise from (1) 23 pairs of same sex, same learning diagnosis siblings (Siblings), (2) 23 unrelated children matched on age, sex, IQ, and reading ability to one of the siblings (Reading‐Matched), and (3) 22 pairs of unrelated children matched on age and sex but not on reading ability to the same sibling (Age/Sex‐Matched). By quantifying response similarity as the intersubject response‐to‐response correlation for sibling pairs, reading‐matched pairs, and age‐ and sex‐matched pairs, we found that siblings had more similar responses than age‐ and sex‐matched pairs and reading‐matched pairs. Similarity of responses between siblings was as high as the similarity of responses collected from an individual over the course of the recording session. Responses from unrelated children matched on reading were more similar than responses from unrelated children matched only on age and sex, supporting previous data linking variations in auditory brainstem activity with variations in reading ability. These results suggest that auditory brainstem function can be influenced by siblingship and auditory‐based communication skills such as reading, motivating the use of speech‐evoked auditory brainstem responses for assessing risk of reading and communication impairments in family members.
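The similarity metric described in the abstract above, the intersubject response‐to‐response correlation, amounts to a Pearson correlation between two averaged response waveforms. A minimal sketch follows; the function name, window length, and test signal are illustrative assumptions, not the authors' analysis code:

```python
import numpy as np

def response_similarity(resp_a, resp_b):
    """Pearson correlation between two averaged brainstem response waveforms.

    resp_a, resp_b: 1-D arrays of equal length, sampled over the same
    post-stimulus time window.
    """
    a = np.asarray(resp_a, dtype=float)
    b = np.asarray(resp_b, dtype=float)
    a = a - a.mean()  # remove DC offset before correlating
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Identical waveforms correlate at 1.0; an inverted copy at -1.0.
t = np.linspace(0, 0.04, 400)            # hypothetical 40 ms response window
wave = np.sin(2 * np.pi * 100 * t)
print(round(response_similarity(wave, wave), 3))   # prints 1.0
```

Higher values across sibling pairs than across matched unrelated pairs would indicate more similar brainstem responses, which is the comparison the study reports.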

2.
It has been proposed that language impairments in children with Autism Spectrum Disorders (ASD) stem from atypical neural processing of speech and/or nonspeech sounds. However, the strength of this proposal is compromised by the unreliable outcomes of previous studies of speech and nonspeech processing in ASD. The aim of this study was to determine whether there was an association between poor spoken language and atypical event‐related field (ERF) responses to speech and nonspeech sounds in children with ASD (n = 14) and controls (n = 18). Data from this developmental population (ages 6–14) were analysed using a novel combination of methods to maximize the reliability of our findings while taking into consideration the heterogeneity of the ASD population. The results showed that poor spoken language scores were associated with atypical left hemisphere brain responses (200 to 400 ms) to both speech and nonspeech in the ASD group. These data support the idea that some children with ASD may have an immature auditory cortex that affects their ability to process both speech and nonspeech sounds. Their poor speech processing may impair their ability to process the speech of other people, and hence reduce their ability to learn the phonology, syntax, and semantics of their native language.

3.
The present study examined the extent to which verbal auditory agnosia (VAA) is primarily a phonemic decoding disorder, as contrasted to a more global defect in acoustic processing. Subjects were six young adults who presented with VAA in childhood and who, at the time of testing, showed varying degrees of residual auditory discrimination impairment. They were compared to a group of young adults with normal language development matched for age and gender. Cortical event-related potentials (ERPs) were recorded to tones and to consonant-vowel stimuli presented in an "oddball" discrimination paradigm. In addition to cortical ERPs, auditory brainstem responses (ABRs) and middle latency responses (MLRs) were recorded. Cognitive and language assessments were obtained for the VAA subjects. ABRs and MLRs were normal. In comparison with the control group, the cortical ERPs of the VAA subjects showed a delay in the N1 component recorded over lateral temporal cortex both to tones and to speech sounds, despite an N1 of normal latency overlying the frontocentral region of the scalp. These electrophysiologic findings indicate a slowing of processing of both speech and nonspeech auditory stimuli and suggest that the locus of this abnormality is within the secondary auditory cortex in the lateral surface of the temporal lobes.

4.
Data on typically developing children suggest a link between social interaction and language learning, a finding of interest both to theories of language and theories of autism. In this study, we examined social and linguistic processing of speech in preschool children with autism spectrum disorder (ASD) and typically developing chronologically matched (TDCA) and mental age matched (TDMA) children. The social measure was an auditory preference test that pitted 'motherese' speech samples against non-speech analogs of the same signals. The linguistic measure was phonetic discrimination assessed with mismatch negativity (MMN), an event-related potential (ERP). As a group, children with ASD differed from controls by: (a) demonstrating a preference for the non-speech analog signals, and (b) failing to show a significant MMN in response to a syllable change. When ASD children were divided into subgroups based on auditory preference, and the ERP data reanalyzed, ASD children who preferred non-speech still failed to show an MMN, whereas ASD children who preferred motherese did not differ from the controls. The data support the hypothesis of an association between social and linguistic processing in children with ASD.

5.
Apparent changes in auditory scenes are often unnoticed. This change deafness phenomenon was examined in auditory scenes that comprise human voices. In two experiments, listeners were required to detect changes between two auditory scenes comprising two, three, and four talkers who voiced four‐syllable words. In change trials, one of the voices in the first scene was randomly selected and replaced with a new word. The rationale was that the higher stimulus familiarity conferred by human voices compared to other everyday sounds, together with the encoding and memory advantages for verbal stimuli and the modular processing of speech in the auditory system, should improve change detection efficiency, and that change deafness should not be observed when listeners are explicitly required to detect such obvious changes. Contrary to this prediction, change deafness was reliably observed in the three‐ and four‐talker conditions. This indicates that change deafness occurs even for highly familiar stimuli, and suggests a limited ability to perceptually organize auditory scenes comprising even a relatively small number of voices (three or four).

6.
Grammatical-specific language impairment (G-SLI) in children, arguably, provides evidence for the existence of a specialised grammatical sub-system in the brain, necessary for normal language development. Some researchers challenge this, claiming that domain-general, low-level auditory deficits, particularly in rapid processing, cause phonological deficits and thereby SLI. We investigate this possibility by testing the auditory discrimination abilities of G-SLI children for speech and non-speech sounds, at varying presentation rates, and controlling for the effects of age and language on performance. For non-speech formant transitions, 69% of the G-SLI children showed normal auditory processing, whereas for the same acoustic information in speech, only 31% did so. For rapidly presented tones, 46% of the G-SLI children performed normally. Auditory performance with speech and non-speech sounds differentiated the G-SLI children from their age-matched controls, whereas speed of processing did not. The G-SLI children evinced no relationship between their auditory and phonological/grammatical abilities. We found no consistent evidence that a deficit in processing rapid acoustic information causes or maintains G-SLI. The findings, from at least those G-SLI children who do not exhibit any auditory deficits, provide further evidence supporting the existence of a primary domain-specific deficit underlying G-SLI.

7.
This paper reviews a number of studies done by the authors and others, who have utilized various averaged electroencephalic response (AER) techniques to study speech and language processing. Pertinent studies are described in detail. A relatively new AER technique, auditory brainstem responses (ABR), is described and its usefulness in studying auditory processing activity related to speech and language is outlined. In addition, a series of ABR studies, that have demonstrated significant male-female differences in ABR auditory processing abilities, is presented and the relevance of these data to already established differences in male-female language, hearing, and cognitive abilities is discussed.

8.
The auditory brainstem response is a non-invasive technique for examining neural activity in the auditory brainstem as it processes sound signals, and in recent years it has been widely used to explore the neural basis of speech perception. Research in this area has mainly focused on characterizing brainstem activity during phonological encoding in adults and typically developing children and its developmental patterns, and on probing the phonological-encoding deficits, and their neural signatures, in developmental dyslexia and other language impairments. Building on this work, future applications of the technique in speech perception research should focus on the mechanisms of interaction between low-level phonological encoding and higher-level speech processing, and on the underlying neural basis of reading disorders.

9.
Deficits in identification and discrimination of sounds with short inter-stimulus intervals or short formant transitions in children with specific language impairment (SLI) have been taken to reflect an underlying temporal auditory processing deficit. Using the sustained frequency following response (FFR) and the onset auditory brainstem response (ABR), we evaluated whether children with SLI show abnormalities at the brainstem level consistent with a temporal processing deficit. To this end, the neural encoding of tonal sweeps, as reflected in the FFR, for different rates of frequency change, and the effects of reducing inter-stimulus interval on the ABR components, were evaluated in ten 4- to 11-year-old children with SLI and their age-matched controls. Results for the SLI group showed degraded FFR phase-locked neural activity that failed to faithfully track the frequency change presented in the tonal sweeps, particularly at the faster sweep rates. SLI children also showed longer latencies for waves III and V of the ABR and a greater prolongation of wave III at high stimulus rates (>30/sec), suggesting greater susceptibility to neural adaptation. Taken together, these results suggest a disruption in the temporal pattern of phase-locked neural activity necessary to encode rapid frequency change, and an increased susceptibility to desynchronizing factors related to faster rates of stimulus presentation, in children with SLI.

10.
We describe a patient whose deficits, following brain damage, were uniquely restricted to the speech perception of syllable sequences. The results of a series of experiments using syllable sequences showed a "negative recency effect," in which the subject's repetition performance at the later syllable positions was remarkably poor. Experimental analyses suggested that the "negative recency effect" could be due to two factors: the lower rate of processing of speech sounds, and the memory load that holding processes for preceding syllables imposed on the succeeding phonological processing. The results also suggested that the holding processes which imposed this memory load on the succeeding auditory phonological coding were modality nonspecific.

11.
Recent findings have revealed that very preterm neonates already show the typical brain responses to place of articulation changes in stop consonants, but data on their sensitivity to other types of phonetic changes remain scarce. Here, we examined the impact of 7–8 weeks of extra‐uterine life on the automatic processing of syllables in 20 healthy moderate preterm infants (mean gestational age at birth 33 weeks) matched in maturational age with 20 full‐term neonates, thus differing in their previous auditory experience. This design allows elucidating the contribution of extra‐uterine auditory experience in the immature brain on the encoding of linguistically relevant speech features. Specifically, we collected brain responses to natural CV syllables differing in three dimensions using a multi‐feature mismatch paradigm, with the syllable /ba/ as the standard and three deviants: a pitch change, a vowel change to /bo/ and a consonant voice‐onset time (VOT) change to /pa/. No significant between‐group differences were found for pitch and consonant VOT deviants. However, moderate preterm infants showed attenuated responses to vowel deviants compared to full terms. These results suggest that moderate preterm infants' limited experience with low‐pass filtered speech prenatally can hinder vowel change detection and that exposure to natural speech after birth does not seem to contribute to improve this capacity. These data are in line with recent evidence suggesting a sequential development of a hierarchical functional architecture of speech processing that is highly sensitive to early auditory experience.

12.
In 10 right-handed Ss, auditory evoked responses (AERs) were recorded from left and right temporal and parietal scalp regions during simple discrimination responses to binaurally presented pairs of synthetic speech sounds ranging perceptually from /ba/ to /da/. A late positive component (P3) in the AER was found to reflect the categorical or phonetic analysis of the stop consonants, with only left scalp sites averaging significantly different responses between acoustic and phonetic comparisons. The result is interpreted as evidence of hemispheric differences in the processing of speech in respect of the level of processing accessed by the particular information processing task.

13.
We examine the evidence that speech and musical sounds exploit different acoustic cues: speech is highly dependent on rapidly changing broadband sounds, whereas tonal patterns tend to be slower, although small and precise changes in frequency are important. We argue that the auditory cortices in the two hemispheres are relatively specialized, such that temporal resolution is better in left auditory cortical areas and spectral resolution is better in right auditory cortical areas. We propose that cortical asymmetries might have developed as a general solution to the need to optimize processing of the acoustic environment in both temporal and frequency domains.

14.
Fetal hearing experiences shape the linguistic and musical preferences of neonates. From the very first moment after birth, newborns prefer their native language, recognize their mother's voice, and show a greater responsiveness to lullabies presented during pregnancy. Yet, the neural underpinnings of this experience-induced plasticity have remained elusive. Here we recorded the frequency-following response (FFR), an auditory evoked potential elicited by periodic complex sounds, to show that prenatal music exposure is associated with enhanced neural encoding of the periodicity of speech stimuli, which relates to the perceptual experience of pitch. FFRs were recorded in a sample of 60 healthy neonates born at term and aged 12–72 hours. The sample was divided into two groups according to their prenatal musical exposure (29 daily musically exposed; 31 not-daily musically exposed). Prenatal exposure was assessed retrospectively by a questionnaire in which mothers reported how often they sang or listened to music through loudspeakers during the last trimester of pregnancy. The FFR was recorded to either a /da/ or an /oa/ speech-syllable stimulus. Analyses were centered on stimulus sections of identical duration (113 ms) and fundamental frequency (F0 = 113 Hz). Neural encoding of stimulus periodicity was quantified as the FFR spectral amplitude at the stimulus F0. Data revealed that newborns exposed daily to music exhibit larger spectral amplitudes at F0 as compared to not-daily musically-exposed newborns, regardless of the eliciting stimulus. Our results suggest that prenatal music exposure facilitates tuning to the human speech fundamental frequency, which may support early language processing and acquisition.

Research Highlights

  • Frequency-following responses to speech were collected from a sample of neonates prenatally exposed to music daily and compared to neonates not-daily exposed to music.
  • Neonates who experienced daily prenatal music exposure exhibit enhanced frequency-following responses to the periodicity of speech sounds.
  • Prenatal music exposure is associated with a fine-tuned encoding of human speech fundamental frequency, which may facilitate early language processing and acquisition.
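The abstract above quantifies neural encoding of periodicity as the FFR spectral amplitude at the stimulus F0. A minimal sketch of that measurement follows (not the authors' pipeline; the function name and sampling rate are assumptions, and a 1 s window is used here for a clean 1 Hz frequency resolution, whereas the study analyzed 113 ms sections):

```python
import numpy as np

def spectral_amplitude_at_f0(signal, fs, f0=113.0):
    """Amplitude of the response spectrum at the stimulus fundamental frequency.

    signal: 1-D time-domain response over the analysis window.
    fs: sampling rate in Hz.
    """
    signal = np.asarray(signal, dtype=float)
    # Single-sided amplitude spectrum, scaled so a unit sine reads ~1.0
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(spectrum[np.argmin(np.abs(freqs - f0))])

# Sanity check: a pure 113 Hz tone of unit amplitude reads 1.0 at F0.
fs = 2000
t = np.arange(fs) / fs                     # 1 s window, 1 Hz resolution
sig = np.sin(2 * np.pi * 113.0 * t)
print(round(spectral_amplitude_at_f0(sig, fs), 2))   # prints 1.0
```

A larger value at F0 for one group than another, as the study reports for musically exposed newborns, would indicate stronger phase-locked encoding of the stimulus fundamental.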

15.
The syllable has received considerable empirical support as a unit of processing in speech perception, but its status in speech production remains unclear. Some researchers propose that syllables are individually represented and retrieved during phonological encoding (e.g., Dell, 1986; Ferrand, Segui, & Grainger, 1996; MacKay, 1987). We test this hypothesis by examining the influence of syllable frequency on the phonological errors of two aphasics. These individuals both had an impairment in phonological encoding, but appeared to differ in the precise locus of that impairment. They each read aloud and repeated 110 pairs of words matched for syllabic complexity, but differing in final syllable frequency. Lexical frequency was also controlled. Neither aphasic was more error-prone on low than on high frequency syllables (indeed, one showed a near-significant reverse effect), and neither showed a preference for more frequent syllables in their errors. These findings provide no support for the view that syllables are individually represented and accessed during phonological encoding.

16.
Gesture–speech synchrony re‐stabilizes when hand movement or speech is disrupted by a delayed feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued from case studies in perceptual–motor pathology that hand gestures are a special kind of action that does not require closed‐loop re‐afferent feedback to maintain synchrony with speech. In the current pre‐registered within‐subject study, we used motion tracking to conceptually replicate McNeill's (1992) classic study on gesture–speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). Consistent with, and extending McNeill's original results, we obtain evidence that (a) gesture‐speech synchrony is more stable under DAF versus NO DAF (i.e., increased coupling effect), (b) that gesture and speech variably entrain to the external auditory delay as indicated by a consistent shift in gesture‐speech synchrony offsets (i.e., entrainment effect), and (c) that the coupling effect and the entrainment effect are co‐dependent. We suggest, therefore, that gesture–speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions.

17.
The present study investigated a possible connection between speech processing and cochlear function. Twenty-two subjects aged 18 to 39, balanced for gender, with normal hearing and without any known neurological condition, were tested with the dichotic listening (DL) test, in which listeners were asked to identify CV-syllables in a nonforced, an attention-right, and an attention-left condition. Transient evoked otoacoustic emissions (TEOAEs) were recorded for both ears, with and without the presentation of contralateral broadband noise. The main finding was a strong negative correlation between language laterality as measured with the dichotic listening task and that of the TEOAE responses. The findings support a hypothesis of shared variance between central and peripheral auditory lateralities, and contribute to the attentional theory of auditory lateralization. The results have implications for the understanding of the cortico-fugal efferent control of cochlear activity.

18.
Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech‐related processing deficits. Here, we examined the influence of visual articulatory information (lip‐read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a documented history of DD have deficits in their ability to gain benefit from lip‐read information that disambiguates noise‐masked speech. We show with another group of adult individuals with DD that these deficits persist into adulthood. These deficits could not be attributed to impairments in unisensory auditory word recognition. Rather, the results indicate a specific deficit in audio‐visual speech processing and suggest that impaired multisensory integration might be an important aspect of DD.

19.
Under many conditions auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when the auditory input is familiar (e.g. human speech, pre‐familiarized sounds, etc.). The current study extends this research by examining how auditory input affects 8‐ and 14‐month‐olds’ performance on individuation tasks. The results of the current study indicate that both unfamiliar sounds and words interfered with infants’ performance on an individuation task, with cross‐modal interference effects being numerically stronger for unfamiliar sounds. The effects of auditory input on a variety of lexical tasks are discussed.

20.
This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号