Similar Articles
20 similar articles found (search took 24 ms)
1.
Parents are remarkably accurate observers of their infants’ ‘canonical babbling’, the production of well‐formed syllables. With very little training, many parents across a wide range of socioeconomic statuses make flawless judgments of canonical stage onset. The results of concordance studies between parental and trained‐observer judgments support the idea that recognition of canonical babbling may be intuitive. Without instruction, parents identify the onset of canonical babbling when it occurs, and thereafter they begin to interpret sounds produced by children in ways that may encourage word learning. The fact that parents can provide accurate information about the stage of vocal development, along with the fact that late onset of canonical babbling has been shown to be an extremely important indicator of risk for hearing loss and language‐related disabilities, suggests the possibility of using a brief interview to identify infants at risk.

2.
Infant signs are intentionally taught/learned symbolic gestures which can be used to represent objects, actions, requests, and mental states. Through infant signs, parents and infants begin to communicate specific concepts earlier than children’s first spoken language. This study examines whether cultural differences in language are reflected in children’s and parents’ use of infant signs. Parents speaking East Asian languages with their children utilize verbs more often than do English-speaking mothers; and compared to their English-learning peers, Chinese children are more likely to learn verbs as they first acquire spoken words. By comparing parents’ and infants’ use of infant signs in the U.S. and Taiwan, we investigate cultural differences of noun/object versus verb/action bias before children’s first language. Parents retrospectively reported their own and their children’s use of first infant signs. Results show that cultural differences in parents’ and children’s infant sign use were consistent with research on early words, reflecting cultural differences in communication functions (referential versus regulatory) and child-rearing goals (independent versus interdependent). The current study provides evidence that intergenerational transmission of culture through symbols begins prior to oral language.

3.
Canonical babbling (CB) is critical in forming foundations for speech. Research has shown that the emergence of CB precedes first words, predicts language outcomes, and is delayed in infants with several communicative disorders. We seek a naturalistic portrayal of CB development, using all-day home recordings to evaluate the influences of age, language, and social circumstances on infant CB production. Thus we address the nature of very early language foundations and how they can be modulated. This is the first study to evaluate possible interactions of language and social circumstance in the development of babbling. We examined the effects of age (6 and 11 months), language/culture (English and Chinese), and social circumstances (during infant-directed speech [IDS], during infant overhearing of adult-directed speech [ADS], or when infants were alone) on canonical babbling ratios (CBR = canonical syllables/total syllables). The results showed a three-way interaction of infant age by infant language/culture by social circumstance. The complexity of the results forces us to recognize that a variety of factors can interact in the development of foundations for language, and that both the infant vocal response to the language/culture environment and the language/culture environment of the infant may change across age.
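The canonical babbling ratio defined in this abstract is a simple proportion of canonical syllables to total syllables. A minimal sketch of the computation (the function name and syllable counts are illustrative, not data from the study):

```python
def canonical_babbling_ratio(canonical_syllables: int, total_syllables: int) -> float:
    """CBR = canonical syllables / total syllables in a recording segment."""
    if total_syllables == 0:
        raise ValueError("no syllables observed in this segment")
    return canonical_syllables / total_syllables

# Hypothetical counts from one all-day recording segment:
cbr = canonical_babbling_ratio(42, 300)
print(round(cbr, 2))  # 0.14
```

Infants are conventionally considered to have entered the canonical stage once this ratio exceeds a criterion threshold, so the ratio is typically computed per recording session and compared across ages and social circumstances.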

4.
This study reports on co‐occurrence of vocal behaviors and motor actions in infants in the prelinguistic stage. Four Japanese infants were studied longitudinally from the age of 6 months to 11 months. For all the infants, a 40 min sample was coded for each monthly period. The vocalizations produced by the infants co‐occurred with their rhythmic actions with high frequency, particularly in the period preceding the onset of canonical babbling. Acoustical analysis was conducted on the vocalizations recorded before and after the period when co‐occurrence took place most frequently. Among the vocalizations recorded in the period when co‐occurrence appeared most frequently, those that co‐occurred with rhythmic action had significantly shorter syllable duration and shorter formant‐frequency transition duration compared with those that did not co‐occur with rhythmic action. The rapid transitions and short syllables were similar to patterns of duration found in mature speech. The acoustic features remained even after co‐occurrence disappeared. These findings suggest that co‐occurrence of rhythmic action and vocal behavior may contribute to the infant’s acquisition of the ability to perform the rapid glottal and articulatory movements that are indispensable for spoken language acquisition.

5.
In bilingual language environments, infants and toddlers listen to two separate languages during the same key years that monolingual children listen to just one, and bilinguals rarely learn each of their two languages at the same rate. Learning to understand language requires them to cope with challenges not found in monolingual input, notably the use of two languages within the same utterance (e.g., Do you like the perro? or ¿Te gusta el doggy?). For bilinguals of all ages, switching between two languages can reduce efficiency in real‐time language processing. But language switching is a dynamic phenomenon in bilingual environments, presenting the young learner with many junctures where comprehension can be derailed or even supported. In this study, we tested 20 Spanish–English bilingual toddlers (18‐ to 30‐month‐olds) who varied substantially in language dominance. Toddlers’ eye movements were monitored as they looked at familiar objects and listened to single‐language and mixed‐language sentences in both of their languages. We found asymmetrical switch costs when toddlers were tested in their dominant versus non‐dominant language, and critically, they benefited from hearing nouns produced in their dominant language, independent of switching. While bilingualism does present unique challenges, our results suggest a unified picture of early monolingual and bilingual learning. Just like monolinguals, experience shapes bilingual toddlers’ word knowledge, and with more robust representations, toddlers are better able to recognize words in diverse sentences.

6.
Familial risk for developmental dyslexia can compromise auditory and speech processing and subsequent language and literacy development. According to the phonological deficit theory, supporting phonological development during the sensitive infancy period could prevent or ameliorate future dyslexic symptoms. Music is an established method for supporting auditory and speech processing and even language and literacy, but no previous studies have investigated its benefits for infants at risk for developmental language and reading disorders. We pseudo-randomized approximately 150 infants at risk for dyslexia to vocal or instrumental music listening interventions at 0–6 months, or to a no-intervention control group. Music listening was used as an easy-to-administer, cost-effective intervention in early infancy. Mismatch responses (MMRs) elicited by speech-sound changes were recorded with electroencephalogram (EEG) before (at birth) and after (at 6 months) the intervention and at a 28-month follow-up. We expected the vocal intervention in particular to promote phonological development, evidenced by enhanced speech-sound MMRs and their fast maturation. We found enhanced positive MMR amplitudes in the vocal music listening intervention group after but not prior to the intervention. Other music activities reported by parents did not differ between the three groups, indicating that the group effects were attributable to the intervention. The results support the use of vocal music in early infancy to support speech processing and subsequent language development in infants at developmental risk.

Research Highlights

  • Dyslexia-risk infants were pseudo-randomly assigned to a vocal or instrumental music listening intervention at home from birth to 6 months of age.
  • Neural mismatch responses (MMRs) to speech-sound changes were enhanced in the vocal music intervention group after but not prior to the intervention.
  • Even passive vocal music listening in early infancy can support phonological development, which is known to be deficient in dyslexia-risk infants.

7.
A crucial step for acquiring a native language vocabulary is the ability to segment words from fluent speech. English-learning infants first display some ability to segment words at about 7.5 months of age. However, their initial attempts at segmenting words only approximate those of fluent speakers of the language. In particular, 7.5-month-old infants are able to segment words that conform to the predominant stress pattern of English words. The ability to segment words with other stress patterns appears to require the use of other sources of information about word boundaries. By 10.5 months, English learners display sensitivity to additional cues to word boundaries such as statistical regularities, allophonic cues and phonotactic patterns. Infants’ word segmentation abilities undergo further development during their second year when they begin to link sound patterns with particular meanings. By 24 months, the speed and accuracy with which infants recognize words in fluent speech is similar to that of native adult listeners. This review describes how infants use multiple sources of information to locate word boundaries in fluent speech, thereby laying the foundations for language understanding.

8.
Over the past couple of decades, research has established that infants are sensitive to the predominant stress pattern of their native language. However, the degree to which the stress pattern shapes infants' language development has yet to be fully determined. Whether stress is merely a cue to help organize the patterns of speech or whether it is an important part of the representation of speech sound sequences remains to be explored. Building on research in the areas of infant speech perception and segmentation, we asked how several months of exposure to the target language shapes infants' speech processing biases with respect to lexical stress. We hypothesized that infants represent stressed and unstressed syllables differently, and employed analyses of child-directed speech to show how this change to the representational landscape results in better distribution-based word segmentation as well as an advantage for stress-initial syllable sequences. A series of experiments then tested 9- and 7-month-old infants on their ability to use lexical stress without any other cues present to parse sequences from an artificial language. We found that infants adopted a stress-initial syllable strategy and that they appear to encode stress information as part of their proto-lexical representations. Together, the results of these studies suggest that stress information in the ambient language not only shapes how statistics are calculated over the speech input, but that it is also encoded in the representations of parsed speech sequences.

9.
In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.

10.
Vocal babbling involves production of rhythmic sequences of a mouth close–open alternation giving the perceptual impression of a sequence of consonant–vowel syllables. Petitto and co-workers have argued that vocal babbling rhythm is the same as manual syllabic babbling rhythm, in that it has a frequency of 1 cycle per second. They also assert that adult speech and sign language display the same frequency. However, available evidence suggests that the vocal babbling frequency approximates 3 cycles per second. Both adult spoken language and sign language show higher frequencies than babbling in their respective modalities. No information is currently available on the basic rhythmic parameter of intercyclical variability in either modality. A study of reduplicative babbling by 4 infants and 4 adults producing reduplicated syllables confirms the 3 per second vocal babbling rate, as well as a faster rate in adults, and provides new information on intercyclical variability.

11.
To learn to produce speech, infants must effectively monitor and assess their own speech output. Yet very little is known about how infants perceive speech produced by an infant, which has higher voice pitch and formant frequencies compared to adult or child speech. Here, we tested whether pre‐babbling infants (at 4–6 months) prefer listening to vowel sounds with infant vocal properties over vowel sounds with adult vocal properties. A listening preference favoring infant vowels may derive from their higher voice pitch, which has been shown to attract infant attention in infant‐directed speech (IDS). In addition, infants' nascent articulatory abilities may induce a bias favoring infant speech given that 4‐ to 6‐month‐olds are beginning to produce vowel sounds. We created infant and adult /i/ (‘ee’) vowels using a production‐based synthesizer that simulates the act of speaking in talkers at different ages and then tested infants across four experiments using a sequential preferential listening task. The findings provide the first evidence that infants preferentially attend to vowel sounds with infant voice pitch and/or formants over vowel sounds with no infant‐like vocal properties, supporting the view that infants' production abilities influence how they process infant speech. The findings with respect to voice pitch also reveal parallels between IDS and infant speech, raising new questions about the role of this speech register in infant development. Research exploring the underpinnings and impact of this perceptual bias can expand our understanding of infant language development.

12.
The purpose of the present study was (1) to determine whether speech rate, utterance length, and grammatical complexity (number of clauses and clausal constituents per utterance) influenced stuttering-like disfluencies as children became more disfluent at the end of a 1200-syllable speech sample [Sawyer, J., & Yairi, E. (2006). The effect of sample size on the assessment of stuttering severity. American Journal of Speech-Language Pathology, 15, 36-44] and (2) to explore the interaction of speech rate, length, and grammatical complexity at the beginning (syllables 1-300, Section A) and the end (syllables 901-1200, Section B) of the speech sample. Participants were eight boys and six girls (M=40.9 months) who were selected from the Sawyer and Yairi (2006) study. Mean length of utterance (MLU) in morphemes, the number of clauses, clausal constituents, and articulation rate, measured in syllables per second, were analyzed from the children's conversational speech. The median split procedure [Logan, K., & Conture, E. (1995). Length, grammatical complexity, and rate differences in stuttered and fluent conversational utterances of children who stutter. Journal of Fluency Disorders, 20, 35-61; Yaruss, J. S. (1997). Utterance timing and childhood stuttering. Journal of Fluency Disorders, 22, 263-286] was used to study interactions between articulation rate, utterance length, and grammatical complexity across the two sections. The mean number of clauses per utterance, clausal constituents per utterance, and articulation rate revealed no significant differences between Section A and Section B, whereas MLU significantly increased in Section B (p=.013). Clausal constituents and MLU were significantly correlated in both Sections A and B. The median split procedure revealed trends for utterances characterized as high length and low speech rate to be greater in number in Section B than in Section A, but the differences were not significant.
Educational objectives: The reader will learn about and be able to: (a) describe the influence of grammatical complexity and mean length of utterance on disfluent speech; (b) compare different procedures for assessing speech rate and determine why the effects of articulation rate have been inconclusive; (c) discuss procedures for comparing length, rate, and complexity across a single speech sample; and (d) explain why therapeutic methods that emphasize shorter utterance lengths, rather than only slower speech rates, are advisable in establishing fluency in preschool children who stutter.
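The median split procedure referenced above dichotomizes each measure at its sample median so that utterances can be cross-classified (e.g., high length with low speech rate). A rough sketch of the idea, with illustrative MLU values rather than data from the study:

```python
from statistics import median

def median_split(values):
    """Label each value 'high' or 'low' relative to the sample median."""
    m = median(values)
    return ["high" if v > m else "low" for v in values]

# Hypothetical utterance lengths (MLU in morphemes) for six utterances:
mlus = [2.1, 3.4, 5.0, 4.2, 2.8, 6.1]
print(median_split(mlus))  # ['low', 'low', 'high', 'high', 'low', 'high']
```

Applying the same split independently to length, rate, and complexity yields the joint categories (such as high-length/low-rate utterances) whose frequencies were compared across Sections A and B.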

13.
There is converging evidence that infants are sensitive to prosodic cues from birth onwards and use this kind of information in their earliest steps into the acquisition of words and syntactic regularities of their target language. Regarding word segmentation, it has been found that English-learning infants segment trochaic words by 7.5 months of age, and iambic words only by 10.5 months of age [Jusczyk, P. W., Houston, D. M., & Newsome, M. (1999). The beginnings of word segmentation in English-learning infants. Cognitive Psychology, 39, 159–207]. The question remains how to interpret this finding in relation to results showing that English-learning infants develop a preference for trochaic over iambic words between 6 and 9 months of age [Jusczyk, P. W., Cutler, A., & Redanz, N. (1993). Preference for the predominant stress patterns of English words. Child Development, 64, 675–687]. In the following, we report the results of four experiments using the headturn preference procedure (HPP) to explore the trochaic bias issue in German- and French-learning infants. For German, a trochaic preference was found at 6 but not at 4 months, suggesting an emergence of this preference between both ages (Experiments 1 and 2). For French, 6-month-old infants did not show a preference for either stress pattern (Experiment 3) while they were found to discriminate between the two stress patterns (Experiment 4). Our findings are the first to demonstrate that the trochaic bias is acquired by 6 months of age, is language specific and can be predicted by the rhythmic properties of the language in acquisition. We discuss the implications of this very early acquisition for our understanding of the emergence of segmentation abilities.

14.
In a landmark study, Jusczyk and Aslin (1995) demonstrated that English-learning infants are able to segment words from continuous speech at 7.5 months of age. In the current study, we explored the possibility that infants segment words from the edges of utterances more readily than the middle of utterances. The same procedure was used as in Jusczyk and Aslin (1995); however, our stimuli were controlled for target word location and infants were given a shorter familiarization time to avoid ceiling effects. Infants were familiarized to one word that always occurred at the edge of an utterance (sentence-initial position for half of the infants and sentence-final position for the other half) and one word that always occurred in sentence-medial position. Our results demonstrate that infants segment words from the edges of an utterance more readily than from the middle of an utterance. In addition, infants segment words from utterance-final position just as readily as they segment words from utterance-initial position. Possible explanations for these results, as well as their implications for current models of the development of word segmentation, are discussed.

15.
To explore how online speech processing efficiency relates to vocabulary growth in the 2nd year, the authors longitudinally observed 59 English-learning children at 15, 18, 21, and 25 months as they looked at pictures while listening to speech naming one of the pictures. The time course of eye movements in response to speech revealed significant increases in the efficiency of comprehension over this period. Further, speed and accuracy in spoken word recognition at 25 months were correlated with measures of lexical and grammatical development from 12 to 25 months. Analyses of growth curves showed that children who were faster and more accurate in online comprehension at 25 months were those who showed faster and more accelerated growth in expressive vocabulary across the 2nd year.

16.
Perceptual grouping has traditionally been thought to be governed by innate, universal principles. However, recent work has found differences in Japanese and English speakers’ non-linguistic perceptual grouping, implicating language in non-linguistic perceptual processes (Iversen, Patel, & Ohgushi, 2008). Two experiments test Japanese- and English-learning infants of 5–6 and 7–8 months of age to explore the development of grouping preferences. At 5–6 months, neither the Japanese nor the English infants revealed any systematic perceptual biases. However, by 7–8 months, the same age as when linguistic phrasal grouping develops, infants developed non-linguistic grouping preferences consistent with their language’s structure (and the grouping biases found in adulthood). These results reveal an early difference in non-linguistic perception between infants growing up in different language environments. The possibility that infants’ linguistic phrasal grouping is bootstrapped by abstract perceptual principles is discussed.

17.
A dichotic listening test (composed of VCV nonsense syllables) was administered to 31 right-handed subjects. A χ2 statistical significance criterion was then applied to the resulting ear differences in subjects. Fifteen subjects had ear differences significant at the p < .10 level. Of these, 14 had right-ear advantages, an incidence consistent with the incidence of left-hemisphere language dominance derived from neurological data. Incidence for the total group of 31 subjects, however, showed the overabundance of left-ear-superior protocols typical of studies that report ear-dominance scores with no statistical criterion.

18.
Infants’ ability to learn complex linguistic regularities from early on has been revealed by electrophysiological studies indicating that 3‐month‐olds, but not adults, can automatically detect non‐adjacent dependencies between syllables. While different ERP responses in adults and infants suggest that both linguistic rule learning and its link to basic auditory processing undergo developmental changes, systematic investigations of the developmental trajectories are scarce. In the present study, we assessed 2‐ and 4‐year‐olds’ ERP indicators of pitch discrimination and linguistic rule learning in a syllable‐based oddball design. To test for the relation between auditory discrimination and rule learning, ERP responses to pitch changes were used as a predictor for potential linguistic rule‐learning effects. Results revealed that 2‐year‐olds, but not 4‐year‐olds, showed ERP markers of rule learning. Although 2‐year‐olds’ rule learning was not dependent on differences in pitch perception, 4‐year‐old children demonstrated a dependency, such that those children who showed more pronounced responses to pitch changes still showed an effect of rule learning. These results narrow down the developmental decline of the ability for automatic linguistic rule learning to the age between 2 and 4 years, and, moreover, point towards a strong modification of this change by auditory processes. At an age when the ability of automatic linguistic rule learning phases out, rule learning can still be observed in children with enhanced auditory responses. The observed interrelations are plausible causes for age‐of‐acquisition effects and inter‐individual differences in language learning.

19.
The development of a lexicon critically depends on the infant’s ability to identify wordlike units in the auditory speech input. The present study investigated at what age infants become sensitive to language-specific phonotactic features that signal word boundaries and to what extent they are able to use this knowledge to segment speech input. Experiment 1 showed that infants at the age of 9 months were sensitive to the phonotactic structure of word boundaries when wordlike units were presented in isolation. Experiments 2 to 5 demonstrated that this sensitivity was present even when critical items were presented in context, although only under certain conditions. Preferences for legal over illegal word boundary clusters were found when critical items were embedded in two identical syllables, keeping language processing requirements and attentional requirements low. Experiment 6 replicated the findings of Experiment 1. Experiment 7 was a low-pass-filtered version of Experiment 6 that left the prosody of the stimulus items intact while removing most of the distinctive phonotactic cues. As expected, no listening preference for legal over illegal word boundary clusters was found in this experiment. This clearly suggests that the preferential patterns observed can be attributed to the infants’ sensitivity to phonotactic constraints on word boundaries in a given language and not to suprasegmental cues.

20.
Infants’ prelinguistic vocalizations reliably organize vocal turn-taking with social partners, creating opportunities for learning to produce the sound patterns of the ambient language. This social feedback loop supporting early vocal learning is well-documented, but its developmental origins have yet to be addressed. When do infants learn that their non-cry vocalizations influence others? To test developmental changes in infant vocal learning, we assessed the vocalizations of 2- and 5-month-old infants in a still-face interaction with an unfamiliar adult. During the still-face, infants who have learned the social efficacy of vocalizing increase their babbling rate. In addition, to assess the expectations for social responsiveness that infants build from their everyday experience, we recorded caregiver responsiveness to their infants’ vocalizations during unstructured play. During the still-face, only 5-month-old infants showed an increase in vocalizing (a vocal extinction burst) indicating that they had learned to expect adult responses to their vocalizations. Caregiver responsiveness predicted the magnitude of the vocal extinction burst for 5-month-olds. Because 5-month-olds show a vocal extinction burst with unfamiliar adults, they must have generalized the social efficacy of their vocalizations beyond their familiar caregiver. Caregiver responsiveness to infant vocalizations during unstructured play was similar for 2- and 5-month-olds. Infants thus learn the social efficacy of their vocalizations between 2 and 5 months of age. During this time, infants build associations between their own non-cry sounds and the reactions of adults, which allows learning of the instrumental value of vocalizing.
