Similar Literature
1.
Prior research suggests that stress cues are particularly important for English-hearing infants' detection of word boundaries. It is unclear, though, how infants learn to attend to stress as a cue to word segmentation. This series of experiments was designed to explore infants' attention to conflicting cues at different ages. Experiment 1 replicated previous findings: When stress and statistical cues indicated different word boundaries, 9-month-old infants used syllable stress as a cue to segmentation while ignoring statistical cues. However, in Experiment 2, 7-month-old infants attended more to statistical cues than to stress cues. These results raise the possibility that infants use their statistical learning abilities to locate words in speech and use those words to discover the regular pattern of stress cues in English. Infants at different ages may deploy different segmentation strategies as a function of their current linguistic experience.

2.
Recent evidence suggests division of labor in phonological analysis underlying speech recognition. Adults and children appear to decompose the speech stream into phoneme‐relevant information and into syllable stress. Here we investigate whether both speech processing streams develop from a common path in infancy, or whether there are two separate streams from early on. We presented stressed and unstressed syllables (spoken primes) followed by initially stressed, early-learned disyllabic German words (spoken targets). Stress overlap and phoneme overlap between the primes and the initial syllable of the targets varied orthogonally. We tested infants at 3, 6 and 9 months after birth. Event‐related potentials (ERPs) revealed stress priming without phoneme priming in the 3‐month‐olds; phoneme priming without stress priming in the 6‐month‐olds; and phoneme priming, stress priming, as well as an interaction of both in 9‐month‐olds. In general, the present findings reveal that infants start with separate processing streams related to syllable stress and to phoneme‐relevant information, and that they need to learn to merge both aspects of speech processing. In particular, the present results suggest (i) that phoneme‐free prosodic processing dominates in early infancy; (ii) that prosody‐free phoneme processing dominates in middle infancy; and (iii) that both types of processing are operating in parallel and can be merged in late infancy.

3.
Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants’ ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5‐ and 8‐month‐olds’ ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5‐month‐olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants’ statistical learning abilities may not be as robust as earlier studies have suggested.

4.
A series of 15 experiments was conducted to explore English-learning infants' capacities to segment bisyllabic words from fluent speech. The studies in Part I focused on 7.5 month olds' abilities to segment words with strong/weak stress patterns from fluent speech. The infants demonstrated an ability to detect strong/weak target words in sentential contexts. Moreover, the findings indicated that the infants were responding to the whole words and not to just their strong syllables. In Part II, a parallel series of studies was conducted examining 7.5 month olds' abilities to segment words with weak/strong stress patterns. In contrast with the results for strong/weak words, 7.5 month olds appeared to missegment weak/strong words. They demonstrated a tendency to treat strong syllables as markers of word onsets. In addition, when weak/strong words co-occurred with a particular following weak syllable (e.g., "guitar is"), 7.5 month olds appeared to misperceive these as strong/weak words (e.g., "taris"). The studies in Part III examined the abilities of 10.5 month olds to segment weak/strong words from fluent speech. These older infants were able to segment weak/strong words correctly from the various contexts in which they appeared. Overall, the findings suggest that English learners may rely heavily on stress cues when they begin to segment words from fluent speech. However, within a few months' time, infants learn to integrate multiple sources of information about the likely boundaries of words in fluent speech.

5.
Before infants can learn words, they must identify those words in continuous speech. Yet, the speech signal lacks obvious boundary markers, which poses a potential problem for language acquisition (Swingley, Philos Trans R Soc Lond. Series B, Biol Sci 364 (1536), 3617–3632, 2009). By the middle of the first year, infants seem to have solved this problem (Bergelson & Swingley, Proc Natl Acad Sci 109 (9), 3253–3258, 2012; Jusczyk & Aslin, Cogn Psychol 29, 1–23, 1995), but it is unknown if segmentation abilities are present from birth, or if they only emerge after sufficient language exposure and/or brain maturation. Here, in two independent experiments, we looked at two cues known to be crucial for the segmentation of human speech: the computation of statistical co‐occurrences between syllables and the use of the language's prosody. After a brief familiarization of about 3 min with continuous speech, using functional near‐infrared spectroscopy, neonates showed differential brain responses on a recognition test to words that violated either the statistical (Experiment 1) or prosodic (Experiment 2) boundaries of the familiarization, compared to words that conformed to those boundaries. Importantly, word recognition in Experiment 2 occurred even in the absence of prosodic information at test, meaning that newborns encoded the phonological content independently of its prosody. These data indicate that humans are born with operational language processing and memory capacities and can use at least two types of cues to segment otherwise continuous speech, a key first step in language acquisition.

6.
Individual variability in infants' language processing is partly explained by environmental factors, like the quantity of parental speech input, as well as by infant‐specific factors, like speech production. Here, we explore how these factors affect infant word segmentation. We used an artificial language to ensure that only statistical regularities (like transitional probabilities between syllables) could cue word boundaries, and then asked how the quantity of parental speech input and infants’ babbling repertoire predict infants’ abilities to use these statistical cues. We replicated prior reports showing that 8‐month‐old infants use statistical cues to segment words, with a preference for part‐words over words (a novelty effect). Crucially, 8‐month‐olds with larger novelty effects had received more speech input at 4 months and had greater production abilities at 8 months. These findings establish for the first time that the ability to extract statistical information from speech correlates with individual factors in infancy, like early speech experience and language production. Implications of these findings for understanding individual variability in early language acquisition are discussed.

7.
Bilingual acquisition presents learning challenges beyond those found in monolingual environments, including the need to segment speech in two languages. Infants may use statistical cues, such as syllable‐level transitional probabilities, to segment words from fluent speech. In the present study we assessed monolingual and bilingual 14‐month‐olds’ abilities to segment two artificial languages using transitional probability cues. In Experiment 1, monolingual infants successfully segmented the speech streams when the languages were presented individually. However, monolinguals did not segment the same language stimuli when they were presented together in interleaved segments, mimicking the language switches inherent to bilingual speech. To assess the effect of real‐world bilingual experience on dual language speech segmentation, Experiment 2 tested infants with regular exposure to two languages using the same interleaved language stimuli as Experiment 1. The bilingual infants in Experiment 2 successfully segmented the languages, indicating that early exposure to two languages supports infants’ abilities to segment dual language speech using transitional probability cues. These findings support the notion that early bilingual exposure prepares infants to navigate challenging aspects of dual language environments as they begin to acquire two languages.

8.
Two experiments were conducted to investigate whether young children are able to take into account phrasal prosody when computing the syntactic structure of a sentence. Pairs of French noun/verb homophones were selected to create locally ambiguous sentences ([la petite ferme] [est très jolie] ‘the small farm is very nice’ vs. [la petite] [ferme la fenêtre] ‘the little girl closes the window’ – brackets indicate prosodic boundaries). Although these sentences start with the same three words, ferme is a noun (farm) in the former but a verb (to close) in the latter case. The only difference between these sentence beginnings is the prosodic structure, which reflects the syntactic structure (with a prosodic boundary just before the critical word when it is a verb, and just after it when it is a noun). Crucially, all words following the homophone were masked, such that prosodic cues were the only disambiguating information. Children successfully exploited prosodic information to assign the appropriate syntactic category to the target word, in both an oral completion task (4.5‐year‐olds, Experiment 1) and in a preferential looking paradigm with an eye‐tracker (3.5‐year‐olds and 4.5‐year‐olds, Experiment 2). These results show that both groups of children exploit the position of a word within the prosodic structure when computing its syntactic category. In other words, even children as young as 3.5 years exploit phrasal prosody online to constrain their syntactic analysis. This ability to exploit phrasal prosody to compute syntactic structure may help children parse sentences containing unknown words, and facilitate the acquisition of word meanings.

9.
Previous research with artificial language learning paradigms has shown that infants are sensitive to statistical cues to word boundaries (Saffran, Aslin & Newport, 1996) and that they can use these cues to extract word‐like units (Saffran, 2001). However, it is unknown whether infants use statistical information to construct a receptive lexicon when acquiring their native language. In order to investigate this issue, we rely on the fact that besides real words a statistical algorithm extracts sound sequences that are highly frequent in infant‐directed speech but constitute nonwords. In three experiments, we use a preferential listening paradigm to test French‐learning 11‐month‐old infants' recognition of highly frequent disyllabic sequences from their native language. In Experiments 1 and 2, we use nonword stimuli and find that infants listen longer to high‐frequency than to low‐frequency sequences. In Experiment 3, we compare high‐frequency nonwords to real words in the same frequency range, and find that infants show no preference. Thus, at 11 months, French‐learning infants recognize highly frequent sound sequences from their native language and fail to differentiate between words and nonwords among these sequences. These results are evidence that they have used statistical information to extract word candidates from their input and stored them in a ‘protolexicon’, containing both words and nonwords.
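The contrast this abstract draws between sequence frequency and word status can be made concrete with a few lines of code. The corpus below is a toy invented for illustration (it is not the study's French materials): counting adjacent-syllable sequences in running speech surfaces high-frequency chunks that are real words and high-frequency chunks that are nonwords alike.

```python
from collections import Counter

# Toy syllabified utterances (illustrative only, not the study's corpus).
utterances = [
    ["le", "pe", "tit", "chien"],   # "le petit chien"
    ["le", "pe", "tit", "chat"],    # "le petit chat"
    ["ou", "est", "le", "chien"],   # "où est le chien"
]

# Count every adjacent disyllabic sequence, ignoring word boundaries.
disyllables = Counter(p for utt in utterances for p in zip(utt, utt[1:]))
top = disyllables.most_common(2)
# ("pe", "tit") is a real word ("petit"), but ("le", "pe") -- a nonword
# chunk spanning a word boundary -- is exactly as frequent; raw frequency
# alone cannot separate words from nonwords, which is the situation the
# 11-month-olds' "protolexicon" reflects.
```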

10.
Computation of Conditional Probability Statistics by 8-Month-Old Infants
A recent report demonstrated that 8-month-olds can segment a continuous stream of speech syllables, containing no acoustic or prosodic cues to word boundaries, into wordlike units after only 2 min of listening experience (Saffran, Aslin, & Newport, 1996). Thus, a powerful learning mechanism capable of extracting statistical information from fluent speech is available early in development. The present study extends these results by documenting the particular type of statistical computation, transitional (conditional) probability, used by infants to solve this word-segmentation task. An artificial language corpus, consisting of a continuous stream of trisyllabic nonsense words, was presented to 8-month-olds for 3 min. A postfamiliarization test compared the infants' responses to words versus part-words (trisyllabic sequences spanning word boundaries). The corpus was constructed so that test words and part-words were matched in frequency, but differed in their transitional probabilities. Infants showed reliable discrimination of words from part-words, thereby demonstrating rapid segmentation of continuous speech into words on the basis of transitional probabilities of syllable pairs.
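A minimal sketch of the transitional-probability computation this abstract describes, TP(x→y) = frequency(xy) / frequency(x). The four trisyllabic nonsense words follow the style of Saffran, Aslin, and Newport (1996) but are placeholders, not the study's actual stimuli:

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """TP(x -> y) = freq(xy) / freq(x): the conditional probability
    that syllable y immediately follows syllable x in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

# Placeholder trisyllabic nonsense words (not the original corpus).
words = ["tupiro", "golabu", "bidaku", "padoti"]

# Build a continuous stream by concatenating words in random order,
# with no pauses or other acoustic cues to the boundaries.
random.seed(0)
stream = []
for _ in range(100):
    w = random.choice(words)
    stream.extend([w[i:i + 2] for i in range(0, 6, 2)])  # two-letter "syllables"

tps = transitional_probabilities(stream)
# Within a word, TP is 1.0 (e.g. "tu" is always followed by "pi");
# across a word boundary, TP is far lower (about 0.25 on average here),
# so dips in TP mark the likely word boundaries.
```

Test items could then be scored the same way: "words" are syllable triples whose internal TPs are high, while "part-words" span a boundary and so contain at least one low-TP transition, even when both occur equally often.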

11.
In six experiments with English‐learning infants, we examined the effects of variability in voice and foreign accent on word recognition. We found that 9‐month‐old infants successfully recognized words when two native English talkers with dissimilar voices produced test and familiarization items (Experiment 1). When the domain of variability was shifted to include variability in voice as well as in accent, 13‐, but not 9‐month‐olds, recognized a word produced across talkers when only one had a Spanish accent (Experiments 2 and 3). Nine‐month‐olds accommodated some variability in accent by recognizing words when the same Spanish‐accented talker produced familiarization and test items (Experiment 4). However, 13‐, but not 9‐month‐olds, could do so when test and familiarization items were produced by two distinct Spanish‐accented talkers (Experiments 5 and 6). These findings suggest that, although monolingual 9‐month‐olds have abstract phonological representations, these representations may not be flexible enough to accommodate the modifications found in foreign‐accented speech.

12.
This research examines the issue of speech segmentation in 9-month-old infants. Two cues known to carry probabilistic information about word boundaries were investigated: Phonotactic regularity and prosodic pattern. The stimuli used in four head turn preference experiments were bisyllabic CVC.CVC nonwords bearing primary stress in either the first or the second syllable (strong/weak vs. weak/strong). Stimuli also differed with respect to the phonotactic nature of their cross-syllabic C.C cluster. Clusters had either a low probability of occurring at a word juncture in fluent speech and a high probability of occurring inside of words ("within-word" clusters) or a high probability of occurring at a word juncture and a low probability of occurring inside of words ("between-word" clusters). Our results show that (1) 9-month-olds are sensitive to how phonotactic sequences typically align with word boundaries, (2) altering the stress pattern of the stimuli reverses infants' preference for phonotactic cluster types, (3) the prosodic cue to segmentation is more strongly relied upon than the phonotactic cue, and (4) a preference for high-probability between-word phonotactic sequences can be obtained either by placing stress on the second syllable of the stimuli or by inserting a pause between syllables. The implications of these results are discussed in light of an integrated multiple-cue approach to speech segmentation in infancy.

13.
The lexicon of 6‐month‐olds comprises names and body part words. Unlike names, body part words do not often occur in isolation in the input. This presents a puzzle: How have infants been able to pull out these words from the continuous stream of speech at such a young age? We hypothesize that caregivers' interactions directed at and on the infant's body may be at the root of their early acquisition of body part words. An artificial language segmentation study shows that experimenter‐provided synchronous tactile cues help 4‐month‐olds to find words in continuous speech. A follow‐up study suggests that this facilitation cannot be reduced to the highly social situation in which the directed interaction occurs. Taken together, these studies suggest that direct caregiver–infant interaction, exemplified in this study by touch cues, may play a key role in infants' ability to find word boundaries, and that early vocabulary items may consist of words often linked with caregiver touches. A video abstract of this article can be viewed at http://youtu.be/NfCj5ipatyE

14.
This study investigates the influence of the acoustic properties of vowels on 6‐ and 10‐month‐old infants’ speech preferences. The shape of the contour (bell or monotonic) and the duration (normal or stretched) of vowels were manipulated in words containing the vowels /i/ and /u/, and presented to infants using a two‐choice preference procedure. Experiment 1 examined contour shape: infants heard either normal‐duration bell‐shaped and monotonic contours, or the same two contours with stretched duration. The results show that 6‐month‐olds preferred bell to monotonic contours, whereas 10‐month‐olds preferred monotonic to bell contours. In Experiment 2, infants heard either normal‐duration and stretched bell contours, or normal‐duration and stretched monotonic contours. As in Experiment 1, infants showed age‐specific preferences, with 6‐month‐olds preferring stretched vowels, and 10‐month‐olds preferring normal‐duration vowels. Infants’ attention to the acoustic qualities of vowels, and to speech in general, undergoes a dramatic transformation in the final months of the first year, a transformation that aligns with the emergence of other developmental milestones in speech perception.

15.
Classification and Distribution of Sentence Stress in Mandarin Chinese
王韫佳  初敏  贺琳 《心理学报》2003,35(6):734-742
Two independently conducted stress-labeling experiments provided a preliminary investigation of the classification and distribution of sentence stress in Mandarin Chinese. Experiment 1 was a perception experiment in which 60 naive participants judged the prominence of syllable stress. Experiment 2 was a stress-category labeling experiment carried out by the three authors, in which sentence stress was divided into rhythmic stress and semantic stress. The categorical labeling of sentence stress in Experiment 2 was supported by the naive listeners' prominence judgments in Experiment 1, indicating that listeners can indeed perceive two distinct types of stress. The results further show that rhythmic stress tends to fall on the final syllable of the last prosodic word within a larger prosodic unit and co-occurs with an appropriate pause or lengthening, whereas the distribution of semantic stress bears little relation to the prosodic structure of the sentence.

16.
A crucial step for acquiring a native language vocabulary is the ability to segment words from fluent speech. English-learning infants first display some ability to segment words at about 7.5 months of age. However, their initial attempts at segmenting words only approximate those of fluent speakers of the language. In particular, 7.5-month-old infants are able to segment words that conform to the predominant stress pattern of English words. The ability to segment words with other stress patterns appears to require the use of other sources of information about word boundaries. By 10.5 months, English learners display sensitivity to additional cues to word boundaries such as statistical regularities, allophonic cues and phonotactic patterns. Infants’ word segmentation abilities undergo further development during their second year when they begin to link sound patterns with particular meanings. By 24 months, the speed and accuracy with which infants recognize words in fluent speech is similar to that of native adult listeners. This review describes how infants use multiple sources of information to locate word boundaries in fluent speech, thereby laying the foundations for language understanding.

17.
By 7 months of age, infants are able to learn rules based on the abstract relationships between stimuli (Marcus et al., 1999), but they are better able to do so when exposed to speech than to some other classes of stimuli. In the current experiments we ask whether multimodal stimulus information will aid younger infants in identifying abstract rules. We habituated 5‐month‐olds to simple abstract patterns (ABA or ABB) instantiated in coordinated looming visual shapes and speech sounds (Experiment 1), shapes alone (Experiment 2), and speech sounds accompanied by uninformative but coordinated shapes (Experiment 3). Infants showed evidence of rule learning only in the presence of the informative multimodal cues. We hypothesize that the additional evidence present in these multimodal displays was responsible for the success of younger infants in learning rules, congruent with both a Bayesian account and with the Intersensory Redundancy Hypothesis.

18.
How might young learners parse speech into linguistically relevant units? Sensitivity to prosodic markers of these segments is one possibility. Seven experiments examined infants' sensitivity to acoustic correlates of phrasal units in English. The results suggest that: (a) 9 month olds, but not 6 month olds, are attuned to cues that differentially mark speech that is artificially segmented at linguistically COINCIDENT as opposed to NONCOINCIDENT boundaries (Experiments 1 and 2); (b) the pattern holds across both subject phrases and predicate phrases and across samples of both Child- and Adult-directed speech (Experiments 3, 4, and 7); and (c) both 9 month olds and adults show the sensitivity even when most phonetic information is removed by low-pass filtering (Experiments 5 and 6). Acoustic analyses suggest that pitch changes and in some cases durational changes are potential cues that infants might be using to make their discriminations. These findings are discussed with respect to their implications for theories of language acquisition.

19.
20.
Can infants, in the very first stages of word learning, use their perceptual sensitivity to the phonetics of speech while learning words? Research to date suggests that infants of 14 months cannot learn two similar‐sounding words unless there is substantial contextual support. The current experiment advances our understanding of this failure by testing whether the source of infants’ difficulty lies in the learning or testing phase. Infants were taught to associate two similar‐sounding words with two different objects, and tested using a visual choice method rather than the standard Switch task. The results reveal that 14‐month‐olds are capable of learning and mapping two similar‐sounding labels; they can apply phonetic detail in new words. The findings are discussed in relation to infants’ concurrent failure, and the developmental transition to success, in the Switch task.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号