Similar documents
20 similar documents retrieved (search time: 31 ms)
1.
Mersad K, Nazzi T. Memory & Cognition, 2011, 39(6), 1085-1093
The present study explored the influence of a new metric of phonotactics on adults’ use of transitional probabilities to segment artificial languages. We exposed French native adults to continuous streams of trisyllabic nonsense words. High-frequency words had either high or low congruence with French phonotactics, in the sense that their syllables had either high or low positional frequency in French trisyllabic words. At test, participants heard low-frequency words and part-words, which differed in their transitional probabilities (high for words, low for part-words) but were matched for frequency and phonotactic congruency. Participants’ preference for words over part-words was found only in the high-congruence languages. These results establish that subtle phonotactic manipulations can influence adults’ use of transitional probabilities to segment speech and unambiguously demonstrate that this prior knowledge interferes directly with segmentation processes, in addition to affecting subsequent lexical decisions. Implications for a hierarchical theory of segmentation cues are discussed.

2.
Before infants can learn words, they must identify those words in continuous speech. Yet, the speech signal lacks obvious boundary markers, which poses a potential problem for language acquisition (Swingley, Philos Trans R Soc Lond Series B, Biol Sci 364(1536), 3617–3632, 2009). By the middle of the first year, infants seem to have solved this problem (Bergelson & Swingley, Proc Natl Acad Sci 109(9), 3253–3258, 2012; Jusczyk & Aslin, Cogn Psychol 29, 1–23, 1995), but it is unknown if segmentation abilities are present from birth, or if they only emerge after sufficient language exposure and/or brain maturation. Here, in two independent experiments, we looked at two cues known to be crucial for the segmentation of human speech: the computation of statistical co‐occurrences between syllables and the use of the language's prosody. After a brief familiarization of about 3 min with continuous speech, using functional near‐infrared spectroscopy, neonates showed differential brain responses on a recognition test to words that violated either the statistical (Experiment 1) or prosodic (Experiment 2) boundaries of the familiarization, compared to words that conformed to those boundaries. Importantly, word recognition in Experiment 2 occurred even in the absence of prosodic information at test, meaning that newborns encoded the phonological content independently of its prosody. These data indicate that humans are born with operational language processing and memory capacities and can use at least two types of cues to segment otherwise continuous speech, a key first step in language acquisition.

3.
Bilingual acquisition presents learning challenges beyond those found in monolingual environments, including the need to segment speech in two languages. Infants may use statistical cues, such as syllable‐level transitional probabilities, to segment words from fluent speech. In the present study we assessed monolingual and bilingual 14‐month‐olds’ abilities to segment two artificial languages using transitional probability cues. In Experiment 1, monolingual infants successfully segmented the speech streams when the languages were presented individually. However, monolinguals did not segment the same language stimuli when they were presented together in interleaved segments, mimicking the language switches inherent to bilingual speech. To assess the effect of real‐world bilingual experience on dual language speech segmentation, Experiment 2 tested infants with regular exposure to two languages using the same interleaved language stimuli as Experiment 1. The bilingual infants in Experiment 2 successfully segmented the languages, indicating that early exposure to two languages supports infants’ abilities to segment dual language speech using transitional probability cues. These findings support the notion that early bilingual exposure prepares infants to navigate challenging aspects of dual language environments as they begin to acquire two languages.

4.
Computation of Conditional Probability Statistics by 8-Month-Old Infants (total citations: 3; self-citations: 0; citations by others: 3)
A recent report demonstrated that 8-month-olds can segment a continuous stream of speech syllables, containing no acoustic or prosodic cues to word boundaries, into wordlike units after only 2 min of listening experience (Saffran, Aslin, & Newport, 1996). Thus, a powerful learning mechanism capable of extracting statistical information from fluent speech is available early in development. The present study extends these results by documenting the particular type of statistical computation, transitional (conditional) probability, used by infants to solve this word-segmentation task. An artificial language corpus, consisting of a continuous stream of trisyllabic nonsense words, was presented to 8-month-olds for 3 min. A postfamiliarization test compared the infants' responses to words versus part-words (trisyllabic sequences spanning word boundaries). The corpus was constructed so that test words and part-words were matched in frequency, but differed in their transitional probabilities. Infants showed reliable discrimination of words from part-words, thereby demonstrating rapid segmentation of continuous speech into words on the basis of transitional probabilities of syllable pairs.
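The transitional-probability computation that this abstract (and several others in the list) relies on is simple enough to sketch in a few lines of Python. The toy syllable lexicon below is a hypothetical stand-in, not the actual Saffran, Aslin, and Newport stimuli; the point is only that within-word transitions are perfectly predictive while transitions spanning a word boundary are not.

```python
from collections import Counter
import random

def transitional_probabilities(syllables):
    """Forward transitional probability TP(B | A) = count(A followed by B) / count(A)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy stream: two hypothetical trisyllabic "words" concatenated in random order,
# with no pauses -- the word boundaries exist only in the statistics.
random.seed(0)
lexicon = [("bi", "da", "ku"), ("pa", "do", "ti")]
stream = [syll for _ in range(200) for syll in random.choice(lexicon)]

tps = transitional_probabilities(stream)
print(tps[("bi", "da")])            # within-word transition: exactly 1.0
print(round(tps[("ku", "pa")], 2))  # transition across a word boundary: near chance (~0.5)
```

Words and part-words in the studies above are constructed to differ exactly on these pair values while being matched in overall frequency, so a learner who discriminates them must be tracking something like the conditional statistic computed here.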

5.
English‐learning 7.5‐month‐olds are heavily biased to perceive stressed syllables as word onsets. By 11 months, however, infants begin segmenting non‐initially stressed words from speech. Using the same artificial language methodology as Johnson and Jusczyk (2001), we explored the possibility that the emergence of this ability is linked to a decreased reliance on prosodic cues to word boundaries accompanied by an increased reliance on syllable distribution cues. In a baseline study, where only statistical cues to word boundaries were present, infants exhibited a familiarity preference for statistical words. When conflicting stress cues were added to the speech stream, infants exhibited a familiarity preference for stress words as opposed to statistical words. This was interpreted as evidence that 11‐month‐olds weight stress cues to word boundaries more heavily than statistical cues. Experiment 2 further investigated these results with a language containing convergent cues to word boundaries. The results of Experiment 2 were not conclusive. A third experiment using new stimuli and a different experimental design supported the conclusion that 11‐month‐olds rely more heavily on prosodic than statistical cues to word boundaries. We conclude that the emergence of the ability to segment non‐initially stressed words from speech is not likely to be tied to an increased reliance on syllable distribution cues relative to stress cues, but instead may emerge due to an increased reliance on and integration of a broad array of segmentation cues.

6.
Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants’ ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5‐ and 8‐month‐olds’ ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5‐month‐olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants’ statistical learning abilities may not be as robust as earlier studies have suggested.

7.
Word segmentation, detecting word boundaries in continuous speech, is a fundamental aspect of language learning that can occur solely by the computation of statistical and speech cues. Fifty‐four children underwent functional magnetic resonance imaging (fMRI) while listening to three streams of concatenated syllables that contained either high statistical regularities, high statistical regularities and speech cues, or no easily detectable cues. Significant signal increases over time in temporal cortices suggest that children utilized the cues to implicitly segment the speech streams. This was confirmed by the findings of a second fMRI run, in which children displayed reliably greater activity in the left inferior frontal gyrus when listening to ‘words’ that had occurred more frequently in the streams of speech they had just heard. Finally, comparisons between activity observed in these children and that in previously studied adults indicate significant developmental changes in the neural substrate of speech parsing.

8.
Language acquisition depends on the ability to detect and track the distributional properties of speech. Successful acquisition also necessitates detecting changes in those properties, which can occur when the learner encounters different speakers, topics, dialects, or languages. When encountering multiple speech streams with different underlying statistics but overlapping features, how do infants keep track of the properties of each speech stream separately? In four experiments, we tested whether 8‐month‐old monolingual infants (N = 144) can track the underlying statistics of two artificial speech streams that share a portion of their syllables. We first presented each stream individually. We then presented the two speech streams in sequence, without contextual cues signaling the different speech streams, and subsequently added pitch and accent cues to help learners track each stream separately. The results reveal that monolingual infants experience difficulty tracking the statistical regularities in two speech streams presented sequentially, even when provided with contextual cues intended to facilitate separation of the speech streams. We discuss the implications of our findings for understanding how infants learn and separate the input when confronted with multiple statistical structures.

9.
Individual variability in infants’ language processing is partly explained by environmental factors, like the quantity of parental speech input, as well as by infant‐specific factors, like speech production. Here, we explore how these factors affect infant word segmentation. We used an artificial language to ensure that only statistical regularities (like transitional probabilities between syllables) could cue word boundaries, and then asked how the quantity of parental speech input and infants’ babbling repertoire predict infants’ abilities to use these statistical cues. We replicated prior reports showing that 8‐month‐old infants use statistical cues to segment words, with a preference for part‐words over words (a novelty effect). Crucially, 8‐month‐olds with larger novelty effects had received more speech input at 4 months and had greater production abilities at 8 months. These findings establish for the first time that the ability to extract statistical information from speech correlates with individual factors in infancy, like early speech experience and language production. Implications of these findings for understanding individual variability in early language acquisition are discussed.

10.
To understand language, humans must encode information from rapid, sequential streams of syllables – tracking their order and organizing them into words, phrases, and sentences. We used Near‐Infrared Spectroscopy (NIRS) to determine whether human neonates are born with the capacity to track the positions of syllables in multisyllabic sequences. After familiarization with a six‐syllable sequence, the neonate brain responded to the change (as shown by an increase in oxy‐hemoglobin) when the two edge syllables switched positions but not when two middle syllables switched positions (Experiment 1), indicating that they encoded the syllables at the edges of sequences better than those in the middle. Moreover, when a 25 ms pause was inserted between the middle syllables as a segmentation cue, neonates’ brains were sensitive to the change (Experiment 2), indicating that subtle cues in speech can signal a boundary, with enhanced encoding of the syllables located at the edges of that boundary. These findings suggest that neonates’ brains can encode information from multisyllabic sequences and that this encoding is constrained. Moreover, subtle segmentation cues in a sequence of syllables provide a mechanism with which to accurately encode positional information from longer sequences. Tracking the order of syllables is necessary to understand language and our results suggest that the foundations for this encoding are present at birth.

11.
What is the nature of the representations acquired in implicit statistical learning? Recent results in the field of language learning have shown that adults and infants are able to find the words of an artificial language when exposed to a continuous auditory sequence consisting of a random ordering of these words. Such performance can only be based on processing the transitional probabilities between sequence elements. Two different kinds of mechanisms may account for these data: Participants may either parse the sequence into smaller chunks corresponding to the words of the artificial language, or they may become progressively sensitive to the actual values of the transitional probabilities between syllables. The two accounts are difficult to differentiate because they make similar predictions in comparable experimental settings. In this study, we present two experiments that aimed at contrasting these two theories. In these experiments, participants had to learn two sets of pseudo-linguistic regularities: Language 1 (L1) and Language 2 (L2), presented in the context of a serial reaction time task. L1 and L2 were either unrelated (none of the syllabic transitions of L1 were present in L2), or partly related (some of the intra-word transitions of L1 were used as inter-word transitions of L2). The two accounts make opposite predictions in these two settings. Our results indicate that the nature of the representations depends on the learning condition. When cues were presented to facilitate parsing of the sequence, participants learned the words of the artificial language. However, when no cues were provided, performance was strongly influenced by the employed transitional probabilities.

12.
Evidence from infant studies indicates that language learning can be facilitated by multimodal cues. We extended this observation to adult language learning by studying the effects of simultaneous visual cues (nonassociated object images) on speech segmentation performance. Our results indicate that segmentation of new words from a continuous speech stream is facilitated by simultaneous visual input that is presented at or near syllables that exhibit the low transitional probability indicative of word boundaries. This indicates that temporal audio-visual contiguity helps in directing attention to word boundaries at the earliest stages of language learning. Off-boundary or arrhythmic picture sequences did not affect segmentation performance, suggesting that the language learning system can effectively disregard noninformative visual information. Detection of temporal contiguity between multimodal stimuli may be useful in both infants and second-language learners not only for facilitating speech segmentation, but also for detecting word–object relationships in natural environments.
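The premise shared by this abstract and several others, that word boundaries sit at syllables with low transitional probability, suggests a minimal segmentation procedure: compute the forward TPs over the stream and posit a boundary wherever the probability dips below a threshold. The lexicon and threshold below are illustrative assumptions, not taken from any of the studies listed.

```python
from collections import Counter
import random

def segment_by_tp(syllables, threshold=0.8):
    """Place a word boundary after syllable A whenever TP(B | A) falls below threshold."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if pair_counts[(a, b)] / first_counts[a] < threshold:
            words.append("".join(current))  # low TP: close off the current word
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Hypothetical three-word lexicon, concatenated into a pause-free stream.
random.seed(1)
lexicon = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko", "ti")]
stream = [syll for _ in range(300) for syll in random.choice(lexicon)]

recovered = set(segment_by_tp(stream))
print(recovered)  # the three lexicon words, since only boundary transitions dip below 0.8
```

Within-word TPs here are exactly 1.0 and boundary TPs hover around 1/3, so any threshold between those values recovers the lexicon; real speech, as the abstracts on varying word length and conflicting stress cues show, is far less forgiving.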

13.
A series of 15 experiments was conducted to explore English-learning infants' capacities to segment bisyllabic words from fluent speech. The studies in Part I focused on 7.5-month-olds' abilities to segment words with strong/weak stress patterns from fluent speech. The infants demonstrated an ability to detect strong/weak target words in sentential contexts. Moreover, the findings indicated that the infants were responding to the whole words and not to just their strong syllables. In Part II, a parallel series of studies was conducted examining 7.5-month-olds' abilities to segment words with weak/strong stress patterns. In contrast with the results for strong/weak words, 7.5-month-olds appeared to missegment weak/strong words. They demonstrated a tendency to treat strong syllables as markers of word onsets. In addition, when weak/strong words co-occurred with a particular following weak syllable (e.g., "guitar is"), 7.5-month-olds appeared to misperceive these as strong/weak words (e.g., "taris"). The studies in Part III examined the abilities of 10.5-month-olds to segment weak/strong words from fluent speech. These older infants were able to segment weak/strong words correctly from the various contexts in which they appeared. Overall, the findings suggest that English learners may rely heavily on stress cues when they begin to segment words from fluent speech. However, within a few months' time, infants learn to integrate multiple sources of information about the likely boundaries of words in fluent speech.

14.
Statistical learning allows listeners to track transitional probabilities among syllable sequences and use these probabilities for subsequent speech segmentation. Recent studies have shown that other sources of information, such as rhythmic cues, can modulate the dependencies extracted via statistical computation. In this study, we explored how syllables made salient by a pitch rise affect the segmentation of trisyllabic words from an artificial speech stream by native speakers of three different languages (Spanish, English, and French). Results showed that, whereas performance of French participants did not significantly vary across stress positions (likely due to language-specific rhythmic characteristics), the segmentation performance of Spanish and English listeners was unaltered when syllables in word-initial and word-final positions were salient, but it dropped to chance level when salience was on the medial syllable. We argue that pitch rise in word-medial syllables draws attentional resources away from word boundaries, thus decreasing segmentation effectiveness.

15.
Prosodic cues drive speech segmentation and guide syllable discrimination. However, less is known about the attentional mechanisms underlying an infant's ability to benefit from prosodic cues. This study investigated how 6- to 8-month-old Italian infants allocate their attention to strong vs. weak syllables after familiarization with four repeats of a single CV sequence with alternating strong and weak syllables (different syllables on each trial). In the discrimination test-phase, either the strong or the weak syllable was replaced by a pure tone matching the suprasegmental characteristics of the segmental syllable, i.e., duration, loudness and pitch, whereas the familiarized stimulus was presented as a control. By using an eye-tracker, attention deployment (fixation times) and cognitive resource allocation (pupil dilation) were measured under conditions of high and low saliency that corresponded to the strong and weak syllabic changes, respectively. Italian learning infants were found to look longer and also to show, through pupil dilation, more attention to changes in strong syllable replacement rather than weak syllable replacement, compared to the control condition. These data offer insights into the strategies used by infants to deploy their attention towards segmental units guided by salient prosodic cues, like the stress pattern of syllables, during speech segmentation.

16.
Previous research suggests that artificial-language learners exposed to quasi-continuous speech can learn that the first and the last syllables of words have to belong to distinct classes (e.g., Endress & Bonatti, 2007; Peña, Bonatti, Nespor, & Mehler, 2002). The mechanisms of these generalizations, however, are debated. Here we show that participants learn such generalizations only when the crucial syllables are in edge positions (i.e., the first and the last), but not when they are in medial positions (i.e., the second and the fourth in pentasyllabic items). In contrast to the generalizations, participants readily perform statistical analyses also in word middles. In analogy to sequential memory, we suggest that participants extract the generalizations using a simple but specific mechanism that encodes the positions of syllables that occur in edges. Simultaneously, they use another mechanism to track the syllable distribution in the speech streams. In contrast to previous accounts, this model explains why the generalizations are faster than the statistical computations, require additional cues, and break down under different conditions, and why they can be performed at all. We also show that similar edge-based mechanisms may explain many results in artificial-grammar learning and also various linguistic observations.
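The edge-based mechanism proposed in this abstract can be caricatured in a few lines: an acceptor that checks only the first and last positions of an item generalizes to new middles for free, while the very same syllables placed in medial positions carry no weight. The syllable classes below are invented for illustration and are not the study's actual materials.

```python
# Hypothetical syllable classes for word-initial and word-final positions.
CLASS_FIRST = {"be", "du", "ga"}
CLASS_LAST = {"ki", "to", "mu"}

def accepts(item):
    """Edge-based acceptor: only the first and last syllables are consulted;
    medial syllables (positions 2..n-1) are ignored entirely."""
    return item[0] in CLASS_FIRST and item[-1] in CLASS_LAST

# A pentasyllabic item with novel middles still passes, because its edges match...
print(accepts(["be", "xa", "ro", "ne", "ki"]))  # True
# ...while the same class members buried in medial positions do not help.
print(accepts(["xa", "be", "ro", "ki", "ne"]))  # False
```

A learner equipped with such a positional check (alongside a separate TP tracker for word middles) would show exactly the dissociation reported above: fast generalization at edges, distributional sensitivity everywhere.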

17.
Functional magnetic resonance imaging (fMRI) was used to assess neural activation as participants learned to segment continuous streams of speech containing syllable sequences varying in their transitional probabilities. Speech streams were presented in four runs, each followed by a behavioral test to measure the extent of learning over time. Behavioral performance indicated that participants could discriminate statistically coherent sequences (words) from less coherent sequences (partwords). Individual rates of learning, defined as the difference in ratings for words and partwords, were used as predictors of neural activation to ask which brain areas showed activity associated with these measures. Results showed significant activity in the pars opercularis and pars triangularis regions of the left inferior frontal gyrus (LIFG). The relationship between these findings and prior work on the neural basis of statistical learning is discussed, and parallels to the frontal/subcortical network involved in other forms of implicit sequence learning are considered.

18.
Speech is produced mainly in continuous streams containing several words. Listeners can use the transitional probability (TP) between adjacent and non-adjacent syllables to segment "words" from a continuous stream of artificial speech, much as they use TPs to organize a variety of perceptual continua. It is thus possible that a general-purpose statistical device exploits any speech unit to achieve segmentation of speech streams. Alternatively, language may limit what representations are open to statistical investigation according to their specific linguistic role. In this article, we focus on vowels and consonants in continuous speech. We hypothesized that vowels and consonants in words carry different kinds of information, the latter being more tied to word identification and the former to grammar. We thus predicted that in a word identification task involving continuous speech, learners would track TPs among consonants, but not among vowels. Our results show a preferential role for consonants in word identification.

19.
Distributional information is a potential cue for learning syntactic categories. Recent studies demonstrate a developmental trajectory in the level of abstraction of distributional learning in young infants. Here we investigate the effect of prosody on infants' learning of adjacent relations between words. Twelve‐ to thirteen‐month‐old infants were exposed to an artificial language comprised of 3‐word‐sentences of the form aXb and cYd, where X and Y words differed in the number of syllables. Training sentences contained a prosodic boundary between either the first and the second word or the second and the third word. Subsequently, infants were tested on novel test sentences that contained new X and Y words and also contained a flat prosody with no grouping cues. Infants successfully discriminated between novel grammatical and ungrammatical sentences, suggesting that the learned adjacent relations can be abstracted across words and prosodic conditions. Under the conditions tested, prosody may be only a weak constraint on syntactic categorization. Copyright © 2011 John Wiley & Sons, Ltd.

20.
Finn AS, Hudson Kam CL. Cognition, 2008, 108(2), 477-499
We investigated whether adult learners' knowledge of phonotactic restrictions on word forms from their first language impacts their ability to use statistical information to segment words in a novel language. Adults were exposed to a speech stream where English phonotactics and phoneme co-occurrence information conflicted. A control where these did not conflict was also run. Participants chose between words defined by novel statistics and words that are phonotactically possible in English, but had much lower phoneme contingencies. Control participants selected words defined by statistics while experimental participants did not. This result held up with increases in exposure and when segmentation was aided by telling participants a word prior to exposure. It was not the case that participants simply preferred English-sounding words, however; when the stimuli contained very short pauses, participants were able to learn the novel words despite the fact that they violated English phonotactics. Results suggest that prior linguistic knowledge can interfere with learners' abilities to segment words from running speech using purely statistical cues at initial exposure.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  ICP license: 京ICP备09084417号