Similar Articles
20 similar articles found.
1.
A total of 78 adult participants were asked to read a sample of strings generated by a finite state grammar and, immediately after reading each string, to mark the natural segmentation positions with a slash bar. They repeated the same task after a phase of familiarization with the material, which consisted, depending on the group involved, of learning items by rote, performing a short-term matching task, or searching for the rules of the grammar. Participants formed the same number of cognitive units before and after the training phase, thus indicating that they did not tend to form increasingly large units. However, the number of different units reliably decreased, whatever the task that participants had performed during familiarization. This result indicates that segmentation was increasingly consistent with the structure of the grammar. A theoretical account of this phenomenon, based on ubiquitous principles of associative memory and learning, is proposed. This account is supported by the ability of a computer model implementing those principles, PARSER, to reproduce the observed pattern of results. The implications of this study for developmental theories aimed at accounting for how children become able to parse sensory input into physically and linguistically relevant units are discussed.

2.
Mirman, D., Magnuson, J. S., Estes, K. G., & Dixon, J. A. (2008). Cognition, 108(1), 271-280.
Many studies have shown that listeners can segment words from running speech based on conditional probabilities of syllable transitions, suggesting that this statistical learning could be a foundational component of language learning. However, few studies have shown a direct link between statistical segmentation and word learning. We examined this possible link in adults by following a statistical segmentation exposure phase with an artificial lexicon learning phase. Participants were able to learn all novel object-label pairings, but pairings were learned faster when labels contained high probability (word-like) or non-occurring syllable transitions from the statistical segmentation phase than when they contained low probability (boundary-straddling) syllable transitions. This suggests that, for adults, labels inconsistent with expectations based on statistical learning are harder to learn than consistent or neutral labels. In contrast, a previous study found that infants learn consistent labels, but not inconsistent or neutral labels.

3.
Much effort has gone into constructing models of how children segment speech and thereby discover the words of their language. Much effort has also gone into constructing models of how adults access their mental lexicons and thereby segment speech into words. In this paper, I explore the possibility of a model that could account for both word discovery by children and on-line segmentation by adults. In particular, I discuss extensions to the distributional regularity (DR) model of Brent and Cartwright (1996) that could yield an account of on-line segmentation as well as word discovery.

4.
5.
A number of studies have shown that people exploit transitional probabilities between successive syllables to segment a stream of artificial continuous speech into words. It is often assumed that what is actually exploited are the forward transitional probabilities (given XY, the probability that X will be followed by Y), even though the backward transitional probabilities (the probability that Y has been preceded by X) were equally informative about word structure in the languages involved in those studies. In two experiments, we showed that participants were able to learn the words from an artificial speech stream when the only available cues were the backward transitional probabilities. Learning is as good under those conditions as when the only available cues are the forward transitional probabilities. Implications for some current models of word segmentation, particularly the simple recurrent networks and PARSER models, are discussed.
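The two statistics contrasted in this abstract can be computed directly from a syllable stream. The following is a minimal sketch, not any of the cited models; the three trisyllabic nonsense words and the stream ordering are invented for illustration:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward TP: P(Y | X) = count(XY) / count(X as a pair onset).
    Backward TP: P(X | Y) = count(XY) / count(Y as a pair offset)."""
    pairs = list(zip(syllables, syllables[1:]))
    pair_counts = Counter(pairs)
    onset_counts = Counter(x for x, _ in pairs)
    offset_counts = Counter(y for _, y in pairs)
    forward = {p: c / onset_counts[p[0]] for p, c in pair_counts.items()}
    backward = {p: c / offset_counts[p[1]] for p, c in pair_counts.items()}
    return forward, backward

# Toy stream built from three invented trisyllabic nonsense words.
words = ["pabiku", "tibudo", "golatu"]
stream = "".join([words[0], words[1], words[2], words[0], words[2], words[1]])
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]
fwd, bwd = transitional_probabilities(syllables)
# Within-word pairs have high TPs; word-boundary pairs have lower TPs,
# and in a suitably constructed stream this holds in both directions.
```

In this toy stream the within-word pair pa-bi has a forward TP of 1.0, whereas the boundary pair ku-ti drops to 0.5 in both directions, mirroring the informational equivalence of the two statistics that the abstract describes.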

6.
It is well established that variation in caregivers’ speech is associated with language outcomes, yet little is known about the learning principles that mediate these effects. This longitudinal study (n = 27) explores whether Spanish‐learning children's early experiences with language predict efficiency in real‐time comprehension and vocabulary learning. Measures of mothers’ speech at 18 months were examined in relation to children's speech processing efficiency and reported vocabulary at 18 and 24 months. Children of mothers who provided more input at 18 months knew more words and were faster in word recognition at 24 months. Moreover, multiple regression analyses indicated that the influences of caregiver speech on speed of word recognition and vocabulary were largely overlapping. This study provides the first evidence that input shapes children's lexical processing efficiency and that vocabulary growth and increasing facility in spoken word comprehension work together to support the uptake of the information that rich input affords the young language learner.

7.
Saffran, J. R. (2001). Cognition, 81(2), 149-169.
One of the first problems confronting infant language learners is word segmentation: discovering the boundaries between words. Prior research suggests that 8-month-old infants can detect the statistical patterns that serve as a cue to word boundaries. However, the representational structure of the output of this learning process is unknown. This research assessed the extent to which statistical learning generates novel word-like units, rather than probabilistically-related strings of sounds. Eight-month-old infants were familiarized with a continuous stream of nonsense words with no acoustic cues to word boundaries. A post-familiarization test compared the infants' responses to words versus part-words (sequences spanning a word boundary) embedded either in simple English contexts familiar to the infants (e.g. "I like my tibudo"), or in matched nonsense frames (e.g. "zy fike ny tibudo"). Listening preferences were affected by the context (English versus nonsense) in which the items from the familiarization phase were embedded during testing. A second experiment confirmed that infants can discriminate the simple English contexts and the matched nonsense frames used in Experiment 1. The third experiment replicated the results of Experiment 1 by contrasting the English test frames with non-linguistic frames generated from tone sequences. The results support the hypothesis that statistical learning mechanisms generate word-like units with some status relative to the native language.

8.
At 14 months, children appear to struggle to apply their fairly well-developed speech perception abilities to learning similar sounding words (e.g., bih/dih; Stager & Werker, 1997). However, variability in nonphonetic aspects of the training stimuli seems to aid word learning at this age. Extant theories of early word learning cannot account for this benefit of variability. We offer a simple explanation for this range of effects based on associative learning. Simulations suggest that if infants encode both noncontrastive information (e.g., cues to speaker voice) and meaningful linguistic cues (e.g., place of articulation or voicing), then associative learning mechanisms predict these variability effects in early word learning. Crucially, this means that despite the importance of task variables in predicting performance, this body of work shows that phonological categories are still developing at this age, and that the structure of noninformative cues has critical influences on word learning abilities.

9.
Giroux, I., & Rey, A. (2009). Cognitive Science, 33(2), 260-272.
Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (i.e., simple recurrent networks: Elman, 1990; and PARSER: Perruchet & Vinter, 1998) in an experiment where we compare the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with PARSER's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes.

10.
Evidence from infant studies indicates that language learning can be facilitated by multimodal cues. We extended this observation to adult language learning by studying the effects of simultaneous visual cues (nonassociated object images) on speech segmentation performance. Our results indicate that segmentation of new words from a continuous speech stream is facilitated by simultaneous visual input that is presented at or near syllables that exhibit the low transitional probability indicative of word boundaries. This indicates that temporal audio-visual contiguity helps in directing attention to word boundaries at the earliest stages of language learning. Off-boundary or arrhythmic picture sequences did not affect segmentation performance, suggesting that the language learning system can effectively disregard noninformative visual information. Detection of temporal contiguity between multimodal stimuli may be useful in both infants and second-language learners not only for facilitating speech segmentation, but also for detecting word–object relationships in natural environments.

11.
Shortlist B: a Bayesian model of continuous speech recognition
A Bayesian model of continuous speech recognition is presented. It is based on Shortlist (D. Norris, 1994; D. Norris, J. M. McQueen, A. Cutler, & S. Butterfield, 1997) and shares many of its key assumptions: parallel competitive evaluation of multiple lexical hypotheses, phonologically abstract prelexical and lexical representations, a feedforward architecture with no online feedback, and a lexical segmentation algorithm based on the viability of chunks of the input as possible words. Shortlist B is radically different from its predecessor in two respects. First, whereas Shortlist was a connectionist model based on interactive-activation principles, Shortlist B is based on Bayesian principles. Second, the input to Shortlist B is no longer a sequence of discrete phonemes; it is a sequence of multiple phoneme probabilities over 3 time slices per segment, derived from the performance of listeners in a large-scale gating study. Simulations are presented showing that the model can account for key findings: data on the segmentation of continuous speech, word frequency effects, the effects of mispronunciations on word recognition, and evidence on lexical involvement in phonemic decision making. The success of Shortlist B suggests that listeners make optimal Bayesian decisions during spoken-word recognition.
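The Bayesian decision rule underlying this kind of model can be illustrated with a toy sketch. Everything below is invented for illustration (the three-word lexicon, its priors, and the per-segment phoneme likelihoods); Shortlist B's real input is three probability slices per segment derived from gating data. The sketch shows only how P(word | input) ∝ P(input | word) · P(word) adjudicates among parallel lexical hypotheses:

```python
# Hypothetical toy lexicon with prior probabilities (standing in for word frequency).
lexicon = {"cat": 0.6, "cap": 0.3, "cad": 0.1}

# Invented per-segment phoneme likelihoods; the final segment is acoustically ambiguous.
input_likelihoods = [
    {"k": 0.9, "g": 0.1},
    {"ae": 1.0},
    {"t": 0.5, "p": 0.4, "d": 0.1},
]

# Phonemic forms of the candidate words.
phonemes = {"cat": ["k", "ae", "t"], "cap": ["k", "ae", "p"], "cad": ["k", "ae", "d"]}

def posterior(lexicon, phonemes, likelihoods):
    """P(word | input) ∝ P(input | word) * P(word), normalized over the lexicon."""
    scores = {}
    for word, prior in lexicon.items():
        like = 1.0
        for seg_probs, ph in zip(likelihoods, phonemes[word]):
            like *= seg_probs.get(ph, 0.0)  # likelihood of each segment given the word
        scores[word] = like * prior
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}

post = posterior(lexicon, phonemes, input_likelihoods)
```

With this input, "cat" wins both because its final phoneme is slightly more likely acoustically and because its prior is highest, which is the sense in which the recognizer's decision is "optimal" given its evidence.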

12.
To what extent can language acquisition be explained in terms of different associative learning mechanisms? It has been hypothesized that distributional regularities in spoken languages are strong enough to elicit statistical learning about dependencies among speech units. Distributional regularities could be a useful cue for word learning even without rich language‐specific knowledge. However, it is not clear how strong and reliable the distributional cues are that humans might use to segment speech. We investigate cross‐linguistic viability of different statistical learning strategies by analyzing child‐directed speech corpora from nine languages and by modeling possible statistics‐based speech segmentations. We show that languages vary as to which statistical segmentation strategies are most successful. The variability of the results can be partially explained by systematic differences between languages, such as rhythmical differences. The results confirm previous findings that different statistical learning strategies are successful in different languages and suggest that infants may have to primarily rely on non‐statistical cues when they begin their process of speech segmentation.

13.
Computation of Conditional Probability Statistics by 8-Month-Old Infants
A recent report demonstrated that 8-month-olds can segment a continuous stream of speech syllables, containing no acoustic or prosodic cues to word boundaries, into wordlike units after only 2 min of listening experience (Saffran, Aslin, & Newport, 1996). Thus, a powerful learning mechanism capable of extracting statistical information from fluent speech is available early in development. The present study extends these results by documenting the particular type of statistical computation, transitional (conditional) probability, used by infants to solve this word-segmentation task. An artificial language corpus, consisting of a continuous stream of trisyllabic nonsense words, was presented to 8-month-olds for 3 min. A postfamiliarization test compared the infants' responses to words versus part-words (trisyllabic sequences spanning word boundaries). The corpus was constructed so that test words and part-words were matched in frequency, but differed in their transitional probabilities. Infants showed reliable discrimination of words from part-words, thereby demonstrating rapid segmentation of continuous speech into words on the basis of transitional probabilities of syllable pairs.
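A bracketing-style segmenter that operationalizes this computation, posting a word boundary wherever the forward transitional probability between adjacent syllables dips, can be sketched as follows. The nonsense words, the stream ordering, and the 0.8 threshold are all invented for illustration; this is not the infants' mechanism, only a demonstration that the statistic suffices:

```python
from collections import Counter

def forward_tps(syllables):
    """Forward TP for each adjacent pair: P(Y | X) = count(XY) / count(X as onset)."""
    pairs = list(zip(syllables, syllables[1:]))
    counts = Counter(pairs)
    onsets = Counter(x for x, _ in pairs)
    return {p: c / onsets[p[0]] for p, c in counts.items()}

def segment(syllables, threshold=0.8):
    """Post a word boundary wherever the forward TP falls below threshold."""
    tps = forward_tps(syllables)
    words, current = [], [syllables[0]]
    for x, y in zip(syllables, syllables[1:]):
        if tps[(x, y)] < threshold:
            words.append("".join(current))
            current = []
        current.append(y)
    words.append("".join(current))
    return words

# A pause-free stream of three invented trisyllabic words in varied order,
# so that within-word TPs are 1.0 while boundary TPs are at most 2/3.
order = ["pabiku", "tibudo", "golatu", "tibudo", "pabiku",
         "golatu", "pabiku", "tibudo", "golatu"]
stream = "".join(order)
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]
recovered = segment(syllables)
```

On this stream the segmenter recovers the original word sequence exactly, even though words and part-words could be matched in frequency in a longer corpus, which is the contrast the experiment exploits.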

14.
In 4 experiments, adults were familiarized with utterances from an artificial language. Short utterances occurred both in isolation and as part of a longer utterance, either at the edge or in the middle of the longer utterance. After familiarization, participants' recognition memory for fragments of the long utterance was tested. Recognition was greatest for the remainder of the longer utterance after extraction of the short utterance, but only when the short utterance was located at the edge of the long utterance. These results support the incremental distributional regularity optimization (INCDROP) model of speech segmentation and word discovery, which asserts that people segment utterances into familiar and new wordlike units in such a way as to minimize the burden of processing new units. INCDROP suggests that segmentation and word discovery during native-language acquisition may be driven by recognition of familiar units from the start, with no need for transient bootstrapping mechanisms.

15.
We describe an account of lexically guided tuning of speech perception based on interactive processing and Hebbian learning. Interactive feedback provides lexical information to prelexical levels, and Hebbian learning uses that information to retune the mapping from auditory input to prelexical representations of speech. Simulations of an extension of the TRACE model of speech perception are presented that demonstrate the efficacy of this mechanism. Further simulations show that acoustic similarity can account for the patterns of speaker generalization. This account addresses the role of lexical information in guiding both perception and learning with a single set of principles of information propagation.
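The retuning mechanism can be caricatured with two acoustic features and two prelexical units. Everything here is invented for illustration (the units, the feedback pattern, the 0.1 learning rate); the actual account is an extension of TRACE, not this two-by-two toy. The point is only that Hebbian updates driven by lexical feedback shift how an ambiguous input is subsequently mapped:

```python
def hebbian_step(weights, input_vec, feedback_vec, lr=0.1):
    """Strengthen connections between active acoustic features (pre)
    and the prelexical units whose activity lexical feedback boosts (post)."""
    for i, pre in enumerate(input_vec):
        for j, post in enumerate(feedback_vec):
            weights[i][j] += lr * pre * post
    return weights

def activations(weights, input_vec):
    """Prelexical unit activations: the input vector times the weight matrix."""
    return [sum(input_vec[i] * weights[i][j] for i in range(len(input_vec)))
            for j in range(len(weights[0]))]

# Features: [s-like energy, f-like energy]; prelexical units: [/s/, /f/].
weights = [[1.0, 0.0], [0.0, 1.0]]
ambiguous = [0.5, 0.5]          # a sound midway between /s/ and /f/
lexical_feedback = [1.0, 0.0]   # lexical context indicates the word contains /s/

before = activations(weights, ambiguous)   # tied between /s/ and /f/
for _ in range(5):
    hebbian_step(weights, ambiguous, lexical_feedback)
after = activations(weights, ambiguous)    # the /s/ unit now responds more strongly
```

After a handful of lexically supervised exposures, the same ambiguous token activates the /s/ unit more than the /f/ unit, which is the retuning signature the simulations demonstrate at full scale.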

16.
Speech, by its very nature, is a time-based phenomenon. Speech sounds are temporally distributed, with the presentation of one sound roughly conditioned by the fading of the previous one. In this review, three classes of models are discussed with respect to the sequential nature of speech. It is argued that the three resulting conceptions of time are linked to the type of segmentation process proposed by these models to deal with speech continuity. In the first one, lexical activation is viewed as perfectly synchronized with the temporal deployment of speech. This type of model corresponds to the traditional left-to-right (proactive) account of lexical processing. Because serious segmentation problems exist for such an approach (e.g., car and card are embedded in cardinal), the second type of model treats word recognition as the result of a mechanism that sometimes delays commitment on word identity beyond word offset. Lexical activation, instead of shadowing the unfolding of time, lags behind it until an unambiguous decision can be made. The temporarily unprocessed information is stored in a memory buffer. In the third approach, a prosodic cue (lexical stress) contributes actively to speech segmentation and lexical processing. Every stressed syllable encountered in the signal is postulated as a word onset and thus constitutes the starting point of lexical activation. However, with non-initial-stressed words, retroactive procedures going “back in time” must be used. Finally, the use of time (including proactive, delayed, and retroactive procedures) is discussed in light of cross-linguistic phonological differences.

17.
A central question in psycholinguistic research is how listeners isolate words from connected speech despite the paucity of clear word-boundary cues in the signal. A large body of empirical evidence indicates that word segmentation is promoted by both lexical (knowledge-derived) and sublexical (signal-derived) cues. However, an account of how these cues operate in combination or in conflict is lacking. The present study fills this gap by assessing speech segmentation when cues are systematically pitted against each other. The results demonstrate that listeners do not assign the same power to all segmentation cues; rather, cues are hierarchically integrated, with descending weights allocated to lexical, segmental, and prosodic cues. Lower level cues drive segmentation when the interpretive conditions are altered by a lack of contextual and lexical information or by white noise. Taken together, the results call for an integrated, hierarchical, and signal-contingent approach to speech segmentation.

18.
Räsänen, O. (2011). Cognition, (2), 149-176.
Word segmentation from continuous speech is a difficult task that is faced by human infants when they start to learn their native language. Several studies indicate that infants might use several different cues to solve this problem, including intonation, linguistic stress, and transitional probabilities between subsequent speech sounds. In this work, a computational model for word segmentation and learning of primitive lexical items from continuous speech is presented. The model does not utilize any a priori linguistic or phonemic knowledge such as phones, phonemes or articulatory gestures, but computes transitional probabilities between atomic acoustic events in order to detect recurring patterns in speech. Experiments with the model show that word segmentation is possible without any knowledge of linguistically relevant structures, and that the learned ungrounded word models show a relatively high selectivity towards specific words or frequently co-occurring combinations of short words.

19.
Statistical learning allows listeners to track transitional probabilities among syllable sequences and use these probabilities for subsequent speech segmentation. Recent studies have shown that other sources of information, such as rhythmic cues, can modulate the dependencies extracted via statistical computation. In this study, we explored how syllables made salient by a pitch rise affect the segmentation of trisyllabic words from an artificial speech stream by native speakers of three different languages (Spanish, English, and French). Results showed that, whereas performance of French participants did not significantly vary across stress positions (likely due to language-specific rhythmic characteristics), the segmentation performance of Spanish and English listeners was unaltered when syllables in word-initial and word-final positions were salient, but it dropped to chance level when salience was on the medial syllable. We argue that pitch rise in word-medial syllables draws attentional resources away from word boundaries, thus decreasing segmentation effectiveness.

20.
Lew-Williams, C., & Saffran, J. R. (2012). Cognition, 122(2), 241-246.
Infants have been described as ‘statistical learners’ capable of extracting structure (such as words) from patterned input (such as language). Here, we investigated whether prior knowledge influences how infants track transitional probabilities in word segmentation tasks. Are infants biased by prior experience when engaging in sequential statistical learning? In a laboratory simulation of learning across time, we exposed 9- and 10-month-old infants to a list of either disyllabic or trisyllabic nonsense words, followed by a pause-free speech stream composed of a different set of disyllabic or trisyllabic nonsense words. Listening times revealed successful segmentation of words from fluent speech only when words were uniformly disyllabic or trisyllabic throughout both phases of the experiment. Hearing trisyllabic words during the pre-exposure phase derailed infants’ abilities to segment speech into disyllabic words, and vice versa. We conclude that prior knowledge about word length equips infants with perceptual expectations that facilitate efficient processing of subsequent language input.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号