Similar Documents
Found 20 similar documents.
1.
Prior research has shown that people can learn many nouns (i.e., word–object mappings) from a short series of ambiguous situations containing multiple words and objects. For successful cross‐situational learning, people must approximately track which words and referents co‐occur most frequently. This study investigates the effects of allowing some word‐referent pairs to appear more frequently than others, as is true in real‐world learning environments. Surprisingly, high‐frequency pairs are not always learned better, but can also boost learning of other pairs. Using a recent associative model (Kachergis, Yu, & Shiffrin, 2012), we explain how mixing pairs of different frequencies can bootstrap late learning of the low‐frequency pairs based on early learning of higher frequency pairs. We also manipulate contextual diversity, the number of pairs a given pair appears with across training, since it is naturalistically confounded with frequency. The associative model has competing familiarity and uncertainty biases, and their interaction is able to capture the individual and combined effects of frequency and contextual diversity on human learning. Two other recent word‐learning models do not account for the behavioral findings.
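The co-occurrence bookkeeping that this line of work rests on can be made concrete with a short Python sketch. The code below only illustrates the frequency-tracking substrate: count how often each word and object appear together across ambiguous trials, then guess the partner with the highest count. It is not the Kachergis, Yu, and Shiffrin (2012) associative model, which additionally weights learning by competing familiarity and uncertainty biases; the function names and toy trials are illustrative assumptions.

from collections import defaultdict

def train(trials):
    # trials: list of (words, objects) presented together on one ambiguous trial
    counts = defaultdict(lambda: defaultdict(int))
    for words, objects in trials:
        for w in words:
            for o in objects:
                counts[w][o] += 1      # every co-present word-object pair gets credit
    return counts

def guess(counts, word):
    # choose the object that has co-occurred most often with the word
    return max(counts[word], key=counts[word].get)

# Toy data: "dax" is the high-frequency pair (it appears with obj1 on three trials).
trials = [(["dax", "wug"], ["obj1", "obj2"]),
          (["dax", "blick"], ["obj1", "obj3"]),
          (["wug", "blick"], ["obj2", "obj3"]),
          (["dax", "wug"], ["obj1", "obj2"])]
counts = train(trials)
print(guess(counts, "dax"))   # obj1
print(guess(counts, "wug"))   # obj2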

2.
Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony (Peperkamp, Le Calvez, Nadal, & Dupoux, 2006). This paper extends the model to account for learning of a broader set of phonological alternations and the formalization of these alternations as general rules. In Experiment 1, we apply the original model to new data in Dutch and demonstrate its limitations in learning nonallophonic rules. In Experiment 2, we extend the model to allow it to learn general rules for alternations that apply to a class of segments. In Experiment 3, the model is further extended to allow for generalization by context; we argue that this generalization must be constrained by linguistic principles.
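As a rough illustration of the distributional idea behind such allophony learners, the Python sketch below scores how much two segments share their immediate right-hand contexts; a score near zero is what complementary distribution, and hence candidate allophony, looks like. The overlap score and toy corpus are simplifying assumptions of mine, not the information-theoretic measure or the linguistic filters of Peperkamp et al. (2006).

from collections import Counter

def context_overlap(corpus, seg_a, seg_b):
    # corpus: list of segment sequences; collect the right-hand context of each target segment
    ctx = {seg_a: Counter(), seg_b: Counter()}
    for seq in corpus:
        for i, seg in enumerate(seq[:-1]):
            if seg in ctx:
                ctx[seg][seq[i + 1]] += 1
    shared = set(ctx[seg_a]) & set(ctx[seg_b])
    union = set(ctx[seg_a]) | set(ctx[seg_b])
    return len(shared) / len(union) if union else 0.0

# "t" only ever precedes "a" and "d" only ever precedes "i" in this toy corpus,
# so the overlap is 0.0 and the pair is a candidate allophonic alternation.
corpus = [list("rata"), list("radi"), list("mata"), list("madi")]
print(context_overlap(corpus, "t", "d"))   # 0.0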

3.
Learning to map words onto their referents is difficult, because there are multiple possibilities for forming these mappings. Cross‐situational learning studies have shown that word‐object mappings can be learned across multiple situations, as can verbs when presented in a syntactic context. However, these previous studies have presented either nouns or verbs in ambiguous contexts and thus bypass much of the complexity of multiple grammatical categories in speech. We show that noun word learning in adults is robust when objects are moving, and that verbs can also be learned from similar scenes without additional syntactic information. Furthermore, we show that both nouns and verbs can be acquired simultaneously, thus resolving category‐level as well as individual word‐level ambiguity. However, nouns were learned more quickly than verbs, and we discuss this in light of previous studies investigating the noun advantage in word learning.

4.
Recent research has demonstrated that word learners can determine word-referent mappings by tracking co-occurrences across multiple ambiguous naming events. The current study addresses the mechanisms underlying this capacity to learn words cross-situationally. This replication and extension of Yu and Smith (2007) investigates the factors influencing both successful cross-situational word learning and mis-mappings. Item analysis and error patterns revealed that the co-occurrence structure of the learning environment as well as the context of the testing environment jointly affected learning across observations. Learners also adopted an exclusion strategy, which contributed conjointly with statistical tracking to performance. Implications for our understanding of the processes underlying cross-situational word learning are discussed.
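One way to picture how an exclusion strategy can operate on top of statistical tracking is sketched below in Python: co-occurrence counts are gathered as usual, and at decision time the clearest words are resolved first, with their referents removed from consideration for the remaining words. The ordering heuristic and toy trials are illustrative assumptions, not the procedure of Yu and Smith (2007) or of this replication.

from collections import defaultdict

def cooccurrence_counts(trials):
    counts = defaultdict(lambda: defaultdict(int))
    for words, objects in trials:
        for w in words:
            for o in objects:
                counts[w][o] += 1
    return counts

def map_with_exclusion(counts):
    # resolve the most clear-cut words first, excluding their referents
    # from the candidate sets of the remaining words
    def margin(w):
        ranked = sorted(counts[w].values(), reverse=True)
        return ranked[0] - (ranked[1] if len(ranked) > 1 else 0)
    mapping, taken = {}, set()
    for w in sorted(counts, key=margin, reverse=True):
        candidates = {o: n for o, n in counts[w].items() if o not in taken}
        if not candidates:            # more words than free referents
            continue
        best = max(candidates, key=candidates.get)
        mapping[w] = best
        taken.add(best)
    return mapping

# "wug" and "pim" are tied on raw counts; excluding "dax"'s referent resolves both.
trials = [(["dax", "wug"], ["obj1", "obj2"]),
          (["dax", "pim"], ["obj1", "obj3"]),
          (["dax", "wug"], ["obj1", "obj2"])]
print(map_with_exclusion(cooccurrence_counts(trials)))
# {'dax': 'obj1', 'wug': 'obj2', 'pim': 'obj3'}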

5.
Cross‐situational learning is a mechanism for learning the meaning of words across multiple exposures, despite exposure‐by‐exposure uncertainty as to the word's true meaning. We present experimental evidence showing that humans learn words effectively using cross‐situational learning, even at high levels of referential uncertainty. Both overall success rates and the time taken to learn words are affected by the degree of referential uncertainty, with greater referential uncertainty leading to less reliable, slower learning. Words are also learned less successfully and more slowly if they are presented interleaved with occurrences of other words, although this effect is relatively weak. We present additional analyses of participants' trial‐by‐trial behavior showing that participants make use of various cross‐situational learning strategies, depending on the difficulty of the word‐learning task. When referential uncertainty is low, participants generally apply a rigorous eliminative approach to cross‐situational learning. When referential uncertainty is high, or exposures to different words are interleaved, participants apply a frequentist approximation to this eliminative approach. We further suggest that these two ways of exploiting cross‐situational information reside on a continuum of learning strategies, underpinned by a single simple associative learning mechanism.
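The two strategy families described here can be contrasted in a few lines of Python: an eliminative learner intersects the candidate referent sets across exposures, while a frequentist learner simply keeps counts and picks the most frequent candidate. Both functions are idealized illustrations of the strategies named in the abstract, not the authors' analysis code.

from collections import Counter

def eliminative(exposures):
    # exposures: list of sets of candidate referents seen with one word
    candidates = set(exposures[0])
    for ref_set in exposures[1:]:
        candidates &= set(ref_set)       # discard anything absent from a trial
    return candidates                    # the word is learned once one referent remains

def frequentist(exposures):
    counts = Counter(r for ref_set in exposures for r in ref_set)
    return counts.most_common(1)[0][0]   # approximate elimination by frequency

exposures = [{"ball", "dog", "cup"}, {"ball", "dog"}, {"ball", "cat"}]
print(eliminative(exposures))   # {'ball'}
print(frequentist(exposures))   # ball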

6.
Yu, C., Ballard, D. H., & Aslin, R. N. (2005). Cognitive Science, 29(6), 961-1005.
We examine the influence of inferring interlocutors' referential intentions from their body movements at the early stage of lexical acquisition. By testing human participants and comparing their performances in different learning conditions, we find that those embodied intentions facilitate both word discovery and word-meaning association. In light of empirical findings, the main part of this article presents a computational model that can identify the sound patterns of individual words from continuous speech, using nonlinguistic contextual information, and employ body movements as deictic references to discover word-meaning associations. To our knowledge, this work is the first model of word learning that not only learns lexical items from raw multisensory signals to closely resemble infant language development from natural environments, but also explores the computational role of social cognitive skills in lexical acquisition.

7.
The self‐teaching hypothesis describes how children progress toward skilled sight‐word reading. It proposes that children do this via phonological recoding with assistance from contextual cues, to identify the target pronunciation for a novel letter string, and in so doing create an opportunity to self‐teach new orthographic knowledge. We present a new computational implementation of self‐teaching within the dual‐route cascaded (DRC) model of reading aloud, and we explore how decoding and contextual cues can work together to enable accurate self‐teaching under a variety of circumstances. The new model (ST‐DRC) uses DRC's sublexical route and the interactivity between the lexical and sublexical routes to simulate phonological recoding. Known spoken words are activated in response to novel printed words, triggering an opportunity for orthographic learning, which is the basis for skilled sight‐word reading. ST‐DRC also includes new computational mechanisms for simulating how contextual information aids word identification, and it demonstrates how partial decoding and ambiguous context interact to achieve irregular‐word learning. Beyond modeling orthographic learning and self‐teaching, ST‐DRC's performance suggests new avenues for empirical research on how difficult word classes such as homographs and potentiophones are learned.

8.
Previous research on cross‐situational word learning has demonstrated that learners are able to reduce ambiguity in mapping words to referents by tracking co‐occurrence probabilities across learning events. In the current experiments, we examined whether learners are able to retain mappings over time. The results revealed that learners are able to retain mappings for up to 1 week later. However, there were interactions between the amount of retention and the different learning conditions. Interestingly, the strongest retention was associated with a learning condition that engendered retrieval dynamics that initially challenged the learner but eventually led to more successful retrieval toward the end of learning. The ease/difficulty of retrieval is a critical process underlying cross‐situational word learning and is a powerful example of how learning dynamics affect long‐term learning outcomes.

9.
Words are the essence of communication: They are the building blocks of any language. Learning the meaning of words is thus one of the most important aspects of language acquisition: Children must first learn words before they can combine them into complex utterances. Many theories have been developed to explain the impressive efficiency of young children in acquiring the vocabulary of their language, as well as the developmental patterns observed in the course of lexical acquisition. A major source of disagreement among the different theories is whether children are equipped with special mechanisms and biases for word learning, or their general cognitive abilities are adequate for the task. We present a novel computational model of early word learning to shed light on the mechanisms that might be at work in this process. The model learns word meanings as probabilistic associations between words and semantic elements, using an incremental and probabilistic learning mechanism, and drawing only on general cognitive abilities. The results presented here demonstrate that much about word meanings can be learned from naturally occurring child-directed utterances (paired with meaning representations), without using any special biases or constraints, and without any explicit developmental changes in the underlying learning mechanism. Furthermore, our model provides explanations for the occasionally contradictory child experimental data, and offers predictions for the behavior of young word learners in novel situations.
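A minimal sketch of an incremental probabilistic word learner in this spirit is given below in Python: each word keeps a distribution of association strength over semantic elements, and every utterance-scene pair updates it by softly aligning each scene element to the words in proportion to current beliefs. The update rule, smoothing constant, and toy input are illustrative simplifications, not the exact formulation of the published model.

from collections import defaultdict

SMOOTH = 1e-3
assoc = defaultdict(lambda: defaultdict(float))     # assoc[word][semantic element]

def belief(element, word, scene):
    # current belief that `word` maps to `element`, relative to the elements in this scene
    total = sum(assoc[word][e] + SMOOTH for e in scene)
    return (assoc[word][element] + SMOOTH) / total

def update(utterance, scene):
    # softly align each scene element to the words of the utterance and
    # accumulate the resulting fractional counts
    for element in scene:
        weights = {w: belief(element, w, scene) for w in utterance}
        norm = sum(weights.values())
        for w in utterance:
            assoc[w][element] += weights[w] / norm

for utterance, scene in [(["she", "ate", "the", "apple"], {"EAT", "APPLE", "FEMALE"}),
                         (["he", "ate", "the", "bread"], {"EAT", "BREAD", "MALE"})]:
    update(utterance, scene)
print(max(assoc["ate"], key=assoc["ate"].get))      # EAT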

10.
Sound‐symbolism is the nonarbitrary link between the sound and meaning of a word. Japanese‐speaking children performed better in a verb generalization task when they were taught novel sound‐symbolic verbs, created based on existing Japanese sound‐symbolic words, than novel nonsound‐symbolic verbs (Imai, Kita, Nagumo, & Okada, 2008). A question remained as to whether the Japanese children had picked up regularities in the Japanese sound‐symbolic lexicon or were sensitive to universal sound‐symbolism. The present study aimed to provide support for the latter. In a verb generalization task, English‐speaking 3‐year‐olds were taught novel sound‐symbolic verbs, created based on Japanese sound‐symbolism, or novel nonsound‐symbolic verbs. English‐speaking children performed better with the sound‐symbolic verbs, just like Japanese‐speaking children. We concluded that children are sensitive to universal sound‐symbolism and can utilize it in word learning and generalization, regardless of their native language.

11.
In this study, we apply MOSAIC (model of syntax acquisition in children) to the simulation of the developmental patterning of children's optional infinitive (OI) errors in 4 languages: English, Dutch, German, and Spanish. MOSAIC, which has already simulated this phenomenon in Dutch and English, now implements a learning mechanism that better reflects the theoretical assumptions underlying it, as well as a chunking mechanism that results in frequent phrases being treated as 1 unit. Using one identical model that learns from child-directed speech, we obtain a close quantitative fit to the data from all 4 languages despite there being considerable cross-linguistic and developmental variation in the OI phenomenon. MOSAIC successfully simulates the difference between Spanish (a pro-drop language in which OI errors are virtually absent) and obligatory subject languages that do display the OI phenomenon. It also highlights differences in the OI phenomenon across German and Dutch, 2 closely related languages whose grammar is virtually identical with respect to the relation between finiteness and verb placement. Taken together, these results suggest that (a) cross-linguistic differences in the rates at which children produce OIs are graded, quantitative differences that closely reflect the statistical properties of the input they are exposed to and (b) theories of syntax acquisition need to consider more closely the role of input characteristics as determinants of quantitative differences in the cross-linguistic patterning of phenomena in language acquisition.

12.
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural‐language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in this manner takes the form of a directed weighted graph, whose nodes are recursively (hierarchically) defined patterns over the elements of the input stream. We evaluated the model in seventeen experiments, grouped into five studies, which examined, respectively, (a) the generative ability of grammar learned from a corpus of natural language, (b) the characteristics of the learned representation, (c) sequence segmentation and chunking, (d) artificial grammar learning, and (e) certain types of structure dependence. The model's performance largely vindicates our design choices, suggesting that progress in modeling language acquisition can be made on a broad front—ranging from issues of generativity to the replication of human experimental findings—by bringing biological and computational considerations, as well as lessons from prior efforts, to bear on the modeling approach.

13.
Cross‐situational statistical learning of words involves tracking co‐occurrences of auditory words and objects across time to infer word‐referent mappings. Previous research has demonstrated that learners can infer referents across sets of very phonologically distinct words (e.g., WUG, DAX), but it remains unknown whether learners can encode fine phonological differences during cross‐situational statistical learning. This study examined learners' cross‐situational statistical learning of minimal pairs that differed on one consonant segment (e.g., BON–TON), minimal pairs that differed on one vowel segment (e.g., DEET–DIT), and non‐minimal pairs that differed on two or three segments (e.g., BON–DEET). Learners performed above chance for all pairs, but performed worse on vowel minimal pairs than on consonant minimal pairs or non‐minimal pairs. These findings demonstrate that learners can encode fine phonetic detail while tracking word‐referent co‐occurrence probabilities, but they suggest that phonological encoding may be weaker for vowels than for consonants.

14.
The underlying structures that are common to the world's languages bear an intriguing connection with early emerging forms of “core knowledge” (Spelke & Kinzler, 2007), which are frequently studied by infant researchers. In particular, grammatical systems often incorporate distinctions (e.g., the mass/count distinction) that reflect those made in core knowledge (e.g., the non‐verbal distinction between an object and a substance). Here, I argue that this connection occurs because non‐verbal core knowledge systematically biases processes of language evolution. This account potentially explains a wide range of cross‐linguistic grammatical phenomena that currently lack an adequate explanation. Second, I suggest that developmental researchers and cognitive scientists interested in (non‐verbal) knowledge representation can exploit this connection to language by using observations about cross‐linguistic grammatical tendencies to inspire hypotheses about core knowledge.

15.
Cross‐situational word learning (XSWL) tasks present multiple words and candidate referents within a learning trial such that word–referent pairings can be inferred only across trials. Adults encode fine phonological detail when two words and candidate referents are presented in each learning trial (2 × 2 scenario; Escudero, Mulak, & Vlach, 2016a). To test the relationship between XSWL task difficulty and phonological encoding, we examined XSWL of words differing by one vowel or consonant across degrees of within‐learning trial ambiguity (1 × 1 to 4 × 4). Word identification was assessed alongside three distractors. Adults finely encoded words via XSWL: Learning occurred in all conditions, though accuracy decreased across the 1 × 1 to 3 × 3 conditions. Accuracy was highest for the 1 × 1 condition, suggesting fast‐mapping is a stronger learning strategy here. Accuracy was higher for consonant than vowel set targets, and having more distractors from the same set mitigated identification of vowel set targets only, suggesting possible stronger encoding of consonants than vowels.

16.
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age‐appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition.

17.
We explore whether children's willingness to produce unfamiliar sequences of words reflects their experience with similar lexical patterns. We asked children to repeat unfamiliar sequences that were identical to familiar phrases (e.g., A piece of toast) but for one word (e.g., a novel instantiation of A piece of X, like A piece of brick). We explore two predictions, motivated by findings in the statistical learning literature, that children are likely to have detected an opportunity to substitute alternative words into the final position of a four-word sequence if (a) it is difficult to predict the fourth word given the first three words and (b) the words observed in the final position are distributionally similar. Twenty-eight 2-year-olds and thirty-one 3-year-olds were significantly more likely to correctly repeat unfamiliar variants of patterns for which these properties held. The results illustrate how children's developing language is shaped by linguistic experience.

18.
Since the experiments of Saffran et al. [Saffran, J., Aslin, R., & Newport, E. (1996). Statistical learning in 8-month-old infants. Science, 274, 1926-1928], there has been a great deal of interest in the question of how statistical regularities in the speech stream might be used by infants to begin to identify individual words. In this work, we use computational modeling to explore the effects of different assumptions the learner might make regarding the nature of words - in particular, how these assumptions affect the kinds of words that are segmented from a corpus of transcribed child-directed speech. We develop several models within a Bayesian ideal observer framework, and use them to examine the consequences of assuming either that words are independent units, or units that help to predict other units. We show through empirical and theoretical results that the assumption of independence causes the learner to undersegment the corpus, with many two- and three-word sequences (e.g. what’s that, do you, in the house) misidentified as individual words. In contrast, when the learner assumes that words are predictive, the resulting segmentation is far more accurate. These results indicate that taking context into account is important for a statistical word segmentation strategy to be successful, and raise the possibility that even young infants may be able to exploit more subtle statistical patterns than have usually been considered.
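For concreteness, the simplest statistic in this literature, the syllable-to-syllable transitional probability of Saffran et al. (1996), can be sketched in a few lines of Python, positing a word boundary wherever predictability dips. This is not the Bayesian ideal observer developed in the paper, which scores whole segmentations under unigram or bigram lexical assumptions; the threshold and nonsense words below are illustrative choices.

from collections import Counter

def segment_by_tp(syllables, threshold=0.8):
    # transitional probability P(next syllable | current syllable), estimated from the stream
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        tp = pair_counts[(a, b)] / first_counts[a]
        if tp < threshold:               # a dip in predictability suggests a word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Three nonsense words concatenated in varying order: within-word TPs are 1.0,
# across-word TPs are at most 2/3, so boundaries fall between the words.
stream = "golabu tupiro bidaku tupiro golabu bidaku golabu tupiro".replace(" ", "")
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]
print(segment_by_tp(syllables))
# ['golabu', 'tupiro', 'bidaku', 'tupiro', 'golabu', 'bidaku', 'golabu', 'tupiro']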

19.
Regier, T., & Gahl, S. (2004). Cognition, 93(2), 147-155; discussion 157-165.
Syntactic knowledge is widely held to be partially innate, rather than learned. In a classic example, it is sometimes argued that children know the proper use of anaphoric one, although that knowledge could not have been learned from experience. Lidz et al. [Lidz, J., Waxman, S., & Freedman, J. (2003). What infants know about syntax but couldn't have learned: Experimental evidence for syntactic structure at 18 months. Cognition, 89, B65-B73.] pursue this argument, and present corpus and experimental evidence that appears to support it; they conclude that specific aspects of this knowledge must be innate. We demonstrate, contra Lidz et al., that this knowledge may in fact be acquired from the input, through a simple Bayesian learning procedure. The learning procedure succeeds because it is sensitive to the absence of particular input patterns, an aspect of learning that is apparently overlooked by Lidz et al. More generally, we suggest that a prominent form of the "argument from poverty of the stimulus" suffers from the same oversight, and is as a result logically unsound.
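The logic of learning from absent patterns can be illustrated with a toy Bayesian comparison in Python: a broad hypothesis that would also generate unattested patterns loses posterior probability every time only the patterns predicted by the narrow hypothesis are observed. The prior and the two likelihoods are illustrative assumptions, not figures from Regier and Gahl's corpus analysis.

def posterior_narrow(n, prior_narrow=0.5, p_obs_given_narrow=1.0, p_obs_given_broad=0.5):
    # posterior probability of the narrow hypothesis after n observations that the
    # narrow hypothesis predicts and the broad hypothesis merely permits
    like_narrow = p_obs_given_narrow ** n
    like_broad = p_obs_given_broad ** n
    numerator = prior_narrow * like_narrow
    return numerator / (numerator + (1 - prior_narrow) * like_broad)

for n in (0, 1, 5, 10):
    print(n, round(posterior_narrow(n), 3))
# 0 0.5, 1 0.667, 5 0.97, 10 0.999: the systematic absence of the extra patterns
# the broad hypothesis allows quickly shifts belief toward the narrow hypothesis.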

20.
We conducted a close replication of the seminal work by Marcus and colleagues from 1999, which showed that after a brief auditory exposure phase, 7-month-old infants were able to learn and generalize a rule to novel syllables not previously present in the exposure phase. This work became the foundation for the theoretical framework by which we assume that infants are able to learn abstract representations and generalize linguistic rules. While some extensions on the original work have shown evidence of rule learning, the outcomes are mixed, and an exact replication of Marcus et al.'s study has thus far not been reported. A recent meta-analysis by Rabagliati and colleagues brings to light that the rule-learning effect depends on stimulus type (e.g., meaningfulness, speech vs. nonspeech) and is not as robust as often assumed. In light of the theoretical importance of the issue at stake, it is appropriate and necessary to assess the replicability and robustness of Marcus et al.'s findings. Here we have undertaken a replication across four labs with a large sample of 7-month-old infants (N = 96), using the same exposure patterns (ABA and ABB), methodology (Headturn Preference Paradigm), and original stimuli. As in the original study, we tested the hypothesis that infants are able to learn abstract “algebraic” rules and apply them to novel input. Our results did not replicate the original findings: infants showed no difference in looking time between test patterns consistent or inconsistent with the familiarization pattern they were exposed to.
