Similar Literature
20 similar articles retrieved.
1.
Cross-situational learning is a mechanism for learning the meaning of words across multiple exposures, despite exposure-by-exposure uncertainty as to a word's true meaning. Doubts have been expressed regarding the plausibility of cross-situational learning as a mechanism for learning human-scale lexicons in reasonable timescales under the levels of referential uncertainty likely to confront real word learners. We demonstrate mathematically that cross-situational learning facilitates the acquisition of large vocabularies despite significant levels of referential uncertainty at each exposure, and we provide estimates of lexicon learning times for several cross-situational learning strategies. This model suggests that cross-situational word learning cannot be ruled out on the basis that it predicts unreasonably long lexicon learning times. More generally, these results indicate that there is no necessary link between the ability to learn individual words rapidly and the capacity to acquire a large lexicon.
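The learning-time estimates described above are derived analytically; as a rough illustration of the quantity involved, the sketch below simulates one simple cross-situational strategy (retain only the candidate meanings present on every exposure to a word) and counts the exposures needed before every word in a small lexicon is disambiguated. The function name, parameter values, and the eager intersection strategy are illustrative assumptions, not the authors' model.

```python
import random

def exposures_to_learn_lexicon(lexicon_size=200, context_size=5, seed=0):
    """Monte-Carlo sketch of an 'eager' cross-situational strategy: for each
    word, keep only the candidate meanings that have appeared on every one of
    its exposures; a word counts as learned once a single candidate remains.
    Word i is assumed to mean referent i. Illustrative parameters only."""
    rng = random.Random(seed)
    candidates = {}              # word -> set of still-possible meanings
    learned = set()
    exposures = 0
    while len(learned) < lexicon_size:
        exposures += 1
        word = rng.randrange(lexicon_size)           # word heard on this exposure
        # The context contains the true referent plus spurious ones.
        spurious = {rng.randrange(lexicon_size) for _ in range(context_size - 1)}
        context = {word} | spurious
        candidates[word] = context if word not in candidates else candidates[word] & context
        if len(candidates[word]) == 1:
            learned.add(word)
    return exposures

if __name__ == "__main__":
    print(exposures_to_learn_lexicon())              # total exposures for the whole lexicon
```

Varying lexicon_size and context_size gives a feel for how the exposure count scales under this particular strategy.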

2.
According to usage‐based approaches to language acquisition, linguistic knowledge is represented in the form of constructions—form‐meaning pairings—at multiple levels of abstraction and complexity. The emergence of syntactic knowledge is assumed to be a result of the gradual abstraction of lexically specific and item‐based linguistic knowledge. In this article, we explore how the gradual emergence of a network consisting of constructions at varying degrees of complexity can be modeled computationally. Linguistic knowledge is learned by observing natural language utterances in an ambiguous context. To determine meanings of constructions starting from ambiguous contexts, we rely on the principle of cross‐situational learning. While this mechanism has been implemented in several computational models, these models typically focus on learning mappings between words and referents. In contrast, in our model, we show how cross‐situational learning can be applied consistently to learn correspondences between form and meaning beyond such simple correspondences.

3.
Prior research has shown that people can learn many nouns (i.e., word–object mappings) from a short series of ambiguous situations containing multiple words and objects. For successful cross‐situational learning, people must approximately track which words and referents co‐occur most frequently. This study investigates the effects of allowing some word‐referent pairs to appear more frequently than others, as is true in real‐world learning environments. Surprisingly, high‐frequency pairs are not always learned better, but can also boost learning of other pairs. Using a recent associative model (Kachergis, Yu, & Shiffrin, 2012), we explain how mixing pairs of different frequencies can bootstrap late learning of the low‐frequency pairs based on early learning of higher frequency pairs. We also manipulate contextual diversity, the number of pairs a given pair appears with across training, since it is naturalistically confounded with frequency. The associative model has competing familiarity and uncertainty biases, and their interaction is able to capture the individual and combined effects of frequency and contextual diversity on human learning. Two other recent word‐learning models do not account for the behavioral findings.
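The Kachergis, Yu, and Shiffrin (2012) model couples familiarity and uncertainty biases during learning, and its equations are not given in the abstract. The sketch below therefore illustrates only the bootstrapping intuition in a much cruder form: raw co-occurrence counts are decoded greedily, so pairs established by frequent exposure are resolved first and removed from contention, which disambiguates the remaining low-frequency pairs. The function names, trial format, and greedy decoding rule are assumptions for illustration, not the published model.

```python
from collections import Counter

def cooccurrence_counts(trials):
    """Accumulate word-referent co-occurrence counts across training trials.
    Each trial is a (words, referents) pair of iterables."""
    counts = Counter()
    for words, referents in trials:
        for w in words:
            for r in referents:
                counts[(w, r)] += 1
    return counts

def decode_with_competition(counts):
    """Greedy, mutual-exclusivity-style decoding: commit to the most strongly
    associated pairs first and remove them from contention, so high-frequency
    pairs scaffold the low-frequency ones."""
    mapping, used_words, used_refs = {}, set(), set()
    for (w, r), _ in counts.most_common():
        if w not in used_words and r not in used_refs:
            mapping[w] = r
            used_words.add(w)
            used_refs.add(r)
    return mapping

# Toy run: 'hi1' and 'hi2' appear often; 'lo' appears once, in an ambiguous context.
trials = [(["hi1", "hi2"], ["A", "B"])] * 6 + [(["hi1", "lo"], ["A", "C"])]
print(decode_with_competition(cooccurrence_counts(trials)))   # maps lo -> C
```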

4.
Cross‐situational statistical learning of words involves tracking co‐occurrences of auditory words and objects across time to infer word‐referent mappings. Previous research has demonstrated that learners can infer referents across sets of very phonologically distinct words (e.g., WUG, DAX), but it remains unknown whether learners can encode fine phonological differences during cross‐situational statistical learning. This study examined learners’ cross‐situational statistical learning of minimal pairs that differed on one consonant segment (e.g., BON–TON), minimal pairs that differed on one vowel segment (e.g., DEET–DIT), and non‐minimal pairs that differed on two or three segments (e.g., BON–DEET). Learners performed above chance for all pairs, but performed worse on vowel minimal pairs than on consonant minimal pairs or non‐minimal pairs. These findings demonstrate that learners can encode fine phonetic detail while tracking word‐referent co‐occurrence probabilities, but they suggest that phonological encoding may be weaker for vowels than for consonants.

5.
Previous research on cross‐situational word learning has demonstrated that learners are able to reduce ambiguity in mapping words to referents by tracking co‐occurrence probabilities across learning events. In the current experiments, we examined whether learners are able to retain mappings over time. The results revealed that learners are able to retain mappings for up to 1 week later. However, there were interactions between the amount of retention and the different learning conditions. Interestingly, the strongest retention was associated with a learning condition that engendered retrieval dynamics that initially challenged the learner but eventually led to more successful retrieval toward the end of learning. The ease/difficulty of retrieval is a critical process underlying cross‐situational word learning and is a powerful example of how learning dynamics affect long‐term learning outcomes.

6.
A variety of mechanisms contribute to word learning. Learners can track co‐occurring words and referents across situations in a bottom‐up manner (cross‐situational word learning, CSWL). Equally, they can exploit sentential contexts, relying on top‐down information such as verb–argument relations and world knowledge, offering immediate constraints on meaning (word learning based on sentence‐level constraints, SLCL). When combined, CSWL and SLCL potentially modulate each other's influence, revealing how word learners deal with multiple mechanisms simultaneously: Do they use all mechanisms? Prefer one? Is their strategy context dependent? Three experiments conducted with adult learners reveal that learners prioritize SLCL over CSWL. CSWL is applied in addition to SLCL only if SLCL is not perfectly disambiguating, thereby complementing or competing with it. These studies demonstrate the importance of investigating word‐learning mechanisms simultaneously, revealing important characteristics of their interaction in more naturalistic learning environments.

7.
Cross‐situational word learning, like any statistical learning problem, involves tracking the regularities in the environment. However, the information that learners pick up from these regularities is dependent on their learning mechanism. This article investigates the role of one type of mechanism in statistical word learning: competition. Competitive mechanisms would allow learners to find the signal in noisy input and would help to explain the speed with which learners succeed in statistical learning tasks. Because cross‐situational word learning provides information at multiple scales—both within and across trials/situations—learners could implement competition at either or both of these scales. A series of four experiments demonstrate that cross‐situational learning involves competition at both levels of scale, and that these mechanisms interact to support rapid learning. The impact of both of these mechanisms is considered from the perspective of a process‐level understanding of cross‐situational learning.

8.
Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross‐situational learning paradigm to test whether English speakers were able to use co‐occurrences to learn word‐to‐object mappings and concurrently form object categories based on the commonalities across training stimuli. Experiment 2 replicated the first experiment and further examined whether speakers of Mandarin, a language in which final syllables of object names are more predictive of category membership than in English, were able to learn words and form object categories when trained with the same type of structures. The results indicate that both groups of learners successfully extracted multiple levels of co‐occurrence and used them to learn words and object categories simultaneously. However, marked individual differences in performance were also found, suggesting possible interference and competition in processing the two concurrent streams of regularities.

9.
Cross‐situational word learning (XSWL) tasks present multiple words and candidate referents within a learning trial such that word–referent pairings can be inferred only across trials. Adults encode fine phonological detail when two words and candidate referents are presented in each learning trial (2 × 2 scenario; Escudero, Mulak, & Vlach, 2016a). To test the relationship between XSWL task difficulty and phonological encoding, we examined XSWL of words differing by one vowel or consonant across degrees of within‐learning trial ambiguity (1 × 1 to 4 × 4). Word identification was assessed alongside three distractors. Adults finely encoded words via XSWL: Learning occurred in all conditions, though accuracy decreased across the 1 × 1 to 3 × 3 conditions. Accuracy was highest for the 1 × 1 condition, suggesting fast‐mapping is a stronger learning strategy here. Accuracy was higher for consonant than vowel set targets, and having more distractors from the same set mitigated identification of vowel set targets only, suggesting possible stronger encoding of consonants than vowels.

10.
Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., “daxen”) spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant–ant for big–small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalize them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.

11.
From the very first moments of their lives, infants selectively attend to the visible orofacial movements of their social partners and apply their exquisite speech perception skills to the service of lexical learning. Here we explore how early bilingual experience modulates children's ability to use visible speech as they form new lexical representations. Using a cross‐modal word‐learning task, bilingual children aged 30 months were tested on their ability to learn new lexical mappings in either the auditory or the visual modality. Lexical recognition was assessed either in the same modality as the one used at learning (‘same modality’ condition: auditory test after auditory learning, visual test after visual learning) or in the other modality (‘cross‐modality’ condition: visual test after auditory learning, auditory test after visual learning). The results revealed that like their monolingual peers, bilingual children successfully learn new words in either the auditory or the visual modality and show cross‐modal recognition of words following auditory learning. Interestingly, as opposed to monolinguals, they also demonstrate cross‐modal recognition of words upon visual learning. Collectively, these findings indicate a bilingual edge in visual word learning, expressed in the capacity to form a recoverable cross‐modal representation of visually learned words.

12.
This study was designed to examine the possible effect of instructional method and grade on the development of the competences used in reading isolated words in a transparent orthography (i.e., Spanish). A cross‐sectional design was used with a sample of 202 children who were learning to read by different instructional methods (code‐oriented vs. meaning‐oriented approaches). The effect of instructional method was analysed on reaction times, latency responses, and misreading on lexical decision and naming tasks. Words varied in frequency, length, and positional frequency of syllables (PFS) and the nonwords varied only in length and PFS. Our prediction was that the differences in reaction times and error performance as a function of the variables that allow us to test the routes—such as lexicality, word frequency, PFS, and word length—would be greater in the individuals who learn by a meaning‐oriented approach, which means that this group would be more affected by unfamiliar and longer words, low PFS, and nonwords in comparison to individuals who learn by a code‐oriented approach. This would support the view that individuals who learn by a meaning‐oriented approach have particular difficulties in naming words under conditions that require extensive phonological computation. Reliable effects of instructional method were found both in reaction times and latency responses and also on misreading in words and nonwords. The findings demonstrate superiority in the sublexical analysis in children who were learning by code‐oriented approaches. However, individuals who were learning by meaning‐oriented approaches had particular difficulties in naming words under conditions that require extensive phonological computation.

13.
Learning to map words onto their referents is difficult, because there are multiple possibilities for forming these mappings. Cross‐situational learning studies have shown that word‐object mappings can be learned across multiple situations, as can verbs when presented in a syntactic context. However, these previous studies have presented either nouns or verbs in ambiguous contexts and thus bypass much of the complexity of multiple grammatical categories in speech. We show that noun word learning in adults is robust when objects are moving, and that verbs can also be learned from similar scenes without additional syntactic information. Furthermore, we show that both nouns and verbs can be acquired simultaneously, thus resolving category‐level as well as individual word‐level ambiguity. However, nouns were learned more quickly than verbs, and we discuss this in light of previous studies investigating the noun advantage in word learning.

14.
Vogt, P. (2012). Cognitive Science, 36(4), 726-739.
Cross-situational learning has recently gained attention as a plausible candidate for the mechanism that underlies the learning of word-meaning mappings. In a recent study, Blythe and colleagues have studied how many trials are theoretically required to learn a human-sized lexicon using cross-situational learning. They show that the level of referential uncertainty exposed to learners could be relatively large. However, one of the assumptions they made in designing their mathematical model is questionable. Although they rightfully assumed that words are distributed according to Zipf's law, they applied a uniform distribution of meanings. In this article, Zipf's law is also applied to the distribution of meanings, and it is shown that under this condition, cross-situational learning can only be plausible when referential uncertainty is sufficiently small. It is concluded that cross-situational learning is a plausible learning mechanism but needs to be guided by heuristics that aid word learners with reducing referential uncertainty.
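Zipf's law assigns the k-th most frequent item a probability proportional to 1/k (more generally 1/k^alpha). The sketch below is a hedged illustration of the article's point rather than its model: when distractor meanings are drawn from such a distribution, the most frequent meanings recur in almost every context and take far longer to rule out by intersection than uniformly drawn distractors do. The parameter values and single-word setup are assumptions chosen only to make the contrast visible.

```python
import random

def exposures_to_disambiguate(n_meanings=100, context_size=20, zipfian=True,
                              alpha=1.0, seed=0, cap=100_000):
    """Count exposures until intersecting contexts isolates one word's true
    meaning, with distractor meanings drawn uniformly or from a Zipfian
    distribution p(rank k) ~ 1/k**alpha. Illustrative sketch only."""
    rng = random.Random(seed)
    weights = [1.0 / (k ** alpha) for k in range(1, n_meanings + 1)] if zipfian else None
    true_meaning = n_meanings // 2      # an arbitrary mid-rank meaning as the target
    candidates, exposures = None, 0
    while candidates is None or len(candidates) > 1:
        exposures += 1
        if exposures > cap:             # very frequent distractors may never be ruled out
            break
        distractors = rng.choices(range(n_meanings), weights=weights, k=context_size - 1)
        context = {true_meaning} | set(distractors)
        candidates = context if candidates is None else candidates & context
    return exposures

print(exposures_to_disambiguate(zipfian=False))   # typically a handful of exposures
print(exposures_to_disambiguate(zipfian=True))    # usually many more: frequent meanings persist
```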

15.
Changes to our everyday activities mean that adult language users need to learn new meanings for previously unambiguous words. For example, we need to learn that a "tweet" is not only the sound a bird makes, but also a short message on a social networking site. In these experiments, adult participants learned new fictional meanings for words with a single dominant meaning (e.g., "ant") by reading paragraphs that described these novel meanings. Explicit recall of these meanings was significantly better when there was a strong semantic relationship between the novel meaning and the existing meaning. This relatedness effect emerged after relatively brief exposure to the meanings (Experiment 1), but it persisted when training was extended across 7 days (Experiment 2) and when semantically demanding tasks were used during this extended training (Experiment 3). A lexical decision task was used to assess the impact of learning on online recognition. In Experiment 3, participants responded more quickly to words whose new meaning was semantically related than to those with an unrelated meaning. This result is consistent with earlier studies showing an effect of meaning relatedness on lexical decision, and it indicates that these newly acquired meanings become integrated with participants' preexisting knowledge about the meanings of words.

16.
We report three eyetracking experiments that examine the learning procedure used by adults as they pair novel words and visually presented referents over a sequence of referentially ambiguous trials. Successful learning under such conditions has been argued to be the product of a learning procedure in which participants provisionally pair each novel word with several possible referents and use a statistical-associative learning mechanism to gradually converge on a single mapping across learning instances [e.g., Yu, C., & Smith, L. B. (2007). Rapid word learning under uncertainty via cross-situational statistics. Psychological Science, 18(5), 414–420]. We argue here that successful learning in this setting is instead the product of a one-trial procedure in which a single hypothesized word-referent pairing is retained across learning instances, abandoned only if the subsequent instance fails to confirm the pairing – more a ‘fast mapping’ procedure than a gradual statistical one. We provide experimental evidence for this propose-but-verify learning procedure via three experiments in which adult participants attempted to learn the meanings of nonce words cross-situationally under varying degrees of referential uncertainty. The findings, using both explicit (referent selection) and implicit (eye movement) measures, show that even in these artificial learning contexts, which are far simpler than those encountered by a language learner in a natural environment, participants do not retain multiple meaning hypotheses across learning instances. As we discuss, these findings challenge ‘gradualist’ accounts of word learning and are consistent with the known rapid course of vocabulary learning in a first language.
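The procedure argued for here carries a single conjecture per word across learning instances, in contrast with the gradual associative accounts sketched earlier in this list. A minimal reading of that propose-but-verify idea might look like the following; the trial format and the random re-guessing rule are assumptions for illustration, not the authors' experimental procedure.

```python
import random

def propose_but_verify(trials, seed=0):
    """Single-hypothesis learner: keep one conjectured referent per word; if a
    later exposure to that word lacks the conjectured referent, drop it and
    guess anew from the current context. Nothing else carries across trials."""
    rng = random.Random(seed)
    hypothesis = {}
    for words, referents in trials:
        referents = list(referents)
        for w in words:
            if w in hypothesis and hypothesis[w] in referents:
                continue                           # conjecture verified; keep it
            hypothesis[w] = rng.choice(referents)  # propose (or re-propose) a guess
    return hypothesis

# Toy run: guesses survive only while later contexts keep confirming them.
trials = [(["dax", "wug"], ["ball", "cup"]),
          (["dax", "blick"], ["ball", "shoe"]),
          (["wug", "blick"], ["cup", "shoe"])]
print(propose_but_verify(trials))                  # may or may not recover the intended mapping
```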

17.
State dependent learning (SDL) occurs when learning acquired in one context is not retrievable in a different context. Although traditionally SDL is thought of in the context of substance use, the role of SDL should be considered during combined medication and exposure treatment for anxiety disorders. Data are presented from a within-subjects, case-series design of four participants with social anxiety disorder. Participants engaged in a series of situational exposures while taking either alprazolam (0.75 mg), propranolol (40 mg), placebo or no medication. They returned 48 h later and engaged in the same situational exposure in an unmedicated state to determine retention of learning following the shift in drug context. Results suggest that SDL effects are possible when combining pharmacotherapy (alprazolam) with exposure therapy. Future research is needed to determine the conditions under which SDL is most likely to occur and ways to facilitate transfer of learning across different contexts.

18.
Can infants, in the very first stages of word learning, use their perceptual sensitivity to the phonetics of speech while learning words? Research to date suggests that infants of 14 months cannot learn two similar‐sounding words unless there is substantial contextual support. The current experiment advances our understanding of this failure by testing whether the source of infants’ difficulty lies in the learning or testing phase. Infants were taught to associate two similar‐sounding words with two different objects, and tested using a visual choice method rather than the standard Switch task. The results reveal that 14‐month‐olds are capable of learning and mapping two similar‐sounding labels; they can apply phonetic detail in new words. The findings are discussed in relation to infants’ concurrent failure, and the developmental transition to success, in the Switch task.

19.
We investigated the effects of learning schedule and multi‐modality stimulus presentation on foreign language vocabulary learning. In Experiment 1, participants learned German vocabulary words utilizing three learning methods that were organized either in a blocked or interleaved fashion. We found interleaving with the keyword mnemonic and rote study advantageous over blocking, but retrieval practice was better served in a blocked schedule. It is likely that the excessively delayed feedback for the retrieval practice in the interleaved practice schedule impeded learning while the spacing involved in the interleaved schedule enhanced learning in the keyword mnemonic and rote study. In Experiment 2, we examined whether a multi‐modality stimulus presentation from visual and auditory channels is better suited for aiding learning over a visual presentation condition. We found benefits of multi‐modality presentation only for the keyword mnemonic condition, presumably because the nature of the keyword mnemonic involving sound and visualization was particularly relevant with the multi‐modality presentation. The present study suggests that optimal foreign language learning environments should incorporate learning schedules and multimedia presentations based on specific learning methods and materials.

20.
Children can selectively attend to various attributes of a model, such as past accuracy or physical strength, to guide their social learning. There is a debate regarding whether a relation exists between theory‐of‐mind skills and selective learning. We hypothesized that high performance on theory‐of‐mind tasks would predict preference for learning new words from accurate informants (an epistemic attribute), but not from physically strong informants (a non‐epistemic attribute). Three‐ and 4‐year‐olds (N = 65) completed two selective learning tasks, and their theory‐of‐mind abilities were assessed. As expected, performance on a theory‐of‐mind battery predicted children's preference to learn from more accurate informants but not from physically stronger informants. Results thus suggest that preschoolers with more advanced theory of mind have a better understanding of knowledge and apply that understanding to guide their selection of informants. This work has important implications for research on children's developing social cognition and early learning.
