Similar Literature
 Found 20 similar documents (search time: 31 ms)
1.
Recent years have seen a flourishing of Natural Language Processing models that can mimic many aspects of human language fluency. These models harness a simple, decades-old idea: It is possible to learn a lot about word meanings just from exposure to language, because words similar in meaning are used in language in similar ways. The successes of these models raise the intriguing possibility that exposure to word use in language also shapes the word knowledge that children amass during development. However, this possibility is strongly challenged by the fact that models use language input and learning mechanisms that may be unavailable to children. Across three studies, we found that unrealistically complex input and learning mechanisms are unnecessary. Instead, simple regularities of word use in children's language input that they have the capacity to learn can foster knowledge about word meanings. Thus, exposure to language may play a simple but powerful role in children's growing word knowledge. A video abstract of this article can be viewed at https://youtu.be/dT83dmMffnM .

Research Highlights

  • Natural Language Processing (NLP) models can learn that words are similar in meaning from higher-order statistical regularities of word use.
  • Unlike NLP models, infants and children may primarily learn only simple co-occurrences between words.
  • We show that infants' and children's language input is rich in simple co-occurrences that can support learning similarities in meaning between words.
  • We find that simple co-occurrences can explain infants' and children's knowledge that words are similar in meaning.
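The distributional principle described in the abstract above, that words similar in meaning are used in similar ways, can be illustrated with a toy sketch: build co-occurrence count vectors from a corpus and compare them with cosine similarity. The corpus and function names below are invented for illustration; they are not the models or data used in the study.

```python
from collections import Counter
import math

def cooccurrence_vectors(sentences, window=2):
    """Count, for each word, how often every other word occurs nearby."""
    vecs = {}
    for sent in sentences:
        for i, w in enumerate(sent):
            context = sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]
            vecs.setdefault(w, Counter()).update(context)
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical child-directed input: "dog" and "cat" share contexts.
corpus = [
    "the dog eats food".split(),
    "the cat eats food".split(),
    "the dog runs fast".split(),
    "the cat runs fast".split(),
    "throw the ball high".split(),
]
vectors = cooccurrence_vectors(corpus)
# Words used in similar ways end up with similar vectors:
print(cosine(vectors["dog"], vectors["cat"]) >
      cosine(vectors["dog"], vectors["ball"]))  # → True
```

Even these raw first-order counts separate the two animal words from the unrelated one; NLP models add higher-order machinery (dimensionality reduction, prediction) on top of the same underlying signal.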

2.
Previous research with artificial language learning paradigms has shown that infants are sensitive to statistical cues to word boundaries (Saffran, Aslin & Newport, 1996) and that they can use these cues to extract word‐like units (Saffran, 2001). However, it is unknown whether infants use statistical information to construct a receptive lexicon when acquiring their native language. In order to investigate this issue, we rely on the fact that besides real words a statistical algorithm extracts sound sequences that are highly frequent in infant‐directed speech but constitute nonwords. In three experiments, we use a preferential listening paradigm to test French‐learning 11‐month‐old infants' recognition of highly frequent disyllabic sequences from their native language. In Experiments 1 and 2, we use nonword stimuli and find that infants listen longer to high‐frequency than to low‐frequency sequences. In Experiment 3, we compare high‐frequency nonwords to real words in the same frequency range, and find that infants show no preference. Thus, at 11 months, French‐learning infants recognize highly frequent sound sequences from their native language and fail to differentiate between words and nonwords among these sequences. These results are evidence that they have used statistical information to extract word candidates from their input and stored them in a ‘protolexicon’, containing both words and nonwords.
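The statistical cue at issue, a drop in transitional probability (TP) between syllables at word boundaries, can be sketched as follows. The syllable stream and the threshold are invented for illustration and are not the stimuli used in the experiments.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(x -> y) = freq(xy) / freq(x), over adjacent syllable pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

def segment(syllables, tps, threshold=0.9):
    """Posit a word boundary wherever the TP drops below the threshold."""
    words, current = [], [syllables[0]]
    for x, y in zip(syllables, syllables[1:]):
        if tps[(x, y)] < threshold:
            words.append("".join(current))
            current = []
        current.append(y)
    words.append("".join(current))
    return words

# Stream built from two made-up words, "bidaku" and "padoti":
stream = ("bi da ku pa do ti bi da ku pa do ti "
          "pa do ti bi da ku bi da ku pa do ti").split()
tps = transitional_probabilities(stream)
# Within-word TPs are 1.0; TPs across word boundaries are at most 0.75,
# so thresholding recovers the two words:
print(segment(stream, tps)[:2])  # → ['bidaku', 'padoti']
```

The same computation is what would extract the high-frequency nonword sequences the study exploits: any sequence with reliably high internal TPs is chunked, whether or not it is a real word.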

3.
Words direct visual attention in infants, children, and adults, presumably by activating representations of referents that then direct attention to matching stimuli in the visual scene. Novel, unknown, words have also been shown to direct attention, likely via the activation of more general representations of naming events. To examine the critical issue of how novel words and visual attention interact to support word learning we coded frame-by-frame the gaze of 17- to 31-month-old children (n = 66, 38 females) while generalizing novel nouns. We replicate prior findings of more attention to shape when generalizing novel nouns, and a relation to vocabulary development. However, we also find that following a naming event, children who produce fewer nouns take longer to look at the objects they eventually select and make more transitions between objects before making a generalization decision. Children who produce more nouns look to the objects they eventually select more quickly following the naming event and make fewer looking transitions. We discuss these findings in the context of prior proposals regarding children's few-shot category learning, and a developmental cascade of multiple perceptual, cognitive, and word-learning processes that may operate in cases of both typical development and language delay.

Research Highlights

  • Examined how novel words guide visual attention by coding frame-by-frame where children look when asked to generalize novel names.
  • Gaze patterns differed with vocabulary size: children with smaller vocabularies attended to generalization targets more slowly and did more comparison than those with larger vocabularies.
  • Demonstrates a relationship between vocabulary size and attention to object properties during naming.
  • This work has implications for looking-based tests of early cognition, and our understanding of children's few-shot category learning.

4.
How do children succeed in learning a word? Research has shown robustly that, in ambiguous labeling situations, young children assume novel labels to refer to unfamiliar rather than familiar objects. However, ongoing debates center on the underlying mechanism: Is this behavior based on lexical constraints, guided by pragmatic reasoning, or simply driven by children's attraction to novelty? Additionally, recent research has questioned whether children's disambiguation leads to long-term learning or rather indicates an attentional shift in the moment of the conversation. Thus, we conducted a pre-registered online study with 2- and 3-year-olds and adults. Participants were presented with unknown objects as potential referents for a novel word. Across conditions, we manipulated whether the only difference between both objects was their relative novelty to the participant or whether, in addition, participants were provided with pragmatic information that indicated which object the speaker referred to. We tested participants’ immediate referent selection and their retention after 5 min. Results revealed that when given common ground information both age groups inferred the correct referent with high success and enhanced behavioral certainty. Without this information, object novelty alone did not guide their selection. After 5 min, adults remembered their previous selections above chance in both conditions, while children only showed reliable learning in the pragmatic condition. The pattern of results indicates how pragmatics may aid referent disambiguation and learning in both adults and young children. From early ontogeny on, children's social-cognitive understanding may guide their communicative interactions and support their language acquisition.

Research Highlights

  • We tested how 2-3-year-olds and adults resolve referential ambiguity without any lexical cues.
  • In the pragmatic context both age groups disambiguated novel word-object-mappings, while object novelty alone did not guide their referent selection.
  • In the pragmatic context, children also showed increased certainty in disambiguation and retained new word-object-mappings over time.
  • These findings contribute to the ongoing debate on whether children learn words on the basis of domain-specific constraints, lower-level associative mechanisms, or pragmatic inferences.

5.
Child-directed language can support language learning, but how? We addressed two questions: (1) how caregivers prosodically modulated their speech as a function of word familiarity (known or unknown to the child) and accessibility of referent (visually present or absent from the immediate environment); (2) whether such modulations affect children's unknown word learning and vocabulary development. We used data from 38 English-speaking caregivers (from the ECOLANG corpus) talking about toys (both known and unknown to their children aged 3–4 years) both when the toys are present and when absent. We analyzed prosodic dimensions (i.e., speaking rate, pitch and intensity) of caregivers’ productions of 6529 toy labels. We found that unknown labels were spoken with significantly slower speaking rate, wider pitch and intensity range than known labels, especially in the first mentions, suggesting that caregivers adjust their prosody based on children's lexical knowledge. Moreover, caregivers used slower speaking rate and larger intensity range to mark the first mentions of toys that were physically absent. After the first mentions, caregivers talked about the referents more loudly and with higher mean pitch when toys were present than when they were absent. Crucially, caregivers’ mean pitch of unknown words and the degree of mean pitch modulation for unknown words relative to known words (pitch ratio) predicted children's immediate word learning and vocabulary size 1 year later. In conclusion, caregivers modify their prosody when the learning situation is more demanding for children, and these helpful modulations assist children in word learning.

Research Highlights

  • In naturalistic interactions, caregivers use slower speaking rate, wider pitch and intensity range when introducing new labels to 3–4-year-old children, especially in first mentions.
  • Compared to when toys are present, caregivers speak more slowly with larger intensity range to mark the first mentions of toys that are physically absent.
  • Mean pitch to mark word familiarity predicts children's immediate word learning and future vocabulary size.

6.
Sleep spindle activity in infants supports their formation of generalized memories during sleep, indicating that specific sleep processes affect the consolidation of memories early in life. Characteristics of sleep spindles depend on the infant's developmental state and are known to be associated with trait‐like factors such as intelligence. It is, however, largely unknown which state‐like factors affect sleep spindles in infancy. By varying infants’ wake experience in a within‐subject design, here we provide evidence for a learning‐ and memory‐dependent modulation of infant spindle activity. In a lexical‐semantic learning session before a nap, 14‐ to 16‐month‐old infants were exposed to unknown words as labels for exemplars of unknown object categories. In a memory test on the next day, generalization to novel category exemplars was tested. In a nonlearning control session preceding a nap on another day, the same infants heard known words as labels for exemplars of already known categories. Central–parietal fast sleep spindles increased after the encoding of unknown object–word pairings compared to known pairings, evidencing that an infant's spindle activity varies depending on its prior knowledge of newly encoded information. Correlations suggest that enhanced spindle activity was particularly triggered when similar unknown pairings were not generalized immediately during encoding. Moreover, the spindle increase triggered by previously ungeneralized object–word pairings boosted the formation of generalized memories for these pairings. Overall, the results provide the first evidence for a fine‐tuned regulation of infant sleep quality according to current consolidation requirements, which improves infants' long‐term memory for new experiences.

7.
One of the central themes in the study of language acquisition is the gap between the linguistic knowledge that learners demonstrate, and the apparent inadequacy of linguistic input to support induction of this knowledge. One of the first linguistic abilities in the course of development to exemplify this problem is in speech perception: specifically, learning the sound system of one’s native language. Native-language sound systems are defined by meaningful contrasts among words in a language, yet infants learn these sound patterns before any significant numbers of words are acquired. Previous approaches to this learning problem have suggested that infants can learn phonetic categories from statistical analysis of auditory input, without regard to word referents. Experimental evidence presented here suggests instead that young infants can use visual cues present in word-labeling situations to categorize phonetic information. In Experiment 1, 9-month-old English-learning infants failed to discriminate two non-native phonetic categories, establishing baseline performance in a perceptual discrimination task. In Experiment 2, these infants succeeded at discrimination after watching contrasting visual cues (i.e., videos of two novel objects) paired consistently with the two non-native phonetic categories. In Experiment 3, these infants failed at discrimination after watching the same visual cues, but paired inconsistently with the two phonetic categories. At an age before which memory of word labels is demonstrated in the laboratory, 9-month-old infants use contrastive pairings between objects and sounds to influence their phonetic sensitivity. Phonetic learning may have a more functional basis than previous statistical learning mechanisms assume: infants may use cross-modal associations inherent in social contexts to learn native-language phonetic categories.

8.
The overall pattern of vocabulary development is relatively similar across children learning different languages. However, there are considerable differences in the words known to individual children. Historically, this variability has been explained in terms of differences in the input. Here, we examine the alternate possibility that children's individual interest in specific natural categories shapes the words they are likely to learn – a child who is more interested in animals will learn a new animal name more easily than a new vehicle name. Two‐year‐old German‐learning children (N = 39) were exposed to four novel word–object associations for objects from four different categories. Prior to the word learning task, we measured their interest in the categories that the objects belonged to. Our measure was pupillary change following exposure to familiar objects from these four categories, with increased pupillary change interpreted as increased interest in that category. Children showed more robust learning of word–object associations from categories they were more interested in relative to categories they were less interested in. We further found that interest in the novel objects themselves influenced learning, with distinct influences of both category interest and object interest on learning. These results suggest that children's interest in different natural categories shapes their word learning. This provides evidence for the strikingly intuitive possibility that a child who is more interested in animals will learn novel animal names more easily than a child who is more interested in vehicles.

9.
Since speech is a continuous stream with no systematic boundaries between words, how do pre-verbal infants manage to discover words? A proposed solution is that they might use the transitional probability between adjacent syllables, which drops at word boundaries. Here, we tested the limits of this mechanism by increasing the size of the word-unit to four syllables, and its automaticity by testing sleeping neonates. Using markers of statistical learning in neonates’ EEG, compared with adults’ behavioral performance in the same task, we confirmed that statistical learning is automatic enough to operate even in sleeping neonates. We also revealed that: (1) successfully tracking transitional probabilities (TPs) in a sequence is not sufficient to segment it; (2) prosodic cues, as subtle as subliminal pauses, enable recovery of word-segmentation capacities; (3) adults’ and neonates’ capacities to segment streams seem remarkably similar despite the differences in maturation and expertise. Finally, we observed that learning increased the overall similarity of neural responses across infants during exposure to the stream, providing a novel neural marker to monitor learning. Thus, from birth, infants are equipped with adult-like tools, allowing them to extract small coherent word-like units from auditory streams, based on the combination of statistical analyses and auditory parsing cues.

Research Highlights

  • Successfully tracking transitional probabilities in a sequence is not always sufficient to segment it.
  • Word segmentation solely based on transitional probability is limited to bi- or tri-syllabic elements.
  • Prosodic cues, as subtle as subliminal pauses, enable recovery of chunking capacities for four-syllable units in sleeping neonates and awake adults.

10.
Although vocabulary acquisition requires that children learn names for multiple things, many investigations of word learning mechanisms teach children the name for only one of the objects presented. This is problematic because it is unclear whether children's performance reflects recall of the correct name–object association or simply selection of the only object that was singled out by being named. Children introduced to one novel name may perform at ceiling because they are not required to discriminate on the basis of the name per se, and so appear to rapidly learn words following minimal exposure to a single word. We introduced children to four novel objects. For half the children, only one of the objects was named; for the other children, all four objects were named. Only children introduced to one word reliably selected the target object at test. This demonstration highlights the over-simplicity of one-word learning paradigms and the need for a shift toward word learning paradigms in which more than one word is taught, to ensure children disambiguate objects on the basis of their names rather than their degree of salience.

11.
Distributional information is a potential cue for learning syntactic categories. Recent studies demonstrate a developmental trajectory in the level of abstraction of distributional learning in young infants. Here we investigate the effect of prosody on infants' learning of adjacent relations between words. Twelve‐ to thirteen‐month‐old infants were exposed to an artificial language composed of 3‐word sentences of the form aXb and cYd, where X and Y words differed in the number of syllables. Training sentences contained a prosodic boundary between either the first and the second word or the second and the third word. Subsequently, infants were tested on novel test sentences that contained new X and Y words and also contained a flat prosody with no grouping cues. Infants successfully discriminated between novel grammatical and ungrammatical sentences, suggesting that the learned adjacent relations can be abstracted across words and prosodic conditions. Under the conditions tested, prosody may be only a weak constraint on syntactic categorization. Copyright © 2011 John Wiley & Sons, Ltd.

12.
Phonological development is sometimes seen as a process of learning sounds, or forming phonological categories, and then combining sounds to build words, with the evidence taken largely from studies demonstrating ‘perceptual narrowing’ in infant speech perception over the first year of life. In contrast, studies of early word production have long provided evidence that holistic word learning may precede the formation of phonological categories. In that account, children begin by matching their existing vocal patterns to adult words, with knowledge of the phonological system emerging from the network of related word forms. Here I review evidence from production and then consider how the implicit and explicit learning mechanisms assumed by the complementary memory systems model might be understood as reconciling the two approaches.

13.
How do young infants discover that a segment of the sound stream refers to a particular aspect of the visual world around them? Speakers do not enunciate each word separately, even to infants; rather, whattheysayrunstogether. To relate a word to its referent (say, an apple), infants must notice the interactant's target of attention (the apple) and at the same time single out the word that refers to it as the other person speaks (Lookattheapple!). We contend that caregivers through their actions assist by directing and educating an infant's attention, particularly through the use of a show gesture. The onset/offset, rhythm, tempo, and duration of these show gestures are synchronous with the saying of the words referring to the target objects. Our prior research using eye tracking found that show gestures lead an infant to look at the object presented as the word for it is uttered and that show gestures facilitate word learning. In this research we tested the hypothesis that show gestures also lead to enhanced attentional processing as measured through pupil dilation. Comparing pupil diameters while words were introduced with a show, static, or asynchronous dynamic gesture, we found that pupil dilation occurred for the show gesture condition and was positively correlated with word learning.

14.
From the very first moments of their lives, infants selectively attend to the visible orofacial movements of their social partners and apply their exquisite speech perception skills to the service of lexical learning. Here we explore how early bilingual experience modulates children's ability to use visible speech as they form new lexical representations. Using a cross‐modal word‐learning task, bilingual children aged 30 months were tested on their ability to learn new lexical mappings in either the auditory or the visual modality. Lexical recognition was assessed either in the same modality as the one used at learning (‘same modality’ condition: auditory test after auditory learning, visual test after visual learning) or in the other modality (‘cross‐modality’ condition: visual test after auditory learning, auditory test after visual learning). The results revealed that like their monolingual peers, bilingual children successfully learn new words in either the auditory or the visual modality and show cross‐modal recognition of words following auditory learning. Interestingly, as opposed to monolinguals, they also demonstrate cross‐modal recognition of words upon visual learning. Collectively, these findings indicate a bilingual edge in visual word learning, expressed in the capacity to form a recoverable cross‐modal representation of visually learned words.

15.
There are two broad views of children's theory of mind. The mentalist view is that it emerges in infancy and is possibly innate. The minimalist view is that it emerges more gradually in childhood and is heavily dependent on learning. According to minimalism, children initially understand behaviors rather than mental states, and they are assisted in doing so by recognizing repeating patterns in behavior. The regularities in behavior allow them to predict future behaviors, succeed on theory-of-mind tasks, acquire mental state words, and eventually, understand the mental states underlying behavior. The present study provided the first clear evidence for the plausibility of this view by fitting head cameras to 54 infants aged 6 to 25 months, and recording their view of the world in their daily lives. At 6 and 12 months, infants viewed an average of 146.5 repeated behaviors per hour, a rate consistent with approximately 560,000 repetitions in their first year; these repetitions correlated with children's acquisition of mental state words, even after controlling for their general vocabulary and a range of variables indexing social interaction. We also recorded infants’ views of people searching for, or searching for and retrieving, objects. These were 92 times less common and did not correlate with mental state vocabulary. Overall, the findings indicate that repeated behaviors provide a rich source of information that would readily allow children to recognize patterns in behavior and help them acquire mental state words, supporting this central claim of minimalism.

Research Highlights

  • Six- to 25-month-olds wore head cameras to record home life from infants’ point-of-view and help adjudicate between nativist and minimalist views of theory-of-mind (ToM).
  • Nativists say ToM emerges too early to be learned, whereas minimalists say infants learn to predict behaviors from behavior patterns in the environment.
  • Consistent with minimalism, infants had an incredibly rich exposure (146.5/h, >560,000 in first year) to repeated behaviors (e.g., drinking from a cup repeatedly).
  • Consistent with minimalism, more repeated behaviors correlated with infants’ mental state vocabulary, even after controlling for gender, age, searches witnessed and non-mental state vocabulary.

16.
How do infants begin to understand spoken words? Recent research suggests that word comprehension develops from the early detection of intersensory relations between conventionally paired auditory speech patterns (words) and visible objects or actions. More importantly, in keeping with dynamic systems principles, the findings suggest that word comprehension develops from a dynamic and complementary relationship between the organism (the infant) and the environment (language addressed to the infant). In addition, parallel findings from speech and non‐speech studies of intersensory perception provide evidence for domain general processes in the development of word comprehension. These research findings contrast with the view that a lexical acquisition device with specific lexical principles and innate constraints is required for early word comprehension. Furthermore, they suggest that learning of word–object relations is not merely an associative process. The data support an alternative view of the developmental process that emphasizes the dynamic and reciprocal interactions between general intersensory perception, selective attention and learning in infants, and the specific characteristics of maternal communication.

17.
Holistic processing (HP) of faces refers to the obligatory, simultaneous processing of the parts and their relations, and it emerges over the course of development. HP is manifest in a decrement in the perception of inverted versus upright faces and a reduction in face processing ability when the relations between parts are perturbed. Here, adopting the HP framework for faces, we examined the developmental emergence of HP in another domain for which human adults have expertise, namely, visual word processing. Children, adolescents, and adults performed a lexical decision task and we used two established signatures of HP for faces: the advantage in perception of upright over inverted words and nonwords and the reduced sensitivity to increasing parts (word length). Relative to the other groups, children showed less of an advantage for upright versus inverted trials and lexical decision was more affected by increasing word length. Performance on these HP indices was strongly associated with age and with reading proficiency. Also, the emergence of HP for word perception was not simply a result of improved visual perception over the course of development as no group differences were observed on an object decision task. These results reveal the developmental emergence of HP for orthographic input, and reflect a further instance of experience-dependent tuning of visual perception. These results also add to existing findings on the commonalities of mechanisms of word and face recognition.

Research Highlights

  • Children showed less of an advantage for upright versus inverted trials compared to adolescents and adults.
  • Relative to the other groups, lexical decision in children was more affected by increasing word length.
  • Performance on holistic processing (HP) indices was strongly associated with age and with reading proficiency.
  • HP emergence for word perception was not due to improved visual perception over development as there were no group differences on an object decision task.

18.
One particularly robust method for studying human contingency learning is the colour-word contingency learning paradigm. In this task, participants respond to the print colour of neutral words, each of which is presented most often in one colour. The contingencies between words and colours are learned, as indicated by faster and more accurate responses when words are presented in their expected colour relative to an unexpected colour. In a recent report, Forrin and MacLeod (2017b, Memory & Cognition) asked to what extent this performance (i.e., response time) measure of learning might depend on the relative speed of processing of the word and the colour. With keypress responses, learning effects were comparable when responding to the word and to the colour (contrary to predictions). However, an asymmetry appeared in a second experiment with vocal responses, with a contingency effect only present for colour identification. In a third experiment, the colour was preexposed, and contingency effects were again roughly symmetrical. In their report, they suggested that a simple speed-of-processing (or “horserace”) model might explain when contingency effects are observed in colour and word identification. In the present report, an alternative view is presented. In particular, it is argued that the results are best explained by appealing to the notion of relevant stimulus–response compatibility, which also resolves discrepancies between the horserace model's predictions and participants' results. The article presents simulations with the Parallel Episodic Processing model to demonstrate this case.
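The paradigm's core manipulation, presenting each neutral word most often in one colour, can be sketched with a hypothetical trial generator. The word set, proportions, and condition labels below are invented for illustration and are not the authors' materials.

```python
import random

def contingency_trials(word_colour, n_per_word=100, p_expected=0.8, seed=1):
    """Build a trial list in which each word appears in its assigned colour
    on roughly p_expected of trials, and in another colour otherwise."""
    rng = random.Random(seed)
    colours = sorted(set(word_colour.values()))
    trials = []
    for word, expected in word_colour.items():
        others = [c for c in colours if c != expected]
        for _ in range(n_per_word):
            if rng.random() < p_expected:
                trials.append((word, expected, "high-contingency"))
            else:
                trials.append((word, rng.choice(others), "low-contingency"))
    rng.shuffle(trials)
    return trials

trials = contingency_trials({"month": "red", "under": "blue", "plate": "green"})
observed = sum(t[2] == "high-contingency" for t in trials) / len(trials)
print(observed)  # close to 0.8 by construction
```

Faster and more accurate responding on the high-contingency trials than on the low-contingency trials is the learning effect the paradigm measures.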

19.
Congruency effects for colour word associates (e.g., ocean) have been reported in Stroop colour naming tasks. However, incidental memory for such words after word reading and colour naming tasks has not been examined. In the current study, participants incidentally recalled colour word associates (e.g., ocean) and neutral words (e.g., lawyer) immediately after naming their font colour (Experiment 1a) or reading them aloud (Experiment 1b). In both tasks, recall was better for congruent colour word associates (e.g., ocean appearing in blue) than incongruent colour word associates (e.g., ocean appearing in green) or neutral items (lawyer appearing in blue). This outcome is consistent with the idea that co-activation of a semantic colour code and a lexical representation strengthens the episodic memory representation and makes it more accessible.

20.
Visual perception in adult humans is thought to be tuned to represent the statistical regularities of natural scenes. For example, in adults, visual sensitivity to different hues shows an asymmetry which coincides with the statistical regularities of colour in the natural world. Infants are sensitive to statistical regularities in social and linguistic stimuli, but whether or not infants’ visual systems are tuned to natural scene statistics is currently unclear. We measured colour discrimination in infants to investigate whether or not the visual system can represent chromatic scene statistics in very early life. Our results reveal the earliest association between vision and natural scene statistics that has yet been found: even as young as 4 months of age, colour vision is aligned with the distributions of colours in natural scenes.

Research Highlights

  • We find infants’ colour sensitivity is aligned with the distribution of colours in the natural world, as it is in adults.
  • At just 4 months, infants’ visual systems are tailored to extract and represent the statistical regularities of the natural world.
  • This points to a drive for the human brain to represent statistical regularities even at a young age.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号