Similar Articles
20 similar articles found (search time: 15 ms)
1.
Vocabulary differences early in development are highly predictive of later language learning as well as achievement in school. Early word learning emerges in the context of tightly coupled social interactions between the early learner and a mature partner. In the present study, we develop and apply a novel paradigm, dual head-mounted eye tracking, to record momentary gaze data from both parents and infants during free-flowing toy-play contexts. With fine-grained sequential patterns extracted from continuous gaze streams, we objectively measure both joint attention and sustained attention as parents and 9-month-old infants played with objects and as parents named objects during play. We show that both joint attention and infant sustained attention predicted vocabulary sizes at 12 and 15 months, but that infant sustained attention in the context of joint attention, not joint attention itself, was the stronger unique predictor of later vocabulary size. Joint attention may predict word learning because joint attention supports infant attention to the named object.

2.
A head camera was used to examine the visual correlates of object name learning by toddlers as they played with novel objects and as the parent spontaneously named those objects. The toddlers' learning of the object names was tested after play, and the visual properties of the head camera images during naming events associated with learned and unlearned object names were analyzed. Naming events associated with learning had a clear visual signature, one in which the visual information itself was clean and visual competition among objects was minimized. Moreover, for learned object names, the visual advantage of the named target over competitors was sustained, both before and after the heard name. The findings are discussed in terms of the visual and cognitive processes that may depend on clean sensory input for learning and also on the sensory-motor, cognitive, and social processes that may create these optimal visual moments for learning.

3.
Most research on early language learning focuses on the objects that infants see and the words they hear in their daily lives, although growing evidence suggests that motor development is also closely tied to language development. To study the real-time behaviors required for learning new words during free-flowing toy play, we measured infants' visual attention and manual actions on to-be-learned toys. Parents and 12-to-26-month-old infants wore wireless head-mounted eye trackers, allowing them to move freely around a home-like lab environment. After the play session, infants were tested on their knowledge of object-label mappings. We found that how often parents named objects during play did not predict learning, but instead, it was infants' attention during and around a labeling utterance that predicted whether an object-label mapping was learned. More specifically, we found that infant visual attention alone did not predict word learning. Instead, coordinated, multimodal attention (when infants' hands and eyes were attending to the same object) predicted word learning. Our results implicate a causal pathway through which infants' bodily actions play a critical role in early word learning.

4.
Object names are a major component of early vocabularies and learning object names depends on being able to visually recognize objects in the world. However, the fundamental visual challenge of the moment-to-moment variations in object appearances that learners must resolve has received little attention in word learning research. Here we provide the first evidence that image-level object variability matters and may be the link that connects infant object manipulation to vocabulary development. Using head-mounted eye tracking, the present study objectively measured individual differences in the moment-to-moment variability of visual instances of the same object, from infants' first-person views. Infants who generated more variable visual object images through manual object manipulation at 15 months of age experienced greater vocabulary growth over the next six months. Elucidating infants' everyday visual experiences with objects may constitute a crucial missing link in our understanding of the developmental trajectory of object name learning.

5.
Sixteen Spanish aphasic patients named drawings of objects on three occasions. Multiple regression analyses were carried out on the naming accuracy scores. For the patient group as a whole, naming was affected by visual complexity, object familiarity, age of acquisition, and word frequency. The combination of variables predicted naming accuracy in 15 of the 16 individual patients. Age of acquisition, word frequency, and object familiarity predicted performance in the greatest number of patients, while visual complexity, imageability, animacy, and length all affected performance in at least two patients. High proportions of semantic and phonological errors to particular objects were associated with objects having early learned names while high proportions of no-response errors were associated with low familiarity and low visual complexity. It is suggested that visual complexity and object familiarity affect the ease of object recognition while word frequency affects name retrieval. Age of acquisition may affect both stages, accounting for its influence in patients with a range of different patterns of disorder.
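The multiple-regression approach used in this study (predicting naming accuracy from item-level variables) can be sketched in a few lines. The data below are simulated, and the effect sizes are invented for illustration; only the predictor names come from the abstract:

```python
import numpy as np

# Simulated item-level data standing in for the study's predictors:
# columns = visual complexity, object familiarity, age of acquisition, word frequency.
rng = np.random.default_rng(0)
n_items = 200
X = rng.normal(size=(n_items, 4))
true_slopes = np.array([-0.2, 0.5, -0.6, 0.4])  # assumed directions, not the paper's estimates
accuracy = X @ true_slopes + rng.normal(scale=0.2, size=n_items)

# Multiple regression: ordinary least squares with an intercept column.
design = np.column_stack([np.ones(n_items), X])
coefs, *_ = np.linalg.lstsq(design, accuracy, rcond=None)
print(coefs[1:])  # recovered slopes approximate true_slopes
```

With correlated real-world predictors (age of acquisition and frequency typically covary), the unique contribution of each variable is what the fitted slopes estimate, which is why the study reports per-variable effects rather than simple correlations.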

6.
An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence need longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.

7.
Fulkerson AL, Waxman SR. Cognition, 2007, 105(1): 218-228.
Recent studies reveal that naming has powerful conceptual consequences within the first year of life. Naming distinct objects with the same word highlights commonalities among the objects and promotes object categorization. In the present experiment, we pursued the origin of this link by examining the influence of words and tones on object categorization in infants at 6 and 12 months. At both ages, infants hearing a novel word for a set of distinct objects successfully formed object categories; those hearing a sequence of tones for the same objects did not. These results support the view that infants are sensitive to powerful and increasingly nuanced links between linguistic and conceptual units very early in the process of lexical acquisition.

8.
4.5-month-old infants can use information learned from prior experience with objects to help determine the boundaries of objects in a complex visual scene (Needham, 1998; Needham, Dueker, & Lockhead, 2002). The present studies investigate the effect of delay (between prior experience and test) on infant use of such experiential knowledge. Results indicate that infants can use experience with an object to help them parse a scene containing that object 24 h later (Experiment 1). Experiment 2 suggests that after 24 h infants have begun to forget some object attributes, and that this forgetting promotes generalization from one similar object to another. After a 72-h delay, infants did not show any beneficial effect of prior experience with one of the objects in the scene (Experiments 3A and B). However, prior experience with multiple objects, similar to an object in the scene, facilitated infant segregation of the scene 72 h later, suggesting that category information remains available in infant memory longer than experience with a single object. The results are discussed in terms of optimal infant benefit from prior experiences with objects.

9.
How do infants begin to understand spoken words? Recent research suggests that word comprehension develops from the early detection of intersensory relations between conventionally paired auditory speech patterns (words) and visible objects or actions. More importantly, in keeping with dynamic systems principles, the findings suggest that word comprehension develops from a dynamic and complementary relationship between the organism (the infant) and the environment (language addressed to the infant). In addition, parallel findings from speech and non-speech studies of intersensory perception provide evidence for domain general processes in the development of word comprehension. These research findings contrast with the view that a lexical acquisition device with specific lexical principles and innate constraints is required for early word comprehension. Furthermore, they suggest that learning of word-object relations is not merely an associative process. The data support an alternative view of the developmental process that emphasizes the dynamic and reciprocal interactions between general intersensory perception, selective attention and learning in infants, and the specific characteristics of maternal communication.

10.
Words direct visual attention in infants, children, and adults, presumably by activating representations of referents that then direct attention to matching stimuli in the visual scene. Novel, unknown words have also been shown to direct attention, likely via the activation of more general representations of naming events. To examine the critical issue of how novel words and visual attention interact to support word learning we coded frame-by-frame the gaze of 17- to 31-month-old children (n = 66, 38 females) while generalizing novel nouns. We replicate prior findings of more attention to shape when generalizing novel nouns, and a relation to vocabulary development. However, we also find that following a naming event, children who produce fewer nouns take longer to look at the objects they eventually select and make more transitions between objects before making a generalization decision. Children who produce more nouns look to the objects they eventually select more quickly following the naming event and make fewer looking transitions. We discuss these findings in the context of prior proposals regarding children's few-shot category learning, and a developmental cascade of multiple perceptual, cognitive, and word-learning processes that may operate in cases of both typical development and language delay.

Research Highlights

  • Examined how novel words guide visual attention by coding frame-by-frame where children look when asked to generalize novel names.
  • Gaze patterns differed with vocabulary size: children with smaller vocabularies attended to generalization targets more slowly and did more comparison than those with larger vocabularies.
  • Demonstrates a relationship between vocabulary size and attention to object properties during naming.
  • This work has implications for looking-based tests of early cognition, and our understanding of children's few-shot category learning.

11.
The ability to determine how many objects are involved in physical events is fundamental for reasoning about the world that surrounds us. Previous studies suggest that infants can fail to individuate objects in ambiguous occlusion events until their first birthday and that learning words for the objects may play a crucial role in the development of this ability. The present eye-tracking study tested whether the classical object individuation experiments underestimate young infants' ability to individuate objects and the role word learning plays in this process. Three groups of 6-month-old infants (N = 72) saw two opaque boxes side by side on the eye-tracker screen so that the content of the boxes was not visible. During a familiarization phase, two visually identical objects emerged sequentially from one box and two visually different objects from the other box. For one group of infants the familiarization was silent (Visual Only condition). For a second group of infants the objects were accompanied by nonsense words so that the objects' shapes and linguistic labels indicated the same number of objects in the two boxes (Visual & Language condition). For the third group of infants, the objects' shapes and linguistic labels were in conflict (Visual vs. Language condition). Following the familiarization, it was revealed that both boxes contained the same number of objects (e.g. one or two). In the Visual Only condition, infants looked longer at the box with the incorrect number of objects at test, showing that they could individuate objects using visual cues alone. In the Visual & Language condition infants showed the same looking pattern. However, in the Visual vs. Language condition infants looked longer at the box with the incorrect number of objects according to the linguistic labels.
The results show that infants can individuate objects in a complex object individuation paradigm considerably earlier than previously thought and that, when they conflict with visual cues, linguistic labels impose their own preference in object individuation. The results are consistent with the idea that when language and visual information are in conflict, language can exert an influence on how young infants reason about the visual world.

12.
This study examined whether age of acquisition (early vs. late) affects the naming of object pictures and action pictures differently. Using object picture naming and action picture naming tasks, we found: (1) compared with object picture naming, naming latencies for action pictures were longer, indicating that verb production is more complex; (2) in object picture naming, early-acquired words were produced faster than late-acquired words, whereas in action picture naming, late-acquired words were named faster than early-acquired words. Based on these analyses and discussion, we suggest that the AoA effect likely arises at the lexical level of picture naming rather than at the conceptual level or the response-output stage.

13.
Two experiments examined the effects of plane rotation on the recognition of briefly displayed pictures of familiar objects, using a picture-word verification task. Mirroring the results of earlier picture naming studies (Jolicoeur, 1985; Jolicoeur & Milliken, 1989), plane rotation away from a canonical upright orientation reduced the efficiency of recognition, although in contrast to the results from picture naming studies, the rotation effects were not reduced with experience with the stimuli. However, the rotation effects were influenced by the visual similarity of the distractor objects to the picture of the object presented, with greater orientation sensitivity being observed when visually similar distractors were presented. We suggest that subjects use orientation-sensitive representations to recognize objects in both the present unspeeded verification and in the earlier speeded naming tests of picture identification.

14.
A key question in early word learning is how children cope with the uncertainty in natural naming events. One potential mechanism for uncertainty reduction is cross-situational word learning: tracking word/object co-occurrence statistics across naming events. But empirical and computational analyses of cross-situational learning have made strong assumptions about the nature of naming event ambiguity, assumptions that have been challenged by recent analyses of natural naming events. This paper shows that learning from ambiguous natural naming events depends on perspective. Natural naming events from parent-child interactions were recorded from both a third-person tripod-mounted camera and from a head-mounted camera that produced a 'child's-eye' view. Following the human simulation paradigm, adults were asked to learn artificial language labels by integrating across the most ambiguous of these naming events. Significant learning was found only from the child's perspective, pointing to the importance of considering statistical learning from an embodied perspective.
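The co-occurrence tracking that defines cross-situational word learning can be made concrete with a small sketch. The scenes and nonsense labels below are invented for illustration; each naming event is ambiguous on its own, but counting word/object co-occurrences across events resolves the mappings:

```python
from collections import Counter, defaultdict

# Each event pairs a set of visible objects with the words heard; within an
# event the learner cannot tell which word goes with which object.
events = [
    ({"ball", "cup"}, ["blicket", "dax"]),
    ({"ball", "dog"}, ["blicket", "toma"]),
    ({"cup", "dog"},  ["dax", "toma"]),
]

counts = defaultdict(Counter)
for objects, words in events:
    for w in words:            # every word co-occurs with every visible object
        for o in objects:
            counts[w][o] += 1

# Guess each word's referent as its most frequent co-occurring object.
mapping = {w: c.most_common(1)[0][0] for w, c in counts.items()}
print(mapping)  # → {'blicket': 'ball', 'dax': 'cup', 'toma': 'dog'}
```

No single event identifies any referent, yet after three events each correct pairing has co-occurred twice while every incorrect pairing has co-occurred only once, which is the statistical signal the paradigm exploits.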

15.
Two experiments were conducted with younger and older speakers. In Experiment 1, participants named single objects that were intact or visually degraded, while hearing distractor words that were phonologically related or unrelated to the object name. In both younger and older participants naming latencies were shorter for intact than for degraded objects and shorter when related than when unrelated distractors were presented. In Experiment 2, the single objects were replaced by object triplets, with the distractors being phonologically related to the first object's name. Naming latencies and gaze durations for the first object showed degradation and relatedness effects that were similar to those in single-object naming. Older participants were slower than younger participants when naming single objects and slower and less fluent on the second but not the first object when naming object triplets. The results of these experiments indicate that both younger and older speakers plan object names sequentially, but that older speakers use this planning strategy less efficiently.

17.
Evidence from infant studies indicates that language learning can be facilitated by multimodal cues. We extended this observation to adult language learning by studying the effects of simultaneous visual cues (nonassociated object images) on speech segmentation performance. Our results indicate that segmentation of new words from a continuous speech stream is facilitated by simultaneous visual input presented at or near syllables that exhibit the low transitional probability indicative of word boundaries. This indicates that temporal audio-visual contiguity helps in directing attention to word boundaries at the earliest stages of language learning. Off-boundary or arrhythmic picture sequences did not affect segmentation performance, suggesting that the language learning system can effectively disregard noninformative visual information. Detection of temporal contiguity between multimodal stimuli may be useful in both infants and second-language learners not only for facilitating speech segmentation, but also for detecting word-object relationships in natural environments.
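The transitional-probability cue this study relies on, TP(x, y) = count(xy) / count(x), is high within words and drops at word boundaries. A toy syllable stream makes this visible; the three trisyllabic nonsense words below are invented for the sketch:

```python
import random
from collections import Counter

random.seed(1)
# Three made-up words; an unsegmented stream is built by sampling them at random.
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]
stream = [syl for _ in range(200) for syl in random.choice(words)]

# Transitional probability of each adjacent syllable pair.
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {p: pair_counts[p] / first_counts[p[0]] for p in pair_counts}

print(tp[("tu", "pi")])           # within-word transition: 1.0
print(tp.get(("ro", "go"), 0.0))  # cross-boundary transition: roughly 1/3
```

A segmentation rule can then posit a word boundary wherever the TP falls below a threshold (say 0.5), recovering the three words from the unsegmented stream; the low-TP transitions are exactly where the study placed its facilitating visual cues.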

18.
Recent laboratory experiments have shown that both infant and adult learners can acquire word-referent mappings using cross-situational statistics. The vast majority of the work on this topic has used unfamiliar objects presented on neutral backgrounds as the visual contexts for word learning. However, these laboratory contexts are much different than the real-world contexts in which learning occurs. Thus, the feasibility of generalizing cross-situational learning beyond the laboratory is in question. Adapting the Human Simulation Paradigm, we conducted a series of experiments examining cross-situational learning from children's egocentric videos captured during naturalistic play. Focusing on individually ambiguous naming moments that naturally occur during toy play, we asked how statistical learning unfolds in real time through accumulating cross-situational statistics in naturalistic contexts. We found that even when learning situations were individually ambiguous, learners' performance gradually improved over time. This improvement was driven in part by learners' use of partial knowledge acquired from previous learning situations, even when they had not yet discovered correct word-object mappings. These results suggest that word learning is a continuous process by means of real-time information integration.

19.
A central component of language development is word learning. One characterization of this process is that language learners discover objects and then look for word forms to associate with these objects (Macnamara, 1984; Smith, 2000). Another possibility is that word forms themselves are also important, such that once learned, hearing a familiar word form will lead young word learners to look for an object to associate with it (Jusczyk, 1997). This research investigates the relative weighting of word forms and objects in early word-object associations using the anticipatory eye-movement paradigm (AEM; McMurray & Aslin, 2004). Eighteen-month-old infants and adults were taught novel word-object associations and then tested on ambiguous stimuli that pitted word forms and objects against each other. Results revealed a change in the weighting of these components across development. For 18-month-old infants, word forms carried more weight in early word-object associative learning, while for adults, objects were more salient. Our results suggest that infants preferentially use word forms to guide the process of word-object association.

20.
Infant word learning is a frontier area of international research on language development, but most studies to date have examined English-learning infants, and research in China on the early language development of Mandarin-learning infants is still in its early stages. Because Mandarin has distinctive morphological and syntactic properties, and because the pragmatic habits of adult speech input and nonverbal cues also shape early word learning, infants in different linguistic and cultural environments show different patterns of word learning. This project takes a cross-linguistic perspective to examine early word learning (both word comprehension and word production) in Mandarin- and English-learning infants, as well as the features of adult verbal and nonverbal input that promote infant language development. The studies will combine laboratory experiments, semi-structured laboratory observation, and standardized scales, using newer techniques such as habituation and the intermodal preferential looking paradigm (IPLP), to explore infant vocabulary development from multiple angles, to assess how different linguistic cultures and individual environments influence early word learning, and to reveal the cross-linguistic consistency and specificity of vocabulary acquisition. The results are expected to offer insights into children's language learning.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号