Similar Literature
20 similar documents found (search time: 15 ms)
1.
Adults and 12‐month‐old infants recognize that even unfamiliar speech can communicate information between third parties, suggesting that they can separate the communicative function of speech from its lexical content. But do infants recognize that speech can communicate due to their experience understanding and producing language, or do they appreciate that speech is communicative earlier, with little such experience? We examined whether 6‐month‐olds recognize that speech can communicate information about an object. Infants watched a Communicator selectively grasp one of two objects (target). During test, the Communicator could no longer reach the objects; she turned to a Recipient and produced speech (a nonsense word) or non‐speech (coughing). Infants looked longer when the Recipient selected the non‐target than the target object when the Communicator spoke but not when she coughed – unless the Recipient had previously witnessed the Communicator's selective grasping of the target object. Our results suggest that at 6 months, with a receptive vocabulary of no more than a handful of commonly used words, infants possess some abstract understanding of the communicative function of speech. This understanding may provide an early mechanism for language and knowledge acquisition.

2.
Infants expect people to direct actions toward objects, and they respond to actions directed to themselves, but do they have expectations about actions directed to third parties? In two experiments, we used eye tracking to investigate 1- and 2-year-olds’ expectations about communicative actions addressed to a third party. Experiment 1 presented infants with videos where an adult (the Emitter) either uttered a sentence or produced non-speech sounds. The Emitter was either face-to-face with another adult (the Recipient) or the two were back-to-back. The Recipient did not respond to any of the sounds. We found that 2-, but not 1-year-olds looked quicker and longer at the Recipient following speech than non-speech, suggesting that they expected her to respond to speech. These effects were specific to the face-to-face context. Experiment 2 presented 1-year-olds with similar face-to-face exchanges but modified to engage infants and minimize task demands. The infants looked quicker to the Recipient following speech than non-speech, suggesting that they expected a response to speech. The study suggests that by 1 year of age infants expect communicative actions to be directed at a third-party listener.

3.
The present research investigated whether six-month-olds who rarely produce pointing actions can detect the object-directedness and communicative function of others’ pointing actions when linguistic information is provided. In Experiment 1, infants were randomly assigned to either a novel-word or emotional-vocalization condition. They were first familiarized with an event in which an actor uttered either a novel label (novel-word condition) or exclamatory expression (emotional-vocalization condition) and then pointed to one of two objects. Next, the positions of the objects were switched. During test trials, each infant watched the new-referent event where the actor pointed to the object to which the actor had not pointed before or the old-referent event where the actor pointed to the old object in its new location. Infants in the novel-word condition looked reliably longer at the new-referent event than at the old-referent event, suggesting that they encoded the object-directedness of the actor’s point. In contrast, infants in the emotional-vocalization condition showed roughly equal looking times to the two events. To further examine infants’ understanding of the communicative aspect of an actor’s point using a different communicative context, Experiment 2 used an identical procedure to the novel-word condition in Experiment 1, except there was only one object present during the familiarization trials. When the familiarization trials did not include a contrasting object, we found that the communicative intention of the actor’s point could be ambiguous. The infants showed roughly equal looking times during the two test events. The current research suggests that six-month-olds understand the object-directedness and communicative intention of others’ pointing when presented with a label, but not when presented with an emotional non-speech vocalization.

4.
Do young infants understand that pointing gestures allow the pointer to change the information state of a recipient? We used a third-party experimental scenario to examine whether 9- and 11-month-olds understand that a pointer's pointing gesture can inform a recipient about a target object. When the pointer pointed to a target, infants subsequently looked longer when the recipient selected the nontarget rather than the target object. In contrast, infants looked equally long whether the recipient selected the target or nontarget object when the pointer used a noncommunicative gesture, a fist. Finally, when the recipient had no perceptual access to the pointing gesture, infants looked longer when the recipient selected the target rather than the nontarget object. Young infants understand a fundamental aspect of the communicative function of pointing: Pointing, but not all gestures, can transfer information. Gestures may thus be one of the tools infants use for an early understanding of communication.

5.
4.5-month-old infants can use information learned from prior experience with objects to help determine the boundaries of objects in a complex visual scene (Needham, 1998; Needham, Dueker, & Lockhead, 2002). The present studies investigate the effect of delay (between prior experience and test) on infant use of such experiential knowledge. Results indicate that infants can use experience with an object to help them to parse a scene containing that object 24 h later (Experiment 1). Experiment 2 suggests that after 24 h infants have begun to forget some object attributes, and that this forgetting promotes generalization from one similar object to another. After a 72-h delay, infants did not show any beneficial effect of prior experience with one of the objects in the scene (Experiments 3A and B). However, prior experience with multiple objects, similar to an object in the scene, facilitated infant segregation of the scene 72 h later, suggesting that category information remains available in infant memory longer than experience with a single object. The results are discussed in terms of optimal infant benefit from prior experiences with objects.

6.
Following Leslie, Xu, Tremoulet and Scholl (1998), we distinguish between individuation (the establishment of an object representation) and identification (the use of information stored in the object representation to decide which previously individuated object is being encountered). Although there has been much work on how infants individuate objects, there is relatively little on the question of when and how property information is used to identify objects. Experiment 1 shows that 9‐month‐old infants use shape, but apparently not color, information in identifying objects that are each moved behind spatially separated screens. Infants could not simply have associated a shape with a location or a screen without regard to objecthood, because on alternate trials the objects switched locations/screens. Infants therefore had to bind shape information to the object representation while tracking the objects’ changing location. In Experiment 2, we tested whether infants represented both objects rather than ‘sampled’ only one of them. Using the same alternation procedure, infants again succeeded in using shape (but not color) information when only one of the screens was removed – the screen that occluded the first‐hidden object (requiring the longer time in memory). Finally, we relate our behavioral findings both to a cognitive model and to recent neuroscientific studies, concluding that ventral ‘what’ and dorsal ‘where’ pathways may be functionally integrated by 9 months.

7.
In this study, we investigated exploration and language development, particularly whether preliminary object play mediates the role of exploration in gesture and speech production. We followed 27 infants, aged 8–17 months, and gathered data on the frequency of their exploration, preliminary functional acts with single or multiple objects, and communicative behaviors (e.g., gesturing and single-word utterances). The results of our path analysis indicated that exploration had a direct effect on single-object play, which, in turn, affected gesturing and advanced object play. Gesturing as well as single and multi-object play affected speech production. These findings suggest that exploration is associated with language development. This association may be facilitated by object play milestones in which infants recall the object’s function, which strengthens their memory and representation skills. Further, recalling the usage of an object by the caregivers may encourage an infant’s overall imitation tendency, which is important for learning how to communicate with gestures and words.

8.
The present research examined whether infants as young as 6 months of age would consider what objects a human agent could perceive when interpreting her actions on the objects. In two experiments, the infants took the agent's actions of repeatedly reaching for and grasping one of two possible objects as suggesting her preference for that object only when the agent could detect both objects, not when the agent's perceptual access to the second object was absent, either because a large screen hid the object from the agent (Experiment 1), or because the agent sat with her back toward the object (Experiment 2). These results suggest that young infants recognize the role of perception in constraining an agent's goal‐actions.

9.
How do infants select and use information that is relevant to the task at hand? Infants treat events that involve different spatial relations as distinct, and their selection and use of object information depends on the type of event they encounter. For example, 4.5-month-olds consider information about object height in occlusion events, but infants typically fail to do so in containment events until they reach the age of 7.5 months. However, after seeing a prime involving occlusion, 4.5-month-olds became sensitive to height information in a containment event (Experiment 1). The enhancement lasted over a brief delay (Experiment 2) and persisted even longer when infants were shown an additional occlusion prime but not an object prime (Experiment 3). Together, these findings reveal remarkable flexibility in visual representations of young infants and show that their use of information can be facilitated not by strengthening object representations per se but by strengthening their tendency to retrieve available information in the representations.

10.
The ability to code location in continuous space is fundamental to spatial behavior. Existing evidence indicates a robust ability for such coding by 12 months, but systematic evidence on earlier origins is lacking. A series of studies investigated 5-month-olds’ ability to code the location of an object hidden in a sandbox, using a looking-time paradigm. In Experiment 1, after familiarization with a hiding-and-finding sequence at one location, infants looked longer at an object being disclosed from a location 12 inches (30 cm) away than at an object emerging from the hiding location, showing they were able to code location in continuous space. In Experiment 2, infants reacted with greater looking when objects emerged from locations 8 inches (20 cm) away from the hiding location, showing that location coding was more finely grained than could be inferred based on the first study. In Experiment 3, infants were familiarized with an object shown in hiding-and-finding sequences at two different locations. Infants looked longer at objects emerging 12 inches (30 cm) away from the most recent hiding location than to emergence from the other location, showing that infants could code location even when events had previously occurred at each location. In Experiment 4, after familiarization with two objects with different shapes, colors, and sounding characteristics, shown in hiding-and-finding sequences in two locations, infants reacted to location violations as they had in Experiment 3. However, they did not react to object violations, that is, events in which the wrong object emerged from a hiding location. Experiment 5 also found no effect of object violation, even when the infants initially saw the two objects side by side. Spatiotemporal characteristics may play a more central role in early object individuation than they do later, although further study is required.

11.
Adults and infants can differentiate communicative messages using the nonlinguistic acoustic properties of infant‐directed (ID) speech. Although the distinct prosodic properties of ID speech have been explored extensively, it is currently unknown whether the visual properties of the face during ID speech similarly convey communicative intent and thus represent an additional source of social information for infants during early interactions. To examine whether the dynamic facial movement associated with ID speech confers affective information independent of the acoustic signal, adults' differentiation of the visual properties of speakers' communicative messages was examined in two experiments in which the adults rated silent videos of approving and comforting ID and neutral adult‐directed speech. In Experiment 1, adults differentiated the facial speech groups on ratings of the intended recipient and the speaker's message. In Experiment 2, an original coding scale identified facial characteristics of the speakers. Discriminant correspondence analysis revealed two factors differentiating the facial speech groups on various characteristics. Implications for perception of ID facial movements in relation to speakers' communicative intent are discussed for both typically and atypically developing infants. Copyright © 2012 John Wiley & Sons, Ltd.

12.
Object knowledge refers to the understanding that all objects share certain properties. Various components of object knowledge (e.g., object occlusion, object causality) have been examined in human infants to determine its developmental origins. Viewpoint invariance (the understanding that an object viewed from different viewpoints is still the same object) is one area of object knowledge, however, that has received less attention. To this end, infants' capacity for viewpoint-invariant perception of multi-part objects was investigated. Three-month-old infants were tested for generalization to an object displayed on a mobile that differed only in orientation (i.e., viewpoint) from a training object. Infants were given experience with a wide range of object views (Experiment 1) or a more restricted range during training (Experiment 2). The results showed that infants generalized between a horizontal and vertical viewpoint (Experiment 1) that they could clearly discriminate between in other contexts (i.e., with restricted view experience, Experiment 2). Overall, the outcome shows that training experience with multiple viewpoints plays an important role in infants' ability to develop a general percept of an object's 3D structure and promotes viewpoint-invariant perception of multi-part objects; in contrast, restricting training experience impedes viewpoint-invariant recognition of multi-part objects.

13.
In laboratory experiments, infants are sensitive to patterns of visual features that co-occur (e.g., Fiser & Aslin, 2002). Once infants learn the statistical regularities, however, what do they do with that knowledge? Moreover, which patterns do infants learn in the cluttered world outside of the laboratory? Across 4 experiments, we show that 9-month-olds use this sensitivity to make inferences about object properties. In Experiment 1, 9-month-old infants expected co-occurring visual features to remain fused (i.e., infants looked longer when co-occurring features split apart than when they stayed together). Forming such expectations can help identify integral object parts for object individuation, recognition, and categorization. In Experiment 2, we increased the task difficulty by presenting the test stimuli simultaneously with a different spatial layout from the familiarization trials to provide a more ecologically valid condition. Infants did not make similar inferences in this more distracting test condition. However, Experiment 3 showed that a social cue did allow inferences in this more difficult test condition, and Experiment 4 showed that social cues helped infants choose patterns among distractor patterns during learning as well as during test. These findings suggest that infants can use feature co-occurrence to learn about objects and that social cues shape such foundational learning in distraction-filled environments.

14.
What infants appear to know depends heavily on how they are tested. For example, infants seem to understand object permanence (that objects continue to exist when no longer perceptible) within the first few months of life when this understanding is assessed through looking measures, but not until several months later when it is assessed through search measures. One explanation of such results is that infants gradually develop stronger representations of objects through experience, and that stronger representations are required for some tasks than for others. The current study confirms one prediction from this account: Stronger representations of familiar objects (relative to novel objects) should support greater sensitivity to their continued existence. After seeing objects hidden, infants reached more for familiar than novel objects, in striking contrast to their robust novelty preferences with visible objects. Theoretical implications concerning the origins of knowledge are discussed.

15.
Mou W, Xiao C, McNamara TP. Cognition, 2008, 108(1): 136–154
Two experiments investigated participants' spatial memory of a briefly viewed layout. Participants saw an array of five objects on a table and, after a short delay, indicated whether the target object indicated by the experimenter had been moved. Experiment 1 showed that change detection was more accurate when non-target objects were stationary than when non-target objects were moved. This context effect was observed when participants were tested both at the original learning perspective and at a novel perspective. In Experiment 2, the arrays of five objects were presented on a rectangular table and two of the non-target objects were aligned with the longer axis of the table. Change detection was more accurate when the target object was presented with the two objects that were aligned with the longer axis of the table during learning than when the target object was presented with the two objects that were not aligned with the longer axis of the table during learning. These results indicated that spatial memory of a briefly viewed layout represents interobject spatial relations and utilizes an allocentric reference direction.

16.
Two experiments examined whether 4‐month‐olds (N = 120) who were induced to assign two objects to different categories would then be able to take advantage of these contrastive categorical encodings to individuate and track the objects. In each experiment, infants first watched functional demonstrations of two tools, a masher and tongs (Experiment 1) or a marker and a knife (Experiment 2). Next, half the infants saw the two tools brought out alternately from behind a screen, which was then lowered to reveal only one of the tools (different‐objects condition); the other infants saw similar events except that the same tool was shown on either side of the screen (same‐object condition). In both experiments, infants in the different‐objects condition looked reliably longer than those in the same‐object condition, and this effect was eliminated if the demonstrations involved similar but non‐functional actions. Together, these results indicate that infants (a) were led by the functional demonstrations they observed to assign the two tools to distinct categories, (b) recruited these categorical encodings to individuate and track the tools, and hence (c) detected a violation in the different‐objects condition when the screen was lowered to reveal only one tool. Categorical information thus plays a privileged role in individuation and identity tracking from a very young age.

17.
Three experiments were conducted to investigate human newborns’ ability to perceive texture property tactually, either in a cross-modal transfer task or in an intra-modal tactual discrimination task. In Experiment 1, newborns failed to tactually recognize the texture (smooth vs. granular) of flat objects that they had previously seen, when they held flat objects. This failure was mainly due to a lack of intra-modal tactual discrimination between the two objects (Experiment 2). In contrast, Experiment 3 showed that newborns were able to tactually recognize the texture of previously seen surfaces when they held volumetric objects. Taken together, the results suggest that cross-modal transfer of texture from vision to touch stems from a peripheral mechanism, not a central mechanism. Grasping only allows newborns to perceive the texture of volumetric but not flat objects. As a consequence, this study reveals the limits of newborns’ grasping to detect and process information about texture. The results also suggest that more mature exploratory procedures, such as the “lateral motion” procedure exhibited by adults [Lederman, S. J., & Klatzky, R. (1987). Hand movements: A window into haptic object recognition. Cognitive Psychology, 19, 342–368], might be necessary for detecting the texture of flat objects in newborn infants.

18.
Koenig MA, Echols CH. Cognition, 2003, 87(3): 179–208
The four studies reported here examine whether 16-month-old infants' responses to true and false utterances interact with their knowledge of human agents. In Study 1, infants heard repeated instances either of true or false labeling of common objects; labels came from an active human speaker seated next to the infant. In Study 2, infants experienced the same stimuli and procedure; however, we replaced the human speaker of Study 1 with an audio speaker in the same location. In Study 3, labels came from a hidden audio speaker. In Study 4, a human speaker labeled the objects while facing away from them. In Study 1, infants looked significantly longer to the human agent when she falsely labeled than when she truthfully labeled the objects. Infants did not show a similar pattern of attention for the audio speaker of Study 2, the silent human of Study 3 or the facing-backward speaker of Study 4. In fact, infants who experienced truthful labeling looked significantly longer to the facing-backward labeler of Study 4 than to true labelers of the other three contexts. Additionally, infants were more likely to correct false labels when produced by the human labeler of Study 1 than in any of the other contexts. These findings suggest, first, that infants are developing a critical conception of other human speakers as truthful communicators, and second, that infants understand that human speakers may provide uniquely useful information when a word fails to match its referent. These findings are consistent with the view that infants can recognize differences in knowledge and that such differences can be based on differences in the availability of perceptual experience.

19.
The present research examined whether 9.5-month-old infants can attribute to an agent a disposition to perform a particular action on objects, and can then use this disposition to predict which of two new objects - one that can be used to perform the action and one that cannot - the agent is likely to reach for next. The infants first received familiarization trials in which they watched an agent slide either three (Experiments 1 and 3) or six (Experiment 2) different objects forward and backward on an apparatus floor. During test, the infants saw two new identical objects placed side by side: one stood inside a short frame that left little room for sliding, and the other stood inside a longer frame that left ample room for sliding. The infants who saw the agent slide six different objects attributed to her a disposition to slide objects: they expected her to select the "slidable" as opposed to the "unslidable" test object, and they looked reliably longer when she did not. In contrast, the infants who saw the agent slide only three different objects looked about equally when she selected either test object. These results add to recent evidence that infants in the first year of life can attribute dispositions to agents, and can use these dispositions to help predict agents' actions in new contexts.

20.
Two experiments examined the effects of postevent information on 18-month-olds' event memory. Experiment 1 (N=60) explored whether children's memory was reinstated when action information was eliminated from the reinstatement and only object information was introduced. Experiment 2 (N=48) examined children's recall when either (a) information about the objects' target actions was replaced with new action information or (b) the original training objects were replaced with new objects. In an elicited-imitation paradigm, children were trained to perform six target actions, watched a video reinstatement 10 weeks later, and were tested for recall 24 h after reinstatement. Two results were found. First, a video reminder eliminating action information reinstated children's memory as effectively as a video containing object and action information. Second, children were reminded of their past training when during reinstatement action information was preserved and new objects were presented but were not reminded when object information was preserved and new actions were presented.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号