Similar Documents
20 similar documents found.
1.
This study explored infants' ability to infer communicative intent as expressed in non-linguistic gestures. Sixty children aged 14, 18 and 24 months participated. In the context of a hiding game, an adult indicated for the child the location of a hidden toy by giving a communicative cue: either pointing or ostensive gazing toward the container containing the toy. To succeed in this task children had to do more than just follow the point or gaze to the target container. They also had to infer that the adult's behaviour was relevant to the situation at hand - she wanted to inform them that the toy was inside the container toward which she gestured. Children at all three ages successfully used both types of cues. We conclude that infants as young as 14 months of age can, in some situations, interpret an adult's behaviour as a relevant communicative act done for them.

2.
In the current study, 24‐ to 27‐month‐old children (N = 37) used pointing gestures in a cooperative object choice task with either peer or adult partners. When indicating the location of a hidden toy, children pointed equally accurately for adult and peer partners but more often for adult partners. When choosing from one of three hiding places, children used adults’ pointing to find a hidden toy significantly more often than they used peers’. In interaction with peers, children's choice behavior was at chance level. These results suggest that toddlers ascribe informative value to adults’ but not peers’ pointing gestures, and highlight the role of children's social expectations in their communicative development.

3.
The use of an adult as a resource for help and instruction in a problem solving situation was examined in 9, 14, and 18‐month‐old infants. Infants were placed in various situations ranging from a simple means‐end task where a toy was placed beyond infants' prehensile space on a mat, to instances where an attractive toy was placed inside closed transparent boxes that were more or less difficult for the child to open. The experimenter gave hints and modelled the solution each time the infant made a request (pointing, reaching, or showing a box to the experimenter), or if the infant was unable to solve the problem. Infants' success on the problems, sensitivity to the experimenter's modelling, and communicative gestures (requests, co‐occurrence of looking behaviour and requests) were analysed. Results show that older infants had better success in solving problems although they exhibited difficulties in solving the simple means‐end task compared to the younger infants. Moreover, 14‐ and 18‐month‐olds were sensitive to the experimenter's modelling and used her demonstration cues to solve problems. By contrast, 9‐month‐olds did not show such sensitivity. Finally, 9‐month‐old infants displayed significantly fewer communicative gestures toward the adult compared to the other age groups, although in general, all infants tended to increase their frequency of requests as a function of problem difficulty. These observations support the idea that during the first half of the second year infants develop a new collaborative stance toward others. The stance is interpreted as foundational to teaching and instruction, two mechanisms of social learning that are sometimes considered to be specifically human. Copyright © 2006 John Wiley & Sons, Ltd.

4.
We investigated whether infants comprehend others’ nonverbal communicative intentions directed to a third person, in an ‘overhearing’ context. An experimenter addressed an assistant and indicated a hidden toy's location by either gazing ostensively or pointing to the location for her. In a matched control condition, the experimenter performed similar behaviors (absent-minded gazing and extended index finger) but did not communicate ostensively with the assistant. Infants could then search for the toy. Eighteen-month-old infants were skillful in using both communicative cues to find the hidden object, whereas 14-month-olds performed above chance only with the pointing cue. Neither age group performed above chance in the control condition. This study thus shows that by 14–18 months of age, infants are beginning to monitor and comprehend some aspects of third party interactions.

5.
Several interaction‐based and looking‐time studies suggest that 1‐year‐old infants understand the referential nature of deictic gestures. However, these studies have not unequivocally established that referential gestures induce object expectations in infants prior to encountering a referent object, and have thus remained amenable to simpler attentional highlighting interpretations. The current study tested whether nonlinguistic referential communication induces object expectations in infants by using a novel pupil dilation paradigm. In Experiment 1, 12‐month‐olds watched videos of a protagonist who either pointed communicatively toward an occluder in front of her or remained still. At test, the occluder opened to reveal one of two outcomes: an empty surface or a toy. Results showed that infants’ pupils were larger for the unexpected outcome of an empty surface following a point compared to the control condition (an empty surface following no point). These differences were not caused by differences in looking times or directions. In Experiment 2, an attention‐directing nonsocial control cue replaced the referential communication. The cue did direct 12‐month‐olds’ attention to the occluder, but it did not induce an object expectation. In Experiment 3, we tested 8‐month‐olds in the setting of Experiment 1. In contrast to 12‐month‐olds, 8‐month‐olds did not reveal object expectations following communication. Findings demonstrate that communicative pointing acts induce object expectations at 12 months of age, but not at 8 months of age, and that these expectations are specific to a referential‐communicative as opposed to an attention‐directing nonsocial cue.

6.
Cognitive Development, 2003, 18(1): 91–110
The goal of the present research was to assess whether communicative gestures, such as gazing and declarative pointing of 12-month-old infants indicate that infants perceive people as intentional agents, or whether infant communicative behaviors are merely triggered by specific perceptual cues in joint visual attention situations. Two experiments were conducted. In Experiment 1, thirty-two 12-month-olds were conditioned to follow the gaze of a contingently interacting person or object. They were then submitted to a paradigm designed to incite them to initiate communicative gestures to the person or object. The temporal coordination between pointing, gazing, and vocalizations occurred at a significantly higher rate in the Person than in the Object condition. In Experiment 2, the effect of the attentional focus of others on the gaze, points and vocalizations of thirty 12-month-olds was investigated. Infants were assessed in conditions where the experimenter vocalized while looking at the same (In-focus), or a different (Out-of-focus) toy than the infants. Infants who pointed produced more co-occurrences of gaze, vocalizations and points in the Out-of-focus condition than in the In-focus condition. Thus, by 12 months, infants are aware of the attentional state of the person. Discussion centers on the implications of these findings for theories of social and cognitive knowing.

7.
The present study investigated the degree to which infants’ use of simultaneous gesture–speech combinations during controlled social interactions predicts later language development. Nineteen infants participated in a declarative pointing task involving three different social conditions: two experimental conditions, (a) available, when the adult was visually attending to the infant but did not attend to the object of reference jointly with the child, and (b) unavailable, when the adult was not visually attending to either the infant or the object; and (c) a baseline condition, when the adult jointly engaged with the infant's object of reference. At 12 months of age measures related to infants’ speech-only productions, pointing-only gestures, and simultaneous pointing–speech combinations were obtained in each of the three social conditions. Each child's lexical and grammatical output was assessed at 18 months of age through parental report. Results revealed a significant interaction between social condition and type of communicative production. Specifically, only simultaneous pointing–speech combinations increased in frequency during the available condition compared to baseline, while no differences were found for speech-only and pointing-only productions. Moreover, simultaneous pointing–speech combinations in the available condition at 12 months positively correlated with lexical and grammatical development at 18 months of age. The ability to selectively use this multimodal communicative strategy to engage the adult in joint attention by drawing his attention toward an unseen event or object reveals 12-month-olds’ clear understanding of referential cues that are relevant for language development. This strategy to successfully initiate and maintain joint attention is related to language development as it increases learning opportunities from social interactions.

8.
One-year-old infants have a small receptive vocabulary and follow deictic gestures, but it is still debated whether they appreciate the referential nature of these signals. Demonstrating understanding of the complementary roles of symbolic (word) and indexical (pointing) reference provides evidence of referential interpretation of communicative signals. We presented 13-month-old infants with video sequences of an actress indicating the position of a hidden object while naming it. The infants looked longer when the named object was revealed not at the location indicated by the actress's gestures, but on the opposite side of the display. This finding suggests that infants expect that concurrently occurring communicative signals co-refer to the same object. Another group of infants, who were shown video sequences in which the naming and the deictic cues were provided concurrently but by two different people, displayed no evidence of expectation of co-reference. These findings suggest that a single communicative source, and not simply co-occurrence, is required for mapping the two signals onto each other. By 13 months of age, infants appreciate the referential nature of words and deictic gestures alike.

9.
An understanding of intentionality is thought to underlie developing joint attention. Similarly, early social‐communicative behaviours have been argued to reflect an appreciation of adult intentionality. This study explored the relation between social‐communicative behaviours during the still‐face effect at 6 months and joint attention at 12 months in a longitudinal sample of 42 infants. Three types of joint attention were investigated: coordinated joint attention (infant alternates looks between an adult and objects), initiating joint attention (infant uses communicative gestures to engage or direct adult attention) and attention following (infant follows an adult's line of gaze and pointing towards an object). The still‐face effect was correlated with later attention following, but not coordinated or initiating joint attention. Initiating joint attention was correlated with coordinated joint attention. We propose that the former association reflects a lower‐level detection of adult intentionality rather than a higher‐level interpretation of an agent's intentions towards outside entities. The findings support two bodies of research – one advocating for a distinction between types of joint attentional ability and a second proposing that infants can detect intentional actions without understanding or attributing mental states to objects.

10.
Children growing up in a dual-language environment have to constantly monitor the dynamic communicative context to determine what the speaker is trying to say and how to respond appropriately. Such self-generated efforts to monitor speakers' communicative needs may heighten children's sensitivity to, and allow them to make better use of, referential gestures to figure out a speaker's referential intent. In a series of studies, we explored monolingual and bilingual preschoolers' use of nonverbal referential gestures such as pointing and gaze direction to figure out a speaker's intent to refer. In Study 1, we found that 3- and 4-year-old bilingual children were better able than monolingual children to use referential gestures (e.g., gaze direction) to locate a hidden toy in the face of conflicting body-distal information (the experimenter was seated behind an empty box while the cue was directed at the correct box). Study 2 found that by 5 years of age, monolingual children had mastered this task. Study 3 established that the bilingual advantage can be found in children as young as 2 years old. Thus, the experience of growing up in a bilingual environment fosters the development of the understanding of referential intent.

11.
Dogs' (Canis familiaris) and cats' (Felis catus) interspecific communicative behavior toward humans was investigated. In Experiment 1, the ability of dogs and cats to use human pointing gestures in an object-choice task was compared using 4 types of pointing cues differing in distance between the signaled object and the end of the fingertip and in visibility duration of the given signal. Using these gestures, both dogs and cats were able to find the hidden food; there was no significant difference in their performance. In Experiment 2, the hidden food was made inaccessible to the subjects to determine whether they could indicate the place of the hidden food to a naive owner. Cats lacked some components of attention-getting behavior compared with dogs. The results suggest that individual familiarization with pointing gestures ensures high-level performance in the presence of such gestures; however, species-specific differences could cause differences in signaling toward the human.

12.
This study explored children’s development in comprehending four types of pointing gestures differing in familiarity. Our aim was to highlight human infants’ pointing comprehension abilities under the same conditions used for various animal species. Sixteen children were tested longitudinally in a two-choice task from 1 year of age. At the age of 12 and 14 months, infants did not exceed chance level with either of the gestures used. Infants were successful with distal pointing and long cross-pointing at the age of 16 months. By the age of 18 months, infants showed a high success rate with the less familiar gestures (forward cross-pointing and far pointing) as well. Their skills at this older age show close similarity with those demonstrated previously by dogs when using exactly the same testing procedures. Our longitudinal studies also revealed that in a few infants, the ability to comprehend pointing gestures is already apparent before 16 months of age. In general, we found large individual variation. This has been described for a variety of cognitive skills in human development and seems to be typical for pointing comprehension as well.

13.
Chimpanzees (Pan troglodytes) and bonobos (Pan paniscus) (Study 1) and 18- and 24-month-old human children (Study 2) participated in a novel communicative task. A human experimenter (E) hid food or a toy in one of two opaque containers before gesturing towards the reward's location in one of two ways. In the Informing condition, she attempted to help the subject find the hidden object by simply pointing to the correct container. In the Prohibiting condition, E held out her arm toward the correct container (palm out) and told the subject firmly 'Don't take this one.' As in previous studies, the apes were at chance in the Informing condition. However, they were above chance in the new Prohibiting condition. Human 18-month-olds showed this same pattern of results, whereas 24-month-olds showed the opposite pattern: they were better in the Informing condition than in the Prohibiting condition. In our interpretation, success in the Prohibiting condition requires subjects to understand E's goal toward them and their behavior, and then to make an inference (she would only prohibit if there were something good in there). Success in the Informing condition requires subjects to understand a cooperative communicative motive - which apparently apes and young infants find difficult.

14.
The theory of natural pedagogy has proposed that infants can use ostensive signals, including eye contact, infant‐directed speech, and contingency to learn from others. However, the role of bodily gestures, such as hand‐waving, in social learning has been largely ignored. To address this gap in the literature, this study sought to determine whether 4‐month‐old infants exhibited a preference for horizontal or vertical (control) hand‐waving gestures. We also examined whether horizontal hand‐waving gestures followed by pointing facilitated the process of object learning in 9‐month‐old infants. Results showed that 4‐month‐old infants preferred horizontal hand‐waving gestures to vertical hand‐waving gestures, even when featural and contextual information were removed. Furthermore, horizontal hand‐waving gestures induced identity encoding for cued objects, whereas vertical gestures did not. These findings highlight the role of communicative intent embedded in bodily movements and indicate that hand‐waving can serve as a new type of ostensive signal.

15.
Infants' understanding of how their actions affect the visibility of hidden objects may be a crucial aspect of the development of search behaviour. To investigate this possibility, 7‐month‐old infants took part in a two‐day training study. At the start of the first session, and at the end of the second, all infants performed a search task with a hiding‐well. On both days, infants had an additional training experience. The ‘Agency group’ learnt to spin a turntable to reveal a hidden toy, whilst the ‘Means‐End’ group learnt the same means‐end motor action, but the toy was always visible. The Agency group showed greater improvement on the hiding‐well search task following their training experience. We suggest that the Agency group's turntable experience was effective because it provided the experience of bringing objects back into visibility by one's actions. Further, the performance of the Agency group demonstrates generalized transfer of learning across situations with both different motor actions and stimuli in infants as young as 7 months.

16.
Two tasks were administered to 40 children aged from 16 to 20 months (mean age = 18;1), to evaluate children's understanding of declarative and informative intention [Behne, T., Carpenter, M., & Tomasello, M. (2005). One-year-olds comprehend the communicative intentions behind gestures in a hiding game. Developmental Science, 8, 492–499; Camaioni, L., Perucchini, P., Bellagamba, F., & Colonnesi, C. (2004). The role of declarative pointing in developing a theory of mind. Infancy, 5, 291–308]. In the first task, children had to respond to the experimenter who pointed at a distal object; in the second task, children had to find a toy in a hiding game after the experimenter indicated the correct location either by pointing or by gazing. In the first task, most children responded to the declarative gesture by “commenting” on the pointed object instead of just looking; however, looking responses were more frequent than commenting responses. In the second task, children chose the correct location of the object significantly more frequently when the informative gesture was the point than when it was the gaze; moreover, there were significantly more correct choices than incorrect choices in the point but not in the gaze condition. Finally, no significant relation was found between tasks. Taken together, the findings support the view that infants’ developing understanding of communicative intention is a complex process in which general cognitive abilities and contextual factors are equally important.

17.
The aim of the present investigation was to study the visual communication between humans and dogs in relatively complex situations. In the present research, we have modelled more lifelike situations in contrast to previous studies which often relied on using only two potential hiding locations and direct association between the communicative signal and the signalled object. In Study 1, we have provided the dogs with four potential hiding locations, two on each side of the experimenter, to see whether dogs are able to choose the correct location based on the pointing gesture. In Study 2, dogs had to rely on a sequence of pointing gestures displayed by two different experimenters. We have investigated whether dogs are able to recognise an ‘indirect signal’, that is, a pointing toward a pointer. In Study 3, we have examined whether dogs can understand indirect information about a hidden object and direct the owner to the particular location. Study 1 has revealed that dogs are unlikely to rely on extrapolating precise linear vectors along the pointing arm when relying on human pointing gestures. Instead, they rely on a simple rule of following the side of the human gesturing. If there were more targets on the same side of the human, they showed a preference for the targets closer to the human. Study 2 has shown that dogs are able to rely on indirect pointing gestures but the individual performances suggest that this skill may be restricted to a certain level of complexity. In Study 3, we have found that dogs are able to localise the hidden object by utilising indirect human signals, and they are able to convey this information to their owner.

18.
Speech directed towards young children ("motherese") is subject to consistent systematic modifications. Recent research suggests that gesture directed towards young children is similarly modified (gesturese). It has been suggested that gesturese supports speech, therefore scaffolding communicative development (the facilitative interactional theory). Alternatively, maternal gestural modification may be a consequence of the semantic simplicity of interaction with infants (the interactional artefact theory). The gesture patterns of 12 English mothers were observed with their 20-month-old infants while engaged in two tasks, free play and a counting task, designed to differentially tap into scaffolding. Gestures accounted for 29% of total maternal communicative behaviour. English mothers employed mainly concrete deictic gestures (e.g. pointing) that supported speech by disambiguating and emphasizing the verbal utterance. Maternal gesture rate and informational gesture-speech relationship were consistent across tasks, supporting the interactional artefact theory. This distinctive pattern of gesture use for the English mothers was similar to that reported for American and Italian mothers, providing support for universality. Child-directed gestures are not redundant in relation to child-directed speech but rather both are used by mothers to support their communicative acts with infants.

19.
One of the most fascinating phenomena in early development is that babies not only understand signs others direct to them and later use them to communicate with others, but they also come to direct the same signs towards themselves in a private way. Private gestures become "tools of thought". There is a considerable literature about private language, but almost nothing about private gestures. Private gestures pose an intriguing communicative puzzle: they are communicative, but with the self. In this paper we study two types of private gestures (signs) before language: (1) private ostensive gestures and (2) private pointing gestures. We show in a case study of one child between 12 and 18 months of age that both are used with a self-reflexive function, as a way of "thinking" what to do, in order to solve a problem in the conventional use of an object. The private gestures become self-reflexive signs.

20.
Infants can see someone pointing to one of two buckets and infer that the toy they are seeking is hidden inside. Great apes do not succeed in this task, but, surprisingly, domestic dogs do. However, whether children and dogs understand these communicative acts in the same way is not yet known. To test this possibility, an experimenter did not point, look, or extend any part of her body towards either bucket, but instead lifted and shook one via a centrally pulled rope. She did this either intentionally or accidentally, and did or did not address her act to the subject using ostensive cues. Young 2‐year‐old children but not dogs understood the experimenter's act in intentional conditions. While ostensive pulling of the rope made no difference to children's success, it actually hindered dogs' performance. We conclude that while human children may be capable of inferring communicative intent from a wide variety of actions, so long as these actions are performed intentionally, dogs are likely to be less flexible in this respect. Their understanding of communicative intention may be more dependent upon bodily markers of communicative intent, including gaze, orientation, extended limbs, and vocalizations. This may be because humans have come under selective pressure to develop skills for communicating with absent interlocutors – where bodily co‐presence is not possible.
