Similar Literature
20 similar documents found (search time: 15 ms)
1.
Two factors hypothesized to affect shared visual attention in 9-month-olds were investigated in two experiments. In Experiment 1, we examined the effects of different attention-directing actions (looking; looking and pointing; and looking, pointing, and verbalizing) on 9-month-olds’ engagement in shared visual attention. In Experiment 1 we also varied target object locations (i.e., in front of, behind, or peripheral to the infant) to test whether 9-month-olds can follow an adult’s gesture past a nearby object to a more distal target. Infants followed more elaborate parental gestures to targets within their visual field. They also ignored nearby objects to follow adults’ attention to a peripheral target, but not to targets behind them. In Experiment 2, we rotated the parent 90° from the infant’s midline to equate the size of the parent’s head turns to targets within and outside the infants’ visual field. This manipulation significantly increased infants’ looking to target objects behind them; however, the frequency of such looks did not exceed chance. The results of these two experiments are consistent with perceptual and social-experience accounts of shared visual attention.

2.
Two experiments were designed to investigate 3- and 4-month-olds’ shifting of attention from an adult’s face to the adult’s hand. In Experiment 1, 24 infants were presented with five types of hand gestures by their mothers and by a stranger. Experiment 2 was given to 22 infants with the same procedures, except that the adult also inclined her head while pointing to objects. The results were: (1) after encountering an averted head repeatedly, the infants shifted their attention from the adult’s face to the moving hand and objects; they oriented to what the adult was attending to. (2) The moving head increased the rate at which infants turned their heads in the same direction as the adult. The conclusion was that an averted head and eyes play a major role in infants’ orienting to an adult’s hand. The hand was a shared visual target while the adult acted on objects, indicating that infants’ orientation to the adult’s hand is a precursor stage of joint visual attention.

3.
Cognitive Development, 2003, 18(1): 91-110
The goal of the present research was to assess whether communicative gestures, such as gazing and declarative pointing, of 12-month-old infants indicate that infants perceive people as intentional agents, or whether infant communicative behaviors are merely triggered by specific perceptual cues in joint visual attention situations. Two experiments were conducted. In Experiment 1, thirty-two 12-month-olds were conditioned to follow the gaze of a contingently interacting person or object. They were then submitted to a paradigm designed to incite them to initiate communicative gestures to the person or object. The temporal coordination between pointing, gazing, and vocalizations occurred at a significantly higher rate in the Person than in the Object condition. In Experiment 2, the effect of the attentional focus of others on the gaze, points, and vocalizations of thirty 12-month-olds was investigated. Infants were assessed in conditions where the experimenter vocalized while looking at the same (In-focus) or a different (Out-of-focus) toy than the infants. Infants who pointed produced more co-occurrences of gaze, vocalizations, and points in the Out-of-focus condition than in the In-focus condition. Thus, by 12 months, infants are aware of the attentional state of another person. Discussion centers on the implications of these findings for theories of social and cognitive knowing.

4.
In the current study we investigated infants' communication in the visual and auditory modalities as a function of the recipient's visual attention. We elicited pointing at interesting events from thirty-two 12-month-olds and thirty-two 18-month-olds in two conditions: when the recipient either was or was not visually attending to them before and during the point. The main result was that infants initiated more pointing when the recipient's visual attention was on them than when it was not. In addition, when the recipient did not respond by sharing interest in the designated event, infants initiated more repairs (repeated pointing) than when she did, again especially when the recipient was visually attending to them. Interestingly, accompanying vocalizations were used intentionally and increased in both experimental conditions when the recipient did not share attention and interest. However, there was little evidence that infants used their vocalizations to direct attention to their gestures when the recipient was not attending to them.

5.
Several interaction-based and looking-time studies suggest that 1-year-old infants understand the referential nature of deictic gestures. However, these studies have not unequivocally established that referential gestures induce object expectations in infants prior to encountering a referent object, and have thus remained amenable to simpler attentional highlighting interpretations. The current study tested whether nonlinguistic referential communication induces object expectations in infants by using a novel pupil dilation paradigm. In Experiment 1, 12-month-olds watched videos of a protagonist who either pointed communicatively toward an occluder in front of her or remained still. At test, the occluder opened to reveal one of two outcomes: an empty surface or a toy. Results showed that infants’ pupils were larger for the unexpected outcome of an empty surface following a point compared to the control condition (an empty surface following no point). These differences were not caused by differences in looking times or directions. In Experiment 2, an attention-directing nonsocial control cue replaced the referential communication. The cue did direct 12-month-olds’ attention to the occluder, but it did not induce an object expectation. In Experiment 3, we tested 8-month-olds in the setting of Experiment 1. In contrast to 12-month-olds, 8-month-olds did not reveal object expectations following communication. Findings demonstrate that communicative pointing acts induce object expectations at 12 months of age, but not at 8 months of age, and that these expectations are specific to a referential-communicative as opposed to an attention-directing nonsocial cue.

6.
This study investigated infants’ rapid learning of two novel words using a preferential looking measure compared with a preferential reaching measure. In Experiment 1, 21 13-month-olds and 20 17-month-olds were given 12 novel label exposures (6 per trial) for each of two novel objects. Next, in the label comprehension tests, infants were shown both objects and were asked, “Where’s the [label]?” (looking preference) and then told, “Put the [label] in the basket” (reaching preference). Only the 13-month-olds showed rapid word learning on the looking measure; neither age group showed rapid word learning on the reaching measure. In Experiment 2, the procedure was repeated 24 h later with 10 participants per age group from Experiment 1. After a further 12 labels per object, both age groups now showed robust evidence of rapid word learning, but again only on the looking measure. This is the earliest looking-based evidence of rapid word learning in infants in a well-controlled (i.e., two-word) procedure; our failure to replicate previous reports of rapid word learning in 13-month-olds with a preferential reaching measure may be due to our use of more rigorous controls for object preferences. The superior performance of the younger infants on the looking measure in Experiment 1 was not straightforwardly predicted by existing theoretical accounts of word learning.

7.
In recent years there has been a resurgence of interest in the motivations behind, and the function of, infant pointing behaviour. Many studies have converged on the view that early pointing reflects a motivation to share attention and interest with others. Under one view, it is the sharing of attention itself that is the ultimate function of pointing, and is an early manifestation of a uniquely human social cognition that is geared towards cooperation and collaboration. In the current study, we tested an alternative hypothesis in which the goal of pointing is not attention sharing itself, but the information-laden response that infants tend to receive as a result of sharing attention. If infants indeed point in order to obtain information, their pointing should be modulated by the perceived ability of the other to provide this information. In Experiment 1, 16-month-olds who interacted with a demonstrably knowledgeable experimenter pointed significantly more to novel objects than infants who interacted with an ignorant experimenter. In Experiment 2, we confirmed that this finding was due to the perceived competence of the experimenter rather than to the different ways in which the experimenter responded to infants' points. Our results suggest that one function of pointing in infancy is to obtain information from others, and that infants selectively elicit desired information from those whom they perceive could competently provide it.

8.
Butler, Caron, & Brooks (2000) tested the gaze following of 14- and 18-month-olds under 3 conditions: (1) when the adult's view of the targets was blocked by barriers, (2) when the barriers contained open windows, and (3) when there were no barriers. Contrary to a nonmentalist "ecological" model (adult turns serve as cues to the location of interesting events), frequency of gaze following by 14-month-olds was not equivalent across the 3 conditions. Contrary to a mentalist model (infant wants to see what the adult is seeing), gaze following was not substantially less in the barrier than in the window and no-barrier conditions (as was the case for 18-month-olds). To examine whether the barriers posed vector projection problems for essentially nonmentalist younger infants, or line-of-sight problems for essentially mentalist younger infants, 3 experiments were conducted. In Experiment 1, a 12-month group was tested in the same 3 conditions to determine if, being younger, they might yield a more clear-cut nonmentalist pattern. Instead, they behaved like Butler et al.'s 14-month-olds. In Experiment 2, a 14-month group was tested in the barrier and window conditions, but now combining pointing with turning. Infants behaved as predicted by the mentalist model: strong responding in the window condition and minimal responding in the barrier condition (where many strained to look inside the partitions). In Experiment 3, an attempt was made to differentiate between mentalist and "geometric" (vector projection) interpretations of the results of Experiment 2 by testing another 14-month group with the adult's eyes closed while pointing. Gaze following now dropped precipitously in the window condition, as did looking inside the solid barriers, indicating (1) that infants in Experiment 2 had not simply been guided to target by an extended arm, but construed it as part of a referential act that was as much visual as gestural, and (2) that by 14 months, infants may have acquired a mentalistic concept of seeing.

9.
Five experiments investigated the importance of shape and object manipulation when 12-month-olds were given the task of individuating objects representing exemplars of kinds in an event-mapping design. In Experiments 1 and 2, results of the study from Xu, Carey, and Quint (2004, Experiment 4) were partially replicated, showing that infants were able to individuate two natural-looking exemplars from different categories, but not two exemplars from the same category. In Experiment 3, infants failed to individuate two shape-similar exemplars (from Pauen, 2002a) from different categories. However, Experiment 4 revealed that allowing infants to manipulate objects shortly before the individuation task enabled them to individuate shape-similar objects from different categories. In Experiment 5, allowing object manipulation did not induce infants to individuate natural-looking objects from the same category. These findings suggest that object manipulation facilitates kind-based individuation of shape-similar objects by 12-month-olds.

10.
Which objects and animals are children willing to accept as referents for words they know? To answer this question, the authors assessed early word comprehension using the preferential looking task. Children were shown 2 stimuli side by side (a target and a distractor) and heard the target stimulus named. The target stimulus was either a typical or an atypical exemplar of the named category. It was predicted that children first connect typical examples with the target name and broaden the extension of the name as they get older to include less typical examples. Experiment 1 shows that when targets are named, 12-month-olds display an increase in target looking for typical but not atypical targets whereas 24-month-olds display an increase for both. Experiment 2 shows that 18-month-olds display a pattern similar to that of 24-month-olds. Implications for the early development of word comprehension are discussed.

11.
Two experiments examined infants' expectations about how an experimenter should distribute resources and rewards to other individuals. In Experiment 1, 19-month-olds expected an experimenter to divide two items equally, as opposed to unequally, between two individuals. The infants held no particular expectation when the individuals were replaced with inanimate objects, or when the experimenter simply removed covers in front of the individuals to reveal the items (instead of distributing them). In Experiment 2, 21-month-olds expected an experimenter to give a reward to each of two individuals when both had worked to complete an assigned chore, but not when one of the individuals had done all the work while the other played. The infants held this expectation only when the experimenter could determine through visual inspection who had worked and who had not. Together, these results provide converging evidence that infants in the 2nd year of life already possess context-sensitive expectations relevant to fairness.

12.
After a 5-minute inspection of 7 objects laid out on a shelf, subjects were seated with the objects behind them and answered questions about the locations and orientations of objects by throwing a switch left or right. The "visual image" subjects were told to imagine that the objects were still in front of them and to respond accordingly. The "real space" (RS) subjects were told to respond in terms of the positions of the objects in real space behind them. Thus correct responses (left vs. right) were completely opposite for the 2 groups. A control group responded while facing a curtain concealing the objects. The task was harder, by time and error criteria, for group RS than for the other 2 groups, but not dramatically so. All RS subjects denied using a response-reversal strategy. Some reported translating the objects from back to front and thus responding as to a mirror-image of the array. When this evasion was discouraged, RS subjects typically reported responding in terms of visual images located behind them and viewed as if by "eyes in the back of the head." The paradox of a visual image that corresponds to no possible visual input is discussed.

13.
This study explored whether infants aged 12 months already recognize the communicative function of pointing gestures. Infants participated in a task requiring them to comprehend an adult's informative pointing gesture to the location of a hidden toy. They mostly succeeded in this task, which required them to infer that the adult was attempting to direct their attention to a location for a reason – because she wanted them to know that a toy was hidden there. Many of the infants also reversed roles and produced appropriate pointing gestures for the adult in this same game, and indeed there was a correlation such that comprehenders were for the most part producers. These findings indicate that by 12 months of age infants are beginning to show a bidirectional understanding of communicative pointing.

14.
The present study investigated the degree to which infants’ use of simultaneous gesture–speech combinations during controlled social interactions predicts later language development. Nineteen infants participated in a declarative pointing task involving three different social conditions: two experimental conditions, (a) available, when the adult was visually attending to the infant but did not attend to the object of reference jointly with the child, and (b) unavailable, when the adult was visually attending to neither the infant nor the object; and (c) a baseline condition, when the adult jointly engaged with the infant’s object of reference. At 12 months of age, measures related to infants’ speech-only productions, pointing-only gestures, and simultaneous pointing–speech combinations were obtained in each of the three social conditions. Each child’s lexical and grammatical output was assessed at 18 months of age through parental report. Results revealed a significant interaction between social condition and type of communicative production. Specifically, only simultaneous pointing–speech combinations increased in frequency during the available condition compared to baseline, while no differences were found for speech-only and pointing-only productions. Moreover, simultaneous pointing–speech combinations in the available condition at 12 months positively correlated with lexical and grammatical development at 18 months of age. The ability to selectively use this multimodal communicative strategy to engage the adult in joint attention by drawing the adult’s attention toward an unseen event or object reveals 12-month-olds’ clear understanding of referential cues that are relevant for language development. This strategy for successfully initiating and maintaining joint attention is related to language development because it increases learning opportunities from social interactions.

15.
An understanding of intentionality is thought to underlie developing joint attention. Similarly, early social-communicative behaviours have been argued to reflect an appreciation of adult intentionality. This study explored the relation between social-communicative behaviours during the still-face effect at 6 months and joint attention at 12 months in a longitudinal sample of 42 infants. Three types of joint attention were investigated: coordinated joint attention (infant alternates looks between an adult and objects), initiating joint attention (infant uses communicative gestures to engage or direct adult attention) and attention following (infant follows an adult's line of gaze and pointing towards an object). The still-face effect was correlated with later attention following, but not coordinated or initiating joint attention. Initiating joint attention was correlated with coordinated joint attention. We propose that the former association reflects a lower-level detection of adult intentionality rather than a higher-level interpretation of an agent's intentions towards outside entities. The findings support two bodies of research – one advocating for a distinction between types of joint attentional ability and a second proposing that infants can detect intentional actions without understanding or attributing mental states to objects.

16.
A pointing gesture creates a referential triangle that incorporates distant objects into the relationship between the signaller and the gesture’s recipient. Pointing was long assumed to be specific to our species. However, recent reports have shown that pointing emerges spontaneously in captive chimpanzees and can be learned by monkeys. Studies have demonstrated that both human children and great apes use manual gestures (e.g. pointing), and visual and vocal signals, to communicate intentionally about out-of-reach objects. Our study looked at how monkeys understand and use their learned pointing behaviour, asking whether it is a conditioned, reinforcement-dependent response or whether monkeys understand it to be a mechanism for manipulating the attention of a partner (e.g. a human). We tested nine baboons that had been trained to exhibit pointing, using operant conditioning. More specifically, we investigated their ability to communicate intentionally about the location of an unreachable food reward in three contexts that differed according to the human partner’s attentional state. In each context, we quantified the frequency of communicative behaviour (auditory and visual signals), including gestures and gaze alternations between the distal food and the human partner. We found that the baboons were able to modulate their manual and visual communicative signals as a function of the experimenter’s attentional state. These findings indicate that monkeys can intentionally produce pointing gestures and understand that a human recipient must be looking at the pointing gesture for them to perform their attention-directing actions. The referential and intentional nature of baboons’ communicative signalling is discussed.

17.
To determine whether infants follow the gaze of adults because they understand the referential nature of looking or because they use the adult turn as a predictive cue for the location of interesting events, the gaze-following behavior of 14- and 18-month-olds was examined in the joint visual attention paradigm under varying visual obstruction conditions: (a) when the experimenter's line of sight was obstructed by opaque screens (screen condition), (b) when the experimenter's view was not obstructed (no-screen condition), and (c) when the opaque screens contained a large transparent window (window condition). It was assumed that infants who simply use adult turns as predictive cues would turn equally in all 3 conditions but infants who comprehend the referential nature of looking would turn maximally when the experimenter's vision was not blocked and minimally when her vision was blocked. Eighteen-month-olds responded in accord with the referential position (turning much more in the no-screen and window conditions than in the screen condition). However, 14-month-olds yielded a mixed response pattern (turning less in the screen than the no-screen condition but turning still less in the window condition). The results suggest that, unlike 18-month-olds, 14-month-olds do not understand the intentional nature of looking and are unclear about the requirements for successful looking.

18.
How do children learn associations between novel words and complex perceptual displays? Using a visual preference procedure, the authors tested 12- and 19-month-olds to see whether the infants would associate a novel word with a complex 2-part object or with either of that object's parts, both of which were potentially objects in their own right and 1 of which was highly salient to infants. At both ages, children's visual fixation times during test were greater to the entire complex object than to the salient part (Experiment 1) or to the less salient part (Experiment 2), when the original label was requested. Looking times to the objects were equal if a new label was requested or if neutral audio was used during training (Experiment 3). Thus, from 12 months of age, infants associate words with whole objects, even those that could potentially be construed as 2 separate objects and even if 1 of the parts is salient.

19.
Fourteen- and 18-month-old infants observed an adult experiencing each of 2 objects (experienced objects) and then leaving the room; the infant then played with a 3rd object while the adult was gone (unexperienced object). The adult interacted with the 2 experienced objects in 1 of 3 ways: by (a) sharing them with the infant in an episode of joint engagement, (b) actively manipulating and inspecting them on his or her own as the infant watched (individual engagement), or (c) looking at them from a distance as the infant played with them (onlooking). As evidenced in a selection task, infants of both ages knew which objects had been experienced by the adult in the joint engagement condition, only the 18-month-olds knew this in the individual engagement condition, and infants at neither age knew this in the onlooking condition. These results suggest that infants are 1st able to determine what adults know (have experienced) on the basis of their direct, triadic engagements with them.

20.
Two experiments systematically examined factors that influence infants’ manual search for hidden objects (N = 96). Experiment 1 used a new procedure to assess infants’ search for partially versus totally occluded objects. Results showed that 8.75-month-old infants solved partial occlusions by removing the occluder and uncovering the object, but these same infants failed to use this skill on total occlusions. Experiment 2 used sound-producing objects to provide a perceptual clue to the objects’ hidden location. Sound clues significantly increased the success rate on total occlusions for 10-month-olds, but not for 8.75-month-olds. An identity development account is offered for why infants succeed on partial occlusions earlier than total occlusions and why sound helps only the older infants. We propose a mechanism for how infants use object identity as a basis for developing a notion of permanence. Implications are drawn for understanding the dissociation between looking time and search assessments of object permanence.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号