Similar Articles
20 similar articles retrieved.
1.
Some researchers have suggested that infants' ability to reason about goals develops as a result of their experiences with human agents and is then gradually extended to other agents. Other researchers have proposed that goal attribution is rooted in a specialized system of reasoning that is activated whenever infants encounter entities with appropriate features (e.g., self-propulsion). The first view predicts that young infants should attribute goals to human but not other agents; the second view predicts that young infants should attribute goals to both human and nonhuman agents. The present research revealed that 5-month-old infants (the youngest found thus far to attribute goals to human agents) also attribute goals to nonhuman agents. In two experiments, infants interpreted the actions of a self-propelled box as goal-directed. These results provide support for the view that from an early age, infants attribute goals to any entity they identify as an agent.

2.
We contrast two positions concerning the initial domain of actions that infants interpret as goal-directed. The 'narrow scope' view holds that goal-attribution in 6- and 9-month-olds is restricted to highly familiar actions (such as grasping). The cue-based approach of the infant's 'teleological stance', however, predicts that if the cues of equifinal variation of action and a salient action effect are present, young infants can attribute goals to a 'wide scope' of entities, including unfamiliar human actions and actions of novel objects lacking human features. It is argued that previous failures to show goal-attribution to unfamiliar actions were due to the absence of these cues. We report a modified replication of Woodward (1999) showing that when a salient action-effect is presented, even young infants can attribute a goal to an unfamiliar manual action. This study, together with other recent experiments reviewed here, supports the 'wide scope' approach, indicating that if the cues of goal-directedness are present, even 6-month-olds attribute goals to unfamiliar actions.

3.
Previous research indicates that infants’ prediction of the goals of observed actions is influenced by their own experience with the type of agent performing the action (i.e., human hand vs. non-human agent) as well as by action-relevant features of goal objects (e.g., object size). The present study investigated the combined effects of these factors on 12-month-olds’ action prediction. Infants’ (N = 49) goal-directed gaze shifts were recorded as they observed 14 trials in which either a human hand or a mechanical claw reached for a small goal area (low-saliency goal) or a large goal area (high-saliency goal). Only infants who had observed the human hand reaching for a high-saliency goal fixated the goal object ahead of time, and they rapidly learned to predict the action goal across trials. By contrast, infants in all other conditions did not track the observed action in a predictive manner, and their gaze shifts to the action goal did not change systematically across trials. Thus, high-saliency goals seem to boost infants’ predictive gaze shifts during the observation of human manual actions, but not of actions performed by a mechanical device. This supports the assumption that infants’ action predictions are based on interactive effects of action-relevant object features (e.g., size) and their own action experience.

4.
Goal attribution to inanimate agents by 6.5-month-old infants
Csibra G. Cognition, 2008, 107(2): 705-717
Human infants' tendency to attribute goals to observed actions may help us to understand where people's obsession with goals originates from. While one-year-old infants liberally interpret the behaviour of many kinds of agents as goal-directed, a recent report [Kamewari, K., Kato, M., Kanda, T., Ishiguro, H., & Hiraki, K. (2005). Six-and-a-half-month-old children positively attribute goals to human action and to humanoid-robot motion. Cognitive Development, 20, 303-320] suggested that younger infants restrict goal attribution to humans and human-like creatures. The present experiment tested whether 6.5-month-old infants would be willing to attribute a goal to a moving inanimate box if it slightly varied its goal approach within the range of the available efficient actions. The results were positive, demonstrating that featural identification of agents is not a necessary precondition of goal attribution in young infants and that the single most important behavioural cue for identifying a goal-directed agent is variability of behaviour. This result supports the view that the bias to give teleological interpretation to actions is not entirely derived from infants' experience.

5.
Kamewari K, Kato M, Kanda T, Ishiguro H, Hiraki K. Cognitive Development, 2005, 20(2): 303-320
Recent infant studies indicate that goal attribution (understanding of goal-directed action) is present very early in infancy. We examined whether 6.5-month-olds attribute goals to agents and whether infants change the interpretation of goal-directed action according to the kind of agent. We conducted three experiments using the visual habituation paradigm. In Experiment 1, we investigated whether 6.5-month-olds attribute goals to human action. In Experiment 2, we investigated whether 6.5-month-olds attribute goals to humanoid-robot motion. In Experiment 3, we tested whether infants attribute goals to a moving box. The agent used in Experiment 3 had no human-like appearance. The results of the three experiments show that infants positively attribute goals to both human action (Experiment 1) and humanoid motion (Experiment 2) but not to a moving box (Experiment 3). These results suggest that 6.5-month-olds tend to interpret certain actions in terms of goals, that their reasoning about these actions is based on a sophisticated teleological representation, and that the human-like appearance of agents may influence this teleological reasoning in early infancy.

6.
The current study distinguishes between attributions of goal-directed perception (i.e., attention) and non-goal-directed perception to examine 9-month-olds' interpretation of others' head and eye turns. In a looking time task, 9-month-olds encoded the relationship between an actor's head and eye turns and a target object if the head and eye turns were embedded in a sequence of multiple, variable actions with equifinal outcomes, but not otherwise. This evidence supports the claim that infants of this age may attribute perception, at least goal-directed perception, to others and undermines arguments that gaze-following at this age consists only of uninterpreted reflexes. The evidence also suggests alternative interpretations of the typical errors infants make in standard gaze-following procedures. Implications for infants' understanding of perception and attention in both human and non-human agents are discussed.

7.
Recent investigations of early psychological understanding have revealed three key findings. First, young infants attribute goals and dispositions to any entity they perceive as an agent, whether human or non-human. Second, when interpreting an agent's actions in a scene, young infants take into account the agent's representation of the scene, even if this representation is less complete than their own. Third, at least by the second year of life, infants recognize that agents can hold false beliefs about a scene. Together, these findings support a system-based, mentalistic account of early psychological reasoning.

8.
It is now widely accepted that sensitivity to goal-directed actions emerges during the first year of life. However, controversy still surrounds the question of how this sensitivity emerges and develops. One set of views emphasizes the role of observing behavioral cues, while another emphasizes the role of experience with producing one's own actions. In a series of four experiments we contrast these two views. In Experiment 1, it was shown that infants as young as 6 months old can interpret an unfamiliar human action as goal-directed when the action involves equifinal variations. Experiments 2 and 3 demonstrated that 12- and 9-month-olds are also able to attribute goals to an inanimate action if it displays behavioral cues such as self-propelledness and an action-effect. In Experiment 4, we found that even 6-month-olds can encode the goal object of an inanimate action if all three cues (equifinality, self-propelledness, and an action-effect) were present. These findings suggest that the ability to ascribe goal-directedness does not necessarily emerge from hands-on experience with particular actions and that it is independent of the specific appearance of the actor as long as sufficient behavioral cues are available. We propose a cue-based bootstrapping model in which an initial sensitivity to behavioral cues leads to learning about further cues. The further cues in turn inform about different kinds of goal-directed agents and about different types of actions. By uniting an innate base with a learning process, cue-based bootstrapping can help reconcile divergent views on the emergence of infants' ability to understand actions as goal-directed.

9.
Twelve-month-old infants attribute goals to both familiar, human agents and unfamiliar, non-human agents. They also attribute goal-directedness to both familiar actions and unfamiliar ones. Four conditions examined the information 12-month-olds use to determine which actions of an unfamiliar agent are goal-directed. Infants who witnessed the agent interact contingently with a human confederate encoded the agent's actions as goal-directed; infants who saw a human confederate model an intentional stance toward the agent without the agent's participation did not. Infants who witnessed the agent align itself with one of two potential targets before approaching that target encoded the approach as goal-directed; infants who did not observe the self-alignment did not encode the approach as goal-directed. A possible common underpinning of these two seemingly independent sources of information is discussed.

10.
The present research examined whether 9.5-month-old infants can attribute to an agent a disposition to perform a particular action on objects, and can then use this disposition to predict which of two new objects - one that can be used to perform the action and one that cannot - the agent is likely to reach for next. The infants first received familiarization trials in which they watched an agent slide either three (Experiments 1 and 3) or six (Experiment 2) different objects forward and backward on an apparatus floor. During test, the infants saw two new identical objects placed side by side: one stood inside a short frame that left little room for sliding, and the other stood inside a longer frame that left ample room for sliding. The infants who saw the agent slide six different objects attributed to her a disposition to slide objects: they expected her to select the "slidable" as opposed to the "unslidable" test object, and they looked reliably longer when she did not. In contrast, the infants who saw the agent slide only three different objects looked about equally when she selected either test object. These results add to recent evidence that infants in the first year of life can attribute dispositions to agents, and can use these dispositions to help predict agents' actions in new contexts.

11.
To explore 10-month-old infants' abilities to engage in intentional imitation, infants watched a human agent, a non-human agent (a stuffed animal), and a surrogate object (mechanical pincers) model actions on objects. The tendency of infants to perform the target act was compared in several situations: (a) after test items were manipulated but the target action was not shown, (b) after the target act was demonstrated successfully, and (c) after the target act was demonstrated unsuccessfully. Although infants imitated the successful actions of human and non-human agents, they completed the unsuccessful actions of humans only. Infants did not respond differentially toward the surrogate object. These findings suggest that although infants may mimic the actions of human and non-human agents, they only engage in intentional imitation with people.

12.
The present research investigated whether six-month-olds who rarely produce pointing actions can detect the object-directedness and communicative function of others’ pointing actions when linguistic information is provided. In Experiment 1, infants were randomly assigned to either a novel-word or emotional-vocalization condition. They were first familiarized with an event in which an actor uttered either a novel label (novel-word condition) or exclamatory expression (emotional-vocalization condition) and then pointed to one of two objects. Next, the positions of the objects were switched. During test trials, each infant watched the new-referent event where the actor pointed to the object to which the actor had not pointed before or the old-referent event where the actor pointed to the old object in its new location. Infants in the novel-word condition looked reliably longer at the new-referent event than at the old-referent event, suggesting that they encoded the object-directedness of the actor’s point. In contrast, infants in the emotional-vocalization condition showed roughly equal looking times to the two events. To further examine infants’ understanding of the communicative aspect of an actor’s point using a different communicative context, Experiment 2 used an identical procedure to the novel-word condition in Experiment 1, except there was only one object present during the familiarization trials. When the familiarization trials did not include a contrasting object, we found that the communicative intention of the actor’s point could be ambiguous. The infants showed roughly equal looking times during the two test events. The current research suggests that six-month-olds understand the object-directedness and communicative intention of others’ pointing when presented with a label, but not when presented with an emotional non-speech vocalization.

13.
Influential developmental theories claim that infants rely on goals when visually anticipating actions. A widely noticed study suggested that 11-month-olds anticipate that a hand continues to grasp the same object even when it swapped position with another object (Cannon, E., & Woodward, A. L. (2012). Infants generate goal-based action predictions. Developmental Science, 15, 292–298). Yet, other studies found such flexible goal-directed anticipations only from later ages on. Given the theoretical relevance of this phenomenon and given these contradicting findings, the current work investigated, in two different studies and labs, whether infants indeed flexibly anticipate an action goal. Study 1 (N = 144) investigated, by means of five experiments, under which circumstances (e.g., animated agent, human agent) 12-month-olds show flexible goal anticipation abilities. Study 2 (N = 104) presented 11-month-olds, 32-month-olds, and adults with both a human grasping action and a non-human action. In none of the experiments did infants flexibly anticipate the action based on the goal; instead, they anticipated based on the movement path, irrespective of the type of agent. Although one experiment contained a direct replication of Cannon and Woodward (2012), we were not able to replicate their findings. Overall, our work challenges the view that infants are able to flexibly anticipate action goals from early on and suggests that they instead rely on movement patterns when processing others’ actions.

14.
The ability to determine how many objects are involved in physical events is fundamental for reasoning about the world that surrounds us. Previous studies suggest that infants can fail to individuate objects in ambiguous occlusion events until their first birthday and that learning words for the objects may play a crucial role in the development of this ability. The present eye-tracking study tested whether the classical object individuation experiments underestimate young infants’ ability to individuate objects and what role word learning plays in this process. Three groups of 6-month-old infants (N = 72) saw two opaque boxes side by side on the eye-tracker screen so that the content of the boxes was not visible. During a familiarization phase, two visually identical objects emerged sequentially from one box and two visually different objects from the other box. For one group of infants the familiarization was silent (Visual Only condition). For a second group of infants the objects were accompanied by nonsense words, so that the objects’ shapes and linguistic labels indicated the same number of objects in the two boxes (Visual & Language condition). For the third group of infants, the objects’ shapes and linguistic labels were in conflict (Visual vs. Language condition). Following the familiarization, it was revealed that both boxes contained the same number of objects (e.g., one or two). In the Visual Only condition, infants looked longer at the box with the incorrect number of objects at test, showing that they could individuate objects using visual cues alone. In the Visual & Language condition infants showed the same looking pattern. However, in the Visual vs. Language condition infants looked longer at the box with the incorrect number of objects according to the linguistic labels. The results show that infants can individuate objects in a complex object individuation paradigm considerably earlier than previously thought and that linguistic cues impose their own preference in object individuation. The results are consistent with the idea that when language and visual information are in conflict, language can exert an influence on how young infants reason about the visual world.

15.
Cognition, 2013, 129(2): 309-327
Infants and adults are thought to infer the goals of observed actions by calculating the actions’ efficiency as a means to particular external effects, like reaching an object or location. However, many intentional actions lack an external effect or external goal (e.g. dance). We show that for these actions, adults infer that the agents’ goal is to produce the movements themselves: Movements are seen as the intended outcome, not just a means to an end. We test what drives observers to infer such movement-based goals, hypothesizing that observers infer movement-based goals to explain actions that are clearly intentional, but are not an efficient means to any plausible external goal. In three experiments, we separately manipulate intentionality and efficiency, equating for movement trajectory, perceptual features, and external effects. We find that participants only infer movement-based goals when the actions are intentional and are not an efficient means to external goals. Thus, participants appear to infer that movements are the goal in order to explain otherwise mysterious intentional actions. These findings expand models of goal inference to account for intentional yet ‘irrational’ actions, and suggest a novel explanation for overimitation as emulation of movement-based goals.

16.
Two experiments investigated infants’ sensitivity to familiar size as information for the distances of objects with which they had had only brief experience. Each experiment had two phases: a familiarization phase and a test phase. During the familiarization phase, the infant played with a pair of different-sized objects for 10 min. During the test phase, a pair of objects, identical to those seen in the familiarization phase but now equal in size, were presented to the infant at a fixed distance under monocular or binocular viewing conditions. In the test phase of Experiment 1, 7-month-old infants viewing the objects monocularly showed a significant preference to reach for the object that resembled the smaller object in the familiarization phase. Seven-month-old infants in the binocular viewing condition reached equally to the two test phase objects. These results indicate that, in the monocular condition, the 7-month-olds used knowledge about the objects’ sizes, acquired during the familiarization phase, to perceive distance from the test objects’ visual angles, and that they reached preferentially for the apparently nearer object. The lack of a reaching preference in the binocular condition rules out interpretations of the results not based on the objects’ perceived distances. The results, therefore, indicate that 7-month-old infants can use memory to mediate spatial perception. The implications of this finding for the debate between direct and indirect theories of visual perception are discussed. In the test phase of Experiment 2, 5-month-old infants viewing the objects monocularly showed no reaching preference. These infants, therefore, showed no evidence of sensitivity to familiar size as distance information.

17.
Twelve-month-old infants interpret action in context
Two experiments assessed infants' understanding that actions that occur in sequence may be related to an overarching goal. Experiment 1 tested whether embedding an ambiguous action (touching the lid of a box) in a sequence that culminated with an action infants readily construe as goal-directed (grasping a toy inside the box) would alter infants' construal of the ambiguous action. Having seen the ambiguous action in this context, infants later construed this action in isolation as being directed at the toy within the box. Experiment 2 tested whether infants related the two actions on the basis of the temporal or the causal relation between them. When the causal relation was disrupted but the temporal relation was preserved, infants no longer related the two actions. These findings indicate that 12-month-old infants relate single actions to overarching goals and that they do so by construing goal-directed action in a causal framework.

18.
Infants engage in social interactions that include multiple partners from very early in development. A growing body of research shows that infants visually predict the outcomes of an individual’s intentional actions, such as a person reaching towards an object (e.g., Krogh-Jespersen & Woodward, 2014), and even show sophistication in their predictions regarding failed actions (e.g., Brandone, Horwitz, Aslin, & Wellman, 2014). Less is known about infants’ understanding of actions involving more than one individual (e.g., collaborative actions), which require representing each partner’s actions in light of the shared goal. Using eye-tracking, Study 1 examined whether 14-month-old infants visually predict the actions of an individual based on her previously shared goal. Infants viewed videos of two women engaged in either a collaborative or noncollaborative interaction. At test, only one woman was present and infants’ visual predictions regarding her future actions were measured. Fourteen-month-olds anticipated an individual’s future actions based on her past collaborative behavior. Study 2 revealed that 11-month-old infants only visually predict higher-order shared goals after engaging in a collaborative intervention. Together, our results indicate that by the second year after birth, infants perceive others’ collaborative actions as structured by shared goals and that active engagement in collaboration strengthens this understanding in young infants.

19.
Olofson EL, Baldwin D. Cognition, 2011, 118(2): 258-264
We investigated infants’ ability to recognize the similarity between observed and implied goals when actions differed in surface-level motion details. In two experiments, 10- to 12-month-olds were habituated to an actor manipulating an object and then shown test actions in which the actor contacted the object with a novel hand configuration that implied a goal either similar or dissimilar to the habituation event. Infants in both experiments looked significantly longer at test actions depicting a novel implied goal, suggesting that infants glossed over some surface-level motion details and compared implied goals.

20.
Behavioral research has shown that infants use both behavioral cues and verbal cues when processing the goals of others’ actions. For instance, 18-month-olds selectively imitate an observed goal-directed action depending on its (in)congruence with a model’s previous verbal announcement of a desired action goal. This EEG study analyzed the electrophysiological underpinnings of these behavioral findings at the two functional levels of conceptual action processing and motor activation. Mid-latency mean negative ERP amplitude and mu-frequency band power were analyzed while 18-month-olds (N = 38) watched videos of an adult who performed one of two potential actions on a novel object. In a within-subjects design, the action demonstration was preceded by either a congruent or an incongruent verbally announced action goal (e.g., “up” or “down” paired with an upward movement). Overall, ERP negativity did not differ between conditions, but a closer inspection revealed that in two subgroups, about half of the infants showed a broadly distributed increased mid-latency ERP negativity (indicating enhanced conceptual action processing) for either the congruent or the incongruent stimuli, respectively. As expected, mu power at sensorimotor sites was reduced (indicating enhanced motor activation) for congruent relative to incongruent stimuli in the entire sample. Both EEG correlates were related to infants’ language skills. Hence, 18-month-olds integrate action-goal-related verbal cues into their processing of others’ actions at the functional levels of both conceptual processing and motor activation. Further, cue integration when inferring others’ action goals is related to infants’ language proficiency.
