Similar Literature
A total of 20 similar articles were found.
1.
Human toddlers demonstrate striking failures when searching for hidden objects that interact with other objects, yet successfully locate hidden objects that do not undergo mechanical interactions. This pattern hints at a developmental dissociation between contact-mechanical and spatiotemporal knowledge. Recent studies suggest that adult non-human primates may exhibit a similar dissociation. Here, I provide the first direct test of this dissociation using a search paradigm with adult rhesus monkeys. Subjects watched as a plum rolled behind one of two opaque barriers. In Experiment 1, subjects had to locate the plum based on the position of a wall that blocked the plum's trajectory. Subjects searched incorrectly, apparently neglecting information about the location of the wall. However, subjects searched correctly in Experiments 2-4 when they were given spatiotemporal information about the plum's movement. Results indicate that adult monkeys use spatiotemporal information, but not contact-mechanical information, to locate hidden objects. This dissociation between contact-mechanical and spatiotemporal knowledge is discussed in light of developmental theories of core knowledge and the literature on object-based attention in human adults.

2.
Infants as young as 5 months of age view familiar actions such as reaching as goal-directed (Woodward, 1998), but how do they construe the goal of an actor's reach? Six experiments investigated whether 12-month-old infants represent reaching actions as directed to a particular individual object, to a narrowly defined object category (e.g., an orange dump truck), or to a more broadly defined object category (e.g., any truck, vehicle, artifact, or inanimate object). The experiments provide evidence that infants are predisposed to represent reaching actions as directed to categories of objects at least as broad as the basic level, both when the objects represent artifacts (trucks) and when they represent people (dolls). Infants do not use either narrower category information or spatiotemporal information to specify goal objects. Because spatiotemporal information is central to infants' representations of inanimate object motions and interactions, the findings are discussed in relation to the development of object knowledge and action representations.

3.
Infants younger than 11.5 months typically fail in event-mapping tasks with complex event sequences, yet succeed when the event sequences are made very simple and brief. The present research explored whether younger infants might succeed at mapping complex event sequences if they were given information to help them organize and structure the event. Three experiments were conducted with 7.5-month-olds. In all of the experiments, the infants were shown a two-phase test event. In the first phase, infants saw a box–ball occlusion sequence in which the objects emerged at least once to each side of the screen, reversing direction each time to return behind the screen. In the second phase, infants saw a one-ball display. Prior to the test trials, infants were shown an “outline” of the test event that contained the basic components of the event. The experiments varied in (a) the kind of information included in the event outline and (b) the complexity of the box–ball test sequence (i.e., the number of object reversals). The results revealed that the 7.5-month-olds benefitted from viewing an event outline, although the performance of the males was more robust than that of the females. These results add to a growing body of research indicating that young infants can succeed on event-mapping tasks under more supportive conditions and provide insight into why event mapping is such a difficult task for young infants.

4.
Responses of 4-month-old infants to hidden people and objects were investigated with equated task demands. Twenty-one 4-month-old infants were administered a combined task, in which they were shown a sounding stimulus that continued to sound after hiding; an auditory task, in which sound was the only source of information about the position of the object in space; and a vision task, in which a silent stimulus was shown to the infants prior to hiding. Five infant behaviours were coded: reaching, gazing, body movements, vocalizations and smiles. The infants reached significantly more for hidden objects than for people, to whom they vocalized instead. They also smiled and moved their bodies more towards their invisible mother than towards the other stimuli. Thus infants responded differentially to people and objects whether or not the stimuli were soundless (so that there was no cue to their presence). This suggested that infants appreciated (a) that an object had been hidden; (b) that this object was either animate or inanimate; and (c) that different procedures were appropriate for retrieving, or for interacting with, animate and inanimate objects. Discussion centres on the underlying representational system that allows for such appreciation.

5.
Goldstein and Gigerenzer's (2002) "Recognition Heuristic" (RH) was tested for its empirical validity in an experimental paradigm with induced recognition of objects. The RH claims that, when inferring which of two objects (e.g., cities) scores higher on a criterion (e.g., city size), a recognized object will be chosen over an unrecognized one if recognition is a valid predictor of the criterion, without considering additional object information. To avoid potential shortcomings of earlier studies, we (a) used the city population task, (b) provided additional cue information only for recognized cities, and (c) had participants draw inferences from memory. Participants learned city names and additional information about some cities. They also learned that recognition and the additional information were valid predictors of the criterion "city size". In a subsequent decision phase, the additional information about the cities in memory strongly affected the inferences, suggesting that recognition information is clearly integrated into judgments, but by no means in a non-compensatory fashion that would dominate every other cue.
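For readers less familiar with the Recognition Heuristic, the decision rule at issue can be sketched in a few lines of code. This is a minimal illustrative sketch, not material from the study: the function names and the cue representation are hypothetical, and the compensatory rule shown for contrast is only one of several ways the reported cue integration could work.

```python
# Minimal sketch of the Recognition Heuristic (RH) for a two-alternative
# city-size comparison. Names and data structures are illustrative only;
# they are not taken from the study's materials.

def recognition_heuristic(recognized_a: bool, recognized_b: bool) -> str:
    """Non-compensatory RH: if exactly one object is recognized,
    infer that it scores higher on the criterion; otherwise guess."""
    if recognized_a and not recognized_b:
        return "A"
    if recognized_b and not recognized_a:
        return "B"
    return "guess"  # both or neither recognized: the RH does not apply


def compensatory_choice(recognized_a, recognized_b, cues_a, cues_b):
    """Compensatory alternative suggested by the results: recognition is
    one cue among others and can be outweighed by additional knowledge
    about a recognized city (cue values are hypothetical +/- scores)."""
    score_a = int(recognized_a) + sum(cues_a)
    score_b = int(recognized_b) + sum(cues_b)
    if score_a == score_b:
        return "guess"
    return "A" if score_a > score_b else "B"


if __name__ == "__main__":
    # City A is recognized but carries negative additional cues;
    # City B is unrecognized. The RH picks A; the compensatory rule picks B.
    print(recognition_heuristic(True, False))                # -> "A"
    print(compensatory_choice(True, False, [-1, -1], [0]))   # -> "B"
```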

6.
7.
The current study looked at two theoretical proposals explaining toddlers’ abilities to use cue information for recovering a hidden object that had rolled down a ramp behind an occluding screen. These two approaches, the theory of object-directed attention and a landmark-based account, make different predictions regarding the efficacy of an obliquely aligned cue to object position. Accordingly, the searches by forty 24-month-olds, forty-two 30-month-olds, and forty-one 36-month-olds for a hidden toy that was cued using either a short versus a long cue, or a vertically aligned versus an obliquely aligned cue, were compared. Analyses of search accuracy revealed that children were more successful when faced with short as opposed to long cues, and when using vertical as opposed to oblique cues. These findings support a landmark-based approach, as opposed to an object-directed attention account, and are discussed with reference to their implications for spatial orientation more generally.

8.
When making decisions, animals can rely on information stored in memory and/or on information available through perceptual processes. Under some circumstances, perceptual access to a relevant piece of information can be lost, as when a prey hides under a cover. If this piece of information is critical, the animal must be able to keep it active in working memory until the final decision is made. Species endowed with object permanence can, to a certain extent, overcome such a lack of perceptual access. Numerous studies have investigated object permanence in animals, but no study has systematically examined the interaction, when making a decision, between information directly available through perception and information that can no longer be perceived. In the present study, domestic cats (Felis catus) were administered a progressive elimination task in which they had to visit and deplete either two visible targets and one hidden target (Experiments 1 and 2) or one visible target and two hidden targets (Experiments 3 and 4). The cats were brought back to the starting point after each visit to any target, whether that target had been previously visited or not. The results revealed that the cats searched at the visible target(s) first and at the hidden target(s) last, which was referred to as the visibility rule. The results also revealed that the position of the distinct bowl (e.g., the visible bowl when the two other bowls were hidden, and vice versa) influenced the way this cognitive rule was implemented. More specifically, when the intermediate bowl was distinct, the visibility rule was readily implemented, but when either the right or the left bowl was distinct, the visibility rule was violated; that is, the cats no longer chose the visible target(s) first. The visibility rule was interpreted in terms of optimization principles, and the external distinct target effect was interpreted in terms of divided attention and lateralization.

9.
We investigated whether 6-year-olds’ understanding of perceptual aspectuality was sufficiently robust to deal with the presence of irrelevant information. A total of 32 children chose whether to look or feel to locate a specific object (identifiable by sight or touch) from four objects that were hidden. In half of the trials, the objects were different on only one modality (e.g., four objects that felt different but were the same color). In the remainder of the trials, the objects also differed (partially) on one irrelevant modality (e.g., four objects that felt different, two red and two blue, where the goal was to locate the soft object). Performance was worse on the latter trials. We discuss children’s difficulty in dealing with irrelevant information.

10.
Infants' ability to represent objects has received significant attention from the developmental research community. With the advent of eye-tracking technology, detailed analysis of infants' looking patterns during object occlusion has revealed much about the nature of infants' representations. The current study continues this research by analyzing infants' looking patterns in a novel manner and by comparing infants' looking at a simple display in which a single three-dimensional (3D) object moves along a continuous trajectory to a more complex display in which two 3D objects undergo trajectories that are interrupted behind an occluder. Six-month-old infants saw an occlusion sequence in which a ball moved along a linear path and disappeared behind a rectangular screen, and then a ball (ball-ball event) or a box (ball-box event) emerged at the other edge. An eye-tracking system recorded infants' eye movements during the event sequence. Results from examination of infants' attention to the occluder indicate that during the occlusion interval infants looked longer to the side of the occluder behind which the moving occluded object was located, shifting gaze from one side of the occluder to the other as the object(s) moved behind the screen. Furthermore, when events included two objects, infants attended to the spatiotemporal coordinates of the objects longer than when a single object was involved. These results provide clear evidence that infants' visual tracking is different in response to a one-object display than to a two-object display. Furthermore, this finding suggests that infants may require more focused attention to the hidden position of objects in more complex multiple-object displays and provides additional evidence that infants represent the spatial location of moving occluded objects.

11.
The present research examined two alternative interpretations of violation-of-expectation findings that young infants can represent hidden objects. One interpretation is that, when watching an event in which an object becomes hidden behind another object, infants form a prediction about the event's outcome while both objects are still visible, and then check whether this prediction was accurate. The other interpretation is that infants' initial representations of hidden objects are weak and short-lived and as such sufficient for success in most violation-of-expectation tasks (as objects are typically hidden for only a few seconds at a time), but not more challenging tasks. Five-month-old infants succeeded in reasoning about the interaction of a visible and a hidden object even though (1) the two objects were never simultaneously visible, and (2) a 3- or 4-min delay preceded the test trials. These results provide evidence for robust representations of hidden objects in young infants.

12.
Cognitive Development, 1994, 9(2), 193-209
Within a small bounded space, the location of a hidden object can be coded in terms of distance information, the general area of hiding, or the boundary of the space. The use of these three coding strategies by 6.5-month-old infants was examined using a visual search task. Infants watched as an object was hidden at one of four identical locations. After a short delay (10 s), the object either reappeared at the location where it was hidden (possible event) or reappeared at one of the other three locations (impossible event). Looking behavior was not systematically influenced by the distance the object moved from the original hiding location or by whether the object was hidden near a boundary. Infants did not appear to code the location of a hidden object in terms of distance information, the general area of hiding, or whether it was hidden at a boundary. However, the location of reappearance (i.e., the impossible event) did influence looking times. Infants were surprised when the object reappeared at a boundary position that was previously unoccupied. They were not surprised when the object reappeared at a central location. Thus, two factors influenced coding of location: boundary information (but in a different way than specified) and the nature of the change (absence vs. presence of an object). The influence of these two factors on coding of spatial information is discussed.

13.
In risky decision situations, many decision makers search for risk-defusing operators (RDOs). An RDO is an action that the decision maker intends to perform in addition to a specific alternative and that is expected to decrease the risk. Pre-event RDOs (e.g., vaccination) have to be applied before a negative event (e.g., infection) occurs. Post-event RDOs do not need to be initiated unless and until the event happens (e.g., medical treatment). For the successful application of post-event RDOs, the negative event must be detected in time. Two experiments investigated the effect of uncertainty in the detection of the negative event. In Experiment 1, only a small minority of subjects noted this uncertainty without a cue, and even with a cue, only a minority actively searched for probability information. In Experiment 2, the probability of correctly detecting the negative event was varied. When detection was certain, most subjects chose the alternative with a post-event RDO, whereas this percentage decreased significantly with decreasing probability of correct detection. Also, in the conditions with a more extreme negative outcome, fewer decision makers chose the alternative with the post-event RDO.
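To make the role of detection probability concrete, here is a minimal expected-loss sketch. It is not a model from the paper: the payoff numbers, probabilities, and function name are hypothetical, chosen only to illustrate why a post-event RDO loses its appeal as detection becomes less reliable.

```python
# Illustrative expected-loss sketch for a post-event RDO (hypothetical
# numbers, not the study's materials): the RDO only helps if the negative
# event is detected in time, so its expected benefit scales with p_detect.

def expected_loss(p_event: float, p_detect: float,
                  loss: float, loss_defused: float) -> float:
    """Expected loss for an alternative that carries a post-event RDO.

    p_event      -- probability that the negative event occurs
    p_detect     -- probability of detecting the event in time
    loss         -- loss if the event occurs and is not defused
    loss_defused -- (smaller) loss if the RDO is applied successfully
    """
    return p_event * (p_detect * loss_defused + (1 - p_detect) * loss)


if __name__ == "__main__":
    # With certain detection the post-event RDO limits the loss per event
    # to loss_defused; as detection becomes less reliable, the expected
    # loss rises towards that of having no RDO at all, mirroring the drop
    # in preference observed in Experiment 2.
    for p in (1.0, 0.8, 0.5, 0.2):
        print(p, expected_loss(p_event=0.3, p_detect=p,
                               loss=100.0, loss_defused=10.0))
```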

14.
In two experiments, we investigated whether 4- to 5-year-old children's ability to demonstrate their understanding of aspectuality was influenced by how the test question was phrased. In Experiment 1, 60 children chose whether to look or feel to gain information about a hidden object (identifiable by sight or touch). Test questions referred to the perceptual aspect of the hidden object (e.g., whether it was red or blue), the modality dimension (e.g., what colour it was), or the object's identity (e.g., which one it was). Children who heard the identity question performed worse than those who heard the aspect or dimension question. Further investigation in Experiment 2 (N = 23) established that children's difficulty with the identity question was not due to a problem recalling the objects. We discuss how the results of these methodological investigations bear on researchers’ assessment of the development of aspectuality understanding.

15.
In the process of searching for targets, our visual system not only prioritizes target-relevant features, but can also suppress nontarget-related features. Although this template for rejection has been well demonstrated, whether the features (i.e. the objects) or the locations are suppressed remains unresolved because of the experimental paradigms used in previous studies: in particular, object-based templates for rejection were confounded with location-based inhibition in visual search paradigms. The present study examined an object-based template for rejection by introducing search arrays composed of two overlapping shapes with search items distributed along the shapes' contours. To discourage location-based inhibition, the two shapes were spatially intermingled (Experiment 1), rotated (Experiment 2), or jiggled (Experiment 3). Participants identified the colour of a target cross. The pre-cue indicated the shape in which the target would appear (positive cue condition), the shape in which only distractors would appear (negative cue condition), or a shape that was irrelevant to the current search array (neutral cue condition). In all three experiments, reaction times for the negative cue condition were shorter than those for the neutral cue condition, which is a hallmark of the object-based template-for-rejection effect, even under conditions in which location-based inhibition was discouraged.

16.
The appearance and disappearance of an object in the visual field is accompanied by changes to multiple visual features at the object's location. When features at a location change asynchronously, the cue of common onset and offset becomes unreliable, with observers tending to report the most recent pairing of features. Here, we use these last feature reports to study the conditions that lead to a new object representation rather than an update to an existing representation. Experiments 1 and 2 establish that last feature reports predominate in asynchronous displays when feature durations are brief. Experiments 3 and 4 demonstrate that these reports also are critically influenced by whether features can be grouped using nontemporal cues such as common shape or location. The results are interpreted within the object-updating framework (Enns, Lleras, & Moore, 2010), which proposes that human vision is biased to represent a rapid image sequence as one or more objects changing over time.

17.
Children often extend names to novel artifacts on the basis of overall shape rather than core properties (e.g., function). This bias is claimed to reflect the fact that nonrandom structure is a reliable cue to an object having a specific designed function. In this article, we show that information about an object's design (i.e., about its creator's intentions) is neither necessary nor sufficient for children to override the shape bias. Children extend names on the basis of any information specifying the artifact's function (e.g., information about design, current use, or possible use), especially when this information is made salient when candidate objects for extension are introduced. Possible mechanisms via which children come to rely less on easily observable cues (e.g., shape) and more on core properties (e.g., function) are discussed.

18.
Objects are rarely viewed in isolation, and so how they are perceived is influenced by the context in which they are viewed and their interaction with other objects (e.g., whether objects are colocated for action). We investigated the combined effects of action relations and scene context on an object decision task. Experiment 1 investigated whether the benefit for positioning objects so that they interact is enhanced when objects are viewed within contextually congruent scenes. The results indicated that scene context influenced perception of nonaction-related objects (e.g., monitor and keyboard), but had no effect on responses to action-related objects (e.g., bottle and glass) that were processed more rapidly. In Experiment 2, we reduced the saliency of the object stimuli and found that, under these circumstances, scene context influenced responses to action-related objects. We discuss the data in terms of relatively late effects of scene processing on object perception.

19.
There has been some debate about whether infants 10 months and younger can use featural information to individuate objects. The present research tested the hypothesis that negative results obtained with younger infants reflect limitations in information-processing capacities rather than an inability to individuate objects based on featural differences. Infants aged 9.5 months saw one object (i.e. a ball) or two objects (i.e. a box and a ball) emerge successively to opposite sides of an opaque occluder. Infants then saw a single ball either behind a transparent occluder or without an occluder. Only the infants who saw the ball behind the transparent occluder correctly judged that the one-ball display was inconsistent with the box-ball sequence. These results (a) suggest that infants categorize events involving opaque and transparent occluders as the same kind of physical situation (i.e. occlusion) and (b) support the notion that infants are more likely to give evidence of object individuation when they need to reason about one kind of event (i.e. occlusion) than when they must retrieve and compare categorically distinct events (i.e. occlusion and no-occlusion).

20.
Previous studies have shown that attention is drawn to the location of manipulable objects and is distributed across pairs of objects that are positioned for action. Here, we investigate whether central, action-related objects can cue attention to peripheral targets. Experiment 1 compared the effect of uninformative arrow and object cues on a letter discrimination task. Arrow cues led to spatial-cueing benefits across a range of stimulus onset asynchronies (SOAs: 0 ms, 120 ms, 400 ms), but object-cueing benefits were slow to build and were only significant at the 400-ms SOA. Similar results were found in Experiment 2, in which the targets were objects that could be either congruent or incongruent with the cue (e.g., screwdriver and screw versus screwdriver and glass). Cueing benefits were not influenced by the congruence between the cue and target, suggesting that the cueing effects reflected the action implied by the central object, not the interaction between the objects. For Experiment 3 participants decided whether the cue and target objects were related. Here, the interaction between congruent (but not incongruent) targets led to significant cueing/positioning benefits at all three SOAs. Reduced cueing benefits were obtained in all three experiments when the object cue did not portray a legitimate action (e.g., a bottle pointing towards an upper location, since a bottle cannot pour upwards), suggesting that it is the perceived action that is critical, rather than the structural properties of individual objects. The data suggest that affordance for action modulates the allocation of visual attention.
