Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Body- and environmental-stabilized processing of spatial knowledge   (Cited by 1: 0 self-citations, 1 by others)
In 5 experiments, the authors examined the perceptual and cognitive processes used to track the locations of objects during locomotion. Participants learned locations of 9 objects on the outer part of a turntable from a single viewpoint while standing in the middle of the turntable. They subsequently pointed to objects while facing the learning heading and a new heading, using imagined headings that corresponded to their current actual body heading and the other actual heading. Participants in 4 experiments were asked to imagine that the objects moved with them as they turned and were shown or only told that the objects would move with them; in Experiment 5, participants were shown that objects could move with them but were asked to ignore this as they turned. Results showed that participants tracked object locations as though the objects moved with them when shown but not when told about the consequences of their locomotion. Once activated, this processing mode could not be suppressed by instructions. Results indicated that people process object locations in a body- or an environment-stabilized manner during locomotion, depending on the perceptual consequences of locomotion.

2.
Speakers often use gesture to demonstrate how to perform actions—for example, they might show how to open the top of a jar by making a twisting motion above the jar. Yet it is unclear whether listeners learn as much from seeing such gestures as they learn from seeing actions that physically change the position of objects (i.e., actually opening the jar). Here, we examined participants' implicit and explicit understanding about a series of movements that demonstrated how to move a set of objects. The movements were either shown with actions that physically relocated each object or with gestures that represented the relocation without touching the objects. Further, the end location that was indicated for each object covaried with whether the object was grasped with one or two hands. We found that memory for the end location of each object was better after seeing the physical relocation of the objects, that is, after seeing action, than after seeing gesture, regardless of whether speech was absent (Experiment 1) or present (Experiment 2). However, gesture and action built similar implicit understanding of how a particular handgrasp corresponded with a particular end location. Although gestures miss the benefit of showing the end state of objects that have been acted upon, the data show that gestures are as good as action in building knowledge of how to perform an action.

3.
Studies on affordances typically focus on single objects. We investigated whether affordances are modulated by the context, defined by the relation between two objects and a hand. Participants were presented with pictures displaying two manipulable objects linked by a functional (knife-butter), a spatial (knife-coffee mug), or by no relation. They responded by pressing a key whether the objects were related or not. To determine if observing others' actions and understanding their goals would facilitate judgments, a hand was: (a) displayed near the objects; (b) grasping an object to use it; (c) grasping an object to manipulate/move it; (d) no hand was displayed. RTs were faster when objects were functionally rather than spatially related. Manipulation postures were the slowest in the functional context and functional postures were inhibited in the spatial context, probably due to mismatch between the inferred goal and the context. The absence of this interaction with foot responses instead of hands in Experiment 2 suggests that effects are due to motor simulation rather than to associations between context and hand-postures.

4.
Mou W, Xiao C, McNamara TP. Cognition, 2008, 108(1): 136-154
Two experiments investigated participants' spatial memory of a briefly viewed layout. Participants saw an array of five objects on a table and, after a short delay, indicated whether the target object indicated by the experimenter had been moved. Experiment 1 showed that change detection was more accurate when non-target objects were stationary than when non-target objects were moved. This context effect was observed when participants were tested both at the original learning perspective and at a novel perspective. In Experiment 2, the arrays of five objects were presented on a rectangular table and two of the non-target objects were aligned with the longer axis of the table. Change detection was more accurate when the target object was presented with the two objects that were aligned with the longer axis of the table during learning than when the target object was presented with the two objects that were not aligned with the longer axis of the table during learning. These results indicated that the spatial memory of a briefly viewed layout represents interobject spatial relations and utilizes an allocentric reference direction.

5.
Previous research investigated the contributions of target objects, situational context and movement kinematics to action prediction separately. The current study addresses how these three factors combine in the prediction of observed actions. Participants observed an actor whose movements were constrained by the situational context or not, and object-directed or not. After several steps, participants had to indicate how the action would continue. Experiment 1 shows that predictions were most accurate when the action was constrained and object-directed. Experiments 2A and 2B investigated whether these predictions relied more on the presence of a target object or cues in the actor's movement kinematics. The target object was artificially moved to another location or occluded. Results suggest a crucial role for kinematics. In sum, observers predict actions based on target objects and situational constraints, and they exploit subtle movement cues of the observed actor rather than the direct visual information about target objects and context.

6.
Accurate representation of a changing environment requires individuation: the ability to determine how many numerically distinct objects are present in a scene. Much research has characterized early individuation abilities by identifying which object features infants can use to individuate throughout development. However, despite the fact that without memory featural individuation would be impossible, little is known about how memory constrains object individuation. Here, we investigated infants' ability to individuate multiple objects at once and asked whether individuation performance changes as a function of memory load. In three experiments, 18-month-old infants saw one, two, or three objects hidden and always saw the correct number of objects retrieved. On some trials, one or more of these objects surreptitiously switched identity prior to retrieval. We asked whether infants would use this identity mismatch to individuate and, hence, continue searching for the missing object(s). We found that infants were less likely to individuate objects as memory load grew, but that infants individuated more successfully when the featural contrast between the hidden and retrieved objects increased. These results suggest that remembering more objects may result in a loss of representational precision, thereby decreasing the likelihood of successful individuation. We close by discussing possible links between our results and findings from adult working memory.

7.
Harman KL, Humphrey GK. Perception, 1999, 28(5): 601-615
When we look at an object as we move or the object moves, our visual system is presented with a sequence of different views of the object. It has been suggested that such regular temporal sequences of views of objects contain information that can aid in the process of representing and recognising objects. We examined whether seeing a series of perspective views of objects in sequence led to more efficient recognition than seeing the same views of objects but presented in a random order. Participants studied images of 20 novel three-dimensional objects rotating in depth under one of two study conditions. In one study condition, participants viewed an ordered sequence of views of objects that was assumed to mimic important aspects of how we normally encounter objects. In the other study condition, participants were presented the same object views, but in a random order. It was expected that studying a regular sequence of views would lead to more efficient recognition than studying a random presentation of object views. Although subsequent recognition accuracy was equal for the two groups, the two study groups differed in reaction time. Specifically, the random study group responded reliably faster than the sequence study group. Some possible encoding differences between the two groups are discussed.

8.
A total of 153 children (excluding those who erred on control questions), mainly 5 and 7 years of age, participated in two experiments that involved tests of false belief. In the task, the sought entity was first at Location 1 and then, unknown to the searching protagonist, it moved to Location 2. In Experiment 1, performance was well below ceiling in 5-year-olds when the sought entity was a person, and this contrasted with a task in which the sought entity was a physical object. Performance was especially inaccurate when the sought person moved of his or her own volition rather than when the sought person was requested to move by a third party. Interestingly, 5-year-olds were more likely to nominate Location 1 when asked where the searching protagonist would look first than when asked what he or she would do next. In Experiment 2, however, 5-year-olds also tended to nominate Location 1 following a question that included the word "first" even in a test of true belief, a patently incorrect response. Altogether, the results suggest that 5-year-old children have considerable difficulty with a test of false belief when the sought entity is a person acting under his or her own volition. This suggests that 5-year-olds' handle on states of belief is surprisingly fragile in this kind of task.

9.
In a series of three experiments, we investigated the development of children's understanding of the similarities between photographs and their referents. Based on prior work on the development of analogical understanding (e.g. Gentner & Rattermann, 1991), we suggest that the appreciation of this relation involves multiple levels. Photographs are similar to their referents both in terms of the constituent objects and in terms of the relations among these objects. We predicted that children would appreciate object similarity (whether photographs depict the same objects as in the referent scene) before they would appreciate relational similarity (whether photographs depict the objects in the same spatial positions as in the referent scene). To test this hypothesis, we presented 3-, 4-, 5-, 6-, and 7-year-old children and adults with several candidate photographs of an arrangement of objects. Participants were asked to choose which of the photographs was 'the same' as the arrangement. We manipulated the types of information the photographs preserved about the referent objects. One set of photographs did not preserve the object properties of the scene. Another set of photographs preserved the object properties of the scene, but not the relational similarity, such that the original objects were depicted but occupied different spatial positions in the arrangement. As predicted, younger children based their choices of the photographs largely on object similarity, whereas older children and adults also took relational similarity into account. Results are discussed in terms of the development of children's appreciation of different levels of similarity.

10.
The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical objects. Objects moved randomly and independently (moving condition), passed behind an invisible bar (occluded condition), or momentarily disappeared by shrinking (implosion condition). Scholl and Pylyshyn (1999) found that adults can track entities under the moving and occluded conditions, but not under implosion. This finding suggests that the tracking mechanism is constrained by the spatiotemporal properties of physical objects as they move in the world. In the present study, we adapt these conditions to investigate whether this constraint holds for people with severe spatial impairments associated with Williams syndrome (WS). In Experiment 1, we compare the performance of individuals with WS and typically developing (TD) adults. TD adults replicated Scholl and Pylyshyn’s findings; performance was no different between the moving and occluded conditions but was worse under implosion. People with WS had reduced tracking capacity but demonstrated the same pattern across conditions. In Experiment 2, we tested TD 4-, 5-, and 7-year-olds. People with WS performed at a level that fell between TD 4- and 5-year-olds. These results suggest that the multiple object tracking system in WS operates under the same object-based constraints that hold in typical development.

11.
Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one’s own movement impairs the ability to keep track of other moving objects. Participants attempted to track multiple targets while either moving around the tracking area or remaining in a fixed location. Participants’ tracking performance was impaired when they moved to a new location during tracking, even when they were passively moved and when they did not see a shift in viewpoint. Self-motion impaired multiple-object tracking in both an immersive virtual environment and a real-world analog, but did not interfere with a difficult non-spatial tracking task. These results suggest that people use a common mechanism to keep track both of the locations of moving objects around them and of their own location.

12.
People's behavior in relation to objects depends on whether they are owned. But how do people judge whether objects are owned? We propose that people expect human-made objects (artifacts) to be more likely to be owned than naturally occurring objects (natural kinds), and we examine the development of these expectations in young children. Experiment 1 found that when shown pictures of familiar kinds of objects, 3-year-olds expected artifacts to be owned and inanimate natural kinds to be non-owned. In Experiments 2A and 2B, 3- to 6-year-olds likewise had different expectations about the ownership of unfamiliar artifacts and natural kinds. Children at all ages viewed unfamiliar natural kinds as non-owned, but children younger than 6 years of age only endorsed artifacts as owned at chance rates. In Experiment 3, children saw the same pictures but were also told whether objects were human-made. With this information provided, even 3-year-olds viewed unfamiliar artifacts as owned. Finally, in Experiment 4, 4- and 5-year-olds chose unfamiliar artifacts over natural kinds when judging which object in a pair belongs to a person, but not when judging which the person prefers. These experiments provide the first evidence about how children judge whether objects are owned. In contrast to claims that children think about natural kinds as being similar to artifacts, the current findings reveal that children have differing expectations about whether they are owned.

13.
Previous research has shown that when subjects search for a particular target object the sudden appearance of a new object captures the eyes on a large proportion of trials. The present study examined whether the onset affects the oculomotor system even when the eyes move directly towards the target. Using a modified version of the oculomotor paradigm (see Theeuwes, Kramer, Hahn, & Irwin, 1998) we show that when the eyes moved to the target object, subsequent saccades were inhibited from moving to a location at which a new object had previously appeared (inhibition-of-return; IOR). Whether or not a saccade to the onset was executed had no effect on the size of the inhibition. In particular conditions, the trajectories of saccades to the target objects were slightly curved in the opposite direction of the onset. The data are interpreted in the context of a novel hypothesis regarding oculomotor IOR.

14.
Action priming by briefly presented objects   (Cited by 8: 0 self-citations, 8 by others)
Tucker M, Ellis R. Acta Psychologica, 2004, 116(2): 185-203
Three experiments investigated how visual objects prime the actions they afford. The principal concern was whether such visuomotor priming depends upon a concurrent visual input, as would be expected if it were mediated by on-line dorsal system processes. Experiment 1 showed there to be essentially identical advantages for making afforded over non-afforded responses when these were made to objects still in view and following brief (30 or 50 ms) object exposures that were backward masked. Experiment 2 showed that affordance effects were also unaffected by stimulus degradation. Finally, Experiment 3 showed there to be statistically equal effects from images of objects and their names. The results suggest that an active object representation is sufficient to generate affordance compatibility effects based on associated actions, whether or not the object is concurrently visible.

15.
How do children succeed in learning a word? Research has shown robustly that, in ambiguous labeling situations, young children assume novel labels to refer to unfamiliar rather than familiar objects. However, ongoing debates center on the underlying mechanism: Is this behavior based on lexical constraints, guided by pragmatic reasoning, or simply driven by children's attraction to novelty? Additionally, recent research has questioned whether children's disambiguation leads to long-term learning or rather indicates an attentional shift in the moment of the conversation. Thus, we conducted a pre-registered online study with 2- and 3-year-olds and adults. Participants were presented with unknown objects as potential referents for a novel word. Across conditions, we manipulated whether the only difference between both objects was their relative novelty to the participant or whether, in addition, participants were provided with pragmatic information that indicated which object the speaker referred to. We tested participants’ immediate referent selection and their retention after 5 min. Results revealed that when given common ground information both age groups inferred the correct referent with high success and enhanced behavioral certainty. Without this information, object novelty alone did not guide their selection. After 5 min, adults remembered their previous selections above chance in both conditions, while children only showed reliable learning in the pragmatic condition. The pattern of results indicates how pragmatics may aid referent disambiguation and learning in both adults and young children. From early ontogeny on, children's social-cognitive understanding may guide their communicative interactions and support their language acquisition.

Research Highlights

  • We tested how 2- to 3-year-olds and adults resolve referential ambiguity without any lexical cues.
  • In the pragmatic context, both age groups disambiguated novel word-object mappings, while object novelty alone did not guide their referent selection.
  • In the pragmatic context, children also showed increased certainty in disambiguation and retained new word-object mappings over time.
  • These findings contribute to the ongoing debate on whether children learn words on the basis of domain-specific constraints, lower-level associative mechanisms, or pragmatic inferences.

16.
We investigated how preschoolers use their understanding of the actions available to a speaker to resolve referential ambiguity. In this study, 58 3- and 4-year-olds were presented with arrays of eight objects in a toy house and were instructed to retrieve various objects from the display. The trials varied in terms of whether the speaker's hands were empty or full when she requested an object as well as whether the request was ambiguous (i.e., more than one potential referent) or unambiguous (i.e., only one potential referent). Results demonstrated that both 3- and 4-year-olds were sensitive to speaker action constraints and used this information to guide on-line processing (as indexed by eye gaze measures) and to make explicit referential decisions.

17.
Preferential looking experiments investigated 5- and 8-month-old infants' perception and understanding of the motions of a shadow that appeared to be cast by a ball upon a box. When all the surfaces within the display were stationary, infants looked reliably longer when the shadow moved than when the shadow was stationary, indicating that they detected the shadow and its motion. In further experiments, however, infants' looking was not consistent with a sensitivity to the shadow's natural motion: They looked longer at natural events in which the shadow moved with the ball or remained at rest under the moving box than at unnatural events in which the shadow moved with the box or remained at rest under the moving ball. These findings suggest that infants overextend to shadows a principle that applies to material objects: Objects move together if and only if they are in contact. In a final experiment, infants were habituated to a moving shadow that repeatedly violated one aspect of the contact principle. In a subsequent test they failed to infer that the shadow would violate another aspect of the contact principle. Instead, they appeared to suspend all predictions concerning the behavior of the shadow.

18.
Object permanence in five-month-old infants   (Cited by 5: 0 self-citations, 5 by others)
A new method was devised to test object permanence in young infants. Five-month-old infants were habituated to a screen that moved back and forth through a 180-degree arc, in the manner of a drawbridge. After infants reached habituation, a box was centered behind the screen. Infants were shown two test events: a possible event and an impossible event. In the possible event, the screen stopped when it reached the occluded box; in the impossible event, the screen moved through the space occupied by the box. The results indicated that infants looked reliably longer at the impossible than at the possible event. This finding suggested that infants (1) understood that the box continued to exist, in its same location, after it was occluded by the screen, and (2) expected the screen to stop against the occluded box and were surprised, or puzzled, when it failed to do so. A control experiment in which the box was placed next to the screen provided support for this interpretation of the results. Together, the results of these experiments indicate that, contrary to Piaget's (1954) claims, infants as young as 5 months of age understand that objects continue to exist when occluded. The results also indicate that 5-month-old infants realize that solid objects do not move through the space occupied by other solid objects.

19.
Does knowledge about which objects and settings tend to co-occur affect how people interpret an image? The effects of consistency on perception were investigated using manipulated photographs containing a foreground object that was either semantically consistent or inconsistent with its setting. In four experiments, participants reported the foreground object, the setting, or both after seeing each picture for 80 ms followed by a mask. In Experiment 1, objects were identified more accurately in a consistent than an inconsistent setting. In Experiment 2, backgrounds were identified more accurately when they contained a consistent rather than an inconsistent foreground object. In Experiment 3, objects were presented without backgrounds and backgrounds without objects; comparison with the other experiments indicated that objects were identified better in isolation than when presented with a background, but there was no difference in accuracy for backgrounds whether they appeared with a foreground object or not. Finally, in Experiment 4, consistency effects remained when both objects and backgrounds were reported. Semantic consistency information is available when a scene is glimpsed briefly and affects both object and background perception. Objects and their settings are processed interactively and not in isolation.

20.
Four experiments investigated how repetition priming of object recognition is affected by the task performed in the prime and test phases. In Experiment 1, object recognition was tested using both vocal naming and two different semantic decision tasks (whether or not objects were manufactured, and whether or not they would be found inside the house). Some aspects of the data were inconsistent with contemporary models of object recognition. Specifically, object priming was eliminated with some combinations of prime and test tasks, and there was no evidence of perceptual (as opposed to conceptual or response) priming in either semantic classification task, even though perceptual identification of the objects is required for at least one of these tasks. Experiment 2 showed that even when perceptual demands were increased by brief presentation, the inside task showed no perceptual priming. Experiment 3 showed that the inside task did not appear to be based on conceptual priming either, as it was not primed significantly when the prime decisions were made to object labels. Experiment 4 showed that visual sensitivity could be restored to the inside task following practice on the task, supporting the suggestion that a critical factor is whether the semantic category is preformed or must be computed. The results show that the visual representational processes revealed by object priming depend crucially on the task chosen.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号