Similar Articles
20 similar articles found.
1.
Weimin Mou, Hui Zhang. Cognition, 2009, 111(2), 175-186
Five experiments investigated whether observer locomotion provides specialized information facilitating novel-view scene recognition. Participants detected a position change after briefly viewing a desktop scene when the table stayed stationary or was rotated and when the observer stayed stationary or locomoted. The results showed that 49° novel-view scene recognition was more accurate when the novel view was caused by observer locomotion than when it was caused by table rotation. However, this superiority of observer locomotion disappeared when the to-be-tested viewpoint was indicated during the study phase, when the study viewing direction was indicated during the test phase, and when the novel test view was 98°; it was even reversed when the study viewing direction was indicated during the test phase in the table rotation condition but not in the observer locomotion condition. These results suggest that scene recognition relies on identifying the spatial reference directions of the scene and that accurately indicating the spatial reference direction can facilitate scene recognition. The facilitative effect of locomotion occurs because the spatial reference direction of the scene is tracked during locomotion and is therefore more accurately identified at test.

2.
Mou W, Xiao C, McNamara TP. Cognition, 2008, 108(1), 136-154
Two experiments investigated participants' spatial memory of a briefly viewed layout. Participants saw an array of five objects on a table and, after a short delay, indicated whether the target object indicated by the experimenter had been moved. Experiment 1 showed that change detection was more accurate when non-target objects were stationary than when they were moved. This context effect was observed when participants were tested both at the original learning perspective and at a novel perspective. In Experiment 2, the arrays of five objects were presented on a rectangular table and two of the non-target objects were aligned with the longer axis of the table. Change detection was more accurate when the target object was presented with the two objects that had been aligned with the longer axis of the table during learning than when it was presented with the two objects that had not been. These results indicate that spatial memory of a briefly viewed layout represents interobject spatial relations and utilizes an allocentric reference direction.

3.
Two experiments were conducted to investigate whether locomotion to a novel test view would eliminate viewpoint costs in visual object processing. Participants performed a sequential matching task for object identity or object handedness, using novel 3-D objects displayed in a head-mounted display. To change the test view of the object, the orientation of the object in 3-D space and the test position of the observer were manipulated independently. Participants were more accurate when the test view was the same as the learned view than when the views were different, whether the view change was 50° or 90°. With 50° rotations, participants were more accurate at novel test views caused by their own locomotion (object stationary) than at those caused by object rotation (observer stationary), but this difference disappeared when the view change was 90°. These results indicate that the facilitation of spatial updating during locomotion occurs within a limited range of viewpoints, and that such facilitation does not eliminate viewpoint costs in visual object processing.

4.
Subjects in a darkened room saw an array of five phosphorescent objects on a circular table and, after a short delay, indicated which object had been moved. During the delay the subject, the table, or a phosphorescent landmark external to the array was moved (a rotation about the centre of the table), either alone or in combination. A fully factorial design was used to detect the use of three types of representations of object location: (i) visual snapshots; (ii) egocentric representations updated by self-motion; and (iii) representations relative to the external cue. Improved performance was seen whenever the test array was oriented consistently with any of these stored representations. The influence of representations (i) and (ii) replicates previous work. The influence of representation (iii) is a novel finding which implies that allocentric representations play a role in spatial memory, even over short distances and times. The effect of the external cue was greater when it was initially experienced as stable. Females outperformed males except when the array was consistent with self-motion but not with visual snapshots. These results enable a simple egocentric model of spatial memory to be extended to address large-scale navigation, including the effects of allocentric knowledge, landmark stability and gender.
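The logic of this factorial design can be sketched in code. The following is an illustrative reconstruction (not the authors' code), under the simplifying assumption that all rotations are of equal magnitude about the table centre; the function names are ours:

```python
# For each combination of movements (subject / table / external landmark),
# which stored representation predicts that the test array looks consistent?
from itertools import product

def consistent_representations(subject_moved, table_moved, landmark_moved):
    """Return the set of stored representations consistent with the test
    array, assuming equal-magnitude rotations about the table centre."""
    consistent = set()
    # (i) visual snapshot: the retinal image matches if neither the array
    # nor the viewpoint changed, or both changed together.
    if subject_moved == table_moved:
        consistent.add("snapshot")
    # (ii) egocentric representation updated by self-motion: matches
    # whenever the array itself did not move.
    if not table_moved:
        consistent.add("egocentric-updated")
    # (iii) allocentric representation relative to the external cue:
    # matches when array and landmark kept their relative orientation.
    if table_moved == landmark_moved:
        consistent.add("allocentric")
    return consistent

for s, t, l in product([False, True], repeat=3):
    print(s, t, l, sorted(consistent_representations(s, t, l)))
```

Because each movement condition leaves a different subset of representations consistent, improved performance in a given condition diagnoses which representations subjects actually used.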

5.
Rushton SK, Bradshaw MF, Warren PA. Cognition, 2007, 105(1), 237-245
An object that moves is spotted almost effortlessly; it "pops out". When the observer is stationary, a moving object is uniquely identified by retinal motion. This is not so when the observer is also moving; as the eye travels through space all scene objects change position relative to the eye producing a complicated field of retinal motion. Without the unique identifier of retinal motion an object moving relative to the scene should be difficult to locate. Using a search task, we investigated this proposition. Computer-rendered objects were moved and transformed in a manner consistent with movement of the observer. Despite the complex pattern of retinal motion, objects moving relative to the scene were found to pop out. We suggest the brain uses its sensitivity to optic flow to "stabilise" the scene, allowing the scene-relative movement of an object to be identified.
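The "stabilisation" idea can be made concrete with a minimal sketch (our assumption, not the paper's stimuli or model): during observer translation every static point produces retinal motion, so the scene-relative mover is not simply the point with the largest retinal motion; but subtracting the flow predicted from self-motion leaves residual motion only at the object that moved relative to the scene.

```python
# Small-angle pinhole model: retinal position ~ lateral position / depth.
import numpy as np

rng = np.random.default_rng(0)
n = 20
depths = rng.uniform(2.0, 10.0, n)    # metres from the eye
x = rng.uniform(-1.0, 1.0, n)         # lateral scene positions
observer_dx = 0.1                     # observer translates 0.1 m sideways
object_dx = np.zeros(n)
object_dx[3] = 0.05                   # one object also moves in the scene

before = x / depths
after = (x + object_dx - observer_dx) / depths
retinal_motion = after - before       # every point moves on the retina

# "Stabilise": subtract the flow a static point at that depth would show.
predicted_static = (x - observer_dx) / depths - before
residual = retinal_motion - predicted_static

print(np.argmax(np.abs(residual)))    # index of the scene-relative mover
```

After stabilisation the residual is exactly `object_dx / depths`: zero everywhere except at the scene-relative mover, which is why it could "pop out" in the search task.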

6.
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects.

7.
Active and passive scene recognition across views.
R F Wang, D J Simons. Cognition, 1999, 70(2), 191-210
Recent evidence suggests that scene recognition across views is impaired when an array of objects rotates relative to a stationary observer, but not when the observer moves relative to a stationary display [Simons, D.J., Wang, R.F., 1998. Perceiving real-world viewpoint changes. Psychological Science 9, 315-320]. The experiments in this report examine whether the relatively poorer performance by stationary observers across view changes results from a lack of perceptual information for the rotation or from the lack of active control of the perspective change, both of which are present for viewpoint changes. Three experiments compared performance when observers passively experienced the view change and when they actively caused the change. Even with visual information and active control over the display rotation, change detection performance was still worse for orientation changes than for viewpoint changes. These findings suggest that observers can update a viewer-centered representation of a scene when they move to a different viewing position, but such updating does not occur during display rotations even with visual and motor information for the magnitude of the change. This experimental approach, using arrays of real objects rather than computer displays of isolated individual objects, can shed light on mechanisms that allow accurate recognition despite changes in the observer's position and orientation.

8.
Effects of information specifying the position of an object in a 3-D scene were investigated in two experiments with twelve observers. To separate the effects of the change in scene position from the changes in the projection that occur with increased distance from the observer, the same projections were produced by simulating (a) a constant object at different scene positions and (b) different objects at the same scene position. The simulated scene consisted of a ground plane, a ceiling plane, and a cylinder on a pole attached to both planes. Motion-parallax scenes were studied in one experiment; texture-gradient scenes were studied in the other. Observers adjusted a line to match the perceived internal depth of the cylinder. Judged depth for objects matched in simulated size decreased as simulated distance from the observer increased. Judged depth decreased at a faster rate for the same projections shown at a constant scene position. Adding object-centered depth information (object rotation) increased judged depth for the motion-parallax displays. These results demonstrate that the judged internal depth of an object is reduced by the change in projection that occurs with increased distance, but this effect is diminished if information for change in scene position accompanies the change in projection.
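The manipulation in (a) and (b) rests on a basic property of perspective projection: image size scales as physical size divided by distance, so a larger, farther object can produce exactly the projection of a smaller, nearer one. A minimal pinhole-camera sketch (our illustration, not the authors' stimulus code):

```python
# Pinhole projection: image size = focal_length * physical_size / distance.
def projected_size(physical_size, distance, focal_length=1.0):
    return focal_length * physical_size / distance

# The same projection from two different simulated scenes:
near = projected_size(physical_size=1.0, distance=4.0)   # small, near object
far = projected_size(physical_size=2.0, distance=8.0)    # large, far object
print(near == far)   # True: identical projections, different scene positions
```

Holding the projection constant in this way lets the experiments isolate whether judged depth is driven by the projection itself or by information about the object's position in the scene.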

9.
Completing a representational momentum (RM) task, participants viewed one of three camera motions through a scene and indicated whether test probes were in the same position as they were in the final view of the animation. All camera motions led to RM anticipations in the direction of motion, with larger distortions resulting from rotations than a compound motion of a rotation and a translation. A surprise test of spatial layout, using an aerial map, revealed that the correct map was identified only following aerial views during the RM task. When the RM task displayed field views, including repeated views of multiple object groups, participants were unable to identify the overall spatial layout of the scene. These results suggest that the object–location binding thought to support certain change detection and visual search tasks might be viewpoint dependent.

10.
Dynamic tasks often require fast adaptations to new viewpoints. It has been shown that automatic spatial updating is triggered by proprioceptive motion cues. Here, we demonstrate that purely visual cues are sufficient to trigger automatic updating. In five experiments, we examined spatial updating in a dynamic attention task in which participants had to track three objects across scene rotations that occurred while the objects were temporarily invisible. The objects moved on a floor plane acting as a reference frame and, when reference frame rotations occurred, unpredictably either were relocated or remained in place. Although participants were aware of this dissociation, they were unable to ignore continuous visual cues about scene rotations (Experiments 1a and 1b). This held even when common rotations of floor plane and objects were less likely than a dissociated rotation (Experiments 2a and 2b). However, identifying only the spatial reference direction was not sufficient to trigger updating (Experiment 3). We therefore conclude that automatic spatial target updating occurs with purely visual information.

11.
We investigated the role of visual experience in the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally blind, and late blind participants. We first established that spatial updating occurs in sighted individuals for haptic scenes of novel objects. All participants were required to recognise a previously learned haptic scene of novel objects presented at the same or a different orientation as at learning, whilst they either remained in the same position or moved to a new position relative to the scene. Scene rotation incurred a cost in recognition performance in all groups. However, overall haptic scene recognition performance was worse in the congenitally blind group. Moreover, unlike the late blind or sighted groups, the congenitally blind group was unable to compensate for the cost of scene rotation with observer motion. Our results suggest that vision plays an important role in representing and updating spatial information encoded through touch, and have important implications for the role of vision in the development of neuronal areas involved in spatial cognition.

12.
13.
Although both the object and the observer often move in natural environments, the effect of motion on visual object recognition has not been well documented. The authors examined the effect of a reversal in the direction of rotation on both explicit and implicit memory for novel, 3-dimensional objects. Participants viewed a series of continuously rotating objects and later made either an old-new recognition judgment or a symmetric-asymmetric decision. For both tasks, memory for rotating objects was impaired when the direction of rotation was reversed at test. These results demonstrate that dynamic information can play a role in visual object recognition and suggest that object representations can encode spatiotemporal information.

14.
All elements of the visual field are known to influence the perception of the egocentric distances of objects. Not only the ground surface of a scene, but also the surface at the back or other objects in the scene can affect an observer's egocentric distance estimation of an object. We tested whether this is also true for exocentric direction estimations. We used an exocentric pointing task to test whether the presence of poster-boards in the visual scene would influence the perception of the exocentric direction between two test-objects. In this task the observer has to direct a pointer, with a remote control, to a target. We placed the poster-boards at various positions in the visual field to test whether these boards would affect the settings of the observer. We found that they only affected the settings when they directly served as a reference for orienting the pointer to the target.

15.
This study examined whether the perception of heading is determined by spatially pooling velocity information. Observers were presented displays simulating observer motion through a volume of 3-D objects. To test the importance of spatial pooling, the authors systematically varied the nonrigidity of the flow field using two types of object motion: adding a unique rotation or translation to each object. Calculations of the signal-to-noise (observer velocity-to-object motion) ratio indicated no decrements in performance when the ratio was .39 for object rotation and .45 for object translation. Performance also increased with the number of objects in the scene. These results suggest that heading is determined by mechanisms that use spatial pooling over large regions.
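Why pooling over more objects should help can be shown with a toy simulation (our sketch under simplified assumptions, not the authors' model): if each object's flow vector is the common observer-velocity component plus independent object-motion "noise", averaging over N objects recovers the common component with error shrinking roughly as 1/sqrt(N).

```python
# Spatial pooling of noisy flow vectors: error vs. number of objects.
import numpy as np

rng = np.random.default_rng(1)
true_heading = np.array([0.3, 0.0])   # common flow component (observer velocity)

def pooled_error(n_objects, noise_sd=0.45, trials=2000):
    """Mean error of the pooled (averaged) flow estimate of the heading
    component, with independent per-object motion noise."""
    errs = []
    for _ in range(trials):
        flow = true_heading + rng.normal(0.0, noise_sd, size=(n_objects, 2))
        errs.append(np.linalg.norm(flow.mean(axis=0) - true_heading))
    return float(np.mean(errs))

few, many = pooled_error(5), pooled_error(50)
print(few > many)   # pooling over more objects yields a smaller error
```

This is consistent with the reported result that performance increased with the number of objects in the scene.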

16.
Many previous studies of object recognition have found view-dependent recognition performance when view changes are produced by rotating objects relative to a stationary viewing position. However, the assumption that an object rotation is equivalent to an observer viewpoint change ignores the potential contribution of extraretinal information that accompanies observer movement. In four experiments, we investigated the role of extraretinal information on real-world object recognition. As in previous studies focusing on the recognition of spatial layouts across view changes, observers performed better in an old/new object recognition task when view changes were caused by viewer movement than when they were caused by object rotation. This difference between viewpoint and orientation changes was due not to the visual background, but to the extraretinal information available during real observer movements. Models of object recognition need to consider other information available to an observer in addition to the retinal projection in order to fully understand object recognition in the real world.

17.
Spatial reference in multiple object tracking is available from configurations of dynamic objects and static reference objects. In three experiments, we studied the use of spatial reference in tracking and in relocating targets after abrupt scene rotations. Observers tracked 1, 2, 3, 4, and 6 targets in 3D scenes, in which white balls moved on a square floor plane. The floor plane was either visible, thus providing static spatial reference, or it was invisible. Without scene rotations, the configuration of dynamic objects provided sufficient spatial reference and static spatial reference conferred no advantage. In contrast, with abrupt scene rotations of 20°, static spatial reference aided the relocation of targets. A wireframe floor plane lacking local visual detail was as effective as a checkerboard. Individually colored geometric forms as static reference objects provided no additional benefit either, even if targets were centered on these forms at the abrupt scene rotation. Individualizing the dynamic objects themselves by color for a brief interval around the abrupt scene rotation, however, did improve performance. We conclude that attentional tracking of moving targets proceeds within dynamic configurations but detached from the static local background.

18.
When moving toward a stationary scene, people judge their heading quite well from visual information alone. Much experimental and modeling work has been presented to analyze how people judge their heading for stationary scenes. However, in everyday life, we often move through scenes that contain moving objects. Most models have difficulty computing heading when moving objects are in the scene, and few studies have examined how well humans perform in the presence of moving objects. In this study, we tested how well people judge their heading in the presence of moving objects. We found that people perform remarkably well under a variety of conditions. The only condition that affects an observer’s ability to judge heading accurately consists of a large moving object crossing the observer’s path. In this case, the presence of the object causes a small bias in the heading judgments. For objects moving horizontally with respect to the observer, this bias is in the object’s direction of motion. These results present a challenge for computational models.

19.
Six rats were trained to find a previously missing target or 'jackpot' object in a square array of four identical or different objects (the test segment of a trial) after first visiting and collecting sunflower seeds from under the other three objects (the study segment of a trial). During training, objects' local positions within the array and their global positions within the larger foraging arena were varied over trials but were not changed between segments within a trial. Following this training, rats were tested on their accuracy for finding the target object when a trial's test array was sometimes moved to a different location in the foraging arena or when the position of the target object within the test array had been changed. Either of these manipulations initially slightly reduced rats' accuracy for finding the missing object but then enhanced it. Relocating test arrays of identical objects enhanced rats' performance only after 10-min inter-segment intervals (ISIs). Relocating test arrays of different objects enhanced rats' performance only after 2-min ISIs. Rats also improved their performance when they encountered the target object in a new position in test arrays of different objects. This enhancement effect occurred after either 2- or 30-min ISIs. These findings suggest that rats separately retrieved a missing (target) object's spatial and non-spatial information when they were relevant but not when they were irrelevant in a trial. The enhancement effects provide evidence for rats' limited retrieval capacity in their visuo-spatial working memory.

20.
In three experiments young children were asked to reconstruct an array of objects after they had imagined its appearance following either a rotation of the array or a change in their own position (Huttenlocher & Presson, 1973). In reconstructing arrays, subjects first positioned that object which would be most prominent to an observer following the imagined transformation. Surprisingly, this occurred even when subjects made an egocentric error by reconstructing a copy of the original array. Hence young children, although apparently egocentric, can imagine themselves in a new position with a new perspective. The mental operations which underlie imagined spatial transformations are discussed in light of the results.
