Similar Articles
1.
Many previous studies of object recognition have found view-dependent recognition performance when view changes are produced by rotating objects relative to a stationary viewing position. However, the assumption that an object rotation is equivalent to an observer viewpoint change ignores the potential contribution of extraretinal information that accompanies observer movement. In four experiments, we investigated the role of extraretinal information in real-world object recognition. As in previous studies focusing on the recognition of spatial layouts across view changes, observers performed better in an old/new object recognition task when view changes were caused by viewer movement than when they were caused by object rotation. This difference between viewpoint and orientation changes was due not to the visual background, but to the extraretinal information available during real observer movements. Models of object recognition need to consider other information available to an observer in addition to the retinal projection in order to fully understand object recognition in the real world.

2.
Active and passive scene recognition across views.
R F Wang  D J Simons 《Cognition》1999,70(2):191-210
Recent evidence suggests that scene recognition across views is impaired when an array of objects rotates relative to a stationary observer, but not when the observer moves relative to a stationary display [Simons, D.J., Wang, R.F., 1998. Perceiving real-world viewpoint changes. Psychological Science 9, 315-320]. The experiments in this report examine whether the relatively poorer performance by stationary observers across view changes results from a lack of perceptual information for the rotation or from the lack of active control of the perspective change, both of which are present for viewpoint changes. Three experiments compared performance when observers passively experienced the view change and when they actively caused the change. Even with visual information and active control over the display rotation, change detection performance was still worse for orientation changes than for viewpoint changes. These findings suggest that observers can update a viewer-centered representation of a scene when they move to a different viewing position, but such updating does not occur during display rotations even with visual and motor information for the magnitude of the change. This experimental approach, using arrays of real objects rather than computer displays of isolated individual objects, can shed light on mechanisms that allow accurate recognition despite changes in the observer's position and orientation.

3.
《Visual cognition》2013,21(2):157-199
Scene recognition across a perspective change typically exhibits viewpoint dependence. Accordingly, the more the orientation of the test viewpoint departs from that of the study viewpoint, the longer observers take and the less accurate they are at recognizing the spatial layout. Three experiments show that observers can take advantage of a virtual avatar that specifies their future “embodied” perspective on the visual scene. This “out-of-body” priming reduces or even abolishes viewpoint dependence for detecting a change in an object location when the environment is respectively unknown or familiar to the observer. Viewpoint dependence occurs when the priming and primed viewpoints do not match. Changes to permanent extended structures (such as walls) or altered object-to-object spatial relations across viewpoint change are detrimental to viewpoint priming. A neurocognitive model describes the coordination of “out-of-body” and “embodied” perspectives relevant to social perception when understanding what another individual sees.

4.
Two experiments were conducted to investigate whether locomotion to a novel test view would eliminate viewpoint costs in visual object processing. Participants performed a sequential matching task for object identity or object handedness, using novel 3-D objects displayed in a head-mounted display. To change the test view of the object, the orientation of the object in 3-D space and the test position of the observer were manipulated independently. Participants were more accurate when the test view was the same as the learned view than when the views were different, no matter whether the view change of the object was 50° or 90°. With 50° rotations, participants were more accurate at novel test views caused by participants’ locomotion (object stationary) than at those caused by object rotation (observer stationary), but this difference disappeared when the view change was 90°. These results indicate that facilitation of spatial updating during locomotion occurs within a limited range of viewpoints, but that such facilitation does not eliminate viewpoint costs in visual object processing.

5.
Weimin Mou  Hui Zhang 《Cognition》2009,111(2):175-186
Five experiments investigated whether observer locomotion provides specialized information facilitating novel-view scene recognition. Participants detected a position change after briefly viewing a desktop scene when the table stayed stationary or was rotated and when the observer stayed stationary or locomoted. The results showed that 49° novel-view scene recognition was more accurate when the novel view was caused by observer locomotion than when it was caused by table rotation. However, such superiority of observer locomotion disappeared when the to-be-tested viewpoint was indicated during the study phase, when the study viewing direction was indicated during the test phase, and when the novel test view was 98°; it was even reversed when the study viewing direction was indicated during the test phase in the table rotation condition but not in the observer locomotion condition. These results suggest that scene recognition relies on the identification of the spatial reference directions of the scene, and that accurately indicating the spatial reference direction can facilitate scene recognition. The facilitative effect of locomotion occurs because the spatial reference direction of the scene is tracked during locomotion and more accurately identified at test.

6.
Three experiments investigated scene recognition across viewpoint changes, involving same/different judgements on scenes consisting of three objects on a desktop. On same trials, the comparison scene appeared either from the same viewpoint as the standard scene or from a different viewpoint with the desktop rotated about one or more axes. Different trials were created either by interchanging the locations of two or three of the objects (location change condition), or by rotating either one or all three of the objects around their vertical axes (orientation change condition). Response times and errors increased as a function of the angular distance between the standard and comparison views, but this effect was bigger for rotations around the vertical axis than for those about the line of sight or horizontal axis. Furthermore, the time to detect location changes was less than that to detect orientation changes, and this difference increased with increasing angular disparity between the standard and comparison scenes. Rotation times estimated in a double-axis rotation were no longer than other rotations in depth, indicating that alignment was not necessarily simpler around a "natural" axis of rotation. These results are consistent with the hypothesis that scenes, like many objects, may be represented in a viewpoint dependent manner and recognized by aligning standard and comparison views, but that the alignment of scenes is not a holistic process.

7.
Can the visual system extrapolate the spatial layout of a scene to new viewpoints after a single view? In the present study, we examined this question by investigating the priming of spatial layout across depth rotations of the same scene (Sanocki & Epstein, 1997). Participants had to indicate which of two dots superimposed on objects in the target scene appeared closer to them in space. There was as much priming from a prime with a viewpoint that was 10° different from the test image as from a prime that was identical to the target; however, there was no reliable priming from larger differences in viewpoint. These results suggest that a scene’s spatial layout can be extrapolated, but only to a limited extent.

8.
To explore whether effects observed in human object recognition represent fundamental properties of visual perception that are general across species, the authors trained pigeons (Columba livia) and humans to discriminate between pictures of 3-dimensional objects that differed in shape. Novel pictures of the depth-rotated objects were then tested for recognition. Across conditions, the object pairs contained either 0, 1, 3, or 5 distinctive parts. Pigeons showed viewpoint dependence in all object-part conditions, and their performance declined systematically with degree of rotation from the nearest training view. Humans showed viewpoint invariance for novel rotations between the training views but viewpoint dependence for novel rotations outside the training views. For humans, but not pigeons, viewpoint dependence was weakest in the 1-part condition. The authors discuss the results in terms of structural and multiple-view models of object recognition.

9.
10.
Studies concerning the processing of natural scenes using eye movement equipment have revealed that observers retain surprisingly little information from one fixation to the next. Other studies, in which fixation remained constant while elements within the scene were changed, have shown that, even without refixation, objects within a scene are surprisingly poorly represented. Although this effect has been studied in some detail in static scenes, there has been relatively little work on scenes as we would normally experience them, namely dynamic and ever changing. This paper describes a comparable form of change blindness in dynamic scenes, in which detection is performed in the presence of simulated observer motion. The study also describes how change blindness is affected by the manner in which the observer interacts with the environment, by comparing detection performance of an observer as the passenger or driver of a car. The experiments show that observer motion reduces the detection of orientation and location changes, and that the task of driving causes a concentration of object analysis on or near the line of motion, relative to passive viewing of the same scene.

11.
Our intuition that we richly represent the visual details of our environment is illusory. When viewing a scene, we seem to use detailed representations of object properties and interobject relations to achieve a sense of continuity across views. Yet, several recent studies show that human observers fail to detect changes to objects and object properties when localized retinal information signaling a change is masked or eliminated (e.g., by eye movements). However, these studies changed arbitrarily chosen objects which may have been outside the focus of attention. We draw on previous research showing the importance of spatiotemporal information for tracking objects by creating short motion pictures in which objects in both arbitrary locations and the very center of attention were changed. Adult observers failed to notice changes in both cases, even when the sole actor in a scene transformed into another person across an instantaneous change in camera angle (or “cut”).

12.
Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.

13.
We investigated the role of visual experience in the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally blind, and late blind participants. We first established that spatial updating occurs in sighted individuals for haptic scenes of novel objects. All participants were required to recognise a previously learned haptic scene of novel objects presented at either the same orientation as at learning or a different one, whilst they either remained in the same position or moved to a new position relative to the scene. Scene rotation incurred a cost in recognition performance in all groups. However, overall haptic scene recognition performance was worse in the congenitally blind group. Moreover, unlike the late blind or sighted groups, the congenitally blind group were unable to compensate for the cost of scene rotation with observer motion. Our results suggest that vision plays an important role in representing and updating spatial information encoded through touch, and have important implications for the role of vision in the development of neuronal areas involved in spatial cognition.

14.
In four experiments, we examined whether watching a scene from the perspective of a camera rotating across it allowed participants to recognize or identify the scene's spatial layout. Completing a representational momentum (RM) task, participants viewed a smoothly animated display and then indicated whether test probes were in the same position as they were in the final view of the animation. We found RM anticipations for the camera's movement across the scene, with larger distortions resulting from camera rotations that brought objects into the viewing frame compared with camera rotations that took objects out of the viewing frame. However, the RM task alone did not lead to successful recognition of the scene's map or identification of spatial relations between objects. Watching a scene from a rotating camera's perspective and making position judgments is not sufficient for learning spatial layout.

15.
When pictures of simple shapes (square, diamond) were seen frontally and obliquely, (1) the shapes with a deeper extent into pictured space underwent more rotation (Goldstein, 1979), which is an apparent turning to keep an orientation toward an observer’s changing position; (2) there was little effect of whether the observer knew the picture surface’s orientation in real space, except that such knowledge could prevent multistability; and (3) depicted picture frames also rotated. In other experiments, figural and frame rotations were independent of each other, and rotation was shown for real frames. The rotation of depthless depictions suggests that at least two rotational factors exist, one that involves the object’s virtual depth and one that does not. The nature of this second factor is discussed. Frame rotation appeared to subtract from object rotation when the two were being compared; this could explain a paradox in picture perception: Depicted orientations often seem little changed over viewpoints, despite (apparent) rotations with respect to real-space coordinates.

16.
Viewpoint-dependent recognition of familiar faces
Troje NF  Kersten D 《Perception》1999,28(4):483-487
The question whether object representations in the human brain are object-centered or viewer-centered has motivated a variety of experiments with divergent results. A key issue concerns the visual recognition of objects seen from novel views. If recognition performance depends on whether a particular view has been seen before, it can be interpreted as evidence for a viewer-centered representation. Earlier experiments used unfamiliar objects to provide the experimenter with complete control over the observer's previous experience with the object. In this study, we tested whether human recognition shows viewpoint dependence for the highly familiar faces of well-known colleagues and for the observer's own face. We found that observers are poorer at recognizing their own profile, whereas there is no difference in response time between frontal and profile views of other faces. This result shows that extensive experience and familiarity with one's own face is not sufficient to produce viewpoint invariance. Our result provides strong evidence for viewer-centered representations in human visual recognition even for highly familiar objects.

17.
Change blindness is the relative inability of normally sighted observers to detect large changes in scenes when the low-level signals associated with those changes are either masked or of extremely low magnitude. Change detection can be inhibited by saccadic eye movements, artificial saccades or blinks, and 'mud splashes'. We now show that change detection is also inhibited by whole image motion in the form of sinusoidal oscillations. The degree of disruption depends upon the frequency of oscillation, which at 3 Hz is equivalent to that produced by artificial blinks. Image motion causes the retinal image to be blurred and this is known to affect object recognition. However, our results are inconsistent with good change detection followed by a delay due to poor recognition of the changing object. Oscillatory motion can induce eye movements that potentially mask or inhibit the low-level signals related to changes in the scene, but we show that eye movements promote rather than inhibit change detection when the image is moving.

18.
There is a view that faces and objects are processed by different brain mechanisms. Different factors may modulate the extent to which face mechanisms are used for objects. To distinguish these factors, we present a new parametric multipart three-dimensional object set that provides researchers with a rich degree of control of important features for visual recognition such as individual parts and the spatial configuration of those parts. All other properties being equal, we demonstrate that perceived facelikeness in terms of spatial configuration facilitated performance at matching individual exemplars of the new object set across viewpoint changes (Experiment 1). Importantly, facelikeness did not affect perceptual discriminability (Experiment 2) or similarity (Experiment 3). Our findings suggest that perceptual resemblance to faces based on spatial configuration of parts is important for visual recognition even after equating physical and perceptual similarity. Furthermore, the large parametrically controlled object set and the standardized procedures to generate additional exemplars will provide the research community with invaluable tools to further understand visual recognition and visual learning.

19.
Completing a representational momentum (RM) task, participants viewed one of three camera motions through a scene and indicated whether test probes were in the same position as they were in the final view of the animation. All camera motions led to RM anticipations in the direction of motion, with larger distortions resulting from rotations than from a compound motion of a rotation and a translation. A surprise test of spatial layout, using an aerial map, revealed that the correct map was identified only following aerial views during the RM task. When the RM task displayed field views, including repeated views of multiple object groups, participants were unable to identify the overall spatial layout of the scene. These results suggest that the object–location binding thought to support certain change detection and visual search tasks might be viewpoint dependent.

20.
In an attempt to reconcile results of previous studies, several theorists have suggested that object recognition performance should range from viewpoint invariant to highly viewpoint dependent depending on how easy it is to differentiate the objects in a given recognition situation. The present study assessed recognition across depth rotations of a single general class of novel objects in three contexts that varied in difficulty. In an initial experiment, recognition in the context involving the most discriminable object differences was viewpoint invariant, but recognition in the least discriminable context and recognition in the intermediate context were equally viewpoint dependent. In a second experiment, utilizing gray-scale versions of the same stimuli, almost identical viewpoint-cost functions were obtained in all three contexts. These results suggest that differences in the geometry of stimulus objects, rather than task difficulty, lie at the heart of previously discrepant findings.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号