Similar Articles
20 similar articles found (search time: 31 ms)
1.
When moving toward a stationary scene, people judge their heading quite well from visual information alone. Much experimental and modeling work has been presented to analyze how people judge their heading for stationary scenes. However, in everyday life, we often move through scenes that contain moving objects. Most models have difficulty computing heading when moving objects are in the scene, and few studies have examined how well humans perform in the presence of moving objects. In this study, we tested how well people judge their heading in the presence of moving objects. We found that people perform remarkably well under a variety of conditions. The only condition that affects an observer’s ability to judge heading accurately consists of a large moving object crossing the observer’s path. In this case, the presence of the object causes a small bias in the heading judgments. For objects moving horizontally with respect to the observer, this bias is in the object’s direction of motion. These results present a challenge for computational models.

2.
When a person moves in a straight line through a stationary environment, the images of object surfaces move in a radial pattern away from a single point. This point, known as the focus of expansion (FOE), corresponds to the person's direction of motion. People judge their heading from image motion quite well in this situation. They perform most accurately when they can see the region around the FOE, which contains the most useful information for this task. Furthermore, a large moving object in the scene has no effect on observer heading judgments unless it obscures the FOE. Therefore, observers may obtain the most accurate heading judgments by focusing their attention on the region around the FOE. However, in many situations (e.g., driving), the observer must pay attention to other moving objects in the scene (e.g., cars and pedestrians) to avoid collisions. These objects may be located far from the FOE in the visual field. We tested whether people can accurately judge their heading and the three-dimensional (3-D) motion of objects while paying attention to one or the other task. The results show that differential allocation of attention affects people's ability to judge 3-D object motion much more than it affects their ability to judge heading. This suggests that heading judgments are computed globally, whereas judgments about object motion may require more focused attention.
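The FOE geometry described above lends itself to a simple computation: under pure observer translation, every image flow vector points radially away from the FOE, so each sampled vector contributes one linear constraint on the FOE's image position, solvable by least squares. The following sketch is illustrative only; `estimate_foe` and the synthetic flow field are assumptions for demonstration, not part of any study listed here.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus-of-expansion estimate.

    For pure translation, each flow vector (vx, vy) at image point
    (x, y) is parallel to (p - f), where f is the FOE. The cross
    product constraint vy*(x - fx) - vx*(y - fy) = 0 rearranges to
    vy*fx - vx*fy = vy*x - vx*y, one linear equation per sample.
    """
    points = np.asarray(points, dtype=float)
    flows = np.asarray(flows, dtype=float)
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic radial flow expanding from a FOE at (10, -5):
rng = np.random.default_rng(0)
pts = rng.uniform(-50, 50, size=(100, 2))
true_foe = np.array([10.0, -5.0])
flw = 0.02 * (pts - true_foe)  # speed grows with eccentricity
print(np.round(estimate_foe(pts, flw), 3))  # → [10. -5.]
```

With noise-free radial flow the system is exactly consistent, so the estimate recovers the FOE to machine precision; in practice the least-squares form simply averages the constraints.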

4.
Four experiments were directed at understanding the influence of multiple moving objects on curvilinear (i.e., circular and elliptical) heading perception. Displays simulated observer movement over a ground plane in the presence of moving objects depicted as transparent, opaque, or black cubes. Objects either moved parallel to or intersected the observer's path and either retreated from or approached the moving observer. Heading judgments were accurate and consistent across all conditions. The significance of these results for computational models of heading perception and for information in the global optic flow field about observer and object motion is discussed.

5.
When motion in the frontoparallel plane is temporally sampled, it is often perceived to be slower than its continuous counterpart. This finding stands in contrast to humans' ability to extrapolate and anticipate constant-velocity motion. We investigated whether this sampling bias generalizes to motion in the sagittal plane (i.e., objects approaching the observer). We employed a paradigm in which observers judged the arrival time of an oncoming object. We found detrimental effects of time sampling on both perceived time to contact and time to passage. Observers systematically overestimated the time it would take a frontally approaching object to intersect their eye plane. To rule out artifacts inherent in computer simulation, we replicated the experiment, using real objects. The bias persisted and proved to be robust across a large range of temporal and spatial variations. Energy and pooling mechanisms are discussed in an attempt to understand the effect.
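Arrival-time judgments of this kind are commonly modeled with the first-order time-to-contact variable tau: the object's angular size divided by its rate of expansion. A minimal sketch under small-angle, constant-approach-speed assumptions follows; the object size, distance, and speed are hypothetical numbers for illustration, not stimuli from the study above.

```python
def time_to_contact(theta, theta_dot):
    """First-order time-to-contact ('tau'): angular size divided by
    its rate of expansion. Valid for constant approach speed and
    small visual angles."""
    return theta / theta_dot

# Hypothetical object 0.5 m wide, 20 m away, approaching at 4 m/s,
# so the true time to contact is 20 / 4 = 5 s.
size, dist, speed = 0.5, 20.0, 4.0
theta = size / dist                  # small-angle size (rad)
theta_dot = size * speed / dist**2   # rate of angular expansion (rad/s)
print(round(time_to_contact(theta, theta_dot), 6))  # → 5.0
```

The point of tau is that the ratio cancels the unknown physical size and distance: only the retinal quantities theta and theta-dot are needed.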

6.
Although both the object and the observer often move in natural environments, the effect of motion on visual object recognition has not been well documented. The authors examined the effect of a reversal in the direction of rotation on both explicit and implicit memory for novel, 3-dimensional objects. Participants viewed a series of continuously rotating objects and later made either an old-new recognition judgment or a symmetric-asymmetric decision. For both tasks, memory for rotating objects was impaired when the direction of rotation was reversed at test. These results demonstrate that dynamic information can play a role in visual object recognition and suggest that object representations can encode spatiotemporal information.

7.
Many previous studies of object recognition have found view-dependent recognition performance when view changes are produced by rotating objects relative to a stationary viewing position. However, the assumption that an object rotation is equivalent to an observer viewpoint change ignores the potential contribution of extraretinal information that accompanies observer movement. In four experiments, we investigated the role of extraretinal information on real-world object recognition. As in previous studies focusing on the recognition of spatial layouts across view changes, observers performed better in an old/new object recognition task when view changes were caused by viewer movement than when they were caused by object rotation. This difference between viewpoint and orientation changes was due not to the visual background, but to the extraretinal information available during real observer movements. Models of object recognition need to consider other information available to an observer in addition to the retinal projection in order to fully understand object recognition in the real world.

8.
How do people control locomotion while their eyes are simultaneously rotating? A previous study found that during simulated rotation, they can perceive a straight path of self-motion from the retinal flow pattern, despite conflicting extraretinal information, on the basis of dense motion parallax and reference objects. Here we report that the same information is sufficient for active control of joystick steering. Participants steered toward a target in displays that simulated a pursuit eye movement. Steering was highly inaccurate with a textured ground plane (motion parallax alone), but quite accurate when an array of posts was added (motion parallax plus reference objects). This result is consistent with the theory that instantaneous heading is determined from motion parallax, and the path of self-motion is determined by updating heading relative to environmental objects. Retinal flow is thus sufficient for both perceiving self-motion and controlling self-motion with a joystick; extraretinal and positional information can also contribute, but are not necessary.

9.
As an observer views a picture from different viewing angles, objects in the picture appear to change orientation relative to the observer, but some objects change orientation more than others. This difference in rotation for different objects is called the differential rotation effect. The differential rotation is not, however, accompanied by corresponding changes in the perception of the spatial layout of objects in the picture. This lack of correspondence between the perception of rotation and the perception of spatial layout is a result of the fact that the information on a picture's surface defines two kinds of pictorial space with different properties. Rotation is perceived in terms of the pictorial space outside the picture, and spatial layout is perceived in terms of the pictorial space inside the picture.

10.
Perception of translational heading from optical flow
Radial patterns of optical flow produced by observer translation could be used to perceive the direction of self-movement during locomotion, and a number of formal analyses of such patterns have recently appeared. However, there is comparatively little empirical research on the perception of heading from optical flow, and what data there are indicate surprisingly poor performance, with heading errors on the order of 5-10 degrees. We examined heading judgments during translation parallel, perpendicular, and at oblique angles to a random-dot plane, varying observer speed and dot density. Using a discrimination task, we found that heading accuracy improved by an order of magnitude, with 75%-correct thresholds of 0.66 degrees in the highest speed and density condition and 1.2 degrees generally. Performance remained high with displays of 63 down to 10 dots, but it dropped significantly with only 2 dots; there was no consistent speed effect and no effect of angle of approach to the surface. The results are inconsistent with theories based on the local focus of outflow, local motion parallax, multiple fixations, differential motion parallax, and the local maximum of divergence. But they are consistent with Gibson's (1950) original global radial outflow hypothesis for perception of heading during translation.

11.
Rushton SK, Bradshaw MF, Warren PA. Cognition, 2007, 105(1): 237-245.
An object that moves is spotted almost effortlessly; it "pops out". When the observer is stationary, a moving object is uniquely identified by retinal motion. This is not so when the observer is also moving; as the eye travels through space, all scene objects change position relative to the eye, producing a complicated field of retinal motion. Without the unique identifier of retinal motion, an object moving relative to the scene should be difficult to locate. Using a search task, we investigated this proposition. Computer-rendered objects were moved and transformed in a manner consistent with movement of the observer. Despite the complex pattern of retinal motion, objects moving relative to the scene were found to pop out. We suggest the brain uses its sensitivity to optic flow to "stabilise" the scene, allowing the scene-relative movement of an object to be identified.

12.
In four experiments, a scalar judgment of perceived depth was used to examine the spatial and temporal characteristics of the perceptual buildup of three-dimensional (3-D) structure from optical motion as a function of the depth in the simulated object, the speed of motion, the number of elements defining the object, the smoothness of the optic flow field, and the type of motion. In most of the experiments, the objects were polar projections of simulated half-ellipsoids undergoing a curvilinear translation about the screen center. It was found that the buildup of 3-D structure was: (1) jointly dependent on the speed at which an object moved and on the range through which the object moved; (2) more rapid for deep simulated objects than for shallow objects; (3) unaffected by the number of points defining the object, including the maximum apparent depth within each simulated object-depth condition; (4) not disrupted by nonsmooth optic flow fields; and (5) more rapid for rotating objects than for curvilinearly translating objects.

14.
Subjects either named rotated objects or decided whether the objects would face left or right if they were upright. Response time in the left-right task was influenced by a rotation aftereffect or by the physical rotation of the object, which is consistent with the view that the objects were mentally rotated to the upright and that, depending on its direction, the perceived rotary motion of the object either speeded or slowed mental rotation. Perceived rotary motion did not influence naming time, which suggests that the identification of rotated objects does not involve mental rotation.

15.
Zhang H, Mou W, McNamara TP. Cognition, 2011, (3): 419-429.
Three experiments examined the role of reference directions in spatial updating. Participants briefly viewed an array of five objects. A non-egocentric reference direction was primed by placing a stick under two objects in the array at the time of learning. After a short interval, participants detected which object had been moved at a novel view that was caused by table rotation or by their own locomotion. The stick was removed at test. The results showed that detection of position change was better when an object not on the stick was moved than when an object on the stick was moved. Furthermore, change detection was better in the observer locomotion condition than in the table rotation condition only when an object on the stick was moved, but not when an object not on the stick was moved. These results indicated that when the reference direction was not accurately indicated in the test scene, detection of position change was impaired, but this impairment was smaller in the observer locomotion condition. These results suggest that people not only represent objects’ locations with respect to a fixed reference direction but also represent and update their orientation according to the same reference direction, which can be used to recover the accurate reference direction and facilitate detection of position change when no accurate reference direction is presented in the test scene.

16.
The motion of objects during motion parallax can be decomposed into 2 observer-relative components: translation and rotation. The depth ratio of objects in the visual field is specified by the inverse ratio of their angular displacement (from translation) or equivalently by the inverse ratio of their rotations. Despite the equal mathematical status of these 2 information sources, it was predicted that observers would be far more sensitive to the translational than rotational component. Such a differential sensitivity is implicitly assumed by the computer graphics technique billboarding, in which 3-dimensional (3-D) objects are drawn as planar forms (i.e., billboards) maintained normal to the line of sight. In 3 experiments, observers were found to be consistently less sensitive to rotational anomalies. The implications of these findings for kinetic depth effect displays and billboarding techniques are discussed.
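The depth-ratio relation stated above is easy to make concrete: for a laterally translating observer, an object's angular displacement is approximately T / Z (translation over distance), so the depth ratio of two objects equals the inverse ratio of their angular displacements. A small numerical sketch, with hypothetical distances and translation, not stimuli from the study:

```python
def depth_ratio(ang_disp_1, ang_disp_2):
    """Relative depth from motion parallax.

    For a lateral observer translation T, an object at distance Z is
    displaced by roughly T / Z radians (small-angle approximation),
    so Z1 / Z2 equals the inverse ratio of angular displacements.
    """
    return ang_disp_2 / ang_disp_1

# Observer translates T = 0.1 m; objects at 2 m and 6 m.
T = 0.1
w1, w2 = T / 2.0, T / 6.0   # angular displacements (rad)
print(round(depth_ratio(w1, w2), 4))  # → 0.3333, i.e. Z1/Z2 = 2/6
```

Note that T cancels out of the ratio, which is why relative (not absolute) depth is what parallax specifies.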

17.
We investigated the role of visual experience in the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally blind, and late blind participants. We first established that spatial updating occurs in sighted individuals for haptic scenes of novel objects. All participants were required to recognise a previously learned haptic scene of novel objects presented at either the same orientation as at learning or a different one, while they either remained in the same position or moved to a new position relative to the scene. Scene rotation incurred a cost in recognition performance in all groups. However, overall haptic scene recognition performance was worse in the congenitally blind group. Moreover, unlike the late blind or sighted groups, the congenitally blind group were unable to compensate for the cost of scene rotation with observer motion. Our results suggest that vision plays an important role in representing and updating spatial information encoded through touch, and they have important implications for the role of vision in the development of neuronal areas involved in spatial cognition.

18.
Object and observer motion in the perception of objects by infants
Sixteen-week-old human infants distinguish optical displacements given by their own motion from displacements given by moving objects, and they use only the latter to perceive the unity of partly occluded objects. Optical changes produced by moving the observer around a stationary object produced attentional levels characteristic of stationary observers viewing stationary displays and much lower than those shown by stationary observers viewing moving displays. Real displacements of an object with no subject-relative displacement, produced by moving an object so as to maintain a constant relation to the moving observer, evoked attentional levels that were higher than with stationary displays and more characteristic of attention to moving displays, a finding suggesting detection of the real motion. Previously reported abilities of infants to perceive the unity of partly occluded objects from motion information were found to depend on real object motion rather than on optical displacements in general. The results suggest that object perception depends on registration of the motions of surfaces in the three-dimensional layout.

19.
Feldman J, Tremoulet PD. Cognition, 2006, 99(2): 131-165.
How does an observer decide that a particular object viewed at one time is actually the same object as one viewed at a different time? We explored this question using an experimental task in which an observer views two objects as they simultaneously approach an occluder, disappear behind the occluder, and re-emerge from behind the occluder, having switched paths. In this situation the observer either sees both objects continue straight behind the occluder (called "streaming") or sees them collide with each other and switch directions ("bouncing"). This task has been studied in the literature on motion perception, where interest has centered on manipulating spatiotemporal aspects of the motion paths (e.g. velocity, acceleration). Here we instead focus on featural properties (size, luminance, and shape) of the objects. We studied the way degrees and types of featural dissimilarity between the two objects influence the percept of bouncing vs. streaming. When there is no featural difference, the preference for straight motion paths dominates, and streaming is usually seen. But when featural differences increase, the preponderance of bounce responses increases. That is, subjects prefer the motion trajectory in which each continuously existing individual object trajectory contains minimal featural change. Under this model, the data reveal in detail exactly what magnitudes of each type of featural change subjects implicitly regard as reasonably consistent with a continuously existing object. This suggests a simple mathematical definition of "individual object": an object is a path through feature-trajectory space that minimizes feature change, or, more succinctly, an object is a geodesic in Mahalanobis feature space.
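The minimal-feature-change account can be illustrated by scoring the two candidate correspondences, streaming (straight paths) versus bouncing (switched paths), by total Mahalanobis feature distance along each object's trajectory, and predicting whichever is smaller. This is a toy sketch of the idea only; the function names, feature vectors, and identity covariance are assumptions, not the authors' implementation or fitted model.

```python
import numpy as np

def mahalanobis(x, y, cov):
    """Mahalanobis distance between two feature vectors, i.e.
    Euclidean distance after whitening by the covariance."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    cov_inv = np.linalg.inv(np.asarray(cov, dtype=float))
    return float(np.sqrt(d @ cov_inv @ d))

def percept(pre, post, cov):
    """Predict 'streaming' vs 'bouncing' from feature change.

    pre[i]  = features of the object entering on path i.
    post[i] = features of the object exiting on the straight
              continuation of path i.
    Streaming pairs pre[0]-post[0] and pre[1]-post[1]; bouncing pairs
    pre[0]-post[1] and pre[1]-post[0]. The correspondence with less
    total feature change wins (ties favor straight paths).
    """
    stream = mahalanobis(pre[0], post[0], cov) + mahalanobis(pre[1], post[1], cov)
    bounce = mahalanobis(pre[0], post[1], cov) + mahalanobis(pre[1], post[0], cov)
    return "streaming" if stream <= bounce else "bouncing"

cov = np.eye(2)  # hypothetical (size, luminance) feature space
# Exit features match the straight-path continuation -> streaming:
print(percept([[1, 0.2], [1, 0.8]], [[1, 0.2], [1, 0.8]], cov))  # streaming
# Exit features swapped across paths -> bouncing minimizes change:
print(percept([[1, 0.2], [1, 0.8]], [[1, 0.8], [1, 0.2]], cov))  # bouncing
```

The covariance matrix is what makes the distance "Mahalanobis" rather than Euclidean: it lets different feature dimensions (size, luminance, shape) carry different weights in the change metric.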

20.
Two experiments were conducted to investigate whether locomotion to a novel test view would eliminate viewpoint costs in visual object processing. Participants performed a sequential matching task for object identity or object handedness, using novel 3-D objects displayed in a head-mounted display. To change the test view of the object, the orientation of the object in 3-D space and the test position of the observer were manipulated independently. Participants were more accurate when the test view was the same as the learned view than when the views were different, whether the view change of the object was 50° or 90°. With 50° rotations, participants were more accurate at novel test views caused by their own locomotion (object stationary) than at those caused by object rotation (observer stationary), but this difference disappeared when the view change was 90°. These results indicate that facilitation of spatial updating during locomotion occurs within a limited range of viewpoints, but that such facilitation does not eliminate viewpoint costs in visual object processing.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号