Similar articles
Found 20 similar articles (search time: 31 ms)
1.
Ooi TL  He ZJ 《Psychological review》2007,114(2):441-454
In her seminal article in Psychological Review, A. S. Gilinsky (1951) successfully described the relationship between physical distance (D) and perceived distance (d) with the equation d = DA/(A + D), where A is a constant. To understand its theoretical underpinning, the authors of the current article capitalized on space perception mechanisms based on the ground surface to derive the distance equation d = H·cos(α)/sin(α + η), where H is the observer's eye height, α is the angular declination below the horizon, and η is the slant error in representing the ground surface. Their equation predicts that (a) perceived distance is affected by the slant error in representing the ground surface; (b) when the slant error is small, the ground-based equation takes the same form as Gilinsky's equation; and (c) the parameter A in Gilinsky's equation represents the ratio of the observer's eye height to the sine of the slant error. These predictions were empirically confirmed, thus bestowing a theoretical foundation on Gilinsky's equation.
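The relationship between the two equations can be checked numerically. A minimal sketch (not from the paper; the eye height and slant-error values are assumed for illustration): for a target on the ground at distance D, the angular declination is α = atan(H/D), and with A = H/sin(η) the ground-based equation should closely match Gilinsky's form when η is small.

```python
import math

def ground_based_distance(D, H, eta):
    """Ooi & He's ground-based equation: d = H*cos(alpha) / sin(alpha + eta),
    where alpha = atan(H/D) is the angular declination of an on-ground target."""
    alpha = math.atan2(H, D)
    return H * math.cos(alpha) / math.sin(alpha + eta)

def gilinsky_distance(D, A):
    """Gilinsky (1951): d = D*A / (A + D)."""
    return D * A / (A + D)

H = 1.6                    # observer eye height in metres (assumed)
eta = math.radians(2.0)    # small slant error (assumed)
A = H / math.sin(eta)      # predicted value of Gilinsky's constant

for D in (2, 5, 10, 20, 50):
    d_ground = ground_based_distance(D, H, eta)
    d_gil = gilinsky_distance(D, A)
    print(D, round(d_ground, 3), round(d_gil, 3))
```

For these values the two equations agree to within a few centimetres across the whole range, and both predict increasing underestimation (d < D) at larger distances.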

2.
In two experiments we examined the role of visual horizon information in absolute egocentric distance judgments to on-ground targets. Sedgwick [1983, in Human and Machine Vision (New York: Academic Press) pp 425-458] suggested that the visual system may utilize the angle of declination from a horizontal line of sight to the target location (the horizon-distance relation) to determine absolute distances on infinite ground surfaces. While studies have supported this hypothesis, less is known about the specific cues (vestibular, visual) used to determine the horizontal line of sight. We investigated this question by requiring observers to judge distances under degraded vision given an unaltered or raised visual horizon. The results suggest that visual horizon information does influence perception of absolute distances, as evidenced by two different action-based measures: walking or throwing without vision to previously viewed targets. Distances were judged as shorter in the presence of a raised visual horizon. The results are discussed with respect to how the visual system accurately determines absolute distance to objects on a finite ground plane and with respect to their implications for understanding space perception in low-vision individuals.

3.
Most ground surfaces contain various types of texture gradient information that serve as depth cues for space perception. We investigated how linear perspective, a type of texture gradient information on the ground, affects judged absolute distance and eye level. Phosphorescent elements were used to display linear perspective information on the floor in an otherwise dark room. We found that observers were remarkably receptive to such information. Changing the configuration of the linear perspective information from parallel to converging resulted in relatively larger judged distances and lower judged eye levels. These findings support the proposals that (1) the visual system has a bias for representing an image of converging lines as one of parallel lines on a downward-slanting surface and (2) the convergence point of a converging-lines image represents the eye level. Finally, we found that the visual system may be less sensitive to the manipulation of compression gradient information than of linear perspective information.

4.
He ZJ  Wu B  Ooi TL  Yarbrough G  Wu J 《Perception》2004,33(7):789-806
On the basis of the finding that a common and homogeneous ground surface is vital for accurate egocentric distance judgments (Sinai et al, 1998 Nature 395 497-500), we propose a sequential-surface-integration-process (SSIP) hypothesis to elucidate how the visual system constructs a representation of the ground surface in the intermediate distance range. According to the SSIP hypothesis, a near ground-surface representation is formed from near depth cues, and is utilized as an anchor to integrate the more distant surfaces by using texture-gradient information as the depth cue. The SSIP hypothesis provides an explanation for the finding that egocentric distance is underestimated when a texture boundary exists on the ground surface that commonly supports the observer and target. We tested the prediction that the fidelity of the visually represented ground-surface reference frame depends on how the visual system selects the surface information for integration. Specifically, if information is selected along a direct route between the observer and target where the ground surface is disrupted by an occluding object, the ground surface will be inaccurately represented. In experiments 1-3 we used a perceptual task and two different visually directed tasks to show that this leads to egocentric distance underestimation. Judgment is accurate, however, when the observer selects the continuous ground information bypassing the occluding object (indirect route), as found in experiments 4 and 5 with a visually directed task. Altogether, our findings provide support for the SSIP hypothesis and reveal, surprisingly, that the phenomenal visual space is not unique but depends on how optic information is selected.

5.
Wu B  He ZJ  Ooi TL 《Perception》2007,36(5):703-721
The sequential-surface-integration-process (SSIP) hypothesis was proposed to elucidate how the visual system constructs the ground-surface representation in the intermediate distance range (He et al, 2004 Perception 33 789-806). According to the hypothesis, the SSIP constructs an accurate representation of the near ground surface by using reliable near depth cues. The near ground representation then serves as a template for integrating the adjacent surface patch by using the texture gradient information as the predominant depth cue. By sequentially integrating the surface patches from near to far, the visual system obtains the global ground representation. A critical prediction of the SSIP hypothesis is that, when an abrupt texture-gradient change exists between the near and far ground surfaces, the SSIP can no longer accurately represent the far surface. Consequently, the representation of the far surface will be slanted upward toward the frontoparallel plane (owing to the intrinsic bias of the visual system), and the egocentric distance of a target on the far surface will be underestimated. Our previous findings in the real 3-D environment have shown that observers underestimated the target distance across a texture boundary. Here, we used the virtual-reality system to first test distance judgments with a distance-matching task. We created the texture boundary by having virtual grass- and cobblestone-textured patterns abutting on a flat (horizontal) ground surface in experiment 1, and by placing a brick wall to interrupt the continuous texture gradient of a flat grass surface in experiment 2. In both instances, observers underestimated the target distance across the texture boundary, compared to the homogeneous-texture ground surface (control). Second, we tested the proposal that the far surface beyond the texture boundary is perceived as slanted upward. For this, we used a virtual checkerboard-textured ground surface that was interrupted by a texture boundary. 
We found that not only was the target distance beyond the texture boundary underestimated relative to the homogeneous-texture condition, but the far surface beyond the texture boundary was also perceived as relatively slanted upward (experiment 3). Altogether, our results confirm the predictions of the SSIP hypothesis.

6.
Under many circumstances, humans do not judge the location of objects in space where they really are. For instance, when a background is added to a target object, the judged position of the target with respect to oneself (egocentric position) is shifted in the direction opposite to the placement of the background with respect to the body midline. It is an ongoing debate whether such effects are due to a uni- or bi-directional interaction between allo- and egocentric spatial representations in the brain, or reflect a response strategy, known as the perceived midline shift. In this study, the effects of allocentric stimulus coordinates on perceived egocentric position were examined more precisely and in a quantitative manner. Furthermore, it was investigated whether the judged allocentric position (with respect to a background) is also influenced by the egocentric position in space of that object. Allo- and egocentric coordinates were varied independently. Also, the effect of background luminance on the observed interactions between spatial coordinates was determined. Since background luminance had an effect on the size of the interaction between allocentric stimulus coordinates and egocentric judgments, and no reverse interaction was found, the interaction between ego- and allocentric reference frames is most likely unidirectional, with the latter affecting the former. This interaction effect was described in a quantitative manner.

7.
Wu J  He ZJ  Ooi TL 《Perception》2005,34(9):1045-1060
The eye level and the horizontal midline of the body trunk can serve, respectively, as references for judging the vertical and horizontal egocentric directions. We investigated whether the optic-flow pattern, which is the dynamic motion information generated when one moves in the visual world, can be used by the visual system to determine and calibrate these two references. Using a virtual-reality setup to generate the optic-flow pattern, we showed that the judged elevation of the eye level and the azimuth of the horizontal midline of the body trunk are biased toward the positional placement of the focus of expansion (FOE) of the optic-flow pattern. Furthermore, for the vertical reference, prolonged viewing of an optic-flow pattern with lowered FOE not only causes a lowered judged eye level after removal of the optic-flow pattern, but also an overestimation of distance in the dark. This is equivalent to a reduction in the judged angular declination of the object after adaptation, indicating that the optic-flow information also plays a role in calibrating the extraretinal signals used to establish the vertical reference.

8.
An egocentric frame of reference in implicit motor sequence learning
We investigated which frame of reference is evoked during implicit motor sequence learning. Participants completed a typical serial reaction time task. In the first experiment, we isolated egocentric and allocentric frames of reference and found that learning was solely in an egocentric reference frame. In a second experiment, we isolated hand-centered space from other egocentric frames of reference. We found that for a one-handed sequencing task, the sequence was coded in an egocentric reference frame but not a hand-centered reference frame. Our results are restricted to implicit learning of novel sequences in the early stages of learning. These findings are consistent with claims that the neural mechanisms involved in motor skill learning operate in egocentric coordinates.

9.
Kelly JW  Loomis JM  Beall AC 《Perception》2004,33(4):443-454
Judgments of exocentric direction are quite common, especially when judging where others are looking or pointing. To investigate these judgments in large-scale space, observers were shown two targets in a large open field and were asked to judge the exocentric direction specified by the targets. The targets ranged in egocentric distance from 5 to 20 m with target-to-target angular separations of 45 degrees, 90 degrees, and 135 degrees. Observers judged exocentric direction using two methods: (i) by judging which point on a distant fence appeared collinear with the two targets, and (ii) by orienting their body in a direction parallel with the perceived line segment. In the collinearity task, observers had to imagine the line connecting the targets and then extrapolate this imagined line out to the fence. Observers indicated the perceived point of collinearity on a handheld 360 degrees panoramic cylinder representing their vista. The two judgment methods gave similar results except for a constant bias associated with the body-pointing response. Aside from this bias, the results of these two methods agree with other existing research indicating an effect of relative egocentric distance to the targets on judgment error: line segments are perceived as being rotated in depth. Additionally, verbal estimates of egocentric and exocentric distance suggest that perceived distance is not the cause for the systematic errors in judging exocentric direction.

10.
There is controversy over the existence, nature, and cause of error in egocentric distance judgments. One proposal is that the systematic biases often found in explicit judgments of egocentric distance along the ground may be related to recently observed biases in the perceived declination of gaze (Durgin & Li, Attention, Perception, & Psychophysics, in press). To measure perceived egocentric distance nonverbally, observers in a field were asked to position themselves so that their distance from one of two experimenters was equal to the frontal distance between the experimenters. Observers placed themselves too far away, consistent with egocentric distance underestimation. A similar experiment was conducted with vertical frontal extents. Both experiments were replicated in panoramic virtual reality. Perceived egocentric distance was quantitatively consistent with angular bias in perceived gaze declination (1.5 gain). Finally, an exocentric distance-matching task was contrasted with a variant of the egocentric matching task. The egocentric matching data approximate a constant compression of perceived egocentric distance with a power function exponent of nearly 1; exocentric matches had an exponent of about 0.67. The divergent pattern between egocentric and exocentric matches suggests that they depend on different visual cues.
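The quantitative link between a 1.5 gain on perceived gaze declination and compressed distance judgments can be illustrated numerically. A minimal sketch (not the authors' model; it simply assumes the distance estimate inverts the biased declination of an on-ground target, with an assumed eye height):

```python
import math

def perceived_distance(D, H, gain=1.5):
    """True angular declination of an on-ground target is alpha = atan(H/D).
    If perceived declination is exaggerated by `gain` (~1.5 per Durgin & Li),
    the distance recovered from d = H / tan(gain * alpha) is compressed."""
    alpha = math.atan2(H, D)
    return H / math.tan(gain * alpha)

H = 1.6  # eye height in metres (assumed)
for D in (2, 4, 8, 16):
    print(D, round(perceived_distance(D, H), 2))
```

With gain = 1.0 the recovered distance equals the physical distance; with gain = 1.5 every distance is underestimated, and the underestimation grows with distance, in the direction the abstract describes.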

11.
The angular declination of a target with respect to eye level is known to be an important cue to egocentric distance when objects are viewed or can be assumed to be resting on the ground. When targets are fixated, angular declination and the direction of the gaze with respect to eye level have the same objective value. However, any situation that limits the time available to shift gaze could leave to-be-localized objects outside the fovea, and, in these cases, the objective values would differ. Nevertheless, angular declination and gaze declination are often conflated, and the role for retinal eccentricity in egocentric distance judgments is unknown. We report two experiments demonstrating that gaze declination is sufficient to support judgments of distance, even when extraretinal signals are all that are provided by the stimulus and task environment. Additional experiments showed no accuracy costs for extrafoveally viewed targets and no systematic impact of foveal or peripheral biases, although a drop in precision was observed for the most retinally eccentric targets. The results demonstrate the remarkable utility of target direction, relative to eye level, for judging distance (signaled by angular declination and/or gaze declination) and are consonant with the idea that detection of the target is sufficient to capitalize on the angular declination of floor-level targets (regardless of the direction of gaze).

12.
Physical constraints produce variations in the shapes of biological objects that correspond to their sizes. Bingham (in press-b) showed that two properties of tree form could be used to evaluate the height of trees. Observers judged simulated tree silhouettes of constant image size appearing on a ground texture gradient with a horizon. According to the horizon ratio hypothesis, the horizon can be used to judge object size because it intersects the image of an object at eye height. The present study was an investigation of whether the locus of the horizon might account for Bingham's previous results. Tree images were projected to a simulated eye height that was twice that used previously. Judgments were not halved, as predicted by the horizon ratio hypothesis. Next, the original results were replicated in viewing conditions that encouraged the use of the horizon ratio by including correct eye height, gaze level, and visual angles. The heights of cylinders were inaccurately judged when they appeared with a horizon but without trees. Judgments were much more accurate when the cylinders also appeared in the context of trees.
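The horizon ratio itself is a one-line computation. A minimal sketch (not from the paper; the eye height and image measurements are assumed for illustration): because the horizon intersects any object resting on the ground at the observer's eye height, the object's true height can be recovered from image measurements alone.

```python
def horizon_ratio_height(eye_height, image_obj_height, image_base_to_horizon):
    """Horizon ratio: the horizon cuts an on-ground object at eye height,
    so true height = eye_height * (object's image height / image distance
    from the object's base up to the horizon)."""
    return eye_height * image_obj_height / image_base_to_horizon

# Example (assumed values): eye height 1.6 m, object spans 30 units in the
# image, and its base sits 10 units below the horizon -> 4.8 m tall.
print(horizon_ratio_height(1.6, 30, 10))
```

Note the units of the two image measurements cancel, which is why the cue works without knowing viewing distance; only the observer's eye height enters in metric units.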

13.
The effect of egocentric reference frames on palmar haptic perception of orientation was investigated in vertically separated locations in a sagittal plane. Reference stimuli to be haptically matched were presented either haptically (to the contralateral hand) or visually. As in prior investigations of haptic orientation perception, a strong egocentric bias was found, such that haptic orientation matches made in the lower part of personal space were much lower (i.e., were perceived as being higher) than those made at eye level. The same haptic bias was observed both when the reference surface to be matched was observed visually and when bimanual matching was used. These findings support the conclusion that, despite the presence of an unambiguous allocentric (gravitational) reference frame in vertical planes, haptic orientation perception in the sagittal plane reflects an egocentric bias.

14.
We examined the hypothesis that angular errors in visually directed pointing, in which an unseen target is pointed to after its direction has been seen, can be attributed to the difference between the locations of the visual and kinesthetic egocentres. Experiment 1 showed that in three of four cases, angular errors in visually directed pointing equaled those in kinesthetically directed pointing, in which a visual target was pointed to after its direction had been felt. Experiment 2 confirmed the results of experiment 1 for targets at two different egocentric distances. Experiment 3 showed that when the kinesthetic egocentre was used as the reference of direction, angular errors in visually directed pointing equaled those in visually directed reaching, in which an unseen target is reached after its location has been seen. These results suggest that in visually and kinesthetically directed pointing, the egocentric directions represented in visual space are transferred to kinesthetic space and vice versa.

15.
This study examines bias (constant error) in spatial memory in an effort to determine whether this bias is defined by a dynamic egocentric reference frame that moves with the observer or by an environmentally fixed reference frame. Participants learned the locations of six target objects around them in a room, were blindfolded, and then rotated themselves to face particular response headings. From each response heading, participants used a pointer to indicate the remembered azimuthal locations of the objects. Analyses of the angular pointing errors showed a previously observed pattern of bias. More importantly, it appeared that this pattern of bias was defined relative to and moved with the observer; that is, it was egocentric and dynamic. These results were interpreted in the framework of a modified category adjustment model as suggesting the existence of dynamic categorical (nonmetric) spatial codes.

16.
This paper reports a series of experiments on the perceived position of the hand in egocentric space. The experiments focused on the bias in the proprioceptively perceived position of the hand at a series of locations spanning the midline from left to right. Perceived position was tested in a matching paradigm, in which subjects indicated the perceived position of a target, which could have been either a visual stimulus or their own fingertip, by placing the index finger of the other hand in the corresponding location on the other side of a fixed surface. Both the constant error, or bias, and the variable error, or consistency of matching attempts, were measured. Experiment 1 showed that (1) there is a far-left advantage in matching tasks, such that errors in perceived position are significantly lower in extreme-left positions than in extreme-right positions, and (2) there is a strong hand-bias effect in the absence of vision, such that the perceived positions of the left and right index fingertips held in the same actual target position in fact differ significantly. Experiments 2 and 3 demonstrated that this hand-bias effect is genuinely due to errors in the perceived position of the matched hand, and not to the attempt at matching it with the other hand. These results suggest that there is no unifying representation of egocentric, proprioceptive space. Rather, separate representations appear to be maintained for each effector. The bias of these representations may reflect the motor function of that effector.

17.
The projected height of an object in a scene relative to a ground surface influences its perceived size and distance, but the effect of height should change when the object is moved above the horizon. In four experiments, observers judged relative size or relative distance for pairs of objects varying in height with respect to the horizon. Higher objects equal in projected size were judged larger below the horizon, but the relative size effect was reversed either when one object was on the horizon and one was above the horizon or when both objects were above the horizon. With the real horizon not explicitly present in the display, relative size judgements were affected both by the boundary of the visible surface and the vanishing point implied by the converging lines. For relative distance judgements, the higher object was judged more distant regardless of the height of the objects relative to the perceptual horizon, resulting in a reversal of the relation between size and distance judgements for objects above the horizon.

18.
The perception of depth and slant in three-dimensional scenes specified by texture was investigated in five experiments. Subjects were presented with computer-generated scenes of a ground and ceiling plane receding in depth. Compression, convergence, and grid textures were examined. The effect of the presence or absence of a gap in the center of the display was also assessed. Under some conditions perceived slant and depth from compression were greater than those found with convergence. The relative effectiveness of compression in specifying surface slant was greater for surfaces closer to ground planes (80 degrees slant) than for surfaces closer to frontal parallel planes (40 degrees slant). The usefulness of compression was also observed with single-plane displays and with displays with surfaces oriented to reduce information regarding the horizon.

19.
In three experiments we examined whether memory for object locations in peripersonal space in the absence of vision is affected by the correspondence between encoding and test either of the body position or of the reference point. In particular, the study focuses on the distinction between different spatial representations, by using a paradigm in which participants are asked to relocate objects explored haptically. Three frames of reference were systematically compared. In experiment 1, participants relocated the objects either from the same position as at learning by taking their own body as reference (centred egocentric condition) or from a 90 degrees decentred position (allocentric condition). Performance was measured in terms of linear distance errors and angular distance errors. Results revealed that the allocentric condition was more difficult than the centred egocentric condition. In experiment 2, participants performed either the centred egocentric condition or a decentred egocentric condition, in which the body position during the test was the same as at encoding (egocentric) but the frame of reference was based on a point decentred by 90 degrees. The decentred egocentric condition was found to be more difficult than the centred egocentric condition. Finally, in experiment 3, participants performed in the decentred egocentric condition or the allocentric condition. Here, the allocentric condition was found to be more difficult than the decentred egocentric condition. Taken together, the results suggest that, even in peripersonal space and in the absence of vision, different frames of reference can be distinguished. In particular, the decentred egocentric condition involves a frame of reference which seems to be neither allocentric nor totally egocentric.
