Similar Literature
20 similar documents found (search time: 31 ms)
1.
Egocentric distance perception is a psychological process in which observers use various depth cues to estimate the distance between a target and themselves. The impairment of basic visual functions in amblyopia and its treatment have been well documented; however, disorders of egocentric distance perception in amblyopes are poorly understood. In this review, we describe the cognitive mechanisms of egocentric distance perception and then focus on empirical evidence for disorders of egocentric distance perception in amblyopes across the whole visual space. In personal space (within 2 m), amblyopes have difficulty showing normal hand-eye coordination; in action space (2-30 m), amblyopes cannot accurately judge the distance of a target suspended in the air. Few studies have focused on the performance of amblyopes in vista space (beyond 30 m). Finally, five critical topics for future research are discussed: 1) the mechanisms of egocentric distance perception need to be explored systematically in all three spaces; 2) the laws of egocentric distance perception for moving objects in amblyopes should be examined; 3) comparisons among the three subtypes of amblyopia remain insufficient; 4) distance perception should be studied under other theoretical frameworks; and 5) the mechanisms of amblyopia should be explored using virtual reality.

2.
Philbeck JW. Perception, 2000, 29(3): 259-272
When observers indicate the magnitude of a previously viewed spatial extent by walking without vision to each endpoint, there is little evidence of the perceptual collapse in depth associated with some other methods (e.g. visual matching). One explanation is that both walking and matching are perceptually mediated, but that the perceived layout is task-dependent. In this view, perceived depth beyond 2-3 m is typically distorted by an equidistance effect, whereby the egocentric distances of nonfixated portions of the depth interval are perceptually pulled toward the fixated point. Action-based responses, however, recruit processes that enhance perceptual accuracy as the stimulus configuration is inspected. This predicts that walked indications of egocentric distance performed without vision should exhibit equidistance effects at short exposure durations, but become more accurate at longer exposures. In this paper, two experiments demonstrate that in a well-lit environment there is substantial perceptual anisotropy at near distances (3-5 m), but that walked indications of egocentric distance are quite accurate after brief glimpses (150 ms), even when the walking target is not directly fixated. Longer exposures do not increase accuracy. The results are clearly inconsistent with the task-dependent information processing explanation, but do not rule out others in which perception mediates both walking and visual matches.

3.
Target selection for action depends not only on the egocentric location of objects estimated from retinal and extraretinal variables, but also on the assessment of current action possibilities. In the present study, we investigated the effect of altering sensorimotor anticipation processes on subsequent perceptual estimates of reachability. To do so, we conducted two experiments in which we changed the relation between visual distance and movement amplitude. Experiment 1 showed that iterative visuomotor adaptation to distorted visual feedback (in steps of ±15 mm, up to a total adaptation of ±75 mm) led to a congruent variation of perceived reachable space, although the first introduction of the shifted visual feedback produced a reduction of perceived reachable space whatever the direction of the feedback shift. Experiment 2 showed that increasing uncertainty about visuomotor performances, by providing a visual feedback randomly shifted in depth (±7.5 mm), produced the same reduction of perceived reachable space in the absence of visuomotor adaptation. Taken together, these data suggest that the visual perception of reachable space depends on a motor-related perceptual system, which is affected by both visuomotor recalibration and reliability of the visuomotor system.

4.
This study examines the change in the perceived distance of an object in three-dimensional space when the object and/or the observer's head is moved along the line of sight (sagittal motion) as a function of the perceived absolute (egocentric) distance of the object and the perceived motion of the head. To analyze the processes involved, two situations, labeled A and B, were used in four experiments. In Situation A, the observer was stationary and the perceived motion of the object was measured as the object was moved toward and away from the observer. In Situation B, the same visual information regarding the changing perceived egocentric distance between the observer and object was provided as in Situation A, but part or all of the change in visual egocentric distance was produced by the sagittal motion of the observer's head. A comparison of the perceived motion of the object in the two situations was used to measure the compensation in the perception of the motion of the object as a result of the head motion. Compensation was often clearly incomplete, and errors were often made in the perception of the motion of the stimulus object. A theory is proposed, which identifies the relation between the changes in the perceived egocentric distance of the object and the tandem motion of the object resulting from the perceived motion of the head to be the significant factor in the perception of the sagittal motion of the stimulus object in Situation B.

5.
This study examines the change in the perceived distance of an object in three-dimensional space when the object and/or the observer’s head is moved along the line of sight (sagittal motion) as a function of the perceived absolute (egocentric) distance of the object and the perceived motion of the head. To analyze the processes involved, two situations, labeled A and B, were used in four experiments. In Situation A, the observer was stationary and the perceived motion of the object was measured as the object was moved toward and away from the observer. In Situation B, the same visual information regarding the changing perceived egocentric distance between the observer and object was provided as in Situation A, but part or all of the change in visual egocentric distance was produced by the sagittal motion of the observer’s head. A comparison of the perceived motion of the object in the two situations was used to measure the compensation in the perception of the motion of the object as a result of the head motion. Compensation was often clearly incomplete, and errors were often made in the perception of the motion of the stimulus object. A theory is proposed, which identifies the relation between the changes in the perceived egocentric distance of the object and the tandem motion of the object resulting from the perceived motion of the head to be the significant factor in the perception of the sagittal motion of the stimulus object in Situation B.

6.
To investigate how social cooperation information affects distance perception, the present study conducted two experiments using dynamic chasing scenes, manipulating the interaction pattern between two chasers (chasing the same target either cooperatively or independently) and the presence or absence of the chased target. The results showed that, compared with random motion and independent chasing, the distance between two chasers engaged in cooperation was perceived as larger, that is, a distance-expansion effect (Experiment 1), and this effect could not be explained by low-level physical features (Experiment 2). These findings reveal that social cooperation information expands perceived distance, which helps us understand the adaptive mechanisms of vision.

7.
肖承丽. 心理学报, 2013, 45(7): 752-761
Participants learned an irregular layout through simultaneous vision or sequential proprioception. After learning, under three movement conditions (facing the learning direction, actively rotating 240°, or rotating continuously until disoriented), participants pointed to the locations of the objects in random order. Disorientation significantly reduced the internal consistency of pointing in the simultaneous-vision group, whereas the sequential-proprioception group was unaffected by disorientation. An offline relative-position judgment task showed no difference between the two groups' environment-centered (allocentric) spatial representations. This demonstrates that participants can also form stable egocentric spatial representations through sequential proprioceptive learning, supporting an extension of the spatial snapshot theory and the functional equivalence hypothesis of spatial cognition.

8.
The aim of the two present experiments was to examine the ontogenetic development of the dissociation between perception and action in children using the Duncker illusion. In this illusion, a moving background alters the perceived direction of target motion. Targets were held stationary while appearing to move in an induced displacement. In Experiment 1, 30 children aged 7, 9, and 12 years and 10 adults made a perceptual judgment or pointed as accurately as possible, with their index finger, to the last position of the target. The 7-year-old children were more perceptually deceived than the others by the Duncker illusion but there were no differences for the goal-directed pointing movements. In Experiment 2, 50 children aged 7, 8, 9, 10, and 11 years made a perceptual judgment or reproduced as accurately as possible, with a handle, the distance traveled by the target. Participants were perceptually deceived by the illusion, judging the target as moving although it was stationary. When reproducing the distance covered by the target, children were unaffected by the Duncker illusion. Our results suggest that the separation of the allocentric visual perception pathway from the egocentric action pathway occurs before 7 years of age.

9.
A pitched visual inducer has a strong effect on the visually perceived elevation of a target in extrapersonal space, and also on the elevation of the arm when a subject points with an unseen arm to the target’s elevation. The manual effect is a systematic function of hand-to-body distance (Li and Matin Vision Research 45:533–550, 2005): When the arm is fully extended, manual responses to perceptually mislocalized luminous targets are veridical; when the arm is close to the body, gross matching errors occur. In the present experiments, we measured this hand-to-body distance effect during the presence of a pitched visual inducer and after inducer offset, using three values of hand-to-body distance (0, 40, and 70 cm) and two open-loop tasks (pointing to the perceived elevation of a target at true eye level and setting the height of the arm to match the elevation). We also measured manual behavior when subjects were instructed to point horizontally under induction and after inducer offset (no visual target at any time). In all cases, the hand-to-body distance effect disappeared shortly after inducer offset. We suggest that the rapid disappearance of the distance effect is a manifestation of processes in the dorsal visual stream that are involved in updating short-lived representations of the arm in egocentric visual perception and manual behavior.

10.
The authors model the neural mechanisms underlying spatial cognition, integrating neuronal systems and behavioral data, and address the relationships between long-term memory, short-term memory, and imagery, and between egocentric and allocentric representations and between visual and ideothetic representations. Long-term spatial memory is modeled as attractor dynamics within medial-temporal allocentric representations, and short-term memory is modeled as egocentric parietal representations driven by perception, retrieval, and imagery and modulated by directed attention. Both encoding and retrieval/imagery require translation between egocentric and allocentric representations, which is mediated by posterior parietal and retrosplenial areas and the use of head direction representations in Papez's circuit. Thus, the hippocampus effectively indexes information by real or imagined location, whereas Papez's circuit translates to imagery or from perception according to the direction of view. Modulation of this translation by motor efference allows spatial updating of representations, whereas prefrontal simulated motor efference allows mental exploration. The alternating temporal-parietal flows of information are organized by the theta rhythm. Simulations demonstrate the retrieval and updating of familiar spatial scenes, hemispatial neglect in memory, and the effects on hippocampal place cell firing of lesioned head direction representations and of conflicting visual and ideothetic inputs.
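As an illustrative sketch of the egocentric/allocentric translation described above (a minimal geometric analogy, not the authors' neural-network implementation; all function and variable names are mine): given the observer's allocentric position and head direction, a location stored in world-centered coordinates can be mapped into a head-centered frame for perception or imagery, and the inverse mapping supports encoding a perceived location back into allocentric memory.

import numpy as np

def allocentric_to_egocentric(obj_xy, observer_xy, head_direction_rad):
    # Translate the world-frame location to the observer, then rotate by
    # minus the head direction to express it in a head-centered frame.
    dx, dy = np.asarray(obj_xy, float) - np.asarray(observer_xy, float)
    c, s = np.cos(-head_direction_rad), np.sin(-head_direction_rad)
    return np.array([c * dx - s * dy, s * dx + c * dy])

def egocentric_to_allocentric(ego_xy, observer_xy, head_direction_rad):
    # Inverse mapping: rotate by the head direction, then translate back
    # into the world frame (encoding into allocentric memory).
    x, y = ego_xy
    c, s = np.cos(head_direction_rad), np.sin(head_direction_rad)
    return np.asarray(observer_xy, float) + np.array([c * x - s * y, s * x + c * y])

# Round trip: an object at (5, 6) seen by an observer at (3, 4) facing 90 degrees
# is recovered at its original allocentric location.
ego = allocentric_to_egocentric([5.0, 6.0], [3.0, 4.0], np.pi / 2)
print(egocentric_to_allocentric(ego, [3.0, 4.0], np.pi / 2))  # -> [5. 6.]

Spatial updating during self-motion then amounts to recomputing the egocentric coordinates as observer_xy and head_direction_rad change, which is the role the abstract assigns to motor efference.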

11.
Both judgment studies and studies of feedforward reaching have shown that the visual perception of object distance, size, and shape is inaccurate. However, feedback has been shown to calibrate feedforward reaches-to-grasp to make them accurate with respect to object distance and size. We now investigate whether shape perception (in particular, the aspect ratio of object depth to width) can be calibrated in the context of reaches-to-grasp. We used cylindrical objects with elliptical cross-sections of varying eccentricity. Our participants reached to grasp the width or the depth of these objects with the index finger and thumb. The maximum grasp aperture and the terminal grasp aperture were used to evaluate perception. Both occur before the hand has contacted an object. In Experiments 1 and 2, we investigated whether perceived shape is recalibrated by distorted haptic feedback. Although somewhat equivocal, the results suggest that it is not. In Experiment 3, we tested the accuracy of feedforward grasping with respect to shape with haptic feedback to allow calibration. Grasping was inaccurate in ways comparable to findings in shape perception judgment studies. In Experiment 4, we hypothesized that online guidance is needed for accurate grasping. Participants reached to grasp either with or without vision of the hand. The result was that the former was accurate, whereas the latter was not. We conclude that shape perception is not calibrated by feedback from reaches-to-grasp and that online visual guidance is required for accurate grasping because shape perception is poor.

12.
Ooi TL, Wu B, He ZJ. Perception, 2006, 35(5): 605-624
Correct judgment of egocentric/absolute distance in the intermediate distance range requires that both the angular declination below the horizon and the ground-surface information be represented accurately. This requirement can be met in the light environment but not in the dark, where the ground surface is invisible and hence cannot be represented accurately. We previously showed that a target in the dark is judged at the intersection of an implicit surface with the projection line from the eye to the target, which defines the angular declination below the horizon. The implicit surface can be approximated as a slant surface with its far end slanted toward the frontoparallel plane. We hypothesize that the implicit slant surface reflects the intrinsic bias of the visual system and helps to define the perceptual space. Accordingly, we conducted two experiments in the dark to further elucidate the characteristics of the implicit slant surface. In the first experiment, we measured the egocentric location of a dimly lit target on, or above, the ground, using the blind-walking-gesturing paradigm. Our results reveal that the judged target locations could be fitted by a line (surface), which indicates an intrinsic bias with a geographical slant of about 12.4 degrees. In the second experiment, with an exocentric/relative-distance task, we measured the judged ratio of the aspect ratio of a fluorescent L-shaped target. Using trigonometric analysis, we found that the judged ratio of the aspect ratio can be accounted for by assuming that the L-shaped target was perceived on an implicit slant surface with an average geographical slant of 14.4 degrees. That the data from the two experiments with different tasks can be fitted by implicit slant surfaces suggests that the intrinsic bias has a role in determining perceived space in the dark. The possible contribution of the intrinsic bias to representing the ground surface and its impact on space perception in the light environment are also discussed.
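A worked version of the geometry invoked above (the notation, and the simplifying assumption that the implicit surface originates near the observer's feet, are mine, not the authors'). With the eye at height h above the ground and a target whose line of sight is declined by \alpha below the horizontal, the line of sight is y = h - x\tan\alpha; an implicit surface rising from the feet with geographical slant \beta is y = x\tan\beta. The judged location falls at their intersection, so the judged horizontal distance is

\[ d' = \frac{h}{\tan\alpha + \tan\beta} \;<\; \frac{h}{\tan\alpha} = d_{\text{ground}}, \]

so a slant \beta of roughly 12-14 degrees predicts that targets on the ground are judged closer (and higher) in the dark than they physically are, which is the pattern of intrinsic bias described in the abstract.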

13.
We examined whether anticipation is underpinned by perceiving structured patterns or postural cues and whether the relative importance of these processes varied as a function of task constraints. Skilled and less-skilled soccer players completed anticipation paradigms in video-film and point light display (PLD) format. Skilled players anticipated more accurately regardless of display condition, indicating that both perception of structured patterns between players and postural cues contribute to anticipation. However, the Skill × Display interaction showed skilled players’ advantage was enhanced in the video-film condition, suggesting that they make better use of postural cues when available during anticipation. We also examined anticipation as a function of proximity to the ball. When participants were near the ball, anticipation was more accurate for video-film than PLD clips, whereas when the ball was far away there was no difference between viewing conditions. Perceiving advance postural cues appears more important than structured patterns when the ball is closer to the observer, whereas the reverse is true when the ball is far away. Various perceptual-cognitive skills contribute to anticipation with the relative importance of perceiving structured patterns and advance postural cues being determined by task constraints and the availability of perceptual information.

14.
We provide experimental evidence that perceived location is an invariant in the control of action, by showing that different actions are directed toward a single visually specified location in space (corresponding to the putative perceived location) and that this single location, although specified by a fixed physical target, varies with the availability of information about the distance of that target. Observers in two conditions varying in the availability of egocentric distance cues viewed targets at 1.5, 3.1, or 6.0 m and then attempted to walk to the target with eyes closed using one of three paths; the path was not specified until after vision was occluded. The observers stopped at about the same location regardless of the path taken, providing evidence that action was being controlled by some invariant, ostensibly visually perceived location. That it was indeed perceived location was indicated by the manipulation of information about target distance: the trajectories in the full-cues condition converged near the physical target locations, whereas those in the reduced-cues condition converged at locations consistent with the usual perceptual errors found when distance cues are impoverished.

15.
Two experiments were performed to assess the accuracy and precision with which adults perceive absolute egocentric distances to visible targets and coordinate their actions with them when walking without vision. In experiment 1, subjects stood in a large open field and attempted to judge the midpoint of self-to-target distances of between 4 and 24 m. In experiment 2, both highly practiced and unpracticed subjects stood in the same open field, viewed the same targets, and attempted to walk to them without vision or other environmental feedback under three conditions designed to assess the effects on accuracy of time-based memory decay and of walking at an unusually rapid pace. In experiment 1, the visual judgments were quite accurate and showed no systematic constant error. The small variable errors were linearly related to target distance. In experiment 2, the briskly paced walks were accurate, showing no systematic constant error, and the small variable errors were a linear function of target distance, averaging about 8% of the target distance. In contrast to Thomson's (1983) findings, there was no abrupt increase in variable error at around 9 m, and no significant time-based effects were observed. The results demonstrate the accuracy of people's visual perception of absolute egocentric distances out to 24 m under open-field conditions. The accuracy of people's walking without vision to previously seen targets shows that efferent and proprioceptive information about locomotion is closely calibrated to visually perceived distance. Sensitivity to the correlation of optical flow with efferent/proprioceptive information while walking with vision may provide the basis for this calibration when walking without vision.
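Stated as a rough formula (notation mine, not the authors'), the reported walking precision corresponds to a variable error that grows linearly with target distance:

\[ \sigma(d) \approx 0.08\, d, \]

i.e., roughly 0.3 m for a 4 m target and about 1.9 m for a 24 m target.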

16.
In two experiments we examined the role of visual horizon information on absolute egocentric distance judgments to on-ground targets. Sedgwick [1983, in Human and Machine Vision (New York: Academic Press) pp 425-458] suggested that the visual system may utilize the angle of declination from a horizontal line of sight to the target location (horizon distance relation) to determine absolute distances on infinite ground surfaces. While studies have supported this hypothesis, less is known about the specific cues (vestibular, visual) used to determine horizontal line of sight. We investigated this question by requiring observers to judge distances under degraded vision given an unaltered or raised visual horizon. The results suggest that visual horizon information does influence perception of absolute distances as evident through two different action-based measures: walking or throwing without vision to previously viewed targets. Distances were judged as shorter in the presence of a raised visual horizon. The results are discussed with respect to how the visual system accurately determines absolute distance to objects on a finite ground plane and for their implications for understanding space perception in low-vision individuals.
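The horizon-distance relation referred to above can be written explicitly (notation mine): for a target on the ground viewed from eye height h, the egocentric distance specified by the angle of declination \alpha below the horizon is

\[ d = \frac{h}{\tan\alpha}. \]

Raising the visual horizon increases the effective declination angle of a fixed target, so this relation predicts the shorter distance judgments reported in the abstract.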

17.
Hering's model of egocentric visual direction assumes implicitly that the effect of eye position on direction is both linear and equal for the two eyes; these two assumptions were evaluated in the present experiment. Five subjects pointed (open-loop) to the apparent direction of a target seen under conditions in which the position of one eye was systematically varied while the position of the other eye was held constant. The data were analyzed through examination of the relationship between the variations in perceived egocentric direction and variations in expected egocentric direction based on the positions of the varying eye. The data revealed that the relationship between eye position and egocentric direction is indeed linear. Further, the data showed that, for some subjects, variations in the positions of the two eyes do not have equal effects on egocentric direction. Both the between-eye differences and the linear relationship may be understood in terms of individual differences in the location of the cyclopean eye, an unequal weighting of the positions of the eyes in the processing of egocentric direction, or some combination of these two factors.
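The two assumptions tested here can be formalized compactly (an illustrative rendering, not a formula taken from the paper): write the perceived egocentric direction as a weighted linear combination of the two eyes' positions,

\[ \hat{\theta} = w_L\,\theta_L + w_R\,\theta_R + \text{const}, \]

where \theta_L and \theta_R are the left- and right-eye positions. Hering's model corresponds to linearity with equal weights (w_L = w_R); the unequal influence found for some subjects corresponds to w_L \neq w_R, with a shifted cyclopean-eye location being the paper's alternative (or complementary) account of the same asymmetry.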

18.
Hering’s model of egocentric visual direction assumes implicitly that the effect of eye position on direction is both linear and equal for the two eyes; these two assumptions were evaluated in the present experiment. Five subjects pointed (open-loop) to the apparent direction of a target seen under conditions in which the position of one eye was systematically varied while the position of the other eye was held constant. The data were analyzed through examination of the relationship between the variations in perceived egocentric direction and variations in expected egocentric direction based on the positions of the varying eye. The data revealed that the relationship between eye position and egocentric direction is indeed linear. Further, the data showed that, for some subjects, variations in the positions of the two eyes do not have equal effects on egocentric direction. Both the between-eye differences and the linear relationship may be understood in terms of individual differences in the location of the cyclopean eye, an unequal weighting of the positions of the eyes in the processing of egocentric direction, or some combination of these two factors.

19.
Two studies were designed to determine whether the perceptual learning that has been demonstrated to occur during exposure to uniocular image magnification can be explained by either a modification in the perception of egocentric distance or the direction of gaze. Experiment 1 was designed to determine whether exposure to uniocular image magnification produces changes in perceived absolute distance. Experiment 2 tested the possibility that exposure to uniocular image magnification modifies the registration of direction of gaze. The results showed that, despite the occurrence of adaptive shifts in perceived depth, no significant changes in perceived absolute distance or in registered direction of gaze occur. These findings bolster confidence in the hypothesis that adaptation to uniocular image magnification is the result of a recalibration of retinal disparity.

20.
Past research (e.g., J. M. Loomis, Y. Lippa, R. L. Klatzky, & R. G. Golledge, 2002) has indicated that spatial representations derived from spatial language can function equivalently to those derived from perception. The authors tested functional equivalence for reporting spatial relations that were not explicitly stated during learning. Participants learned a spatial layout by visual perception or spatial language and then made allocentric direction and distance judgments. Experiments 1 and 2 indicated allocentric relations could be accurately reported in all modalities, but visually perceived layouts, tested with or without vision, produced faster and less variable directional responses than language. In Experiment 3, when participants were forced to create a spatial image during learning (by spatially updating during a backward translation), functional equivalence of spatial language and visual perception was demonstrated by patterns of latency, systematic error, and variability.

