Similar Articles
20 similar articles found (search time: 15 ms)
1.
Four experiments examined eye height (EH) scaling of object height across different postures. In Experiment 1, participants viewed rectangular targets while they were standing, seated, and prone. Standing and seated judgments were similar, possibly due to EH scaling. Prone judgments were significantly lower, a result not attributable to the unfamiliarity of that posture (Experiment 2). In Experiment 3, shifts of seated EH produced height overestimations equivalent to those of standing viewers. Experiment 4 examined the visual salience of size information in the seated and prone judgments by holding EH constant and manipulating another source: linear perspective. Participants viewed targets placed on true- and false-perspective (FP) gradients. The FP gradient affected prone judgments but not seated judgments, which presumably relied on EH. It appears that the human visual system weights size information differentially depending on its utility.

2.
Eye-height (EH) scaling of absolute height was investigated in three experiments. In Experiment 1, standing observers viewed cubes in an immersive virtual environment. Observers' center of projection was placed at actual EH and at 0.7 times actual EH. Observers' size judgments revealed that the EH manipulation was 76.8% effective. In Experiment 2, seated observers viewed the same cubes on an interactive desktop display; however, no effect of EH was found in response to the simulated EH manipulation. Experiment 3 tested standing observers in the immersive environment with the field of view reduced to match that of the desktop. Comparable to Experiment 1, the effect of EH was 77%. These results suggest that EH scaling is not generally used when people view an interactive desktop display because the altitude of the center of projection is indeterminate. EH scaling is spontaneously evoked, however, in immersive environments.
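Eye-height scaling in studies like these rests on a simple geometric relation: the horizontal through the eye (the horizon) projects at eye height, so the depression of an object's base fixes its ground distance, and the elevation of its top then fixes its height. A minimal numeric sketch of that relation (the function name and parameterization are mine, not the authors'):

```python
import math

def height_from_eye_height(eye_height, angle_top, angle_base):
    """Recover object height from eye height plus two visual angles.

    angle_base: depression of the object's base below the horizontal (rad)
    angle_top:  elevation of the object's top above the horizontal (rad)

    The base depression fixes the ground distance
    d = eye_height / tan(angle_base); the top elevation then gives
    the portion of the object standing above eye level.
    """
    d = eye_height / math.tan(angle_base)   # distance along the ground plane
    return eye_height + d * math.tan(angle_top)

# A 2 m object viewed from a 1.6 m eye height at 4 m:
# base depression = atan(1.6/4), top elevation = atan(0.4/4)
h = height_from_eye_height(1.6, math.atan(0.4 / 4), math.atan(1.6 / 4))
```

Because every recovered height is proportional to the eye height assumed, a misregistered center of projection (as in the 0.7 × EH manipulation above) distorts all judged sizes by a common factor.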

3.
Most ground surfaces contain various types of texture gradient information that serve as depth cues for space perception. We investigated how linear perspective, a type of texture gradient information on the ground, affects judged absolute distance and eye level. Phosphorescent elements were used to display linear perspective information on the floor in an otherwise dark room. We found that observers were remarkably receptive to such information. Changing the configuration of the linear perspective information from parallel to converging resulted in relatively larger judged distances and lower judged eye levels. These findings support the proposals that (1) the visual system has a bias for representing an image of converging lines as one of parallel lines on a downward-slanting surface and (2) the convergence point of a converging-lines image represents the eye level. Finally, we found that the visual system may be less sensitive to the manipulation of compression gradient information than of linear perspective information.

4.
Relative size judgments were collected for two objects at 30.5 m and 23.8 m from the observer in order to assess how performance depends on the relationship between the size of the objects and the eye level of the observer. In three experiments in an indoor hallway and in one experiment outdoors, accuracy was higher for objects in the neighborhood of eye level. We consider these results in the light of two hypotheses. One proposes that observers localize the horizon as a reference for judging relative size, and the other proposes that observers perceive the general neighborhood of the horizon and then employ a height-in-visual-field heuristic. The finding that relative size judgments are best around the horizon implies that information that is independent of distance perception is used in perceiving size.

5.
Three experiments were conducted to explore the emergence of sensitivity to the pictorial depth cues of texture gradient and linear perspective. In experiment 1, an initial longitudinal study explored the emergence of sensitivity to pictorial depth information between 5 and 7 months of age. In experiment 2, a cross-sectional study with 5–7-month-olds assessed revised methods designed to study development of pictorial depth sensitivity in individual infants. Experiment 3 applied these methods to a second sample of infants studied longitudinally. The results showed that: (a) a reliable method for assessing sensitivity in individual infants has been constructed; (b) there is variability in the age at which infants begin to use linear perspective and texture gradient for perceiving depth (22–28 weeks of age); and (c) sensitivity emerges across 2–8 weeks.

6.
Mapp AP, Ono H, Khokhotva M (2007). Perception, 36(8), 1139-1151
It is generally agreed that absolute-direction judgments require information about eye position, whereas relative-direction judgments do not. The source of this eye-position information, particularly during monocular viewing, is a matter of debate. It may be either binocular eye position, or the position of the viewing-eye only, that is crucial. Using more ecologically valid stimulus situations than the traditional LED in the dark, we performed two experiments. In experiment 1, observers threw darts at targets that were fixated either monocularly or binocularly. In experiment 2, observers aimed a laser gun at targets while fixating either the rear or the front gunsight monocularly, or the target either monocularly or binocularly. We measured the accuracy and precision of the observers' absolute- and relative-direction judgments. We found that (a) relative-direction judgments were precise and independent of phoria, and (b) monocular absolute-direction judgments were inaccurate, and the magnitude of the inaccuracy was predictable from the magnitude of phoria. These results confirm that relative-direction judgments do not require information about eye position. Moreover, they show that binocular eye-position information is crucial when judging the absolute direction of both monocular and binocular targets.

7.
In four experiments on perceived object height and width, the effects of shifting participants' effective eye height (EEH) on affordance (intrinsic) and apparent size (extrinsic) judgments were contrasted. In Experiment 1, EEH shifts produced comparable overestimations of height in intrinsic and extrinsic tasks. A similar result was found with a more abstract extrinsic height task (Experiment 2). However, Experiment 3 revealed a dissociation between intrinsic and extrinsic tasks of perceived width. Affordance judgments were affected by EEH shifts, whereas apparent size judgments were not. Experiment 4 compared participants' performance on comparable extrinsic tasks of height and width. Height judgments were affected by EEH shifts, but width judgments were again unaffected. It is concluded that eye height may be a more natural metric for object height than for width. Moreover, this difference reflects a basic flexibility within the human visual system for selectively attuning to the most accessible sources of size information.

8.
Mental imagery and the third dimension
What sort of medium underlies imagery for three-dimensional scenes? In the present investigation, the time subjects took to scan between objects in a mental image was used to infer the sorts of geometric information that images preserve. Subjects studied an open box in which five objects were suspended, and learned to imagine this display with their eyes closed. In the first experiment, subjects scanned by tracking an imaginary point moving in a straight line between the imagined objects. Scanning times increased linearly with increasing distance between objects in three dimensions. Therefore metric 3-D information must be preserved in images, and images cannot simply be 2-D "snapshots." In a second experiment, subjects scanned across the image by "sighting" objects through an imaginary rifle sight. Here scanning times were found to increase linearly with the two-dimensional separations between objects as they appeared from the original viewing angle. Therefore metric 2-D distance information in the original perspective view must be preserved in images, and images cannot simply be 3-D "scale-models" that are accessed from any and all directions at once. In a third experiment, subjects mentally rotated the display 90 degrees and scanned between objects as they appeared in this new perspective view by tracking an imaginary rifle sight, as before. Scanning times increased linearly with the two-dimensional separations between objects as they would appear from the new relative viewing perspective. Therefore images can display metric 2-D distance information in a perspective view never actually experienced, so mental images cannot simply be "snapshot plus scale model" pairs. These results can be explained by a model in which the three-dimensional structure of objects is encoded in long-term memory in 3-D object-centered coordinate systems. When these objects are imagined, this information is then mapped onto a single 2-D "surface display" in which the perspective properties specific to a given viewing angle can be depicted. In a set of perceptual control experiments, subjects scanned a visible display by (a) simply moving their eyes from one object to another, (b) sweeping an imaginary rifle sight over the display, or (c) tracking an imaginary point moving from one object to another. Eye-movement times varied linearly with 2-D interobject distance, as did time to scan with an imaginary rifle sight; time to track a point varied independently with the 3-D and 2-D interobject distances. These results are compared with the analogous image-scanning results to argue that imagery and perception share some representational structures but that mental image scanning is a process distinct from eye movements or eye-movement commands.

9.
In four experiments on perceived object height and width, the effects of shifting participants' effective eye height (EEH) on affordance (intrinsic) and apparent size (extrinsic) judgments were contrasted. In Experiment 1, EEH shifts produced comparable overestimations of height in intrinsic and extrinsic tasks. A similar result was found with a more abstract extrinsic height task (Experiment 2). However, Experiment 3 revealed a dissociation between intrinsic and extrinsic tasks of perceived width. Affordance judgments were affected by EEH shifts, whereas apparent size judgments were not. Experiment 4 compared participants' performance on comparable extrinsic tasks of height and width. Height judgments were affected by EEH shifts, but width judgments were again unaffected. It is concluded that eye height may be a more natural metric for object height than for width. Moreover, this difference reflects a basic flexibility within the human visual system for selectively attuning to the most accessible sources of size information.

10.
Warren WH, Kim EE, Husney R (1987). Perception, 16(3), 309-336
Human observers may perceive not only spatial and temporal dimensions of the environment, but also dynamic physical properties that are useful for the control of behavior. A study is presented that examined visual and auditory perception of elasticity in bouncing objects, as specified by kinematic (spatiotemporal) patterns of object motion. In experiment 1, observers could perceive the elasticity of a bouncing ball and were able to regulate the impulse applied to the ball in a bounce pass. In experiments 2 and 3, it was demonstrated that visual perception of elasticity was based on relative height information, when it was available, and on the duration of a single period under other conditions. Observers did not make effective use of velocity information. In experiment 4, visual and auditory period information were compared and equivalent performance in both modalities was found. The results are interpreted as support for the view that dynamic properties of environmental events are perceived by means of kinematic information.

11.
A central problem for psychology is vision's reaction to perspective. In the present studies, observers looked at perspective pictures projected by square tiles on a ground plane. They judged the tile dimensions while positioned at the correct distance, farther or nearer. In some pictures, many tiles appeared too short to be squares, many too long, and many just right. The judgments were strongly affected by viewing from the wrong distance, eye height, and object orientation. The authors propose a 2-factor angles and ratios together (ART) theory, with the following factors: the ratio of the visual angles of the tile's sides and the angle between (a) the direction to the tile from the observer and (b) the perpendicular, from the picture plane to the observer, that passes through the central vanishing point.

12.
The results of research on change blindness and the spotlight effect suggest that (a) others are unlikely to notice changes in our appearance and (b) we are likely to overestimate the extent to which others notice changes in our appearance. However, little research has directly addressed the latter possibility. In Study 1, target persons overestimated the extent to which observers noticed a change in their sweatshirt. In Study 2, observers who followed the target persons throughout the study overestimated the extent to which other observers noticed the change in the target persons' sweatshirt, but target persons' overestimations were significantly higher. The results suggest that the spotlight effect increases blindness to change blindness.

13.
We examined effects of binocular occlusion, binocular camouflage, and vergence-induced disparity cues on the perceived depth between two objects when two stimuli are presented to one eye and a single stimulus to the other (Wheatstone-Panum limiting case). The perceived order and magnitude of the depth were examined in two experimental conditions: (1) The stimulus was presented on the temporal side (occlusion condition) and (2) the nasal side (camouflage condition) of the stimulus pair on one retina so as to fuse with the single stimulus on the other retina. In both conditions, the separation between the stimulus pair presented to one eye was systematically varied. Experiment 1, with 16 observers, showed that the fused object was seen in front of the nonfused object in the occlusion condition and was seen at the same distance as the nonfused object in the camouflage condition. The perceived depth between the two objects was constant and did not depend on the separation of the stimulus pair presented to one eye. Experiment 2, with 45 observers, showed that the disparity induced by vergence mainly determined the perceived depth, and the depth magnitude increased as the separation of the stimulus pair was made wider. The results suggest that (1) occlusion provides depth-order information but not depth-magnitude information, (2) camouflage provides neither depth-order nor depth-magnitude information, and (3) vergence-induced disparity provides both order and magnitude information.

14.
Working memory representations play a key role in controlling attention by making it possible to shift attention to task-relevant objects. Visual working memory has a capacity of three to four objects, but recent studies suggest that only one representation can guide attention at a given moment. We directly tested this proposal by monitoring eye movements while observers performed a visual search task in which they attempted to limit attention to objects drawn in two colors. When the observers were motivated to attend to one color at a time, they searched many consecutive items of one color (long run lengths) and exhibited a delay prior to switching gaze from one color to the other (switch cost). In contrast, when they were motivated to attend to both colors simultaneously, observers' gaze switched back and forth between the two colors frequently (short run lengths), with no switch cost. Thus, multiple working memory representations can concurrently guide attention.

15.
We examined effects of binocular occlusion, binocular camouflage, and vergence-induced disparity cues on the perceived depth between two objects when two stimuli are presented to one eye and a single stimulus to the other (Wheatstone-Panum limiting case). The perceived order and magnitude of the depth were examined in two experimental conditions: (1) The stimulus was presented on the temporal side (occlusion condition) and (2) the nasal side (camouflage condition) of the stimulus pair on one retina so as to fuse with the single stimulus on the other retina. In both conditions, the separation between the stimulus pair presented to one eye was systematically varied. Experiment 1, with 16 observers, showed that the fused object was seen in front of the nonfused object in the occlusion condition and was seen at the same distance as the nonfused object in the camouflage condition. The perceived depth between the two objects was constant and did not depend on the separation of the stimulus pair presented to one eye. Experiment 2, with 45 observers, showed that the disparity induced by vergence mainly determined the perceived depth, and the depth magnitude increased as the separation of the stimulus pair was made wider. The results suggest that (1) occlusion provides depth-order information but not depth-magnitude information, (2) camouflage provides neither depth-order nor depth-magnitude information, and (3) vergence-induced disparity provides both order and magnitude information.

16.
Phosphorescent square tiles (arranged to yield a single image size) were viewed in the dark by 56 monocular observers who utilized a chinrest. The targets were placed at one of three horizontal distances and at one of three eye heights, allowing us to study the relative effect of height in the visual field (HVF) and sagittal distance on observers' verbal reports of the horizontal distance at which the object lay (near, middle, or far). In Experiment 1, we found that reports covaried primarily with HVF and, as predicted, they exhibited a weak paradoxical inverse relation with horizontal distance. In a second and third experiment, a visible surface was placed under the targets at the three eye heights in both dark and fully lighted conditions. In this situation, the inverse distance relation disappeared, and HVF no longer influenced the judgments of most observers. The results show that information projected from relevant support surfaces is essential for veridical information about object distance. These results raise fundamental issues for perceptual researchers regarding how to decide when a cue has been properly delineated, given the assumption that the relation between a cue and what it specifies is probabilistic.

17.
The geometrical optics of approach events is delineated. It is shown that optical magnification provides information about distance and time until collision. An experiment is described in which two objects - white Styropor® spheres 10 cm in diameter, seen against a white plaster wall - were moved simultaneously at equal, constant speed along straight, converging paths at eye level towards a human observer and towards a common, virtual point of collision which either coincided with the observer's station point or was placed in front of, or behind, that point. Approach events differed with regard to trajectories, distances, velocities, and times-to-collision involved. Events were observed monocularly fixating and binocularly non-fixating, without head movements. The objects always stopped before colliding, and subjects had to respond to the virtual collisions. Most responses were too early, especially for impending collisions at, or behind, the observer's station point. Responses for impending collisions in front of the observers tended to be too late, especially for larger total amounts of optical magnification and higher velocities, which together imply shorter times-to-collision. Relative errors were comparatively larger for very short and very long times-to-collision throughout, where events of the first kind were overshot, the latter ones undershot. Results are interpreted with reference to biological theories and the constraints imposed by geometrical optics. Special attention is focused on the issue of unavoidable, necessary confounding of variables in time-to-collision studies.
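The claim that optical magnification carries time-to-collision information is usually formalized as Lee's tau: the ratio of an object's visual angle to its rate of expansion. Under the small-angle approximation, physical size and distance cancel, so the optic array alone yields the time to collision. A sketch of that identity for the simplest case (the function names and the head-on, constant-speed assumptions are mine; the experiment above used converging, off-axis paths):

```python
def tau_from_optics(size, distance, speed):
    """Lee's tau for a head-on approach at constant speed.

    Small-angle approximation: theta ~ size / distance, and the
    magnification rate is theta_dot = size * speed / distance**2,
    so tau = theta / theta_dot = distance / speed -- the true time
    to collision, recovered without knowing size or distance.
    """
    theta = size / distance                   # visual angle (rad)
    theta_dot = size * speed / distance ** 2  # rate of optical expansion
    return theta / theta_dot

# A 10 cm sphere 5 m away approaching at 2 m/s: tau = 5 / 2 = 2.5 s
tau = tau_from_optics(0.10, 5.0, 2.0)
```

Note that the object's size drops out entirely, which is why tau is attractive as a biologically plausible cue: the same optical ratio works for any approaching object.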

18.
Hemker L, Kavsek M (2010). Perception, 39(11), 1476-1490
In the current preferential-reaching experiments, 7-month-olds were tested for their ability to respond to a combination of relative height and texture gradients. The infants were presented with a display in which these pictorial depth cues specified that two toys were at different distances. The experimental displays differed from the textured surfaces employed in earlier studies in that linear perspective of the contours of the texture elements was omitted. Experiment A shows that the infants still preferred to reach for the apparently nearer toy under monocular, but not binocular, viewing conditions, indicating that they responded to the pictorial depth cues. In experiment B, relative height and texture provided the infants with conflicting information for depth. Here, relative height outperformed texture information. A statistical comparison between the experiments as well as systematic comparisons with experimental conditions from an earlier study (Hemker et al., 2010, Infancy, 15, 6-27) revealed that texture gradients, unlike linear perspective, neither enhanced nor weakened the effect exerted by relative height. In sum, 7-month-old infants are obviously more sensitive to relative height and to the linear perspective of the surface contours than to the texture gradients of compression, perspective, and density.

19.
In four experiments, we examined whether generalization to unfamiliar views was better under stereo viewing or under nonstereo viewing across different tasks and stimuli. In the first three experiments, we used a sequential matching task in which observers matched the identities of shaded tube-like objects. Across Experiments 1-3, we manipulated the presentation method of the nonstereo stimuli (having observers wear an eye patch vs. showing observers the same screen image) and the magnitude of the viewpoint change (30 degrees vs. 38 degrees). In Experiment 4, observers identified "easy" and "hard" rotating wire-frame objects at the individual level under stereo and nonstereo viewing conditions. We found a stereo advantage for generalizing to unfamiliar views in all the experiments. However, in these experiments, performance remained view dependent even under stereo viewing. These results strongly argue against strictly 2-D image-based models of object recognition, at least for the stimuli and recognition tasks used, and suggest that observers used representations that contained view-specific local depth information.

20.
A single experiment evaluated observers’ ability to visually discriminate 3-D object shape, where the 3-D structure was defined by motion, texture, Lambertian shading, and occluding contours. The observers’ vision was degraded to varying degrees by blurring the experimental stimuli, using 2.0-, 2.5-, and 3.0-diopter convex lenses. The lenses reduced the observers’ acuity from -0.091 LogMAR (in the no-blur conditions) to 0.924 LogMAR (in the conditions with the most blur; 3.0-diopter lenses). This visual degradation, although producing severe reductions in visual acuity, had only small (but significant) effects on the observers’ ability to discriminate 3-D shape. The observers’ shape discrimination performance was facilitated by the objects’ rotation in depth, regardless of the presence or absence of blur. Our results indicate that accurate global shape discrimination survives a considerable amount of retinal blur.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号