Similar articles
20 similar articles found.
1.
Selective adaptation was used to determine the degree of interaction between channels processing relative depth from stereopsis, motion parallax, and texture. Monocular adaptation with motion parallax or binocular stationary adaptation caused test surfaces, viewed either stationary binocularly or monocularly with motion parallax, to appear to slant in the direction opposite to the slant initially adapted to. Monocular adaptation to frontoparallel surfaces covered with a pattern of texture gradients caused a subsequently viewed test surface, viewed either monocularly with motion parallax or stationary binocularly, to appear to slant in the direction opposite to the slant indicated by the texture in the adaptation condition. No aftereffect emerged in the monocular stationary test condition. A mechanism of independent channels for relative depth perception is dismissed in favor of asymmetrical interactive processing of the different information sources. The results suggest asymmetrical inhibitory interactions among habituating slant-detector units receiving inputs from static disparity, dynamic disparity, and texture gradients.
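To make the proposed mechanism concrete, the following is a minimal toy sketch of asymmetric cross-channel slant adaptation. It is our own illustration, not code or parameters from the study: the gain-loss constant and the inhibition weights are assumptions chosen only to reproduce the qualitative pattern of aftereffects.

    # Toy model of asymmetric cross-channel slant adaptation (all parameters are assumptions).
    import numpy as np

    channels = ["static_disparity", "dynamic_disparity", "texture"]

    # inhibition[i, j]: how strongly adapting channel j biases a test read out through channel i.
    # The all-zero texture row mirrors the reported absence of an aftereffect in the
    # monocular stationary (texture-only) test; the other weights are arbitrary.
    inhibition = np.array([
        [1.0, 0.6, 0.4],
        [0.5, 1.0, 0.3],
        [0.0, 0.0, 0.0],
    ])

    def aftereffect(adapted_channel: int, adapt_slant_deg: float, gain_loss: float = 0.3):
        """Predicted slant aftereffect (sign opposite to the adapted slant) for each test channel."""
        bias = -adapt_slant_deg * gain_loss * inhibition[:, adapted_channel]
        return dict(zip(channels, bias.round(1)))

    # Adapt to a 20 deg texture-defined slant; the parallax and stereo test channels show a
    # negative aftereffect, while the static monocular (texture) test shows none.
    print(aftereffect(adapted_channel=2, adapt_slant_deg=20.0))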

2.
The surface and boundaries of an object generally move in unison, so the motion of a surface could provide information about the motion of its boundaries. Here we report the results of three experiments on spatiotemporal boundary formation that indicate that information about the motion of a surface does influence the formation of its boundaries. In Experiment 1, shape identification at low texture densities was poorer for moving forms in which stationary texture was visible inside than for forms in which the stationary texture was visible only outside. In Experiment 2, the disruption found in Experiment 1 was removed by adding a second external boundary. We hypothesized that the disruption was caused by boundary assignment that perceptually grouped the moving boundary with the static texture. Experiment 3 revealed that accurate information about the motion of the surface facilitated boundary formation only when the motion was seen as coming from the surface of the moving form. Potential mechanisms for surface motion effects in dynamic boundary formation are discussed.

3.
The ability of younger and older observers to perceive surface slant was investigated in four experiments. The surfaces possessed slants of 20°, 35°, 50°, and 65°, relative to the frontoparallel plane. The observers judged the slants using either a palm board (Experiments 1, 3, and 4) or magnitude estimation (Experiment 2). In Experiments 1–3, physically slanted surfaces were used (the surfaces possessed marble, granite, pebble, and circle textures), whereas computer-generated 3-D surfaces (defined by motion parallax and binocular disparity) were utilized in Experiment 4. The results showed that the younger and older observers' performance was essentially identical with regard to accuracy. The younger and older age groups, however, differed in terms of precision in Experiments 1 and 2: The judgments of the older observers were more variable across repeated trials. When taken as a whole, the results demonstrate that older observers (at least through the age of 83 years) can effectively extract information about slant in depth from optical patterns containing texture, motion parallax, or binocular disparity.

4.
Recent studies on perceptual organization in humans claim that the ability to represent a visual scene as a set of coherent surfaces is of central importance for visual cognition. We examined whether this surface representation hypothesis generalizes to a non-mammalian species, the barn owl (Tyto alba). Discrimination transfer combined with random-dot stimuli provided the appropriate means for a series of two behavioural experiments with the specific aims of (1) obtaining psychophysical measurements of figure–ground segmentation in the owl, and (2) determining the nature of the information involved. In experiment 1, two owls were trained to indicate the presence or absence of a central planar surface (figure) among a larger region of random dots (ground) based on differences in texture. Without additional training, the owls could make the same discrimination when figure and ground had reversed luminance, or were camouflaged by the use of uniformly textured random-dot stereograms. In the latter case, the figure stands out in depth from the ground when positional differences of the figure in two retinal images are combined (binocular disparity). In experiment 2, two new owls were trained to distinguish three-dimensional objects from holes using random-dot kinematograms. These birds could make the same discrimination when information on surface segmentation was unexpectedly switched from relative motion to half-occlusion. In the latter case, stereograms were used that provide the impression of stratified surfaces to humans by giving unpairable image features to the eyes. The ability to use image features such as texture, binocular disparity, relative motion, and half-occlusion interchangeably to determine figure–ground relationships suggests that in owls, as in humans, the structuring of the visual scene critically depends on how indirect image information (depth order, occlusion contours) is allocated between different surfaces.

5.
Wu B, He ZJ, Ooi TL (2007) Perception 36(5): 703-721
The sequential-surface-integration-process (SSIP) hypothesis was proposed to elucidate how the visual system constructs the ground-surface representation in the intermediate distance range (He et al, 2004 Perception 33 789-806). According to the hypothesis, the SSIP constructs an accurate representation of the near ground surface by using reliable near depth cues. The near ground representation then serves as a template for integrating the adjacent surface patch by using the texture gradient information as the predominant depth cue. By sequentially integrating the surface patches from near to far, the visual system obtains the global ground representation. A critical prediction of the SSIP hypothesis is that, when an abrupt texture-gradient change exists between the near and far ground surfaces, the SSIP can no longer accurately represent the far surface. Consequently, the representation of the far surface will be slanted upward toward the frontoparallel plane (owing to the intrinsic bias of the visual system), and the egocentric distance of a target on the far surface will be underestimated. Our previous findings in the real 3-D environment have shown that observers underestimated the target distance across a texture boundary. Here, we used the virtual-reality system to first test distance judgments with a distance-matching task. We created the texture boundary by having virtual grass- and cobblestone-textured patterns abutting on a flat (horizontal) ground surface in experiment 1, and by placing a brick wall to interrupt the continuous texture gradient of a flat grass surface in experiment 2. In both instances, observers underestimated the target distance across the texture boundary, compared to the homogeneous-texture ground surface (control). Second, we tested the proposal that the far surface beyond the texture boundary is perceived as slanted upward. For this, we used a virtual checkerboard-textured ground surface that was interrupted by a texture boundary. We found that not only was the target distance beyond the texture boundary underestimated relative to the homogeneous-texture condition, but the far surface beyond the texture boundary was also perceived as relatively slanted upward (experiment 3). Altogether, our results confirm the predictions of the SSIP hypothesis.
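As an aid to reading the prediction, here is a schematic sketch of the sequential integration idea. It is our own illustration, not the authors' model or code; the patch representation, the texture-equality test, and the intrinsic-bias constant are simplifying assumptions.

    # Schematic sketch of the SSIP idea (data structures and bias value are assumptions).
    from dataclasses import dataclass

    @dataclass
    class Patch:
        slant_deg: float   # physical slant of this ground patch
        texture: str       # texture label, used here to detect an abrupt gradient change

    INTRINSIC_BIAS_DEG = 15.0   # assumed upward slant adopted when integration breaks down

    def represent_ground(patches):
        """Integrate patches from near to far; a texture boundary breaks accurate integration."""
        represented = [patches[0].slant_deg]        # near patch: reliable near cues, accurate
        for prev, cur in zip(patches, patches[1:]):
            if cur.texture == prev.texture:         # continuous texture gradient: accurate
                represented.append(cur.slant_deg)
            else:                                   # boundary: far surface slants upward, so
                represented.append(cur.slant_deg + INTRINSIC_BIAS_DEG)  # distances on it compress
        return represented

    print(represent_ground([Patch(0.0, "grass"), Patch(0.0, "grass"), Patch(0.0, "cobblestone")]))
    # -> [0.0, 0.0, 15.0]: the patch beyond the texture boundary is represented as slanted upward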

6.
O'Brien J, Johnston A (2000) Perception 29(4): 437-452
Both texture and motion can be strong cues to depth, and estimating slant from texture cues can be considered analogous to calculating slant from motion parallax (Malik and Rosenholtz 1994, report UCB/CSD 93/775, University of California, Berkeley, CA). A series of experiments was conducted to determine the relative weight of texture and motion cues in the perception of planar-surface slant when both texture and motion convey similar information. Stimuli were monocularly viewed images of planar surfaces slanted in depth, defined by texture and motion information that could be varied independently. Slant discrimination biases and thresholds were measured with a single-stimulus, binary-choice procedure. When the motion and texture cues depicted surfaces of identical slants, it was found that the depth-from-motion information neither reduced slant discrimination thresholds nor altered slant discrimination bias, compared with texture cues presented alone. When there was a difference in the slant depicted by motion and by texture, perceived slant was determined almost entirely by the texture cue. The regularity of the texture pattern did not affect this weighting. Results are discussed in terms of models of cue combination and previous results with different types of texture and motion information.
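The cue-combination models referred to above are commonly formalized as reliability-weighted (inverse-variance) averaging. The sketch below shows that standard formulation; the numerical variances are placeholders, not estimates from this study, which found slant judgments dominated by the texture cue.

    # Reliability-weighted cue combination (standard model sketch; sigma values are placeholders).
    def combine_slant(slant_texture, sigma_texture, slant_motion, sigma_motion):
        w_t = 1.0 / sigma_texture ** 2    # weight = reliability = inverse variance
        w_m = 1.0 / sigma_motion ** 2
        return (w_t * slant_texture + w_m * slant_motion) / (w_t + w_m)

    # If the texture cue is far more reliable, the combined estimate sits near the texture slant,
    # which is one way to describe the dominance reported above.
    print(combine_slant(slant_texture=40.0, sigma_texture=2.0, slant_motion=30.0, sigma_motion=10.0))
    # -> about 39.6 deg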

7.
We examined the interaction between motion and stereo cues to depth order along object boundaries. Relative depth was conveyed by a change in the speed of image motion across a boundary (motion parallax), the disappearance of features on a surface moving behind an occluding object (motion occlusion), or a difference in the stereo disparity of adjacent surfaces. We compared the perceived depth orders for different combinations of cues, incorporating conditions with conflicting depth orders and conditions with varying reliability of the individual cues. We observed large differences in performance between subjects, ranging from those whose depth order judgments were driven largely by the stereo disparity cues to those whose judgments were dominated by motion occlusion. The relative strength of these cues influenced individual subjects' behavior in conditions of cue conflict and reduced reliability.

8.
In theoretical analyses of visual form perception, it is often assumed that the 3-dimensional structures of smoothly curved surfaces are perceptually represented as point-by-point mappings of metric depth and/or orientation relative to the observer. This article describes an alternative theory in which it is argued that our visual knowledge of smoothly curved surfaces can also be defined in terms of local, nonmetric order relations. A fundamental prediction of this analysis is that relative depth judgments between any two surface regions should be dramatically influenced by monotonicity of depth change (or lack of it) along the intervening portions of the surface through which they are separated. This prediction is confirmed in a series of experiments using surfaces depicted with either shading or texture. Additional experiments are reported, moreover, that demonstrate that smooth occlusion contours are a primary source of information about the ordinal structure of a surface and that the depth extrema in between contours can be optically specified by differences in luminance at the points of occlusion.
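The monotonicity prediction can be illustrated with a simple ordinal check, shown below. This is our own illustration of the idea, not the authors' procedure, and the depth profiles are made-up numbers.

    # Illustration of the ordinal-structure idea: the relation between two regions is directly
    # available when depth changes monotonically along the surface between them. Made-up data.
    def monotonic(depth_profile):
        pairs = list(zip(depth_profile, depth_profile[1:]))
        return all(b >= a for a, b in pairs) or all(b <= a for a, b in pairs)

    smooth_profile = [1.0, 1.2, 1.5, 1.9]   # no depth extremum in between: easy ordinal judgment
    bumpy_profile  = [1.0, 1.6, 0.8, 1.9]   # intervening extremum (e.g. a ridge): judgment suffers
    print(monotonic(smooth_profile), monotonic(bumpy_profile))   # True False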

9.
The human visual system has a remarkable ability to construct surface representations from sparse stereoscopic, as well as texture and motion, information. In impoverished displays where few points are used to define regions in depth, the brain often interpolates depth estimates across intervening blank regions to create a compelling sense of a solid surface. The set of experiments described here examined stereoscopic interpolation using a novel technique based on lightness constancy. The effectiveness of this method is notable because it stands as the only technique to date that unequivocally examines the perception of interpolated surfaces, and not surfaces inferred subjectively from depth information in the stimulus. Further, these data support the growing evidence that a primary function of the stereoscopic system is to define three-dimensional surface structure.

10.
A series of stereograms is presented which demonstrates that texture boundaries can strongly influence the perception of discontinuities between neighbouring three-dimensional (3-D) surfaces portrayed by means of stereo cues. In these demonstration figures, no stereo information is available in the immediate vicinity of the boundary between the two 3-D stereo surfaces because all texture in that region is removed in one eye's view. On the other hand, various forms of texture boundary information are provided in the resulting monocular region. This stimulus paradigm is used to explore the question: what influence does texture boundary information have on the nature of the perceived 3-D surface that is interpolated between two stimulus regions which carry stereo cues? It is shown that if a clear-cut texture boundary is present in the monocular region then this is used by the human visual system to fix the perceived location of 3-D crease and step surface discontinuities between the stereo regions. Collett (1985) explored this issue with a similar methodology and reported weak and unreliable assistance from monocular texture boundaries in helping shape 3-D stereo surface discontinuities. The strong and robust phenomena demonstrated here seem to rely on two main differences between the present stimuli and those of Collett. In the present stimuli, figurally continuous textures containing strong texture boundaries are used, together with a technique for minimising the complications, including binocular rivalry, that arise from the borders of the stimulus regions present in only one half of each stereogram.

11.
We investigated whether the lower region effect on figure-ground organization (Vecera, Vogel, & Woodman, 2002) would generalize to contextual depth planes in vertical orientations, as is predicted by a theoretical analysis based on the ecological statistics of edges arising from objects that are attached to surfaces of support. Observers viewed left/right ambiguous figure-ground displays that occluded middle sections of four types of contextual inducers: two types of attached, receding, vertical planes (walls) that used linear perspective and/or texture gradients to induce perceived depth and two types of similar trapezoidal control figures that used either uniform color or random texture to reduce or eliminate perceived depth. The results showed a reliable bias toward seeing as “figure” the side of the figure-ground display that was attached to the receding depth plane, but no such bias for the corresponding side in either of the control conditions. The results are interpreted as being consistent with the attachment hypothesis that the lower region cue to figure-ground organization results from ecological biases in edge interpretation that arise when objects are attached to supporting surfaces in the terrestrial gravitational field.

12.
The experiments reported in this paper were designed to investigate how depth information from binocular disparity and motion parallax cues is integrated in the human visual system. Observers viewed simulated 3-D corrugated surfaces that translated to and fro across their line of sight. The depth of the corrugations was specified by either motion parallax, or binocular disparities, or some combination of the two. The amount of perceived depth in the corrugations was measured using a matching technique.

A monocularly viewed surface specified by parallax alone was seen as a rigid, corrugated surface translating along a fronto-parallel path. The perceived depth of the corrugations increased monotonically with the amount of parallax motion, just as if observers were viewing an equivalent real surface that produced the same parallax transformation. With binocular viewing and zero disparities between the images seen by the two eyes, the perceived depth was only about half of that predicted by the monocular cue. In addition, this binocularly viewed surface appeared to rotate about a vertical axis as it translated to and fro. With other combinations of motion parallax and binocular disparity, parallax only affected the perceived depth when the disparity gradients of the corrugations were shallow. The discrepancy between the parallax and disparity signals was typically resolved by an apparent rotation of the surface as it translated to and fro. The results are consistent with the idea that the visual system attempts to minimize the discrepancies between (1) the depth signalled by disparity and that required by a particular interpretation of the parallax transformation and (2) the amount of rotation required by that interpretation and the amount of rotation signalled by other cues in the display.
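The minimization idea in the last sentence can be written as a small cost function. The formalization below is ours, not the authors' model: the linearized parallax-to-depth relation and the weights are assumptions, and it is meant only to show why zero disparity plus nonzero parallax is best reconciled by reduced depth plus an apparent rotation.

    # Toy cost for reconciling parallax and disparity (simplified relation and weights are assumptions).
    import numpy as np

    def cost(rotation, parallax, translation, disparity_depth, w_depth=1.0, w_rot=1.0):
        # Attributing more of the parallax field to surface rotation leaves less to be explained by depth.
        depth_from_parallax = parallax / (translation + rotation)
        return (w_depth * (disparity_depth - depth_from_parallax) ** 2   # clash with the disparity signal
                + w_rot * rotation ** 2)                                 # rotation unsupported by other cues

    # Zero disparity but nonzero parallax: the minimum lies at a nonzero rotation, i.e. the surface
    # is represented with reduced depth and an apparent rotation as it translates.
    rotations = np.linspace(0.0, 2.0, 2001)
    best = rotations[np.argmin([cost(r, parallax=1.0, translation=1.0, disparity_depth=0.0)
                                for r in rotations])]
    print(round(float(best), 2))   # about 0.38 in these arbitrary units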

13.
M Kitazaki, S Shimojo (1998) Perception 27(10): 1153-1176
The visual system perceptually decomposes retinal image motion into three basic components that are ecologically significant for the human observer: object depth, object motion, and self motion. Using this conceptual framework, we explored the relationship between them by examining perception of objects' depth order and relative motion during self motion. We found that the visual system obeyed what we call the parallax-sign constraint, but in different ways depending on whether the retinal image motion contained velocity discontinuity or not. When velocity discontinuity existed (e.g. in dynamic occlusion, transparent motion), the subject perceptually interpreted image motion as relative motion between surfaces with stable depth order. When velocity discontinuity did not exist, the subject perceived depth-order reversal but no relative motion. The results suggest that the existence of surface discontinuity or of multiple surfaces indexed by velocity discontinuity inhibits the reversal of global depth order.

14.
Four experiments were conducted to examine the integration of depth information from binocular stereopsis and structure from motion (SFM), using stereograms simulating transparent cylindrical objects. We found that the judged depth increased when either rotational or translational motion was added to a display, but the increase was greater for rotating (SFM) displays. Judged depth decreased as texture element density increased for static and translating stereo displays, but it stayed relatively constant for rotating displays. This result indicates that SFM may facilitate stereo processing by helping to resolve the stereo correspondence problem. Overall, the results from these experiments provide evidence for a cooperative relationship between SFM and binocular disparity in the recovery of 3-D relationships from 2-D images. These findings indicate that the processing of depth information from SFM and binocular disparity is not strictly modular, and thus theories of combining visual information that assume strong modularity or independence cannot accurately characterize all instances of depth perception from multiple sources.

15.
In a series of four experiments, we evaluated observers' abilities to perceive and discriminate ordinal depth relationships between separated local surface regions for objects depicted by static, deforming, and disparate boundary contours or silhouettes. Comparisons were also made between judgments made for silhouettes and for objects defined by surface texture, which permits judgment based on conventional static texture gradients, conventional stereopsis, and conventional structure-from-motion. In all the experiments, the observers were able to detect, with relatively high precision, ordinal depth relationships, an aspect of local three-dimensional (3-D) structure, from boundary contours or silhouettes. The results of the experiments clearly demonstrate that the static, disparate, and deforming boundary contours of solid objects are perceptually important optical sources of information about 3-D shape. Other factors that were found to affect performance were the amount of separation between the local surface regions, the proximity or closeness of the regions to the boundary contour itself, and for the conditions with deforming contours, the overall magnitude of the boundary deformation.

16.
B J Gillam, S G Blackburn (1998) Perception 27(11): 1267-1286
When an isolated surface is stereoscopically slanted around its vertical axis, perceived slant is attenuated relative to prediction, whereas when a frontal-plane surface is placed above or below the slanted surface, slant is close to the predicted magnitude. Gillam et al (1988 Journal of Experimental Psychology: Human Perception and Performance 14 163-175) have argued that this slant enhancement is due to the introduction of a gradient of relative disparities across the abutment of the two surfaces which is a more effective stimulus for slant than is the gradient of absolute disparities present when the slanted surface is presented alone. To test this claim we varied the separation between the two surfaces, along either the vertical or depth axis. Since these manipulations have been reported to reduce the depth response to individual relative disparities, they should similarly affect any slant response based on a gradient of relative disparities. As predicted, increasing the separation, vertically or in depth, systematically reduced both the perceived slant of the stereoscopically slanted surface and also the stereo contrast slant induced in the frontal-plane surface. These results are not predicted by alternative accounts of slant enhancement (disparity-gradient contrast, normalisation, frame of reference). We also demonstrated that sidebands of monocular texture, when added to equate the half-image widths of the slanted surface, increased the perceived slant of this surface (particularly when presented alone) and reduced the contrast slant. Monocular texture, by signalling occlusion, appeared to provide absolute slant information which determined how the total relative slant perceived between the surfaces was allocated to each.

17.
Reinhardt-Rutland AH (1999) Perception 28(11): 1361-1371
The perceived slant of a surface relative to the frontal plane can be reduced when the surface is viewed through a frame between the observer and the surface. Aspects of this framing effect were investigated in three experiments in which observers judged the orientations-in-depth of rectangular and trapezoidal surfaces which were matched for pictorial depth. In experiments 1 and 2, viewing was stationary-monocular. In experiment 1, a frontal rectangular frame was present or absent during viewing. The perceived slants of the surfaces were reduced in the presence of the frame; the reduction for the trapezoidal surface was greater, suggesting that conflict in stimulus information contributes to the phenomenon. In experiment 2, the rectangular frame was either frontal or slanted; in a third condition, a frame was trapezoidal and frontal. The conditions all elicited similar results, suggesting that the framing effect is not explained by pictorial perception of the display, or by assimilation of the surface orientation to the frame orientation. In experiment 3, viewing was moving-monocular to introduce motion parallax; the framing effect was reduced, being appreciable only for a trapezoidal surface. The results are related to other phenomena in which depth perception of points in space tends towards a frontal plane; this frontal-plane tendency is attributed to heavy experimental demands, mainly concerning impoverished, conflicting, and distracting information.

18.
An orientation matching task was used to evaluate observers’ sensitivity to local surface orientation at designated probe points on randomly shaped 3-D objects that were optically defined by texture, lambertian shading, or specular highlights. These surfaces could be stationary or in motion, and they could be viewed either monocularly or stereoscopically, in all possible combinations. It was found that the deformations of shading and/or highlights (either over time or between the two eyes’ views) produced levels of performance similar to those obtained for the optical deformations of textured surfaces. These findings suggest that the human visual system utilizes a much richer array of optical information to support its perception of shape than is typically appreciated.

19.
Identifying contours from occlusion events
Surface contours specified by occlusion events that varied in density, velocity, and type of motion (rotation or translation) were examined in four experiments. As a fourth experimental factor, there were both figure-motion trials (the occluding surface moved over a stationary background) and background-motion trials (the background moved behind a stationary surface) in each experiment. Displays contained line patterns and rotary motion (Experiment 1), line patterns and translatory motion (Experiment 2), textured surfaces and rotary motion (Experiment 3), and textured surfaces and translatory motion (Experiment 4). Results indicate that contour identifications are more accurate with translation than with rotation, and that background-motion trials are generally easier than figure-motion trials. Although density in all experiments affected identifications in both background- and figure-motion trials, velocity did so in Experiment 4 only. In Experiments 1, 2, and 3, velocity affected identifications in background-motion trials but not in figure-motion trials. In Experiments 3 and 4, the rate of accretion and deletion of texture was a poor predictor of identification accuracy. These results are not consistent with previous accounts of contour perception from occlusion events, and may reflect an involvement of ocular pursuit as a mechanism for registering contour information.

20.
Yajima T, Ujike H, Uchikawa K (1998) Perception 27(8): 937-949
The two main questions addressed in this study were (a) what effect does yoking the relative expansion and contraction (EC) of retinal images to forward and backward head movements have on the resultant magnitude and stability of perceived depth, and (b) how does this relative EC image motion interact with the depth cues of motion parallax? Relative EC image motion was produced by moving a small CCD camera toward and away from the stimulus, two random-dot surfaces separated in depth, in synchrony with the observers' forward and backward head movements. Observers viewed the stimuli monocularly, on a helmet-mounted display, while moving their heads at various velocities, including zero velocity. The results showed that (a) the magnitude of perceived depth was smaller with smaller head velocities (<10 cm/s), including the zero-head-velocity condition, than with a larger velocity (10 cm/s), and (b) perceived depth, when motion parallax and the EC image motion cues were simultaneously presented, was equal to the greater of the two possible perceived depths produced by either of these two cues alone. The results suggest a role for nonvisual information about self-motion in perceiving depth.
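For reference, the geometry that links such expansion/contraction motion to depth is the standard looming relation; the sketch below is a textbook approximation, not the authors' analysis, and the distances are made-up values.

    # Standard looming approximation: for a surface patch at distance D approached at speed v,
    # image size is roughly proportional to 1/D, so the relative expansion rate is v / D.
    def relative_expansion_rate(head_speed_cm_s, distance_cm):
        return head_speed_cm_s / distance_cm   # larger for nearer surfaces

    near_cm, far_cm = 50.0, 100.0   # made-up distances for the two random-dot surfaces
    head_speed = 10.0               # cm/s, the head velocity at which stable depth was reported
    print(relative_expansion_rate(head_speed, near_cm), relative_expansion_rate(head_speed, far_cm))
    # The difference between the two rates is the EC counterpart of motion parallax between surfaces.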
