Similar Articles
20 similar articles found (search time: 15 ms)
1.
Albert MK, Tse PU (2000) Perception 29(4): 409-420
Rock [1973, Orientation and Form (New York: Academic Press)] showed that form perception generally depends more on the orientation of a stimulus in world coordinates than on its orientation in retinal coordinates. He suggested that the assignment of an object's 'environmental orientation' depends on gravity, visual frame of reference, and the observer's ability to impose orientation along one axis or another. This paper shows that the assignment of environmental orientation and perceived 3-D form also depends on the relationship between an object and retinally adjacent surfaces in the scene to which it might be attached. Whereas previous examples have demonstrated effects of orientation on 2-D form, we show that orientation can affect the perceived intrinsic 3-D shape of a volume.

2.
Most models of object recognition and mental rotation are based on the matching of an object's 2-D view with representations of the object stored in memory. They propose that a time-consuming normalization process compensates for any difference in viewpoint between the 2-D percept and the stored representation. Our experiment shows that such normalization is less time consuming when it has to compensate for disorientations around the vertical than around the horizontal axis of rotation. By decoupling the different possible reference frames, we demonstrate that this anisotropy of the normalization process is defined not with respect to the retinal frame of reference, but, rather, according to the gravitational or the visuocontextual frame of reference. Our results suggest that the visual system may call upon both the gravitational vertical and the visuocontext to serve as the frame of reference with respect to which 3-D objects are gauged in internal object transformations.

3.
Kitazaki M, Shimojo S (1998) Perception 27(10): 1153-1176
The visual system perceptually decomposes retinal image motion into three basic components that are ecologically significant for the human observer: object depth, object motion, and self motion. Using this conceptual framework, we explored the relationship between them by examining perception of objects' depth order and relative motion during self motion. We found that the visual system obeyed what we call the parallax-sign constraint, but in different ways depending on whether the retinal image motion contained velocity discontinuity or not. When velocity discontinuity existed (e.g. in dynamic occlusion, transparent motion), the subject perceptually interpreted image motion as relative motion between surfaces with stable depth order. When velocity discontinuity did not exist, he/she perceived depth-order reversal but no relative motion. The results suggest that the existence of surface discontinuity or of multiple surfaces indexed by velocity discontinuity inhibits the reversal of global depth order.

4.
Shimojo S, Nakayama K (1990) Perception 19(3): 285-299
A series of demonstrations was created in which the perceived depth of targets was controlled by stereoscopic disparity. A closer object (a cloud) was made to jump back and forth horizontally, partially occluding a farther object (a full moon). The more distant moon appeared stationary even though the unoccluded portion of it, a crescent, changed position. Reversal of the relative depth of the moon and cloud gave a totally different percept: the crescent appeared to flip back and forth in the front depth plane. Thus, the otherwise-robust apparent motion of the moon crescents was completely abolished in the cloud-closer case alone. This motion-blocking effect is attributed to the 'amodal presence' of the occluded surface continuing behind the occluding surface. To measure the effect of this occluded 'invisible' surface quantitatively, a bistable apparent motion display was used (Ramachandran and Anstis 1983a): two small rectangular-shaped targets changed their positions back and forth between two frames, and the disparity of a large centrally positioned rectangle was varied. When the perceived depths supported the possibility of amodal completion behind the large rectangle, increased vertical motion of the targets was found, suggesting that the amodal presence of the targets behind the occluder had effectively changed the center position of the moving targets for purposes of motion correspondence. Amodal contours are literally 'invisible', yet it is hypothesized that they have a neural representation at sufficiently early stages of visual processing to alter the correspondence solving process for apparent motion.

5.
Nijhawan R (2001) Perception 30(3): 263-282
An object flashed briefly in a given location, the moment another moving object arrives in the same location, is perceived by observers as lagging behind the moving object (flash-lag effect). Does the flash-lag effect occur if the retinal image of the moving object is rendered stationary by smooth pursuit of the moving object? Does the flash-lag effect occur if the retinal image of a stationary object is caused to move by smooth-pursuit eye movements? A disk was briefly flashed in the center of a moving ring such that the ring center was completely 'filled' by the disk. In this display, observers perceived the flashed disk to lag such that it appeared only to partially 'fill' the ring center. The 'unfilled' portion (perceived void) of the moving ring was seen in the color of the background. With smooth pursuit of the ring, the flash-lag effect was eliminated, and observers saw the flashed disk centered on the moving ring. A strong flash-lag effect was observed when observers smoothly pursued a moving point target past a continuously visible stationary ring. Once again, the flashed disk appeared to only partially fill the center of the continuously visible stationary ring, yielding a vivid 'perceived void'. These results are discussed in terms of neural delays and their compensation.
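The "neural delays and their compensation" account in this abstract has a simple arithmetic core: if the visual system extrapolates a moving object's position across its processing latency but cannot extrapolate a one-off flash, the predicted spatial lag is speed multiplied by latency. A minimal sketch of that relation (the function name and the speed and latency values are illustrative assumptions, not figures from the study):

```python
# Hypothetical flash-lag arithmetic: the moving ring's position is
# extrapolated across the visual processing latency, the flash's is not,
# so the flash appears to lag by speed * latency.
def flash_lag_deg(speed_deg_per_s: float, latency_s: float) -> float:
    """Predicted lag (degrees of visual angle) of a flash relative to
    a moving object, under a pure motion-extrapolation account."""
    return speed_deg_per_s * latency_s

# Example: a ring moving at 10 deg/s with an assumed 80 ms latency
# would predict a lag of roughly 0.8 deg.
print(flash_lag_deg(10.0, 0.080))
```

On this account, smooth pursuit of the ring nulls its retinal motion, so the predicted lag goes to zero, consistent with the elimination of the effect reported above.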

6.
Five experiments are reported in which subjects judged the movement or spatial location of a visible object that underwent a combination of real and induced (illusory) motion. When subjects attempted to reproduce the distance that the object moved by moving their unseen hands, they were more affected by the illusion than when they pointed to the object's perceived final location. Furthermore, pointing to the final location was more affected by the illusion when the hand movement began from the same position as that at which the object initially appeared, as compared with responses that began from other positions. The results suggest that people may separately encode two distinct types of spatial information: (1) information about the distance moved by an object and (2) information about the absolute spatial location of the object. Information about distance is more susceptible to the influence of an induced motion illusion, and people appear to rely differentially on the different types of spatial information, depending on features of the pointing response. The results have important implications for the mechanisms that underlie spatially oriented behavior in general.

7.
The perceived position of an object is determined not only by the retinal location of the object but also by gaze direction, eye movements, and the motion of the object itself. Recent evidence further suggests that the motion of one object can alter the perceived positions of stationary objects in remote regions of visual space (Whitney & Cavanagh, 2000). This indicates that there is an influence of motion on perceived position, and that this influence can extend over large areas of the visual field. Yet, it remains unclear whether the motion of one object shifts the perceived positions of other moving stimuli. To test this we measured two well-known visual illusions, the Fröhlich effect and representational momentum, in the presence of extraneous surrounding motion. We found that the magnitude of these mislocalizations was altered depending on the direction and speed of the surrounding motion. The results indicate that the positions assigned to stationary and moving objects are affected by motion signals over large areas of space and that both types of stimuli may be assigned positions by a common mechanism.

8.
Elder JH, Trithart S, Pintilie G, MacLean D (2004) Perception 33(11): 1319-1338
We used a visual-search method to investigate the role of shadows in the rapid discrimination of scene properties. Targets and distractors were light or dark 2-D crescents of identical shape and size, on a mid-grey background. From the dark stimuli, illusory 3-D shapes can be created by blurring one arc of the crescent. If the inner arc is blurred, the stimulus is perceived as a curved surface with attached shadow. If the outer arc is blurred, the stimulus is perceived as a flat surface casting a shadow. In a series of five experiments, we used this simple stimulus to map out the shadow properties that the human visual system can rapidly detect and discriminate. To subtract out 2-D image factors, we compared search performance for dark-shadow stimuli with performance for light-shadow stimuli which generally do not elicit strong 3-D percepts. We found that the human visual system is capable of rapid discrimination based upon a number of different shadow properties, including the type of the shadow (cast or attached), the direction of the shadow, and the displacement of the shadow. While it is clear that shadows are not simply discounted in rapid search, it is unclear at this stage whether rapid discrimination is acting upon shadows per se or upon representations of 3-D object shape and position elicited by perceived shadows.

9.
In the present study, we examined whether it is easier to judge when an object will pass one's head if the object's surface is textured. There are three reasons to suspect that this might be so: First, the additional (local) optic flow may help one judge the rate of expansion and the angular velocity more reliably. Second, local deformations related to the change in angle between the object and the observer could help track the object's position along its path. Third, more reliable judgments of the object's shape could help separate global expansion caused by changes in distance from expansion due to changes in the angle between the object and the observer. We can distinguish among these three reasons by comparing performance for textured and uniform spheres and disks. Moving objects were displayed for 0.5-0.7 sec. Subjects had to decide whether the object would pass them before or after a beep that was presented 1 sec after the object started moving. Subjects were not more precise with textured objects. When the disk rotated in order to compensate for the orientation-related contraction that its image would otherwise undergo during its motion, it appeared to arrive later, despite the fact that this strategy increases the global rate of expansion. We argue that this is because the expected deformation of the object's image during its motion is considered when time to passage is judged. Therefore, the most important role for texture in everyday judgments of time to passage is probably that it helps one judge the object's shape and thereby estimate how its image will deform as it moves.
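The "rate of expansion" judgment discussed in this abstract is usually formalized as tau: for an object of fixed size approaching at constant speed, its visual angle divided by the angle's rate of change equals the physical time to passage. A minimal sketch of that identity under the small-angle approximation (function names and numerical values are illustrative, not from the experiment):

```python
def time_to_passage(distance_m: float, speed_m_per_s: float) -> float:
    """Ground-truth time until an object at constant speed reaches the eye."""
    return distance_m / speed_m_per_s

def tau_estimate(object_size_m: float, distance_m: float,
                 speed_m_per_s: float) -> float:
    """Tau: visual angle divided by its rate of expansion.

    For a small object of fixed size, theta ~ size/distance, so
    dtheta/dt ~ size*speed/distance**2, and theta / (dtheta/dt)
    reduces to distance/speed -- the true time to passage."""
    theta = object_size_m / distance_m                     # radians (small angle)
    theta_dot = object_size_m * speed_m_per_s / distance_m ** 2
    return theta / theta_dot

# A 0.5 m object 10 m away approaching at 2 m/s: tau and the
# physical time to passage agree.
print(time_to_passage(10.0, 2.0), tau_estimate(0.5, 10.0, 2.0))
```

The experiment's rotating-disk manipulation breaks exactly this identity: part of the image expansion then reflects a change in orientation rather than a change in distance, which is why shape knowledge matters.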

10.
When a rigid object moves toward the eye, it is usually perceived as being rigid. However, in the case of motion away from the eye, the motion and structure of the object are perceived nonveridically, with the percept tending to reflect the nonrigid transformations that are present in the retinal image. This difference in response to motion to and from the observer was quantified in an experiment using wire-frame computer-generated boxes which moved toward and away from the eye. Two theoretical systems are developed by which uniform three-dimensional velocity can be recovered from an expansion pattern of nonuniform velocity vectors. It is proposed that the human visual system uses two similar systems for processing motion in depth. The mechanism used for motion away from the eye produces perceptual errors because it is not suited to objects with a depth component.

11.
The human visual system possesses a remarkable ability to reconstruct the shape of an object that is partly occluded by an interposed surface. Behavioral results suggest that, under some circumstances, this perceptual process (termed amodal completion) progresses from an initial representation of local image features to a completed representation of a shape that may include features that are not explicitly present in the retinal image. Recent functional magnetic resonance imaging (fMRI) studies have shown that the completed surface is represented in early visual cortical areas. We used fMRI adaptation, combined with brief, masked exposures, to track the amodal completion process as it unfolds in early visual cortical regions. We report evidence for an evolution of the neural representation from the image-based feature representation to the completed representation. Our method offers the possibility of measuring changes in cortical activity using fMRI over a time scale of a few hundred milliseconds.

12.
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.

13.
In this paper, we analyze and test three theories of 3-D shape perception: (1) Helmholtzian theory, which assumes that perception of the shape of an object involves reconstructing Euclidean structure of the object (up to size scaling) from the object's retinal image after taking into account the object's orientation relative to the observer, (2) Gibsonian theory, which assumes that shape perception involves invariants (projective or affine) computed directly from the object's retinal image, and (3) perspective invariants theory, which assumes that shape perception involves a new kind of invariants of perspective transformation. Predictions of these three theories were tested in four experiments. In the first experiment, we showed that reliable discrimination between a perspective and nonperspective image of a random polygon is possible even when information only about the contour of the image is present. In the second experiment, we showed that discrimination performance did not benefit from the presence of a textured surface, providing information about the 3-D orientation of the polygon, and that the subjects could not reliably discriminate between the 3-D orientation of the textured surface and that of a shape. In the third experiment, we compared discrimination for solid shapes that either had flat contours (cuboids) or did not have visible flat contours (cylinders). The discrimination was very reliable in the case of cuboids but not in the case of cylinders. In the fourth experiment, we tested the effectiveness of planar motion in perception of distances and showed that the discrimination threshold was large and similar to thresholds when other cues to 3-D orientation were used. All these results support perspective invariants as a model of 3-D shape perception.

14.
Albert MK (2000) Perception 29(5): 601-608
The task of human vision is to reliably infer useful information about the external environment from images formed on the retinae. In general, the inference of scene properties from retinal images is not deductive; it requires knowledge about the external environment. Further, it has been suggested that the environment must be regular in some way in order for any scene properties to be reliably inferred. In particular, Knill and Kersten [1991, in Pattern Recognition by Man and Machine Ed. R J Watt (London: Macmillan)] and Jepson et al [1996, in Bayesian Approaches to Perception Eds D Knill, W Richards (Cambridge: Cambridge University Press)] claim that, given an 'unbiased' prior probability distribution for the scenes being observed, the generic viewpoint assumption is not probabilistically valid. However, this claim depends upon the use of representation spaces that may not be appropriate for the problems they consider. In fact, it is problematic to define a rigorous criterion for a probability distribution to be considered 'random' or 'regularity-free' in many natural domains of interest. This problem is closely related to Bertrand's paradox. I propose that, in the case of 'unbiased' priors, the reliability of inferences based on the generic viewpoint assumption depends partly on whether or not an observed coincidence in the image involves features known to be on the same object. This proposal is based on important differences between the distributions associated with: (i) a 'random' placement of features in 3-D, and (ii) the positions of features on a 'randomly shaped' and 'randomly posed' 3-D object. Similar considerations arise in the case of inferring 3-D motion from image motion.

15.
In two experiments, we manipulated the properties of 3-D objects and terrain texture in order to investigate their effects on active heading control during simulated flight. Simulated crosswinds were used to introduce a rotational component into the retinal flow field that presumably provided the visual cues used for heading control. An active control task was used so that the results could be generalized to real-world applications such as flight simulation. In Experiment 1, we examined the effects of three types of terrain, each of which was presented with and without 3-D objects (trees), and found that the presence of 3-D objects was more important than terrain texture for precise heading control. In Experiment 2, we investigated the effects of varying the height and density of 3-D objects and found that increasing 3-D object density improved heading control, but that 3-D object height had only a small effect. On the basis of these results, we conclude that the vertical contours improved active heading control by enhancing the motion parallax information contained in the retinal flow.

16.
The relative visual position of a briefly flashed stimulus is systematically modified in the presence of motion signals. We investigated the two-dimensional distortion of the positional representation of a flash relative to a moving stimulus. Analysis of the spatial pattern of mislocalization revealed that the perceived position of a flash was not uniformly displaced, but instead shifted toward a single point of convergence that followed the moving object from behind at a fixed distance. Although the absolute magnitude of mislocalization increased with motion speed, the convergence point remained unaffected. The motion modified the perceived position of a flash, but had little influence on the perceived shape of a spatially extended flash stimulus. These results demonstrate that motion anisotropically distorts positional representation after the shapes of objects are represented. Furthermore, the results imply that the flash-lag effect may be considered a special case of two-dimensional anisotropic distortion.

17.
Results from luminance discriminations with objects defined by apparent motion suggest an object-specific temporal integration of luminance. Further experiments suggested that this integration is weighted to favor the initial display of an object and involves the percept of surface reflectance (lightness). These results are consistent with the object-file metaphor suggested by D. Kahneman, A. Treisman, and B. Gibbs (1992), in which an object's perceived initial surface reflectance is assigned and maintained in an object file. A strategy is proposed in which the intrinsic properties of an object are assumed not to change over time. As intrinsic properties are generally invariant and possibly difficult to compute, this strategy would have the advantage of relatively high accuracy at relatively low computational cost.

18.
Direct and indirect theories of perception differ on whether form perception depends on higher order invariants or on features in the retinal image. The present paper describes a demonstration that an object can be recognized through a higher order pattern (dynamic occlusion) without any of the object's features being displayed. Stimuli consist of computer simulations of black wireframe objects moving in front of, and occluding, a random layout of point lights on a black background. In this way, no single videoframe of the stimuli displays any of the object's features, and motion of the amodal object in front of the light points is necessary for the form to become visible. The forms can also be recognized when isoluminous colours are used for background and point lights. Finally, it is noted that, if the observer can actively control the motion of the object, e.g., by moving a computer mouse, recognition is enhanced as in Gibson's (1962) experiment on active touch.

19.
When a figure moves behind a narrow aperture in an opaque surface, if it is perceived as a figure, its shape will often appear distorted. Under such anorthoscopic conditions, the speed or direction of the object's motion is ambiguous. However, when the observer simultaneously tracks a moving target, a figure is always perceived, and its precise shape is a function of the speed or direction of tracking. The figure is seen as moving with the speed or in the direction of the target. Thus, it is argued that eye movement serves as a cue to the figure's motion, which, in turn, determines its perceived length or orientation.

20.
Meng and Sedgwick (2001, 2002) found that the perceived distance of an object in a stationary scene was determined by the position at which it contacted the ground in the image, or by nested contact relations among intermediate surfaces. Three experiments investigated whether motion parallax would allow observers to determine the distance of a floating object without intermediate contact relations. The displays consisted of one or more computer-generated textured cylinders inserted into a motion picture or still image of an actual 3-D scene. In the motion displays, both the cylinders and the scene translated horizontally. Judged distance for a single cylinder floating above the ground was determined primarily by the location at which the object contacted the ground in the projected image ("optical contact"), but was altered in the direction indicated by motion parallax. When more than one cylinder was present and observers were asked to judge the distance of the top cylinder, judged distance moved closer to that indicated by motion parallax, almost matching that value with three cylinders. These results indicate that judged distance in a dynamic scene is affected both by optical contact and motion parallax, with motion parallax more effective when multiple objects are present.
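The motion-parallax cue exploited in these displays has a simple geometric form: during lateral translation, the angular velocity of a point near the line of sight is inversely proportional to its distance, so relative image motion specifies relative distance. A hedged sketch of this geometry under the small-angle approximation (all names and values are illustrative, not from the experiments):

```python
def angular_velocity(observer_speed_m_s: float, distance_m: float) -> float:
    """Image angular velocity (rad/s) of a point at distance_m metres
    during lateral translation at observer_speed_m_s, for a point near
    the line of sight (small-angle approximation)."""
    return observer_speed_m_s / distance_m

def distance_from_parallax(observer_speed_m_s: float, omega_rad_s: float) -> float:
    """Invert the parallax relation to recover distance from image motion."""
    return observer_speed_m_s / omega_rad_s

# A cylinder 4 m away moves twice as fast in the image as one 8 m away:
# the relative-motion cue observers could weigh against optical contact.
near = angular_velocity(1.0, 4.0)
far = angular_velocity(1.0, 8.0)
print(near / far)
```

The finding that parallax dominates only with multiple cylinders fits this geometry: the cue is fundamentally relative, so a single floating object provides no second motion against which its own can be compared.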


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.), ICP license 京ICP备09084417号