Similar documents
20 similar documents found (search time: 7 ms)
1.
In the present research, we investigated the depth information contained in the representations of apparently moving 3-D objects. In three experiments, we measured the magnitude of representational momentum (RM) as an index of the consistency of an object's representation. Experiment 1A revealed that RM magnitude was greater when shaded, convex, apparently moving objects shifted to a flat circle than when they shifted to a shaded, concave hemisphere. The difference diminished when the apparently moving objects were concave hemispheres (Experiment 1B). Using luminance-polarized circles, Experiment 2 confirmed that these results were not due to the luminance information of shading. Experiment 3 demonstrated that RM magnitude was greater when convex apparently moving objects shifted to particular blurred, low-pass-filtered convex hemispheres than when they shifted to concave hemispheres. These results suggest that the internal representation of an object in apparent motion contains incomplete depth information, intermediate between that of 2-D and 3-D objects, particularly with regard to convexity information carried by low-spatial-frequency components.

2.
Effects of information specifying the position of an object in a 3-D scene were investigated in two experiments with twelve observers. To separate the effects of the change in scene position from the changes in the projection that occur with increased distance from the observer, the same projections were produced by simulating (a) a constant object at different scene positions and (b) different objects at the same scene position. The simulated scene consisted of a ground plane, a ceiling plane, and a cylinder on a pole attached to both planes. Motion-parallax scenes were studied in one experiment; texture-gradient scenes were studied in the other. Observers adjusted a line to match the perceived internal depth of the cylinder. Judged depth for objects matched in simulated size decreased as simulated distance from the observer increased. Judged depth decreased at a faster rate for the same projections shown at a constant scene position. Adding object-centered depth information (object rotation) increased judged depth for the motion-parallax displays. These results demonstrate that the judged internal depth of an object is reduced by the change in projection that occurs with increased distance, but this effect is diminished if information for change in scene position accompanies the change in projection.

3.
The visual system has been suggested to integrate different views of an object in motion. We investigated differences in the way moving and static objects are represented by testing for priming effects to previously seen ("known") and novel object views. We showed priming effects for moving objects across image changes (e.g., mirror reversals, changes in size, and changes in polarity) but not over temporal delays. The opposite pattern of results was observed for objects presented statically; that is, static objects were primed over temporal delays but not across image changes. These results suggest that representations for moving objects are: (1) updated continuously across image changes, whereas static object representations generalize only across similar images, and (2) more short-lived than static object representations. These results suggest two distinct representational mechanisms: a static object mechanism rather spatially refined and permanent, possibly suited for visual recognition, and a motion-based object mechanism more temporary and less spatially refined, possibly suited for visual guidance of motor actions.

4.
Five experiments were designed to determine whether a rotating, transparent 3-D cloud of dots (simulated sphere) could influence the perceived direction of rotation of a subsequent sphere. Experiment 1 established conditions under which the direction of rotation of a virtual sphere was perceived unambiguously. When a near-far luminance difference and perspective depth cues were present, observers consistently saw the sphere rotate in the intended direction. In Experiment 2, a near-far luminance difference was used to create an unambiguous rotation sequence that was followed by a directionally ambiguous rotation sequence that lacked both the near-far luminance cue and the perspective cue. Observers consistently saw the second sequence as rotating in the same direction as the first, indicating the presence of 3-D visual inertia. Experiment 3 showed that 3-D visual inertia was sufficiently powerful to bias the perceived direction of a rotation sequence made unambiguous by a near-far luminance cue. Experiment 5 showed that 3-D visual inertia could be obtained using an occlusion depth cue to create an unambiguous inertia-inducing sequence. Finally, Experiments 2, 4, and 5 all revealed a fast-decay phase of inertia that lasted for approximately 800 msec, followed by an asymptotic phase that lasted for periods as long as 1,600 msec. The implications of these findings are examined with respect to motion mechanisms of 3-D visual inertia.

5.
Lightness constancy in complex scenes requires that the visual system take account of information concerning variations of illumination falling on visible surfaces. Three experiments on the perception of lightness for three-dimensional (3-D) curved objects show that human observers are better able to perform this accounting for certain scenes than for others. The experiments investigate the effect of object curvature, illumination direction, and object shape on lightness perception. Lightness constancy was quite good when a rich local gray-level context was provided. Deviations occurred when both illumination and reflectance changed along the surface of the objects. Does the perception of a 3-D surface and illuminant layout help calibrate lightness judgments? Our results showed a small but consistent improvement in lightness matches on ellipsoid shapes, relative to flat rectangle shapes, under illumination conditions that produce similar image gradients. Illumination change over 3-D forms is therefore taken into account in lightness perception.

6.
In a recent study, Pelli (1999 Science 285 844-846) performed a set of perceptual experiments using portrait paintings by Chuck Close. Close's work is similar to the 'Lincoln' portraits of Harmon and Julesz (1973 Science 180 1194-1197) in that they are composite images consisting of coarsely sampled, individually painted, mostly homogeneous cells. Pelli showed that perceived shape was dependent on size, refuting findings that perception of this type is scale-invariant. In an attempt to broaden this finding, we designed a series of experiments to investigate the interaction of 2-D scale and 3-D structure in our perception of 3-D shape. We present a series of experiments in which field of view, 3-D object complexity, 2-D image resolution, viewing orientation, and subject matter of the stimulus are manipulated. On each trial, observers indicated whether the depicted objects appeared to be 2-D or 3-D. Results for face stimuli are similar to Pelli's, while more geometrically complex stimuli show a further interaction of the 3-D information with distance and image information. Complex objects need more image information to be seen as 3-D when close; however, as they are moved further away from the observer, there is a bias for seeing them as 3-D objects rather than 2-D images. Finally, image orientation, relative to the observer, shows little effect, suggesting the participation of higher-level processes in the determination of the 'solidness' of the depicted object. Thus, we show that the critical image resolution depends systematically on the geometric complexity of the object depicted.

7.
8.
A single line was presented in a succession of orientations, each orientation separated by a fixed angle and by a fixed interval of time, and subjects reported the number of successive lines that appeared to rotate together. The perceived number of rotating lines increased linearly with the rate of stimulus presentation, with a slope that was proportional to the spatial separation. The linear functions obtained in this first experiment predicted the results of a second experiment in which subjects adjusted the spatial and temporal variables to a discrimination threshold for seeing two rotating lines. If the slope of the linear functions is considered to be an estimate of the duration of visible persistence, then these results suggest that the visible persistence of a briefly presented stimulus increases with the distance separating that stimulus from other stimuli.

9.
Most models of object recognition and mental rotation are based on the matching of an object's 2-D view with representations of the object stored in memory. They propose that a time-consuming normalization process compensates for any difference in viewpoint between the 2-D percept and the stored representation. Our experiment shows that such normalization is less time consuming when it has to compensate for disorientations around the vertical than around the horizontal axis of rotation. By decoupling the different possible reference frames, we demonstrate that this anisotropy of the normalization process is defined not with respect to the retinal frame of reference, but, rather, according to the gravitational or the visuocontextual frame of reference. Our results suggest that the visual system may call upon both the gravitational vertical and the visuocontext to serve as the frame of reference with respect to which 3-D objects are gauged in internal object transformations.

10.
Identification thresholds and the corresponding efficiencies (ideal/human thresholds) are typically computed by collapsing data across an entire stimulus set within a given task in order to obtain a "multiple-item" summary measure of information use. However, some individual stimuli may be processed more efficiently than others, and such differences are not captured by conventional multiple-item threshold measurements. Here, we develop and present a technique for measuring "single-item" identification efficiencies. The resulting measure describes the ability of the human observer to make use of the information provided by a single stimulus item within the context of the larger set of stimuli. We applied this technique to the identification of 3-D rendered objects (Exp. 1) and Roman alphabet letters (Exp. 2). Our results showed that efficiency can vary markedly across stimuli within a given task, demonstrating that single-item efficiency measures can reveal important information that is lost by conventional multiple-item efficiency measures.
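As a minimal sketch of the ratio defined in this abstract (efficiency = ideal-observer threshold / human threshold), the contrast between a single-item measure and a collapsed multiple-item summary can be illustrated with hypothetical threshold values; the stimulus names and numbers below are illustrative assumptions, not data from the study:

```python
def efficiency(ideal_threshold, human_threshold):
    """Identification efficiency: ideal-observer threshold divided by
    human threshold. Dimensionless; below 1 for a suboptimal observer."""
    return ideal_threshold / human_threshold

# Hypothetical per-stimulus thresholds within one identification task.
ideal = {"cube": 0.02, "cone": 0.02, "torus": 0.02}
human = {"cube": 0.10, "cone": 0.40, "torus": 0.20}

# "Single-item" efficiencies: one value per stimulus.
single_item = {name: efficiency(ideal[name], human[name]) for name in ideal}

# Conventional "multiple-item" summary collapses across the whole set,
# hiding the item-to-item variation visible in single_item.
multi_item = efficiency(sum(ideal.values()), sum(human.values()))
```

With these toy numbers, the single-item values spread over a factor of four while the collapsed summary returns one intermediate figure, which is the information loss the abstract describes.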

11.
M. Lappe, B. Krekelberg. Perception, 1998, 27(12): 1437-1449
Moving objects occupy a range of positions during the period of integration of the visual system. Nevertheless, a unique position is usually observed. We investigate how the trajectory of a stimulus influences the position at which the object is seen. It has been shown before that moving objects are perceived ahead of static objects shown at the same place and time. We show here that this perceived position difference builds up over the first 500 ms of a visible trajectory. Discontinuities in the visual input reduce this buildup when the presentation frequency of a stimulus with a duration of 42 ms falls below 16 Hz. We interpret this relative mislocalization in terms of a spatiotemporal-filtering model. This model fits the data well, given two assumptions: first, the position signal persists even after the objects are no longer visible; second, the perceived distance is a 500 ms average of the difference between these position signals.
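The second assumption of the model, that the perceived moving-static offset is a 500 ms average of the difference between the two position signals, can be sketched numerically. This is a toy illustration under assumed parameters (sampling step, trajectory speed), not the authors' fitted model:

```python
import numpy as np

DT = 1.0       # sampling step in ms (assumed)
WINDOW = 500   # averaging window from the model, in ms

def perceived_offset(moving_pos, static_pos):
    """Average the position-signal difference over the last 500 ms,
    as the model's second assumption prescribes."""
    diff = np.asarray(moving_pos) - np.asarray(static_pos)
    n = int(WINDOW / DT)
    return diff[-n:].mean()

# Toy trajectory: the moving object advances at 0.01 deg/ms for 1 s,
# while the static comparison stays at position 0.
t = np.arange(0, 1000, DT)
moving = 0.01 * t
static = np.zeros_like(t)

offset = perceived_offset(moving, static)  # positive: forward mislocalization
```

The averaging makes the predicted offset grow as the visible trajectory lengthens, consistent with the buildup over the first 500 ms reported in the abstract.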

12.
H. Umemura, H. Watanabe, K. Matsuoka. Perception, 2007, 36(8): 1229-1243
We examined whether the position of objects in external space affects a visual-search task involving the tilt of 3-D objects. An array of cube-like objects was stereoscopically displayed at a distance of 4.5 m on a large screen 1.5 m above or below eye height. Subjects were required to detect a downward-tilted target among upward-tilted distractors, or an upward-tilted target among downward-tilted distractors. When the stimuli consisted of shaded parallelepipeds whose top/bottom faces were lighter than their side faces, the upward-tilted target was detected faster. This result was in accordance with the 'top-view assumption' reported in previous research. Displaying stimuli in the upper position degraded overall performance. However, when shaded objects whose top/bottom faces were darker than their side faces were displayed, detection of a downward-tilted target became as efficient as that of an upward-tilted target, but only at the upper position. These results indicate that the spatial position of the stimulus can promote the detection of a downward-tilted target when shading and perspective information are consistent with the viewing direction.

13.
Explicit memory tests such as recognition typically access semantic, modality-independent representations, while perceptual implicit memory tests typically access presemantic, modality-specific representations. By demonstrating comparable cross- and within-modal priming using vision and haptics with verbal materials (Easton, Srinivas, & Greene, 1997), we recently questioned whether the representations underlying perceptual implicit tests were modality specific. Unlike vision and audition, with vision and haptics verbal information can be presented in geometric terms to both modalities. The present experiments extend this line of research by assessing implicit and explicit memory within and between vision and haptics in the nonverbal domain, using both 2-D patterns and 3-D objects. Implicit test results revealed robust cross-modal priming for both 2-D patterns and 3-D objects, indicating that vision and haptics shared abstract representations of object shape and structure. Explicit test results for 3-D objects revealed modality specificity, indicating that the recognition system keeps track of the modality through which an object is experienced.

14.
Visual short-term memory (VSTM) plays an important role in visual cognition. Although objects are located on different 3-dimensional (3-D) surfaces in the real world, how VSTM capacity may be influenced by the presence of multiple 3-D surfaces has never been examined. By manipulating the binocular disparities of visual displays, the authors found that more colored objects could be held in VSTM when they were placed on two planar 3-D surfaces rather than on one. This between-surface benefit in VSTM was present only when binding of objects' colors to their 3-D locations was required (i.e., when observers needed to remember which color appeared where). When binding was not required, no between-surface benefit in VSTM was observed. This benefit could not be attributed to the number of spatial locations attended within a given surface. Nor was it due to a general perceptual grouping effect, because grouping by motion and grouping by different regions of the same surface did not yield the same benefit. This increment in capacity indicates that VSTM benefits from the placement of objects in a 3-D scene.

15.
The present study investigated the time course of visual information processing that is responsible for successful object change detection involving the configuration and shape of 3-D novel object parts. Using a one-shot change detection task, we manipulated stimulus and interstimulus mask durations (40-500 msec). Experiments 1A and 1B showed no change detection advantage for configuration at very short (40-msec) stimulus durations, but the configural advantage did emerge with durations between 80 and 160 msec. In Experiment 2, we showed that, at shorter stimulus durations, the number of parts changing was the best predictor of change detection performance. Finally, in Experiment 3, with a stimulus duration of 160 msec, configuration change detection was found to be highly accurate for each of the mask durations tested, suggesting a fast processing speed for this kind of change information. However, switch and shape change detection reached peak levels of accuracy only when mask durations were increased to 160 and 320 msec, respectively. We conclude that, with very short stimulus exposures, successful object change detection depends primarily on quantitative measures of change. However, with longer stimulus exposures, the qualitative nature of the change becomes progressively more important, resulting in the well-known configural advantage for change detection.

16.
In a series of four experiments, we explored whether pigeons complete partially occluded moving shapes. Four pigeons were trained to discriminate between a complete moving shape and an incomplete moving shape in a two-alternative forced-choice task. In testing, the birds were presented with a partially occluded moving shape. In Experiment 1, none of the pigeons appeared to complete the testing stimulus; instead, they appeared to perceive the testing stimulus as incomplete fragments. However, in Experiments 2, 3, and 4, three of the birds appeared to complete the partially occluded moving shapes. These rare positive results suggest that motion may facilitate amodal completion by pigeons, perhaps by enhancing the figure-ground segregation process.

17.
18.
19.
When moving toward a stationary scene, people judge their heading quite well from visual information alone. Much experimental and modeling work has been presented to analyze how people judge their heading for stationary scenes. However, in everyday life, we often move through scenes that contain moving objects. Most models have difficulty computing heading when moving objects are in the scene, and few studies have examined how well humans perform in the presence of moving objects. In this study, we tested how well people judge their heading in the presence of moving objects. We found that people perform remarkably well under a variety of conditions. The only condition that affects an observer's ability to judge heading accurately consists of a large moving object crossing the observer's path. In this case, the presence of the object causes a small bias in the heading judgments. For objects moving horizontally with respect to the observer, this bias is in the object's direction of motion. These results present a challenge for computational models.

20.
Four experiments were directed at understanding the influence of multiple moving objects on curvilinear (i.e., circular and elliptical) heading perception. Displays simulated observer movement over a ground plane in the presence of moving objects depicted as transparent, opaque, or black cubes. Objects either moved parallel to or intersected the observer's path and either retreated from or approached the moving observer. Heading judgments were accurate and consistent across all conditions. The significance of these results for computational models of heading perception and for information in the global optic flow field about observer and object motion is discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号