Similar Documents
20 similar documents found (search time: 15 ms)
1.
Humans and pigeons were trained to discriminate between 2 views of actual 3-D objects or their photographs. They were tested on novel views that were either within the closest rotational distance between the training views (interpolated) or outside of that range (extrapolated). When training views were 60 degrees apart, pigeons, but not humans, recognized novel views of actual objects better than their pictures. Further, both species recognized interpolated views of both stimulus types better than extrapolated views, but a single distinctive geon enhanced recognition of novel views only for humans. When training views were 90 degrees apart, pigeons recognized interpolated views better than extrapolated views with actual objects but not with photographs. Thus, pigeons may represent actual objects differently than their pictures.

2.
In studies related to human movement, linked segment models (LSMs) are often used to quantify forces and torques generated in body joints. Some LSMs represent only a few body segments; others, for instance those used in studies on the control of whole-body movements, include all body segments. As a consequence of the complexity of 3-dimensional (3-D) analyses, most LSMs are restricted to one plane of motion. However, in asymmetric movements this may result in a loss of relevant information. The aim of the current study was to develop and validate a 3-D LSM including all body segments. Braces with markers attached to all body segments were used to record the body movements. The model was validated by comparing the measured ground reaction force with the estimated one and by comparing the torques at the lumbo-sacral joint that resulted from a bottom-up and a top-down mechanical analysis. For both comparisons, reasonable to good agreement was found. Sources of error that could not be analysed this way were subjected to an additional sensitivity analysis. It was concluded that the internal validity of the current model is quite satisfactory.
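The bottom-up mechanical analysis described above can be illustrated with a single Newton–Euler step. This planar, one-segment sketch is a simplification for illustration only, not the paper's full 3-D model; the function name and argument layout are assumptions:

```python
import numpy as np

def bottom_up_segment(m, I, a_com, alpha, r_prox, r_dist, F_dist, T_dist,
                      g=np.array([0.0, -9.81])):
    """One bottom-up Newton-Euler step for a planar body segment.

    m, I         : segment mass and moment of inertia about its COM
    a_com, alpha : linear COM acceleration (2-vector) and angular acceleration
    r_prox/r_dist: vectors from the COM to the proximal/distal joint
    F_dist/T_dist: force and torque exerted on the segment at the distal joint
    Returns the net force and torque at the proximal joint.
    """
    # Newton: F_prox + F_dist + m*g = m*a_com
    F_prox = m * a_com - m * g - F_dist
    # planar cross product: r x F = rx*Fy - ry*Fx
    cross = lambda r, F: r[0] * F[1] - r[1] * F[0]
    # Euler about the COM: T_prox + T_dist + r_prox x F_prox + r_dist x F_dist = I*alpha
    T_prox = I * alpha - T_dist - cross(r_prox, F_prox) - cross(r_dist, F_dist)
    return F_prox, T_prox
```

For a static segment whose distal end carries a ground reaction force equal to its weight, the proximal force vanishes and the proximal torque balances the moment of that force; chaining such steps segment by segment from the feet upward is the essence of a bottom-up analysis.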

3.
Effects of information specifying the position of an object in a 3-D scene were investigated in two experiments with twelve observers. To separate the effects of the change in scene position from the changes in the projection that occur with increased distance from the observer, the same projections were produced by simulating (a) a constant object at different scene positions and (b) different objects at the same scene position. The simulated scene consisted of a ground plane, a ceiling plane, and a cylinder on a pole attached to both planes. Motion-parallax scenes were studied in one experiment; texture-gradient scenes were studied in the other. Observers adjusted a line to match the perceived internal depth of the cylinder. Judged depth for objects matched in simulated size decreased as simulated distance from the observer increased. Judged depth decreased at a faster rate for the same projections shown at a constant scene position. Adding object-centered depth information (object rotation) increased judged depth for the motion-parallax displays. These results demonstrate that the judged internal depth of an object is reduced by the change in projection that occurs with increased distance, but this effect is diminished if information for change in scene position accompanies the change in projection.

4.
Measurements were made of the way human subjects visually inspected an idealized machined tool part (a 'widget') while learning the three-dimensional shape of the object. Subjects were free to rotate the object about any axis. Inspection was not evenly distributed across all views. Subjects focused on views where the faces of the object were orthogonal to the line of sight and the edges of the object were aligned parallel or at right angles to the gravitational axis. These 'face' or 'plan' views were also the easiest for subjects to bring to mind in a mental imagery task. By contrast, when subjects were instructed to imagine the views displaying the most structural information they visualized views lying midway between face views.

5.
6.
This study investigates how mechanisms for amplifying 2-D motion contrast influence the assignment of 3-D depth values. The authors found that the direction of movement of a random-dot conveyor belt strongly inclined observers to report that the front surface of a superimposed, transparent, rotating, random-dot sphere moved in a direction opposite to the belt. This motion-contrast effect was direction selective and demonstrated substantial spatial integration. Varying the stereo depth of the belt did not compromise the main effect, precluding a mechanical interpretation (sphere rolling on belt). Varying the speed of the surfaces of the sphere also did not greatly affect the interpretation of rotation direction. These results suggest that 2-D center-surround interactions influence 3-D depth assignment by differentially modulating the strength of response to the moving surfaces of an object (their prominence) without affecting featural specificity.

7.
Previous studies have shown that spatial attention can shift in three-dimensional (3-D) space determined by binocular disparity. Using Posner's precueing paradigm, the current work examined whether attentional selection occurs in perceived 3-D space defined by occlusion. Experiment 1 showed that shifts of spatial attention induced by central cues between two surfaces in the left and right visual fields did not differ between the conditions when the two surfaces were located at the same or different perceptual depth. In contrast, Experiment 2 found that peripheral cues generated a stronger cue validity effect when the two surfaces were perceived at a different rather than at the same perceptual depth. The results suggest that exogenous but not endogenous attention operates in perceived 3-D space.

8.
In this paper, we analyze and test three theories of 3-D shape perception: (1) Helmholtzian theory, which assumes that perception of the shape of an object involves reconstructing Euclidean structure of the object (up to size scaling) from the object’s retinal image after taking into account the object’s orientation relative to the observer, (2) Gibsonian theory, which assumes that shape perception involves invariants (projective or affine) computed directly from the object’s retinal image, and (3) perspective invariants theory, which assumes that shape perception involves a new kind of invariants of perspective transformation. Predictions of these three theories were tested in four experiments. In the first experiment, we showed that reliable discrimination between a perspective and nonperspective image of a random polygon is possible even when information only about the contour of the image is present. In the second experiment, we showed that discrimination performance did not benefit from the presence of a textured surface, providing information about the 3-D orientation of the polygon, and that the subjects could not reliably discriminate between the 3-D orientation of the textured surface and that of a shape. In the third experiment, we compared discrimination for solid shapes that either had flat contours (cuboids) or did not have visible flat contours (cylinders). The discrimination was very reliable in the case of cuboids but not in the case of cylinders. In the fourth experiment, we tested the effectiveness of planar motion in perception of distances and showed that the discrimination threshold was large and similar to thresholds when other cues to 3-D orientation were used. All these results support perspective invariants as a model of 3-D shape perception.

9.
We examined the ability of human observers to discriminate between different 3-D quadratic surfaces defined by motion, and with head position fed back to the stimulus to provide an up-to-date dynamical perspective view. We tested whether 3-D shape or 3-D curvature would affect discrimination performance. It appeared that discrimination of 3-D quadratic shape clearly depended on shape but not on the amount of curvature. Even when the amount of curvature was randomized, subjects’ performance was not altered. On the other hand, the discrimination of 3-D curvature clearly depended linearly on curvature with Weber fractions of 20% on the average and, to a small degree, on 3-D shape. The experiment shows that observers can easily separate 3-D shape and 3-D curvature, and that Koenderink’s shape index and curvedness provide a convenient way to specify shape. These results warn us against using just any arbitrary 3-D shape in 3-D shape perception tasks and indicate, for example, that emphasizing 3-D shape in computer displays by exaggerating curvature does not have any effect.
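Koenderink's shape index and curvedness, which the abstract invokes, have standard closed forms in terms of the principal curvatures k1 ≥ k2. The following sketch is our own illustration, not code from the paper:

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1] (requires k1 >= k2, not both zero):
    -1 = spherical cup, 0 = symmetric saddle, +1 = spherical cap."""
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def curvedness(k1, k2):
    """Koenderink curvedness: overall magnitude of surface curvature."""
    return np.sqrt((k1**2 + k2**2) / 2.0)
```

The two quantities separate exactly the variables the experiment manipulated: scaling both principal curvatures by a common factor changes curvedness but leaves the shape index untouched, which is why "exaggerating curvature" alters one perceptual dimension without the other.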

10.
The way in which human subjects distribute their time when attempting to learn the surface appearance of objects placed on a stand free to rotate about its vertical axis was investigated. Experiments were undertaken to establish whether observers concentrate their time on particular views and, if so, to determine the image characteristics of the preferred views. For tetrahedra, subjects concentrated on views which presented a face or an edge centred on the line of sight. Both of these views were symmetric about the vertical axis. For potatoes as examples of opaque smooth objects, subjects concentrated on four views in which the object's principal (long) axis was oriented side-on or end-on to their line of sight. For such views the horizontal width (and surface area) of the object's image had maximum and minimum values. Preferred views were not systematically related to views defined as stable from the appearance of surface boundaries or 'singularities'.

11.
The effect of varying information for overall depth in a simulated 3-D scene on the perceived layout of objects in the scene was investigated in two experiments. Subjects were presented with displays simulating textured surfaces receding in depth. Pairs of markers were positioned at equal intervals within the scenes. The subject's task was to judge the depth between the intervals. Overall scene depth was varied by viewing through either a collimating lens or a glass disk. Judged depth for equal depth intervals decreased with increasing distance of the interval from the front of the scene. Judged depth was greater for collimated than for non-collimated viewing. Interestingly, collimated viewing resulted in a uniform rescaling of the perceived depth intervals.

12.
This paper presents a novel three-dimensional (3-D) eye movement analysis algorithm for binocular eye tracking within virtual reality (VR). The user’s gaze direction, head position, and orientation are tracked in order to allow recording of the user’s fixations within the environment. Although the linear signal analysis approach is itself not new, its application to eye movement analysis in three dimensions advances traditional two-dimensional approaches, since it takes into account the six degrees of freedom of head movements and is resolution independent. Results indicate that the 3-D eye movement analysis algorithm can successfully be used for analysis of visual process measures in VR. Process measures not only can corroborate performance measures, but also can lead to discoveries of the reasons for performance improvements. In particular, analysis of users’ eye movements in VR can potentially lead to further insights into the underlying cognitive processes of VR subjects.
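To give a sense of the kind of geometry such an algorithm must handle, the sketch below maps a gaze direction measured in the head's frame into world coordinates using the tracked head pose, then intersects the resulting ray with a scene plane. This is a generic illustration under our own assumptions, not the authors' published algorithm:

```python
import numpy as np

def gaze_ray_world(head_pos, head_R, gaze_dir_local):
    """Transform an eye-tracker gaze direction (head frame) into a
    world-space ray (origin, unit direction) using the head pose."""
    d = head_R @ np.asarray(gaze_dir_local, dtype=float)
    return np.asarray(head_pos, dtype=float), d / np.linalg.norm(d)

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Point where the gaze ray first meets a plane, or None if the ray
    is parallel to the plane or the hit lies behind the viewer."""
    denom = direction @ plane_normal
    if abs(denom) < 1e-9:
        return None
    t = ((plane_point - origin) @ plane_normal) / denom
    return origin + t * direction if t > 0 else None
```

Because the head pose enters the transform explicitly, fixations computed this way remain valid under all six degrees of freedom of head movement, which is the property the abstract highlights over 2-D approaches.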

13.
The modern "textbook" view of visual perception contains an inherent paradox. On the one hand, it claims that relatively simple edge-extraction processes require a stimulus exposure of approximately 50 ms. On the other hand, it says that the identification of objects in photographs and line drawings can be highly accurate with exposure durations as short as 100 ms. It is tempting to conclude that all the difficult work of perception occurs in the 50 ms that elapse between the completion of these two tasks. This article argues against this view, suggesting instead that much more than edge-extraction is accomplished by the early visual processes. To illustrate this view, a computational model is described that is capable of recovering the 3-D orientation of objects from some line drawings, rapidly and in parallel. Data from recent visual search experiments with human observers are presented in support of this model, and the implications for the "textbook" view are discussed.

14.
The primary objective of this study was to quantitatively investigate the human perception of surface curvature by using virtual surfaces and motor tasks along with data analysis methods to estimate surface curvature from drawing movements. Three psychophysical experiments were conducted. In Experiment 1, we looked at subjects' sensitivity to the curvature of a curve lying on a surface and changes in the curvature as defined by Euler's formula, which relates maximum and minimum principal curvatures and their directions. Regardless of direction and surface shape (elliptic and hyperbolic), subjects could report the curvature of a curve lying on a surface through a drawing task. In addition, multiple curves drawn by subjects were used to reconstruct the surface. These reconstructed surfaces could be better accounted for by analysis that treated the drawing data as a set of curvatures rather than as a set of depths. A pointing task was utilized in Experiment 2, and subjects could report principal curvature directions of a surface rather precisely and consistently when the difference between principal curvatures was sufficiently large, but performance was poor for the direction of zero curvature (asymptotic direction) on a hyperbolic surface. In Experiment 3, it was discovered that sensitivity to the sign of curvature was different for perceptual judgments and motor responses, and there was also a difference for that of a curve itself and the same curve embedded in a surface. These findings suggest that humans are sensitive to relative changes in curvature and are able to comprehend quantitative surface curvature for some motor tasks.
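Euler's formula, which the abstract cites, gives the curvature of a normal section at angle theta from the first principal direction; the helper name below is our own, offered as a minimal sketch:

```python
import numpy as np

def normal_curvature(k1, k2, theta):
    """Euler's formula: normal-section curvature at angle theta (radians)
    measured from the direction of principal curvature k1."""
    return k1 * np.cos(theta)**2 + k2 * np.sin(theta)**2
```

On a hyperbolic surface (k1 > 0 > k2) the formula crosses zero at tan^2(theta) = -k1/k2, the asymptotic direction; the observation that pointing performance was poor precisely there fits the fact that no curvature signal distinguishes that direction from its neighbours.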

15.
16.
17.
In four experiments, the effects of sequential priming on the perceptual organization of complex three-dimensional (3-D) displays were examined. Observers were asked to view stereoscopic arrays and to search an embedded subset of items for an odd-colored target while 3-D orientation of the stimuli was varied randomly between trials. Search times decreased reliably when 3-D stimulus orientation was unchanged on consecutive trials, indicating substantial sequential priming by 3-D stimulus layout. The priming was nonsensory and was independent of priming by additional stimulus characteristics. Finally, priming by 3-D layout was unaffected by observers' foreknowledge of display orientation. Results indicate that perceptual organization of 3-D stimuli is guided by a short-term trace of 3-D spatial relationships between stimuli.

18.
Five experiments were designed to determine whether a rotating, transparent 3-D cloud of dots (simulated sphere) could influence the perceived direction of rotation of a subsequent sphere. Experiment 1 established conditions under which the direction of rotation of a virtual sphere was perceived unambiguously. When a near-far luminance difference and perspective depth cues were present, observers consistently saw the sphere rotate in the intended direction. In Experiment 2, a near-far luminance difference was used to create an unambiguous rotation sequence that was followed by a directionally ambiguous rotation sequence that lacked both the near-far luminance cue and the perspective cue. Observers consistently saw the second sequence as rotating in the same direction as the first, indicating the presence of 3-D visual inertia. Experiment 3 showed that 3-D visual inertia was sufficiently powerful to bias the perceived direction of a rotation sequence made unambiguous by a near-far luminance cue. Experiment 5 showed that 3-D visual inertia could be obtained using an occlusion depth cue to create an unambiguous inertia-inducing sequence. Finally, Experiments 2, 4, and 5 all revealed a fast-decay phase of inertia that lasted for approximately 800 msec, followed by an asymptotic phase that lasted for periods as long as 1,600 msec. The implications of these findings are examined with respect to motion mechanisms of 3-D visual inertia.

19.
In a recent study, Pelli (1999 Science 285 844-846) performed a set of perceptual experiments using portrait paintings by Chuck Close. Close's work is similar to the 'Lincoln' portraits of Harmon and Julesz (1973 Science 180 1194-1197) in that they are composite images consisting of coarsely sampled, individually painted, mostly homogeneous cells. Pelli showed that perceived shape was dependent on size, refuting findings that perception of this type is scale-invariant. In an attempt to broaden this finding we designed a series of experiments to investigate the interaction of 2-D scale and 3-D structure on our perception of 3-D shape. We present a series of experiments where field of view, 3-D object complexity, 2-D image resolution, viewing orientation, and subject matter of the stimulus are manipulated. On each trial, observers indicated if the depicted objects appeared to be 2-D or 3-D. Results for face stimuli are similar to Pelli's, while more geometrically complex stimuli show a further interaction of the 3-D information with distance and image information. Complex objects need more image information to be seen as 3-D when close; however, as they are moved further away from the observer, there is a bias for seeing them as 3-D objects rather than 2-D images. Finally, image orientation, relative to the observer, shows little effect, suggesting the participation of higher-level processes in the determination of the 'solidness' of the depicted object. Thus, we show that the critical image resolution depends systematically on the geometric complexity of the object depicted.

20.
In this study, asymmetries in finding pictorial 3-D targets defined by their tilt and rotation in space were investigated by means of a free-scan search task. In Experiment 1, feature search for cube tilt and rotation, as assessed by a spatial forced-choice task, was slow but still exhibited a characteristic "flat" slope; it was also much faster to upward-tilted cubes and to targets located in the upper half of the search field. Faster search times for cubes and rectangular solids in the upper field, an advantage for upward-tilted cubes, and a strong interaction between target tilt and direction of lighting (upward or downward) for the rectangular solids were all demonstrated in Experiment 2. Finally, an advantage in searching for tilted cubes located in the upper half of the display was shown in Experiment 3, which used a present-absent search task. The results of this study confirm that the upper-field bias in visual search is due mainly to a biased search mechanism and not to the features of the target stimulus or to specific ecological factors.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号