Similar Literature
20 similar documents retrieved (search time: 109 ms)
1.
Participants saw a standard scene of three objects on a desktop and then judged whether a comparison scene was either the same, except for the viewpoint of the scene, or different, when one or more of the objects either exchanged places or were rotated around their center. As in Nakatani, Pollatsek, and Johnson (2002), judgment times were longer when the rotation angles of the comparison scene increased, and the size of the rotation effect varied for different axes and was larger for same judgments than for different judgments. A second experiment, which included trials without the desktop, indicated that removing the desktop frame of reference mainly affected the y-axis rotation conditions (the axis going vertically through the desktop plane). In addition, eye movement analyses indicated that the process was far more than a simple analogue rotation of the standard scene. The total response latency was divided into three components: the initial eye movement latency, the first-pass time, and the second-pass time. The only indication of a rotation effect in the time to execute the first two components was for z-axis (plane of sight) rotations. Thus, for x- and y-axis rotations, rotation effects occurred only in the probability of there being a second pass and the time to execute it. The data are inconsistent both with an initial rotation of the memory representation of the standard scene to the orientation of the comparison scene and with a holistic alignment of the comparison scene prior to comparing it with the memory representation of the standard scene. Indeed, the eye movement analysis suggests that little of the increased response time for rotated comparison scenes is due to something like a time-consuming analogue process but is, instead, due to more comparisons on individual objects being made (possibly more double checking).

2.
Three experiments investigated scene recognition across viewpoint changes, involving same/different judgements on scenes consisting of three objects on a desktop. On same trials, the comparison scene appeared either from the same viewpoint as the standard scene or from a different viewpoint with the desktop rotated about one or more axes. Different trials were created either by interchanging the locations of two or three of the objects (location change condition), or by rotating either one or all three of the objects around their vertical axes (orientation change condition). Response times and errors increased as a function of the angular distance between the standard and comparison views, but this effect was bigger for rotations around the vertical axis than for those about the line of sight or horizontal axis. Furthermore, the time to detect location changes was less than that to detect orientation changes, and this difference increased with increasing angular disparity between the standard and comparison scenes. Rotation times estimated in a double-axis rotation were no longer than other rotations in depth, indicating that alignment was not necessarily simpler around a "natural" axis of rotation. These results are consistent with the hypothesis that scenes, like many objects, may be represented in a viewpoint dependent manner and recognized by aligning standard and comparison views, but that the alignment of scenes is not a holistic process.

3.
In three experiments, difference thresholds (dLs) and points of subjective equality (PSEs) for three-dimensional (3-D) rotation simulations were examined. In the first experiment, observers compared pairs of simulated spheres that rotated in polar projection and that differed in their structure (points plotted in the volume vs. on the surface), axis of rotation (vertical, y, vs. horizontal, x), and magnitude of rotation (20 degrees-70 degrees). DLs were lowest (7%) when points were on the surface and when at least one sphere rotated around the y-axis and varied with changes in the independent variables. PSEs were closest to objective equality when points were on the surface of both spheres and when both spheres rotated about the x-axis. In the second experiment, subjects provided direct estimates of the rotations of the same spheres. Results suggested a reasonable agreement between PSEs for the indirect-scaling and direct-estimate procedures. The third experiment varied sphere diameter (and therefore mean linear velocity of stimulus elements) and showed that although rotation judgments are biased by mean linear velocity, they are not likely to be made solely on the basis of that information. These and past results suggest a model whereby recovery of structure is conducted by low-level motion-detecting mechanisms, whereas rotation (and other) judgments are based on a higher level representation.
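For readers less familiar with these psychophysical quantities, the sketch below shows one common way a PSE and a difference threshold could be estimated from such paired-comparison judgments. The logistic psychometric function, the 75%-point definition of the threshold, and all numbers are illustrative assumptions, not the procedure reported in the abstract.

    # Illustrative sketch only: estimating a PSE and difference threshold (dL)
    # from paired-comparison data of the kind described above. The logistic fit
    # and the 75% criterion are assumptions, not the authors' method.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, pse, spread):
        """P(comparison judged to rotate more than the standard)."""
        return 1.0 / (1.0 + np.exp(-(x - pse) / spread))

    # Comparison rotation magnitudes (deg) and proportion "comparison larger"
    # responses for a hypothetical 40-degree standard rotation.
    comparison_deg = np.array([28, 32, 36, 40, 44, 48, 52], dtype=float)
    p_larger = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

    (pse, spread), _ = curve_fit(logistic, comparison_deg, p_larger, p0=[40.0, 3.0])

    dl = spread * np.log(3)        # logistic: x(0.75) - x(0.50) = spread * ln(3)
    weber_fraction = dl / pse      # ~0.07 would correspond to the 7% reported above

    print(f"PSE = {pse:.1f} deg, dL = {dl:.1f} deg, Weber fraction = {weber_fraction:.2f}")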

4.
Transformed spatial mappings were used to perturb normal visual-motor processes and reveal the structure of internal spatial representations used by the motor control system. In a 2-D discrete aiming task performed under rotated visual-motor mappings, the pattern of spatial movement error was the same for all Ss: peak error between 90 degrees and 135 degrees of rotation and low error for 180 degrees rotation. A two-component spatial representation, based on oriented bidirectional movement axes plus direction of travel along such axes, is hypothesized. Observed reversals of movement direction under rotations greater than 90 degrees are consistent with the hypothesized structure. Aiming error under reflections, unlike rotations, depended on direction of movement relative to the axis of reflection (see Cunningham & Pavel, in press). Reaction time and movement time effects were observed, but a speed-accuracy tradeoff was found only for rotations for which the direction-reversal strategy could be used. Finally, adaptation to rotation operates at all target locations equally but does not alter the relative difficulty of different rotations. Structural properties of the representation are invariant under learning.
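As a rough illustration of the transformed mappings described above, the sketch below applies a fixed rotation between hand displacement and cursor feedback and computes the resulting angular aiming error; the function names, the error measure, and the example values are assumptions for illustration, not the authors' implementation.

    # Illustrative sketch only: a rotated visuomotor mapping and a signed
    # angular aiming error, with invented names and values.
    import numpy as np

    def rotate(vec, angle_deg):
        """Rotate a 2-D vector counter-clockwise by angle_deg."""
        a = np.radians(angle_deg)
        r = np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]])
        return r @ vec

    def cursor_position(hand_displacement, rotation_deg):
        """Visual feedback shown under the rotated mapping."""
        return rotate(hand_displacement, rotation_deg)

    def angular_error(target_direction_deg, cursor_vec):
        """Signed angular difference between target direction and cursor path."""
        moved = np.degrees(np.arctan2(cursor_vec[1], cursor_vec[0]))
        return (moved - target_direction_deg + 180.0) % 360.0 - 180.0

    # A movement aimed straight at a target at 0 degrees, viewed under a
    # 135-degree rotation (near the error peak reported above):
    hand = np.array([1.0, 0.0])
    print(angular_error(0.0, cursor_position(hand, 135.0)))   # 135.0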

5.
Five experiments demonstrated that adults can identify certain novel views of 3-dimensional model objects on the basis of knowledge of a single perspective. Geometrically irregular contour (wire) and surface (clay) objects and geometrically regular surface (pipe) objects were accurately recognized when rotated 180 degrees about the vertical (y) axis. However, recognition accuracy was poor for all types of objects when rotated around the y-axis by 90 degrees. Likewise, more subtle rotations in depth (i.e., 30 degrees and 60 degrees) induced decreases in recognition of both contour and surface objects. These results suggest that accurate recognition of objects rotated in depth by 180 degrees may be achieved through use of information in objects' 2-dimensional bounding contours, the shapes of which remain invariant over flips in depth. Consistent with this interpretation, a final study showed that even slight rotations away from 180 degrees cause precipitous drops in recognition accuracy.

6.
Observers can visually track multiple objects that move independently even if the scene containing the moving objects is rotated in a smooth way. Abrupt scene rotations make tracking more difficult but not impossible. For nonrotated, stable dynamic displays, the strategy of looking at the targets' centroid has been shown to be of importance for visual tracking. But which factors determine successful visual tracking in a nonstable dynamic display? We report two eye tracking experiments that present evidence for centroid looking. Across abrupt viewpoint changes, gaze on the centroid is more stable than gaze on targets, indicating a process of realigning targets as a group. Further, we show that the relative importance of centroid looking increases with object speed.
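The "centroid looking" measure discussed above can be made concrete with a small sketch comparing the distance of gaze from the targets' centroid with its distance from the nearest individual target; the coordinates and names below are invented for illustration.

    # Illustrative sketch only: gaze-to-centroid versus gaze-to-nearest-target
    # distances for a set of tracked targets (invented pixel coordinates).
    import numpy as np

    def centroid(targets):
        """Centroid of an Nx2 array of target positions."""
        return targets.mean(axis=0)

    def gaze_distances(gaze, targets):
        d_centroid = np.linalg.norm(gaze - centroid(targets))
        d_nearest = np.linalg.norm(targets - gaze, axis=1).min()
        return d_centroid, d_nearest

    targets = np.array([[100.0, 200.0], [340.0, 180.0], [220.0, 420.0]])
    gaze = np.array([215.0, 270.0])
    print(gaze_distances(gaze, targets))   # gaze sits much closer to the centroid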

7.
Spatial reference in multiple object tracking is available from configurations of dynamic objects and static reference objects. In three experiments, we studied the use of spatial reference in tracking and in relocating targets after abrupt scene rotations. Observers tracked 1, 2, 3, 4, and 6 targets in 3D scenes, in which white balls moved on a square floor plane. The floor plane was either visible, thus providing static spatial reference, or invisible. Without scene rotations, the configuration of dynamic objects provided sufficient spatial reference and static spatial reference was not advantageous. In contrast, with abrupt scene rotations of 20°, static spatial reference aided the relocation of targets. A wireframe floor plane lacking local visual detail was as effective as a checkerboard. Individually colored geometric forms as static reference objects provided no additional benefit either, even if targets were centered on these forms at the abrupt scene rotation. Individualizing the dynamic objects themselves by color for a brief interval around the abrupt scene rotation, however, did improve performance. We conclude that attentional tracking of moving targets proceeds within dynamic configurations but detached from the static local background.

8.
In four experiments, we examined whether watching a scene from the perspective of a camera rotating across it allowed participants to recognize or identify the scene's spatial layout. Completing a representational momentum (RM) task, participants viewed a smoothly animated display and then indicated whether test probes were in the same position as they were in the final view of the animation. We found RM anticipations for the camera's movement across the scene, with larger distortions resulting from camera rotations that brought objects into the viewing frame compared with camera rotations that took objects out of the viewing frame. However, the RM task alone did not lead to successful recognition of the scene's map or identification of spatial relations between objects. Watching a scene from a rotating camera's perspective and making position judgments is not sufficient for learning spatial layout.

9.
Three experiments investigated whether the semantic informativeness of a scene region (object) influences its representation between successive views. In Experiment 1, a scene and a modified version of that scene were presented in alternation, separated by a brief retention interval. A changed object was either semantically consistent with the scene (non-informative) or inconsistent (informative). Change detection latency was shorter in the semantically inconsistent versus consistent condition. In Experiment 2, eye movements were eliminated by presenting a single cycle of the change sequence. Detection accuracy was higher for inconsistent versus consistent objects. This inconsistent object advantage was obtained when the potential strategy of selectively encoding inconsistent objects was no longer advantageous (Experiment 3). These results indicate that the semantic properties of an object influence whether the representation of that object is maintained between views of a scene, and this influence is not caused solely by the differential allocation of eye fixations to the changing region. The potential cognitive mechanisms supporting this effect are discussed.

10.
Subjects either named rotated objects or decided whether the objects would face left or right if they were upright. Response time in the left-right task was influenced by a rotation aftereffect or by the physical rotation of the object, which is consistent with the view that the objects were mentally rotated to the upright and that, depending on its direction, the perceived rotary motion of the object either speeded or slowed mental rotation. Perceived rotary motion did not influence naming time, which suggests that the identification of rotated objects does not involve mental rotation.

11.
丁锦红, 汪亚珉, 姜扬. 《心理学报》 (Acta Psychologica Sinica), 2021, 53(4): 337-348
By manipulating depth cues, this study analyzed eye movement characteristics during 3D SFM (structure from motion) perception to examine the influence of attention on SFM perceptual judgments and its time course. The results showed that judgments of cued stimuli were faster and more certain (higher percentages) than judgments of ambiguous stimuli; both eye movement direction and microsaccade direction were consistent with the perceived direction of motion; and microsaccade frequency, peak velocity, and amplitude each showed a facilitation effect of the depth cues. The findings suggest that SFM perception proceeds in roughly two stages, velocity computation and construction of the 3D structure; that attention modulates SFM perception mainly during the 3D-structure construction stage; and that attention begins to be directed to the selected object at about 150 ms, dwells for about 200 ms, and then shifts from the local motion vector flow to the perceptual judgment of the global motion direction.

12.
In three experiments, difference thresholds (dLs) and points of subjective equality (PSEs) for three-dimensional (3-D) rotation simulations were examined. In the first experiment, observers compared pairs of simulated spheres that rotated in polar projection and that differed in their structure (points plotted in the volume vs. on the surface), axis of rotation (vertical, y, vs. horizontal, x), and magnitude of rotation (20°-70°). DLs were lowest (7%) when points were on the surface and when at least one sphere rotated around the y-axis and varied with changes in the independent variables. PSEs were closest to objective equality when points were on the surface of both spheres and when both spheres rotated about the x-axis. In the second experiment, subjects provided direct estimates of the rotations of the same spheres. Results suggested a reasonable agreement between PSEs for the indirect-scaling and direct-estimate procedures. The third experiment varied sphere diameter (and therefore mean linear velocity of stimulus elements) and showed that although rotation judgments are biased by mean linear velocity, they are not likely to be made solely on the basis of that information. These and past results suggest a model whereby recovery of structure is conducted by low-level motion-detecting mechanisms, whereas rotation (and other) judgments are based on a higher level representation.

13.
Once a person has observed a three-dimensional scene, how accurately can he or she then imagine the appearance of that scene from different viewing angles? In a series of experiments addressed to this question, subjects formed mental images of a set of objects hanging in a clear cylinder and mentally rotated their images as they physically rotated the cylinder by various amounts. They were asked to perform four tasks, each demanding the ability to "see" the two-dimensional patterns that should emerge in their images if the images depicted the new perspective view accurately: (a) Subjects described the two-dimensional geometric shape that the imagined objects formed in an image rotated 90 degrees; (b) they "scanned" horizontally from one imagined object to another in a rotated image; (c) they physically rotated the empty cylinder together with their image until two of the imagined objects were vertically aligned; and (d) they adjusted a marker to line up with a single object in a rotated image. The experimental results converged to suggest that subjects' images accurately displayed the two-dimensional patterns emerging from a rotation in depth. However, the amount by which they rotated their image differed systematically from the amount specified by the experimenter. Results are discussed in the context of a model of the mental representation of physical space that incorporates two types of structures, one representing the three-dimensional layout of a scene, and the other representing the two-dimensional perspective view of the scene from a given vantage point.
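The geometric prediction tested in these tasks, namely which two-dimensional pattern a rotated three-dimensional configuration should project to, can be sketched as below; the camera model, viewer distance, and object positions are assumptions for illustration only.

    # Illustrative sketch only: rotate 3-D object positions about the vertical
    # axis and project them to the 2-D view a stationary observer would see.
    import numpy as np

    def rotate_about_y(points, angle_deg):
        """Rotate Nx3 points about the vertical (y) axis."""
        a = np.radians(angle_deg)
        r = np.array([[ np.cos(a), 0.0, np.sin(a)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(a), 0.0, np.cos(a)]])
        return points @ r.T

    def perspective_project(points, viewer_distance=5.0):
        """Simple pinhole projection onto the x-y image plane."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        scale = viewer_distance / (viewer_distance + z)
        return np.column_stack([x * scale, y * scale])

    # Three imagined objects in the cylinder (x, y, z in arbitrary units):
    objects = np.array([[ 1.0, 0.5,  0.0],
                        [-0.5, 1.0,  0.5],
                        [ 0.0, 1.5, -1.0]])

    # The 2-D pattern the objects should form after a 90-degree rotation in depth:
    print(perspective_project(rotate_about_y(objects, 90.0)))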

14.
College men and women judged whether pairs of stimuli were identical or mirror images. One stimulus of a pair was presented upright; the other was rotated 0°–150° from the vertical. The stimuli were either alphanumeric symbols or unfamiliar letter-like characters of the type found on the Primary Mental Abilities Spatial Relations Test. For each individual, the linear function relating response latency to degree of rotation was computed. The slope of this function was steeper for women than for men. Further, the distribution of slopes was more variable among women, with approximately 30% falling outside the range of distribution for men. Women and men were quite similar in the accuracy of their judgments, the intercepts of the latency functions, and the precision with which the linear function characterized the latency data. It is suggested that the sex difference in the slope of the rotation function may reflect differences in strategies of mental rotation.
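The per-participant analysis described above, fitting a straight line to response latency as a function of rotation angle, can be sketched as follows; the latencies are invented and the least-squares fit is only one way such a function might be computed.

    # Illustrative sketch only: slope (ms/deg) and intercept of the latency-by-
    # rotation function for one hypothetical participant.
    import numpy as np

    angles_deg = np.array([0, 30, 60, 90, 120, 150], dtype=float)
    latency_ms = np.array([620, 705, 810, 930, 1010, 1125], dtype=float)

    slope, intercept = np.polyfit(angles_deg, latency_ms, deg=1)
    predicted = intercept + slope * angles_deg
    ss_res = np.sum((latency_ms - predicted) ** 2)
    ss_tot = np.sum((latency_ms - latency_ms.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot

    print(f"slope = {slope:.2f} ms/deg, intercept = {intercept:.0f} ms, R^2 = {r_squared:.3f}")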

15.
The present study compared single and dual adaptation to visuomotor rotations in different cueing conditions. Participants adapted either to a constant rotation or to opposing rotations (dual adaptation) applied in an alternating order. In Experiment 1, visual and corresponding postural cues were provided to indicate different rotation directions. In Experiment 2, either a visual or a postural cue was available. In all cueing conditions, substantial dual adaptation was observed, although it was attenuated in comparison to single adaptation. Analysis of switching costs, determined as the performance difference between the last trial before and the first trial after the change of rotation direction, suggested a substantial advantage of the visual cue compared to the postural cue, which was in line with previous findings demonstrating the dominance of the visual sense in movement representation and control.

16.
Dynamic tasks often require fast adaptations to new viewpoints. It has been shown that automatic spatial updating is triggered by proprioceptive motion cues. Here, we demonstrate that purely visual cues are sufficient to trigger automatic updating. In five experiments, we examined spatial updating in a dynamic attention task in which participants had to track three objects across scene rotations that occurred while the objects were temporarily invisible. The objects moved on a floor plane acting as a reference frame and unpredictably either were relocated when reference frame rotations occurred or remained in place. Although participants were aware of this dissociation, they were unable to ignore continuous visual cues about scene rotations (Experiments 1a and 1b). This even held when common rotations of floor plane and objects were less likely than a dissociated rotation (Experiments 2a and 2b). However, identifying only the spatial reference direction was not sufficient to trigger updating (Experiment 3). Thus, we conclude that automatic spatial target updating occurs with purely visual information.

17.
Participants imagined rotating either themselves or an array of objects that surrounded them. Their task was to report on the egocentric position of an item in the array following the imagined rotation. The dependent measures were response latency and number of errors committed. Past research has shown that self-rotation is easier than array rotation. However, we found that imagined egocentric rotations were as difficult to imagine as rotations of the environment when people performed imagined rotations in the midsagittal or coronal plane. The advantages of imagined self-rotations are specific to mental rotations performed in the transverse plane.

18.
Performance in a visual mental rotation (MR) task has been reported to predict the ability to recognize retrograde-transformed melodies. The current study investigated the effects of melodic structure on the MR of sequentially presented visual patterns. Each trial consisted of a five-segment sequentially presented visual pattern (standard) followed by a five-tone melody that was either identical in structure to the standard or its retrograde. A visual target pattern was either the rotated version of the standard or unrelated to it. The task was to indicate whether the target pattern was a rotated version of the standard or not. Periodic patterns were not rotated, but melodies facilitated the rotation of non-periodic patterns. For these, rotation latency was determined by a quantitative index of complexity (number of runs). This study provides the first experimental confirmation for cross-modal facilitation of MR.

19.
20.
An eye movement study of visual image manipulation (视觉表象操作加工的眼动实验研究)
张霞, 刘鸣. 《心理学报》 (Acta Psychologica Sinica), 2009, 41(4): 305-315
This study used eye movement experiments on visual image rotation and image scanning to investigate how mental images are represented. Experiment 1 showed that eye movement measures exhibited a rotation-angle effect similar to that found for response times. Experiment 2 showed that both response times and eye movement measures during image scanning exhibited the same distance effect as perceptual scanning. These results suggest that eye movement patterns during imagery resemble those during perception; that imagery has a relatively independent form of mental representation and its own distinctive processing; and that the mental representation of imagery can be depictive rather than necessarily an abstract propositional or symbolic representation.

