Similar Documents
1.
Dynamic tasks often require fast adaptations to new viewpoints. It has been shown that automatic spatial updating is triggered by proprioceptive motion cues. Here, we demonstrate that purely visual cues are sufficient to trigger automatic updating. In five experiments, we examined spatial updating in a dynamic attention task in which participants had to track three objects across scene rotations that occurred while the objects were temporarily invisible. The objects moved on a floor plane acting as a reference frame and, when reference frame rotations occurred, unpredictably either were relocated or remained in place. Although participants were aware of this dissociation, they were unable to ignore continuous visual cues about scene rotations (Experiments 1a and 1b). This held even when common rotations of floor plane and objects were less likely than a dissociated rotation (Experiments 2a and 2b). However, identifying only the spatial reference direction was not sufficient to trigger updating (Experiment 3). Thus, we conclude that automatic spatial target updating occurs with purely visual information.

2.
Observers can visually track multiple objects that move independently even if the scene containing the moving objects is rotated smoothly. Abrupt scene rotations make tracking more difficult but not impossible. For nonrotated, stable dynamic displays, the strategy of looking at the targets' centroid has been shown to be important for visual tracking. But which factors determine successful visual tracking in a nonstable dynamic display? We report two eye-tracking experiments that present evidence for centroid looking. Across abrupt viewpoint changes, gaze on the centroid is more stable than gaze on targets, indicating a process of realigning targets as a group. Further, we show that the relative importance of centroid looking increases with object speed.
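
To make the centroid measure concrete, here is a minimal Python sketch (our illustration, not the authors' analysis code; all positions are hypothetical): the targets' centroid is the mean of their coordinates, and "centroid looking" can be indexed by the distance between a gaze sample and that centroid.

    import numpy as np

    def centroid(points):
        """Mean position of the tracked targets (n x 2 array of x, y)."""
        return np.asarray(points).mean(axis=0)

    # Hypothetical data: three target positions and one gaze sample.
    targets = [(1.0, 2.0), (4.0, 6.0), (7.0, 1.0)]
    gaze = np.array([4.2, 3.1])

    # Distance of gaze from the targets' centroid; smaller values
    # indicate stronger "centroid looking".
    print(np.linalg.norm(gaze - centroid(targets)))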

3.
Research on dynamic attention has shown that visual tracking is possible even if the observer’s viewpoint on the scene holding the moving objects changes. In contrast to smooth viewpoint changes, abrupt changes typically impair tracking performance. The lack of continuous information about scene motion, resulting from abrupt changes, seems to be the critical variable. However, hard onsets of objects after abrupt scene motion could explain the impairment as well. We report three experiments employing object invisibility during smooth and abrupt viewpoint changes to examine the influence of scene information on visual tracking, while equalizing hard onsets of moving objects after the viewpoint change. Smooth viewpoint changes provided continuous information about scene motion, which supported the tracking of temporarily invisible objects. However, abrupt and, therefore, discontinuous viewpoint changes strongly impaired tracking performance. Object locations retained with respect to a reference frame can account for the attentional tracking that follows invisible objects through continuous scene motion.

4.
Participants saw a standard scene of three objects on a desktop and then judged whether a comparison scene was either the same, except for the viewpoint of the scene, or different, when one or more of the objects either exchanged places or were rotated around their center. As in Nakatani, Pollatsek, and Johnson (2002), judgment times were longer when the rotation angles of the comparison scene increased, and the size of the rotation effect varied for different axes and was larger for same judgments than for different judgments. A second experiment, which included trials without the desktop, indicated that removing the desktop frame of reference mainly affected the y-axis rotation conditions (the axis going vertically through the desktop plane). In addition, eye movement analyses indicated that the process was far more than a simple analogue rotation of the standard scene. The total response latency was divided into three components: the initial eye movement latency, the first-pass time, and the second-pass time. The only indication of a rotation effect in the time to execute the first two components was for z-axis (plane of sight) rotations. Thus, for x- and y-axis rotations, rotation effects occurred only in the probability of there being a second pass and the time to execute it. The data are inconsistent either with an initial rotation of the memory representation of the standard scene to the orientation of the comparison scene or with a holistic alignment of the comparison scene prior to comparing it with the memory representation of the standard scene. Indeed, the eye movement analysis suggests that little of the increased response time for rotated comparison scenes is due to something like a time-consuming analogue process but is, instead, due to more comparisons on individual objects being made (possibly more double checking).

5.
We examined how participants with different field-cognitive styles perform in a multiple object tracking task by varying the number of targets and the angle of abrupt rotations of the motion reference frame. The results showed the following: (1) Under low task difficulty (stable reference frame, 3 or 4 targets) and medium task difficulty (abrupt 20° rightward rotation of the reference frame, 4 targets), field-independent participants tracked significantly better than field-dependent participants; under high task difficulty (stable frame with 5 targets, or abrupt 40° rightward rotation with 4 targets), the two groups did not differ significantly. This indicates that the effect of field-cognitive style on tracking performance depends on task difficulty. (2) As the number of targets increased from 3 to 5, the greater tracking load significantly reduced tracking performance. (3) Compared with a stable reference frame, abrupt 20° and 40° rightward rotations both significantly impaired tracking performance. Changes in rotation angle disrupted scene continuity and thereby harmed tracking.

6.
Locations of multiple stationary objects are represented on the basis of their global spatial configuration in visual short-term memory (VSTM). Once objects move individually, they form a global spatial configuration with varying spatial inter-object relations over time. The representation of such dynamic spatial configurations in VSTM was investigated in six experiments. Participants memorized a scene with six moving and/or stationary objects and performed a location change detection task for one object specified during the probing phase. The spatial configuration of the objects was manipulated between memory phase and probing phase. Full spatial configurations showing all objects caused higher change detection performance than did no or partial spatial configurations for static and dynamic scenes. The representation of dynamic scenes in VSTM is therefore also based on their global spatial configuration. The variation of the spatiotemporal features of the objects demonstrated that spatiotemporal features of dynamic spatial configurations are represented in VSTM. The presentation of conflicting spatiotemporal cues interfered with memory retrieval. However, missing or conforming spatiotemporal cues triggered memory retrieval of dynamic spatial configurations. The configurational representation of stationary and moving objects was based on a single spatial configuration, indicating that static spatial configurations are a special case of dynamic spatial configurations.

7.
The reported experiment tested the effect of abrupt and unpredictable viewpoint changes on the attentional tracking of multiple objects in dynamic 3-D scenes. Observers tracked targets that moved independently among identical-looking distractors on a rectangular floor plane. The tracking interval was 11 s. Abrupt rotational viewpoint changes of 10°, 20°, or 30° occurred after 8 s. Accuracy of tracking targets across a 10° viewpoint change was comparable to accuracy in a continuous control condition, whereas viewpoint changes of 20° and 30° impaired tracking performance considerably. This result suggests that tracking is mainly dependent on a low-level process whose performance is preserved across small disturbances by the visual system's ability to compensate for small changes in retinocentric coordinates. Tracking across large viewpoint changes succeeds only if allocentric coordinates are remembered so that targets can be relocated after displacements.
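
As a sketch of the allocentric relocation this account appeals to, the following Python snippet (ours, assuming 2-D floor-plane coordinates; not the authors' experimental code) rotates remembered scene-based target locations about the scene center to predict where targets should reappear after an abrupt viewpoint rotation.

    import numpy as np

    def rotate_about(points, center, angle_deg):
        """Rotate scene-based (allocentric) coordinates about a center point."""
        theta = np.radians(angle_deg)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        return (np.asarray(points) - center) @ rot.T + center

    # Hypothetical floor-plane target positions and an abrupt 20° change.
    targets = [(1.0, 0.5), (-0.5, 1.0), (0.0, -1.0)]
    print(rotate_about(targets, center=np.zeros(2), angle_deg=20.0))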

8.
Spatial updating of environments described in texts

9.
This study tested whether multiple-object tracking (the ability to visually index objects on the basis of their spatiotemporal history) is scene based or image based. Initial experiments showed equivalent tracking accuracy for objects in 2-D and 3-D motion. Subsequent experiments manipulated the speeds of objects independent of the speed of the scene as a whole. Results showed that tracking accuracy was influenced by object speed but not by scene speed. This held true whether the scene underwent translation, zoom, rotation, or even combinations of all 3 motions. A final series of experiments interfered with observers' ability to see a coherent scene by moving objects at different speeds from one another and by distorting the perception of 3-D space. These reductions in scene coherence led to reduced tracking accuracy, confirming that tracking is accomplished using a scene-based, or allocentric, frame of reference.
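
To illustrate the scene-based versus image-based distinction, a toy Python decomposition (ours, with hypothetical numbers, covering only the translation case): an object's image velocity is the sum of the global scene motion and its motion relative to the scene, and scene-based tracking would operate on the latter component.

    import numpy as np

    # Hypothetical velocities (degrees of visual angle per second).
    scene_velocity = np.array([2.0, 0.0])           # whole-scene translation
    object_image_velocity = np.array([3.5, -1.0])   # object velocity in the image

    # Object velocity relative to the scene (the allocentric component);
    # per the results above, it is this speed that limits tracking.
    object_scene_velocity = object_image_velocity - scene_velocity
    print(object_scene_velocity)   # [ 1.5 -1. ]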

10.
Keeping aware of the locations of objects while one is moving requires the updating of spatial representations. As long as the objects are visible, attentional tracking is sufficient, but knowing where objects out of view went in relation to one's own body involves an updating of spatial working memory. Here, multiple object tracking was employed to study spatial updating and its neural correlates. In a dynamic 3-D scene, targets moved among visually indistinguishable distractors. The targets and distractors either stayed visible during continuous viewpoint changes or turned invisible. The parametric variation of tracking load revealed load-dependent activations of the intraparietal sulcus, the superior parietal lobule, and the lateral occipital cortex in response to the attentive tracking task. Viewpoint changes with invisible objects, which demanded retention and updating, produced load-dependent activation only in the precuneus, in line with its presumed involvement in updating spatial working memory.

11.
Participants imagined rotating either themselves or an array of objects that surrounded them. Their task was to report on the egocentric position of an item in the array following the imagined rotation. The dependent measures were response latency and number of errors committed. Past research has shown that self-rotation is easier than array rotation. However, we found that imagined self-rotations were as difficult as imagined rotations of the environment when people performed the rotations in the midsagittal or coronal plane. The advantages of imagined self-rotations are specific to mental rotations performed in the transverse plane.

12.
Three experiments investigated scene recognition across viewpoint changes, involving same/different judgements on scenes consisting of three objects on a desktop. On same trials, the comparison scene appeared either from the same viewpoint as the standard scene or from a different viewpoint with the desktop rotated about one or more axes. Different trials were created either by interchanging the locations of two or three of the objects (location change condition), or by rotating either one or all three of the objects around their vertical axes (orientation change condition). Response times and errors increased as a function of the angular distance between the standard and comparison views, but this effect was bigger for rotations around the vertical axis than for those about the line of sight or horizontal axis. Furthermore, the time to detect location changes was less than that to detect orientation changes, and this difference increased with increasing angular disparity between the standard and comparison scenes. Rotation times estimated in a double-axis rotation were no longer than other rotations in depth, indicating that alignment was not necessarily simpler around a "natural" axis of rotation. These results are consistent with the hypothesis that scenes, like many objects, may be represented in a viewpoint dependent manner and recognized by aligning standard and comparison views, but that the alignment of scenes is not a holistic process.

13.
The effects of static and kinetic information for depth on judgments of the relative size of objects placed at different distances were studied in 3- and 7-year-old children and adults. Subjects viewed either a pair of objects placed on the floor of a textured alley or a projected slide of the identical scene. The presence of motion parallax information for depth was also manipulated. All subjects showed a clear sensitivity to static pictorial depth information in judging objects placed so that they projected equal retinal areas. When the retinal sizes of the objects were very different, however, children tended to respond to retinal rather than physical size. Motion parallax information increased responsiveness to depth when a 3-dimensional scene was being viewed, but decreased responsiveness with 2-dimensional projections. The decrease was greater in children than in adults.

14.
We investigated the role of visual experience in the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally blind, and late blind participants. We first established that spatial updating of haptic scenes of novel objects occurs in sighted individuals. All participants were required to recognise a previously learned haptic scene of novel objects presented at either the same orientation as at learning or a different one, while they either remained in the same position or moved to a new position relative to the scene. Scene rotation incurred a cost in recognition performance in all groups. However, overall haptic scene recognition performance was worse in the congenitally blind group. Moreover, unlike the late blind or sighted groups, the congenitally blind group was unable to compensate for the cost of scene rotation with observer motion. Our results suggest that vision plays an important role in representing and updating spatial information encoded through touch, and they have important implications for the role of vision in the development of neuronal areas involved in spatial cognition.

15.
When encountering familiar scenes, observers can use item-specific memory to facilitate the guidance of attention to objects appearing in known locations or configurations. Here, we investigated how memory for relational contingencies that emerge across different scenes can be exploited to guide attention. Participants searched for letter targets embedded in pictures of bedrooms. In a between-subjects manipulation, targets were either always on a bed pillow or randomly positioned. When targets were systematically located within scenes, search for targets became more efficient. Importantly, this learning transferred to bedrooms without pillows, ruling out learning that is based on perceptual contingencies. Learning also transferred to living room scenes, but it did not transfer to kitchen scenes, even though both scene types contained pillows. These results suggest that statistical regularities abstracted across a range of stimuli are governed by semantic expectations regarding the presence of target-predicting local landmarks. Moreover, explicit awareness of these contingencies led to a central tendency bias in recall memory for precise target positions that is similar to the spatial category effects observed in landmark memory. These results broaden the scope of conditions under which contextual cuing operates and demonstrate how semantic memory plays a causal and independent role in the learning of associations between objects in real-world scenes.

16.
We investigated which reference frames are preferred when matching spatial language to the haptic domain. Sighted, low-vision, and blind participants were tested on a haptic-sentence-verification task where participants had to haptically explore different configurations of a ball and a shoe and judge the relation between them. Results from the spatial relation "above", in the vertical plane, showed that various reference frames are available after haptic inspection of a configuration. Moreover, the pattern of results was similar for all three groups and resembled patterns found for the sighted on visual sentence-verification tasks. In contrast, when judging the spatial relation "in front", in the horizontal plane, the blind showed a markedly different response pattern. The sighted and low-vision participants did not show a clear preference for either the absolute/relative or the intrinsic reference frame when these frames were dissociated. The blind, on the other hand, showed a clear preference for the intrinsic reference frame. In the absence of a dominant cue, such as gravity in the vertical plane, the blind might emphasise the functional relationship between the objects owing to enhanced experience with haptic exploration of objects.

17.
It has been reported that the overall shapes of spatial categorical patterns of projective spatial terms such as above and below are not influenced by the rotation of a reference object on a two-dimensional (2D) upright plane. However, is this also true in three-dimensional (3D) space? This study shows the dynamic aspects of the apprehension of projective spatial terms in 3D space by detailing how the rotation of a reference object with an inherent front influences the apprehension of projective spatial terms on a level plane by mapping their spatial categorical patterns. The experiment was designed to examine how spatial categorical patterns on a level plane changed with the rotation of a reference object with an inherent front in 3D computer graphics space. We manipulated the rotation of a reference object with an inherent front at three levels (0°, 90°, and 180° rotations) and examined how such manipulation changed the overall spatial categorical patterns of four basic Japanese projective spatial terms: mae, ushiro, hidari, and migi (similar to in front of, behind, to the left of, and to the right of in English, respectively). The results show that spatial term apprehension was affected by the rotation of the reference object in 3D space. In particular, rotation influenced the mae–ushiro and hidari–migi systems differently. The results also imply that our understanding of projective spatial terms on a level plane in 3D space is affected dynamically by visual information from 3D cues.

18.
In four experiments, we examined whether watching a scene from the perspective of a camera rotating across it allowed participants to recognize or identify the scene's spatial layout. Completing a representational momentum (RM) task, participants viewed a smoothly animated display and then indicated whether test probes were in the same position as they were in the final view of the animation. We found RM anticipations for the camera's movement across the scene, with larger distortions resulting from camera rotations that brought objects into the viewing frame compared with camera rotations that took objects out of the viewing frame. However, the RM task alone did not lead to successful recognition of the scene's map or identification of spatial relations between objects. Watching a scene from a rotating camera's perspective and making position judgments is not sufficient for learning spatial layout.

19.
Six experiments compared spatial updating of an array after imagined rotations of the array versus the viewer. Participants responded faster and made fewer errors in viewer tasks than in array tasks while positioned outside (Experiment 1) or inside (Experiment 2) the array. An apparent array advantage for updating objects rather than locations was attributable to participants imagining translations of single objects rather than rotations of the array (Experiment 3). Superior viewer performance persisted when the array was reduced to one object (Experiment 4); however, an object with a familiar configuration improved object performance somewhat (Experiment 5). Object performance reached near-viewer levels when rotations included haptic information from the turning object. The researchers discuss these findings in terms of differences in how the human cognitive system transforms the spatial reference frames corresponding to each imagined rotation.
