Similar Articles (20 results)
1.
Completing a representational momentum (RM) task, participants viewed one of three camera motions through a scene and indicated whether test probes were in the same position as they were in the final view of the animation. All camera motions led to RM anticipations in the direction of motion, with larger distortions resulting from rotations than from a compound motion of a rotation and a translation. A surprise test of spatial layout, using an aerial map, revealed that the correct map was identified only following aerial views during the RM task. When the RM task displayed field views, including repeated views of multiple object groups, participants were unable to identify the overall spatial layout of the scene. These results suggest that the object–location binding thought to support certain change detection and visual search tasks might be viewpoint dependent.

2.
Dynamic tasks often require fast adaptations to new viewpoints. It has been shown that automatic spatial updating is triggered by proprioceptive motion cues. Here, we demonstrate that purely visual cues are sufficient to trigger automatic updating. In five experiments, we examined spatial updating in a dynamic attention task in which participants had to track three objects across scene rotations that occurred while the objects were temporarily invisible. The objects moved on a floor plane acting as a reference frame and unpredictably either were relocated when reference frame rotations occurred or remained in place. Although participants were aware of this dissociation, they were unable to ignore continuous visual cues about scene rotations (Experiments 1a and 1b). This even held when common rotations of floor plane and objects were less likely than a dissociated rotation (Experiments 2a and 2b). However, identifying only the spatial reference direction was not sufficient to trigger updating (Experiment 3). Thus we conclude that automatic spatial target updating occurs with purely visual information.

3.
Using pictures of three-dimensional scenes as experimental materials and eye-tracking methodology, two experiments examined the influence of object similarity on spatial representation in symmetric scenes. The results showed that (1) in the absence of similar objects, the intrinsic structure of the scene itself strongly influenced spatial representation, and the direction of the symmetry axis could serve as a reference direction for spatial representation; (2) when some similar objects were present, object similarity affected the selection of the reference direction, and the direction defined by the similar objects also served as one of the reference directions for spatial representation.

4.
Active and passive scene recognition across views.
R. F. Wang, D. J. Simons. Cognition, 1999, 70(2), 191-210
Recent evidence suggests that scene recognition across views is impaired when an array of objects rotates relative to a stationary observer, but not when the observer moves relative to a stationary display [Simons, D.J., Wang, R.F., 1998. Perceiving real-world viewpoint changes. Psychological Science 9, 315-320]. The experiments in this report examine whether the relatively poorer performance by stationary observers across view changes results from a lack of perceptual information for the rotation or from the lack of active control of the perspective change, both of which are present for viewpoint changes. Three experiments compared performance when observers passively experienced the view change and when they actively caused the change. Even with visual information and active control over the display rotation, change detection performance was still worse for orientation changes than for viewpoint changes. These findings suggest that observers can update a viewer-centered representation of a scene when they move to a different viewing position, but such updating does not occur during display rotations even with visual and motor information for the magnitude of the change. This experimental approach, using arrays of real objects rather than computer displays of isolated individual objects, can shed light on mechanisms that allow accurate recognition despite changes in the observer's position and orientation.

5.
Spatial perspective taking is the ability to represent spatial relations from another person's viewpoint. Following Flavell's distinction between two levels of perspective-taking ability, the methods of previous research fall into six categories: visibility tasks and quantity judgments, which assess Level-1 perspective taking; and recognition tasks, direction judgments, map navigation, and quantity judgments, which assess Level-2 perspective taking. We then summarize three corresponding processing theories of spatial perspective taking and, building on previous research, propose three directions for future work: selecting appropriate experimental tasks, extending perspective-taking research to multiple objects, and making greater use of virtual reality to present stimulus materials.

6.
Can the visual system extrapolate spatial layout of a scene to new viewpoints after a single view? In the present study, we examined this question by investigating the priming of spatial layout across depth rotations of the same scene (Sanocki & Epstein, 1997). Participants had to indicate which of two dots superimposed on objects in the target scene appeared closer to them in space. There was as much priming from a prime with a viewpoint that was 10° different from the test image as from a prime that was identical to the target; however, there was no reliable priming from larger differences in viewpoint. These results suggest that a scene’s spatial layout can be extrapolated, but only to a limited extent.

7.
In the study, 33 participants viewed photographs from either a potential homebuyer's or a burglar's perspective, or in preparation for a memory test, while their eye movements were recorded. A free recall and a picture recognition task were performed after viewing. The results showed that perspective had rapid effects, in that the second fixation after the scene onset was more likely to land on perspective-relevant than on perspective-irrelevant areas within the scene. Perspective-relevant areas also attracted longer total fixation time, more visits, and longer first-pass dwell times than did perspective-irrelevant areas. As for the effects of visual saliency, the first fixation was more likely to land on a salient than on a nonsalient area; salient areas also attracted more visits and longer total fixation time than did nonsalient areas. Recall and recognition performance reflected the eye fixation results: Both were overall higher for perspective-relevant than for perspective-irrelevant scene objects. The relatively low error rates in the recognition task suggest that participants had gained an accurate memory for scene objects. The findings suggest that the role of bottom-up versus top-down factors varies as a function of viewing task and the time-course of scene processing.

8.
Perceiving Real-World Viewpoint Changes
Retinal images vary as observers move through the environment, but observers seem to have little difficulty recognizing objects and scenes across changes in view. Although real-world view changes can be produced both by object rotations (orientation changes) and by observer movements (viewpoint changes), research on recognition across views has relied exclusively on display rotations. However, research on spatial reasoning suggests a possible dissociation between orientation and viewpoint. Here we demonstrate that scene recognition in the real world depends on more than the retinal projection of the visible array; viewpoint changes have little effect on detection of layout changes, but equivalent orientation changes disrupt performance significantly. Findings from our three experiments suggest that scene recognition across view changes relies on a mechanism that updates a viewer-centered representation during observer movements, a mechanism not available for orientation changes. These results link findings from spatial tasks to work on object and scene recognition and highlight the importance of considering the mechanisms underlying recognition in real environments.

9.
The study examined whether people update remote spatial locations in unfamiliar environments during physical movement. Participants learned a layout of objects from one perspective and carried out perspective-taking trials after physically rotating to a new perspective in either the same room as learning or in an adjacent room. Prior to rotation in the adjacent room participants were instructed to visualize the objects as being around them. Responses to perspective-taking trials involved either pointing or verbal labeling. In both testing environments, participants pointed more efficiently from imagined perspectives aligned with either the initial learning perspective or their current facing orientation than from a novel imagined perspective; this indicates that they had updated the encoded spatial relations during the physical rotation and treated remote objects as immediate. Differences in performance among perspectives were less pronounced for verbal labeling in both environments, suggesting that this response mode is more flexibly used from imagined perspectives.

10.

11.
When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

12.
We investigated memories of room-sized spatial layouts learned by sequentially or simultaneously viewing objects from a stationary position. In three experiments, sequential viewing (one or two objects at a time) yielded subsequent memory performance that was equivalent or superior to simultaneous viewing of all objects, even though sequential viewing lacked direct access to the entire layout. This finding was replicated by replacing sequential viewing with directed viewing in which all objects were presented simultaneously and participants’ attention was externally focused on each object sequentially, indicating that the advantage of sequential viewing over simultaneous viewing may have originated from focal attention to individual object locations. These results suggest that memory representation of object-to-object relations can be constructed efficiently by encoding each object location separately, when those locations are defined within a single spatial reference system. These findings highlight the importance of considering object presentation procedures when studying spatial learning mechanisms.

13.
Boundary extension (BE) is the misremembering of a scene as extending beyond what was actually presented. Representational momentum (RM) is the misremembering of a moving object's final position as farther along its implied trajectory. There is a sense in which both BE and RM are predictions, and the current experiments explore the possibility that these are related phenomena. We showed participants single photographs to measure baseline BE, and then approach sequences and measured BE and RM. We found significant BE and RM, and a significant interaction between baseline BE and BE following an approach sequence, but no interaction with RM. The initial evidence suggests that BE and RM are separate processes, and that BE (the establishing of spatial layout) occurs before RM (the continuation of movement) within a scene.

14.
15.
The present study investigated whether and how the location of bystander objects is encoded, maintained, and integrated across an eye movement. Bystander objects are objects that remain unfixated directly before and after the saccade for which transsaccadic integration is being examined. Three experiments are reported that examine location coding of bystander objects relative to the future saccade target object, relative to the saccade source object, and relative to other bystander objects. Participants were presented with a random‐dot pattern and made a saccade from a central source to a designated saccade target. During this saccade the position of a single bystander was changed on half of the trials and participants had to detect the displacement. Postsaccadically the presence of the target, source, and other bystanders was manipulated. Results indicated that the location of bystander objects could be integrated across a saccade, and that this relied on configurational coding. Furthermore the present data provide evidence for the view that transsaccadic perception of spatial layout is not inevitably tied to the saccade target or the saccade source, that it makes use of objects and object configurations in a flexible manner that is partly governed by the task relevance of the various display items, and that it exploits the incidental configurational structure in the display's layout in order to increase its capacity limits.

16.
17.
Participants learned object scenes in a rectangular room from two different viewing positions and judged the spatial relations among the objects from multiple headings. By manipulating the orientation of the scene's principal intrinsic axis relative to the environmental structure (the room and a mat) and the order in which participants learned the views, we examined which reference frames participants adopted when representing the scenes and what factors influenced reference-frame selection. Two experiments showed that (1) both intrinsic reference systems and environmental reference systems can be used to represent object scenes, but the relationship between the two is an important factor in reference-frame selection: when the intrinsic and environmental reference systems were aligned, participants represented the scene from the heading perpendicular to both, regardless of the heading from which they had learned it; when the two were misaligned, reference-frame selection depended on participants' learning experience; (2) regardless of whether the intrinsic and environmental reference systems were aligned, the regularity of the scene's intrinsic structure facilitated spatial memory, aiding both the accurate encoding of objects' relative locations and the accuracy of spatial-relation judgments.

18.
The present study examined the extent to which learning mechanisms are deployed on semantic-categorical regularities during a visual search within real-world scenes. The contextual cueing paradigm was used with photographs of indoor scenes in which the semantic category did or did not predict the target position on the screen. No evidence of a facilitation effect was observed in the predictive condition compared to the nonpredictive condition when participants were merely instructed to search for a target T or L (Experiment 1). However, a rapid contextual cueing effect occurred when each display containing the search target was preceded by a preview of the scene on which participants had to make a decision regarding the scene's category (Experiment 2). A follow-up explicit memory task indicated that this benefit resulted from implicit learning. Similar implicit contextual cueing effects were also obtained when the scene to categorize was different from the subsequent search scene (Experiment 3) and when a mere preview of the search scene preceded the visual search (Experiment 4). These results suggest that, although with the present material enhanced processing of the scene was required for the effect to emerge, such implicit semantic learning can nevertheless take place when the category is task irrelevant.

19.
The effect of varying information for overall depth in a simulated 3-D scene on the perceived layout of objects in the scene was investigated in two experiments. Subjects were presented with displays simulating textured surfaces receding in depth. Pairs of markers were positioned at equal intervals within the scenes. The subject's task was to judge the depth between the intervals. Overall scene depth was varied by viewing through either a collimating lens or a glass disk. Judged depth for equal depth intervals decreased with increasing distance of the interval from the front of the scene. Judged depth was greater for collimated than for non-collimated viewing. Interestingly, collimated viewing resulted in a uniform rescaling of the perceived depth intervals.

20.
Four experiments investigated the roles of layout geometry in the selection of intrinsic frames of reference in spatial memory. Participants learned the locations of objects in a room from 2 or 3 viewing perspectives. One view corresponded to the axis of bilateral symmetry of the layout, and the other view(s) was (were) nonorthogonal to the axis of bilateral symmetry. Judgments of relative direction using spatial memory were quicker for imagined headings parallel to the symmetric axis than for those parallel to the other viewing perspectives. This advantage disappeared when the symmetric axis was eliminated. Moreover, there was more consistency across participants in the selection of intrinsic axes when the layout contained an axis of bilateral symmetry than when it did not. These results indicate that the layout geometry affects the selection of intrinsic frames of reference, supporting the intrinsic model of spatial memory proposed by W. Mou and T. P. McNamara (2002) and by A. L. Shelton and T. P. McNamara (2001).


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号