Similar Literature
20 similar articles found (search time: 31 ms)
1.
Perceiving Real-World Viewpoint Changes (total citations: 10; self-citations: 0; citations by others: 10)
Retinal images vary as observers move through the environment, but observers seem to have little difficulty recognizing objects and scenes across changes in view. Although real-world view changes can be produced both by object rotations (orientation changes) and by observer movements (viewpoint changes), research on recognition across views has relied exclusively on display rotations. However, research on spatial reasoning suggests a possible dissociation between orientation and viewpoint. Here we demonstrate that scene recognition in the real world depends on more than the retinal projection of the visible array; viewpoint changes have little effect on detection of layout changes, but equivalent orientation changes disrupt performance significantly. Findings from our three experiments suggest that scene recognition across view changes relies on a mechanism that updates a viewer-centered representation during observer movements, a mechanism not available for orientation changes. These results link findings from spatial tasks to work on object and scene recognition and highlight the importance of considering the mechanisms underlying recognition in real environments.

2.
Changing viewpoints during dynamic events (total citations: 1; self-citations: 0; citations by others: 1)
Garsoffky B, Huff M, Schwan S. Perception, 2007, 36(3): 366-374.
Different viewpoints of a dynamic visual scene can be connected in different ways. We examined whether various presentation modes influence scene recognition and the type of cognitive representation. In the learning phase, participants saw clips of basketball scenes from (a) a single, unvaried viewpoint, or with a change of viewpoint during the scene, whereby the connection was realised (b) by an abrupt cut, or (c) by a continuous camera move. In the test phase, participants had to recognise video stills presenting basketball scenes from the same or differing viewpoints. As expected, cuts led to lower recognition accuracy than a fixed, unvaried viewpoint, whereas this was not the case for moves. However, the kind of connection between two viewpoints had no influence on the viewpoint dependence of the cognitive representation. Additionally, the amount of viewpoint deviation seemed to influence the overall conservativeness of participants' reactions.

3.
When novel scenes are encoded, the representations of scene layout are generally viewpoint specific. Past studies of scene recognition have typically required subjects to explicitly study and encode novel scenes, but in everyday visual experience, it is possible that much scene learning occurs incidentally. Here, we examine whether implicitly encoded scene layouts are also viewpoint dependent. We used the contextual cuing paradigm, in which search for a target is facilitated by implicitly learned associations between target locations and novel spatial contexts (Chun & Jiang, 1998). This task was extended to naturalistic search arrays with apparent depth. To test viewpoint dependence, the viewpoint of the scenes was varied from training to testing. Contextual cuing and, hence, scene context learning decreased as the angular rotation from training viewpoint increased. This finding suggests that implicitly acquired representations of scene layout are viewpoint dependent.

4.
Locations of multiple stationary objects are represented on the basis of their global spatial configuration in visual short-term memory (VSTM). Once objects move individually, they form a global spatial configuration with varying spatial inter-object relations over time. The representation of such dynamic spatial configurations in VSTM was investigated in six experiments. Participants memorized a scene with six moving and/or stationary objects and performed a location change detection task for one object specified during the probing phase. The spatial configuration of the objects was manipulated between memory phase and probing phase. Full spatial configurations showing all objects caused higher change detection performance than did no or partial spatial configurations for static and dynamic scenes. The representation of dynamic scenes in VSTM is therefore also based on their global spatial configuration. The variation of the spatiotemporal features of the objects demonstrated that spatiotemporal features of dynamic spatial configurations are represented in VSTM. The presentation of conflicting spatiotemporal cues interfered with memory retrieval. However, missing or conforming spatiotemporal cues triggered memory retrieval of dynamic spatial configurations. The configurational representation of stationary and moving objects was based on a single spatial configuration, indicating that static spatial configurations are a special case of dynamic spatial configurations.

5.
We investigated the role of visual experience in the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally blind, and late blind participants. We first established that spatial updating of haptic scenes of novel objects occurs in sighted individuals. All participants were required to recognise a previously learned haptic scene of novel objects presented at either the same orientation as at learning or a different one, while they either remained in the same position or moved to a new position relative to the scene. Scene rotation incurred a cost in recognition performance in all groups. However, overall haptic scene recognition performance was worse in the congenitally blind group. Moreover, unlike the late blind or sighted groups, the congenitally blind group was unable to compensate for the cost of scene rotation with observer motion. Our results suggest that vision plays an important role in representing and updating spatial information encoded through touch and have important implications for the role of vision in the development of neuronal areas involved in spatial cognition.

6.
7.
Human spatial encoding of three-dimensional navigable space was studied, using a virtual environment simulation. This allowed subjects to become familiar with a realistic scene by making simulated rotational and translational movements during training. Subsequent tests determined whether subjects could generalize their recognition ability by identifying novel-perspective views and topographic floor plans of the scene. Results from picture recognition tests showed that familiar direction views were most easily recognized, although significant generalization to novel views was observed. Topographic floor plans were also easily identified. In further experiments, novel-view performance diminished when active training was replaced by passive viewing of static images of the scene. However, the ability to make self-initiated movements, as opposed to watching dynamic movie sequences, had no effect on performance. These results suggest that representation of navigable space is view dependent and highlight the importance of spatial-temporal continuity during learning.

8.
Three experiments investigated scene recognition across viewpoint changes, involving same/different judgements on scenes consisting of three objects on a desktop. On same trials, the comparison scene appeared either from the same viewpoint as the standard scene or from a different viewpoint with the desktop rotated about one or more axes. Different trials were created either by interchanging the locations of two or three of the objects (location change condition), or by rotating either one or all three of the objects around their vertical axes (orientation change condition). Response times and errors increased as a function of the angular distance between the standard and comparison views, but this effect was bigger for rotations around the vertical axis than for those about the line of sight or horizontal axis. Furthermore, the time to detect location changes was less than that to detect orientation changes, and this difference increased with increasing angular disparity between the standard and comparison scenes. Rotation times estimated in a double-axis rotation were no longer than other rotations in depth, indicating that alignment was not necessarily simpler around a "natural" axis of rotation. These results are consistent with the hypothesis that scenes, like many objects, may be represented in a viewpoint dependent manner and recognized by aligning standard and comparison views, but that the alignment of scenes is not a holistic process.
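The alignment account invoked in this abstract predicts that matching time grows with the angular disparity between standard and comparison views. As a purely illustrative formalization (the abstract reports the pattern, not this equation), a linear alignment model with axis-specific rates captures the reported asymmetry between rotation axes:

```latex
% Illustrative linear alignment model (assumed form, not taken from the paper):
% response time grows with the angular disparity \theta between standard and
% comparison views, at an axis-specific alignment rate \omega. A slower rate
% for vertical-axis rotations reproduces the finding that those rotations
% were more costly than line-of-sight or horizontal-axis rotations.
RT(\theta) = RT_{0} + \frac{\theta}{\omega_{\text{axis}}},
\qquad \omega_{\text{vertical}} < \omega_{\text{line-of-sight}},\; \omega_{\text{horizontal}}
```

On this reading, the finding that double-axis rotations took no longer than other depth rotations suggests the alignment rate is not tied to any single "natural" rotation axis.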

9.
How does visual long-term memory store representations of different entities (e.g., objects, actions, and scenes) that are present in the same visual event? Are the different entities stored as an integrated representation in memory, or are they stored separately? To address this question, we asked observers to view a large number of events; in each event, an action was performed within a scene. Afterward, the participants were shown pairs of action–scene sets and indicated which of the two they had seen. When the task required recognizing the individual actions and scenes, performance was high (80%). Conversely, when the task required remembering which actions had occurred within which scenes, performance was significantly lower (59%). We observed this dissociation between memory for individual entities and memory for entity bindings across multiple testing conditions and presentation durations. These experiments indicate that visual long-term memory stores information about actions and information about scenes separately from one another, even when an action and scene were observed together in the same visual event. These findings also highlight an important limitation of human memory: Situations that require remembering actions and scenes as integrated events (e.g., eyewitness testimony) may be particularly vulnerable to memory errors.

10.
In 3 experiments, the question of viewpoint dependency in mental representations of dynamic scenes was addressed. Participants viewed film clips of soccer episodes from 1 or 2 viewpoints; they were then required to discriminate between video stills of the original episode and distractors. Recognition performance was measured in terms of accuracy and speed. The degree of viewpoint deviation between the initial presentation and the test stimuli was varied, as was both the point of time presented by the video stills and participants' soccer expertise. Findings suggest that viewers develop a viewpoint-dependent mental representation similar to the spatial characteristics of the original episode presentation, even if the presentation was spatially inhomogeneous.

11.
Many experiments have shown that knowing a target's visual features improves search performance over knowing the target name. Other experiments have shown that scene context can facilitate object search in natural scenes. In this study, we investigated how scene context and target features affect search performance. We examined two possible sources of information from scene context—the scene's gist and the visual details of the scene—and how they potentially interact with target-feature information. Prior to commencing search, participants were shown a scene and a target cue depicting either a picture or the category name (or a no-information control). Using eye movement measures, we investigated how target features and scene context influenced two components of search: early attentional guidance processes and later verification processes involved in the identification of the target. We found that both scene context and target features improved guidance, but that target features also improved the speed of target recognition. Furthermore, we found that a scene's visual details played an important role in improving guidance, much more so than did the scene's gist alone.

12.
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects.

13.
Human observers are able to rapidly and accurately categorize natural scenes, but the representation mediating this feat is still unknown. Here we propose a framework of rapid scene categorization that does not segment a scene into objects and instead uses a vocabulary of global, ecological properties that describe spatial and functional aspects of scene space (such as navigability or mean depth). In Experiment 1, we obtained ground-truth rankings on global properties for use in Experiments 2-4. To what extent do human observers use global property information when rapidly categorizing natural scenes? In Experiment 2, we found that global property resemblance was a strong predictor of both false alarm rates and reaction times in a rapid scene categorization experiment. To what extent is global property information alone a sufficient predictor of rapid natural scene categorization? In Experiment 3, we found that the performance of a classifier representing only these properties is indistinguishable from human performance in a rapid scene categorization task in terms of both accuracy and false alarms. To what extent is this high predictability unique to a global property representation? In Experiment 4, we compared two models that represent scene object information to human categorization performance and found that these models had lower fidelity at representing the patterns of performance than the global property model. These results provide support for the hypothesis that rapid categorization of natural scenes may not be mediated primarily through objects and parts, but also through global properties of structure and affordance.
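The abstract does not specify which classifier Experiment 3 used, only that it represented global properties alone. As a purely illustrative sketch under that assumption, a nearest-centroid classifier over global-property vectors shows the general idea (the property names and categories below are invented for the example, drawn loosely from the abstract's own examples of navigability and mean depth):

```python
import numpy as np

# Illustrative sketch only: the paper's actual classifier is not given in the
# abstract. Each scene is described by a vector of global-property rankings.

def train_centroids(X, y):
    """Compute one mean global-property vector (centroid) per scene category."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(x, centroids):
    """Assign a scene to the category whose centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Toy usage: 6 scenes x 2 properties (navigability, mean depth), 2 categories.
X = np.array([[0.90, 0.80], [0.80, 0.90], [0.85, 0.70],   # open, navigable
              [0.20, 0.30], [0.10, 0.20], [0.30, 0.25]])  # closed, cluttered
y = np.array(["field"] * 3 + ["forest"] * 3)
centroids = train_centroids(X, y)
print(classify(np.array([0.70, 0.75]), centroids))  # -> "field"
```

The point of such a model is that no object segmentation occurs anywhere in the pipeline; category decisions depend only on scene-level structure and affordance descriptors.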

14.
FROM BLOBS TO BOUNDARY EDGES: (total citations: 13; self-citations: 1; citations by others: 12)

15.
Postevent misleading information can distort people's memories by altering and adding scenes. But can postevent information also inhibit the retrieval of information from memory? In two studies we show that postevent information can make memory for a scene less accessible. In both studies participants first saw an event (e.g., a restaurant scene displayed in slides, or a drunk-driving incident shown via a video clip). Later they were shown the same event without a critical scene and were told either to use this to generate a story (Experiment 1) or to imagine the event (Experiment 2). Finally they were tested. Relative to controls, this postevent omission led to fewer people reporting the critical scene in free recall and in recognition. Thus, we demonstrated that it may be possible to inhibit memories. This finding has important implications for eyewitness testimony and the recovered-memory debate.

16.
This study used a dual-task paradigm combining a working memory task with a scene discrimination task to examine the effects of object working memory load (Experiment 1) and spatial working memory load (Experiment 2) on scene gist discrimination. The results showed that: (1) compared with the no-load condition, the nonparametric discrimination index (A′) decreased significantly under object load, whereas A′ did not change significantly under spatial load; (2) A′ on the scene discrimination task gradually increased as the presentation duration of the scene pictures was lengthened; and (3) A′ was significantly higher when a natural scene was paired with a man-made scene than when natural was paired with natural or man-made with man-made. These findings indicate that scene gist recognition, especially basic-level scene gist discrimination, shares part of its cognitive resources with object working memory but shares few or no resources with spatial working memory. Moreover, superordinate-level categorization precedes basic-level categorization during recognition, further supporting the superordinate-first hierarchical view of gist processing.
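The nonparametric index A′ reported above is not defined in the abstract; a standard formulation is Grier's (1971) A′, computed from the hit rate and false-alarm rate. A minimal sketch (whether the study used exactly this variant is an assumption):

```python
def a_prime(hit_rate: float, fa_rate: float) -> float:
    """Grier's (1971) nonparametric discrimination index A'.

    A' ranges from 0.5 (chance) to 1.0 (perfect discrimination) when the
    hit rate exceeds the false-alarm rate.
    """
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # Below-chance performance: mirror the formula around 0.5.
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# Example: 85% hits with 20% false alarms gives A' of about 0.89.
print(a_prime(0.85, 0.20))
```

Because A′ makes no assumption about the underlying signal-detection distributions, it is a common choice when response counts are too small to estimate d′ reliably.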

17.
Spatial reference in multiple object tracking is available from configurations of dynamic objects and from static reference objects. In three experiments, we studied the use of spatial reference in tracking and in relocating targets after abrupt scene rotations. Observers tracked 1, 2, 3, 4, and 6 targets in 3D scenes in which white balls moved on a square floor plane. The floor plane was either visible, thus providing static spatial reference, or invisible. Without scene rotations, the configuration of dynamic objects provided sufficient spatial reference, and static spatial reference conferred no advantage. In contrast, with abrupt scene rotations of 20°, static spatial reference aided the relocation of targets. A wireframe floor plane lacking local visual detail was as effective as a checkerboard. Individually colored geometric forms as static reference objects provided no additional benefit either, even when targets were centered on these forms at the abrupt scene rotation. Individualizing the dynamic objects themselves by color for a brief interval around the abrupt scene rotation, however, did improve performance. We conclude that attentional tracking of moving targets proceeds within dynamic configurations but detached from the static local background.

18.
Can eye movements tell us whether people will remember a scene? In order to investigate the link between eye movements and memory encoding and retrieval, we asked participants to study photographs of real-world scenes while their eye movements were being tracked. We found eye gaze patterns during study to be predictive of subsequent memory for scenes. Moreover, gaze patterns during study were more similar to gaze patterns during test for remembered than for forgotten scenes. Thus, eye movements are indeed indicative of scene memory. In an explicit test for context effects of eye movements on memory, we found recognition rate to be unaffected by the disruption of spatial and/or temporal context of repeated eye movements. Therefore, we conclude that eye movements cue memory by selecting and accessing the most relevant scene content, regardless of its spatial location within the scene or the order in which it was selected.
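The abstract does not say how study-test gaze similarity was quantified. One common approach, offered here only as an assumed illustration rather than the paper's stated method, is to correlate smoothed fixation-density maps from the two viewings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(fixations, shape=(600, 800), sigma=25):
    """Build a smoothed fixation-density map from (row, col) fixation points."""
    m = np.zeros(shape)
    for r, c in fixations:
        m[r, c] += 1
    return gaussian_filter(m, sigma)  # blur approximates foveal spread

def gaze_similarity(fix_study, fix_test, shape=(600, 800)):
    """Pearson correlation between study-phase and test-phase fixation maps."""
    a = fixation_map(fix_study, shape).ravel()
    b = fixation_map(fix_test, shape).ravel()
    return np.corrcoef(a, b)[0, 1]

# Toy usage: test fixations landing near study fixations yield high similarity.
study = [(100, 200), (300, 400), (150, 220)]
test = [(105, 205), (295, 410)]
print(gaze_similarity(study, test))
```

On a measure like this, the reported result corresponds to remembered scenes producing higher study-test correlations than forgotten ones.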

19.
This paper presents a cognitive approach to on-line spatial perception within scenes. A theoretical framework is developed, based on the idea that experience with a scene can activate a complex representation of layout that facilitates subsequent processing of spatial relations within the scene. The representations integrate significant, relevant scenic information and are substantial in amount or extent. The representations are active across short periods of time and across changes in the retinal position of the image. These claims were supported in a series of experiments in which pictures of scenes (primes) facilitated subsequent spatial relations processing within the scenes. The prime-induced representations integrated object identity and layout, were broad in scope, involved both foreground and background information, and were effective across changes in image position.

20.
Theories of object recognition, scene perception, and the neural representation of scenes imply that jumbling a coherent scene should reduce change detection. However, evidence from the change detection literature questions whether jumbling affects change detection. The experiments reported here demonstrate that jumbling does, in fact, reduce change detection. In Experiments 1 and 2, change detection was better for normal scenes than for jumbled scenes. In Experiment 3, inversion failed to interfere with change detection, demonstrating that the disruption of surface and object continuity inherent to jumbling is responsible for the reduced change detection. These findings provide a crucial commonality between change detection research and theories of scene perception and neural representation. We also discuss why previous research may have failed to find effects of jumbling.
