Similar Articles
20 similar articles found (search time: 15 ms)
1.
The role of attention in memory for objects in natural scenes was investigated using a visual memory task. In Experiment 1, participants were asked to memorize six cued (to-be-attended) objects in a natural scene and were subsequently tested on one of the cued objects. Four types of test scene images were created by jumbling different sections of the scene's background: attended sections changed, unattended sections changed, both changed, and both unchanged. In Experiment 2, the procedure was the same except that the scenes to be memorized were also jumbled. Results showed that jumbling attended sections reduced memory performance, whereas jumbling unattended sections did not, irrespective of the regularity of the scene to be memorized. This finding suggests that attention plays an important role in the mental construction of a natural scene representation and leads to enhanced visual memory.

2.
3.
When watching physical events, infants bring to bear prior knowledge about objects and readily detect changes that contradict physical rules. Here we investigate the possibility that scene gist may affect infants, as it affects adults, when detecting changes in everyday scenes. In Experiment 1, 15-month-old infants missed a perceptually salient change that preserved the gist of a generic outdoor scene; the same change was readily detected if infants had insufficient time to process the display and had to rely on perceptual information for change detection. In Experiment 2, 15-month-olds detected a perceptually subtle change that preserved the scene gist but violated the rule of object continuity, suggesting that physical rules may overpower scene gist in infants’ change detection. Finally, Experiments 3 and 4 provided converging evidence for the effects of scene gist, showing that 15-month-olds missed a perceptually salient change that preserved the gist and detected a perceptually subtle change that disrupted the gist. Together, these results suggest that prior knowledge, including scene knowledge and physical knowledge, affects the process by which infants maintain their representations of everyday scenes.

4.
Changes in perception during space missions are usually attributed to microgravity. However, additional factors, such as spatial confinement, may contribute to changes in perception. We tested changes in scene perception using a boundary extension (BE) paradigm during a 105-day Earth-based space-simulation study. In addition to the close-up/wide-angle views used in BE, we presented two types of scenes based on the distance from the observer (proximal/distant scenes). In crew members (n = 6), we found that BE partly increased over time, but the size of BE error did not change in the control group (n = 22). We propose that this effect is caused by an increasing BE effect in stimuli that depict distant scenes and is related to spatial confinement. The results might be important for other situations of spatial confinement with restricted visual depth (e.g., submarine crew, patients confined to a bed). Generally, we found a larger BE effect in proximal scenes compared with the distant scenes. We also demonstrated that with no feedback, subjects preserve the level of the BE effect during repeated measurements.

5.
Boundary extension (BE) is a memory error for close-up views of scenes in which participants tend to remember a picture of a scene as depicting a more wide-angle view than what was actually displayed. However, some experiments have yielded data that indicate a normalized memory of the views depicted in a set of scenes, suggesting that memory for the previously studied scenes has become drawn toward the average view in the image set. In previous studies, normalization is found only when the retention interval is very long or when the stimuli no longer appear to represent a spatial expanse. In Experiment 1, we examine whether normalization can influence results for scenes depicting a partial view of space, even when the memory test occurs immediately after the study block, by manipulating the degree of difference between studied close-up and wide-angle scenes. In Experiment 2, normalization is induced in a set of scenes by creating conditions expected to lead to memory interference, suggesting that this may be the cause of view normalization. Based on the multi-source model of BE, these scenes should be extended during perception (Intraub, H. (2010). Rethinking scene perception: A multisource model. Psychology of Learning and Motivation, 52, 231–265). In Experiment 3, we show that BE is indeed observable if the same scenes are tested differently, supporting the notion that BE is primarily a perceptual phenomenon while normalization is a memory effect.

6.
Recent research has found that visual object memory can be stored as part of a larger scene representation rather than independently of scene context. The present study examined how spatial and nonspatial contextual information modulate visual object memory. Two experiments tested participants’ visual memory by using a change detection task in which a target object's orientation was either the same as it appeared during initial viewing or changed. In addition, we examined the effect of spatial and nonspatial contextual manipulations on change detection performance. The results revealed that visual object representations can be maintained reliably after viewing arrays of objects. Moreover, change detection performance was significantly higher when either spatial or nonspatial contextual information remained the same in the test image. We concluded that while processing complex visual stimuli such as object arrays, visual object memory can be stored as part of a comprehensive scene representation, and both spatial and nonspatial contextual changes modulate visual memory retrieval and comparison.

7.
What controls how long the eyes remain fixated during scene perception? We investigated whether fixation durations are under the immediate control of the quality of the current scene image. Subjects freely viewed photographs of scenes in preparation for a later memory test while their eye movements were recorded. Using the saccade-contingent display change method, scenes were degraded (Experiment 1) or enhanced (Experiment 2) via blurring (low-pass filtering) during predefined saccades. Results showed that fixation durations immediately after a display change were influenced by the degree of blur, with a monotonic relationship between degree of blur and fixation duration. The results also demonstrated that fixation durations can be both increased and decreased by changes in the degree of blur. The results suggest that fixation durations in scene viewing are influenced by the ease of processing of the image currently in view. The results are consistent with models of saccade generation in scenes in which moment-to-moment difficulty in visual and cognitive processing modulates fixation durations.

8.
《Psychologie Française》2023,68(1):117-135
Research conducted in recent years in the field of spatial cognition reports empirical findings that are difficult to account for with the traditional visual-cognitive model of scene perception. One of the major contributions of these findings has been to invite a rethinking of scene perception, which would benefit from not being treated as centred mainly on the sensory modality considered. Instead, the Multisource model of scene perception developed by Intraub et al. offers an alternative theoretical framework that considers visual perception as an act of spatial cognition, with spatial information at its core. According to this model, during the initial understanding of a view, the cognitive system would elaborate a multisource representation, with spatial information constituting an egocentric framework that conveys to the observer a sense of the environment in which he or she is embedded. Scene representation would be organized around an amodal spatial structure combining different sources of information: bottom-up, external information derived from different modalities (e.g., visual, haptic), as well as internal sources of high-level information (i.e., amodal, conceptual, and contextual information). These different sources of information would work together to create a simulation of the likely environment, integrating the perceived view into a broader spatial context. Beyond rethinking scene perception, one advance of the model is that it unifies different fields of cognition that had until then been studied in isolation. The current paper presents this model and some of the results it accounts for.

9.
The current experiments examined the hypothesis that scene structure affects time perception. In three experiments, participants judged the duration of realistic scenes that were presented in a normal or jumbled (i.e., incoherent) format. Experiment 1 demonstrated that the subjective duration of normal scenes was greater than the subjective duration of jumbled scenes. In Experiment 2, gridlines were added to both normal and jumbled scenes to control for the number of line terminators, and scene structure had no effect. In Experiment 3, participants performed a secondary task that required paying attention to scene structure, and scene structure's effect on duration judgements reemerged. These findings are consistent with the idea that perceived duration can depend on visual–cognitive processing, which in turn depends on both the nature of the stimulus and the goals of the observer.

10.
Human observers are able to rapidly and accurately categorize natural scenes, but the representation mediating this feat is still unknown. Here we propose a framework of rapid scene categorization that does not segment a scene into objects and instead uses a vocabulary of global, ecological properties that describe spatial and functional aspects of scene space (such as navigability or mean depth). In Experiment 1, we obtained ground truth rankings on global properties for use in Experiments 2-4. To what extent do human observers use global property information when rapidly categorizing natural scenes? In Experiment 2, we found that global property resemblance was a strong predictor of both false alarm rates and reaction times in a rapid scene categorization experiment. To what extent is global property information alone a sufficient predictor of rapid natural scene categorization? In Experiment 3, we found that the performance of a classifier representing only these properties is indistinguishable from human performance in a rapid scene categorization task in terms of both accuracy and false alarms. To what extent is this high predictability unique to a global property representation? In Experiment 4, we compared two models that represent scene object information to human categorization performance and found that these models had lower fidelity at representing the patterns of performance than the global property model. These results provide support for the hypothesis that rapid categorization of natural scenes may be mediated not primarily through objects and parts, but also through global properties of structure and affordance.

11.
Is boundary extension (false memory beyond the edges of the view) determined solely by the schematic structure of the view, or does the quality of the pictorial information impact this error? To examine this, colour photographs or line drawings of 12 multi-object scenes (Experiment 1: N=64) and 16 single-object scenes (Experiment 2: N=64) were presented for 14 s each. At test, the same pictures were each rated as being the “same”, “closer-up”, or “farther away” (five-point scale). Although the layout, the scope of the view, the distance of the main objects to the edges, the background space and the gist of the scenes were held constant, line drawings yielded greater boundary extension than did their photographic counterparts for multi-object (Experiment 1) and single-object (Experiment 2) scenes. Results are discussed in the context of the multisource model and its implications for the study of scene perception and memory.

12.
Viewing position effects are commonly observed in reading, but they have only rarely been investigated in object perception or in the realistic context of a natural scene. In two experiments, we explored where people fixate within photorealistic objects and the effects of this landing position on recognition and subsequent eye movements. The results demonstrate an optimal viewing position: objects are processed more quickly when fixation is in the centre of the object. Viewers also prefer to saccade to the centre of objects within a natural scene, even when making a large saccade. A central landing position is associated with an increased likelihood of making a refixation, a result that differs from previous reports and suggests that multiple fixations within objects, within scenes, occur for a range of reasons. These results suggest that eye movements within scenes are systematic and are made with reference to an early parsing of the scene into constituent objects.

13.
Recent studies in scene perception suggest that much of what observers believe they see is not retained in visual memory. Depending on the roles they play in organizing the perception of a scene, different visual properties may require different amounts of attention to be incorporated into a mental representation of the scene. The goal of this study was to compare how three visual properties of scenes, colour, object position, and object presence, are encoded in visual memory. We used a variation on the change detection “flicker” task and measured the time to detect scene changes when: (1) a cue was provided regarding the type of change; and, (2) no cue was provided. We hypothesized that cueing would enhance the processing of visual properties that require more attention to be encoded into scene representations, whereas cueing would not have an effect for properties that are readily or automatically encoded in visual memory. In Experiment 1, we found that there was a cueing advantage for colour changes, but not for position or presence changes. In Experiment 2, we found the same cueing effect regardless of whether the colour change altered the configuration of the scene or not. These results are consistent with the idea that properties that typically help determine the configuration of the scene, for example, position and presence, are better encoded in scene representations than are surface properties such as colour.

14.
In two experiments we examined whether the allocation of attention in natural scene viewing is influenced by the gaze cues (head and eye direction) of an individual appearing in the scene. Each experiment employed a variant of the flicker paradigm in which alternating versions of a scene and a modified version of that scene were separated by a brief blank field. In Experiment 1, participants were able to detect the change made to the scene sooner when an individual appearing in the scene was gazing at the changing object than when the individual was absent, gazing straight ahead, or gazing at a nonchanging object. In addition, participants' ability to detect change deteriorated linearly as the changing object was located progressively further from the line of regard of the gazer. Experiment 2 replicated this change detection advantage of gaze-cued objects in a modified procedure using more critical scenes, a forced-choice change/no-change decision, and accuracy as the dependent variable. These findings establish that in the perception of static natural scenes and in a change detection task, attention is preferentially allocated to objects that are the target of another's social attention.


16.
Objects are rarely viewed in isolation, and so how they are perceived is influenced by the context in which they are viewed and their interaction with other objects (e.g., whether objects are colocated for action). We investigated the combined effects of action relations and scene context on an object decision task. Experiment 1 investigated whether the benefit for positioning objects so that they interact is enhanced when objects are viewed within contextually congruent scenes. The results indicated that scene context influenced perception of nonaction-related objects (e.g., monitor and keyboard), but had no effect on responses to action-related objects (e.g., bottle and glass) that were processed more rapidly. In Experiment 2, we reduced the saliency of the object stimuli and found that, under these circumstances, scene context influenced responses to action-related objects. We discuss the data in terms of relatively late effects of scene processing on object perception.

17.
In 3 experiments the author investigated the relationship between the online visual representation of natural scenes and long-term visual memory. In a change detection task, a target object either changed or remained the same from an initial image of a natural scene to a test image. Two types of changes were possible: rotation in depth, or replacement by another object from the same basic-level category. Change detection during online scene viewing was compared with change detection after a delay of 1 trial (Experiments 2A and 2B), a delay until the end of the study session (Experiment 1), or a delay of 24 hr (Experiment 3). There was little or no decline in change detection performance from online viewing to a delay of 1 trial or a delay until the end of the session, and change detection remained well above chance after 24 hr. These results demonstrate that long-term memory for visual detail in a scene is robust.

18.
Research on contextual cueing has demonstrated that with simple arrays of letters and shapes, search for a target increases in efficiency as associations between a search target and its surrounding visual context are learned. We investigated whether the visual context afforded by repeated exposure to real-world scenes can also guide attention when the relationship between the scene and a target position is arbitrary. Observers searched for and identified a target letter embedded in photographs of real-world scenes. Although search time within novel scenes was consistent across trials, search time within repeated scenes decreased across repetitions. Unlike previous demonstrations of contextual cueing, however, memory for scene-target covariation was explicit. In subsequent memory tests, observers recognized repeated contexts more often than those that were presented once and displayed superior recall of target position within the repeated scenes. In addition, repetition of inverted scenes, which made the scenes more difficult to identify, produced a markedly reduced rate of learning, suggesting that semantic information concerning object and scene identity is used to guide attention.

19.
In four experiments, we examined the role of auditory transients and auditory short-term memory in perceiving changes in a complex auditory scene comprising multiple auditory objects. Participants were presented with pairs of complex auditory scenes that were composed of a maximum of four animal calls delivered in free field; participants were instructed to decide whether the two scenes were the same or different (Experiments 1, 2, and 4). Changes to the second scene consisted of either the addition or the deletion of one animal call. Contrary to intuitive predictions based on results from the visual change blindness literature, substantial deafness to the change emerged without regard to whether the scenes were separated by 500 msec of masking white noise or by 500 msec of silence (Experiment 1). In fact, change deafness was not even modulated by having the two scenes presented contiguously (i.e., 0-msec interval) or separated by 500 msec of silence (Experiments 2 and 4). This result suggests that change-related auditory transients played little or no role in change detection in complex auditory scenes. Instead, the main determinant of auditory change perception (and auditory change deafness) appears to have been the capacity of auditory short-term memory (Experiments 3 and 4). Taken together, these findings indicate that the intuitive parallels between visual and auditory change perception should be reconsidered.

20.
Boundary extension (BE) is a memory error in which observers remember more of a scene than they actually viewed. This error reflects one’s prediction that a scene naturally continues and is driven by scene schema and contextual knowledge. In two separate experiments we investigated the necessity of context and scene schema in BE. In Experiment 1, observers viewed scenes that either contained semantically consistent or inconsistent objects as well as objects on white backgrounds. In both types of scenes and in the no-background condition there was a BE effect; critically, semantic inconsistency in scenes reduced the magnitude of BE. In Experiment 2 when we used abstract shapes instead of meaningful objects, there was no BE effect. We suggest that although scene schema is necessary to elicit BE, contextual consistency is not required.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号