Similar Articles
20 similar articles found (search time: 31 ms)
1.
Recent research has found that visual object memory can be stored as part of a larger scene representation rather than independently of scene context. The present study examined how spatial and nonspatial contextual information modulate visual object memory. Two experiments tested participants’ visual memory by using a change detection task in which a target object's orientation was either the same as it appeared during initial viewing or changed. In addition, we examined the effect of spatial and nonspatial contextual manipulations on change detection performance. The results revealed that visual object representations can be maintained reliably after viewing arrays of objects. Moreover, change detection performance was significantly higher when either spatial or nonspatial contextual information remained the same in the test image. We concluded that while processing complex visual stimuli such as object arrays, visual object memory can be stored as part of a comprehensive scene representation, and both spatial and nonspatial contextual changes modulate visual memory retrieval and comparison.

2.
Do refixations serve a rehearsal function in visual working memory (VWM)? We analyzed refixations from observers freely viewing multiobject scenes. An eyetracker was used to limit the viewing of a scene to a specified number of objects fixated after the target (intervening objects), followed by a four-alternative forced choice recognition test. Results showed that the probability of target refixation increased with the number of fixated intervening objects, and these refixations produced a 16% accuracy benefit over the first five intervening-object conditions. Additionally, refixations most frequently occurred after fixations on only one to two other objects, regardless of the intervening-object condition. These behaviors could not be explained by random or minimally constrained computational models; a VWM component was required to completely describe these data. We explain these findings in terms of a monitor–refixate rehearsal system: The activations of object representations in VWM are monitored, with refixations occurring when these activations decrease suddenly.
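The monitor–refixate account described above lends itself to a simple simulation. The sketch below is illustrative only and is not the authors' model: each fixated object receives a VWM activation that decays with every subsequent fixation, and a refixation is triggered when an activation falls below a threshold. The decay rate and threshold are assumed values, chosen so that refixations occur after one to two intervening objects, as the abstract reports.

```python
# Illustrative sketch of a monitor-refixate rehearsal loop (assumed
# parameters; not the model fitted in the paper).

DECAY = 0.7       # multiplicative activation decay per fixation (assumed)
THRESHOLD = 0.35  # activation level that triggers a refixation (assumed)

def simulate_viewing(n_fixations):
    """Return the sequence of fixated object indices, refixating
    whenever a stored object's VWM activation drops below THRESHOLD."""
    activations = {}  # object index -> current VWM activation
    sequence = []
    next_new = 0      # index of the next not-yet-fixated object
    for _ in range(n_fixations):
        # All stored activations decay with each new fixation.
        for obj in activations:
            activations[obj] *= DECAY
        # Monitor: refixate the weakest below-threshold object, if any;
        # otherwise move on to a new object.
        weak = [o for o, a in activations.items() if a < THRESHOLD]
        if weak:
            target = min(weak, key=lambda o: activations[o])
        else:
            target = next_new
            next_new += 1
        activations[target] = 1.0  # fixating an object restores its activation
        sequence.append(target)
    return sequence
```

With these assumed parameters, each object is refixated after two intervening fixations (the sequence over six fixations is 0, 1, 2, 0, 1, 2), echoing the finding that refixations typically follow fixations on only one to two other objects.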

3.
In a glance, the visual system can provide a summary of some kinds of information about objects in a scene. We explore how summary information about orientation is extracted and find that some representations of orientation are privileged over others. Participants judged the average orientation of either a set of 6 bars or 6 circular gratings. For bars, orientation information was carried by object boundary features, while for gratings, orientation was carried by internal surface features. The results showed more accurate averaging performance for bars than for gratings, even when controlling for potential differences in encoding precision for solitary objects. We suggest that, during orientation averaging, the visual system prioritizes object boundaries over surface features. This privilege for boundary features may lead to a better representation of the spatial layout of a scene.
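Averaging orientations, as in the task above, is not a simple arithmetic mean, because orientation is 180°-periodic: a bar at 179° is nearly identical to one at 1°. A standard remedy, offered here only as background and not taken from the paper, is the double-angle circular mean:

```python
import math

def mean_orientation(degrees):
    """Circular mean of orientations (180-degree periodic).

    Doubling each angle maps orientations onto the full circle; the
    unit vectors are then averaged and the resulting angle is halved.
    """
    s = sum(math.sin(math.radians(2 * d)) for d in degrees)
    c = sum(math.cos(math.radians(2 * d)) for d in degrees)
    return (math.degrees(math.atan2(s, c)) / 2) % 180
```

Doubling the angles ensures that, for example, 170° and 10° average to an orientation near 0° (equivalently 180°) rather than the meaningless arithmetic mean of 90°.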

4.
This paper presents a cognitive approach to on-line spatial perception within scenes. A theoretical framework is developed, based on the idea that experience with a scene can activate a complex representation of layout that facilitates subsequent processing of spatial relations within the scene. The representations integrate significant, relevant scenic information and are substantial in amount or extent. The representations are active across short periods of time and across changes in the retinal position of the image. These claims were supported in a series of experiments in which pictures of scenes (primes) facilitated subsequent spatial relations processing within the scenes. The prime-induced representations integrated object identity and layout, were broad in scope, involved both foreground and background information, and were effective across changes in image position.

5.
An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many objects appear simultaneously, scene context facilitation of the retrieval of object representations (henceforth termed object-to-scene binding) occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).

6.
What is the nature of the representation formed during the viewing of natural scenes? We tested two competing hypotheses regarding the accumulation of visual information during scene viewing. The first holds that coherent visual representations disintegrate as soon as attention is withdrawn from an object and thus that the visual representation of a scene is exceedingly impoverished. The second holds that visual representations do not necessarily decay upon the withdrawal of attention, but instead can be accumulated in memory from previously attended regions. Target objects in line drawings of natural scenes were changed during a saccadic eye movement away from those objects. Three findings support the second hypothesis. First, changes to the visual form of target objects (token substitution) were successfully detected, as indicated by both explicit and implicit measures, even though the target object was not attended when the change occurred. Second, these detections were often delayed until well after the change. Third, changes to semantically inconsistent target objects were detected better than changes to semantically consistent objects.

7.
The authors examined the prioritization of abruptly appearing objects in real-world scenes by measuring the eyes' propensity to be directed to the new object. New objects were fixated more often than chance whether they appeared during fixations (transient onsets) or saccades (nontransient onsets). However, onsets that appeared during fixations were fixated sooner and more often than those coincident with saccades. Prioritization of onsets during saccades, but not fixations, was affected by manipulations of memory: Reducing scene viewing time prior to the onset eliminated prioritization, whereas prior study of the scenes increased prioritization. Transient objects draw attention quickly and do not depend on memory, but without a transient signal, new objects are prioritized over several saccades as memory is used to explicitly identify the change. These effects were not modulated by observers' expectations concerning the appearance of new objects, suggesting that the prioritization of a transient is automatic and that memory-guided prioritization is implicit.

8.
Communication is aided greatly when speakers and listeners take advantage of mutually shared knowledge (i.e., common ground). How such information is represented in memory is not well known. Using a neuropsychological-psycholinguistic approach to real-time language understanding, we investigated the ability to form and use common ground during conversation in memory-impaired participants with hippocampal amnesia. Analyses of amnesics' eye fixations as they interpreted their partner's utterances about a set of objects demonstrated successful use of common ground when the amnesics had immediate access to common-ground information, but dramatic failures when they did not. These findings indicate a clear role for declarative memory in maintenance of common-ground representations. Even when amnesics were successful, however, the eye movement record revealed subtle deficits in resolving potential ambiguity among competing intended referents; this finding suggests that declarative memory may be critical to more basic aspects of the on-line resolution of linguistic ambiguity.

9.
Previewing scenes briefly makes finding target objects more efficient when viewing is through a gaze-contingent window (windowed viewing). In contrast, showing a preview of a randomly arranged search display does not benefit search efficiency when viewing during search is of the full display. Here, we tested whether a scene preview is beneficial when the scene is fully visible during search. Scene previews, when presented, were 250 ms in duration. During search, the scene was either fully visible or windowed. A preview always provided an advantage, in terms of decreasing the time to initially fixate and respond to targets and in terms of the total number of fixations. In windowed visibility, a preview reduced the distance of fixations from the target position until at least the fourth fixation. In full visibility, previewing reduced the distance of the second fixation but not of later fixations. The gist information derived from the initial glimpse of a scene allowed for placement of the first one or two fixations at information-rich locations, but when nonfoveal information was available, subsequent eye movements were guided only by online information.

10.
Visual memory and the perception of a stable visual environment
The visual world appears stable and continuous despite eye movements. One hypothesis about how this perception is achieved is that the contents of successive fixations are fused in memory according to environmental coordinates. Two experiments failed to support this hypothesis; they showed that one's ability to detect a grating presented after a saccade is unaffected by the presentation of a grating with the same spatial frequency in the same spatial location before the saccade. A third experiment tested an alternative explanation of perceptual stability that claims that the contents of successive fixations are compared, rather than fused, across saccades, allowing one to determine whether the world has remained stable. This hypothesis was supported: Experienced subjects could accurately determine whether two patterns viewed in successive fixations were identical or different, even when the two patterns appeared in different spatial positions across the saccade. Taken together, these results suggest that perceptual stability and information integration across saccades rely on memory for the relative positions of objects in the environment, rather than on the spatiotopic fusion of visual information from successive fixations.

11.
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects.

12.
How does visual long-term memory store representations of different entities (e.g., objects, actions, and scenes) that are present in the same visual event? Are the different entities stored as an integrated representation in memory, or are they stored separately? To address this question, we asked observers to view a large number of events; in each event, an action was performed within a scene. Afterward, the participants were shown pairs of action–scene sets and indicated which of the two they had seen. When the task required recognizing the individual actions and scenes, performance was high (80%). Conversely, when the task required remembering which actions had occurred within which scenes, performance was significantly lower (59%). We observed this dissociation between memory for individual entities and memory for entity bindings across multiple testing conditions and presentation durations. These experiments indicate that visual long-term memory stores information about actions and information about scenes separately from one another, even when an action and scene were observed together in the same visual event. These findings also highlight an important limitation of human memory: Situations that require remembering actions and scenes as integrated events (e.g., eyewitness testimony) may be particularly vulnerable to memory errors.

13.
Three experiments investigated whether the semantic informativeness of a scene region (object) influences its representation between successive views. In Experiment 1, a scene and a modified version of that scene were presented in alternation, separated by a brief retention interval. A changed object was either semantically consistent with the scene (non-informative) or inconsistent (informative). Change detection latency was shorter in the semantically inconsistent versus consistent condition. In Experiment 2, eye movements were eliminated by presenting a single cycle of the change sequence. Detection accuracy was higher for inconsistent versus consistent objects. This inconsistent object advantage was obtained when the potential strategy of selectively encoding inconsistent objects was no longer advantageous (Experiment 3). These results indicate that the semantic properties of an object influence whether the representation of that object is maintained between views of a scene, and this influence is not caused solely by the differential allocation of eye fixations to the changing region. The potential cognitive mechanisms supporting this effect are discussed.

14.
Eye movements were recorded during the display of two images of a real-world scene that were inspected to determine whether they were the same or not (a comparative visual search task). In the displays where the pictures were different, one object had been changed, and this object was sometimes taken from another scene and was incongruent with the gist. The experiment established that incongruous objects attract eye fixations earlier than the congruous counterparts, but that this effect is not apparent until the picture has been displayed for several seconds. By controlling the visual saliency of the objects the experiment eliminates the possibility that the incongruency effect is dependent upon the conspicuity of the changed objects. A model of scene perception is suggested whereby attention is unnecessary for the partial recognition of an object that delivers sufficient information about its visual characteristics for the viewer to know that the object is improbable in that particular scene, and in which full identification requires foveal inspection.

15.
The authors investigated whether anomalous information in the periphery of a scene attracts saccades when the anomaly is not distinctive in its low-level visual properties. Subjects viewed color photographs for 8 s while their eye movements were monitored. Each subject saw 2 photographs of different scenes. One photograph was a control scene in which familiar objects appeared in their canonical form. In the other picture, objects were altered in a way that rendered them deviant without introducing any obvious changes in low-level visual saliency. In Experiment 1, these alterations involved rotating an object in an unnatural fashion (e.g., an inverted head on a portrait, a truck parked on its front end). In Experiment 2, colors were distributed over objects in a way that was either reasonable or anomalous (e.g., a green cup vs. a green hand). Subjects fixated the anomalous items earlier (both in time and in order of fixations) than the nondistorted objects, suggesting that violations of canonical form are detected peripherally and can affect the likelihood of fixating an item.

16.
In the present experiment, participants explored line drawings of scenes in the context of an object-decision task, while eye-contingent display changes manipulated the appearance of the foveal part of the image. Foveal information was replaced by an ovoid noise mask for 83 ms, after a preset delay of 15, 35, 60, or 85 ms following the onset of fixations. In control conditions, a red ellipse appeared for 83 ms, centered around the fixation position, after the same delays as in the noise-mask conditions. It was found that scene exploration was hampered especially when foveal masking occurred early during fixations, replicating earlier findings. Furthermore, fixation durations were shown to increase linearly as the mask delay decreased, which validates the fixation duration as a measure of perceptual processing speed.

17.
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye‐movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip‐art scenes and object arrays, raising the possibility that anticipatory eye‐movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real‐world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real‐world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co‐presence of the scene, or whether memory representations can be utilized instead. The same real‐world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object‐based visual indices.

18.
Gordon, R. D. (2006). Memory & Cognition, 34(7), 1484-1494
In two experiments, we examined the role of semantic scene content in guiding attention during scene viewing. In each experiment, performance on a lexical decision task was measured following the brief presentation of a scene. The lexical decision stimulus named an object that was either present or not present in the scene. The results of Experiment 1 revealed no priming from inconsistent objects (whose identities conflicted with the scene in which they appeared), but negative priming from consistent objects. The results of Experiment 2 indicated that negative priming from consistent objects occurs only when inconsistent objects are present in the scenes. Together, the results suggest that observers are likely to attend to inconsistent objects, and that representations of consistent objects are suppressed in the presence of an inconsistent object. Furthermore, the data suggest that inconsistent objects draw attention because they are relatively difficult to identify in an inappropriate context.

19.
Observers frequently remember seeing more of a scene than was shown (boundary extension). Does this reflect a lack of eye fixations to the boundary region? Single-object photographs were presented for 14–15 s each. Main objects were either whole or slightly cropped by one boundary, creating a salient marker of boundary placement. All participants expected a memory test, but only half were informed that boundary memory would be tested. Participants in both conditions made multiple fixations to the boundary region and the cropped region during study. Demonstrating the importance of these regions, test-informed participants fixated them sooner, longer, and more frequently. Boundary ratings (Experiment 1) and border adjustment tasks (Experiments 2–4) revealed boundary extension in both conditions. The error was reduced, but not eliminated, in the test-informed condition. Surprisingly, test knowledge and multiple fixations to the salient cropped region, during study and at test, were insufficient to overcome boundary extension on the cropped side. Results are discussed within a traditional visual-centric framework versus a multisource model of scene perception.

20.
We contrasted visual search for targets presented in prototypical views and targets presented in nonprototypical views, when targets were defined by their names and when they were defined by the action that would normally be performed on them. The likelihood of the first fixation falling on the target was increased for prototypical-view targets falling in the lower visual field. When targets were defined by actions, the durations of fixations were reduced for targets in the lower field. The results are consistent with eye movements in search being affected by representations within the dorsal visual stream, where there is strong representation of the lower visual field. These representations are sensitive to the familiarity or the affordance offered by objects in prototypical views, and they are influenced by action-based templates for targets.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号