Similar Articles
 20 similar articles found (search time: 363 ms)
1.
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects.
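The change detection accuracy reported throughout these abstracts is commonly summarized with the signal-detection statistic d′, computed from hit and false-alarm rates. A minimal sketch, not taken from any of the studies above; the log-linear correction (adding 0.5 to each cell) is one assumed convention for avoiding infinite z-scores when a rate is 0 or 1:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' from raw trial counts in a change detection task.

    "Change" trials yield hits and misses; "no change" trials yield
    false alarms and correct rejections. A log-linear correction
    (0.5 added to each cell) keeps the z-transform finite.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

For example, 80 hits and 20 false alarms out of 100 trials of each type give a d′ of about 1.66, while chance-level responding gives a d′ of 0.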

2.
An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated this question using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many objects appear simultaneously, scene context facilitation of the retrieval of object representations (henceforth termed object-to-scene binding) occurred only when the observer was required to retain a large amount of information for a task (i.e., an intentional memory task).

3.
In 3 experiments, the author investigated the relationship between the online visual representation of natural scenes and long-term visual memory. In a change detection task, a target object either changed or remained the same from an initial image of a natural scene to a test image. Two types of change were possible: rotation in depth, or replacement by another object from the same basic-level category. Change detection during online scene viewing was compared with change detection after a delay of 1 trial (Experiments 2A and 2B), a delay until the end of the study session (Experiment 1), or a delay of 24 hr (Experiment 3). There was little or no decline in change detection performance from online viewing to a delay of 1 trial or until the end of the session, and change detection remained well above chance after 24 hr. These results demonstrate that long-term memory for visual detail in a scene is robust.

4.
Sun HM, Gordon RD. Memory & Cognition, 2010, 38(8): 1049-1057
In five experiments, we examined the influence of contextual objects’ locations and visual features on visual memory. Participants’ visual memory was tested with a change detection task in which they had to judge whether the orientation (Experiments 1A, 1B, and 2) or color (Experiments 3A and 3B) of a target object was the same as at study. Contextual objects’ locations and visual features were also manipulated in the test image. The results showed that change detection performance was better when contextual objects’ locations remained the same from study to test, demonstrating that the original spatial configuration is important for subsequent visual memory retrieval. The results further showed that changes to contextual objects’ orientation, but not color, reduced orientation change detection performance, and changes to contextual objects’ color, but not orientation, impaired color change detection performance. Therefore, contextual objects’ visual features are capable of affecting visual memory; however, selective attention plays an influential role in modulating these effects.

5.
In a change detection paradigm, a target object in a natural scene either rotated in depth, was replaced by another object token, or remained the same. Change detection performance was reliably higher when a target postcue allowed participants to restrict retrieval and comparison processes to the target object (Experiment 1). Change detection performance remained excellent when the target object was not attended at change (Experiment 2) and when a concurrent verbal working memory load minimized the possibility of verbal encoding (Experiment 3). Together, these data demonstrate that visual representations accumulate in memory from attended objects as the eyes and attention are oriented within a scene and that change blindness derives, at least in part, from retrieval and comparison failure.

6.
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices.

7.
Saccade-contingent change detection provides a powerful tool for investigating scene representation and scene memory. In the present study, critical objects presented within color images of naturalistic scenes were changed during a saccade toward or away from the target. During the saccade, the critical object was changed to another object type, changed to a visually different token of the same object type, or deleted from the scene. There were three main results. First, the deletion of a saccade target was special: Detection performance for saccade target deletions was very good, and this level of performance did not decline with the amplitude of the saccade. In contrast, detection of type and token changes at the saccade target, and of all changes (including deletions) at a location that had just been fixated but was not the saccade target, decreased as the amplitude of the saccade increased. Second, detection performance for type and token changes, both when the changing object was the target of the saccade and when the object had just been fixated but was not the saccade target, was well above chance. Third, mean gaze durations were reliably elevated on trials in which the change was not overtly detected. The results suggest that the presence of the saccade target plays a special role in transsaccadic integration and, together with other recent findings, suggest more generally that a relatively rich scene representation is retained across saccades and stored in visual memory.
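Saccade-contingent paradigms like this one require detecting the saccade online so the display can be changed while vision is suppressed. A common approach is a velocity threshold applied to streaming gaze samples; the sketch below runs on simulated 1 kHz horizontal gaze data, and the 30°/s threshold and sampling rate are illustrative assumptions, not parameters from the study above:

```python
def detect_saccades(samples, dt=0.001, velocity_threshold=30.0):
    """Return the sample indices at which saccades begin.

    samples: horizontal gaze positions in degrees, one per dt seconds.
    A saccade onset is the first sample whose instantaneous velocity
    exceeds velocity_threshold (deg/s); subsequent above-threshold
    samples belong to the same saccade.
    """
    onsets = []
    in_saccade = False
    for i in range(1, len(samples)):
        velocity = abs(samples[i] - samples[i - 1]) / dt
        if velocity > velocity_threshold and not in_saccade:
            onsets.append(i)       # new saccade begins here
            in_saccade = True
        elif velocity <= velocity_threshold:
            in_saccade = False     # back in fixation
    return onsets

# Simulated trace: 100 ms fixation at 0 deg, a 40 ms / 10 deg saccade
# (250 deg/s), then 100 ms fixation at 10 deg.
trace = [0.0] * 100 + [0.25 * (i + 1) for i in range(40)] + [10.0] * 100
```

In a real experiment, the display swap would be triggered as soon as an onset is reported, so that the change completes before the saccade lands.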

8.
What is the nature of the representation formed during the viewing of natural scenes? We tested two competing hypotheses regarding the accumulation of visual information during scene viewing. The first holds that coherent visual representations disintegrate as soon as attention is withdrawn from an object and thus that the visual representation of a scene is exceedingly impoverished. The second holds that visual representations do not necessarily decay upon the withdrawal of attention, but instead can be accumulated in memory from previously attended regions. Target objects in line drawings of natural scenes were changed during a saccadic eye movement away from those objects. Three findings support the second hypothesis. First, changes to the visual form of target objects (token substitution) were successfully detected, as indicated by both explicit and implicit measures, even though the target object was not attended when the change occurred. Second, these detections were often delayed until well after the change. Third, changes to semantically inconsistent target objects were detected better than changes to semantically consistent objects.

9.
任衍具, 孙琪. 《心理学报》 (Acta Psychologica Sinica), 2014, 46(11): 1613-1627
Using a dual-task paradigm that combined a visuospatial working memory task with a real-world scene search task, and using eye tracking to divide the search process into an initiation phase, a scanning phase, and a verification phase, this study examined how visuospatial working memory load affects search performance in real-world scenes, as well as the moderating roles of whether the search target changed between trials, the specificity of the target template, and the visual clutter of the search scene. The results showed that visuospatial working memory load reduced search accuracy in real-world scenes; during the search process, this appeared as longer scanning-phase durations and more fixations under both object and spatial load, and as longer verification-phase durations under spatial load, with the effect of load on the search process depending on the specificity of the target template. Spatial load, but not object load, also reduced search efficiency in real-world scenes, and this effect depended on the visual clutter of the search display. Thus, object and spatial working memory loads affect real-world scene search performance differently: the effect of spatial load on the search process is more long-lasting than that of object load, and both effects are moderated by the specificity of the target template; only spatial load reduces search efficiency, and this effect is moderated by the visual clutter of the search scene.

10.
A "follow-the-dot" method was used to investigate the visual memory systems supporting accumulation of object information in natural scenes. Participants fixated a series of objects in each scene, following a dot cue from object to object. Memory for the visual form of a target object was then tested. Object memory was consistently superior for the two most recently fixated objects, a recency advantage indicating a visual short-term memory component to scene representation. In addition, objects examined earlier were remembered at rates well above chance, with no evidence of further forgetting when 10 objects intervened between target examination and test and only modest forgetting with 402 intervening objects. This robust prerecency performance indicates a visual long-term memory component to scene representation.

11.
Change blindness for the contents of natural scenes suggests that only items that are attended while the scene is still visible are stored, leading some to characterize our visual experience as sparse. Experiments on iconic memory for arrays of discrete symbols or objects, however, indicate that observers have access to more visual information for at least several hundred milliseconds after offset of a display. In the experiment presented here, we demonstrate an iconic memory for complex natural or real-world scenes. Using a modified change detection task in which to-be-changed objects are cued at offset of the scene, we show that more information from a natural scene is briefly stored than change blindness predicts, and more than is contained in visual short-term memory. In our experiment, a cue appearing 0, 300, or 1000 msec after offset of the pre-change scene, or at onset of the second scene presentation (a post cue), directed attention to the location of a possible change. Compared to a no-cue condition, subjects were significantly better at detecting changes and identifying what changed in the cue conditions, with the cue having a diminishing effect as a function of time and no effect when its onset coincided with that of the second scene presentation. The results suggest that an iconic memory of a natural scene exists for at least 1000 msec after scene offset, from which subjects can access the identity of items in the pre-change scene. This implies that change blindness underestimates the amount of information available to the visual system from a brief glance at a natural scene.

12.
In a series of experiments, we investigated the dependence of contextual cueing on working memory resources. A visual search task with 50% repeated displays was run in order to elicit the implicit learning of contextual cues. The search task was combined with a concurrent visual working memory task either during an initial learning phase or during a later test phase. The visual working memory load was either spatial or nonspatial. Articulatory suppression was used to prevent verbalization. We found that nonspatial working memory load had no effect, independent of presentation in the learning or test phase. In contrast, visuospatial load diminished search facilitation in the test phase, but not during learning. We concluded that visuospatial working memory resources are needed for the expression of previously learned spatial contexts, whereas the learning of contextual cues does not depend on visuospatial working memory.
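The contextual cueing design above hinges on half of the search displays repeating their spatial layout across blocks while the other half are freshly generated. A minimal sketch of such a trial generator; the item count, grid size, and block structure are illustrative assumptions rather than the parameters of the study:

```python
import random

def make_trial_sequence(n_blocks, n_repeated, n_novel,
                        n_items=12, grid=8, seed=0):
    """Build blocks of ("repeated"/"novel", layout) search trials.

    Each layout is a tuple of (row, col) cells on a grid x grid display.
    Repeated layouts are generated once and reused in every block, so a
    50% repeated design uses n_repeated == n_novel.
    """
    rng = random.Random(seed)

    def random_layout():
        cells = [(r, c) for r in range(grid) for c in range(grid)]
        return tuple(rng.sample(cells, n_items))

    repeated = [random_layout() for _ in range(n_repeated)]
    blocks = []
    for _ in range(n_blocks):
        trials = [("repeated", layout) for layout in repeated]
        trials += [("novel", random_layout()) for _ in range(n_novel)]
        rng.shuffle(trials)  # randomize trial order within each block
        blocks.append(trials)
    return blocks
```

Because the repeated layouts recur identically in every block, search times for them can be contrasted with novel layouts to measure the cueing effect as learning proceeds.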

13.
Four experiments investigated the representation and integration in memory of spatial and nonspatial relations. Subjects learned two-dimensional spatial arrays in which critical pairs of object names were semantically related (Experiment 1), semantically and episodically related (Experiment 2), or just episodically related (Experiments 3a and 3b). Episodic relatedness was established in a paired-associate learning task that preceded array learning. After learning an array, subjects participated in two tasks: item recognition, in which the measure of interest was priming; and distance estimation. Priming in item recognition was sensitive to the Euclidean distance between object names and, for neighbouring locations, to nonspatial relations. Errors in distance estimations varied as a function of distance but were unaffected by nonspatial relations. These and other results indicated that nonspatial relations influenced the probability of encoding spatial relations between locations but did not lead to distorted spatial memories.

14.
Active and passive scene recognition across views.
Wang RF, Simons DJ. Cognition, 1999, 70(2): 191-210
Recent evidence suggests that scene recognition across views is impaired when an array of objects rotates relative to a stationary observer, but not when the observer moves relative to a stationary display [Simons, D.J., Wang, R.F., 1998. Perceiving real-world viewpoint changes. Psychological Science 9, 315-320]. The experiments in this report examine whether the relatively poorer performance by stationary observers across view changes results from a lack of perceptual information for the rotation or from the lack of active control of the perspective change, both of which are present for viewpoint changes. Three experiments compared performance when observers passively experienced the view change and when they actively caused the change. Even with visual information and active control over the display rotation, change detection performance was still worse for orientation changes than for viewpoint changes. These findings suggest that observers can update a viewer-centered representation of a scene when they move to a different viewing position, but such updating does not occur during display rotations even with visual and motor information for the magnitude of the change. This experimental approach, using arrays of real objects rather than computer displays of isolated individual objects, can shed light on mechanisms that allow accurate recognition despite changes in the observer's position and orientation.

15.
Three experiments were conducted to investigate the existence of incidentally acquired, long-term, detailed visual memory for objects embedded in previously viewed scenes. Participants performed intentional memorization and incidental visual search learning tasks while viewing photographs of real-world scenes. A visual memory test for previously viewed objects from these scenes then followed. Participants were not aware that they would be tested on the scenes following incidental learning in the visual search task. In two types of memory tests for visually specific object information (token discrimination and mirror-image discrimination), performance following both the memorization and visual search conditions was reliably above chance. These results indicate that recent demonstrations of good visual memory during scene viewing are not due to intentional scene memorization. Instead, long-term visual representations are incidentally generated as a natural product of scene perception.

16.
This study investigated memory from interrupted visual searches. Participants conducted a change detection search task on polygons overlaid on scenes. Search was interrupted by various disruptions, including unfilled delay, passive viewing of other scenes, and additional search on new displays. Results showed that performance was unaffected by short intervals of unfilled delay or passive viewing, but it was impaired by additional search tasks. Across delays, memory for the spatial layout of the polygons was retained for future use, but memory for polygon shapes, background scene, and absolute polygon locations was not. The authors suggest that spatial memory aids interrupted visual searches, but the use of this memory is easily disrupted by additional searches.

17.
Locations of multiple stationary objects are represented on the basis of their global spatial configuration in visual short-term memory (VSTM). Once objects move individually, they form a global spatial configuration with varying spatial inter-object relations over time. The representation of such dynamic spatial configurations in VSTM was investigated in six experiments. Participants memorized a scene with six moving and/or stationary objects and performed a location change detection task for one object specified during the probing phase. The spatial configuration of the objects was manipulated between memory phase and probing phase. Full spatial configurations showing all objects caused higher change detection performance than did no or partial spatial configurations for static and dynamic scenes. The representation of dynamic scenes in VSTM is therefore also based on their global spatial configuration. The variation of the spatiotemporal features of the objects demonstrated that spatiotemporal features of dynamic spatial configurations are represented in VSTM. The presentation of conflicting spatiotemporal cues interfered with memory retrieval. However, missing or conforming spatiotemporal cues triggered memory retrieval of dynamic spatial configurations. The configurational representation of stationary and moving objects was based on a single spatial configuration, indicating that static spatial configurations are a special case of dynamic spatial configurations.

18.
We investigated the role of visual experience in the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally blind, and late blind participants. We first established that spatial updating of haptic scenes of novel objects occurs in sighted individuals. All participants were required to recognise a previously learned haptic scene of novel objects presented at either the same orientation as at learning or a different one, while they either remained in the same position or moved to a new position relative to the scene. Scene rotation incurred a cost in recognition performance in all groups. However, overall haptic scene recognition performance was worse in the congenitally blind group. Moreover, unlike the late blind or sighted groups, the congenitally blind group was unable to compensate for the cost of scene rotation with observer motion. Our results suggest that vision plays an important role in representing and updating spatial information encoded through touch, and they have important implications for the role of vision in the development of neuronal areas involved in spatial cognition.

19.
Previous research demonstrates that implicitly learned probability information can guide visual attention. We examined whether the probability of an object changing can be implicitly learned and then used to improve change detection performance. In a series of six experiments, participants completed 120–130 training change detection trials. In four of the experiments the object that changed color was the same shape (trained shape) on every trial. Participants were not explicitly aware of this change probability manipulation and change detection performance was not improved for the trained shape versus untrained shapes. In two of the experiments, the object that changed color was always in the same general location (trained location). Although participants were not explicitly aware of the change probability, implicit knowledge of it did improve change detection performance in the trained location. These results indicate that improved change detection performance through implicitly learned change probability occurs for location but not shape.

20.
We examined whether the view combination mechanisms shown to underlie object and scene recognition can integrate visual information across views that have little or no three-dimensional information at either the object or the scene level. In three experiments, people learned four “views” of a two-dimensional visual array derived from a three-dimensional scene. In Experiments 1 and 2, the stimuli were arrays of colored rectangles that preserved the relative sizes, distances, and angles among objects in the original scene, as well as the original occlusion relations. Participants recognized a novel central view more efficiently than any of the trained views, which in turn were recognized more efficiently than equidistant novel views. Experiment 2 eliminated presentation frequency as an explanation for this effect. Experiment 3 used colored dots that preserved only identity and relative location information, which resulted in a weaker effect, though one still inconsistent with both part-based and normalization accounts of recognition. We argue that, for recognition processes to function so effectively with such minimalist stimuli, view combination must be a very general and fundamental mechanism, potentially enabling both visual recognition and categorization.
