Similar Documents
20 similar documents found (search time: 15 ms)
1.
Recent research has found that visual object memory can be stored as part of a larger scene representation rather than independently of scene context. The present study examined how spatial and nonspatial contextual information modulate visual object memory. Two experiments tested participants' visual memory using a change detection task in which a target object's orientation was either the same as during initial viewing or changed. In addition, we examined the effect of spatial and nonspatial contextual manipulations on change detection performance. The results revealed that visual object representations can be maintained reliably after viewing arrays of objects. Moreover, change detection performance was significantly higher when either spatial or nonspatial contextual information remained the same in the test image. We concluded that while processing complex visual stimuli such as object arrays, visual object memory can be stored as part of a comprehensive scene representation, and that both spatial and nonspatial contextual changes modulate visual memory retrieval and comparison.

2.
This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object token or the target object rotated in depth. In Experiments 1 and 2, object memory performance was more accurate when the test object alternatives were displayed within the original scene than when they were displayed in isolation, demonstrating object-to-scene binding. Experiment 3 tested the hypothesis that episodic scene representations are formed through the binding of object representations to scene locations. Consistent with this hypothesis, memory performance was more accurate when the test alternatives were displayed within the scene at the same position originally occupied by the target than when they were displayed at a different position.

3.
An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated this question using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many objects appear simultaneously, scene context facilitation of the retrieval of object representations (henceforth termed object-to-scene binding) occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).

4.
5.
A "follow-the-dot" method was used to investigate the visual memory systems supporting accumulation of object information in natural scenes. Participants fixated a series of objects in each scene, following a dot cue from object to object. Memory for the visual form of a target object was then tested. Object memory was consistently superior for the two most recently fixated objects, a recency advantage indicating a visual short-term memory component to scene representation. In addition, objects examined earlier were remembered at rates well above chance, with no evidence of further forgetting when 10 objects intervened between target examination and test and only modest forgetting with 402 intervening objects. This robust prerecency performance indicates a visual long-term memory component to scene representation.

6.
金丽芬, 刘昌. Advances in Psychological Science (《心理科学进展》), 2009, 17(6): 1139-1145
This review summarizes the association between objects and locations in high-level visual representation from two perspectives: the influence of object information on location memory, and object-location binding. Object information affects general location memory and its biases (e.g., location estimates are biased toward the center of a region or toward an object's functional part). Regarding object-location binding, previous studies have confirmed that such binding does exist, that it depends on the participation of background objects, and that the spatial relations among background objects affect its formation. Future research should seek further evidence for the influence of object information on location memory, examine whether object-location binding persists when the conditions of contextual binding change, and explore the influence of location information on object memory.

7.
How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance and to explain challenging behavioral data on positive-negative, spatial-object, and local-distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex.

8.
Objects likely to appear in a given real-world scene are frequently found to be easier to recognize. Two different sources of contextual information have been proposed as the basis for this effect: global scene background and individual companion objects. The present paper examines the relative importance of these two elements in explaining the context-sensitivity of object identification in full scenes. Specific sequences of object fixations were elicited during free scene exploration, while fixation times on designated target objects were recorded as a measure of ease of target identification. Episodic consistency between the target, the global scene background, and the object fixated just prior to the target (the prime), were manipulated orthogonally. Target fixation times were examined for effects of prime and background. Analyses show effects of both factors, which are modulated by the chronology and spatial extent of scene exploration. The results are discussed in terms of their implications for a model of visual object recognition in the context of real-world scenes.

9.
Models of spatial updating attempt to explain how representations of spatial relationships between the actor and objects in the environment change as the actor moves. In allocentric models, object locations are encoded in an external reference frame, and only the actor's position and orientation in that reference frame need to be updated. Thus, spatial updating should be independent of the number of objects in the environment (set size). In egocentric updating models, object locations are encoded relative to the actor, so the location of each object relative to the actor must be updated as the actor moves. Thus, spatial updating efficiency should depend on set size. We examined which model better accounts for human spatial updating by having people reconstruct the locations of varying numbers of virtual objects either from the original study position or from a changed viewing position. Consistent with the egocentric updating model, object localization following a viewpoint change was affected by the number of objects in the environment.

10.
Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to ensure that limited attentional capacity is allocated efficiently rather than being squandered.

11.
任衍具, 孙琪. Acta Psychologica Sinica (《心理学报》), 2014, 46(11): 1613-1627
Using a dual-task paradigm that combined a visuospatial working memory task with a real-world scene search task, and using eye tracking to divide the search process into an initiation phase, a scanning phase, and a verification phase, this study investigated how visuospatial working memory load affects real-world scene search performance, and examined the moderating roles of whether the search target changed across trials, the specificity of the target template, and the visual clutter of the search scene. The results showed that visuospatial working memory load reduced search accuracy in real-world scenes. During search, this manifested as longer scanning-phase durations and more fixations under both object and spatial load, and longer verification-phase durations under spatial load; the influence of load on the search process depended on the specificity of the target template. Spatial load, but not object load, reduced search efficiency, and this effect depended on the visual clutter of the scene. Thus, object and spatial working memory load affect real-world scene search performance differently: spatial load has a more lasting influence on the search process than object load, both effects are moderated by the specificity of the target template, and only spatial load reduces search efficiency, an effect moderated by the visual clutter of the search scene.

12.
Prior research has demonstrated robust sex- and sexual orientation-related differences in object location memory in humans. Here we show that this sexual variation may depend on the spatial position of target objects and the task-specific nature of the spatial array. We tested the recovery of object locations in three object arrays (object exchanges, object shifts, and novel objects) relative to veridical center (left compared to right side of the arrays) in a sample of 35 heterosexual men, 35 heterosexual women, and 35 homosexual men. Relative to heterosexual men, heterosexual women showed better location recovery in the right side of the array during object exchanges, and homosexual men performed better in the right side during novel objects. However, the difference between heterosexual and homosexual men disappeared after controlling for IQ. Heterosexual women and homosexual men did not differ significantly from each other in location change detection with respect to task or side of array. These data suggest that visual space biases in processing categorical spatial positions may enhance aspects of object location memory in heterosexual women.

13.
Research on contextual cueing has demonstrated that with simple arrays of letters and shapes, search for a target increases in efficiency as associations between a search target and its surrounding visual context are learned. We investigated whether the visual context afforded by repeated exposure to real-world scenes can also guide attention when the relationship between the scene and a target position is arbitrary. Observers searched for and identified a target letter embedded in photographs of real-world scenes. Although search time within novel scenes was consistent across trials, search time within repeated scenes decreased across repetitions. Unlike previous demonstrations of contextual cueing, however, memory for scene-target covariation was explicit. In subsequent memory tests, observers recognized repeated contexts more often than those that were presented once and displayed superior recall of target position within the repeated scenes. In addition, repetition of inverted scenes, which made the scenes more difficult to identify, produced a markedly reduced rate of learning, suggesting that semantic information concerning object and scene identity is used to guide attention.

14.
Locations of multiple stationary objects are represented on the basis of their global spatial configuration in visual short-term memory (VSTM). Once objects move individually, they form a global spatial configuration with varying spatial inter-object relations over time. The representation of such dynamic spatial configurations in VSTM was investigated in six experiments. Participants memorized a scene with six moving and/or stationary objects and performed a location change detection task for one object specified during the probing phase. The spatial configuration of the objects was manipulated between memory phase and probing phase. Full spatial configurations showing all objects caused higher change detection performance than did no or partial spatial configurations for static and dynamic scenes. The representation of dynamic scenes in VSTM is therefore also based on their global spatial configuration. The variation of the spatiotemporal features of the objects demonstrated that spatiotemporal features of dynamic spatial configurations are represented in VSTM. The presentation of conflicting spatiotemporal cues interfered with memory retrieval. However, missing or conforming spatiotemporal cues triggered memory retrieval of dynamic spatial configurations. The configurational representation of stationary and moving objects was based on a single spatial configuration, indicating that static spatial configurations are a special case of dynamic spatial configurations.

15.
Recent studies in scene perception suggest that much of what observers believe they see is not retained in visual memory. Depending on the roles they play in organizing the perception of a scene, different visual properties may require different amounts of attention to be incorporated into a mental representation of the scene. The goal of this study was to compare how three visual properties of scenes, colour, object position, and object presence, are encoded in visual memory. We used a variation on the change detection “flicker” task and measured the time to detect scene changes when: (1) a cue was provided regarding the type of change; and, (2) no cue was provided. We hypothesized that cueing would enhance the processing of visual properties that require more attention to be encoded into scene representations, whereas cueing would not have an effect for properties that are readily or automatically encoded in visual memory. In Experiment 1, we found that there was a cueing advantage for colour changes, but not for position or presence changes. In Experiment 2, we found the same cueing effect regardless of whether the colour change altered the configuration of the scene or not. These results are consistent with the idea that properties that typically help determine the configuration of the scene, for example, position and presence, are better encoded in scene representations than are surface properties such as colour.

16.
Sun HM, Gordon RD. Memory & Cognition, 2010, 38(8): 1049-1057
In five experiments, we examined the influence of contextual objects’ location and visual features on visual memory. Participants’ visual memory was tested with a change detection task in which they had to judge whether the orientation (Experiments 1A, 1B, and 2) or color (Experiments 3A and 3B) of a target object was the same. Furthermore, contextual objects’ locations and visual features were manipulated in the test image. The results showed that change detection performance was better when contextual objects’ locations remained the same from study to test, demonstrating that the original spatial configuration is important for subsequent visual memory retrieval. The results further showed that changes to contextual objects’ orientation, but not color, reduced orientation change detection performance; and changes to contextual objects’ color, but not orientation, impaired color change detection performance. Therefore, contextual objects’ visual features are capable of affecting visual memory. However, selective attention plays an influential role in modulating such effects.

17.
What is the nature of the representation formed during the viewing of natural scenes? We tested two competing hypotheses regarding the accumulation of visual information during scene viewing. The first holds that coherent visual representations disintegrate as soon as attention is withdrawn from an object and thus that the visual representation of a scene is exceedingly impoverished. The second holds that visual representations do not necessarily decay upon the withdrawal of attention, but instead can be accumulated in memory from previously attended regions. Target objects in line drawings of natural scenes were changed during a saccadic eye movement away from those objects. Three findings support the second hypothesis. First, changes to the visual form of target objects (token substitution) were successfully detected, as indicated by both explicit and implicit measures, even though the target object was not attended when the change occurred. Second, these detections were often delayed until well after the change. Third, changes to semantically inconsistent target objects were detected better than changes to semantically consistent objects.

18.
The effect of spatial position on visual short-term memory (VSTM) for sequentially presented objects has been investigated relatively little, despite the fact that vision in natural environments is characterised by frequent changes in object position and gaze location. We investigated the effect of reusing previously examined spatial positions on VSTM for object appearance. Observers performed a yes-no recognition task following a memory display comprising briefly presented 1/f noise discs (i.e., possessing spectral properties akin to natural images) shown sequentially at random coordinates. At test, single stimuli were presented either at original spatial positions, new positions, or at a fixed central position. Results, interpreted in terms of appearance and position preview effects, indicate that, where original spatial positions were reused at test, memory performance was elevated by more than 25%, even though spatial position was task-irrelevant (in the sense that it could not be used to facilitate a correct response per se). This study generalises object-spatial-position binding theory to a sequential display scenario in which the influences of extrafoveal processing, spatial context cues, and long-term memory support were minimised, thereby eliminating the hypothesis that object priming is the principal cause of the 'same-position advantage' in VSTM.

19.
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices.

20.
Saccade-contingent change detection provides a powerful tool for investigating scene representation and scene memory. In the present study, critical objects presented within color images of naturalistic scenes were changed during a saccade toward or away from the target. During the saccade, the critical object was changed to another object type, to a visually different token of the same object type, or was deleted from the scene. There were three main results. First, the deletion of a saccade target was special: Detection performance for saccade target deletions was very good, and this level of performance did not decline with the amplitude of the saccade. In contrast, detection of type and token changes at the saccade target, and of all changes (including deletions) at a location that had just been fixated but was not the saccade target, decreased as the amplitude of the saccade increased. Second, detection performance for type and token changes, both when the changing object was the target of the saccade and when the object had just been fixated but was not the saccade target, was well above chance. Third, mean gaze durations were reliably elevated for those trials in which the change was not overtly detected. The results suggest that the presence of the saccade target plays a special role in transsaccadic integration, and together with other recent findings, suggest more generally that a relatively rich scene representation is retained across saccades and stored in visual memory.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号