Similar Literature
20 similar documents found.
1.
Recent research has found that visual object memory can be stored as part of a larger scene representation rather than independently of scene context. The present study examined how spatial and nonspatial contextual information modulate visual object memory. Two experiments tested participants’ visual memory by using a change detection task in which a target object's orientation was either the same as it appeared during initial viewing or changed. In addition, we examined the effect of spatial and nonspatial contextual manipulations on change detection performance. The results revealed that visual object representations can be maintained reliably after viewing arrays of objects. Moreover, change detection performance was significantly higher when either spatial or nonspatial contextual information remained the same in the test image. We concluded that while processing complex visual stimuli such as object arrays, visual object memory can be stored as part of a comprehensive scene representation, and both spatial and nonspatial contextual changes modulate visual memory retrieval and comparison.

2.
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects.

3.
Three experiments used a change detection paradigm across a range of study–test intervals to address the respective contributions of location, shape, and color to the formation of bindings of features in sensory memory and visual short-term memory (VSTM). In Experiment 1, location was designated task irrelevant and was randomized between study and test displays. The task was to detect changes in the bindings between shape and color. In Experiments 2 and 3, shape and color, respectively, were task irrelevant and randomized, with bindings tested between location and color (Experiment 2) and location and shape (Experiment 3). At shorter study–test intervals, randomizing location was most disruptive, followed by shape and then color. At longer intervals, randomizing any task-irrelevant feature had no impact on change detection for bindings between features, and location had no special role. Results suggest that location is crucial for initial perceptual binding but loses that special status once representations are formed in VSTM, which operates according to different principles than do visual attention and perception.

4.
A computational model was developed to explain a pattern of results of fMRI activation in the intraparietal sulcus (IPS) supporting visual working memory for multiobject scenes. The model is based on the hypothesis that dendrites of excitatory neurons are major computational elements in the cortical circuit. Dendrites enable formation of a competitive queue that exhibits a gradient of activity values for nodes encoding different objects, and this pattern is stored in working memory. In the model, brain imaging data are interpreted as a consequence of blood flow arising from dendritic processing. Computer simulations showed that the model successfully simulates data showing the involvement of inferior IPS in object individuation and spatial grouping through representation of objects’ locations in space, along with the involvement of superior IPS in object identification through representation of a set of objects’ features. The model exhibits a capacity limit due to the limited dynamic range for nodes and the operation of lateral inhibition among them. The capacity limit is fixed in the inferior IPS regardless of the objects’ complexity, due to the normalization of lateral inhibition, and variable in the superior IPS, due to the different encoding demands for simple and complex shapes. Systematic variation in the strength of self-excitation enables an understanding of the individual differences in working memory capacity. The model offers several testable predictions regarding the neural basis of visual working memory.
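A note on the mechanism described above: the capacity limit arises from nodes with a limited dynamic range that excite themselves and inhibit one another, with the strength of self-excitation governing how many items survive. The following minimal rate-model sketch illustrates that general idea only; the function name, parameters, and values are illustrative assumptions, not the published model's equations.

    import numpy as np

    def simulate_competitive_queue(inputs, w_self=0.6, w_inh=0.25, dt=0.01, steps=2000):
        """Minimal sketch of a competitive queue: nodes with a limited
        dynamic range excite themselves and laterally inhibit one another,
        so only a few items settle at high activation (a capacity limit).
        Illustrative assumption only, not the published model."""
        a = np.zeros(len(inputs))
        for _ in range(steps):
            inhibition = w_inh * (a.sum() - a)          # lateral inhibition from all other nodes
            drive = inputs + w_self * a - inhibition    # bottom-up input plus self-excitation
            a += dt * (-a + np.clip(drive, 0.0, 1.0))   # limited dynamic range [0, 1]
        return a

    # Six objects with graded encoding strengths (hypothetical values):
    activity = simulate_competitive_queue(np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4]))
    print(np.round(activity, 2))                        # activity gradient across objects
    print("items retained:", int((activity > 0.1).sum()))

In this toy setting, raising w_self keeps more nodes above threshold, which is one way to picture the individual differences in capacity mentioned in the abstract.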

5.
In 3 experiments the author investigated the relationship between the online visual representation of natural scenes and long-term visual memory. In a change detection task, a target object either changed or remained the same from an initial image of a natural scene to a test image. Two types of changes were possible: rotation in depth, or replacement by another object from the same basic-level category. Change detection during online scene viewing was compared with change detection after a delay of 1 trial (Experiments 2A and 2B), a delay until the end of the study session (Experiment 1), or a delay of 24 hr (Experiment 3). There was little or no decline in change detection performance from online viewing to a delay of 1 trial or a delay until the end of the session, and change detection remained well above chance after 24 hr. These results demonstrate that long-term memory for visual detail in a scene is robust.

6.
Invariant spatial context can facilitate visual search. For instance, detection of a target is faster if it is presented within a repeatedly encountered, as compared to a novel, layout of nontargets, demonstrating a role of contextual learning for attentional guidance (‘contextual cueing’). Here, we investigated how context-based learning adapts to target location (and identity) changes. Three experiments were performed in which, in an initial learning phase, observers learned to associate a given context with a given target location. A subsequent test phase then introduced identity and/or location changes to the target. The results showed that contextual cueing could not compensate for target changes that were not ‘predictable’ (i.e. learnable). However, for predictable changes, contextual cueing remained effective even immediately after the change. These findings demonstrate that contextual cueing is adaptive to predictable target location changes. Under these conditions, learned contextual associations can be effectively ‘remapped’ to accommodate new task requirements.

7.
We examined the aftermath of accessing and retrieving a subset of information stored in visual working memory (VWM)—namely, whether detection of a mismatch between memory and perception can impair the original memory of an item while triggering recognition-induced forgetting for the remaining, untested items. For this purpose, we devised a consecutive-change detection task wherein two successive testing probes were displayed after a single set of memory items. Across two experiments utilizing different memory-testing methods (whole vs. single probe), we observed a reliable pattern of poor performance in change detection for the second test when the first test had exhibited a color change. The impairment after a color change was evident even when the same memory item was repeatedly probed; this suggests that an attention-driven, salient visual change made it difficult to reinstate the previously remembered item. The second change detection, for memory items untested during the first change detection, was also found to be inaccurate, indicating that recognition-induced forgetting had occurred for the unprobed items in VWM. In a third experiment, we conducted a task that involved change detection plus continuous recall, wherein a memory recall task was presented after the change detection task. The analyses of the distributions of recall errors with a probabilistic mixture model revealed that the memory impairments from both visual changes and recognition-induced forgetting are explained better by the stochastic loss of memory items than by their degraded resolution. These results indicate that attention-driven visual change and recognition-induced forgetting jointly influence the “recycling” of VWM representations.
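The probabilistic mixture model referred to above is, in the common approach to continuous-recall errors, a mixture of a von Mises component centred on the target (whose concentration reflects resolution) and a uniform guessing component (whose weight reflects stochastic loss of items). The sketch below fits such a mixture by maximum likelihood; the function names, starting values, and fitting routine are illustrative assumptions, not the authors' analysis code.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import vonmises

    def neg_log_likelihood(params, errors):
        """Mixture likelihood: with probability (1 - g) an error comes from a
        von Mises centred on the target (precision kappa); with probability g
        it is a uniform random guess over (-pi, pi]."""
        g, kappa = params
        density = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
        return -np.sum(np.log(density))

    def fit_mixture(errors):
        """Fit guess rate g (stochastic loss) and kappa (resolution)."""
        result = minimize(neg_log_likelihood, x0=[0.2, 5.0], args=(errors,),
                          bounds=[(1e-3, 0.999), (0.1, 200.0)])
        return {"guess_rate": result.x[0], "kappa": result.x[1]}

    # Simulated recall errors: 30% guesses, 70% noisy reports of the target.
    rng = np.random.default_rng(0)
    guesses = rng.uniform(-np.pi, np.pi, 500)
    reports = vonmises.rvs(8.0, size=500, random_state=rng)
    errors = np.where(rng.random(500) < 0.3, guesses, reports)
    print(fit_mixture(errors))   # higher guess_rate = more item loss; lower kappa = poorer resolution

In this framing, the finding reported above corresponds to a change in the guess rate rather than in kappa.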

8.
Because the importance of color in visual tasks such as object identification and scene memory has been debated, we sought to determine whether color is used to guide visual search in contextual cuing with real-world scenes. In Experiment 1, participants searched for targets in repeated scenes that were shown in one of three conditions: natural colors, unnatural colors that remained consistent across repetitions, and unnatural colors that changed on every repetition. We found that the pattern of learning was the same in all three conditions. In Experiment 2, we did a transfer test in which the repeating scenes were shown in consistent colors that suddenly changed on the last block of the experiment. The color change had no effect on search times, relative to a condition in which the colors did not change. In Experiments 3 and 4, we replicated Experiments 1 and 2, using scenes from a color-diagnostic category of scenes, and obtained similar results. We conclude that color is not used to guide visual search in real-world contextual cuing, a finding that constrains the role of color in scene identification and recognition processes.

9.
When representing visual features such as color and shape in visual working memory (VWM), participants also represent the locations of those features as a spatial configuration of the display. In everyday life, we encounter objects against some background, yet it is unclear whether the configural representation in memory obligatorily encompasses the entire display, including that (often task-irrelevant) background information. In three experiments, participants completed a change detection task on color and shape; the memoranda were presented in front of uniform gray backgrounds, a textured background (Exp. 1), or a background containing location placeholders (Exps. 2 and 3). When whole-display probes were presented, changes to the objects’ locations or feature bindings impacted memory performance, implying that the spatial configuration of the probes influenced participants’ change decisions. Furthermore, when only a single item was probed, the effect of changing its location or feature bindings was either diminished or completely extinguished, implying that single probes do not necessarily elicit the entire spatial configuration. Critically, when task-irrelevant backgrounds that may have provided a spatial configuration for the single probes were also presented, the effect of location or binding changes was not moderated. These findings suggest that although the spatial configuration of a display guides VWM-based recognition, this information does not necessarily always influence the decision process during change detection.

10.
This study used a single-probe change detection paradigm to examine how objects defined by two feature dimensions are stored in visual object and visual spatial working memory, and to measure the corresponding capacities. Forty participants (mean age 20.56 ± 1.73 years) were randomly assigned to two equal groups, completing Experiment 1 and Experiment 2, respectively. The stimuli in Experiment 1 were figures composed of two basic features, color and shape; the stimuli in Experiment 2 were Landolt rings varying in color and gap orientation. Both experiments showed that (1) memory performance in the feature-swap change condition did not differ significantly from the worst performance among the single-feature change conditions; (2) performance on the spatial working memory task was significantly better than on the object working memory task; and (3) participants could store 2–3 objects and 3–4 spatial locations in visual working memory. These results indicate that figures composed of features from two different dimensions are stored in an integrated form in both visual object and visual spatial working memory, and that the capacity of spatial working memory exceeds that of object working memory.
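The abstract does not state which estimator produced the 2–3 object and 3–4 location figures; for single-probe change detection, capacity is often estimated with Cowan's K, sketched below with hypothetical hit and false-alarm rates (an assumption, not the study's data).

    def cowans_k(set_size, hit_rate, false_alarm_rate):
        """Cowan's K for single-probe change detection: K = N * (H - FA)."""
        return set_size * (hit_rate - false_alarm_rate)

    # Hypothetical rates at set size 6 (not the study's data):
    print(cowans_k(set_size=6, hit_rate=0.78, false_alarm_rate=0.28))   # K = 3.0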

11.
Whether selective attention binds features in visual short-term memory or prioritizes selection for memory consolidation and decision was investigated with a change detection paradigm. Two types of change were manipulated: changes to single features or to conjunctions of features. Previous work suggests that the allocation of attentional resources affects binding; hence, attentional shifts during retention should affect the detection of conjunction changes more than feature changes. The results of Experiments 1 and 2 showed that attention shifts had a similar impact on detecting feature and conjunction changes. Experiment 3 showed a performance benefit with a post-cue occurring 200 or 550 ms after stimulus offset, but no improvement was found when prioritization occurred with a delay of 800 ms. The results of Experiment 4 suggested that signals from both feature changes and conjunction changes contribute to detection. The theoretical implications are discussed.

12.
A behavioral and computational treatment of change detection is reported. The behavioral task was to judge whether a single object substitution change occurred between two “flickering” 9-object scenes. Detection performance was found to vary with the similarity of the changing objects; object changes violating orientation and category yielded the fastest and most accurate detection responses. To account for these data, the BOLAR model was developed, which uses color-, orientation-, and scale-selective filters to compute the visual dissimilarity between the pre- and postchange objects from the behavioral study. Relating the magnitude of the BOLAR difference signals to change detection performance revealed that object pairs estimated as visually least similar were the same object pairs most easily detected by observers. The BOLAR model advances change detection theory by (1) demonstrating that the visual similarity between the change patterns can account for much of the variability in change detection behavior, and (2) providing a computational technique for quantifying these visual similarity relationships for real-world objects.
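The dissimilarity signal described above is derived from banks of color-, orientation-, and scale-selective filters applied to the pre- and postchange objects. The sketch below conveys that general idea with a small Gabor filter bank and a Euclidean distance over pooled responses; the specific filters, pooling, and distance metric here are illustrative assumptions rather than the published BOLAR implementation.

    import numpy as np
    from scipy.signal import fftconvolve

    def gabor(size, wavelength, theta):
        """One orientation- and scale-selective Gabor filter (illustrative)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        rotated = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x ** 2 + y ** 2) / (2 * (size / 4) ** 2))
        return envelope * np.cos(2 * np.pi * rotated / wavelength)

    def filter_vector(image):
        """Pool rectified filter responses over color channels, scales, and
        orientations into one feature vector per object image."""
        features = []
        for channel in np.moveaxis(image, -1, 0):                       # R, G, B channels
            for wavelength in (4, 8, 16):                               # scales
                for theta in np.linspace(0, np.pi, 4, endpoint=False):  # orientations
                    response = fftconvolve(channel, gabor(15, wavelength, theta), mode="same")
                    features.append(np.abs(response).mean())
        return np.array(features)

    def visual_dissimilarity(object_a, object_b):
        """BOLAR-like dissimilarity: distance between the pooled filter
        responses of the pre- and postchange object images."""
        return np.linalg.norm(filter_vector(object_a) - filter_vector(object_b))

    # Two random 64 x 64 RGB arrays stand in for real object images:
    rng = np.random.default_rng(1)
    print(visual_dissimilarity(rng.random((64, 64, 3)), rng.random((64, 64, 3))))

Larger distances would predict faster and more accurate detection, mirroring the relationship reported in the abstract.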

13.
Perceptual grouping in change detection
Detection of a change in an item's location from one instance to another is typically unaffected by changes in the shape or color of contextual items. However, we demonstrate here that such location change detection is severely impaired if the elongated axes of contextual items change orientation, even though individual locations remain constant and even though the orientation was irrelevant to the task. Changing the orientations of the elongated stimuli altered the perceptual organization of the display, which had an important influence on change detection. In detecting location changes, subjects were unable to ignore changes in orientation unless additional, invariant grouping cues were provided or unless the items changing orientation could be actively ignored using feature-based attention (color cues). Our results suggest that some relational grouping cues are represented in change detection even when they are task irrelevant.

14.
The effects of blocked versus mixed presentation were tested on visual feature binding, assuming that blocked presentation enhances focused attention, whilst mixed presentation recruits extra attentional resources for intratrial as well as intertrial processing. The contextual interference effect suggests that although performance with mixed presentation is similar to or worse than that with blocked presentation when tested immediately, it is better when tested after an interval. We explored whether this robust empirical effect, common in psychomotor performance, would be evident in visual feature binding. Stimuli were conjunctions of shape, colour, and location. Study–test intervals from 0 to 2,500 ms were used with a swap detection task. In Experiments 1A and 1B, participants ignored locations to detect shape–colour bindings. In Experiments 2A and 2B, they ignored shapes to detect colour–location bindings. In Experiments 3A and 3B, they ignored colours to detect shape–location bindings. Whilst Experiments 1A, 2A, and 3A used blocked presentation, Experiments 1B, 2B, and 3B used mixed presentation of study–test intervals. The results of these experiments and a replication experiment using a within-subjects design showed that the contextual interference effect appeared when spatial attention was engaged, but not when attention was object based.

15.
Although memory for the identities of examined items is not used to guide visual search, identity memory may be acquired during visual search. In all experiments reported here, search was occasionally terminated and a memory test was presented for the identity of a previously examined item. Participants demonstrated memory for the locations of the examined items by avoiding revisits to these items, and memory performance for the items’ identities was above chance but lower than would be expected on the basis of performance in intentional memory tests. Memory performance improved when the foil was not from the search set, suggesting that explicit identity memory is not bound to memory for location. Providing context information during test improved memory for the most recently examined item. Memory for the identities of previously examined items was best when the most recently examined item was tested, contextual information was provided, and location memory was not required.

16.
Statistical properties in the visual environment can be used to improve performance on visual working memory (VWM) tasks. The current study examined the ability to incidentally learn that a change is more likely to occur to a particular feature dimension (shape, color, or location) and use this information to improve change detection performance for that dimension (the change probability effect). Participants completed a change detection task in which one change type was more probable than others. Change probability effects were found for color and shape changes, but not location changes, and intentional strategies did not improve the effect. Furthermore, the change probability effect developed and adapted to new probability information quickly. Finally, in some conditions, an improvement in change detection performance for a probable change led to an impairment in change detection for improbable changes.

17.
This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object token or the target object rotated in depth. In Experiments 1 and 2, object memory performance was more accurate when the test object alternatives were displayed within the original scene than when they were displayed in isolation, demonstrating object-to-scene binding. Experiment 3 tested the hypothesis that episodic scene representations are formed through the binding of object representations to scene locations. Consistent with this hypothesis, memory performance was more accurate when the test alternatives were displayed within the scene at the same position originally occupied by the target than when they were displayed at a different position.

18.
In the present study, we examined whether greater attentional resources are required for consolidating two features (e.g., color and orientation) than for consolidating one feature (e.g., color) in visual working memory (WM). We used a dual-task procedure: Subjects performed a WM task and a secondary probe task, sometimes concurrently. In the WM task, subjects decided whether two displays (containing one to four objects composed of one or two features) were the same or different. In the probe task, subjects made a speeded discrimination response to a tone. Performance in both tasks was impaired when they were performed concurrently; however, performance costs in the tone task were not greater for multi- than for single-feature conditions (when the orientation and conjunction conditions were considered). Results suggested that equivalent attentional resources were necessary for consolidation of single-orientation or multifeature items.

19.
The tripartite model of working memory (WM) postulates that a dedicated subsystem, the visuo-spatial sketchpad (VSSP), processes non-verbal content. On the basis of behavioral and neurophysiological findings, the VSSP was later subdivided into visual object and visual spatial processing, the former representing objects’ appearance and the latter spatial information. This distinction is well supported. However, a challenge to this model is the question of how spatial information from non-visual sensory modalities, for example audition, is processed. Only a few studies so far have directly compared visual and auditory spatial WM. They suggest that the distinction between two processing domains, one for object and one for spatial information, also holds true for auditory WM, but that only a part of the processes is modality specific. We propose that processing in the object domain (the item’s appearance) is modality specific, while spatial WM as well as object-location binding relies on modality-general processes.

20.
Studies of change detection have shown that changing the task-irrelevant features of remembered objects impairs change detection for task-relevant features, a phenomenon known as the irrelevant change effect. Although this effect is pronounced at short study-test intervals, it is eliminated at longer delays. This has prompted the proposal that although all features of attended objects are initially stored together in visual working memory (VWM), top-down control can be used to suppress task-irrelevant features over time. The present study reports the results of three experiments aimed at testing the top-down suppression hypothesis. Experiments 1 and 2 tested whether the magnitude or time course of the irrelevant change effect was affected by the concurrent performance of a demanding executive load task (counting backwards by threes). Contrary to the top-down suppression view, the decreased availability of executive resources did not prolong the duration of the irrelevant change effect in either experiment, as would be expected if these resources were necessary to actively suppress task-irrelevant features. Experiment 3 showed that a visual pattern mask eliminates the irrelevant change effect and suggests that the source of the effect may lie in the use of a high-resolution sensory memory representation to match the memory and test displays when no task-irrelevant feature changes are present. These results suggest that the dissipation of the irrelevant change effect over time likely does not depend on the use of top-down control and raise questions about what can be inferred about the nature of storage in VWM from studies of this effect.
