Similar literature
20 similar documents found
1.
Justin N. Wood, Visual Cognition, 2013, 21(10), 1464-1485
What frame of reference do we use to remember observed movements? One possibility is that visual working memory (VWM) retains movement information using a retinotopic frame of reference: A coordinate system with respect to the retina that retains view-dependent information. Alternatively, VWM might retain movement information using an allocentric frame of reference: A coordinate system with respect to the scene that retains view-invariant information. To address this question, I examined whether VWM retains view-dependent or view-invariant movement information. Results show that (1) observers have considerable difficulty remembering from which viewpoints they observed movements after a few seconds' delay, and (2) the same number of movements can be retained in VWM whether the movements are encoded and tested from the same viewpoint or from different viewpoints. Thus, movement representations contain little to no view-dependent information, which suggests that VWM uses an allocentric reference frame to retain movement information.

2.
How does visual long-term memory store representations of different entities (e.g., objects, actions, and scenes) that are present in the same visual event? Are the different entities stored as an integrated representation in memory, or are they stored separately? To address this question, we asked observers to view a large number of events; in each event, an action was performed within a scene. Afterward, the participants were shown pairs of action–scene sets and indicated which of the two they had seen. When the task required recognizing the individual actions and scenes, performance was high (80 %). Conversely, when the task required remembering which actions had occurred within which scenes, performance was significantly lower (59 %). We observed this dissociation between memory for individual entities and memory for entity bindings across multiple testing conditions and presentation durations. These experiments indicate that visual long-term memory stores information about actions and information about scenes separately from one another, even when an action and scene were observed together in the same visual event. These findings also highlight an important limitation of human memory: Situations that require remembering actions and scenes as integrated events (e.g., eyewitness testimony) may be particularly vulnerable to memory errors.
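For readers who want to put the 80% versus 59% forced-choice figures onto a common sensitivity scale, the short sketch below applies the standard two-alternative forced-choice conversion, d' = √2 · z(proportion correct). This is a generic signal-detection calculation added here for illustration only; it is not an analysis reported in the abstract.

```python
from scipy.stats import norm

def two_afc_dprime(p_correct):
    """Convert two-alternative forced-choice accuracy to d'
    using the standard relation d' = sqrt(2) * z(p_correct)."""
    return 2 ** 0.5 * norm.ppf(p_correct)

# Accuracy figures quoted in the abstract above:
print(two_afc_dprime(0.80))  # individual actions/scenes: d' is roughly 1.19
print(two_afc_dprime(0.59))  # action-scene bindings:     d' is roughly 0.32
```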

3.
A defining characteristic of visual working memory is its limited capacity. This means that it is crucial to maintain only the most relevant information in visual working memory. However, empirical research is mixed as to whether it is possible to selectively maintain a subset of the information previously encoded into visual working memory. Here we examined the ability of participants to use cues to either forget or remember a subset of the information already stored in visual working memory. In Experiment 1, participants were cued to either forget or remember 1 of 2 groups of colored squares during a change-detection task. We found that both types of cues aided performance in the visual working memory task but that observers benefited more from a cue to remember than a cue to forget a subset of the objects. In Experiment 2, we show that the previous findings, which indicated that directed-forgetting cues are ineffective, were likely due to the presence of invalid cues that appeared to cause observers to disregard such cues as unreliable. In Experiment 3, we recorded event-related potentials and show that an electrophysiological index of focused maintenance is elicited by cues that indicate which subset of information in visual working memory needs to be remembered, ruling out alternative explanations of the behavioral effects of retention-interval cues. The present findings demonstrate that observers can focus maintenance mechanisms on specific objects in visual working memory based on cues indicating future task relevance.

4.
Wong JH, Peterson MS, Thompson JC, Cognition, 2008, 108(3), 719-731
The capacity of visual working memory was examined when complex objects from different categories were remembered. Previous studies have not examined how visual similarity affects object memory, though it has long been known that similar-sounding phonological information interferes with rehearsal in auditory working memory. Here, experiments required memory for two or four objects. Memory capacity for four objects drawn from a single object category was compared with that for four objects drawn from two different categories. Two-category sets led to increased memory capacity only when upright faces were included. Capacity for face-only sets never exceeded their nonface counterparts, and the advantage for two-category sets when faces were one of the categories disappeared when inverted faces were used. These results suggest that two-category sets which include faces are advantaged in working memory but that faces alone do not lead to a memory capacity advantage.

5.
Research has shown that performing visual search while maintaining representations in visual working memory displaces up to one object's worth of information from memory. This memory displacement has previously been attributed to a nonspecific disruption of the memory representation by the mere presentation of the visual search array, and the goal of the present study was to determine whether it instead reflects the use of visual working memory in the actual search process. The first hypothesis tested was that working memory displacement occurs because observers preemptively discard about an object's worth of information from visual working memory in anticipation of performing visual search. Second, we tested the hypothesis that on target absent trials no information is displaced from visual working memory because no target is entered into memory when search is completed. Finally, we tested whether visual working memory displacement is due to the need to select a response to the search array. The findings rule out these alternative explanations. The present study supports the hypothesis that change-detection performance is impaired when a search array appears during the retention interval due to nonspecific disruption or masking.

6.
Several different sources of evidence support the idea that visuo-spatial working memory can be segregated into separate cognitive subsystems. However, the nature of these systems remains unclear. Recently we reported data from neurological patients suggesting that information about visual appearance is retained in a different subsystem from information about spatial location. In this paper we report latency data from neurologically intact participants showing an experimental double dissociation between memory for appearance and memory for location. This was achieved by use of a selective dual task interference technique. This pattern provides evidence supporting the segregation of visuo-spatial memory between two systems, one of which supports memory for stimulus appearance and the other of which supports memory for spatial location.

7.
Visual working memory for observed actions (Total citations: 1, self-citations: 0, other citations: 1)
Human society depends on the ability to remember the actions of other individuals, which is information that must be stored in a temporary buffer to guide behavior after actions have been observed. To date, however, the storage capacity, contents, and architecture of working memory for observed actions are unknown. In this article, the author shows that it is possible to retain information about only 2-3 actions in visual working memory at once. However, it is also possible to retain 9 properties distributed across 3 actions almost as well as 3 properties distributed across 3 actions, showing that working memory stores integrated action representations rather than individual properties. Finally, the author shows that working memory for observed actions is independent from working memory for object and spatial information. These results provide evidence for a previously undocumented system in working memory for storing information about actions. Further, this system operates by the same storage principles as visual working memory for object information. Thus, working memory consists of a series of distinct yet computationally similar mechanisms for retaining different types of visual information.

8.
Recent research using change-detection tasks has shown that a directed-forgetting cue, indicating that a subset of the information stored in memory can be forgotten, significantly benefits the other information stored in visual working memory. How do these directed-forgetting cues aid the memory representations that are retained? We addressed this question in the present study by using a recall paradigm to measure the nature of the retained memory representations. Our results demonstrated that a directed-forgetting cue leads to higher-fidelity representations of the remaining items and a lower probability of dropping these representations from memory. Next, we showed that this is made possible by the to-be-forgotten item being expelled from visual working memory following the cue, allowing maintenance mechanisms to be focused on only the items that remain in visual working memory. Thus, the present findings show that cues to forget benefit the remaining information in visual working memory by fundamentally improving their quality relative to conditions in which just as many items are encoded but no cue is provided.
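The "fidelity" and "probability of dropping an item" language in recall paradigms of this kind is usually made concrete with a mixture model: each response is either drawn from a noisy memory of the studied feature or is a random guess. The sketch below is a generic illustration of that model, not code from the study, and the parameter values are invented.

```python
import numpy as np
from scipy.stats import vonmises

def mixture_pdf(errors, kappa, p_mem):
    """Density of recall errors (radians, -pi..pi) under a standard mixture
    model: with probability p_mem the item is in memory and the error is
    von Mises distributed around zero with concentration kappa (larger
    kappa = higher fidelity); otherwise the response is a uniform random
    guess on the circle."""
    return p_mem * vonmises.pdf(errors, kappa) + (1 - p_mem) / (2 * np.pi)

# Invented parameter values for illustration: a directed-forgetting cue
# would appear as a larger kappa (better fidelity) and a larger p_mem
# (lower probability of the item being dropped from memory).
errors = np.linspace(-np.pi, np.pi, 5)
print(mixture_pdf(errors, kappa=8.0, p_mem=0.90))   # cued condition
print(mixture_pdf(errors, kappa=4.0, p_mem=0.75))   # no-cue baseline
```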

9.
Previous research indicates that visual attention can be automatically captured by sensory inputs that match the contents of visual working memory. However, Woodman and Luck (2007) showed that information in working memory can be used flexibly as a template for either selection or rejection according to task demands. We report two experiments that extend their work. Participants performed a visual search task while maintaining items in visual working memory. Memory items were presented for either a short or long exposure duration immediately prior to the search task. Memory was tested by a change-detection task immediately afterwards. On a random half of trials items in memory matched either one distractor in the search task (Experiment 1) or three (Experiment 2). The main result was that matching distractors speeded or slowed target detection depending on whether memory items were presented for a long or short duration. These effects were more in evidence with three matching distractors than one. We conclude that the influence of visual working memory on visual search is indeed flexible but is not solely a function of task demands. Our results suggest that attentional capture by perceptual inputs matching information in visual working memory involves a fast automatic process that can be overridden by a slower top-down process of attentional avoidance.

10.
The attentional effect on visual working memory (VWM) has been an intensively studied topic over the past two decades. Studies show that VWM performance for an attended memory item can be improved by cueing its two-dimensional (2D) spatial location during retention. However, few studies have investigated the effect of attentional selection on VWM in a three-dimensional setting, and it remains unknown whether depth information can produce beneficial attentional effects on 2D visual representations similar to those of 2D spatial information. Here we conducted four experiments, displaying memory items at various stereoscopic depth planes, and examined the retro-cue effects of four types of cues: a cue indicated either the 2D location or the depth location of a memory item, and was either physical (directly pointing to a location) or symbolic (numerically mapping onto a location). We found that retro-cue benefits were only observed for cues directly pointing to a 2D location, whereas a null effect was observed for cues directly pointing to a depth location. However, there was a retro-cue effect when cueing the relative depth order, though the effect was weaker than that for cueing the 2D location. The selective effect of 2D spatial attention on VWM differs from that of depth-based attention, and the divergence suggests that an object representation is primarily bound to its 2D spatial location, weakly bound to its depth order, and not bound to its metric depth location. This indicates that attentional selection based on memory for depth, particularly metric depth, is ineffective.

11.
The biased competition theory proposes that items matching the contents of visual working memory will automatically have an advantage in the competition for attention. However, evidence for an automatic effect has been mixed, perhaps because the memory-driven attentional bias can be overcome by top-down suppression. To test this hypothesis, the Pd component of the event-related potential waveform was used as a marker of attentional suppression. While observers maintained a colour in working memory, task-irrelevant probe arrays were presented that contained an item matching the colour being held in memory. We found that the memory-matching probe elicited a Pd component, indicating that it was being actively suppressed. This result suggests that sensory inputs matching the information being held in visual working memory are automatically detected and generate an “attend-to-me” signal, but this signal can be overridden by an active suppression mechanism to prevent the actual capture of attention.

12.
Storage of features, conjunctions and objects in visual working memory (Total citations: 25, self-citations: 0, other citations: 25)
Working memory can be divided into separate subsystems for verbal and visual information. Although the verbal system has been well characterized, the storage capacity of visual working memory has not yet been established for simple features or for conjunctions of features. The authors demonstrate that it is possible to retain information about only 3-4 colors or orientations in visual working memory at one time. Observers are also able to retain both the color and the orientation of 3-4 objects, indicating that visual working memory stores integrated objects rather than individual features. Indeed, objects defined by a conjunction of four features can be retained in working memory just as well as single-feature objects, allowing many individual features to be retained when distributed across a small number of objects. Thus, the capacity of visual working memory must be understood in terms of integrated objects rather than individual features.
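The 3-4 item figures in change-detection studies of this kind are conventionally estimated from hit and false-alarm rates with Cowan's K. The snippet below shows that estimator as a generic illustration of how such a capacity number is derived; the example rates are invented, not data from the article.

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Capacity estimate for single-probe change detection:
    K = N * (H - FA), where N is the number of items in the memory array."""
    return set_size * (hit_rate - false_alarm_rate)

# Invented example rates: with 8 items, H = 0.65 and FA = 0.20 give K = 3.6,
# in line with the 3-4 object limit described above.
print(cowan_k(set_size=8, hit_rate=0.65, false_alarm_rate=0.20))
```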

13.
Visual working memory capacity for the motion directions of objects (Total citations: 2, self-citations: 0, other citations: 2)
Remembering the features and spatiotemporal information of multiple objects in dynamic scenes is an important human cognitive activity. Although research on visual working memory has extensively examined the storage capacity and mechanisms for visual information, the stimuli used have been presented in static scenes and contain no motion information. Research on multiple-object tracking, in turn, focuses only on the real-time updating of information about multiple objects in dynamic scenes, not on the retention of object information over a period of time. The present study combined the change-detection paradigm from the visual working memory literature with the multiple-object tracking paradigm, using independently moving objects as stimuli, to investigate the storage capacity of working memory for the motion directions of multiple objects. The results show that the motion-direction information of approximately three objects can be stored in working memory.

14.
Feature binding in visual working memory (Total citations: 5, self-citations: 0, other citations: 5)
Feature binding is a frontier issue in cognitive science and neuroscience, and has recently become a focal point in researchers' debates about consciousness. Many mental activities involve binding the various features of complex objects, and the mechanism that maintains these bindings in working memory is the foundation for efficient mental processing. Feature binding in visual working memory has recently become one of the most active research topics. Most studies have relied on the change-detection paradigm and the single-probe paradigm to examine whether maintaining bound features in visual working memory requires attention, and how the storage of separate features relates to the storage of bound features, but none of these questions has yet received a clear answer.

15.
The Self-Memory System encompasses the working self, autobiographical memory and episodic memory. Specific autobiographical memories are patterns of activation over knowledge structures in autobiographical and episodic memory brought about by the activating effect of cues. The working self can elaborate cues based on the knowledge they initially activate and so control the construction of memories of the past and the future. It is proposed that such construction takes place in the remembering–imagining system: a window of highly accessible recent memories and simulations of near-future events. How this malfunctions in various disorders is considered, as are the implications of what we term the modern view of human memory for notions of memory accuracy. We show how all memories are to some degree false and that the main role of memories lies in generating personal meanings.

16.
Previous work has demonstrated that visual long-term memory (VLTM) stores detailed information about object appearance. The current experiments investigate whether object appearance information in VLTM is integrated within representations that contain picture-specific viewpoint information. In three experiments using both incidental and intentional encoding instructions, participants were unable to perform above chance on recognition tests that required recognizing the conjunction of object appearance and viewpoint information (Experiments 1a, 1b, 2, and 3). However, performance was better when object appearance information (Experiments 1a, 1b, and 2) or picture-specific viewpoint information (Experiment 3) alone was sufficient to succeed on the memory test. These results replicate previous work demonstrating good memory for object appearance and viewpoint. However, the current results suggest that object appearance and viewpoint are not episodically integrated in VLTM.

17.
Making recognition decisions often requires us to reference the contents of working memory, the information available for ongoing cognitive processing. As such, understanding how recognition decisions are made when based on the contents of working memory is of critical importance. In this work we examine whether recognition decisions based on the contents of visual working memory follow a continuous decision process of graded information about the correct choice or a discrete decision process reflecting only knowing and guessing. We find a clear pattern in favor of a continuous latent strength model of visual working memory–based decision making, supporting the notion that visual recognition decision processes are impacted by the degree of matching between the contents of working memory and the choices given. Relation to relevant findings and the implications for human information processing more generally are discussed.
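The continuous-versus-discrete contrast is often made concrete through the receiver operating characteristics the two accounts predict: a continuous latent-strength (signal-detection) model implies a curved ROC, whereas a discrete know-or-guess model implies a linear one. The sketch below illustrates only that general distinction under an equal-variance Gaussian assumption; it is not the authors' model code, and all parameter values are invented.

```python
import numpy as np
from scipy.stats import norm

criteria = np.linspace(-2, 2, 9)

# Continuous latent-strength (equal-variance signal detection) model:
# hits and false alarms come from two Gaussians separated by d-prime.
d_prime = 1.5
hits_sdt = norm.sf(criteria - d_prime)   # P(old-item strength exceeds criterion)
fas_sdt = norm.sf(criteria)              # P(new-item strength exceeds criterion)

# Discrete two-state (know-or-guess) model: with probability p_know the item
# is recognized outright; otherwise the observer guesses "old" at rate g.
p_know = 0.5
guess_rates = np.linspace(0.05, 0.95, 9)
hits_discrete = p_know + (1 - p_know) * guess_rates
fas_discrete = guess_rates

# The SDT ROC (hits_sdt vs fas_sdt) is curved; the discrete ROC
# (hits_discrete vs fas_discrete) is a straight line with slope (1 - p_know).
for f, h in zip(fas_sdt, hits_sdt):
    print(f"SDT:      FA={f:.2f} Hit={h:.2f}")
for f, h in zip(fas_discrete, hits_discrete):
    print(f"Discrete: FA={f:.2f} Hit={h:.2f}")
```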

18.
Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1–3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

19.
Many everyday tasks, such as remembering where you parked, require the capacity to store and manipulate information about the visual and spatial properties of the world. The ability to represent, remember, and manipulate spatial information is known as visuospatial working memory (VSWM). Despite substantial interest in VSWM, the mechanisms responsible for this ability remain debated. One influential idea is that VSWM depends on activity in the eye-movement (oculomotor) system. However, this has proved difficult to test because experimental paradigms that disrupt oculomotor control also interfere with other cognitive systems, such as spatial attention. Here, we present data from a novel paradigm that selectively disrupts activation in the oculomotor system. We show that the inability to make eye movements is associated with impaired performance on the Corsi Blocks task, but not on Arrow Span, Visual Patterns, Size Estimation or Digit Span tasks. It is argued that the oculomotor system is required to encode and maintain spatial locations indicated by a change in physical salience, but not non-salient spatial locations indicated by the meaning of a symbolic cue. This suggestion offers a way to reconcile the currently conflicting evidence regarding the role of the oculomotor system in spatial working memory.

20.
Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals.
