Similar Articles
20 similar articles found.
1.
In a change detection paradigm, a target object in a natural scene either rotated in depth, was replaced by another object token, or remained the same. Change detection performance was reliably higher when a target postcue allowed participants to restrict retrieval and comparison processes to the target object (Experiment 1). Change detection performance remained excellent when the target object was not attended at change (Experiment 2) and when a concurrent verbal working memory load minimized the possibility of verbal encoding (Experiment 3). Together, these data demonstrate that visual representations accumulate in memory from attended objects as the eyes and attention are oriented within a scene and that change blindness derives, at least in part, from retrieval and comparison failure.

2.
Three experiments were conducted to investigate the existence of incidentally acquired, long-term, detailed visual memory for objects embedded in previously viewed scenes. Participants performed intentional memorization and incidental visual search learning tasks while viewing photographs of real-world scenes. A visual memory test for previously viewed objects from these scenes then followed. Participants were not aware that they would be tested on the scenes following incidental learning in the visual search task. In two types of memory tests for visually specific object information (token discrimination and mirror-image discrimination), performance following both the memorization and visual search conditions was reliably above chance. These results indicate that recent demonstrations of good visual memory during scene viewing are not due to intentional scene memorization. Instead, long-term visual representations are incidentally generated as a natural product of scene perception.

3.
How can we reconcile remarkably precise long-term memory for thousands of images with failures to detect changes to similar images? We explored whether people can use detailed, long-term memory to improve change detection performance. Subjects studied a set of images of objects and then performed recognition and change detection tasks with those images. Recognition memory performance exceeded change detection performance, even when a single familiar object in the postchange display consistently indicated the change location. In fact, participants were no better when a familiar object predicted the change location than when the displays consisted of unfamiliar objects. When given an explicit strategy to search for a familiar object as a way to improve performance on the change detection task, they performed no better than in a 6-alternative recognition memory task. Subjects only benefited from the presence of familiar objects in the change detection task when they had more time to view the prechange array before it switched. Once the cost to using the change detection information decreased, subjects made use of it in conjunction with memory to boost performance on the familiar-item change detection task. This suggests that even useful information will go unused if it is sufficiently difficult to extract.

4.
Recent research has found that visual object memory can be stored as part of a larger scene representation rather than independently of scene context. The present study examined how spatial and nonspatial contextual information modulates visual object memory. Two experiments tested participants’ visual memory by using a change detection task in which a target object's orientation was either the same as it appeared during initial viewing or changed. In addition, we examined the effect of spatial and nonspatial contextual manipulations on change detection performance. The results revealed that visual object representations can be maintained reliably after viewing arrays of objects. Moreover, change detection performance was significantly higher when either spatial or nonspatial contextual information remained the same in the test image. We concluded that while processing complex visual stimuli such as object arrays, visual object memory can be stored as part of a comprehensive scene representation, and both spatial and nonspatial contextual changes modulate visual memory retrieval and comparison.

5.
An ongoing debate concerns whether visual object representations are relatively abstract, relatively specific, both abstract and specific within a unified system, or abstract and specific in separate and dissociable neural subsystems. Most of the evidence for the dissociable subsystems theory has come from experiments that used familiar shapes, and the use of familiar shapes has allowed for alternative explanations for the results. Thus, we examined abstract and specific visual working memory when the stimuli were novel objects viewed for the first and only time. When participants judged whether cues and probes belonged to the same abstract visual category, they performed more accurately when the probes were presented directly to the left hemisphere than when they were presented directly to the right hemisphere. In contrast, when participants judged whether or not cues and probes were the same specific visual exemplar, they performed more accurately when the probes were presented directly to the right hemisphere than when they were presented directly to the left hemisphere. For the first time, results from experiments using visual working memory tasks support the dissociable subsystems theory.

6.
In the present PET study, we examined brain activity related to processing of pictures and printed words in episodic memory. Our goal was to determine how the perceptual format of objects (verbal versus pictorial) is reflected in the neural organization of episodic memory for common objects. We investigated this issue in relation to encoding and recognition with a particular focus on medial temporal-lobe (MTL) structures. At encoding, participants saw pictures of objects or their written names and were asked to make semantic judgments. At recognition, participants made yes-no recognition judgments in four different conditions. In two conditions, target items were pictures of objects; these objects had originally been encoded either in picture or in word format. In two other conditions, target items were words; they also denoted objects originally encoded either as pictures or as words. Our data show that right MTL structures are differentially involved in picture processing during encoding and recognition. A posterior MTL region showed higher activation in response to the presentation of pictures than of words across all conditions. During encoding, this region may be involved in setting up a representation of the perceptual information that comprises the picture. At recognition, it may play a role in guiding retrieval processes based on the perceptual input, i.e. the retrieval cue. Another more anterior right MTL region was found to be differentially involved in recognition of objects that had been encoded as pictures, irrespective of whether the retrieval cue provided was pictorial or verbal in nature; this region may be involved in accessing stored pictorial representations. Our results suggest that left MTL structures contribute to picture processing only during encoding. Some regions in the left MTL showed an involvement in semantic encoding that was picture specific; others showed a task-specific involvement across pictures and words. Together, our results provide evidence that the involvement of some but not all MTL regions in episodic encoding and recognition is format specific.

7.
Change blindness, the failure to detect visual changes that occur during a disruption, has increasingly been used to infer the nature of internal representations. If every change were detected, detailed representations of the world would have to be stored and accessible. However, because many changes are not detected, visual representations might not be complete, and access to them might be limited. Using change detection to infer the completeness of visual representations requires an understanding of the reasons for change blindness. This article provides empirical support for one such reason: change blindness resulting from the failure to compare retained representations of both the pre- and postchange information. Even when unaware of changes, observers still retained information about both the pre- and postchange objects on the same trial.

8.
Naming novel objects with novel count nouns changes how the objects are drawn from memory, revealing that object categorisation induces reliance on orientation-independent visual representations during longer-term remembering, but not during short-term remembering. Serial position effects integrate this finding with a more established conceptualisation of short-term and longer-term visual remembering in which the former is identified as keeping an item in mind. Adults were shown a series of four novel objects in orientations in which they would not normally be drawn from memory. When not named ("Look at this object"), the objects were drawn in the orientations in which they had been seen. When named with a novel count noun (e.g., "Look at this dax"), the final object continued to be depicted in the orientation in which it had been seen, but all other objects were depicted in an unseen but preferred (canonical) orientation, even though participants could still remember the orientations in which they had been seen. Although orientation-dependent exemplar representations appear to be more accessible than orientation-independent generic representations during short-term remembering, the reverse is the case during longer-term remembering. How the theoretical framework emerging from these observations accommodates a broader body of evidence is discussed.

9.
Eight experiments were conducted to determine whether visual mental imagery preserves left-right orientation in conditions where recognition memory apparently does not. Such a dissociation would suggest that information in memory preserves left-right orientation, but the process that matches input to such representations does not respect this distinction. In the experiments reported here, participants were asked to use one of two methods to indicate which way Abraham Lincoln faces on a US penny: Either they selected the correct profile from two sketches, or they formed a mental image and then specified the direction. If participants had just viewed pennies and were asked to visualize a particular one, they performed far better than chance; in contrast, if imagery was discouraged, they performed poorly when recognizing the profiles perceptually. However, unless actively discouraged from doing so, most participants reported spontaneously visualizing pennies prior to perceptual recognition if they had recently seen pennies.

10.
We report an extension of the procedure devised by Weinstein and Shanks (Memory & Cognition 36:1415–1428, 2008) to study false recognition and priming of pictures. Participants viewed scenes with multiple embedded objects (seen items), then studied the names of these objects and the names of other objects (read items). Finally, participants completed a combined direct (recognition) and indirect (identification) memory test that included seen items, read items, and new items. In the direct test, participants recognized pictures of seen and read items more often than new pictures. In the indirect test, participants’ speed at identifying those same pictures was improved for pictures that they had actually studied, and also for falsely recognized pictures whose names they had read. These data provide new evidence that a false-memory induction procedure can elicit memory-like representations that are difficult to distinguish from “true” memories of studied pictures.

11.
Although both the object and the observer often move in natural environments, the effect of motion on visual object recognition has not been well documented. The authors examined the effect of a reversal in the direction of rotation on both explicit and implicit memory for novel, 3-dimensional objects. Participants viewed a series of continuously rotating objects and later made either an old-new recognition judgment or a symmetric-asymmetric decision. For both tasks, memory for rotating objects was impaired when the direction of rotation was reversed at test. These results demonstrate that dynamic information can play a role in visual object recognition and suggest that object representations can encode spatiotemporal information.

12.
During a typical day, visual working memory (VWM) is recruited to temporarily maintain visual information. Although individuals often memorize external visual information provided to them, on many other occasions they memorize information they have constructed themselves. The latter aspect of memory, which we term self-initiated WM, is prevalent in everyday behavior but has largely been overlooked in the research literature. In the present study we employed a modified change detection task in which participants constructed the displays they memorized, by selecting three or four abstract shapes or real-world objects and placing them at three or four locations in a circular display of eight locations. Half of the trials included identical targets that participants could select. The results demonstrated consistent strategies across participants. To enhance memory performance, participants reported selecting abstract shapes they could verbalize, but they preferred real-world objects with distinct visual features. Furthermore, participants constructed structured memory displays, most frequently based on the Gestalt organization cue of symmetry, and to a lesser extent on cues of proximity and similarity. When identical items were selected, participants mostly placed them in close proximity, demonstrating the construction of configurations based on the interaction between several Gestalt cues. The present results are consistent with recent findings in VWM, showing that memory for visual displays based on Gestalt organization cues can benefit VWM, suggesting that individuals have access to metacognitive knowledge on the benefit of structure in VWM. More generally, this study demonstrates how individuals interact with the world by actively structuring their surroundings to enhance performance.

13.
The purpose of this series of four experiments was to examine the possible role of spontaneous imagery in memory confusions about the way in which visual information had been experienced. After viewing pictures of familiar objects, complete or incomplete in visual form, participants were asked to remember the way in which the objects had been presented. Although, as predicted, memory for the objects themselves was quite good, participants falsely remembered seeing complete versions of pictures that were actually presented as incomplete. These false reports were observed across a variety of encoding and testing conditions. The results suggest that the false reports (referred to here as completion errors) are due to internal representations based on filling-in processes in response to the encoding of incomplete visual information. As such, the results also speak to alternative explanations for the completion errors and, more broadly, to theoretical perspectives that draw on filling-in processes when accounting for object identification and object memory.

14.
Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual–spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while they remembered a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval should impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy, even on the trials in which no probe occurred. These findings support models of working memory in which visual–spatial selection mechanisms contribute to the maintenance of object representations.

15.
This study examined how spatial working memory and visual (object) working memory interact, focusing on two related questions: First, can these systems function independently from one another? Second, under what conditions do they operate together? In a dual-task paradigm, participants attempted to remember locations in a spatial working memory task and colored objects in a visual working memory task. Memory for the locations and objects was subject to independent working memory storage limits, which indicates that spatial and visual working memory can function independently from one another. However, additional experiments revealed that spatial working memory and visual working memory interact in three memory contexts: when retaining (1) shapes, (2) integrated color-shape objects, and (3) colored objects at specific locations. These results suggest that spatial working memory is needed to bind colors and shapes into integrated object representations in visual working memory. Further, this study reveals a set of conditions in which spatial and visual working memory can be isolated from one another.

16.
Although visual object recognition is primarily shape driven, colour assists the recognition of some objects. It is unclear, however, just how colour information is coded with respect to shape in long-term memory and how the availability of colour in the visual image facilitates object recognition. We examined the role of colour in the recognition of novel, 3-D objects by manipulating the congruency of object colour across the study and test phases, using an old/new shape-identification task. In experiment 1, we found that participants were faster at correctly identifying old objects on the basis of shape information when these objects were presented in their original colour, rather than in a different colour. In experiments 2 and 3, we found that participants were faster at correctly identifying old objects on the basis of shape information when these objects were presented with their original part-colour conjunctions, rather than in different or in reversed part-colour conjunctions. In experiment 4, we found that participants were quite poor at the verbal recall of part-colour conjunctions for correctly identified old objects, presented as grey-scale images at test. In experiment 5, we found that participants were significantly slower at correctly identifying old objects when object colour was incongruent across study and test, than when background colour was incongruent across study and test. The results of these experiments suggest that both shape and colour information are stored as part of the long-term representation of these novel objects. Results are discussed in terms of how colour might be coded with respect to shape in stored object representations.

17.
Kazuya Inoue & Yuji Takeda, Visual Cognition, 2013, 21(9-10): 1135-1153
To investigate the properties of object representations constructed during a visual search task, we manipulated the proportion of each task's trials within a block: in a search-frequent block, 80% of trials were search tasks and the remaining trials were memory tasks; in a memory-frequent block, this proportion was reversed. In the search task, participants searched for a toy car (Experiments 1 and 2) or a T-shaped object (Experiment 3). In the memory task, participants had to memorize objects in a scene. Memory performance was worse in the search-frequent block than in the memory-frequent block in Experiments 1 and 3, but not in Experiment 2 (token change in Experiment 1; type change in Experiments 2 and 3). Experiment 4 demonstrated that the lower performance in the search-frequent block was not due to eye-movement behaviour. The results suggest that object representations constructed during visual search differ from those constructed during memorization and are modulated by the type of target.

18.
In visual search, observers try to find known target objects among distractors in visual scenes where the location of the targets is uncertain. This review article discusses the attentional processes that are active during search and their neural basis. Four successive phases of visual search are described. During the initial preparatory phase, a representation of the current search goal is activated. Once visual input has arrived, information about the presence of target-matching features is accumulated in parallel across the visual field (guidance). This information is then used to allocate spatial attention to particular objects (selection), before representations of selected objects are activated in visual working memory (recognition). These four phases of attentional control in visual search are characterized both at the cognitive level and at the neural implementation level. It will become clear that search is a continuous process that unfolds in real time. Selective attention in visual search is described as the gradual emergence of spatially specific and temporally sustained biases for representations of task-relevant visual objects in cortical maps.

19.
Leek EC, Perception, 1998, 27(7): 803-816
How does the visual system recognise stimuli presented at different orientations? According to the multiple-views hypothesis, misoriented objects are matched to one of several orientation-specific representations of the same objects stored in long-term memory. Much of the evidence for this hypothesis comes from the observation of group mean orientation effects in recognition memory tasks showing that the time taken to identify objects increases as a function of the angular distance between the orientation of the stimulus and its nearest familiar orientation. The aim in this paper is to examine the validity of this interpretation of group mean orientation effects. In particular, it is argued that analyses based on group performance averages that appear consistent with the multiple-views hypothesis may, under certain circumstances, obscure a different theoretically relevant underlying pattern of results. This problem is examined by using hypothetical data and through the detailed analysis of the results from an experiment based on a recognition memory task used in several previous studies. Although a pattern of results that is consistent with the multiple-views hypothesis was observed in both the group mean performance and the underlying data, it is argued that the potential limitations of analyses based solely on group performance averages must be considered in future studies that use orientation effects to make inferences about the kinds of shape representations that mediate visual recognition.

20.
Communication is aided greatly when speakers and listeners take advantage of mutually shared knowledge (i.e., common ground). How such information is represented in memory is not well known. Using a neuropsychological-psycholinguistic approach to real-time language understanding, we investigated the ability to form and use common ground during conversation in memory-impaired participants with hippocampal amnesia. Analyses of amnesics' eye fixations as they interpreted their partner's utterances about a set of objects demonstrated successful use of common ground when the amnesics had immediate access to common-ground information, but dramatic failures when they did not. These findings indicate a clear role for declarative memory in maintenance of common-ground representations. Even when amnesics were successful, however, the eye movement record revealed subtle deficits in resolving potential ambiguity among competing intended referents; this finding suggests that declarative memory may be critical to more basic aspects of the on-line resolution of linguistic ambiguity.

