Similar Articles
A total of 20 similar articles were found.
1.
Although memories are more retrievable if observers’ emotional states are consistent between encoding and retrieval, it is unclear whether the consistency of emotional states increases the likelihood of successful memory retrieval, the precision of retrieved memories, or both. The present study tested visual long-term memory for everyday objects while consistent or inconsistent emotional contexts between encoding and retrieval were induced using background grey-scale images from the International Affective Picture System (IAPS). In the study phase, participants remembered colours of sequentially presented objects in a negative (Experiment 1a) or positive (Experiment 2a) context. In the test phase, participants estimated the colours of previously studied objects in either negative versus neutral (Experiment 1a) or positive versus neutral (Experiment 2a) contexts. Note that the IAPS images in the test phase were always visually different from those initially paired with the studied objects. We found that reinstating negative context and positive context at retrieval resulted in better mnemonic precision and a higher probability of successful retrieval, respectively. Critically, these effects could not be attributed to a negative or positive context at retrieval alone (Experiments 1b and 2b). Together, these findings demonstrated dissociable effects of emotion on the quantitative and qualitative aspects of visual long-term memory retrieval.
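In continuous-report tasks of this kind, the dissociation between the probability of successful retrieval and the precision of what is retrieved is typically estimated with a two-component mixture model: a von Mises distribution centred on the studied colour (whose concentration reflects precision) plus a uniform guessing component (whose weight reflects retrieval probability). The sketch below illustrates that generic analysis; it is not the authors' published code, and the function names, starting values, and bounds are illustrative assumptions.

```python
# Generic mixture-model fit for continuous colour-report errors (assumed sketch).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def neg_log_likelihood(params, errors):
    """errors: response minus target colour angle, in radians, wrapped to [-pi, pi]."""
    p_mem, kappa = params
    # Mixture of a target-centred von Mises (successful retrieval) and a
    # uniform distribution over the colour wheel (guessing).
    likelihood = p_mem * vonmises.pdf(errors, kappa) + (1 - p_mem) / (2 * np.pi)
    return -np.sum(np.log(likelihood + 1e-12))

def fit_mixture(errors):
    result = minimize(neg_log_likelihood, x0=[0.8, 5.0], args=(errors,),
                      bounds=[(0.0, 1.0), (0.01, 200.0)])
    p_mem, kappa = result.x
    return p_mem, kappa   # higher kappa = more precise memories

# Example with simulated data: 70% of trials retrieved with moderate precision, 30% guesses.
rng = np.random.default_rng(0)
errors = np.where(rng.random(500) < 0.7,
                  rng.vonmises(0.0, 8.0, 500),
                  rng.uniform(-np.pi, np.pi, 500))
print(fit_mixture(errors))
```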

2.
Attentional Modulation of Size Contrast
A test circle surrounded by smaller context circles appears larger if presented in isolation, whereas a test circle surrounded by large context circles is seen as smaller than in isolation. Two experiments are reported indicating that this phenomenon, the Ebbinghaus illusion, depends on whether subjects are attending to the context circles. Subjects first saw a reference circle and then a briefly presented (150 msec) test circle. Their task was to determine whether the test circle was larger or smaller than the reference. The test circle was surrounded by smaller context circles of one colour arrayed along a horizontal axis centred on the test, and larger context circles of a different colour arrayed along a vertical axis centred on the test. Subjects judged both the size of the test and the colours of either the small or large context circles. Perceived test size changed systematically, depending on which context circles were task-relevant.

3.
How does encoding context affect memory? Participants studied visually presented words viewed concurrently with a rich (intact face) or weak (scrambled face) image as context and subsequently made "Remember", "Know", or "New" judgements to words presented alone. In Experiment 1a, younger, but not older, adults showed higher recollection accuracy for words from rich- than from weak-context encoding trials. The age-related deficit in recollection occurred, in Experiment 1b, even when encoding and retrieval time was doubled for older adults, suggesting that insufficient processing time cannot account for this age-related deficit. In Experiment 1c, dividing attention in younger adults during encoding reduced overall memory, though the recollection boost from rich encoding contexts remained, suggesting that reduced attentional resources cannot explain this age-related deficit. Experiment 2 showed that an own-age bias to face images as context could not explain the age-related differences either. Results suggest that age deficits in recollection stem from a lack of spontaneous binding, or elaboration, of context to target information during encoding.

4.
5.
6.
Three experiments investigated context-dependent effects of background colour in free recall with groups of items. Undergraduates (N=113) intentionally studied 24 words presented in blocks of 6 on a computer screen with two different background colours. The two background colours were changed screen-by-screen randomly (random condition) or alternately (alternation condition) during the study period. A 30-second filled retention interval was imposed before an oral free-recall test. A signal for free recall was presented throughout the test on one of the colour background screens presented at study. Recalled words were classified as same- or different-context words according to whether the background colours at study and test were the same or different. The random condition produced significant context-dependent effects, whereas the alternation condition showed no context-dependent effects, regardless of whether the words were presented once or twice. Furthermore, the words presented on the same screen were clustered in recall, whereas the words presented against the same background colour but on different screens were not clustered. The present results imply: (1) background colours can cue spatially massed words; (2) background colours act as temporally local context; and (3) predictability of the next background colour modulates the context-dependent effect.

8.
9.
An object's context may serve as a source of information for recognition when the object's image is degraded. The current study aimed to quantify this source of information. Stimuli were photographs of objects divided into quantized blocks. Participants decreased block size (increasing resolution) until identification. Critical resolution was compared across three conditions: (1) when the picture of the target object was shown in isolation, (2) in the object's contextual setting where that context was unfamiliar to the participant, and (3) where that context was familiar to the participant. A second experiment assessed the role of object familiarity without context. Results showed a profound effect of context: Participants identified objects in familiar contexts with minimal resolution. Unfamiliar contexts required higher-resolution images than familiar contexts, but still far lower resolution than objects shown without context. Experiment 2 found a much smaller effect of familiarity without context, suggesting that recognition in familiar contexts is primarily based on object-location memory.

10.
Images that are presented with targets of an unrelated detection task are better remembered than images that are presented with distractors (the attentional boost effect). The likelihood that any of three mechanisms, attentional cuing, prediction-based reinforcement learning, and perceptual grouping, underlies this effect depends in part on how it is modulated by the relative timing of the target and image. Three experiments demonstrated that targets and images must overlap in time for the enhancement to occur; targets that appear 100 ms before or 100 ms after the image without temporally overlapping with it do not enhance memory of the image. However, targets and images need not be synchronized. A fourth experiment showed that temporal overlap of the image and target is not sufficient, as detecting targets did not enhance the processing of task-irrelevant images. These experiments challenge several simple accounts of the attentional boost effect based on attentional cuing, reinforcement learning, and perceptual grouping.

11.
In two experiments, memory was tested for changes in viewpoints in naturalistic scenes. In the key study condition, participants viewed two images of the same scene from viewpoints 40° apart. There were two other study conditions: The two study images were identical or were of different scenes. A test image followed immediately, and participants judged whether it was identical to either of the study images. The scene in the test image was always the same as in a study image and was at least 20° from any study image on different trials. Two models were tested: (1) views stored and retrieved independently and (2) views combined at retrieval. The crucial test of these hypotheses involved a comparison (in the key study condition) of the interpolation condition (the test image was presented between the two study images and 20° from both) and the extrapolation condition (it was 20° from one study image and 60° from the other). Performance in the interpolation condition was far worse than what was predicted by the first model, whereas the second model fit the data quite well. The latter model is parsimonious in that it integrates previous experiences without requiring the integration of the views in memory. We review some of this model’s broader implications.

12.
Previous studies have demonstrated that humans have a remarkable capacity to memorise a large number of scenes. The research on memorability has shown that memory performance can be predicted by the content of an image. We explored how remembering an image is affected by the image properties within the context of the reference set, including the extent to which it is different from its neighbours (image-space sparseness) and whether it belongs to the same category as its neighbours (uniformity). We used a reference set of 2,048 scenes (64 categories), evaluated pairwise scene similarity using deep features from a pretrained convolutional neural network (CNN), and calculated the image-space sparseness and uniformity for each image. We ran three memory experiments, varying the memory workload with experiment length and colour/greyscale presentation. We measured the sensitivity and criterion value changes as a function of image-space sparseness and uniformity. Across all three experiments, we found separate effects of (1) sparseness on memory sensitivity and (2) uniformity on the recognition criterion. People better remembered (and correctly rejected) images that were more separated from others. People tended to make more false alarms and fewer miss errors for images from categorically uniform portions of the image-space. We propose that both image-space properties affect human decisions when recognising images. Additionally, we found that colour presentation did not yield better memory performance than greyscale images.
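As a rough illustration of the two image-space measures described above, the sketch below assumes a feature matrix already extracted from a pretrained CNN and defines sparseness and uniformity over each image's k nearest neighbours in that feature space. The neighbourhood size k, the use of cosine similarity, and the exact formulas are assumptions made for illustration; they are not the paper's published definitions.

```python
# Assumed sketch of image-space sparseness and uniformity from CNN features.
import numpy as np

def image_space_measures(features, categories, k=20):
    """features: (n_images, n_dims) CNN activations; categories: (n_images,) category labels."""
    features = np.asarray(features, dtype=float)
    categories = np.asarray(categories)

    # Cosine similarity between every pair of images.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    similarity = normed @ normed.T
    np.fill_diagonal(similarity, -np.inf)          # ignore self-similarity

    # k nearest neighbours of each image in feature space.
    neighbours = np.argsort(-similarity, axis=1)[:, :k]

    # Sparseness: how dissimilar an image is from its nearest neighbours
    # (1 - mean similarity), i.e. how isolated it is in the image-space.
    knn_sim = np.take_along_axis(similarity, neighbours, axis=1)
    sparseness = 1.0 - knn_sim.mean(axis=1)

    # Uniformity: proportion of nearest neighbours sharing the image's category.
    same_category = categories[neighbours] == categories[:, None]
    uniformity = same_category.mean(axis=1)
    return sparseness, uniformity
```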

13.
In a short-term recognition memory experiment with words, subjects: (1) subvocally rehearsed the words, (2) generated a separate visual image for each word, (3) generated an interactive scene with such images, or (4) composed a covert sentence using the words in the memory set. Contrary to Seamon's (1972) results in a similar study, a serial memory search was found in all conditions, instead of the simultaneous scan which was expected when items were combined in interactive images. In a second study with pictures as stimuli, subjects who generated imaginal interactions between separate pictures, viewed interacting pictures, or viewed separate pictures also showed a serial search, i.e., longer RTs were obtained when more stimuli were held in memory. Since interactive imagery facilitated performance in an unexpected paired-associate task with memory set stimuli, one can argue that subjects actually processed or generated such interactions. It was suggested that memory search might not be simultaneous in tasks where the test stimulus constitutes only part of a memory image.

14.
An item that stands out (is isolated) from its context is better remembered than an item consistent with the context. This isolation effect cannot be accounted for by increased attention, because it occurs when the isolated item is presented as the first item, or by impoverished memory of nonisolated items, because the isolated item is better remembered than a control list consisting of equally different items. The isolation effect is seldom experimentally or theoretically related to the primacy or the recency effects—that is, the improved performance on the first few and last items, respectively, on the serial position curve. The primacy effect cannot easily be accounted for by rehearsal in short-term memory because it occurs when rehearsal is eliminated. This article suggests that the primacy, the recency, and the isolation effects can be accounted for by experience-dependent synaptic plasticity in neural cells. Neurological empirical data suggest that the threshold that determines whether cells will show long-term potentiation (LTP) or long-term depression (LTD) varies as a function of recent postsynaptic activity and that synaptic plasticity is bounded. By implementing an adaptive LTP-LTD threshold in an artificial neural network, the various aspects of the isolation, the primacy, and the recency effects are accounted for, whereas none of these phenomena are accounted for if the threshold is constant. This theory suggests a possible link between the cognitive and the neurological levels.
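The adaptive LTP/LTD threshold described above resembles a BCM-style sliding-threshold rule, in which the direction of the weight change depends on whether postsynaptic activity exceeds a threshold that itself tracks recent activity, and weights are kept within bounds. The minimal sketch below illustrates that idea; the learning rate, threshold time constant, and weight bounds are illustrative assumptions, not the parameters of the cited model.

```python
# Minimal sketch of a sliding LTP/LTD threshold with bounded weights (assumed parameters).
import numpy as np

def update_weights(w, x, y, theta, lr=0.01, theta_decay=0.1,
                   w_min=0.0, w_max=1.0):
    """
    w:     synaptic weights (n_pre,)
    x:     presynaptic activity (n_pre,)
    y:     postsynaptic activity (scalar)
    theta: current LTP/LTD threshold (scalar)
    """
    # LTP when postsynaptic activity is above the threshold, LTD when below.
    dw = lr * x * y * (y - theta)
    w = np.clip(w + dw, w_min, w_max)          # bounded synaptic plasticity

    # The threshold slides towards recent postsynaptic activity (here y**2),
    # so sustained high activity raises the bar for further potentiation.
    theta = theta + theta_decay * (y**2 - theta)
    return w, theta
```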

15.
Despite widespread belief that memory is enhanced by emotion, evidence also suggests that emotion can impair memory. Here we test predictions inspired by object-based binding theory, which states that memory enhancement or impairment depends on the nature of the information to be retrieved. We investigated emotional memory in the context of source retrieval, using images of scenes that were negative, neutral or positive in valence. At study each scene was paired with a colour and during retrieval participants reported the source colour for recognised scenes. Critically, we isolated effects of valence by equating stimulus arousal across conditions. In Experiment 1 colour borders surrounded scenes at study: memory impairment was found for both negative and positive scenes. Experiment 2 used colours superimposed over scenes at study: valence affected source retrieval, with memory impairment for negative scenes only. These findings challenge current theories of emotional memory by showing that emotion can impair memory for both intrinsic and extrinsic source information, even when arousal is equated between emotional and neutral stimuli, and by dissociating the effects of positive and negative emotion on episodic memory retrieval.

16.
17.
18.
In the attentional boost effect, participants encode images into memory as they perform an unrelated target-detection task. Later memory is better for images that coincided with a target rather than a distractor. This advantage could reflect a broad processing enhancement triggered by target detection, but it could also reflect inhibitory processes triggered by distractor rejection. To test these possibilities, in four experiments we acquired a baseline measure of image memory when neither a target nor a distractor was presented. Participants memorized faces presented in a continuous series (500- or 100-ms duration). At the same time, participants monitored a stream of squares. Some faces appeared on their own, and others coincided with squares in either a target or a nontarget color. Because the processes associated with both target detection and distractor rejection were minimized when faces appeared on their own, this condition served as a baseline measure of face encoding. The data showed that long-term memory for faces coinciding with a target square was enhanced relative to faces in both the baseline and distractor conditions. We concluded that detecting a behaviorally relevant event boosts memory for concurrently presented images in dual-task situations.

19.
The application of artificial intelligence to biomedical image processing is gaining importance in medical science. Biomedical images pass through several steps before a disease can be diagnosed: the images must be acquired, preprocessed, and stored in memory, which demands substantial memory and processing time. Among the preprocessing steps, edge detection is one of the most important: it filters out unwanted detail while preserving the edges that describe the boundaries of the imaged organ, and these boundary details are essential for disease detection. Power is another key constraint, as biomedical signal-processing instruments should operate at low power and high speed. To classify the images into different levels or stages, convolutional neural networks are used. A dedicated hardware architecture for edge detection reduces the computational time for preprocessing and can be integrated into the acquisition device itself. This paper presents a low-power architecture for edge detection of biomedical images; the edge-detection output is passed to a system that diagnoses disease through image classification with a convolutional neural network. The Sobel and Prewitt edge-detection algorithms are implemented in 180 nm technology; the VLSI digital IC design of the architecture is presented, and the algorithms are co-simulated using MATLAB and ModelSim. The architecture is first simulated using CMOS logic, and a new domino-logic implementation is then presented for low power consumption.
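For reference, the Sobel and Prewitt operators mentioned above can be expressed in a few lines of software and used as a golden model when verifying a hardware implementation against a known-good result. The sketch below is a plain NumPy/SciPy illustration under that assumption, not the paper's VLSI architecture; the function names and the thresholding step are generic.

```python
# Software reference model for Sobel/Prewitt gradient-magnitude edge detection.
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)

def edge_magnitude(image, kernel_x):
    """Gradient magnitude of a greyscale image using a 3x3 kernel and its transpose."""
    gx = convolve(image.astype(float), kernel_x)      # horizontal gradient
    gy = convolve(image.astype(float), kernel_x.T)    # vertical gradient (transposed kernel)
    return np.hypot(gx, gy)

# Usage: edges = edge_magnitude(grey_image, SOBEL_X)  # or PREWITT_X
# Thresholding the magnitude gives the binary edge map fed to the classifier.
```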

20.
The context effect in implicit memory is the finding that presentation of words in meaningful context reduces or eliminates repetition priming compared to words presented in isolation. Virtually all of the research on the context effect has been conducted in the visual modality but preliminary results raise the question of whether context effects are less likely in auditory priming. Context effects in the auditory modality were systematically examined in five experiments using the auditory implicit tests of word-fragment and word-stem completion. The first three experiments revealed the classical context effect in auditory priming: Words heard in isolation produced substantial priming, whereas there was little priming for the words heard in meaningful passages. Experiments 4 and 5 revealed that a meaningful context is not required for the context effect to be obtained: Words presented in an unrelated audio stream produced less priming than words presented individually and no more priming than words presented in meaningful passages. Although context effects are often explained in terms of the transfer-appropriate processing (TAP) framework, the present results are better explained by Masson and MacLeod's (2000) reduced-individuation hypothesis.
