Similar articles
20 similar articles found (search time: 15 ms)
1.
Three experiments investigated the effects of two variables (selective attention during encoding and the delay between study and test) on implicit (picture-fragment completion and object naming) and explicit (free recall and recognition) memory tests. Experiments 1 and 2 consistently indicated that (a) at all delays (immediate to 1 month), the picture-fragment identification threshold was lower for attended than for unattended pictures; (b) attended pictures were recalled and recognized better than unattended pictures; and (c) attention and delay interacted in both types of memory test. For implicit memory, performance decreased as delay increased for both attended and unattended pictures, but priming was more pronounced and lasted longer for attended pictures; it was still present after a 1-month delay. For explicit memory, performance decreased with delay for attended pictures, but for unattended pictures performance remained constant across delays. Using a perceptual object-naming task, Experiment 3 showed reliable implicit and explicit memory for attended but not for unattended pictures. This study indicates that picture repetition priming requires attention at the time of study and that neither delay nor attention dissociates performance on explicit and implicit memory tests; both types of memory require attention, but explicit memory does so to a larger degree.

2.
The effects of stimulus type and representational relation on inattentional blindness   Cited: 1 (self-citations: 0, other citations: 1)
This study examined how stimulus type and the representational relation between overlapping picture and word streams affect the capture of unattended stimuli, in order to investigate the influence of stimulus type and representational relation on inattentional blindness. Twenty middle-school students took part in baseline tests for words and pictures, and 52 middle-school students participated in the inattentional blindness experiment. The results showed that: (1) when words and pictures were presented overlapping, participants showed significant inattentional blindness relative to baseline, regardless of whether the attended stimulus was the word or the picture and regardless of whether the representations of word and picture were congruent; (2) when the word and picture were congruent in meaning, the unattended stimulus was detected more easily if the attended stimulus was the picture and the unattended stimulus was the word; (3) unattended stimuli that were congruent in meaning with attended stimuli captured observers' attention more easily.

3.
We investigated whether circadian arousal affects perceptual priming as a function of whether stimuli were attended or ignored during learning. We tested 160 participants on-peak and off-peak with regard to their circadian arousal. In the study phase, they were presented with two superimposed pictures in different colours and had to name the pictures of one colour while ignoring the others. In the test phase, they were presented with the same pictures, randomly intermixed with new ones; each picture was presented in black in a fragment-completion task. Priming was measured as the difference between the fragmentation levels at which pictures from the study phase and new pictures were named. Priming was stronger for attended than for ignored pictures. Time of day affected priming only for ignored pictures, with stronger priming effects off-peak than on-peak. Thus, circadian arousal seems to favour the encoding of unattended material specifically at off-peak times.

4.
Integrating pictorial information across eye movements   Cited: 5 (self-citations: 0, other citations: 5)
Six experiments are reported dealing with the types of information integrated across eye movements in picture perception. A line drawing of an object was presented in peripheral vision, and subjects made an eye movement to it. During the saccade, the initially presented picture was replaced by another picture that the subject was instructed to name as quickly as possible. The relation between the stimulus on the first fixation and the stimulus on the second fixation was varied. Across the six experiments, there was about 100-130 ms facilitation when the pictures were identical compared with a control condition in which only the target location was specified on the first fixation. This finding clearly implies that information about the first picture facilitated naming the second picture. Changing the size of the picture from one fixation to the next had little effect on naming time. This result is consistent with work on reading and low-level visual processes in indicating that pictorial information is not integrated in a point-by-point manner in an integrated visual buffer. Moreover, only about 50 ms of the facilitation for identical pictures could be attributed to the pictures having the same name. When the pictures represented the same concept (e.g., two different pictures of a horse), there was a 90-ms facilitation effect that could have been the result of either the visual or conceptual similarity of the pictures. However, when the pictures had different names, only visual similarity produced facilitation. Moreover, when the pictures had different names, there appeared to be inhibition from the competing names. The results of all six experiments are consistent with a model in which the activation of both the visual features and the name of the picture seen on the first fixation survive the saccade and combine with the information extracted on the second fixation to produce identification and naming of the second picture.

5.
Two experiments are reported in which participants were instructed to attend to one of two overlapping figures and report how distinctive it was (Experiment 1), or how angular it was or what it resembled (Experiment 2). Tests of recognition memory indicated that recognition of the unattended figures was below chance, consistent with the conclusion that an implicit memory of the unattended figures and an "action tag" to not respond to them combine at recognition to suppress positive identification. Furthermore, participants who scored high on an index of working memory ability showed worse memory for the unattended shapes, suggesting that the ability to control attention not only enhances memory for attended items but also leads to greater suppression of unattended distractors.

6.
This study explored the fate of irrelevant, overtly presented stimuli that were temporally aligned with an attended target in a separate task. Seitz and Watanabe (2003) demonstrated that if an irrelevant motion stimulus was implicit (i.e., subthreshold), later facilitation for the same motion direction was observed when that implicit motion had been temporally aligned with the presence of an attended target. Later research, however, demonstrated that if the motion stimulus aligned with the attended target was explicit (i.e., suprathreshold), later inhibition was observed instead (Tsushima, Seitz, & Watanabe, 2008). The current study expands on this work by using more salient stimuli (words and pictures) in an inattentional blindness paradigm, and suggests that when attention is depleted, recognition of target-aligned task-irrelevant items is impaired in a subsequent recognition task. Participants were required to respond to immediate picture or word repetitions in a stream of simultaneously presented line drawings and written words, and were later given a surprise recognition test for the words or the pictures. When word recognition was analyzed after attention had been directed to the pictures, words that had appeared simultaneously with a picture repetition in the repetition-detection task were recognized at levels significantly below chance. The same inhibition was mirrored for picture recognition after participants had attended to the words in the repetition-detection task. These data suggest an inhibitory mechanism, exhibited in later recognition tests, for salient information that was previously unattended and had been presented simultaneously with an attended target in a different task.

7.
The effect of surrounding context on the recognition of objects from briefly presented pictures was investigated. Forty-eight undergraduates saw 100 msec displays of either line drawings containing several objects embedded in context or drawings of object arrays without background context. Following each exposure, they were required to select, from among four objects, the one that had been contained in the picture. Presentation of objects in context aided recognition only when the incorrect response alternatives were inconsistent with the picture context. The results suggest that context contributes to the construction of a general characterization of the picture, which provides expectancies regarding the identity of specific objects.

8.
The current experiment addressed the question of whether enhanced memory for emotional pictures is due to increased attention to affective stimuli. Participants viewed pairs of pictures (emotional-neutral or neutral-neutral) whilst their eye movements were recorded; they had to decide which picture of each pair they preferred. There was increased attention to positive pictures and decreased attention to negative images during picture viewing. Despite this, when a recognition test was given one week later, memory enhancements were found for negative pictures only. Moreover, although there was a general correlation between total inspection time and memory performance, this relationship was reliable only for neutral pictures, not for emotional images. The results suggest that memory advantages for emotional pictures can occur without increased attention to such images.

9.
Torsional eye movements are triggered by head tilt and by a rotating visual field. We examined whether attention to a misoriented form could also induce torsion. Thirty-six observers viewed an adapting field containing a bright vertical line, and then viewed a display composed of two misoriented words (one rotated clockwise, the other counterclockwise, by 15°, 30°, or 45°). The subjects were instructed to attend to one of the words. The subjects' adjustments of a reference line to match the tilt of the afterimage showed that attention to a misoriented word produces torsional eye movement (verified with direct measurements on 4 additional subjects). This eye movement reduces the retinal misorientation of the word by about 1°. The results of this study reinforce the linkage between selective attention and eye movements and may provide a useful tool for dissecting different forms of "mental rotation" and other adjustments in internal reference frames. Apparent-motion displays confirming that the eye rotated in the head may be downloaded from www.psychonomic.org/archive.

10.
Gorillas in our midst: sustained inattentional blindness for dynamic events   Cited: 16 (self-citations: 0, other citations: 16)
Simons DJ, Chabris CF. Perception, 1999, 28(9): 1059-1074
With each eye fixation, we experience a richly detailed visual world. Yet recent work on visual integration and change detection reveals that we are surprisingly unaware of the details of our environment from one view to the next: we often do not detect large changes to objects and scenes ('change blindness'). Furthermore, without attention, we may not even perceive objects ('inattentional blindness'). Taken together, these findings suggest that we perceive and remember only those objects and details that receive focused attention. In this paper, we briefly review and discuss evidence for these cognitive forms of 'blindness'. We then present a new study that builds on classic studies of divided visual attention to examine inattentional blindness for complex objects and events in dynamic scenes. Our results suggest that the likelihood of noticing an unexpected object depends on the similarity of that object to other objects in the display and on the difficulty of the primary monitoring task. Interestingly, spatial proximity of the critical unattended object to attended locations does not appear to affect detection, suggesting that observers attend to objects and events, not spatial positions. We discuss the implications of these results for visual representations and awareness of our visual environment.

11.
Two questions were addressed by these experiments. First, do unattended words influence attended words only when they appear in isolation and may thereby attract attention, or are they influential even when embedded amongst ineffective material? Second, can the influence of an unattended display be increased by increasing the number of potentially effective words? By having observers give category names to attended words while masked unattended words appeared in a column to the right of fixation, Experiment 1 found that a single unattended word was effective even when embedded, but that the effect did not increase with the number of words at a 50 msec display duration. There was some evidence of a linear increase in the size of the effect with a 200 msec display, but evidence from Experiment 2 suggests that subjects may have been aware of the unattended words at this exposure duration. The results are discussed in relation to a model of eye-fixation control during reading which postulates that unattended words gain lexical recognition when they are semantically related to the attended activity. This lexical recognition may then serve to mark interesting locations in the text and attract future eye fixations.

12.
In two experiments, we examined the relation between gaze control and recollective experience in the context of face recognition. In Experiment 1, participants studied a series of faces while their eye movements were precluded during study, during test, or both. Subsequently, they made remember/know judgements for each recognized test face. The preclusion of eye movements impaired explicit recollection without affecting familiarity-based recognition. In Experiment 2, participants examined unfamiliar faces under two study conditions (similarity vs. difference judgements) while their eye movements were recorded. Similarity and difference judgements produced opposite effects on remember/know responses, with no systematic effects on eye movements. However, face recollection was related to eye movements: remember responses were associated with more frequent refixations than know responses. These findings suggest that saccadic eye movements mediate the nature of recollective experience, and that explicit recollection reflects greater consistency between study and test fixations than familiarity-based face recognition.

13.
Auditory text presentation improves learning with pictures and texts. With sequential text-picture presentation, cognitive models of multimedia learning explain this modality effect in terms of greater visuo-spatial working memory load with visual as compared to auditory texts: visual texts are assumed to demand the same working memory subsystem as pictures, while auditory texts make use of an additional cognitive resource. We propose two alternative assumptions that relate to more basic processes. First, acoustic-sensory information causes a retention advantage for auditory over visual texts that occurs whether or not a picture is presented. Second, eye movements during reading hamper visuo-spatial rehearsal. Two experiments applying elementary procedures provide initial evidence for these assumptions. Experiment 1 demonstrates that, regarding text recall, the auditory advantage is independent of visuo-spatial working memory load. Experiment 2 reveals worse matrix recognition performance after reading text requiring eye movements than after listening or reading without eye movements. Copyright © 2008 John Wiley & Sons, Ltd.

14.
Observers tend to remember seeing a greater expanse of a scene than was shown (boundary extension [BE]). Is undivided visual attention necessary for BE? In Experiment 1, 108 observers viewed photographs with superimposed numerals (2s and 5s). Each display appeared for 750 msec, followed by a masked interval and a test picture (same, closer up, or wider angled). Test pictures were rated as the same, closer, or wider angled on a 5-point scale. Visual attention was manipulated with a search task: the observers reported the number of 5s (zero, one, or two). The observers performed search only, picture rating only, or both tasks (giving search priority). Search accuracy was unaffected by condition. BE occurred in both picture-rating conditions but was greater with divided attention. The results were replicated using incidental BE tests (Experiments 2 and 3). We propose that anticipatory representation of layout occurs automatically during scene perception, with focal attention serving to constrain the boundary error.

15.
Four conditions were used to investigate developmental trends in the ability to establish and use a color set to direct the selective processing of pictures. In three conditions, 6-year-old, 9-year-old, and adult subjects viewed a series of pairs of pictures, with one red and one black line drawing in each pair. Subjects were asked to look either at the red pictures only, the black pictures only, or both pictures. In a fourth condition, subjects viewed a series of singly presented red and black pictures. Pictures of both colors were included in a subsequent recognition memory test. At all ages recognition memory was comparable for pictures of each color in the both and single conditions but was higher for pictures of the specified color in the selective red and selective black conditions. There was no evidence at any age that memory for pictures of the specified color was decreased by the presence of the second picture. These results, showing roughly comparable selectivity at all ages, were discussed in relation to findings of developmental trends in selective attention on more "traditional" central-incidental learning tasks.

16.
Long-term recognition memory for some pictures is consistently better than for others (Isola, Xiao, Parikh, Torralba, & Oliva, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 36(7), 1469-1482, 2014). Here, we investigated whether pictures found to be memorable in a long-term memory test are also perceived more easily when presented in an ultra-rapid serial visual presentation (RSVP) sequence. Participants viewed six pictures they had never seen before, presented for 13 to 360 ms per picture in an RSVP sequence. In half the trials, one of the pictures was a memorable or a nonmemorable picture, and perception of this picture was probed by a visual recognition test at the end of the sequence. Recognition of pictures from the memorable set was higher than of those from the nonmemorable set, and this difference increased with presentation duration. Nonmemorable picture recognition was low initially, did not increase until 120 ms, and never caught up with memorable picture recognition performance. Thus, the long-term memorability of an image is associated with its initial perceptibility: a picture that is hard to grasp quickly is hard to remember later.

17.
Comprehension and memory for pictures   Cited: 1 (self-citations: 0, other citations: 1)
The thesis advanced is that people remember nonsensical pictures much better if they comprehend what they are about. Two experiments supported this thesis. In the first, nonsensical "droodles" were studied by subjects with or without an accompanying verbal interpretation of the pictures. Free recall was much better for subjects receiving the interpretation during study. Also, a later recognition test showed that subjects receiving the interpretation rated as more similar to the original picture a distractor which was close to the prototype of the interpreted category. In Experiment II, subjects studied pairs of nonsensical pictures, with or without a linking interpretation provided. Subjects who heard a phrase identifying and interrelating the pictures of a pair showed greater associative recall and matching than subjects who received no interpretation. The results suggest that memory is aided whenever contextual cues arouse appropriate schemata into which the material to be learned can be fitted.

18.
Subjects were instructed to read and comprehend a target (attended) passage while eye movements were recorded. A second (unattended) passage was also present, with attended and unattended passages occupying alternating lines of text. Subsequent multiple-choice questions showed acquisition of semantic information from attended and unattended text. However, a detailed examination of eye-fixation records showed that readers occasionally fixated unattended text, indicating the presence of shifts of visual attention to unattended text. When fixations of unattended text were excluded, there was no longer any indication that readers obtained useful semantic information from unattended text.

19.
Maintenance of stable central eye fixation is crucial for a variety of behavioral, electrophysiological, and neuroimaging experiments. Naive observers in these experiments are not typically accustomed to fixating, which requires the use of cumbersome and costly eyetracking or produces confounds in the results. We devised a flicker display that produced an easily detectable visual phenomenon whenever the eyes moved. A few minutes of training using this display dramatically improved the accuracy of eye fixation while observers performed a demanding spatial attention cuing task. The same amount of training using control displays did not produce significant fixation improvements, and some observers consistently made eye movements to the peripheral attention cue, contaminating the cuing effect. Our results indicate that (1) eye fixation can be rapidly improved in naive observers by providing real-time feedback about eye movements, and (2) our simple flicker technique provides an easy and effective method for providing this feedback. Correspondence: S. Suzuki, satoru@northwestern.edu

20.
Changes between alternating visual displays are difficult to detect when the successive presentations of the displays are separated by a brief temporal interval. To assess whether unattended changes attract attention, observers searched for the location of a change involving either a large or a small number of features, in pairs of displays consisting of 4, 7, 10, 13, or 16 letters (Experiment 1) or digits (Experiments 2 and 3). Each display in a pair of displays was presented for 200 ms, and either a blank screen (Experiments 1 and 2) or a screen of equal luminance to the letters and digits (Experiment 3) was presented for 80 ms between the alternating displays. In all experiments, the search function for locating the larger change was shallower than the search function for locating the smaller change. These results indicate that unattended changes play a functional role in guiding focal attention.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号