Similar Articles
20 similar articles found (search time: 46 ms)
1.
Previous studies have shown that the efficiency of visual search does not improve when participants search through the same unchanging display for hundreds of trials (repeated search), even though the participants have a clear memory of the search display. In this article, we ask two important questions. First, why do participants not use memory to help search the repeated display? Second, can context be introduced so that participants are able to guide their attention to the relevant repeated items? Experiments 1-4 show that participants choose not to use a memory strategy because, under these conditions, repeated memory search is actually less efficient than repeated visual search, even though the latter task is in itself relatively inefficient. However, when the visual search task is given context, so that only a subset of the items are ever pertinent, participants can learn to restrict their attention to the relevant stimuli (Experiments 5 and 6).

2.
In visual search, observers make decisions about the presence or absence of a target based on their perception of the target during search. The present study investigated whether decisions can be based on observers' expectation rather than perception of a target. In Experiment 1, participants were allowed to make target-present responses by clicking on the target or, if the target was not perceived, a target-present button. Participants used the target-present button option more frequently in difficult search trials and when target prevalence was high. Experiments 2 and 3 employed a difficult search task that encouraged the use of prevalence-based decisions. Target presence was reported faster when target prevalence was high, indicating that decisions were, in part, cognitive, and not strictly perceptual. A similar pattern of responses was observed even when no targets appeared in the search (Experiment 3). The implication of these prevalence-based decisions for visual search models is discussed.

3.
Kazuya Inoue & Yuji Takeda, Visual Cognition, 2013, 21(9-10), 1135-1153
To investigate properties of object representations constructed during a visual search task, we manipulated the proportion of task types within a block: in a search-frequent block, 80% of trials were search tasks and the remaining trials presented a memory task; in a memory-frequent block, this proportion was reversed. In the search task, participants searched for a toy car (Experiments 1 and 2) or a T-shaped object (Experiment 3). In the memory task, participants had to memorize objects in a scene. Memory performance was worse in the search-frequent block than in the memory-frequent block in Experiments 1 and 3, but not in Experiment 2 (token change in Experiment 1; type change in Experiments 2 and 3). Experiment 4 demonstrated that the lower performance in the search-frequent block was not due to eye-movement behaviour. These results suggest that object representations constructed during visual search differ from those constructed during memorization and are modulated by the type of target.

4.
A brief glimpse of a scene can guide eye movements but it remains unclear how prior target knowledge influences early scene processing. Using the 'flash-preview moving window' (FPMW) paradigm to restrict peripheral vision during search, we manipulated whether target identity was presented before or after previews. Windowed search was more efficient following 250 ms scene previews, and knowing target identity beforehand further improved how search was initiated and executed. However, in Experiment 2 when targets were removed from scene previews, only the initiation of search continued to be modulated by prior activation of target knowledge. Experiment 3 showed that search benefits from scene previews are maintained even when repeatedly searching through the same type of scene for the same type of target. Experiment 4 replicated Experiment 3 whilst also controlling for differences in integration times. We discuss the flexibility of the FPMW paradigm to measure how the first glimpse affects search.

5.
Visual search (e.g., finding a specific object in an array of other objects) is performed most effectively when people are able to ignore distracting nontargets. In repeated search, however, incidental learning of object identities may facilitate performance. In three experiments, with over 1,100 participants, we examined the extent to which search could be facilitated by object memory and by memory for spatial layouts. Participants searched for new targets (real-world, nameable objects) embedded among repeated distractors. To make the task more challenging, some participants performed search for multiple targets, increasing demands on visual working memory (WM). Following search, memory for search distractors was assessed using a surprise two-alternative forced choice recognition memory test with semantically matched foils. Search performance was facilitated by distractor object learning and by spatial memory; it was most robust when object identity was consistently tied to spatial locations and weakest (or absent) when object identities were inconsistent across trials. Incidental memory for distractors was better among participants who searched under high WM load, relative to low WM load. These results were observed when visual search included exhaustive-search trials (Experiment 1) or when all trials were self-terminating (Experiment 2). In Experiment 3, stimulus exposure was equated across WM load groups by presenting objects in a single-object stream; recognition accuracy was similar to that in Experiments 1 and 2. Together, the results suggest that people incidentally generate memory for nontarget objects encountered during search and that such memory can facilitate search performance.

6.
How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Experiment 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.

7.
A 90° rotation of a display can turn a relatively easy visual search into a more difficult one. A series of experiments examined the possible causes of this effect, including differences in overall item shape and response mapping (Experiment 1), the interpretation of scene lighting (Experiment 2), the axis of internal symmetry of the search items (Experiment 3), and the axes of interitem symmetry between target and distractor items (Experiment 4). Only the elimination of differences in interitem mirror symmetry resulted in equal search efficiency in the upright and rotated displays. This finding is strong support for the view that visual search is guided by an analysis that considers interitem relations.

8.
Research on contextual cueing has demonstrated that with simple arrays of letters and shapes, search for a target increases in efficiency as associations between a search target and its surrounding visual context are learned. We investigated whether the visual context afforded by repeated exposure to real-world scenes can also guide attention when the relationship between the scene and a target position is arbitrary. Observers searched for and identified a target letter embedded in photographs of real-world scenes. Although search time within novel scenes was consistent across trials, search time within repeated scenes decreased across repetitions. Unlike previous demonstrations of contextual cueing, however, memory for scene-target covariation was explicit. In subsequent memory tests, observers recognized repeated contexts more often than those that were presented once and displayed superior recall of target position within the repeated scenes. In addition, repetition of inverted scenes, which made the scenes more difficult to identify, produced a markedly reduced rate of learning, suggesting that semantic information concerning object and scene identity is used to guide attention.

9.
The current study investigated the size of the region around the current point of gaze from which viewers can take in information when searching for objects in real-world scenes. Visual span size was estimated using the gaze-contingent moving window paradigm. Experiment 1 featured window radii measuring 1, 3, 4, 4.7, 5.4, and 6.1°. Experiment 2 featured six window radii measuring between 5 and 10°. Each scene occupied a 24.8 × 18.6° field of view. Inside the moving window, the scene was presented in high resolution. Outside the window, the scene image was low-pass filtered to impede the parsing of the scene into constituent objects. Visual span was defined as the window size at which object search times became indistinguishable from search times in the no-window control condition; this occurred with windows measuring 8° and larger. Notably, as long as central vision was fully available (window radii ≥ 5°), the distance traversed by the eyes through the scene to the search target was comparable to baseline performance. However, to move their eyes to the target, viewers made shorter saccades, requiring more fixations to cover the same image space, and thus more time. Moreover, a gaze-data based decomposition of search time revealed disruptions in specific subprocesses of search. In addition, nonlinear mixed models analyses demonstrated reliable individual differences in visual span size and parameters of the search time function.

10.
Recent research has shown that, in visual search, participants can miss 30-40% of targets when they appear only rarely (i.e., on 1-2% of trials). Low target prevalence alters the behavior of the searcher. It can lead participants to quit their search prematurely (Wolfe et al., 2005), to shift their decision criteria (Wolfe et al., 2007), and/or to make motor or response errors (Fleck & Mitroff, 2007). In this paper we examine whether the low-prevalence (LP) effect can be ameliorated if we split the search set in two, spreading the task out over space and/or time. Observers searched for the letter "T" among "L"s. In Experiment 1, the left or right half of the display was presented to the participants before the second half. In Experiment 2, items were spatially intermixed, but half of the items were presented first, followed by the second half. Experiment 3 followed the methods of Experiment 2 but allowed observers to correct perceived errors. All three experiments produced robust LP effects, with higher errors at 2% prevalence than at 50% prevalence. Dividing up the display had no beneficial effect on errors. The opportunity to correct errors reduced but did not eliminate the LP effect. Low prevalence continues to elevate errors even when observers are forced to slow down and permitted to correct errors.

11.
Postattentive vision
Much research has examined preattentive vision: visual representation prior to the arrival of attention. Most vision research concerns attended visual stimuli; very little research has considered postattentive vision. What is the visual representation of a previously attended object once attention is deployed elsewhere? The authors argue that perceptual effects of attention vanish once attention is redeployed. Experiments 1-6 were visual search studies. In standard search, participants looked for a target item among distractor items. On each trial, a new search display was presented. These tasks were compared to repeated search tasks in which the search display was not changed. On successive trials, participants searched the same display for new targets. Results showed that if search was inefficient when participants searched a display the first time, it was inefficient when the same, unchanging display was searched the second, fifth, or 350th time. Experiments 7 and 8 made a similar point with a curve tracing paradigm. The results have implications for an understanding of scene perception, change detection, and the relationship of vision to memory.

12.
It remains unclear how memory load affects attentional processes in visual search (VS): no effects, beneficial effects, and detrimental effects of memory load have all been found in this type of task. The main goal of the present research was to explore whether memory load modulates VS under different attentional sets, induced by the order of trials (mixed vs. blocked) and by the presentation time of the visual display (long vs. short). In Experiment 1, we randomized the order of trial types (5, 10, and 15 items presented in the display), whereas display size remained constant (10 items) in Experiments 2A and 2B. In the latter experiments, we also varied the presentation time of the visual display (3000 vs. 1300 ms, respectively). Results showed no differential effects of memory load in Experiments 1 and 2A, but effects emerged in Experiment 2B: RTs were longer in the attentional task for trials under high memory load conditions. Although the results support our attentional-set hypothesis, other theoretical implications are also worth discussing in order to better understand how memory load may modulate attentional processes in VS.

13.
Previous research indicates that visual attention can be automatically captured by sensory inputs that match the contents of visual working memory. However, Woodman and Luck (2007) showed that information in working memory can be used flexibly as a template for either selection or rejection according to task demands. We report two experiments that extend their work. Participants performed a visual search task while maintaining items in visual working memory. Memory items were presented for either a short or long exposure duration immediately prior to the search task. Memory was tested by a change-detection task immediately afterwards. On a random half of trials, items in memory matched either one distractor in the search task (Experiment 1) or three (Experiment 2). The main result was that matching distractors speeded or slowed target detection depending on whether memory items were presented for a long or short duration. These effects were more in evidence with three matching distractors than with one. We conclude that the influence of visual working memory on visual search is indeed flexible but is not solely a function of task demands. Our results suggest that attentional capture by perceptual inputs matching information in visual working memory involves a fast automatic process that can be overridden by a slower top-down process of attentional avoidance.

14.
Recent research has shown that, in visual search, participants can miss 30–40% of targets when they only appear rarely (i.e., on 1–2% of trials). Low target prevalence alters the behaviour of the searcher. It can lead participants to quit their search prematurely (Wolfe, Horowitz, & Kenner, 2005), to shift their decision criteria (Wolfe et al., 2007), and/or to make motor or response errors (Fleck & Mitroff, 2007). In this paper we examine whether the low prevalence (LP) effect can be ameliorated if we split the search set in two, spreading the task out over space and/or time. Observers searched for the letter “T” among “L”s. In Experiment 1, the left or right half of the display was presented to the participants before the second half. In Experiment 2, items were spatially intermixed but half of the items were presented first, followed by the second half. Experiment 3 followed the methods of Experiment 2 but allowed observers to correct perceived errors. All three experiments produced robust LP effects with higher errors at 2% prevalence than at 50% prevalence. Dividing up the display had no beneficial effect on errors. The opportunity to correct errors reduced but did not eliminate the LP effect. Low prevalence continues to elevate errors even when observers are forced to slow down and permitted to correct errors.

15.
In hybrid search, observers memorize a number of possible targets and then search for any of these in visual arrays of items. Wolfe (2012) has previously shown that the response times in hybrid search increase with the log of the memory set size. What enables this logarithmic search of memory? One possibility is a series of steps in which subsets of the memory set are compared to all items in the visual set simultaneously. In the present experiments, we presented single visual items sequentially in a rapid serial visual presentation (RSVP) display, eliminating the possibility of simultaneous testing of all items. We used a staircasing procedure to estimate the time necessary to effectively detect the target in the RSVP stream. Processing time increased in a log–linear fashion with the number of potential targets. This finding eliminates the class of models that require simultaneous comparison of some memory items to all (or many) items in the visual display. Experiment 3 showed that, similar to visual search, memory search efficiency in this paradigm is influenced by the similarity between the target set and the distractors. These results indicate that observers perform separate memory searches on each eligible item in the visual display. Moreover, it appears that memory search for one item can proceed while other items are being categorized as “eligible” or “not eligible.”

16.
What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a control preview. Experiments 2 and 3 showed that this scene preview benefit was not due to the conceptual category of the scene or identification of the target object in the preview. Experiment 4 demonstrated that the scene preview benefit was unaffected by changing the size of the scene from preview to search. Taken together, the results suggest that an abstract (size invariant) visual representation is generated in an initial scene glimpse and that this representation can be retained in memory and used to guide subsequent eye movements.

17.
In this study, we examined whether the detection of frontal, ¾, and profile face views differs from their categorization as faces. In Experiment 1, we compared three tasks that required observers to determine the presence or absence of a face, but varied in the extent to which participants had to search for the faces in simple displays and in small or large scenes to make this decision. Performance was equivalent for all of the face views in simple displays and small scenes, but it was notably slower for profile views when this required the search for faces in extended scene displays. This search effect was confirmed in Experiment 2, in which we compared observers’ eye movements with their response times to faces in visual scenes. These results demonstrate that the categorization of faces at fixation is dissociable from the detection of faces in space. Consequently, we suggest that face detection should be studied with extended visual displays, such as natural scenes.

18.
In four experiments, saccadic eye movements, reaction times (RTs), and accuracy were measured as observers searched for feature or conjunction targets presented at several eccentricities. A conjunction search deficit, evidenced by a large eccentricity effect on RTs, accuracy, and number of saccades, was seen in Experiments 1A and 1B. Experiment 2 indicated that, when saccades were precluded, there was an even larger eccentricity effect for conjunction search targets. In Experiment 3, practice in a conjunction search task allowed both RT and number of saccades to become independent of eccentricity. Additionally, there was evidence of feature-based selectivity in that observers were more likely to fixate distractors that had the same contrast as the target. Results are consistent with the view that the oculomotor and attentional systems are functionally linked and provide constraints for models of visual attention and search.

19.
Many theories have proposed that visual working memory plays an important role in visual search. In contrast, by showing that a nonspatial working memory load did not interfere with search efficiency, Woodman, Vogel, and Luck (2001) recently proposed that the role of working memory in visual search is insignificant. However, the visual search process may interfere with spatial working memory. In the present study, a visual search task was performed concurrently with either a spatial working memory task (Experiment 1) or a nonspatial working memory task (Experiment 2). We found that the visual search process interfered with a spatial working memory load, but not with a nonspatial working memory load. These results suggest that there is a distinction between spatial and nonspatial working memory in terms of interactions with visual search tasks. These results imply that the visual search process and spatial working memory storage require the same limited-capacity mechanisms.

20.
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a “depth-of-processing” account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). ICP license: 京ICP备09084417号