Similar Literature
20 similar documents found (search time: 15 ms)
1.
2.
Harvard Medical School and Peter Bent Brigham Hospital, Boston, Massachusetts 02115
The model treats the detection of targets in a visual search task as a concatenation of two serial detection stages. Preattentive visual mechanisms in the initial stage function as a filter, selecting specific features of a visual pattern for the observer’s explicit attention and final cognitive evaluation. The model uses bivariate normal distributions to represent the decision variables for the two serial stages, assuming different parameters for the target and nontarget features in a test set. The model is applied to the detection performance of radiologists interpreting chest x-rays under various conditions of search. It accounts for the substantial improvement in radiologists’ ability to distinguish between target and nontarget test features when they had to search the x-ray images, compared to their performance without visual search. A change in the ROC curve between two different search tasks could be interpreted as a shift in the selection cutoff used by the preattentive filter.
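The two-stage account lends itself to a small simulation. The sketch below is a hedged illustration, not the paper's fitted model: the distribution means, selection cutoff, and decision criterion are arbitrary assumptions, and the bivariate normals are taken as uncorrelated. It shows a preattentive cutoff concatenated with an attentive criterion, so only features passing the filter ever reach the final cognitive evaluation.

```python
import random

def two_stage_detect(feature, cutoff, criterion):
    """Stage 1: a preattentive filter passes only features whose first
    coordinate exceeds the selection cutoff. Stage 2: the attended
    coordinate is compared against a decision criterion."""
    preattentive, attentive = feature
    if preattentive < cutoff:      # rejected without explicit attention
        return False
    return attentive > criterion   # final cognitive evaluation

def sample_feature(mean, rng, sd=1.0):
    # independent bivariate normal (zero correlation) for simplicity
    return (rng.gauss(mean[0], sd), rng.gauss(mean[1], sd))

rng = random.Random(0)
n = 10_000
targets    = [sample_feature((1.5, 1.5), rng) for _ in range(n)]
nontargets = [sample_feature((0.0, 0.0), rng) for _ in range(n)]

hit_rate = sum(two_stage_detect(f, -0.5, 0.5) for f in targets) / n
fa_rate  = sum(two_stage_detect(f, -0.5, 0.5) for f in nontargets) / n
```

Shifting `cutoff` moves the operating point of the whole system, which is the sense in which an ROC change can reflect the preattentive filter rather than the final decision stage.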

3.
A popular procedure for investigating working memory processes has been the visual change-detection procedure. Models of performance based on that procedure, however, tend to be based on performance accuracy and treat working memory search as a one-step process, in which memory representations are compared to a test probe to determine if a match is present. To gain a clearer understanding of how search of these representations operates in the change-detection task, we examined reaction time in two experiments, with a single-item probe either located centrally or at the location of an array item. Contrary to current models of visual working memory capacity, our data point to a two-stage search process: a fast first step to check for the novelty of the probe and, in the absence of such novelty, a second, slower step to search exhaustively for a match between the test probe and a memory representation. In addition to these results, we found that participants tended not to use location information provided by the probe that theoretically could have abbreviated the search process. We suggest some basic revisions of current models of processing in this type of visual working memory task.
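The two-stage reaction-time pattern can be written down directly. The sketch below is hypothetical: the millisecond constants are invented for illustration, not estimated from the data. Its point is the qualitative signature the abstract describes: novel probes are answered quickly regardless of memory load, while familiar probes trigger an exhaustive comparison whose cost scales with set size.

```python
def predicted_rt(probe_in_memory, set_size,
                 t_base=300.0, t_novelty=60.0, t_compare=40.0):
    """Toy two-stage model with hypothetical timings (ms): a fast
    novelty check always runs; only a familiar (non-novel) probe
    triggers the slower exhaustive comparison against every item."""
    rt = t_base + t_novelty
    if probe_in_memory:
        rt += t_compare * set_size   # exhaustive, so cost scales with load
    return rt
```

Under these assumptions, "change" responses to novel probes are flat across set size and "no change" responses have a positive slope, which is the pattern a one-step comparison model cannot produce.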

4.
Real-world visual searches often contain a variable and unknown number of targets. Such searches present difficult metacognitive challenges, as searchers must decide when to stop looking for additional targets, which results in high miss rates in multiple-target searches. In the study reported here, we quantified human strategies in multiple-target search via an ecological optimal foraging model and investigated whether searchers adapt their strategies to complex target-distribution statistics. Separate groups of individuals searched displays with the number of targets per trial sampled from different geometric distributions but with the same overall target prevalence. As predicted by optimal foraging theory, results showed that individuals searched longer when they expected more targets to be present and adjusted their expectations on-line during each search by taking into account the higher-order, across-trial target distributions. However, compared with modeled ideal observers, participants systematically responded as if the target distribution were more uniform than it was, which suggests that training could improve multiple-target search performance.
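An ideal observer under a geometric across-trial target distribution has a property that makes the stopping problem concrete: the geometric distribution is memoryless, so the posterior expected number of additional targets is the same after every find, and depends only on the distribution's parameter. The sketch below is a hypothetical parameterization, P(N = n) = (1 − p)·pⁿ truncated for computation, not the study's fitted model.

```python
def expected_additional_targets(found, p, n_max=200):
    """Posterior mean of targets still present, given 'found' so far,
    under a (truncated) geometric prior P(N = n) = (1 - p) * p**n."""
    support = range(found, n_max)
    probs = [(1 - p) * p ** n for n in support]
    total = sum(probs)
    return sum(pr * (n - found) for n, pr in zip(support, probs)) / total
```

Because of memorylessness this evaluates to roughly p / (1 − p) no matter how many targets have already been found, so groups searching under a larger p are justified in quitting later, while an observer who treats the distribution as uniform will misjudge this quantity.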

5.
An important component of routine visual behavior is the ability to find one item in a visual world filled with other, distracting items. This ability to perform visual search has been the subject of a large body of research in the past 15 years. This paper reviews the visual search literature and presents a model of human search behavior. Built upon the work of Neisser, Treisman, Julesz, and others, the model distinguishes between a preattentive, massively parallel stage that processes information about basic visual features (color, motion, various depth cues, etc.) across large portions of the visual field and a subsequent limited-capacity stage that performs other, more complex operations (e.g., face recognition, reading, object identification) over a limited portion of the visual field. The spatial deployment of the limited-capacity process is under attentional control. The heart of the guided search model is the idea that attentional deployment of limited resources is guided by the output of the earlier parallel processes. Guided Search 2.0 (GS2) is a revision of the model in which virtually all aspects of the model have been made more explicit and/or revised in light of new data. The paper is organized into four parts: Part 1 presents the model and the details of its computer simulation. Part 2 reviews the visual search literature on preattentive processing of basic features and shows how the GS2 simulation reproduces those results. Part 3 reviews the literature on the attentional deployment of limited-capacity processes in conjunction and serial searches and shows how the simulation handles those conditions. Finally, Part 4 deals with shortcomings of the model and unresolved issues.
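The core idea, parallel feature maps steering a limited-capacity process, can be caricatured in a few lines. This is a hedged sketch, not the GS2 simulation: items are bare feature sets, top-down activation is reduced to a count of matching target features, and bottom-up activation and noise are omitted entirely.

```python
def guided_visit_order(items, target_features):
    """Rank items by top-down guidance (how many target features an
    item carries); the limited-capacity stage inspects items in
    descending order of this activation."""
    return sorted(items, key=lambda it: len(it & target_features),
                  reverse=True)

display = [frozenset({"green", "vertical"}),
           frozenset({"red", "horizontal"}),
           frozenset({"red", "vertical"})]   # the conjunction target
order = guided_visit_order(display, frozenset({"red", "vertical"}))
```

Even this caricature shows why guided conjunction search need not be a random serial scan: the item matching both guided features outranks every distractor that matches only one.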

6.
Predictions from a model of visual matching were tested in two experiments. The model consists of a wholistic comparison process followed by an element-by-element comparison process. All stimuli are processed by the first stage but only those that permit a decision based on a wholistic comparison produce responses. When discrimination is difficult and a decision cannot be reached by a wholistic comparison, the second stage of processing is initiated. Degree of discriminability and stimulus duration (100 and 1000 msec.) were varied in both experiments. In Exp. 1, the stimulus elements were arranged in a square configuration to facilitate a wholistic comparison. As predicted, the hard-different stimuli took longer to match than the same or easy-different stimuli. The hard-different stimuli presented for 1000 msec. took longer to match than those presented for 100 msec. There was no difference in accuracy between responses to hard-different pairs at the two durations. In Exp. 2, the stimulus elements were arranged in a horizontal row and placed one above the other to facilitate element-by-element comparison. As predicted, these stimuli produced slower and more accurate responses for same and hard-different stimulus pairs only when they were exposed for 1000 msec. Responses to easy-different stimulus pairs were made quickly and accurately.

7.
We present a computational model and corresponding computer simulations that mimic phenomenologically the eye movement trajectories observed in a conjunctive visual search task. The element of randomness is captured in the model through a Monte Carlo selection of a particular eye movement based on its probability, which depends on three factors, adjusted to match the observed saccade amplitude distribution, forward bias in consecutive saccades, and return rates. Memory is assumed to operate through tagging of objects already recognized as nontargets, which, in turn, requires their processing within the attentional area of conspicuity (AC). That AC is adjusted so that computer simulations optimally reproduce the distribution of the number of saccades, the failure rate for capturing the target, and the return rate to previously inspected locations. For their viability, computer simulations critically depend on memory’s being long-ranged. In turn, the simulations confirm the formation of circulating or spiraling patterns in the observed eye trajectories. We also relate consistently the average number of saccades per trial to the saccade amplitude distribution by modeling analytically the combined roles of the AC in attention and memory. The full Supplemental Appendix A for this article may be downloaded from http://app.psychonomic-journals.org/content/supplemental.
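The interplay of the three ingredients, Monte Carlo saccade selection weighted by amplitude, an area of conspicuity that tags inspected items, and long-range memory for those tags, can be sketched in one dimension. Everything below is a toy assumption (items on a line, exponential amplitude weighting, a fixed AC radius), not the authors' calibrated simulation.

```python
import math
import random

def simulate_search(n_items=30, ac_radius=2.0, max_saccades=200, seed=1):
    """Toy Monte Carlo scan path: items lie on a line; every item inside
    the area of conspicuity (AC) around fixation is tagged as inspected,
    and tagged items are never refixated (long-range memory). The next
    fixation is drawn with probability falling off with saccade
    amplitude, mimicking an amplitude-distribution constraint."""
    rng = random.Random(seed)
    positions = [rng.uniform(0.0, 20.0) for _ in range(n_items)]
    target = rng.randrange(n_items)
    tagged, fixation, saccades = set(), 10.0, 0
    while saccades < max_saccades:
        for i, p in enumerate(positions):     # process items inside the AC
            if abs(p - fixation) <= ac_radius:
                tagged.add(i)
        if target in tagged:
            return saccades, True
        untagged = [i for i in range(n_items) if i not in tagged]
        if not untagged:
            return saccades, False
        weights = [math.exp(-abs(positions[i] - fixation)) for i in untagged]
        fixation = positions[rng.choices(untagged, weights=weights)[0]]
        saccades += 1
    return saccades, False

n_saccades, found = simulate_search()
```

With perfect long-range tagging every saccade inspects at least one new item, so the search terminates within `n_items` saccades; degrading the memory (e.g., letting tags expire) is what makes return rates and failure rates nontrivial in the full model.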

8.
Meinecke (1989, Exp. 1, cond. HO) showed that the detectability of a visual target embedded in a linear noise array decreases with increasing retinal eccentricity, while the reaction time (RT) of the hits increases. One of the most interesting features of her results was that the RT of the correct rejections is consistently larger than the RT for signals presented near the fovea. This finding suggests that initially visual attention is concentrated near the fixation point and then diffuses across the stimulus array to perform a serial, exhaustive search. We present a diffusion model of early visual-search processes that quantitatively describes this evolution of attention in time and space; in contrast to most previous conceptions, it is based on a genuine relation between the spatial and temporal dimensions of the search processes performed. The model predicts quantitatively both detection performance and RT. We conducted an experiment similar to that of Meinecke (1989), but with an additional variation of the presentation time. All the main features of the predictions could be explained by the model. The interpretation of the model's four parameters is discussed in some detail and compared with previous estimates of the microscopic search speed derived from alternative models. Finally, we consider some possible modifications related to results of Kehrer (1987, 1989), and some generalizations to multi-target detection and two-dimensional stimulus arrays.
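The qualitative RT predictions of a diffusion account follow from one fact about diffusion: a spreading front reaches radius r at a time that grows with r²/D. The sketch below uses that relation with invented constants; it is not the authors' four-parameter model, only a minimal illustration of why hit RT rises with eccentricity while correct rejections, which require exhaustive coverage, are slowest of all.

```python
def predicted_hit_rt(eccentricity, d_coeff=0.05, t_base=250.0):
    """Toy diffusion prediction (hypothetical units): attention spreads
    out from fixation, so the time for the attentional front to reach
    eccentricity e grows like e**2 / D."""
    return t_base + eccentricity ** 2 / d_coeff

def predicted_rejection_rt(array_extent, d_coeff=0.05, t_base=250.0):
    """A correct rejection requires exhaustive coverage of the array and
    therefore inherits the time to reach its farthest eccentricity."""
    return t_base + array_extent ** 2 / d_coeff
```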

9.
Subjects searched sets of items for targets defined by conjunctions of color and form, color and orientation, or color and size. Set size was varied and reaction times (RT) were measured. For many unpracticed subjects, the slopes of the resulting RT X Set Size functions are too shallow to be consistent with Treisman's feature integration model, which proposes serial, self-terminating search for conjunctions. Searches for triple conjunctions (Color X Size X Form) are easier than searches for standard conjunctions and can be independent of set size. A guided search model similar to Hoffman's (1979) two-stage model can account for these data. In the model, parallel processes use information about simple features to guide attention in the search for conjunctions. Triple conjunctions are found more efficiently than standard conjunctions because three parallel processes can guide attention more effectively than two.
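Why three guiding feature maps beat two can be shown with a noisy-signal sketch. The assumptions here are illustrative, not the paper's: guidance is the count of target features an item carries plus Gaussian noise, the target carries all of them, and each distractor shares exactly one. The target's signal advantage is then larger in the triple-conjunction case, so it is fixated earlier on average.

```python
import random

def rank_of_target(n_distractors, shared, total, rng, noise=1.0):
    """Guidance signal = number of target features an item carries,
    plus Gaussian noise. The target carries all 'total' features; each
    distractor shares 'shared' of them. Rank 1 = inspected first."""
    tgt = total + rng.gauss(0.0, noise)
    better = sum(1 for _ in range(n_distractors)
                 if shared + rng.gauss(0.0, noise) > tgt)
    return better + 1

rng = random.Random(0)
# standard conjunction: distractors share 1 of the target's 2 features
standard = sum(rank_of_target(20, 1, 2, rng) for _ in range(200)) / 200
# triple conjunction: distractors share 1 of the target's 3 features
triple   = sum(rank_of_target(20, 1, 3, rng) for _ in range(200)) / 200
```

A smaller average rank means fewer items inspected before the target, i.e., a shallower RT slope, which is the direction of the reported triple-conjunction advantage.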

10.
Children in second, fourth and sixth grades and college sophomores were compared on a visual search and scanning task under three experimental conditions. In Condition I, a single target letter was sought in a list of letters of low visual confusability. In Condition II, two target letters were sought but only one appeared in a given list. In Condition III, a single target letter was sought in a list of letters of high confusability. Search time decreased with age in all three tasks. Searching for two targets was no harder than searching for one. A highly confusable visual context increased search time at all age levels.

11.
Children in second, fourth and sixth grades and college sophomores were compared on a visual search and scanning task under three experimental conditions. In Condition I, a single target letter was sought in a list of letters of low visual confusability. In Condition II, two target letters were sought but only one appeared in a given list. In Condition III, a single target letter was sought in a list of letters of high confusability. Search time decreased with age in all three tasks. Searching for two targets was no harder than searching for one. A highly confusable visual context increased search time at all age levels.

12.
13.
An inefficient visual search task can be facilitated if half the distractor items are presented as a preview prior to the presentation of the remaining distractor items and target. This benefit in search is termed the preview effect. Recent research has found that a preview effect can still occur if the previewed items disappear before reappearing again just before the search items (the “top-up” procedure). In this paper we investigate the attentional demands of processing during the preview and the top-up periods. Experiment 1 found that if attention is withdrawn from the top-up stage then no preview effect occurs. Likewise if attention is withdrawn from the initial preview period then the preview effect is reduced (Experiment 2). The data suggest that the preview effect is dependent on attention being paid both to the initial display and also to the re-presentation of the old display before the search display appears. The data counter accounts of preview search in terms of automatic attention capture by new items or by inhibition of return. We discuss alternative accounts of the results, and in particular suggest an amalgamation of a temporal grouping and a visual marking account of preview search.

14.
Multiple-target visual searches are especially error prone; once one target is found, additional targets are likely to be missed. This phenomenon, often called satisfaction of search (which we refer to here as subsequent search misses; SSMs), is well known in radiology, despite no existing consensus about the underlying cause(s). Taking a cognitive laboratory approach, we propose that there are multiple causes of SSMs and present a taxonomy of SSMs based on searchers' eye movements during a multiple-target search task, including both previously identified and novel sources of SSMs. The types and distributions of SSMs revealed effects of working memory load, search strategy, and additional causal factors, suggesting that there is no single cause of SSMs. A multifaceted approach is likely needed to understand the psychological causes of SSMs and then to mitigate them in applied settings such as radiology and baggage screening.

15.
Traditional models of visual search assume interitem similarity effects arise from within each feature dimension independently of other dimensions. In the present study, we examine whether distractor–distractor effects also depend on feature conjunctions (i.e., whether feature conjunctions form a separate “feature” dimension that influences interitem similarity). Spatial frequency and orientation feature dimensions were used to generate distractors. In the bound condition, the number of distractors sharing the same conjunction of features was higher than that in the unbound condition, but the sharing of features within frequency and orientation dimensions was the same across conditions. The results showed that the target was found more efficiently in the bound than in the unbound condition, indicating that distractor–distractor similarity is also influenced by conjunctive representations.

16.
Traditional models of visual search assume interitem similarity effects arise from within each feature dimension independently of other dimensions. In the present study, we examine whether distractor-distractor effects also depend on feature conjunctions (i.e., whether feature conjunctions form a separate “feature” dimension that influences interitem similarity). Spatial frequency and orientation feature dimensions were used to generate distractors. In the bound condition, the number of distractors sharing the same conjunction of features was higher than that in the unbound condition, but the sharing of features within frequency and orientation dimensions was the same across conditions. The results showed that the target was found more efficiently in the bound than in the unbound condition, indicating that distractor-distractor similarity is also influenced by conjunctive representations.

17.
Four experiments were conducted comparing the ways in which reading and search are affected by manipulations of word shape and word boundary. Word shape was manipulated by variations in type (normal, capitals, and alternating upper- and lowercase), while word boundary was manipulated by variations in spacing (normal, filled, and absent). The variations were combined factorially for nine space-type combinations. Experiments I and II were basic studies examining the effects of the manipulations on reading and on search, respectively. Search was found to be 2 to 2.5 times faster than reading. Reading and search both slowed to one-third of the normal speeds when spaces were removed and type altered. A significant interaction of Type by Space was found for reading but not for search. Experiments III and IV examined contextual and typographical effects on high-speed visual search through paragraphs. Form-class expectancy and target word predictability, respectively, were manipulated. In both experiments, subjects found the expected predictable words faster than the unexpected unpredictable words. The data were interpreted as providing support for the peripheral and cognitive search guidance processes hypothesized to be active in reading.

18.
Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. Specifically, in Experiment 1 we asked whether the effect is capacity demanding; therefore, we manipulated the set size of the display. The results indicated a clear cognitive processing capacity requirement; that is, the magnitude of the effect decreased for a larger set size. Consequently, in Experiment 2, we investigated whether the enhancement effect occurs only at the level of the behaviorally relevant feature or at a level common to different features. We therefore manipulated the discriminability of the behaviorally neutral feature (color). Again, results showed that this manipulation influenced the action enhancement of the behaviorally relevant feature. In particular, the effect of the color manipulation on the action enhancement suggests that the action effect is more likely to bias the competition between different visual features than to enhance the processing of the relevant feature alone. We offer a theoretical account that integrates the action-intention effect within the biased competition model of visual selective attention.

19.
Linguistically mediated visual search
During an individual's normal interaction with the environment and other humans, visual and linguistic signals often coincide and can be integrated very quickly. This has been clearly demonstrated in recent eyetracking studies showing that visual perception constrains on-line comprehension of spoken language. In a modified visual search task, we found the inverse, that real-time language comprehension can also constrain visual perception. In standard visual search tasks, the number of distractors in the display strongly affects search time for a target defined by a conjunction of features, but not for a target defined by a single feature. However, we found that when a conjunction target was identified by a spoken instruction presented concurrently with the visual display, the incremental processing of spoken language allowed the search process to proceed in a manner considerably less affected by the number of distractors. These results suggest that perceptual systems specialized for language and for vision interact more fluidly than previously thought.
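The incremental-restriction idea has a simple computational reading: each word of the spoken instruction arrives before the next and can immediately prune the candidate set, so the conjunction never has to be resolved over the whole display at once. The sketch below is a hypothetical illustration (items as feature sets, words as feature labels), not the study's procedure.

```python
def candidates_after_words(display, words_heard):
    """Incremental comprehension: each heard word (e.g. 'red', then
    'vertical') immediately restricts the set of items attention
    must still consider."""
    remaining = list(display)
    for word in words_heard:
        remaining = [item for item in remaining if word in item]
    return remaining

display = [frozenset({"red", "vertical"}), frozenset({"red", "horizontal"}),
           frozenset({"green", "vertical"}), frozenset({"green", "horizontal"})]
after_first = candidates_after_words(display, ["red"])
after_both  = candidates_after_words(display, ["red", "vertical"])
```

After the first word the effective set size is already halved, which is one way a concurrent spoken instruction could flatten the set-size slope of conjunction search.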

20.
Two experiments examine how collaboration influences visual search performance. Working with a partner or on their own, participants reported whether a target was present or absent in briefly presented search displays. We compared the search performance of individuals working together (collaborative pairs) with the pooled responses of the individuals working alone (nominal pairs). Collaborative pairs were less likely than nominal pairs to correctly detect a target and they were less likely to make false alarms. Signal detection analyses revealed that collaborative pairs were more sensitive to the presence of the target and had a more conservative response bias than the nominal pairs. This pattern was observed even when the presence of another individual was matched across pairs. The results are discussed in the context of task-sharing, social loafing and current theories of visual search.
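The signal detection analysis referred to here uses the standard equal-variance indices: sensitivity d′ = z(H) − z(F) and response bias c = −(z(H) + z(F)) / 2, where z is the inverse normal CDF. The rates below are hypothetical numbers chosen only to match the direction of the reported pattern (fewer hits but far fewer false alarms for collaborative pairs).

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Equal-variance signal detection indices:
    d' = z(H) - z(F)            (sensitivity)
    c  = -(z(H) + z(F)) / 2     (bias; positive = conservative)"""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2

# Hypothetical rates in the reported direction:
d_collab, c_collab = dprime_and_criterion(0.80, 0.05)
d_nominal, c_nominal = dprime_and_criterion(0.90, 0.20)
```

With these inputs the collaborative pair comes out both more sensitive (larger d′) and more conservative (larger c), showing how a lower hit rate can coexist with better discrimination once false alarms are taken into account.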
