Similar references: 20 results found.
1.
Cross-dimensional perceptual selectivity
Three visual search experiments tested whether top-down selectivity toward particular stimulus dimensions is possible during preattentive parallel search. Subjects viewed multielement displays in which two salient items, each unique in a different dimension--that is, color and intensity (Experiment 1) or color and form (Experiments 2 and 3)--were simultaneously present. One of the dimensions defined the target; the other dimension served as distractor. The results indicate that when search is performed in parallel, top-down selectivity is not possible. These findings suggest that preattentive parallel search is strongly automatic, because it satisfies both the load-insensitivity and the unintentionality criteria of automaticity.

2.
Most accounts of visual perception hold that the detection of primitive features occurs preattentively, in parallel across the visual field. Evidence that preattentive vision operates without attentional limitations comes from visual search tasks in which the detection of the presence or absence of a primitive feature is independent of the number of stimuli in a display. If the detection of primitive features occurs preattentively, in parallel and without capacity limitations, then it should not matter where attention is located in the visual field. The present study shows that even though the detection of a red element in an array of gray elements occurred in parallel without capacity limitations, the allocation of attention did have a large effect on search performance. If attention was directed to a particular region of the display and the target feature was presented elsewhere, response latencies increased. Results indicate that the classic view of preattentive vision requires revision.
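The flat search functions described here are typically quantified by regressing mean reaction time on display set size; a slope near zero milliseconds per item is the usual signature of parallel, capacity-unlimited detection. The Python sketch below (with invented numbers, not data from this study) shows the standard calculation.

import numpy as np

# Hypothetical mean reaction times for a feature search at four set sizes.
set_sizes = np.array([4, 8, 16, 32])
mean_rt_ms = np.array([452, 455, 449, 458])

# Fit RT = slope * set_size + intercept; a near-zero slope (ms/item) is read
# as parallel search, while slopes above roughly 10-20 ms/item suggest
# serial or otherwise inefficient search.
slope, intercept = np.polyfit(set_sizes, mean_rt_ms, 1)
print(f"search slope: {slope:.2f} ms/item, intercept: {intercept:.0f} ms")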

3.
4.
An important component of routine visual behavior is the ability to find one item in a visual world filled with other, distracting items. This ability to perform visual search has been the subject of a large body of research in the past 15 years. This paper reviews the visual search literature and presents a model of human search behavior. Built upon the work of Neisser, Treisman, Julesz, and others, the model distinguishes between a preattentive, massively parallel stage that processes information about basic visual features (color, motion, various depth cues, etc.) across large portions of the visual field and a subsequent limited-capacity stage that performs other, more complex operations (e.g., face recognition, reading, object identification) over a limited portion of the visual field. The spatial deployment of the limited-capacity process is under attentional control. The heart of the guided search model is the idea that attentional deployment of limited resources is guided by the output of the earlier parallel processes. Guided Search 2.0 (GS2) is a revision of the model in which virtually all aspects of the model have been made more explicit and/or revised in light of new data. The paper is organized into four parts: Part 1 presents the model and the details of its computer simulation. Part 2 reviews the visual search literature on preattentive processing of basic features and shows how the GS2 simulation reproduces those results. Part 3 reviews the literature on the attentional deployment of limited-capacity processes in conjunction and serial searches and shows how the simulation handles those conditions. Finally, Part 4 deals with shortcomings of the model and unresolved issues.
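As a rough illustration of the guidance idea summarized above (a minimal sketch, not Wolfe's GS2 simulation), each display item can be assigned an activation that combines bottom-up feature contrast with top-down weights for the target-defining features; attention then visits items in order of decreasing activation, so the number of visits before the target is reached stands in for search time.

import numpy as np

rng = np.random.default_rng(0)

def guided_search_visits(features, target_index, top_down_weights, noise_sd=0.3):
    """features: (n_items, n_dims) array of feature values in [0, 1].
    Returns the number of items attention visits before reaching the target."""
    # Bottom-up activation: how much each item differs from the display average.
    bottom_up = np.abs(features - features.mean(axis=0)).sum(axis=1)
    # Top-down activation: weighted match to the target-defining features.
    top_down = features @ top_down_weights
    activation = bottom_up + top_down + rng.normal(0, noise_sd, len(features))
    visit_order = np.argsort(-activation)          # highest activation first
    return int(np.where(visit_order == target_index)[0][0]) + 1

# Toy conjunction display: columns are (redness, verticalness); the target is
# the red vertical item, and weighting both features steers attention toward it.
items = rng.random((12, 2))
items[0] = [1.0, 1.0]
print(guided_search_visits(items, 0, np.array([1.0, 1.0])))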

5.
The authors used visual search tasks in which components of the classic flanker task (B. A. Eriksen & C. W. Eriksen, 1974) were introduced. In several experiments the authors obtained evidence of parallel search for a target among distractor elements. Therefore, 2-stage models of visual search predict no effect of the identity of those distractors. However, clear compatibility effects of the distractors were obtained: Responses were faster when the distractors were compatible with the response than when they were incompatible. These results show that even in parallel search tasks identity information is extracted from the distractors. In addition, alternative interpretations of the results in terms of the occasional identification of a distractor before or after the target was identified could be ruled out. The results showed that flat search slopes obtained in visual search experiments provide no benchmark for preattentive processing.

6.
Automaticity and preattentive processing.
The characteristics of automatized performance resemble those of preattentive processing in some respects. In the context of visual search tasks, these include spatially parallel processing, involuntary calling of attention, learning without awareness, and time-sharing with other tasks. However, this article reports some evidence suggesting that extended practice produces its effects through different mechanisms from those that underlie preattentive processing. The dramatic changes in search rate seem to depend not on the formation of new preattentive detectors for the task-relevant stimuli, nor on learned abstracted procedures for responding quickly and efficiently, but rather on changes that are very specific both to the particular stimuli and to the particular task used in practice. We suggest that the improved performance may depend on the accumulation of separate memory traces for each individual experience of a display (see Logan, 1988), and we show that the traces differ for conjunction search in which stimuli must be individuated and for feature search where a global response to the display is sufficient.
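The memory-trace account cited here (Logan, 1988) treats each encounter with a display as leaving an instance in memory, with the response produced by a race among all stored instances. A minimal sketch of that race, with arbitrary retrieval-time parameters and a normal distribution standing in for Logan's distributional assumptions, shows why more traces yield faster responses:

import numpy as np

rng = np.random.default_rng(1)

def expected_rt(n_traces, n_sims=20000, mean=800.0, sd=150.0):
    # Each stored trace produces a noisy retrieval time; the response is
    # driven by whichever trace finishes first, so RT is the minimum.
    samples = rng.normal(mean, sd, size=(n_sims, n_traces))
    return samples.min(axis=1).mean()

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:3d} traces -> mean RT approx {expected_rt(n):.0f} ms")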

7.
Preattentive vision and perceptual groups
M. Bravo & R. Blake, Perception, 1990, 19(4), 515-522
Recent evidence suggests that preattentive processing may not be limited to the analysis of simple stimulus features as previously suggested. To explore this idea, a visual search task was used to test whether the shapes of several perceptual groups can be processed in parallel. Textured displays that give rise to strong perceptual grouping were used to create figures on a background. Search times for a target figure distinguished by a unique shape were found to be independent of the number of distractor figures in the display. This result indicates that perceptual groups may be processed in parallel and suggests an expanded role for preattentive processing in vision.

8.
The nature of capacity limits (if any) in visual search has been a topic of controversy for decades. In 30 years of work, researchers have attempted to distinguish between two broad classes of visual search models. Attention-limited models have proposed two stages of perceptual processing: an unlimited-capacity preattentive stage, and a limited-capacity selective attention stage. Conversely, noise-limited models have proposed a single, unlimited-capacity perceptual processing stage, with decision processes influenced only by stochastic noise. Here, we use signal detection methods to test a strong prediction of attention-limited models. In standard attention-limited models, performance of some searches (feature searches) should only be limited by a preattentive stage. Other search tasks (e.g., spatial configuration search for a "2" among "5"s) should be additionally limited by an attentional bottleneck. We equated average accuracies for a feature and a spatial configuration search over set sizes of 1-8 for briefly presented stimuli. The strong prediction of attention-limited models is that, given overall equivalence in performance, accuracy should be better on the spatial configuration search than on the feature search for set size 1, and worse for set size 8. We confirm this crossover interaction and show that it is problematic for at least one class of one-stage decision models.
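The noise-limited alternative described above can be made concrete with a standard signal detection calculation: if each display item contributes one noisy sample and the observer reports "present" whenever the largest sample exceeds a criterion, accuracy falls with set size from decision noise alone, with no attentional bottleneck. The sketch below uses arbitrary sensitivity and criterion values and is meant only to illustrate that class of model, not the authors' analysis.

from scipy.stats import norm

def max_rule_accuracy(d_prime, criterion, set_size):
    # Target-absent display: all samples are noise, and a correct rejection
    # requires every sample to fall below the criterion.
    p_correct_rejection = norm.cdf(criterion) ** set_size
    # Target-present display: a miss requires the target sample and all
    # distractor samples to fall below the criterion.
    p_miss = norm.cdf(criterion - d_prime) * norm.cdf(criterion) ** (set_size - 1)
    return 0.5 * ((1.0 - p_miss) + p_correct_rejection)  # equal present/absent trials

for n in (1, 2, 4, 8):
    print(f"set size {n}: proportion correct = {max_rule_accuracy(2.0, 1.0, n):.3f}")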

9.
The model treats the detection of targets in a visual search task as a concatenation of two serial detection stages. Preattentive visual mechanisms in the initial stage function as a filter, selecting specific features of a visual pattern for the observer's explicit attention and final cognitive evaluation. The model uses bivariate normal distributions to represent the decision variables for the two serial stages, assuming different parameters for the target and nontarget features in a test set. The model is applied to the detection performance of radiologists interpreting chest x-rays under various conditions of search. It accounts for the substantial improvement in radiologists' ability to distinguish between target and nontarget test features when they had to search the x-ray images, compared to their performance without visual search. A change in the ROC curve between two different search tasks could be interpreted as a shift in the selection cutoff used by the preattentive filter.
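A rough Monte Carlo sketch of this kind of two-stage model (with invented means, correlation, and cutoffs, not the published parameters) represents each candidate feature by two correlated decision variables, applies a preattentive cutoff to the first, and evaluates only the survivors against an attentive criterion on the second; sweeping that criterion traces an ROC curve, and moving the stage-1 cutoff shifts it.

import numpy as np

rng = np.random.default_rng(2)

def hit_fa_rates(stage1_cutoff, stage2_criterion, n=200000,
                 target_mean=(1.5, 1.5), lure_mean=(0.0, 0.0), rho=0.5):
    cov = [[1.0, rho], [rho, 1.0]]
    def report_rate(mean):
        x = rng.multivariate_normal(mean, cov, size=n)
        passed = x[:, 0] > stage1_cutoff                        # stage 1: preattentive filter
        return np.mean(passed & (x[:, 1] > stage2_criterion))   # stage 2: attentive report
    return report_rate(target_mean), report_rate(lure_mean)

# Sweep the stage-2 criterion to trace one ROC curve for a fixed stage-1 cutoff.
for c2 in (-1.0, 0.0, 1.0, 2.0):
    hit, fa = hit_fa_rates(stage1_cutoff=0.0, stage2_criterion=c2)
    print(f"criterion {c2:+.1f}: hit = {hit:.3f}, false alarm = {fa:.3f}")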

10.
Postattentive vision
Much research has examined preattentive vision: visual representation prior to the arrival of attention. Most vision research concerns attended visual stimuli; very little research has considered postattentive vision. What is the visual representation of a previously attended object once attention is deployed elsewhere? The authors argue that perceptual effects of attention vanish once attention is redeployed. Experiments 1-6 were visual search studies. In standard search, participants looked for a target item among distractor items. On each trial, a new search display was presented. These tasks were compared to repeated search tasks in which the search display was not changed. On successive trials, participants searched the same display for new targets. Results showed that if search was inefficient when participants searched a display the first time, it was inefficient when the same, unchanging display was searched the second, fifth, or 350th time. Experiments 7 and 8 made a similar point with a curve tracing paradigm. The results have implications for an understanding of scene perception, change detection, and the relationship of vision to memory.

11.
We used visual search to explore whether the preattentive mechanisms that enable rapid detection of facial expressions are driven by visual information from the displacement of features in expressions, or other factors such as affect. We measured search slopes for luminance and contrast equated images of facial expressions and anti-expressions of six emotions (anger, fear, disgust, surprise, happiness, and sadness). Anti-expressions have an equivalent magnitude of facial feature displacements to their corresponding expressions, but different affective content. There was a strong correlation between these search slopes and the magnitude of feature displacements in expressions and anti-expressions, indicating feature displacement had an effect on search performance. There were significant differences between search slopes for expressions and anti-expressions of happiness, sadness, anger, and surprise, which could not be explained in terms of feature differences, suggesting preattentive mechanisms were sensitive to other factors. A categorization task confirmed that the affective content of expressions and anti-expressions of each of these emotions were different, suggesting signals of affect might well have been influencing attention and search performance. Our results support a picture in which preattentive mechanisms may be driven by factors at a number of levels, including affect and the magnitude of feature displacement. We note that indirect effects of feature displacement, such as changes in local contrast, may well affect preattentive processing. These are most likely to be nonlinearly related to feature displacement and are, we argue, an important consideration for any study using images of expression to explore how affect guides attention.

12.
Jan Theeuwes, Visual Cognition, 2013, 21(2-3), 221-233
In the present experiment, subjects searched multielement displays for a colour singleton. With a variable display-to-onset SOA, on some trials an abrupt onset was presented at three possible distances from the target location. The interference effect caused by the abrupt onset as a function of SOA and its relative position revealed the distinctive characteristics of preattentive and attentive processing. During preattentive parallel processing (processing occurring within the first 100 msec), any abrupt onset that occurred within the visual field captured attention. During attentive processing (processing occurring after 100 msec), however, focused attention prevented the abrupt onset from capturing attention. The finding that abrupt onsets interfere with selective search for a colour singleton provides additional evidence for the theory of inadequate top-down control at the level of preattentive processing.

13.
How does one find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This article argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes might be best explained by a dual-path model: a 'selective' path in which candidate objects must be individually selected for recognition and a 'nonselective' path in which information can be extracted from global and/or statistical information.

14.
A novel computational model of a preattentive system performing visual search is presented. The model processes displays of lines, reproduced from Wolfe, Friedman-Hill, Stewart, and O'Connell's (1992) and Treisman and Sato's (1990) visual-search experiments. The response times measured in these experiments suggest that some of the displays are searched serially, whereas others are scanned in parallel. Our neural network model operates in two phases. First, the visual displays are compressed via standard methods (principal component analysis), to overcome assumed biological capacity limitations. Second, the compressed representations are further processed to identify a target in the display. The model succeeds in fast detection of targets in experimentally labeled parallel displays, but fails with serial ones. Analysis of the compressed internal representations reveals that compressed parallel displays contain global information that enables instantaneous target detection. However, in representations of serial displays, this global information is obscure, and hence, a target detection system should resort to a serial, attentional scan of local features across the display. Our analysis provides a numerical criterion that is strongly correlated with the experimental response time slopes and enables us to reformulate Duncan and Humphreys's (1989) search surface, using precise quantitative measures. Our findings provide further insight into the important debate concerning the dichotomous versus continuous views of parallel/serial visual search.
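In the same spirit (a minimal sketch under invented assumptions, not the authors' network or their line displays), the two phases can be mimicked by compressing toy display vectors with principal component analysis and then asking a simple linear readout to report target presence from the compressed code; how well the readout does is one crude stand-in for how much global target information survives compression.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def make_display(target_present, n_items=16, item_dim=4):
    items = rng.normal(0.0, 1.0, size=(n_items, item_dim))  # distractor features
    if target_present:
        items[rng.integers(n_items)] += 3.0                 # one odd-one-out item
    return items.ravel()

X = np.array([make_display(p) for p in ([True] * 200 + [False] * 200)])
y = np.array([1] * 200 + [0] * 200)

compressed = PCA(n_components=8).fit_transform(X)               # phase 1: compression
readout = LogisticRegression(max_iter=1000).fit(compressed, y)  # phase 2: detection
print(f"detection accuracy on the compressed code: {readout.score(compressed, y):.2f}")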

15.
The goal of this review is to critically examine contradictory findings in the study of visual search for emotionally expressive faces. Several key issues are addressed: Can emotional faces be processed preattentively and guide attention? What properties of these faces influence search efficiency? Is search moderated by the emotional state of the observer? The authors argue that the evidence is consistent with claims that (a) preattentive search processes are sensitive to and influenced by facial expressions of emotion, (b) attention guidance is influenced by a dynamic interplay of emotional and perceptual factors, and (c) visual search for emotional faces is influenced by the emotional state of the observer to some extent. The authors also argue that the way in which contextual factors interact to determine search performance needs to be explored further to draw sound conclusions about the precise influence of emotional expressions on search efficiency. Methodological considerations (e.g., set size, distractor background, task set) and ecological limitations of the visual search task are discussed. Finally, specific recommendations are made for future research directions.

16.
The present experiment tested for preattentive visual search in 3- and 4-month-old infants using stimulus features described by Treisman and Souther (1985) as producing visual "pop-out" effects in adults. Infants were presented with two visual arrays to the left and right of midline. One array comprised homogeneous elements, while the other had a discrepant element embedded in it. On the basis of previous research, we expected infants to fixate the array containing the embedded discrepant element. The pattern of fixation indicated detection of the embedded discrepant element for both age groups, but only with stimuli shown to elicit visual pop-out in adults. This asymmetry in detection is consistent with the presence of preattentive visual search in infants as young as 3 months.

17.
Detection versus discrimination of visual orientation
D. Sagi & B. Julesz, Perception, 1984, 13(5), 619-628
The role of focused attention in vision is examined. Recent theories of attention hypothesize that serial search by focal attention is required for discrimination between different combinations of features. Experiments are reported which show that the mixture of a few (less than five) horizontal and vertical line segments embedded in an aggregate of diagonal line segments can be rapidly counted (also called 'subitizing') by a parallel (preattentive) process, while the discrimination between horizontal and vertical orientation requires serial search by shifting focal attention to each line segment. Thus detecting and counting targets that differ in orientation can be done in parallel by a preattentive process, whereas knowing 'what' the orientation of a target is (horizontal or vertical, i.e., of a single conspicuous feature) requires a serial search by focal attention.

18.
From an operational perspective, attention is a matter of organizing multiple brain centres to act in concert on the task at hand. Taking focal visual attention as an example, recent anatomical findings suggest that the pulvinar might act as a remote hub for coordinating spatial activity within multiple cortical visual maps. The pulvinar can, in turn, be influenced by signals originating in the frontal and parietal eye fields, using common visuomotor neural circuitry, with the superior colliculus acting as an important link. By identifying a complex, real neural architecture ('RNA') model for attention, it is possible to integrate several different modes of operation - such as parallel or serial, bottom-up or top-down, preattentive or attentive - that characterize conflicting cognitive models of attention in visual search paradigms.

19.
Early and late selection theories of visual attention disagree about whether identification occurs before or after selection. Studies showing the category effect, i.e., the time to detect a letter is hardly affected by the number of digits present in the display, are taken as evidence for late selection theories since these studies suggest parallel identification of all items in the display. As an extension of previous studies, in the present study two categorically different targets were presented simultaneously among a variable number of nontargets. Subjects were shown brief displays of two target letters among either 2, 4 or 6 nontarget digits. Subjects responded 'same' when the two letters were identical and 'different' otherwise. Since the 'same-different' response reflects the combined outcome of the simultaneous targets, late-selection theory predicts that the time to match the target letters is independent of the number of nontarget digits. Alternatively, early-selection theory predicts a linear increase of reaction time with display size since the presence of more than one target disrupts parallel preattentive processing, leading to a serial search through all items in the display. The results provide evidence for the early-selection view since reaction time increased linearly with the number of categorically different nontargets. A control experiment revealed that none of the alternative explanations could account for the display size effect.

20.
The visual environment is extremely rich and complex, producing information overload for the visual system. But the environment also embodies structure in the form of redundancies and regularities that may serve to reduce complexity. How do perceivers internalize this complex informational structure? We present new evidence of visual learning that illustrates how observers learn how objects and events covary in the visual world. This information serves to guide visual processes such as object recognition and search. Our first experiment demonstrates that search and object recognition are facilitated by learned associations (covariation) between novel visual shapes. Our second experiment shows that regularities in dynamic visual environments can also be learned to guide search behavior. In both experiments, learning occurred incidentally and the memory representations were implicit. These experiments show how top-down visual knowledge, acquired through implicit learning, constrains what to expect and guides where to attend and look.
