Similar documents
 20 similar documents found (search time: 15 ms)
1.
Preattentive vision and perceptual groups   Cited by: 1 (self-citations: 0; citations by others: 1)
M Bravo  R Blake 《Perception》1990,19(4):515-522
Recent evidence suggests that preattentive processing may not be limited to the analysis of simple stimulus features, as previously thought. To explore this idea, a visual search task was used to test whether the shapes of several perceptual groups can be processed in parallel. Textured displays that give rise to strong perceptual grouping were used to create figures on a background. Search times for a target figure distinguished by a unique shape were found to be independent of the number of distractor figures in the display. This result indicates that perceptual groups may be processed in parallel and suggests an expanded role for preattentive processing in vision.

2.

It is well known that human information processing comprises several distinct subprocesses—namely, the perceptual, central, and motor stages. In each stage, attention plays an important role. Specifically, one type of attention—perceptual attention—operates to detect and identify a sensory input. Following this, another class of attention—central attention—is involved in working memory encoding and response selection at the central stage. While perceptual attention and central attention are known to be separate, distinct processes, some researchers have reported that loading central attention postpones the deployment of the perceptual attention needed to perform a spatial configuration search. We tested whether a similar pattern of results would emerge when a different kind of search task is used. To do so, we had participants perform a visual search task of searching for a feature conjunction target, taxing perceptual attention while they were engaged in central processes, such as working memory encoding and response selection. The results showed that perceptual processing of conjunction search stimuli could be carried out concurrently with central processes. These results suggest that the nature of the concurrent visual search process is a determinant of the dynamic relationship between perceptual attention deployed for visual search and central attention needed for working memory encoding and response selection.


3.
An important component of routine visual behavior is the ability to find one item in a visual world filled with other, distracting items. This ability to perform visual search has been the subject of a large body of research in the past 15 years. This paper reviews the visual search literature and presents a model of human search behavior. Built upon the work of Neisser, Treisman, Julesz, and others, the model distinguishes between a preattentive, massively parallel stage that processes information about basic visual features (color, motion, various depth cues, etc.) across large portions of the visual field and a subsequent limited-capacity stage that performs other, more complex operations (e.g., face recognition, reading, object identification) over a limited portion of the visual field. The spatial deployment of the limited-capacity process is under attentional control. The heart of the guided search model is the idea that attentional deployment of limited resources is guided by the output of the earlier parallel processes. Guided Search 2.0 (GS2) is a revision of the model in which virtually all aspects of the model have been made more explicit and/or revised in light of new data. The paper is organized into four parts: Part 1 presents the model and the details of its computer simulation. Part 2 reviews the visual search literature on preattentive processing of basic features and shows how the GS2 simulation reproduces those results. Part 3 reviews the literature on the attentional deployment of limited-capacity processes in conjunction and serial searches and shows how the simulation handles those conditions. Finally, Part 4 deals with shortcomings of the model and unresolved issues.

4.
We used visual search to explore whether the preattentive mechanisms that enable rapid detection of facial expressions are driven by visual information from the displacement of features in expressions, or by other factors such as affect. We measured search slopes for luminance- and contrast-equated images of facial expressions and anti-expressions of six emotions (anger, fear, disgust, surprise, happiness, and sadness). Anti-expressions have facial feature displacements of a magnitude equivalent to their corresponding expressions, but different affective content. There was a strong correlation between these search slopes and the magnitude of feature displacements in expressions and anti-expressions, indicating that feature displacement had an effect on search performance. There were significant differences between search slopes for expressions and anti-expressions of happiness, sadness, anger, and surprise, which could not be explained in terms of feature differences, suggesting that preattentive mechanisms were sensitive to other factors. A categorization task confirmed that the affective content of expressions and anti-expressions of each of these emotions differed, suggesting that signals of affect might well have been influencing attention and search performance. Our results support a picture in which preattentive mechanisms may be driven by factors at a number of levels, including affect and the magnitude of feature displacement. We note that indirect effects of feature displacement, such as changes in local contrast, may well affect preattentive processing. These are most likely to be nonlinearly related to feature displacement and are, we argue, an important consideration for any study using images of expression to explore how affect guides attention.
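Several abstracts in this listing report "search slopes": the slope of mean response time regressed on display set size, with near-flat slopes conventionally read as evidence of parallel, preattentive processing and steep slopes as serial deployment of attention. A minimal sketch of the computation follows; the RT values are hypothetical and chosen only for illustration.

```python
# Search slope: least-squares slope of mean RT (ms) against display set size.
# RT values below are hypothetical, for illustration only.

def search_slope(set_sizes, mean_rts):
    """Return the least-squares slope (ms per item) of RT vs. set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Flat slopes (a few ms/item or less) suggest efficient, parallel "pop-out"
# search; steep slopes suggest serial shifts of focal attention.
pop_out = search_slope([4, 8, 16], [520, 523, 527])      # under 1 ms/item
conjunction = search_slope([4, 8, 16], [560, 660, 860])  # 25 ms/item
```

In practice slopes are fit per participant and condition and then compared across conditions, as in the expression/anti-expression comparison above.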

5.
This article explores the effects of perceptual grouping on search for targets defined by separate features or by conjunctions of features. Treisman and Gelade proposed a feature-integration theory of attention, which claims that in the absence of prior knowledge, the separable features of objects are correctly combined only when focused attention is directed to each item in turn. If items are preattentively grouped, however, attention may be directed to groups rather than to single items whenever no recombination of features within a group could generate an illusory target. This prediction is confirmed: In search for conjunctions, subjects appear to scan serially between groups rather than items. The scanning rate shows little effect of the spatial density of distractors, suggesting that it reflects serial fixations of attention rather than eye movements. Search for features, on the other hand, appears to be independent of perceptual grouping, suggesting that features are detected preattentively. A conjunction target can be camouflaged at the preattentive level by placing it at the boundary between two adjacent groups, each of which shares one of its features. This suggests that preattentive grouping creates separate feature maps within each separable dimension rather than one global configuration.

6.
Using a response competition paradigm, we investigated the ability to ignore target-response-compatible, target-response-incompatible, and neutral visual and auditory distractors presented during a visual search task. The perceptual load model of attention (e.g., Lavie & Tsal, 1994) states that task-relevant processing load determines irrelevant distractor processing, such that increasing processing load prevents distractor processing. In three experiments, participants searched sets of one (easy search) or six (hard search) similar items. In Experiment 1, visual distractors influenced reaction time (RT) and accuracy only for easy searches, in line with the perceptual load model. Surprisingly, auditory distractors yielded larger distractor compatibility effects (median RT for incompatible trials minus median RT for compatible trials) for hard searches than for easy searches. In Experiments 2 and 3, consistent RT benefits with response-compatible and RT costs with response-incompatible auditory distractors occurred only for hard searches. We suggest that auditory distractors are processed regardless of visual perceptual load but that the ability to inhibit cross-modal influence from auditory distractors is reduced under high visual load.
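The distractor compatibility effect is defined directly in the abstract above: median RT on response-incompatible trials minus median RT on response-compatible trials. A minimal sketch of that arithmetic, with hypothetical RT values for illustration only:

```python
from statistics import median

# Distractor compatibility effect, as defined in the abstract:
# median RT on response-incompatible trials minus median RT on
# response-compatible trials. All RT values (ms) are hypothetical.

def compatibility_effect(compatible_rts, incompatible_rts):
    """Return the compatibility effect in ms (positive = interference)."""
    return median(incompatible_rts) - median(compatible_rts)

easy_effect = compatibility_effect([480, 495, 510, 502], [530, 545, 538, 560])
hard_effect = compatibility_effect([700, 715, 708, 690], [790, 810, 805, 782])
# A larger effect for hard search than for easy search would mirror the
# surprising auditory result described above.
```

Medians, rather than means, are used here because the abstract specifies median RTs, which are less sensitive to the long right tail typical of RT distributions.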

7.
Search for a conjunction of form and motion is greatly affected by manipulations of phase in the target and nontarget motion sets. To test whether this finding can be best explained by perceptual grouping, we moved a random set of dots in phase or counterphase with target or nontarget motion. Perceptual grouping was found to have a dramatic effect on search performance. We propose that this interaction between perceptual grouping and visual search is governed by three general rules. Our data also provide convincing evidence of the preattentive organization of a visual display into surfaces defined by common motion.

8.
9.
Load theory of attention proposes that distractor processing is reduced in tasks with high perceptual load that exhaust attentional capacity within task-relevant processing. In contrast, tasks of low perceptual load leave spare capacity that spills over, resulting in the perception of task-irrelevant, potentially distracting stimuli. Tsal and Benoni (2010) find that distractor response competition effects can be reduced under conditions with a high search set size but low perceptual load (due to a singleton color target). They claim that the usual effect of search set size on distractor processing is not due to attentional load but instead attribute this to lower level visual interference. Here, we propose an account for their findings within load theory. We argue that in tasks of low perceptual load but high set size, an irrelevant distractor competes with the search nontargets for remaining capacity. Thus, distractor processing is reduced under conditions in which the search nontargets receive the spillover of capacity instead of the irrelevant distractor. We report a new experiment testing this prediction. Our new results demonstrate that, when peripheral distractor processing is reduced, it is the search nontargets nearest to the target that are perceived instead. Our findings provide new evidence for the spare capacity spillover hypothesis made by load theory and rule out accounts in terms of lower level visual interference (or mere "dilution") for cases of reduced distractor processing under low load in displays of high set size. We also discuss additional evidence that discounts the viability of Tsal and Benoni's dilution account as an alternative to perceptual load.

10.
Executive working memory (WM) load reduces the efficiency of visual search, but the mechanisms by which this occurs are not fully known. In the present study, we assessed the effect of executive load on perceptual processing during search. Participants performed a serial oculomotor search task, looking for a circle target among gapped-circle distractors. The participants performed the task under high and low executive WM load, and the visual quality (Experiment 1) or discriminability of targets and distractors (Experiment 2) was manipulated across trials. By the logic of the additive factors method (Sternberg, 1969, 1998), if WM load compromises the quality of perceptual processing during visual search, manipulations of WM load and perceptual processing difficulty should produce nonadditive effects. Contrary to this prediction, the effects of WM load and perceptual difficulty were additive. The results imply that executive WM load does not degrade perceptual analysis during visual search.
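The additive-factors inference in the abstract above can be made concrete: in a 2 (WM load) × 2 (perceptual difficulty) design, a near-zero interaction contrast means the two factors' effects on RT are additive and hence, by Sternberg's logic, affect different processing stages. A sketch with hypothetical cell means (in ms), for illustration only:

```python
# Additive-factors logic (Sternberg): if two factors affect the same
# processing stage, their RT effects should interact; if they affect
# different stages, their effects should be additive (zero interaction).
# Cell means (ms) below are hypothetical, for illustration only.

def interaction_contrast(rt):
    """Interaction term of a 2x2 design, given rt[load][difficulty] in ms.
    A value near zero indicates additive effects (separate stages)."""
    return ((rt["high"]["hard"] - rt["high"]["easy"])
            - (rt["low"]["hard"] - rt["low"]["easy"]))

# Additive pattern: difficulty adds 80 ms at both load levels.
additive = {"low": {"easy": 600, "hard": 680},
            "high": {"easy": 700, "hard": 780}}
# Overadditive pattern: difficulty costs more under high load.
overadditive = {"low": {"easy": 600, "hard": 680},
                "high": {"easy": 700, "hard": 840}}
```

The study's additive result corresponds to the first pattern; a nonadditive (overadditive) pattern like the second would have implicated WM load in perceptual analysis.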

11.
Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signals at the perceptual stage, while it serves to select appropriate responses or consolidate sensory representations into short-term memory at the central stage. This study investigated the independence of and interaction between perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as long as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of the concurrent visual search process plays a crucial role in the functional interaction between the two different types of attention.

12.
Research on aging and visual search often requires older people to search computer screens for target letters or numbers. The aim of this experiment was to investigate age-related differences using an everyday-based visual search task in a large participant sample (n=261) aged 20-88 years. Our results show that: (1) old-old adults have more difficulty with triple conjunction searches with one highly distinctive feature than young-old and younger adults; (2) age-related declines in conjunction searches emerge in middle age and then progress throughout older age; (3) age-related declines are evident in feature searches on target-absent trials, as older people seem to search the whole display exhaustively and serially to determine a target's absence. Together, these findings suggest that declines in feature integration, guided search, perceptual grouping, and/or spreading suppression processes emerge in middle age and then progress throughout older age. Implications for enhancing everyday functioning throughout adulthood are discussed.

13.
Grouping effects on spatial attention in visual search.   Cited by: 6 (self-citations: 0; citations by others: 6)
In visual search tasks, spatial attention selects the locations containing a target or a distractor with one of the target's features, implying that spatial attention is driven by target features (M.-S. Kim & K. R. Cave, 1995). The authors measured the effects of location-based grouping processes in visual search. In searches for a color-shape combination (conjunction search), spatial probes indicated that a cluster of same-color or same-shape elements surrounding the target were grouped and selected together. However, in searches for a shape target (feature search), evidence for grouping by an irrelevant feature dimension was weaker or nonexistent. Grouping processes aided search for a visual target by selecting groups of locations that shared a common feature, although there was little or no grouping by an irrelevant feature when the target was defined by a unique salient feature.

14.
Three visual search experiments investigated redundancy gains for single and dual odd-one-out feature targets that differed from distractors in orientation, color, or both. In Experiment 1, redundant-target displays contained (a) a single target defined in 2 dimensions, (b) dual targets each defined in a different dimension, or (c) dual targets both defined in the same dimension. The redundancy gains, relative to single nonredundant targets, decreased across these conditions in that order, with violations of J. Miller's (1982) race model inequality (RMI) manifested only in the first 2 conditions. Experiment 2 systematically varied the spatial separation between dual targets each defined in a different dimension. Violations of the RMI were evident only when the 2 targets occupied nearby locations. Experiment 3 provided evidence of RMI violations by dimensionally redundant targets at both precued (likely) and noncued (unlikely) display locations. Taken together, these results suggest that there is coactivation of a common mechanism by target signals in different dimensions (not by signals in the same dimension), that the coactivation effects are spatially specific, and that the coactivated mechanisms are located at a preattentive, perceptual stage of processing.
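The race model inequality tested in this abstract is a distribution-level bound: if redundant targets are detected by a race between independent single-target detection processes, the redundant-target RT distribution can be no faster, at any time point, than the sum of the two single-target distributions. Writing F12 for the redundant-target RT distribution function and F1, F2 for the single-target distribution functions:

```latex
% Miller's (1982) race model inequality: for all times t,
F_{12}(t) \le F_{1}(t) + F_{2}(t)
```

A violation of this bound at any t rules out the race account and is taken as evidence of coactivation, as in Experiments 1-3 above.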

15.
Automaticity and preattentive processing.   Cited by: 5 (self-citations: 0; citations by others: 5)
The characteristics of automatized performance resemble those of preattentive processing in some respects. In the context of visual search tasks, these include spatially parallel processing, involuntary calling of attention, learning without awareness, and time-sharing with other tasks. However, this article reports some evidence suggesting that extended practice produces its effects through different mechanisms from those that underlie preattentive processing. The dramatic changes in search rate seem to depend not on the formation of new preattentive detectors for the task-relevant stimuli, nor on learned abstracted procedures for responding quickly and efficiently, but rather on changes that are very specific both to the particular stimuli and to the particular task used in practice. We suggest that the improved performance may depend on the accumulation of separate memory traces for each individual experience of a display (see Logan, 1988), and we show that the traces differ for conjunction search in which stimuli must be individuated and for feature search where a global response to the display is sufficient.

16.
In visual search tasks, spatial attention selects the locations containing a target or a distractor with one of the target's features, implying that spatial attention is driven by target features (M.-S. Kim & K. R. Cave, 1995). The authors measured the effects of location-based grouping processes in visual search. In searches for a color-shape combination (conjunction search), spatial probes indicated that a cluster of same-color or same-shape elements surrounding the target were grouped and selected together. However, in searches for a shape target (feature search), evidence for grouping by an irrelevant feature dimension was weaker or nonexistent. Grouping processes aided search for a visual target by selecting groups of locations that shared a common feature, although there was little or no grouping by an irrelevant feature when the target was defined by a unique salient feature.

17.
Most accounts of visual perception hold that the detection of primitive features occurs preattentively, in parallel across the visual field. Evidence that preattentive vision operates without attentional limitations comes from visual search tasks in which the detection of the presence or absence of a primitive feature is independent of the number of stimuli in a display. If the detection of primitive features occurs preattentively, in parallel and without capacity limitations, then it should not matter where attention is located in the visual field. The present study shows that even though the detection of a red element in an array of gray elements occurred in parallel without capacity limitations, the allocation of attention did have a large effect on search performance. If attention was directed to a particular region of the display and the target feature was presented elsewhere, response latencies increased. These results indicate that the classic view of preattentive vision requires revision.

18.
How do we find a target item in a visual world filled with distractors? A quarter of a century ago, in her influential 'Feature Integration Theory (FIT)', Treisman proposed a two-stage solution to the problem of visual search: a preattentive stage that could process a limited number of basic features in parallel and an attentive stage that could perform more complex acts of recognition, one object at a time. The theory posed a series of problems. What is the nature of that preattentive stage? How do serial and parallel processes interact? How does a search unfold over time? Recent work has shed new light on these issues.

19.
In visual search tasks, subjects look for a target among a variable number of distractor items. If the target is defined by a conjunction of two different features (e.g., color × orientation), efficient search is possible when parallel processing of information about color and about orientation is used to "guide" the deployment of attention to the target. Another type of conjunction search has targets defined by two instances of one type of feature (e.g., a conjunction of two colors). In this case, search is inefficient when the target is an item defined by parts of two different colors but much more efficient if the target can be described as a whole item of one color with a part of another color (Wolfe, Friedman-Hill, & Bilsky, 1994). In this paper, we show that the same distinction holds for size. "Part-whole" size × size conjunction searches are efficient; "part-part" searches are not (Experiments 1-3). In contrast, all orientation × orientation searches are inefficient (Experiments 4-6). This difference between preattentive processing of color and size, on the one hand, and orientation, on the other, may reflect structural relationships between features in real-world objects.

20.
How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.
