Similar articles
20 similar articles found (search time: 15 ms)
1.
Inhibition of return facilitates visual search, biasing attention away from previously examined locations. Prior research has shown that, as a result of inhibitory tags associated with rejected distractor items, observers are slower to detect small probes presented at these tagged locations than they are to detect probes presented at locations that were unoccupied during visual search, but only when the search stimuli remain visible during the probe-detection task. Using an interrupted visual search task, in which search displays alternated with blank displays, we found that inhibitory tagging occurred in the absence of the search array when probes were presented during these blank displays. Furthermore, by manipulating participants’ attentional set, we showed that these inhibitory tags were associated only with items that the participants actively searched. Finally, by probing before the search was completed, we also showed that, early in search, processing at distractor locations was actually facilitated, and only as the search progressed did evidence for inhibitory tagging arise at those locations. These results suggest that the context of a visual search determines the presence or absence of inhibitory tagging, as well as demonstrating for the first time the temporal dynamics of location prioritization while search is ongoing.

2.
Klein (1988) reported that increased reaction times for the detection of small light probes could be used as an indicator of inhibitory tagging of rejected distractors in serial visual search tasks. Such a paradigm would be very useful in the study of the mechanics of visual search. Unfortunately, we cannot replicate the result. In this study, we found that probe reaction times were elevated at all distractor locations, relative to empty locations, following both parallel and serial search tasks. This appears to be a forward masking effect.

3.
A common search paradigm requires observers to search for a target among undivided spatial arrays of many items. Yet our visual environment is populated with items that are typically arranged within smaller (subdivided) spatial areas outlined by dividers (e.g., frames). It remains unclear how dividers impact visual search performance. In this study, we manipulated the presence and absence of frames and the number of frames subdividing search displays. Observers searched for a target O among Cs, a typically inefficient search task, and for a target C among Os, a typically efficient search. The results indicated that the presence of divider frames in a search display initially interferes with visual search tasks when targets are quickly detected (i.e., efficient search), leading to early interference; conversely, frames later facilitate visual search in tasks in which targets take longer to detect (i.e., inefficient search), leading to late facilitation. Such interference and facilitation appear only for conditions with a specific number of frames. Relative to previous studies of grouping (due to item proximity or similarity), these findings suggest that frame enclosures of multiple items may induce a grouping effect that influences search performance.
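The efficient/inefficient distinction invoked above is conventionally quantified as the slope of mean reaction time against display set size: near 0 ms/item for efficient search, roughly 20–40 ms/item for inefficient search. The following is a minimal simulation sketch of that pattern; all parameter values (intercept, slopes, noise) are invented for illustration, not taken from any study in this list.

```python
# Sketch of the RT-by-set-size logic used to classify search as efficient
# or inefficient. All numbers here are hypothetical illustrations.
import random

random.seed(1)

def simulate_rt(set_size, base_ms=450.0, slope_ms=0.0, noise_ms=20.0):
    """One trial: RT = intercept + slope * set_size + Gaussian noise."""
    return base_ms + slope_ms * set_size + random.gauss(0, noise_ms)

def mean_rt(set_size, slope_ms, n_trials=500):
    return sum(simulate_rt(set_size, slope_ms=slope_ms)
               for _ in range(n_trials)) / n_trials

set_sizes = [4, 8, 16]
# Efficient search (e.g., C among Os): slope near 0 ms/item.
efficient = {n: mean_rt(n, slope_ms=1.0) for n in set_sizes}
# Inefficient search (e.g., O among Cs): ~30 ms/item is typical.
inefficient = {n: mean_rt(n, slope_ms=30.0) for n in set_sizes}

def slope(rts):
    """Estimate ms/item from the two extreme set sizes."""
    return (rts[16] - rts[4]) / (16 - 4)

print(f"efficient slope:   {slope(efficient):.1f} ms/item")
print(f"inefficient slope: {slope(inefficient):.1f} ms/item")
```

With enough trials, the recovered slopes converge on the generating values, which is why search slopes, rather than raw RTs, are the standard diagnostic of search efficiency.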

4.
Inhibitory tagging on randomly moving objects
Inhibitory tagging is a process that prevents focal attention from revisiting previously checked items in inefficient searches, facilitating search performance. Recent studies suggested that inhibitory tagging is object rather than location based, but it was unclear whether inhibitory tagging operates on moving objects. The present study investigated the tagging effect on moving objects. Participants were asked to search for a moving target among randomly and independently moving distractors. After either efficient or inefficient search, participants performed a probe detection task that measured the inhibitory effect on search items. The inhibitory effect on distractors was observed only after inefficient searches. The present results support the concept of object-based inhibitory tagging.

5.
No need for inhibitory tagging of locations in visual search
Participants find it no harder to search for a T among Ls when the items move around at velocities of up to 10.8°/sec than when the items remain static. This result demonstrates that inhibitory tagging of locations is not necessary for successful search, and it provides a challenge to any models of visual search that use a fixed location as the index during accumulation and storage of information about search items.

6.
Space-based accounts of visual attention assume that we select a limited spatial region independent of the number of objects it contains. In contrast, object-based accounts suggest that we select objects independent of their location. We investigated the boundary conditions on the selection modes of attention in a series of tachistoscopic visual search tasks, where the nature of capacity limitations on search was examined. Observers had to search for a horizontally oriented target ellipse among differently oriented distractor ellipses. Across four experiments, we orthogonally manipulated target-distractor (TD) similarity and distractor-distractor (DD) similarity. Each experiment consisted of a two-way design: Firstly, with a central cue, we indicated the spatial extent of the relevant search area. Secondly, we varied the number and spatial proximity of items in the display. Performance could be accounted for in terms of capacity limited object-based attention, assuming also that the spatial proximity of items enhances performance when there is high DD-similarity (and grouping). In addition, the cueing effect interacted with spatial proximity when DD-similarity was high, suggesting that grouping was influenced by attention. We propose that any capacity limits on visual search are due to object-based attention, and that the formation of perceptual objects and object groups is also subject to attentional modulation.

7.
Three experiments examined the domain of visual selective attention (i.e., feature-based selection vs. object-based selection). Experiment 1 extended the requirements of the visual search task by requiring a feature discrimination response to target elements presented for short durations (30-105 msec). Targets were embedded in 47 distractor elements and were defined by either a distinct color or a distinct orientation. Observers made a discrimination response to either the target's color or its orientation. When the target-defining feature and the feature to be discriminated were the same (matched conditions), accuracy was enhanced relative to when these features belonged to separate dimensions (mismatched conditions). In Experiment 2, similar results were found in a task in which the target-defining dimension varied from trial to trial and observers performed both color and orientation discriminations on every trial. The results from these two experiments are consistent with feature-based attentional selection, but not with object-based selection. Experiment 3 extended these findings by showing that the effect is rooted in the overlap between target and distractor values in the stimulus set. The results are discussed in the context of recent models of visual selective attention.

8.
Two experiments investigated whether the conjunctive nature of nontarget items influenced search for a conjunction target. Each experiment consisted of two conditions. In both conditions, the target item was a red bar tilted to the right, among white tilted bars and vertical red bars. As well as color and orientation, display items also differed in terms of size. Size was irrelevant to search in that the size of the target varied randomly from trial to trial. In one condition, the size of items correlated with the other attributes of display items (e.g., all red items were big and all white items were small). In the other condition, the size of items varied randomly (i.e., some red items were small and some were big, and some white items were big and some were small). Search was more efficient in the size-correlated condition, consistent with the parallel coding of conjunctions in visual search.

9.
Probing distractor inhibition in visual search: inhibition of return
The role of inhibition of return (IOR) in serial visual search was reinvestigated using R. Klein's (1988) paradigm of a search task followed by a probe-detection task. Probes were presented at either the location of a potentially inhibited search distractor or an empty location. No evidence of IOR was obtained when the search objects were removed after the search-task response. But when the search objects remained on, a pattern of effects similar to Klein's results emerged. However, when just the search-critical object parts were removed or when participants received immediate error feedback to prevent rechecking of the search objects, IOR effects were observed only when probes appeared equally likely at search array and empty locations. These results support the operation of object-based IOR in serial visual search, with IOR demonstrable only when rechecking is prevented (facilitating task switching) and monitoring for probes is not biased toward search objects.

10.
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory–visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential association predictions. We also recently showed that the underlying object-based auditory–visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory–visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory–visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. 
Thus, object-based auditory–visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects.
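The contrast drawn above, between increased detectability of guns and a mere bias toward "gun-present" responses, is the standard signal-detection distinction between sensitivity (d′) and criterion (c). The sketch below shows that computation; the hit and false-alarm rates are hypothetical values chosen for illustration, not data from the study.

```python
# Signal-detection sketch: d' = z(H) - z(FA); c = -(z(H) + z(FA)) / 2.
# Hit and false-alarm rates below are invented for illustration.
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse standard-normal CDF)

def dprime_and_criterion(hit_rate, fa_rate):
    """Return sensitivity d' and criterion c from hit/false-alarm rates."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Hypothetical no-sound baseline vs. characteristic-sound condition.
d_base, c_base = dprime_and_criterion(hit_rate=0.80, fa_rate=0.10)
d_snd, c_snd = dprime_and_criterion(hit_rate=0.90, fa_rate=0.05)

# Higher d' with a roughly unchanged criterion means better detectability,
# not merely a shift toward "gun present" responses.
print(f"baseline: d'={d_base:.2f}, c={c_base:.2f}")
print(f"sound:    d'={d_snd:.2f}, c={c_snd:.2f}")
```

A pure response bias would instead move c while leaving d′ constant, which is why the abstract's claim about detectability requires showing that both gun-present and gun-absent responses improved.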

11.

In the process of searching for targets, our visual system not only prioritizes target-relevant features, but can also suppress nontarget-related features. Although this template for rejection has been well demonstrated, whether the features (i.e. the objects) or locations are suppressed remains unresolved due to the experimental paradigms in previous studies: in particular, object-based templates for rejection were confounded with location-based inhibition in visual search paradigms. The present study examined an object-based template for rejection by introducing search arrays comprised of two overlapping shapes with search items distributed along the shape's contours. To discourage location-based inhibition, the two shapes were spatially intermingled (Experiment 1), rotated (Experiment 2), or jiggled (Experiment 3). Participants identified the colour of a target cross. The pre-cue indicated the shape in which the target would appear (positive cue condition), the shape in which only distractors would appear (negative cue condition), or the shape that was irrelevant to the current search array (neutral cue condition). In all three experiments, the reaction times for the negative cue condition were shorter than those for the neutral cue condition, which is a hallmark of the object-based template for rejection effect, even under conditions in which location-based inhibition was discouraged.

12.
O'Riordan, M. (2000). Cognition, 77(2), 81–96.
The performance of children with and without autism was compared in object-based positive and negative priming tasks within a visual search procedure. Object-based positive and negative priming effects were found in both groups of children. This result provides the first evidence for the activation of object-based representations during visual search task performance and further supports the notion that both excitatory and inhibitory guidance mechanisms are involved in target location in visual search. The children with autism were overall better than the typically developing children at visual search, thus replicating demonstrations of superior discrimination in autism. Furthermore, the groups did not differ in the magnitude of either the positive or the negative priming effects. This finding suggests that excitatory and inhibitory control operate comparably in autism and normal development. These results are discussed in the light of the superior ability of individuals with autism to discriminate between items. More specifically, it is argued that superior discrimination in autism does not result from enhanced top-down excitatory and inhibitory control.

13.
It has been argued that visual search is a valid model for human foraging. However, the two tasks differ greatly in terms of the coding of space and the effort required to search. Here we describe a direct comparison between visually guided searches (as studied in visual search tasks) and foraging that is not based upon a visually distinct target, within the same context. The experiment was conducted in a novel apparatus, where search locations were indicated by an array of lights embedded in the floor. In visually guided conditions participants searched for a target defined by the presence of a feature (red target amongst green distractors) or the absence of a feature (green target amongst red and green distractors). Despite the expanded search scale and the different response requirements, these conditions followed the pattern found in conventional visual search paradigms: feature-present search latencies were not linearly related to display size, whereas feature-absent searches were longer as the number of distractors increased. In a non-visually guided foraging condition, participants searched for a target that was only visible once the switch was activated. This resulted in far longer latencies that rose markedly with display size. Compared to eye-movements in previous visual search studies, there were few revisit errors to previously inspected locations in this condition. This demonstrates the important distinction between visually guided and non-visually guided foraging processes, and shows that the visual search paradigm is an equivocal model for general search in any context. We suggest a comprehensive model of human spatial search behaviour needs to include search at a small and large scale as well as visually guided and non-visually guided search.

14.
The relation between attention demand and the number of items in the array (array size) was investigated by engaging subjects in a primary search task and measuring spare capacity at different points in time, with a secondary tone task that occurred randomly on half of the trials. The major variables in both tasks were array size (4, 8, or 12 letters) and stimulus onset asynchrony (SOA: −400, −200, 0, 200, 400, and 600 msec). Subjects were able to perform the tasks quite independently, and most of the interference that resulted from nonindependence appeared in tone-task performance. The amount of interference (i.e., maximum tone reaction time) was independent of array size, but the duration of interference (i.e., the number of SOAs at which tone reaction time was elevated) increased with array size. The findings were interpreted as supporting unlimited-capacity models of visual search performance.

15.
An analysis of the time course of attention in preview search
We used a probe dot procedure to examine the time course of attention in preview search (Watson & Humphreys, 1997). Participants searched for an outline red vertical bar among other new red horizontal bars and old green vertical bars, superimposed on a blue background grid. Following the reaction time response for search, the participants had to decide whether a probe dot had briefly been presented. Previews appeared for 1,000 msec and were immediately followed by search displays. In Experiment 1, we demonstrated a standard preview benefit relative to a conjunction search baseline. In Experiment 2, search was combined with the probe task. Probes were more difficult to detect when they were presented 1,200 msec, relative to 800 msec, after the preview, but at both intervals detection of probes at the locations of old distractors was harder than detection on new distractors or at neutral locations. Experiment 3A demonstrated that there was no difference in the detection of probes at old, neutral, and new locations when probe detection was the primary task and there was also no difference when all of the shapes appeared simultaneously in conjunction search (Experiment 3B). In a final experiment (Experiment 4), we demonstrated that detection on old items was facilitated (relative to neutral locations and probes at the locations of new distractors) when the probes appeared 200 msec after previews, whereas there was worse detection on old items when the probes followed 800 msec after previews. We discuss the results in terms of visual marking and attention capture processes in visual search.

16.
In three experiments we explored whether memory for previous locations of search items influences search efficiency more as the difficulty of exhaustive search increases. Difficulty was manipulated by varying item eccentricity and item similarity (discriminability). Participants searched through items placed at three levels of eccentricity. The search displays were either identical on every trial (repeated condition) or the items were randomly reorganised from trial to trial (random condition), and search items were either relatively easy or difficult to discriminate from each other. Search was both faster and more efficient (i.e., search slopes were shallower) in the repeated condition than in the random condition. More importantly, this advantage for repeated displays was greater (1) for items that were more difficult to discriminate and (2) for eccentric targets when items were easily discriminable. Thus, increasing target eccentricity and reducing item discriminability both increase the influence of memory during search.

17.
In “hybrid” search, observers search a visual display for any of several targets held in memory. It is known that the contents of the memory set can guide visual search (e.g., if the memorized targets are all animals, visual attention can be guided away from signs). It is not known if the visual display can guide memory search (e.g., if the memory set is composed of signs and animals, can a visual display of signs restrict memory search to just the signs?). In three hybrid search experiments, participants memorized sets of items that belonged to either one or several categories. Participants were then presented with visual displays containing multiple items, also drawn from one or several categories. Participants were asked to determine if any of the items from their current memory set were present in the visual display. We replicate the finding that visual search can be guided by the contents of memory. We find weaker, novel evidence that memory search can be guided by the contents of the visual display.

18.
In visual search tasks participants search for a target among distractors in strictly controlled displays. We show that visual search principles observed in these tasks do not necessarily apply in more ecologically valid search conditions, using dynamic and complex displays. A multi-element asynchronous dynamic (MAD) visual search was developed in which the stimuli could either be moving, stationary, and/or changing in luminance. The set sizes were high and participants did not know the specific target template. Experiments 1 through 4 showed that, contrary to previous studies, search for moving items was less efficient than search for static items and targets were missed a high percentage of the time. However, error rates were reduced when participants knew the exact target template (Experiment 5) and the difference in search efficiency for moving and stationary targets disappeared when lower set sizes were used (Experiment 6). In all experiments there was no benefit to finding targets defined by a luminance change. The data show that visual search principles previously shown in the literature do not apply to these more complex and "realistically" driven displays.

19.
Some primitive mechanisms of spatial attention
Pylyshyn, Z. (1994). Cognition, 50(1–3), 363–384.
Our approach to studying the architecture of mind has been to look for certain extremely simple mechanisms which we have good reason to suspect must exist, and to confirm these empirically. We have been concerned primarily with certain low-level mechanisms in vision which allow the visual system to simultaneously index items at multiple spatial locations, and have developed a provisional model (called the FINST model) of these mechanisms. Among the studies we have carried out to support these ideas are ones showing that subjects can track multiple independent moving targets in a field of identical distractors, and that their ability to track these targets and detect changes occurring on them does not generalize to non-targets or to items lying inside the convex polygon that they form (so that a zoom lens of attention does not fit the data). We have used a visual search paradigm to show that (serial or parallel) search can be confined to a subset of indexed items and the layout of these items is of little importance. We have also carried out a large number of studies on the phenomenon known as subitizing and have shown that subitizing occurs only when items can be preattentively individuated and in those cases location precuing has little effect, compared with when counting occurs, which suggests that subitizing may be carried out by counting active indexes rather than items in the visual field. And finally we have run studies showing that a certain motion effect which is sensitive to attention can occur at multiple precued loci. We believe that taken as a whole the evidence is most parsimoniously accounted for in terms of the hypothesis that there is an early preattentive stage in vision where a small number of salient items in the visual field are indexed and thereby made readily accessible for a variety of visual tasks.

20.
In short-term probe-recognition tasks, observers make speeded old–new recognition judgments for items that are members of short lists. However, long-term memory (LTM) for items from previous lists influences current-list performance. The current experiment pursued the nature of these long-term influences—in particular, whether they emerged from item-familiarity or item-response-learning mechanisms. Subjects engaged in varied-mapping (VM) and consistent-mapping (CM) short-term probe-recognition tasks (e.g., Schneider & Shiffrin, Psychological Review, 84, 1–66, 1977). The key manipulation was to vary the frequency with which individual items were presented across trials. We observed a striking dissociation: Whereas increased presentation frequency led to benefits in performance for both old and new test probes in CM search, it resulted in interference effects for both old and new test probes in VM search. Formal modeling suggested that a form of item-response learning took place in both conditions: Each presentation of a test probe led to the storage of that test probe—along with its associated “old” or “new” response—as an exemplar in LTM. These item-response pairs were retrieved along with current-list items in driving observers’ old–new recognition judgments. We conclude that item-response learning is a core component of the LTM mechanisms that influence CM and VM memory search.
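The item-response-learning account described above can be illustrated with a bare-bones exemplar model in the spirit of summed-similarity accounts: each test probe is stored together with its response, and a new judgment is driven by summed similarity to the stored exemplars. The one-dimensional item codes, similarity gradient, and stored exemplars below are invented toy values, not the authors' fitted model.

```python
# Toy exemplar-memory sketch of item-response learning. All item codes and
# parameters are hypothetical illustrations, not fitted model values.
import math

def similarity(a, b, sensitivity=2.0):
    """Exponential similarity gradient over a 1-D 'item' code."""
    return math.exp(-sensitivity * abs(a - b))

# Long-term store: (item code, stored response) pairs accumulated over trials.
ltm = [(0.0, "old"), (0.1, "old"), (2.0, "new"), (2.2, "new")]

def evidence(probe):
    """Summed similarity of the probe to 'old'- vs. 'new'-tagged exemplars."""
    old = sum(similarity(probe, x) for x, r in ltm if r == "old")
    new = sum(similarity(probe, x) for x, r in ltm if r == "new")
    return old, new

def judge(probe):
    old, new = evidence(probe)
    return "old" if old > new else "new"

# A probe near frequently stored 'old' exemplars is pulled toward "old",
# and vice versa, so stored item-response pairs bias current judgments.
print(judge(0.05))  # near the 'old' cluster
print(judge(2.1))   # near the 'new' cluster
```

Because every stored pair contributes to the summed evidence, raising an item's presentation frequency can either help (when its stored response matches the current correct response, as in CM) or hurt (when the stored response conflicts, as in VM), which is the dissociation the abstract reports.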


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号