Similar Literature
Found 20 similar records (search time: 9 ms)
1.
Inhibition of return facilitates visual search, biasing attention away from previously examined locations. Prior research has shown that, as a result of inhibitory tags associated with rejected distractor items, observers are slower to detect small probes presented at these tagged locations than they are to detect probes presented at locations that were unoccupied during visual search, but only when the search stimuli remain visible during the probe-detection task. Using an interrupted visual search task, in which search displays alternated with blank displays, we found that inhibitory tagging occurred in the absence of the search array when probes were presented during these blank displays. Furthermore, by manipulating participants’ attentional set, we showed that these inhibitory tags were associated only with items that the participants actively searched. Finally, by probing before the search was completed, we also showed that, early in search, processing at distractor locations was actually facilitated, and only as the search progressed did evidence for inhibitory tagging arise at those locations. These results suggest that the context of a visual search determines the presence or absence of inhibitory tagging, as well as demonstrating for the first time the temporal dynamics of location prioritization while search is ongoing.

2.
In visual search, detection of a target is faster when a layout of nontarget items is repeatedly encountered, suggesting that contextual invariances can guide attention. Moreover, contextual cueing can also adapt to environmental changes. For instance, when the target undergoes a predictable (i.e., learnable) location change, then contextual cueing remains effective even after the change, suggesting that a learned context is “remapped” and adjusted to novel requirements. Here, we explored the stability of contextual remapping: Four experiments demonstrated that target location changes are only effectively remapped when both the initial and the future target positions remain predictable across the entire experiment. Otherwise, contextual remapping fails. In sum, this pattern of results suggests that multiple, predictable target locations can be associated with a given repeated context, allowing the flexible adaptation of previously learned contingencies to novel task demands.

3.
Klein (1988) reported that inhibitory tagging (i.e., inhibition of return in visual search) increased reaction times for the detection of small probes at locations where items had previously been rejected during serial visual search. It is reasonable to suppose that locations that have been attended and rejected are inhibited. However, subsequent studies did not support Klein's idea. In these studies, inhibitory tagging was tested after the items had been removed from the search displays, a paradigm that is not appropriate for testing an object-based inhibitory effect because the objects (i.e., the items) were no longer present. In the present study, we found that evidence of inhibitory tagging could be observed only when the items of the search task remained visible until the responses to the small probes were made. This appeared to be an object-based effect.

4.
Klein (1988) reported that increased reaction times for the detection of small light probes could be used as an indicator of inhibitory tagging of rejected distractors in serial visual search tasks. Such a paradigm would be very useful in the study of the mechanics of visual search. Unfortunately, we were unable to replicate the result. In this study, we found that probe reaction times were elevated at all distractor locations, relative to empty locations, following both parallel and serial search tasks. This appears to be a forward masking effect.

5.
Previous research has shown that when attention is directed sequentially to multiple locations, inhibition of return (IOR) can be observed at each location, with a larger magnitude of IOR at the more recently attended locations. In the present study we asked whether this “multiple IOR” effect influences search only for simple feature targets, as has been shown in the past, or whether it generalizes to more complex, attentionally demanding conjunction search situations. The results demonstrated that IOR effects (1) occur for more complex conjunction search environments, (2) are larger for the attentionally demanding conjunction search, and (3) occur at more locations for conjunction search than feature search. Together these data provide a clear demonstration of the robustness and responsiveness of the IOR effect across search situations—which is precisely what is expected of a phenomenon posited to facilitate efficient visual search of real-world environments. Nevertheless, these data do not firmly establish that IOR effects established by the cueing paradigm before search is implemented are the same as the IOR effects that are assumed to be established during search itself. We suggest that this disconnection between paradigms highlights a fundamental limitation of laboratory-based research.

6.
Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., "strawberry"), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing "strawberry"? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual-conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.

7.
Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing “strawberry”? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual–conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.

8.
9.
The top-down guidance of visual attention is one of the main factors allowing humans to effectively process vast amounts of incoming visual information. Nevertheless we still lack a full understanding of the visual, semantic, and memory processes governing visual attention. In this paper, we present a computational model of visual search capable of predicting the most likely positions of target objects. The model does not require a separate training phase, but learns likely target positions in an incremental fashion based on a memory of previous fixations. We evaluate the model on two search tasks and show that it outperforms saliency alone and comes close to the maximal performance of the Contextual Guidance Model (CGM; Torralba, Oliva, Castelhano, & Henderson, 2006; Ehinger, Hidalgo-Sotelo, Torralba, & Oliva, 2009), even though our model does not perform scene recognition or compute global image statistics. The search performance of our model can be further improved by combining it with the CGM.
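The abstract above describes the model only at a high level. The following is a minimal, illustrative sketch of the core idea, an incrementally updated spatial prior built from previously observed target locations and mixed with a bottom-up saliency map; the class name, grid size, Gaussian smoothing, and mixing weight are all assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

class FixationMemoryPrior:
    """Illustrative sketch of an incrementally learned prior over target
    positions; NOT the published model (all parameters are assumed)."""

    def __init__(self, grid_shape=(48, 64), smoothing_sigma=2.0):
        self.counts = np.zeros(grid_shape)   # counts of where targets were found
        self.sigma = smoothing_sigma         # spatial generalisation (assumed)

    def update(self, target_pos):
        """Record one observed target location as a (row, col) grid cell."""
        self.counts[target_pos] += 1.0

    def prior_map(self):
        """Smoothed, normalised probability map of likely target positions."""
        smoothed = gaussian_filter(self.counts, sigma=self.sigma) + 1e-6
        return smoothed / smoothed.sum()

    def combine_with_saliency(self, saliency, weight=0.5):
        """Mix the learned prior with a bottom-up saliency map of equal shape."""
        saliency = saliency / (saliency.sum() + 1e-12)
        combined = weight * self.prior_map() + (1.0 - weight) * saliency
        return combined / combined.sum()

# Usage: after each search trial, feed back where the target was found.
prior = FixationMemoryPrior()
prior.update((10, 20))
prior.update((11, 22))
saliency = np.random.rand(48, 64)            # stand-in for a real saliency map
guidance = prior.combine_with_saliency(saliency)
print(np.unravel_index(guidance.argmax(), guidance.shape))  # most likely position
```

No separate training phase is needed in a scheme of this kind: the prior simply sharpens as fixated target locations accumulate across trials, which is the sense in which such a model learns incrementally.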

10.
11.
In search of remembrance: evidence for memory in visual search (total citations: 2; self-citations: 0; citations by others: 2)
Observers searched for a target among distractors while the display items traded places every 110 ms. Search was slower when the target was always relocated to a position previously occupied by a distractor than when the items remained in place, showing the importance of memory for locations in a visual search task. Experiment 2 repeated a previous study in which items could move to any location within the display, but used a larger range of set sizes than was tested in the earlier study. A cost in search times for relocating items was found at the larger set sizes, most likely reflecting that the probability that the target would replace a distractor increased with the set size. The findings provide strong evidence for the role of memory for locations within trials in a visual search task.
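The set-size argument in the last result can be made concrete with a back-of-the-envelope calculation; the quantities below (k items placed among N candidate display locations) are assumed purely for illustration and are not the parameters of the experiment. If the relocated target is assigned to a random location other than its previous one, then

$$P(\text{target lands on a previously occupied position}) \approx \frac{k-1}{N-1},$$

which, for example, rises from roughly 0.16 with k = 4 items among N = 20 locations to roughly 0.58 with k = 12 items, so a relocation cost of the kind reported above is expected to grow with set size.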

12.
Incidental visual memory for targets and distractors in visual search (total citations: 1; self-citations: 0; citations by others: 1)
We explored incidental retention of visual details of objects encountered during search. Participants searched for conjunction targets in 32 arrays of 12 pictures of real-world objects and then performed a token discrimination task that examined their memory for visual details of the targets and distractors from the search task. The results indicate that even though participants had not been instructed to memorize the objects, the visual details of search targets and of distractor objects related to the targets were retained after the search. Distractor objects unrelated to the search target were remembered more poorly. Eye-movement measures indicated that the objects that were remembered had been looked at more frequently during search than those that were not remembered. These results support the view that detailed visual information is incidentally included in the visual representation of an object after the object is no longer in view.

13.
Using a novel sequential task, Danziger, Kingstone, and Snyder (1998) provided conclusive evidence that inhibition of return (IOR) can co-occur at multiple non-contiguous locations. They argued that their findings depended crucially on the allocation of attention to cued locations. Specifically, they hypothesized that because subjects could not predict whether an onset event was a target or a non-target, all onset events had to be attended. As a result, non-targets were tagged with inhibition. The present study tested this hypothesis by manipulating whether target onset was predictable or not. In support of Danziger et al., three experiments revealed that multiple IOR was only observed when attention had to be directed to the cued locations. Interestingly, when attention did not need to be allocated to the cued locations, and multiple IOR was abolished, an IOR effect was still observed at the most recently cued location. Two possible accounts for this single IOR effect were presented for future investigation. One account attributes the effect to motor-based inhibition as hypothesized by Klein and Taylor (1994). The alternative account attributes the effect to weak attentional capture by a peripheral cue. Together the data support the view that multiple IOR is an attentional phenomenon and, as hypothesized by Tipper, Weaver, and Watson (1996), its presence or absence is largely under the control of the observer.

14.
Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. Specifically, in Experiment 1 we asked whether the effect is capacity demanding, and therefore manipulated the set size of the display. The results indicated a clear processing-capacity requirement: the magnitude of the effect decreased at the larger set size. Consequently, in Experiment 2, we investigated whether the enhancement effect occurs only at the level of the behaviorally relevant feature or at a level common to different features. To this end, we manipulated the discriminability of the behaviorally neutral feature (color). Results showed that this manipulation influenced the action enhancement of the behaviorally relevant feature. In particular, the effect of the color manipulation on the action enhancement suggests that the action effect biases the competition between different visual features rather than enhancing the processing of the relevant feature alone. We offer a theoretical account that integrates the action-intention effect within the biased competition model of visual selective attention.

15.
Understanding the relative role of top-down and bottom-up guidance is crucial for models of visual search. Previous studies have addressed the role of top-down and bottom-up processes in search for a conjunction of features but with inconsistent results. Here, the author used an attentional capture method to address the role of top-down and bottom-up processes in conjunction search. The role of bottom-up processing was assayed by inclusion of an irrelevant-size singleton in a search for a conjunction of color and orientation. One object was uniquely larger on each trial, with chance probability of coinciding with the target; thus, the irrelevant feature of size was not predictive of the target's location. Participants searched more efficiently for the target when it was also the size singleton, and they searched less efficiently for the target when a nontarget was the size singleton. Although a conjunction target cannot be detected on the basis of bottom-up processing alone, participants used search strategies that relied significantly on bottom-up guidance in finding the target, resulting in interference from the irrelevant-size singleton.

16.
Several studies have shown that people can selectively attend to stimulus colour, e.g., in visual search, and that preknowledge of a target colour can improve response speed and accuracy. Our purpose was to use a form-identification task to determine whether valid colour precues can produce benefits and invalid cues costs. The subject had to identify the orientation of a "T"-shaped element in a ring of randomly oriented "L"s when either two or four of the elements were differently coloured. Contrary to Moore and Egeth's (1998) recent findings, colour-based attention did affect performance under data-limited conditions: colour cues produced benefits when processing load was high; when the load was reduced, they incurred only costs. Surprisingly, a valid colour cue still improved performance in the high-load condition even when its validity was reduced to chance level. Overall, the results suggest that knowledge of a target colour does not facilitate the processing of the target itself, but makes it possible to prioritize it.

17.
The role of categorization in visual search was studied in 3 colour search experiments where the target was or was not linearly separable from the distractors. The linear separability effect refers to the difficulty of searching for a target that falls between the distractors in CIE colour space (Bauer, Jolicoeur, & Cowan, 1996b). Observers performed nonlinearly separable searches where the target fell between the two types of distractors in CIE colour space. When the target and distractors fell within the same category, search was difficult. When they fell within three distinct categories, response times and search slopes were significantly reduced. The results suggest that categorical information, when available, facilitates search, reducing the linear separability effect.
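Linear separability here is a geometric notion: the target chromaticity is linearly separable from the distractor chromaticities if a single line (or plane, in higher-dimensional colour spaces) can be drawn with the target on one side and every distractor on the other, which is equivalent to the target lying outside the convex hull of the distractor coordinates. The sketch below checks that condition with a small feasibility linear program; the (x, y) chromaticities in the example are made up for illustration and are not the stimuli used by Bauer et al.

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(target, distractors):
    """True if a line/plane can place `target` strictly on one side and every
    distractor colour on the other, i.e. the target is not a convex
    combination of the distractor coordinates."""
    distractors = np.asarray(distractors, dtype=float)   # shape (n, d)
    target = np.asarray(target, dtype=float)             # shape (d,)
    n = distractors.shape[0]
    # Feasibility LP: find weights w >= 0 with sum(w) = 1 and distractors.T @ w = target.
    # If such weights exist, the target lies inside the convex hull -> not separable.
    A_eq = np.vstack([distractors.T, np.ones((1, n))])
    b_eq = np.concatenate([target, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return not res.success

# Made-up CIE (x, y) chromaticities, purely illustrative:
distractors = [(0.30, 0.35), (0.40, 0.45)]
print(linearly_separable((0.35, 0.40), distractors))  # midpoint of the segment -> False
print(linearly_separable((0.45, 0.30), distractors))  # off the segment -> True
```

With only two distractor colours, the non-separable case reduces to a target lying on the segment between them, which matches the "falls between the distractors" description in the abstract above.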

18.
Three experiments examined the effects of target-distractor (T-D) similarity and old age on the efficiency of searching for single targets and enumerating multiple targets. Experiment 1 showed that increasing T-D similarity selectively reduced the efficiency of enumerating small (< 4) numerosities (subitizing) but had little effect on enumerating larger numerosities (counting) or searching for a single target. Experiment 2 provided converging evidence using fixation frequencies and a finer range of T-D similarities. Experiment 3 showed that T-D similarity had a greater impact on older than on young adults, but only for subitizing. The data are discussed in terms of the mechanisms and architecture of early visual tagging, dissociable effects in search and enumeration, and the effects of aging on visual processing.

19.
A consistent, albeit fragile, finding over the last couple of decades has been that verbalization of hard-to-verbalize stimuli, such as faces, interferes with subsequent recognition of the described target stimulus. We sought to elicit a similar phenomenon whereby visualization interferes with verbal recognition, that is, visual overshadowing. We randomly assigned participants (n = 180) to either concrete (easy to visualize) or abstract (difficult to visualize) sentence conditions. Following presentation, participants were asked to verbalize the sentence, visualize the sentence, or work on a filler task. As predicted, visualization of an abstract verbal stimulus resulted in significantly lower recognition accuracy; unexpectedly, however, so did verbalization. The findings are discussed within the framework of fuzzy-trace theory.

20.
A consistent, albeit fragile, finding over the last couple of decades has been that verbalization of hard-to-verbalize stimuli, such as faces, interferes with subsequent recognition of the described target stimulus. We sought to elicit a similar phenomenon whereby visualization interferes with verbal recognition, that is, visual overshadowing. We randomly assigned participants (n = 180) to either concrete (easy to visualize) or abstract (difficult to visualize) sentence conditions. Following presentation, participants were asked to verbalize the sentence, visualize the sentence, or work on a filler task. As predicted, visualization of an abstract verbal stimulus resulted in significantly lower recognition accuracy; unexpectedly, however, so did verbalization. The findings are discussed within the framework of fuzzy-trace theory.
