Similar Articles
20 similar articles found (search time: 78 ms)
1.
Visual search can be resumed more rapidly following a brief interruption to an old display than it can be initiated on a new display, pointing to a critical role for memory in search (Lleras, Rensink, & Enns, 2005). Here, we examine how this rapid resumption is affected by changes made to the display during the interruption of search. Rapid resumption was found to depend on the prior presentation of the target, not merely the distractor items (Experiment 1), and was unaffected by the relocation of all distractor items (Experiment 2). Further, whereas changes to response-irrelevant features of the target did not eliminate rapid resumption (Experiment 3), changes to response-relevant features did (Experiment 4). These results point to the target specificity of rapid resumption and are consistent with reentrant theories of visual awareness.

2.
Three experiments investigated the role of eye movements in the rapid resumption of an interrupted search. Passive monitoring of eye position in Experiment 1 showed that rapid resumption was associated with a short distance between the eye and the target on the next-to-last look before target detection. Experiments 2 and 3 used two different methods for presenting the target to the point of eye fixation on some trials. If eye position alone is predictive, rapid resumption should increase when the target is near fixation. The results showed that gaze-contingent targets increased overall search success, but that the proportion of rapid responses decreased dramatically. We conclude that rather than depending on a high-quality single look at a search target, rapid resumption of search depends on two glances: a first glance in which a hypothesis is formed, and a second glance in which the hypothesis is confirmed.

3.
In three experiments, we examined possible relationships between the spatial focus of attention and the rapid resumption of a visual search following a brief interruption. In Experiment 1, we tested the role of involuntary (exogenous) spatial orienting to one region (quadrant) of a search display; in Experiment 2, we tested the role of voluntary (endogenous) spatial orienting to the same region; and in Experiment 3, we tested the role of voluntary orienting to the specific location in which the target item appeared. All three experiments indicated that spatial orienting speeds correct responding and greatly increases the probability of search success in the look immediately following the presentation of a spatial cue. However, these benefits of spatial cues were also shown to be completely independent of the rapid resumption effect, which depends on observers’ forming a perceptual hypothesis about a target in one look, but being unable to confirm that hypothesis until a second look (Lleras, Rensink, & Enns, 2005).

4.
In this study, 7- to 19-year-olds performed an interrupted visual search task in two experiments. Our question was whether the tendency to respond within 500 ms after a second glimpse of a display (the rapid resumption effect [Psychological Science, 16 (2005) 684-688]) would increase with age in the same way as overall search efficiency. The results indicated no correlation of rapid resumption with search speed either across age groups (7, 9, 11, and 19 years) or at the level of individual participants. Moreover, relocating the target randomly between looks reduced the rate of rapid resumption in a very similar way at each age. These results imply that implicit perceptual prediction during search is invariant across this age range and is distinct from other critical processes such as feature integration and control over spatial attention.

5.
Partial orientation pop-out helps difficult search for orientation
We interrupted pop-out search before it produced a detection response by adding extra distractors to the search display. We show that when pop-out for an orientation target fails because of this interruption, it nevertheless provides useful information to the processes responsible for difficult search. That is, partial pop-out assists difficult search. This interaction has also been found for color stimuli (Olds, Cowan, & Jolicoeur, 2000a, 2000b). These results indicate that interactions and/or overlap between the mechanisms responsible for pop-out and the mechanisms responsible for difficult search may be quite general in early visual selection.

6.
Everyday visual experience involves making implicit predictions, as revealed by our surprise when something disturbs our expectations. Many theories of vision have been premised on the central role played by prediction. Yet, implicit prediction in human vision has been difficult to assess in the laboratory, and many results have not distinguished between the indisputably important role of memory and the future-oriented aspect of prediction. Now, a new and unexpected finding - that humans can resume an interrupted visual search much faster than they can start a new search - offers new hope, because the rapid resumption of a search seems to depend on participants forming an implicit prediction of what they will see after the interruption. These findings combined with results of recent neurophysiology studies provide a framework for studying implicit prediction in perception.

7.
How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance and to explain challenging behavioral data on positive-negative, spatial-object, and local-distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex.
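For readers who want a concrete feel for the incremental refinement described above, here is a minimal, hypothetical Python sketch of the general idea: a coarse scene-gist prior over candidate target locations is sharpened by evidence gathered on successive fixations. It is not the ARTSCENE Search implementation; the function name, the multiplicative update rule, and the example numbers are all assumptions made purely for illustration.

```python
# Illustrative sketch only (not the ARTSCENE Search model): a coarse scene-gist
# prior over candidate target locations is refined with each fixation.
import numpy as np

def refine_location_hypothesis(gist_prior, fixation_evidence):
    """Multiply a prior over display locations by per-fixation likelihoods and renormalize."""
    posterior = np.array(gist_prior, dtype=float)
    for likelihood in fixation_evidence:   # one likelihood vector per saccade/fixation
        posterior *= likelihood            # combine new evidence with the current hypothesis
        posterior /= posterior.sum()       # keep it a probability distribution
    return posterior

# Example: four display quadrants; scene gist weakly favors quadrant 2,
# and two successive fixations sharpen that hypothesis.
gist_prior = [0.2, 0.4, 0.2, 0.2]
fixations = [[0.10, 0.60, 0.20, 0.10],
             [0.05, 0.80, 0.10, 0.05]]
print(refine_location_hypothesis(gist_prior, fixations))
```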

8.
One of the major challenges of designing an HMI for partially automated vehicles is the trade-off between providing a sufficient level of system information and avoiding driver distraction. This study aimed to investigate drivers’ glance behavior as an indicator of distraction when vehicle guidance is partially automated. Therefore, an on-road experiment was conducted comparing two versions of an in-vehicle display (during partially automated driving) and no display (during manual driving) on a heavily congested highway segment. The distribution of drivers’ total glance durations on the HMI showed that visual attention was shifted away from monitoring the central road scene towards looking at the in-vehicle display to a considerable extent. However, an analysis of the distribution of single glance durations supports the view that using partial automation and a respective HMI does not lead to a critical increase in distraction. Driving with a simplified version of the HMI had the potential to reduce glance durations on the in-vehicle display and thus its potential for distraction.

9.
Change blindness for the contents of natural scenes suggests that only items that are attended while the scene is still visible are stored, leading some to characterize our visual experiences as sparse. Experiments on iconic memory for arrays of discrete symbols or objects, however, indicate observers have access to more visual information for at least several hundred milliseconds at offset of a display. In the experiment presented here, we demonstrate an iconic memory for complex natural or real-world scenes. Using a modified change detection task in which to-be-changed objects are cued at offset of the scene, we show that more information from a natural scene is briefly stored than change blindness predicts and more than is contained in visual short-term memory. In our experiment, a cue appearing 0, 300, or 1000 msec after offset of the pre-change scene or at onset of the second scene presentation (a Post Cue) directed attention to the location of a possible change. Compared to a no-cue condition, subjects were significantly better at detecting changes and identifying what changed in the cue condition, with the cue having a diminishing effect as a function of time and no effect when its onset coincided with that of the second scene presentation. The results suggest that an iconic memory of a natural scene exists for at least 1000 msec after scene offset, from which subjects can access the identity of items in the pre-change scene. This implies that change blindness underestimates the amount of information available to the visual system from a brief glance at a natural scene.

10.
What information is available from a brief glance at a novel scene? Although previous efforts to answer this question have focused on scene categorization or object detection, real-world scenes contain a wealth of information whose perceptual availability has yet to be explored. We compared image exposure thresholds in several tasks involving basic-level categorization or global-property classification. All thresholds were remarkably short: Observers achieved 75%-correct performance with presentations ranging from 19 to 67 ms, reaching maximum performance at about 100 ms. Global-property categorization was performed with significantly less presentation time than basic-level categorization, which suggests that there exists a time during early visual processing when a scene may be classified as, for example, a large space or navigable, but not yet as a mountain or lake. Comparing the relative availability of visual information reveals bottlenecks in the accumulation of meaning. Understanding these bottlenecks provides critical insight into the computations underlying rapid visual understanding.

11.
A major issue in elementary cognition and information processing has been whether rapid search of short-term memory or a visual display can terminate when a predesignated target is found or whether it must proceed until all items are examined. This study summarizes past and recent theoretical results on the ability of self-terminating and exhaustive models to predict differences in slopes between positive (target-present) and negative (target-absent) set-size functions, as well as position effects. The empirical literature is reviewed with regard to the presence of slope differences and position effects. Theoretical investigations demonstrate that self-terminating models can readily predict the results often associated with exhaustive processing, but a very broad class of exhaustive models is incapable of predicting position effects and slope differences typically associated with self-termination. Because position effects and slope differences are found throughout the rapid search literature, we conclude that the exhaustive processing hypothesis is not tenable under common experimental conditions.
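The slope predictions contrasted in this abstract can be made concrete with a small numerical sketch. The snippet below uses illustrative (assumed) timing parameters to show the textbook expectation: a self-terminating search predicts a target-absent slope roughly twice the target-present slope, whereas a simple exhaustive model predicts equal slopes. It is a didactic simplification, not a model from the paper.

```python
# Didactic sketch with assumed parameters (400 ms base time, 40 ms per comparison).
def predicted_rt(n_items, per_item_ms=40.0, base_ms=400.0,
                 model="self-terminating", target_present=True):
    if model == "exhaustive":
        comparisons = n_items                      # every item is checked regardless of outcome
    else:                                          # self-terminating search
        comparisons = (n_items + 1) / 2 if target_present else n_items
    return base_ms + per_item_ms * comparisons

for n in (2, 4, 8):
    present = predicted_rt(n, target_present=True)    # positive (target-present) trials
    absent = predicted_rt(n, target_present=False)     # negative (target-absent) trials
    print(n, round(present), round(absent))
# Under self-termination the target-absent slope (40 ms/item) is about twice the
# target-present slope (20 ms/item); a simple exhaustive model predicts equal slopes.
```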

12.
Eye movements were recorded during the display of two images of a real-world scene that were inspected to determine whether they were the same or not (a comparative visual search task). In the displays where the pictures were different, one object had been changed, and this object was sometimes taken from another scene and was incongruent with the gist. The experiment established that incongruous objects attract eye fixations earlier than their congruous counterparts, but that this effect is not apparent until the picture has been displayed for several seconds. By controlling the visual saliency of the objects, the experiment eliminates the possibility that the incongruency effect is dependent upon the conspicuity of the changed objects. A model of scene perception is suggested whereby attention is unnecessary for the partial recognition of an object that delivers sufficient information about its visual characteristics for the viewer to know that the object is improbable in that particular scene, and in which full identification requires foveal inspection.

13.
In recent years there has been a rapid proliferation of studies demonstrating how reward learning guides visual search. However, most of these studies have focused on feature-based reward, and there has been scant evidence supporting the learning of space-based reward. We raise the possibility that the visual search apparatus is impenetrable to spatial value contingencies, even when such contingencies are learned and represented online in a separate knowledge domain. In three experiments, we interleaved a visual choice task with a visual search task in which one display quadrant produced greater monetary rewards than the remaining quadrants. We found that participants consistently exploited this spatial value contingency during the choice task but not during the search task, even when these tasks were interleaved within the same trials and when rewards were contingent on response speed. These results suggest that the expression of spatial value information is task specific and that the visual search apparatus could be impenetrable to spatial reward information. Such findings are consistent with an evolutionary framework in which the search apparatus has little to gain from spatial value information in most real-world situations.

14.
This study was designed to test whether information transmission between the perceptual and motor levels occurs continuously or in discrete steps. Ss performed visual search across nontargets that shared visual features with one of two possible targets, each assigned to a different response. In addition to reaction time, psychophysiological measures were used to assess the duration of target search and the onset of central and peripheral motor activity. Nontargets sharing features with a target selectively activated the response associated with that target, even when it was not present in the display. This suggests that information transmission to the motor level can consist of fine-grained visual information and that visual search and response selection occur in parallel.

15.
Research has shown that performing visual search while maintaining representations in visual working memory displaces up to one object's worth of information from memory. This memory displacement has previously been attributed to a nonspecific disruption of the memory representation by the mere presentation of the visual search array, and the goal of the present study was to determine whether it instead reflects the use of visual working memory in the actual search process. The first hypothesis tested was that working memory displacement occurs because observers preemptively discard about an object's worth of information from visual working memory in anticipation of performing visual search. Second, we tested the hypothesis that on target-absent trials no information is displaced from visual working memory because no target is entered into memory when search is completed. Finally, we tested whether visual working memory displacement is due to the need to select a response to the search array. The findings rule out these alternative explanations. The present study supports the hypothesis that change-detection performance is impaired by nonspecific disruption or masking when a search array appears during the retention interval.

16.
The fear response hypothesis and the associated claim that humans have an evolutionary propensity to detect threats automatically in their immediate visual environment are critically appraised. This review focuses on reports of visual search experiments in which participants were tested with speeded oddball tasks in which the search displays contained photographic images of naturally occurring entities. In such tasks, participants have to judge whether all the images are from one category or whether the display contains a distinctive image. The evidence, which has been used to support the fear response hypothesis, is assessed against a series of concerns that relate to stimulus factors and stimulus selection. It is shown that when careful consideration is given to such methodological details, it becomes very difficult to defend the fear response hypothesis. It is concluded that, at present, the fear response hypothesis has no convincing empirical support, and it is urged that, in the future, researchers who wish to study visual threat detection take stimulus selection much more seriously.

17.
Postattentive vision
Much research has examined preattentive vision: visual representation prior to the arrival of attention. Most vision research concerns attended visual stimuli; very little research has considered postattentive vision. What is the visual representation of a previously attended object once attention is deployed elsewhere? The authors argue that perceptual effects of attention vanish once attention is redeployed. Experiments 1-6 were visual search studies. In standard search, participants looked for a target item among distractor items. On each trial, a new search display was presented. These tasks were compared to repeated search tasks in which the search display was not changed. On successive trials, participants searched the same display for new targets. Results showed that if search was inefficient when participants searched a display the first time, it was inefficient when the same, unchanging display was searched the second, fifth, or 350th time. Experiments 7 and 8 made a similar point with a curve tracing paradigm. The results have implications for an understanding of scene perception, change detection, and the relationship of vision to memory.

18.
Target prevalence influences visual search behavior. At low target prevalence, miss rates are high and false alarms are low, while the opposite is true at high prevalence. Several models of search aim to describe search behavior, one of which has been specifically intended to model search at varying prevalence levels. The multiple decision model (Wolfe & Van Wert, Current Biology, 20(2), 121–124, 2010) posits that all searches that end before the observer detects a target result in a target-absent response. However, researchers have found very high false alarms in high-prevalence searches, suggesting that prevalence rates may be used as a source of information to make “educated guesses” after search termination. Here, we further examine the ability of prevalence level and knowledge gained during visual search to influence guessing rates. We manipulate target prevalence and the amount of information that an observer accumulates about a search display prior to making a response to test whether these sources of evidence are used to inform target-present guess rates. We find that observers use both information about target prevalence rates and information about the proportion of the array inspected prior to making a response, allowing them to make an informed and statistically driven guess about the target’s presence.
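As an illustration of how prevalence and the proportion of the display already inspected could jointly support a statistically driven guess, here is a minimal hypothetical sketch. It assumes, purely for simplicity, that any target in the inspected portion would have been found; it is not the multiple decision model, and the function name and example values are invented for illustration.

```python
# Hypothetical sketch: posterior probability the target is present given that search
# quit without finding it, assuming perfect detection within the inspected portion.
def p_target_present_after_quitting(prevalence, proportion_inspected):
    p_present_and_missed = prevalence * (1.0 - proportion_inspected)
    p_absent = 1.0 - prevalence
    return p_present_and_missed / (p_present_and_missed + p_absent)

# High prevalence with little of the array inspected warrants a high guess rate;
# low prevalence (or a nearly exhausted display) does not.
print(p_target_present_after_quitting(prevalence=0.9, proportion_inspected=0.25))  # ~0.87
print(p_target_present_after_quitting(prevalence=0.1, proportion_inspected=0.25))  # ~0.08
```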

19.
When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

20.
Evidence for preserved representations in change blindness
People often fail to detect large changes to scenes, provided that the changes occur during a visual disruption. This phenomenon, known as "change blindness," occurs both in the laboratory and in real-world situations in which changes occur unexpectedly. The pervasiveness of the inability to detect changes is consistent with the theoretical notion that we internally represent relatively little information from our visual world from one glance at a scene to the next. However, evidence for change blindness does not necessarily imply the absence of such a representation: people could also miss changes if they fail to compare an existing representation of the pre-change scene to the post-change scene. In three experiments, we show that people often do have a representation of some aspects of the pre-change scene even when they fail to report the change. And, in fact, they appear to "discover" this memory and can explicitly report details of a changed object in response to probing questions. The results of these real-world change detection studies are discussed in the context of broader claims about change blindness.
