Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
Oculomotor inhibition of return (IOR) is believed to facilitate scene scanning by decreasing the probability that gaze will return to a previously fixated location. This "foraging" hypothesis was tested during scene search and in response to sudden-onset probes at the immediately previous (one-back) fixation location. The latencies of saccades landing within 1° of the previous fixation location were elevated, consistent with oculomotor IOR. However, there was no decrease in the likelihood that the previous location would be fixated relative to distance-matched controls or an a priori baseline. Saccades exhibit an overall forward bias, but this is due to a general bias to move in the same direction and for the same distance as the last saccade (saccadic momentum) rather than to a spatially specific tendency to avoid previously fixated locations. We find no evidence that oculomotor IOR has a significant impact on return probability during scene search.

2.
Previous research on the contribution of top-down control to saccadic target selection has suggested that eye movements, especially short-latency saccades, are primarily salience driven. The present study was designed to systematically examine top-down influences as a function of time and relative salience difference between target and distractor. Observers performed a saccadic selection task, requiring them to make an eye movement to an orientation-defined target, while ignoring a color-defined distractor. The salience of the distractor was varied (five levels), permitting the percentage of target and distractor fixations to be analyzed as a function of the salience difference between the target and distractor. This analysis revealed the same pattern of results for both the overall and the short-latency saccades: When the target and distractor were of comparable salience, the vast majority of saccades went directly to the target; even distractors somewhat more salient than the target led to significantly fewer distractor fixations than target fixations. To quantify the amount of top-down control applied, we estimated the point of equal selection probability for the target and distractor. Analyses of these estimates revealed that, to be selected with the same probability as the target, a distractor had to have considerably greater bottom-up salience than the target. This difference suggests a strong contribution of top-down control to saccadic target selection, even for the earliest saccades.
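
As an illustration of the analysis mentioned above, here is a minimal sketch (not taken from the paper) of how a point of equal selection probability could be estimated: fit a logistic function to the proportion of first saccades landing on the distractor as a function of the distractor-minus-target salience difference, and read off the difference at which that proportion reaches 50%. The data values and variable names below are hypothetical.

```python
# Hypothetical sketch: estimate the salience difference at which a distractor
# is selected as often as the target (point of equal selection probability).
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of distractor-first saccades as a function of the
    distractor-minus-target salience difference x; x0 is the 50% point."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical data: five salience-difference levels and the observed
# proportion of first saccades that went to the distractor.
salience_diff = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
p_distractor = np.array([0.02, 0.05, 0.12, 0.30, 0.55])

(x0, k), _ = curve_fit(logistic, salience_diff, p_distractor, p0=[1.0, 1.0])
print(f"Point of equal selection probability at salience difference {x0:.2f}")
```

A large positive estimate of x0 would correspond to the paper's conclusion that a distractor must be considerably more salient than the target to be selected equally often.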

3.
Oculomotor inhibition of return (IOR) refers to an inhibitory phenomenon in which saccades returning to a previously fixated location (or object) show longer latencies and a lower probability of occurrence. In recent years, researchers have examined oculomotor IOR during real-world scene search in depth. This article introduces the classic experimental paradigms, main findings, and theoretical accounts of this research; discusses the capacity of oculomotor IOR, whether it is task specific, whether facilitation of return exists in scene search, and its neural basis; and points out directions and approaches for future research in this field.

4.
Visual attention enables us to selectively prioritize or suppress information in the environment. Prominent models concerned with the control of visual attention differentiate between goal-directed, top-down and stimulus-driven, bottom-up control, with the former determined by current selection goals and the latter determined by physical salience. In the current review, we discuss recent studies that demonstrate that attentional selection does not need to be the result of top-down or bottom-up processing but, instead, is often driven by lingering biases due to the “history” of former attention deployments. This review mainly focuses on reward-based history effects; yet other types of history effects such as (intertrial) priming, statistical learning and affective conditioning are also discussed. We argue that evidence from behavioral, eye-movement and neuroimaging studies supports the idea that selection history modulates the topographical landscape of spatial “priority” maps, such that attention is biased toward locations having the highest activation on this map.

5.
During real-world scene viewing, humans must prioritize scene regions for attention. What are the roles of low-level image salience and high-level semantic meaning in attentional prioritization? A previous study suggested that when salience and meaning are directly contrasted in scene memorization and preference tasks, attentional priority is assigned by meaning (Henderson & Hayes in Nature Human Behaviour, 1, 743–747, 2017). Here we examined the role of meaning in attentional guidance using two tasks in which meaning was irrelevant and salience was relevant: a brightness rating task and a brightness search task. Meaning was represented by meaning maps that captured the spatial distribution of semantic features. Meaning was contrasted with image salience, represented by saliency maps. Critically, both maps were represented similarly, allowing us to directly compare how meaning and salience influenced the spatial distribution of attention, as measured by fixation density maps. Our findings suggest that even in tasks for which meaning is irrelevant and salience is relevant, meaningful scene regions are prioritized for attention over salient scene regions. These results support theories in which scene semantics play a dominant role in attentional guidance in scenes.
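
A minimal sketch (not the authors' analysis) of the kind of map-level comparison described above: correlate a meaning map and a saliency map with a fixation density map and compare the two correlations. The arrays, map resolution, and smoothing bandwidth below are hypothetical placeholders.

```python
# Hypothetical sketch: compare how well a meaning map and a saliency map
# predict a fixation density map, using simple map-level correlations.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
H, W = 48, 64  # map resolution (hypothetical)

meaning_map = gaussian_filter(rng.random((H, W)), sigma=3)   # stand-in for rated meaning
saliency_map = gaussian_filter(rng.random((H, W)), sigma=3)  # stand-in for image salience

# Fixation density map: smooth a histogram of (hypothetical) fixation coordinates.
fixations = rng.integers(low=0, high=[W, H], size=(200, 2))
density = np.zeros((H, W))
for x, y in fixations:
    density[y, x] += 1
density = gaussian_filter(density, sigma=3)

def map_correlation(a, b):
    """Pearson correlation between two flattened maps."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

print("meaning  vs. fixations:", map_correlation(meaning_map, density))
print("salience vs. fixations:", map_correlation(saliency_map, density))
```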

6.
石湖清, 卢家楣. 《心理科学》 (Psychological Science), 2016, 39(4), 862–868
This study examined how the visual salience and reward value of stimuli influence saccades under cooperative versus competitive conditions. The stimuli were pairs of Gabor patches; participants were asked to choose the patch with the higher reward value while their eye movements were recorded. The experiment comprised a cooperative condition and a competitive condition. The results showed that reward value had significant effects on both saccadic hit rate and saccade latency in both conditions, whereas the effect of visual salience dissociated across conditions. Stimulus-driven and goal-driven processes may thus influence oculomotor behavior in two distinct modes.

7.
Many theories assume that preknowledge of an upcoming target helps visual selection. In those theories, a top-down set can alter the salience of the target, such that attention can be deployed to the target more efficiently and responses are faster. Evidence for this account stems from visual search studies in which the identity of the upcoming target is cued in advance. In five experiments, we show that top-down knowledge affects the speed with which a singleton target can be detected but not the speed with which it can be localized. Furthermore, we show that these results are independent of the mode of responding (manual or saccadic) and are not due to a ceiling effect. Our results suggest that in singleton search, top-down information does not affect visual selection but most likely does affect response selection. We argue that such an effect is found only when information from different dimensions needs to be integrated to generate a response and that this is the case in singleton detection tasks but not in other singleton search tasks.

8.
The salience map is a crucial concept for many theories of visual attention. On this map, each object in the scene competes for selection: the more conspicuous the object, the greater its representation, and the more likely it will be chosen. In recent years, the firing patterns of single neurons have been interpreted using this framework. Here, we review evidence showing that the expression of salience is remarkably similar across structures, remarkably different across tasks, and modified in important ways when the salient object is consistent with the goals of the participant. These observations have important ramifications for theories of attention. We conclude that priority, the combined representation of salience and relevance, best describes the firing properties of neurons.

9.
Although the use of semantic information about the world seems ubiquitous in every task we perform, it is not clear whether we rely on a scene’s semantic information to guide attention when searching for something in a specific scene context (e.g., keys in one’s living room). To address this question, we compared the contribution of a scene’s semantic information (i.e., scene gist) with that of learned spatial associations between objects and context. Using the flash-preview–moving-window paradigm of Castelhano and Henderson (Journal of Experimental Psychology: Human Perception and Performance, 33, 753–763, 2007), participants searched for target objects that were placed in either consistent or inconsistent locations and were semantically consistent or inconsistent with the scene gist. The results showed that learned spatial associations were used to guide search even in inconsistent contexts, providing evidence that scene context can affect search performance without consistent scene gist information. We discuss the results in terms of a hierarchical organization of top-down influences of scene context.

10.
Three experiments were conducted to examine the interaction of top-down and bottom-up influences on visual search. More specifically, we examined the extent to which stimulus-driven capture of attention by abrupt onset distractors would disrupt the acquisition and expression of memory-based guidance of attention as exemplified by the contextual cueing effect (Chun & Jiang, 1998, 1999). In Experiment 1, onset distractors were introduced at the beginning of practice on the search task. Results indicated that onset distractors and repeated distractor patterns had independent and opposing influences on the efficiency of search. Experiment 2 ruled out an alternative hypothesis concerning the capture of attention by abrupt onsets. In Experiment 3, abrupt onset distractors were introduced following several hundred trials of practice with repeated and new distractor patterns in visual search. In this case, contextual cueing observed in the repeated distractor configuration condition partially suppressed the detrimental influence of the abrupt onset distractors on search performance. These data are discussed in terms of the interaction of top-down and bottom-up influences on visual search.

11.
Our research has previously shown that scene categories can be predicted from observers’ eye movements when they view photographs of real-world scenes. The time course of category predictions reveals the differential influences of bottom-up and top-down information. Here we used these known differences to determine to what extent image features at different representational levels contribute toward guiding gaze in a category-specific manner. Participants viewed grayscale photographs and line drawings of real-world scenes while their gaze was tracked. Scene categories could be predicted from fixation density at all times over a 2-s time course in both photographs and line drawings. We replicated the shape of the prediction curve found previously, with an initial steep decrease in prediction accuracy from 300 to 500 ms, representing the contribution of bottom-up information, followed by a steady increase, representing top-down knowledge of category-specific information. We then computed the low-level features (luminance contrasts and orientation statistics), mid-level features (local symmetry and contour junctions), and Deep Gaze II output from the images, and used that information as a reference in our category predictions in order to assess their respective contributions to category-specific guidance of gaze. We observed that, as expected, low-level salience contributes mostly to the initial bottom-up peak of gaze guidance. Conversely, the mid-level features that describe scene structure (i.e., local symmetry and junctions) split their contributions between bottom-up and top-down attentional guidance, with symmetry contributing to both bottom-up and top-down guidance, while junctions play a more prominent role in the top-down guidance of gaze.
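
A minimal sketch (not from the paper) of the low-level features named above: local luminance contrast computed as the standard deviation of gray levels in image patches, and a coarse orientation histogram built from gradient directions. Patch size and bin count are arbitrary choices for illustration.

```python
# Hypothetical sketch: simple low-level image features of the kind used to
# model bottom-up guidance (local luminance contrast, orientation statistics).
import numpy as np

def local_contrast(gray, patch=16):
    """Standard deviation of gray levels in non-overlapping patches."""
    H, W = gray.shape
    H, W = H - H % patch, W - W % patch
    blocks = gray[:H, :W].reshape(H // patch, patch, W // patch, patch)
    return blocks.std(axis=(1, 3))

def orientation_histogram(gray, bins=8):
    """Histogram of gradient orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(gray.astype(float))
    angle = np.arctan2(gy, gx) % np.pi               # orientation in [0, pi)
    magnitude = np.hypot(gx, gy)
    hist, _ = np.histogram(angle, bins=bins, range=(0, np.pi), weights=magnitude)
    return hist / hist.sum()

gray = np.random.default_rng(1).random((480, 640))  # stand-in for a grayscale scene
print(local_contrast(gray).shape)                    # contrast map at patch resolution
print(orientation_histogram(gray))                   # global orientation statistics
```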

12.
M. P. Eckstein, B. R. Beutter, & L. S. Stone, Perception, 2001, 30(11), 1389–1401
Previous studies of saccadic targeting have examined how visually guided saccades to unambiguous targets are programmed and executed. These studies have found different degrees of guidance for saccades depending on the task and task difficulty. In this study, we use ideal-observer analysis to estimate the visual information used for the first saccade during a search for a target disk in noise. We quantitatively compare the performance of the first saccadic decision to that of the ideal observer (i.e., the absolute efficiency of the first saccade) and to that of the associated final perceptual decision at the end of the search (i.e., the relative efficiency of the first saccade). Our results show, first, that at all levels of salience tested, the first saccade is based on visual information from the stimulus display, and its highest absolute efficiency is approximately 20%. Second, the efficiency of the first saccade is lower than that of the final perceptual decision after active search (with eye movements) and has a minimum relative efficiency of 19% at the lowest level of salience investigated. Third, we found that requiring observers to maintain central fixation (no saccades allowed) decreased the absolute efficiency of their perceptual decision by up to a factor of two, but that the magnitude of this effect depended on target salience. Our results demonstrate that ideal-observer analysis can be extended to measure the visual information mediating saccadic target-selection decisions during visual search, which enables direct comparison of saccadic and perceptual efficiencies.
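
For reference (standard definitions from ideal-observer analysis, not quoted from the paper), absolute efficiency is commonly defined from the squared ratio of detectability indices, and the relative efficiency of the first saccade can then be written as the ratio of two such efficiencies:

```latex
% Sketch of the standard efficiency definitions assumed here.
\[
  \eta_{\text{abs}} = \left( \frac{d'_{\text{saccade}}}{d'_{\text{ideal}}} \right)^{2},
  \qquad
  \eta_{\text{rel}} = \frac{\eta_{\text{saccade}}}{\eta_{\text{perceptual}}}
                    = \left( \frac{d'_{\text{saccade}}}{d'_{\text{perceptual}}} \right)^{2}.
\]
```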

13.
Prominent models of attentional control assert a dichotomy between top-down and bottom-up control, with the former determined by current selection goals and the latter determined by physical salience. This theoretical dichotomy, however, fails to explain a growing number of cases in which neither current goals nor physical salience can account for strong selection biases. For example, equally salient stimuli associated with reward can capture attention, even when this contradicts current selection goals. Thus, although 'top-down' sources of bias are sometimes defined as those that are not due to physical salience, this conception conflates distinct, and sometimes contradictory, sources of selection bias. We describe an alternative framework, in which past selection history is integrated with current goals and physical salience to shape an integrated priority map.
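
One way to make the proposed framework concrete (illustrative notation, not the authors'): treat the priority of each location as a weighted combination of a goal map, a physical-salience map, and a selection-history map, with selection going to the location of highest priority.

```latex
% Sketch of an integrated priority map: G = current goals, S = physical salience,
% H = selection history, all defined over locations x; the weights w are free parameters.
\[
  P(x) = w_{G}\,G(x) + w_{S}\,S(x) + w_{H}\,H(x),
  \qquad
  x_{\text{selected}} = \arg\max_{x} P(x).
\]
```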

14.
A salient event in the visual field tends to attract attention and the eyes. To account for the effects of salience on visual selection, models generally assume that the human visual system continuously holds information concerning the relative salience of objects in the visual field. Here we show that salience in fact drives vision only during the short time interval immediately following the onset of a visual scene. In a saccadic target-selection task, human performance in making an eye movement to the most salient element in a display was accurate when response latencies were short, but was at chance when response latencies were long. In a manual discrimination task, performance in making a judgment of salience was more accurate with brief than with long display durations. These results suggest that salience is represented in the visual system only briefly after a visual image enters the brain.

15.
Search performance without eye movements
Visual search performance (with sets chosen to elicit both serial and parallel search patterns) under two conditions that precluded saccades was compared to the typical situation in which visual inspection of the array is possible. In one condition, the display duration was so brief that any saccades that were executed would be too late to bring the targeted portion of the array into the fovea. In the other, the display remained present until the subject's response, but eye position was monitored and trials with shifts in fixation were excluded from analysis. The latter condition produced search latencies that were nearly identical to those with free inspection. Brief exposure, in contrast, did not produce the pattern typical of serial search, presumably because of strategies induced to deal with the rapid decay of the visual array. It is concluded that saccadic eye movements play little role in the patterns of performance used to infer serial and parallel search, and that brief exposure is not a satisfactory technique for exploring the role of saccadic eye movements in visual search.

16.
To take advantage of the increasing number of in-vehicle devices, automobile drivers must divide their attention between primary (driving) and secondary (operating an in-vehicle device) tasks. In dynamic environments such as driving, however, it is not easy to identify and quantify how a driver focuses on the various tasks he/she is simultaneously engaged in, including the distracting tasks. Measures derived from the driver’s scan path have been used as correlates of driver attention. This article presents a methodology for analyzing eye positions, which are discrete samples of a subject’s scan path, in order to categorize driver eye movements. Previous methods of analyzing eye positions recorded in a dynamic environment have relied completely on the manual identification of the focus of visual attention from a point of regard superimposed on a video of a recorded scene, failing to utilize information regarding movement structure in the raw recorded eye positions. Although effective, these methods are too time consuming to be easily used when processing the large data sets that would be required to identify subtle differences between drivers, under different road conditions, and with different levels of distraction. The aim of the methods presented in this article is to extend the degree of automation in the processing of eye movement data by proposing a methodology for eye movement analysis that extends automated fixation identification to include smooth and saccadic movements. By identifying eye movements in the recorded eye positions, a method of reducing the analysis of scene video to a finite search space is presented. The implementation of a software tool for the eye movement analysis is described, including an example from an on-road test-driving sample.
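
A minimal sketch (not the authors' software) of the general idea of automated eye-movement classification from raw eye positions: a velocity-threshold scheme that labels each sample as part of a fixation, a smooth (pursuit-like) movement, or a saccade. Sampling rate and thresholds are hypothetical.

```python
# Hypothetical sketch: classify eye-position samples into fixations, smooth
# movements, and saccades using simple angular-velocity thresholds.
import numpy as np

def classify_samples(x_deg, y_deg, hz=250.0,
                     smooth_thresh=10.0, saccade_thresh=80.0):
    """Label each sample by angular velocity (deg/s):
    below smooth_thresh -> 'fixation', below saccade_thresh -> 'smooth',
    otherwise -> 'saccade'."""
    vx = np.gradient(x_deg) * hz
    vy = np.gradient(y_deg) * hz
    speed = np.hypot(vx, vy)
    labels = np.where(speed < smooth_thresh, "fixation",
             np.where(speed < saccade_thresh, "smooth", "saccade"))
    return labels, speed

# Hypothetical gaze trace: a fixation, a fast gaze shift, then a slow smooth movement.
t = np.arange(0, 1.0, 1 / 250.0)
x = np.where(t < 0.4, 0.0, 8.0) + np.where(t > 0.6, (t - 0.6) * 20.0, 0.0)
y = np.zeros_like(t)
labels, _ = classify_samples(x, y)
print({lab: int(n) for lab, n in zip(*np.unique(labels, return_counts=True))})
```

Grouping consecutive samples with the same label into events (with durations and positions) would then reduce the analysis of the scene video to a finite set of segments, as the article proposes.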

17.
The authors studied 2 tasks that placed differing demands on detecting relevant visual information and generating appropriate gaze shifts in adults and children with and without autism. In Experiment 1, participants fixated a cross and needed to make large gaze shifts, but researchers provided explicit instructions about shifting. Children with autism were indistinguishable from comparison groups in this top-down task. In Experiment 2 (bottom-up), a fixation cross remained or was removed prior to the presentation of a peripheral target of low visual salience. In this gap-effect experiment, children with autism showed lengthened reaction times overall but no specific deficit in overlap trials. The results show evidence of a general deficit in manual responses to visual stimuli of low salience and no evidence of a deficit in top-down attention shifting. Older children with autism appeared able to generate appropriate motor responses, but stimulus-driven visual attention seemed impaired.

18.
A meaningful interaction with our environment relies on the ability to focus on relevant sensory input and to ignore irrelevant information; that is, top-down control and attention processes are employed to select from competing stimuli according to internal goals. The demands on these top-down control processes depend on the relative perceptual salience of the competing stimuli. In the present functional magnetic resonance imaging (fMRI) study, we investigated the recruitment of top-down control processes in response to varying degrees of control demands in the auditory modality. For this purpose, we tested 20 male and 20 female subjects with a dichotic listening paradigm in which the relative perceptual salience of two simultaneously presented stimuli was systematically manipulated by varying the inter-aural intensity difference (IID) and asking the subjects to selectively attend to either ear. The analysis showed that the interaction between IID and attentional direction involves two networks in the brain. A fronto-parietal network, including the pre-supplementary motor area, anterior cingulate cortex, inferior frontal junction, insula and inferior parietal lobe, was recruited during cognitively demanding conditions and can thus be seen as a top-down cognitive control network. In contrast, a second network including the superior temporal and post-central gyri was engaged under conditions with low cognitive control demands. These findings demonstrate how cognitive control is achieved through the interplay of distinct brain networks, with their differential engagement determined as a function of the level of competition between the sensory stimuli.

19.
While it is clear that the goals of an observer change behaviour, their role in the guidance of visual attention has been much debated. In particular, there has been controversy over whether top-down knowledge can influence attentional guidance in search for a singleton item that is already salient by a bottom-up account (Theeuwes, Reimann, & Mortier, 2006). One suggestion is that passive intertrial priming accounts for what has been called top-down guidance (e.g., Maljkovic & Nakayama, 1994). In the present study, participants responded to the shape of a singleton target among homogeneous distractors in a trial-by-trial cueing design. We examined the influence of target expectancy, trial history, and target salience (which was manipulated by changing the number of distractors). Top-down influence resulted in fast RTs that were independent of display size, even on trials that received no priming. Our findings show there is a role for top-down guidance, even in singleton search. The designation of intertrial priming as a bottom-up factor, rather than an implicit top-down factor (Wolfe, Butcher, Lee, & Hyle, 2003), is also discussed.

20.
Eye movements depend on cognitive processes related to visual information processing. Much has been learned about the spatial selection of fixation locations, while the principles governing the temporal control (fixation durations) are less clear. Here, we review current theories for the control of fixation durations in tasks like visual search, scanning, scene perception, and reading and propose a new model for the control of fixation durations. We distinguish two local principles from one global principle of control. First, an autonomous saccade timer initiates saccades after random time intervals (local-I). Second, foveal inhibition permits immediate prolongation of fixation durations by ongoing processing (local-II). Third, saccade timing is adaptive, so that the mean timer value depends on task requirements and fixation history (Global). We demonstrate by numerical simulations that our model qualitatively reproduces patterns of mean fixation durations and fixation duration distributions observed in typical experiments. When combined with assumptions of saccade target selection and oculomotor control, the model accounts for both temporal and spatial aspects of eye movement control in two versions of a visual search task. We conclude that the model provides a promising framework for the control of fixation durations in saccadic tasks.
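
A minimal sketch (not the authors' implementation) of the timing principles listed above: an autonomous timer draws random intervals (local-I), difficult foveal processing can prolong the current fixation (local-II), and the mean timer value adapts to recent fixation history (Global). Distributions and parameter values are hypothetical.

```python
# Hypothetical sketch of a random saccade timer with foveal inhibition and
# adaptive mean timing, producing simulated fixation durations (in ms).
import numpy as np

rng = np.random.default_rng(42)

def simulate_fixations(n, mean_timer=250.0, shape=8.0,
                       p_difficult=0.3, inhibition=120.0, adapt_rate=0.05):
    durations = []
    timer_mean = mean_timer
    for _ in range(n):
        # Local-I: the autonomous timer draws a random interval.
        d = rng.gamma(shape, timer_mean / shape)
        # Local-II: difficult foveal processing prolongs the current fixation.
        if rng.random() < p_difficult:
            d += inhibition
        # Global: the mean timer value drifts toward recent fixation durations.
        timer_mean += adapt_rate * (d - timer_mean)
        durations.append(d)
    return np.array(durations)

fixations = simulate_fixations(10_000)
print(f"mean = {fixations.mean():.0f} ms, sd = {fixations.std():.0f} ms")
```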
