Similar literature (20 results)
1.
Visual search in real environments is a vital ability for the survival of humans and animals. Current visual search research mostly uses static observers and stationary two-dimensional search items, focusing on the role of attention in search. Existing theoretical models of visual search mainly summarize the top-down attentional factors that influence search, while reducing bottom-up influences to image salience. In real environments, however, the observer or the search items can move, and the visual information available during search includes both dynamic optic flow and static image-structure information. Research on visual recognition has found that combining these two kinds of information allows observers to identify scenes, events, and three-dimensional structure accurately and robustly. Introducing both kinds of visual information into current theoretical models of visual search would better approximate search tasks in real environments. We propose a research framework and experimental designs to investigate visual search processes that exploit dynamic and static visual information, with the aim of refining current models of visual search. We argue that making full use of environmental information can improve search efficiency, and that this has important applications in visual search training and intelligent search design.

2.
In visual search, items defined by a unique feature are found easily and efficiently. Search for a moving target among stationary distractors is one such efficient search. Search for a stationary target among moving distractors is markedly more difficult. In the experiments reported here, we confirm this finding and further show that searches for a stationary target within a structured flow field are more efficient than searches for stationary targets among distractors moving in random directions. The structured motion fields tested included uniform direction of motion, a radial flow field simulating observer forward motion, and a deformation flow field inconsistent with observer motion. The results using optic flow stimuli were not significantly different from the results obtained with other structured fields of distractors. The results suggest that the local properties of the flow fields rather than global optic flow properties are important for determining the efficiency of search for a stationary target.
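The four distractor-motion conditions described in this abstract (uniform translation, radial expansion simulating forward self-motion, deformation, and random directions) can each be written as a simple vector field over item positions. The sketch below is not from the study itself; it uses standard textbook definitions of these fields, with positions taken relative to the screen center and all speeds normalized to one.

```python
import numpy as np

def flow_field(points, kind):
    """Return a unit motion vector for each (x, y) item position.

    points : (N, 2) array of positions relative to the screen center.
    kind   : 'uniform' - all items drift in one shared direction
             'radial'  - items move outward, as in observer forward motion
             'deform'  - expansion along x, contraction along y
             'random'  - each item moves in an independent random direction
    """
    x, y = points[:, 0], points[:, 1]
    if kind == "uniform":
        v = np.tile([1.0, 0.0], (len(points), 1))   # shared rightward drift
    elif kind == "radial":
        v = points.copy()                           # directly away from center
    elif kind == "deform":
        v = np.column_stack([x, -y])                # expand in x, contract in y
    elif kind == "random":
        ang = np.random.uniform(0, 2 * np.pi, len(points))
        v = np.column_stack([np.cos(ang), np.sin(ang)])
    else:
        raise ValueError(kind)
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    return v / np.where(norms == 0, 1, norms)       # unit speed everywhere
```

On this formulation, the paper's point is that the first three fields are locally smooth (neighboring distractors move alike) while the random field is not, and it is this local structure, rather than global consistency with self-motion, that predicts search efficiency.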

3.
Visual search asymmetries in motion and optic flow fields.

4.
How observers distribute limited processing resources to regions of a scene is based on a dynamic balance between current goals and reflexive tendencies. Past research showed that these reflexive tendencies include orienting toward objects that expand as if they were looming toward the observer, presumably because this signal indicates an impending collision. Here we report that during visual search, items that loom abruptly capture attention more strongly when they approach from the periphery rather than from near the center of gaze (Experiment 1), and target objects are more likely to be attended when they are on a collision path with the observer rather than on a near-miss path (Experiment 2). Both effects are exaggerated when search is performed in a large projection dome (Experiment 3). These findings suggest that the human visual system prioritizes events that are likely to require a behaviorally urgent response.

5.
Watson and Humphreys (1997) proposed that visual marking is a goal-directed process that enhances visual search through the inhibition of old objects. In addition to the standard marking case with targets at new locations, included in Experiment 1 was a set of trials with targets always at old locations, as well as a set of trials with targets varying between new and old locations. The participants' performance when detecting the target at old locations was equivalent to their performance in the full-baseline condition when they knew the target would be at old locations, and was worse when the target appeared at old locations on 50% of the trials. Marking was observed when the target appeared at new locations. In Experiment 2, an offset paradigm was used to eliminate the influence of the salient abrupt-onset feature of the new objects. No significant benefits were found for targets at new locations in the absence of onsets at new locations. The results suggest that visual marking may be an attentional selection mechanism that significantly benefits visual search when (1) the observer has an appropriate search goal, (2) the goal necessitates inhibition of old objects, and (3) the new objects include a salient perceptual feature.

6.
All elements of the visual field are known to influence the perception of the egocentric distances of objects. Not only the ground surface of a scene, but also the surface at the back or other objects in the scene can affect an observer's egocentric distance estimation of an object. We tested whether this is also true for exocentric direction estimations. We used an exocentric pointing task to test whether the presence of poster-boards in the visual scene would influence the perception of the exocentric direction between two test-objects. In this task the observer has to direct a pointer, with a remote control, to a target. We placed the poster-boards at various positions in the visual field to test whether these boards would affect the settings of the observer. We found that they only affected the settings when they directly served as a reference for orienting the pointer to the target.

7.
When visual stimuli remain present during search, people spend more time fixating objects that are semantically or visually related to the target instruction than looking at unrelated objects. Are these semantic and visual biases also observable when participants search within memory? We removed the visual display prior to search while continuously measuring eye movements towards locations previously occupied by objects. The target absent trials contained objects that were either visually or semantically related to the target instruction. When the overall mean proportion of fixation time was considered, we found biases towards the location previously occupied by the target, but failed to find biases towards visually or semantically related objects. However, in two experiments the pattern of biases towards the target over time provided a reliable predictor for biases towards the visually and semantically related objects. We therefore conclude that visual and semantic representations alone can guide eye movements in memory search, but that orienting biases are weak when the stimuli are no longer present.

8.
Two experiments were conducted to investigate how color and stereoscopic depth information are used to segregate objects for visual search in three-dimensional (3-D) visual space. Eight observers were asked to indicate the alphanumeric category (letter or digit) of the target which had its unique color and unique depth plane. In Experiment 1, distractors sharing a common depth plane or a common color appeared in spatial contiguity in the xy plane. The results suggest that visual search for the target involves examination of kernels formed by homogeneous items sharing the same color and depth. In Experiment 2, the xy contiguity of distractors sharing a common color or a common depth plane was varied. The results showed that when target-distractor distinction becomes more difficult on one dimension, the other dimension becomes more important in performing visual search, as indicated by a larger effect on search time. This suggests that observers can make optimal use of the information available. Finally, color had a larger effect on search time than did stereoscopic depth. Overall, the results support models of visual processing which maintain that perceptual segregation and selective attention are determined by similarity among objects in 3-D visual space on both spatial and nonspatial stimulus dimensions.

9.
We contrasted visual search for targets presented in prototypical views and targets presented in nonprototypical views, when targets were defined by their names and when they were defined by the action that would normally be performed on them. The likelihood of the first fixation falling on the target was increased for prototypical-view targets falling in the lower visual field. When targets were defined by actions, the durations of fixations were reduced for targets in the lower field. The results are consistent with eye movements in search being affected by representations within the dorsal visual stream, where there is strong representation of the lower visual field. These representations are sensitive to the familiarity or the affordance offered by objects in prototypical views, and they are influenced by action-based templates for targets.

10.
What can we learn about a scene while we stare at it, but before we know what we will be looking for? Three experiments were performed to investigate whether previewing a search array prior to knowing the target allows search to operate more quickly (lower reaction time [RT]), more efficiently (reduced set size slope), and/or by consulting abstract mental representations. Experiment 1 compared RTs for previewed and nonpreviewed arrays, some of which were highly degraded with visual noise. Preview reduced RTs for the noisy displays but did not affect search efficiency. Limited interactions of visual quality and preview suggested that prior exposure allowed the extraction and maintenance of about three abstract identities. If the target was one of those items, the observer responded without searching; if not, the observer searched the remaining items as if there had been no preview. Experiment 2 replicated these findings with less extreme noise. In Experiment 3, subjects previewed 0-6 items of a 12-item display. RTs decreased linearly as the number of previewed items increased from 0 to 3 and then reached a plateau, confirming that the capacity of the representation was about 3 items. Implications for visual awareness are discussed.
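The distinction this abstract draws between "more quickly" and "more efficiently" is conventionally operationalized as the intercept versus the slope of the RT x set-size function, fitted by least squares. A minimal sketch of that analysis follows; the RT values are invented for illustration and are not the paper's data.

```python
import numpy as np

def search_slope(set_sizes, rts):
    """Fit RT = intercept + slope * set_size by least squares.

    Returns (slope in ms/item, intercept in ms).
    """
    slope, intercept = np.polyfit(set_sizes, rts, 1)
    return slope, intercept

# Invented mean RTs (ms) at set sizes 4, 8, 12:
sizes          = [4, 8, 12]
rts_no_preview = [620, 740, 860]   # 30 ms/item slope
rts_preview    = [560, 680, 800]   # same slope, lower intercept
```

A preview benefit of the kind reported here (lower RT, unchanged efficiency) shows up as a reduced intercept with an identical slope, whereas a genuine efficiency gain would flatten the slope itself.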

11.
In continuous visual search, targets can be detected within a certain area around the fixation point (control area). Recent observations have suggested that these areas are asymmetrical in their vertical extent, i.e., that targets can be detected at greater distances below than above the fixation point. In order to obtain more direct evidence on this asymmetry, two experiments were conducted using a contingent-display technique. Pronounced asymmetry of the vertical detection span was observed. A model is presented according to which the asymmetry results from the superposition of two sources: the permanent distribution of sensitivity and the actual distribution of attention along the vertical axis of the visual field. The detectability of a target at a given location is a joint function of the strength of these two factors at that location.

12.
The underlying mechanism of search asymmetry is still unknown. Many computational models postulate top-down selection of target-defining features as a crucial factor. This feature selection account implies, and other theories implicitly assume, that predefined target identity is necessary for search asymmetry. The authors tested the validity of the feature selection account using a singleton search task without a predefined target. Participants conducted a target-defined and a singleton search task with a circle (O) and a circle with a vertical bar (Q). Search asymmetry was observed in both tasks with almost identical magnitude. The results were not due to trial-by-trial feature selection, because search asymmetry persisted even when the target was completely unpredictable. Asymmetry in the singleton search was also observed with more complex stimuli, Kanji characters. These results suggest that feature selection is not necessary for search asymmetry, and they impose important constraints on current visual search theories.

13.
Currently one in five adults is still unable to read despite a rapidly developing world. Here we show that (il)literacy has important consequences for the cognitive ability of selecting relevant information from a visual display of nonlinguistic material. In two experiments we compared low to high literacy observers on both an easy and a more difficult visual search task involving different types of chicken. Low literates were consistently slower (as indicated by overall response times) in both experiments. More detailed analyses, including eye movement measures, suggest that the slowing is partly due to display wide (i.e., parallel) sensory processing but mainly due to postselection processes, as low literates needed more time between fixating the target and generating a manual response. Furthermore, high and low literacy groups differed in the way search performance was distributed across the visual field. High literates performed relatively better when the target was presented in central regions, especially on the right. At the same time, high literacy was also associated with a more general bias towards the top and the left, especially in the more difficult search. We conclude that learning to read results in an extension of the functional visual field from the fovea to parafoveal areas, combined with some asymmetry in scan pattern influenced by the reading direction, both of which also influence other (e.g., nonlinguistic) tasks such as visual search.

14.
Incidental visual memory for targets and distractors in visual search
We explored incidental retention of visual details of encountered objects during search. Participants searched for conjunction targets in 32 arrays of 12 pictures of real-world objects and then performed a token discrimination task that examined their memory for visual details of the targets and distractors from the search task. The results indicate that even though participants had not been instructed to memorize the objects, the visual details of search targets and distractor objects related to the targets were retained after the search. Distractor objects unrelated to the search target were remembered more poorly. Eye-movement measures indicated that the objects that were remembered were looked at more frequently during search than those that were not remembered. These results provide support that detailed visual information is included incidentally in the visual representation of an object after the object is no longer in view.

15.
In visual search, observers try to find known target objects among distractors in visual scenes where the location of the targets is uncertain. This review article discusses the attentional processes that are active during search and their neural basis. Four successive phases of visual search are described. During the initial preparatory phase, a representation of the current search goal is activated. Once visual input has arrived, information about the presence of target-matching features is accumulated in parallel across the visual field (guidance). This information is then used to allocate spatial attention to particular objects (selection), before representations of selected objects are activated in visual working memory (recognition). These four phases of attentional control in visual search are characterized both at the cognitive level and at the neural implementation level. It will become clear that search is a continuous process that unfolds in real time. Selective attention in visual search is described as the gradual emergence of spatially specific and temporally sustained biases for representations of task-relevant visual objects in cortical maps.

16.
Folk psychology suggests that when an observer views a scene, a unique item will stand out and draw attention to itself. This belief stands in contrast to numerous studies in visual search that have found that a unique target item (e.g., a unique color) is not identified more quickly than a nonunique target. We hypothesized that this finding is the result of task demands of visual search, and that when the task does not involve visual search, uniqueness will pop out. We tested this hypothesis in a task in which observers were presented an array of letters and asked to respond aloud, as quickly as possible, with the identity of any one of the letters. The observers were significantly more likely to respond with a uniquely colored letter than would be expected by chance. In a task in which observers blurt out the first thing that they see, uniqueness does pop out.

17.
A visual search experiment using synthetic three-dimensional objects is reported. The target shared its constituent parts, the spatial organization of its parts, or both with the distractors displayed with it. Sharing of parts and sharing of spatial organization both negatively affected visual search performance, and these effects were strictly additive. These findings support theories of complex visual object perception that assume a parsing of the stimulus into its higher-order constituents (volumetric parts or visible surfaces). The additivity of the effects demonstrates that information on parts and information on spatial organization are processed independently in visual search.

18.
Mitsudo H. Perception, 2003, 32(1), 53-66.
Phenomenal transparency reflects a process which makes it possible to recover the structure and lightness of overlapping objects from a fragmented image. This process was investigated with the visual-search paradigm. In three experiments, observers searched for a target that consisted of gray patches among a variable number of distractors, and the search efficiency was assessed. Experiments 1 and 2 showed that the search efficiency was greatly improved when the target was distinctive with regard to structure, based on transparency. Experiment 3 showed that the search efficiency was impaired when a target was not distinctive with regard to lightness (i.e., perceived reflectance), based on transparency. These results suggest that the shape and reflectance of overlapping objects when accompanied by transparency can be calculated in parallel across the visual field, and can be used as a guide for visual attention.

19.
In visual search tasks, observers look for a target stimulus among distractor stimuli. A visual search asymmetry is said to occur when a search for stimulus A among stimulus B produces different results from a search for B among A. Anne Treisman made search asymmetries into an important tool in the study of visual attention. She argued that it was easier to find a target that was defined by the presence of a preattentive basic feature than to find a target defined by the absence of that feature. Four of the eight papers in this symposium in Perception & Psychophysics deal with the use of search asymmetries to identify stimulus attributes that behave as basic features in this context. Another two papers deal with the long-standing question of whether a novelty can be considered to be a basic feature. Asymmetries can also arise when one type of stimulus is easier to identify or classify than another. Levin and Angelone's paper on visual search for faces of different races is an examination of an asymmetry of this variety. Finally, Previc and Naegele investigate an asymmetry based on the spatial location of the target. Taken as a whole, these papers illustrate the continuing value of the search asymmetry paradigm.

20.
Several studies have shown that targets defined on the basis of the spatial relations between objects yield highly inefficient visual search performance (e.g., Logan, 1994; Palmer, 1994), suggesting that the apprehension of spatial relations may require the selective allocation of attention within the scene. In the present study, we tested the hypothesis that depth relations might be different in this regard and might support efficient visual search. This hypothesis was based, in part, on the fact that many perceptual organization processes that are believed to occur early and in parallel, such as figure-ground segregation and perceptual completion, seem to depend on the assignment of depth relations. Despite this, however, using increasingly salient cues to depth (Experiments 2-4) and including a separate test of the sufficiency of the most salient depth cue used (Experiment 5), no evidence was found to indicate that search for a target defined by depth relations is any different than search for a target defined by other types of spatial relations, with regard to efficiency of search. These findings are discussed within the context of the larger literature on early processing of three-dimensional characteristics of visual scenes.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号