Similar Articles
20 similar articles found (search time: 78 ms)
1.
2.
Lightness constancy in complex scenes requires that the visual system take account of information concerning variations of illumination falling on visible surfaces. Three experiments on the perception of lightness for three-dimensional (3-D) curved objects show that human observers are better able to perform this accounting for certain scenes than for others. The experiments investigate the effects of object curvature, illumination direction, and object shape on lightness perception. Lightness constancy was quite good when a rich local gray-level context was provided. Deviations occurred when both illumination and reflectance changed along the surface of the objects. Does the perception of a 3-D surface and illuminant layout help calibrate lightness judgments? Our results showed a small but consistent improvement in lightness matches for ellipsoid shapes relative to flat rectangular shapes under illumination conditions that produce similar image gradients. Illumination change over 3-D forms is therefore taken into account in lightness perception.

3.
During visual search, observers hold in mind a search template, which they match against the stimulus. To characterize the content of this template, we trained observers to discriminate a set of artificial objects at an individual level and at a category level. The observers then searched for the objects on backgrounds that camouflaged the features that defined either the object’s identity or the object’s category. Each search stimulus was preceded by the target’s individual name, its category name, or an uninformative cue. The observers’ task was to locate the target, which was always present and always the only figure in the stimulus. The results showed that name cues slowed search when the features associated with the name were camouflaged. Apparently, the observers required a match between their mental representation of the target and the stimulus, even though this was unnecessary for the task. Moreover, this match involved all distinctive features of the target, not just the features necessary for a definitive identification. We conclude that visual search for a specific target involves a verification process that is performed automatically on all of the target’s distinctive features.

4.
Visual marking and the perception of salience in visual search (total citations: 2; self-citations: 0; citations by others: 2)
In the present study, the gap paradigm originally developed by Watson and Humphreys (1997) was used to investigate whether the process of visual marking can influence the perceptual salience of a target in visual search. Consistent with previous studies (Watson & Humphreys, 1997), the results showed that search was not affected by the presence of the preceding distractors when the target was relatively low in salience. This finding suggests that visual marking can increase the efficiency of visual search by decreasing the size of the search set. However, more important, the results also showed that search was affected by the presence of the preceding distractors when the target was relatively high in salience. This finding suggests that visual marking may be limited in its ability to increase the perceptual salience of the target. Together, the results of the present study suggest that the effectiveness of visual marking may vary as a function of search context.

5.
Visual marking inhibits singleton capture (total citations: 4; self-citations: 0; citations by others: 4)
This paper is concerned with how we prioritize the selection of new objects in visual scenes. We present four experiments investigating the effects of distractor previews on visual search through new objects. Participants viewed a set of to-be-ignored nontargets, with the task being to search for a target in a second set, added to the first after 1000 ms. This second set could contain a salient feature singleton, defined in terms of its color, orientation, or both color and orientation. When the singleton was a distractor, search was slowed relative to when there was no singleton. Search was facilitated when the singleton was a target. Interestingly, both the interference and facilitation effects were modulated when the preview shared features with the singleton. Follow-up experiments showed that this reduction of singleton effects was not due to (i) low-level sensory aspects of the displays, (ii) increased heterogeneity in the search set in the preview condition, or (iii) color-based grouping of old and new items. Instead, we suggest that there is an inhibitory carry-over from the first to the second set of items based on feature similarity. We suggest that this suppression stems from a process termed visual marking, which suppresses irrelevant visual objects in anticipation of more relevant new objects (Watson & Humphreys, 1997). The findings argue against alternative explanations, such as an account based on automatic capture by abrupt new onsets.

6.
Koning, A., de Weert, C. M., & van Lier, R. (2008). Perception, 37(9), 1434-1442.
We investigated the effects of transparency, perceptual grouping, and presentation time on perceived lightness. Both transparency and perceptual grouping have been found to result in assimilation effects, but only for ambiguous stimulus displays and with specific attentional instructions. By varying the presentation times of displays with two partly overlapping transparent E-shaped objects, we measured assimilation in unambiguous stimulus displays and without specific attentional instructions. The task was to judge which of two simultaneously presented E-shaped objects was darker. With unrestrained presentation times, if a transparency interpretation was possible, assimilation was not found. Inhibiting a transparency interpretation by occluding the local junctions between the two E-shaped objects did lead to assimilation. With short presentation times, assimilation was found even when a transparency interpretation was possible. Thus, we conclude that, although transparency appears to enhance assimilation, with unambiguous stimulus displays and without specific attentional instructions, perceptual grouping is more important for assimilation to occur.

7.
Several studies have shown that targets defined on the basis of the spatial relations between objects yield highly inefficient visual search performance (e.g., Logan, 1994; Palmer, 1994), suggesting that the apprehension of spatial relations may require the selective allocation of attention within the scene. In the present study, we tested the hypothesis that depth relations might be different in this regard and might support efficient visual search. This hypothesis was based, in part, on the fact that many perceptual organization processes that are believed to occur early and in parallel, such as figure-ground segregation and perceptual completion, seem to depend on the assignment of depth relations. Despite this, however, using increasingly salient cues to depth (Experiments 2–4) and including a separate test of the sufficiency of the most salient depth cue used (Experiment 5), no evidence was found to indicate that search for a target defined by depth relations is any different than search for a target defined by other types of spatial relations, with regard to efficiency of search. These findings are discussed within the context of the larger literature on early processing of three-dimensional characteristics of visual scenes.

9.
When visual stimuli remain present during search, people spend more time fixating objects that are semantically or visually related to the target instruction than looking at unrelated objects. Are these semantic and visual biases also observable when participants search within memory? We removed the visual display prior to search while continuously measuring eye movements towards locations previously occupied by objects. The target-absent trials contained objects that were either visually or semantically related to the target instruction. When the overall mean proportion of fixation time was considered, we found biases towards the location previously occupied by the target, but failed to find biases towards visually or semantically related objects. However, in two experiments the pattern of biases towards the target over time provided a reliable predictor for biases towards the visually and semantically related objects. We therefore conclude that visual and semantic representations alone can guide eye movements in memory search, but that orienting biases are weak when the stimuli are no longer present.

10.
The surface reflectance of objects is highly variable, ranging between 4% for, say, charcoal and 90% for fresh snow. When stimuli are presented simultaneously, people can discriminate hundreds of levels of visual intensity. Despite this, human languages possess a maximum of just three basic terms for describing lightness. In English, these are white (or light), black (or dark), and gray. Why should this be? Using information theory, combined with estimates of the distribution of reflectances in the natural world and the reliability of lightness recall over time, we show that three lightness terms is the optimal number for describing surface reflectance properties in a modern urban or indoor environment. We also show that only two lightness terms would be required in a forest or rural environment.
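To make the information-theoretic argument above concrete, here is a minimal Python sketch of how one might estimate how many lightness terms noisy recall can support. The reflectance distribution, the recall-noise level, and the sample size below are placeholder assumptions for illustration, not values taken from the study.

```python
# Hypothetical sketch: how many lightness terms survive noisy recall?
# The reflectance distribution and recall noise below are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder distribution of surface reflectances (4% to 90%), skewed toward
# darker surfaces, as natural and indoor scenes tend to be.
reflectance = rng.beta(a=1.2, b=3.0, size=200_000) * 0.86 + 0.04

# Placeholder recall reliability: lightness is remembered with multiplicative
# error in log-reflectance space.
recall_sigma = 0.35
recalled = np.exp(np.log(reflectance) + rng.normal(0.0, recall_sigma, reflectance.size))

def transmitted_bits(k: int) -> float:
    """Mutual information (bits) between the lightness category assigned at
    encoding and the category assigned to the noisy recalled value, using
    k equal-probability categories."""
    edges = np.quantile(reflectance, np.linspace(0.0, 1.0, k + 1))
    enc = np.digitize(reflectance, edges[1:-1])   # categories 0..k-1
    dec = np.digitize(recalled, edges[1:-1])
    joint = np.histogram2d(enc, dec, bins=[np.arange(k + 1) - 0.5] * 2)[0]
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

for k in range(2, 7):
    print(f"{k} terms: {transmitted_bits(k):.3f} bits transmitted")
# Under this kind of analysis, the optimal vocabulary size is the point where
# adding another term no longer increases the information that survives recall.
```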

11.
Watson and Humphreys (1997) proposed that visual marking is a goal-directed process that enhances visual search through the inhibition of old objects. In addition to the standard marking case with targets at new locations, included in Experiment 1 was a set of trials with targets always at old locations, as well as a set of trials with targets varying between new and old locations. The participants' performance when detecting the target at old locations was equivalent to their performance in the full-baseline condition when they knew the target would be at old locations, and was worse when the target appeared at old locations on 50% of the trials. Marking was observed when the target appeared at new locations. In Experiment 2, an offset paradigm was used to eliminate the influence of the salient abrupt-onset feature of the new objects. No significant benefits were found for targets at new locations in the absence of onsets at new locations. The results suggest that visual marking may be an attentional selection mechanism that significantly benefits visual search when (1) the observer has an appropriate search goal, (2) the goal necessitates inhibition of old objects, and (3) the new objects include a salient perceptual feature.

12.
Skilled readers of Chinese participated in sorting and visual search experiments. The sorting results showed that, under conditions of conflicting information about structure and component, subjective judgments of the visual similarity among characters were based on the characters' overall configurations (i.e., structures) rather than on the common components the characters possessed. In visual search, both structure and component contributed to the visual similarity reflected by search efficiency. The steepest search slopes (and thus the most similar target-distractor pairs) were found when the target and the distractor characters had the same structure and shared one common component, compared with when they had different structures and/or shared no common components. The results demonstrate that character structure plays a greater role in the visual similarity of Chinese characters than has previously been considered.

13.
In visual search experiments, we examined the existence of a search asymmetry for the direction from which three-dimensional objects are viewed. An upward-tilted target object among downward-tilted distracting objects was detected faster than when the orientations of the target and distractors were reversed. This indicates that early visual processing treats objects tilted downward with respect to the observer as the situation that is more likely to be encountered; that is, the system is set up to expect to see the tops of these objects. We also found a visual field anisotropy, in that the asymmetry was more pronounced in the lower visual field. These findings are consistent with the idea that the tops of objects are usually situated in the lower visual field and less often in the upper field. Examination of the conditions under which the asymmetry and the anisotropy occur demonstrated the importance of the three-dimensional nature of the stimulus objects. Early visual processing thus makes use of heuristics that take into account specific relationships between the relative locations in space of the observer and 3-D objects.

14.
Although lightness perception is clearly influenced by contextual factors, it is not known whether knowledge about the reflectance of specific objects also affects their lightness. Recent research by O. H. MacLin and R. Malpass (2003) suggests that subjects label Black faces as darker than White faces, so in the current experiments, an adjustment methodology was used to test the degree to which expectations about the relative skin tone associated with faces of varying races affect the perceived lightness of those faces. White faces were consistently judged to be relatively lighter than Black faces, even for racially ambiguous faces that were disambiguated by labels. Accordingly, relatively abstract expectations about the relative reflectance of objects can affect their perceived lightness.

15.
How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × set size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target.
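As a concrete illustration of the efficiency index used above, the sketch below fits the RT × set size slope by least squares. The set sizes and mean RTs are invented numbers for illustration only; they are not data from the study.

```python
# Minimal sketch of the standard search-efficiency index: the slope of the
# reaction time (RT) x set size function. All RT values here are hypothetical.
import numpy as np

set_size = np.array([4, 8, 16, 32])                   # e.g., number of labeled regions
mean_rt_ms = np.array([620.0, 660.0, 750.0, 910.0])   # hypothetical mean correct RTs

# np.polyfit with deg=1 returns (slope, intercept).
slope, intercept = np.polyfit(set_size, mean_rt_ms, deg=1)
print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.0f} ms")
# Slopes of a few ms/item indicate efficient search; slopes of tens of
# ms/item indicate inefficient, more serial search.
```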

16.
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory–visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential association predictions. We also recently showed that the underlying object-based auditory–visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory–visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory–visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory–visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects.

17.
Searching for items in one’s environment often includes considerable reliance on semantic knowledge. The present study examines the importance of semantic information in visual and memory search, especially with respect to whether the items reside in long-term or working memory. In Experiment 1, participants engaged in hybrid visual memory search for items that were either highly familiar or novel. Importantly, the relatively large number of targets in this hybrid search task necessitated that targets be stored in some form of long-term memory. We found that search for familiar objects was more efficient than search for novel objects. In Experiment 2, we investigated search for familiar versus novel objects when the number of targets was low enough to be stored in working memory. We also manipulated how often participants in Experiment 2 were required to update their target (every trial vs. every block) in order to control for target templates that were stored in long-term memory as a result of repeated exposure over trials. We found no differences in search efficiency for familiar versus novel objects when templates were stored in working memory. Our results suggest that while semantic information may provide additional individuating features that are useful for object recognition in hybrid search, this information could be irrelevant or even distracting when searching for targets stored in working memory.

18.
Can visual search be based on preconstancy representations of the scene, that is, representations in which accidental characteristics of the scene, such as shadows, point of view, and distance, have not yet been discounted? This question was addressed within the specific context of lightness constancy, the phenomenon that surface lightness is perceived as relatively unchanged despite changes in illumination conditions. Three experiments yielded evidence of preconstancy influence on visual search. This was true even when the preconstancy information that seemed to influence search was unavailable at a reportable level. The results suggest that visual search processes can be engaged before the processing that leads to the experienced perception of the scene is complete.

19.
In order to determine the reflectance of a surface, it is necessary to discount luminance changes produced by illumination variation, a process that requires the visual system to respond differently to luminance changes that are due to illumination and those that are due to reflectance. It is known that various cues can be used in this process. By measuring the strength of lightness illusions, we find evidence that straightness is used as a cue: when a boundary is straight rather than curved, it has a greater tendency to be discounted, as if it were an illumination edge. The strongest illusions occur when a boundary has high contrast and has multiple X-junctions that preserve a consistent contrast ratio.
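The "consistent contrast ratio" property mentioned above can be illustrated with a small check, sketched below under assumed luminance values: across a boundary caused by illumination rather than by reflectance, the luminance ratio between the two underlying surfaces at an X-junction stays roughly constant. The function name, the tolerance, and the numbers are hypothetical, chosen only for this example.

```python
# Illustrative check (not the paper's procedure): does an X-junction preserve
# the contrast ratio across the candidate illumination boundary?
def preserves_contrast_ratio(a1: float, b1: float, a2: float, b2: float,
                             tol: float = 0.05) -> bool:
    """a1, b1: luminances of surfaces A and B under illumination 1;
    a2, b2: luminances of the same surfaces under illumination 2."""
    r1, r2 = a1 / b1, a2 / b2
    return abs(r1 - r2) <= tol * r1

# A doubling of illumination scales both surfaces equally, so the ratio is
# preserved and the boundary tends to be discounted as an illumination edge.
print(preserves_contrast_ratio(40.0, 10.0, 80.0, 20.0))   # True
# A reflectance change on one side breaks the ratio.
print(preserves_contrast_ratio(40.0, 10.0, 80.0, 40.0))   # False
```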

20.
It has long been debated whether or not a salient stimulus automatically attracts people’s attention in visual search. Recent findings showed that a salient stimulus is especially likely to capture attention when the search process is inefficient due to high levels of competition between the target and distractors. Extending these studies, the present study proposes that the specific nature of the visual search task, as well as search efficiency, determines whether or not a salient, task-irrelevant singleton stimulus captures attention. To test this proposition, we conducted three experiments in which participants performed two visual search tasks whose underlying mechanisms are known to differ: an orientation-feature search task and a Landolt-C search task. We found that color singleton distractors captured attention when participants performed the orientation-feature search task. The magnitude of this capture effect increased as search efficiency decreased. In contrast, capture by singleton distractors was not observed in the Landolt-C search task. This differential pattern of capture was not due to differences in search efficiency across the search tasks; even when search efficiency was controlled for, stimulus-driven capture of attention by a salient distractor was found only under feature search. Based on these results, the present study suggests that, in addition to search efficiency, the nature of the search strategy and the extent to which attentional control is strained play crucial roles in observing stimulus-driven attentional capture in visual search.
