Similar Articles
1.
Views of natural scenes unfold over time, and objects of interest that were present a moment ago tend to remain present. While visual crowding places a fundamental limit on object recognition in cluttered scenes, most studies of crowding have examined only static scenes. The role of temporal continuity in crowding has therefore gone unaddressed. We investigated intertrial effects on crowding in visual scenes, showing that crowding is considerably diminished when objects remain constant on consecutive visual search trials. Repetition of both the target and distractors decreases the critical distance for crowding from flankers. More generally, our results show how object continuity through between-trial priming releases objects that would otherwise be unidentifiable due to crowding. Crowding, although a significant bottleneck on object recognition, can be mitigated by statistically likely temporal continuity of the objects. Crowding therefore depends not only on what is momentarily present, but also on what was previously attended.

2.
Three experiments were conducted to investigate the existence of incidentally acquired, long-term, detailed visual memory for objects embedded in previously viewed scenes. Participants performed intentional memorization and incidental visual search learning tasks while viewing photographs of real-world scenes. A visual memory test for previously viewed objects from these scenes then followed. Participants were not aware that they would be tested on the scenes following incidental learning in the visual search task. In two types of memory tests for visually specific object information (token discrimination and mirror-image discrimination), performance following both the memorization and visual search conditions was reliably above chance. These results indicate that recent demonstrations of good visual memory during scene viewing are not due to intentional scene memorization. Instead, long-term visual representations are incidentally generated as a natural product of scene perception.

3.
Many experiments have shown that knowing a target's visual features improves search performance over knowing the target name. Other experiments have shown that scene context can facilitate object search in natural scenes. In this study, we investigated how scene context and target features affect search performance. We examined two possible sources of information from scene context—the scene's gist and the visual details of the scene—and how they potentially interact with target-feature information. Prior to commencing search, participants were shown a scene and a target cue depicting either a picture or the category name (or a no-information control). Using eye movement measures, we investigated how the target features and scene context influenced two components of search: early attentional guidance processes and later verification processes involved in the identification of the target. We found that both scene context and target features improved guidance, but that target features also improved speed of target recognition. Furthermore, we found that a scene's visual details played an important role in improving guidance, much more so than did the scene's gist alone.

4.
Using a negative priming (NP) paradigm, we examined whether participants would categorize color and grayscale images of natural scenes that were presented peripherally and ignored. We focused on (1) the attentional resources allocated to natural scenes and (2) direct versus indirect processing of them. We set up low and high attention-load conditions, based on the set size of the searched stimuli in the prime display (one vs. five). Participants were required to detect and categorize the target objects in natural scenes in a central visual search task, ignoring peripheral natural images in both the prime and probe displays. The results showed that, irrespective of attention load, NP was observed for color scenes but not for grayscale scenes. We did not observe any effect of color information in central visual search, where participants responded directly to natural scenes. These results indicate that, in a situation in which participants indirectly process natural scenes, color information is critical to object categorization, but when the scenes are processed directly, color information does not contribute to categorization.

5.
When moving toward a stationary scene, people judge their heading quite well from visual information alone. Much experimental and modeling work has been presented to analyze how people judge their heading for stationary scenes. However, in everyday life, we often move through scenes that contain moving objects. Most models have difficulty computing heading when moving objects are in the scene, and few studies have examined how well humans perform in the presence of moving objects. In this study, we tested how well people judge their heading in the presence of moving objects. We found that people perform remarkably well under a variety of conditions. The only condition that affects an observer’s ability to judge heading accurately consists of a large moving object crossing the observer’s path. In this case, the presence of the object causes a small bias in the heading judgments. For objects moving horizontally with respect to the observer, this bias is in the object’s direction of motion. These results present a challenge for computational models.

6.
The attentional cost of inattentional blindness
Bressan P, Pizzighello S. Cognition, 2008, 106(1): 370-383.
When our attention is engaged in a visual task, we can be blind to events which would otherwise not be missed. In three experiments, 97 out of the 165 observers performing a visual attention task failed to notice an unexpected, irrelevant object moving across the display. Surprisingly, this object significantly lowered accuracy in the primary task when, and only when, it failed to reach awareness. We suggest that an unexpected stimulus causes a state of alert that would normally generate an attentional shift; if this response is prevented by an attention-consuming task, a portion of the attentional resources remains allocated to the object. Such a portion is large enough to disturb performance, but not so large that the object can be recognized as task-irrelevant and accordingly ignored. Our findings have one counterintuitive implication: irrelevant stimuli might hamper some types of performance only when perceived subliminally.

7.
We present a computational framework for attention-guided visual scene exploration in sequences of RGB-D data. For this, we propose a visual object candidate generation method to produce object hypotheses about the objects in the scene. An attention system is used to prioritise the processing of visual information by (1) localising candidate objects, and (2) integrating an inhibition of return (IOR) mechanism grounded in spatial coordinates. This spatial IOR mechanism naturally copes with camera motions and inhibits objects that have already been the target of attention. Our approach provides object candidates which can be processed by higher cognitive modules such as object recognition. Since objects are basic elements for many higher level tasks, our architecture can be used as a first layer in any cognitive system that aims at interpreting a stream of images. We show in the evaluation how our framework finds most of the objects in challenging real-world scenes.
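As an illustration of the kind of pipeline this entry describes, the Python sketch below prioritises candidate objects by saliency while an inhibition-of-return memory kept in world coordinates prevents already-attended objects from being revisited as the camera moves. This is not the authors' implementation: the SpatialIOR class, the candidate format, the 0.3 m inhibition radius, and the 4x4 camera-pose convention are all assumptions made for the example.

```python
import numpy as np

class SpatialIOR:
    """Remembers attended 3-D locations so they are not revisited (illustrative)."""

    def __init__(self, radius=0.3):
        self.radius = radius      # inhibition radius in metres (assumed value)
        self.inhibited = []       # world-frame points already attended

    def is_inhibited(self, point_world):
        return any(np.linalg.norm(point_world - p) < self.radius
                   for p in self.inhibited)

    def add(self, point_world):
        self.inhibited.append(np.asarray(point_world, dtype=float))


def next_fixation(candidates, camera_pose, ior):
    """Pick the most salient candidate that has not yet been attended.

    candidates  : list of (saliency, xyz_in_camera_frame) tuples
    camera_pose : 4x4 camera-to-world transform; keeping IOR in world
                  coordinates is what lets it survive camera motion
    """
    best = None
    for saliency, xyz_cam in sorted(candidates, key=lambda c: -c[0]):
        xyz_world = (camera_pose @ np.append(xyz_cam, 1.0))[:3]
        if not ior.is_inhibited(xyz_world):
            best = xyz_world
            break
    if best is not None:
        ior.add(best)             # inhibit this location in future frames
    return best


# Toy usage: the same object seen from two camera poses is inhibited after
# the first fixation, so attention moves on to the next candidate.
ior = SpatialIOR()
pose1 = np.eye(4)                                   # camera at the origin
frame1 = [(0.9, np.array([0.0, 0.0, 1.0])),         # object A, 1 m ahead
          (0.5, np.array([0.4, 0.0, 1.2]))]         # object B
print(next_fixation(frame1, pose1, ior))            # attends object A

pose2 = np.eye(4)
pose2[0, 3] = 0.1                                   # camera moved 0.1 m to the right
frame2 = [(0.9, np.array([-0.1, 0.0, 1.0])),        # object A in the new view
          (0.5, np.array([0.3, 0.0, 1.2]))]         # object B in the new view
print(next_fixation(frame2, pose2, ior))            # A is inhibited, so B is attended
```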

8.
Repeated scene layouts facilitate observers' search for target items (the contextual cueing effect). Using a dual-task paradigm, the present study added a spatial working memory task to the learning phase (Experiment 2a) and to the test phase (Experiment 2b) of a visual search task, and compared performance against a single-task baseline (Experiment 1), in order to examine how spatial working memory load affects the learning of contextual cues and the expression of the contextual cueing effect in real-world scene search. The results showed that spatial load increased the magnitude of the contextual cueing effect in the learning phase but reduced it in the test phase, without affecting the explicit retrieval of contextual cues. Thus, both the learning of contextual cues and the expression of the cueing effect in real-world scenes are constrained by limited working memory resources, whereas the explicit nature of contextual cue retrieval is unchanged.

9.
Visual marking is a mechanism by which new visual stimuli can gain a selection advantage by the top-down attentional inhibition of stimuli already in the field. Previous work (Olivers, Watson, & Humphreys, 1999) has shown that, for moving stimuli, there must be a unique feature difference between the old items and the new items for marking to occur. The present study shows that this constraint is not necessary if the local spatial relationships between the old moving items remain constant. It is proposed that, with a fixed configuration, the old moving items can be grouped to form a single object. An inhibitory template set up to represent the object then coordinates the application of inhibition to the individual stimuli. Implications for the theory and ecological flexibility of visual marking are discussed.

10.
胡艳梅, 张明, 徐展, 李毕琴. 心理学报 (Acta Psychologica Sinica), 2013, 45(2): 127-138.
Two experiments verified that the guidance of attention by the contents of object working memory is flexible and controllable: stimuli matching the contents of object working memory can either capture attention or be suppressed. Experiment 1 induced different levels of suppression motivation by manipulating the probability of matching trials, in order to examine the influence of suppression motivation on the guidance of attention by object working memory contents. On matching trials, one distractor was identical to the content held in object working memory; on control trials, all search items differed from the memory content. Experiment 2 kept the actual probability of matching trials constant and adjusted the level of suppression motivation through the instructions alone, so as to rule out practice effects. Both experiments showed that when suppression motivation was low, matching distractors captured attention, whereas when suppression motivation was sufficiently high, matching distractors were suppressed. Moreover, the level of suppression motivation also affected the effect size and time course of cognitive control.

11.
Many experiments have shown that the human visual system makes extensive use of contextual information for facilitating object search in natural scenes. However, the question of how to formally model contextual influences is still open. On the basis of a Bayesian framework, the authors present an original approach to attentional guidance by global scene context. The model comprises 2 parallel pathways; one pathway computes local features (saliency) and the other computes global (scene-centered) features. The contextual guidance model of attention combines bottom-up saliency, scene context, and top-down mechanisms at an early stage of visual processing and predicts the image regions likely to be fixated by human observers performing natural search tasks in real-world scenes.
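To make the two-pathway idea concrete, the sketch below combines a precomputed local saliency map with a scene-context prior over image locations using a simple pointwise product. This is a deliberate simplification and not the published Bayesian formulation; the function names, map sizes, and toy values are assumptions made for illustration.

```python
import numpy as np

def combine_maps(local_saliency, context_prior, eps=1e-8):
    """Combine the two pathways into a single priority map.

    local_saliency : H x W array from the local-feature (saliency) pathway
    context_prior  : H x W array expressing where the scene gist says the
                     target is likely to be
    A pointwise product is used here purely as a stand-in for the Bayesian
    combination described in the abstract.
    """
    combined = local_saliency * context_prior
    return combined / (combined.sum() + eps)   # normalise to sum to 1

def predicted_fixation(priority_map):
    """Return the (row, col) of the most likely fixated location."""
    return np.unravel_index(np.argmax(priority_map), priority_map.shape)

# Toy example: a high-contrast point in the upper part of the image is
# down-weighted because the context prior concentrates mass near the ground.
H, W = 60, 80
rng = np.random.default_rng(0)
saliency = rng.random((H, W))
saliency[5, 40] = 5.0                 # strong local contrast high in the image
context = np.zeros((H, W))
context[40:, :] = 1.0                 # gist-based prior: "look low in the scene"
priority = combine_maps(saliency, context)
print(predicted_fixation(priority))   # falls in the lower region of the image
```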

12.
In this study, we examined whether the detection of frontal, ¾, and profile face views differs from their categorization as faces. In Experiment 1, we compared three tasks that required observers to determine the presence or absence of a face, but varied in the extent to which participants had to search for the faces in simple displays and in small or large scenes to make this decision. Performance was equivalent for all of the face views in simple displays and small scenes, but it was notably slower for profile views when this required the search for faces in extended scene displays. This search effect was confirmed in Experiment 2, in which we compared observers’ eye movements with their response times to faces in visual scenes. These results demonstrate that the categorization of faces at fixation is dissociable from the detection of faces in space. Consequently, we suggest that face detection should be studied with extended visual displays, such as natural scenes.

13.
When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

14.
Everyday tasks often require us to keep track of multiple objects in dynamic scenes. Past studies show that tracking becomes more difficult as objects move faster. In the present study, we show that this trade-off may not be due to increased speed itself but may, instead, be due to the increased crowding that usually accompanies increases in speed. Here, we isolate changes in speed from variations in crowding, by projecting a tracking display either onto a small area at the center of a hemispheric projection dome or onto the entire dome. Use of the larger display increased retinal image size and object speed by a factor of 4 but did not increase interobject crowding. Results showed that tracking accuracy was equally good in the large-display condition, even when the objects traveled far into the visual periphery. Accuracy was also not reduced when we tested object speeds that limited performance in the small-display condition. These results, along with a reinterpretation of past studies, suggest that we might be able to track multiple moving objects as fast as we can a single moving object, once the effect of object crowding is eliminated.

15.
How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target.
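For readers unfamiliar with the efficiency measure used in this entry, the slope of the RT × Set Size function is the least-squares slope of mean reaction time against the number of items (here, labeled regions serve as the set-size surrogate). The short sketch below shows the computation with invented data points; they are not values from the study.

```python
import numpy as np

# Hypothetical mean RTs (ms) at four set sizes; the numbers are invented.
set_sizes = np.array([4, 8, 12, 16])        # e.g., number of labeled regions
mean_rts = np.array([620, 660, 700, 740])   # mean correct RT in milliseconds

# Ordinary least-squares fit of RT against set size.
slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.0f} ms")

# For comparison with the abstract: ~5 ms/item counted as very efficient
# search, whereas ~40 ms/item (their Experiment 3) was far less efficient.
```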

16.
Research on contextual cueing has demonstrated that, with simple arrays of letters and shapes, search for a target increases in efficiency as associations between a search target and its surrounding visual context are learned. We investigated whether the visual context afforded by repeated exposure to real-world scenes can also guide attention when the relationship between the scene and a target position is arbitrary. Observers searched for and identified a target letter embedded in photographs of real-world scenes. Although search time within novel scenes was consistent across trials, search time within repeated scenes decreased across repetitions. Unlike previous demonstrations of contextual cueing, however, memory for scene-target covariation was explicit. In subsequent memory tests, observers recognized repeated contexts more often than those that were presented once and displayed superior recall of target position within the repeated scenes. In addition, repetition of inverted scenes, which made the scenes more difficult to identify, produced a markedly reduced rate of learning, suggesting that semantic information concerning object and scene identity is used to guide attention.

17.
Because the importance of color in visual tasks such as object identification and scene memory has been debated, we sought to determine whether color is used to guide visual search in contextual cuing with real-world scenes. In Experiment 1, participants searched for targets in repeated scenes that were shown in one of three conditions: natural colors, unnatural colors that remained consistent across repetitions, and unnatural colors that changed on every repetition. We found that the pattern of learning was the same in all three conditions. In Experiment 2, we did a transfer test in which the repeating scenes were shown in consistent colors that suddenly changed on the last block of the experiment. The color change had no effect on search times, relative to a condition in which the colors did not change. In Experiments 3 and 4, we replicated Experiments 1 and 2, using scenes from a color-diagnostic category of scenes, and obtained similar results. We conclude that color is not used to guide visual search in real-world contextual cuing, a finding that constrains the role of color in scene identification and recognition processes.

18.
Previous research has observed that the size of age differences in short-term memory (STM) depends on the type of material to be remembered, but has not identified the mechanism underlying this pattern. The current study focused on visual STM and examined the contribution of information load, as estimated by the rate of visual search, to STM for two types of stimuli – meaningful and abstract objects. Results demonstrated higher information load and lower STM for abstract objects. Age differences were greater for abstract than meaningful objects in visual search, but not in STM. Nevertheless, older adults demonstrated a decreased capacity in visual STM for meaningful objects. Furthermore, in support of Salthouse's processing speed theory, controlling for search rates eliminated all differences in STM related to object type and age. The overall pattern of findings suggests that STM for visual objects is dependent upon processing rate, regardless of age or object type.

19.
How do observers search through familiar scenes? A novel panoramic search method is used to study the interaction of memory and vision in natural search behavior. In panoramic search, observers see part of an unchanging scene larger than their current field of view. A target object can be visible, present in the display but hidden from view, or absent. Visual search efficiency does not change after hundreds of trials through an unchanging scene (Experiment 1). Memory search, in contrast, begins inefficiently but becomes efficient with practice. Given a choice between vision and memory, observers choose vision (Experiments 2 and 3). However, if forced to use their memory on some trials, they learn to use memory on all trials, even when reliable visual information remains available (Experiment 4). The results suggest that observers make a pragmatic choice between vision and memory, with a strong bias toward visual search even for memorized stimuli.
