Similar Articles
20 similar articles found (search time: 234 ms)
1.
The present study examined the extent to which learning mechanisms are deployed on semantic-categorical regularities during visual search within real-world scenes. The contextual cueing paradigm was used with photographs of indoor scenes in which the semantic category did or did not predict the target position on the screen. No evidence of a facilitation effect was observed in the predictive condition compared to the nonpredictive condition when participants were merely instructed to search for a target T or L (Experiment 1). However, a rapid contextual cueing effect occurred when each display containing the search target was preceded by a preview of the scene, on which participants had to make a decision regarding the scene's category (Experiment 2). A follow-up explicit memory task indicated that this benefit resulted from implicit learning. Similar implicit contextual cueing effects were also obtained when the scene to categorize was different from the subsequent search scene (Experiment 3) and when a mere preview of the search scene preceded the visual search (Experiment 4). These results suggest that, although the present material required enhanced processing of the scene for learning to occur, such implicit semantic learning can nevertheless take place when the category is task irrelevant.
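The cueing effect described above is typically quantified as the search-time advantage for predictive (repeated) over nonpredictive (novel) displays across learning epochs. A minimal sketch of that computation, using entirely hypothetical per-epoch mean reaction times (all values illustrative, not from the study):

```python
import statistics

# Hypothetical mean search times (ms) per learning epoch. In the paradigm,
# a contextual cueing effect is the RT advantage for repeated (predictive)
# displays over novel (nonpredictive) ones, growing as learning proceeds.
rt_predictive = [820, 790, 760, 730]      # repeated scenes
rt_nonpredictive = [830, 825, 828, 824]   # novel scenes

def cueing_effect(novel, repeated):
    """Per-epoch contextual cueing effect: novel-minus-repeated RT (ms)."""
    return [n - r for n, r in zip(novel, repeated)]

effects = cueing_effect(rt_nonpredictive, rt_predictive)
print(effects)                    # → [10, 35, 68, 94]
print(statistics.mean(effects))   # → 51.75
```

A positive difference that grows across epochs is taken as evidence that the repeated context is being learned and is guiding attention.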

2.
Using the contextual cueing paradigm, this study examined how memory guides school-age children's attention in real-world scenes. The results showed that: (1) in the search task, children's search for targets in repeated scenes improved across learning blocks, yielding a significant contextual cueing effect, which did not appear for novel scenes; (2) in the recall task, children's memory for repeated scenes and their covarying target locations was better than for novel scenes, and significantly above chance. These results indicate that, in real-world scenes, memory for the background and for target-background covariation guides school-age children's attentional allocation, that this guidance becomes more effective with experience, and that the memory is explicit.

3.
Repeated display layouts facilitate observers' search for targets (the contextual cueing effect). Using a dual-task paradigm, this study added a spatial working memory task to the learning phase (Experiment 2a) and the test phase (Experiment 2b) of a visual search task, and compared them with a single-task baseline (Experiment 1), to examine how spatial working memory load affects the learning of contextual cues and the expression of the contextual cueing effect in real-world scene search. The results showed that spatial load increased the magnitude of the contextual cueing effect during learning but reduced it during test, without affecting explicit retrieval of contextual cues. Thus, both the learning of contextual cues and the expression of the cueing effect in real-world scenes draw on limited working memory resources, whereas the explicit nature of contextual cue retrieval remains unchanged.

4.
In contextual cueing, the position of a search target is learned over repeated exposures to a visual display. The strength of this effect varies across stimulus types. For example, real-world scene contexts give rise to larger search benefits than contexts composed of letters or shapes. We investigated whether such differences in learning can be at least partially explained by the degree of semantic meaning associated with a context independently of the nature of the visual information available (which also varies across stimulus types). Chess boards served as the learning context as their meaningfulness depends on the observer's knowledge of the game. In Experiment 1, boards depicted actual game play, and search benefits for repeated boards were 4 times greater for experts than for novices. In Experiment 2, search benefits among experts were halved when less meaningful randomly generated boards were used. Thus, stimulus meaningfulness independently contributes to learning context-target associations.

6.
Because the importance of color in visual tasks such as object identification and scene memory has been debated, we sought to determine whether color is used to guide visual search in contextual cuing with real-world scenes. In Experiment 1, participants searched for targets in repeated scenes that were shown in one of three conditions: natural colors, unnatural colors that remained consistent across repetitions, and unnatural colors that changed on every repetition. We found that the pattern of learning was the same in all three conditions. In Experiment 2, we did a transfer test in which the repeating scenes were shown in consistent colors that suddenly changed on the last block of the experiment. The color change had no effect on search times, relative to a condition in which the colors did not change. In Experiments 3 and 4, we replicated Experiments 1 and 2, using scenes from a color-diagnostic category of scenes, and obtained similar results. We conclude that color is not used to guide visual search in real-world contextual cuing, a finding that constrains the role of color in scene identification and recognition processes.

7.
Contextual cueing refers to the cueing of spatial attention by repeated spatial context. Previous studies have demonstrated distinctive properties of contextual cueing by background scenes and by an array of search items. Whereas scene-based contextual cueing reflects explicit learning of the scene–target association, array-based contextual cueing is supported primarily by implicit learning. In this study, we investigated the interaction between scene-based and array-based contextual cueing. Participants searched for a target that was predicted by both the background scene and the locations of distractor items. We tested three possible patterns of interaction: (1) The scene and the array could be learned independently, in which case cueing should be expressed even when only one cue was preserved; (2) the scene and array could be learned jointly, in which case cueing should occur only when both cues were preserved; (3) overshadowing might occur, in which case learning of the stronger cue should preclude learning of the weaker cue. In several experiments, we manipulated the nature of the contextual cues present during training and testing. We also tested explicit awareness of scenes, scene–target associations, and arrays. The results supported the overshadowing account: Specifically, scene-based contextual cueing precluded array-based contextual cueing when both were predictive of the location of a search target. We suggest that explicit, endogenous cues dominate over implicit cues in guiding spatial attention.

8.
How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance and to explain challenging behavioral data on positive-negative, spatial-object, and local-distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex.

9.
The contextual cueing effect shows that, during visual search, learning of stable spatial relations among stimuli (invariant relative positions) improves search efficiency. Building on the classic contextual cueing effect and its mechanism of implicit learning of spatial layouts, together with theories of visual search in real-world scenes, this paper reviews the experimental paradigms and the nature and content of learning in real-world-scene contextual cueing, and discusses the visual information that influences the effect along two dimensions: low-level physical features and high-level semantic information. Although current research has addressed how different dimensions of scene information are processed in real-world-scene contextual cueing, the categories of scene information that are effective, and the stages at which they operate, have received little attention and call for further investigation.

10.
When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

12.
When encountering familiar scenes, observers can use item-specific memory to facilitate the guidance of attention to objects appearing in known locations or configurations. Here, we investigated how memory for relational contingencies that emerge across different scenes can be exploited to guide attention. Participants searched for letter targets embedded in pictures of bedrooms. In a between-subjects manipulation, targets were either always on a bed pillow or randomly positioned. When targets were systematically located within scenes, search for targets became more efficient. Importantly, this learning transferred to bedrooms without pillows, ruling out learning that is based on perceptual contingencies. Learning also transferred to living room scenes, but it did not transfer to kitchen scenes, even though both scene types contained pillows. These results suggest that statistical regularities abstracted across a range of stimuli are governed by semantic expectations regarding the presence of target-predicting local landmarks. Moreover, explicit awareness of these contingencies led to a central tendency bias in recall memory for precise target positions that is similar to the spatial category effects observed in landmark memory. These results broaden the scope of conditions under which contextual cuing operates and demonstrate how semantic memory plays a causal and independent role in the learning of associations between objects in real-world scenes.

13.
In visual search, detection of a target is faster when a layout of nontarget items is repeatedly encountered, suggesting that contextual invariances can guide attention. Moreover, contextual cueing can also adapt to environmental changes. For instance, when the target undergoes a predictable (i.e., learnable) location change, then contextual cueing remains effective even after the change, suggesting that a learned context is “remapped” and adjusted to novel requirements. Here, we explored the stability of contextual remapping: Four experiments demonstrated that target location changes are only effectively remapped when both the initial and the future target positions remain predictable across the entire experiment. Otherwise, contextual remapping fails. In sum, this pattern of results suggests that multiple, predictable target locations can be associated with a given repeated context, allowing the flexible adaptation of previously learned contingencies to novel task demands.

14.
The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search.

15.
The contextual cueing effect is the improvement in search efficiency produced by repeated scenes. Previous work has found contextual cueing in three-dimensional scenes, but empirical evidence is lacking on how depth cues in 3D scenes affect whether a spatial scene provides effective cues that guide attention. Using matrix stimuli and varying the number of stimulus depth levels, this study examined how depth cues modulate the contextual cueing effect. The results showed that: (1) when the scene layout consisted of relatively few items, depth cues provided an effective association between targets and stimulus items, and a contextual cueing effect was obtained; (2) when the number of depth levels reached a certain point, depth cues increased the difficulty of the search task and interfered with the guidance of attention by contextual information in repeated scenes, and the contextual cueing effect disappeared. These results provide evidence for how depth information in 3D scenes affects the guidance of attention in spatial scenes.

16.
The repetition of spatial layout implicitly facilitates visual search (contextual cueing effect; Chun & Jiang, 1998). Although a substantial number of studies have explored the mechanism underlying the contextual cueing effect, the manner in which contextual information guides spatial attention to a target location during a visual search remains unclear. We investigated the nature of attentional modulation by contextual cueing, using a hybrid paradigm of a visual search task and a probe dot detection task. In the case of a repeated spatial layout, detection of a probe dot was facilitated at a search target location and was inhibited at distractor locations relative to nonrepeated spatial layouts. Furthermore, these facilitatory and inhibitory effects possessed different learning properties across epochs (Experiment 1) and different time courses within a trial (Experiment 2). These results suggest that contextual cueing modulates attentional processing via both facilitation to the location of “to-be-attended” stimuli and inhibition to the locations of “to-be-ignored” stimuli.

17.
Using indoor 3D scene images as materials and eye-tracking techniques, this study examined how contextual cues guide attention during visual search in real-world scenes. The results showed that a contextual cueing effect exists in target search within indoor scenes and is based on explicit memory of the context-target covariation. Contextual cues did not affect the initiation and verification phases of search, but facilitated fixation behavior during the scanning phase, helping observers select fixation regions more effectively and direct fixations more directly to the target location.

18.
In contextual cueing, the position of a target within a group of distractors is learned over repeated exposure to a display with reference to a few nearby items rather than to the global pattern created by the elements. The authors contrasted the role of global and local contexts for contextual cueing in naturalistic scenes. Experiment 1 showed that learned target positions transfer when local information is altered but not when global information is changed. Experiment 2 showed that scene-target covariation is learned more slowly when local, but not global, information is repeated across trials than when global but not local information is repeated. Thus, in naturalistic scenes, observers are biased to associate target locations with global contexts.

19.
Visual search for a target object is facilitated when the object is repeatedly presented within an invariant context of surrounding items ("contextual cueing"; Chun & Jiang, Cognitive Psychology, 36, 28-71, 1998). The present study investigated whether such invariant contexts can cue more than one target location. In a series of three experiments, we showed that contextual cueing is significantly reduced when invariant contexts are paired with two rather than one possible target location, whereas no contextual cueing occurs with three distinct target locations. Closer data inspection revealed that one "dominant" target always exhibited substantially more contextual cueing than did the other, "minor" target(s), which caused negative contextual-cueing effects. However, minor targets could benefit from the invariant context when they were spatially close to the dominant target. In sum, our experiments suggest that contextual cueing can guide visual attention to a spatially limited region of the display, only enhancing the detection of targets presented inside that region.

20.
Many experiments have shown that the human visual system makes extensive use of contextual information for facilitating object search in natural scenes. However, the question of how to formally model contextual influences is still open. On the basis of a Bayesian framework, the authors present an original approach of attentional guidance by global scene context. The model comprises 2 parallel pathways; one pathway computes local features (saliency) and the other computes global (scene-centered) features. The contextual guidance model of attention combines bottom-up saliency, scene context, and top-down mechanisms at an early stage of visual processing and predicts the image regions likely to be fixated by human observers performing natural search tasks in real-world scenes.
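The two-pathway combination this abstract describes can be sketched as a pointwise product of a local saliency map and a context-derived prior over target locations. This is a toy illustration under simplifying assumptions: the 4×4 maps, their values, and the bare multiplicative rule are illustrative stand-ins, not the published model's exact equations.

```python
import numpy as np

# Minimal sketch of the two-pathway idea: a bottom-up saliency map
# (local-feature pathway) is weighted by a global scene-context prior
# over likely target locations (scene-centered pathway), yielding a
# priority map over image regions likely to be fixated.
def contextual_guidance(saliency, context_prior):
    """Combine local saliency with a scene-context location prior and
    normalize the result into a distribution over image locations."""
    priority = saliency * context_prior
    return priority / priority.sum()

# Toy example: arbitrary saliency values; the scene context concentrates
# probability on row 1 (e.g., "targets of this kind sit at mid-height").
saliency = np.array([[0.2, 0.9, 0.4, 0.1],
                     [0.3, 0.8, 0.5, 0.6],
                     [0.7, 0.2, 0.9, 0.3],
                     [0.1, 0.4, 0.2, 0.5]])
context_prior = np.zeros((4, 4))
context_prior[1, :] = 1.0

pmap = contextual_guidance(saliency, context_prior)
row, col = np.unravel_index(pmap.argmax(), pmap.shape)
print(int(row), int(col))  # → 1 1 (row fixed by context, column by saliency)
```

Note how the context prior restricts the candidate region while saliency selects the most conspicuous location within it, which is the qualitative behavior the model attributes to early combination of the two pathways.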


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号