Similar Articles
20 similar articles found (search time: 15 ms)
1.
Implicit contextual cueing refers to a top-down mechanism in which visual search is facilitated by learned contextual features. In the current study we aimed to investigate the mechanism underlying implicit contextual learning using object information as a contextual cue. Therefore, we measured eye movements during an object-based contextual cueing task. We demonstrated that visual search is facilitated by repeated object information and that this reduction in response times is associated with shorter fixation durations. This indicates that by memorizing associations between objects in our environment we can recognize objects faster, thereby facilitating visual search.

2.
We investigated whether implicit learning in a visual search task would influence preferences for visual stimuli. Participants performed a contextual cueing task in which they searched for visual targets, the locations of which were either predicted or not predicted by the positioning of distractors. The speed with which participants located the targets increased across trials more rapidly for predictive displays than for non-predictive displays, consistent with contextual cueing. Participants were subsequently asked to rate the "goodness" of visual displays. The rating results showed that they preferred predictive displays to both non-predictive and novel displays. The participants did not recognize predictive displays any more frequently than they did non-predictive or novel displays. These results suggest that contextual cueing occurred implicitly and that the implicit learning of visual layouts promotes a preference for visual layouts that are predictive of target location.

3.
Invariant spatial context can facilitate visual search. For instance, detection of a target is faster if it is presented within a repeatedly encountered, as compared to a novel, layout of nontargets, demonstrating a role of contextual learning for attentional guidance ('contextual cueing'). Here, we investigated how context-based learning adapts to target location (and identity) changes. Three experiments were performed in which, in an initial learning phase, observers learned to associate a given context with a given target location. A subsequent test phase then introduced identity and/or location changes to the target. The results showed that contextual cueing could not compensate for target changes that were not 'predictable' (i.e. learnable). However, for predictable changes, contextual cueing remained effective even immediately after the change. These findings demonstrate that contextual cueing is adaptive to predictable target location changes. Under these conditions, learned contextual associations can be effectively 'remapped' to accommodate new task requirements.

4.
Repeated display layouts facilitate observers' search for target items (the contextual cueing effect). Using a dual-task paradigm, the present study added a spatial working memory task to the learning phase (Experiment 2a) or the test phase (Experiment 2b) of a visual search task and compared both with a single-task baseline (Experiment 1), in order to examine how spatial working memory load affects the learning of contextual cues and the expression of the contextual cueing effect in real-world scene search. The results showed that spatial load increased the magnitude of the contextual cueing effect in the learning phase, weakened it in the test phase, and did not affect the explicit retrieval of contextual cues. Thus, in real-world scenes, both the learning of contextual cues and the expression of the cueing effect are constrained by limited working memory resources, whereas the explicitness of contextual cue retrieval remains unchanged.

5.
When spatial stimulus configurations repeat in visual search, a search facilitation, resulting in shorter search times, can be observed that is due to incidental learning. This contextual cueing effect appears to be rather implicit, uncorrelated with observers' explicit memory of display configurations. Nevertheless, as I review here, this search facilitation due to contextual cueing depends on visuospatial working memory resources, and it disappears when visuospatial working memory is loaded by a concurrent delayed match-to-sample task. However, the search facilitation immediately recovers for displays learnt under visuospatial working memory load when this load is removed in a subsequent test phase. Thus, latent learning of visuospatial configurations does not depend on visuospatial working memory, but the expression of learning, as memory-guided search in repeated displays, does. This working memory dependence also has consequences for visual search with foveal vision loss, where top-down controlled visual exploration strategies pose high demands on visuospatial working memory, in this way interfering with memory-guided search in repeated displays. Converging evidence for the contribution of working memory to contextual cueing comes from neuroimaging data demonstrating that distinct cortical areas along the intraparietal sulcus as well as more ventral parieto-occipital cortex are jointly activated by visual working memory and contextual cueing.

6.
In visual search, detection of a target in a repeated layout is faster than search within a novel arrangement, demonstrating that contextual invariances can implicitly guide attention to the target location ("contextual cueing"; Chun & Jiang, 1998). Here, we investigated how display segmentation processes influence contextual cueing. Seven experiments showed that grouping by colour and by size can considerably reduce contextual cueing. However, selectively attending to a relevant subgroup of items (that contains the target) preserved context-based learning effects. Finally, the reduction of contextual cueing by means of grouping affected both the latent learning and the recall of display layouts. In sum, all experiments show an influence of grouping on contextual cueing. This influence is larger for variations of spatial (as compared to surface) features and is consistent with the view that learning of contextual relations critically interferes with processes that segment a display into segregated groups of items.

7.
Repeatedly encountering a visual search display with the target located at a fixed position relative to the distractors facilitates target detection, relative to novel displays – which is attributed to search guidance by (acquired) long-term memory (LTM) of the distractor 'context' of the target. Previous research has shown that this 'contextual cueing' effect is severely impeded during learning when participants have to perform a demanding spatial working memory (WM) task concurrently with the search task, though it does become manifest when the WM task is removed. This has led to the proposal that search guidance by LTM context memories critically depends on spatial WM to become 'expressed' in behaviour. On this background, this study, of two experiments, asked: (1) Would contextual cueing eventually emerge under dual-task learning conditions if the practice on the task(s) is extended beyond the short training implemented in previous studies? and, given sufficient practice, (2) Would performing the search under dual-task conditions actually lead to an increased cueing effect compared to performing the visual search task alone? The answer is affirmative to both questions. In particular, Experiment 1 showed that a robust contextual cueing effect emerges within 360–720 dual-task trials as compared to some 240 single-task trials. Further, Experiment 2 showed that when dual- and single-task conditions are performed in alternating trial blocks, the cueing effect for the very same set of repeated displays is significantly larger in dual-task blocks than in single-task blocks. This pattern of effects suggests that dual-task practice eventually leads to direct, or 'automatic', guidance of visual search by learnt spatial LTM representations, bypassing WM processes. These processes, which are normally engaged in single-task performance, might actually interfere with direct LTM-based search guidance.

8.
The repetition of spatial layout implicitly facilitates visual search (contextual cueing effect; Chun & Jiang, 1998). Although a substantial number of studies have explored the mechanism underlying the contextual cueing effect, the manner in which contextual information guides spatial attention to a target location during a visual search remains unclear. We investigated the nature of attentional modulation by contextual cueing, using a hybrid paradigm of a visual search task and a probe dot detection task. In the case of a repeated spatial layout, detection of a probe dot was facilitated at a search target location and was inhibited at distractor locations relative to nonrepeated spatial layouts. Furthermore, these facilitatory and inhibitory effects possessed different learning properties across epochs (Experiment 1) and different time courses within a trial (Experiment 2). These results suggest that contextual cueing modulates attentional processing via both facilitation to the location of "to-be-attended" stimuli and inhibition to the locations of "to-be-ignored" stimuli.

9.
The contextual cueing effect shows that, during visual search, learning stable spatial relations among stimuli (invariant relative spatial positions between stimuli) improves search efficiency. Building on the classic contextual cueing effect and its mechanism of implicitly learned spatial layouts, and drawing on theories of visual search in real-world scenes, this article reviews the experimental paradigms and the nature and content of learning in real-scene contextual cueing, and discusses the visual information that influences the effect along two dimensions: low-level physical features and high-level semantic information. Although current research has addressed how scene information at different levels is processed in real-scene contextual cueing, the categories of scene information that play a role, and the stages at which they operate, have received little attention and call for further investigation.

10.
Invariant spatial layout information in the visual environment can guide observers' attention rapidly to a particular location and facilitate recognition of the target object at that location, a phenomenon known as the spatial contextual cueing effect. Based on a systematic review of previous research, this article analyzes the classic studies and experimental paradigms of spatial contextual cueing; the nature, content, and process of spatial contextual learning; and the mechanisms and neural bases of the effect. It concludes by summarizing five controversial issues in past research and suggests that future studies could resolve them by manipulating key variables such as study materials and task difficulty.

11.
Using the contextual cueing paradigm, this study examined how memory guides school-aged children's attention in real-world scenes. The results showed: (1) In the search task, children's target search in repeated scenes improved as learning epochs progressed, yielding a significant contextual cueing effect, which did not appear in novel scenes. (2) In the recall task, children's memory for repeated scenes and their covarying target locations was better than for novel scenes, and significantly above chance. These results indicate that, in real-world scenes, memory for the background and for target-background covariation guides the distribution of children's attention, that this guidance becomes more effective with experience, and that the memory is explicit.

12.
The present study examined the extent to which learning mechanisms are deployed on semantic-categorical regularities during visual search within real-world scenes. The contextual cueing paradigm was used with photographs of indoor scenes in which the semantic category did or did not predict the target position on the screen. No evidence of a facilitation effect was observed in the predictive condition compared to the nonpredictive condition when participants were merely instructed to search for a target T or L (Experiment 1). However, a rapid contextual cueing effect occurred when each display containing the search target was preceded by a preview of the scene on which participants had to make a decision regarding the scene's category (Experiment 2). A follow-up explicit memory task indicated that this benefit resulted from implicit learning. Similar implicit contextual cueing effects were also obtained when the scene to categorize was different from the subsequent search scene (Experiment 3) and when a mere preview of the search scene preceded the visual search (Experiment 4). These results suggest that, although enhanced processing of the scene was required with the present material, such implicit semantic learning can nevertheless take place when the category is task-irrelevant.

13.
The effect of selective attention on implicit learning was tested in four experiments using the "contextual cueing" paradigm (Chun & Jiang, 1998, 1999). Observers performed visual search through items presented in an attended colour (e.g., red) and an ignored colour (e.g., green). When the spatial configuration of items in the attended colour was invariant and was consistently paired with a target location, visual search was facilitated, showing contextual cueing (Experiments 1, 3, and 4). In contrast, repeating and pairing the configuration of the ignored items with the target location resulted in no contextual cueing (Experiments 2 and 4). We conclude that implicit learning is robust only when relevant, predictive information is selectively attended.

14.
In a series of experiments, we investigated the dependence of contextual cueing on working memory resources. A visual search task with 50 % repeated displays was run in order to elicit the implicit learning of contextual cues. The search task was combined with a concurrent visual working memory task either during an initial learning phase or a later test phase. The visual working memory load was either spatial or nonspatial. Articulatory suppression was used to prevent verbalization. We found that nonspatial working memory load had no effect, independent of presentation in the learning or test phase. In contrast, visuospatial load diminished search facilitation in the test phase, but not during learning. We concluded that visuospatial working memory resources are needed for the expression of previously learned spatial contexts, whereas the learning of contextual cues does not depend on visuospatial working memory.

15.
The contextual cueing effect refers to the phenomenon whereby individuals improve their search efficiency during visual search by learning repeated, invariant contextual information. This article examines the interaction between the contextual cueing effect and selective attention mechanisms from three perspectives: behavioural characteristics, eye-movement characteristics, and neural activity. Many mutually conflicting findings exist in the current literature, which complicates the understanding of attentional mechanisms. Future research should employ cognitive neuroscience techniques to gather more empirical data and explore how attentional mechanisms influence the contextual cueing effect, in order to further refine the theory of attention underlying contextual cueing.

16.
Recent research has shown that simple motor actions, such as pointing or grasping, can modulate the way we perceive and attend to our visual environment. Here we examine the role of action in spatial context learning. Previous studies using keyboard responses have revealed that people are faster locating a target on repeated visual search displays ("contextual cueing"). However, this learning appears to depend on the task and response requirements. In Experiment 1, participants searched for a T-target among L-distractors and responded either by pressing a key or by touching the screen. Comparable contextual cueing was found in both response modes. Moreover, learning transferred between keyboard and touch screen responses. Experiment 2 showed that learning occurred even for repeated displays that required no response, and this learning was as strong as learning for displays that required a response. Learning on no-response trials cannot be accounted for by oculomotor responses, as learning was observed when eye movements were discouraged (Experiment 3). We suggest that spatial context learning is abstracted from motor actions.

18.
The development of contextual cueing specifically in relation to attention was examined in two experiments. Adult and 10-year-old participants completed a contextual cueing visual search task (Jiang & Chun, The Quarterly Journal of Experimental Psychology, 54A(4), 1105-1124, 2001) containing stimuli presented in an attended (e.g., red) and unattended (e.g., green) color. When the spatial configuration of stimuli in the attended and unattended color was invariant and consistently paired with the target location, adult reaction times improved, demonstrating learning. Learning also occurred if only the attended stimuli's configuration remained fixed. In contrast, while 10-year-olds, like adults, showed incrementally slower reaction times as the number of attended stimuli increased, they did not show learning in the standard paradigm. However, they did show learning when the ratio of attended to unattended stimuli was high, irrespective of the total number of attended stimuli. Findings suggest children show efficient attentional guidance by color in visual search but differences in contextual cueing.

19.
How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance and to explain challenging behavioral data on positive-negative, spatial-object, and local-distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex.

20.
An oculomotor visual search task was used to investigate how participants follow the gaze of a non-predictive, task-irrelevant distractor face, and the way in which this gaze following is influenced by the face's emotional expression (fearful vs. happy) as well as by the participants' goal. Previous research has suggested that fearful expressions should produce stronger cueing effects than happy faces. Our results demonstrated that the degree to which the emotional expression influenced this gaze following varied as a function of the search target. When searching for a threatening target, participants were more likely to look in the direction of eye gaze on a fearful compared to a happy face. However, when searching for a pleasant target, this stronger cueing effect for fearful faces disappeared. Therefore, gaze following is influenced by contextual factors such as the emotional expression, as well as by the participant's goal.
