Similar Documents
20 similar documents were retrieved.
1.
In visual search, detection of a target is faster when a layout of nontarget items is repeatedly encountered, suggesting that contextual invariances can guide attention. Moreover, contextual cueing can also adapt to environmental changes. For instance, when the target undergoes a predictable (i.e., learnable) location change, then contextual cueing remains effective even after the change, suggesting that a learned context is “remapped” and adjusted to novel requirements. Here, we explored the stability of contextual remapping: Four experiments demonstrated that target location changes are only effectively remapped when both the initial and the future target positions remain predictable across the entire experiment. Otherwise, contextual remapping fails. In sum, this pattern of results suggests that multiple, predictable target locations can be associated with a given repeated context, allowing the flexible adaptation of previously learned contingencies to novel task demands.
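Across these abstracts, the contextual-cueing effect is quantified as the search-time advantage for repeated over novel displays, usually aggregated per epoch (block of trials). The following minimal Python sketch illustrates that computation; the trial data, column layout, and function name are invented for illustration and are not taken from any of the studies listed here.

```python
# Minimal sketch (hypothetical data): contextual-cueing effect per epoch,
# computed as mean RT(novel displays) - mean RT(repeated displays).
from statistics import mean

# Hypothetical trials: (epoch, condition, reaction time in ms)
trials = [
    (1, "repeated", 980), (1, "novel", 1010),
    (2, "repeated", 900), (2, "novel", 990),
    (3, "repeated", 850), (3, "novel", 985),
]

def cueing_effect(trials, epoch):
    """Contextual-cueing effect = mean RT(novel) - mean RT(repeated) within one epoch."""
    rts = {cond: [rt for e, c, rt in trials if e == epoch and c == cond]
           for cond in ("novel", "repeated")}
    return mean(rts["novel"]) - mean(rts["repeated"])

for epoch in (1, 2, 3):
    print(f"epoch {epoch}: cueing effect = {cueing_effect(trials, epoch):.0f} ms")
```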

2.
Contextual cueing occurs when repetitions of the distractor configuration are implicitly learned. This implicit learning leads to faster search times in repeated displays. Here, we investigated how search adapts to a change of the target location in old displays from a consistent location in the learning phase to a consistent new location in the transfer phase. In agreement with the literature, contextual cueing was accompanied by fewer fixations, a more efficient scan path and, specifically, an earlier onset of a monotonic gaze approach phase towards the target location in repeated displays. When the repeated context was no longer predictive of the old target location, search times and number of fixations for old displays increased to the level of novel displays. Along with this, scan paths for old and new displays became equally efficient. After the target location change, there was a bias of exploration towards the old target location, which soon disappeared. Thus, change of implicitly learned spatial relations between target and distractor configuration eliminated the advantageous effects of contextual cueing, but did not lead to a lasting impairment of search in repeated displays relative to novel displays.

3.
Contextual cueing refers to the cueing of spatial attention by repeated spatial context. Previous studies have demonstrated distinctive properties of contextual cueing by background scenes and by an array of search items. Whereas scene-based contextual cueing reflects explicit learning of the scene–target association, array-based contextual cueing is supported primarily by implicit learning. In this study, we investigated the interaction between scene-based and array-based contextual cueing. Participants searched for a target that was predicted by both the background scene and the locations of distractor items. We tested three possible patterns of interaction: (1) The scene and the array could be learned independently, in which case cueing should be expressed even when only one cue was preserved; (2) the scene and array could be learned jointly, in which case cueing should occur only when both cues were preserved; (3) overshadowing might occur, in which case learning of the stronger cue should preclude learning of the weaker cue. In several experiments, we manipulated the nature of the contextual cues present during training and testing. We also tested explicit awareness of scenes, scene–target associations, and arrays. The results supported the overshadowing account: Specifically, scene-based contextual cueing precluded array-based contextual cueing when both were predictive of the location of a search target. We suggest that explicit, endogenous cues dominate over implicit cues in guiding spatial attention.

4.
Under incidental learning conditions, a spatial layout can be acquired implicitly and facilitates visual search (the contextual cueing effect; Chun & Jiang, 1998). We investigated two aspects of this effect: whether spatial layouts of a 3D display are encoded automatically or require selective processing (Experiment 1), and whether the learned layouts are limited to 2D configurations or can encompass three dimensions (Experiment 2). In Experiment 1, participants searched for a target presented only in a specific depth plane. A contextual cueing effect was obtained only when the location of the items in the attended plane was invariant and consistently paired with a target location. In contrast, repeating and pairing the layout of the ignored items with the target location did not produce a contextual cueing effect. In Experiment 2, we found that reaction times for the repeated condition increased significantly when the disparity of the repeated distractors was reversed in the last block of trials. These results indicate that contextual cueing in 3D displays occurs when the layout is relevant and selectively attended, and that 3D layouts can be preserved as an implicit visual spatial context.

5.
Visual search is often facilitated when the search display occasionally repeats, revealing a contextual-cueing effect. According to the associative-learning account, contextual cueing arises from associating the display configuration with the target location. However, recent findings emphasizing the importance of local context near the target have given rise to the possibility that low-level repetition priming may account for the contextual-cueing effect. This study distinguishes associative learning from local repetition priming by testing whether search is directed toward a target's expected location, even when the target is relocated. After participants searched for a T among Ls in displays that repeated 24 times, they completed a transfer session where the target was relocated locally to a previously blank location (Experiment 1) or to an adjacent distractor location (Experiment 2). Results revealed that contextual cueing decreased as the target appeared farther away from its expected location, ultimately resulting in a contextual cost when the target swapped locations with a local distractor. We conclude that target predictability is a key factor in contextual cueing.

6.
The authors evaluated age-related variations in contextual cueing, which reflects the extent to which visuospatial regularities can facilitate search for a target. Previous research produced inconsistent results regarding contextual cueing effects in young children and in older adults, and no study has investigated the phenomenon across the life span. Three groups (6, 20, and 70 years old) were compared. Participants located a designated target stimulus embedded in a context of distractor stimuli. During exposure, the location of the target could be predicted from the location of the distractors in each display. During test, these predictable displays were intermixed with new displays that did not predict the target location. Response times to locating predictable relative to unpredictable targets were compared. All groups exhibited facilitation effects greater than 0 (95% CIs [.02, .11], d = .4; [.01, .12], d = .4; and [.01, .10], d = .4, for the children, young adults, and older adults, respectively), indicating that contextual cueing is robust across a wide age range. The relative magnitude of contextual cueing effects was essentially identical across the age range tested, F(2, 103) = 1.71, ηp² = .02. The authors argue that a mechanism that uses environmental covariation is available to all age ranges, but the expression of contextual cueing may depend on the way it is measured.
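For readers unfamiliar with the statistics quoted above, the sketch below shows how a one-sample 95% confidence interval and Cohen's d against zero can be computed for per-participant facilitation scores. The scores are invented for illustration; they are not the study's data, and the exact numbers will differ from those reported.

```python
# Minimal sketch (hypothetical data): one-sample 95% CI and Cohen's d
# for per-participant facilitation scores tested against zero.
import math
from statistics import mean, stdev
from scipy import stats  # only used for the t critical value

facilitation = [0.02, 0.09, 0.05, 0.11, 0.04, 0.07, 0.03, 0.08]  # invented scores

n = len(facilitation)
m, sd = mean(facilitation), stdev(facilitation)
d = m / sd                               # Cohen's d for a one-sample test against 0
t_crit = stats.t.ppf(0.975, df=n - 1)    # two-tailed 95% critical value
half_width = t_crit * sd / math.sqrt(n)
print(f"M = {m:.3f}, 95% CI [{m - half_width:.2f}, {m + half_width:.2f}], d = {d:.2f}")
```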

7.
The time course of attention is a major characteristic on which different types of attention diverge. In addition to explicit goals and salient stimuli, spatial attention is influenced by past experience. In contextual cueing, behaviorally relevant stimuli are more quickly found when they appear in a spatial context that has previously been encountered than when they appear in a new context. In this study, we investigated the time that it takes for contextual cueing to develop following the onset of search layout cues. In three experiments, participants searched for a T target in an array of Ls. Each array was consistently associated with a single target location. In a testing phase, we manipulated the stimulus onset asynchrony (SOA) between the repeated spatial layout and the search display. Contextual cueing was equivalent for a wide range of SOAs between 0 and 1,000 ms. The lack of an increase in contextual cueing with increasing cue durations suggests that as an implicit learning mechanism, contextual cueing cannot be effectively used until search begins.

8.
In visual search, detection of a target in a repeated layout is faster than search within a novel arrangement, demonstrating that contextual invariances can implicitly guide attention to the target location (“contextual cueing”; Chun & Jiang, 1998). Here, we investigated how display segmentation processes influence contextual cueing. Seven experiments showed that grouping by colour and by size can considerably reduce contextual cueing. However, selectively attending to a relevant subgroup of items (that contains the target) preserved context-based learning effects. Finally, the reduction of contextual cueing by means of grouping affected both the latent learning and the recall of display layouts. In sum, all experiments show an influence of grouping on contextual cueing. This influence is larger for variations of spatial (as compared to surface) features and is consistent with the view that learning of contextual relations critically interferes with processes that segment a display into segregated groups of items.

9.

Frequently finding a target in the same location within a familiar context reduces search time, relative to search for objects appearing in novel contexts. This learned association between a context and a target location requires several blocks of training and has long-term effects. Short-term selection history also influences search, where previewing a subset of a search context shortly before the appearance of the target and remaining distractors speeds search. Here we explored the interactions between contextual cueing and preview benefit using a modified version of a paradigm from Hodsoll and Humphreys (Journal of Experimental Psychology: Human Perception and Performance, 31(6), 1346–1358, 2005). Participants searched for a T target among L distractors. Half of the distractors appeared 800 ms before the addition of the other distractors and the target. We independently manipulated the repetition of the previewed distractors and the newly added distractors. Though the previewed set never contained the target, repetition of either the previewed or the newly added context yielded contextual cueing, and the effect was greater when the previewed context repeated. Another experiment trained participants to associate the previewed context with a target location, then disrupted the association in a testing phase. This disruption eliminated contextual cueing, suggesting that learning of the previewed context was associative. These findings demonstrate an important interaction between distinct kinds of selection history effects.


10.
Visual search for a target object is facilitated when the object is repeatedly presented within an invariant context of surrounding items ("contextual cueing"; Chun & Jiang, Cognitive Psychology, 36, 28-71, 1998). The present study investigated whether such invariant contexts can cue more than one target location. In a series of three experiments, we showed that contextual cueing is significantly reduced when invariant contexts are paired with two rather than one possible target location, whereas no contextual cueing occurs with three distinct target locations. Closer data inspection revealed that one "dominant" target always exhibited substantially more contextual cueing than did the other, "minor" target(s), which caused negative contextual-cueing effects. However, minor targets could benefit from the invariant context when they were spatially close to the dominant target. In sum, our experiments suggest that contextual cueing can guide visual attention to a spatially limited region of the display, only enhancing the detection of targets presented inside that region.

11.
Research on contextual cueing has demonstrated that with simple arrays of letters and shapes, search for a target increases in efficiency as associations between a search target and its surrounding visual context are learned. We investigated whether the visual context afforded by repeated exposure to real-world scenes can also guide attention when the relationship between the scene and a target position is arbitrary. Observers searched for and identified a target letter embedded in photographs of real-world scenes. Although search time within novel scenes was consistent across trials, search time within repeated scenes decreased across repetitions. Unlike previous demonstrations of contextual cueing, however, memory for scene-target covariation was explicit. In subsequent memory tests, observers recognized repeated contexts more often than those that were presented once and displayed superior recall of target position within the repeated scenes. In addition, repetition of inverted scenes, which made the scene more difficult to identify, produced a markedly reduced rate of learning, suggesting that semantic information concerning object and scene identity is used to guide attention.

12.
In the contextual cueing paradigm, Endo and Takeda (in Percept Psychophys 66:293–302, 2004) provided evidence that implicit learning involves selection of the aspect of a structure that is most useful to one’s task. The present study attempted to replicate this finding in artificial grammar learning to investigate whether or not implicit learning commonly involves such a selection. Participants in Experiment 1 were presented with an induction task that could be facilitated by several characteristics of the exemplars. For some participants, those characteristics included a perfectly predictive feature. The results suggested that the aspect of the structure that was most useful to the induction task was selected and learned implicitly. Experiment 2 provided evidence that, although salience affected participants’ awareness of the perfectly predictive feature, selection for implicit learning was mainly based on usefulness.

13.
Target detection is faster when search displays repeat, but properties of the memory representations that give rise to this contextual cueing effect remain uncertain. We adapted the contextual cueing task using an ABA design and recorded the eye movements of healthy young adults to determine whether the memory representations are flexible. Targets moved to a new location during the B phase and then returned to their original locations (second A phase). Contextual cueing effects in the first A phase were reinstated immediately in the second A phase, and response time costs eventually gave way to a repeated search advantage in the B phase, suggesting that two target-context associations were learned. However, this apparent flexibility disappeared when eye tracking data were used to subdivide repeated displays based on B-phase viewing of the original target quadrant. Therefore, memory representations acquired in the contextual cueing task resist change and are not flexible.

14.
陈晓宇  杜媛媛  刘强 《心理学报》2022,54(12):1481-1490
Contextual cueing learning lacks adaptability, and this lack shows in two ways. First, it is difficult to bind a new target location to an already learned scene representation (re-learning); that is, the updating of scene representations is impeded. Second, after one set of scene representations has been acquired, it is difficult to learn a second, entirely new set of scenes (new-learning). Previous work suggests that the ability to bind a new target location to an old scene representation may depend on the size of the attentional scope, whereas learning entirely new scenes requires resetting the learning function. Positive emotion can effectively broaden the attentional scope and reduce fixation on established cognitive patterns, so inducing positive emotion might improve the adaptability of contextual cueing learning. In this study, emotional pictures of neutral and positive valence were used to induce the corresponding emotional states, and contextual learning was examined both when a new target location had to be bound to old scenes and when entirely new scenes had to be learned, to test whether positive emotion can improve the adaptability of contextual cueing learning. The experiments showed that positive emotion did not facilitate contextual learning when a new target location was bound to old scenes (re-learning), but it did facilitate learning of entirely new scenes (new-learning). These results indicate that positive emotion can enhance participants' scene-learning ability and thereby promote the learning of entirely new scenes, but it cannot reduce the automatic retrieval of old representations triggered by representational similarity and therefore cannot improve the updating of old representations.

15.
How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance and to explain challenging behavioral data on positive-negative, spatial-object, and local-distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex.
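As a loose illustration of the idea described above, in which a first-glance scene gist primes likely target locations and the hypothesis is then incrementally refined during scanning, the following toy Python sketch may help. It is not the ARTSCENE Search implementation; the scene labels, regions, and probabilities are all invented.

```python
# Toy illustration (not the ARTSCENE Search model itself): a scene-gist label
# primes likely target locations, and the location hypothesis is then refined
# as fixations rule regions out. All names and probabilities are assumptions.

# Learned gist -> location priors (e.g., sinks tend to sit along the counter)
gist_priors = {
    "kitchen": {"counter": 0.6, "table": 0.3, "floor": 0.1},
    "office":  {"desk": 0.7, "shelf": 0.2, "floor": 0.1},
}

def prime_locations(gist):
    """First-glance hypothesis about where the target is likely to be."""
    return dict(gist_priors.get(gist, {}))

def refine(prior, fixated_region, target_found):
    """Update the hypothesis after each saccade/fixation."""
    if target_found:
        return {fixated_region: 1.0}
    remaining = {r: p for r, p in prior.items() if r != fixated_region}
    total = sum(remaining.values()) or 1.0
    return {r: p / total for r, p in remaining.items()}

belief = prime_locations("kitchen")
belief = refine(belief, "table", target_found=False)
print(belief)  # probability mass shifts toward the remaining candidate regions
```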

16.
The repetition of spatial layout implicitly facilitates visual search (contextual cueing effect; Chun & Jiang, 1998). Although a substantial number of studies have explored the mechanism underlying the contextual cueing effect, the manner in which contextual information guides spatial attention to a target location during a visual search remains unclear. We investigated the nature of attentional modulation by contextual cueing, using a hybrid paradigm of a visual search task and a probe dot detection task. In the case of a repeated spatial layout, detection of a probe dot was facilitated at a search target location and was inhibited at distractor locations relative to nonrepeated spatial layouts. Furthermore, these facilitatory and inhibitory effects possessed different learning properties across epochs (Experiment 1) and different time courses within a trial (Experiment 2). These results suggest that contextual cueing modulates attentional processing via both facilitation to the location of “to-be-attended” stimuli and inhibition to the locations of “to-be-ignored” stimuli.

17.
Non-human primates possess species-specific repertoires of acoustically distinct call types that can be found in adults in predictable ways. Evidence for vocal flexibility is generally rare and typically restricted to acoustic variants within the main call types or sequential production of multiple calls. So far, evidence for context-specific call sequences has been mainly in relation to external disturbances, particularly predation. In this study, we investigated extensively the vocal behaviour of free-ranging and individually identified Diana monkeys in non-predatory contexts. We found that adult females produced four vocal structures alone (‘H’, ‘L’, ‘R’ and ‘A’ calls, the latter consisting of two subtypes) or combined in non-random ways (‘HA’, ‘LA’ and ‘RA’ call combinations) in relation to ongoing behaviour or external events. Specifically, the concatenation of an introductory call with the most frequently emitted and contextually neutral ‘A’ call seems to function as a contextual refiner of this potential individual identifier. Our results demonstrate that some non-human primates are able to increase the effective size of their small vocal repertoire not only by varying the acoustic structure of basic call types but also by combining them into more complex structures. We have demonstrated this phenomenon for a category of vocalisations with a purely social function and discuss the implications of these findings for evolutionary theories of primate vocal communication.

18.
Visual context information constrains what to expect and where to look, facilitating search for and recognition of objects embedded in complex displays. This article reviews a new paradigm called contextual cueing, which presents well-defined, novel visual contexts and aims to understand how contextual information is learned and how it guides the deployment of visual attention. In addition, the contextual cueing task is well suited to the study of the neural substrate of contextual learning. For example, amnesic patients with hippocampal damage are impaired in their learning of novel contextual information, even though learning in the contextual cueing task does not appear to rely on conscious retrieval of contextual memory traces. We argue that contextual information is important because it embodies invariant properties of the visual environment such as stable spatial layout information as well as object covariation information. Sensitivity to these statistical regularities allows us to interact more effectively with the visual world.

19.
The effect of selective attention on implicit learning was tested in four experiments using the "contextual cueing" paradigm (Chun & Jiang, 1998, 1999). Observers performed visual search through items presented in an attended colour (e.g., red) and an ignored colour (e.g., green). When the spatial configuration of items in the attended colour was invariant and was consistently paired with a target location, visual search was facilitated, showing contextual cueing (Experiments 1, 3, and 4). In contrast, repeating and pairing the configuration of the ignored items with the target location resulted in no contextual cueing (Experiments 2 and 4). We conclude that implicit learning is robust only when relevant, predictive information is selectively attended.

20.
The development of contextual cueing specifically in relation to attention was examined in two experiments. Adult and 10-year-old participants completed a contextual cueing visual search task (Jiang & Chun, The Quarterly Journal of Experimental Psychology, 54A(4), 1105-1124, 2001) containing stimuli presented in an attended (e.g., red) and unattended (e.g., green) color. When the spatial configuration of stimuli in the attended and unattended color was invariant and consistently paired with the target location, adult reaction times improved, demonstrating learning. Learning also occurred if only the attended stimuli's configuration remained fixed. In contrast, while 10-year-olds, like adults, showed incrementally slower reaction times as the number of attended stimuli increased, they did not show learning in the standard paradigm. However, they did show learning when the ratio of attended to unattended stimuli was high, irrespective of the total number of attended stimuli. Findings suggest that children show efficient attentional guidance by color in visual search but differ from adults in contextual cueing.
