Similar Documents
1.
Under incidental learning conditions, a spatial layout can be acquired implicitly and facilitate visual search (the contextual cueing effect; Chun & Jiang, 1998). We investigated two aspects of this effect: whether spatial layouts of a 3D display are encoded automatically or require selective processing (Experiment 1), and whether the learned layouts are limited to 2D configurations or can encompass three dimensions (Experiment 2). In Experiment 1, participants searched for a target presented only in a specific depth plane. A contextual cueing effect was obtained only when the location of the items in the attended plane was invariant and consistently paired with a target location. In contrast, repeating the layout of the ignored items and pairing it with the target location did not produce a contextual cueing effect. In Experiment 2, we found that reaction times in the repeated condition increased significantly when the disparity of the repeated distractors was reversed in the last block of trials. These results indicate that contextual cueing in 3D displays occurs when the layout is relevant and selectively attended, and that 3D layouts can be preserved as an implicit visual spatial context.

2.
Humans conduct visual search faster when the same display is presented for a 2nd time, showing implicit learning of repeated displays. This study examines whether learning of a spatial layout transfers to other layouts that are occupied by items of new shapes or colors. The authors show that spatial context learning is sometimes contingent on item identity. For example, when the training session included some trials with black items and other trials with white items, learning of the spatial layout became specific to the trained color--no transfer was seen when items were in a new color during testing. However, when the training session included only trials in black (or white), learning transferred to displays with a new color. Similar results held when items changed shapes after training. The authors conclude that implicit visual learning is sensitive to trial context and that spatial context learning can be identity contingent.

3.
Nabeta T, Ono F, Kawahara J. Perception, 2003, 32(11): 1351-1358
Under incidental learning conditions, spatial layouts can be acquired implicitly and facilitate visual search (contextual-cueing effect). We examined whether the contextual-cueing effect is specific to the visual modality or transfers to the haptic modality. The participants performed 320 (experiment 1) or 192 (experiment 2) visual search trials based on a typical contextual-cueing paradigm, followed by haptic search trials in which half of the trials had layouts used in the previous visual search trials. The visual contextual-cueing effect was obtained in the learning phase. More importantly, the effect was transferred from visual to haptic searches; there was greater facilitation of haptic search trials when the spatial layout was the same as in the previous visual search trials, compared with trials in which the spatial layout differed from those in the visual search. This suggests that a common spatial memory representation is used to allocate focused attention in both the visual and haptic modalities.

4.
Repeated display layouts facilitate observers' search for target items (the contextual cueing effect). Using a dual-task paradigm, this study added a spatial working memory task to the learning phase (Experiment 2a) or the testing phase (Experiment 2b) of a visual search task and compared performance with a single-task baseline (Experiment 1), in order to examine how spatial working memory load affects the learning of contextual cues and the expression of the contextual cueing effect during search in real-world scenes. The results showed that spatial load increased the magnitude of the contextual cueing effect in the learning phase, reduced it in the testing phase, and did not affect the explicit retrieval of contextual cues. Thus, both the learning of contextual cues and the expression of the cueing effect in real-world scenes are constrained by limited working memory resources, whereas the explicit nature of contextual cue retrieval remains unchanged.

5.
In recent years there has been a rapid proliferation of studies demonstrating how reward learning guides visual search. However, most of these studies have focused on feature-based reward, and there has been scant evidence supporting the learning of space-based reward. We raise the possibility that the visual search apparatus is impenetrable to spatial value contingencies, even when such contingencies are learned and represented online in a separate knowledge domain. In three experiments, we interleaved a visual choice task with a visual search task in which one display quadrant produced greater monetary rewards than the remaining quadrants. We found that participants consistently exploited this spatial value contingency during the choice task but not during the search task – even when these tasks were interleaved within the same trials and when rewards were contingent on response speed. These results suggest that the expression of spatial value information is task-specific and that the visual search apparatus could be impenetrable to spatial reward information. Such findings are consistent with an evolutionary framework in which the search apparatus has little to gain from spatial value information in most real-world situations.

6.
In visual search, detection of a target in a repeated layout is faster than search within a novel arrangement, demonstrating that contextual invariances can implicitly guide attention to the target location (“contextual cueing”; Chun & Jiang, 1998). Here, we investigated how display segmentation processes influence contextual cueing. Seven experiments showed that grouping by colour and by size can considerably reduce contextual cueing. However, selectively attending to a relevant subgroup of items (that contains the target) preserved context-based learning effects. Finally, the reduction of contextual cueing by means of grouping affected both the latent learning and the recall of display layouts. In sum, all experiments show an influence of grouping on contextual cueing. This influence is larger for variations of spatial (as compared to surface) features and is consistent with the view that learning of contextual relations critically interferes with processes that segment a display into segregated groups of items.

7.
Postattentive vision
Much research has examined preattentive vision: visual representation prior to the arrival of attention. Most vision research concerns attended visual stimuli; very little research has considered postattentive vision. What is the visual representation of a previously attended object once attention is deployed elsewhere? The authors argue that perceptual effects of attention vanish once attention is redeployed. Experiments 1-6 were visual search studies. In standard search, participants looked for a target item among distractor items. On each trial, a new search display was presented. These tasks were compared to repeated search tasks in which the search display was not changed. On successive trials, participants searched the same display for new targets. Results showed that if search was inefficient when participants searched a display the first time, it was inefficient when the same, unchanging display was searched the second, fifth, or 350th time. Experiments 7 and 8 made a similar point with a curve tracing paradigm. The results have implications for an understanding of scene perception, change detection, and the relationship of vision to memory.

8.
The repetition of spatial layout implicitly facilitates visual search (contextual cueing effect; Chun & Jiang, 1998). Although a substantial number of studies have explored the mechanism underlying the contextual cueing effect, the manner in which contextual information guides spatial attention to a target location during a visual search remains unclear. We investigated the nature of attentional modulation by contextual cueing, using a hybrid paradigm of a visual search task and a probe dot detection task. In the case of a repeated spatial layout, detection of a probe dot was facilitated at a search target location and was inhibited at distractor locations relative to nonrepeated spatial layouts. Furthermore, these facilitatory and inhibitory effects possessed different learning properties across epochs (Experiment 1) and different time courses within a trial (Experiment 2). These results suggest that contextual cueing modulates attentional processing via both facilitation to the location of “to-be-attended” stimuli and inhibition to the locations of “to-be-ignored” stimuli.

9.
Previous studies have shown that the efficiency of visual search does not improve when participants search through the same unchanging display for hundreds of trials (repeated search), even though the participants have a clear memory of the search display. In this article, we ask two important questions. First, why do participants not use memory to help search the repeated display? Second, can context be introduced so that participants are able to guide their attention to the relevant repeated items? Experiments 1-4 show that participants choose not to use a memory strategy because, under these conditions, repeated memory search is actually less efficient than repeated visual search, even though the latter task is in itself relatively inefficient. However, when the visual search task is given context, so that only a subset of the items are ever pertinent, participants can learn to restrict their attention to the relevant stimuli (Experiments 5 and 6).

10.
How much attention is needed to produce implicit learning? Previous studies have found inconsistent results, with some implicit learning tasks requiring virtually no attention while others rely on attention. In this study we examine the degree of attentional dependency in implicit learning of repeated visual search context. Observers searched for a target among distractors that were either highly similar to the target or dissimilar to the target. We found that the size of the contextual cueing effect produced by repetition of the two types of distractors was comparable, even though attention dwelled much longer on distractors highly similar to the target. We suggest that, beyond a minimal amount, further increase in attentional dwell time does not contribute significantly to implicit learning of repeated search context.

11.
When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory). In the current study, we asked two questions: (1) Under what conditions does such incidental learning occur? (2) What does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated nontarget objects. Across conditions, the consistency of search sets and spatial layouts was manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent.

12.
Completing a representational momentum (RM) task, participants viewed one of three camera motions through a scene and indicated whether test probes were in the same position as they were in the final view of the animation. All camera motions led to RM anticipations in the direction of motion, with larger distortions resulting from rotations than from a compound motion of a rotation and a translation. A surprise test of spatial layout, using an aerial map, revealed that the correct map was identified only following aerial views during the RM task. When the RM task displayed field views, including repeated views of multiple object groups, participants were unable to identify the overall spatial layout of the scene. These results suggest that the object–location binding thought to support certain change detection and visual search tasks might be viewpoint dependent.

13.
The time course of attention is a major characteristic on which different types of attention diverge. In addition to explicit goals and salient stimuli, spatial attention is influenced by past experience. In contextual cueing, behaviorally relevant stimuli are more quickly found when they appear in a spatial context that has previously been encountered than when they appear in a new context. In this study, we investigated the time that it takes for contextual cueing to develop following the onset of search layout cues. In three experiments, participants searched for a T target in an array of Ls. Each array was consistently associated with a single target location. In a testing phase, we manipulated the stimulus onset asynchrony (SOA) between the repeated spatial layout and the search display. Contextual cueing was equivalent for a wide range of SOAs between 0 and 1,000 ms. The lack of an increase in contextual cueing with increasing cue durations suggests that, as an implicit learning mechanism, contextual cueing cannot be effectively used until search begins.

14.
The visual context in which an object or face resides can provide useful top‐down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye‐tracking experiment, 6‐ and 10‐month‐old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6‐ and 10‐month‐olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top‐down information to facilitate orienting during memory‐guided visual search.

15.
When novel scenes are encoded, the representations of scene layout are generally viewpoint specific. Past studies of scene recognition have typically required subjects to explicitly study and encode novel scenes, but in everyday visual experience, it is possible that much scene learning occurs incidentally. Here, we examine whether implicitly encoded scene layouts are also viewpoint dependent. We used the contextual cuing paradigm, in which search for a target is facilitated by implicitly learned associations between target locations and novel spatial contexts (Chun & Jiang, 1998). This task was extended to naturalistic search arrays with apparent depth. To test viewpoint dependence, the viewpoint of the scenes was varied from training to testing. Contextual cuing and, hence, scene context learning decreased as the angular rotation from training viewpoint increased. This finding suggests that implicitly acquired representations of scene layout are viewpoint dependent.

16.
It has been argued that visual search is a valid model for human foraging. However, the two tasks differ greatly in terms of the coding of space and the effort required to search. Here we describe a direct comparison between visually guided searches (as studied in visual search tasks) and foraging that is not based upon a visually distinct target, within the same context. The experiment was conducted in a novel apparatus, where search locations were indicated by an array of lights embedded in the floor. In visually guided conditions participants searched for a target defined by the presence of a feature (red target amongst green distractors) or the absence of a feature (green target amongst red and green distractors). Despite the expanded search scale and the different response requirements, these conditions followed the pattern found in conventional visual search paradigms: feature-present search latencies were not linearly related to display size, whereas feature-absent searches were longer as the number of distractors increased. In a non-visually guided foraging condition, participants searched for a target that was only visible once the switch was activated. This resulted in far longer latencies that rose markedly with display size. Compared to eye-movements in previous visual search studies, there were few revisit errors to previously inspected locations in this condition. This demonstrates the important distinction between visually guided and non-visually guided foraging processes, and shows that the visual search paradigm is an equivocal model for general search in any context. We suggest a comprehensive model of human spatial search behaviour needs to include search at a small and large scale as well as visually guided and non-visually guided search.

17.
Jiang and Wagner (2004) demonstrated that individual target-distractor associations were learned in contextual cuing. We examined whether individual associations can be learned in efficient visual searches that do not involve attentional deployment to individual search items. In Experiment 1, individual associations were not learned during the efficient search tasks. However, in Experiment 2, where additional exposure duration of the search display was provided by presenting placeholders marking future locations of the search items, individual associations were successfully learned in the efficient search tasks and transferred to inefficient search. Moreover, Experiment 3 demonstrated that a concurrent task requiring attention does not affect the learning of the local visual context. These results clearly showed that attentional deployment is not necessary for learning individual locations and clarified how the human visual system extracts and preserves regularity in complex visual environments for efficient visual information processing.

18.
Repeatedly encountering a visual search display with the target located at a fixed position relative to the distractors facilitates target detection, relative to novel displays – which is attributed to search guidance by (acquired) long‐term memory (LTM) of the distractor ‘context’ of the target. Previous research has shown that this ‘contextual cueing’ effect is severely impeded during learning when participants have to perform a demanding spatial working memory (WM) task concurrently with the search task, though it does become manifest when the WM task is removed. This has led to the proposal that search guidance by long‐term context memories critically depends on spatial WM to become ‘expressed’ in behaviour. Against this background, this study of two experiments asked: (1) Would contextual cueing eventually emerge under dual‐task learning conditions if the practice on the task(s) is extended beyond the short training implemented in previous studies? and, given sufficient practice, (2) Would performing the search under dual‐task conditions actually lead to an increased cueing effect compared to performing the visual search task alone? The answer is affirmative to both questions. In particular, Experiment 1 showed that a robust contextual cueing effect emerges within 360–720 dual‐task trials as compared to some 240 single‐task trials. Further, Experiment 2 showed that when dual‐ and single‐task conditions are performed in alternating trial blocks, the cueing effect for the very same set of repeated displays is significantly larger in dual‐task blocks than in single‐task blocks. This pattern of effects suggests that dual‐task practice eventually leads to direct, or ‘automatic’, guidance of visual search by learnt spatial LTM representations, bypassing WM processes. These processes, which are normally engaged in single‐task performance, might actually interfere with direct LTM‐based search guidance.

19.
Spatial constraints on learning in visual search: modeling contextual cuing
Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using a connectionist architecture and then designed new behavioral experiments to test the model's assumptions. The modeling and behavioral results indicate that learning may be restricted to the local context even when the entire configuration is predictive of target location. Local learning constrains how much guidance is produced by contextual cuing. The modeling and new data also demonstrate that local learning requires that the local context maintain its location in the overall global context.
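The local-learning idea in this abstract can be illustrated with a toy model. The Python sketch below is minimal and hypothetical (it is not the authors' connectionist model; the grid size, neighbourhood radius, Hebbian update, and all function names are illustrative assumptions): it associates only the distractors near the target with the target's location, so a repeated display accumulates guidance that a novel display lacks.

```python
# Minimal illustrative sketch of local context learning (hypothetical, not the authors' model).
import numpy as np

GRID = 8                                  # assumed 8x8 search display
N_CELLS = GRID * GRID

def make_display(rng, n_distractors=11):
    """Random display: a set of distractor cells plus one target cell."""
    cells = rng.choice(N_CELLS, size=n_distractors + 1, replace=False)
    return set(int(c) for c in cells[:-1]), int(cells[-1])

def local_context(distractors, target, radius=2):
    """Keep only distractors within `radius` cells of the target (local learning)."""
    ty, tx = divmod(target, GRID)
    local = set()
    for d in distractors:
        dy, dx = divmod(d, GRID)
        if abs(dy - ty) <= radius and abs(dx - tx) <= radius:
            local.add(d)
    return local

rng = np.random.default_rng(0)
W = np.zeros((N_CELLS, N_CELLS))          # weights: context cell -> target cell

# One repeated display, "seen" over many blocks of trials.
repeated = make_display(rng)
for _ in range(30):
    for c in local_context(*repeated):
        W[c, repeated[1]] += 1.0          # simple Hebbian strengthening

def guidance(display):
    """Summed associative activation at the display's true target location."""
    distractors, target = display
    return sum(W[c, target] for c in distractors)

novel = make_display(rng)
print("repeated display guidance:", guidance(repeated))  # high: local context predicts the target
print("novel display guidance:   ", guidance(novel))     # ~0: no learned association
```

Under these assumptions, moving the learned local cluster elsewhere in the display would leave the weights pointing at the old target cell, mirroring the abstract's point that the local context must maintain its location within the overall global context.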
