Similar Articles
20 similar articles found (search time: 15 ms)
1.
Previous research has indicated that covariations between the global layout of search displays and target locations result in contextual cuing: the global context guides attention to probable target locations. The present experiments extend these findings by showing that local redundancies also facilitate visual search. Participants searched for randomly located targets in invariant homogeneous displays, i.e., the global context provided information neither about the location nor about the identity of the target. The only redundancy referred to spatial relations between the targets and certain distractors: Two of the distractors were frequently presented next to the targets. In four of five experiments, targets with frequent flankers were detected faster than targets with rare flankers. The data suggest that this local contextual cuing does not depend on awareness of the redundant local topography but requires the redundantly related stimuli to be attended.

2.
A fundamental principle of learning is that predictive cues or signals compete with each other to gain control over behavior. Associative and propositional reasoning theories of learning provide radically different accounts of cue competition. Propositional accounts predict that under conditions that do not afford or warrant the use of higher order reasoning processes, cue competition should not be observed. We tested this prediction in 2 contextual cuing experiments, using a visual search task in which patterns of distractor elements predict the location of a target object. Blocking designs were used in which 2 sets of predictive distractors were trained in compound, with 1 set trained independently. There was no evidence of cue competition in either experiment. In fact, in Experiment 2, we found evidence for augmentation of learning. The findings are contrasted with the predictions of an error-driven associative model of contextual cuing (Brady & Chun, 2007).
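To make concrete why error-driven associative accounts predict cue competition in a blocking design, here is a minimal simulation of the Rescorla-Wagner delta rule, a standard error-driven learning model. It stands in for, but does not reproduce, the error-driven contextual cuing model of Brady and Chun (2007); the cue coding, learning rate, and trial counts are illustrative assumptions.

import numpy as np

def rescorla_wagner(trials, n_cues=2, alpha=0.3, lam=1.0):
    """Delta rule: every cue present on a trial is updated by a share of the
    common prediction error (lam minus the summed strength of present cues)."""
    V = np.zeros(n_cues)
    for present in trials:
        error = lam - V[present].sum()
        V[present] += alpha * error
    return V

# Blocking design (illustrative): cue A is pre-trained alone, then A and B are
# trained in compound; in the control condition only the compound is trained.
A, AB = np.array([0]), np.array([0, 1])
blocked = rescorla_wagner([A] * 20 + [AB] * 20)
control = rescorla_wagner([AB] * 20)
print("Blocked: V(A)=%.2f V(B)=%.2f" % tuple(blocked))
print("Control: V(A)=%.2f V(B)=%.2f" % tuple(control))
# In the blocked condition A already predicts the outcome, so the shared error
# term leaves little strength for B; this is the cue competition that the
# contextual cuing experiments reported above failed to find.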

3.
We investigated the effect of contextual cuing (M. M. Chun & Y. Jiang, 1998) within the preview paradigm (D. G. Watson & G. W. Humphreys, 1997). Contextual cuing was shown with a 10-item letter search but not with more crowded 20-item displays. However, contextual learning did occur in a preview procedure in which 10 preview items were followed by 10 new items. Repeating the new items alone did not generate contextual learning, but repeating the preview items alone did, as long as they had a consistent spatial relation with the target. This was not merely due to the onset of the preview items being associated with the target location. No learning effect took place with a preview of homogeneous items that competed less for selection with new stimuli. The results provide evidence for old items being processed in preview search and providing a context for subsequent search of new items.

4.
Because the importance of color in visual tasks such as object identification and scene memory has been debated, we sought to determine whether color is used to guide visual search in contextual cuing with real-world scenes. In Experiment 1, participants searched for targets in repeated scenes that were shown in one of three conditions: natural colors, unnatural colors that remained consistent across repetitions, and unnatural colors that changed on every repetition. We found that the pattern of learning was the same in all three conditions. In Experiment 2, we did a transfer test in which the repeating scenes were shown in consistent colors that suddenly changed on the last block of the experiment. The color change had no effect on search times, relative to a condition in which the colors did not change. In Experiments 3 and 4, we replicated Experiments 1 and 2, using scenes from a color-diagnostic category of scenes, and obtained similar results. We conclude that color is not used to guide visual search in real-world contextual cuing, a finding that constrains the role of color in scene identification and recognition processes.

5.
Temporal contextual cuing of visual attention
Previous research has shown how spatial attention is guided to a target location, but little is understood about how attention is allocated to an event in time. The authors introduce a paradigm to manipulate the sequential structure of visual events independent of responses. They asked whether this temporal context could be implicitly learned and used to guide attention to a relative point in time, to a location in space, or both. Experiments show that sequentially structured event durations, event identities, and spatiotemporal event sequences can guide attention to a point in time as well as to a target event's identity and location. Cuing was found to rely heavily on the element immediately preceding the target, although cuing from earlier items also was evident. Learning was implicit in all cases. These results show that the sequential structure of the visual world plays an important role in guiding visual attention to target events.

6.
Contextual cuing refers to the facilitation of performance in visual search due to the repetition of the same displays. Whereas previous studies have focused on contextual cuing within single search trials, this study tested whether 1 trial facilitates visual search of the next trial. Participants searched for a T among Ls. In the training phase, the spatial layout on trial N-1 was predictive of the target location on trial N. In the testing phase, the predictive value was removed. Results revealed an intertrial temporal contextual cuing effect: Search times became progressively shorter in the training phase but lengthened significantly during testing. The authors conclude that the visual system is capable of retaining spatial contextual memory established earlier to facilitate perception.

7.
In contextual cuing, faster responses are made to repeated displays containing context-target associations than to novel displays without such covariances. We report that healthy older adults showed learning impairments in contextual cuing when compared with younger adults. The display properties in the task were altered to artificially increase younger adults' response times to match those of older adults and to produce faster responses in older participants; however, younger participants' learning remained intact, whereas older participants continued to show impairments under these conditions. These results suggest that older adults have intrinsic deficits in contextual cuing that cannot be attributed to their slower overall response speed.

8.
Contextual cuing refers to a response time (RT) benefit that occurs when observers search through displays that have been repeated over the course of an experiment. Although it is generally agreed that contextual cuing arises via an associative learning mechanism, there is uncertainty about the type(s) of process(es) that allow learning to influence RT. We contrast two leading accounts of the contextual cuing effect that differ in terms of the general process that is credited with producing the effect. The first, the expedited search account, attributes the cuing effect to an increase in the speed with which the target is acquired. The second, the decision threshold account, attributes the cuing effect to a reduction in the response threshold used by observers when making a subsequent decision about the target (e.g., judging its orientation). We use the diffusion model to contrast the quantitative predictions of these two accounts at the level of individual observers. Our use of the diffusion model also allows us to explore a novel decision-level locus of the cuing effect based on perceptual learning. This novel account attributes the RT benefit to a perceptual learning process that increases the quality of information used to drive the decision process. Our results reveal individual differences in the process(es) involved in contextual cuing but also identify several striking regularities across observers. We find strong support for both the decision threshold account and the novel perceptual learning account. We find relatively weak support for the expedited search account.
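The sketch below illustrates, under illustrative parameter values that are not estimates from the paper, how each account maps onto a different diffusion-model parameter: expedited search is commonly expressed as a shorter non-decision time, the decision threshold account as a smaller boundary separation, and the perceptual learning account as a higher drift rate. All three shorten mean RT, but they leave different signatures in the full RT distributions that model fitting can separate.

import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(v, a, ter, n_trials=1000, dt=0.001, s=0.1):
    """Simple drift-diffusion model: evidence starts at a/2 and accumulates with
    drift v and noise s until it hits 0 or a; RT = decision time + ter."""
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = a / 2.0, 0.0
        while 0.0 < x < a:
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = ter + t
    return rts

conditions = {
    "novel display (baseline)":           dict(v=0.20, a=0.10, ter=0.30),
    "expedited search (shorter ter)":     dict(v=0.20, a=0.10, ter=0.25),
    "decision threshold (smaller a)":     dict(v=0.20, a=0.08, ter=0.30),
    "perceptual learning (higher drift)": dict(v=0.28, a=0.10, ter=0.30),
}
for name, params in conditions.items():
    print(f"{name:36s} mean RT = {simulate_ddm(**params).mean():.3f} s")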

9.
Humans conduct visual search more efficiently when the same display is presented for a second time, showing learning of repeated spatial contexts. In this study, we investigate spatial context learning in two tasks: visual search and change detection. In both tasks, we ask whether subjects learn to associate the target with the entire spatial layout of a repeated display (configural learning) or with individual distractor locations (nonconfigural learning). We show that nonconfigural learning results from visual search tasks, but not from change detection tasks. Furthermore, a spatial layout acquired in visual search tasks does not enhance change detection on the same display, whereas a spatial layout acquired in change detection tasks moderately enhances visual search. We suggest that although spatial context learning occurs in multiple tasks, the content of learning is, in part, task specific.

10.
In visual search, detection of a target in a repeated layout is faster than search within a novel arrangement, demonstrating that contextual invariances can implicitly guide attention to the target location (“contextual cueing”; Chun & Jiang, 1998). Here, we investigated how display segmentation processes influence contextual cueing. Seven experiments showed that grouping by colour and by size can considerably reduce contextual cueing. However, selectively attending to a relevant subgroup of items (that contains the target) preserved context-based learning effects. Finally, the reduction of contextual cueing by means of grouping affected both the latent learning and the recall of display layouts. In sum, all experiments show an influence of grouping on contextual cueing. This influence is larger for variations of spatial (as compared to surface) features and is consistent with the view that learning of contextual relations critically interferes with processes that segment a display into segregated groups of items.

11.
To what extent does successful search for a target letter in a visual display depend on the allocation of attention to the target’s spatial position? To investigate this question, we required subjects to discriminate the orientation of a briefly flashed U-shaped form while searching for a target letter. Performance operating characteristics (POCs) were derived by varying the relative amounts of attention subjects were to devote to each task. Extensive tradeoffs in performance were observed when the orientation form and target letter occurred in nonadjacent display positions. In contrast, the tradeoff was much more restricted when the two targets occurred in adjacent positions. These results suggest that the interference between simultaneous visual discriminations depends critically on their separation in visual space. Both visual search and form discrimination require a common limited-capacity visual resource.

12.
In contextual cuing (CC), reaction times for finding targets are faster in repeated displays than in displays that have never been seen before. This has been demonstrated using target-distractor configurations, global background colors, naturalistic scenes, and covariation of targets with distractors. The majority of CC studies have used displays in which the target is always present. This study investigated what happens when the target is sometimes absent. Experiment 1 showed that, although configural CC occurs in displays when the target is always present, there is no CC when the target is always absent. Experiment 2 showed that there is no CC when the same spatial layout can be both target present and target absent on different trials. The presence of distractors in locations that had contained targets on other trials appeared to interfere with CC, and even disrupted the expression of CC in previously learned contexts (Exps. 3-5). These results show that target-distractor associations are the important element in producing CC and that, consistent with a response selection account, changing the response type from an orientation task to a detection task removes the CC effect.

13.
Stone JV, Harper N. Perception, 1999, 28(9): 1089-1104
Given a constant stream of perceptual stimuli, how can the underlying invariances associated with a given input be learned? One approach consists of using generic truths about the spatiotemporal structure of the physical world as constraints on the types of quantities learned. The learning methodology employed here embodies one such truth: that perceptually salient properties (such as stereo disparity) tend to vary smoothly over time. Unfortunately, the units of an artificial neural network tend to encode superficial image properties, such as individual grey-level pixel values, which vary rapidly over time. However, if the states of units are constrained to vary slowly, then the network is forced to learn a smoothly varying function of the training data. We implemented this temporal-smoothness constraint in a backpropagation network which learned stereo disparity from random-dot stereograms. Temporal smoothness was formalized with the use of regularization theory by modifying the standard cost function minimised during training of a network. Temporal smoothness was found to be similar to other techniques for improving generalisation, such as early stopping and weight decay. However, in contrast to these, the theoretical underpinnings of temporal smoothing are intimately related to fundamental characteristics of the physical world. Results are discussed in terms of regularization theory and the physically realistic assumptions upon which temporal smoothing is based.
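A minimal sketch of the temporal-smoothness idea, under illustrative assumptions: a linear unit is trained on synthetic data with a cost made of the standard squared error plus a penalty on frame-to-frame changes in the unit's output. The exact cost function, network architecture, and random-dot stereogram inputs used in the paper are not reproduced here.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data (not stereograms): the inputs mix a slowly varying
# latent signal (the "perceptually salient" quantity) with fast pixel-like noise.
T, D = 400, 20
latent = np.cumsum(rng.normal(scale=0.05, size=T))    # slowly varying signal
X = np.outer(latent, rng.normal(size=D)) + rng.normal(scale=0.5, size=(T, D))
d = latent                                            # training target

def cost_and_grad(w, lam):
    """Standard squared error plus a temporal-smoothness penalty that
    discourages frame-to-frame jumps in the output y_t = w . x_t."""
    y = X @ w
    err = y - d
    dy = np.diff(y)                   # y_t - y_{t-1}
    dX = np.diff(X, axis=0)
    cost = np.mean(err ** 2) + lam * np.sum(dy ** 2)
    grad = 2 * X.T @ err / T + 2 * lam * dX.T @ dy
    return cost, grad

def train(lam, lr=0.01, steps=2000):
    w = rng.normal(scale=0.01, size=D)
    for _ in range(steps):
        w -= lr * cost_and_grad(w, lam)[1]
    return w

for lam in (0.0, 0.01):
    y = X @ train(lam)
    print(f"lambda={lam}: mean squared frame-to-frame change = {np.mean(np.diff(y) ** 2):.4f}")
# With the penalty switched on, the learned output varies more slowly over time,
# which is the constraint the paper formalizes via regularization theory.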

14.
15.
Word-fragment cuing: the lexical search hypothesis
In four experiments we evaluated aspects of the hypothesis that word-fragment completion depends on the results of lexical but not semantic search. Experiment 1 showed that the number of meaningful associates linked to a studied word does not affect its recovery when the test cue consists of letters and spaces for missing letters. Experiments 2 and 3 showed retroactive interference effects in fragment completion when words in a second list were lexically related to words in a first list but not when the words in the second list were meaningfully related. Experiment 4 indicated that for studied words, instructions to search at the word level facilitated completion performance and that instructions to generate letters to fill missing spaces had no effect. Other findings indicated that completion was affected by the number of words lexically related to the fragment and by the number of letters missing from the fragment. In general, experimental manipulations that focused on lexical characteristics were effective, and those that focused on semantic characteristics were ineffective. The findings support the conclusion that word fragments engender a lexical search process that does not depend on retrieving encoded meaning.

16.
Lobley K, Walsh V. Perception, 1998, 27(10): 1245-1255
Perceptual learning in colour/orientation visual conjunction search was examined in five experiments. Good transfer occurred to other conjunction arrays when only one element of the conjunction (either colour or orientation) was changed. When both elements (colour and orientation) were changed, but the same feature spaces were used (i.e. other colours and orientations) or when a new dimension was introduced to the transfer task (shapes instead of orientation), transfer was poor. The results suggest that perceptual learning of visual conjunction search is constrained mainly by stimulus parameters rather than by changes in cognitive strategies which are common to all search tasks. Contrary to other reports we found little evidence of long-term retention of learning.

17.
Jiang and Wagner (2004) demonstrated that individual target-distractor associations were learned in contextual cuing. We examined whether individual associations can be learned in efficient visual searches that do not involve attentional deployment to individual search items. In Experiment 1, individual associations were not learned during the efficient search tasks. However, in Experiment 2, where additional exposure duration of the search display was provided by presenting placeholders marking future locations of the search items, individual associations were successfully learned in the efficient search tasks and transferred to inefficient search. Moreover, Experiment 3 demonstrated that a concurrent task requiring attention does not affect the learning of the local visual context. These results clearly showed that attentional deployment is not necessary for learning individual locations and clarified how the human visual system extracts and preserves regularity in complex visual environments for efficient visual information processing.

18.
When spatial stimulus configurations repeat in visual search, a search facilitation, resulting in shorter search times, can be observed that is due to incidental learning. This contextual cueing effect appears to be rather implicit, uncorrelated with observers’ explicit memory of display configurations. Nevertheless, as I review here, this search facilitation due to contextual cueing depends on visuospatial working memory resources, and it disappears when visuospatial working memory is loaded by a concurrent delayed match to sample task. However, the search facilitation immediately recovers for displays learnt under visuospatial working memory load when this load is removed in a subsequent test phase. Thus, latent learning of visuospatial configurations does not depend on visuospatial working memory, but the expression of learning, as memory-guided search in repeated displays, does. This working memory dependence also has consequences for visual search with foveal vision loss, where top-down controlled visual exploration strategies pose high demands on visuospatial working memory, in this way interfering with memory-guided search in repeated displays. Converging evidence for the contribution of working memory to contextual cueing comes from neuroimaging data demonstrating that distinct cortical areas along the intraparietal sulcus as well as more ventral parieto-occipital cortex are jointly activated by visual working memory and contextual cueing.

19.
Implicit contextual cuing refers to the ability to learn the association between contextual information of our environment and a specific target, which can be used to guide attention during visual search. It was recently suggested that the storage of a snapshot image of the local context of a target underlies implicit contextual cuing. To make such a snapshot, it is necessary to use peripheral vision. In order to test whether peripheral vision can underlie implicit contextual cuing, we used a covert visual search task, in which participants were required to indicate the orientation of a target stimulus while foveating a fixation cross. The response times were shorter when the configuration of the stimuli was repeated than when the configuration was new. Importantly, this effect was still found after 10 days, indicating that peripherally perceived spatial context information can be stored in memory for long periods of time. These results indicate that peripheral vision can be used to make a snapshot of the local context of a target.

20.
The term contextual cuing refers to improved visual search performance with repeated exposure to a configuration of objects. Participants use predictive cues, derived from learned associations between target locations and the spatial arrangement of the surrounding distractors in a configuration, to guide search behavior efficiently. Researchers have claimed that contextual cuing can occur implicitly. The present experiments examined two explicit measures: generation and recognition. In Experiment 1, we found that contextual cuing information was consciously retrievable when the number of trials used in a generation test was increased, and the results also suggested that the shorter tests used previously were not statistically powerful enough to detect a true awareness effect. In Experiment 2, concurrent implicit and explicit (generation and recognition) tests were employed. At a group level, learning did not precede awareness. Although contextual cuing was evident in participants who were selected post hoc as having no explicit awareness, and for specific configurations that did not support awareness, we argue that awareness may nevertheless be a necessary concomitant of contextual cuing. These results demonstrate that contextual cuing knowledge is accessible to awareness.
