Similar Articles
20 similar articles found (search time: 15 ms)
1.
Previous research has indicated that covariations between the global layout of search displays and target locations result in contextual cuing: the global context guides attention to probable target locations. The present experiments extend these findings by showing that local redundancies also facilitate visual search. Participants searched for randomly located targets in invariant homogeneous displays, i.e., the global context provided information neither about the location nor about the identity of the target. The only redundancy referred to spatial relations between the targets and certain distractors: Two of the distractors were frequently presented next to the targets. In four of five experiments, targets with frequent flankers were detected faster than targets with rare flankers. The data suggest that this local contextual cuing does not depend on awareness of the redundant local topography but needs the redundantly related stimuli to be attended to.

2.
A fundamental principle of learning is that predictive cues or signals compete with each other to gain control over behavior. Associative and propositional reasoning theories of learning provide radically different accounts of cue competition. Propositional accounts predict that under conditions that do not afford or warrant the use of higher order reasoning processes, cue competition should not be observed. We tested this prediction in 2 contextual cuing experiments, using a visual search task in which patterns of distractor elements predict the location of a target object. Blocking designs were used in which 2 sets of predictive distractors were trained in compound, with 1 set trained independently. There was no evidence of cue competition in either experiment. In fact, in Experiment 2, we found evidence for augmentation of learning. The findings are contrasted with the predictions of an error-driven associative model of contextual cuing (Brady & Chun, 2007).
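For illustration only, the blocking prediction of an error-driven associative account (in the spirit of, but not identical to, the Brady & Chun, 2007 model) can be sketched with Rescorla-Wagner-style updates; the function name, parameters, and two-phase trial structure below are hypothetical simplifications.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Error-driven updates: all cues present on a trial share one prediction
    error, so a cue trained in compound with an already-predictive cue gains
    little associative strength (blocking)."""
    strengths = {}
    for cues in trials:
        error = lam - sum(strengths.get(c, 0.0) for c in cues)
        for c in cues:
            strengths[c] = strengths.get(c, 0.0) + alpha * error
    return strengths

# Phase 1: distractor set A alone predicts the target location.
# Phase 2: sets A and B are trained in compound.
trials = [("A",)] * 20 + [("A", "B")] * 20
print(rescorla_wagner(trials))  # B ends up with little strength, i.e., blocked

The absence of blocking (and the augmentation found in Experiment 2) reported in the abstract is the opposite of what this kind of update rule predicts.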

3.
We investigated the effect of contextual cuing (M. M. Chun & Y. Jiang, 1998) within the preview paradigm (D. G. Watson & G. W. Humphreys, 1997). Contextual cuing was shown with a 10-item letter search but not with more crowded 20-item displays. However, contextual learning did occur in a preview procedure in which 10 preview items were followed by 10 new items. Repeating the new items alone did not generate contextual learning, but repeating the preview items alone did, as long as they had a consistent spatial relation with the target. This was not merely due to the onset of the preview items being associated with the target location. No learning effect took place with a preview of homogeneous items that competed less for selection with new stimuli. The results provide evidence for old items being processed in preview search and providing a context for subsequent search of new items.

4.
Because the importance of color in visual tasks such as object identification and scene memory has been debated, we sought to determine whether color is used to guide visual search in contextual cuing with real-world scenes. In Experiment 1, participants searched for targets in repeated scenes that were shown in one of three conditions: natural colors, unnatural colors that remained consistent across repetitions, and unnatural colors that changed on every repetition. We found that the pattern of learning was the same in all three conditions. In Experiment 2, we did a transfer test in which the repeating scenes were shown in consistent colors that suddenly changed on the last block of the experiment. The color change had no effect on search times, relative to a condition in which the colors did not change. In Experiments 3 and 4, we replicated Experiments 1 and 2, using scenes from a color-diagnostic category of scenes, and obtained similar results. We conclude that color is not used to guide visual search in real-world contextual cuing, a finding that constrains the role of color in scene identification and recognition processes.

5.
Temporal contextual cuing of visual attention
Previous research has shown how spatial attention is guided to a target location, but little is understood about how attention is allocated to an event in time. The authors introduce a paradigm to manipulate the sequential structure of visual events independent of responses. They asked whether this temporal context could be implicitly learned and used to guide attention to a relative point in time, a location in space, or both. Experiments show that sequentially structured event durations, event identities, and spatiotemporal event sequences can guide attention to a point in time as well as to a target event's identity and location. Cuing was found to rely heavily on the element immediately preceding the target, although cuing from earlier items also was evident. Learning was implicit in all cases. These results show that the sequential structure of the visual world plays an important role in guiding visual attention to target events.

6.
Contextual cuing refers to the facilitation of performance in visual search due to the repetition of the same displays. Whereas previous studies have focused on contextual cuing within single search trials, this study tested whether 1 trial facilitates visual search of the next trial. Participants searched for a T among Ls. In the training phase, the spatial layout on trial N - 1 was predictive of the target location on trial N. In the testing phase, the predictive value was removed. Results revealed an intertrial temporal contextual cuing effect: Search times became progressively shorter in the training phase but lengthened significantly during testing. The authors conclude that the visual system is capable of retaining spatial contextual memory established earlier to facilitate perception.

7.
In contextual cuing, faster responses are made to repeated displays containing context-target associations than to novel displays without such covariances. We report that healthy older adults showed learning impairments in contextual cuing when compared with younger adults. The display properties in the task were altered to artificially increase younger adults' response times to match those of older adults and to produce faster responses in older participants; however, younger participants' learning remained intact, whereas older participants continued to show impairments under these conditions. These results suggest that older adults have intrinsic deficits in contextual cuing that cannot be attributed to their slower overall response speed.

8.
Contextual cuing refers to a response time (RT) benefit that occurs when observers search through displays that have been repeated over the course of an experiment. Although it is generally agreed that contextual cuing arises via an associative learning mechanism, there is uncertainty about the type(s) of process(es) that allow learning to influence RT. We contrast two leading accounts of the contextual cuing effect that differ in terms of the general process that is credited with producing the effect. The first, the expedited search account, attributes the cuing effect to an increase in the speed with which the target is acquired. The second, the decision threshold account, attributes the cuing effect to a reduction in the response threshold used by observers when making a subsequent decision about the target (e.g., judging its orientation). We use the diffusion model to contrast the quantitative predictions of these two accounts at the level of individual observers. Our use of the diffusion model allows us to also explore a novel decision-level locus of the cuing effect based on perceptual learning. This novel account attributes the RT benefit to a perceptual learning process that increases the quality of information used to drive the decision process. Our results reveal individual differences in the process(es) involved in contextual cuing but also identify several striking regularities across observers. We find strong support for both the decision threshold account and the novel perceptual learning account. We find relatively weak support for the expedited search account.
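As a rough illustration of how the three accounts map onto diffusion-model parameters, the sketch below (hypothetical parameter values and a crude simulation, not the authors' fitting procedure) shows how a repetition benefit in mean RT could arise from a lower decision threshold, a higher drift rate (perceptual learning), or a shorter nondecision/search time (expedited search).

import numpy as np

def simulate_mean_rt(v, a, ter, n_trials=2000, dt=0.001, sigma=1.0, seed=0):
    """Crude diffusion-model simulation: evidence starts midway between the
    bounds 0 and a and drifts at rate v; RT = accumulation time plus the
    nondecision time ter (encoding, search, and motor processes)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_trials, a / 2.0)          # evidence for each simulated trial
    rt = np.full(n_trials, ter)             # start every RT at the nondecision time
    active = np.ones(n_trials, dtype=bool)  # trials still accumulating evidence
    while active.any():
        x[active] += v * dt + rng.normal(0.0, sigma * np.sqrt(dt), active.sum())
        rt[active] += dt
        active &= (x > 0.0) & (x < a)
    return rt.mean()

# Hypothetical parameter values; the repetition benefit is modelled three ways.
print("novel displays:            ", simulate_mean_rt(v=1.5, a=1.2, ter=0.45))
print("repeated, lower threshold: ", simulate_mean_rt(v=1.5, a=0.9, ter=0.45))  # decision threshold account
print("repeated, higher drift:    ", simulate_mean_rt(v=2.2, a=1.2, ter=0.45))  # perceptual learning account
print("repeated, faster search:   ", simulate_mean_rt(v=1.5, a=1.2, ter=0.30))  # expedited search account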

9.
Humans conduct visual search more efficiently when the same display is presented for a second time, showing learning of repeated spatial contexts. In this study, we investigate spatial context learning in two tasks: visual search and change detection. In both tasks, we ask whether subjects learn to associate the target with the entire spatial layout of a repeated display (configural learning) or with individual distractor locations (nonconfigural learning). We show that nonconfigural learning results from visual search tasks, but not from change detection tasks. Furthermore, a spatial layout acquired in visual search tasks does not enhance change detection on the same display, whereas a spatial layout acquired in change detection tasks moderately enhances visual search. We suggest that although spatial context learning occurs in multiple tasks, the content of learning is, in part, task specific.

10.
In contextual cuing (CC), reaction times for finding targets are faster in repeated displays than in displays that have never been seen before. This has been demonstrated using target-distractor configurations, global background colors, naturalistic scenes, and covariation of targets with distractors. The majority of CC studies have used displays in which the target is always present. This study investigated what happens when the target is sometimes absent. Experiment 1 showed that, although configural CC occurs in displays when the target is always present, there is no CC when the target is always absent. Experiment 2 showed that there is no CC when the same spatial layout can be both target present and target absent on different trials. The presence of distractors in locations that had contained targets on other trials appeared to interfere with CC, and even disrupted the expression of CC in previously learned contexts (Exps. 3-5). These results show that target-distractor associations are the important element in producing CC and that, consistent with a response selection account, changing the response type from an orientation task to a detection task removes the CC effect.

11.
Stone JV, Harper N. Perception, 1999, 28(9): 1089-1104
Given a constant stream of perceptual stimuli, how can the underlying invariances associated with a given input be learned? One approach consists of using generic truths about the spatiotemporal structure of the physical world as constraints on the types of quantities learned. The learning methodology employed here embodies one such truth: that perceptually salient properties (such as stereo disparity) tend to vary smoothly over time. Unfortunately, the units of an artificial neural network tend to encode superficial image properties, such as individual grey-level pixel values, which vary rapidly over time. However, if the states of units are constrained to vary slowly, then the network is forced to learn a smoothly varying function of the training data. We implemented this temporal-smoothness constraint in a backpropagation network which learned stereo disparity from random-dot stereograms. Temporal smoothness was formalized with the use of regularization theory by modifying the standard cost function minimised during training of a network. Temporal smoothness was found to be similar to other techniques for improving generalisation, such as early stopping and weight decay. However, in contrast to these, the theoretical underpinnings of temporal smoothing are intimately related to fundamental characteristics of the physical world. Results are discussed in terms of regularization theory and the physically realistic assumptions upon which temporal smoothing is based.
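A minimal sketch of the kind of temporal-smoothness regulariser described here, assuming a simple squared-error base cost (an illustration of the idea, not the authors' exact formulation): the added term penalises rapid frame-to-frame change in a unit's output, so the network is pushed toward slowly varying codes such as disparity rather than rapidly varying pixel values.

import numpy as np

def smoothness_regularised_cost(outputs, targets, lam=0.1):
    """Composite cost: standard squared error plus a penalty on frame-to-frame
    change in a unit's output over a temporal sequence (lam weights the penalty)."""
    error_term = np.mean((outputs - targets) ** 2)
    smoothness_term = np.mean(np.diff(outputs) ** 2)
    return error_term + lam * smoothness_term

# A slowly varying output is penalised less than a jittery one with similar error.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
target = np.sin(2.0 * np.pi * t)                      # e.g. a smoothly varying disparity signal
smooth_output = target + 0.05 * np.sin(6.0 * np.pi * t)
jittery_output = target + 0.05 * rng.standard_normal(t.size)
print(smoothness_regularised_cost(smooth_output, target))
print(smoothness_regularised_cost(jittery_output, target))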

12.
Lobley K, Walsh V. Perception, 1998, 27(10): 1245-1255
Perceptual learning in colour/orientation visual conjunction search was examined in five experiments. Good transfer occurred to other conjunction arrays when only one element of the conjunction (either colour or orientation) was changed. When both elements (colour and orientation) were changed, but the same feature spaces were used (i.e. other colours and orientations), or when a new dimension was introduced to the transfer task (shapes instead of orientation), transfer was poor. The results suggest that perceptual learning of visual conjunction search is constrained mainly by stimulus parameters rather than by changes in cognitive strategies which are common to all search tasks. Contrary to other reports, we found little evidence of long-term retention of learning.

13.
Word-fragment cuing: the lexical search hypothesis
In four experiments we evaluated aspects of the hypothesis that word-fragment completion depends on the results of lexical but not semantic search. Experiment 1 showed that the number of meaningful associates linked to a studied word does not affect its recovery when the test cue consists of letters and spaces for missing letters. Experiments 2 and 3 showed retroactive interference effects in fragment completion when words in a second list were lexically related to words in a first list but not when the words in the second list were meaningfully related. Experiment 4 indicated that for studied words, instructions to search at the word level facilitated completion performance and that instructions to generate letters to fill missing spaces had no effect. Other findings indicated that completion was affected by the number of words lexically related to the fragment and by the number of letters missing from the fragment. In general, experimental manipulations that focused on lexical characteristics were effective, and those that focused on semantic characteristics were ineffective. The findings support the conclusion that word fragments engender a lexical search process that does not depend on retrieving encoded meaning.

14.
Jiang and Wagner (2004) demonstrated that individual target-distractor associations were learned in contextual cuing. We examined whether individual associations can be learned in efficient visual searches that do not involve attentional deployment to individual search items. In Experiment 1, individual associations were not learned during the efficient search tasks. However, in Experiment 2, where additional exposure duration of the search display was provided by presenting placeholders marking future locations of the search items, individual associations were successfully learned in the efficient search tasks and transferred to inefficient search. Moreover, Experiment 3 demonstrated that a concurrent task requiring attention does not affect the learning of the local visual context. These results clearly showed that attentional deployment is not necessary for learning individual locations and clarified how the human visual system extracts and preserves regularity in complex visual environments for efficient visual information processing.

15.
Implicit contextual cuing refers to the ability to learn the association between contextual information of our environment and a specific target, which can be used to guide attention during visual search. It was recently suggested that the storage of a snapshot image of the local context of a target underlies implicit contextual cuing. To make such a snapshot, it is necessary to use peripheral vision. In order to test whether peripheral vision can underlie implicit contextual cuing, we used a covert visual search task, in which participants were required to indicate the orientation of a target stimulus while foveating a fixation cross. The response times were shorter when the configuration of the stimuli was repeated than when the configuration was new. Importantly, this effect was still found after 10 days, indicating that peripherally perceived spatial context information can be stored in memory for long periods of time. These results indicate that peripheral vision can be used to make a snapshot of the local context of a target.

16.
Previous studies have shown that context-facilitated visual search can occur through implicit learning. In the present study, we have explored its oculomotor correlates as a step toward unraveling the mechanisms that underlie such learning. Specifically, we examined a number of oculomotor parameters that might accompany the learning of context-guided search. The results showed that a decrease in the number of saccades occurred along with a fall in search time. Furthermore, we identified an effective search period in which each saccade monotonically brought the fixation closer to the target. Most important, the speed with which eye fixation approached the target did not change as a result of learning. We discuss the general implications of these results for visual search.

17.
Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials, but with a target location bias (i.e., the target appeared on one half of the display twice as often as the other). Participants quickly learned to make more first saccades to the side more likely to contain the target. With item-by-item search, first saccades to the target were at chance. With a distributed search strategy, first saccades to a target located on the biased side increased above chance. The results confirm that visual search behavior is sensitive to simple global statistics in the absence of trial-to-trial target location repetitions.

18.
The term contextual cuing refers to improved visual search performance with repeated exposure to a configuration of objects. Participants use predictive cues, derived from learned associations between target locations and the spatial arrangement of the surrounding distractors in a configuration, to efficiently guide search behavior. Researchers have claimed that contextual cuing can occur implicitly. The present experiments examined two explicit measures: generation and recognition. In Experiment 1, we found that contextual cuing information was consciously retrievable when the number of trials used in a generation test was increased, and the results also suggested that the shorter tests that were used previously were not statistically powerful enough to detect a true awareness effect. In Experiment 2, concurrent implicit and explicit (generation and recognition) tests were employed. At a group level, learning did not precede awareness. Although contextual cuing was evident in participants who were selected post hoc as having no explicit awareness, and for specific configurations that did not support awareness, we argue that awareness may nevertheless be a necessary concomitant of contextual cuing. These results demonstrate that contextual cuing knowledge is accessible to awareness.

19.
This study examined the extent to which structural regularities inherent in visual arrays help to guide target detection and reduce age-related differences in skilled visual search performance. The target-detection performance of medical laboratory technologists in 2 age groups (M = 24.3 years and M = 49.0 years) and age-matched novices was assessed using images of bacterial morphology taken from Gram's stain photomicrographs as targets and search arrays. For skilled observers, response times were longer for middle-aged adults than for young adults except when external location cues were available, or when contextual cues inherent in the array were available to guide target detection. These results demonstrate that contextual information aids the skilled search of middle-aged experts, and suggest that contextual cuing is 1 means by which middle-aged adults can circumvent the effects of normally age-deficient processes on performance in a skilled domain.

20.
Age differences in a semantic category visual search task were investigated to determine whether the age effects were due to target learning deficits, distractor learning deficits, or a combination thereof. Twelve young (mean age 20) and 12 older (mean age 70) adults received 2,400 trials each in consistent and varied versions of the search task. Following training, a series of transfer-reversal manipulations allowed the assessment of target learning and distractor learning both in isolation and in combination. The pattern of data suggests that older adults have a deficit in their ability to increase the attention-attraction strength of targets and to decrease the attention-attraction strength of distractors. The results are interpreted in terms of a strength-based framework of visual search performance.
