Similar Articles
 20 similar articles found (search time: 15 ms)
1.
How much attention is needed to produce implicit learning? Previous studies have found inconsistent results, with some implicit learning tasks requiring virtually no attention while others rely on attention. In this study we examine the degree of attentional dependency in implicit learning of repeated visual search context. Observers searched for a target among distractors that were either highly similar to the target or dissimilar to the target. We found that the size of contextual cueing was comparable for repetitions of the two types of distractors, even though attention dwelled much longer on distractors highly similar to the target. We suggest that beyond a minimal amount, further increase in attentional dwell time does not contribute significantly to implicit learning of repeated search context.

2.
It is well known that observers can implicitly learn the spatial context of complex visual searches, such that future searches through repeated contexts are completed faster than those through novel contexts, even though observers remain at chance at discriminating repeated from new contexts. This contextual-cueing effect arises quickly (within less than five exposures) and asymptotes within 30 exposures to repeated contexts. In spite of being a robust effect (its magnitude is over 100 ms at the asymptotic level), the effect is implicit: Participants are usually at chance at discriminating old from new contexts at the end of an experiment, in spite of having seen each repeated context more than 30 times throughout a 50-min experiment. Here, we demonstrate that the speed at which the contextual-cueing effect arises can be modulated by external rewards associated with the search contexts (not with the performance itself). Following each visual search trial (and irrespective of a participant’s search speed on the trial), we provided a reward, a penalty, or no feedback to the participant. Crucially, the type of feedback obtained was associated with the specific contexts, such that some repeated contexts were always associated with reward, and others were always associated with penalties. Implicit learning occurred fastest for contexts associated with positive feedback, though penalizing contexts also showed a learning benefit. Consistent feedback also produced faster learning than did variable feedback, though unexpected penalties produced the largest immediate effects on search performance.

3.
Implicit sequence learning typically develops gradually, is often expressed quite rigidly, and is heavily reliant on contextual features. Recently we reported results pointing to the role of context-specific processes in the acquisition and expression of implicit sequence knowledge (D’Angelo, Milliken, Jiménez, & Lupiáñez, 2013). Here we examined further the role of context in learning of first-order conditional sequences, and whether context also plays a role in learning second-order conditional structures. Across five experiments we show that the role of context in first-order conditional sequences may not be as clear as we had previously reported, while at the same time we find evidence for the role of context in learning second-order conditional sequences. Together the results suggest that temporal context may be sufficient to learn complementary first-order conditional sequences, but that additional contextual information is necessary to concurrently learn higher-order sequential structures.

4.
If perspective views of an object in two orientations are displayed in alternation, observers will experience the object rotating back and forth in three-dimensional space. Rotational motion is perceived even though only two views are displayed and each view is two-dimensional. The results of 5 experiments converge on the conclusion that the perception of apparent rotational motion produces representations in visual memory corresponding to the spatial structure of the object along its arc of rotation. These representations are view-dependent, preserving information about spatial structure from particular perspectives, but do not preserve low-level perceptual details of the stimulus.

5.
The spatial contextual cuing task (SCCT) (Chun & Jiang, 1998) is an implicit learning task that appears to depend on the medial temporal lobes. This unusual combination has been of interest in functional imaging studies and research with clinical populations, where testing time is at a premium. However, the original version of the SCCT is time-consuming. In this study, 29 young adults (age range, 18–22 years) completed the SCCT, in which participants respond to the orientation of a target in arrays containing 11 distractors. Either 12 (original version) or 6 (abbreviated version) arrays repeated across the experiment, with the remaining novel arrays being generated randomly. Results revealed that the magnitude of learning (faster responses to repeated versus novel arrays) was larger when there were fewer repeated arrays, with no explicit awareness in most participants. Thus, the abbreviated version remained implicit, with the additional benefit of increasing the magnitude of learning.

6.
Previous studies have shown that the efficiency of visual search does not improve when participants search through the same unchanging display for hundreds of trials (repeated search), even though the participants have a clear memory of the search display. In this article, we ask two important questions. First, why do participants not use memory to help search the repeated display? Second, can context be introduced so that participants are able to guide their attention to the relevant repeated items? Experiments 1-4 show that participants choose not to use a memory strategy because, under these conditions, repeated memory search is actually less efficient than repeated visual search, even though the latter task is in itself relatively inefficient. However, when the visual search task is given context, so that only a subset of the items are ever pertinent, participants can learn to restrict their attention to the relevant stimuli (Experiments 5 and 6).

7.
8.
Humans conduct visual search more efficiently when the same display is presented for a second time, showing learning of repeated spatial contexts. In this study, we investigate spatial context learning in two tasks: visual search and change detection. In both tasks, we ask whether subjects learn to associate the target with the entire spatial layout of a repeated display (configural learning) or with individual distractor locations (nonconfigural learning). We show that nonconfigural learning results from visual search tasks, but not from change detection tasks. Furthermore, a spatial layout acquired in visual search tasks does not enhance change detection on the same display, whereas a spatial layout acquired in change detection tasks moderately enhances visual search. We suggest that although spatial context learning occurs in multiple tasks, the content of learning is, in part, task specific.

9.
For inefficient search, target detection is faster for repeated than for regenerated layouts. This effect, called contextual cuing, was assumed to arise from implicit learning of local spatial relationships between targets and distractors. However, a more global influence from distractors far from the target has not been tested. In this study, the search field was divided into upper and lower halves containing a repeated and a regenerated configuration set, respectively. The positions of the two sets were or were not exchanged, meaning that their relative as well as their absolute positions were the same or different (Experiment 1). In Experiment 2, the repeated set appeared alone in either the same or the other half of the screen (same or different absolute position). The contextual cuing effect remained when only absolute position was changed, but not when both absolute and relative positions were changed. These results suggest that contextual cuing depends on relative positional information.

10.
Tseng P, Hsu TY, Tzeng OJ, Hung DL, Juan CH. Perception, 2011, 40(7): 822-829
The visual system possesses a remarkable ability to learn regularities from the environment. In the case of contextual cuing, predictive visual contexts such as spatial configurations are implicitly learned, retained, and used to facilitate visual search, all without one's subjective awareness and conscious effort. Here we investigated whether implicit learning and its facilitatory effects are sensitive to the statistical properties of such implicit knowledge. In other words, are highly probable events learned better than less probable ones even when such learning is implicit? We systematically varied the frequency of context repetition to alter the degree of learning. Our results showed that search efficiency increased consistently as contextual probabilities increased. Thus, the visual contexts, along with their probabilities of occurrence, were both picked up by the visual system. Furthermore, even when the total number of exposures was held constant across probability conditions, the highest probability condition still enjoyed a greater cuing effect, suggesting that the temporal aspect of implicit learning is also an important factor to consider in addition to the effect of mere frequency. Together, these findings suggest that implicit learning, although bypassing observers' conscious encoding and retrieval effort, behaves much like explicit learning in the sense that its facilitatory effect also varies as a function of its associative strength.

11.
Most recent work concerned with intuition has demonstrated that people can respond discriminatively to coherence that they cannot identify. Specifically, in a gestalt-closure task subjects were shown slides of paired drawings. One of the drawings represented a fragmented picture of a common object, whereas the other was constructed by rotation of the elements of the coherent gestalt. When the subjects were unable to name the object, they were urged to make a forced-choice decision regarding which of the two drawings represented a real object. The results showed that the proportion of pictures not correctly identified, but nevertheless correctly selected as coherent, was significantly higher than chance. The current experiment replicated these findings. In addition, it was shown that a study phase with either coherent or incoherent picture primes can bias intuitive judgments in the test phase, in accordance with a processing view. Incoherent-picture primes reduced the forced-choice decisions to a level of chance. Moreover, priming was found to depend on the similarity between the study and the test stimuli. We argue that a more fluent reprocessing of coherent, or primed, stimuli may be a basis for intuitive judgments. Intuition may go wrong when priming has favored an incoherent solution.

12.
Learners exhibit many apparently irrational behaviors in their use of cues, sometimes learning to ignore relevant cues or to attend to irrelevant ones. A learning phenomenon called highlighting seems especially to demand explanation in terms of learned attention. Highlighting complements the classic phenomenon of conditioned blocking, which has been shown to involve learned inattention. Highlighting and blocking, along with a wide spectrum of other perplexing learning phenomena, can be accounted for by recent connectionist models in which both attentional shifting and associative learning are driven by the rational goal of rapid error reduction.

13.
Data from visual-search tasks are typically interpreted to mean that searching for targets defined by feature differences does not require attention and thus can be performed in parallel, whereas searching for other targets requires serial allocation of attention. The question addressed here was whether a parallel-serial dichotomy would be obtained if data were collected using a variety of targets representing each of several kinds of defining features. Data analyses included several computations in addition to search rate: (1) target-absent to target-present slope ratios; (2) two separate data transformations to control for errors; (3) minimum reaction time; and (4) slopes of standard deviation as a function of set size. Some targets showed strongly parallel or strongly serial search, but there was evidence for several intermediate search classes. Sometimes, for a given target-distractor pair, the results depended strongly on which character was the target and which was the distractor. Implications for theories of visual search are discussed.

14.
Repeatedly searching through invariant spatial arrangements in visual search displays leads to the buildup of memory about these displays (contextual-cueing effect). In the present study, we investigate (1) whether contextual cueing is influenced by global statistical properties of the task and, if so, (2) whether these properties increase the overall strength (asymptotic level) or the temporal development (speed) of learning. Experiment 1a served as baseline against which we tested the effects of increased or decreased proportions of repeated relative to nonrepeated displays (Experiments 1b and 1c, respectively), thus manipulating the global statistical properties of search environments. Importantly, probability variations were achieved by manipulating the number of nonrepeated (baseline) displays so as to equate the total number of repeated displays across experiments. In Experiment 1d, repeated and nonrepeated displays were presented in longer streaks of trials, thus establishing a stable environment of sequences of repeated displays. Our results showed that the buildup of contextual cueing was expedited in the statistically rich Experiments 1b and 1d, relative to the baseline Experiment 1a. Further, contextual cueing was entirely absent when repeated displays occurred in the minority of trials (Experiment 1c). Together, these findings suggest that contextual cueing is modulated by observers’ assumptions about the reliability of search environments.

15.
16.
Visual search (e.g., finding a specific object in an array of other objects) is performed most effectively when people are able to ignore distracting nontargets. In repeated search, however, incidental learning of object identities may facilitate performance. In three experiments, with over 1,100 participants, we examined the extent to which search could be facilitated by object memory and by memory for spatial layouts. Participants searched for new targets (real-world, nameable objects) embedded among repeated distractors. To make the task more challenging, some participants performed search for multiple targets, increasing demands on visual working memory (WM). Following search, memory for search distractors was assessed using a surprise two-alternative forced choice recognition memory test with semantically matched foils. Search performance was facilitated by distractor object learning and by spatial memory; it was most robust when object identity was consistently tied to spatial locations and weakest (or absent) when object identities were inconsistent across trials. Incidental memory for distractors was better among participants who searched under high WM load, relative to low WM load. These results were observed when visual search included exhaustive-search trials (Experiment 1) or when all trials were self-terminating (Experiment 2). In Experiment 3, stimulus exposure was equated across WM load groups by presenting objects in a single-object stream; recognition accuracy was similar to that in Experiments 1 and 2. Together, the results suggest that people incidentally generate memory for nontarget objects encountered during search and that such memory can facilitate search performance.

17.
Changes in environmental context between encoding and retrieval often affect explicit memory but research on implicit memory is equivocal. One proposal is that conceptual but not perceptual priming is influenced by context manipulations. However, findings with conceptual priming may be compromised by explicit contamination. The present study examined the effects of environmental context on conceptual explicit (category-cued recall) and implicit memory (category production). Explicit recall was reduced by context change. The implicit test results depended on test awareness (assessed with a post-test questionnaire). Among test-unaware participants, priming was equivalent for same-context and different-context groups, whereas for the test-aware, the same-context group produced more priming. Thus, when explicit contamination is controlled, changes in environmental context do not impair conceptual priming. Context dependency appears to be a general difference between implicit and explicit memory rather than a difference between conceptual and perceptual implicit memory. Finally, measures of mood indicated no changes in affect across contexts, arguing against mood mediation for the context effects in explicit recall.

18.

19.
The relation between attention demand and the number of items in the array (array size) was investigated by engaging subjects in a primary search task and measuring spare capacity at different points in time, with a secondary tone task that occurred randomly on half of the trials. The major variables in both tasks were array size (4, 8, or 12 letters) and stimulus onset asynchrony (SOA: −400, −200, 0, 200, 400, and 600 msec). Subjects were able to perform the tasks quite independently, and most of the interference that resulted from nonindependence appeared in tone-task performance. The amount of interference (i.e., maximum tone reaction time) was independent of array size, but the duration of interference (i.e., the number of SOAs at which tone reaction time was elevated) increased with array size. The findings were interpreted as supporting unlimited-capacity models of visual search performance.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号