Similar articles
20 similar articles retrieved (search time: 9 ms)
1.
If perspective views of an object in two orientations are displayed in alternation, observers will experience the object rotating back and forth in three-dimensional space. Rotational motion is perceived even though only two views are displayed and each view is two-dimensional. The results of 5 experiments converge on the conclusion that the perception of apparent rotational motion produces representations in visual memory corresponding to the spatial structure of the object along its arc of rotation. These representations are view-dependent, preserving information about spatial structure from particular perspectives, but do not preserve low-level perceptual details of the stimulus.

2.
The spatial contextual cuing task (SCCT; Chun & Jiang, 1998) is an implicit learning task that appears to depend on the medial temporal lobes. This unusual combination has been of interest in functional imaging studies and research with clinical populations, where testing time is at a premium. However, the original version of the SCCT is time-consuming. In this study, 29 young adults (age range, 18–22 years) completed the SCCT, in which participants respond to the orientation of a target in arrays containing 11 distractors. Either 12 (original version) or 6 (abbreviated version) arrays repeated across the experiment, with the remaining novel arrays being generated randomly. Results revealed that the magnitude of learning (faster responses to repeated versus novel arrays) was larger when there were fewer repeated arrays, with no explicit awareness in most participants. Thus, the abbreviated version remained implicit, with the additional benefit of increasing the magnitude of learning.
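To make the abbreviated design and its learning measure concrete, the sketch below lays out a hypothetical trial structure (a handful of repeated arrays reused every block alongside freshly generated novel arrays) and computes the learning magnitude as the mean RT on novel arrays minus the mean RT on repeated arrays. It is an illustrative sketch only, not the authors' materials; the grid size, block count, and RT values are invented placeholders.

```python
# Illustrative sketch of an abbreviated contextual-cuing design (hypothetical parameters).
import random

N_BLOCKS = 24      # hypothetical number of blocks
N_REPEATED = 6     # abbreviated version: 6 repeated arrays reused in every block
N_NOVEL = 6        # novel arrays generated fresh in every block

def make_array(rng):
    """Random spatial layout: 1 target plus 11 distractor locations on an 8 x 6 grid."""
    cells = [(x, y) for x in range(8) for y in range(6)]
    locations = rng.sample(cells, 12)
    return {"target": locations[0], "distractors": locations[1:]}

rng = random.Random(0)
repeated_set = [make_array(rng) for _ in range(N_REPEATED)]   # fixed across the experiment

trials = []
for _ in range(N_BLOCKS):
    block = [("repeated", array) for array in repeated_set]
    block += [("novel", make_array(rng)) for _ in range(N_NOVEL)]
    rng.shuffle(block)            # repeated and novel arrays interleave within a block
    trials.extend(block)

def learning_magnitude(novel_rts, repeated_rts):
    """Contextual cuing = mean RT on novel arrays minus mean RT on repeated arrays."""
    return sum(novel_rts) / len(novel_rts) - sum(repeated_rts) / len(repeated_rts)

print(len(trials))                                            # 288 trials in total
print(learning_magnitude([980.0, 1020.0], [900.0, 940.0]))    # 80.0 ms cuing effect
```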

3.
Humans conduct visual search faster when the same display is presented for a 2nd time, showing implicit learning of repeated displays. This study examines whether learning of a spatial layout transfers to other layouts that are occupied by items of new shapes or colors. The authors show that spatial context learning is sometimes contingent on item identity. For example, when the training session included some trials with black items and other trials with white items, learning of the spatial layout became specific to the trained color: no transfer was seen when items were in a new color during testing. However, when the training session included only trials in black (or white), learning transferred to displays with a new color. Similar results held when items changed shapes after training. The authors conclude that implicit visual learning is sensitive to trial context and that spatial context learning can be identity contingent.

4.
How much attention is needed to produce implicit learning? Previous studies have found inconsistent results, with some implicit learning tasks requiring virtually no attention while others rely on attention. In this study we examine the degree of attentional dependency in implicit learning of repeated visual search context. Observers searched for a target among distractors that were either highly similar to the target or dissimilar to the target. We found that the size of contextual cueing was comparable for repeated displays containing the two types of distractors, even though attention dwelled much longer on distractors highly similar to the target. We suggest that beyond a minimal amount, further increase in attentional dwell time does not contribute significantly to implicit learning of repeated search context.

5.
Implicit sequence learning typically develops gradually, is often expressed quite rigidly, and is heavily reliant on contextual features. Recently we reported results pointing to the role of context-specific processes in the acquisition and expression of implicit sequence knowledge (D'Angelo, Milliken, Jiménez, & Lupiáñez, 2013). Here we examined further the role of context in learning of first-order conditional sequences, and whether context also plays a role in learning second-order conditional structures. Across five experiments we show that the role of context in first-order conditional sequences may not be as clear as we had previously reported, while at the same time we find evidence for the role of context in learning second-order conditional sequences. Together the results suggest that temporal context may be sufficient to learn complementary first-order conditional sequences, but that additional contextual information is necessary to concurrently learn higher-order sequential structures.

7.
When stimuli are associated with reward outcome, their visual features acquire high attentional priority such that stimuli possessing those features involuntarily capture attention. Whether a particular feature is predictive of reward, however, will vary with a number of contextual factors. One such factor is spatial location: for example, red berries are likely to be found in low-lying bushes, whereas yellow bananas are likely to be found on treetops. In the present study, I explore whether the attentional priority afforded to reward-associated features is modulated by such location-based contingencies. The results demonstrate that when a stimulus feature is associated with a reward outcome in one spatial location but not another, attentional capture by that feature is selective to when it appears in the rewarded location. This finding provides insight into how reward learning effectively modulates attention in an environment with complex stimulus–reward contingencies, thereby supporting efficient foraging.

8.
Recent research has shown that simple motor actions, such as pointing or grasping, can modulate the way we perceive and attend to our visual environment. Here we examine the role of action in spatial context learning. Previous studies using keyboard responses have revealed that people are faster at locating a target on repeated visual search displays ("contextual cueing"). However, this learning appears to depend on the task and response requirements. In Experiment 1, participants searched for a T-target among L-distractors and responded either by pressing a key or by touching the screen. Comparable contextual cueing was found in both response modes. Moreover, learning transferred between keyboard and touch screen responses. Experiment 2 showed that learning occurred even for repeated displays that required no response, and this learning was as strong as learning for displays that required a response. Learning on no-response trials cannot be accounted for by oculomotor responses, as learning was observed when eye movements were discouraged (Experiment 3). We suggest that spatial context learning is abstracted from motor actions.

10.
Behavioral and neuroimaging evidence suggests that mindfulness exerts its salutary effects by disengaging habitual processes supported by subcortical regions and increasing effortful control processes supported by the frontal lobes. Here we investigated whether individual differences in dispositional mindfulness relate to performance on implicit sequence learning tasks in which optimal learning may in fact be impeded by the engagement of effortful control processes. We report results from two studies where participants completed a widely used questionnaire assessing mindfulness and one of two implicit sequence learning tasks. Learning was quantified using two commonly used measures of sequence learning. In both studies we detected a negative relationship between mindfulness and sequence learning, and the relationship was consistent across both learning measures. Our results, the first to show a negative relationship between mindfulness and implicit sequence learning, suggest that the beneficial effects of mindfulness do not extend to all cognitive functions.

11.
Representation-quality theory takes a gradualist view of the growth of awareness and overlooks the abrupt influence that novel stimuli can exert on awareness, whereas novel-stimulus theory emphasizes abrupt changes in awareness while ignoring the growth in representation quality of the novel stimuli themselves. Using the classic deterministic implicit sequence learning paradigm, this study treated transfer blocks as novel stimuli and manipulated their number and position to examine how novel stimuli influence implicit learning and awareness through representation quality. The results showed that: (1) the number effect was significant, with two transfer blocks promoting the amount of implicit learning more effectively, indicating that a novel stimulus needs sufficient representation quality of its own to function as an "unexpected event"; (2) regarding the position effect, placing two transfer blocks early in the sequence raised controlled awareness more effectively, indicating that the first novel stimulus must appear while the representation quality of the original sequence is still at an early stage, so that participants can contrast the novel stimulus with the original sequence; the second novel stimulus then reinforces this contrast, promoting increased awareness of the original sequence.

12.
This article examines Kosslyn's (1987) hypothesis of the unequal capacity of cerebral hemispheres to process categorical and coordinate spatial relations. Experiment 1 comprised 4 different tasks and failed to support this hypothesis in normal subjects. With the same stimulus patterns as in Kosslyn's study, the results failed to confirm cerebral asymmetry for representing the 2 types of spatial relations, in normal (Experiment 2) and commissurotomized (Experiment 3) subjects. In Experiment 4, a reduction in stimulus luminance produced a partial confirmation of the hypothesis as the right hemisphere proved more adept than the left hemisphere at operating on coordinate representations, whereas both were equally competent at processing categorical spatial-relation representations. The results suggest that the 2 hemispheres can operate on both types of spatial relations, but their respective efficiency depends on the quality of the representations to be processed.

13.
Tseng P, Hsu TY, Tzeng OJ, Hung DL, Juan CH. Perception, 2011, 40(7): 822-829.
The visual system possesses a remarkable ability to learn regularities from the environment. In the case of contextual cuing, predictive visual contexts such as spatial configurations are implicitly learned, retained, and used to facilitate visual search, all without one's subjective awareness and conscious effort. Here we investigated whether implicit learning and its facilitatory effects are sensitive to the statistical property of such implicit knowledge. In other words, are highly probable events learned better than less probable ones even when such learning is implicit? We systematically varied the frequencies of context repetition to alter the degrees of learning. Our results showed that search efficiency increased consistently as contextual probabilities increased. Thus, the visual contexts, along with their probabilities of occurrence, were both picked up by the visual system. Furthermore, even when the total number of exposures was held constant between each probability, the highest probability still enjoyed a greater cuing effect, suggesting that the temporal aspect of implicit learning is also an important factor to consider in addition to the effect of mere frequency. Together, these findings suggest that implicit learning, although bypassing observers' conscious encoding and retrieval effort, behaves much like explicit learning in the sense that its facilitatory effect also varies as a function of its associative strengths.
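A minimal sketch of the measure implied above: the cuing effect for each repetition-probability condition is the novel-display baseline RT minus the mean RT for that condition, so higher-probability contexts should show a larger difference. The condition labels and RT values are hypothetical, not the authors' data.

```python
# Hypothetical RTs illustrating a cuing effect that grows with repetition probability.
from collections import defaultdict

observations = [                     # (condition, rt_ms) pairs, invented for illustration
    ("novel", 1000), ("novel", 990), ("novel", 1010),
    ("p_low", 970),  ("p_low", 960),
    ("p_mid", 940),  ("p_mid", 930),
    ("p_high", 900), ("p_high", 890),
]

rts = defaultdict(list)
for condition, rt in observations:
    rts[condition].append(rt)

baseline = sum(rts["novel"]) / len(rts["novel"])     # mean RT on novel displays
for condition in ("p_low", "p_mid", "p_high"):
    cuing = baseline - sum(rts[condition]) / len(rts[condition])
    print(f"{condition}: cuing effect = {cuing:.0f} ms")
```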

14.
Most recent work concerned with intuition has demonstrated that people can respond discriminatively to coherence that they cannot identify. Specifically, in a gestalt-closure task subjects were shown slides of paired drawings. One of the drawings represented a fragmented picture of a common object, whereas the other was constructed by rotation of the elements of the coherent gestalt. When the subjects were unable to name the object, they were urged to make a forced-choice decision regarding which of the two drawings represented a real object. The results showed that the proportion of pictures not correctly identified, that were nevertheless correctly selected as coherent, was significantly higher than chance. The current experiment replicated these findings. In addition, it was shown that a study phase with either coherent or incoherent picture primes can bias intuitive judgments in the test phase in accordance with a processing view. Incoherent-picture primes reduced the forced-choice decisions to a level of chance. Moreover, priming was found to be dependent on the similarity between the study and the test stimuli. We argue that a more fluent reprocessing of coherent, or primed, stimuli may be a basis for intuitive judgments. Intuition may go wrong when priming has favored an incoherent solution.

15.
Detecting and learning the location of unpleasant or pleasant scenarios, or spatial affect learning, is an essential skill that safeguards well-being (Crawford & Cacioppo, 2002). Potentially altered by psychiatric illness, this skill has yet to be measured in adults with and without major depressive disorder (MDD) and anxiety disorders (AD). This study enrolled 199 adults diagnosed with MDD and AD (n=53), MDD (n=47), AD (n=54), and no disorders (n=45). Measures included clinical interviews, self-reports, and a validated spatial affect task using affective pictures (IAPS; Lang, Bradley, & Cuthbert, 2005). Participants with MDD showed impaired spatial affect learning of negative stimuli and irrelevant learning of pleasant pictures compared with non-depressed adults. Adults with MDD may use a "GOOD is UP" heuristic reflected by their impaired learning of the opposite correlation (i.e., "BAD is UP") and performance in the pleasant version of the task.

16.
17.
Serial position effects in explicit and implicit memory were investigated in a noncolour-word Stroop task. Participants were presented with a study list of four words printed in different colours and were tested for memory of the list position of the colour (explicit memory task); they were then asked to complete a word stem primed (or not primed) by one of the words in the study list (implicit memory task), which was presented as a distractor task. Serial position effects were observed in both explicit and implicit memory, for both response times and proportion of correct responses, with marked primacy effects, a drop in performance towards the third list position and a rise in memory performance at the fourth list position, the recency effect being most pronounced in implicit memory. It is concluded that explicit and implicit expressions of memory are governed by similar principles of temporal information processing.

18.

Spatial learning of real-world environments is impaired with severely restricted peripheral field of view (FOV). In prior research, the effects of restricted FOV on spatial learning have been studied using passive learning paradigms – learners walk along pre-defined paths and are told the location of targets to be remembered. Our research has shown that mobility demands and environmental complexity may contribute to impaired spatial learning with restricted FOV through attentional mechanisms. Here, we examine the role of active navigation, both in locomotion and in target search. First, we compared effects of active versus passive locomotion (walking with a physical guide versus being pushed in a wheelchair) on a task of pointing to remembered targets in participants with simulated 10° FOV. We found similar performance between active and passive locomotion conditions in both simpler (Experiment 1) and more complex (Experiment 2) spatial learning tasks. Experiment 3 required active search for named targets to remember while navigating, using both a mild and a severe FOV restriction. We observed no difference in pointing accuracy between the two FOV restrictions but an increase in attentional demands with severely restricted FOV. Experiment 4 compared active and passive search with severe FOV restriction, within subjects. We found no difference in pointing accuracy, but observed an increase in cognitive load in active versus passive search. Taken together, in the context of navigating with restricted FOV, neither locomotion method nor level of active search affected spatial learning. However, the greater cognitive demands could have counteracted the potential advantage of the active learning conditions.

19.
Contextual cueing is a visual search phenomenon in which memory of global visual context guides spatial attention towards task-relevant portions of the search display. Recent work has shown that the learning processes underlying contextual cueing exhibit primacy effects; they are more sensitive to early experience than to later experience. These results appear to pose difficulties for associative accounts, which typically predict recency effects: behaviour being most strongly influenced by recent experience. The current study utilizes trial sequences that consist of two contradictory sets of regularities. In contrast to previous results, robust recency effects were observed. In a second study it is demonstrated that this recency effect can be minimized, but not reversed, by systematically manipulating task-irrelevant features of the search display. These results provide additional support for an associative account of contextual cueing and suggest that contextual cueing may, under some circumstances, be more sensitive to recent experience.

20.
It is well known that observers can implicitly learn the spatial context of complex visual searches, such that future searches through repeated contexts are completed faster than those through novel contexts, even though observers remain at chance at discriminating repeated from new contexts. This contextual-cueing effect arises quickly (within less than five exposures) and asymptotes within 30 exposures to repeated contexts. In spite of being a robust effect (its magnitude is over 100 ms at the asymptotic level), the effect is implicit: Participants are usually at chance at discriminating old from new contexts at the end of an experiment, in spite of having seen each repeated context more than 30 times throughout a 50-min experiment. Here, we demonstrate that the speed at which the contextual-cueing effect arises can be modulated by external rewards associated with the search contexts (not with the performance itself). Following each visual search trial (and irrespective of a participant's search speed on the trial), we provided a reward, a penalty, or no feedback to the participant. Crucially, the type of feedback obtained was associated with the specific contexts, such that some repeated contexts were always associated with reward, and others were always associated with penalties. Implicit learning occurred fastest for contexts associated with positive feedback, though penalizing contexts also showed a learning benefit. Consistent feedback also produced faster learning than did variable feedback, though unexpected penalties produced the largest immediate effects on search performance.
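The feedback contingency described above (feedback tied to the context shown, not to performance on that trial) can be sketched roughly as follows; the context names, counts, and RT range are hypothetical and purely illustrative.

```python
# Hypothetical assignment of a fixed feedback type to each repeated search context.
import random

feedback_types = ("reward", "penalty", "none")
repeated_contexts = {f"context_{i:02d}": feedback_types[i % 3] for i in range(12)}

def feedback_for_trial(context_id, rt_ms):
    """Feedback depends only on which context was shown, never on the search RT."""
    return repeated_contexts.get(context_id, "none")   # novel contexts receive no feedback

rng = random.Random(1)
block = list(repeated_contexts)
rng.shuffle(block)                                     # present contexts in random order
for context_id in block[:4]:
    simulated_rt = rng.uniform(700, 1200)              # the RT plays no role in the feedback
    print(context_id, feedback_for_trial(context_id, simulated_rt))
```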
