Similar Articles
20 similar articles found (search time: 46 ms)
1.
Statistical properties in the visual environment can be used to improve performance on visual working memory (VWM) tasks. The current study examined the ability to incidentally learn that a change is more likely to occur to a particular feature dimension (shape, color, or location) and use this information to improve change detection performance for that dimension (the change probability effect). Participants completed a change detection task in which one change type was more probable than others. Change probability effects were found for color and shape changes, but not location changes, and intentional strategies did not improve the effect. Furthermore, the change probability effect developed and adapted to new probability information quickly. Finally, in some conditions, an improvement in change detection performance for a probable change led to an impairment in change detection for improbable changes.

2.
Six pigeons were trained in a change detection task with four colors. They were shown two colored circles on a sample array, followed by a test array with the color of one circle changed. The pigeons learned to choose the changed color and transferred their performance to four unfamiliar colors, suggesting that they had learned a generalized concept of color change. They also transferred performance to test delays several times their 50-msec training delay without prior delay training. The accurate delay performance of several seconds suggests that their change detection was memory based, as opposed to a perceptual attentional capture process. These experiments are the first to show that an animal species (pigeons, in this case) can learn a change detection task identical to ones used to test human memory, thereby providing the possibility of directly comparing short-term memory processing across species.

3.
The representation of explicit motor sequence knowledge
Much research has investigated the representation of implicitly learned motor sequences: Do subjects learn sequences of stimuli, responses, response locations, or some combination? Most of the work on this subject indicates that when sequences are learned implicitly, it is in terms of response locations. The present work investigated the representation of explicitly learned motor sequences. In four experiments, we found consistent evidence that explicitly learned sequences are represented in terms of stimulus locations. This conclusion held true for both self-report measures (subjects said that they learned stimuli) and performance measures (when the stimuli changed, performance degraded). We interpret these data in a multiple-memory-systems framework.

4.
In three experiments, participants decided whether a Star of David shape was present among distractors. Although the participants were instructed to ignore the colors in the display, detection was slower when each triangle of the Star of David was printed in a different color than when the Star of David was printed in a uniform color or when each triangle was printed in two colors. Extending object file theory, we suggest that when the parts of an object are distinguished by a color difference and are perceived as separate objects, perception of the whole object, which is composed of these same parts, is impaired. One interpretation within object file theory is that when the visual system represents the location of a complex object as occupied by identity tags for its different parts, it cannot also link the same location to the identity of the complex object. A new object file must then be created.

5.
People can maintain accurate representations of visual changes without necessarily being aware of them. Here, we investigate whether a similar phenomenon (implicit change detection) also exists in touch. In Experiments 1 and 2, participants detected the presence of a change between two consecutively presented tactile displays. Tactile change blindness was observed, with participants failing to report the presence of a tactile change. Critically, however, when participants had to make a forced-choice response regarding the number of stimuli presented in the two displays, their performance was significantly better than chance (i.e., implicit change detection was observed). Experiment 3 demonstrated that tactile change detection does not necessarily involve a shift of spatial attention toward the location of change, regardless of whether the change is explicitly detected. We conclude that tactile change detection likely results from comparing representations of the two displays, rather than from directing spatial attention to the location of the change.
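As a rough illustration of the "significantly better than chance" criterion used to infer implicit change detection, the following Python sketch applies a one-sided binomial test to hypothetical forced-choice data; the trial counts and chance level are invented for illustration, not the values reported in these experiments.

```python
# Minimal sketch: testing whether forced-choice accuracy exceeds chance.
# The counts below are hypothetical, not the study's reported data.
from scipy.stats import binomtest

n_trials = 80    # change-blind trials with a forced-choice response
n_correct = 52   # correct judgments of the number of stimuli presented
chance = 0.5     # assumed two response alternatives

result = binomtest(n_correct, n_trials, p=chance, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, one-sided p = {result.pvalue:.4f}")
```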

6.
Sun HM, Gordon RD. Memory & Cognition, 2010, 38(8): 1049-1057
In five experiments, we examined the influence of contextual objects' locations and visual features on visual memory. Participants' visual memory was tested with a change detection task in which they had to judge whether the orientation (Experiments 1A, 1B, and 2) or color (Experiments 3A and 3B) of a target object was the same as at study. Furthermore, contextual objects' locations and visual features were manipulated in the test image. The results showed that change detection performance was better when contextual objects' locations remained the same from study to test, demonstrating that the original spatial configuration is important for subsequent visual memory retrieval. The results further showed that changes to contextual objects' orientation, but not color, reduced orientation change detection performance; and changes to contextual objects' color, but not orientation, impaired color change detection performance. Therefore, contextual objects' visual features are capable of affecting visual memory. However, selective attention plays an influential role in modulating such effects.

7.
Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition, we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments, we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing, but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.

8.
Thirty smokers and 30 nonsmokers participated in a flicker study in which the role of attentional bias in change detection was examined. The participants observed picture pairs of everyday objects flicker on a computer screen until they detected the one object that had changed. In half of the pictures, a smoking-related object (e.g., a lighter) was included among smoking-unrelated objects (e.g., a spoon). Half of the smokers and half of the nonsmokers were aware of the experiment's focus, and the other half were not. The smokers exhibited shorter detection latencies than did the nonsmokers when a smoking object changed and longer detection latencies when a nonsmoking object changed while a smoking object was present, and they exhibited detection latencies similar to those of the nonsmokers when smoking objects were not present. Interestingly, the nonsmokers displayed the same attentional bias as the smokers when they were aware of the experiment's smoking focus, but they did not display any attentional bias when they were unaware. Thus, these findings provide evidence for long-term context-independent, as well as for short-term context-dependent, attentional bias.

9.
In order to determine whether people encode spatial configuration information when encoding visual displays, in four experiments, we investigated whether changes in task-irrelevant spatial configuration information would influence color change detection accuracy. In a change detection task, when objects in the test display were presented in new random locations, rather than identical or different locations preserving the overall configuration, participants were more likely to report that the colors had changed. This consistent bias across four experiments suggested that people encode task-irrelevant spatial configuration along with object information. Experiment 4 also demonstrated that only a low-false-alarm group of participants effectively bound spatial configuration information to object information, suggesting that these types of binding processes are open to strategic influences.

10.
Recent research has found that visual object memory can be stored as part of a larger scene representation rather than independently of scene context. The present study examined how spatial and nonspatial contextual information modulate visual object memory. Two experiments tested participants' visual memory by using a change detection task in which a target object's orientation was either the same as it appeared during initial viewing or changed. In addition, we examined the effect of spatial and nonspatial contextual manipulations on change detection performance. The results revealed that visual object representations can be maintained reliably after viewing arrays of objects. Moreover, change detection performance was significantly higher when either spatial or nonspatial contextual information remained the same in the test image. We concluded that while processing complex visual stimuli such as object arrays, visual object memory can be stored as part of a comprehensive scene representation, and both spatial and nonspatial contextual changes modulate visual memory retrieval and comparison.

11.
The effect of concurrent movement on incidental versus intentional statistical learning was examined in two experiments. In Experiment 1, participants learned the statistical regularities embedded within familiarization stimuli implicitly, whereas in Experiment 2 they were made aware of the embedded regularities and were instructed explicitly to learn them. Experiment 1 demonstrated that while the control group was able to learn the statistical regularities, the resistance-free cycling group and the exercise group did not demonstrate learning. This contrasts with the findings of Experiment 2, where all three groups demonstrated significant levels of learning. The results suggest that the movement demands, rather than the physiological stress, interfered with statistical learning. We suggest that movement activates the striatum, which is not only responsible for motor control but also plays a role in incidental learning.

12.
The cost-effectiveness of the implicit (procedural) knowledge that supports motor expertise enables surprisingly efficient performance when a decision and an action must occur in close temporal proximity. The authors argue that if novices learn the motor component of performance implicitly rather than explicitly, then they will also be efficient when they make a decision and execute an action in close temporal proximity. Participants (N = 35) learned a table tennis shot implicitly or explicitly. The authors assessed participants' motor performance and movement kinematics under conditions that required a concurrent low-complexity decision or a concurrent high-complexity decision about where to direct each shot. Performance was disrupted only for participants who learned explicitly when they made high-complexity decisions but not when they made low-complexity decisions. The authors conclude that implicit motor learning encourages cognitively efficient motor control more than does explicit motor learning, which allows performance to remain stable when time constraints call for a complex decision in tandem with a motor action.

13.
We investigated the influence of implicit learning on cognitive control. In a sequential Stroop task, participants implicitly learned a sequence placed on the color of the Stroop words. In Experiment 1, Stroop conflict was lower in sequenced than in random trials (learning-improved control). However, as these results were derived from an interaction between learning and conflict, they could also be explained by improved implicit learning (the difference between random and sequenced trials) under incongruent compared with congruent trials (control-improved learning). Therefore, we further unraveled the direction of the interaction in two additional experiments. In Experiment 2, participants who learned the color sequence were no better at resolving conflict than participants who did not undergo sequence training. This shows that implicit knowledge does not directly reduce conflict (no learning-improved control). In Experiment 3, the amount of conflict did not directly improve learning either (no control-improved learning). However, conflict had a significant impact on the expression of implicit learning, as most knowledge was expressed under the highest amount of conflict. Thus, task optimization was accomplished by an increased reliance on implicit sequence knowledge under high conflict. These findings demonstrate that implicit learning processes can be flexibly recruited to support cognitive control functions.

14.
Saccade-contingent change detection provides a powerful tool for investigating scene representation and scene memory. In the present study, critical objects presented within color images of naturalistic scenes were changed during a saccade toward or away from the target. During the saccade, the critical object was changed to another object type, changed to a visually different token of the same object type, or deleted from the scene. There were three main results. First, the deletion of a saccade target was special: Detection performance for saccade target deletions was very good, and this level of performance did not decline with the amplitude of the saccade. In contrast, detection of type and token changes at the saccade target, and of all changes (including deletions) at a location that had just been fixated but was not the saccade target, decreased as the amplitude of the saccade increased. Second, detection performance for type and token changes, both when the changing object was the target of the saccade and when the object had just been fixated but was not the saccade target, was well above chance. Third, mean gaze durations were reliably elevated for those trials in which the change was not overtly detected. The results suggest that the presence of the saccade target plays a special role in transsaccadic integration and, together with other recent findings, suggest more generally that a relatively rich scene representation is retained across saccades and stored in visual memory.

15.
People often fail to detect a change between two visual scenes, and retrieval failure has been suggested as a reason. We investigated the possibility that retrieval blocking underlies this failure by examining the pattern of errors in recognizing the pre-change object. The results of Experiment 1 showed that participants were biased toward selecting the lure that was similar to the post-change object when they failed to recognize the pre-change object. This bias was also observed in Experiment 2, when there was sufficient time to encode and consolidate the pre-change object, and the bias was as strong as correct recognition in Experiment 3, when participants divided attention during encoding and comparison. The bias in memory errors remained significant even when participants had the option to select an “I don’t remember” response in Experiment 4. In Experiment 5, the bias was observed after participants successfully detected a change at an invalidly cued location and after they failed to detect a change at a validly cued location. These findings suggest that blocking can lead to retrieval failure in change detection when participants are aware of a change yet unable to retrieve verbatim traces, and also when participants are unaware of a change and use the post-change object to retrieve the identity of the previous object at the same location.

16.
To clarify the relationship between visual long-term memory (VLTM) and online visual processing, we investigated whether and how VLTM involuntarily affects the performance of a one-shot change detection task using images consisting of six meaningless geometric objects. In the study phase, participants observed pre-change (Experiment 1), post-change (Experiment 2), or both pre- and post-change (Experiment 3) images appearing in the subsequent change detection phase. In the change detection phase, one object always changed between pre- and post-change images and participants reported which object was changed. Results showed that VLTM of pre-change images enhanced the performance of change detection, while that of post-change images decreased accuracy. Prior exposure to both pre- and post-change images did not influence performance. These results indicate that pre-change information plays an important role in change detection, and that information in VLTM related to the current task does not always have a positive effect on performance.

17.
When representing visual features such as color and shape in visual working memory (VWM), participants also represent the locations of those features as a spatial configuration of the display. In everyday life, we encounter objects against some background, yet it is unclear whether the configural representation in memory obligatorily encompasses the entire display, including that (often task-irrelevant) background information. In three experiments, participants completed a change detection task on color and shape; the memoranda were presented in front of uniform gray backgrounds, a textured background (Exp. 1), or a background containing location placeholders (Exps. 2 and 3). When whole-display probes were presented, changes to the objects' locations or feature bindings impacted memory performance, implying that the spatial configuration of the probes influenced participants' change decisions. Furthermore, when only a single item was probed, the effect of changing its location or feature bindings was either diminished or completely extinguished, implying that single probes do not necessarily elicit the entire spatial configuration. Critically, when task-irrelevant backgrounds that may have provided a spatial configuration for the single probes were also presented, the effect of location or binding changes was not moderated. These findings suggest that although the spatial configuration of a display guides VWM-based recognition, this information does not always influence the decision process during change detection.

18.
Path planning under spatial uncertainty
In this article, we present experiments studying path planning under spatial uncertainty. In the main experiment, the participants' task was to navigate the shortest possible path to find an object hidden in one of four places and to bring it to the final destination. The probability of finding the object (the probability matrix) was different for each of the four places and varied between conditions. Given such uncertainties about the object's location, planning a single path is not sufficient. Participants had to generate multiple consecutive plans (metaplans), for example: If the object is found in A, proceed to the destination; if the object is not found, proceed to B; and so on. The optimal solution depends on the specific probability matrix. In each condition, participants learned a different probability matrix and were then asked to report the optimal metaplan. The results demonstrate effective integration of the probabilistic information about the object's location during planning. We present a hierarchical planning scheme that could account for participants' behavior, as well as for systematic errors and differences between conditions.
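To make the optimization underlying a metaplan concrete, here is a minimal Python sketch that enumerates all visit orders over four candidate places and picks the order with the lowest expected path length. The layout, distances, and probability matrix below are invented for illustration and are not the materials used in the experiments; the article's hierarchical planning scheme is not implemented here, only the brute-force optimum it can be compared against.

```python
# Minimal sketch: choosing the metaplan (visit order) that minimizes the
# expected walking distance when an object is hidden in one of four places
# with known probabilities. All coordinates and probabilities are
# illustrative assumptions, not the experimental values.
from itertools import permutations
from math import dist

start = (0.0, 0.0)          # starting position
goal = (10.0, 0.0)          # final destination
places = {                  # hypothetical search locations
    "A": (2.0, 3.0),
    "B": (4.0, -2.0),
    "C": (7.0, 3.0),
    "D": (9.0, -1.0),
}
p_hidden = {"A": 0.5, "B": 0.3, "C": 0.15, "D": 0.05}  # assumed probability matrix


def expected_cost(order):
    """Expected path length when places are searched in `order`; once the
    object is found, the planner proceeds directly to the goal."""
    cost = 0.0
    walked = 0.0
    pos = start
    for place in order:
        walked += dist(pos, places[place])
        pos = places[place]
        # With probability p_hidden[place] the search ends here and the
        # remaining path is the direct leg to the goal.
        cost += p_hidden[place] * (walked + dist(pos, goal))
    return cost


best = min(permutations(places), key=expected_cost)
print("optimal metaplan:", " -> ".join(best),
      f"(expected cost = {expected_cost(best):.2f})")
```

Changing `p_hidden` shows how the optimal visit order shifts with the probability matrix, which mirrors the between-condition manipulation described in the abstract.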

19.
In many dual-task experiments, the priority observers give to each task is experimentally varied. Most experiments using this methodology have studied the effect of dividing attention between spatially distinct objects. We examined performance when attention had to be divided between stimulus attributes other than spatial location. In the first experiment, observers identified the color and the shape of a single letter. Accuracy was the same for single- and dual-task conditions, and a trial-by-trial analysis revealed a strong positive correlation in the correct identification of the color and the shape. In the second experiment, color and shape judgments were separated in space, with opposite results: Dual-task performance was worse than single-task performance, and the trial-by-trial analysis indicated a strong negative correlation between tasks. The results indicated that often only one dimension was processed within a trial. The results support object-based and space-based models of attention.

20.
Navon D, Kasten R. Acta Psychologica, 2008, 127(2): 459-475
Subjects were instructed to detect targets following moderately valid location cues. At some point in the course of the experiment, without having been informed about it, they started being presented with a secondary color cue on all invalidly cued trials. In Experiment 1, most subjects quickly learned to use the secondary cue, so that the latency cost was eliminated or even reversed. The effect failed to manifest only when the secondary cue appeared outside the object serving as the imperative cue. Experiment 2 showed that performance with a secondary cue differed significantly from performance in two control conditions in which colors were either uncorrelated with validity or not presented at all; on the other hand, it resembled the performance of subjects informed beforehand about the secondary cue. Awareness of the contingency, as well as of its effect on behavior, was probed by a post-test questionnaire. An effect of learning without awareness was not observed in Experiment 1, but was found in Experiment 3, where awareness was probed sooner after the emergence of incidental learning. Conceivably, subjects first learn to use the contingencies implicitly, and only later do they become aware of the outcome of that learning. Apparently, the attentional system can incidentally learn contingencies detected while engaged in another task and use them for orienting, despite a partial conflict with following the instructed endogenous cues.
