Similar Articles
 20 similar articles found (search time: 15 ms)
1.
The present study investigated whether memory for a room-sized spatial layout learned through auditory localization of sounds exhibits orientation dependence similar to that observed for spatial memory acquired from stationary viewing of the environment. Participants learned spatial layouts by viewing objects or localizing sounds and then performed judgments of relative direction among remembered locations. The results showed that direction judgments following auditory learning were performed most accurately at a particular orientation in the same way as were those following visual learning, indicating that auditorily encoded spatial memory is orientation dependent. In combination with previous findings that spatial memories derived from haptic and proprioceptive experiences are also orientation dependent, the present finding suggests that orientation dependence is a general functional property of human spatial memory independent of learning modality.
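The judgments-of-relative-direction (JRD) task used here ("imagine standing at A, facing B; point to C") reduces, for scoring purposes, to a signed angle between two vectors in the layout plane. Below is a minimal sketch of that computation; the object names and coordinates are hypothetical, not taken from the study:

```python
import math

def jrd_angle(standing, facing, target):
    """Signed pointing angle (degrees) for "standing at A, facing B, point to C".

    Positive values mean the target lies counterclockwise (to the left) of
    the imagined facing direction; negative values mean to the right.
    """
    ax, ay = standing
    heading = math.atan2(facing[1] - ay, facing[0] - ax)   # imagined facing direction
    bearing = math.atan2(target[1] - ay, target[0] - ax)   # direction to the target
    diff = math.degrees(bearing - heading)
    return (diff + 180.0) % 360.0 - 180.0                  # wrap into [-180, 180)

# Hypothetical object coordinates in a room-sized layout (metres).
objects = {"lamp": (0.0, 0.0), "door": (0.0, 2.0), "chair": (1.5, 1.0)}
print(jrd_angle(objects["lamp"], objects["door"], objects["chair"]))  # -56.3: chair is to the right
```

Angular error in such studies is then the wrapped difference between a participant's pointing response and this correct angle.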

2.
Three experiments investigated whether spatial information acquired from vision and language is maintained in distinct spatial representations on the basis of the input modality. Participants studied a visual and a verbal layout of objects at different times from either the same (Experiments 1 and 2) or different learning perspectives (Experiment 3) and then carried out a series of pointing judgments involving objects from the same or different layouts. Results from Experiments 1 and 2 indicated that participants pointed equally fast on within- and between-layout trials; coupled with verbal reports from participants, this result suggests that they integrated all locations in a single spatial representation during encoding. However, when learning took place from different perspectives in Experiment 3, participants were faster to respond to within- than between-layout trials and indicated that they kept separate representations during learning. Results are compared to those from similar studies that involved layouts learned from perception only.

3.
Humans conduct visual search faster when the same display is presented for a second time, showing implicit learning of repeated displays. This study examines whether learning of a spatial layout transfers to other layouts that are occupied by items of new shapes or colors. The authors show that spatial context learning is sometimes contingent on item identity. For example, when the training session included some trials with black items and other trials with white items, learning of the spatial layout became specific to the trained color: no transfer was seen when items were in a new color during testing. However, when the training session included only trials in black (or white), learning transferred to displays with a new color. Similar results held when items changed shapes after training. The authors conclude that implicit visual learning is sensitive to trial context and that spatial context learning can be identity contingent.
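The contextual cueing paradigm referred to in this and several later abstracts (Chun & Jiang, 1998) repeats a small set of distractor layouts across blocks, each always paired with the same target location, intermixed with freshly generated layouts; faster search on repeated layouts is the cueing effect. A minimal sketch of such a trial-list generator follows, with all display parameters (grid size, set size, trial counts) chosen illustratively rather than taken from any of these studies:

```python
import random

GRID = [(x, y) for x in range(8) for y in range(6)]  # invisible 8 x 6 placement grid

def make_layout(n_items=12):
    """One search display: a target location plus distractor locations."""
    cells = random.sample(GRID, n_items)
    return {"target": cells[0], "distractors": cells[1:]}

# "Old" layouts are generated once and reused in every block with the same
# target-distractor pairing; this consistent pairing is what drives the effect.
old_layouts = [make_layout() for _ in range(12)]

def make_block():
    """One block: the 12 old displays plus 12 newly generated ones, shuffled."""
    trials = [("old", layout) for layout in old_layouts]
    trials += [("new", make_layout()) for _ in range(12)]
    random.shuffle(trials)
    return trials

# Abstract 6 below notes the RT benefit asymptotes within roughly 30 repetitions.
blocks = [make_block() for _ in range(30)]
```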

4.
Under incidental learning conditions, a spatial layout can be acquired implicitly and facilitate visual search (the contextual cueing effect; Chun & Jiang, 1998). We investigated two aspects of this effect: whether spatial layouts of a 3D display are encoded automatically or require selective processing (Experiment 1), and whether the learned layouts are limited to 2D configurations or can encompass three dimensions (Experiment 2). In Experiment 1, participants searched for a target presented only in a specific depth plane. A contextual cueing effect was obtained only when the location of the items in the attended plane was invariant and consistently paired with a target location. In contrast, repeating and pairing the layout of the ignored items with the target location did not produce a contextual cueing effect. In Experiment 2, we found that reaction times for the repeated condition increased significantly when the disparity of the repeated distractors was reversed in the last block of trials. These results indicate that contextual cueing in 3D displays occurs when the layout is relevant and selectively attended, and that 3D layouts can be preserved as an implicit visual spatial context.

5.
The repetition of a spatial layout implicitly facilitates visual search (the contextual cueing effect; Chun & Jiang, 1998). Although a substantial number of studies have explored the mechanism underlying the contextual cueing effect, the manner in which contextual information guides spatial attention to a target location during a visual search remains unclear. We investigated the nature of attentional modulation by contextual cueing, using a hybrid paradigm of a visual search task and a probe dot detection task. In the case of a repeated spatial layout, detection of a probe dot was facilitated at a search target location and was inhibited at distractor locations relative to nonrepeated spatial layouts. Furthermore, these facilitatory and inhibitory effects possessed different learning properties across epochs (Experiment 1) and different time courses within a trial (Experiment 2). These results suggest that contextual cueing modulates attentional processing via both facilitation to the location of "to-be-attended" stimuli and inhibition to the locations of "to-be-ignored" stimuli.

6.
It is well known that observers can implicitly learn the spatial context of complex visual searches, such that future searches through repeated contexts are completed faster than those through novel contexts, even though observers remain at chance at discriminating repeated from new contexts. This contextual-cueing effect arises quickly (within less than five exposures) and asymptotes within 30 exposures to repeated contexts. In spite of being a robust effect (its magnitude is over 100 ms at the asymptotic level), the effect is implicit: Participants are usually at chance at discriminating old from new contexts at the end of an experiment, in spite of having seen each repeated context more than 30 times throughout a 50-min experiment. Here, we demonstrate that the speed at which the contextual-cueing effect arises can be modulated by external rewards associated with the search contexts (not with the performance itself). Following each visual search trial (and irrespective of a participant’s search speed on the trial), we provided a reward, a penalty, or no feedback to the participant. Crucially, the type of feedback obtained was associated with the specific contexts, such that some repeated contexts were always associated with reward, and others were always associated with penalties. Implicit learning occurred fastest for contexts associated with positive feedback, though penalizing contexts also showed a learning benefit. Consistent feedback also produced faster learning than did variable feedback, though unexpected penalties produced the largest immediate effects on search performance.

7.
Humans conduct visual search more efficiently when the same display is presented for a second time, showing learning of repeated spatial contexts. In this study, we investigate spatial context learning in two tasks: visual search and change detection. In both tasks, we ask whether subjects learn to associate the target with the entire spatial layout of a repeated display (configural learning) or with individual distractor locations (nonconfigural learning). We show that nonconfigural learning results from visual search tasks, but not from change detection tasks. Furthermore, a spatial layout acquired in visual search tasks does not enhance change detection on the same display, whereas a spatial layout acquired in change detection tasks moderately enhances visual search. We suggest that although spatial context learning occurs in multiple tasks, the content of learning is, in part, task specific.

8.
It has been proposed that spatial reference frames with which object locations are specified in memory are intrinsic to a to-be-remembered spatial layout (intrinsic reference theory). Although this theory has been supported by accumulating evidence, that evidence has been collected only from paradigms in which the entire spatial layout was simultaneously visible to observers. The present study was designed to examine the generality of the theory by investigating whether the geometric structure of a spatial layout (bilateral symmetry) influences selection of spatial reference frames when object locations are sequentially learned through haptic exploration. In two experiments, participants learned the spatial layout solely by touch and performed judgments of relative direction among objects using their spatial memories. Results indicated that the geometric structure can provide a spatial cue for establishing reference frames as long as it is accentuated by explicit instructions (Experiment 1) or alignment with an egocentric orientation (Experiment 2). These results are entirely consistent with those from previous studies in which spatial information was encoded through simultaneous viewing of all object locations, suggesting that the intrinsic reference theory is not specific to a type of spatial memory acquired by the particular learning method but instead generalizes to spatial memories learned through a variety of encoding conditions. In particular, the present findings suggest that spatial memories that follow the intrinsic reference theory function equivalently regardless of the modality in which spatial information is encoded.

9.
We conducted a haptic search experiment to investigate the influence of the Gestalt principles of proximity, similarity, and good continuation. We expected faster search when the distractors could be grouped. We chose edges at different orientations as stimuli because they are processed similarly in the haptic and visual modality. We therefore expected the principles of similarity and good continuation to be operational in haptics as they are in vision. In contrast, because of differences in spatial processing between vision and haptics, we expected differences for the principle of proximity. In haptics, the Gestalt principle of proximity could operate at two distinct levels (somatotopic proximity or spatial proximity), and we assessed both possibilities in our experiments. The results show that the principles of similarity and good continuation indeed operate in this haptic search task. Neither of our proximity manipulations yielded effects, which may suggest that grouping by proximity must take place before an invariant representation of the object has formed.

10.
陈晓宇, 杜媛媛, 刘强 (Chen Xiaoyu, Du Yuanyuan, & Liu Qiang). 心理学报 (Acta Psychologica Sinica), 2022, 54(12), 1481-1490
Contextual cueing learning lacks adaptability, and this lack manifests in two ways: first, it is difficult to bind a new target location to an already learned scene representation (re-learning), that is, updating of the scene representation is blocked; second, after acquiring one set of scene representations, it is difficult to learn another, entirely new set of scenes (new-learning). Research suggests that the ability to bind a new target location to an old scene representation may be related to the size of the attentional window, whereas learning entirely new scenes requires resetting the learning function. Positive emotion can effectively broaden the scope of attention and reduce fixation on established cognitive patterns, so positive mood induction might improve the adaptability of contextual cueing learning. In this study, emotional pictures of neutral and positive valence were used to induce the corresponding moods, and contextual cueing learning was examined both when a new target location was bound to old scenes and when entirely new scenes were learned, to test whether positive emotion can improve adaptability in contextual cueing learning. The experiments found that positive emotion could not facilitate contextual cueing learning when a new target location was bound to old scenes (re-learning), but it could facilitate learning of entirely new scenes (new-learning). These results indicate that positive emotion can improve participants' scene-learning ability and thereby promote learning of entirely new scenes, but it cannot reduce the automatic retrieval of old representations triggered by representational similarity, and thus cannot improve the updating of old representations.

11.
A number of new psycholinguistic variables have been proposed in recent years within the embodied cognition framework: modality experience rating (i.e., the relationship between words and images in a particular perceptual modality: visual, auditory, haptic, etc.), manipulability (the necessity for an object to interact with human hands in order to perform its function), and vertical spatial localization. However, it is not clear how these new variables are related to each other and to such traditional variables as imageability, age of acquisition (AoA), and word frequency. In this article, normative data on the modality (visual, auditory, haptic, olfactory, and gustatory) ratings, vertical spatial localization of the object, manipulability, imageability, age of acquisition, and subjective frequency for 506 Russian nouns are presented. The strongest correlations were observed between the olfactory and gustatory modalities (.81), visual modality and imageability (.78), and haptic modality and manipulability (.70). Other modalities also correlate significantly with imageability: olfactory (.35), gustatory (.24), and haptic (.67). Factor analysis divided the variables into four groups: visual and haptic modality ratings combined with imageability, manipulability, and AoA (the first factor); word length, frequency, and AoA (the second factor); olfactory and gustatory modalities (the third factor); and spatial localization alone (the fourth factor). Correlation analysis revealed that the present imageability and AoA norms are consistent with previous ones. The complete database can be downloaded from the supplementary material.
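The correlational and factor-analytic pipeline described above is straightforward to reproduce on a norming dataset of this kind. Here is a minimal sketch using pandas and scikit-learn's FactorAnalysis as a stand-in for the article's procedure; the file name and column names are hypothetical, not taken from the published database:

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical file and column names; the real database is in the supplement.
cols = ["visual", "auditory", "haptic", "olfactory", "gustatory",
        "vertical_position", "manipulability", "imageability",
        "aoa", "frequency", "word_length"]
norms = pd.read_csv("russian_noun_norms.csv")[cols]
norms = (norms - norms.mean()) / norms.std()  # put ratings on a common scale

# Pairwise correlations, e.g. the olfactory-gustatory pair (~.81 above).
print(norms.corr().round(2))

# A four-factor solution, matching the four groupings reported in the abstract.
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
fa.fit(norms)
loadings = pd.DataFrame(fa.components_.T, index=cols,
                        columns=[f"F{i + 1}" for i in range(4)])
print(loadings.round(2))
```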

12.
The present study examined the role of vision and haptics in memory for stimulus objects that vary along the dimension of curvature. Experiment 1 measured haptic-haptic (T-T) and haptic-visual (T-V) discrimination of curvature in a short-term memory paradigm, using 30-second retention intervals containing five different interpolated tasks. Results showed poorest performance when the interpolated tasks required spatial processing or movement, thereby suggesting that haptic information about shape is encoded in a spatial-motor representation. Experiment 2 compared visual-visual (V-V) and visual-haptic (V-T) short-term memory, again using 30-second delay intervals. The results of the ANOVA failed to show a significant effect of intervening activity; intramodal visual performance and cross-modal performance were similar. A global analysis comparing the four modality conditions (intermodal V-T and T-V; intramodal V-V and T-T), combining the data of Experiments 1 and 2, showed a reliable interaction between intervening activity and experiment (modality). Although there appears to be a general tendency for spatial and movement activities to exert the most deleterious effects overall, the patterns are not identical when the initial stimulus is encoded haptically (Experiment 1) and visually (Experiment 2).

13.
Perceptual learning was used to study potential transfer effects in a duration discrimination task. Subjects were trained to discriminate between two empty temporal intervals marked by auditory beeps, using a two-alternative forced-choice paradigm. The major goal was to examine whether perceptual learning would generalize to empty intervals that have the same duration but are marked by visual flashes. The experiment also included longer intervals marked by auditory beeps and filled auditory intervals of the same duration as the trained interval, in order to examine whether perceptual learning would generalize to these conditions within the same sensory modality. In contrast to previous findings showing a transfer from the haptic to the auditory modality, the present results do not indicate a transfer from the auditory to the visual modality, but they do show transfer within the auditory modality.

14.
A number of experiments have demonstrated that the learning of braille is affected by a variety of factors. The present experiment was carried out to determine the relative importance of these variables for braille learning. The variables were stimulus set discriminability (high, low), study modality (visual, haptic), test modality (visual, haptic), study size (large vs standard braille cell), test size (large vs standard braille cell), study rate (5 or 10 seconds per item), and test rate (5 or 10 seconds per item). The results showed that study modality, stimulus set discriminability, and test modality were the variables mainly responsible for differences in performance during acquisition. Some practical and theoretical implications of these results are considered.

15.
When novel scenes are encoded, the representations of scene layout are generally viewpoint specific. Past studies of scene recognition have typically required subjects to explicitly study and encode novel scenes, but in everyday visual experience, it is possible that much scene learning occurs incidentally. Here, we examine whether implicitly encoded scene layouts are also viewpoint dependent. We used the contextual cuing paradigm, in which search for a target is facilitated by implicitly learned associations between target locations and novel spatial contexts (Chun & Jiang, 1998). This task was extended to naturalistic search arrays with apparent depth. To test viewpoint dependence, the viewpoint of the scenes was varied from training to testing. Contextual cuing and, hence, scene context learning decreased as the angular rotation from training viewpoint increased. This finding suggests that implicitly acquired representations of scene layout are viewpoint dependent.

16.
Visual search (e.g., finding a specific object in an array of other objects) is performed most effectively when people are able to ignore distracting nontargets. In repeated search, however, incidental learning of object identities may facilitate performance. In three experiments, with over 1,100 participants, we examined the extent to which search could be facilitated by object memory and by memory for spatial layouts. Participants searched for new targets (real-world, nameable objects) embedded among repeated distractors. To make the task more challenging, some participants performed search for multiple targets, increasing demands on visual working memory (WM). Following search, memory for search distractors was assessed using a surprise two-alternative forced choice recognition memory test with semantically matched foils. Search performance was facilitated by distractor object learning and by spatial memory; it was most robust when object identity was consistently tied to spatial locations and weakest (or absent) when object identities were inconsistent across trials. Incidental memory for distractors was better among participants who searched under high WM load, relative to low WM load. These results were observed when visual search included exhaustive-search trials (Experiment 1) or when all trials were self-terminating (Experiment 2). In Experiment 3, stimulus exposure was equated across WM load groups by presenting objects in a single-object stream; recognition accuracy was similar to that in Experiments 1 and 2. Together, the results suggest that people incidentally generate memory for nontarget objects encountered during search and that such memory can facilitate search performance.

17.
Acta Psychologica, 2013, 143(1), 20-34
Both vision and touch yield comparable results in terms of roughness estimation of familiar textures, as was shown in earlier studies. To our knowledge, no research has been conducted on the effect of sensory familiarity with the stimulus material on roughness estimation of unfamiliar textures. The influence of sensory modality and familiarity on roughness perception of dot pattern textures was investigated in a series of five experiments. Participants estimated the roughness of textures varying in mean center-to-center dot spacing in experimental conditions providing visual, haptic, and visual-haptic combined information. The findings indicate that roughness perception of unfamiliar dot pattern textures is well described by a bi-exponential function of inter-dot spacing, regardless of the sensory modality used. However, sensory modality appears to affect the maximum of the psychophysical roughness function, with visually perceived roughness peaking at a smaller inter-dot spacing than haptic roughness. We propose that this might be due to the better spatial acuity of the visual modality. Individuals appeared to use different visual roughness estimation strategies depending on their first sensory experience (visual vs. haptic) with the stimulus material, primarily in an experimental context which required the combination of visual and haptic information in a single bimodal roughness estimate. Furthermore, the similarity of findings in experimental settings using real and virtual visual textures indicates the suitability of the experimental setup for neuroimaging studies, creating a more direct link between behavioral and neuroimaging results.
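The abstract does not give the exact parameterization of the bi-exponential roughness function, but a difference of two exponentials is one standard form that rises to the reported maximum and then falls as inter-dot spacing grows. A sketch of fitting such a function with SciPy, under that assumption; the data points and starting values are purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def bi_exponential(spacing, a1, tau1, a2, tau2):
    """Difference of exponentials: rises with spacing, peaks, then falls."""
    return a1 * np.exp(-spacing / tau1) - a2 * np.exp(-spacing / tau2)

# Illustrative data only: mean roughness estimates at several dot spacings (mm).
spacing = np.array([0.5, 1.0, 2.0, 3.0, 4.5, 6.0, 8.0])
roughness = np.array([2.1, 4.0, 6.3, 5.8, 4.4, 3.1, 2.0])

params, _ = curve_fit(bi_exponential, spacing, roughness,
                      p0=(10.0, 4.0, 10.0, 1.0), maxfev=10000)
a1, tau1, a2, tau2 = params
# The peak location follows from setting the derivative to zero.
peak = (tau1 * tau2) / (tau1 - tau2) * np.log((a2 * tau1) / (a1 * tau2))
print(f"fitted roughness maximum at spacing ~ {peak:.2f} mm")
```

The modality comparison reported above would then amount to fitting this function separately to visual and haptic estimates and comparing the fitted peak locations.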

18.
Past research (e.g., J. M. Loomis, Y. Lippa, R. L. Klatzky, & R. G. Golledge, 2002) has indicated that spatial representations derived from spatial language can function equivalently to those derived from perception. The authors tested functional equivalence for reporting spatial relations that were not explicitly stated during learning. Participants learned a spatial layout by visual perception or spatial language and then made allocentric direction and distance judgments. Experiments 1 and 2 indicated allocentric relations could be accurately reported in all modalities, but visually perceived layouts, tested with or without vision, produced faster and less variable directional responses than language. In Experiment 3, when participants were forced to create a spatial image during learning (by spatially updating during a backward translation), functional equivalence of spatial language and visual perception was demonstrated by patterns of latency, systematic error, and variability.

19.
We investigated whether implicit learning in a visual search task would influence preferences for visual stimuli. Participants performed a contextual cueing task in which they searched for visual targets, the locations of which were either predicted or not predicted by the positioning of distractors. The speed with which participants located the targets increased across trials more rapidly for predictive displays than for non-predictive displays, consistent with contextual cueing. Participants were subsequently asked to rate the "goodness" of visual displays. The rating results showed that they preferred predictive displays to both non-predictive and novel displays. The participants did not recognize predictive displays any more frequently than they did non-predictive or novel displays. These results suggest that contextual cueing occurred implicitly and that the implicit learning of visual layouts promotes a preference for visual layouts that are predictive of target location.

20.
肖承丽 (Xiao Chengli). 心理学报 (Acta Psychologica Sinica), 2013, 45(7), 752-761
Participants learned an irregular layout of objects through either simultaneous vision or sequential proprioception. After learning, participants pointed to the objects' locations in random order under three locomotion conditions: facing the learning orientation, turning themselves 240°, and spinning continuously until disoriented. Disorientation significantly degraded the internal consistency of pointing in the simultaneous-vision group, whereas the sequential-proprioception group was unaffected by disorientation. An offline judgment-of-relative-location task showed no difference between the two groups' environment-centered spatial representations. These results demonstrate that participants can also form stable egocentric spatial representations through sequential proprioceptive learning, supporting an extension of spatial snapshot theory and the functional equivalence hypothesis of spatial cognition.

