Similar literature
20 similar documents found (search time: 62 ms)
1.
In two experiments, we investigated whether reference frames acquired through touch could influence memories for locations learned through vision. Participants learned two objects through touch, and haptic egocentric (Experiment 1) and environmental (Experiment 2) cues encouraged selection of a specific reference frame. Participants later learned eight new objects through vision. Haptic cues were manipulated, whereas visual learning was held constant in order to observe any potential influence of the haptically experienced reference frame on memories for visually learned locations. When the haptically experienced reference frame was defined primarily by egocentric cues, cue manipulation had no effect on memories for objects learned through vision. Instead, visually learned locations were remembered using a reference frame selected from the visual study perspective. When the haptically experienced reference frame was defined by both egocentric and environmental cues, visually learned objects were remembered in the context of the haptically experienced reference frame. These findings support the common reference frame hypothesis, which proposes that locations learned through different sensory modalities are represented within a common reference frame.

2.
Three experiments investigated cross-modal links between touch, audition, and vision in the control of covert exogenous orienting. In the first two experiments, participants made speeded discrimination responses (continuous vs. pulsed) for tactile targets presented randomly to the index finger of either hand. Targets were preceded at a variable stimulus onset asynchrony (150, 200, or 300 msec) by a spatially uninformative cue that was either auditory (Experiment 1) or visual (Experiment 2) on the same or opposite side as the tactile target. Tactile discriminations were more rapid and accurate when cue and target occurred on the same side, revealing cross-modal covert orienting. In Experiment 3, spatially uninformative tactile cues were presented prior to randomly intermingled auditory and visual targets requiring an elevation discrimination response (up vs. down). Responses were significantly faster for targets in both modalities when presented ipsilateral to the tactile cue. These findings demonstrate that the peripheral presentation of spatially uninformative auditory and visual cues produces cross-modal orienting that affects touch, and that tactile cues can also produce cross-modal covert orienting that affects audition and vision.
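To make concrete how such cross-modal cuing effects are usually quantified, here is a minimal sketch (not code from the cited study; the trial records and field names are invented for illustration): the cuing effect is the mean correct-trial RT on opposite-side trials minus the mean correct-trial RT on same-side trials.

```python
# Minimal illustrative sketch: quantifying a cross-modal spatial cuing effect.
# The trial records below are invented; fields only loosely mirror the design.
trials = [
    {"cue_side": "left",  "target_side": "left",  "rt_ms": 412, "correct": True},
    {"cue_side": "left",  "target_side": "right", "rt_ms": 447, "correct": True},
    {"cue_side": "right", "target_side": "right", "rt_ms": 405, "correct": True},
    {"cue_side": "right", "target_side": "left",  "rt_ms": 451, "correct": False},
]

def mean_rt(trials, cued):
    """Mean RT over correct trials, for cued (same-side) or uncued (opposite-side) trials."""
    rts = [t["rt_ms"] for t in trials
           if t["correct"] and (t["cue_side"] == t["target_side"]) == cued]
    return sum(rts) / len(rts)

cuing_effect_ms = mean_rt(trials, cued=False) - mean_rt(trials, cued=True)
print(f"Cuing effect: {cuing_effect_ms:.1f} ms faster on the cued side")
```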

3.
Two experiments evaluated change in the perception of an environmental property (object length) in each of 3 perceptual modalities (vision, audition, and haptics) when perceivers were provided with the opportunity to experience the same environmental property by means of an additional perceptual modality (e.g., haptics followed by vision, vision followed by audition, or audition followed by haptics). Experiment 1 found that (a) posttest improvements in perceptual consistency occurred in all 3 perceptual modalities, regardless of whether practice included experience in an additional perceptual modality and (b) posttest improvements in perceptual accuracy occurred in haptics and audition but only when practice included experience in an additional perceptual modality. Experiment 2 found that learning curves in each perceptual modality could be accommodated by a single function in which auditory perceptual learning occurred over short time scales, haptic perceptual learning occurred over middle time scales, and visual perceptual learning occurred over long time scales. Analysis of trial-to-trial variability revealed patterns of long-term correlations in all perceptual modalities regardless of whether practice included experience in an additional perceptual modality.

4.
It has been shown that spatial information can be acquired from both visual and nonvisual modalities. The present study explored how spatial information from vision and proprioception was represented in memory, investigating orientation dependence of spatial memories acquired through visual and proprioceptive spatial learning. Experiment 1 examined whether visual learning alone and proprioceptive learning alone yielded orientation-dependent spatial memory. Results showed that spatial memories from both types of learning were orientation dependent. Experiment 2 explored how different orientations of the same environment were represented when they were learned visually and proprioceptively. Results showed that both visually and proprioceptively learned orientations were represented in spatial memory, suggesting that participants established two different reference systems based on each type of learning experience and interpreted the environment in terms of these two reference systems. The results provide some initial clues to how different modalities make unique contributions to spatial representations.

5.
Subjects judged the elevation (up vs. down, regardless of laterality) of peripheral auditory or visual targets, following uninformative cues on either side with an intermediate elevation. Judgments were better for targets in either modality when preceded by an uninformative auditory cue on the side of the target. Experiment 2 ruled out nonattentional accounts for these spatial cuing effects. Experiment 3 found that visual cues affected elevation judgments for visual but not auditory targets. Experiment 4 confirmed that the effect on visual targets was attentional. In Experiment 5, visual cues produced spatial cuing when targets were always auditory, but saccades toward the cue may have been responsible. No such visual-to-auditory cuing effects were found in Experiment 6 when saccades were prevented, though they were present when eye movements were not monitored. These results suggest a one-way cross-modal dependence in exogenous covert orienting whereby audition influences vision, but not vice versa. Possible reasons for this asymmetry are discussed in terms of the representation of space within the brain.

6.
It has been proposed that spatial reference frames with which object locations are specified in memory are intrinsic to a to-be-remembered spatial layout (intrinsic reference theory). Although this theory has been supported by accumulating evidence, that evidence has been collected only from paradigms in which the entire spatial layout was simultaneously visible to observers. The present study was designed to examine the generality of the theory by investigating whether the geometric structure of a spatial layout (bilateral symmetry) influences selection of spatial reference frames when object locations are sequentially learned through haptic exploration. In two experiments, participants learned the spatial layout solely by touch and performed judgments of relative direction among objects using their spatial memories. Results indicated that the geometric structure can provide a spatial cue for establishing reference frames as long as it is accentuated by explicit instructions (Experiment 1) or alignment with an egocentric orientation (Experiment 2). These results are entirely consistent with those from previous studies in which spatial information was encoded through simultaneous viewing of all object locations, suggesting that the intrinsic reference theory is not specific to a type of spatial memory acquired by the particular learning method but instead generalizes to spatial memories learned through a variety of encoding conditions. In particular, the present findings suggest that spatial memories that follow the intrinsic reference theory function equivalently regardless of the modality in which spatial information is encoded.
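As a concrete illustration of the judgments-of-relative-direction task used here, the sketch below (not code from the cited study; the coordinates and object names are invented) derives the correct response to a trial such as "imagine standing at A, facing B; point to C" from 2-D object coordinates.

```python
import math

# Hypothetical 2-D object coordinates in a learned layout (arbitrary units).
layout = {"A": (0.0, 0.0), "B": (0.0, 1.0), "C": (1.0, 1.0)}

def jrd_angle(layout, standing, facing, target):
    """Correct pointing response for 'stand at `standing`, face `facing`, point to `target`'.
    Returned in degrees clockwise from straight ahead, wrapped into [-180, 180)."""
    sx, sy = layout[standing]
    fx, fy = layout[facing]
    tx, ty = layout[target]
    heading = math.atan2(fx - sx, fy - sy)   # direction the observer imagines facing
    bearing = math.atan2(tx - sx, ty - sy)   # direction of the target from the standing point
    return (math.degrees(bearing - heading) + 180.0) % 360.0 - 180.0

print(jrd_angle(layout, "A", "B", "C"))      # 45.0 -> the target lies 45 degrees to the right
```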

7.
Two experiments were conducted to examine the effects of redundant and relevant visual cues on spatial pattern learning. Rats searched for hidden food items on the tops of poles that formed a square (Experiment 1) or a checkerboard (Experiment 2) pattern. The experimental groups were trained with visual cues that specified the locations of the baited poles. All groups were tested without visual cues so that any overshadowing or facilitation of spatial pattern learning by visual cues could be detected. Spatial choices were controlled by the spatial pattern and by the visual cues in both experiments. However, there was no evidence of overshadowing or facilitation of spatial pattern learning by visual cues in either experiment. The results are consistent with the idea that the representation of the spatial pattern that guides choices is not controlled by the same learning processes as those that produce associations between visual cues and food locations.

8.
Three experiments investigated whether spatial information acquired from vision and language is maintained in distinct spatial representations on the basis of the input modality. Participants studied a visual and a verbal layout of objects at different times from either the same (Experiments 1 and 2) or different learning perspectives (Experiment 3) and then carried out a series of pointing judgments involving objects from the same or different layouts. Results from Experiments 1 and 2 indicated that participants pointed equally fast on within- and between-layout trials; coupled with verbal reports from participants, this result suggests that they integrated all locations in a single spatial representation during encoding. However, when learning took place from different perspectives in Experiment 3, participants were faster to respond to within- than between-layout trials and indicated that they kept separate representations during learning. Results are compared to those from similar studies that involved layouts learned from perception only.

9.
Past research (e.g., J. M. Loomis, Y. Lippa, R. L. Klatzky, & R. G. Golledge, 2002) has indicated that spatial representations derived from spatial language can function equivalently to those derived from perception. The authors tested functional equivalence for reporting spatial relations that were not explicitly stated during learning. Participants learned a spatial layout by visual perception or spatial language and then made allocentric direction and distance judgments. Experiments 1 and 2 indicated allocentric relations could be accurately reported in all modalities, but visually perceived layouts, tested with or without vision, produced faster and less variable directional responses than language. In Experiment 3, when participants were forced to create a spatial image during learning (by spatially updating during a backward translation), functional equivalence of spatial language and visual perception was demonstrated by patterns of latency, systematic error, and variability.

10.
The current study investigated the reference frame used in spatial updating when idiothetic cues to self-motion were minimized (desktop virtual reality). In Experiment 1, participants learned a layout of eight objects from a single perspective (learning heading) in a virtual environment. After learning, they were placed in the same virtual environment and used a keyboard to navigate to two of the learned objects (visible) before pointing to a third object (invisible). We manipulated participants’ starting orientation (initial heading) and final orientation (final heading) before pointing, to examine the reference frame used in this task. We found that participants used the initial heading and the learning heading to establish reference directions. In Experiment 2, the procedure was almost the same as in Experiment 1 except that participants pointed to objects relative to an imagined heading that differed from their final heading in the virtual environment. In this case, pointing performance was only affected by alignment with the learning heading. We concluded that the initial heading played an important role in spatial updating without idiothetic cues, but the representation established at this heading was transient and affected by the interruption of spatial updating; the learning heading, on the other hand, corresponded to an enduring representation which was used consistently.

11.
Our environment is richly structured, with objects producing correlated information within and across sensory modalities. A prominent challenge faced by our perceptual system is to learn such regularities. Here, we examined statistical learning and addressed learners’ ability to track transitional probabilities between elements in the auditory and visual modalities. Specifically, we investigated whether cross-modal information affects statistical learning within a single modality. Participants were familiarized with a statistically structured modality (e.g., either audition or vision) accompanied by different types of cues in a second modality (e.g., vision or audition). The results revealed that statistical learning within either modality is affected by cross-modal information, with learning being enhanced or reduced according to the type of cue provided in the second modality.
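For concreteness, the transitional probabilities tracked in such studies are typically defined as the probability of one element following another, estimated from adjacent pairs in the familiarization stream. The sketch below is a minimal illustration under that standard definition (not code from the cited work; the stream is invented).

```python
from collections import Counter

def transitional_probabilities(sequence):
    """Estimate P(next = y | current = x) from every adjacent pair in the sequence."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    predecessor_counts = Counter(sequence[:-1])
    return {(x, y): n / predecessor_counts[x] for (x, y), n in pair_counts.items()}

# Invented familiarization stream built from two "triplets", ABC and DEF.
stream = list("ABCDEFABCABCDEF")
for pair, p in sorted(transitional_probabilities(stream).items()):
    # Within-triplet pairs such as ('A', 'B') come out at 1.0, whereas the
    # triplet-boundary pair ('C', 'D') comes out lower (about 0.67 here).
    print(pair, round(p, 2))
```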

12.
Li X, Mou W, McNamara TP. Cognition, 2012, 124(2): 143-155.
Four experiments tested whether there are enduring spatial representations of objects' locations in memory. Previous studies have shown that under certain conditions the internal consistency of pointing to objects using memory is disrupted by disorientation. This disorientation effect has been attributed to an absence of or to imprecise enduring spatial representations of objects' locations. Experiment 1 replicated the standard disorientation effect. Participants learned locations of objects in an irregular layout and then pointed to objects after physically turning to face an object and after disorientation. The expected disorientation was observed. In Experiment 2, after disorientation, participants were asked to imagine they were facing the original learning direction and then physically turned to adopt the test orientation. In Experiment 3, after disorientation, participants turned to adopt the test orientation and then were informed of the original viewing direction by the experimenter. A disorientation effect was not observed in Experiment 2 or 3. In Experiment 4, after disorientation, participants turned to face the test orientation but were not told the original learning orientation. As in Experiment 1, a disorientation effect was observed. These results suggest that there are enduring spatial representations of objects' locations specified in terms of a spatial reference direction parallel to the learning view, and that the disorientation effect is caused by uncertainty in recovering the spatial reference direction relative to the testing orientation following disorientation.

13.
The relative role of associative processes and the use of explicit cues about object location in search behavior in dogs (Canis familiaris) was assessed by using a spatial binary discrimination reversal paradigm in which reversal conditions featured: (1) a previously rewarded location and a novel location, (2) a previously nonrewarded location and a novel location, or (3) a previously rewarded location and a previously nonrewarded location. Rule-mediated learning predicts similar performance in these different reversal conditions, whereas associative learning predicts the worst performance in Condition 3. Evidence for an associative control of search emerged when no explicit cues about food location were provided (Experiment 1) but also when dogs witnessed the hiding of food in the reversal trials (Experiment 2) and when they did so in both the prereversal and the reversal trials (Experiment 3). Nevertheless, dogs performed better in the prereversal phase of Experiment 3, indicating that their search could be informed by knowledge of the food location. Experiment 4 confirmed the results of Experiments 1 and 2 under a different arrangement of search locations. We conclude that knowledge about object location guides search behavior in dogs but cannot override associative processes.

14.
We report three experiments designed to investigate the nature of any crossmodal links between audition and touch in sustained endogenous covert spatial attention, using the orthogonal spatial cuing paradigm. Participants discriminated the elevation (up vs. down) of auditory and tactile targets presented to either the left or the right of fixation. In Experiment 1, targets were expected on a particular side in just one modality; the results demonstrated that the participants could spatially shift their attention independently in both audition and touch. Experiment 2 demonstrated that when the participants were informed that targets were more likely to be on one side for both modalities, elevation judgments were faster on that side in both audition and touch. The participants were also able to "split" their auditory and tactile attention, albeit at some cost, when targets in the two modalities were expected on opposite sides. Similar results were also reported in Experiment 3 when participants adopted a crossed-hands posture, thus revealing that crossmodal links in audiotactile attention operate on a representation of space that is updated following posture change. These results are discussed in relation to previous findings regarding crossmodal links in audiovisual and visuotactile covert spatial attentional orienting.

15.
Human participants searched in a real environment or interactive 3-D virtual environment open field for four hidden goal locations arranged in a 2 × 2 square configuration in a 5 × 5 matrix of raised bins. The participants were randomly assigned to one of two groups: cues + pattern or pattern only. The participants experienced a training phase, followed by a testing phase. Visual cues specified the goal locations during training only for the cues + pattern group. Both groups were then tested in the absence of visual cues. The results in both environments indicated that the participants learned the spatial relations among goal locations. However, visual cues during training facilitated learning of the spatial relations among goal locations: In both environments, the participants trained with the visual cues made fewer errors during testing than did those trained only with the pattern. The results suggest that learning based on the spatial relations among locations may not be susceptible to cue competition effects and have implications for standard associative and dual-system accounts of spatial learning.

16.
Four experiments investigated the representation and integration in memory of spatial and nonspatial relations. Subjects learned two-dimensional spatial arrays in which critical pairs of object names were semantically related (Experiment 1), semantically and episodically related (Experiment 2), or just episodically related (Experiments 3a and 3b). Episodic relatedness was established in a paired-associate learning task that preceded array learning. After learning an array, subjects participated in two tasks: item recognition, in which the measure of interest was priming; and distance estimation. Priming in item recognition was sensitive to the Euclidean distance between object names and, for neighbouring locations, to nonspatial relations. Errors in distance estimations varied as a function of distance but were unaffected by nonspatial relations. These and other results indicated that nonspatial relations influenced the probability of encoding spatial relations between locations but did not lead to distorted spatial memories.

17.
Humans routinely use spatial language to control the spatial distribution of attention. In so doing, spatial information may be communicated from one individual to another across opposing frames of reference, which in turn can lead to inconsistent mappings between symbols and directions (or locations). These inconsistencies may have important implications for the symbolic control of attention because they can be translated into differences in cue validity, a manipulation that is known to influence the focus of attention. This differential validity hypothesis was tested in Experiment 1 by comparing spatial word cues that were predicted to have high learned spatial validity (“above/below”) and low learned spatial validity (“left/right”). Consistent with this prediction, when two measures of selective attention were used, the results indicated that attention was less focused in response to “left/right” cues than in response to “above/below” cues, even when the actual validity of each of the cues was equal. In addition, Experiment 2 predicted that spatial words such as “left/right” would have lower spatial validity than would other directional symbols that specify direction along the horizontal axis, such as “←/→” cues. The results were also consistent with this hypothesis. Altogether, the present findings demonstrate important semantic-based constraints on the spatial distribution of attention.
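As an illustration of the cue-validity notion invoked here, the sketch below (not code from the cited study; the trial lists are invented) expresses spatial validity as the proportion of trials on which the cued direction matches the target's actual location. In the experiment itself the programmed validity of the two cue types was equated; the hypothesis concerns validity learned from everyday usage.

```python
def spatial_validity(trials):
    """Proportion of (cue, target location) trials on which the cue indicates the target's location."""
    return sum(cue == target for cue, target in trials) / len(trials)

# Invented (cue word, actual target location) pairs, merely to show the computation.
above_below_trials = [("above", "above"), ("below", "below"),
                      ("above", "above"), ("below", "above")]
left_right_trials = [("left", "right"), ("right", "right"),
                     ("left", "left"), ("right", "left")]

print(spatial_validity(above_below_trials))  # 0.75
print(spatial_validity(left_right_trials))   # 0.5
```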

18.
Previous research has uncovered three primary cues that influence spatial memory organization: egocentric experience, intrinsic structure (object-defined), and extrinsic structure (environment-defined). In the present experiments, we assessed the relative importance of these cues when all three were available during learning. Participants learned layouts from two perspectives in immersive virtual reality. In Experiment 1, axes defined by intrinsic and extrinsic structures were in conflict, and learning occurred from two perspectives, each aligned with either the intrinsic or the extrinsic structure. Spatial memories were organized around a reference direction selected from the first perspective, regardless of its alignment with intrinsic or extrinsic structures. In Experiment 2, axes defined by intrinsic and extrinsic structures were congruent, and spatial memories were organized around reference axes defined by those congruent structures, rather than by the initially experienced view. The findings are discussed in the context of spatial memory theory as it relates to real and virtual environments.

19.
The loss of peripheral vision impairs spatial learning and navigation. However, the mechanisms underlying these impairments remain poorly understood. One advantage of having peripheral vision is that objects in an environment are easily detected and readily foveated via eye movements. The present study examined this potential benefit of peripheral vision by investigating whether competent performance in spatial learning requires effective eye movements. In Experiment 1, participants learned room-sized spatial layouts with or without restriction on direct eye movements to objects. Eye movements were restricted by having participants view the objects through small apertures in front of their eyes. Results showed that impeding effective eye movements made subsequent retrieval of spatial memory slower and less accurate. The small apertures also occluded much of the environmental surroundings, but the importance of this kind of occlusion was ruled out in Experiment 2 by showing that participants exhibited intact learning of the same spatial layouts when luminescent objects were viewed in an otherwise dark room. Together, these findings suggest that one of the roles of peripheral vision in spatial learning is to guide eye movements, highlighting the importance of spatial information derived from eye movements for learning environmental layouts.

20.
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmical patterns derived from the alternation of vowels and consonants in English and Japanese, presented in audition, vision, both audition and vision at the same time, or touch. Experiment 1 confirmed that discrimination is possible on the basis of auditory rhythmic patterns, and extended it to the case of vision, using ‘aperture-close’ mouth movements of a schematic face. In Experiment 2, language discrimination was demonstrated using visual and auditory materials that did not resemble spoken articulation. In a combined analysis including data from Experiments 1 and 2, a beneficial effect was also found when the auditory rhythmic information was available to participants. Despite the fact that discrimination could be achieved using vision alone, auditory performance was nevertheless better. In a final experiment, we demonstrated that the rhythm of speech can also be discriminated successfully by means of vibrotactile patterns delivered to the fingertip. The results of the present study therefore demonstrate that discrimination between languages' syllabic rhythmic patterns is possible on the basis of visual and tactile displays.
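As a rough illustration of the kind of rhythmic material described, the sketch below (not code from the cited study) reduces text to a consonant/vowel alternation pattern that could then be rendered as an auditory, visual, or vibrotactile sequence; using orthographic vowels is a crude stand-in for proper phonetic segmentation.

```python
# Minimal illustrative sketch: map text onto a consonant (C) / vowel (V) rhythm pattern.
VOWELS = set("aeiou")

def cv_pattern(text):
    """Label each letter 'V' (vowel) or 'C' (consonant); spaces and punctuation are dropped."""
    return "".join("V" if ch in VOWELS else "C" for ch in text.lower() if ch.isalpha())

print(cv_pattern("the cat sat"))   # 'CCVCVCCVC'
print(cv_pattern("arigatou"))      # 'VCVCVCVV'
```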
