Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
Previous work documented that sensorimotor adaptation transfers between sensory modalities: When subjects adapt with one arm to a visuomotor distortion while responding to visual targets, they also appear to be adapted when they are subsequently tested with auditory targets. Vice versa, when they adapt to an auditory-motor distortion while pointing to auditory targets, they appear to be adapted when they are subsequently tested with visual targets. Therefore, it was concluded that visuomotor as well as auditory-motor adaptation use the same adaptation mechanism. Furthermore, it has been proposed that sensory information from the trained modality is weighted more heavily than sensory information from an untrained one, because transfer between sensory modalities is incomplete. The present study tested these hypotheses for dual arm adaptation. One arm adapted to an auditory-motor distortion and the other either to an oppositely directed auditory-motor or visuomotor distortion. We found that both arms adapted significantly. However, compared to reference data on single arm adaptation, adaptation in the dominant arm was reduced, indicating interference from the non-dominant to the dominant arm. We further found that arm-specific aftereffects of adaptation, which reflect recalibration of sensorimotor transformation rules, were stronger or equally strong when targets were presented in the previously adapted compared to the non-adapted sensory modality, even when one arm adapted visually and the other auditorily. The findings are discussed with respect to a recently published schematic model of sensorimotor adaptation.

2.
Integrating different senses to reduce sensory uncertainty and increase perceptual precision can have an important compensatory function for individuals with visual impairment and blindness. However, how visual impairment and blindness impact the development of optimal multisensory integration in the remaining senses is currently unknown. Here we first examined how audio-haptic integration develops and changes across the life span in 92 sighted (blindfolded) individuals between 7 and 70 years of age. We used a child-friendly task in which participants had to discriminate different object sizes by touching them and/or listening to them. We assessed whether audio-haptic performance resulted in a reduction of perceptual uncertainty compared to auditory-only and haptic-only performance, as predicted by the maximum-likelihood estimation model. We then compared how this ability develops in 28 children and adults with different levels of visual experience, focussing on low-vision individuals and blind individuals who lost their sight at different ages during development. Our results show that in sighted individuals, adult-like audio-haptic integration develops around 13–15 years of age and remains stable until late adulthood. While early-blind individuals, even at the youngest ages, integrate audio-haptic information in an optimal fashion, late-blind individuals do not. Optimal integration in low-vision individuals follows a similar developmental trajectory to that of sighted individuals. These findings demonstrate that visual experience is not necessary for optimal audio-haptic integration to emerge, but that consistency of sensory information across development is key for the functional outcome of optimal multisensory integration.
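The maximum-likelihood estimation (MLE) model invoked in the abstract above predicts that each cue is weighted by its relative reliability (inverse variance) and that the combined audio-haptic estimate is more precise than either cue alone. A minimal sketch of that prediction follows; the function name and numeric values are illustrative, not taken from the study.

```python
def mle_prediction(sigma_a: float, sigma_h: float) -> tuple[float, float, float]:
    """Predicted cue weights and bimodal noise under the MLE integration model.

    sigma_a, sigma_h: standard deviations (discrimination noise) of the
    auditory-only and haptic-only size estimates.
    """
    w_a = sigma_h**2 / (sigma_a**2 + sigma_h**2)   # weight on the auditory cue
    w_h = 1.0 - w_a                                # weight on the haptic cue
    sigma_ah = (sigma_a**2 * sigma_h**2 / (sigma_a**2 + sigma_h**2)) ** 0.5
    return w_a, w_h, sigma_ah

# A noisier auditory estimate receives the smaller weight, and the predicted
# bimodal noise falls below both unimodal values (optimal integration).
print(mle_prediction(sigma_a=2.0, sigma_h=1.0))  # -> (0.2, 0.8, ~0.89)
```

The usual test of optimality compares measured bimodal discrimination thresholds against this predicted combined noise; matching the prediction indicates integration of the two cues in an optimal fashion.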

3.
In traditional theories of perceptual learning, sensory modalities support one another. A good example comes from research on dynamic touch, the wielding of an unseen object to perceive its properties. Wielding provides the haptic system with mechanical information related to the length of the object. Visual feedback can improve the accuracy of subsequent length judgments; visual perception supports haptic perception. Such cross-modal support is not the only route to perceptual learning. We present a dynamic touch task in which we replaced visual feedback with the instruction to strike the unseen object against an unseen surface following length judgment. This additional mechanical information improved subsequent length judgments. We propose a self-organizing perspective in which a single modality trains itself.

4.
We investigated the impact of food deprivation on oral and manual haptic size perception of food and non-food objects. From relevant theories (need-proportional perception, motivated perception, frustrative nonreward, perceptual defence, and sensory sensitisation) at least four completely different competing predictions can be derived. Testing these predictions, we found across four experiments that participants estimated the length of both non-food and food objects to be larger when hungry than when satiated, which was true only for oral haptic perception, while manual haptic perception was not influenced by hunger state. Subjectively reported hunger correlated positively with estimated object size in oral, but not in manual, haptic perception. The impact of food deprivation on oral perception vanished after oral stimulation, even for hungry individuals. These results favour a sensory sensitisation account maintaining that hunger itself does not alter oral perception but rather the accompanying lack of sensory stimulation of the oral mucosa. Both oral and manual haptic perception tended to underestimate actual object size. Finally, an enhancing effect of domain-target matching was found, i.e., food objects were perceived as larger by oral than by manual haptics, while non-food objects were perceived as larger by manual than by oral haptics.

5.
Planning an action primes feature dimensions that are relevant for that particular action, increasing the impact of these dimensions on perceptual processing. Here, we investigated whether action planning also affects the short-term maintenance of visual information. In a combined memory and movement task, participants were to memorize items defined by size or color while preparing either a grasping or a pointing movement. Whereas size is a relevant feature dimension for grasping, color can be used to localize the goal object and guide a pointing movement. The results showed that memory for items defined by size was better during the preparation of a grasping movement than during the preparation of a pointing movement. Conversely, memory for color tended to be better when a pointing movement rather than a grasping movement was being planned. This pattern was not only observed when the memory task was embedded within the preparation period of the movement, but also when the movement to be performed was only indicated during the retention interval of the memory task. These findings reveal that a weighting of information in visual working memory according to action relevance can even be implemented at the representational level during maintenance, demonstrating that our actions continue to influence visual processing beyond the perceptual stage.

6.
The role of vision in the control of reaching and grasping was investigated by varying the available visual information. Adults (N = 7) reached in conditions that had full visual information, visual information about the target object but not the hand or surrounding environment, and no visual information. Four different object diameters were used. The results indicated that as visual information and object size decreased, subjects used longer movement times, had slower speeds, and more asymmetrical hand-speed profiles. Subjects matched grasp aperture to object diameter, but overcompensated with larger grasp apertures when visual information was reduced. Subjects also qualitatively differed in reach kinematics when challenged with reduced visual information or smaller object size. These results emphasize the importance of vision of the target in reaching and show that subjects do not simply scale a command template with task difficulty.

7.
The authors employed a virtual environment to investigate how humans use haptic and visual feedback in a simple, rhythmic object-manipulation task. The authors hypothesized that feedback would help participants identify the appropriate resonant frequency and perform online control adjustments. The 1st test was whether sensory feedback is needed at all; the 2nd was whether the motor system combines visual and haptic feedback to improve performance. Task performance was quantified in terms of work performed on the virtual inertia, ability to identify the correct rhythm, and variability of movement. Strict feedforward control was found to be ineffective for this task, even when participants had previous knowledge of the rhythm. Participants (N = 11) performed far better when feedback was available (11 times more work, 2.2 times more precise frequency, 30% less variability; p < .05 for all 3 performance measures). Using sensory feedback, participants were able to rapidly identify 4 different spring-inertia systems without foreknowledge of the corresponding resonant frequencies. They performed over 20% more work with 24% less variability when provided with both visual and haptic feedback than they did with either feedback channel alone (p < .05), providing evidence that they integrated online sensory channels. Whereas feedforward control alone led to poor performance, feedback control led to fast tuning or calibration of control according to the resonant frequency of the object, and to better control of the rhythmic movement itself.
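For context on the spring-inertia systems mentioned above: an ideal undamped spring-inertia system has a single resonant frequency determined only by its stiffness and inertia. A brief sketch with hypothetical parameter values (not the study's actual systems):

```python
import math

def resonant_frequency(stiffness: float, inertia: float) -> float:
    """Natural frequency in Hz of an undamped spring-inertia system: f0 = sqrt(k/m) / (2*pi)."""
    return math.sqrt(stiffness / inertia) / (2.0 * math.pi)

# Four hypothetical spring-inertia pairings, each with a different resonance
# that a participant would have to discover from visual/haptic feedback.
for k, m in [(100.0, 1.0), (100.0, 2.0), (200.0, 1.0), (50.0, 0.5)]:
    print(f"k = {k:5.1f} N/m, m = {m:3.1f} kg -> f0 = {resonant_frequency(k, m):.2f} Hz")
```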

8.
Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and we provide new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, & Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. Thus, it seems that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities.

9.
It is still unclear how the visual system accurately perceives the size of objects at different distances. One suggestion, dating back to Berkeley's famous essay, is that vision is calibrated by touch. If so, we may expect different mechanisms to be involved for near, reachable distances and far, unreachable distances. To study how the haptic system calibrates vision, we measured size constancy in children (from 6 to 16 years of age) and adults at various distances. At all ages, the accuracy of visual size perception changes with distance and is almost veridical inside the haptic workspace, in agreement with the idea that the haptic system acts to calibrate visual size perception. Outside this space, systematic errors occurred, which varied with age. Adults tended to overestimate the visual size of distant objects (over-compensation for distance), while children younger than 14 underestimated their size (under-compensation). At 16 years of age there seemed to be a transition point, with veridical perception of distant objects. When young subjects were allowed to touch the object inside the haptic workspace, the visual biases disappeared, while older subjects showed multisensory integration. All results are consistent with the idea that the haptic system can be used to calibrate visual size perception during development, more effectively within than outside the haptic workspace, and that the calibration mechanisms differ between children and adults.

10.
In four experiments, reducing lenses were used to minify vision and generate intersensory size conflicts between vision and touch. Subjects made size judgments using either visual matching or haptic matching. In visual matching, the subjects chose from a set of visible squares that progressively increased in size. In haptic matching, the subjects selected matches from an array of tangible wooden squares. In Experiment 1, it was found that neither sense dominated when subjects exposed to an intersensory discrepancy made their size estimates by using either visual matching or haptic matching. Size judgments were nearly identical for conflict subjects making visual or haptic matches. Thus, matching modality did not matter in Experiment 1. In Experiment 2, it was found that subjects were influenced by the sight of their hands, which led to increases in the magnitude of their size judgments. Sight of the hands produced more accurate judgments, with subjects being better able to compensate for the illusory effects of the reducing lens. In two additional experiments, it was found that when more precise judgments were required and subjects had to generate their own size estimates, the response modality dominated. Thus, vision dominated in Experiment 3, where size judgments derived from viewing a metric ruler, whereas touch dominated in Experiment 4, where subjects made size estimates with a pincers posture of their hands. It is suggested that matching procedures are inadequate for assessing intersensory dominance relations. These results qualify the position (Hershberger & Misceo, 1996) that the modality of size estimates influences the resolution of intersensory conflicts. Only when required to self-generate more precise judgments did subjects rely on one sense, either vision or touch. Thus, task and attentional requirements influence dominance relations, and vision does not invariably prevail over touch.

11.
Imagined haptic exploration in judgments of object properties (total citations: 1; self-citations: 0; citations by others: 1)
In Experiment 1, each subject rated a single, named object for its roughness, hardness, temperature, weight, size, or shape. In Experiment 2, each subject compared one pair of objects along the same dimensions. In both studies, a substantial proportion of subjects who judged the first four dimensions imagined a hand making exploratory movements appropriate for the designated information. The proportion of hand-exploration images decreased substantially when judging size or shape, or when judgments could be made readily through general semantic knowledge. The results suggest that the incorporation of haptic exploration into visual imagery provides access to information about haptically accessible object properties.

12.
Tactile-based pantomime-grasping requires that a performer use their right hand to 'grasp' a target previously held in the palm of their opposite hand – a task examining how mechanoreceptive (i.e., tactile) feedback informs the motor system about an object property (i.e., size). Here, we contrasted pantomime-grasps performed with (H+) and without (H−) haptic feedback (i.e., thumb and forefinger position information derived from the grasping hand touching the object) with a condition providing visual KR (VKR) related to absolute target object size. Just-noticeable-difference (JND) scores were computed to determine whether responses adhered to – or violated – Weber's law. JNDs for H+ trials violated the law, whereas H− and VKR trials adhered to the law. Accordingly, results demonstrate that haptic feedback – and not KR – supports an absolute tactile-haptic calibration.
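The Weber's-law analysis described above rests on the idea that, under relative (Weber) coding, just-noticeable differences grow in proportion to object size, whereas an absolutely calibrated response yields a roughly flat JND-size relation. A minimal sketch of that slope check, assuming JNDs have already been estimated per target size (all numbers are illustrative, not the study's data):

```python
import numpy as np

def weber_slope(sizes_mm, jnds_mm):
    """Slope of the JND-versus-object-size regression.

    Adherence to Weber's law (JND = k * size) implies a positive slope;
    a slope near zero suggests a violation, i.e., absolute calibration.
    """
    slope, _intercept = np.polyfit(sizes_mm, jnds_mm, 1)
    return slope

sizes = [20, 30, 40, 50]            # target object widths in mm (illustrative)
jnds_weber = [2.0, 3.1, 3.9, 5.2]   # scale with size: consistent with Weber's law
jnds_flat = [2.1, 2.0, 2.2, 1.9]    # roughly constant: violates Weber's law
print(weber_slope(sizes, jnds_weber), weber_slope(sizes, jnds_flat))
```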

13.
The aim of this paper was twofold: (1) to demonstrate the various competencies of the infant's hands for processing information about the shape of objects; and (2) to show that the infant's haptic mode shares some common mechanisms with the visual mode. Several experiments on infants from birth up to five months of age, using a habituation/dishabituation procedure, an intermodal transfer task between touch and vision, and various cognitive tasks, revealed that infants may perceive and understand the physical world through their hands without visual control. From birth, infants can habituate to shape and detect discrepancies between shapes. But information exchanges between vision and touch are partial in cross-modal transfer tasks. Plausibly, modal specificities such as discrepancies in information gathering between the two modalities and the different functions of the hands (perceptual and instrumental) limit the links between the visual and haptic modes. In contrast, when infants abstract information from an event not totally felt or seen, amodal mechanisms underlie haptic and visual knowledge in early infancy. Despite various discrepancies between the sensory modes, conceiving the world is possible with the hands as with the eyes.

14.
Task-dependent information processing for the purpose of recognition or spatial perception is considered a principle common to all the main sensory modalities. Using a dual-task interference paradigm, we investigated the behavioral effects of independent information processing for shape identification and localization of object features within and across vision and touch. In Experiment 1, we established that color and texture processing (i.e., a “what” task) interfered with both visual and haptic shape-matching tasks and that mirror image and rotation matching (i.e., a “where” task) interfered with a feature-location-matching task in both modalities. In contrast, interference was reduced when a “where” interference task was embedded in a “what” primary task and vice versa. In Experiment 2, we replicated this finding within each modality, using the same interference and primary tasks throughout. In Experiment 3, the interference tasks were always conducted in a modality other than the primary task modality. Here, we found that resources for identification and spatial localization are independent of modality. Our findings further suggest that multisensory resources for shape recognition also involve resources for spatial localization. These results extend recent neuropsychological and neuroimaging findings and have important implications for our understanding of high-level information processing across the human sensory systems.

15.
Effects of learning can show in a direct, i.e., explicit way, or they can be expressed indirectly, i.e., in an implicit way. It is investigated whether haptic information shows implicit memory effects, and whether implicit haptic memory effects are based primarily on motor or on sensory memory components. In the first phase, blindfolded subjects had to palpate objects in order to answer questions about the objects' distinct properties as fast as possible. In the following phase, this task was repeated with the same objects and additional control items. Additionally, recognition judgements were required. Results demonstrate reliable effects of implicit memory for haptic information in terms of reaction times to old vs. new objects. Subjects who had to wear plastic gloves in the first phase showed comparable effects of repetition priming. Changing the questions--and, thus, hand movements--during the palpation of objects known from the first phase, however, abolishes implicit memory expression. It is concluded, therefore, that implicit memory for haptic information is based on motor processes. On the other hand, explicit memory is hampered in subjects wearing gloves during the first phase, as revealed by recognition performance, whereas changing the questions about objects' properties has no effect on recognition judgements. Thus, explicit memory for haptic information seems to be based on the sensory processes engaged when touching objects.

16.
Acta Psychologica, 2013, 143(1): 20-34
Both vision and touch yield comparable results in terms of roughness estimation of familiar textures, as was shown in earlier studies. To our knowledge, no research has been conducted on the effect of sensory familiarity with the stimulus material on roughness estimation of unfamiliar textures. The influence of sensory modality and familiarity on roughness perception of dot pattern textures was investigated in a series of five experiments. Participants estimated the roughness of textures varying in mean center-to-center dot spacing in experimental conditions providing visual, haptic and visual–haptic combined information. The findings indicate that roughness perception of unfamiliar dot pattern textures is well described by a bi-exponential function of inter-dot spacing, regardless of the sensory modality used. However, sensory modality appears to affect the maximum of the psychophysical roughness function, with visually perceived roughness peaking for a smaller inter-dot spacing than haptic roughness. We propose that this might be due to the better spatial acuity of the visual modality. Individuals appeared to use different visual roughness estimation strategies depending on their first sensory experience (visual vs. haptic) with the stimulus material, primarily in an experimental context which required the combination of visual and haptic information in a single bimodal roughness estimate. Furthermore, the similarity of findings in experimental settings using real and virtual visual textures indicates the suitability of the experimental setup for neuroimaging studies, creating a more direct link between behavioral and neuroimaging results.
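A bi-exponential function of inter-dot spacing, as described above, can be fit to mean roughness ratings with ordinary nonlinear least squares. The sketch below is purely illustrative: the parameterization, starting values, and data are assumptions, not the model or ratings reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def bi_exp(spacing_mm, a1, b1, a2, b2):
    """Generic bi-exponential: a1*exp(-b1*d) + a2*exp(-b2*d).

    With amplitudes of opposite sign the curve rises and then decays, so
    perceived roughness can peak at an intermediate inter-dot spacing.
    """
    return a1 * np.exp(-b1 * spacing_mm) + a2 * np.exp(-b2 * spacing_mm)

# Hypothetical mean roughness ratings at several inter-dot spacings (mm).
spacing = np.array([1.0, 2.0, 3.0, 4.5, 6.0, 8.0])
rating = np.array([2.1, 4.8, 5.9, 5.2, 3.7, 2.4])

params, _cov = curve_fit(bi_exp, spacing, rating, p0=[10.0, 0.2, -12.0, 1.0], maxfev=10000)
grid = np.linspace(1.0, 8.0, 200)
peak = grid[np.argmax(bi_exp(grid, *params))]
print(f"fitted roughness peak near {peak:.2f} mm inter-dot spacing")
```

Comparing fitted peak locations across visual, haptic, and bimodal conditions is one way to quantify the modality difference in the roughness maximum reported above.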

17.
Here, we used functional magnetic resonance imaging to investigate the multisensory processing of object shape in the human cerebral cortex and explored the role of mental imagery in such processing. Regions active bilaterally during both visual and haptic shape perception, relative to texture perception in the respective modality, included parts of the superior parietal gyrus, the anterior intraparietal sulcus, and the lateral occipital complex. Of these bimodal regions, the lateral occipital complexes preferred visual over haptic stimuli, whereas the parietal areas preferred haptic over visual stimuli. Whereas most subjects reported little haptic imagery during visual shape perception, experiences of visual imagery during haptic shape perception were common. Across subjects, ratings of the vividness of visual imagery strongly predicted the amount of haptic shape-selective activity in the right, but not in the left, lateral occipital complex. Thus, visual imagery appears to contribute to activation of some, but not all, visual cortical areas during haptic perception.

18.
This study investigates whether the vertical orientation may be predominantly used as an amodal reference norm by the visual, haptic, and somato-vestibular perceptual systems to define oblique orientations. We examined this question by asking the same sighted adult subjects to reproduce, in the frontal (roll) plane, the vertical (0°) and six oblique orientations in three tasks involving different perceptual systems. In the visual task, the subjects adjusted a moveable rod so that it reproduced the orientation of a visual rod seen previously in a dark room. In the haptic task, the blindfolded sighted subjects scanned an oriented rod with one hand and reproduced its orientation, with the same hand, on a moveable response rod. In the somato-vestibular task, the blindfolded sighted subjects, sitting in a rotating chair, adjusted this chair in order to reproduce the tested orientation of their own body. The results showed that similar oblique effects (unsigned angular error difference between six oblique orientations and vertical orientation) were observed across the three tasks. However, there were no positive correlations between the visual, haptic, and somato-vestibular oblique effects.

19.
Experimental subjects were exposed to prism-induced visual displacement of a target whose location was correctly given by proprioceptive-kinesthetic information. Control subjects were exposed alternately to visual displacement or proprioceptive-kinesthetic location information. During the adaptation period, experimental subjects in the visual attention condition performed a localization task that directed them to attend selectively to the visual modality; experimental subjects in the proprioceptive attention condition attended selectively to the proprioceptive modality; control subjects performed the task on the basis of the available modality. Measures of adaptation and aftereffect were obtained separately in each of the two modalities. These confirmed the predictions that the shifts in the experimental conditions would be confined to localization tests dependent on the unattended modality and that control subjects would not exhibit adaptation. We proposed that allocation of attention determines situational dominance and that dominance determines the locus of adaptation. The findings were compared to those reported by Canon (1970) and were applied to a reassessment of the "visual capture" phenomenon.

20.
Viewpoint dependence in visual and haptic object recognition (total citations: 5; self-citations: 0; citations by others: 5)
On the whole, people recognize objects best when they see the objects from a familiar view and worst when they see the objects from views that were previously occluded from sight. Unexpectedly, we found haptic object recognition to be viewpoint-specific as well, even though hand movements were unrestricted. This viewpoint dependence was due to the hands preferring the back "view" of the objects. Furthermore, when the sensory modalities (visual vs. haptic) differed between learning an object and recognizing it, recognition performance was best when the objects were rotated back-to-front between learning and recognition. Our data indicate that the visual system recognizes the front view of objects best, whereas the hand recognizes objects best from the back.
