Similar Articles (20 results)
1.
Brain and Cognition, 2006, 60(3): 258-268
We investigate how vision affects haptic performance when task-relevant visual cues are reduced or excluded. The task was to remember the spatial location of six landmarks that were explored by touch in a tactile map. Here, we use specially designed spectacles that simulate residual peripheral vision, tunnel vision, diffuse light perception, and total blindness. Results for target locations differed, suggesting additional effects from adjacent touch cues. These are discussed. Touch with full vision was most accurate, as expected. Peripheral and tunnel vision, which reduce visuo-spatial cues, differed in error pattern. Both were less accurate than full vision, and significantly more accurate than touch with diffuse light perception, and touch alone. The important finding was that touch with diffuse light perception, which excludes spatial cues, did not differ from touch without vision in performance accuracy, nor in location error pattern. The contrast between spatially relevant versus spatially irrelevant vision provides new, rather decisive, evidence against the hypothesis that vision affects haptic processing even if it does not add task-relevant information. The results support optimal integration theories, and suggest that spatial and non-spatial aspects of vision need explicit distinction in bimodal studies and theories of spatial integration.
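The abstract's appeal to optimal integration theories can be made concrete with the standard maximum-likelihood cue-combination rule from the multisensory literature (a reference formulation, not an equation given in this study): each modality's estimate is weighted by its reliability, i.e., its inverse variance,

    \hat{S} = w_V \hat{S}_V + w_H \hat{S}_H, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2}, \qquad w_H = 1 - w_V,

so the bimodal variance \sigma^2 = \sigma_V^2 \sigma_H^2 / (\sigma_V^2 + \sigma_H^2) never exceeds either unimodal variance. On this reading, vision that carries no spatial information corresponds to \sigma_V^2 \to \infty and hence w_V \to 0, which is consistent with the finding that diffuse light perception left haptic accuracy unchanged.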

2.
In six experiments, we used the Müller-Lyer illusion to investigate factors in the integration of touch, movement, and spatial cues in haptic shape perception, and in the similarity with the visual illusion. Latencies provided evidence against the hypothesis that scanning times explain the haptic illusion. Distinctive fin effects supported the hypothesis that cue distinctiveness contributes to the illusion, but showed also that it depends on modality-specific conditions, and is not the main factor. Allocentric cues from scanning an external frame (EF) did not reduce the haptic illusion. Scanning elicited downward movements and more negative errors for horizontal convergent figures and more positive errors for vertical divergent figures, suggesting a modality-specific movement effect. But the Müller-Lyer illusion was highly significant for both vertical and horizontal figures. By contrast, instructions to use body-centered reference and to ignore the fins reduced the haptic illusion for vertical figures in touch from 12.60% to 1.7%. In vision, without explicit egocentric reference, instructions to ignore fins did not reduce the illusion to near floor level, though external cues were present. But the visual illusion was reduced to the same level as in touch with instructions that included the use of body-centered cues. The new evidence shows that the same instructions reduced the Müller-Lyer illusion almost to zero in both vision and touch. It suggests that the similarity of the illusions is not fortuitous. The results on touch supported the hypothesis that body-centered spatial reference is involved in integrating inputs from touch and movement for accurate haptic shape perception. The finding that explicit egocentric reference had the same effect on vision suggests that it may be a common factor in the integration of disparate inputs from multisensory sources.
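The percentage figures above (12.60% and 1.7%) presumably express illusion magnitude as signed error relative to the true shaft length; a conventional formulation (an assumption here, the abstract does not define it) is

    \text{illusion}(\%) = 100 \times \frac{L_{\text{judged}} - L_{\text{true}}}{L_{\text{true}}},

with positive values for overestimation of divergent (fins-out) figures and negative values for underestimation of convergent (fins-in) figures, matching the sign pattern the abstract reports.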

3.
In earlier work, the author has demonstrated that tactile pattern perception and visual pattern perception exhibit many parallels when the effective spatial resolution of vision is reduced to that of touch, thus supporting the hypothesis that the two pattern senses are functionally similar when matched in spatial bandwidth. The present experiments demonstrate a clear counter-example to this hypothesis of functional similarity. Specifically, it was found that the lateral masking effect of a surround on tactile character recognition increases when the surround changes in composition from solid lines to dots, whereas for vision, recognition performance goes in the opposite direction. This finding necessitates some modification of the model of character recognition proposed by the author (Loomis, 1990) as it applies to the sensing of raised tactile patterns. One possible modification would be to incorporate, as the initial stage of pattern transformation, the continuum mechanics model for the skin that was developed by Phillips and Johnson (1981b).

4.
Berkeley subscribed to the principle of heterogeneity, that what we see is qualitatively and numerically different from what we touch. He says of this principle that it is "the main part and pillar of [his] theory." The argument I present here is that the theory to which Berkeley refers is not just his theory of vision, but what that theory was the preparation for, which is nothing less than his idealism. The argument turns on the passivity of perception, which is what is at stake in the principle of heterogeneity. The author targeted by Berkeley's theory is Descartes, who explicitly denies heterogeneity.

5.
The present research investigates newborn infants' perceptions of the shape and texture of objects through studies of the bi-directionality of cross-modal transfer between vision and touch. Using an intersensory procedure, four experiments were performed in newborns to study their ability to transfer shape and texture information from vision to touch and from touch to vision. The results showed that cross-modal transfer of shape is not bi-directional at birth. Newborns visually recognized a shape previously held but they failed to tactually recognize a shape previously seen. In contrast, a bi-directional cross-modal transfer of texture was observed. Taken together, the results suggest that newborn infants, like older children and adults, gather information differently in the visual and tactile modes, for different object properties. The findings provide evidence for continuity in the development of mechanisms for perceiving object properties.

6.
Visual identification of 3-D objects depends on representations that are invariant across changes in size and left-right orientation. We examined whether this finding reflects the unique demands of processing 3-D objects, or whether it generalizes to 2-D patterns and to the tactile modality. Our findings suggest that object representation for identification is influenced greatly by the processing demands of stimulus materials (e.g., 2-D vs. 3-D objects) and stimulus modality (touch vs. vision). Identification of 2-D patterns in vision is adversely affected by left-right orientation changes, but not size changes. Identification of the same patterns in touch is adversely affected by both changes. Together, the results suggest that the unique processing demands of stimulus materials and modality shape the representation of objects in memory.

7.
Although the five primary senses have traditionally been thought of as separate, examples of their interactions, as well as the neural substrate possibly underlying them, have been identified. Arm position sense, for example, depends on touch, proprioception, and spatial vision of the limb. It is, however, unknown whether position sense is also influenced by more fundamental, nonspatial visual information. Here, we report an illusion that demonstrates that the position sense of the eyelid partly depends on information regarding the relative illumination reported by the two eyes. When only one eye is dark-adapted and both eyes are exposed to a dim environment, the lid of the light-adapted eye feels closed or "droopy." The effect decreases when covering the eye by hand or a patch, thus introducing tactile information congruent with the interocular difference in vision. This reveals that the integration of vision with touch and proprioception is not restricted to higher-level spatial vision, but is instead a more fundamental aspect of sensory processing than has been previously shown.

8.
The development of posture and locomotion provides a valuable window for understanding the ontogeny of perception-action relations. In this study, 13 infants were examined cross-sectionally while standing quietly either hands-free or while lightly touching a contact surface. Mean sway amplitude results indicate that infants use light touch for sway attenuation (≈28–40%) as has been seen previously with adults (Jeka & Lackner, 1994). Additionally, while using the contact surface, movement patterns of the head and trunk show reduced temporal coordination (≈25–40%), as well as increased temporal variability, as compared to no touch conditions. These findings are discussed with regard to the ontogeny of perception-action relations, with the overall conclusion that infants use somatosensory information in an exploratory manner to aid in the development of an accurate internal model of upright postural control.
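For concreteness, the ≈28–40% attenuation figures are consistent with a simple percent-reduction computation over mean sway amplitude; a minimal Python sketch under that assumption (function and variable names are illustrative, not from the paper):

    def sway_attenuation(no_touch_amp: float, touch_amp: float) -> float:
        """Percent reduction in mean sway amplitude under light touch."""
        return 100.0 * (no_touch_amp - touch_amp) / no_touch_amp

    # e.g., 10.0 mm of sway hands-free vs. 6.5 mm with light touch -> 35.0
    print(sway_attenuation(10.0, 6.5))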

9.
In four experiments, reducing lenses were used to minify vision and generate intersensory size conflicts between vision and touch. Subjects made size judgments, using either visual matching or haptic matching. In visual matching, the subjects chose from a set of visible squares that progressively increased in size. In haptic matching, the subjects selected matches from an array of tangible wooden squares. In Experiment 1, it was found that neither sense dominated when subjects exposed to an intersensory discrepancy made their size estimates by using either visual matching or haptic matching. Size judgments were nearly identical for conflict subjects making visual or haptic matches. Thus, matching modality did not matter in Experiment 1. In Experiment 2, it was found that subjects were influenced by the sight of their hands, which led to increases in the magnitude of their size judgments. Sight of the hands produced more accurate judgments, with subjects being better able to compensate for the illusory effects of the reducing lens. In two additional experiments, it was found that when more precise judgments were required and subjects had to generate their own size estimates, the response modality dominated. Thus, vision dominated in Experiment 3, where size judgments derived from viewing a metric ruler, whereas touch dominated in Experiment 4, where subjects made size estimates with a pincers posture of their hands. It is suggested that matching procedures are inadequate for assessing intersensory dominance relations. These results qualify the position (Hershberger & Misceo, 1996) that the modality of size estimates influences the resolution of intersensory conflicts. Only when required to self-generate more precise judgments did subjects rely on one sense, either vision or touch. Thus, task and attentional requirements influence dominance relations, and vision does not invariably prevail over touch.

10.
The authors investigated the extent to which touch, vision, and audition mediate the processing of statistical regularities within sequential input. Few researchers have conducted rigorous comparisons across sensory modalities; in particular, the sense of touch has been virtually ignored. The current data reveal not only commonalities but also modality constraints affecting statistical learning across the senses. To be specific, the authors found that the auditory modality displayed a quantitative learning advantage compared with vision and touch. In addition, they discovered qualitative learning biases among the senses: Primarily, audition afforded better learning for the final part of input sequences. These findings are discussed in terms of whether statistical learning is likely to consist of a single, unitary mechanism or multiple, modality-constrained ones.
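Statistical-learning tasks of this kind typically ask whether learners pick up the transitional probabilities between successive sequence elements; a minimal Python sketch of estimating such probabilities from a sequence (illustrative only, not the authors' materials):

    from collections import Counter

    def transition_probabilities(sequence):
        """Estimate P(next | current) from adjacent pairs in a sequence."""
        pair_counts = Counter(zip(sequence, sequence[1:]))
        first_counts = Counter(sequence[:-1])
        return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

    # Toy stream built from a repeating triplet: P(B|A) = 1.0, but P(D|B) = 1/3.
    print(transition_probabilities(list("ABCABCABD")))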

11.
Targets presented just beyond arm's reach look closer when observers intend to touch them with a reach-extending tool rather than without the tool. This finding is one of several that suggest that a person's ability to act influences perceived distance to objects. However, some critics have argued that apparent action effects were actually due to effects on the judgments rather than on the perception. In other words, the target does not actually look closer, but participants report that it is. To help counter this argument, the current experiments used an indirect measure of perceived distance: Participants reported perceived shape or perceived parallelism. The results revealed that triangles looked shorter and lines looked more horizontal to participants who reached with a tool, and therefore could reach the targets, than they did to participants who reached without the tool. These results demonstrate convergence across multiple types of judgments, a finding that undermines alternative, judgment-based accounts and suggests that the ability to reach an object changes the perceived distance to the object.

12.
Nancy Henley argues that nonreciprocal touch in male-female relations is used by men as a status reminder to keep women in their place. This study examines Henley's argument by exposing 60 observers to photographs of male-female interactions and asking them to rate the pictured actors on the degree to which each dominates the interaction. The interactions differ across two dimensions: status differences evident in the age and dress of the participants (female higher vs. equal vs. male higher) and who is touching whom (female toucher vs. no toucher vs. male toucher). Results of the study support but qualify the status reminder argument. Nonreciprocal touch reduces the perceived power of the person being touched whether the high-status or the low-status person is doing the touching and whether the man or the woman is being touched. Thus, nonreciprocal touch can be used by high-status men to remind lower-status women of their subordinate positions. But it can also be used by lower-status women to undermine the status claims of higher-status men. In the equal-status interactions, nonreciprocal touch does not alter power perceptions as systematically. This finding suggests that without other status cues evident in the relationship, touch alone is insufficient to establish a power advantage for either party.

13.
Four experiments tested the hypothesis that bilateral symmetry is an incidental encoding property in vision, but can also be elicited as an incidental effect in touch, provided that sufficient spatial reference information is available initially for haptic inputs to be organized spatially. Experiment 1 showed that symmetry facilitated processing in vision, even though the task required judgments of stimulus closure rather than the detection of symmetry. The same task and stimuli failed to show symmetry effects in tactual scanning by one finger (Experiment 2). Experiment 3 found facilitating effects for vertically symmetric open stimuli, although not for closed patterns, in two-forefinger exploration when the forefingers had previously been aligned to the body midaxis to provide body-centered spatial reference. The one-finger exploration condition again failed to show symmetry effects. Experiment 4 replicated the facilitating effects of symmetry for open symmetric shapes in tactual exploration by the two (previously aligned) forefingers. Closed shapes again showed no effect. Spatial-reference information, finger movements, and stimulus factors in shape perception by touch are discussed.

14.
The ability of dyads with restricted access to the visual channel of communication to establish a reliable pre-linguistic communicative signalling system has traditionally been viewed as problematic. Such a conclusion is due in part to the emphasis that has been placed on vision as central to communication by traditional theory. The data presented in this paper question these assertions. The results of a longitudinal study exploring the nature of early dyadic interactions in dyads with visual impairment are presented. The dyads' use of three types of non-visual behaviour (touch, vocalizations, and facial orientation) was investigated in terms of their potential as alternatives to visual communication. It is argued that the results are evidence that visually impaired dyads engage in sophisticated communicative exchanges prior to infants' acquisition of language.

15.
In four experiments, a multidimensional signal detection analysis was used to determine the influence of length, diameter, and mass on haptically perceived heaviness with and without vision. This analysis allowed us to test for sensory and perceptual interactions between mass and size. As in previous research, sensory interactions were apparent in all four experiments. A novel result was the appearance of perceptual interactions that became more prominent when diameter varied and when vision was allowed. Discussion focuses on how vision and the modalities of touch (i.e., haptic and dynamic) might influence which interactions appear in the data.
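As background, univariate signal detection theory separates sensitivity from response bias via d'; the multidimensional analysis used here generalizes this idea across stimulus dimensions (length, diameter, mass). A minimal univariate sketch in Python (not the authors' actual analysis):

    from scipy.stats import norm

    def d_prime(hit_rate: float, fa_rate: float) -> float:
        """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # e.g., 80% hits and 20% false alarms give d' of about 1.68
    print(d_prime(0.80, 0.20))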

16.
M. A. Heller, Perception, 1985, 14(5): 563-570
In three experiments observers made visual matches to tangible embossed patterns. Stained glass was used to blur vision and thus allow the effect of visual guidance of tactual exploration on the accuracy of symbol recognition to be evaluated. Stained glass rendered the embossed code invisible, but allowed sight of the hand. In the first experiment subjects identified patterns made up of dots and dashes drawn from Morse code; in the second and third experiments they studied braille. The results show that subjects are more accurate in 'reading' tangible codes when provided with visual guidance. Performance was higher for braille than for Morse code. Vision aided touch through the provision of a frame of reference and through sight of scanning movements. Naive sighted observers were able to identify invisible braille dots by watching other individuals touch the symbols, suggesting the importance of vision of kinesthetic patterns.

17.
There is currently a great deal of interest regarding the possible existence of a crossmodal attentional blink (AB) between audition and vision. The majority of evidence now suggests that no such crossmodal deficit exists unless a task switch is introduced. We report two experiments designed to investigate the existence of a crossmodal AB between vision and touch. Two masked targets were presented successively at variable interstimulus intervals. Participants had to respond either to both targets (experimental condition) or to just the second target (control condition). In Experiment 1, the order of target modality was blocked, and an AB was demonstrated when visual targets preceded tactile targets, but not when tactile targets preceded visual targets. In Experiment 2, target modality was mixed randomly, and a significant crossmodal AB was demonstrated in both directions between vision and touch. The contrast between our visuotactile results and those of previous audiovisual studies is discussed, as are the implications for current theories of the AB.

18.
Previous research has shown that visual perception is affected by sensory information from other modalities. For example, sound can alter the visual intensity or the number of visual objects perceived. However, when touch and vision are combined, vision normally dominates, a phenomenon known as visual capture. Here we report a cross-modal interaction between active touch and vision: The perceived number of brief visual events (flashes) is affected by the number of concurrently performed finger movements (keypresses). This sensorimotor illusion occurred despite little ambiguity in the visual stimuli themselves and depended on a close temporal proximity between movement execution and vision.

19.
In left-right comparisons of perceived length, objects on the left were slightly overestimated by vision alone but not by touch alone. This conflict between vision and touch occurred in the absence of any experimentally induced distortion or illusion. Judgments made with concurrent vision and touch were similar to those made with vision alone regardless of whether the observers were judging which object felt longer, looked longer, or was longer. The resolution of a natural conflict between vision and touch is an example of natural visual capture.

20.
In three experiments, subjects were required to make texture judgments about abrasive surfaces. Touch and vision provided comparable levels of performance when observers attempted to select the smoothest of three surfaces, but bimodal visual and tactual input led to greater accuracy. The superiority of bimodal perception was ascribed to visual guidance of tactual exploration. The elimination of visual texture cues did not impair bimodal performance if vision of hand movements was permitted. It is suggested that touch may preempt vision when both sources of texture information are simultaneously available. The results support the notion that perception is normally multimodal, since restriction of the observer to either sense in isolation produces lower levels of performance.
