Similar Articles
20 similar articles found.
1.
Picard, D. (2006). Acta Psychologica, 121(3), 227-248.
The present study examined the extent to which vision and touch are perceptually equivalent for texture information in adults. Using Garbin's method, we selected two sets of textures having high versus low cross-modal dissimilarity values between vision and touch (Experiment 1). The two sets of textures were then used as material in a cross-modal matching task (Experiment 2). Results showed that asymmetries occurred in performance when the stimuli had high cross-modal dissimilarity values, but not when the stimuli had low cross-modal dissimilarity values. These results extend Garbin's findings on shape information to the texture domain and support the idea that partial perceptual equivalence exists between vision and touch.

2.
Vision and active touch lead to similar patterns of constant error for the perception of interpolated position in two-dimensional and one-dimensional regions, though the errors for touch are larger than those for vision. The error patterns for the orientation of a radius of a semicircle are more complex, but can be interpreted as due to the interaction of two sets of anchors rather than the single pair available for the linear interpolation. The greater size of the touch errors is interpreted as due to a relative overestimation of larger distances by active touch or of smaller distances by vision.

3.
4.
Change blindness is the name given to people's inability to detect changes introduced between two consecutively-presented scenes when they are separated by a distractor that masks the transients that are typically associated with change. Change blindness has been reported within vision, audition, and touch, but has never before been investigated when successive patterns are presented to different sensory modalities. In the study reported here, we investigated change detection performance when the two to-be-compared stimulus patterns were presented in the same sensory modality (i.e., both visual or both tactile) and when one stimulus pattern was tactile while the other was presented visually or vice versa. The two to-be-compared patterns were presented consecutively, separated by an empty interval, or else separated by a masked interval. In the latter case, the masked interval could either be tactile or visual. The first experiment investigated visual-tactile and tactile-visual change detection performance. The results showed that in the absence of masking, participants detected changes in position accurately, despite the fact that the two to-be-compared displays were presented in different sensory modalities. Furthermore, when a mask was presented between the two to-be-compared displays, crossmodal change blindness was elicited regardless of whether the mask was visual or tactile. The results of two further experiments showed that performance was better overall in the unimodal (visual or tactile) conditions than in the crossmodal conditions. These results suggest that certain of the processes underlying change blindness are multisensory in nature. We discuss these findings in relation to recent claims regarding the crossmodal nature of spatial attention.

5.
Cross-modal illusory conjunctions (ICs) happen when, under conditions of divided attention, felt textures are reported as being seen or vice versa. Experiments provided evidence for these errors, demonstrated that ICs are more frequent if tactile and visual stimuli are in the same hemispace, and showed that ICs still occur under forced-choice conditions but do not occur when attention to the felt texture is increased. Cross-modal ICs were also found in a patient with parietal damage even with relatively long presentations of visual stimuli. The data are consistent with there being cross-modal integration of sensory information, with the modality of origin sometimes being misattributed when attention is constrained. The empirical conclusions from the experiments are supported by formal models.

6.
Ambiguous visual information often produces unstable visual perception. In four psychophysical experiments, we found that unambiguous tactile information about the direction of rotation of a globe whose three-dimensional structure is ambiguous significantly influences visual perception of the globe. This disambiguation of vision by touch occurs only when the two modalities are stimulated concurrently, however. Using functional magnetic resonance imaging, we discovered that touching the rotating globe, even when not looking at it, reliably activates the middle temporal visual area (MT+), a brain region commonly thought to be crucially involved in registering structure from motion. Considered together, our results show that the brain draws on somatosensory information to resolve visual conflict.

7.
In a previous experiment, we showed that bistable visual object motion was partially disambiguated by tactile input. Here, we investigated this effect further by employing a more potent visuotactile stimulus. Monocular viewing of a tangible wire-frame sphere (TS) rotating about its vertical axis produced bistable alternations of direction. Touching the TS biased simultaneous and subsequent visual perception of motion. Both of these biases were in the direction of the tactile stimulation and, therefore, constituted facilitation or priming, as opposed to interference or adaptation. Although touching the TS biased visual perception, tactile stimulation was not able to override the ambiguous visual percept. This led to periods of sensory conflict, during which visual and tactile motion percepts were incongruent. Visual and tactile inputs can sometimes be fused to form a coherent percept of object motion but, when they are in extreme conflict, can also remain independent.

8.
A series of six experiments offers converging evidence that there is no fixed dominance hierarchy for the perception of textured patterns, and in doing so, highlights the importance of recognizing the multidimensionality of texture perception. The relative bias between vision and touch was reversed or considerably altered using both discrepancy and nondiscrepancy paradigms. This shift was achieved merely by directing observers to judge different dimensions of the same textured surface. Experiments 1, 4, and 5 showed relatively strong emphasis on visual as opposed to tactual cues regarding the spatial density of raised dot patterns. In contrast, Experiments 2, 3, and 6 demonstrated considerably greater emphasis on the tactual as opposed to visual cues when observers were instructed to judge the roughness of the same surfaces. The results of the experiments were discussed in terms of a modality appropriateness interpretation of intersensory bias. A weighted averaging model appeared to describe the nature of the intersensory integration process for both spatial density and roughness perception.
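The weighted averaging model described in the abstract above lends itself to a short worked sketch. All weights and ratings below are assumed numbers for illustration, not the authors' fitted estimates: the bimodal percept is modeled as a convex combination of the unimodal estimates, with the visual weight high for spatial density and low for roughness.

# Weighted averaging model of visual-tactile integration (illustrative sketch;
# the weights here are hypothetical, not values fitted in the study).
def combined_estimate(visual, tactile, w_visual):
    """Bimodal percept as a convex combination of unimodal estimates."""
    return w_visual * visual + (1.0 - w_visual) * tactile

# Spatial density: vision tends to dominate (high visual weight).
print(combined_estimate(visual=8.0, tactile=5.0, w_visual=0.8))  # -> 7.4
# Roughness: touch tends to dominate (low visual weight).
print(combined_estimate(visual=8.0, tactile=5.0, w_visual=0.2))  # -> 5.6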

9.
Cross-modal transfer (CMT) of figures was studied by varying stimulus complexity and age (9, 11, 14 years). Matched groups learned two series of 10 figures each (simple familiar and complex meaningless), one group first visually and then tactually (V-T) and the other in reversed order (T-V), scanning time being unlimited. After learning the figures, the Ss drew them from memory. These reproductions were mostly whole figures. CMT was significant in both directions. In the simple figure series, transfer was regarded as symmetric, whereas the results for the complex figures suggested that the T-V order was more efficient. Transfer did not increase with age, but each age group had features typical of it.

10.
There is currently a great deal of interest regarding the possible existence of a crossmodal attentional blink (AB) between audition and vision. The majority of evidence now suggests that no such crossmodal deficit exists unless a task switch is introduced. We report two experiments designed to investigate the existence of a crossmodal AB between vision and touch. Two masked targets were presented successively at variable interstimulus intervals. Participants had to respond either to both targets (experimental condition) or to just the second target (control condition). In Experiment 1, the order of target modality was blocked, and an AB was demonstrated when visual targets preceded tactile targets, but not when tactile targets preceded visual targets. In Experiment 2, target modality was mixed randomly, and a significant crossmodal AB was demonstrated in both directions between vision and touch. The contrast between our visuotactile results and those of previous audiovisual studies is discussed, as are the implications for current theories of the AB.

11.
Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of everyday learning scenarios: in a fully unconstrained task, objects were freely categorized; in a semi-constrained task, exactly three groups had to be created; whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.
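The similarity-rating analysis described above follows a standard multidimensional scaling pipeline. A minimal sketch, assuming scikit-learn's MDS as a stand-in for whatever implementation the authors used; the similarity matrix and object count below are randomly generated placeholders, not the study's data:

# Multidimensional scaling of similarity ratings (illustrative sketch).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_objects = 21
# Hypothetical similarity ratings on a 1-7 scale, symmetrized.
sim = rng.integers(1, 8, size=(n_objects, n_objects)).astype(float)
sim = (sim + sim.T) / 2.0
np.fill_diagonal(sim, 7.0)

# Convert similarities to dissimilarities and embed in 3 dimensions,
# matching the three-dimensional parametric object space.
dissim = sim.max() - sim
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
perceptual_space = mds.fit_transform(dissim)
print(perceptual_space.shape)  # (21, 3)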

12.
Differences in sensory function between young (n = 42, 18-31 years old) and older (n = 137, 60-88 years old) adults were examined for auditory, visual, and tactile measures of threshold sensitivity and temporal acuity (gap-detection threshold). For all but one of the psychophysical measures (visual gap detection), multiple measures were obtained at different stimulus frequencies for each modality and task. This resulted in a total of 14 dependent measures, each based on four to six adaptive psychophysical estimates of 75% correct performance. In addition, all participants completed the Wechsler Adult Intelligence Scale (Wechsler, 1997). Mean data confirmed previously observed differences in performance between young and older adults for 13 of the 14 dependent measures (all but visual threshold at a flicker frequency of 4 Hz). Correlational and principal-components factor analyses performed on the data from the 137 older adults were generally consistent with task and modality independence of the psychophysical measures.
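The principal-components step in the abstract above can be sketched as follows. The 137 x 14 score matrix is fabricated for demonstration; standardizing first puts the PCA on the correlation matrix, as is conventional for measures on different scales:

# Principal-components analysis of psychophysical measures (illustrative sketch).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_subjects, n_measures = 137, 14
scores = rng.normal(size=(n_subjects, n_measures))  # hypothetical thresholds

# Standardize each measure so the PCA operates on the correlation matrix.
z = StandardScaler().fit_transform(scores)
pca = PCA().fit(z)
print(pca.explained_variance_ratio_[:4])  # variance captured by leading components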

13.
We experience the shape of objects in our world largely by way of our vision and touch, but the availability and integration of information between the senses remains an open question. The research presented in this article examines the effect of stimulus complexity on visual, haptic and crossmodal discrimination. Using sculpted three-dimensional objects whose features vary systematically, we performed a series of three experiments to determine perceptual equivalence as a function of complexity: two unimodal experiments (vision-only and touch-only) and one crossmodal experiment investigating the availability of information across the senses. We find that, for the class of stimuli used, subjects were able to visually discriminate them reliably across the entire range of complexity, while the experiments involving haptic information show a marked decrease in performance as the objects become more complex. Performance in the crossmodal condition appears to be constrained by the limits of the subjects' haptic representation, but the combination of the two sources of information is of some benefit over vision alone when comparing the simpler, low-frequency stimuli. This result shows that there is crossmodal transfer, and therefore perceptual equivalence, but that this transfer is limited by the object's complexity.

14.
15.
In multistable perception, the brain alternates between several perceptual explanations of ambiguous sensory signals. It is unknown whether multistable processes can interact across the senses. In the study reported here, we presented subjects with unisensory (visual or tactile), spatially congruent visuotactile, and spatially incongruent visuotactile apparent motion quartets. Congruent stimulation induced pronounced visuotactile interactions, as indicated by increased dominance times for both vision and touch, and an increased percentage bias for the percept already dominant under unisensory stimulation. Thus, the joint evidence from vision and touch stabilizes the more likely perceptual interpretation and thereby decelerates the rivalry dynamics. Yet the temporal dynamics also depended on subjects' attentional focus and were generally slower for tactile than for visual reports. Our results support Bayesian approaches to perceptual inference, in which the probability of a perceptual interpretation is determined by combining visual, tactile, or visuotactile evidence with modality-specific priors that depend on subjects' attentional focus. Critically, the specificity of visuotactile interactions for spatially congruent stimulation indicates multisensory rather than cognitive-bias mechanisms.
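The Bayesian account in the abstract above can be illustrated with a two-hypothesis posterior. A minimal sketch with made-up likelihoods and prior (none of these values come from the study): congruent tactile evidence multiplies into the visual likelihoods and stabilizes the already-dominant percept.

# Bayesian inference over two perceptual interpretations (illustrative sketch;
# all likelihoods and priors are assumed numbers, not the authors' estimates).
def posterior(likelihood_a, likelihood_b, prior_a):
    """P(interpretation A | evidence) for two mutually exclusive percepts."""
    pa = likelihood_a * prior_a
    pb = likelihood_b * (1.0 - prior_a)
    return pa / (pa + pb)

# Vision alone: percept A is moderately favored.
p_vis = posterior(likelihood_a=0.7, likelihood_b=0.3, prior_a=0.6)
# Congruent visuotactile evidence: independent likelihoods multiply,
# making the dominant percept more stable (slower rivalry dynamics).
p_vistac = posterior(likelihood_a=0.7 * 0.7, likelihood_b=0.3 * 0.3, prior_a=0.6)
print(round(p_vis, 3), round(p_vistac, 3))  # 0.778 vs 0.891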

16.
17.
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmical patterns derived from the alternation of vowels and consonants in English and Japanese, presented in audition, vision, both audition and vision at the same time, or touch. Experiment 1 confirmed that discrimination is possible on the basis of auditory rhythmic patterns, and extended it to the case of vision, using 'aperture-close' mouth movements of a schematic face. In Experiment 2, language discrimination was demonstrated using visual and auditory materials that did not resemble spoken articulation. In a combined analysis including data from Experiments 1 and 2, a beneficial effect was also found when the auditory rhythmic information was available to participants. Despite the fact that discrimination could be achieved using vision alone, auditory performance was nevertheless better. In a final experiment, we demonstrate that the rhythm of speech can also be discriminated successfully by means of vibrotactile patterns delivered to the fingertip. The results of the present study therefore demonstrate that discrimination between languages' syllabic rhythmic patterns is possible on the basis of visual and tactile displays.
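The vowel/consonant alternation patterns mentioned above can be approximated orthographically. This letter-based rule is an assumption for illustration only; the study derived its patterns from the actual phonetic structure of the speech materials:

# Deriving a consonant (C) / vowel (V) alternation pattern from text
# (rough orthographic sketch; "y" is treated as a consonant here).
VOWELS = set("aeiou")

def cv_pattern(text):
    """Collapse a phrase into its consonant/vowel alternation string."""
    return "".join("V" if ch in VOWELS else "C"
                   for ch in text.lower() if ch.isalpha())

print(cv_pattern("banana"))    # CVCVCV  (regular alternation, Japanese-like)
print(cv_pattern("strength"))  # CCCVCCCC (consonant clusters, English-like)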

18.
Multisensory integration of nonspatial features between vision and touch was investigated by examining the effects of redundant signals of visual and tactile inputs. In the present experiments, visual letter stimuli and/or tactile letter stimuli were presented, which participants were asked to identify as quickly as possible. The results of Experiment 1 demonstrated faster reaction times for bimodal stimuli than for unimodal stimuli (the redundant signals effect, RSE). The RSE was due to coactivation of figural representations from the visual and tactile modalities. This coactivation did not occur for a simple stimulus detection task (Experiment 2) or for bimodal stimuli with the same semantic information but different physical stimulus features (Experiment 3). The findings suggest that the integration process might occur at a relatively early stage of object identification, prior to the decision level.
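Coactivation claims like the one above are conventionally tested against Miller's (1982) race-model inequality. The abstract does not give the authors' exact procedure, so the following is a generic sketch with fabricated reaction times: a race of independent channels requires P(RT <= t | bimodal) <= P(RT <= t | visual) + P(RT <= t | tactile) for all t, and violations at fast t values indicate coactivation.

# Race-model inequality test for the redundant signals effect
# (illustrative sketch; all reaction-time samples are fabricated).
import numpy as np

rng = np.random.default_rng(2)
rt_visual = rng.normal(520, 60, 200)   # unimodal visual RTs (ms)
rt_tactile = rng.normal(540, 60, 200)  # unimodal tactile RTs (ms)
rt_bimodal = rng.normal(450, 50, 200)  # redundant (bimodal) RTs (ms)

def cdf(samples, t):
    """Empirical cumulative probability P(RT <= t)."""
    return np.mean(samples <= t)

for t in range(350, 551, 50):
    bound = cdf(rt_visual, t) + cdf(rt_tactile, t)  # Miller's bound
    violated = cdf(rt_bimodal, t) > bound           # evidence for coactivation
    print(t, round(cdf(rt_bimodal, t), 3), round(bound, 3), violated)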

19.
The information that people use to perceive whether a tool is suitable for a certain task depends on what is available at a given time. Visually scanning a tool and wielding it each provide information about the functional attributes of the tool. In experiment 1, we investigated the relative contributions of vision and dynamic touch to perceiving the suitability of various tools for various tasks. The results show that, when both vision and dynamic touch are available, the visual information dominates. When limited to dynamic touch, ratings of suitability are constrained by the inertial properties of the tool, and the inertial properties that are exploited depend on the task. In experiment 2, we asked whether the manner in which a tool is manipulated in exploration depends on the task for which it is being evaluated. The results suggest that tools are manipulated in ways that reflect intentions to perceive particular affordances. Exploratory movements sometimes mimic performatory movements.

20.