Similar Articles
20 similar articles found (search time: 15 ms)
1.
It is well known that stimuli grab attention to their location, but do they also grab attention to their sensory modality? The modality shift effect (MSE), the observation that responding to a stimulus leads to reaction time benefits for subsequent stimuli in the same modality, suggests that this may be the case. If noninformative cue stimuli, which do not require a response, also lead to benefits for their modality, this would suggest that the effect is automatic. We investigated the time-course of the visuotactile MSE and the difference between the effects of cues and targets. In Experiment 1, when visual and tactile tasks and stimulus locations were matched, uninformative cues did not lead to reaction time benefits for targets in the same modality. However, the modality of the previous target led to a significant MSE. Only stimuli that require a response, therefore, appear to lead to reaction time benefits for their modality. In Experiment 2, increasing attention to the cue stimuli attenuated the effect of the previous target, but the cues still did not lead to an MSE. In Experiment 3, an MSE was demonstrated between successive targets, and this effect decreased with increasing intertrial intervals. Overall, these studies demonstrate how cue- and target-induced effects interact and suggest that modalities do not automatically capture attention as locations do; rather, the MSE is more similar to other task repetition effects.

2.
In four experiments, reducing lenses were used to minify vision and generate intersensory size conflicts between vision and touch. Subjects made size judgments, using either visual matching or haptic matching. In visual matching, the subjects chose from a set of visible squares that progressively increased in size. In haptic matching, the subjects selected matches from an array of tangible wooden squares. In Experiment 1, it was found that neither sense dominated when subjects exposed to an intersensory discrepancy made their size estimates by using either visual matching or haptic matching. Size judgments were nearly identical for conflict subjects making visual or haptic matches. Thus, matching modality did not matter in Experiment 1. In Experiment 2, it was found that subjects were influenced by the sight of their hands, which led to increases in the magnitude of their size judgments. Sight of the hands produced more accurate judgments, with subjects being better able to compensate for the illusory effects of the reducing lens. In two additional experiments, it was found that when more precise judgments were required and subjects had to generate their own size estimates, the response modality dominated. Thus, vision dominated in Experiment 3, where size judgments derived from viewing a metric ruler, whereas touch dominated in Experiment 4, where subjects made size estimates with a pincers posture of their hands. It is suggested that matching procedures are inadequate for assessing intersensory dominance relations. These results qualify the position (Hershberger & Misceo, 1996) that the modality of size estimates influences the resolution of intersensory conflicts. Only when required to self-generate more precise judgments did subjects rely on one sense, either vision or touch. Thus, task and attentional requirements influence dominance relations, and vision does not invariably prevail over touch.

3.
Change blindness is the name given to people's inability to detect changes introduced between two consecutively-presented scenes when they are separated by a distractor that masks the transients that are typically associated with change. Change blindness has been reported within vision, audition, and touch, but has never before been investigated when successive patterns are presented to different sensory modalities. In the study reported here, we investigated change detection performance when the two to-be-compared stimulus patterns were presented in the same sensory modality (i.e., both visual or both tactile) and when one stimulus pattern was tactile while the other was presented visually or vice versa. The two to-be-compared patterns were presented consecutively, separated by an empty interval, or else separated by a masked interval. In the latter case, the masked interval could either be tactile or visual. The first experiment investigated visual-tactile and tactile-visual change detection performance. The results showed that in the absence of masking, participants detected changes in position accurately, despite the fact that the two to-be-compared displays were presented in different sensory modalities. Furthermore, when a mask was presented between the two to-be-compared displays, crossmodal change blindness was elicited no matter whether the mask was visual or tactile. The results of two further experiments showed that performance was better overall in the unimodal (visual or tactile) conditions than in the crossmodal conditions. These results suggest that certain of the processes underlying change blindness are multisensory in nature. We discuss these findings in relation to recent claims regarding the crossmodal nature of spatial attention.

4.
Cross-modal illusory conjunctions (ICs) happen when, under conditions of divided attention, felt textures are reported as being seen or vice versa. Experiments provided evidence for these errors, demonstrated that ICs are more frequent if tactile and visual stimuli are in the same hemispace, and showed that ICs still occur under forced-choice conditions but do not occur when attention to the felt texture is increased. Cross-modal ICs were also found in a patient with parietal damage even with relatively long presentations of visual stimuli. The data are consistent with there being cross-modal integration of sensory information, with the modality of origin sometimes being misattributed when attention is constrained. The empirical conclusions from the experiments are supported by formal models.

5.
Ambiguous visual information often produces unstable visual perception. In four psychophysical experiments, we found that unambiguous tactile information about the direction of rotation of a globe whose three-dimensional structure is ambiguous significantly influences visual perception of the globe. This disambiguation of vision by touch occurs only when the two modalities are stimulated concurrently, however. Using functional magnetic resonance imaging, we discovered that touching the rotating globe, even when not looking at it, reliably activates the middle temporal visual area (MT+), a brain region commonly thought to be crucially involved in registering structure from motion. Considered together, our results show that the brain draws on somatosensory information to resolve visual conflict.

6.
Cross-modal transfer (CMT) of figures was studied by varying stimulus complexity and age (9, 11, 14 years). Matched groups learned two series of 10 figures each (simple familiar and complex meaningless), one group first visually and then tactually (V-T) and the other in reversed order (T-V), scanning time being unlimited. After learning the figures the Ss drew them from memory. These reproductions were mostly whole figures. CMT was significant in both directions. In the simple figure series transfer was regarded as symmetric, whereas the results for the complex figures suggested that the T-V order was more efficient. Transfer did not increase with age, but each age group had features typical of it.

7.
There is currently a great deal of interest regarding the possible existence of a crossmodal attentional blink (AB) between audition and vision. The majority of evidence now suggests that no such crossmodal deficit exists unless a task switch is introduced. We report two experiments designed to investigate the existence of a crossmodal AB between vision and touch. Two masked targets were presented successively at variable interstimulus intervals. Participants had to respond either to both targets (experimental condition) or to just the second target (control condition). In Experiment 1, the order of target modality was blocked, and an AB was demonstrated when visual targets preceded tactile targets, but not when tactile targets preceded visual targets. In Experiment 2, target modality was mixed randomly, and a significant crossmodal AB was demonstrated in both directions between vision and touch. The contrast between our visuotactile results and those of previous audiovisual studies is discussed, as are the implications for current theories of the AB.

8.
9.
In multistable perception, the brain alternates between several perceptual explanations of ambiguous sensory signals. It is unknown whether multistable processes can interact across the senses. In the study reported here, we presented subjects with unisensory (visual or tactile), spatially congruent visuotactile, and spatially incongruent visuotactile apparent motion quartets. Congruent stimulation induced pronounced visuotactile interactions, as indicated by increased dominance times for both vision and touch, and an increased percentage bias for the percept already dominant under unisensory stimulation. Thus, the joint evidence from vision and touch stabilizes the more likely perceptual interpretation and thereby decelerates the rivalry dynamics. Yet the temporal dynamics depended also on subjects' attentional focus and was generally slower for tactile than for visual reports. Our results support Bayesian approaches to perceptual inference, in which the probability of a perceptual interpretation is determined by combining visual, tactile, or visuotactile evidence with modality-specific priors that depend on subjects' attentional focus. Critically, the specificity of visuotactile interactions for spatially congruent stimulation indicates multisensory rather than cognitive-bias mechanisms.

10.
We experience the shape of objects in our world largely by way of our vision and touch, but the availability and integration of information between the senses remain an open question. The research presented in this article examines the effect of stimulus complexity on visual, haptic and crossmodal discrimination. Using sculpted three-dimensional objects whose features vary systematically, we perform a series of three experiments to determine perceptual equivalence as a function of complexity. Two unimodal experiments (vision-only and touch-only) and one crossmodal experiment investigating the availability of information across the senses were performed. We find that, for the class of stimuli used, subjects were able to visually discriminate them reliably across the entire range of complexity, while the experiments involving haptic information show a marked decrease in performance as the objects become more complex. Performance in the crossmodal condition appears to be constrained by the limits of the subjects’ haptic representation, but the combination of the two sources of information is of some benefit over vision alone when comparing the simpler, low-frequency stimuli. This result shows that there is crossmodal transfer, and therefore perceptual equivalency, but that this transfer is limited by the object’s complexity.

11.
12.
Martino G, Marks LE. Perception, 2000, 29(6): 745-754.
At each moment, we experience a melange of information arriving at several senses, and often we focus on inputs from one modality and 'reject' inputs from another. Does input from a rejected sensory modality modulate one's ability to make decisions about information from a selected one? When the modalities are vision and hearing, the answer is "yes", suggesting that vision and hearing interact. In the present study, we asked whether similar interactions characterize vision and touch. As with vision and hearing, results obtained in a selective attention task show cross-modal interactions between vision and touch that depend on the synesthetic relationship between the stimulus combinations. These results imply that similar mechanisms may govern cross-modal interactions across sensory modalities.

13.
Picard D. Acta Psychologica, 2006, 121(3): 227-248.
The present study examined the extent to which vision and touch are perceptually equivalent for texture information in adults. Using Garbin's method, we selected two sets of textures having high versus low cross-modal dissimilarity values between vision and touch (Experiment 1). The two sets of textures were then used as material in a cross-modal matching task (Experiment 2). Results showed that asymmetries occurred in the performances when the stimuli had high cross-modal dissimilarity values, but not when the stimuli had low cross-modal dissimilarity values. These results extend Garbin's findings on shape information to the texture domain and support the idea that partial perceptual equivalence exists between vision and touch.

14.
Rodway P. Acta Psychologica, 2005, 120(2): 199-226.
Which is better, a visual or an auditory warning signal? Initial findings suggested that an auditory signal was more effective, speeding reaction to a target more than a visual warning signal, particularly at brief foreperiods [Bertelson, P., & Tisseyre, F. (1969). The time-course of preparation: confirmatory results with visual and auditory warning signals. Acta Psychologica, 30. In W.G. Koster (Ed.), Attention and Performance II (pp. 145-154); Davis, R., & Green, F. A. (1969). Intersensory differences in the effect of warning signals on reaction time. Acta Psychologica, 30. In W.G. Koster (Ed.), Attention and Performance II (pp. 155-167)]. This led to the hypothesis that an auditory signal is more alerting than a visual warning signal [Sanders, A. F. (1975). The foreperiod effect revisited. Quarterly Journal of Experimental Psychology, 27, 591-598; Posner, M. I., Nissen, M. J., & Klein, R. M. (1976). Visual dominance: an information-processing account of its origins and significance. Psychological Review, 83, 157-171]. Recently [Turatto, M., Benso, F., Galfano, G., & Umilta, C. (2002). Nonspatial attentional shifts between audition and vision. Journal of Experimental Psychology: Human Perception and Performance, 28, 628-639] found no evidence for an auditory warning signal advantage and showed that at brief foreperiods a signal in the same modality as the target facilitated responding more than a signal in a different modality. They accounted for this result in terms of the modality shift effect, with the signal exogenously recruiting attention to its modality, and thereby facilitating responding to targets arriving in the modality to which attention had been recruited. The present study conducted six experiments to understand the cause of these conflicting findings. The results suggest that an auditory warning signal is not more effective than a visual warning signal. Previous reports of an auditory superiority appear to have been caused by using different locations for the visual warning signal and visual target, resulting in the target arriving at an unattended location when the foreperiod was brief. Turatto et al.'s results were replicated with a modality shift effect at brief foreperiods. However, it is also suggested that previous measures of the modality shift effect may still have been confounded by a location cuing effect.

15.
The authors report a series of 6 experiments investigating crossmodal links between vision and touch in covert endogenous spatial attention. When participants were informed that visual and tactile targets were more likely on 1 side than the other, speeded discrimination responses (continuous vs. pulsed, Experiments 1 and 2; or up vs. down, Experiment 3) for targets in both modalities were significantly faster on the expected side, even though target modality was entirely unpredictable. When participants expected a target on a particular side in just one modality, corresponding shifts of covert attention also took place in the other modality, as evidenced by faster elevation judgments on that side (Experiment 4). Larger attentional effects were found when directing visual and tactile attention to the same position rather than to different positions (Experiment 5). A final study with crossed hands revealed that these visuotactile links in spatial attention apply to common positions in external space.

16.
The investigation of self-prioritization via a simple matching paradigm represents a new way of enhancing our knowledge about the processing of self-relevant content and also increases our understanding of the self-concept itself. By associating formerly neutral material with the self, and assessing the resulting prioritization of these newly formed self-associations, conclusions can be drawn concerning the effects of self-relevance without the burden of highly overlearned materials such as one’s own name. This approach was used to gain further insights into the structure and complexity of self-associations: a tactile pattern was associated with the self and thereafter, the prioritization of the exact same visual pattern was assessed – enabling the investigation of crossmodal self-associations. The results demonstrate a prioritization of self-associated material that rapidly extends beyond the borders of a sensory modality in which it was first established.

17.
In a visual-tactile interference paradigm, subjects judged whether tactile vibrations arose on a finger or thumb (upper vs. lower locations), while ignoring distant visual distractor lights that also appeared in upper or lower locations. Incongruent visual distractors (e.g. a lower light combined with upper touch) disrupt such tactile judgements, particularly when appearing near the tactile stimulus (e.g. on the same side of space as the stimulated hand). Here we show that actively wielding tools can change this pattern of crossmodal interference. When such tools were held in crossed positions (connecting the left hand to the right visual field, and vice-versa), the spatial constraints on crossmodal interference reversed, so that visual distractors in the other visual field now disrupted tactile judgements most for a particular hand. This phenomenon depended on active tool-use, developing with increased experience in using the tool. We relate these results to recent physiological and neuropsychological findings.

18.
19.
In a previous experiment, we showed that bistable visual object motion was partially disambiguated by tactile input. Here, we investigated this effect further by employing a more potent visuotactile stimulus. Monocular viewing of a tangible wire-frame sphere (TS) rotating about its vertical axis produced bistable alternations of direction. Touching the TS biased simultaneous and subsequent visual perception of motion. Both of these biases were in the direction of the tactile stimulation and, therefore, constituted facilitation or priming, as opposed to interference or adaptation. Although touching the TS biased visual perception, tactile stimulation was not able to override the ambiguous visual percept. This led to periods of sensory conflict, during which visual and tactile motion percepts were incongruent. Visual and tactile inputs can sometimes be fused to form a coherent percept of object motion but, when they are in extreme conflict, can also remain independent.

20.
The generation effect, in which items generated by following some rule are remembered better than stimuli that are simply read, has been studied intensely over the past two decades. To date, however, researchers have largely ignored the temporal aspects of this effect. In the present research, we used a variable onset time for the presentation of the to-be-remembered material, thus providing the ability to determine at what point during processing the generation effect originates. The results indicate that some benefit from generation attempts occurs even when subjects have only a few hundred milliseconds in which to process the stimulus, but that more of the benefit occurs later. This finding suggests that the generation effect results from continuous or multiple discrete stages of information accrual or strengthening of memory traces over time, rather than from a single discrete increment upon final generation.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号