Similar Articles
20 similar articles found (search time: 15 ms)
1.
In the ventriloquism aftereffect, brief exposure to a consistent spatial disparity between auditory and visual stimuli leads to a subsequent shift in subjective sound localization toward the positions of the visual stimuli. Such rapid adaptive changes probably play an important role in maintaining the coherence of spatial representations across the various sensory systems. In the research reported here, we used event-related potentials (ERPs) to identify the stage in the auditory processing stream that is modulated by audiovisual discrepancy training. Both before and after exposure to synchronous audiovisual stimuli that had a constant spatial disparity of 15°, participants reported the perceived location of brief auditory stimuli that were presented from central and lateral locations. In conjunction with a sound localization shift in the direction of the visual stimuli (the behavioral ventriloquism aftereffect), auditory ERPs as early as 100 ms poststimulus (N100) were systematically modulated by the disparity training. These results suggest that cross-modal learning was mediated by a relatively early stage in the auditory cortical processing stream.

2.
It is well known that discrepancies in the location of synchronized auditory and visual events can lead to mislocalizations of the auditory source, so-called ventriloquism. In two experiments, we tested whether such cross-modal influences on auditory localization depend on deliberate visual attention to the biasing visual event. In Experiment 1, subjects pointed to the apparent source of sounds in the presence or absence of a synchronous peripheral flash. They also monitored for target visual events, either at the location of the peripheral flash or in a central location. Auditory localization was attracted toward the synchronous peripheral flash, but this was unaffected by where deliberate visual attention was directed in the monitoring task. In Experiment 2, bilateral flashes were presented in synchrony with each sound, to provide competing visual attractors. When these visual events were equally salient on the two sides, auditory localization was unaffected by which side subjects monitored for visual targets. When one flash was larger than the other, auditory localization was slightly but reliably attracted toward it, but again regardless of where visual monitoring was required. We conclude that ventriloquism largely reflects automatic sensory interactions, with little or no role for deliberate spatial attention.

3.
A period of exposure to trains of simultaneous but spatially offset auditory and visual stimuli can induce a temporary shift in the perception of sound location. This phenomenon, known as the ‘ventriloquist aftereffect’, reflects a realignment of auditory and visual spatial representations such that they approach perceptual alignment despite their physical spatial discordance. Such dynamic changes to sensory representations are likely to underlie the brain’s ability to accommodate inter-sensory discordance produced by sensory errors (particularly in sound localization) and variability in sensory transduction. It is currently unknown, however, whether these plastic changes induced by adaptation to spatially disparate inputs occur automatically or whether they depend on selectively attending to the visual or auditory stimuli. Here, we demonstrate that robust auditory spatial aftereffects can be induced even in the presence of a competing visual stimulus. Importantly, we found that when attention is directed to the competing stimuli, the pattern of aftereffects is altered. These results indicate that attention can modulate the ventriloquist aftereffect.

4.
In a previous study, Ward (1994) reported that spatially uninformative visual cues orient auditory attention but that spatially uninformative auditory cues fail to orient visual attention. This cross-modal asymmetry is consistent with other intersensory perceptual phenomena that are dominated by the visual modality (e.g., ventriloquism). However, Spence and Driver (1997) found exactly the opposite asymmetry under different experimental conditions and with a different task. In spite of the several differences between the two studies, Spence and Driver (see also Driver & Spence, 1998) argued that Ward's findings might have arisen from response-priming effects, and that the cross-modal asymmetry they themselves reported, in which auditory cues affect responses to visual targets but not vice versa, is in fact the correct result. The present study investigated cross-modal interactions in stimulus-driven spatial attention orienting under Ward's complex cue environment conditions using an experimental procedure that eliminates response-priming artifacts. The results demonstrate that the cross-modal asymmetry reported by Ward (1994) does occur when the cue environment is complex. We argue that strategic effects in cross-modal stimulus-driven orienting of attention are responsible for the opposite asymmetries found by Ward and by Spence and Driver (1997).

5.
Multisensory-mediated auditory localization
Multisensory integration is a powerful mechanism for maximizing sensitivity to sensory events. We examined its effects on auditory localization in healthy human subjects. The specific objective was to test whether the relative intensity and location of a seemingly irrelevant visual stimulus would influence auditory localization in accordance with the inverse effectiveness and spatial rules of multisensory integration that have been developed from neurophysiological studies with animals [Stein and Meredith, 1993 The Merging of the Senses (Cambridge, MA: MIT Press)]. Subjects were asked to localize a sound while a neutral visual stimulus was presented either above threshold (supra-threshold) or at threshold. In both cases the spatial disparity of the visual and auditory stimuli was systematically varied. The results reveal that stimulus salience is a critical factor in determining the effect of a neutral visual cue on auditory localization. Visual bias and, hence, perceptual translocation of the auditory stimulus appeared when the visual stimulus was supra-threshold, regardless of its location. However, this was not the case when the visual stimulus was at threshold. In this case, the influence of the visual cue was apparent only when the two cues were spatially coincident and resulted in an enhancement of stimulus localization. These data suggest that the brain uses multiple strategies to integrate multisensory information.

6.
Previous research has demonstrated that the localization of auditory or tactile stimuli can be biased by the simultaneous presentation of a visual stimulus from a different spatial position. We investigated whether auditory localization judgments could also be affected by the presentation of spatially displaced tactile stimuli, using a procedure designed to reveal perceptual interactions across modalities. Participants made left-right discrimination responses regarding the perceived location of sounds, which were presented either in isolation or together with tactile stimulation to the fingertips. The results demonstrate that the apparent location of a sound can be biased toward tactile stimulation when it is synchronous, but not when it is asynchronous, with the auditory event. Directing attention to the tactile modality did not increase the bias of sound localization toward synchronous tactile stimulation. These results provide the first demonstration of the tactile capture of audition.

8.
The present study examined cross-modal selective attention using a task-switching paradigm. In a series of experiments, we presented lateralized visual and auditory stimuli simultaneously and asked participants to make a spatial decision according to either the visual or the auditory stimulus. We observed consistent cross-modal interference in the form of a spatial congruence effect. This effect was asymmetrical, with higher costs when responding to auditory than to visual stimuli. Furthermore, we found stimulus-modality-shift costs, indicating a persisting attentional bias towards the attended stimulus modality. We discuss our findings with respect to visual dominance, directed-attention accounts, and the modality-appropriateness hypothesis.

9.
Many studies on multisensory processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. However, these results cannot necessarily be applied to explain our perceptual behavior in natural scenes where various signals exist within one sensory modality. We investigated the role of audio-visual syllable congruency on participants' auditory localization bias or the ventriloquism effect using spoken utterances and two videos of a talking face. Salience of facial movements was also manipulated. Results indicated that more salient visual utterances attracted participants' auditory localization. Congruent pairing of audio-visual utterances elicited greater localization bias than incongruent pairing, while previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference on auditory localization. Multisensory performance appears more flexible and adaptive in this complex environment than in previous studies.

10.
Investigations of situations involving spatial discordance between auditory and visual data which can otherwise be attributed to a common origin have revealed two main phenomena: cross-modal bias and perceptual fusion (or ventriloquism). The focus of the present study is the relationship between these two. The question asked was whether bias occurred only with fusion, as is predicted by some accounts of reactions to discordance, among them those based on cue substitution. The approach consisted of having subjects, on each trial, both point to signals in one modality in the presence of conflicting signals in the other modality and produce same-different origin judgments. To avoid the confounding of immediate effects with cumulative adaptation, which was allowed in most previous studies, the direction and amplitude of discordance were varied randomly from trial to trial. Experiment 1, which was a pilot study, showed that both visual bias of auditory localization and auditory bias of visual localization can be observed under such conditions. Experiment 2, which addressed the main question, used a method which controls for the selection involved in separating fusion from no-fusion trials and showed that the attraction of auditory localization by conflicting visual inputs occurs even when fusion is not reported. This result is inconsistent with purely postperceptual views of cross-modal interactions. The question could not be answered for auditory bias of visual localization, which, although significant, was very small in Experiment 1 and fell below significance under the conditions of Experiment 2.

11.
When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right–left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right–left prevalence effects obtained with auditory stimuli are typically larger than those obtained with visual stimuli even though less attention should be demanded from the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right–left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right–left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus–response task.

12.
Spatial ventriloquism refers to the phenomenon that a visual stimulus such as a flash can attract the perceived location of a spatially discordant but temporally synchronous sound. An analogous example of mutual attraction between audition and vision has been found in the temporal domain, where temporal aspects of a visual event, such as its onset, frequency, or duration, can be biased by a slightly asynchronous sound. In this review, we examine various manifestations of spatial and temporal attraction between the senses (both direct effects and aftereffects), and we discuss important constraints on the occurrence of these effects. Factors that potentially modulate ventriloquism—such as attention, synesthetic correspondence, and other cognitive factors—are described. We trace theories and models of spatial and temporal ventriloquism, from the traditional unity assumption and modality appropriateness hypothesis to more recent Bayesian and neural network approaches. Finally, we summarize recent evidence probing the underlying neural mechanisms of spatial and temporal ventriloquism.
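The Bayesian accounts mentioned in this review commonly model ventriloquism as reliability-weighted (maximum-likelihood) cue combination: each modality's location estimate is weighted by its inverse variance, so the spatially more precise visual signal pulls the fused estimate toward the flash. A minimal sketch, in which the noise values (`sigma_a`, `sigma_v`) are illustrative assumptions rather than fitted parameters from any of the studies listed here:

```python
# Reliability-weighted audio-visual cue combination, a standard
# Bayesian sketch of spatial ventriloquism. Sigma values below are
# illustrative assumptions, not fitted data.

def combine(loc_a, sigma_a, loc_v, sigma_v):
    """Return the fused location estimate and its variance.

    Each cue is weighted by its reliability (1 / variance), so the
    more precise cue dominates the fused estimate.
    """
    w_a = 1.0 / sigma_a ** 2   # auditory reliability
    w_v = 1.0 / sigma_v ** 2   # visual reliability
    loc = (w_a * loc_a + w_v * loc_v) / (w_a + w_v)
    var = 1.0 / (w_a + w_v)    # fused estimate is more precise than either cue
    return loc, var

# Sound at 0 deg, flash at 15 deg; vision assumed 4x more precise.
loc, var = combine(loc_a=0.0, sigma_a=8.0, loc_v=15.0, sigma_v=2.0)
print(loc, var)  # fused location lies close to the visual position
```

On these assumed values the perceived sound location is drawn most of the way toward the flash, which is the "visual capture" pattern the review describes; shrinking visual reliability (e.g., a threshold-level flash) weakens the pull, in line with the salience effects reported in entry 5.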

13.
Exposure to synchronous but spatially discordant auditory and visual inputs produces, beyond immediate cross-modal biases, adaptive recalibrations of the respective localization processes that manifest themselves in aftereffects. Such recalibrations probably play an important role in maintaining the coherence of spatial representations across the various spatial senses. The present study is part of a research program focused on the way recalibrations generalize to stimulus values different from those used for adaptation. Considering the case of sound frequency, we recently found that, in contradiction with an earlier report, auditory aftereffects generalize nearly entirely across two octaves. In this new experiment, participants were adapted to an 18° auditory-visual discordance with either 400 or 6400 Hz tones, and their subsequent sound localization was tested across this whole four-octave frequency range. Substantial aftereffects, decreasing significantly with increasing difference between test and adapter frequency, were obtained at all combinations of adapter and test frequency. Implications of these results concerning the functional site at which visual recalibration of auditory localization might take place are discussed.

14.
Brosch T, Grandjean D, Sander D, Scherer KR. Cognition, 2008, 106(3): 1497-1503
Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined within-modality effects, most frequently using pictures of emotional stimuli to modulate visual attention. In this study, we used simultaneously presented utterances with emotional and neutral prosody as cues for a visually presented target in a cross-modal dot probe task. Response times towards targets were faster when they appeared at the location of the source of the emotional prosody. Our results show for the first time a cross-modal attentional modulation of visual attention by auditory affective prosody.

15.
The participants in this study discriminated the position of tactile target stimuli presented at the tip or the base of the forefinger of one of the participants’ hands, while ignoring visual distractor stimuli. The visual distractor stimuli were presented from two circles on a display aligned with the tactile targets in Experiment 1 or orthogonal to them in Experiment 2. Tactile discrimination performance was slower and less accurate when the visual distractor stimuli were presented from incongruent locations relative to the tactile target stimuli (e.g., tactile target at the base of the finger with top visual distractor) highlighting a cross-modal congruency effect. We examined whether the presence and orientation of a simple line drawing of a hand, which was superimposed on the visual distractor stimuli, would modulate the cross-modal congruency effects. When the tactile targets and the visual distractors were spatially aligned, the modulatory effects of the hand picture were small (Experiment 1). However, when they were spatially misaligned, the effects were much larger, and the direction of the cross-modal congruency effects changed in accordance with the orientation of the picture of the hand, as if the hand picture corresponded to the participants’ own stimulated hand (Experiment 2). The results suggest that the two-dimensional picture of a hand can modulate processes maintaining our internal body representation. We also observed that the cross-modal congruency effects were influenced by the postures of the stimulated and the responding hands. These results reveal the complex nature of spatial interactions among vision, touch, and proprioception.

16.
Auditory saltation is a misperception of the spatial location of repetitive, transient stimuli. It arises when clicks at one location are followed in perfect temporal cadence by identical clicks at a second location. This report describes two psychophysical experiments designed to examine the sensitivity of auditory saltation to different stimulus cues for auditory spatial perception. Experiment 1 was a dichotic study in which six different six-click train stimuli were used to generate the saltation effect. Clicks lateralised by using interaural time differences and clicks lateralised by using interaural level differences produced equivalent saltation effects, confirming an earlier finding. Switching the stimulus cue from an interaural time difference to an interaural level difference (or the reverse) in mid train was inconsequential to the saltation illusion. Experiment 2 was a free-field study in which subjects rated the illusory motion generated by clicks emitted from two sound sources symmetrically disposed around the interaural axis, i.e., on the same cone of confusion in the auditory hemifield opposite one ear. Stimuli in such positions produce spatial location judgments that are based more heavily on monaural spectral information than on binaural computations. The free-field stimuli produced robust saltation. The data from both experiments are consistent with the view that auditory saltation can emerge from spatial processing, irrespective of the stimulus cue information used to determine click laterality or location.

17.
Rapid adaptation to auditory-visual spatial disparity
The so-called ventriloquism aftereffect is a remarkable example of rapid adaptive changes in spatial localization caused by visual stimuli. After exposure to a consistent spatial disparity of auditory and visual stimuli, localization of sound sources is systematically shifted to correct for the deviation of the sound from visual positions during the previous adaptation period. In the present study, this aftereffect was induced by presenting, within 17 min, 1800 repetitive noise or pure-tone bursts in combination with synchronized flashing light spots displaced by 20°, in total darkness. Post-adaptive sound localization, measured by a method of manual pointing, was significantly shifted 2.4° (noise), 3.1° (1 kHz tones), or 5.8° (4 kHz tones) compared with the pre-adaptation condition. There was no transfer across frequencies; that is, shifts in localization were insignificant when the frequencies used for adaptation and the post-adaptation localization test were different. It is hypothesized that these aftereffects may rely on shifts in neural representations of auditory space with respect to those of visual space, induced by intersensory spatial disparity, and may thus reflect a phenomenon of neural short-term plasticity.

18.
Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention, but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age. Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.

19.
When stimuli are associated with reward outcome, their visual features acquire high attentional priority such that stimuli possessing those features involuntarily capture attention. Whether a particular feature is predictive of reward, however, will vary with a number of contextual factors. One such factor is spatial location: for example, red berries are likely to be found in low-lying bushes, whereas yellow bananas are likely to be found on treetops. In the present study, I explore whether the attentional priority afforded to reward-associated features is modulated by such location-based contingencies. The results demonstrate that when a stimulus feature is associated with a reward outcome in one spatial location but not another, attentional capture by that feature is selective to when it appears in the rewarded location. This finding provides insight into how reward learning effectively modulates attention in an environment with complex stimulus–reward contingencies, thereby supporting efficient foraging.

20.
Involuntary listening aids seeing: evidence from human electrophysiology
It is well known that sensory events of one modality can influence judgments of sensory events in other modalities. For example, people respond more quickly to a target appearing at the location of a previous cue than to a target appearing at another location, even when the two stimuli are from different modalities. Such cross-modal interactions suggest that involuntary spatial attention mechanisms are not entirely modality-specific. In the present study, event-related brain potentials (ERPs) were recorded to elucidate the neural basis and timing of involuntary, cross-modal spatial attention effects. We found that orienting spatial attention to an irrelevant sound modulates the ERP to a subsequent visual target over modality-specific, extrastriate visual cortex, but only after the initial stages of sensory processing are completed. These findings are consistent with the proposal that involuntary spatial attention orienting to auditory and visual stimuli involves shared, or at least linked, brain mechanisms.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号