Similar Articles
1.
Exposure to synchronous but spatially discordant auditory and visual inputs produces, beyond immediate cross-modal biases, adaptive recalibrations of the respective localization processes that manifest themselves in aftereffects. Such recalibrations probably play an important role in maintaining the coherence of spatial representations across the various spatial senses. The present study is part of a research program focused on the way recalibrations generalize to stimulus values different from those used for adaptation. Considering the case of sound frequency, we recently found that, in contradiction with an earlier report, auditory aftereffects generalize nearly entirely across two octaves. In this new experiment, participants were adapted to an 18 degrees auditory-visual discordance with either 400 or 6400 Hz tones, and their subsequent sound localization was tested across this whole four-octave frequency range. Substantial aftereffects, decreasing significantly with increasing difference between test and adapter frequency, were obtained at all combinations of adapter and test frequency. Implications of these results concerning the functional site at which visual recalibration of auditory localization might take place are discussed.
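For concreteness, the four-octave span follows directly from the two adapter frequencies used:

```latex
\log_2\!\left(\frac{6400\,\mathrm{Hz}}{400\,\mathrm{Hz}}\right) = \log_2 16 = 4 \text{ octaves}
```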

2.
In order to determine the spatial location of an object that is simultaneously seen and heard, the brain assigns higher weights to the sensory inputs that provide the most reliable information. For example, in the well-known ventriloquism effect, the perceived location of a sound is shifted toward the location of a concurrent but spatially misaligned visual stimulus. This perceptual illusion can be explained by the usually much higher spatial resolution of the visual system as compared to the auditory system. Recently, it has been demonstrated that this cross-modal binding process is not fully automatic, but can be modulated by emotional learning. Here we tested whether cross-modal binding is similarly affected by motivational factors, as exemplified by reward expectancy. Participants received a monetary reward for precise and accurate localization of brief auditory stimuli. Auditory stimuli were accompanied by task-irrelevant, spatially misaligned visual stimuli. Thus, the participants’ motivational goal of maximizing their reward was put in conflict with the spatial bias of auditory localization induced by the ventriloquist situation. Crucially, the amounts of expected reward differed between the two hemifields. As compared to the hemifield associated with a low reward, the ventriloquism effect was reduced in the high-reward hemifield. This finding suggests that reward expectations modulate cross-modal binding processes, possibly mediated via cognitive control mechanisms. The motivational significance of the stimulus material, thus, constitutes an important factor that needs to be considered in the study of top-down influences on multisensory integration.
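The reliability-weighting account in the opening sentences can be written down as a minimal maximum-likelihood cue-combination model. The sketch below is a generic textbook illustration, not the authors' analysis; the variance values are assumptions chosen for demonstration:

```python
def ml_fused_location(x_vis, var_vis, x_aud, var_aud):
    """Fuse two spatial estimates, weighting each cue by its
    reliability (inverse variance), as in maximum-likelihood
    cue-combination models of the ventriloquism effect."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_aud)
    return w_vis * x_vis + (1.0 - w_vis) * x_aud

# Illustrative numbers: vision (0 deg, low variance) vs. a sound
# presented 10 deg away with much higher localization variance.
print(ml_fused_location(x_vis=0.0, var_vis=1.0, x_aud=10.0, var_aud=25.0))
# ~0.38 deg: the fused percept sits near the visual location,
# i.e., the sound is perceptually "captured" by the visual stimulus.
```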

3.
A sound presented in temporal proximity to a light can alter the perceived temporal occurrence of that light (temporal ventriloquism). The authors explored whether spatial discordance between the sound and light affects this phenomenon. Participants made temporal order judgments about which of 2 lights appeared first, while they heard sounds before the 1st and after the 2nd light. Sensitivity was higher (i.e., a lower just noticeable difference) when the sound-light interval was approximately 100 ms rather than approximately 0 ms. This temporal ventriloquist effect was unaffected by whether sounds came from the same or a different position as the lights, whether the sounds were static or moved, or whether they came from the same or opposite sides of fixation. Yet, discordant sounds interfered with speeded visual discrimination. These results challenge the view that intersensory interactions in general require spatial correspondence between the stimuli.
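For reference, the just noticeable difference (JND) quoted here is typically obtained by fitting a psychometric function to the temporal order judgments and reading off a threshold. A minimal sketch, with fabricated illustrative data and one common fitting convention (details vary across studies):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pse, sigma):
    """Cumulative Gaussian: probability of one response alternative
    as a function of the SOA (ms) between the two lights."""
    return norm.cdf(soa, loc=pse, scale=sigma)

# Fabricated example data: SOAs (ms) and proportion of "right light
# first" responses in a two-light temporal order judgment task.
soas = np.array([-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0])
p_first = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.88, 0.97])

(pse, sigma), _ = curve_fit(psychometric, soas, p_first, p0=[0.0, 30.0])
jnd = sigma * norm.ppf(0.75)  # one common 75%-point convention
print(f"PSE = {pse:.1f} ms, JND = {jnd:.1f} ms")
```

A lower JND means finer temporal resolution, which is the sense in which the approximately 100 ms sound-light interval improved sensitivity.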

4.
Rapid adaptation to auditory-visual spatial disparity
The so-called ventriloquism aftereffect is a remarkable example of rapid adaptive changes in spatial localization caused by visual stimuli. After exposure to a consistent spatial disparity of auditory and visual stimuli, localization of sound sources is systematically shifted to correct for the deviation of the sound from visual positions during the previous adaptation period. In the present study, this aftereffect was induced by presenting, within 17 min, 1800 repetitive noise or pure-tone bursts in combination with synchronized flashing light spots at a 20° disparity, in total darkness. Post-adaptive sound localization, measured by a method of manual pointing, was significantly shifted 2.4° (noise), 3.1° (1 kHz tones), or 5.8° (4 kHz tones) compared with the pre-adaptation condition. There was no transfer across frequencies; that is, shifts in localization were nonsignificant when the frequencies used for adaptation and the post-adaptation localization test were different. It is hypothesized that these aftereffects may rely on shifts in neural representations of auditory space with respect to those of visual space, induced by intersensory spatial disparity, and may thus reflect a phenomenon of neural short-term plasticity.
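The reported shifts are, at bottom, differences between mean pointing responses before and after adaptation. A minimal sketch of that computation (the numbers are hypothetical, not the study's data):

```python
from statistics import mean

def aftereffect_shift(pre_errors, post_errors):
    """Ventriloquism aftereffect: mean pointing shift (deg) between the
    pre- and post-adaptation localization tests, signed toward the
    visual offset used during adaptation."""
    return mean(post_errors) - mean(pre_errors)

# Hypothetical pointing errors (deg) for one adapter/test frequency pair:
pre = [-0.5, 0.3, 0.1, -0.2]
post = [2.1, 2.6, 1.9, 2.8]
print(round(aftereffect_shift(pre, post), 1))  # ~2.4 deg
```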

5.
This study used a spatial task-switching paradigm and manipulated the salience of visual and auditory stimuli to examine the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of the visual and auditory stimuli significantly affected the visual dominance effect. In Experiment 1, when the auditory stimulus was highly salient, the visual dominance effect was markedly weakened. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was of low salience, the visual dominance effect was further weakened but still present. The results support biased competition theory: in cross-modal audiovisual interaction, visual stimuli are more salient and therefore enjoy a processing advantage during multisensory integration.

6.
We constantly integrate the information that is available to our various senses. The extent to which the mechanisms of multisensory integration are subject to the influences of attention, emotion, and/or motivation is currently unknown. The “ventriloquist effect” is widely assumed to be an automatic crossmodal phenomenon, shifting the perceived location of an auditory stimulus toward a concurrently presented visual stimulus. In the present study, we examined whether audiovisual binding, as indicated by the magnitude of the ventriloquist effect, is influenced by threatening auditory stimuli presented prior to the ventriloquist experiment. Syllables spoken in a fearful voice were presented from one of eight loudspeakers, while syllables spoken in a neutral voice were presented from the other seven locations. Subsequently, participants had to localize pure tones while trying to ignore concurrent visual stimuli (both the auditory and the visual stimuli here were emotionally neutral). A reliable ventriloquist effect was observed. The emotional stimulus manipulation resulted in a reduction of the magnitude of the subsequently measured ventriloquist effect in both hemifields, as compared to a control group exposed to a similar attention-capturing, but nonemotional, manipulation. These results suggest that the emotional system is capable of influencing multisensory binding processes that have heretofore been considered automatic.

7.
Previous research has demonstrated that the localization of auditory or tactile stimuli can be biased by the simultaneous presentation of a visual stimulus from a different spatial position. We investigated whether auditory localization judgments could also be affected by the presentation of spatially displaced tactile stimuli, using a procedure designed to reveal perceptual interactions across modalities. Participants made left-right discrimination responses regarding the perceived location of sounds, which were presented either in isolation or together with tactile stimulation to the fingertips. The results demonstrate that the apparent location of a sound can be biased toward tactile stimulation when it is synchronous, but not when it is asynchronous, with the auditory event. Directing attention to the tactile modality did not increase the bias of sound localization toward synchronous tactile stimulation. These results provide the first demonstration of the tactile capture of audition.

8.
In the ventriloquism aftereffect, brief exposure to a consistent spatial disparity between auditory and visual stimuli leads to a subsequent shift in subjective sound localization toward the positions of the visual stimuli. Such rapid adaptive changes probably play an important role in maintaining the coherence of spatial representations across the various sensory systems. In the research reported here, we used event-related potentials (ERPs) to identify the stage in the auditory processing stream that is modulated by audiovisual discrepancy training. Both before and after exposure to synchronous audiovisual stimuli that had a constant spatial disparity of 15°, participants reported the perceived location of brief auditory stimuli that were presented from central and lateral locations. In conjunction with a sound localization shift in the direction of the visual stimuli (the behavioral ventriloquism aftereffect), auditory ERPs as early as 100 ms poststimulus (N100) were systematically modulated by the disparity training. These results suggest that cross-modal learning was mediated by a relatively early stage in the auditory cortical processing stream.
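As a rough sketch of the kind of N100 measure involved (the window bounds, sampling rate, and data below are illustrative assumptions; real ERP pipelines add filtering, artifact rejection, and baseline correction):

```python
import numpy as np

def mean_n100_amplitude(epochs, times, window=(0.080, 0.120)):
    """Average single-trial epochs into an ERP, then return the mean
    amplitude (microvolts) inside the N100 latency window (seconds).
    epochs: array of shape (n_trials, n_samples)."""
    erp = epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

# Hypothetical pre- vs. post-training epochs (noise only, for illustration):
fs = 500.0
times = np.arange(-0.100, 0.400, 1.0 / fs)
rng = np.random.default_rng(0)
pre = rng.normal(0.0, 1.0, size=(100, times.size))
post = rng.normal(0.0, 1.0, size=(100, times.size))
print(mean_n100_amplitude(pre, times), mean_n100_amplitude(post, times))
```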

9.
Involuntary listening aids seeing: evidence from human electrophysiology
It is well known that sensory events of one modality can influence judgments of sensory events in other modalities. For example, people respond more quickly to a target appearing at the location of a previous cue than to a target appearing at another location, even when the two stimuli are from different modalities. Such cross-modal interactions suggest that involuntary spatial attention mechanisms are not entirely modality-specific. In the present study, event-related brain potentials (ERPs) were recorded to elucidate the neural basis and timing of involuntary, cross-modal spatial attention effects. We found that orienting spatial attention to an irrelevant sound modulates the ERP to a subsequent visual target over modality-specific, extrastriate visual cortex, but only after the initial stages of sensory processing are completed. These findings are consistent with the proposal that involuntary spatial attention orienting to auditory and visual stimuli involves shared, or at least linked, brain mechanisms.

10.
It is well known that discrepancies in the location of synchronized auditory and visual events can lead to mislocalizations of the auditory source, so-called ventriloquism. In two experiments, we tested whether such cross-modal influences on auditory localization depend on deliberate visual attention to the biasing visual event. In Experiment 1, subjects pointed to the apparent source of sounds in the presence or absence of a synchronous peripheral flash. They also monitored for target visual events, either at the location of the peripheral flash or in a central location. Auditory localization was attracted toward the synchronous peripheral flash, but this was unaffected by where deliberate visual attention was directed in the monitoring task. In Experiment 2, bilateral flashes were presented in synchrony with each sound, to provide competing visual attractors. When these visual events were equally salient on the two sides, auditory localization was unaffected by which side subjects monitored for visual targets. When one flash was larger than the other, auditory localization was slightly but reliably attracted toward it, but again regardless of where visual monitoring was required. We conclude that ventriloquism largely reflects automatic sensory interactions, with little or no role for deliberate spatial attention.

11.
The multisensory response enhancement (MRE), occurring when the response to a visual target integrated with a spatially congruent sound is stronger than the response to the visual target alone, is believed to be mediated by the superior colliculus (SC) (Stein & Meredith, 1993). Here, we used a focused attention paradigm to show that the spatial congruency effect occurs with red (SC-effective) but not blue (SC-ineffective) visual stimuli, when presented with spatially congruent sounds. To isolate the chromatic component of SC-ineffective targets and to demonstrate the selectivity of the spatial congruency effect, we used the random luminance modulation technique (Experiment 1) and the tritanopic technique (Experiment 2). Our results indicate that the spatial congruency effect does not require the distribution of attention over different sensory modalities and provide correlational evidence that the SC mediates the effect.

12.
Unlike visual and tactile stimuli, auditory signals that allow perception of timbre, pitch and localization are temporal. To process these, the auditory nervous system must either possess specialized neural machinery for analyzing temporal input, or transform the initial responses into patterns that are spatially distributed across its sensory epithelium. The former hypothesis, which postulates the existence of structures that facilitate temporal processing, is most popular. However, I argue that the cochlea transforms sound into spatiotemporal response patterns on the auditory nerve and central auditory stages; and that a unified computational framework exists for central auditory, visual and other sensory processing. Specifically, I explain how four fundamental concepts in visual processing play analogous roles in auditory processing.
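A minimal sketch of the transformation described here, using a bank of log-spaced bandpass filters as a crude stand-in for cochlear frequency analysis (the filter choices are illustrative assumptions, not the paper's model):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def cochleagram(signal, fs, n_channels=16, fmin=100.0, fmax=6000.0):
    """Map a 1-D waveform onto a (channel x time) spatiotemporal
    pattern: log-spaced bandpass filters stand in for cochlear
    frequency analysis, rectification for envelope extraction."""
    centers = np.geomspace(fmin, fmax, n_channels)
    rows = []
    for fc in centers:
        lo, hi = fc / 2 ** 0.25, fc * 2 ** 0.25  # ~half-octave band
        sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        rows.append(np.abs(sosfilt(sos, signal)))  # crude envelope
    return np.vstack(rows)

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone
pattern = cochleagram(tone, fs)
print(pattern.shape)  # (16, 1600): place (channel) x time
```

The one-dimensional temporal signal comes out as a two-dimensional place-by-time pattern, which is the sense in which subsequent stages can treat it like a spatially distributed input.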

13.
Multisensory-mediated auditory localization
Multisensory integration is a powerful mechanism for maximizing sensitivity to sensory events. We examined its effects on auditory localization in healthy human subjects. The specific objective was to test whether the relative intensity and location of a seemingly irrelevant visual stimulus would influence auditory localization in accordance with the inverse effectiveness and spatial rules of multisensory integration that have been developed from neurophysiological studies with animals [Stein and Meredith, 1993 The Merging of the Senses (Cambridge, MA: MIT Press)]. Subjects were asked to localize a sound while a neutral visual stimulus was presented either above threshold (supra-threshold) or at threshold. In both cases the spatial disparity of the visual and auditory stimuli was systematically varied. The results reveal that stimulus salience is a critical factor in determining the effect of a neutral visual cue on auditory localization. Visual bias and, hence, perceptual translocation of the auditory stimulus appeared when the visual stimulus was supra-threshold, regardless of its location. However, this was not the case when the visual stimulus was at threshold. In this case, the influence of the visual cue was apparent only when the two cues were spatially coincident and resulted in an enhancement of stimulus localization. These data suggest that the brain uses multiple strategies to integrate multisensory information.
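The enhancement at issue is commonly quantified with a multisensory response enhancement index in the tradition of Stein and Meredith (1993), where R_AV is the response to the combined stimulus and R_A, R_V are the unimodal responses:

```latex
\mathrm{MRE} = \frac{R_{AV} - \max(R_A,\, R_V)}{\max(R_A,\, R_V)} \times 100\%
```

Inverse effectiveness is the observation that this percentage grows as the unimodal responses weaken; for example, unimodal responses of 8 and 5 (arbitrary units) and a bimodal response of 14 give an MRE of (14 - 8)/8 = 75%.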

14.
When participants judge multimodal audiovisual stimuli, the auditory information strongly dominates temporal judgments, whereas the visual information dominates spatial judgments. However, temporal judgments are not independent of spatial features. For example, in the kappa effect, the time interval between two marker stimuli appears longer when they originate from spatially distant sources rather than from the same source. We investigated the kappa effect for auditory markers presented with accompanying irrelevant visual stimuli. The spatial sources of the markers were varied such that they were either congruent or incongruent across modalities. In two experiments, we demonstrated that the spatial layout of the visual stimuli affected perceived auditory interval duration. This effect occurred although the visual stimuli were designated to be task-irrelevant for the duration reproduction task in Experiment 1, and even when the visual stimuli did not contain sufficient temporal information to perform a two-interval comparison task in Experiment 2. We conclude that the visual and auditory marker stimuli were integrated into a combined multisensory percept containing temporal as well as task-irrelevant spatial aspects of the stimulation. Through this multisensory integration process, visuospatial information affected even temporal judgments, which are typically dominated by the auditory modality.
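One common formalization of the kappa effect is an imputed-motion account (the notation below is ours, not the authors'): the judged interval is a compromise between the physical interval t and the travel time d/v implied by the distance d between the sources and an assumed speed v:

```latex
\hat{t} = (1 - w)\, t + w\, \frac{d}{v}, \qquad 0 \le w \le 1
```

Larger spatial separation d then lengthens the judged interval, which is the signature of the effect.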

15.
Spatial information processing takes place in different brain regions that receive converging inputs from several sensory modalities. Because of our own movements—for example, changes in eye position, head rotations, and so forth—unimodal sensory representations move continuously relative to one another. It is generally assumed that for multisensory integration to be an orderly process, it should take place between stimuli at congruent spatial locations. In the monkey posterior parietal cortex, the ventral intraparietal (VIP) area is specialized for the analysis of movement information using visual, somatosensory, vestibular, and auditory signals. Focusing on the visual and tactile modalities, we found that in area VIP, like in the superior colliculus, multisensory signals interact at the single neuron level, suggesting that this area participates in multisensory integration. Curiously, VIP does not use a single, invariant coordinate system to encode locations within and across sensory modalities. Visual stimuli can be encoded with respect to the eye, the head, or halfway between the two reference frames, whereas tactile stimuli seem to be prevalently encoded relative to the body. Hence, while some multisensory neurons in VIP could encode spatially congruent tactile and visual stimuli independently of current posture, in other neurons this would not be the case. Future work will need to evaluate the implications of these observations for theories of optimal multisensory integration.
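To make the reference-frame distinction concrete, here is a one-dimensional sketch (the function names and numbers are hypothetical, for illustration only):

```python
def partially_shifting(stimulus_head_deg, eye_position_deg, shift_fraction):
    """Represent a head-centered stimulus location in a frame that
    shifts with the eye by only a fraction of the gaze displacement
    (shift_fraction 0 = head/body-centered, 1 = fully eye-centered)."""
    return stimulus_head_deg - shift_fraction * eye_position_deg

# A stimulus fixed at 10 deg in head coordinates, gaze 20 deg to the right:
print(partially_shifting(10.0, 20.0, 0.0))  # 10.0  -> head-centered code
print(partially_shifting(10.0, 20.0, 1.0))  # -10.0 -> eye-centered code
print(partially_shifting(10.0, 20.0, 0.5))  # 0.0   -> halfway between frames
```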

16.
Although the relationship between “mere exposure” and attitude enhancement is well established in the adult domain, there has been little similar work with children. This article examines whether toddlers’ visual attention toward pictures of foods can be enhanced by repeated visual exposure to pictures of foods in a parent-administered picture book. We describe three studies that explored the number and nature of exposures required to elicit positive visual preferences for stimuli and the extent to which induced preferences generalize to other similar items. Results show that positive preferences for stimuli are easily and reliably induced in children and, importantly, that this effect of exposure is not restricted to the exposed stimulus per se but also applies to new representations of the exposed item.

17.
In this paper, we show that human saccadic eye movements toward a visual target are generated with a reduced latency when this target is spatially and temporally aligned with an irrelevant auditory nontarget. This effect gradually disappears if the temporal and/or spatial alignment of the visual and auditory stimuli are changed. When subjects are able to accurately localize the auditory stimulus in two dimensions, the spatial dependence of the reduction in latency depends on the actual radial distance between the auditory and the visual stimulus. If, however, only the azimuth of the sound source can be determined by the subjects, the horizontal target separation determines the strength of the interaction. Neither saccade accuracy nor saccade kinematics were affected in these paradigms. We propose that, in addition to an aspecific warning signal, the reduction of saccadic latency is due to interactions that take place at a multimodal stage of saccade programming, where the perceived positions of visual and auditory stimuli are represented in a common frame of reference. This hypothesis is in agreement with our finding that the saccades often are initially directed to the average position of the visual and the auditory target, provided that their spatial separation is not too large. Striking similarities with electrophysiological findings on multisensory interactions in the deep layers of the midbrain superior colliculus are discussed.

18.
An ability to detect the common location of multisensory stimulation is essential for us to perceive a coherent environment, to represent the interface between the body and the external world, and to act on sensory information. Regarding the tactile environment “at hand”, we need to represent somatosensory stimuli impinging on the skin surface in the same spatial reference frame as distal stimuli, such as those transduced by vision and audition. Across two experiments we investigated whether 6- (n = 14; Experiment 1) and 4-month-old (n = 14; Experiment 2) infants were sensitive to the colocation of tactile and auditory signals delivered to the hands. We recorded infants’ visual preferences for spatially congruent and incongruent auditory-tactile events delivered to their hands. At 6 months, infants looked longer toward incongruent stimuli, whilst at 4 months infants looked longer toward congruent stimuli. Thus, even from 4 months of age, infants are sensitive to the colocation of simultaneously presented auditory and tactile stimuli. We conclude that 4- and 6-month-old infants can represent auditory and tactile stimuli in a common spatial frame of reference. We explain the age-wise shift in infants’ preferences from congruent to incongruent in terms of an increased preference for novel crossmodal spatial relations based on the accumulation of experience. A comparison of looking preferences across the congruent and incongruent conditions with a unisensory control condition indicates that the ability to perceive auditory-tactile colocation is based on a crossmodal rather than a supramodal spatial code by 6 months of age at least.

19.
Attentional capture in serial audiovisual search tasks
The phenomenon of attentional capture has typically been studied in spatial search tasks. Dalton and Lavie recently demonstrated that auditory attention can also be captured by a singleton item in a rapidly presented tone sequence. In the experiments reported here, we investigated whether these findings extend cross-modally to sequential search tasks using audiovisual stimuli. Participants searched a stream of centrally presented audiovisual stimuli for targets defined on a particular dimension (e.g., duration) in a particular modality. Task performance was compared in the presence versus absence of a unique singleton distractor. Irrelevant auditory singletons captured attention during visual search tasks, leading to interference when they coincided with distractors but to facilitation when they coincided with targets. These results demonstrate attentional capture by auditory singletons during nonspatial visual search.
