Similar Documents
20 similar documents found (search time: 15 ms)
1.
In the ventriloquism aftereffect, brief exposure to a consistent spatial disparity between auditory and visual stimuli leads to a subsequent shift in subjective sound localization toward the positions of the visual stimuli. Such rapid adaptive changes probably play an important role in maintaining the coherence of spatial representations across the various sensory systems. In the research reported here, we used event-related potentials (ERPs) to identify the stage in the auditory processing stream that is modulated by audiovisual discrepancy training. Both before and after exposure to synchronous audiovisual stimuli that had a constant spatial disparity of 15°, participants reported the perceived location of brief auditory stimuli that were presented from central and lateral locations. In conjunction with a sound localization shift in the direction of the visual stimuli (the behavioral ventriloquism aftereffect), auditory ERPs as early as 100 ms poststimulus (N100) were systematically modulated by the disparity training. These results suggest that cross-modal learning was mediated by a relatively early stage in the auditory cortical processing stream.
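For concreteness, the behavioral aftereffect described here amounts to the mean shift of sound-localization responses from the pre- to the post-exposure phase, signed toward the direction of the visual offset. A minimal Python sketch with hypothetical localization errors (not the study's data):

```python
import numpy as np

def ventriloquism_aftereffect(pre_errors_deg, post_errors_deg, disparity_sign=+1):
    """Mean localization shift (deg) from pre- to post-exposure,
    signed so that positive values point toward the visual offset."""
    pre = np.asarray(pre_errors_deg, dtype=float)
    post = np.asarray(post_errors_deg, dtype=float)
    return disparity_sign * (post.mean() - pre.mean())

# Hypothetical data: localization errors (response minus true azimuth, deg)
# before and after exposure to a +15 deg audiovisual disparity.
pre  = [0.4, -1.2, 0.9, -0.3, 0.1]
post = [3.1,  2.4, 4.0,  2.8, 3.5]
print(f"Aftereffect: {ventriloquism_aftereffect(pre, post):+.1f} deg")
```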

2.
A period of exposure to trains of simultaneous but spatially offset auditory and visual stimuli can induce a temporary shift in the perception of sound location. This phenomenon, known as the ‘ventriloquist aftereffect’, reflects a realignment of auditory and visual spatial representations such that they approach perceptual alignment despite their physical spatial discordance. Such dynamic changes to sensory representations are likely to underlie the brain’s ability to accommodate inter-sensory discordance produced by sensory errors (particularly in sound localization) and variability in sensory transduction. It is currently unknown, however, whether these plastic changes induced by adaptation to spatially disparate inputs occur automatically or whether they are dependent on selectively attending to the visual or auditory stimuli. Here, we demonstrate that robust auditory spatial aftereffects can be induced even in the presence of a competing visual stimulus. Importantly, we found that when attention is directed to the competing stimuli, the pattern of aftereffects is altered. These results indicate that attention can modulate the ventriloquist aftereffect.

3.
Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via 21 loudspeakers mounted horizontally (from 80° on the left to 80° on the right). Participants had to localize the target either by using a swivel hand-pointer or by head-pointing. Individual lateral preferences of eye, ear, hand, and foot were obtained using a questionnaire. With both pointing methods, participants showed a bias in sound localization that was to the side contralateral to the preferred hand, an effect that was unrelated to their overall precision. This partially parallels findings in the visual modality, as left-handers typically have a more rightward bias in visual line bisection compared with right-handers. Despite the differences in neural processing of auditory and visual spatial information, these findings show similar effects of lateral preference on auditory and visual spatial perception. This suggests that supramodal neural processes are involved in the mechanisms generating laterality in space perception.
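The distinction drawn here between a lateral bias and overall precision can be made explicit as constant error (mean signed error) versus variable error (response scatter). A small illustrative sketch with hypothetical pointing data:

```python
import numpy as np

def localization_bias_and_precision(responses_deg, targets_deg):
    """Constant error (signed bias, deg) and variable error (precision, deg)
    of pointing responses; positive bias = rightward."""
    err = np.asarray(responses_deg, float) - np.asarray(targets_deg, float)
    return err.mean(), err.std(ddof=1)

# Hypothetical pointing data for one participant (deg azimuth).
targets   = [-80, -40, 0, 40, 80]
responses = [-76, -38, 3, 44, 83]
bias, precision = localization_bias_and_precision(responses, targets)
print(f"constant error {bias:+.1f} deg, variable error {precision:.1f} deg")
```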

4.
Observers were adapted to simulated auditory movement produced by dynamically varying the interaural time and intensity differences of tones (500 or 2,000 Hz) presented through headphones. At 10-sec intervals during adaptation, various probe tones were presented for 1 sec (the frequency of the probe was always the same as that of the adaptation stimulus). Observers judged the direction of apparent movement (“left” or “right”) of each probe tone. At 500 Hz, with a 200-deg/sec adaptation velocity, “stationary” probe tones were consistently judged to move in the direction opposite to that of the adaptation stimulus. We call this result an auditory motion aftereffect. In slower velocity adaptation conditions, progressively less aftereffect was demonstrated. In the higher frequency condition (2,000 Hz, 200-deg/sec adaptation velocity), we found no evidence of motion aftereffect. The data are discussed in relation to the well-known visual analog, the “waterfall effect.” Although the auditory aftereffect is weaker than the visual analog, the data suggest that auditory motion perception might be mediated, as is generally believed for the visual system, by direction-specific movement analyzers.
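A headphone stimulus with dynamically varying interaural time and intensity differences of this kind can be sketched as follows; the parameter values (ITD and ILD ranges, duration, sampling rate) are illustrative assumptions, not those of the original study:

```python
import numpy as np

def moving_tone(freq_hz=500.0, dur_s=1.0, fs=44100,
                itd_max_s=6e-4, ild_max_db=10.0):
    """Headphone stimulus that sweeps from left to right by ramping the
    interaural time difference (ITD) and intensity difference (ILD)."""
    t = np.arange(int(dur_s * fs)) / fs
    sweep = np.linspace(-1.0, 1.0, t.size)      # -1 = far left, +1 = far right
    itd = sweep * itd_max_s                     # right ear leads when positive
    gain_db = sweep * ild_max_db / 2.0          # right ear louder when positive
    tone = lambda delay: np.sin(2 * np.pi * freq_hz * (t - delay))
    left  = tone(+itd / 2) * 10 ** (-gain_db / 20)
    right = tone(-itd / 2) * 10 ** (+gain_db / 20)
    return np.column_stack([left, right])       # (n_samples, 2) stereo array

stimulus = moving_tone()
```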

5.
Subjects were tested for both ear-hand and eye-hand co-ordination before and after monitoring a synchronous series of noise bursts and of light flashes coming from the same spatial position, but with the virtual position of the flashes displaced 15° laterally by prisms. Attention was forced on both stimuli by the instruction to detect occasional reductions in intensity. No subject reported noticing the spatial discrepancy. Nevertheless, ear-hand co-ordination was shifted in the direction of the prismatic displacement, and eye-hand co-ordination in the opposite direction. Both shifts were observed with instructions suggesting that the sound and the light came from one single source, with instructions suggesting two separate sources, and also with no information regarding the spatial relationship of sound and light. It is concluded that the resolution of auditory-visual spatial conflict involves recalibrations of both visual and auditory data and that these alterations last long enough to be detected as after-effects.

6.
Auditory redundancy gains were assessed in two experiments in which a simple reaction time task was used. In each trial, an auditory stimulus was presented to the left ear, to the right ear, or simultaneously to both ears. The physical difference between auditory stimuli presented to the two ears was systematically increased across experiments. No redundancy gains were observed when the stimuli were identical pure tones or pure tones of different frequencies (Experiment 1). A clear redundancy gain and evidence of coactivation were obtained, however, when one stimulus was a pure tone and the other was white noise (Experiment 2). Experiment 3 employed a two-alternative forced choice localization task and provided evidence that dichotically presented pure tones of different frequencies are apparently integrated into a single percept, whereas a pure tone and white noise are not fused. The results extend previous findings of redundancy gains and coactivation with visual and bimodal stimuli to the auditory modality. Furthermore, at least within this modality, the results indicate that redundancy gains do not emerge when redundant stimuli are integrated into a single percept.
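Coactivation in redundant-target designs is commonly assessed with Miller's race-model inequality, which bounds the redundant-condition reaction-time distribution by the sum of the single-target distributions. A minimal sketch with simulated reaction times (the original analysis may differ):

```python
import numpy as np

def race_model_violation(rt_left, rt_right, rt_both,
                         probes_ms=np.arange(150, 501, 10)):
    """Miller's race-model inequality: under a race model,
    P(RT_both <= t) <= P(RT_left <= t) + P(RT_right <= t).
    Returns the difference at each probe time; positive values
    indicate violations, i.e. evidence of coactivation."""
    cdf = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    bound = np.minimum(cdf(rt_left, probes_ms) + cdf(rt_right, probes_ms), 1.0)
    return cdf(rt_both, probes_ms) - bound

# Hypothetical reaction times (ms) from one participant.
rng = np.random.default_rng(0)
left, right = rng.normal(300, 40, 200), rng.normal(305, 40, 200)
both = rng.normal(265, 35, 200)
print(race_model_violation(left, right, both).max())
```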

7.
Listeners exposed to a tone increasing in intensity report an aftereffect of decreasing loudness in a steady tone heard afterward. In the present study, the spectral dependence of the monotic decreasing-loudness aftereffect (adapting and testing 1 ear) was compared with (a) the spectral dependence of the interotic decreasing-loudness aftereffect (adapting 1 ear and testing the other ear) and (b) a non-adaptation control condition. The purpose was to test the hypothesis that the decreasing-loudness aftereffect may concern the sensory processing associated with dynamic localization. The hypothesis is based on two premises: (a) dynamic localization requires monaural sensory processing, and (b) sensory processing is reflected in spectral selectivity. Hence, the hypothesis would be supported if the monotic aftereffect were more spectrally dependent and stronger than the interotic aftereffect; A. H. Reinhardt-Rutland (1998) showed that the hypothesis is supported with regard to the related increasing-loudness aftereffect. Two listeners were exposed to a 1-kHz adapting stimulus. From responses of “growing softer” or “growing louder” to test stimuli changing in intensity, nulls were calculated; test carrier frequencies ranged from 0.5 kHz to 2 kHz. Confirming the hypothesis, the monotic aftereffect peaked at around the 1-kHz test carrier frequency. In contrast, the interotic aftereffect showed little evidence of spectrally dependent peaking. Except when test and adaptation carrier frequencies differed markedly, the interotic aftereffect was smaller than the monotic aftereffect.
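The nulls referred to here are the physical rates of intensity change heard as steady after adaptation. One common way to estimate such a null is to fit a psychometric function to the proportion of "growing louder" responses and take its 50% point; a hypothetical sketch (not necessarily the authors' procedure):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit

def estimate_null(rates_db_per_s, p_louder):
    """Fit a logistic psychometric function to the proportion of
    'growing louder' responses and return the 50% point (the null)."""
    f = lambda x, mu, sigma: expit((x - mu) / sigma)
    (mu, sigma), _ = curve_fit(f, rates_db_per_s, p_louder, p0=[0.0, 1.0])
    return mu

# Hypothetical proportions of 'growing louder' responses at each test rate (dB/s).
rates = np.array([-4, -2, 0, 2, 4, 6], float)
p     = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.98])
print(f"null = {estimate_null(rates, p):+.2f} dB/s")
```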

8.
Listeners exposed to a tone increasing in intensity report an aftereffect of decreasing loudness in a steady tone heard afterward. In the present study, the spectral dependence of the monotic decreasing-loudness aftereffect (adapting and testing 1 ear) was compared with (a) the spectral dependence of the interotic decreasing-loudness aftereffect (adapting 1 ear and testing the other ear) and (b) a non-adaptation control condition. The purpose was to test the hypothesis that the decreasing-loudness aftereffect may concern the sensory processing associated with dynamic localization. The hypothesis is based on two premises: (a) dynamic localization requires monaural sensory processing, and (b) sensory processing is reflected in spectral selectivity. Hence, the hypothesis would be supported if the monotic aftereffect were more spectrally dependent and stronger than the interotic aftereffect; A. H. Reinhardt-Rutland (1998) showed that the hypothesis is supported with regard to the related increasing-loudness aftereffect. Two listeners were exposed to a 1-kHz adapting stimulus. From responses of "growing softer" or "growing louder" to test stimuli changing in intensity, nulls were calculated; test carrier frequencies ranged from 0.5 kHz to 2 kHz. Confirming the hypothesis, the monotic aftereffect peaked at around the 1-kHz test carrier frequency. In contrast, the interotic aftereffect showed little evidence of spectrally dependent peaking. Except when test and adaptation carrier frequencies differed markedly, the interotic aftereffect was smaller than the monotic aftereffect.

9.
The duration adaptation aftereffect refers to the perceptual bias in judging subsequent durations that results from prolonged adaptation to a particular duration. Whether this aftereffect is spatially selective for visual durations remains controversial: some studies support position invariance, whereas others support position specificity. Such research can effectively reveal the cognitive and neural mechanisms of duration encoding; position invariance would suggest that duration is encoded in higher-level brain areas, whereas position specificity would suggest that it is encoded in primary visual cortex. Future work could examine the visual coordinate frame in which the duration adaptation aftereffect is represented, extend the research to other sensory modalities, and investigate the corresponding neural bases.

10.
Exposure to synchronous but spatially discordant auditory and visual inputs produces, beyond immediate cross-modal biases, adaptive recalibrations of the respective localization processes that manifest themselves in aftereffects. Such recalibrations probably play an important role in maintaining the coherence of spatial representations across the various spatial senses. The present study is part of a research program focused on the way recalibrations generalize to stimulus values different from those used for adaptation. Considering the case of sound frequency, we recently found that, in contradiction with an earlier report, auditory aftereffects generalize nearly entirely across two octaves. In this new experiment, participants were adapted to an 18° auditory-visual discordance with either 400 or 6400 Hz tones, and their subsequent sound localization was tested across this whole four-octave frequency range. Substantial aftereffects, decreasing significantly with increasing difference between test and adapter frequency, were obtained at all combinations of adapter and test frequency. Implications of these results concerning the functional site at which visual recalibration of auditory localization might take place are discussed.

11.
Perceived location of tonal stimuli and narrow noise bands presented in two-dimensional space varies in an orderly manner with changes in stimulus frequency. Hence, frequency has a referent in space that is most apparent during monaural listening. The assumption underlying the present study is that maximum sound pressure level measured at the ear canal entrance for the various frequencies serves as a prominent spectral cue for their spatial referents. Even in binaural localization, location judgments in the vertical plane are strongly influenced by spatial referents. We measured sound pressure levels at the left ear canal entrance for 1.0-kHz-wide noise bands, centered from 4.0 kHz through 10.0 kHz, presented at locations from 60° through −45° in the vertical plane; the horizontal plane coordinate was fixed at −90°. On the basis of these measurements, we fabricated three different band-stop stimuli in which differently centered 2.0-kHz-wide frequency segments were filtered from a broadband noise. Unfiltered broadband noise served as the remaining stimulus. Localization accuracy differed significantly among stimulus conditions (p < .01). Where in the vertical plane most errors were made depended on which frequency segment was filtered from the broadband noise.
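Band-stop stimuli of the kind described, with a 2.0-kHz-wide segment removed from broadband noise, can be approximated as follows; the filter order, sampling rate, and notch centre are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandstop_noise(stop_center_hz, stop_width_hz=2000.0, dur_s=0.5, fs=48000):
    """Broadband noise with a 2-kHz-wide band removed around stop_center_hz,
    roughly analogous to the band-stop stimuli described above."""
    noise = np.random.default_rng(1).standard_normal(int(dur_s * fs))
    lo = stop_center_hz - stop_width_hz / 2
    hi = stop_center_hz + stop_width_hz / 2
    sos = butter(8, [lo, hi], btype='bandstop', fs=fs, output='sos')
    return sosfiltfilt(sos, noise)

stim = bandstop_noise(6000.0)   # notch centred at 6 kHz
```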

12.
Listeners, whose right ears were blocked, located low-intensity sounds originating from loudspeakers placed 15 deg apart along the horizontal plane on the side of the open, or functioning, ear. In Experiment 1, the stimuli consisted of noise bursts, 1.0 kHz wide and centered at 4.0 through 14.0 kHz in steps of .5 kHz. We found that the apparent location of the noise bursts was governed by their frequency composition. Specifically, as the center frequency was increased from 4.0 to about 8.0 kHz, the sound appeared to move away from the frontal sector and toward the side. This migration pattern of the apparent sound source was observed again when the center frequency was increased from 8.0 to about 12.0 kHz. Then, with center frequencies of 13.0 and 14.0 kHz, the sound appeared once more in front. We referred to this relation between frequency composition and apparent location in terms of spatial referent maps. In Experiment 2, we showed that localization was more proficient if the frequency content of the stimulus served to connect adjacent spatial referent maps rather than falling within a single map. By these means, we have further elucidated the spectral cues utilized in monaural localization of sound in the horizontal plane.

13.
B. Magnani, F. Pavani, F. Frassinetti (2012). Cognition, 125(2), 233-243
The aim of the present study was to explore the spatial organization of auditory time and the effects of the manipulation of spatial attention on such a representation. In two experiments, we asked 28 adults to classify the duration of auditory stimuli as "short" or "long". Stimuli were tones of high or low pitch, delivered left or right of the participant. The time bisection task was performed either on right or left stimuli regardless of their pitch (Spatial experiment), or on high or low tones regardless of their location (Tonal experiment). Duration of left stimuli was underestimated relative to that of right stimuli, in the Spatial but not in the Tonal experiment, suggesting that a spatial representation of auditory time emerges selectively when spatial encoding is enforced. Further, when we introduced spatial-attention shifts using the prismatic adaptation procedure, we found modulations of auditory time processing as a function of prismatic deviation, which correlated with the interparticipant adaptation effect. These novel findings reveal a spatial representation of auditory time, modulated by spatial attention.

14.
Previous research has demonstrated that the localization of auditory or tactile stimuli can be biased by the simultaneous presentation of a visual stimulus from a different spatial position. We investigated whether auditory localization judgments could also be affected by the presentation of spatially displaced tactile stimuli, using a procedure designed to reveal perceptual interactions across modalities. Participants made left-right discrimination responses regarding the perceived location of sounds, which were presented either in isolation or together with tactile stimulation to the fingertips. The results demonstrate that the apparent location of a sound can be biased toward tactile stimulation when it is synchronous, but not when it is asynchronous, with the auditory event. Directing attention to the tactile modality did not increase the bias of sound localization toward synchronous tactile stimulation. These results provide the first demonstration of the tactile capture of audition.

15.
Multisensory-mediated auditory localization
Multisensory integration is a powerful mechanism for maximizing sensitivity to sensory events. We examined its effects on auditory localization in healthy human subjects. The specific objective was to test whether the relative intensity and location of a seemingly irrelevant visual stimulus would influence auditory localization in accordance with the inverse effectiveness and spatial rules of multisensory integration that have been developed from neurophysiological studies with animals [Stein and Meredith, 1993 The Merging of the Senses (Cambridge, MA: MIT Press)]. Subjects were asked to localize a sound in conditions in which a neutral visual stimulus was either above threshold (supra-threshold) or at threshold. In both cases the spatial disparity of the visual and auditory stimuli was systematically varied. The results reveal that stimulus salience is a critical factor in determining the effect of a neutral visual cue on auditory localization. Visual bias and, hence, perceptual translocation of the auditory stimulus appeared when the visual stimulus was supra-threshold, regardless of its location. However, this was not the case when the visual stimulus was at threshold. In this case, the influence of the visual cue was apparent only when the two cues were spatially coincident and resulted in an enhancement of stimulus localization. These data suggest that the brain uses multiple strategies to integrate multisensory information.
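Multisensory gains of this sort are often summarized with the enhancement index of Stein and Meredith (1993), the percentage improvement of the combined response over the best unisensory response. A minimal sketch with hypothetical localization accuracies (the paper may quantify its data differently):

```python
def enhancement_index(combined, best_unisensory):
    """Multisensory enhancement (%): gain of the combined audiovisual
    response over the best single-modality response."""
    return 100.0 * (combined - best_unisensory) / best_unisensory

# Hypothetical proportions of correct auditory localizations.
auditory_alone  = 0.55
with_visual_cue = 0.72   # spatially coincident, near-threshold visual stimulus
print(f"{enhancement_index(with_visual_cue, auditory_alone):.0f}% enhancement")
```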

16.
Perception of visual information depends strongly on spatial context. For instance, perception of a low-level visual feature, such as orientation, can be shifted away from its surrounding context, exhibiting a simultaneous contrast effect. Although previous studies have demonstrated the adaptation aftereffect of gender, a high-level visual feature, it remains largely unknown whether gender perception can also be shaped by a simultaneously presented context. In the present study, we found that the gender perception of a central face or a point-light walker was repelled away from the gender of its surrounding faces or walkers. A norm-based opponent model of lateral inhibition, which accounts for the adaptation aftereffect of high-level features, can also fit the simultaneous contrast effect well. Unlike the reported contextual effect of low-level features, however, the simultaneous contrast effect of gender is not observed when the centre and the surrounding stimuli are from different categories, or when the surrounding stimuli are suppressed from awareness. These findings, on the one hand, reveal a resemblance between the simultaneous contrast effect and the adaptation aftereffect of high-level features and, on the other hand, highlight different biological mechanisms underlying the contextual effects of low- and high-level visual features.

17.
In this paper, the auditory motion aftereffect (aMAE) was studied, using real moving sound as both the adapting and the test stimulus. The sound was generated by a loudspeaker mounted on a robot arm that was able to move quietly in three-dimensional space. A total of 7 subjects with normal hearing were tested in three experiments. The results from Experiment 1 showed a robust and reliable negative aMAE in all the subjects. After listening to a sound source moving repeatedly to the right, a stationary sound source was perceived to move to the left. The magnitude of the aMAE tended to increase with adapting velocity up to the highest velocity tested (20°/sec). The aftereffect was largest when the adapting and the test stimuli had similar spatial location and frequency content. Offsetting the locations of the adapting and the test stimuli by 20° reduced the size of the effect by about 50%. A similar decline occurred when the frequency of the adapting and the test stimuli differed by one octave. Our results suggest that the human auditory system possesses specialized mechanisms for detecting auditory motion in the spatial domain.

18.
Subjects adapted to square-wave adaptors, stimuli containing odd harmonics of the fundamental, and, to provide baseline data, sinusoidal adaptors matching the square-wave’s fundamental. Nulls were obtained for various frequencies of the test stimulus. At any given frequency, the baseline null was subtracted from the null for square-wave adaptation. These corrected nulls indicated frequency-specific aftereffects at the third and fifth harmonics. The evidence is consistent with previous attempts to link the aftereffect of changing sound level in a tone with the auditory movement aftereffect because the latter may also show frequency specificity.

19.
In this paper, the auditory motion aftereffect (aMAE) was studied, using real moving sound as both the adapting and the test stimulus. The sound was generated by a loudspeaker mounted on a robot arm that was able to move quietly in three-dimensional space. A total of 7 subjects with normal hearing were tested in three experiments. The results from Experiment 1 showed a robust and reliable negative aMAE in all the subjects. After listening to a sound source moving repeatedly to the right, a stationary sound source was perceived to move to the left. The magnitude of the aMAE tended to increase with adapting velocity up to the highest velocity tested (20 degrees/sec). The aftereffect was largest when the adapting and the test stimuli had similar spatial location and frequency content. Offsetting the locations of the adapting and the test stimuli by 20 degrees reduced the size of the effect by about 50%. A similar decline occurred when the frequency of the adapting and the test stimuli differed by one octave. Our results suggest that the human auditory system possesses specialized mechanisms for detecting auditory motion in the spatial domain.

20.
Songbirds and humans share many parallels in vocal learning and auditory sequence processing. However, the two groups differ notably in their abilities to recognize acoustic sequences shifted in absolute pitch (pitch height). Whereas humans maintain accurate recognition of words or melodies over large pitch height changes, songbirds are comparatively much poorer at recognizing pitch-shifted tone sequences. This apparent disparity may reflect fundamental differences in the neural mechanisms underlying the representation of sound in songbirds. Alternatively, because non-human studies have used sine-tone stimuli almost exclusively, tolerance to pitch height changes in the context of natural signals may be underestimated. Here, we show that European starlings, a species of songbird, can maintain accurate recognition of the songs of other starlings when the pitch of those songs is shifted by as much as ±40%. We observed accurate recognition even for songs pitch-shifted well outside the range of frequencies used during training, and even though much smaller pitch shifts in conspecific songs are easily detected. With similar training using human piano melodies, recognition of the pitch-shifted melodies is very limited. These results demonstrate that non-human pitch processing is more flexible than previously thought and that the flexibility in pitch processing strategy is stimulus dependent.
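A pitch-height shift of ±40% corresponds to roughly ±5.8 semitones. A generic way to produce such stimuli (not necessarily the authors' method) is sketched below; the file name is hypothetical:

```python
import numpy as np
import librosa

def shift_pitch_by_ratio(y, sr, ratio):
    """Shift the pitch of a recording by a multiplicative ratio
    (e.g. 1.4 for +40%, 1/1.4 for -40%) while keeping its duration."""
    n_steps = 12.0 * np.log2(ratio)          # convert frequency ratio to semitones
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)

# Hypothetical usage with a starling song recording on disk.
y, sr = librosa.load("starling_song.wav", sr=None)
up40   = shift_pitch_by_ratio(y, sr, 1.40)
down40 = shift_pitch_by_ratio(y, sr, 1 / 1.40)
```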

