Similar Documents

1.
We examined how visual recalibration of apparent sound location obtained at a particular location generalizes to untrained locations. Participants pointed toward the origin of tone bursts scattered along the azimuth, before and after repeated exposure to bursts in one particular location, synchronized with point flashes of light a constant distance to their left/right. Adapter tones were presented straight ahead in Experiment 1, and in the left or right periphery in Experiment 2. With both arrangements, different generalization patterns were obtained on the visual distractor's side of the auditory adapter and on the opposite side. On the distractor side, recalibration generalized following a descending gradient; practically no generalization was observed on the other side. This dependence of generalization patterns on the direction of the discordance imposed during adaptation has not been reported before, perhaps because the experimental designs in use did not allow its observation.

2.
Investigations of situations involving spatial discordance between auditory and visual data which can otherwise be attributed to a common origin have revealed two main phenomena: cross-modal bias and perceptual fusion (or ventriloquism). The focus of the present study is the relationship between these two. The question asked was whether bias occurred only with fusion, as is predicted by some accounts of reactions to discordance, among them those based on cue substitution. The approach consisted of having subjects, on each trial, both point to signals in one modality in the presence of conflicting signals in the other modality and produce same-different origin judgments. To avoid the confounding of immediate effects with cumulative adaptation, which was allowed in most previous studies, the direction and amplitude of discordance was varied randomly from trial to trial. Experiment 1, which was a pilot study, showed that both visual bias of auditory localization and auditory bias of visual localization can be observed under such conditions. Experiment 2, which addressed the main question, used a method which controls for the selection involved in separating fusion from no-fusion trials and showed that the attraction of auditory localization by conflicting visual inputs occurs even when fusion is not reported. This result is inconsistent with purely postperceptual views of cross-modal interactions. The question could not be answered for auditory bias of visual localization, which, although significant, was very small in Experiment 1 and fell below significance under the conditions of Experiment 2.

3.
Previously, we showed that the visual bias of auditory sound location, or ventriloquism, does not depend on the direction of deliberate, or endogenous, attention (Bertelson, Vroomen, de Gelder, & Driver, 2000). In the present study, a similar question concerning automatic, or exogenous, attention was examined. The experimental manipulation was based on the fact that exogenous visual attention can be attracted toward a singleton, that is, an item different on some dimension from all other items presented simultaneously. A display was used that consisted of a row of four bright squares with one square, in either the left- or the rightmost position, smaller than the others, serving as the singleton. In Experiment 1, subjects made dichotomous left-right judgments concerning sound bursts, whose successive locations were controlled by a psychophysical staircase procedure and which were presented in synchrony with a display with the singleton either left or right. Results showed that the apparent location of the sound was attracted not toward the singleton, but instead toward the big squares at the opposite end of the display. Experiment 2 was run to check that the singleton effectively attracted exogenous attention. The task was to discriminate target letters presented either on the singleton or on the opposite big square. Performance deteriorated when the target was on the big square opposite the singleton, in comparison with control trials with no singleton, thus showing that the singleton attracted attention away from the target location. In Experiment 3, localization and discrimination trials were mixed randomly so as to control for potential differences in subjects' strategies in the two preceding experiments. Results were as before, showing that the singleton attracted attention, whereas sound localization was shifted away from the singleton. Ventriloquism can thus be dissociated from exogenous visual attention and appears to reflect sensory interactions with little role for the direction of visual spatial attention.
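Abstract 3 reports sound locations "controlled by a psychophysical staircase procedure" with dichotomous left-right judgments. As a rough illustration of how such a procedure works, here is a minimal 1-up/1-down staircase sketch in Python; the step size, trial count, and the `fake_observer` stand-in are hypothetical choices for illustration, not parameters from the study.

```python
import random

# Minimal 1-up/1-down staircase for dichotomous left/right judgments
# (illustrative parameters; the study does not specify its rules).
def staircase(respond, start_deg=10.0, step_deg=2.0, n_trials=40):
    """Track the azimuths at which 'left'/'right' responses flip.

    respond: callable taking the current azimuth (deg, + = right)
             and returning 'left' or 'right'.
    Returns the tested azimuths; the mean over the late trials
    estimates the point of subjective straight-ahead.
    """
    azimuth = start_deg
    history = []
    for _ in range(n_trials):
        history.append(azimuth)
        # Step toward the side opposite the response, so the track
        # oscillates around the subjective midline.
        azimuth += -step_deg if respond(azimuth) == "right" else step_deg
    return history

# Hypothetical observer whose subjective midline is shifted 3 deg
# (e.g., away from a singleton); replace with real trial responses.
def fake_observer(azimuth, bias=3.0, noise=2.0):
    return "right" if azimuth + random.gauss(0, noise) > bias else "left"

track = staircase(fake_observer)
print("estimated midline (deg):", sum(track[-20:]) / 20)
```

A shift of this estimated midline away from the singleton side would correspond to the attraction reported in Experiment 1.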

4.
It is well known that discrepancies in the location of synchronized auditory and visual events can lead to mislocalizations of the auditory source, so-called ventriloquism. In two experiments, we tested whether such cross-modal influences on auditory localization depend on deliberate visual attention to the biasing visual event. In Experiment 1, subjects pointed to the apparent source of sounds in the presence or absence of a synchronous peripheral flash. They also monitored for target visual events, either at the location of the peripheral flash or in a central location. Auditory localization was attracted toward the synchronous peripheral flash, but this was unaffected by where deliberate visual attention was directed in the monitoring task. In Experiment 2, bilateral flashes were presented in synchrony with each sound, to provide competing visual attractors. When these visual events were equally salient on the two sides, auditory localization was unaffected by which side subjects monitored for visual targets. When one flash was larger than the other, auditory localization was slightly but reliably attracted toward it, but again regardless of where visual monitoring was required. We conclude that ventriloquism largely reflects automatic sensory interactions, with little or no role for deliberate spatial attention.

5.
In three experiments, listeners were required to either localize or identify the second of two successive sounds. The first sound (the cue) and the second sound (the target) could originate from either the same or different locations, and the interval between the onsets of the two sounds (Stimulus Onset Asynchrony, SOA) was varied. Sounds were presented out of visual range at 135° azimuth left or right. In Experiment 1, localization responses were made more quickly at 100 ms SOA when the target sounded from the same location as the cue (i.e., a facilitative effect), and at 700 ms SOA when the target and cue sounded from different locations (i.e., an inhibitory effect). In Experiments 2 and 3, listeners were required to monitor visual information presented directly in front of them at the same time as the auditory cue and target were presented behind them. These two experiments differed in that in order to perform the visual task accurately in Experiment 3, eye movements to visual stimuli were required. In both experiments, a transition from facilitation at a brief SOA to inhibition at a longer SOA was observed for the auditory task. Taken together, these results suggest that location-based auditory inhibition of return (IOR) is not dependent on either eye movements or saccade programming to sound locations.

6.
Previous research has demonstrated that the localization of auditory or tactile stimuli can be biased by the simultaneous presentation of a visual stimulus from a different spatial position. We investigated whether auditory localization judgments could also be affected by the presentation of spatially displaced tactile stimuli, using a procedure designed to reveal perceptual interactions across modalities. Participants made left-right discrimination responses regarding the perceived location of sounds, which were presented either in isolation or together with tactile stimulation to the fingertips. The results demonstrate that the apparent location of a sound can be biased toward tactile stimulation when it is synchronous, but not when it is asynchronous, with the auditory event. Directing attention to the tactile modality did not increase the bias of sound localization toward synchronous tactile stimulation. These results provide the first demonstration of the tactile capture of audition.

7.
The objective of this study was to analyze the structural properties of the respective inputs that are conducive to immediate auditory-visual cross-modal bias. The study was designed as an updated and extended replication of one by Thomas (1941). Subjects were presented with either auditory or visual target signals in several positions around the median plane, together with a competing signal, always in the other modality, 15° to the left or to the right of that plane. The task was to indicate whether the target signal came from the left or the right of center. Target and competing signals were delivered according to three temporal configurations: continuously on for 4 s, or periodically interrupted at either a fast or a slow tempo, and all combinations of the three configurations were used. Judgements of the location of the auditory target signal were attracted toward the visual competing signal in all conditions but two, those with a periodic target signal and a continuous competing one. Conditions with the two signals in the same configuration yielded larger biases than those combining different configurations, confirming that synchronization of discordant inputs is a major condition of cross-modal interactions. The occurrence of significant bias in nonsynchronous conditions, on the other hand, suggests that another factor might be the attraction of localization responses by competing signals with salient temporal configurations, and that interruption might be one important source of saliency. Auditory biases of visual localization were, as usual, smaller than visual biases, but nevertheless reached significance in a majority of conditions, and were influenced by timing in much the same way as the visual biases. This work was supported in part by the Belgian Fonds de la Recherche Fondamentale Collective (FRFC) under convention 2.4505.80. The first author is Chercheur qualifié of the Fonds National de la Recherche Scientifique (FNRS). The results were previously reported at the Symposium on Perception, Action and Development, Brussels, January 1984.

8.
The effect of a background sound on the auditory localization of a single sound source was examined. Nine loudspeakers were arranged crosswise in the horizontal and the median vertical plane. They ranged from -20 degrees to +20 degrees, with the center loudspeaker at 0 degrees azimuth and elevation. Using vertical and horizontal centimeter scales, listeners verbally estimated the position of a 500-ms broadband noise stimulus presented at the same time as a 2-s background sound emitted by one of the four outer loudspeakers. When the background sound consisted of continuous broadband noise, listeners consistently shifted the apparent target positions away from the background sound locations. This auditory contrast effect, which is consistent with earlier findings, occurred equally in both planes. But when the background sound was changed to a pulse train of noise bursts, the contrast effect decreased in the horizontal plane and increased in the vertical plane. This discrepancy might be due to general differences in the processing of interaural and spectral localization information.

9.
Listeners, whose right ears were blocked, located low-intensity sounds originating from loudspeakers placed 15° apart along the horizontal plane on the side of the open, or functioning, ear. In Experiment 1, the stimuli consisted of noise bursts, 1.0 kHz wide and centered at 4.0 through 14.0 kHz in steps of 0.5 kHz. We found that the apparent location of the noise bursts was governed by their frequency composition. Specifically, as the center frequency was increased from 4.0 to about 8.0 kHz, the sound appeared to move away from the frontal sector and toward the side. This migration pattern of the apparent sound source was observed again when the center frequency was increased from 8.0 to about 12.0 kHz. Then, with center frequencies of 13.0 and 14.0 kHz, the sound appeared once more in front. We referred to this relation between frequency composition and apparent location in terms of spatial referent maps. In Experiment 2, we showed that localization was more proficient if the frequency content of the stimulus served to connect adjacent spatial referent maps rather than falling within a single map. By these means, we have further elucidated the spectral cues utilized in monaural localization of sound in the horizontal plane.

10.
Multisensory integration can play a critical role in producing unified and reliable perceptual experience. When sensory information in one modality is degraded or ambiguous, information from other senses can crossmodally resolve perceptual ambiguities. Prior research suggests that auditory information can disambiguate the contents of visual awareness by facilitating perception of intermodally consistent stimuli. However, it is unclear whether these effects are truly due to crossmodal facilitation or are mediated by voluntary selective attention to audiovisually congruent stimuli. Here, we demonstrate that sounds can bias competition in binocular rivalry toward audiovisually congruent percepts, even when participants have no recognition of the congruency. When speech sounds were presented in synchrony with speech-like deformations of rivalling ellipses, ellipses with crossmodally congruent deformations were perceptually dominant over those with incongruent deformations. This effect was observed in participants who could not identify the crossmodal congruency in an open-ended interview (Experiment 1) or detect it in a simple 2AFC task (Experiment 2), suggesting that the effect was not due to voluntary selective attention or response bias. These results suggest that sound can automatically disambiguate the contents of visual awareness by facilitating perception of audiovisually congruent stimuli.

11.
While “recalibration by pairing” is now generally held to be the main process responsible for adaptation to intermodal discordance, the conditions under which pairing of heteromodal data occurs in spite of a discordance have not been studied systematically. The question has been explored in the case of auditory-visual discordance. Subjects pointed at auditory targets before and after exposure to auditory and visual data from sources 20° apart in azimuth, in conditions varying by (a) the degree of realism of the context and (b) the synchronization between auditory and visual data. In Experiment 1, the exposure conditions combined the sound of a percussion instrument (bongos) with either the image on a video monitor of the hands of the player (semirealistic situation) or diffuse light modulated by the sound (nonrealistic situation). Experiment 2 featured a voice and either the image of the face of the speaker or light modulated by the voice, and in both situations either sound and image were exactly synchronous or the sound was made to lag by 0.35 sec. Desynchronization was found to reduce adaptation significantly, while degree of realism failed to produce an effect. Answers to a question asked at the end of the testing regarding the location of the sound source suggested that the apparent fusion of the auditory and visual data (the phenomenon called “ventriloquism”) was not affected by the conditions in the same way as adaptation. In Experiment 3, subjects were exposed to the experimental conditions of Experiment 2 and were asked to report their impressions of fusion by pressing a key. The results support the suggestion that pairing of registered auditory and visual locations, the hypothetical process at the basis of recalibration, may be a different phenomenon from conscious fusion.

12.
Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via 21 loudspeakers mounted horizontally (from 80° on the left to 80° on the right). Participants had to localize the target either by using a swivel hand-pointer or by head-pointing. Individual lateral preferences of eye, ear, hand, and foot were obtained using a questionnaire. With both pointing methods, participants showed a bias in sound localization that was to the side contralateral to the preferred hand, an effect that was unrelated to their overall precision. This partially parallels findings in the visual modality, as left-handers typically have a more rightward bias in visual line bisection compared with right-handers. Despite the differences in neural processing of auditory and visual spatial information, these findings show similar effects of lateral preference on auditory and visual spatial perception. This suggests that supramodal neural processes are involved in the mechanisms generating laterality in space perception.
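The abstract above distinguishes a directional localization bias from overall precision. A minimal sketch of that distinction, assuming pointing data recorded as (target, response) azimuth pairs; the data values below are invented for illustration, not taken from the study.

```python
from statistics import mean, stdev

# Signed localization errors separate a constant directional bias
# from response variability (precision), which the study reports
# as independent properties. Example data are hypothetical.
trials = [(-40, -43.0), (-20, -24.5), (0, -2.0), (20, 17.5), (40, 36.0)]

signed_errors = [response - target for target, response in trials]

bias = mean(signed_errors)        # constant error: the sign gives the side
precision = stdev(signed_errors)  # spread of errors, independent of sign

side = "leftward" if bias < 0 else "rightward"
print(f"bias: {bias:+.1f} deg ({side}); precision (SD): {precision:.1f} deg")
```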

13.
Despite the often encountered affirmation that vision completely dominates other modalities in intersensory conflict, there are cases where discordant auditory information affects the localization of a visual signal. Experiment I shows that “auditory capture” occurs with a visual input reduced to a single luminous point in complete darkness, but not with a textured background. The task was to point at a flashing luminous point, alternately in the presence of a synchronous sound coming from a source situated 15° to one side (“conflict trials,” designed to measure immediate reaction to conflict) and in its absence (“test trials,” to measure aftereffects). Adaptive immediate reactions and aftereffects were observed in the dark, but not with a textured background. In Experiment II, on the other hand, “visual capture” of auditory localization was observed on both measures in the dark and with the textured background. That visual texture affects the degree of auditory capture of vision, but not the degree of visual capture of audition, was confirmed at the level of aftereffects in Experiment III, where bisensory monitoring was substituted for pointing during exposure to conflict. This empirical finding eliminates apparent contradictions in the literature on ventriloquism, but cannot itself be explained in terms of either the relative accuracy of visual and auditory localization or attentional adjustments.

14.
In the ventriloquism aftereffect, brief exposure to a consistent spatial disparity between auditory and visual stimuli leads to a subsequent shift in subjective sound localization toward the positions of the visual stimuli. Such rapid adaptive changes probably play an important role in maintaining the coherence of spatial representations across the various sensory systems. In the research reported here, we used event-related potentials (ERPs) to identify the stage in the auditory processing stream that is modulated by audiovisual discrepancy training. Both before and after exposure to synchronous audiovisual stimuli that had a constant spatial disparity of 15°, participants reported the perceived location of brief auditory stimuli that were presented from central and lateral locations. In conjunction with a sound localization shift in the direction of the visual stimuli (the behavioral ventriloquism aftereffect), auditory ERPs as early as 100 ms poststimulus (N100) were systematically modulated by the disparity training. These results suggest that cross-modal learning was mediated by a relatively early stage in the auditory cortical processing stream.

15.
Memory for location of a dot inside a circle was investigated with the circle in the center of a computer screen (Experiment 1) or with the circle presented in either the left or the right visual field (Experiment 2). In both experiments, as in Huttenlocher, Hedges, and Duncan’s (1991) study, the task was to relocate the dot by marking the remembered location. When errors in angular and radial estimates were considered separately, it was found that, in both experiments, the angular locations of estimates of the dots’ positions regressed toward different locations inside each quadrant of the circle; the radial locations of the estimates of dots’ positions tended to regress toward locations near the circumference. These variations in the direction of bias appeared to reflect a general shift of estimates toward the upper left arc of the circle. The second experiment replicated the preceding effects but also revealed that the regressions within quadrants of angular values were stronger after right visual field than after left visual field presentations. We interpret the dissociation between visual fields as evidence that memory for categorical spatial relations (Kosslyn, 1987) is more dependent on left-hemisphere than on right-hemisphere processing.
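The angular/radial decomposition described above can be made concrete with a small sketch: express the actual and remembered dot positions in polar coordinates about the circle's center and take the component-wise differences. The coordinates and function name here are hypothetical, for illustration only.

```python
import math

def polar_errors(actual, remembered, center=(0.0, 0.0)):
    """Return (angular error in degrees, radial error) of a remembered dot.

    Positive angular error = counterclockwise displacement; radial
    error is in the coordinate units (e.g., fractions of the radius).
    """
    def to_polar(point):
        x, y = point[0] - center[0], point[1] - center[1]
        return math.atan2(y, x), math.hypot(x, y)

    theta_a, r_a = to_polar(actual)
    theta_m, r_m = to_polar(remembered)
    # Wrap the angular difference into [-180, 180) degrees.
    d_theta = (math.degrees(theta_m - theta_a) + 180.0) % 360.0 - 180.0
    return d_theta, r_m - r_a

# A dot in the upper-right quadrant remembered as rotated within the
# quadrant and closer to the circumference (invented values).
print(polar_errors(actual=(0.30, 0.40), remembered=(0.25, 0.52)))
```

Regression of angular estimates toward a location inside each quadrant shows up as a systematic sign pattern in the first component; the drift toward the circumference shows up as positive radial errors.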

16.
The kinds of aftereffects, indicative of cross-modal recalibration, that are observed after exposure to spatially incongruent inputs from different sensory modalities have not been demonstrated so far for identity incongruence. We show that exposure to incongruent audiovisual speech (producing the well-known McGurk effect) can recalibrate auditory speech identification. In Experiment 1, exposure to an ambiguous sound intermediate between /aba/ and /ada/ dubbed onto a video of a face articulating either /aba/ or /ada/ increased the proportion of /aba/ or /ada/ responses, respectively, during subsequent sound identification trials. Experiment 2 demonstrated the same recalibration effect or the opposite one, fewer /aba/ or /ada/ responses, revealing selective speech adaptation, depending on whether the ambiguous sound or a congruent nonambiguous one was used during exposure. In separate forced-choice identification trials, bimodal stimulus pairs producing these contrasting effects were identically categorized, which makes a role of postperceptual factors in the generation of the effects unlikely.

17.
Auditory psychomotor coordination and visual search performance
In Experiments 1 and 2, the time to locate and identify a visual target (visual search performance in a two-alternative forced-choice paradigm) was measured as a function of the location of the target relative to the subject's initial line of gaze. In Experiment 1, tests were conducted within a 260-degree region on the horizontal plane at a fixed elevation (eye level). In Experiment 2, the position of the target was varied in both the horizontal (260 degrees) and the vertical (±46 degrees from the initial line of gaze) planes. In both experiments, and for all locations tested, the time required to conduct a visual search was reduced substantially (175-1,200 msec) when a 10-Hz click train was presented from the same location as that occupied by the visual target. Significant differences in latencies were still evident when the visual target was located within 10 degrees of the initial line of gaze (central visual field). In Experiment 3, we examined head and eye movements that occur as subjects attempt to locate a sound source. Concurrent movements of the head and eyes are commonly encountered during auditorily directed search behavior. In over half of the trials, eyelid closures were apparent as the subjects attempted to orient themselves toward the sound source. The results from these experiments support the hypothesis that the auditory spatial channel has a significant role in regulating visual gaze.

18.
Six experiments examined the issue of whether one single system or separate systems underlie visual and auditory orienting of spatial attention. When auditory targets were used, reaction times were slower on trials in which cued and target locations were at opposite sides of the vertical head-centred meridian than on trials in which cued and target locations were at opposite sides of the vertical visual meridian or were not separated by any meridian. The head-centred meridian effect for auditory stimuli was apparent when targets were cued by either visual (Experiments 2, 3, and 6) or auditory cues (Experiment 5). Also, the head-centred meridian effect was found when targets were delivered either through headphones (Experiments 2, 3, and 5) or external loudspeakers (Experiment 6). Conversely, participants showed a visual meridian effect when they were required to respond to visual targets (Experiment 4). These results strongly suggest that auditory and visual spatial attention systems are indeed separate, as far as endogenous orienting is concerned.
