Similar Literature
1.
Auditory perception of the depth of space is based mainly on spectral and amplitude changes of sound waves originating from the sound source and reaching the listener. The perceptual illusion of movement of an auditory image, caused by changes in amplitude and/or frequency of the signal tone emanating from an immobile loudspeaker, was studied. Analysis of data obtained from the participants revealed the range of combinations of amplitude and frequency changes for which the movement direction was perceived similarly by all participants, despite significantly different movement-assessment criteria. Additional auditory and visual information about the conditions of radial movement (near or far fields) determined listeners' interpretation of changes in the signal parameters. The data obtained on the perception of approach and withdrawal models are evidence that the principal cues for the perception of the distance of immobile sound sources manifest similarly for an auditory image moving along a radial axis.
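The approach and withdrawal models described above lend themselves to a short synthesis sketch: a tone whose amplitude and frequency ramp together mimics a source moving along the radial axis. This is a minimal illustration only; the 1 kHz carrier, the 10 dB level change, and the 20 Hz glide are assumed values, not parameters from the study.

```python
import numpy as np

def radial_motion_tone(duration=1.0, fs=44100, f0=1000.0,
                       level_change_db=10.0, freq_glide_hz=20.0,
                       approaching=True):
    """Synthesize a tone whose amplitude and frequency ramp together,
    mimicking a source moving toward (approaching=True) or away from
    the listener: approach = rising level plus slightly rising pitch."""
    t = np.arange(int(duration * fs)) / fs
    sign = 1.0 if approaching else -1.0
    # A linear ramp in dB becomes an exponential ramp in amplitude.
    gain_db = sign * level_change_db * (t / duration - 0.5)
    amplitude = 10.0 ** (gain_db / 20.0)
    # Linear frequency glide; integrate it to get the instantaneous phase.
    freq = f0 + sign * freq_glide_hz * (t / duration - 0.5)
    phase = 2.0 * np.pi * np.cumsum(freq) / fs
    return amplitude * np.sin(phase)

approach = radial_motion_tone(approaching=True)   # louder and higher
withdraw = radial_motion_tone(approaching=False)  # softer and lower
```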

2.
While “recalibration by pairing” is now generally held to be the main process responsible for adaptation to intermodal discordance, the conditions under which pairing of heteromodal data occurs in spite of a discordance have not been studied systematically. The question has been explored in the case of auditory-visual discordance. Subjects pointed at auditory targets before and after exposure to auditory and visual data from sources 20° apart in azimuth, in conditions varying by (a) the degree of realism of the context, and (b) the synchronization between auditory and visual data. In Experiment 1, the exposure conditions combined the sound of a percussion instrument (bongos) with either the image on a video monitor of the hands of the player (semirealistic situation) or diffuse light modulated by the sound (nonrealistic situation). Experiment 2 featured a voice and either the image of the face of the speaker or light modulated by the voice, and in both situations either sound and image were exactly synchronous or the sound was made to lag by 0.35 sec. Desynchronization was found to reduce adaptation significantly, while degree of realism failed to produce an effect. Answers to a question asked at the end of the testing regarding the location of the sound source suggested that the apparent fusion of the auditory and visual data—the phenomenon called “ventriloquism”—was not affected by the conditions in the same way as adaptation. In Experiment 3, subjects were exposed to the experimental conditions of Experiment 2 and were asked to report their impressions of fusion by pressing a key. The results contribute to the suggestion that pairing of registered auditory and visual locations, the hypothetical process at the basis of recalibration, may be a different phenomenon from conscious fusion.

3.
Blind and blindfolded sighted observers were presented with auditory stimuli specifying target locations. The stimulus was either sound from a loudspeaker or spatial language (e.g., "2 o'clock, 16 ft"). On each trial, an observer attempted to walk to the target location along a direct or indirect path. The ability to mentally keep track of the target location without concurrent perceptual information about it (spatial updating) was assessed in terms of the separation between the stopping points for the 2 paths. Updating performance was very nearly the same for the 2 modalities, indicating that once an internal representation of a location has been determined, subsequent updating performance is nearly independent of the modality used to specify the representation.
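A spatial-language instruction such as "2 o'clock, 16 ft" maps onto Cartesian coordinates directly, since each clock hour corresponds to 30° of azimuth; the sketch below also computes the paper's updating measure, the separation between the stopping points of the two paths. The coordinate convention (12 o'clock straight ahead, +y) is an assumption.

```python
import math

def clock_target(hour, distance_ft):
    """Convert an '<hour> o'clock, <distance> ft' instruction into
    x/y coordinates (ft): 12 o'clock is straight ahead (+y), and
    each clock hour is worth 30 degrees of azimuth."""
    azimuth = math.radians(hour * 30.0)
    return distance_ft * math.sin(azimuth), distance_ft * math.cos(azimuth)

def updating_error(stop_direct, stop_indirect):
    """Separation (ft) between the stopping points of the direct and
    indirect walks -- the study's index of spatial updating."""
    (x1, y1), (x2, y2) = stop_direct, stop_indirect
    return math.hypot(x2 - x1, y2 - y1)

print(clock_target(2, 16.0))                     # ~ (13.86, 8.00)
print(updating_error((13.0, 8.5), (14.5, 7.0)))  # ~ 2.12 ft
```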

4.
The hypothesis that the extent of spatial separation between successive sound events directly affects the perception of time intervals between these events was tested using an apparent motion paradigm. Subjects listened to four-tone pitch patterns whose individual tones were sounded alternately at one of two loudspeaker positions, and they adjusted the alternation rate until they could no longer distinguish the four-tone ordering of the pattern. Four horizontal and two vertical loudspeaker separations were tested. Results indicate a direct relation between horizontal separation and the critical stimulus onset asynchrony (SOA) between successive tones within a pattern. At the critical SOA, subjects reported hearing not a four-tone pattern, but two pairs of two-note groups overlapping in time. The findings are discussed in the context of auditory spatial processing mechanisms and possible sensory-specific representational constraints.
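The alternating four-tone stimulus can be approximated as below, with the two loudspeakers stood in for by the left and right channels of a stereo buffer; the tone frequencies and the 50-ms tone duration are illustrative assumptions.

```python
import numpy as np

def four_tone_pattern(freqs=(440.0, 554.0, 659.0, 880.0), soa=0.12,
                      tone_dur=0.05, fs=44100):
    """Build a stereo buffer in which successive tones of a four-tone
    pattern alternate between the left and right channels.

    Shrinking `soa` (stimulus onset asynchrony) toward the critical
    value makes the four-tone ordering harder to report, as in the
    adjustment procedure described above."""
    n_total = int((soa * (len(freqs) - 1) + tone_dur) * fs)
    out = np.zeros((n_total, 2))
    t = np.arange(int(tone_dur * fs)) / fs
    for i, f in enumerate(freqs):
        tone = np.sin(2 * np.pi * f * t) * np.hanning(t.size)
        start = int(i * soa * fs)
        out[start:start + tone.size, i % 2] += tone  # alternate L/R
    return out

pattern = four_tone_pattern(soa=0.12)  # one pattern at a 120-ms SOA
```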

5.
The effect of a background sound on the auditory localization of a single sound source was examined. Nine loudspeakers were arranged crosswise in the horizontal and the median vertical plane. They ranged from -20 degrees to +20 degrees, with the center loudspeaker at 0 degrees azimuth and elevation. Using vertical and horizontal centimeter scales, listeners verbally estimated the position of a 500-ms broadband noise stimulus presented at the same time as a 2-s background sound emitted by one of the four outer loudspeakers. When the background sound consisted of continuous broadband noise, listeners consistently shifted the apparent target positions away from the background sound locations. This auditory contrast effect, which is consistent with earlier findings, occurred equally in both planes. However, when the background sound was changed to a pulse train of noise bursts, the contrast effect decreased in the horizontal plane and increased in the vertical plane. This discrepancy might be due to general differences in the processing of interaural and spectral localization information.
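Per trial, the contrast effect reduces to a signed shift score, sketched here; positions are in degrees along the tested plane, and the sign convention (positive = shifted away from the background) is our assumption, not the paper's notation.

```python
def contrast_shift(estimate_deg, target_deg, background_deg):
    """Signed localization shift for one trial, in degrees: positive
    values mean the apparent target position moved away from the
    background sound's location (the contrast effect above)."""
    away = 1.0 if target_deg >= background_deg else -1.0
    return (estimate_deg - target_deg) * away

# Target at 0 deg, background at +20 deg: an estimate of -3 deg is a
# 3-deg shift away from the background.
print(contrast_shift(-3.0, 0.0, 20.0))  # -> 3.0
```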

6.
In studies on auditory speech perception, participants are often asked to perform active tasks, e.g., to decide whether the perceived sound is a speech sound or not. However, information about the stimulus, inherent in such tasks, may induce expectations that cause altered activations not only in the auditory cortex, but also in frontal areas such as the inferior frontal gyrus (IFG) and motor cortices, even in the absence of an explicit task. To investigate this, we applied spectral mixes of a flute sound and either vowels or specific musical instrument sounds (e.g., trumpet) in an fMRI study, in combination with three different instructions. The instructions either revealed no information about stimulus features, or explicit information about either the musical instrument or the vowel features. The results demonstrated that, besides an involvement of posterior temporal areas, stimulus expectancy modulated in particular a network comprising the IFG and premotor cortices during this passive listening task.
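One simple way to construct such spectral mixes is to interpolate the magnitude spectra of the two sounds; the geometric morph below is only a plausible stand-in, since the study's exact mixing procedure is not given in the abstract, and the test tones are assumed stimuli.

```python
import numpy as np

def spectral_mix(sound_a, sound_b, weight=0.5):
    """Morph two equal-length sounds by geometrically interpolating
    their magnitude spectra (phase taken from sound_a).
    weight=0 returns sound_a's spectrum, weight=1 sound_b's."""
    fa, fb = np.fft.rfft(sound_a), np.fft.rfft(sound_b)
    mag = np.abs(fa) ** (1.0 - weight) * np.abs(fb) ** weight
    phase = np.angle(fa)
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(sound_a))

# e.g., a 50/50 mix of a "flute-like" and a "vowel-like" test tone
fs = 44100
t = np.arange(fs) / fs
flute = np.sin(2 * np.pi * 440.0 * t)
vowel = sum(np.sin(2 * np.pi * f * t) for f in (220.0, 660.0, 1100.0))
mix = spectral_mix(flute, vowel, weight=0.5)
```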

7.
In this paper, the auditory motion aftereffect (aMAE) was studied, using real moving sound as both the adapting and the test stimulus. The sound was generated by a loudspeaker mounted on a robot arm that was able to move quietly in three-dimensional space. A total of 7 subjects with normal hearing were tested in three experiments. The results from Experiment 1 showed a robust and reliable negative aMAE in all the subjects. After listening to a sound source moving repeatedly to the right, a stationary sound source was perceived to move to the left. The magnitude of the aMAE tended to increase with adapting velocity up to the highest velocity tested (20 degrees/sec). The aftereffect was largest when the adapting and the test stimuli had similar spatial location and frequency content. Offsetting the locations of the adapting and the test stimuli by 20 degrees reduced the size of the effect by about 50%. A similar decline occurred when the frequency of the adapting and the test stimuli differed by one octave. Our results suggest that the human auditory system possesses specialized mechanisms for detecting auditory motion in the spatial domain.

8.
There is growing interest in the effect of sound on visual motion perception. One example involves the illusion created when two identical objects moving toward each other on a two-dimensional visual display can be seen either to bounce off or to stream through each other. Previous studies show that the large bias normally seen toward the streaming percept can be modulated by the presentation of an auditory event at the moment of coincidence. However, no reports to date provide sufficient evidence to indicate whether the bounce-inducing effect of sound is due to a perceptual binding process or merely to an explicit inference resulting from the transient auditory stimulus resembling a physical collision of two objects. In the present study, we used a novel experimental design in which a subliminal sound was presented either 150 ms before, at, or 150 ms after the moment of coincidence of two disks moving toward each other. The results showed that there was an increased perception of bouncing (rather than streaming) when the subliminal sound was presented at or 150 ms after the moment of coincidence, compared to when no sound was presented. These findings provide the first empirical demonstration that activation of the human auditory system without reaching consciousness affects the perception of an ambiguous visual motion display.

9.
Head movement can have a significant effect on the ability to locate the direction of a sound source. A system has been designed to track the head movement in response to sound originating at different azimuth locations with respect to the head. A videotape record is made of a light approximating a point source carried on a lightweight “beanie” mounted on the listener’s head. Movement of the light is monitored by the video camera and recorded on tape, along with the sound stimulus and information concerning loudspeaker location and time. The horizontal and vertical coordinates of the light-spot image are determined in relation to the video synch pulses defining the field borders. Synch signals are available from a video monitor either in real time or from tape replay to define each TV frame and horizontal scan line. The circuitry interfaces to a computer programmed to take the information, apply a calibration, and process the data into records of time-varying head position and velocity. Examples of both digital and graphic printouts of head movement are given. The system is capable of expansion to three-axis operation.
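The final processing step — turning per-frame light-spot coordinates into records of time-varying head position and velocity — might look like the following sketch; the pixels-per-degree calibration and the 30-fps frame rate are assumed values, not taken from the original system.

```python
import numpy as np

def head_traces(x_pixels, px_per_degree=8.0, frame_rate=30.0):
    """Convert per-frame horizontal light-spot coordinates into
    calibrated head azimuth (degrees) and angular velocity (deg/s).

    `x_pixels` holds one horizontal coordinate per video frame,
    measured relative to the straight-ahead position."""
    azimuth = np.asarray(x_pixels, dtype=float) / px_per_degree
    # Central differences give a smoother velocity estimate than
    # simple frame-to-frame differences.
    velocity = np.gradient(azimuth) * frame_rate
    return azimuth, velocity

# e.g., a head turn of ~40 px over 10 frames -> ~5 deg at 30 fps
az, vel = head_traces([0, 2, 6, 12, 20, 28, 34, 38, 40, 40])
```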

10.
Two-month-olds and newborns were tested in a situation where they had the opportunity to experience different auditory consequences of their own oral activity on a dummy pacifier. Modulation of oral activity was scored and analyzed relative to two types of contingent auditory feedback, either analog or non-analog to the effort exerted by the infant on the pacifier. The dummy pacifier was connected to an air pressure transducer for recording of oral action. In two different experimental conditions, each time the infant sucked above a certain pressure threshold, a perfectly contingent sound of varying pitch was heard. In one condition, the pitch variation was analog to the pressure applied by the infant on the pacifier (analog condition). In the other, the pitch variation was random (non-analog condition). The rationale was that differential modulation of oral activity in these two conditions would index some voluntary control and a sense of the causal link between sucking and its auditory consequences, beyond mere temporal contingency detection and response–stimulus association. Results indicated that 2-month-olds showed clear signs of modulation of their oral activity on the pacifier as a function of the analog versus non-analog condition. In contrast, newborns did not show any signs of such modulation, either between experimental conditions (analog versus non-analog contingent sounds) or between baseline (no contingent sounds) and experimental conditions. These observations are interpreted as evidence of self-exploration and the emergence of a sense of self-agency by 2 months of age.
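The two feedback conditions boil down to two mappings from suck pressure to pitch, sketched below; the pressure threshold and pitch range are illustrative assumptions, not the study's values.

```python
import random

PRESSURE_THRESHOLD = 0.3       # assumed, in arbitrary transducer units
PITCH_RANGE = (300.0, 900.0)   # assumed feedback range in Hz

def feedback_pitch(pressure, analog=True):
    """Return the pitch (Hz) of the contingent sound for one suck,
    or None if the pressure stays below threshold.

    Analog condition: pitch scales with the pressure exerted on the
    pacifier. Non-analog condition: the sound is equally contingent
    (same trigger), but its pitch is random, breaking the mapping
    between effort and sound."""
    if pressure < PRESSURE_THRESHOLD:
        return None
    lo, hi = PITCH_RANGE
    if analog:
        # Map suprathreshold pressure (assumed to max out at 1.0)
        # linearly onto the pitch range.
        x = min((pressure - PRESSURE_THRESHOLD) / (1.0 - PRESSURE_THRESHOLD), 1.0)
        return lo + x * (hi - lo)
    return random.uniform(lo, hi)
```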

11.
The neural mechanisms underlying the perception of pitch, a sensory attribute of paramount importance in hearing, have been a matter of debate for over a century. A question currently at the heart of the debate is whether the pitch of all harmonic complex tones can be determined by the auditory system using a single mechanism, or whether two different neural mechanisms are involved, depending on the stimulus conditions. When the harmonics are widely spaced, as is the case at high fundamental frequencies (F0s), and/or when the frequencies of the harmonics are low, the frequency components of the sound fall in different peripheral auditory channels and are then "resolved" by the peripheral auditory system. In contrast, at low F0s, or when the harmonics are high in frequency, several harmonics interact within the passbands of the same auditory filters, being thus "unresolved" by the peripheral auditory system. The idea that more than one mechanism mediates the encoding of pitch depending on the resolvability status of the harmonics was investigated here by testing for transfer of learning in F0 discrimination between different stimulus conditions involving either resolved or unresolved harmonics, after specific training in one of these conditions. The results, which show some resolvability-specificity of F0-discrimination learning, support the hypothesis that two different underlying mechanisms mediate the encoding of the F0 of resolved and unresolved harmonics.
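Resolvability can be approximated with the equivalent-rectangular-bandwidth formula of Glasberg and Moore (1990), ERB(f) = 24.7(4.37f/1000 + 1) Hz: a common rule of thumb treats a harmonic as "resolved" when the spacing between neighbouring harmonics (the F0) exceeds the ERB of the auditory filter centred on it. The sketch below applies this rule; the exact criterion varies across studies.

```python
def erb(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the auditory filter
    centred at f_hz (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def is_resolved(harmonic_number, f0_hz):
    """Rough resolvability criterion: a harmonic counts as resolved
    when the harmonic spacing (= F0) exceeds the ERB of the filter
    centred on that harmonic's frequency."""
    return f0_hz > erb(harmonic_number * f0_hz)

# With F0 = 200 Hz this criterion puts the transition near the
# 8th-9th harmonic, consistent with classic estimates.
print([n for n in range(1, 13) if is_resolved(n, 200.0)])  # [1, ..., 8]
```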

12.
Since the introduction of the concept of auditory scene analysis, there has been a paucity of work focusing on the theoretical explanation of how attention is allocated within a complex auditory scene. Here we examined signal detection in situations that promote either the fusion of tonal elements into a single sound object or the segregation of a mistuned element (i.e., harmonic) that "popped out" as a separate individuated auditory object and yielded the perception of concurrent sound objects. On each trial, participants indicated whether the incoming complex sound contained a brief gap or not. The gap (i.e., the signal) was always inserted in the middle of one of the tonal elements. Our findings were consistent with an object-based account in which the perception of two simultaneous auditory objects interfered with signal detection. This effect was observed for a wide range of gap durations and was greater when the mistuned harmonic was perceived as a separate object. These results suggest that attention may initially be shared among concurrent sound objects, thereby reducing listeners' ability to process acoustic details belonging to a particular sound object. These findings provide new theoretical insight for our understanding of auditory attention and auditory scene analysis.

13.
This study examined the effects of visual-verbal load (as measured by a visually presented reading-memory task with three levels) on a visual/auditory stimulus-response task. The three levels of load were defined as follows: "No Load" meant no other stimuli were presented concurrently; "Free Load" meant that a letter (A, B, C, or D) appeared at the same time as the visual or auditory stimulus; and "Force Load" was the same as "Free Load," except that the participants were also instructed to count how many times the letter A appeared. The stimulus-response task also had three levels: "irrelevant," "compatible," and "incompatible" spatial conditions, which required different key-pressing responses. The visual stimulus was a red ball presented either to the left or to the right of the display screen, and the auditory stimulus was a tone delivered from a position similar to that of the visual stimulus. Participants also processed an irrelevant stimulus. The results indicated that participants perceived auditory stimuli earlier than visual stimuli and reacted faster under stimulus-response compatible conditions. These results held even under a high visual-verbal load. These findings suggest the following guidelines for systems used in driving: an auditory source, appropriately compatible signal and manual-response positions, and a visually simplified background.

14.
In three experiments, listeners were required to either localize or identify the second of two successive sounds. The first sound (the cue) and the second sound (the target) could originate from either the same or different locations, and the interval between the onsets of the two sounds (stimulus onset asynchrony, SOA) was varied. Sounds were presented out of visual range at 135° azimuth, left or right. In Experiment 1, localization responses were made more quickly at the 100-ms SOA when the target sounded from the same location as the cue (i.e., a facilitative effect), and at the 700-ms SOA when the target and cue sounded from different locations (i.e., an inhibitory effect). In Experiments 2 and 3, listeners were required to monitor visual information presented directly in front of them at the same time as the auditory cue and target were presented behind them. These two experiments differed in that, in order to perform the visual task accurately in Experiment 3, eye movements to visual stimuli were required. In both experiments, a transition from facilitation at a brief SOA to inhibition at a longer SOA was observed for the auditory task. Taken together, these results suggest that location-based auditory inhibition of return (IOR) is not dependent on either eye movements or saccade programming to sound locations.

15.
Auditory delayed matching in the bottlenose dolphin
A bottlenose dolphin, already highly proficient in two-choice auditory discriminations, was trained over a nine-day period on auditory delayed matching-to-sample and then tested on 346 unique matching problems, as a function of the delay between the sample and test sounds. Each problem used new sounds and was from five to ten trials long, with the same sound used as the sample for all trials of a problem. At each trial, the sample was projected underwater for 2.5 sec, followed by a delay and then by a sequence of two 2.5-sec test sounds. One of the test sounds matched the sample; it was randomly first or second in the sequence, and randomly appeared at either a left or a right speaker. Responses to the locus of the matching test sound were reinforced. Over nine blocks of problems of varying size, the longest delay of the set of delays in a block was progressively increased from 15 sec initially to a final value of 120 sec. There was a progressive increase across the early blocks in the percentage of correct Trial 1 responses. A ceiling level of 100% correct responses was then attained over the final six blocks, during which there were 169 successive correct Trial 1 responses bracketed by two Trial 1 errors (at 24- and 120-sec delays). Performance on trials beyond the first followed a similar trend. Finally, when the sample duration was decreased to 0.2 sec or less, matching performance on Trial 1 of new problems dropped to chance levels.
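The trial procedure can be summarized in a short sketch; `play`, `wait`, and `get_response` are hypothetical hooks standing in for the actual apparatus, not part of the original study's instrumentation.

```python
import random

def dmts_trial(play, wait, get_response, sample, foil, delay_sec):
    """One auditory delayed matching-to-sample trial, following the
    procedure above: 2.5-s sample, retention delay, then two 2.5-s
    test sounds, with the match randomly first or second and randomly
    assigned to the left or right speaker."""
    play(sample, duration=2.5)       # 2.5-s underwater sample
    wait(delay_sec)                  # retention delay (15-120 s)
    match_side = random.choice(["left", "right"])
    foil_side = "right" if match_side == "left" else "left"
    tests = [(sample, match_side), (foil, foil_side)]
    random.shuffle(tests)            # match randomly first or second
    for sound, side in tests:
        play(sound, duration=2.5, speaker=side)
    # A response at the locus of the matching test sound is reinforced.
    return get_response() == match_side
```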

16.
Contrasting results in visual and auditory working memory studies suggest that the mechanisms of association between the location and identity of stimuli depend on the sensory modality of the input. In this auditory study, we tested whether the association of two features both encoded in the “what” stream differs from the association between a “what” and a “where” feature. In an old–new recognition task, blindfolded participants were presented with sequences of sounds varying in timbre, pitch, and location. They were required to judge whether the timbre, pitch, or location of a single-probe stimulus was identical to or different from the timbre, pitch, or location of one of the sounds of the previous sequence. Only variations in one of the three features were relevant for the task, whereas the other two features could undergo task-irrelevant changes. Results showed that task-irrelevant variations in the “what” features (either timbre or pitch) impaired recognition of sound location and of the other, task-relevant “what” feature, whereas changes in sound location did not affect the recognition of either of the “what” features. We conclude that the identity of sounds is incidentally processed even when not required by the task, whereas sound location is not maintained when task-irrelevant.

17.
Head orientation during auditory discriminations was studied in squirrel monkeys using a two-lever trial-by-trial procedure. Animals were studied using auditory discriminations based on the position of the sound and on spectral content differences between a pure tone and a noise. After the percentage of correct responses reached asymptote, head orientation was measured using videotape recordings. Orientation occurred on virtually every trial and was under the control of the position of the sound under all conditions. Lever responding was controlled by the same parameters of the sound under some conditions, and by different parameters in others. Orientation and lever responding were correlated (a lever response could be predicted from the direction of orientation) when both responses were under the control of the same parameters of the sound. The two responses were uncorrelated when they were controlled by different parameters of the sound. Orientation and lever responding were not functionally related.

18.
Recent experiments using a variety of techniques have suggested that speech perception involves separate auditory and phonetic levels of processing. Two models of auditory and phonetic processing appear to be consistent with existing data: (a) a strictly serial model, in which auditory information would be processed at one level, followed by the processing of phonetic information at a subsequent level; and (b) a parallel model, in which auditory and phonetic processing could proceed simultaneously. The present experiment attempted to distinguish empirically between these two models. Subjects identified either an auditory dimension (fundamental frequency) or a phonetic dimension (place of articulation of the consonant) of synthetic consonant-vowel syllables. When the two dimensions varied in a completely correlated manner, reaction times were significantly shorter than when either dimension varied alone. This “redundancy gain” could not be attributed to speed-accuracy trade-offs, selective serial processing, or differential transfer between conditions. These results allow rejection of a completely serial model, suggesting instead that at least some portion of auditory and phonetic processing can occur in parallel.
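The logic of the redundancy-gain test can be made concrete with a toy simulation: under a strictly serial model, a redundant (correlated) stimulus still passes through one level at a time, predicting no speed-up, whereas a parallel model lets the faster of the two analyses drive the response. The Gaussian processing-time parameters below are arbitrary assumptions for illustration only.

```python
import random

def process_time(mean=0.45, sd=0.08):
    """One simulated single-dimension processing time (seconds)."""
    return max(random.gauss(mean, sd), 0.05)

def mean_rts(n=10_000):
    """Mean reaction times for a single varying dimension, for the
    redundant condition under a strictly serial model (only one
    dimension is used at a time, so no gain), and under a parallel
    model (the faster analysis wins -> redundancy gain)."""
    single = sum(process_time() for _ in range(n)) / n
    serial = sum(process_time() for _ in range(n)) / n
    parallel = sum(min(process_time(), process_time()) for _ in range(n)) / n
    return single, serial, parallel

single, serial, parallel = mean_rts()
print(f"single {single:.3f}s  serial {serial:.3f}s  parallel {parallel:.3f}s")
# Expected ordering: parallel < single ~= serial
```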

19.
The study assessed whether the auditory reference provided by a musical scale could improve spatial exploration of a standard musical instrument keyboard in right-brain-damaged patients with left spatial neglect. As performing musical scales involves the production of predictable successive pitches, the expectation of the subsequent note may help patients explore a larger extent of space on the affected left side while producing scales from right to left. Eleven right-brain-damaged stroke patients with left spatial neglect, 12 patients without neglect, and 12 age-matched healthy participants played descending scales on a music keyboard. In a counterbalanced design, the participants' exploratory performance was assessed while producing scales under three feedback conditions: with congruent sound, no sound, or random sound feedback provided by the keyboard. The number of keys played and the timing of key presses were recorded. Spatial exploration by patients with left neglect was superior with congruent sound feedback compared with both the silence and random-sound conditions. Both the congruent and incongruent sound conditions were associated with greater deceleration in all groups. The frame provided by the musical scale improves exploration of the left side of space, contralateral to the right hemisphere damaged in patients with left neglect. Performing a scale with congruent sounds may trigger, to some extent, preserved auditory and spatial multisensory representations of successive sounds, thus influencing the time course of space scanning and ultimately resulting in more extensive spatial exploration. These findings also offer new perspectives for the rehabilitation of the disorder.
