Similar Literature
20 similar documents retrieved (search time: 828 ms).
1.
The neural mechanisms underlying the perception of pitch, a sensory attribute of paramount importance in hearing, have been a matter of debate for over a century. A question currently at the heart of the debate is whether the pitch of all harmonic complex tones can be determined by the auditory system using a single mechanism, or whether two different neural mechanisms are involved, depending on the stimulus conditions. When the harmonics are widely spaced, as is the case at high fundamental frequencies (F0s), and/or when the frequencies of the harmonics are low, the frequency components of the sound fall in different peripheral auditory channels and are then "resolved" by the peripheral auditory system. In contrast, at low F0s, or when the harmonics are high in frequency, several harmonics interact within the passbands of the same auditory filters and are thus "unresolved" by the peripheral auditory system. The idea that more than one mechanism mediates the encoding of pitch, depending on the resolvability status of the harmonics, was investigated here by testing for transfer of learning in F0 discrimination between different stimulus conditions involving either resolved or unresolved harmonics, after specific training in one of these conditions. The results, which show some resolvability-specificity of F0-discrimination learning, support the hypothesis that two different underlying mechanisms mediate the encoding of the F0 of resolved and unresolved harmonics.
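The resolved/unresolved distinction at issue here can be made concrete with a rough calculation. The following is a minimal Python sketch, assuming the Glasberg and Moore (1990) ERB approximation of auditory-filter bandwidth and, purely for illustration, the rule of thumb that a harmonic counts as resolved when fewer than about two harmonics fall within one ERB centred on it; neither the criterion nor the parameter values are taken from the study.

```python
def erb(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the auditory filter centred
    at f_hz, per the Glasberg & Moore (1990) approximation."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def classify_harmonics(f0, n_harmonics=20):
    """Label each harmonic of a complex tone as 'resolved' or 'unresolved'.

    Illustrative criterion: a harmonic is resolved when fewer than ~2
    harmonics fall within one ERB around it, i.e. when the harmonic
    spacing (= f0) exceeds half the local ERB.
    """
    labels = {}
    for n in range(1, n_harmonics + 1):
        f = n * f0
        labels[n] = "resolved" if f0 > erb(f) / 2.0 else "unresolved"
    return labels

if __name__ == "__main__":
    print(classify_harmonics(100.0))   # low F0: only the lowest harmonics resolved
    print(classify_harmonics(400.0))   # high F0: resolvability extends much higher
```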

2.
A sound that is briefly interrupted by a silent gap is perceived as discontinuous. However, when the gap is filled with noise, the sound may be perceived as continuing through the noise. It has been shown that this continuity illusion depends on the masking of the omitted target sound, but the underlying mechanisms have yet to be quantified thoroughly. In this article, we systematically quantify the relation between perceived continuity and the duration, relative power, or notch width of the interrupting broadband noise for interrupted and noninterrupted amplitude-modulated tones at different frequencies. We fitted the psychometric results in order to estimate the range of the noise parameters that induced auditory grouping. To explain our results within a common theoretical framework, we applied a power spectrum model to the different masking results and estimated the critical bandwidth of the auditory filter that may be responsible for the continuity illusion. Our results set constraints on the spectral resolution of the mechanisms underlying the continuity illusion and provide a stimulus set that can be readily applied for neurophysiological studies of its neural correlates.
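The power spectrum model invoked here treats masking as determined by the masker power passing through a single auditory filter centred on the target. Below is a minimal numerical sketch assuming a rounded-exponential (roex) filter shape; the slope parameter, notch widths, and spectrum level are illustrative, not the values fitted in the study.

```python
import numpy as np

def roex(g, p):
    """Rounded-exponential filter weighting W(g) = (1 + p*g) * exp(-p*g),
    with g the deviation from centre frequency normalised by the centre
    frequency, and p the slope parameter (the filter's ERB is 4*fc/p)."""
    g = np.abs(g)
    return (1.0 + p * g) * np.exp(-p * g)

def transmitted_noise_power(notch_halfwidth, p, spectrum_level=1.0,
                            g_max=0.8, n_steps=4000):
    """Power-spectrum-model estimate of the broadband-noise power reaching
    the filter output when the noise has a spectral notch of the given
    half-width (in normalised units g) around the target frequency."""
    g = np.linspace(0.0, g_max, n_steps)
    dg = g[1] - g[0]
    w = roex(g, p)
    in_noise = g >= notch_halfwidth            # noise energy lies outside the notch
    return 2.0 * spectrum_level * np.sum(w * in_noise) * dg   # both filter skirts

# Widening the notch reduces the masker power at the filter output; the
# continuity illusion should break down once this power falls below the level
# needed to mask the interrupted tone (the comparison level is arbitrary here).
p = 25.0
for notch in (0.0, 0.1, 0.2, 0.4):
    print(f"notch half-width g = {notch:.1f}: "
          f"power = {transmitted_noise_power(notch, p):.4f}")
```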

3.
Fifteen blindfolded Ss judged the spatial orientation of a bar, which rotated in the horizontal plane, by using proprioceptive and/or auditory information. Judgments were made when information from the two modalities was made to yield the same or conflicting spatial orientations of the bar. Both modalities were individually capable of providing equally accurate judgments; yet, when an auditory-proprioceptive discrepancy was introduced, auditory judgments were strongly biased by proprioceptive input. Proprioceptive judgments were only minimally influenced by conflicting auditory information.

4.
The effect of a visual stimulus on the auditory continuity illusion was examined. Observers judged whether a tone that was repeatedly alternated with a band-pass noise was continuous or discontinuous. In most observers, a transient visual stimulus that was synchronized with the onset of the noise increased the limit of illusory continuity in terms of maximum noise duration and maximum tone level. The smaller the asynchrony between the noise onset and the visual stimulus onset, the larger the visual effect on this illusion. On the other hand, detection of a tone added to the noise was not enhanced by the visual stimulus. These results cannot be fully explained by the conventional theory that illusory continuity is created by the decomposition of peripheral excitation produced by the occluding sound.

5.
Two auditory phenomena--stream segregation and illusory continuity through a wide-band noise interruption--were studied to determine whether the same principles of perceptual organization applied to both. A cycle was formed of a repeating alternation of two short bursts of narrow-band noise (NBN), one centered at a high frequency (H) and the other at a low frequency (L), with shorter bursts of wide-band noise (WBN) inserted between successive NBNs (H WBN L WBN H WBN...). In some conditions, listeners could hear a single NBN moving up and down behind the WBN bursts, although there was no NBN present with the WBN. Listeners rated the strength of this illusory continuity. Center frequency separation, rate of onsets, and bandwidth of the NBNs were varied. Increases in values of all three variables decreased illusory continuity. Other listeners rated the stream segregation of the H and L bands when successive NBNs were separated either by WBN bursts (as above) or by silences. The same three acoustic variables were manipulated. Increases in all three variables decreased the perception of a single stream. The similar disruptive effects on illusory continuity and on the one-stream percept in the stream segregation task support the idea that both phenomena depend on a common preliminary process of linking together the parts of a sequence that have similar frequencies.

6.
In four experiments the conditions under which frequency judgments reflect the relative frequency of complex perceptual events were explored. Subjects viewed a series of 4 x 4 grids each containing seven items, which were letters and numbers in one of four typefaces. Later judgments of the relative frequency with which particular letters appeared in particular typefaces were unaffected by a warning about an upcoming frequency judgment task, but were affected by both the time available for processing the stimuli and the nature of the cover task subjects engaged in while viewing the grids. Frequency judgments were poor when exposure durations were less than 2 s and when the cover task directed subjects' attention merely to the locations of the items within the grids. Frequency judgments improved when the cover task directed subjects' attention to the identity of the stimuli, especially to the conjunction of letter and typeface. The results suggest that frequency estimation of complex stimuli may be possible only for stimuli that have been processed as phenomenal objects.

7.
Getzmann S, Lewald J, Guski R. Perception, 2004, 33(5): 591-599
The final position of a moving visual object usually appears to be displaced in the direction of motion. We investigated this phenomenon, termed representational momentum, in the auditory modality. In a dark anechoic environment, an acoustic target (continuous noise or noise pulses) moved from left to right or from right to left along the frontal horizontal plane. Listeners judged the final position of the target using a hand pointer. Target velocity was 8°/s or 16°/s. Generally, the final target positions were localised as displaced in the direction of motion. With presentation of continuous noise, target velocity had a strong influence on mean displacement: displacements were stronger with lower velocity. No influence of sound velocity on displacement was found with motion of pulsed noise. Although these findings suggest that the underlying mechanisms may be different in the auditory and visual modality, the occurrence of displacements indicates that representational-momentum-like effects are not restricted to the visual modality, but may reflect a general phenomenon with judgments of dynamic events.

8.
The physiological processes underlying the segregation of concurrent sounds were investigated through the use of event-related brain potentials. The stimuli were complex sounds containing multiple harmonics, one of which could be mistuned so that it was no longer an integer multiple of the fundamental. Perception of concurrent auditory objects increased with degree of mistuning and was accompanied by negative and positive waves that peaked at 180 and 400 ms poststimulus, respectively. The negative wave, referred to as object-related negativity, was present during passive listening, but the positive wave was not. These findings indicate bottom-up and top-down influences during auditory scene analysis. Brain electrical source analyses showed that distinguishing simultaneous auditory objects involved a widely distributed neural network that included auditory cortices, the medial temporal lobe, and posterior association cortices.
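As a point of reference for the stimulus class described here, the sketch below synthesises a harmonic complex in which one component is mistuned by a chosen percentage; the harmonic count, mistuning value, duration, and sampling rate are illustrative and not necessarily those used in the study.

```python
import numpy as np

def mistuned_complex(f0, n_harmonics, mistuned_harmonic, mistuning_pct,
                     dur=0.4, sr=44100):
    """Harmonic complex tone whose mistuned_harmonic-th component is shifted
    away from its nominal integer multiple of f0 by mistuning_pct percent."""
    t = np.arange(int(dur * sr)) / sr
    signal = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        f = n * f0
        if n == mistuned_harmonic:
            f *= 1.0 + mistuning_pct / 100.0   # apply the mistuning
        signal += np.sin(2.0 * np.pi * f * t)
    return signal / n_harmonics                # crude normalisation

# Example: a 200-Hz, 10-harmonic complex with the 3rd harmonic mistuned by 8%,
# a degree of mistuning at which the component tends to 'pop out' as a
# separate auditory object.
stim = mistuned_complex(200.0, 10, 3, 8.0)
```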

9.
Although our subjective experience of the world is one of discrete sound sources, the individual frequency components that make up these separate sources are spread across the frequency spectrum. Listeners use various simple cues, including common onset time and harmonicity, to help them achieve this perceptual separation. Our ability to use harmonicity to segregate two simultaneous sound sources is constrained by the frequency resolution of the auditory system, and is much more effective for low-numbered, resolved harmonics than for higher-numbered, unresolved ones. Our ability to use interaural time differences (ITDs) in perceptual segregation poses a paradox. Although ITDs are the dominant cue for the localization of complex sounds, listeners cannot use ITDs alone to segregate the speech of a single talker from similar simultaneous sounds. Listeners are, however, very good at using ITD to track a particular sound source across time. This difference might reflect two different levels of auditory processing, indicating that listeners attend to grouped auditory objects rather than to those frequencies that share a common ITD.

10.
When portions of a sound are replaced by a potential masker, the missing fragments may be perceptually restored, resulting in apparent continuity of the interrupted signal. This phenomenon has been examined extensively by using pulsation threshold, auditory induction, and phonemic restoration paradigms in which two sounds, the inducer and the inducee, are alternated (ABABA ...), and the conditions required for apparent continuity of the lower amplitude inducee are determined. Previous studies have generally neglected to examine concomitant changes produced in the inducing sound. Results from the present experiments have demonstrated decreases in the loudness of inducers using inducer/inducee pairs consisting of tone/tone and noise/noise, as well as the noise/speech pairs associated with phonemic restorations. Interestingly, reductions in inducer loudness occurred even when the inducee was heard as discontinuous, and these decreases in loudness were accompanied by graded increases in apparent duration of the inducee, contrary to the conventional view of auditory induction as an all-or-none phenomenon. Under some conditions, the reduced loudness of the inducer was coupled with a marked alteration in its timbre. Especially profound changes in the inducer quality occurred when the alternating stimuli were tones having the same frequency and differing only in intensity: it seems that following subtraction of components corresponding to the inducee, an anomalous auditory residue remained that did not correspond to the representation of a tone.

11.
Many studies have shown that apes and monkeys are adept at cross-modal matching tasks requiring the subject to identify objects in one modality when information regarding those objects has been presented in a different modality. However, much less is known about non-human primates’ production of multimodal signaling in communicative contexts. Here, we present evidence from a study of 110 chimpanzees demonstrating that they select the modality of communication in accordance with variations in the attentional focus of a human interactant, which is consistent with previous research. In each trial, we presented desirable food to one of two chimpanzees, turning mid-way through the trial from facing one chimpanzee to facing the other chimpanzee, and documented their communicative displays as the experimenter turned towards or away from the subjects. These chimpanzees varied their signals within a context-appropriate modality, displaying a range of different visual signals when a human experimenter was facing them and a range of different auditory or tactile (attention-getting) signals when the human was facing away from them; this finding extends previous research on multimodal signaling in this species. Thus, in the impoverished circumstances characteristic of captivity, complex signaling tactics are nevertheless exhibited by chimpanzees, suggesting continuity in intersubjective psychological processes in humans and apes.

12.
Multiple attributes of a visual array are often more efficiently processed when they are attributes of a single object than when they are attributes of different objects—a pattern reflecting the limitations of object attention. This study used psychophysical methods to evaluate the object attention limitations in the report of attributes (orientation and phase) computed early in visual analysis for spatially separated objects. These limitations had large effects on dual-object report thresholds when different judgments were required for the two objects (orientation for one object and phase for the other), but the effects were small or nonexistent when the same judgment was made about both objects. Judgment consistency reduced or eliminated the expression of object attention deficits. Thus, the deficits in dual-object report reflect both division of attention over objects and the calculation of independent reference or judgment operations. Dual-object deficits, when they occurred, were substantial in displays with external noise masks. Smaller effects were observed in clear displays, even when difficulty was equated by stimulus contrast. Thus, the primary consequence of object attention is the exclusion of external noise, or mask suppression, and enhancement of the stimulus in clear displays is a secondary consequence.

13.
Modality effects in rhythm processing were examined using a tempo judgment paradigm, in which participants made speeding-up or slowing-down judgments for auditory and visual sequences. A key element of stimulus construction was that the expected pattern of tempo judgments for critical test stimuli depended on a beat-based encoding of the sequence. A model-based measure of degree of beat-based encoding computed from the pattern of tempo judgments revealed greater beat sensitivity for auditory rhythms than for visual rhythms. Visual rhythms with prior auditory exposure were more likely to show a pattern of tempo judgments similar to that for auditory rhythms than were visual rhythms without prior auditory exposure, but only for a beat period of 600 msec. Slowing down the rhythms eliminated the effect of prior auditory exposure on visual rhythm processing. Taken together, the findings in this study support the view that auditory rhythms demonstrate an advantage over visual rhythms in beat-based encoding and that the auditory encoding of visual rhythms can be facilitated with prior auditory exposure, but only within a limited temporal range. The broad conclusion from this research is that “hearing visual rhythms” is neither obligatory nor automatic, as was previously claimed by Guttman, Gilroy, and Blake (2005).

14.
Six adult subjects (3 men, 3 women) produced highly similar spontaneous speech utterances in quiet and with 90-dB SPL white noise. The frequency of occurrence of perceptual judgments of primary stressing in an utterance was not affected by the masking noise. This finding supplements our previous report that variability of fundamental frequency (f0) for stress production during spontaneous speech was preserved under short-term auditory disruption. Also, it adds further support to the contention that f0 is under open-loop regulation.

15.
Since the introduction of the concept of auditory scene analysis, there has been a paucity of work focusing on the theoretical explanation of how attention is allocated within a complex auditory scene. Here we examined signal detection in situations that promote either the fusion of tonal elements into a single sound object or the segregation of a mistuned element (i.e., harmonic) that "popped out" as a separate individuated auditory object and yielded the perception of concurrent sound objects. On each trial, participants indicated whether the incoming complex sound contained a brief gap or not. The gap (i.e., signal) was always inserted in the middle of one of the tonal elements. Our findings were consistent with an object-based account in which perception of two simultaneous auditory objects interfered with signal detection. This effect was observed for a wide range of gap durations and was greater when the mistuned harmonic was perceived as a separate object. These results suggest that attention may be initially shared among concurrent sound objects, thereby reducing listeners' ability to process acoustic details belonging to a particular sound object. These findings provide new theoretical insight for our understanding of auditory attention and auditory scene analysis.

16.
Listeners are quite adept at maintaining integrated perceptual events in environments that are frequently noisy. Three experiments were conducted to assess the mechanisms by which listeners maintain continuity for upward sinusoidal glides that are interrupted by a period of broadband noise. The first two experiments used stimulus complexes consisting of three parts: prenoise glide, broadband noise interval, and postnoise glide. For a given prenoise glide and noise interval, the subject's task was to adjust the onset frequency of a same-slope postnoise glide so that, together with the prenoise glide and noise, the complex sounded as "smooth and continuous as possible." The slope of the glides (1.67, 3.33, 5, and 6.67 Bark/sec) as well as the duration (50, 200, and 350 msec) and relative level of the interrupting noise (0, -6, and -12 dB S/N) were varied. For all but the shallowest glides, subjects consistently adjusted the offset portion of the glide to frequencies lower than predicted by accurate interpolation of the prenoise portion. Curiously, for the shallowest glides, subjects consistently selected postnoise glide onset-frequency values higher than predicted by accurate extrapolation of the prenoise glide. There was no effect of noise level on subjects' adjustments in the first two experiments. The third experiment used a signal detection task to measure the phenomenal experience of continuity through the noise. Frequency glides were either present or absent during the noise for stimuli like those used in the first two experiments as well as for stimuli that had no prenoise or postnoise glides. Subjects were more likely to report the presence of glides in the noise when none occurred (false positives) when noise was shorter or of greater relative level and when glides were present adjacent to the noise.
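Because the glide slopes here are specified in Bark/sec, the prediction for "accurate interpolation" through the noise can be computed on the Bark scale. The sketch below does so using the Traunmüller (1990) Hz-to-Bark approximation; the conversion formula and the example values are assumptions for illustration and may differ from the exact scale the authors used.

```python
def hz_to_bark(f):
    """Traunmüller (1990) approximation of the Bark critical-band scale."""
    return 26.81 * f / (1960.0 + f) - 0.53

def bark_to_hz(z):
    """Inverse of the Traunmüller approximation."""
    return 1960.0 * (z + 0.53) / (26.28 - z)

def interpolated_postnoise_onset(prenoise_offset_hz, slope_bark_per_s, noise_dur_s):
    """Onset frequency of the postnoise glide that a listener would choose if
    the interrupted glide were extrapolated linearly on the Bark scale
    through the noise burst."""
    z = hz_to_bark(prenoise_offset_hz) + slope_bark_per_s * noise_dur_s
    return bark_to_hz(z)

# Example (illustrative values): a 3.33 Bark/sec upward glide ending at
# 1000 Hz before a 350-ms noise burst.
print(interpolated_postnoise_onset(1000.0, 3.33, 0.350))
```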

17.
Three experiments were performed to examine listeners’ thresholds for identifying stimuli whose spectra were modeled after the vowels /i/ and /ε/, with the differences between these stimuli restricted to the frequency of the first formant. The stimuli were presented in a low-pass masking noise that spectrally overlapped the first formant but not the higher formants. Identification thresholds were lower when the higher formants were present than when they were not, even though the first formant contained the only distinctive information for stimulus identification. This indicates that listeners were more sensitive in identifying the first formant energy through its contribution to the vowel than as an independent percept; this effect is given the name coherence masking protection. The first experiment showed this effect for synthetic vowels in which the distinctive first formant was supported by a series of harmonics that progressed through the higher formants. In the second two experiments, the harmonics in the first formant region were removed, and the first formant was simulated by a narrow band of noise. This was done so that harmonic relations did not provide a basis for grouping the lower formant with the higher formants; coherence masking protection was still observed. However, when the temporal alignment of the onsets and offsets of the higher and lower formants was disrupted, the effect was eliminated, although the stimuli were still perceived as vowels. These results are interpreted as indicating that general principles of auditory grouping that can exploit regularities in temporal patterns cause acoustic energy belonging to a coherent speech sound to stand out in the auditory scene.

18.
Listeners are quite adept at maintaining integrated perceptual events in environments that are frequently noisy. Three experiments were conducted to assess the mechanisms by which listeners maintain continuity for upward sinusoidal glides that are interrupted by a period of broadband noise. The first two experiments used stimulus complexes consisting of three parts: prenoise glide, broadband noise interval, and postnoise glide. For a given prenoise glide and noise interval, the subject’s task was to adjust the onset frequency of a same-slope postnoise glide so that, together with the prenoise glide and noise, the complex sounded as “smooth and continuous as possible.” The slope of the glides (1.67, 3.33, 5, and 6.67 Bark/sec) as well as the duration (50, 200, and 350 msec) and relative level of the interrupting noise (0, -6, and -12 dB S/N) were varied. For all but the shallowest glides, subjects consistently adjusted the offset portion of the glide to frequencies lower than predicted by accurate interpolation of the prenoise portion. Curiously, for the shallowest glides, subjects consistently selected postnoise glide onset-frequency values higher than predicted by accurate extrapolation of the prenoise glide. There was no effect of noise level on subjects’ adjustments in the first two experiments. The third experiment used a signal detection task to measure the phenomenal experience of continuity through the noise. Frequency glides were either present or absent during the noise for stimuli like those used in the first two experiments as well as for stimuli that had no prenoise or postnoise glides. Subjects were more likely to report the presence of glides in the noise when none occurred (false positives) when noise was shorter or of greater relative level and when glides were present adjacent to the noise.

19.
Two new, long-lasting phenomena involving modality of stimulus presentation are documented. In one series of experiments we investigated effects of modality of presentation on order judgments. Order judgments for auditory words were more accurate than order judgments for visual words at both the beginning and the end of lists, and the auditory advantage increased with the temporal separation of the successive items. A second series of experiments investigated effects of modality on estimates of presentation frequency. Frequency estimates of repeated auditory words exceeded frequency estimates of repeated visual words. The auditory advantage increased with frequency of presentation, and this advantage was not affected by the retention interval. These various effects were taken as support for a temporal coding assumption, that auditory presentation produces a more accurate encoding of time of presentation than does visual presentation.

20.
Five experiments investigated the ability to discriminate between musical timbres based on vibrotactile stimulation alone. Participants made same/different judgments on pairs of complex waveforms presented sequentially to the back through voice coils embedded in a conforming chair. Discrimination between cello, piano, and trombone tones matched for F0, duration, and magnitude was above chance with white noise masking the sound output of the voice coils (Experiment 1), with additional masking to control for bone-conducted sound (Experiment 2), and among a group of deaf individuals (Experiment 4a). Hearing (Experiment 3) and deaf individuals (Experiment 4b) also successfully discriminated between dull and bright timbres varying only with regard to spectral centroid. We propose that, as with auditory discrimination of musical timbre, vibrotactile discrimination may involve the cortical integration of filtered output from frequency-tuned mechanoreceptors functioning as critical bands.
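The "dull" versus "bright" manipulation refers to the spectral centroid, the amplitude-weighted mean frequency of the spectrum. A minimal sketch of the computation, with illustrative rather than study-specific signals:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency (Hz) of the magnitude spectrum;
    higher values correspond to perceptually 'brighter' timbres."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Two illustrative 220-Hz tones: one with a weak upper partial ('dull'),
# one with a strong partial at 1760 Hz ('bright').
sr = 44100
t = np.arange(0, 0.5, 1.0 / sr)
dull = np.sin(2 * np.pi * 220 * t) + 0.1 * np.sin(2 * np.pi * 440 * t)
bright = np.sin(2 * np.pi * 220 * t) + 0.9 * np.sin(2 * np.pi * 1760 * t)
print(spectral_centroid(dull, sr), spectral_centroid(bright, sr))
```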
