Similar Articles (20 results)
1.
The exponential increase of intensity for an approaching sound source provides salient information for a listener to make judgments of time to arrival (TTA). Specifically, a listener will experience a greater rate of increasing intensity for higher than for lower frequencies during a sound source’s approach. To examine the relative importance of this spectral information, listeners were asked to make judgments about the arrival times of nine 1-octave-band sound sources (the bands were consecutive, nonoverlapping single octaves, ranging from 40–80 Hz to ~10–20 kHz). As is typical in TTA tasks, listeners tended to underestimate the arrival time of the approaching sound source. In naturally occurring and independently manipulated amplification curves, bands with center frequencies between 120 and 250 Hz caused the least underestimation, and bands with center frequencies between 2000 and 7500 Hz caused the most underestimation. This spectral influence appears to be related to the greater perceived urgency of higher-frequency sounds.
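The accelerating intensity rise that this abstract builds on follows directly from free-field geometry. A minimal sketch, not taken from the article, assuming a point source under the inverse-square law (each halving of distance adds about 6 dB) and ignoring the frequency-dependent air absorption that makes high frequencies rise even faster:

```python
import math

def intensity_db(distance_m, ref_db=60.0, ref_distance_m=100.0):
    """Free-field level under the inverse-square law:
    halving the distance raises the level by ~6 dB."""
    return ref_db + 20.0 * math.log10(ref_distance_m / distance_m)

# Hypothetical source approaching at 10 m/s from 100 m away;
# sample the received level once per second for 10 s.
levels = [intensity_db(100.0 - 10.0 * t) for t in range(10)]

# The rise accelerates as the source nears: the last 1-s step
# is several times larger than the first.
step_first = levels[1] - levels[0]   # ~0.9 dB
step_last = levels[-1] - levels[-2]  # ~6.0 dB
```

The reference level, distance, and speed here are arbitrary illustration values; the point is only that the rate of intensity change itself grows as arrival nears, which is the cue listeners exploit in TTA judgments.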

2.
Grassi M (2010). Perception, 39(10), 1424-1426.
Looming sounds (sounds increasing in intensity over time) are more salient than receding sounds (a looming sound reversed in time). For example, they are estimated as being longer, louder, and more changing in loudness than receding sounds. Some authors interpret the looming salience as evolutionarily adaptive, because it increases the margins of safety of the perceiver in the case of preparatory behaviours (e.g., a motor reaction to an approaching sound source). Recently, Neuhoff et al (2009, Journal of Experimental Psychology: Human Perception and Performance, 35, 225-234) found that females more than males show overestimation of the spatiotemporal properties of virtually simulated looming sound sources. Here, I investigated whether the sex difference could be observed for the subjective duration of looming and receding sounds, and found that females more than males overestimate the duration of looming sounds in comparison to receding sounds.

3.
Auditory apparent motion under binaural and monaural listening conditions
This investigation examined the ability of listeners to perceive apparent motion under binaural and monaural listening conditions. Fifty-millisecond broadband noise sources were presented through two speakers separated in space by either 10 degrees, 40 degrees, or 160 degrees, centered about the subject's midline. On each trial, the sources were temporally separated by 1 of 12 interstimulus onset intervals (ISOIs). Six listeners were asked to place their experience of these sounds into one of five categories (single sound, simultaneous sounds, continuous motion, broken motion, or successive sounds), and to indicate either the proper temporal sequence of presentation or the direction of motion, depending on whether or not motion was perceived. Each listener was tested at all spatial separations under binaural and monaural listening conditions. Motion was perceived in the binaural listening condition at all spatial separations tested for ISOIs between 20 and 130 msec. In the monaural listening condition, motion was reliably heard by all subjects at 10 degrees and 40 degrees for the same range of ISOIs. At 160 degrees, only 3 of the 6 subjects consistently reported motion. However, when motion was perceived in the monaural condition, the direction of motion could not be determined.

4.
In the present experiment, the authors tested Mandarin and English listeners on a range of auditory tasks to investigate whether long-term linguistic experience influences the cognitive processing of nonspeech sounds. As expected, Mandarin listeners identified Mandarin tones significantly more accurately than English listeners; however, performance did not differ across the listener groups on a pitch discrimination task requiring fine-grained discrimination of simple nonspeech sounds. The crucial finding was that cross-language differences emerged on a nonspeech pitch contour identification task: The Mandarin listeners more often misidentified flat and falling pitch contours than the English listeners in a manner that could be related to specific features of the sound structure of Mandarin, which suggests that the effect of linguistic experience extends to nonspeech processing under certain stimulus and task conditions.

5.
Many auditory displays use acoustic attributes such as frequency, intensity, and spectral content to represent different characteristics of multidimensional data. This study demonstrated a perceptual interaction between dynamic changes in pitch and loudness, as well as perceived asymmetries in directional acoustic change, that distorted the data relations represented in an auditory display. Three experiments showed that changes in loudness can influence pitch change, that changes in pitch can influence loudness change, and that increases in acoustic intensity are judged to change more than equivalent decreases. Within a sonification of stock market data, these characteristics created perceptual distortions in the data set. The results suggest that great care should be exercised when using lower level acoustic dimensions to represent multidimensional data.
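The distortion this abstract describes arises even when the data-to-sound mapping itself is perfectly linear. A hypothetical sketch (not the authors' sonification; the value and frequency ranges are invented for illustration) of the kind of mapping a stock-market sonification might use:

```python
def sonify(value, v_min, v_max, f_min=220.0, f_max=880.0):
    """Linearly map a data value onto a frequency range (Hz)."""
    frac = (value - v_min) / (v_max - v_min)
    return f_min + frac * (f_max - f_min)

# Equal data steps produce exactly equal frequency steps...
prices = [100, 105, 110, 115, 120]
freqs = [sonify(p, 100, 120) for p in prices]  # 220, 385, ..., 880 Hz
```

The physical mapping is faithful: each 5-unit price step yields the same 165 Hz step. The study's point is that the *perceptual* side is not faithful; if loudness happens to covary with the rising pitch, or if the series later falls by the same amounts, listeners judge the equal physical changes as unequal, so the displayed data relations are distorted despite the linear mapping.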

6.
Event-related potentials (ERPs) were utilized to study brain activity while subjects listened to speech and nonspeech stimuli. The effect of duplex perception was exploited, in which listeners perceive formant transitions that are isolated as nonspeech "chirps," but perceive formant transitions that are embedded in synthetic syllables as unique linguistic events with no chirp-like sounds heard at all (Mattingly et al., 1971). Brain ERPs were recorded while subjects listened to and silently identified plain speech-only tokens, duplex tokens, and tone glides (perceived as "chirps" by listeners). A highly controlled set of stimuli was developed that represented equivalent speech and nonspeech stimulus tokens such that the differences were limited to a single acoustic parameter: amplitude. The acoustic elements were matched in terms of number and frequency of components. Results indicated that the neural activity in response to the stimuli was different for different stimulus types. Duplex tokens had significantly longer latencies than the pure speech tokens. The data are consistent with the contention of separate modules for phonetic and auditory stimuli.

7.
Three experiments showed that dynamic frequency change influenced loudness. Listeners heard tones that had concurrent frequency and intensity change and tracked loudness while ignoring pitch. Dynamic frequency change significantly influenced loudness. A control experiment showed that the effect depended on dynamic change and was opposite that predicted by static equal loudness contours. In a 3rd experiment, listeners heard white noise intensity change in one ear and harmonic frequency change in the other and tracked the loudness of the noise while ignoring the harmonic tone. Findings suggest that the dynamic interaction of pitch and loudness occurs centrally in the auditory system; is an analytic process; has evolved to take advantage of naturally occurring covariation of frequency and intensity; and reflects a shortcoming of traditional static models of loudness perception in a dynamic natural setting.

8.
The purpose of the present study was to examine the nature of auditory representations by manipulating the semantic and physical relationships between auditory objects. On each trial, listeners heard a group of four simultaneous sounds for 1 sec, followed by 350 msec of noise, and then either the same sounds or three of the same plus a new one. Listeners completed a change-detection task and an object-encoding task. For change detection, listeners made a same-different judgment for the two groups of sounds. Object encoding was measured by presenting probe sounds that either were or were not present in the two groups. In Experiments 1 and 3, changing the target to an object that was acoustically different from but semantically the same as the original target resulted in more errors on both tasks than when the target changed to an acoustically and semantically different object. In Experiment 2, comparison of semantic and acoustic effects demonstrated that acoustics provide a weaker cue than semantics for both change detection and object encoding. The results suggest that listeners rely more on semantic information than on physical detail.

9.
With miniature microphones inserted into the external ear canals of a model and the sound source 90 degrees to the left of midline, low-pass and broadband noise bursts were picked up and recorded on magnetic tape. The bursts were generated in two highly contrasting acoustic environments: an anechoic and an echoic chamber. The taped sounds were played back monaurally and binaurally via headphones to 16 listeners seated in an acoustically neutral setting. They were instructed to estimate the distance of the stimuli. Apparent distances of bursts recorded in the echoic or reverberant chamber far exceeded those recorded in the anechoic chamber. It mattered not whether the sounds were presented monaurally or binaurally. What did influence distance estimates dramatically was the frequency composition of the stimuli. Low-pass sounds recorded in either acoustic environment were consistently judged to be further removed than high-pass sounds recorded in the same setting. They were also more likely to appear from behind the listener. In our moment-to-moment transaction with the acoustic environment, distant sounds generally have less acoustic energy in the higher audio frequencies. We suggest that this lifetime of auditory experience influenced our listeners' scale of relative distance.

10.
The auditory kappa effect is a tendency to base the perceived duration of an inter-onset interval (IOI) separating two sequentially presented sounds on the degree of relative pitch distance separating them. Previous research has found that the degree of frequency discrepancy between tones extends the subjective duration of the IOI. In Experiment 1, auditory kappa effects for sound intensity were tested using a three-tone, AXB paradigm (where the intensity of tone X was shifted to be closer to either Tone A or B). Tones closer in intensity level were perceived as occurring closer in time, evidence of an auditory-intensity kappa effect. In Experiments 2 and 3, the auditory motion hypothesis was tested by preceding AXB patterns with null intensity and coherent intensity context sequences, respectively. The auditory motion hypothesis predicts that coherent sequences should enhance the perception of motion and increase the strength of kappa effects. In this study, the presence of context sequences reduced kappa effect strength regardless of the properties of the context tones.

11.
Changes in the spectral content of wide-band auditory stimuli have been repeatedly implicated as a possible cue to the distance of a sound source. Few of the previous studies of this factor, however, have considered whether the cue provided by spectral content serves as an absolute or a relative cue. That is, can differences in spectral content indicate systematic differences in distance even on their first presentation to a listener, or must the listener be able to compare sounds with one another in order to perceive some change in their distances? An attempt to answer this question, and simultaneously to evaluate the possibly confounding influence of changes in the sound level and/or the loudness of the stimuli, is described in this paper. The results indicate that a decrease in high-frequency content (as might physically be produced by passage through a greater amount of air) can lead to increases in perceived auditory distance, but only when compared with similar sounds having a somewhat different high-frequency content; i.e., spectral information can serve as a relative cue for auditory distance, independent of changes in overall sound level.

12.
A number of reports have suggested that changing intensity in short tonal stimuli is asymmetrically perceived. In particular, steady stimuli may be heard as growing louder; stimuli must decrease in intensity to be heard as steady in loudness. The influence of stimulus duration on this perceptual asymmetry was examined. Three participants heard diotic tonal stimuli of eight durations between 0.8 s and 2.5 s. Each stimulus increased, decreased, or remained steady in intensity; initial intensity was 40 dB SPL (sound pressure level relative to 0.0002 dynes/cm2), and carrier frequency was 1 kHz. Participants made forced binary responses of “growing louder” or “growing softer” to each stimulus. For each duration, that value of intensity change eliciting equal numbers of both responses was determined. The results indicated a pronounced perceptual asymmetry for 0.8-s stimuli, which diminished for longer stimuli; changing intensity in 2.5-s stimuli was perceived symmetrically. Additionally, sensitivity to changing intensity improved as stimulus duration increased, suggesting that responses may be based in part on the difference in intensity between the beginning and end of the stimulus. Possible ramifications of the asymmetry reside in (a) the percussive nature of many natural sounds and (b) selective responding to approaching sound sources.
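The "value of intensity change eliciting equal numbers of both responses" is a point of subjective equality (PSE), typically read off the psychometric function. A minimal sketch of that computation under assumed, invented response data (the abstract does not report its numbers or fitting method; linear interpolation stands in for whatever fit the authors used):

```python
def pse(levels, p_louder):
    """Interpolate the intensity change (dB) at which the proportion of
    'growing louder' responses crosses 0.5: the point of subjective equality."""
    pairs = list(zip(levels, p_louder))
    for (x0, p0), (x1, p1) in zip(pairs, pairs[1:]):
        if min(p0, p1) <= 0.5 <= max(p0, p1):
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    return None  # 0.5 never crossed within the tested range

# Hypothetical responses for a short stimulus: even at 0 dB change,
# listeners mostly report "growing louder", so the PSE lies below zero,
# i.e., the stimulus must decrease in intensity to sound steady.
deltas = [-4, -2, 0, 2, 4]           # dB change over the stimulus
p = [0.10, 0.35, 0.70, 0.90, 0.97]   # proportion of "growing louder"
bias = pse(deltas, p)                # negative value = perceptual asymmetry
```

A negative PSE here is exactly the asymmetry the abstract describes, and the study's finding amounts to this value moving toward 0 dB as stimulus duration grows from 0.8 s to 2.5 s.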

13.
We studied auditory discrimination of simulated single-formant frequency transitions that resembled portions of certain speech consonants. Significant age differences in transition discrimination occurred; both children and older adults required larger acoustic differences between transitions for discrimination than did teenagers/young adults. Longer transitions were more easily discriminated than shorter transitions by all listeners, and there were no differences between discriminations of rising and falling transitions. Teens/young adults and older adults, but not children, required larger frequency differences to discriminate frequency transitions followed by a steady-state sound than for transitions alone. There were also age differences in discrimination of steady-state sounds. These developmental-perceptual differences may help explain why children and older adults who have good pure-tone sensitivity may experience difficulty in understanding speech.

14.
Reflected sounds are often treated as an acoustic problem because they produce false localization cues and decrease speech intelligibility. However, their properties are shaped by the acoustic properties of the environment and therefore are a potential source of information about that environment. The objective of this study was to determine whether information carried by reflected sounds can be used by listeners to enhance their awareness of their auditory environment. Twelve listeners participated in two auditory training tasks in which they learned to identify three environments based on a limited subset of sounds and then were tested to determine whether they could transfer that learning to new, unfamiliar sounds. Results showed that significant learning occurred despite the task difficulty. An analysis of stimulus attributes suggests that it is easiest to learn to identify reflected sound when it occurs in sounds with longer decay times and broadly distributed dominant spectral components.

15.
Sounds that are equivalent in all aspects except for their temporal envelope are perceived differently. Sounds with rising temporal envelopes are perceived as louder and longer, and show a greater change in loudness throughout their duration, than sounds with falling temporal envelopes. Stecker and Hafter (2000, Journal of the Acoustical Society of America, 107, 3358-3368) proposed that participants ignore the decay portion of sounds with falling temporal envelopes to account for observed loudness differences, but there is no empirical evidence to support this hypothesis. To test this idea, two duration-matching experiments were performed. One experiment used broadband noise and the other natural stimuli. Different groups of participants were given different instruction sets asking them to (1) simply match the duration or (2) include all aspects of the sounds. Both experiments produced the same result. The first instruction set, which represented participants' natural biases, yielded shorter subjective durations for sounds with falling temporal envelopes than for sounds with rising temporal envelopes. By contrast, asking participants to include all aspects of the sounds significantly reduced the size of the asymmetry in subjective duration, a result that supports Stecker and Hafter's hypothesis. This segregation of the stimulus at the perceptual level is consistent with observed asymmetries in loudness change and overall loudness for sounds with rising and falling temporal envelopes, but it does not account for the entire effect. The remaining portion of the effect, after considering biases due to instructions, is not likely a result of adaptation but could be associated with persistence. The amount of persistence was inferred from behavioral masking data obtained for these sounds.

16.
Blind persons emit sounds to detect objects by echolocation. Both the perceived pitch and the perceived loudness of the emitted sound change as it fuses with the reflections from nearby objects. Blind persons generally are better than sighted persons at echolocation, but it is unclear whether this superiority is related to detection of pitch, loudness, or both. We measured the ability of twelve blind and twenty-five sighted listeners to determine which of two sounds (500 ms noise bursts) had been recorded in the presence of a reflecting object; the recordings were made with an artificial head in a room with reflecting walls. The sound pairs were original recordings differing in both pitch and loudness, or manipulated recordings with either the pitch or the loudness information removed. Observers responded using a 2AFC method with verbal feedback. For both blind and sighted listeners, performance declined more with the pitch information removed than with the loudness information removed. In addition, the blind performed clearly better than the sighted as long as the pitch information was present, but not when it was removed. Taken together, these results show that the ability to detect pitch is a main factor underlying high performance in human echolocation.

17.
In this study, we show that the contingent auditory motion aftereffect is strongly influenced by visual motion information. During an induction phase, participants listened to rightward-moving sounds with falling pitch alternated with leftward-moving sounds with rising pitch (or vice versa). Auditory aftereffects (i.e., a shift in the psychometric function for unimodal auditory motion perception) were bigger when a visual stimulus moved in the same direction as the sound than when no visual stimulus was presented. When the visual stimulus moved in the opposite direction, aftereffects were reversed and thus became contingent upon visual motion. When visual motion was combined with a stationary sound, no aftereffect was observed. These findings indicate that there are strong perceptual links between the visual and auditory motion-processing systems.

18.
We tested whether listeners are differentially responsive to the presence or absence of voicing, a salient, distinguishing acoustic feature, in laughter. Each of 128 participants rated 50 voiced and 20 unvoiced laughs twice according to one of five different rating strategies. Results were highly consistent regardless of whether participants rated their own emotional responses, likely responses of other people, or one of three perceived attributes concerning the laughers, thus indicating that participants were experiencing similarly differentiated affective responses in all these cases. Specifically, voiced, songlike laughs were significantly more likely to elicit positive responses than were variants such as unvoiced grunts, pants, and snortlike sounds. Participants were also highly consistent in their relative dislike of these other sounds, especially those produced by females. Based on these results, we argue that laughers use the acoustic features of their vocalizations to shape listener affect.

19.
Everyday experience tells us that some types of auditory sensory information are retained for long periods of time. For example, we are able to recognize friends by their voice alone or identify the source of familiar noises even years after we last heard the sounds. It is thus somewhat surprising that the results of most studies of auditory sensory memory show that acoustic details, such as the pitch of a tone, fade from memory in ca. 10-15 s. One should, therefore, ask (1) what types of acoustic information can be retained for a longer term, (2) what circumstances allow or help the formation of durable memory records for acoustic details, and (3) how such memory records can be accessed. The present review discusses the results of experiments that used a model of auditory recognition, the auditory memory reactivation paradigm. Results obtained with this paradigm suggest that the brain stores features of individual sounds embedded within representations of acoustic regularities that have been detected for the sound patterns and sequences in which the sounds appeared. Thus, sounds closely linked with their auditory context are more likely to be remembered. The representations of acoustic regularities are automatically activated by matching sounds, enabling object recognition.

20.
Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial 2 sounds, an object (a vowel) and a stream (a series of tones), were presented with 1 target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to 1 of the 2 sounds, and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to 3. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed.
