Similar Articles
 20 similar articles found (search time: 46 ms)
1.
A study of pure-tone intensity discrimination is presented in which amplitude changes are detected in 1000 Hz tone bursts 15–20 msec in duration. The masking function (log detectable increment vs log background intensity) is found to have a slope of 9/10 when calculations are carried out via energy measurements. This near-miss to Weber’s law is in agreement with other data reported in the literature. The masking slope proves to be essentially independent of stimulus duration between 15 msec and 1.5 sec. Our stable slope parameter is interpreted as a detectability restriction generated by “mass-flow” phenomena in the auditory channel. These phenomena are thought to be similar to the fluctuations accompanying a noisy or turbulent stream of events. Pure-tone intensity discrimination is then analyzed as a special case of energy detection.
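The 9/10 slope can be written out explicitly. In the sketch below, E is the background energy, ΔE is the just-detectable energy increment, and k is an unspecified constant; it simply restates the relation reported in the abstract and is not a formula quoted from the paper.

```latex
% Near-miss to Weber's law: masking function with slope 9/10 in log-log coordinates
\log \Delta E = \tfrac{9}{10}\,\log E + k
\quad\Longleftrightarrow\quad
\Delta E \propto E^{0.9}
% Strict Weber's law would have slope 1, i.e. \Delta E \propto E (a constant Weber fraction).
```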

2.
Guttman and Julesz (1963) employed recycling frozen noise segments (RFNs) as model stimuli in their classic study of the lower limits for periodicity detection and short-term auditory memory. They reported that listeners can hear iteration of these stochastic signals effortlessly as "motorboating" for repetition periods ranging from 50 to 250 msec and as "whooshing" from 250 msec to 1 sec. Both motorboating and whooshing RFNs are global percepts encompassing the entire period, as are RFNs in the pitch range (repetition periods shorter than 50 msec). However, with continued listening to whooshing (but not motorboating) RFNs, individuals hear recurrent brief components such as clanks and thumps that are characteristic of the particular waveform. Experiment 1 of the present study describes a cross-modal cuing procedure that enables listeners to store and then recognize the recurrence of portions of frozen noise waveforms that are repeated after intervals of 10 sec or more. Experiment 2 compares the relative saliencies of different spectral regions in enabling listeners to detect repetition of these long-period patterns. Special difficulty was encountered with the 6-kHz band of RFNs, possibly due to the lack of fine-structure phase locking at this frequency range. In addition, a similarity is noted between the organizational principles operating over particular durational ranges of stochastic patterns and the characteristics of traditional hierarchical units of speech having corresponding durations.
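As a rough illustration of the stimulus class, the sketch below constructs a recycled frozen noise: a single noise segment generated once (hence "frozen") and tiled end to end. The sample rate, segment duration, and number of repetitions are illustrative choices, not Guttman and Julesz's parameters.

```python
import numpy as np

FS = 44100                                    # sample rate (Hz); illustrative, not from the study
rng = np.random.default_rng(seed=1)           # fixed seed keeps the segment "frozen"

segment = rng.standard_normal(int(0.5 * FS))  # one 500-msec noise segment ("whooshing" range)
rfn = np.tile(segment, 8)                     # recycle the identical segment to build the stimulus
rfn /= np.max(np.abs(rfn))                    # normalize to +/-1 for playback
```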

3.
We explore the effect on performance in a forced-choice duration-discrimination task of varying the interstimulus interval (ISI) from 0 to 2 sec. The durations were brief empty intervals (115–285 msec) bounded by very brief auditory pulses. Performance improved as the ISI increased from 0 to 1/2 sec, but a further increase in ISI up to 2 sec resulted in little further change in performance. The “time information” derived from a brief interval bounded by auditory pulses does not appear to be susceptible to the very short-term perceptual memory loss inferred in other auditory discriminations.

4.
In a study in which the effect of tone duration on the formation of auditory streams was investigated, subjects were presented with 15-sec alternating pure-tone sequences (ABAB …) and were asked to orient their attention over the duration of the sequence toward hearing either a temporally coherent or a segregated percept. At stimulus offset, the subjects indicated whether their percept at the end of the stimulus had been that of a temporally coherent ABAB trill or that of segregated A and B streams. The experimental results indicated that the occurrence of stream segregation increases as (1) the duration of the A and B tones increases in unison and (2) the difference in duration between the A and B tones increases, with the duration differences between the tones producing the strongest segregation effects. A comparison of these experimental results with those of other studies strongly suggests that the time interval between the offset and onset of consecutive tones in the same frequency range is the most important temporal factor affecting auditory stream formation. Furthermore, a simulation of the experimental results by the Beauvois and Meddis (1996) stream segregation model suggests that both the tone duration effects reported here and Gestalt auditory grouping on the basis of temporal proximity can be understood in terms of low-level neurophysiological processes and peripheral-channeling factors.

5.
Age-related decline in auditory perception reflects changes in the peripheral and central auditory systems. These age-related changes include a reduced ability to detect minute spectral and temporal details in an auditory signal, which contributes to a decreased ability to understand speech in noisy environments. Given that musical training in young adults has been shown to improve these auditory abilities, we investigated the possibility that musicians experience less age-related decline in auditory perception. To test this hypothesis, we measured auditory processing abilities in lifelong musicians (N = 74) and nonmusicians (N = 89), aged between 18 and 91. Musicians demonstrated less age-related decline in some auditory tasks (i.e., gap detection and speech in noise), and had a lifelong advantage in others (i.e., mistuned harmonic detection). Importantly, the rate of age-related decline in hearing sensitivity, as measured by pure-tone thresholds, was similar between both groups, demonstrating that musicians experience less age-related decline in central auditory processing.

6.
Observers were adapted to simulated auditory movement produced by dynamically varying the interaural time and intensity differences of tones (500 or 2,000 Hz) presented through headphones. At 10-sec intervals during adaptation, various probe tones were presented for 1 sec (the frequency of the probe was always the same as that of the adaptation stimulus). Observers judged the direction of apparent movement (“left” or “right”) of each probe tone. At 500 Hz, with a 200-deg/sec adaptation velocity, “stationary” probe tones were consistently judged to move in the direction opposite to that of the adaptation stimulus. We call this result an auditory motion aftereffect. In slower velocity adaptation conditions, progressively less aftereffect was demonstrated. In the higher frequency condition (2,000 Hz, 200-deg/sec adaptation velocity), we found no evidence of motion aftereffect. The data are discussed in relation to the well-known visual analog, the “waterfall effect.” Although the auditory aftereffect is weaker than the visual analog, the data suggest that auditory motion perception might be mediated, as is generally believed for the visual system, by direction-specific movement analyzers.
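The movement-simulation technique (headphone presentation with dynamically varying interaural time and intensity differences) can be sketched as follows. This is an illustrative reconstruction rather than the authors' apparatus: the sample rate, the sinusoidal azimuth-to-cue mapping, and the cue ranges of about 0.6 ms and 6 dB are all assumptions.

```python
import numpy as np

FS = 44100                  # sample rate (Hz); assumption
F = 500                     # tone frequency (Hz), as in the low-frequency condition
V = 200.0                   # simulated angular velocity (deg/sec)
DUR = 2.0                   # stimulus duration (sec)

t = np.arange(int(FS * DUR)) / FS
azimuth_deg = V * t - 180.0                        # sweep the source at constant angular velocity

# Simplified interaural cues as a function of azimuth (assumed mapping).
itd = 0.0006 * np.sin(np.radians(azimuth_deg))     # interaural time difference, seconds
iid_db = 6.0 * np.sin(np.radians(azimuth_deg))     # interaural intensity difference, dB

left = np.sin(2 * np.pi * F * (t + itd / 2)) * 10 ** (-iid_db / 40)
right = np.sin(2 * np.pi * F * (t - itd / 2)) * 10 ** (iid_db / 40)
stereo = np.stack([left, right], axis=1)           # two-channel buffer for headphone playback
```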

7.
In an experiment designed to investigate the time decay of auditory stream biasing (ASB), subjects were required to listen to a 10-sec induction sequence of repeated tones (AAAA …) designed to bias the listener’s percept toward hearing an A stream. The induction sequence was followed immediately by a silent interval (0–8 sec), and then a short ABAB … test sequence. To measure the amount of ASB remaining at the end of the silent interval, subjects were asked to indicate whether the test sequence was temporally coherent or had segregated into separate A and B streams. A plot of the mean number of segregation responses against silent-interval duration indicated that the overall time decay of ASB can be described by an exponential decay function with a time constant of τ = 3.84 sec, with musicians having a longer time constant (τ = 7.84 sec) than nonmusicians (τ = 1.42 sec). The length of the time constants for musicians and nonmusicians suggests that the mechanism responsible for ASB is associated with long auditory storage and that future experiments investigating auditory streaming phenomena should use interstimulus intervals of at least 8 sec.
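Written out, the reported decay has the usual exponential form. The sketch below uses the time constants quoted in the abstract; S(0), the amount of stream biasing at the end of the induction sequence, is a symbol introduced here for illustration.

```latex
% Exponential decay of auditory stream biasing over a silent interval of duration t
S(t) = S(0)\, e^{-t/\tau},
\qquad \tau \approx 3.84\ \text{sec (overall)},\quad
\tau \approx 7.84\ \text{sec (musicians)},\quad
\tau \approx 1.42\ \text{sec (nonmusicians)}
% Example: after an 8-sec silence the overall bias has fallen to e^{-8/3.84} \approx 0.12 of S(0).
```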

8.
Toward a neurophysiological theory of auditory stream segregation
Auditory stream segregation (or streaming) is a phenomenon in which 2 or more repeating sounds differing in at least 1 acoustic attribute are perceived as 2 or more separate sound sources (i.e., streams). This article selectively reviews psychophysical and computational studies of streaming and comprehensively reviews more recent neurophysiological studies that have provided important insights into the mechanisms of streaming. On the basis of these studies, segregation of sounds is likely to occur beginning in the auditory periphery and continuing at least to primary auditory cortex for simple cues such as pure-tone frequency but at stages as high as secondary auditory cortex for more complex cues such as periodicity pitch. Attention-dependent and perception-dependent processes are likely to take place in primary or secondary auditory cortex and may also involve higher level areas outside of auditory cortex. Topographic maps of acoustic attributes, stimulus-specific suppression, and competition between representations are among the neurophysiological mechanisms that likely contribute to streaming. A framework for future research is proposed.

9.
Accuracy of temporal coding: Auditory-visual comparisons
Three experiments were designed to decide whether temporal information is coded more accurately for intervals defined by auditory events or for those defined by visual events. In the first experiment, the irregular-list technique was used, in which a short list of items was presented, the items all separated by different interstimulus intervals. Following presentation, the subject was given three items from the list, in their correct serial order, and was asked to judge the relative interstimulus intervals. Performance was indistinguishable whether the items were presented auditorily or visually. In the second experiment, two unfilled intervals were defined by three nonverbal signals in either the auditory or the visual modality. After delays of 0, 9, or 18 sec (the latter two filled with distractor activity), the subjects were directed to make a verbal estimate of the length of one of the two intervals, which ranged from 1 to 4 sec and from 10 to 13 sec. Again, performance was not dependent on the modality of the time markers. The results of Experiment 3, which was procedurally similar to Experiment 2 but with filled rather than empty intervals, showed significant modality differences in one measure only. Within the range of intervals employed in the present study, our results provide, at best, only modest support for theories that predict more accurate temporal coding in memory for auditory, rather than visual, stimulus presentation.

10.
This note describes a way of modifying a tape recorder for producing accurately controllable delays for experiments with delayed auditory feedback. Any value of delay from 80 millisec. to 1.2 sec. can be obtained to the nearest millisec., and the range could be extended by some minor changes. The delay is continuously monitored on a digital electronic timer.
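The note itself does not state how the delay is generated, but in a conventional delayed-auditory-feedback tape setup the delay follows from the spacing between the record and playback heads and the tape speed; the relation below is that standard assumption, not a detail taken from the note.

```latex
% Assumed relation for a record-then-playback tape loop (not stated in the note)
\text{delay} = \frac{d_{\text{head spacing}}}{v_{\text{tape}}}
% Example: a 3.8-cm head spacing at a tape speed of 19 cm/sec gives 3.8 / 19 = 0.2 sec of delay.
```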

11.
Arao H, Suetomi D, Nakajima Y (2000). Perception, 29(7), 819-830
The duration of a short empty time interval (typically shorter than 300 ms) is often underestimated when it is immediately preceded by a shorter time interval. This illusory underestimation, known as time-shrinking, had been studied only with auditory temporal patterns. In the present study, we examined whether similar underestimation would take place with visual temporal patterns. It turned out that underestimation of the same kind takes place also in the visual modality. However, a considerable difference between the auditory and the visual modalities appeared. In the auditory modality, it had been shown that the amount of underestimation decreased for preceding time intervals longer than 200 ms. In the present study, the underestimation increased when the preceding time interval varied from 160 to 400 ms. Furthermore, the differences between the two neighbouring intervals which could cause this underestimation had always been in a fixed range in the auditory modality. In the visual modality, the range was broader when the intervals were longer. These results were interpreted in terms of an assimilation process in light of the processing-time hypothesis proposed by Nakajima (1987, Perception, 16, 485-520) in order to explain an aspect of empty-duration perception.

12.
Dorman (1974) studied the discrimination of intensity differences on formant transitions in and out of syllable context. He interpreted his results as suggesting that the acoustic features of his stop-consonant/vowel syllable were recoded into a phonetic representation, then stored in an inaccessible form of auditory short-term memory. The Dorman results are replicated with analogous pure-tone and FM-glide conditions. The results of both studies are explained in terms of specified acoustic properties of the signals and thus provide no evidence for a special phonetic recoding.

13.
The hearing sensitivity of an Atlantic bottlenose dolphin (Tursiops truncatus) to both pure tones and broadband signals simulating echoes from a 7.62-cm water-filled sphere was measured. Pure-tone thresholds at frequencies between 40 and 140 kHz, in increments of 20 kHz, were measured along with broadband thresholds for stimuli with center frequencies of 97.3 kHz and 88.2 kHz. The pure-tone thresholds were compared with the broadband thresholds by converting the pure-tone threshold intensity to energy flux density. The results indicated that dolphins can detect broadband signals slightly better than pure-tone signals. The broadband results suggest that an echolocating bottlenose dolphin should be able to detect a 7.62-cm diameter water-filled sphere out to a range of 178 m in a quiet environment.
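The comparison depends on expressing the tone threshold in the same units as the broadband threshold. A standard way to do this, assumed here rather than quoted from the paper, is to integrate intensity over the signal duration:

```latex
% Energy flux density of a tone burst of constant intensity I and duration T (standard conversion)
E = \int_0^{T} I(t)\, dt = I \cdot T
% In level terms: E_{\mathrm{dB}} = I_{\mathrm{dB}} + 10 \log_{10} T, with T in seconds.
```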

14.
A programmable sine-wave generator has been developed that permits microcomputer control of both discrete and continuous variations in the frequency and amplitude of auditory, visual, or vibrotactile stimuli. The function and design of the sine-wave generator as a peripheral to the Apple II/FIRST system are detailed. Moreover, adaptations of the basic sine-wave circuit are briefly described for interfacing it with other microcomputers (e.g., the IBM PC and compatibles), and for altering the waveform, range, and resolution of the output. Sample programs in Apple II/FIRST and Applesoft BASIC for controlling signal frequency and amplitude are used to illustrate the simplicity of programmable control. The sine-wave generator has many of the capabilities of commercially available ones, at a fraction of the cost.
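The hardware and the Apple II/FIRST and Applesoft BASIC listings are specific to the original article; as a modern stand-in, the sketch below shows the same idea of program-controlled frequency and amplitude, with both discrete steps and a continuous glide. All parameter values are illustrative.

```python
import numpy as np

FS = 48000  # output sample rate (Hz); assumption for a software stand-in

def sine_burst(freq_hz, amplitude, duration_s, fs=FS):
    """Return a sine-wave buffer with programmable frequency and amplitude."""
    t = np.arange(int(fs * duration_s)) / fs
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# Discrete control: a sequence of bursts at programmed frequencies and amplitudes.
bursts = [sine_burst(f, a, 0.5) for f, a in [(250, 0.2), (1000, 0.5), (4000, 0.8)]]

# Continuous control: a linear frequency glide, with phase accumulated sample by sample.
t = np.arange(FS) / FS
inst_freq = np.linspace(250, 4000, t.size)                 # Hz, varies continuously
glide = 0.5 * np.sin(2 * np.pi * np.cumsum(inst_freq) / FS)
```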

15.
Thresholds for auditory motion detectability were measured in a darkened anechoic chamber while subjects were adapted to horizontally moving sound sources of various velocities. All stimuli were 500-Hz lowpass noises presented at a level of 55 dBA. The threshold measure employed was the minimum audible movement angle (MAMA), that is, the minimum angle a horizontally moving sound must traverse to be just discriminable from a stationary sound. In an adaptive, two-interval forced-choice procedure, trials occurred every 2-5 sec (Experiment 1) or every 10-12 sec (Experiment 2). Intertrial time was "filled" with exposure to the adaptor, a stimulus that repeatedly traversed the subject's front hemifield at ear level (distance: 1.7 m) at a constant velocity (-150 degrees/sec to +150 degrees/sec) during a run. Average MAMAs in the control condition, in which the adaptor was stationary (0 degrees/sec), were 2.4 degrees (Experiment 1) and 3.0 degrees (Experiment 2). Three out of 4 subjects in each experiment showed significantly elevated MAMAs (by up to 60%) with some adaptors relative to the control condition. However, there were large intersubject differences in the shape of the MAMA versus adaptor velocity functions. This loss of sensitivity to motion that most subjects show after exposure to moving signals is probably one component underlying the auditory motion aftereffect (Grantham, 1989), in which judgments of the direction of moving sounds are biased in the direction opposite to that of a previously presented adaptor.
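The abstract specifies an adaptive, two-interval forced-choice procedure but not its tracking rule; as an illustration only, the sketch below runs a generic two-down/one-up staircase (which converges near 71% correct) against a simulated listener whose MAMA is fixed at 2.4 degrees. None of these choices are taken from the study.

```python
import random

def two_down_one_up(simulated_mama_deg, start_deg=10.0, step_deg=1.0, reversals_needed=8):
    """Generic 2-down/1-up adaptive staircase; threshold estimate = mean of reversal angles.
    The simulated listener is always correct above its MAMA and guesses (p = .5) below it."""
    angle, streak, last_direction, reversal_angles = start_deg, 0, 0, []
    while len(reversal_angles) < reversals_needed:
        correct = angle >= simulated_mama_deg or random.random() < 0.5
        if correct:
            streak += 1
            if streak < 2:
                continue                      # need two consecutive correct trials to step down
            streak, direction = 0, -1
            new_angle = max(angle - step_deg, step_deg / 2)
        else:
            streak, direction = 0, +1
            new_angle = angle + step_deg
        if last_direction and direction != last_direction:
            reversal_angles.append(angle)     # the track changed direction: record a reversal
        last_direction, angle = direction, new_angle
    return sum(reversal_angles) / len(reversal_angles)

print(round(two_down_one_up(simulated_mama_deg=2.4), 2))
```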

16.
In an attempt to facilitate visual recall when material is presented under bisensory simultaneous conditions (i.e., visual and auditory stimuli are presented together), auditory material was delayed up to 1/4 sec relative to the onset of the visual material. Visual recall, however, remained stable across the auditory delays, suggesting a limitation in the visual system beyond that associated with the simultaneous occurrence of auditory material.

17.
Thresholds for auditory motion detectability were measured in a darkened anechoic chamber while subjects were adapted to horizontally moving sound sources of various velocities. All stimuli were 500-Hz lowpass noises presented at a level of 55 dBA. The threshold measure employed was the minimum audible movement angle (MAMA), that is, the minimum angle a horizontally moving sound must traverse to be just discriminable from a stationary sound. In an adaptive, two-interval forced-choice procedure, trials occurred every 2–5 sec (Experiment 1) or every 10–12 sec (Experiment 2). Intertrial time was “filled” with exposure to the adaptor, a stimulus that repeatedly traversed the subject’s front hemifield at ear level (distance: 1.7 m) at a constant velocity (−150°/sec to +150°/sec) during a run. Average MAMAs in the control condition, in which the adaptor was stationary (0°/sec), were 2.4° (Experiment 1) and 3.0° (Experiment 2). Three out of 4 subjects in each experiment showed significantly elevated MAMAs (by up to 60%) with some adaptors relative to the control condition. However, there were large intersubject differences in the shape of the MAMA versus adaptor velocity functions. This loss of sensitivity to motion that most subjects show after exposure to moving signals is probably one component underlying the auditory motion aftereffect (Grantham, 1989), in which judgments of the direction of moving sounds are biased in the direction opposite to that of a previously presented adaptor.

18.
Four experiments are reported that examine attentional control in the auditory modality. In Experiment 1, the subjects made detection responses to the onset of a monaurally presented pure tone that was preceded by a pure-tone cue. On a valid trial, the cue was presented in the same ear as the target; on an invalid trial, it was presented in the contralateral ear to the target; and on a neutral trial, it was presented in both ears. Overall performance was facilitated on valid trials in comparison with invalid trials. In later experiments, the subjects made choice decisions about the location of the target, and significant cuing effects were found relative to the neutral condition. Finally, performance was assessed in the presence of central (spoken) word cues. Here, the content of the cue specified the likely location of the target. Under these conditions, costs and benefits were found over a range of cue-target stimulus onset asynchronies. The results are discussed in terms of automatic and controlled attentional processes.

19.
The question of whether sudden increases in the amplitude of pure-tone components would perceptually isolate them from a more complex spectrum was investigated in two experiments. In Experiment 1, a 3.5-sec noise was played as a masker. During the noise, two pure-tone components of different frequencies appeared in succession. Subjects were asked to judge whether the pitch sequence went up or down. The rise time of these components had only a small and inconsistent effect on discrimination. In Experiment 2, the 3.5-sec background signal was a complex tone. The amplitudes of two of its components were incremented in succession. Again, subjects judged whether the pitch pattern went up or down. This time there was a sizable, monotonic effect of the rise time of the increments, with more rapid increments leading to better discrimination. The difference between the two results is interpreted in terms of the auditory system’s response to changing and unchanging signals and the role of its “sudden-change” responses in attracting perceptual processing to certain spectral regions.

20.
Previous studies have related the trait of sensation seeking to augmenting of the evoked potential (EP) in both visual and auditory modalities and to electrodermal and heart rate (HR) orienting and defensive reactions. The present study examined all of these phenomena in the same sample of subjects in order to replicate previous findings and to investigate cross-modality consistency and relationships between cortical and peripheral responses. Fifty-four male subjects, scoring high or low on the Disinhibition Sensation Seeking Scale, were exposed to 4 intensities of auditory stimuli (tones) on one occasion and visual stimuli (light flash) on another. Two interstimulus intervals (ISIs) were used for each set of stimuli: first a 17-sec series, and then a 2-sec series. High disinhibitors showed EP augmenting and lows reducing on 3 of the 4 series; differences were significant on two of them. High disinhibitors showed stronger orienting (deceleratory) HR responses to visual and auditory stimuli while lows showed stronger defensive (acceleratory) HR responses. HR responses were significantly correlated across stimulus modalities. Auditory and visual EP slope measures were correlated only for the long ISI series.
