Similar Articles
20 similar articles found (search time: 31 ms)
1.
An experiment was conducted to investigate the effect of spatial separation on interference effects in pitch memory. Subjects compared the pitches of two tones that were separated by a sequence of eight interpolated tones. It was found that error rates were lower in sequences where the test and interpolated tones were presented to different ears, compared with sequences where they were presented to the same ear; however, this effect of spatial separation was not large. It is concluded that differences in spatial location can enable the focussing of attention away from the irrelevant tones and so reduce their disruptive effect, but that this occurs only to a limited extent.

2.
The octave illusion occurs when each ear receives a sequence of tones alternating by 1 octave but with the high and low tones in different ears. Most listeners perceive these stimuli as a high pitch in one ear alternating with a low pitch in the other ear. D. Deutsch and P. L. Roll (1976) interpreted this phenomenon as evidence for a what-where division of auditory processing caused by sequential interactions between the tones. They argued that the pitch follows the frequency presented to the dominant ear but is lateralized toward the higher frequency component. This model was examined in 4 experiments. Results indicate that the perceived pitch approximates the fundamental frequency and that the illusion does not depend on sequential interactions. The octave illusion may arise from an interaction between dichotic fusion and binaural diplacusis rather than from suppression as proposed by Deutsch.

3.
What is the involvement of what we know in what we perceive? In this article, the contribution of melodic schema-based processes to the perceptual organization of tone sequences is examined. Two unfamiliar six-tone melodies, one of which was interleaved with distractor tones, were presented successively to listeners who were required to decide whether the melodies were identical or different. In one condition, the comparison melody was presented after the mixed sequence: a target melody interleaved with distractor tones. In another condition, it was presented beforehand, so that the listeners had precise knowledge about the melody to be extracted from the mixture. In the latter condition, recognition performance was better and a bias toward same responses was reduced, as compared with the former condition. A third condition, in which the comparison melody presented beforehand was transposed up in frequency, revealed that whereas the performance improvement was explained in part by absolute pitch or frequency priming, relative pitch representation (interval and/or contour structure) may also have played a role. Differences in performance as a function of mean frequency separation between target and distractor sequences, when listeners did or did not have prior knowledge about the target melody, argue for a functional distinction between primitive and schema-based processes in auditory scene analysis.

4.
Auditory redundancy gains were assessed in two experiments in which a simple reaction time task was used. In each trial, an auditory stimulus was presented to the left ear, to the right ear, or simultaneously to both ears. The physical difference between auditory stimuli presented to the two ears was systematically increased across experiments. No redundancy gains were observed when the stimuli were identical pure tones or pure tones of different frequencies (Experiment 1). A clear redundancy gain and evidence of coactivation were obtained, however, when one stimulus was a pure tone and the other was white noise (Experiment 2). Experiment 3 employed a two-alternative forced choice localization task and provided evidence that dichotically presented pure tones of different frequencies are apparently integrated into a single percept, whereas a pure tone and white noise are not fused. The results extend previous findings of redundancy gains and coactivation with visual and bimodal stimuli to the auditory modality. Furthermore, at least within this modality, the results indicate that redundancy gains do not emerge when redundant stimuli are integrated into a single percept.

5.
Eye movement reaction time (RT) was measured in simple and choice RT situations in which monaural tones were presented to the left or right ear. In the choice RT conditions, tones of one frequency signaled a left looking response and tones of another frequency signaled a right looking response. In the simple RT condition, tones were presented in 2 blocks signaling right or left looking responses. RTs were measured by electro-oculogram (EOG), with electrodes placed over the outer canthus of each eye. In the choice RT condition, oculomotor RTs were faster when the tones signaling right or left looking were presented in the ears corresponding to the direction of looking than when they occurred in the opposite ear. No such correspondence was present in the simple RT condition. Ss also performed a manual choice RT task. The lateral stimulus-response (S-R) compatibility effects obtained confirmed previous findings and were of the same magnitude as those obtained in the oculomotor response modality. Asymmetry in correlations between oculomotor and manual compatibility effects suggests differential hemispheric mediation.

6.
Subjects listened to repetitive presentations of C major scale patterns and simple two-part contrapuntal specimens, both dichotically and in a stereophonic free sound field. All scalar and melodic patterns were presented so that successive tones alternated from ear to ear: when a component of one scale or melody was routed through one speaker or headphone, the concurrent member of the other scale or melodic pattern was routed through the other speaker or headphone, and vice versa. The stimulus parameters of spectral contour and envelope characteristics, duration, melodic pattern, and loudness were varied, and a testing procedure was designed to minimize any bias in responses which might be produced by learning, immediate memory, or vocal limitations of subjects. Virtually all responses (95.2% to scalar stimuli, 99.1% to melodic stimuli) indicated that the subjects channeled stimuli by pitch range, and not by ear of input. When spectral contours routed through the separate headphones or speakers were noticeably dissimilar, no subject perceived that this timbral inequality switched from ear to ear: all subjects heard an overall change in tone quality which pervaded both scalar or melodic sequences, and which apparently emanated from both headphones or speakers.

7.
A series of experiments explored the role of structural information in the auditory recognition process, within the context of a backward recognition masking paradigm. A masking tone presented after a test tone has been found to interfere with the perceptual processing of the test tone, the degree of interference decreasing with increased durations of the silent intertone interval between the test and masking tones. In the current studies, the task was modified to utilize three-tone sequences as the test stimuli. Six test sequences were employed (LMH, LHM, MLH, MHL, HLM, HML), where L, M, and H represent the lowest, middle, and highest frequencies in the melody. The observers identified these six possible sequences when the three tones of the test sequence were interleaved with three presentations of a single masking tone. All three tones of the test sequence were drawn from the same octave, while the masking tones could be drawn from any of three octaves, symmetrical around the octave containing the test tones. Under these conditions, interference occurred primarily from masking tones drawn from the same octave as the test tones. Masking tones drawn from other octaves were found to produce little, if any, interference with perception of the test tones. This effect was found to occur only for the identification of tonal sequences. Substantial masking of single-tone targets occurred with masking tones drawn from octaves other than that containing the targets. The results make apparent the use of structural information during auditory recognition. A theoretical interpretation was advanced which suggests that, while single tones are perceived on the basis of absolute pitch, the presence of auditory structure may allow relational information, such as exact pitch intervals or melodic contour, to facilitate perception of the tonal sequence.

8.
The initial and optimum voice reaction times (VRT) to auditory stimuli presented separately to the left and right ears of ten adult stutterers and ten nonstutterers were investigated. Subjects initiated the neutral vowel sound /ʌ/ in response to one hundred 4000 Hz tones of 2.5 sec in duration. The silent intervals between the tones varied randomly. The stimulus cues were divided into five equal response sets of 20 tones each, with 10 tones in each set being presented to the right ear and 10 tones to the left ear, alternating back and forth. No significant differences were reported between the VRTs for cues presented to the left or right ears for either group. However, the stutterers exhibited voice reaction times which were significantly longer and more variable than those of the nonstutterers. The between-group differences were observed for what appeared to be the “optimum” level of voice initiation for the experimental task. These results lend support to the speculative hypothesis that the observed difficulty for adult stutterers to promptly and consistently initiate vocalization may in part be attributable to inherent rather than learned factors.

9.
Auditory event-related brain potentials (ERPs) and reaction times were analyzed in a selective attention task in which subjects attended to tone pips presented at high rates (interstimulus intervals [ISIs] of 40-200 msec). Subjects responded to infrequent target tones of a specified frequency (250 or 4000 Hz) and location (left or right ear) that were louder than otherwise identical tones presented randomly to the left and right ears. Negative difference (Nd) waves were isolated by subtracting ERPs to tones with no target features from ERPs to the same tones when they shared target location, frequency, or both frequency and location cues. Nd waves began 60-70 msec after tone onset and lasted until 252-350 msec after tone onset, even for tones with single attended cues. The duration of Nd waves exceeded the ISIs between successive tones, implying that several stimuli underwent concurrent analysis. Nd waves associated with frequency processing had scalp distributions different from those associated with location processing, implying that the features were analyzed in distinct cortical areas. Nd waves specific to auditory feature conjunction were isolated. These began at latencies of 110-120 msec, some 30-40 msec after the Nds to single features. The relative timing of the different Nd waves suggests that auditory feature conjunction begins after a brief parallel analysis of individual features but before feature analysis is complete.

10.
A same-different matching task was used to investigate how subjects perceived a dichotic pair of pure tones. Pairs of stimulus tones in four frequency ranges (center frequencies of 400-1,700 Hz), with separations between 40 and 400 Hz, were tested. Five types of test tones were matched to the stimulus pair: the stimulus pair presented again (control) or crossed over (same tones, different ears), the geometric mean of the two tones, or a binaural tone of the low or high tone of the pair. In the lowest frequency range and the highest with maximum separation, the crossed-over test tones were perceived as different from the same stimulus tones. A bias for perceiving the higher tone of a pair was evident in the frequency ranges with separations of 40-200 Hz. In the lowest frequency range, the bias was for perceiving the higher tone in the right ear. This restricted ear advantage in the perception of pure tones was not significantly related to the right-ear advantage in dichotic word monitoring.

11.
Dual-task methodology was used to assess a multiple-resources account of information processing in which each cerebral hemisphere is assumed to have access to its own finite amount of attentional resources. A visually presented verbal memory task was paired with an auditory tone memory task, and subjects were paid to emphasize one task more than the other. When subjects were trying to remember tones presented to the right ear, they could trade performance between tasks as a function of the emphasis condition, whereas on left-ear trials they could not. In addition, a control session indicated that stimuli presented to the unattended ear demanded processing resources, even when it was to the detriment of performance. The data support the assumption of independence between the hemispheres' resource supplies.

12.
The Ss' task was to identify repeating sequences of pure tones that differed only with respect to the order in which the tones occurred. With tones occurring at a constant rate of 5/sec, performance was better when the tones were widely spaced in frequency than when they were less widely spaced. One S was able, after considerable practice, to distinguish among different sequences whose component tones were presented at rates up to 500/sec. It was tentatively concluded that, in this case, performance was based on temporal (order) information at the slowest presentation rates, primarily on spectral information at the highest rates, and on both order and spectral information at intermediate rates.

13.
Ross's 1981 model of right-hemisphere processing of affective speech components was investigated within the dichotic paradigm. A spoken sentence constant in semantic content but varying among mad, sad, and glad emotional tones was presented to 45 male and 45 female college students. Duration of stimuli was controlled by adjusting digital sound samples to a uniform length. No effect of sex emerged, but the hypothesized ear advantage was found: more correct identifications were made with the left ear than with the right. A main effect of prosody was also observed, with significantly poorer performance in identifying the sad tone; in addition, sad condition scores for the right ear were more greatly depressed than those for the left ear, resulting in a significant interaction of ear and prosody.

14.
A musical canon consists of two melodic lines with the second part copying the first exactly after some time delay. Right-handed adults listened to canons presented dichotically at time delays between the ears of 2, 4, and 8 sec. Presentation rate varied from 1.0 to 4.4 notes/sec in one part. Different groups of subjects heard the canons with the left or the right ear leading. The subject's task was to tell whether a given stimulus was a canon or not. Control stimuli were noncanons by the same composer. Musically experienced subjects performed better at the task than inexperienced subjects. Short time lags were easier than long, and the effect of lag was more pronounced with the right ear leading. In the light of previous evidence of functional ear asymmetry in music perception, these results suggest that whenever possible subjects use a strategy of selecting out small chunks of the lead-ear melody for short-term memory storage and later comparison with the trailing melody. The auditory system processing information from the right ear is especially good at focusing on small chunks. But this strategy is particularly vulnerable to time lag; hence the interaction of lead ear and time lag.

15.
This study tests the locus of attention during selective listening for speech-like stimuli. Can processing be differentially allocated to the two ears? Two conditions were used. The simultaneous condition involved one of four randomly chosen stop-consonants being presented to one of the ears chosen at random. The sequential condition involved two intervals; in the first, S listened to the right ear; in the second, S listened to the left ear. One of the four consonants was presented to an attended ear during one of these intervals. Experiment I used no distracting stimuli. Experiment II utilized a distracting consonant not confusable with any of the four target consonants. This distractor was always presented to any ear not containing a target. In both experiments, simultaneous and sequential performance were essentially identical, despite the need for attention sharing between the two ears during the simultaneous condition. We conclude that selective attention does not occur during perceptual processing of speech sounds presented to the two ears. We suggest that attentive effects arise in short-term memory following processing.

16.
Recent experience with attempts to test relatively simple patterns such as three-tone sequences in a traditional dichotic-listening paradigm indicates that when such sequences are used for both target and contralateral interference, performance tends to be low in both ears and not useful for measuring or comparing ear advantages in various target conditions. It is reported that tests with a variety of sounds presented contralaterally to three-element patterns show that several such sounds can (1) allow performance in at least one ear to remain above floor values, (2) result in performance in at least one ear that is below ceiling, and (3) reveal ear advantages that are similar in direction and magnitude to those seen with the traditional dichotic paradigm.

17.
Ear advantages for CV syllables were determined for 28 right-handed individuals in a target monitoring dichotic task. In addition, ear dominance for dichotically presented tones was determined when the frequency difference of the two tones was small compared to the center frequency and when the frequency difference of the tones was larger. On all three tasks, subjects provided subjective separability ratings as measures of the spatial complexity of the dichotic stimuli. The results indicated a robust right ear advantage (REA) for the CV syllables and a left ear dominance on the two tone tasks, with a significant shift toward right ear dominance when the frequency difference of the tones was large. Although separability ratings for the group data indicated an increase in the perceived spatial separation of the components of the tone complex across the two tone tasks, the separability judgment ratings and the ear dominance scores were not correlated for either tone task. A significant correlation, however, was evidenced between the laterality measure for speech and the judgment of separability, indicating that a REA of increased magnitude is associated with more clearly localized and spatially separate speech sounds. Finally, the dominance scores on the two tone tasks were uncorrelated with the laterality measures of the speech task, whereas the scores on the tone tasks were highly correlated. The results suggest that spatial complexity does play a role in the emergence of the REA for speech. However, the failure to find a relationship between speech and nonspeech tasks suggests that all perceptual asymmetries observed with dichotic stimuli cannot be accounted for by a single theoretical explanation.

18.
Children 4 to 6 years of age were exposed to repetitions of a six-tone melody, then tested for their detection of transformations that either preserved or changed the contour of the standard melody. Discrimination performance was examined as a function of contour condition, magnitude of contour change, rate of presentation, and the presence of novel frequencies. Performance was superior for transformations that changed contour compared to those that did not, for greater changes in contour, and for faster presentation rates. Melodies transformed by a reordering of component tones were no less discriminable than those transformed by the addition of novel frequencies.

19.
Left-right asymmetry in the central processing of musical consonance was investigated by dichotic listening tasks. Two piano tones paired at various pitch intervals (1-11 semitones) were presented one note in each ear to twenty absolute-pitch possessors. As a result, a weak overall trend for left ear advantage (LEA) was found, as is characteristic of trained musicians. Second, pitches of dissonant intervals were more difficult to identify than those of consonant intervals. Finally, the LEA was greater with dissonant intervals than with consonant intervals. As the tones were dichotically presented, the results indicated that the central auditory system could distinguish between consonant and dissonant intervals without initial processing of pitch-pitch relations in the cochlea.

20.
The present experiment was designed to investigate the effects of the following two variables on the identification of dichotically presented CV syllables: (1) the relative difference in intensity levels of the stimuli presented to the two ears and (2) the instructions to attend to both ears or to focus attention on one ear. As expected for a verbal task, more CVs were identified from the right ear than from the left ear. Furthermore, identification of stimuli presented to one ear improved (1) when those stimuli were relatively higher in volume than the stimuli presented to the other ear and (2) when subjects were instructed to focus attention on only that ear rather than distribute attention across both ears. Of particular importance is the finding that the effects of relative stimulus intensity are the same under conditions of focused attention as under conditions of divided attention. This finding is inconsistent with an attention explanation of the relative intensity effects. Instead, the results are consistent with a model of dichotic listening in which ear of stimulus presentation and relative stimulus intensity influence a perceptual stage of information processing and attentional instructions influence a subsequent response selection stage.
