Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
The sensitivity of the scalp-recorded auditory evoked potential to selective attention was examined while subjects monitored one of two dichotically presented speech passages for content. Evoked potentials were elicited by irrelevant probe stimuli (vowel sounds) embedded in both the right-ear and the left-ear messages. The amplitude of the evoked potential was larger to probe stimuli embedded in the attended message than to probe stimuli in the unattended message. Recall performance was unaffected by the presence of the probes. The results are interpreted as supporting the hypothesis that this evoked-potential sensitivity reflects an initial “input selection” stage of attention.

2.
3.
The detectability of a 500-Hz tone of either 32- or 256-msec duration in a broad-band 50-dB spectrum level noise was measured as a function of the duration of the noise. The noise was continuous or was gated 0, 125, or 250 msec before the onset of the signal. For the gated noise conditions the noise was terminated approximately 5 msec after termination of the signal. With a homophasic condition (N0S0), the three noise conditions led to approximately the same detectability as did the continuous masker. In an antiphasic condition (N0Sπ), detectability was poorest when signal and masker began together and improved as the delay between noise onset and signal onset increased. The difference between the simultaneous onset and the continuous noise condition was about 9 dB for the 32-msec signal and about 2 dB for the 256-msec signal. These results are compared to those reported by McFadden (1966).
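The ~9 dB and ~2 dB effects above are simple differences between detection thresholds expressed in decibels. A minimal sketch of that arithmetic, with invented placeholder thresholds (not data from the study):

```python
# Hypothetical illustration: a masking-release effect is the difference, in dB,
# between detection thresholds measured in two conditions. The numbers below
# are placeholders chosen only to mirror the ~9 dB figure reported above.

def threshold_difference_db(threshold_a_db: float, threshold_b_db: float) -> float:
    """Difference between two detection thresholds in dB (positive = release)."""
    return threshold_a_db - threshold_b_db

# Placeholder thresholds: simultaneous-onset vs. continuous-noise condition
print(threshold_difference_db(60.0, 51.0))  # prints 9.0
```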

4.
In shadowing one of two simultaneous messages presented dichotically, subjects are unable to report any of the content of the rejected message. Even if the rejected message consists of a short list of simple words repeated many times, a recognition test fails to reveal any trace of the list. If numbers are interpolated in prose passages presented for dichotic shadowing, no more are recalled from the rejected messages if the instructions are specifically to remember numbers than if the instructions are general: a specific set for numbers will not break through the attentional barrier set up in this task. The only stimulus so far found that will break through this barrier is the subject's own name. It is probably only material “important” to the subject that will break through the barrier.

5.
The distinction between auditory and phonetic processes in speech perception was used in the design and analysis of an experiment. Earlier studies had shown that dichotically presented stop consonants are more often identified correctly when they share place of production (e.g., /ba-pa/) or voicing (e.g., /ba-da/) than when neither feature is shared (e.g., /ba-ta/). The present experiment was intended to determine whether the effect has an auditory or a phonetic basis. Increments in performance due to feature-sharing were compared for synthetic stop-vowel syllables in which formant transitions were the sole cues to place of production under two experimental conditions: (1) when the vowel was the same for both syllables in a dichotic pair, as in our earlier studies, and (2) when the vowels differed. Since the increment in performance due to sharing place was not diminished when vowels differed (i.e., when formant transitions did not coincide), it was concluded that the effect has a phonetic rather than an auditory basis. Right ear advantages were also measured and were found to interact with both place of production and vowel conditions. Taken together, the two sets of results suggest that inhibition of the ipsilateral signal in the perception of dichotically presented speech occurs during phonetic analysis.

6.
7.
8.
A person can attend to a message in one ear while seemingly ignoring a simultaneously presented verbal message in the other ear. There is considerable controversy over the extent to which the unattended message is actually processed. This issue was investigated by presenting dichotic messages to which the listeners responded by button-pressing (not shadowing) to color words occurring in the primary ear message while attempting to detect a target word in either the primary ear or secondary ear message. Less than 40% of the target words were detected in the secondary ear message, whereas for the primary ear message (and also for either ear in a control experiment), target detection was approximately 80%. Furthermore, there was a significant negative correlation between button-pressing performance and secondary ear target-detection performance. The results were interpreted as being inconsistent with automatic processing theories of attention.

9.
The experiments reported examined monitoring for semantically defined targets whilst concurrently shadowing (Experiment I) or listening silently (Experiment II). The word lists for monitoring were either visual or auditory. Monitoring and shadowing accuracy showed less interference when presentation was bimodal than when it was dichotic. However, monitoring latency and recognition memory for shadowed material did not show this effect. It is argued that these data reveal the existence of a number of different sources of potential difficulty in dichotic listening situations and the nature of these is discussed.

10.
The present study quantified the magnitude of sex differences in perceptual asymmetries as measured with dichotic listening. This was achieved by means of a meta-analysis of the literature dating back to the initial use of dichotic listening as a measure of laterality. The meta-analysis included 249 effect sizes pertaining to sex differences and 246 effect sizes for the main effect of laterality. The results showed small and homogeneous sex differences in laterality in favor of men (d = 0.054). The main effect of laterality was of medium magnitude (d = 0.609), but it was heterogeneous. Homogeneity for the main effect of laterality was achieved through partitioning as a function of task, demonstrating larger asymmetries for verbal (d = 0.65) than for non-verbal tasks (d = 0.45). The results are discussed with reference to top-down and bottom-up factors in dichotic listening. The possible influence of a publication bias is also discussed.
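The effect sizes above are Cohen's d values (standardized mean differences). A hedged sketch of how such a value is computed from two groups' scores; the group data here are invented placeholders, not values from the meta-analysis:

```python
from math import sqrt
from statistics import mean, stdev

# Hedged sketch: Cohen's d with a pooled standard deviation, as commonly used
# when meta-analyzing group differences. Scores below are invented placeholders.

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled sample standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled_sd

men = [12.0, 15.0, 11.0, 14.0]    # placeholder laterality scores
women = [11.0, 13.0, 12.0, 13.0]
print(round(cohens_d(men, women), 3))
```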

11.
The goal of the present study was to evaluate the differences between dichotic listening and mismatch negativity as measures of speech lateralization in the human brain. For this purpose, we recorded the magnetic equivalent of the mismatch negativity, elicited by consonant-vowel syllable change, and tested the same subjects in the dichotic listening procedure. The results showed that both methods indicated left-hemisphere dominance in speech processing. However, the mismatch negativity, as compared to the right-ear advantage, suggested slightly stronger left-hemisphere dominance in speech processing. No clear correlation was found between the laterality indexes of mismatch negativity and right-ear advantage calculated from dichotic listening results. A possible explanation for this finding is that these two measures reflect different stages of speech processing in the human brain.
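Laterality indexes from dichotic listening, like those mentioned above, are often computed as a normalized ear difference. The study does not give its exact formula, so this is an assumed, commonly used form with invented counts:

```python
# Hedged sketch of a common laterality index (LI) computed from correct
# right-ear (R) and left-ear (L) reports: LI = 100 * (R - L) / (R + L).
# The formula choice and the example counts are illustrative assumptions.

def laterality_index(right_correct: int, left_correct: int) -> float:
    """Normalized ear difference; positive values indicate a right-ear advantage."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct responses in either ear")
    return 100.0 * (right_correct - left_correct) / total

print(laterality_index(right_correct=30, left_correct=20))  # prints 20.0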

12.
13.
When a component of a complex tone is captured into a stream by other events that precede and follow it, it does not fuse with the other components of the complex tone but tends to be heard as a separate event. The current study examined the ability of elements of a stream to resist becoming fused with other synchronous events, heard either in the same ear or at the opposite ear. The general finding was that events in one ear fuse strongly with elements of an auditory stream in the other ear only when they are spectrally very similar. In this case, the fusion of simultaneous components at opposite ears is stronger than that of simultaneous components heard in the same ear. However, when the spectra of the synchronous events are mismatched even slightly, components in the same ear fuse more strongly than components at opposite ears. These results are accounted for by a theory that assumes that decisions that perceptually integrate sequential events, synchronous events, and events at opposite ears are interdependent.

14.
Normal adults spontaneously adopt different recall strategies in reporting dichotic material presented at different rates. A channel-by-channel, or ear, order is used with the faster rate of input, and a pair-by-pair, or temporal, order is used with the slower rate of input. The purpose of the present report is to study the frequency of the different orders of report in children as a function of the rate of input of dichotic stimulation. Twenty-four normal children, 9–10 years of age, were given the dichotic listening task under three rate conditions. The children used different recall strategies as a function of rate of input in the same manner as that reported for adults. In addition, the order of presentation of the different rates of input was found to influence the relative frequency of the different recall strategies. A significant positive correlation was found between intelligence and the frequency of use of only the temporal recall strategy in its appropriate (slow) rate condition.

15.
Ear advantages for CV syllables were determined for 28 right-handed individuals in a target monitoring dichotic task. In addition, ear dominance for dichotically presented tones was determined when the frequency difference of the two tones was small compared to the center frequency and when the frequency difference of the tones was larger. On all three tasks, subjects provided subjective separability ratings as measures of the spatial complexity of the dichotic stimuli. The results indicated a robust right ear advantage (REA) for the CV syllables and a left ear dominance on the two tone tasks, with a significant shift toward right ear dominance when the frequency difference of the tones was large. Although separability ratings for the group data indicated an increase in the perceived spatial separation of the components of the tone complex across the two tone tasks, the separability judgment ratings and the ear dominance scores were not correlated for either tone task. A significant correlation, however, was evidenced between the laterality measure for speech and the judgment of separability, indicating that a REA of increased magnitude is associated with more clearly localized and spatially separate speech sounds. Finally, the dominance scores on the two tone tasks were uncorrelated with the laterality measures of the speech task, whereas the scores on the tone tasks were highly correlated. The results suggest that spatial complexity does play a role in the emergence of the REA for speech. However, the failure to find a relationship between speech and nonspeech tasks suggests that a single theoretical explanation cannot account for all perceptual asymmetries observed with dichotic stimuli.

16.
Subjects heard two lists of 4 items each presented simultaneously to the two ears at a rate of four pairs of items per sec. A recall cue presented immediately after the test list signalled report of 4 of the 8 items. In recall by spatial location, the cue indicated whether the items in the right ear or the left ear should be recalled. In recall by category name, the cue indicated the superset category (e.g., letters or words) of the items to be recalled. Recall by spatial location was not significantly different from recall by category name. This result argues against the idea of a preperceptual auditory storage that holds information along spatial channels for 1 or 2 sec. The final experiment showed that recall by spatial location is significantly better than recall by category name when the report cue is given before, not after, the list presentation. These results show that spatial location can be used to enhance semantic processing and/or memory of 1 of 2 simultaneous items, but only if the relevant location is known at the time of the item presentation.

17.
Children at two age levels, 6 to 7 years and 9 to 10 years, listened to pairs of words presented dichotically to the left and right ear either simultaneously or in immediate succession. Their task was to report what they heard after each pair. The experimental pairs involved either syntagmatic relations (e.g., bird-fly) or paradigmatic relations (e.g., table-chair). The younger children correctly identified more syntagmatic than paradigmatic pairs (using control pairs as a base), while the older children correctly identified more paradigmatic pairs. There was no difference between associated and nonassociated pairs. The results are taken to mean that pragmatic, serial relations dominate the semantic organization of younger children and that logical, hierarchical relations make their impact later in the course of development.

18.
Ear advantage for the processing of dichotic speech sounds can be separated into two components. One of these components is an ear advantage for those phonetic features that are based on spectral acoustic cues. This ear advantage follows the direction of a given individual's ear dominance for the processing of spectral information in dichotic sounds, whether speech or nonspeech. The other factor represents a right-ear advantage for the processing of temporal information in dichotic sounds, whether speech or nonspeech. The present experiments were successful in dissociating these two factors. Since the results clearly show that ear advantage for speech is influenced by ear dominance for spectral information, a full understanding of the asymmetry in the perceptual salience of speech sounds in any individual will not be possible without knowing his ear dominance.

19.
A same-different matching task was used to investigate how subjects perceived a dichotic pair of pure tones. Pairs of stimulus tones in four frequency ranges (center frequencies of 400–1,700 Hz), with separations between 40 and 400 Hz, were tested. Five types of test tones were matched to the stimulus pair: the stimulus pair presented again (control) or crossed over (same tones, different ears), the geometric mean of the two tones, or a binaural tone of the low or high tone of the pair. In the lowest frequency range and the highest with maximum separation, the crossed-over test tones were perceived as different from the same stimulus tones. A bias for perceiving the higher tone of a pair was evident in the frequency ranges with separations of 40–200 Hz. In the lowest frequency range, the bias was for perceiving the higher tone in the right ear. This restricted ear advantage in the perception of pure tones was not significantly related to the right-ear advantage in dichotic word monitoring.

20.
In previous behavioral studies, a prime syllable was presented just prior to a dichotic syllable pair, with instructions to ignore the prime and report one syllable from the dichotic pair. When the prime matched one of the syllables in the dichotic pair, response selection was biased towards selecting the unprimed target. The suggested mechanism was that the prime was inhibited to reduce conflict between task-irrelevant prime processing and task-relevant dichotic target processing, and a residual effect of the prime inhibition biased the resolution of the conflict between the two targets. The current experiment repeated the primed dichotic listening task in an event-related fMRI setting. The fMRI data showed that when the task-irrelevant prime matched the task-relevant targets, activations in posterior medial frontal cortex (pMFC) and in right inferior frontal gyrus (IFG) increased, which was considered to represent conflict and inhibition, respectively. Further, matching trials where the unprimed target was selected showed activation in right IFG, while matching trials where the primed target was selected showed activations in pMFC and left IFG, indicating the difference between inhibition-biased selection and unbiased selection.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号