Similar articles (20 results)
1.
Cortical function was evaluated in 26 subjects with spasmodic dysphonia. Quantitative topographic electrophysiologic mapping (QTE) was employed to provide quantitative analyses of EEG spectra and auditory and visual long-latency evoked potentials. Single-photon emission computed tomography (SPECT) of the cerebral transit of Xenon-133 was used to evaluate regional cerebral blood flow. Left hemispheric abnormalities in cortical function were found by both techniques in 10 subjects and by at least one technique in 18 subjects. Right hemispheric abnormalities were observed by both techniques in 8 subjects and by at least one technique in 18 subjects. Most patients with cortical dysfunction in one hemisphere had cortical dysfunction in the other, while only 4 subjects had unilateral lesions as found by one of the two techniques. Eight subjects were normal by all measurements. Underlying structural abnormalities were detected by magnetic resonance imaging in 5/24 subjects. However, functional abnormalities (SPECT or QTE) were not observed at sites of structural abnormalities. SPECT and QTE were significantly related in identification of left hemispheric dysfunction (p = .037) with a trend in the right hemisphere (p = .070), and a significant congruence of SPECT and QTE findings occurred in the left anterior cortical quadrant (p = .011). These findings indicate that dysfunction of cortical perfusion and/or cortical electrophysiology is associated with spasmodic dysphonia in the majority of subjects studied.

2.
Perception of the speech code

3.
Listeners are exquisitely sensitive to fine-grained acoustic detail within phonetic categories for sounds and words. Here we show that this sensitivity is optimal given the probabilistic nature of speech cues. We manipulated the probability distribution of one probabilistic cue, voice onset time (VOT), which differentiates word initial labial stops in English (e.g., "beach" and "peach"). Participants categorized words from distributions of VOT with wide or narrow variances. Uncertainty about word identity was measured by four-alternative forced-choice judgments and by the probability of looks to pictures. Both measures closely reflected the posterior probability of the word given the likelihood distributions of VOT, suggesting that listeners are sensitive to these distributions.
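The ideal-observer computation this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual model: the category means, the standard deviation, and the equal prior are hypothetical values chosen only to show the form of the calculation.

```python
import math

def gaussian_pdf(x, mean, sd):
    """Probability density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior_beach(vot_ms, mean_b=0.0, mean_p=50.0, sd=8.0, prior_b=0.5):
    """P(word = 'beach' | VOT), assuming Gaussian VOT likelihoods for the
    /b/ ('beach') and /p/ ('peach') categories via Bayes' rule.
    All parameter values here are illustrative, not from the study."""
    like_b = gaussian_pdf(vot_ms, mean_b, sd)
    like_p = gaussian_pdf(vot_ms, mean_p, sd)
    return (like_b * prior_b) / (like_b * prior_b + like_p * (1 - prior_b))

# A short VOT strongly favors "beach"; midway between the category
# means (with equal priors and variances) the posterior is exactly 0.5.
print(posterior_beach(10.0))
print(posterior_beach(25.0))
```

Widening the likelihood variance (`sd`) flattens the posterior near the category boundary, which is the graded uncertainty the forced-choice and eye-movement measures tracked.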

4.
Infant perception often deals with audiovisual speech input and a first step in processing this input is to perceive both visual and auditory information. The speech directed to infants has special characteristics and may enhance visual aspects of speech. The current study was designed to explore the impact of visual enhancement in infant-directed speech (IDS) on audiovisual mismatch detection in a naturalistic setting. Twenty infants participated in an experiment with a visual fixation task conducted in participants’ homes. Stimuli consisted of IDS and adult-directed speech (ADS) syllables with a plosive and the vowel /a:/, /i:/ or /u:/. These were either audiovisually congruent or incongruent. Infants looked longer at incongruent than congruent syllables and longer at IDS than ADS syllables, indicating that IDS and incongruent stimuli contain cues that can make audiovisual perception challenging and thereby attract infants’ gaze.

5.
In forensic settings, lay (nonexpert) listeners may be required to compare voice samples for identity. In two experiments we investigated the effect of background noise and variations in speaking style on performance. In each trial, participants heard two recordings, responded whether the voices belonged to the same person, and provided a confidence rating. In Experiment 1, the first recording featured read speech and the second featured read or spontaneous speech. Both recordings were presented in quiet, or with background noise. Accuracy was highest when recordings featured the same speaking style. In Experiment 2, background noise occurred in either the first or the second recording. Accuracy was higher when it occurred in the second. The overall results reveal that both speaking style and background noise can disrupt accuracy. Although there is a relationship between confidence and accuracy in all conditions, it is variable. The forensic implications of these findings are discussed.

6.
The aim of this study was to investigate the perception of competing speech stimuli in 3-, 4-, and 5-year-old normally developing children. A dichotic listening paradigm was used in which the temporal alignment between the two stimuli was varied to represent three levels of competition. Minimal, moderate, and maximal levels of temporal competition were represented by a Separation, Lag, and Simultaneous test condition, respectively. The subjects were behaviorally set to listen for and to report the two stimuli on each trial. The incidence of double correct responses in the test conditions was the measure of interest. The results show a sharp and linear drop in double correct scores from the Separation, to the Lag, and to the Simultaneous condition. There were no age-related differences in the Separation and the Simultaneous conditions. In the Lag condition, the performance of the 3-year-olds was significantly lower than that of the 4- and 5-year-olds. The findings were interpreted to be indicative of limited auditory processing ability in preschoolers for moderately and maximally competing speech stimuli.

7.
The irrelevant speech effect is the impairment of task performance by the presentation of to-be-ignored speech stimuli. Typically, the irrelevant speech comprises a variety of sounds, but previous research (e.g., Jones, Madden, & Miles, 1992) has suggested that the deleterious effect of background speech is virtually eliminated if the speech comprises repetitions of a sound (e.g., “be, be, be”) or a single continuous sound (e.g., “beeeeeee”). Four experiments are reported that challenge this finding. Experiments 1, 2, and 4 show a substantial impairment in serial recall performance in the presence of a repeated sound, and Experiments 3 and 4 show a similar impairment of serial recall in the presence of a continuous sound. The relevance of these findings to several explanations of the irrelevant speech effect is discussed.

8.
Listeners had the task of following a target speech signal heard against two competitors either located at the same spatial position as the target or displaced symmetrically to locations flanking it. When speech was the competitor, there was a significantly higher separation effect (maintained intelligibility with reduced target sound level), as compared with either steady-state or fluctuating noises. Increasing the extent of spatial separation slightly increased the effect, and a substantial contribution of interaural time differences was observed. When same- and opposite-sex voices were used, a hypothesis that the similarity between target and competing speech would explain the role for spatial separation was partly supported. High- and low-pass filtering showed that both parts of an acoustically similar competing signal contribute to the phenomenon. We conclude that, in parsing the auditory array, attention to spatial cues is heightened when the components of the array are confusable on other acoustic grounds.

9.
Two groups of right-handed young adults were tested on a series of handedness measures and on dichotic nonverbal rhythmic sequences. Cross-validated multiple regression analysis revealed that all of the cerebral-lateralization/manual-praxis measures were positively related to the degree of left-hemisphere perceptual asymmetry for nonverbal rhythms (Crawford peg, scissor, handwriting, Crawford screws, tracing, total R = .67). Seventeen of the 52 subjects manifested significant (p < .05) left-hemisphere laterality coefficients for the dichotic stimuli. More complex rhythms elicited greater left-hemisphere perceptual preference. The results are discussed in reference to the concept of cerebral lateralization.

10.
The present study used fMRI/BOLD neuroimaging to investigate how visual‐verbal working memory is updated when exposed to three different background‐noise conditions: speech noise, aircraft noise and silence. The number‐updating task that was used can distinguish between “substitution processes,” which involve adding new items to the working memory representation and suppressing old items, and “exclusion processes,” which involve rejecting new items and maintaining an intact memory set. The current findings supported the findings of a previous study by showing that substitution activated the dorsolateral prefrontal cortex, the posterior medial frontal cortex and the parietal lobes, whereas exclusion activated the anterior medial frontal cortex. Moreover, the prefrontal cortex was activated more by substitution processes when exposed to background speech than when exposed to aircraft noise. These results indicate that (a) the prefrontal cortex plays a special role when task‐irrelevant materials should be denied access to working memory and (b) when compensating for different types of noise, either different cognitive mechanisms are involved or those cognitive mechanisms that are involved are involved to different degrees.

11.
Four monkeys and 6 humans representing five different native languages were compared in the ability to categorize natural CV tokens of /b/ versus /d/ produced by 4 talkers of American English (2 male, 2 female) in four vowel contexts (/i, e, a, u/). A two-choice "left/right" procedure was used in which both percentage correct and response time data were compared between species. Both measures indicated striking context effects for monkeys, in that they performed better for the back vowels /a/ and /u/ than for the front vowels /i/ and /e/. Humans showed no context effects for the percentage correct measure, but their response times showed an enhancement for the /i/ vowel, in contrast with monkeys. Results suggest that monkey perception of place of articulation is more dependent than human perception on the direction of the F2 onset transitions of syllables, since back-vowel F2s differentiate /b/ and /d/ more distinctively. Although monkeys do not provide an accurate model of the adult human in place perception, they may be able to model the preverbal human infant before it learns a more speech-specific strategy of place information extraction.

12.
Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired (“unity assumption”). Participants made temporal order judgments (TOJ) and simultaneity judgments (SJ) about sine-wave speech (SWS) replicas of pseudowords and the corresponding video of the face. Listeners in speech and non-speech mode were equally sensitive judging audiovisual temporal order. Yet, using the McGurk effect, we could demonstrate that the sound was more likely integrated with lipread speech if heard as speech than non-speech. Judging temporal order in audiovisual speech is thus unaffected by whether the auditory and visual streams are paired. Conceivably, previously found differences between speech and non-speech stimuli are not due to the putative “special” nature of speech, but rather reflect low-level stimulus differences.

13.
Individuals with developmental dyslexia are often impaired in their ability to process certain linguistic and even basic non-linguistic auditory signals. Recent investigations report conflicting findings regarding impaired low-level binaural detection mechanisms associated with dyslexia. Binaural impairment has been hypothesized to stem from a general low-level processing disorder for temporally fine sensory stimuli. Here we use a new behavioral paradigm to address this issue. We compared the response times of dyslexic listeners and their matched controls in a tone-in-noise detection task. The tonal signals were either Huggins Pitch (HP), a stimulus requiring binaural processing to elicit a pitch percept, or a pure tone, a perceptually similar but physically very different signal. The results showed no difference between the two groups specific to the processing of HP and thus no evidence for a binaural impairment in dyslexia. However, dyslexic subjects exhibited a general difficulty in extracting tonal objects from background noise, manifested as globally slower detection times.

14.
15.
Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech‐related processing deficits. Here, we examined the influence of visual articulatory information (lip‐read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a documented history of DD have deficits in their ability to gain benefit from lip‐read information that disambiguates noise‐masked speech. We show with another group of adult individuals with DD that these deficits persist into adulthood. These deficits could not be attributed to impairments in unisensory auditory word recognition. Rather, the results indicate a specific deficit in audio‐visual speech processing and suggest that impaired multisensory integration might be an important aspect of DD.

16.
Previous studies have noted that writing processes are impaired by task-irrelevant background sound. However, what makes sound distracting to writing processes has remained unaddressed. The experiment reported here investigated whether the semanticity of irrelevant speech contributes to disruption of writing processes beyond the acoustic properties of the sound. The participants wrote stories against a background of normal speech, spectrally-rotated speech (i.e., a meaningless sound with marked acoustic resemblance to speech) or silence. Normal speech impaired quantitative (e.g., number of characters produced) and qualitative/semantic (e.g., uncorrected typing errors, proposition generation) aspects of the written material, in comparison with the other two sound conditions, and it increased the duration of pauses between words. No difference was found between the silent and the rotated-speech condition. These results suggest that writing is susceptible to disruption from the semanticity of speech but not especially susceptible to disruption from the acoustic properties of speech.

17.
This article describes the development of a test for measuring the intelligibility of speech in noise for the Spanish language, similar to the test developed by Kalikow, Stevens, and Elliott (Journal of the Acoustical Society of America, 61, 1337–1351, 1977) for the English language. The test consists of six forms, each comprising 25 high-predictability (HP) sentences and 25 low-predictability (LP) sentences. The sentences were used in a perceptual task to assess their intelligibility in babble noise across three different signal-to-noise ratio (SNR) conditions in a sample of 474 normal-hearing listeners. The results showed that the listeners obtained higher scores of intelligibility for HP sentences than for LP sentences, and the scores were lower for the higher SNRs, as was expected. The final six forms were equivalent in intelligibility and phonetic content.

18.
Twenty right-handed Ss listened to a dichotic tape in which one of six consonant-vowel syllables was paired with a burst of white noise on each trial. Eight blocks of 40 trials were presented, with the syllables within a block presented to the same ear. On each trial, Ss decided if /ba/ was presented. Mean RT to right-ear items was 440.0 msec, while mean left-ear RT was 453.6 msec. Responses indicating the presence of /ba/ were made significantly more quickly than responses indicating its absence, with no significant interaction of ear and type of decision. This study demonstrated a right-ear advantage in the perception of spoken syllables when noise is presented to the opposite ear. An interpretation of the RT differences between ears in terms of callosal transmission time is discussed, and implications of this study for the perceptual origins of the ear advantage effect are considered.

19.
Kim J, Sironic A, Davis C. Perception. 2011;40(7):853-862
Seeing the talker improves the intelligibility of speech degraded by noise (a visual speech benefit). Given that talkers exaggerate spoken articulation in noise, this set of two experiments examined whether the visual speech benefit was greater for speech produced in noise than in quiet. We first examined the extent to which spoken articulation was exaggerated in noise by measuring the motion of face markers as four people uttered 10 sentences either in quiet or in babble-speech noise (these renditions were also filmed). The tracking results showed that articulated motion in speech produced in noise was greater than that produced in quiet and was more highly correlated with speech acoustics. Speech intelligibility was tested in a second experiment using a speech-perception-in-noise task under auditory-visual and auditory-only conditions. The results showed that the visual speech benefit was greater for speech recorded in noise than for speech recorded in quiet. Furthermore, the amount of articulatory movement was related to performance on the perception task, indicating that the enhanced gestures made when speaking in noise function to make speech more intelligible.

20.
Listeners must cope with a great deal of variability in the speech signal, and thus theories of speech perception must also account for variability, which comes from a number of sources, including variation between accents. It is well known that there is a processing cost when listening to speech in an accent other than one's own, but recent work has suggested that this cost is reduced when listening to a familiar accent widely represented in the media, and/or when short amounts of exposure to an accent are provided. Little is known, however, about how these factors (long-term familiarity and short-term familiarization with an accent) interact. The current study tested this interaction by playing listeners difficult-to-segment sentences in noise, before and after a familiarization period where the same sentences were heard in the clear, allowing us to manipulate short-term familiarization. Listeners were speakers of either Glasgow English or Standard Southern British English, and they listened to speech in either their own or the other accent, thereby allowing us to manipulate long-term familiarity. Results suggest that both long-term familiarity and short-term familiarization mitigate the perceptual processing costs of listening to an accent that is not one's own, but seem not to compensate for them entirely, even when the accent is widely heard in the media.
