Related Articles
20 related articles retrieved.
1.
14 mothers of children who were deaf or hard of hearing provided magnitude estimation scaling responses for the speech intelligibility and speech annoyance of narrative speech samples produced by children who were deaf or hard of hearing. Analysis indicated that listeners scaled intelligibility and annoyance in the same way: as samples became more difficult to understand, they also became more annoying to these listeners. Implications for further research are discussed.

2.
3.
Speech comprehension is the psychological process by which a listener receives external speech input and derives meaning from it. In everyday communication, auditory speech comprehension is influenced by rhythmic information at multiple time scales; the most common external rhythms are prosodic-structure rhythm, contextual rhythm, and the rhythm of the talker's body language. These rhythms shape processes such as phoneme discrimination, word perception, and speech intelligibility during comprehension. Internal rhythm takes the form of neural oscillations in the brain, which can represent the hierarchical features of external speech input at different time scales. Neural entrainment between external rhythmic stimulation and internal neural activity can optimize the brain's processing of speech, and it is modulated by the listener's top-down cognitive processes, further strengthening the internal representation of the target speech. We propose that entrainment may be the key mechanism that links external and internal rhythms and allows them jointly to influence speech comprehension. Elucidating external and internal rhythms and the mechanism connecting them offers a research window onto speech, a complex sequence with structural regularities across multiple hierarchical time scales.

4.
Lipreading proficiency was investigated in a group of hearing-impaired people, all of them knowing Spanish Sign Language (SSL). The aim of this study was to establish the relationships between lipreading and some other variables (gender, intelligence, audiological variables, participants' education, parents' education, communication practices, intelligibility, use of SSL). The 32 participants were between 14 and 47 years of age. They all had sensorineural hearing losses (from severe to profound). The lipreading procedure comprised identification of words in isolation. The words selected for presentation in isolation were spoken by the same talker. Identification of words required participants to select their responses from a set of four appropriately labelled pictures. Lipreading was significantly correlated with intelligence and intelligibility. Multiple regression analyses were used to obtain a prediction equation for the lipreading measures. On the basis of these analyses, it is concluded that proficient deaf lipreaders are more intelligent and that their oral speech is more comprehensible to others.
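A minimal sketch of the kind of multiple-regression prediction equation described above, fit to simulated data (the variable names, values, and resulting coefficients are illustrative assumptions, not the study's data):

```python
# Illustrative only: predict a lipreading score from intelligence and speech
# intelligibility with ordinary least squares, mirroring the reported analysis.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 32                                      # sample size reported in the abstract
intelligence = rng.normal(100, 15, n)       # hypothetical IQ-like scores
intelligibility = rng.uniform(0, 100, n)    # hypothetical rated intelligibility (%)
# hypothetical lipreading scores loosely related to both predictors
lipreading = 0.3 * intelligence + 0.4 * intelligibility + rng.normal(0, 5, n)

X = np.column_stack([intelligence, intelligibility])
model = LinearRegression().fit(X, lipreading)
print("prediction equation: lipreading =",
      f"{model.intercept_:.2f} + {model.coef_[0]:.2f}*intelligence"
      f" + {model.coef_[1]:.2f}*intelligibility")
print("R^2:", model.score(X, lipreading))
```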

5.
An automated threshold method has been developed for determining the maximum rate of speech understood by individual listeners. Two experiments were undertaken to determine whether the threshold was related to the comprehension of speech or to speech intelligibility. The first experiment compared thresholds of two types of rapid speech reportedly different in intelligibility: simple speeded speech and speech compressed by the sampling method. The second experiment sought to determine the relationship of the threshold to traditional comprehension measures. The results are discussed in terms of the intelligibility and comprehensibility of speech.

6.
The purpose of this investigation was to judge whether the Lombard effect, a characteristic change in the acoustical properties of speech produced in noise, existed in adductor spasmodic dysphonia speech, and if so, whether the effect added to or detracted from speaker intelligibility. Intelligibility, as described by Duffy, is the extent to which the acoustic signal produced by a speaker is understood by a listener based on the auditory signal alone. Four speakers with adductor spasmodic dysphonia provided speech samples consisting of low-probability sentences from the Speech Perception in Noise test to use as stimuli. The speakers were first tape-recorded as they read the sentences in a quiet speaking condition and were later tape-recorded as they read the same sentences while exposed to background noise. The listeners used as subjects in this study were 50 undergraduate university students. The results of the statistical analysis indicated a significant difference between the intelligibility of the speech recorded in the quiet versus noise conditions (F(1,49) = 57.80, p ≤ .001). It was concluded that a deleterious Lombard effect existed for the adductor spasmodic dysphonia speaker group, with the premise that the activation of a Lombard effect in such patients may detract from their overall speech intelligibility.

7.
When speech is rapidly alternated between the two ears, intelligibility declines as the rate of alternation approaches 3 to 5 switching cycles per second, and then, paradoxically, returns to a good level beyond that point. We tested intelligibility when shadowing was used as a response measure (Experiment 1), when recall was used as a response measure (Experiment 2), and when time-compression was used to vary the speech rate of the presented materials (Experiment 3). In spite of claims that older adults are generally slower in switching attention, younger and older adults did not differ in the critical alternation rates producing minimal intelligibility. We suggest that the point of minimal intelligibility in alternated speech reflects an interaction between (1) the rate of disruption induced by breaking the speech stream between two sound sources, (2) the amount of contextual information per ear, and (3) the size of the silent gaps separating the speech elements that must be perceptually bridged.
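A sketch of how rapid alternation between the ears can be produced digitally; this is an assumed implementation for illustration, not the apparatus used in these experiments:

```python
# Gate a mono speech signal between the left and right channels at a chosen
# switching rate (one cycle = one left segment plus one right segment).
import numpy as np

def alternate_between_ears(speech, fs, switch_hz):
    t = np.arange(len(speech)) / fs
    to_left = (np.floor(2.0 * switch_hz * t) % 2) == 0   # square-wave gating
    left = np.where(to_left, speech, 0.0)
    right = np.where(to_left, 0.0, speech)
    return np.stack([left, right], axis=1)               # stereo: one column per ear

# Example: alternate at 3.5 cycles/sec, near the reported intelligibility minimum.
fs = 16000
speech = np.random.randn(2 * fs)   # stand-in for a recorded sentence
stereo = alternate_between_ears(speech, fs, 3.5)
```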

8.
Understanding low-intelligibility speech is effortful. In three experiments, we examined the effects of intelligibility on working memory (WM) demands imposed by perception of synthetic speech. In all three experiments, a primary speeded word recognition task was paired with a secondary WM-load task designed to vary the availability of WM capacity during speech perception. Speech intelligibility was varied either by training listeners to use available acoustic cues in a more diagnostic manner (as in Experiment 1) or by providing listeners with more informative acoustic cues (i.e., better speech quality, as in Experiments 2 and 3). In the first experiment, training significantly improved intelligibility and recognition speed; increasing WM load significantly slowed recognition. A significant interaction between training and load indicated that the benefit of training on recognition speed was observed only under low memory load. In subsequent experiments, listeners received no training; intelligibility was manipulated by changing synthesizers. Improving intelligibility without training improved recognition accuracy, and increasing memory load still decreased it, but more intelligible speech did not produce more efficient use of available WM capacity. This suggests that perceptual learning modifies the way available capacity is used, perhaps by increasing the use of more phonetically informative features and/or by decreasing use of less informative ones.

9.
Talkers hyperarticulate vowels when communicating with listeners who require increased speech intelligibility. Vowel hyperarticulation is said to be motivated by knowledge of the listener's linguistic needs because it typically occurs in speech to infants, foreigners and hearing-impaired listeners, but not to non-verbal pets. However, the degree to which vowel hyperarticulation is determined by feedback from the listener is surprisingly less well understood. This study examines whether mothers' speech input is driven by knowledge of the infant's linguistic competence, or by the infant's feedback cues. Specifically, we manipulated (i) mothers' beliefs about whether their infants could hear them or not, and (ii) the audibility of the speech signal available to the infant (full or partial audibility, or inaudible). Remarkably, vowel hyperarticulation was completely unaffected by mothers' beliefs; instead, its degree tracked audibility: vowels were hyperarticulated to the greatest extent in the full-audibility condition, hyperarticulation was reduced in the partially audible condition, and it was absent in the inaudible condition. Thus, while it might be considered adaptive to hyperarticulate speech to a hearing-impaired adult or infant, when these two factors (infant and hearing difficulty) are coupled, vowel hyperarticulation is sacrificed. Our results imply that infant feedback drives talker behavior and raise implications for intervention strategies used with carers of hearing-impaired infants.

10.
This study assessed intelligibility in a dysarthric patient with Parkinson's disease (PD) across five speech production tasks: spontaneous speech, repetition, reading, repeated singing, and spontaneous singing, using the same phrases for all but spontaneous singing. The results show that this speaker was significantly less intelligible when speaking spontaneously than in the other tasks. Acoustic analysis suggested that relative intensity and word duration were not independently linked to intelligibility, but dysfluencies (from perceptual analysis) and articulatory/resonance patterns (from acoustic records) were related to intelligibility in predictable ways. These data indicate that speech production task may be an important variable to consider during the evaluation of dysarthria. As speech production efficiency was found to vary with task in a patient with Parkinson's disease, these results can be related to recent models of basal ganglia function in motor performance.

11.
Speech intelligibility performance with an in-the-ear microphone embedded in a custom-molded deep-insertion earplug was compared with results obtained using a free-field microphone. Intelligibility differences between microphones were further analyzed to assess whether reduced intelligibility was specific to certain sound classes. 36 participants completed the Modified Rhyme Test using recordings made with each microphone. While speech intelligibility for both microphones was highly accurate, intelligibility with the free-field microphone was significantly better than with the in-the-ear microphone. There were significant effects of place and manner of sound production. Significant differences in recognition among specific phonemes were also revealed. Implications included modifying the in-the-ear microphone to transmit more high frequency energy. Use of the in-the-ear microphone was limited by significant loss of high-frequency energy of the speech signal which resulted in reduced intelligibility for some sounds; however, the in-the-ear microphone is a promising technology for effective communication in military environments.

12.
When deleted segments of speech are replaced by extraneous sounds rather than silence, the missing speech fragments may be perceptually restored and intelligibility improved. This phonemic restoration (PhR) effect has been used to measure various aspects of speech processing, with deleted portions of speech typically being replaced by stochastic noise. However, several recent studies of PhR have used speech-modulated noise, which may provide amplitude-envelope cues concerning the replaced speech. The present study compared the effects upon intelligibility of replacing regularly spaced portions of speech with stochastic (white) noise versus speech-modulated noise. In Experiment 1, filling periodic gaps in sentences with noise modulated by the amplitude envelope of the deleted speech fragments produced twice the intelligibility increase obtained with interpolated stochastic noise. Moreover, when lists of isolated monosyllables were interrupted in Experiment 2, interpolation of speech-modulated noise increased intelligibility whereas stochastic noise reduced intelligibility. The augmentation of PhR produced by modulated noise appeared without practice, suggesting that speech processing normally involves not only a narrowband analysis of spectral information but also a wideband integration of amplitude levels across critical bands. This is of considerable theoretical interest, but it also suggests that since PhRs produced by speech-modulated noise utilize potent bottom-up cues provided by the noise, they differ from the PhRs produced by extraneous sounds, such as coughs and stochastic noise.
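A simplified sketch of the two replacement signals contrasted here, generated from a deleted speech fragment; this is illustrative only and omits the filtering, calibration, and gating details of the actual experiments:

```python
# Stochastic (white) noise versus noise modulated by the amplitude envelope of
# the deleted speech fragment, the two gap-fillers compared in the abstract.
import numpy as np
from scipy.signal import hilbert

def stochastic_noise(n_samples, rms):
    noise = np.random.randn(n_samples)
    return noise / np.sqrt(np.mean(noise ** 2)) * rms     # flat-envelope filler

def speech_modulated_noise(deleted_fragment):
    envelope = np.abs(hilbert(deleted_fragment))           # envelope of the deleted speech
    carrier = np.random.randn(len(deleted_fragment))
    carrier /= np.sqrt(np.mean(carrier ** 2))
    return carrier * envelope                               # filler carries envelope cues
```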

13.
Kim J, Sironic A, Davis C. Perception, 2011, 40(7): 853-862.
Seeing the talker improves the intelligibility of speech degraded by noise (a visual speech benefit). Given that talkers exaggerate spoken articulation in noise, this set of two experiments examined whether the visual speech benefit was greater for speech produced in noise than in quiet. We first examined the extent to which spoken articulation was exaggerated in noise by measuring the motion of face markers as four people uttered 10 sentences either in quiet or in babble-speech noise (these renditions were also filmed). The tracking results showed that articulated motion in speech produced in noise was greater than that produced in quiet and was more highly correlated with speech acoustics. Speech intelligibility was tested in a second experiment using a speech-perception-in-noise task under auditory-visual and auditory-only conditions. The results showed that the visual speech benefit was greater for speech recorded in noise than for speech recorded in quiet. Furthermore, the amount of articulatory movement was related to performance on the perception task, indicating that the enhanced gestures made when speaking in noise function to make speech more intelligible.

14.
This article summarizes the developmental outcomes of Colorado children with significant hearing loss. Some of the research compares children born in hospitals that have implemented universal newborn hearing screening programs. Other research compares the developmental outcomes of early-identified and later-identified children with hearing loss. Early identification is defined as identification of hearing loss within the first six months of life. Late identification in the Colorado studies is defined as identification of hearing loss after the age of six months. In a few of the Colorado studies, age at initiation of intervention was used. Within the Colorado system, age of identification can be interpreted as almost synonymous with age of intervention, as the vast majority of children enter intervention services within two months after the identification of the hearing loss. Children who were early-identified and had early initiation of intervention services (within the first year of life) had significantly better vocabulary, general language abilities, speech intelligibility and phoneme repertoires, syntax as measured by mean length of utterance, social-emotional development, parental bonding, and parental grief resolution. Two other studies (Nebraska and Washington state) of early- versus later-initiation of intervention services report findings similar to the Colorado studies. Direct comparisons with the historical literature are not possible because the developmental delays of what would now be termed "later-identified" children were too low to report developmental ages for the birth through five-year-old population.

15.
Classical views of speech perception argue that the static and dynamic characteristics of spectral energy peaks (formants) are the acoustic features that underpin phoneme recognition. Here we use representations where the amplitude modulations of sub-band filtered speech are described, precisely, in terms of co-sinusoidal pulses. These pulses are parameterised in terms of their amplitude, duration and position in time across a large number of spectral channels. Coherent sweeps of energy across this parameter space are identified and the local transitions of pulse features across spectral channels are extracted. Synthesised speech based on manipulations of these local amplitude modulation features was used to explore the basis of intelligibility. The results show that removing changes in amplitude across channels has a much greater impact on intelligibility than differences in sweep transition or duration across channels. This finding has severe implications for future experimental design in the fields of psychophysics, electrophysiology and neuroimaging.

16.
The neighborhood activation model (NAM; P. A. Luce & Pisoni, 1998) of spoken word recognition was applied to the problem of predicting accuracy of visual spoken word identification. One hundred fifty-three spoken consonant-vowel-consonant words were identified by a group of 12 college-educated adults with normal hearing and a group of 12 college-educated deaf adults. In both groups, item identification accuracy was correlated with the computed NAM output values. Analysis of subsets of the stimulus set demonstrated that when stimulus intelligibility was controlled, words with fewer neighbors were easier to identify than words with many neighbors. However, when neighborhood density was controlled, variation in segmental intelligibility was minimally related to identification accuracy. The present study provides evidence of a common spoken word recognition system for both auditory and visual speech that retains sensitivity to the phonetic properties of the input.
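NAM output values depend in part on how many phonological neighbors a word has. A toy sketch of counting neighbors (words reachable by one phoneme substitution, deletion, or addition) over a hypothetical mini-lexicon; the transcriptions are illustrative, not the study's stimuli:

```python
# Count a word's phonological neighbours in a toy lexicon of phoneme tuples.
def is_neighbour(a, b):
    """True if phoneme sequences a and b differ by exactly one edit."""
    if a == b:
        return False
    if len(a) == len(b):                                  # one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:                         # one addition/deletion
        longer, shorter = (a, b) if len(a) > len(b) else (b, a)
        return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))
    return False

lexicon = {"cat": ("k", "ae", "t"), "bat": ("b", "ae", "t"),
           "cap": ("k", "ae", "p"), "cast": ("k", "ae", "s", "t")}
density = sum(is_neighbour(lexicon["cat"], other) for other in lexicon.values())
print("neighbourhood density of 'cat' in the toy lexicon:", density)   # -> 3
```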

17.
We studied speech intelligibility and memory performance for speech material heard under different signal‐to‐noise (S/N) ratios. Pre‐experimental measures of working memory capacity (WMC) were taken to explore individual susceptibility to the disruptive effects of noise. Thirty‐five participants first completed a WMC‐operation span task in quiet and later listened to spoken word lists containing 11 one‐syllable phonetically balanced words presented at four different S/N ratios (+12, +9, +6, and +3). Participants repeated each word aloud immediately after its presentation, to establish speech intelligibility and later on performed a free recall task for those words. The speech intelligibility function decreased linearly with increasing S/N levels for both the high‐WMC and low‐WMC groups. However, only the low‐WMC group had decreasing memory performance with increasing S/N levels. The memory of the high‐WMC individuals was not affected by increased S/N levels. Our results suggest that individual differences in WMC counteract some of the negative effects of speech noise.
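A minimal sketch of how a word can be presented at one of these S/N ratios by rescaling the noise relative to the speech; the calibration procedure used in the study itself may well differ:

```python
# Mix speech and noise at a target signal-to-noise ratio specified in dB.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10.0 ** (snr_db / 10.0))   # desired noise power
    return speech + noise * np.sqrt(target_p_noise / p_noise)

fs = 16000
word = np.random.randn(fs)        # stand-in for a recorded word
masker = np.random.randn(fs)      # stand-in for the noise masker
mixes = {snr: mix_at_snr(word, masker, snr) for snr in (12, 9, 6, 3)}
```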

18.
A continuous speech message alternated between the left and right ears retains generally good intelligibility, except at certain critical rates of alternation of about 3–4 switching cycles/sec. In the present experiment, subjects heard speech alternated between the two ears at eight different switching frequencies, and at four different speech rates. Results support an earlier contention that the critical intelligibility parameter in alternated speech is average speech content per ear segment, rather than absolute time per ear. Implications are discussed both in terms of critical speech segments in auditory analysis and in neural processing of binaural auditory information.

19.
When speech is rapidly alternated between the two ears, intelligibility declines as rates approach 3–5 switching cycles/sec and then paradoxically returns to a good level beyond that point. The present study examines previous explanations of the phenomenon by comparing intelligibility of alternated speech with that for presentation of an interrupted message to a single ear. Results favor one of two possible explanations, and a theoretical model to account for the effect is proposed.

20.
The insertion of noise in the silent intervals of interrupted speech has a very striking perceptual effect if a certain signal-to-noise ratio is used. Conflicting reports have been published as to whether the inserted noise improves speech intelligibility or not. The major difference between studies was the level of redundancy in the speech material. We show in the present paper that the noise leads to a better intelligibility of interrupted speech. The redundancy level determines the possible amount of improvement. The consequences of our findings are discussed in relation to such phenomena as continuity perception and pulsation threshold measurement. A hypothesis is formulated for the processing of interrupted stimuli with and without intervening noise: for stimuli presented with intervening noise, the presence in the auditory system of an automatic interpolation mechanism is assumed. The mechanism operates only if the noise makes it impossible to perceive the interruption.
