Similar Documents
20 similar documents were found for this query.
1.
This study assessed intelligibility in a dysarthric patient with Parkinson's disease (PD) across five speech production tasks: spontaneous speech, repetition, reading, repeated singing, and spontaneous singing, using the same phrases for all but spontaneous singing. The results show that this speaker was significantly less intelligible when speaking spontaneously than in the other tasks. Acoustic analysis suggested that relative intensity and word duration were not independently linked to intelligibility, but dysfluencies (from perceptual analysis) and articulatory/resonance patterns (from acoustic records) were related to intelligibility in predictable ways. These data indicate that speech production task may be an important variable to consider during the evaluation of dysarthria. As speech production efficiency was found to vary with task in a patient with Parkinson's disease, these results can be related to recent models of basal ganglia function in motor performance.

2.
We studied speech intelligibility and memory performance for speech material heard under different signal-to-noise (S/N) ratios. Pre-experimental measures of working memory capacity (WMC) were taken to explore individual susceptibility to the disruptive effects of noise. Thirty-five participants first completed a WMC operation-span task in quiet and later listened to spoken word lists containing 11 one-syllable phonetically balanced words presented at four different S/N ratios (+12, +9, +6, and +3 dB). Participants repeated each word aloud immediately after its presentation, to establish speech intelligibility, and later performed a free recall task on those words. Speech intelligibility declined linearly as the S/N ratio decreased, for both the high-WMC and low-WMC groups. However, only the low-WMC group showed declining memory performance as the S/N ratio decreased; the memory of the high-WMC individuals was unaffected. Our results suggest that individual differences in WMC counteract some of the negative effects of speech noise. Copyright © 2012 John Wiley & Sons, Ltd.
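A minimal Python sketch of how a masker is typically scaled to reach a target S/N ratio before mixing, for readers who want to reproduce such conditions; the function name, the random placeholder signals, and the sampling rate are illustrative assumptions, not details taken from the study.

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db` dB,
    then add it to the speech (hypothetical helper, not from the study)."""
    noise = noise[: len(speech)]                       # trim masker to speech length
    p_speech = np.mean(speech ** 2)                    # average speech power
    p_noise = np.mean(noise ** 2)                      # average noise power
    # solve 10*log10(p_speech / (g**2 * p_noise)) = snr_db for the gain g
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

if __name__ == "__main__":
    fs = 16000
    rng = np.random.default_rng(0)
    speech = rng.standard_normal(fs)                   # placeholder for a spoken word
    noise = rng.standard_normal(fs)                    # placeholder masker
    for snr in (12, 9, 6, 3):                          # the four S/N conditions
        mixed = mix_at_snr(speech, noise, snr)
        achieved = 10 * np.log10(np.mean(speech ** 2) / np.mean((mixed - speech) ** 2))
        print(f"target {snr:+d} dB, achieved {achieved:+.1f} dB")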

3.
Kim J, Sironic A, Davis C. Perception, 2011, 40(7): 853-862
Seeing the talker improves the intelligibility of speech degraded by noise (a visual speech benefit). Given that talkers exaggerate spoken articulation in noise, this set of two experiments examined whether the visual speech benefit was greater for speech produced in noise than in quiet. We first examined the extent to which spoken articulation was exaggerated in noise by measuring the motion of face markers as four people uttered 10 sentences either in quiet or in babble-speech noise (these renditions were also filmed). The tracking results showed that articulated motion in speech produced in noise was greater than that produced in quiet and was more highly correlated with speech acoustics. Speech intelligibility was tested in a second experiment using a speech-perception-in-noise task under auditory-visual and auditory-only conditions. The results showed that the visual speech benefit was greater for speech recorded in noise than for speech recorded in quiet. Furthermore, the amount of articulatory movement was related to performance on the perception task, indicating that the enhanced gestures made when speaking in noise function to make speech more intelligible.
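The motion-acoustics correlation reported above can be illustrated with a minimal sketch that correlates frame-wise face-marker speed with a frame-wise acoustic RMS envelope; the sampling rates, the random placeholder data, and the function names below are assumptions, not the authors' tracking pipeline.

import numpy as np

def rms_envelope(audio, fs, frame_rate):
    """Frame-wise RMS of the audio, one value per motion-capture frame."""
    hop = fs // frame_rate
    n_frames = len(audio) // hop
    frames = audio[: n_frames * hop].reshape(n_frames, hop)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def marker_speed(markers):
    """Mean frame-to-frame speed across markers; shape (frames, n_markers, 3)."""
    disp = np.diff(markers, axis=0)                    # displacement per frame
    return np.linalg.norm(disp, axis=2).mean(axis=1)   # average over markers

if __name__ == "__main__":
    fs, frame_rate = 44100, 60                         # audio rate, marker frame rate
    rng = np.random.default_rng(1)
    audio = rng.standard_normal(fs * 2)                # 2 s of placeholder audio
    markers = rng.standard_normal((frame_rate * 2, 20, 3)).cumsum(axis=0)
    acoustics = rms_envelope(audio, fs, frame_rate)[1:]    # align with diff'd motion
    motion = marker_speed(markers)
    n = min(len(acoustics), len(motion))
    r = np.corrcoef(motion[:n], acoustics[:n])[0, 1]   # Pearson correlation
    print(f"motion-acoustics correlation r = {r:.2f}")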

4.
The insertion of noise in the silent intervals of interrupted speech has a very striking perceptual effect if a certain signal-to-noise ratio is used. Conflicting reports have been published as to whether the inserted noise improves speech intelligibility or not. The major difference between studies was the level of redundancy in the speech material. We show in the present paper that the noise leads to better intelligibility of interrupted speech. The redundancy level determines the possible amount of improvement. The consequences of our findings are discussed in relation to such phenomena as continuity perception and pulsation threshold measurement. A hypothesis is formulated for the processing of interrupted stimuli with and without intervening noise: for stimuli presented with intervening noise, the presence in the auditory system of an automatic interpolation mechanism is assumed. The mechanism operates only if the noise makes it impossible to perceive the interruption.
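A minimal sketch of a periodic speech-interruption stimulus with the gaps either left silent or filled with noise, in the spirit of the manipulation described above; the interruption rate, duty cycle, and noise level are illustrative assumptions rather than the study's actual parameters.

import numpy as np

def interrupt(speech, fs, rate_hz=4.0, duty=0.5, fill_noise_db=None):
    """Silence speech outside the on-phase of a square wave at `rate_hz`.
    If `fill_noise_db` is given, fill the gaps with white noise that many dB
    below the speech RMS; if None, leave the gaps silent."""
    t = np.arange(len(speech)) / fs
    on = (t * rate_hz) % 1.0 < duty                    # True during speech bursts
    out = np.where(on, speech, 0.0)
    if fill_noise_db is not None:
        rms = np.sqrt(np.mean(speech ** 2))
        noise = np.random.default_rng(0).standard_normal(len(speech))
        noise *= (rms * 10 ** (-fill_noise_db / 20)) / np.sqrt(np.mean(noise ** 2))
        out[~on] = noise[~on]                          # noise only in the gaps
    return out

if __name__ == "__main__":
    fs = 16000
    speech = np.sin(2 * np.pi * 200 * np.arange(fs) / fs)    # placeholder signal
    silent_gaps = interrupt(speech, fs)                       # gaps left silent
    noisy_gaps = interrupt(speech, fs, fill_noise_db=0.0)     # noise at speech RMS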

5.
Davis C, Kim J. Cognition, 2006, 100(3): B21-B31
The study examined whether people can extract speech-related information from a talker's upper face, presented as either normally textured videos (Experiments 1 and 3) or videos showing only the outline of the head (Experiments 2 and 4). Experiments 1 and 2 used within- and cross-modal matching tasks. In the within-modal task, observers were presented with two pairs of short silent video clips showing the top part of a talker's head. In the cross-modal task, pairs of audio and silent video clips were presented. The task was to determine the pair in which the talker said the same sentence. Performance on both tasks was better than chance for the outline as well as the textured presentation, suggesting that judgments were primarily based on head movements. Experiments 3 and 4 tested whether observing the talker's upper face would help identify speech in noise. The results showed that viewing the talker's moving upper head produced a small but reliable improvement in speech intelligibility; however, this effect was only secure for the expressive sentences that involved greater head movements. The results suggest that people are sensitive to speech-related head movements that extend beyond the mouth area and can use these to assist in language processing.

6.
In order to function effectively as a means of communication, speech must be intelligible under the noisy conditions encountered in everyday life. Two types of perceptual synthesis have been reported that can reduce or cancel the effects of masking by extraneous sounds: Phonemic restoration can enhance intelligibility when segments are replaced or masked by noise, and contralateral induction can prevent mislateralization by effectively restoring speech masked at one ear when it is heard in the other. The present study reports a third type of perceptual synthesis induced by noise: enhancement of intelligibility produced by adding noise to spectral gaps. In most of the experiments, the speech stimuli consisted of two widely separated narrow bands of speech (center frequencies of 370 and 6000 Hz, each band having high-pass and low-pass slopes of 115 dB/octave meeting at the center frequency). These very narrow bands effectively reduced the available information to frequency-limited patterns of amplitude fluctuation lacking information concerning formant structure and frequency transitions. When stochastic noise was introduced into the gap separating the two speech bands, intelligibility increased for "everyday" sentences, for sentences that varied in the transitional probability of keywords, and for monosyllabic word lists. Effects produced by systematically varying noise amplitude and noise bandwidth are reported, and the implications of some of the novel effects observed are discussed.
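A minimal sketch of the two-narrow-band stimulus with noise added to the spectral gap, as described above; the 115 dB/octave slopes are only approximated here with high-order Butterworth filters, and the filter orders, gap edges, and noise level are assumptions rather than the authors' exact signal chain.

import numpy as np
from scipy import signal

def narrow_band(x, fs, fc, frac_octave=1 / 20, order=8):
    """Band-pass `x` around `fc` with a nominal width of `frac_octave` octaves."""
    lo, hi = fc * 2 ** (-frac_octave / 2), fc * 2 ** (frac_octave / 2)
    sos = signal.butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, x)

def gap_noise(n, fs, lo, hi, level):
    """White noise band-limited to the spectral gap between `lo` and `hi` Hz."""
    noise = np.random.default_rng(0).standard_normal(n)
    sos = signal.butter(8, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = signal.sosfiltfilt(sos, noise)
    return level * band / np.sqrt(np.mean(band ** 2))

if __name__ == "__main__":
    fs = 22050
    speech = np.random.default_rng(1).standard_normal(fs)    # placeholder for speech
    stimulus = narrow_band(speech, fs, 370) + narrow_band(speech, fs, 6000)
    stimulus += gap_noise(len(speech), fs, 500, 5000, level=0.05)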

7.
Cognitive functions and speech-recognition-in-noise were evaluated using a cognitive test battery (assessing response inhibition with the Hayling task, working memory capacity (WMC), and verbal information processing) and an auditory test of speech recognition. The cognitive tests were performed in silence, whereas the speech recognition task was presented in noise. Thirty young normally-hearing individuals participated in the study. The aim was to investigate one executive function, response inhibition, whether it is related to individual WMC, and how speech-recognition-in-noise relates to WMC and inhibitory control. The results showed a significant difference between initiation and response inhibition, suggesting that the Hayling task taps cognitive activity responsible for executive control. Our findings also suggest that high verbal ability was associated with better performance on the Hayling task. We also present findings suggesting that individuals who perform well on tasks involving response inhibition and WMC also perform well on a speech-in-noise task. Our findings indicate that the capacity to resist semantic interference can be used to predict performance on speech-in-noise tasks.

8.
9.
Noise annoyance during the performance of different nonauditory tasks
Three experiments were performed to study the effects of an ongoing task on the annoyance response to noise. In the first two experiments a total of five tasks were used: three versions of a proofreading task, a finger-dexterity task, and a complex reaction time (RT) task. Subjects performed the tasks during exposure to two levels of a continuous broadband noise. Task was of no consequence for rated annoyance. Four tasks were used in Experiment 3: proofreading, complex RT, grammatical reasoning, and simple RT. A third type of noise, irrelevant speech, was added to the broadband noises. Rated annoyance was lower during simple RT than during the reasoning and proofreading tasks, especially in the irrelevant speech condition. The difference corresponded to a 6-dB difference in noise level. It was concluded that task differences probably only explain a small part of the widely differing noise tolerance levels at different work places.

10.
When deleted segments of speech are replaced by extraneous sounds rather than silence, the missing speech fragments may be perceptually restored and intelligibility improved. This phonemic restoration (PhR) effect has been used to measure various aspects of speech processing, with deleted portions of speech typically being replaced by stochastic noise. However, several recent studies of PhR have used speech-modulated noise, which may provide amplitude-envelope cues concerning the replaced speech. The present study compared the effects upon intelligibility of replacing regularly spaced portions of speech with stochastic (white) noise versus speech-modulated noise. In Experiment 1, filling periodic gaps in sentences with noise modulated by the amplitude envelope of the deleted speech fragments produced twice the intelligibility increase obtained with interpolated stochastic noise. Moreover, when lists of isolated monosyllables were interrupted in Experiment 2, interpolation of speech-modulated noise increased intelligibility whereas stochastic noise reduced intelligibility. The augmentation of PhR produced by modulated noise appeared without practice, suggesting that speech processing normally involves not only a narrowband analysis of spectral information but also a wideband integration of amplitude levels across critical bands. This is of considerable theoretical interest, but it also suggests that since PhRs produced by speech-modulated noise utilize potent bottom-up cues provided by the noise, they differ from the PhRs produced by extraneous sounds, such as coughs and stochastic noise.
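A minimal sketch contrasting the two gap fillers described above, stochastic noise versus noise modulated by the amplitude envelope of the deleted speech; the envelope-extraction method (Hilbert magnitude, low-pass smoothed) and all parameter values are assumptions for illustration, not the study's procedure.

import numpy as np
from scipy import signal

def amplitude_envelope(x, fs, cutoff_hz=30.0):
    """Low-pass-smoothed magnitude of the analytic signal."""
    env = np.abs(signal.hilbert(x))
    sos = signal.butter(4, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    return np.clip(signal.sosfiltfilt(sos, env), 0.0, None)

def fill_gaps(speech, fs, rate_hz=2.5, modulated=True):
    """Delete speech in the off-phase of a square wave and replace it with noise."""
    t = np.arange(len(speech)) / fs
    on = (t * rate_hz) % 1.0 < 0.5
    noise = np.random.default_rng(0).standard_normal(len(speech))
    noise /= np.sqrt(np.mean(noise ** 2))
    if modulated:                                  # noise follows the deleted speech
        noise *= amplitude_envelope(speech, fs)
    else:                                          # stochastic noise at speech RMS
        noise *= np.sqrt(np.mean(speech ** 2))
    return np.where(on, speech, noise)

if __name__ == "__main__":
    fs = 16000
    speech = np.sin(2 * np.pi * 150 * np.arange(fs) / fs)    # placeholder signal
    modulated_fill = fill_gaps(speech, fs, modulated=True)
    stochastic_fill = fill_gaps(speech, fs, modulated=False)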

11.
Four experiments are reported investigating previous findings that speech perception interferes with concurrent verbal memory but difficult nonverbal perceptual tasks do not, to any great degree. The forgetting produced by processing noisy speech could not be attributed to task difficulty, since equally difficult nonspeech tasks did not produce forgetting, and the extent of forgetting produced by speech could be manipulated independently of task difficulty. The forgetting could not be attributed to similarity between memory material and speech stimuli, since clear speech, analyzed in a simple and probably acoustically mediated discrimination task, produced little forgetting. The forgetting could not be attributed to a combination of similarity and difficulty, since a very easy speech task involving clear speech produced as much forgetting as noisy speech tasks, as long as overt reproduction of the stimuli was required. By assuming that noisy speech and overtly reproduced speech are processed at a phonetic level but that clear, repetitive speech can be processed at a purely acoustic level, the forgetting produced by speech perception could be entirely attributed to the level at which the speech was processed. In a final experiment, results were obtained which suggest that if prior set induces processing of noisy and clear speech at comparable levels, the difference between the effects of noisy speech processing and clear speech processing on concurrent memory is completely eliminated.

12.
The effects of attention during encoding and rehearsal after initial encoding on frequency estimates were investigated in three experiments. Varying the level of processing affected the linear increase in frequency estimates as a function of actual frequency, but varying processing after encoding with remember or forget cues had the greatest effects on the intercept of the function relating judged to actual frequency. Deeper levels of processing improved performance in a frequency discrimination task, whereas remember and forget cues had only very small effects on performance. Materials that are easy to rehearse were compared with materials that are difficult to rehearse in Experiment 2. The results were interpreted as evidence against a covert rehearsal explanation of slope effects in frequency estimation tasks because materials that are difficult to rehearse tended to produce larger interactions between remember versus forget cues and frequency than materials that are easier to rehearse. In Experiment 3, an arithmetic task that was performed during word encoding affected the slope of the function relating judged to actual frequency, but the same task performed immediately after word presentation had no effect on frequency estimates. It was concluded that frequency is not stored automatically because attention during the initial stages of encoding affects it; however, attention devoted to processing after initial encoding does not affect the rate with which subjective frequency increases with repetitions.

13.
A study of the speech rate of voice warning signals
Using two kinds of test material, an ordinary conversational sentence list and an aircraft-warning sentence list, the appropriate speech rate for speech warning signals was studied with a speech intelligibility test and a subjective evaluation method. Six speech rates were used in the experiment: 0.11, 0.15, 0.20, 0.25, 0.35, and 0.45 s per character. The experiment simulated an aircraft cockpit environment: computer-generated, digitized speech signals were delivered to participants through headphones against 90 dB(A) aircraft noise. The study reached the following conclusions: the appropriate speech rate for speech warning signals is 0.25 s per character (4 characters per second), with a lower limit of more than 0.20 s per character (fewer than 5 characters per second) and an upper limit of 0.30 s per character (3.33 characters per second).
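A trivial arithmetic check of the unit conversion the abstract relies on (seconds per character versus characters per second); 0.30 s per character is included only because it appears as the reported upper limit.

rates_s_per_char = [0.11, 0.15, 0.20, 0.25, 0.30, 0.35, 0.45]
for r in rates_s_per_char:
    print(f"{r:.2f} s/char = {1 / r:.2f} char/s")
# 0.25 s/char -> 4.00 char/s (recommended); 0.20 -> 5.00; 0.30 -> 3.33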

14.
Ear advantages for CV syllables were determined for 28 right-handed individuals in a target monitoring dichotic task. In addition, ear dominance for dichotically presented tones was determined when the frequency difference of the two tones was small compared to the center frequency and when the frequency difference of the tones was larger. On all three tasks, subjects provided subjective separability ratings as measures of the spatial complexity of the dichotic stimuli. The results indicated a robust right ear advantage (REA) for the CV syllables and a left ear dominance on the two tone tasks, with a significant shift toward right ear dominance when the frequency difference of the tones was large. Although separability ratings for the group data indicated an increase in the perceived spatial separation of the components of the tone complex across the two tone tasks, the separability judgment ratings and the ear dominance scores were not correlated for either tone task. A significant correlation, however, was evidenced between the laterality measure for speech and the judgment of separability, indicating that a REA of increased magnitude is associated with more clearly localized and spatially separate speech sounds. Finally, the dominance scores on the two tone tasks were uncorrelated with the laterality measures of the speech task, whereas the scores on the tone tasks were highly correlated. The results suggest that spatial complexity does play a role in the emergence of the REA for speech. However, the failure to find a relationship between speech and nonspeech tasks suggests that not all perceptual asymmetries observed with dichotic stimuli can be accounted for by a single theoretical explanation.

15.
Forty Ss, previously classified as introverts or extraverts on the basis of scores on the Eysenck Personality Inventory, performed a visual vigilance task while being stimulated with noise at an intensity level of either 65 or 85 dB. Introverts given noise of 65 dB intensity showed an improvement in detection rate across trials, whereas introverts given noise of 85 dB intensity showed a decline in detection rate. Extraverts responded to noise of 65 dB intensity with a slight decrease in detection rate, but showed an improvement in detection over trials when noise of 85 dB intensity was given. When noise of the lower intensity was given, introverts showed greater sensitivity to signals than extraverts. When noise of the higher intensity was given, introverts and extraverts were equal in sensitivity. The results are discussed in terms of a hypothesized relationship between stimulation and arousal, with E-I as a moderator variable.

16.
The effect of stimulus repetition was investigated in a speech identification task where intelligibility was lowered not by white noise added to the audio waveform but by 'structural' noise added to the spectrum parameters. Several aspects of the results argue for the improvement in intelligibility between one and two presentations being due not to statistical averaging over internal or external noise, but to increased perceptual selectivity under the influence of the first presentation's stimulus properties.

17.
Understanding low-intelligibility speech is effortful. In three experiments, we examined the effects of intelligibility on working memory (WM) demands imposed by perception of synthetic speech. In all three experiments, a primary speeded word recognition task was paired with a secondary WM-load task designed to vary the availability of WM capacity during speech perception. Speech intelligibility was varied either by training listeners to use available acoustic cues in a more diagnostic manner (as in Experiment 1) or by providing listeners with more informative acoustic cues (i.e., better speech quality, as in Experiments 2 and 3). In the first experiment, training significantly improved intelligibility and recognition speed; increasing WM load significantly slowed recognition. A significant interaction between training and load indicated that the benefit of training on recognition speed was observed only under low memory load. In subsequent experiments, listeners received no training; intelligibility was manipulated by changing synthesizers. Improving intelligibility without training improved recognition accuracy, and increasing memory load still decreased it, but more intelligible speech did not produce more efficient use of available WM capacity. This suggests that perceptual learning modifies the way available capacity is used, perhaps by increasing the use of more phonetically informative features and/or by decreasing use of less informative ones.

18.
Working memory as a predictor of verbal fluency
This study investigated whether working memory capacity could account for individual differences in verbal fluency. Working memory was assessed by the speaking span test (Daneman & Green, 1986), which taxes the processing and storage functions of working memory during sentence production. Verbal fluency was assessed by (1) a speech generation task in which subjects made a speech about a picture; (2) an oral reading task in which subjects read aloud a prose passage; and (3) a Baars, Motley, and MacKay (1975) task for eliciting oral slips of the tongue (e.g., wet gun for get one). Speaking span was significantly correlated with performance on all three tasks; individuals with small speaking spans were less fluent and more prone to making speech errors. Whereas speaking span was related to individual differences in verbal fluency on the speech and reading tasks, reading span, a measure included to assess working memory capacity during sentence comprehension, was only significantly related to individual differences in verbal fluency on the reading task. The methodology proved useful for revealing whether a particular kind of oral reading error reflected a reading failure or an articulatory failure. This research was supported by a grant from the Natural Sciences and Engineering Research Council of Canada.

19.
The intelligibility of word lists subjected to various types of spectral filtering has been studied extensively. Although words used for communication are usually present in sentences rather than lists, there has been no systematic report of the intelligibility of lexical components of narrowband sentences. In the present study, we found that surprisingly little spectral information is required to identify component words when sentences are heard through narrow spectral slits. Four hundred twenty listeners (21 groups of 20 subjects) were each presented with 100 bandpass-filtered CID ("everyday speech") sentences; separate groups received center frequencies of 370, 530, 750, 1100, 1500, 2100, 3000, 4200, and 6000 Hz at 70 dBA SPL. In Experiment 1, intelligibility of single 1/3-octave bands with steep filter slopes (96 dB/octave) averaged more than 95% for sentences centered at 1100, 1500, and 2100 Hz. In Experiment 2, we used the same center frequencies with extremely narrow bands (slopes of 115 dB/octave intersecting at the center frequency, resulting in a nominal bandwidth of 1/20 octave). Despite the severe spectral tilt for all frequencies of this impoverished spectrum, intelligibility remained relatively high for most bands, with the greatest intelligibility (77%) at 1500 Hz. In Experiments 1 and 2, the bands centered at 370 and 6000 Hz provided little useful information when presented individually, but in each experiment they interacted synergistically when combined. The present findings demonstrate the adaptive flexibility of mechanisms used for speech perception and are discussed in the context of the LAME model of opportunistic multilevel processing.
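A minimal sketch computing the band edges implied by the two experiments (1/3-octave bands in Experiment 1, nominal 1/20-octave bands in Experiment 2) for the nine center frequencies listed; the edge formula fc * 2**(+/- width/2) is standard fractional-octave arithmetic and is an assumption here, not taken from the paper itself.

centres_hz = [370, 530, 750, 1100, 1500, 2100, 3000, 4200, 6000]

def band_edges(fc, width_octaves):
    """Lower and upper edge of a band `width_octaves` wide centred on `fc`."""
    return fc * 2 ** (-width_octaves / 2), fc * 2 ** (width_octaves / 2)

for fc in centres_hz:
    lo3, hi3 = band_edges(fc, 1 / 3)        # Experiment 1: 1/3-octave band
    lo20, hi20 = band_edges(fc, 1 / 20)     # Experiment 2: nominal 1/20-octave band
    print(f"{fc:5d} Hz  1/3 oct: {lo3:7.1f}-{hi3:7.1f} Hz   "
          f"1/20 oct: {lo20:7.1f}-{hi20:7.1f} Hz")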

20.
Research into intuitive problem solving has shown that participants' hypotheses are objectively closer to the accurate solution than their subjective ratings of closeness indicate. After separating conceptually intuitive problem solving from the solution of rational incremental tasks and of sudden insight tasks, we replicated this finding using more precise measures in a conceptual problem-solving task. In a second study, we distinguished performance level, processing style, implicit knowledge, and subjective feeling of closeness to the solution within the problem-solving task and examined the relationships of these different components with measures of intelligence and personality. Verbal intelligence correlated with performance level in problem solving, but not with processing style and implicit knowledge. Faith in intuition, openness to experience, and conscientiousness correlated with processing style, but not with implicit knowledge. These findings suggest that one needs to decompose processing style and intuitive components in problem solving to make predictions about the effects of intelligence and personality measures.
