Similar Articles
20 similar articles found (search time: 15 ms)
1.
Twenty speech pathologists measured total words and completed molar and molecular stuttering analyses from audio-recorded contrived samples of stuttered speech. Each subject completed a criterion-referenced test prior to performing the experimental tasks. Analyses were performed under two conditions: Condition I consisted of samples presented at 100 wpm as recorded (nonexpanded); Condition II consisted of parallel samples recorded at 100 wpm but presented at 59 wpm via a speech time expander. Results indicate that presentation of expanded samples (Condition II) significantly increased subject accuracy for the specific disfluency form-types word repetitions and part-word repetitions. Similar trends were noted for sound prolongations and hard contacts. Theoretical, experimental, and clinical implications are offered.

2.
Six speech samples containing varying amounts of schwa interjections were tape-recorded and presented to 36 male and 36 female listeners. For each sample, listeners were asked to make judgments of fluent, disfluent, and stuttered speech, and to answer the question “Would you recommend speech therapy?” Results indicated that speech samples containing 5% or more interjections evoked a judgment of disfluent speech by a majority of listeners. The sample containing 20% interjections, however, was found to evoke judgments of disfluent and stuttered speech about equally. Varying numbers of listeners recommended clinical services for disfluent speech. In general, the results indicated that (1) the presence of interjections in connected speech is not normal regardless of frequency, (2) fluent speech may not contain interjections in excess of 5%, and (3) with 20% interjections in speech, the distinction between disfluency and stuttering may be blurred.

3.
Among the variables affecting comprehension of linguistic stimuli by aphasic subjects are syntactic complexity and processing time. Comprehension performance of 15 aphasic adults was studied while altering the rate of speech presentation and varying the pause time between the major phrases within sentences of increasing grammatical complexity. Simple Active Affirmative Declarative, Negative, and Passive sentences were presented (1) at the rate of 150 words per minute (wpm) with 1-sec interphrase pause time (IPT); (2) 150 wpm with no pauses; (3) 120 wpm with 1-sec IPT; (4) 120 wpm with no pauses added. Performance varied with increasing syntactic complexity and as a function of processing time: comprehension was greater for active affirmative than for negative sentences, and greater for passive affirmative than for active negative sentences. Subjects demonstrated greater comprehension when sentences were presented at a slower-than-normal rate, and the addition of interphrase pause intervals aided comprehension. Combining the slower rate of presentation with IPT intervals provided the greatest increase in auditory processing time and a concomitant increase in comprehension performance. Clinical implications are discussed.

4.
Everyday verbal communication is often disturbed by various kinds of noise. Research shows that the energetic masking and informational masking produced by noise substantially affect speech perception. When perceiving native-like second-language (L2) speech, L2 listeners are generally disrupted more by noise than native listeners are, and their performance varies with the noise type, noise level, and the phonetic features of the target phonemes; L2 listeners also show considerable individual differences in perception, reflecting the influence of multiple factors. When perceiving foreign-accented L2 speech in noise, native listeners perform worse than they do when perceiving native speech; L2 listeners with limited L2 experience perceive foreign accents similar to their own relatively well, whereas L2 listeners with longer L2 experience show greater flexibility in perception.

5.
Six speech samples containing varying amounts of whole-word repetitions were tape-recorded and presented to 36 male and 36 female listeners. For each sample, listeners were asked to make judgments of fluent, disfluent, and stuttered speech, and to answer the question, “Would you recommend speech therapy?” Results showed that samples containing 5% or more word repetitions were not judged fluent speech by a majority of listeners. Judgments of disfluent and stuttered speech were nearly equal for speech samples containing word repetitions from 5% to 15%. At 20%, however, the judgments of stuttered speech were found to be more likely than judgments of disfluent speech. A majority of listeners recommended clinical services for speech samples containing 5% or more word repetitions. Generally, the results indicated that (1) the presence of whole-word repetitions is not normal regardless of frequency, (2) fluent speech may not contain 5% or more word repetitions, and (3) with 20% word repetitions the judgments of stuttering may be more likely than judgments of disfluency.

6.
Despite spectral and temporal discontinuities in the speech signal, listeners normally report coherent phonetic patterns corresponding to the phonemes of a language that they know. What is the basis for the internal coherence of phonetic segments? According to one account, listeners achieve coherence by extracting and integrating discrete cues; according to another, coherence arises automatically from general principles of auditory form perception; according to a third, listeners perceive speech patterns as coherent because they are the acoustic consequences of coordinated articulatory gestures in a familiar language. We tested these accounts in three experiments by training listeners to hear a continuum of three-tone, modulated sine wave patterns, modeled after a minimal pair contrast between three-formant synthetic speech syllables, either as distorted speech signals carrying a phonetic contrast (speech listeners) or as distorted musical chords carrying a nonspeech auditory contrast (music listeners). The music listeners could neither integrate the sine wave patterns nor perceive their auditory coherence to arrive at consistent, categorical percepts, whereas the speech listeners judged the patterns as speech almost as reliably as the synthetic syllables on which they were modeled. The outcome is consistent with the hypothesis that listeners perceive the phonetic coherence of a speech signal by recognizing acoustic patterns that reflect the coordinated articulatory gestures from which they arose.
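Sine-wave replicas of the kind described in this entry are typically synthesized by replacing each formant with a single time-varying sinusoid. A minimal sketch of that synthesis step, using illustrative frequency and amplitude values rather than the study's actual formant tracks:

```python
import math

def sinewave_replica(formant_tracks, sr=16000):
    """Synthesize a sine-wave replica: one sinusoid per formant track.
    `formant_tracks` is a list of tracks; each track is a list of
    (frequency_hz, amplitude) pairs, one pair per output sample."""
    n = len(formant_tracks[0])
    out = [0.0] * n
    for track in formant_tracks:
        phase = 0.0
        for i, (freq, amp) in enumerate(track):
            phase += 2 * math.pi * freq / sr  # accumulate instantaneous phase
            out[i] += amp * math.sin(phase)
    return out

# Example: a steady three-tone pattern near plausible /a/-like formant values.
tracks = [[(f, a)] * 8000 for f, a in [(700, 1.0), (1200, 0.5), (2600, 0.25)]]
replica = sinewave_replica(tracks)
```

Real sine-wave speech uses formant tracks measured from a spoken syllable, so frequency and amplitude vary over time; the constant tracks here are only a placeholder.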

7.
The authors' hypotheses were that (a) listeners regard speakers whose global speech rates they judge to be similar to their own as more competent and more socially attractive than speakers whose rates are different from their own and (b) gender influences those perceptions. Participants were 17 male and 28 female listeners; they judged each of 3 male and 3 female speakers in terms of 10 unipolar adjective scales. The authors used 8 of the scales to derive 2 scores describing the extent to which the listener viewed a speaker as competent and socially attractive. The 2 scores were related by trend analyses (a) to the listeners' perceptions of the speakers' speech rates as compared with their own and (b) to comparisons of the actual speech rates of the speakers and listeners. The authors examined trend components of the data by split-plot multiple regression analyses. In general, the results supported both hypotheses. The participants judged speakers with speech rates similar to their own as more competent and socially attractive than speakers with speech rates slower or faster than their own. However, the ratings of competence were significantly influenced by the gender of the listeners, and those of social attractiveness were influenced by the gender of the listeners and the speakers.


9.
In noisy situations, visual information plays a critical role in the success of speech communication: listeners are better able to understand speech when they can see the speaker. Visual influence on auditory speech perception is also observed in the McGurk effect, in which discrepant visual information alters listeners' auditory perception of a spoken syllable. When hearing /ba/ while seeing a person saying /ga/, for example, listeners may report hearing /da/. Because these two phenomena have been assumed to arise from a common integration mechanism, the McGurk effect has often been used as a measure of audiovisual integration in speech perception. In this study, we test whether this assumed relationship exists within individual listeners. We measured participants' susceptibility to the McGurk illusion as well as their ability to identify sentences in noise across a range of signal-to-noise ratios in audio-only and audiovisual modalities. Our results do not show a relationship between listeners' McGurk susceptibility and their ability to use visual cues to understand spoken sentences in noise, suggesting that McGurk susceptibility may not be a valid measure of audiovisual integration in everyday speech processing.

10.
Two experiments asked whether listeners can judge word rate from a speech signal that has been degraded in various ways. In the first, the rates of spontaneous speech were increased by 42% and further transformed to produce tone-silence sequences. The tone-silence sequences were presented to listeners who judged the rate of each sequence. Results clearly indicated that listeners could differentiate the rates of the tone-silence sequences, suggesting that minimal nonlinguistic information may be sufficient to make grossly accurate estimates of speech rates. In the second study, listeners were presented with speech sequences involving three naturally produced rates (slow, moderate, and fast) in three conditions (clear, frequency-inverted, and tone-silence) such that different listeners participated in the three conditions, but heard all rates in each condition. Listeners in the clear and frequency-inverted conditions distinguished all three rates, but those in the tone-silence condition differentiated only the slow and moderate rates. Contrary to expectation, the gender and extroversion scores of the listeners did not affect their judgments.

11.
The purpose of this investigation was to determine if the speech of “successfully therapeutized” stutterers and a group of partially treated stutterers was perceptually different from the speech of normal speakers when judged by unsophisticated listeners. Tape-recorded speech samples of treated stutterers were obtained from leading proponents of (1) Van Riperian, (2) metronome-conditioned speech retraining, (3) delayed auditory feedback, (4) operant conditioning, (5) precision fluency shaping, and (6) “holistic” therapy programs. Fluent speech samples from these groups of stutterers were paired with matched fluent samples of normal talkers and presented to a group of 20 unsophisticated judges. The judges were instructed to select from each paired speech sample presented to them the one produced by the stuttering subject. The results of the analyses showed that five of seven experimental groups were identified at levels significantly above chance. It can be concluded that the fluent speech of the partially and successfully treated stutterers was perceptibly different from the utterances of the normal speakers and that the perceptual disparity can be detected, even by unsophisticated listeners.

12.
During speech perception, listeners make judgments about the phonological category of sounds by taking advantage of multiple acoustic cues for each phonological contrast. Perceptual experiments have shown that listeners weight these cues differently. How do listeners weight and combine acoustic cues to arrive at an overall estimate of the category for a speech sound? Here, we present several simulations using mixture-of-Gaussians models that learn cue weights and combine cues on the basis of their distributional statistics. We show that a cue-weighting metric in which cues receive weight as a function of their reliability at distinguishing phonological categories provides a good fit to the perceptual data obtained from human listeners, but only when these weights emerge through the dynamics of learning. These results suggest that cue weights can be readily extracted from the speech signal through unsupervised learning processes.
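The reliability-weighted cue combination this entry describes can be sketched by fitting one Gaussian per cue per category and weighting each cue by how well it separates the categories. The cue names and distribution parameters below are hypothetical, and the d′-based weighting is one plausible reliability metric, not the paper's learned-weight model:

```python
import math
import random

random.seed(0)

# Hypothetical two-cue contrast: cue 0 ("VOT-like") separates the categories
# well; cue 1 ("f0-like") overlaps heavily. (Illustrative values only.)
params = {
    "b": [(10.0, 5.0), (200.0, 30.0)],  # (mean, sd) per cue for category "b"
    "p": [(60.0, 5.0), (210.0, 30.0)],  # (mean, sd) per cue for category "p"
}

def sample(cat, n):
    return [[random.gauss(m, s) for m, s in params[cat]] for _ in range(n)]

data = {c: sample(c, 500) for c in params}

def fit(xs):
    """Fit a 1-D Gaussian (mean, sd) to a list of values."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, math.sqrt(v)

fits = {c: [fit([row[i] for row in data[c]]) for i in range(2)] for c in data}

def dprime(i):
    """Reliability of cue i: mean separation over pooled sd."""
    (m1, s1), (m2, s2) = fits["b"][i], fits["p"][i]
    return abs(m1 - m2) / math.sqrt((s1 ** 2 + s2 ** 2) / 2)

d = [dprime(i) for i in range(2)]
w = [x / sum(d) for x in d]  # normalized cue weights

def loglik(x, m, s):
    return -0.5 * ((x - m) / s) ** 2 - math.log(s)

def classify(stim):
    """Pick the category with the highest reliability-weighted log-likelihood."""
    scores = {c: sum(w[i] * loglik(stim[i], *fits[c][i]) for i in range(2))
              for c in fits}
    return max(scores, key=scores.get)
```

Because the first cue separates the categories far better, its weight dominates, and classification tracks that cue almost exclusively; in the paper the analogous weights emerge gradually through unsupervised learning rather than being computed in one step.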

13.
Upon hearing an ambiguous speech sound dubbed onto lipread speech, listeners adjust their phonetic categories in accordance with the lipread information that tells what the phoneme should be (recalibration). Here we used sine wave speech (SWS) to show that this tuning effect occurs if the SWS sounds are perceived as speech, but not if the sounds are perceived as non-speech. In contrast, selective speech adaptation occurred irrespective of whether listeners were in speech or non-speech mode. These results provide new evidence for the distinction between a speech and non-speech processing mode, and they demonstrate that different mechanisms underlie recalibration and selective speech adaptation.

14.
We conducted experiments on the effects of brief prior exposure to time-altered speech on preferred listening rate and the rate listeners would select when asked to listen to speech as fast as possible with good compression (induced listening rate). In Exp. 1, 48 participants were exposed either to normal rate speech or to speech compressed to twice-normal rate. Brief exposure to twice-normal rate speech led to a faster induced listening rate than exposure to normal rate speech. In Exp. 2, 31 participants were briefly exposed to normal rate speech, speech compressed to twice-normal rate, or speech expanded to half-normal rate. The faster the rate of the exposure speech, the faster the induced rate. Speech compressed to twice-normal rate led to a faster induced listening rate than exposure to speech expanded to half-normal rate. Normal rate speech was intermediate between twice-normal and half-normal rate and did not differ significantly from them. Induced listening rate was a linear combination of listening rate preference and recent forced exposure to time-altered speech.

15.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed more often at gestures than native listeners. However, only native but not non-native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners might gaze at gestures more because it may be more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non-native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.

16.
Responding to indirect speech acts
Indirect speech acts, like the request Do you know the time?, have both a literal meaning, here “I ask you whether you know the time,” and an indirect meaning, “I request you to tell me the time.” In this paper I outline a model of how listeners understand such speech acts and plan responses to them. The main proposals are these. The literal meaning of indirect speech acts can be intended to be taken seriously (along with the indirect meaning) or merely pro forma. In the first case listeners are expected to respond to both meanings, as in Yes, I do—it's six, but in the second case only to the indirect meaning, as in It's six. There are at least six sources of information listeners use in judging whether the literal meaning was intended seriously or pro forma, as well as whether there was intended to be any indirect meaning. These proposals were supported in five experiments in which ordinary requests for information were made by telephone of 950 local merchants.

17.
Foreign-accented speech is generally harder to understand than native-accented speech. This difficulty is reduced for non-native listeners who share their first language with the non-native speaker. It is currently unclear, however, how non-native listeners deal with foreign-accented speech produced by speakers of a different language. We show that the process of (second) language acquisition is associated with an increase in the relative difficulty of processing foreign-accented speech. Therefore, experiencing greater relative difficulty with foreign-accented speech compared with native speech is a marker of language proficiency. These results contribute to our understanding of how phonological categories are acquired during second language learning.

18.
This study investigated listeners' perception of the speech naturalness of people who stutter (PWS) speaking under delayed auditory feedback (DAF), with particular attention to possible listener differences. Three panels of judges, consisting of 14 stuttering individuals, 14 speech-language pathologists, and 14 naive listeners, rated the naturalness of speech samples of stuttering and non-stuttering individuals using a 9-point interval scale. Results clearly indicate that these three groups evaluate naturalness differently. Naive listeners appear to be more severe in their judgments than speech-language pathologists and stuttering listeners, and speech-language pathologists are apparently more severe than PWS. The three listener groups showed similar trends with respect to the relationship between speech naturalness and speech rate. Results of all three indicated that for PWS, the slower a speaker's rate was, the less natural the speech was judged to sound. The three listener groups also showed similar trends with regard to the naturalness of the stuttering versus the non-stuttering individuals. All three panels considered the speech of the non-stuttering participants more natural. EDUCATIONAL OBJECTIVES: The reader will be able to: (1) discuss the speech naturalness of people who stutter speaking under delayed auditory feedback, (2) discuss listener differences about the naturalness of people who stutter speaking under delayed auditory feedback, and (3) discuss the importance of speech rate for the naturalness of speech.

19.
Perception accuracy for Mandarin vowels and tones was measured in native Mandarin speakers and in native Korean speakers of intermediate and advanced Mandarin proficiency under three backgrounds: quiet, speech noise, and speech-modulated noise. In quiet, the three groups perceived the speech similarly, whereas in speech noise the native speakers' accuracy was significantly higher than that of the intermediate-level L2 listeners. Further tests showed that the intermediate-level L2 listeners' greater difficulty in speech noise arose because they were more affected by the energetic masking in the speech noise than the native listeners were, while the informational masking they experienced was comparable to that of the other two groups.
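Stimuli for noise conditions like these are commonly constructed by mixing clean speech with a masker at a fixed signal-to-noise ratio (SNR). A minimal sketch of that mixing step, operating on synthetic waveforms represented as lists of samples rather than the study's actual stimuli:

```python
import math

def rms(x):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`
    (in dB), then add it sample-by-sample to `speech`."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * n for s, n in zip(speech, noise)]
```

At 0 dB SNR the scaled masker has the same RMS level as the speech; lowering `snr_db` makes the masker proportionally stronger, which is how the difficulty of a noise condition is typically swept in such experiments.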

20.
Models of speech processing typically assume that speech is represented by a succession of codes. In this paper we argue for the psychological validity of a prelexical (phonetic) code and for a postlexical (phonological) code. Whereas phonetic codes are computed directly from an analysis of input acoustic information, phonological codes are derived from information made available subsequent to the perception of higher order (word) units. The results of four experiments described here indicate that listeners can gain access to, or identify, entities at both of these levels. In these studies listeners were presented with sentences and were asked to respond when a particular word-initial target phoneme was detected (phoneme monitoring). In the first three experiments speed of lexical access was manipulated by varying the lexical status (word/nonword) or frequency (high/low) of a word in the critical sentences. Reaction times (RTs) to target phonemes were unaffected by these variables when the target phoneme was on the manipulated word. On the other hand, RTs were substantially affected when the target-bearing word was immediately after the manipulated word. These studies demonstrate that listeners can respond to the prelexical phonetic code. Experiment IV manipulated the transitional probability (high/low) of the target-bearing word and the comprehension test administered to subjects. The results suggest that listeners are more likely to respond to the postlexical phonological code when contextual constraints are present. The comprehension tests did not appear to affect the code to which listeners responded. A “Dual Code” hypothesis is presented to account for the reported findings. According to this hypothesis, listeners can respond to either the phonetic or the phonological code, and various factors (e.g., contextual constraints, memory load, clarity of the input speech signal) influence in predictable ways the code that will be responded to. 
The Dual Code hypothesis is also used to account for and integrate data gathered with other experimental tasks and to make predictions about the outcome of further studies.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号