Similar Literature
20 similar documents retrieved.
1.
Are perceptions of computer-synthesized speech altered by the belief that the person using this technology is disabled? In a 2 x 2 factorial design, participants completed an attitude pretest and were randomly assigned to watch an actor deliver a persuasive appeal under 1 of the following 4 conditions: disabled or nondisabled using normal speech and disabled or nondisabled using computer-synthesized speech. Participants then completed a posttest survey and a series of questionnaires assessing perceptions of voice, speaker, and message. Natural speech was perceived more favorably and was more persuasive than computer-synthesized speech. When the speaker was perceived to be speech-disabled, however, this difference diminished. This finding suggests that negatively viewed assistive technologies will be perceived more favorably when used by people with disabilities.

2.
This case report describes an unusual combination of speech and language deficits secondary to bilateral infarctions in a 62-year-old woman. The patient was administered an extensive series of speech, language, and audiologic tests and was found to exhibit a fluent aphasia in which reading and writing were extremely well preserved in comparison to auditory comprehension and oral expression, and a severe auditory agnosia. In spite of her auditory processing deficits, the patient exhibited unexpected self-monitoring ability and the capacity to form acoustic images on visual tasks. The manner in which she corrected and attempted to correct her phonemic errors, while ignoring semantic errors, suggests that different mechanisms may underlie the monitoring of these errors.

3.
A patient with a rather pure word deafness showed extreme suppression of right ear signals under dichotic conditions, suggesting that speech signals were being processed in the right hemisphere. Systematic errors in the identification and discrimination of natural and synthetic stop consonants further indicated that speech sounds were not being processed in the normal manner. Auditory comprehension improved considerably, however, when the range of speech stimuli was limited by contextual constraints. Possible implications for the mechanism of word deafness are discussed.

4.
Three experiments investigated possible acoustic determinants of the infant listening preference for motherese speech found by Fernald (1985). To test the hypothesis that the intonation of motherese speech was sufficient to elicit this preference, it was necessary to eliminate lexical content and to isolate the three major acoustic correlates of intonation: (1) fundamental frequency (Fo), or pitch; (2) amplitude, correlated with loudness; and (3) duration, related to speech rhythm. Three sets of auditory reinforcers were computer-synthesized, derived from the Fo (Experiment 1), amplitude (Experiment 2), and duration (Experiment 3) characteristics of the infant- and adult-directed natural speech samples used by Fernald (1985). Thus, each of these experiments focused on particular prosodic variables in the absence of segmental variation. Twenty 4-month-old infants were tested in an operant auditory preference procedure in each experiment. Infants showed a significant preference for the Fo patterns of motherese speech, but not for the amplitude or duration patterns of motherese.

5.
6.
Tests of speech-sound discrimination are used by special educators, reading specialists, and speech-language pathologists in assessing children's ability to differentiate between speech sounds occurring in standard English. Such tests are important in determining whether speech-sound articulation errors are caused by difficulty in making such differentiations. However, during the past 10 years, these tests have been criticized on the basis of their reliability and validity. The purpose of this study was to examine the use of two alternative methods of assessing speech-sound discrimination with a school-aged population to determine whether they elicited responses in a similar manner.

7.
The development of speech perception during the 1st year reflects increasing attunement to native language features, but the mechanisms underlying this development are not completely understood. One previous study linked reductions in nonnative speech discrimination to performance on nonlinguistic tasks, whereas other studies have shown associations between speech perception and vocabulary growth. The present study examined relationships among these abilities in 11-month-old infants using a conditioned head-turn test of native and nonnative speech sound discrimination, nonlinguistic object-retrieval tasks requiring attention and inhibitory control, and the MacArthur-Bates Communicative Development Inventory (L. Fenson et al., 1993). Native speech discrimination was positively linked to receptive vocabulary size but not to the cognitive control tasks, whereas nonnative speech discrimination was negatively linked to cognitive control scores but not to vocabulary size. Speech discrimination, vocabulary size, and cognitive control scores were not associated with more general cognitive measures. These results suggest specific relationships between domain-general inhibitory control processes and the ability to ignore variation in speech that is irrelevant to the native language and between the development of native language speech perception and vocabulary.

8.
Individual assessment of infants’ speech discrimination is of great value for studies of language development that seek to relate early and later skills, as well as for clinical work. The present study explored the applicability of the hybrid visual fixation paradigm (Houston et al., 2007) and the associated statistical analysis approach to assess individual discrimination of a native vowel contrast, /aː/-/eː/, in Dutch 6- to 10-month-old infants. Houston et al. found that 80% (8/10) of the 9-month-old infants successfully discriminated the contrast between the pseudowords boodup and seepug. Using the same approach, we found that 12% (14/117) of the infants in our sample discriminated the highly salient /aː/-/eː/ contrast. This percentage was reduced to 3% (3/117) when we corrected for multiple testing. Bayesian hierarchical modeling indicated that 50% of the infants showed evidence of discrimination. Advantages of Bayesian hierarchical modeling are that (1) there is no need for a correction for multiple testing and (2) better estimates at the individual level are obtained. Thus, individual speech discrimination can be more accurately assessed using state-of-the-art statistical approaches.
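To make the multiple-testing point concrete, here is a minimal sketch, not the authors' analysis: simple per-infant binomial tests on simulated data stand in for the paradigm's actual looking-time statistics, and a Holm correction over 117 simultaneous tests is then applied. The trial count, response probability, and 0.5 chance level are illustrative assumptions.

```python
import numpy as np
from scipy.stats import binomtest
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_trials = 24                                   # hypothetical trials per infant
hits = rng.binomial(n_trials, 0.55, size=117)   # simulated responses for 117 infants

# Per-infant one-sided binomial test against chance (0.5).
pvals = np.array([binomtest(int(h), n_trials, 0.5, alternative="greater").pvalue
                  for h in hits])
print("uncorrected proportion classified as discriminating:", (pvals < 0.05).mean())

# Holm correction for 117 simultaneous tests sharply reduces that proportion.
reject, _, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print("corrected proportion:", reject.mean())
```

A hierarchical (partial-pooling) model instead estimates all infants jointly, which is why it requires no such correction and yields more stable individual-level estimates, as the abstract notes.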

9.
Here we report, for the first time, a relationship between sensitivity to amplitude envelope rise time in infants and their later vocabulary development. Recent research in auditory neuroscience has revealed that amplitude envelope rise time plays a mechanistic role in speech encoding. Accordingly, individual differences in infant discrimination of amplitude envelope rise times could be expected to relate to individual differences in language acquisition. A group of 50 infants taking part in a longitudinal study contributed rise time discrimination thresholds when aged 7 and 10 months, and their vocabulary development was measured at 3 years. Experimental measures of phonological sensitivity were also administered at 3 years. Linear mixed effect models taking rise time sensitivity as the dependent variable, and controlling for non-verbal IQ, showed significant predictive effects for vocabulary at 3 years, but not for the phonological sensitivity measures. The significant longitudinal relationship between amplitude envelope rise time discrimination and vocabulary development suggests that early rise time discrimination abilities have an impact on speech processing by infants.
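A minimal sketch of the kind of model described, assuming hypothetical file, column, and variable names (rise_time_threshold, vocab_3y, nonverbal_iq, age_months, infant_id): a linear mixed-effects model with rise-time sensitivity as the dependent variable, vocabulary and non-verbal IQ as predictors, and a random intercept per infant across the 7- and 10-month visits.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per infant per visit (7 and 10 months).
df = pd.read_csv("rise_time_longitudinal.csv")

model = smf.mixedlm(
    "rise_time_threshold ~ vocab_3y + nonverbal_iq + age_months",
    data=df,
    groups=df["infant_id"],   # random intercept per infant for the repeated visits
)
print(model.fit().summary())
```

Including age_months as a covariate for the repeated visits is an assumption for illustration; the abstract specifies only the dependent variable, the vocabulary predictor, and the non-verbal IQ control.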

10.
Identification and discrimination of two-formant [bae-dae-gae] and [pae-tae-kae] synthetic speech stimuli and discrimination of corresponding isolated second formant transitions (chirps) were performed by six subjects. Stimuli were presented at several intensity levels such that the intensity of the F2 transition was equated between speech and nonspeech stimuli, or the overall intensity of the stimulus was equated. At higher intensity (92 dB), b-d-g and p-t-k identification and between-category discrimination performance declined and bilabial-alveolar phonetic boundaries shifted in location on the continuum towards the F2 steady-state frequency. Between-category discrimination improved from performance at 92 dB when 92-dB speech stimuli were simultaneously masked by 60-dB speech noise; alveolar-velar boundaries shifted to a higher frequency location in the 92-dB-plus-noise condition. Chirps were discriminated categorically when presented at 58 dB, but discrimination peaks declined at higher intensities. Perceptual performance for chirps and p-t-k stimuli was very similar, and slightly inferior to performance for b-d-g stimuli, where simultaneous masking by F1 resulted in a lower effective intensity of F2. The results were related to a suggested model involving pitch comparison and transitional quality perceptual strategies.

11.
Two aspects of visual speech processing in speechreading (word decoding and word discrimination) were tested in a group of 24 normal hearing and a group of 20 hearing-impaired subjects. Word decoding and word discrimination performance were independent of factors related to the impairment, both in a quantitative and a qualitative sense. Decoding skill, but not discrimination skill, was associated with sentence-based speechreading. The results were interpreted such that, in order to represent a critical component process in sentence-based speechreading, the visual speech perception task must entail lexically induced processing as a task-demand. The theoretical status of the word decoding task as one operationalization of a speech decoding module was discussed (Fodor, 1983). An error analysis of performance in the word decoding/discrimination tasks suggested that the perception of heard stimuli, as well as the perception of lipped stimuli, were critically dependent on the same features; that is, the temporally initial phonetic segment of the word (cf. Marslen-Wilson, 1987). Implications for a theory of visual speech perception were discussed.

12.
We examined the abilities of 15 patients with dementia of the Alzheimer type (DAT), 22 patients with Parkinson's Disease (PD), and 141 healthy subjects (ranging in age from 30 to 79 years) to detect and correct their own speech errors. Each subject was shown the Cookie Theft picture of the BDAE (Goodglass & Kaplan, 1972. The assessment of aphasia and related disorders. Philadelphia: Lea & Febiger.) and instructed to tell the examiner the "...story of what's happening in the picture." Self-monitoring performance was assessed by tabulating the number of uncorrected errors as well as repaired errors. We divided repairs into two types based on the psycholinguistics literature (van Wijk & Kempen, 1987. Cognitive Psychology, 19, 403-440). Speech corrections were judged to be lemma repairs when the reparandum was a single word, and reformulation repairs when a new syntactic constituent was added to the reparandum. Patients with DAT corrected only 24% of their total errors and patients with PD only 25%. Healthy subjects, by contrast, corrected from 72 to 92% of their total errors. Patients with DAT tended to rely on reformulation repairs while patients with PD used both repair types about equally often. While healthy elderly Ss (in the 70s group) utilized lemma repairs more often than the reformulation strategy, all other healthy Ss used both strategies about equally often. Across all groups naming performance correlated negatively with numbers of undetected errors. Results point to a previously unrecognized communication disorder associated with PD and DAT and manifested by an impairment in the ability to correct output errors. This impairment may be related to attentional and frontal dysfunction in the two patient groups.

13.
Twenty-eight right-handed patients who suffered a single cerebrovascular accident in the distribution of either the left or right middle cerebral artery were tested on their ability to discriminate complex-pitch and speech stimuli presented dichotically. Whereas the left hemisphere lesion group was impaired in dichotic speech but not in dichotic complex-pitch discrimination, the right hemisphere lesion group was impaired in dichotic complex pitch but not in dichotic speech discrimination. Complex-pitch phenomena may provide a useful model for the study of auditory function in the nondominant hemisphere.

14.
Discrimination of polysyllabic sequences by one- to four-month-old infants
The goal of this research was to ascertain the effects of suprasegmental parameters (fundamental frequency, amplitude, and duration) on discrimination of polysyllabic sequences by 1- to 4-month-old infants. A high-amplitude sucking procedure, with synthesized female speech, was used. Results indicate that young infants can discriminate the three-syllable sequences [marana] versus [malana] when suprasegmental characteristics typical of infant-directed speech emphasize the middle syllable. However, infants failed to demonstrate discrimination when adult-directed suprasegmentals were used and in several other experimental conditions in which prosodic parameters were manipulated. The pattern of results obtained in the six experiments suggests that the exaggerated suprasegmentals of infant-directed speech may function as a perceptual catalyst, facilitating discrimination by focusing the infant's attention on a distinctive syllable within polysyllabic sequences.

15.
The role of the left cerebral hemisphere for the discrimination of duration was examined in a group of normal subjects. Two tasks were presented: the first required a reaction-time response to the offset of monaural pulse sequences varying in interpulse duration, and the second required the discrimination of small differences in durations, within a delayed-comparison paradigm. In each task a right-ear advantage was obtained when the durations were 50 msec or less. No ear advantage was obtained for the larger durations of 67 to 120 msec. Since the perceptual distinctiveness of phonemes may be provided by durations approximating 50 msec, the nature of the relationship between the left hemisphere's role in temporal processing and speech processing may be elaborated.

16.
Errorless transfer of a discrimination across two continua
A procedure developed earlier (Terrace, 1963) successfully trained a red-green discrimination without the occurrence of any errors in 12 out of 12 cases. Errorless transfer from the red-green discrimination to a discrimination between a vertical and a horizontal line was accomplished by first superimposing the vertical and the horizontal lines on the red and green backgrounds, respectively, and then fading out the red and the green backgrounds. Superimposition of the two sets of stimuli without fading, or an abrupt transfer from the first to the second set of stimuli, resulted in the occurrence of errors during transfer. Superimposition, however, did result in some “incidental learning”. Performance following acquisition of the vertical-horizontal discrimination with errors differed from performance following acquisition without errors. If the vertical-horizontal discrimination was learned with errors, the latency of the response to S+ was permanently shortened and errors occurred during subsequent testing on the red-green discrimination even though the red-green discrimination was originally acquired without errors. If the vertical-horizontal discrimination was learned without errors, the latency of the response to S+ was unaffected and no errors occurred during subsequent testing on the red-green discrimination.

17.
To comprehend speech in most environments, listeners must combine some but not all sounds from across a wide range of frequencies. Three experiments were conducted to examine the role of amplitude comodulation in performing an essential part of this function: the grouping together of the simultaneous components of a speech signal. Each of the experiments used time-varying sinusoidal (TVS) sentences (Remez, Rubin, Pisoni, & Carrell, 1981) as base stimuli because their component tones are acoustically unrelated. The independence of the three tones reduced the number of confounding grouping cues available compared with those found in natural or computer-synthesized speech (e.g., fundamental frequency and simultaneity of harmonic onset). In each of the experiments, the TVS base stimuli were amplitude modulated to determine whether this modulation would lead to appropriate grouping of the three tones as reflected by sentence intelligibility. Experiment 1 demonstrated that amplitude comodulation at 100 Hz did improve the intelligibility of TVS sentences. Experiment 2 showed that the component tones of a TVS sentence must be comodulated (as opposed to independently modulated) for improvements in intelligibility to be found. Experiment 3 showed that the comodulation rates that led to intelligibility improvements were consistent with the effective rates found in experiments that examined the grouping of complex nonspeech sounds by common temporal envelopes (e.g., comodulation masking release; Hall, Haggard, & Fernandes, 1984). The results of these experiments support the claim that certain basic temporal-envelope processing capabilities of the human auditory system contribute to the perception of fluent speech.
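A minimal sketch of the comodulation manipulation, not the original stimuli: three steady sinusoids stand in for the time-varying tones of a TVS sentence, and a 100 Hz amplitude envelope is applied either identically to all three tones (comodulated) or with a different random phase per tone (independently modulated). The carrier frequencies, modulation depth, and phase randomization are illustrative assumptions.

```python
import numpy as np

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# Stand-ins for the three tones that track F1-F3 in a TVS sentence.
tones = [np.sin(2 * np.pi * f * t) for f in (500, 1500, 2500)]

def am(carrier, rate, phase=0.0, depth=1.0):
    """Sinusoidal amplitude modulation of a carrier signal."""
    envelope = 1 + depth * np.sin(2 * np.pi * rate * t + phase)
    return carrier * envelope

# Comodulated: the same 100 Hz envelope applied to all three tones.
comodulated = sum(am(x, 100.0) for x in tones)

# Independently modulated: each tone gets its own envelope phase,
# removing the common temporal envelope thought to support grouping.
rng = np.random.default_rng(1)
independent = sum(am(x, 100.0, phase=rng.uniform(0, 2 * np.pi)) for x in tones)
```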

18.
A discrimination was established between two fixed-ratio schedules of reinforcement. In one, fixed ratio 25, the reinforcer was delivered on the twenty-fifth response; in the other, fixed ratio 50, the fiftieth response was reinforced. In the first component of a chain, either fixed ratio 25 or fixed ratio 50 was randomly programmed on the center key of a three-key pigeon box. Reinforcement of a single peck on the side key was contingent upon discriminating which schedule had just been completed on the center key. During test trials, a timeout was introduced after the first response on fixed ratio 25 and after either the first or twenty-sixth response on fixed ratio 50. When the timeout followed the first response on fixed ratio 25 and fixed ratio 50, the accuracy of the discrimination was unaffected. When the timeout followed the first response on fixed ratio 25 and the twenty-sixth response on fixed ratio 50, the accuracy of the discrimination decreased rapidly to chance as a function of the duration of the timeout. The loss of discrimination was primarily due to errors after fixed ratio 50 was completed. The timeout appears to weaken the control over the choice response by the response-produced stimuli which preceded the timeout. The results are consistent with the interpretation that the discrimination between fixed ratio 25 and fixed ratio 50 is maintained by chaining of response-produced stimuli within the ratio cycle.

19.
To test the effect of linguistic experience on the perception of a cue that is known to be effective in distinguishing between [r] and [l] in English, 21 Japanese and 39 American adults were tested on discrimination of a set of synthetic speech-like stimuli. The 13 “speech” stimuli in this set varied in the initial stationary frequency of the third formant (F3) and its subsequent transition into the vowel over a range sufficient to produce the perception of [ra] and [la] for American subjects and to produce [ra] (which is not in phonemic contrast to [la]) for Japanese subjects. Discrimination tests of a comparable set of stimuli consisting of the isolated F3 components provided a “nonspeech” control. For Americans, the discrimination of the speech stimuli was nearly categorical, i.e., comparison pairs which were identified as different phonemes were discriminated with high accuracy, while pairs which were identified as the same phoneme were discriminated relatively poorly. In comparison, discrimination of speech stimuli by Japanese subjects was only slightly better than chance for all comparison pairs. Performance on nonspeech stimuli, however, was virtually identical for Japanese and American subjects; both groups showed highly accurate discrimination of all comparison pairs. These results suggest that the effect of linguistic experience is specific to perception in the “speech mode.”

20.
Two experiments investigated the role of metacognition in changing answers to multiple-choice, general-knowledge questions. Both experiments revealed qualitatively different errors produced by speeded responding versus confusability amongst the alternatives; revision completely corrected the former, but had no effect on the latter. Experiment 2 also demonstrated that a pretest, designed to make participants' actual experience with answer changing either positive or negative, affected the tendency to correct errors. However, this effect was not apparent in the proportion of correct responses; it was only discovered when the metacognitive component to answer changing was isolated with a Type 2 signal-detection measure of discrimination. Overall, the results suggest that future research on answer changing should more closely consider the metacognitive factors underlying answer changing, using Type 2 signal-detection theory to isolate these aspects of performance.
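A minimal sketch of a Type 2 signal-detection measure in the spirit described here, computed on simulated data under the simplifying assumption that "changed the answer" serves as the binary metacognitive report that the first answer was wrong; the hit and false-alarm rates are z-transformed into a Type 2 d'. The function name and data are illustrative, not the authors' analysis.

```python
import numpy as np
from scipy.stats import norm

def type2_dprime(correct, changed):
    """Type 2 d': how well the decision to change an answer tracks its correctness."""
    correct = np.asarray(correct, bool)
    changed = np.asarray(changed, bool)
    # Hit: changed an answer that was in fact wrong; false alarm: changed a correct one.
    hit_rate = changed[~correct].mean()
    fa_rate = changed[correct].mean()
    # Keep rates away from 0 and 1 before the z-transform.
    hit_rate, fa_rate = np.clip([hit_rate, fa_rate], 0.01, 0.99)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Simulated example: 200 questions, 60% initially correct, with answer changes
# more likely after incorrect than correct first responses.
rng = np.random.default_rng(2)
correct = rng.random(200) < 0.6
changed = np.where(correct, rng.random(200) < 0.1, rng.random(200) < 0.4)
print(round(type2_dprime(correct, changed), 2))
```

The point of the Type 2 measure is that it separates metacognitive sensitivity (knowing which answers are wrong) from first-order accuracy (how many answers are correct), which is why the pretest effect appeared only in this measure.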
