Similar Articles
1.
A S Meyer, Cognition, 1992, 42(1-3): 181-211
Phonological encoding in language production can be defined as a set of processes generating utterance forms on the basis of semantic and syntactic information. Most evidence about these processes stems from analyses of sound errors. In section 1 of this paper, certain important results of these analyses are reviewed. Two prominent models of phonological encoding, which are mainly based on speech error evidence, are discussed in section 2. In section 3, limitations of speech error analyses are discussed, and it is argued that detailed and comprehensive models of phonological encoding cannot be derived solely on the basis of error analyses. As is argued in section 4, a new research strategy is required. Instead of using the properties of errors to draw inferences about the generation of correct word forms, future research should directly investigate the normal process of phonological encoding.

2.
The author reports on a series of integrated studies on melodic contours in infant-directed (ID) speech. ID melodies in speech are taken as an instructive example of intuitive parenting in order to review current evidence on its forms, functions and determinants. The forms and functions of melodic prototypes are compared in terms of universal properties and individual and/or cultural variability across samples of German, Chinese and American mothers, and German mothers and fathers with their 2- and 3-month-old infants. Microanalyses of interactional contexts show that forms and functions of ID melodies are intimately related to typical dimensions of intuitive caregiving: arousing/soothing, turn-yielding/turn-closing, approving/disapproving. The communicative functions of ID melodies as both categorical and graded signals are discussed with respect to the current knowledge on infant responses to ID speech and on early speech perception. According to a comprehensive longitudinal study of ID speech in relation to stages of infant vocalization, ID speech results from fine-tuned adjustments in various prosodic and linguistic features to developmental changes in infants' perceptual and vocal competence. ID melodies evidently have the potential to draw infant attention to caregivers' speech, to regulate arousal and affect in infants, to provide models for imitation, to guide infants in practising communicative subroutines and to mediate linguistic information. Current evidence suggests that the melodies in caregivers' speech provide a species-specific guidance towards language acquisition.

3.
The relationship between male gender, left-handedness, high visuo-spatial and mathematical skill, speech problems and increased vulnerability to immune disorders was assessed in a large sample of university undergraduates. The only significant finding was a two-way interaction between hand preference and incidence of speech problems, with "pure" left-handers reporting a three-fold increase in the incidence of speech problems compared to mixed left-handers, mixed right-handers and "pure" right-handers. There was no evidence of an interaction among three or more of the variables. These findings are discussed in relation to the influential theory of cerebral lateralisation proposed by Geschwind and Galaburda (1987).

4.
There is some evidence that loudness judgments of speech are more closely related to the degree of vocal effort induced in speech production than to the speech signal's surface-acoustic properties such as intensity. Other researchers have claimed that speech loudness can be rationalized simply by considering the acoustic complexity of the signal. Because vocal effort can be specified optically as well as acoustically, a study to test the effort-loudness hypothesis was conducted that used conflicting audiovisual presentations of a speaker who produced consonant-vowel syllables with different efforts. It was predicted that if loudness judgments are constrained by effort perception rather than by simple acoustic parameters, then judgments ought to be affected by visual as well as auditory information. It is shown that loudness judgments are affected significantly by visual information even when subjects are instructed to base their judgments only on what they hear. A similar (though less pronounced) patterning of results is shown for a nonspeech "clapping" event, which attests to the generality of the loudness-effort effect previously thought to be special to speech. Results are discussed in terms of auditory, fuzzy logical, motor, and ecological theories of speech perception.

5.
The purpose of this study was to examine speech convergence and speech evaluation in fact-finding interviews conducted in the field. Forty interviewers (ERs), undergraduates enrolled in a class on interviewing processes, conducted 20–30 minute interviews with selected interviewees (EEs), business persons and professionals in fields of interest to the ERs. Speech behaviors examined included response latency, speech rate, and turn duration; these were coded per one-minute intervals of each interaction. Time series regression procedures indicated that both ERs and EEs converged speech rate and response latency toward their interlocutors' performances of these behaviors. Although turn duration convergence did not characterize the entire data set, male-male dyads did converge significantly and male (ER)-female (EE) dyads significantly diverged turn duration. Regarding speech evaluation, there was some evidence that greater response latency similarity, greater speech rate and response latency convergence, and faster ER speech and slower EE speech were positively related to the competence and social attractiveness judgments of participants. Limitations and implications are discussed.

6.
Two left-handed siblings with developmental stuttering are comprehensively described. The methods of study included speech and language evaluation, neurological and neuropsychological examinations, dichotic listening, auditory evoked responses, electroencephalogram, and CT scan asymmetry measurements. The data from each sibling showed evidence of anomalous cerebral dominance on many of the variables investigated. The CT scan measurements showed atypical asymmetries, especially in the occipital regions. These findings support the theory that stuttering may be related to anomalous cerebral dominance, on both functional and structural bases. Implications of anomalous dominance and the resultant effect of hemispheric rivalry on speech fluency are discussed.

7.
Speech sounds can be classified on the basis of their underlying articulators or on the basis of the acoustic characteristics resulting from particular articulatory positions. Research in speech perception suggests that distinctive features are based on both articulatory and acoustic information. In recent years, neuroelectric and neuromagnetic investigations provided evidence for the brain's early sensitivity to distinctive features and their acoustic consequences, particularly for place of articulation distinctions. Here, we compare English consonants in a Mismatch Field design across two broad and distinct places of articulation, labial and coronal, and provide further evidence that early evoked auditory responses are sensitive to these features. We further add to the findings of asymmetric consonant processing, although we do not find support for coronal underspecification. Labial glides (Experiment 1) and fricatives (Experiment 2) elicited larger Mismatch responses than their coronal counterparts. Interestingly, their M100 dipoles differed along the anterior/posterior dimension in the auditory cortex that has previously been found to spatially reflect place of articulation differences. Our results are discussed with respect to acoustic and articulatory bases of featural speech sound classifications and with respect to a model that maps distinctive phonetic features onto long-term representations of speech sounds.

8.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners' second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
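As an illustration only (not the authors' model), three of the seven psychoacoustic features named in the abstract, a loudness proxy (frame RMS), spectral centroid, and spectral flux, can be computed frame by frame from the short-time magnitude spectrum. Frame and hop sizes below are arbitrary choices for the sketch.

```python
# Sketch of frame-wise psychoacoustic descriptors: RMS loudness proxy,
# spectral centroid, and spectral flux. Illustrative parameters only.
import numpy as np

def acoustic_features(signal, sr, frame=1024, hop=512):
    """Return per-frame RMS, spectral centroid (Hz), and spectral flux."""
    window = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
    rms, centroid, flux = [], [], []
    prev_mag = None
    for start in range(0, len(signal) - frame + 1, hop):
        x = signal[start:start + frame] * window
        mag = np.abs(np.fft.rfft(x))
        rms.append(np.sqrt(np.mean(x ** 2)))
        # Centroid: magnitude-weighted mean frequency of the frame.
        centroid.append(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
        # Flux: summed positive magnitude change between adjacent frames.
        if prev_mag is not None:
            flux.append(np.sum(np.maximum(mag - prev_mag, 0.0)))
        prev_mag = mag
    return np.array(rms), np.array(centroid), np.array(flux)

# Sanity check: a pure 440 Hz tone should yield a centroid near 440 Hz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
loud, cent, fl = acoustic_features(tone, sr)
```

A real model along the abstract's lines would feed trajectories like these into a regression against the continuous emotion ratings; sharpness and roughness require psychoacoustic models beyond this sketch.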

9.
The results of a recent perceptual study (W. Ziegler & D. von Cramon, 1985, Anticipatory coarticulation in a patient with apraxia of speech. Brain and Language 26, 117-130) provided evidence for disturbed coarticulation in verbal apraxia. Further support for this finding is now provided by acoustic analyses. Formant frequencies and LP reflection coefficients were chosen to assess anticipatory vowel-to-vowel coarticulation and vowel anticipation in stop consonants, respectively. These parameters revealed a lack of coarticulatory cohesion in the speech of a patient suffering from verbal apraxia, explainable by a consistent delay in the initiation of anticipatory vowel gestures. The findings are discussed with respect to prosodic features and to theoretical and clinical concepts of verbal apraxia.

10.
On the basis of evidence from six areas of speech research, it is argued that there is no reason to assume that speech stimuli are processed by structures that are inherently different from structures used for other auditory stimuli. It is concluded that speech and non-speech auditory stimuli are probably perceived in the same way.

11.
Unlike speech production, lexical access in written production has not been investigated systematically in experiments. Four experiments were run on literate adults to support the view that, although the spoken and written language production systems obviously share some processing levels, each also has some specific processing components. The general findings provide evidence for this view and are discussed in the framework of studies of verbal production conducted on normal subjects and on brain-damaged patients.

12.
Irrelevant background speech disrupts serial recall of visually presented lists of verbal material. Three experiments tested the hypothesis that the degree of disruption is dependent on the number of words heard (i.e. word dose) whilst the task was undertaken. Experiments 1 and 2 showed that more disruption is produced if the word dose is increased, thereby providing evidence to support the experimental hypothesis. It was concluded from the first two experiments that the word-dose effect might be the result of increasing the amount of changing-state information in the speech. The results of Experiment 3 supported this conclusion by showing an interaction between word dose and changing-state information. It was noted however that the results might be explained within the working memory account of the disruptive action of irrelevant speech. A further two experiments cast doubt on this possibility by failing to replicate the finding that the phonological similarity between heard and seen material affects the degree of interference (Salame & Baddeley, 1982). The findings are discussed in relation to the changing-state hypothesis of the irrelevant speech effect (e.g. Jones, Madden, & Miles, 1992).

13.
The 'audio-phonatoric coupling' (APC) was investigated in two independent experiments. Slightly delayed auditory feedback (delay time 40 ms) of the subjects' own speech was used as the experimental method. The first experiment examined whether the strength of the APC depends on speech rate. In this experiment, 16 male subjects (Ss) were required to utter the test word /tatatas/ with stress placed on either the first or the second syllable at two different speech rates (fast and slow). In 16% of the randomly chosen speech trials, the delayed auditory feedback (DAF; 40 ms delay) was introduced. It was shown that the stressed phonation was significantly lengthened under the DAF condition. This lengthening was greater when Ss spoke slowly. The unstressed phonations were not influenced by the DAF condition. The second experiment examined whether speech intensity affects the APC. Nine male Ss were required to utter the test word /tatatas/ with stress placed on either the first or the second syllable using three different speech intensities (30 dB, 50 dB and 70 dB). In 16% of the randomly chosen speech trials the DAF condition was introduced. It was shown that speech intensity does not influence the DAF effect (lengthening of stressed phonation). These findings were taken as evidence that auditory feedback of the subjects' own speech can be incorporated into speech control during ongoing speech. Evidently, this feedback information is efficient only during the production of stressed syllables, and its influence varies as a function of speech rate. In addition, the significance of stressed syllables for the structuring of speech is discussed.
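Not the authors' apparatus, but as a minimal sketch of the manipulation: a 40 ms auditory feedback delay is simply a fixed offset of the microphone signal by `delay_s * sr` samples (640 samples at a 16 kHz sampling rate).

```python
# Sketch: delayed auditory feedback as a fixed sample offset.
import numpy as np

def delayed_feedback(signal, sr, delay_s=0.040):
    """Return the signal shifted later by delay_s, zero-padded at the start."""
    d = int(round(delay_s * sr))  # 0.040 * 16000 = 640 samples at 16 kHz
    return np.concatenate([np.zeros(d), signal])[: len(signal)]

sr = 16000
x = np.arange(8, dtype=float)                 # toy "speech" samples
y = delayed_feedback(x, sr, delay_s=2 / sr)   # 2-sample delay for illustration
# y == [0, 0, 0, 1, 2, 3, 4, 5]
```

In a live setup the same offset is realized with a ring buffer over the audio stream rather than over a complete recording.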

14.
We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered.

15.
Exaggeration of the vowel space in infant-directed speech (IDS) is well documented for English, but not consistently replicated in other languages or for other speech-sound contrasts. A second attested, but less discussed, pattern of change in IDS is an overall rise of the formant frequencies, which may reflect an affective speaking style. The present study investigates longitudinally how Dutch mothers change their corner vowels, voiceless fricatives, and pitch when speaking to their infant at 11 and 15 months of age. In comparison to adult-directed speech (ADS), Dutch IDS has a smaller vowel space, higher second and third formant frequencies in the vowels, and a higher spectral frequency in the fricatives. The formants of the vowels and spectral frequency of the fricatives are raised more strongly for infants at 11 than at 15 months, while the pitch is more extreme in IDS to 15-month-olds. These results show that enhanced positive affect is the main factor influencing Dutch mothers' realisation of speech sounds in IDS, especially to younger infants. This study provides evidence that mothers' expression of emotion in IDS can influence the realisation of speech sounds, and that the loss or gain of speech clarity may be secondary effects of affect.
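One common way to quantify the vowel space compared above is the area of the polygon spanned by the corner vowels in the F1-F2 plane, via the shoelace formula. The sketch below uses invented formant values for illustration; they are not data from the study.

```python
# Sketch: vowel space area from corner-vowel formants (shoelace formula).
import numpy as np

def vowel_space_area(f1, f2):
    """Area of the polygon whose vertices (F1, F2) are given in order."""
    f1 = np.asarray(f1, dtype=float)
    f2 = np.asarray(f2, dtype=float)
    return 0.5 * abs(np.dot(f1, np.roll(f2, -1)) - np.dot(f2, np.roll(f1, -1)))

# Hypothetical /i/, /a/, /u/ corner vowels as (F1, F2) in Hz:
area = vowel_space_area([300, 800, 350], [2300, 1300, 800])
# area == 350000.0 (Hz^2)
```

A smaller IDS triangle, as reported for the Dutch mothers, would show up directly as a smaller area than the speaker's ADS triangle.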

16.
An often-cited criterion for assessing the effect of a stuttering therapy is the ability of the stutterers to produce normally fluent speech. Many modern stuttering therapies use special techniques that may produce stutter-free speech that does not sound completely normal. The present study investigates this problem in the framework of the Dutch adaptation of the Precision Fluency Shaping Program.

Pre-, post-, and -year follow-up therapy speech samples of 32 severe stutterers who were treated in a four-week intensive therapy are compared with comparable samples of 20 nonstutterers. To that end, the samples were rated on 14 bipolar scales by groups of about 20 listeners. The results show that the speech of the stutterers in all three conditions differs significantly from the speech of the nonstutterers. The pretherapy speech takes an extreme position on a Distorted Speech dimension, due to the large proportion of disfluencies. The posttherapy speech has extremely low scores on a Dynamics/Prosody dimension, while the follow-up therapy speech differs from the normal speech on both dimensions, but the distances are now smaller. These results are discussed in relation to the severity of the stuttering problem in the group of treated stutterers. Finally, implications for future research on therapy evaluation are discussed.

17.
For nearly two decades it has been known that infants' perception of speech sounds is affected by native language input during the first year of life. However, definitive evidence of a mechanism to explain these developmental changes in speech perception has remained elusive. The present study provides the first evidence for such a mechanism, showing that the statistical distribution of phonetic variation in the speech signal influences whether 6- and 8-month-old infants discriminate a pair of speech sounds. We familiarized infants with speech sounds from a phonetic continuum, exhibiting either a bimodal or unimodal frequency distribution. During the test phase, only infants in the bimodal condition discriminated tokens from the endpoints of the continuum. These results demonstrate that infants are sensitive to the statistical distribution of speech sounds in the input language, and that this sensitivity influences speech perception.
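The familiarization manipulation can be sketched as follows. Under assumption (the token counts below are illustrative, not the study's): tokens from an 8-step continuum are presented with either two frequency peaks (bimodal) or one central peak (unimodal), matched for total number of tokens.

```python
# Sketch: bimodal vs. unimodal familiarization over an 8-step continuum.
import numpy as np

STEPS = np.arange(1, 9)  # continuum steps; endpoints are steps 1 and 8

# Illustrative token counts per step (equal totals across conditions):
bimodal = np.array([1, 2, 4, 2, 2, 4, 2, 1])   # peaks near steps 3 and 6
unimodal = np.array([1, 2, 2, 4, 4, 2, 2, 1])  # single central peak

def familiarization_sequence(counts, rng):
    """Shuffled token sequence whose step frequencies match `counts`."""
    tokens = np.repeat(STEPS, counts)
    rng.shuffle(tokens)
    return tokens

seq = familiarization_sequence(bimodal, np.random.default_rng(0))
```

The prediction tested in the abstract is then that only exposure to the bimodal sequence supports endpoint (step 1 vs. step 8) discrimination at test.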

18.
Three methods of voice restoration are available following total laryngectomy: tracheoesophageal speech (TEP), oesophageal speech, and the electrolarynx. TEP produces better voice quality than the other methods and is assumed to result in better quality of life. Little evidence exists to support the relationship between voice quality and quality of life, however. Advertising this study through several leading laryngectomy charities resulted in the completion of 226 questionnaires (TEP = 147; oesophageal speech = 42; electrolarynx = 37) comprising the Short Form 36 (SF-36) quality of life measure and questions examining perceived voice intelligibility. Additionally, 89 questionnaires comprising only the SF-36 were completed by participants who reported having no serious medical problems, to form a healthy control group. Results indicate that improved voice quality does not result in widespread benefits to quality of life. Differences between voice restoration methods emerged on only a few dimensions: electrolarynx and TEP were better than oesophageal speech with respect to pain, and TEP was better than oesophageal speech with respect to role limitation (physical problems). Additionally, whilst widespread differences between voice restoration methods did not occur, all three groups had a worse quality of life compared with the healthy control group. Implications of the results for the selection of voice restoration method to maximize quality of life are discussed.

19.
Upon hearing an ambiguous speech sound dubbed onto lipread speech, listeners adjust their phonetic categories in accordance with the lipread information (recalibration) that tells what the phoneme should be. Here we used sine wave speech (SWS) to show that this tuning effect occurs if the SWS sounds are perceived as speech, but not if the sounds are perceived as non-speech. In contrast, selective speech adaptation occurred irrespective of whether listeners were in speech or non-speech mode. These results provide new evidence for the distinction between a speech and non-speech processing mode, and they demonstrate that different mechanisms underlie recalibration and selective speech adaptation.

20.
Direct measures of overt behavior have been underutilized in research on speech and other social fears, anxiety, and phobias. This study demonstrates the usefulness of such variables in the evaluation of public speaking fear. A molecular behavioral assessment methodology was used to examine pauses and verbal dysfluencies of individuals with circumscribed speech fear (n=8) or generalized social anxiety (n=8), as well as nonanxious control participants (n=16), during an impromptu speech behavior test. Speech-fearful and generally socially anxious individuals paused more often and for a longer duration than the nonanxious group. Results also indicated greater increases in state anxiety during the speech in the circumscribed speech fear sample, relative to the generalized social anxiety and control groups. Taken together with other research, these findings provide evidence that circumscribed speech fear is a meaningful subtype and can be independent of generalized social anxiety. The utility of measuring pausing and verbal dysfluencies in the behavioral assessment of speech fear and other social anxiety and phobia is discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号