Similar literature
20 similar records found (search time: 15 ms)
1.
Some reaction time experiments are reported on the relation between the perception and production of phonetic features in speech. Subjects had to produce spoken consonant-vowel syllables rapidly in response to other consonant-vowel stimulus syllables. The stimulus syllables were presented auditorily in one condition and visually in another. Reaction time was measured as a function of the phonetic features shared by the consonants of the stimulus and response syllables. Responses to auditory stimulus syllables were faster when the response syllables started with consonants that had the same voicing feature as those of the stimulus syllables. A shared place-of-articulation feature did not affect the speed of responses to auditory stimulus syllables, even though the place feature was highly salient. For visual stimulus syllables, performance was independent of whether the consonants of the response syllables had the same voicing, same place of articulation, or no shared features. This pattern of results occurred in cases where the syllables contained stop consonants and where they contained fricatives. It held for natural auditory stimuli as well as artificially synthesized ones. The overall data reveal a close relation between the perception and production of voicing features in speech. It does not appear that such a relation exists between perceiving and producing places of articulation. The experiments are relevant to the motor theory of speech perception and to other models of perceptual-motor interactions.

2.
Using stimuli that could be labeled either as stops [b,d] or as fricatives [f,v,θ,ð], we found that, for a given acoustic stimulus, perceived place of articulation was dependent on perceived manner. This effect appeared for modified natural syllables with a free-identification task and for a synthetic transition continuum with a forced-choice identification task. Since perceived place could be changed by changing manner labels with no change in the acoustic stimulus, it follows that the processing of the place feature depends on the value the listener assigns to the manner feature rather than directly on any of the acoustic cues to manner. We interpret these results as evidence that the identification of place of articulation involves phonetic processing and could not be purely auditory.

3.
In previous experiments Ss were presented for ordered recall with sequences of five consonant phonemes paired with /a/ in which the middle three consonant phonemes shared the same manner of articulation (voiced, unvoiced, nasal), the same place of articulation (front, middle, back), or neither the same manner nor place of articulation (control sequences). Compared to performance in control sequences, the middle consonant phoneme was always more difficult to recall in manner of articulation sequences but not in place of articulation sequences. The results suggested that for these sequences consonant phonemes were not remembered in terms of their place of articulation. In the present experiment, sequences of consonant-vowel (CV) or vowel-consonant (VC) syllables were presented for recall in which each consonant phoneme was paired with a different vowel. When consonant phonemes in the different sequence types were presented for recall with different vowels, phonetic interference was observed for the middle consonant in place of articulation sequences as well as manner of articulation sequences, and the effect was observed in both CV and VC groups. It was suggested that vowels are encoded in short-term memory in terms of their place of articulation and that presenting consonant phonemes for recall with different vowels caused Ss to use this dimension to code consonant phonemes in short-term memory.

4.
How do acoustic attributes of the speech signal contribute to feature-processing interactions that occur in phonetic classification? In a series of five experiments addressed to this question, listeners performed speeded classification tasks that explicitly required a phonetic decision for each response. Stimuli were natural consonant-vowel syllables differing by multiple phonetic features, although classification responses were based on a single target feature. In control tasks, no variations in nontarget features occurred, whereas in orthogonal tasks nonrelevant feature variations occurred but had to be ignored. Comparison of classification times demonstrated that feature information may either be processed separately as independent cues for each feature or as a single integral segment that jointly specifies several features. The observed form of processing depended on the acoustic manifestations of feature variation in the signal. Stop-consonant place of articulation and voicing cues, conveyed independently by the pattern and excitation source of the initial formant transitions, may be processed separately. However, information for consonant place of articulation and vowel quality, features that interactively affect the shape of initial formant transitions, is processed as an integral segment. Articulatory correlates of each type of processing are discussed in terms of the distinction between source features that vary discretely in speech production and resonance features that can change smoothly and continuously. Implications for perceptual models that include initial segmentation of an input utterance into a phonetic feature representation are also considered.

5.
In the McGurk effect, perception of audiovisually discrepant syllables can depend on auditory, visual, or a combination of audiovisual information. Under some conditions, visual information can override auditory information to the extent that identification judgments of a visually influenced syllable can be as consistent as for an analogous audiovisually compatible syllable. This might indicate that visually influenced and analogous audiovisually compatible syllables are phonetically equivalent. Experiments were designed to test this issue using a compelling visually influenced syllable in an AXB matching paradigm. Subjects were asked to match an audio syllable /va/ either to an audiovisually consistent syllable (audio /va/-video /fa/) or an audiovisually discrepant syllable (audio /ba/-video /fa/). It was hypothesized that if the two audiovisual syllables were phonetically equivalent, then subjects should choose them equally often in the matching task. Results show, however, that subjects are more likely to match the audio /va/ to the audiovisually consistent /va/, suggesting differences in phonetic convincingness. Additional experiments further suggest that this preference is not based on a phonetically extraneous dimension or on noticeable relative audiovisual discrepancies.

6.
Recent experiments using a variety of techniques have suggested that speech perception involves separate auditory and phonetic levels of processing. Two models of auditory and phonetic processing appear to be consistent with existing data: (a) a strictly serial model in which auditory information would be processed at one level, followed by the processing of phonetic information at a subsequent level; and (b) a parallel model in which auditory and phonetic processing could proceed simultaneously. The present experiment attempted to distinguish empirically between these two models. Ss identified either an auditory dimension (fundamental frequency) or a phonetic dimension (place of articulation of the consonant) of synthetic consonant-vowel syllables. When the two dimensions varied in a completely correlated manner, reaction times were significantly shorter than when either dimension varied alone. This “redundancy gain” could not be attributed to speed-accuracy trade-offs, selective serial processing, or differential transfer between conditions. These results allow rejection of a completely serial model, suggesting instead that at least some portion of auditory and phonetic processing can occur in parallel.

7.
Synthetic speech stimuli were used to investigate whether aphasics' ability to perceive stop consonant place of articulation was enhanced by the extension of initial formant transitions in CV syllables. Phoneme identification and discrimination tests were administered to 12 aphasic patients, 5 fluent and 7 nonfluent. There were no significant differences in performance due to the extended transitions, and no systematic pattern of performance due to aphasia type. In both groups, discrimination was generally high and significantly better than identification, demonstrating that auditory capacity was retained, while phonetic perception was impaired; this result is consistent with repeated demonstrations that auditory and phonetic processes may be dissociated in normal listeners. Moreover, significant rank order correlations between performances on the Token Test and on both perceptual tasks suggest that impairment on these tests may reflect a general cognitive rather than a language-specific deficit.

8.
Several previous studies have shown that memory span is greater for short words than for long words. This effect is claimed to occur even when the short and long words are matched for the number of syllables and phonemes, and so to provide evidence for subvocal articulation as one mechanism that underlies memory span (Baddeley, Thomson, & Buchanan, 1975). The three experiments reported in this paper further investigate the articulatory determinants of word length effects on span tasks. Experiment 1 replicated Baddeley et al.'s finding of an effect of word length on auditory and visual span when the stimuli consist of words that differ in terms of the number of syllables. Experiments 2 and 3 showed that the effects of word length are eliminated when the words in the span task are matched for the number of syllables and phonemes but differ with respect to the duration and/or complexity of their articulatory gestures. These results indicate that it is the phonological structure of a word and not features of its actual articulation that determines the magnitude of the word length effect in span tasks.

9.
Several studies have indicated that dyslexics show a deficit in speech perception (SP). The main purpose of this research was to trace the development of SP in dyslexics and normal readers matched by grade, from 2nd to 6th grade of primary school, and to examine whether the phonetic contrasts relevant to SP change during development, taking individual differences into account. The two groups were compared on three phonetic tasks: voicing contrast, place-of-articulation contrast, and manner-of-articulation contrast. The results showed that the dyslexics performed more poorly than the normal readers in SP. For the place-of-articulation contrast, the developmental pattern was similar in both groups, but not for the voicing and manner-of-articulation contrasts. Manner of articulation had the greatest influence on SP, and it developed more across grades than the other contrasts in both groups.

10.
To examine the importance of distinctive features that are used to encode consonants (following Wickelgren’s analysis) in an immediate recall task, sequences of 5 consonants, all paired with the vowel /a/, were constructed and presented aurally for recall. The middle three items in each sequence all had either the same place of articulation (front, middle, or back of the vocal apparatus), or the same manner of articulation (voiced, unvoiced, or nasal), or were unrelated in either place or manner (control). It was shown that, in comparison with the control sequences, consonants embedded among others articulated similarly were recalled less accurately, suggesting that these distinctive features are important in encoding and memory maintenance. A comparison of the 3 manner and 3 place features showed that the greatest difficulty in recall occurred for the similar manner sequences (especially voiced and unvoiced), implicating manner of articulation as the critical distinctive feature in aural encoding. Some discussion is also presented of a distinction between articulatory and acoustic factors in encoding processes.

11.
To better understand how infants process complex auditory input, this study investigated whether 11-month-old infants perceive the pitch (melodic) or the phonetic (lyric) components within songs as more salient, and whether melody facilitates phonetic recognition. Using a preferential looking paradigm, uni-dimensional and multi-dimensional songs were tested; either the pitch or syllable order of the stimuli varied. As a group, infants detected a change in pitch order in a 4-note sequence when the syllables were redundant (experiment 1), but did not detect the identical pitch change with variegated syllables (experiment 2). Infants were better able to detect a change in syllable order in a sung sequence (experiment 2) than the identical syllable change in a spoken sequence (experiment 1). These results suggest that by 11 months, infants cannot “ignore” phonetic information in the context of perceptually salient pitch variation. Moreover, the increased phonetic recognition in song contexts mirrors findings that demonstrate advantages of infant-directed speech. Findings are discussed in terms of how stimulus complexity interacts with the perception of sung speech in infancy.

12.
In the McGurk effect, perception of audiovisually discrepant syllables can depend on auditory, visual, or a combination of audiovisual information. Under some conditions, visual information can override auditory information to the extent that identification judgments of a visually influenced syllable can be as consistent as for an analogous audiovisually compatible syllable. This might indicate that visually influenced and analogous audiovisually compatible syllables are phonetically equivalent. Experiments were designed to test this issue using a compelling visually influenced syllable in an AXB matching paradigm. Subjects were asked to match an audio syllable /va/ either to an audiovisually consistent syllable (audio /va/-video /fa/) or an audiovisually discrepant syllable (audio /ba/-video /fa/). It was hypothesized that if the two audiovisual syllables were phonetically equivalent, then subjects should choose them equally often in the matching task. Results show, however, that subjects are more likely to match the audio /va/ to the audiovisually consistent /va/, suggesting differences in phonetic convincingness. Additional experiments further suggest that this preference is not based on a phonetically extraneous dimension or on noticeable relative audiovisual discrepancies.

13.
To examine the claim that phonetic coding plays a special role in temporal order recall, deaf and hearing college students were tested on their recall of temporal and spatial order information at two delay intervals. The deaf subjects were all native signers of American Sign Language. The results indicated that both the deaf and hearing subjects used phonetic coding in short-term temporal recall, and visual coding in spatial recall. There was no evidence of manual or visual coding among either the hearing or the deaf subjects in the temporal order recall task. The use of phonetic coding for temporal recall is consistent with the hypothesis that recall of temporal order information is facilitated by a phonetic code.

14.
This study examined the role of phonetic factors in the performance of good and poor beginning readers on a verbal short-term memory task. Good and poor readers in the second and third grades repeated four-item lists of consonant-vowel syllables in which each consonant shared zero, one, or two features with other consonants in the string. As in previous studies, the poor readers performed less accurately than the good readers. However, the nature of their errors was the same: Both groups tended to transpose initial consonants as a function of their phonetic similarity and adjacency. These findings suggest that poor readers are able to employ a phonetic coding strategy in short-term memory, as do good readers, but less skillfully.

15.
16.
We investigated the role of syllables during speech planning in English by measuring syllable-frequency effects. So far, syllable-frequency effects in English have not been reported. English has poorly defined syllable boundaries, and thus the syllable might not function as a prominent unit in English speech production. Speakers produced either monosyllabic (Experiment 1) or disyllabic (Experiments 2-4) pseudowords as quickly as possible in response to symbolic cues. Monosyllabic targets consisted of either high- or low-frequency syllables, whereas disyllabic items contained either a 1st or 2nd syllable that was frequency-manipulated. Significant syllable-frequency effects were found in all experiments. Whereas previous findings for disyllables in Dutch and Spanish (languages with relatively clear syllable boundaries) showed effects of a frequency manipulation on 1st but not 2nd syllables, in our study English speakers were sensitive to the frequency of both syllables. We interpret this sensitivity as an indication that the production of English has more extensive planning scopes at the interface of phonetic encoding and articulation. (PsycINFO Database Record (c) 2010 APA, all rights reserved.)

17.
Previous work has demonstrated that the graded internal structure of phonetic categories is sensitive to a variety of contextual factors. One such factor is place of articulation: The best exemplars of voiceless stop consonants along auditory bilabial and velar voice onset time (VOT) continua occur over different ranges of VOTs (Volaitis & Miller, 1992). In the present study, we exploited the McGurk effect to examine whether visual information for place of articulation also shifts the best exemplar range for voiceless consonants, following Green and Kuhl's (1989) demonstration of effects of visual place of articulation on the location of voicing boundaries. In Experiment 1, we established that /p/ and /t/ have different best exemplar ranges along auditory bilabial and alveolar VOT continua. We then found, in Experiment 2, a similar shift in the best-exemplar range for /t/ relative to that for /p/ when there was a change in visual place of articulation, with auditory place of articulation held constant. These findings indicate that the perceptual mechanisms that determine internal phonetic category structure are sensitive to visual, as well as to auditory, information.

18.
Dichotic CV syllables (identical and nonidentical pairs) were presented at nine temporal offsets between 0 and 500 msec. One task consisted in judging quickly whether the syllables in a pair were phonetically the same or different; the other task was to identify both syllables. The fundamental frequency (pitch) of the synthetic stimuli was either the same or different, and either predictable or unpredictable. The pitch variable had surprisingly little effect on the latencies of "same"-"different" judgments, and the expected "preparation" effect of pitch predictability was barely present. Instead, there were strong effects on the frequencies of errors at short temporal delays, which suggests shifts or biases in the phonetic "same"-"different" criterion with context. A comparison with analogous errors in the identification task revealed identical patterns. Further analysis of identification errors showed no overall "feature sharing advantage": The direction of this effect depends on the kind of error committed. Also, a lag effect was found only in nonidentical pairs that received two identical responses. The results are discussed in the framework of a two-stage information-processing model. Effects of pitch are tentatively explained as biases from implicit (pitch) decisions at the auditory level on phonetic decisions in the presence of uncertainty. Four sources of errors are identified: fusion at the auditory level; "integration," confusions, and transpositions at the phonetic level.

19.
Ebbinghaus noted that there were great differences in nonsense syllables in ease of learning. Later investigators attempted to control for this variability but made little effort to account for the differences. An approach from the point of view of systematic linguistics suggests that a major source of variability can be found in the phonetic and orthographic "distance from English." A scale of phonetic distance and a scale of orthographic distance combined in multiple regression to predict association value and meaningfulness with R above +.80. Experimental tests suggest that subjects are highly sensitive to violations of the rules of syllable structure even when the syllables are very unlike English. It is suggested that nonsense syllables do equate for prior knowledge across subjects because all subjects are highly familiar with the phonetic and orthographic rules that contribute heavily to the meaningfulness of nonsense syllables.

20.
Previous research (Byrne, 1984) showed that adults who learned to read an orthography representing phonetic features (voicing, place of articulation) did not readily obtain usable knowledge of the mapping of phonetic features onto orthographic elements, as evidenced by failure to generalize to partially new stimuli. The present Experiment 1 used a different method of detecting learning savings during acquisition. Subjects learned a set of complex symbols standing for phones, with the elements representing voicing and place. In a second acquisition set, the signs for voicing were reversed. Learning speed was not affected, which was consistent with the claim that feature-element links went unnoticed in initial acquisition. In Experiment 2, some subjects were instructed to "find the rule" embodied in the orthography. None did, and acquisition rates were no different from those of uninstructed subjects. In Experiment 3, subjects had 4 h of training on the orthography, with consistent feature-symbol mapping for half of the subjects and arbitrary pairings for the remainder. No reaction time advantage emerged in the consistent condition, which is further evidence of nonanalytic acquisition. The results are related to data from children learning to read.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号