Similar Documents
20 similar documents found.
2.
The distinction between auditory and phonetic processes in speech perception was used in the design and analysis of an experiment. Earlier studies had shown that dichotically presented stop consonants are more often identified correctly when they share place of production (e.g., /ba-pa/) or voicing (e.g., /ba-da/) than when neither feature is shared (e.g., /ba-ta/). The present experiment was intended to determine whether the effect has an auditory or a phonetic basis. Increments in performance due to feature-sharing were compared for synthetic stop-vowel syllables in which formant transitions were the sole cues to place of production under two experimental conditions: (1) when the vowel was the same for both syllables in a dichotic pair, as in our earlier studies, and (2) when the vowels differed. Since the increment in performance due to sharing place was not diminished when vowels differed (i.e., when formant transitions did not coincide), it was concluded that the effect has a phonetic rather than an auditory basis. Right ear advantages were also measured and were found to interact with both place of production and vowel conditions. Taken together, the two sets of results suggest that inhibition of the ipsilateral signal in the perception of dichotically presented speech occurs during phonetic analysis.
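The zero/one/two-feature logic used throughout these dichotic studies is easy to make concrete. Below is a minimal Python sketch, assuming a standard two-feature (place, voicing) description of the English stops; the table and helper are illustrative, not the study's materials.

```python
# Illustrative sketch (not from the study): counting how many phonetic
# features two stop consonants share, using a two-feature
# (place, voicing) description of the English stops.
FEATURES = {
    "b": {"place": "bilabial", "voicing": "voiced"},
    "p": {"place": "bilabial", "voicing": "voiceless"},
    "d": {"place": "alveolar", "voicing": "voiced"},
    "t": {"place": "alveolar", "voicing": "voiceless"},
    "g": {"place": "velar",    "voicing": "voiced"},
    "k": {"place": "velar",    "voicing": "voiceless"},
}

def shared_features(c1: str, c2: str) -> int:
    """Number of features (place, voicing) that two consonants share."""
    return sum(FEATURES[c1][f] == FEATURES[c2][f] for f in ("place", "voicing"))

assert shared_features("b", "p") == 1  # /ba-pa/: share place of production
assert shared_features("b", "d") == 1  # /ba-da/: share voicing
assert shared_features("b", "t") == 0  # /ba-ta/: share neither
```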

3.
How do acoustic attributes of the speech signal contribute to feature-processing interactions that occur in phonetic classification? In a series of five experiments addressed to this question, listeners performed speeded classification tasks that explicitly required a phonetic decision for each response. Stimuli were natural consonant-vowel syllables differing by multiple phonetic features, although classification responses were based on a single target feature. In control tasks, no variations in nontarget features occurred, whereas in orthogonal tasks nonrelevant feature variations occurred but had to be ignored. Comparison of classification times demonstrated that feature information may be processed either separately, as independent cues for each feature, or as a single integral segment that jointly specifies several features. The observed form of processing depended on the acoustic manifestations of feature variation in the signal. Stop-consonant place of articulation and voicing cues, conveyed independently by the pattern and excitation source of the initial formant transitions, may be processed separately. However, information for consonant place of articulation and vowel quality, features that interactively affect the shape of initial formant transitions, is processed as an integral segment. Articulatory correlates of each type of processing are discussed in terms of the distinction between source features that vary discretely in speech production and resonance features that can change smoothly and continuously. Implications for perceptual models that include initial segmentation of an input utterance into a phonetic feature representation are also considered.

4.
The results of earlier studies by several authors suggest that speech and nonspeech auditory patterns are processed primarily in different places in the brain and perhaps by different modes. The question arises in studies of speech perception whether all phonetic elements or all features of phonetic elements are processed in the same way. The technique of dichotic presentation was used to examine this question.

The present study compared identifications of dichotically presented pairs of synthetic CV syllables and pairs of steady-state vowels. The results show a significant right-ear advantage for CV syllables but not for steady-state vowels. Evidence for analysis by feature in the perception of consonants is discussed.

5.
A dichotic listening experiment was conducted to determine whether vowel perception, like consonant perception, is based on phonetic feature extraction. Twenty normal right-handed subjects were given dichotic CV syllables contrasting in final vowels. It was found that, unlike consonant perception, the perception of dichotic vowels was not significantly lateralized, that the dichotic perception of vowels was not significantly enhanced by the number of phonetic features shared, and that the occurrence of double-blend errors was not greater than chance. However, there was strong evidence for the use of phonetic features at the level of response organization. It is suggested that the differences between vowel and consonant perception reflect the differential availability of the underlying acoustic information from auditory store, rather than differences in processing mechanisms.

6.
Identification of CV syllables was studied in a backward masking paradigm in order to examine two types of interactions observed between dichotically presented speech sounds: the feature-sharing effect and the lag effect. Pairs of syllables differed in the consonant, the vowel, and their relative times of onset. Interference between the two dichotic inputs was observed primarily for pairs which contrasted on voicing. Performance on pairs that shared voicing remained excellent under all three conditions. The results suggest that the interference underlying the lag effect and the feature-sharing effect for voicing occurs before phonetic analysis, at a stage where both auditory inputs interact.

7.
Two new experimental operations were used to distinguish between auditory and phonetic levels of processing in speech perception: the first based on reaction time data in speeded classification tasks with synthetic speech stimuli, and the second based on average evoked potentials recorded concurrently in the same tasks. Each of four experiments compared the processing of two different dimensions of the same synthetic consonant-vowel syllables. When a phonetic dimension was compared to an auditory dimension, different patterns of results were obtained in both the reaction time and evoked potential data. No such differences were obtained for isolated acoustic components of the phonetic dimension or for two purely auditory dimensions. Together with other recent evidence, the present results constitute additional converging operations on the distinction between auditory and phonetic processes in speech perception and on the idea that phonetic processing involves mechanisms that are lateralized in one cerebral hemisphere.

8.
Recent experiments using a variety of techniques have suggested that speech perception involves separate auditory and phonetic levels of processing. Two models of auditory and phonetic processing appear to be consistent with existing data: (a) a strict serial model, in which auditory information would be processed at one level, followed by the processing of phonetic information at a subsequent level; and (b) a parallel model, in which auditory and phonetic processing could proceed simultaneously. The present experiment attempted to distinguish empirically between these two models. Subjects identified either an auditory dimension (fundamental frequency) or a phonetic dimension (place of articulation of the consonant) of synthetic consonant-vowel syllables. When the two dimensions varied in a completely correlated manner, reaction times were significantly shorter than when either dimension varied alone. This “redundancy gain” could not be attributed to speed-accuracy trade-offs, selective serial processing, or differential transfer between conditions. These results allow rejection of a completely serial model, suggesting instead that at least some portion of auditory and phonetic processing can occur in parallel.
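As a toy illustration of the redundancy-gain comparison (invented reaction times, not data from the study), the gain is simply the drop in mean RT when the two dimensions vary redundantly rather than alone:

```python
# Toy illustration of a redundancy gain (all RT values invented):
# mean RT when one dimension varies alone minus mean RT when both
# dimensions vary in a completely correlated manner.
import statistics

rt_single_ms = [412, 398, 405, 420, 401]      # one dimension varies alone
rt_correlated_ms = [371, 365, 380, 368, 377]  # both vary, fully correlated

gain_ms = statistics.mean(rt_single_ms) - statistics.mean(rt_correlated_ms)
print(f"redundancy gain ≈ {gain_ms:.0f} ms")  # positive => faster when redundant
```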

9.
The McGurk effect, where an incongruent visual syllable influences identification of an auditory syllable, does not always occur, suggesting that perceivers sometimes fail to use relevant visual phonetic information. We tested whether another visual phonetic effect, which involves the influence of visual speaking rate on perceived voicing (Green & Miller, 1985), would occur in instances when the McGurk effect does not. In Experiment 1, we established this visual rate effect using auditory and visual stimuli matching in place of articulation, finding a shift in the voicing boundary along an auditory voice-onset-time continuum with fast versus slow visual speech tokens. In Experiment 2, we used auditory and visual stimuli differing in place of articulation and found a shift in the voicing boundary due to visual rate when the McGurk effect occurred and, more critically, when it did not. The latter finding indicates that phonetically relevant visual information is used in speech perception even when the McGurk effect does not occur, suggesting that the incidence of the McGurk effect underestimates the extent of audio-visual integration.
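The abstract does not spell out how a voicing boundary along a VOT continuum is located; a common convention is to fit a logistic function to identification data and take its 50% crossover as the boundary. Below is a sketch under that assumption, with invented response proportions:

```python
# Hypothetical sketch: locating a voicing boundary on a VOT continuum by
# fitting a logistic function to identification data and taking its 50%
# crossover. All response proportions below are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    """Proportion of 'voiceless' responses as a function of VOT (ms)."""
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

vot_ms = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
# Invented identification data for fast vs. slow visual speech tokens.
p_voiceless = {
    "fast visual rate": np.array([0.02, 0.10, 0.45, 0.80, 0.95, 1.00]),
    "slow visual rate": np.array([0.01, 0.05, 0.25, 0.60, 0.90, 0.99]),
}

for label, proportions in p_voiceless.items():
    (boundary, slope), _ = curve_fit(logistic, vot_ms, proportions,
                                     p0=[35.0, 0.2])
    print(f"{label}: boundary ≈ {boundary:.1f} ms VOT")
```

A shift of the fitted boundary toward longer VOTs for the slow tokens would correspond to the rate effect described above.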

10.
Right hemisphere EEG sensitivity to speech
Recent speech perception work with normals and aphasics suggests that the right hemisphere may be more adept than the left at making the voicing discrimination, and the reverse for place of articulation. We examined this right-hemisphere voicing effect with natural speech stimuli: stop consonants in pre-, mid-, and postvocalic contexts. Using a neuroelectric event-related potential paradigm, we found numerous effects, indicating bilateral components reflecting the voicing and place contrasts as well as unique right-hemisphere discrimination of both voicing and place of articulation.

11.
We report the case of a neonate tested three weeks after a neonatal left sylvian infarct. We studied her perception of speech and non-speech stimuli with high-density event-related potentials. The results show that she was able to discriminate not only a change of timbre in tones but also a vowel change, and even a place of articulation contrast in stop consonants. Moreover, a discrimination response to stop consonants was observed even when syllables were produced by different speakers. Her intact right hemisphere was thus able to extract relevant phonetic information in spite of irrelevant acoustic variation. These results suggest that both hemispheres contribute to phoneme perception during the first months of life and confirm our previous findings concerning bilateral responses in normal infants.

12.
Speech perception without hearing
In this study of visual phonetic speech perception without accompanying auditory speech stimuli, adults with normal hearing (NH; n = 96) and with severely to profoundly impaired hearing (IH; n = 72) identified consonant-vowel (CV) nonsense syllables and words in isolation and in sentences. The measures of phonetic perception were the proportion of phonemes correct and the proportion of transmitted feature information for CVs, the proportion of phonemes correct for words, and the proportion of phonemes correct and the amount of phoneme substitution entropy for sentences. The results demonstrated greater sensitivity to phonetic information in the IH group. Transmitted feature information was related to isolated word scores for the IH group, but not for the NH group. Phoneme errors in sentences were more systematic in the IH than in the NH group. Individual differences in phonetic perception for CVs were more highly associated with word and sentence performance for the IH than for the NH group. The results suggest that the necessity to perceive speech without hearing can be associated with enhanced visual phonetic perception in some individuals.
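"Transmitted feature information" is an information-theoretic score in the tradition of Miller and Nicely (1955): the mutual information between stimulus and response categories for one feature, often normalized by the stimulus entropy. The sketch below follows that convention with an invented voicing confusion matrix; it is not the study's analysis code.

```python
# Sketch of proportion of transmitted feature information (invented counts):
# mutual information I(stimulus; response) over one feature, normalized by
# the stimulus entropy, in the tradition of Miller & Nicely (1955).
import numpy as np

def transmitted_information(confusion):
    """Mutual information in bits from a stimulus-by-response count matrix."""
    joint = confusion / confusion.sum()
    p_stim = joint.sum(axis=1, keepdims=True)
    p_resp = joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (p_stim * p_resp))
    return np.nansum(terms)  # zero cells contribute nothing

def stimulus_entropy(confusion):
    """Entropy in bits of the stimulus distribution."""
    p = confusion.sum(axis=1) / confusion.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Invented voicing confusion matrix: rows = stimulus (voiced, voiceless),
# columns = response.
voicing = np.array([[90.0, 10.0],
                    [15.0, 85.0]])
proportion = transmitted_information(voicing) / stimulus_entropy(voicing)
print(f"proportion of transmitted voicing information ≈ {proportion:.2f}")
```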

13.
An experiment is reported that uses a same-different matching paradigm in which subjects indicate whether the consonants of a pair of consonant-diphthong syllables are the same or different. The question addressed concerns the operation of two hypothesized levels of processing in the perception of speech sounds. The auditory level is shown to hold stimulus information for a brief period of time and to be sensitive to allophonic variations within a stimulus. Moreover, matching at this level takes place on the identity of the syllables rather than of the separate phoneme segments. The phonemic level is impaired when the diphthong segments of the pair lead to a match that contradicts that of the consonants, even though only the consonants are relevant to the matching decision.

14.
Previous work has demonstrated that the graded internal structure of phonetic categories is sensitive to a variety of contextual factors. One such factor is place of articulation: The best exemplars of voiceless stop consonants along auditory bilabial and velar voice onset time (VOT) continua occur over different ranges of VOTs (Volaitis & Miller, 1992). In the present study, we exploited the McGurk effect to examine whether visual information for place of articulation also shifts the best-exemplar range for voiceless consonants, following Green and Kuhl's (1989) demonstration of effects of visual place of articulation on the location of voicing boundaries. In Experiment 1, we established that /p/ and /t/ have different best-exemplar ranges along auditory bilabial and alveolar VOT continua. We then found, in Experiment 2, a similar shift in the best-exemplar range for /t/ relative to that for /p/ when there was a change in visual place of articulation, with auditory place of articulation held constant. These findings indicate that the perceptual mechanisms that determine internal phonetic category structure are sensitive to visual, as well as to auditory, information.

15.
A series of experiments, using a selective adaptation procedure, investigated some of the properties of the linguistic feature detectors that mediate the perception of the voiced and voiceless stop consonants. The first experiment showed that these detectors are centrally rather than peripherally located, in that monotic presentation of the adapting stimulus and test stimuli to different ears resulted in large and reliable shifts in the locus of the phonetic boundary. The second experiment revealed that the detectors are part of the specialized speech processor, inasmuch as adaptation of a voicing detector (as measured by a shift in the phonetic boundary) occurred only when the voicing information was presented in a speech context. In the third experiment, the detector mediating perception of the voiced stops was shown to be more resistant to adaptation than the detector mediating perception of the voiceless stops.

16.
In this study, we attempted to determine whether the phonetic disintegration of speech in Broca's aphasia affects the spectral characteristics of speech sounds, as has been shown for their temporal characteristics. To this end, we investigated the production of place of articulation in Broca's aphasics. Acoustic analyses of the spectral characteristics of stop consonants were conducted. Results indicated that the static aspects of speech production were preserved, as Broca's aphasics seemed able to reach the articulatory configuration for the appropriate place of articulation. However, the dynamic aspects of speech production seemed to be impaired, as their productions reflected problems with the source characteristics of speech sounds and with the integration of articulatory movements in the vocal tract. Listener perceptions of the aphasics' productions were compared with acoustic analyses of these same productions. The two measures were related; that is, the spectral characteristics of the utterances provided salient cues for the perception of place of articulation. An analysis of the occurrence of errors along the dimensions of voicing and place showed that aphasics rarely produce utterances containing both voicing and place substitutions.

17.
This study examined the role of phonetic factors in the performance of good and poor beginning readers on a verbal short-term memory task. Good and poor readers in the second and third grades repeated four-item lists of consonant-vowel syllables in which each consonant shared zero, one, or two features with other consonants in the string. As in previous studies, the poor readers performed less accurately than the good readers. However, the nature of their errors was the same: Both groups tended to transpose initial consonants as a function of their phonetic similarity and adjacency. These findings suggest that poor readers are able to employ a phonetic coding strategy in short-term memory, as do good readers, but less skillfully.

18.
In previous work, 11‐month‐old infants were able to learn rules about the relation of the consonants in CVCV words from just four examples. The rules involved phonetic feature relations (same voicing or same place of articulation), and infants' learning was impeded when pairs of words allowed alternative possible generalizations (e.g. two words both contained the specific consonants p and t). Experiment 1 asked whether a small number of such spurious generalizations found in a randomly ordered list of 24 different words would also impede learning. It did – infants showed no sign of learning the rule. To ask whether it was the overall set of words or their order that prevented learning, Experiment 2 reordered the words to avoid local spurious generalizations. Infants showed robust learning. Infants thus appear to entertain spurious generalizations based on small, local subsets of stimuli. The results support a characterization of infants as incremental rather than batch learners.

19.
Speech sounds can be classified on the basis of their underlying articulators or on the basis of the acoustic characteristics resulting from particular articulatory positions. Research in speech perception suggests that distinctive features are based on both articulatory and acoustic information. In recent years, neuroelectric and neuromagnetic investigations have provided evidence for the brain's early sensitivity to distinctive features and their acoustic consequences, particularly for place of articulation distinctions. Here, we compare English consonants in a Mismatch Field design across two broad and distinct places of articulation, labial and coronal, and provide further evidence that early evoked auditory responses are sensitive to these features. We further add to the findings of asymmetric consonant processing, although we do not find support for coronal underspecification. Labial glides (Experiment 1) and fricatives (Experiment 2) elicited larger Mismatch responses than their coronal counterparts. Interestingly, their M100 dipoles differed along the anterior/posterior dimension of the auditory cortex, which has previously been found to spatially reflect place of articulation differences. Our results are discussed with respect to acoustic and articulatory bases of featural speech sound classifications and with respect to a model that maps distinctive phonetic features onto long-term representations of speech sounds.

20.
To offset shortcomings of existing demonstrations of right-ear superiority in the analysis of formant transitions, an experiment was performed on whispered speech. Two aspects of dichotic listening performance were examined in a single-report paradigm: the right-ear advantage (REA) for the perception of the voicing distinction and the feature sharing advantage (FSA) for both voicing and place features. A significant REA was obtained for the voicing distinction cued by first formant transition in the absence of a switch from aperiodic to periodic excitation. This, plus a greater incidence of voiced responses to right-ear stimuli, suggests that a distinction involving transitions can specifically augment the REA. The data also showed better identification of place and of voicing feature values when the competing dichotic speech stimuli shared these respective features (FSA) than when they did not. This FSA was restricted to the feature shared and hence not an effect of response uncertainty. The implications of these results for models of speech processing are discussed.
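Two summary scores recur in this dichotic literature. A common laterality index is (R - L) / (R + L) over correct reports, and the feature sharing advantage can be summarized as the percentage-point gain in feature identification for shared-feature pairs. Both formulas are standard conventions rather than details given in the abstract, and the counts below are invented.

```python
# Hypothetical sketch of two dichotic-listening summary scores;
# the conventions are standard in this literature, the counts invented.

def laterality_index(right_correct: int, left_correct: int) -> float:
    """(R - L) / (R + L): positive values indicate a right-ear advantage."""
    return (right_correct - left_correct) / (right_correct + left_correct)

def feature_sharing_advantage(shared_pct: float, unshared_pct: float) -> float:
    """FSA as a percentage-point difference in correct feature identification."""
    return shared_pct - unshared_pct

# Invented counts: correct voicing reports per ear, and percent-correct
# voicing identification for pairs sharing vs. not sharing voicing.
print(f"REA index ≈ {laterality_index(72, 58):+.2f}")
print(f"voicing FSA ≈ {feature_sharing_advantage(88.0, 74.0):+.1f} points")
```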
