20 similar documents were found (search time: 0 ms)
1.
Frequency characteristics of electromyographic traces from the tongue and lips were studied as a function of class of phonemic input. Previous research has established that there are amplitude increases in the lips while processing bilabial linguistic units such as [p] and in the tongue when processing lingual-alveolar units (like [t]). Preliminary results using variability measures suggest similar conclusions for frequency parameters.
2.
One aspect of the comprehension of speech is the assignment of a phonetic representation to the sounds being heard. However, if a person listens to a meaningless syllable that is continually repeated, over time he will hear the syllable undergo a variety of changes. These changes are very systematic in character and represent alterations in the phonetic coding assigned to an unchanging sound stimulus. When the restricted nature of the changes that occur is analyzed phonetically, these changes are found to involve a reorganization of the phones constituting the syllables and changes in a small number of distinctive features.
3.
Eleven subjects were asked to silently read slides of the letters “P” and “T,” and to view similarly presented meaningless control slides. One-eighth-second electromyographic excerpts were sampled from the baseline and response periods. The data were then transformed into the frequency domain for inferential analyses. The mean power spectral frequencies for the response period were significantly lower than those for the baseline in the overall analysis. There were, however, no significant changes from baseline as a function of kind of stimulus (T, P, or Control) or muscle activated (lips or tongue). It was concluded that the responding was generalized rather than unique to the processing of the specific stimuli studied. Frequency analysis of EMG measures of covert behavior holds some promise of yielding unique information not available through traditional analysis procedures, but more sensitive methods than those used here would be required to demonstrate this.
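The abstract does not spell out how the mean power spectral frequency was computed; the following Python sketch shows one conventional definition, the power-weighted mean frequency of a Welch spectrum, applied to a one-eighth-second excerpt. The sampling rate and the synthetic signal are assumptions for illustration, not values from the study.

```python
import numpy as np
from scipy.signal import welch

def mean_power_frequency(emg, fs):
    """Mean power spectral frequency: the power-weighted
    average frequency of the signal's spectrum."""
    freqs, psd = welch(emg, fs=fs, nperseg=min(256, len(emg)))
    return np.sum(freqs * psd) / np.sum(psd)

# Hypothetical 1/8-s EMG excerpt sampled at 2 kHz (the sampling
# rate is assumed; the study reports only the excerpt length).
fs = 2000
excerpt = np.random.randn(fs // 8)  # placeholder for a real EMG trace
print(mean_power_frequency(excerpt, fs))
```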
4.
The present investigation examined the effects of covert modeling in developing assertive behavior and the effects of verbal coding of the modeling stimuli on treatment outcome. In a 2 × 2 design, modeling (imagining an assertive model versus imagining scenes without an assertive model) and summary coding (developing verbal codes of the modeled material versus not developing codes) were combined. The results indicated that modeling and coding enhanced behavior change across self-report inventories and a behavioral role-playing test. Superior performance on these measures was achieved by the modeling group that both imagined an assertive model and engaged in summary coding. These effects transferred to novel role-playing situations at post-treatment and were maintained at a 6-month follow-up assessment. The results suggest that coding of treatment stimuli affects acquisition and maintenance of the modeled behaviors in treatment in a way that resembles findings from laboratory research on modeling.
5.
Some reaction time experiments are reported on the relation between the perception and production of phonetic features in speech. Subjects had to produce spoken consonant-vowel syllables rapidly in response to other consonant-vowel stimulus syllables. The stimulus syllables were presented auditorily in one condition and visually in another. Reaction time was measured as a function of the phonetic features shared by the consonants of the stimulus and response syllables. Responses to auditory stimulus syllables were faster when the response syllables started with consonants that had the same voicing feature as those of the stimulus syllables. A shared place-of-articulation feature did not affect the speed of responses to auditory stimulus syllables, even though the place feature was highly salient. For visual stimulus syllables, performance was independent of whether the consonants of the response syllables had the same voicing, same place of articulation, or no shared features. This pattern of results occurred in cases where the syllables contained stop consonants and where they contained fricatives. It held for natural auditory stimuli as well as artificially synthesized ones. The overall data reveal a close relation between the perception and production of voicing features in speech. It does not appear that such a relation exists between perceiving and producing places of articulation. The experiments are relevant to the motor theory of speech perception and to other models of perceptual-motor interactions.
6.
It is often hypothesized that speech production units are less distinctive in young children and that generalized movement primitives, or templates, serve as a base on which distinctive, mature templates are later elaborated. This hypothesis was examined by analyzing the shape and stability of single close-open speech movements of the lower lip recorded in 4-year-old, 7-year-old, and adult speakers during production of utterances that varied in only a single phoneme. To assess the presence of a generalized template, lower lip movement sequences were time and amplitude normalized, and a pattern recognition procedure was implemented. The findings indicate that children's speech movements had already converged on phonetically distinctive patterns by 4 years of age. In contrast, an index of spatiotemporal stability demonstrated that the stability of the underlying patterning of the movement sequence improves with maturation.
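The stability index described here resembles the spatiotemporal index used in this literature. As a hedged illustration only, the sketch below linearly time-normalizes each movement record, amplitude-normalizes it by z-scoring, and sums the across-trial standard deviations at evenly spaced points; the number of resampling points and SD samples are assumptions, not parameters reported in the abstract.

```python
import numpy as np

def spatiotemporal_index(trials, n_points=1000, n_sd=50):
    """Time- and amplitude-normalize a set of movement records,
    then sum the across-trial SDs at evenly spaced time points."""
    norm = []
    for trial in trials:
        # Linear time normalization to a common length.
        t_old = np.linspace(0, 1, len(trial))
        t_new = np.linspace(0, 1, n_points)
        resampled = np.interp(t_new, t_old, trial)
        # Amplitude normalization (z-score within trial).
        norm.append((resampled - resampled.mean()) / resampled.std())
    norm = np.array(norm)
    # SDs across trials, sampled at n_sd evenly spaced points.
    idx = np.linspace(0, n_points - 1, n_sd).astype(int)
    return norm[:, idx].std(axis=0).sum()

# Usage with hypothetical lower-lip displacement records:
# sti = spatiotemporal_index([rec1, rec2, rec3])
```

Lower values of such an index indicate more stable, more converged movement patterning across repetitions.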
7.
Identification of CV syllables was studied in a backward masking paradigm in order to examine two types of interactions observed between dichotically presented speech sounds: the feature sharing effect and the lag effect. Pairs of syllables differed in the consonant, the vowel, and their relative times of onset. Interference between the two dichotic inputs was observed primarily for pairs which contrasted on voicing. Performance on pairs that shared voicing remained excellent under all three conditions. The results suggest that the interference underlying the lag effect and the feature sharing effect for voicing occurs before phonetic analysis, at a stage where both auditory inputs interact.
8.
Infants, 2 and 3 months of age, were found to discriminate stimuli along the acoustic continuum underlying the phonetic contrast [r] vs. [l] in a nearly categorical manner. For an approximately equal acoustic difference, discrimination, as measured by recovery from satiation or familiarization, was reliably better when the two stimuli were exemplars of different phonetic categories than when they were acoustic variations of the same phonetic category. Discrimination of the same acoustic information presented in a nonspeech mode was found to be continuous, that is, determined by acoustic rather than phonetic characteristics of the stimuli. The findings were discussed with reference to the nature of the mechanisms that may determine the processing of complex acoustic signals in young infants and with reference to the role of linguistic experience in the development of speech perception at the phonetic level.
10.
Recent experiments using a variety of techniques have suggested that speech perception involves separate auditory and phonetic levels of processing. Two models of auditory and phonetic processing appear to be consistent with existing data: (a) a strict serial model in which auditory information would be processed at one level, followed by the processing of phonetic information at a subsequent level; and (b) a parallel model in which auditory and phonetic processing could proceed simultaneously. The present experiment attempted to distinguish empirically between these two models. Ss identified either an auditory dimension (fundamental frequency) or a phonetic dimension (place of articulation of the consonant) of synthetic consonant-vowel syllables. When the two dimensions varied in a completely correlated manner, reaction times were significantly shorter than when either dimension varied alone. This “redundancy gain” could not be attributed to speed-accuracy trades, selective serial processing, or differential transfer between conditions. These results allow rejection of a completely serial model, suggesting instead that at least some portion of auditory and phonetic processing can occur in parallel.
11.
A complete understanding of visual phonetic perception (lipreading) requires linking perceptual effects to physical stimulus properties. However, the talking face is a highly complex stimulus, affording innumerable possible physical measurements. In the search for isomorphism between stimulus properties and phonetic effects, second-order isomorphism was examined between the perceptual similarities of video-recorded, perceptually identified speech syllables and the physical similarities among the stimuli. Four talkers produced the stimulus syllables, comprising 23 initial consonants followed by one of three vowels. Six normal-hearing participants identified the syllables in a visual-only condition. Perceptual stimulus dissimilarity was quantified using the Euclidean distances between stimuli in perceptual spaces obtained via multidimensional scaling. Physical stimulus dissimilarity was quantified using face points recorded in three dimensions by an optical motion capture system. The variance accounted for in the relationship between the perceptual and the physical dissimilarities was evaluated using both the raw dissimilarities and the weighted dissimilarities. With weighting and the full set of 3-D optical data, the variance accounted for ranged between 46% and 66% across talkers and between 49% and 64% across vowels. The robust second-order relationship between the sparse 3-D point representation of visible speech and the perceptual effects suggests that the 3-D point representation is a viable basis for controlled studies of first-order relationships between visual phonetic perception and physical stimulus attributes.
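A second-order analysis of this kind amounts to correlating two stimulus-by-stimulus dissimilarity matrices. The abstract does not give the exact computation, so the following is a minimal Python sketch under one common assumption: a Pearson correlation over the matrices' upper triangles, with r² reported as variance accounted for.

```python
import numpy as np

def variance_accounted_for(perceptual_dist, physical_dist):
    """Second-order isomorphism: correlate the two dissimilarity
    matrices over their upper triangles and report r squared."""
    iu = np.triu_indices_from(perceptual_dist, k=1)
    r = np.corrcoef(perceptual_dist[iu], physical_dist[iu])[0, 1]
    return r ** 2

# perceptual_dist: Euclidean distances between stimuli in an MDS
# solution; physical_dist: distances between 3-D face-point
# recordings. Both are hypothetical n_stimuli x n_stimuli matrices.
```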
12.
Across the first year of life, infants show decreased sensitivity to phonetic differences not used in the native language [Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: evidence for perceptual reorganization during the first year of life. Infant Behaviour and Development, 7, 49-63]. In an artificial language learning manipulation, Maye, Werker, and Gerken [Maye, J., Werker, J. F., & Gerken, L. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82(3), B101-B111] found that infants change their speech sound categories as a function of the distributional properties of the input. For such a distributional learning mechanism to be functional, however, it is essential that the input speech contain distributional cues to support such perceptual learning. To test this, we recorded Japanese and English mothers teaching words to their infants. Acoustic analyses revealed language-specific differences in the distributions of the cues used by mothers (or cues present in the input) to distinguish the vowels. The robust availability of these cues in maternal speech adds support to the hypothesis that distributional learning is an important mechanism whereby infants establish native language phonetic categories.
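One simple way to quantify how much distributional support a given acoustic cue provides is to express the separation of the category means in pooled-standard-deviation units. The Python sketch below is only an illustration of that idea; the cue, values, and units are invented for the example, not taken from the study.

```python
import numpy as np

def cue_separability(cue_cat_a, cue_cat_b):
    """Separation of two vowel categories along one acoustic cue
    (e.g., duration or a formant), in pooled-SD units: a simple
    proxy for the distributional support the input provides."""
    pooled_sd = np.sqrt((np.var(cue_cat_a) + np.var(cue_cat_b)) / 2)
    return abs(np.mean(cue_cat_a) - np.mean(cue_cat_b)) / pooled_sd

# Hypothetical vowel-duration measurements (ms) from maternal speech:
short_vowels = np.array([80, 95, 88, 102, 91])
long_vowels = np.array([165, 180, 150, 172, 158])
print(cue_separability(short_vowels, long_vowels))
```

Larger values mean the two categories overlap less along that cue, so a distributional learner would find the contrast easier to acquire from the input.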
13.
Four experiments investigated a new phenomenon: the existence of a very large switching time effect that occurs when rapidly alternating between overt and covert (mouthed) speech. This is referred to as an intensity switching effect, and the time taken for each switch is, in itself, long enough for a spoken or mouthed character (letter or digit) to be produced. In Experiments 1 and 2, the intensity switching effect was shown to be different from the switching that occurs between categories of materials (letters and digits) because it is both much larger and much more resistant to practice effects. The intensity switching effect was also shown to be distinct from a memory load effect, since it holds even for perceptually available lists. In Experiments 3 and 4, the issue of a peripheral vs. a central origin of intensity switching was addressed. Evidence was found for a central origin. In addition, two models of response intensity representation were contrasted: a symbolic or digital model, with intensity altered by parameter substitution, and an analog model, with intensity represented by a moving pointer on an intensity continuum. The results supported the symbolic model. It is concluded that the intensity switching effect is a measure of control processes at work in altering the intensity parameters of the vocal response system.
14.
Listeners hearing an ambiguous phoneme flexibly adjust their phonetic categories in accordance with information telling what the phoneme should be (i.e., recalibration). Here the authors compared recalibration induced by lipread versus lexical information. Listeners were exposed to an ambiguous phoneme halfway between /t/ and /p/ dubbed onto a face articulating /t/ or /p/ or embedded in a Dutch word ending in /t/ (e.g., groot [big]) or /p/ (knoop [button]). In a posttest, participants then categorized auditory tokens as /t/ or /p/. Lipread and lexical aftereffects were comparable in size (Experiment 1), dissipated about equally fast (Experiment 2), were enhanced by exposure to a contrast phoneme (Experiment 3), and were not affected by a 3-min silence interval (Experiment 4). Exposing participants to 1 instead of both phoneme categories did not make the phenomenon more robust (Experiment 5). Despite the difference in nature (bottom-up vs. top-down information), lipread and lexical information thus appear to serve a similar role in phonetic adjustments.
15.
The analysis of syllable and pause durations in speech production can provide information about the properties of a speaker's grammatical code. The present study was conducted to reveal aspects of this code by analyzing syllable and pause durations in structurally ambiguous sentences. In Experiments 1–6, acoustical measurements were made for a key syllabic segment and a following pause for 10 or more speakers. Each of six structural ambiguities, previously unrelated, involved a grammatical relation between the constituent following the pause and one of two possible constituents preceding the pause. The results showed lengthening of the syllabic segments and pauses for the reading in which the constituent following the pause was hierarchically dominated by the higher of the two possible preceding constituents in a syntactic representation. The effects were also observed, to a lesser extent, when the structurally ambiguous sentences were embedded in disambiguating paragraph contexts (Experiment 7). The results show that a single hierarchical principle can provide a unified account of speech timing effects for a number of otherwise unrelated ambiguities. This principle is superior to a linear alternative and provides specific inferences about hierarchical relations among syntactic constituents in speech coding.
16.
Previous research has shown that infants match vowel sounds to facial displays of vowel articulation [Kuhl, P. K., & Meltzoff, A. N. (1982). The bimodal perception of speech in infancy. Science, 218, 1138–1141; Patterson, M. L., & Werker, J. F. (1999). Matching phonetic information in lips and voice is robust in 4.5-month-old infants. Infant Behaviour & Development, 22, 237–247], and integrate seen and heard speech sounds [Rosenblum, L. D., Schmuckler, M. A., & Johnson, J. A. (1997). The McGurk effect in infants. Perception & Psychophysics, 59, 347–357; Burnham, D., & Dodd, B. (2004). Auditory-visual speech integration by prelinguistic infants: Perception of an emergent consonant in the McGurk effect. Developmental Psychobiology, 45, 204–220]. However, the role of visual speech in language development remains unknown. Our aim was to determine whether seen articulations enhance phoneme discrimination, thereby playing a role in phonetic category learning. We exposed 6-month-old infants to speech sounds from a restricted range of a continuum between /ba/ and /da/, following a unimodal frequency distribution. Synchronously with these speech sounds, one group of infants (the two-category group) saw a visual articulation of a canonical /ba/ or /da/, with the two alternative visual articulations, /ba/ and /da/, being presented according to whether the auditory token was on the /ba/ or /da/ side of the midpoint of the continuum. Infants in a second (one-category) group were presented with the same unimodal distribution of speech sounds, but every token for any particular infant was always paired with the same syllable, either a visual /ba/ or a visual /da/. A stimulus-alternation preference procedure following the exposure revealed that infants in the former, and not in the latter, group discriminated the /ba/–/da/ contrast. These results not only show that visual information about speech articulation enhances phoneme discrimination, but also that it may contribute to the learning of phoneme boundaries in infancy.
18.
Three modeling therapy formats (overt modeling with a standard hierarchy of situations, covert modeling with the standard hierarchy, and covert modeling with a self-tailored hierarchy) were compared to assess their relative efficacy in developing assertive skills. Half the subjects in each treatment condition received generalization training; the other half did not. Significant within-group improvement was indicated on four self-report measures. Overall, the results suggest that covert modeling was as effective as overt modeling or covert modeling plus self-tailoring for instating assertion among nonassertive college women.
19.
The distinction between auditory and phonetic processes in speech perception was used in the design and analysis of an experiment. Earlier studies had shown that dichotically presented stop consonants are more often identified correctly when they share place of production (e.g., /ba-pa/) or voicing (e.g., /ba-da/) than when neither feature is shared (e.g., /ba-ta/). The present experiment was intended to determine whether the effect has an auditory or a phonetic basis. Increments in performance due to feature-sharing were compared for synthetic stop-vowel syllables in which formant transitions were the sole cues to place of production under two experimental conditions: (1) when the vowel was the same for both syllables in a dichotic pair, as in our earlier studies, and (2) when the vowels differed. Since the increment in performance due to sharing place was not diminished when vowels differed (i.e., when formant transitions did not coincide), it was concluded that the effect has a phonetic rather than an auditory basis. Right ear advantages were also measured and were found to interact with both place of production and vowel conditions. Taken together, the two sets of results suggest that inhibition of the ipsilateral signal in the perception of dichotically presented speech occurs during phonetic analysis.
20.
Two new experimental operations were used to distinguish between auditory and phonetic levels of processing in speech perception: the first based on reaction time data in speeded classification tasks with synthetic speech stimuli, and the second based on average evoked potentials recorded concurrently in the same tasks. Each of four experiments compared the processing of two different dimensions of the same synthetic consonant-vowel syllables. When a phonetic dimension was compared to an auditory dimension, different patterns of results were obtained in both the reaction time and evoked potential data. No such differences were obtained for isolated acoustic components of the phonetic dimension or for two purely auditory dimensions. Together with other recent evidence, the present results constitute additional converging operations on the distinction between auditory and phonetic processes in speech perception and on the idea that phonetic processing involves mechanisms that are lateralized in one cerebral hemisphere.