Similar Literature
20 similar documents found (search time: 1 ms).
1.
Identification of CV syllables was studied in a backward masking paradigm in order to examine two types of interaction observed between dichotically presented speech sounds: the feature-sharing effect and the lag effect. Pairs of syllables differed in the consonant, the vowel, and their relative times of onset. Interference between the two dichotic inputs was observed primarily for pairs that contrasted on voicing. Performance on pairs that shared voicing remained excellent under all three conditions. The results suggest that the interference underlying the lag effect and the feature-sharing effect for voicing occurs before phonetic analysis, at a stage where the two auditory inputs interact.

2.
In previous research, a discriminative relationship was established between patterns of covert speech behavior and the phonemic system during the processing of continuous linguistic material. The goal of the present research was to be more analytic and to pinpoint covert neuromuscular speech patterns when one processes specific instances of phonemes. Electromyographic (EMG) recording indicated that the lips are significantly active when visually processing the letter "P" (an instance of bilabial material), but not when processing the letter "T" or a nonlinguistic control (C) stimulus. Similarly, the tongue is significantly active when processing the letter "T" (an instance of lingual-alveolar material), but not when processing "P" or "C." It is concluded that the speech musculature covertly responds systematically as a function of the class of phoneme being processed. These results accord with our model that semantic processing ("understanding") occurs when the speech (and other) musculature interacts with linguistic regions of the brain. In these interactions, phonetic coding is generated and transmitted through neuromuscular circuits that have cybernetic characteristics.

3.
4.
Attention, Perception, & Psychophysics - Speech perception is heavily influenced by surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, this...

5.
Some reaction time experiments are reported on the relation between the perception and production of phonetic features in speech. Subjects had to produce spoken consonant-vowel syllables rapidly in response to other consonant-vowel stimulus syllables. The stimulus syllables were presented auditorily in one condition and visually in another. Reaction time was measured as a function of the phonetic features shared by the consonants of the stimulus and response syllables. Responses to auditory stimulus syllables were faster when the response syllables started with consonants that had the same voicing feature as those of the stimulus syllables. A shared place-of-articulation feature did not affect the speed of responses to auditory stimulus syllables, even though the place feature was highly salient. For visual stimulus syllables, performance was independent of whether the consonants of the response syllables had the same voicing, same place of articulation, or no shared features. This pattern of results occurred in cases where the syllables contained stop consonants and where they contained fricatives. It held for natural auditory stimuli as well as artificially synthesized ones. The overall data reveal a close relation between the perception and production of voicing features in speech. It does not appear that such a relation exists between perceiving and producing places of articulation. The experiments are relevant to the motor theory of speech perception and to other models of perceptual-motor interactions.

6.

Infants, 2 and 3 months of age, were found to discriminate stimuli along the acoustic continuum underlying the phonetic contrast [r] vs. [l] in a nearly categorical manner. For an approximately equal acoustic difference, discrimination, as measured by recovery from satiation or familiarization, was reliably better when the two stimuli were exemplars of different phonetic categories than when they were acoustic variations of the same phonetic category. Discrimination of the same acoustic information presented in a nonspeech mode was found to be continuous, that is, determined by acoustic rather than phonetic characteristics of the stimuli. The findings were discussed with reference to the nature of the mechanisms that may determine the processing of complex acoustic signals in young infants and with reference to the role of linguistic experience in the development of speech perception at the phonetic level.


7.
The goal of this study was to evaluate movement-based principles for understanding early speech output patterns. Consonant repetition patterns within children's actual productions of word forms were analyzed using spontaneous speech data from 10 typically developing American-English-learning children between 12 and 36 months of age. Place of articulation, word-level patterns, and developmental trends in CVC and CVCV repeated word forms were evaluated. Labial and coronal place repetitions dominated. Regressive repetition (e.g., [gag] for "dog") occurred frequently in CVC but not in CVCV word forms. Consonant repetition decreased over time. However, in all time periods the children's consonant repetitions were drawn from sound types reported as being within young children's production system capabilities. Findings suggest that a movement-based approach can provide a framework for comprehensively characterizing consonant place repetition patterns in early speech development.

8.
Pregnant women recited a particular speech passage aloud each day during their last 6 weeks of pregnancy. Their newborns were then tested with an operant-choice procedure to determine whether the sounds of the recited passage were more reinforcing than the sounds of a novel passage. The previously recited passage was more reinforcing, whereas the reinforcing value of the two passages did not differ for a matched group of control subjects. Thus, third-trimester fetuses experience their mothers' speech sounds, and this prenatal auditory experience can influence postnatal auditory preferences.

9.
If one listens to a meaningless syllable that is repeated over and over, he will hear it undergo a variety of changes. These changes are extremely systematic in character and can be described phonetically in terms of reorganizations of the phones constituting the syllable and changes in a restricted set of distinctive features. When a new syllable is presented to a subject after he has listened to a particular syllable that was repeated, he will misreport the new (test) syllable. His misperception of the test syllable is related to the changes occurring in the representation of the original repeated syllable just prior to the presentation of the test syllable.

10.
11.
During much of the past century, it was widely believed that phonemes—the human speech sounds that constitute words—have no inherent semantic meaning, and that the relationship between a combination of phonemes (a word) and its referent is simply arbitrary. Although recent work has challenged this picture by revealing psychological associations between certain phonemes and particular semantic contents, the precise mechanisms underlying these associations have not been fully elucidated. Here we provide novel evidence that certain phonemes have an inherent, non-arbitrary emotional quality. Moreover, we show that the perceived emotional valence of certain phoneme combinations depends on a specific acoustic feature—namely, the dynamic shift within the phonemes' first two frequency components. These data suggest a phoneme-relevant acoustic property influencing the communication of emotion in humans, and provide further evidence against previously held assumptions regarding the structure of human language. This finding has potential applications for a variety of social, educational, clinical, and marketing contexts.

12.
Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such "quantized" views by employing measures of recall precision with an analogue response scale. WM for speech sounds might rely on both continuous and categorical storage mechanisms. Using a novel speech matching paradigm, we measured WM recall precision for phonemes. Vowel qualities were sampled from a formant space continuum. A probe vowel had to be adjusted to match the vowel quality of a target on a continuous, analogue response scale. Crucially, this provided an index of the variability of a memory representation around its true value and thus allowed us to estimate how memories were distorted from the original sounds. Memory load affected the quality of speech sound recall in two ways. First, there was a gradual decline in recall precision with increasing number of items, consistent with the view that WM representations of speech sounds become noisier with an increase in the number of items held in memory, just as for vision. Based on multidimensional scaling (MDS), the level of noise appeared to be reflected in distortions of the formant space. Second, as memory load increased, there was evidence of greater clustering of participants' responses around particular vowels. A mixture model captured both continuous and categorical responses, demonstrating a shift from continuous to categorical memory with increasing WM load. This suggests that direct acoustic storage can be used for single items, but when more items must be stored, categorical representations must be used.
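A toy sketch of the continuous-plus-categorical mixture idea described in this abstract (the function, parameterization, and EM scheme are illustrative assumptions, not the authors' actual analysis): continuous recall is modeled as Gaussian error around the target vowel's position in formant space, categorical recall as Gaussian error around the nearest prototype vowel, and the mixing weight estimates how often responses are category-based.

    import numpy as np

    def fit_mixture(err_target, err_prototype, n_iter=200):
        """err_target: per-trial response error relative to the target vowel
        (continuous component); err_prototype: error relative to the nearest
        prototype vowel (categorical component). Returns the weight of the
        categorical component and both noise SDs."""
        x_t, x_p = np.asarray(err_target), np.asarray(err_prototype)
        w, s_t, s_p = 0.5, 1.0, 1.0
        gauss = lambda x, s: np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))
        for _ in range(n_iter):
            # E-step: responsibility of the categorical component per trial
            r = w * gauss(x_p, s_p) / (w * gauss(x_p, s_p) + (1 - w) * gauss(x_t, s_t))
            # M-step: update mixing weight and noise SDs
            w = r.mean()
            s_p = np.sqrt((r * x_p ** 2).sum() / r.sum())
            s_t = np.sqrt(((1 - r) * x_t ** 2).sum() / (1 - r).sum())
        return w, s_t, s_p

On this reading of the abstract, increasing memory load should both raise the fitted categorical weight w and inflate s_t, the continuous noise term.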

13.
It is often hypothesized that speech production units are less distinctive in young children and that generalized movement primitives, or templates, serve as a base on which distinctive, mature templates are later elaborated. This hypothesis was examined by analyzing the shape and stability of single close-open speech movements of the lower lip recorded in 4-year-old, 7-year-old, and adult speakers during production of utterances that varied in only a single phoneme. To assess the presence of a generalized template, lower lip movement sequences were time- and amplitude-normalized, and a pattern recognition procedure was implemented. The findings indicate that children's speech movements had already converged on phonetically distinctive patterns by 4 years of age. In contrast, an index of spatiotemporal stability demonstrated that the stability of the underlying patterning of the movement sequence improves with maturation.
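The stability analysis described in this abstract is commonly operationalized as a spatiotemporal index: each repetition of the movement is amplitude-normalized and linearly time-normalized, and the standard deviations across repetitions are summed at fixed points in normalized time. A minimal sketch under that assumption (a 1-D lower-lip displacement trace per repetition; details are illustrative):

    import numpy as np

    def spatiotemporal_index(trials, n_points=50):
        """trials: list of 1-D lower-lip displacement arrays, one per
        repetition of the utterance. Lower values indicate more stable
        underlying patterning across repetitions."""
        resampled = []
        for x in trials:
            x = np.asarray(x, dtype=float)
            z = (x - x.mean()) / x.std()          # amplitude-normalize
            t_old = np.linspace(0, 1, len(z))
            t_new = np.linspace(0, 1, n_points)   # time-normalize by resampling
            resampled.append(np.interp(t_new, t_old, z))
        # sum of across-repetition SDs at each normalized time point
        return np.std(np.vstack(resampled), axis=0).sum()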

14.
Speech perception is an ecologically important example of the highly context-dependent nature of perception; adjacent speech, and even nonspeech, sounds influence how listeners categorize speech. Some theories emphasize linguistic or articulation-based processes in speech-elicited context effects and peripheral (cochlear) auditory perceptual interactions in non-speech-elicited context effects. The present studies challenge this division. Results of three experiments indicate that acoustic histories composed of sine-wave tones drawn from spectral distributions with different mean frequencies robustly affect speech categorization. These context effects were observed even when the acoustic context temporally adjacent to the speech stimulus was held constant and when more than a second of silence or multiple intervening sounds separated the nonlinguistic acoustic context and speech targets. These experiments indicate that speech categorization is sensitive to statistical distributions of spectral information, even if the distributions are composed of nonlinguistic elements. Acoustic context need be neither linguistic nor local to influence speech perception.
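A sketch of how such a nonlinguistic acoustic history might be generated, assuming (hypothetically) contexts built from brief pure tones whose frequencies are drawn from normal distributions differing only in mean; all durations and parameter values below are illustrative, not the stimulus specifications from the study:

    import numpy as np

    def acoustic_history(mean_hz, sd_hz=100.0, n_tones=20,
                         tone_dur=0.05, sr=22050):
        """Concatenate brief ramped sine tones with frequencies sampled from
        a normal distribution; changing mean_hz shifts the long-term
        spectral mean of the context."""
        rng = np.random.default_rng()
        t = np.arange(int(tone_dur * sr)) / sr
        ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.005)  # 5-ms on/off ramps
        freqs = rng.normal(mean_hz, sd_hz, n_tones)
        return np.concatenate([ramp * np.sin(2 * np.pi * f * t) for f in freqs])

    low_context = acoustic_history(1000.0)   # lower spectral mean
    high_context = acoustic_history(2000.0)  # higher spectral mean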

15.
Children affected by dyslexia exhibit a deficit in the categorical perception of speech sounds, characterized both by poorer discrimination of between-category differences and by better discrimination of within-category differences, compared to normal readers. These categorical perception anomalies might be at the origin of dyslexia, by hampering the establishment of grapheme-phoneme correspondences, but they might also be the consequence of poor reading skills, as literacy probably contributes to stabilizing phonological categories. The aim of the present study was to investigate this issue by comparing the categorical perception performance of illiterate and literate people. Identification and discrimination responses were collected for a /ba/-/da/ synthetic place-of-articulation continuum, and between-group differences in both categorical perception and the precision of the categorical boundary were examined. The results showed that illiterate and literate people did not differ in categorical perception, suggesting that the categorical perception anomalies displayed by dyslexics are indeed a cause rather than a consequence of their reading problems. However, illiterate people displayed a less precise categorical boundary and a stronger lexical bias, both of which are also associated with dyslexia and might therefore be a specific consequence of written-language deprivation or impairment.
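For identification/discrimination designs like this one, categorical perception is classically quantified by predicting discrimination from labeling probabilities (the Haskins model): for an ABX trial in which stimuli A and B are labeled /ba/ with probabilities pA and pB, predicted accuracy is 0.5 + 0.5*(pA - pB)^2, near chance within a category and peaking at the boundary. A minimal sketch (the numbers are invented for illustration):

    def predicted_abx(p_a, p_b):
        """Haskins-model prediction of ABX discrimination accuracy from the
        probability of labeling stimuli A and B as, e.g., /ba/."""
        return 0.5 + 0.5 * (p_a - p_b) ** 2

    predicted_abx(0.95, 0.85)  # within-category pair: ~0.505, near chance
    predicted_abx(0.90, 0.10)  # boundary-straddling pair: 0.82

Better-than-predicted within-category discrimination is the kind of deviation that the dyslexia findings described above refer to.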

16.
The aim of this study is to investigate whether speech sounds can be perceived only as instances of phonetic categories, as the widely accepted theory of categorical perception of speech states, or whether physical differences between speech sounds lead to perceptual differences regardless of their phonetic categorization. Subjects listened to pairs of synthetically generated speech sounds corresponding to realizations of the syllables "ba" and "pa" in natural German, and were instructed to decide as quickly as possible whether they perceived the members of each pair as belonging to the same or to different phonetic categories. For "same" responses, reaction times become longer as the physical distance between the speech sounds increases; for "different" responses, reaction times become shorter with growing physical distance between the stimuli. The results show that subjects can judge speech sounds on the basis of perceptual continua, which is inconsistent with the theory of categorical perception. A mathematical model is presented that attempts to explain the results by postulating two interacting stages of processing, a psychoacoustical and a phonetic one. The model is not entirely confirmed by the data, but it seems to deserve further consideration.

17.
Listeners hearing an ambiguous phoneme flexibly adjust their phonetic categories in accordance with information telling what the phoneme should be (i.e., recalibration). Here the authors compared recalibration induced by lipread versus lexical information. Listeners were exposed to an ambiguous phoneme halfway between /t/ and /p/ dubbed onto a face articulating /t/ or /p/ or embedded in a Dutch word ending in /t/ (e.g., groot [big]) or /p/ (knoop [button]). In a posttest, participants then categorized auditory tokens as /t/ or /p/. Lipread and lexical aftereffects were comparable in size (Experiment 1), dissipated about equally fast (Experiment 2), were enhanced by exposure to a contrast phoneme (Experiment 3), and were not affected by a 3-min silence interval (Experiment 4). Exposing participants to 1 instead of both phoneme categories did not make the phenomenon more robust (Experiment 5). Despite the difference in nature (bottom-up vs. top-down information), lipread and lexical information thus appear to serve a similar role in phonetic adjustments.

18.
Recent experiments using a variety of techniques have suggested that speech perception involves separate auditory and phonetic levels of processing. Two models of auditory and phonetic processing appear to be consistent with existing data: (a) a strict serial model, in which auditory information would be processed at one level, followed by the processing of phonetic information at a subsequent level; and (b) a parallel model, in which auditory and phonetic processing could proceed simultaneously. The present experiment attempted to distinguish empirically between these two models. Subjects identified either an auditory dimension (fundamental frequency) or a phonetic dimension (place of articulation of the consonant) of synthetic consonant-vowel syllables. When the two dimensions varied in a completely correlated manner, reaction times were significantly shorter than when either dimension varied alone. This "redundancy gain" could not be attributed to speed-accuracy trade-offs, selective serial processing, or differential transfer between conditions. These results allow rejection of a completely serial model, suggesting instead that at least some portion of auditory and phonetic processing occurs in parallel.
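The redundancy gain described here is what a parallel race account would produce: with correlated dimensions, a response can be triggered by whichever analyzer finishes first, so mean reaction time for redundant stimuli falls below either single-dimension mean, whereas a strict serial model predicts no such benefit. A toy simulation under assumed lognormal finishing-time distributions (all parameters invented for illustration; this is one simple parallel account, not the authors' model):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    rt_auditory = rng.lognormal(mean=6.0, sigma=0.3, size=n)  # ms, ~420 mean
    rt_phonetic = rng.lognormal(mean=6.1, sigma=0.3, size=n)  # ms, ~465 mean

    print(rt_auditory.mean())                           # auditory dimension alone
    print(rt_phonetic.mean())                           # phonetic dimension alone
    print(np.minimum(rt_auditory, rt_phonetic).mean())  # redundant: fastest of the two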

19.
The analysis of syllable and pause durations in speech production can provide information about the properties of a speaker's grammatical code. The present study was conducted to reveal aspects of this code by analyzing syllable and pause durations in structurally ambiguous sentences. In Experiments 1–6, acoustical measurements were made for a key syllabic segment and a following pause for 10 or more speakers. Each of six structural ambiguities, previously unrelated, involved a grammatical relation between the constituent following the pause and one of two possible constituents preceding the pause. The results showed lengthening of the syllabic segments and pauses for the reading in which the constituent following the pause was hierarchically dominated by the higher of the two possible preceding constituents in a syntactic representation. The effects were also observed, to a lesser extent, when the structurally ambiguous sentences were embedded in disambiguating paragraph contexts (Experiment 7). The results show that a single hierarchical principle can provide a unified account of speech timing effects for a number of otherwise unrelated ambiguities. This principle is superior to a linear alternative and provides specific inferences about hierarchical relations among syntactic constituents in speech coding.

20.
The stimulus suffix effect (SSE) was examined with short sequences of words and meaningful nonspeech sounds. In agreement with previous findings, the SSE for word sequences was obtained with a speech, but not a nonspeech, suffix. The reverse was true for sounds. The results contribute further evidence for a functional distinction between speech and nonspeech processing mechanisms in auditory memory.
