Similar articles
 20 similar articles found (search time: 15 ms)
1.
Research on infant speech perception shows that very young infants (1-4 months) can discriminate nearly all phonetic category contrasts. As native-language experience accumulates, infants' speech perception increasingly reflects the phonetic properties of the native language: consonant perception shows heightened sensitivity to native phonemic category boundaries and declining sensitivity to non-native boundaries, with non-native categories becoming assimilated into the native phonological system, while perception of native vowels shows a perceptual magnet effect. This evidence indicates that infants gradually acquire the phonemic categories of their native language, and that the order of acquisition may depend on factors such as the acoustic properties of category exemplars and their frequency of occurrence.

2.
The language environment modifies the speech perception abilities found in early development. In particular, adults have difficulty perceiving many nonnative contrasts that young infants discriminate. The underlying perceptual reorganization apparently occurs by 10-12 months. According to one view, it depends on experiential effects on psychoacoustic mechanisms. Alternatively, phonological development has been held responsible, with perception influenced by whether the nonnative sounds occur allophonically in the native language. We hypothesized that a phonemic process appears around 10-12 months that assimilates speech sounds to native categories whenever possible; otherwise, they are perceived in auditory or phonetic (articulatory) terms. We tested this with English-speaking listeners by using Zulu click contrasts. Adults discriminated the click contrasts; performance on the most difficult (80% correct) was not diminished even when the most obvious acoustic difference was eliminated. Infants showed good discrimination of the acoustically modified contrast even by 12-14 months. Together with earlier reports of developmental change in perception of nonnative contrasts, these findings support a phonological explanation of language-specific reorganization in speech perception.

3.
The development of speech perception during the 1st year reflects increasing attunement to native language features, but the mechanisms underlying this development are not completely understood. One previous study linked reductions in nonnative speech discrimination to performance on nonlinguistic tasks, whereas other studies have shown associations between speech perception and vocabulary growth. The present study examined relationships among these abilities in 11-month-old infants using a conditioned head-turn test of native and nonnative speech sound discrimination, nonlinguistic object-retrieval tasks requiring attention and inhibitory control, and the MacArthur-Bates Communicative Development Inventory (L. Fenson et al., 1993). Native speech discrimination was positively linked to receptive vocabulary size but not to the cognitive control tasks, whereas nonnative speech discrimination was negatively linked to cognitive control scores but not to vocabulary size. Speech discrimination, vocabulary size, and cognitive control scores were not associated with more general cognitive measures. These results suggest specific relationships between domain-general inhibitory control processes and the ability to ignore variation in speech that is irrelevant to the native language and between the development of native language speech perception and vocabulary.

4.
Previous research suggests that infant speech perception reorganizes in the first year: young infants discriminate both native and non‐native phonetic contrasts, but by 10–12 months difficult non‐native contrasts are less discriminable whereas performance improves on native contrasts. In the current study, four experiments tested the hypothesis that, in addition to the influence of native language experience, acoustic salience also affects the perceptual reorganization that takes place in infancy. Using a visual habituation paradigm, two nasal place distinctions that differ in relative acoustic salience, acoustically robust labial‐alveolar [ma]–[na] and acoustically less salient alveolar‐velar [na]–[ŋa], were presented to infants in a cross‐language design. English‐learning infants at 6–8 and 10–12 months showed discrimination of the native and acoustically robust [ma]–[na] (Experiment 1), but not the non‐native (in initial position) and acoustically less salient [na]–[ŋa] (Experiment 2). Very young (4–5‐month‐old) English‐learning infants tested on the same native and non‐native contrasts also showed discrimination of only the [ma]–[na] distinction (Experiment 3). Filipino‐learning infants, whose ambient language includes the syllable‐initial alveolar (/n/)–velar (/ŋ/) contrast, showed discrimination of native [na]–[ŋa] at 10–12 months, but not at 6–8 months (Experiment 4). These results support the hypothesis that acoustic salience affects speech perception in infancy, with native language experience facilitating discrimination of an acoustically similar phonetic distinction [na]–[ŋa]. We discuss the implications of this developmental profile for a comprehensive theory of speech perception in infancy.

5.
Over the course of the first year of life, infants develop from being generalized listeners, capable of discriminating both native and non-native speech contrasts, into specialized listeners whose discrimination patterns closely reflect the phonetic system of the native language(s). Recent work by Maye, Werker and Gerken (2002) has proposed a statistical account for this phenomenon, showing that infants may lose the ability to discriminate some foreign language contrasts on the basis of their sensitivity to the statistical distribution of sounds in the input language. In this paper we examine the process of enhancement in infant speech perception, whereby initially difficult phonetic contrasts become better discriminated when they define two categories that serve a functional role in the native language. In particular, we demonstrate that exposure to a bimodal statistical distribution in 8-month-old infants' phonetic input can lead to increased discrimination of difficult contrasts. In addition, this exposure also facilitates discrimination of an unfamiliar contrast sharing the same phonetic feature as the contrast presented during familiarization, suggesting that infants extract acoustic/phonetic information that is invariant across an abstract featural representation.
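The distributional mechanism this abstract invokes can be made concrete with a toy sketch. This is not Maye, Werker and Gerken's actual model: the 8-step continuum, the token-frequency counts, and the function name below are all invented for illustration. The idea is simply that a learner posits one phonetic category per mode in the frequency histogram of tokens along a continuum, so bimodal familiarization supports two categories while unimodal familiarization supports only one.

```python
# Toy sketch of distributional category learning in the spirit of
# Maye, Werker & Gerken (2002). The 8-step continuum and the token
# frequencies are hypothetical values chosen for illustration.

BIMODAL = [4, 11, 18, 7, 7, 18, 11, 4]    # two frequency peaks
UNIMODAL = [4, 7, 11, 18, 18, 11, 7, 4]   # one central peak

def count_modes(freqs):
    """Count local maxima in a frequency histogram; a plateau counts once."""
    modes, rising = 0, True
    for prev, cur in zip(freqs, freqs[1:]):
        if cur > prev:
            rising = True
        elif cur < prev:
            if rising:          # just passed a peak
                modes += 1
            rising = False
    if rising:                  # histogram ends on a rise or plateau
        modes += 1
    return modes

# A learner positing one category per mode forms two categories from
# bimodal input but only one from unimodal input, predicting
# discrimination of the endpoints only after bimodal familiarization.
print(count_modes(BIMODAL), count_modes(UNIMODAL))  # prints: 2 1
```

The mode count is only a stand-in for richer clustering (e.g., mixture models) that distributional accounts usually assume; the point is that the category inventory falls out of input statistics alone, with no word meanings involved.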

6.
One of the central themes in the study of language acquisition is the gap between the linguistic knowledge that learners demonstrate, and the apparent inadequacy of linguistic input to support induction of this knowledge. One of the first linguistic abilities in the course of development to exemplify this problem is in speech perception: specifically, learning the sound system of one’s native language. Native-language sound systems are defined by meaningful contrasts among words in a language, yet infants learn these sound patterns before any significant numbers of words are acquired. Previous approaches to this learning problem have suggested that infants can learn phonetic categories from statistical analysis of auditory input, without regard to word referents. Experimental evidence presented here suggests instead that young infants can use visual cues present in word-labeling situations to categorize phonetic information. In Experiment 1, 9-month-old English-learning infants failed to discriminate two non-native phonetic categories, establishing baseline performance in a perceptual discrimination task. In Experiment 2, these infants succeeded at discrimination after watching contrasting visual cues (i.e., videos of two novel objects) paired consistently with the two non-native phonetic categories. In Experiment 3, these infants failed at discrimination after watching the same visual cues, but paired inconsistently with the two phonetic categories. At an age before which memory of word labels is demonstrated in the laboratory, 9-month-old infants use contrastive pairings between objects and sounds to influence their phonetic sensitivity. Phonetic learning may have a more functional basis than previous statistical learning mechanisms assume: infants may use cross-modal associations inherent in social contexts to learn native-language phonetic categories.

7.
Previous cross-language research has indicated that some speech contrasts present greater perceptual difficulty for adult non-native listeners than others do. It has been hypothesized that phonemic, phonetic, and acoustic factors contribute to this variability. Two experiments were conducted to evaluate systematically the role of phonemic status and phonetic familiarity in the perception of non-native speech contrasts and to test predictions derived from a model proposed by Best, McRoberts, and Sithole (1988). Experiment 1 showed that perception of an unfamiliar phonetic contrast was not less difficult for subjects who had experience with an analogous phonemic distinction in their native language than for subjects without such analogous experience. These results suggest that substantive phonetic experience influences the perception of non-native contrasts, and thus should contribute to a conceptualization of native language-processing skills. In Experiment 2, English listeners' perception of two related nonphonemic place contrasts was not consistently different as had been expected on the basis of phonetic familiarity. A clear order effect in the perceptual data suggests that interactions between different perceptual assimilation patterns or acoustic properties of the two contrasts, or interactions involving both of these factors, underlie the perception of the two contrasts in this experiment. It was concluded that both phonetic familiarity and acoustic factors are potentially important to the explanation of variability in perception of nonphonemic contrasts. The explanation of how linguistic experience shapes speech perception will require characterizing the relative contribution of these factors, as well as other factors, including individual differences and variables that influence a listener's orientation to speech stimuli.

9.
Synthetic speech stimuli were used to investigate whether aphasics' ability to perceive stop consonant place of articulation was enhanced by the extension of initial formant transitions in CV syllables. Phoneme identification and discrimination tests were administered to 12 aphasic patients, 5 fluent and 7 nonfluent. There were no significant differences in performance due to the extended transitions, and no systematic pattern of performance due to aphasia type. In both groups, discrimination was generally high and significantly better than identification, demonstrating that auditory capacity was retained, while phonetic perception was impaired; this result is consistent with repeated demonstrations that auditory and phonetic processes may be dissociated in normal listeners. Moreover, significant rank order correlations between performances on the Token Test and on both perceptual tasks suggest that impairment on these tests may reflect a general cognitive rather than a language-specific deficit.

10.
Patterns of developmental change in phonetic perception are critical to theory development. Many previous studies document a decline in nonnative phonetic perception between 6 and 12 months of age. However, much less experimental attention has been paid to developmental change in native-language phonetic perception over the same time period. We hypothesized that language experience in the first year facilitates native-language phonetic performance between 6 and 12 months of age. We tested 6-8- and 10-12-month-old infants in the United States and Japan to examine native and nonnative patterns of developmental change using the American English /r-l/ contrast. The goals of the experiment were to: (a) determine whether facilitation characterizes native-language phonetic change between 6 and 12 months of age, (b) examine the decline previously observed for nonnative contrasts, and (c) test directional asymmetries for consonants. The results show a significant increase in performance for the native-language contrast in the first year, a decline in nonnative perception over the same time period, and indicate directional asymmetries that are constant across age and culture. We argue that neural commitment to native-language phonetic properties explains the pattern of developmental change in the first year.

11.
Language experience 'narrows' speech perception by the end of infants' first year, reducing discrimination of non-native phoneme contrasts while improving native-contrast discrimination. Previous research showed that declines in non-native discrimination were reversed by second-language experience provided at 9-10 months, but it is not known whether second-language experience affects first-language speech sound processing. Using event-related potentials (ERPs), we examined learning-related changes in brain activity to Spanish and English phoneme contrasts in monolingual English-learning infants pre- and post-exposure to Spanish from 9.5-10.5 months of age. Infants showed a significant discriminatory ERP response to the Spanish contrast at 11 months (post-exposure), but not at 9 months (pre-exposure). The English contrast elicited an earlier discriminatory response at 11 months than at 9 months, suggesting improvement in native-language processing. The results show that infants rapidly encode new phonetic information, and that improvement in native speech processing can occur during second-language learning in infancy.

12.
Behavioral data establish a dramatic change in infants' phonetic perception between 6 and 12 months of age. Foreign-language phonetic discrimination significantly declines with increasing age. Using a longitudinal design, we examined the electrophysiological responses of 7- and 11-month-old American infants to native and non-native consonant contrasts. Analyses of the event-related potentials (ERP) of the group data at 7 and at 11 months of age demonstrated that infants' discriminatory ERP responses to the non-native contrast are present at 7 months of age but disappear by 11 months of age, consistent with the behavioral data reported in the literature. However, when the same infants were divided into subgroups based on individual ERP components, we found evidence that the infant brain remains sensitive to the non-native contrast at 11 months of age, showing differences in either the P150-250 or the N250-550 time window, depending upon the subgroup. Moreover, we observed an increase in infants' responsiveness to native language consonant contrasts over time. We describe distinct neural patterns in two groups of infants and suggest that their developmental differences may have an impact on language development.

13.
American English liquids /r/ and /l/ have been considered intermediate between stop consonants and vowels acoustically, articulatorily, phonologically, and perceptually. Cutting (1974a) found position-dependent ear advantages for liquids in a dichotic listening task: syllable-initial liquids produced significant right ear advantages, while syllable-final liquids produced no reliable ear advantages. The present study employed identification and discrimination tasks to determine whether /r/ and /l/ are perceived differently depending on syllable position when perception is tested by a different method. Fifteen subjects listened to two synthetically produced speech series—/li/ to /ri/ and /il/ to /ir/—in which stepwise variations of the third formant cued the difference in consonant identity. The results indicated that: (1) perception did not differ between syllable positions (in contrast to the dichotic listening results), (2) liquids in both syllable positions were perceived categorically, and (3) discrimination of a nonspeech control series did not account for the perception of the speech sounds.

14.
Trading relations show that diverse acoustic consequences of minimal contrasts in speech are equivalent in perception of phonetic categories. This perceptual equivalence received stronger support from a recent finding that discrimination was differentially affected by the phonetic cooperation or conflict between two cues for the /slIt/-/splIt/ contrast. Experiment 1 extended the trading relations and perceptual equivalence findings to the /sei/-/stei/ contrast. With a more sensitive discrimination test, Experiment 2 found that cue equivalence is a characteristic of perceptual sensitivity to phonetic information. Using “sine-wave analogues” of the /sei/-/stei/ stimuli, Experiment 3 showed that perceptual integration of the cues was phonetic, not psychoacoustic, in origin. Only subjects who perceived the sine-wave stimuli as “say” and “stay” showed a trading relation and perceptual equivalence; subjects who perceived them as nonspeech failed to integrate the two dimensions perceptually. Moreover, the pattern of differences between obtained and predicted discrimination was quite similar across the first two experiments and the “say”-“stay” group of Experiment 3, and suggested that phonetic perception was responsible even for better-than-predicted performance by these groups. Trading relations between speech cues, and the perceptual equivalence that underlies them, thus appear to derive specifically from perception of phonetic information.
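The "predicted discrimination" such comparisons rely on is conventionally derived from identification data via the Haskins covert-labeling model: listeners are assumed to label each stimulus covertly and to discriminate only when labels differ, guessing otherwise. The resulting ABX formula is standard textbook material, not taken from this abstract, and the example probabilities below are invented: for a pair identified as (say) "stay" with probabilities p and q, predicted proportion correct is 0.5 + 0.5 * (p - q)^2.

```python
# Haskins-model prediction of ABX discrimination from identification data.
# Assumes independent binary covert labels for A, B, and X: if the labels of
# A and B differ, the listener answers with the matching one; otherwise the
# listener guesses at chance. This yields P(correct) = 0.5 + 0.5*(p - q)**2.

def predicted_abx(p, q):
    """Predicted ABX proportion correct for a stimulus pair whose members
    are identified as one category with probabilities p and q (0..1)."""
    return 0.5 + 0.5 * (p - q) ** 2

# Hypothetical identification probabilities for illustration:
within = predicted_abx(0.95, 0.85)   # pair on the same side of the boundary
across = predicted_abx(0.90, 0.10)   # pair straddling the category boundary
print(round(within, 3), round(across, 3))  # prints: 0.505 0.82
```

Obtained discrimination that exceeds this prediction, as reported in the abstract above, is the usual signature of perceptual sensitivity beyond covert labeling; categorical perception is the limiting case where obtained and predicted functions coincide.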

15.
Werker, J. F., Pons, F., Dietrich, C., Kajikawa, S., Fais, L., & Amano, S. (2007). Cognition, 103(1), 147-162.
Across the first year of life, infants show decreased sensitivity to phonetic differences not used in the native language [Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: evidence for perceptual reorganization during the first year of life. Infant Behaviour and Development, 7, 49-63]. In an artificial language learning manipulation, Maye, Werker, and Gerken [Maye, J., Werker, J. F., & Gerken, L. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82(3), B101-B111] found that infants change their speech sound categories as a function of the distributional properties of the input. For such a distributional learning mechanism to be functional, however, it is essential that the input speech contain distributional cues to support such perceptual learning. To test this, we recorded Japanese and English mothers teaching words to their infants. Acoustic analyses revealed language-specific differences in the distributions of the cues used by mothers (or cues present in the input) to distinguish the vowels. The robust availability of these cues in maternal speech adds support to the hypothesis that distributional learning is an important mechanism whereby infants establish native language phonetic categories.

16.
Infant speech discrimination can follow multiple trajectories depending on the language and the specific phonemes involved. Two understudied languages in terms of the development of infants' speech discrimination are Arabic and Hebrew. Purpose: To examine the influence of listening experience with the native language on the discrimination of the voicing contrast /ba-pa/ in Arabic-learning infants, whose native language includes only the phoneme /b/, and in Hebrew-learning infants, whose native language includes both phonemes. Method: 128 Arabic-learning and Hebrew-learning infants, aged 4-to-6 and 10-to-12 months, were tested with the Visual Habituation Procedure. Results: The 4-to-6-month-old infants discriminated between /ba-pa/ regardless of their native language and order of presentation. However, only the 10-to-12-month-old infants learning Hebrew retained this ability; 10-to-12-month-old infants learning Arabic did not discriminate the change from /ba/ to /pa/ but showed a tendency to discriminate the change from /pa/ to /ba/. Conclusions: This is the first study to report reduced discrimination of /ba-pa/ in older infants learning Arabic. Our findings are consistent with the notion that experience with the native language changes discrimination abilities and alters sensitivity to non-native contrasts, thus providing evidence for 'top-down' processing in young infants. The directional asymmetry in older infants learning Arabic can be explained by assimilation of the non-native consonant /p/ to the native Arabic category /b/, as predicted by current speech perception models.

17.
During the first year of life, infants begin to have difficulties perceiving non‐native vowel and consonant contrasts, thus adapting their perception to the phonetic categories of the target language. In this paper, we examine the perception of a non‐segmental feature, i.e. stress. Previous research with adults has shown that speakers of French (a language with fixed stress) have great difficulties in perceiving stress contrasts (Dupoux, Pallier, Sebastián & Mehler, 1997), whereas speakers of Spanish (a language with lexically contrastive stress) perceive these contrasts as accurately as segmental contrasts. We show that language‐specific differences in the perception of stress likewise arise during the first year of life. Specifically, 9‐month‐old Spanish infants successfully distinguish between stress‐initial and stress‐final pseudo‐words, while French infants of this age show no sign of discrimination. In a second experiment using multiple tokens of a single pseudo‐word, French infants of the same age successfully discriminate between the two stress patterns, showing that they are able to perceive the acoustic correlates of stress. Their failure to discriminate stress patterns in the first experiment thus reflects an inability to process stress at an abstract, phonological level.

18.
In adults, native language phonology has strong perceptual effects. Previous work has shown that Japanese speakers, unlike French speakers, break up illegal sequences of consonants with illusory vowels: they report hearing abna as abuna. To study the development of phonological grammar, we compared Japanese and French infants in a discrimination task. In Experiment 1, we observed that 14-month-old Japanese infants, in contrast to French infants, failed to discriminate phonetically varied sets of abna-type and abuna-type stimuli. In Experiment 2, 8-month-old French and Japanese infants did not differ significantly from each other. In Experiment 3, we found that, like adults, Japanese infants can discriminate abna from abuna when phonetic variability is reduced (single item). These results show that the phonologically induced /u/ illusion is already experienced by Japanese infants at the age of 14 months. Hence, before having acquired many words of their language, they have grasped enough of their native phonological grammar to constrain their perception of speech sound sequences.

19.
Four experiments are reported investigating previous findings that speech perception interferes with concurrent verbal memory but difficult nonverbal perceptual tasks do not, to any great degree. The forgetting produced by processing noisy speech could not be attributed to task difficulty, since equally difficult nonspeech tasks did not produce forgetting, and the extent of forgetting produced by speech could be manipulated independently of task difficulty. The forgetting could not be attributed to similarity between memory material and speech stimuli, since clear speech, analyzed in a simple and probably acoustically mediated discrimination task, produced little forgetting. The forgetting could not be attributed to a combination of similarity and difficulty, since a very easy speech task involving clear speech produced as much forgetting as noisy speech tasks, as long as overt reproduction of the stimuli was required. By assuming that noisy speech and overtly reproduced speech are processed at a phonetic level but that clear, repetitive speech can be processed at a purely acoustic level, the forgetting produced by speech perception could be entirely attributed to the level at which the speech was processed. In a final experiment, results were obtained which suggest that if prior set induces processing of noisy and clear speech at comparable levels, the difference between the effects of noisy speech processing and clear speech processing on concurrent memory is completely eliminated.

20.
Groups of 2-, 3-, and 4-month-olds were tested for dichotic ear differences in memory-based phonetic and music timbre discriminations. A right-ear advantage for speech and a left-ear advantage (LEA) for music were found in the 3- and 4-month-olds. However, the 2-month-olds showed only the music LEA, with no reliable evidence of memory-based speech discrimination by either hemisphere. Thus, the responses of all groups to speech contrasts were different from those to music contrasts, but the pattern of the response dichotomy in the youngest group deviated from that found in the older infants. It is suggested that the quality or use of left-hemisphere phonetic memory may change between 2 and 3 months, and that the engagement of right-hemisphere specialized memory for musical timbre may precede that for left-hemisphere phonetic memory. Several directions for future research are suggested to determine whether infant short-term memory asymmetries for speech and music are attributable to acoustic factors, to different modes or strategies in perception, or to structural and dynamic properties of natural sound sources.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号