Similar Documents
1.
Joanne L. Miller. Cognition, 1994, 50(1-3): 271-285
There is growing evidence that phonetic categories have a rich internal structure, with category members varying systematically in category goodness. Our recent findings on this issue, which are summarized in this paper, underscore the existence and robustness of this structure and indicate further that the mapping between acoustic signal and internal category structure is complex: just as in the case of category boundaries, the best exemplars of a given category are highly dependent on acoustic-phonetic context and are specified by multiple properties of the speech signal. These findings suggest that the listener's representation of phonetic form preserves not only categorical information, but also fine-grained information about the detailed acoustic-phonetic characteristics of the language.

2.
It is well established that young infants process speech in terms of perceptual categories that closely correspond to the phonetic categories of adult language users. Recently, Kuhl (1991) has provided evidence that this correspondence is not limited to the region of category boundaries: At least by 6–7 months of age, vowel categories of infants, like those of adults, have an internal perceptual structure. In the current experiments, which focused on a consonantal contrast, we found evidence of internally structured categories in even younger infants—3–4 months of age. The implications of these findings for the nature of the infant’s earliest language-universal categories are discussed, as is the role of exposure to the native language in shaping these categories over the course of development.

3.
To learn to produce speech, infants must effectively monitor and assess their own speech output. Yet very little is known about how infants perceive speech produced by an infant, which has higher voice pitch and formant frequencies compared to adult or child speech. Here, we tested whether pre‐babbling infants (at 4–6 months) prefer listening to vowel sounds with infant vocal properties over vowel sounds with adult vocal properties. A listening preference favoring infant vowels may derive from their higher voice pitch, which has been shown to attract infant attention in infant‐directed speech (IDS). In addition, infants' nascent articulatory abilities may induce a bias favoring infant speech given that 4‐ to 6‐month‐olds are beginning to produce vowel sounds. We created infant and adult /i/ (‘ee’) vowels using a production‐based synthesizer that simulates the act of speaking in talkers at different ages and then tested infants across four experiments using a sequential preferential listening task. The findings provide the first evidence that infants preferentially attend to vowel sounds with infant voice pitch and/or formants over vowel sounds with no infant‐like vocal properties, supporting the view that infants' production abilities influence how they process infant speech. The findings with respect to voice pitch also reveal parallels between IDS and infant speech, raising new questions about the role of this speech register in infant development. Research exploring the underpinnings and impact of this perceptual bias can expand our understanding of infant language development.

4.
Before the end of the first year of life, infants begin to lose the ability to perceive distinctions between sounds that are not phonemic in their native language. It is typically assumed that this developmental change reflects the construction of language‐specific phoneme categories, but how these categories are learned largely remains a mystery. Peperkamp, Le Calvez, Nadal, and Dupoux (2006) present an algorithm that can discover phonemes using the distributions of allophones as well as the phonetic properties of the allophones and their contexts. We show that a third type of information source, the occurrence of pairs of minimally differing word forms in speech heard by the infant, is also useful for learning phonemic categories and is in fact more reliable than purely distributional information in data containing a large number of allophones. In our model, learners build an approximation of the lexicon consisting of the high‐frequency n‐grams present in their speech input, allowing them to take advantage of top‐down lexical information without needing to learn words. This may explain how infants have already begun to exhibit sensitivity to phonemic categories before they have a large receptive lexicon.
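The minimal-pair cue described above lends itself to a short sketch. In the illustrative Python below (not the authors' implementation; the function names, the bigram "proto-lexicon", and the frequency threshold are all assumptions), a learner approximates its lexicon as the high-frequency n-grams in the input and treats two sounds as contrasting phonemes when swapping one for the other maps an attested form onto another attested form:

```python
from collections import Counter

def proto_lexicon(utterances, n=2, min_count=2):
    """Approximate the lexicon as the high-frequency n-grams of segments
    in the input (no word learning required)."""
    counts = Counter()
    for segments in utterances:
        for i in range(len(segments) - n + 1):
            counts[tuple(segments[i:i + n])] += 1
    return {gram for gram, c in counts.items() if c >= min_count}

def minimal_pair_contrast(lexicon, sound_a, sound_b):
    """Treat two sounds as separate phonemes if substituting one for the
    other turns some attested form into another attested form."""
    for form in lexicon:
        if sound_a in form:
            swapped = tuple(sound_b if s == sound_a else s for s in form)
            if swapped != form and swapped in lexicon:
                return True
    return False

# Toy input: both "ra" and "la" are frequent forms, so [r] and [l] contrast.
utterances = [["r", "a", "l", "a"], ["r", "a"], ["l", "a"]]
print(minimal_pair_contrast(proto_lexicon(utterances), "r", "l"))  # True
```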

5.
Evidence from infant studies indicates that language learning can be facilitated by multimodal cues. We extended this observation to adult language learning by studying the effects of simultaneous visual cues (nonassociated object images) on speech segmentation performance. Our results indicate that segmentation of new words from a continuous speech stream is facilitated by simultaneous visual input that is presented at or near syllables that exhibit the low transitional probability indicative of word boundaries. This indicates that temporal audio-visual contiguity helps in directing attention to word boundaries at the earliest stages of language learning. Off-boundary or arrhythmic picture sequences did not affect segmentation performance, suggesting that the language learning system can effectively disregard noninformative visual information. Detection of temporal contiguity between multimodal stimuli may be useful for both infants and second-language learners, not only for facilitating speech segmentation, but also for detecting word–object relationships in natural environments.

6.
The processes of infant word segmentation and infant word learning have largely been studied separately. However, the ease with which potential word forms are segmented from fluent speech seems likely to influence subsequent mappings between words and their referents. To explore this process, we tested the link between the statistical coherence of sequences presented in fluent speech and infants’ subsequent use of those sequences as labels for novel objects. Notably, the materials were drawn from a natural language unfamiliar to the infants (Italian). The results of three experiments suggest that there is a close relationship between the statistics of the speech stream and subsequent mapping of labels to referents. Mapping was facilitated when the labels contained high transitional probabilities in the forward and/or backward direction (Experiment 1). When no transitional probability information was available (Experiment 2), or when the internal transitional probabilities of the labels were low in both directions (Experiment 3), infants failed to link the labels to their referents. Word learning appears to be strongly influenced by infants’ prior experience with the distribution of sounds that make up words in natural languages.
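The transitional-probability statistics at issue are easy to state: for an adjacent syllable pair XY, the forward TP is count(XY)/count(X) and the backward TP is count(XY)/count(Y). Below is a minimal sketch using toy syllables in the style of artificial-language experiments, not the Italian materials of the study:

```python
from collections import Counter

def transitional_probabilities(stream):
    """Forward and backward transitional probabilities for every adjacent
    syllable pair in a syllable stream (a list of strings)."""
    unigrams = Counter(stream)
    bigrams = Counter(zip(stream, stream[1:]))
    forward = {(x, y): c / unigrams[x] for (x, y), c in bigrams.items()}
    backward = {(x, y): c / unigrams[y] for (x, y), c in bigrams.items()}
    return forward, backward

# "bi da ku" recurs as a word, so the word-internal pair (bi, da) has a
# higher forward TP than the boundary-spanning pair (ku, go).
stream = "bi da ku bi da ku go la bu".split()
forward, backward = transitional_probabilities(stream)
print(forward[("bi", "da")], forward[("ku", "go")])  # 1.0 0.5
```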

7.
In their first year, infants begin to learn the speech sounds of their language. This process is typically modeled as an unsupervised clustering problem in which phonetically similar speech‐sound tokens are grouped into phonetic categories by infants using their domain‐general inference abilities. We argue here that maternal speech is too phonetically variable for this account to be plausible, and we provide phonetic evidence from Spanish showing that infant‐directed Spanish vowels are more readily clustered over word types than over vowel tokens. The results suggest that infants’ early adaptation to native‐language phonetics depends on their word‐form lexicon, implicating a much wider range of potential sources of influence on infants’ developmental trajectories in language learning.
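As a rough illustration of the contrast the authors draw, the sketch below clusters vowel tokens either directly or after averaging within word types. It assumes tokens are represented as F1/F2 formant pairs, uses a Gaussian mixture as an illustrative stand-in for whatever distributional learner one prefers, and defaults to five categories to match the Spanish vowel inventory:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_over_tokens(formants, n_categories=5):
    """Cluster raw vowel tokens (rows of [F1, F2]) directly; with variable
    infant-directed speech, token clouds for different vowels overlap."""
    return GaussianMixture(n_components=n_categories,
                           random_state=0).fit_predict(np.asarray(formants))

def cluster_over_word_types(formants, word_types, n_categories=5):
    """Average the tokens belonging to each word type first, then cluster
    the word-type means; averaging pools over token-level variability."""
    formants, word_types = np.asarray(formants), np.asarray(word_types)
    types = sorted(set(word_types))
    means = np.array([formants[word_types == t].mean(axis=0) for t in types])
    labels = GaussianMixture(n_components=n_categories,
                             random_state=0).fit_predict(means)
    return dict(zip(types, labels))
```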

8.
In all languages studied to date, distinct prosodic contours characterize different intention categories of infant-directed (ID) speech. This vocal behavior likely exists universally as a species-typical trait, but little research has examined whether listeners can accurately recognize intentions in ID speech using only vocal cues, without access to semantic information. We recorded native-English-speaking mothers producing four intention categories of utterances (prohibition, approval, comfort, and attention) as both ID and adult-directed (AD) speech, and we then presented the utterances to Shuar adults (South American hunter-horticulturalists). Shuar subjects were able to reliably distinguish ID from AD speech and were able to reliably recognize the intention categories in both types of speech, although performance was significantly better with ID speech. This is the first demonstration that adult listeners in an indigenous, nonindustrialized, and nonliterate culture can accurately infer intentions from both ID speech and AD speech in a language they do not speak.

9.
Most research on infant speech categories has relied on measures of discrimination. Such work often employs categorical perception as a linking hypothesis to enable inferences about categorization on the basis of discrimination measures. However, a large number of studies with adults challenge the utility of categorical perception in describing adult speech perception, and this in turn calls into question how to interpret measures of infant speech discrimination. We propose here a parallel channels model of discrimination (built on Pisoni and Tash, 1974, Perception & Psychophysics, 15(2), 285–290), which posits that both a noncategorical or veridical encoding of speech cues and category representations can simultaneously contribute to discrimination. This can thus produce categorical perception effects without positing any warping of the acoustic signal, but it also reframes how we think about infant discrimination and development. We test this model by conducting a quantitative review of 20 studies examining infants’ discrimination of voice onset time contrasts. This review suggests that within-category discrimination is surprisingly prevalent even in classic studies and that, averaging across studies, discrimination is related to continuous acoustic distance. It also identifies several methodological factors that may mask our ability to see this. Finally, it suggests that infant discrimination may improve over development, contrary to the commonly held notion of perceptual narrowing. These results are discussed in terms of theories of speech development that may require such continuous sensitivity.
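The parallel-channels idea can be written down compactly. In this illustrative sketch (the logistic category function, its boundary and slope, and the channel weights are assumptions, not values fitted in the paper), predicted discriminability sums a continuous channel proportional to raw acoustic distance and a categorical channel proportional to the difference in category-membership probability, yielding a boundary advantage without any warping of the acoustic signal:

```python
import math

def category_prob(vot, boundary=20.0, slope=0.5):
    """Illustrative logistic category channel: P(/t/ | VOT in ms)."""
    return 1.0 / (1.0 + math.exp(-slope * (vot - boundary)))

def discriminability(vot1, vot2, w_acoustic=0.02, w_category=1.0):
    """Parallel-channels prediction: discrimination reflects both the raw
    acoustic distance and the difference in category membership."""
    acoustic = abs(vot1 - vot2)
    categorical = abs(category_prob(vot1) - category_prob(vot2))
    return w_acoustic * acoustic + w_category * categorical

# Equal 20 ms steps: the boundary-straddling pair is predicted to be easier,
# but the within-category pair still gets a nonzero score, so some
# within-category discrimination is expected.
print(discriminability(10, 30))  # ~1.39 (crosses the boundary)
print(discriminability(30, 50))  # ~0.41 (within-category, still > 0)
```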

10.
The present study investigated the degree to which infants’ use of simultaneous gesture–speech combinations during controlled social interactions predicts later language development. Nineteen infants participated in a declarative pointing task involving three different social conditions: two experimental conditions, (a) available, when the adult was visually attending to the infant but did not attend to the object of reference jointly with the child, and (b) unavailable, when the adult was visually attending to neither the infant nor the object; and (c) a baseline condition, when the adult jointly engaged with the infant's object of reference. At 12 months of age, measures related to infants’ speech-only productions, pointing-only gestures, and simultaneous pointing–speech combinations were obtained in each of the three social conditions. Each child's lexical and grammatical output was assessed at 18 months of age through parental report. Results revealed a significant interaction between social condition and type of communicative production. Specifically, only simultaneous pointing–speech combinations increased in frequency during the available condition compared to baseline, while no differences were found for speech-only and pointing-only productions. Moreover, simultaneous pointing–speech combinations in the available condition at 12 months positively correlated with lexical and grammatical development at 18 months of age. The ability to selectively use this multimodal communicative strategy to engage the adult in joint attention by drawing the adult's attention toward an unseen event or object reveals 12-month-olds’ clear understanding of referential cues that are relevant for language development. This strategy for successfully initiating and maintaining joint attention is related to language development because it increases learning opportunities from social interactions.

11.
Individual variability in infants’ language processing is partly explained by environmental factors, like the quantity of parental speech input, as well as by infant‐specific factors, like speech production. Here, we explore how these factors affect infant word segmentation. We used an artificial language to ensure that only statistical regularities (like transitional probabilities between syllables) could cue word boundaries, and then asked how the quantity of parental speech input and infants’ babbling repertoire predict infants’ abilities to use these statistical cues. We replicated prior reports showing that 8‐month‐old infants use statistical cues to segment words, with a preference for part‐words over words (a novelty effect). Crucially, 8‐month‐olds with larger novelty effects had received more speech input at 4 months and had greater production abilities at 8 months. These findings establish for the first time that the ability to extract statistical information from speech correlates with individual factors in infancy, like early speech experience and language production. Implications of these findings for understanding individual variability in early language acquisition are discussed.

12.
In the present study, the discourse interaction between adult and child was examined in terms of the content of their utterances, and the linguistic and contextual relations between their messages, in order to investigate how children use the information from adults' input sentences to form contingent responses. The analyses described were based on longitudinal data from four children from approximately 21 to 36 months of age. Categories of child discourse, their development, and their interactions with aspects of prior adult utterances form the major results of the study. Child utterances were identified as adjacent (immediately preceded by an adult utterance) or as nonadjacent (not immediately preceded by an adult utterance). Adjacent utterances were either contingent (shared the same topic and added new information relative to the topic of the prior utterance), imitative (shared the same topic but did not add new information), or noncontingent (did not share the same topic). From the beginning, adjacent speech was more frequent than nonadjacent speech. Contingent speech increased over time; in particular, linguistically contingent speech (speech that expanded the verb relation of the prior adult utterance with added or replaced constituents within a clause) showed the greatest developmental increase. Linguistically contingent speech occurred more often after questions than after nonquestions. The results are discussed in terms of how the differential requirements for processing information in antecedent messages are related to language learning.

13.
Ramus F, Nespor M, Mehler J. Cognition, 1999, 73(3): 265-292
Spoken languages have been classified by linguists according to their rhythmic properties, and psycholinguists have relied on this classification to account for infants' capacity to discriminate languages. Although researchers have measured many speech signal properties, they have failed to identify reliable acoustic characteristics for language classes. This paper presents instrumental measurements based on a consonant/vowel segmentation for eight languages. The measurements suggest that intuitive rhythm types reflect specific phonological properties, which in turn are signaled by the acoustic/phonetic properties of speech. The data support the notion of rhythm classes and also allow the simulation of infant language discrimination, consistent with the hypothesis that newborns rely on a coarse segmentation of speech. A hypothesis is proposed regarding the role of rhythm perception in language acquisition.
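The paper's instrumental measures over the consonant/vowel segmentation include the proportion of vocalic intervals (%V) and the variability of vocalic and consonantal interval durations (ΔV, ΔC). A minimal sketch computing them from labeled interval durations (the toy durations below are invented for illustration):

```python
import statistics

def rhythm_metrics(intervals):
    """%V, deltaV, and deltaC from (label, duration_in_seconds) pairs,
    where each label is 'V' (vocalic) or 'C' (consonantal)."""
    v = [d for label, d in intervals if label == "V"]
    c = [d for label, d in intervals if label == "C"]
    pct_v = sum(v) / (sum(v) + sum(c))          # proportion of vocalic time
    return pct_v, statistics.pstdev(v), statistics.pstdev(c)

# Toy utterance: high %V and low deltaC pattern with syllable-timed
# languages in the paper's data; low %V and high deltaC with stress-timed.
example = [("C", 0.08), ("V", 0.12), ("C", 0.06), ("V", 0.14),
           ("C", 0.20), ("V", 0.10)]
print(rhythm_metrics(example))
```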

14.
The relationship between maternal ADHD symptoms and maternal language was examined in a community sample of 50 mothers of infants aged 3–12 months. It was hypothesized that higher maternal symptoms of ADHD would be related to lower quality of maternal language use. Recordings of mothers’ speech were coded for complexity and elaboration of speech and for vocabulary diversity during an interview with an adult and during mother–infant play interactions in the home. Hierarchical regression analysis revealed that maternal ADHD symptoms were significantly related to mothers’ lower mean length of utterance during the interview and during mother–infant play interactions. Maternal ADHD symptoms were not related to maternal vocabulary use in either of these situations. Our findings suggest that mothers with higher ADHD symptoms may display exiguous language behaviors when interacting with their infants and with adults. In addition, the findings suggest one reason why current parent-management programs for children with ADHD, which are verbally based and rely heavily on the parent’s communication skills, are relatively ineffective when ADHD is also present in the parent.

15.
Cognitive systems face a tension between stability and plasticity. The maintenance of long-term representations that reflect the global regularities of the environment is often at odds with pressure to flexibly adjust to short-term input regularities that may deviate from the norm. This tension is abundantly clear in speech communication when talkers with accents or dialects produce input that deviates from a listener's language community norms. Prior research demonstrates that when bottom-up acoustic information or top-down word knowledge is available to disambiguate speech input, there is short-term adaptive plasticity such that subsequent speech perception is shifted even in the absence of the disambiguating information. Although such effects are well-documented, it is not yet known whether bottom-up and top-down resolution of ambiguity may operate through common processes, or how these information sources may interact in guiding the adaptive plasticity of speech perception. The present study investigates the joint contributions of bottom-up information from the acoustic signal and top-down information from lexical knowledge in the adaptive plasticity of speech categorization according to short-term input regularities. The results implicate speech category activation, whether from top-down or bottom-up sources, in driving rapid adjustment of listeners' reliance on acoustic dimensions in speech categorization. Broadly, this pattern of perception is consistent with dynamic mapping of input to category representations that is flexibly tuned according to interactive processing accommodating both lexical knowledge and idiosyncrasies of the acoustic input.

16.
Over the course of the first year of life, infants develop from being generalized listeners, capable of discriminating both native and non-native speech contrasts, into specialized listeners whose discrimination patterns closely reflect the phonetic system of the native language(s). Recent work by Maye, Werker and Gerken (2002) has proposed a statistical account for this phenomenon, showing that infants may lose the ability to discriminate some foreign language contrasts on the basis of their sensitivity to the statistical distribution of sounds in the input language. In this paper we examine the process of enhancement in infant speech perception, whereby initially difficult phonetic contrasts become better discriminated when they define two categories that serve a functional role in the native language. In particular, we demonstrate that exposure to a bimodal statistical distribution in 8-month-old infants' phonetic input can lead to increased discrimination of difficult contrasts. In addition, this exposure also facilitates discrimination of an unfamiliar contrast sharing the same phonetic feature as the contrast presented during familiarization, suggesting that infants extract acoustic/phonetic information that is invariant across an abstract featural representation.
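The distributional manipulation is straightforward to illustrate: tokens along an eight-step phonetic continuum are presented with frequencies peaking either at two steps (bimodal, supporting two categories) or at the center (unimodal, supporting one). The counts below are illustrative rather than the paper's exact presentation frequencies:

```python
import random

# Relative presentation frequencies across an eight-step continuum.
BIMODAL = [1, 4, 4, 1, 1, 4, 4, 1]    # two modes -> two categories
UNIMODAL = [1, 2, 3, 4, 4, 3, 2, 1]   # one central mode -> one category

def familiarization_stream(frequencies, repeats=8, seed=0):
    """Expand per-step frequencies into a shuffled familiarization stream;
    each token is the continuum step that would be synthesized and played."""
    stream = [step for step, f in enumerate(frequencies, start=1)
              for _ in range(f * repeats)]
    random.Random(seed).shuffle(stream)
    return stream

# Same overall token count in both conditions; only the shape differs.
assert len(familiarization_stream(BIMODAL)) == len(familiarization_stream(UNIMODAL))
```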

17.
Perceptual discrimination between speech sounds belonging to different phoneme categories is better than that between sounds falling within the same category. This property, known as "categorical perception," is weaker in children affected by dyslexia. Categorical perception develops from the predispositions of newborns for discriminating all potential phoneme categories in the world's languages. Predispositions that are not relevant for phoneme perception in the ambient language are usually deactivated during early childhood. However, the current study shows that dyslexic children maintain a higher sensitivity to phonemic distinctions irrelevant in their linguistic environment. This suggests that dyslexic children use an allophonic mode of speech perception that, although without straightforward consequences for oral communication, has obvious implications for the acquisition of alphabetic writing. Allophonic perception specifically affects the mapping between graphemes and phonemes, contrary to other manifestations of dyslexia, and may be a core deficit.

18.
The ability to perceive and produce sounds at multiple time scales is a skill necessary for the acquisition of language. Unlike speech perception, which develops early in life, the production of speech sounds starts at a few months and continues into late childhood with the development of speech-motor skills. Though there is detailed information available on early phonological development, there is very little information on when various articulatory features achieve adult-like maturity. We use modern spectral analysis to investigate the development of three language features associated with three different timescales in vocal utterances from typically developing children between 4 and 8 years. We make comparisons with adult speech and find age dependence in the appearance of these features. Results suggest that as children get older they exhibit increasingly more power in features associated with shorter time scales, thereby indicating the maturation of fine motor control in speech. Such data from typically developing children could provide milestones of speech production at different timescales. Since impairments in spoken language often provide the first warning signs of a language disorder, we suggest that speech production could also be used to probe language disorders.
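As a hedged illustration of extracting power at multiple timescales, the sketch below measures amplitude-envelope modulation power in three bands; the envelope-based method and the band edges standing in for slow, intermediate, and fast speech features are assumptions, not the authors' exact analysis:

```python
import numpy as np
from scipy.signal import hilbert, welch

def timescale_power(signal, fs, bands=((0.5, 4), (4, 16), (16, 64))):
    """Power of the speech amplitude envelope in three modulation-frequency
    bands, as a proxy for features at slow (prosodic), intermediate
    (syllabic), and fast (phonemic) timescales."""
    envelope = np.abs(hilbert(signal))                 # amplitude envelope
    nperseg = min(len(envelope), 4 * int(fs))          # ~4 s analysis windows
    freqs, psd = welch(envelope, fs=fs, nperseg=nperseg)
    return [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
```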

19.
Event-related potentials (ERPs) were recorded from Spanish-English bilinguals (N = 10) to test pre-attentive speech discrimination in two language contexts. ERPs were recorded while participants silently read magazines in English or Spanish. Two speech contrast conditions were recorded in each language context. In the phonemic-in-English condition, the speech sounds represented two different phonemic categories in English, but represented the same phonemic category in Spanish. In the phonemic-in-Spanish condition, the speech sounds represented two different phonemic categories in Spanish, but represented the same phonemic category in English. Results showed pre-attentive discrimination when the acoustics/phonetics of the speech sounds matched the language context (e.g., the phonemic-in-English condition during the English language context). The results suggest that language contexts can affect pre-attentive auditory change detection. Specifically, bilinguals’ mental processing of stop consonants relies on contextual linguistic information.
