Similar Articles
1.
Across languages, lexical items specific to infant‐directed speech (i.e., ‘baby‐talk words’) are characterized by a preponderance of onomatopoeia (or highly iconic words), diminutives, and reduplication. These lexical characteristics may help infants discover the referential nature of words, identify word referents, and segment fluent speech into words. If so, the amount of lexical input containing these properties should predict infants’ rate of vocabulary growth. To test this prediction, we tracked the vocabulary size in 47 English‐learning infants from 9 to 21 months and examined whether the patterns of growth can be related to measures of iconicity, diminutives, and reduplication in the lexical input at 9 months. Our analyses showed that both diminutives and reduplication in the input were associated with vocabulary growth, although measures of iconicity were not. These results are consistent with the hypothesis that phonological properties typical of lexical input in infant‐directed speech play a role in early vocabulary growth.

2.
3.
From the very first moments of their lives, infants selectively attend to the visible orofacial movements of their social partners and apply their exquisite speech perception skills to the service of lexical learning. Here we explore how early bilingual experience modulates children's ability to use visible speech as they form new lexical representations. Using a cross‐modal word‐learning task, bilingual children aged 30 months were tested on their ability to learn new lexical mappings in either the auditory or the visual modality. Lexical recognition was assessed either in the same modality as the one used at learning (‘same modality’ condition: auditory test after auditory learning, visual test after visual learning) or in the other modality (‘cross‐modality’ condition: visual test after auditory learning, auditory test after visual learning). The results revealed that like their monolingual peers, bilingual children successfully learn new words in either the auditory or the visual modality and show cross‐modal recognition of words following auditory learning. Interestingly, as opposed to monolinguals, they also demonstrate cross‐modal recognition of words upon visual learning. Collectively, these findings indicate a bilingual edge in visual word learning, expressed in the capacity to form a recoverable cross‐modal representation of visually learned words.

4.
This paper documents the occurrence of form variability through diminutive ‘wordplay’, and examines whether this variability facilitates or hinders morphology acquisition in a richly inflected language. First, in a longitudinal speech corpus of eight Russian mothers conversing with their children (1;6–3;6), and with an adult, the use of diminutive word forms was shown to be pervasive in Russian child‐directed, but not adult‐directed speech. Importantly, all of the mothers were shown to routinely engage in alternating uses of diminutive and simplex forms of the same nouns within the same conversational episodes. Second, an elicitation experiment was conducted which tested 24 children's (2;7–4;2) productivity in inflecting novel nouns for case. By varying whether children heard the novel nouns in diminutive form, simplex form or both, we show that children benefit from the introduction of words in multiple forms (i.e. showing fewer case‐marking errors in this condition). We suggest that pragmatically motivated form variation in child‐directed speech (CDS) may have beneficial effects for acquiring richly inflected languages.

5.
We investigate whether infant‐directed speech (IDS) could facilitate word form learning when compared to adult‐directed speech (ADS). To study this, we examine the distribution of word forms at two levels, acoustic and phonological, using a large database of spontaneous speech in Japanese. At the acoustic level we show that, as has been documented before for phonemes, the realizations of words are more variable and less discriminable in IDS than in ADS. At the phonological level, we find an effect in the opposite direction: The IDS lexicon contains more distinctive words (such as onomatopoeias) than the ADS counterpart. Combining the acoustic and phonological metrics together in a global discriminability score reveals that the bigger separation of lexical categories in the phonological space does not compensate for the opposite effect observed at the acoustic level. As a result, IDS word forms are still globally less discriminable than ADS word forms, even though the effect is numerically small. We discuss the implication of these findings for the view that the functional role of IDS is to improve language learnability.

6.
This paper describes an investigation into the function of child-directed speech (CDS) across development. In the first experiment, 10–21-month-olds were presented with familiar words in CDS and trained on novel words in CDS or adult-directed speech (ADS). All children preferred the matching display for familiar words. However, only older toddlers in the CDS condition preferred the matching display for novel words. In Experiment 2, children 3–6 years of age were presented with a sentence comprehension task in CDS or ADS. Older children performed better overall than younger children with 5- and 6-year-olds performing above chance regardless of speech condition, while 3- and 4-year-olds only performed above chance when the sentences were presented in CDS. These findings provide support for the theory that CDS is most effective at the beginning of acquisition for particular constructions (e.g. vocabulary acquisition, syntactic comprehension) rather than at a particular age or for a particular task.

7.
Children produce their first gestures before their first words, and their first gesture+word sentences before their first word+word sentences. These gestural accomplishments have been found not only to predate linguistic milestones, but also to predict them. Findings of this sort suggest that gesture itself might be playing a role in the language‐learning process. But what role does it play? Children's gestures could elicit from their mothers the kinds of words and sentences that the children need to hear in order to take their next linguistic step. We examined maternal responses to the gestures and speech that 10 children produced during the one‐word period. We found that all 10 mothers ‘translated’ their children's gestures into words, providing timely models for how one‐ and two‐word ideas can be expressed in English. Gesture thus offers a mechanism by which children can point out their thoughts to mothers, who then calibrate their speech to those thoughts, and potentially facilitate language‐learning.

8.
The goal of the study was to examine whether the ‘noun-bias’ phenomenon, which exists in the lexicon of Hebrew-speaking children, also exists in Hebrew child-directed speech (CDS) as well as in Hebrew adult-directed speech (ADS). In addition, we aimed to describe the use of the different classes of content words in the speech of Hebrew-speaking parents to their children at different ages compared to the speech of parents to adults (ADS). Thirty infants (age range 8.5–33 months) were divided into three stages according to age: pre-lexical, single-word, and early grammar. The ADS corpus included 18 Hebrew-speaking parents of children at the same three stages of language development as in the CDS corpus. The CDS corpus was collected from parent–child dyads during naturalistic activities at home: mealtime, bathing, and play. The ADS corpus was collected from parent–experimenter interactions including the parent watching a video and then being interviewed by the experimenter. 200 utterances of each sample were transcribed, coded for types and tokens and analyzed quantitatively and qualitatively. Results show that in CDS, when speaking to infants of all ages, parents’ use of types and tokens of verbs and nouns was similar and significantly higher than their use of adjectives or adverbs. In ADS, however, verbs were the main lexical category used by Hebrew-speaking parents in both types and tokens. It seems that both the properties of the input language (e.g. the pro-drop parameter) and the interactional styles of the caregivers are important factors that may influence the high presence of verbs in Hebrew-speaking parents’ ADS and CDS. The negative correlation between the widespread use of verbs in the speech of parents to their infants and the ‘noun-bias’ phenomenon in the Hebrew-child lexicon will be discussed in detail.

9.
Child-directed language can support language learning, but how? We addressed two questions: (1) how caregivers prosodically modulated their speech as a function of word familiarity (known or unknown to the child) and accessibility of referent (visually present or absent from the immediate environment); (2) whether such modulations affect children's unknown word learning and vocabulary development. We used data from 38 English-speaking caregivers (from the ECOLANG corpus) talking about toys (both known and unknown to their children aged 3–4 years) both when the toys are present and when absent. We analyzed prosodic dimensions (i.e., speaking rate, pitch and intensity) of caregivers’ productions of 6529 toy labels. We found that unknown labels were spoken with significantly slower speaking rate, wider pitch and intensity range than known labels, especially in the first mentions, suggesting that caregivers adjust their prosody based on children's lexical knowledge. Moreover, caregivers used slower speaking rate and larger intensity range to mark the first mentions of toys that were physically absent. After the first mentions, they talked about the referents louder with higher mean pitch when toys were present than when toys were absent. Crucially, caregivers’ mean pitch of unknown words and the degree of mean pitch modulation for unknown words relative to known words (pitch ratio) predicted children's immediate word learning and vocabulary size 1 year later. In conclusion, caregivers modify their prosody when the learning situation is more demanding for children, and these helpful modulations assist children in word learning.

Research Highlights

  • In naturalistic interactions, caregivers use slower speaking rate, wider pitch and intensity range when introducing new labels to 3–4-year-old children, especially in first mentions.
  • Compared to when toys are present, caregivers speak more slowly with larger intensity range to mark the first mentions of toys that are physically absent.
  • Mean pitch to mark word familiarity predicts children's immediate word learning and future vocabulary size.

10.
It is well established that variation in caregivers’ speech is associated with language outcomes, yet little is known about the learning principles that mediate these effects. This longitudinal study (n = 27) explores whether Spanish‐learning children's early experiences with language predict efficiency in real‐time comprehension and vocabulary learning. Measures of mothers’ speech at 18 months were examined in relation to children's speech processing efficiency and reported vocabulary at 18 and 24 months. Children of mothers who provided more input at 18 months knew more words and were faster in word recognition at 24 months. Moreover, multiple regression analyses indicated that the influences of caregiver speech on speed of word recognition and vocabulary were largely overlapping. This study provides the first evidence that input shapes children's lexical processing efficiency and that vocabulary growth and increasing facility in spoken word comprehension work together to support the uptake of the information that rich input affords the young language learner.

11.
Can infants, in the very first stages of word learning, use their perceptual sensitivity to the phonetics of speech while learning words? Research to date suggests that infants of 14 months cannot learn two similar‐sounding words unless there is substantial contextual support. The current experiment advances our understanding of this failure by testing whether the source of infants’ difficulty lies in the learning or testing phase. Infants were taught to associate two similar‐sounding words with two different objects, and tested using a visual choice method rather than the standard Switch task. The results reveal that 14‐month‐olds are capable of learning and mapping two similar‐sounding labels; they can apply phonetic detail in new words. The findings are discussed in relation to infants’ concurrent failure, and the developmental transition to success, in the Switch task.

12.
Pitch is often described metaphorically: for example, Farsi and Turkish speakers use a ‘thickness’ metaphor (low sounds are ‘thick’ and high sounds are ‘thin’), while German and English speakers use a height metaphor (‘low’, ‘high’). This study examines how child and adult speakers of Farsi, Turkish, and German map pitch and thickness using a cross‐modal association task. All groups, except for German children, performed significantly better than chance. German‐speaking adults’ success suggests the pitch‐to‐thickness association can be learned by experience. But the fact that German children were at chance indicates that this learning takes time. Intriguingly, Farsi and Turkish children's performance suggests that learning cross‐modal associations can be boosted through experience with consistent metaphorical mappings in the input language.

13.
The current study examines the relationship between 18‐month‐old toddlers’ vocabulary size and their ability to inhibit attention to no‐longer relevant information using the backward semantic inhibition paradigm. When adults switch attention from one semantic category to another, the former and no‐longer‐relevant semantic category becomes inhibited, and subsequent attention to an item that belongs to the inhibited semantic category is impaired. Here we demonstrate that 18‐month‐olds can inhibit attention to no‐longer relevant semantic categories, but only if they have a relatively large vocabulary. These findings suggest that an increased number of items (word knowledge) in the toddler lexical‐semantic system during the “vocabulary spurt” at 18 months may be an important driving force behind the emergence of a semantic inhibitory mechanism. Possessing more words in the mental lexicon likely results in the formation of inhibitory links between words, which allow toddlers to select and deselect words and concepts more efficiently. Our findings highlight the role of vocabulary growth in the development of inhibitory processes in the emerging lexical‐semantic system.

14.
Markson and Bloom (1997) found that some learning processes involved in children's acquisition of a new word are also involved in their acquisition of a new fact. They argued that these findings provided evidence against a domain‐specific system for word learning. However, Waxman and Booth (2000) found that whereas children quite readily extend newly learned words to novel exemplars within a category, they do not do this with newly learned facts. They therefore argued that because children did not extend some facts in a principled way, word learning and fact learning may result from different domain‐specific processes. In the current study, we argue that facts are a poor comparison in this argument since facts vary in whether they are tied to particular individuals. A more appropriate comparison is a conventional non‐verbal action on an object (‘what we do with things like this’), since such actions are routinely generalized categorically to new objects. Our study shows that 2½‐year‐old children extend novel non‐verbal actions to new objects in the same way that they extend novel words to new objects. The findings provide support for the view that word learning represents a unique configuration of more general learning processes.

15.
16.
In gender‐marking languages, the gender of the noun determines the form of the preceding article. In this study, we examined whether French‐learning toddlers use gender‐marking information on determiners to recognize words. In a split‐screen preferential looking experiment, 25‐month‐olds were presented with picture pairs that referred to nouns with either the same or different genders. The target word in the auditory instruction was preceded either by the correct or incorrect gender‐marked definite article. Toddlers’ looking times to target shortly after article onset demonstrated that target words were processed most efficiently in different‐gender grammatical trials. While target processing in same‐gender grammatical trials recovered in the subsequent time window, ungrammatical articles continued to affect processing efficiency until much later in the trial. These results indicate that by 25 months of age, French‐learning toddlers use gender information on determiners when comprehending subsequent nouns.

17.
To learn to produce speech, infants must effectively monitor and assess their own speech output. Yet very little is known about how infants perceive speech produced by an infant, which has higher voice pitch and formant frequencies compared to adult or child speech. Here, we tested whether pre‐babbling infants (at 4–6 months) prefer listening to vowel sounds with infant vocal properties over vowel sounds with adult vocal properties. A listening preference favoring infant vowels may derive from their higher voice pitch, which has been shown to attract infant attention in infant‐directed speech (IDS). In addition, infants' nascent articulatory abilities may induce a bias favoring infant speech given that 4‐ to 6‐month‐olds are beginning to produce vowel sounds. We created infant and adult /i/ (‘ee’) vowels using a production‐based synthesizer that simulates the act of speaking in talkers at different ages and then tested infants across four experiments using a sequential preferential listening task. The findings provide the first evidence that infants preferentially attend to vowel sounds with infant voice pitch and/or formants over vowel sounds with no infant‐like vocal properties, supporting the view that infants' production abilities influence how they process infant speech. The findings with respect to voice pitch also reveal parallels between IDS and infant speech, raising new questions about the role of this speech register in infant development. Research exploring the underpinnings and impact of this perceptual bias can expand our understanding of infant language development.

18.
How do infants begin to understand spoken words? Recent research suggests that word comprehension develops from the early detection of intersensory relations between conventionally paired auditory speech patterns (words) and visible objects or actions. More importantly, in keeping with dynamic systems principles, the findings suggest that word comprehension develops from a dynamic and complementary relationship between the organism (the infant) and the environment (language addressed to the infant). In addition, parallel findings from speech and non‐speech studies of intersensory perception provide evidence for domain general processes in the development of word comprehension. These research findings contrast with the view that a lexical acquisition device with specific lexical principles and innate constraints is required for early word comprehension. Furthermore, they suggest that learning of word–object relations is not merely an associative process. The data support an alternative view of the developmental process that emphasizes the dynamic and reciprocal interactions between general intersensory perception, selective attention and learning in infants, and the specific characteristics of maternal communication.

19.
Previous research with artificial language learning paradigms has shown that infants are sensitive to statistical cues to word boundaries (Saffran, Aslin & Newport, 1996) and that they can use these cues to extract word‐like units (Saffran, 2001). However, it is unknown whether infants use statistical information to construct a receptive lexicon when acquiring their native language. In order to investigate this issue, we rely on the fact that besides real words a statistical algorithm extracts sound sequences that are highly frequent in infant‐directed speech but constitute nonwords. In three experiments, we use a preferential listening paradigm to test French‐learning 11‐month‐old infants' recognition of highly frequent disyllabic sequences from their native language. In Experiments 1 and 2, we use nonword stimuli and find that infants listen longer to high‐frequency than to low‐frequency sequences. In Experiment 3, we compare high‐frequency nonwords to real words in the same frequency range, and find that infants show no preference. Thus, at 11 months, French‐learning infants recognize highly frequent sound sequences from their native language and fail to differentiate between words and nonwords among these sequences. These results are evidence that they have used statistical information to extract word candidates from their input and stored them in a ‘protolexicon’, containing both words and nonwords.
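The statistical algorithm this abstract alludes to is typically modeled as tracking transitional probabilities between adjacent syllables and positing word boundaries wherever the probability dips (in the spirit of Saffran and colleagues). The following is a loose, minimal sketch of that idea, not the procedure used in these experiments; the syllable stream and the 0.9 dip threshold are invented for illustration:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate forward transitional probabilities P(B | A) over a syllable stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(syllables, tps, threshold=0.9):
    """Posit a word boundary wherever the transitional probability dips below threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A stream built from three invented trisyllabic 'words' (bidaku, golatu, padoti):
# within-word TPs are 1.0, cross-boundary TPs are lower, so dips mark boundaries.
stream = ("bi da ku go la tu pa do ti go la tu bi da ku "
          "pa do ti bi da ku go la tu pa do ti").split()
tps = transitional_probabilities(stream)
print(segment(stream, tps))  # boundaries fall at TP dips, recovering the three words
```

In larger corpora this heuristic also extracts frequent cross-boundary sequences that are not words, which is exactly the property (frequent nonwords) the experiments above exploit.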

20.
Over 30 years ago, it was suggested that difficulties in the ‘auditory organization’ of word forms in the mental lexicon might cause reading difficulties. It was proposed that children used parameters such as rhyme and alliteration to organize word forms in the mental lexicon by acoustic similarity, and that such organization was impaired in developmental dyslexia. This literature was based on an ‘oddity’ measure of children's sensitivity to rhyme (e.g. wood, book, good) and alliteration (e.g. sun, sock, rag). The ‘oddity’ task revealed that children with dyslexia were significantly poorer at identifying the ‘odd word out’ than younger children without reading difficulties. Here we apply a novel modelling approach drawn from auditory neuroscience to study the possible sensory basis of the auditory organization of rhyming and non‐rhyming words by children. We utilize a novel Spectral‐Amplitude Modulation Phase Hierarchy (S‐AMPH) approach to analysing the spectro‐temporal structure of rhyming and non‐rhyming words, aiming to illuminate the potential acoustic cues used by children as a basis for phonological organization. The S‐AMPH model assumes that speech encoding depends on neuronal oscillatory entrainment to the amplitude modulation (AM) hierarchy in speech. Our results suggest that phonological similarity between rhyming words in the oddity task depends crucially on slow (delta band) modulations in the speech envelope. Contrary to linguistic assumptions, therefore, auditory organization by children may not depend on phonemic information for this task. Linguistically, it is assumed that ‘book’ does not rhyme with ‘wood’ and ‘good’ because the final phoneme differs. However, our auditory analysis suggests that the acoustic cues to this phonological dissimilarity depend primarily on the slower amplitude modulations in the speech envelope, thought to carry prosodic information. Therefore, the oddity task may help in detecting reading difficulties because phonological similarity judgements about rhyme reflect sensitivity to slow amplitude modulation patterns. Slower amplitude modulations are known to be detected less efficiently by children with dyslexia.
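The published S‐AMPH model builds a full hierarchy of spectral and modulation-rate bands; as a far simpler illustration of the core quantity involved (the slow amplitude envelope of a signal), here is a minimal pure-Python sketch. The rectify-and-smooth method, the carrier and modulation rates, and the cutoff are illustrative assumptions, not the published model:

```python
import math

def amplitude_envelope(signal, fs, cutoff_hz=10.0):
    """Crude envelope: full-wave rectify, then smooth with a moving average
    whose window length approximates a low-pass cutoff (window ~ fs / cutoff)."""
    rectified = [abs(x) for x in signal]
    win = max(1, int(fs / cutoff_hz))
    half = win // 2
    env = []
    for i in range(len(rectified)):
        lo, hi = max(0, i - half), min(len(rectified), i + half + 1)
        env.append(sum(rectified[lo:hi]) / (hi - lo))
    return env

# Synthetic 'syllable-like' signal: a 200 Hz carrier amplitude-modulated at
# 2 Hz, i.e. within the delta band (~0.5-4 Hz) highlighted by the abstract.
fs = 1000                                 # sample rate in Hz
t = [i / fs for i in range(2 * fs)]       # 2 seconds of samples
signal = [(1 + math.sin(2 * math.pi * 2 * x)) * math.sin(2 * math.pi * 200 * x)
          for x in t]
env = amplitude_envelope(signal, fs)
# env rises and falls with the slow 2 Hz modulation while the 200 Hz carrier
# detail is smoothed away, which is the 'slow envelope' the analysis targets
```

A band-limited analysis like S‐AMPH would then filter this envelope into modulation-rate bands (e.g. delta vs. faster rates) before comparing words, a step omitted here for brevity.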
