Similar Articles (20 results)
1.
In natural settings, infants learn spoken language with the aid of a caregiver who explicitly provides social signals. Although previous studies have demonstrated that young infants are sensitive to these signals that facilitate language development, the impact of real-life interactions on early word segmentation and word–object mapping remains elusive. We tested whether infants aged 5–6 months and 9–10 months could segment a word from continuous speech and acquire a word–object relation in an ecologically valid setting. In Experiment 1, infants were exposed to a live tutor, while in Experiment 2, another group of infants were exposed to a televised tutor. Results indicate that both younger and older infants were capable of segmenting a word and learning a word–object association only when the stimuli were derived from a live tutor in a natural manner, suggesting that real-life interaction enhances the learning of spoken words in preverbal infants.

2.
Singh, L. (2008). Cognition, 106(2), 833-870.
Although infants begin to encode and track novel words in fluent speech by 7.5 months, their ability to recognize words is somewhat limited at this stage. In particular, when the surface form of a word is altered, by changing the gender or affective prosody of the speaker, infants begin to falter at spoken word recognition. Given that natural speech is replete with variability, only some of which determines the meaning of a word, it remains unclear how infants might ever overcome the effects of surface variability without appealing to meaning. In the current set of experiments, consequences of high and low variability are examined in preverbal infants. The source of variability, vocal affect, is a common property of infant-directed speech with which young learners have to contend. Across a series of four experiments, infants' abilities to recognize repeated encounters of words, as well as to reject similar-sounding words, are investigated in the context of high and low affective variation. Results point to positive consequences of affective variation, both in creating generalizable memory representations for words and in establishing phonologically precise memories for words. Conversely, low variability appears to degrade word recognition on both fronts, compromising infants' abilities to generalize across different affective forms of a word and to detect similar-sounding items. Findings are discussed in the context of principles of categorization that may potentiate the early growth of a lexicon.

3.
In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

4.
The present experiments investigated how the process of statistically segmenting words from fluent speech is linked to the process of mapping meanings to words. Seventeen-month-old infants first participated in a statistical word segmentation task, which was immediately followed by an object-label-learning task. Infants presented with labels that were words in the fluent speech used in the segmentation task were able to learn the object labels. However, infants presented with labels consisting of novel syllable sequences (nonwords; Experiment 1) or familiar sequences with low internal probabilities (part-words; Experiment 2) did not learn the labels. Thus, prior segmentation opportunities, but not mere frequency of exposure, facilitated infants' learning of object labels. This work provides the first demonstration that exposure to word forms in a statistical word segmentation task facilitates subsequent word learning.

5.
One of the central themes in the study of language acquisition is the gap between the linguistic knowledge that learners demonstrate, and the apparent inadequacy of linguistic input to support induction of this knowledge. One of the first linguistic abilities in the course of development to exemplify this problem is in speech perception: specifically, learning the sound system of one’s native language. Native-language sound systems are defined by meaningful contrasts among words in a language, yet infants learn these sound patterns before any significant numbers of words are acquired. Previous approaches to this learning problem have suggested that infants can learn phonetic categories from statistical analysis of auditory input, without regard to word referents. Experimental evidence presented here suggests instead that young infants can use visual cues present in word-labeling situations to categorize phonetic information. In Experiment 1, 9-month-old English-learning infants failed to discriminate two non-native phonetic categories, establishing baseline performance in a perceptual discrimination task. In Experiment 2, these infants succeeded at discrimination after watching contrasting visual cues (i.e., videos of two novel objects) paired consistently with the two non-native phonetic categories. In Experiment 3, these infants failed at discrimination after watching the same visual cues, but paired inconsistently with the two phonetic categories. At an age before which memory of word labels is demonstrated in the laboratory, 9-month-old infants use contrastive pairings between objects and sounds to influence their phonetic sensitivity. Phonetic learning may have a more functional basis than previous statistical learning mechanisms assume: infants may use cross-modal associations inherent in social contexts to learn native-language phonetic categories.

6.
Baby Wordsmith
How do infants acquire their first words? Word reference, or how words map onto objects and events, lies at the core of this question. The emergentist coalition model (ECM) represents a new wave of hybrid developmental theories suggesting that the process of vocabulary development changes from one based in perceptual salience and association to one embedded in social understanding. Beginning at 10 months, babies learn words associatively, ignoring the speaker's social cues and using perceptual salience to guide them. By 12 months, babies attend to social cues, but fail to recruit them for word learning. By 18 and 24 months, babies recruit speakers' social cues to learn the names of particular objects speakers label, regardless of those objects' perceptual attraction. Controversies about how to account for the changing character of word acquisition, along with the roots of children's increasing reliance on speakers' social intent, are discussed.

7.
How do infants begin to understand spoken words? Recent research suggests that word comprehension develops from the early detection of intersensory relations between conventionally paired auditory speech patterns (words) and visible objects or actions. More importantly, in keeping with dynamic systems principles, the findings suggest that word comprehension develops from a dynamic and complementary relationship between the organism (the infant) and the environment (language addressed to the infant). In addition, parallel findings from speech and non‐speech studies of intersensory perception provide evidence for domain general processes in the development of word comprehension. These research findings contrast with the view that a lexical acquisition device with specific lexical principles and innate constraints is required for early word comprehension. Furthermore, they suggest that learning of word–object relations is not merely an associative process. The data support an alternative view of the developmental process that emphasizes the dynamic and reciprocal interactions between general intersensory perception, selective attention and learning in infants, and the specific characteristics of maternal communication.

8.
A crucial step for acquiring a native language vocabulary is the ability to segment words from fluent speech. English-learning infants first display some ability to segment words at about 7.5 months of age. However, their initial attempts at segmenting words only approximate those of fluent speakers of the language. In particular, 7.5-month-old infants are able to segment words that conform to the predominant stress pattern of English words. The ability to segment words with other stress patterns appears to require the use of other sources of information about word boundaries. By 10.5 months, English learners display sensitivity to additional cues to word boundaries such as statistical regularities, allophonic cues and phonotactic patterns. Infants’ word segmentation abilities undergo further development during their second year when they begin to link sound patterns with particular meanings. By 24 months, the speed and accuracy with which infants recognize words in fluent speech is similar to that of native adult listeners. This review describes how infants use multiple sources of information to locate word boundaries in fluent speech, thereby laying the foundations for language understanding.

9.
This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [WItlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]-[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).

10.
Phonological development is sometimes seen as a process of learning sounds, or forming phonological categories, and then combining sounds to build words, with the evidence taken largely from studies demonstrating ‘perceptual narrowing’ in infant speech perception over the first year of life. In contrast, studies of early word production have long provided evidence that holistic word learning may precede the formation of phonological categories. In that account, children begin by matching their existing vocal patterns to adult words, with knowledge of the phonological system emerging from the network of related word forms. Here I review evidence from production and then consider how the implicit and explicit learning mechanisms assumed by the complementary memory systems model might be understood as reconciling the two approaches.

11.
Rapid Gains in Speed of Verbal Processing by Infants in the 2nd Year
Infants improve substantially in language ability during their 2nd year. Research on the early development of speech production shows that vocabulary begins to expand rapidly around the age of 18 months. During this period, infants also make impressive gains in understanding spoken language. We examined the time course of word recognition in infants from ages 15 to 24 months, tracking their eye movements as they looked at pictures in response to familiar spoken words. The speed and efficiency of verbal processing increased dramatically over the 2nd year. Although 15-month-old infants did not orient to the correct picture until after the target word was spoken, 24-month-olds were significantly faster, shifting their gaze to the correct picture before the end of the spoken word. By 2 years of age, children are progressing toward the highly efficient performance of adults, making decisions about words based on incomplete acoustic information.

12.
Can infants, in the very first stages of word learning, use their perceptual sensitivity to the phonetics of speech while learning words? Research to date suggests that infants of 14 months cannot learn two similar‐sounding words unless there is substantial contextual support. The current experiment advances our understanding of this failure by testing whether the source of infants’ difficulty lies in the learning or testing phase. Infants were taught to associate two similar‐sounding words with two different objects, and tested using a visual choice method rather than the standard Switch task. The results reveal that 14‐month‐olds are capable of learning and mapping two similar‐sounding labels; they can apply phonetic detail in new words. The findings are discussed in relation to infants’ concurrent failure, and the developmental transition to success, in the Switch task.

13.
Lew-Williams, C., & Saffran, J. R. (2012). Cognition, 122(2), 241-246.
Infants have been described as ‘statistical learners’ capable of extracting structure (such as words) from patterned input (such as language). Here, we investigated whether prior knowledge influences how infants track transitional probabilities in word segmentation tasks. Are infants biased by prior experience when engaging in sequential statistical learning? In a laboratory simulation of learning across time, we exposed 9- and 10-month-old infants to a list of either disyllabic or trisyllabic nonsense words, followed by a pause-free speech stream composed of a different set of disyllabic or trisyllabic nonsense words. Listening times revealed successful segmentation of words from fluent speech only when words were uniformly disyllabic or trisyllabic throughout both phases of the experiment. Hearing trisyllabic words during the pre-exposure phase derailed infants’ abilities to segment speech into disyllabic words, and vice versa. We conclude that prior knowledge about word length equips infants with perceptual expectations that facilitate efficient processing of subsequent language input.
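Several of the abstracts above (items 4, 13, and 18) turn on transitional-probability word segmentation in the Saffran, Aslin, and Newport tradition: the probability that syllable B follows syllable A is high inside a word and low across word boundaries, so boundaries can be placed where the probability dips. The sketch below is a minimal, hypothetical illustration of that idea only, not any study's actual stimuli, stream statistics, or analysis; the toy syllable stream and the fixed 0.7 cutoff are assumptions chosen so the dips are easy to see.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for every adjacent syllable pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    syll_counts = Counter(syllables[:-1])  # denominators: all non-final syllables
    return {(a, b): n / syll_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, tps, threshold=0.7):
    """Insert a word boundary wherever the transitional probability between
    adjacent syllables falls below the cutoff (a crude stand-in for detecting
    a local dip)."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A pause-free toy stream built from three trisyllabic nonsense words
# (bidaku, padoti, golabu); within-word TPs are 1.0, between-word TPs are lower.
stream = ("bi da ku pa do ti go la bu pa do ti bi da ku go la bu "
          "go la bu bi da ku pa do ti").split()
tps = transitional_probabilities(stream)
print(segment(stream, tps))  # boundaries fall where TP dips
```

On this stream the recovered units are exactly the three nonsense words in their order of occurrence; with real input the cutoff would have to be relative (a dip below neighboring TPs) rather than absolute.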

14.
Early word learning in infants relies on statistical, prosodic, and social cues that support speech segmentation and the attachment of meaning to words. It is debated whether such early word knowledge represents mere associations between sound patterns and visual object features, or reflects referential understanding of words. By measuring an event-related brain potential component known as the N400, we demonstrated that 9-month-old infants can detect the mismatch between an object appearing from behind an occluder and a preceding label with which their mother introduces it. Differential N400 amplitudes have been shown to reflect semantic priming in adults, and its absence in infants has been interpreted as a sign of associative word learning. By setting up a live communicative situation for referring to objects, we demonstrated that a similar priming effect also occurs in young infants. This finding may indicate that word meaning is referential from the outset of word learning and that referential expectation drives, rather than results from, vocabulary acquisition in humans.

15.
We examined whether 12-month-old infants privilege words over other linguistic stimuli in an associative learning task. Sixty-four infants were presented with sets of either word–object, communicative sound–object, or consonantal sound–object pairings until they habituated. They were then tested on a ‘switch’ in the sound to determine whether they were able to associate the word and/or sound with the novel objects. Infants associated words, but not communicative sounds or consonantal sounds, with novel objects. The results demonstrate that infants exhibit a preference for words over other linguistic stimuli in an associative word learning task. This suggests that by 12 months of age, infants have developed knowledge about the nature of an appropriate sound form for an object label and will privilege this form as an object label.

16.
Knowing a word affects the fundamental perception of the sounds within it
Understanding spoken language is an exceptional computational achievement of the human cognitive apparatus. Theories of how humans recognize spoken words fall into two categories: Some theories assume a fully bottom-up flow of information, in which successively more abstract representations are computed. Other theories, in contrast, assert that activation of a more abstract representation (e.g., a word) can affect the activation of smaller units (e.g., phonemes or syllables). The two experimental conditions reported here demonstrate the top-down influence of word representations on the activation of smaller perceptual units. The results show that perceptual processes are not strictly bottom-up: Computations at logically lower levels of processing are affected by computations at logically more abstract levels. These results constrain and inform theories of the architecture of human perceptual processing of speech.

17.
At about 7 months of age, infants listen longer to sentences containing familiar words – but not deviant pronunciations of familiar words (Jusczyk & Aslin, 1995). This finding suggests that infants are able to segment familiar words from fluent speech and that they store words in sufficient phonological detail to recognize deviations from a familiar word. This finding does not examine whether it is, nevertheless, easier for infants to segment words from sentences when these words sound similar to familiar words. Across three experiments, the present study investigates whether familiarity with a word helps infants segment similar‐sounding words from fluent speech and if they are able to discriminate these similar‐sounding words from other words later on. Results suggest that word‐form familiarity may be a powerful tool bootstrapping further lexical acquisition.

18.
Previous research with artificial language learning paradigms has shown that infants are sensitive to statistical cues to word boundaries (Saffran, Aslin & Newport, 1996) and that they can use these cues to extract word‐like units (Saffran, 2001). However, it is unknown whether infants use statistical information to construct a receptive lexicon when acquiring their native language. In order to investigate this issue, we rely on the fact that besides real words a statistical algorithm extracts sound sequences that are highly frequent in infant‐directed speech but constitute nonwords. In three experiments, we use a preferential listening paradigm to test French‐learning 11‐month‐old infants' recognition of highly frequent disyllabic sequences from their native language. In Experiments 1 and 2, we use nonword stimuli and find that infants listen longer to high‐frequency than to low‐frequency sequences. In Experiment 3, we compare high‐frequency nonwords to real words in the same frequency range, and find that infants show no preference. Thus, at 11 months, French‐learning infants recognize highly frequent sound sequences from their native language and fail to differentiate between words and nonwords among these sequences. These results are evidence that they have used statistical information to extract word candidates from their input and stored them in a ‘protolexicon’, containing both words and nonwords.

19.
Three experiments were conducted using a repetition priming paradigm: Auditory word or environmental sound stimuli were identified by subjects in a pre-test phase, which was followed by a perceptual identification task using either sounds or words in the test phase. Identification of an environmental sound was facilitated by prior presentation of the same sound, but not by prior presentation of a spoken label (Experiments 1 and 2). Similarly, spoken word identification was facilitated by previous presentation of the same word, but not when the word had been used to label an environmental sound (Experiment 1). A degree of abstraction was demonstrated in Experiment 3, which revealed a facilitation effect between similar sounds produced by the same type of source. These results are discussed in terms of the Transfer Appropriate Processing, activation, and systems approaches.

20.
The effects of perceptual adjustments to voice information on the perception of isolated spoken words were examined. In two experiments, spoken target words were preceded or followed within a trial by a neutral word spoken in the same voice or in a different voice as the target. Overall, words were reproduced more accurately on trials on which the voice of the neutral word matched the voice of the spoken target word, suggesting that perceptual adjustments to voice interfere with word processing. This result, however, was mediated by selective attention to voice. The results provide further evidence of a close processing relationship between perceptual adjustments to voice and spoken word recognition.
