Similar Articles (20 results)
1.
Statistical learning is one of the key mechanisms available to human infants and adults when they face the problems of segmenting a speech stream (Saffran, Aslin, & Newport, 1996) and extracting long-distance regularities (Gómez, 2002; Peña, Bonatti, Nespor, & Mehler, 2002). In the present study, we explore statistical learning abilities in rats in the context of speech segmentation experiments. In a series of five experiments, we address whether rats can compute the statistics necessary to segment synthesized speech streams and to detect regularities associated with grammatical structures. Our results demonstrate that rats can segment the streams using the frequency of co-occurrence among items (not transitional probabilities, as human infants do), showing that some basic statistical learning mechanism generalizes to nonprimate species. Nevertheless, rats did not differentiate among test items when the stream was organized by more complex regularities involving nonadjacent elements and abstract grammar-like rules.
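The two statistics contrasted in this abstract can be made concrete. Below is a minimal sketch (the syllables are made up for illustration, not the study's actual stimuli): co-occurrence frequency is a raw bigram count, while the transitional probability TP(Y|X) normalizes that count by the frequency of the first element.

```python
from collections import Counter

def bigram_stats(stream):
    """Compute bigram co-occurrence frequencies and forward
    transitional probabilities TP(Y|X) = freq(XY) / freq(X)
    over a list of syllables."""
    unigrams = Counter(stream)
    bigrams = Counter(zip(stream, stream[1:]))
    tps = {pair: count / unigrams[pair[0]] for pair, count in bigrams.items()}
    return bigrams, tps

# A toy stream built from two "words", tu-pi and go-la (hypothetical syllables).
stream = "tu pi go la tu pi tu pi go la go la tu pi go la".split()
freqs, tps = bigram_stats(stream)
print(tps[("tu", "pi")])  # within-word TP = 1.0
print(tps[("pi", "go")])  # across-word TP = 0.75
```

Note that in this toy stream the within-word and across-word bigrams have identical co-occurrence counts only if the words repeat unevenly; a learner relying on raw frequency and a learner relying on TPs can therefore be dissociated by design, which is the logic the rat experiments exploit.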

2.
Language acquisition depends on the ability to detect and track the distributional properties of speech. Successful acquisition also necessitates detecting changes in those properties, which can occur when the learner encounters different speakers, topics, dialects, or languages. When encountering multiple speech streams with different underlying statistics but overlapping features, how do infants keep track of the properties of each speech stream separately? In four experiments, we tested whether 8-month-old monolingual infants (N = 144) can track the underlying statistics of two artificial speech streams that share a portion of their syllables. We first presented each stream individually. We then presented the two speech streams in sequence, without contextual cues signaling the different speech streams, and subsequently added pitch and accent cues to help learners track each stream separately. The results reveal that monolingual infants experience difficulty tracking the statistical regularities in two speech streams presented sequentially, even when provided with contextual cues intended to facilitate separation of the speech streams. We discuss the implications of our findings for understanding how infants learn and separate the input when confronted with multiple statistical structures.

3.
In earlier work we have shown that adults, young children, and infants are capable of computing transitional probabilities among adjacent syllables in rapidly presented streams of speech, and of using these statistics to group adjacent syllables into word-like units. In the present experiments we ask whether adult learners are also capable of such computations when the only available patterns occur in non-adjacent elements. In the first experiment, we present streams of speech in which precisely the same kinds of syllable regularities occur as in our previous studies, except that the patterned relations among syllables occur between non-adjacent syllables (with an intervening syllable that is unrelated). Under these circumstances we do not obtain our previous results: learners are quite poor at acquiring regular relations among non-adjacent syllables, even when the patterns are objectively quite simple. In subsequent experiments we show that learners are, in contrast, quite capable of acquiring patterned relations among non-adjacent segments: both non-adjacent consonants (with an intervening vocalic segment that is unrelated) and non-adjacent vowels (with an intervening consonantal segment that is unrelated). Finally, we discuss why human learners display these strong differences in learning differing types of non-adjacent regularities, and we conclude by suggesting that these contrasts in learnability may account for why human languages display non-adjacent regularities of one type much more widely than non-adjacent regularities of the other type.
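The non-adjacent statistic at issue is the same conditional probability, computed across an intervening element. A sketch with hypothetical A-X-B items in the style of such designs (not the actual stimuli): the first syllable perfectly predicts the third, even though the adjacent transitions through the variable middle are uninformative.

```python
from collections import Counter

def nonadjacent_tp(stream, gap=1):
    # TP(Z | X) across `gap` intervening items:
    # freq(X _ Z) / freq(X), where _ stands for any gap-length filler.
    unigrams = Counter(stream)
    pairs = Counter(zip(stream, stream[gap + 1:]))
    return {p: c / unigrams[p[0]] for p, c in pairs.items()}

# Hypothetical A_X_B words: the first syllable predicts the third,
# while the middle syllable varies freely.
words = [["pel", "wadim", "jic"], ["pel", "kicey", "jic"],
         ["vot", "wadim", "rud"], ["vot", "kicey", "rud"]]
stream = [syllable for word in words * 3 for syllable in word]
tps = nonadjacent_tp(stream, gap=1)
print(tps[("pel", "jic")])  # 1.0: perfectly predictive across the gap
```

The point of the abstract is that although this statistic is objectively available in the input, adult learners fail to exploit it over syllables while succeeding over non-adjacent consonants or vowels.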

4.
In earlier work we have shown that adults, infants, and cotton-top tamarin monkeys are capable of computing the probability with which syllables occur in particular orders in rapidly presented streams of human speech, and of using these probabilities to group adjacent syllables into word-like units. We have also investigated adults' learning of regularities among elements that are not adjacent, and have found strong selectivities in their ability to learn various kinds of non-adjacent regularities. In the present paper we investigate the learning of these same non-adjacent regularities in tamarin monkeys, using the same materials and familiarization methods. Three types of languages were constructed. In one, words were formed by statistical regularities between non-adjacent syllables. Words contained predictable relations between syllables 1 and 3; syllable 2 varied. In a second type of language, words were formed by statistical regularities between non-adjacent segments. Words contained predictable relations between consonants; the vowels varied. In a third type of language, also formed by regularities between non-adjacent segments, words contained predictable relations between vowels; the consonants varied. Tamarin monkeys were exposed to these languages in the same fashion as adults (21 min of exposure to a continuous speech stream) and were then tested in a playback paradigm measuring spontaneous looking (no reinforcement). Adult subjects learned the second and third types of language easily, but failed to learn the first. However, tamarin monkeys showed a different pattern, learning the first and third type of languages but not the second. These differences held up over multiple replications, using different sounds instantiating each of the patterns. 
These results suggest differences among learners in the elementary units perceived in speech (syllables, consonants, and vowels) and/or the distance over which such units can be related, and therefore differences among learners in the types of patterned regularities they can acquire. Such studies with tamarins open interesting questions about the perceptual and computational capacities of human learners that may be essential for language acquisition, and how they may differ from those of non-human primates.

5.
Non-verbal numerical behavior in human infants, human adults, and non-human primates appears to be rooted in two distinct mechanisms: a precise system for tracking and comparing small numbers of items simultaneously (up to 3 or 4 items) and an approximate system for estimating the numerical magnitude of a group of objects. The most striking evidence that these two mechanisms are distinct comes from the apparent inability of young human infants and non-human primates to compare quantities across the small (<3 or 4)/large (>4) number boundary. We ask whether this distinction is present in a species more distantly related to humans, the guppy (Poecilia reticulata). We found that, like human infants and non-human primates, fish succeed at comparisons between large numbers only (5 vs. 10) and at comparisons between small numbers only (3 vs. 4), but systematically fail at comparisons that closely span the small/large boundary (3 vs. 5). Furthermore, increasing the distance between the small and the large number resulted in successful discriminations (3 vs. 6, 3 vs. 7, and 3 vs. 9). This pattern of successes and failures is similar to that observed in human infants and non-human primates, suggesting that the two systems are present and functionally distinct across a wide variety of animal species.

6.
In this work we ask whether at birth, the human brain responds uniquely to speech, or if similar activation also occurs to a non-speech surrogate 'language'. We compare neural activation in newborn infants to the language heard in utero (English), to an unfamiliar language (Spanish), and to a whistled surrogate language (Silbo Gomero) that, while used by humans to communicate, is not speech. Anterior temporal areas of the neonate cortex are activated in response to both familiar and unfamiliar spoken language, but these classic language areas are not activated to the whistled surrogate form. These results suggest that at the time human infants emerge from the womb, the neural preparation for language is specialized to speech.

7.
In previous research, Saffran and colleagues [Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274, 1926-1928; Saffran, J. R., Newport, E. L., & Aslin, R. N. (1996). Word segmentation: The role of distributional cues. Journal of Memory and Language, 35, 606-621] have shown that adults and infants can use the statistical properties of syllable sequences to extract words from continuous speech. They also showed that a similar learning mechanism operates with musical stimuli [Saffran, J. R., Johnson, E. K., Aslin, R. N., & Newport, E. L. (1999). Statistical learning of tone sequences by human infants and adults. Cognition, 70, 27-52]. In this work, we combined linguistic and musical information and compared language learning based on speech sequences to language learning based on sung sequences. We hypothesized that, compared to speech sequences, a consistent mapping of linguistic and musical information would enhance learning. The results confirmed the hypothesis, showing a strong learning facilitation of song compared to speech. Most importantly, the present results show that learning a new language, especially in the first learning phase in which one needs to segment new words, may benefit substantially from the motivational and structuring properties of music in song.

8.
Infants and adults are well able to match auditory and visual speech, but the cues on which they rely (viz. temporal, phonetic and energetic correspondence in the auditory and visual speech streams) may differ. Here we assessed the relative contribution of the different cues using sine-wave speech (SWS). Adults (N = 52) and infants (N = 34, aged between 5 and 15 months) matched two trisyllabic speech sounds ('kalisu' and 'mufapi'), either natural or SWS, with visual speech information. On each trial, adults saw two articulating faces and matched a sound to one of these, while infants were presented the same stimuli in a preferential looking paradigm. Adults' performance was almost flawless with natural speech, but was significantly less accurate with SWS. In contrast, infants matched the sound to the articulating face equally well for natural speech and SWS. These results suggest that infants rely to a lesser extent on phonetic cues than adults do to match audio to visual speech. This is in line with the notion that the ability to extract phonetic information from the visual signal increases during development, and suggests that phonetic knowledge might not be the basis for early audiovisual correspondence detection in speech.

9.
Numerous recent studies suggest that human learners, including both infants and adults, readily track sequential statistics computed between adjacent elements. One such statistic, transitional probability, is typically calculated as the likelihood that one element predicts another. However, little is known about whether listeners are sensitive to the directionality of this computation. To address this issue, we tested 8-month-old infants in a word segmentation task, using fluent speech drawn from an unfamiliar natural language. Critically, test items were distinguished solely by their backward transitional probabilities. The results provide the first evidence that infants track backward statistics in fluent speech.
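The directionality contrast is easy to state: forward and backward transitional probabilities are built from the same bigram counts but normalized by different marginals. A sketch with made-up syllables (not the study's stimuli) constructs a toy stream in which the two statistics diverge for the same bigram.

```python
from collections import Counter

def transitional_probs(stream):
    # Forward TP(y|x) = freq(xy) / freq(x); backward TP(x|y) = freq(xy) / freq(y).
    uni = Counter(stream)
    bi = Counter(zip(stream, stream[1:]))
    forward = {(x, y): c / uni[x] for (x, y), c in bi.items()}
    backward = {(x, y): c / uni[y] for (x, y), c in bi.items()}
    return forward, backward

# Toy stream (hypothetical syllables): "ba" is followed by "ku" only half
# the time, but "ku" is always preceded by "ba" -- the two statistics diverge.
stream = "ba ku ba mi ba ku ba mi".split()
fwd, bwd = transitional_probs(stream)
print(fwd[("ba", "ku")])  # 0.5
print(bwd[("ba", "ku")])  # 1.0
```

A learner tracking only forward TPs treats "ba ku" as a weak unit here, whereas a learner tracking backward TPs treats it as fully cohesive; test items built on such divergences are what allow the directionality question to be answered.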

10.
Human numerical ability can develop to a high level of abstraction, but a large body of research shows that human infants and non-human primates also possess basic capacities for numerical representation and numerical reasoning. This article systematically compares the numerical abilities of human infants and non-human primates in terms of both the content and the format of their representations, and contrasts the physiological bases of numerical representation in human adults and non-human primates. The similarities between humans and non-human primates in these three respects suggest that they may share the same system of numerical representation. Further investigation of this shared core system of numerical representation will deepen our understanding of the origin and nature of human numerical ability.

11.
Previous research suggests that language learners can detect and use the statistical properties of syllable sequences to discover words in continuous speech (e.g. Aslin, R.N., Saffran, J.R., Newport, E.L., 1998. Computation of conditional probability statistics by 8-month-old infants. Psychological Science 9, 321-324; Saffran, J.R., Aslin, R.N., Newport, E.L., 1996. Statistical learning by 8-month-old infants. Science 274, 1926-1928; Saffran, J.R., Newport, E.L., Aslin, R.N., 1996. Word segmentation: the role of distributional cues. Journal of Memory and Language 35, 606-621; Saffran, J.R., Newport, E.L., Aslin, R.N., Tunick, R.A., Barrueco, S., 1997. Incidental language learning: listening (and learning) out of the corner of your ear. Psychological Science 8, 101-105). In the present research, we asked whether this statistical learning ability is uniquely tied to linguistic materials. Subjects were exposed to continuous non-linguistic auditory sequences whose elements were organized into 'tone words'. As in our previous studies, statistical information was the only word boundary cue available to learners. Both adults and 8-month-old infants succeeded at segmenting the tone stream, with performance indistinguishable from that obtained with syllable streams. These results suggest that a learning mechanism previously shown to be involved in word segmentation can also be used to segment sequences of non-linguistic stimuli.

12.
Giroux, I., & Rey, A. (2009). Cognitive Science, 33(2), 260-272.
Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (Simple Recurrent Networks: Elman, 1990; and PARSER: Perruchet & Vinter, 1998) in an experiment in which we compare the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with PARSER's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes.
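The clustering idea behind PARSER can be caricatured in a few lines: perceive a small number of primitives at a time, reinforce the chunk just perceived, and let everything else decay, so that frequent sequences accrete into lexical entries. The sketch below is a deliberately reduced, didactic version with arbitrary parameters (gain, decay, percept sizes 1-3) and made-up syllables; it is not Perruchet and Vinter's published model, which additionally reuses learned chunks as perceptual units.

```python
import random
from collections import defaultdict

def parser_sketch(stream, steps=2000, gain=1.0, decay=0.05, seed=0):
    """Reduced PARSER-style clustering: read 1-3 primitives per step,
    reinforce the perceived chunk, decay all other chunks."""
    rng = random.Random(seed)
    lexicon = defaultdict(float)
    pos = 0
    for _ in range(steps):
        n = rng.randint(1, 3)          # size of the next percept
        if pos + n > len(stream):      # wrap around at the end of the stream
            pos = 0
        chunk = tuple(stream[pos:pos + n])
        pos += n
        lexicon[chunk] += gain         # reinforce what was just perceived
        for other in lexicon:          # everything else decays toward zero
            if other != chunk:
                lexicon[other] = max(0.0, lexicon[other] - decay)
    return lexicon

# Two hypothetical trisyllabic "words" repeated in a continuous stream.
stream = ("tu pi ro go la bu " * 10).split()
lex = parser_sketch(stream)
top = sorted(lex.items(), key=lambda kv: -kv[1])[:5]
```

With short exposure (small `steps`) the lexicon is dominated by sublexical fragments; with longer exposure the word-length chunks tend to accumulate the most weight, which parallels the 2-min versus 10-min contrast the abstract reports.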

13.
Over the past couple of decades, research has established that infants are sensitive to the predominant stress pattern of their native language. However, the degree to which the stress pattern shapes infants' language development has yet to be fully determined. Whether stress is merely a cue to help organize the patterns of speech, or whether it is an important part of the representation of speech sound sequences, remains to be explored. Building on research in the areas of infant speech perception and segmentation, we asked how several months of exposure to the target language shapes infants' speech processing biases with respect to lexical stress. We hypothesized that infants represent stressed and unstressed syllables differently, and employed analyses of child-directed speech to show how this change to the representational landscape results in better distribution-based word segmentation as well as an advantage for stress-initial syllable sequences. A series of experiments then tested 9- and 7-month-old infants on their ability to use lexical stress, without any other cues present, to parse sequences from an artificial language. We found that infants adopted a stress-initial syllable strategy and that they appear to encode stress information as part of their proto-lexical representations. Together, the results of these studies suggest that stress information in the ambient language not only shapes how statistics are calculated over the speech input, but is also encoded in the representations of parsed speech sequences.

14.
To explore 10-month-old infants' ability to engage in intentional imitation, infants were shown a human agent, a non-human agent (a stuffed animal), and a surrogate object (mechanical pincers) modeling actions on objects. Infants' tendency to perform the target act was compared across several situations: (a) after the test items were manipulated but the target action was not shown, (b) after the target act was demonstrated successfully, and (c) after the target act was demonstrated unsuccessfully. Although infants imitated the successful actions of both human and non-human agents, they completed the unsuccessful actions of humans only. Infants did not respond differentially to the surrogate object. These findings suggest that although infants may mimic the actions of human and non-human agents, they engage in intentional imitation only with people.

15.
A wide variety of organisms produce actions and signals in particular temporal sequences, including the motor actions recruited during tool-mediated foraging, the arrangement of notes in the songs of birds, whales and gibbons, and the patterning of words in human speech. To accurately reproduce such events, the elements that comprise such sequences must be memorized. Both memory and artificial language learning studies have revealed at least two mechanisms for memorizing sequences, one tracking co-occurrence statistics among items in sequences (i.e., transitional probabilities) and the other tracking the positions of items in sequences, in particular those of items at sequence-edges. The latter mechanism seems to dominate the encoding of sequences after limited exposure, and to be recruited by a wide array of grammatical phenomena. To assess whether humans differ from other species in their reliance on one mechanism over the other after limited exposure, we presented chimpanzees (Pan troglodytes) and human adults with brief exposure to six-item auditory sequences. Each sequence consisted of three distinct sound types (X, A, B), arranged according to two simple temporal rules: the A item always preceded the B item, and the sequence-edges were always occupied by the X item. In line with previous results with human adults, both species primarily encoded positional information from the sequences; that is, they kept track of the items that occurred at the sequence-edges. In contrast, the sensitivity to co-occurrence statistics was much weaker. Our results suggest that a mechanism to spontaneously encode positional information from sequences is present in both chimpanzees and humans and may represent the default in the absence of training and with brief exposure. As many grammatical regularities exhibit properties of this mechanism, it may be recruited by language and constrain the form that certain grammatical regularities take.

16.
Trehub, S. E., & Hannon, E. E. (2006). Cognition, 100(1), 73-99.
We review the literature on infants' perception of pitch and temporal patterns, relating it to comparable research with human adult and non-human listeners. Although there are parallels in relative pitch processing across age and species, there are notable differences. Infants accomplish such tasks with ease, but non-human listeners require extensive training to achieve very modest levels of performance. In general, human listeners process auditory sequences in a holistic manner, and non-human listeners focus on absolute aspects of individual tones. Temporal grouping processes and categorization on the basis of rhythm are evident in non-human listeners and in human infants and adults. Although synchronization to sound patterns is thought to be uniquely human, tapping to music, synchronous firefly flashing, and other cyclic behaviors can be described by similar mathematical principles. We conclude that infants' music perception skills are a product of general perceptual mechanisms that are neither music- nor species-specific. Along with general-purpose mechanisms for the perceptual foundations of music, we suggest unique motivational mechanisms that can account for the perpetuation of musical behavior in all human societies.

17.
Sequences of notes contain several different types of pitch cues, including both absolute and relative pitch information. What factors determine which of these cues are used when learning about tone sequences? Previous research suggests that infants tend to preferentially process absolute pitch patterns in continuous tone sequences, while other types of input elicit relative pitch use by infants. In order to ask whether the structure of the input influences infants' choice of pitch cues, we presented learners with continuous tone streams in which absolute pitch cues were rendered uninformative by transposing the tone sequences. Under these circumstances, both infants and adults successfully tracked relative pitches in a statistical learning task. Implications for the role played by the structure of the input in the learning process are considered.

18.
Sequential learning in non-human primates
Sequential learning plays a role in a variety of common tasks, such as human language processing, animal communication, and the learning of action sequences. In this article, we investigate sequential learning in non-human primates from a comparative perspective, focusing on three areas: the learning of arbitrary, fixed sequences; statistical learning; and the learning of hierarchical structure. Although primates exhibit many similarities to humans in their performance on sequence learning tasks, there are also important differences. Crucially, non-human primates appear to be limited in their ability to learn and represent the hierarchical structure of sequences. We consider the evolutionary implications of these differences and suggest that limitations in sequential learning may help explain why non-human primates lack human-like language.

19.
In adults, native language phonology has strong perceptual effects. Previous work has shown that Japanese speakers, unlike French speakers, break up illegal sequences of consonants with illusory vowels: they report hearing abna as abuna. To study the development of phonological grammar, we compared Japanese and French infants in a discrimination task. In Experiment 1, we observed that 14-month-old Japanese infants, in contrast to French infants, failed to discriminate phonetically varied sets of abna-type and abuna-type stimuli. In Experiment 2, 8-month-old French and Japanese infants did not differ significantly from each other. In Experiment 3, we found that, like adults, Japanese infants can discriminate abna from abuna when phonetic variability is reduced (single item). These results show that the phonologically induced /u/ illusion is already experienced by Japanese infants at the age of 14 months. Hence, before having acquired many words of their language, they have grasped enough of their native phonological grammar to constrain their perception of speech sound sequences.

20.
The nature and origin of the human capacity for acquiring language is not yet fully understood. Here we uncover early roots of this capacity by demonstrating that humans are born with a preference for listening to speech. Human neonates adjusted their high amplitude sucking to preferentially listen to speech, compared with complex non-speech analogues that controlled for critical spectral and temporal parameters of speech. These results support the hypothesis that human infants begin language acquisition with a bias for listening to speech. The implications of these results for language and communication development are discussed. For a commentary on this article see Rosen and Iverson (2007).


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号