Similar Literature
20 similar records found
1.
Ferran Pons & Juan M. Toro. Cognition, 2010, 116(3): 361-367
Recent research has suggested that consonants and vowels serve different roles during language processing. While statistical computations are preferentially made over consonants but not over vowels, simple structural generalizations are easily made over vowels but not over consonants. Nevertheless, the origins of this asymmetry are unknown. Here we tested whether lifelong experience with language is necessary for vowels to become the preferred target for structural generalizations. We presented 11-month-old infants with a series of CVCVCV nonsense words in which all vowels were arranged according to an AAB rule (the first and second vowels were the same, while the third vowel was different). During the test, we presented infants with new words whose vowels either followed the aforementioned rule or did not. We found that infants readily generalized this rule when it was implemented over the vowels. However, when the same rule was implemented over the consonants, infants could not generalize it to new instances. These results parallel those found with adult participants and demonstrate that several years of experience learning a language are not necessary for functional asymmetries between consonants and vowels to appear.

2.
Consonants and vowels may play different roles during language processing, consonants being preferentially involved in lexical processing, and vowels tending to mark syntactic constituency through prosodic cues. In support of this view, artificial language learning studies have demonstrated that consonants (C) support statistical computations, whereas vowels (V) allow certain structural generalizations. Nevertheless, these asymmetries could be mere by-products of lower-level acoustic differences between Cs and Vs, in particular the energy they carry, and thus their relative salience. Here we address this issue and show that vowels remain the preferred targets for generalizations, even when consonants are made highly salient or vowels barely audible. Participants listened to speech streams of nonsense CVCVCV words in which consonants followed a simple ABA structure. Participants failed to generalize this structure over sonorant consonants (Experiment 1), even when vowel duration was reduced to one third of that of consonants (Experiment 2). When vowels were eliminated from the stream, participants showed only marginal evidence of generalization (Experiment 4). In contrast, participants readily generalized the structure over barely audible vowels (Experiment 3). These results show that the different roles of consonants and vowels cannot be readily reduced to acoustic and perceptual differences between these phonetic categories.

3.
By 7 months of age, infants are able to learn rules based on the abstract relationships between stimuli (Marcus et al., 1999), but they are better able to do so when exposed to speech than to some other classes of stimuli. In the current experiments we ask whether multimodal stimulus information will aid younger infants in identifying abstract rules. We habituated 5-month-olds to simple abstract patterns (ABA or ABB) instantiated in coordinated looming visual shapes and speech sounds (Experiment 1), shapes alone (Experiment 2), and speech sounds accompanied by uninformative but coordinated shapes (Experiment 3). Infants showed evidence of rule learning only in the presence of the informative multimodal cues. We hypothesize that the additional evidence present in these multimodal displays was responsible for the success of younger infants in learning rules, congruent with both a Bayesian account and with the Intersensory Redundancy Hypothesis.

4.
Language acquisition involves both acquiring a set of words (i.e. the lexicon) and learning the rules that combine them to form sentences (i.e. syntax). Here, we show that consonants are mainly involved in word processing, whereas vowels are favored for extracting and generalizing structural relations. We demonstrate that such a division of labor between consonants and vowels plays a role in language acquisition. In two very similar experimental paradigms, we show that 12-month-old infants rely more on the consonantal tier when identifying words (Experiment 1), but are better at extracting and generalizing repetition-based structures over the vocalic tier (Experiment 2). These results indicate that infants are able to exploit the functional differences between consonants and vowels at an age when they start acquiring the lexicon, and suggest that basic speech categories are assigned to different learning mechanisms that sustain early language acquisition.

5.
The present study explored whether the phonological bias favoring consonants found in French-learning infants and children when learning new words (Havy & Nazzi, 2009; Nazzi, 2005) is language-general, as proposed by Nespor, Peña and Mehler (2003), or varies across languages, perhaps as a function of the phonological or lexical properties of the language in acquisition. To do so, we used the interactive word-learning task set up by Havy and Nazzi (2009), teaching Danish-learning 20-month-olds pairs of phonetically similar words that contrasted either on one of their consonants or one of their vowels, by either one or two phonological features. Danish was chosen because it has more vowels than consonants, and is characterized by extensive consonant lenition. Both phenomena could disfavor a consonant bias. Evidence of word learning was found only for vocalic information, irrespective of whether one or two phonological features were changed. The implication of these findings is that the phonological biases found in early lexical processing are not language-general but develop during language acquisition, depending on the phonological or lexical properties of the native language.

6.
Extracting general rules from specific examples is important, as the same underlying structure can be encountered in many different formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we used the audio-visual congruency of bimodal stimulation to disentangle possible sources of facilitation. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in the audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when the audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning depends not only on better statistical probability and redundant sensory information, but also on the relational congruency of the audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ

7.
Three experiments investigated the representations that underlie 14-month-old infants' and adults' success at match-to-sample (MTS) and non-match-to-sample (NMTS) tasks. In Experiment 1, 14-month-old infants were able to learn rules based on abstract representations of sameness and/or difference. When presented with one of eighteen sample stimuli (A) and a choice between a stimulus that was the same as the sample (A) and a different stimulus (B), infants learned to choose A in MTS and B in NMTS. In Experiments 2 and 3, we began to explore the nature of the representations at play in these paradigms. Experiment 2 confirmed that abstract representations were at play, as infants generalized the MTS and NMTS rules to stimuli unseen during familiarization. Experiment 2 also showed that infants tested in MTS learned to seek the stimulus that was the same as the sample, whereas infants tested in NMTS did not learn to seek the different stimulus, but instead learned to avoid the stimulus that was the same as the sample. Infants appeared to only use an abstract representation of the relation same in these experiments. Experiment 3 showed that adult participants, despite knowing the words "same" and "different", also relied on representations of sameness in both MTS and NMTS in a paradigm modeled on that of Experiment 2. We conclude with a discussion of how young infants may possibly represent the abstract relation same.

8.
Phonological rules relate surface phonetic word forms to abstract underlying forms that are stored in the lexicon. Infants must thus acquire these rules in order to infer the abstract representation of words. We implement a statistical learning algorithm for the acquisition of one type of rule, namely allophony, which introduces context-sensitive phonetic variants of phonemes. This algorithm is based on the observation that different realizations of a single phoneme typically do not appear in the same contexts (ideally, they have complementary distributions). In particular, it measures the discrepancies in context probabilities for each pair of phonetic segments. In Experiment 1, we test the algorithm's performance on a pseudo-language and show that it is robust to statistical noise due to sampling and coding errors, and to non-systematic rule application. In Experiment 2, we show that a natural corpus of semiphonetically transcribed child-directed speech in French presents a very large number of near-complementary distributions that do not correspond to existing allophonic rules. These spurious allophonic rules can be eliminated by a linguistically motivated filtering mechanism based on a phonetic representation of segments. We discuss the role of a priori linguistic knowledge in the statistical learning of phonology.
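To make the distributional criterion concrete, the following Python sketch scores every pair of segments by how complementary their immediate right-hand contexts are. It is not the authors' implementation: the one-segment context window, the Jensen-Shannon divergence as discrepancy measure, the 0.9 threshold, and all function names are illustrative assumptions. Near-complementary pairs surface as allophone candidates, which would then still need the linguistically motivated, phonetically based filtering step the abstract describes.

```python
from collections import Counter, defaultdict
from itertools import combinations
from math import log2

def context_distributions(corpus, window=1):
    """Count the right-hand context (next `window` segments, '#' = word edge)
    of every segment in a corpus of segmented words."""
    contexts = defaultdict(Counter)
    for word in corpus:
        padded = word + ["#"] * window
        for i, seg in enumerate(word):
            contexts[seg][tuple(padded[i + 1:i + 1 + window])] += 1
    return contexts

def js_divergence(p, q):
    """Jensen-Shannon divergence (log base 2) between two context Counters:
    0 for identical distributions, 1 for fully complementary ones."""
    ps, qs = sum(p.values()), sum(q.values())
    m = {k: 0.5 * (p[k] / ps + q[k] / qs) for k in set(p) | set(q)}
    def kl(a, total):
        return sum((c / total) * log2((c / total) / m[k]) for k, c in a.items() if c)
    return 0.5 * kl(p, ps) + 0.5 * kl(q, qs)

def allophone_candidates(corpus, threshold=0.9):
    """Segment pairs whose contexts are (nearly) complementary, i.e. plausible
    surface variants of a single underlying phoneme."""
    ctx = context_distributions(corpus)
    candidates = []
    for a, b in combinations(sorted(ctx), 2):
        d = js_divergence(ctx[a], ctx[b])
        if d >= threshold:
            candidates.append((a, b, round(d, 3)))
    return candidates

# Toy pseudo-language: 'r' appears only before vowels, 'R' only before consonants.
corpus = [list(w) for w in ["rado", "maRko", "riko", "puRta", "rota"]]
# Includes the intended pair ('R', 'r'), but also many spurious pairs --
# echoing the abstract's point about near-complementary distributions in small data.
print(allophone_candidates(corpus))
```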

9.
Can infants, in the very first stages of word learning, use their perceptual sensitivity to the phonetics of speech while learning words? Research to date suggests that infants of 14 months cannot learn two similar-sounding words unless there is substantial contextual support. The current experiment advances our understanding of this failure by testing whether the source of infants' difficulty lies in the learning or testing phase. Infants were taught to associate two similar-sounding words with two different objects, and tested using a visual choice method rather than the standard Switch task. The results reveal that 14-month-olds are capable of learning and mapping two similar-sounding labels; they can apply phonetic detail in new words. The findings are discussed in relation to infants' concurrent failure, and the developmental transition to success, in the Switch task.

10.
Bilingual acquisition presents learning challenges beyond those found in monolingual environments, including the need to segment speech in two languages. Infants may use statistical cues, such as syllable-level transitional probabilities, to segment words from fluent speech. In the present study we assessed monolingual and bilingual 14-month-olds' abilities to segment two artificial languages using transitional probability cues. In Experiment 1, monolingual infants successfully segmented the speech streams when the languages were presented individually. However, monolinguals did not segment the same language stimuli when they were presented together in interleaved segments, mimicking the language switches inherent to bilingual speech. To assess the effect of real-world bilingual experience on dual language speech segmentation, Experiment 2 tested infants with regular exposure to two languages using the same interleaved language stimuli as Experiment 1. The bilingual infants in Experiment 2 successfully segmented the languages, indicating that early exposure to two languages supports infants' abilities to segment dual language speech using transitional probability cues. These findings support the notion that early bilingual exposure prepares infants to navigate challenging aspects of dual language environments as they begin to acquire two languages.
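Syllable-level transitional probability has a simple formal statement: TP(x -> y) = frequency(xy) / frequency(x), and word boundaries tend to fall where TP dips. The sketch below is a minimal, assumed implementation of that idea; the dip criterion and the toy lexicon are illustrative, not the stimuli used in the study.

```python
from collections import Counter

def transitional_probabilities(stream):
    """Forward transitional probability TP(x -> y) = freq(xy) / freq(x),
    computed over a continuous stream of syllables."""
    unigrams = Counter(stream[:-1])
    bigrams = Counter(zip(stream, stream[1:]))
    return {(x, y): c / unigrams[x] for (x, y), c in bigrams.items()}

def segment(stream):
    """Posit a word boundary wherever the TP between two adjacent syllables
    is a local minimum (a 'dip') relative to its neighbours."""
    tp = transitional_probabilities(stream)
    seq = [tp[(stream[i], stream[i + 1])] for i in range(len(stream) - 1)]
    words, current = [], [stream[0]]
    for i in range(1, len(stream)):
        left = seq[i - 1]
        prev = seq[i - 2] if i >= 2 else float("inf")
        nxt = seq[i] if i < len(seq) else float("inf")
        if left < prev and left < nxt:      # dip -> boundary before stream[i]
            words.append("".join(current))
            current = []
        current.append(stream[i])
    words.append("".join(current))
    return words

# Toy familiarization stream: four trisyllabic "words" in pseudo-random order.
lexicon = ["tu pi ro", "go la bu", "bi da ku", "pa do ti"]
order = [0, 1, 2, 3, 1, 0, 3, 2, 0, 2, 3, 1]
stream = [syl for w in order for syl in lexicon[w].split()]
print(segment(stream))   # recovers 'tupiro', 'golabu', 'bidaku', 'padoti'
```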

11.
Infants in the early stages of word learning have difficulty learning lexical neighbors (i.e. word pairs that differ by a single phoneme), despite their ability to discriminate the same contrast in a purely auditory task. While prior work has focused on top-down explanations for this failure (e.g. task demands, lexical competition), none has examined if bottom-up acoustic-phonetic factors play a role. We hypothesized that lexical neighbor learning could be improved by incorporating greater acoustic variability in the words being learned, as this may buttress still-developing phonetic categories, and help infants identify the relevant contrastive dimension. Infants were exposed to pictures accompanied by labels spoken by either a single or multiple speakers. At test, infants in the single-speaker condition failed to recognize the difference between the two words, while infants who heard multiple speakers discriminated between them.

12.
This study investigates the influence of the acoustic properties of vowels on 6- and 10-month-old infants' speech preferences. The shape of the contour (bell or monotonic) and the duration (normal or stretched) of vowels were manipulated in words containing the vowels /i/ and /u/, and presented to infants using a two-choice preference procedure. Experiment 1 examined contour shape: infants heard either normal-duration bell-shaped and monotonic contours, or the same two contours with stretched duration. The results show that 6-month-olds preferred bell to monotonic contours, whereas 10-month-olds preferred monotonic to bell contours. In Experiment 2, infants heard either normal-duration and stretched bell contours, or normal-duration and stretched monotonic contours. As in Experiment 1, infants showed age-specific preferences, with 6-month-olds preferring stretched vowels, and 10-month-olds preferring normal-duration vowels. Infants' attention to the acoustic qualities of vowels, and to speech in general, undergoes a dramatic transformation in the final months of the first year, a transformation that aligns with the emergence of other developmental milestones in speech perception.

13.
The present experiments investigated how the process of statistically segmenting words from fluent speech is linked to the process of mapping meanings to words. Seventeen-month-old infants first participated in a statistical word segmentation task, which was immediately followed by an object-label-learning task. Infants presented with labels that were words in the fluent speech used in the segmentation task were able to learn the object labels. However, infants presented with labels consisting of novel syllable sequences (nonwords; Experiment 1) or familiar sequences with low internal probabilities (part-words; Experiment 2) did not learn the labels. Thus, prior segmentation opportunities, but not mere frequency of exposure, facilitated infants' learning of object labels. This work provides the first demonstration that exposure to word forms in a statistical word segmentation task facilitates subsequent word learning.
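The word/part-word contrast can be stated in terms of the internal transitional probabilities of a candidate label: a word has uniformly high internal TPs, whereas a part-word straddles a word boundary and therefore contains at least one low TP. The snippet below is a hedged illustration using an invented two-word stream, not the actual stimuli; the function name and the product-of-TPs measure are assumptions for exposition.

```python
from collections import Counter

def internal_probability(label, stream):
    """Product of the forward transitional probabilities inside a candidate
    label, estimated from a familiarization stream: high for words, lower for
    part-words that span a word boundary, and zero for unattested nonwords."""
    unigrams = Counter(stream[:-1])
    bigrams = Counter(zip(stream, stream[1:]))
    prob = 1.0
    for x, y in zip(label, label[1:]):
        prob *= bigrams[(x, y)] / unigrams[x] if unigrams[x] else 0.0
    return prob

# Invented stream built from the "words" golabu and bidaku.
stream = "go la bu bi da ku bi da ku go la bu go la bu bi da ku".split()
print(internal_probability("go la bu".split(), stream))   # word: 1.0
print(internal_probability("bu bi da".split(), stream))   # part-word: lower, spans a boundary
print(internal_probability("da go ku".split(), stream))   # nonword: 0.0
```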

14.
In two experiments the flexibility of 18-month-olds' extension of familiar object labels was investigated using the intermodal preferential looking paradigm. The first experiment tested whether infants consider intact and incomplete objects as equally acceptable referents for familiar labels. Infants looked equally long at the intact and incomplete objects whether or not a label was presented. In the second experiment, infants were requested to find the referent of a target word among an incomplete target and an intact distracter or an intact target and an incomplete distracter. The incomplete objects were missing a large or small part. Infants looked longer at the incomplete target, even when large or small parts were deleted. Taken together, these findings suggest that infants do not hold a strong shape bias when generalizing familiar words.

15.
Nazzi, T. Cognition, 2005, 98(1): 13-30
The present study explores the use of phonetic specificity in the process of learning new words at 20 months of age. The procedure used follows Nazzi and Gopnik [Nazzi, T., & Gopnik, A. (2001). Linguistic and cognitive abilities in infancy: When does language become a tool for categorization? Cognition, 80, B11-B20]. Infants were first presented with triads of perceptually dissimilar objects, which were given made-up names, two of the objects receiving the same name. Then, word learning was evaluated through object selection/categorization. Tests involved phonetically different words (e.g. [pize] vs. [mora], Experiment 1), words differing minimally on their onset consonant (e.g. [pize] vs. [tize], Experiment 2a), and conditions which had never been tested before: non-initial consonantal contrasts (e.g. [pide] vs. [pige], Experiment 2b), and vocalic contrasts (e.g. [pize] vs. [pyze]; [pize] vs. [paze]; [pize] vs. [pizu], Experiments 3a-c). Results differed across conditions: words could be easily learnt in the phonetically different condition, and were learnt, though to a lesser degree, in both the initial and non-initial minimal consonant contrasts; however, infants' global performance on all three vocalic contrasts was at chance level. The present results shed new light on the specificity of early words, and raise the possibility of different contributions for vowels and consonants in early word learning.

16.
Previous research with artificial language learning paradigms has shown that infants are sensitive to statistical cues to word boundaries (Saffran, Aslin & Newport, 1996) and that they can use these cues to extract word-like units (Saffran, 2001). However, it is unknown whether infants use statistical information to construct a receptive lexicon when acquiring their native language. In order to investigate this issue, we rely on the fact that besides real words a statistical algorithm extracts sound sequences that are highly frequent in infant-directed speech but constitute nonwords. In three experiments, we use a preferential listening paradigm to test French-learning 11-month-old infants' recognition of highly frequent disyllabic sequences from their native language. In Experiments 1 and 2, we use nonword stimuli and find that infants listen longer to high-frequency than to low-frequency sequences. In Experiment 3, we compare high-frequency nonwords to real words in the same frequency range, and find that infants show no preference. Thus, at 11 months, French-learning infants recognize highly frequent sound sequences from their native language and fail to differentiate between words and nonwords among these sequences. These results are evidence that they have used statistical information to extract word candidates from their input and stored them in a 'protolexicon', containing both words and nonwords.

17.
Most research on children's spelling has emphasized the role of phonological or sound-based processes. We asked whether morphology plays a part in early spelling by examining how children write words with final consonant clusters. In three experiments, children made different patterns of omission errors on the last two consonants of words such as tuned and bars, in which the consonants belong to different morphemes, and words such as brand and Mars, in which the consonants belong to the same morpheme. These differences emerged even among children reading at the first-grade level. Effects of morphology appeared whether children spelled single words to dictation (Experiments 1 and 3), finished partially completed spellings (Experiment 2), or wrote sentences containing specified words (Experiment 3). Children did not use morphological relations among words as much as they could have, given their knowledge of the stems, but they did use them to some extent. Although phonology plays an important role in early spelling, young children can also use other sources of information, including certain morphological relationships among words.

18.
Fifteen-month-olds have difficulty detecting differences between novel words differing in a single vowel. Previous work showed that Australian English (AusE) infants habituated to the word-object pair DEET detected an auditory switch to DIT and DOOT in Canadian English (CanE) but not in their native AusE (Escudero et al., 2014). The authors speculated that this may be because the vowel inherent spectral change variation (VISC) in AusE DEET is larger than in CanE DEET. We investigated whether VISC leads to difficulty in encoding phonetic detail during early word learning, and whether this difficulty dissipates with age. In Experiment 1, we familiarized AusE-learning 15-month-olds to AusE DIT, which contains smaller VISC than AusE DEET. Unlike infants familiarized with AusE DEET (Escudero et al., 2014), infants detected a switch to DEET and DOOT. In Experiment 2, we familiarized AusE-learning 17-month-olds to AusE DEET. This time, infants detected a switch to DOOT, and marginally detected a switch to DIT. Our acoustic analysis showed that AusE DEET and DOOT are differentiated by the second vowel formant, while DEET and DIT can only be distinguished by their changing dynamic properties throughout the vowel trajectory. Thus, by 17 months, AusE infants can encode highly dynamic acoustic properties, enabling them to learn the novel vowel minimal pairs that are difficult at 15 months. These findings suggest that the development of word learning is shaped by the phonetic properties of the specific word minimal pair.

19.
The current study examined whether and when young infants are sensitive to distressed others, using two experiments with a forced-choice paradigm. Experiment 1 showed that 5- to 9-month-old infants demonstrate a clear pro-victim preference: Infants preferred a distressed character that had been physically harmed over a matched neutral character. Experiment 2 showed that infants' preference for a distressed other is not invariable, but rather depends on the context: Infants no longer preferred the distressed character when it expressed the exact same distress but for no apparent reason. These findings have implications for the early ontogeny of human compassion and morality, addressed in the discussion.

20.
We have proposed that consonants give cues primarily about the lexicon, whereas vowels carry cues about syntax. In a study supporting this hypothesis, we showed that when segmenting words from an artificial continuous stream, participants compute statistical relations over consonants, but not over vowels. In the study reported here, we tested the symmetrical hypothesis that when participants listen to words in a speech stream, they tend to exploit relations among vowels to extract generalizations, but tend to disregard the same relations among consonants. In our streams, participants could segment words on the basis of transitional probabilities in one tier and could extract a structural regularity in the other tier. Participants used consonants to extract words, but vowels to extract a structural generalization. They were unable to extract the same generalization using consonants, even when word segmentation was facilitated and the generalization made simpler. Our results suggest that different signal-driven computations prime lexical and grammatical processing.
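As a rough illustration of what computing over separate tiers could look like (the CVCVCV items below are invented, not the streams used in the study), the sketch splits each item into its consonant and vowel tiers and checks a repetition rule such as ABA on the vowel tier; transition statistics of the kind shown in the segmentation sketch under entry 10 would then be tracked over the consonant tier.

```python
VOWELS = set("aeiou")

def split_tiers(word):
    """Split a CVCVCV nonsense word into its consonant and vowel tiers."""
    consonants = [ch for ch in word if ch not in VOWELS]
    vowels = [ch for ch in word if ch in VOWELS]
    return consonants, vowels

def follows_aba(tier):
    """True if a three-element tier instantiates the repetition rule A-B-A."""
    return len(tier) == 3 and tier[0] == tier[2] and tier[0] != tier[1]

# Invented CVCVCV items: the first three carry the ABA rule on the vowel tier,
# the last one violates it.
for word in ["bakema", "dupolu", "tilefi", "bakemi"]:
    cons, vows = split_tiers(word)
    print(word, "C-tier:", cons, "V-tier:", vows, "ABA on vowels:", follows_aba(vows))
```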
