Similar Articles
20 similar articles found (search time: 15 ms).
1.
Onishi KH, Chambers KE, Fisher C. Cognition, 2002, 83(1): B13-B23.
Three experiments asked whether phonotactic regularities not present in English could be acquired by adult English speakers from brief listening experience. Subjects listened to consonant-vowel-consonant (CVC) syllables displaying restrictions on consonant position. Responses in a later speeded repetition task revealed rapid learning of (a) first-order regularities in which consonants were restricted to particular positions (e.g. [baep] not *[paeb]), and (b) second-order regularities in which consonant position depended on the adjacent vowel (e.g. [baep] or [pIb], not *[paeb] or *[bIp]). No evidence of learning was found for second-order regularities in which consonant position depended on the speaker's voice. These results demonstrated that phonotactic constraints are rapidly learned from listening experience and that some types of contingencies (consonant-vowel) are more easily learned than others (consonant-voice).

2.
Language learners are sensitive to phonotactic patterns from an early age, and can acquire both simple and 2nd-order positional restrictions contingent on segment identity (e.g., /f/ is an onset with /æ/ but a coda with /?/). The present study explored the learning of phonotactic patterns conditioned on a suprasegmental cue: lexical stress. Adults first heard non-words in which trochaic and iambic items had different consonant restrictions. In Experiment 1, participants trained with phonotactic patterns involving natural classes of consonants later falsely recognized novel items that were consistent with the training patterns (legal items), demonstrating that they had learned the stress-conditioned phonotactic patterns. However, this was only true for iambic items. In Experiment 2, participants completed a forced-choice test between novel legal and novel illegal items and were again successful only for the iambic items. Experiment 3 demonstrated learning for trochaic items when they were presented alone. Finally, in Experiment 4, in which the training phase was lengthened, participants successfully learned both sets of phonotactic patterns. These experiments provide evidence that learners consider more global phonological properties in the computation of phonotactic patterns, and that learners can acquire multiple sets of patterns simultaneously, even contradictory ones.

3.
Smith L, Yu C. Cognition, 2008, 106(3): 1558-1568.
First word learning should be difficult because any pairing of a word and scene presents the learner with an infinite number of possible referents. Accordingly, theorists of children's rapid word learning have sought constraints on word-referent mappings. These constraints are thought to work by enabling learners to resolve the ambiguity inherent in any labeled scene to determine the speaker's intended referent at that moment. The present study shows that 12- and 14-month-old infants can resolve the uncertainty problem in another way, not by unambiguously deciding the referent in a single word-scene pairing, but by rapidly evaluating the statistical evidence across many individually ambiguous words and scenes.

4.
Adults rapidly learn phonotactic constraints from brief production or perception experience. Three experiments asked whether this learning is modality-specific, occurring separately in production and perception, or whether perception transfers to production. Participant pairs took turns repeating syllables in which particular consonants were restricted to particular syllable positions. Speakers’ errors reflected learning of the constraints present in the sequences they produced, regardless of whether their partner produced syllables with the same constraints, or opposing constraints. Although partial transfer could be induced (Experiment 3), simply hearing and encoding syllables produced by others did not affect speech production to the extent that error patterns were altered. Learning of new phonotactic constraints was predominantly restricted to the modality in which those constraints were experienced.

5.
Fluent speakers’ representations of verbs include semantic knowledge about the nouns that can serve as their arguments. These “selectional restrictions” of a verb can in principle be recruited to learn the meaning of a novel noun. For example, the sentence He ate the carambola licenses the inference that carambola refers to something edible. We ask whether 15- and 19-month-old infants can recruit their nascent verb lexicon to identify the referents of novel nouns that appear as the verbs’ subjects. We compared infants’ interpretation of a novel noun (e.g., the dax) in two conditions: one in which dax is presented as the subject of an animate-selecting construction (e.g., The dax is crying), and the other in which dax is the subject of an animacy-neutral construction (e.g., The dax is right here). Results indicate that by 19 months, infants use their representations of known verbs to inform the meaning of a novel noun that appears as its argument.

6.
Children’s early word production is influenced by the statistical frequency of speech sounds and combinations. Three experiments asked whether this production effect can be explained by a perceptual learning mechanism that is sensitive to word-token frequency and/or variability. Four-year-olds were exposed to nonwords that were either frequent (presented 10 times) or infrequent (presented once). When the frequent nonwords were spoken by the same talker, children showed no significant effect of perceptual frequency on production. When the frequent nonwords were spoken by different talkers, children produced them with fewer errors and shorter latencies. The results implicate token variability in perceptual learning.

7.
The picture-word interference paradigm was used to shed new light on the debate concerning slow serial versus fast parallel activation of phonology in silent reading. Prereaders, beginning readers (Grades 1-4), and adults named pictures that had words printed on them. Words and pictures shared phonology either at the beginnings of words (e.g., DOLL-DOG) or at the ends of words (e.g., FOG-DOG). The results showed that phonological overlap between primes and targets facilitated picture naming. This facilitatory effect was present even in beginning readers. More important, from Grade 1 onward, end-related facilitation always was as strong as beginning-related facilitation. This result suggests that, from the beginning of reading, the implicit and automatic activation of phonological codes during silent reading is not serial but rather parallel.

8.
Cross‐situational word learning, like any statistical learning problem, involves tracking the regularities in the environment. However, the information that learners pick up from these regularities is dependent on their learning mechanism. This article investigates the role of one type of mechanism in statistical word learning: competition. Competitive mechanisms would allow learners to find the signal in noisy input and would help to explain the speed with which learners succeed in statistical learning tasks. Because cross‐situational word learning provides information at multiple scales—both within and across trials/situations—learners could implement competition at either or both of these scales. A series of four experiments demonstrate that cross‐situational learning involves competition at both levels of scale, and that these mechanisms interact to support rapid learning. The impact of both of these mechanisms is considered from the perspective of a process‐level understanding of cross‐situational learning.
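
As a concrete illustration of what cross-situational statistics and a competitive mechanism might look like computationally, the Python sketch below simulates a toy learner. The trials, the word and object labels, and the use of normalization to model competition are assumptions made for illustration only, not the task or model reported in the article above.

```python
from collections import defaultdict

# Toy cross-situational trials: each trial pairs the words heard with the
# objects in view, and the correct mapping is never given on any single trial.
# Words w1-w3 map to objects o1-o3 in the intended "ground truth"; these
# trials are invented for illustration.
trials = [
    (["w1", "w2"], ["o1", "o2"]),
    (["w1", "w3"], ["o1", "o3"]),
    (["w2", "w3"], ["o2", "o3"]),
]

# Within-trial information: every word is paired with every visible object.
assoc = defaultdict(lambda: defaultdict(float))
for heard, seen in trials:
    for w in heard:
        for o in seen:
            assoc[w][o] += 1.0

# Across-trial information plus a simple competitive step: objects compete
# for each word by normalizing that word's associations to sum to one.
def competitive_strengths(word):
    total = sum(assoc[word].values())
    return {o: count / total for o, count in assoc[word].items()}

for w in ("w1", "w2", "w3"):
    strengths = competitive_strengths(w)
    best = max(strengths, key=strengths.get)
    print(w, "->", best, strengths)
```

Without the normalization step, every co-occurring object would retain equal strength for a word; letting objects compete for that strength is one simple way the across-trial evidence can sharpen the correct mappings.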

9.
This paper examines whether adults can adapt to novel accents of their native language that contain unfamiliar context-dependent phonological alternations. In two experiments, French participants listen to short stories read in accented speech. Their knowledge of the accents is then tested in a forced-choice identification task. In Experiment 1, two groups of listeners are exposed to newly created French accents in which certain vowels harmonize or disharmonize, respectively, to the rounding of the preceding vowel. Despite the cross-linguistic predominance of vowel harmony over disharmony, the two groups adapt equally well to both accents, suggesting that this typological difference is not reflected in perceptual learning. Experiment 2 further explores the mechanism underlying this type of phonological learning. Participants are exposed to an accent in which some vowels harmonize and others disharmonize, yielding an increased featural complexity. They adapt less well to this regularity, showing that adaptation to novel accents involves feature-based inferences.

10.
Laakso A, Calvo P. Cognitive Science, 2011, 35(7): 1243-1281.
Some empirical evidence in the artificial language acquisition literature has been taken to suggest that statistical learning mechanisms are insufficient for extracting structural information from an artificial language. According to the more than one mechanism (MOM) hypothesis, at least two mechanisms are required in order to acquire language from speech: (a) a statistical mechanism for speech segmentation; and (b) an additional rule-following mechanism in order to induce grammatical regularities. In this article, we present a set of neural network studies demonstrating that a single statistical mechanism can mimic the apparent discovery of structural regularities, beyond the segmentation of speech. We argue that our results undermine one argument for the MOM hypothesis.

11.
Mirman D, Magnuson JS, Estes KG, Dixon JA. Cognition, 2008, 108(1): 271-280.
Many studies have shown that listeners can segment words from running speech based on conditional probabilities of syllable transitions, suggesting that this statistical learning could be a foundational component of language learning. However, few studies have shown a direct link between statistical segmentation and word learning. We examined this possible link in adults by following a statistical segmentation exposure phase with an artificial lexicon learning phase. Participants were able to learn all novel object-label pairings, but pairings were learned faster when labels contained high probability (word-like) or non-occurring syllable transitions from the statistical segmentation phase than when they contained low probability (boundary-straddling) syllable transitions. This suggests that, for adults, labels inconsistent with expectations based on statistical learning are harder to learn than consistent or neutral labels. In contrast, a previous study found that infants learn consistent labels, but not inconsistent or neutral labels.
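
As a rough sketch of the segmentation statistic involved, the Python code below computes forward transitional probabilities over a toy syllable stream and posits word boundaries at local dips. The nonsense words and the local-minimum rule are illustrative assumptions, not the materials or procedure of the study above.

```python
import random
from collections import Counter

# Hypothetical continuous syllable stream built from three nonsense words
# (a Saffran-style artificial language); these are not the study's materials.
words = ["tupiro", "golabu", "bidaku"]

def syllabify(word):
    # Split a six-letter nonsense word into three CV syllables.
    return [word[i:i + 2] for i in range(0, len(word), 2)]

random.seed(0)
stream = []
for _ in range(100):
    stream.extend(syllabify(random.choice(words)))

# Forward transitional probability: TP(x -> y) = freq(xy) / freq(x).
# (A backward TP would instead divide by freq(y).)
unigrams = Counter(stream)
bigrams = Counter(zip(stream, stream[1:]))
tps = [bigrams[(x, y)] / unigrams[x] for x, y in zip(stream, stream[1:])]

# Posit a word boundary wherever the transitional probability is lower than
# at the transitions on either side (a simple local-minimum rule).
boundaries = [i + 1 for i in range(1, len(tps) - 1)
              if tps[i] < tps[i - 1] and tps[i] < tps[i + 1]]
print("first boundary positions in the syllable stream:", boundaries[:5])
```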

12.
The processes of infant word segmentation and infant word learning have largely been studied separately. However, the ease with which potential word forms are segmented from fluent speech seems likely to influence subsequent mappings between words and their referents. To explore this process, we tested the link between the statistical coherence of sequences presented in fluent speech and infants’ subsequent use of those sequences as labels for novel objects. Notably, the materials were drawn from a natural language unfamiliar to the infants (Italian). The results of three experiments suggest that there is a close relationship between the statistics of the speech stream and subsequent mapping of labels to referents. Mapping was facilitated when the labels contained high transitional probabilities in the forward and/or backward direction (Experiment 1). When no transitional probability information was available (Experiment 2), or when the internal transitional probabilities of the labels were low in both directions (Experiment 3), infants failed to link the labels to their referents. Word learning appears to be strongly influenced by infants’ prior experience with the distribution of sounds that make up words in natural languages.

13.
Natural languages contain many layers of sequential structure, from the distribution of phonemes within words to the distribution of phrases within utterances. However, most research modeling language acquisition using artificial languages has focused on only one type of distributional structure at a time. In two experiments, we investigated adult learning of an artificial language that contains dependencies between both adjacent and non‐adjacent words. We found that learners rapidly acquired both types of regularities and that the strength of the adjacent statistics influenced learning of both adjacent and non‐adjacent dependencies. Additionally, though accuracy was similar for both types of structure, participants’ knowledge of the deterministic non‐adjacent dependencies was more explicit than their knowledge of the probabilistic adjacent dependencies. The results are discussed in the context of current theories of statistical learning and language acquisition.

14.
Because children hear language in environments that contain many things to talk about, learning the meaning of even the simplest word requires making inferences under uncertainty. A cross-situational statistical learner can aggregate across naming events to form stable word-referent mappings, but this approach neglects an important source of information that can reduce referential uncertainty: social cues from speakers (e.g., eye gaze). In four large-scale experiments with adults, we tested the effects of varying referential uncertainty in cross-situational word learning using social cues. Social cues shifted learners away from tracking multiple hypotheses and towards storing only a single hypothesis (Experiments 1 and 2). In addition, learners were sensitive to graded changes in the strength of a social cue, and when it became less reliable, they were more likely to store multiple hypotheses (Experiment 3). Finally, learners stored fewer word-referent mappings in the presence of a social cue even when given the opportunity to visually inspect the objects for the same amount of time (Experiment 4). Taken together, our data suggest that the representations underlying cross-situational word learning of concrete object labels are quite flexible: In conditions of greater uncertainty, learners store a broader range of information.

15.
Orienting biases for speech may provide a foundation for language development. Although human infants show a bias for listening to speech from birth, the relation of a speech bias to later language development has not been established. Here, we examine whether infants' attention to speech directly predicts expressive vocabulary. Infants listened to speech or non‐speech in a preferential listening procedure. Results show that infants' attention to speech at 12 months significantly predicted expressive vocabulary at 18 months, while indices of general development did not. No predictive relationships were found for infants' attention to non‐speech, or overall attention to sounds, suggesting that the relationship between speech and expressive vocabulary was not a function of infants' general attentiveness. Potentially ancient evolutionary perceptual capacities such as biases for conspecific vocalizations may provide a foundation for proficiency in formal systems such as language, much like the approximate number sense may provide a foundation for formal mathematics.

16.
Fine-grained sensitivity to statistical information in adult word learning
Vouloumanos A. Cognition, 2008, 107(2): 729-742.
A language learner trying to acquire a new word must often sift through many potential relations between particular words and their possible meanings. In principle, statistical information about the distribution of those mappings could serve as one important source of data, but little is known about whether learners can in fact track multiple word-referent mappings, and, if they do, the precision with which they can represent those statistics. To test this, two experiments contrasted a pair of possibilities: that learners encode the fine-grained statistics of mappings in the input - both high- and low-frequency mappings - or, alternatively, that only high frequency mappings are represented. Participants were briefly trained on novel word-novel object pairs combined with varying frequencies: some objects were paired with one word, other objects with multiple words with differing frequencies (ranging from 10% to 80%). Results showed that participants were exquisitely sensitive to very small statistical differences in mappings. The second experiment showed that word learners' representation of low frequency mappings is modulated as a function of the variability in the environment. Implications for Mutual Exclusivity and Bayesian accounts of word learning are discussed.

17.
Evidence from infant studies indicates that language learning can be facilitated by multimodal cues. We extended this observation to adult language learning by studying the effects of simultaneous visual cues (nonassociated object images) on speech segmentation performance. Our results indicate that segmentation of new words from a continuous speech stream is facilitated by simultaneous visual input that is presented at or near syllables that exhibit the low transitional probability indicative of word boundaries. This indicates that temporal audio-visual contiguity helps in directing attention to word boundaries at the earliest stages of language learning. Off-boundary or arrhythmic picture sequences did not affect segmentation performance, suggesting that the language learning system can effectively disregard noninformative visual information. Detection of temporal contiguity between multimodal stimuli may be useful in both infants and second-language learners not only for facilitating speech segmentation, but also for detecting word–object relationships in natural environments.

18.
A series of three experiments examined children's sensitivity to probabilistic phonotactic structure as reflected in the relative frequencies with which speech sounds occur and co-occur in American English. Children, ages 2½ and 3½ years, participated in a nonword repetition task that examined their sensitivity to the frequency of individual phonetic segments and to the frequency of combinations of segments. After partialling out ease of articulation and lexical variables, both groups of children repeated high phonotactic frequency nonwords more accurately than low phonotactic frequency nonwords, suggesting sensitivity to phoneme frequency. In addition, sensitivity to individual phonetic segments increased with age. Finally, older children, but not younger children, were sensitive to the frequency of larger (diphone) units. These results suggest not only that young children are sensitive to fine-grained acoustic-phonetic information in the developing lexicon but also that sensitivity to all aspects of the sound structure increases over development. Implications for the acoustic nature of both developing and mature lexical representations are discussed.

19.
To better understand how infants process complex auditory input, this study investigated whether 11-month-old infants perceive the pitch (melodic) or the phonetic (lyric) components within songs as more salient, and whether melody facilitates phonetic recognition. Using a preferential looking paradigm, uni-dimensional and multi-dimensional songs were tested; either the pitch or syllable order of the stimuli varied. As a group, infants detected a change in pitch order in a 4-note sequence when the syllables were redundant (experiment 1), but did not detect the identical pitch change with variegated syllables (experiment 2). Infants were better able to detect a change in syllable order in a sung sequence (experiment 2) than the identical syllable change in a spoken sequence (experiment 1). These results suggest that by 11 months, infants cannot “ignore” phonetic information in the context of perceptually salient pitch variation. Moreover, the increased phonetic recognition in song contexts mirrors findings that demonstrate advantages of infant-directed speech. Findings are discussed in terms of how stimulus complexity interacts with the perception of sung speech in infancy.

20.
Children show a remarkable degree of consistency in learning some words earlier than others. What patterns of word usage predict variations among words in age of acquisition? We use distributional analysis of a naturalistic corpus of child-directed speech to create quantitative features representing natural variability in word contexts. We evaluate two sets of features: One set is generated from the distribution of words into frames defined by the two adjacent words. These features primarily encode syntactic aspects of word usage. The other set is generated from non-adjacent co-occurrences between words. These features encode complementary thematic aspects of word usage. Regression models using these distributional features to predict age of acquisition of 656 early-acquired English words indicate that both types of features improve predictions over simpler models based on frequency and appearance in salient or simple utterance contexts. Syntactic features were stronger predictors of children's production than comprehension, whereas thematic features were stronger predictors of comprehension. Overall, earlier acquisition was predicted by features representing frames that select for nouns and verbs, and by thematic content related to food and face-to-face play topics; later acquisition was predicted by features representing frames that select for pronouns and question words, and by content related to narratives and object play.
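
The Python sketch below illustrates how frame-based and non-adjacent co-occurrence features of the kind described above might be computed. The toy utterances, the sentence-edge markers, and the adjacency cutoff are assumptions for illustration, not the paper's pipeline.

```python
from collections import Counter, defaultdict

# Toy child-directed utterances; the actual analysis uses a large
# naturalistic corpus, so these sentences are purely illustrative.
utterances = [
    "you want the ball",
    "where is the ball",
    "you want more juice",
    "look at the doggy",
]

frame_counts = defaultdict(Counter)  # word -> counts of (previous, next) frames
cooc_counts = defaultdict(Counter)   # word -> counts of non-adjacent co-occurring words

for utterance in utterances:
    tokens = utterance.split()
    for i, word in enumerate(tokens):
        prev_word = tokens[i - 1] if i > 0 else "<s>"
        next_word = tokens[i + 1] if i < len(tokens) - 1 else "</s>"
        frame_counts[word][(prev_word, next_word)] += 1
        # Non-adjacent co-occurrence: other words in the same utterance
        # that are not immediately next to this word.
        for j, other in enumerate(tokens):
            if abs(i - j) > 1:
                cooc_counts[word][other] += 1

# Frames such as ("the", "</s>") tend to select for nouns; counts over frames
# supply the syntactic features, and co-occurrence counts the thematic ones.
print(frame_counts["ball"])
print(cooc_counts["ball"])
```

In the actual analysis, counts like these are turned into quantitative feature vectors and entered into regression models predicting each word's age of acquisition.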
