Similar articles
20 similar articles found (search time: 125 ms)
1.
We investigated the effect of lexical stress on 16-month-olds' ability to form associations between labels and paths of motion. Disyllabic English nouns tend to have a strong-weak (trochaic) stress pattern, and verbs tend to have a weak-strong (iambic) pattern. We explored whether infants would use word stress information to guide word-action associations during learning. Infants heard two novel words with either verb-like iambic stress or noun-like trochaic stress. Each word was paired with a single novel object performing one of two path actions and was tested using path-switch trials. Only infants in the iambic stress condition learned the association between the novel words and the path actions. To further investigate infants' difficulty in mapping the trochaic labels to the actions, we conducted an additional study in which infants were given an object-switch task using the trochaic labels. In this case, infants were able to associate the trochaic labels with the objects, providing further support that infants use lexical stress to guide label-referent associations. This study demonstrates that by 16 months, English-learning infants have developed a bias to expect disyllabic action labels to have iambic stress patterns, consistent with native language stress patterns.

2.
Infants in the early stages of word learning have difficulty learning lexical neighbors (i.e. word pairs that differ by a single phoneme), despite their ability to discriminate the same contrast in a purely auditory task. While prior work has focused on top-down explanations for this failure (e.g. task demands, lexical competition), none has examined if bottom-up acoustic-phonetic factors play a role. We hypothesized that lexical neighbor learning could be improved by incorporating greater acoustic variability in the words being learned, as this may buttress still-developing phonetic categories, and help infants identify the relevant contrastive dimension. Infants were exposed to pictures accompanied by labels spoken by either a single or multiple speakers. At test, infants in the single-speaker condition failed to recognize the difference between the two words, while infants who heard multiple speakers discriminated between them.

3.
English, French, and bilingual English-French 17-month-old infants were compared for their performance on a word learning task using the Switch task. Object names presented a /b/ vs. /g/ contrast that is phonemic in both English and French, and auditory strings comprised English and French pronunciations by an adult bilingual. Infants were habituated to two novel objects labeled 'bowce' or 'gowce' and were then presented with a switch trial where a familiar word and familiar object were paired in a novel combination, and a same trial with a familiar word–object pairing. Bilingual infants looked significantly longer to switch vs. same trials, but English and French monolinguals did not, suggesting that bilingual infants can learn word–object associations when the phonetic conditions favor their input. Monolingual infants likely failed because the bilingual mode of presentation increased phonetic variability and did not match their real-world input. Experiment 2 tested this hypothesis by presenting monolingual infants with nonce word tokens restricted to native language pronunciations. Monolinguals succeeded in this case. Experiment 3 revealed that the presence of unfamiliar pronunciations in Experiment 2, rather than a reduction in overall phonetic variability, was the key factor in this success, as French infants failed when tested with English pronunciations of the nonce words. Thus phonetic variability impacts how infants perform in the Switch task in ways that contribute to differences in monolingual and bilingual performance. Moreover, both monolinguals and bilinguals are developing adaptive speech processing skills that are specific to the language(s) they are learning.

4.
Consonants and vowels differ acoustically and articulatorily, but also functionally: consonants are more relevant for lexical processing, and vowels for prosodic/syntactic processing. These functional biases could be powerful bootstrapping mechanisms for learning language, but their developmental origin remains unclear. The relative importance of consonants and vowels at the onset of lexical acquisition was assessed in French-learning 5-month-olds by testing sensitivity to minimal phonetic changes in their own name. Infants' reactions to mispronunciations revealed sensitivity to vowel but not consonant changes. Vowels were also more salient (on duration and intensity) but less distinct (on spectrally based measures) than consonants. Lastly, vowel (but not consonant) mispronunciation detection was modulated by acoustic factors, in particular spectrally based distance. These results establish that consonant changes do not affect lexical recognition at 5 months, while vowel changes do; the consonant bias observed later in development does not emerge until after 5 months through additional language exposure.

5.
6.
This research investigates how early learning about native language sound structure affects how infants associate sounds with meanings during word learning. Infants (19-month-olds) were presented with bisyllabic labels with high or low phonotactic probability (i.e., sequences of frequent or infrequent phonemes in English). The labels were produced with the predominant English trochaic (strong/weak) stress pattern or the less common iambic (weak/strong) pattern. Using the habituation-based Switch Task to test label learning, we found that infants readily learned high-probability trochaic labels. However, they failed to learn low-probability labels, regardless of stress, and failed to learn iambic labels, regardless of phonotactics. Thus, infants required support from both common phoneme sequences and a common stress pattern to map the labels to objects. These findings demonstrate that early word learning is shaped by prior knowledge of native language phonological regularities and provide support for the role of statistical learning in language acquisition.
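Phonotactic probability in studies of this kind is usually estimated from the frequency of phoneme sequences in a corpus. The sketch below is a minimal illustration of one common formulation (biphone probability); the corpus, transcriptions, and labels are hypothetical stand-ins, not the study's materials.

```python
# Illustrative sketch (hypothetical corpus and transcriptions): phonotactic
# probability of a label, estimated as the average probability of its phoneme
# bigrams (biphones) in a corpus of phonemically transcribed words.
from collections import Counter

def biphones(phonemes):
    return list(zip(phonemes, phonemes[1:]))

def biphone_probabilities(corpus):
    """Estimate P(next phoneme | current phoneme) from a list of transcriptions."""
    pair_counts = Counter()
    first_counts = Counter()
    for word in corpus:
        for p1, p2 in biphones(word):
            pair_counts[(p1, p2)] += 1
            first_counts[p1] += 1
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def phonotactic_score(label, probs):
    """Average biphone probability: higher = built from more frequent sequences."""
    pairs = biphones(label)
    return sum(probs.get(pair, 0.0) for pair in pairs) / len(pairs)

# Hypothetical "corpus" of transcribed words (symbols stand in for phonemes).
corpus = [list("dagi"), list("dabi"), list("gipo"), list("dago"), list("pida")]
probs = biphone_probabilities(corpus)
print(phonotactic_score(list("dagi"), probs))   # frequent sequences -> higher score
print(phonotactic_score(list("obda"), probs))   # rare sequences     -> lower score
```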

7.
To investigate the interaction between segmental and suprasegmental stress-related information in early word learning, two experiments were conducted with 20- to 24-month-old English-learning children. In an adaptation of the object categorization study designed by Nazzi and Gopnik (2001), children were presented with pairs of novel objects whose labels differed by their initial consonant (Experiment 1) or their medial consonant (Experiment 2). Words were produced with a stress-initial (trochaic) or a stress-final (iambic) pattern. In both experiments, successful word learning was established when the to-be-remembered contrast was embedded in a stressed syllable, but not when it was embedded in unstressed syllables. This was independent of the overall word pattern, trochaic or iambic, and of the location of the phonemic contrast, word-initial or word-medial. Results are discussed in light of the use of phonetic information in early lexical acquisition, highlighting the role of lexical stress and ambisyllabicity in early word processing.

8.
Discrimination of speech sounds from three computer-generated continua that ranged from voiced to voiceless syllables (/ba-pa/, /da-ta/, and /ga-ka/) was tested with three macaques. The stimuli on each continuum varied in voice-onset time (VOT). Pairs of stimuli that were equally different in VOT were chosen such that they were either within-category pairs (syllables given the same phonetic label by human listeners) or between-category pairs (syllables given different phonetic labels by human listeners). Results demonstrated that discrimination performance was always best for between-category pairs of stimuli, thus replicating the "phoneme boundary effect" seen in adult listeners and in human infants as young as 1 month of age. The findings are discussed in terms of their specific impact on accounts of voicing perception in human listeners and in terms of their impact on discussions of the evolution of language.

9.
Child-directed language can support language learning, but how? We addressed two questions: (1) how caregivers prosodically modulated their speech as a function of word familiarity (known or unknown to the child) and accessibility of the referent (visually present or absent from the immediate environment); (2) whether such modulations affect children's learning of unknown words and vocabulary development. We used data from 38 English-speaking caregivers (from the ECOLANG corpus) talking about toys (both known and unknown to their children, aged 3–4 years) both when the toys were present and when they were absent. We analyzed prosodic dimensions (i.e., speaking rate, pitch, and intensity) of caregivers' productions of 6529 toy labels. We found that unknown labels were spoken with a significantly slower speaking rate and wider pitch and intensity range than known labels, especially in first mentions, suggesting that caregivers adjust their prosody based on children's lexical knowledge. Moreover, caregivers used a slower speaking rate and larger intensity range to mark the first mentions of toys that were physically absent. After the first mentions, they talked about the referents more loudly and with higher mean pitch when toys were present than when they were absent. Crucially, caregivers' mean pitch on unknown words and the degree of mean pitch modulation for unknown words relative to known words (pitch ratio) predicted children's immediate word learning and vocabulary size 1 year later. In conclusion, caregivers modify their prosody when the learning situation is more demanding for children, and these helpful modulations assist children in word learning.

Research Highlights

  • In naturalistic interactions, caregivers use slower speaking rate, wider pitch and intensity range when introducing new labels to 3–4-year-old children, especially in first mentions.
  • Compared to when toys are present, caregivers speak more slowly with larger intensity range to mark the first mentions of toys that are physically absent.
  • Mean pitch to mark word familiarity predicts children's immediate word learning and future vocabulary size.
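The prosodic measures described above (speaking rate, mean pitch, pitch and intensity range, and the pitch ratio) are simple summary statistics once f0 tracks and word timings are available. The sketch below is a minimal illustration with made-up values; the pitch ratio is read here as unknown-word mean pitch relative to known-word mean pitch for the same caregiver, which is an assumption, not necessarily the paper's exact formula.

```python
# Minimal sketch (made-up values): prosodic measures of the kind analyzed above,
# computed from hypothetical per-token f0 tracks (Hz) and word durations.
import statistics

def mean_pitch(f0_track):
    return statistics.mean(f0_track)

def pitch_range(f0_track):
    return max(f0_track) - min(f0_track)

def speaking_rate(n_syllables, duration_s):
    return n_syllables / duration_s              # syllables per second

# Hypothetical tokens of a known and an unknown toy label from one caregiver.
known_tokens   = [[210, 230, 220], [205, 215, 225]]
unknown_tokens = [[240, 290, 260], [250, 300, 270]]

mean_known   = statistics.mean(mean_pitch(t) for t in known_tokens)
mean_unknown = statistics.mean(mean_pitch(t) for t in unknown_tokens)

# "Pitch ratio" read as unknown-word mean pitch relative to known-word mean pitch
# (an illustrative assumption); > 1 means unknown labels carry higher mean pitch.
pitch_ratio = mean_unknown / mean_known
print(round(pitch_ratio, 2), pitch_range(unknown_tokens[0]), speaking_rate(2, 0.9))
```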

10.
Two experiments investigated sensory/motor-based functional knowledge of man-made objects: manipulation features associated with the actual usage of objects. In Experiment 1, a series of prime-target pairs was presented auditorily, and participants were asked to make a lexical decision on the target word. Participants made a significantly faster decision about the target word (e.g. 'typewriter') following a related prime that shared manipulation features with the target (e.g. 'piano') than an unrelated prime (e.g. 'blanket'). In Experiment 2, participants' eye movements were monitored when they viewed a visual display on a computer screen while listening to a concurrent auditory input. Participants were instructed to simply identify the auditory input and touch the corresponding object on the computer display. Participants fixated an object picture (e.g. 'typewriter') related to a target word (e.g. 'piano') significantly more often than an unrelated object picture (e.g. 'bucket') as well as a visually matched control (e.g. 'couch'). Results of the two experiments suggest that manipulation knowledge of words is retrieved without conscious effort and that manipulation knowledge constitutes a part of the lexical-semantic representation of objects.

11.
Infant directed speech (IDS) is a speech register characterized by simpler sentences, a slower rate, and more variable prosody. Recent work has implicated it in more subtle aspects of language development. Kuhl et al. (1997) demonstrated that segmental cues for vowels are affected by IDS in a way that may enhance development: the average locations of the extreme "point" vowels (/a/, /i/ and /u/) are further apart in acoustic space. If infants learn speech categories, in part, from the statistical distributions of such cues, these changes may specifically enhance speech category learning. We revisited this by asking (1) if these findings extend to a new cue (Voice Onset Time, a cue for voicing); (2) whether they extend to the interior vowels, which are much harder to learn and/or discriminate; and (3) whether these changes may be an unintended phonetic consequence of factors like speaking rate or prosodic changes associated with IDS. Eighteen caregivers were recorded reading a picture book including minimal pairs for voicing (e.g., beach/peach) and a variety of vowels to either an adult or their infant. Acoustic measurements suggested that VOT was different in IDS, but not in a way that necessarily supports better development, and that these changes are almost entirely due to the slower rate of speech in IDS. Measurements of the vowels suggested that in addition to changes in the mean, there was also an increase in variance, and statistical modeling suggests that this may counteract the benefit of any expansion of the vowel space. As a whole, this suggests that changes in segmental cues associated with IDS may be an unintended by-product of the slower rate of speech and different prosodic structure, and do not necessarily derive from a motivation to enhance development.
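Two of the quantities discussed above, the size of the vowel space and the separability of neighboring categories, can be illustrated with simple formant statistics. The sketch below uses hypothetical F1/F2 means and standard deviations to show how an expanded /a/–/i/–/u/ triangle can coexist with reduced category separability once token variance increases; the numbers and the d'-style index are illustrative assumptions, not the paper's analysis.

```python
# Illustrative sketch with hypothetical formant values (Hz): vowel-space area
# from the point vowels, and a rough separability index showing how increased
# token variance can offset an expanded vowel space.
import math

def triangle_area(p1, p2, p3):
    """Area of the /a/-/i/-/u/ triangle in (F1, F2) space (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

def separability(mean_a, mean_b, sd_a, sd_b):
    """Mean difference scaled by pooled SD (d'-like): larger variance lowers it."""
    pooled_sd = math.sqrt((sd_a ** 2 + sd_b ** 2) / 2)
    return abs(mean_a - mean_b) / pooled_sd

# Hypothetical mean (F1, F2) values for adult-directed vs. infant-directed speech.
ads = {"a": (850, 1220), "i": (310, 2300), "u": (330, 870)}
ids = {"a": (950, 1300), "i": (290, 2500), "u": (310, 800)}   # "expanded" space

print(triangle_area(*ads.values()), triangle_area(*ids.values()))

# Wider mean separation along F2 for /i/ vs /u/, but inflated token variance:
print(separability(2300, 870, sd_a=120, sd_b=120))   # ADS
print(separability(2500, 800, sd_a=220, sd_b=220))   # IDS: lower despite wider means
```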

12.
One of the central themes in the study of language acquisition is the gap between the linguistic knowledge that learners demonstrate and the apparent inadequacy of linguistic input to support induction of this knowledge. One of the first linguistic abilities in the course of development to exemplify this problem is speech perception: specifically, learning the sound system of one's native language. Native-language sound systems are defined by meaningful contrasts among words in a language, yet infants learn these sound patterns before any significant number of words are acquired. Previous approaches to this learning problem have suggested that infants can learn phonetic categories from statistical analysis of auditory input, without regard to word referents. Experimental evidence presented here suggests instead that young infants can use visual cues present in word-labeling situations to categorize phonetic information. In Experiment 1, 9-month-old English-learning infants failed to discriminate two non-native phonetic categories, establishing baseline performance in a perceptual discrimination task. In Experiment 2, these infants succeeded at discrimination after watching contrasting visual cues (i.e., videos of two novel objects) paired consistently with the two non-native phonetic categories. In Experiment 3, these infants failed at discrimination after watching the same visual cues, but paired inconsistently with the two phonetic categories. At an age before which memory of word labels is demonstrated in the laboratory, 9-month-old infants use contrastive pairings between objects and sounds to influence their phonetic sensitivity. Phonetic learning may have a more functional basis than previous statistical learning mechanisms assume: infants may use cross-modal associations inherent in social contexts to learn native-language phonetic categories.
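The distributional-learning account that this study argues goes beyond assumes that phonetic categories can be recovered from the shape of the cue distribution alone. A minimal, hypothetical illustration of that baseline idea: fit a two-component Gaussian mixture to simulated VOT values and check that two voicing categories emerge. The simulated values and the use of scikit-learn are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch of the distributional-learning baseline: recover two
# voicing categories from simulated VOT values (ms) with a Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical bimodal VOT distribution: short-lag (~10 ms) and long-lag (~60 ms) tokens.
vot = np.concatenate([rng.normal(10, 5, 200), rng.normal(60, 10, 200)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(vot)
print(sorted(gmm.means_.ravel()))       # roughly [10, 60]: two categories recovered
print(gmm.predict([[15.0], [55.0]]))    # the two tokens fall in different categories
```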

13.
Infants and adults are well able to match auditory and visual speech, but the cues on which they rely (viz. temporal, phonetic, and energetic correspondence in the auditory and visual speech streams) may differ. Here we assessed the relative contribution of the different cues using sine-wave speech (SWS). Adults (N = 52) and infants (N = 34, aged between 5 and 15 months) matched two trisyllabic speech sounds ('kalisu' and 'mufapi'), either natural or SWS, with visual speech information. On each trial, adults saw two articulating faces and matched a sound to one of these, while infants were presented the same stimuli in a preferential looking paradigm. Adults' performance was almost flawless with natural speech, but was significantly less accurate with SWS. In contrast, infants matched the sound to the articulating face equally well for natural speech and SWS. These results suggest that infants rely to a lesser extent on phonetic cues than adults do to match audio to visual speech. This is in line with the notion that the ability to extract phonetic information from the visual signal increases during development, and suggests that phonetic knowledge might not be the basis for early audiovisual correspondence detection in speech.

14.
Distributional information is a potential cue for learning syntactic categories. Recent studies demonstrate a developmental trajectory in the level of abstraction of distributional learning in young infants. Here we investigate the effect of prosody on infants' learning of adjacent relations between words. Twelve- to thirteen-month-old infants were exposed to an artificial language comprised of three-word sentences of the form aXb and cYd, where X and Y words differed in the number of syllables. Training sentences contained a prosodic boundary between either the first and the second word or the second and the third word. Subsequently, infants were tested on novel test sentences that contained new X and Y words and a flat prosody with no grouping cues. Infants successfully discriminated between novel grammatical and ungrammatical sentences, suggesting that the learned adjacent relations can be abstracted across words and prosodic conditions. Under the conditions tested, prosody may be only a weak constraint on syntactic categorization.

15.
The present experiments investigated how the process of statistically segmenting words from fluent speech is linked to the process of mapping meanings to words. Seventeen-month-old infants first participated in a statistical word segmentation task, which was immediately followed by an object-label-learning task. Infants presented with labels that were words in the fluent speech used in the segmentation task were able to learn the object labels. However, infants presented with labels consisting of novel syllable sequences (nonwords; Experiment 1) or familiar sequences with low internal probabilities (part-words; Experiment 2) did not learn the labels. Thus, prior segmentation opportunities, but not mere frequency of exposure, facilitated infants' learning of object labels. This work provides the first demonstration that exposure to word forms in a statistical word segmentation task facilitates subsequent word learning.
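The word/part-word/nonword distinction in this kind of study rests on transitional probabilities between syllables, TP(Y|X) = frequency(XY) / frequency(X): syllable pairs inside a word have high TPs, while pairs that span a word boundary (the source of part-words) have low TPs. A minimal sketch under that definition, using a hypothetical syllable inventory rather than the study's actual stimuli:

```python
# Minimal sketch: transitional probabilities (TPs) between adjacent syllables in
# a continuous stream, as used in statistical word segmentation studies.
# The "words" and syllables below are hypothetical, not the study's stimuli.
import random
from collections import Counter

words = ["go la tu", "pa bi ku", "ti bu do"]   # hypothetical trisyllabic words
random.seed(0)
stream = []
for _ in range(100):                           # concatenate the words in random order
    for w in random.sample(words, len(words)):
        stream.extend(w.split())

pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])
tp = {pair: n / syll_counts[pair[0]] for pair, n in pair_counts.items()}

# TP(Y|X) = freq(XY) / freq(X): high within words, low across word boundaries.
print(tp.get(("go", "la"), 0.0))   # within-word transition: ~1.0
print(tp.get(("tu", "pa"), 0.0))   # boundary transition (what makes part-words low-TP): well below 1.0
```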

16.
Huettig, F., & Altmann, G. T. (2005). Cognition, 96(1), B23–B32.
When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84-107; Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632-1634]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word 'piano' when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c) both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word 'piano' unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.

17.
When a listener hears a word (beef), current theories of spoken word recognition posit the activation of both lexical (beef) and sublexical (/b/, /i/, /f/) representations. No lexical representation can be settled on for an unfamiliar utterance (peef). The authors examined the perception of nonwords (peef) as a function of words or nonwords heard 10-20 min earlier. In lexical decision, nonword recognition responses were delayed if a similar word had been heard earlier. In contrast, nonword processing was facilitated by the earlier presentation of a similar nonword (baff-paff). This pattern was observed for both word-initial (beef-peef) and word-final (job-jop) deviations. With the word-in-noise task, real word primes (beef) increased real-word intrusions for the target nonword (peef), but only consonant-vowel (CV) or vowel-consonant (VC) intrusions were increased with similar pseudoword primes (baff-paff). The results across tasks and experiments support both a lexical neighborhood view of activation and sublexical representations based on chunks larger than individual phonemes (CV or VC sequences).

18.
A critical question about early word learning is whether word learning constraints such as mutual exclusivity exist and foster early language acquisition. It is well established that children will map a novel label to a novel rather than a familiar object. Evidence for the role of mutual exclusivity in such indirect word learning has been questioned because: (1) it comes mostly from 2- and 3-year-olds, and (2) the findings might be accounted for, not by children avoiding second labels, but by the novel object, which creates a lexical gap children are motivated to fill. Three studies addressed these concerns by having only a familiar object visible. Fifteen- to seventeen-month-olds and 18- to 20-month-olds were selected to straddle the vocabulary spurt. In Study 1, babies saw a familiar object and an opaque bucket as a location to search. Study 2 handed babies the familiar object to play with. Study 3 eliminated an obvious location to search. On the whole, babies at both ages resisted second labels for objects and, with some qualifications, tended to search for a better referent for the novel label. Thus mutual exclusivity is in place before the onset of the naming explosion. The findings demonstrate that lexical constraints enable babies to learn words even under non-optimal conditions, when speakers are not clear and referents are not visible. The results are discussed in relation to an alternative social-pragmatic account.

19.
What is the nature of early words? Specifically, do infants expect words for objects to refer to kinds or to distinct shapes? The current study investigated this question by testing whether 10-month-olds expect internal object properties to be predicted by linguistic labels. A looking-time method was employed. Infants were familiarized with pairs of identical or different objects that made identical or different sounds. During test, before the sounds were demonstrated, paired objects were labeled with one repeated count-noun label or two distinct labels. Results showed that infants expected objects labeled with distinct labels to make different sounds and objects labeled with repeated labels to make identical sounds, regardless of the objects' appearance. These findings indicate that the 10-month-olds' expectations about internal properties of objects were driven by labeling and provide evidence that even at the beginning of word learning, infants expect distinct labels to refer to different kinds.

20.
The role of temporal synchrony and syllable distinctiveness in preverbal infants' learning of word-object relations was investigated. In Experiment 1, 7- and 8-month-olds (N = 64) were habituated under conditions where two similar-sounding syllables, /tah/ and /gah/, were spoken simultaneously with the motions of one of two sets of objects (synchronous) or out of phase with the motions (asynchronous). On test trials, 8-month-olds, but not 7-month-olds, showed learning of the relations in the synchronous condition but not in the asynchronous condition. Furthermore, in Experiment 2, following habituation to one of the synchronous syllable-object pairs, 7-month-olds (n = 8) discriminated the syllables and the objects. In Experiment 3, following habituation to two distinct syllables, /tah/-/gih/ or /gah/-/tih/, paired with identical objects, 7-month-olds (n = 40) showed learning of the relations, again only in the synchronous condition. Thus, synchrony, which mothers naturally provide between words and object motions, facilitated the mapping onto objects of similar-sounding syllables at 8 months of age and distinct syllables at 7 months of age. These findings suggest an interaction between infants' perception of synchrony and of syllable distinctiveness during early word-mapping development.
