Similar documents
20 similar documents found (search time: 46 ms)
1.
Although there is mounting evidence that selective social learning begins in infancy, the psychological mechanisms underlying this ability are currently a controversial issue. The purpose of this study is to investigate whether theory of mind abilities and statistical learning skills are related to infants’ selective social learning. Seventy‐seven 18‐month‐olds were first exposed to a reliable or an unreliable speaker and then completed a word learning task, two theory of mind tasks, and a statistical learning task. If domain‐general abilities are linked to selective social learning, then infants who demonstrate superior performance on the statistical learning task should perform better on the selective learning task, that is, should be less likely to learn words from an unreliable speaker. Alternatively, if domain‐specific abilities are involved, then superior performance on theory of mind tasks should be related to selective learning performance. Findings revealed that, as expected, infants were more likely to learn a novel word from a reliable speaker. Importantly, infants who passed a theory of mind task assessing knowledge attribution were significantly less likely to learn a novel word from an unreliable speaker compared to infants who failed this task. No such effect was observed for the other tasks. These results suggest that infants who possess superior social‐cognitive abilities are more apt to reject an unreliable speaker as an informant. A video abstract of this article can be viewed at: https://youtu.be/zuuCniHYzqo

2.
There is considerable evidence that labeling supports infants' object categorization. Yet in daily life, most of the category exemplars that infants encounter will remain unlabeled. Inspired by recent evidence from machine learning, we propose that infants successfully exploit this sparsely labeled input through “semi‐supervised learning.” Providing only a few labeled exemplars leads infants to initiate the process of categorization, after which they can integrate all subsequent exemplars, labeled or unlabeled, into their evolving category representations. Using a classic novelty preference task, we introduced 2‐year‐old infants (n = 96) to a novel object category, varying whether and when its exemplars were labeled. Infants were equally successful whether all exemplars were labeled (fully supervised condition) or only the first two exemplars were labeled (semi‐supervised condition), but they failed when no exemplars were labeled (unsupervised condition). Furthermore, the timing of the labeling mattered: when the labeled exemplars were provided at the end, rather than the beginning, of familiarization (reversed semi‐supervised condition), infants failed to learn the category. This provides the first evidence of semi‐supervised learning in infancy, revealing that infants excel at learning from exactly the kind of input that they typically receive in acquiring real‐world categories and their names.
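The semi‐supervised logic described in this abstract can be caricatured in a few lines: labeled exemplars initiate a category, after which unlabeled exemplars are folded into the evolving representation. The following toy sketch is illustrative only (the exemplar values, similarity threshold, and prototype‐updating rule are invented, not the authors' model), but it reproduces the qualitative pattern across the four labeling conditions:

```python
import statistics

def learn_category(exemplars, labeled_indices):
    """Toy incremental semi-supervised category learner (illustrative only).

    A category is initiated only once a labeled exemplar arrives; after that,
    unlabeled exemplars are also integrated, provided they resemble the
    current prototype. Exemplars seen before any label are never integrated,
    mirroring the failure in the reversed semi-supervised condition.
    """
    members = []
    for i, x in enumerate(exemplars):
        labeled = i in labeled_indices
        if not members and not labeled:
            continue  # no category initiated yet: this exemplar is lost
        if labeled or abs(x - statistics.mean(members)) < 2.0:
            members.append(x)  # integrate into the evolving representation
    prototype = statistics.mean(members) if members else None
    return prototype, len(members)

exemplars = [5.0, 5.4, 4.8, 5.2, 5.1, 4.9]
full, n_full = learn_category(exemplars, labeled_indices=set(range(6)))  # fully supervised
semi, n_semi = learn_category(exemplars, labeled_indices={0, 1})         # semi-supervised
rev,  n_rev  = learn_category(exemplars, labeled_indices={4, 5})         # reversed semi-supervised
unsup, n_un  = learn_category(exemplars, labeled_indices=set())          # unsupervised
```

In this sketch the semi‐supervised learner ends up with the same prototype as the fully supervised one, the unsupervised learner forms no category at all, and the reversed condition retains only the two late labeled exemplars, since unlabeled input seen before any label is never integrated.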

3.
In previous work, 11‐month‐old infants were able to learn rules about the relation of the consonants in CVCV words from just four examples. The rules involved phonetic feature relations (same voicing or same place of articulation), and infants' learning was impeded when pairs of words allowed alternative possible generalizations (e.g. two words both contained the specific consonants p and t). Experiment 1 asked whether a small number of such spurious generalizations found in a randomly ordered list of 24 different words would also impede learning. It did – infants showed no sign of learning the rule. To ask whether it was the overall set of words or their order that prevented learning, Experiment 2 reordered the words to avoid local spurious generalizations. Infants showed robust learning. Infants thus appear to entertain spurious generalizations based on small, local subsets of stimuli. The results support a characterization of infants as incremental rather than batch learners.

4.
Selective learning (SL) is the ability to select items to learn from among other items. It requires the use of the executive processes of metacognitive control and working memory, which are considered to be mediated by the frontal cortex and its circuitry. We studied the efficiency with which verbal items of greater value are selectively learned from among items varying in value in 14 children ages 8-15 years who had sustained severe traumatic brain injury (TBI) and in 39 typically developing age-matched children. We hypothesized that children with TBI would be disproportionately compromised in selective learning efficiency in contrast to memory span when compared to normally developing children. The results supported our hypothesis, as children with TBI performed significantly worse than controls on a measure of selective learning efficiency, but the two groups performed similarly on a measure of word recall within the same task. Furthermore, the effect of TBI on performance was demonstrated to take place at the time of encoding, rather than at retrieval.

5.
From the very first moments of their lives, infants selectively attend to the visible orofacial movements of their social partners and apply their exquisite speech perception skills to the service of lexical learning. Here we explore how early bilingual experience modulates children's ability to use visible speech as they form new lexical representations. Using a cross‐modal word‐learning task, bilingual children aged 30 months were tested on their ability to learn new lexical mappings in either the auditory or the visual modality. Lexical recognition was assessed either in the same modality as the one used at learning (‘same modality’ condition: auditory test after auditory learning, visual test after visual learning) or in the other modality (‘cross‐modality’ condition: visual test after auditory learning, auditory test after visual learning). The results revealed that like their monolingual peers, bilingual children successfully learn new words in either the auditory or the visual modality and show cross‐modal recognition of words following auditory learning. Interestingly, as opposed to monolinguals, they also demonstrate cross‐modal recognition of words upon visual learning. Collectively, these findings indicate a bilingual edge in visual word learning, expressed in the capacity to form a recoverable cross‐modal representation of visually learned words.

6.
To learn to produce speech, infants must effectively monitor and assess their own speech output. Yet very little is known about how infants perceive speech produced by an infant, which has higher voice pitch and formant frequencies compared to adult or child speech. Here, we tested whether pre‐babbling infants (at 4–6 months) prefer listening to vowel sounds with infant vocal properties over vowel sounds with adult vocal properties. A listening preference favoring infant vowels may derive from their higher voice pitch, which has been shown to attract infant attention in infant‐directed speech (IDS). In addition, infants' nascent articulatory abilities may induce a bias favoring infant speech given that 4‐ to 6‐month‐olds are beginning to produce vowel sounds. We created infant and adult /i/ (‘ee’) vowels using a production‐based synthesizer that simulates the act of speaking in talkers at different ages and then tested infants across four experiments using a sequential preferential listening task. The findings provide the first evidence that infants preferentially attend to vowel sounds with infant voice pitch and/or formants over vowel sounds with no infant‐like vocal properties, supporting the view that infants' production abilities influence how they process infant speech. The findings with respect to voice pitch also reveal parallels between IDS and infant speech, raising new questions about the role of this speech register in infant development. Research exploring the underpinnings and impact of this perceptual bias can expand our understanding of infant language development.

7.
In this study, we propose that infant social cognition may ‘bootstrap' the successive development of domain‐general cognition in line with the cultural intelligence hypothesis. Using a longitudinal design, 6‐month‐old infants (N = 118) were assessed on two basic social cognitive tasks targeting the abilities to share attention with others and to understand other people's actions. At 10 months, we measured the quality of the child's social learning environment, indexed by parents' abilities to provide scaffolding behaviors during a problem‐solving task. Eight months later, the children were followed up with a cognitive test‐battery, including tasks of inhibitory control and working memory. Our results showed that better infant social action understanding interacted with better parental scaffolding skills in predicting simple inhibitory control in toddlerhood. This suggests that infants who are better at understanding others' actions are also better equipped to make the most of existing social learning opportunities, which in turn may benefit future non‐social cognitive outcomes.

8.
Recent years have seen a flourishing of Natural Language Processing models that can mimic many aspects of human language fluency. These models harness a simple, decades-old idea: It is possible to learn a lot about word meanings just from exposure to language, because words similar in meaning are used in language in similar ways. The successes of these models raise the intriguing possibility that exposure to word use in language also shapes the word knowledge that children amass during development. However, this possibility is strongly challenged by the fact that models use language input and learning mechanisms that may be unavailable to children. Across three studies, we found that unrealistically complex input and learning mechanisms are unnecessary. Instead, simple regularities of word use in children's language input that they have the capacity to learn can foster knowledge about word meanings. Thus, exposure to language may play a simple but powerful role in children's growing word knowledge. A video abstract of this article can be viewed at https://youtu.be/dT83dmMffnM .

Research Highlights

  • Natural Language Processing (NLP) models can learn that words are similar in meaning from higher-order statistical regularities of word use.
  • Unlike NLP models, infants and children may primarily learn only simple co-occurrences between words.
  • We show that infants' and children's language input is rich in simple co-occurrences that can support learning similarities in meaning between words.
  • We find that simple co-occurrences can explain infants' and children's knowledge that words are similar in meaning.
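The "decades‐old idea" this abstract builds on is distributional similarity: words used in similar contexts accumulate similar co‐occurrence profiles, so their similarity in meaning can be estimated without any labeled semantics. A minimal sketch of that computation (the toy corpus and window size are invented for illustration, not the authors' materials):

```python
from collections import Counter
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """For each word, count the words occurring within `window` positions of it."""
    vectors = {}
    for sent in sentences:
        for i, w in enumerate(sent):
            context = sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]
            vectors.setdefault(w, Counter()).update(context)
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "eat the red apple".split(),
    "eat the green apple".split(),
    "eat the red berry".split(),
    "see the big truck".split(),
    "see the small truck".split(),
]
vectors = cooccurrence_vectors(corpus)

# 'apple' and 'berry' share contexts (eat, red); 'apple' and 'truck' mostly do not.
sim_food = cosine(vectors["apple"], vectors["berry"])
sim_diff = cosine(vectors["apple"], vectors["truck"])
```

Even on this five‐sentence corpus, the two food words come out more similar to each other than either is to "truck", which is the core regularity the abstract argues children's input also provides.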

9.
Children who rapidly recognize and interpret familiar words typically have accelerated lexical growth, providing indirect evidence that lexical processing efficiency (LPE) is related to word‐learning ability. Here we directly tested whether children with better LPE are better able to learn novel words. In Experiment 1, 17‐ and 30‐month‐olds were tested on an LPE task and on a simple word‐learning task. The 17‐month‐olds’ LPE scores predicted word learning in a regression model, and only those with relatively good LPE showed evidence of learning. The 30‐month‐olds learned novel words quite well regardless of LPE, but in a more difficult word‐learning task (Experiment 2), their LPE predicted word‐learning ability. These findings suggest that LPE supports word‐learning processes, especially when learning is difficult.

10.
Children learn their earliest words through social interaction, but it is unknown how much they rely on social information. Some theories argue that word learning is fundamentally social from its outset, with even the youngest infants understanding intentions and using them to infer a social partner's target of reference. In contrast, other theories argue that early word learning is largely a perceptual process in which young children map words onto salient objects. One way of unifying these accounts is to model word learning as weighted cue combination, in which children attend to many potential cues to reference, but only gradually learn the correct weight to assign each cue. We tested four predictions of this kind of naïve cue combination account, using an eye‐tracking paradigm that combines social word teaching and two‐alternative forced‐choice testing. None of the predictions were supported. We thus propose an alternative unifying account: children are sensitive to social information early, but their ability to gather and deploy this information is constrained by domain‐general cognitive processes. Developmental changes in children's use of social cues emerge not from learning the predictive power of social cues, but from the gradual development of attention, memory, and speed of information processing.

11.
Distributional information is a potential cue for learning syntactic categories. Recent studies demonstrate a developmental trajectory in the level of abstraction of distributional learning in young infants. Here we investigate the effect of prosody on infants' learning of adjacent relations between words. Twelve‐ to thirteen‐month‐old infants were exposed to an artificial language comprised of 3‐word‐sentences of the form aXb and cYd, where X and Y words differed in the number of syllables. Training sentences contained a prosodic boundary between either the first and the second word or the second and the third word. Subsequently, infants were tested on novel test sentences that contained new X and Y words and also contained a flat prosody with no grouping cues. Infants successfully discriminated between novel grammatical and ungrammatical sentences, suggesting that the learned adjacent relations can be abstracted across words and prosodic conditions. Under the conditions tested, prosody may be only a weak constraint on syntactic categorization. Copyright © 2011 John Wiley & Sons, Ltd.

12.
In several previous studies, 18‐month‐old infants who were directly addressed demonstrated more robust imitative behaviors than infants who simply observed another's actions, leading theorists to suggest that child‐directed interactions carried unique informational value. However, these data came exclusively from cultural communities where direct teaching is commonplace, raising the possibility that the findings reflect regularities in infants' social experiences rather than innate or a priori learning mechanisms. The current studies consider infants' imitative learning from child‐directed teaching and observed interaction in two cultural communities, a Yucatec Mayan village where infants have been described as experiencing relatively limited direct instruction (Study 1) and a US city where infants are regularly directly engaged (Study 2). Eighteen‐month‐old infants from each community participated in a within‐subjects study design where they were directly taught to use novel objects on one day and observed actors using different objects on another day. Mayan infants showed relative increases in imitative behaviors on their second visit to the lab as compared to their first visit, but there was no effect of condition. US infants showed no difference in imitative behavior in the child‐directed vs. observed conditions; however, infants who were directly addressed on their first visit showed significantly higher overall imitation rates than infants who observed on their first visit. Together, these findings call into question the idea that child‐directed teaching holds automatic or universal informational value.

13.
One of the central themes in the study of language acquisition is the gap between the linguistic knowledge that learners demonstrate, and the apparent inadequacy of linguistic input to support induction of this knowledge. One of the first linguistic abilities in the course of development to exemplify this problem is in speech perception: specifically, learning the sound system of one’s native language. Native-language sound systems are defined by meaningful contrasts among words in a language, yet infants learn these sound patterns before any significant numbers of words are acquired. Previous approaches to this learning problem have suggested that infants can learn phonetic categories from statistical analysis of auditory input, without regard to word referents. Experimental evidence presented here suggests instead that young infants can use visual cues present in word-labeling situations to categorize phonetic information. In Experiment 1, 9-month-old English-learning infants failed to discriminate two non-native phonetic categories, establishing baseline performance in a perceptual discrimination task. In Experiment 2, these infants succeeded at discrimination after watching contrasting visual cues (i.e., videos of two novel objects) paired consistently with the two non-native phonetic categories. In Experiment 3, these infants failed at discrimination after watching the same visual cues, but paired inconsistently with the two phonetic categories. At an age before which memory of word labels is demonstrated in the laboratory, 9-month-old infants use contrastive pairings between objects and sounds to influence their phonetic sensitivity. Phonetic learning may have a more functional basis than previous statistical learning mechanisms assume: infants may use cross-modal associations inherent in social contexts to learn native-language phonetic categories.

14.
Infants start learning words, the building blocks of language, at least by 6 months. To do so, they must be able to extract the phonological form of words from running speech. A rich literature has investigated this process, termed word segmentation. We addressed the fundamental question of how infants of different ages segment words from their native language using a meta‐analytic approach. Based on previous popular theoretical and experimental work, we expected infants to display familiarity preferences early on, with a switch to novelty preferences as infants become more proficient at processing and segmenting native speech. We also considered the possibility that this switch may occur at different points in time as a function of infants' native language and took into account the impact of various task‐ and stimulus‐related factors that might affect difficulty. The combined results from 168 experiments reporting on data gathered from 3774 infants revealed a persistent familiarity preference across all ages. There was no significant effect of additional factors, including native language and experiment design. Further analyses revealed no sign of selective data collection or reporting. We conclude that models of infant information processing that are frequently cited in this domain may not, in fact, apply in the case of segmenting words from native speech.

15.
How do young children learn about causal structure in an uncertain and variable world? We tested whether they can use observed probabilistic information to solve causal learning problems. In two experiments, 24‐month‐olds observed an adult produce a probabilistic pattern of causal evidence. The toddlers then were given an opportunity to design their own intervention. In Experiment 1, toddlers saw one object bring about an effect with a higher probability than a second object. In Experiment 2, the frequency of the effect was held constant, though its probability differed. After observing the probabilistic evidence, toddlers in both experiments chose to act on the object that was more likely to produce the effect. The results demonstrate that toddlers can learn about cause and effect without trial‐and‐error or linguistic instruction on the task, simply by observing the probabilistic patterns of evidence resulting from the imperfect actions of other social agents. Such observational causal learning from probabilistic displays supports human children's rapid cultural learning.

16.
Infants in the early stages of word learning have difficulty learning lexical neighbors (i.e. word pairs that differ by a single phoneme), despite their ability to discriminate the same contrast in a purely auditory task. While prior work has focused on top‐down explanations for this failure (e.g. task demands, lexical competition), none has examined if bottom‐up acoustic‐phonetic factors play a role. We hypothesized that lexical neighbor learning could be improved by incorporating greater acoustic variability in the words being learned, as this may buttress still‐developing phonetic categories, and help infants identify the relevant contrastive dimension. Infants were exposed to pictures accompanied by labels spoken by either a single or multiple speakers. At test, infants in the single‐speaker condition failed to recognize the difference between the two words, while infants who heard multiple speakers discriminated between them.

17.
Human infants have an enormous amount to learn from others to become full-fledged members of their culture. Thus, it is important that they learn from reliable, rather than unreliable, models. In two experiments, we investigated whether 14-month-olds (a) imitate instrumental actions and (b) adopt the individual preferences of a model differently depending on the model’s previous reliability. Infants were shown a series of videos in which a model acted on familiar objects either competently or incompetently. They then watched as the same model demonstrated a novel action on an object (imitation task) and preferentially chose one of two novel objects (preference task). Infants’ imitation of the novel action was influenced by the model’s previous reliability; they copied the action more often when the model had been reliable. However, their preference for one of the novel objects was not influenced by the model’s previous reliability. We conclude that already by 14 months of age, infants discriminate between reliable and unreliable models when learning novel actions.

18.
The goal of the current study was to examine the relationship between mothers' spontaneous facial expressions of pain and fear immediately preceding their infants' immunizations and infants' facial expressions of pain immediately following immunizations. Infants' observations of mothers' faces prior to immunization also were examined to explore whether these observations moderated the effect of mothers' facial expressions on infant pain. The final sample included 58 mothers and their infants. Video data were used to code maternal facial expressions, infants' observations, and infants' expressions of pain. Infants who observed their mothers' faces had mothers who expressed significantly more fear pre‐needle. Furthermore, mothers' facial expressions of mild fear pre‐needle were associated with lower levels of infants' pain expression post‐needle. A regression analysis confirmed maternal facial expressions of mild fear pre‐needle as the strongest predictor of infant pain post‐needle after controlling for infants' observations of mothers' faces. Mothers' subtle facial expressions of fear may indicate a relationship history of empathic caregiving that functions to support infants' abilities to regulate distress following painful procedures. Interventions aimed at improving caregiver sensitivity to infants' emotional cues may prove beneficial to infants in pain. Future directions in research are discussed.

19.
At 14 months, children appear to struggle to apply their fairly well-developed speech perception abilities to learning similar sounding words (e.g., bih/dih; Stager & Werker, 1997). However, variability in nonphonetic aspects of the training stimuli seems to aid word learning at this age. Extant theories of early word learning cannot account for this benefit of variability. We offer a simple explanation for this range of effects based on associative learning. Simulations suggest that if infants encode both noncontrastive information (e.g., cues to speaker voice) and meaningful linguistic cues (e.g., place of articulation or voicing), then associative learning mechanisms predict these variability effects in early word learning. Crucially, this means that despite the importance of task variables in predicting performance, this body of work shows that phonological categories are still developing at this age, and that the structure of noninformative cues has critical influences on word learning abilities.

20.
Previous research with artificial language learning paradigms has shown that infants are sensitive to statistical cues to word boundaries (Saffran, Aslin & Newport, 1996) and that they can use these cues to extract word‐like units (Saffran, 2001). However, it is unknown whether infants use statistical information to construct a receptive lexicon when acquiring their native language. In order to investigate this issue, we rely on the fact that besides real words a statistical algorithm extracts sound sequences that are highly frequent in infant‐directed speech but constitute nonwords. In three experiments, we use a preferential listening paradigm to test French‐learning 11‐month‐old infants' recognition of highly frequent disyllabic sequences from their native language. In Experiments 1 and 2, we use nonword stimuli and find that infants listen longer to high‐frequency than to low‐frequency sequences. In Experiment 3, we compare high‐frequency nonwords to real words in the same frequency range, and find that infants show no preference. Thus, at 11 months, French‐learning infants recognize highly frequent sound sequences from their native language and fail to differentiate between words and nonwords among these sequences. These results are evidence that they have used statistical information to extract word candidates from their input and stored them in a ‘protolexicon’, containing both words and nonwords.
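The statistical cue from Saffran, Aslin and Newport (1996) referenced in this abstract is the transitional probability between adjacent syllables, which is high inside words and dips at word boundaries. A minimal sketch of that computation (the syllable inventory, nonsense words, and presentation order are invented for illustration):

```python
from collections import Counter

def transitional_probs(stream):
    """TP(A -> B) = count(A followed by B) / count(A as a non-final syllable)."""
    pair_counts = Counter(zip(stream, stream[1:]))
    syl_counts = Counter(stream[:-1])
    return {pair: n / syl_counts[pair[0]] for pair, n in pair_counts.items()}

# Saffran-style nonsense words, concatenated without pauses into one stream.
words = [["bi", "da", "ku"], ["pa", "do", "ti"], ["go", "la", "bu"]]
order = [0, 1, 2, 0, 2, 1] * 10  # deterministic mixed ordering of the words
stream = []
for i in order:
    stream.extend(words[i])

tp = transitional_probs(stream)
# Within-word TPs (e.g. bi->da) are 1.0; TPs spanning a word boundary
# (e.g. ku->pa) are much lower, so a learner can posit boundaries at the dips.
```

Note that any syllable sequence occurring frequently enough, word or nonword, ends up with high internal statistics, which is exactly the property these experiments exploit to contrast high‐frequency nonwords with real words.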
