Similar Articles (20 results)
1.
In two studies, we investigated infants' preference for infant-directed (ID) action or 'motionese' (Brand, Baldwin & Ashburn, 2002) relative to adult-directed (AD) action. In Study 1, full-featured videos were shown to 32 6- to 8-month-olds, who demonstrated a strong preference for ID action. In Study 2, infants at 6–8 months (n = 28) and 11–13 months (n = 24) were shown either standard ID and AD clips, or clips in which demonstrators' faces were blurred to obscure emotional and eye-gaze information. Across both ages, infants showed evidence of preferring ID to AD action, even when faces were blurred. Infants did not have a preference for still-frame images of the demonstrators, indicating that the ID preference arose from action characteristics, not demonstrators' general appearance. These results suggest that motionese enhances infants' attention to action, possibly supporting infants' learning.

2.
Our aim was to investigate why 16-month-old infants fail to master a novel tool-use action via observational learning. We hypothesized that 16-month-olds' difficulties may be due to not understanding the goal of the observed action. To test this hypothesis, we investigated whether showing infants an explicit demonstration of the goal of the action before demonstrating the action would improve observational learning compared with a classic demonstration of the target action. We examined 16-month-old infants who observed a tool-use action consisting of grasping a rake-like tool to retrieve an out-of-reach toy, under five conditions. Only when infants were shown the goal of the action before demonstration did they show some success.

3.
Human infants have an enormous amount to learn from others to become full-fledged members of their culture. Thus, it is important that they learn from reliable, rather than unreliable, models. In two experiments, we investigated whether 14-month-olds (a) imitate instrumental actions and (b) adopt the individual preferences of a model differently depending on the model's previous reliability. Infants were shown a series of videos in which a model acted on familiar objects either competently or incompetently. They then watched as the same model demonstrated a novel action on an object (imitation task) and preferentially chose one of two novel objects (preference task). Infants' imitation of the novel action was influenced by the model's previous reliability; they copied the action more often when the model had been reliable. However, their preference for one of the novel objects was not influenced by the model's previous reliability. We conclude that already by 14 months of age, infants discriminate between reliable and unreliable models when learning novel actions.

4.
Research on initial conceptual knowledge and research on early statistical learning mechanisms have been, for the most part, two separate enterprises. We report a study with 11-month-old infants investigating whether they are sensitive to sampling conditions and whether they can integrate intentional information in a statistical inference task. Previous studies found that infants were able to make inferences from samples to populations, and vice versa [Xu, F., & Garcia, V. (2008). Intuitive statistics by 8-month-old infants. Proceedings of the National Academy of Sciences of the United States of America, 105, 5012-5015]. We found that when employing this statistical inference mechanism, infants are sensitive to whether a sample was randomly drawn from a population or not, and they take into account intentional information (e.g., explicitly expressed preference, visual access) when computing the relationship between samples and populations. Our results suggest that domain-specific knowledge is integrated with statistical inference mechanisms early in development.
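Purely to illustrate the sample-to-population logic this line of work builds on (not the study's actual stimuli or analysis), the sketch below computes how surprising a mostly-red sample of balls is when drawn at random from two hypothetical box compositions; all numbers are assumptions. An intentional sampler with an expressed preference or visual access would violate the random-draw assumption, which is the distinction the infants are reported to track.

```python
from math import comb

def hypergeometric(k_red, n_draw, box_red, box_white):
    """P(exactly k_red red balls in a random draw of n_draw balls
    from a box containing box_red red and box_white white balls)."""
    total = box_red + box_white
    return comb(box_red, k_red) * comb(box_white, n_draw - k_red) / comb(total, n_draw)

# A 4-red / 1-white sample is very unlikely under random sampling from a
# mostly-white box, but quite likely from a mostly-red box (assumed compositions).
sample_red, sample_size = 4, 5
p_from_mostly_white = hypergeometric(sample_red, sample_size, box_red=10, box_white=70)
p_from_mostly_red = hypergeometric(sample_red, sample_size, box_red=70, box_white=10)
print(f"P(sample | mostly-white box) = {p_from_mostly_white:.4f}")
print(f"P(sample | mostly-red box)   = {p_from_mostly_red:.4f}")
```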

5.
The present study investigated the importance of Event Boundaries for 16- and 20-month-olds' (n = 80) memory for cartoons. The infants watched one of two cartoons with ellipses inserted covering the screen for 3 s either at Event Boundaries or at Non-Boundaries. After a two-week delay, both cartoons (one familiar and one novel) were presented simultaneously without ellipses while the infants were eye-tracked. According to recent evidence, a familiarity preference was expected. However, following Event Segmentation Theory, ellipses at Event Boundaries were expected to cause greater disturbance of encoding, and hence a weaker memory trace, evidenced by reduced familiarity preference relative to ellipses at Non-Boundaries. The results suggest that overall this was the case, documenting the importance of Boundaries for infant memory. Furthermore, planned analyses revealed that whereas the same pattern was found for the 20-month-old infants, no significant difference was found between the two conditions in the youngest age group.

6.
Infant speech discrimination can follow multiple trajectories depending on the language and the specific phonemes involved. Two understudied languages in terms of the development of infants' speech discrimination are Arabic and Hebrew. Purpose: The purpose of the present study was to examine the influence of listening experience with the native language on the discrimination of the voicing contrast /ba-pa/ in Arabic-learning infants, whose native language includes only the phoneme /b/, and in Hebrew-learning infants, whose native language includes both phonemes. Method: 128 Arabic-learning and Hebrew-learning infants, aged 4 to 6 and 10 to 12 months, were tested with the Visual Habituation Procedure. Results: 4-to-6-month-old infants discriminated /ba-pa/ regardless of their native language and order of presentation. However, only 10-to-12-month-old infants learning Hebrew retained this ability. 10-to-12-month-old infants learning Arabic did not discriminate the change from /ba/ to /pa/ but showed a tendency to discriminate the change from /pa/ to /ba/. Conclusions: This is the first study to report reduced discrimination of /ba-pa/ in older infants learning Arabic. Our findings are consistent with the notion that experience with the native language changes discrimination abilities and alters sensitivity to non-native contrasts, thus providing evidence for 'top-down' processing in young infants. The directional asymmetry in older infants learning Arabic can be explained by assimilation of the non-native consonant /p/ to the native Arabic category /b/, as predicted by current speech perception models.

7.
The present study examined whether infants' visual preferences for real objects and pictures are related to their manual object exploration skills. Fifty-nine 7-month-old infants were tested in a preferential looking task with a real object and its pictorial counterpart. All of the infants also participated in a manual object exploration task, in which they freely explored five toy blocks. Results revealed a significant positive relationship between infants' haptic scan levels in the manual object exploration task and their gaze behavior in the preferential looking task: The higher infants' haptic scan levels, the longer they looked at real objects compared to pictures. Our findings suggest that the specific exploratory action of haptically scanning an object is associated with infants' visual preference for real objects over pictures.

8.
Infants' early visual preferences for faces, and their observational learning abilities, are well-established in the literature. The current study examines how infants' attention changes as they become increasingly familiar with a person and the actions that person is demonstrating. The looking patterns of 12- (n = 61) and 16-month-old infants (n = 29) were tracked while they watched videos of an adult presenting novel actions with four different objects three times. A face-to-action ratio in visual attention was calculated for each repetition and summarized as a mean across all videos. The face-to-action ratio increased with each action repetition, indicating an increase in attention to the face relative to the action each additional time the action was demonstrated. Infants' prior familiarity with the object used was related to the face-to-action ratio in 12-month-olds, and initial looking behavior was related to the face-to-action ratio in the whole sample. Prior familiarity with the presenter, and infant gender and age, were not related to the face-to-action ratio. This study has theoretical implications for face preference and action observation in dynamic contexts.
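The abstract does not give the exact formula for the face-to-action ratio, so the sketch below is only a plausible operationalization under assumed numbers: looking time to the demonstrator's face divided by looking time to the action area, computed per repetition and averaged across a video. The looking times and the helper name are hypothetical.

```python
from statistics import mean

def face_to_action_ratio(face_ms, action_ms):
    """Hypothetical operationalization: looking time to the demonstrator's face
    divided by looking time to the object/action area, for one repetition."""
    return face_ms / action_ms

# Assumed (made-up) looking times in ms for one video with three repetitions
# of the same action; the abstract reports the ratio rising across repetitions.
repetitions = [
    {"face": 800, "action": 3200},   # 1st demonstration
    {"face": 1400, "action": 2600},  # 2nd demonstration
    {"face": 2100, "action": 1900},  # 3rd demonstration
]

ratios = [face_to_action_ratio(r["face"], r["action"]) for r in repetitions]
print("ratio per repetition:", [round(x, 2) for x in ratios])
print("mean ratio for this video:", round(mean(ratios), 2))
```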

9.
We conducted a close replication of the seminal work by Marcus and colleagues from 1999, which showed that after a brief auditory exposure phase, 7-month-old infants were able to learn and generalize a rule to novel syllables not previously present in the exposure phase. This work became the foundation for the theoretical framework by which we assume that infants are able to learn abstract representations and generalize linguistic rules. While some extensions of the original work have shown evidence of rule learning, the outcomes are mixed, and an exact replication of Marcus et al.'s study has thus far not been reported. A recent meta-analysis by Rabagliati and colleagues brings to light that the rule-learning effect depends on stimulus type (e.g., meaningfulness, speech vs. nonspeech) and is not as robust as often assumed. In light of the theoretical importance of the issue at stake, it is appropriate and necessary to assess the replicability and robustness of Marcus et al.'s findings. Here we have undertaken a replication across four labs with a large sample of 7-month-old infants (n = 96), using the same exposure patterns (ABA and ABB), methodology (Headturn Preference Paradigm), and original stimuli. As in the original study, we tested the hypothesis that infants are able to learn abstract "algebraic" rules and apply them to novel input. Our results did not replicate the original findings: infants showed no difference in looking time between test patterns consistent or inconsistent with the familiarization pattern they were exposed to.
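The "algebraic" rules at stake are patterns over syllable identity rather than over particular syllables. As a rough illustration (the syllables below are assumptions, not necessarily the original Marcus et al. stimuli), the sketch generates ABA familiarization items and checks whether a test item built from entirely novel syllables is consistent with the familiarization pattern, which is the generalization the infants are tested on.

```python
from itertools import product

def make_items(pattern, a_syllables, b_syllables):
    """Generate three-syllable items following an ABA or ABB pattern."""
    items = []
    for a, b in product(a_syllables, b_syllables):
        if pattern == "ABA":
            items.append((a, b, a))
        elif pattern == "ABB":
            items.append((a, b, b))
    return items

def consistent_with(item, pattern):
    """True if a three-syllable item instantiates the abstract pattern."""
    x, y, z = item
    if pattern == "ABA":
        return x == z and x != y
    if pattern == "ABB":
        return y == z and x != y

# Assumed familiarization syllables (illustrative only)
familiarization = make_items("ABA", ["ga", "li", "ni"], ["ti", "na", "gi"])

# Test items use entirely novel syllables; only the abstract pattern carries over.
test_consistent = ("wo", "fe", "wo")    # ABA structure
test_inconsistent = ("wo", "fe", "fe")  # ABB structure
print(consistent_with(test_consistent, "ABA"))    # True
print(consistent_with(test_inconsistent, "ABA"))  # False
```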

10.
Four experiments examined whether infants' use of task-relevant information in an action task could be facilitated by visual experience in the laboratory. Twelve- but not 9-month-old infants spontaneously used height information and chose an appropriate (taller) cover in search of a hidden tall toy. After watching examples of covering events in a teaching session, 9-month-old infants succeeded in an action task that involved the same event category; learning was not generalized to events from a different category. The present results demonstrate that learning through visual experience can be transferred to infants' subsequent actions. These findings shed light on the link between perception and action in infancy.

11.
Learning and retention effects in 3- and 6-month-old infants were analysed for the habituation, novelty preference, and visual expectation paradigms, which are assumed to indicate processing speed. A total of 119 infants participated in the study. The tasks related to the different paradigms were presented on three different days within a week and were repeated two weeks later to analyse retention effects. The results showed clear learning effects in all paradigms. The learning effects shown for different tasks were interrelated for 6-month-old infants, thus supporting the assumption of a shared latent dimension such as processing speed. Moreover, retention effects over an interval of two weeks could be shown for visual expectation (in both 3- and 6-month-old infants) and for novelty preference (only for 6-month-old infants). The comparatively high retention rates, especially in the visual expectation paradigm, are discussed.

12.
Most research on early language learning focuses on the objects that infants see and the words they hear in their daily lives, although growing evidence suggests that motor development is also closely tied to language development. To study the real-time behaviors required for learning new words during free-flowing toy play, we measured infants' visual attention and manual actions on to-be-learned toys. Parents and 12-to-26-month-old infants wore wireless head-mounted eye trackers, allowing them to move freely around a home-like lab environment. After the play session, infants were tested on their knowledge of object-label mappings. We found that how often parents named objects during play did not predict learning; instead, it was infants' attention during and around a labeling utterance that predicted whether an object-label mapping was learned. More specifically, infant visual attention alone did not predict word learning. Instead, coordinated, multimodal attention (when infants' hands and eyes were attending to the same object) predicted word learning. Our results implicate a causal pathway through which infants' bodily actions play a critical role in early word learning.
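As a rough sketch of what "coordinated, multimodal attention during a labeling utterance" could look like as a coded measure (the frame annotations, threshold, and function name below are all assumptions, not the authors' pipeline), an utterance counts as coordinated when gaze and hands are on the named object for most of the labeling window.

```python
def coordinated_during_label(gaze_frames, hand_frames, label_window, target, min_prop=0.5):
    """Hypothetical coding scheme: an object-label utterance counts as occurring
    under coordinated multimodal attention if, for at least `min_prop` of the
    frames in the labeling window, both gaze and hands are on the named object."""
    start, end = label_window
    frames = range(start, end)
    hits = sum(1 for f in frames
               if gaze_frames.get(f) == target and hand_frames.get(f) == target)
    return hits / len(frames) >= min_prop

# Made-up frame-by-frame annotations (frame index -> attended object)
gaze = {f: "toy_car" for f in range(100, 130)}
hands = {f: ("toy_car" if f < 120 else "toy_duck") for f in range(100, 130)}

# Gaze and hands coincide on the named object for 20 of 30 frames -> coordinated
print(coordinated_during_label(gaze, hands, label_window=(100, 130), target="toy_car"))
```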

13.
Three experiments demonstrate that biological movement facilitates young infants' recognition of the whole human form. A body discrimination task was used in which 6-, 9-, and 12-month-old infants were habituated to typical human bodies and then shown scrambled human bodies at test. Recovery of interest to the scrambled bodies was observed in 9- and 12-month-old infants in Experiment 1, but only when the body images were animated to move in a biologically possible way. In Experiment 2, nonbiological movement was incorporated into the typical and scrambled body images, but this did not facilitate body recognition in 9- and 12-month-olds. A preferential looking paradigm was used in Experiment 3 to determine whether infants had a spontaneous preference for the scrambled versus typical body stimuli when both were animated. The results showed that 12-month-olds preferred the scrambled body stimuli, 9-month-olds preferred the typical body stimuli, and the 6-month-olds showed no preference for either type of body stimuli. These findings suggest that human body recognition involves integrating form and movement, possibly in the superior temporal sulcus, from as early as 9 months of life.

14.
Both human and nonhuman primate adults use infant-directed facial and vocal expressions across many contexts when interacting with infants (e.g., feeding, playing). This infant-oriented style of communication, known as infant-directed speech (IDS), seems to benefit human infants in numerous ways, including facilitating language acquisition. Given the variety of contexts in which adults use IDS, we hypothesized that IDS supports learning beyond the linguistic domain and that these benefits may extend to nonhuman primates. We exposed 2.5-month-old rhesus macaque infants (n = 15) to IDS, adult-directed speech (ADS), and a non-social control (CTR) during a video presentation of unrelated stimuli. After a 5- or 60-minute delay, infants were shown the familiar video side-by-side with a novel video. Infants exhibited a novelty preference after the 5-minute delay, but not after the 60-minute delay, in the ADS and CTR conditions, and a novelty preference in the IDS condition only after the 60-minute delay. These results are the first to suggest that exposure to IDS affects infants' long-term memory, even in non-linguistic animals.

15.
Research has shown that infants are more likely to learn from certain and competent models than from uncertain and incompetent models. However, it is unknown which of these cues to a model's reliability infants consider more important. In Experiment 1, we investigated whether 14-month-old infants (n = 35) imitate and adopt tool choices selectively from an uncertain but competent compared to a certain but incompetent model. Infants watched videos in which an adult expressed either uncertainty but acted competently or expressed certainty but acted incompetently with familiar objects. In tool-choice tasks, the adult then chose one of two objects to operate an apparatus, and in imitation tasks, the adult then demonstrated a novel action. Infants did not adopt the model's choice in the tool-choice tasks but they imitated the uncertain but competent model more often than the certain but incompetent model in the imitation tasks. In Experiment 2, 14-month-olds (n = 33) watched videos in which an adult expressed only either certainty or uncertainty in order to test whether infants at this age are sensitive to a model's certainty. Infants imitated and adopted the tool choice from a certain model more than from an uncertain model. These results suggest that 14-month-olds acknowledge both a model's competence and certainty when learning novel actions. However, they rely more on a model's competence than on his certainty when both cues are in conflict. The ability to detect reliable models when learning how to handle cultural artifacts helps infants to become well-integrated members of their culture.

16.
Adults perceive emotional expressions categorically, with discrimination being faster and more accurate between expressions from different emotion categories (i.e. blends with two different predominant emotions) than between two stimuli from the same category (i.e. blends with the same predominant emotion). The current study sought to test whether facial expressions of happiness and fear are perceived categorically by pre-verbal infants, using a new stimulus set that was shown to yield categorical perception in adult observers (Experiments 1 and 2). These stimuli were then used with 7-month-old infants (N = 34) using a habituation and visual preference paradigm (Experiment 3). Infants were first habituated to an expression of one emotion, then presented with the same expression paired with a novel expression either from the same emotion category or from a different emotion category. After habituation to fear, infants displayed a novelty preference for pairs of between-category expressions, but not within-category ones, showing categorical perception. However, infants showed no novelty preference when they were habituated to happiness. Our findings provide evidence for categorical perception of emotional expressions in pre-verbal infants, while the asymmetrical effect challenges the notion of a bias towards negative information in this age group.
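The within- versus between-category contrast relies on expression pairs that are physically equated but differ in whether they straddle the category boundary. The sketch below is only an illustration with assumed morph levels, step size, and boundary location, not the study's stimulus parameters.

```python
# Hypothetical happiness-fear morph continuum: 0 = pure happiness, 100 = pure fear.
# Pairs are equated in physical step size (here 40 morph units); what differs is
# whether the pair straddles the assumed category boundary at 50.
BOUNDARY = 50

def predominant(morph_level):
    """Which emotion predominates in a blend at this morph level."""
    return "fear" if morph_level > BOUNDARY else "happiness"

def pair_type(level_a, level_b):
    """Within-category pair (same predominant emotion) or between-category pair."""
    return "within" if predominant(level_a) == predominant(level_b) else "between"

within_pair = (55, 95)    # both predominantly fearful, 40 units apart
between_pair = (30, 70)   # straddles the boundary, also 40 units apart
print(pair_type(*within_pair), pair_type(*between_pair))  # within between
```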

17.
There is considerable evidence that labeling supports infants' object categorization. Yet in daily life, most of the category exemplars that infants encounter will remain unlabeled. Inspired by recent evidence from machine learning, we propose that infants successfully exploit this sparsely labeled input through "semi-supervised learning." Providing only a few labeled exemplars leads infants to initiate the process of categorization, after which they can integrate all subsequent exemplars, labeled or unlabeled, into their evolving category representations. Using a classic novelty preference task, we introduced 2-year-old infants (n = 96) to a novel object category, varying whether and when its exemplars were labeled. Infants were equally successful whether all exemplars were labeled (fully supervised condition) or only the first two exemplars were labeled (semi-supervised condition), but they failed when no exemplars were labeled (unsupervised condition). Furthermore, the timing of the labeling mattered: when the labeled exemplars were provided at the end, rather than the beginning, of familiarization (reversed semi-supervised condition), infants failed to learn the category. This provides the first evidence of semi-supervised learning in infancy, revealing that infants excel at learning from exactly the kind of input that they typically receive in acquiring real-world categories and their names.
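To make the machine-learning analogy concrete, here is a toy online prototype model, an illustration under assumed data rather than the authors' account of the infants' mechanism: a category representation is only initiated once a label is heard, after which every further exemplar, labeled or unlabeled, is folded into the running prototype. With this simple scheme, the fully supervised and semi-supervised schedules integrate all six exemplars, the unsupervised schedule integrates none, and the reversed schedule integrates only the final two, mirroring the reported pattern of success and failure.

```python
import numpy as np

def online_category_learning(exemplars, labels):
    """Toy online prototype model: a prototype is created the first time a label
    is heard, and every subsequent exemplar, labeled or not, is averaged in."""
    prototype, n = None, 0
    for x, label in zip(exemplars, labels):
        if prototype is None:
            if label is None:
                continue  # no category initiated yet; this exemplar is not integrated
            prototype, n = np.array(x, dtype=float), 1
        else:
            n += 1
            prototype += (np.array(x, dtype=float) - prototype) / n  # running mean
    return prototype, n

rng = np.random.default_rng(0)
exemplars = rng.normal(loc=[1.0, 2.0], scale=0.3, size=(6, 2))  # six category members

fully = ["dax"] * 6                                     # fully supervised
semi = ["dax", "dax", None, None, None, None]           # semi-supervised (labels first)
unsup = [None] * 6                                      # unsupervised
reversed_semi = [None, None, None, None, "dax", "dax"]  # labels only at the end

for name, labels in [("fully", fully), ("semi", semi),
                     ("unsupervised", unsup), ("reversed", reversed_semi)]:
    proto, n = online_category_learning(exemplars, labels)
    print(f"{name:12s} exemplars integrated: {n}")
```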

18.
The emergence of joint attention is still a matter of vigorous debate, involving diverse hypotheses ranging from innate modules dedicated to intention reading to more neuro-constructivist approaches. The aim of this study was to assess whether 12-month-old infants are able to recognize a "joint attention" situation when observing such a social interaction. Using a violation-of-expectation paradigm, we habituated infants to a "joint attention" video and then compared their looking time durations between "divergent attention" videos and "joint attention" ones using a 2 (familiar or novel perceptual component) × 2 (familiar or novel conceptual component) factorial design. These looking-time data were complemented by measures of pupil dilation, which are considered to be reliable measures of cognitive load. Infants looked longer at test events that involved a novel speaker and divergent attention, but no changes in infants' pupil dilation were observed in any condition. Although the looking time data suggest that infants may appreciate discrepancies from expectations related to joint attention behavior, in the absence of clear evidence from pupillometry, the results show no demonstration of understanding of joint attention, even at a tacit level. Our results suggest that infants may be sensitive to relevant perceptual variables in joint attention situations, which would help scaffold social cognitive development. This study supports a gradual, learning-based interpretation of how infants come to recognize, understand, and participate in joint attention.

19.
Human adults exaggerate their actions and facial expressions when interacting with infants. These infant-directed modifications highlight certain aspects of action sequences and attract infants' attention. This study investigated whether social-emotional aspects of infant-directed modifications, such as smiling, eye contact, and onomatopoeic vocalization, influence infants' copying of another's action, especially action style, during the process of achieving an outcome. In Study 1, 14-month-old infants (n = 22) saw an experimenter demonstrate goal-directed actions in an exaggerated manner. Either the style or the end state of the actions was accompanied by social-emotional cues from the experimenter. Infants copied the style of the action more often when social-emotional cues accompanied the style than when they accompanied the end state. In Study 2, a different group of 14-month-old infants (n = 22) watched the same exaggerated actions as in Study 1, except that either the style or the end state was accompanied by a physical sound instead of social-emotional cues. The infants copied the end state consistently more often than the style. Taken together, these two studies showed that accompanying social-emotional cues provided by a demonstrator, but not accompanying physical sound, increased infants' copying of action style. These findings suggest that social-emotional cues facilitate efficient social learning through the adult–infant interaction.

20.
The current study examined the role redundant amodal properties play in an operant learning task in 3-month-old human infants. Prior studies have suggested that the presence of redundant amodal information facilitates detection and discrimination of amodal properties and potentially functions to influence general learning processes such as associative conditioning. The current study examined how human infants use redundant amodal information (visual and haptic) about the shape of an object to influence learning of an operant response. Infants learned to kick to move a mobile of cylinders while either holding a cylinder, a rectangular cube, or no object. Kick rate served as the dependent measure. The results showed that infants given matching redundant amodal properties (e.g., viewed cylinders while holding a cylinder) showed facilitated operant learning whereas infants given mismatching redundant amodal properties showed inhibited operant learning. These results support and extend the Intersensory Redundancy Hypothesis by demonstrating that amodal redundancy influences complex learning processes such as operant conditioning.

