Similar Articles — 20 similar articles found (search time: 15 ms)
1.
Infants aged 4.5 months are able to match phonetic information in the face and voice ( Kuhl & Meltzoff, 1982 ; Patterson & Werker, 1999 ); however, the ontogeny of this remarkable ability is not understood. In the present study, we address this question by testing substantially younger infants at 2 months of age. Like the 4.5‐month‐olds in past studies, the 2‐month‐old infants tested in the current study showed evidence of matching vowel information in face and voice. The effect was observed in overall looking time, number of infants who looked longer at the match, and longest look to the match versus mismatch. Furthermore, there were no differences based on male or female stimuli and no preferences for the match when it was on the right or left side. These results show that there is robust evidence for phonetic matching at a much younger age than previously known and support arguments for either some kind of privileged processing or particularly rapid learning of phonetic information.

2.
Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants’ ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5‐ and 8‐month‐olds’ ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5‐month‐olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants’ statistical learning abilities may not be as robust as earlier studies have suggested.
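The transitional-probability computation that these segmentation studies rely on can be sketched in a few lines. The following is a minimal illustration with a hypothetical four-word artificial language of uniform (CVCV) length; the syllables and repetition counts are invented for the example, not taken from the study:

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """P(next syllable | current syllable) for each adjacent pair in a stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Hypothetical language: four disyllabic words, concatenated into a
# continuous stream with no pauses or other boundary cues.
random.seed(0)
words = [["tu", "pi"], ["ro", "bi"], ["go", "la"], ["da", "bu"]]
stream = [syl for _ in range(200) for syl in random.choice(words)]

tps = transitional_probabilities(stream)
# Within-word transitions are perfectly predictable...
print(tps[("tu", "pi")])  # 1.0
# ...while transitions across a word boundary hover around 1/4,
# so dips in transitional probability mark candidate boundaries.
print(tps.get(("pi", "ro"), 0.0) < 0.5)  # True
```

A learner (infant or model) that posits a boundary wherever the transitional probability dips can recover the four words from the continuous stream without any acoustic boundary cues.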

3.
Are abstract representations of number – representations that are independent of the particular type of entities that are enumerated – a product of human language or culture, or do they trace back to human infancy? To address this question, four experiments investigated whether human infants discriminate between sequences of actions (jumps of a puppet) on the basis of numerosity. At 6 months, infants successfully discriminated four‐ versus eight‐jump sequences, when the continuous variables of sequence duration, jump duration, jump rate, jump interval and duration, and extent of motion were controlled, and rhythm was eliminated. In contrast, infants failed to discriminate two‐ versus four‐jump sequences, suggesting that infants fail to form cardinal number representations of small numbers of actions. Infants also failed to discriminate between sequences of four versus six jumps at 6 months, and succeeded at 9 months, suggesting that infants’ number representations are imprecise and increase in precision with age. All of these findings agree with those of studies using visual–spatial arrays and auditory sequences, providing evidence that a single, abstract system of number representation is present and functional in infancy.

4.
This study investigates whether infants are sensitive to backward and forward transitional probabilities within temporal and spatial visual streams. Two groups of 8‐month‐old infants were familiarized with an artificial grammar of shapes, comprising backward and forward base pairs (i.e. two shapes linked by strong backward or forward transitional probability) and part‐pairs (i.e. two shapes with weak transitional probabilities in both directions). One group viewed the continuous visual stream as a temporal sequence, while the other group viewed the same stream as a spatial array. Following familiarization, infants looked longer at test trials containing part‐pairs than base pairs, although they had appeared with equal frequency during familiarization. This pattern of looking time was evident for both forward and backward pairs, in both the temporal and spatial conditions. Further, differences in looking time to part‐pairs that were consistent or inconsistent with the predictive direction of the base pairs (forward or backward) indicated that infants were indeed sensitive to direction when presented with temporal sequences, but not when presented with spatial arrays. These results suggest that visual statistical learning is flexible in infancy and depends on the nature of visual input.
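The forward/backward distinction above is a directional asymmetry in the pair statistics: the forward transitional probability of a pair AB is P(B | A), while the backward transitional probability is P(A | B), and a pair can be strong in one direction but weak in the other. A minimal sketch, using invented shape labels rather than the study's actual stimuli:

```python
import random
from collections import Counter

def forward_backward_tps(stream):
    """Forward TP P(b|a) and backward TP P(a|b) for each adjacent pair."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])   # occurrences as the first element of a pair
    seconds = Counter(stream[1:])   # occurrences as the second element of a pair
    forward = {(a, b): n / firsts[a] for (a, b), n in pairs.items()}
    backward = {(a, b): n / seconds[b] for (a, b), n in pairs.items()}
    return forward, backward

# Hypothetical stream: "A" is always followed by "B" (strong forward pair),
# but "B" is also preceded by "C" and "D", so the backward TP of (A, B) is weak.
random.seed(1)
units = [["A", "B"], ["C", "B"], ["D", "B"]]
stream = [s for _ in range(300) for s in random.choice(units)]

forward, backward = forward_backward_tps(stream)
print(forward[("A", "B")])   # 1.0  (A fully predicts the following B)
# backward[("A", "B")] is only about 1/3, since B is equally often
# preceded by C or D.
```

A learner sensitive only to forward statistics would treat (A, B) as a strong unit; one sensitive to backward statistics would not, which is the contrast the temporal vs spatial conditions probe.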

5.
Previous research with artificial language learning paradigms has shown that infants are sensitive to statistical cues to word boundaries (Saffran, Aslin & Newport, 1996) and that they can use these cues to extract word‐like units (Saffran, 2001). However, it is unknown whether infants use statistical information to construct a receptive lexicon when acquiring their native language. In order to investigate this issue, we rely on the fact that besides real words a statistical algorithm extracts sound sequences that are highly frequent in infant‐directed speech but constitute nonwords. In three experiments, we use a preferential listening paradigm to test French‐learning 11‐month‐old infants' recognition of highly frequent disyllabic sequences from their native language. In Experiments 1 and 2, we use nonword stimuli and find that infants listen longer to high‐frequency than to low‐frequency sequences. In Experiment 3, we compare high‐frequency nonwords to real words in the same frequency range, and find that infants show no preference. Thus, at 11 months, French‐learning infants recognize highly frequent sound sequences from their native language and fail to differentiate between words and nonwords among these sequences. These results are evidence that they have used statistical information to extract word candidates from their input and stored them in a ‘protolexicon’, containing both words and nonwords.

6.
Development of the functional visual field
Andries Sanders' dissertation examined selective mechanisms in the functional visual field, and much of his work since has been concerned with the stages that underlie visual information processing, particularly while making saccades. We argue that the study of orienting in the functional visual field is timely because it deals with the relation of covert attention shifts, eye movements and head movements to their underlying neurology. In our paper we develop a method to study learning of sequences at all ages from infants to adults. Our studies focus on how learning influences anticipatory eye movements. We examined the learning of unambiguous and context-dependent sequences by 4-, 10-, and 18-month-old infants and undergraduates. We found clear learning of unambiguous sequences at 4 months, but learning of context-dependent associations was found only in 18-month-olds and in adults. We hypothesize that the learning of unambiguous sequences by 4-month-olds reflects maturation of a basal ganglia-parietal circuit related to adult implicit learning, while the learning of context-dependent sequences requires development of frontal structures underlying more general attentional abilities.

7.
Infants’ social environment is rich in complex sequences of events and actions. This study investigates whether 12-month-old infants are able to learn statistical regularities from a sequence of human gestures and whether this ability is affected by a social vs non-social context. Using a visual familiarization task, infants were familiarized with a continuous sequence of eight videos in which two women imitated each other performing arm gestures. The sequence of videos in which the two women performed imitative gestures was organized into 4 different gesture units. Videos within a gesture unit had a highly predictable transitional probability, while such transitions were less predictable between gesture units. The social context was manipulated by varying the mutual gaze of the actors and their body orientation. At test, infants were able to discriminate between the high- and low-predictable gesture units in the social, but not in the non-social condition. Results demonstrate that infants are capable of detecting statistical regularities in a sequence of human gestures performed by two different individuals. Moreover, our findings indicate that salient social cues can modulate infants’ ability to extract statistical information from a sequence of gestures.

8.
In the present study, a moving room paradigm was used that characterized the developmental progression of the effects of visual perturbations on stance control in subjects (N = 39) from 5 months to 10 years of age. Kinematic (probability of recording sway, magnitude of sway response) and electromyographic (probability and patterns of muscle activation, muscle onset latencies) data were found that suggested that visual flow simulating sway activates organized postural muscle responses and results in subsequent sway in standing infants as young as 5 months of age, well before they are able to stand independently. In new walkers, there was an increase in the magnitude of the effect of the visual perturbation, suggesting a possible increase in reliance on visual information. The magnitude of sway decreased to very low levels in older children and adults. The large-amplitude responses observed in the youngest age groups may indicate an inability to switch from an unreliable to a reliable source of perceptual information or an inability to modulate the responses produced following the perturbations. With increasing age and experience, the ability to resolve the conflict increased, with adult subjects demonstrating little sway response.

9.
This paper reviews habituation-dishabituation and preferential-looking studies on the emergence of sensitivity to pictorial depth cues in infancy. This research can be subdivided into two groups. While one group of studies has established responsiveness to pictorial depth cues at 3-5 months of age, the other has found that the ability to extract pictorial 3D information emerges at about 6 months. In the former, young infants were tested for their ability to distinguish between displays that differ in spatial information provided by pictorial depth cues. The results of these studies might demonstrate that 3-5-month-old infants perceive spatial layout from pictorial cues. It is possible, however, that the infants in these studies responded primarily to low-level, two-dimensional stimulus differences. In contrast, the second group of studies controlled for the potential influence of lower-level stimulus features on the infants' experimental performance and more unambiguously demonstrated sensitivity to pictorial depth information in infants 6 months of age and older. In sum, the divergent findings of studies in this area may be resolved by assuming substantial developmental progress in infant sensitivity to pictorial depth cues during the first months of life.

10.
Pointing, like eye gaze, is a deictic gesture that can be used to orient the attention of another person towards an object or an event. Previous research suggests that infants first begin to follow a pointing gesture between 10 and 13 months of age. We investigated whether sensitivity to pointing could be seen at younger ages employing a technique recently used to show early sensitivity to perceived eye gaze. Three experiments were conducted with 4.5- and 6.5-month-old infants. Our first goal was to examine whether these infants could show a systematic response to pointing by shifting their visual attention in the direction of a pointing gesture when we eliminated the difficulty of disengaging fixation from a pointing hand. The results from Experiments 1 and 2 suggest that a dynamic, but not a static, pointing gesture triggers shifts of visual attention in infants as young as 4.5 months of age. Our second goal was to clarify whether this response was based on sensitivity to the directional posture of the pointing hand, the motion of the pointing hand, or both. The results from Experiment 3 suggest that the direction of motion is necessary but not sufficient to orient infants' attention toward a distal target. Infants shifted their attention in the direction of the pointing finger, but only when the hand was moving in the same direction. These results suggest that infants are prepared to orient to the distal referent of a pointing gesture, which likely contributes to their learning the communicative function of pointing.

11.
Extracting general rules from specific examples is important, as we must face the same challenge displayed in various formats. Previous studies have found that bimodal presentation of grammar‐like rules (e.g. ABA) enhanced 5‐month‐olds’ capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle‐triangle‐circle) or auditory presentation of the syllables (la‐ba‐la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we used audio‐visual relation congruency between bimodal stimulation to disentangle possible facilitation sources. We exposed 8‐ to 10‐month‐old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio‐visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning is dependent not only on better statistical probability and redundant sensory information, but also on the relational congruency of audio‐visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ

12.
Previous research suggests that language learners can detect and use the statistical properties of syllable sequences to discover words in continuous speech (e.g. Aslin, R.N., Saffran, J.R., Newport, E.L., 1998. Computation of conditional probability statistics by 8-month-old infants. Psychological Science 9, 321-324; Saffran, J.R., Aslin, R.N., Newport, E.L., 1996. Statistical learning by 8-month-old infants. Science 274, 1926-1928; Saffran, J.R., Newport, E.L., Aslin, R.N., 1996. Word segmentation: the role of distributional cues. Journal of Memory and Language 35, 606-621; Saffran, J.R., Newport, E.L., Aslin, R.N., Tunick, R.A., Barrueco, S., 1997. Incidental language learning: Listening (and learning) out of the corner of your ear. Psychological Science 8, 101-105). In the present research, we asked whether this statistical learning ability is uniquely tied to linguistic materials. Subjects were exposed to continuous non-linguistic auditory sequences whose elements were organized into 'tone words'. As in our previous studies, statistical information was the only word boundary cue available to learners. Both adults and 8-month-old infants succeeded at segmenting the tone stream, with performance indistinguishable from that obtained with syllable streams. These results suggest that a learning mechanism previously shown to be involved in word segmentation can also be used to segment sequences of non-linguistic stimuli.

13.
Rule learning (RL) is an implicit learning mechanism that allows infants to detect and generalize rule-like repetition-based patterns (such as ABB and ABA) from a sequence of elements. Increasing evidence shows that RL operates both in the auditory and the visual domain and is modulated by perceptual expertise with the to-be-learned stimuli. Yet, whether infants’ ability to detect a high-order rule from a sequence of stimuli is affected by affective information remains a largely unexplored issue. Using a visual habituation paradigm, we investigated whether the presence of emotional expressions with a positive and a negative value (i.e., happiness and anger) modulates 7- to 8-month-old infants’ ability to learn a rule-like pattern from a sequence of faces of different identities. Results demonstrate that emotional facial expressions (either positive or negative) modulate infants’ visual RL mechanism, even though positive and negative facial expressions affect infants’ RL in different manners: while anger disrupts infants’ ability to learn the rule-like pattern from a face sequence, in the presence of a happy face infants show a familiarity preference, thus maintaining their learning ability. These findings show that emotional expressions exert an influence on infants’ RL abilities, contributing to the investigation of how emotion and cognition interact in face processing during infancy.

14.
Infants' reactions to visual movement of the environment
It has been demonstrated many times that the posture of infants is affected by movement of the visual environment. However, in previous studies, measurements taken with infants less than 10 to 12 months of age have always been recorded with the infants in a sitting position. An experiment is reported in which the postural reactions to a sinusoidal movement of the visual environment were recorded in infants 7 months of age and older standing with support. Fifty subjects divided into five groups (mean age 7.15 to 48.6 months) participated in the experiment. The groups differed in age and motor ability. Movement of the visual environment was achieved by means of a floorless room that could be moved sinusoidally in the anteroposterior axis. The subjects had to stand holding a horizontal bar fixed to a force-measurement platform. For each subject, measurements were made during four 60 s intervals: two with movement of the room and two with the room stationary. For all groups, reactions in the anteroposterior axis were stronger than in the lateral axis and this was true for both stimulus conditions. Comparison of the differences between the movement and stationary conditions in the anteroposterior axis, as a function of age, shows that the youngest infants seemed paradoxically to give stronger reactions when the room was stationary than when it was moving; the inverse was true for older infants and this difference increased with age. An analysis of the data with fast Fourier transforms reveals that the majority of subjects showed a pattern of postural reactions where the dominant (peak) frequency was identical to the peak frequency of room movement.(ABSTRACT TRUNCATED AT 250 WORDS)

15.
The present research examined whether infants acquire general principles or more specific rules when learning about physical events. Experiments 1 and 2 investigated 4.5-month-old infants' ability to judge how much of a tall object should be hidden when lowered behind an occluder versus inside a container. The results indicated that at this age infants are able to reason about height in occlusion but not containment events. Experiment 3 showed that this latter ability does not emerge until about 7.5 months of age. The marked discrepancy in infants' reasoning about height in occlusion and containment events suggests that infants sort events into distinct categories, and acquire separate rules for each category.

16.
How do infants select and use information that is relevant to the task at hand? Infants treat events that involve different spatial relations as distinct, and their selection and use of object information depends on the type of event they encounter. For example, 4.5-month-olds consider information about object height in occlusion events, but infants typically fail to do so in containment events until they reach the age of 7.5 months. However, after seeing a prime involving occlusion, 4.5-month-olds became sensitive to height information in a containment event (Experiment 1). The enhancement lasted over a brief delay (Experiment 2) and persisted even longer when infants were shown an additional occlusion prime but not an object prime (Experiment 3). Together, these findings reveal remarkable flexibility in visual representations of young infants and show that their use of information can be facilitated not by strengthening object representations per se but by strengthening their tendency to retrieve available information in the representations.

17.
Evidence from infant studies indicates that language learning can be facilitated by multimodal cues. We extended this observation to adult language learning by studying the effects of simultaneous visual cues (nonassociated object images) on speech segmentation performance. Our results indicate that segmentation of new words from a continuous speech stream is facilitated by simultaneous visual input that is presented at or near syllables that exhibit the low transitional probability indicative of word boundaries. This indicates that temporal audio-visual contiguity helps in directing attention to word boundaries at the earliest stages of language learning. Off-boundary or arrhythmic picture sequences did not affect segmentation performance, suggesting that the language learning system can effectively disregard noninformative visual information. Detection of temporal contiguity between multimodal stimuli may be useful in both infants and second-language learners not only for facilitating speech segmentation, but also for detecting word–object relationships in natural environments.

18.
One critical aspect of learning is the ability to apply learned knowledge to new situations. This ability to transfer is often limited, and its development is not well understood. The current research investigated the development of transfer between 8 and 16 months of age. In Experiment 1, 8- and 16-month-olds (who were established to have a preference for the beginning of a visual sequence) were trained to attend to the end of a sequence. They were then tested on novel visual sequences. Results indicated transfer of learning, with both groups changing baseline preferences as a result of training. In Experiment 2, participants were trained to attend to the end of a visual sequence and were then tested on an auditory sequence. Unlike Experiment 1, only older participants exhibited transfer of learning by changing baseline preferences. These findings suggest that the generalization of learning becomes broader with development, with transfer across modalities developing later than transfer within a modality.

19.
Knowledge of sequential relationships enables future events to be anticipated and processed efficiently. Research with the serial reaction time task (SRTT) has shown that sequence learning often occurs implicitly without effort or awareness. Here, the authors report 4 experiments that use a triplet-learning task (TLT) to investigate sequence learning in young and older adults. In the TLT, people respond only to the last target event in a series of discrete, 3-event sequences or triplets. Target predictability is manipulated by varying the triplet frequency (joint probability) and/or the statistical relationships (conditional probabilities) among events within the triplets. Results reveal that both groups learned, though older adults showed less learning of both joint and conditional probabilities. Young people used the statistical information in both cues, but older adults relied primarily on information in the 2nd cue alone. The authors conclude that the TLT complements and extends the SRTT and other tasks by offering flexibility in the kinds of sequential statistical regularities that may be studied as well as by controlling event timing and eliminating motor response sequencing.
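The joint-vs-conditional distinction the TLT manipulates is easy to make concrete: the joint probability of a triplet is its share of all triplets, while the conditional probability of the target is its share among triplets starting with the same two cues. A minimal sketch with invented event codes and counts (not the study's actual design):

```python
from collections import Counter

def triplet_statistics(triplets):
    """Joint probability of each triplet, and conditional probability of the
    target (3rd event) given the two cue events (1st and 2nd)."""
    counts = Counter(map(tuple, triplets))
    total = len(triplets)
    cue_counts = Counter((a, b) for a, b, _ in triplets)
    joint = {t: n / total for t, n in counts.items()}
    cond = {t: n / cue_counts[(t[0], t[1])] for t, n in counts.items()}
    return joint, cond

# Hypothetical session of 20 triplets: (1, 2, 3) is frequent overall, and
# given the cues (1, 2) the target 3 is also highly predictable.
triplets = [(1, 2, 3)] * 8 + [(1, 2, 4)] * 2 + [(5, 6, 7)] * 10
jp, cp = triplet_statistics(triplets)
print(jp[(1, 2, 3)])  # 0.4  (joint: 8 of 20 triplets)
print(cp[(1, 2, 3)])  # 0.8  (conditional: 8 of the 10 triplets cued by 1, 2)
```

Because the two statistics can be varied independently (a triplet can be rare overall yet fully predictable from its cues, or vice versa), responses to the target can reveal which statistic a learner is actually tracking.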

20.
The ability of adult learners to exploit the joint and conditional probabilities in a serial reaction time task containing both deterministic and probabilistic information was investigated. Learners used the statistical information embedded in a continuous input stream to improve their performance for certain transitions by simultaneously exploiting differences in the predictability of 2 or more underlying statistics. Analysis of individual learners revealed that although most acquired the underlying statistical structure veridically, others used an alternate strategy that was partially predictive of the sequences. The findings show that learners possess a robust learning device well suited to exploiting the relative predictability of more than 1 source of statistical information at the same time. This work expands on previous studies of statistical learning, as well as studies of artificial grammar learning and implicit sequence learning.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)