Similar Articles
 20 similar articles found.
1.
Study 1 investigated whether infants 3 and 7 months of age show differential learning of and memory for sight-sound pairs depending on whether or not temporal synchrony was present; memory was assessed after a 10-min and 1-week interval. Study 2 examined whether 7-month-olds show generalization of learning when they encounter novel bimodal events that are similar (changes in size, orientation, or color, and spectral sound properties) to the sight-sound pairs learned 1 week earlier based on temporal synchrony. For Study 1, infants received a familiarization phase followed by a paired-comparison preference procedure to assess learning of the sight-sound pairs. One week later a memory test was given. Results confirmed that 7-month-olds had no difficulty learning auditory-visual pairings regardless of whether or not events were temporally synchronous, and they remembered these pairings 10 min and 1 week later. In contrast, 3-month-olds showed poorer learning of sight-sound associations in the no-synchrony than synchrony conditions, and memory for sight-sound pairs 1 week later was shown only for the synchrony conditions. Results for Study 2 revealed generalization of learning of bimodal pairings under all stimulus conditions after a 1-week interval at 7 months of age. Implications of these findings for development of intersensory knowledge are discussed.

2.
The goal of the present study was twofold: to examine the influence of two amodal properties, co-location and temporal synchrony, on infants' associating a sight with a sound, and to determine if the relative influence of these properties on crossmodal learning changes with age. During familiarization 2-, 4-, 6- and 8-month-olds were presented two toys and a sound, with sights and sounds varying with respect to co-location and temporal synchrony. Following each familiarization phase infants were given a paired preference test to assess their learning of sight-sound associations. Measures of preferential looking revealed age-related changes in the influence of co-location and temporal synchrony on infants' learning sight-sound associations. At all ages, infants could use temporal synchrony and co-location as a basis for associating an auditory with a visual event and, in the absence of temporal synchrony, co-location was sufficient to support crossmodal learning. However, when these cues conflicted there were developmental changes in the influence of these cues on infants' learning auditory-visual associations. At 2 and 4 months infants associated the sounds with the toy that moved in synchrony with the sound's rhythm despite extreme violation of co-location of this sight and sound. In contrast, 6- and 8-month-olds did not associate a specific toy with the sound when co-location and synchrony information conflicted. The findings highlight the unique and interactive effects of distinct amodal properties on infants' learning arbitrary crossmodal relations. Possible explanations for the age shift in performance are discussed.

3.
Thought before language
To learn language infants must develop a conceptual base onto which language can be mapped. Recent research in infant cognitive development shows that at least by 9 months of age infants have developed a conceptual system sufficiently rich to allow language to begin. Evidence for this system is shown by categorization of objects above and beyond their perceptual appearance, problem-solving, long-term recall of events, and inductive inferences. During the next year, early concepts gradually become refined. However, at the time when language takes off they are often still less specific than many words in daily use, accounting for the phenomenon of overextension of word meaning.

4.
People represent many social categories, including gender categories, in essentialist terms: They see category members as sharing deep, nonobvious properties that make them the kinds of things they are. The present research explored the consequences of this mode of representation for social inferences. In two sets of studies, participants learned (a) that they were similar to a member of the other gender on a novel attribute, (b) that they were different from a member of the other gender on a novel attribute, or (c) just their own standing on a novel attribute. Results showed that participants made stronger inductive inferences about the attribute in question when they learned that it distinguished them from a member of the other gender than in the other conditions. We consider the implications of these results for the representation of social categories and for everyday social inference processes.

5.
Traditional studies of spatial attention consider only a single sensory modality at a time (e.g. just vision, or just audition). In daily life, however, our spatial attention often has to be coordinated across several modalities. This is a non-trivial problem, given that each modality initially codes space in entirely different ways. In the last five years, there has been a spate of studies on crossmodal attention. These have demonstrated numerous crossmodal links in spatial attention, such that attending to a particular location in one modality tends to produce corresponding shifts of attention in other modalities. The spatial coordinates of these crossmodal links illustrate that the internal representation of external space depends on extensive crossmodal integration. Recent neuroscience studies are discussed that suggest possible brain mechanisms for the crossmodal links in spatial attention.

6.
Two cross-sectional studies examined how infants learn the location of visual events. In Experiment 1, infants of 4, 8, and 12 mo of age learned to turn one way to view a novel pattern. In a subsequent transfer task, they were rotated to face the opposite side of the room. The 4-mo-old infants tended to err by repeating their previously learned response, but within 16-20 trials their performance was comparable to the higher levels maintained by the older infants. These results suggest that young infants learn the location of the pattern primarily in terms of response cues, whereas older infants employ both response cues and place cues. Experiment 2 was designed to independently assess the use of response cues and place cues by infants of 4, 8, 12, and 16 mo of age. All infants were able to rapidly learn and remember the location of the novel pattern when they were given response cues. There was a gradual emergence of place-cue use associated with age. It is suggested that the decrease in infant egocentricity in such spatial localization tasks may in fact reflect an age-related increase in the variety of reliable cues responded to by infants.

7.
In laboratory experiments, infants are sensitive to patterns of visual features that co-occur (e.g., Fiser & Aslin, 2002). Once infants learn the statistical regularities, however, what do they do with that knowledge? Moreover, which patterns do infants learn in the cluttered world outside of the laboratory? Across 4 experiments, we show that 9-month-olds use this sensitivity to make inferences about object properties. In Experiment 1, 9-month-old infants expected co-occurring visual features to remain fused (i.e., infants looked longer when co-occurring features split apart than when they stayed together). Forming such expectations can help identify integral object parts for object individuation, recognition, and categorization. In Experiment 2, we increased the task difficulty by presenting the test stimuli simultaneously with a different spatial layout from the familiarization trials to provide a more ecologically valid condition. Infants did not make similar inferences in this more distracting test condition. However, Experiment 3 showed that a social cue did allow inferences in this more difficult test condition, and Experiment 4 showed that social cues helped infants choose patterns among distractor patterns during learning as well as during test. These findings suggest that infants can use feature co-occurrence to learn about objects and that social cues shape such foundational learning in distraction-filled environments.

8.
Under incidental learning conditions, a spatial layout can be acquired implicitly and facilitate visual searches (the contextual cuing effect). Whereas previous studies have shown a cuing effect in the visual domain, the present study examined whether a contextual cuing effect could develop from association between auditory events and visual target locations (Experiments 1 and 2). In the training phase, participants searched for a T among Ls, preceded by 2 sec of auditory stimulus. The target location could be predicted from the preceding auditory stimulus. In the test phase, the auditory-visual association pairings were disrupted. The results revealed that a contextual cuing effect occurs by auditory-visual association. Participants did not notice the auditory-visual association. Experiment 3 explored a boundary condition for the auditory-visual contextual cuing effect. These results suggest that visual attention can be guided implicitly by crossmodal association, and they extend the idea that the visual system is sensitive to all kinds of statistical consistency.

9.
This study focuses on how the body schema develops during the first months of life, by investigating infants’ motor responses to localized vibrotactile stimulation on their limbs. Vibrotactile stimulation was provided by small buzzers that were attached to the infants’ four limbs one at a time. Four age groups were compared cross‐sectionally (3‐, 4‐, 5‐, and 6‐month‐olds). We show that before they actually reach for the buzzer, which, according to previous studies, occurs around 7–8 months of age, infants demonstrate emerging knowledge about their body's configuration by producing specific movement patterns associated with the stimulated body area. At 3 months, infants responded with an increase in general activity when the buzzer was placed on the body, independently of the vibrator's location. Differentiated topographical awareness of the body seemed to appear around 5 months, with specific responses resulting from stimulation of the hands emerging first, followed by the differentiation of movement patterns associated with the stimulation of the feet. Qualitative analyses revealed specific movement types reliably associated with each stimulated location by 6 months of age, possibly preparing infants’ ability to actually reach for the vibrating target. We discuss this result in relation to newborns’ ability to learn specific movement patterns through intersensory contingency.

Statement of contribution

What is already known about infants’ sensorimotor knowledge of their own bodies
  • 3‐month‐olds readily learn to produce specific limb movements to obtain a desired effect (movement of a mobile).
  • Infants detect temporal and spatial correspondences between events involving their own body and visual events.
What the present study adds
  • Until 4–5 months of age, infants mostly produce general motor responses to localized touch.
  • This may be because, in the present study, infants could not rely on immediate contingent feedback.
  • We propose a cephalocaudal developmental trend of topographic differentiation of body areas.

10.
The present research examined whether 3-month-old infants, the youngest found so far to engage in goal-related reasoning about human agents, would also act as if they attribute goals to a novel non-human agent, a self-propelled box. In two experiments, the infants seemed to have interpreted the box’s actions as goal-directed after seeing the box approach object A as opposed to object B during familiarization. They thus acted as though they expected the box to maintain this goal and responded with increased attention when the box approached object B during test. In contrast, when object B was absent during familiarization and introduced afterwards, the infants’ responses were consistent with their having recognized that they had no information to predict which of the two objects the box should choose during test and therefore responded similarly when the box approached either object. However, if object B was absent during familiarization and object A was in different positions but the box persistently approached it, thus demonstrating equifinal variations in its actions, the infants again acted as though they attributed to the box a goal directed towards object A and expected the box to maintain this goal even when object B was introduced and hence responded with prolonged looking when the box failed to do so during test. These results are consistent with the notion that (a) infants as young as 3 months appear to attribute goals to both human and non-human agents, and (b) even young infants can use certain behavioral cues, e.g. equifinal variations in agents’ actions, to make inferences about agents’ goals.

11.
Mirman D, Magnuson JS, Estes KG, Dixon JA (2008). Cognition, 108(1), 271-280.
Many studies have shown that listeners can segment words from running speech based on conditional probabilities of syllable transitions, suggesting that this statistical learning could be a foundational component of language learning. However, few studies have shown a direct link between statistical segmentation and word learning. We examined this possible link in adults by following a statistical segmentation exposure phase with an artificial lexicon learning phase. Participants were able to learn all novel object-label pairings, but pairings were learned faster when labels contained high probability (word-like) or non-occurring syllable transitions from the statistical segmentation phase than when they contained low probability (boundary-straddling) syllable transitions. This suggests that, for adults, labels inconsistent with expectations based on statistical learning are harder to learn than consistent or neutral labels. In contrast, a previous study found that infants learn consistent labels, but not inconsistent or neutral labels.

12.
Extracting the statistical regularities present in the environment is a central learning mechanism in infancy. For instance, infants are able to learn the associations between simultaneously or successively presented visual objects (Fiser & Aslin, 2002; Kirkham, Slemmer & Johnson, 2002). The present study extends these results by investigating whether infants can learn the association between a target location and the context in which it is presented. With this aim, we used a visual associative learning procedure inspired by the contextual cuing paradigm, with infants from 8 to 12 months of age. In two experiments, in which we varied the complexity of the stimuli, we first habituated infants to several scenes where the location of a target (a cartoon character) was consistently associated with a context, namely a specific configuration of geometrical shapes. Second, we examined whether infants learned the covariation between the target location and the context by measuring looking times at scenes that either respected or violated the association. In both experiments, results showed that infants learned the target–context associations, as they looked longer at the familiar scenes than at the novel ones. In particular, infants selected clusters of co‐occurring contextual shapes and learned the covariation between the target location and this subset. These results support the existence of a powerful and versatile statistical learning mechanism that may influence the orientation of infants’ visual attention toward areas of interest in their environment during early developmental stages. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=9Hm1unyLBn0

13.
Previous research has shown that a speaker's choice between logically equivalent frames is influenced by reference point information, and that listeners draw accurate inferences based on the frame. Less clear, however, is whether these inferences play a causal role in generating attribute framing effects. Two experiments are reported, which suggest that frame‐dependent inferences are sufficient to generate attribute framing effects, and that blocking such inferences may block framing effects. Experiment 1 decomposed the typical framing design into two parts: One group of participants saw a target described in one of two attribute frames and reported their estimates (inferences) of the typical attribute value. These estimates were then given to a second group of yoked participants, who evaluated the target. Although this latter group was not exposed to different attribute frames, they nevertheless exhibited a “framing effect” as a result of receiving systematically different inferences. In contrast, Experiment 2 shows that experts—who are familiar with an attribute's distribution and are therefore less likely to draw strong frame‐based inferences—exhibit a diminished framing effect. Together, these findings underscore the role of inferences in the generation and attenuation of attribute framing effects.

14.
Two experiments investigated whether infants represent goal‐directed actions of others in a way that allows them to draw inferences to unobserved states of affairs (such as unseen goal states or occluded obstacles). We measured looking times to assess violation of infants' expectations upon perceiving either a change in the actions of computer‐animated figures or in the context of such actions. The first experiment tested whether infants would attribute a goal to an action that they had not seen completed. The second experiment tested whether infants would infer from an observed action the presence of an occluded object that functions as an obstacle. The looking time patterns of 12‐month‐olds indicated that they were able to make both types of inferences, while 9‐month‐olds failed in both tasks. These results demonstrate that, by the end of the first year of life, infants use the principle of rational action not only for the interpretation and prediction of goal‐directed actions, but also for making productive inferences about unseen aspects of their context. We discuss the underlying mechanisms that may be involved in the developmental change from 9 to 12 months of age in the ability to infer hypothetical (unseen) states of affairs in teleological action representations.

15.
An ability to detect the common location of multisensory stimulation is essential for us to perceive a coherent environment, to represent the interface between the body and the external world, and to act on sensory information. Regarding the tactile environment “at hand”, we need to represent somatosensory stimuli impinging on the skin surface in the same spatial reference frame as distal stimuli, such as those transduced by vision and audition. Across two experiments we investigated whether 6‐ (n = 14; Experiment 1) and 4‐month‐old (n = 14; Experiment 2) infants were sensitive to the colocation of tactile and auditory signals delivered to the hands. We recorded infants’ visual preferences for spatially congruent and incongruent auditory‐tactile events delivered to their hands. At 6 months, infants looked longer toward incongruent stimuli, whilst at 4 months infants looked longer toward congruent stimuli. Thus, even from 4 months of age, infants are sensitive to the colocation of simultaneously presented auditory and tactile stimuli. We conclude that 4‐ and 6‐month‐old infants can represent auditory and tactile stimuli in a common spatial frame of reference. We explain the age‐wise shift in infants’ preferences from congruent to incongruent in terms of an increased preference for novel crossmodal spatial relations based on the accumulation of experience. A comparison of looking preferences across the congruent and incongruent conditions with a unisensory control condition indicates that the ability to perceive auditory‐tactile colocation is based on a crossmodal rather than a supramodal spatial code by 6 months of age at least.

16.
Bilingual and monolingual infants differ in how they process linguistic aspects of the speech signal. But do they also differ in how they process non‐linguistic aspects of speech, such as who is talking? Here, we addressed this question by testing Canadian monolingual and bilingual 9‐month‐olds on their ability to learn to identify native Spanish‐speaking females in a face‐voice matching task. Importantly, neither group was familiar with Spanish prior to participating in the study. In line with our predictions, bilinguals succeeded in learning the face‐voice pairings, whereas monolinguals did not. We consider multiple explanations for this finding, including the possibility that simultaneous bilingualism enhances perceptual attentiveness to talker‐specific speech cues in infancy (even in unfamiliar languages), and that early bilingualism delays perceptual narrowing to language‐specific talker recognition cues. This work represents the first evidence that multilingualism in infancy affects the processing of non‐linguistic aspects of the speech signal, such as talker identity.

17.
Extinction is a common consequence of unilateral brain injury: contralesional events can be perceived in isolation, yet are missed when presented concurrently with competing events on the ipsilesional side. This can arise crossmodally, where a contralateral touch is extinguished by an ipsilateral visual event. Recent studies showed that repositioning the hands in visible space, or making visual events more distant, can modulate such crossmodal extinction. Here, in a detailed single-case study, we implemented a novel spatial manipulation when assessing crossmodal extinction. This was designed not only to hold somatosensory inputs and hand/arm-posture constant, but also to hold (retinotopic) visual inputs constant, yet while still changing the spatial relationship of tactile and visual events in the external world. Our right hemisphere patient extinguished left-hand touches due to visual stimulation of the right visual field (RVF) when tested in the usual default posture with eyes/head directed straight ahead. But when her eyes/head were turned to the far left (and any visual events shifted along with this), such that the identical RVF retinal stimulation now fell at the same external location as the left-hand touch, crossmodal extinction was eliminated. Since only proprioceptive postural cues could signal this changed spatial relationship for the critical condition, our results show for the first time that such postural cues alone are sufficient to modulate crossmodal extinction. Identical somatosensory and retinal inputs can lead to severe crossmodal extinction, or none, depending on current posture.

18.
How does perceptual learning take place early in life? Traditionally, researchers have focused on how infants make use of information within displays to organize it, but recently, increasing attention has been paid to the question of how infants perceive objects differently depending upon their recent interactions with the objects. This experiment investigates 10‐month‐old infants' use of brief prior experiences with objects to visually organize a display consisting of multiple geometrically shaped three‐dimensional blocks created for this study. After a brief exposure to a multipart portion of the display, each infant was shown two test events, one of which preserved the unit the infant had seen and the other of which broke that unit. Overall, infants looked longer at the event that broke the unit they had seen prior to testing than the event that preserved that unit, suggesting that infants made use of the brief prior experience to (a) form a cohesive unit of the multipart portion of the display they saw prior to test and (b) segregate this unit from the rest of the test display. This suggests that infants made inferences about novel parts of the test display based on limited exposure to a subset of the test display. Like adults, infants learn features of the three‐dimensional world through their experiences in it.

19.
How do infants and young children learn about the causal structure of the world around them? In 4 experiments we investigate whether young children initially give special weight to the outcomes of goal-directed interventions they see others perform and use this to distinguish correlations from genuine causal relations: observational causal learning. In a new 2-choice procedure, 2- to 4-year-old children saw 2 identical objects (potential causes). Activation of 1 but not the other triggered a spatially remote effect. Children systematically intervened on the causal object and predictively looked to the effect. Results fell to chance when the cause and effect were temporally reversed, so that the events were merely associated but not causally related. The youngest children (24- to 36-month-olds) were more likely to make causal inferences when covariations were the outcome of human interventions than when they were not. Observational causal learning may be a fundamental learning mechanism that enables infants to abstract the causal structure of the world.

20.
Form perception at birth: Cohen and Younger (1984) revisited
Cohen (1988; Cohen & Younger, 1984) has suggested that there is a shift in the perception of form sometime after 6 weeks of age. Prior to this age infants can remember the specific orientations of line segments, but cannot process and remember the angular relations that line segments can make. Experiment 1 used simple line stimuli with newborn infants to test this suggestion. Following habituation to a simple two-line angle the newborns dishabituated to a change of orientation but not to a change in angle, confirming Cohen and Younger's suggestion that orientation is a powerful cue in early shape perception. In Experiments 2 and 3 newborns were familiarized either to an acute or to an obtuse angle that changed its orientation over trials. On subsequent test trials the babies gave strong novelty preferences to a different angle. Alternative interpretations of the results are discussed, but these experimental findings are compatible with the suggestion that newborns can quickly learn to process angular relations, and that rudimentary form perception may not be dependent on a lengthy period of learning and/or maturation for its development.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号