Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
Both the movements of people and inanimate objects are intimately bound up with physical causality. Furthermore, in contrast to object movements, causal relationships between limb movements controlled by humans and their body displacements uniquely reflect agency and goal-directed actions in support of social causality. To investigate the development of sensitivity to causal movements, we examined the looking behavior of infants between 9 and 18 months of age when viewing movements of humans and objects. We also investigated whether individual differences in gender and gross motor functions may impact the development of the visual preferences for causal movements. In Experiment 1, infants were presented with walking stimuli showing either normal body translation or a “moonwalk” that reversed the horizontal motion of body translations. In Experiment 2, infants were presented with unperformable actions beyond infants’ gross motor functions (i.e., long jump) either with or without ecologically valid body displacement. In Experiment 3, infants were presented with rolling movements of inanimate objects that either complied with or violated physical causality. We found that female infants showed longer looking times to normal walking stimuli than to moonwalk stimuli, but did not differ in their looking time to movements of inanimate objects and unperformable actions. In contrast, male infants did not show sensitivity to causal movement for either category. Additionally, female infants looked longer at social stimuli of human actions than male infants. Under the tested circumstances, our findings indicate that female infants have developed a sensitivity to causal consistency between limb movements and body translations of biological motion, only for actions with previous visual and motor exposures, and demonstrate a preference toward social information.

2.
The development of visual binding in humans has been investigated with psychophysical tasks assessing the extent to which young infants achieve perceptual completion of partly occluded objects. These experiments lead to two conclusions. First, neonates are capable of figure-ground segregation, but do not perceive the unity of a centre-occluded object; the ability to perceive object unity emerges over the first several postnatal months. Second, by 4 months, infants rely on a range of Gestalt visual information in perceiving unity, including common motion, alignment, and good form. This developmental pattern is thought to be built on the ability to detect, and then utilize, appropriate visual information in support of the binding of features into surfaces and objects. Evidence from changes in infant attention, computational modelling, and developmental neurophysiology is cited that is consistent with this view.

3.
Adults who watch an ambiguous visual event consisting of two identical objects moving toward, through, and away from each other and hear a brief sound when the objects overlap report seeing visual bouncing. We conducted three experiments in which we used the habituation/test method to determine whether these illusory effects might emerge early in development. In Experiments 1 and 3 we tested 4‐, 6‐ and 8‐month‐old infants’ discrimination between an ambiguous visual display presented together with a sound synchronized with the objects’ spatial coincidence and the identical visual display presented together with a sound no longer synchronized with coincidence. Consistent with illusory perception, the 6‐ and 8‐month‐old, but not the 4‐month‐old, infants responded to these events as different. In Experiment 2 infants were habituated to the ambiguous visual display together with a sound synchronized with the objects’ coincidence and tested with a physically bouncing object accompanied by the sound at the bounce. Consistent with illusory perception again, infants treated these two events as equivalent by not exhibiting response recovery. The developmental emergence of this intersensory illusion at 6 months of age is hypothesized to reflect developmental changes in object knowledge and attentional mechanisms.

4.
The present research examined the development of 4.5‐ to 7.5‐month‐old infants’ ability to map different‐features occlusion events using a simplified event‐mapping task. In this task, infants saw a different‐features (i.e. egg‐column) event followed by a display containing either one object or two objects. Experiments 1 and 2 assessed infants’ ability to judge whether the egg‐column event was consistent with a subsequent one‐column display. Experiments 3 and 4 examined infants’ ability to judge whether the objects seen in the egg‐column event and those seen in a subsequent display were consistent in their featural composition. At 7.5 and 5.5 months, but not at 4.5 months, the infants successfully mapped the egg‐column event onto the one‐column display. However, the 7.5‐ and 5.5‐month‐olds differed in whether they mapped the featural properties of those objects. Whereas the 7.5‐month‐olds responded as if they expected to see two specific objects, an egg and a column, in the final display, the 5.5‐month‐olds responded as if they simply expected to see ‘two objects’. Additional results revealed, however, that when spatiotemporal information specified the presence of two objects, 5.5‐month‐olds succeeded at tagging the objects as being featurally distinct, although they still failed to attach more specific information about what those differences were. Reasons why the younger infants had difficulty integrating featural information into their object representations are discussed.

5.
In human adults two functionally and neuro‐anatomically separate systems exist for the use of visual information in perception and the use of visual information to control movements (Milner & Goodale, 1995, 2008). We investigated whether this separation is already functioning in the early stages of the development of reaching. To this end, 6‐ and 7‐month‐old infants were presented with two identical objects at identical distances in front of an illusory Ponzo‐like background that made them appear to be located at different distances. In two further conditions without the illusory background, the two objects were presented at physically different distances. Preferential reaching outcomes indicated that the allocentric distance information contained in the illusory background affected the perception of object distance. Yet, infants' reaching kinematics were only affected by the objects' physical distance and not by the perceptual distance manipulation. These findings were taken as evidence for the two‐visual systems, as proposed by Milner and Goodale (2008), being functional in early infancy. We discuss the wider implications of this early dissociation.

6.
Previous research has shown that 6‐month‐old infants extrapolate object motion on linear paths when they act predictively on fully visible moving objects but not when they observe partly occluded moving objects. The present research probed whether differences in the tasks presented to infants or in the visibility of the objects account for these findings, by investigating infants’ predictive head tracking of a visible object that moves behind a small occluder. Six‐month‐old infants were presented with an object that moved repeatedly on linear or nonlinear paths, with an occluder covering the place where all the paths intersected. The first time infants viewed an object’s motion, their head movements did not anticipate either linear or nonlinear motion, but they quickly learned to anticipate linear motion on successive trials. Infants also learned to anticipate nonlinear motion, but this learning was slower and less consistent. Learning in all cases concerned the trajectory of the object, not the specific locations at which the object appeared. These findings suggest that infants form object representations that are weakly biased toward inertial motion and that are influenced by learning. The findings accord with the thesis that a single system of representation underlies both predictive action and perception of object motion, and that occlusion reduces the precision of object representations.

7.
This research evaluated infants’ facial expressions as they viewed pictures of possible and impossible objects on a TV screen. Previous studies in our lab demonstrated that four-month-old infants looked longer at the impossible figures and fixated to a greater extent within the problematic region of the impossible shape, suggesting they were sensitive to novel or unusual object geometry. Our work takes studies of looking time data a step further, determining if increased looking co-occurs with facial expressions associated with increased visual interest and curiosity, or even puzzlement and surprise. We predicted that infants would display more facial expressions consistent with either “interest” or “surprise” when viewing the impossible objects relative to possible ones, which would provide further evidence of increased perceptual processing due to incompatible spatial information. Our results showed that the impossible cubes evoked both longer looking times and more reactive expressions in the majority of infants. Specifically, the data revealed significantly greater frequency of raised eyebrows, widened eyes, and returns to looking when viewing impossible figures, with the most robust effects occurring after a period of habituation. The pattern of facial expressions was consistent with the “interest” family of facial expressions and appears to reflect infants’ ability to perceive systematic differences between matched pairs of possible and impossible objects as well as recognize novel geometry found in impossible objects. Therefore, as young infants are beginning to register perceptual discrepancies in visual displays, their facial expressions may reflect heightened attention and increased information processing associated with identifying irreconcilable contours in line drawings of objects. This work further clarifies the ongoing formation and development of early mental representations of coherent 3D objects.

8.
Previous studies on perceptual grouping found that people can use spatiotemporal and featural information to group spatially separated rigid objects into a unit while tracking moving objects. However, few studies have tested the role of objects’ self-motion information in perceptual grouping, although such information is of great significance for motion perception in three-dimensional space. In natural environments, objects always move in translation and rotation at the same time. The self-rotation of the objects seriously disrupts objects’ rigidity and topology, creates conflicting movement signals, and results in crowding effects. Thus, this study sought to examine the specific role played by self-rotation information in grouping spatially separated non-rigid objects through a modified multiple object tracking (MOT) paradigm with self-rotating objects. Experiment 1 found that people could use self-rotation information to group spatially separated non-rigid objects, even though this information was deleterious for attentive tracking and irrelevant to the task requirements, and people seemed to use it strategically rather than automatically. Experiment 2 provided stronger evidence that this grouping advantage did come from the self-rotation per se rather than surface-level cues arising from self-rotation (e.g. similar 2D motion signals and common shapes). Experiment 3 changed the stimuli to more natural 3D cubes to strengthen the impression of self-rotation and again found that self-rotation improved grouping. Finally, Experiment 4 demonstrated that grouping by self-rotation and grouping by changing shape were statistically comparable but additive, suggesting that they were two different sources of object information. Thus, grouping by self-rotation mainly benefited from perceptual differences in motion flow fields rather than in deformation.
Overall, this study is the first attempt to identify self-motion as a new feature that people can use to group objects in dynamic scenes, and it sheds light on debates about what entities/units we group and what kinds of information about a target we process while tracking objects.

9.
We investigated the scanning strategies used by 2- to 3.5-month-old infants when viewing partly occluded object displays. Eye movements were recorded with a corneal reflection system as the infants observed stimuli depicting two rod parts above and below an occluding box. Stimulus parameters were chosen on the basis of past research demonstrating the importance of motion, occluder width, and edge alignment to perception of object unity. Results indicated that the infants tailored scanning to display characteristics, engaging in more extensive scanning when unity perception was challenged by a wide occluder or misaligned edges. In addition, older infants tended to scan the lower parts of the displays more frequently than did younger infants. Exploration of individual differences, however, revealed marked contrasts in specific scanning styles across infants. The findings are consistent with views of perceptual development stressing the importance of information processing skills and self-directed action to the acquisition of object knowledge.

10.
Adults have little difficulty perceiving objects as complete despite occlusion, but newborn infants perceive moving partly occluded objects solely in terms of visible surfaces. The developmental mechanisms leading to perceptual completion have never been adequately explained. Here, the authors examine the potential contributions of oculomotor behavior and motion sensitivity to perceptual completion performance in individual infants. Young infants were presented with a center-occluded rod, moving back and forth against a textured background, to assess perceptual completion. Infants also participated in tasks to assess oculomotor scanning patterns and motion direction discrimination. Individual differences in perceptual completion performance were strongly correlated with scanning patterns but were unrelated to motion direction discrimination. The authors present a new model of development of perceptual completion that posits a critical role for targeted visual scanning, an early developing oculomotor action system.

11.
The effects of limited attentional resources and study time on explicit and implicit memory were studied using Schacter and Cooper’s possible and impossible objects in their recognition and object decision paradigm. In one experiment, when attention at study was limited by a flanking digits procedure, object recognition was diminished but object decision priming for possible objects was unaffected; in another experiment, limiting attention plus reducing stimulus study time impaired object recognition and eliminated object priming. Recognition memory and perceptual priming for previously unfamiliar visual stimuli were both influenced by attention, although to different degrees. The intervening variable of study time determined the degree to which priming was affected by attentional resources. These results support a limited capacity attentional model for both recognition and perceptual priming of unfamiliar visual stimuli, and they highlight the need for assessing the interaction of attentional resources and study time in explicit and implicit memory tasks.

12.
The ability to determine how many objects are involved in physical events is fundamental for reasoning about the world that surrounds us. Previous studies suggest that infants can fail to individuate objects in ambiguous occlusion events until their first birthday and that learning words for the objects may play a crucial role in the development of this ability. The present eye-tracking study tested whether the classical object individuation experiments underestimate young infants’ ability to individuate objects and the role word learning plays in this process. Three groups of 6-month-old infants (N = 72) saw two opaque boxes side by side on the eye-tracker screen so that the content of the boxes was not visible. During a familiarization phase, two visually identical objects emerged sequentially from one box and two visually different objects from the other box. For one group of infants the familiarization was silent (Visual Only condition). For a second group of infants the objects were accompanied with nonsense words so that objects’ shape and linguistic labels indicated the same number of objects in the two boxes (Visual & Language condition). For the third group of infants, objects’ shape and linguistic labels were in conflict (Visual vs. Language condition). Following the familiarization, it was revealed that both boxes contained the same number of objects (e.g. one or two). In the Visual Only condition, infants looked longer at the box with the incorrect number of objects at test, showing that they could individuate objects using visual cues alone. In the Visual & Language condition, infants showed the same looking pattern. However, in the Visual vs. Language condition, infants looked longer at the box with the incorrect number of objects according to the linguistic labels.
The results show that infants can individuate objects in a complex object individuation paradigm considerably earlier than previously thought and that linguistic cues can impose their own preference in object individuation. The results are consistent with the idea that when language and visual information are in conflict, language can exert an influence on how young infants reason about the visual world.

13.
When a moving object (A) contacts a stationary one (B) and Object B then moves, visual impressions of force occur along with a visual impression of causality. It is shown that findings about force impressions that occur with launching-effect stimuli generalize to other forms of phenomenal causality, namely entraining, enforced disintegration, and shattering stimuli. In particular, evidence is reported for generality of the force asymmetry, in which the amount of perceived force exerted by Object A is greater than the amount of perceived resistance put up by Object B. Effects of manipulations of kinematic variables also resembled those found in previous experiments. Some unpredicted findings occurred. It is argued that these reflect a change in perceptual interpretation when both objects are in motion prior to contact, due to both objects being perceived as in autonomous motion. The results are consistent with a theoretical account in which force impressions occur by a process of matching kinematic information in visual stimuli to stored representations of actions on objects, which supply information about forces.

14.
Self‐propelled motion is a powerful cue that conveys information that an object is animate. In this case, animate refers to an entity's capacity to initiate motion without an applied external force. Sensitivity to this motion cue is present in infants that are a few months old, but whether this sensitivity is experience‐dependent or is already present at birth is unknown. Here, we tested newborns to examine whether predispositions to process self‐produced motion cues underlying animacy perception were present soon after birth. We systematically manipulated the onset of motion by self‐propulsion (Experiment 1) and the change in trajectory direction in the presence or absence of direct contact with an external object (Experiments 2 and 3) to investigate how these motion cues determine preference in newborns. Overall, data demonstrated that, at least at birth, the self‐propelled onset of motion is a crucial visual cue that allowed newborns to differentiate between self‐ and non‐self‐propelled objects (Experiment 1) because when this cue was removed, newborns did not manifest any visual preference (Experiment 2), even if they were able to discriminate between the stimuli (Experiment 3). To our knowledge, this is the first study aimed at identifying sensitivity in human newborns to the most basic and rudimentary motion cues that reliably trigger perceptions of animacy in adults. Our findings are compatible with the hypothesis of the existence of inborn predispositions to visual cues of motion that trigger animacy perception in adults.

15.
Object names are a major component of early vocabularies and learning object names depends on being able to visually recognize objects in the world. However, the fundamental visual challenge of the moment‐to‐moment variations in object appearances that learners must resolve has received little attention in word learning research. Here we provide the first evidence that image‐level object variability matters and may be the link that connects infant object manipulation to vocabulary development. Using head‐mounted eye tracking, the present study objectively measured individual differences in the moment‐to‐moment variability of visual instances of the same object, from infants’ first‐person views. Infants who generated more variable visual object images through manual object manipulation at 15 months of age experienced greater vocabulary growth over the next six months. Elucidating infants’ everyday visual experiences with objects may constitute a crucial missing link in our understanding of the developmental trajectory of object name learning.

16.
Two factors hypothesized to affect shared visual attention in 9-month-olds were investigated in two experiments. In Experiment 1, we examined the effects of different attention-directing actions (looking; looking and pointing; and looking, pointing, and verbalizing) on 9-month-olds’ engagement in shared visual attention. In Experiment 1, we also varied target object locations (i.e., in front of, behind, or peripheral to the infant) to test whether 9-month-olds can follow an adult’s gesture past a nearby object to a more distal target. Infants followed more elaborate parental gestures to targets within their visual field. They also ignored nearby objects to follow adults’ attention to a peripheral target, but not to targets behind them. In Experiment 2, we rotated the parent 90° from the infant’s midline to equate the size of the parents’ head turns to targets within as well as outside the infants’ visual field. This manipulation significantly increased infants’ looking to target objects behind them; however, the frequency of such looks did not exceed chance. The results of these two experiments are consistent with perceptual and social experience accounts of shared visual attention.

17.
18.
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.

19.
Cognitive Development, 2003, 18(2): 233–246
A first experiment tested infants’ perceptual discrimination between correct and incorrect adding and subtracting of objects with a standardized violation-of-expectation procedure. Eleven-, 16- and 21-month-olds did not look longer at incorrect (one object) than at correct (two objects) results of adding one object to another, nor at incorrect (two objects) than at correct (one object) results of subtracting one object from two. A second experiment tested 21-month-olds’ sensorimotor production of addition and subtraction results using a search-for-objects procedure. They already searched correctly for two objects when one object was added to another and for one object when one was subtracted from two objects. These findings support the constructivist hypothesis that infants’ active sensorimotor production may develop before their reactive perceptual discrimination of adding and subtracting objects.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号