Similar Documents
20 similar documents were retrieved.
1.
Object and observer motion in the perception of objects by infants
Sixteen-week-old human infants distinguish optical displacements given by their own motion from displacements given by moving objects, and they use only the latter to perceive the unity of partly occluded objects. Optical changes produced by moving the observer around a stationary object produced attentional levels characteristic of stationary observers viewing stationary displays and much lower than those shown by stationary observers viewing moving displays. Real displacements of an object with no subject-relative displacement, produced by moving an object so as to maintain a constant relation to the moving observer, evoked attentional levels that were higher than with stationary displays and more characteristic of attention to moving displays, a finding suggesting detection of the real motion. Previously reported abilities of infants to perceive the unity of partly occluded objects from motion information were found to depend on real object motion rather than on optical displacements in general. The results suggest that object perception depends on registration of the motions of surfaces in the three-dimensional layout.

2.
The effects of viewing the face of the talker (visual speech) on the processing of clearly presented intact auditory stimuli were investigated using two measures likely to be sensitive to the articulatory motor actions produced in speaking. The aim of these experiments was to highlight the need for accounts of the effects of audio-visual (AV) speech that explicitly consider the properties of articulated action. The first experiment employed a syllable-monitoring task in which participants were required to monitor for target syllables within foreign carrier phrases. An AV effect was found: seeing a talker's moving face (moving face condition) led to more accurate recognition (hits and correct rejections) of spoken syllables than auditory-only presentations of a still face (still face condition). The second experiment examined processing of spoken phrases by investigating whether an AV effect would be found for estimates of phrase duration. Two effects of seeing the moving face of the talker were found. First, the moving face condition yielded significantly longer duration estimates than the still face auditory-only condition. Second, estimates of auditory duration made in the moving face condition reliably correlated with the actual durations, whereas those made in the still face auditory condition did not. The third experiment was carried out to determine whether the stronger correlation between estimated and actual duration in the moving face condition might have been due to generic properties of AV presentation. Experiment 3 employed the procedures of the second experiment but used stimuli that were not perceived as speech, although they possessed the same timing cues as the speech stimuli of Experiment 2. It was found that simply presenting both auditory and visual timing information did not result in more reliable duration estimates. Further, when released from the speech context (used in Experiment 2), duration estimates for the auditory-only stimuli were significantly correlated with actual durations. In all, these results demonstrate that visual speech can assist in the analysis of clearly presented auditory stimuli in tasks concerned with information provided by viewing the production of an utterance. We suggest that these findings are consistent with there being a processing link between perception and action, such that viewing a talker speaking will activate speech motor schemas in the perceiver.
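The duration-estimation analysis described above reduces to correlating each condition's estimates with the actual phrase durations. Below is a minimal sketch of that comparison in Python, assuming per-trial arrays of actual and estimated durations; the variable names and numbers are illustrative only, not data from the study.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative per-trial data: actual phrase durations (s) and one
# participant's estimates in the moving-face and still-face conditions.
actual = np.array([1.2, 1.8, 2.4, 3.1, 3.6, 4.2])
est_moving_face = np.array([1.4, 1.7, 2.6, 3.0, 3.9, 4.1])  # tracks actual durations
est_still_face = np.array([2.6, 2.1, 2.9, 2.4, 3.1, 2.7])   # roughly flat

for label, est in [("moving face", est_moving_face), ("still face", est_still_face)]:
    r, p = pearsonr(actual, est)
    print(f"{label}: mean estimate = {est.mean():.2f} s, r = {r:.2f}, p = {p:.3f}")
```

The two reported moving-face effects correspond to the two printed quantities: longer mean estimates, and a reliable correlation between estimated and actual durations.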

3.
Infants' ability to represent objects has received significant attention from the developmental research community. With the advent of eye-tracking technology, detailed analyses of infants' looking patterns during object occlusion have revealed much about the nature of infants' representations. The current study continues this research by analyzing infants' looking patterns in a novel manner and by comparing infants' looking at a simple display, in which a single three-dimensional (3D) object moves along a continuous trajectory, to a more complex display, in which two 3D objects undergo trajectories that are interrupted behind an occluder. Six-month-old infants saw an occlusion sequence in which a ball moved along a linear path, disappeared behind a rectangular screen, and then a ball (ball-ball event) or a box (ball-box event) emerged at the other edge. An eye-tracking system recorded infants' eye movements during the event sequence. Examination of infants' attention to the occluder indicates that during the occlusion interval infants looked longer to the side of the occluder behind which the moving occluded object was located, shifting gaze from one side of the occluder to the other as the object(s) moved behind the screen. Furthermore, when events included two objects, infants attended to the spatiotemporal coordinates of the objects longer than when a single object was involved. These results provide clear evidence that infants' visual tracking differs in response to a one-object display versus a two-object display. Furthermore, this finding suggests that infants may require more focused attention to the hidden position of objects in more complex multiple-object displays and provides additional evidence that infants represent the spatial location of moving occluded objects.
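The occluder analysis amounts to tallying which side of the occluder the infant's gaze falls on while the object is hidden. A rough sketch of that step is given below; the gaze-sample format, occluder coordinates, and helper names are assumptions for illustration, not details from the study.

```python
# Hypothetical sketch: proportion of occlusion-interval gaze samples falling
# on the left vs. right half of the occluder (screen coordinates in pixels).
OCCLUDER = {"left": 400, "right": 800, "top": 200, "bottom": 600}  # assumed AOI
MIDLINE = (OCCLUDER["left"] + OCCLUDER["right"]) / 2

def side_of_occluder(x, y):
    """Return 'left', 'right', or None if the sample is off the occluder."""
    if not (OCCLUDER["left"] <= x <= OCCLUDER["right"]
            and OCCLUDER["top"] <= y <= OCCLUDER["bottom"]):
        return None
    return "left" if x < MIDLINE else "right"

def looking_by_side(samples, occlusion_interval):
    """samples: iterable of (t, x, y); occlusion_interval: (t_start, t_end) in s."""
    t0, t1 = occlusion_interval
    counts = {"left": 0, "right": 0}
    for t, x, y in samples:
        if t0 <= t <= t1:
            side = side_of_occluder(x, y)
            if side is not None:
                counts[side] += 1
    total = sum(counts.values()) or 1
    return {side: n / total for side, n in counts.items()}

# Example: gaze drifting from the entry (left) edge toward the exit (right) edge.
samples = [(i / 60, 450 + i * 4, 400) for i in range(60)]  # 1 s of 60 Hz samples
print(looking_by_side(samples, (0.0, 1.0)))
```

Longer looking to the side where the hidden object currently is would show up as a larger proportion for that side.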

4.
What kind of hand and finger movements are newborn infants preoccupied with, and how are these movements organized and controlled? These questions were studied in two experiments under three conditions: a social condition, in which the mother (in Experiment 1) or the experimenter (in Experiment 2) sat face to face with the infant; an object condition, in which a ball moving slowly and irregularly was presented to the infant; and a baseline condition (in Experiment 1) with neither ball nor mother present. The size of the ball and the distance to it were chosen so that it approximately corresponded to the visual angle of the head of the model. Twenty-six neonates, ranging from 2 to 6 days of age at the time of observation, participated in the study. All infants were in an alert, optimal awake state during the experiments. The infants' finger movements were scored from video recordings. The results revealed a large variety of relatively independent finger movements. Finger movements differed both in quantity and in quality between the three conditions. There were many more finger movements in the social condition than in the object and baseline conditions. In addition, there were relatively more transitional finger movements and flexions of the hand in the social condition, and relatively more thumb-index finger activity and extensions of the hand in the object condition. Finally, the arms were more often extended forward in the object condition than in the social condition. The results support the notion that neonates show different modes of functioning towards people and objects.

5.
In this study of 87 three-month-old infants, the authors investigated the relation between early social contingency experiences and infants' competencies to detect nonsocial contingencies. The authors operationalized early social contingencies as prompt, contingent maternal responses, coded microanalytically on the basis of video-recorded mother-infant interactions. They assessed competence to detect nonsocial contingencies by 2 methods: (a) the mobile conjugate reinforcement paradigm, which focuses on detecting contingencies between the infants' actions (kicking) and nonsocial consequences (mobile moving), and (b) the visual expectation paradigm, which focuses on detecting contingencies between 1 event (a smiley face projected on a screen) and a 2nd event that followed it (a complex picture projected on the other side of the screen). The results showed that early social contingencies are related to the competency to detect nonsocial action-consequence contingencies in the mobile conjugate reinforcement paradigm.
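In the mobile conjugate reinforcement paradigm, contingency detection is commonly indexed by comparing kick rates during the contingent phase against a non-contingent baseline. The sketch below shows that kind of learning ratio; the numbers and the 1.5 cutoff are illustrative assumptions, not values reported in this study.

```python
# Illustrative learning-ratio computation for the mobile paradigm:
# kicks per minute during a non-contingent baseline vs. the contingent phase.
baseline_kicks_per_min = 8.0      # mobile does not respond to kicking
contingent_kicks_per_min = 14.0   # mobile moves contingent on kicking

learning_ratio = contingent_kicks_per_min / baseline_kicks_per_min
detected = learning_ratio >= 1.5  # assumed criterion; cutoffs vary across studies

print(f"learning ratio = {learning_ratio:.2f}, contingency detected: {detected}")
```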

6.
Perception of animacy from the motion of a single object
Tremoulet PD, Feldman J (2000). Perception, 29(8), 943-951.
We demonstrate that a single moving object can create the subjective impression that it is alive, based solely on its pattern of movement. Our displays differ from conventional biological motion displays (which normally involve multiple moving points, usually integrated to suggest a human form) in that they contain only a single rigid object moving across a uniform field. We focus on motion paths in which the speed and direction of the target object change simultaneously. Naive subjects' ratings of animacy were significantly influenced by (i) the magnitude of the speed change, (ii) the angular magnitude of the direction change, (iii) the shape of the object, and (iv) the alignment between the principal axis of the object and its direction of motion. These findings are consistent with the hypothesis that observers classify as animate only those objects whose motion trajectories are otherwise unlikely to occur in the observed setting.
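The key stimulus property is a trajectory whose speed and direction change at the same instant. A small sketch of generating such a path, parameterized by the magnitude of the speed change and the angular magnitude of the direction change, is shown below; the frame rate, units, and parameter values are arbitrary illustrations rather than the displays used in the study.

```python
import math

def trajectory(speed_factor=2.0, direction_change_deg=45.0,
               base_speed=3.0, n_frames=60, change_frame=30):
    """Positions (x, y) of a dot whose speed and heading both change at
    change_frame; units are arbitrary (e.g., pixels per frame)."""
    x, y, heading, speed = 0.0, 0.0, 0.0, base_speed
    points = [(x, y)]
    for frame in range(1, n_frames):
        if frame == change_frame:  # simultaneous speed and direction change
            speed *= speed_factor
            heading += math.radians(direction_change_deg)
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        points.append((x, y))
    return points

path = trajectory(speed_factor=2.0, direction_change_deg=45.0)
print(path[28:33])  # positions just before and after the change point
```

Larger values of speed_factor and direction_change_deg correspond to the speed- and direction-change magnitudes that the abstract reports as raising animacy ratings.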

8.
In a visual occlusion task, 4-month-olds were given a dynamic sound cue (a sound following the trajectory of an object) or a static cue (the sound remained stationary). Infants' oculomotor anticipations were greater in the dynamic condition, suggesting that representations of visual occlusion were supported by auditory information.

9.
The process of learning the structure of novel objects involves the selective use of information available in the distal stimulus. By allowing participants to explore an object within a limited field of view, we were able to examine more rigorously which regions of the object are actually selected in the learning process. Participants explored objects either by moving a circular aperture over a stationary novel object (the aperture-movement condition) or by moving the object behind a stationary aperture (the object-movement condition). Given the differences in how the spatial layout of object parts is revealed in the two study conditions, we expected that exploration would be more systematic in the aperture-movement condition than in the object-movement condition and would lead to better object recognition. We show evidence that exploration patterns in the aperture-movement condition were more closely related to the structure of the object and that, as a consequence, this condition resulted in more accurate recognition in a later old/new discrimination test.

10.
Two-dimensional (2D) displays of real three-dimensional (3D) objects are frequently used experimental tools in animal studies. Whether marmoset monkeys, with their highly diverse and complex anti-predation strategies, readily recognize 2D representations of potential threats, as other primates do, has yet to be determined. Thus, the behavioral responses of adult captive black tufted-ear marmosets (Callithrix penicillata) toward an unfamiliar motionless snake model and its photograph were assessed. Pictorially naïve subjects were randomly divided into two groups (n = 12 each) and submitted to two trials. Group 1 was initially exposed to the 3D object and, after 1 week, to its photograph. Group 2 was first presented the picture and only tested with the real object 1 week later. All 15-min trials were divided into three consecutive 5-min intervals: pre-exposure, exposure and post-exposure. In the presence of the 3D snake object, regardless of its presentation order, the frequency of direct gazes, head-cocks, tsik-tsik alarm/mobbing calls and genital displays increased significantly. The photograph induced a similar response, although only when the object had been previously presented, as significantly higher levels of these behaviors were seen in Group 1 than in Group 2. Proximity to the stimulus, aerial scan, terrestrial glance, displacement activities and locomotion were not consistently influenced by the stimuli's presence and/or order of presentation. Therefore, marmosets recognized and responded appropriately to biologically and emotionally relevant 3D and 2D stimuli. Since the aversive/fearful reactions toward the photograph were seen only after the snake object had been presented, the former seems to be essentially a learned response.

11.
No evidence has so far been provided of newborns' capacity to give a matching response to 2D stimuli. We report evidence from 18 newborns who were presented with three types of stimuli on a 2D screen. The stimuli were video-recorded displays of tongue protrusion shown by: (a) a human face, (b) a human tongue from a disembodied mouth, and (c) an artificial tongue from a robotic mouth. Compared to a baseline condition, neonates significantly increased their tongue protrusion when seeing disembodied human and artificial tongue movements, but not when seeing a 2D full-face protruding tongue. This result was interpreted as reflecting exploration of the top-heavy pattern of the 2D face, which distracted infants' attention from the tongue. Results also showed progressively more accurate matching (full tongue protrusion) throughout repeated exposure to each kind of stimulus. Such findings are not in line with the predictions of the innate releasing mechanism (IRM) model or of the oral exploration hypothesis. They support the active intermodal mapping (AIM) hypothesis, which not only emphasizes the importance of repeated experience, as the associative sequence learning (ASL) hypothesis would, but also predicts differential learning and a progressive correction of the response adapted to each stimulus.

12.

The presentation of visual food cues (e.g., food plating) can affect our appetite and lead to characteristic changes of early as well as late positivity in the electroencephalogram. The present event-related potential (ERP) study attempted to change ERPs and affective ratings for food pictures by rearranging the components of a depicted meal (conventional presentation) as a smiley or frowny. The images were presented to 68 women (mean age = 24 years), who rated the wanting and liking of the meals. Compared to conventional food plating, smiley and frowny meals elicited enhanced amplitudes of the P200, P300, and late positive potential (LPP) in a large occipito-parietal cluster. Frowny meals were rated as less appetizing than conventional food presentations. The mentioned ERP components are concomitants of face-configuration processing (P200), automatic attention/novelty detection (P300), and voluntary attention/assignment of emotional meaning (LPP). Thus, the combination of two affective cues (food, face) in one stimulus changed the activation in motivational circuits of the brain. Also, serving a meal as a frowny could help to regulate appetite.
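ERP components such as the P200, P300, and LPP are typically quantified as mean amplitudes within fixed post-stimulus windows over the electrode cluster of interest. The sketch below shows that step for epoched data stored as a (trials × channels × samples) array; the window boundaries, sampling rate, and toy data are placeholders, not the parameters of this study.

```python
import numpy as np

SFREQ = 500          # Hz (assumed sampling rate)
EPOCH_START = -0.2   # s relative to picture onset (assumed)

# Assumed component windows in seconds (placeholders, not the study's values).
WINDOWS = {"P200": (0.15, 0.25), "P300": (0.30, 0.45), "LPP": (0.50, 0.80)}

def mean_amplitude(epochs, window):
    """epochs: array of shape (n_trials, n_channels, n_samples), in microvolts."""
    start = int((window[0] - EPOCH_START) * SFREQ)
    stop = int((window[1] - EPOCH_START) * SFREQ)
    return epochs[:, :, start:stop].mean()  # grand mean over trials, channels, time

# Toy data: 40 trials x 8 occipito-parietal channels x 1.2 s epochs.
rng = np.random.default_rng(0)
epochs_smiley = rng.normal(2.0, 1.0, (40, 8, int(1.2 * SFREQ)))
epochs_conventional = rng.normal(1.0, 1.0, (40, 8, int(1.2 * SFREQ)))

for name, win in WINDOWS.items():
    diff = mean_amplitude(epochs_smiley, win) - mean_amplitude(epochs_conventional, win)
    print(f"{name}: smiley minus conventional = {diff:.2f} microvolts")
```

An enhanced amplitude for the rearranged meals would appear as a positive difference in each window.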


13.
This study used the object-reviewing paradigm to examine the role of the continuity of feature change in maintaining continuous object representations. Experiments 1 and 2 explored how the manner of change along the shape dimension (no change, gradual change, abrupt change) and along the luminance dimension (no change, gradual change, random change), respectively, affected the object-specific preview benefit. Under feature-continuous conditions (no change or gradual change), both experiments obtained an object-specific preview benefit. Under feature-discontinuous conditions (abrupt or random change), the effect disappeared. These results indicate that the continuity of feature change likewise affects the maintenance of continuous object representations.
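In the object-reviewing paradigm, the object-specific preview benefit is the response-time advantage when a target reappears on the same object that previewed it, relative to a different object. The sketch below computes that benefit per feature-change condition from hypothetical trial records; the field names and reaction times are invented for illustration.

```python
# Hypothetical trials: (feature_change_condition, same_object, reaction_time_ms).
trials = [
    ("gradual", True, 520), ("gradual", False, 560),
    ("gradual", True, 515), ("gradual", False, 555),
    ("abrupt",  True, 540), ("abrupt",  False, 542),
    ("abrupt",  True, 538), ("abrupt",  False, 541),
]

def preview_benefit(trials, condition):
    """Object-specific preview benefit = mean RT(different object) - mean RT(same object)."""
    same = [rt for cond, same_obj, rt in trials if cond == condition and same_obj]
    diff = [rt for cond, same_obj, rt in trials if cond == condition and not same_obj]
    return sum(diff) / len(diff) - sum(same) / len(same)

for condition in ("gradual", "abrupt"):
    print(condition, preview_benefit(trials, condition), "ms")
```

A positive benefit under continuous change and a benefit near zero under discontinuous change is the pattern the abstract describes.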

14.
Voluntary attention to one of two static objects in the peripheral field of one eye makes this object more liable to masking by a moving object in the corresponding area of the field of the other eye (Experiment 1).

Positive afterimages (and probably negative afterimages) are subject to (binocular) movement masking (Experiment 2).

Movement masking can occur in the field of either eye, but with the displays so far tried the inhibitory influence of a moving object is less in the field of the eye to which it is shown than in the field of the other eye (Experiment 3).

15.
Parametric induction of animacy experience
Graphical displays of simple moving geometrical figures have been repeatedly used to study the attribution of animacy in human observers. Yet little is known about the relevant movement characteristics responsible for this experience. The present study introduces a novel parametric research paradigm, which allows for the experimental control of specific motion parameters and a predictable influence on the attribution of animacy. Two experiments were conducted using 3D computer animations of one or two objects, systematically introducing variations in the following aspects of motion: directionality, discontinuity and responsiveness. Both experiments further varied temporal kinematics. Results showed that animacy experience increased with the time a moving object paused in the vicinity of a second object and with increasing complexity of interaction between the objects (approach and responsiveness). The experience of animacy could be successfully modulated in a parametric fashion by the systematic variation of comparatively simple differential movement characteristics.
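Because the motion parameters are varied parametrically, the relation between those parameters and animacy ratings can be summarized with a simple linear fit. The sketch below does so by ordinary least squares over hypothetical ratings; the predictors, coefficients, and data are invented for illustration and are not the study's analysis.

```python
import numpy as np

# Hypothetical design: pause time near the second object (s) and a
# responsiveness flag, with invented 7-point animacy ratings.
pause = np.array([0.0, 0.5, 1.0, 1.5, 0.0, 0.5, 1.0, 1.5])
responsive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
rating = np.array([2.1, 2.8, 3.4, 4.0, 3.0, 3.9, 4.6, 5.3])

X = np.column_stack([np.ones_like(pause), pause, responsive])  # add intercept column
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
print(dict(zip(["intercept", "pause", "responsive"], np.round(beta, 2))))
```

Positive coefficients for pause time and responsiveness would mirror the direction of effects reported above.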

16.
Two experiments assessed infant sensitivity to figural coherence in point-light displays whose points moved as if attached to the major joints of a walking person. Experiment 1 tested whether 3- and 5-month-old infants could discriminate between upright and inverted versions of the walker in both moving and static displays. In an infant-control habituation paradigm, both ages discriminated the moving but not the static displays. Experiment 2 was designed to clarify whether or not structural invariants were extracted from these displays. The results revealed that (1) moving point-light displays with equivalent motions but different topographic relations were discriminated, while (2) static versions were not, and (3) arrays that varied in the amount of motion present in different portions of the display were also not discriminated. These results are interpreted as indicating that young infants are sensitive to figural coherence in displays of biomechanical motion.
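In an infant-control habituation procedure, the habituation phase ends once looking time falls below a criterion defined relative to the first trials. A schematic version of that rule appears below; the 50% criterion and three-trial windows follow common practice but are assumptions, not details given in the abstract.

```python
def habituated(looking_times, window=3, criterion=0.5):
    """True once the mean of the last `window` trials drops below `criterion`
    times the mean of the first `window` trials."""
    if len(looking_times) < 2 * window:
        return False
    baseline = sum(looking_times[:window]) / window
    recent = sum(looking_times[-window:]) / window
    return recent < criterion * baseline

# Illustrative looking times (s) across successive habituation trials.
trials = [28.0, 25.5, 26.0, 18.2, 14.7, 11.9, 10.4]
print(habituated(trials))  # True: looking has fallen below half of the initial level
```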

17.
Four experiments assessed change detection performance for displays consisting of a single, novel, multipart object, leading to several new findings. First, larger changes (involving more object parts) were more difficult to detect than smaller changes. Second, change detection performance for displays of a temporarily occluded moving object was no more or less sensitive than detection performance for displays of static objects disappearing and reappearing; however, item analyses did indicate that detection may have been based on different representations in these two situations. Third, training observers to recognize objects before the detection task had no measurable effect on sensitivity levels, but induced different biases depending on the training conditions. Finally, some participants' performance revealed implicit change detection on trials in which they explicitly responded that they saw no change.
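Sensitivity and bias in a change-detection task of this kind are conventionally summarized with signal-detection measures computed from hit and false-alarm rates. Below is a generic d′/criterion sketch, not the paper's specific analysis; the trial counts are invented.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from trial counts, with a standard
    1/(2N) correction to avoid infinite z-scores for perfect rates."""
    z = NormalDist().inv_cdf
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = min(max(hits / n_signal, 1 / (2 * n_signal)), 1 - 1 / (2 * n_signal))
    fa_rate = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

print(sdt_measures(hits=34, misses=14, false_alarms=10, correct_rejections=38))
```

Equal sensitivity with different biases across conditions, as reported for the training manipulation, would show up as similar d′ values paired with different criterion values.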

18.
Infants learn less from a televised demonstration than from a live demonstration, a phenomenon known as the video deficit effect. The present study employs a novel approach, using touch screen technology to examine 15-month-olds' transfer of learning. Infants were randomly assigned either to within-dimension (2D/2D or 3D/3D) or cross-dimension (3D/2D or 2D/3D) conditions. For the within-dimension conditions, an experimenter demonstrated an action by pushing a virtual button on a 2D screen or a real button on a 3D object. Infants were then given the opportunity to imitate using the same screen or object. For the 3D/2D condition, an experimenter demonstrated the action on the 3D object, and infants were given the opportunity to reproduce the action on a 2D touch screen (and vice versa for the 2D/3D condition). Infants produced significantly fewer target actions in the cross-dimension conditions than in the within-dimension conditions. These findings have important implications for infants' understanding of, and learning from, 2D images and for their use of 2D media as the basis for actions in the real world.

19.
Wu B, Klatzky RL, Stetten GD (2012). Cognition, 123(1), 33-49.
We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine two key processes described in the model: localizing cross-sections within a common frame of reference, and spatiotemporally integrating cross-sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in situ vs. ex situ, which differed in whether cross-sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level.
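The in situ display effectively places each 2D cross-section into a common 3D frame using the pose of the hand-held device at capture time, whereas ex situ viewing leaves that localization to the observer. A geometric sketch of the localization step is shown below, assuming the pose is available as a rotation matrix and translation per slice; all names and numbers are illustrative.

```python
import numpy as np

def localize_slice(slice_points, rotation, translation):
    """Map 2D points lying in a cross-sectional image plane (z = 0) into the
    common world frame, given the device pose (rotation, translation)."""
    pts = np.column_stack([slice_points, np.zeros(len(slice_points))])  # lift to 3D
    return pts @ rotation.T + translation

# Two cross-sections of the same hidden object, captured with the device moved
# 1 unit along world z between captures (identity rotation for simplicity).
slice_a = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
slice_b = np.array([[0.2, 0.1], [1.1, 0.2], [1.2, 1.1]])
world_a = localize_slice(slice_a, np.eye(3), np.array([0.0, 0.0, 0.0]))
world_b = localize_slice(slice_b, np.eye(3), np.array([0.0, 0.0, 1.0]))
print(np.vstack([world_a, world_b]))  # both slices now share one frame of reference
```

Once all slices are expressed in one frame, integrating them into an object representation becomes a matter of relating points across slices, which is the second process the model describes.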

20.
An experiment involving 90 students in the 1st, 3rd, and 5th grades investigated how visual examples and grade (our surrogate for age) affected variability in a drawing task. The task involved using circles as the main element in a set of drawings. There were two examples: one was simple and single (a smiley face inside a circle); the other, complex and dual (a fishbowl extending outside a circle and a bicycle using two circles). There were significant effects of both example and grade on variability. Between grades, 3rd and 5th graders were more variable than 1st graders with the complex (but not the simple) set of examples. Within grades, 3rd and 5th graders were more variable with the complex (compared to the simple) set of examples. First graders' variability levels did not change with examples. The discussion focuses on how examples have been and should be used to increase variability in the drawings of both younger and older children.
