Similar documents
20 similar documents were retrieved (search time: 31 ms)
1.
Decision-making is central to human cognition. Fundamental to every decision is the ability to internally represent the available choices and their relative costs and benefits. The most basic and frequent decisions we make occur as our motor system chooses and executes only those actions that achieve our current goals. Although these interactions with the environment may appear effortless, that apparent ease belies incredibly sophisticated visuomotor decision-making processes. To measure how visuomotor decisions unfold in real time, we used a unique reaching paradigm that forced participants to initiate rapid hand movements toward multiple potential targets, with only one being cued after reach onset. We show across three experiments that, in cases of target uncertainty, trajectories are spatially sensitive to the probabilistic distribution of targets within the display. Specifically, when presented with two- or three-target displays, subjects initiate their reaches toward an intermediary or ‘averaged’ location before correcting their trajectory in flight to the cued target location. A control experiment suggests that this effect depends on the targets acting as potential reach locations and not as distractors. This study is the first to show that the ‘averaging’ of target-directed reaching movements depends not only on the spatial positions of the targets in the display but also on the probability of acting at each target location.
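
As a purely illustrative sketch of the ‘averaging’ idea (not the authors' analysis code), the predicted launch point under target uncertainty can be written as the probability-weighted average of the potential target locations; the coordinates and probabilities below are made-up examples:

```python
# Illustrative sketch: predicted initial reach endpoint as the
# probability-weighted average of potential target positions.
import numpy as np

def predicted_initial_endpoint(targets, probabilities):
    """targets: (n, 2) array of x/y target positions (e.g. cm);
    probabilities: length-n cue probabilities (normalized here)."""
    targets = np.asarray(targets, dtype=float)
    p = np.asarray(probabilities, dtype=float)
    p = p / p.sum()                  # guard against rounding
    return p @ targets               # expected (weighted-average) target position

# Two equally likely targets 10 cm apart: launch toward the midpoint.
print(predicted_initial_endpoint([[-5.0, 30.0], [5.0, 30.0]], [0.5, 0.5]))
# Three equiprobable targets: launch toward the central, 'averaged' location.
print(predicted_initial_endpoint([[-8.0, 30.0], [0.0, 30.0], [8.0, 30.0]],
                                 [1/3, 1/3, 1/3]))
```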

2.
Two experiments investigated infants’ sensitivity to familiar size as information for the distances of objects with which they had had only brief experience. Each experiment had two phases: a familiarization phase and a test phase. During the familiarization phase, the infant played with a pair of different-sized objects for 10 min. During the test phase, a pair of objects, identical to those seen in the familiarization phase but now equal in size, were presented to the infant at a fixed distance under monocular or binocular viewing conditions. In the test phase of Experiment 1, 7-month-old infants viewing the objects monocularly showed a significant preference to reach for the object that resembled the smaller object in the familiarization phase. Seven-month-old infants in the binocular viewing condition reached equally to the two test phase objects. These results indicate that, in the monocular condition, the 7-month-olds used knowledge about the objects’ sizes, acquired during the familiarization phase, to perceive distance from the test objects’ visual angles, and that they reached preferentially for the apparently nearer object. The lack of a reaching preference in the binocular condition rules out interpretations of the results not based on the objects’ perceived distances. The results, therefore, indicate that 7-month-old infants can use memory to mediate spatial perception. The implications of this finding for the debate between direct and indirect theories of visual perception are discussed. In the test phase of Experiment 2, 5-month-old infants viewing the objects monocularly showed no reaching preference. These infants, therefore, showed no evidence of sensitivity to familiar size as distance information.
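
The familiar-size cue invoked here follows from simple viewing geometry; the notation below is ours, not the authors':

$$\hat{D} = \frac{S_{\text{familiar}}}{2\tan(\theta/2)} \approx \frac{S_{\text{familiar}}}{\theta} \quad (\text{small } \theta,\ \theta \text{ in radians})$$

where θ is the object's visual angle and S_familiar the physical size remembered from the familiarization phase. Because both test objects subtend the same visual angle at the same distance, the object remembered as smaller yields a smaller distance estimate and therefore appears nearer, which is why a preference to reach for it is the predicted signature of familiar-size use under monocular viewing.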

3.
In human adults, two functionally and neuro-anatomically separate systems exist for the use of visual information in perception and for its use in the control of movements (Milner & Goodale, 1995, 2008). We investigated whether this separation is already functioning in the early stages of the development of reaching. To this end, 6- and 7-month-old infants were presented with two identical objects at identical distances in front of an illusory Ponzo-like background that made them appear to be located at different distances. In two further conditions without the illusory background, the two objects were presented at physically different distances. Preferential reaching outcomes indicated that the allocentric distance information contained in the illusory background affected the perception of object distance. Yet infants' reaching kinematics were affected only by the objects' physical distance and not by the perceptual distance manipulation. These findings were taken as evidence that the two visual systems proposed by Milner and Goodale (2008) are already functional in early infancy. We discuss the wider implications of this early dissociation.

4.
Recent reports of similar patterns of brain electrical activity (electroencephalogram, EEG) during action execution and observation, recorded from scalp locations over motor-related regions in infants and adults, have raised the possibility that two foundational abilities – controlling one's own intentional actions and perceiving others' actions – may be integrally related during ontogeny. However, to our knowledge, there are no published reports of the relations between developments in motor skill (i.e. recording actual motor skill performance) and EEG during both action execution and action observation. In the present study we collected EEG from 21 nine-month-olds who were given opportunities to reach for toys and who also observed an experimenter reach for toys. Event-related desynchronization (ERD) was computed from the EEG during the reaching events. We assessed infants' reaching-grasping competence, including reach latency, errors, preshaping of the hand, and bimanual reaches, and found that desynchronization recorded in scalp electrodes over motor-related regions during action observation was associated with action competence during execution. Infants who were more competent reachers, compared to less competent reachers, exhibited greater ERD while observing reaching-grasping. These results provide initial evidence for an early emerging neural system integrating one's own actions with the perception of others' actions.
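
For readers unfamiliar with ERD, a minimal single-channel sketch follows. The (event − baseline)/baseline convention is standard, but the 6-9 Hz band, the sampling rate, and the function names are our assumptions for illustration, not details reported in the study:

```python
# Minimal ERD sketch: relative change in band power during an event
# compared with a pre-event baseline; negative values = desynchronization.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band=(6.0, 9.0)):
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()          # mean power density in the band

def erd_percent(baseline, event, fs=500):
    p_base = band_power(baseline, fs)
    p_event = band_power(event, fs)
    return 100.0 * (p_event - p_base) / p_base

rng = np.random.default_rng(0)
baseline, event = rng.standard_normal(1000), rng.standard_normal(1000)
print(round(erd_percent(baseline, event), 1))   # pure-noise example, near 0
```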

5.
During visually guided grasping movements, visual information is transformed into motor commands. This transformation is known as the "visuomotor map." To investigate limitations in the short-term plasticity of the visuomotor map in normal humans, we studied the maximum grip aperture (MGA) during the reaching phase while subjects grasped objects of various sizes. The objects seen and the objects grasped were never physically the same. When a discrepancy had been introduced between the sizes of the seen and grasped objects and the subjects had fully adapted to it, they all readily interpolated and extrapolated the MGA to objects not included in the training trials. In contrast, when the subjects were exposed to discrepancies that required a slope change in the visuomotor map, they were unable to adapt adequately. They instead retained a subject-specific slope of the relationship between visual size and MGA. We conclude from these results that, during reaching to grasp, normal subjects are unable to abandon a straight linear function relating visual object size to MGA. Moreover, the plasticity of the visuomotor map is, at least in the short term, constrained to allow only offset changes; that is, only "rigid shifts" are possible between the visual and motor coordinate systems.
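
The constraint can be stated compactly; the linear form and symbols below are our own shorthand, not notation from the paper:

$$\mathrm{MGA}(s) = k\,s + c$$

where s is the visual object size, k a subject-specific slope, and c an offset. On this reading, short-term adaptation permits only a change of offset, c → c + Δ (a rigid shift of the map), whereas exposures that would require the slope k to change leave the original, subject-specific slope in place.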

6.
Since Broca’s studies on language processing, cortical functional specialization has been considered to be integral to efficient neural processing. A fundamental question in cognitive neuroscience concerns the type of learning that is required for functional specialization to develop. To address this issue with respect to the development of neural specialization for letters, we used functional magnetic resonance imaging (fMRI) to compare brain activation patterns in pre-school children before and after different letter-learning conditions: a sensori-motor group practised printing letters during the learning phase, while the control group practised visual recognition. Results demonstrated an overall left-hemisphere bias for processing letters in these pre-literate participants, but, more interestingly, showed enhanced blood oxygen-level-dependent activation in the visual association cortex during letter perception only after sensori-motor (printing) learning. It is concluded that sensori-motor experience augments processing in the visual system of pre-school children. The change of activation in these neural circuits provides important evidence that ‘learning-by-doing’ can lay the foundation for, and potentially strengthen, the neural systems used for visual letter recognition.

7.
Why do infants make perseverative errors when reaching for two identical targets? From a dynamic systems perspective, perseverative errors emerge from repetitive perceptual–motor activity in novel and/or difficult contexts. To evaluate this account, we studied 9-month-old infants performing two tasks in which they repetitively reached toward either a single target or two identical targets. Results showed that, in the context of the two identical targets, perseverative responses were preceded by the creation of strong memories of previous reach directions and trajectories. In contrast, we found little evidence for convergence on habitual reach trajectories when the infants performed the less taxing single-target task, suggesting that the demands of reaching for two identical targets strongly constrained the reaching behavior. In total, results indicated that memories of prior movements make a critical contribution to performance in the A-not-B task and its variants.

8.
Humans can reach for objects with their hands whether the objects are seen, heard or touched. Thus, the position of objects is recoded in a joint-centered frame of reference regardless of the sensory modality involved. Our study indicates that this frame of reference is not the only one shared across sensory modalities. The location of reaching targets is also encoded in eye-centered coordinates, whether the targets are visual, auditory, proprioceptive or imaginary. Furthermore, the remembered eye-centered location is updated after each eye and head movement. This is quite surprising since, in principle, a reaching motor command can be computed from any non-visual modality without ever recovering the eye-centered location of the stimulus. This finding may reflect the predominant role of vision in human spatial perception.
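
A toy illustration of such eye-centered updating (our own example, not the study's model): the stored target location simply shifts opposite to each gaze displacement, whatever modality originally defined the target:

```python
# Toy eye-centered updating: after the eyes move, a remembered target's
# eye-centered coordinates shift by the opposite of the gaze displacement.
import numpy as np

def update_eye_centered(target_eye, gaze_shift):
    """target_eye: remembered target in eye-centered coordinates (deg);
    gaze_shift: horizontal/vertical eye rotation since encoding (deg)."""
    return np.asarray(target_eye, dtype=float) - np.asarray(gaze_shift, dtype=float)

# A target remembered 10 deg right of fixation; after a 15-deg rightward
# saccade it should be re-coded as 5 deg left of the new fixation.
print(update_eye_centered([10.0, 0.0], [15.0, 0.0]))   # -> [-5.  0.]
```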

9.
When reaching for objects, people frequently look where they reach. This raises the question of whether the targets for the eye and hand in concurrent eye and hand movements are selected by a unitary attentional system or by independent mechanisms. We used the deployment of visual attention as an index of the selection of movement targets and asked observers to reach and look to either the same location or separate locations. Results show that during the preparation of coordinated movements, attention is allocated in parallel to the targets of a saccade and a reaching movement. Attentional allocations for the two movements interact synergistically when both are directed to a common goal. Delaying the eye movement delays the attentional shift to the saccade target while leaving attentional deployment to the reach target unaffected. Our findings demonstrate that attentional resources are allocated independently to the targets of eye and hand movements and suggest that the goals for these effectors are selected by separate attentional mechanisms.

10.
Is continuous visual monitoring necessary in visually guided locomotion?
Subjects were asked to walk to targets that were up to 21 m away, either with vision excluded during walking or under normal visual control. Over the entire range, subjects were accurate whether or not vision was available, as long as no more than approximately 8 sec elapsed between closing the eyes and reaching the target. If more than 8 sec elapsed, (a) accuracy at distances up to 5 m was unaffected, but (b) accuracy at distances of 6-21 m was severely impaired. The results are interpreted to mean that two mechanisms are involved in guidance. Up to 5 m, motor programs of relatively long duration can be formulated and used to control activity. Over greater distances, subjects internalized information about the environment in a more general form, independent of any particular set of motor instructions, and used this to control activity and formulate new motor programs. Experiments in support of this interpretation are presented.

11.
Visually perceived eye level (VPEL) and the ability of subjects to reach with an unseen limb to targets placed at VPEL were measured in a statically pitched visual surround (pitchroom). VPEL was shifted upward and downward by upward and downward room pitch, respectively. Accuracy in reaching to VPEL represented a compromise between VPEL and actual eye level. This indicates that VPEL shifts reflect in part a change in perceived location of objects. When subjects were provided with terminal visual feedback about their reaching, accuracy improved rapidly. Subsequent reaching, with the room vertical, revealed a negative aftereffect (i.e., reaching errors that were opposite those made initially in the pitched room). In a second study, pointing accuracy was assessed for targets located both at VPEL and at other positions. Errors were similar for targets whether located at VPEL or elsewhere. Additionally, pointing responses were restricted to a narrower range than that of the actual target locations. The small size of reaching and pointing errors in both studies suggests that factors other than a change in perceived location are also involved in VPEL shifts.
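
One way to make the 'compromise' concrete is as a weighted average of the two reference levels; this linear weighting is our own illustration, not a model fitted in the paper:

$$r = w\,\mathrm{VPEL} + (1 - w)\,\mathrm{EL}_{\mathrm{true}}, \qquad 0 < w < 1,$$

where r is the elevation reached, EL_true the actual eye level, and w the weight given to the visually shifted eye level; w near 1 would mean reaching follows the illusion completely, and w near 0 that it ignores it.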

12.
Acta Psychologica, 1986, 63(1), 3-21
A historical review of the development of interaction theories in visual perception is presented. The concept that efferent signals generated in the eye or central nervous system interact with afferent visual signal flow dates back to the circles of pre-Socratic philosophers. They believed, however, that the interaction between observer-generated ‘pneuma’ and visual objects takes place at the site of the object or in the extrapersonal space between objects and eye. This idea was elaborated by Plato, the Stoic philosophers, Galen and some church fathers, but rejected by Aristotle and his school. The interaction theory was modified by Arabian medieval scientists (e.g., Alhazen, Avicenna), who believed the interaction of afferent and efferent signal flow to occur within the eye at the site of the pupil. The interaction theory finally disappeared during the first half of the 18th century, when Alkmaion's age-old idea of ‘efferent light’ generated in the eye was experimentally refuted. With the rediscovery of Aristotle's observation of eye-movement-related afterimage movement, however, interaction theory reappeared towards the beginning of the 19th century, and sensory physiologists were asking why the world is perceived as stable despite the fact that its image shifts continuously across the retina (Erasmus Darwin, Steinbuch, Purkyně, Bell). The idea of ‘cancellation’ between afferent visual movement signals and corollary signals evoked by the motor commands of gaze movement (now called efference copy signals) was first proposed by Purkyně. It was further developed during the 19th century by leading sensory physiologists such as Hering, Helmholtz, Mach and their pupils. The first block diagrams of this idea were presented by Mach (1906) and von Uexküll (1920/1928). These concepts led to the ‘reafference principles’ of von Holst and Mittelstaedt (1950) and Sperry (1950).
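
The cancellation idea is usually written today as a simple subtraction; the symbols below are a modern paraphrase, not notation from any of the historical sources reviewed:

$$\hat{m}_{\mathrm{world}}(t) = m_{\mathrm{retina}}(t) - e_{\mathrm{copy}}(t),$$

where m_retina is the image motion measured on the retina and e_copy the efference-copy (corollary-discharge) estimate of self-produced gaze movement; the world is perceived as stable whenever the two cancel.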

13.
The ability to adapt is a fundamental and vital characteristic of the motor system. The authors altered the visual environment and focused on the ability of humans to adapt to a rotated environment in a reaching task, in the absence of continuous visual information about hand location. Subjects could not see their arm but were provided with post-trial knowledge of performance depicting the hand path from movement onset to final position. Subjects failed to adapt under these conditions. The authors asked whether this lack of adaptation was related to the number of target directions presented in the task, and designed 2 protocols in which subjects were gradually exposed to a 22.5° visuomotor rotation. These protocols differed only in the number of target directions: 8 and 4 targets. The authors found that subjects had difficulty adapting without continuous visual feedback of their performance, regardless of the number of targets presented in the task. In the 4-target protocol, some of the subjects noticed the rotation and explicitly aimed in the correct direction. The results suggest that real-time feedback is required for motor adaptation to visual rotation during reaching movements.
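
A toy sketch of this kind of perturbation (not the authors' experimental software) rotates the recorded hand path about the start position before it is shown as post-trial feedback; gradual exposure then amounts to incrementing the rotation angle across blocks. The paths and angles below are arbitrary examples:

```python
# Toy visuomotor rotation: the displayed hand path is the real path
# rotated about the start position by a fixed angle (e.g. 22.5 deg).
import numpy as np

def rotate_path(path, angle_deg, origin=(0.0, 0.0)):
    """Rotate an (n, 2) hand path about `origin` by angle_deg (counterclockwise)."""
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    p = np.asarray(path, dtype=float) - origin
    return p @ rot.T + origin

# A straight 10-cm reach, displayed under the full 22.5-deg rotation.
straight = np.column_stack([np.zeros(5), np.linspace(0.0, 10.0, 5)])
print(rotate_path(straight, 22.5))
```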

14.
Skilled behavior requires a balance between previously successful behaviors and new behaviors appropriate to the present context. We describe a dynamic field model for understanding this balance in infant perseverative reaching. The model's predictions are tested with regard to the interaction of two aspects of the typical perseverative reaching task: the visual cue indicating the target and the memory demand created by the delay imposed between cueing and reaching. The memory demand was manipulated by imposing either a 0- or a 3-second delay, and the salience of the cue to reach was systematically varied. Infants made fewer perseverative errors at the 0-second delay than at the 3-second delay, and the difference depended on cue salience, such that a more salient visual cue was necessary to overcome a longer delay. These results have important implications for understanding both the basic perceptual-motor processes that produce reaching in infants and skilled flexible behavior in general.
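
Dynamic field models of this task are usually written as an activation field u(x, t) over reach direction x driven by the cue, a memory trace of earlier reaches, and lateral interactions; the generic form below follows the standard formulation rather than the specific parameterization used in the paper:

$$\tau\,\dot{u}(x,t) = -u(x,t) + h + S_{\mathrm{cue}}(x,t) + S_{\mathrm{mem}}(x,t) + \int w(x - x')\,f\big(u(x',t)\big)\,dx',$$

where h is the resting level, S_cue the salience-weighted cue input, S_mem the memory-trace input built up by previous reaches, w a local-excitation/lateral-inhibition kernel, and f a sigmoidal output function. Perseveration corresponds to S_mem outcompeting S_cue over the delay, which is why a longer delay demands a more salient cue.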

15.
Independent processing of visual information for perception and action is supported by studies about visual illusions, which showed that context information influences overt judgment but not reaching attempts. The objection was raised, however, that these two types of performance are not directly comparable, since they generally focus on different properties of the visual input. The goal of the present study was to quantify the influence of context information (in the form of a textured background) on the cognitive and sensorimotor processing of egocentric distance. We found that the subjective area comprising reachable objects (probed with a cognitive task) decreased, whereas the amplitude of reaching movement (probed with a sensorimotor task) increased in the presence of the textured background with both binocular and monocular viewing. Directional motor performance was not affected by the experimental conditions, but there was a tendency for the kinematic parameters to mimic trajectory variations. The similar but opposite effects of the textured background in the cognitive and sensorimotor tasks suggested that in both tasks the visual targets were perceived as closer when they were presented in a sparse environment. A common explanation for the opposite effects was confirmed by the percentage of background influence, which was highly correlated in the two tasks. We conclude that visual processing for perception and action cannot be dissociated from context influence, since it does not differ when the tasks entail the processing of similar spatial characteristics.

16.
In a series of experiments we investigated whether identification of a lateralized visual target would benefit from concurrent execution of a reaching movement on the same side of space. Participants were tested in a dual-task paradigm. In one task, they performed a speeded reach movement towards a lateralized target button. The reach was cued by an auditory stimulus, and performed out of the participant's sight. In the other task, participants identified one of two simultaneous visual stimuli presented to the left and right visual fields, close to movement target locations. If motor activity were effective in modulating perceptual processes via a visuo-attentional shift, identification performance should have improved when the visual stimulus appeared at the movement target location. In fact, identification was not affected by the side of reach. Such results suggest substantially independent selection processes in motor and visual domains.

17.
Adults who watch an ambiguous visual event consisting of two identical objects moving toward, through, and away from each other and hear a brief sound when the objects overlap report seeing visual bouncing. We conducted three experiments in which we used the habituation/test method to determine whether these illusory effects might emerge early in development. In Experiments 1 and 3 we tested 4-, 6- and 8-month-old infants’ discrimination between an ambiguous visual display presented together with a sound synchronized with the objects’ spatial coincidence and the identical visual display presented together with a sound no longer synchronized with coincidence. Consistent with illusory perception, the 6- and 8-month-old, but not the 4-month-old, infants responded to these events as different. In Experiment 2 infants were habituated to the ambiguous visual display together with a sound synchronized with the objects’ coincidence and tested with a physically bouncing object accompanied by the sound at the bounce. Consistent with illusory perception again, infants treated these two events as equivalent by not exhibiting response recovery. The developmental emergence of this intersensory illusion at 6 months of age is hypothesized to reflect developmental changes in object knowledge and attentional mechanisms.

18.
Following F. Zaal and R. J. Bootsma (1995), the authors studied whether the decelerative phase of a reaching movement could be modeled as a constant tau-dot strategy resulting in a soft collision with the object. Specifically, they investigated whether that strategy is sustained over different viewing conditions. Participants (N = 11) were required to reach for 15- and 50-mm objects at 2 different distances under 3 conditions in which visual availability of the immediate environment and of the reaching hand were varied. Tau-dot estimates and goodness-of-fit were highly similar across the 3 conditions. Only within-participant variability of tau-dot estimates was increased when environmental cues were removed. That finding suggests that the motor system uses a tau-dot strategy involving the intermodal (i.e., visual, proprioceptive, or both) specification of information to regulate the decelerative phase of reaching under restricted viewing conditions. The authors provide recommendations for improving the derivation of tau-dot estimates and stress the need for further research on how time-to-contact information is used in the regulation of the dynamics of actions such as reaching.
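
For reference, tau and the constant tau-dot strategy can be written as follows; signs are stated for a closing gap, and conventions differ slightly between papers:

$$\tau(x) = \frac{x(t)}{\dot{x}(t)}, \qquad \dot{\tau}(t) = k \ (\text{constant}),$$

where x is the hand-object gap. As usually summarized in this literature, holding |k| at 0.5 yields a constant deceleration that brings velocity to zero exactly at contact, values below 0.5 specify a soft collision, and values between 0.5 and 1 specify contact with residual velocity.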

19.
Professional visual searches, such as those conducted by airport security personnel, often demand highly accurate performance. As many factors can hinder accuracy, it is critical to understand the potential influences. Here, we examined how explicit decision-making criteria might affect multiple-target search performance. Non-professional searchers (college undergraduates) and professional searchers (airport security officers) classified trials as ‘safe’ or ‘dangerous’, in one of two conditions. Those in the ‘one = dangerous’ condition classified trials as dangerous if they found one or two targets, and those in the ‘one = safe’ condition only classified trials as dangerous if they found two targets. The data suggest an important role of context that may be mediated by experience: non-professional searchers were more likely to miss a second target in the ‘one = dangerous’ condition (i.e., when finding a second target did not change the classification), whereas professional searchers were more likely to miss a second target in the ‘one = safe’ condition.

20.
Brain event-related potentials are a useful tool for investigating visual processing and action planning. This technique requires extremely accurate synchronization of stimulus delivery with recordings. The precision of the onset time of visual stimulus delivery is a major challenge when attempting to use real, three-dimensional objects as stimuli. Here, we present an innovative device, the “box for interaction with objects” (BIO), that is designed to synchronize the presentation of objects with electroencephalographic (EEG) recordings. To reach the required resolution of stimulus-onset timing, the BIO system features an interface with reflective glass and light-emitting diodes (LEDs). When the LEDs inside the BIO are turned on, the object inside becomes visible, and a synchronizing pulse is sent to the recording systems. The BIO was tested in a motivational study that focused on visual and motor event-related potentials. EEG signals were recorded during the presentation of an emotion-laden object that could be grasped and brought close to the participant’s chest. The BIO successfully synchronized the appearance of a three-dimensional object with the EEG recordings, allowing visual and motor event-related potentials to be analyzed in the same experiment. The BIO device, through a high-quality psychophysiological approach, offers a new perspective for the study of the motivational factors that drive actions toward relevant stimuli.
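
As a generic downstream illustration of what the synchronizing pulse enables (not the BIO's own acquisition software), epochs can be cut around each marked object onset and averaged into event-related potentials; the channel count, window limits, and sampling rate below are arbitrary:

```python
# Generic epoching around trigger samples marking object onsets,
# followed by averaging across presentations (an ERP estimate).
import numpy as np

def epoch(eeg, trigger_samples, fs, tmin=-0.2, tmax=0.8):
    """eeg: (n_channels, n_samples); returns (n_events, n_channels, n_times)."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    cuts = [eeg[:, t - pre:t + post] for t in trigger_samples
            if t - pre >= 0 and t + post <= eeg.shape[1]]
    return np.stack(cuts)

fs = 1000
eeg = np.random.randn(32, 60 * fs)              # 60 s of 32-channel noise
triggers = np.arange(2 * fs, 58 * fs, 3 * fs)   # one object onset every 3 s
erp = epoch(eeg, triggers, fs).mean(axis=0)     # average across presentations
print(erp.shape)                                # (32, 1000): samples per channel
```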
