Similar Documents
1.
Temporal and spatial coupling of point of gaze (PG) and movements of the finger, elbow, and shoulder during a speeded aiming task were examined. Ten participants completed 40-cm aiming movements with the right arm, in a situation that allowed free movement of the eyes, head, arm, and trunk. On the majority of trials, a large initial saccade undershot the target slightly, and 1 or more smaller corrective saccades brought the eyes to the target position. The finger, elbow, and shoulder exhibited a similar pattern of undershooting their final positions, followed by small corrective movements. Eye movements usually preceded limb movements, and the eyes always arrived at the target well in advance of the finger. There was a clear temporal coupling between primary saccade completion and peak acceleration of the finger, elbow, and shoulder. The initiation of limb-segment movement usually occurred in a proximal-to-distal pattern. Increased variability in elbow and shoulder position as the movement progressed may have served to reduce variability in finger position. The spatial-temporal coupling of PG with the 3 limb segments was optimal for the pick-up of visual information about the position of the finger and the target late in the movement.
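The temporal coupling reported here links two kinematic events: the end of the primary saccade and the peak of finger acceleration. A minimal sketch of how that interval might be extracted from recorded traces is given below; the 1000-Hz sampling rate, the 30 deg/s velocity threshold, and the synthetic profiles are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def saccade_offset(eye_velocity, fs=1000, threshold=30.0):
    """Time (s) at which the primary saccade ends, i.e. eye velocity
    first falls back below `threshold` deg/s after exceeding it."""
    above = eye_velocity > threshold
    onset = np.argmax(above)                       # first supra-threshold sample
    if not above[onset]:
        return None                                # no saccade detected
    below_after = np.nonzero(~above[onset:])[0]
    return (onset + below_after[0]) / fs if below_after.size else None

def peak_acceleration_time(finger_acceleration, fs=1000):
    """Time (s) of peak finger acceleration."""
    return np.argmax(finger_acceleration) / fs

# Synthetic traces: a small, consistent interval between the two event times
# across trials would reflect the coupling described in the abstract.
t = np.arange(0, 1.0, 0.001)
eye_vel = 400 * np.exp(-((t - 0.15) / 0.02) ** 2)      # mock saccade velocity profile
finger_acc = 5 * np.exp(-((t - 0.20) / 0.05) ** 2)     # mock finger acceleration profile
lag = peak_acceleration_time(finger_acc) - saccade_offset(eye_vel)
print(f"peak finger acceleration {lag * 1000:.0f} ms after primary saccade offset")
```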

3.
The study examined the contribution of various sources of visual information utilised in the control of discrete aiming movements. Subjects produced movements, 15.24 cm in amplitude, to a 1.27 cm target in a movement time of 330 ms. Responses were carried out under five vision-manipulation conditions, which allowed the subject complete vision, no vision, vision of only the target, vision of only the stylus, or vision of both the stylus and the target. Response accuracy scores indicated that a decrement in performance occurred when movements were completed in the absence of visual information or when only the target was visible during the response. The stylus-only and target-plus-stylus conditions led to response accuracy comparable to that of movements produced with complete vision. These results suggest that the critical visual information for aiming accuracy is vision of the stylus. These findings are consistent with a control model based on a visual representation of the discrepancy between the position of the hand and the location of the target.

4.
It has been shown that, even for very fast and short-duration movements, seeing one's hand in peripheral vision, or a cursor representing it on a video screen, resulted in better direction accuracy of a manual aiming movement than when the task was performed while only the target was visible. However, it is still unclear whether this was caused by on-line or off-line processes. Through a novel series of analyses, the goal of the present study was to shed some light on this issue. We replicated previous results showing that the visual information concerning one's movement, which is available between 40 degrees and 25 degrees of visual angle, is not useful for ensuring direction accuracy of video-aiming movements, whereas visual afferent information available between 40 degrees and 15 degrees of visual angle improved direction accuracy over a target-only condition. In addition, endpoint variability on the direction component of the task was scaled to direction variability observed at peak movement velocity. Similar observations were made in a second experiment when the position of the cursor was translated to the left or to the right as soon as it left the starting base. Further, the data showed no evidence of on-line correction of the direction dimension of the task for the translated trials. Taken together, the results of the two experiments strongly suggest that, for fast video-aiming movements, the information concerning one's movement that is available in peripheral vision is used off-line.

5.
Pointing accuracy with an unseen hand to a just-extinguished visual target was examined in various eye movement conditions. When subjects caught the target by a saccade, they showed about the same degree of accuracy as that shown in pointing to a visible target. On the other hand, when subjects tracked a moving target by a pursuit eye movement, they systematically undershot when subsequently pointing to the target. The differential effect of the two types of eye movements on pointing tasks was examined on both the preferred and non-preferred hands, and it was found that the effect of eye movements was more prominent on the preferred hand than on the non-preferred hand. The results are discussed in relation to outflow eye position information.

6.
The present study attempted to determine whether, during short-duration movements, visual feedback can be processed in order to make adjustments to changes in the environment. The effect that varying the importance of monitoring target position has on the relative importance of vision of the hand and vision of the target (Carlton 1981a; Whiting and Cockerill 1974) was also examined. Subjects performed short- (150 ms) and longer-duration (330 ms) aimed hand movements under four visual feedback conditions (lights-on/lights-off by target-on/target-off) to stationary and moving targets. In the lights-off and target-off conditions, the lights and target, respectively, were extinguished 50 ms after movement initiation. In all moving-target conditions, the target started to move as the movement was initiated. Subjects were able to process visual information within 165 ms, as movement endpoints were biased in the direction of target motion for movements of this duration. Removing visual feedback 50 ms after movement initiation did not alter this finding. Subjects performed equally well with target and lights on or off, independent of whether the target remained stationary or moved. Presumably, during the first 50 ms of the movement subjects received sufficient visual information to aid in movement control.

7.
In aiming movements, the limb position drifts away from the defined target after several trials without visual feedback, a phenomenon known as proprioceptive drift (PD). No studies have investigated the association between the posterior parietal cortex (PPC) and PD in aiming movements. Therefore, cathodal and sham transcranial direct current stimulation (tDCS) were applied to the left PPC while movements were performed with or without vision. Cathodal tDCS applied without vision produced a higher level of PD and higher rates of drift accumulation, while decreasing peak velocity and leaving the number of error corrections and the movement amplitude unchanged. Proprioceptive information appears to provide an effective reference for movement, but with PPC stimulation it has a negative impact on position control.

8.
Past research has revealed that central vision is more important than peripheral vision in controlling the amplitude of target-directed aiming movements. However, the extent to which central vision contributes to movement planning versus online control is unclear. Since participants usually fixate the target very early in the limb trajectory, the limb enters the central visual field during the late stages of movement. Hence, there may be insufficient time for central vision to be processed online to correct errors during movement execution. Instead, information from central vision may be processed offline and utilised as a form of knowledge of results, enhancing the programming of subsequent trials. In the present research, variability in limb trajectories was analysed to determine the extent to which peripheral and central vision is used to detect and correct errors during movement execution. Participants performed manual aiming movements of 450 ms under four different visual conditions: full vision, peripheral vision, central vision, no vision. The results revealed that participants utilised visual information from both the central and peripheral visual fields to adjust limb trajectories during movement execution. However, visual information from the central visual field was used more effectively to correct errors online compared to visual information from the peripheral visual field.

9.
This study was designed to determine if movement planning strategies incorporating the use of visual feedback during manual aiming are specific to individual movements. Advance information about target location and visual context was manipulated using precues. Participants exhibited a shorter reaction time and a longer movement time when they were certain of the target location and that vision would be available. The longer movement time was associated with greater time after peak velocity. Under conditions of uncertainty, participants prepared for the worst-case scenario. That is, they spent more time organizing their movements and produced trajectories that would be expected from greater open-loop control. Our results are consistent with hierarchical movement planning in which knowledge of the movement goal is an essential ingredient of visual feedback utilization.

10.
The question addressed in the present study was whether subjects (N = 24) can use visual information about their hand, in the first half of an aiming movement, to ensure optimal directional accuracy of their aiming movements. Four groups of subjects practiced an aiming task in either a complete vision condition, a no-vision condition, or a condition in which their hand was visible for the first half [initial vision condition (IV)] or the second half of the movement [final vision condition (FV)]. Following 240 trials of acquisition, all subjects completed a transfer test that consisted of 40 trials performed in a no-vision condition. The results indicated that seeing the hand early in the movement did not help subjects optimize either directional or amplitude accuracy. On the other hand, when subjects viewed their hand closer to the target, the resulting movements were as accurate as those performed under a complete vision condition. In transfer, withdrawing vision did not cause any increase in aiming error for the IV or the no-vision conditions. These results replicated those of Carlton (1981) and extended those of Bard and colleagues (Bard, Hay, & Fleury, 1985) in that they indicated that the kinetic visual channel hypothesized by Paillard (1980; Paillard & Amblard, 1985) appeared to be inoperative beyond 40° of visual angle.

11.
Previous paradigms have used reaching movements to study the coupling of eye-hand kinematics. In the present study, we investigated eye-hand kinematics as curved trajectories were drawn at normal speeds. Eye and hand movements were tracked as a monkey traced ellipses and circles with the hand in free space while viewing the hand's position on a computer monitor. The results demonstrate that the movement of the hand was smooth and obeyed the 2/3 power law. Eye position, however, was restricted to 2-3 clusters along the hand's trajectory and remained fixated approximately 80% of the time in one of these clusters. The eye remained stationary as the hand moved away from the fixation for up to 200 ms and then saccaded ahead of the hand position to the next fixation along the trajectory. The movement from one fixation cluster to another consistently occurred just after the tangential hand velocity had reached a local minimum, but before the next segment of the hand's trajectory began. The next fixation point was close to an area of high curvature along the hand's trajectory even though the hand had not reached that point along the path. A visuo-motor illusion of hand movement demonstrated that the eye movement was influenced by hand movement and not simply by visual input. During the task, neural activity of pre-motor cortex (area F4) was recorded using extracellular electrodes and used to construct a population vector of the hand's trajectory. The results suggest that saccade onset is correlated in time with maximum curvature in the population vector trajectory for the hand movement. We hypothesize that eye and arm movements may have common, or shared, information in forming their motor plans.
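As an aside on the two-thirds power law cited above: it states that angular velocity scales with path curvature to the power 2/3, equivalently that tangential velocity scales with curvature to the power -1/3. The sketch below is only an illustration of the relationship, not the study's analysis; an ellipse traced with harmonic motion satisfies the law exactly, so the fitted exponent should come out near -1/3.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
x, y = 10.0 * np.cos(t), 5.0 * np.sin(t)            # elliptical hand path

dx, dy = np.gradient(x, t), np.gradient(y, t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)

v = np.hypot(dx, dy)                                # tangential velocity
curvature = np.abs(dx * ddy - dy * ddx) / v**3      # path curvature

# Fit log v against log curvature; the slope estimates the power-law exponent.
slope, _ = np.polyfit(np.log(curvature), np.log(v), 1)
print(f"fitted exponent: {slope:.3f} (two-thirds power law predicts -1/3)")
```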

12.
Two experiments were performed to evaluate the influence of movement frequency and predictability on visual tracking of the actively and the passively moved hand. Four measures of tracking precision were employed: (a) saccades per cycle, (b) percentage of pursuit movement, (c) the ratio of eye amplitude to arm amplitude, and (d) asynchrony of eye and hand at reversal. Active and passive limb movements were tracked with nearly identical accuracy and were always vastly superior to tracking an external visual target undergoing comparable motion. Proprioceptive information appears to provide velocity and position information about target location. Its presence permits the development of central eye-movement programmes that move the eyes in patterns that approximate, but do not exactly match, temporally or spatially, the motion of the hand.

13.
Two experiments examined on-line processing during the execution of reciprocal aiming movements. In Experiment 1, participants used a stylus to make movements between two targets of equal size. Three vision conditions were used: full vision, vision during flight, and vision only on contact with the target. Participants had significantly longer movement times and spent more time in contact with the targets when vision was available only on contact with the target. Additionally, the proportion of time to peak velocity revealed that movement trajectories became more symmetric when vision was not available during flight. The data indicate that participants used vision not only to 'home in' on the current target, but also to prepare subsequent movements. In Experiment 2, liquid crystal goggles provided a single 40-ms visual sample during each 500-ms duty cycle. Of interest was how participants timed their reciprocal aiming to take advantage of these brief visual samples. Although across participants no particular portion of the movement trajectory was favored, individual performers did time their movements consistently with the onset and offset of vision. Once again, performance and kinematic data indicated that movement segments were not independent of each other.
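The "proportion of time to peak velocity" used above is a simple symmetry index of the velocity profile: the time at which velocity peaks divided by total movement time, with values near 0.5 indicating a symmetric profile. A small illustrative sketch follows; the 1-kHz sampling rate and the minimum-jerk example trace are assumptions, not data from the experiment.

```python
import numpy as np

def proportion_time_to_peak_velocity(displacement, fs=1000):
    """Time of peak velocity expressed as a fraction of total movement time."""
    velocity = np.gradient(displacement) * fs       # numerical differentiation
    movement_time = (len(displacement) - 1) / fs
    return (np.argmax(np.abs(velocity)) / fs) / movement_time

# A minimum-jerk-like 30-cm reach has a symmetric velocity profile (~0.5).
t = np.linspace(0, 1, 1000)
reach = 30 * (10 * t**3 - 15 * t**4 + 6 * t**5)
print(f"proportion of time to peak velocity: {proportion_time_to_peak_velocity(reach):.2f}")
```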

14.
Visual regulation of upper limb movements occurs throughout the trajectory and is not confined to discrete control in the target area. Early control is based on the dynamic relationship between the limb, the target, and the environment. Despite robust outcome differences between protocols involving visual manipulations, it remains difficult to identify the kinematic events that characterize these differences. In this study, participants performed manual aiming movements with and without vision. We compared several traditional approaches to movement analysis with two new methods of quantifying online limb regulation. As expected, participants undershot the target and their movement endpoints were more variable when vision was not available. Although traditional measures such as reaction time, time after peak velocity, and the presence of discontinuities in acceleration were sensitive to the visual manipulation, measures quantifying the trial-to-trial spatial variability throughout the trajectory were the most effective in isolating the time course of online regulation.
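One way to compute the trial-to-trial spatial variability measure described above is to resample every trial's displacement trace onto a common normalized time base and take the between-trial standard deviation at each point; where the vision and no-vision profiles diverge indicates when online regulation takes hold. The sketch below is a plausible implementation under those assumptions, not the authors' code.

```python
import numpy as np

def spatial_variability(trials, n_points=101):
    """Between-trial SD of displacement at each % of movement time.
    `trials` is a list of 1-D displacement arrays, possibly of unequal length."""
    resampled = np.stack([
        np.interp(np.linspace(0, 1, n_points),
                  np.linspace(0, 1, len(trial)), trial)
        for trial in trials
    ])
    return resampled.std(axis=0, ddof=1)

# Synthetic trials with accumulating noise mimic trajectories that are not
# corrected online (e.g. a no-vision condition).
rng = np.random.default_rng(0)
trials = [np.linspace(0, 30, 150) + np.cumsum(rng.normal(0, 0.05, 150))
          for _ in range(20)]
sd_profile = spatial_variability(trials)
print("SD at 25/50/75/100% of movement time:", np.round(sd_profile[[25, 50, 75, 100]], 2))
```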

15.
Orienting to a target by looking and pointing is examined for parallels between the control of the two systems and for interactions due to movement of the eyes and limb to the same target. Parallels appear early in orienting and may be due to common processing of spatial information for the ocular and manual systems. The eyes and limb both have shorter response latency to central visual and peripheral auditory targets. Each movement also has shorter latency and duration when the target presentation is short enough (200 msec) that no analysis of feedback of the target position is possible during the movement. Interactions appear at many stages of information processing for movement. Latency of ocular movement is much longer when the subject also points, and the eye and limb movement latencies are highly correlated for orienting to auditory targets. Final positions of the eyes and limb are significantly correlated only when target duration is short (200 msec). This illustrates that sensory information obtained before the movement begins is an important, but not the only, source of input about target position. Additional information that assists orienting may be passed from one system to another, since visual information gained by looking aided pointing to lights, and proprioceptive information from the pointing hand seemed to assist the eyes in looking to sounds. Thus the production of this simple set of movements may be partly described by a cascade-type process of parallel analysis of spatial information for eye and hand control, but is also, later in the movement, assisted by cross-system interaction.

16.
A limb's initial position is often biased to the right of the midline during activities of daily living. Given this initial limb position, visual cues from the limb become available to the ipsilateral eye before the contralateral eye. The current study investigated online control of the dominant limb as a function of having visual cues available to the ipsilateral or contralateral eye, in relation to the initial start position of the limb. Participants began each trial with their right limb on a home position to the left or right of the midline. After movement onset, a brief visual sample was provided to the ipsilateral or contralateral eye. On one third of the trials, an imperceptible 3 cm target jump was introduced. If visual information from the eye ipsilateral to the limb is preferentially used to control ongoing movements of the dominant limb, corrections for the target jump should be observed when movements began from the right of the body's midline and vision was available to the ipsilateral eye. As expected, limb trajectory corrections for the target jump were only observed when participants started from the right home position and visual information was provided to the ipsilateral eye. We propose that such visuomotor asymmetry specialization emerges via neurophysiological developments, which may arise from naturalistic and probabilistic limb trajectory asymmetries.
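A hedged sketch of how the correction for the imperceptible 3-cm target jump could be quantified follows: compare movement endpoints on jump trials with those on no-jump trials and express the mean shift as a proportion of the jump. The variable names and synthetic endpoint data are hypothetical, not taken from the study.

```python
import numpy as np

def proportion_corrected(endpoints_jump, endpoints_nojump, jump_size=3.0):
    """Mean endpoint shift on jump trials as a proportion of the jump size.
    Values near 1 indicate full online correction; near 0, none."""
    return (np.mean(endpoints_jump) - np.mean(endpoints_nojump)) / jump_size

# Hypothetical endpoints (cm, lateral axis) for one home-position-by-eye condition.
rng = np.random.default_rng(1)
nojump = rng.normal(0.0, 0.4, 40)     # trials with the target in its original location
jump = rng.normal(2.6, 0.5, 20)       # trials in which the target jumped 3 cm
print(f"proportion of the jump corrected: {proportion_corrected(jump, nojump):.2f}")
```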

17.
Two experiments were conducted in which participants (N = 12, Experiment 1; N = 12, Experiment 2) performed rapid aiming movements with and without visual feedback under blocked, random, and alternating feedback schedules. Prior knowledge of whether vision would be available had a significant impact on the strategies that participants adopted. When they knew that vision would be available, less time was spent preparing movements before movement initiation. Participants also reached peak deceleration sooner but spent more time after peak deceleration adjusting limb trajectories. Consistent with those findings, analysis of spatial variability at different points in the trajectory indicated that variability increased up to peak deceleration but then decreased from peak deceleration to the end of the movement.

18.
The goal of this study was to determine whether a sensorimotor or a cognitive encoding is used to encode a target position and store it in iconic memory. The methodology consisted of disrupting a manual aiming movement to a memorized visual target by displacing the visual field containing the target. The nature of the encoding was inferred from the nature and size of the errors relative to a control. The target was presented either centrally or in the right periphery. Participants moved their hand from the left to the right of fixation. Black and white vertical stripes covered the whole visual field. The visual field was either stationary throughout the trial or was displaced to the right or left at the extinction of the target or at the start of the hand movement. In the latter case, the displacement of the visual field obviously could only be taken into account by the participant during the gesture. In this condition, our hypothesis was that the aiming error would follow the direction of the visual field displacement. Results showed three major effects: (1) vision of the hand during the gesture improved final accuracy; (2) visual field displacement produced an underestimation of the target distance only when the hand was not visible during the gesture, and the error was always in the direction of the displacement; and (3) the effect of the stationary structured visual field on aiming precision when the hand was not visible depended on the distance to the target. These results suggest that a stationary structured visual field is used to support the memory of the target position. The structured visual field is more critical when the hand is not visible and when the target appears in peripheral rather than central vision. This suggests that aiming depends on memory of the relative peripheral position of the target (an allocentric reference). However, in the present task, cognitive encoding does not maintain the "position" of the target in memory without reference to the environment. The systematic effect of the visual field displacement on manual aiming suggests that the role of environmental reference frames in memory for position is not well understood. Some studies, in particular those of Giesbrecht and Dixon (1999) and Glover and Dixon (2001), suggested differing roles of the environment in the retention of the target position and the control of aiming movements toward the target. The present observations contribute to understanding the mechanisms involved in locating and grasping objects with the hand.

19.
This experiment tested whether the perceived stability of the environment is altered when there is a combination of eye and visually open-loop hand movements toward a target displaced during the eye movements, i.e., during saccadic suppression. Visual-target eccentricity randomly decreased or increased during eye movements, and subjects reported whether they perceived a target displacement or not and, if so, the direction of the displacement. Three experimental conditions, involving different combinations of eye and arm movements, were tested: (a) eye movements only; (b) simultaneous eye and rapid arm movements toward the target; and (c) simultaneous eye and arm movements with a restraint blocking the arm as soon as the hand left the starting position. The perceptual threshold for target displacements resulting in an increased target eccentricity was greater when subjects combined eye and arm movements toward the target object, especially in the no-restraint condition. Subjects corrected most of their arm trajectory toward the displaced target despite the short movement times (average MT = 189 ms). After the movements, the null error feedback of the hand's final position presumably overlapped the retino-oculomotor error signal and could be responsible for the deficient perception of target displacements. Thus, subjects interpreted the terminal hand positions as being within the range of the endpoint variability associated with the production of rapid arm movements rather than as a change in the environment. These results suggest that a natural strategy adopted for processing spatial information, especially in a competing situation, could favour a constancy tendency, avoiding systematic perception of a change in the environment for any noise or variability at the central or peripheral levels.

20.
Previous research has demonstrated that movement times to the first target in sequential aiming movements are influenced by the properties of subsequent segments. Based on this finding, it has been proposed that individual segments are not controlled independently. The purpose of the current study was to investigate the role of visual feedback in the interaction between movement segments. In contrast to past research in which participants were instructed to minimize movement time, participants were set a criterion movement time and the resulting errors and limb trajectory kinematics were examined under vision and no vision conditions. Similar to single target movements, the results indicated that vision was used within each movement segment to correct errors in the limb trajectory. In mediating the transition between segments, visual feedback from the first movement segment was used to adjust the parameters of the second segment. Hence, increases in variability that occurred from the first to the second target in the no vision condition were curtailed when visual feedback was available. These results are discussed along the lines of the movement constraint and movement integration hypotheses.
