Similar Literature
1.
Subjects produced speeded and unspeeded hand movements to a target location after either saccadic or pursuit eye movements to the target. Hand movements began either aligned with the initial position of gaze or from some other location. Subjects generally underestimated the extent of the pursuit eye movements relative to estimates made after saccades. With speeded hand movements, however, the underestimation was reduced considerably if the hand movements began aligned with a location other than the initial position of gaze. The results reveal details of the mechanisms underlying eye-hand coordination and show that important differences exist in the information used for localization for slow and rapid limb movements.

2.
Eye-hand coordination is required to accurately perform daily activities that involve reaching, grasping and manipulating objects. Studies using aiming, grasping or sequencing tasks have shown a stereotypical temporal coupling pattern where the eyes are directed to the object in advance of the hand movement, which may facilitate the planning and execution required for reaching. While the temporal coordination between the ocular and manual systems has been extensively investigated in adults, relatively little is known about the typical development of eye-hand coordination. Therefore, the current study addressed an important knowledge gap by characterizing the profile of eye-hand coupling in typically developing school-age children (n = 57) and in a cohort of adults (n = 30). Eye and hand movements were recorded concurrently during the performance of a bead threading task which consists of four distinct movements: reach to bead, grasp, reach to needle, and thread. Results showed a moderate to high correlation between eye and hand latencies in children and adults, indicating that both movements were planned in parallel. Eye and reach latencies, latency differences, and dwell time during grasping and threading showed significant age-related differences, suggesting eye-hand coupling becomes more efficient in adolescence. Furthermore, visual acuity, stereoacuity and accommodative facility were also found to be associated with the efficiency of eye-hand coordination in children. Results from this study can serve as reference values when examining eye and hand movement during the performance of fine motor skills in children with neurodevelopmental disorders.

3.
Coordinated control of eye and hand movements in dynamic reaching
In the present study, we integrated two recent, at first sight contradictory findings regarding the question of whether saccadic eye movements can be generated to a newly presented target during an ongoing hand movement. Saccades were measured during so-called adaptive and sustained pointing conditions. In the adaptive pointing condition, subjects had to direct both their gaze and arm movements to a displaced target location. The results showed that the eyes could fixate the new target during pointing. In addition, a temporal coupling of these corrective saccades was found with changes in arm movement trajectories when reaching to the new target. In the sustained pointing condition, however, the same subjects had to point to the initial target, while trying to deviate their gaze to a new target that appeared during pointing. It was found that the eyes could not fixate the new target before the hand reached the initial target location. Together, the results indicate that ocular gaze is always forced to follow the target intended by a manual arm movement. A neural mechanism is proposed that couples ocular gaze to the target of an arm movement. Specifically, the mechanism includes a reach neuron layer alongside the well-known saccadic layer in the primate superior colliculus. Such a tight, sub-cortical coupling of ocular gaze to the target of a reaching movement can explain the contrasting behavior of the eyes depending on whether the eye and hand share the same target position or attempt to move to different locations.

4.
The authors investigated whether movement-planning and feedback-processing abilities associated with the 2 hand-hemisphere systems mediate illusion-induced biases in manual aiming and saccadic eye movements. Although participants' (N = 23) eye movements were biased in the direction expected on the basis of a typical Müller-Lyer configuration, hand movements were unaffected. Most interestingly, both left- and right-handers' eye fixation onset and time to hand peak velocity were earlier when they aimed with the left hand than they were when they aimed with the right hand, regardless of the availability of vision for online movement control. They thus adapted their eye-hand coordination pattern to accommodate functional asymmetries. The authors suggest that individuals apply different movement strategies according to the abilities of the hand and the hemisphere system used to produce the same outcome.

5.
This study investigated how frequency demand and motion feedback influenced composite ocular movements and eye-hand synergy during manual tracking. Fourteen volunteers conducted slow and fast force-tracking in which targets were displayed in either line-mode or wave-mode to guide manual tracking with target movement of direct position or velocity nature. The results showed that eye-hand synergy was a selective response of spatiotemporal coupling conditional on target rate and feedback mode. Slow and line-mode tracking exhibited stronger eye-hand coupling than fast and wave-mode tracking. Both eye movement and manual action led the target signal during fast-tracking, while the latency of ocular navigation during slow-tracking depended on the feedback mode. Slow-tracking resulted in more saccadic responses and larger pursuit gains than fast-tracking. Line-mode tracking led to larger pursuit gains but fewer and shorter gaze fixations than wave-mode tracking. During slow-tracking, incidences of saccade and gaze fixation fluctuated across a target cycle, peaking at velocity maximum and the maximal curvature of target displacement, respectively. For line-mode tracking, the incidence of smooth pursuit was phase-dependent, peaking at velocity maximum as well. Manual behavior of slow or line-mode tracking was better predicted by composite eye movements than that of fast or wave-mode tracking. In conclusion, manual tracking relied on versatile visual strategies to perceive target movements of different kinematic properties, which suggested a flexible coordinative control for the ocular and manual sensorimotor systems.

6.
The relationship between attention and the programming of motor responses was investigated, using a paradigm in which the onsets of targets for movements were preceded by peripheral attentional cues. Simple (button release) and reaching manual responses were compared under conditions in which the subjects either made saccades toward the target location or refrained from making eye movements. The timing of the movement onset was used as the dependent measure for both simple and reaching manual responses. Eye movement latencies were also measured. A follow-up experiment measured the effect of the same peripheral cuing procedure on purely visual processes, using signal detection measures of visual sensitivity and response bias. The results of the first experiment showed that reaction time (RT) increased with the distance between the cued and the target locations. Stronger distance effects were observed when goal-directed responses were required, which suggests enhanced attentional localization of target positions under these conditions. The requirement to generate an eye movement response was found to delay simple manual RTs. However, mean reaching RTs were unaffected by the eye movement condition. Distance gradients on eye movement latencies were relatively shallow, as compared with those on goal-directed manual responses. The second experiment showed that the peripheral cue had only a very small effect on visual detection sensitivity in the absence of directed motor responses. It is concluded that cue-target distance effects with peripheral cues are modulated by the motor-programming requirements of the task. The effect of the peripheral cue on eye movement latencies was qualitatively different from that observed on manual RTs, indicating the existence of separate neural representations underlying both response types. 
At the same time, the interactions between response modalities are consistent with a supramodal representation of attentional space, within which different motor programs may interact.
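The "signal detection measures of visual sensitivity and response bias" referred to above are conventionally computed as d′ (sensitivity) and the criterion c (bias). A minimal sketch of that computation, using hypothetical hit and false-alarm rates rather than the study's data:

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Compute signal-detection sensitivity (d') and response bias (c)
    from hit and false-alarm rates of a yes/no detection task."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)            # sensitivity
    criterion = -(z(hit_rate) + z(fa_rate)) / 2   # response bias
    return d_prime, criterion

# Illustrative rates only (not taken from the experiment)
d, c = sdt_measures(0.85, 0.30)
print(round(d, 2), round(c, 2))  # 1.56 -0.26
```

A cue that raises sensitivity at the cued location would increase d′ there; a pure shift in willingness to respond would move only c.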

8.
Two experiments are reported that address the issue of coordination of the eyes, head, and hand during reaching and pointing. Movement initiation of the eyes, head, and hand was monitored in order to make inferences about the type of movement control used. In the first experiment, when subjects pointed with the finger to predictable or unpredictable locations marked by the appearance of a light, no differences between head and eye movement initiation were found. In the second experiment, when subjects pointed very fast with the finger, the head started to move before the eyes did. Conversely, when subjects pointed accurately, and thus more slowly, with the finger, the eyes started to move first, followed by the head and finger. When subjects were instructed to point to the same visual target only with their eyes and head, both fast and accurately, however, eye movement always started before head movement, regardless of speed-accuracy instructions. These results indicate that the behavior of the eye and head system can be altered by introducing arm movements. This, along with the variable movement initiation patterns, contradicts the idea that the eye, head, and hand system is controlled by a single motor program. The time of movement termination was also monitored, and across both experiments, the eyes always reached the target first, followed by the finger, and then the head. This finding suggests that movement termination patterns may be a fundamental control variable.

9.
The influence of abrupt onsets on attentionally demanding visual search (i.e., search for a letter target among heterogeneous letter distractors), as indexed by performance and eye movement measures, was investigated in a series of studies. In Experiments 1 and 2 we examined whether onsets would capture the eyes when the appearance of an onset predicted neither the location nor the identity of the target. Subjects did direct their eyes to the abrupt onsets on a disproportionate number of trials in these studies. Interestingly, however, onset capture was modulated by subjects' scan strategies. Furthermore, onsets captured the eyes less frequently than would have been predicted by paradigms showing attentional capture when no eye movements are required. Experiment 3 examined the question of whether onsets would capture the eyes in a situation in which they never served as the target. Capture was observed in this study. However, the magnitude of capture effects was substantially diminished as compared to previous behavioural studies in which the onset had a chance probability of serving as the target. These data are discussed in terms of the influence of top-down constraints on stimulus-driven attentional and oculomotor capture by abrupt onset stimuli.

10.
Previous paradigms have used reaching movements to study coupling of eye-hand kinematics. In the present study, we investigated eye-hand kinematics as curved trajectories were drawn at normal speeds. Eye and hand movements were tracked as a monkey traced ellipses and circles with the hand in free space while viewing the hand's position on a computer monitor. The results demonstrate that the movement of the hand was smooth and obeyed the 2/3 power law. Eye position, however, was restricted to 2-3 clusters along the hand's trajectory and was fixed approximately 80% of the time in one of these clusters. The eye remained stationary as the hand moved away from the fixation for up to 200 ms and saccaded ahead of the hand position to the next fixation along the trajectory. The movement from one fixation cluster to another consistently occurred just after the tangential hand velocity had reached a local minimum, but before the next segment of the hand's trajectory began. The next fixation point was close to an area of high curvature along the hand's trajectory even though the hand had not reached that point along the path. A visuo-motor illusion of hand movement demonstrated that the eye movement was influenced by hand movement and not simply by visual input. During the task, neural activity of premotor cortex (area F4) was recorded using extracellular electrodes and used to construct a population vector of the hand's trajectory. The results suggest that the saccade onset is correlated in time with maximum curvature in the population vector trajectory for the hand movement. We hypothesize that eye and arm movements may have common, or shared, information in forming their motor plans.
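The 2/3 power law mentioned above relates instantaneous angular velocity A to path curvature C as A = K·C^(2/3). For an ellipse traced with a constant-rate angular parameterization the law holds exactly, which a short numerical check can illustrate (a generic sketch with arbitrary ellipse axes, not the study's analysis):

```python
import numpy as np

# Ellipse traced at constant parameter rate (axes a, b chosen arbitrarily)
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
a, b = 2.0, 1.0
vx, vy = -a * np.sin(t), b * np.cos(t)    # velocity components
ax, ay = -a * np.cos(t), -b * np.sin(t)   # acceleration components
speed = np.hypot(vx, vy)
curvature = np.abs(vx * ay - vy * ax) / speed**3
angular_velocity = speed * curvature

# Regress log(A) on log(C): the slope recovers the 2/3 exponent
slope, _ = np.polyfit(np.log(curvature), np.log(angular_velocity), 1)
print(round(slope, 3))  # 0.667
```

Hand trajectories that obey the law show this characteristic slowing in high-curvature segments, which is where the monkey's next fixation tended to land.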

11.
The accuracy of reaching movements improves when active gaze can be used to fixate on targets. The advantage of free gaze has been attributed to the use of ocular proprioception or efference signals for online control. The time course of this process, however, is not established, and it is unclear how far in advance gaze can move and still be used to parameterize subsequent movements. In this experiment, the authors considered the advantage of prescanning targets for both pointing and reaching movements. The authors manipulated the visual information and examined the extent to which prescanning of targets could compensate for a reduction in online visual feedback. In comparison with a conventional reaching/pointing condition, the error in pointing was reduced, the eye-hand lead decreased, and both the hand-closure time and the size of the maximum grip aperture in reaching were modulated when prescanning was allowed. These results indicate that briefly prescanning multiple targets just prior to the movement allows the refinement of subsequent hand movements that yields an improvement in accuracy. This study therefore provides additional evidence that the coordinate information arising from efference or ocular-proprioceptive signals can, for a limited period, be buffered and later used to generate a sequence of movements.

13.
This study proposed and verified a new hypothesis on the relationship between gaze direction and visual attention: attentional bias by default gaze direction based on eye-head coordination. We conducted a target identification task in which visual stimuli appeared briefly to the left and right of a fixation cross. In Experiment 1, the direction of the participant's head (aligned with the body) was manipulated to the left, front, or right relative to a central fixation point. In Experiment 2, head direction was manipulated to the left, front, or right relative to the body direction. This manipulation was based on results showing that bias of eye position distribution was highly correlated with head direction. In both experiments, accuracy was greater when the target appeared at a position where the eyes would potentially be directed. Consequently, eye-head coordination influences visual attention. That is, attention can be automatically biased toward the location where the eyes tend to be directed.

14.
Typing span and coordination of word viewing times with word typing times in copytyping were examined. In Experiment 1, typists copied passages of text while their eye movements were measured. Viewing location was determined during each eye fixation and was used to control the amount of usable visual information. Viewing constraints decreased interword saccade size when fewer than 7 character spaces of text were visible to the right of fixation and increased interkeypress times when fewer than 3 character spaces of text were visible. The eye-hand span amounted to 2.8 character spaces. Experiment 2 revealed increases in word typing times and word viewing times as biomechanical typing difficulty increased and word frequency decreased. These findings are consistent with a model of eye-hand coordination that postulates that eye-hand coordination involves central and peripheral processes.

15.
The aim of this study was to provide a detailed account of the spatial and temporal disruptions to eye-hand coordination when using a prosthetic hand during a sequential fine motor skill. Twenty-one able-bodied participants performed 15 trials of the picking up coins task derived from the Southampton Hand Assessment Procedure with their anatomic hand and with a prosthesis simulator while wearing eye-tracking equipment. Gaze behavior results revealed that when using the prosthesis, performance detriments were accompanied by significantly greater hand-focused gaze and a significantly longer time to disengage gaze from manipulations to plan upcoming movements. The study findings highlight key metrics that distinguish disruptions to eye-hand coordination that may have implications for the training of prosthesis use.

16.
When observers localize the vanishing point of a moving target, localizations are reliably displaced beyond the final position, in the direction the stimulus was travelling just prior to its offset. We examined modulations of this phenomenon through eye movements and action control over the vanishing point. In Experiment 1, with pursuit eye movements, localization errors were in the movement direction, but less pronounced when the vanishing point was self-determined by a key press of the observer. In contrast, in Experiment 2, with a fixation instruction, localization errors were opposite to the movement direction and independent of action control. This pattern of results points to the role of eye movements, which were recorded in Experiment 3. That experiment showed that the eyes lagged behind the target at the point in time when it vanished from the screen, but that the eyes continued to drift on the target's virtual trajectory. It is suggested that the perceived target position resulted from the spatial lag of the eyes and of the persisting retinal image during the drift.

17.
In three experiments, we examined attentional and oculomotor capture by single and multiple abrupt onsets in a singleton search paradigm. Subjects were instructed to move their eyes as quickly as possible to a color singleton target and to identify a small letter located inside of it. In Experiment 1, task-irrelevant sudden onsets appeared simultaneously on half the trials with the presentation of the color singleton target. Response times (RTs) were longer when onsets appeared in the display regardless of the number of onsets. Eye-scan strategies were also disrupted by the appearance of the onset distractors, although the proportion of trials on which the eyes were directed to the onsets was the same regardless of the number of onsets. In Experiment 2, we manipulated the time of presentation of two task-irrelevant onsets in order to further examine whether multiple onsets would be attended and fixated prior to attending a color singleton target. Again, subjects made a saccade to a task-irrelevant onset on a substantial proportion of trials prior to fixating the target. However, saccades to the second onset were rare. Experiment 3 served as a replication of Experiment 1 but without the requirement for subjects to move their eyes to detect and identify the singleton target. The RT results were consistent with those in Experiment 1; dual onsets had no larger an effect on response speed than single onset distractors. These data are discussed in terms of the interaction between top-down and bottom-up control of attention and the eyes.

19.
Eye-hand coordination was investigated during a task of finger pointing toward visual targets viewed through wedge prisms. Hand and eye latencies and movement times were identical during the control condition and at the end of prism exposure. A temporal reorganization of eye and hand movements was observed during the course of adaptation. During the earlier stage of prism exposure, the time gap between the end of the eye saccade and the onset of hand movement was increased from a control time of 23 to 68 msec. This suggests that a time-consuming process occurred during the early prism-exposure period. The evolution of this time gap was correlated with the evolution of pointing errors during the early stage of prism exposure, in such a way that both measures increased at the onset of prism exposure and decreased almost back to control values within about 10 trials. However, spatial error was not entirely corrected, even late in prism exposure when the temporal organization of eye and hand had returned to baseline. These data suggest that two different adaptive mechanisms were at work: a rather short-term mechanism, involved in normal coordination of spatially aligned eye and hand systems, and a long-term mechanism, responsible for remapping spatially misaligned systems. The former mechanism can be strategically employed to quickly optimize accuracy in a situation involving misalignment, but completely adaptive behavior must await the slower-acting latter mechanism to achieve long-term spatial alignment.

20.
Previous research has shown that the appearance of an object (onset) and the disappearance of an object (offset) have the ability to influence the allocation of covert attention. To determine whether both onsets and offsets have the ability to influence eye movements, a series of experiments was conducted in which participants had to make goal-directed eye movements to a color singleton target in the presence of an irrelevant onset/offset. In accord with previous research, onsets had the ability to capture the eyes. The offset of an object demonstrated little or no ability to interrupt goal-directed eye movements to the target. Two experiments in which the effects of onsets and offsets on covert attention were examined suggest that offsets do not capture the eyes, because they have a lesser ability to capture covert attention than do onsets. A number of other studies that have shown strong effects of offsets on attention have used offsets that were uncorrelated with target position (i.e., nonpredictive), whereas we used onsets and offsets that never served as targets (i.e., antipredictive). The present results are consistent with a new-object theory of attentional capture in which onsets receive attentional priority over other types of changes in the visual environment.


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd. 京ICP备09084417号