Similar Articles
20 similar articles found (search time: 156 ms)
1.
Visual Cognition, 2013, 21(2), 113-142
Vision is critical for the efficient execution of prehension movements, providing information about the location of a target object with respect to the viewer, its spatial relationship to other objects, and intrinsic properties of the object such as its size and orientation. This paper reports three experiments that examined the role played by binocular vision in the execution of prehension movements. Specifically, transport and grasp kinematics were examined for prehension movements executed under binocular, monocular, and no-vision (memory-guided and open-loop) viewing conditions. The results demonstrated an overall advantage for reaches executed under binocular vision: movement duration and the length of the deceleration phase were longer, and movement velocity was reduced, when movements were executed with monocular vision. Furthermore, the results indicated that binocular vision is particularly important during "selective" reaching, that is, reaching for target objects that are accompanied by flanker objects. These results are related to recent neuropsychological investigations suggesting that stereopsis may be critical for the visual control of prehension.

2.
The role of visual input during reaching and grasping was evaluated. Groups of infants (5, 7, and 9 months old) and adults reached for an illuminated object that sometimes darkened during the reach. Behavioral and kinematic measures were assessed during transport and grasp. Both infants and adults could complete a reach and grasp to a darkened object. However, vision was used during the reach when the object remained visible. Infants contacted the object more often when it remained visible, though they had longer durations and more movement units. In contrast, adults reached faster and more precisely during transport and grasp when the object remained visible. Thus, continuous sight of the object was not necessary, but when it was available, infants used it for contacting the object whereas adults used it to reach and grasp more efficiently.

3.
Three experiments were performed on reach and grasp in 9- to 10-year-old children (8 controls and 8 with developmental coordination disorder [DCD]). In normal reaching, children in the DCD group were less responsive to the accuracy demands of the task in controlling the transport component of prehension and spent less time in the deceleration phase of hand transport. When vision was removed as movement began, children in the control group spent more time decelerating and reached peak aperture earlier. Children in the DCD group did not do that, although, like the control group, they did increase grip aperture in the dark. When depth cues were reduced and only the target or only the target and hand were visible, children in the control group used target information to maintain the same grip aperture in all conditions, but DCD children behaved as if the target was not visible. Throughout the studies, the control group of 9- to 10-year-olds did not produce adult-like adaptations to reduced vision, suggesting that they had not yet attained adult-like integration of sensory input. Compared with control children, children with DCD did not exhibit increased dependence on vision but showed less recognition of accuracy demands, less adaptation to the removal of vision, and less use of minimal visual information when it was available.

5.
The nature of visually guided locomotion was examined in an experiment where subjects had to walk to targets under various conditions. Target distance was manipulated so that subjects had to (a) lengthen their paces in order to hit the target; (b) shorten their paces; or (c) make no adjustments to their standard pace length at all. They did this under four visual conditions: (a) normal vision; (b) with vision restricted to a "snapshot" each time the foot that was to be placed on the target was on the ground; (c) with a snapshot each time the foot to be placed was in the swing phase; and (d) no vision after departure from the target. The results show that the subjects succeeded in reaching the target in most cases. However, the smoothness and fluidity of their movements varied significantly between conditions. Under normal vision, or where visual snapshots were delivered when the pointing foot was on the ground, locomotion was smoothly regulated as the subjects approached the target. Where snapshots were delivered when the pointing foot was in the swing phase, regulation became clumsy and ill coordinated. Where no vision was available at all during the approach, adjustments were made, but these were the least coordinated of all. The results show that well-coordinated visual regulation does not require continuous visual guidance but depends on intermittent information being available at the appropriate times in the action sequence. Such timing is often more important than the total amount of information that is available for guidance.

6.
Is continuous visual monitoring necessary in visually guided locomotion?
Subjects were asked to walk to targets that were up to 21 m away, either with vision excluded during walking or under normal visual control. Over the entire range, subjects were accurate whether or not vision was available, as long as no more than approximately 8 sec elapsed between closing the eyes and reaching the target. If more than 8 sec elapsed, (a) this had no influence on accuracy at distances up to 5 m, but (b) accuracy at distances between 6 and 21 m was severely impaired. The results are interpreted to mean that two mechanisms are involved in guidance. Up to 5 m, motor programs of relatively long duration can be formulated and used to control activity. Over greater distances, subjects internalized information about the environment in a more general form, independent of any particular set of motor instructions, and used this to control activity and formulate new motor programs. Experiments in support of this interpretation are presented.

7.
The utilization of static and kinetic information for depth by Malaysian children and young adults in making monocular relative size judgments was investigated. Subjects viewed pairs of objects, or photographic slides of the same pairs, and judged which was the larger of each pair. The sizes and positions of the objects were manipulated such that the more distant object subtended a visual angle equal to, 80% of, or 70% of that of the nearer object. Motion parallax information was manipulated by allowing or preventing head movement. All subjects displayed sensitivity to static information for depth when the two objects subtended equal visual angles. When the more distant object was larger but subtended a smaller visual angle than the nearer object, subjects tended to base their judgments on retinal size. Motion parallax information increased the accuracy of judgments of three-dimensional displays but reduced the accuracy of judgments of pictorial displays. Comparisons are made between these results and those for American subjects.

8.
The authors used a virtual environment to investigate visual control of reaching and monocular and binocular perception of egocentric distance, size, and shape. With binocular vision, the results suggested use of disparity matching. This was tested and confirmed in the virtual environment by eliminating other information about contact of hand and target. Elimination of occlusion of hand by target destabilized monocular but not binocular performance. Because the virtual environment entails accommodation of an image beyond reach, the authors predicted overestimation of egocentric distances in the virtual relative to actual environment. This was confirmed. The authors used -2 diopter glasses to reduce the focal distance in the virtual environment. Overestimates were reduced by half. The authors conclude that calibration of perception is required for accurate feedforward reaching and that disparity matching is optimal visual information for calibration.

9.
The sensorimotor transformations necessary for generating appropriate motor commands depend on both current and previously acquired sensory information. To investigate the relative impact (or weighting) of visual and haptic information about object size during grasping movements, we let normal subjects perform a task in which, unbeknownst to the subjects, the object seen (visual object) and the object grasped (haptic object) were never the same physically. When the haptic object abruptly became larger or smaller than the visual object, subjects in the following trials automatically adapted their maximum grip aperture when reaching for the object. This adaptation was not dependent on conscious processes. We analyzed how visual and haptic information were weighted during the course of sensorimotor adaptation. The adaptation process was quicker and relied more on haptic information when the haptic objects increased in size than when they decreased in size. As such, sensory weighting seemed to be molded to avoid prehension error. We conclude from these results that the impact of a specific source of sensory information on the sensorimotor transformation is regulated to satisfy task requirements.

10.
The purpose of these experiments was to determine the effects of object weight and condition of weight presentation on the kinematics of human prehension. Subjects performed reaching and grasping movements to metal dowels whose visible characteristics were similar but whose weight varied (20, 55, 150, 410 g). Movements were performed under two conditions of weight presentation, random (weight unknown) and blocked (weight known). Three-dimensional movements of the thumb, index finger, and wrist were recorded, using a WATSMART system to obtain information regarding the grasp and transport components. The results of the first experiment indicated that object weight and condition of presentation affected the temporal and kinematic measures for both the grasp and transport components. In conjunction with the results of a second experiment, in which time in contact with the dowel was measured, it was shown, however, that the free-motion phase of prehension (i.e., up to object contact) was invariant over the different conditions. The changes were observed in the finger-object interaction phase (when subjects applied forces after contact with the dowel), prior to lift-off. These results were interpreted as indicating that (a) object weight does not influence the planning and execution of the free-motion phase of prehension and (b) there are at least two motor control phases involved in prehension, one for making contact with the object and the other for finger-object interaction. The changing contributions of visual, kinesthetic, and haptic information during these two phases are discussed.
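The two-phase view of prehension described above can be sketched computationally. The snippet below is an illustrative toy, not the study's WATSMART pipeline: it splits a trial into a free-motion phase (movement onset to object contact) and a finger-object interaction phase (contact to lift-off) from a sampled wrist-speed profile. The function name, threshold, and data are all assumptions for illustration.

```python
# Toy segmentation of a prehension trial into the two motor control phases
# the abstract distinguishes. All names and values here are illustrative.
import numpy as np

def segment_prehension(speed, contact_idx, liftoff_idx, onset_thresh=0.05):
    """Return (free-motion duration, finger-object interaction duration),
    in samples, using a simple speed threshold to locate movement onset."""
    onset = int(np.argmax(speed > onset_thresh))  # first supra-threshold sample
    return contact_idx - onset, liftoff_idx - contact_idx

# Toy bell-shaped transport speed profile sampled at 100 Hz over 1 s,
# with assumed object contact at sample 80 and lift-off at sample 100.
t = np.linspace(0.0, 1.0, 101)
speed = np.sin(np.pi * t)
free_motion, interaction = segment_prehension(speed, contact_idx=80, liftoff_idx=100)
print(free_motion, interaction)  # 78 20
```

With real data, the finding reported above would correspond to `free_motion` being stable across weight conditions while events within the interaction phase vary.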

12.
The ability to adapt is a fundamental and vital characteristic of the motor system. The authors altered the visual environment and focused on the ability of humans to adapt to a rotated environment in a reaching task, in the absence of continuous visual information about hand location. Subjects could not see their arm but were provided with post-trial knowledge of performance depicting the hand path from movement onset to final position. Subjects failed to adapt under these conditions. The authors sought to find out whether the lack of adaptation was related to the number of target directions presented in the task, and planned 2 protocols in which subjects were gradually exposed to a 22.5° visuomotor rotation. These protocols differed only in the number of target directions: 8 and 4 targets. The authors found that subjects had difficulty adapting without continuous visual feedback of their performance, regardless of the number of targets presented in the task. In the 4-target protocol, some of the subjects noticed the rotation and explicitly aimed in the correct direction. The results suggest that real-time feedback is required for motor adaptation to visual rotation during reaching movements.
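A visuomotor rotation of the kind used in these protocols can be sketched as a fixed rotation applied to the displayed hand position; full adaptation corresponds to aiming by the opposite angle. The sketch below is a minimal geometric illustration under assumed coordinates, not the authors' apparatus.

```python
# Minimal sketch of a 22.5-degree visuomotor rotation: the cursor shows the
# hand position rotated by a fixed angle, and a fully adapted reach aims by
# the counter-rotation. Target layout and distances are assumptions.
import numpy as np

def rotate(v, deg):
    """Rotate a 2-D vector counterclockwise by `deg` degrees."""
    th = np.radians(deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return R @ v

target = np.array([10.0, 0.0])        # straight-ahead target, 10 cm away
cursor = rotate(target, 22.5)         # a veridical reach appears rotated
adapted_hand = rotate(target, -22.5)  # fully adapted aim counter-rotates
# Under the rotation, the adapted reach brings the cursor onto the target:
assert np.allclose(rotate(adapted_hand, 22.5), target)
```

The explicit re-aiming some subjects reported in the 4-target protocol corresponds to deliberately producing `adapted_hand` rather than gradually recalibrating.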

13.
Both judgment studies and studies of feedforward reaching have shown that the visual perception of object distance, size, and shape is inaccurate. However, feedback has been shown to calibrate feedforward reaches-to-grasp to make them accurate with respect to object distance and size. We now investigate whether shape perception (in particular, the aspect ratio of object depth to width) can be calibrated in the context of reaches-to-grasp. We used cylindrical objects with elliptical cross-sections of varying eccentricity. Our participants reached to grasp the width or the depth of these objects with the index finger and thumb. The maximum grasp aperture and the terminal grasp aperture were used to evaluate perception. Both occur before the hand has contacted an object. In Experiments 1 and 2, we investigated whether perceived shape is recalibrated by distorted haptic feedback. Although somewhat equivocal, the results suggest that it is not. In Experiment 3, we tested the accuracy of feedforward grasping with respect to shape with haptic feedback to allow calibration. Grasping was inaccurate in ways comparable to findings in shape perception judgment studies. In Experiment 4, we hypothesized that online guidance is needed for accurate grasping. Participants reached to grasp either with or without vision of the hand. The result was that the former was accurate, whereas the latter was not. We conclude that shape perception is not calibrated by feedback from reaches-to-grasp and that online visual guidance is required for accurate grasping because shape perception is poor.

14.
Visual information about the location of the hand in space plays a key role in many theories of the development of reaching. Empirical data casts doubt on this assumption, although vision of the hand is clearly used by adults. The current study investigated the role of vision in 15-month-olds' reaching, manipulating both the precision demands of the task and the level of visual information available. Infants reached for both large and small objects, presented with visual feedback of the target and hand (full lighting), or with visual feedback of only the target object (glowing object in the dark). In contrast to findings with younger infants, 15-month-olds' reaches were sensitive to changes in precision demands and visual feedback, reflecting corrective movements that become necessary as reaching tasks become more challenging. Furthermore, these kinematic alterations are similar to those seen in adults, suggesting that visual guidance may become more important over the course of development, as infants engage in increasingly higher precision tasks.

15.
In four experiments, reducing lenses were used to minify vision and generate intersensory size conflicts between vision and touch. Subjects made size judgments, using either visual matching or haptic matching. In visual matching, the subjects chose from a set of visible squares that progressively increased in size. In haptic matching, the subjects selected matches from an array of tangible wooden squares. In Experiment 1, it was found that neither sense dominated when subjects exposed to an intersensory discrepancy made their size estimates by using either visual matching or haptic matching. Size judgments were nearly identical for conflict subjects making visual or haptic matches. Thus, matching modality did not matter in Experiment 1. In Experiment 2, it was found that subjects were influenced by the sight of their hands, which led to increases in the magnitude of their size judgments. Sight of the hands produced more accurate judgments, with subjects being better able to compensate for the illusory effects of the reducing lens. In two additional experiments, it was found that when more precise judgments were required and subjects had to generate their own size estimates, the response modality dominated. Thus, vision dominated in Experiment 3, where size judgments derived from viewing a metric ruler, whereas touch dominated in Experiment 4, where subjects made size estimates with a pincers posture of their hands. It is suggested that matching procedures are inadequate for assessing intersensory dominance relations. These results qualify the position (Hershberger & Misceo, 1996) that the modality of size estimates influences the resolution of intersensory conflicts. Only when required to self-generate more precise judgments did subjects rely on one sense, either vision or touch. Thus, task and attentional requirements influence dominance relations, and vision does not invariably prevail over touch.

16.
Patients with right unilateral cerebral stroke, four of whom showed acute hemispatial neglect, and healthy age-matched controls were tested for their ability to grasp objects located in either right or left space at near or far distances. Reaches were performed either in free vision or without visual feedback from the hand or target object. It was found that the patient group showed normal grasp kinematics with respect to maximum grip aperture, grip orientation, and the time taken to reach the maximum grip aperture. Analysis of hand path curvature showed that control subjects produced straighter right hand reaches when vision was available compared to when it was not. The right hemisphere lesioned patients, however, showed similar levels of curvature in each of these conditions. No behavioural differences, though, could be found between right hemisphere lesioned patients with or without hemispatial neglect on grasp parameters, path deviation, or temporal kinematics.

17.
Virtual reality (VR) technology is being used with increasing frequency as a training medium for motor rehabilitation. However, before addressing training effectiveness in virtual environments (VEs), it is necessary to identify if movements made in such environments are kinematically similar to those made in physical environments (PEs) and the effect of provision of haptic feedback on these movement patterns. These questions are important since reach-to-grasp movements may be inaccurate when visual or haptic feedback is altered or absent. Our goal was to compare kinematics of reaching and grasping movements to three objects performed in an immersive three-dimensional (3D) VE with haptic feedback (cyberglove/grasp system) viewed through a head-mounted display to those made in an equivalent physical environment (PE). We also compared movements in PE made with and without wearing the cyberglove/grasp haptic feedback system. Ten healthy subjects (8 women, 62.1 ± 8.8 years) reached and grasped objects requiring 3 different grasp types (can, diameter 65.6 mm, cylindrical grasp; screwdriver, diameter 31.6 mm, power grasp; pen, diameter 7.5 mm, precision grasp) in PE and visually similar virtual objects in VE. Temporal and spatial arm and trunk kinematics were analyzed. Movements were slower and grip apertures were wider when wearing the glove in both the PE and the VE compared to movements made in the PE without the glove. When wearing the glove, subjects used similar reaching trajectories in both environments, preserved the coordination between reaching and grasping and scaled grip aperture to object size for the larger object (cylindrical grasp). However, in VE compared to PE, movements were slower and had longer deceleration times, elbow extension was greater when reaching to the smallest object and apertures were wider for the power and precision grip tasks. 
Overall, the differences in spatial and temporal kinematics of movements between environments were greater than those due only to wearing the cyberglove/grasp system. Differences in movement kinematics due to the viewing environment were likely due to a lack of prior experience with the virtual environment, an uncertainty of object location and the restricted field-of-view when wearing the head-mounted display. The results can be used to inform the design and disposition of objects within 3D VEs for the study of the control of prehension and for upper limb rehabilitation.

18.
Three experiments were conducted to determine how variables other than movement time influence the speed of visual feedback utilization in a target-pointing task. In Experiment 1, subjects moved a stylus to a target 20 cm away with movement times of approximately 225 msec. Visual feedback was manipulated by leaving the room lights on over the whole course of the movement or extinguishing the lights upon movement initiation, while prior knowledge about feedback availability was manipulated by blocking or randomizing feedback. Subjects exhibited less radial error in the lights-on/blocked condition than in the other three conditions. In Experiment 2, when subjects were forced to use vision by a laterally displacing prism, it was found that they benefited from the presence of visual feedback regardless of feedback uncertainty, even when moving very rapidly (e.g., less than 190 msec). In Experiment 3, subjects pointed with and without a prism over a wide variety of movement times. Subjects benefited from vision much earlier in the prism condition. Subjects seem able to use vision rapidly to modify aiming movements but may do so only when the visual information is predictably available and/or yields an error large enough to be detected early enough to correct.

19.
During visually guided grasping movements, visual information is transformed into motor commands. This transformation is known as the "visuomotor map." To investigate limitations in the short-term plasticity of the visuomotor map in normal humans, we studied the maximum grip aperture (MGA) during the reaching phase while subjects grasped objects of various sizes. The objects seen and the objects grasped were physically never the same. When a discrepancy had been introduced between the size of the visual and the grasped objects, and the subjects were fully adapted to it, they all readily interpolated and extrapolated the MGA to objects not included in training trials. In contrast, when the subjects were exposed to discrepancies that required a slope change in the visuomotor map, they were unable to adapt adequately. They instead retained a subject-specific slope of the relationship between the visual size and MGA. We conclude from these results that during reaching for grasping, normal subjects are unable to abandon a straight linear function determining the relationship between visual object size and MGA. Moreover, the plasticity of the visuomotor map is, at least in short term, constrained to allow only offset changes, that is, only "rigid shifts" are possible between the visual and motor coordinate systems.
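The offset-only constraint described above can be made concrete with a small numerical sketch. The linear map below and its parameter values are illustrative assumptions, not the study's fitted data: a constant discrepancy between seen and grasped size is absorbed as an offset change that shifts the MGA equally at every object size, while the subject-specific slope stays fixed.

```python
# Minimal sketch (assumed numbers) of a linear visuomotor map relating
# visual object size to maximum grip aperture (MGA), where short-term
# adaptation can change only the offset (a "rigid shift"), not the slope.
import numpy as np

def mga(visual_size_mm, slope=0.8, offset_mm=20.0):
    """Predicted maximum grip aperture (mm) for a seen object size (mm)."""
    return slope * visual_size_mm + offset_mm

sizes = np.array([20.0, 40.0, 60.0])
baseline = mga(sizes)
# A constant +10 mm seen/grasped discrepancy is absorbed as an offset change:
adapted = mga(sizes, offset_mm=30.0)
print(adapted - baseline)  # [10. 10. 10.] -- the same shift at every size
```

A discrepancy that grew with object size would instead require changing `slope`, which is the kind of remapping the subjects could not achieve.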

20.
The study examined the contribution of various sources of visual information utilised in the control of discrete aiming movements. Subjects produced movements, 15.24 cm in amplitude, to a 1.27 cm target in a movement time of 330 ms. Responses were carried out under five vision-manipulation conditions, which allowed subjects complete vision, no vision, vision of only the target or only the stylus, or a combination of stylus and target. Response accuracy scores indicated that a decrement in performance occurred when movements were completed in the absence of visual information or when only the target was visible during the response. The stylus and the target-plus-stylus visual conditions led to response accuracy which was comparable to movements produced with complete vision. These results suggest that the critical visual information for aiming accuracy is that of the stylus. These findings are consistent with a control model based on a visual representation of the discrepancy between the position of the hand and the location of the target.
