Similar References
1.
A substantial body of research has examined the speed-accuracy tradeoff captured by Fitts’ law, demonstrating increases in movement time that occur as aiming tasks are made more difficult by decreasing target width and/or increasing the distance between targets. Yet, serial aiming movements guided by internal spatial representations, rather than by visual views of targets, have not been examined in this manner, and the value of confirmatory feedback via different sensory modalities within this paradigm is unknown. Here we examined goal-directed serial aiming movements (tapping back and forth between two targets), wherein targets were visually unavailable during the task. However, confirmatory feedback (auditory, haptic, visual, and bimodal combinations of each) was delivered upon each target acquisition, in a counterbalanced, within-subjects design. Each participant performed the aiming task with their pointer finger, represented within an immersive virtual environment as a 1 cm white sphere, while wearing a head-mounted display. Despite visual target occlusion, movement times increased in accordance with Fitts’ law. Though Fitts’ law captured performance for each of the sensory feedback conditions, the slopes differed. The effect of increasing difficulty on movement time was smallest in the haptic condition, suggesting more efficient processing of confirmatory haptic feedback during aiming movements guided by internal spatial representations.
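For reference, the relation referred to above as Fitts' law is commonly written in its Shannon formulation; the abstract does not state the exact formulation or coefficient values used in this study, so the symbols below are only the standard ones:

MT = a + b \cdot \log_2\left(\frac{D}{W} + 1\right)

Here MT is movement time, D is the distance (amplitude) between the two targets, W is the target width, \log_2(D/W + 1) is the index of difficulty in bits, and a and b are an empirically fitted intercept and slope. The reported differences in slope across feedback conditions correspond to different fitted values of b, with the haptic condition yielding the shallowest slope.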

2.
The perception of linear extent in haptic touch appears to be anisotropic, in that haptically perceived extents can depend on the spatial orientation and location of the object and, thus, on the direction of exploratory motion. Experiments 1 and 2 quantified how the haptic perception of linear extent depended on the type of motion (radial or tangential to the body) when subjects explored different stimulus objects (raised lines or solid blocks) varying in length and in relative spatial location. Relatively narrow, shallow, raised lines were judged to be longer, by magnitude estimation, than solid blocks. Consistent with earlier reports, stimuli explored with radial arm motions were judged to be longer than identical stimuli explored with tangential motions; this difference did not depend consistently on the lateral position of the stimulus object, the direction of movement (toward or away from the body), or the distance of the hand from the body but did depend slightly on the angular position of the shoulder. Experiment 3 showed that the radial-tangential effect could be explained by temporal differences in exploratory movements, implying that the apparent anisotropy is not intrinsic to the structure of haptic space.

3.
Preschoolers who explore objects haptically often fail to recognize those objects in subsequent visual tests. This suggests that children may represent qualitatively different information in vision and haptics and/or that children’s haptic perception may be poor. In this study, 72 children (2½-5 years of age) and 20 adults explored unfamiliar objects either haptically or visually and then chose a visual match from among three test objects, each matching the exemplar on one perceptual dimension. All age groups chose shape-based matches after visual exploration. Both 5-year-olds and adults also chose shape-based matches after haptic exploration, but younger children did not match consistently in this condition. Certain hand movements performed by children during haptic exploration reliably predicted shape-based matches but occurred at very low frequencies. Thus, younger children’s difficulties with haptic-to-visual information transfer appeared to stem from their failure to use their hands to obtain reliable haptic information about objects.

4.
Blindfolded subjects moved a stylus held in the hand over a standard distance of 4.5 ins. in a given direction. They then attempted to move the same distance in a direction at right angles to the first. Eight combinations of movements were investigated. The results reveal an illusion such that the extent of movements to left or right across the body is underestimated, while the extent of movements towards or away from the body in the mid-line is overestimated. The illusion applies to speed as well as extent of movement. Movement up or down in a vertical plane is equivalent to movement towards or away from the body in a horizontal plane.

The interaction of this illusion with the well-known horizontal-vertical illusion of visual perception explains a failure to find any net illusory effect where lines visually displayed in different orientations were matched for length by unseen movements in similar orientations.

Whether the visual and movement illusions simply co-exist or whether they are functionally related is not yet clear.

5.
Dance-like actions are complex visual stimuli involving multiple changes in body posture across time and space. Visual perception research has demonstrated a difference between the processing of dynamic body movement and the processing of static body posture. Yet, it is unclear whether this processing dissociation continues during the retention of body movement and body form in visual working memory (VWM). When observing a dance-like action, it is likely that static snapshot images of body posture will be retained alongside dynamic images of the complete motion. Therefore, we hypothesized that, as in perception, posture and movement would differ in VWM. Additionally, if body posture and body movement are separable in VWM, as form- and motion-based items, respectively, then differential interference from intervening form and motion tasks should occur during recognition. In two experiments, we examined these hypotheses. In Experiment 1, the recognition of postures and movements was tested in conditions in which the formats of the study and test stimuli matched (movement–study to movement–test, posture–study to posture–test) or mismatched (movement–study to posture–test, posture–study to movement–test). In Experiment 2, the recognition of postures and movements was compared after intervening form and motion tasks. These results indicated that (1) the recognition of body movement based only on posture is possible, but it is significantly poorer than recognition based on the entire movement stimulus, and (2) form-based interference does not impair memory for movements, although motion-based interference does. We concluded that, whereas static posture information is encoded during the observation of dance-like actions, body movement and body posture differ in VWM.

6.
It has been shown that, even for very fast and short-duration movements, seeing one's hand in peripheral vision, or a cursor representing it on a video screen, resulted in better direction accuracy of a manual aiming movement than when the task was performed while only the target was visible. However, it is still unclear whether this was caused by on-line or off-line processes. Through a novel series of analyses, the goal of the present study was to shed some light on this issue. We replicated previous results showing that the visual information concerning one's movement, which is available between 40 degrees and 25 degrees of visual angle, is not useful to ensure direction accuracy of video-aiming movements, whereas visual afferent information available between 40 degrees and 15 degrees of visual angle improved direction accuracy over a target-only condition. In addition, endpoint variability on the direction component of the task was scaled to direction variability observed at peak movement velocity. Similar observations were made in a second experiment when the position of the cursor was translated to the left or to the right as soon as it left the starting base. Further, the data showed no evidence of on-line correction to the direction dimension of the task for the translated trials. Taken together, the results of the two experiments strongly suggest that, for fast video-aiming movements, the information concerning one's movement that is available in peripheral vision is used off-line.

7.
Effects of learning can show in a direct, i.e., explicit, way, or they can be expressed indirectly, i.e., in an implicit way. It is investigated whether haptic information shows implicit memory effects, and whether such effects are based primarily on motor or on sensory memory components. In the first phase, blindfolded subjects had to palpate objects in order to answer questions about the objects' distinct properties as fast as possible. In the following phase this task was repeated with the same objects and additional control items. Additionally, recognition judgements were required. Results demonstrate reliable effects of implicit memory for haptic information in terms of reaction times to old vs. new objects. Subjects who had to wear plastic gloves in the first stage showed comparable repetition-priming effects. Changing the questions, and thus the hand movements, during the palpation of objects known from the first stage, however, abolishes the expression of implicit memory. It is concluded, therefore, that implicit memory for haptic information is based on motor processes. Explicit memory, by contrast, is hampered in subjects who wore gloves during the first phase, as revealed by recognition performance, whereas changing the questions about the objects' properties has no effect on recognition judgements. Thus, explicit memory for haptic information seems to be based on the sensory processes engaged when touching objects.

8.
Previous research on the interaction between manual action and visual perception has focused on discrete movements or static postures and discovered better performance near the hands (the near-hand effect). However, in everyday behaviors, the hands are usually moving continuously between possible targets. Therefore, the current study explored the effects of continuous hand motion on the allocation of visual attention. Eleven healthy adults performed a visual discrimination task during cyclical concealed hand movements underneath a display. Both the current hand position and its movement direction systematically contributed to participants’ visual sensitivity. Discrimination performance increased substantially when the hand was distant from but moving toward the visual probe location (a far-hand effect). Implications of this novel observation are discussed.

9.
Encoding seen movement of another human body requires visuo-spatial processing, and recall involves motor activity. However, encoding whole body movement patterns is affected differently by patterned and spatial secondary tasks, and this difference is reversed for encoding of spatial targets for movement (Smyth, Pearson, & Pendleton, 1988). The experiments reported here investigate the rehearsal of such movement patterns and their recall over unfilled and filled intervals. Performing, watching, or encoding a sequence of spatial positions while carrying a memory load of movement patterns did not affect recall of those movements, whereas performing, watching, or encoding a further set of patterned movements reduced the number recalled from the original set. However, memory for a series of locations in space was not affected by watching patterned movements during the interval, and only order information was affected by watching movement to a series of spatial locations during the interval. The results are discussed in terms of the independence of rehearsal mechanisms for spatial sequencing and movement patterns.

10.
In six experiments, we used the Müller-Lyer illusion to investigate factors in the integration of touch, movement, and spatial cues in haptic shape perception, and in the similarity with the visual illusion. Latencies provided evidence against the hypothesis that scanning times explain the haptic illusion. Distinctive fin effects supported the hypothesis that cue distinctiveness contributes to the illusion, but showed also that it depends on modality-specific conditions, and is not the main factor. Allocentric cues from scanning an external frame (EF) did not reduce the haptic illusion. Scanning elicited downward movements and more negative errors for horizontal convergent figures and more positive errors for vertical divergent figures, suggesting a modality-specific movement effect. But the Müller-Lyer illusion was highly significant for both vertical and horizontal figures. By contrast, instructions to use body-centered reference and to ignore the fins reduced the haptic illusion for vertical figures in touch from 12.60% to 1.7%. In vision, without explicit egocentric reference, instructions to ignore fins did not reduce the illusion to near floor level, though external cues were present. But the visual illusion was reduced to the same level as in touch with instructions that included the use of body-centered cues. The new evidence shows that the same instructions reduced the Müller-Lyer illusion almost to zero in both vision and touch. It suggests that the similarity of the illusions is not fortuitous. The results on touch supported the hypothesis that body-centered spatial reference is involved in integrating inputs from touch and movement for accurate haptic shape perception. The finding that explicit egocentric reference had the same effect on vision suggests that it may be a common factor in the integration of disparate inputs from multisensory sources.

11.
The present study examined the role of vision and haptics in memory for stimulus objects that vary along the dimension of curvature. Experiment 1 measured haptic-haptic (T-T) and haptic-visual (T-V) discrimination of curvature in a short-term memory paradigm, using 30-second retention intervals containing five different interpolated tasks. Results showed poorest performance when the interpolated tasks required spatial processing or movement, thereby suggesting that haptic information about shape is encoded in a spatial-motor representation. Experiment 2 compared visual-visual (V-V) and visual-haptic (V-T) short-term memory, again using 30-second delay intervals. The results of the ANOVA failed to show a significant effect of intervening activity. Intra-modal visual performance and cross-modal performance were similar. Comparing the four modality conditions (inter-modal V-T, T-V; intra-modal V-V, T-T, by combining the data of Experiments 1 and 2), in a global analysis, showed a reliable interaction between intervening activity and experiment (modality). Although there appears to be a general tendency for spatial and movement activities to exert the most deleterious effects overall, the patterns are not identical when the initial stimulus is encoded haptically (Experiment 1) and visually (Experiment 2).

12.
The present study examined if and how the direction of planned hand movements affects the perceived direction of visual stimuli. In three experiments, participants prepared hand movements that deviated from a visual target position in direction (Experiments 1 and 2) or in distance (Experiment 3). Before actual execution of the movement, the direction of the visual stimulus had to be estimated by means of a method of adjustment. The perception of stimulus direction was biased away from the planned movement direction, such that with leftward movements stimuli appeared somewhat more rightward than with rightward movements. Control conditions revealed that this effect was neither a mere response bias, nor a result of processing or memorizing movement cues. Also, shifting the focus of attention toward a cued location in space was not sufficient to induce the perceptual bias observed under conditions of movement preparation (Experiment 4). These results confirm that characteristics of planned actions bias visual perception, with the direction of bias (contrast or assimilation) possibly depending on the type of representations (categorical or metric) involved.

13.
Movements to spatial targets that can, in principle, be carried out by more than one effector can be distinguished from movements that involve specific configurations of body parts. The experiments reported here investigate memory span for a series of hand configurations and memory span for a series of hand movements to spatial locations. Spans were produced normally, or in conditions in which a suppression task was carried out on the right or the left hand while the movements to be remembered were presented. All movements were recalled using the right hand. There were two suppression tasks. One involved repeatedly squeezing a tube and so changing the configuration of the hand, and the other involved tapping a repeated series of spatial targets. The spatial tapping task interfered with span for spatial locations when it was presented on either the right or the left hand but did not affect span for movement pattern. The movement suppression task interfered with memory for movement pattern when it was presented on either the right or the left hand, but did not interfere with span for spatial locations. It is concluded that memory for movement configurations involves different processes from those used in spatial tasks and that there may be a need for a subsystem of working memory that is specific for movement configuration.

14.
Do patients with unilateral neglect exhibit direction-specific deficits in the control of movement velocity when performing goal-directed arm movements? Five patients with left-sided neglect performed unrestrained three-dimensional pointing movements to visual targets presented at the body midline and in the left and right hemispace. A group of healthy adults and a group of patients with right-hemispheric brain damage but no neglect served as controls. Pointing was performed under normal room light or in darkness. Time-position data of the hand were recorded with an opto-electronic camera system. We found that, compared to healthy controls, movement times were longer in both patient groups due to prolonged acceleration and deceleration phases. Tangential peak hand velocity was lower in both patient groups, but not significantly different from controls. Single-peak, bell-shaped velocity profiles of the hand were preserved in all right-hemispheric patients and in three out of five neglect patients. Most important, the velocity profiles of neglect patients to leftward targets did not differ significantly from those to targets in the right hemispace. In summary, we found evidence for general bradykinesia in neglect patients, but not for a direction-specific deficit in the control of hand velocity. We conclude that visual neglect induces characteristic changes in exploratory behavior, but not in the kinematics of goal-directed movements to objects in peripersonal space.
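To make the kinematic measures in this abstract concrete (tangential hand velocity, its single peak, and the acceleration and deceleration phases of movement time), here is a minimal sketch of how such quantities might be computed from sampled 3D time-position data. It is not the authors' analysis pipeline; the 200 Hz sampling rate, the minimum-jerk test trajectory, and all names are illustrative assumptions.

import numpy as np

def tangential_velocity_profile(positions, fs=200.0):
    # positions: (n_samples, 3) array of x, y, z hand coordinates in metres
    # fs: sampling rate in Hz (assumed value; real motion-capture systems vary)
    velocity = np.gradient(positions, 1.0 / fs, axis=0)  # per-axis numerical derivative
    return np.linalg.norm(velocity, axis=1)              # tangential (resultant) speed, m/s

def movement_phases(speed, fs=200.0):
    # Split movement time at the single velocity peak of a bell-shaped profile.
    i_peak = int(np.argmax(speed))
    time_to_peak = i_peak / fs                        # acceleration phase duration, s
    time_after_peak = (len(speed) - 1 - i_peak) / fs  # deceleration phase duration, s
    return speed[i_peak], time_to_peak, time_after_peak

# Synthetic example: a 0.3 m point-to-point reach along x following a minimum-jerk profile.
t = np.linspace(0.0, 0.6, 121)                        # 0.6 s movement sampled at 200 Hz
s = t / 0.6
x = 0.3 * (10 * s**3 - 15 * s**4 + 6 * s**5)
positions = np.column_stack([x, np.zeros_like(t), np.zeros_like(t)])
speed = tangential_velocity_profile(positions)
peak_v, t_acc, t_dec = movement_phases(speed)
print(f"peak velocity {peak_v:.2f} m/s, acceleration {t_acc:.3f} s, deceleration {t_dec:.3f} s")

Measures such as the time after peak velocity discussed in the later abstract on the right-hand advantage correspond to the deceleration-phase duration returned by this kind of decomposition.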

15.
Acta Psychologica, 2013, 142(3): 394-401
The integration of separate yet complementary cortical pathways appears to play a role in visual perception and action when intercepting objects. The ventral system is responsible for object recognition and identification, while the dorsal system facilitates continuous regulation of action. This dual-system model implies that empirically manipulating different visual information sources during performance of an interceptive action might lead to the emergence of distinct gaze and movement pattern profiles. To test this idea, we recorded hand kinematics and eye movements of participants as they attempted to catch balls projected from a novel apparatus that synchronised or de-synchronised accompanying video images of a throwing action and ball trajectory. Results revealed that ball catching performance was less successful when patterns of hand movements and gaze behaviours were constrained by the absence of advanced perceptual information from the thrower's actions. Under these task constraints, participants began tracking the ball later, followed less of its trajectory, and adapted their actions by initiating movements later and moving the hand faster. There were no performance differences when the throwing action image and ball speed were synchronised or de-synchronised since hand movements were closely linked to information from ball trajectory. Results are interpreted relative to the two-visual system hypothesis, demonstrating that accurate interception requires integration of advanced visual information from kinematics of the throwing action and from ball flight trajectory.

16.
In a number of studies, we have demonstrated that the spatial-temporal coupling of eye and hand movements is optimal for the pickup of visual information about the position of the hand and the target late in the hand's trajectory. Several experiments designed to examine temporal coupling have shown that the eyes arrive at the target area concurrently with the hand achieving peak acceleration. Between the time the hand reached peak velocity and the end of the movement, increased variability in the position of the shoulder and the elbow was accompanied by a decreased spatial variability in the hand. Presumably, this reduction in variability was due to the use of retinal and extra-retinal information about the relative positions of the eye, hand and target. However, the hand does not appear to be a slave to the eye. For example, we have been able to decouple eye movements and hand movements using Müller-Lyer configurations as targets. Predictable bias, found in primary and corrective saccadic eye movements, was not found for hand movements, if on-line visual information about the target was available during aiming. That is, the hand remained accurate even when the eye had a tendency to undershoot or overshoot the target position. However, biases of the hand were evident, at least in the initial portion of an aiming movement, when vision of the target was removed and vision of the hand remained. These findings accent the versatility of human motor control and have implications for current models of visual processing and limb control.

17.
Body movement (movement to spatial positions and movement of body-configuration patterns) is an important channel through which individuals interact with the environment. Previous behavioral and neuroimaging studies have examined the working-memory storage of spatial-position movement information and of body-pattern movement information separately, finding that the storage of both types of movement information is independent of the phonological loop and of the visual subsystem of the visuospatial sketchpad, and requires the involvement of the spatial subsystem of the visuospatial sketchpad. The brain regions activated by the two types of movement information (movement-related cortex) are independent of the phonological loop and of the visual and spatial subsystems of the visuospatial sketchpad, and differ between the two types. This indicates that the existing multi-component model of working memory cannot fully account for the storage of body-movement information. It may therefore be inferred that the working-memory system contains a "body-movement system" responsible for processing body-movement information; this system belongs to the visuospatial sketchpad and coexists with the visual and spatial subsystems, and the brain regions it activates differ depending on the type of body movement.

18.
The right hand advantage has been thought to arise from the greater efficiency of the right hand/left hemisphere system in processing visual feedback information. This hypothesis was examined using kinematic analyses of aiming performance, focusing particularly on time after peak velocity, which has been shown to be sensitive to visual feedback processing demands. Eight right-handed subjects pointed at two targets with their left and right hands with or without vision available and either as accurately or as fast as possible. Pointing errors and movement time were found to be smaller with the right hand. Analyses of the temporal components of movement time revealed that the hands differed only in time after peak velocity (in deceleration), with the right hand spending significantly less time. This advantage for the right hand, however, was apparent whether or not vision was available and only when accuracy was emphasized in performance. These findings suggest that the right hand system may be more efficient at processing feedback information, whether this be visual or nonvisual (e.g., proprioceptive).

19.
Mental imagery, perception, and memory form an integrated cognitive system. Because perception and memory provide the material from which images are generated, the three share similar representations and activate broad, largely overlapping brain regions. Nevertheless, they differ to some extent in cognitive processing. Compared with perception, imagery is encoded more abstractly, depends more heavily on past experience, and is weaker at processing detail; compared with memory, imagery is more susceptible to interference from irrelevant information. Future research on the relationships among the three should focus on how images of different sources and types relate to perception and memory, and on the role that working memory plays in these relationships.
