Similar articles
20 similar articles found (search time: 15 ms)
1.
We have investigated how participants match the orientation of a line, which moves on a vertical screen towards the subject. On its path to the participant, the line could disappear at several positions. Participants were instructed to put a bar on a predefined interception point on the screen, such that the bar touched the screen with the same orientation as the moving line at the very moment when the line passed through the interception point or (in case of line disappearance) when the hidden line would pass through the interception point (as in catching). Participants made significant errors for oblique orientations, but not for vertical and horizontal orientations of the moving line. These errors were small or absent when the moving line was visible all the way along its path on the screen. However, these errors became larger when the line disappeared farther away from the interception point. In the second experiment we tested whether these errors could be related to errors in visual perception of line orientation. The results demonstrate that errors in matching of the bar do not correspond to the last perceived orientation of the line, but rather to the perceived orientation of the moving line near the beginning of the movement path. This corresponds to earlier observations that participants briefly track a moving target and then make a saccadic eye movement to the interception point.

2.
Pointing accuracy with an unseen hand to a just-extinguished visual target was examined in various eye movement conditions. When subjects caught the target by a saccade, they showed about the same degree of accuracy as that shown in pointing to a visible target. On the other hand, when subjects tracked a moving target by a pursuit eye movement, they systematically undershot when subsequently pointing to the target. The differential effect of the two types of eye movements on pointing tasks was examined on both the preferred and non-preferred hands, and it was found that the effect of eye movements was more prominent on the preferred hand than on the non-preferred hand. The results are discussed in relation to outflow eye position information.

3.
It has been claimed that increased reliance on context, or allocentric information, develops when aiming movements are more consciously monitored and/or controlled. Since verbalizing target features requires strong conscious monitoring, we expected an increased reliance on allocentric information when verbalizing a target label (i.e. target number) during movement execution. We examined swiping actions towards a global array of targets embedded in different local array configurations on a tablet under no-verbalization and verbalization conditions. The global and local array configurations allowed separation of contextual effects from any possible numerical magnitude biases triggered by calling out specific target numbers. The patterns of constant errors in the target direction were used to assess differences between conditions. Variation in the target context configuration systematically biased movement endpoints in both the no-verbalization and verbalization conditions. Ultimately, our results do not support the assertion that calling out target numbers during movement execution increases the context-dependency of targeted actions.

4.
Hermens F, Gielen S. Perception, 2003, 32(2): 235-248
In this study we investigated the perception and production of line orientations in a vertical plane. Previous studies have shown that systematic errors are made when participants have to match oblique orientations visually and haptically. Differences in the setup for visual and haptic matching did not allow for a quantitative comparison of the errors. To investigate whether matching errors are the same for different modalities, we asked participants to match a visually presented orientation visually, haptically with visual feedback, and haptically without visual feedback. The matching errors were the same in all three matching conditions. Horizontal and vertical orientations were matched correctly, but systematic errors were made for the oblique orientations. The errors depended on the viewing position from which the stimuli were seen, and on the distance of the stimulus from the observer.

5.
A pitched visual inducer has a strong effect on the visually perceived elevation of a target in extrapersonal space, and also on the elevation of the arm when a subject points with an unseen arm to the target’s elevation. The manual effect is a systematic function of hand-to-body distance (Li and Matin Vision Research 45:533–550, 2005): When the arm is fully extended, manual responses to perceptually mislocalized luminous targets are veridical; when the arm is close to the body, gross matching errors occur. In the present experiments, we measured this hand-to-body distance effect during the presence of a pitched visual inducer and after inducer offset, using three values of hand-to-body distance (0, 40, and 70 cm) and two open-loop tasks (pointing to the perceived elevation of a target at true eye level and setting the height of the arm to match the elevation). We also measured manual behavior when subjects were instructed to point horizontally under induction and after inducer offset (no visual target at any time). In all cases, the hand-to-body distance effect disappeared shortly after inducer offset. We suggest that the rapid disappearance of the distance effect is a manifestation of processes in the dorsal visual stream that are involved in updating short-lived representations of the arm in egocentric visual perception and manual behavior.

6.
Perceived finger span—the perceived spatial separation between the tip of the thumb and the tip of the index finger—was measured by using cross-modal matching to line length. In the first experiment, subjects adjusted finger span to match the length of line segments presented on a video monitor, and conversely, with both hands. Subjects also made estimates of finger span in physical units (“dead reckoning”). Finger spans were measured by using infrared LEDs mounted on the tip of the thumb and the finger tip, so the hand made no contact with any object during the experiment. Unlike in previous studies, the results suggest that perceived finger span is proportional to line length and slightly shorter than the actual span, provided that corrections are made for regression bias. The effect of finger contact was assessed in a second experiment by matching line length both to free span and to spans constrained by the pinching of blocks in the same session. The matching function when subjects were pinching blocks was accelerating, consistent with previous reports. In contrast, matched line length was a decelerating function of free span. The exponent of the free span matching function in the second experiment was slightly smaller than in the first experiment, probably due to uncorrected matching biases in the second experiment.

7.
We examined the hypothesis that angular errors in visually directed pointing, in which an unseen target is pointed to after its direction has been seen, are attributable to the difference between the locations of the visual and kinesthetic egocentres. Experiment 1 showed that in three of four cases, angular errors in visually directed pointing equaled those in kinesthetically directed pointing, in which a visual target was pointed to after its direction had been felt. Experiment 2 confirmed the results of Experiment 1 for targets at two different egocentric distances. Experiment 3 showed that when the kinesthetic egocentre was used as the reference of direction, angular errors in visually directed pointing equaled those in visually directed reaching, in which an unseen target is reached for after its location has been seen. These results suggest that in visually and kinesthetically directed pointing, the egocentric directions represented in visual space are transferred to kinesthetic space and vice versa.

8.
The experiments investigated how two adult captive chimpanzees learned to navigate in an automated interception task. They had to capture a visual target that moved predictably on a touch monitor. The aim of the study was to determine the learning stages that led to an efficient strategy of intercepting the target. The chimpanzees had prior training in moving a finger on a touch monitor and were exposed to the interception task without any explicit training. With a finger the subject could move a small "ball" at any speed on the screen toward a visual target that moved at a fixed speed either back and forth in a linear path or around the edge of the screen in a rectangular pattern. Initial ball and target locations varied from trial to trial. The subjects received a small fruit reinforcement when they hit the target with the ball. The speed of target movement was increased across training stages up to 38 cm/s. Learning progressed from merely chasing the target to intercepting the target by moving the ball to a point on the screen that coincided with arrival of the target at that point. Performance improvement consisted of reduction in redundancy of the movement path and reduction in the time to target interception. Analysis of the finger's movement path showed that the subjects anticipated the target's movement even before it began to move. Thus, the subjects learned to use the target's initial resting location at trial onset as a predictive signal for where the target would later be when it began moving. During probe trials, where the target unpredictably remained stationary throughout the trial, the subjects first moved the ball in anticipation of expected target movement and then corrected the movement to steer the ball to the resting target. Anticipatory ball movement in probe trials with novel ball and target locations (tested for one subject) showed generalized interception beyond the trained ball and target locations. 
The experiments illustrate in a laboratory setting the development of a highly complex and adaptive motor performance that resembles navigational skills seen in natural settings where predators intercept the path of moving prey. Electronic Supplementary Material: Supplementary material is available for this article if you access the article at . A link in the frame on the left of that page takes you directly to the supplementary material.
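The interception strategy the subjects converged on — steering the ball to a point that coincides with the target's future arrival — can be sketched with simple constant-velocity geometry. This is an illustrative computation only, not the model used in the study; the function name and all values are assumptions.

```python
import math

def interception_point(ball, target, target_vel, ball_speed):
    """Earliest point where a ball moving at constant ball_speed can meet
    a target moving with constant 2-D velocity target_vel.

    Solves |target + target_vel * t - ball| = ball_speed * t for the
    smallest positive t; returns None if the target can never be caught.
    Illustrative geometry only -- not fitted to the chimpanzee data.
    """
    dx, dy = target[0] - ball[0], target[1] - ball[1]
    vx, vy = target_vel
    a = vx * vx + vy * vy - ball_speed ** 2
    b = 2 * (dx * vx + dy * vy)
    c = dx * dx + dy * dy
    if abs(a) < 1e-12:                      # equal speeds: equation is linear
        t = -c / b if b else None
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None                     # no real meeting time
        roots = [(-b - math.sqrt(disc)) / (2 * a),
                 (-b + math.sqrt(disc)) / (2 * a)]
        positive = [t for t in roots if t > 0]
        t = min(positive) if positive else None
    if t is None or t <= 0:
        return None
    return (target[0] + vx * t, target[1] + vy * t)
```

A pursuer that heads straight for this point, rather than chasing the target's current position, produces exactly the shortening of the movement path that the training stages described above exhibit.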

9.
Numerous studies showed that the simultaneous execution of multiple actions is associated with performance costs. Here, we demonstrate that when highly automatic responses are involved, performance in single-response conditions can actually be worse than in dual-response conditions. Participants responded to peripheral visual stimuli with an eye movement (saccade), a manual key press, or both. To manipulate saccade automaticity, a central fixation cross either remained present throughout the trial (overlap condition, lower automaticity) or disappeared 200 ms before visual target onset (gap condition, greater automaticity). Crucially, single-response conditions yielded more performance errors than dual-response conditions (i.e., dual-response benefit), especially in gap trials. This was due to difficulties associated with inhibiting saccades when only manual responses were required, suggesting that response inhibition (remaining fixated) can be even more resource-demanding than overt response execution (saccade to peripheral target).

10.
When observers are asked to localize the onset or the offset position of a moving target, they typically make localization errors in the direction of movement. Similarly, when observers judge a moving target that is presented in alignment with a flash, the target appears to lead the flash. These errors are known as the Fröhlich effect, representational momentum, and flash-lag effect, respectively. This study compared the size of the three mislocalization errors. In Experiment 1, a flash appeared either simultaneously with the onset, the mid-position, or the offset of the moving target. Observers then judged the position where the moving target was located when the flash appeared. Experiments 2 and 3 are exclusively concerned with localizing the onset and the offset of the moving target. When observers localized the position with respect to the point in time when the flash was presented, a clear mislocalization in the direction of movement was observed at the initial position and the mid-position. In contrast, a mislocalization opposite to movement direction occurred at the final position. When observers were asked to ignore the flash (or when no flash was presented at all), a reduced error (or no error) was observed at the initial position and only a minor error in the direction of the movement occurred at the final position. An integrative model is proposed, which suggests a common underlying mechanism, but emphasizes the specific processing components of the Fröhlich effect, flash-lag effect, and representational momentum.

11.
Past research has revealed that central vision is more important than peripheral vision in controlling the amplitude of target-directed aiming movements. However, the extent to which central vision contributes to movement planning versus online control is unclear. Since participants usually fixate the target very early in the limb trajectory, the limb enters the central visual field during the late stages of movement. Hence, there may be insufficient time for central vision to be processed online to correct errors during movement execution. Instead, information from central vision may be processed offline and utilised as a form of knowledge of results, enhancing the programming of subsequent trials. In the present research, variability in limb trajectories was analysed to determine the extent to which peripheral and central vision is used to detect and correct errors during movement execution. Participants performed manual aiming movements of 450 ms under four different visual conditions: full vision, peripheral vision, central vision, no vision. The results revealed that participants utilised visual information from both the central and peripheral visual fields to adjust limb trajectories during movement execution. However, visual information from the central visual field was used more effectively to correct errors online compared to visual information from the peripheral visual field.

12.
Observers tend to localize the final position of a suddenly vanished moving target farther along in the direction of the target motion (representational momentum). We report here that such localization errors are mediated by perceived motion rather than by retinal motion. By manipulating the cast shadow of a moving target, we induced illusory motion to a target stimulus while keeping the retinal motion constant. Participants indicated the vanishing point of the target by directing a mouse cursor. The resulting magnitude of localization errors was modulated on the basis of the induced direction of the target. Such systematic localization biases were not obtained in a control condition in which the motion paths of the ball and shadow were switched. Our results suggest that cues to object motion trajectory, such as cast shadows, are used for the localization task, supporting a view that a predictive mechanism is responsible for the production of localization errors.

13.
Normal human subjects were required to manually point to small visual targets that suddenly changed location upon finger movement initiation. They pointed either as fast or as accurately as possible. Movements of the eyes were measured by electrooculography, and the movements of the unrestrained limb and head were monitored by an optoelectric system (WATSMART), which allowed for the analysis of kinematic parameters in three-dimensional space. The temporal and kinematic reorganization of each body part in response to the target perturbations were variable, which indicated independent control for each part of the system. That is, the timing and nature of the reorganization varied for each body part. In addition, the pattern of reorganization depended upon the speed and accuracy demands of the movement task. As well, the movement termination patterns (eyes finished first, the finger reached the target, then the head stopped moving) were extremely consistent, indicating that movement termination may be a controlled variable. Finally, no evidence was found to suggest that visual information was used to amend arm movements early (before peak velocity) in the trajectory.

14.
Two experiments in which Ss matched a proprioceptively perceived stylus to a visual target in the para-median plane show that Ss make idiosyncratic errors which are stable over periods of at least several days. A technique for comparing visual and proprioceptive spaces is used to show how matching errors vary within the para-median plane. These errors are interpreted as the result of inadequate inter-calibration of visual and proprioceptive space perception, and their implications for movement control are discussed.

15.
Executed bimanual movements are prepared slower when moving to symbolically different than when moving to symbolically same targets and when targets are mapped to target locations in a left/right fashion than when they are mapped in an inner/outer fashion [Weigelt et al. (Psychol Res 71:238–447, 2007)]. We investigated whether these cognitive bimanual coordination constraints are observable in motor imagery. Participants performed fast bimanual reaching movements from start to target buttons. Symbolic target similarity and mapping were manipulated. Participants performed four action conditions: one execution and three imagination conditions. In the latter they indicated starting, ending, or starting and ending of the movement. We measured movement preparation (RT), movement execution (MT) and the combined duration of movement preparation and execution (RTMT). In all action conditions RTs and MTs were longer in movements towards different targets than in movements towards same targets. Further, RTMTs were longer when targets were mapped to target locations in a left/right fashion than when they were mapped in an inner/outer fashion, again in all action conditions. RTMTs in imagination and execution were similar, apart from the imagination condition in which participants indicated the start and the end of the movement. Here MTs, but not RTs, were longer than in the execution condition. In conclusion, cognitive coordination constraints are present in the motor imagery of fast (<1600 ms) bimanual movements. Further, alternations between inhibition and execution may prolong the duration of motor imagery.

16.
We investigated the effect of unseen hand posture on cross-modal, visuo-tactile links in covert spatial attention. In Experiment 1, a spatially nonpredictive visual cue was presented to the left or right hemifield shortly before a tactile target on either hand. To examine the spatial coordinates of any cross-modal cuing, the unseen hands were either uncrossed or crossed so that the left hand lay to the right and vice versa. Tactile up/down (i.e., index finger/thumb) judgments were better on the same side of external space as the visual cue, for both crossed and uncrossed postures. Thus, which hand was advantaged by a visual cue in a particular hemifield reversed across the different unseen postures. In Experiment 2, nonpredictive tactile cues now preceded visual targets. Up/down judgments for the latter were better on the same side of external space as the tactile cue, again for both postures. These results demonstrate cross-modal links between vision and touch in exogenous covert spatial attention that remap across changes in unseen hand posture, suggesting a modulatory role for proprioception.

17.
The timing of natural prehension movements (cited 39 times)
Prehension movements were studied by film in 7 adult subjects. Transportation of the hand to the target-object location had features very similar to any aiming arm movement, that is, it involved a fast-velocity initial phase and a low-velocity final phase. The peak velocity of the movement was highly correlated with its amplitude, although total movement duration tended to remain invariant when target distance was changed. The low-velocity phase consistently began after about 75% of movement time had elapsed. This ratio was maintained for different movement amplitudes. Formation of the finger grip occurred during hand transportation. Fingers were first stretched and then began to close in anticipation of contact with the object. The onset of the closure phase was highly correlated with the beginning of the low-velocity phase of transportation. This pattern for both transportation and finger grip formation was maintained whether or not visual feedback from the moving limb was present. Implications of these findings for the central programming of multisegmental movements are discussed.

18.
Two experiments investigated the ability of subjects to identify a moving, tactile stimulus. In both experiments, the subjects were presented with a target to their left index fingerpad and a nontarget (also moving) to their left middle fingerpad. Subjects were instructed to attend only to the target location and to respond "1" if the stimulus moved either to the left or up the finger, and to respond "2" if the stimulus moved either right or down the finger. The results showed that accuracy was better and reaction times were faster when the target and nontarget moved in the same direction than when they moved in different directions. When the target and nontarget moved in different directions, accuracy was significantly better and reaction times were significantly faster when the two stimuli had the same assigned response than when they had different responses. The results provide support for the conclusion that movement information is processed across adjacent fingers to the level of incipient response activation, even when subjects attempt to focus their attention on one location on the skin.

19.
Two experiments investigated the ability of subjects to identify a moving, tactile stimulus. In both experiments, the subjects were presented with a target to their left index fingerpad and a nontarget (also moving) to their left middle fingerpad. Subjects were instructed to attend only to the target location and to respond “1” if the stimulus moved either to the left or up the finger, and to respond “2” if the stimulus moved either right or down the finger. The results showed that accuracy was better and reaction times were faster when the target and nontarget moved in the same direction than when they moved in different directions. When the target and nontarget moved in different directions, accuracy was significantly better and reaction times were significantly faster when the two stimuli had the same assigned response than when they had different responses. The results provide support for the conclusion that movement information is processed across adjacent fingers to the level of incipient response activation, even when subjects attempt to focus their attention on one location on the skin.

20.
When moving to grasp an object having adjacent obstacles that limit the space available for placing the fingers, the time for the reach/grasp is dependent on the distance of reaching and the space available for finger placement. Here we model the time taken in terms of these variables and develop mathematical models for the reach and grasp phases of the task and the location of obstacles. Data show that the movement to the target may be made under visual control and that, when the obstacles are close to the target object, a visually-controlled movement is made that is modeled by a modified form of Fitts' law. The times for the two components of the reach/grasp appear to be independent and linearly additive.
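For reference, the standard (Shannon) formulation of Fitts' law predicts movement time from reach distance and target width. The sketch below is a generic illustration with made-up regression constants, not the modified form developed in the study above.

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Movement time (s) under Fitts' law, Shannon formulation.

    a and b are illustrative regression constants, not values fitted in
    the study; distance and width must share the same length unit.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

# Movement time grows with reach distance and with shrinking target width
# (here, width stands in for the finger space left by the obstacles):
assert fitts_mt(distance=30, width=1) > fitts_mt(distance=10, width=4)
```

In the reach/grasp task above, obstacle proximity effectively narrows the usable target width, which is why a Fitts-type relation captures the slowing of the visually controlled phase.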


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号