Similar Documents
20 similar documents found (search time: 31 ms).
1.
The contribution of retinal flow (RF), extraretinal (ER), and egocentric visual direction (VD) information in locomotor control was explored. First, the recovery of heading from RF was examined when ER information was manipulated; results confirmed that ER signals affect heading judgments. Then the task was translated to steering curved paths, and the availability and veracity of VD were manipulated with either degraded or systematically biased RF. Large steering errors resulted from selective manipulation of RF and VD, providing strong evidence for the combination of RF, ER, and VD. The relative weighting applied to RF and VD was estimated. A point-attractor model is proposed that combines redundant sources of information for robust locomotor control with flexible trajectory planning through active gaze.
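The weighted combination of RF and VD described in this abstract can be sketched as a simple linear cue-fusion rule. The function name, the weight value, and the heading angles below are illustrative assumptions, not the paper's fitted parameters.

```python
# Minimal sketch of linear cue fusion (illustrative weights, not fitted values).
def combine_heading(rf_estimate, vd_estimate, w_rf):
    """Weighted average of retinal-flow (RF) and visual-direction (VD)
    heading estimates, in degrees; w_rf in [0, 1] is the RF weight."""
    if not 0.0 <= w_rf <= 1.0:
        raise ValueError("w_rf must lie in [0, 1]")
    return w_rf * rf_estimate + (1.0 - w_rf) * vd_estimate

# A biased RF signal (here +10 deg) pulls the combined estimate toward
# the bias in proportion to its weight, mirroring the systematic steering
# errors produced by selectively biasing one cue.
print(combine_heading(10.0, 0.0, 0.6))
```

On this account, degrading one cue shifts weight to the other, which is how a redundant combination stays robust when a single source becomes unreliable.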

2.
To catch a lofted ball, a catcher must pick up information that guides locomotion to where the ball will land. The acceleration of the tangent of the elevation angle of the ball (AT) has received empirical support as a possible source of this information. Little, however, has been said about how the information is detected. Do catchers fixate on a stationary point, or do they track the ball with their gaze? Experiment 1 revealed that catchers use eye and head movements to track the ball. This means that if AT is picked up retinally, it must be done by means of background motion. Alternatively, AT could be picked up by extraretinal mechanisms, such as the vestibular and proprioceptive systems. In Experiment 2, catchers reliably ran to intercept luminous fly balls in the dark, that is, in the absence of a visual background, under both binocular and monocular viewing conditions. This indicates that the optical information is not detected by a retinal mechanism alone.
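The AT quantity in this abstract is the second time derivative of tan(elevation angle). A numerical sketch under the standard drag-free ballistic assumption: if the catcher stands exactly at the landing point, tan(alpha) grows linearly and AT is zero; standing short of the landing point makes AT positive. All parameter values below are illustrative.

```python
import math

def tan_elevation(t, catcher_x, vx=10.0, vz=14.7, g=9.8):
    """tan of the ball's elevation angle as seen by a catcher at catcher_x
    (ball launched from the origin; drag-free flight is assumed)."""
    bx = vx * t
    bz = vz * t - 0.5 * g * t * t
    return bz / (catcher_x - bx)

def AT(t, catcher_x, dt=1e-3):
    """Second finite difference of tan(elevation): the 'AT' quantity."""
    f = lambda u: tan_elevation(u, catcher_x)
    return (f(t + dt) - 2 * f(t) + f(t - dt)) / dt**2

landing_x = 10.0 * (2 * 14.7 / 9.8)   # vx * flight time = 30 m
print(abs(AT(1.0, landing_x)))        # near zero at the landing point
print(AT(1.0, landing_x - 5.0))       # positive when standing short
```

Moving so as to cancel AT therefore brings the catcher to the landing point, which is why AT is a candidate informational basis for interception.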

3.
Although there is much research on infants' ability to orient in space, little is known regarding the information they use to do so. This research uses a rotating room to evaluate the relative contribution of visual and vestibular information to location of a target following bodily rotation. Adults responded precisely on the basis of visual flow information. Seven-month-olds responded mostly on the basis of visual flow, whereas 9-month-olds responded mostly on the basis of vestibular information, and 12-month-olds responded mostly on the basis of visual information. Unlike adults, infants of all ages showed partial influence by both modalities. Additionally, 7-month-olds were capable of using vestibular information when there was no visual information for movement or stability, and 9-month-olds still relied on vestibular information when visual information was enhanced. These results are discussed in the context of neuroscientific evidence regarding visual-vestibular interaction, and in relation to possible changes in reliance on visual and vestibular information following acquisition of locomotion.

4.
Robust control of skilled actions requires the flexible combination of multiple sources of information. Here we examined the role of gaze during high-speed locomotor steering and in particular the role of feedback from the visible road edges. Participants were required to maintain one of three lateral positions on the road when one or both edges were degraded (either by fading or removing them). Steering became increasingly impaired as road edge information was degraded, with gaze being predominantly directed toward the required road position. When either of the road edges was removed, we observed systematic shifts in steering and gaze direction dependent upon both the required road position and the visible edge. A second experiment required fixation on the road center or beyond the road edges. The results showed that the direction of gaze led to predictable steering biases, which increased as road edge information became degraded. A new steering model demonstrates that the direction of gaze and both road edges influence steering in a manner consistent with the flexible weighted combination of near road feedback information and prospective gaze information.
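The "flexible weighted combination" of near-road feedback and prospective gaze information can be sketched as a two-term steering law. The function, gains, and edge-visibility weight below are assumptions for illustration, not the paper's actual model or fitted parameters.

```python
# Illustrative two-term steering law (k_near, k_gaze, w_edges are assumed
# names and values, not the published model's parameters).
def steering_rate(road_offset_error, gaze_angle, k_near=0.5, k_gaze=1.2,
                  w_edges=1.0):
    """Steering command from near-road feedback (lateral error to the
    required road position, scaled by how visible the edges are) plus a
    prospective term that turns toward the current gaze direction."""
    return k_near * w_edges * road_offset_error + k_gaze * gaze_angle

# Degrading the road edges (w_edges -> 0) leaves steering dominated by
# gaze direction, consistent with the gaze-driven biases described above.
print(steering_rate(0.3, 0.1, w_edges=1.0))
print(steering_rate(0.3, 0.1, w_edges=0.0))
```

Under this sketch, a fixation held off the required path injects a constant bias through the gaze term, and the bias grows as the feedback term loses its input.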

5.
By systematically varying cue availability in the stimulus and response phases of a series of same-modality and cross-modality distance-matching tasks, we examined the contributions of static visual information, idiothetic information, and optic flow information. The experiment was conducted in a large-scale, open, outdoor environment. Subjects were presented with information about a distance and were then required to turn 180° before producing a distance estimate. Distance encoding and responding occurred via (i) visually perceived target distance, or (ii) traversed distance through either blindfolded or sighted locomotion. The results demonstrated that subjects performed with similar accuracy across all conditions. In conditions in which the stimulus and the response were delivered in the same mode, constant error was minimal when visual information was absent, whereas overestimation was observed when visual information was present. In conditions in which the stimulus and response modes differed, a consistent error pattern was observed. By systematically comparing complementary conditions, we found that the availability of visual information during locomotion (particularly optic flow) led to an 'under-perception' of movement relative to conditions in which visual information was absent during locomotion.

6.
The goal of this study was to examine the effects of visual roll tilt on gaze and riding behavior when negotiating a bend using a motorcycle simulator. To this end, experienced motorcyclists rode along a track with a series of right and left turns whilst the degree of visual roll tilt was manipulated in three different conditions. Gaze behavior was analyzed by using the tangent point as a dynamic spatial reference; the deviation of gaze to this particular point was computed in both the horizontal and vertical directions. Steering control was assessed in terms of the lateral positioning, steering stability and number of lane departures. In the no-roll condition, the motorcyclists tracked a steering point on the road ahead, which was compatible with the hypothesis of “steer where you look” behavior. In the roll condition, our results revealed that the horizontal distribution of gaze points relative to the tangent point was preserved. However, significantly more fixations were made further ahead of the tangent point in the vertical direction. This modification of visual behavior was coupled with a degradation in steering stability and an offset in lateral positioning, which sometimes led to lane departures. These results are discussed with regard to models of visual control of steering for bend negotiation.

7.
Active gaze, visual look-ahead, and locomotor control
The authors examined observers steering through a series of obstacles to determine the role of active gaze in shaping locomotor trajectories. Participants sat on a bicycle trainer integrated with a large field-of-view simulator and steered through a series of slalom gates. Steering behavior was determined by examining the passing distance through gates and the smoothness of trajectory. Gaze monitoring revealed which slalom targets were fixated and for how long. Participants tended to track the most immediate gate until it was about 1.5 s away, at which point gaze switched to the next slalom gate. To probe this gaze pattern, the authors then introduced a number of experimental conditions that placed spatial or temporal constraints on where participants could look and when. These manipulations resulted in systematic steering errors when observers were forced to use unnatural looking patterns, but errors were reduced when peripheral monitoring of obstacles was allowed. A steering model based on active gaze sampling is proposed, informed by the experimental conditions and consistent with observations in free-gaze experiments and with recommendations from real-world high-speed steering.

8.
How do we determine where we are heading during visually controlled locomotion? Psychophysical research has shown that humans are quite good at judging their travel direction, or heading, from retinal optic flow. Here we show that retinal optic flow is sufficient, but not necessary, for determining heading. By using a purely cyclopean stimulus (random dot cinematogram), we demonstrate heading perception without retinal optic flow. We also show that heading judgments are equally accurate for the cyclopean stimulus and a conventional optic flow stimulus, when the two are matched for motion visibility. The human visual system thus demonstrates flexible, robust use of available visual cues for perceiving heading direction.

9.
How do people control locomotion while their eyes are simultaneously rotating? A previous study found that during simulated rotation, they can perceive a straight path of self-motion from the retinal flow pattern, despite conflicting extraretinal information, on the basis of dense motion parallax and reference objects. Here we report that the same information is sufficient for active control of joystick steering. Participants steered toward a target in displays that simulated a pursuit eye movement. Steering was highly inaccurate with a textured ground plane (motion parallax alone), but quite accurate when an array of posts was added (motion parallax plus reference objects). This result is consistent with the theory that instantaneous heading is determined from motion parallax, and the path of self-motion is determined by updating heading relative to environmental objects. Retinal flow is thus sufficient for both perceiving self-motion and controlling self-motion with a joystick; extraretinal and positional information can also contribute, but are not necessary.

10.
The processing of visual and vestibular information is crucial for perceiving self-motion. Visual cues, such as optic flow, have been shown to induce and alter vestibular percepts, yet the role of vestibular information in shaping visual awareness remains unclear. Here we investigated whether vestibular signals influence the access to awareness of invisible visual signals. Using natural vestibular stimulation (passive yaw rotations) on a vestibular self-motion platform, and optic flow masked through continuous flash suppression (CFS), we tested whether congruent visual–vestibular information would break interocular suppression more rapidly than incongruent information. We found that when the unseen optic flow was congruent with the vestibular signals, perceptual suppression (as quantified with the CFS paradigm) was broken more rapidly than when it was incongruent. We argue that vestibular signals impact the formation of visual awareness through enhanced access to awareness for congruent multisensory stimulation.

11.
Driver distraction has become a major concern for transportation safety due to increasing use of infotainment systems in vehicles. To reduce safety risks, it is crucial to understand how fundamental aspects of distracting activities affect driver behavior at different levels of vehicle control. This study used a simulator-based experiment to assess the effects of visual, cognitive and simultaneous distraction on operational (braking, accelerating) and tactical (maneuvering) control of vehicles. Twenty drivers participated in the study and drove in lead-car following or passing scenarios under four distraction conditions: without distraction, with visual distraction, with cognitive distraction, and with simultaneous distraction. Results revealed higher perceived workload for passing than following. Simultaneous distraction was most demanding and also resulted in the greatest steering errors among distraction conditions during both driving tasks. During passing, drivers also appeared to slow down their responses to secondary distraction tasks as workload increased. Visual distraction was associated with more off-road glances (to an in-vehicle device) and resulted in high workload. Longer headway times were also observed under visual distraction, suggesting driver adaptation to the workload. Similarly, cognitive distraction also increased driver workload but this demand did not translate into steering errors as high as for visual distraction. In general, findings indicate that tactical control of a vehicle demands more workload than operational control. Visual and cognitive distractions both increase driver workload, but they influence vehicle control and gaze behavior in different ways.

12.
J. R. Lackner & P. DiZio, Perception, 1988, 17(1), 71-80
When a limb is used for locomotion, patterns of afferent and efferent activity related to its own motion are present as well as visual, vestibular, and other proprioceptive information about motion of the whole body. A study is reported in which it was asked whether visual stimulation present during whole-body motion can influence the perception of the leg movements propelling the body. Subjects were tested in conditions in which the stepping movements they made were identical but the amount of body displacement relative to inertial space and to the visual surround varied. These test conditions were created by having the subjects walk on a rotatable platform centered inside a large, independently rotatable, optokinetic drum. In each test condition, subjects, without looking at their legs, compared their speed of body motion, their stride length and stepping rate, the direction of their steps, and the perceived force they exerted during stepping against a standard condition in which the floor and drum were both stationary. When visual surround motion was incompatible with the motion normally associated with the stepping movements being made, changes in apparent body motion and in the awareness of the frequency, extent, and direction of the voluntary stepping movements resulted.

13.
When describing visual scenes, speakers typically gaze at objects while preparing their names. In a study of the relation between eye movements and speech, a corpus of self-corrected speech errors was analyzed. If errors result from rushed word preparation, insufficient visual information, or failure to check prepared names against objects, speakers should spend less time gazing at referents before uttering errors than before uttering correct names. Counter to predictions, gazes to referents before errors (e.g., gazes to an axe before saying "ham-" [hammer]) highly resembled gazes to referents before correct names (e.g., gazes to an axe before saying "axe"). However, speakers gazed at referents for more time after initiating erroneous compared with correct names, apparently while they prepared corrections. Assuming that gaze nonetheless reflects word preparation, errors were not associated with insufficient preparation. Nor were errors systematically associated with decreased inspection of objects. Like gesture, gaze may accurately reflect a speaker's intentions even when the accompanying speech does not.

14.
Visual control of locomotion
Abstract. How is locomotion controlled? What information is necessary and how is it used? It is first of all argued that the classical twofold division of information into exteroceptive and proprioceptive is inadequate and confusing, that three fundamental types of information need to be distinguished, and that the information is used in a continual process of formulating locomotor programs, monitoring their execution, and adjusting them. The paper goes on to show (1) how steering could be controlled on the basis of the “locomotor flow line” in the optic array, which specifies the potential future course, whether curved or straight, and (2) how stopping for an obstacle could be controlled simply on the basis of the information in the optic array about the time-to-collision and its rate of change. The problems inherent in pedestrian locomotion of controlling footing and balance are discussed, and an investigation of visual locomotor programming in the long jump is reported.
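The time-to-collision information this abstract refers to is available optically as the ratio of an object's optical angle to its rate of expansion (commonly called tau); under constant approach speed it equals distance over speed without either being known separately. A small-angle sketch, with all numbers illustrative:

```python
def tau(theta, theta_dot):
    """Optical time-to-collision: current optical angle divided by its
    rate of expansion (valid for constant approach speed)."""
    return theta / theta_dot

# For an object of physical size s at distance d, approached at speed v,
# the small-angle optical variables give tau = d / v, here 4 s, without
# the observer needing to recover d or v individually.
s, d, v = 0.5, 20.0, 5.0
theta = s / d                 # small-angle optical size (radians)
theta_dot = s * v / d**2      # rate of optical expansion
print(tau(theta, theta_dot))
```

The point of the optical formulation is exactly what the abstract exploits: stopping can be timed from a single monocular variable and its rate of change.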

15.
In this study, we examined the effects of different gaze types (stationary fixation, directed looking, or gaze shifting) and gaze eccentricities (central or peripheral) on the vection induced by jittering, oscillating, and purely radial optic flow. Contrary to proposals of eccentricity independence for vection (e.g., Post, 1988), we found that peripheral directed looking improved vection and peripheral stationary fixation impaired vection induced by purely radial flow (relative to central gaze). Adding simulated horizontal or vertical viewpoint oscillation to radial flow always improved vection, irrespective of whether instructions were to fixate, or look at, the center or periphery of the self-motion display. However, adding simulated high-frequency horizontal or vertical viewpoint jitter was found to increase vection only when central gaze was maintained. In a second experiment, we showed that alternating gaze between the center and periphery of the display also improved vection (relative to stable central gaze), with greater benefits observed for purely radial flow than for horizontally or vertically oscillating radial flow. These results suggest that retinal slip plays an important role in determining the time course and strength of vection. We conclude that how and where one looks in a self-motion display can significantly alter vection by changing the degree of retinal slip.

16.
The utilization of static and kinetic information for depth by Malaysian children and young adults in making monocular relative size judgments was investigated. Subjects viewed pairs of objects or photographic slides of the same pairs and judged which was the larger of each pair. The sizes and positions of the objects were manipulated such that the more distant object subtended a visual angle equal to, 80% of, or 70% of the nearer object. Motion parallax information was manipulated by allowing or preventing head movement. All subjects displayed sensitivity to static information for depth when the two objects subtended equal visual angles. When the more distant object was larger but subtended a smaller visual angle than the nearer object, subjects tended to base their judgments on retinal size. Motion parallax information increased accuracy of judgments of three-dimensional displays but reduced accuracy of judgments of pictorial displays. Comparisons are made between these results and those for American subjects.
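The equal-visual-angle manipulation above rests on basic geometry: the visual angle is θ = 2·atan(size / (2·distance)), so a farther, physically larger object can project the same retinal size as a nearer, smaller one. A small sketch with illustrative sizes and distances:

```python
import math

def visual_angle(size, distance):
    """Visual angle (radians) subtended by an object of a given size
    viewed frontally at a given distance."""
    return 2.0 * math.atan(size / (2.0 * distance))

# A 0.2 m object at 2 m and a 0.4 m object at 4 m subtend the same angle,
# so monocular retinal size alone cannot distinguish them; depth
# information is required to judge which is physically larger.
near = visual_angle(0.2, 2.0)
far = visual_angle(0.4, 4.0)
print(math.isclose(near, far))
```

This is the ambiguity the study's 80% and 70% conditions exploit: making the distant, larger object subtend a smaller angle puts retinal size and true size in conflict.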

17.
Several recent theories of visual information processing have postulated that errors in recognition may result not only from a failure in feature extraction, but also from a failure to correctly join features after they have been correctly extracted. Errors that result from incorrectly integrating features are called conjunction errors. The present study uses conjunction errors to investigate the principles used by the visual system to integrate features. The research tests whether the visual system is more likely to integrate features located close together in visual space (the location principle) or whether the visual system is more likely to integrate features from stimulus items that come from the same perceptual group or object (the perceptual group principle). In four target-detection experiments, stimuli were created so that feature integration by the location principle and feature integration by the perceptual group principle made different predictions for performance. In all of the experiments, the perceptual group principle predicted feature integration even though the distance between stimulus items and retinal eccentricity were strictly controlled.

18.
Visually guided locomotion was studied in an experiment in which human subjects (N = 8) had to accurately negotiate a series of irregularly spaced stepping-stones while infrared reflectometry and electrooculography were used to continuously record their eye movements. On average, 68% of saccades made toward the next target of footfall had been completed (visual target capture had occurred) while the foot to be positioned was still on the ground; the remainder were completed in the first 300 ms of the swing phase. The subjects' gaze remained fixed on a target, on average, until 51 ms after making contact with it, with little variation. A greater amount of variation was seen in the timing of trailing footlift relative to visual target capture. Assuming that subjects sampled the visual cues as and when they were required, visual information appeared most useful when the foot to be positioned was still on the ground.

19.
The relation between perceptual information and the motor response during lane-change manoeuvres was studied in a fixed-base driving simulator. Eight subjects performed 48 lane changes with varying vehicle speed, lane width and direction of movement. Three sequential phases of the lane change manoeuvre are distinguished. During the first phase the steering wheel is turned to a maximum angle. After this the steering wheel is turned in the opposite direction. The second phase ends when the vehicle heading approaches a maximum that generally occurs at the moment the steering wheel angle passes through zero. During the third phase the steering wheel is turned to a second maximum steering wheel angle in the opposite direction to stabilize the vehicle in the new lane. Durations of the separate phases were analysed together with steering amplitudes and Time-to-Line Crossing in order to test whether and how drivers use the outcome of each phase during the lane change manoeuvre to adjust the way the subsequent phase is executed. During the first phase the time margin to the outer lane boundary was controlled by the driver such that a higher speed was compensated for by a smaller steering wheel amplitude. Due to this mechanism the time margin to the lane boundary was not affected by vehicle speed. During the second phase the speed with which the steering wheel was turned in the opposite direction was affected by the time margins to the lane boundary at the start of the second phase. Thereafter, smaller minimum time margins were compensated for by a larger steering wheel amplitude in the opposite direction. The results suggest that steering actions are controlled by the outcome of previous actions in such a way that safety margins are maintained. The results also suggest that visual feedback is used by the driver during lane change manoeuvres to control steering actions, resulting in flexible and adaptive steering behaviour.
Evidence is presented in support of the idea that temporal information on the relation between the vehicle and lane boundaries is used by the driver in order to control the motor response.
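The Time-to-Line Crossing (TLC) measure used in this study is, in its simplest straight-path form, the lateral distance to the lane boundary divided by the current lateral closing speed. A first-order sketch (full TLC formulations also account for heading angle and path curvature; the numbers below are illustrative):

```python
def time_to_line_crossing(lateral_distance, lateral_speed):
    """Simplest straight-path TLC: time until the wheel reaches the lane
    boundary at the current lateral closing speed (a first-order
    approximation; full TLC also uses heading and path curvature)."""
    if lateral_speed <= 0.0:
        return float("inf")   # moving away from, or parallel to, the boundary
    return lateral_distance / lateral_speed

# 0.8 m of margin closed at 0.4 m/s leaves a 2 s time margin; the study's
# finding is that drivers scale steering amplitude to keep such margins
# roughly constant across speeds.
print(time_to_line_crossing(0.8, 0.4))
```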

20.
In the present study we assessed the use of landmarks and scene layout information for the control of locomotion. Observers were presented displays simulating forward locomotion through a random dot field with the horizontal position perturbed by a sum-of-sines function and were asked to steer and null out the horizontal disturbance of the path of locomotion. The results indicate greater control gain and accuracy when presented with a repeating layout of landmarks as compared to a changing layout of landmarks. Debriefing responses suggest that observers may have implicitly learned the layout of the repeating pattern. These results suggest that observers use an allocentric representation of the scene for steering control. A model for the control of locomotion is discussed that utilizes both scene-based information and optic flow.
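The perturb-and-null paradigm this abstract uses can be sketched as a sum-of-sines disturbance cancelled by a proportional controller; "control gain" then shows up directly as how much of the disturbance the controller nulls. The frequencies, amplitudes, and gains below are illustrative assumptions, not the study's actual spectrum or fitted gains.

```python
import math

def disturbance(t, components=((0.1, 1.0), (0.23, 0.6), (0.41, 0.3))):
    """Sum-of-sines lateral perturbation; the (frequency Hz, amplitude)
    pairs are illustrative, not the study's actual spectrum."""
    return sum(a * math.sin(2 * math.pi * f * t) for f, a in components)

def simulate(gain, dt=0.02, duration=60.0):
    """Proportional nulling of the disturbance; returns RMS lateral error."""
    pos, sq_err, n, t = 0.0, 0.0, 0, 0.0
    while t < duration:
        error = pos + disturbance(t)      # perceived horizontal deviation
        pos -= gain * error * dt          # steer to null it out
        sq_err += error * error
        n += 1
        t += dt
    return math.sqrt(sq_err / n)

# A higher effective control gain cancels more of the disturbance,
# which is the pattern reported for the repeating-landmark layout.
print(simulate(gain=0.5) > simulate(gain=4.0))
```

Sum-of-sines disturbances are standard in this paradigm because the known input frequencies let gain and phase be measured directly from the steering response.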
