Similar Articles (20 records found)
1.
By systematically varying cue availability in the stimulus and response phases of a series of same-modality and cross-modality distance matching tasks, we examined the contributions of static visual information, idiothetic information, and optic flow information. The experiment was conducted in a large-scale, open, outdoor environment. Subjects were presented with information about a distance and were then required to turn 180° before producing a distance estimate. Distance encoding and responding occurred via: (i) visually perceived target distance, or (ii) traversed distance through either blindfolded or sighted locomotion. The results demonstrated that subjects performed with similar accuracy across all conditions. In conditions in which the stimulus and the response were delivered in the same mode, when visual information was absent, constant error was minimal; whereas, when visual information was present, overestimation was observed. In conditions in which the stimulus and response modes differed, a consistent error pattern was observed. By systematically comparing complementary conditions, we found that the availability of visual information during locomotion (particularly optic flow) led to an 'under-perception' of movement relative to conditions in which visual information was absent during locomotion.

2.
Two experiments were conducted in order to assess the contribution of locomotor information to estimates of egocentric distance in a walking task. In the first experiment, participants were either shown, or led blind to, a target located at a distance ranging from 4 to 10 m and were then asked to indicate the distance to the target by walking to the location previously occupied by the target. Participants in both the visual and locomotor conditions were very accurate in this task and there was no significant difference between conditions. In the second experiment, a cue-conflict paradigm was used in which, without the knowledge of the participants, the visual and locomotor targets (the targets they were asked to walk to) were at two different distances. Most participants did not notice the conflict, but despite this their responses showed evidence that they had averaged the visual and locomotor inputs to arrive at a walked estimate of distance. Together, these experiments demonstrate that, although they showed poor awareness of their position in space without vision, in some conditions participants were able to use such nonvisual information to arrive at distance estimates as accurate as those given by vision.
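The averaging of visual and locomotor inputs described in this abstract can be sketched as a simple weighted cue-combination model. The equal weighting and the distances below are illustrative assumptions, not parameters fitted in the study.

```python
def combined_estimate(visual_d, locomotor_d, w_visual=0.5):
    """Weighted average of a visual and a locomotor distance cue (meters).

    w_visual is a free parameter; 0.5 (equal weighting) is assumed here
    purely for illustration.
    """
    return w_visual * visual_d + (1.0 - w_visual) * locomotor_d

# Cue conflict: visual target at 6 m, locomotor target at 8 m.
# An equal-weight observer walks a distance midway between the two cues.
walked = combined_estimate(6.0, 8.0)
print(walked)  # 7.0
```

Under this sketch, a participant who "did not notice the conflict" would still produce a walked distance intermediate between the two cues, which is the pattern the experiment reports.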

3.
This study investigated the relationship between limb apraxia, as assessed by a formal clinical test, and the production of spontaneous communicative gesture, as measured by a newly designed rating scale, the Nonvocal Communication Scale (NCS). Subjects were aphasic adult males with cerebrovascular lesions of the left hemisphere. The performance of aphasic patients on the praxis test and the NCS was independent of demographic, neuroanatomic, linguistic, and cognitive variables, except for global aphasics, who were low-scoring across the board. There was, however, a significant positive correlation between praxis ability and spontaneous gestural communication. Clinical implications of these findings are discussed.

4.
This article investigates vehicle steering control, focusing on the task of lane changing and the role of different sources of sensory feedback. Participants carried out 2 experiments in a fully instrumented, motion-based simulator. Despite the high level of realism afforded by the simulator, participants were unable to complete a lane change in the absence of visual feedback. When asked to produce the steering movements required to change lanes and turn a corner, participants produced remarkably similar behavior in each case, revealing a misconception of how a lane-change maneuver is normally executed. Finally, participants were asked to change lanes in a fixed-base simulator, in the presence of intermittent visual information. Normal steering behavior could be restored using brief but suitably timed exposure to visual information. The data suggest that vehicle steering control can be characterized as a series of unidirectional, open-loop steering movements, each punctuated by a brief visual update.

5.
During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled through use of a motorized chair that rotated the participant around his or her vertical axis. Chair rotation was made appropriate for the steering response of the participant or made inappropriate by rotating a proportion of the veridical amount. Large steering errors resulted from selective manipulation of retinal flow and gaze angle, and the pattern of errors provided strong evidence for an additive model of combination. Vestibular information had little or no effect on steering performance, suggesting that vestibular signals are not integrated with visual information for the control of steering at these speeds.

6.
A linear positioning task was used to examine the effects of visual and nonvisual inputs on motor learning. The experiment had three factors, each with two levels: sensory modality (visual vs. nonvisual), transfer at recall (changed vs. unchanged), and size of movement (25.4 cm vs. 50.8 cm). Three dependent variables were used: absolute error (AE), constant error (CE), and variable error (VE). The results suggest that visual dominance causes disruption of recall in the visual, changed conditions. No disruption of recall was found for the nonvisual condition, other than in terms of CE with respect to movement size. The results are taken to support Posner et al.'s (1976) theory of visual dominance, but some account of the spatial qualities of visual and kinesthetic information is needed.
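The three dependent measures named in this abstract have standard definitions in the motor-learning literature; a minimal sketch, using invented response values rather than the study's data, is:

```python
import statistics

def error_measures(responses, target):
    """Absolute error (AE), constant error (CE), and variable error (VE)
    for a set of positioning responses against a target extent."""
    errors = [r - target for r in responses]
    ae = statistics.fmean(abs(e) for e in errors)  # mean unsigned error
    ce = statistics.fmean(errors)                  # mean signed error (bias)
    ve = statistics.pstdev(errors)                 # variability around the bias
    return ae, ce, ve

# Four hypothetical reproductions of a 25.4 cm movement:
ae, ce, ve = error_measures([24.0, 26.0, 25.0, 27.0], target=25.4)
```

AE captures overall accuracy, CE the direction of bias (over- or undershooting), and VE the consistency of responses; the abstract's CE finding concerns bias, not overall accuracy.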

7.
Visual tasks can yield quantitatively similar patterns of performance that reflect different underlying mechanisms in younger and older observers. In 3 experiments, we used the visual masking task of J. T. Enns and V. Di Lollo (1997) to examine 2 of these mechanisms: stimulus contrast and attention. Performance appeared to be equivalent for younger and older observers in some circumstances, although manipulation of contrast and attention suggested that older observers may use focal attention to enhance the perceptual clarity of the target. For older observers, impoverished visual representations may more readily be eliminated by manipulation of attention or by the presence of a mask, indicating that both attention and stimulus quality are important influences on performance.

8.
The subjective visual vertical (SV) has been investigated in centrifugation experiments in order to test the hypothesis that changes in the SV are proportional to changes in the shear force acting on the utricles. The results deviate from those predicted by the shear hypothesis. This deviation can be accounted for by assuming that the SV is determined by the average of utricular and somaesthetic inputs. Thanks are due to A. Stein for performing some of the experiments of series d and e. N.J. Wade was supported by a Forschungsstipendium from the Alexander von Humboldt-Stiftung. Renate Alton skillfully assisted with the drawings.

9.
Previous work investigating the strategies that observers use to intercept moving targets has shown that observers maintain a constant target-heading angle (CTHA) to achieve interception. Most of this work has concluded or indirectly assumed that vision is necessary to do this. We investigated whether blindfolded pursuers chasing a ball carrier holding a beeping football would utilize the same strategy that sighted observers use to chase a ball carrier. Results confirm that both blindfolded and sighted pursuers use a CTHA strategy in order to intercept targets, whether jogging or walking and irrespective of football experience and path and speed deviations of the ball carrier during the course of the pursuit. This work shows that the mechanisms involved in intercepting moving targets may be designed to use different sensory mechanisms in order to drive behavior that leads to the same end result. This has potential implications for the supramodal representation of motion perception in the human brain.
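A constant target-heading angle (CTHA) strategy of the kind described above can be sketched in a toy pursuit simulation: the pursuer always moves at a fixed angular offset from the current bearing to the target. All numbers below (speeds, angle, capture radius) are illustrative assumptions, not values from the study.

```python
import math

def ctha_pursuit(pursuer, target, target_vel, speed, theta=0.3,
                 dt=0.1, steps=200, capture=0.5):
    """Pursuer keeps a constant angle theta (radians) between its heading
    and the current bearing to the target; returns True on interception."""
    px, py = pursuer
    tx, ty = target
    vx, vy = target_vel
    for _ in range(steps):
        bearing = math.atan2(ty - py, tx - px)
        heading = bearing - theta  # constant target-heading angle
        px += speed * math.cos(heading) * dt
        py += speed * math.sin(heading) * dt
        tx += vx * dt
        ty += vy * dt
        if math.hypot(tx - px, ty - py) < capture:
            return True
    return False

# A 2 m/s pursuer holding a 0.3 rad offset closes on a 1 m/s target 10 m away.
caught = ctha_pursuit((0.0, 0.0), (0.0, 10.0), (1.0, 0.0), speed=2.0)
```

Because the pursuer's along-the-line-of-sight speed (speed × cos θ) exceeds the target's speed, the gap shrinks every step regardless of the target's path, which is why the strategy works without predicting the target's trajectory.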

10.
11.
We investigated the role of two kinds of attention—visual and central attention—for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention—visual or central—was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.

12.
Spontaneous gesture frequently accompanies speech. The question is why. In these studies, we tested two non‐mutually exclusive possibilities. First, speakers may gesture simply because they see others gesture and learn from this model to move their hands as they talk. We tested this hypothesis by examining spontaneous communication in congenitally blind children and adolescents. Second, speakers may gesture because they recognize that gestures can be useful to the listener. We tested this hypothesis by examining whether speakers gesture even when communicating with a blind listener who is unable to profit from the information that the hands convey. We found that congenitally blind speakers, who had never seen gestures, nevertheless gestured as they spoke, conveying the same information and producing the same range of gesture forms as sighted speakers. Moreover, blind speakers gestured even when interacting with another blind individual who could not have benefited from the information contained in those gestures. These findings underscore the robustness of gesture in talk and suggest that the gestures that co‐occur with speech may serve a function for the speaker as well as for the listener.

13.
Relations between infant visual recognition memory and later cognition have fueled interest in identifying the underlying cognitive components of this important infant ability. The present large-scale study examined three promising factors in this regard: processing speed, short-term memory capacity, and attention. Two of these factors, attention and processing speed (but, surprisingly, not short-term memory capacity), were related to visual recognition memory: Infants who showed better attention (shorter looks and more shifts) and faster processing had better recognition memory. The contributions of attention and processing speed were independent of one another and were similar at all ages studied: 5, 7, and 12 months. Taken together, attention and speed accounted for 6%-9% of the variance in visual recognition memory, leaving a considerable, but not unexpected, portion of the variance unexplained.

14.
This study investigated the influence of culture on people's sensory responses (smell, taste, sound, and touch) to visual stimuli. The sensory feelings of university students from four countries (Japan, South Korea, Britain, and France) in response to six images were evaluated. The images combined real and abstract objects and were presented on a notebook computer. Overall, 280 participants (144 men and 136 women; n = 70/country) were included in the statistical analysis. The chi‐square independence analysis showed differences and similarities in the sensory responses across countries. Most differences were detected in smell and taste, whereas few variations were observed for sound responses. Large variations in response were observed for the abstract coral and butterfly images, but few differences were detected in response to the real leaf image. These variations in response were mostly found in the British and Japanese participants.
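The chi-square independence analysis mentioned here has a standard form: compare observed cell counts in a country-by-response contingency table against the counts expected if the two variables were independent. A minimal sketch, on an invented 2x2 table rather than the study's data, is:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a contingency table (rows of counts)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table: country (rows) by response category (columns).
stat = chi_square_stat([[30, 40], [45, 25]])
# With 1 degree of freedom, stat > 3.84 indicates dependence at p < .05.
```

In practice one would use a library routine (e.g. a contingency-table test from a statistics package) to also obtain the p-value, but the statistic itself is just this observed-versus-expected sum.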

15.
16.
Eyes move over visual scenes to gather visual information. Studies have found heavy-tailed distributions in measures of eye movements during visual search, which raises questions about whether these distributions are pervasive to eye movements, and whether they arise from intrinsic or extrinsic factors. Three different measures of eye movement trajectories were examined during visual foraging of complex images, and all three were found to exhibit heavy tails: Spatial clustering of eye movements followed a power law distribution, saccade length distributions were lognormally distributed, and the speeds of slow, small amplitude movements occurring during fixations followed a 1/f spectral power law relation. Images were varied to test whether the spatial clustering of visual scene information is responsible for heavy tails in eye movements. Spatial clustering of eye movements and saccade length distributions were found to vary with image type and task demands, but no such effects were found for eye movement speeds during fixations. Results showed that heavy-tailed distributions are general and intrinsic to visual foraging, but some of them become aligned with visual stimuli when required by task demands. The potentially adaptive value of heavy-tailed distributions in visual foraging is discussed.
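One of the heavy-tailed signatures named in this abstract, lognormally distributed saccade lengths, is easy to illustrate: a lognormal sample has a long right tail that pulls the mean above the median. The distribution parameters below are illustrative assumptions, not fitted values from the study.

```python
import random
import statistics

random.seed(0)  # deterministic sample for illustration

# Simulated "saccade lengths" drawn from a lognormal distribution
# (log of the values is normal with mean 1.0 and s.d. 0.8).
lengths = [random.lognormvariate(1.0, 0.8) for _ in range(10_000)]

mean = statistics.fmean(lengths)
median = statistics.median(lengths)
print(mean > median)  # the heavy right tail inflates the mean
```

A symmetric (e.g. normal) distribution of lengths would give mean ≈ median; the systematic mean-median gap is a quick diagnostic for the kind of right-skewed, heavy-tailed data the study reports.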

17.
The relative contributions of proprioceptive and efferent information in eliciting adaptation to visual rearrangement were studied under two conditions of visual stimulation. Subjects permitted sight of their forearm under normal room illumination showed significant adaptation when the forearm was (a) moved up and down under the action of tonic vibration reflexes, (b) voluntarily moved through the same trajectory at the same pace, (c) viewed while still, and (d) viewed while the margins of the elbow were vibrated. The reflex movement condition elicited significantly greater adaptation than the other conditions. Subjects allowed only sight of a point source of light attached to their hand showed significant adaptation when the forearm was (a) reflexly moved, (b) voluntarily moved through the same trajectory at the same rate, (c) passively moved, (d) still, and (e) vibrated while still. Less adaptation occurred as the amount of proprioceptive information about limb position was decreased. The adaptation elicited by voluntary movements of the forearm and by reflex movements did not differ significantly. It is concluded that corollary-discharge signals may not be crucial in adaptation to visual rearrangement; a more important factor appears to be discordance between proprioceptive and visual information.

18.
Two hypotheses, attentional prioritization and attentional spreading, have been proposed to account for object-based attention. The attentional-prioritization hypothesis posits that the positional uncertainty of targets is sufficient to resolve the controversy raised by the competing attentional-spreading hypothesis. Here we challenge the sufficiency of this explanation by showing that object-based attention is a function of sensory uncertainty in a task with consistent high positional uncertainty of the targets. In Experiment 1, object-based attention was modulated by sensory uncertainty induced by the noise from backward masking, showing an object-based effect under high as compared to low sensory uncertainty. This finding was replicated in Experiment 2 with increased task difficulty, to exclude that as a confounding factor, and in Experiment 3 with a psychophysical method, to obtain converging evidence using perceptual threshold measurement. Additionally, such a finding was not observed when sensory uncertainty was eliminated by replacing the backward-masking stimuli with perceptually dissimilar ones in Experiment 4. These results reveal that object-based attention is influenced by sensory uncertainty, even under high positional uncertainty of the targets. Our findings contradict the proposition of attentional spreading, proposing instead an automatic form of object-based attention due to enhancement of the perceptual representation. More importantly, the attentional-prioritization hypothesis based solely on positional uncertainty cannot sufficiently account for object-based attention, but needs to be developed by expanding the concept of uncertainty to include at least sensory uncertainty.

19.
Cognitive Development, 2006, 21(1), 46-59
Investigations that focus on children's hand gestures often conclude that gesture production arises as a result of having multiple representations. To date, the predictive validity of this notion has not been tested. In this study, we compared the gestures of 82 five-year-old children holding either a single or a dual representation. The children retold a story narrated to them, with pictures, by the experimenter. In one condition the children heard a false belief story and hence, when retelling, held two beliefs—or representations—concurrently. In the other conditions, the children retold a version of the story without the false belief component and therefore held single representations. Children were four times more likely to gesture in the false belief condition than in two comparable true belief conditions, supporting the notion that gestures may function to externalise some of the child's cognitive process, particularly when they hold multiple representations.

20.
The nonvisual self-touch rubber hand paradigm elicits the compelling illusion that one is touching one’s own hand even though the two hands are not in contact. In four experiments, we investigated spatial limits of distance (15 cm, 30 cm, 45 cm, 60 cm) and alignment (0°, 90° anti-clockwise) on the nonvisual self-touch illusion and the well-known visual rubber hand illusion. Common procedures (synchronous and asynchronous stimulation administered for 60 s with the prosthetic hand at body midline) and common assessment methods were used. Subjective experience of the illusion was assessed by agreement ratings for statements on a questionnaire and time of illusion onset. The nonvisual self-touch illusion was diminished though never abolished by distance and alignment manipulations, whereas the visual rubber hand illusion was more robust against these manipulations. We assessed proprioceptive drift, and implications of a double dissociation between subjective experience of the illusion and proprioceptive drift are discussed.
