Similar articles
1.
Previous studies have demonstrated large errors (over 30 degrees) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer's location; e.g., Philbeck et al. [Philbeck, J. W., Sargent, J., Arthur, J. C., & Dopkins, S. (2008). Large manual pointing errors, but accurate verbal reports, for indications of target azimuth. Perception, 37, 511-534]). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20-160 degrees azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to -19 degrees for visual targets at 160 degrees). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space.

2.
Updating egocentric representations in human navigation
Wang RF, Spelke ES. Cognition, 2000, 77(3), 215-250
Seven experiments tested whether human navigation depends on enduring representations, or on momentary egocentric representations that are updated as one moves. Human subjects pointed to unseen targets, either while remaining oriented or after they had been disoriented by self-rotation. Disorientation reduced not only the absolute accuracy of pointing to all objects ('heading error') but also the relative accuracy of pointing to different objects ('configuration error'). A single light providing a directional cue reduced both heading and configuration errors if it was present throughout the experiment. If the light was present during learning and test but absent during the disorientation procedure, however, subjects showed low heading errors (indicating that they reoriented by the light) but high configuration errors (indicating that they failed to retrieve an accurate cognitive map of their surroundings). These findings provide evidence that object locations are represented egocentrically. Nevertheless, disorientation had little effect on the coherence of pointing to different room corners, suggesting both (a) that the disorientation effect on representations of object locations is not due to the experimental paradigm and (b) that room geometry is captured by an enduring representation. These findings cast doubt on the view that accurate navigation depends primarily on an enduring, observer-free cognitive map, for humans construct such a representation of extended surfaces but not of objects. Like insects, humans represent the egocentric distances and directions of objects and continuously update these representations as they move. The principal evolutionary advance in animal navigation may concern the number of unseen targets whose egocentric directions and distances can be represented and updated simultaneously, rather than a qualitative shift in navigation toward reliance on an allocentric map.
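The heading-error / configuration-error decomposition described in this abstract can be illustrated numerically. The following is a minimal sketch, not the authors' analysis code; the function names and toy data are hypothetical. It treats heading error as the shared (circular-mean) signed rotation of all pointing responses, and configuration error as the residual inter-target inconsistency once that shared rotation is removed.

```python
import math

def circ_mean(angles_deg):
    """Circular mean of angles given in degrees."""
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c))

def heading_and_config_error(true_azimuths, judged_azimuths):
    """Split pointing error into a shared rotation (heading error)
    and residual inter-target inconsistency (configuration error)."""
    signed = [((j - t + 180) % 360) - 180          # signed error in (-180, 180]
              for t, j in zip(true_azimuths, judged_azimuths)]
    heading = circ_mean(signed)                    # common rotation of all responses
    residuals = [((e - heading + 180) % 360) - 180 for e in signed]
    config = math.sqrt(sum(r * r for r in residuals) / len(residuals))
    return heading, config

# Toy data: every response rotated ~20 deg, i.e. the subject is disoriented
# but the remembered object configuration is internally consistent.
true_az = [0, 45, 90, 135]
judged = [20, 65, 110, 155]
h, c = heading_and_config_error(true_az, judged)
print(round(h, 1), round(c, 1))  # large heading error, zero configuration error
```

On this toy input the heading error is 20 degrees while the configuration error is 0, matching the paper's distinction between losing one's bearing and losing the object configuration itself.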

3.
A pitched visual inducer has a strong effect on the visually perceived elevation of a target in extrapersonal space, and also on the elevation of the arm when a subject points with an unseen arm to the target’s elevation. The manual effect is a systematic function of hand-to-body distance (Li and Matin, Vision Research, 45, 533–550, 2005): When the arm is fully extended, manual responses to perceptually mislocalized luminous targets are veridical; when the arm is close to the body, gross matching errors occur. In the present experiments, we measured this hand-to-body distance effect during the presence of a pitched visual inducer and after inducer offset, using three values of hand-to-body distance (0, 40, and 70 cm) and two open-loop tasks (pointing to the perceived elevation of a target at true eye level and setting the height of the arm to match the elevation). We also measured manual behavior when subjects were instructed to point horizontally under induction and after inducer offset (no visual target at any time). In all cases, the hand-to-body distance effect disappeared shortly after inducer offset. We suggest that the rapid disappearance of the distance effect is a manifestation of processes in the dorsal visual stream that are involved in updating short-lived representations of the arm in egocentric visual perception and manual behavior.

4.
The purpose of this study was (a) to determine if vision and kinesthesis contribute differentially to the coding of a specific two-dimensional pattern and (b) to identify the effect of repetition on the spatial representation of this pattern. The reproductions of a specific pattern presented visually were compared with those of a pattern presented kinesthetically. The results showed that vision and kinesthesis contributed equally to the coding of the directional components of the pattern. However, visual information dominated kinesthetic information when coding the distance between the intersecting points of the pattern, especially at the beginning of the process. Generally speaking, visual or kinesthetic repetition, or both, increased the precision with which the distance and direction of a specific pattern were reproduced.

5.
The present study investigated whether visual and kinesthetic stimuli are stored as multisensory or modality-specific representations in unimodal and crossmodal working memory tasks. To this end, angle-shaped movement trajectories were presented to 16 subjects in delayed matching-to-sample tasks either visually or kinesthetically during encoding and recognition. During the retention interval, a secondary visual or kinesthetic interference task was inserted either immediately or with a delay after encoding. The modality of the interference task interacted significantly with the encoding modality. After visual encoding, memory was more impaired by a visual than by a kinesthetic secondary task, while after kinesthetic encoding the pattern was reversed. The time at which the secondary task had to be performed interacted with the encoding modality as well. For visual encoding, memory was more impaired when the secondary task had to be performed at the beginning of the retention interval. In contrast, memory after kinesthetic encoding was more affected when the secondary task was introduced later in the retention interval. The findings suggest that working memory traces are maintained in a modality-specific format characterized by distinct consolidation processes that take longer after kinesthetic than after visual encoding.

6.
Self-rated sense of direction is reliably related to people's accuracy when pointing in the direction of unseen landmarks from imagined or actual perspectives. It is proposed that the cognitive substrate of accurate pointing responses is a vector representation, which is defined as an integrated network of displacement vectors. Experiment 1 isolated the body senses and tested displacement vector formation in a path-integration task. Experiment 2 isolated the visual modality and tested displacement vector formation in a virtual-learning task. Both experiments tested whether people reporting a good sense of direction (GSOD) were more likely to compute displacement vectors than people reporting a poor sense of direction (PSOD). The results showed that both GSOD and PSOD people computed displacement vectors in the path-integration task, but not in the virtual-learning task. When interlandmark relations were visually specified, GSOD people made more accurate pointing responses than PSOD people, adding to a growing body of cognitive correlates of self-rated direction sense.
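The displacement vectors computed in a path-integration task like the one above can be made concrete. This is a sketch under simplifying assumptions (2-D translations in a fixed allocentric frame, no body-rotation bookkeeping), not the study's procedure: the homing vector back to the start is recovered by accumulating the outbound movement vectors and negating the sum.

```python
import math

def home_vector(steps):
    """Path integration over a sequence of (heading_deg, distance) steps.
    Accumulates the net displacement, then returns the direction and
    distance of the homing vector back to the starting point."""
    x = y = 0.0
    for heading_deg, dist in steps:
        x += dist * math.cos(math.radians(heading_deg))
        y += dist * math.sin(math.radians(heading_deg))
    home_dist = math.hypot(x, y)
    home_dir = math.degrees(math.atan2(-y, -x)) % 360  # direction back to origin
    return home_dir, home_dist

# Outbound path: 3 m east (heading 0), then 4 m north (heading 90)
direction, distance = home_vector([(0, 3), (90, 4)])
print(round(direction, 1), round(distance, 2))
```

For the 3-4 outbound leg the homing vector has length 5, the familiar triangle-completion result that path-integration experiments test behaviorally.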

7.
A voluntary motor response that is prepared in advance of a stimulus may be triggered by any sensory input. This study investigated the combination of visual and kinesthetic inputs in triggering voluntary torque responses. When a visual stimulus was presented alone, subjects produced a fast and accurate increase in elbow flexion torque. When a kinesthetic stimulus was presented instead of the visual stimulus, subjects produced a similar response with a reduced response latency. When a visual stimulus was presented in combination with a kinesthetic stimulus, subjects initiated their responses after either a visual or a kinesthetic response latency, depending on the relative timing of the two stimuli. An analysis of response amplitude suggested that when visual and kinesthetic stimuli were combined, both stimuli triggered a response. The results are more consistent with a simple behavioral model of addition of visual and kinesthetic responses (which predicts that the response to combined stimuli should be the sum of individual responses) than with a model of exclusion of one response (which predicts that the response to combined stimuli should be identical to either the visual or the kinesthetic response). Because addition of visually and kinesthetically triggered responses produced a response with an erroneously large amplitude, it is suggested that visual and kinesthetic inputs are not always efficiently integrated.

9.
Studies of saccadic suppression and induced motion have suggested separate representations of visual space for perception and visually guided behavior. Because these methods required stimulus motion, subjects might have confounded motion and position. We separated cognitive and sensorimotor maps without motion of target, background, or eye, with an “induced Roelofs effect”: a target inside an off-center frame appears biased opposite the direction of the frame. A frame displayed to the left of a subject’s center line, for example, will make a target inside the frame appear farther to the right than its actual position. The effect always influences perception, but in half of our subjects it did not influence pointing. Cognitive and sensorimotor maps interacted when the motor response was delayed; all subjects now showed a Roelofs effect for pointing, suggesting that the motor system was being fed from the biased cognitive map. A second experiment showed similar results when subjects made an open-ended cognitive response instead of a five-alternative forced choice. Experiment 3 showed that the results were not due to shifts in subjects’ perception of the felt straight-ahead position. In Experiment 4, subjects pointed to the target and judged its location on the same trial. Both measures showed a Roelofs effect, indicating that each trial was treated as a single event and that the cognitive representation was accessed to localize this event in both response modes.

10.
Ecological Psychology, 2013, 25(3), 197-226
Ability to visually perceive egocentric target distance was assessed using 2 response measures: verbal reports and reaches. These 2 response measures were made within experimental trials with the participants' eyes closed either immediately after viewing the target (Experiment 1) or after a 6- or 12-sec delay (Experiment 2). Systematic and random errors differed as a function of the response measure. The random errors for the verbal reports and the reaches were not correlated in the no-delay condition but became correlated in each of the 6- and 12-sec delay conditions. Systematic errors varied as a function of delay for the verbal reports but not for the reaches. These findings suggest that immediate verbal and action responses are not directed by a single internally represented perceived depth, as suggested by Philbeck and Loomis (1997). The findings are related to the possibility of separate neurological streams for vision (e.g., Bridgeman, 1989; Milner & Goodale, 1995; Rossetti, 1998), and our discussion contains a review that supplements Michaels's (2000) commentary on those theories. The findings are also related to the recent theories regarding task-specific devices, and a possible synthesis of task-specific devices and separate visual streams is offered.

11.
肖承丽. 心理学报 (Acta Psychologica Sinica), 2013, 45(7), 752-761
Participants learned an irregular layout of objects either through simultaneous vision or through sequential proprioception. After learning, they pointed to the locations of the objects in random order under three movement conditions: facing the learning direction, after self-rotating 240°, and after continuous rotation until disoriented. Disorientation significantly impaired the internal consistency of pointing in the simultaneous-vision group, whereas the sequential-proprioception group was unaffected by disorientation. An offline relative-position judgment task showed no difference between the two groups' environment-centered spatial representations. This demonstrates that participants can also form stable egocentric spatial representations through sequential proprioceptive learning, supporting the extension of the spatial snapshot theory and the functional equivalence hypothesis of spatial cognition.

12.
Observers pointing to a target viewed directly may elevate their fingertip close to the line of sight. However, pointing blindfolded, after viewing the target, they may pivot lower, from the shoulder, aligning the arm with the target as if reaching to the target. Indeed, in Experiment 1 participants elevated their arms more in visually monitored than blindfolded pointing. In Experiment 2, pointing to a visible target they elevated a short pointer more than a long one, raising its tip to the line of sight. In Experiment 3, the Experimenter aligned the participant's arm with the target. Participants judged they were pointing below a visually monitored target. In Experiment 4, participants viewing another person pointing, eyes-open or eyes-closed, judged the target was aligned with the pointing arm. In Experiment 5, participants viewed their arm and the target via a mirror and posed their arm so that it was aligned with the target. Arm elevation was higher in pointing directly.

13.
Gaunet F, Rossetti Y. Perception, 2006, 35(1), 107-124
Congenitally blind, late-blind, and blindfolded-sighted participants performed a pointing task at proximal memorised proprioceptive targets. The locations to be memorised were presented on a sagittal plane by passively positioning the left index finger. A 'go' signal for matching the target location with the right index finger was given 0 or 8 s after left-hand demonstration. Absolute distance errors were smaller in the blind groups, with both delays pooled together; signed distance and direction errors were underestimated with the longer delay, and were overestimated by the blind groups, whereas the blindfolded-sighted group underestimated them. The pointing scatters were elongated, but elongation was not affected by delay or group. The surface of the scatter was greater with the longer delay, and the orientation of the main axis of the pointing ellipses indicates the use of an egocentric frame of reference by the congenitally blind group for both delays, and the use of an egocentric (0 s) and an exocentric (8 s) frame of reference by the blindfolded-sighted group, with the late-blind group using an intermediate frame of reference for both delays. Therefore, early and late visual-deprivation effects are distinguished from transient visual-deprivation effects, as long-term deprivation leads to increased capabilities (absolute distance estimations), unaltered organisation (surface and elongation), and altered organisation (amplitude and direction estimations, orientation of the pointing distribution) of the spatial representation with proprioception. Besides providing an extensive exploration of pointing ability and mechanisms in the visually deprived population, the results show that cross-modal plasticity applies not only to neural bases but extends to spatial behaviour.
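The scatter measures this abstract reports (surface, elongation, and main-axis orientation of the pointing ellipses) correspond to a principal-axis analysis of the 2-D endpoint errors. A minimal sketch with made-up data follows; it is not the paper's analysis pipeline, and uses the closed-form eigendecomposition of the 2×2 covariance matrix rather than a statistics library.

```python
import math

def scatter_axes(points):
    """Return (orientation_deg, major_sd, minor_sd) of the error ellipse
    fitted to a list of (x, y) pointing endpoints."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Closed-form eigendecomposition of the 2x2 covariance matrix
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)        # major-axis orientation
    mean_v = (sxx + syy) / 2
    diff = math.hypot((sxx - syy) / 2, sxy)
    major = math.sqrt(mean_v + diff)
    minor = math.sqrt(max(mean_v - diff, 0.0))
    return math.degrees(theta), major, minor

# Toy scatter stretched along the 45-degree diagonal
pts = [(-2, -2), (-1, -1), (0, 0), (1, 1), (2, 2), (0.5, -0.5), (-0.5, 0.5)]
angle, major, minor = scatter_axes(pts)
print(round(angle, 1))  # ~45.0
```

Elongation can then be expressed as the major/minor ratio and the ellipse surface is proportional to major × minor, which is presumably how measures of this kind are compared across delays and groups.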

14.
Under spatial misalignment of eye and hand induced by laterally displacing prisms (11.4 degrees in the rightward direction), subjects pointed 60 times (once every 3 s) at a visually implicit target (straight ahead of nose, Experiment 1) or a visually explicit target (an objectively straight-ahead target, Experiment 2). For different groups in each experiment, the hand became visible early in the sagittal pointing movement (early visual feedback). Adaptation to the optical misalignment during exposure (direct effects) was rapid, especially with early feedback; complete compensation for the misalignment was achieved within about 30 trials, and overcompensation occurred in later trials, especially with an explicit target. In contrast, adaptation measured with the misalignment removed and without visual feedback after blocks of 10 pointing trials (aftereffects) was slow to develop, especially with delayed feedback and an implicit target; at most, about 40% compensation for the misalignment occurred after 60 trials. This difference between direct effects and aftereffects is discussed in terms of separable adaptive mechanisms that are activated by different error signals. Adaptive coordination is activated by error feedback and involves centrally located, strategically flexible, short-latency processes to correct for sudden changes in operational precision that normally occur with short-term changes in coordination tasks. Adaptive alignment is activated automatically by spatial discordance between misaligned systems and involves distributed, long-latency processes to correct for slowly developing shifts in alignment among perceptual-motor components that normally occur with long-term drift. The sudden onset of misalignment in experimental situations activates both mechanisms in a complex and not always cooperative manner, which may produce overcompensatory behavior during exposure (i.e., direct effects) and which may limit long-term alignment (i.e., aftereffects).  

15.
He ZJ, Wu B, Ooi TL, Yarbrough G, Wu J. Perception, 2004, 33(7), 789-806
On the basis of the finding that a common and homogeneous ground surface is vital for accurate egocentric distance judgments (Sinai et al., 1998, Nature, 395, 497-500), we propose a sequential-surface-integration-process (SSIP) hypothesis to elucidate how the visual system constructs a representation of the ground surface in the intermediate distance range. According to the SSIP hypothesis, a near ground-surface representation is formed from near depth cues, and is utilized as an anchor to integrate the more distant surfaces by using texture-gradient information as the depth cue. The SSIP hypothesis provides an explanation for the finding that egocentric distance judgment is underestimated when a texture boundary exists on the ground surface that commonly supports the observer and target. We tested the prediction that the fidelity of the visually represented ground-surface reference frame depends on how the visual system selects the surface information for integration. Specifically, if information is selected along a direct route between the observer and target where the ground surface is disrupted by an occluding object, the ground surface will be inaccurately represented. In experiments 1-3 we used a perceptual task and two different visually directed tasks to show that this leads to egocentric distance underestimation. Judgment is accurate, however, when the observer selects the continuous ground information bypassing the occluding object (indirect route), as found in experiments 4 and 5 with a visually directed task. Altogether, our findings provide support for the SSIP hypothesis and reveal, surprisingly, that the phenomenal visual space is not unique but depends on how optic information is selected.

16.
The organization of manual reaching movements suggests considerable independence in the initial programming with respect to the direction and the distance of the intended movement. It was hypothesized that short-term memory for a visually-presented location within reaching space, in the absence of other allocentric reference points, might also be represented in a motoric code, showing similar independence in the encoding of direction and distance. This hypothesis was tested in two experiments, using adult human subjects who were required to remember the location of a briefly presented luminous spot. Stimuli were presented in the dark, thus providing purely egocentric spatial information. After the specified delay, subjects were instructed to point to the remembered location. In Exp. 1, temporal decay of location memory was studied, over a range of 4–30 s. The results showed that (a) memory for both the direction and the distance of the visual target location declined over time, at about the same rate for both parameters; however, (b) errors of distance were much greater in the left than in the right hemispace, whereas direction errors showed no such effect; (c) the distance and direction errors were essentially uncorrelated, at all delays. These findings suggest independent representation of these two parameters in working memory. In Exp. 2 the subjects were required to remember the locations of two visual stimuli presented sequentially, one after the other. Only after both stimuli had been presented did the subject receive a signal from the experimenter as to which one was to be pointed to. The results showed that the encoding of a second location selectively interfered with memory for the direction but not for the distance of the to-be-remembered target location. As in Exp. 1, direction and distance errors were again uncorrelated. 
The results of both experiments indicate that memory for egocentrically-specified visual locations can encode the direction and distance of the target independently. Use of motor-related representation in spatial working memory is thus strongly suggested. The findings are discussed in the context of multiple representations of space in visuo-spatial short-term memory.

17.
Hering's model of egocentric visual direction assumes implicitly that the effect of eye position on direction is both linear and equal for the two eyes; these two assumptions were evaluated in the present experiment. Five subjects pointed (open-loop) to the apparent direction of a target seen under conditions in which the position of one eye was systematically varied while the position of the other eye was held constant. The data were analyzed through examination of the relationship between the variations in perceived egocentric direction and variations in expected egocentric direction based on the positions of the varying eye. The data revealed that the relationship between eye position and egocentric direction is indeed linear. Further, the data showed that, for some subjects, variations in the positions of the two eyes do not have equal effects on egocentric direction. Both the between-eye differences and the linear relationship may be understood in terms of individual differences in the location of the cyclopean eye, an unequal weighting of the positions of the eyes in the processing of egocentric direction, or some combination of these two factors.
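The linear, possibly unequal weighting described above amounts to modeling perceived egocentric direction as a weighted sum of the two eyes' positions, with the cyclopean-eye account corresponding to equal weights. Here is a sketch of recovering those weights by least squares; the function, variable names, and data are illustrative, not the study's, and the toy data are noiseless so the fit recovers the generating weights exactly.

```python
def fit_eye_weights(left_pos, right_pos, perceived):
    """Least-squares fit of perceived = wL*left + wR*right (no intercept),
    solved via the 2x2 normal equations."""
    sll = sum(l * l for l in left_pos)
    srr = sum(r * r for r in right_pos)
    slr = sum(l * r for l, r in zip(left_pos, right_pos))
    slp = sum(l * p for l, p in zip(left_pos, perceived))
    srp = sum(r * p for r, p in zip(right_pos, perceived))
    det = sll * srr - slr * slr                 # assumes eye positions not collinear
    w_left = (slp * srr - srp * slr) / det
    w_right = (srp * sll - slp * slr) / det
    return w_left, w_right

# Hypothetical eye positions (deg) and directions generated with unequal
# weights (0.7 left, 0.3 right), i.e. a non-cyclopean weighting
left = [0, 5, 10, -5, -10, 3]
right = [0, -5, 2, 5, 4, -3]
seen = [0.7 * l + 0.3 * r for l, r in zip(left, right)]
wl, wr = fit_eye_weights(left, right, seen)
print(round(wl, 2), round(wr, 2))  # recovers 0.7 and 0.3
```

Equal fitted weights (wl ≈ wr ≈ 0.5) would correspond to a cyclopean eye midway between the two eyes; unequal weights shift its effective location, which is one of the interpretations the abstract offers.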

19.
The aim of the two present experiments was to examine the ontogenetic development of the dissociation between perception and action in children using the Duncker illusion. In this illusion, a moving background alters the perceived direction of target motion. Targets were held stationary while appearing to move in an induced displacement. In Experiment 1, 30 children aged 7, 9, and 12 years and 10 adults made a perceptual judgment or pointed as accurately as possible, with their index finger, to the last position of the target. The 7-year-old children were more perceptually deceived than the others by the Duncker illusion but there were no differences for the goal-directed pointing movements. In Experiment 2, 50 children aged 7, 8, 9, 10, and 11 years made a perceptual judgment or reproduced as accurately as possible, with a handle, the distance traveled by the target. Participants were perceptually deceived by the illusion, judging the target as moving although it was stationary. When reproducing the distance covered by the target, children were unaffected by the Duncker illusion. Our results suggest that the separation of the allocentric visual perception pathway from the egocentric action pathway occurs before 7 years of age.

20.
We provide experimental evidence that perceived location is an invariant in the control of action, by showing that different actions are directed toward a single visually specified location in space (corresponding to the putative perceived location) and that this single location, although specified by a fixed physical target, varies with the availability of information about the distance of that target. Observers in two conditions varying in the availability of egocentric distance cues viewed targets at 1.5, 3.1, or 6.0 m and then attempted to walk to the target with eyes closed using one of three paths; the path was not specified until after vision was occluded. The observers stopped at about the same location regardless of the path taken, providing evidence that action was being controlled by some invariant, ostensibly visually perceived location. That it was indeed perceived location was indicated by the manipulation of information about target distance: the trajectories in the full-cues condition converged near the physical target locations, whereas those in the reduced-cues condition converged at locations consistent with the usual perceptual errors found when distance cues are impoverished.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号