Similar Articles
1.
These studies examined the role of spatial encoding in inducing perception-action dissociations in visual illusions. Participants were shown a large-scale Müller-Lyer configuration with hoops as its tails. In Experiment 1, participants either made verbal estimates of the extent of the Müller-Lyer shaft (verbal task) or walked the extent without vision, in an offset path (blind-walking task). For both tasks, participants stood a small distance away from the configuration, to elicit object-relative encoding of the shaft with respect to its hoops. A similar illusion bias was found in the verbal and motoric tasks. In Experiment 2, participants stood at one endpoint of the shaft in order to elicit egocentric encoding of extent. Verbal judgments continued to exhibit the illusion bias, whereas blind-walking judgments did not. These findings underscore the importance of egocentric encoding in motor tasks for producing perception-action dissociations.

2.
Previous studies have demonstrated large errors (over 30 degrees) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer's location; e.g., Philbeck et al. [Philbeck, J. W., Sargent, J., Arthur, J. C., & Dopkins, S. (2008). Large manual pointing errors, but accurate verbal reports, for indications of target azimuth. Perception, 37, 511-534]). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20-160 degrees azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to -19 degrees for visual targets at 160 degrees). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space.

3.
In immersive virtual environments, judgments of perceived egocentric distance are significantly underestimated, as compared with accurate performance in the real world. Two experiments assessed the influence of graphics quality on two distinct estimates of distance, a visually directed walking task and verbal reports. Experiment 1 demonstrated a similar underestimation of distances walked to previously viewed targets in both low- and high-quality virtual classrooms. In Experiment 2, participants’ verbal judgments underestimated target distances in both graphics quality environments but were more accurate in the high-quality environment, consistent with the subjective impression that high-quality environments seem larger. Contrary to previous results, we suggest that quality of graphics does influence judgments of distance, but only for verbal reports. This behavioral dissociation has implications beyond the context of virtual environments and may reflect a differential use of cues and context for verbal reports and visually directed walking.

4.
Egocentric distances in virtual environments are commonly underperceived by up to 50% of the intended distance. However, a brief period of interaction in which participants walk through the virtual environment while receiving visual feedback can dramatically improve distance judgments. Two experiments were designed to explore whether the increase in postinteraction distance judgments is due to perception–action recalibration or the rescaling of perceived space. Perception–action recalibration as a result of walking interaction should only affect action-specific distance judgments, whereas rescaling of perceived space should affect all distance judgments based on the rescaled percept. Participants made blind-walking distance judgments and verbal size judgments in response to objects in a virtual environment before and after interacting with the environment through either walking (Experiment 1) or reaching (Experiment 2). Size judgments were used to infer perceived distance under the assumption of size–distance invariance, and these served as an implicit measure of perceived distance. Preinteraction walking and size-based distance judgments indicated an underperception of egocentric distance, whereas postinteraction walking and size-based distance judgments both increased as a result of the walking interaction, indicating that walking through the virtual environment with continuous visual feedback caused rescaling of the perceived space. However, interaction with the virtual environment through reaching had no effect on either type of distance judgment, indicating that physical translation through the virtual environment may be necessary for a rescaling of perceived space. Furthermore, the size-based distance and walking distance judgments were highly correlated, even across changes in perceived distance, providing support for the size–distance invariance hypothesis.
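A minimal worked sketch of how a verbal size judgment can be converted into an implied distance under the size–distance invariance hypothesis (the symbols and numbers below are illustrative assumptions, not values from the study): if a target subtends visual angle θ, the hypothesis links perceived size and perceived distance by S_perceived = 2 D_perceived tan(θ/2), so

\[ D_{\text{perceived}} = \frac{S_{\text{judged}}}{2\tan(\theta/2)}. \]

For example, a target subtending about 5.7° (roughly 0.1 rad) that is judged to be 0.5 m wide implies a perceived distance of about 0.5/0.1 = 5 m; because the visual angle is fixed by the display, any increase in postinteraction size judgments implies a proportional increase in the inferred distance.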

5.
Three experiments investigated auditory distance perception under natural listening conditions in a large open field. Targets varied in egocentric distance from 3 to 16 m. By presenting visual targets at these same locations on other trials, we were able to compare visual and auditory distance perception under similar circumstances. In some experimental conditions, observers made verbal reports of target distance. In others, observers viewed or listened to the target and then, without further perceptual information about the target, attempted to face the target, walk directly to it, or walk along a two-segment indirect path to it. The primary results were these. First, the verbal and walking responses were largely concordant, with the walking responses exhibiting less between-observer variability. Second, different motoric responses provided consistent estimates of the perceived target locations and, therefore, of the initially perceived distances. Third, under circumstances for which visual targets were perceived more or less correctly in distance using the more precise walking response, auditory targets were generally perceived with considerable systematic error. In particular, the perceived locations of the auditory targets varied only about half as much in distance as did the physical targets; in addition, there was a tendency to underestimate target distance, except for the closest targets.

6.
The present study extended previous findings of geographical slant perception, in which verbal judgments of the incline of hills were greatly overestimated but motoric (haptic) adjustments were much more accurate. In judging slant from memory following a brief or extended time delay, subjects’ verbal judgments were greater than those given when viewing hills. Motoric estimates differed depending on the length of the delay and place of response. With a short delay, motoric adjustments made in the proximity of the hill did not differ from those evoked during perception. When given a longer delay or when taken away from the hill, subjects’ motoric responses increased along with the increase in verbal reports. These results suggest two different memorial influences on action. With a short delay at the hill, memory for visual guidance is separate from the explicit memory informing the conscious response. With short or long delays away from the hill, short-term visual guidance memory no longer persists, and both motor and verbal responses are driven by an explicit representation. These results support recent research involving visual guidance from memory, where actions become influenced by conscious awareness, and provide evidence for communication between the “what” and “how” visual processing systems.

7.
In three experiments, we scrutinized the dissociation between perception and action, as reflected by the contributions of egocentric and allocentric information. In Experiment 1, participants stood at the base of a large-scale one-tailed version of a Müller-Lyer illusion (with a hoop) and either threw a beanbag to the endpoint of the shaft or verbally estimated the egocentric distance to that location. The results confirmed an effect of the illusion on verbal estimates, but not on throwing, providing evidence for a dissociation between perception and action. In Experiment 2, participants observed a two-tailed version of the Müller-Lyer illusion from a distance of 1.5 m and performed the same tasks as in Experiment 1, yet neither the typical illusion effects nor a dissociation became apparent. Experiment 3 was a replication of Experiment 1, with the difference that participants stood at a distance of 1.5 m from the base of the one-tailed illusion. The results indicated an illusion effect on both the verbal estimate task and the throwing task; hence, there was no dissociation between perception and action. The presence (Exp. 1) and absence (Exp. 3) of a dissociation between perception and action may indicate that dissociations are a function of the relative availability of egocentric and allocentric information. When distance estimates are purely egocentric, dissociations between perception and action occur. However, when egocentric distance estimates have a (complementary) exocentric component, the use of allocentric information is promoted, and dissociations between perception and action are reduced or absent.

8.
We report two experiments on the relationship between allocentric/egocentric frames of reference and categorical/coordinate spatial relations. Jager and Postma (2003) suggest two theoretical possibilities about their relationship: categorical judgements are better when combined with an allocentric reference frame and coordinate judgements with an egocentric reference frame (interaction hypothesis); allocentric/egocentric and categorical/coordinate form independent dimensions (independence hypothesis). Participants saw stimuli comprising two vertical bars (targets), one above and the other below a horizontal bar. They had to judge whether the targets appeared on the same side (categorical) or at the same distance (coordinate) with respect either to their body-midline (egocentric) or to the centre of the horizontal bar (allocentric). The results from Experiment 1 showed a facilitation in the allocentric and categorical conditions. In line with the independence hypothesis, no interaction effect emerged. To see whether the results were affected by the visual salience of the stimuli, in Experiment 2 the luminance of the horizontal bar was reduced. As a consequence, a significant interaction effect emerged indicating that categorical judgements were more accurate than coordinate ones, and especially so in the allocentric condition. Furthermore, egocentric judgements were as accurate as allocentric ones with a specific improvement when combined with coordinate spatial relations. The data from Experiment 2 showed that the visual salience of stimuli affected the relationship between allocentric/egocentric and categorical/coordinate dimensions. This suggests that the emergence of a selective interaction between the two dimensions may be modulated by the characteristics of the task.

9.
Two experiments were conducted in order to assess the contribution of locomotor information to estimates of egocentric distance in a walking task. In the first experiment, participants were either shown, or led blind to, a target located at a distance ranging from 4 to 10 m and were then asked to indicate the distance to the target by walking to the location previously occupied by the target. Participants in both the visual and locomotor conditions were very accurate in this task and there was no significant difference between conditions. In the second experiment, a cue-conflict paradigm was used in which, without the knowledge of the participants, the visual and locomotor targets (the targets they were asked to walk to) were at two different distances. Most participants did not notice the conflict, but despite this their responses showed evidence that they had averaged the visual and locomotor inputs to arrive at a walked estimate of distance. Together, these experiments demonstrate that, although they showed poor awareness of their position in space without vision, in some conditions participants were able to use such nonvisual information to arrive at distance estimates as accurate as those given by vision.

10.
We report the visually directed actions of soccer players. After perceiving the location of a target on their left side at the starting point and traveling toward the ball without seeing the target, the players could kick the ball accurately (Experiment 2). In contrast, if they were verbally asked the direction of the target in a similar situation, the perceived direction was systematically distorted (Experiment 3). Our major concern in explaining the distorted perception was whether the egocentric distance before locomotion was perceived accurately or not, and whether the updating of the target location during locomotion was accurate or not. Combining these two possibilities yields four hypotheses, each assuming one of the following: (1) accurate egocentric distance and accurate updating, (2) inaccurate egocentric distance and accurate updating, (3) accurate egocentric distance and inaccurate updating, or (4) inaccurate egocentric distance and inaccurate updating. Based on these hypotheses, we conducted four simulations, which revealed that the combination of accurate perception of egocentric distance and distorted updating that substituted a constant function for the sine function produced not only a good r², but also the three kinds of interactions obtained in Experiment 3. Why did the players, based on their distorted perception, perform accurately? We suggest that through perceptual learning they might acquire a perceptual-motor relation that is the inverse function of the physical-perceptual relation.

11.
Ecological Psychology, 2013, 25(3), 197-226
Ability to visually perceive egocentric target distance was assessed using 2 response measures: verbal reports and reaches. These 2 response measures were made within experimental trials with the participants' eyes closed either immediately after viewing the target (Experiment 1) or after a 6- or 12-sec delay (Experiment 2). Systematic and random errors differed as a function of the response measure. The random errors for the verbal reports and the reaches were not correlated in the no-delay condition but became correlated in each of the 6- and 12-sec delay conditions. Systematic errors varied as a function of delay for the verbal reports but not for the reaches. These findings suggest that immediate verbal and action responses are not directed by a single internally represented perceived depth, as suggested by Philbeck and Loomis (1997). The findings are related to the possibility of separate neurological streams for vision (e.g., Bridgeman, 1989; Milner & Goodale, 1995; Rossetti, 1998), and our discussion contains a review that supplements Michaels's (2000) commentary on those theories. The findings are also related to the recent theories regarding task-specific devices, and a possible synthesis of task-specific devices and separate visual streams is offered.

12.
This study examined how different components of working memory are involved in the acquisition of egocentric and allocentric survey knowledge by people with a good and poor sense of direction (SOD). We employed a dual-task method and asked participants to learn routes from videos with verbal, visual, and spatial interference tasks and without any interference. Results showed that people with a good SOD encoded and integrated knowledge about landmarks and routes into egocentric survey knowledge in verbal and spatial working memory, which is then transformed into allocentric survey knowledge with the support of all three components, distances being processed in verbal and spatial working memory and directions in visual and spatial working memory. In contrast, people with a poor SOD relied on verbal working memory and lacked spatial processing, thus failing to acquire accurate survey knowledge. Based on the results, a possible model for explaining individual differences in spatial knowledge acquisition is proposed.

13.
Kelly JW, Loomis JM, Beall AC. Perception, 2004, 33(4), 443-454
Judgments of exocentric direction are quite common, especially when judging where others are looking or pointing. To investigate these judgments in large-scale space, observers were shown two targets in a large open field and were asked to judge the exocentric direction specified by the targets. The targets ranged in egocentric distance from 5 to 20 m with target-to-target angular separations of 45 degrees, 90 degrees, and 135 degrees. Observers judged exocentric direction using two methods: (i) by judging which point on a distant fence appeared collinear with the two targets, and (ii) by orienting their body in a direction parallel with the perceived line segment. In the collinearity task, observers had to imagine the line connecting the targets and then extrapolate this imagined line out to the fence. Observers indicated the perceived point of collinearity on a handheld 360-degree panoramic cylinder representing their vista. The two judgment methods gave similar results except for a constant bias associated with the body-pointing response. Aside from this bias, the results of these two methods agree with other existing research indicating an effect of relative egocentric distance to the targets on judgment error: line segments are perceived as being rotated in depth. Additionally, verbal estimates of egocentric and exocentric distance suggest that perceived distance is not the cause for the systematic errors in judging exocentric direction.
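To make the geometry of an exocentric direction concrete (a generic sketch in notation of our own choosing, not the authors' analysis): with the observer at the origin and azimuths measured from the straight-ahead axis, a target at egocentric distance d and azimuth φ lies at (d cos φ, d sin φ), so the direction of the segment from target 1 to target 2 is

\[ \beta = \operatorname{atan2}\!\big(d_{2}\sin\varphi_{2} - d_{1}\sin\varphi_{1},\; d_{2}\cos\varphi_{2} - d_{1}\cos\varphi_{1}\big). \]

Because β depends on the azimuths and on the ratio of the two egocentric distances, a distortion in the represented relative distances or azimuths shows up as a rotation of the perceived segment in depth, the kind of error described above.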

14.

Judgments of egocentric distances in well-lit natural environments can differ substantially in indoor versus outdoor contexts. Visual cues (e.g., linear perspective, texture gradients) no doubt play a strong role in context-dependent judgments when cues are abundant. Here we investigated a possible top-down influence on distance judgments that might play a unique role under conditions of perceptual uncertainty: assumptions or knowledge that one is indoors or outdoors. We presented targets in a large outdoor field and in an indoor classroom. To control visual distance and depth cues between the environments, we restricted the field of view by using a 14-deg aperture. Evidence of context effects depended on the response mode: Blindfolded-walking responses were systematically shorter indoors than outdoors, whereas verbal and size gesture judgments showed no context effects. These results suggest that top-down knowledge about the environmental context does not strongly influence visually perceived egocentric distance. However, this knowledge can operate as an output-level bias, such that blindfolded-walking responses are shorter when observers’ top-down knowledge indicates that they are indoors and when the size of the room is uncertain.


15.
Many tasks have been used to probe human directional knowledge, but relatively little is known about the comparative merits of different means of indicating target azimuth. Few studies have compared action-based versus non-action-based judgments for targets encircling the observer. This comparison promises to illuminate not only the perception of azimuths in the front and rear hemispaces, but also the frames of reference underlying various azimuth judgments, and ultimately their neural underpinnings. We compared a response in which participants aimed a pointer at a nearby target, with verbal azimuth estimates. Target locations were distributed between 20 degrees and 340 degrees. Non-visual pointing responses exhibited large constant errors (up to -32 degrees) that tended to increase with target eccentricity. Pointing with eyes open also showed large errors (up to -21 degrees). In striking contrast, verbal reports were highly accurate, with constant errors rarely exceeding +/-5 degrees. Under our testing conditions, these results are not likely to stem from differences in perception-based versus action-based responses, but instead reflect the frames of reference underlying the pointing and verbal responses. When participants used the pointer to match the egocentric target azimuth rather than the exocentric target azimuth relative to the pointer, errors were reduced.

16.
He ZJ, Wu B, Ooi TL, Yarbrough G, Wu J. Perception, 2004, 33(7), 789-806
On the basis of the finding that a common and homogeneous ground surface is vital for accurate egocentric distance judgments (Sinai et al., 1998, Nature, 395, 497-500), we propose a sequential-surface-integration-process (SSIP) hypothesis to elucidate how the visual system constructs a representation of the ground surface in the intermediate distance range. According to the SSIP hypothesis, a near ground-surface representation is formed from near depth cues, and is utilized as an anchor to integrate the more distant surfaces by using texture-gradient information as the depth cue. The SSIP hypothesis provides an explanation for the finding that egocentric distance judgment is underestimated when a texture boundary exists on the ground surface that commonly supports the observer and target. We tested the prediction that the fidelity of the visually represented ground-surface reference frame depends on how the visual system selects the surface information for integration. Specifically, if information is selected along a direct route between the observer and target where the ground surface is disrupted by an occluding object, the ground surface will be inaccurately represented. In Experiments 1-3 we used a perceptual task and two different visually directed tasks to show that this leads to egocentric distance underestimation. Judgment is accurate, however, when the observer selects the continuous ground information bypassing the occluding object (indirect route), as found in Experiments 4 and 5 with a visually directed task. Altogether, our findings provide support for the SSIP hypothesis and reveal, surprisingly, that the phenomenal visual space is not unique but depends on how optic information is selected.

17.
Egocentric distance perception is a psychological process in which observers use various depth cues to estimate the distance between a target and themselves. The impairment of basic visual function in amblyopia and its treatment have been well documented, but disorders of egocentric distance perception in amblyopes are poorly understood. In this review, we describe the cognitive mechanisms of egocentric distance perception and then focus on empirical evidence for distance-perception deficits in amblyopes across the whole visual space. In personal space (within 2 m), amblyopes have difficulty showing normal hand-eye coordination; in action space (2-30 m), they cannot accurately judge the distance of a target suspended in the air; few studies have examined their performance in vista space (beyond 30 m). Finally, five critical topics for future research are discussed: 1) systematically exploring the mechanisms of egocentric distance perception in all three spaces; 2) examining how amblyopes perceive the egocentric distance of moving objects; 3) comparing the three subtypes of amblyopia, which remains insufficiently studied; 4) studying distance perception under alternative theoretical frameworks; and 5) exploring the mechanisms of amblyopia using virtual reality.

18.
This study explored whether people create Euclidean representations of 2-dimensional right triangles from touch and use them to make spatial inferences in accord with Euclidean distance axioms. Blindfolded participants who were instructed to form visual images of triangles felt the vertical and horizontal sides of right triangles, then estimated the lengths (but not the angles) of the 3 triangle sides. In these 3 experiments, length estimates conformed closely to the Euclidean metric when evaluated on application of the Pythagorean theorem. Participants who used a visual imaging strategy were accurate more often than those who used visual imagery less often. In Experiments 2 and 3, a hypotenuse inference was as accurate as a direct haptic judgment of the hypotenuse. These results demonstrated similar accuracy of the hypotenuse judgments when participants made verbal rather than haptic estimates. The findings indicate that participants can form Euclidean representations under certain conditions from felt 2-dimensional right triangles based on visual images.
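As a concrete instance of the Euclidean check applied to such length estimates (the leg lengths below are illustrative, not the study's actual stimuli): for a right triangle whose felt legs have lengths a and b, the Pythagorean prediction for the unfelt hypotenuse is

\[ c = \sqrt{a^{2} + b^{2}}, \]

so legs of 9 cm and 12 cm predict a 15 cm hypotenuse; estimates that conform to the Euclidean metric should cluster near 15 cm rather than near the 21 cm sum of the legs.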

19.
The paper by Shaffer, McManama, Swank, Williams & Durgin (2014) uses correlations between palm-board and verbal estimates of geographical slant to argue against dissociation of the two measures. This paper reports the correlations between the verbal, visual and palm-board measures of geographical slant used by Proffitt and co-workers as a counterpoint to the analyses presented by Shaffer and colleagues. The data are for slant perception of staircases in a station (N = 269), a shopping mall (N = 229) and a civic square (N = 109). In all three studies, modest correlations between the palm-board matches and the verbal reports were obtained. Multiple-regression analyses of potential contributors to verbal reports, however, indicated no unique association between verbal and palm-board measures. Data from three further studies (combined N = 528) also show no evidence of any relationship. Shared method variance between visual and palm-board matches could account for the modest association between palm-boards and verbal reports.
