Similar documents
20 similar documents found (search time: 15 ms).
1.
We investigated spatial perception of virtual images that were produced by convex and plane mirrors. In Experiment 1, 36 subjects reproduced both the perceived size and the perceived distance of virtual images for five targets that had been placed at a real distance of 10 or 20 m. In Experiment 2, 30 subjects verbally judged both the perceived size and the perceived distance of virtual images for five targets that were placed at each of five real distances of 2.5-45 m. In both experiments, the subjects received objective-size and objective-distance instructions. The results were that (1) size constancy was attained for a distance of up to 45 m, (2) distance was readily discriminated within this distance range, although virtual images produced by the mirror of strong curvature were judged to be farther away than those produced by the mirrors of less curvature, and (3) the ratio of perceived size to perceived distance was described as a power function of visual angle, and the ratio for the convex mirror was larger than that for the plane mirror. We compared the taking-into-account model and the direct perception model on the basis of a correlation analysis for proximal, virtual, and real levels of the stimuli. The taking-into-account model, which assumes that visual angle is transformed into perceived size by taking perceived distance into account, was supported by an analysis for the proximal level of stimuli. The direct perception model, which assumes that there is no inferential process between perceived size and perceived distance, was partially supported by an analysis for the distal level of the stimuli.
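The power-function relation reported in finding (3) can be illustrated numerically. This is a minimal sketch, not the study's analysis: the coefficients and visual angles below are hypothetical, chosen only to show how such a function is fit.

```python
import numpy as np

# The abstract reports that the ratio of perceived size to perceived
# distance follows a power function of visual angle: ratio = a * theta**b.
# The coefficients a and b here are hypothetical, for illustration only.
a_true, b_true = 0.8, 1.2
theta = np.array([0.5, 1.0, 2.0, 4.0, 8.0])  # visual angle in degrees
ratio = a_true * theta ** b_true

# A power function is linear in log-log coordinates, so both parameters
# can be recovered with an ordinary least-squares fit on the logarithms.
b_fit, log_a_fit = np.polyfit(np.log(theta), np.log(ratio), 1)
a_fit = np.exp(log_a_fit)
```

Because the simulated data lie exactly on the power function, the log-log fit recovers the assumed exponent and scale factor.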

2.
Experiments are described in which each eye is presented with a target whose image remains on the same part of the retina when the eye moves. The patterns presented to each eye may be similar and may be placed on corresponding parts of the retina or may be placed in non-corresponding positions: alternatively, different targets may be presented to the two eyes. Each pattern fades intermittently. Sometimes both are seen together and sometimes both fields are dark at once. There is a small negative correlation between the times of clear vision with the two eyes. When corresponding areas of the two retinas are illuminated with red and green light respectively, the composite colour (yellow) is never perceived with steady illumination. When two similar patterns are in nearly corresponding positions there may be subjective fusion. With two different targets there is sometimes a subjective impression that the two patterns move with respect to one another even though their positions on the retina are fixed.

3.
This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most important, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.

4.
Three experiments were conducted to dissociate the perceived orientation of a stimulus from its orientation on the retina while inducing the McCollough effect. In the first experiment, the typical contingency between color and retinal orientation was eliminated by having subjects tilt their head 90° for half of the induction trials while the stimuli remained the same. The only relation remaining was that between color and the perceived or spatial orientation, which led to only a small contingent aftereffect. In contrast, when the spatial contingency was eliminated in the second experiment, the aftereffect was as large as when both contingencies were present. Finally, a third experiment determined that part of the small spatial effect obtained in the first experiment could be traced to hidden higher order retinal contingencies. The study suggested that even under optimal conditions the McCollough effect is not concerned with real-world properties of objects or events. Implications for several classes of theories are discussed.

5.
When observers are asked to localize the peripheral position of a target with respect to the midposition of a spatially extended comparison stimulus, they tend to mislocalize the target as being more outer than the midposition of the comparison stimulus (cf. Müsseler, Van der Heijden, Mahmud, Deubel, & Ertsey, 1999). To explain this finding, we examined a model that postulates that two sources are involved in the calculation of perceived positions: a sensory map and a motor map. The sensory map subserves visual perception, and the motor map contains the information for saccadic eye movements. The model predicts that errors in location judgements will be observed when the motor map has to provide the information for the judgements. In four experiments we examined, and found evidence for, this prediction. Localization errors were found in all conditions in which the motor map had to be used but not in conditions in which the sensory map could be used.

6.
Scholl BJ, Nakayama K. Perception, 2004, 33(4): 455-469
When an object A moves toward an object B until they are adjacent, at which point A stops and B starts moving, we often see a collision; that is, we see A as the cause of B's motion. The spatiotemporal parameters which mediate the perception of causality have been explored in many studies, but this work is seldom related to other aspects of perception. Here we report a novel illusion, wherein the perception of causality affects the perceived spatial relations between the two objects involved in a collision event: observers systematically underestimate the amount of overlap between two items in an event which is seen as a causal collision. This occurs even when the causal nature of the event is induced by a surrounding context, such that estimates of the amount of overlap in the very same event are much improved when the event is displayed in isolation, without a 'causal' interpretation. This illusion implies that the perception of causality does not proceed completely independently of other visual processes, but can affect the perception of other spatial properties.

7.
We examined the influence of context on exocentric pointing. In a virtual three-dimensional set-up, we asked our subjects to aim a pointer toward a target in two conditions. The target and the pointer were visible alone, or they were visible with planes through each of them. The planes consisted of a regular grid of horizontal and vertical lines. The presence of the planes had a significant influence on the indicated direction. These changes in indicated direction depended systematically on the orientation of the planes relative to the subject and on the angle between the planes. When the orientation of the (perpendicular) planes varied from asymmetrical to symmetrical to the frontoparallel plane, the indicated direction varied over a range of 15 degrees--from a slightly larger slant to a smaller slant--as compared with the condition without the contextual planes. When the dihedral angle between the two planes varied from 90 degrees to 40 degrees, the indicated direction varied over a range of less than 5 degrees: A smaller angle led to a slightly larger slant. The standard deviations in the indicated directions (about 3 degrees) did not change systematically. The additional structure provided by the planes did not lead to more consistent pointing. The systematic changes in the indicated direction contradict all theories that assume that the perceived distance between any two given points is independent of whatever else is present in the visual field--that is, they contradict all theories of visual space that assume that its geometry is independent of its contents (e.g., Gilinsky, 1951; Luneburg, 1947; Wagner, 1985).

8.
It has been proposed that inferring personal authorship for an event gives rise to intentional binding, a perceptual illusion in which one’s action and inferred effect seem closer in time than they otherwise would (Haggard, Clark, & Kalogeras, 2002). Using a novel, naturalistic paradigm, we conducted two experiments to test this hypothesis and examine the relationship between binding and self-reported authorship. In both experiments, an important authorship indicator – consistency between one’s action and a subsequent event – was manipulated, and its effects on binding and self-reported authorship were measured. Results showed that action-event consistency enhanced both binding and self-reported authorship, supporting the hypothesis that binding arises from an inference of authorship. At the same time, evidence for a dissociation emerged, with consistency having a more robust effect on self-reports than on binding. Taken together, these results suggest that binding and self-reports reveal different aspects of the sense of authorship.

9.
Georgeson MA, Meese TS. Perception, 1999, 28(6): 687-702
Much evidence shows that early vision employs an array of spatial filters tuned for different spatial frequencies and orientations. We suggest that for moderately low spatial frequencies these preliminary filters are not treated independently, but are used to perform grouping and segmentation in the patchwise Fourier domain. For example, consider a stationary plaid made from two superimposed sinusoidal gratings of the same contrast and spatial frequency oriented +/- 45 degrees from vertical. Most of the energy in a wavelet-like (e.g. simple-cell) transform of this stimulus is in the oblique orientations, but typically it looks like a compound structure containing blurred vertical and horizontal edges. This checkerboard structure corresponds with the locations of zero crossings in the output of an isotropic (circular) filter, synthesised from the linear sum of a set of oriented basis-filters (Georgeson, 1992 Proceedings of the Royal Society of London, Series B 249 235-245). However, the addition of a third harmonic in square-wave phase causes almost complete perceptual segmentation of the plaid into two overlapping oblique gratings. Here we confirm this result psychophysically using a feature-marking technique, and argue that this perceptual segmentation cannot be understood in terms of the zero crossings marked in the output of any static linear filter that is sensitive to all of the plaid's components. If it is assumed that zero crossings or similar are an appropriate feature-primitive in human vision, our results require a flexible process that combines and segments early basis-filters according to prevailing image conditions. Thus, we suggest that combination and segmentation of spatial filters in the patchwise Fourier domain underpins the perceptual segmentation observed in our experiments. Under this kind of image-processing scheme, registration across spatial scales occurs at the level of spatial filters, before features are extracted. This contrasts with many previous schemes where feature correspondence is required between spatial edge-maps at different spatial scales.
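The plaid stimulus described above can be sketched directly. This is an illustration of the stimulus construction only, not the authors' code; the image size and spatial frequency are arbitrary choices.

```python
import numpy as np

# Two sinusoidal gratings of equal contrast, oriented +/-45 degrees,
# summed into a plaid. Image size and frequency are arbitrary.
N = 64
x, y = np.meshgrid(np.arange(N, dtype=float), np.arange(N, dtype=float))
f = 2 * np.pi / 16            # fundamental spatial frequency

u = (x + y) / np.sqrt(2)      # coordinate along the +45 degree direction
v = (x - y) / np.sqrt(2)      # coordinate along the -45 degree direction
plaid = np.sin(f * u) + np.sin(f * v)

# Adding each component's third harmonic in square-wave phase (amplitude
# 1/3) gives the stimulus that the abstract reports as segmenting
# perceptually into two oblique gratings.
plaid_3f = plaid + (np.sin(3 * f * u) + np.sin(3 * f * v)) / 3

# The fundamental-only plaid factorises into vertical and horizontal
# terms, sin(fu) + sin(fv) = 2 sin(fx/sqrt(2)) cos(fy/sqrt(2)), which is
# consistent with the checkerboard (vertical/horizontal edge) percept.
checkerboard = 2 * np.sin(f * x / np.sqrt(2)) * np.cos(f * y / np.sqrt(2))
```

The factorisation follows from the sum-to-product identity sin A + sin B = 2 sin((A+B)/2) cos((A-B)/2), so the oblique components and the vertical-times-horizontal form are the same image.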

10.
Past research (e.g., J. M. Loomis, Y. Lippa, R. L. Klatzky, & R. G. Golledge, 2002) has indicated that spatial representations derived from spatial language can function equivalently to those derived from perception. The authors tested functional equivalence for reporting spatial relations that were not explicitly stated during learning. Participants learned a spatial layout by visual perception or spatial language and then made allocentric direction and distance judgments. Experiments 1 and 2 indicated allocentric relations could be accurately reported in all modalities, but visually perceived layouts, tested with or without vision, produced faster and less variable directional responses than language. In Experiment 3, when participants were forced to create a spatial image during learning (by spatially updating during a backward translation), functional equivalence of spatial language and visual perception was demonstrated by patterns of latency, systematic error, and variability.

11.
This study was designed to assess the effects of symmetry and plane of presentation on the determination of the perceptual center of flat figures. Experiment 1 demonstrates the existence of effects in improving center determination, both in the number of sides of the shape and in rotational and reflective symmetry (confounded in the experiment). Experiment 2 shows that the presentation plane has no effect on center determination. In Experiment 3, we divide the effects of the two symmetry types, showing that rotational symmetry alone is as effective as the presence of both symmetry types—that is, the presence of symmetry axes is not very useful in finding perceived centers.

12.
Empirical studies of the locus of perceived equidistance in binocular vision have revealed a characteristic change of its form, depending on absolute distance. This result is commonly taken to indicate influence of vergence-related binocular information, a conclusion that is by no means exclusively dictated by the data. Heller (1997) has suggested an alternative theoretical account that is based on the idea of independently combining the outcome of monocular input transformations without any form of binocular interaction. This article provides an experimental test of the structural assumption lying at the core of the axiomatic foundation of Heller's theory. I test the so-called Reidemeister condition under reduced cue conditions in two settings for each of 7 subjects. The results provide strong evidence for the validity of the Reidemeister condition and thus challenge the view that the locus of perceived equidistance depends on vergence-related binocular information. The discussion of the factors contributing to the monocular input transformations emphasizes the role of the optical properties of the eyes.

13.
Four experiments demonstrated that more time is required to scan further distances across visual images, even when the same amount of material falls between the initial focus point and the target. Not only did times systematically increase with distance but subjectively larger images required more time to scan than did subjectively smaller ones. Finally, when subjects were not asked to base all judgments on examination of their images, the distance between an initial focus point and a target did not affect reaction times.

14.
Schutz M, Lipscomb S. Perception, 2007, 36(6): 888-897
Percussionists inadvertently use visual information to strategically manipulate audience perception of note duration. Videos of long (L) and short (S) notes performed by a world-renowned percussionist were separated into visual (Lv, Sv) and auditory (La, Sa) components. Visual components contained only the gesture used to perform the note, auditory components the acoustic note itself. Audio and visual components were then crossed to create realistic musical stimuli. Participants were informed of the mismatch, and asked to rate note duration of these audio-visual pairs based on sound alone. Ratings varied based on visual (Lv versus Sv), but not auditory (La versus Sa) components. Therefore while longer gestures do not make longer notes, longer gestures make longer sounding notes through the integration of sensory information. This finding contradicts previous research showing that audition dominates temporal tasks such as duration judgment.

15.
This study was designed to assess the effects of symmetry and plane of presentation on the determination of the perceptual center of flat figures. Experiment 1 demonstrates the existence of effects in improving center determination, both in the number of sides of the shape and in rotational and reflective symmetry (confounded in the experiment). Experiment 2 shows that the presentation plane has no effect on center determination. In Experiment 3, we divide the effects of the two symmetry types, showing that rotational symmetry alone is as effective as the presence of both symmetry types--that is, the presence of symmetry axes is not very useful in finding perceived centers.

16.
Four perceptual identification experiments examined the influence of spatial cues on the recognition of words presented in central vision (with fixation on either the first or last letter of the target word) and in peripheral vision (displaced left or right of a central fixation point). Stimulus location had a strong effect on word identification accuracy in both central and peripheral vision, showing a strong right visual field superiority that did not depend on eccentricity. Valid spatial cues improved word identification for peripherally presented targets but were largely ineffective for centrally presented targets. Effects of spatial cuing interacted with visual field effects in Experiment 1, with valid cues reducing the right visual field superiority for peripherally located targets, but this interaction was shown to depend on the type of neutral cue. These results provide further support for the role of attentional factors in visual field asymmetries obtained with targets in peripheral vision but not with centrally presented targets.

17.
Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of everyday learning scenarios: in a fully unconstrained task, objects were freely categorized; in a semi-constrained task, exactly three groups had to be created; and in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.
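The multidimensional scaling step described above can be sketched with classical (Torgerson) MDS. This is a generic illustration, not the authors' analysis pipeline; the four object coordinates are hypothetical stand-ins for a matrix of averaged dissimilarity ratings.

```python
import numpy as np

# Hypothetical 2-D "physical" object coordinates; in a real analysis the
# input would be a matrix of averaged pairwise dissimilarity ratings.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # distance matrix

# Classical MDS: double-center the squared distances, then take the
# leading eigenvectors (scaled by root eigenvalues) as the configuration.
n = len(D)
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
order = np.argsort(w)[::-1][:2]              # two largest eigenvalues
X = V[:, order] * np.sqrt(np.maximum(w[order], 0.0))

# The recovered configuration X reproduces the original distances,
# up to rotation and reflection.
D_rec = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
```

For exact Euclidean distances of a 2-D configuration, two dimensions suffice and the reconstruction is exact; with noisy similarity ratings, the eigenvalue spectrum indicates how many perceptual dimensions are needed.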

18.
Perception of spatial order in central and extrafoveal vision was investigated by means of a task in which observers judged the relative position of a target line segment in a briefly flashed array of six line segments. The line arrays were size-scaled in extrafoveal vision according to the cortical magnification factor derived from the cone and ganglion cell distributions across the human retina, i.e. the stimulus representations in the striate cortex were of equal size in central and eccentric vision. In central vision, observers could judge the relative position of the target with high accuracy, but in extrafoveal vision the task was more difficult, especially with short interline distances. However, performance remained well above chance even in extrafoveal vision.

19.
20.
Mast FW, Kosslyn SM. Cognition, 2002, 86(1): 57-70
The debate about whether objects in mental images can be ambiguous has produced ambiguous results. In some studies, participants could not reinterpret objects in images, but even in the studies where participants could reinterpret visualized patterns, the results are not conclusive. The present study used a novel task to investigate the reinterpretation of ambiguous figures in imagery, which required the participants to mentally rotate a figure 180 degrees before attempting to "see" an alternate interpretation. In addition, the participants did not know the purpose of the study in advance, nor did they see alternate interpretations of the stimuli; moreover, we explicitly measured individual differences in key mental imagery abilities. Eight of the 44 participants discovered the alternate version while they were memorizing the figure; 16 reported it after mentally rotating an image; and 20 were not able to "see" the alternate version. The ability to rotate images, assessed with an independent task, was highly associated with reports of image reversals, whereas measures of other imagery abilities were not.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)