Similar Articles
20 similar articles found.
1.
In left-right comparisons of perceived length, objects on the left were slightly overestimated by vision alone but not by touch alone. This conflict between vision and touch occurred in the absence of any experimentally induced distortion or illusion. Judgments made with concurrent vision and touch were similar to those made with vision alone regardless of whether the observers were judging which object felt longer, looked longer, or was longer. The resolution of a natural conflict between vision and touch is an example of natural visual capture.

2.
Subjects attempted to recognize simple line drawings of common objects using either touch or vision. In the touch condition, subjects explored raised line drawings using the distal pad of the index finger or the distal pads of both the index and middle fingers. In the visual condition, a computer-driven display was used to simulate tactual exploration. By moving an electronic pen over a digitizing tablet, the subject could explore a line drawing stored in memory; on the display screen a portion of the drawing appeared to move behind a stationary aperture, in concert with the movement of the pen. This aperture was varied in width, thus simulating the use of one or two fingers. In terms of average recognition accuracy and average response latency, recognition performance was virtually the same in the one-finger touch condition and the simulated one-finger vision condition. Visual recognition performance improved considerably when the visual field size was doubled (simulating two fingers), but tactual performance showed little improvement, suggesting that the effective tactual field of view for this task is approximately equal to one finger pad. This latter result agrees with other reports in the literature indicating that integration of two-dimensional pattern information extending over multiple fingers on the same hand is quite poor. The near equivalence of tactual picture perception and narrow-field vision suggests that the difficulties of tactual picture recognition must be largely due to the narrowness of the effective field of view.

3.
Research has examined the nature of visual imagery in normally sighted and blind subjects, but not in those with low vision. Findings with normally sighted subjects suggest that imagery involves primary visual areas of the brain. Since the plasticity of visual cortex appears to be limited in adulthood, we might expect imagery of those with adult-onset low vision to be relatively unaffected by these losses. But if visual imagery is based on recent and current experience, we would expect images of those with low vision to share some properties of impaired visual perception. We examined key parameters of mental images reported by normally sighted subjects, compared to those with early- and late-onset low vision, and with a group of subjects with restricted visual fields using an imagery questionnaire. We found evidence that those with reduced visual acuity report the imagery distances of objects to be closer than those with normal acuity and also depict objects in imagery with lower resolution than those with normal visual acuity. We also found that all low vision groups, like the normally sighted, image objects at a substantially greater distance than when asked to place them at a distance that ‘just fits’ their imagery field (overflow distance). All low vision groups, like the normally sighted, showed evidence of a limited visual field for imagery, but our group with restricted visual fields did not differ from the other groups in this respect. We conclude that imagery of those with low vision is similar to that of those with normal vision in being dependent on the size of objects or features being imaged, but that it also reflects their reduced visual acuity. We found no evidence for a dependence on imagery of age of onset or number of years of vision impairment.

4.
To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks’ object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

5.
A young woman, blinded by the development of corneal opacity at the age of 3 years, was given a corneal graft at the age of 27. Though the image-forming powers of the eye were largely restored, the patient showed little recovery of functional vision. Six months after operation she could detect and locate conspicuous objects and had some degree of ambient spatial vision but she could not learn to recognize simple visual patterns. Eventually she reverted to the life of a blind person. Her failure to recover is discussed in terms of the known deleterious effects of restricted early visual experience on the development of the visual cortex in animals.

6.
Visual capture of touch: out-of-the-body experiences with rubber gloves
When the apparent visual location of a body part conflicts with its veridical location, vision can dominate proprioception and kinesthesia. In this article, we show that vision can capture tactile localization. Participants discriminated the location of vibrotactile stimuli (upper, at the index finger, vs. lower, at the thumb), while ignoring distractor lights that could independently be upper or lower. Such tactile discriminations were slowed when the distractor light was incongruent with the tactile target (e.g., an upper light during lower touch) rather than congruent, especially when the lights appeared near the stimulated hand. The hands were occluded under a table, with all distractor lights above the table. The effect of the distractor lights increased when rubber hands were placed on the table, 'holding' the distractor lights, but only when the rubber hands were spatially aligned with the participant's own hands. In this aligned situation, participants were more likely to report the illusion of feeling touch at the rubber hands. Such visual capture of touch appears cognitively impenetrable.

7.
Viewpoint dependence in visual and haptic object recognition
On the whole, people recognize objects best when they see the objects from a familiar view and worse when they see the objects from views that were previously occluded from sight. Unexpectedly, we found haptic object recognition to be viewpoint-specific as well, even though hand movements were unrestricted. This viewpoint dependence was due to the hands preferring the back "view" of the objects. Furthermore, when the sensory modalities (visual vs. haptic) differed between learning an object and recognizing it, recognition performance was best when the objects were rotated back-to-front between learning and recognition. Our data indicate that the visual system recognizes the front view of objects best, whereas the hand recognizes objects best from the back.

8.
In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by an additional memory task. Singleton distractors interfered even more when they were identical or related to the object held in memory, but only when it was difficult to verbalize the memory content. Furthermore, this content-specific interaction occurred for features that were relevant to the memory task but not for irrelevant features of the same object or for once-remembered objects that could be forgotten. Finally, memory-related distractors attracted more eye movements but did not result in longer fixations. The results demonstrate memory-driven attentional capture on the basis of content-specific representations.

9.
A case is reported of an associative visual agnosic patient who could not draw from memory objects he could recognize, even though he could copy drawings flawlessly. His ability to generate mental visual images was found to be spared, as was his ability to operate upon mental images. These data suggest that the patient could generate mental images but could not draw from memory because he did not have access to stored knowledge about pictorial attributes of objects. A similar functional impairment can be found in some other visual agnosic patients and in patients affected by optic aphasia. The present case allows a discussion of relationships among drawing from memory, imagery, and copying procedures.

10.
A left-handed woman developed visual object agnosia, prosopagnosia, and visual disorientation after resection of the right occipital lobe. Color agnosia and alexia were absent. When asked to identify objects presented visually, the patient's errors represented visually related objects (underspecifications) or perseverations. Identification was facilitated when she observed the object being used in a natural way. Identification was impaired by surrounding the object with unrelated objects, decreasing the background illumination, decreasing the duration of exposure of the object to the patient, and probably also by decreasing the visual angle subtended by the object. In addition, there were disturbances of visualization (i.e., imaging in the absence of a visual stimulus) that paralleled the perceptual difficulties. We conclude that: (1) A deficit in visual perception, characterized by insufficient feature analysis of visual stimuli, was the basis of the visual agnosia in this case. (2) The visual agnosia could not be explained by (a) a vision-language disconnection syndrome, (b) decay of visual memory traces, or (c) deficiencies in the visual fields (pathologic Funktionswandel). (3) The ability to visualize (visual imagery) probably utilizes some of the same neural pathways used in perception. (4) The results in this case probably can be generalized to some but not all cases of visual agnosia; in particular, the deficit in most previously reported patients with prosopagnosia is similar to that of our case. However, agnosic alexia and color agnosia usually have a different neuropsychological basis.

11.
How long does it take for a newborn to recognize an object? Adults can recognize objects rapidly, but measuring object recognition speed in newborns has not previously been possible. Here we introduce an automated controlled-rearing method for measuring the speed of newborn object recognition in controlled visual worlds. We raised newborn chicks (Gallus gallus) in strictly controlled environments that contained no objects other than a single virtual object, and then measured the speed at which the chicks could recognize that object from familiar and novel viewpoints. The chicks were able to recognize the object rapidly, at presentation rates of 125 ms per image. Further, recognition speed was equally fast whether the object was presented from familiar viewpoints or novel viewpoints (30° and 60° azimuth rotations). Thus, newborn chicks can recognize objects across novel viewpoints within a fraction of a second. These results demonstrate that newborns are capable of both rapid and invariant object recognition at the onset of vision.

12.
In earlier work, the author has demonstrated that tactile pattern perception and visual pattern perception exhibit many parallels when the effective spatial resolution of vision is reduced to that of touch, thus supporting the hypothesis that the two pattern senses are functionally similar when matched in spatial bandwidth. The present experiments demonstrate a clear counter-example to this hypothesis of functional similarity. Specifically, it was found that the lateral masking effect of a surround on tactile character recognition increases when the surround changes in composition from solid lines to dots, whereas for vision, recognition performance goes in the opposite direction. This finding necessitates some modification of the model of character recognition proposed by the author (Loomis, 1990) as it applies to the sensing of raised tactile patterns. One possible modification would be to incorporate, as the initial stage of pattern transformation, the continuum mechanics model for the skin that was developed by Phillips and Johnson (1981b).

13.
The phenomena of prismatically induced “visual capture” and adaptation of the hand were compared. In Experiment 1, it was demonstrated that when the subject’s hand was transported for him by the experimenter (passive movement) immediately preceding the measure of visual capture, the magnitude of the immediate shift in felt limb position (visual capture) was enhanced relative to when the subject moved the hand himself (active movement). In Experiment 2, where the dependent measure was adaptation of the prism-exposed hand, the opposite effect was produced by the active/passive manipulation. It appears, then, that different processes operate to produce visual capture and adaptation. It was speculated that visual capture represents an immediate weighting of visual over proprioceptive input as a result of the greater precision of vision and/or the subject’s tendency to direct his attention more heavily to this modality. In contrast, prism adaptation is probably a recalibration of felt limb position in the direction of vision, induced by the presence of a registered discordance between visual and proprioceptive inputs.

14.
Preschoolers who explore objects haptically often fail to recognize those objects in subsequent visual tests. This suggests that children may represent qualitatively different information in vision and haptics and/or that children’s haptic perception may be poor. In this study, 72 children (2½-5 years of age) and 20 adults explored unfamiliar objects either haptically or visually and then chose a visual match from among three test objects, each matching the exemplar on one perceptual dimension. All age groups chose shape-based matches after visual exploration. Both 5-year-olds and adults also chose shape-based matches after haptic exploration, but younger children did not match consistently in this condition. Certain hand movements performed by children during haptic exploration reliably predicted shape-based matches but occurred at very low frequencies. Thus, younger children’s difficulties with haptic-to-visual information transfer appeared to stem from their failure to use their hands to obtain reliable haptic information about objects.

15.
16.
The present research investigates newborn infants' perceptions of the shape and texture of objects through studies of the bi-directionality of cross-modal transfer between vision and touch. Using an intersensory procedure, four experiments were performed in newborns to study their ability to transfer shape and texture information from vision to touch and from touch to vision. The results showed that cross-modal transfer of shape is not bi-directional at birth. Newborns visually recognized a shape previously held but they failed to tactually recognize a shape previously seen. In contrast, a bi-directional cross-modal transfer of texture was observed. Taken together, the results suggest that newborn infants, like older children and adults, gather information differently in the visual and tactile modes, for different object properties. The findings provide evidence for continuity in the development of mechanisms for perceiving object properties.

17.
Human observers are sensitive to the '(physical) light field' in the sense that they have expectations of how a given object would appear if it were introduced in the scene in front of them at some arbitrary location. Thus the 'visual light field' is defined even in the 'empty space' between objects. In that sense the light field is akin to visual space considered as a 'container'. The visual light field at any given point can be measured in psychophysical experiments through the introduction of a suitable 'gauge object' at that position and letting the observer adjust the appearance of that gauge object (eg through suitable computer rendering) so as to produce a 'visual fit' into the scene. The parameters of the rendering will then be considered as the measurement result. We introduced white spheres as gauge objects at various locations in stereoscopically presented photographic scenes. We measured the direction ('direction of the light'), diffuseness ('quality of the light' as used by photographers and interior decorators), and intensity of the light field. We used three very different scenes, with very different physical light fields. The images were geometrically and photometrically calibrated, so we were in a position to correlate the observations with the physical 'ground truth'. We report that human observers are quite sensitive to various parameters of the physical light field and generally arrive at close to veridical settings, although a number of comparatively minor systematic deviations from veridicality can be noted. We conclude that the visual light field is an entity whose existence is at least as well defined as that of visual space, despite the fact that the visual light field hardly appears as prominently in vision science as it does in the visual arts.

18.
Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and we provide new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, & Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. Thus, it seems that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities.

19.
Effects of occlusion on pigeons' visual object recognition
DiPietro NT, Wasserman EA, Young ME. Perception, 2002, 31(11): 1299-1312
Casual observation suggests that pigeons and other animals can recognize occluded objects; yet laboratory research has thus far failed to show that pigeons can do so. In a series of experiments, we investigated pigeons' ability to 'name' shaded, textured stimuli by associating each with a different response. After first learning to recognize four unoccluded objects, pigeons had to recognize the objects when they were partially occluded by another surface or when they were placed on top of another surface; in each case, recognition was weak. Following training with the unoccluded stimuli and with the stimuli placed on top of the occluder, pigeons' recognition of occluded objects dramatically improved. Pigeons' improved recognition of occluded objects was not limited to the trained objects but transferred to novel objects as well. Evidently, the recognition of occluded objects requires pigeons to learn to discriminate the object from the occluder; once this discrimination is mastered, occluded objects can be better recognized.

20.
Previous research has shown that visual perception is affected by sensory information from other modalities. For example, sound can alter the visual intensity or the number of visual objects perceived. However, when touch and vision are combined, vision normally dominates—a phenomenon known as visual capture. Here we report a cross-modal interaction between active touch and vision: The perceived number of brief visual events (flashes) is affected by the number of concurrently performed finger movements (keypresses). This sensorimotor illusion occurred despite little ambiguity in the visual stimuli themselves and depended on a close temporal proximity between movement execution and vision.
