Similar Articles
A total of 20 similar articles were found (search time: 15 ms).
1.
When people scan mental images, their response times increase linearly with increases in the distance to be scanned, which is generally taken as reflecting the fact that their internal representations incorporate the metric properties of the corresponding objects. In view of this finding, we investigated the structural properties of spatial mental images created from nonvisual sources in three groups (blindfolded sighted, late blind, and congenitally blind). In Experiment 1, blindfolded sighted and late blind participants created metrically accurate spatial representations of a small-scale spatial configuration under both verbal and haptic learning conditions. In Experiment 2, late and congenitally blind participants generated accurate spatial mental images after both verbal and locomotor learning of a full-scale navigable space (created by an immersive audio virtual reality system), whereas blindfolded sighted participants were selectively impaired in their ability to generate precise spatial representations from locomotor experience. These results attest that in the context of a permanent lack of sight, encoding spatial information on the basis of the most reliable currently functional system (the sensorimotor system) is crucial for building a metrically accurate representation of a spatial environment. The results also highlight the potential of spatialized audio-rendering technology for exploring the spatial representations of visually impaired participants.
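The chronometric logic behind this finding is a linear relation between scanning time and scanned distance. As a purely illustrative sketch (not the authors' analysis), the slope of that relation can be estimated with an ordinary least-squares regression; the example below assumes NumPy and SciPy are available and uses invented distances and response times.

```python
# Hypothetical illustration of the time/distance analysis described above:
# mental scanning times are regressed on inter-landmark distances, and a
# reliably positive slope is taken as evidence of a metric representation.
import numpy as np
from scipy import stats

distances_cm = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])      # scanned distances (hypothetical)
scan_times_s = np.array([0.62, 0.71, 0.84, 0.90, 1.02, 1.11])  # mean scanning RTs (hypothetical)

fit = stats.linregress(distances_cm, scan_times_s)
print(f"slope = {fit.slope:.3f} s/cm, r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")
```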

2.
We examined haptic perception of the horizontal in visually impaired people. Blind people (late blind and congenitally blind), persons with very low vision, and blindfolded sighted individuals felt raised-line drawings of jars at four angles. They had to demonstrate their understanding that water remains horizontal, despite jar tilt, by selecting the correct raised-line drawing given four choices. Low-vision subjects, with near perfect scores, performed significantly better than the other groups of subjects. While the late-blind and blindfolded sighted subjects performed slightly better than the congenitally blind participants, the difference between the late-blind and congenitally blind groups was nonsignificant. The performance of the congenitally blind subjects indicates that visual experience is not necessary for the development of an understanding that water level stays horizontal, given container tilt.

3.
A set of three experiments was performed to investigate the role of visual imaging in the haptic recognition of raised-line depictions of common objects. Blindfolded sighted observers (Experiment 1) performed the task very poorly, while several findings converged to indicate that a visual translation process was adopted. These included (1) a strong correlation between imageability ratings (obtained in Experiment 1 and, independently, in Experiment 2) and both recognition speed and accuracy, (2) superior performance with, and greater ease of imaging, two-dimensional as opposed to three-dimensional depictions, despite equivalence in rated line complexity, and (3) a significant correlation between the observers' general imaging ability and the obtained imageability ratings of the stimulus depictions. That congenitally blind observers performed the same task even more poorly, while their performance did not differ for two- versus three-dimensional depictions (Experiment 3), provides further evidence that visual translation was used by the sighted. Such limited performance is contrasted with the considerable skill with which real common objects are processed and recognized haptically. The reasons for the general difference in haptic performance on two- versus three-dimensional tasks are considered. Implications for the presentation of spatial information in the form of tangible graphics displays for the blind are also discussed.

4.
The aim of this research was to assess whether the crucial factor determining the characteristics of blind people's spatial mental images is the visual impairment per se or the processing style imposed by the dominant perceptual modality used to acquire spatial information, i.e., simultaneous (vision) vs. sequential (kinaesthesis). Participants were asked to learn six positions in a large parking area via movement alone (congenitally blind, adventitiously blind, blindfolded sighted) or with vision plus movement (simultaneous sighted, sequential sighted), and then to mentally scan between positions in the path. The crucial manipulation concerned the sequential sighted group: their visual exploration was made sequential by placing visual obstacles within the pathway so that they could not see the positions along the pathway simultaneously. The results revealed a significant time/distance linear relation in all tested groups. However, the linear component was lower in sequential sighted and blind participants, especially the congenitally blind. Sequential sighted and congenitally blind participants showed almost overlapping performance. Differences between groups became evident when mentally scanning farther distances (more than 5 m). This threshold effect may reveal processing limitations due to the need to integrate and update spatial information. Overall, the results suggest that the characteristics of the processing style rather than the visual impairment per se affect blind people's spatial mental images.
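Because the key result here is a weaker linear component in some groups, one hypothetical way to illustrate the comparison is to fit the time/distance regression separately per group and compare slopes and fit quality. This is only a sketch under assumed values (NumPy/SciPy assumed), not the study's analysis.

```python
# Hypothetical sketch: compare the time/distance linear component across two
# groups (e.g., simultaneous sighted vs. congenitally blind). A weaker linear
# component shows up as a shallower slope and a lower R^2. Values are invented.
import numpy as np
from scipy import stats

distances_m     = np.array([1.0, 2.5, 4.0, 5.5, 7.0, 8.5])
rt_simultaneous = np.array([1.1, 1.5, 1.9, 2.4, 2.8, 3.2])  # strongly linear (hypothetical)
rt_congenital   = np.array([1.3, 1.6, 1.8, 2.0, 2.6, 2.7])  # shallower component (hypothetical)

for label, rts in [("simultaneous sighted", rt_simultaneous),
                   ("congenitally blind", rt_congenital)]:
    fit = stats.linregress(distances_m, rts)
    print(f"{label}: slope = {fit.slope:.2f} s/m, R^2 = {fit.rvalue**2:.2f}")
```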

5.
This study compared the sensory and perceptual abilities of the blind and sighted. The 32 participants were required to perform two tasks: tactile grating orientation discrimination (to determine tactile acuity) and haptic three-dimensional (3-D) shape discrimination. The results indicated that the blind outperformed their sighted counterparts (individually matched for both age and sex) on both tactile tasks. The improvements in tactile acuity that accompanied blindness occurred for all blind groups (congenital, early, and late). However, the improvements in haptic 3-D shape discrimination only occurred for the early-onset and late-onset blindness groups; the performance of the congenitally blind was no better than that of the sighted controls. The results of the present study demonstrate that blindness does lead to an enhancement of tactile abilities, but they also suggest that early visual experience may play a role in facilitating haptic 3-D shape discrimination.
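Grating-orientation acuity is commonly summarized as the groove width at which orientation can be discriminated at a criterion level, obtained by fitting a psychometric function to proportion-correct data. The sketch below is a generic illustration of that approach, not the procedure used in this study; the function form, the 75%-correct criterion, the data values, and the availability of SciPy are all assumptions.

```python
# Hypothetical sketch: estimate a grating-orientation threshold by fitting a
# 2AFC psychometric function (chance = 0.5) and reading off 75% correct.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(width_mm, threshold, slope):
    # Performance rises from chance (0.5) toward 1.0 as groove width increases;
    # psychometric(threshold) = 0.75, i.e. the 75%-correct point.
    return 0.5 + 0.5 / (1.0 + np.exp(-(width_mm - threshold) / slope))

groove_widths = np.array([0.5, 0.75, 1.0, 1.25, 1.5, 2.0])      # mm (hypothetical)
prop_correct  = np.array([0.52, 0.60, 0.74, 0.86, 0.93, 0.99])  # hypothetical data

(threshold, slope), _ = curve_fit(psychometric, groove_widths, prop_correct, p0=[1.0, 0.3])
print(f"estimated grating-orientation threshold ~ {threshold:.2f} mm")
```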

6.
Blindfolded sighted, congenitally blind, late-blind, and very-low-vision subjects were tested on a tangible version of the embedded-figures test. ANOVAs on accuracy measures showed superior performance by the very-low-vision and late-blind subjects compared with the blindfolded sighted and congenitally blind participants. Accuracy of the congenitally blind subjects was similar to that of the blindfolded sighted participants. However, all groups of blind subjects were significantly faster than the blindfolded sighted subjects. It is suggested that experience with pictures combined with haptic skill aids perceptual selectivity in touch.

7.
In this research, the impact of visual experience on the capacity to use egocentric (body-centered) and allocentric (object-centered) representations in combination with categorical (invariant non-metric) and coordinate (variable metric) spatial relations was examined. Participants memorized triads of 3D objects through haptic exploration (congenitally blind, adventitiously blind, and blindfolded) or haptic + visual exploration (sighted) and were then asked to judge: "which object was closest/farthest to you?" (egocentric-coordinate); "which object was on your left/right?" (egocentric-categorical); "which object was closest/farthest to a target object (e.g., cone)?" (allocentric-coordinate); "which object was on the left/right of the target object (e.g., cone)?" (allocentric-categorical). The results showed a slowdown in processing time when congenitally blind people provided allocentric-coordinate judgments and when adventitiously blind people provided egocentric-categorical judgments. Moreover, in egocentric judgments, adventitiously blind participants were less accurate than sighted participants. However, the overall performance was quite good, which supports the idea that the differences observed are more quantitative than qualitative. The theoretical implications of these results are discussed.

8.
Integrating different senses to reduce sensory uncertainty and increase perceptual precision can have an important compensatory function for individuals with visual impairment and blindness. However, how visual impairment and blindness impact the development of optimal multisensory integration in the remaining senses is currently unknown. Here we first examined how audio-haptic integration develops and changes across the life span in 92 sighted (blindfolded) individuals between 7 and 70 years of age. We used a child-friendly task in which participants had to discriminate different object sizes by touching them and/or listening to them. We assessed whether audio-haptic performance resulted in a reduction of perceptual uncertainty compared to auditory-only and haptic-only performance, as predicted by the maximum-likelihood estimation model. We then compared how this ability develops in 28 children and adults with different levels of visual experience, focusing on low-vision individuals and blind individuals who lost their sight at different ages during development. Our results show that in sighted individuals, adult-like audio-haptic integration develops around 13-15 years of age and remains stable until late adulthood. While early-blind individuals, even at the youngest ages, integrate audio-haptic information in an optimal fashion, late-blind individuals do not. Optimal integration in low-vision individuals follows a similar developmental trajectory as that of sighted individuals. These findings demonstrate that visual experience is not necessary for optimal audio-haptic integration to emerge, but that consistency of sensory information across development is key for the functional outcome of optimal multisensory integration.
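The maximum-likelihood estimation model mentioned above makes two standard predictions: the combined audio-haptic estimate is an inverse-variance weighted average of the unimodal estimates, and its variance is no larger than either unimodal variance. A minimal sketch of these textbook equations follows; all numbers are hypothetical and are not data from the study.

```python
# A minimal sketch of maximum-likelihood (inverse-variance weighted) cue
# combination. Illustrates the model's two predictions: a reliability-weighted
# average estimate and a reduced variance for the bimodal condition.

def mle_combine(est_a: float, var_a: float, est_h: float, var_h: float):
    """Optimally combine an auditory and a haptic size estimate."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_h)  # weight by reliability
    w_h = 1.0 - w_a
    est_ah = w_a * est_a + w_h * est_h                 # combined estimate
    var_ah = (var_a * var_h) / (var_a + var_h)         # predicted variance
    return est_ah, var_ah

est, var = mle_combine(est_a=5.2, var_a=0.8, est_h=4.8, var_h=0.4)
print(f"combined estimate = {est:.2f}, predicted variance = {var:.2f}")
# The predicted variance is always <= min(var_a, var_h); failing to show this
# reduction is what non-optimal integration means in the abstract above.
```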

9.
Visual cortical areas are involved in a variety of somatosensory tasks in the sighted, including tactile perception of two-dimensional patterns and motion, and haptic perception of three-dimensional objects. It is still unresolved whether visual imagery or modality-independent representations can better explain such cross-modal recruitment. However, these explanations are not necessarily in conflict with each other and might both be true, if imagery processes can access modality-independent representations. Greater visual cortical engagement in blind compared to sighted people is commonplace during language tasks, and also seems to occur during processing of tactile spatial information. Such engagement is even greater in the congenitally blind compared to the late blind, indicative of enhanced cross-modal plasticity during early development. At the other extreme, short-term visual deprivation of the normally sighted also leads to cross-modal plasticity. Altogether, the boundaries between sensory modalities appear to be flexible rather than immutable.

10.
11.
12.
Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and present new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, & Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. Thus, it seems that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities.

13.
Three experiments were carried out to investigate tactual processing of two-dimensional raised line drawings by blind and blindfolded sighted children. The results showed an unexpected but consistent pattern indicating that the introduction of 'meaning' facilitated the performance of the blindfolded sighted children but caused a relative decline in the performance of the congenitally blind. A general lack of evidence distinguishing between recognition of objects that had or had not been directly experienced through touch suggested that the internal spatial representations of objects depended not only on perceptual information but also on knowledge derived from other sources.

14.
Mental rotation in the congenitally blind was investigated with a haptic letter-judgment task. Blind subjects and blindfolded sighted subjects were presented with a letter at an orientation between 0° and 300° from upright and were timed while they judged whether it was a normal or mirror-image letter. Both groups showed response times that increased with the stimulus's departure from upright; this result was interpreted as reflecting the process of mental rotation. The results for the blind subjects suggest that mental rotation can operate on a spatial representation that does not have any specifically visual components. Further research showed that for the sighted subjects in the haptic task, the orientation of a letter is coded with respect to the position of the hand. Sighted subjects may code the orientation of the letter and then translate this code into a visual representation, or they may use a spatial representation that is not specifically visual.
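The standard chronometric analysis for this kind of data regresses response time on angular departure from upright, folding orientations at 180° because rotation can proceed in either direction; the reciprocal of the slope is often reported as a mental-rotation rate. The sketch below is purely illustrative, with invented values and NumPy/SciPy assumed, and is not the authors' analysis.

```python
# Hypothetical sketch of mental-rotation chronometry: RT regressed on angular
# disparity from upright (folded at 180 degrees). Values are invented.
import numpy as np
from scipy import stats

orientations_deg = np.array([0, 60, 120, 180, 240, 300])
rt_s             = np.array([1.0, 1.3, 1.7, 2.0, 1.8, 1.2])  # hypothetical mean RTs

angular_disparity = np.minimum(orientations_deg, 360 - orientations_deg)
fit = stats.linregress(angular_disparity, rt_s)
print(f"slope = {fit.slope * 1000:.1f} ms/deg, "
      f"implied rotation rate ~ {1.0 / fit.slope:.0f} deg/s")
```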

15.
This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most important, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.

16.
Spatial representation by 72 blind and blindfolded sighted children between the ages of 6 and 11 was tested in two experiments involving mental rotation of a raised line to directions varied clockwise. Experiment 1 showed that the two groups were well matched on tactual recognition and scored equally badly on matching displays to their own mentally rotated position. Experiment 2 found the sighted superior in recall tests. There was a highly significant interaction between sighted status and degree of rotation. Degree of rotation affected only the blind: their scores were significantly lower for rotating to oblique and to far orthogonal directions than to near orthogonal test positions. On near orthogonals the blind did not differ from the sighted. Age was a main effect, but it did not interact with any other variable. Older blind children whose visual experience dated from before the age of 6 were superior to congenitally blind subjects, but not differentially more so on oblique directions. The results are discussed in relation to hypotheses about the nature of spatial representation and the strategies of children whose prior experience derived from vision or from touch and movement.

17.
Early-blind, late-blind, and blindfolded sighted participants were presented with two haptic allocentric spatial tasks: a parallel-setting task, in an immediate and a 10-sec delay condition, and a task in which the orientation of a single bar was judged verbally. With respect to deviation size, the data suggest that mental visual processing played a beneficial role in both tasks. In the parallel-setting task, the early blind performed more variably and showed no improvement with delay, whereas the late blind did improve, but less than the sighted did. In the verbal judgment task, both early- and late-blind participants displayed larger deviations than the sighted controls. Differences between the groups were absent or much weaker with respect to the haptic oblique effect, a finding that reinforces the view that this effect is not of visual origin. The role of visual processing mechanisms and visual experience in haptic spatial tasks is discussed.

18.
We investigated which reference frames are preferred when matching spatial language to the haptic domain. Sighted, low-vision, and blind participants were tested on a haptic-sentence-verification task where participants had to haptically explore different configurations of a ball and a shoe and judge the relation between them. Results for the spatial relation "above", in the vertical plane, showed that various reference frames are available after haptic inspection of a configuration. Moreover, the pattern of results was similar for all three groups and resembled patterns found for the sighted on visual sentence-verification tasks. In contrast, when judging the spatial relation "in front", in the horizontal plane, the blind showed a markedly different response pattern. The sighted and low-vision participants did not show a clear preference for either the absolute/relative or the intrinsic reference frame when these frames were dissociated. The blind, on the other hand, showed a clear preference for the intrinsic reference frame. In the absence of a dominant cue, such as gravity in the vertical plane, the blind might emphasise the functional relationship between the objects owing to enhanced experience with haptic exploration of objects.

19.
Imagery in the congenitally blind: How visual are visual images?
Three experiments compared congenitally blind and sighted adults and children on tasks presumed to involve visual imagery in memory. In all three, the blind subjects' performances were remarkably similar to those of the sighted. The first two experiments examined Paivio's (1971) modality-specific imagery hypothesis. Experiment 1 used a paired-associate task with words whose referents were high in either visual or auditory imagery. The blind, like the sighted, recalled more high-visual-imagery pairs than any others. Experiment 2 used a free-recall task for words grouped according to modality-specific attributes, such as color and sound. The blind performed as well as the sighted on words grouped by color. In fact, the only consistent deficit in both experiments occurred for the sighted in recall of words whose referents are primarily auditory. These results challenge Paivio's theory and suggest either (a) that the visual imagery used by the sighted is no more facilitating than the abstract semantic representations used by the blind or (b) that the sighted are not using visual imagery. Experiment 3 used Neisser and Kerr's (1973) imaging task. Subjects formed images of scenes in which target objects were described as either visible in the picture plane or concealed by another object and thus not visible. On an incidental recall test for the target objects, the blind, like the sighted, recalled more pictorial than concealed targets. This finding suggests that the haptic images of the blind maintain occlusion just as the visual images of the sighted do.

20.
Does vision play a role in the elaboration of the semantic representation of small and large numerosities, notably in its spatial format? To investigate this issue, we compared, in the auditory modality, the performance of congenitally and early blind people with that of a sighted control group in two number comparison tasks (comparison to 5 and to 55) and one parity judgement task. Blind and sighted participants showed exactly the same distance and SNARC (Spatial Numerical Association of Response Codes) effects, indicating that they share the same semantic numerical representation. Consequently, our results suggest that the spatial dimension of the numerical representation is not necessarily attributable to the visual modality and that the absence of vision does not preclude the elaboration of this representation for 1-digit (Experiment 1) and 2-digit numerosities (Experiment 2). Moreover, as classical semantic numerical effects were observed in the auditory modality, the postulate of the amodal nature of the mental number line for both small and large magnitudes is reinforced.
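A common way to quantify the SNARC effect (not necessarily the exact analysis used in this study) is to regress, for each digit, the right-hand minus left-hand response-time difference on numerical magnitude; a reliably negative slope indicates the left-to-right spatial association. The sketch below uses invented values and assumes NumPy and SciPy are available.

```python
# Hypothetical sketch of a standard SNARC analysis: dRT = RT(right) - RT(left)
# is regressed on digit magnitude; a negative slope means small numbers are
# answered faster on the left and large numbers on the right. Values invented.
import numpy as np
from scipy import stats

digits = np.array([1, 2, 3, 4, 6, 7, 8, 9])
drt_ms = np.array([35, 22, 15, 8, -6, -14, -25, -33])  # hypothetical dRT values

fit = stats.linregress(digits, drt_ms)
print(f"SNARC slope = {fit.slope:.1f} ms per unit of magnitude (p = {fit.pvalue:.4f})")
```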
