Similar Articles
20 similar articles retrieved.
1.
The ability to perceive objects from a distance and navigate without vision depends principally on auditory information. Two experiments were conducted in order to assess this ability in congenitally blind children aged 4 to 12 years who had negligible amounts of visual experience or formal mobility training. In Experiment 1, children walked along a sidewalk toward a target location to get some candy. A box was placed along the path on some trials, and the children were instructed to avoid the box if it was present. The children spent more time in the region just in front of the box than in the region just behind it, indicating that they perceived the box and acted so as to navigate around it. In Experiment 2, children attempted to discriminate whether a nearby disk was on their left or on their right. The children performed at above-chance levels, again indicating distal perception of objects. The results of both experiments suggest that blind children with little or no visual experience or formal training utilize nonvisual information, presumably auditory, to perceive objects. The specific nature of this auditory information requires further investigation, but these findings imply that the underlying perceptual ability does not require experience in spatial vision or deliberate training and intervention.
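The abstract reports only that performance exceeded chance; as a hedged illustration of how above-chance performance on a two-alternative left/right task is conventionally assessed, the sketch below applies a one-sided binomial test against the 50% chance level. The trial counts are hypothetical, not taken from the study.

```python
# Hedged sketch: one-sided binomial test against chance (p = 0.5) for a
# two-alternative left/right discrimination. The counts are hypothetical;
# the abstract does not report per-child trial numbers.
from scipy.stats import binomtest

n_trials = 20    # hypothetical trials per child
n_correct = 16   # hypothetical correct left/right judgments

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"{n_correct}/{n_trials} correct, one-sided p = {result.pvalue:.4f}")
# A small p-value supports performance above the 50% chance level.
```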

2.
Imagery in the congenitally blind: how visual are visual images?
Three experiments compared congenitally blind and sighted adults and children on tasks presumed to involve visual imagery in memory. In all three, the blind subjects' performances were remarkably similar to those of the sighted. The first two experiments examined Paivio's (1971) modality-specific imagery hypothesis. Experiment 1 used a paired-associate task with words whose referents were high in either visual or auditory imagery. The blind, like the sighted, recalled more high-visual-imagery pairs than any others. Experiment 2 used a free-recall task for words grouped according to modality-specific attributes, such as color and sound. The blind performed as well as the sighted on words grouped by color. In fact, the only consistent deficit in both experiments occurred for the sighted in recall of words whose referents are primarily auditory. These results challenge Paivio's theory and suggest either (a) that the visual imagery used by the sighted is no more facilitating than the abstract semantic representations used by the blind or (b) that the sighted are not using visual imagery. Experiment 3 used Neisser and Kerr's (1973) imaging task. Subjects formed images of scenes in which target objects were described as either visible in the picture plane or concealed by another object and thus not visible. On an incidental recall test for the target objects, the blind, like the sighted, recalled more pictorial than concealed targets. This finding suggests that the haptic images of the blind maintain occlusion just as the visual images of the sighted do.

3.
Beyond perceiving the features of individual objects, we also have the intriguing ability to efficiently perceive average values of collections of objects across various dimensions. Over what features can perceptual averaging occur? Work to date has been limited to visual properties, but perceptual experience is intrinsically multimodal. In an initial exploration of how this process operates in multimodal environments, we explored statistical summarizing in audition (averaging pitch from a sequence of tones) and vision (averaging size from a sequence of discs), and their interaction. We observed two primary results. First, not only was auditory averaging robust, but if anything, it was more accurate than visual averaging in the present study. Second, when uncorrelated visual and auditory information were simultaneously present, observers showed little cost for averaging in either modality when they did not know until the end of each trial which average they had to report. These results illustrate that perceptual averaging can span different sensory modalities, and they also illustrate how vision and audition can both cooperate and compete for resources.
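As a hedged sketch of the two summary statistics involved, the code below averages disc sizes arithmetically and averages tone pitches in log-frequency (semitone) space; the semitone convention is an assumption on my part, not a detail given in the abstract.

```python
# Hedged sketch of the two averaging operations: arithmetic mean of disc
# sizes, and mean pitch computed in log-frequency (semitone) space -- a
# common convention assumed here, not stated in the abstract.
import math

def mean_size(diameters_mm):
    """Arithmetic mean of disc diameters (mm)."""
    return sum(diameters_mm) / len(diameters_mm)

def mean_pitch_hz(freqs_hz, ref_hz=440.0):
    """Average tone frequencies in semitone space, then convert back to Hz."""
    semitones = [12.0 * math.log2(f / ref_hz) for f in freqs_hz]
    return ref_hz * 2.0 ** (sum(semitones) / len(semitones) / 12.0)

print(mean_size([10.0, 14.0, 18.0, 22.0]))          # 16.0
print(round(mean_pitch_hz([330.0, 440.0, 587.0])))  # ~440
```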

4.
The authors tested the ability of capuchin monkeys (Cebus apella) to make inferences about hidden food. In Experiment 1, we showed the contents of 2 boxes, 1 of which was baited (visual condition, VC), or we shook both boxes, producing noise from the baited box (auditory condition, AC). Seven subjects (out of 8) were above chance in the VC, whereas only 1 was above chance in the AC. During a treatment phase, subjects manipulated empty and filled objects and thereby experienced the relation between noise and content. When tested again, 7 capuchins were above chance in the VC and 3 in the AC. In Experiment 2, we gave visual or auditory information only about the empty box, so that a successful choice implied inferential reasoning. All 4 subjects were above chance in the VC, and 2 in the AC. Control tests ruled out the possibility that success resulted from simply avoiding the shaken noiseless box, or from the use of arbitrary auditory information. Like apes (Call, 2004), capuchins were capable of inferential reasoning.

5.
6.
Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1–3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

7.
This research explores the way in which young children (5 years of age) and adults use perceptual and conceptual cues for categorizing objects processed by vision or by audition. Three experiments were carried out using forced-choice categorization tasks that allowed responses based on taxonomic relations (e.g., vehicles) or on schema category relations (e.g., vehicles that can be seen on the road). In Experiment 1 (visual modality), prominent responses based on conceptually close objects (e.g., objects included in a schema category) were observed. These responses were also favored when within-category objects were perceptually similar. In Experiment 2 (auditory modality), schema category responses depended on age and were influenced by both within- and between-category perceptual similarity relations. Experiment 3 examined whether these results could be explained in terms of sensory modality specializations or rather in terms of information processing constraints (sequential vs. simultaneous processing).

8.
An ability to detect the common location of multisensory stimulation is essential for us to perceive a coherent environment, to represent the interface between the body and the external world, and to act on sensory information. Regarding the tactile environment “at hand”, we need to represent somatosensory stimuli impinging on the skin surface in the same spatial reference frame as distal stimuli, such as those transduced by vision and audition. Across two experiments we investigated whether 6‐ (n = 14; Experiment 1) and 4‐month‐old (n = 14; Experiment 2) infants were sensitive to the colocation of tactile and auditory signals delivered to the hands. We recorded infants’ visual preferences for spatially congruent and incongruent auditory‐tactile events delivered to their hands. At 6 months, infants looked longer toward incongruent stimuli, whilst at 4 months infants looked longer toward congruent stimuli. Thus, even from 4 months of age, infants are sensitive to the colocation of simultaneously presented auditory and tactile stimuli. We conclude that 4‐ and 6‐month‐old infants can represent auditory and tactile stimuli in a common spatial frame of reference. We explain the age‐wise shift in infants’ preferences from congruent to incongruent in terms of an increased preference for novel crossmodal spatial relations based on the accumulation of experience. A comparison of looking preferences across the congruent and incongruent conditions with a unisensory control condition indicates that the ability to perceive auditory‐tactile colocation is based on a crossmodal rather than a supramodal spatial code by 6 months of age at least.

9.
It is not clear what role visual information plays in the development of space perception. It has previously been shown that, in the absence of vision, both the ability to judge orientation in the haptic modality and the ability to bisect intervals in the auditory modality are severely compromised (Gori, Sandini, Martinoli & Burr, 2010; Gori, Sandini, Martinoli & Burr, 2014). Here we also report, for the first time, a strong deficit in proprioceptive reproduction and auditory distance evaluation in early-blind children and adults. Interestingly, the deficit is not present in a small group of adults with acquired visual disability. Our results support the idea that, in the absence of vision, auditory and proprioceptive spatial representations may be delayed or drastically weakened owing to the lack of visual calibration of the auditory and haptic modalities during the critical period of development.

10.
Integrating different senses to reduce sensory uncertainty and increase perceptual precision can have an important compensatory function for individuals with visual impairment and blindness. However, how visual impairment and blindness impact the development of optimal multisensory integration in the remaining senses is currently unknown. Here we first examined how audio‐haptic integration develops and changes across the life span in 92 sighted (blindfolded) individuals between 7 and 70 years of age. We used a child‐friendly task in which participants had to discriminate different object sizes by touching them and/or listening to them. We assessed whether audio‐haptic performance resulted in a reduction of perceptual uncertainty compared to auditory‐only and haptic‐only performance, as predicted by the maximum‐likelihood estimation (MLE) model. We then compared how this ability develops in 28 children and adults with different levels of visual experience, focussing on low‐vision individuals and blind individuals who lost their sight at different ages during development. Our results show that in sighted individuals, adult‐like audio‐haptic integration develops around 13–15 years of age and remains stable until late adulthood. While early‐blind individuals, even at the youngest ages, integrate audio‐haptic information in an optimal fashion, late‐blind individuals do not. Optimal integration in low‐vision individuals follows a similar developmental trajectory to that of sighted individuals. These findings demonstrate that visual experience is not necessary for optimal audio‐haptic integration to emerge, but that consistency of sensory information across development is key for the functional outcome of optimal multisensory integration.
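The maximum-likelihood estimation model named here makes a standard quantitative prediction: each cue is weighted inversely to its variance, and the integrated estimate is more precise than either cue alone. A minimal sketch with invented unimodal thresholds:

```python
# Hedged sketch of the standard MLE cue-combination predictions. Weights
# are inversely proportional to the unimodal variances, and the predicted
# bimodal sigma is below the better unimodal sigma. Thresholds are invented.
def mle_prediction(sigma_a, sigma_h):
    """Predicted audio and haptic weights and combined sigma under MLE."""
    var_a, var_h = sigma_a ** 2, sigma_h ** 2
    w_a = var_h / (var_a + var_h)   # weight on the auditory estimate
    w_h = var_a / (var_a + var_h)   # weight on the haptic estimate
    sigma_ah = (var_a * var_h / (var_a + var_h)) ** 0.5
    return w_a, w_h, sigma_ah

# Hypothetical unimodal size-discrimination thresholds (arbitrary units):
w_a, w_h, sigma_ah = mle_prediction(sigma_a=2.0, sigma_h=1.0)
print(f"w_audio = {w_a:.2f}, w_haptic = {w_h:.2f}, sigma_AH = {sigma_ah:.2f}")
# Optimal integration is diagnosed when the measured bimodal threshold
# matches sigma_AH (here 0.89, lower than the better unimodal 1.0).
```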

11.
Visual mental imagery is a top-down process grounded in past experience and constitutes one form of information representation. Previous studies have mainly used visual stimuli to induce imagery, but this makes it difficult to rule out interference from visual afterimages and is inconsistent with the top-down character of imagery processing. Auditorily induced imagery resolves the afterimage problem and better fits the top-down character of imagery, because the images participants generate are the product of active processing that starts from their own past experience. Using an experimental method, we examined eye movements during imagery induced through the auditory channel, recording participants' gaze trajectories with an eye tracker. The results showed that (1) both moving and stationary objects can induce imagery, and eye movements occur during imagery; (2) moving objects induce eye movements to a greater degree; and (3) eye movements influence the construction of imagery and play a functional role in it.

12.
When places are explored without vision, observers go from the temporally sequenced, circuitous inputs available along walks to knowledge of spatial structure (i.e., the straight-line distances and directions characterizing the simultaneous arrangement of the objects passed along the way). Studies show that a life history of vision helps develop nonvisual sensitivity, but they are unspecific about the formative experiences and the underlying processes. This study compared judgments of straight-line distances and directions among landmarks in a familiar area of town by partially sighted persons who varied in type and age of visual impairment. Those with early-childhood loss of broad-field vision and those blind from birth performed significantly worse than those with early or late acuity loss and those with late field loss. Broad-field visual experience facilitates perceptual development by providing a basis for calibrating proprioceptive and efferent information from locomotion against distances and directions relative to the surrounding environment. Differences in the perception of walking, in turn, produce the observed differences in sensitivity to spatial structure.

13.
It has been shown that congenital blindness can lead to anomalies in the integration of auditory and tactile information, at least under certain conditions. In the present study, we used the parchment-skin illusion, a robust illustration of sound-biased perception of touch based on changes in frequency, to investigate the specificities of audiotactile interactions in early- and late-onset blind individuals. Blind individuals in both groups did not experience any illusory change in tactile perception when the frequency of the auditory signal was modified, whereas sighted individuals consistently experienced the illusion. This demonstration that blind individuals had reduced susceptibility to an auditory-tactile illusion suggests either that vision is necessary for the establishment of audiotactile interactions or that auditory and tactile information can be processed more independently in blind individuals than in sighted individuals. In addition, the results obtained in late-onset blind participants suggest that visual input may play a role in the maintenance of audiotactile integration.

14.
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmical patterns derived from the alternation of vowels and consonants in English and Japanese, presented in audition, in vision, in audition and vision at the same time, or in touch. Experiment 1 confirmed that discrimination is possible on the basis of auditory rhythmic patterns, and extended this finding to vision, using ‘aperture-close’ mouth movements of a schematic face. In Experiment 2, language discrimination was demonstrated using visual and auditory materials that did not resemble spoken articulation. In a combined analysis of the data from Experiments 1 and 2, a beneficial effect was also found when auditory rhythmic information was available to participants. Although discrimination could be achieved using vision alone, auditory performance was nevertheless better. In a final experiment, we demonstrated that the rhythm of speech can also be discriminated successfully by means of vibrotactile patterns delivered to the fingertip. The results of the present study therefore demonstrate that discrimination between languages' syllabic rhythmic patterns is possible on the basis of visual and tactile displays.
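As a hedged, schematic analogue of deriving a rhythmic pattern from the alternation of vowels and consonants, the sketch below reduces a romanized sentence to its consonant/vowel skeleton; the study's actual stimulus-construction procedure is not described in the abstract.

```python
# Hedged sketch: reduce a romanized sentence to a consonant/vowel (C/V)
# skeleton, the kind of alternation pattern from which rhythmic stimuli
# could be derived. Schematic analogue only, not the study's procedure.
VOWELS = set("aeiou")

def cv_pattern(text):
    """Map each letter to 'C' or 'V', ignoring spaces and punctuation."""
    return "".join(
        "V" if ch in VOWELS else "C"
        for ch in text.lower() if ch.isalpha()
    )

print(cv_pattern("the cat sat on the mat"))  # CCVCVCCVCVCCCVCVC
print(cv_pattern("watashi wa neko desu"))    # CVCVCCVCVCVCVCVCV
```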

15.
Spatial representation by 72 blind and blindfolded sighted children between the ages of 6 and 11 was tested in two experiments involving mental rotation of a raised line whose direction was varied clockwise. Experiment 1 showed that the two groups were well matched on tactual recognition and scored equally badly on matching displays to their own mentally rotated position. Experiment 2 found the sighted superior in recall tests. There was a highly significant interaction between sighted status and degree of rotation. Degree of rotation affected only the blind: their scores were significantly lower for rotating to oblique and to far orthogonal directions than to near orthogonal test positions. On near orthogonals the blind did not differ from the sighted. Age was a main effect, but it did not interact with any other variable. Older blind children whose visual experience dated from before the age of 6 were superior to congenitally blind subjects, but not differentially more so on oblique directions. The results are discussed in relation to hypotheses about the nature of spatial representation and the strategies of children whose prior experience derived from vision or from touch and movement.

16.
In two experiments, we explored whether 2.5-year-olds can use delayed video information to locate objects placed somewhere covertly, after first being given pre-test video experience. Our findings revealed that children had little difficulty passing a surprise-object task, in which a teddy bear was hidden in a box placed behind the child and hence visible only in the delayed video. In contrast, the children did not pass the surprise-mark task or the delayed self-recognition (DSR) task, even with pre-test video training. Similarly, delayed self-image experience and pre-test video training did not facilitate DSR performance in 2.5-year-olds. Children were also just as likely to fail a live video self-recognition task, suggesting that object-retrieval tasks pertaining to the self using video information are difficult for children at this age. The findings are discussed in light of possible changes in representational capabilities; the implications for the development of a temporally extended self are also noted.

17.
Little is known about the ability of blind people to cross obstacles after they have haptically explored the obstacles' size and position. Long-term absence of vision may affect spatial cognition in the blind, while their extensive experience using haptic information for guidance may lead to compensatory strategies. Seven blind and 7 sighted participants (tested both with vision available and blindfolded) walked along a flat pathway and crossed an obstacle after a haptic exploration. Blind and blindfolded subjects used different strategies to cross the obstacle. After the first 20 trials, the blindfolded subjects reduced the distance between the foot and the obstacle at the toe-off instant, whereas the blind behaved like the subjects with full vision. Blind and blindfolded participants showed larger foot clearance than participants with vision. At foot landing, the hip was further behind the foot in the blindfolded condition, while there were no differences between the blind and the vision conditions. For several parameters of the obstacle-crossing task, blind people were more similar to subjects with full vision, indicating that the blind subjects were able to compensate for the lack of vision.

18.
Research has examined the nature of visual imagery in normally sighted and blind subjects, but not in those with low vision. Findings with normally sighted subjects suggest that imagery involves primary visual areas of the brain. Since the plasticity of visual cortex appears to be limited in adulthood, we might expect the imagery of those with adult-onset low vision to be relatively unaffected by their losses. But if visual imagery is based on recent and current experience, we would expect the images of those with low vision to share some properties of impaired visual perception. Using an imagery questionnaire, we examined key parameters of the mental images reported by normally sighted subjects, by those with early- and late-onset low vision, and by a group of subjects with restricted visual fields. We found evidence that those with reduced visual acuity report the imaged distances of objects to be closer than those with normal acuity do, and also depict objects in imagery with lower resolution than those with normal visual acuity. We also found that all low-vision groups, like the normally sighted, image objects at a substantially greater distance than when asked to place them at a distance that ‘just fits’ their imagery field (the overflow distance). All low-vision groups, like the normally sighted, showed evidence of a limited visual field for imagery, but our group with restricted visual fields did not differ from the other groups in this respect. We conclude that the imagery of those with low vision is similar to that of those with normal vision in being dependent on the size of the objects or features being imaged, but that it also reflects their reduced visual acuity. We found no evidence that imagery depends on age of onset or number of years of vision impairment.

19.
We investigated the role of visual experience in the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally blind, and late-blind participants. We first established that spatial updating of haptic scenes of novel objects occurs in sighted individuals. All participants were required to recognise a previously learned haptic scene of novel objects presented at either the same orientation as at learning or a different one, while they either remained in the same position or moved to a new position relative to the scene. Scene rotation incurred a cost in recognition performance in all groups. However, overall haptic scene recognition performance was worse in the congenitally blind group. Moreover, unlike the late-blind and sighted groups, the congenitally blind group were unable to compensate for the cost of scene rotation with observer motion. Our results suggest that vision plays an important role in representing and updating spatial information encoded through touch and have important implications for the role of vision in the development of neuronal areas involved in spatial cognition.

20.
The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio–visual fluent speech in 12-month-old infants. German-learning infants' ability to match auditory and visual German and French fluent speech was assessed using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants exhibited no matching performance for either the native or the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only for their native language. Thus, the findings suggest that ID speech may influence the intersensory perception of fluent speech, and they shed further light on multisensory perceptual narrowing.
