Similar Literature
20 similar documents retrieved (search time: 109 ms)
1.
Eight participants were presented with auditory or visual targets and, eight seconds after actively moving their eyes, head, or body (to pull apart the head, retinal, body, and external-space reference frames), indicated the targets' remembered positions relative to their head. Remembered target position was indicated by repositioning sounds or lights. Localization errors were related to head-on-body position, but not to eye-in-head or body-in-space position, for both auditory targets (0.023 dB/deg in the direction of head displacement) and visual targets (0.068 deg/deg in the direction opposite to head displacement). The results indicate that both auditory and visual localization use head-on-body information, suggesting a common coding into body coordinates: the only conversion that requires this information.
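The error slopes reported above (localization error per degree of head-on-body displacement) are regression coefficients. A minimal Python sketch of how such a slope could be estimated from per-trial data; all numbers and variable names are invented for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical per-trial data: head-on-body displacement (deg) and
# signed localization error for remembered targets.
head_displacement = np.array([-30, -15, 0, 15, 30, -30, -15, 0, 15, 30])
localization_error = np.array([-0.9, -0.4, 0.1, 0.3, 0.8,
                               -0.6, -0.3, 0.0, 0.5, 0.7])

# Least-squares fit: error = slope * displacement + intercept.
# The slope quantifies how strongly errors follow the head.
slope, intercept = np.polyfit(head_displacement, localization_error, 1)
print(f"error slope: {slope:.3f} per deg, intercept: {intercept:.3f}")
```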

2.
A DEVELOPMENTAL DEFICIT IN LOCALIZING OBJECTS FROM VISION
We describe a college student, A. H., with a developmental deficit in determining the location of objects from vision. The deficit is selective in that (a) localization from auditory or tactile information is intact, (b) A. H. reports the identity of mislocalized objects accurately, (c) visual localization errors preserve certain parameters of the target location, and (d) visual localization is severely impaired under certain stimulus conditions but nearly intact under others. These results bear on the representation and processing of location information in the visual system, and also have implications for understanding developmental dyslexia.

3.
A period of exposure to trains of simultaneous but spatially offset auditory and visual stimuli can induce a temporary shift in the perception of sound location. This phenomenon, known as the 'ventriloquist aftereffect', reflects a realignment of auditory and visual spatial representations such that they approach perceptual alignment despite their physical spatial discordance. Such dynamic changes to sensory representations are likely to underlie the brain's ability to accommodate inter-sensory discordance produced by sensory errors (particularly in sound localization) and variability in sensory transduction. It is currently unknown, however, whether the plastic changes induced by adaptation to spatially disparate inputs occur automatically or depend on selectively attending to the visual or auditory stimuli. Here, we demonstrate that robust auditory spatial aftereffects can be induced even in the presence of a competing visual stimulus. Importantly, we found that when attention is directed to the competing stimuli, the pattern of aftereffects is altered. These results indicate that attention can modulate the ventriloquist aftereffect.

4.
Subjects walked about outdoors wearing laterally displacing prisms and sound-attenuating muffs. Errors occurred in an auditory localization task during exposure to the visual displacement. With continued exposure, these errors tended to disappear after about 108 min; they disappeared earlier when muffs were not worn.

5.
Nineteen auditorially handicapped and 19 hearing children (4 to 12 years old) were compared on a visual localization task in which visual stimuli were presented both within and beyond the initial field of view. In the latter case, the localization response depends initially on a cognitive map of the surrounding environment. The youngest group (4- and 5-year-olds) of auditorially handicapped children showed, relative to their hearing peers, slower latencies of head movements to stimuli beyond the initial field of view. This finding is interpreted as indicating that these children have a less precise, less adequate cognitive map of the environment, possibly arising from disturbed crossmodal integration as a consequence of the absence of auditory input.

6.
Choice reaction times are measured for three values of a priori signal probability with three well-practiced observers. Two sets of data are taken, differing only in the modality of the reaction signal: in one set of conditions it is auditory, in the other, visual. The auditory reaction times are faster than the visual, and several other differences are noted. The latencies of errors and correct responses are nearly equal in the auditory data, whereas error latencies are nearly 30% faster in the visual data. Non-stationary effects (autocorrelation between successive latencies and a non-homogeneous distribution of errors) are clearly evident in the visual data, but are small or non-existent in the auditory data. The data are compared with several models of the choice reaction time process, but none of the models is completely adequate.
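The non-stationary effects described above are typically quantified as the lag-1 autocorrelation of successive latencies. A minimal sketch of that computation, with made-up reaction times:

```python
import numpy as np

# Hypothetical sequence of choice reaction times (ms) from one observer.
rts = np.array([312.0, 305.0, 330.0, 298.0, 341.0, 322.0, 310.0, 355.0])

# Lag-1 autocorrelation: correlate each latency with the next one.
# Values near zero suggest a stationary series; positive values mean
# slow responses tend to follow slow responses.
r_lag1 = np.corrcoef(rts[:-1], rts[1:])[0, 1]
print(f"lag-1 autocorrelation: {r_lag1:.3f}")
```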

7.
The influence of vision on auditory localization was assessed in an absolute identification paradigm using sighted and blindfolded subjects. Vision improved the accuracy of judgments directly in front of, to the side of, and behind the head of subjects in the horizontal plane, but had little relevance to vertical-plane localization. The exact form of the observed facilitation depended on the orientation of the speaker array to the head. In a second experiment involving sound localization in 10 visual environments, there was evidence for the operation of two distinct influences of vision on directional hearing. One result supported the hypothesis that vision provides a frame of reference for judgments, and a second indicated the importance of vision to the maintenance of spatial memory.

8.
It has previously been shown that adults localize unseen auditory targets more accurately with their eyes open than closed. The interpretation usually proposed to explain this phenomenon is that auditory spatial information is referred or translated to a visual frame of reference. The present experiments show that the presence of an auditory reference point facilitates auditory localization judgements in the same manner as a visual reference point does. Although our results do not support the visual frame of reference hypothesis, they suggest that the auditory and the visual modalities are strongly linked in their localizing processes.

9.
Previous studies have shown that adults respond faster and more reliably to bimodal than to unimodal localization cues. The current study investigated for the first time the development of audiovisual (A-V) integration in spatial localization behavior in infants between 1 and 10 months of age. We observed infants' head and eye movements in response to auditory, visual, or both kinds of stimuli presented either 25 degrees or 45 degrees to the right or left of midline. Infants under 8 months of age intermittently showed response latencies significantly faster toward audiovisual targets than toward either auditory or visual targets alone. They did so, however, without exhibiting a reliable violation of the Race Model, suggesting that probability summation alone could explain the faster bimodal responses. In contrast, infants between 8 and 10 months of age exhibited bimodal response latencies significantly faster than unimodal latencies for both eccentricity conditions, and their latencies violated the Race Model at 25 degrees of eccentricity. In addition to this main finding, we found age-dependent eccentricity and modality effects on response latencies. Together, these findings suggest that audiovisual integration emerges late in the first year of life and are consistent with neurophysiological findings from multisensory sites in the superior colliculus of infant monkeys showing that multisensory enhancement of responsiveness is not present at birth but emerges later in life.
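The Race Model test referred to above is usually Miller's race model inequality: at every latency, the bimodal cumulative distribution may not exceed the sum of the two unimodal ones unless the signals are genuinely integrated. A hedged Python sketch of the test (latencies are invented; the actual study used infant head and eye movement latencies):

```python
import numpy as np

def ecdf(rts, grid):
    """Empirical CDF of response latencies evaluated on a time grid."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, grid, side="right") / len(rts)

# Hypothetical latencies (ms) for auditory-only, visual-only, and
# bimodal (audiovisual) localization responses.
rt_a  = [420, 450, 480, 500, 530, 560]
rt_v  = [430, 460, 490, 520, 540, 580]
rt_av = [350, 370, 390, 410, 430, 460]

grid = np.arange(300, 601, 10)
bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)  # race model bound
violation = ecdf(rt_av, grid) - bound

# A positive value at any latency means the bimodal CDF exceeds what
# probability summation of two independent channels allows.
print("max violation:", violation.max())
```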

10.
It is well known that discrepancies in the location of synchronized auditory and visual events can lead to mislocalizations of the auditory source, so-called ventriloquism. In two experiments, we tested whether such cross-modal influences on auditory localization depend on deliberate visual attention to the biasing visual event. In Experiment 1, subjects pointed to the apparent source of sounds in the presence or absence of a synchronous peripheral flash. They also monitored for target visual events, either at the location of the peripheral flash or in a central location. Auditory localization was attracted toward the synchronous peripheral flash, but this was unaffected by where deliberate visual attention was directed in the monitoring task. In Experiment 2, bilateral flashes were presented in synchrony with each sound, to provide competing visual attractors. When these visual events were equally salient on the two sides, auditory localization was unaffected by which side subjects monitored for visual targets. When one flash was larger than the other, auditory localization was slightly but reliably attracted toward it, but again regardless of where visual monitoring was required. We conclude that ventriloquism largely reflects automatic sensory interactions, with little or no role for deliberate spatial attention.

11.
Interactions Between Exogenous Auditory and Visual Spatial Attention
Six experiments were carried out to investigate cross-modal interactions between exogenous auditory and visual spatial attention, employing Posner's cueing paradigm in detection, localization, and discrimination tasks. Results indicated cueing in detection tasks with visual or auditory cues and visual targets, but not with auditory targets (Experiment 1). In the localization tasks, cueing was found with both visual and auditory targets. Inhibition of return was apparent only in the within-modality conditions (Experiment 2), which suggests that it matters whether the attention system is activated directly (within a modality) or indirectly (between modalities). Increasing the cue validity from 50% to 80% influenced performance only in the localization task (Experiment 4). These findings are interpreted as indicative of modality-specific but interacting attention mechanisms. The results of Experiments 5 and 6 (up/down discrimination tasks) also showed cross-modal cueing, but not with visual cues and auditory targets. Furthermore, there was no inhibition of return in any condition, suggesting that some cueing effects might be task dependent.
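The cueing and inhibition-of-return effects in Posner's paradigm reduce to simple reaction time subtractions. A short sketch with invented condition means:

```python
# Mean RTs (ms) by cue validity; the numbers are illustrative only.
rt_valid, rt_invalid = 310.0, 345.0

# Cueing effect: responses are faster when the cue drew attention to
# the target location.
print(f"cueing effect: {rt_invalid - rt_valid:.0f} ms")

# At longer cue-target intervals the sign can reverse (valid slower
# than invalid): the signature of inhibition of return.
rt_valid_long, rt_invalid_long = 360.0, 340.0
print(f"IOR effect: {rt_valid_long - rt_invalid_long:.0f} ms")
```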

12.
Popple AV, Levi DM. Perception, 2005, 34(1): 87-107.
Amblyopia, a major cause of vision loss, is a developmental disorder of visual perception commonly associated with strabismus (squint). Although defined by a reduction in visual acuity, severe distortions of perceived visual location are common in strabismic amblyopia. These distortions can help us understand the cortical coding of visual location and its development in normal vision, as well as in amblyopia. The history of retinotopic mapping in the visual cortex highlights the potential impact of amblyopia. Theories of amblyopia include topological disarray of receptors in primary visual cortex, undersampling from the amblyopic eye compared with normal eyes, and the presence of anomalous retinal correspondence or multiple cortical representations of the strabismic fovea. We examined the distortions in a strabismic amblyope, using a pop-out localization task, in which normal observers made errors dependent on the visual context of the stimulus. The localization errors of the strabismic amblyope were abnormal. We found that none of the available theories could fully explain this one patient's localization performance. Instead, the observed behavior suggests that multiple adaptations of the underlying cortical topology are possible simultaneously in different parts of the visual field.

13.
Spatial representations in the visual system were probed in 4 experiments involving A. H., a woman with a developmental deficit in localizing visual stimuli. Previous research (M. McCloskey et al., 1995) has shown that A. H.'s localization errors take the form of reflections across a central vertical or horizontal axis (e.g., a stimulus 30 degrees to her left localized to a position 30 degrees to her right). The present experiments demonstrate that A. H.'s errors vary systematically as a function of where her attention is focused, independent of how her eyes, head, or body are oriented, or what potential reference points are present in the visual field. These results suggest that the normal visual system constructs attention-referenced spatial representations, in which the focus of attention defines the origin of a spatial coordinate system. A more general implication is that some of the brain's spatial representations take the form of coordinate systems.
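A. H.'s reflection errors can be modeled as mirroring stimulus coordinates across an axis through the origin of an attention-centered coordinate system. A minimal sketch under that assumption (coordinates and focus position are hypothetical):

```python
def reflect(stimulus_xy, focus_xy, axis="vertical"):
    """Predicted mislocalization: reflect a stimulus across a vertical
    or horizontal axis centered on the current focus of attention."""
    x, y = stimulus_xy
    fx, fy = focus_xy
    if axis == "vertical":   # mirror left/right about the focus
        return (2 * fx - x, y)
    return (x, 2 * fy - y)   # mirror up/down about the focus

# A stimulus 30 deg left of an attention focus at the midline is
# predicted to be localized 30 deg to the right.
print(reflect((-30.0, 0.0), (0.0, 0.0)))  # (30.0, 0.0)
```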

14.
Multisensory-mediated auditory localization
Multisensory integration is a powerful mechanism for maximizing sensitivity to sensory events. We examined its effects on auditory localization in healthy human subjects. The specific objective was to test whether the relative intensity and location of a seemingly irrelevant visual stimulus would influence auditory localization in accordance with the inverse effectiveness and spatial rules of multisensory integration that have been developed from neurophysiological studies with animals [Stein and Meredith, 1993, The Merging of the Senses (Cambridge, MA: MIT Press)]. Subjects were asked to localize a sound while a neutral visual stimulus was presented either above threshold (suprathreshold) or at threshold; in both cases the spatial disparity of the visual and auditory stimuli was systematically varied. The results reveal that stimulus salience is a critical factor in determining the effect of a neutral visual cue on auditory localization. Visual bias, and hence perceptual translocation of the auditory stimulus, appeared when the visual stimulus was suprathreshold, regardless of its location. This was not the case when the visual stimulus was at threshold: there, the influence of the visual cue was apparent only when the two cues were spatially coincident, and it resulted in an enhancement of stimulus localization. These data suggest that the brain uses multiple strategies to integrate multisensory information.
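Inverse effectiveness is commonly expressed with the enhancement index of Stein and Meredith: the multisensory gain as a percentage of the best unisensory response. A sketch of that arithmetic applied to localization accuracy (the accuracy values are invented):

```python
def enhancement_index(multisensory, best_unisensory):
    """Multisensory enhancement as a percentage of the best unisensory
    response (Stein & Meredith, 1993)."""
    return 100.0 * (multisensory - best_unisensory) / best_unisensory

# Illustrative accuracies (% correct): a weak, at-threshold stimulus
# paired with a coincident cue gains proportionally more than a strong,
# suprathreshold one, as inverse effectiveness predicts.
print(enhancement_index(multisensory=75.0, best_unisensory=50.0))  # 50.0
print(enhancement_index(multisensory=95.0, best_unisensory=90.0))  # ~5.6
```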

15.
Unlike visual and tactile stimuli, auditory signals that allow perception of timbre, pitch and localization are temporal. To process these, the auditory nervous system must either possess specialized neural machinery for analyzing temporal input, or transform the initial responses into patterns that are spatially distributed across its sensory epithelium. The former hypothesis, which postulates the existence of structures that facilitate temporal processing, is most popular. However, I argue that the cochlea transforms sound into spatiotemporal response patterns on the auditory nerve and central auditory stages; and that a unified computational framework exists for central auditory, visual and other sensory processing. Specifically, I explain how four fundamental concepts in visual processing play analogous roles in auditory processing.

16.
Effectively executing goal-directed behaviours requires both temporal and spatial accuracy. Previous work has shown that providing auditory cues enhances the timing of upper-limb movements. Interestingly, other work has shown beneficial effects of multisensory (i.e., combined audiovisual) cueing on temporospatial motor control. As a result, it is not clear whether adding visual to auditory cues can enhance the temporospatial control of sequential upper-limb movements specifically. The present study used a sequential pointing task to investigate the effects of auditory, visual, and audiovisual cueing on temporospatial errors. Eighteen participants performed pointing movements to five targets representing short, intermediate, and large movement amplitudes. Five isochronous auditory, visual, or audiovisual priming cues were provided before movement onset to specify an equal movement duration for all amplitudes. Movement time errors were then computed as the difference between the actual movement times and those specified by the sensory cues, yielding delta movement time errors (ΔMTE). It was hypothesized that auditory-based (i.e., auditory and audiovisual) cueing would yield lower movement time errors than visual cueing. The results showed that providing auditory rather than visual priming cues reduced ΔMTE, particularly for intermediate-amplitude movements. The results further highlight the beneficial impact of unimodal auditory cueing for improving visuomotor control, in the absence of significant effects for the multisensory audiovisual condition.
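The ΔMTE measure is a direct subtraction of the cue-specified movement time from the produced one. A minimal sketch (durations are invented):

```python
import numpy as np

# The isochronous cues specify one movement duration (ms) for every
# amplitude; produced durations typically grow with amplitude.
predicted_mt = 500.0
actual_mt = np.array([480.0, 510.0, 560.0])  # short, intermediate, large

# Delta movement time error: actual minus cue-specified duration.
delta_mte = actual_mt - predicted_mt
print("ΔMTE per amplitude (ms):", delta_mte)
print("mean absolute ΔMTE (ms):", np.abs(delta_mte).mean())
```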

17.
Eighty-six adults serially recalled lists of visually presented consonant letters that were similar in auditory or visual features or dissimilar in both feature sets. There were significantly more errors at every position of the auditorially similar list than at the corresponding positions of the visually similar and neutral lists, which did not themselves differ. There was a positive correlation between the tendency toward phonetic coding and overall performance, with 75 subjects making more errors on the auditory list than on either of the other lists. The eight subjects who made more errors on the visual list performed poorly in the recall of all lists. Factors governing the perceivability of stimuli apparently do not continue to operate significantly in controlling their recallability, at least in the case of veridical visual input.

18.
Previous studies have demonstrated large errors (over 30 degrees) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer's location; e.g., Philbeck, J. W., Sargent, J., Arthur, J. C., & Dopkins, S. (2008). Large manual pointing errors, but accurate verbal reports, for indications of target azimuth. Perception, 37, 511-534). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20-160 degrees azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to -19 degrees for visual targets at 160 degrees). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space.
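Egocentric azimuth (target direction from the observer) and exocentric direction (direction from the pointer to the target) differ only in the reference point of the same angular computation. A small geometric sketch with invented positions:

```python
import math

def azimuth(from_xy, to_xy):
    """Direction from one point to another, in degrees: 0 deg is
    straight ahead (+y), positive angles are to the right (+x)."""
    dx = to_xy[0] - from_xy[0]
    dy = to_xy[1] - from_xy[1]
    return math.degrees(math.atan2(dx, dy))

observer = (0.0, 0.0)
pointer  = (0.3, 0.5)  # the pointer is displaced from the observer
target   = (1.0, 1.0)

print(f"egocentric target azimuth: {azimuth(observer, target):.1f} deg")
print(f"exocentric pointer-to-target direction: {azimuth(pointer, target):.1f} deg")
```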

19.
Correctly integrating sensory information across different modalities is a vital task, yet there are illusions which cause the incorrect localization of multisensory stimuli. A common example of these phenomena is the "ventriloquism effect". In this illusion, the localization of auditory signals is biased by the presence of visual stimuli. For instance, when a light and sound are simultaneously presented, observers may erroneously locate the sound closer to the light than its actual position. While this phenomenon has been studied extensively in azimuth at a single depth, little is known about the interactions of stimuli at different depth planes. In the current experiment, virtual acoustics and stereo-image displays were used to test the integration of visual and auditory signals across azimuth and depth. The results suggest that greater variability in the localization of sounds in depth may lead to a greater bias from visual stimuli in depth than in azimuth. These results offer interesting implications for understanding multisensory integration.
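The link between auditory variability and visual bias is what reliability-weighted (maximum-likelihood) cue combination predicts: each cue is weighted by its inverse variance, so the noisier cue is captured by the more reliable one. A sketch of that standard model, with invented variances rather than values from this experiment:

```python
def fused_estimate(x_vis, var_vis, x_aud, var_aud):
    """Maximum-likelihood fusion of two Gaussian cues: weights are the
    cues' inverse variances (their reliabilities)."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_aud)
    return w_vis * x_vis + (1.0 - w_vis) * x_aud

# Light at 0, sound at 10 (arbitrary units). In azimuth, auditory
# variance is moderate, so the sound is pulled partway toward the light.
print(fused_estimate(0.0, 1.0, 10.0, 4.0))   # 2.0  -> bias of 8 toward vision
# In depth, auditory variance is much larger, so capture is near-total.
print(fused_estimate(0.0, 1.0, 10.0, 25.0))  # ~0.38 -> bias of ~9.6
```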

20.
The attention network test (ANT) assesses efficiency across the alerting, orienting, and executive components of visual attention. This study examined approaches to assessing auditory attention networks, and performance was compared to the visual ANT. Results showed that (1) alerting was sufficiently elicited in both a pitch discrimination and a sound localization task, although these effects were unrelated; (2) weak orienting of attention was elicited through pitch discrimination, varying with ISI and conflict level, but robust orienting was found through sound localization; and (3) executive control was sufficiently assessed in both the pitch discrimination and sound localization tasks, but these effects were unrelated. Correlation analysis suggested that, unlike alerting and orienting, the executive control functions measured in the sound localization task tap a shared attention network system. Overall, the results suggest that auditory ANT measures are largely task and modality specific, with sound localization offering the potential to assess all three attention networks in a single task.
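The three ANT network scores are defined as reaction time differences between cue and flanker conditions. A sketch of the standard subtractions, with invented condition means for a sound-localization variant:

```python
# Hypothetical mean RTs (ms) by condition.
rt = {
    "no_cue": 620.0, "double_cue": 585.0,       # alerting contrast
    "center_cue": 590.0, "spatial_cue": 555.0,  # orienting contrast
    "congruent": 560.0, "incongruent": 640.0,   # executive (conflict) contrast
}

alerting  = rt["no_cue"] - rt["double_cue"]       # benefit of a temporal warning
orienting = rt["center_cue"] - rt["spatial_cue"]  # benefit of spatial information
executive = rt["incongruent"] - rt["congruent"]   # cost of resolving conflict
print(f"alerting={alerting:.0f} ms, orienting={orienting:.0f} ms, "
      f"executive={executive:.0f} ms")
```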

