Similar Articles (20 results)
1.
A critical analysis of a recent paper by Shelton and Searle (1980) on the visual facilitation of auditory localization is presented. The author claims that Shelton and Searle fail to make the relevant distinction between Warren’s (1970) frame of reference hypothesis and Jones’s (1975) spatial memory hypothesis. Shelton and Searle’s claim that they have demonstrated the existence of two distinct forms of visual facilitation is questioned. Data from an experiment in which auditory localization was tested under two levels of illumination (light and dark) and with two kinds of eye movement instructions (fixed and moving) are presented. The results show that facilitation occurs only when eye movements take place in a lighted environment. This is interpreted as supporting Warren’s frame of reference hypothesis.

2.
Recent studies on the conceptualization of abstract concepts suggest that the concept of time is represented along a left-right horizontal axis, such that left-to-right readers represent past on the left and future on the right. Although it has been demonstrated with strong consistency that the localization (left or right) of visual stimuli could modulate temporal judgments, results obtained with auditory stimuli are more puzzling, with both failures and successes at finding the effect in the literature. The present study supports an account based on the relative relevance of visual versus auditory-spatial information in the creation of a frame of reference to map time: The auditory location of words interacted with their temporal meaning only when auditory information was made more relevant than visual spatial information by blindfolding participants.

3.
Although discrete auditory stimuli have been found useful for emergency braking, the role of continuous speed-related auditory feedback has not yet been investigated. This point may nevertheless be of importance in electric vehicles, in which acoustic cues are drastically changed. The present study addressed this question through two experiments. In experiment 1, 12 regular drivers were exposed to naturalistic auditory feedback mimicking that of electric cars while facing dynamic visual scenes in a 3D driving simulator. After being passively driven up to a sustained constant speed, subjects had to stop their car in front of a traffic light that unexpectedly turned red. Modifications of the speed-related auditory feedback did not impact braking initiation or regulation. In experiment 2, synthesized auditory feedback based on the Shepard-Risset glissando was provided to a new sample of 15 regular drivers in the same task. Pitch variations of this acoustic stimulus, although not scaled to an absolute speed, were manipulated as a function of visual speed changes. Changing the mapping between pitch variations of the synthesized feedback and visual speed changes induced braking adjustments that depended on the acceleration/deceleration feedback. These findings stress the importance of the acoustic content and its dynamics for car speed control.
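For readers unfamiliar with the stimulus, a Shepard-Risset glissando is a sum of octave-spaced sinusoids whose frequencies glide continuously while a fixed spectral envelope fades components in and out at the band edges, producing an endlessly rising (or falling) pitch. Below is a minimal Python sketch of this kind of synthesis; it is not the authors' stimulus code, and all parameter values (band, component count, glide rate) are illustrative assumptions.

```python
import numpy as np

def shepard_risset(duration=10.0, sr=44100, fmin=40.0,
                   n_octaves=7, rate=0.1):
    """Synthesize a Shepard-Risset glissando.

    rate is the glide speed in octaves per second
    (positive = endlessly rising, negative = falling).
    """
    t = np.arange(int(duration * sr)) / sr
    y = np.zeros_like(t)
    for k in range(n_octaves):
        # Log-frequency position of component k, wrapping around the band.
        pos = (k + rate * t) % n_octaves
        freq = fmin * 2.0 ** pos
        # Raised-cosine envelope over log frequency: components fade to
        # silence at both band edges, which hides the wrap-around and
        # sustains the illusion of endless pitch change.
        amp = 0.5 * (1.0 - np.cos(2.0 * np.pi * pos / n_octaves))
        # Integrate instantaneous frequency to obtain a continuous phase.
        phase = 2.0 * np.pi * np.cumsum(freq) / sr
        y += amp * np.sin(phase)
    return y / np.max(np.abs(y))
```

In a speed-feedback application of the kind described above, rate would be driven by the (visual) acceleration or deceleration signal rather than held constant.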

4.
Two experiments comparing imaginative processing in different modalities with semantic processing were carried out to investigate whether conceptual knowledge can be represented in different formats. Participants were asked to judge the similarity between visual, auditory, and olfactory images in the imaginative block, and whether two items belonged to the same category in the semantic block. Items were verbally cued in both experiments. The degree of similarity between the imaginative and semantic items was varied across experiments. Experiment 1 showed that semantic processing was faster than visual and auditory imaginative processing, whereas no differentiation was possible between semantic processing and olfactory imaginative processing. Experiment 2 revealed that only visual imaginative processing could be differentiated from semantic processing in terms of accuracy. These results show that visual and auditory imaginative processing can be differentiated from semantic processing, although both visual and auditory images strongly rely on semantic representations. By contrast, no differentiation is possible within the olfactory domain. Results are discussed within the framework of the imagery debate.

5.
Eight participants were presented with auditory or visual targets and then indicated the targets' remembered positions relative to their head eight seconds after actively moving their eyes, head, or body, a manipulation that pulls apart head, retinal, body, and external-space reference frames. Remembered target position was indicated by repositioning sounds or lights. Localization errors were related to head-on-body position, but not to eye-in-head or body-in-space position, for both auditory targets (0.023 dB/deg in the direction of head displacement) and visual targets (0.068 deg/deg in the direction opposite to head displacement). The results indicate that both auditory and visual localization use head-on-body information, suggesting a common coding into body coordinates, the only conversion that requires this information.
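To make the final claim concrete: converting a target direction from head-centred to body-centred coordinates is the one step that requires the head-on-body angle, which is why errors tracking that signal point to a common body-centred code. A minimal sketch in Python (the function and variable names are ours, not the authors'):

```python
def head_to_body_azimuth(target_azimuth_re_head, head_on_body_angle):
    """Convert a target azimuth (degrees) from head-centred to
    body-centred coordinates. The head-on-body angle is the only extra
    signal this conversion needs; eye-in-head and body-in-space
    information are irrelevant at this step.
    """
    return target_azimuth_re_head + head_on_body_angle

# Example: a sound 10 deg right of the head, with the head turned
# 20 deg left on the body, lies 10 deg left of the body midline.
assert head_to_body_azimuth(10.0, -20.0) == -10.0
```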

6.
Multisensory-mediated auditory localization
Multisensory integration is a powerful mechanism for maximizing sensitivity to sensory events. We examined its effects on auditory localization in healthy human subjects. The specific objective was to test whether the relative intensity and location of a seemingly irrelevant visual stimulus would influence auditory localization in accordance with the inverse effectiveness and spatial rules of multisensory integration that have been developed from neurophysiological studies with animals [Stein and Meredith, 1993 The Merging of the Senses (Cambridge, MA: MIT Press)]. Subjects were asked to localize a sound while a neutral visual stimulus was presented either above threshold (supra-threshold) or at threshold. In both cases the spatial disparity of the visual and auditory stimuli was systematically varied. The results reveal that stimulus salience is a critical factor in determining the effect of a neutral visual cue on auditory localization. Visual bias and, hence, perceptual translocation of the auditory stimulus appeared when the visual stimulus was supra-threshold, regardless of its location. However, this was not the case when the visual stimulus was at threshold. In this case, the influence of the visual cue was apparent only when the two cues were spatially coincident and resulted in an enhancement of stimulus localization. These data suggest that the brain uses multiple strategies to integrate multisensory information.
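The inverse effectiveness rule referenced here is commonly quantified with Meredith and Stein's multisensory enhancement index: the gain of the combined response over the best unisensory response, which grows as the unisensory responses weaken. A small Python sketch under that assumption (the numbers below are invented for illustration, not data from this study):

```python
def multisensory_enhancement(combined, best_unisensory):
    """Percent enhancement of the multisensory response relative to the
    most effective unisensory response (Meredith & Stein's index)."""
    return 100.0 * (combined - best_unisensory) / best_unisensory

# Inverse effectiveness: weak unisensory responses yield proportionally
# larger multisensory gains than strong ones (hypothetical values).
print(multisensory_enhancement(35.0, 20.0))   # weak stimuli   -> 75.0
print(multisensory_enhancement(90.0, 80.0))   # strong stimuli -> 12.5
```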

7.
The goal of this study was to evaluate the claim that young children display preferences for auditory stimuli over visual stimuli. This study was motivated by concerns that the visual stimuli employed in prior studies were considerably more complex and less distinctive than the competing auditory stimuli, resulting in an illusory preference for auditory cues. Across three experiments, preschool-age children and adults were trained to use paired audio-visual cues to predict the location of a target. At test, the cues were switched so that auditory cues indicated one location and visual cues indicated the opposite location. In contrast to prior studies, preschool-age children did not exhibit auditory dominance. Instead, children and adults flexibly shifted their preferences as a function of the degree of contrast within each modality, with high contrast leading to greater use.

8.
Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1–3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

9.
Hartnagel D, Bichot A, Roumes C (2007). Perception, 36(10), 1487-1496
We investigated the frame of reference involved in audio-visual (AV) fusion over space. This multisensory phenomenon refers to the perception of unity resulting from visual and auditory stimuli despite their potential spatial disparity. The extent of this illusion depends on the eccentricity in azimuth of the bimodal stimulus (Godfroy et al., 2003, Perception, 32, 1233-1245). In a previous study, conducted in a luminous environment, Roumes et al. (2004, Perception, 33 Supplement, 142) showed that variation of AV fusion is gaze-dependent. Here we examine the contribution of ego- or allocentric visual cues by conducting the experiment in total darkness. Auditory and visual stimuli were displayed in synchrony with various spatial disparities. Subjects had to judge their unity ('fusion' or 'no fusion'). Results showed that AV fusion in darkness remains gaze-dependent despite the lack of any allocentric cues, confirming the hypothesis that the reference frame of the bimodal space is neither head-centred nor eye-centred.

10.
Previous studies have demonstrated large errors (over 30 degrees) in visually perceived exocentric directions, that is, the direction between two objects that are both displaced from the observer's location (e.g., Philbeck, Sargent, Arthur, & Dopkins, 2008, Perception, 37, 511-534). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20-160 degrees azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to -19 degrees for visual targets at 160 degrees). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space.
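The egocentric/exocentric distinction that drives these results is purely geometric: the egocentric azimuth is the target's direction from the observer, while the exocentric direction is measured between two displaced objects (here, pointer and target). A minimal Python sketch of the two quantities (the coordinate convention and names are our assumptions):

```python
import math

def egocentric_azimuth(target_xy):
    """Azimuth of the target from an observer at the origin facing +y
    (0 deg = straight ahead, positive = rightward)."""
    return math.degrees(math.atan2(target_xy[0], target_xy[1]))

def exocentric_direction(pointer_xy, target_xy):
    """Direction from the pointer to the target, same convention.
    This is what aiming a hand-held pointer at the target reproduces."""
    dx = target_xy[0] - pointer_xy[0]
    dy = target_xy[1] - pointer_xy[1]
    return math.degrees(math.atan2(dx, dy))

# A target 45 deg to the right: the two measures diverge as soon as the
# pointer is displaced from the observer's location.
target = (1.0, 1.0)
print(egocentric_azimuth(target))                # 45.0
print(exocentric_direction((0.5, 0.0), target))  # ~26.6
```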

11.
In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different natures and reducing response variability. It is known that the presence of visual information affects auditory perception in the horizontal plane (azimuth), but little research has studied the influence of vision on auditory distance perception. In general, the data obtained from these studies are contradictory and do not completely define the way in which visual cues affect the apparent distance of a sound source. Here, psychophysical experiments on auditory distance perception in humans were performed, including and excluding visual cues. The results show that the apparent distance of the source is affected by the presence of visual information and that subjects can store in memory a representation of the environment that later improves the perception of distance.

12.
Ontario Institute for Studies in Education, University of Toronto, Toronto, Canada M5S 1V6
Auditory space has been characterized as an entity without bound or dimension, as opposed to visual space, which is limited in three dimensions. While there is evidence that visual space may be represented mentally in terms of contrastive values on these dimensions, no evidence exists concerning the representation of auditory space. Two experiments used an auditory Stroop-like task to investigate (1) whether the linguistic code for auditory space comprises component dimensions, as in vision, or whether it is unitary, and (2) whether auditory space can also be encoded in a nonlinguistic fashion. Subjects were required to respond to the spatial location of a locative term whose meaning could be congruent, incongruent, or neutral with respect to its location. The findings pointed to the conclusion that when subjects are encouraged to code auditory space linguistically, the code is an undifferentiated symbol or name. Furthermore, some tentative evidence suggested that auditory space may also be encoded in a nonlinguistic manner.

13.
In vision, the Gestalt principles of perceptual organization are generally well understood and remain a subject of detailed analysis. However, the possibility for a unified theory of grouping across visual and auditory modalities remains largely unexplored. Here we present examples of auditory and visual Gestalt grouping, which share important organizational properties. In particular, similarities are revealed between grouping processes in apparent motion, auditory streaming, and static 2-D displays. Given the substantial difference in the context, within which the phenomena in question occur (auditory vs. visual, static vs. dynamic), these similarities suggest that the dynamics of perceptual organization could be associated with a common (possibly central) mechanism. If the relevance of supramodal invariants of grouping is granted, the question arises as to whether they can be studied empirically. We propose that a “force-field” theory, based on a differential-geometric interpretation of perceptual space, could provide a suitable starting point for a systematic exploration of the subjective properties of certain classes of auditory and visual grouping phenomena.

14.
Visual and auditory classification of patterns with equivalent class structure was examined. Underlying patterns from two classes were translated into auditory tone sequences and visual polygons. All Ss classified 50 visual patterns and their direct auditory analogs. Visual classification accuracy exceeded auditory accuracy (p < .01); however, auditory accuracy improved when auditory classification was preceded by the visual task (p < .01). Based on group data, classification strategies appeared similar across modalities, with accuracy of classification of individual patterns predicted to the same degree by common measures of physical class structure across modalities. Ss’ drawings of the prototypes also suggested a common strategy across modalities. While group data suggest some consistency of classification strategy across modalities, individual Ss were not at all consistent in their visual and auditory classifications.

15.
In this paper, we show that human saccadic eye movements toward a visual target are generated with a reduced latency when this target is spatially and temporally aligned with an irrelevant auditory nontarget. This effect gradually disappears if the temporal and/or spatial alignment of the visual and auditory stimuli is changed. When subjects are able to accurately localize the auditory stimulus in two dimensions, the spatial dependence of the reduction in latency depends on the actual radial distance between the auditory and the visual stimulus. If, however, only the azimuth of the sound source can be determined by the subjects, the horizontal target separation determines the strength of the interaction. Neither saccade accuracy nor saccade kinematics were affected in these paradigms. We propose that, in addition to an aspecific warning signal, the reduction of saccadic latency is due to interactions that take place at a multimodal stage of saccade programming, where the perceived positions of visual and auditory stimuli are represented in a common frame of reference. This hypothesis is in agreement with our finding that the saccades often are initially directed to the average position of the visual and the auditory target, provided that their spatial separation is not too large. Striking similarities with electrophysiological findings on multisensory interactions in the deep layers of the midbrain superior colliculus are discussed.

16.
People often move in synchrony with auditory rhythms (e.g., music), whereas synchronization of movement with purely visual rhythms is rare. In two experiments, this apparent attraction of movement to auditory rhythms was investigated by requiring participants to tap their index finger in synchrony with an isochronous auditory (tone) or visual (flashing light) target sequence while a distractor sequence was presented in the other modality at one of various phase relationships. The obtained asynchronies and their variability showed that auditory distractors strongly attracted participants' taps, whereas visual distractors had much weaker effects, if any. This asymmetry held regardless of the spatial congruence or relative salience of the stimuli in the two modalities. When different irregular timing patterns were imposed on target and distractor sequences, participants' taps tended to track the timing pattern of auditory distractor sequences when they were approximately in phase with visual target sequences, but not the reverse. These results confirm that rhythmic movement is more strongly attracted to auditory than to visual rhythms. To the extent that this is an innate proclivity, it may have been an important factor in the evolution of music.

17.
Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory–visual interaction, using an auditory–visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

18.
A period of exposure to trains of simultaneous but spatially offset auditory and visual stimuli can induce a temporary shift in the perception of sound location. This phenomenon, known as the ‘ventriloquist aftereffect’, reflects a realignment of auditory and visual spatial representations such that they approach perceptual alignment despite their physical spatial discordance. Such dynamic changes to sensory representations are likely to underlie the brain’s ability to accommodate inter-sensory discordance produced by sensory errors (particularly in sound localization) and variability in sensory transduction. It is currently unknown, however, whether these plastic changes induced by adaptation to spatially disparate inputs occur automatically or whether they are dependent on selectively attending to the visual or auditory stimuli. Here, we demonstrate that robust auditory spatial aftereffects can be induced even in the presence of a competing visual stimulus. Importantly, we found that when attention is directed to the competing stimuli, the pattern of aftereffects is altered. These results indicate that attention can modulate the ventriloquist aftereffect.

19.
This teaching machine has been designed and used to train reading and other visual discrimination skills with normal and retarded children. On each frame the subject responds by touching one of three response panels on which are projected the multiple-choice alternatives. The response panels are coated with a transparent conducting film which allows electronic detection of this simple and direct response. Correct responses are reinforced by the machine naming the stimulus, while auditory reinforcement is absent for an incorrect response. The subject's performance level is continuously computed as an exponentially weighted moving average. The measure is weighted so that it rapidly follows recent changes in performance.
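The running performance measure described here is a standard exponentially weighted moving average. A minimal Python sketch of such an update rule (the abstract does not give the machine's actual smoothing constant; alpha below is an assumed value):

```python
def update_performance(level, correct, alpha=0.2):
    """Exponentially weighted moving average of trial outcomes.

    Each past trial contributes a weight that decays geometrically with
    age, so the estimate rapidly follows recent changes in performance.
    alpha sets the decay: larger values discount older trials faster.
    """
    return (1.0 - alpha) * level + alpha * (1.0 if correct else 0.0)

# Example: performance tracked over a short run of trials.
level = 0.5  # neutral starting estimate
for outcome in [True, True, True, False, True]:
    level = update_performance(level, outcome)
print(round(level, 3))  # 0.676 after this mostly-correct run
```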

20.
The ability to process simultaneously presented auditory and visual information is a necessary component underlying many cognitive tasks. While this ability is often taken for granted, there is evidence that under many conditions auditory input attenuates processing of corresponding visual input. The current study investigated infants' processing of visual input under unimodal and cross-modal conditions. Results of the three reported experiments indicate that different auditory input had different effects on infants' processing of visual information. In particular, unfamiliar auditory input slowed down visual processing, whereas more familiar auditory input did not. These results elucidate mechanisms underlying auditory overshadowing in the course of cross-modal processing and have implications for a variety of cognitive tasks that depend on cross-modal processing.

