Similar Articles
20 similar articles found (search time: 15 ms)
1.
2.
It has previously been shown that adults localize unseen auditory targets more accurately with their eyes open than closed. The interpretation usually proposed to explain this phenomenon is that auditory spatial information is referred or translated to a visual frame of reference. The present experiments show that the presence of an auditory reference point facilitates auditory localization judgements in the same manner as a visual reference point does. Although our results do not support the visual frame of reference hypothesis, they suggest that the auditory and the visual modalities are strongly linked in their localizing processes.

3.
Choice reaction times are measured for three values of a priori signal probability with three well-practiced observers. Two sets of data are taken, with the only difference being the modality of the reaction signal: in one set of conditions it is auditory, in the other, visual. The auditory reaction times are faster than the visual, and several other differences are noted. The latencies of errors and of correct responses are nearly equal for the auditory data, whereas error latencies are nearly 30% faster for the visual data. Non-stationary effects (autocorrelation between successive latencies and a non-homogeneous distribution of errors) are clearly evident in the visual data, but are small or non-existent in the auditory data. The data are compared with several models of the choice reaction time process, but none of the models is completely adequate.
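The non-stationary effects reported here can be checked with simple sequential statistics. Below is a minimal sketch on synthetic data; the drift model, error rate, and lag-1 formulation are illustrative assumptions, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reaction-time series (seconds): a slow drift plus noise,
# mimicking the non-stationarity reported for the visual data.
n_trials = 500
drift = np.cumsum(rng.normal(0, 0.002, n_trials))   # slow wandering of the mean
rt = 0.35 + drift + rng.normal(0, 0.05, n_trials)   # latency on each trial
correct = rng.random(n_trials) > 0.1                # ~10% errors, for illustration

def lag1_autocorr(x):
    """Autocorrelation between successive latencies (lag 1)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

print(f"lag-1 autocorrelation: {lag1_autocorr(rt):.3f}")
print(f"mean correct RT: {rt[correct].mean():.3f} s")
print(f"mean error RT:   {rt[~correct].mean():.3f} s")
```

A near-zero lag-1 autocorrelation and equal error/correct latencies would resemble the auditory data; positive autocorrelation with faster errors would resemble the visual data.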

4.
Visual and auditory working memory capacity (total citations: 1; self-citations: 0; citations by others: 1)

5.
Previous experiments showing the importance of visual factors in auditory localization are shown to have been insufficiently quantitative.

In the first experiment, bells were rung and lights shone on the same or a different vector, with eleven subjects indicating which bell had rung. In the second experiment, a puff of steam was seen to issue from a kettle whistle that made no whistling sound, while similar whistles were sounded by compressed air on that or another vector. Twenty-one subjects participated.

The addition of a visual stimulus at 0° deviation increased the percentage of correct responses significantly in the second experiment and insignificantly in the first. At 20°-30° deviation, the proportion of naive responses to the visual cue was 43 per cent in the first and 97 per cent in the second experiment. At greater angular deviations, the proportion of naive responses fell to chance level in the first experiment but remained significant in the second, even at 90°. The “visuo-auditory threshold” was found to be 20°-30°, but might be much larger if there were more grounds for supposing the two stimuli to come from the same source in space.

6.
If retrieval in short-term memory can be either from a pre-perceptual sensory store or from a post-perceptual memory, then recall should vary as a function of input into the sensory store. To test this possibility, two experiments with paired associates compared visual and auditory presentation under conditions as comparable as possible. In both experiments, modality interacted with retention interval: more recency with auditory presentation but, in Experiment I, more primacy with visual. The interaction was taken as support for the hypothesis. An alternative hypothesis (that storage is post-perceptual but not ahistorical) was discussed, and weak negative evidence was presented.

7.
8.
In the McGurk effect, visual information specifying a speaker’s articulatory movements can influence auditory judgments of speech. In the present study, we attempted to find an analogue of the McGurk effect by using nonspeech stimuli: discrepant audiovisual tokens of plucks and bows on a cello. The results of an initial experiment revealed that subjects’ auditory judgments were influenced significantly by the visual pluck and bow stimuli. However, a second experiment in which speech syllables were used demonstrated that the visual influence on consonants was significantly greater than the visual influence observed for pluck-bow stimuli. This result could be interpreted to suggest that the nonspeech visual influence was not a true McGurk effect. In a third experiment, visual stimuli consisting of the words “pluck” and “bow” were found to have no influence over auditory pluck and bow judgments. This result could suggest that the nonspeech effects found in Experiment 1 were based on the audio and visual information having an ostensive, lawful relation to the specified event. These results are discussed in terms of motor-theory, ecological, and FLMP approaches to speech perception.

9.
10.
Visual and auditory classification of equivalent class-structured patterns was examined. Underlying patterns from two classes were translated into auditory tone sequences and visual polygons. All Ss classified 50 visual patterns and their direct auditory analogs. Visual classification accuracy exceeded auditory accuracy (p < .01); however, auditory accuracy improved when auditory classification was preceded by the visual task (p < .01). Based on group data, classification strategies appeared similar across modalities, with accuracy of classification of individual patterns predicted to the same degree by common measures of physical class structure across modalities. Ss’ drawings of the prototypes also suggested a common strategy across modalities. While the group data suggest some consistency of classification strategy across modalities, individual Ss were not at all consistent in their visual and auditory classifications.

11.
Correctly integrating sensory information across different modalities is a vital task, yet there are illusions which cause the incorrect localization of multisensory stimuli. A common example of these phenomena is the "ventriloquism effect". In this illusion, the localization of auditory signals is biased by the presence of visual stimuli. For instance, when a light and sound are simultaneously presented, observers may erroneously locate the sound closer to the light than its actual position. While this phenomenon has been studied extensively in azimuth at a single depth, little is known about the interactions of stimuli at different depth planes. In the current experiment, virtual acoustics and stereo-image displays were used to test the integration of visual and auditory signals across azimuth and depth. The results suggest that greater variability in the localization of sounds in depth may lead to a greater bias from visual stimuli in depth than in azimuth. These results offer interesting implications for understanding multisensory integration.
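The link reported here between localization variability and visual bias is what reliability-weighted (maximum-likelihood) cue combination would predict: each cue is weighted by its inverse variance, so a noisier auditory estimate is pulled more strongly toward vision. The abstract does not name this model; the sketch below is a standard illustration of it, and all σ values are made-up assumptions, not the study's estimates:

```python
# Reliability-weighted (maximum-likelihood) fusion of a visual and an
# auditory position estimate: each cue is weighted by its inverse variance.
def fuse(mu_v, sigma_v, mu_a, sigma_a):
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    mu = w_v * mu_v + (1 - w_v) * mu_a
    sigma = (1 / (1 / sigma_v**2 + 1 / sigma_a**2)) ** 0.5
    return mu, sigma, w_v

# Azimuth: auditory localization is relatively precise, so the visual pull is modest.
print(fuse(mu_v=0.0, sigma_v=1.0, mu_a=10.0, sigma_a=3.0))

# Depth: auditory localization is much noisier, so the fused estimate is drawn
# strongly toward the visual stimulus, matching the pattern reported above.
print(fuse(mu_v=0.0, sigma_v=1.0, mu_a=10.0, sigma_a=8.0))
```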

12.
We investigated how the disappearance of a task-irrelevant stimulus located on the right or left side affects right and left responses. Participants pressed a right or left response key on the basis of the color of a centrally located visual target. Visual (Experiment 1) or auditory (Experiment 2) task-irrelevant accessory stimuli appeared or disappeared at locations to the right or left of the central target. In Experiment 1, responses were faster when the onset or offset of the visual accessory stimulus was spatially congruent with the response. In Experiment 2, responses were again faster when the onset of the auditory accessory stimulus and the response were on the same side. However, responses were slightly slower when the offset of the auditory accessory stimulus and the response were on the same side than when they were on opposite sides. These findings indicate that transient change information is crucial for the visual Simon effect, whereas sustained stimulation from an ongoing stimulus also contributes to the auditory Simon effect.

13.
In this study, we show that the contingent auditory motion aftereffect is strongly influenced by visual motion information. During an induction phase, participants listened to rightward-moving sounds with falling pitch alternated with leftward-moving sounds with rising pitch (or vice versa). Auditory aftereffects (i.e., a shift in the psychometric function for unimodal auditory motion perception) were bigger when a visual stimulus moved in the same direction as the sound than when no visual stimulus was presented. When the visual stimulus moved in the opposite direction, aftereffects were reversed and thus became contingent upon visual motion. When visual motion was combined with a stationary sound, no aftereffect was observed. These findings indicate that there are strong perceptual links between the visual and auditory motion-processing systems.
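The aftereffect here is quantified as a shift in the psychometric function for auditory motion. A minimal sketch of how such a shift (the change in the point of subjective equality, PSE) might be estimated from logistic fits: the data, velocities, and parameter names below are synthetic illustrations, not the study's stimuli or analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Psychometric function: P('rightward' response) at auditory velocity x."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

# Stimulus velocities (arbitrary units; negative = leftward) and synthetic
# proportions of 'rightward' responses before and after induction.
x = np.array([-4, -3, -2, -1, 0, 1, 2, 3, 4], dtype=float)
p_pre = np.array([0.02, 0.05, 0.15, 0.35, 0.50, 0.65, 0.85, 0.95, 0.98])
p_post = np.array([0.01, 0.02, 0.06, 0.15, 0.30, 0.50, 0.75, 0.90, 0.97])

(pse_pre, _), _ = curve_fit(logistic, x, p_pre, p0=[0.0, 1.0])
(pse_post, _), _ = curve_fit(logistic, x, p_post, p0=[0.0, 1.0])

# A positive PSE shift means stationary or slowly moving sounds are now
# perceived as moving opposite to the adapted direction: an aftereffect.
print(f"aftereffect (PSE shift): {pse_post - pse_pre:+.2f} units")
```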

14.
15.
Short-term implicit memory was examined for mixed auditory (A) and visual (V) stimuli. In lexical decision, words and nonwords were repeated at lags of 0, 1, 3, and 6 intervening trials, in four prime-target combinations (VV, VA, AV, AA). Same-modality repetition priming showed a lag × lexicality interaction for visual stimuli (nonwords decayed faster), but not for auditory stimuli (longer-lasting smooth decay for both words and nonwords). These modality differences suggest that short-term priming has a perceptual locus, with the phonological lexicon maintaining stimuli active longer than the orthographic lexicon and treating pseudowords as potential words. We interpret these differences in terms of the different memory needs of speech recognition and text reading. Weak cross-modality short-term priming was present for words and nonwords, indicating recoding between perceptual forms.

16.
Eyewitnesses instructed to close their eyes during retrieval recall more correct and fewer incorrect visual and auditory details. This study tested whether eye closure causes these effects through a reduction in environmental distraction. Sixty participants watched a staged event before verbally answering questions about it in the presence of auditory distraction or in a quiet control condition. Participants were instructed to close or not close their eyes during recall. Auditory distraction did not affect correct recall, but it increased erroneous recall of visual and auditory details. Instructed eye closure reduced this effect equally for both modalities. The findings support the view that eye closure removes the general resource load of monitoring the environment rather than reducing competition for modality-specific resources.

17.
Aftereffects indicative of cross-modal recalibration are observed after exposure to spatially incongruent inputs from different sensory modalities, but such aftereffects have not so far been demonstrated for identity incongruence. We show that exposure to incongruent audiovisual speech (producing the well-known McGurk effect) can recalibrate auditory speech identification. In Experiment 1, exposure to an ambiguous sound intermediate between /aba/ and /ada/, dubbed onto a video of a face articulating either /aba/ or /ada/, increased the proportion of /aba/ or /ada/ responses, respectively, during subsequent sound identification trials. Experiment 2 demonstrated either the same recalibration effect or its opposite (fewer /aba/ or /ada/ responses, revealing selective speech adaptation), depending on whether the ambiguous sound or a congruent nonambiguous one was used during exposure. In separate forced-choice identification trials, the bimodal stimulus pairs producing these contrasting effects were identically categorized, which makes a role of postperceptual factors in the generation of the effects unlikely.

18.
Subjects simultaneously performed visual and auditory detection tasks. Pupillary dilation accompanies increased cognitive load, such as that imposed by the auditory task. Errors in the visual task increased when the auditory task became more difficult, and the increase was greater when the effects of pupillary dilation were blocked by an artificial pupil.

19.
20.
In some people, visual stimulation evokes auditory sensations. How prevalent and how perceptually real is this? 22% of our neurotypical adult participants responded ‘Yes’ when asked whether they heard faint sounds accompanying flash stimuli, and showed significantly better ability to discriminate visual ‘Morse-code’ sequences. This benefit might arise from an ability to recode visual signals as sounds, thus taking advantage of the superior temporal acuity of audition. In support of this, those who showed better visual relative to auditory sequence discrimination also had poorer auditory detection in the presence of uninformative visual flashes, though this was independent of awareness of visually-evoked sounds. Thus a visually-evoked auditory representation may occur subliminally and disrupt detection of real auditory signals. The frequent natural correlation between visual and auditory stimuli might explain the surprising prevalence of this phenomenon. Overall, our results suggest that learned correspondences between strongly correlated modalities may provide a precursor for some synaesthetic abilities.

