Similar articles
 20 similar articles found (search time: 15 ms)
1.
Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory–visual interaction, using an auditory–visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

2.
3.
Rutgers-The State University, New Brunswick, New Jersey 08903
Is the quality of information obtained from simple auditory and visual signals diminished when both modalities must be attended to simultaneously? This question was investigated in an experiment in which subjects made forced-choice judgments of the location of simple light and tone signals presented in focused- and divided-attention conditions. The data are compared with the predictions of a model that describes the largest performance decrement to be expected in the divided-attention condition on the basis of nonattentional factors. The results of this comparison suggest that the difference in performance between focused- and divided-attention conditions is attributable solely to the increased opportunity to confuse signal with noise as the number of modalities is increased. Thus, there appears to be no evidence that dividing attention between modalities affects the quality of the stimulus representations of individual light and tone signals.

4.
Responses are typically faster and more accurate when both auditory and visual modalities are stimulated than when only one is. This bimodal advantage is generally attributed to a speeding of responding on bimodal trials, relative to unimodal trials. It remains possible that this effect might instead be due to a performance decrement on unimodal trials. To investigate this, two levels of auditory and visual signal intensities were combined in a double-factorial paradigm. Responses to the onset of the imperative signal were measured under go/no-go conditions. Mean reaction times to the four types of bimodal stimuli exhibited a superadditive interaction. This is evidence for the parallel self-terminating processing of the two signal components. Violations of the race model inequality also occurred, and measures of processing capacity showed that efficiency was greater on the bimodal than on the unimodal trials. These data are discussed in terms of a possible underlying neural substrate.
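The race model inequality referred to in this abstract (Miller, 1982) bounds how fast bimodal responses can be if the two modalities merely race in parallel: at every time t, the bimodal reaction-time CDF must satisfy F_AV(t) ≤ F_A(t) + F_V(t). A minimal sketch of the test follows; the function name and the reaction-time samples are illustrative assumptions, not the authors' data or code:

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Evaluate F_AV(t) - min(1, F_A(t) + F_V(t)) on a grid of times.
    Positive values indicate violations of the race model inequality,
    i.e., bimodal responding faster than any parallel race allows."""
    def ecdf(rts, times):
        rts = np.asarray(rts)
        return np.array([(rts <= t).mean() for t in times])
    f_av = ecdf(rt_av, t_grid)
    bound = np.clip(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 0.0, 1.0)
    return f_av - bound

# Toy data: every bimodal RT (200 ms) beats every unimodal RT,
# so at t = 250 ms we get F_AV = 1 while the race bound is 0.
diffs = race_model_violation([200] * 5, [320] * 5, [330] * 5, [250, 400])
print(diffs[0] > 0)  # True: violation at 250 ms
```

In practice the inequality is tested at several quantiles of the pooled RT distribution; any reliably positive difference rules out a simple race account.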

5.
Past studies of simultaneous attention to pairs of visual stimuli have used the “dual-task” paradigm to show that identification of the direction of a change in luminance, whether incremental or decremental, is “capacity-limited,” while simple detection of these changes is governed by “capacity-free” processes. On the basis of that finding, it has been suggested that the contrast between identification and detection reflects different processes in the sensory periphery, namely the responses of magno- and parvocellular receptors. The present study questions that assertion and investigates the contribution of central processing in resource limitation by applying the dual task to a situation in which one stimulus is auditory and one is visual. The results are much the same as before, with identification demonstrating the tradeoff in performance generally attributed to a limited capacity but detection showing no loss compared with single-task controls. This implies that limitations on resources operate at a central level of processing rather than in the auditory and visual peripheries.

6.
The effect of brief auditory stimuli on visual apparent motion   (total citations: 1; self-citations: 0; citations by others: 1)
Getzmann S. Perception, 2007, 36(7): 1089-1103
When two discrete stimuli are presented in rapid succession, observers typically report a movement of the lead stimulus toward the lag stimulus. The object of this study was to investigate crossmodal effects of irrelevant sounds on this illusion of visual apparent motion. Observers were presented with two visual stimuli that were temporally separated by interstimulus onset intervals from 0 to 350 ms. After each trial, observers classified their impression of the stimuli using a categorisation system. The presentation of short sounds intervening between the visual stimuli facilitated the impression of apparent motion relative to baseline (visual stimuli without sounds), whereas sounds presented before the first and after the second visual stimulus as well as simultaneously presented sounds reduced the motion impression. The results demonstrate an effect of the temporal structure of irrelevant sounds on visual apparent motion that is discussed in light of a related multisensory phenomenon, 'temporal ventriloquism', on the assumption that sounds can attract lights in the temporal dimension.

7.
Summary: Perceptual organization during short tachistoscopic presentation of stimulus patterns formed by ten moving bright spots, representing a human body walking, running, etc., was investigated. Exposure times were 0.1 s to 0.5 s. The results reveal that in all Ss the dot pattern is perceptually organized into a gestalt, a walking, running, etc., person, at an exposure time of 0.2 s; 40% of Ss perceived a human body in such motion at presentation times as short as 0.1 s. Under the experimental conditions used, the track length of the bright spots at the threshold of integration into a moving unit was of the size order 10 visual angle. This result is regarded as indicating that a complex vector analysis of the proximal motion pattern is accomplished at the initial stage of physiological signal recording and that it is a consequence of receptive-field organization. It is discussed in terms of vector calculus.

8.
Three experiments tested for developmental changes in attention to simple auditory and visual signals. Subjects pressed a single button in response to the onset (Experiment 1) or offset (Experiment 2) of either a tone or a light. During one block of trials subjects knew which stimulus would come on or go off on each trial (precue condition), whereas during the other block of trials no precue was provided. In both experiments subjects as young as 4 years old responded more rapidly with precues, indicating that they were able to allocate their attention to the indicated modality. Experiment 3 utilized a choice reaction paradigm (in which subjects pressed different buttons in response to the onset of the light and the tone) in order to examine their attention allocation when no precues were provided. It was found that the adults and 7-year-olds tended to allocate their attention to vision rather than audition when no precue was provided. The results with the 4-year-olds were not entirely consistent, but suggested a similar biasing of attention to vision on their part as well.

9.
10.
Three experiments were carried out to investigate the evaluation and integration of visual and auditory information in speech perception. In the first two experiments, subjects identified /ba/ or /da/ speech events consisting of high-quality synthetic syllables ranging from /ba/ to /da/ combined with a videotaped /ba/ or /da/ or neutral articulation. Although subjects were specifically instructed to report what they heard, visual articulation made a large contribution to identification. The tests of quantitative models provide evidence for the integration of continuous and independent, as opposed to discrete or nonindependent, sources of information. The reaction times for identification were primarily correlated with the perceived ambiguity of the speech event. In a third experiment, the speech events were identified with an unconstrained set of response alternatives. In addition to /ba/ and /da/ responses, the /bda/ and /tha/ responses were well described by a combination of continuous and independent features. This body of results provides strong evidence for a fuzzy logical model of perceptual recognition.
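The fuzzy logical model of perception (FLMP) supported by these results combines continuous, independent auditory and visual feature evaluations multiplicatively and then normalizes by a relative goodness rule. A minimal sketch for the two-alternative /ba/–/da/ case; the support values below are illustrative assumptions, not parameters fitted in the study:

```python
def flmp_p_da(a_da, v_da):
    """FLMP prediction for a /ba/ vs /da/ decision.
    a_da, v_da: degree (0..1) to which the auditory and the visual
    source each support /da/. Independence means supports multiply;
    the relative goodness rule normalizes over the two alternatives."""
    support_da = a_da * v_da
    support_ba = (1.0 - a_da) * (1.0 - v_da)
    return support_da / (support_da + support_ba)

# An ambiguous auditory token (0.5) is pulled strongly toward a
# clear visual /da/ articulation (0.9): 0.45 / (0.45 + 0.05) ≈ 0.9
print(flmp_p_da(0.5, 0.9))
```

One consequence of this form is that the weaker (more ambiguous) source contributes less to the decision, which matches the finding that visual articulation matters most when the auditory syllable is intermediate.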

11.
Short-term memory for the timing of irregular sequences of signals has been said to be more accurate when the signals are auditory than when they are visual. No support for this contention was obtained when the signals were beeps versus flashes (Experiments 1 and 3) or when they were sets of spoken versus typewritten digits (Experiments 4 and 5). On the other hand, support was obtained both for beeps versus flashes (Experiments 2 and 5) and for repetitions of a single spoken digit versus repetitions of a single typewritten digit (Experiment 6) when the subjects silently mouthed a nominally irrelevant item during sequence presentation. Also, the timing of sequences of auditory signals, whether verbal (Experiment 7) or nonverbal (Experiments 8 and 9), was more accurately remembered when the signals within each sequence were identical. The findings are considered from a functional perspective.

12.
Seven pigeons whose key-pecking was maintained by food reinforcement on a differential-reinforcement-of-low-rates 12-sec limited-hold 4-sec schedule and 12 other pigeons whose treadle-pressing was maintained by the same schedule received appetitive Pavlovian conditioning trials superimposed upon the instrumental baseline. Half the birds in each group received a tone as the CS, and the other half received a stimulus change on the key. Each CS was 20 sec long, and was immediately followed by 10-sec access to grain. The visual CS markedly facilitated the rate of pecking on the key for the birds whose baseline response was pecking. The visual CS produced auto-shaping of the key-peck and tended to produce suppression of treadle-pressing for the birds whose baseline response was treadle-pressing. The auditory CS produced inconsistent effects across birds regardless of the baseline response. In all cases the conditioned effects extinguished when response-independent food was omitted.

13.
Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location or identity of the visual object. The experiments also show that the effect is not due to general alerting (because it does not occur with visual cues), nor is it due to top-down cuing of the visual change (because it still occurs when the pip is synchronized with distractors on the majority of trials). Instead, we propose that the temporal information of the auditory signal is integrated with the visual signal, generating a relatively salient emergent feature that automatically draws attention. Phenomenally, the synchronous pip makes the visual object pop out from its complex environment, providing a direct demonstration of spatially nonspecific sounds affecting competition in spatial visual processing.

14.
15.
Two experiments investigated autoshaping in rats to localizable visual and auditory conditioned stimuli predicting response-independent food. In Experiment 1, considerable conditioned-stimulus approach behavior was generated by a localizable visual conditioned stimulus that was situated approximately 35 cm from the food tray. Using the same apparatus in Experiment 2, we found that conditioned-stimulus approach was generated only to a visual conditioned stimulus and not to a localizable auditory conditioned stimulus, even though subjects (1) could discriminate presentations of the auditory conditioned stimulus, (2) had associated it with food, (3) could localize it, and (4) would approach the auditory stimulus if this behavior constituted an instrumental response to food. The predominant conditioned responses to the auditory stimuli were goal tracking (entering the food tray) and orienting toward the food-paired conditioned stimulus by head turning and rearing. These results imply that rats do not invariably approach a localizable appetitive Pavlovian conditioned stimulus but that stimulus-approach responses depend on the nature and modality of the conditioned stimulus.

16.
Three experiments were conducted examining unimodal and crossmodal effects of attention to motion. Horizontally moving sounds and dot patterns were presented and participants’ task was to discriminate their motion speed or whether they were presented with a brief gap. In Experiments 1 and 2, stimuli of one modality and of one direction were presented with a higher probability (p = .7) than other stimuli. Sounds and dot patterns moving in the expected direction were discriminated faster than stimuli moving in the unexpected direction. In Experiment 3, participants had to respond only to stimuli moving in one direction within the primary modality, but to all stimuli regardless of their direction within the rarer secondary modality. Stimuli of the secondary modality moving in the attended direction were discriminated faster than were oppositely moving stimuli. Results suggest that attending to the direction of motion affects perception within vision and audition, but also across modalities.

17.
The authors hypothesized that during a gap in a timed signal, the time accumulated during the pregap interval decays at a rate proportional to the perceived salience of the gap, influenced by sensory acuity and signal intensity. When timing visual signals, albino (Sprague-Dawley) rats, which have poor visual acuity, stopped timing irrespective of gap duration, whereas pigmented (Long-Evans) rats, which have good visual acuity, stopped timing for short gaps but reset timing for long gaps. Pigmented rats stopped timing during a gap in a low-intensity visual signal and reset after a gap in a high-intensity visual signal, suggesting that memory for time in the gap procedure varies with the perceived salience of the gap, possibly through an attentional mechanism.

18.
Mateeff S, Hohnsbein J, Noack T. Perception, 1985, 14(6): 721-727
Apparent motion of a sound source can be induced by a moving visual target. The direction of the perceived motion of the sound source is the same as that of the visual target, but the subjective velocity of the sound source is 25-50% of that of the visual target measured under the same conditions. Eye tracking of the light target tends to enhance the apparent motion of the sound, but is not a prerequisite for its occurrence. The findings are discussed in connection with the 'visual capture' or 'ventriloquism' effect.

19.
20.
To analyze complex scenes efficiently, the human visual system performs perceptual groupings based on various features (e.g., color and motion) of the visual elements in a scene. Although previous studies demonstrated that such groupings can be based on a single feature (e.g., either color or motion information), here we show that the visual system also performs scene analyses based on a combination of two features. We presented subjects with a mixture of red and green dots moving in various directions. Although the pairings between color and motion information were variable across the dots (e.g., one red dot moved upward while another moved rightward), subjects' perceptions of the color-motion pairings were significantly biased when the randomly paired dots were flanked by additional dots with consistent color-motion pairings. These results indicate that the visual system resolves local ambiguities in color-motion pairings using unambiguous pairings in surrounds, demonstrating a new type of scene analysis based on the combination of two featural cues.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号