Similar Documents
20 similar documents found
1.
We report that when a flash and audible click occur in temporal proximity to each other, the perceived time of occurrence of both events is shifted in such a way as to draw them toward temporal convergence. In one experiment, observers judged when a flash occurred by reporting the clock position of a rotating marker. The flash was seen significantly earlier when it was preceded by an audible click and significantly later when it was followed by an audible click, relative to a condition in which the flash and click occurred simultaneously. In a second experiment, observers judged where the marker was when the click was heard. When a flash preceded or followed the click, similar but smaller capture effects were observed. These capture effects may reveal how temporal discrepancies in the input from different sensory modalities are reconciled and could provide a probe for examining the neural stages at which evoked responses correspond to the contents of conscious perception.
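A minimal sketch of the capture effect described above as weighted temporal averaging, in Python. The model form and the weight values are illustrative assumptions; the abstract reports the direction and asymmetry of the shifts, not a model or magnitudes:

```python
# Toy model: each event's perceived time is drawn partway toward the other
# event. The weights are illustrative; the flash gets the larger weight
# because the abstract reports stronger capture of the flash by the click.
def perceived_times(t_flash_ms, t_click_ms, w_flash=0.5, w_click=0.2):
    flash_hat = t_flash_ms + w_flash * (t_click_ms - t_flash_ms)
    click_hat = t_click_ms + w_click * (t_flash_ms - t_click_ms)
    return flash_hat, click_hat

# Click 100 ms before the flash: both estimates move toward convergence,
# the flash more than the click.
print(perceived_times(t_flash_ms=0, t_click_ms=-100))  # (-50.0, -80.0)
```

Because the weights sum to less than 1, the two estimates are drawn toward, but do not reach, a common time.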

2.
3.
4.
5.
Nonspatial attentional shifts between audition and vision
This study investigated nonspatial shifts of attention between the visual and auditory modalities. The authors provide evidence that the modality of a stimulus (S1) affected the processing of a subsequent stimulus (S2), depending on whether the two shared the same modality. For both vision and audition, the onset of S1 summoned attention exogenously to its modality, causing a delay in processing an S2 presented in a different modality. This finding undermines the notion that auditory stimuli have a stronger and more automatic alerting effect than visual stimuli (M. I. Posner, M. J. Nissen, & R. M. Klein, 1976). The results are consistent with other recent studies showing cross-modal attentional limitations. The authors suggest that such cross-modal limitations can be produced simply by presenting S1 and S2 in different modalities, and that central processing mechanisms are also, at least partially, modality dependent.

6.
Neurologically normal observers misperceive the midpoint of horizontal lines as systematically leftward of veridical center, a phenomenon known as pseudoneglect. Pseudoneglect is attributed to a tonic asymmetry of visuospatial attention favoring left hemispace. Whereas visuospatial attention is biased toward left hemispace, some evidence suggests that audiospatial attention may possess a right hemispatial bias. If spatial attention is supramodal, then the leftward bias observed in visual line bisection should also be expressed in auditory bisection tasks. If spatial attention is modality specific, then bisection errors in visual and auditory spatial judgments are potentially dissociable. Subjects performed a bisection task for spatial intervals defined by auditory stimuli, as well as a tachistoscopic visual line bisection task. Subjects showed a significant leftward bias in the visual line bisection task and a significant rightward bias in the auditory interval bisection task. Performance across the two tasks was, however, significantly positively correlated. These results imply the existence of both modality-specific and supramodal attentional mechanisms, in which visuospatial attention has a prepotent leftward vector and audiospatial attention a prepotent rightward vector. In addition, the biases of visuospatial and audiospatial attention are correlated.

7.
Switching from one functional or cognitive operation to another is thought to rely on executive/control processes. The efficacy of these processes may depend on the extent of overlap between the neural circuitry mediating the different tasks; more effective task preparation (and, by extension, smaller switch costs) is achieved when this overlap is small. We investigated the performance costs associated with switching tasks and/or switching sensory modalities. Participants discriminated either the identity or the spatial location of objects that were presented either visually or acoustically. Switch costs between tasks were significantly smaller when the sensory modality of the task switched than when it repeated. This was the case irrespective of whether the pre-trial cue indicated only the upcoming task but not the sensory modality (Experiment 1), or both the upcoming task and the sensory modality (Experiment 2). In addition, in both experiments switch costs between the senses were positively correlated when the sensory modality of the task repeated across trials, and not when it switched. The collective evidence supports the independence of the control processes mediating task switching and modality switching, and also the hypothesis that switch costs reflect competitive interference between neural circuits.
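The central comparison reduces to simple arithmetic on cell means: a task-switch cost (switch RT minus repeat RT) computed separately for modality-repeat and modality-switch transitions. A sketch with hypothetical millisecond values chosen to mirror the reported pattern, not data from the study:

```python
# Hypothetical cell means (ms). The pattern mirrors the reported result:
# the task-switch cost is smaller when the sensory modality also switches.
mean_rt = {
    ("task_repeat", "modality_repeat"): 560,
    ("task_switch", "modality_repeat"): 660,
    ("task_repeat", "modality_switch"): 600,
    ("task_switch", "modality_switch"): 650,
}

for mod in ("modality_repeat", "modality_switch"):
    cost = mean_rt[("task_switch", mod)] - mean_rt[("task_repeat", mod)]
    print(f"{mod}: task-switch cost = {cost} ms")  # 100 ms vs. 50 ms
```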

8.
Specific and nonspecific transfer of pattern class concept information between vision and audition was examined. In Experiment 1, subjects learned visually or auditorily to distinguish between two pattern classes that were either the same as or different from the test classes. All subjects were then tested on the auditory classification of 50 patterns. Specific intramodal and cross-modal transfer was noted; subjects trained visually and auditorily on the test classes were equivalent in performance and more accurate than untrained controls. In Experiment 2, the training of Experiment 1 was repeated, but subjects were tested visually. There was no evidence of auditory-to-visual transfer but some suggestion of nonspecific transfer within the visual modality. The asymmetry of transfer is discussed in terms of the modality into which patterns are most likely translated for the cross-modal tasks and in terms of the quality of prototype formation with visual versus auditory patterns.

9.
Recent work on non-visual modalities aims to translate, extend, revise, or unify claims about perception beyond vision. This paper presents central lessons drawn from attention to hearing, sounds, and multimodality. It focuses on auditory awareness and its objects, and it advances more general lessons for perceptual theorizing that emerge from thinking about sounds and audition. The paper argues that sounds and audition no better support the privacy of perception’s objects than does vision; that perceptual objects are more diverse than an exclusively visual perspective suggests; and that multimodality is rampant. In doing so, it presents an account according to which audition affords awareness as of not just sounds, but also environmental happenings beyond sounds.

10.
11.
Sixty-four subjects were tested in an experiment relating temporal factors to the modality effect. The experiment involved both an immediate recall phase and a recognition phase. Recall scores supported a functional view of the modality effect, while the one- and two-store models only partially predicted the data. The recognition data could be accounted for only in functional terms. A non-parametric compatibility score was suggested to capture the recognition performance. The effect of a shift from input mode (in Phase I) to test mode (in Phase II) was also analyzed.

12.
After repeated presentations of a long inspection tone (800 or 1,000 msec), a test tone of intermediate duration (600 msec) appeared shorter than it would otherwise appear. A short inspection tone (200 or 400 msec) tended to increase the apparent length of the intermediate test tone. Thus, a negative aftereffect of perceived auditory duration occurred, and a similar aftereffect occurred in the visual modality. These aftereffects, each involving a single sensory dimension, are simple aftereffects. The following procedures produced contingent aftereffects of perceived duration. A pair of lights, the first short and the second long, was presented repeatedly during an inspection period. When a pair of test lights of intermediate duration was then presented, the first member of the pair appeared longer in relation to the second. A similar aftereffect occurred in the auditory modality. In these latter aftereffects, the perceived duration of a test light or tone is contingent (that is, dependent) on its temporal order, first or second, within the pair of test stimuli. An experiment designed to test the possibility of cross-modal transfer of contingent aftereffects between audition and vision found no significant cross-modal aftereffects.
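One way to picture the simple (non-contingent) aftereffect is as an internal duration reference that drifts toward the repeatedly presented inspection duration, so that a fixed test stimulus is judged against a shifted baseline. The update rule, learning rate, and repetition count below are illustrative assumptions, not the authors' model:

```python
# Toy adaptation model: the internal reference moves a fixed fraction of
# the remaining distance toward the inspection duration each presentation.
def adapt(reference_ms, inspection_ms, n_reps=20, rate=0.05):
    for _ in range(n_reps):
        reference_ms += rate * (inspection_ms - reference_ms)
    return reference_ms

for inspection in (1000, 200):       # long vs. short inspection tone
    ref = adapt(600.0, inspection)   # start from the 600 msec test duration
    verdict = "shorter" if ref > 600 else "longer"
    print(f"{inspection} ms inspection -> reference ~{ref:.0f} ms; "
          f"the 600 ms test now seems {verdict}")
```

A reference pulled up toward 1,000 msec makes the unchanged 600 msec test seem shorter, reproducing the direction of the negative aftereffect.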

13.
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmical patterns derived from the alternation of vowels and consonants in English and Japanese, presented in audition, vision, audition and vision at the same time, or touch. Experiment 1 confirmed that discrimination is possible on the basis of auditory rhythmic patterns, and extended this finding to vision, using ‘aperture-close’ mouth movements of a schematic face. In Experiment 2, language discrimination was demonstrated using visual and auditory materials that did not resemble spoken articulation. In a combined analysis of the data from Experiments 1 and 2, a beneficial effect was also found when auditory rhythmic information was available to participants. Although discrimination could be achieved using vision alone, auditory performance was nevertheless better. In a final experiment, we demonstrate that the rhythm of speech can also be discriminated successfully by means of vibrotactile patterns delivered to the fingertip. The results of the present study therefore demonstrate that discrimination between languages' syllabic rhythmic patterns is possible on the basis of visual and tactile displays.
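A sketch of how such stimuli can be derived: collapse a transcribed utterance into runs of consonants and vowels and map each run to a duration, giving a modality-independent rhythm that could drive tones, flashes, or vibrotactile pulses. The orthographic vowel test, the run-to-duration mapping, and the 60 ms unit are illustrative assumptions, not the authors' exact procedure:

```python
VOWELS = set("aeiou")

def cv_runs(text):
    """Collapse an utterance into runs of consonants (C) and vowels (V)."""
    kinds = ["V" if ch in VOWELS else "C" for ch in text.lower() if ch.isalpha()]
    runs = []
    for k in kinds:
        if runs and runs[-1][0] == k:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([k, 1])       # start a new run
    return runs

def to_durations(runs, unit_ms=60):
    """Map each C/V run to a duration usable in any modality."""
    return [(kind, length * unit_ms) for kind, length in runs]

print(to_durations(cv_runs("strength")))  # consonant-cluster-heavy (English-like)
print(to_durations(cv_runs("arigato")))   # regular CV alternation (Japanese-like)
```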

14.
15.
Temporal orienting, that is, selective attention to instants in time, has been shown to modulate performance in terms of faster responses in a variety of paradigms. Electrophysiological recordings have shown that temporal orienting modulates neural processing at early, probably perceptual, and late, probably decision- or response-related, stages. Recently, it was shown that the effect of temporal orienting on early auditory brain potentials is independent of the effect of the physical sound feature of intensity. This indicates that temporal orienting might not affect stimulus processing by increasing the sensory gain of attended stimuli. In the present study, we investigated whether the independence of the temporal-orienting and sound-intensity effects could be replicated behaviorally. Sequences were presented that were either rhythmic, most likely creating temporal expectations, or arrhythmic, presumably not creating such expectations. As hypothesized, the main effects of temporal expectation and sound intensity on reaction times were independent (Experiment 1). The same pattern of results was replicated with a slightly altered paradigm (Experiment 2) and with a different kind of task (Experiment 3). In sum, these results corroborate the notion that the effect of temporal orienting might not rely on the same processes as the effect of sound intensity.
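Independence of the two effects amounts to additivity in a factorial design: the temporal-expectation benefit should be the same at both intensity levels, so the interaction term is zero. A minimal check with hypothetical cell means, not values from the study:

```python
# Hypothetical mean RTs (ms) for a 2 (sequence) x 2 (intensity) design.
rt = {
    ("rhythmic",   "loud"): 310,
    ("rhythmic",   "soft"): 350,
    ("arrhythmic", "loud"): 340,
    ("arrhythmic", "soft"): 380,
}

expectation = rt[("arrhythmic", "loud")] - rt[("rhythmic", "loud")]   # 30 ms
intensity   = rt[("rhythmic", "soft")]   - rt[("rhythmic", "loud")]   # 40 ms
interaction = (rt[("arrhythmic", "soft")] - rt[("arrhythmic", "loud")]) - intensity

print(f"expectation effect: {expectation} ms")
print(f"intensity effect:   {intensity} ms")
print(f"interaction:        {interaction} ms  (0 => additive main effects)")
```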

16.
The general observation that handwriting is not noticeably impaired by the withdrawal of vision can be explained in two ways. One might argue that vision is not needed during the act of writing. Micro-analyses should then reveal that spatial as well as temporal writing features are identical in conditions of vision and no vision. Alternatively, it is possible that vision is needed during the act of writing, but that without vision possible errors and inaccuracies have to be prevented. Assuming that the latter would place an extra demand on movement control, this should be revealed by an increase in processing time. We have found evidence for the latter view in the present study in which 12 subjects wrote a nonsense letter sequence with and without vision. Close examination showed that writing shapes remained equally invariant under both vision conditions, suggesting that spatial control was unaffected by withdrawing vision. The prediction that invariance of shapes is preserved in the absence of vision at the expense of processing time increments was confirmed. The increase of reaction time observed when visual guidance was withdrawn suggests that more processing time was needed prior to the movement start. Moreover, the RT increment was larger when a short writing duration was instructed. The present findings will be discussed in light of the remarkable flexibility of writing as a motor skill in which writers appear to be able to employ specific strategies to preserve shape in the absence of visual guidance.

17.
18.
The development of neuroimaging methods has had a significant impact on the study of the human brain. Functional MRI, with its high spatial resolution, provides investigators with a method to localize the neuronal correlates of many sensory and cognitive processes. Magneto- and electroencephalography, in turn, offer excellent temporal resolution allowing the exact time course of neuronal processes to be investigated. Applying these methods to multisensory processing, many research laboratories have been successful in describing cross-sensory interactions and their spatio-temporal dynamics in the human brain. Here, we review data from selected neuroimaging investigations showing how vision can influence and interact with other senses, namely audition, touch, and olfaction. We highlight some of the similarities and differences in the cross-processing of the different sensory modalities and discuss how different neuroimaging methods can be applied to answer specific questions about multisensory processing.

19.
Subjective flicker rates were measured for compound waveforms consisting of five harmonics without a fundamental component. It was found that observers perceived a rate at the fundamental frequency, although energy at this frequency was not included in the signals. In auditory pitch sensation this is called the missing-fundamental phenomenon, and an analogous finding is known to occur in spatial vision. Moreover, observers perceived flicker rates at the fundamental frequency even in the random-phase conditions, in which the period of the fundamental component is not apparent in the real waveforms. The results indicate that the perceived flicker rates are not detected from the temporal waveforms per se. One possible mechanism for extracting such a periodicity is to apply an autocorrelation function to the real temporal waveforms.
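The proposed autocorrelation mechanism is easy to verify numerically: a waveform built from harmonics 2 through 6 of a 5 Hz fundamental, with random phases, still has its first major autocorrelation peak at the 200 ms fundamental period. A sketch; the sampling rate, fundamental frequency, and phase seed are illustrative choices:

```python
import numpy as np

fs = 1000.0                          # sampling rate (Hz)
f0 = 5.0                             # "missing" fundamental (Hz)
t = np.arange(0, 4, 1 / fs)          # 4 s of signal

rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, size=5)         # random-phase condition
x = sum(np.sin(2 * np.pi * k * f0 * t + p)         # harmonics 2..6 only;
        for k, p in zip(range(2, 7), phases))      # no energy at f0 itself

ac = np.correlate(x, x, mode="full")[x.size - 1:]  # autocorrelation, lags >= 0
min_lag = int(0.05 * fs)             # ignore the broad peak around lag 0
peak = np.argmax(ac[min_lag:]) + min_lag

print(f"autocorrelation peak at {peak / fs * 1000:.0f} ms "
      f"(fundamental period = {1000 / f0:.0f} ms)")
```

Even though no component repeats at 5 Hz, all five harmonics realign every 200 ms, so the autocorrelation peaks there regardless of phase.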

20.