Similar Articles
20 similar articles found
1.
Nonspatial attentional shifts between audition and vision
This study investigated nonspatial shifts of attention between the visual and auditory modalities. The authors provide evidence that the modality of a stimulus (S1) affected the processing of a subsequent stimulus (S2), depending on whether the two stimuli shared the same modality. For both vision and audition, the onset of S1 summoned attention exogenously to its modality, causing a delay in processing an S2 presented in a different modality. This finding undermines the notion that auditory stimuli have a stronger and more automatic alerting effect than visual stimuli (M. I. Posner, M. J. Nissen, & R. M. Klein, 1976). The results are consistent with other recent studies showing cross-modal attentional limitations. The authors suggest that such cross-modal limitations can be produced simply by presenting S1 and S2 in different modalities and that central processing mechanisms are also, at least partially, modality dependent.

2.

3.
The temporal cross-capture of audition and vision
We report that when a flash and an audible click occur in temporal proximity to each other, the perceived time of occurrence of both events is shifted in such a way as to draw them toward temporal convergence. In one experiment, observers judged when a flash occurred by reporting the clock position of a rotating marker. The flash was seen significantly earlier when it was preceded by an audible click and significantly later when it was followed by an audible click, relative to a condition in which the flash and click occurred simultaneously. In a second experiment, observers judged where the marker was when the click was heard. When a flash preceded or followed the click, similar but smaller capture effects were observed. These capture effects may reveal how temporal discrepancies in the input from different sensory modalities are reconciled and could provide a probe for examining the neural stages at which evoked responses correspond to the contents of conscious perception.

4.
Neurologically normal observers misperceive the midpoint of horizontal lines as systematically leftward of veridical center, a phenomenon known as pseudoneglect. Pseudoneglect is attributed to a tonic asymmetry of visuospatial attention favoring left hemispace. Whereas visuospatial attention is biased toward left hemispace, some evidence suggests that audiospatial attention may possess a right-hemispatial bias. If spatial attention is supramodal, then the leftward bias observed in visual line bisection should also be expressed in auditory bisection tasks. If spatial attention is modality specific, then bisection errors in visual and auditory spatial judgments are potentially dissociable. Subjects performed a bisection task for spatial intervals defined by auditory stimuli, as well as a tachistoscopic visual line bisection task. Subjects showed a significant leftward bias in the visual line bisection task and a significant rightward bias in the auditory interval bisection task. Performance across the two tasks was, however, significantly positively correlated. These results imply the existence of both modality-specific and supramodal attentional mechanisms, in which visuospatial attention has a prepotent leftward vector and audiospatial attention a prepotent rightward vector. In addition, the biases of visuospatial and audiospatial attention are correlated.

5.
Recent work on non-visual modalities aims to translate, extend, revise, or unify claims about perception beyond vision. This paper presents central lessons drawn from attention to hearing, sounds, and multimodality. It focuses on auditory awareness and its objects, and it advances more general lessons for perceptual theorizing that emerge from thinking about sounds and audition. The paper argues that sounds and audition no better support the privacy of perception’s objects than does vision; that perceptual objects are more diverse than an exclusively visual perspective suggests; and that multimodality is rampant. In doing so, it presents an account according to which audition affords awareness as of not just sounds, but also environmental happenings beyond sounds.

6.
Specific and nonspecific transfer of pattern class concept information between vision and audition was examined. In Experiment 1, subjects learned visually or auditorily to distinguish between two pattern classes that were either the same as or different from the test classes. All subjects were then tested on the auditory classification of 50 patterns. Specific intramodal and cross-modal transfer was noted; subjects trained visually and auditorily on the test classes were equivalent in performance and more accurate than untrained controls. In Experiment 2, the training of Experiment 1 was repeated, but subjects were tested visually. There was no evidence of auditory-to-visual transfer but some suggestion of nonspecific transfer within the visual modality. The asymmetry of transfer is discussed in terms of the modality into which patterns are most likely translated for the cross-modal tasks and in terms of the quality of prototype formation with visual versus auditory patterns.

7.
8.
9.
Sixty-four subjects were tested in an experiment relating temporal factors to the modality effect. The experiment involved both an immediate-recall and a recognition phase. Recall scores supported a functional view of the modality effect, while the one- and two-store models only partially predicted the data. The recognition data could be accounted for only in functional terms. A non-parametric compatibility score was suggested to capture the recognition performance. The effect of a shift from input mode (Phase I) to test mode (Phase II) was also analyzed.

10.
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmical patterns derived from the alternation of vowels and consonants in English and Japanese, presented in audition, vision, audition and vision at the same time, or touch. Experiment 1 confirmed that discrimination is possible on the basis of auditory rhythmic patterns and extended this finding to vision, using ‘aperture-close’ mouth movements of a schematic face. In Experiment 2, language discrimination was demonstrated using visual and auditory materials that did not resemble spoken articulation. In a combined analysis of the data from Experiments 1 and 2, a beneficial effect was also found when auditory rhythmic information was available to participants. Although discrimination could be achieved using vision alone, auditory performance was nevertheless better. In a final experiment, we demonstrate that the rhythm of speech can also be discriminated successfully by means of vibrotactile patterns delivered to the fingertip. The results of the present study therefore demonstrate that discrimination between languages' syllabic rhythmic patterns is possible on the basis of visual and tactile displays.

11.
After repeated presentations of a long inspection tone (800 or 1,000 msec), a test tone of intermediate duration (600 msec) appeared shorter than it would otherwise appear. A short inspection tone (200 or 400 msec) tended to increase the apparent length of the intermediate test tone. Thus, a negative aftereffect of perceived auditory duration occurred, and a similar aftereffect occurred in the visual modality. These aftereffects, each involving a single sensory dimension, are simple aftereffects. The following procedures produced contingent aftereffects of perceived duration. A pair of lights, the first short and the second long, was presented repeatedly during an inspection period. When a pair of test lights of intermediate duration was then presented, the first member of the pair appeared longer in relation to the second. A similar aftereffect occurred in the auditory modality. In these latter aftereffects, the perceived duration of a test light or tone is contingent (that is, dependent) on its temporal order, first or second, within a pair of test stimuli. An experiment designed to test the possibility of cross-modal transfer of contingent aftereffects between audition and vision found no significant cross-modal aftereffects.

12.
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two-thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modality of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.

13.
Switching from one functional or cognitive operation to another is thought to rely on executive/control processes. The efficacy of these processes may depend on the extent of overlap between the neural circuitry mediating the different tasks; more effective task preparation (and, by extension, smaller switch costs) is achieved when this overlap is small. We investigated the performance costs associated with switching tasks and/or switching sensory modalities. Participants discriminated either the identity or the spatial location of objects that were presented either visually or acoustically. Switch costs between tasks were significantly smaller when the sensory modality of the task switched than when it repeated. This was the case irrespective of whether the pre-trial cue informed participants only of the upcoming task but not its sensory modality (Experiment 1) or of both the upcoming task and its sensory modality (Experiment 2). In addition, in both experiments switch costs between the senses were positively correlated when the sensory modality of the task repeated across trials, but not when it switched. The collective evidence supports the independence of the control processes mediating task switching and modality switching, as well as the hypothesis that switch costs reflect competitive interference between neural circuits.

14.
The modality in which object azimuths (directions) are presented affects the learning of multiple locations. In Experiment 1, participants learned sets of three and five object azimuths specified by a visual virtual environment, spatial audition (3D sound), or auditory spatial language. Five azimuths were learned faster when specified by the spatial modalities (vision, audition) than by language. Experiment 2 equated the modalities for proprioceptive cues and eliminated spatial cues unique to vision (optic flow) and audition (differential binaural signals). A learning disadvantage for spatial language remained. We attribute this result to the cost of indirect processing from words to spatial representations.

15.
16.
17.
The development of neuroimaging methods has had a significant impact on the study of the human brain. Functional MRI, with its high spatial resolution, provides investigators with a method to localize the neuronal correlates of many sensory and cognitive processes. Magneto- and electroencephalography, in turn, offer excellent temporal resolution, allowing the exact time course of neuronal processes to be investigated. Applying these methods to multisensory processing, many research laboratories have been successful in describing cross-sensory interactions and their spatio-temporal dynamics in the human brain. Here, we review data from selected neuroimaging investigations showing how vision can influence and interact with other senses, namely audition, touch, and olfaction. We highlight some of the similarities and differences in the cross-processing of the different sensory modalities and discuss how different neuroimaging methods can be applied to answer specific questions about multisensory processing.

18.
19.
20.
Three experiments investigated cross-modal links between touch, audition, and vision in the control of covert exogenous orienting. In the first two experiments, participants made speeded discrimination responses (continuous vs. pulsed) for tactile targets presented randomly to the index finger of either hand. Targets were preceded at a variable stimulus onset asynchrony (150, 200, or 300 msec) by a spatially uninformative cue that was either auditory (Experiment 1) or visual (Experiment 2), presented on the same or opposite side as the tactile target. Tactile discriminations were more rapid and accurate when cue and target occurred on the same side, revealing cross-modal covert orienting. In Experiment 3, spatially uninformative tactile cues were presented prior to randomly intermingled auditory and visual targets requiring an elevation discrimination response (up vs. down). Responses were significantly faster for targets in both modalities when presented ipsilateral to the tactile cue. These findings demonstrate that the peripheral presentation of spatially uninformative auditory and visual cues produces cross-modal orienting that affects touch, and that tactile cues can also produce cross-modal covert orienting that affects audition and vision.
