Similar Articles
20 similar articles found.
1.
We report a series of experiments designed to demonstrate that the presentation of a sound can facilitate the identification of a concomitantly presented visual target letter in the backward masking paradigm. Two visual letters, serving as the target and its mask, were presented successively at various interstimulus intervals (ISIs). The results demonstrate that the crossmodal facilitation of participants' visual identification performance elicited by the presentation of a simultaneous sound occurs over a very narrow range of ISIs. This critical time-window lies just beyond the interval needed for participants to differentiate the target and mask as constituting two distinct perceptual events (Experiment 1) and can be dissociated from any facilitation elicited by making the visual target physically brighter (Experiment 2). When the sound is presented at the same time as the mask, a facilitatory, rather than an inhibitory, effect on visual target identification performance is still observed (Experiment 3). We further demonstrate that the crossmodal facilitation of the visual target by the sound depends on the establishment of a reliable temporally coincident relationship between the two stimuli (Experiment 4); however, by contrast, spatial coincidence is not necessary (Experiment 5). We suggest that when visual and auditory stimuli are always presented synchronously, a better-consolidated object representation is likely to be constructed (than that resulting from unimodal visual stimulation).

2.
Previous studies of multisensory integration have often stressed the beneficial effects that may arise when information concerning an event arrives via different sensory modalities at the same time, as, for example, exemplified by research on the redundant target effect (RTE). By contrast, studies of the Colavita visual dominance effect (e.g., [Colavita, F. B. (1974). Human sensory dominance. Perception & Psychophysics, 16, 409–412]) highlight the inhibitory consequences of the competition between signals presented simultaneously in different sensory modalities instead. Although both the RTE and the Colavita effect are thought to occur at early sensory levels and the stimulus conditions under which they are typically observed are very similar, the interplay between these two opposing behavioural phenomena (facilitation vs. competition) has yet to be addressed empirically. We hypothesized that the dissociation may reflect two of the fundamentally different ways in which humans can perceive concurrent auditory and visual stimuli. In Experiment 1, we demonstrated both multisensory facilitation (RTE) and the Colavita visual dominance effect using exactly the same audiovisual displays, by simply changing the task from a speeded detection task to a speeded modality discrimination task. Meanwhile, in Experiment 2, the participants exhibited multisensory facilitation when responding to visual targets and multisensory inhibition when responding to auditory targets while keeping the task constant. These results therefore indicate that both multisensory facilitation and inhibition can be demonstrated in reaction to the same bimodal event.

3.
Saccades operate a continuous selection between competing targets at different locations. This competition has been mostly investigated in the visual context, and it is well known that a visual distractor can interfere with a saccade toward a visual target. Here, we investigated whether multimodal, audio-visual targets confer stronger resilience against visual distraction. Saccades to audio-visual targets had shorter latencies than saccades to unisensory stimuli. This facilitation exceeded the level that could be explained by simple probability summation, indicating that multisensory integration had occurred. The magnitude of inhibition induced by a visual distractor was comparable for saccades to unisensory and multisensory targets, but the duration of the inhibition was shorter for multimodal targets. We conclude that multisensory integration can allow a saccade plan to be reestablished more rapidly following saccadic inhibition.
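The probability-summation benchmark mentioned in this abstract is commonly tested with Miller's race-model inequality: if the bimodal cumulative latency distribution ever exceeds the sum of the two unisensory distributions, a race between independent channels cannot explain the facilitation. A minimal sketch with made-up latency data (the function names and all numbers are illustrative, not taken from the study) might look like:

```python
import numpy as np

def ecdf(samples, t):
    """Empirical cumulative distribution P(RT <= t) evaluated on a grid t."""
    return np.mean(np.asarray(samples)[:, None] <= t, axis=0)

def race_model_violation(rt_v, rt_a, rt_av, n_points=200):
    """Largest value of G_AV(t) - min(F_V(t) + F_A(t), 1) over a time grid.

    A positive result violates Miller's race-model inequality: the
    bimodal RT distribution is faster than any race between two
    independent unisensory channels could produce.
    """
    lo = min(map(min, (rt_v, rt_a, rt_av)))
    hi = max(map(max, (rt_v, rt_a, rt_av)))
    t = np.linspace(lo, hi, n_points)
    bound = np.minimum(ecdf(rt_v, t) + ecdf(rt_a, t), 1.0)
    return float(np.max(ecdf(rt_av, t) - bound))

# Hypothetical saccade latencies (ms); the bimodal condition is clearly fastest
rng = np.random.default_rng(0)
rt_v = rng.normal(180, 20, 500)   # visual-only
rt_a = rng.normal(190, 25, 500)   # auditory-only
rt_av = rng.normal(150, 15, 500)  # audio-visual

violation = race_model_violation(rt_v, rt_a, rt_av)
print(violation > 0)  # True: facilitation beyond probability summation
```

With these invented latencies the bound is violated over a wide time range; with real data the test is usually evaluated at several quantiles with a statistical test across participants.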

4.
The question of how vision and audition interact in natural object identification is currently a matter of debate. We developed a large set of auditory and visual stimuli representing natural objects in order to facilitate research in the field of multisensory processing. Normative data were obtained for 270 brief environmental sounds and 320 visual object stimuli. Each stimulus was named, categorized, and rated with regard to familiarity and emotional valence by 56 participants (Study 1). This multimodal stimulus set was employed in two subsequent crossmodal priming experiments that used semantically congruent and incongruent stimulus pairs in an S1-S2 paradigm. Task-relevant targets were either auditory (Study 2) or visual stimuli (Study 3). The behavioral data of both experiments revealed a crossmodal priming effect, with shorter reaction times for congruent as compared to incongruent stimulus pairs. The observed facilitation effect suggests that object identification in one modality is influenced by input from another modality. This result implies that congruent visual and auditory stimulus pairs were perceived as the same object and provides a first validation of the multimodal stimulus set.

5.
Presenting two targets in a rapid visual stream will frequently result in the second target (T2) being missed when presented shortly after the first target (T1). This so-called attentional blink (AB) phenomenon can be reduced by various experimental manipulations. This study investigated the effect of combining T2 with a non-specific sound, played either simultaneously with T2 or preceding T2 by a fixed latency. The reliability of the observed effects and their correlation with potential predictors were studied. The tone significantly improved T2 identification rates regardless of tone condition and of the delay between targets, suggesting that the crossmodal facilitation of T2 identification is not limited to visual-perceptual enhancement. For the simultaneous condition, an additional time-on-task effect was observed in the form of a reduction of the AB that occurred within an experimental session. Thus, audition-driven enhancement of visual perception may need some time for its full potential to evolve. Split-half and test-retest reliability were found consistently only for a condition without additional sound. AB magnitude obtained in this condition was related to AB magnitudes obtained in both sound conditions. Self-reported distractibility and performance in tests of divided attention and of cognitive flexibility correlated with the AB magnitudes of a subset, but never all, of the conditions under study. The reliability and correlation results suggest that not only dispositional abilities but also state factors exert an influence on AB magnitude. These findings extend earlier work on audition-driven enhancement of target identification in the AB and on the reliability and behavioural correlates of the AB.

6.
Multisensory integration can play a critical role in producing unified and reliable perceptual experience. When sensory information in one modality is degraded or ambiguous, information from other senses can crossmodally resolve perceptual ambiguities. Prior research suggests that auditory information can disambiguate the contents of visual awareness by facilitating perception of intermodally consistent stimuli. However, it is unclear whether these effects are truly due to crossmodal facilitation or are mediated by voluntary selective attention to audiovisually congruent stimuli. Here, we demonstrate that sounds can bias competition in binocular rivalry toward audiovisually congruent percepts, even when participants have no recognition of the congruency. When speech sounds were presented in synchrony with speech-like deformations of rivalling ellipses, ellipses with crossmodally congruent deformations were perceptually dominant over those with incongruent deformations. This effect was observed in participants who could not identify the crossmodal congruency in an open-ended interview (Experiment 1) or detect it in a simple 2AFC task (Experiment 2), suggesting that the effect was not due to voluntary selective attention or response bias. These results suggest that sound can automatically disambiguate the contents of visual awareness by facilitating perception of audiovisually congruent stimuli.

7.
Ecker, A. J., & Heller, L. M. (2005). Perception, 34(1), 59-75.
We carried out two experiments to measure the combined perceptual effect of visual and auditory information on the perception of a moving object's trajectory. All visual stimuli consisted of a perspective rendering of a ball moving in a three-dimensional box. Each video was paired with one of three sound conditions: silence, the sound of a ball rolling, or the sound of a ball hitting the ground. We found that the sound condition influenced whether observers were more likely to perceive the ball as rolling back in depth on the floor of the box or jumping in the frontal plane. In a second experiment we found further evidence that the reported shift in path perception reflects perceptual experience rather than a deliberate decision process. Instead of directly judging the ball's path, observers judged the ball's speed. Speed is an indirect measure of the perceived path because, as a result of the geometry of the box and the viewing angle, a rolling ball would travel a greater distance than a jumping ball in the same time interval. Observers did judge a ball paired with a rolling sound as faster than a ball paired with a jumping sound. This auditory-visual interaction provides an example of a unitary percept arising from multisensory input.

8.
Previous research has shown that irrelevant sounds can facilitate the perception of visual apparent motion. Here the effectiveness of a single sound to facilitate motion perception was investigated in three experiments. Observers were presented with two discrete lights temporally separated by stimulus onset asynchronies from 0 to 350 ms. After each trial, observers classified their impression of the stimuli using a categorisation system. A short sound presented temporally (and spatially) midway between the lights facilitated the impression of motion relative to baseline (lights without sound), whereas a sound presented either before the first or after the second light or simultaneously with the lights did not affect motion impression. The facilitation effect also occurred with sound presented far from the visual display, as well as with a continuous sound that started with the first light and terminated with the second light. No facilitation of visual motion perception occurred if the sound was part of a tone sequence that allowed for intramodal perceptual grouping of the auditory stimuli prior to the critical audiovisual stimuli. Taken together, the findings are consistent with a low-level audiovisual integration approach in which the perceptual system merges temporally proximate sound and light stimuli, thereby provoking the impression of a single multimodal moving object.

9.
The relevance of emotional perception in interpersonal relationships and social cognition has been well documented. Although brain diseases might impair emotional processing, studies concerning emotional recognition in patients with brain tumours are relatively rare. The aim of this study was to explore emotional recognition in patients with gliomas in three conditions (visual, auditory and crossmodal) and to analyse how tumour-related variables (notably, tumour localisation) and patient-related variables influence emotion recognition. Twenty-six patients with gliomas and 26 matched healthy controls were instructed to identify 5 basic emotions and a neutral expression, which were displayed through visual, auditory and crossmodal stimuli. Relative to the controls, recognition was mildly impaired in the patient group under both visual and auditory conditions, but the performances were comparable in the crossmodal condition. Additional analyses using the ‘race model’ suggest differences in multisensory emotional integration abilities across the groups, which were potentially correlated with the executive disorders observed in the patients. These observations support the view of compensatory mechanisms in the case of gliomas that might preserve the quality of life and help maintain the normal social and professional lives often observed in these patients.

10.
We constantly integrate the information that is available to our various senses. The extent to which the mechanisms of multisensory integration are subject to the influences of attention, emotion, and/or motivation is currently unknown. The “ventriloquist effect” is widely assumed to be an automatic crossmodal phenomenon, shifting the perceived location of an auditory stimulus toward a concurrently presented visual stimulus. In the present study, we examined whether audiovisual binding, as indicated by the magnitude of the ventriloquist effect, is influenced by threatening auditory stimuli presented prior to the ventriloquist experiment. Syllables spoken in a fearful voice were presented from one of eight loudspeakers, while syllables spoken in a neutral voice were presented from the other seven locations. Subsequently, participants had to localize pure tones while trying to ignore concurrent visual stimuli (both the auditory and the visual stimuli here were emotionally neutral). A reliable ventriloquist effect was observed. The emotional stimulus manipulation resulted in a reduction of the magnitude of the subsequently measured ventriloquist effect in both hemifields, as compared to a control group exposed to a similar attention-capturing, but nonemotional, manipulation. These results suggest that the emotional system is capable of influencing multisensory binding processes that have heretofore been considered automatic.

11.
Since their formulation by the Gestalt movement more than a century ago, the principles of perceptual grouping have primarily been investigated in the visual modality and, to a lesser extent, in the auditory modality. The present review addresses the question of whether the same grouping principles also affect the perception of tactile stimuli. Although, to date, only a few studies have explicitly investigated the existence of Gestalt grouping principles in the tactile modality, we argue that many more studies have indirectly provided evidence relevant to this topic. Reviewing this body of research, we argue that principles similar to those reported previously in visual and auditory studies also govern the perceptual grouping of tactile stimuli. In particular, we highlight evidence showing that the principles of proximity, similarity, common fate, good continuation, and closure affect tactile perception in both unimodal and crossmodal settings. We also highlight that the grouping of tactile stimuli is often affected by visual and auditory information that happens to be presented simultaneously. Finally, we discuss the theoretical and applied benefits that might pertain to the further study of Gestalt principles operating in both unisensory and multisensory tactile perception.

12.
Multisensory integration and crossmodal attention have a large impact on how we perceive the world. Therefore, it is important to know under what circumstances these processes take place and how they affect our performance. So far, no consensus has been reached on whether multisensory integration and crossmodal attention operate independently and whether they represent truly automatic processes. This review describes the constraints under which multisensory integration and crossmodal attention occur and in what brain areas these processes take place. Some studies suggest that multisensory integration and crossmodal attention take place in higher heteromodal brain areas, while others show the involvement of early sensory-specific areas. Additionally, the current literature suggests that multisensory integration and attention interact depending on the processing level at which integration takes place. To shed light on this issue, different frameworks regarding the level at which multisensory interactions take place are discussed. Finally, this review focuses on the question of whether audiovisual interactions and crossmodal attention in particular are automatic processes. Recent studies suggest that this is not always the case. Overall, this review provides evidence for a parallel processing framework suggesting that both multisensory integration and attentional processes take place and can interact at multiple stages in the brain.

13.
Change blindness is the name given to people's inability to detect changes introduced between two consecutively-presented scenes when they are separated by a distractor that masks the transients that are typically associated with change. Change blindness has been reported within vision, audition, and touch, but has never before been investigated when successive patterns are presented to different sensory modalities. In the study reported here, we investigated change detection performance when the two to-be-compared stimulus patterns were presented in the same sensory modality (i.e., both visual or both tactile) and when one stimulus pattern was tactile while the other was presented visually or vice versa. The two to-be-compared patterns were presented consecutively, separated by an empty interval, or else separated by a masked interval. In the latter case, the masked interval could either be tactile or visual. The first experiment investigated visual-tactile and tactile-visual change detection performance. The results showed that in the absence of masking, participants detected changes in position accurately, despite the fact that the two to-be-compared displays were presented in different sensory modalities. Furthermore, when a mask was presented between the two to-be-compared displays, crossmodal change blindness was elicited no matter whether the mask was visual or tactile. The results of two further experiments showed that performance was better overall in the unimodal (visual or tactile) conditions than in the crossmodal conditions. These results suggest that certain of the processes underlying change blindness are multisensory in nature. We discuss these findings in relation to recent claims regarding the crossmodal nature of spatial attention.

14.
Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.

15.
We report a series of experiments designed to assess the effect of audiovisual semantic congruency on the identification of visually-presented pictures. Participants made unspeeded identification responses concerning a series of briefly-presented, and then rapidly-masked, pictures. A naturalistic sound was sometimes presented together with the picture at a stimulus onset asynchrony (SOA) that varied between 0 and 533 ms (auditory lagging). The sound could be semantically congruent, semantically incongruent, or else neutral (white noise) with respect to the target picture. The results showed that when the onset of the picture and sound occurred simultaneously, a semantically-congruent sound improved, whereas a semantically-incongruent sound impaired, participants’ picture identification performance, as compared to performance in the white-noise control condition. A significant facilitatory effect was also observed at SOAs of around 300 ms, whereas no such semantic congruency effects were observed at the longest interval (533 ms). These results therefore suggest that the neural representations associated with visual and auditory stimuli can interact in a shared semantic system. Furthermore, this crossmodal semantic interaction is not constrained by the need for the strict temporal coincidence of the constituent auditory and visual stimuli. We therefore suggest that audiovisual semantic interactions likely occur in a short-term buffer which rapidly accesses, and temporarily retains, the semantic representations of multisensory stimuli in order to form a coherent multisensory object representation. These results are explained in terms of Potter’s (1993) notion of conceptual short-term memory.

16.
Although both auditory and visual information can influence the perceived emotion of an individual, how these modalities contribute to the perceived emotion of a crowd of characters was hitherto unknown. Here, we manipulated the ambiguity of the emotion of either a visual or auditory crowd of characters by varying the proportions of characters expressing one of two emotional states. Using an intersensory bias paradigm, unambiguous emotional information from an unattended modality was presented while participants determined the emotion of a crowd in an attended, but different, modality. We found that emotional information in an unattended modality can disambiguate the perceived emotion of a crowd. Moreover, the size of the crowd had little effect on these crossmodal influences. The role of audiovisual information appears to be similar in perceiving emotion from individuals or crowds. Our findings provide novel insights into the role of multisensory influences on the perception of social information from crowds of individuals.

17.
The last decade has seen great progress in the study of the nature of crossmodal links in exogenous and endogenous spatial attention (see [Spence, C., McDonald, J., & Driver, J. (2004). Exogenous spatial cuing studies of human crossmodal attention and multisensory integration. In C. Spence, & J. Driver (Eds.), Crossmodal space and crossmodal attention (pp. 277-320). Oxford, UK: Oxford University Press.], for a recent review). A growing body of research now highlights the existence of robust crossmodal links between auditory, visual, and tactile spatial attention. However, until recently, studies of exogenous and endogenous attention have proceeded relatively independently. In daily life, however, these two forms of attentional orienting continuously compete for the control of our attentional resources and, ultimately, our awareness. It is therefore critical to try to understand how exogenous and endogenous attention interact in both the unimodal context of the laboratory and the multisensory contexts that are more representative of everyday life. To date, progress in understanding the interaction between these two forms of orienting has primarily come from unimodal studies of visual attention. We therefore start by summarizing what has been learned from this large body of empirical research, before going on to review more recent studies that have started to investigate the interaction between endogenous and exogenous orienting in a multisensory setting. We also discuss the evidence suggesting that exogenous spatial orienting is not truly automatic, at least when assessed in a crossmodal context. Several possible models describing the interaction between endogenous and exogenous orienting are outlined and then evaluated in terms of the extant data.

18.
Tracking an audio-visual target involves integrating spatial cues about target position from both modalities. Such sensory cue integration is a developmental process in the brain involving learning, with neuroplasticity as its underlying mechanism. We present a Hebbian learning-based adaptive neural circuit for multi-modal cue integration. The circuit temporally correlates stimulus cues within each modality via intramodal learning as well as symmetrically across modalities via crossmodal learning to independently update modality-specific neural weights on a sample-by-sample basis. It is realised as a robotic agent that must orient towards a moving audio-visual target. It continuously learns the best possible weights required for a weighted combination of auditory and visual spatial target directional cues that is directly mapped to robot wheel velocities to elicit an orientation response. Visual directional cues are noise-free and continuous but arise from a relatively narrow receptive field, while auditory directional cues are noisy and intermittent but arise from a relatively wider receptive field. Comparative trials in simulation demonstrate that concurrent intramodal learning improves both the accuracy and the precision of the orientation responses produced by symmetric crossmodal learning. We also demonstrate that symmetric crossmodal learning improves multisensory responses as compared to asymmetric crossmodal learning. The neural circuit also exhibits multisensory effects such as sub-additivity, additivity and super-additivity.
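The sample-by-sample weighted cue combination described in this abstract can be illustrated with a drastically simplified sketch: one scalar weight per modality, a Hebbian-style update driven by cue agreement, and invented learning-rate and noise parameters. None of these values come from the paper; the circuit's actual update rules and network structure are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# One scalar weight per modality, updated sample-by-sample (invented init)
w_vis, w_aud = 0.5, 0.5
eta = 0.01          # learning rate (assumed)
true_dir = 0.3      # normalised target direction, held constant for clarity

for _ in range(2000):
    # Visual cue: accurate but intermittent (narrow receptive field)
    vis = true_dir if rng.random() < 0.9 else None
    # Auditory cue: always available but noisy (wide receptive field)
    aud = true_dir + rng.normal(0.0, 0.2)
    if vis is None:
        continue  # no visual sample this step; skip the co-activity update
    # Hebbian-style crossmodal update: co-active, agreeing cues
    # strengthen each weight in proportion to that cue's accuracy.
    agreement = 1.0 - abs(vis - aud)
    w_vis += eta * agreement * (1.0 - abs(vis - true_dir))
    w_aud += eta * agreement * (1.0 - abs(aud - true_dir))
    total = w_vis + w_aud
    w_vis, w_aud = w_vis / total, w_aud / total  # keep a weighted average

# The combined directional estimate that would drive the wheel velocities
estimate = w_vis * true_dir + w_aud * (true_dir + rng.normal(0.0, 0.2))
print(w_vis > w_aud)  # True: the less noisy visual cue earns more weight
```

The design point this sketch captures is that the more reliable modality accumulates larger weight purely from local, correlation-driven updates, with no supervisor providing the reliabilities explicitly.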

19.
Human perception of visual stimuli near the sensory threshold is not always consistent. To explore this perceptual variability and its neural mechanisms, researchers have examined how spontaneous prestimulus alpha oscillations (8-13 Hz) influence visual perception. Recent studies have found that a decrease in prestimulus alpha power increases participants' detection hit rate but does not improve perceptual precision, whereas the phase of the prestimulus alpha oscillation predicts whether participants will successfully detect the stimulus. Prestimulus alpha power is thought to regulate the baseline activity level of visual cortex: a decrease in alpha power reflects enhanced baseline cortical activity, which in turn improves the detection of weak stimuli. Prestimulus alpha phase, by contrast, is thought to regulate the timing of cortical excitation and inhibition; the state of the brain (excited vs. inhibited) at stimulus onset determines the final perceptual outcome.

20.
This study aimed to identify the effect of training on the acquisition of the alphabetic principle in 5-year-old children. We compared the effect of multisensory training of letters in visual, haptic, graphomotor, visuo-haptic, and visuo-graphomotor groups. For each training type, we contrasted trained versus untrained letters in reading and spelling tasks. First, visuo-haptic and visuo-graphomotor training improved letter-sound correspondence acquisition scores more than the other types of training, and this improvement persisted in the second post-test. A cross-modal transfer was revealed by the fact that scores increased after blindfolded haptic and graphomotor experiences. Moreover, performance on untrained letters also improved, suggesting an indirect effect of the training extending beyond the specifically trained letters. The results argue in favor of a facilitating effect of multisensory encoding on acquisition of the alphabetic principle. Practical implications for the prevention of future reading difficulties are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
