Similar articles
1.
The last decade has seen great progress in the study of the nature of crossmodal links in exogenous and endogenous spatial attention (see [Spence, C., McDonald, J., & Driver, J. (2004). Exogenous spatial cuing studies of human crossmodal attention and multisensory integration. In C. Spence, & J. Driver (Eds.), Crossmodal space and crossmodal attention (pp. 277-320). Oxford, UK: Oxford University Press.], for a recent review). A growing body of research now highlights the existence of robust crossmodal links between auditory, visual, and tactile spatial attention. However, until recently, studies of exogenous and endogenous attention have proceeded relatively independently. Yet in daily life, these two forms of attentional orienting continuously compete for the control of our attentional resources and, ultimately, our awareness. It is therefore critical to try to understand how exogenous and endogenous attention interact, both in the unimodal context of the laboratory and in the multisensory contexts that are more representative of everyday life. To date, progress in understanding the interaction between these two forms of orienting has primarily come from unimodal studies of visual attention. We therefore start by summarizing what has been learned from this large body of empirical research, before going on to review more recent studies that have started to investigate the interaction between endogenous and exogenous orienting in a multisensory setting. We also discuss the evidence suggesting that exogenous spatial orienting is not truly automatic, at least when assessed in a crossmodal context. Several possible models describing the interaction between endogenous and exogenous orienting are outlined and then evaluated in terms of the extant data.

2.
Spatial information processing takes place in different brain regions that receive converging inputs from several sensory modalities. Because of our own movements—for example, changes in eye position, head rotations, and so forth—unimodal sensory representations move continuously relative to one another. It is generally assumed that for multisensory integration to be an orderly process, it should take place between stimuli at congruent spatial locations. In the monkey posterior parietal cortex, the ventral intraparietal (VIP) area is specialized for the analysis of movement information using visual, somatosensory, vestibular, and auditory signals. Focusing on the visual and tactile modalities, we found that in area VIP, as in the superior colliculus, multisensory signals interact at the single-neuron level, suggesting that this area participates in multisensory integration. Curiously, VIP does not use a single, invariant coordinate system to encode locations within and across sensory modalities. Visual stimuli can be encoded with respect to the eye, the head, or halfway between the two reference frames, whereas tactile stimuli seem to be predominantly encoded relative to the body. Hence, while some multisensory neurons in VIP could encode spatially congruent tactile and visual stimuli independently of current posture, in other neurons this would not be the case. Future work will need to evaluate the implications of these observations for theories of optimal multisensory integration.
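The reference-frame mixture described above is easiest to see with a toy computation. The sketch below is illustrative only (the variables and numbers are assumptions, not data from the paper): a visual location coded relative to the eye changes whenever gaze shifts, while a body-coded tactile location does not, so the two codes for one and the same external event stay aligned only at certain postures.

```python
# Minimal sketch of the reference-frame problem described above.
# All quantities are hypothetical 1-D azimuths in degrees.

def eye_centered(stimulus_azimuth_head, eye_azimuth_head):
    """Visual code: stimulus location relative to the current gaze direction."""
    return stimulus_azimuth_head - eye_azimuth_head

def body_centered(touch_azimuth_body):
    """Tactile code: location on the body, independent of gaze."""
    return touch_azimuth_body

stimulus = 20.0   # a visual event 20 deg right of the head midline
touch = 20.0      # a touch at the spatially congruent body location

for gaze in (0.0, -15.0, 15.0):
    v = eye_centered(stimulus, gaze)
    t = body_centered(touch)
    # With eye-centred visual coding, the same physical location is coded
    # differently for each gaze direction, while the tactile code is constant.
    print(f"gaze={gaze:+.0f}: visual code={v:+.0f}, tactile code={t:+.0f}")
```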

3.
康冠兰  罗霄骁 《心理科学》2020,(5):1072-1078
Crossmodal information interaction refers to the set of processes by which information from one sensory modality interacts with, and influences, information from another modality. It comprises two main aspects: how inputs from different sensory modalities are integrated, and how conflicts between crossmodal signals are controlled. This article reviews the behavioural and neural mechanisms of audiovisual crossmodal integration and conflict control, and discusses how attention influences both. Future work should investigate the brain-network mechanisms of audiovisual crossmodal processing and examine crossmodal integration and conflict control in special populations, to help reveal the mechanisms underlying their cognitive and social dysfunctions.

4.
The extent to which attention modulates multisensory processing in a top-down fashion is still a subject of debate among researchers. Typically, cognitive psychologists interested in this question have manipulated the participants’ attention in terms of single/dual tasking or focal/divided attention between sensory modalities. We suggest an alternative approach, one that builds on the extensive older literature highlighting hemispheric asymmetries in the distribution of spatial attention. Specifically, spatial attention in vision, audition, and touch is typically biased preferentially toward the right hemispace, especially under conditions of high perceptual load. We review the evidence demonstrating such an attentional bias toward the right in extinction patients and healthy adults, along with the evidence of such rightward-biased attention in multisensory experimental settings. We then evaluate those studies that have demonstrated either a more pronounced multisensory effect in right than in left hemispace, or else similar effects in the two hemispaces. The results suggest that the influence of rightward-biased attention is more likely to be observed when the crossmodal signals interact at later stages of information processing and under conditions of higher perceptual load—that is, conditions under which attention is perhaps a compulsory enhancer of information processing. We therefore suggest that the spatial asymmetry in attention may provide a useful signature of top-down attentional modulation in multisensory processing.

5.
The development of neuroimaging methods has had a significant impact on the study of the human brain. Functional MRI, with its high spatial resolution, provides investigators with a method to localize the neuronal correlates of many sensory and cognitive processes. Magneto- and electroencephalography, in turn, offer excellent temporal resolution, allowing the exact time course of neuronal processes to be investigated. Applying these methods to multisensory processing, many research laboratories have been successful in describing cross-sensory interactions and their spatio-temporal dynamics in the human brain. Here, we review data from selected neuroimaging investigations showing how vision can influence and interact with other senses, namely audition, touch, and olfaction. We highlight some of the similarities and differences in the cross-processing of the different sensory modalities and discuss how different neuroimaging methods can be applied to answer specific questions about multisensory processing.

6.
Although sensory perception and neurobiology are traditionally investigated one modality at a time, real world behaviour and perception are driven by the integration of information from multiple sensory sources. Mounting evidence suggests that the neural underpinnings of multisensory integration extend into early sensory processing. This article examines the notion that neocortical operations are essentially multisensory. We first review what is known about multisensory processing in higher-order association cortices and then discuss recent anatomical and physiological findings in presumptive unimodal sensory areas. The pervasiveness of multisensory influences on all levels of cortical processing compels us to reconsider thinking about neural processing in unisensory terms. Indeed, the multisensory nature of most, possibly all, of the neocortex forces us to abandon the notion that the senses ever operate independently during real-world cognition.

7.
We constantly integrate the information that is available to our various senses. The extent to which the mechanisms of multisensory integration are subject to the influences of attention, emotion, and/or motivation is currently unknown. The "ventriloquist effect" is widely assumed to be an automatic crossmodal phenomenon, shifting the perceived location of an auditory stimulus toward a concurrently presented visual stimulus. In the present study, we examined whether audiovisual binding, as indicated by the magnitude of the ventriloquist effect, is influenced by threatening auditory stimuli presented prior to the ventriloquist experiment. Syllables spoken in a fearful voice were presented from one of eight loudspeakers, while syllables spoken in a neutral voice were presented from the other seven locations. Subsequently, participants had to localize pure tones while trying to ignore concurrent visual stimuli (both the auditory and the visual stimuli here were emotionally neutral). A reliable ventriloquist effect was observed. The emotional stimulus manipulation resulted in a reduction of the magnitude of the subsequently measured ventriloquist effect in both hemifields, as compared to a control group exposed to a similar attention-capturing, but nonemotional, manipulation. These results suggest that the emotional system is capable of influencing multisensory binding processes that have heretofore been considered automatic.

8.
The relevance of emotional perception in interpersonal relationships and social cognition has been well documented. Although brain diseases might impair emotional processing, studies concerning emotional recognition in patients with brain tumours are relatively rare. The aim of this study was to explore emotional recognition in patients with gliomas in three conditions (visual, auditory and crossmodal) and to analyse how tumour-related variables (notably, tumour localisation) and patient-related variables influence emotion recognition. Twenty-six patients with gliomas and 26 matched healthy controls were instructed to identify five basic emotions and a neutral expression, which were displayed through visual, auditory and crossmodal stimuli. Relative to the controls, recognition was weakly impaired in the patient group under both visual and auditory conditions, but performance was comparable across groups in the crossmodal condition. Additional analyses using the ‘race model’ suggest differences in multisensory emotional integration abilities across the groups, potentially correlated with the executive disorders observed in the patients. These observations support the view that compensatory mechanisms in glioma patients might preserve quality of life and help maintain the normal social and professional functioning often observed in these patients.
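The ‘race model’ analysis referred to above has a standard form, Miller's race-model inequality: if crossmodal responses are merely the faster of two independent unimodal processes, then P(RT ≤ t | bimodal) can never exceed P(RT ≤ t | visual) + P(RT ≤ t | auditory); violations of that bound indicate genuine integration. A minimal sketch with made-up reaction times (the data and the time grid are illustrative assumptions):

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

# Hypothetical reaction times (ms); real analyses use many trials per cell.
rt_visual = [420, 450, 480, 510, 540]
rt_auditory = [430, 455, 490, 520, 560]
rt_bimodal = [380, 400, 430, 460, 500]

t_grid = np.arange(350, 601, 10)
# Race-model bound: P(RT <= t | V) + P(RT <= t | A), capped at 1.
bound = np.minimum(ecdf(rt_visual, t_grid) + ecdf(rt_auditory, t_grid), 1.0)
violation = ecdf(rt_bimodal, t_grid) - bound

# Positive values at any t mean bimodal responses are faster than a race of
# independent unimodal processes allows -> evidence for integration.
print(f"max race-model violation: {violation.max():.2f}")
```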

9.
Perceptual processing in the brain is not driven purely by external stimuli; it is also subject to top-down perceptual modulation. Although this phenomenon has been confirmed by a large body of experimental research, its neural mechanisms remain an important open question in cognitive neuroscience. This review systematically introduces the neural basis, forms of implementation, research paradigms, and theoretical models of perceptual modulation, identifies the main problems facing current research, and outlines directions for future work, with the aim of advancing research on this question.

10.
Change blindness is the name given to people's inability to detect changes introduced between two consecutively-presented scenes when they are separated by a distractor that masks the transients that are typically associated with change. Change blindness has been reported within vision, audition, and touch, but has never before been investigated when successive patterns are presented to different sensory modalities. In the study reported here, we investigated change detection performance when the two to-be-compared stimulus patterns were presented in the same sensory modality (i.e., both visual or both tactile) and when one stimulus pattern was tactile while the other was presented visually or vice versa. The two to-be-compared patterns were presented consecutively, separated by an empty interval, or else separated by a masked interval. In the latter case, the masked interval could either be tactile or visual. The first experiment investigated visual-tactile and tactile-visual change detection performance. The results showed that in the absence of masking, participants detected changes in position accurately, despite the fact that the two to-be-compared displays were presented in different sensory modalities. Furthermore, when a mask was presented between the two to-be-compared displays, crossmodal change blindness was elicited regardless of whether the mask was visual or tactile. The results of two further experiments showed that performance was better overall in the unimodal (visual or tactile) conditions than in the crossmodal conditions. These results suggest that certain of the processes underlying change blindness are multisensory in nature. We discuss these findings in relation to recent claims regarding the crossmodal nature of spatial attention.

11.
Tracking an audio-visual target involves integrating spatial cues about target position from both modalities. Such sensory cue integration is a developmental process in the brain involving learning, with neuroplasticity as its underlying mechanism. We present a Hebbian learning-based adaptive neural circuit for multi-modal cue integration. The circuit temporally correlates stimulus cues within each modality via intramodal learning, as well as symmetrically across modalities via crossmodal learning, to independently update modality-specific neural weights on a sample-by-sample basis. It is realised as a robotic agent that must orient towards a moving audio-visual target. It continuously learns the best possible weights for a weighted combination of auditory and visual spatial target directional cues, which is directly mapped to robot wheel velocities to elicit an orientation response. Visual directional cues are noise-free and continuous but arise from a relatively narrow receptive field, while auditory directional cues are noisy and intermittent but arise from a relatively wider one. Comparative trials in simulation demonstrate that concurrent intramodal learning improves both the overall accuracy and precision of the orientation responses produced by symmetric crossmodal learning. We also demonstrate that symmetric crossmodal learning improves multisensory responses as compared to asymmetric crossmodal learning. The neural circuit also exhibits multisensory effects such as sub-additivity, additivity and super-additivity.
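The circuit itself is more elaborate than an abstract can convey, but the core loop (per-sample reweighting of two directional cues, followed by a weighted combination that drives the orienting response) can be sketched. In the sketch below, the published Hebbian rule is replaced by a simpler reliability-tracking update, and every name, noise level, and constant is an assumption for illustration, not the authors' implementation:

```python
import random
import statistics

# Illustrative sketch only: per-trial reweighting of visual and auditory
# directional cues from their temporal self-consistency, followed by a
# weighted combination that would drive the orienting response.

eta = 0.05                 # learning rate (assumed)
w_vis, w_aud = 0.5, 0.5    # modality-specific weights, updated online

def sample_cues(true_direction, n=5):
    """Hypothetical cue samples: vision relatively clean, audition noisy
    (the abstract's visual cues are narrower/cleaner, auditory wider/noisier)."""
    vis = [true_direction + random.gauss(0.0, 1.0) for _ in range(n)]
    aud = [true_direction + random.gauss(0.0, 8.0) for _ in range(n)]
    return vis, aud

for trial in range(500):
    true_dir = random.uniform(-30.0, 30.0)
    vis, aud = sample_cues(true_dir)
    # Intramodal step: a temporally consistent cue stream (low variance
    # across successive samples) is treated as more reliable.
    rel_vis = 1.0 / (1e-6 + statistics.variance(vis))
    rel_aud = 1.0 / (1e-6 + statistics.variance(aud))
    # Incremental update of the weights toward the normalised reliabilities.
    w_vis += eta * (rel_vis / (rel_vis + rel_aud) - w_vis)
    w_aud = 1.0 - w_vis
    # Combined directional estimate; on the robot this would be mapped
    # to wheel velocities to produce the orienting response.
    estimate = w_vis * statistics.mean(vis) + w_aud * statistics.mean(aud)

print(f"learned weights: w_vis={w_vis:.2f}, w_aud={w_aud:.2f}")
```

Run repeatedly, the cleaner visual stream ends up with the larger weight, which is the qualitative behaviour the abstract describes for its learned audio-visual combination.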

12.
Everyday experience involves the continuous integration of information from multiple sensory inputs. Such crossmodal interactions are advantageous since the combined action of different sensory cues can provide information unavailable from their individual operation, reducing perceptual ambiguity and enhancing responsiveness. The behavioural consequences of such multimodal processes and their putative neural mechanisms have been investigated extensively with respect to orienting behaviour and, to a lesser extent, the crossmodal coordination of spatial attention. These operations are concerned mainly with the determination of stimulus location. However, information from different sensory streams can also be combined to assist stimulus identification. Psychophysical and physiological data indicate that these two crossmodal processes are subject to different temporal and spatial constraints both at the behavioural and neuronal level and involve the participation of distinct neural substrates. Here we review the evidence for such a dissociation and discuss recent neurophysiological, neuroanatomical and neuroimaging findings that shed light on the mechanisms underlying crossmodal identification, with specific reference to audio-visual speech perception.
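The claim that combined cues reduce perceptual ambiguity is commonly formalised, in the wider literature rather than necessarily by this paper, as maximum-likelihood cue combination: each cue is weighted by its inverse variance, and the fused estimate is always less variable than either input. A minimal sketch with hypothetical numbers:

```python
# Standard maximum-likelihood (inverse-variance) cue combination, given as a
# sketch of how combining cues reduces ambiguity; not a model from this paper.

def fuse(mu_a, var_a, mu_v, var_v):
    """Fuse two independent Gaussian location estimates."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    mu = w_a * mu_a + w_v * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)  # always <= min(var_a, var_v)
    return mu, var

# Hypothetical auditory (imprecise) and visual (precise) location estimates.
mu, var = fuse(mu_a=12.0, var_a=16.0, mu_v=10.0, var_v=4.0)
print(f"fused location = {mu:.1f}, fused variance = {var:.1f}")
# -> the fused estimate lies nearer the visual cue, with variance 3.2 < 4.0
```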

13.
An important goal in the study of object, word, and face perception is to understand how the brain integrates various visual features: the binding process. As knowledge of the neurophysiological properties of integration processes in cortical areas has progressed, a number of psychophysical, neuropsychological, and computational studies have provided information about how and in what conditions the visual system combines different signals across time and space, the factors that modulate integration processes, the local processes that control larger-scale integration, and how these mechanisms can be implemented physiologically. This special issue aims to summarize recent data about integration processes, based on investigations ranging from the neurochemical substrate of integration processes and the role of attention in integrating features, through the spatial and temporal integration of local signals into coherent percepts and the integration of visual information during the programming of saccades, to what remains when integration fails in neurologically impaired patients.

14.
To understand the development of sensory processes, it is necessary not only to look at the maturation of each of the sensory systems in isolation, but also to study the development of the nervous system's capacity to integrate information across the different senses. It is through such multisensory integration that a coherent perceptual gestalt of the world comes to be generated. In the adult brain, multisensory convergence and integration take place at a number of brainstem and cortical sites, where individual neurons have been found that respond to multisensory stimuli with patterns of activation that depend on the nature of the stimulus complex and the intrinsic properties of the neuron. Parallels between the responses of these neurons and multisensory behavior and perception suggest that they are the substrates that underlie these cognitive processes. In both cat and monkey models, the development of these multisensory neurons and the appearance of their integrative capacity is a gradual postnatal process. For subcortical structures (i.e., the superior colliculus) this maturational process appears to be gated by the appearance of functional projections from regions of association cortex. The slow postnatal maturation of multisensory processes, coupled with its dependency on functional corticotectal connections, suggests that the development of multisensory integration may be tied to sensory experiences acquired during postnatal life. In support of this, eliminating experience in one sensory modality (i.e., vision) during postnatal development severely compromises the integration of multisensory cues. Research is ongoing to better elucidate the critical developmental antecedents for the emergence of normal multisensory capacity.

15.
Audiotactile multisensory interactions in human information processing
The last few years have seen a very rapid growth of interest in how signals from different sensory modalities are integrated in the brain to form the unified percepts that fill our daily lives. Research on multisensory interactions between vision, touch, and proprioception has revealed the existence of multisensory spatial representations that code the location of external events relative to our own bodies. In this review, we highlight recent converging evidence from both human and animal studies revealing that spatially modulated multisensory interactions also occur between hearing and touch, especially in the space immediately surrounding the head. These spatial audiotactile interactions for stimuli presented close to the head can affect not only the spatial aspects of perception, but also various other non-spatial aspects of audiotactile information processing. Finally, we highlight some of the most important questions for future research in this area.

16.
Integrative processing is traditionally believed to be dependent on consciousness. While earlier studies within the last decade reported many types of integration under subliminal conditions (i.e. without perceptual awareness), these findings have recently been widely challenged. This review evaluates the current evidence for 10 widely studied types of subliminal integration: arithmetic processing, object-context integration, multi-word processing, same-different processing, multisensory integration, and 5 different types of associative learning. Potential methodological issues concerning awareness measures are also taken into account. It is concluded that while there is currently no reliable evidence for subliminal integration, this does not necessarily refute ‘unconscious’ integration defined through non-subliminal (e.g. implicit) approaches.

17.
Emotional crossmodal integration (i.e., multisensorial decoding of emotions) is a crucial process that ensures adaptive social behaviors and responses to the environment. Recent evidence suggests that in binge drinking—an excessive alcohol consumption pattern associated with psychological and cerebral deficits—crossmodal integration is preserved at the behavioral level. Although some studies have suggested brain modifications during affective processing in binge drinking, nothing is known about the cerebral correlates of crossmodal integration. In the current study, we asked 53 university students (17 binge drinkers, 17 moderate drinkers, 19 nondrinkers) to perform an emotional crossmodal task while their behavioral and neurophysiological responses were recorded. Participants had to identify happiness and anger in three conditions (unimodal, crossmodal congruent, crossmodal incongruent) and two modalities (face and/or voice). Binge drinkers did not significantly differ from moderate drinkers and nondrinkers at the behavioral level. However, widespread cerebral modifications were found at perceptual (N100) and mainly at decisional (P3b) stages in binge drinkers, indexed by slower brain processing and stronger activity. These cerebral modifications were mostly related to anger processing and crossmodal integration. This study highlights higher electrophysiological activity in the absence of behavioral deficits, which could index a potential compensation process in binge drinkers. In line with results found in severe alcohol-use disorders, these electrophysiological findings show modified anger processing, which might have a deleterious impact on social functioning. Moreover, this study suggests impaired crossmodal integration at early stages of alcohol-related disorders.

18.
Previous research has shown that sounds facilitate perception of visual patterns appearing immediately after the sound but impair perception of patterns appearing after some delay. Here we examined the spatial gradient of the fast crossmodal facilitation effect and the slow inhibition effect in order to test whether they reflect separate mechanisms. We found that crossmodal facilitation is only observed at visual field locations overlapping with the sound, whereas crossmodal inhibition affects the whole hemifield. Furthermore, we tested whether multisensory perceptual learning with misaligned audio-visual stimuli reshapes crossmodal facilitation and inhibition. We found that training shifts crossmodal facilitation towards the trained location without changing its range. By contrast, training narrows the range of inhibition without shifting its position. Our results suggest that crossmodal facilitation and inhibition reflect separate mechanisms that can both be reshaped by multisensory experience even in adult humans. Multisensory links seem to be more plastic than previously thought.

19.
The present study investigated how multisensory integration in peripersonal space is modulated by limb posture (i.e. whether the limbs are crossed or uncrossed) and limb congruency (i.e. whether the observed body part matches the actual position of one’s limb). This was done separately for the upper limbs (Experiment 1) and the lower limbs (Experiment 2). The crossmodal congruency task was used to measure peripersonal space integration for the hands and the feet. It was found that the peripersonal space representation for the hands but not for the feet is dynamically updated based on both limb posture and limb congruency. Together these findings show how dynamic cues from vision, proprioception, and touch are integrated in peripersonal limb space and highlight fundamental differences in the way in which peripersonal space is represented for the upper and lower extremity.
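The crossmodal congruency task yields a simple summary statistic, the crossmodal congruency effect (CCE): the performance cost on incongruent relative to congruent trials, often computed on reaction times, with larger CCEs taken to index stronger visuotactile integration in peripersonal space. A minimal sketch with made-up data:

```python
import statistics

# Hypothetical reaction times (ms) from a crossmodal congruency task.
rt_congruent = [520, 545, 560, 580, 600]
rt_incongruent = [590, 620, 640, 655, 680]

# Crossmodal congruency effect: incongruent minus congruent mean RT.
cce = statistics.mean(rt_incongruent) - statistics.mean(rt_congruent)
print(f"CCE = {cce:.0f} ms")  # larger values -> stronger crossmodal interference
```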

20.
The last couple of years have seen a rapid growth of interest (especially amongst cognitive psychologists, cognitive neuroscientists, and developmental researchers) in the study of crossmodal correspondences – the tendency for our brains (not to mention the brains of other species) to preferentially associate certain features or dimensions of stimuli across the senses. By now, robust empirical evidence supports the existence of numerous crossmodal correspondences, affecting people’s performance across a wide range of psychological tasks – in everything from the redundant target effect paradigm through to studies of the Implicit Association Test, and from speeded discrimination/classification tasks through to unspeeded spatial localisation and temporal order judgment tasks. However, one question that has yet to receive a satisfactory answer is whether crossmodal correspondences automatically affect people’s performance (in all, or at least in a subset of tasks), as opposed to reflecting more of a strategic, or top-down, phenomenon. Here, we review the latest research on the topic of crossmodal correspondences to have addressed this issue. We argue that answering the question will require researchers to be more precise in terms of defining what exactly automaticity entails. Furthermore, one’s answer to the automaticity question may also hinge on the answer to a second question: Namely, whether crossmodal correspondences are all ‘of a kind’, or whether instead there may be several different kinds of crossmodal mapping (e.g., statistical, structural, and semantic). Different answers to the automaticity question may then be revealed depending on the type of correspondence under consideration. We make a number of suggestions for future research that might help to determine just how automatic crossmodal correspondences really are.
