Similar Documents
1.
Multisensory neurons in the deep superior colliculus (SC) show response enhancement to cross-modal stimuli that coincide in time and space. However, multisensory SC neurons respond to unimodal input as well. It is thus legitimate to ask why not all deep SC neurons are multisensory or, at least, develop multisensory behavior during an organism's maturation. The novel answer given here derives from a signal detection theory perspective. A Bayes' ratio model of multisensory enhancement is suggested. It holds that deep SC neurons operate under the Bayes' ratio rule, which guarantees optimal performance; that is, it maximizes the probability of target detection while minimizing the false alarm rate. It is shown that optimal performance of multisensory neurons vis-à-vis cross-modal stimuli implies, at the same time, that modality-specific neurons will outperform multisensory neurons in processing unimodal targets. Thus, only the existence of both multisensory and modality-specific neurons allows optimal performance when targets of one or several modalities may occur.
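The Bayes' ratio rule invoked here is a likelihood-ratio test, and the core claim can be illustrated with a minimal sketch; the Gaussian response distributions and all parameter values below are assumptions for illustration, not taken from the paper:

from scipy.stats import norm

# Hypothetical unit-variance Gaussian "response" distributions:
# mean 0 under noise, mean d under a target (assumed values).
D_VIS, D_AUD = 1.0, 1.0
CRITERION = 1.0  # likelihood-ratio decision criterion (assumed)

def likelihood_ratio(x, d):
    # P(x | target) / P(x | noise) for a single modality
    return norm.pdf(x, loc=d) / norm.pdf(x, loc=0.0)

def multisensory_detect(x_vis, x_aud):
    # For independent channels, the Bayes ratio of a cross-modal
    # observation is the product of the unimodal likelihood ratios.
    lr = likelihood_ratio(x_vis, D_VIS) * likelihood_ratio(x_aud, D_AUD)
    return lr > CRITERION

def unimodal_detect(x_vis):
    return likelihood_ratio(x_vis, D_VIS) > CRITERION

On a purely visual target the auditory channel of the multisensory detector contributes only noise to the product, which is the paper's argument for why a modality-specific detector outperforms it on unimodal targets.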

2.
Multisensory integration is a process whereby information converges from different sensory modalities to produce a response that is different from that elicited by the individual modalities presented alone. A neural basis for multisensory integration has been identified within a variety of brain regions, but the most thoroughly examined model has been that of the superior colliculus (SC). Multisensory processing in the SC of anaesthetized animals has been shown to be dependent on the physical parameters of the individual stimuli presented (e.g., intensity, direction, velocity) as well as their spatial relationship. However, it is unknown whether these stimulus features are important, or evident, in the awake behaving animal. To address this question, we evaluated the influence of physical properties of sensory stimuli (visual intensity, direction, and velocity; auditory intensity and location) on sensory activity and multisensory integration of SC neurons in awake, behaving primates. Monkeys were trained to fixate a central visual fixation point while visual and/or auditory stimuli were presented in the periphery. Visual stimuli were always presented within the contralateral receptive field of the neuron whereas auditory stimuli were presented at either ipsi- or contralateral locations. Many of the SC neurons responsive to these sensory stimuli (n = 66/84; 76%) had stronger responses when the visual and auditory stimuli were combined at contralateral locations than when the auditory stimulus was located on the ipsilateral side. This trend was significant across the population of auditory-responsive neurons. In addition, some SC neurons (n = 31) were presented with a battery of tests in which the quality of one stimulus of a pair was systematically manipulated. A small proportion of these neurons (n = 8/31; 26%) showed preferential responses to stimuli with specific physical properties, and these preferences were not significantly altered when multisensory stimulus combinations were presented. These data demonstrate that multisensory processing in the awake behaving primate is influenced by the spatial congruency of the stimuli as well as their individual physical properties.

3.
Lacey S, Campbell C, Sathian K. Perception, 2007, 36(10): 1513-1521
The relationship between visually and haptically derived representations of objects is an important question in multisensory processing and, increasingly, in mental representation. We review evidence for the format and properties of these representations, and address possible theoretical models. We explore the relevance of visual imagery processes and highlight areas for further research, including the neglected question of asymmetric performance in the visuo-haptic cross-modal memory paradigm. We conclude that the weight of evidence suggests the existence of a multisensory representation, spatial in format, and flexibly accessible by both bottom-up and top-down inputs, although efficient comparison between modality-specific representations cannot entirely be ruled out.

4.
Temporal recalibration in multisensory integration
Temporal synchrony between cross-modal stimuli is a necessary condition for multisensory integration, yet because physical transmission and neural conduction differ across modalities, such stimuli are never perfectly matched in time. Temporal recalibration refers to the brain's ability to adapt to short temporal lags between cross-modal stimuli; it reflects the plasticity of multisensory integration in the temporal dimension and manifests, after adaptation to asynchronously presented cross-modal stimuli, as a shift of the point of subjective simultaneity toward the adapted lag. This paper reviews the modality effects and underlying mechanisms of temporal recalibration, the processing stage at which it first arises, its relationship to the processing of stimulus content, and its main influencing factors. Future research should explore whether temporal recalibration can occur at early processing stages, test whether its cognitive processes are bidirectional, examine the role of spatial selective attention, and, in combination with studies of neural mechanisms, build a more complete theoretical account from an integrative perspective.

5.

Barsalou has recently argued against the strategy of identifying amodal neural representations by using their cross-modal responses (i.e., their responses to stimuli from different modalities). I agree that there are indeed modal structures that satisfy this “cross-modal response” criterion (CM), such as distributed and conjunctive modal representations. However, I argue that we can distinguish between modal and amodal structures by looking into differences in their cross-modal responses. A component of a distributed cell assembly can be considered unimodal because its responses to stimuli from a given modality are stable, whereas its responses to stimuli from any other modality are not (i.e., these are lost within a short time, plausibly as a result of cell assembly dynamics). In turn, conjunctive modal representations, such as superior colliculus cells in charge of sensory integration, are multimodal because they have a stable response to stimuli from different modalities. Finally, some prefrontal cells constitute amodal representations because they exhibit what has been called ‘adaptive coding’. This implies that their responses to stimuli from any given modality can be lost when the context and task conditions are modified. We cannot assign them a modality because they have no stable relation with any input type.

Abbreviations: CM: cross-modal response criterion; CCR: conjunctive cross-modal representations; fMRI: functional magnetic resonance imaging; MVPA: multivariate pattern analysis; pre-SMA: pre-supplementary motor area; PFC: prefrontal cortex; SC: superior colliculus; GWS: global workspace

6.
The brain can process and integrate information from different sensory modalities. Compared with a single modality, individuals respond faster to target signals presented simultaneously in different modalities. A leading theoretical account of this phenomenon is the coactivation model, which holds that stimuli from different modalities converge and are integrated in specific brain regions, such as the intraparietal sulcus, the superior temporal sulcus, and prefrontal cortex. The integrated signal is stronger and can trigger a response more quickly, but at which stage of cognitive processing the integration occurs remains unresolved. When individuals process task switches between sensory modalities, the cost of a modality-related task switch is smaller than the sum of the cross-modal switch cost and the task switch cost, which provides evidence that modality-related switch costs arise from the inertia of, and interference between, task sets. When switching between unimodal and multimodal conditions, the cross-modal switch cost shrinks or even disappears, because concurrent multisensory integration offsets part of the loss; this phenomenon supports the coactivation model. However, how multisensory integration affects the neural processing of task switching is unclear; future research could combine the multisensory integration paradigm with the classic task-switching paradigm to identify the processing mechanism of cross-modal switching and the stage at which multisensory integration occurs.
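The standard behavioral test that separates coactivation from separate-activation (race) accounts is Miller's race-model inequality, F_AV(t) <= F_A(t) + F_V(t). A minimal sketch with synthetic response-time data (the distributions are invented for illustration):

import numpy as np

rng = np.random.default_rng(0)

def ecdf(sample, t_grid):
    # empirical cumulative distribution evaluated on a time grid
    return (np.asarray(sample)[:, None] <= t_grid).mean(axis=0)

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    # Under separate activation, F_AV(t) <= F_A(t) + F_V(t);
    # positive values indicate violations, i.e. evidence for coactivation.
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Synthetic RTs in ms (hypothetical means/SDs, not from any study)
rt_a = rng.normal(320, 40, 500)
rt_v = rng.normal(340, 40, 500)
rt_av = rng.normal(270, 35, 500)
t_grid = np.linspace(150, 500, 50)
print(race_model_violation(rt_a, rt_v, rt_av, t_grid).max())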

7.
Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that the multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli that are well established at behavioral as well as neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues to pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
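For reference, the sensitivity index d' reported in such oddball tasks is the distance between the z-transformed hit and false-alarm rates; a minimal sketch with made-up rates:

from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    # z(hits) - z(false alarms)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical rates: congruent visual cue vs. auditory-only trials
print(d_prime(0.85, 0.10))  # ~2.32 with the cue
print(d_prime(0.70, 0.10))  # ~1.81 without it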

8.
Efficient navigation of our social world depends on the generation, interpretation, and combination of social signals within different sensory systems. However, the influence of healthy adult aging on multisensory integration of emotional stimuli remains poorly explored. This article comprises 2 studies that directly address issues of age differences on cross-modal emotional matching and explicit identification. The first study compared 25 younger adults (19-40 years) and 25 older adults (60-80 years) on their ability to match cross-modal congruent and incongruent emotional stimuli. The second study looked at performance of 20 younger (19-40) and 20 older adults (60-80) on explicit emotion identification when information was presented congruently in faces and voices or only in faces or in voices. In Study 1, older adults performed as well as younger adults on tasks in which congruent auditory and visual emotional information were presented concurrently, but there were age-related differences in matching incongruent cross-modal information. Results from Study 2 indicated that though older adults were impaired at identifying emotions from 1 modality (faces or voices alone), they benefited from congruent multisensory information as age differences were eliminated. The findings are discussed in relation to social, emotional, and cognitive changes with age.

9.
Chiou R, Rich AN. Perception, 2012, 41(3): 339-353
The brain constantly integrates incoming signals across the senses to form a cohesive view of the world. Most studies on multisensory integration concern the roles of spatial and temporal parameters. However, recent findings suggest cross-modal correspondences (e.g., high-pitched sounds associated with bright, small objects located high up) also affect multisensory integration. Here, we focus on the association between auditory pitch and spatial location. Surprisingly little is known about the cognitive and perceptual roots of this phenomenon, despite its long use in ergonomic design. In a series of experiments, we explore how this cross-modal mapping affects the allocation of attention with an attentional cuing paradigm. Our results demonstrate that high and low tones induce attention shifts to upper or lower locations, depending on pitch height. Furthermore, this pitch-induced cuing effect is susceptible to contextual manipulations and volitional control. These findings suggest the cross-modal interaction between pitch and location originates from an attentional level rather than from response mapping alone. The flexible contextual mapping between pitch and location, as well as its susceptibility to top-down control, suggests the pitch-induced cuing effect is primarily mediated by cognitive processes after initial sensory encoding and occurs at a relatively late stage of voluntary attention orienting.

10.
Involuntary listening aids seeing: evidence from human electrophysiology
It is well known that sensory events of one modality can influence judgments of sensory events in other modalities. For example, people respond more quickly to a target appearing at the location of a previous cue than to a target appearing at another location, even when the two stimuli are from different modalities. Such cross-modal interactions suggest that involuntary spatial attention mechanisms are not entirely modality-specific. In the present study, event-related brain potentials (ERPs) were recorded to elucidate the neural basis and timing of involuntary, cross-modal spatial attention effects. We found that orienting spatial attention to an irrelevant sound modulates the ERP to a subsequent visual target over modality-specific, extrastriate visual cortex, but only after the initial stages of sensory processing are completed. These findings are consistent with the proposal that involuntary spatial attention orienting to auditory and visual stimuli involves shared, or at least linked, brain mechanisms.

11.
In order to determine the spatial location of an object that is simultaneously seen and heard, the brain assigns higher weights to the sensory inputs that provide the most reliable information. For example, in the well-known ventriloquism effect, the perceived location of a sound is shifted toward the location of a concurrent but spatially misaligned visual stimulus. This perceptual illusion can be explained by the usually much higher spatial resolution of the visual system as compared to the auditory system. Recently, it has been demonstrated that this cross-modal binding process is not fully automatic, but can be modulated by emotional learning. Here we tested whether cross-modal binding is similarly affected by motivational factors, as exemplified by reward expectancy. Participants received a monetary reward for precise and accurate localization of brief auditory stimuli. Auditory stimuli were accompanied by task-irrelevant, spatially misaligned visual stimuli. Thus, the participants' motivational goal of maximizing their reward was put in conflict with the spatial bias of auditory localization induced by the ventriloquist situation. Crucially, the amounts of expected reward differed between the two hemifields. As compared to the hemifield associated with a low reward, the ventriloquism effect was reduced in the high-reward hemifield. This finding suggests that reward expectations modulate cross-modal binding processes, possibly mediated via cognitive control mechanisms. The motivational significance of the stimulus material, thus, constitutes an important factor that needs to be considered in the study of top-down influences on multisensory integration.
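The reliability-weighting account sketched above is usually formalized as maximum-likelihood cue combination; in standard notation (the abstract itself does not spell this out), the combined location estimate is

\hat{x} = w_V x_V + w_A x_A, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_A^2}, \qquad w_A = 1 - w_V.

Because visual spatial noise \sigma_V is normally much smaller than auditory noise \sigma_A, w_V approaches 1 and the perceived sound location is pulled toward the visual stimulus; any factor that shifts the effective weights, such as the reward manipulation here, should change the size of the ventriloquism effect.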

12.
Several studies have shown that the direction in which a visual apparent motion stream moves can influence the perceived direction of an auditory apparent motion stream (an effect known as cross-modal dynamic capture). However, little is known about the role that intramodal perceptual grouping processes play in the multisensory integration of motion information. The present study was designed to investigate the time course of any modulation of the cross-modal dynamic capture effect by the nature of the perceptual grouping taking place within vision. Participants were required to judge the direction of an auditory apparent motion stream while trying to ignore visual apparent motion streams presented in a variety of different configurations. Our results demonstrate that the cross-modal dynamic capture effect was influenced more by visual perceptual grouping when the conditions for intramodal perceptual grouping were set up prior to the presentation of the audiovisual apparent motion stimuli. However, no such modulation occurred when the visual perceptual grouping manipulation was established at the same time as or after the presentation of the audiovisual stimuli. These results highlight the importance of the unimodal perceptual organization of sensory information to the manifestation of multisensory integration.

13.
Dyslexia has been associated with a problem in visual-audio integration mechanisms. Here, we investigate for the first time the contribution of unisensory cues to multisensory audiovisual integration in 32 dyslexic children by modelling the results with a Bayesian approach. Non-linguistic stimuli were used. Children performed a temporal task: they had to report whether the middle of three stimuli was closer in time to the first one or to the last one presented. Children with dyslexia, compared with typical children, exhibited poorer unimodal thresholds, requiring greater temporal distance between items for correct judgements, while their multisensory thresholds were well predicted by the Bayesian model. This result suggests that the multisensory deficit in dyslexia is due to impaired audio and visual inputs rather than impaired multisensory processing per se. We also observed that poorer temporal skills correlated with lower reading skills in dyslexic children, suggesting that this temporal capability can be linked to reading abilities.
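The Bayesian benchmark used in studies of this kind predicts the multisensory threshold directly from the unimodal ones; a minimal sketch (the threshold values are invented for illustration):

def predicted_bimodal_threshold(sigma_a, sigma_v):
    # Optimal integration: the combined variance is
    # sigma_a^2 * sigma_v^2 / (sigma_a^2 + sigma_v^2)
    return (sigma_a**2 * sigma_v**2 / (sigma_a**2 + sigma_v**2)) ** 0.5

# Hypothetical unimodal temporal thresholds in ms
print(predicted_bimodal_threshold(120.0, 90.0))  # 72.0, below either unimodal value

The prediction is always below the better unimodal threshold, which is why observed multisensory thresholds matching it argue for intact integration over impaired inputs.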

14.
唐晓雨, 佟佳庚, 于宏, 王爱君. 《心理学报》, 2021, 53(11): 1173-1188
This study used an endogenous-exogenous spatial cue-target paradigm, manipulating three independent variables: endogenous cue validity (valid, invalid), exogenous cue validity (valid, invalid), and target type (visual, auditory, audiovisual). Two experiments of different task difficulty (Experiment 1: a simple localization task; Experiment 2: a more demanding discrimination task) examined how endogenous and exogenous spatial attention affect multisensory integration. Both experiments found that exogenous spatial attention significantly weakened the multisensory integration effect, whereas endogenous spatial attention did not significantly enhance it; Experiment 2 further showed that endogenous spatial attention modulated the weakening of multisensory integration by exogenous spatial attention. The results indicate that, unlike that of endogenous spatial attention, the influence of exogenous spatial attention on multisensory integration is largely insensitive to task difficulty, and that under a demanding task endogenous spatial attention affects the process by which exogenous spatial attention weakens multisensory integration. We therefore infer that the modulation of multisensory integration by endogenous and exogenous spatial attention is not independent but mutually interacting.

15.
Spatial information processing takes place in different brain regions that receive converging inputs from several sensory modalities. Because of our own movements—for example, changes in eye position, head rotations, and so forth—unimodal sensory representations move continuously relative to one another. It is generally assumed that for multisensory integration to be an orderly process, it should take place between stimuli at congruent spatial locations. In the monkey posterior parietal cortex, the ventral intraparietal (VIP) area is specialized for the analysis of movement information using visual, somatosensory, vestibular, and auditory signals. Focusing on the visual and tactile modalities, we found that in area VIP, like in the superior colliculus, multisensory signals interact at the single neuron level, suggesting that this area participates in multisensory integration. Curiously, VIP does not use a single, invariant coordinate system to encode locations within and across sensory modalities. Visual stimuli can be encoded with respect to the eye, the head, or halfway between the two reference frames, whereas tactile stimuli seem to be prevalently encoded relative to the body. Hence, while some multisensory neurons in VIP could encode spatially congruent tactile and visual stimuli independently of current posture, in other neurons this would not be the case. Future work will need to evaluate the implications of these observations for theories of optimal multisensory integration.

16.
康冠兰, 罗霄骁. 《心理科学》, 2020, (5): 1072-1078
Cross-modal information interaction refers to the set of processes by which information from one sensory modality interacts with and influences information from another. It comprises two main aspects: how inputs from different sensory modalities are integrated, and how conflicts between cross-modal information are controlled. This paper reviews the behavioral and neural mechanisms of audiovisual cross-modal integration and conflict control, and discusses how attention influences both. Future work should probe the brain-network mechanisms of audiovisual cross-modal processing and examine cross-modal integration and conflict control in special populations, to help reveal the mechanisms underlying their cognitive and social dysfunction.

17.
In multisensory research, faster responses are commonly observed when multimodal stimuli are presented, as compared to unimodal target presentations. This so-called redundant-signals effect can be explained by several frameworks, including separate-activation and coactivation models. The redundant-signals effect has been investigated in a large number of studies; however, most of those studies have been limited to the rejection of separate-activation models. Coactivation models have been analyzed in only a few studies, primarily using simple response tasks. Here, we investigated the mechanism of multisensory integration underlying go/no-go and choice responses to redundant auditory–visual stimuli. In the present study, the mean and variance of response times, as well as the accuracy rates of go/no-go and choice responses, were used to test a coactivation model based on the linear superposition of diffusion processes (Schwarz, 1994) within two absorbing barriers. The diffusion superposition model accurately describes the means and variances of response times as well as the proportions of correct responses observed in the two tasks. Linear superposition thus seems to be a general principle in the integration of redundant information provided by different sensory channels, and is not restricted to simple responses. The results connect existing theories of multisensory integration with theories on choice behavior.
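A minimal simulation of the superposition idea (all drift and diffusion parameters are assumed, and this Euler scheme is only a sketch of Schwarz's analytical model): because sums of independent Wiener processes are again Wiener processes, the audiovisual channel can be simulated as a single diffusion whose drift is the sum of the unimodal drifts and whose variance is the sum of the unimodal variances.

import numpy as np

rng = np.random.default_rng(0)

def first_passage(mu, sigma, upper=1.0, lower=-1.0, dt=1e-3, t_max=5.0):
    # Euler simulation of a drift-diffusion process between two
    # absorbing barriers; returns (decision time, correct?)
    x, t = 0.0, 0.0
    while t < t_max:
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= upper:
            return t, True
        if x <= lower:
            return t, False
    return t_max, False

MU_A, SIG_A = 0.8, 1.0   # assumed auditory drift/diffusion
MU_V, SIG_V = 0.6, 1.0   # assumed visual drift/diffusion

# Superposed audiovisual process: drifts add, variances add.
trials = [first_passage(MU_A + MU_V, np.sqrt(SIG_A**2 + SIG_V**2))
          for _ in range(2000)]
print(np.mean([t for t, _ in trials]),      # mean decision time
      np.mean([ok for _, ok in trials]))    # proportion correct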

18.
We investigated whether the "unity assumption," according to which an observer assumes that two different sensory signals refer to the same underlying multisensory event, influences the multisensory integration of audiovisual speech stimuli. Syllables (Experiments 1, 3, and 4) or words (Experiment 2) were presented to participants at a range of different stimulus onset asynchronies using the method of constant stimuli. Participants made unspeeded temporal order judgments regarding which stream (either auditory or visual) had been presented first. The auditory and visual speech stimuli in Experiments 1-3 were either gender matched (i.e., a female face presented together with a female voice) or else gender mismatched (i.e., a female face presented together with a male voice). In Experiment 4, different utterances from the same female speaker were used to generate the matched and mismatched speech video clips. Measured in terms of the just noticeable difference, participants in all four experiments found it easier to judge which sensory modality had been presented first when evaluating mismatched stimuli than when evaluating matched speech stimuli. These results therefore provide the first empirical support for the "unity assumption" in the domain of multisensory temporal integration of audiovisual speech stimuli.
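The just noticeable difference in such temporal order judgments is conventionally read off a cumulative-Gaussian psychometric function fitted to the proportion of "vision first" responses across stimulus onset asynchronies; a minimal sketch with invented data:

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical TOJ data: SOA in ms (negative = auditory first) and
# proportion of "visual first" responses; not from the paper.
soa = np.array([-240, -120, -60, 0, 60, 120, 240], float)
p_vis_first = np.array([0.05, 0.15, 0.35, 0.55, 0.70, 0.90, 0.97])

def psychometric(x, pss, sigma):
    # cumulative Gaussian: PSS = 50% point, sigma = slope parameter
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soa, p_vis_first, p0=(0.0, 80.0))
jnd = sigma * norm.ppf(0.75)   # 75% point relative to the PSS
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")

A larger JND for matched stimuli means observers needed a larger asynchrony to report order correctly, i.e. the matched streams were more tightly bound in time.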

19.
The study investigates the cross-modal simultaneous processing of emotional tone of voice and emotional facial expression with event-related potentials (ERPs), using a wide range of emotions (happiness, sadness, fear, anger, surprise, and disgust). Auditory emotional stimuli (a neutral word pronounced in an affective tone) and visual patterns (emotional facial expressions) were matched in congruous (the same emotion in face and voice) and incongruous (different emotions) pairs. Subjects (N=31) were required to watch and listen to the stimuli in order to comprehend them. Repeated-measures ANOVAs revealed a positive ERP deflection (P2) with a more posterior distribution. This P2 effect may represent a marker of cross-modal integration, modulated as a function of the congruous/incongruous condition: it shows a larger peak in response to congruous stimuli than to incongruous ones. It is suggested that P2 can be a cognitive marker of multisensory processing, independent of the emotional content.

20.
Tracking an audio-visual target involves integrating spatial cues about target position from both modalities. Such sensory cue integration is a developmental process in the brain involving learning, with neuroplasticity as its underlying mechanism. We present a Hebbian learning-based adaptive neural circuit for multi-modal cue integration. The circuit temporally correlates stimulus cues within each modality via intramodal learning as well as symmetrically across modalities via crossmodal learning to independently update modality-specific neural weights on a sample-by-sample basis. It is realised as a robotic agent that must orient towards a moving audio-visual target. It continuously learns the best possible weights required for a weighted combination of auditory and visual spatial target directional cues that is directly mapped to robot wheel velocities to elicit an orientation response. Visual directional cues are noise-free and continuous but arise from a relatively narrow receptive field, while auditory directional cues are noisy and intermittent but arise from a relatively wide receptive field. Comparative trials in simulation demonstrate that concurrent intramodal learning improves both the overall accuracy and precision of the orientation responses produced by symmetric crossmodal learning. We also demonstrate that symmetric crossmodal learning improves multisensory responses compared with asymmetric crossmodal learning. The neural circuit also exhibits multisensory effects such as sub-additivity, additivity, and super-additivity.
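A toy version of the sample-by-sample idea (the Oja-style decay term, the noise levels, and the learning rate below are my assumptions, not the paper's circuit): each modality-specific weight grows with the correlation between its own cue and the other modality's cue and shrinks with its own cue energy, so the weight on the noisier auditory cue settles lower.

import numpy as np

rng = np.random.default_rng(1)
ETA = 0.02            # assumed learning rate
w_v, w_a = 0.5, 0.5   # modality-specific weights, updated independently

for _ in range(5000):
    target = rng.uniform(-1.0, 1.0)          # true target direction
    cue_v = target + rng.normal(0.0, 0.05)   # near noise-free, narrow visual cue
    cue_a = target + rng.normal(0.0, 0.30)   # noisy, wide auditory cue
    # Symmetric crossmodal Hebbian update with an Oja-style decay:
    # correlation with the other modality's cue minus own cue energy.
    w_v += ETA * (cue_v * cue_a - w_v * cue_v**2)
    w_a += ETA * (cue_a * cue_v - w_a * cue_a**2)

# Orientation command for a new target, as would be mapped to wheel velocities
target = 0.4
cue_v = target + rng.normal(0.0, 0.05)
cue_a = target + rng.normal(0.0, 0.30)
turn = (w_v * cue_v + w_a * cue_a) / (w_v + w_a)
print(w_v, w_a, turn)   # visual weight settles above auditory weight

At the fixed point each weight equals the cue covariance divided by that cue's own variance, so the reliable visual cue ends up dominating the combined orientation response.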
