Similar articles
19 similar articles found (search time: 140 ms)
1.
Audiovisual integration refers to the process by which the visual and auditory systems tend to combine visual and auditory signals when those signals are roughly proximate in time and space. Mismatch negativity (MMN), a component reflecting early processing in the brain, indexes the neural mismatch between a deviant sensory input and the sensory memory trace. Research using MMN as a probe of audiovisual integration has mainly addressed letter-speech sound integration in reading, prosodic information, and the McGurk effect, as well as the competing and complementary relations in crossmodal audiovisual integration. Future research should focus on crossmodal integration involving other sensory modalities and should extend the paradigms used to elicit the MMN.

2.
The brain can process and integrate information from different sensory modalities. Compared with a single modality, individuals respond faster to target signals presented simultaneously in different modalities. A leading explanation for this phenomenon is the coactivation model, which holds that stimuli from different modalities converge and are integrated in specific brain regions, such as the intraparietal sulcus, the superior temporal sulcus, and prefrontal cortex. The integrated signal is stronger and can trigger a response more quickly, but at which stage of cognitive processing the integration occurs remains unsettled. When individuals process task switches between sensory modalities, the modality-related switch cost is smaller than the sum of the cross-modality switch cost and the task switch cost, which supports the view that modality-related switch costs arise from inertia of, and interference with, task sets. When switching between unimodal and multimodal conditions, the cross-modality switch cost shrinks or even disappears, because concurrent multisensory integration offsets part of the cost; this phenomenon supports the coactivation model. However, how multisensory integration affects the neural processing of task switching remains unclear. Future studies could combine multisensory integration paradigms with classic task-switching paradigms to identify the mechanism of cross-modality switching and the stage at which multisensory integration occurs.
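The coactivation account summarized above is conventionally tested against the alternative of a parallel, independent race using Miller's race-model inequality: under an independent race, the redundant-target CDF can never exceed the sum of the unimodal CDFs. A minimal sketch with simulated reaction times (the data and parameters below are hypothetical, not from the study):

```python
import numpy as np

def empirical_cdf(rts, t_grid):
    """Proportion of reaction times at or below each time point."""
    return np.mean(np.asarray(rts)[:, None] <= t_grid, axis=0)

def race_model_bound(rt_a, rt_v, t_grid):
    """Upper bound on the redundant-target CDF under an independent race:
    F_AV(t) <= F_A(t) + F_V(t), capped at 1."""
    return np.minimum(empirical_cdf(rt_a, t_grid) + empirical_cdf(rt_v, t_grid), 1.0)

rng = np.random.default_rng(0)
rt_a = rng.normal(420, 50, 1000)   # simulated auditory-only RTs (ms)
rt_v = rng.normal(400, 50, 1000)   # simulated visual-only RTs (ms)
rt_av = rng.normal(330, 40, 1000)  # simulated redundant audiovisual RTs

t = np.linspace(200, 600, 81)
# A violation anywhere on the grid is taken as evidence for coactivation
violation = np.any(empirical_cdf(rt_av, t) > race_model_bound(rt_a, rt_v, t))
```

With the strong facilitation simulated here, the audiovisual CDF exceeds the race-model bound at early time points, the pattern the coactivation model predicts.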

3.
Statistically optimal integration theory holds that, during multisensory integration, the brain combines information from multiple sensory modalities into a unified percept by weighted averaging, with each modality's weight determined by the reliability of its information. Several recent behavioral studies have shown that prior knowledge about the reliability of a modality's estimates can likewise affect the weight given to that modality during integration. These studies, however, could not determine whether the influence of prior knowledge on multisensory integration occurs at the perceptual stage or the decision stage of cognitive processing. The present study addressed this question. In the experiment, letters of two colors were assigned different probabilities of audiovisual congruency (high vs. low), and reaction times to audiovisually congruent stimuli were measured and analyzed at each probability level. The data showed that audiovisual congruency probability modulated reaction times to congruent stimuli. This result indicates that prior knowledge about modality reliability influences multisensory integration at the early, perceptual stage of processing.
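The reliability-weighted averaging rule described in this abstract can be sketched directly: each modality's weight is its inverse variance, normalized, and the fused estimate has lower variance than either cue alone. The numbers below are hypothetical illustrations:

```python
def optimal_integration(est_v, var_v, est_a, var_a):
    """Maximum-likelihood combination of a visual and an auditory estimate.
    Weights are normalized inverse variances (reliabilities)."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    w_a = 1.0 - w_v
    combined = w_v * est_v + w_a * est_a
    combined_var = 1.0 / (1.0 / var_v + 1.0 / var_a)  # <= min(var_v, var_a)
    return combined, combined_var

# Vision four times more reliable than audition: the fused estimate
# lies much closer to the visual cue.
loc, var = optimal_integration(est_v=10.0, var_v=1.0, est_a=14.0, var_a=4.0)
# loc = 10.8 (visual weight 0.8), var = 0.8
```

The key property is that the combined variance is always below that of the more reliable single cue, which is why integration is called statistically optimal.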

4.
Audiovisual temporal integration refers to the process by which individuals represent visual and auditory stimuli arriving within a certain temporal interval; it is a key mechanism of audiovisual integration. Individuals with autism spectrum disorder show deficits in audiovisual temporal integration, chiefly in four respects: their audiovisual temporal binding window is wider and more symmetric than that of typically developing individuals; their rapid audiovisual temporal recalibration is insufficient; auditory temporal cues only weakly facilitate their visual search; and their audiovisual temporal-order perception for speech stimuli is less sensitive. Current research tasks are diverse: the sound-induced flash illusion and the "pip-pop" task probe the temporal mechanisms of audiovisual integration implicitly, while simultaneity judgment, temporal-order judgment, and preferential looking tasks are mainly used to study crossmodal temporal-order perception. Relevant theories explain these deficits in terms of atypical neural processing, insufficient prior experience, and interactions between the visual and auditory modalities. Future work should improve ecological validity, integrate theoretical accounts, quantify diagnostic indices precisely, and develop effective interventions.

5.
Using a 2 × 3 within-subjects design with attention condition and target stimulus type as variables, this study examined how attention directed to different sensory modalities affects audiovisual semantic integration. The results showed that only when attending to visual and auditory stimuli simultaneously did participants respond fastest to semantically congruent audiovisual stimuli, i.e., a redundant signal effect emerged. When selectively attending to a single modality, semantically congruent audiovisual stimuli showed no processing advantage. Further analysis showed that the advantage for semantically congruent audiovisual stimuli under divided audiovisual attention arose from integration of their visual and auditory components. That is, only under simultaneous attention to vision and audition were semantically congruent audiovisual stimuli integrated; semantically incongruent stimuli were not. Under selective attention to a single modality, audiovisual stimuli were not integrated regardless of semantic congruency.

6.
The McGurk effect is a classic audiovisual integration phenomenon, influenced by the physical features of the stimuli, the allocation of attention, individuals' reliance on visual versus auditory information, audiovisual integration ability, and linguistic and cultural differences. The key visual information driving the McGurk effect comes mainly from the speaker's mouth region. The cognitive process underlying the effect comprises early audiovisual integration (associated with superior temporal cortex) and later audiovisual incongruency conflict (associated with inferior frontal cortex). Future research should examine how social information in faces affects the McGurk effect, the relation between unimodal processing and audiovisual integration within the effect, and its cognitive and neural mechanisms using computational models.

7.
Participants judged the relation between the emotional valences of simultaneously presented visual and auditory information, to examine how audiovisual emotional information is integrated. In Experiment 1, lexical valence and prosodic valence did not conflict; in Experiment 2 they did. Both experiments found that when the facial expression was positive, participants judged the relation between the audiovisual emotional signals more accurately. Experiment 2 further found that when the facial expression was negative, participants judged the audiovisual relation faster on the basis of semantic cues than of prosodic cues. These results suggest that when audiovisual information is presented simultaneously, visual information may be processed first and then influence subsequent processing of the audiovisual relation.

8.
张亮  孙向红  张侃 《心理科学进展》2009,17(6):1133-1138
In natural environments, human emotional information is communicated through multiple sensory channels, and multimodal integration is the basis of emotion processing. Recent behavioral, electrophysiological, and neuroimaging studies show that emotional information is integrated across modalities automatically, at an early stage of cognitive processing, and that this integration is closely associated with the superior temporal gyrus, middle temporal gyrus, parahippocampal gyrus, and thalamus. The integration of different emotions shares a common neural basis while also recruiting emotion-specific regions. The integration mechanism may further depend on processing type and attentional resources. In future research, standardized, dynamic, and naturalistic experimental designs should improve accuracy and comparability across studies, while studies of special populations, and of emotion processing together with attention and other cognitive processes, should help uncover the neural mechanisms of multimodal integration.

9.
Temporal recalibration in multisensory integration   (cited by 1: 0 self-citations, 1 by others)
Temporal synchrony of crossmodal stimuli is a necessary condition for multisensory integration, but because physical transmission and neural conduction times differ across modalities, the signals are not perfectly matched in time. Temporal recalibration refers to the brain's ability to adapt to short temporal lags between crossmodal stimuli, reflecting the plasticity of multisensory integration in the temporal dimension: after adaptation to sequentially presented crossmodal stimuli, the point of subjective simultaneity shifts toward the adapted lag. This paper reviews the modality effects and potential mechanisms of temporal recalibration, its initial processing stage, its relation to the processing of stimulus content, and its main influencing factors. Future research should examine whether temporal recalibration can occur at early processing stages, test whether its cognitive process is bidirectional, explore the role of spatial selective attention, and, in combination with studies of neural mechanisms, build a more complete theoretical account from an integrative perspective.

10.
Previous research on facial attractiveness judgments has focused mostly on visual information and neglected the role of non-visual information, yet existing studies have confirmed that facial attractiveness judgments involve interactions among different sensory signals and are integrated crossmodally. Building on prior work and combining the face-space model with the Bayesian causal inference model, we propose that during the crossmodal integration underlying attractiveness judgments, when an individual infers from the sensory input and a stored standard face that different sensory signals come from the same target face, those signals are naturally integrated in the brain into a unified representation of the target face, on which attractiveness is then judged. Future work could embed faces in richer environments to examine crossmodal integration of multiple sensory signals, probe the boundary conditions of crossmodal integration and crossmodal integration in social interaction, and build a more systematic crossmodal-integration model of facial attractiveness.

11.
Inhibition of return (IOR) and emotional stimuli both guide attentional biases and improve search efficiency, but whether the two interact has so far remained unclear. This study used a cue-target paradigm with emotional stimuli presented audiovisually to examine the interaction between emotional-stimulus processing and IOR. In Experiment 1, emotional stimuli were presented either as unimodal visual faces or as congruent audiovisual pairs. Experiment 2 presented emotionally incongruent audiovisual stimuli to test whether the effect of congruent audiovisual emotion on IOR was driven by the emotionally congruent auditory input, i.e., whether the auditory emotional stimulus was processed at all. The results showed that congruent audiovisual emotional stimuli weakened IOR, whereas incongruent stimuli did not interact with IOR, and IOR did not differ significantly between unimodal and bimodal conditions. These findings indicate that only emotionally congruent audiovisual stimuli affect IOR at the same processing stage, further supporting the perceptual inhibition account of IOR.

12.
When participants judge multimodal audiovisual stimuli, the auditory information strongly dominates temporal judgments, whereas the visual information dominates spatial judgments. However, temporal judgments are not independent of spatial features. For example, in the kappa effect, the time interval between two marker stimuli appears longer when they originate from spatially distant sources rather than from the same source. We investigated the kappa effect for auditory markers presented with accompanying irrelevant visual stimuli. The spatial sources of the markers were varied such that they were either congruent or incongruent across modalities. In two experiments, we demonstrated that the spatial layout of the visual stimuli affected perceived auditory interval duration. This effect occurred although the visual stimuli were designated to be task-irrelevant for the duration reproduction task in Experiment 1, and even when the visual stimuli did not contain sufficient temporal information to perform a two-interval comparison task in Experiment 2. We conclude that the visual and auditory marker stimuli were integrated into a combined multisensory percept containing temporal as well as task-irrelevant spatial aspects of the stimulation. Through this multisensory integration process, visuospatial information affected even temporal judgments, which are typically dominated by the auditory modality.

13.
This study used a spatial task-switching paradigm and manipulated the salience of visual and auditory stimuli to examine the influence of bottom-up attention on the visual dominance effect. The results showed that stimulus salience significantly affected visual dominance: in Experiment 1, visual dominance was markedly weakened when the auditory stimulus was highly salient; in Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was low in salience, visual dominance was further weakened but still present. The results support biased competition theory: in crossmodal audiovisual interaction, visual stimuli are more salient and therefore enjoy a processing advantage during multisensory integration.

14.
The sound-induced flash illusion is a classic audiovisual integration illusion: when visual flashes are presented together with an unequal number of auditory beeps within an interval of about 100 ms, observers perceive the number of flashes as equal to the number of beeps. Its influencing factors include within-participant factors, both bottom-up and top-down, as well as between-participant factors such as the degree of reliance on visual versus auditory input, the development of audiovisual integration, and perceptual sensitivity to audiovisual stimuli. The illusion arises mainly at early processing stages and involves multiple cortical and subcortical regions. Future research should examine how attention, reward, and the mode of audiovisual integration affect the sound-induced flash illusion, investigate its effects on memory and learning, and probe its cognitive and neural mechanisms by combining computational models with neuroscientific methods.

15.
Spatial information processing takes place in different brain regions that receive converging inputs from several sensory modalities. Because of our own movements—for example, changes in eye position, head rotations, and so forth—unimodal sensory representations move continuously relative to one another. It is generally assumed that for multisensory integration to be an orderly process, it should take place between stimuli at congruent spatial locations. In the monkey posterior parietal cortex, the ventral intraparietal (VIP) area is specialized for the analysis of movement information using visual, somatosensory, vestibular, and auditory signals. Focusing on the visual and tactile modalities, we found that in area VIP, like in the superior colliculus, multisensory signals interact at the single neuron level, suggesting that this area participates in multisensory integration. Curiously, VIP does not use a single, invariant coordinate system to encode locations within and across sensory modalities. Visual stimuli can be encoded with respect to the eye, the head, or halfway between the two reference frames, whereas tactile stimuli seem to be prevalently encoded relative to the body. Hence, while some multisensory neurons in VIP could encode spatially congruent tactile and visual stimuli independently of current posture, in other neurons this would not be the case. Future work will need to evaluate the implications of these observations for theories of optimal multisensory integration.
Edited by: Marie-Hélène Giard and Mark Wallace

16.
The question of how vision and audition interact in natural object identification is currently a matter of debate. We developed a large set of auditory and visual stimuli representing natural objects in order to facilitate research in the field of multisensory processing. Normative data was obtained for 270 brief environmental sounds and 320 visual object stimuli. Each stimulus was named, categorized, and rated with regard to familiarity and emotional valence by N=56 participants (Study 1). This multimodal stimulus set was employed in two subsequent crossmodal priming experiments that used semantically congruent and incongruent stimulus pairs in a S1-S2 paradigm. Task-relevant targets were either auditory (Study 2) or visual stimuli (Study 3). The behavioral data of both experiments expressed a crossmodal priming effect with shorter reaction times for congruent as compared to incongruent stimulus pairs. The observed facilitation effect suggests that object identification in one modality is influenced by input from another modality. This result implicates that congruent visual and auditory stimulus pairs were perceived as the same object and demonstrates a first validation of the multimodal stimulus set.

17.
Previous studies of multisensory integration have often stressed the beneficial effects that may arise when information concerning an event arrives via different sensory modalities at the same time, as, for example, exemplified by research on the redundant target effect (RTE). By contrast, studies of the Colavita visual dominance effect (e.g., [Colavita, F. B. (1974). Human sensory dominance. Perception & Psychophysics, 16, 409–412]) highlight the inhibitory consequences of the competition between signals presented simultaneously in different sensory modalities instead. Although both the RTE and the Colavita effect are thought to occur at early sensory levels and the stimulus conditions under which they are typically observed are very similar, the interplay between these two opposing behavioural phenomena (facilitation vs. competition) has yet to be addressed empirically. We hypothesized that the dissociation may reflect two of the fundamentally different ways in which humans can perceive concurrent auditory and visual stimuli. In Experiment 1, we demonstrated both multisensory facilitation (RTE) and the Colavita visual dominance effect using exactly the same audiovisual displays, by simply changing the task from a speeded detection task to a speeded modality discrimination task. Meanwhile, in Experiment 2, the participants exhibited multisensory facilitation when responding to visual targets and multisensory inhibition when responding to auditory targets while keeping the task constant. These results therefore indicate that both multisensory facilitation and inhibition can be demonstrated in reaction to the same bimodal event.

18.
Multisensory integration is a process whereby information converges from different sensory modalities to produce a response that is different from that elicited by the individual modalities presented alone. A neural basis for multisensory integration has been identified within a variety of brain regions, but the most thoroughly examined model has been that of the superior colliculus (SC). Multisensory processing in the SC of anaesthetized animals has been shown to be dependent on the physical parameters of the individual stimuli presented (e.g., intensity, direction, velocity) as well as their spatial relationship. However, it is unknown whether these stimulus features are important, or evident, in the awake behaving animal. To address this question, we evaluated the influence of physical properties of sensory stimuli (visual intensity, direction, and velocity; auditory intensity and location) on sensory activity and multisensory integration of SC neurons in awake, behaving primates. Monkeys were trained to fixate a central visual fixation point while visual and/or auditory stimuli were presented in the periphery. Visual stimuli were always presented within the contralateral receptive field of the neuron whereas auditory stimuli were presented at either ipsi- or contralateral locations. Many of the SC neurons responsive to these sensory stimuli (n = 66/84; 76%) had stronger responses when the visual and auditory stimuli were combined at contralateral locations than when the auditory stimulus was located on the ipsilateral side. This trend was significant across the population of auditory-responsive neurons. In addition, some SC neurons (n = 31) were presented a battery of tests in which the quality of one stimulus of a pair was systematically manipulated. A small proportion of these neurons (n = 8/31; 26%) showed preferential responses to stimuli with specific physical properties, and these preferences were not significantly altered when multisensory stimulus combinations were presented. These data demonstrate that multisensory processing in the awake behaving primate is influenced by the spatial congruency of the stimuli as well as their individual physical properties.

19.
We propose a measure of audiovisual speech integration that takes into account accuracy and response times. This measure should prove beneficial for researchers investigating multisensory speech recognition, since it relates to normal-hearing and aging populations. As an example, age-related sensory decline influences both the rate at which one processes information and the ability to utilize cues from different sensory modalities. Our function assesses integration when both auditory and visual information are available, by comparing performance on these audiovisual trials with theoretical predictions for performance under the assumptions of parallel, independent self-terminating processing of single-modality inputs. We provide example data from an audiovisual identification experiment and discuss applications for measuring audiovisual integration skills across the life span.
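The benchmark logic described in this abstract can be sketched for accuracy alone: under parallel, independent, self-terminating processing, either channel can succeed on its own, so predicted audiovisual accuracy is p_a + p_v − p_a·p_v, and observed accuracy above that benchmark suggests genuine integration. This is a simplified illustration, not the authors' full accuracy-and-RT measure, and all numbers are hypothetical:

```python
def independent_race_prediction(p_a: float, p_v: float) -> float:
    """Predicted audiovisual accuracy if either channel alone can succeed."""
    return p_a + p_v - p_a * p_v

def integration_gain(p_av_observed: float, p_a: float, p_v: float) -> float:
    """Observed audiovisual accuracy minus the independent-processing benchmark.
    Positive values indicate performance beyond parallel independent channels."""
    return p_av_observed - independent_race_prediction(p_a, p_v)

# Hypothetical data: auditory-only 60% correct, visual-only 50% correct
pred = independent_race_prediction(0.60, 0.50)  # 0.80
gain = integration_gain(0.88, 0.60, 0.50)       # positive -> integration benefit
```

The full measure proposed in the paper additionally folds in response times, so that speed-accuracy trade-offs (relevant for aging populations) do not masquerade as integration gains or losses.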


Copyright©北京勤云科技发展有限公司  京ICP备09084417号