Similar Documents
19 similar documents found (search time: 500 ms)
1.
The mismatch negativity (MMN) is a negative difference wave obtained by subtracting the ERP elicited by high-probability (standard) stimuli from the ERP elicited by low-probability (deviant) stimuli; it can be obtained with the oddball paradigm and the multi-feature paradigm. Recent studies have found that the MMN exists not only in the auditory modality but in other modalities as well, and that its amplitude indexes automatic processing while its latency indexes controlled processing. The MMN may reflect the same or similar processing as the error-related negativity (ERN), but the view that the MMN reflects social-cognitive processing requires further study. Future research should combine techniques of high temporal and spatial resolution to study complex social stimuli at both the behavioral and the cognitive-neuroscience level, so as to clarify what kind of social-cognitive processing the MMN may reflect, and on that basis to strengthen comparative research on the MMN and the ERN.
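The deviant-minus-standard subtraction that defines the MMN, and the peak amplitude/latency measures mentioned in this abstract, can be sketched numerically. This is a minimal illustration on synthetic averaged ERPs; the sampling rate, waveform shapes, and the 100-250 ms search window are assumptions for illustration, not parameters from any cited study.

```python
import numpy as np

# Synthetic averaged ERPs (microvolts), sampled at 1000 Hz from -100 to 400 ms.
t = np.arange(-100, 400) / 1000.0  # time in seconds

# Hypothetical waveforms: the deviant (rare) response carries an extra
# negative deflection peaking near 150 ms, mimicking an MMN.
standard = 2.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.03 ** 2))
deviant = standard - 3.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.04 ** 2))

# MMN difference wave: deviant ERP minus standard ERP.
mmn = deviant - standard

# Peak amplitude and latency within a typical 100-250 ms search window.
win = (t >= 0.100) & (t <= 0.250)
peak_idx = np.argmin(mmn[win])        # the MMN is a *negative* deflection
peak_amp = mmn[win][peak_idx]         # microvolts
peak_lat = t[win][peak_idx] * 1000    # milliseconds

print(f"MMN peak: {peak_amp:.2f} uV at {peak_lat:.0f} ms")
```

With these synthetic waveforms the recovered peak sits at the simulated deflection (about -3 uV near 150 ms); on real data the same subtraction would be applied to averaged standard and deviant epochs.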

2.
How does the human brain automatically process rapidly changing emotional information? Building on research on the auditory mismatch negativity (MMN), researchers have developed the expression mismatch negativity (EMMN) as an important index of the pre-attentive processing of visual emotional information. Unlike the visual mismatch negativity (vMMN) elicited by general visual cues, EMMN research focuses specifically on the brain's automatic processing of emotional information. Current studies mainly examine EMMN differences across facial-expression types, genders, and individuals of high versus low fluid intelligence, as well as the EMMN characteristics of atypical groups such as individuals with autism, depression, and schizophrenia. The mechanism of the EMMN has also been interpreted within the predictive-coding framework. Future research should focus on applying the EMMN in clinical diagnosis and treatment, examine its characteristics for different emotional cues, and further uncover its neural mechanisms.

3.
Using a 2 × 3 within-subjects design with attention condition and target-stimulus type as factors, this study examined how attention directed to different sensory modalities affects audiovisual semantic integration. Results showed that only when participants attended to both visual and auditory stimuli did they respond fastest to semantically congruent audiovisual stimuli, i.e., a redundant-signals effect emerged. When attention was selectively directed to a single modality, semantically congruent audiovisual stimuli showed no processing advantage. Further analysis showed that the processing advantage for semantically congruent audiovisual stimuli under divided audiovisual attention arose from integration of their visual and auditory components. That is, only under divided attention to both modalities were semantically congruent audiovisual stimuli integrated; semantically incongruent stimuli were not. Under selective attention to a single modality, audiovisual stimuli were not integrated regardless of semantic congruency.

4.
This study examined the influence of preparatory effects on emotional audiovisual integration in the temporal and the emotional-cognition dimensions. In a duration-discrimination task (Experiment 1), visual leading was significantly slower than auditory leading, and the integration effect size was negative. In an emotion-discrimination task (Experiment 2), the integration effect size was positive; for negative emotions, auditory leading produced a larger integration effect than visual leading, whereas for positive emotions, visual leading produced a larger effect than auditory leading. The findings indicate that emotional audiovisual integration rests on emotional-cognitive processing, while duration discrimination suppresses integration; moreover, both the cross-modal preparatory effect and the emotional preparatory effect depend on the leading modality.

5.
To examine the mechanisms of audiovisual music-emotion processing and how emotion type and musical training affect them, this study used videos of musical performances expressing happiness and sadness, and compared musicians and non-musicians on the speed, accuracy, and rated intensity of emotion judgments under auditory-only, visual-only, and audiovisual conditions. Results showed that: (1) the audiovisual condition differed significantly from the visual-only condition but not from the auditory-only condition; (2) non-musicians rated sadness more accurately than musicians but rated happiness less accurately. These findings suggest that the audiovisual integration advantage in music-emotion processing exists only relative to the visual-only condition; that non-musicians are more sensitive to changes in visual emotional information while musicians rely more on musical experience; and that adding congruent visual emotional information to musical performances may help listeners without musical training.

6.
Using simple figures as visual stimuli and short pure tones as auditory stimuli, this study examined the effect of attention on multisensory integration by instructing participants to attend to different modalities (visual, auditory, or both) so as to induce different attentional states (selective versus divided attention). Participants responded fastest and most accurately to bimodal targets only under divided attention. Race-model analysis showed that this processing advantage for bimodal targets arose from integration of the audiovisual stimuli. These results indicate that multisensory integration occurs only under divided attention.
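The race-model analysis mentioned in this abstract tests whether bimodal responses are faster than a parallel "race" between the two unimodal processes could produce (Miller's race-model inequality). A minimal sketch on made-up RT samples follows; the distributions, sample sizes, and time grid are all illustrative assumptions, not values from the study.

```python
import numpy as np

def ecdf(samples, t):
    """Empirical CDF of RT samples evaluated at each time in t."""
    samples = np.sort(samples)
    return np.searchsorted(samples, t, side="right") / len(samples)

rng = np.random.default_rng(0)

# Hypothetical RT samples (ms) for visual-only, auditory-only, and
# redundant (audiovisual) targets; bimodal RTs are made clearly faster.
rt_v = rng.normal(420, 40, 200)
rt_a = rng.normal(400, 40, 200)
rt_av = rng.normal(330, 35, 200)

# Miller's race-model inequality: for every t,
#   P(RT_av <= t)  <=  P(RT_v <= t) + P(RT_a <= t).
# Positive values of `violation` mean the bimodal CDF exceeds what a
# race between independent unimodal channels could explain,
# i.e., evidence for genuine multisensory integration.
t_grid = np.linspace(250, 500, 26)
violation = ecdf(rt_av, t_grid) - (ecdf(rt_v, t_grid) + ecdf(rt_a, t_grid))

print("max violation:", violation.max())
print("race model violated:", bool((violation > 0).any()))
```

In practice the test is usually run on per-participant RT quantiles with a statistical test across participants; this sketch only shows the inequality itself.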

7.
Audiovisual temporal integration refers to the process by which individuals represent audiovisual stimuli arriving within a certain temporal interval, and it is a key mechanism of audiovisual integration. Individuals with autism spectrum disorder show deficits in audiovisual temporal integration, mainly in four respects: a wider and more symmetrical temporal binding window than in typically developing individuals; insufficient rapid audiovisual temporal recalibration; weaker facilitation of visual search by auditory temporal cues; and lower sensitivity to audiovisual temporal order for speech stimuli. The research tasks in use are diverse: the sound-induced flash illusion and the "pip-and-pop" task probe the temporal mechanisms of audiovisual integration implicitly, while simultaneity-judgment, temporal-order-judgment, and preferential-looking tasks are mainly used to study cross-modal temporal-order perception. Relevant theories explain these deficits in terms of atypical neural processing, insufficient prior experience, and interactions between the visual and auditory modalities. Future research should improve ecological validity, integrate theoretical accounts, quantify diagnostic indices precisely, and develop effective intervention strategies.

8.
Using a spatial task-switching paradigm and controlling the salience of the visual and auditory stimuli, this study examined the influence of bottom-up attention on the visual dominance effect. Results showed that stimulus salience significantly modulated visual dominance: in Experiment 1, visual dominance was markedly reduced when the auditory stimulus was highly salient; in Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was of low salience, visual dominance was further reduced but still present. The results support the biased-competition account: in cross-modal audiovisual interaction, visual stimuli are more salient and thus enjoy a processing advantage in multisensory integration.

9.
When people receive information from different sensory modalities, it is generally first processed separately in distinct brain regions and then integrated in multisensory areas. Neuroimaging studies of audiovisual integration in speech perception have shown that visual and auditory information can influence each other; the key region for their integration is the left posterior superior temporal sulcus, and the integration effect is constrained by temporal and spatial factors. Future research should develop more appropriate experimental paradigms and data-analysis methods to examine the brain mechanisms of integration, and extend multisensory research to more complex domains.

10.
An Experimental Study of the Processing Mechanisms of Temporal-Order Information and Their Modality Effects
王振勇, 黄希庭 《心理学报》 (Acta Psychologica Sinica), 1996, 29(4): 345-351
This research comprised two experiments. Experiment 1 used purely visual and auditory presentation, with line lengths and tone frequencies as materials, to examine the processing mode of temporal-order information (automatic versus controlled processing), its encoding characteristics (when temporal-order information is encoded), and modality effects. Experiment 2 repeated the procedure of Experiment 1 with Chinese characters as materials. The results showed that the processing of temporal-order information exhibits a visual-auditory modality effect, whose mechanism originates in memory and depends on the depth and mode of processing of the materials; that temporal-order processing involves both automatic and controlled processing, with visual stimuli tending toward automatic processing and auditory stimuli toward controlled processing; and that temporal-order information is encoded when the items are learned rather than constructed at retrieval.

11.
康冠兰, 罗霄骁 《心理科学》 (Journal of Psychological Science), 2020, (5): 1072-1078
Cross-modal information interaction refers to the set of processes by which information from one sensory modality interacts with, and influences, information from another. It involves two main questions: how inputs from different sensory modalities are integrated, and how cross-modal conflict is controlled. This paper reviews the behavioral and neural mechanisms of audiovisual cross-modal integration and conflict control, and discusses how attention affects both. Future work should investigate the brain-network mechanisms of audiovisual cross-modal processing, and examine cross-modal integration and conflict control in special populations to help reveal the mechanisms underlying their cognitive and social impairments.

12.
The McGurk effect is a classic audiovisual integration phenomenon, modulated by the physical features of the stimuli, attention allocation, individual reliance on auditory versus visual information, audiovisual integration ability, and linguistic and cultural differences. The key visual information driving the McGurk effect comes mainly from the speaker's mouth region. The cognitive process underlying the effect involves early audiovisual integration (associated with the superior temporal cortex) and later audiovisual-incongruity conflict (associated with the inferior frontal cortex). Future research should examine the influence of social information in faces on the McGurk effect, the relation between unimodal processing and audiovisual integration within the effect, and its cognitive and neural mechanisms using computational models.

13.
Congruent information conveyed over different sensory modalities often facilitates a variety of cognitive processes, including speech perception (Sumby & Pollack, 1954). Since auditory processing is substantially faster than visual processing, auditory-visual integration can occur over a surprisingly wide temporal window (Stein, 1998). We investigated the processing architecture mediating the integration of acoustic digit names with corresponding symbolic visual forms. The digits "1" or "2" were presented in auditory, visual, or bimodal format at several stimulus onset asynchronies (SOAs; 0, 75, 150, and 225 msec). The reaction times (RTs) for echoing unimodal auditory stimuli were approximately 100 msec faster than the RTs for naming their visual forms. Correspondingly, bimodal facilitation violated race model predictions, but only at SOA values greater than 75 msec. These results indicate that the acoustic and visual information are pooled prior to verbal response programming. However, full expression of this bimodal summation is dependent on the central coincidence of the visual and auditory inputs. These results are considered in the context of studies demonstrating multimodal activation of regions involved in speech production.

14.
Studies of the McGurk effect have shown that when discrepant phonetic information is delivered to the auditory and visual modalities, the information is combined into a new percept not originally presented to either modality. In typical experiments, the auditory and visual speech signals are generated by the same talker. The present experiment examined whether a discrepancy in the gender of the talker between the auditory and visual signals would influence the magnitude of the McGurk effect. A male talker's voice was dubbed onto a videotape containing a female talker's face, and vice versa. The gender-incongruent videotapes were compared with gender-congruent videotapes, in which a male talker's voice was dubbed onto a male face and a female talker's voice was dubbed onto a female face. Even though there was a clear incompatibility in talker characteristics between the auditory and visual signals on the incongruent videotapes, the resulting magnitude of the McGurk effect was not significantly different for the incongruent as opposed to the congruent videotapes. The results indicate that the mechanism for integrating speech information from the auditory and the visual modalities is not disrupted by a gender incompatibility even when it is perceptually apparent. The findings are compatible with the theoretical notion that information about voice characteristics of the talker is extracted and used to normalize the speech signal at an early stage of phonetic processing, prior to the integration of the auditory and the visual information.

16.
An ERP Study of Developmental Dyslexia
Keywords: developmental dyslexia; ERP; phonological-deficit hypothesis; oddball paradigm. Behavioral experiments have shown that a deficit in phonological skills is at the core of developmental dyslexia in alphabetic writing systems. In recent years, however, behavioral and neurophysiological studies have also linked developmental dyslexia to deficits in basic perception. Event-related potentials (ERPs), as a distinctive electrophysiological method, have corroborated the behavioral findings from a more direct perspective and advanced research on developmental dyslexia. ERP studies at the level of linguistic cognition show that individuals with developmental dyslexia have deficits in phonological processing and information integration. ERP findings at the level of perceptual processing are less consistent: some studies find a basic auditory-processing deficit; some find a deficit in processing speech sounds but not non-speech sounds; some support the magnocellular-pathway-deficit hypothesis, finding visual-processing deficits at low contrast and low spatial frequencies; and others find no differences between dyslexic and normal readers across contrasts and spatial frequencies.

17.
The visual system has been proposed to be divided into two processing streams, the ventral and the dorsal. The ventral pathway is thought to be involved in object identification, whereas the dorsal pathway processes information about the spatial locations of objects and the spatial relationships among them. Several studies of working memory (WM) have further suggested a dissociable, domain-dependent functional organization within the prefrontal cortex for processing spatial and nonspatial visual information. The auditory system has likewise been proposed to be organized into two domain-specific processing streams, similar to those of the visual system. Recent studies of auditory WM further suggest that maintenance of nonspatial and spatial auditory information activates a distributed neural network including temporal, parietal, and frontal regions, but that the magnitude of activation within these areas shows a different functional topography depending on the type of information being maintained. The dorsal prefrontal cortex, specifically an area of the superior frontal sulcus (SFS), has been shown to exhibit greater activity for spatial than for nonspatial auditory tasks. Conversely, ventral frontal regions have been shown to be recruited more by nonspatial than by spatial auditory tasks. It has also been shown that the magnitude of this dissociation depends on the cognitive operations required during WM processing. Moreover, there is evidence that within the nonspatial domain in the ventral prefrontal cortex there is an across-modality dissociation during maintenance of visual and auditory information. Taken together, human neuroimaging results in both the visual and the auditory sensory systems support the idea that the prefrontal cortex is organized according to the type of information being maintained in WM.

18.
A two-stage model for visual-auditory interaction in saccadic latencies
In two experiments, saccadic response time (SRT) for eye movements toward visual target stimuli at different horizontal positions was measured under simultaneous or near-simultaneous presentation of an auditory nontarget (distractor). The horizontal position of the auditory signal was varied, using a virtual auditory environment setup. Mean SRT to a visual target increased with distance to the auditory nontarget and with delay of the onset of the auditory signal relative to the onset of the visual stimulus. A stochastic model is presented that distinguishes a peripheral processing stage with separate parallel activation by visual and auditory information from a central processing stage at which intersensory integration takes place. Two model versions differing with respect to the role of the auditory distractors are tested against the SRT data.
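The two-stage architecture described in this abstract (parallel peripheral activation by each modality, then a central integration stage) can be sketched as a toy simulation. Everything below is an illustrative assumption: the exponential stage durations, the facilitation function, and all parameter values are invented for the sketch and are not the model or parameters fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000  # simulated trials per condition

def simulate_srt(distance_deg, soa_ms):
    """Toy two-stage saccadic RT: peripheral race, then central stage.

    Peripheral stage: visual and auditory channels are activated in
    parallel; the auditory channel starts after the SOA delay.
    Central stage: its duration shrinks when the auditory signal arrives
    close in time to the visual one, with a facilitation that decays with
    audio-visual spatial distance and with auditory onset delay.
    Returns the mean simulated SRT in ms.
    """
    vis = rng.exponential(80, N)            # visual peripheral time (ms)
    aud = soa_ms + rng.exponential(60, N)   # auditory peripheral time (ms)
    # Central stage: 120 ms baseline, minus up to 40 ms of facilitation,
    # applied only on trials where audition arrives near-coincidentally.
    facilitation = 40 * np.exp(-distance_deg / 20) * np.exp(-soa_ms / 100)
    central = 120 - facilitation * (aud < vis + 50)
    return (vis + central).mean()

# Mean SRT should increase with audio-visual distance and with SOA delay,
# qualitatively matching the pattern reported in the abstract.
print(simulate_srt(0, 0), simulate_srt(40, 0), simulate_srt(0, 100))
```

The point of the sketch is only the qualitative prediction: weakening the central facilitation (by distance or delay) raises mean SRT, while the peripheral stages stay unchanged.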

19.
Integrating face and voice in person perception
Integration of information from face and voice plays a central role in our social interactions. It has been mostly studied in the context of audiovisual speech perception: integration of affective or identity information has received comparatively little scientific attention. Here, we review behavioural and neuroimaging studies of face-voice integration in the context of person perception. Clear evidence for interference between facial and vocal information has been observed during affect recognition or identity processing. Integration effects on cerebral activity are apparent both at the level of heteromodal cortical regions of convergence, particularly bilateral posterior superior temporal sulcus (pSTS), and at 'unimodal' levels of sensory processing. Whether the latter reflects feedback mechanisms or direct crosstalk between auditory and visual cortices is as yet unclear.
