Similar Articles
17 similar articles retrieved.
1.
王润洲  毕鸿燕 《心理科学进展》2022,30(12):2764-2776
The nature of developmental dyslexia has long been a focus of debate among researchers. A large body of research has found that individuals with dyslexia show deficits in audiovisual temporal integration. However, these studies have examined only the overall, averaged performance of audiovisual temporal integration in dyslexia, without exploring how the integration process changes over time. Audiovisual temporal recalibration reflects the dynamic processing underlying audiovisual temporal integration; difficulty recalibrating the discrepancy between internal temporal representations and sensory input impairs multisensory integration, and individuals with dyslexia show deficits in recalibration-related abilities. Impaired audiovisual temporal recalibration may therefore be the root cause of the audiovisual temporal integration deficit in developmental dyslexia. Future research should further examine how audiovisual temporal recalibration manifests in individuals with developmental dyslexia, as well as the cognitive and neural mechanisms underlying these manifestations.

2.
The brain can process and integrate information from different sensory modalities. Compared with a single modality, individuals respond faster to target signals presented simultaneously in different modalities. A leading theoretical account of this phenomenon is the coactivation model, which holds that stimuli from different modalities converge and are integrated in specific brain regions, such as the intraparietal sulcus, the superior temporal sulcus, and prefrontal cortex. The integrated signal is stronger and can trigger a response more quickly, but the stage of cognitive processing at which integration occurs has not yet been clearly established. When individuals process task switches that occur across sensory modalities, the modality-related switch cost is smaller than the sum of the cross-modal switch cost and the task-switch cost, which provides evidence that modality-related switch costs arise from the inertia of, and interference from, task sets. When switching occurs between unimodal and multimodal conditions, the cross-modal switch cost decreases or even disappears, because concurrent multisensory integration offsets part of the cost; this phenomenon supports the coactivation model. However, how multisensory integration affects the neural processing of task switching remains unclear. Future research could combine and refine the multisensory integration paradigm with the classic task-switching paradigm to determine the processing mechanism of cross-modal switching and the stage at which multisensory integration occurs.

3.
Even when the visual and auditory components of a single event occur simultaneously, differences in physical transmission and neural conduction speeds mean that the brain perceives them with a certain delay. Temporal recalibration refers to the phenomenon whereby the brain minimizes and adapts to this delay. Using an adaptation-test paradigm, the present study repeatedly presented participants during the adaptation phase with audiovisual stimulus pairs in which the visual stimulus led the auditory stimulus, or the auditory led the visual, by 128 ms, and then measured the shift in the point of subjective simultaneity with a temporal order judgment task during the test phase. By manipulating the spatial locations at which the adapted and novel audiovisual stimulus pairs (i.e., the adapted object and the novel object) appeared in the test phase, the study explored how spatial and object factors affect audiovisual temporal recalibration. The results showed that when the adapted object appeared at the adapted location, the point of subjective simultaneity shifted significantly toward the adapted lag; significant or marginally significant shifts also occurred when the adapted object appeared at an unadapted location and when the novel object appeared at the adapted location. The findings indicate that how audiovisual temporal recalibration transfers across space depends on the joint or independent contributions of the adapted location and the adapted object.

4.
Audiovisual temporal integration refers to the process by which individuals represent visual and auditory stimuli arriving within a certain temporal interval, and it is a key mechanism of audiovisual integration. Individuals with autism spectrum disorder show deficits in audiovisual temporal integration, mainly in four respects: their audiovisual temporal binding window is wider and more symmetrical than that of typically developing individuals; their rapid audiovisual temporal recalibration is insufficient; auditory temporal cues facilitate their visual search only weakly; and their audiovisual temporal order perception for speech stimuli is less sensitive. A variety of tasks have been used: the sound-induced flash illusion and the "pip-pop" task probe the temporal mechanisms of audiovisual integration implicitly, whereas simultaneity judgment, temporal order judgment, and preferential looking tasks are mainly used to study cross-modal temporal perception. Relevant theories explain these deficits in terms of atypical neural processing, insufficient prior experience, and the interplay between the visual and auditory modalities. Future research should improve ecological validity, integrate theoretical accounts, quantify diagnostic indicators more precisely, and develop effective interventions.

5.
康冠兰  罗霄骁 《心理科学》2020,(5):1072-1078
Cross-modal information interaction refers to the set of processes by which information from one sensory modality interacts with and influences information from another. It involves two main aspects: how inputs from different modalities are integrated, and how conflicts between cross-modal information are controlled. This paper reviews the behavioral and neural mechanisms of audiovisual cross-modal integration and conflict control, and discusses how attention modulates both. Future work should investigate the brain-network mechanisms of audiovisual cross-modal processing and examine cross-modal integration and conflict control in special populations to help reveal the mechanisms of their cognitive and social impairments.

6.
张亮  孙向红  张侃 《心理科学进展》2009,17(6):1133-1138
In natural environments, human emotional information is communicated through multiple sensory channels, and multisensory integration is fundamental to emotion processing. Behavioral, electrophysiological, and neuroimaging studies in recent years indicate that emotional information is integrated across modalities automatically, at an early stage of cognitive processing, and that this integration is closely associated with brain regions including the superior temporal gyrus, middle temporal gyrus, parahippocampal gyrus, and thalamus. The integration of different emotions shares a common neural basis while also recruiting emotion-specific regions, and the integration mechanism may further depend on the type of processing and on attentional resources. In future research, making experiments more standardized, dynamic, and naturalistic would improve accuracy and comparability across studies, while studying special populations and examining emotion processing together with attention and other cognitive processes would help further uncover the neural mechanisms of multisensory integration.

7.
When people receive information from different sensory modalities, it is typically first processed separately in distinct brain regions and then integrated in multisensory areas. Neuroimaging studies of audiovisual integration in speech perception indicate that visual and auditory information can influence each other, that the key region for their integration is the left posterior superior temporal sulcus, and that its integration effects are constrained by temporal and spatial factors. Future research should develop more appropriate experimental paradigms and data-analysis methods to investigate the brain mechanisms of integration and extend multisensory integration research to more complex domains.

8.
Audiovisual integration refers to the tendency of the visual and auditory systems to integrate when visual and auditory signals are presented in approximate temporal and spatial proximity. Mismatch negativity (MMN), a component reflecting early brain processing, indexes the neural mismatch between a deviant input and the sensory memory trace. Studies using MMN as a probe of audiovisual integration have mainly examined letter-speech sound integration in reading comprehension, prosodic information, and the McGurk effect, and have analyzed the competing and complementary relations within cross-modal audiovisual integration. Future research should address cross-modal integration involving other modalities and extend the paradigms used to elicit MMN.

9.
Duration perception is affected by stimulus novelty: the presentation duration of a novel stimulus is typically judged to be longer than that of a repeated standard stimulus of equal duration. Three hypotheses have been proposed to explain this subjective time dilation: the attention hypothesis, the arousal hypothesis, and the neural coding efficiency hypothesis. Confounded variables, cross-modal effects, and temporal factors are issues worth considering in future research. Researchers still disagree considerably about the psychological and neural mechanisms underlying the novelty effect, and further exploration of these mechanisms is important for understanding duration perception.

10.
Most existing multisensory integration paradigms randomly intermix different unimodal and bimodal stimuli. Such paradigms are contaminated by modality-switch effects, which may render measurements of multisensory integration inaccurate. Clarifying the factors that drive modality-switch effects within these paradigms, and designing integration measures accordingly, is therefore a necessary prerequisite for multisensory integration research. Experiment 1 verified how modality-switch effects operate in the classic integration-measurement paradigm; Experiment 2 then characterized the modality-switch effect by controlling the consistency of signal intensity between successive stimuli. Taken together, the results show that the modality-switch effect arises from changes in attentional resource allocation to, and alertness for, the current stimulus modality caused by differences in the preceding stimulus. This indicates that in behavioral measures of multisensory integration, trials should first be classified by the preceding stimulus modality before analysis.

11.
Chiou R  Rich AN 《Perception》2012,41(3):339-353
The brain constantly integrates incoming signals across the senses to form a cohesive view of the world. Most studies on multisensory integration concern the roles of spatial and temporal parameters. However, recent findings suggest cross-modal correspondences (eg high-pitched sounds associated with bright, small objects located high up) also affect multisensory integration. Here, we focus on the association between auditory pitch and spatial location. Surprisingly little is known about the cognitive and perceptual roots of this phenomenon, despite its long use in ergonomic design. In a series of experiments, we explore how this cross-modal mapping affects the allocation of attention with an attentional cuing paradigm. Our results demonstrate that high and low tones induce attention shifts to upper or lower locations, depending on pitch height. Furthermore, this pitch-induced cuing effect is susceptible to contextual manipulations and volitional control. These findings suggest the cross-modal interaction between pitch and location originates from an attentional level rather than from response mapping alone. The flexible contextual mapping between pitch and location, as well as its susceptibility to top-down control, suggests the pitch-induced cuing effect is primarily mediated by cognitive processes after initial sensory encoding and occurs at a relatively late stage of voluntary attention orienting.

12.
Rowland BA  Stanford TR  Stein BE 《Perception》2007,36(10):1431-1443
Much of the information about multisensory integration is derived from studies of the cat superior colliculus (SC), a midbrain structure involved in orientation behaviors. This integration is apparent in the enhanced responses of SC neurons to cross-modal stimuli, responses that exceed those to any of the modality-specific component stimuli. The simplest model of multisensory integration is one in which the SC neuron simply sums its various sensory inputs. However, a number of empirical findings reveal the inadequacy of such a model; for example, the finding that deactivation of cortico-collicular inputs eliminates the enhanced response to a cross-modal stimulus without eliminating responses to the modality-specific component stimuli. These and other empirical findings inform a computational model that accounts for all of the most fundamental aspects of SC multisensory integration. The model is presented in two forms: an algebraic form that conveys the essential insights, and a compartmental form that represents the neuronal computations in a more biologically realistic way.
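The response enhancement described above is conventionally quantified in the SC literature with the multisensory enhancement index (a standard measure of the field, not a formula quoted from this paper):

\[ \mathrm{ME} = \frac{\mathrm{CM} - \mathrm{SM}_{\max}}{\mathrm{SM}_{\max}} \times 100\% \]

where CM is the mean response to the cross-modal stimulus and SM_max is the larger of the mean responses to the modality-specific components. ME > 0 indicates enhancement, and a response with CM > SM_A + SM_V is superadditive, i.e., it exceeds what the simple summing model mentioned in the abstract would predict.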

13.
Visual temporal processing and multisensory integration (MSI) of sound and vision were examined in individuals with schizophrenia using a visual temporal order judgment (TOJ) task. Compared to a non-psychiatric control group, persons with schizophrenia were less sensitive when judging the temporal order of two successively presented visual stimuli. However, their sensitivity to visual temporal order improved, as in the control group, when two accessory sounds were added (temporal ventriloquism). These findings indicate that individuals with schizophrenia have diminished sensitivity to visual temporal order, but no deficits in the integration of low-level auditory and visual information.

14.
Several studies have shown that the direction in which a visual apparent motion stream moves can influence the perceived direction of an auditory apparent motion stream (an effect known as cross-modal dynamic capture). However, little is known about the role that intramodal perceptual grouping processes play in the multisensory integration of motion information. The present study was designed to investigate the time course of any modulation of the cross-modal dynamic capture effect by the nature of the perceptual grouping taking place within vision. Participants were required to judge the direction of an auditory apparent motion stream while trying to ignore visual apparent motion streams presented in a variety of different configurations. Our results demonstrate that the cross-modal dynamic capture effect was influenced more by visual perceptual grouping when the conditions for intramodal perceptual grouping were set up prior to the presentation of the audiovisual apparent motion stimuli. However, no such modulation occurred when the visual perceptual grouping manipulation was established at the same time as or after the presentation of the audiovisual stimuli. These results highlight the importance of the unimodal perceptual organization of sensory information to the manifestation of multisensory integration.

15.
Multisensory neurons in the deep superior colliculus (SC) show response enhancement to cross-modal stimuli that coincide in time and space. However, multisensory SC neurons respond to unimodal input as well. It is thus legitimate to ask why not all deep SC neurons are multisensory or, at least, develop multisensory behavior during an organism's maturation. The novel answer given here derives from a signal detection theory perspective. A Bayes' ratio model of multisensory enhancement is suggested. It holds that deep SC neurons operate under the Bayes' ratio rule, which guarantees optimal performance; that is, it maximizes the probability of target detection while minimizing the false alarm rate. It is shown that optimal performance of multisensory neurons vis-à-vis cross-modal stimuli implies, at the same time, that modality-specific neurons will outperform multisensory neurons in processing unimodal targets. Thus, only the existence of both multisensory and modality-specific neurons allows optimal performance when targets of one or several modalities may occur.
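A minimal sketch of the Bayes' ratio rule in standard signal-detection form (illustrative notation, not taken from the paper): given auditory and visual inputs x_A and x_V, report a target whenever

\[ \frac{P(x_A, x_V \mid \text{target})}{P(x_A, x_V \mid \text{no target})} > c, \]

where the criterion c is set by prior odds and payoffs. By the Neyman-Pearson lemma, this likelihood-ratio test maximizes the probability of detection at any fixed false-alarm rate, which is the optimality property the abstract refers to.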

16.
Dyslexia has been associated with a problem in visual–audio integration mechanisms. Here, we investigate for the first time the contribution of unisensory cues to multisensory audio and visual integration in 32 dyslexic children by modelling results using the Bayesian approach. Non-linguistic stimuli were used. Children performed a temporal task: they had to report whether the middle of three stimuli was closer in time to the first one or to the last one presented. Children with dyslexia, compared with typical children, exhibited poorer unimodal thresholds, requiring greater temporal distance between items for correct judgements, while multisensory thresholds were well predicted by the Bayesian model. This result suggests that the multisensory deficit in dyslexia is due to impaired audio and visual inputs rather than impaired multisensory processing per se. We also observed that poorer temporal skills correlated with lower reading skills in dyslexic children, suggesting that this temporal capability can be linked to reading abilities.
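The abstract does not spell out the Bayesian model; a common instantiation for predicting bimodal thresholds is maximum-likelihood cue combination, sketched here under that assumption:

\[ \hat{S}_{AV} = \frac{\sigma_V^2}{\sigma_A^2 + \sigma_V^2}\,\hat{S}_A + \frac{\sigma_A^2}{\sigma_A^2 + \sigma_V^2}\,\hat{S}_V, \qquad \sigma_{AV}^2 = \frac{\sigma_A^2\,\sigma_V^2}{\sigma_A^2 + \sigma_V^2}, \]

so the predicted multisensory threshold is never worse than the better unisensory one. On this account, the children's elevated multisensory thresholds follow directly from their elevated unimodal thresholds rather than from a deficient combination rule, consistent with the authors' conclusion.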

17.
Efficient navigation of our social world depends on the generation, interpretation, and combination of social signals within different sensory systems. However, the influence of healthy adult aging on multisensory integration of emotional stimuli remains poorly explored. This article comprises 2 studies that directly address issues of age differences on cross-modal emotional matching and explicit identification. The first study compared 25 younger adults (19-40 years) and 25 older adults (60-80 years) on their ability to match cross-modal congruent and incongruent emotional stimuli. The second study looked at performance of 20 younger (19-40) and 20 older adults (60-80) on explicit emotion identification when information was presented congruently in faces and voices or only in faces or in voices. In Study 1, older adults performed as well as younger adults on tasks in which congruent auditory and visual emotional information were presented concurrently, but there were age-related differences in matching incongruent cross-modal information. Results from Study 2 indicated that though older adults were impaired at identifying emotions from 1 modality (faces or voices alone), they benefited from congruent multisensory information as age differences were eliminated. The findings are discussed in relation to social, emotional, and cognitive changes with age.

