Similar Literature
1.
The interaction between exogenous attention and multisensory integration is a complex and controversial area of research that has long attracted researchers' interest. To explain the mechanism of this interaction, this review draws on existing findings to summarize the relationship from two directions. (1) Exogenous attention can modulate multisensory integration in a bottom-up manner; three theoretical accounts have been proposed: spatial uncertainty, perceptual sensitivity, and differences in signal strength between sensory modalities. (2) Multisensory integration can modulate exogenous attention. On the one hand, stimuli from multiple sensory modalities can be integrated automatically in a bottom-up fashion, and the integrated multisensory stimulus is more salient than a unisensory one and therefore captures attention more effectively. On the other hand, the integrated multisensory stimulus can be stored in the brain as a multisensory signal template, which then modulates attentional capture in a top-down manner during a task.

2.
唐晓雨, 佟佳庚, 于宏, 王爱君. 《心理学报》(Acta Psychologica Sinica), 2021, 53(11): 1173-1188
Using an endogenous-exogenous spatial cue-target paradigm, this study manipulated three variables: endogenous cue validity (valid vs. invalid), exogenous cue validity (valid vs. invalid), and target type (visual, auditory, or audiovisual). Two experiments differing in task difficulty (Experiment 1: a simple localization task; Experiment 2: a more demanding discrimination task) examined how endogenous and exogenous spatial attention affect multisensory integration. In both experiments, exogenous spatial attention significantly reduced the multisensory integration effect, whereas endogenous spatial attention did not significantly enhance it. Experiment 2 further showed that endogenous spatial attention modulated the reduction of multisensory integration by exogenous spatial attention. These results indicate that, unlike endogenous spatial attention, the influence of exogenous spatial attention on multisensory integration is not readily modulated by task difficulty, and that when the task is difficult, endogenous spatial attention affects the process by which exogenous spatial attention weakens multisensory integration. This suggests that the modulation of multisensory integration by endogenous and exogenous spatial attention is not independent; rather, the two interact with each other.

3.
When participants respond to auditory and visual stimuli, responses to audiovisual stimuli are substantially faster than to unimodal stimuli (redundant signals effect, RSE). In such tasks, the RSE is usually higher than probability summation predicts, suggestive of specific integration mechanisms underlying the RSE. We investigated the role of spatial and selective attention on the RSE in audiovisual redundant signals tasks. In Experiment 1, stimuli were presented either centrally (narrow attentional focus) or at 1 of 3 unpredictable locations (wide focus). The RSE was accurately described by a coactivation model assuming linear superposition of modality-specific activation. Effects of spatial attention were explained by a shift of the evidence criterion. In Experiment 2, stimuli were presented at 3 locations; participants had to respond either to all signals regardless of location (simple response task) or to central stimuli only (selective attention task). The RSE was consistent with task-specific coactivation models; accumulation of evidence, however, differed between the 2 tasks.
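
For reference, the probability-summation benchmark mentioned in this abstract is commonly formalized as Miller's race-model inequality, P(RT <= t | AV) <= P(RT <= t | A) + P(RT <= t | V); systematic violations of the bound are taken as evidence for coactivation. The Python sketch below shows how such a bound can be checked; it is an illustrative example run on simulated, hypothetical reaction times, not the analysis code used in the study.

    import numpy as np

    def ecdf(rts, t_grid):
        """Empirical cumulative distribution of reaction times at each time point."""
        rts = np.sort(np.asarray(rts))
        return np.searchsorted(rts, t_grid, side="right") / rts.size

    def race_model_violation(rt_audio, rt_visual, rt_av, n_points=50):
        """Return the time points where F_AV(t) exceeds Miller's bound F_A(t) + F_V(t)."""
        all_rts = np.concatenate([rt_audio, rt_visual, rt_av])
        t_grid = np.linspace(all_rts.min(), all_rts.max(), n_points)
        bound = np.minimum(ecdf(rt_audio, t_grid) + ecdf(rt_visual, t_grid), 1.0)
        violation = ecdf(rt_av, t_grid) - bound
        return t_grid[violation > 0], violation.max()

    # Hypothetical reaction times (seconds): redundant audiovisual responses are fastest.
    rng = np.random.default_rng(0)
    rt_a = rng.normal(0.32, 0.05, 500)
    rt_v = rng.normal(0.30, 0.05, 500)
    rt_av = rng.normal(0.25, 0.04, 500)
    violated_at, max_violation = race_model_violation(rt_a, rt_v, rt_av)
    print(violated_at.size, max_violation)   # a positive maximum indicates coactivation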

4.
It is well accepted that multisensory integration has a facilitative effect on perceptual and motor processes, evolutionarily enhancing the chance of survival of many species, including humans. Yet, there is limited understanding of the relationship between multisensory processes, environmental noise, and children's cognitive abilities. Thus, this study investigated the relationship between multisensory integration, auditory background noise, and the general intellectual abilities of school-age children (N = 88, mean age = 9 years, 7 months) using a simple audiovisual detection paradigm. We provide evidence that children with enhanced multisensory integration in quiet and noisy conditions are likely to score above average on the Full-Scale IQ of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV). Conversely, approximately 45% of tested children, with relatively low verbal and nonverbal intellectual abilities, showed reduced multisensory integration in either quiet or noise. Interestingly, approximately 20% of children showed improved multisensory integration abilities in the presence of auditory background noise. The findings of the present study suggest that stable and consistent multisensory integration in quiet and noisy environments is associated with the development of optimal general intellectual abilities. Further theoretical implications are discussed.

5.
Irrelevant events in one sensory modality can influence the number of events that are perceived in another modality. Previously, the underlying process of sensory integration was studied in conditions in which participants knew a priori which sensory modality was relevant and which was not. Consequently, (bottom-up) sensory interference and (top-down) selective attention were confounded. We disentangled these effects by measuring the influence of visual flashes on the number of tactile taps that were perceived, and vice versa, in two conditions. In the cue condition, participants were instructed on which modality to report before the bimodal stimulus was presented. In the no-cue condition, they were instructed after stimulus presentation. Participants reported the number of events that they perceived for bimodal combinations of one, two, or three flashes and one, two, or three taps. Our main findings were that (1) in no-cue conditions, the influence of vision on touch was stronger than was the influence of touch on vision; (2) in cue conditions, the integration effects were smaller than those in no-cue conditions; and (3) irrelevant taps were less easily ignored than were irrelevant flashes. This study disentangled previously confounded bottom-up and top-down effects: The bottom-up influence of vision on touch was stronger, but vision was also more easily suppressed by top-down selective attention. We have compared our results qualitatively and quantitatively with recently proposed sensory-integration models.

6.
The brain often integrates multisensory sources of information in a way that is close to optimal according to Bayesian principles. Since sensory modalities are grounded in different, body-relative frames of reference, multisensory integration requires accurate transformations of information. We have shown experimentally, for example, that a rotating tactile stimulus on the palm of the right hand can influence the judgment of ambiguously rotating visual displays. Most significantly, this influence depended on the palm orientation: when facing upwards, a clockwise rotation on the palm yielded a clockwise visual judgment bias; when facing downwards, the same clockwise rotation yielded a counterclockwise bias. Thus, tactile rotation cues biased visual rotation judgment in a head-centered reference frame. Recently, we have generated a modular, multimodal arm model that is able to mimic aspects of such experiments. The model co-represents the state of an arm in several modalities, including a proprioceptive, joint angle modality as well as head-centered orientation and location modalities. Each modality represents each limb or joint separately. Sensory information from the different modalities is exchanged via local forward and inverse kinematic mappings. Also, re-afferent sensory feedback is anticipated and integrated via Kalman filtering. Information across modalities is integrated probabilistically via Bayesian-based plausibility estimates, continuously maintaining a consistent global arm state estimation. This architecture is thus able to model the described effect of posture-dependent motion cue integration: tactile and proprioceptive sensory information may yield top-down biases on visual processing. Equally, such information may influence top-down visual attention, expecting particular arm-dependent motion patterns. Current research implements such effects on visual processing and attention.
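
As a toy illustration of the reliability-weighted, Bayes-optimal cue fusion this abstract refers to, the Python sketch below combines two independent Gaussian estimates of the same quantity; the values are made up, and the code is not the authors' modular arm model. The same precision-weighted update, applied recursively together with a prediction step, is the core of the Kalman filtering mentioned above.

    def fuse_gaussian_cues(mu_a, var_a, mu_b, var_b):
        """Bayes-optimal fusion of two independent Gaussian estimates of one quantity.

        The fused mean is a precision-weighted average, and the fused variance is
        never larger than either input variance, which is why integration helps.
        """
        w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
        fused_mu = w_a * mu_a + (1.0 - w_a) * mu_b
        fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
        return fused_mu, fused_var

    # Hypothetical hand-position estimates (cm): vision is more reliable than proprioception.
    mu, var = fuse_gaussian_cues(mu_a=10.0, var_a=1.0,   # visual estimate
                                 mu_b=14.0, var_b=4.0)   # proprioceptive estimate
    print(mu, var)   # 10.8 0.8 -- the estimate is pulled toward the more reliable cue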

7.
We assessed the influence of multisensory interactions on the exogenous orienting of spatial attention by comparing the ability of auditory, tactile, and audiotactile exogenous cues to capture visuospatial attention under conditions of no perceptual load versus high perceptual load. In Experiment 1, participants discriminated the elevation of visual targets preceded by either unimodal or bimodal cues under conditions of either a high perceptual load (involving the monitoring of a rapidly presented central stream of visual letters for occasionally presented target digits) or no perceptual load (when the central stream was replaced by a fixation point). All of the cues captured spatial attention in the no-load condition, whereas only the bimodal cues captured visuospatial attention in the high-load condition. In Experiment 2, we ruled out the possibility that the presentation of any changing stimulus at fixation (i.e., a passively monitored stream of letters) would eliminate exogenous orienting, which instead appears to be a consequence of high perceptual load conditions (Experiment 1). These results demonstrate that multisensory cues capture spatial attention more effectively than unimodal cues under conditions of concurrent perceptual load.

8.
Despite their common origin, studies on motor coordination and on attentional load have developed into separate fields of investigation, bringing out findings, methods, and theories which are diverse if not mutually exclusive. Sitting at the intersection of these two fields, this article addresses the issue of behavioral flexibility by investigating how intention modifies the stability of existing patterns of coordination between moving limbs. It addresses the issue, largely ignored until now, of the attentional cost incurred by the central nervous system (CNS) in maintaining a coordination pattern at a given level of stability, in particular under different attentional priority requirements. The experimental paradigm adopted in these studies provides an original mix of a classical measure of attentional load, namely, reaction time, and of a dynamic approach to coordination, most suitable for characterizing the dynamic properties of coordinated behavior and behavioral change. Findings showed that central cost and pattern stability covary, suggesting that bimanual coordination and the attentional activity of the CNS involved in maintaining such a coordination bear on the same underlying dynamics. Such a conclusion provides strong support for a unified approach to coordination encompassing a conceptualization in terms of information processing and another, more recent framework rooted in self-organization theories and dynamical systems models.

9.
The brain can process and integrate information from different sensory modalities. Compared with a single modality, individuals respond faster to target signals presented simultaneously in different modalities. A major theoretical explanation of this phenomenon is the coactivation model, which holds that stimuli from different modalities converge and are integrated in specific brain regions such as the intraparietal sulcus, the superior temporal sulcus, and prefrontal cortex. The integrated signal is stronger and can trigger a response more quickly, but it is still unclear at which stage of cognitive processing this integration takes place. When individuals switch between tasks presented in different sensory modalities, the modality-related switch cost is smaller than the sum of the cross-modal switch cost and the task switch cost, which provides evidence that modality-related switch costs arise from the inertia of, and interference between, task sets. When switching occurs between unimodal and multimodal conditions, the cross-modal switch cost shrinks or even disappears, because concurrent multisensory integration offsets part of the cost, a pattern that supports the coactivation model. However, how multisensory integration affects the neural processing of task switching remains unclear; future studies could combine multisensory integration paradigms with classic task-switching paradigms to pin down the processing mechanism of cross-modal switching and the stage at which multisensory integration occurs.
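
To make the coactivation account described above concrete, the sketch below (Python, with arbitrary, hypothetical parameters rather than values fitted to data) sums the evidence from two modality-specific channels into a single noisy accumulator. Because the drift rates superpose, the combined accumulator reaches the decision threshold sooner on average than either channel alone, reproducing the redundant-signals speed-up.

    import numpy as np

    def first_passage_time(drifts, threshold=1.0, dt=0.001, noise_sd=0.1, max_t=2.0, rng=None):
        """Time for an accumulator driven by the summed drift rates to reach threshold.

        Under coactivation, auditory and visual evidence superpose (drifts are summed),
        so the combined accumulator tends to cross the threshold earlier.
        """
        rng = rng or np.random.default_rng()
        x, t = 0.0, 0.0
        drift = sum(drifts)
        while x < threshold and t < max_t:
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return t

    rng = np.random.default_rng(1)
    rt_auditory = np.mean([first_passage_time([1.2], rng=rng) for _ in range(200)])
    rt_visual = np.mean([first_passage_time([1.5], rng=rng) for _ in range(200)])
    rt_audiovisual = np.mean([first_passage_time([1.2, 1.5], rng=rng) for _ in range(200)])
    print(rt_auditory, rt_visual, rt_audiovisual)   # the audiovisual mean is the smallest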

10.
The sound-induced flash illusion is a classic audiovisual integration illusion: when visual flashes and auditory beeps presented within 100 ms of each other differ in number, observers perceive the number of flashes as equal to the number of beeps. Factors that influence the illusion include within-subject factors, both bottom-up and top-down, as well as between-subject factors such as the degree of dependence on audiovisual stimuli, the developmental level of audiovisual integration, and perceptual sensitivity to audiovisual stimuli. In terms of time course, the illusion arises mainly at early stages of processing; in terms of anatomy, it involves several cortical and subcortical regions. Future research should examine how cognitive processes such as attention, reward, and the mode of audiovisual integration affect the sound-induced flash illusion, investigate its influence on memory and learning, and further probe its cognitive and neural mechanisms by combining computational models with neuroscientific methods.

11.
Learning to identify objects as members of categories is an essential cognitive skill and learning to deploy attention effectively is a core component of that process. The present study investigated an assumption embedded in formal models of categorization: error is necessary for attentional learning. Eye-trackers were used to record participants’ allocation of attention to task relevant and irrelevant features while learning a complex categorization task. It was found that participants optimized their fixation patterns in the absence of both performance errors and corrective external feedback. Optimization began immediately after each category was mastered and continued for many trials. These results demonstrate that error is neither necessary nor sufficient for all forms of attentional learning.

12.
Psychopathic behavior has long been attributed to a fundamental deficit in fear that arises from impaired amygdala function. Growing evidence has demonstrated that fear-potentiated startle (FPS) and other psychopathy-related deficits are moderated by focus of attention, but to date, no work on adult psychopathy has examined attentional modulation of the amygdala or concomitant recruitment of relevant attention-related circuitry. Consistent with previous FPS findings, here we report that psychopathy-related differences in amygdala activation appear and disappear as a function of goal-directed attention. Specifically, decreased amygdala activity was observed in psychopathic offenders only when attention was engaged in an alternative goal-relevant task prior to presenting threat-relevant information. Under this condition, psychopaths also exhibited greater activation in selective-attention regions of the lateral prefrontal cortex (LPFC) than did nonpsychopaths, and this increased LPFC activation mediated psychopathy’s association with decreased amygdala activation. In contrast, when explicitly attending to threat, amygdala activation did not differ in psychopaths and nonpsychopaths. This pattern of amygdala activation highlights the potential role of LPFC in mediating the failure of psychopathic individuals to process fear and other important information when it is peripheral to the primary focus of goal-directed attention.

13.
Our ability to perceive two events in close temporal succession is severely limited, a phenomenon known as the attentional blink. While the blink has served as a popular tool to prevent conscious perception, there is less research on its causes, and in particular on the role of conscious perception of the first event in triggering it. In three experiments, we disentangled the roles of spatial attention, conscious perception and working memory (WM) in causing the blink. We show that while allocating spatial attention to T1 is neither necessary nor sufficient for eliciting a blink, consciously perceiving it is necessary but not sufficient. When T1 was task irrelevant, consciously perceiving it triggered a blink only when it matched the attentional set for T2. We conclude that consciously perceiving a task-relevant event causes the blink, possibly because it triggers encoding of this event into WM. We discuss the implications of these findings for the relationship between spatial attention, conscious perception and WM, as well as for the distinction between access and phenomenal consciousness.

14.
Previous studies have shown that adults respond faster and more reliably to bimodal compared to unimodal localization cues. The current study investigated for the first time the development of audiovisual (A-V) integration in spatial localization behavior in infants between 1 and 10 months of age. We observed infants' head and eye movements in response to auditory, visual, or both kinds of stimuli presented either 25 degrees or 45 degrees to the right or left of midline. Infants under 8 months of age intermittently showed response latencies significantly faster toward audiovisual targets than toward either auditory or visual targets alone. They did so, however, without exhibiting a reliable violation of the Race Model, suggesting that probability summation alone could explain the faster bimodal response. In contrast, infants between 8 and 10 months of age exhibited bimodal response latencies significantly faster than unimodal latencies for both eccentricity conditions and their latencies violated the Race Model at 25 degrees eccentricity. In addition to this main finding, we found age-dependent eccentricity and modality effects on response latencies. Together, these findings suggest that audiovisual integration emerges late in the first year of life and are consistent with neurophysiological findings from multisensory sites in the superior colliculus of infant monkeys showing that multisensory enhancement of responsiveness is not present at birth but emerges later in life.

15.
We examined the relationship between subcomponents of embodiment and multisensory integration using a mirror box illusion. The participants’ left hand was positioned against the mirror, while their right hidden hand was positioned 12″, 6″, or 0″ from the mirror – creating a conflict between visual and proprioceptive estimates of limb position in some conditions. After synchronous tapping, asynchronous tapping, or no movement of both hands, participants gave position estimates for the hidden limb and filled out a brief embodiment questionnaire. We found a relationship between different subcomponents of embodiment and illusory displacement towards the visual estimate. Illusory visual displacement was positively correlated with feelings of deafference in the asynchronous and no movement conditions, whereas it was positively correlated with ratings of visual capture and limb ownership in the synchronous and no movement conditions. These results provide evidence for dissociable contributions of different aspects of embodiment to multisensory integration.

16.
The role of attention in temporal integration
Visser TA, Enns JT. Perception, 2001, 30(2): 135-145
When two visual patterns are presented in rapid succession, their contours may be combined into a single unified percept. This temporal integration is known to be influenced by such low-level visual factors as stimulus intensity, contour proximity, and stimulus duration. In this study we asked whether temporal integration is modulated by an attentional-blink procedure. The results from a localisation task in experiment 1 and a detection task in experiment 2 pointed to two separate effects. First, greater attentional availability increased the accuracy of spatial localisation. Second, it increased the duration over which successive stimuli could be integrated. These results imply that theories of visible persistence and visual masking must account for attentional influences in addition to lower-level effects. They also have practical implications for use of the temporal-integration task in the assessment of group and individual differences.

17.
People often have to make decisions based on many pieces of information. Previous work has found that people are able to integrate values presented in a rapid serial visual presentation (RSVP) stream to make informed judgements on the overall stream value (Tsetsos et al. Proceedings of the National Academy of Sciences of the United States of America, 109(24), 9659–9664, 2012). It is also well known that attentional mechanisms influence how people process information. However, it is unknown how attentional factors impact value judgements of integrated material. The current study is the first of its kind to investigate whether value judgements are influenced by attentional processes when assimilating information. Experiments 1–3 examined whether the attentional salience of an item within an RSVP stream affected judgements of overall stream value. The results showed that the presence of an irrelevant high or low value salient item biased people to judge the stream as having a higher or lower overall mean value, respectively. Experiments 4–7 directly tested Tsetsos et al.’s (Proceedings of the National Academy of Sciences of the United States of America, 109(24), 9659–9664, 2012) theory examining whether extreme values in an RSVP stream become over-weighted, thereby capturing attention more than other values in the stream. The results showed that the presence of both a high (Experiments 4, 6 and 7) and a low (Experiment 5) value outlier captures attention leading to less accurate report of subsequent items in the stream. Taken together, the results showed that valuations can be influenced by attentional processes, and can lead to less accurate subjective judgements.

18.
Virtual reality creates immersive experiences by providing visual, auditory, and haptic information, yet haptic feedback faces numerous technical bottlenecks that limit natural interaction in virtual environments. Pseudo-haptic techniques based on multisensory illusions can strengthen and enrich tactile sensations with information from other channels and are currently an effective way to improve the haptic experience in virtual reality. This paper focuses on roughness, one of the most important dimensions of touch, in an attempt to offer a new approach to the limited haptic feedback in virtual reality. It discusses how visual, auditory, and tactile channels are integrated in roughness perception, analyzes how visual cues (surface texture density, surface lighting and shading, and the control-display ratio) and auditory cues (pitch/frequency and loudness) affect perceived tactile roughness, and summarizes current methods of manipulating these factors to alter roughness perception. Finally, it considers how the presentation and perceptual integration of visual, auditory, and tactile information in virtual reality may differ from the real world when pseudo-haptic feedback is used, and proposes applicable methods for improving the haptic experience as well as directions for future research.

19.
于薇, 王爱君, 张明. 《心理学报》(Acta Psychologica Sinica), 2017, (2): 164-173
The auditory dominance effect refers to the prioritized processing of auditory information during multisensory integration, such that audition dominates information from other sensory channels. Using the classic sound-induced flash illusion paradigm, two experiments manipulated the allocation of attentional resources and the difficulty of the task to examine how actively attending to the auditory stimuli affects the sound-induced flash illusion, and whether task difficulty modulates the illusion. The results showed that (1) the fission illusion, but not the fusion illusion, was affected by the degree to which attentional resources were allocated, and (2) task difficulty affected neither the fission illusion nor the fusion illusion. These findings indicate that divided attention can modulate the fission illusion within the auditory dominance effect, and that this dominance effect is independent of task difficulty.
