Similar articles
20 similar articles found (search time: 328 ms)
1.
唐晓雨  佟佳庚  于宏  王爱君 《心理学报》2021,53(11):1173-1188
This study employed an endogenous-exogenous spatial cue-target paradigm and manipulated three independent variables: endogenous cue validity (valid vs. invalid), exogenous cue validity (valid vs. invalid), and target stimulus type (visual, auditory, audiovisual). Two experiments of differing task difficulty (Experiment 1: simple localization task; Experiment 2: complex discrimination task) examined how endogenous and exogenous spatial attention affect multisensory integration. Both experiments found that exogenous spatial attention significantly weakened the multisensory integration effect, whereas endogenous spatial attention did not significantly enhance it. Experiment 2 further showed that endogenous spatial attention modulated the weakening of multisensory integration by exogenous spatial attention. The results indicate that, unlike endogenous spatial attention, the influence of exogenous spatial attention on multisensory integration is not readily modulated by task difficulty, and that when the task is difficult, endogenous spatial attention affects the process by which exogenous spatial attention weakens multisensory integration. This suggests that endogenous and exogenous spatial attention do not modulate multisensory integration independently of each other, but interactively.

2.
Previous research has demonstrated that threatening, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.

3.
郑晓丹  岳珍珠 《心理科学》2022,45(6):1329-1336
Using real-world objects, we examined the influence of crossmodal semantic relatedness on visual attention and the time course of crossmodal facilitation. Combining a priming paradigm with a dot-probe paradigm, Experiment 1 found that 600 ms after an auditory prime, participants responded faster to semantically highly related visual stimuli than to weakly related ones, whereas no priming effect was found with visual primes. Experiment 2 found that the crossmodal priming effect disappeared once the prime had been presented for 900 ms. Our findings demonstrate that audiovisual semantic relatedness based on prior experience can facilitate visual selective attention.

4.
Multisensory integration is a process whereby information converges from different sensory modalities to produce a response that is different from that elicited by the individual modalities presented alone. A neural basis for multisensory integration has been identified within a variety of brain regions, but the most thoroughly examined model has been that of the superior colliculus (SC). Multisensory processing in the SC of anaesthetized animals has been shown to be dependent on the physical parameters of the individual stimuli presented (e.g., intensity, direction, velocity) as well as their spatial relationship. However, it is unknown whether these stimulus features are important, or evident, in the awake behaving animal. To address this question, we evaluated the influence of physical properties of sensory stimuli (visual intensity, direction, and velocity; auditory intensity and location) on sensory activity and multisensory integration of SC neurons in awake, behaving primates. Monkeys were trained to fixate a central visual fixation point while visual and/or auditory stimuli were presented in the periphery. Visual stimuli were always presented within the contralateral receptive field of the neuron whereas auditory stimuli were presented at either ipsi- or contralateral locations. Many of the SC neurons responsive to these sensory stimuli (n = 66/84; 76%) had stronger responses when the visual and auditory stimuli were combined at contralateral locations than when the auditory stimulus was located on the ipsilateral side. This trend was significant across the population of auditory-responsive neurons. In addition, some SC neurons (n = 31) were presented a battery of tests in which the quality of one stimulus of a pair was systematically manipulated. 
A small proportion of these neurons (n = 8/31; 26%) showed preferential responses to stimuli with specific physical properties, and these preferences were not significantly altered when multisensory stimulus combinations were presented. These data demonstrate that multisensory processing in the awake behaving primate is influenced by the spatial congruency of the stimuli as well as their individual physical properties.

5.
Audio-visual simultaneity judgments
The relative spatiotemporal correspondence between sensory events affects multisensory integration across a variety of species; integration is maximal when stimuli in different sensory modalities are presented from approximately the same position at about the same time. In the present study, we investigated the influence of spatial and temporal factors on audio-visual simultaneity perception in humans. Participants made unspeeded simultaneous versus successive discrimination responses to pairs of auditory and visual stimuli presented at varying stimulus onset asynchronies from either the same or different spatial positions using either the method of constant stimuli (Experiments 1 and 2) or psychophysical staircases (Experiment 3). The participants in all three experiments were more likely to report the stimuli as being simultaneous when they originated from the same spatial position than when they came from different positions, demonstrating that the apparent perception of multisensory simultaneity is dependent on the relative spatial position from which stimuli are presented.
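Data from a method-of-constant-stimuli simultaneity task like the one described above are typically summarized by fitting a psychometric function to the proportion of "simultaneous" responses at each SOA. A minimal sketch, assuming a Gaussian model and illustrative numbers (neither is taken from this study):

```python
# Sketch: estimating the point of subjective simultaneity (PSS) from
# method-of-constant-stimuli data. The Gaussian model and the SOAs and
# proportions below are illustrative assumptions, not the study's values.
import numpy as np
from scipy.optimize import curve_fit

def p_simultaneous(soa, amplitude, pss, width):
    """Gaussian model of the probability of a 'simultaneous' response."""
    return amplitude * np.exp(-(soa - pss) ** 2 / (2 * width ** 2))

# Hypothetical SOAs (ms; negative = auditory first) and response proportions
# generated from the model itself, so the fit should recover the parameters.
soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
props = p_simultaneous(soas, amplitude=0.95, pss=40.0, width=120.0)

(amp_hat, pss_hat, width_hat), _ = curve_fit(
    p_simultaneous, soas, props, p0=(1.0, 0.0, 100.0))
print(round(pss_hat, 1))  # recovered PSS, close to the generating 40 ms
```

The fitted `pss` gives the SOA at which simultaneity is most likely, and `width` indexes the temporal binding window; a spatial manipulation like the one above would be analyzed by comparing these parameters across same- versus different-position conditions.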

6.
Several studies have shown that the direction in which a visual apparent motion stream moves can influence the perceived direction of an auditory apparent motion stream (an effect known as crossmodal dynamic capture). However, little is known about the role that intramodal perceptual grouping processes play in the multisensory integration of motion information. The present study was designed to investigate the time course of any modulation of the cross-modal dynamic capture effect by the nature of the perceptual grouping taking place within vision. Participants were required to judge the direction of an auditory apparent motion stream while trying to ignore visual apparent motion streams presented in a variety of different configurations. Our results demonstrate that the cross-modal dynamic capture effect was influenced more by visual perceptual grouping when the conditions for intramodal perceptual grouping were set up prior to the presentation of the audiovisual apparent motion stimuli. However, no such modulation occurred when the visual perceptual grouping manipulation was established at the same time as or after the presentation of the audiovisual stimuli. These results highlight the importance of the unimodal perceptual organization of sensory information to the manifestation of multisensory integration.

7.
Despite previous failures to identify visual-upon-auditory spatial-cuing effects, recent studies have demonstrated that the abrupt onset of a lateralized visual stimulus triggers a shift of spatial attention in response to auditory judgment. Nevertheless, whether a centrally presented visual stimulus orients auditory attention remained unclear. The present study investigated whether centrally presented gaze cues trigger a reflexive shift of attention in response to auditory judgment. Participants fixated on a schematic face in which the eyes looked left or right (the cue). A target sound was then presented to the left or right of the cue. Participants judged the direction of the target as quickly as possible. Even though participants were told that the gaze direction did not predict the direction of the target, the response time was significantly faster when the gaze was in the target direction than when it was in the non-target direction. These findings provide initial evidence for visual-upon-auditory spatial-cuing effects produced by centrally presented cues, suggesting that a reflexive crossmodal shift of attention does occur with a centrally presented visual stimulus.

8.
The aim of the present study was to investigate exogenous crossmodal orienting of attention in three-dimensional (3-D) space. Most studies in which the orienting of attention has been examined in 3-D space concerned either exogenous intramodal or endogenous crossmodal attention. Evidence for exogenous crossmodal orienting of attention in depth is lacking. Endogenous and exogenous attention are behaviorally different, suggesting that they are two different mechanisms. We used the orthogonal spatial-cueing paradigm and presented auditory exogenous cues at one of four possible locations in near or far space before the onset of a visual target. Cues could be presented at the same (valid) or at a different (invalid) depth from the target (radial validity), and on the same (valid) or on a different (invalid) side (horizontal validity), whereas we blocked the depth at which visual targets were presented. Next to an overall validity effect (valid RTs < invalid RTs) in horizontal space, we observed an interaction between the horizontal and radial validity of the cue: The horizontal validity effect was present only when the cue and the target were presented at the same depth. No horizontal validity effect was observed when the cue and the target were presented at different depths. These results suggest that exogenous crossmodal attention is "depth-aware," and they are discussed in the context of the supramodal hypothesis of attention.

9.
Everyday experience involves the continuous integration of information from multiple sensory inputs. Such crossmodal interactions are advantageous since the combined action of different sensory cues can provide information unavailable from their individual operation, reducing perceptual ambiguity and enhancing responsiveness. The behavioural consequences of such multimodal processes and their putative neural mechanisms have been investigated extensively with respect to orienting behaviour and, to a lesser extent, the crossmodal coordination of spatial attention. These operations are concerned mainly with the determination of stimulus location. However, information from different sensory streams can also be combined to assist stimulus identification. Psychophysical and physiological data indicate that these two crossmodal processes are subject to different temporal and spatial constraints both at the behavioural and neuronal level and involve the participation of distinct neural substrates. Here we review the evidence for such a dissociation and discuss recent neurophysiological, neuroanatomical and neuroimaging findings that shed light on the mechanisms underlying crossmodal identification, with specific reference to audio-visual speech perception.

10.
Using an endogenous cue-target paradigm, we manipulated two independent variables, cue type (valid vs. invalid) and target modality (visual, auditory, audiovisual), and set endogenous spatial cue validity at 50% and 80% in two experiments to examine how endogenous spatial attention affects audiovisual integration under different cue validities. The results showed that with 50% cue validity (Experiment 1), the audiovisual integration effect did not differ significantly between validly and invalidly cued locations; with 80% cue validity (Experiment 2), the audiovisual integration effect was significantly larger at validly cued locations than at invalidly cued locations. These results indicate that endogenous spatial attention influences audiovisual integration differently depending on cue validity, and that under high cue validity endogenous spatial attention can facilitate the audiovisual integration effect.

11.
赵晨  张侃  杨华海 《心理学报》2001,34(3):28-33
This study used the experimental paradigm of the spatial-cueing technique to examine the relationship between endogenous and exogenous selective attention across the visual and auditory modalities. The results showed that: (1) an auditory central cue can guide endogenous spatial selective attention at longer SOAs (at least 500 ms), while an abruptly onsetting peripheral cue also automatically attracts part of the attentional resources; (2) auditory and visual selective attention are dissociable processing channels, but the two are interconnected.

12.
The "pip-and-pop effect" refers to the facilitation of search for a visual target (a horizontal or vertical bar whose color changes frequently) among multiple visual distractors (tilted bars also changing color unpredictably) by the presentation of a spatially uninformative auditory cue synchronized with the color change of the visual target. In the present study, the visual stimuli in the search display changed brightness instead of color, and the crossmodal congruency between the pitch of the auditory cue and the brightness of the visual target was manipulated. When cue presence and cue congruency were randomly varied between trials (Experiment 1), both congruent cues (low-frequency tones synchronized with dark target states or high-frequency tones synchronized with bright target states) and incongruent cues (the reversed mapping) facilitated visual search performance equally, relative to a no-cue baseline condition. However, when cue congruency was blocked and the participants were informed about the pitch-brightness mapping in the cue-present blocks (Experiment 2), performance was significantly enhanced when the cue and target were crossmodally congruent as compared to when they were incongruent. These results therefore suggest that the crossmodal congruency between auditory pitch and visual brightness can influence performance in the pip-and-pop task by means of top-down facilitation.

13.
Multisensory information has been shown to modulate attention in infants and facilitate learning in adults, by enhancing the amodal properties of a stimulus. However, it remains unclear whether this translates to learning in a multisensory environment across middle childhood, and particularly in the case of incidental learning. One hundred and eighty-one children aged between 6 and 10 years participated in this study using a novel Multisensory Attention Learning Task (MALT). Participants were asked to respond to the presence of a target stimulus whilst ignoring distractors. Correct target selection resulted in the movement of the target exemplar to either the upper left or right screen quadrant, according to category membership. Category membership was defined either by visual-only, auditory-only or multisensory information. As early as 6 years of age, children demonstrated greater performance on the incidental categorization task following exposure to multisensory audiovisual cues compared to unisensory information. These findings provide important insight into the use of multisensory information in learning, and particularly on incidental category learning. Implications for the deployment of multisensory learning tasks within education across development will be discussed.

14.
In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially-nonpredictive visual, auditory, or audiovisual cues in capturing participants' spatial attention towards a location where to-be-remembered visual stimuli were or were not presented (cued/uncued trials, respectively). The results suggest that the effect of peripheral visual cues in biasing the access of information into VSWM depends on the size of the attentional focus, while auditory cues did not have direct effects in biasing VSWM. Finally, spatially congruent multisensory cues showed an enlarged attentional effect in VSWM as compared to unimodal visual cues, as a likely consequence of multisensory integration. This latter result sheds new light on the interplay between spatial attention and VSWM, pointing to the special role exerted by multisensory (audiovisual) cues.

15.
In many everyday situations, our senses are bombarded by many different unisensory signals at any given time. To gain the most veridical, and least variable, estimate of environmental stimuli/properties, we need to combine the individual noisy unisensory perceptual estimates that refer to the same object, while keeping those estimates belonging to different objects or events separate. How, though, does the brain "know" which stimuli to combine? Traditionally, researchers interested in the crossmodal binding problem have focused on the roles that spatial and temporal factors play in modulating multisensory integration. However, crossmodal correspondences between various unisensory features (such as between auditory pitch and visual size) may provide yet another important means of constraining the crossmodal binding problem. A large body of research now shows that people exhibit consistent crossmodal correspondences between many stimulus features in different sensory modalities. For example, people consistently match high-pitched sounds with small, bright objects that are located high up in space. The literature reviewed here supports the view that crossmodal correspondences need to be considered alongside semantic and spatiotemporal congruency, among the key constraints that help our brains solve the crossmodal binding problem.

16.
Theoretical models of multisensory cue integration
In everyday life, the human brain combines cues from different sensory modalities to perceive objects and events in the external world. These sensory channels use different reference frames to represent object features and locations, and the reliability of each cue is not constant but varies with the environment. Nevertheless, the brain still integrates these cues effectively and perceives objects and events correctly. Building on previous research, this paper reviews several theoretical models of multisensory cue integration and the evidence bearing on them, with an emphasis on the Bayesian statistically optimal model that has attracted wide attention in recent years. Future research should combine virtual reality and neuroimaging techniques to investigate multisensory cue integration further.
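The Bayesian statistically optimal model highlighted in this review reduces, for two independent Gaussian cues, to a reliability-weighted average: each cue is weighted by its inverse variance, and the combined estimate has lower variance than either cue alone. A minimal sketch with illustrative numbers (not taken from the review):

```python
# Sketch of maximum-likelihood (Bayesian optimal) combination of two cues.
# The variances and estimates below are illustrative assumptions.
def optimal_combination(est_a, var_a, est_b, var_b):
    """Inverse-variance (reliability) weighted average of two cue estimates."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)  # weight of the more reliable cue
    w_b = 1 - w_a
    combined_est = w_a * est_a + w_b * est_b
    combined_var = 1 / (1 / var_a + 1 / var_b)   # always < min(var_a, var_b)
    return combined_est, combined_var

# A reliable visual cue (variance 1) and a noisier auditory cue (variance 4):
est, var = optimal_combination(est_a=0.0, var_a=1.0, est_b=4.0, var_b=4.0)
print(round(est, 3), round(var, 3))  # 0.8 0.8
```

The combined estimate is pulled toward the more reliable (visual) cue, and the combined variance (0.8) is below that of either cue, which is the signature prediction this class of models is tested against.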

17.
Multisensory-mediated auditory localization
Multisensory integration is a powerful mechanism for maximizing sensitivity to sensory events. We examined its effects on auditory localization in healthy human subjects. The specific objective was to test whether the relative intensity and location of a seemingly irrelevant visual stimulus would influence auditory localization in accordance with the inverse effectiveness and spatial rules of multisensory integration that have been developed from neurophysiological studies with animals [Stein and Meredith, 1993 The Merging of the Senses (Cambridge, MA: MIT Press)]. Subjects were asked to localize a sound in one condition in which a neutral visual stimulus was either above threshold (supra-threshold) or at threshold. In both cases the spatial disparity of the visual and auditory stimuli was systematically varied. The results reveal that stimulus salience is a critical factor in determining the effect of a neutral visual cue on auditory localization. Visual bias and, hence, perceptual translocation of the auditory stimulus appeared when the visual stimulus was supra-threshold, regardless of its location. However, this was not the case when the visual stimulus was at threshold. In this case, the influence of the visual cue was apparent only when the two cues were spatially coincident and resulted in an enhancement of stimulus localization. These data suggest that the brain uses multiple strategies to integrate multisensory information.

18.
To examine how selective attention affects pedestrians' multisensory processing advantage in integrating traffic signals, this study used a cue-target paradigm and manipulated two variables: traffic signal type (visual/conventional visual signal light, auditory, audiovisual/auditory-enhanced signal light) and signal location (cued vs. uncued location). Response times and accuracy for the visual, auditory, and audiovisual stimuli were analyzed, and the multisensory processing advantage was quantified by computing the relative multisensory response enhancement (rMRE) and testing against the race model. The results showed that auditory-enhanced signal lights produced a multisensory processing advantage, and that selective attention weakened this advantage. These findings provide a theoretical basis for deploying auditory-enhanced signal lights and suggest that installations likely to capture pedestrians' selective attention should be avoided in the environment.
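The two quantities named in this abstract are standard in the multisensory RT literature: rMRE expresses the audiovisual speed-up relative to the faster unisensory condition, and the race-model (Miller-style) inequality asks whether the audiovisual RT distribution beats the bound set by probability summation of the unisensory distributions. A hedged sketch with illustrative values (not the study's data):

```python
# Sketch of rMRE and a pointwise race-model inequality check.
# All RTs below are illustrative assumptions, not values from the study.
import numpy as np

def rmre(rt_av, rt_v, rt_a):
    """Percent RT gain of the audiovisual condition over the best unisensory one."""
    best_uni = min(rt_v, rt_a)
    return 100 * (best_uni - rt_av) / best_uni

def race_model_violation(rts_av, rts_v, rts_a, t):
    """True if P(RT_AV <= t) exceeds the race-model bound P(V <= t) + P(A <= t)."""
    cdf = lambda rts: np.mean(np.asarray(rts) <= t)
    return cdf(rts_av) > min(1.0, cdf(rts_v) + cdf(rts_a))

# Mean RTs of 380 ms (AV), 420 ms (V), 450 ms (A) give a ~9.5% enhancement:
print(round(rmre(rt_av=380.0, rt_v=420.0, rt_a=450.0), 1))  # 9.5
```

In practice the inequality is evaluated across a grid of `t` values (e.g., RT quantiles); a violation at any early quantile is taken as evidence for genuine integration rather than statistical facilitation, which is how a multisensory processing advantage like the one reported here is usually certified.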

19.
Presenting an auditory or tactile cue in temporal synchrony with a change in the color of a visual target can facilitate participants' visual search performance. In the present study, we compared the magnitude of unimodal auditory, vibrotactile, and bimodal (i.e., multisensory) cuing benefits when the nonvisual cues were presented in temporal synchrony with the changing of the target's color (Experiments 1 and 2). The target (a horizontal or vertical line segment) was presented among a number of distractors (tilted line segments) that also changed color at various times. In Experiments 3 and 4, the cues were also made spatially informative with regard to the location of the visual target. The unimodal and bimodal cues gave rise to an equivalent (significant) facilitation of participants' visual search performance relative to a no-cue baseline condition. Making the unimodal auditory and vibrotactile cues spatially informative produced further performance improvements (on validly cued trials), as compared with cues that were spatially uninformative or otherwise spatially invalid. A final experiment was conducted in order to determine whether cue location (close to versus far from the visual display) would influence participants' visual search performance. Auditory cues presented close to the visual search display were found to produce significantly better performance than cues presented over headphones. Taken together, these results have implications for the design of nonvisual and multisensory warning signals used in complex visual displays.

20.
Based on an exogenous cue-target paradigm, a 2 (cue-target stimulus onset asynchrony, SOA: 400-600 ms vs. 1000-1200 ms) × 3 (target stimulus type: visual, auditory, audiovisual) × 2 (cue validity: valid vs. invalid) within-subjects design was used. Participants performed a detection task on the target stimuli, in order to examine how inhibition of return (IOR) induced by visual cues modulates audiovisual integration, thereby providing evidence bearing on the perceptual-sensitivity, spatial-uncertainty, and between-modality signal-strength-difference hypotheses. The results showed that: (1) as SOA increased, the visual IOR effect decreased significantly while the audiovisual integration effect increased significantly; (2) at the short SOA (400-600 ms), the audiovisual integration effect at the validly cued location was significantly smaller than at the invalidly cued location, whereas at the long SOA (1000-1200 ms) the audiovisual integration effects at validly and invalidly cued locations did not differ significantly. These results indicate that the modulation of audiovisual integration by visual IOR changes across SOA conditions, supporting the between-modality signal-strength-difference hypothesis.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号