Similar literature
20 similar records found
1.
唐晓雨  佟佳庚  于宏  王爱君 《心理学报》2021,53(11):1173-1188
Using a combined endogenous-exogenous spatial cue-target paradigm, this study manipulated three independent variables: endogenous cue validity (valid, invalid), exogenous cue validity (valid, invalid), and target stimulus type (visual, auditory, audiovisual). Two experiments with different task difficulty (Experiment 1: a simple localization task; Experiment 2: a difficult discrimination task) examined how endogenous and exogenous spatial attention affect multisensory integration. In both experiments, exogenous spatial attention significantly weakened the multisensory integration effect, whereas endogenous spatial attention did not significantly enhance it. Experiment 2 additionally showed that endogenous spatial attention modulated the weakening of multisensory integration by exogenous spatial attention. These results indicate that, unlike endogenous spatial attention, the influence of exogenous spatial attention on multisensory integration is not readily modulated by task difficulty, and that under a difficult task endogenous spatial attention affects the process by which exogenous spatial attention weakens multisensory integration. This suggests that endogenous and exogenous spatial attention do not modulate multisensory integration independently but interact with each other.

2.
Multisensory integration is a process whereby information converges from different sensory modalities to produce a response that is different from that elicited by the individual modalities presented alone. A neural basis for multisensory integration has been identified within a variety of brain regions, but the most thoroughly examined model has been that of the superior colliculus (SC). Multisensory processing in the SC of anaesthetized animals has been shown to be dependent on the physical parameters of the individual stimuli presented (e.g., intensity, direction, velocity) as well as their spatial relationship. However, it is unknown whether these stimulus features are important, or evident, in the awake behaving animal. To address this question, we evaluated the influence of physical properties of sensory stimuli (visual intensity, direction, and velocity; auditory intensity and location) on sensory activity and multisensory integration of SC neurons in awake, behaving primates. Monkeys were trained to fixate a central visual fixation point while visual and/or auditory stimuli were presented in the periphery. Visual stimuli were always presented within the contralateral receptive field of the neuron whereas auditory stimuli were presented at either ipsi- or contralateral locations. Many of the SC neurons responsive to these sensory stimuli (n = 66/84; 76%) had stronger responses when the visual and auditory stimuli were combined at contralateral locations than when the auditory stimulus was located on the ipsilateral side. This trend was significant across the population of auditory-responsive neurons. In addition, some SC neurons (n = 31) were presented a battery of tests in which the quality of one stimulus of a pair was systematically manipulated. 
A small proportion of these neurons (n = 8/31; 26%) showed preferential responses to stimuli with specific physical properties, and these preferences were not significantly altered when multisensory stimulus combinations were presented. These data demonstrate that multisensory processing in the awake behaving primate is influenced by the spatial congruency of the stimuli as well as their individual physical properties.

3.
This study examined the multisensory integration of visual and auditory motion information using a methodology designed to single out perceptual integration processes from post-perceptual influences. We assessed the threshold stimulus onset asynchrony (SOA) at which the relative directions (same vs. different) of simultaneously presented visual and auditory apparent motion streams could no longer be discriminated (Experiment 1). This threshold was higher than the upper threshold for direction discrimination (left vs. right) of each individual modality when presented in isolation (Experiment 2). The poorer performance observed in bimodal displays was interpreted as a consequence of automatic multisensory integration of motion information. Experiment 3 supported this interpretation by ruling out task differences as the explanation for the higher threshold in Experiment 1. Together these data provide empirical support for the view that multisensory integration of motion signals can occur at a perceptual level.

4.
Previous research has demonstrated that threatening pictures, compared to neutral ones, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.

5.
Using an endogenous cue-target paradigm, we manipulated two independent variables, cue type (valid, invalid) and target modality (visual, auditory, audiovisual), across two experiments in which endogenous spatial cue validity was set to 50% and 80%, respectively, to examine how endogenous spatial attention affects audiovisual integration under different levels of cue validity. When cue validity was 50% (Experiment 1), audiovisual integration did not differ significantly between validly and invalidly cued locations; when cue validity was 80% (Experiment 2), audiovisual integration was significantly larger at validly cued locations than at invalidly cued locations. These results indicate that endogenous spatial attention influences audiovisual integration differently depending on cue validity: under high cue validity, endogenous spatial attention facilitates the audiovisual integration effect.

6.
In order to determine the spatial location of an object that is simultaneously seen and heard, the brain assigns higher weights to the sensory inputs that provide the most reliable information. For example, in the well-known ventriloquism effect, the perceived location of a sound is shifted toward the location of a concurrent but spatially misaligned visual stimulus. This perceptual illusion can be explained by the usually much higher spatial resolution of the visual system as compared to the auditory system. Recently, it has been demonstrated that this cross-modal binding process is not fully automatic, but can be modulated by emotional learning. Here we tested whether cross-modal binding is similarly affected by motivational factors, as exemplified by reward expectancy. Participants received a monetary reward for precise and accurate localization of brief auditory stimuli. Auditory stimuli were accompanied by task-irrelevant, spatially misaligned visual stimuli. Thus, the participants’ motivational goal of maximizing their reward was put in conflict with the spatial bias of auditory localization induced by the ventriloquist situation. Crucially, the amounts of expected reward differed between the two hemifields. As compared to the hemifield associated with a low reward, the ventriloquism effect was reduced in the high-reward hemifield. This finding suggests that reward expectations modulate cross-modal binding processes, possibly mediated via cognitive control mechanisms. The motivational significance of the stimulus material, thus, constitutes an important factor that needs to be considered in the study of top-down influences on multisensory integration.
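The reliability-weighting idea described in this abstract is commonly formalized as minimum-variance (inverse-variance-weighted) cue combination. The sketch below is illustrative only; the noise standard deviations and stimulus positions are invented values, not parameters from the study.

```python
def fuse(x_v, sigma_v, x_a, sigma_a):
    """Minimum-variance (maximum-likelihood) combination of a visual and
    an auditory location estimate: each cue is weighted by its inverse
    variance, so the more reliable cue dominates the fused percept."""
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    return w_v * x_v + (1 - w_v) * x_a

# Illustrative ventriloquism setup: sound at 0 deg, light at 10 deg.
# Visual localization noise (1 deg) is much smaller than auditory
# noise (5 deg), so the fused location lies close to the light.
perceived = fuse(10.0, 1.0, 0.0, 5.0)
```

With these assumed noise levels the visual weight is 25/26, which is why the perceived sound location is pulled almost all the way to the visual stimulus.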

7.
Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that message is presented in a noisy background. Speech is a particularly important example of multisensory integration because of its behavioural relevance to humans and also because brain regions have been identified that appear to be specifically tuned for auditory speech and lip gestures. Previous research has suggested that speech stimuli may have an advantage over other types of auditory stimuli in terms of audio-visual integration. Here, we used a modified adaptive psychophysical staircase approach to compare the influence of congruent visual stimuli (brief movie clips) on the detection of noise-masked auditory speech and non-speech stimuli. We found that congruent visual stimuli significantly improved detection of an auditory stimulus relative to incongruent visual stimuli. This effect, however, was equally apparent for speech and non-speech stimuli. The findings suggest that speech stimuli are not specifically advantaged by audio-visual integration for detection at threshold when compared with other naturalistic sounds.
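The general shape of an adaptive staircase procedure like the one mentioned above can be sketched as follows. This is a generic 1-up/2-down staircase run against a simulated logistic observer, not the authors' modified procedure; all function names, step sizes, and observer parameters are invented for illustration.

```python
import math
import random

def run_staircase(p_correct, start=0.8, step=0.05, n_reversals=8, rng=None):
    """Generic 1-up/2-down staircase (converges near the ~70.7%-correct
    point). `p_correct(level)` is a hypothetical observer model giving
    the probability of a correct response at a given stimulus level."""
    rng = rng or random.Random(0)
    level, direction = start, 0
    consecutive_correct, reversals = 0, []
    while len(reversals) < n_reversals:
        correct = rng.random() < p_correct(level)
        if correct:
            consecutive_correct += 1
            if consecutive_correct == 2:          # two correct -> step down
                consecutive_correct = 0
                if direction == +1:               # direction changed: reversal
                    reversals.append(level)
                direction = -1
                level = max(0.0, level - step)
        else:                                     # one error -> step up
            consecutive_correct = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level = min(1.0, level + step)
    return sum(reversals) / len(reversals)        # threshold estimate

def observer(level, threshold=0.5, slope=10.0):
    """Logistic psychometric function with illustrative parameters."""
    return 1.0 / (1.0 + math.exp(-slope * (level - threshold)))

est = run_staircase(observer)
```

Averaging the reversal levels gives the threshold estimate; real studies typically discard the first few reversals and counterbalance multiple interleaved staircases.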

8.
We constantly integrate the information that is available to our various senses. The extent to which the mechanisms of multisensory integration are subject to the influences of attention, emotion, and/or motivation is currently unknown. The "ventriloquist effect" is widely assumed to be an automatic crossmodal phenomenon, shifting the perceived location of an auditory stimulus toward a concurrently presented visual stimulus. In the present study, we examined whether audiovisual binding, as indicated by the magnitude of the ventriloquist effect, is influenced by threatening auditory stimuli presented prior to the ventriloquist experiment. Syllables spoken in a fearful voice were presented from one of eight loudspeakers, while syllables spoken in a neutral voice were presented from the other seven locations. Subsequently, participants had to localize pure tones while trying to ignore concurrent visual stimuli (both the auditory and the visual stimuli here were emotionally neutral). A reliable ventriloquist effect was observed. The emotional stimulus manipulation resulted in a reduction of the magnitude of the subsequently measured ventriloquist effect in both hemifields, as compared to a control group exposed to a similar attention-capturing, but nonemotional, manipulation. These results suggest that the emotional system is capable of influencing multisensory binding processes that have heretofore been considered automatic.

9.
Tracking an audio-visual target involves integrating spatial cues about target position from both modalities. Such sensory cue integration is a developmental process in the brain involving learning, with neuroplasticity as its underlying mechanism. We present a Hebbian learning-based adaptive neural circuit for multi-modal cue integration. The circuit temporally correlates stimulus cues within each modality via intramodal learning as well as symmetrically across modalities via crossmodal learning to independently update modality-specific neural weights on a sample-by-sample basis. It is realised as a robotic agent that must orient towards a moving audio-visual target. It continuously learns the best possible weights required for a weighted combination of auditory and visual spatial target directional cues that is directly mapped to robot wheel velocities to elicit an orientation response. Visual directional cues are noise-free and continuous but arising from a relatively narrow receptive field while auditory directional cues are noisy and intermittent but arising from a relatively wider receptive field. Comparative trials in simulation demonstrate that concurrent intramodal learning improves both the overall accuracy and precision of the orientation responses of symmetric crossmodal learning. We also demonstrate that symmetric crossmodal learning improves multisensory responses as compared to asymmetric crossmodal learning. The neural circuit also exhibits multisensory effects such as sub-additivity, additivity and super-additivity.  相似文献
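The flavour of sample-by-sample weight learning for cue combination can be illustrated with a much simpler stand-in than the paper's Hebbian circuit: assuming the agent's completed orientation response eventually supplies the true direction as feedback, each modality's noise variance can be tracked online and the cues weighted inversely to it. All parameters and noise levels below are invented; this is not the authors' circuit.

```python
import random

def learn_weights(n_samples=5000, alpha=0.02, seed=7):
    """Track each cue's noise variance online (exponential moving
    average) and weight cues by inverse variance. The precise visual
    cue ends up dominating the noisy auditory cue (simplified
    illustration, assuming direction feedback is available)."""
    rng = random.Random(seed)
    var_v, var_a = 1.0, 1.0                      # running variance estimates
    for _ in range(n_samples):
        target = rng.uniform(-1.0, 1.0)          # true target direction
        v = target + rng.gauss(0.0, 0.05)        # precise visual cue
        a = target + rng.gauss(0.0, 0.5)         # noisy auditory cue
        var_v += alpha * ((v - target) ** 2 - var_v)
        var_a += alpha * ((a - target) ** 2 - var_a)
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
    return w_v, 1.0 - w_v

w_v, w_a = learn_weights()
```

In the agent described above, a weighted combination like `w_v * v + w_a * a` would then be mapped to wheel velocities to drive the orientation response.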

10.
Audio-visual simultaneity judgments
The relative spatiotemporal correspondence between sensory events affects multisensory integration across a variety of species; integration is maximal when stimuli in different sensory modalities are presented from approximately the same position at about the same time. In the present study, we investigated the influence of spatial and temporal factors on audio-visual simultaneity perception in humans. Participants made unspeeded simultaneous versus successive discrimination responses to pairs of auditory and visual stimuli presented at varying stimulus onset asynchronies from either the same or different spatial positions using either the method of constant stimuli (Experiments 1 and 2) or psychophysical staircases (Experiment 3). The participants in all three experiments were more likely to report the stimuli as being simultaneous when they originated from the same spatial position than when they came from different positions, demonstrating that the apparent perception of multisensory simultaneity is dependent on the relative spatial position from which stimuli are presented.

11.
This event-related potential study investigated (i) to what extent incongruence between attention-directing cue and cued target modality affects attentional control processes that bias the system in advance to favor a particular stimulus modality and (ii) to what extent top-down attentional control mechanisms are generalized for the type of information that is to be attended. To this end, both visual and auditory word cues were used to instruct participants to direct attention to a specific visual (color) or auditory (pitch) stimulus feature of a forthcoming multisensory target stimulus. Effects of cue congruency were observed within 200 ms post-cue over frontal scalp regions and related to processes involved in shifting attention from the cue modality to the modality of the task-relevant target feature. Both directing visual attention and directing auditory attention were associated with dorsal posterior positivity, followed by sustained fronto-central negativity. However, this fronto-central negativity appeared to have an earlier onset and was more pronounced when the visual modality was cued. Together the present results suggest that the mechanisms involved in deploying attention are to some extent determined by the modality (visual, auditory) in which attention operates, and in addition, that some of these mechanisms can also be affected by cue congruency.

12.
Based on an exogenous cue-target paradigm, a within-subject design of 2 (cue-target stimulus onset asynchrony, SOA: 400-600 ms, 1000-1200 ms) × 3 (target stimulus type: visual, auditory, audiovisual) × 2 (cue validity: valid, invalid) was used. Participants performed a detection task on the target stimulus, to examine how visually induced inhibition of return (IOR) modulates audiovisual integration and thereby to test the perceptual sensitivity, spatial uncertainty, and cross-modal signal strength difference hypotheses. The results showed that (1) as SOA lengthened, the visual IOR effect decreased significantly while the audiovisual integration effect increased significantly; and (2) at the short SOA (400-600 ms), audiovisual integration was significantly smaller at validly cued locations than at invalidly cued locations, whereas at the long SOA (1000-1200 ms) there was no significant difference between the two. These results indicate that the modulation of audiovisual integration by visual IOR changes across SOA conditions, and the present findings support the cross-modal signal strength difference hypothesis.

13.
Poliakoff E, Miles E, Li X, Blanchette I. Cognition, 2007, 102(3): 405-414
Viewing a threatening stimulus can bias visual attention toward that location. Such effects have typically been investigated only in the visual modality, despite the fact that many threatening stimuli are most dangerous when close to or in contact with the body. Recent multisensory research indicates that a neutral visual stimulus, such as a light flash, can lead to a tactile attention shift towards a nearby body part. Here, we investigated whether the threat value of a visual stimulus modulates its effect on attention to touch. Participants made speeded discrimination responses about tactile stimuli presented to one or other hand, preceded by a picture cue (snake, spider, flower or mushroom) presented close to the same or the opposite hand. Pictures of snakes led to a significantly greater tactile attentional facilitation effect than did non-threatening pictures of flowers and mushrooms. Furthermore, there was a correlation between self-reported fear of snakes and spiders and the magnitude of early facilitation following cues of that type. These findings demonstrate that the attentional bias towards threat extends to the tactile modality and indicate that perceived threat value can modulate the cross-modal effect that a visual cue has on attention to touch.

14.
To examine how selective attention affects pedestrians' multisensory processing advantage when integrating traffic signals, this study used a cue-target paradigm and manipulated two variables: traffic signal type (visual/conventional signal light, auditory, audiovisual/audible signal light) and signal location (cued, uncued). Reaction times and accuracy for visual, auditory, and audiovisual stimuli were analyzed, and the multisensory processing advantage was quantified via the relative multisensory response enhancement (rMRE) and the race model. The results showed that audible signal lights produced a multisensory processing advantage, and that selective attention weakened this advantage. These findings provide a theoretical basis for deploying audible signal lights and suggest that installations likely to capture pedestrians' selective attention should be avoided in the environment.
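The two quantities named in this abstract are both standard computations. The sketch below checks reaction-time data against Miller's race-model inequality, F_av(t) ≤ F_v(t) + F_a(t), and computes rMRE as the relative speedup of the audiovisual mean RT over the faster unimodal mean RT. This is a generic illustration with toy data, not the study's analysis pipeline, and the exact rMRE formula used by the authors may differ.

```python
def ecdf(rts, t):
    """Empirical CDF of a reaction-time sample at time t (ms)."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violations(rt_v, rt_a, rt_av, times):
    """Time points where the audiovisual CDF exceeds the race-model
    bound min(1, F_v + F_a); violations indicate integration beyond
    mere statistical facilitation."""
    return [t for t in times
            if ecdf(rt_av, t) > min(1.0, ecdf(rt_v, t) + ecdf(rt_a, t))]

def rmre(rt_v, rt_a, rt_av):
    """Relative multisensory response enhancement: speedup of the
    audiovisual mean RT relative to the faster unimodal mean RT."""
    fastest = min(sum(rt_v) / len(rt_v), sum(rt_a) / len(rt_a))
    mean_av = sum(rt_av) / len(rt_av)
    return (fastest - mean_av) / fastest

# Toy RT samples (ms), invented for illustration
rt_v = [320, 340, 360, 380, 400]
rt_a = [360, 380, 400, 420, 440]
rt_av = [290, 300, 310, 330, 350]
enhancement = rmre(rt_v, rt_a, rt_av)
violations = race_model_violations(rt_v, rt_a, rt_av, range(280, 461, 10))
```

With these toy samples the audiovisual responses are faster than either unimodal condition and the early part of the audiovisual CDF exceeds the race-model bound.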

15.
Presenting an auditory or tactile cue in temporal synchrony with a change in the color of a visual target can facilitate participants’ visual search performance. In the present study, we compared the magnitude of unimodal auditory, vibrotactile, and bimodal (i.e., multisensory) cuing benefits when the nonvisual cues were presented in temporal synchrony with the changing of the target’s color (Experiments 1 and 2). The target (a horizontal or vertical line segment) was presented among a number of distractors (tilted line segments) that also changed color at various times. In Experiments 3 and 4, the cues were also made spatially informative with regard to the location of the visual target. The unimodal and bimodal cues gave rise to an equivalent (significant) facilitation of participants’ visual search performance relative to a no-cue baseline condition. Making the unimodal auditory and vibrotactile cues spatially informative produced further performance improvements (on validly cued trials), as compared with cues that were spatially uninformative or otherwise spatially invalid. A final experiment was conducted in order to determine whether cue location (close to versus far from the visual display) would influence participants’ visual search performance. Auditory cues presented close to the visual search display were found to produce significantly better performance than cues presented over headphones. Taken together, these results have implications for the design of nonvisual and multisensory warning signals used in complex visual displays.

16.
In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially-nonpredictive visual, auditory, or audiovisual cues in capturing participants' spatial attention towards a location where to-be-remembered visual stimuli were or were not presented (cued/uncued trials, respectively). The results suggest that the effect of peripheral visual cues in biasing the access of information into VSWM depend on the size of the attentional focus, while auditory cues did not have direct effects in biasing VSWM. Finally, spatially congruent multisensory cues showed an enlarged attentional effect in VSWM as compared to unimodal visual cues, as a likely consequence of multisensory integration. This latter result sheds new light on the interplay between spatial attention and VSWM, pointing to the special role exerted by multisensory (audiovisual) cues.

17.
唐晓雨  孙佳影  彭姓 《心理学报》2020,52(3):257-268
Based on a cue-target paradigm, this study manipulated target stimulus type (visual, auditory, audiovisual) and cue validity (valid cue, neutral condition, invalid cue) across three experiments to examine how crossmodal divided attention affects audiovisual inhibition of return (IOR). Experiment 1 (auditory stimuli presented on the left/right) found that under crossmodal divided attention, visual targets produced a significant IOR effect whereas audiovisual targets did not. Experiment 2 (auditory stimuli presented on the left/right) and Experiment 3 (auditory stimuli presented at the center) found that under visual selective attention, both visual and audiovisual targets produced significant IOR effects that did not differ from each other. The results indicate that crossmodal divided attention weakens the audiovisual IOR effect.

18.
Correctly integrating sensory information across different modalities is a vital task, yet there are illusions which cause the incorrect localization of multisensory stimuli. A common example of these phenomena is the "ventriloquism effect". In this illusion, the localization of auditory signals is biased by the presence of visual stimuli. For instance, when a light and sound are simultaneously presented, observers may erroneously locate the sound closer to the light than its actual position. While this phenomenon has been studied extensively in azimuth at a single depth, little is known about the interactions of stimuli at different depth planes. In the current experiment, virtual acoustics and stereo-image displays were used to test the integration of visual and auditory signals across azimuth and depth. The results suggest that greater variability in the localization of sounds in depth may lead to a greater bias from visual stimuli in depth than in azimuth. These results offer interesting implications for understanding multisensory integration.

19.
The tendency for observers to overestimate slant is not simply a visual illusion but can also occur with another sense, such as proprioception, as in the case of overestimation of self-body tilt. In the present study, distortion in the perception of body tilt was examined as a function of gender and multisensory spatial information. We used a full-body-tilt apparatus to test when participants experienced being tilted by 45 degrees, with visual and auditory cues present or absent. Body tilt was overestimated in all conditions, with the largest bias occurring when there were no visual or auditory cues. Both visual and auditory information independently improved performance. We also found a gender difference, with women exhibiting more bias in the absence of auditory information and more improvement when auditory information was added. The findings support the view that perception of body tilt is multisensory and that women more strongly utilize auditory information in such multisensory spatial judgments.

20.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modality either with a spatial-location or spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus set (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号