Similar Articles
20 similar articles found (search time: 46 ms)
1.
Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age. Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.

2.
The "pip-and-pop effect" refers to the facilitation of search for a visual target (a horizontal or vertical bar whose color changes frequently) among multiple visual distractors (tilted bars also changing color unpredictably) by the presentation of a spatially uninformative auditory cue synchronized with the color change of the visual target. In the present study, the visual stimuli in the search display changed brightness instead of color, and the crossmodal congruency between the pitch of the auditory cue and the brightness of the visual target was manipulated. When cue presence and cue congruency were randomly varied between trials (Experiment 1), both congruent cues (low-frequency tones synchronized with dark target states or high-frequency tones synchronized with bright target states) and incongruent cues (the reversed mapping) facilitated visual search performance equally, relative to a no-cue baseline condition. However, when cue congruency was blocked and the participants were informed about the pitch–brightness mapping in the cue-present blocks (Experiment 2), performance was significantly enhanced when the cue and target were crossmodally congruent as compared to when they were incongruent. These results therefore suggest that the crossmodal congruency between auditory pitch and visual brightness can influence performance in the pip-and-pop task by means of top-down facilitation.

3.
We assessed the influence of multisensory interactions on the exogenous orienting of spatial attention by comparing the ability of auditory, tactile, and audiotactile exogenous cues to capture visuospatial attention under conditions of no perceptual load versus high perceptual load. In Experiment 1, participants discriminated the elevation of visual targets preceded by either unimodal or bimodal cues under conditions of either a high perceptual load (involving the monitoring of a rapidly presented central stream of visual letters for occasionally presented target digits) or no perceptual load (when the central stream was replaced by a fixation point). All of the cues captured spatial attention in the no-load condition, whereas only the bimodal cues captured visuospatial attention in the high-load condition. In Experiment 2, we ruled out the possibility that the presentation of any changing stimulus at fixation (i.e., a passively monitored stream of letters) would eliminate exogenous orienting, which instead appears to be a consequence of high perceptual load conditions (Experiment 1). These results demonstrate that multisensory cues capture spatial attention more effectively than unimodal cues under conditions of concurrent perceptual load.

4.
In the present study, participants identified the location of a visual target presented in a rapidly masked, changing sequence of visual distractors. In Experiment 1, we examined performance when a high tone, embedded in a sequence of low tones, was presented in synchrony with the visual target and observed that the high tone improved visual target identification, relative to a condition in which a low tone was synchronized with the visual target, thus replicating Vroomen and de Gelder's (2000, Experiment 1) findings. In subsequent experiments, we presented a single visual, auditory, vibrotactile, or combined audiotactile cue with the visual target and found similar improvements in participants' performance regardless of cue type. These results suggest that crossmodal perceptual organization may account for only a part of the improvement in participants' visual target identification performance reported in Vroomen and de Gelder's original study. Moreover, in contrast with many previous crossmodal cuing studies, our results also suggest that visual cues can enhance visual target identification performance. Alternative accounts for these results are discussed in terms of enhanced saliency, the presence of a temporal marker, and attentional capture by oddball stimuli as potential explanations for the observed performance benefits.

5.
Saccadic reaction time (SRT) to visual targets tends to be shorter when nonvisual stimuli are presented in close temporal or spatial proximity, even when subjects are instructed to ignore the accessory input. Here, we investigate visual-tactile interaction effects on SRT under varying spatial configurations. SRT to bimodal stimuli was reduced by up to 30 msec, in comparison with responses to unimodal visual targets. In contrast to previous findings, the amount of multisensory facilitation did not decrease with increases in the physical distance between the target and the nontarget but depended on (1) whether the target and the nontarget were presented in the same hemifield (ipsilateral) or in different hemifields (contralateral), (2) the eccentricity of the stimuli, and (3) the frequency of the vibrotactile nontarget. The time-window-of-integration (TWIN) model for SRT (Colonius & Diederich, 2004) is shown to yield an explicit characterization of the observed multisensory spatial interaction effects through the removal of the peripheral-processing effects of stimulus location and tactile frequency.

6.
Modeling spatial effects in visual-tactile saccadic reaction time
Saccadic reaction time (SRT) to visual targets tends to be shorter when nonvisual stimuli are presented in close temporal or spatial proximity, even when subjects are instructed to ignore the accessory input. Here, we investigate visual-tactile interaction effects on SRT under varying spatial configurations. SRT to bimodal stimuli was reduced by up to 30 msec, in comparison with responses to unimodal visual targets. In contrast to previous findings, the amount of multisensory facilitation did not decrease with increases in the physical distance between the target and the nontarget but depended on (1) whether the target and the nontarget were presented in the same hemifield (ipsilateral) or in different hemifields (contralateral), (2) the eccentricity of the stimuli, and (3) the frequency of the vibrotactile nontarget. The time-window-of-integration (TWIN) model for SRT (Colonius & Diederich, 2004) is shown to yield an explicit characterization of the observed multisensory spatial interaction effects through the removal of the peripheral-processing effects of stimulus location and tactile frequency.

7.
Multisensory cues capture spatial attention regardless of perceptual load
We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in which they had to monitor a rapidly presented central stream of visual letters for occasionally presented target digits) or no perceptual load (in which the central stream was replaced by a fixation point). The results of 3 experiments showed that all 3 cues captured visuo-spatial attention in the no-load condition. By contrast, only the bimodal cues captured visuo-spatial attention in the high-load condition, indicating for the first time that multisensory integration can play a key role in disengaging spatial attention from a concurrent perceptually demanding stimulus.

8.
Three experiments investigated cross-modal links between touch, audition, and vision in the control of covert exogenous orienting. In the first two experiments, participants made speeded discrimination responses (continuous vs. pulsed) for tactile targets presented randomly to the index finger of either hand. Targets were preceded at a variable stimulus onset asynchrony (150, 200, or 300 msec) by a spatially uninformative cue that was either auditory (Experiment 1) or visual (Experiment 2) on the same or opposite side as the tactile target. Tactile discriminations were more rapid and accurate when cue and target occurred on the same side, revealing cross-modal covert orienting. In Experiment 3, spatially uninformative tactile cues were presented prior to randomly intermingled auditory and visual targets requiring an elevation discrimination response (up vs. down). Responses were significantly faster for targets in both modalities when presented ipsilateral to the tactile cue. These findings demonstrate that the peripheral presentation of spatially uninformative auditory and visual cues produces cross-modal orienting that affects touch, and that tactile cues can also produce cross-modal covert orienting that affects audition and vision.

9.
In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially-nonpredictive visual, auditory, or audiovisual cues in capturing participants' spatial attention towards a location where to-be-remembered visual stimuli were or were not presented (cued/uncued trials, respectively). The results suggest that the effect of peripheral visual cues in biasing the access of information into VSWM depends on the size of the attentional focus, while auditory cues did not have direct effects in biasing VSWM. Finally, spatially congruent multisensory cues showed an enlarged attentional effect in VSWM as compared to unimodal visual cues, as a likely consequence of multisensory integration. This latter result sheds new light on the interplay between spatial attention and VSWM, pointing to the special role exerted by multisensory (audiovisual) cues.

10.
A driving simulator was used to examine the effects on driving performance of auditory cues in an in-vehicle information search task. Drivers' distraction by the search tasks was measured on a peripheral detection task. The difficulty of the search task was systematically varied to test the distraction caused by a quantified visual load. 58 participants completed the task. Performance on both search tasks and peripheral detection tasks was measured by mean response time and percent error. Analyses indicated that in-vehicle information search performance can be severely degraded when a target is located within a group of diverse distractors. Inclusion of an auditory cue in the visual search increased the mean response time as a result of a change in modality from auditory to visual. Inclusion of such an auditory cue seemed to influence distraction as measured by performance on the peripheral detection task; accuracy was lower when auditory cues were provided, and responses were slower when no auditory cues were provided. Distraction by the auditory cue varied according to the difficulty of the search task.

11.
This event-related potential study investigated (i) to what extent incongruence between attention-directing cue and cued target modality affects attentional control processes that bias the system in advance to favor a particular stimulus modality and (ii) to what extent top-down attentional control mechanisms are generalized for the type of information that is to be attended. To this end, both visual and auditory word cues were used to instruct participants to direct attention to a specific visual (color) or auditory (pitch) stimulus feature of a forthcoming multisensory target stimulus. Effects of cue congruency were observed within 200 ms post-cue over frontal scalp regions and related to processes involved in shifting attention from the cue modality to the modality of the task-relevant target feature. Both directing visual attention and directing auditory attention were associated with dorsal posterior positivity, followed by sustained fronto-central negativity. However, this fronto-central negativity appeared to have an earlier onset and was more pronounced when the visual modality was cued. Together the present results suggest that the mechanisms involved in deploying attention are to some extent determined by the modality (visual, auditory) in which attention operates, and in addition, that some of these mechanisms can also be affected by cue congruency.

12.
Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were presented either with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an uninformative, neutral cue. Although older adults were less accurate overall, both age groups benefited from the presentation of an informative, feature-based cue relative to a neutral cue. Surprisingly, we also observed differences in the effectiveness of shape versus color cues and their effects upon post-cue memory load. These results suggest that older adults can use top-down attention to remove irrelevant items from visual working memory, provided that task-relevant features function as cues.

13.
Participants respond more quickly to two simultaneously presented target stimuli of two different modalities (redundant targets) than would be predicted from their reaction times to the unimodal targets. To examine the neural correlates of this redundant-target effect, event-related potentials (ERPs) were recorded to auditory, visual, and bimodal standard and target stimuli presented at two locations (left and right of central fixation). Bimodal stimuli were combinations of two standards, two targets, or a standard and a target, presented either from the same or from different locations. Responses generally were faster for bimodal stimuli than for unimodal stimuli and were faster for spatially congruent than for spatially incongruent bimodal events. ERPs to spatially congruent and spatially incongruent bimodal stimuli started to differ over the parietal cortex as early as 160 msec after stimulus onset. The present study suggests that hearing and seeing interact at sensory-processing stages by matching spatial information across modalities.

14.
Tang Xiaoyu, Tong Jiageng, Yu Hong, Wang Aijun. Acta Psychologica Sinica (心理学报), 2021, 53(11): 1173-1188
This study used an endogenous-exogenous spatial cue-target paradigm, manipulating three independent variables: endogenous cue validity (valid vs. invalid), exogenous cue validity (valid vs. invalid), and target stimulus type (visual, auditory, or audiovisual). Two experiments differing in task difficulty (Experiment 1: a simple localization task; Experiment 2: a more difficult discrimination task) examined the influence of endogenous and exogenous spatial attention on multisensory integration. Both experiments found that exogenous spatial attention significantly weakened the multisensory integration effect, whereas endogenous spatial attention did not significantly enhance it; Experiment 2 additionally showed that endogenous spatial attention modulated the weakening of multisensory integration by exogenous spatial attention. These results indicate that, unlike endogenous spatial attention, the influence of exogenous spatial attention on multisensory integration is not readily modulated by task difficulty, and that when the task is difficult, endogenous spatial attention affects the process by which exogenous spatial attention weakens multisensory integration. This suggests that the modulation of multisensory integration by endogenous and exogenous spatial attention is not independent; rather, the two interact with each other.

15.
Two studies were conducted to examine the effects of unimodal and multimodal cueing techniques for indicating the location of threats on target acquisition, the recall of information from concurrent communications, and perceived workload. One visual, two auditory (i.e., nonspatial speech and spatial tones [3-D]), and one tactile cue were assessed in Experiment 1. Experiment 2 examined the effects of combinations of the cues assessed in the first investigation: visual + nonspatial speech, visual + spatial tones, visual + tactile, and nonspatial speech + tactile. A unimodal, “visual only” condition was included as a baseline to determine the extent to which a supplementary cue might influence changes in performance and workload. The results of the studies indicated that time to first shot and the percentage of hits can be improved and workload reduced by providing cues about target location. The multimodal cues did not yield significant improvements in performance or workload beyond that achieved by the unimodal visual cue.

16.
Peripheral cues are thought to facilitate responses to stimuli presented at the same location because they lead to exogenous attention shifts. Facilitation has been observed in numerous studies of visual and auditory attention, but there have been only four demonstrations of tactile facilitation, all in studies with potential confounds. Three studies used a spatial (finger versus thumb) discrimination task, where the cue could have provided a spatial framework that might have assisted the discrimination of subsequent targets presented on the same side as the cue. The final study circumvented this problem by using a non-spatial discrimination; however, the cues were informative and interspersed with visual cues which may have affected the attentional effects observed. In the current study, therefore, we used a non-spatial tactile frequency discrimination task following a non-informative tactile white noise cue. When the target was presented 150 ms after the cue, we observed faster discrimination responses to targets presented on the same side compared to the opposite side as the cue; by 1000 ms, responses were significantly faster to targets presented on the opposite side to the cue. Thus, we demonstrated that tactile attentional facilitation can be observed in a non-spatial discrimination task, under unimodal conditions and with entirely non-predictive cues. Furthermore, we provide the first demonstration of significant tactile facilitation and tactile inhibition of return within a single experiment.

17.
Multisensory integration increases the salience of sensory events and, therefore, possibly also their ability to capture attention in visual search. This was investigated in two experiments where spatially uninformative color change cues preceded visual search arrays with color-defined targets. Tones were presented synchronously with these cues on half of all trials. Spatial-cuing effects indicative of cue-triggered capture of attention were larger on tone-present than on tone-absent trials, demonstrating multisensory enhancements of attentional capture. Larger capture effects for audiovisual events were found when cues were color singletons, and also when they appeared among heterogeneous color distractors. Tone-induced increases of attentional capture were independent of color-specific top-down task sets, suggesting that this multisensory effect is a stimulus-driven bottom-up phenomenon.

18.
Based on an exogenous cue-target paradigm, a 2 (cue-target stimulus onset asynchrony, SOA: 400-600 ms vs. 1000-1200 ms) × 3 (target stimulus type: visual, auditory, audiovisual) × 2 (cue validity: valid vs. invalid) within-subjects design was used, in which participants performed a detection task on the target stimuli. The aim was to examine how inhibition of return (IOR) induced by visual cues modulates audiovisual integration, thereby providing evidence bearing on the perceptual sensitivity, spatial uncertainty, and cross-modal signal strength difference hypotheses. The results showed that (1) as SOA increased, the visual IOR effect decreased significantly while the audiovisual integration effect increased significantly; and (2) at the short SOA (400-600 ms), the audiovisual integration effect at validly cued locations was significantly smaller than at invalidly cued locations, whereas at the long SOA (1000-1200 ms) there was no significant difference between validly and invalidly cued locations. These results indicate that the modulatory effect of visual IOR on audiovisual integration changes across SOA conditions, supporting the cross-modal signal strength difference hypothesis.

19.
Previous studies have shown that adults respond faster and more reliably to bimodal compared to unimodal localization cues. The current study investigated for the first time the development of audiovisual (A-V) integration in spatial localization behavior in infants between 1 and 10 months of age. We observed infants' head and eye movements in response to auditory, visual, or both kinds of stimuli presented either 25 degrees or 45 degrees to the right or left of midline. Infants under 8 months of age intermittently showed response latencies significantly faster toward audiovisual targets than toward either auditory or visual targets alone. They did so, however, without exhibiting a reliable violation of the Race Model, suggesting that probability summation alone could explain the faster bimodal response. In contrast, infants between 8 and 10 months of age exhibited bimodal response latencies significantly faster than unimodal latencies for both eccentricity conditions, and their latencies violated the Race Model at 25 degrees eccentricity. In addition to this main finding, we found age-dependent eccentricity and modality effects on response latencies. Together, these findings suggest that audiovisual integration emerges late in the first year of life and are consistent with neurophysiological findings from multisensory sites in the superior colliculus of infant monkeys showing that multisensory enhancement of responsiveness is not present at birth but emerges later in life.

20.
Recently, Guzman-Martinez, Ortega, Grabowecky, Mossbridge, and Suzuki (Current Biology, 22(5), 383–388, 2012) reported that observers could systematically match auditory amplitude modulations and tactile amplitude modulations to visual spatial frequencies, proposing that these cross-modal matches produced automatic attentional effects. Using a series of visual search tasks, we investigated whether informative auditory, tactile, or bimodal cues can guide attention toward a visual Gabor of matched spatial frequency (among others with different spatial frequencies). These cues improved visual search for some but not all frequencies. Auditory cues improved search only for the lowest and highest spatial frequencies, whereas tactile cues were more effective and frequency specific, although less effective than visual cues. Importantly, although tactile cues could produce efficient search when informative, they had no effect when uninformative. This suggests that cross-modal frequency matching occurs at a cognitive rather than sensory level and, therefore, influences visual search through voluntary, goal-directed behavior, rather than automatic attentional capture.
