Similar Articles (20 results)
1.
We report three experiments designed to investigate the nature of any crossmodal links between audition and touch in sustained endogenous covert spatial attention, using the orthogonal spatial cuing paradigm. Participants discriminated the elevation (up vs. down) of auditory and tactile targets presented to either the left or the right of fixation. In Experiment 1, targets were expected on a particular side in just one modality; the results demonstrated that the participants could spatially shift their attention independently in both audition and touch. Experiment 2 demonstrated that when the participants were informed that targets were more likely to be on one side for both modalities, elevation judgments were faster on that side in both audition and touch. The participants were also able to "split" their auditory and tactile attention, albeit at some cost, when targets in the two modalities were expected on opposite sides. Similar results were also reported in Experiment 3 when participants adopted a crossed-hands posture, thus revealing that crossmodal links in audiotactile attention operate on a representation of space that is updated following posture change. These results are discussed in relation to previous findings regarding crossmodal links in audiovisual and visuotactile covert spatial attentional orienting.

2.
The authors report a series of 6 experiments investigating crossmodal links between vision and touch in covert endogenous spatial attention. When participants were informed that visual and tactile targets were more likely on 1 side than the other, speeded discrimination responses (continuous vs. pulsed, Experiments 1 and 2; or up vs. down, Experiment 3) for targets in both modalities were significantly faster on the expected side, even though target modality was entirely unpredictable. When participants expected a target on a particular side in just one modality, corresponding shifts of covert attention also took place in the other modality, as evidenced by faster elevation judgments on that side (Experiment 4). Larger attentional effects were found when directing visual and tactile attention to the same position rather than to different positions (Experiment 5). A final study with crossed hands revealed that these visuotactile links in spatial attention apply to common positions in external space.

3.
The last decade has seen great progress in the study of the nature of crossmodal links in exogenous and endogenous spatial attention (see [Spence, C., McDonald, J., & Driver, J. (2004). Exogenous spatial cuing studies of human crossmodal attention and multisensory integration. In C. Spence, & J. Driver (Eds.), Crossmodal space and crossmodal attention (pp. 277-320). Oxford, UK: Oxford University Press.], for a recent review). A growing body of research now highlights the existence of robust crossmodal links between auditory, visual, and tactile spatial attention. However, until recently, studies of exogenous and endogenous attention have proceeded relatively independently. In daily life, these two forms of attentional orienting continuously compete for the control of our attentional resources and, ultimately, our awareness. It is therefore critical to try to understand how exogenous and endogenous attention interact in both the unimodal context of the laboratory and the multisensory contexts that are more representative of everyday life. To date, progress in understanding the interaction between these two forms of orienting has primarily come from unimodal studies of visual attention. We therefore start by summarizing what has been learned from this large body of empirical research, before going on to review more recent studies that have started to investigate the interaction between endogenous and exogenous orienting in a multisensory setting. We also discuss the evidence suggesting that exogenous spatial orienting is not truly automatic, at least when assessed in a crossmodal context. Several possible models describing the interaction between endogenous and exogenous orienting are outlined and then evaluated in terms of the extant data.

4.
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual–auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed more or less effectively depending on the task dimension, such that processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual–auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set.

5.
郑晓丹 (Zheng Xiaodan) & 岳珍珠 (Yue Zhenzhu). 《心理科学》 (Psychological Science), 2022, 45(6): 1329-1336
Using real objects from everyday life, we examined the influence of crossmodal semantic relatedness on visual attention and the time course of crossmodal facilitation. Combining the priming paradigm with the dot-probe paradigm, Experiment 1 found that, 600 ms after an auditory prime, participants responded faster to highly related visual stimuli than to weakly related ones, whereas no priming effect was found with visual primes. Experiment 2 found that the crossmodal priming effect disappeared once the prime had been presented for 900 ms. Our study demonstrates that audiovisual semantic relatedness based on prior experience can facilitate visual selective attention.

6.
The present study examined the effects of cue-based preparation and cue-target modality mapping in crossmodal task switching. In two experiments, we randomly presented lateralized visual and auditory stimuli simultaneously. Subjects were asked to make a left/right judgment for a stimulus in only one of the modalities. Prior to each trial, the relevant stimulus modality was indicated by a visual or auditory cue. The cueing interval was manipulated to examine preparation. In Experiment 1, we used a corresponding mapping of cue modality and stimulus modality, whereas in Experiment 2 the mapping of cue and stimulus modalities was reversed. We found reduced modality-switch costs with a long cueing interval, showing that attention shifts to stimulus modalities can be prepared, irrespective of cue-target modality mapping. We conclude that perceptual processing in crossmodal switching can be biased in a preparatory way towards task-relevant stimulus modalities.

7.
Recent literature has highlighted the importance and ubiquity of cross-modal links in spatial attention, whereby shifts in attention in one modality often induce corresponding shifts in other modalities. We attempted to provide further evidence for the case of audiovisual links during sustained endogenous attention by addressing several potential methodological confounds in previous demonstrations. However, we failed repeatedly to reproduce the phenomenon of spatial synergies between auditory and visual attention, found by Driver and Spence (1994) and frequently cited to support the automatic nature of cross-modal attention links. We discuss the results in light of recent evidence about cross-modal spatial links during sustained attention and support the idea that such links can weaken or even disappear under certain circumstances, such as during periods of sustained attention. The implication is that individuals can select inputs from different modalities from different locations more easily than previously had been thought.

8.
Change blindness is the name given to people's inability to detect changes introduced between two consecutively presented scenes when they are separated by a distractor that masks the transients that are typically associated with change. Change blindness has been reported within vision, audition, and touch, but has never before been investigated when successive patterns are presented to different sensory modalities. In the study reported here, we investigated change detection performance when the two to-be-compared stimulus patterns were presented in the same sensory modality (i.e., both visual or both tactile) and when one stimulus pattern was tactile while the other was presented visually or vice versa. The two to-be-compared patterns were presented consecutively, separated by an empty interval, or else separated by a masked interval. In the latter case, the masked interval could either be tactile or visual. The first experiment investigated visual-tactile and tactile-visual change detection performance. The results showed that in the absence of masking, participants detected changes in position accurately, despite the fact that the two to-be-compared displays were presented in different sensory modalities. Furthermore, when a mask was presented between the two to-be-compared displays, crossmodal change blindness was elicited no matter whether the mask was visual or tactile. The results of two further experiments showed that performance was better overall in the unimodal (visual or tactile) conditions than in the crossmodal conditions. These results suggest that certain of the processes underlying change blindness are multisensory in nature. We discuss these findings in relation to recent claims regarding the crossmodal nature of spatial attention.

9.
Tracking an audio-visual target involves integrating spatial cues about target position from both modalities. Such sensory cue integration is a developmental process in the brain involving learning, with neuroplasticity as its underlying mechanism. We present a Hebbian learning-based adaptive neural circuit for multi-modal cue integration. The circuit temporally correlates stimulus cues within each modality via intramodal learning as well as symmetrically across modalities via crossmodal learning to independently update modality-specific neural weights on a sample-by-sample basis. It is realised as a robotic agent that must orient towards a moving audio-visual target. It continuously learns the best possible weights required for a weighted combination of auditory and visual spatial target directional cues that is directly mapped to robot wheel velocities to elicit an orientation response. Visual directional cues are noise-free and continuous but arise from a relatively narrow receptive field, while auditory directional cues are noisy and intermittent but arise from a relatively wider receptive field. Comparative trials in simulation demonstrate that concurrent intramodal learning improves both the overall accuracy and the precision of the orientation responses produced by symmetric crossmodal learning. We also demonstrate that symmetric crossmodal learning improves multisensory responses as compared to asymmetric crossmodal learning. The neural circuit also exhibits multisensory effects such as sub-additivity, additivity and super-additivity.
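The weighted cue combination with sample-by-sample Hebbian updating described in this abstract lends itself to a short sketch. The following is a minimal illustration only, not the authors' circuit: the class name, learning rate, plain Hebb rule on the combined response, and divisive weight normalisation are all assumptions made for the example, and the paper's separate intramodal and crossmodal learning pathways are not modelled.

```python
import numpy as np

# Minimal sketch (not the authors' implementation) of a weighted
# audio-visual cue combination with a sample-by-sample Hebbian update.

class CrossmodalIntegrator:
    def __init__(self, learning_rate=0.01):
        self.w_vis = 0.5          # visual cue weight
        self.w_aud = 0.5          # auditory cue weight
        self.lr = learning_rate   # illustrative learning rate

    def step(self, vis_cue, aud_cue):
        """Combine directional cues (scaled to [-1, 1]) and apply a Hebbian
        update: each weight grows with the product of its modality's cue and
        the combined response; weights are then renormalised to sum to 1."""
        response = self.w_vis * vis_cue + self.w_aud * aud_cue
        self.w_vis += self.lr * vis_cue * response
        self.w_aud += self.lr * aud_cue * response
        total = self.w_vis + self.w_aud
        self.w_vis /= total
        self.w_aud /= total
        return response

# Toy run: a noise-free visual cue and a noisy auditory cue for the same
# target direction, as in the abstract. The combined response would then be
# mapped to wheel velocities to produce the orientation behaviour.
agent = CrossmodalIntegrator()
rng = np.random.default_rng(0)
for _ in range(200):
    direction = 0.4                                   # true target direction
    agent.step(vis_cue=direction,
               aud_cue=direction + rng.normal(0.0, 0.5))
print(f"w_vis = {agent.w_vis:.2f}, w_aud = {agent.w_aud:.2f}")
```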

10.
赵晨 (Zhao Chen), 张侃 (Zhang Kan) & 杨华海 (Yang Huahai). 《心理学报》 (Acta Psychologica Sinica), 2001, 34(3): 28-33
Using the spatial-cueing experimental paradigm, this study examined the relationship between endogenous and exogenous selective attention across the visual and auditory modalities. The results showed that (1) auditory central cues can guide endogenous spatial selective attention at longer SOAs (at least 500 ms), while abrupt-onset peripheral cues also capture part of the attentional resources automatically; and (2) auditory and visual selective attention are separate processing channels, but the two are interconnected.

11.
In order to perceive the world coherently, we need to integrate features of objects and events that are presented to our senses. Here we investigated the temporal limit of integration in unimodal visual and auditory as well as crossmodal auditory-visual conditions. Participants were presented with alternating visual and auditory stimuli and were asked to match them either within or between modalities. At alternation rates of about 4 Hz and higher, participants were no longer able to match visual and auditory stimuli across modalities correctly, while matching within either modality showed higher temporal limits. Manipulating different temporal stimulus characteristics (stimulus offsets and/or auditory-visual SOAs) did not change performance. Interestingly, the difference in temporal limits between crossmodal and unimodal conditions appears strikingly similar to temporal limit differences between unimodal conditions when additional features have to be integrated. We suggest that adding a modality across which sensory input is integrated has the same effect as adding an extra feature to be integrated within a single modality.

12.
There is currently a great deal of interest regarding the possible existence of a crossmodal attentional blink (AB) between audition and vision. The majority of evidence now suggests that no such crossmodal deficit exists unless a task switch is introduced. We report two experiments designed to investigate the existence of a crossmodal AB between vision and touch. Two masked targets were presented successively at variable interstimulus intervals. Participants had to respond either to both targets (experimental condition) or to just the second target (control condition). In Experiment 1, the order of target modality was blocked, and an AB was demonstrated when visual targets preceded tactile targets, but not when tactile targets preceded visual targets. In Experiment 2, target modality was mixed randomly, and a significant crossmodal AB was demonstrated in both directions between vision and touch. The contrast between our visuotactile results and those of previous audiovisual studies is discussed, as are the implications for current theories of the AB.

13.
Perceptual judgments can be affected by expectancies regarding the likely target modality. This has been taken as evidence for selective attention to particular modalities, but alternative accounts remain possible in terms of response priming, criterion shifts, stimulus repetition, and spatial confounds. We examined whether attention to a sensory modality would still be apparent when these alternatives were ruled out. Subjects made a speeded detection response (Experiment 1), an intensity or color discrimination (Experiment 2), or a spatial discrimination response (Experiments 3 and 4) for auditory and visual targets presented in a random sequence. On each trial, a symbolic visual cue predicted the likely target modality. Responses were always more rapid and accurate for targets presented in the expected versus unexpected modality, implying that people can indeed selectively attend to the auditory or visual modalities. When subjects were cued to both the probable modality of a target and its likely spatial location (Experiment 4), separable modality-cuing and spatial-cuing effects were observed. These studies introduce appropriate methods for distinguishing attention to a modality from the confounding factors that have plagued previous normal and clinical research.

14.
Manipulating inattentional blindness within and across sensory modalities
People often fail to consciously perceive visual events that are outside the focus of attention, a phenomenon referred to as inattentional blindness or IB (i.e., Mack & Rock, 1998). Here, we investigated IB for words within and across sensory modalities (visually and auditorily) in order to assess whether dividing attention across different senses has the same consequences as dividing attention within an individual sensory modality. Participants were asked to monitor a rapid stream of pictures or sounds presented concurrently with task-irrelevant words (spoken or written). A word recognition test was used to measure the processing for unattended words compared to word recognition levels after explicitly monitoring the word stream. We were able to produce high levels of IB for visually and auditorily presented words under unimodal conditions (Experiment 1) as well as under crossmodal conditions (Experiment 2). A further manipulation revealed, however, that IB is less prevalent when attention is divided across modalities than within the same modality (Experiment 3). These findings are explained in terms of the attentional load hypothesis and suggest that, contrary to some claims, attention resources are to a certain extent shared across sensory modalities.

15.
In many everyday situations, our senses are bombarded by many different unisensory signals at any given time. To gain the most veridical, and least variable, estimate of environmental stimuli/properties, we need to combine the individual noisy unisensory perceptual estimates that refer to the same object, while keeping those estimates belonging to different objects or events separate. How, though, does the brain "know" which stimuli to combine? Traditionally, researchers interested in the crossmodal binding problem have focused on the roles that spatial and temporal factors play in modulating multisensory integration. However, crossmodal correspondences between various unisensory features (such as between auditory pitch and visual size) may provide yet another important means of constraining the crossmodal binding problem. A large body of research now shows that people exhibit consistent crossmodal correspondences between many stimulus features in different sensory modalities. For example, people consistently match high-pitched sounds with small, bright objects that are located high up in space. The literature reviewed here supports the view that crossmodal correspondences need to be considered alongside semantic and spatiotemporal congruency, among the key constraints that help our brains solve the crossmodal binding problem.
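The "least variable estimate" idea in this abstract is commonly formalised as maximum-likelihood cue integration, in which each unisensory estimate is weighted by its reliability (inverse variance). Below is a minimal sketch under that standard model; the function name and the numbers are made up for illustration and are not taken from the reviewed studies.

```python
import numpy as np

# Reliability-weighted (inverse-variance) cue combination: the standard
# maximum-likelihood model of multisensory integration. Names and values
# are illustrative.

def integrate_cues(estimates, variances):
    """Return the reliability-weighted average of unisensory estimates
    and the variance of the combined estimate."""
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    weights /= weights.sum()                     # normalise reliabilities
    fused = float(np.dot(weights, estimates))    # combined estimate
    fused_var = 1.0 / np.sum(1.0 / variances)    # never exceeds the best cue
    return fused, fused_var

# A precise visual location estimate (10.0, variance 1.0) and a noisy
# auditory one (14.0, variance 4.0):
loc, var = integrate_cues([10.0, 14.0], [1.0, 4.0])
print(loc, var)  # 10.8, 0.8 -- pulled toward the more reliable visual cue
```

Note that the combined variance (0.8) is lower than either unisensory variance, which is exactly why combining cues yields the "least variable" estimate the abstract refers to.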

16.
Humans tend to represent numbers in the form of a mental number line. Here we show that the mental number line can modulate the representation of peripersonal haptic space in a crossmodal fashion and that this interaction is not visually mediated. Sighted and early-blind participants were asked to haptically explore rods of different lengths and to indicate midpoints of those rods. During each trial, either a small (2) or a large (8) number was presented in the auditory modality. When no numbers were presented, participants tended to bisect the rods to the left of the actual midpoint, consistent with the notion of pseudoneglect. In both groups, this bias was significantly increased by the presentation of a small number and was significantly reduced by the presentation of a large number. Hence, spatial shifts of attention induced by number processing are not limited to visual space or embodied responses but extend to haptic peripersonal space and occur crossmodally without requiring the activation of a visuospatial representation.

17.
Three experiments were conducted examining unimodal and crossmodal effects of attention to motion. Horizontally moving sounds and dot patterns were presented, and participants' task was to judge their motion speed or whether they contained a brief gap. In Experiments 1 and 2, stimuli of one modality and of one direction were presented with a higher probability (p = .7) than other stimuli. Sounds and dot patterns moving in the expected direction were discriminated faster than stimuli moving in the unexpected direction. In Experiment 3, participants had to respond only to stimuli moving in one direction within the primary modality, but to all stimuli regardless of their direction within the rarer secondary modality. Stimuli of the secondary modality moving in the attended direction were discriminated faster than were oppositely moving stimuli. Results suggest that attending to the direction of motion affects perception within vision and audition, but also across modalities.

18.
Using a spatial task-switching paradigm and manipulating the salience of visual and auditory stimuli, this study examined the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of visual and auditory stimuli significantly modulated visual dominance: in Experiment 1, the visual dominance effect was markedly weakened when the auditory stimulus was highly salient; in Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was of low salience, visual dominance was further weakened but still present. The results support biased-competition theory: in crossmodal audiovisual interaction, visual stimuli are more salient and thus enjoy a processing advantage in multisensory integration.

19.
The cost of expecting events in the wrong sensory modality
We examined the effects of modality expectancy on human performance. Participants judged azimuth (left vs. right location) for an unpredictable sequence of auditory, visual, and tactile targets. In some blocks, equal numbers of targets were presented in each modality. In others, the majority (75%) of the targets were presented in just one expected modality. Reaction times (RTs) for targets in an unexpected modality were slower than when that modality was expected or when no expectancy applied. RT costs associated with shifting attention from the tactile modality were greater than those for shifts from either audition or vision. Any RT benefits for the most likely modality were due to priming from an event in the same modality on the previous trial, not to the expectancy per se. These results show that stimulus-driven and expectancy-driven effects must be distinguished in studies of attending to different sensory modalities.

20.
The aim of the present study was to investigate exogenous crossmodal orienting of attention in three-dimensional (3-D) space. Most studies in which the orienting of attention has been examined in 3-D space concerned either exogenous intramodal or endogenous crossmodal attention. Evidence for exogenous crossmodal orienting of attention in depth is lacking. Endogenous and exogenous attention are behaviorally different, suggesting that they are two different mechanisms. We used the orthogonal spatial-cueing paradigm and presented auditory exogenous cues at one of four possible locations in near or far space before the onset of a visual target. Cues could be presented at the same (valid) or at a different (invalid) depth from the target (radial validity), and on the same (valid) or on a different (invalid) side (horizontal validity), whereas we blocked the depth at which visual targets were presented. Next to an overall validity effect (valid RTs < invalid RTs) in horizontal space, we observed an interaction between the horizontal and radial validity of the cue: The horizontal validity effect was present only when the cue and the target were presented at the same depth. No horizontal validity effect was observed when the cue and the target were presented at different depths. These results suggest that exogenous crossmodal attention is "depth-aware," and they are discussed in the context of the supramodal hypothesis of attention.
