Similar Articles
20 similar articles found (search time: 125 ms)
1.
Subjects judged the elevation (up vs. down, regardless of laterality) of peripheral auditory or visual targets, following uninformative cues on either side with an intermediate elevation. Judgments were better for targets in either modality when preceded by an uninformative auditory cue on the side of the target. Experiment 2 ruled out nonattentional accounts for these spatial cuing effects. Experiment 3 found that visual cues affected elevation judgments for visual but not auditory targets. Experiment 4 confirmed that the effect on visual targets was attentional. In Experiment 5, visual cues produced spatial cuing when targets were always auditory, but saccades toward the cue may have been responsible. No such visual-to-auditory cuing effects were found in Experiment 6 when saccades were prevented, though they were present when eye movements were not monitored. These results suggest a one-way cross-modal dependence in exogenous covert orienting whereby audition influences vision, but not vice versa. Possible reasons for this asymmetry are discussed in terms of the representation of space within the brain.

2.
We investigated the effect of unseen hand posture on cross-modal, visuo-tactile links in covert spatial attention. In Experiment 1, a spatially nonpredictive visual cue was presented to the left or right hemifield shortly before a tactile target on either hand. To examine the spatial coordinates of any cross-modal cuing, the unseen hands were either uncrossed or crossed so that the left hand lay to the right and vice versa. Tactile up/down (i.e., index finger/thumb) judgments were better on the same side of external space as the visual cue, for both crossed and uncrossed postures. Thus, which hand was advantaged by a visual cue in a particular hemifield reversed across the different unseen postures. In Experiment 2, nonpredictive tactile cues now preceded visual targets. Up/down judgments for the latter were better on the same side of external space as the tactile cue, again for both postures. These results demonstrate cross-modal links between vision and touch in exogenous covert spatial attention that remap across changes in unseen hand posture, suggesting a modulatory role for proprioception.

3.
We report three experiments designed to investigate the nature of any crossmodal links between audition and touch in sustained endogenous covert spatial attention, using the orthogonal spatial cuing paradigm. Participants discriminated the elevation (up vs. down) of auditory and tactile targets presented to either the left or the right of fixation. In Experiment 1, targets were expected on a particular side in just one modality; the results demonstrated that the participants could spatially shift their attention independently in both audition and touch. Experiment 2 demonstrated that when the participants were informed that targets were more likely to be on one side for both modalities, elevation judgments were faster on that side in both audition and touch. The participants were also able to "split" their auditory and tactile attention, albeit at some cost, when targets in the two modalities were expected on opposite sides. Similar results were also reported in Experiment 3 when participants adopted a crossed-hands posture, thus revealing that crossmodal links in audiotactile attention operate on a representation of space that is updated following posture change. These results are discussed in relation to previous findings regarding crossmodal links in audiovisual and visuotactile covert spatial attentional orienting.

4.
In a previous study, Ward (1994) reported that spatially uninformative visual cues orient auditory attention but that spatially uninformative auditory cues fail to orient visual attention. This cross-modal asymmetry is consistent with other intersensory perceptual phenomena that are dominated by the visual modality (e.g., ventriloquism). However, Spence and Driver (1997) found exactly the opposite asymmetry under different experimental conditions and with a different task. In spite of the several differences between the two studies, Spence and Driver (see also Driver & Spence, 1998) argued that Ward's findings might have arisen from response-priming effects, and that the cross-modal asymmetry they themselves reported, in which auditory cues affect responses to visual targets but not vice versa, is in fact the correct result. The present study investigated cross-modal interactions in stimulus-driven spatial attention orienting under Ward's complex cue environment conditions using an experimental procedure that eliminates response-priming artifacts. The results demonstrate that the cross-modal asymmetry reported by Ward (1994) does occur when the cue environment is complex. We argue that strategic effects in cross-modal stimulus-driven orienting of attention are responsible for the opposite asymmetries found by Ward and by Spence and Driver (1997).

5.
The participants in this study discriminated the position of tactile target stimuli presented at the tip or the base of the forefinger of one of the participants’ hands, while ignoring visual distractor stimuli. The visual distractor stimuli were presented from two circles on a display aligned with the tactile targets in Experiment 1 or orthogonal to them in Experiment 2. Tactile discrimination performance was slower and less accurate when the visual distractor stimuli were presented from incongruent locations relative to the tactile target stimuli (e.g., tactile target at the base of the finger with top visual distractor) highlighting a cross-modal congruency effect. We examined whether the presence and orientation of a simple line drawing of a hand, which was superimposed on the visual distractor stimuli, would modulate the cross-modal congruency effects. When the tactile targets and the visual distractors were spatially aligned, the modulatory effects of the hand picture were small (Experiment 1). However, when they were spatially misaligned, the effects were much larger, and the direction of the cross-modal congruency effects changed in accordance with the orientation of the picture of the hand, as if the hand picture corresponded to the participants’ own stimulated hand (Experiment 2). The results suggest that the two-dimensional picture of a hand can modulate processes maintaining our internal body representation. We also observed that the cross-modal congruency effects were influenced by the postures of the stimulated and the responding hands. These results reveal the complex nature of spatial interactions among vision, touch, and proprioception.

6.
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmical patterns derived from the alternation of vowels and consonants in English and Japanese, presented in audition, vision, both audition and vision at the same time, or touch. Experiment 1 confirmed that discrimination is possible on the basis of auditory rhythmic patterns, and extended it to the case of vision, using ‘aperture-close’ mouth movements of a schematic face. In Experiment 2, language discrimination was demonstrated using visual and auditory materials that did not resemble spoken articulation. In a combined analysis including data from Experiments 1 and 2, a beneficial effect was also found when the auditory rhythmic information was available to participants. Despite the fact that discrimination could be achieved using vision alone, auditory performance was nevertheless better. In a final experiment, we demonstrate that the rhythm of speech can also be discriminated successfully by means of vibrotactile patterns delivered to the fingertip. The results of the present study therefore demonstrate that discrimination between languages’ syllabic rhythmic patterns is possible on the basis of visual and tactile displays.

7.
Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age. Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.

8.
赵晨, 张侃, 杨华海. Acta Psychologica Sinica (心理学报), 2001, 34(3): 28-33.
This study used the spatial cueing paradigm to examine the relationship between endogenous and exogenous selective attention across the visual and auditory modalities. The results showed that (1) a central auditory cue can guide endogenous spatial selective attention at longer SOAs (at least 500 ms), while the abrupt onset of a peripheral cue also automatically captures part of the attentional resources; and (2) auditory and visual selective attention are separate processing channels, but the two channels are interconnected.

9.
The authors report a series of 6 experiments investigating crossmodal links between vision and touch in covert endogenous spatial attention. When participants were informed that visual and tactile targets were more likely on 1 side than the other, speeded discrimination responses (continuous vs. pulsed, Experiments 1 and 2; or up vs. down, Experiment 3) for targets in both modalities were significantly faster on the expected side, even though target modality was entirely unpredictable. When participants expected a target on a particular side in just one modality, corresponding shifts of covert attention also took place in the other modality, as evidenced by faster elevation judgments on that side (Experiment 4). Larger attentional effects were found when directing visual and tactile attention to the same position rather than to different positions (Experiment 5). A final study with crossed hands revealed that these visuotactile links in spatial attention apply to common positions in external space.

10.
Cross-modal endogenous selective attention
赵晨, 杨华海, 张侃. Acta Psychologica Sinica (心理学报), 1999, 32(2): 148-153.
This experiment used the spatial cueing technique to study endogenous selective attention across the visual and auditory modalities. The results showed that a central visual cue reliably guided endogenous visual selective attention, and that a central auditory cue could also guide endogenous visual selective attention at longer SOAs (at least 500 ms), supporting the hypothesis that vision and audition have modality-specific but interconnected attentional processing channels.

11.
Presenting an auditory or tactile cue in temporal synchrony with a change in the color of a visual target can facilitate participants’ visual search performance. In the present study, we compared the magnitude of unimodal auditory, vibrotactile, and bimodal (i.e., multisensory) cuing benefits when the nonvisual cues were presented in temporal synchrony with the changing of the target’s color (Experiments 1 and 2). The target (a horizontal or vertical line segment) was presented among a number of distractors (tilted line segments) that also changed color at various times. In Experiments 3 and 4, the cues were also made spatially informative with regard to the location of the visual target. The unimodal and bimodal cues gave rise to an equivalent (significant) facilitation of participants’ visual search performance relative to a no-cue baseline condition. Making the unimodal auditory and vibrotactile cues spatially informative produced further performance improvements (on validly cued trials), as compared with cues that were spatially uninformative or otherwise spatially invalid. A final experiment was conducted in order to determine whether cue location (close to versus far from the visual display) would influence participants’ visual search performance. Auditory cues presented close to the visual search display were found to produce significantly better performance than cues presented over headphones. Taken together, these results have implications for the design of nonvisual and multisensory warning signals used in complex visual displays.

12.
Peripheral cues are thought to facilitate responses to stimuli presented at the same location because they lead to exogenous attention shifts. Facilitation has been observed in numerous studies of visual and auditory attention, but there have been only four demonstrations of tactile facilitation, all in studies with potential confounds. Three studies used a spatial (finger versus thumb) discrimination task, where the cue could have provided a spatial framework that might have assisted the discrimination of subsequent targets presented on the same side as the cue. The final study circumvented this problem by using a non-spatial discrimination; however, the cues were informative and interspersed with visual cues, which may have affected the attentional effects observed. In the current study, therefore, we used a non-spatial tactile frequency discrimination task following a non-informative tactile white noise cue. When the target was presented 150 ms after the cue, we observed faster discrimination responses to targets presented on the same side as the cue than to targets on the opposite side; by 1000 ms, responses were significantly faster to targets presented on the opposite side to the cue. Thus, we demonstrated that tactile attentional facilitation can be observed in a non-spatial discrimination task, under unimodal conditions and with entirely non-predictive cues. Furthermore, we provide the first demonstration of significant tactile facilitation and tactile inhibition of return within a single experiment.

13.
Our environment is richly structured, with objects producing correlated information within and across sensory modalities. A prominent challenge faced by our perceptual system is to learn such regularities. Here, we examined statistical learning and addressed learners’ ability to track transitional probabilities between elements in the auditory and visual modalities. Specifically, we investigated whether cross-modal information affects statistical learning within a single modality. Participants were familiarized with a statistically structured modality (e.g., either audition or vision) accompanied by different types of cues in a second modality (e.g., vision or audition). The results revealed that statistical learning within either modality is affected by cross-modal information, with learning being enhanced or reduced according to the type of cue provided in the second modality.
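The transitional probabilities tracked in this paradigm are simply conditional probabilities between adjacent elements of a familiarization stream. A minimal sketch (the function name and the toy stream are our own illustration, not materials from the study):

```python
from collections import Counter

def transitional_probabilities(sequence):
    """Estimate P(next element | current element) for every adjacent pair."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    return {
        (a, b): count / first_counts[a]
        for (a, b), count in pair_counts.items()
    }

# In a stream built by repeating the "word" ABC, within-word transitions
# such as A->B are deterministic (probability 1.0).
stream = list("ABCABCABC")
tps = transitional_probabilities(stream)
print(tps[("A", "B")])  # 1.0
```

In real statistical-learning streams several words are concatenated in random order, so within-word transitional probabilities are high while between-word transitions are low, and learners can exploit that contrast.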

14.
Interactions Between Exogenous Auditory and Visual Spatial Attention
Six experiments were carried out to investigate the issue of cross-modality between exogenous auditory and visual spatial attention employing Posner's cueing paradigm in detection, localization, and discrimination tasks. Results indicated cueing in detection tasks with visual or auditory cues and visual targets but not with auditory targets (Experiment 1). In the localization tasks, cueing was found with both visual and auditory targets. Inhibition of return was apparent only in the within-modality conditions (Experiment 2). This suggests that it is important whether the attention system is activated directly (within a modality) or indirectly (between modalities). Increasing the cue validity from 50% to 80% influenced performance only in the localization task (Experiment 4). These findings are interpreted as indicative of modality-specific but interacting attention mechanisms. The results of Experiments 5 and 6 (up/down discrimination tasks) also show cross-modal cueing but not with visual cues and auditory targets. Furthermore, there was no inhibition of return in any condition. This suggests that some cueing effects might be task dependent.

15.
Behavioral studies of multisensory integration in motion perception have focused on the particular case of visual and auditory signals. Here, we addressed a new case: audition and touch. In Experiment 1, we tested the effects of an apparent motion stream presented in an irrelevant modality (audition or touch) on the perception of apparent motion streams in the other modality (touch or audition, respectively). We found significant congruency effects (lower performance when the direction of motion in the irrelevant modality was incongruent with the direction of the target) for the two possible modality combinations. This congruency effect was asymmetrical, with tactile motion distractors having a stronger influence on auditory motion perception than vice versa. In Experiment 2, we used auditory motion targets and tactile motion distractors while participants adopted one of two possible postures: arms uncrossed or arms crossed. The effects of tactile motion on auditory motion judgments were replicated in the arms-uncrossed posture, but they dissipated in the arms-crossed posture. The implications of these results are discussed in light of current findings regarding the representation of tactile and auditory space.

16.
The modality by which object azimuths (directions) are presented affects learning of multiple locations. In Experiment 1, participants learned sets of three and five object azimuths specified by a visual virtual environment, spatial audition (3D sound), or auditory spatial language. Five azimuths were learned faster when specified by spatial modalities (vision, audition) than by language. Experiment 2 equated the modalities for proprioceptive cues and eliminated spatial cues unique to vision (optic flow) and audition (differential binaural signals). There remained a learning disadvantage for spatial language. We attribute this result to the cost of indirect processing from words to spatial representations.

17.
Using a cue-target paradigm, we investigated the interaction between endogenous and exogenous orienting in cross-modal attention. A peripheral (exogenous) cue was presented after a central (endogenous) cue with a variable time interval. The endogenous and exogenous cues were presented in one sensory modality (auditory in Experiment 1 and visual in Experiment 2) whereas the target was presented in another modality. Both experiments showed a significant endogenous cuing effect (longer reaction times in the invalid condition than in the valid condition). However, exogenous cuing produced a facilitatory effect in both experiments in response to the target when endogenous cuing was valid, but it elicited a facilitatory effect in Experiment 1 and an inhibitory effect in Experiment 2 when endogenous cuing was invalid. These findings indicate that endogenous and exogenous cuing can co-operate in orienting attention to the crossmodal target. Moreover, the interaction between endogenous and exogenous orienting of attention is modulated by the modalities of the cue and the target.
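The cuing effect reported in these studies is the arithmetic difference between mean reaction times on invalid and valid trials. A minimal sketch of the computation (the sample reaction times below are hypothetical, not data from the study):

```python
from statistics import mean

def cueing_effect(trials):
    """Cueing effect = mean RT (invalid trials) - mean RT (valid trials).
    A positive value indicates facilitation at the validly cued location."""
    valid = [rt for condition, rt in trials if condition == "valid"]
    invalid = [rt for condition, rt in trials if condition == "invalid"]
    return mean(invalid) - mean(valid)

# Hypothetical reaction times in milliseconds
trials = [("valid", 420), ("valid", 440), ("invalid", 480), ("invalid", 500)]
print(cueing_effect(trials))  # 60
```

A negative value of the same quantity would indicate inhibition, as in inhibition-of-return effects at longer cue-target intervals.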

18.
There is currently a great deal of interest regarding the possible existence of a crossmodal attentional blink (AB) between audition and vision. The majority of evidence now suggests that no such crossmodal deficit exists unless a task switch is introduced. We report two experiments designed to investigate the existence of a crossmodal AB between vision and touch. Two masked targets were presented successively at variable interstimulus intervals. Participants had to respond either to both targets (experimental condition) or to just the second target (control condition). In Experiment 1, the order of target modality was blocked, and an AB was demonstrated when visual targets preceded tactile targets, but not when tactile targets preceded visual targets. In Experiment 2, target modality was mixed randomly, and a significant crossmodal AB was demonstrated in both directions between vision and touch. The contrast between our visuotactile results and those of previous audiovisual studies is discussed, as are the implications for current theories of the AB.

19.
Recently, Guzman-Martinez, Ortega, Grabowecky, Mossbridge, and Suzuki (Current Biology : CB, 22(5), 383–388, 2012) reported that observers could systematically match auditory amplitude modulations and tactile amplitude modulations to visual spatial frequencies, proposing that these cross-modal matches produced automatic attentional effects. Using a series of visual search tasks, we investigated whether informative auditory, tactile, or bimodal cues can guide attention toward a visual Gabor of matched spatial frequency (among others with different spatial frequencies). These cues improved visual search for some but not all frequencies. Auditory cues improved search only for the lowest and highest spatial frequencies, whereas tactile cues were more effective and frequency specific, although less effective than visual cues. Importantly, although tactile cues could produce efficient search when informative, they had no effect when uninformative. This suggests that cross-modal frequency matching occurs at a cognitive rather than sensory level and, therefore, influences visual search through voluntary, goal-directed behavior, rather than automatic attentional capture.

20.
Previous probe-signal studies of auditory spatial attention have shown faster responses to sounds at an expected versus an unexpected location, making no distinction between the use of interaural time difference (ITD) cues and interaural level difference (ILD) cues. In 5 experiments, performance on a same-different spatial discrimination task was used in place of the reaction time metric, and sounds, presented over headphones, were lateralized only by an ITD. In all experiments, performance was better for signals lateralized on the expected side of the head, supporting the conclusion that ITDs can be used as a basis for covert orienting. The performance advantage generalized to all sounds within the spatial focus and was not dissipated by a trial-by-trial rove in frequency or by a rove in spectral profile. Successful use by the listeners of a cross-modal, centrally positioned visual cue provided evidence for top-down attentional control.
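The ITD cue used to lateralize these headphone-presented sounds can be approximated with the classic Woodworth spherical-head model, ITD = (r/c)(θ + sin θ). A minimal sketch, assuming standard textbook values for head radius and the speed of sound (these parameters are not taken from the study):

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (in seconds) for a distant
    source, using the Woodworth spherical-head model:
        ITD = (r / c) * (theta + sin(theta)),  theta in radians."""
    theta = math.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + math.sin(theta))

# A source directly to one side (90 degrees azimuth) yields an ITD of
# roughly 650 microseconds, near the commonly cited human maximum.
print(woodworth_itd(90) * 1e6)
```

Headphone studies like the one above impose such ITDs directly on the two ear signals, which lateralizes the sound image toward the leading ear without any free-field acoustics.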


Copyright©北京勤云科技发展有限公司  京ICP备09084417号