Similar Articles
20 similar articles found (search time: 15 ms)
1.
A perception of coherent motion can be obtained in an otherwise ambiguous or illusory visual display by directing one's attention to a feature and tracking it. We demonstrate an analogous auditory effect in two separate sets of experiments. The temporal dynamics associated with the attention-dependent auditory motion closely matched those previously reported for attention-based visual motion. Since attention-based motion mechanisms appear to exist in both modalities, we also tested for multimodal (audiovisual) attention-based motion, using stimuli composed of interleaved visual and auditory cues. Although subjects were able to track a trajectory using cues from both modalities, no one spontaneously perceived "multimodal motion" across both visual and auditory cues. Rather, they reported motion perception only within each modality, thereby revealing a spatiotemporal limit on putative cross-modal motion integration. Together, results from these experiments demonstrate the existence of attention-based motion in audition, extending current theories of attention-based mechanisms from visual to auditory systems.

2.
Two experiments examined any inhibition-of-return (IOR) effects from auditory cues and from preceding auditory targets upon reaction times (RTs) for detecting subsequent auditory targets. Auditory RT was delayed if the preceding auditory cue was on the same side as the target, but was unaffected by the location of the auditory target from the preceding trial, suggesting that response inhibition for the cue may have produced its effects. By contrast, visual detection RT was inhibited by the ipsilateral presentation of a visual target on the preceding trial. In a third experiment, targets could be unpredictably auditory or visual, and no peripheral cues intervened. Both auditory and visual detection RTs were now delayed following an ipsilateral versus contralateral target in either modality on the preceding trial, even when eye position was monitored to ensure central fixation throughout. These data suggest that auditory target-target IOR arises only when target modality is unpredictable. They also provide the first unequivocal evidence for cross-modal IOR, since, unlike other recent studies (e.g., Reuter-Lorenz, Jha, & Rosenquist, 1996; Tassinari & Berlucchi, 1995; Tassinari & Campara, 1996), the present cross-modal effects cannot be explained in terms of response inhibition for the cue. The results are discussed in relation to neurophysiological studies and audiovisual links in saccade programming.

3.
于薇  王爱君  张明 《心理学报》2017,(2):164-173
The auditory dominance effect refers to the phenomenon in multisensory integration whereby information in the auditory channel is processed preferentially and thereby dominates information from other sensory channels. Using the classic sound-induced flash illusion paradigm, two experiments manipulated the allocation of attentional resources and task difficulty to examine how actively attending to the auditory stimuli affects the sound-induced flash illusion, and whether task difficulty modulates the illusion. The results showed that (1) the fission illusion was affected by the degree of attentional resource allocation, whereas the fusion illusion was not; and (2) task difficulty affected neither the fission illusion nor the fusion illusion. These findings indicate that divided attention can modulate the fission illusion underlying auditory dominance, and that this dominance effect is independent of task difficulty.

4.
Spatial attention and audiovisual interactions in apparent motion
In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either by means of a blocked design or by predictive peripheral cues, and exogenously by means of nonpredictive peripheral cues. The results of 3 experiments demonstrate a reduction in the magnitude of the cross-modal dynamic capture effect on cued trials compared with uncued trials. The introduction of neutral cues (Experiments 4 and 5) confirmed the existence of both attentional costs and benefits. This attention-related reduction in cross-modal dynamic capture was larger when a peripheral cue was used compared with when attention was oriented in a purely endogenous manner. In sum, the results suggest that spatial attention reduces illusory binding by facilitating the segregation of unimodal signals, thereby modulating audiovisual interactions in information processing. Thus, the effect of spatial attention occurs prior to or at the same time as cross-modal interactions involving motion information.

5.
After repeated presentations of a long inspection tone (800 or 1,000 msec), a test tone of intermediate duration (600 msec) appeared shorter than it would otherwise appear. A short inspection tone (200 or 400 msec) tended to increase the apparent length of the intermediate test tone. Thus, a negative aftereffect of perceived auditory duration occurred, and a similar aftereffect occurred in the visual modality. These aftereffects, each involving a single sensory dimension, are simple aftereffects. The following procedures produced contingent aftereffects of perceived duration. A pair of lights, the first short and the second long, was presented repeatedly during an inspection period. When a pair of test lights of intermediate duration was then presented, the first member of the pair appeared longer in relation to the second. A similar aftereffect occurred in the auditory modality. In these latter aftereffects, the perceived duration of a test light or tone is contingent (dependent) on its temporal order, first or second, within a pair of test stimuli. An experiment designed to test the possibility of cross-modal transfer of contingent aftereffects between audition and vision found no significant cross-modal aftereffects.

6.
Whether information perceived without awareness can affect overt performance, and whether such effects can cross sensory modalities, remains a matter of debate. Whereas influence of unconscious visual information on auditory perception has been documented, the reverse influence has not been reported. In addition, previous reports of unconscious cross-modal priming relied on procedures in which contamination of conscious processes could not be ruled out. We present the first report of unconscious cross-modal priming when the unaware prime is auditory and the test stimulus is visual. We used the process-dissociation procedure [Debner, J. A., & Jacoby, L. L. (1994). Unconscious perception: Attention, awareness and control. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 304-317] which allowed us to assess the separate contributions of conscious and unconscious perception of a degraded prime (either seen or heard) to performance on a visual fragment-completion task. Unconscious cross-modal priming (auditory prime, visual fragment) was significant and of a magnitude similar to that of unconscious within-modality priming (visual prime, visual fragment). We conclude that cross-modal integration, at least between visual and auditory information, is more symmetrical than previously shown, and does not require conscious mediation.

7.
In the McGurk effect, visual information specifying a speaker's articulatory movements can influence auditory judgments of speech. In the present study, we attempted to find an analogue of the McGurk effect by using nonspeech stimuli: the discrepant audiovisual tokens of plucks and bows on a cello. The results of an initial experiment revealed that subjects' auditory judgments were influenced significantly by the visual pluck and bow stimuli. However, a second experiment in which speech syllables were used demonstrated that the visual influence on consonants was significantly greater than the visual influence observed for pluck-bow stimuli. This result could be interpreted to suggest that the nonspeech visual influence was not a true McGurk effect. In a third experiment, visual stimuli consisting of the words "pluck" and "bow" were found to have no influence over auditory pluck and bow judgments. This result could suggest that the nonspeech effects found in Experiment 1 were based on the audio and visual information having an ostensive lawful relation to the specified event. These results are discussed in terms of motor-theory, ecological, and FLMP approaches to speech perception.

8.
Modality specificity in priming is taken as evidence for independent perceptual systems. However, Easton, Greene, and Srinivas (1997) showed that visual and haptic cross-modal priming is comparable in magnitude to within-modal priming. Where appropriate, perceptual systems might share like information. To test this, we assessed priming and recognition for visual and auditory events, within and across modalities. On the visual test, auditory study resulted in no priming. On the auditory priming test, visual study resulted in priming that was only marginally less than within-modal priming. The priming results show that visual study facilitates identification on both visual and auditory tests, but auditory study only facilitates performance on the auditory test. For both recognition tests, within-modal recognition exceeded cross-modal recognition. The results have two novel implications for the understanding of perceptual priming: First, we introduce visual and auditory priming for spatio-temporal events as a new priming paradigm chosen for its ecological validity and potential for information exchange. Second, we propose that the asymmetry of the cross-modal priming observed here may reflect the capacity of these perceptual modalities to provide cross-modal constraints on ambiguity. We argue that visual perception might inform and constrain auditory processing, while auditory perception corresponds to too many potential visual events to usefully inform and constrain visual perception.

9.
Virtual reality technology creates immersive experiences by providing visual, auditory, and haptic information, but haptic feedback faces numerous technical bottlenecks that limit natural interaction in virtual reality. Pseudo-haptic techniques based on multisensory illusions can strengthen and enrich haptic sensations with information from other channels, and are currently an effective way to optimize the haptic experience in virtual environments. This paper focuses on roughness, one of the most important dimensions of touch, in an attempt to offer a new approach to the limitations of haptic feedback in virtual reality. We discuss the integration of visual, auditory, and tactile channels in roughness perception, analyze how visual cues (surface texture density, surface lighting and shading, control-display ratio) and auditory cues (pitch/frequency, loudness) influence perceived tactile roughness, and summarize current methods that manipulate these factors to alter roughness perception. Finally, we consider how visual, auditory, and tactile information in virtual reality may differ from the real world in presentation and perceptual integration when pseudo-haptic feedback is used, and propose applicable methods for improving the haptic experience as well as directions for future research.

10.
There is now convincing evidence that an involuntary shift of spatial attention to a stimulus in one modality can affect the processing of stimuli in other modalities, but inconsistent findings across different paradigms have led to controversy. Such inconsistencies have important implications for theories of cross-modal attention. The authors investigated why orienting attention to a visual event sometimes influences responses to subsequent sounds and why it sometimes fails to do so. They examined visual-cue-on-auditory-target effects in two paradigms--implicit spatial discrimination (ISD) and orthogonal cuing (OC)--that have yielded conflicting findings in the past. Consistent with previous research, visual cues facilitated responses to same-side auditory targets in the ISD paradigm but not in the OC paradigm. Furthermore, in the ISD paradigm, visual cues facilitated responses to auditory targets only when the targets were presented directly at the cued location, not when they appeared above or below the cued location. This pattern of results confirms recent claims that visual cues fail to influence responses to auditory targets in the OC paradigm because the targets fall outside the focus of attention.

11.
In a previous study, Ward (1994) reported that spatially uninformative visual cues orient auditory attention but that spatially uninformative auditory cues fail to orient visual attention. This cross-modal asymmetry is consistent with other intersensory perceptual phenomena that are dominated by the visual modality (e.g., ventriloquism). However, Spence and Driver (1997) found exactly the opposite asymmetry under different experimental conditions and with a different task. In spite of the several differences between the two studies, Spence and Driver (see also Driver & Spence, 1998) argued that Ward's findings might have arisen from response-priming effects, and that the cross-modal asymmetry they themselves reported, in which auditory cues affect responses to visual targets but not vice versa, is in fact the correct result. The present study investigated cross-modal interactions in stimulus-driven spatial attention orienting under Ward's complex cue environment conditions using an experimental procedure that eliminates response-priming artifacts. The results demonstrate that the cross-modal asymmetry reported by Ward (1994) does occur when the cue environment is complex. We argue that strategic effects in cross-modal stimulus-driven orienting of attention are responsible for the opposite asymmetries found by Ward and by Spence and Driver (1997).

12.
Recently, Guzman-Martinez, Ortega, Grabowecky, Mossbridge, and Suzuki (Current Biology, 22(5), 383-388, 2012) reported that observers could systematically match auditory amplitude modulations and tactile amplitude modulations to visual spatial frequencies, proposing that these cross-modal matches produced automatic attentional effects. Using a series of visual search tasks, we investigated whether informative auditory, tactile, or bimodal cues can guide attention toward a visual Gabor of matched spatial frequency (among others with different spatial frequencies). These cues improved visual search for some but not all frequencies. Auditory cues improved search only for the lowest and highest spatial frequencies, whereas tactile cues were more effective and frequency specific, although less effective than visual cues. Importantly, although tactile cues could produce efficient search when informative, they had no effect when uninformative. This suggests that cross-modal frequency matching occurs at a cognitive rather than sensory level and, therefore, influences visual search through voluntary, goal-directed behavior, rather than automatic attentional capture.

13.
Taking a cross-modal perspective, this study used a magnitude-comparison task to examine the characteristics of the spatial representation of number under unimodal visual, unimodal auditory, and cross-modal conditions, as well as the mutual influences arising during representation. The results showed a SNARC effect in both the visual and the auditory modality. In the cross-modal task, regardless of whether the priming modality was visual or auditory, when the numerical magnitude information in the priming modality was congruent with, or unrelated to, that in the primary modality, the SNARC effect in the primary modality did not change significantly; but when the magnitude information in the priming modality was incongruent with the primary modality, the SNARC effect in the primary modality was significantly affected, being reduced or eliminated. These results further demonstrate that the SNARC effect is context-dependent, and show that in cross-modal spatial-numerical representation, auditory numerical information influences the visual spatial representation of number more strongly than visual numerical information influences the auditory one.

14.
The effects of stimulus duration and spatial separation on the illusion of apparent motion in the auditory modality were examined. Two narrow-band noise sources (40 dB, A-weighted) were presented through speakers separated in space by 2.5°, 5°, or 10°, centered about the subject's midline. The duration of each stimulus was 5, 10, or 50 msec. On each trial, the sound pair was temporally separated by 1 of 10 interstimulus onset intervals (ISOIs): 0, 2, 4, 6, 8, 10, 15, 20, 50, or 70 msec. Five subjects were tested in nine trial blocks; each block represented a particular spatial-separation-duration combination. Within a trial block, each ISOI was presented 30 times, in random order. Subjects were instructed to listen to the stimulus sequence and classify their perception of the sound into one of five categories: single sound, simultaneous sounds, continuous motion, broken motion, or successive sounds. Each subject was also required to identify the location of the first-occurring stimulus (left or right). The percentage of continuous-motion responses was significantly affected by the ISOI [F(9,36) = 5.67, p < .001], the duration × ISOI interaction [F(18,72) = 3.54, p < .0001], and the separation × duration × ISOI interaction [F(36,144) = 1.51, p < .05]. The results indicate that a minimum duration is required for the perception of auditory apparent motion. Little or no motion was reported at durations of 10 msec or less. At a duration of 50 msec, motion was reported most often for ISOIs of 20-50 msec. The effect of separation appeared to be limited to durations and ISOIs during which little motion was perceived.

15.
Several studies have shown that the direction in which a visual apparent motion stream moves can influence the perceived direction of an auditory apparent motion stream (an effect known as cross-modal dynamic capture). However, little is known about the role that intramodal perceptual grouping processes play in the multisensory integration of motion information. The present study was designed to investigate the time course of any modulation of the cross-modal dynamic capture effect by the nature of the perceptual grouping taking place within vision. Participants were required to judge the direction of an auditory apparent motion stream while trying to ignore visual apparent motion streams presented in a variety of different configurations. Our results demonstrate that the cross-modal dynamic capture effect was influenced more by visual perceptual grouping when the conditions for intramodal perceptual grouping were set up prior to the presentation of the audiovisual apparent motion stimuli. However, no such modulation occurred when the visual perceptual grouping manipulation was established at the same time as or after the presentation of the audiovisual stimuli. These results highlight the importance of the unimodal perceptual organization of sensory information to the manifestation of multisensory integration.

16.
Three experiments investigated cross-modal links between touch, audition, and vision in the control of covert exogenous orienting. In the first two experiments, participants made speeded discrimination responses (continuous vs. pulsed) for tactile targets presented randomly to the index finger of either hand. Targets were preceded at a variable stimulus onset asynchrony (150, 200, or 300 msec) by a spatially uninformative cue that was either auditory (Experiment 1) or visual (Experiment 2) on the same or opposite side as the tactile target. Tactile discriminations were more rapid and accurate when cue and target occurred on the same side, revealing cross-modal covert orienting. In Experiment 3, spatially uninformative tactile cues were presented prior to randomly intermingled auditory and visual targets requiring an elevation discrimination response (up vs. down). Responses were significantly faster for targets in both modalities when presented ipsilateral to the tactile cue. These findings demonstrate that the peripheral presentation of spatially uninformative auditory and visual cues produces cross-modal orienting that affects touch, and that tactile cues can also produce cross-modal covert orienting that affects audition and vision.

17.
There is growing interest in the effect of sound on visual motion perception. One model involves the illusion created when two identical objects moving towards each other on a two-dimensional visual display can be seen to either bounce off or stream through each other. Previous studies show that the large bias normally seen toward the streaming percept can be modulated by the presentation of an auditory event at the moment of coincidence. However, no reports to date provide sufficient evidence to indicate whether the sound bounce-inducing effect is due to a perceptual binding process or merely to an explicit inference resulting from the transient auditory stimulus resembling a physical collision of two objects. In the present study, we used a novel experimental design in which a subliminal sound was presented either 150 ms before, at, or 150 ms after the moment of coincidence of two disks moving towards each other. The results showed that there was an increased perception of bouncing (rather than streaming) when the subliminal sound was presented at or 150 ms after the moment of coincidence compared to when no sound was presented. These findings provide the first empirical demonstration that activation of the human auditory system without reaching consciousness affects the perception of an ambiguous visual motion display.

18.
Strong cross-modal interactions exist between visual and auditory processing. The relative contributions of perceptual versus decision-related processes to such interactions are only beginning to be understood. We used methodological and statistical approaches to control for potential decision-related contributions such as response interference, decisional criterion shift, and strategy selection. Participants were presented with rising-, falling-, and constant-amplitude sounds and were asked to detect change (increase or decrease) in sound amplitude while ignoring an irrelevant visual cue of a disk that grew, shrank, or stayed constant in size. Across two experiments, testing context was manipulated by varying the grouping of visual cues during testing, and cross-modal congruency showed independent perceptual and decision-related effects. Whereas a change in testing context greatly affected criterion shifts, cross-modal effects on perceptual sensitivity remained relatively consistent. In general, participants were more sensitive to increases in sound amplitude and less sensitive to sounds paired with dynamic visual cues. As compared with incongruent visual cues, congruent cues enhanced detection of amplitude decreases, but not increases. These findings suggest that the relative contributions of perceptual and decisional processing and the impacts of these processes on cross-modal interactions can vary significantly depending on asymmetries in within-modal processing, as well as consistencies in cross-modal dynamics.

19.
Strybel, T. Z., & Vatakis, A. (2004). Perception, 33(9), 1033-1048.
Unimodal auditory and visual apparent motion (AM) and bimodal audiovisual AM were investigated to determine the effects of crossmodal integration on motion perception and direction-of-motion discrimination in each modality. To determine the optimal stimulus onset asynchrony (SOA) ranges for motion perception and direction discrimination, we initially measured unimodal visual and auditory AMs using one of four durations (50, 100, 200, or 400 ms) and ten SOAs (40-450 ms). In the bimodal conditions, auditory and visual AM were measured in the presence of temporally synchronous, spatially displaced distractors that were either congruent (moving in the same direction) or conflicting (moving in the opposite direction) with respect to target motion. Participants reported whether continuous motion was perceived and its direction. With unimodal auditory and visual AM, motion perception was affected differently by stimulus duration and SOA in the two modalities, while the opposite was observed for direction of motion. In the bimodal audiovisual AM condition, discriminating the direction of motion was affected only in the case of an auditory target. The perceived direction of auditory but not visual AM was reduced to chance levels when the crossmodal distractor direction was conflicting. Conversely, motion perception was unaffected by the distractor direction and, in some cases, the mere presence of a distractor facilitated movement perception.

20.
The tendency for observers to overestimate slant is not simply a visual illusion but can also occur with another sense, such as proprioception, as in the case of overestimation of self-body tilt. In the present study, distortion in the perception of body tilt was examined as a function of gender and multisensory spatial information. We used a full-body-tilt apparatus to test when participants experienced being tilted by 45 degrees, with visual and auditory cues present or absent. Body tilt was overestimated in all conditions, with the largest bias occurring when there were no visual or auditory cues. Both visual and auditory information independently improved performance. We also found a gender difference, with women exhibiting more bias in the absence of auditory information and more improvement when auditory information was added. The findings support the view that perception of body tilt is multisensory and that women more strongly utilize auditory information in such multisensory spatial judgments.
