Similar articles
20 similar articles were retrieved (search time: 31 ms)
1.
When you are looking for an object, does hearing its characteristic sound make you find it more quickly? Our recent results supported this possibility by demonstrating that when a cat target, for example, was presented among other objects, a simultaneously presented “meow” sound (containing no spatial information) reduced the manual response time for visual localization of the target. To extend these results, we determined how rapidly an object-specific auditory signal can facilitate target detection in visual search. On each trial, participants fixated a specified target object as quickly as possible. The target’s characteristic sound speeded the saccadic search time within 215–220 msec and also guided the initial saccade toward the target, compared with presentation of a distractor’s sound or with no sound. These results suggest that object-based auditory–visual interactions rapidly increase the target object’s salience in visual search.

2.
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory–visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential association predictions. We also recently showed that the underlying object-based auditory–visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory–visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory–visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory–visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects.
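The contrast drawn here between increased detectability and a simple bias toward "gun-present" responses is the standard signal-detection distinction between sensitivity (d′) and criterion. A minimal Python sketch of how the two measures are computed; the hit and false-alarm rates below are hypothetical and purely for illustration, not data from the study:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Return (d_prime, criterion) computed from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))  # response bias
    return d_prime, criterion

# Hypothetical rates, purely for illustration: higher detectability shows up as
# a larger d_prime; a pure shift toward "gun present" responses would instead
# raise both the hit and false-alarm rates and move only the criterion.
print(sdt_measures(hit_rate=0.80, false_alarm_rate=0.10))  # no gun sound
print(sdt_measures(hit_rate=0.90, false_alarm_rate=0.10))  # gun sound played
```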

3.
The modality by which object azimuths (directions) are presented affects learning of multiple locations. In Experiment 1, participants learned sets of three and five object azimuths specified by a visual virtual environment, spatial audition (3D sound), or auditory spatial language. Five azimuths were learned faster when specified by spatial modalities (vision, audition) than by language. Experiment 2 equated the modalities for proprioceptive cues and eliminated spatial cues unique to vision (optic flow) and audition (differential binaural signals). There remained a learning disadvantage for spatial language. We attribute this result to the cost of indirect processing from words to spatial representations.

4.
A driving simulator was used to examine the effects on driving performance of auditory cues in an in-vehicle information search task. Drivers' distraction by the search tasks was measured on a peripheral detection task. The difficulty of the search task was systematically varied to test the distraction caused by a quantified visual load. Fifty-eight participants completed the task. Performance on both search tasks and peripheral detection tasks was measured by mean response time and percent error. Analyses indicated that in-vehicle information search performance can be severely degraded when a target is located within a group of diverse distractors. Inclusion of an auditory cue in the visual search increased the mean response time as a result of a change in modality from auditory to visual. Inclusion of such an auditory cue seemed to influence distraction as measured by performance on the peripheral detection task; accuracy was lower when auditory cues were provided, and responses were slower when no auditory cues were provided. Distraction by the auditory cue varied according to the difficulty of the search task.

5.
Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via 21 loudspeakers mounted horizontally (from 80° on the left to 80° on the right). Participants had to localize the target either by using a swivel hand-pointer or by head-pointing. Individual lateral preferences of eye, ear, hand, and foot were obtained using a questionnaire. With both pointing methods, participants showed a bias in sound localization that was to the side contralateral to the preferred hand, an effect that was unrelated to their overall precision. This partially parallels findings in the visual modality, as left-handers typically have a more rightward bias in visual line bisection compared with right-handers. Despite the differences in neural processing of auditory and visual spatial information, these findings show similar effects of lateral preference on auditory and visual spatial perception. This suggests that supramodal neural processes are involved in the mechanisms generating laterality in space perception.
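The dissociation reported here between a directional bias and overall precision is usually computed from the signed localization errors: the mean signed error gives the bias, and its spread gives the precision. A minimal sketch with hypothetical pointing responses (not data from the study):

```python
import statistics

def bias_and_precision(target_deg, responses_deg):
    """Constant error (signed bias) and variable error (precision) in degrees."""
    errors = [r - target_deg for r in responses_deg]  # positive = rightward error
    return statistics.mean(errors), statistics.stdev(errors)

# Hypothetical responses to a loudspeaker at +20 deg azimuth: a consistent
# leftward (contralateral) bias of about -2 deg with roughly 1 deg variability.
print(bias_and_precision(20.0, [18.5, 17.0, 19.0, 16.5, 18.0]))
```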

6.
Correctly integrating sensory information across different modalities is a vital task, yet there are illusions which cause the incorrect localization of multisensory stimuli. A common example of these phenomena is the "ventriloquism effect". In this illusion, the localization of auditory signals is biased by the presence of visual stimuli. For instance, when a light and sound are simultaneously presented, observers may erroneously locate the sound closer to the light than its actual position. While this phenomenon has been studied extensively in azimuth at a single depth, little is known about the interactions of stimuli at different depth planes. In the current experiment, virtual acoustics and stereo-image displays were used to test the integration of visual and auditory signals across azimuth and depth. The results suggest that greater variability in the localization of sounds in depth may lead to a greater bias from visual stimuli in depth than in azimuth. These results offer interesting implications for understanding multisensory integration.
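The suggestion that larger variability in auditory depth estimates leads to a larger visual bias in depth is what a reliability-weighted (maximum-likelihood) integration account would predict. As a point of reference only, and not a model fitted in the study, the standard weighting is:

```latex
% Reliability-weighted combination of a visual location estimate s_V and an
% auditory location estimate s_A with variances \sigma_V^2 and \sigma_A^2:
\hat{s} \;=\; w_V\, s_V + w_A\, s_A,
\qquad
w_V \;=\; \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_A^2},
\qquad
w_A \;=\; 1 - w_V .
% The larger \sigma_A^2 is (as for auditory depth), the closer \hat{s} lies to
% the visual estimate s_V, i.e., the stronger the ventriloquist bias.
```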

7.
Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.

8.
In this study, we show that the contingent auditory motion aftereffect is strongly influenced by visual motion information. During an induction phase, participants listened to rightward-moving sounds with falling pitch alternated with leftward-moving sounds with rising pitch (or vice versa). Auditory aftereffects (i.e., a shift in the psychometric function for unimodal auditory motion perception) were bigger when a visual stimulus moved in the same direction as the sound than when no visual stimulus was presented. When the visual stimulus moved in the opposite direction, aftereffects were reversed and thus became contingent upon visual motion. When visual motion was combined with a stationary sound, no aftereffect was observed. These findings indicate that there are strong perceptual links between the visual and auditory motion-processing systems.
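The aftereffect here is quantified as a horizontal shift of the psychometric function for auditory motion direction. A minimal sketch of estimating that shift by fitting a logistic function before and after induction, assuming SciPy is available and using made-up response proportions:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Proportion of 'rightward' responses as a function of motion strength x."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

# Made-up data: motion strength (negative = leftward) and the proportion of
# 'rightward' responses measured before and after induction.
x = np.array([-1.0, -0.5, -0.25, 0.0, 0.25, 0.5, 1.0])
p_before = np.array([0.05, 0.15, 0.35, 0.50, 0.65, 0.85, 0.95])
p_after = np.array([0.02, 0.08, 0.20, 0.30, 0.50, 0.75, 0.90])

params_before, _ = curve_fit(logistic, x, p_before, p0=[0.0, 0.3])
params_after, _ = curve_fit(logistic, x, p_after, p0=[0.0, 0.3])
print("aftereffect (shift of the psychometric function):",
      params_after[0] - params_before[0])
```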

9.
The categorization and identification of previously ignored visual or auditory stimuli is typically slowed down, a phenomenon that has been called the negative priming effect and can be explained by the episodic retrieval of response-inadequate prime information and/or an inhibitory model. A similar after-effect has been found in visuospatial tasks: participants are slowed down in localizing a visual stimulus that appears at a previously ignored location. In the auditory modality, however, such an after-effect of ignoring a sound at a specific location has never been reported. Instead, participants are impaired in their localization performance when the sound at the previously ignored location changes identity, a finding which is compatible with the so-called feature-mismatch hypothesis. Here, we describe the properties of auditory spatial in contrast to visuospatial negative priming and report two experiments that specify the nature of this auditory after-effect. Experiment 1 shows that the detection of identity-location mismatches is a genuinely auditory phenomenon that can be replicated even when the sound sources are invisible. Experiment 2 reveals that the detection of sound-identity mismatches in the probe depends on the processing demands in the prime. This finding implies that the localization of irrelevant sound sources is not the inevitable consequence of processing the auditory prime scenario but depends on the difficulty of the target search process among distractor sounds.

10.
We examined the characteristics of audiovisual emotional integration by asking participants to judge the relationship between the emotional valences of simultaneously presented visual and auditory information. In Experiment 1 the lexical valence and the prosodic valence did not conflict; in Experiment 2 they conflicted. Both experiments consistently found that when the facial expression was positive, participants judged the relationship between the audiovisual emotional information more accurately. Experiment 2 further found that when the facial expression was negative, participants judged the audiovisual relationship faster on the basis of semantic cues than of prosodic cues. These results suggest that when audiovisual information is presented simultaneously, the visual information may be processed first and then influence the subsequent processing of the audiovisual relationship.

11.
During navigation, humans combine visual information from their surroundings with body-based information from the translational and rotational components of their movement. Theories of navigation focus on the role of visual and rotational body-based information, even though experimental evidence shows they are not sufficient for complex spatial tasks. To investigate the contribution of all three sources of information, we asked participants to search a computer-generated virtual room for targets. Participants were provided with only visual information or with visual information supplemented with body-based information for all movement (walk group) or rotational movement (rotate group). The walk group performed the task with near-perfect efficiency, irrespective of whether a rich or impoverished visual scene was provided. The visual-only and rotate groups were significantly less efficient and frequently searched parts of the room at least twice. These results suggest that full physical movement plays a critical role in navigational search, but only moderate visual detail is required.

12.
It is common to judge the duration of an audiovisual event, and yet it remains controversial how the judgment of duration is affected by signals from other modalities. We used an oddball paradigm to examine the effect of sound on the judgment of visual duration and that of a visual object on the judgment of an auditory duration. In a series of standards and oddballs, the participants compared the duration of the oddballs to that of the standards. Results showed asymmetric cross-modal effects, supporting the auditory dominance hypothesis: a sound extends the perceived visual duration, whereas a visual object has no effect on perceived auditory duration. The possible mechanisms (pacemaker or mode switch) proposed in the Scalar Expectancy Theory [Gibbon, J., Church, R. M., & Meck, W. H. (1984). Scalar timing in memory. In J. Gibbon & L. Allan (Eds.), Annals of the New York Academy of Sciences: Vol. 423. Timing and time perception (pp. 52–77). New York: New York Academy of Sciences] were examined using different standard durations. We conclude that sound increases the perceived visual duration by accelerating the pulse rate in the visual pacemaker.
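The pacemaker interpretation offered here, drawn from scalar expectancy theory, can be illustrated with a toy pacemaker-accumulator simulation. The pulse rates below are arbitrary assumptions for illustration, not values estimated in the study:

```python
import random

def accumulated_pulses(duration_s, pulse_rate_hz, seed=0):
    """Count pacemaker pulses (exponential inter-pulse intervals) within a fixed duration."""
    rng = random.Random(seed)
    count, t = 0, 0.0
    while True:
        t += rng.expovariate(pulse_rate_hz)  # time to the next pulse
        if t > duration_s:
            return count
        count += 1

# Arbitrary rates: the same 0.5 s visual stimulus accumulates more pulses, and
# is therefore judged as longer, when a concurrent sound speeds the pacemaker.
print(accumulated_pulses(0.5, pulse_rate_hz=40))  # visual stimulus alone
print(accumulated_pulses(0.5, pulse_rate_hz=50))  # visual stimulus with sound
```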

13.
In this study the ability of newborn infants to learn arbitrary auditory–visual associations in the absence versus presence of amodal (redundant) and contingent information was investigated. In the auditory-noncontingent condition 2-day-old infants were familiarized to two alternating visual stimuli (differing in colour and orientation), each accompanied by its ‘own’ sound: when the visual stimulus was presented the sound was continuously presented, independently of whether the infant looked at the visual stimulus. In the auditory-contingent condition the auditory stimulus was presented only when the infant looked at the visual stimulus: thus, presentation of the sound was contingent upon infant looking. On the post-familiarization test trials attention recovered strongly to a novel auditory–visual combination in the auditory-contingent condition, but remained low, and indistinguishable from attention to the familiar combination, in the auditory-noncontingent condition. These findings are a clear demonstration that newborn infants’ learning of arbitrary auditory–visual associations is constrained and guided by the presence of redundant (amodal) contingent information. The findings give strong support to Bahrick’s theory of early intermodal perception.

14.
We report a series of experiments designed to assess the effect of audiovisual semantic congruency on the identification of visually-presented pictures. Participants made unspeeded identification responses concerning a series of briefly-presented, and then rapidly-masked, pictures. A naturalistic sound was sometimes presented together with the picture at a stimulus onset asynchrony (SOA) that varied between 0 and 533 ms (auditory lagging). The sound could be semantically congruent, semantically incongruent, or else neutral (white noise) with respect to the target picture. The results showed that when the onset of the picture and sound occurred simultaneously, a semantically-congruent sound improved, whereas a semantically-incongruent sound impaired, participants’ picture identification performance, as compared to performance in the white-noise control condition. A significant facilitatory effect was also observed at SOAs of around 300 ms, whereas no such semantic congruency effects were observed at the longest interval (533 ms). These results therefore suggest that the neural representations associated with visual and auditory stimuli can interact in a shared semantic system. Furthermore, this crossmodal semantic interaction is not constrained by the need for the strict temporal coincidence of the constituent auditory and visual stimuli. We therefore suggest that audiovisual semantic interactions likely occur in a short-term buffer which rapidly accesses, and temporarily retains, the semantic representations of multisensory stimuli in order to form a coherent multisensory object representation. These results are explained in terms of Potter’s (1993) notion of conceptual short-term memory.

15.
Virtual reality technology creates immersive perceptual experiences by providing visual, auditory, and haptic information, yet haptic feedback faces numerous technical bottlenecks that limit natural interaction in virtual reality. Pseudo-haptic techniques based on multisensory illusions can strengthen and enrich tactile sensations with information from other modalities, and are currently an effective way to optimize the haptic experience in virtual environments. This paper focuses on one of the most important dimensions of touch, roughness, in an attempt to offer new ideas for overcoming the limitations of haptic feedback in virtual reality. We discuss how visual, auditory, and tactile modalities are integrated in roughness perception, analyze how visual cues (surface texture density, surface lighting and shading, control-display ratio) and auditory cues (pitch/frequency, loudness) influence perceived tactile roughness, and summarize current methods that manipulate these factors to alter roughness perception. Finally, we discuss how the presentation and perceptual integration of visual, auditory, and tactile information in virtual reality may differ from the real world when pseudo-haptic feedback is used, and we propose applicable methods for improving the haptic experience as well as directions for future research.

16.
The effect of audiovisual interactions on size perception has yet to be examined, despite its fundamental importance in daily life. Previous studies have reported that object length can be estimated solely on the basis of the sounds produced when an object is dropped. Moreover, it has been shown that people typically and easily perceive the correspondence between object sizes and sound intensities. It is therefore possible that auditory stimuli may act as cues for object size, thereby altering the visual perception of size. Thus, in the present study we examined the effects of auditory stimuli on the visual perception of size. Specifically, we investigated the effects of the sound intensity of auditory stimuli, the temporal window of audiovisual interactions, and the effects of the retinal eccentricity of visual stimuli. The results indicated that high-intensity auditory stimuli increased visually perceived object size, and that this effect was especially strong in the peripheral visual field. Additional consideration indicated that this effect on the visual perception of size is induced when the cue reliability is relatively higher for the auditory than for the visual stimuli. In addition, we further suggest that the cue reliabilities of visual and auditory stimuli relate to retinal eccentricity and sound intensity, respectively.
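The interpretation that sound alters perceived size only when the auditory cue is relatively more reliable than the visual one follows the same inverse-variance weighting sketched under entry 6. A small illustration with assumed noise levels (the values are hypothetical, not measurements from the study):

```python
def auditory_weight(sigma_visual, sigma_auditory):
    """Weight of the auditory size cue under inverse-variance (reliability) weighting."""
    return (1 / sigma_auditory**2) / (1 / sigma_visual**2 + 1 / sigma_auditory**2)

# Hypothetical standard deviations (arbitrary units): visual noise grows with
# retinal eccentricity, so the auditory cue carries more weight in the
# periphery, where the sound-intensity effect on perceived size was strongest.
print(auditory_weight(sigma_visual=0.5, sigma_auditory=1.0))  # foveal target
print(auditory_weight(sigma_visual=2.0, sigma_auditory=1.0))  # peripheral target
```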

17.
Buchan, J. N., & Munhall, K. G. (2011). Perception, 40(10), 1164–1182.
Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

18.
Visual search (e.g., finding a specific object in an array of other objects) is performed most effectively when people are able to ignore distracting nontargets. In repeated search, however, incidental learning of object identities may facilitate performance. In three experiments, with over 1,100 participants, we examined the extent to which search could be facilitated by object memory and by memory for spatial layouts. Participants searched for new targets (real-world, nameable objects) embedded among repeated distractors. To make the task more challenging, some participants performed search for multiple targets, increasing demands on visual working memory (WM). Following search, memory for search distractors was assessed using a surprise two-alternative forced choice recognition memory test with semantically matched foils. Search performance was facilitated by distractor object learning and by spatial memory; it was most robust when object identity was consistently tied to spatial locations and weakest (or absent) when object identities were inconsistent across trials. Incidental memory for distractors was better among participants who searched under high WM load, relative to low WM load. These results were observed when visual search included exhaustive-search trials (Experiment 1) or when all trials were self-terminating (Experiment 2). In Experiment 3, stimulus exposure was equated across WM load groups by presenting objects in a single-object stream; recognition accuracy was similar to that in Experiments 1 and 2. Together, the results suggest that people incidentally generate memory for nontarget objects encountered during search and that such memory can facilitate search performance.

19.
Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location or identity of the visual object. The experiments also show that the effect is not due to general alerting (because it does not occur with visual cues), nor is it due to top-down cuing of the visual change (because it still occurs when the pip is synchronized with distractors on the majority of trials). Instead, we propose that the temporal information of the auditory signal is integrated with the visual signal, generating a relatively salient emergent feature that automatically draws attention. Phenomenally, the synchronous pip makes the visual object pop out from its complex environment, providing a direct demonstration of spatially nonspecific sounds affecting competition in spatial visual processing.

20.
Previous research has shown that irrelevant sounds can facilitate the perception of visual apparent motion. Here the effectiveness of a single sound to facilitate motion perception was investigated in three experiments. Observers were presented with two discrete lights temporally separated by stimulus onset asynchronies from 0 to 350 ms. After each trial, observers classified their impression of the stimuli using a categorisation system. A short sound presented temporally (and spatially) midway between the lights facilitated the impression of motion relative to baseline (lights without sound), whereas a sound presented either before the first or after the second light or simultaneously with the lights did not affect motion impression. The facilitation effect also occurred with sound presented far from the visual display, as well as with a continuous sound that started with the first light and terminated with the second light. No facilitation of visual motion perception occurred if the sound was part of a tone sequence that allowed for intramodal perceptual grouping of the auditory stimuli prior to the critical audiovisual stimuli. Taken together, the findings are consistent with a low-level audiovisual integration approach in which the perceptual system merges temporally proximate sound and light stimuli, thereby provoking the impression of a single multimodal moving object.

