1.
Attention, Perception, & Psychophysics - Despite the rapid growth of research on the crossmodal correspondence between visually presented shapes and basic tastes (e.g., sweet, sour, bitter, and...
2.
Bistable figures provide a fascinating window through which to explore human visual awareness. Here we demonstrate for the first time that the semantic context provided by a background auditory soundtrack (the voice of a young or old female) can modulate an observer's predominant percept while watching the bistable "my wife or my mother-in-law" figure (Experiment 1). A response-bias account (that participants simply reported whichever percept happened to be congruent with the soundtrack they were listening to) was excluded in Experiment 2. We further demonstrate that this crossmodal semantic effect was additive with the manipulation of participants' visual fixation (Experiment 3), while it interacted with participants' voluntary attention (Experiment 4). These results indicate that audiovisual semantic congruency constrains the visual processing that gives rise to the conscious perception of bistable visual figures. Crossmodal semantic context therefore provides an important mechanism contributing to the emergence of visual awareness.
3.
Psychonomic Bulletin & Review - We examined the ability of people to evaluate their confidence when making perceptual judgments concerning a classic crossmodal correspondence, the Bouba/Kiki...
4.
We report a series of experiments designed to assess the effect of audiovisual semantic congruency on the identification of visually presented pictures. Participants made unspeeded identification responses concerning a series of briefly presented and then rapidly masked pictures. A naturalistic sound was sometimes presented together with the picture at a stimulus onset asynchrony (SOA) that varied between 0 and 533 ms (auditory lagging). The sound could be semantically congruent, semantically incongruent, or else neutral (white noise) with respect to the target picture. The results showed that when the onset of the picture and sound occurred simultaneously, a semantically congruent sound improved, whereas a semantically incongruent sound impaired, participants' picture identification performance, as compared to performance in the white-noise control condition. A significant facilitatory effect was also observed at SOAs of around 300 ms, whereas no such semantic congruency effects were observed at the longest interval (533 ms). These results therefore suggest that the neural representations associated with visual and auditory stimuli can interact in a shared semantic system. Furthermore, this crossmodal semantic interaction is not constrained by the need for strict temporal coincidence of the constituent auditory and visual stimuli. We therefore suggest that audiovisual semantic interactions likely occur in a short-term buffer which rapidly accesses, and temporarily retains, the semantic representations of multisensory stimuli in order to form a coherent multisensory object representation. These results are explained in terms of Potter's (1993) notion of conceptual short-term memory.
5.
Attention, Perception, & Psychophysics - We examined audiovisual and visuotactile integration in the central and peripheral visual field using visual fission and fusion illusions induced by...
6.
The extent to which attention modulates multisensory processing in a top-down fashion is still a subject of debate among researchers. Typically, cognitive psychologists interested in this question have manipulated the participants' attention in terms of single/dual tasking or focal/divided attention between sensory modalities. We suggest an alternative approach, one that builds on the extensive older literature highlighting hemispheric asymmetries in the distribution of spatial attention. Specifically, spatial attention in vision, audition, and touch is typically biased preferentially toward the right hemispace, especially under conditions of high perceptual load. We review the evidence demonstrating such an attentional bias toward the right in extinction patients and healthy adults, along with the evidence of such rightward-biased attention in multisensory experimental settings. We then evaluate those studies that have demonstrated either a more pronounced multisensory effect in right than in left hemispace, or else similar effects in the two hemispaces. The results suggest that the influence of rightward-biased attention is more likely to be observed when the crossmodal signals interact at later stages of information processing and under conditions of higher perceptual load, that is, conditions under which attention is perhaps a compulsory enhancer of information processing. We therefore suggest that the spatial asymmetry in attention may provide a useful signature of top-down attentional modulation in multisensory processing.
7.
We report a series of experiments designed to demonstrate that the presentation of a sound can facilitate the identification of a concomitantly presented visual target letter in the backward masking paradigm. Two visual letters, serving as the target and its mask, were presented successively at various interstimulus intervals (ISIs). The results demonstrate that the crossmodal facilitation of participants' visual identification performance elicited by the presentation of a simultaneous sound occurs over a very narrow range of ISIs. This critical time window lies just beyond the interval needed for participants to differentiate the target and mask as two distinct perceptual events (Experiment 1) and can be dissociated from any facilitation elicited by making the visual target physically brighter (Experiment 2). When the sound is presented at the same time as the mask, a facilitatory, rather than an inhibitory, effect on visual target identification performance is still observed (Experiment 3). We further demonstrate that the crossmodal facilitation of the visual target by the sound depends on the establishment of a reliable temporally coincident relationship between the two stimuli (Experiment 4); by contrast, spatial coincidence is not necessary (Experiment 5). We suggest that when visual and auditory stimuli are always presented synchronously, a better-consolidated object representation is likely to be constructed than that resulting from unimodal visual stimulation alone.
8.
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect of naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɒg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). We therefore suggest that the auditory stimulus needs sufficient processing time to access its associated meaning before it can modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
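The abstract above estimates sensitivity and response criterion using signal detection theory. As a minimal sketch of that standard analysis (not the authors' actual code; the trial counts below are hypothetical), sensitivity d' and criterion c can be derived from hit and false-alarm rates as follows:

```python
# Standard signal detection theory (SDT) measures for a yes/no
# picture detection task. Illustrative only; counts are hypothetical.
from scipy.stats import norm

def sdt_measures(hits: int, misses: int,
                 false_alarms: int, correct_rejections: int):
    """Return sensitivity (d') and response criterion (c).

    A log-linear correction (add 0.5 to each cell) keeps the
    z-transform finite when a rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa               # perceptual sensitivity
    criterion = -0.5 * (z_hit + z_fa)    # response bias (0 = unbiased)
    return d_prime, criterion

# Hypothetical example: 40 picture-present and 40 picture-absent trials.
d, c = sdt_measures(hits=32, misses=8, false_alarms=10, correct_rejections=30)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

Reporting d' alongside c is what lets a study such as this separate a genuine change in perceptual sensitivity from a mere shift in participants' willingness to respond "present".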
9.
Repetition blindness (RB; Kanwisher, 1987) is the term used to describe people's failure to detect or report an item that is repeated in a rapid serial visual presentation (RSVP) stream. Although RB is, by definition, a visual deficit, whether it is affected by an auditory signal remains unknown. In the present study, we added two sounds before, simultaneous with, or after the onset of the two critical visual items during RSVP to examine the effect of sound on RB. The results show that the addition of the sounds effectively reduced RB when they appeared at, or around, the onset of the critical items. These results indicate that it is easier to perceive an event containing multisensory information than one containing only unisensory information. Possible mechanisms of how visual and auditory information interact are discussed.