Automatic auditory disambiguation of visual awareness
Authors: John Plass, Emmanuel Guzman-Martinez, Laura Ortega, Satoru Suzuki, Marcia Grabowecky
Affiliation: 1. Department of Psychology, Northwestern University, Evanston, USA; 2. Interdepartmental Neuroscience Program, Northwestern University, Evanston, USA
Abstract: Multisensory integration can play a critical role in producing a unified and reliable perceptual experience. When sensory information in one modality is degraded or ambiguous, information from other senses can crossmodally resolve perceptual ambiguities. Prior research suggests that auditory information can disambiguate the contents of visual awareness by facilitating perception of intermodally consistent stimuli. However, it is unclear whether these effects are truly due to crossmodal facilitation or are mediated by voluntary selective attention to audiovisually congruent stimuli. Here, we demonstrate that sounds can bias competition in binocular rivalry toward audiovisually congruent percepts, even when participants do not recognize the congruency. When speech sounds were presented in synchrony with speech-like deformations of rivalling ellipses, ellipses with crossmodally congruent deformations were perceptually dominant over those with incongruent deformations. This effect was observed in participants who could not identify the crossmodal congruency in an open-ended interview (Experiment 1) or detect it in a simple two-alternative forced-choice (2AFC) task (Experiment 2), suggesting that the effect was not due to voluntary selective attention or response bias. These results suggest that sound can automatically disambiguate the contents of visual awareness by facilitating perception of audiovisually congruent stimuli.