Affiliation: | (1) Centre for Cognitive Neuroscience, School of Psychology, University of Liverpool, Eleanor Rathbone Bldg., Bedford Street South, Liverpool, L69 7AZ, UK; (2) Cognitive Neuroinformatics, School of Mathematics and Computer Science, Bremen University, Bremen, Germany; (3) Present address: HONDA Research Institute Europe, Carl-Legien-Str. 30, 63073 Offenbach, Germany |
Abstract: | We investigated the extent to which auditory and visual motion signals are combined when observers are asked to predict the location of a virtually moving target. In Condition 1, the unimodal and bimodal signals were noisy, but the target object remained continuously visible and audible; in Condition 2, the virtually moving object was hidden (invisible and inaudible) for a short period before its arrival at the target location. Our main finding was that the facilitation produced by simultaneous visual and auditory input differed markedly between the two conditions. When the target was continuously visible and audible (Condition 1), bimodal performance was twice as good as unimodal performance, suggesting a highly effective integration mechanism. By contrast, when the object was hidden for a short period (Condition 2), so that the task required extrapolating its motion across a temporal and spatial gap, the facilitation from combined sensory input was almost absent, and bimodal performance was limited by visual performance. |