Preattentive processing of audio-visual emotional signals |
| |
Authors: | Julia Föcker, Matthias Gondan, Brigitte Röder
| |
Affiliation: | University of Hamburg, Biological Psychology and Neuropsychology, Von-Melle-Park 11, 20146 Hamburg, Germany; Brain and Cognitive Sciences, University of Rochester, United States; University of Regensburg, Department of Psychology, Universitätsstr. 31, 93050 Regensburg, Germany; Institute of Medical Biometry and Informatics, University of Heidelberg, Im Neuenheimer Feld 305, 69120 Heidelberg, Germany
| |
Abstract: | Previous research has shown that redundant emotional information in faces and voices leads to faster emotional categorization than incongruent emotional information, even when only one modality is attended. The aim of the present study was to test whether these crossmodal effects are predominantly due to a response conflict rather than to interference at earlier, e.g., perceptual, processing stages. In Experiment 1, participants categorized the valence and rated the intensity of happy, sad, angry, and neutral unimodal or bimodal face–voice stimuli. They were asked to rate either the facial or the vocal expression and to ignore the emotion expressed in the other modality. Participants responded faster and more accurately to emotionally congruent than to incongruent face–voice pairs in both the Attend Face and the Attend Voice conditions. Moreover, when attending to faces, emotionally congruent bimodal stimuli were processed more efficiently than unimodal visual stimuli. To examine the role of a possible response conflict, Experiment 2 used a modified paradigm in which emotional and response conflicts were disentangled. Incongruency effects remained significant even in the absence of response conflicts. The results suggest that emotional signals available through different sensory channels are automatically combined prior to response selection.
| |
Keywords: | PsycINFO classification: 2323; 2326; 2346
|