Cross-modal facilitation in speech prosody
Authors: Jessica M. Foxton, Louis-David Riviere, Pascal Barone
Abstract: Speech prosody has traditionally been considered solely in terms of its auditory features, yet correlated visual features exist, such as head and eyebrow movements. This study investigated the extent to which visual prosodic features can affect perception of the auditory features. Participants were presented with videos of a speaker pronouncing two words, with visual features of emphasis on one of the words. On each trial, participants saw one video in which the two words were identical in both pitch and amplitude, and another in which the words differed in either pitch or amplitude, with the difference either congruent or incongruent with the visual changes. Participants decided which video contained the sound difference. Thresholds were obtained for the congruent and incongruent videos, and for an auditory-alone condition. Congruent thresholds were better (lower) than incongruent thresholds for both pitch and amplitude changes. Interestingly, the congruent thresholds for amplitude were also better than those in the auditory-alone condition, implying that the visual features improve sensitivity to loudness changes. These results demonstrate that visual stimuli can affect auditory thresholds for changes in pitch and amplitude, and they support the view that visual prosodic features enhance speech processing.
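The abstract does not state how the discrimination thresholds were estimated. As a point of reference only, the sketch below shows one common psychophysical approach to such two-interval forced-choice tasks: a 2-down/1-up adaptive staircase, which converges on roughly 70.7% correct performance. The staircase parameters and the simulated observer are illustrative assumptions, not the authors' actual procedure.

```python
# Illustrative sketch: estimating a discrimination threshold with a
# 2-down/1-up adaptive staircase. The simulated observer stands in for a
# participant judging which of two videos contains the sound difference.

import random


def simulated_observer(difference, true_threshold=2.0):
    """Hypothetical observer: the probability of choosing the correct
    interval grows with the pitch/amplitude difference."""
    p_correct = 0.5 + 0.5 * min(difference / (2 * true_threshold), 1.0)
    return random.random() < p_correct


def staircase_threshold(start=8.0, step=1.0, n_reversals=8):
    """2-down/1-up staircase: two correct responses in a row make the
    task harder, one error makes it easier. Returns the mean stimulus
    difference at the reversal points as the threshold estimate."""
    difference = start
    correct_streak = 0
    last_direction = None
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_observer(difference):
            correct_streak += 1
            if correct_streak == 2:          # two correct -> decrease difference
                correct_streak = 0
                difference = max(difference - step, step)
                if last_direction == "up":   # direction change = reversal
                    reversals.append(difference)
                last_direction = "down"
        else:                                # one error -> increase difference
            correct_streak = 0
            difference += step
            if last_direction == "down":
                reversals.append(difference)
            last_direction = "up"
    return sum(reversals) / len(reversals)


if __name__ == "__main__":
    print(f"Estimated threshold: {staircase_threshold():.2f}")
```

Under this scheme, separate staircases would be run for the congruent, incongruent, and auditory-alone conditions, and the resulting thresholds compared across conditions.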
This article is indexed in ScienceDirect and other databases.