Tracking the time course of phonetic cue integration during spoken word recognition
Authors:Bob McMurray  Meghan A. Clayards  Michael K. Tanenhaus  Richard N. Aslin
Abstract:Speech perception requires listeners to integrate multiple cues that each contribute to judgments about a phonetic category. Classic studies of trading relations assessed the weights attached to each cue but did not explore the time course of cue integration. Here, we provide the first direct evidence that asynchronous cues to voicing (/b/ vs. /p/) and manner (/b/ vs. /w/) contrasts become available to the listener at different times during spoken word recognition. Using the visual world paradigm, we show that the probabilities of eye movements to pictures of target and competitor objects diverge at different points in time after the onset of the target word. These points of divergence correspond to the availability of early (voice onset time or formant transition slope) and late (vowel length) cues to the voicing and manner contrasts. These results support a model of cue integration in which phonetic cues are used for lexical access as soon as they become available.
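The divergence-point logic described in the abstract, with looks to the target and to the competitor separating at different latencies depending on when a cue arrives, can be made concrete with a small analysis sketch. The code below is a minimal, hypothetical example rather than the authors' analysis pipeline: it bins simulated 0/1 fixation data following word onset, computes fixation-proportion curves for target and competitor, and reports the first time bin at which the two reliably differ. The bin size, the simulated data, and the paired t-test criterion are illustrative assumptions.

# Hypothetical sketch (not the authors' analysis code): compute fixation-proportion
# curves from per-trial visual world data and estimate the target/competitor
# divergence point. Bin size, 0/1 coding, and the t-test criterion are assumptions.
import numpy as np
from scipy import stats


def fixation_proportions(looks: np.ndarray) -> np.ndarray:
    """looks: trials x time_bins array of 0/1 (fixating the object or not).
    Returns the mean proportion of looks in each time bin."""
    return looks.mean(axis=0)


def divergence_bin(target_looks: np.ndarray,
                   competitor_looks: np.ndarray,
                   alpha: float = 0.05):
    """Index of the first time bin where looks to the target exceed looks to
    the competitor by a paired t-test across trials (illustrative criterion)."""
    for b in range(target_looks.shape[1]):
        t, p = stats.ttest_rel(target_looks[:, b], competitor_looks[:, b])
        if p < alpha and t > 0:
            return b
    return None


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_trials, n_bins = 40, 30                  # e.g., 30 bins of 20 ms after word onset
    bins = np.arange(n_bins)
    # Simulated data: target looks ramp up after bin 10; competitor looks stay flat.
    p_target = np.clip(0.25 + np.where(bins > 10, 0.03 * (bins - 10), 0.0), 0.0, 0.9)
    target = rng.binomial(1, p_target, size=(n_trials, n_bins))
    competitor = rng.binomial(1, 0.25, size=(n_trials, n_bins))
    print("target curve:    ", fixation_proportions(target).round(2))
    print("competitor curve:", fixation_proportions(competitor).round(2))
    print("divergence bin:  ", divergence_bin(target, competitor))

Under this toy criterion, an earlier-arriving cue (such as voice onset time) would yield an earlier divergence bin than a later-arriving cue (such as vowel length), which is the pattern the abstract reports.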
This article is indexed in SpringerLink and other databases.