Language-driven anticipatory eye movements in virtual reality
Authors: Nicole Eichert, David Peeters, Peter Hagoort
Institution: 1. Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; 2. University of Oxford, Oxford, UK; 3. Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, The Netherlands
Abstract: Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eye tracking in rich and multimodal 3-D virtual environments.
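The anticipatory-looking measure described in the abstract lends itself to a brief illustration. The Python sketch below shows one common way such data are quantified: the proportion of looking time directed at the target object within the window between verb onset and the (not yet heard) noun. The data format, field names, and window boundaries are illustrative assumptions for exposition, not the authors' actual analysis pipeline.

```python
# A minimal sketch of quantifying anticipatory looks in a visual-world
# experiment. All names, the data format, and the time windows are
# illustrative assumptions, not the authors' analysis code.

from dataclasses import dataclass
from typing import List


@dataclass
class Fixation:
    onset_ms: float   # fixation start, relative to spoken-verb onset
    offset_ms: float  # fixation end
    object_id: str    # "target" or "distractor" (hypothetical labels)


def anticipatory_target_proportion(
    fixations: List[Fixation],
    window_start_ms: float = 0.0,   # assumed: verb onset
    window_end_ms: float = 800.0,   # assumed: before noun onset
) -> float:
    """Proportion of looking time on the target object during the
    anticipatory window, i.e., before the target noun is heard."""
    target_time = 0.0
    total_time = 0.0
    for fix in fixations:
        # Clip each fixation to the anticipatory window.
        start = max(fix.onset_ms, window_start_ms)
        end = min(fix.offset_ms, window_end_ms)
        if end <= start:
            continue  # fixation falls entirely outside the window
        duration = end - start
        total_time += duration
        if fix.object_id == "target":
            target_time += duration
    return target_time / total_time if total_time > 0 else 0.0


if __name__ == "__main__":
    # One hypothetical trial: the listener shifts to the target ~400 ms
    # after verb onset, well before the noun would be spoken.
    trial = [
        Fixation(0, 350, "distractor"),
        Fixation(400, 900, "target"),
    ]
    print(f"anticipatory target proportion: "
          f"{anticipatory_target_proportion(trial):.2f}")
```

In analyses of this kind, per-trial proportions are typically averaged per participant and condition; anticipatory prediction is indicated when target looks exceed distractor looks before noun onset.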
This article is indexed in SpringerLink and other databases.