Linguistically guided anticipatory eye movements in scene viewing
Authors:Adrian Staub  Matthew Abbott  Richard S Bogartz
Institution:1. University of Massachusetts, Amherst, MA, USA (astaub@psych.umass.edu); 3. University of California, San Diego, CA, USA; 4. University of Massachusetts, Amherst, MA, USA
Abstract:The present study replicated the well-known demonstration by Altmann and Kamide (1999) that listeners make linguistically guided anticipatory eye movements, but used photographs of scenes rather than clip-art arrays as the visual stimuli. When listeners heard a verb for which a particular object in a visual scene was the likely theme, they made earlier looks to this object (e.g., looks to a cake upon hearing The boy will eat …) than when they heard a control verb (The boy will move …). New data analyses assessed whether these anticipatory effects are due to a linguistic effect on the targeting of saccades (i.e., the where parameter of eye movement control), the duration of fixations (i.e., the when parameter), or both. Participants made fewer fixations before reaching the target object when the verb was selectionally restricting (e.g., will eat). However, verb type had no effect on the duration of individual eye fixations. These results suggest an important constraint on the linkage between spoken language processing and eye movement control: Linguistic input may influence only the decision of where to move the eyes, not the decision of when to move them.
Keywords:Eye movements; Language comprehension; Scene viewing