Investigating joint attention mechanisms through spoken human-robot interaction
Authors: Maria Staudte, Matthew W. Crocker
Affiliation: Department of Computational Linguistics, Saarland University, Campus, 66123 Saarbrücken, Germany
Abstract: Referential gaze during situated language production and comprehension is tightly coupled with the unfolding speech stream (Griffin, 2001; Meyer et al., 1998; Tanenhaus et al., 1995). In a shared environment, utterance comprehension may further be facilitated when the listener can exploit the speaker's focus of (visual) attention to anticipate, ground, and disambiguate spoken references. To investigate the dynamics of such gaze-following and its influence on utterance comprehension in a controlled manner, we use a human–robot interaction setting. Specifically, we hypothesize that referential gaze is interpreted as a cue to the speaker's referential intentions, which facilitates or disrupts reference resolution. Moreover, the use of a dynamic yet extremely controlled gaze cue enables us to shed light on the simultaneous and incremental integration of the unfolding speech and gaze movement. We report evidence from two eye-tracking experiments in which participants saw videos of a robot looking at and describing objects in a scene. The results reveal a quantified benefit–disruption spectrum of gaze on utterance comprehension and further show that gaze is used, even during the initial movement phase, to restrict the spatial domain of potential referents. More broadly, these findings suggest that people treat artificial agents similarly to human agents, thus validating this setting for further exploration of joint attention mechanisms.
Keywords: Utterance comprehension; Referential gaze; Joint attention; Human–robot interaction; Referential intention; Gaze; Situated language processing; Reference resolution
Indexed in: ScienceDirect, PubMed, and other databases.