The role of appearance and motion in action prediction

Authors: Ayse Pinar Saygin, Waltraud Stadler

Affiliation: Department of Cognitive Science, Kavli Institute for Brain and Mind, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0515, USA. saygin@cogsci.ucsd.edu

Abstract: We used a novel stimulus set of human and robot actions to explore the role of humanlike appearance and motion in action prediction. Participants viewed videos of familiar actions performed by three agents: a human, an android, and a robot; the former two shared human appearance, and the latter two shared nonhuman motion. In each trial, the video was occluded for 400 ms, and participants judged whether the action continued coherently (in time) after the occlusion. The continuation began early, in time, or late (100, 400, or 700 ms after the start of occlusion). Task performance interacted with the observed agent: for early continuations, accuracy was highest for human actions and lowest for robot actions; for late continuations, the pattern was reversed. Both the android and human conditions differed significantly from the robot condition. Given that the robot and android conditions had identical kinematics, the visual form of the actor appears to affect action prediction. We suggest that the selection of the internal sensorimotor model used for action prediction is influenced by the observed agent's appearance.
