Task-specific modulations of locomotor action parameters based on on-line visual information during collision avoidance with moving objects
Authors: Cinelli, Michael Eric; Patla, Aftab E.
Institution: Brown University, Department of Cognitive and Linguistic Sciences, 190 Thayer Street, Providence, RI 02912, United States. Michael_Cinelli@brown.edu
Abstract: The objectives of this study were: (a) to determine whether the control mechanism for interacting with a dynamic real environment is the same as that reported in virtual reality (VR) studies, and (b) to identify the action control parameters that are modulated to pass successfully through oscillating doors. Participants walked along a 14-m path towards oscillating doors (rate of change in aperture size = 44 cm/s; maximum aperture of 70, 80, or 100 cm). Participants had to use vision to extrapolate what the aperture of the doors would be at the time of crossing and determine whether a change in action parameters was necessary. If their current state did not match the required state, participants modified their actions. The results showed that individuals in a real environment used action modifications (i.e., velocity adjustments) similar to those seen in VR studies to increase success. Beyond these gradual velocity adjustments, a different locomotor action parameter, not observed in VR studies, emerged on some trials (i.e., shoulder rotations). These shoulder rotations occurred when participants perceived that a velocity adjustment alone would not lead to a successful trial. These results show that participants use perception to control movement in a feedback rather than a feedforward manner.
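The extrapolation described in the abstract can be made concrete with a minimal sketch, assuming a triangle-wave door model (aperture oscillating between 0 and the maximum at a constant 44 cm/s) and hypothetical parameter names (walk_speed_m_s, shoulder_width_cm, margin_cm, phase_cm, closing); none of this code comes from the paper itself.

```python
"""Illustrative sketch (not the authors' method): predict the door aperture at
the expected crossing time and check whether the current walking speed leaves
enough clearance. The triangle-wave door model and all parameter names are
assumptions for illustration only."""


def aperture_at(t, max_aperture_cm, rate_cm_s=44.0, phase_cm=0.0, closing=False):
    """Triangle-wave door model: aperture oscillates between 0 and
    max_aperture_cm with a constant |d(aperture)/dt| of rate_cm_s.
    phase_cm and closing describe the doors' state at t = 0
    (assumed to be available from vision)."""
    # Map the current state onto the ascending branch of the cycle.
    start = phase_cm if not closing else 2.0 * max_aperture_cm - phase_cm
    x = (start + rate_cm_s * t) % (2.0 * max_aperture_cm)
    # Ascending branch while x <= max, descending branch afterwards.
    return x if x <= max_aperture_cm else 2.0 * max_aperture_cm - x


def needs_adjustment(distance_m, walk_speed_m_s, shoulder_width_cm,
                     max_aperture_cm, margin_cm=10.0, **door_state):
    """Extrapolate the aperture at the predicted crossing time and report
    whether the current speed leaves enough room (shoulder width + margin)."""
    time_to_doors = distance_m / walk_speed_m_s
    predicted = aperture_at(time_to_doors, max_aperture_cm, **door_state)
    return predicted < shoulder_width_cm + margin_cm, predicted


# Example: 14 m approach at 1.4 m/s toward doors with a 70 cm maximum aperture.
adjust, predicted = needs_adjustment(
    distance_m=14.0, walk_speed_m_s=1.4,
    shoulder_width_cm=45.0, max_aperture_cm=70.0,
    phase_cm=0.0, closing=False)
print(f"predicted aperture at crossing: {predicted:.1f} cm, adjust: {adjust}")
```

In this toy model, a mismatch between the predicted aperture and the required clearance is what would trigger the velocity adjustments (or, when those cannot suffice, the shoulder rotations) reported in the study.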
Keywords: Visually guided human locomotion; Perception–action coupling; Moving obstacles; Kinematics
Indexed by: ScienceDirect, PubMed, and other databases.