Concurrent intramodal learning enhances multisensory responses of symmetric crossmodal learning in robotic audio-visual tracking
Institution: 1. SDU Robotics, The Maersk Mc-Kinney Moeller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark; 2. Bio-inspired Robotics and Neural Engineering Laboratory, School of Information Science and Technology, Vidyasirimedhi Institute of Science and Technology, Rayong, Thailand
Abstract: Tracking an audio-visual target involves integrating spatial cues about target position from both modalities. Such sensory cue integration is a developmental process in the brain involving learning, with neuroplasticity as its underlying mechanism. We present a Hebbian learning-based adaptive neural circuit for multimodal cue integration. The circuit temporally correlates stimulus cues within each modality via intramodal learning, as well as symmetrically across modalities via crossmodal learning, to independently update modality-specific neural weights on a sample-by-sample basis. It is realised on a robotic agent that must orient towards a moving audio-visual target. The agent continuously learns the best possible weights for a weighted combination of auditory and visual spatial directional cues; the combined cue is mapped directly to robot wheel velocities to elicit an orientation response. Visual directional cues are noise-free and continuous but arise from a relatively narrow receptive field, while auditory directional cues are noisy and intermittent but arise from a relatively wider receptive field. Comparative trials in simulation demonstrate that concurrent intramodal learning improves both the accuracy and the precision of the orientation responses produced by symmetric crossmodal learning. We also demonstrate that symmetric crossmodal learning improves multisensory responses compared with asymmetric crossmodal learning. The neural circuit also exhibits multisensory effects such as sub-additivity, additivity and super-additivity.
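To make the abstract's description concrete, the sketch below illustrates the general scheme in Python: sample-by-sample Hebbian updates with an intramodal term (correlating each cue with a short temporal trace of itself) and a symmetric crossmodal term (correlating the two modalities' cues), followed by a weighted cue combination mapped to differential wheel velocities. All names, learning rates, and cue models here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

ETA_CROSS = 0.05  # crossmodal learning rate (assumed value)
ETA_INTRA = 0.05  # intramodal learning rate (assumed value)
TAU = 0.9         # low-pass factor for the intramodal temporal trace (assumed)


def step(cue_a, cue_v, w_a, w_v, trace_a, trace_v):
    """One sample-by-sample update of modality-specific weights.

    cue_a: auditory directional cue in [-1, 1] (noisy, intermittent; 0 when absent)
    cue_v: visual directional cue in [-1, 1] (noise-free, narrow receptive field)
    """
    # Intramodal learning: temporally correlate each cue with its own
    # recent history (a low-pass trace), strengthening consistent cues.
    trace_a = TAU * trace_a + (1 - TAU) * cue_a
    trace_v = TAU * trace_v + (1 - TAU) * cue_v
    w_a += ETA_INTRA * cue_a * trace_a
    w_v += ETA_INTRA * cue_v * trace_v

    # Symmetric crossmodal learning: both weights receive the same
    # audio-visual correlation term (an asymmetric scheme would update
    # only one modality's weight from the other's cue).
    corr = cue_a * cue_v
    w_a += ETA_CROSS * corr
    w_v += ETA_CROSS * corr

    # Normalise so the cue combination stays a bounded weighting.
    s = abs(w_a) + abs(w_v) + 1e-9
    w_a, w_v = w_a / s, w_v / s

    # Weighted cue combination -> orientation command -> wheel velocities.
    turn = w_a * cue_a + w_v * cue_v
    v_left, v_right = 1.0 - turn, 1.0 + turn
    return w_a, w_v, trace_a, trace_v, (v_left, v_right)


# Toy run: a target sweeping sinusoidally, with a narrow visual field
# and noisy, intermittent auditory samples (both assumed cue models).
w_a, w_v, trace_a, trace_v = 0.5, 0.5, 0.0, 0.0
for t in range(200):
    true_dir = np.sin(0.05 * t)
    cue_v = true_dir if abs(true_dir) < 0.5 else 0.0          # narrow receptive field
    cue_a = true_dir + rng.normal(0, 0.2) if t % 3 else 0.0   # noisy, intermittent
    w_a, w_v, trace_a, trace_v, wheels = step(cue_a, cue_v, w_a, w_v, trace_a, trace_v)
```

In this toy form, the crossmodal term grows the weights whenever the two cues agree in sign, while the intramodal traces favour the modality whose cue is temporally self-consistent, so the noisy auditory channel is down-weighted relative to the clean visual one when both are available.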
Keywords: Crossmodal learning; Multisensory integration; Audio-visual tracking; Biorobotics
This article is indexed in ScienceDirect and other databases.