Faster Teaching via POMDP Planning
Authors: Anna N. Rafferty, Emma Brunskill, Thomas L. Griffiths, Patrick Shafto
Affiliation: 1. Department of Computer Science, Carleton College; 2. Computer Science Department, Carnegie Mellon University; 3. Department of Psychology, University of California; 4. Department of Mathematics and Computer Science, Rutgers University – Newark
Abstract: Human and automated tutors attempt to choose pedagogical activities that will maximize student learning, informed by their estimates of the student's current knowledge. There has been substantial research on tracking and modeling student learning, but significantly less attention on how to plan teaching actions and how the assumed student model impacts the resulting plans. We frame the problem of optimally selecting teaching actions using a decision‐theoretic approach and show how to formulate teaching as a partially observable Markov decision process planning problem. This framework makes it possible to explore how different assumptions about student learning and behavior should affect the selection of teaching actions. We consider how to apply this framework to concept learning problems, and we present approximate methods for finding optimal teaching actions, given the large state and action spaces that arise in teaching. Through simulations and behavioral experiments, we explore the consequences of choosing teacher actions under different assumed student models. In two concept‐learning tasks, we show that this technique can accelerate learning relative to baseline performance.
Keywords: Automated teaching; Partially observable Markov decision process; Concept learning
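To make the POMDP framing concrete, the following is a minimal sketch (not the authors' implementation) of teaching as a partially observable planning problem: hidden states are the hypotheses a learner may hold about a concept, teaching actions are pedagogical activities (examples or quizzes), and observations are learner responses. The teacher maintains a Bayesian belief over learner states and selects actions with a myopic one-step approximation, one simple instance of the approximate planning methods the abstract refers to. The two-hypothesis concept space, action names, and response probabilities below are illustrative assumptions.

```python
import math

HYPOTHESES = ["h_even", "h_positive"]                  # candidate concepts the learner may believe
ACTIONS = ["show_example_4", "quiz_is_3_in_concept"]   # pedagogical activities (illustrative)

def obs_model(action, hypothesis):
    """P(observation | learner hypothesis, teaching action)."""
    if action == "quiz_is_3_in_concept":
        # A learner who believes "positive numbers" says yes to 3; an "even numbers"
        # learner says no. A 0.1 response-noise rate is assumed.
        return {"yes": 0.9, "no": 0.1} if hypothesis == "h_positive" else {"yes": 0.1, "no": 0.9}
    return {"ok": 1.0}  # this toy example action yields an uninformative observation

def belief_update(belief, action, observation):
    """Bayesian update of the teacher's belief over learner hypotheses."""
    new = {h: belief[h] * obs_model(action, h).get(observation, 0.0) for h in belief}
    z = sum(new.values())
    return {h: p / z for h, p in new.items()} if z > 0 else belief

def expected_entropy(belief, action):
    """Expected posterior entropy after taking `action` (lower = more informative)."""
    obs_probs = {}
    for h, p in belief.items():
        for o, po in obs_model(action, h).items():
            obs_probs[o] = obs_probs.get(o, 0.0) + p * po
    total = 0.0
    for o, po in obs_probs.items():
        post = belief_update(belief, action, o)
        total += po * -sum(p * math.log(p) for p in post.values() if p > 0)
    return total

def choose_action(belief):
    """Myopic (one-step lookahead) approximation to POMDP planning:
    pick the action that minimizes expected uncertainty about the learner."""
    return min(ACTIONS, key=lambda a: expected_entropy(belief, a))

belief = {h: 1.0 / len(HYPOTHESES) for h in HYPOTHESES}
print(choose_action(belief))  # prints "quiz_is_3_in_concept": the quiz is the more informative action
```

A full planner would also model learning transitions (the learner's hypothesis changing in response to examples) and look several actions ahead; this sketch only illustrates the belief-state machinery that such planning builds on.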