1.
The ability to understand the goals that drive another person's actions is an important social and cognitive skill. This is no trivial task, because any given action may in principle be explained by different possible goals (e.g., one may wave one's arm to hail a cab or to swat a mosquito). Selecting which goal best explains an observed action is a form of abduction. To explain how people perform such abductive inferences, Baker, Tenenbaum, and Saxe (2007) proposed a computational-level theory that formalizes goal inference as Bayesian inverse planning (BIP). It is known that general Bayesian inference, be it exact or approximate, is computationally intractable (NP-hard). As the time required for computationally intractable computations grows excessively fast when scaled from toy domains to the real world, it seems that such models cannot explain how humans perform Bayesian inferences quickly in real-world situations. In this paper, we investigate how the BIP model can nevertheless explain how people are able to make goal inferences quickly. The approach we propose builds on explicitly taking situational constraints into account in the computational-level model. We present a methodology for identifying situational constraints that render the model tractable. We discuss the implications of our findings and reflect on how the methodology can be applied to alternative models of goal inference and to Bayesian models in general.
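
The core of the BIP computation sketched in this abstract is a Bayesian posterior over candidate goals, P(goal | actions) ∝ P(actions | goal) · P(goal), where the likelihood of the observed actions under each goal would normally come from a (boundedly) rational planning model. The toy sketch below, in Python, illustrates only this posterior step; the goal names, action names, prior, and likelihood table are hypothetical placeholders and are not taken from the cited paper.

```python
# Minimal sketch of the Bayesian-inverse-planning posterior over goals,
# assuming a toy discrete setting (goals, actions, and likelihoods are
# hypothetical, for illustration only).

def bayesian_inverse_planning(observed_actions, goals, prior, likelihood):
    """Return the posterior P(goal | actions) ∝ P(actions | goal) * P(goal)."""
    unnormalized = {}
    for g in goals:
        p = prior[g]
        for a in observed_actions:
            # In the full BIP model, P(action | goal) is derived from
            # (approximately) rational planning toward the goal in context.
            p *= likelihood(a, g)
        unnormalized[g] = p
    z = sum(unnormalized.values())
    return {g: p / z for g, p in unnormalized.items()}

# Hypothetical example: did the agent wave to hail a cab or to swat a mosquito?
goals = ["hail_cab", "swat_mosquito"]
prior = {"hail_cab": 0.5, "swat_mosquito": 0.5}

def likelihood(action, goal):
    # Toy likelihood table standing in for a planning-based action model.
    table = {
        ("wave_arm", "hail_cab"): 0.8,
        ("wave_arm", "swat_mosquito"): 0.6,
        ("look_at_street", "hail_cab"): 0.9,
        ("look_at_street", "swat_mosquito"): 0.1,
    }
    return table[(action, goal)]

print(bayesian_inverse_planning(["wave_arm", "look_at_street"], goals, prior, likelihood))
```

In this toy setting the exact posterior is cheap to compute because the goal space is tiny and fully enumerated; the tractability issue raised in the abstract arises when the goal and action spaces scale to realistic sizes, which is where the paper's situational constraints come in.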