51.
Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample languages from this posterior distribution, iterated learning converges to a distribution over languages that is determined entirely by the prior. Under these conditions, iterated learning is a form of Gibbs sampling, a widely-used Markov chain Monte Carlo algorithm. The consequences of iterated learning are more complicated when learners choose the language with maximum posterior probability, being affected by both the prior of the learners and the amount of information transmitted between generations. We show that in this case, iterated learning corresponds to another statistical inference algorithm, a variant of the expectation-maximization (EM) algorithm. These results clarify the role of iterated learning in explanations of linguistic universals and provide a formal connection between constraints on language acquisition and the languages that come to be spoken, suggesting that information transmitted via iterated learning will ultimately come to mirror the minds of the learners.
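The convergence claim above can be illustrated with a small simulation. The sketch below assumes a toy setting that is not taken from the abstract: two candidate languages, a symmetric noise model on utterances, and a single utterance passed per generation. Each learner samples a language from its posterior and then produces data for the next learner; the long-run frequencies of the two languages approach the prior, as the analysis predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not the paper's exact setup): two languages,
# one utterance per generation, and a fixed chance that an utterance
# looks as if it came from the other language.
prior = np.array([0.7, 0.3])   # learners' inductive bias over languages
noise = 0.2                    # probability an utterance is "corrupted"
n_data = 1                     # utterances transmitted per generation

def produce_data(language):
    """Generate utterances from `language`, each flipped with probability `noise`."""
    flips = rng.random(n_data) < noise
    return np.where(flips, 1 - language, language)

def posterior(data):
    """Posterior over the two languages given the observed utterances."""
    likelihood = np.array([
        np.prod(np.where(data == h, 1 - noise, noise)) for h in (0, 1)
    ])
    post = likelihood * prior
    return post / post.sum()

# Iterated learning: each learner samples a language from its posterior,
# then produces data for the next learner.
language = 0
counts = np.zeros(2)
for generation in range(50_000):
    data = produce_data(language)
    language = rng.choice(2, p=posterior(data))
    counts[language] += 1

print("stationary frequencies:", counts / counts.sum())  # close to the prior [0.7, 0.3]
```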
52.
A multidimensional scaling approach to mental multiplication   Total citations: 5 (self-citations: 0, citations by others: 5)
Adults consistently make errors in solving simple multiplication problems. These errors have been explained with reference to the interference between similar problems. In this paper, we apply multidimensional scaling (MDS) to the domain of multiplication problems, to uncover their underlying similarity structure. A tree-sorting task was used to obtain perceived dissimilarity ratings. The derived representation shows greater similarity between problems containing larger operands and suggests that tie problems (e.g., 7 x 7) hold special status. A version of the generalized context model (Nosofsky, 1986) was used to explore the derived MDS solution. The similarity of multiplication problems made an important contribution to producing a model consistent with human performance, as did the frequency with which such problems arise in textbooks, suggesting that both factors may be involved in the explanation of errors.
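As a rough illustration of the MDS step described above, the sketch below runs classical (Torgerson) MDS on a small, invented dissimilarity matrix over four multiplication problems. The problem set and dissimilarity values are placeholders chosen for illustration; they are not the tree-sorting data, and classical MDS is only one of several MDS variants the authors could have used.

```python
import numpy as np

# Classical (Torgerson) MDS on a toy dissimilarity matrix.
# The matrix entries below are invented for illustration only.
problems = ["3x4", "7x7", "7x8", "8x9"]
D = np.array([
    [0.0, 0.8, 0.9, 0.9],
    [0.8, 0.0, 0.5, 0.6],
    [0.9, 0.5, 0.0, 0.3],
    [0.9, 0.6, 0.3, 0.0],
])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
B = -0.5 * J @ (D ** 2) @ J              # double-centered squared dissimilarities
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]        # largest eigenvalues first
k = 2                                    # embed in two dimensions
coords = eigvecs[:, order[:k]] * np.sqrt(np.maximum(eigvals[order[:k]], 0))

# Coordinates place more similar problems closer together in the 2-D space.
for name, (x, y) in zip(problems, coords):
    print(f"{name}: ({x:+.3f}, {y:+.3f})")
```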