635.
It is proposed that arithmetical facts are organized in memory in terms of a principle that is unique to numbers—the cardinal magnitudes of the addends. This implies that sums such as 4 + 2 and 2 + 4 are represented, and searched for, in terms of the maximum and minimum addends. This in turn implies that a critical stage in solving an addition problem is deciding which addend is the larger. The COMP model of addition fact retrieval incorporates a comparison stage, as well as a retrieval stage and a pronunciation stage. Three tasks, using the same subjects, were designed to assess the contribution of these three stages to retrieving the answers to single-digit addition problems. Task 3 was the addition task, which examined whether reaction times (RTs) were explained by the model; Task 1 was a number naming task to assess the contribution of the pronunciation stage; Task 2 was a magnitude comparison task to assess the contribution, if any, of the comparison stage. A regression equation that included just expressions of these three stages was found to account for 71% of the variance. It is argued that the COMP model fits not only the adult RT data better than do alternatives, but also the evidence from the development of addition skills.
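The core representational claim above can be illustrated in a few lines of code. This is a hypothetical sketch, not the authors' implementation: `comp_retrieve` and the `facts` table are invented names, and the three stages are reduced to their bare logic—canonicalizing each problem by its maximum and minimum addend so that commuted pairs such as 4 + 2 and 2 + 4 hit the same stored entry.

```python
# Hypothetical sketch of the COMP model's retrieval principle: addition
# facts are indexed by (max addend, min addend), so commuted problems
# share one memory entry after the comparison stage.

def comp_retrieve(a, b, fact_table):
    """Retrieve a sum in three stages: compare, retrieve, pronounce."""
    # Stage 1: comparison -- decide which addend is the larger.
    key = (max(a, b), min(a, b))
    # Stage 2: retrieval -- look up the fact stored under the canonical key.
    answer = fact_table[key]
    # Stage 3: pronunciation -- produce the answer (here, as a string).
    return str(answer)

# One entry per *unordered* addend pair: 55 facts cover all 100
# ordered single-digit problems.
facts = {(max(a, b), min(a, b)): a + b
         for a in range(10) for b in range(10)}

print(comp_retrieve(4, 2, facts))  # -> 6
print(comp_retrieve(2, 4, facts))  # -> 6, retrieved from the same entry
```

Note how the comparison stage is what buys the compression: without it, the table would need separate entries for each ordered pair.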
640.
It is often assumed that graphemes are a crucial level of orthographic representation above letters. Current connectionist models of reading, however, do not address how the mapping from letters to graphemes is learned. One major challenge for computational modeling is therefore developing a model that learns this mapping and can assign the graphemes to linguistically meaningful categories such as the onset, vowel, and coda of a syllable. Here, we present a model that learns to do this in English for strings of any letter length and any number of syllables. The model is evaluated on error rates and further validated on the results of a behavioral experiment designed to examine ambiguities in the processing of graphemes. The results show that the model (a) chooses graphemes from letter strings with a high level of accuracy, even when trained on only a small portion of the English lexicon; (b) chooses a similar set of graphemes as people do in situations where different graphemes can potentially be selected; (c) predicts orthographic effects on segmentation which are found in human data; and (d) can be readily integrated into a full‐blown model of multi‐syllabic reading aloud such as CDP++ (Perry, Ziegler, & Zorzi, 2010).  Altogether, these results suggest that the model provides a plausible hypothesis for the kind of computations that underlie the use of graphemes in skilled reading.
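To make the letters-to-graphemes mapping concrete, here is a toy illustration only: a greedy longest-match segmenter over a small, hand-picked grapheme inventory. The inventory, the function name `segment`, and the greedy strategy are all assumptions for the sake of the example—the model described above *learns* the mapping from data and additionally assigns each grapheme to onset, vowel, or coda, which this sketch does not attempt.

```python
# Toy grapheme segmenter (not the trained model): split a letter string
# into graphemes by greedy longest match against an assumed inventory.

GRAPHEMES = {"igh", "ch", "ng", "sh", "th", "ee", "oo",
             "a", "b", "c", "d", "e", "f", "g", "h", "i", "k",
             "l", "m", "n", "o", "p", "r", "s", "t", "u", "w"}
MAX_LEN = max(len(g) for g in GRAPHEMES)

def segment(word):
    """Split `word` into graphemes, preferring the longest match."""
    out, i = [], 0
    while i < len(word):
        for size in range(MAX_LEN, 0, -1):       # try longest match first
            candidate = word[i:i + size]
            if candidate in GRAPHEMES:
                out.append(candidate)
                i += size
                break
        else:                                     # unknown letter: emit as-is
            out.append(word[i])
            i += 1
    return out

print(segment("night"))   # -> ['n', 'igh', 't']
print(segment("cheese"))  # -> ['ch', 'ee', 's', 'e']
```

Greedy longest match is the simplest policy; the ambiguity cases the paper tests (where different grapheme parses compete) are exactly where such a fixed policy would fall short of a learned one.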
Copyright©北京勤云科技发展有限公司  京ICP备09084417号