Sort order: 877 results found (search time: 15 ms)
31.
Percentage agreement measures of interobserver agreement or "reliability" have traditionally been used to summarize observer agreement in studies using interval-recording, time-sampling, and trial-scoring data collection procedures. Recent articles disagree on whether to continue using these percentage agreement measures, on which ones to use, and, if their use continues, on what to do about chance agreements. Much of the disagreement derives from the need to be reasonably certain that we do not accept, as evidence of true interobserver agreement, agreement levels that are substantially probable as a result of chance. The various percentage agreement measures are shown to be adequate to this task, but easier approaches are discussed. Tables are provided for checking whether obtained disagreements are unlikely to be due to chance. Particularly important is a simple rule that, when met, makes the tables unnecessary: if reliability checks using 50 or more observation occasions produce 10% or fewer disagreements, then for behavior rates from 10% through 90% the agreement achieved is quite improbably the result of chance.
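As an illustrative sketch (not the authors' actual tables), the chance-agreement reasoning can be checked with a short binomial calculation. It assumes a simple chance model in which two independent observers each score "occurrence" with probability p, so the per-occasion chance agreement is p² + (1−p)²:

```python
from math import comb

def chance_agreement_prob(p):
    """Per-occasion probability that two independent observers agree by
    chance, assuming both score 'occurrence' with probability p."""
    return p * p + (1 - p) * (1 - p)

def tail_prob(n, k, q):
    """P(X >= k) for X ~ Binomial(n, q): the chance of seeing at least
    k agreements out of n occasions by chance alone."""
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k, n + 1))

# The rule from the abstract: 50 occasions, at most 10% disagreements,
# i.e. at least 45 agreements.
for rate in (0.1, 0.3, 0.5, 0.7, 0.9):
    q = chance_agreement_prob(rate)
    print(f"behavior rate {rate:.1f}: chance agreement/occasion = {q:.2f}, "
          f"P(>=45/50 agreements by chance) = {tail_prob(50, 45, q):.2e}")
```

For mid-range behavior rates the tail probability is vanishingly small, which is the intuition behind the rule; the probability grows toward the 10%/90% extremes, where the tabled checks matter most.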
32.
Synthetic data are used to examine how well axiomatic and numerical conjoint measurement methods, individually and comparatively, recover simple polynomial generators in three dimensions. The study illustrates extensions of numerical conjoint measurement (NCM) to identify and model distributive and dual-distributive data structures, in addition to the usual additive ones. It was found that although minimum STRESS was the criterion of fit, another statistic, predictive capability, provided a better diagnosis of the known generating model. That NCM methods were better able to identify the generating models conflicts with Krantz and Tversky's assertion that, in general, direct axiom tests provide a more powerful diagnostic test between alternative composition rules than does evaluation of numerical correspondence. For all methods, dual-distributive models are the most difficult to recover, while, consistent with past studies, the additive model is the most robust of the fitted models. (Douglas Emery is now at the Krannert Graduate School of Management, Purdue University, West Lafayette, IN, on leave from the University of Calgary.)
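A minimal sketch of the model-diagnosis idea, using hypothetical factor scales rather than the paper's stimuli or fitting procedure: data produced by an additive generator are fit exactly by the additive composition rule but show systematic misfit under a distributive rule:

```python
import itertools

# Hypothetical 3-factor design (illustrative values, not the study's data).
A, B, C = [1.0, 2.0, 3.0], [0.5, 1.5], [2.0, 4.0, 6.0]

def additive(a, b, c):      return a + b + c          # y = A + B + C
def distributive(a, b, c):  return (a + b) * c        # y = (A + B) * C

# Synthetic data from the additive generator over the full factorial design.
data = {(i, j, k): additive(a, b, c)
        for (i, a), (j, b), (k, c)
        in itertools.product(enumerate(A), enumerate(B), enumerate(C))}

def sse(model):
    """Badness of fit of a composition rule with the factor scales taken as
    known. A full NCM analysis would also estimate the scales; this sketch
    only compares composition rules on fixed scales."""
    return sum((y - model(A[i], B[j], C[k])) ** 2
               for (i, j, k), y in data.items())

print("additive SSE:", sse(additive))          # the true generator fits exactly
print("distributive SSE:", sse(distributive))  # systematic misfit
```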
33.
A two-step weighted least squares estimator for multiple factor analysis of dichotomized variables is discussed. The estimator is based on the first- and second-order joint probabilities. Asymptotic standard errors and a model test are obtained by applying the jackknife procedure.
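The first- and second-order joint proportions on which the estimator is based can be computed directly from the raw 0/1 data. A minimal sketch with hypothetical observations (the WLS fit and the jackknife steps are not shown):

```python
# Hypothetical dichotomized data: each row is one respondent's 0/1 scores
# on three variables (illustrative values only).
rows = [(1, 1, 0), (1, 0, 0), (0, 1, 1), (1, 1, 1), (0, 0, 0), (1, 0, 1)]
n, p = len(rows), len(rows[0])

# First-order proportions: P(x_i = 1) for each variable i.
first = [sum(r[i] for r in rows) / n for i in range(p)]

# Second-order joint proportions: P(x_i = 1 and x_j = 1) for each pair i < j.
second = {(i, j): sum(r[i] * r[j] for r in rows) / n
          for i in range(p) for j in range(i + 1, p)}

print("first-order:", first)
print("second-order:", second)
```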
34.
35.
36.
This paper develops a novel psycholinguistic parser and tests it against experimental and corpus reading data. The parser builds on the recent research into memory structures, which argues that memory retrieval is content-addressable and cue-based. It is shown that the theory of cue-based memory systems can be combined with transition-based parsing to produce a parser that, when combined with the cognitive architecture ACT-R, can model reading and predict online behavioral measures (reading times and regressions). The parser's modeling capacities are tested against self-paced reading experimental data (Grodner & Gibson, 2005), eye-tracking experimental data (Staub, 2011), and a self-paced reading corpus (Futrell et al., 2018).
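A toy sketch of cue-based retrieval in the ACT-R spirit (illustrative chunk features and parameter values, not the parser's actual lexicon or estimated parameters): a chunk's activation is its base level plus spreading activation from matching retrieval cues, and predicted latency falls off exponentially with activation:

```python
import math

# Hypothetical memory chunks with feature bundles (illustrative only).
chunks = {
    "the_subject": {"cat": "NP", "case": "nom"},
    "the_object":  {"cat": "NP", "case": "acc"},
}
base = {"the_subject": 0.5, "the_object": 0.3}  # base-level activations

def activation(name, cues, W=1.0, S=1.5):
    """Base-level activation plus cue-based spreading activation:
    matching cues boost the chunk, mismatching cues penalize it."""
    feats = chunks[name]
    match = sum(1 for k, v in cues.items() if feats.get(k) == v)
    mismatch = len(cues) - match
    return base[name] + (W / len(cues)) * (S * match - S * mismatch)

def retrieve(cues, F=0.2):
    """Return the most active chunk and its predicted latency F * exp(-A)."""
    name = max(chunks, key=lambda c: activation(c, cues))
    return name, F * math.exp(-activation(name, cues))

print(retrieve({"cat": "NP", "case": "nom"}))  # nominative NP cues
```

Similarity-based interference falls out naturally: a distractor sharing some cues (here, `cat: NP`) receives partial activation and slows or derails retrieval of the target.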
37.
The current study used a multiple-baseline design across subjects to evaluate whether a computer-based training program could improve observers' accuracy in scoring discrete instances of problem behavior at 5.0x normal speed. During pretraining and posttraining, observers attempted to score multiple examples of problem behavior at 5.0x without feedback. During training, participants scored multiple examples of problem behavior at 5.0x with automated feedback. Researchers measured omission errors (missing problem behavior) and commission errors (scoring other behavior as problem behavior), along with the total duration of scoring time, to determine the observers' accuracy and efficiency, respectively. After training, all participants scored instances of problem behavior with less than 11% error at 5.0x, and the time required to score the videos across 90-min observations was reduced by 66%. Results extend previous evaluations of fast-forwarding by demonstrating that the training program could be used to teach observers to accurately score problem behavior at a speed faster than 3.5x.
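A minimal sketch of omission/commission scoring against a reference record, with hypothetical event times and an assumed 0.5-second match window (the study's videos and scoring software are not shown):

```python
# Hypothetical onset times (seconds) of problem behavior.
reference = {2.1, 5.4, 9.8, 14.0}   # true events in the reference record
scored    = {2.2, 9.7, 14.1, 20.0}  # observer's scored onsets at 5.0x playback
TOL = 0.5                           # assumed match window in seconds

def errors(ref, obs, tol=TOL):
    """Count omission errors (real events the observer missed) and
    commission errors (scored events matching no real event)."""
    matched = {r for r in ref if any(abs(r - o) <= tol for o in obs)}
    omission = len(ref - matched)
    commission = sum(1 for o in obs if not any(abs(o - r) <= tol for r in ref))
    return omission, commission

om, com = errors(reference, scored)
print(f"omission errors: {om}, commission errors: {com}")
```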
38.
Cognitive diagnostic assessment aims to uncover an individual's internal knowledge-mastery structure and provide detailed diagnostic information about students' strengths and weaknesses, so as to promote their all-round development. Researchers have developed many cognitive diagnostic models for dichotomously (0-1) scored items, but polytomous cognitive diagnostic models remain comparatively understudied. This paper reviews the existing polytomous cognitive diagnostic models, describing each model's assumptions, psychometric properties, and range of application, to provide practitioners and researchers with a reference for comparing and selecting among polytomous cognitive diagnostic models. Finally, directions for future research on polytomous diagnostic models are discussed.
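To make the polytomous idea concrete, here is a deliberately simplified sketch (not any specific published model) in which an item's score categories require nested attribute sets, so that a partial score is itself diagnostic of which attributes are mastered:

```python
# Hypothetical polytomous item with categories 0..2: reaching category m
# requires mastering the attributes in q_steps[m] (illustrative only).
q_steps = [set(), {"A1"}, {"A1", "A2"}]

def expected_score(mastered, slip=0.1):
    """Expected item score for an examinee with the given mastered
    attributes: each reachable step is passed unless a slip occurs."""
    score = 0.0
    for step_attrs in q_steps[1:]:
        if step_attrs <= mastered:
            score += 1 - slip
    return score

print(expected_score(set()))           # masters nothing: expected score 0
print(expected_score({"A1"}))          # can reach category 1
print(expected_score({"A1", "A2"}))    # can reach category 2
```

The diagnostic payoff of polytomous scoring is visible here: a score of 1 versus 2 separates mastery profiles that a dichotomized (right/wrong) item would collapse together.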
39.
We consider first a variant of the analytic hierarchy process (AHP) with a one-parametric class of geometric scales to quantify human comparative judgement and with a multiplicative structure: logarithmic regression to calculate the impact scores of the alternatives at the first evaluation level, and a geometric-mean aggregation rule to calculate the final scores at the second level. We demonstrate that the rank order of the impact scores and final scores is scale-independent. Finally, we show that the multiplicative AHP is an exponential version of the simple multi-attribute rating technique (SMART): the multiplicative AHP is concerned with ratios of intervals on the dimension of desirability, whereas SMART analyses differences in the corresponding orders of magnitude.
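A minimal sketch of the multiplicative aggregation step with hypothetical impact scores and weights: because the final score is a weighted geometric mean, raising every impact score to a power gamma (a change of geometric scale) rescales all final scores monotonically and leaves the rank order unchanged:

```python
import math

# Hypothetical impact scores of three alternatives under two criteria.
impact = {
    "alt1": [4.0, 2.0],
    "alt2": [1.0, 9.0],
    "alt3": [2.0, 2.0],
}
weights = [0.5, 0.5]

def final_score(vals, w):
    """Weighted geometric mean: exp(sum_j w_j * ln v_j)."""
    return math.exp(sum(wi * math.log(v) for wi, v in zip(w, vals)))

def rank(gamma=1.0):
    """Rank alternatives after applying the scale parameter gamma to every
    impact score; the ordering is the same for any gamma > 0."""
    scores = {a: final_score([v ** gamma for v in vals], weights)
              for a, vals in impact.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank(1.0), rank(3.0))  # same rank order under either geometric scale
```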
40.
Research indicates that selecting a strategy to best exploit a new technology is a complex decision-making process. The task involves making a series of decisions with multiple alternatives, each to be evaluated by multiple criteria whose values have high levels of uncertainty. This paper presents a methodology for modelling a new technology decision using decision trees and an optimizing algorithm. A problem of a mining company considering the adoption of new technology is used to illustrate the decision-making task and modelling methodology. A numerical solution to the case demonstrates the potential of the optimizing technique in strategy selection.
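A minimal sketch of the expected-value rollback that such a decision-tree model performs, with hypothetical payoffs and probabilities (not the mining company's actual case or the paper's optimizing algorithm):

```python
# Hypothetical technology-adoption tree: a decision node whose branches lead
# to chance nodes with (probability, payoff) outcomes (illustrative values).
tree = ("decision", {
    "adopt new tech": ("chance", [(0.6, 120.0), (0.4, -30.0)]),  # success / failure
    "keep current":   ("chance", [(1.0, 40.0)]),
})

def rollback(node):
    """Roll back the tree: chance nodes return their expected value,
    decision nodes pick the branch with the highest expected value."""
    kind, body = node
    if kind == "chance":
        return sum(p * (v if isinstance(v, float) else rollback(v))
                   for p, v in body)
    values = {name: rollback(child) for name, child in body.items()}
    best = max(values, key=values.get)
    return best, values[best]

print(rollback(tree))  # best action and its expected value
```

Real applications replace the raw expected value with a utility function or risk constraints, which is where an optimizing algorithm over strategies earns its keep.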
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号