A total of 555 query results were found (search time: 15 ms).
101.
Test items are often evaluated and compared by contrasting the shapes of their item characteristic curves (ICCs) or surfaces. The current paper develops and applies three general (i.e., nonparametric) comparisons of the shapes of two item characteristic surfaces: (i) proportional latent odds, (ii) uniform relative difficulty, and (iii) item sensitivity. Two items may be compared in these ways while making no assumption about the shapes of item characteristic surfaces for other items, and no assumption about the dimensionality of the latent variable. Also studied is a method for comparing the relative shapes of two item characteristic curves in two examinee populations. The author is grateful to Paul Holland, Robert Mislevy, Tue Tjur, Rebecca Zwick, the editor and reviewers for valuable comments on the subject of this paper, to Mari A. Pearlman for advice on the pairing of items in the examples, and to Dorothy Thayer for assistance with computing.
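As a minimal illustration of the first two comparisons (not the paper's estimation method), consider two hypothetical 2PL items with equal discrimination: their latent odds are then proportional at every trait value, and one item is uniformly harder than the other. The item parameters below are invented for the sketch.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 121)

# Equal discriminations; item 2 is shifted to be harder than item 1
p1 = icc_2pl(theta, 1.2, -0.5)
p2 = icc_2pl(theta, 1.2, 0.8)

# Proportional latent odds: the odds ratio is constant in theta
odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2))

# Uniform relative difficulty: p1 >= p2 at every trait value
uniformly_harder = np.all(p1 >= p2)
```

With equal discriminations the odds ratio reduces to exp(a * (b2 - b1)), a constant, so both properties hold by construction; items with unequal discriminations would violate them.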
102.
Wendy M. Yen. Psychometrika, 1987, 52(2): 275-291
Comparisons are made between BILOG version 2.2 and LOGIST 5.0 Version 2.5 in estimating the item parameters, traits, item characteristic functions (ICFs), and test characteristic functions (TCFs) for the three-parameter logistic model. Data analyzed are simulated item responses for 1000 simulees and one 10-item test, four 20-item tests, and four 40-item tests. LOGIST usually was faster than BILOG in producing maximum likelihood estimates. BILOG almost always produced more accurate estimates of individual item parameters. In estimating ICFs and TCFs BILOG was more accurate for the 10-item test, and the two programs were about equally accurate for the 20- and 40-item tests. I am grateful to Robert J. Mislevy, Martha L. Stocking, and Marilyn S. Wingersky for many helpful comments on an earlier version of this paper. I would also like to thank Hamid Kamrani and Bongmyoung Park for getting LOGIST and BILOG running and keeping them running under changing computer systems at CTB/McGraw-Hill.
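The kind of data this study analyzes can be generated directly from the three-parameter logistic model. The sketch below simulates a 1000-simulee, 20-item data set and evaluates the test characteristic function; the parameter distributions are illustrative assumptions, not the ones used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_3pl(theta, a, b, c):
    """3PL item characteristic function with the 1.7 scaling constant."""
    return c + (1.0 - c) / (1.0 + np.exp(-1.7 * a * (theta[:, None] - b)))

n_items, n_simulees = 20, 1000
a = rng.lognormal(0.0, 0.3, n_items)       # discriminations (assumed distribution)
b = rng.normal(0.0, 1.0, n_items)          # difficulties
c = np.full(n_items, 0.2)                  # pseudo-guessing lower asymptote
theta = rng.normal(0.0, 1.0, n_simulees)   # latent traits

p = p_3pl(theta, a, b, c)
responses = (rng.random(p.shape) < p).astype(int)  # 1000 x 20 response matrix

# Test characteristic function: expected number-right score over a trait grid
grid = np.linspace(-3, 3, 61)
tcf = p_3pl(grid, a, b, c).sum(axis=1)
```

Estimation programs such as BILOG and LOGIST would take the `responses` matrix as input and attempt to recover `a`, `b`, `c`, and `theta`.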
103.
An algorithmic approach to test design, using information functions, is presented. The approach uses a special branch of linear programming, i.e., binary programming. In addition, results of some benchmark problems are presented. Within the same framework, it is also possible to formulate the problem of individualized testing. I would like to thank my colleagues N. Veldhuijzen, H. Verstralen and M. Zwarts for their suggestions and comments. Furthermore, I would like to thank Professor W. van der Linden, Department of Educational Measurement and Data Analysis, Technological University Twente, for offering facilities at his department, and Ellen Timminga of the same department and S. Baas of the Department of Operational Research at the same university for their efforts in linear programming.
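The binary-programming formulation selects items with 0/1 decision variables so as to maximize test information at a target trait level subject to constraints. The toy sketch below uses a hypothetical 12-item 2PL pool, a length constraint, and a content constraint; exhaustive enumeration stands in for the solver that a real application would use.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(1)
n_pool, test_len = 12, 5
a = rng.lognormal(0.0, 0.3, n_pool)     # 2PL discriminations (hypothetical pool)
b = rng.normal(0.0, 1.0, n_pool)        # difficulties
category = np.array([0, 1] * 6)         # two content categories (hypothetical)

def item_info(theta, a, b):
    """Fisher information of a 2PL item at trait value theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

info_at_0 = item_info(0.0, a, b)        # target information at theta = 0

best_val, best_set = -np.inf, None
for subset in combinations(range(n_pool), test_len):
    idx = np.array(subset)
    # Content constraint: at least 2 items from each category
    if np.bincount(category[idx], minlength=2).min() < 2:
        continue
    val = info_at_0[idx].sum()          # objective: information at the target
    if val > best_val:
        best_val, best_set = val, idx
```

For realistic pool sizes the enumeration is replaced by a binary (0/1) linear program; the objective and constraints above are linear in the selection variables, which is what makes that formulation possible.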
104.
Score equity assessment (SEA) refers to an examination of population invariance of equating across two or more subpopulations of test examinees. Previous SEA studies have shown that score equity may be present for examinees scoring at particular test score ranges but absent for examinees scoring at other score ranges. No studies to date have examined why score equity can be inconsistent across the score range of some tests. The purpose of this study is to explore a source of uneven subpopulation score equity across the score range of a test. It is hypothesized that the difficulty of anchor items displaying differential item functioning (DIF) is directly related to the score location at which issues of score inequity are observed. The simulation study supports the hypothesis that the difficulty of DIF items has a systematic impact on the uneven nature of conditional score equity.
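The intuition behind the hypothesis can be shown with a toy calculation (the item parameters are invented): when an anchor item exhibits uniform DIF, the between-group gap in its response probability is largest near the item's difficulty, so that is where the DIF item can most distort conditional equating.

```python
import numpy as np

def icc(theta, a, b):
    """2PL item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 121)
a, b, dif = 1.0, 1.5, 0.5     # hard anchor item; uniform DIF shifts b for one group

# Between-group gap in the probability of a correct response
gap = icc(theta, a, b) - icc(theta, a, b + dif)
peak = theta[np.argmax(gap)]  # the gap peaks midway between the two difficulties
```

The gap is maximal at theta = b + dif/2, i.e., near the DIF item's difficulty, which is consistent with inequity appearing at the corresponding score location.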
105.
Value-directed metamemory concerns how, when faced with information of differing importance, people use metamemory monitoring and control to selectively prioritize the processing of high-value information and thereby maximize memory efficiency. Value-directed metamemory comprises value-directed metamemory monitoring and control, and eye-tracking technology, with its advantages of non-intrusiveness and high ecological validity, can track this monitoring-and-control process in real time. The eye-movement measures adopted in this literature to date concentrate on item selection, study-time allocation, and the course of learning. Future research on item selection, learning efficiency, and strategy comparison could explore further applications of eye-tracking technology.
106.
In this paper we propose two interpretations for the discrimination parameter in the two-parameter logistic model (2PLM). The interpretations are based on the relation between the 2PLM and two stochastic models. In the first interpretation, the 2PLM is linked to a diffusion model so that the probability of absorption equals the 2PLM. The discrimination parameter is the distance between the two absorbing boundaries and therefore the amount of information that has to be collected before a response to an item can be given. For the second interpretation, the 2PLM is connected to a specific type of race model. In the race model, the discrimination parameter is inversely related to the dependency of the information used in the decision process. Extended versions of both models with person-to-person variability in the difficulty parameter are considered. When fitted to a data set, it is shown that a generalization of the race model that allows for dependency between choices and response times (RTs) is the best-fitting model.  相似文献   
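The first interpretation can be checked in closed form. For a Wiener diffusion with drift v, unit diffusion coefficient, absorbing boundaries at 0 and alpha, and an unbiased start at alpha/2, the upper-boundary absorption probability reduces to a 2PL curve whose discrimination equals the boundary separation alpha. The specific parameter values below are arbitrary illustrations.

```python
import numpy as np

def p_upper(v, alpha, z):
    """Upper-boundary absorption probability of a Wiener process with
    drift v, unit variance, boundaries 0 and alpha, starting point z."""
    return (1.0 - np.exp(-2.0 * v * z)) / (1.0 - np.exp(-2.0 * v * alpha))

def p_2pl(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

alpha, b = 1.5, 0.3                        # boundary separation plays the role of a
theta = np.array([-2.0, -0.5, 0.0, 1.0, 2.5])
v = theta - b                              # drift: trait minus item difficulty
probs = p_upper(v, alpha, alpha / 2.0)     # unbiased starting point
```

Algebraically, with z = alpha/2 the absorption probability simplifies to 1 / (1 + exp(-alpha * v)), i.e., a 2PL with a = alpha, which is why the discrimination parameter can be read as the amount of information collected before responding. (The formula has a removable singularity at v = 0, avoided in the grid above.)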
107.
Although the Bock–Aitkin likelihood-based estimation method for factor analysis of dichotomous item response data has important advantages over classical analysis of item tetrachoric correlations, a serious limitation of the method is its reliance on fixed-point Gauss-Hermite (G-H) quadrature in the solution of the likelihood equations and likelihood-ratio tests. When the number of latent dimensions is large, computational considerations require that the number of quadrature points per dimension be few. But with large numbers of items, the dispersion of the likelihood, given the response pattern, becomes so small that the likelihood cannot be accurately evaluated with the sparse fixed points in the latent space. In this paper, we demonstrate that substantial improvement in accuracy can be obtained by adapting the quadrature points to the location and dispersion of the likelihood surfaces corresponding to each distinct pattern in the data. In particular, we show that adaptive G-H quadrature, combined with mean and covariance adjustments at each iteration of an EM algorithm, produces an accurate fast-converging solution with as few as two points per dimension. Evaluations of this method with simulated data are shown to yield accurate recovery of the generating factor loadings for models of up to eight dimensions. Unlike an earlier application of adaptive Gibbs sampling to this problem by Meng and Schilling, the simulations also confirm the validity of the present method in calculating likelihood-ratio chi-square statistics for determining the number of factors required in the model. Finally, we apply the method to a sample of real data from a test of teacher qualifications.
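The failure of fixed-point G-H quadrature, and its repair by adaptation, can be shown in one dimension with a toy stand-in for the pattern likelihood: a Gaussian "likelihood" concentrated far from the prior mode, as happens with long tests. Fixed two-point quadrature centered on the prior badly underestimates the marginal; two points recentered and rescaled to the posterior recover it. The numbers below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def phi(x, mu=0.0, sd=1.0):
    """Normal density."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

m, s = 2.0, 0.3                               # narrow "likelihood" far from the prior

def integrand(theta):
    return phi(theta) * phi(theta, m, s)      # N(0,1) prior times likelihood

exact = phi(m, 0.0, np.sqrt(1.0 + s**2))      # closed-form marginal for this toy case

x, w = np.polynomial.hermite.hermgauss(2)     # only two quadrature points

# Fixed-point G-H centered on the prior: substitute theta = sqrt(2) * x
naive = (w * phi(np.sqrt(2.0) * x, m, s)).sum() / np.sqrt(np.pi)

# Adaptive G-H: recenter and rescale to the posterior mean and dispersion
mu_p = m / (1.0 + s**2)
sd_p = np.sqrt(s**2 / (1.0 + s**2))
t = mu_p + np.sqrt(2.0) * sd_p * x
adapted = np.sqrt(2.0) * sd_p * (w * np.exp(x**2) * integrand(t)).sum()
```

Because the toy integrand is exactly Gaussian, the adapted two-point rule is exact here; for real pattern likelihoods it is only approximately so, which is why the paper pairs it with per-iteration mean and covariance adjustments inside EM.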
108.
The Item-Exchange Effect in the Propositional Representation of Chinese Adversative Compound Sentences
A sentence-picture verification task was used to examine the item-exchange effect in the propositional representation of Chinese adversative compound sentences. The results show that when an inverted Chinese adversative compound sentence expresses its items in the order "(but) B → although A", the items tend to be exchanged in the propositional representation, yielding the representation "although A → but B". These results provide preliminary evidence that readers' comprehension of Chinese adversative compound sentences may be a serial cognitive process that proceeds in the fixed direction "although A (conceded fact) → but B (adversative)".
109.
The Item-Exchange Effect in the Mental Representation of Chinese Causal Compound Sentences
A sentence-picture verification task was used to examine the item-exchange effect in the mental representation of Chinese causal compound sentences. The results show that when a Chinese causal compound sentence expresses its items in the order "effect → cause", the items are exchanged in the resulting mental representation, yielding "cause → effect". These results provide preliminary evidence that readers' comprehension of Chinese causal compound sentences is a serial cognitive process that proceeds in the fixed direction "cause → effect".
110.
An IRT model based on the Rasch model is proposed for composite tasks, that is, tasks that are decomposed into subtasks of different kinds. There is one subtask for each component that is discerned in the composite tasks. A component is a generic kind of subtask of which the subtasks resulting from the decomposition are specific instantiations with respect to the particular composite tasks under study. The proposed model constrains the difficulties of the composite tasks to be linear combinations of the difficulties of the corresponding subtask items, which are estimated together with the weights used in the linear combinations, one weight for each kind of subtask. Although the model does not belong to the exponential family, its parameters can be estimated using conditional maximum likelihood estimation. The approach is demonstrated with an application to spelling tasks. We thank Eric Maris for his helpful comments.
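The model's central constraint can be sketched directly: each composite-task difficulty is a weighted sum of its subtask-item difficulties, with one weight per component kind. The numbers of tasks and components and all parameter values below are hypothetical; estimating them jointly by conditional maximum likelihood is the paper's contribution, not shown here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 6 composite tasks, each decomposed into 2 components
beta_sub = rng.normal(0.0, 1.0, (6, 2))   # subtask-item difficulties
w = np.array([0.7, 0.3])                  # one weight per component kind

# Model constraint: composite difficulty is a linear combination
# of the corresponding subtask difficulties
beta_comp = beta_sub @ w

def rasch_p(theta, beta):
    """Rasch probability of success on items with difficulties beta."""
    return 1.0 / (1.0 + np.exp(-(theta - beta)))

theta = 0.5                               # a single examinee's ability
p_comp = rasch_p(theta, beta_comp)        # success probabilities on composites
```

In estimation, `beta_sub` would be identified from responses to the subtasks administered on their own, while `w` captures how strongly each component kind contributes to the difficulty of the composite task.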