5 results found (search time: 15 ms)
1.
2.
An item response theory model for dealing with test speededness is proposed. The model consists of two random processes, a problem-solving process and a random guessing process, with random guessing gradually taking over from problem solving. The change point and change rate involved are treated as random parameters in order to model examinee differences in both respects. The proposed model is evaluated on simulated data and in a case study.
The research reported in this paper was supported by IAP P5/24 and GOA/2005/04, both awarded to Paul De Boeck and Iven Van Mechelen, and by IAP P6/03, awarded to Iven Van Mechelen. Yuri Goegebeur's research was supported by a grant from the Danish Natural Science Research Council.
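The two-process structure described in this abstract can be illustrated with a minimal simulation sketch. The function below is a hypothetical illustration, not the authors' implementation: it assumes a Rasch model for the problem-solving process, a linear decline of the solving probability past the examinee's change point, and uniform guessing over the response options.

```python
import math
import random

def simulate_response(theta, b, change_point, rate, position, n_options=4):
    """Simulate one item response under a speededness mixture:
    up to the examinee-specific change point the response comes from a
    problem-solving (Rasch) process; afterwards random guessing takes
    over gradually, at the examinee-specific change rate."""
    # Probability that this response is still governed by problem solving.
    if position <= change_point:
        p_solve = 1.0
    else:
        p_solve = max(0.0, 1.0 - rate * (position - change_point))
    if random.random() < p_solve:
        # Rasch model: P(correct) = 1 / (1 + exp(-(theta - b)))
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
    else:
        # Pure random guessing among the response options.
        p = 1.0 / n_options
    return 1 if random.random() < p else 0
```

Drawing `change_point` and `rate` per examinee (e.g., from lognormal distributions) would reproduce the paper's idea of modeling individual differences in both respects.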
3.
Test collusion (TC) is the sharing of test materials or answers to test questions before or during the test (an important special case of TC is item preknowledge). Because of the potentially large advantages for the examinees involved, TC poses a serious threat to the validity of score interpretations. The proposed approach applies graph-theory methodology to response-similarity analyses to identify groups of examinees involved in TC without using any knowledge about which parts of the test were affected by TC. The approach supports different response-similarity indices (specific to a particular type of TC) and different types of groups (connected components, cliques, or near-cliques). A comparison with an up-to-date method using real and simulated data is presented. Possible extensions and practical recommendations are given.
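The connected-components variant of the graph approach can be sketched in a few lines. This is an illustrative sketch, not the paper's method: it assumes some pairwise response-similarity index has already been computed (the index itself, the threshold, and the names here are placeholders), and it groups examinees via union-find over pairs whose similarity exceeds the threshold.

```python
def collusion_groups(similarities, threshold):
    """Group examinees into connected components of the graph whose
    edges are examinee pairs with response similarity >= threshold.
    `similarities` maps (examinee_i, examinee_j) -> similarity value."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Add an edge for every sufficiently similar pair.
    for (i, j), s in similarities.items():
        if s >= threshold:
            union(i, j)

    # Collect the components; singletons are not flagged.
    groups = {}
    for x in list(parent):
        groups.setdefault(find(x), set()).add(x)
    return [g for g in groups.values() if len(g) > 1]
```

Detecting cliques or near-cliques instead of connected components, as the abstract mentions, would require replacing the union-find step with a dense-subgraph search.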
4.
Psychometrika - We consider a multidimensional noncompensatory approach for binary items in passage-based tests. The passage-based noncompensatory model (PB-NM) emphasizes two underlying components...
5.
Nested logit models have been presented as an alternative to multinomial logistic models for multiple-choice test items (Suh and Bolt in Psychometrika 75:454–473, 2010) and possess a mathematical structure that naturally lends itself to evaluating the incremental information provided by attending to distractor selection in scoring. One potential concern in attending to distractors is the possibility that distractor selection reflects a different trait/ability than that underlying the correct response. This paper illustrates a multidimensional extension of a nested logit item response model that can be used to evaluate such distinctions and also defines a new framework for incorporating collateral information from distractor selection when differences exist. The approach is demonstrated in application to questions faced by a university testing center over whether to incorporate distractor selection into the scoring of its multiple-choice tests. Several empirical examples are presented.
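The nesting structure referenced here can be sketched for the unidimensional case: a 2PL component governs the correct response, and a nominal (multinomial logit) component allocates the remaining probability among distractors, conditional on an incorrect response. This is a minimal sketch of that structure under assumed parameter names, not the paper's multidimensional extension.

```python
import math

def nested_logit_probs(theta, a, b, distractor_slopes, distractor_intercepts):
    """Category probabilities for one multiple-choice item under a
    unidimensional nested logit model: P(correct) follows a 2PL, and
    the distractors share the remaining probability via a conditional
    multinomial logit."""
    p_correct = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    # Multinomial logit over distractors, conditional on an incorrect response.
    logits = [s * theta + c for s, c in zip(distractor_slopes, distractor_intercepts)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # max-shifted for numerical stability
    total = sum(exps)
    p_distractors = [(1.0 - p_correct) * e / total for e in exps]
    return p_correct, p_distractors
```

A multidimensional extension of the kind the abstract describes would let the distractor logits depend on a second latent trait rather than the same `theta`.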