20 similar documents found
1.
In multidimensional item response theory (MIRT), it is possible for the estimate of a subject's ability in some dimension to decrease after they have answered a question correctly. This paper investigates how and when this type of paradoxical result can occur. We demonstrate that many response models and statistical estimates can produce paradoxical results and that in the popular class of linearly compensatory models, maximum likelihood estimates are guaranteed to do so. In light of these findings, the appropriateness of multidimensional item response methods for assigning scores in high-stakes testing is called into question.
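For orientation, the paradox can be stated formally for the linearly compensatory class this abstract refers to; the notation below is a standard generic formulation, not quoted from the paper:

\[
P(X_{ij}=1 \mid \boldsymbol{\theta}_i) = F(\mathbf{a}_j^{\top}\boldsymbol{\theta}_i + d_j),
\qquad
\hat{\theta}_{ik}(x_1,\dots,x_{n-1},1) < \hat{\theta}_{ik}(x_1,\dots,x_{n-1},0)
\]

Here F is a monotone link (logistic or normal ogive), \(\mathbf{a}_j\) is the vector of item discriminations, and the inequality expresses a paradoxical result: changing the answer to item n from incorrect to correct lowers the maximum likelihood estimate of ability on some dimension k.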
2.
In multidimensional item response models, paradoxical scoring effects can arise, wherein correct answers are penalized and incorrect answers are rewarded. For the most prominent class of IRT models, the class of linearly compensatory models, a general derivation of paradoxical scoring effects based on the geometry of item discrimination vectors is given, which furthermore corrects an error in an established theorem on paradoxical results. This approach highlights the very counterintuitive way in which item discrimination parameters (and also factor loadings) have to be interpreted in terms of their influence on the latent ability estimate. It is proven that, despite the error in the original proof, the key result concerning the existence of paradoxical effects remains true—although the actual relation to the item parameters is shown to be a more complicated function than previous results suggested. The new proof enables further insights into the actual mathematical causation of the paradox and generalizes the findings within the class of linearly compensatory models.
3.
New Developments in Test Theory: Multidimensional Item Response Theory
Multidimensional item response theory (MIRT) is a new test theory that grew out of two traditions: factor analysis and unidimensional item response theory. Depending on how the multiple abilities involved interact when an examinee completes a task, multidimensional item response models fall into two classes: compensatory and noncompensatory models. Building on a systematic review of the compensatory models in common use, this paper identifies four directions for future research in MIRT: multidimensional models for polytomous scoring and high-dimensional spaces, the integration of compensatory and noncompensatory models, the development of parameter estimation procedures, and multidimensional test equating.
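The two model classes contrasted in this abstract are commonly written as follows; these are the standard generic forms, not formulas taken from the paper:

\[
\text{compensatory:}\quad P(X_{ij}=1\mid\boldsymbol{\theta}_i)=\frac{1}{1+\exp\!\big[-\big(\textstyle\sum_{k=1}^{K} a_{jk}\theta_{ik}+d_j\big)\big]}
\]
\[
\text{noncompensatory:}\quad P(X_{ij}=1\mid\boldsymbol{\theta}_i)=\prod_{k=1}^{K}\frac{1}{1+\exp[-a_{jk}(\theta_{ik}-b_{jk})]}
\]

In the compensatory form, a deficit on one ability can be offset by a surplus on another because the abilities enter through a weighted sum; in the noncompensatory (product) form, a low success probability on any one component caps the overall probability of a correct response.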
4.
Recently, there has been increasing interest in reporting subscores. This paper examines reporting of subscores using multidimensional item response theory (MIRT) models (e.g., Reckase in Appl. Psychol. Meas. 21:25–36, 1997; C.R. Rao and S. Sinharay (Eds), Handbook of Statistics, vol. 26, pp. 607–642, North-Holland, Amsterdam, 2007; Beguin & Glas in Psychometrika 66:471–488, 2001). A MIRT model is fitted using a stabilized Newton–Raphson algorithm (Haberman in The Analysis of Frequency Data, University of Chicago Press, Chicago, 1974; Sociol. Methodol. 18:193–211, 1988) with adaptive Gauss–Hermite quadrature (Haberman, von Davier, & Lee in ETS Research Rep. No. RR-08-45, ETS, Princeton, 2008). A new statistical approach is proposed to assess when subscores using the MIRT model have any added value over (i) the total score or (ii) subscores based on classical test theory (Haberman in J. Educ. Behav. Stat. 33:204–229, 2008; Haberman, Sinharay, & Puhan in Br. J. Math. Stat. Psychol. 62:79–95, 2008). The MIRT-based methods are applied to several operational data sets. The results show that the subscores based on MIRT are slightly more accurate than subscore estimates derived by classical test theory.
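The classical-test-theory baseline in (ii) is Haberman's proportional-reduction-in-mean-squared-error criterion; the sketch below paraphrases the cited papers from memory, so treat the details as an approximation rather than the paper's exact statement:

\[
\mathrm{PRMSE}(Z)=\rho^2(s_t, Z)
\]

where \(s_t\) is the true subscore and Z is the observed predictor used. A subscore s is judged to have added value over the total score x when \(\rho^2(s_t,s)\) (essentially the subscore's reliability) exceeds \(\rho^2(s_t,x)\); the paper proposes an analogous comparison for MIRT-based subscore estimates.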
5.
Hooker, Finkelman, and Schwartzman (Psychometrika, 2009, in press) defined a paradoxical result as the attainment of a higher test score by changing answers from correct to incorrect and demonstrated that such results are unavoidable for maximum likelihood estimates in multidimensional item response theory. The potential for these results to occur leads to the undesirable possibility of a subject's best answers being detrimental. This paper considers the existence of paradoxical results in tests composed of item bundles when compensatory models are used. We demonstrate that paradoxical results can occur when bundle effects are modeled as nuisance parameters for each subject. However, when these nuisance parameters are modeled as random effects, or used in a Bayesian analysis, it is possible to design tests composed of many short bundles that avoid paradoxical results, and we provide an algorithm for doing so. We also examine alternative models for handling dependence between item bundles and show that using fixed dependency effects is always guaranteed to avoid paradoxical results.
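A generic testlet-style formulation of the bundle effects discussed above looks as follows; the paper's exact parameterization may differ, so this is only an illustrative sketch:

\[
\operatorname{logit} P(X_{ij}=1\mid\boldsymbol{\theta}_i)=\mathbf{a}_j^{\top}\boldsymbol{\theta}_i+d_j+\gamma_{i,b(j)}
\]

where b(j) indexes the bundle containing item j and \(\gamma_{i,b}\) is the subject-specific bundle effect. Treating each \(\gamma_{i,b}\) as a free nuisance parameter corresponds to the case in which paradoxical results can occur; treating \(\gamma_{i,b}\sim N(0,\sigma_b^2)\) as a random effect corresponds to the setting in which tests of many short bundles can be designed to avoid them.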
6.
Multivariate Behavioral Research, 2013, 48(2), 245–268
The componential structure of synonym tasks is investigated using confirmatory multidimensional two-parameter IRT models. It was hypothesized that an open synonym task is decomposable into generating synonym candidates and evaluating these candidate words with respect to their synonymy with the stimulus word. Two subtasks were constructed to identify these two components. Different confirmatory models were estimated both with TESTMAP and with NOHARM. The componential hypothesis was supported, but it was found that the generation subtask also involved some evaluation and that generation and evaluation were highly correlated.
7.
8.
International Journal of Testing, 2013, 13(2), 131–141
A hybrid procedure for number-correct scoring is proposed. The proposed scoring procedure is based on both classical true-score theory (CTT) and multidimensional item response theory (MIRT). Specifically, the hybrid scoring procedure uses test item weights based on MIRT, and the total test scores are computed based on CTT. Thus, what makes the hybrid scoring method attractive is that it accounts for the dimensionality of the test items while test scores remain easy to compute. Further, hybrid scoring does not require large sample sizes once the item parameters are known. Monte Carlo techniques were used to compare and contrast the proposed hybrid scoring method with three other scoring procedures. Results indicated that all scoring methods in this study generated estimated and true scores that were highly correlated. However, the hybrid scoring procedure had significantly smaller error variances between the estimated and true scores relative to the other procedures.
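The hybrid score is, in effect, a weighted number-correct score; the weighting function below is an assumption for illustration, since the abstract does not spell out how the MIRT parameters enter the weights:

\[
S_i=\sum_{j=1}^{J} w_j x_{ij},\qquad w_j=f(\hat{\mathbf{a}}_j)
\]

where \(x_{ij}\) is the scored response of examinee i to item j and each fixed weight \(w_j\) is some function f of the item's estimated MIRT discrimination vector \(\hat{\mathbf{a}}_j\) (for instance its norm). Once the \(w_j\) are fixed from a calibration sample, scoring new examinees is simple summation, which is why no large sample is needed at scoring time.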
9.
Psychometrika - For test development in the setting of multidimensional item response theory, the exploratory and confirmatory approaches lie on two ends of a continuum in terms of the loading and...
10.
We develop a latent variable selection method for multidimensional item response theory models. The proposed method identifies the latent traits probed by the items of a multidimensional test. Its basic strategy is to impose an \(L_{1}\) penalty term on the log-likelihood. The computation is carried out by the expectation–maximization algorithm combined with the coordinate descent algorithm. Simulation studies show that the resulting estimator provides an effective way of correctly identifying the latent structures. The method is applied to a real dataset involving the Eysenck Personality Questionnaire.
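In symbols, the estimator described above maximizes a penalized marginal log-likelihood; this restatement follows directly from the abstract:

\[
(\hat{\mathbf{A}},\hat{\mathbf{d}})=\arg\max_{\mathbf{A},\mathbf{d}}\;\Big\{\ell(\mathbf{A},\mathbf{d}\mid\mathbf{X})-\lambda\sum_{j=1}^{J}\sum_{k=1}^{K}|a_{jk}|\Big\}
\]

where \(a_{jk}\) is the loading of item j on latent trait k and \(\lambda\) is a tuning constant. Loadings shrunk exactly to zero identify the traits an item does not probe; the maximization is carried out by EM with coordinate descent in the M-step.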
11.
Residual analysis (e.g. Hambleton & Swaminathan, Item Response Theory: Principles and Applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of Item Response Theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large-sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
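For context, the classical standardized residual of Hambleton et al., against which the proposed residual is compared, has the familiar binomial form below; the paper's own residual, built from a ratio estimate of the item characteristic curve, is constructed differently:

\[
z_{jg}=\frac{\bar{x}_{jg}-P_j(\hat{\theta}_g)}{\sqrt{P_j(\hat{\theta}_g)\,[1-P_j(\hat{\theta}_g)]/n_g}}
\]

where examinees are grouped into ability intervals g, \(\bar{x}_{jg}\) is the observed proportion correct on item j in group g of size \(n_g\), and \(P_j\) is the fitted item characteristic curve.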
12.
This paper first analyzes the limitations of classical test theory, then explains the measurement principles of item response theory and its basic models on the basis of two key concepts, latent trait theory and the item characteristic curve, and introduces several basic IRT models. Finally, it briefly surveys seven current hot topics in applying IRT to guide the construction of large item banks and the development of various new types of tests.
13.
When categorical ordinal item response data are collected over multiple timepoints from a repeated measures design, an item response theory (IRT) modeling approach whose unit of analysis is an item response is suitable. This study proposes a few longitudinal IRT models and illustrates how a popular compensatory multidimensional IRT model can be utilized to formulate such longitudinal IRT models, which permits an investigation of ability growth at both individual and population levels. The equivalence of an existing multidimensional IRT model and those longitudinal IRT models is also elaborated so that one can make use of an existing multidimensional IRT model to implement the longitudinal IRT models.
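One standard way to set up such a longitudinal model, consistent with the abstract's description, is to let each timepoint's ability be one dimension of a compensatory MIRT model; this generic formulation is an illustration, not the paper's exact specification:

\[
P(X_{ijt}=1\mid\theta_{it})=\frac{1}{1+\exp[-a_j(\theta_{it}-b_j)]},\qquad(\theta_{i1},\dots,\theta_{iT})^{\top}\sim N(\boldsymbol{\mu},\boldsymbol{\Sigma})
\]

Population-level growth is read off the mean vector \(\boldsymbol{\mu}\) across timepoints, while individual growth is captured by each examinee's trajectory \(\theta_{i1},\dots,\theta_{iT}\).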
14.
Nathan T. Carter, Lindsey M. Kotrba, Christopher J. Lake. Journal of Business and Psychology, 2014, 29(2), 205–220
In using organizational surveys for decision-making, it is essential to consider measurement equivalence/invariance (ME/I), which addresses the question of whether score differences are attributable to differences in the latent variable we intend to measure, or to confounding differences in measurement properties. Due to the tendency for null results to remain unpublished, most articles have focused on findings of, and reasons for, violations of ME/I. On the other hand, little is available to practitioners and researchers concerning situations where ME/I can be expected to hold. This is especially disconcerting because the null is the desired result in such analyses and is what allows for unfettered observed-score comparisons. This special issue presents a unique opportunity to provide such a discussion using real-world examples from an organizational culture survey. In doing so, we hope to clear up confusion surrounding the concept of ME/I, when it can be expected, and how it relates to actual differences in scores. First, we review the basic tenets and past findings concerning ME/I, and discuss the item response theory differential item functioning framework used here. Next, we show ME/I being upheld using organizational survey data wherein violations of ME/I would reasonably not be expected (i.e., the null hypothesis was predicted and supported), and simulate the consequences of ignoring ME/I. Finally, we suggest a set of conditions wherein ME/I is likely to be upheld.
15.
Psychometrika - In item response theory (IRT), it is often necessary to perform restricted recalibration (RR) of the model: A set of (focal) parameters is estimated holding a set of (nuisance)...
16.
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
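The construction referred to above is the standard likelihood-ratio inversion; for a scalar parameter of interest \(\psi\) with nuisance parameters \(\boldsymbol{\eta}\):

\[
\mathrm{CI}_{1-\alpha}(\psi)=\Big\{\psi_0:\;2\big[\ell(\hat{\psi},\hat{\boldsymbol{\eta}})-\max_{\boldsymbol{\eta}}\ell(\psi_0,\boldsymbol{\eta})\big]\le\chi^2_{1,1-\alpha}\Big\}
\]

Unlike a Wald interval, this set need not be symmetric around \(\hat{\psi}\), and it transforms correctly under monotone reparameterizations of \(\psi\), which is one reason it tends to behave better for transformed parameters.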
17.
Jung Aa Moon, Sandip Sinharay, Madeleine Keehner, Irvin R. Katz. International Journal of Testing, 2020, 20(2), 122–145
The current study examined the relationship between test-taker cognition and psychometric item properties in multiple-selection multiple-choice and grid items. In a study with content-equivalent mathematics items in alternative item formats, adult participants' tendency to respond to an item was affected by the presence of a grid and variations of answer options. The results of an item response theory analysis were consistent with the hypothesized cognitive processes in alternative item formats. The findings suggest that seemingly subtle variations of item design could substantially affect test-taker cognition and psychometric outcomes, emphasizing the need for investigating item format effects at a fine-grained level.
18.
19.
An Item Analysis of the Chinese Soldier Personality Questionnaire Using Item Response Theory
Item response theory (IRT) was used to conduct an item analysis of the Chinese Soldier Personality Questionnaire (CSPQ). The CSPQ was administered by computer to 100,523 draft-age young men. A random sample of 2,676 examinees whose standard scores were below 70 on every dimension was designated the qualified group; 274 examinees who scored above 70 on at least one dimension and were judged unqualified after professional interviews formed the unqualified group; and 221 age-matched male schizophrenia patients in remission, recruited from psychiatric hospitals, formed the psychiatric group and completed the CSPQ. The data were analyzed with an IRT-based two-parameter logistic model. The results showed that examinees' ability estimates correlated significantly with their standard scores both before and after deleting items whose discrimination parameters fell outside the interval (0.30, 4.00), and that the IRT curves for the psychiatric group closely matched those of the unqualified group. The findings indicate that, with essentially equal measurement precision, applying IRT can reduce the number of items administered, improve testing efficiency, and, to some extent, differentiate examinees' trait levels more precisely.
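The two-parameter logistic model used in this analysis has the standard form:

\[
P_j(\theta)=\frac{1}{1+\exp[-Da_j(\theta-b_j)]}
\]

where \(a_j\) is the item discrimination (the screening interval (0.30, 4.00) above applies to \(a_j\)), \(b_j\) is the item difficulty, and D is a scaling constant (conventionally 1.7 when matching the normal ogive; the paper may use D = 1).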
20.
An Analysis of the Combined Raven's Test Using Item Response Theory
Combined Raven's Test data from 354 young men of varying ability levels were analyzed with the BILOG-MG 3.0 software, using marginal maximum likelihood estimation and the three-parameter logistic model. The results showed that most items of the Combined Raven's Test fit the three-parameter logistic model (6 items did not). The peak of the test information function was located between -3 and -2 on the difficulty scale, with a value of 16.82. A total of 18 items had information function peaks below 0.2. In terms of discrimination, all 72 items had discrimination values greater than 0.5, which is fairly ideal. The difficulty parameters showed that all items were rather easy, with the great majority below 0 and the highest only 1.01. Item difficulty was determined mainly by the level of operations required. The pseudo-guessing parameters ranged from 0.07 to 0.24. Overall, the analysis indicates that the Combined Raven's Test measures the intelligence of normal young adults with rather poor precision.
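The three-parameter logistic model and test information function referred to above have the standard forms:

\[
P_j(\theta)=c_j+\frac{1-c_j}{1+\exp[-Da_j(\theta-b_j)]},\qquad I(\theta)=\sum_{j}\frac{[P_j'(\theta)]^2}{P_j(\theta)\,[1-P_j(\theta)]}
\]

where \(c_j\) is the pseudo-guessing parameter (0.07 to 0.24 in this study), \(a_j\) and \(b_j\) are the discrimination and difficulty parameters, and \(I(\theta)\) is the test information function whose peak of 16.82 the abstract locates between -3 and -2 on the latent scale.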