Similar Literature
20 similar documents found (search time: 31 ms)
1.
马洁  刘红云 《心理科学》2018,(6):1374-1381
Using empirical data from a high-school English reading test, this study compared the two-parameter logistic model (2PL-IRT) with two-parameter logistic testlet models containing different numbers of testlets (2PL-TRT) to examine how the number of testlets affects parameter estimation and model fit. The results showed that: (1) the 2PL-IRT model produced relatively large bias in the ability estimates of examinees with abilities between -1.50 and 0.50; (2) entering testlets whose testlet effect exceeded 0.50 into the model as locally independent items led to underestimated discrimination parameters for some items and overestimated difficulty parameters for most items; (3) the larger the testlet effect, the larger the bias in the item parameter estimates obtained by treating the testlet as locally independent items.
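For orientation, the two models being contrasted can be written out; this is a minimal sketch using standard parameterizations (the paper's exact notation and sign conventions are assumptions here). The 2PL-IRT model is

P(Y_{ij}=1 \mid \theta_i) = \frac{\exp[a_j(\theta_i - b_j)]}{1 + \exp[a_j(\theta_i - b_j)]},

while the 2PL-TRT model adds a person-specific effect \gamma_{i,d(j)} of the testlet d(j) containing item j:

P(Y_{ij}=1 \mid \theta_i, \gamma_{i,d(j)}) = \frac{\exp[a_j(\theta_i - b_j - \gamma_{i,d(j)})]}{1 + \exp[a_j(\theta_i - b_j - \gamma_{i,d(j)})]}.

The size of the testlet effect is commonly quantified by \mathrm{Var}(\gamma_{i,d(j)}); presumably the 0.50 threshold in finding (2) refers to a measure of this kind.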

2.
吴锐  丁树良  甘登文 《心理学报》2010,42(3):434-442
Testlets appear increasingly often in examinations of all kinds, and equating a test that contains testlets with a standard IRT model may distort the equating results because the local dependence within testlets is ignored. To address this problem, we equated with the testlet-based 2PTM model and the IRT characteristic-curve method, using the error of the estimated equating coefficients as the evaluation standard and the Wilcoxon signed-rank test as the basis for comparison, and ran extensive Monte Carlo simulations under several different conditions. The results showed that the 2PTM, which accounts for local dependence, produced significantly smaller equating errors than the 2PLM in the vast majority of conditions. In addition, six different equating criteria were applied to 2PTM equating, and their relative merits under different conditions were evaluated.
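As background on the "IRT characteristic-curve equating" used above: in characteristic-curve (Stocking-Lord type) equating, the equating coefficients A and B of the scale transformation \theta^{*} = A\theta + B are chosen to minimize the squared distance between the test characteristic curves of the two forms, with the second form's item parameters rescaled as a_j \to a_j/A and b_j \to A b_j + B. A generic statement of the criterion (the paper's six specific criteria are not reproduced here):

\min_{A,B} \sum_{\theta} \Big[ \sum_{j} P_j(\theta; a_j, b_j) - \sum_{j} P_j(\theta; a_j^{*}/A,\; A b_j^{*} + B) \Big]^{2}.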

3.
Passage-based reading tests play an increasingly important role in language-subject examinations and language proficiency testing. A passage-based reading test is a typical testlet-based test and therefore requires statistical methods that can handle testlet effects; accordingly, differential item functioning (DIF) analysis for such tests also requires matching DIF detection methods. Current DIF methods that can handle testlet effects fall into two categories: adapted testlet DIF methods and DIF methods based on testlet response models. Because the latter are cumbersome to implement, they have so far remained at the stage of theoretical discussion. This study introduces adapted testlet DIF methods and their effect-size indices into DIF detection for passage-based reading tests, which solves the problem of detecting and measuring DIF in such tests; the effect-size indices provide an important basis for deciding how to handle testlet items that exhibit DIF. We first applied a non-testlet DIF method and an adapted testlet DIF method to one test paper; the comparison demonstrated the necessity and advantages of testlet DIF detection. We then applied four representative adapted testlet DIF methods to a reading achievement test. The results show that, for passage-based reading tests, DIF methods that can handle testlet effects have substantial advantages over traditional DIF methods.

4.
刘玥  刘红云 《心理学报》2012,44(2):263-275
Testlet models can correct the parameter-estimation bias that arises in traditional IRT models when the assumption of local independence among items is violated. To explore the applicability of the testlet random-effects model, a Monte Carlo simulation study fitted the data with a 2-PL Bayesian testlet random-effects model (BTRM) and a 2-PL Bayesian model (BM), considering the effects of testlet effect size, testlet length, test length, and the proportion of locally independent items. The results showed that: (1) the BTRM was unaffected by testlet effect size and testlet length, whereas the estimation error of the BM increased with both; (2) the BTRM is fairly general, and using it reduces estimation error when the testlet effect is large, the testlets are long, and the test is long; however, when the number of items is small, both models yield large errors in ability estimates; (3) when the proportion of locally independent items is large, the two models produce similar parameter estimates.
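A minimal data-generation sketch for this kind of Monte Carlo study (hypothetical settings, not the authors' code; the design factors varied in the abstract — testlet-effect variance, testlet length, and test length — correspond to gamma_var, testlet_len, and n_items below):

import numpy as np

rng = np.random.default_rng(42)
n_persons, n_items, testlet_len = 1000, 20, 4   # hypothetical condition
n_testlets = n_items // testlet_len
testlet_of = np.repeat(np.arange(n_testlets), testlet_len)  # item -> testlet map

theta = rng.normal(0, 1, n_persons)             # abilities
a = rng.lognormal(0, 0.3, n_items)              # discriminations
b = rng.normal(0, 1, n_items)                   # difficulties
gamma_var = 0.5                                 # testlet-effect variance (design factor)
gamma = rng.normal(0, np.sqrt(gamma_var), (n_persons, n_testlets))

# 2PL testlet model: logit P = a_j * (theta_i - b_j + gamma_{i,d(j)});
# some authors write -gamma, which only flips the sign convention.
logit = a * (theta[:, None] - b + gamma[:, testlet_of])
prob = 1.0 / (1.0 + np.exp(-logit))
responses = (rng.random((n_persons, n_items)) < prob).astype(int)

Fitting the BTRM and the BM to such data (e.g., via an MCMC sampler) and comparing the recovery of theta, a, and b is the comparison the abstract describes.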

5.
This paper applies the multidimensional testlet response model (MTRM) to differential item functioning (DIF) detection in multidimensional testlet-based tests and investigates, through simulation and empirical studies, the accuracy and effectiveness of the MTRM for DIF detection and its influencing factors, in comparison with the multidimensional random coefficients multinomial logit model (MRCMLM), which ignores testlet effects. The results show that: (1) as sample size increases, the MTRM's detection rate for true DIF increases and its false-positive rate decreases, with more stable results across conditions; (2) compared with the MRCMLM, the MTRM-based DIF model has higher detection rates and is less affected by other factors; (3) when testlet effects in the test are small, the two models give similar results, but the MTRM fits the data better.

6.
A passage-based reading test is a typical testlet-based test, and differential item functioning (DIF) detection for such tests requires matching methods. DIF methods based on testlet response models are the methods that genuinely handle testlet effects; they provide a DIF effect measure for every item within a testlet and are theoretically the strongest class of testlet DIF methods, with the Rasch testlet DIF method being the most widely used. This study applied the Rasch testlet DIF method to DIF detection in passage-based reading tests, analyzing a reading achievement test. The results showed items with significant DIF on some sub-dimensions of the content and ability dimensions, and the study offers suggestions for the further revision and construction of the test from the perspective of test fairness. The results of the Rasch testlet DIF method were further compared with those of a DIF method based on the traditional Rasch model and of adapted testlet DIF methods; the comparison demonstrated the necessity and advantages of testlet DIF detection. The findings indicate that, for passage-based reading tests, testlet DIF methods that genuinely handle testlet effects have greater theoretical advantages and are more valuable for the construction and quality improvement of reading tests.

7.
For testlet response data, traditional item response theory (IRT) models are often not appropriate due to local dependence presented among items within a common testlet. Several testlet‐based IRT models have been developed to model examinees' responses. In this paper, a new two‐parameter normal ogive testlet response theory (2PNOTRT) model for dichotomous items is proposed by introducing testlet discrimination parameters. A Bayesian model parameter estimation approach via a data augmentation scheme is developed. Simulations are conducted to evaluate the performance of the proposed 2PNOTRT model. The results indicated that the estimation of item parameters is satisfactory overall from the viewpoint of convergence. Finally, the proposed 2PNOTRT model is applied to a set of real testlet data.
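One plausible reading of the proposed model (an assumption on our part; the paper's exact parameterization is not quoted in this abstract): the standard two-parameter normal ogive testlet model is

P(Y_{ij}=1 \mid \theta_i) = \Phi\big(a_j\theta_i - b_j + \gamma_{i,d(j)}\big),

where \Phi is the standard normal CDF and \gamma_{i,d(j)} is the person-by-testlet effect; introducing a testlet discrimination parameter would weight this effect, e.g. with a term of the form c_{d(j)}\,\gamma_{i,d(j)}. Data augmentation for normal ogive models typically introduces latent normal variables Z_{ij} with Y_{ij}=1 exactly when Z_{ij}>0, which makes the Gibbs updates conjugate.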

8.
Testlets are a common item format in many tests. Because the items within a testlet are dependent on one another to some degree, the local independence assumption is violated, and parameter estimation with ordinary item response models shows considerable bias. Testlet response theory incorporates the person-testlet interaction into the model and thereby resolves the dependence among items. This paper reviews the development, basic principles, and related research of testlet response theory and applies it to a middle-school English examination. Compared with item response theory, the results show that: (1) the parameter estimates of the testlet response model and the item response model are strongly correlated, especially the ability and difficulty parameters; (2) the confidence intervals of the testlet response model are narrower for all parameters than those of the item response model, i.e., the testlet response model estimates with higher precision than the item response model.

9.
Although multistage testing (MST) retains the advantages of adaptive testing while allowing test developers to assemble each module and panel under specified constraints, local item dependence (LID) that arises when potentially relevant factors are overlooked during test construction can also harm MST results. To investigate the harm of LID to MST, this study first introduces MST, LID, and related concepts, and then examines the problem through a comparative simulation study. The results show that LID reduces the precision of ability estimation, although the estimation bias remains small, and that the harm is not limited to any particular routing rule. To remove the harm, a testlet response model was then used as the analysis model during MST administration; this removed part of the harm, but its effect was limited. These findings indicate, on the one hand, that the harm LID does to the precision of ability estimation in MST deserves attention and, on the other hand, that methods for removing LID-induced harm in MST merit further study.

10.
The field of cognitive diagnosis still lacks research on diagnostic analysis of tests that contain testlets: existing cognitive diagnosis models cannot reasonably and effectively handle test data with testlet effects, and existing testlet response models cannot diagnose examinees' knowledge structures or cognitive processes. To address this problem, this paper introduces a multidimensional testlet-effect vector parameter into the linear logistic model and develops two multidimensional testlet-effect cognitive diagnosis models, one with compensatory and one with non-compensatory relationships among attributes. Simulation results show that the new models are sound and effective. Comparisons with the linear logistic model and the DINA model indicate that: (1) when response data contain testlet effects, ignoring them biases the item parameter estimates and lowers the classification accuracy for the target attributes; (2) the new models are more general: even when the response data contain no testlet effects, analyzing the test with the new models still yields good item parameter estimates without reducing the classification accuracy for the target attributes. Overall, the new models both perform cognitive diagnosis and handle testlet effects effectively.
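A sketch of what the compensatory variant could look like (our notation; the paper describes the testlet effect as a vector parameter, which the scalar \gamma below simplifies): appending a testlet effect to a linear logistic (compensatory) diagnostic model gives

\mathrm{logit}\, P(Y_{ij}=1 \mid \boldsymbol{\alpha}_i) = \lambda_{j0} + \sum_{k} \lambda_{jk}\, q_{jk}\, \alpha_{ik} + \gamma_{i,d(j)},

where \alpha_{ik} \in \{0,1\} records mastery of attribute k, q_{jk} is the Q-matrix entry for item j, and \gamma_{i,d(j)} is the effect of the testlet containing item j. A non-compensatory counterpart would replace the additive attribute term with a conjunctive one, e.g. \prod_k \alpha_{ik}^{q_{jk}}.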

11.
In psychological and educational measurement, dichotomously scored items and testlet items are two common item types, and tests mixing the two have important practical applications. Because an examinee's latent ability often does not match the item difficulty, aberrant responses frequently occur, and these aberrant responses reduce the accuracy of latent trait estimation in IRT. Simulation experiments show that a robust estimation method for the mixed dichotomous-and-testlet IRT model yields more accurate latent trait estimates than maximum likelihood estimation when outliers are present, and can meet the needs of practical testing.

12.
A Multidimensional Testlet-Effect Rasch Model
First, this paper clarifies the essence of a "testlet": a set of items sharing a common stimulus. On this basis, testlet effects are classified into within-item unidimensional testlet effects and within-item multidimensional testlet effects. Second, dichotomous and polytomous multidimensional testlet-effect Rasch models are developed within the Rasch framework, in order to handle within-item multidimensional testlet effects properly. Finally, simulation results show that the new models are sound and effective. Comparisons with the Rasch testlet model and the partial credit model indicate that: (1) when a test contains within-item multidimensional testlet effects, separating out only the obvious bundled testlet effect while ignoring other latent testlet effects still biases the parameter estimates and may even overestimate test reliability; (2) the new models are more general: even when the response data contain no testlet effects or only within-item unidimensional testlet effects, analyzing the test with the new models still yields good parameter estimates.
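For orientation, the dichotomous Rasch testlet model that the new model generalizes has the standard form (assumed here rather than quoted from the paper)

\mathrm{logit}\, P(Y_{ij}=1 \mid \theta_i) = \theta_i - b_j + \gamma_{i,d(j)},

with one testlet-effect dimension per testlet. The within-item multidimensional extension described in the abstract would let a single item load on several testlet-effect dimensions at once, e.g. \theta_i - b_j + \sum_{d} w_{jd}\, \gamma_{id} with item-by-dimension loadings w_{jd} (hypothetical notation).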

13.
Cognitive-Element Response Theory: Applying IRT Directly to Polytomously Scored Items
缪源  李绍珠 《心理科学》2000,23(2):196-199
Item response theory for dichotomously (0-1) scored tests has been widely studied and applied, but many tests contain polytomously scored items, so IRT needs to be extended to such cases. From the perspective of cognitive theory, each dichotomous item and each scoring point of a polytomous item can equally be regarded as a set of knowledge points, called a cognitive element. From the relations among cognitive elements, the probability that an examinee gives a particular answer to an item can be determined, so IRT methods can be applied directly to tests containing polytomous items without invoking any additional assumptions. The paper applies this theory to several test samples, and the results show that the approach is feasible.

14.
This article offers some solutions used in the assembly of the computerized Uniform Certified Public Accountancy (CPA) licensing examination as practical alternatives for operational programs producing large numbers of forms. The Uniform CPA examination was offered as an adaptive multistage test (MST) beginning in April of 2004. Examples of automated assembly using mixed integer programming (MIP) solutions in Optimization Programming Language software (OPL Studio 3.6.1©, ILOG Inc., Gentilly, France) illustrate MST design features and implementation strategies that can be generalized to other testing programs. This article affords a compromise between the best possible combination of test content and sustainability and security over time by linear programming techniques that make use of mixed integer optimization algorithms. Relative optimization functions are formulated to ensure a variety of constraints are always met for "testlets" of differing difficulties to form panels for the MST design. Technical information is discussed to assist other practitioners and researchers with test assembly optimization problems. Some practical consequences from the selection of statistical targets on testlet exposure are illustrated. These results are described with respect to the fundamental principles of automated test assembly and continuous administration based on MST when testlet or item exposure and inventory rotation are critical concerns.
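A generic flavor of such a mixed integer programming formulation (illustrative only; the CPA program's actual objective and constraints are not reproduced here): with binary decision variables x_j indicating whether item j enters a testlet,

\max_{x} \sum_{j} I_j(\theta_0)\, x_j \quad \text{s.t.} \quad \sum_{j \in C_k} x_j \ge n_k \;\; \forall k, \qquad \sum_{j} x_j = L, \qquad x_j \in \{0,1\},

where I_j(\theta_0) is item j's Fisher information at a target ability \theta_0, the C_k are content categories with minimum counts n_k, and L is the testlet length; easy, medium, and hard testlets are obtained by placing \theta_0 at different points of the ability scale.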

15.
A method of the IRT observed-score equating using chain equating through a third test without equating coefficients is presented with the assumption of the three-parameter logistic model. The asymptotic standard errors of the equated scores by this method are obtained using the results given by M. Liou and P.E. Cheng. The asymptotic standard errors of the IRT observed-score equating method using a synthetic examinee group with equating coefficients, which is a currently used method, are also provided. Numerical examples show that the standard errors by these observed-score equating methods are similar to those by the corresponding true score equating methods except in the range of low scores. The author is indebted to Michael J. Kolen for access to the real data used in this article and anonymous reviewers for their corrections and suggestions on this work.
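In notation, chain equating through a third test Z composes two single links: the score transformation from form X to form Y is

e_{X \to Y}(x) = e_{Z \to Y}\big(e_{X \to Z}(x)\big),

so no equating coefficients (scale transformation constants) are required, which matches the method described above.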

16.
In this study, we contrast results from two differential item functioning (DIF) approaches (manifest and latent class) by the number of items and sources of items identified as DIF using data from an international reading assessment. The latter approach yielded three latent classes, presenting evidence of heterogeneity in examinee response patterns. It also yielded more DIF items with larger effect sizes and more consistent item response patterns by substantive aspects (e.g., reading comprehension processes and cognitive complexity of items). Based on our findings, we suggest empirically evaluating the homogeneity assumption in international assessments because international populations cannot be assumed to have homogeneous item response patterns. Otherwise, differences in response patterns within these populations may be under-detected when conducting manifest DIF analyses. Detecting differences in item responses across international examinee populations has implications on the generalizability and meaningfulness of DIF findings as they apply to heterogeneous examinee subgroups.

17.
When measuring latent traits with a hierarchical structure, standard item response models estimate both item and ability parameters inefficiently. Multidimensional item response models estimate first-order traits efficiently but do not take the hierarchy among traits into account, so they are also unsuitable for hierarchically structured traits. Higher-order item response models, by contrast, not only estimate item and ability parameters accurately and efficiently for such traits but also yield the higher-order and lower-order traits simultaneously. Existing higher-order item response models include the higher-order DINA model, the higher-order two-parameter normal ogive hierarchical model, the higher-order logistic model, the higher-order item response model for polytomous items, and the higher-order testlet model. Future research on higher-order item response models should address multilevel higher-order item response models, higher-order item response models for within-item multidimensionality, and higher-order cognitive diagnosis models.
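The hierarchy these models exploit can be stated compactly (standard higher-order IRT notation, assumed here): each first-order trait \theta_{id} loads on a single higher-order trait \theta_i^{(2)},

\theta_{id} = \lambda_d\, \theta_i^{(2)} + \varepsilon_{id}, \qquad \varepsilon_{id} \sim N(0,\, 1-\lambda_d^{2}),

and the first-order traits then enter an ordinary IRT model for the items measuring dimension d, so higher- and lower-order traits are estimated jointly.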

18.
In item response theory (IRT), the invariance property states that item parameter estimates are independent of the examinee sample, and examinee ability estimates are independent of the test items. While this property has long been established and understood by the measurement community for IRT models, the same cannot be said for diagnostic classification models (DCMs). DCMs are a newer class of psychometric models that are designed to classify examinees according to levels of categorical latent traits. We examined the invariance property for general DCMs using the log-linear cognitive diagnosis model (LCDM) framework. We conducted a simulation study to examine the degree to which theoretical invariance of LCDM classifications and item parameter estimates can be observed under various sample and test characteristics. Results illustrated that LCDM classifications and item parameter estimates show clear invariance when adequate model data fit is present. To demonstrate the implications of this important property, we conducted additional analyses to show that using pre-calibrated tests to classify examinees provided consistent classifications across calibration samples with varying mastery profile distributions and across tests with varying difficulties.
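For reference, the LCDM item response function the study is built on has the standard form

\mathrm{logit}\, P(Y_{ij}=1 \mid \boldsymbol{\alpha}_i) = \lambda_{j,0} + \boldsymbol{\lambda}_j^{\top}\, \mathbf{h}(\boldsymbol{\alpha}_i, \mathbf{q}_j),

where \mathbf{h}(\cdot) collects the main effects and interactions of the attributes in \boldsymbol{\alpha}_i that item j's Q-matrix row \mathbf{q}_j requires. Invariance here means the \lambda estimates and the resulting classifications should not depend on the calibration sample when the model fits, which is what the simulation checks.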

19.
The Non-Equivalent groups with Anchor Test (NEAT) design involves missing data that are missing by design. Three nonlinear observed score equating methods used with a NEAT design are the frequency estimation equipercentile equating (FEEE), the chain equipercentile equating (CEE), and the item-response-theory observed-score-equating (IRT OSE). These three methods each make different assumptions about the missing data in the NEAT design. The FEEE method assumes that the conditional distribution of the test score given the anchor test score is the same in the two examinee groups. The CEE method assumes that the equipercentile functions equating the test score to the anchor test score are the same in the two examinee groups. The IRT OSE method assumes that the IRT model employed fits the data adequately, and the items in the tests and the anchor test do not exhibit differential item functioning across the two examinee groups. This paper first describes the missing data assumptions of the three equating methods. Then it describes how the missing data in the NEAT design can be filled in a manner that is coherent with the assumptions made by each of these equating methods. Implications on equating are also discussed.
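The three assumptions can be stated compactly (standard NEAT notation, assumed here: X is the test score, A the anchor score, P and Q the two examinee populations):

FEEE: f(x \mid a, P) = f(x \mid a, Q), i.e., the conditional distribution of the test score given the anchor score is population invariant;
CEE: e_{X \to A}^{P} = e_{X \to A}^{Q}, i.e., the equipercentile link from test to anchor is population invariant;
IRT OSE: one IRT model fits both groups adequately, with no DIF on the test or anchor items.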

20.