Similar articles
20 similar articles found
1.
陈平  李珍  辛涛 《心理与行为研究》2011,9(2):125-132,153
Item exposure control is one of the key problems in cognitive diagnostic computerized adaptive testing (CD-CAT) that urgently needs to be solved. A Monte Carlo simulation was used to examine item bank usage under five common CD-CAT item selection strategies: the randomized method, the Kullback-Leibler (KL) information method, the Shannon entropy method, the posterior-weighted KL information method, and the combined posterior- and distance-weighted KL information method. Results showed that the four non-randomized strategies produced uneven item bank usage and high test overlap rates, and hence poor test security, while the Shannon entropy method always achieved the highest classification accuracy. Future work could incorporate item exposure control techniques from traditional CAT into CD-CAT item selection strategies.
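As a concrete illustration of the entropy-based strategy the study evaluates, here is a minimal sketch of Shannon-entropy item selection over a posterior on attribute patterns. It is not the authors' code: the precomputed response-probability table `p`, the variable names, and the binary-response assumption are all illustrative.

```python
# Sketch: Shannon-entropy (SHE) item selection in CD-CAT.
# p[j, c] = P(correct on item j | attribute pattern c), precomputed from
# a diagnostic model such as the DINA (an assumption for this sketch);
# `posterior` is the current posterior over all attribute patterns.
import numpy as np

def shannon_entropy(dist):
    dist = dist[dist > 0]
    return -np.sum(dist * np.log(dist))

def she_select(posterior, p, administered):
    """Pick the unused item minimizing the expected posterior entropy."""
    best_j, best_h = None, np.inf
    for j in range(p.shape[0]):
        if j in administered:
            continue
        p_correct = np.sum(posterior * p[j])          # marginal P(X_j = 1)
        h = 0.0
        for x, px in ((1, p_correct), (0, 1.0 - p_correct)):
            if px <= 0:
                continue
            like = p[j] if x == 1 else 1.0 - p[j]
            post_x = posterior * like                 # Bayes update for X_j = x
            post_x /= post_x.sum()
            h += px * shannon_entropy(post_x)         # expected entropy
        if h < best_h:
            best_j, best_h = j, h
    return best_j
```

The KL-information strategies differ only in the criterion: instead of minimizing expected entropy, they maximize a (possibly posterior- or distance-weighted) KL divergence between the response distributions of competing attribute patterns.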

2.
When a CD-CAT test must simultaneously diagnose examinees' solution strategies and cognitive states while also estimating their overall ability, the item selection procedure has to balance all three measurement goals. Combining the multi-strategy Shannon entropy (MSSHE) index with Fisher information in two different ways, this study proposes a multi-strategy DWI index (MSDWI) selection method and a two-step selection method that applies MSSHE first and Fisher information second. Based on the multi-strategy RRUM (MS-RRUM), the two methods were compared with random selection in simulations under different numbers of attributes. Results show that with 4 or 6 attributes, the two-step method performs best on strategy classification accuracy, cognitive-state classification accuracy, and ability estimation.

3.
Computerized multistage testing with cognitive diagnosis (CD-MST) combines cognitive diagnostic assessment (CDA) with multistage testing (MST). Because CD-MST adapts only a few times, how the initial-stage modules are assembled affects the classification accuracy of the whole test. Drawing on initial item selection methods from CD-CAT and on the characteristics of CDA and MST, seven methods for assembling initial-stage CD-MST modules are proposed: the random method, the item-selection-strategy method, the R* matrix method, the CTTID method, the CDI method, the CTTIDR* method, and the CDIR* method. A simulation study compared the classification accuracy of the seven methods under different levels of item quality. The results show that at the end of the initial stage, the methods incorporating the R* matrix, especially the CTTIDR* method, achieved significantly higher classification accuracy than the others; at the end of the whole test, CTTIDR* retained its advantage, with CDIR* and the R* matrix method performing similarly. The item-selection-strategy method had low accuracy at the end of the initial stage, even below the random method, but matched CDIR* and the R* matrix method by the end of the test. The four item-quality conditions strongly affected accuracy: the HD-HV bank yielded the highest accuracy, followed by HD-LV, then LD-HV, with LD-LV worst.

4.
Attribute-balanced item selection strategies for CD-CAT ensure that every cognitive attribute is measured by a sufficient number of items, which improves the accuracy of attribute classification; the traditional attribute-balanced strategies are the MMGDI and MGCDI methods. This paper improves on the traditional strategies that balance how often each attribute is measured and proposes four new attribute-balanced selection strategies: RMGDI, RMCDI, SE-RMGDI, and SE-RMCDI. The first two balance the number of times each attribute is measured; the latter two balance attribute measurement precision. Simulations show: (1) in fixed-length CD-CAT, MMGDI performs best on short tests, while on long tests SE-RMGDI and SE-RMCDI outperform the traditional attribute-balanced strategies; (2) in variable-length CD-CAT, RMGDI outperforms the traditional attribute-balanced strategies on classification accuracy, and all four new strategies outperform the traditional ones on measurement efficiency and the composite index.

5.
How a cognitive diagnostic test is assembled is crucial to the accuracy with which examinees' attribute mastery patterns are classified. Henson and Douglas's (2005) assembly method yields tests with low classification accuracy, in large part because it ignores hierarchical relations among attributes. This paper proposes an assembly method based on the attribute hierarchy: first, the set of candidate item classes is determined from the attribute hierarchy; next, item classes are chosen according to a newly constructed selection index; then the number of items in each item class is set according to how well the attributes discriminate among examinees, and the reachability matrix is included in the test Q-matrix. Simulations show that the new method improves classification accuracy substantially over the H&D method, and the new selection index greatly reduces computation time compared with the H&D index.

6.
涂冬波  蔡艳  戴海琦  丁树良 《心理学报》2012,44(11):1547-1553
Cognitive diagnostic models developed to date, at home and abroad, essentially handle only single-strategy test situations: they assume that all examinees adopt the same processing/solution strategy and thus ignore the diversity of processing strategies. Following de la Torre and Douglas's (2008) idea of representing multiple processing strategies with multiple Q-matrices, and combining the modified Q-matrix theory of 丁树良 et al. (2009) with the generalized distance discrimination method of 孙佳楠, 张淑梅, 辛涛 and 包珏 (2011), this study develops a new multi-strategy cognitive diagnosis method, the MSCD method. Monte Carlo simulations show that in single-strategy test situations, the traditional single-strategy methods and the MSCD method both achieve satisfactory and nearly identical classification accuracy; in multi-strategy situations, the traditional single-strategy methods classify poorly while the MSCD method remains satisfactory. Even with as many as five processing strategies, the MSCD method retains high marginal, pattern, and strategy classification accuracy. The MSCD method is thus reasonable and feasible; it provides methodological support for diagnosing processing strategies and extends the practical reach of cognitive diagnosis.

7.
In cognitive diagnostic situations that admit several solution strategies, each Q-matrix can represent one strategy, which extends the single-strategy reduced RUM (RRUM) to a multi-strategy RRUM (MS-RRUM). For CD-CAT under the MS-RRUM, a MAP parameter estimation method and a multi-strategy Shannon entropy (MSSHE) item selection method suited to multi-strategy situations were then developed. Comparing MSSHE selection with random selection under different numbers of attributes and test lengths shows that MSSHE is significantly better on both strategy and cognitive-state classification accuracy, with very satisfactory results in both respects, thereby achieving strategy diagnosis within CD-CAT.

8.
CD-CAT, the product of combining CDA with CAT, is well suited to classroom use and is an important tool for teachers' remedial instruction and students' self-directed learning. The initial-stage item selection method, a key component of CD-CAT, strongly influences classification accuracy. Building on existing research and on CDA item discrimination, this paper proposes four new initial-stage item selection methods: the CTTID, CDI, CTTIDR*, and CDIR* methods. Simulations show that in fixed-length CD-CAT with an HD-HV item bank, the pattern correct classification rate (PCCR) of the CTTIDR* method at the end of the initial stage is .2999 higher than the existing T-matrix method and .1707 higher than PWKL, with the same trend in the other banks; at the end of the whole test, CTTIDR* still has the highest accuracy. In variable-length CD-CAT with maximum posterior probability thresholds of .7, .8, and .9, CTTIDR* shortens the average test length by 2.6170, 2.2347, and 1.7470 items, respectively, relative to the T-matrix method.

9.
郭磊  郑蝉金  边玉芳 《心理学报》2015,47(1):129-140
Drawing on ideas from traditional computerized adaptive testing and on the characteristics of cognitive diagnosis, this study proposes four termination rules for variable-length CD-CAT: the standard error of attributes method (SEA), the difference of adjacent posterior probabilities method (DAPP), the halving algorithm (HA), and the hybrid method (HM). They were compared with the HSU and KL methods both without exposure control and under several exposure control conditions. Results: (1) the stricter the termination criterion, the longer the average test, the larger the percentage of tests stopped at the maximum length, and the higher the pattern classification accuracy. (2) Without exposure control, all four new rules performed well and were very close to the HSU method; as the preset maximum posterior probability increased or e decreased, pattern accuracy rose and average test length grew, but item bank usage was poor for all rules. (3) With item exposure control, bank usage improved greatly under all six variable-length rules while pattern accuracy stayed high, and different exposure control methods affected the rules differently; the relative-standard termination rule was especially sensitive to the exposure control method. (4) Overall, the SEA, HM, and HA methods performed essentially on par with HSU on all indices, followed by the KL and DAPP methods.
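To make the family of rules concrete, here is a hedged sketch of two variable-length stopping checks of the kind compared above. The exact definitions and thresholds used in the paper are not reproduced here; the function names, the gap-based reading of DAPP, and the default values are assumptions.

```python
# Sketch: two variable-length CD-CAT termination checks evaluated on the
# posterior over attribute patterns (threshold values are assumptions).
import numpy as np

def stop_max_posterior(posterior, threshold=0.8):
    """HSU-style rule: stop once some attribute pattern's posterior
    probability exceeds a preset value."""
    return posterior.max() >= threshold

def stop_adjacent_gap(posterior, delta=0.5):
    """A DAPP-like rule: stop once the gap between the largest and
    second-largest posterior probabilities is large enough."""
    top2 = np.sort(posterior)[-2:]
    return (top2[1] - top2[0]) >= delta
```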

10.
孙小坚  郭磊 《心理学报》2022,54(9):1137-1150
The response options of multiple-choice items carry extra diagnostic information. To make full use of it, this study proposes two nonparametric item selection strategies and variable-length termination rules for handling multiple-choice option information in cognitive diagnostic computerized adaptive testing (CD-CAT). Simulations show: (1) under fixed-length conditions, the two nonparametric strategies classify more accurately overall than the parametric strategies; (2) they use the item bank more evenly than the parametric strategies; (3) they reach higher classification accuracy under the two new variable-length termination rules; (4) both nonparametric strategies suit multiple-choice CD-CAT, and users may adopt either one.

11.
The area between two item characteristic curves
Formulas for computing the exact signed and unsigned areas between two item characteristic curves (ICCs) are presented. It is further shown that when the c parameters are unequal, the area between two ICCs is infinite. The significance of the exact area measures for item bias research is discussed. The author expresses his appreciation to Jeffrey A. Slinde, Stephen Steinhaus, Audrey Qualls-Payne, Ivo Molenaar, and two anonymous reviewers for their very helpful and constructive comments.
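For reference, the common-guessing case has a simple closed form; the following is stated from the usual presentation of these 3PL area results (notation assumed), not quoted from the page:

```latex
% Signed area between two 3PL ICCs sharing a guessing parameter c:
\text{SA} \;=\; (1 - c)\,(b_2 - b_1).
% When c_1 \neq c_2, the two curves differ by |c_1 - c_2| in the lower
% asymptote, so the unsigned-area integral over (-\infty, \infty)
% diverges -- the "infinite area" result noted above.
```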

12.
When scaling data using item response theory, valid statements based on the measurement model are only permissible if the model fits the data. Most item fit statistics used to assess the fit between observed item responses and the item responses predicted by the measurement model show significant weaknesses, such as the dependence of fit statistics on sample size and number of items. In order to assess the size of misfit and to thus use the fit statistic as an effect size, dependencies on properties of the data set are undesirable. The present study describes a new approach and empirically tests it for consistency. We developed an estimator of the distance between the predicted item response functions (IRFs) and the true IRFs by semiparametric adaptation of IRFs. For the semiparametric adaptation, the approach of extended basis functions due to Ramsay and Silverman (2005) is used. The IRF is defined as the sum of a linear term and a more flexible term constructed via basis function expansions. The group lasso method is applied as a regularization of the flexible term, and determines whether all parameters of the basis functions are fixed at zero or freely estimated. Thus, the method serves as a selection criterion for items that should be adjusted semiparametrically. The distance between the predicted and semiparametrically adjusted IRF of misfitting items can then be determined by describing the fitting items by the parametric form of the IRF and the misfitting items by the semiparametric approach. In a simulation study, we demonstrated that the proposed method delivers satisfactory results in large samples (i.e., N ≥ 1,000).
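A minimal sketch of the penalized model may help: a 2PL-style linear term plus a basis expansion, with a group-lasso penalty that either zeroes out all basis weights (the item keeps its parametric form) or frees them (the item is flagged as misfitting). The Gaussian-bump basis, the names, and the penalty weight are illustrative assumptions, not the authors' code.

```python
# Sketch: semiparametric IRF = logistic(linear term + flexible term),
# with a group-lasso penalty on the flexible term's coefficients.
import numpy as np
from scipy.special import expit

def make_basis(centers):
    """Simple Gaussian-bump basis over theta (an assumption; the paper
    uses Ramsay & Silverman-style basis function expansions)."""
    return [lambda t, c=c: np.exp(-(t - c) ** 2) for c in centers]

def irf(theta, a, b, gamma, basis):
    """P(X=1 | theta) with a linear 2PL-style part plus basis terms."""
    flex = sum(g * bk(theta) for g, bk in zip(gamma, basis))
    return expit(a * (theta - b) + flex)

def penalized_negloglik(a, b, gamma, theta, x, basis, lam):
    """Negative log-likelihood plus a group-lasso penalty: the L2 norm
    (not squared) can push the whole coefficient group to exactly zero."""
    gamma = np.asarray(gamma, dtype=float)
    p = np.clip(irf(theta, a, b, gamma, basis), 1e-10, 1 - 1e-10)
    nll = -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    return nll + lam * np.linalg.norm(gamma)
```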

13.
Information functions are used to find the optimum ability levels and maximum contributions to information for estimating item parameters in three commonly used logistic item response models. For the three and two parameter logistic models, examinees who contribute maximally to the estimation of item difficulty contribute little to the estimation of item discrimination. This suggests that in applications that depend heavily upon the veracity of individual item parameter estimates (e.g. adaptive testing or test construction), better item calibration results may be obtained (for fixed sample sizes) from examinee calibration samples in which ability is widely dispersed. This work was supported by Contract No. N00014-83-C-0457, project designation NR 150-520, from Cognitive Science Program, Cognitive and Neural Sciences Division, Office of Naval Research and Educational Testing Service through the Program Research Planning Council. Reproduction in whole or in part is permitted for any purpose of the United States Government. The author wishes to acknowledge the invaluable assistance of Maxine B. Kingston in carrying out this study, and to thank Charles Lewis for his many insightful comments on earlier drafts of this paper.
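The trade-off the abstract describes can be seen directly in the Fisher information an examinee at ability θ contributes to each item parameter; the following is standard 2PL information algebra (with scaling constant D), not an excerpt from the paper:

```latex
% With P = P(\theta) and Q = 1 - P under the 2PL:
I_b(\theta) \;=\; D^2 a^2\, P Q, \qquad
I_a(\theta) \;=\; D^2 (\theta - b)^2\, P Q.
% Examinees near \theta = b maximize information about b but contribute
% almost nothing to a, since (\theta - b)^2 \approx 0 there -- hence the
% case for widely dispersed calibration samples.
```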

14.
For dual-objective CD-CAT, six item discrimination indices (the discrimination power D, the general discrimination index GDI, the odds ratio OR, the 2PL discrimination a, the attribute discrimination index ADI, and the cognitive diagnostic index CDI) were each combined with the IPA method, yielding new item selection strategies. A simulation study compared their performance and also examined discrimination-stratified selection for controlling item exposure. Results show that the new methods all clearly improve knowledge-state classification accuracy and the precision of ability estimation, and that stratified selection improves item bank utilization well in every case. Overall, OR weighting significantly improves measurement precision, and OR-stratified selection significantly improves the uniformity of item exposure while maintaining measurement precision.

15.
In tailored testing, it is important to determine the optimal difficulty of the next item to present to the examinee. This paper shows that the difference that maximizes information for the three-parameter normal ogive response model is approximately 1.7 times the optimal difference b for the three-parameter logistic model. Under the normal model, calculation of the optimal difficulty for minimizing the Bayes risk is equivalent to maximizing an associated information function. The views expressed herein are those of the author and do not necessarily reflect those of the Department of the Navy.
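The logistic side of this comparison is the standard Birnbaum-type result for the three-parameter model (stated here from the usual 3PL information algebra, not quoted from the paper): item information is maximized when

```latex
% Optimal ability-difficulty difference under the 3PL with constant D:
\theta - b \;=\; \frac{1}{D a}\,\ln\!\left(\frac{1 + \sqrt{1 + 8c}}{2}\right),
% so for c > 0 the optimal item difficulty lies slightly below \theta;
% the optimal difference scales with 1/D, consistent with the 1.7
% factor between the normal-ogive and logistic metrics noted above.
```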

16.
An IRT model based on the Rasch model is proposed for composite tasks, that is, tasks that are decomposed into subtasks of different kinds. There is one subtask for each component that is discerned in the composite tasks. A component is a generic kind of subtask of which the subtasks resulting from the decomposition are specific instantiations with respect to the particular composite tasks under study. The proposed model constrains the difficulties of the composite tasks to be linear combinations of the difficulties of the corresponding subtask items, which are estimated together with the weights used in the linear combinations, one weight for each kind of subtask. Although the model does not belong to the exponential family, its parameters can be estimated using conditional maximum likelihood estimation. The approach is demonstrated with an application to spelling tasks. We thank Eric Maris for his helpful comments.
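The key constraint is easy to state; the notation below is assumed for this note, not copied from the paper:

```latex
% Difficulty of composite task i as a linear combination of the
% difficulties of its subtask items, one weight per component k:
\beta_i^{\text{comp}} \;=\; \sum_{k} w_k\, \beta_{ik},
% with the subtask difficulties \beta_{ik} and the component weights
% w_k estimated jointly by conditional maximum likelihood.
```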

17.
Owen (1975) proposed an approximate empirical Bayes procedure for item selection in computerized adaptive testing (CAT). The procedure replaces the true posterior by a normal approximation with closed-form expressions for its first two moments. This approximation was necessary to minimize the computational complexity involved in a fully Bayesian approach but is no longer necessary given the computational power currently available for adaptive testing. This paper suggests several item selection criteria for adaptive testing which are all based on the use of the true posterior. Some of the statistical properties of the ability estimator produced by these criteria are discussed and empirically characterized. Portions of this paper were presented at the 60th annual meeting of the Psychometric Society, Minneapolis, Minnesota, June, 1995. The author is indebted to Wim M. M. Tielen for his computational support.
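To illustrate the computational move described here, replacing Owen's closed-form normal approximation with the true posterior evaluated numerically, below is a minimal grid-based sketch under a 2PL likelihood with one posterior-based criterion (minimum expected posterior variance). The grid, the 2PL form, and all names are assumptions; the paper's own criteria are not reproduced.

```python
# Sketch: true-posterior ability estimation on a grid, plus one
# posterior-based item selection criterion.
import numpy as np
from scipy.special import expit

GRID = np.linspace(-4, 4, 161)   # quadrature grid for theta (assumption)

def posterior(responses, items, prior=None):
    """items: list of 2PL (a, b) pairs; responses: matching 0/1 list."""
    post = np.ones_like(GRID) if prior is None else prior.copy()
    for x, (a, b) in zip(responses, items):
        p = expit(a * (GRID - b))
        post *= p if x else (1.0 - p)
    return post / np.trapz(post, GRID)

def min_expected_posterior_variance(post, candidates):
    """Pick the candidate item minimizing the expected posterior
    variance of theta after observing its response."""
    best, best_v = None, np.inf
    for j, (a, b) in enumerate(candidates):
        p = expit(a * (GRID - b))
        p_marg = np.trapz(p * post, GRID)             # P(X_j = 1)
        v = 0.0
        for x, px in ((1, p_marg), (0, 1.0 - p_marg)):
            like = p if x else (1.0 - p)
            post_x = post * like                      # posterior given X_j = x
            post_x /= np.trapz(post_x, GRID)
            m = np.trapz(GRID * post_x, GRID)
            v += px * np.trapz((GRID - m) ** 2 * post_x, GRID)
        if v < best_v:
            best, best_v = j, v
    return best
```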

18.
A test theory using only ordinal assumptions is presented. It is based on the idea that the test items are a sample from a universe of items. The sum across items of the ordinal relations for a pair of persons on the universe items is analogous to a true score. Using concepts from ordinal multiple regression, it is possible to estimate the tau correlations of test items with the universe order from the taus among the test items. These in turn permit the estimation of the tau of total score with the universe. It is also possible to estimate the odds that the direction of a given observed score difference is the same as that of the true score difference. The estimates of the correlations between items and universe and between total score and universe are found to agree well with the actual values in both real and artificial data. Part of this paper was presented at the June, 1989, Meeting of the Psychometric Society. The authors wish to thank several reviewers for their suggestions. This research was mainly done while the second author was a University Fellow at the University of Southern California.

19.
The item response function (IRF) for a polytomously scored item is defined as a weighted sum of the item category response functions (ICRF, the probability of getting a particular score for a randomly sampled examinee of ability θ). This paper establishes the correspondence between an IRF and a unique set of ICRFs for two of the most commonly used polytomous IRT models (the partial credit models and the graded response model). Specifically, a proof of the following assertion is provided for these models: If two items have the same IRF, then they must have the same number of categories; moreover, they must consist of the same ICRFs. As a corollary, for the Rasch dichotomous model, if two tests have the same test characteristic function (TCF), then they must have the same number of items. Moreover, for each item in one of the tests, an item in the other test with an identical IRF must exist. Theoretical as well as practical implications of these results are discussed. This research was supported by Educational Testing Service Allocation Projects No. 79409 and No. 79413. The authors wish to thank John Donoghue, Ming-Mei Wang, Rebecca Zwick, and Zhiliang Ying for their useful comments and discussions. The authors also wish to thank three anonymous reviewers for their comments.
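In the notation this line of work typically uses (assumed here, not quoted from the paper), the correspondence concerns

```latex
% Polytomous IRF as a weighted sum of category response functions for
% an item with categories k = 0, 1, ..., m:
P(\theta) \;=\; \sum_{k=0}^{m} w_k\, P_k(\theta),
% with weights w_k typically the category scores (w_k = k), making
% P(\theta) the item's expected score function.
```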

20.
Wendy M. Yen  Psychometrika, 1985, 50(4): 399-410
When the three-parameter logistic model is applied to tests covering a broad range of difficulty, there frequently is an increase in mean item discrimination and a decrease in variance of item difficulties and traits as the tests become more difficult. To examine the hypothesis that this unexpected scale shrinkage effect occurs because the items increase in complexity as they increase in difficulty, an approximate relationship is derived between the unidimensional model used in data analysis and a multidimensional model hypothesized to be generating the item responses. Scale shrinkage is successfully predicted for several sets of simulated data. The author is grateful to Robert Mislevy for kindly providing a copy of his computer program, RESOLVE.
