Similar Documents
19 similar documents found (search time: 828 ms)
1.
Piecewise growth mixture models (PGMM) capture both stage-like change and population heterogeneity in developmental processes, and are widely applied in research on ability development, behavioral development and intervention, and clinical psychology. A PGMM can be specified within either the structural equation modeling framework or the random-coefficients modeling framework, and its parameters are usually estimated by maximum likelihood via the EM algorithm or by Bayesian inference via Markov chain Monte Carlo simulation. Sample size, the number of measurement occasions, and the separation between latent classes all have a marked impact on the model and its parameter estimates. Future work should compare PGMM more systematically with other growth models, and examine, within the same or different modeling frameworks, how data characteristics and class properties affect the two estimation methods.

2.
This paper first runs simulation experiments estimating IRT model parameters with the Markov chain Monte Carlo (MCMC) algorithm and the EM algorithm and compares their estimation precision; it then improves the three-parameter logistic (3PL) model based on an analysis of its estimation precision and estimates the parameters of the modified model. The results show that MCMC estimates IRT model parameters more precisely than EM, with an especially clear advantage for the 3PL model; with small samples, MCMC still estimates the 3PL parameters reasonably well, at a precision slightly below that of the 2PL model; the low determinacy of the 3PL item parameters is the main reason its estimation precision falls slightly below the 2PL model's; and the improved model raises the determinacy of the item parameters and thereby yields better estimation precision.

3.
Dichotomously scored items and testlet-based items are two common item types in psychological and educational measurement, and tests mixing the two have important practical applications. Because an examinee's latent ability and an item's difficulty may be mismatched, aberrant responses often arise, and they degrade the accuracy of latent-trait estimation in IRT. Simulation experiments show that, when outliers are present, a robust estimation method for the mixed dichotomous-and-testlet IRT model estimates examinees' latent traits more accurately than maximum likelihood estimation and can meet the needs of operational testing.

4.
Warm's Weighted Maximum Likelihood Estimation of the Latent Trait Parameter in the Four-Parameter Logistic Model (Total citations: 1; self: 0; other: 1)
孟祥斌  陶剑  陈莎莉 《心理学报》2016,(8):1047-1056
Taking the four-parameter logistic (4PL) model as its subject, this paper applies Warm's weighted maximum likelihood technique to derive a weighted maximum likelihood estimator (WMLE) of the latent trait parameter for the 4PL model, and verifies its properties through simulation. The results show that, compared with the usual maximum likelihood estimator and the expected a posteriori estimator, the WMLE has markedly smaller bias and good parameter recovery. Moreover, when tests are short and item discriminations are low, the WMLE retains good statistical properties and its advantage becomes even more pronounced.
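As an illustration of the quantities involved, here is a minimal Python sketch of the 4PL response function and a grid-search trait estimate that maximizes the likelihood weighted by the square root of the test information, in the spirit of Warm's correction. The exact 4PL weighting derived in the paper may differ, and all parameter values below are hypothetical.

```python
import numpy as np

def p_4pl(theta, a, b, c, d):
    """4PL probability of a correct response: guessing floor c, slip ceiling d."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

def wle_theta(x, a, b, c, d, grid=None):
    """Grid-search trait estimate maximizing L(theta) * sqrt(I(theta)),
    a simplified stand-in for Warm's weighted MLE (illustrative only)."""
    if grid is None:
        grid = np.linspace(-4.0, 4.0, 801)
    th = grid[:, None]                       # (G, 1), broadcast against items
    p = p_4pl(th, a, b, c, d)                # (G, J)
    # item information: (dP/dtheta)^2 / (P * (1 - P)), with the 4PL derivative
    dp = a * (p - c) * (d - p) / (d - c)
    info = (dp ** 2 / (p * (1.0 - p))).sum(axis=1)
    loglik = (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik + 0.5 * np.log(info))]
```

The information weight pulls the estimate toward the interior of the scale, which is why the WMLE avoids the extreme estimates that plain ML produces for all-correct or all-incorrect patterns.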

5.
Most current IRT models that incorporate response times handle only dichotomous (0-1) response data, which greatly limits their practical use. Building on traditional dichotomous response-time IRT models, this paper develops a polytomous response-time model. Within a hierarchical modeling framework, the generalized partial credit model (GPCM) and a lognormal model are combined into a joint polytomous IRT model with response times (denoted JRT-GPCM here), and a fully Bayesian MCMC algorithm is used for parameter estimation. Two studies examine the new model's feasibility and practical use: Study 1 is a simulation experiment, and Study 2 applies the model to the Neuroticism subscale of a Big Five personality inventory. Study 1 shows that the JRT-GPCM is estimated with high precision and is fairly robust. Study 2 finds a positive correlation between examinees' latent traits and their response speed, and supports the "distance-difficulty hypothesis" of Ferrando and Lorenzo-Seva (2007): the farther an examinee's latent trait lies from an item's difficulty threshold, the more time the examinee spends answering the item. Overall, this work provides new methodological support for exploiting response-time information in psychological and educational measurement.
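The lognormal response-time component used in such hierarchical frameworks can be sketched as a short Python simulation; the parameter names and values below are illustrative, not those of the JRT-GPCM itself.

```python
import numpy as np

rng = np.random.default_rng(2024)
n_persons, n_items = 200, 12

tau = rng.normal(0.0, 0.4, n_persons)   # person speed (higher = faster)
beta = rng.normal(0.8, 0.3, n_items)    # item time intensity
sigma = 0.5                             # residual SD on the log scale

# lognormal RT component: log T_ij = beta_j - tau_i + eps_ij
log_t = beta[None, :] - tau[:, None] + rng.normal(0.0, sigma, (n_persons, n_items))
rt = np.exp(log_t)
```

Because speed enters with a negative sign on the log scale, faster examinees produce systematically shorter times on every item, which is the structure a joint model exploits when it correlates speed with the latent trait.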

6.
涂冬波  蔡艳  戴海琦  丁树良 《心理科学》2011,34(5):1189-1194
IRT offers many measurement models, each suited to data with different characteristics, so practitioners should choose an appropriate IRT model for the data at hand. China administers examinations and assessments on a very large scale, with a rich variety of item formats; in practice a single IRT model can rarely capture the features of all the data, in which case several IRT models can be combined (a "mixed model") to achieve the best fit. This paper studies the rationale and principles of the mixed model, the implementation of its parameter estimation, and its performance, and finds: (1) the authors' mixed-model estimation program Mix_Tu recovers parameters well, on a par with the well-known measurement software Parscale; (2) under "aberrant items", Mix_Tu's estimates of parameters b and c are more affected by the degree of aberrance than Parscale's, its estimate of parameter a is less affected, and the two programs are comparable on theta; (3) under "aberrant examinees", Mix_Tu's estimates of all parameters are less affected by the degree of aberrance than Parscale's, so Mix_Tu is the more robust of the two.

8.
A New Method for IRT Model Parameter Estimation: The MCMC Algorithm (Total citations: 1; self: 0; other: 0)
This study examines how the MCMC algorithm can be implemented for IRT model parameter estimation and how precise its estimates are. Simulating a range of conditions (few examinees and few items; moderate numbers of both; many examinees and many items; varying the number of items with the examinees and their parameters fixed; varying the number of examinees with the items and their parameters fixed), it assesses the precision of MCMC estimation for the two- and three-parameter logistic models and compares it with the widely used Bilog program (E-M algorithm). The simulations show that under all of these conditions the MCMC algorithm can be used for IRT parameter estimation, and that its precision exceeds that of the Bilog program (E-M algorithm), making it worth wider adoption.
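A minimal sketch of the MCMC idea for IRT, assuming known item parameters and a single examinee: a random-walk Metropolis sampler for theta under the 2PL model with a standard-normal prior. This is far simpler than the full joint samplers studied above, but it shows the accept/reject core.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def log_post(theta, x, a, b):
    """Log posterior of theta: Bernoulli likelihood plus standard-normal prior."""
    p = p_2pl(theta, a, b)
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p)) - 0.5 * theta ** 2

def mcmc_theta(x, a, b, n_iter=3000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for one examinee's theta."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    theta, lp = 0.0, log_post(0.0, x, a, b)
    for t in range(n_iter):
        prop = theta + rng.normal(0.0, step)
        lp_prop = log_post(prop, x, a, b)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[t] = theta
    return chain
```

A full sampler would alternate updates of person and item parameters in the same way; discarding an initial burn-in segment before summarizing the chain is standard practice.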

9.
陈平  辛涛 《心理学报》2011,43(7):836-850
Item replenishment is crucial for developing and maintaining the item bank of cognitive diagnostic computerized adaptive testing (CD-CAT). Drawing on the idea of joint maximum likelihood estimation (JMLE) in unidimensional item response theory (IRT), this paper proposes a joint estimation algorithm (JEA) that relies only on examinees' responses to operational and new items to estimate, jointly and automatically, the attribute vectors and item parameters of the new items. The results show that JEA recovers the new items' attribute vectors and item parameters well when the item parameters are relatively small and the sample is relatively large, and that sample size, the magnitude of the item parameters, and their starting values all affect JEA's performance.
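A toy Python sketch of the central idea, under strong simplifying assumptions (examinee attribute profiles treated as known, a DINA measurement model, slip and guess set from sample proportions): score every candidate attribute vector for a new item by its likelihood and keep the best. The actual JEA estimates profiles and item parameters jointly, so this is only an illustration.

```python
import itertools
import numpy as np

def eta(profiles, q):
    """DINA ideal response: 1 iff an examinee masters every required attribute."""
    return (profiles >= q).all(axis=1).astype(int)

def best_q(x, profiles, K):
    """Pick the q-vector maximizing the DINA likelihood of responses x,
    with slip/guess set to the sample proportions implied by each candidate."""
    best, best_ll = None, -np.inf
    for q in itertools.product([0, 1], repeat=K):
        q = np.array(q)
        if q.sum() == 0:          # a new item must measure something
            continue
        e = eta(profiles, q)
        # crude ML for guess g and slip s given eta, clipped away from 0/1
        g = np.clip(x[e == 0].mean() if (e == 0).any() else 0.5, 0.01, 0.99)
        s = np.clip(1 - x[e == 1].mean() if (e == 1).any() else 0.5, 0.01, 0.99)
        p = np.where(e == 1, 1 - s, g)
        ll = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
        if ll > best_ll:
            best, best_ll = q, ll
    return best
```

The exhaustive search over the 2^K - 1 candidates is feasible only for small K; this is exactly the new-item calibration step that a joint algorithm interleaves with re-estimating examinee profiles.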

10.
梁莘娅  杨艳云 《心理科学》2016,39(5):1256-1267
Structural equation modeling (SEM) is widely used for statistical analysis in psychology, education, and the social sciences. The most common estimators in SEM are based on the normal distribution, such as maximum likelihood (ML). These methods rest on two assumptions. First, the theoretical model must correctly reflect the relations among the variables (the structural assumption). Second, the data must follow a multivariate normal distribution (the distributional assumption). When these assumptions fail, normal-theory estimators can produce incorrect chi-square statistics, incorrect fit indices, and biased parameter estimates and standard errors. In practice, almost no theoretical model explains the relations among variables exactly, and data are often non-normal. New estimation methods have therefore been developed that either do not require multivariate normality in theory or correct the distortions caused by non-normality. Two currently popular methods are robust maximum likelihood (robust ML) and Bayesian estimation. Robust ML applies the Satorra and Bentler (1994) corrections to the chi-square statistic and the standard errors, while the parameter estimates are identical to those from ML. Bayesian estimation rests on Bayes' theorem: the posterior distribution of the parameters is obtained by multiplying the prior distribution by the data likelihood, and is typically simulated via Markov chain Monte Carlo. Previous comparisons of the two methods were limited to settings where the theoretical model is correct; this study focuses on misspecified models and also considers non-normal data. The model used is a confirmatory factor model, with all data generated by computer simulation under three factors: 8 factor structures, 3 variable distributions, and 3 sample sizes, yielding 72 conditions (72 = 8x3x3). For each condition, 2,000 data sets were generated; each data set was fitted with two models (one correct, one misspecified), and each model was fitted with both estimators, the Bayesian analyses using noninformative priors. The analysis focuses on model rejection rates, fit, parameter estimates, and standard errors. Results show that with sufficient sample sizes the two methods give very similar parameter estimates; with non-normal data, Bayesian estimation rejects misspecified models better than robust ML; but with insufficient samples and normal data, Bayesian estimation has almost no advantage in model rejection or parameter estimation, and under some conditions performs worse than robust ML.

11.
The four-parameter logistic model (4PLM) has recently attracted much interest in various applications. Motivated by recent studies that re-express the four-parameter model as a mixture model with two levels of latent variables, this paper develops a new expectation–maximization (EM) algorithm for marginalized maximum a posteriori estimation of the 4PLM parameters. The mixture modelling framework of the 4PLM not only makes the proposed EM algorithm easier to implement in practice, but also provides a natural connection with popular cognitive diagnosis models. Simulation studies were conducted to show the good performance of the proposed estimation method and to investigate the impact of the additional upper asymptote parameter on the estimation of other parameters. Moreover, a real data set was analysed using the 4PLM to show its improved performance over the three-parameter logistic model.

12.
The four-parameter logistic (4PL) item response model, which includes an upper asymptote for the correct response probability, has drawn increasing interest due to its suitability for many practical scenarios. This paper proposes a new Gibbs sampling algorithm for estimation of the multidimensional 4PL model based on an efficient data augmentation scheme (DAGS). With the introduction of three continuous latent variables, the full conditional distributions are tractable, allowing easy implementation of a Gibbs sampler. Simulation studies are conducted to evaluate the proposed method and several popular alternatives. An empirical data set was analysed using the 4PL model to show its improved performance over the three-parameter and two-parameter logistic models. The proposed estimation scheme is easily accessible to practitioners through the open-source IRTlogit package.

13.
In this paper, we explore the use of the stochastic EM algorithm (Celeux & Diebolt (1985) Computational Statistics Quarterly, 2, 73) for large-scale full-information item factor analysis. Innovations have been made on its implementation, including an adaptive-rejection-based Gibbs sampler for the stochastic E step, a proximal gradient descent algorithm for the optimization in the M step, and diagnostic procedures for determining the burn-in size and the stopping of the algorithm. These developments are based on the theoretical results of Nielsen (2000, Bernoulli, 6, 457), as well as advanced sampling and optimization techniques. The proposed algorithm is computationally efficient and virtually tuning-free, making it scalable to large-scale data with many latent traits (e.g. more than five latent traits) and easy to use for practitioners. Standard errors of parameter estimation are also obtained based on the missing-information identity (Louis, 1982, Journal of the Royal Statistical Society, Series B, 44, 226). The performance of the algorithm is evaluated through simulation studies and an application to the analysis of the IPIP-NEO personality inventory. Extensions of the proposed algorithm to other latent variable models are discussed.

14.
Miller (1956) identified his famous limit of 7 ± 2 items based in part on absolute identification—the ability to identify stimuli that differ on a single physical dimension, such as lines of different length. An important aspect of this limit is its independence from perceptual effects and its application across all stimulus types. Recent research, however, has identified several exceptions. We investigate an explanation for these results that reconciles them with Miller’s work. We find support for the hypothesis that the exceptional stimulus types have more complex psychological representations, which can therefore support better identification. Our investigation uses data sets with thousands of observations for each participant, which allows the application of a new technique for identifying psychological representations: the structural forms algorithm of Kemp and Tenenbaum (2008). This algorithm supports inferences not possible with previous techniques, such as multidimensional scaling.

15.
朱玮  丁树良  陈小攀 《心理学报》2006,38(3):453-460
For estimating the unknown parameters of the two-parameter logistic model (2PLM) in IRT, this paper presents a new method: minimum-χ2/EM estimation. Taking full account of the differences between item response theory (IRT) and classical test theory (CTT), the new method improves Berkson's minimum-χ2 estimation from a statistical computing perspective, removing the unrealistic requirement that the ability parameters be known before Berkson's minimum-χ2 estimation can be applied, and thereby broadening its range of application. Experiments show that the new method estimates ability parameters more accurately than BILOG, and that once the sample size exceeds 2,000 its item parameter estimates are also better than BILOG's. The experiments further show that the new method is robust.
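Berkson's minimum logit chi-square idea, which the method above builds on, can be sketched in Python for a single 2PL item when examinees are grouped by known ability: fit the empirical logits by weighted least squares. The grouping scheme and the EM extension of the paper are not shown; everything here is an illustrative simplification.

```python
import numpy as np

def min_logit_chisq(theta_k, n_k, r_k):
    """Berkson-style minimum logit chi-square for one 2PL item.
    theta_k: group abilities; n_k: group sizes; r_k: correct counts.
    Fits logit(p_k) = a * theta_k + c by weighted least squares and
    returns (a, b) with difficulty b = -c / a."""
    p = np.clip(r_k / n_k, 1e-3, 1 - 1e-3)
    y = np.log(p / (1 - p))                 # empirical logits
    w = n_k * p * (1 - p)                   # Berkson weights
    X = np.column_stack([theta_k, np.ones_like(theta_k)])
    # weighted least squares: solve (X'WX) beta = X'Wy
    WX = X * w[:, None]
    a, c = np.linalg.solve(X.T @ WX, X.T @ (w * y))
    return a, -c / a
```

The weights n_k p_k (1 - p_k) are the inverse variances of the empirical logits, which is what makes this a minimum chi-square rather than an ordinary least-squares fit.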

16.
This paper demonstrates the feasibility of using the penalty function method to estimate parameters that are subject to a set of functional constraints in covariance structure analysis. Both inequality and equality constraints are studied. The approaches of maximum likelihood and generalized least squares estimation are considered. A modified Scoring algorithm and a modified Gauss-Newton algorithm are implemented to produce the appropriate constrained estimates. The methodology is illustrated by its applications to Heywood cases in confirmatory factor analysis, the quasi-Wiener simplex model, and multitrait-multimethod matrix analysis. The author is indebted to several anonymous reviewers for creative suggestions for improvement of this paper. Computer funding is provided by the Computer Services Centre, The Chinese University of Hong Kong.

17.
EM and beyond (Total citations: 2; self: 0; other: 2)
The basic theme of the EM algorithm, to repeatedly use complete-data methods to solve incomplete data problems, is also a theme of several more recent statistical techniques. These techniques—multiple imputation, data augmentation, stochastic relaxation, and sampling importance resampling—combine simulation techniques with complete-data methods to attack problems that are difficult or impossible for EM. A preliminary version of this article was the Keynote Address at the 1987 European Meeting of the Psychometric Society, June 24–26, 1987, in Enschede, The Netherlands. The author wishes to thank the editor and reviewers for helpful comments.
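The theme of repeatedly applying a complete-data method is easy to see in a minimal Python EM for a two-component Gaussian mixture with unit variances (an illustrative example, not tied to any particular paper above):

```python
import numpy as np

def em_mixture(x, n_iter=60):
    """EM for a two-component Gaussian mixture with unit variances:
    E step computes responsibilities, M step solves the weighted
    complete-data problem -- the basic EM theme described above."""
    mu = np.array([x.min(), x.max()])        # crude but deterministic start
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M step: complete-data ML with responsibilities as weights
        pi = resp.mean(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
    return pi, mu
```

The simulation-based relatives listed above (data augmentation, stochastic relaxation, and so on) replace the deterministic E step with draws of the missing data, but keep the same complete-data M step.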

18.
Multidimensional item response theory (MIRT) is widely used in assessment and evaluation of educational and psychological tests. It models the individual response patterns by specifying a functional relationship between individuals' multiple latent traits and their responses to test items. One major challenge in parameter estimation in MIRT is that the likelihood involves intractable multidimensional integrals due to the latent variable structure. Various methods have been proposed that involve either direct numerical approximations to the integrals or Monte Carlo simulations. However, these methods are known to be computationally demanding in high dimensions and rely on sampling data points from a posterior distribution. We propose a new Gaussian variational expectation–maximization (GVEM) algorithm which adopts variational inference to approximate the intractable marginal likelihood by a computationally feasible lower bound. In addition, the proposed algorithm can be applied to assess the dimensionality of the latent traits in an exploratory analysis. Simulation studies are conducted to demonstrate the computational efficiency and estimation precision of the new GVEM algorithm compared to the popular alternative Metropolis–Hastings Robbins–Monro algorithm. In addition, theoretical results are presented to establish the consistency of the estimator from the new GVEM algorithm.

19.
In learning environments, understanding the longitudinal path of learning is one of the main goals. Cognitive diagnostic models (CDMs) for measurement combined with a transition model for mastery may be beneficial for providing fine-grained information about students’ knowledge profiles over time. An efficient algorithm to estimate model parameters would augment the practicality of this combination. In this study, the Expectation–Maximization (EM) algorithm is presented for the estimation of student learning trajectories with the GDINA (generalized deterministic inputs, noisy, “and” gate) model and some of its submodels for the measurement component, and a first-order Markov model for learning transitions is implemented. A simulation study is conducted to investigate the efficiency of the algorithm in estimation accuracy of student and model parameters under several factors—sample size, number of attributes, number of time points in a test, and complexity of the measurement model. Attribute- and vector-level agreement rates as well as the root mean square error rates of the model parameters are investigated. In addition, the computer run times for converging are recorded. The results show that for a majority of the conditions, the accuracy rates of the parameters are quite promising in conjunction with relatively short computation times. Only for the conditions with relatively small sample sizes and high numbers of attributes does the computation time increase while the parameter recovery rate declines. An application using spatial reasoning data is given. Based on the Bayesian information criterion (BIC), the model fit analysis shows that the DINA (deterministic inputs, noisy, “and” gate) model is preferable to the GDINA with these data.
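The first-order Markov transition component for mastery can be sketched in Python as a simulation in which mastery, once attained, is retained (an illustrative simplification; in the paper the transition probabilities are estimated, not fixed):

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_attrs, n_times = 400, 4, 5
p_learn = 0.25   # P(non-mastery -> mastery) per occasion; mastery is absorbing here

# first-order Markov transitions of attribute mastery over occasions
alpha = np.zeros((n_times, n_persons, n_attrs), dtype=int)
alpha[0] = rng.integers(0, 2, (n_persons, n_attrs))
for t in range(1, n_times):
    learn = rng.uniform(size=(n_persons, n_attrs)) < p_learn
    alpha[t] = np.where(alpha[t - 1] == 1, 1, learn.astype(int))

mastery_rate = alpha.mean(axis=(1, 2))   # proportion mastered at each occasion
```

In the combined model, each occasion's profiles alpha[t] would drive GDINA response probabilities, and the EM algorithm would estimate the transition and measurement parameters from the observed responses alone.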

