Similar Articles
20 similar articles found (search time: 31 ms)
1.
The method of finding the maximum likelihood estimates of the parameters in a multivariate normal model with some of the component variables observable only in polytomous form is developed. The main stratagem used is a reparameterization which converts the corresponding log likelihood function to an easily handled one. The maximum likelihood estimates are found by a Fletcher-Powell algorithm, and their standard error estimates are obtained from the information matrix. When the dimension of the random vector observable only in polytomous form is large, obtaining the maximum likelihood estimates is computationally expensive. Therefore, a more efficient method, the partition maximum likelihood method, is proposed. These estimation methods are demonstrated with real and simulated data, and are compared by means of a simulation study.
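To make the idea concrete, here is a minimal Python sketch of maximum likelihood estimation for the simplest such model: a bivariate normal where both components are observed only in three ordered categories (a polychoric-style setup). The contingency table is invented, scipy is assumed, and BFGS stands in for the paper's Fletcher-Powell routine; the tanh reparameterization of the correlation echoes, but is not, the paper's stratagem.

```python
# Minimal sketch (not the paper's algorithm): joint ML estimation of
# thresholds and the correlation of a bivariate normal observed only
# in polytomous form, from a hypothetical 3x3 contingency table.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

table = np.array([[20, 10,  5],   # hypothetical cell counts
                  [10, 25, 10],
                  [ 5, 10, 25]])

def rect_prob(a0, a1, b0, b1, rho):
    """P(a0 < X <= a1, b0 < Y <= b1) for a standard bivariate normal."""
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    return (mvn.cdf([a1, b1]) - mvn.cdf([a0, b1])
            - mvn.cdf([a1, b0]) + mvn.cdf([a0, b0]))

def neg_loglik(params):
    # Two interior thresholds per margin; +/-10 stands in for +/-infinity.
    # rho = tanh(z) keeps the correlation in (-1, 1) -- a convenient
    # reparameterization in the spirit of the abstract.
    a = np.concatenate(([-10.0], np.sort(params[0:2]), [10.0]))
    b = np.concatenate(([-10.0], np.sort(params[2:4]), [10.0]))
    rho = np.tanh(params[4])
    ll = 0.0
    for i in range(3):
        for j in range(3):
            p = max(rect_prob(a[i], a[i+1], b[j], b[j+1], rho), 1e-12)
            ll += table[i, j] * np.log(p)
    return -ll

res = minimize(neg_loglik, x0=[-0.5, 0.5, -0.5, 0.5, 0.0], method="BFGS")
# res.hess_inv approximates the inverse information matrix; its diagonal
# gives squared standard errors for the transformed parameters.
print("rho_hat =", np.tanh(res.x[4]))
```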

2.
田伟  辛涛  康春花 《心理科学进展》2014,22(6):1036-1046
In psychological and educational measurement, parameter estimation methods for Item Response Theory (IRT) models are a basic tool of both theoretical research and practical application. Recently, given the continuing extension of IRT models and the inherent limitations of the EM (expectation-maximization) algorithm, improving and developing parameter estimation methods has become especially important. This article reviews the development of marginal maximum likelihood estimation for IRT models and identifies its successive stages: a joint maximum likelihood stage, a deterministic latent-trait "imputation" stage, and a stochastic latent-trait "imputation" stage, with emphasis on the underlying idea of latent-trait "imputation" (data augmentation). The EM algorithm and the Metropolis-Hastings Robbins-Monro (MH-RM) algorithm, as different latent-trait imputation methods, both represent conceptual advances in marginal maximum likelihood estimation. Parameter estimation methods based on latent-trait imputation continue to be developed and refined.
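As an illustration of the "imputation" idea the review emphasizes, here is a minimal Bock-Aitkin-style EM sketch for marginal ML in a 2PL model, where the latent trait is filled in by a fixed quadrature grid. All data and parameter values are simulated; this is a textbook illustration, not any implementation discussed in the article.

```python
# EM for 2PL marginal maximum likelihood via quadrature "imputation"
# of the latent trait. Simulated data; minimal illustrative sketch.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_persons, n_items = 1000, 10
a_true = rng.uniform(0.8, 2.0, n_items)
b_true = rng.normal(0, 1, n_items)
theta = rng.normal(0, 1, n_persons)
p = 1 / (1 + np.exp(-a_true * (theta[:, None] - b_true)))
X = (rng.uniform(size=p.shape) < p).astype(float)

# Quadrature grid approximating the N(0,1) trait distribution.
nodes = np.linspace(-4, 4, 41)
weights = np.exp(-0.5 * nodes**2); weights /= weights.sum()

a, b = np.ones(n_items), np.zeros(n_items)
for _ in range(50):
    # E-step: posterior weight of each quadrature node for each person.
    P = 1 / (1 + np.exp(-a[None, :] * (nodes[:, None] - b[None, :])))
    logL = X @ np.log(P).T + (1 - X) @ np.log(1 - P).T        # (N, Q)
    post = np.exp(logL - logL.max(axis=1, keepdims=True)) * weights
    post /= post.sum(axis=1, keepdims=True)
    nq = post.sum(axis=0)        # expected number of persons at each node
    r = post.T @ X               # expected correct responses per node/item
    # M-step: maximize the expected complete-data log-likelihood per item.
    for j in range(n_items):
        def nll(par, j=j):
            pj = 1 / (1 + np.exp(-par[0] * (nodes - par[1])))
            pj = np.clip(pj, 1e-9, 1 - 1e-9)
            return -np.sum(r[:, j]*np.log(pj) + (nq - r[:, j])*np.log(1 - pj))
        a[j], b[j] = minimize(nll, x0=[a[j], b[j]], method="Nelder-Mead").x

print("corr(a, a_true):", np.corrcoef(a, a_true)[0, 1])
print("corr(b, b_true):", np.corrcoef(b, b_true)[0, 1])
```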

3.
The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration approach, a general pseudo maximum likelihood estimation method based on a conveniently decomposed form of the likelihood. It is both consistent and computationally efficient, and produces point estimates and estimated standard errors which are practically identical to those obtained by maximum likelihood. Simulations suggest that improved regression calibration, which is easy to implement in standard software, works well in a range of situations.
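For context, here is a minimal Python sketch of ordinary regression calibration, the baseline the paper improves on: the error-prone covariate is replaced by its best linear prediction from replicate measurements, and the outcome model is then fit as usual. The data are simulated and statsmodels is assumed available; this is not the authors' improved method.

```python
# Ordinary regression calibration for a logistic outcome model with a
# covariate measured with error (two replicates). Simulated data.
import numpy as np
import statsmodels.api as sm   # assumed available

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0, 1, n)                    # true covariate (unobserved)
w1 = x + rng.normal(0, 0.7, n)             # replicate measurements
w2 = x + rng.normal(0, 0.7, n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * x))))

wbar = (w1 + w2) / 2
s2_u = np.mean((w1 - w2) ** 2) / 2         # measurement-error variance
s2_x = wbar.var() - s2_u / 2               # true-covariate variance
# Calibrated covariate: E[X | Wbar] under joint normality.
x_hat = wbar.mean() + (s2_x / (s2_x + s2_u / 2)) * (wbar - wbar.mean())

naive = sm.GLM(y, sm.add_constant(wbar), family=sm.families.Binomial()).fit()
calib = sm.GLM(y, sm.add_constant(x_hat), family=sm.families.Binomial()).fit()
print("naive slope:", naive.params[1])      # attenuated toward zero
print("calibrated slope:", calib.params[1]) # closer to the true 1.0
```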

4.
A Two-Tier Full-Information Item Factor Analysis Model with Applications
Li Cai, Psychometrika, 2010, 75(4):581-612
Motivated by Gibbons et al.’s (Appl. Psychol. Meas. 31:4–19, 2007) full-information maximum marginal likelihood item bifactor analysis for polytomous data, and Rijmen, Vansteelandt, and De Boeck’s (Psychometrika 73:167–182, 2008) work on constructing computationally efficient estimation algorithms for latent variable models, a two-tier item factor analysis model is developed in this research. The modeling framework subsumes standard multidimensional IRT models, bifactor IRT models, and testlet response theory models as special cases. Features of the model lead to a reduction in the dimensionality of the latent variable space, and consequently significant computational savings. An EM algorithm for full-information maximum marginal likelihood estimation is developed. Simulations and real data demonstrations confirm the accuracy and efficiency of the proposed methods. Three real data sets from a large-scale educational assessment, a longitudinal public health survey, and a scale development study measuring patient reported quality of life outcomes are analyzed as illustrations of the model’s broad range of applicability.

5.
6.
Rubin and Thayer recently presented equations to implement maximum likelihood (ML) estimation in factor analysis via the EM algorithm. They present an example to demonstrate the efficacy of the algorithm, and propose that their recovery of multiple local maxima of the ML function “certainly should cast doubt on the general utility of second derivatives of the log likelihood as measures of precision of estimation.” It is shown here, in contrast, that these second derivatives verify that Rubin and Thayer did not find multiple local maxima as claimed. The only known maximum remains the one found by Jöreskog over a decade earlier. The standard errors obtained from the second derivatives and the Fisher information matrix thus remain appropriate where ML assumptions are met. The advantages of the EM algorithm over other algorithms for ML factor analysis remain to be demonstrated.
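The point at issue is easy to check numerically: at a genuine local maximum of the likelihood, the Hessian of the negative log-likelihood is positive definite, and its inverse gives the usual standard errors. The sketch below demonstrates both facts for a simple normal sample (a stand-in for the factor model, which would be handled identically); the finite-difference step size is an assumption.

```python
# Verify positive definiteness of the Hessian at an ML solution and
# compute standard errors from its inverse. Simulated normal data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, 500)

def negll(p):
    """Negative log-likelihood of N(mu, sigma^2), sigma = exp(log_sigma)."""
    mu, log_sigma = p
    s = np.exp(log_sigma)
    return 0.5 * np.sum(((x - mu) / s) ** 2) + x.size * log_sigma

def numerical_hessian(f, p, h=1e-4):
    """Central-difference Hessian of f at p."""
    p = np.asarray(p, float); k = p.size
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei, ej = np.eye(k)[i] * h, np.eye(k)[j] * h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4 * h * h)
    return H

# ML estimates are available in closed form for this model.
p_hat = [x.mean(), np.log(x.std())]
H = numerical_hessian(negll, p_hat)
print("positive definite at the maximum:", np.all(np.linalg.eigvalsh(H) > 0))
print("standard errors (mu, log sigma):", np.sqrt(np.diag(np.linalg.inv(H))))
```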

7.
A maximum likelihood estimation procedure was developed to fit unweighted and weighted additive models to conjoint data obtained by the categorical rating, the pair comparison or the directional ranking method. The scoring algorithm used to fit the models was found to be both reliable and efficient, and the program MAXADD is capable of handling up to 300 parameters. Practical uses of the procedure are reported to demonstrate its various advantages as a statistical method.
The research reported here was supported by Grant A6394 to the author from the Natural Sciences and Engineering Research Council of Canada. Portions of this research were presented at the Psychometric Society meeting in Iowa City, Iowa, in May, 1980. Thanks are due to Jim Ramsay, Justine Sergent and anonymous reviewers for their helpful comments. Two MAXADD programs which perform the computations discussed in this paper may be obtained from the author.

8.
Several algorithms for covariance structure analysis are considered in addition to the Fletcher-Powell algorithm. These include the Gauss-Newton, Newton-Raphson, Fisher scoring, and Fletcher-Reeves algorithms. Two methods of estimation are considered, maximum likelihood and weighted least squares. It is shown that the Gauss-Newton algorithm, which in standard form produces weighted least squares estimates, can, in iteratively reweighted form, produce maximum likelihood estimates as well. Previously unavailable standard error estimates to be used in conjunction with the Fletcher-Reeves algorithm are derived. Finally, all the algorithms are applied to a number of maximum likelihood and weighted least squares factor analysis problems to compare the estimates and the standard errors produced. The algorithms appear to give satisfactory estimates, but there are serious discrepancies in the standard errors. Because it is robust to poor starting values, converges rapidly and conveniently produces consistent standard errors for both maximum likelihood and weighted least squares problems, the Gauss-Newton algorithm represents an attractive alternative for at least some covariance structure analyses.
Work by the first author has been supported in part by Grant No. Da01070 from the U. S. Public Health Service. Work by the second author has been supported in part by Grant No. MCS 77-02121 from the National Science Foundation.
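To show the core mechanism, here is a minimal Gauss-Newton sketch for a plain nonlinear least-squares problem (an exponential-decay curve, not a covariance structure model): each step solves a linearized least-squares problem via the normal equations, which is exactly what iterative reweighting then exploits to obtain ML estimates. The model and data are invented.

```python
# Gauss-Newton for nonlinear least squares: f(t; A, k) = A * exp(-k t).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 50)
y = 3.0 * np.exp(-1.2 * t) + rng.normal(0, 0.05, t.size)

def model(p):
    return p[0] * np.exp(-p[1] * t)

def jacobian(p):
    """Analytic derivatives of the model with respect to (A, k)."""
    f = np.exp(-p[1] * t)
    return np.column_stack([f, -p[0] * t * f])

p = np.array([1.0, 1.0])                       # poor starting values
for _ in range(20):
    r = y - model(p)                           # residuals
    J = jacobian(p)
    step = np.linalg.solve(J.T @ J, J.T @ r)   # linearized normal equations
    p = p + step
    if np.linalg.norm(step) < 1e-10:
        break
print("estimates (A, k):", p)                  # approximately (3.0, 1.2)
```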

9.
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
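As a flavor of the second approach, here is a minimal iteratively reweighted least squares (IRLS) sketch of M-estimation with Huber-type weights for a moderation (interaction) regression. This is generic robust regression on simulated heavy-tailed data, not the authors' two-level procedure or their R program; the tuning constant 1.345 is the conventional choice.

```python
# IRLS with Huber weights for y = b0 + b1*x + b2*m + b3*x*m + e.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x, m = rng.normal(size=n), rng.normal(size=n)
e = rng.standard_t(df=3, size=n)               # heavy-tailed errors
y = 1.0 + 0.5 * x + 0.3 * m + 0.4 * x * m + e
X = np.column_stack([np.ones(n), x, m, x * m])

def huber_weights(r, c=1.345):
    """Weight 1 for small standardized residuals, c/|r| beyond the cutoff."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / a)

beta = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS starting values
for _ in range(100):
    r = y - X @ beta
    s = np.median(np.abs(r)) / 0.6745          # robust scale (MAD)
    w = huber_weights(r / s)
    Xw = X * w[:, None]
    beta_new = np.linalg.solve(Xw.T @ X, Xw.T @ y)  # weighted normal eqns
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new
print("robust estimates:", beta)               # interaction coeff near 0.4
```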

10.
EM algorithms for ML factor analysis
The details of EM algorithms for maximum likelihood factor analysis are presented for both the exploratory and confirmatory models. The algorithm is essentially the same for both cases and involves only simple least squares regression operations; the largest matrix inversion required is for a q × q symmetric matrix, where q is the number of factors. The example that is used demonstrates that the likelihood for the factor analysis model may have multiple modes that are not simply rotations of each other; such behavior should concern users of maximum likelihood factor analysis and certainly should cast doubt on the general utility of second derivatives of the log likelihood as measures of precision of estimation.
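The EM iteration for the exploratory model Sigma = LL′ + Psi has exactly the regression structure the abstract describes; here is a minimal sketch on simulated data. The E-step only requires regressions of factors on observed variables and the M-step only least squares. This is the textbook form of the iteration, not the authors' code.

```python
# EM for exploratory ML factor analysis, Sigma = L L' + Psi.
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 2000, 6, 2
L_true = rng.uniform(0.4, 0.9, (p, q))
Psi_true = np.diag(rng.uniform(0.3, 0.6, p))
f = rng.normal(size=(n, q))
X = f @ L_true.T + rng.normal(size=(n, p)) @ np.sqrt(Psi_true)
S = np.cov(X, rowvar=False, bias=True)       # sample covariance

L = rng.normal(0, 0.1, (p, q))
psi = np.ones(p)
for _ in range(500):
    Sigma = L @ L.T + np.diag(psi)
    delta = np.linalg.solve(Sigma, L)        # Sigma^{-1} L, p x q
    Cxf = S @ delta                          # expected cross-moment E[x f']
    Cff = delta.T @ S @ delta + np.eye(q) - delta.T @ L   # E[f f']
    L = Cxf @ np.linalg.inv(Cff)             # M-step: least squares
    psi = np.diag(S - L @ Cxf.T)             # M-step: residual variances

Sigma = L @ L.T + np.diag(psi)
loglik = -0.5 * n * (np.linalg.slogdet(Sigma)[1]
                     + np.trace(np.linalg.solve(Sigma, S)))
print("log-likelihood (up to a constant):", loglik)
# Note: L is identified only up to rotation, so compare likelihoods,
# not raw loadings.
```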

11.
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate ordinal items. Time-dependent latent variables are linked with an autoregressive model. Simulation results have shown composite likelihood estimators to have a small amount of bias and mean square error and as such they are feasible alternatives to full maximum likelihood. Model selection criteria developed for composite likelihood estimation are used in the applications. Furthermore, lower-order residuals are used as measures-of-fit for the selected models.
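The estimation idea, replacing the full joint likelihood with a sum of bivariate log-likelihoods over variable pairs, can be shown in a few lines. The sketch below targets a single exchangeable correlation of a multivariate normal; the paper's model (ordinal items, autoregressive latent variables) is far richer, so this is only the pairwise-likelihood principle, on simulated data, with scipy assumed.

```python
# Pairwise (composite) likelihood estimation of an exchangeable
# correlation in a d-variate standard normal. Simulated data.
import numpy as np
from itertools import combinations
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
d, n, rho_true = 5, 2000, 0.6
cov = np.full((d, d), rho_true)
np.fill_diagonal(cov, 1.0)
X = rng.multivariate_normal(np.zeros(d), cov, size=n)

def bivariate_negll(rho, xi, xj):
    """Negative log-density of a standard bivariate normal pair."""
    q = (xi**2 - 2 * rho * xi * xj + xj**2) / (1 - rho**2)
    return np.sum(0.5 * q + 0.5 * np.log(1 - rho**2) + np.log(2 * np.pi))

def composite_negll(rho):
    # Sum bivariate contributions over all d*(d-1)/2 variable pairs.
    return sum(bivariate_negll(rho, X[:, i], X[:, j])
               for i, j in combinations(range(d), 2))

res = minimize_scalar(composite_negll, bounds=(-0.9, 0.99), method="bounded")
print("pairwise-likelihood estimate of rho:", res.x)   # near 0.6
```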

12.
Multidimensional successive categories scaling: A maximum likelihood method
A single-step maximum likelihood estimation procedure is developed for multidimensional scaling of dissimilarity data measured on rating scales. The procedure can fit the Euclidean distance model to the data under various assumptions about category widths and under two distributional assumptions. The scoring algorithm for parameter estimation has been developed and implemented in the form of a computer program. Practical uses of the method are demonstrated with an emphasis on various advantages of the method as a statistical procedure.
The research reported here was partly supported by Grant A6394 to the author by the Natural Sciences and Engineering Research Council of Canada. Portions of this research were presented at the Psychometric Society meeting in Uppsala, Sweden, in June, 1978. MAXSCAL-2.1, a program to perform the computations discussed in this paper, may be obtained from the author. Thanks are due to Jim Ramsay for his helpful comments.

13.
An algorithm for analyzing ordinal scaling results is described. Frequency data on ordinal categories are modeled for unidimensional psychological attributes according to Thurstone’s judgment scaling model. The algorithm applies maximum likelihood estimation of model parameters. The Cramér-Rao bounds of the standard errors of the estimated parameters are calculated, and a stress measure and a goodness-of-fit measure are supplied.

14.
Standard errors for rotated factor loadings
Beginning with the results of Girshick on the asymptotic distribution of principal component loadings and those of Lawley on the distribution of unrotated maximum likelihood factor loadings, the asymptotic distribution of the corresponding analytically rotated loadings is obtained. The principal difficulty is the fact that the transformation matrix which produces the rotation is usually itself a function of the data. The approach is to use implicit differentiation to find the partial derivatives of an arbitrary orthogonal rotation algorithm. Specific details are given for the orthomax algorithms and an example involving maximum likelihood estimation and varimax rotation is presented.
This research was supported in part by NIH Grant RR-3. The authors are grateful to Dorothy T. Thayer who implemented the algorithms discussed here as well as those of Lawley and Maxwell. We are particularly indebted to Michael Browne for convincing us of the significance of this work and for helping to guide its development and to Harry H. Harman who many years ago pointed out the need for standard errors of estimate.

15.
This study investigates using response times (RTs) with item responses in a computerized adaptive test (CAT) setting to enhance item selection and ability estimation and to control for differential speededness. Within van der Linden's hierarchical framework, an extended procedure for the joint estimation of ability and speed parameters for use in CAT is developed, called the joint expected a posteriori estimator (J-EAP). It is shown that the J-EAP estimate of ability and speededness outperforms the standard maximum likelihood estimator (MLE) of ability and speededness in terms of correlation, root mean square error, and bias. It is further shown that under the maximum information per time unit item selection method (MICT), which uses estimates of ability and speededness directly, the J-EAP further reduces average examinee time spent and the variability of test times between examinees beyond the gains of this selection algorithm with the MLE, while maintaining estimation efficiency. Simulated test results are corroborated with test parameters derived from a real data example.
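A joint EAP of this kind is easy to sketch on a grid: combine a 2PL response likelihood, a lognormal response-time likelihood in the style of van der Linden's hierarchical model, and a correlated bivariate normal prior on (ability, speed), then take posterior means. All item parameters, the prior correlation, and the data below are simulated assumptions, not values from the study.

```python
# Grid-based joint EAP for ability (theta) and speed (tau):
# responses ~ 2PL; log response times ~ N(beta_j - tau, 1/alpha_j^2).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
J = 20
a = rng.uniform(0.8, 2.0, J); b = rng.normal(0, 1, J)       # 2PL items
alpha = rng.uniform(1.5, 2.5, J); beta = rng.normal(4.0, 0.3, J)  # RT items

theta_true, tau_true = 0.5, -0.3
p = 1 / (1 + np.exp(-a * (theta_true - b)))
u = (rng.uniform(size=J) < p).astype(float)                  # responses
logt = rng.normal(beta - tau_true, 1 / alpha)                # log RTs

# Grid over (theta, tau) with a correlated bivariate normal prior.
g = np.linspace(-4, 4, 81)
TH, TA = np.meshgrid(g, g, indexing="ij")
rho = 0.3                                                    # assumed
prior = np.exp(-(TH**2 - 2*rho*TH*TA + TA**2) / (2 * (1 - rho**2)))

P = 1 / (1 + np.exp(-a * (TH[..., None] - b)))               # (81,81,J)
ll_u = np.sum(u * np.log(P) + (1 - u) * np.log(1 - P), axis=-1)
ll_t = np.sum(norm.logpdf(logt, beta - TA[..., None], 1 / alpha), axis=-1)

post = prior * np.exp(ll_u + ll_t - (ll_u + ll_t).max())
post /= post.sum()
print("J-EAP theta:", np.sum(TH * post), " J-EAP tau:", np.sum(TA * post))
```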

16.
An algorithm for analyzing difference scaling results is described. Frequency data on ordered categories that represent perceived differences for a unidimensional psychological attribute are modeled according to Thurstone’s judgment scaling model. The algorithm applies the gradient method for the maximum likelihood estimation of the model parameters. Two ways to calculate the start configuration for the model parameters are elaborated. The algorithm also provides asymptotic values for the standard errors of the estimates and three measures for the goodness of the model fit. An additional feature of DifScal is that it is suited to analyze incomplete data.

17.
The polytomous unidimensional Rasch model with equidistant scoring, also known as the rating scale model, is extended in such a way that the item parameters are linearly decomposed into certain basic parameters. The extended model is denoted the linear rating scale model (LRSM). A conditional maximum likelihood estimation procedure and a likelihood-ratio test of hypotheses within the framework of the LRSM are presented. Since the LRSM is a generalization of both the dichotomous Rasch model and the rating scale model, the present algorithm is suited for conditional maximum likelihood estimation in these submodels as well. The practicality of the conditional method is demonstrated by means of a dichotomous Rasch example with 100 items, a rating scale example with 30 items and 5 categories, and an empirical application to the measurement of treatment effects in a clinical study.
Work supported in part by the Fonds zur Förderung der Wissenschaftlichen Forschung under Grant No. P6414.
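Here is a minimal conditional ML sketch for the dichotomous Rasch model, the simplest submodel named above: person parameters are conditioned out via elementary symmetric functions, computed by the standard summation recursion. Simulated data; not the authors' LRSM implementation.

```python
# Conditional ML for the dichotomous Rasch model via elementary
# symmetric functions (ESFs). Simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, J = 1000, 8
b_true = np.linspace(-1.5, 1.5, J)           # item difficulties, sum = 0
theta = rng.normal(0, 1, n)
P = 1 / (1 + np.exp(-(theta[:, None] - b_true)))
X = (rng.uniform(size=(n, J)) < P).astype(int)

def esf(eps):
    """Elementary symmetric functions gamma_0..gamma_J of eps."""
    g = np.zeros(eps.size + 1)
    g[0] = 1.0
    for e in eps:
        g[1:] = g[1:] + e * g[:-1]           # summation recursion
    return g

def neg_cml(b):
    b = b - b.mean()                         # identification constraint
    eps = np.exp(-b)
    g = esf(eps)
    scores = X.sum(axis=1)                   # raw scores (sufficient stats)
    # -log conditional likelihood: sum_i [ sum_j x_ij b_j + log gamma_{r_i} ]
    return np.sum(X @ b) + np.sum(np.log(g[scores]))

res = minimize(neg_cml, x0=np.zeros(J), method="BFGS")
b_hat = res.x - res.x.mean()
print("max abs estimation error:", np.max(np.abs(b_hat - b_true)))
```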

18.
This article proposes a new, more efficient method to compute the minus two log likelihood, its gradient, and the Hessian for structural equation models (SEMs) in reticular action model (RAM) notation. The method exploits the fact that the matrix derivatives used in RAM notation are sparse. For an SEM with K variables, P parameters, and P′ parameter entries in the symmetric or asymmetric matrices of the RAM notation, the asymptotic run time of the algorithm is O(P′K² + P²K² + K³). The naive and standard numerical implementations are both O(P²K³), so that for typical applications of SEM the proposed algorithm is asymptotically K times faster than the best previously known algorithm. A simulation comparison with a numerical algorithm shows that the asymptotic efficiency translates into an applied computational advantage that is crucial for maximum likelihood estimation, even in small, but especially in moderate or large, SEMs.
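For readers unfamiliar with RAM notation, the quantity whose derivatives the article speeds up is itself compact: with asymmetric paths A, symmetric (co)variances S, and filter F, the model-implied covariance is F(I−A)⁻¹S(I−A)⁻ᵀF′, from which −2 log L follows. The sketch below evaluates it for a one-factor, three-indicator model with invented parameter values; it shows the likelihood only, not the article's sparse-derivative algorithm.

```python
# The RAM-notation ML likelihood for a 1-factor, 3-indicator model.
import numpy as np

# Variable order: x1, x2, x3, f (3 observed + 1 latent).
lam = np.array([1.0, 0.8, 0.6])               # loadings: paths f -> x
A = np.zeros((4, 4)); A[0:3, 3] = lam         # asymmetric paths
S = np.diag([0.5, 0.5, 0.5, 1.0])             # residual + factor variances
F = np.hstack([np.eye(3), np.zeros((3, 1))])  # filter: keep observed rows

B = np.linalg.inv(np.eye(4) - A)
Sigma = F @ B @ S @ B.T @ F.T                 # model-implied covariance

# -2 log L (up to constants) given a sample covariance C of n cases.
rng = np.random.default_rng(0)
data = rng.multivariate_normal(np.zeros(3), Sigma, size=500)
C = np.cov(data, rowvar=False, bias=True)
n = data.shape[0]
m2ll = n * (np.linalg.slogdet(Sigma)[1] + np.trace(np.linalg.solve(Sigma, C)))
print("-2 log L (up to a constant):", m2ll)
```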

19.
20.
We introduce and evaluate via a Monte Carlo study a robust new estimation technique that fits distribution functions to grouped response time (RT) data, where the grouping is determined by sample quantiles. The new estimator, quantile maximum likelihood (QML), is more efficient and less biased than the best alternative estimation technique when fitting the commonly used ex-Gaussian distribution. Limitations of the Monte Carlo results are discussed and guidance is provided for the practical application of the new technique. Because QML estimation can be computationally costly, we make fast open source code for fitting available that can be easily modified.
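The QML idea can be sketched briefly: reduce the RT sample to counts between sample quantiles, then maximize the multinomial likelihood of those counts under the ex-Gaussian CDF (available in scipy as exponnorm, parameterized by K = tau/sigma). The quantile set, true parameters, and starting values below are illustrative assumptions; this is not the authors' released code.

```python
# Quantile maximum likelihood (QML) for the ex-Gaussian distribution.
import numpy as np
from scipy.stats import exponnorm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
mu, sigma, tau = 0.4, 0.05, 0.2               # true parameters (seconds)
rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

probs = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # quantile-based grouping
q = np.quantile(rt, probs)
# Observed counts in the bins the sample quantiles define.
counts = np.diff(np.concatenate(([0.0], probs, [1.0]))) * rt.size

def neg_qml(par):
    m, s, t = par
    if s <= 0 or t <= 0:
        return np.inf
    cdf = exponnorm.cdf(q, K=t / s, loc=m, scale=s)
    p = np.diff(np.concatenate(([0.0], cdf, [1.0])))   # bin probabilities
    return -np.sum(counts * np.log(np.clip(p, 1e-12, None)))

res = minimize(neg_qml, x0=[np.median(rt), rt.std() / 2, rt.std() / 2],
               method="Nelder-Mead")
print("QML estimates (mu, sigma, tau):", res.x)
```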
