Similar Documents
A total of 20 similar documents were retrieved.
1.
We address several issues that are raised by Bentler and Tanaka's [1983] discussion of Rubin and Thayer [1982]. Our conclusions are: standard methods do not completely monitor the possible existence of multiple local maxima; summarizing inferential precision by the standard output based on second derivatives of the log likelihood at a maximum can be inappropriate, even if there exists a unique local maximum; EM and LISREL can be viewed as complementary, albeit not entirely adequate, tools for factor analysis. This work was partially supported by the Program Statistics Research Project at Educational Testing Service.

2.
Kohei Adachi, Psychometrika, 2013, 78(2): 380–394
Rubin and Thayer (Psychometrika, 47:69–76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one, when the covariance matrix to be analyzed and the initial matrices, including unique variances and inter-factor correlations, are positive definite. We further demonstrate numerically that the EM algorithm yields proper solutions for data that lead the prevailing gradient algorithms for factor analysis to produce improper solutions. The numerical studies also show that, in real computations with limited numerical precision, Rubin and Thayer's (Psychometrika, 47:69–76, 1982) original formulas for confirmatory factor analysis can make factor correlation matrices asymmetric, so that the EM algorithm fails to converge. However, this problem can be overcome by using an EM algorithm in which the original formulas are replaced by those guaranteeing the symmetry of factor correlation matrices, or by the formulas used to prove the above fact.
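To illustrate the symmetry safeguard described above, here is a minimal sketch (not Adachi's actual formulas; the function name is illustrative) of re-symmetrizing the factor correlation matrix after each EM update so that rounding error cannot accumulate into asymmetry:

```python
import numpy as np

def symmetrize_correlation(phi):
    """Force a factor correlation matrix to be exactly symmetric
    with a unit diagonal, guarding against the asymmetry that
    finite-precision EM updates can introduce."""
    phi = 0.5 * (phi + phi.T)   # average out rounding asymmetry
    np.fill_diagonal(phi, 1.0)  # a factor correlates 1 with itself
    return phi

# Example: a matrix left slightly asymmetric by rounding error.
phi = np.array([[1.0, 0.3000001],
                [0.2999999, 1.0]])
phi = symmetrize_correlation(phi)
assert np.allclose(phi, phi.T)
```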

3.
EM algorithms for ML factor analysis
The details of EM algorithms for maximum likelihood factor analysis are presented for both the exploratory and confirmatory models. The algorithm is essentially the same for both cases and involves only simple least squares regression operations; the largest matrix inversion required is for a q × q symmetric matrix, where q is the number of factors. The example that is used demonstrates that the likelihood for the factor analysis model may have multiple modes that are not simply rotations of each other; such behavior should concern users of maximum likelihood factor analysis and certainly should cast doubt on the general utility of second derivatives of the log likelihood as measures of precision of estimation.
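As a rough illustration of the kind of EM cycle the abstract describes, here is a hedged sketch of one iteration for the exploratory model x = Λf + e with Σ = ΛΛ′ + Ψ. These are not Rubin and Thayer's exact formulas, and all names are illustrative; the Woodbury identity keeps the only nontrivial solve at q × q:

```python
import numpy as np

def em_step(S, L, psi):
    """One EM cycle for the exploratory factor model x = L f + e,
    Sigma = L L' + diag(psi), given sample covariance S.  The Woodbury
    identity keeps the largest solve at q x q, q = number of factors."""
    p, q = L.shape
    psi_inv = 1.0 / psi                          # diagonal inverse
    A = L * psi_inv[:, None]                     # Psi^{-1} L   (p x q)
    M = np.eye(q) + L.T @ A                      # I + L' Psi^{-1} L  (q x q)
    Sigma_inv = np.diag(psi_inv) - A @ np.linalg.solve(M, A.T)
    B = L.T @ Sigma_inv                          # regression of f on x (q x p)
    # E-step: expected complete-data sufficient statistics.
    Cxf = S @ B.T                                # E[x f']  (p x q)
    Cff = B @ S @ B.T + (np.eye(q) - B @ L)      # E[f f']  (q x q)
    # M-step: simple least squares regression updates.
    L_new = Cxf @ np.linalg.inv(Cff)
    psi_new = np.diag(S - L_new @ Cxf.T).copy()
    return L_new, psi_new
```

In practice one iterates until the observed-data log-likelihood stops changing and, given the multimodality the abstract warns about, repeats from several starting values.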

4.
In a manner similar to that used in the orthogonal case, formulas for the asymptotic standard errors of analytically rotated oblique factor loading estimates are obtained. This is done by finding expressions for the partial derivatives of an oblique rotation algorithm and using previously derived results for unrotated loadings. These include the results of Lawley for maximum likelihood factor analysis and those of Girshick for principal components analysis. Details are given in cases including direct oblimin and direct Crawford-Ferguson rotation. Numerical results for an example involving maximum likelihood estimation with direct quartimin rotation are presented. They include simultaneous tests for significant loading estimates. This research was supported in part by NIH Grant RR-3. The author is indebted to Dorothy Thayer who implemented the algorithms required for the example and to Gunnar Gruvaeus and Allen Yates for reviewing an earlier version of this paper. Special thanks are extended to Michael Browne for many conversations devoted to clarifying the thoughts of the author.

5.
Standard errors for rotated factor loadings
Beginning with the results of Girshick on the asymptotic distribution of principal component loadings and those of Lawley on the distribution of unrotated maximum likelihood factor loadings, the asymptotic distribution of the corresponding analytically rotated loadings is obtained. The principal difficulty is the fact that the transformation matrix which produces the rotation is usually itself a function of the data. The approach is to use implicit differentiation to find the partial derivatives of an arbitrary orthogonal rotation algorithm. Specific details are given for the orthomax algorithms and an example involving maximum likelihood estimation and varimax rotation is presented. This research was supported in part by NIH Grant RR-3. The authors are grateful to Dorothy T. Thayer who implemented the algorithms discussed here as well as those of Lawley and Maxwell. We are particularly indebted to Michael Browne for convincing us of the significance of this work and for helping to guide its development and to Harry H. Harman who many years ago pointed out the need for standard errors of estimate.
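The common device behind this and the preceding abstract can be summarized, in generic delta-method form (the notation here is ours, not the paper's):

\[
\operatorname{acov}\bigl(\hat\lambda_{\mathrm{rot}}\bigr) \;\approx\;
\frac{\partial \lambda_{\mathrm{rot}}}{\partial \lambda}\,
\operatorname{acov}\bigl(\hat\lambda\bigr)\,
\left(\frac{\partial \lambda_{\mathrm{rot}}}{\partial \lambda}\right)^{\!\top},
\]

where the partial derivatives of the rotated loadings with respect to the unrotated ones are obtained by implicit differentiation of the rotation algorithm's stationarity conditions.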

6.
Parameters of the two‐parameter logistic model are generally estimated via the expectation–maximization (EM) algorithm by the maximum‐likelihood (ML) method. In so doing, it is beneficial to estimate the common prior distribution of the latent ability from data. Full non‐parametric ML (FNPML) estimation allows estimation of the latent distribution with maximum flexibility, as the distribution is modelled non‐parametrically on a number of (freely moving) support points. It is generally assumed that EM estimation of the two‐parameter logistic model is not influenced by initial values, but studies on this topic are unavailable. Therefore, the present study investigates the sensitivity to initial values in FNPML estimation. In contrast to the common assumption, initial values are found to have notable influence: for a standard convergence criterion, item discrimination and difficulty parameter estimates as well as item characteristic curve (ICC) recovery were influenced by initial values. For more stringent criteria, item parameter estimates were mainly influenced by the initial latent distribution, whilst ICC recovery was unaffected. The reason for this might be a flat surface of the log‐likelihood function, which would necessitate setting a sufficiently tight convergence criterion for accurate recovery of item parameters.
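A generic way to probe the initial-value sensitivity reported above is simply to refit from many random starts and compare the converged objective values and estimates. The sketch below is illustrative only; a deliberately multimodal toy objective stands in for a flat IRT log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def neg_loglik(theta):
    # Toy multimodal surface standing in for a flat IRT log-likelihood.
    return 0.05 * theta[0]**2 + np.sin(3 * theta[0])

# Refit from several random starts; a spread of distinct solutions
# signals sensitivity to initial values (or a flat likelihood).
fits = [minimize(neg_loglik, rng.uniform(-4, 4, size=1), tol=1e-10)
        for _ in range(10)]
for f in sorted(fits, key=lambda f: f.fun)[:3]:
    print(f"theta = {f.x[0]: .4f}   objective = {f.fun: .6f}")
```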

7.
Li Cai, Psychometrika, 2010, 75(1): 33–57
A Metropolis–Hastings Robbins–Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The accuracy of the proposed algorithm is demonstrated with simulations. As an illustration, the proposed algorithm is applied to explore the factor structure underlying a new quality of life scale for children. It is shown that when the dimensionality is high, MH-RM has advantages over existing methods such as the numerical-quadrature-based EM algorithm. Extensions of the algorithm to other modeling frameworks are discussed.
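To convey the flavor of the MH-RM recipe, the following toy sketch applies it to a model simple enough that the ML answer is known in closed form. This is emphatically not Cai's implementation; all names and tuning constants are illustrative. Each cycle imputes the latent variables with a few Metropolis-Hastings steps and then applies a Robbins-Monro update with decreasing gain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent-variable model: z_i ~ N(mu, 1), y_i = z_i + e_i, e_i ~ N(0, 1).
# Marginally y ~ N(mu, 2), so the ML estimate of mu is the sample mean,
# which gives a convenient check on the stochastic algorithm.
n, mu_true = 500, 1.5
y = rng.normal(mu_true, 1.0, n) + rng.normal(0.0, 1.0, n)

mu, z = 0.0, np.zeros(n)
for k in range(1, 201):
    # 1) Imputation: a few Metropolis-Hastings steps on the latent z,
    #    targeting p(z_i | y_i, mu) elementwise.
    for _ in range(3):
        prop = z + rng.normal(0.0, 0.8, n)
        log_ratio = (-(y - prop)**2 - (prop - mu)**2
                     + (y - z)**2 + (z - mu)**2) / 2.0
        accept = np.log(rng.uniform(size=n)) < log_ratio
        z = np.where(accept, prop, z)
    # 2) Complete-data score for mu;  3) Robbins-Monro update with
    #    gain 1/k, which averages out the Monte Carlo noise.
    score = np.sum(z - mu)
    mu += (1.0 / k) * score / n

print(f"MH-RM estimate {mu:.3f}  vs  ML estimate {np.mean(y):.3f}")
```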

8.
Latent variable models with many categorical items and multiple latent constructs result in many dimensions of numerical integration, and the traditional frequentist estimation approach, such as maximum likelihood (ML), tends to fail due to model complexity. In such cases, Bayesian estimation with diffuse priors can be used as a viable alternative to ML estimation. This study compares the performance of Bayesian estimation with ML estimation in estimating single or multiple ability factors across 2 types of measurement models in the structural equation modeling framework: a multidimensional item response theory (MIRT) model and a multiple-indicator multiple-cause (MIMIC) model. A Monte Carlo simulation study demonstrates that Bayesian estimation with diffuse priors, under various conditions, produces results quite comparable with ML estimation in the single- and multilevel MIRT and MIMIC models. Additionally, an empirical example utilizing the Multistate Bar Examination is provided to compare the practical utility of the MIRT and MIMIC models. Structural relationships among the ability factors, covariates, and a binary outcome variable are investigated through the single- and multilevel measurement models. The article concludes with a summary of the relative advantages of Bayesian estimation over ML estimation in MIRT and MIMIC models and suggests strategies for implementing these methods.

9.
A probabilistic multidimensional scaling model is proposed. The model assumes that the coordinates of each stimulus are normally distributed with covariance matrix Σ_i = diag(σ_i1², …, σ_iR²). The advantage of this model is that the axes are determined uniquely. The distribution of the distance between two stimuli is obtained by a polar coordinate transformation. The method of maximum likelihood estimation for means and variances using the EM algorithm is discussed. Further, simulated annealing is suggested as a means of obtaining initial values in order to avoid local maxima. A simulation study shows that the estimates are accurate, and a numerical example concerning the location of Japanese cities shows that natural axes can be obtained without introducing individual parameters.
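The division of labor suggested above, a global stochastic search for starting values followed by local likelihood maximization, can be sketched generically with SciPy. The objective here is a multimodal stand-in, not the paper's likelihood, and all names are illustrative:

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize

# Multimodal toy objective standing in for a probabilistic-MDS
# negative log-likelihood; the global stage supplies starting values.
def objective(x):
    return 0.1 * np.sum(x**2) + np.sin(3 * x[0]) * np.cos(3 * x[1])

bounds = [(-5, 5), (-5, 5)]
coarse = dual_annealing(objective, bounds, seed=7)  # anneal for a start
fine = minimize(objective, coarse.x)                # local ML-style refinement
print(coarse.x, fine.x, fine.fun)
```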

10.
Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus dichotomous outcomes. For dichotomous outcomes, normal ML path estimates have bias that worsens as latent factor skew/kurtosis increases and does not diminish as sample size increases, whereas the mixture factor analysis model produces nearly unbiased estimators as sample sizes increase (500 and greater) and offers near nominal coverage probability. For continuous outcome variables, both methods produce factor loading estimates with minimal bias regardless of latent factor skew, but the mixture factor analysis is more efficient. The method is demonstrated using data motivated by a study on youth with cystic fibrosis examining predictors of treatment adherence. In summary, mixture factor analysis provides improvements over normal ML estimation in the presence of skewed/kurtotic latent factors, but due to variability in the estimator relating the latent factor to dichotomous outcomes and computational issues, the improvements were only fully realized, in this study, at larger sample sizes (500 and greater).

11.
A Newton-Raphson algorithm for maximum likelihood factor analysis
This paper demonstrates the feasibility of using a Newton-Raphson algorithm to solve the likelihood equations which arise in maximum likelihood factor analysis. The algorithm leads to clean easily identifiable convergence and provides a means of verifying that the solution obtained is at least a local maximum of the likelihood function. It is shown that a popular iteration algorithm is numerically unstable under conditions which are encountered in practice and that, as a result, inaccurate solutions have been presented in the literature. The key result is a computationally feasible formula for the second differential of a partially maximized form of the likelihood function. In addition to implementing the Newton-Raphson algorithm, this formula provides a means for estimating the asymptotic variances and covariances of the maximum likelihood estimators. This research was supported by the Air Force Office of Scientific Research, Grant No. AF-AFOSR-4.59-66 and by National Institutes of Health, Grant No. FR-3.
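A hedged, generic sketch of the Newton-Raphson scheme, showing the two features the abstract emphasizes, a verifiable convergence criterion and asymptotic variances from the negative inverse Hessian, might look as follows. The exponential-rate example is only a check, since its MLE is available in closed form as 1/mean(y):

```python
import numpy as np

def newton_raphson(grad, hess, theta, tol=1e-10, max_iter=50):
    """Newton-Raphson for likelihood equations.  The gradient norm gives
    a clean, verifiable convergence check, and the negative inverse
    Hessian at the solution estimates the asymptotic covariance of
    the maximum likelihood estimator."""
    for _ in range(max_iter):
        step = np.linalg.solve(hess(theta), grad(theta))
        theta = theta - step
        if np.linalg.norm(grad(theta)) < tol:
            return theta, -np.linalg.inv(hess(theta))
    raise RuntimeError("did not converge")

# Toy check: ML for an exponential rate parameter (true rate 0.5).
rng = np.random.default_rng(3)
y = rng.exponential(scale=2.0, size=1000)
n, s = len(y), y.sum()
grad = lambda t: np.array([n / t[0] - s])
hess = lambda t: np.array([[-n / t[0]**2]])
theta, acov = newton_raphson(grad, hess, np.array([0.6]))  # start near a rough estimate
print(theta[0], np.sqrt(acov[0, 0]))   # ~0.5 and its asymptotic standard error
```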

12.
Robert I. Jennrich, Psychometrika, 1986, 51(2): 277–284
It is shown that the scoring algorithm for maximum likelihood estimation in exploratory factor analysis can be developed in a way that is many times more efficient than a direct development based on information matrices and score vectors. The algorithm offers a simple alternative to current algorithms and when used in one-step mode provides the simplest and fastest method presently available for moving from consistent to efficient estimates. Perhaps of greater importance is its potential for extension to the confirmatory model. The algorithm is developed as a Gauss-Newton algorithm to facilitate its application to generalized least squares and to maximum likelihood estimation. This research was supported by NSF Grant MCS-8301587.
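The one-step idea mentioned above is, in its general form,

\[
\hat\theta_1 \;=\; \hat\theta_0 \;+\; \mathcal I\bigl(\hat\theta_0\bigr)^{-1}\, s\bigl(\hat\theta_0\bigr),
\]

where θ̂₀ is any consistent estimate, s is the score vector, and 𝓘 is the information matrix: a single scoring step taken from a consistent starting value already yields an asymptotically efficient estimator.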

13.
Several algorithms for covariance structure analysis are considered in addition to the Fletcher-Powell algorithm. These include the Gauss-Newton, Newton-Raphson, Fisher Scoring, and Fletcher-Reeves algorithms. Two methods of estimation are considered, maximum likelihood and weighted least squares. It is shown that the Gauss-Newton algorithm which in standard form produces weighted least squares estimates can, in iteratively reweighted form, produce maximum likelihood estimates as well. Previously unavailable standard error estimates to be used in conjunction with the Fletcher-Reeves algorithm are derived. Finally all the algorithms are applied to a number of maximum likelihood and weighted least squares factor analysis problems to compare the estimates and the standard errors produced. The algorithms appear to give satisfactory estimates but there are serious discrepancies in the standard errors. Because it is robust to poor starting values, converges rapidly and conveniently produces consistent standard errors for both maximum likelihood and weighted least squares problems, the Gauss-Newton algorithm represents an attractive alternative for at least some covariance structure analyses. Work by the first author has been supported in part by Grant No. Da01070 from the U. S. Public Health Service. Work by the second author has been supported in part by Grant No. MCS 77-02121 from the National Science Foundation.
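A minimal generic Gauss-Newton sketch for weighted least squares follows; this is not the authors' code, and the curve-fitting example is only a stand-in. For the iteratively reweighted ML variant the abstract mentions, the weight matrix would be recomputed from the current fitted covariance structure at each iteration:

```python
import numpy as np

def gauss_newton(resid, jac, theta, W, tol=1e-10, max_iter=100):
    """Gauss-Newton for weighted least squares: minimize r(theta)' W r(theta).
    At the solution, (J'WJ)^{-1} doubles as a covariance estimate for the
    parameters when W is the inverse error covariance, which is the
    convenience the abstract highlights."""
    for _ in range(max_iter):
        r, J = resid(theta), jac(theta)
        JtW = J.T @ W
        step = np.linalg.solve(JtW @ J, JtW @ r)
        theta = theta - step
        if np.linalg.norm(step) < tol:
            return theta, np.linalg.inv(JtW @ J)
    raise RuntimeError("did not converge")

# Toy usage: fit y = exp(-theta * t) by (unweighted) least squares.
t = np.linspace(0, 4, 30)
y = np.exp(-0.7 * t) + 0.01 * np.random.default_rng(5).normal(size=30)
resid = lambda th: y - np.exp(-th[0] * t)
jac = lambda th: (t * np.exp(-th[0] * t)).reshape(-1, 1)
theta, cov = gauss_newton(resid, jac, np.array([1.0]), np.eye(30))
print(theta[0])   # ~0.7
```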

14.
田伟, 辛涛, 康春花, 《心理科学进展》 (Advances in Psychological Science), 2014, 22(6): 1036–1046
In psychological and educational measurement, parameter estimation methods for item response theory (IRT) models are the basic tools of both theoretical research and practical application. Recently, with the continuing extension of IRT models and the inherent problems of the EM (expectation-maximization) algorithm itself, the improvement and development of parameter estimation methods has become especially important. This paper reviews the development of marginal maximum likelihood estimation in IRT models and characterizes its successive stages, namely the joint maximum likelihood stage, the deterministic latent-trait data augmentation stage, and the stochastic latent-trait data augmentation stage, with emphasis on the underlying idea of "filling in" the latent traits (data augmentation). The EM algorithm and the Metropolis-Hastings Robbins-Monro (MH-RM) algorithm, as different latent-trait data augmentation methods, both represent conceptual leaps in marginal maximum likelihood estimation. At present, data-augmentation-based parameter estimation methods continue to be developed and refined.
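The marginal maximum likelihood being discussed integrates the latent trait out of the response model,

\[
L(\theta) \;=\; \prod_{i=1}^{N} \int P\bigl(\mathbf y_i \mid z, \theta\bigr)\, \phi(z)\, dz ,
\]

and the data augmentation stages described above differ in how this integral is handled: the EM algorithm fills in the latent traits deterministically (for example, by quadrature), while MH-RM fills them in stochastically by sampling.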

15.
A Two-Tier Full-Information Item Factor Analysis Model with Applications
Li Cai, Psychometrika, 2010, 75(4): 581–612
Motivated by Gibbons et al.’s (Appl. Psychol. Meas. 31:4–19, 2007) full-information maximum marginal likelihood item bifactor analysis for polytomous data, and Rijmen, Vansteelandt, and De Boeck’s (Psychometrika 73:167–182, 2008) work on constructing computationally efficient estimation algorithms for latent variable models, a two-tier item factor analysis model is developed in this research. The modeling framework subsumes standard multidimensional IRT models, bifactor IRT models, and testlet response theory models as special cases. Features of the model lead to a reduction in the dimensionality of the latent variable space, and consequently significant computational savings. An EM algorithm for full-information maximum marginal likelihood estimation is developed. Simulations and real data demonstrations confirm the accuracy and efficiency of the proposed methods. Three real data sets from a large-scale educational assessment, a longitudinal public health survey, and a scale development study measuring patient reported quality of life outcomes are analyzed as illustrations of the model’s broad range of applicability.
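One way to display the dimension reduction the abstract refers to (the notation is ours, not the paper's): conditional on the primary, general factors θ_g, items belonging to different specific clusters are independent, so each person's marginal likelihood factorizes as

\[
L_i(\theta) \;=\; \int \Biggl[\, \prod_{s=1}^{S} \int \prod_{j \in \mathcal J_s} P\bigl(y_{ij} \mid \theta_g, \theta_s\bigr)\, \phi(\theta_s)\, d\theta_s \Biggr] \phi(\theta_g)\, d\theta_g ,
\]

so that only the low-dimensional θ_g requires joint numerical integration, with the specific factors θ_s handled one dimension at a time.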

16.
The Reduced Reparameterized Unified Model (Reduced RUM) is a diagnostic classification model for educational assessment that has received considerable attention among psychometricians. However, the computational options for researchers and practitioners who wish to use the Reduced RUM in their work, but do not feel comfortable writing their own code, are still rather limited. One option is to use a commercial software package, such as Latent GOLD or Mplus, that offers an implementation of the expectation-maximization (EM) algorithm for fitting (constrained) latent class models. But using a latent class analysis routine as a vehicle for fitting the Reduced RUM requires that it be re-expressed as a logit model, with constraints imposed on the parameters of the logistic function. This tutorial demonstrates how to implement marginal maximum likelihood estimation using the EM algorithm in Mplus for fitting the Reduced RUM.
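For orientation, the Reduced RUM item response function is commonly written as

\[
P\bigl(X_j = 1 \mid \boldsymbol\alpha\bigr) \;=\; \pi_j^{*} \prod_{k=1}^{K} \bigl(r_{jk}^{*}\bigr)^{\,q_{jk}(1-\alpha_k)} ,
\]

where π_j^* is the success probability for an examinee who has mastered all attributes that item j requires and r_{jk}^* < 1 is the penalty for lacking a required attribute k. Taking logs makes the model linear in the indicators q_{jk}(1 − α_k), which is the structure that the constrained re-expression in latent class software exploits.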

17.
The method of deriving the second derivatives of the goodness-of-fit functions of maximum likelihood and least-squares confirmatory factor analysis is discussed. The full set of second derivatives is reported. This research was supported by PHS research grant No. M-10006 from the National Institute of Mental Health, Public Health Service.
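The goodness-of-fit functions in question are conventionally written as

\[
F_{\mathrm{ML}}(\theta) \;=\; \ln\lvert \Sigma(\theta) \rvert + \operatorname{tr}\bigl[S\,\Sigma(\theta)^{-1}\bigr] - \ln\lvert S \rvert - p ,
\qquad
F_{\mathrm{LS}}(\theta) \;=\; \tfrac12 \operatorname{tr}\bigl[(S - \Sigma(\theta))^{2}\bigr],
\]

where S is the sample covariance matrix of the p observed variables and Σ(θ) is the covariance matrix implied by the factor model; the second derivatives of these functions with respect to θ are what the paper reports.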

18.
19.
Psychometric models for item-level data are broadly useful in psychology. A recurring issue for estimating item factor analysis (IFA) models is low item endorsement (item sparseness), due to limited sample sizes or extreme items such as rare symptoms or behaviors. In this paper, I demonstrate that under conditions characterized by sparseness, currently available estimation methods, including maximum likelihood (ML), are likely to fail to converge or lead to extreme estimates and low empirical power. Bayesian estimation incorporating prior information is a promising alternative to ML estimation for IFA models with item sparseness. In this article, I use a simulation study to demonstrate that Bayesian estimation incorporating general prior information improves parameter estimate stability, overall variability in estimates, and power for IFA models with sparse, categorical indicators. Importantly, the priors proposed here can be generally applied to many research contexts in psychology, and they do not impact results compared to ML when indicators are not sparse. I then apply this method to examine the relationship between suicide ideation and insomnia in a sample of first-year college students. This provides an important alternative for researchers who may need to model items with sparse endorsement.
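As a concrete illustration of the general strategy, though not the author's exact specification, here is a minimal Bayesian 2PL-style item factor model with weakly informative priors, sketched in PyMC (one possible tool; the priors and all names are illustrative):

```python
import numpy as np
import pymc as pm

# Simulated sparse binary responses: N persons, J items, low endorsement.
rng = np.random.default_rng(11)
N, J = 300, 5
theta = rng.normal(size=N)
Y = (rng.uniform(size=(N, J)) <
     1 / (1 + np.exp(-(1.2 * theta[:, None] - 2.5)))).astype(int)

with pm.Model():
    # Weakly informative priors keep loadings and intercepts away from
    # the extreme values that sparse items push ML estimates toward.
    a = pm.LogNormal("a", mu=0.0, sigma=0.5, shape=J)   # discriminations
    b = pm.Normal("b", mu=0.0, sigma=2.0, shape=J)      # intercepts
    z = pm.Normal("z", mu=0.0, sigma=1.0, shape=N)      # latent factor
    p = pm.math.invlogit(a[None, :] * z[:, None] + b[None, :])
    pm.Bernoulli("y", p=p, observed=Y)
    idata = pm.sample(1000, tune=1000, chains=2)
```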

20.
Influence analysis is an important component of data analysis, and the local influence approach has been widely applied to many statistical models to identify influential observations and assess minor model perturbations since the pioneering work of Cook (1986). The approach is often adopted to develop influence analysis procedures for factor analysis models with ranking data. However, as this well-known approach is based on the observed-data likelihood, which involves multidimensional integrals, directly applying it to develop influence analysis procedures for factor analysis models with ranking data is difficult. To address this difficulty, a Monte Carlo expectation-maximization (MCEM) algorithm is used to obtain the maximum-likelihood estimate of the model parameters, and measures for influence analysis based on the conditional expectation of the complete-data log likelihood at the E-step of the MCEM algorithm are then obtained. Very little additional computation is needed to compute the influence measures, because it is possible to make use of the by-products of the estimation procedure. Influence measures that are based on several typical perturbation schemes are discussed in detail, and the proposed method is illustrated with two real examples and an artificial example.
