Similar Articles
20 similar articles found (search time: 31 ms)
1.
Liang, Jiajuan; Bentler, Peter M. Psychometrika, 2004, 69(1), 101-122
Maximum likelihood is an important approach to analysis of two-level structural equation models. Different algorithms for this purpose have been available in the literature. In this paper, we present a new formulation of two-level structural equation models and develop an EM algorithm for fitting this formulation. This new formulation covers a variety of two-level structural equation models. As a result, the proposed EM algorithm is widely applicable in practice. A practical example illustrates the performance of the EM algorithm and the maximum likelihood statistic. We are thankful to the reviewers for their constructive comments that have led to significant improvement on the first version of this paper. Special thanks are due to the reviewer who suggested a comparison with the LISREL program in the saturated means model, and provided its setup and output. This work was supported by National Institute on Drug Abuse grants DA01070, DA00017, and a UNH 2002 Summer Faculty Fellowship.

2.
The Reduced Reparameterized Unified Model (Reduced RUM) is a diagnostic classification model for educational assessment that has received considerable attention among psychometricians. However, the computational options for researchers and practitioners who wish to use the Reduced RUM in their work, but do not feel comfortable writing their own code, are still rather limited. One option is to use a commercial software package, such as Latent GOLD or Mplus, that offers an implementation of the expectation maximization (EM) algorithm for fitting (constrained) latent class models. But using a latent class analysis routine as a vehicle for fitting the Reduced RUM requires that it be re-expressed as a logit model, with constraints imposed on the parameters of the logistic function. This tutorial demonstrates how to implement marginal maximum likelihood estimation using the EM algorithm in Mplus for fitting the Reduced RUM.

3.
The EM algorithm is a popular iterative method for estimating parameters in the latent class model where at each step the unknown parameters can be estimated simply as weighted sums of some latent proportions. The algorithm may also be used when some parameters are constrained to equal given constants or each other. It is shown that in the general case with equality constraints, the EM algorithm is not simple to apply because a nonlinear equation has to be solved. This problem arises, mainly, when equality constraints are defined over probabilities in different combinations of variables and latent classes. A simple condition is given in which, although probabilities in different variable-latent class combinations are constrained to be equal, the EM algorithm is still simple to apply. The authors are grateful to the Editor and the anonymous reviewers for their helpful comments on an earlier draft of this paper. C. C. Clogg and R. Luijkx are also acknowledged for verifying our results with their computer programs MLLSA and LCAG, respectively.
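The "weighted sums of latent proportions" structure of the unconstrained M-step can be sketched as follows. This is a minimal illustration for a binary-item latent class model, not the authors' code; the two-class setup, number of items, and all parameter values are illustrative assumptions.

```python
import numpy as np

def em_latent_class(X, n_classes=2, n_iter=200, seed=0):
    """EM for an unconstrained latent class model with binary items.

    X: (n, J) array of 0/1 responses. Returns class proportions pi
    (n_classes,) and conditional response probabilities theta
    (n_classes, J). Without equality constraints, each M-step update
    is just a weighted sum of posterior class memberships.
    """
    rng = np.random.default_rng(seed)
    n, J = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    theta = rng.uniform(0.25, 0.75, size=(n_classes, J))
    for _ in range(n_iter):
        # E-step: posterior class membership for each respondent
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)   # for stability
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)           # (n, n_classes)
        # M-step: weighted sums of the latent proportions
        Nk = post.sum(axis=0)
        pi = Nk / n
        theta = (post.T @ X) / Nk[:, None]
        theta = np.clip(theta, 1e-6, 1 - 1e-6)
    return pi, theta
```

With equality constraints across variable-latent class combinations, the theta update above would no longer be closed form, which is exactly the complication the abstract describes.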

4.
5.
EM algorithms for ML factor analysis
The details of EM algorithms for maximum likelihood factor analysis are presented for both the exploratory and confirmatory models. The algorithm is essentially the same for both cases and involves only simple least squares regression operations; the largest matrix inversion required is for a q × q symmetric matrix, where q is the number of factors. The example that is used demonstrates that the likelihood for the factor analysis model may have multiple modes that are not simply rotations of each other; such behavior should concern users of maximum likelihood factor analysis and certainly should cast doubt on the general utility of second derivatives of the log likelihood as measures of precision of estimation.
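The EM updates for the exploratory model can be sketched as below. This is a generic textbook formulation (the Woodbury identity keeps the only inversions at q × q, matching the abstract's claim), not the paper's own code; the one-factor test covariance is constructed purely for illustration.

```python
import numpy as np

def em_factor_analysis(S, q=1, n_iter=2000, seed=0):
    """EM for the ML factor analysis model Sigma = L L' + diag(Psi).

    Operates on a sample covariance matrix S. Via the Woodbury
    identity, every matrix inverted is q x q, where q is the
    number of factors.
    """
    rng = np.random.default_rng(seed)
    p = S.shape[0]
    L = 0.1 * rng.standard_normal((p, q))
    Psi = np.diag(S).copy()
    for _ in range(n_iter):
        # E-step: B = L' Sigma^{-1}, computed with only a q x q solve
        PsiInvL = L / Psi[:, None]
        M = np.eye(q) + L.T @ PsiInvL
        B = np.linalg.solve(M, PsiInvL.T)             # (q, p)
        Ezz = np.eye(q) - B @ L + B @ S @ B.T         # E[zz'] under current fit
        # M-step: least-squares-style regression updates
        L = S @ B.T @ np.linalg.inv(Ezz)
        Psi = np.maximum(np.diag(S - L @ (B @ S)), 1e-8)
    return L, Psi
```

Because the likelihood can be multimodal, as the abstract warns, different seeds for the starting loadings may reach different solutions.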

6.
This paper demonstrates the feasibility of using the penalty function method to estimate parameters that are subject to a set of functional constraints in covariance structure analysis. Both inequality and equality constraints are studied. The approaches of maximum likelihood and generalized least squares estimation are considered. A modified Scoring algorithm and a modified Gauss-Newton algorithm are implemented to produce the appropriate constrained estimates. The methodology is illustrated by its applications to Heywood cases in confirmatory factor analysis, the quasi-Wiener simplex model, and multitrait-multimethod matrix analysis. The author is indebted to several anonymous reviewers for creative suggestions for improvement of this paper. Computer funding is provided by the Computer Services Centre, The Chinese University of Hong Kong.

7.
Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by interpreting the EM algorithm as alternating minimization of the Kullback–Leibler divergence between two convex sets. It is shown that these conditions are satisfied by an unconstrained latent class model, yielding an optimal bound against which more highly constrained models may be compared.

8.
Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
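The LS baseline that the abstract compares against, MMR with a product term, can be sketched as follows. This is a minimal illustration of the standard MMR setup, not the paper's two-level NML method or its R package; variable names and the simulated coefficients are assumptions.

```python
import numpy as np

def fit_mmr(y, x, z):
    """Moderated multiple regression by least squares:
        y = b0 + b1*x + b2*z + b3*(x*z) + e.
    A nonzero b3 indicates that z moderates the effect of x on y.
    Returns the coefficient vector (b0, b1, b2, b3).
    """
    X = np.column_stack([np.ones_like(x), x, z, x * z])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

In the two-level formulation, b3 arises instead from regressing the coefficient of x on the moderator z, which is what lets the model decompose each coefficient's variance.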

9.
10.
A constrained generalized maximum likelihood routine for fitting psychometric functions is proposed, which determines optimum values for the complete parameter set, that is, threshold and slope, as well as for guessing and lapsing probability. The constraints are realized by Bayesian prior distributions for each of these parameters. The fit itself results from maximizing the posterior distribution of the parameter values by a multidimensional simplex method. We present results from extensive Monte Carlo simulations by which we can approximate bias and variability of the estimated parameters of simulated psychometric functions. Furthermore, we have tested the routine with data gathered in real psychophysical experiments.
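The approach can be sketched as below: maximize a posterior built from a binomial likelihood plus priors on guessing and lapse rates, using a simplex search. This is an illustrative stand-in, not the authors' routine; the logistic shape, the Beta(1.5, 12) priors, and scipy's Nelder-Mead implementation of the simplex method are all assumed choices.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(params, x, k, n):
    """Negative log posterior for a logistic psychometric function with
    threshold a, slope b, guessing rate g, and lapse rate l."""
    a, b, g, l = params
    if b <= 0 or not (0 <= g < 0.5) or not (0 <= l < 0.5):
        return np.inf                       # hard constraints
    p = g + (1 - g - l) / (1 + np.exp(-(x - a) / b))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    loglik = np.sum(k * np.log(p) + (n - k) * np.log(1 - p))
    # Beta(1.5, 12) priors keep g and l small but nonzero (assumed priors)
    log_prior = (0.5 * np.log(g + 1e-9) + 11 * np.log(1 - g)
                 + 0.5 * np.log(l + 1e-9) + 11 * np.log(1 - l))
    return -(loglik + log_prior)

def fit_psychometric(x, k, n):
    """MAP fit via the Nelder-Mead multidimensional simplex method."""
    x0 = np.array([np.median(x), 1.0, 0.05, 0.02])
    res = minimize(neg_log_posterior, x0, args=(x, k, n),
                   method="Nelder-Mead")
    return res.x
```

Here x holds stimulus levels, k the number of correct responses at each level, and n the trials per level; the returned vector is (threshold, slope, guess, lapse).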

11.
A maximum likelihood approach is described for estimating the validity of a test (x) as a predictor of a criterion variable (y) when both missing and censored y scores are present in the data set. The missing data are due to selection on a latent variable (y_s) which may be conditionally related to y given x. Thus, the missing data may not be missing at random. The censoring process is due to the presence of a floor or ceiling effect. The maximum likelihood estimates are constructed using the EM algorithm. The entire analysis is demonstrated in terms of hypothetical data sets.

12.
In many types of statistical modeling, inequality constraints are imposed between the parameters of interest. As we will show in this paper, the DIC (i.e., posterior Deviance Information Criterion as proposed as a Bayesian model selection tool by Spiegelhalter, Best, Carlin, & Van Der Linde, 2002) fails when comparing inequality constrained hypotheses. In this paper, we will derive the prior DIC and show that it also fails when comparing inequality constrained hypotheses. However, it will be shown that a modification of the prior predictive loss function that is minimized by the prior DIC renders a criterion that does have the properties needed in order to be able to compare inequality constrained hypotheses. This new criterion will be called the Prior Information Criterion (PIC) and will be illustrated and evaluated using simulated data and examples. The PIC has a close connection with the marginal likelihood in combination with the encompassing prior approach and both methods will be compared. All in all, the main message of the current paper is: (1) do not use the classical DIC when evaluating inequality constrained hypotheses, better use the PIC; and (2) the PIC is considered a proper model selection tool in the context of evaluating inequality constrained hypotheses.

13.
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
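The M-estimation idea with Huber-type weights can be sketched for a single-level regression as follows. This is a standard iteratively reweighted least squares (IRLS) sketch, not the authors' two-level algorithm or their R program; the tuning constant c = 1.345 and the MAD-based scale are conventional assumed choices.

```python
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """Huber M-estimation of regression coefficients via IRLS.

    Observations with large standardized residuals receive weight
    c / |r/scale| instead of 1, bounding the influence of outliers.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # LS start
    for _ in range(n_iter):
        r = y - X @ beta
        # robust scale estimate from the median absolute deviation
        scale = max(np.median(np.abs(r - np.median(r))) / 0.6745, 1e-12)
        u = np.abs(r) / scale
        w = np.where(u <= c, 1.0, c / u)              # Huber-type weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)    # weighted normal equations
    return beta
```

Under heavy-tailed errors or gross outliers, this estimator stays near the bulk of the data where plain least squares is pulled away, which mirrors the abstract's simulation findings.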

14.
Ye Baojuan (叶宝娟); Wen Zhonglin (温忠粦). 心理科学 (Psychological Science), 2013, 36(3), 728-733
In research areas such as psychology, education, and management, two-level (hierarchical) data structures are common: for example, students are nested within classes and employees within firms. In two-level studies the participants are generally not independent, so directly applying a single-level reliability formula overestimates test reliability. The literature has already discussed how to estimate the reliability of a unidimensional test more accurately in two-level studies. This study points out shortcomings of the existing estimation formulas, derives a new reliability formula from two-level confirmatory factor analysis, demonstrates the calculation with an example, and provides a simple computation program.

15.
The standard tobit or censored regression model is typically utilized for regression analysis when the dependent variable is censored. This model is generalized by developing a conditional mixture, maximum likelihood method for latent class censored regression. The proposed method simultaneously estimates separate regression functions and subject membership in K latent classes or groups given a censored dependent variable for a cross-section of subjects. Maximum likelihood estimates are obtained using an EM algorithm. The proposed method is illustrated via a consumer psychology application.

16.
Xu, Liqun. Psychometrika, 2000, 65(2), 217-231
In this paper, we propose an (n-1)^2 parameter, multistage ranking model, which represents a generalization of Luce's model. We propose the n × n item-rank relative frequency matrix (p-matrix) as a device for summarizing a set of rankings. As an alternative to the traditional maximum likelihood estimation, for the proposed model we suggest a method which estimates the parameters from the p-matrix. An illustrative numerical example is given. The proposed model and its differences from Luce's model are briefly discussed. We also show some special p-matrix patterns possessed by the Thurstonian models and distance-based models.
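The p-matrix itself is straightforward to compute from a set of rankings. The sketch below assumes rankings are given as lists where position r holds the item placed at rank r (0-based); this encoding is an assumption for illustration, not the paper's notation.

```python
import numpy as np

def p_matrix(rankings, n_items):
    """Item-rank relative frequency matrix.

    Entry (i, r) is the proportion of rankings in which item i
    receives rank r. The result is doubly stochastic: every row
    and every column sums to 1.
    """
    P = np.zeros((n_items, n_items))
    for ranking in rankings:            # ranking[r] = item placed at rank r
        for r, item in enumerate(ranking):
            P[item, r] += 1
    return P / len(rankings)
```

Parameter estimation from the p-matrix, as the abstract proposes, then operates on this summary rather than on the raw rankings.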

17.
In the applications of maximum likelihood factor analysis the occurrence of boundary minima instead of proper minima is no exception at all. In the past the causes of such improper solutions could not be detected. This was impossible because the matrices containing the parameters of the factor analysis model were kept positive definite. By dropping these constraints, it becomes possible to distinguish between the different causes of improper solutions. In this paper some of the most important causes are discussed and illustrated by means of artificial and empirical data. The author is indebted to H. J. Prins for stimulating and encouraging discussions.

18.
Rubin and Thayer recently presented equations to implement maximum likelihood (ML) estimation in factor analysis via the EM algorithm. They present an example to demonstrate the efficacy of the algorithm, and propose that their recovery of multiple local maxima of the ML function “certainly should cast doubt on the general utility of second derivatives of the log likelihood as measures of precision of estimation.” It is shown here, in contrast, that these second derivatives verify that Rubin and Thayer did not find multiple local maxima as claimed. The only known maximum remains the one found by Jöreskog over a decade earlier. The standard errors obtained from the second derivatives and the Fisher information matrix thus remain appropriate where ML assumptions are met. The advantages of the EM algorithm over other algorithms for ML factor analysis remain to be demonstrated.

19.
Parameters of the two-parameter logistic model are generally estimated via the expectation-maximization (EM) algorithm by the maximum-likelihood (ML) method. In so doing, it is beneficial to estimate the common prior distribution of the latent ability from data. Full non-parametric ML (FNPML) estimation allows estimation of the latent distribution with maximum flexibility, as the distribution is modelled non-parametrically on a number of (freely moving) support points. It is generally assumed that EM estimation of the two-parameter logistic model is not influenced by initial values, but studies on this topic are unavailable. Therefore, the present study investigates the sensitivity to initial values in FNPML estimation. In contrast to the common assumption, initial values are found to have notable influence: for a standard convergence criterion, item discrimination and difficulty parameter estimates as well as item characteristic curve (ICC) recovery were influenced by initial values. For more stringent criteria, item parameter estimates were mainly influenced by the initial latent distribution, whilst ICC recovery was unaffected. The reason for this might be a flat surface of the log-likelihood function, which would necessitate setting a sufficiently tight convergence criterion for accurate recovery of item parameters.

20.
Cognitive diagnosis models (CDMs) for educational assessment are constrained latent class models. Examinees are assigned to classes of intellectual proficiency defined in terms of cognitive skills called attributes, which an examinee may or may not have mastered. The Reduced Reparameterized Unified Model (Reduced RUM) has received considerable attention among psychometricians. Markov Chain Monte Carlo (MCMC) or Expectation Maximization (EM) are typically used for estimating the Reduced RUM. Commercial implementations of the EM algorithm are available in the latent class analysis (LCA) routines of Latent GOLD and Mplus, for example. Fitting the Reduced RUM with an LCA routine requires that it be reparameterized as a logit model, with constraints imposed on the parameters. For models involving two attributes, these have been worked out. However, for models involving more than two attributes, the parameterization and the constraints are nontrivial and currently unknown. In this article, the general parameterization of the Reduced RUM as a logit model involving any number of attributes and the associated parameter constraints are derived. As a practical illustration, the LCA routine in Mplus is used for fitting the Reduced RUM to two synthetic data sets and to a real-world data set; for comparison, the results obtained by using the MCMC implementation in OpenBUGS are also provided.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号