Similar articles
20 similar articles retrieved (search time: 31 ms)
1.
An implementation of the Gauss-Newton algorithm for the analysis of covariance structures that is specifically adapted for high-level computer languages is reviewed. With this procedure one need only describe the structural form of the population covariance matrix, and provide a sample covariance matrix and initial values for the parameters. The gradient and approximate Hessian, which vary from model to model, are computed numerically. Using this approach, the entire method can be operationalized in a comparatively small program. A large class of models can be estimated, including many that utilize functional relationships among the parameters that are not possible in most available computer programs. Some examples are provided to illustrate how the algorithm can be used.

We are grateful to M. W. Browne and S. H. C. du Toit for many invaluable discussions about these computing ideas. Thanks also to Scott Chaiken for providing the data in the first example. They were collected as part of the U.S. Air Force's Learning Ability Measurement Project (LAMP), sponsored by the Air Force Office of Scientific Research (AFOSR) and the Human Resource Directorate of the Armstrong Laboratory (AL/HRM).
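The numerical-derivative approach the abstract describes can be sketched in a few lines: only the model-implied covariance function changes from model to model, while the Jacobian is obtained by finite differences. The one-factor model, starting values, and step sizes below are illustrative assumptions, not the authors' program:

```python
import numpy as np

def implied_cov(theta):
    # One-factor model: Sigma(theta) = lambda lambda' + diag(psi)
    lam, psi = theta[:3], theta[3:]
    return np.outer(lam, lam) + np.diag(psi)

def vech(M):
    # Half-vectorize: lower triangle including the diagonal
    i, j = np.tril_indices(M.shape[0])
    return M[i, j]

def gauss_newton(S, theta0, tol=1e-9, max_iter=100, h=1e-6):
    s = vech(S)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        r = s - vech(implied_cov(theta))            # residual vector
        # Numerical Jacobian of vech(Sigma(theta)) via forward differences
        J = np.empty((len(s), len(theta)))
        for k in range(len(theta)):
            tp = theta.copy()
            tp[k] += h
            J[:, k] = (vech(implied_cov(tp)) - vech(implied_cov(theta))) / h
        step = np.linalg.solve(J.T @ J, J.T @ r)    # Gauss-Newton update
        theta = theta + step
        if np.max(np.abs(step)) < tol:
            break
    return theta

# Just-identified example: S generated from known parameters
true_theta = np.array([0.8, 0.7, 0.6, 0.36, 0.51, 0.64])
S = implied_cov(true_theta)
est = gauss_newton(S, [0.7, 0.7, 0.7, 0.4, 0.4, 0.4])
print(np.round(est, 4))
```

Because the example is just-identified, the algorithm reproduces the sample covariance matrix exactly at convergence.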

2.
This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large.
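As a rough illustration of one covariance-pattern test mentioned above (compound symmetry), here is a direct likelihood-ratio version in Python rather than the MSTRUCT/PROC CALIS syntax the paper uses. The closed-form MLEs (average diagonal and average off-diagonal of the ML sample covariance) and the simulated data are assumptions of this sketch:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Simulate data whose true covariance is compound symmetric
p, n = 4, 500
sigma2, rho = 2.0, 0.3
Sigma = sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

S = np.cov(X, rowvar=False, bias=True)   # ML covariance estimate

# Closed-form MLEs under compound symmetry:
# common variance = average diagonal, common covariance = average off-diagonal
s2 = S.diagonal().mean()
c = (S.sum() - np.trace(S)) / (p * (p - 1))
Sigma0 = (s2 - c) * np.eye(p) + c * np.ones((p, p))

# Likelihood-ratio statistic; tr(Sigma0^{-1} S) = p cancels in this case
T = n * (np.linalg.slogdet(Sigma0)[1] - np.linalg.slogdet(S)[1])
df = p * (p + 1) // 2 - 2                 # free elements minus 2 CS parameters
pval = chi2.sf(T, df)
print(round(T, 2), df, round(pval, 3))
```

A small p-value would reject the compound symmetry pattern; here the pattern holds in the population, so the statistic is typically unremarkable.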

3.
Sample size and Bentler and Bonett's nonnormed fit index
Bentler and Bonett's nonnormed fit index is a widely used measure of goodness of fit for the analysis of covariance structures. This note shows that, contrary to what has been claimed, the nonnormed fit index is dependent on sample size. Specifically, for a constant value of a fitting function, the nonnormed index is inversely related to sample size. A simple alternative fit measure is proposed that removes this dependency. In addition, it is shown that this new measure, as well as the old nonnormed fit index, can be applied to any fitting function that measures the deviation of the observed covariance matrix from the covariance matrix implied by the parameter estimates for a model.
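The dependence the note demonstrates can be seen numerically. Writing each chi-square statistic as (N − 1) times its fitting-function value, the nonnormed fit index varies with N even when the fitting-function values are held constant. The F-values and degrees of freedom below are arbitrary illustrative choices:

```python
def nnfi(F0, df0, Fk, dfk, N):
    # Chi-square statistics from constant fitting-function values
    chi0, chik = (N - 1) * F0, (N - 1) * Fk
    # Nonnormed fit index (Tucker-Lewis index)
    return (chi0 / df0 - chik / dfk) / (chi0 / df0 - 1)

# Same fitting-function values, different sample sizes
for N in (100, 400, 1600):
    print(N, round(nnfi(0.5, 10, 0.05, 5, N), 4))
```

Holding F constant, the index decreases as N grows, which is the inverse relationship the note describes.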

4.
Structural analysis of covariance and correlation matrices
A general approach to the analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed. Several different types of covariance structures are considered as special cases of the general model. These include models for sets of congeneric tests, models for confirmatory and exploratory factor analysis, models for estimation of variance and covariance components, regression models with measurement errors, path analysis models, simplex and circumplex models. Many of the different types of covariance structures are illustrated by means of real data.

1978 Psychometric Society Presidential Address. This research has been supported by the Bank of Sweden Tercentenary Foundation under the project entitled Structural Equation Models in the Social Sciences, Karl G. Jöreskog, project director.

5.
The structure of the covariance matrix of sample covariances under the class of linear latent variate models is derived using properties of cumulants. This is employed to provide a general framework for robustness of statistical inference in the analysis of covariance structures arising from linear latent variate models. Conditions for normal theory estimators and test statistics to retain each of their usual asymptotic properties under non-normality of latent variates are given. Factor analysis, LISREL and other models are discussed as examples.

6.
The relationship between the latent growth curve and repeated measures ANOVA models is often misunderstood. Although a number of investigators have looked into the similarities and differences among these models, a cursory reading of the literature can give the impression that they are very different models. Here we show that each model represents a set of contrasts on the occasion means. We demonstrate that the fixed effects parameters of the estimated basis vector latent growth curve model are merely a transformation of the repeated measures ANOVA fixed effects parameters. We further show that differences in fit between models that estimate the same mean structure can be due to the different error covariance structures implied by the models. We show these relationships both algebraically and with data from a simulation.

7.
We study several aspects of bootstrap inference for covariance structure models based on three test statistics, including Type I error, power and sample-size determination. Specifically, we discuss conditions for a test statistic to achieve a more accurate level of Type I error, both in theory and in practice. Details on power analysis and sample-size determination are given. For data sets with heavy tails, we propose applying a bootstrap methodology to a transformed sample by a downweighting procedure. One of the key conditions for safe bootstrap inference is generally satisfied by the transformed sample but may not be satisfied by the original sample with heavy tails. Several data sets illustrate that, by combining downweighting and bootstrapping, a researcher may find a nearly optimal procedure for evaluating various aspects of covariance structure models. A rule for handling non-convergence problems in bootstrap replications is proposed.
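The combination of downweighting and bootstrapping can be sketched as follows. The Huber-type weight function, its tuning constant, and the bootstrapped statistic (a simple correlation with a percentile interval) are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def downweight(X, r=2.5):
    # Huber-type reweighting: shrink cases with large Mahalanobis distance
    mu = X.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.einsum('ij,jk,ik->i', X - mu, Sinv, X - mu))
    w = np.where(d <= r, 1.0, r / d)       # per-case weight in (0, 1]
    return mu + (X - mu) * w[:, None]

# Heavy-tailed sample: multivariate t built from normal / sqrt(chi2/df)
n, p, df = 200, 3, 3
Z = rng.standard_normal((n, p))
X = Z / np.sqrt(rng.chisquare(df, n) / df)[:, None]

Xt = downweight(X)

# Bootstrap percentile CI for the first correlation on the transformed sample
B = 1000
stats = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)            # resample cases with replacement
    stats[b] = np.corrcoef(Xt[idx], rowvar=False)[0, 1]
lo, hi = np.percentile(stats, [2.5, 97.5])
print(round(lo, 3), round(hi, 3))
```

Downweighting tames the tails so that the moments the bootstrap relies on behave, and the bootstrap then supplies the sampling distribution without normality assumptions.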

8.
This article reviews Newton procedures for the analysis of mean and covariance structures that may be functions of parameters that enter a model nonlinearly. The kind of model considered is a mixed-effects model that is conditionally linear with regard to its parameters. This means parameters entering the model nonlinearly must be fixed, whereas those entering linearly may vary across individuals. This framework encompasses several models, including hierarchical linear models, linear and nonlinear factor analysis models, and nonlinear latent curve models. A full maximum-likelihood estimation procedure is described. Mx, a statistical software package often used to estimate structural equation models, is considered for estimation of these models. An example with Mx syntax is provided.

9.
The problem of dependence of the nonnormed fit index on sample size in covariance structure analysis is discussed. Contrary to Bollen (1986), we show that the mean of the nonnormed fit index is independent of sample size for true and almost-true models, whereas Bollen's alternative index does depend on sample size.

10.
Methods of covariance structure modeling are frequently applied in psychological research. These methods merge the logic of confirmatory factor analysis, multiple regression, and path analysis within a single data analytic framework. Among the many applications are estimation of disattenuated correlation and regression coefficients, evaluation of multitrait-multimethod matrices, and assessment of hypothesized causal structures. Shortcomings of these methods are commonly acknowledged in the mathematical literature and in textbooks. Nevertheless, serious flaws remain in many published applications. For example, it is rarely noted that the fit of a favored model is identical for a potentially large number of equivalent models. A review of the personality and social psychology literature illustrates the nature of this and other problems in reported applications of covariance structure models.

11.
It is well-known that the representations of the Thurstonian Case III and Case V models for paired comparison data are not unique. Similarly, when analyzing ranking data, other equivalent covariance structures can substitute for those given by Thurstone in these cases. That is, we may more broadly define the family of covariance structures satisfying Case III and Case V conditions. This paper introduces the notion of equivalence classes which defines a more meaningful partition of the covariance structures of the Thurstonian ranking models. In addition, the equivalence classes of Case V and Case III are completely characterized.

12.
Structural equation modeling is a well-known technique for studying relationships among multivariate data. In practice, high dimensional nonnormal data with small to medium sample sizes are very common, and large sample theory, on which almost all modeling statistics are based, cannot be invoked for model evaluation with test statistics. The most natural method for nonnormal data, the asymptotically distribution-free procedure, is not defined when the sample size is less than the number of nonduplicated elements in the sample covariance. Since normal theory maximum likelihood estimation remains defined for intermediate to small sample sizes, it may be invoked, but with the probable consequence of distorted performance in model evaluation. This article studies the small-sample behavior of several test statistics that are based on the maximum likelihood estimator but are designed to perform better with nonnormal data. We aim to identify statistics that work reasonably well for a range of small sample sizes and distribution conditions. Monte Carlo results indicate that Yuan and Bentler's recently proposed F-statistic performs satisfactorily.

13.
Bayesian estimation and testing of structural equation models
The Gibbs sampler can be used to obtain samples of arbitrary size from the posterior distribution over the parameters of a structural equation model (SEM) given covariance data and a prior distribution over the parameters. Point estimates, standard deviations and interval estimates for the parameters can be computed from these samples. If the prior distribution over the parameters is uninformative, the posterior is proportional to the likelihood, and asymptotically the inferences based on the Gibbs sample are the same as those based on the maximum likelihood solution, for example, output from LISREL or EQS. In small samples, however, the likelihood surface is not Gaussian and in some cases contains local maxima. Nevertheless, the Gibbs sample comes from the correct posterior distribution over the parameters regardless of the sample size and the shape of the likelihood surface. With an informative prior distribution over the parameters, the posterior can be used to make inferences about the parameters of underidentified models, as we illustrate on a simple errors-in-variables model.

We thank David Spiegelhalter for suggesting to the first author, at a 1994 workshop in Wiesbaden, that the Gibbs sampler be applied to structural equation models. We thank Ulf Böckenholt, Chris Meek, Marijtje van Duijn, Clark Glymour, Ivo Molenaar, Steve Klepper, Thomas Richardson, Teddy Seidenfeld, and Tom Snijders for helpful discussions, mathematical advice, and critiques of earlier drafts of this paper.
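The mechanics of Gibbs sampling, drawing each parameter in turn from its full conditional, can be illustrated on a deliberately simple conjugate model (a normal mean and variance) rather than the SEM sampler the paper develops; the priors and simulated data below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

# Data from N(mu = 5, sigma = 2)
y = rng.normal(5.0, 2.0, size=200)
n, ybar = len(y), y.mean()

# Priors: mu ~ N(0, 100), sigma^2 ~ Inverse-Gamma(2, 2)
mu0, tau2 = 0.0, 100.0
a0, b0 = 2.0, 2.0

draws, burn = 5000, 1000
mu, sig2 = 0.0, 1.0
mus, sig2s = np.empty(draws), np.empty(draws)
for t in range(draws):
    # mu | sigma^2, y: conjugate normal update
    prec = 1 / tau2 + n / sig2
    mean = (mu0 / tau2 + n * ybar / sig2) / prec
    mu = rng.normal(mean, np.sqrt(1 / prec))
    # sigma^2 | mu, y: conjugate inverse-gamma update
    a = a0 + n / 2
    b = b0 + 0.5 * np.sum((y - mu) ** 2)
    sig2 = 1 / rng.gamma(a, 1 / b)        # inverse-gamma via reciprocal gamma
    mus[t], sig2s[t] = mu, sig2

print(round(mus[burn:].mean(), 2), round(np.sqrt(sig2s[burn:].mean()), 2))
```

With a nearly flat prior and this sample size, the posterior mean of mu sits close to the sample mean, mirroring the asymptotic agreement with maximum likelihood noted in the abstract.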

14.
Nonlinear mixed-effects (NLME) models remain popular among practitioners for analyzing continuous repeated measures data taken on each of a number of individuals when interest centers on characterizing individual-specific change. Within this framework, variation and correlation among the repeated measurements may be partitioned into interindividual variation and intraindividual variation components. The covariance structure of the residuals is, in many applications, constrained to be independent with homogeneous variances, $\sigma^2\mathbf{I}_{n_i}$, not because it is believed that intraindividual variation adheres to this structure, but because many software programs that estimate the parameters of such models are not well equipped to handle other, possibly more realistic, patterns. In this article, we describe how the programmatic environment within SAS may be utilized to model residual structures for serial correlation and variance heterogeneity. An empirical example is used to illustrate the capabilities of the module.
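As one example of the "possibly more realistic" residual patterns mentioned, a first-order autoregressive covariance matrix for serial correlation can be constructed directly; the function name and parameter values here are illustrative, not tied to the SAS module the article describes:

```python
import numpy as np

def ar1_cov(sigma2, rho, n):
    # AR(1) residual covariance: sigma^2 * rho^|j - k| for occasions j, k
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

R = ar1_cov(1.5, 0.6, 4)
print(np.round(R, 3))
```

Replacing $\sigma^2\mathbf{I}_{n_i}$ with such a matrix lets neighboring occasions correlate more strongly than distant ones; heterogeneous variances can be layered on by pre- and post-multiplying the correlation part by a diagonal matrix of standard deviations.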

15.
Since the early years of psychological research, investigators in psychology have made use of mathematical models of psychological phenomena. Models are now routinely used to represent and study cognitive processes, the structure of psychological measurements, the structure of correlational relationships among variables, the nature of change over time, and many other topics and phenomena of interest. All of these models, in their attempt to provide a parsimonious representation of psychological phenomena, are wrong to some degree and are thus implausible if taken literally. Such models simply cannot fully represent the complexities of the phenomena of interest and at best provide an approximation of the real world. This imperfection has implications for how we specify, estimate, and evaluate models, and how we interpret results of fitting models to data. Using factor analysis and structural equation models as a context, I examine some implications of model imperfection for our use of models, focusing on formal specification of models; the nature of parameters and parameter estimates; the relevance of discrepancy functions; the issue of sample size; the evaluation, development, and selection of models; and the conduct of simulation studies. The overall perspective is that our use and study of models should be guided by an understanding that our models are imperfect and cannot be made to be exactly correct.

16.
It is well-known that the representations of the Thurstonian models for difference judgment data are not unique. It has been shown that equivalence classes can be formed to provide a more meaningful partition of the covariance structures of the Thurstonian ranking models. In this paper, we examine the equivalence relations between Thurstonian covariance structure models for paired comparison data obtained under multiple judgment and discuss their implications on the general identification constraints and methods to check for parameter identifiability in restricted models.

The author is indebted to Ulf Böckenholt and Albert Maydeu-Olivares for their significant comments and suggestions which led to considerable improvement in this article.

17.
Covariance structure analysis is a statistical technique in which a theoretical model, or a covariance structure, is constructed, and the covariances predicted by the theoretical model are compared with those of the observed data. The adequacy of the model in reproducing the sample covariances is reflected by estimates of the parameters of the model and measures indicating the goodness of fit. Covariance structure analysis is frequently used for analyzing data obtained in nonexperimental or quasiexperimental research, but is seldom employed in experimental research. In this paper, the applicability of this technique in experimental research is discussed and illustrated by covariance structure analysis studies in which two models for word translation—the symmetrical model and the asymmetrical model—are described, refined, and contrasted.

18.
Multilevel autoregressive models are especially suited for modeling between-person differences in within-person processes. Fitting these models with Bayesian techniques requires the specification of prior distributions for all parameters. Often it is desirable to specify prior distributions that have negligible effects on the resulting parameter estimates. However, the conjugate prior distribution for covariance matrices—the Inverse-Wishart distribution—tends to be informative when variances are close to zero. This is problematic for multilevel autoregressive models because autoregressive parameters are usually small for each individual, so that the variance of these parameters will be small. We performed a simulation study to compare the performance of three Inverse-Wishart prior specifications suggested in the literature when one or more variances of the random effects in the multilevel autoregressive model are small. Our results show that the prior specification that uses plug-in ML estimates of the variances performs best. We advise always including a sensitivity analysis for the prior specification of covariance matrices of random parameters, especially in autoregressive models, and including a data-based prior specification in this analysis. We illustrate such an analysis by means of an empirical application to repeated measures data on worrying and positive affect.

19.
This article considers the problem of power and sample size calculations for normal outcomes within the framework of multivariate linear models. The emphasis is placed on the practical situation in which the values of the response variables for each subject become available only after the observations are made, and the levels of the explanatory variables cannot be predetermined before data collection. Using analytic justification, it is shown that the proposed methods extend the existing approaches to accommodate the extra variability and arbitrary configurations of the explanatory variables. The major modification involves the noncentrality parameters associated with the F approximations to the transformations of the Wilks likelihood ratio, Pillai trace and Hotelling-Lawley trace statistics. A treatment of multivariate analysis of covariance models is employed to demonstrate the distinct features of the proposed extension. Monte Carlo simulation studies are conducted to assess the accuracy using a child's intellectual development model. The results update and expand upon current work in the literature.

The author wishes to thank the associate editor and the referees for comments which improve the paper considerably. This research was partially supported by a grant from the National Science Council of Taiwan.
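Power computations of the kind described reduce to evaluating a noncentral F distribution at the critical value of the corresponding central F. This is only the generic final step, not the article's noncentrality derivation; the degrees of freedom and noncentrality values below are arbitrary illustrations:

```python
from scipy.stats import f, ncf

def f_power(alpha, dfn, dfd, ncp):
    # Power of an F test: P(F' > critical value) under the noncentral F
    crit = f.ppf(1 - alpha, dfn, dfd)
    return ncf.sf(crit, dfn, dfd, ncp)

# Power grows with the noncentrality parameter
for ncp in (5, 10, 20):
    print(ncp, round(f_power(0.05, 3, 96, ncp), 3))
```

Sample size planning then amounts to increasing N (which enters through the noncentrality parameter and the denominator degrees of freedom) until the computed power reaches the target.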

20.
When designing a study that uses structural equation modeling (SEM), an important task is to decide on an appropriate sample size. Historically, this task has been approached from the power-analytic perspective, where the goal is to obtain sufficient power to reject a false null hypothesis. However, hypothesis testing tells only whether a population effect is zero and fails to address the question of the population effect size. Moreover, significance tests in the SEM context often reject the null hypothesis too easily, so the problem in practice is often having too much power rather than not enough.

An alternative means to infer the population effect is forming confidence intervals (CIs). A CI is more informative than hypothesis testing because it provides a range of plausible values for the population effect size of interest. Given the close relationship between CI width and sample size, the sample size for an SEM study can be planned with the goal of obtaining sufficiently narrow CIs for the population model parameters of interest.

Latent curve models (LCMs) are an application of SEM with a mean structure to the study of change over time. The sample size planning method for LCMs from the CI perspective is based on maximum likelihood and the expected information matrix. Given a sample, forming a CI for a model parameter of interest in an LCM requires the sample covariance matrix S, the sample mean vector x̄, and the sample size N. The width (ω) of the resulting CI can therefore be considered a function of S, x̄, and N. Inverting the CI formation process gives the sample size planning process: the inverted process requires a proxy for the population covariance matrix Σ, the population mean vector μ, and the desired width ω as input, and it returns N as output. The specification of the input information for sample size planning should be based on a systematic literature review. In the context of covariance structure analysis, Lai and Kelley (2011) discussed several practical methods to facilitate specifying Σ and ω for the sample size planning procedure.
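The inversion can be seen in miniature for a single normal mean with known standard deviation, a far simpler case than the LCM procedure described: solve the CI-width formula for N instead of computing the width from N. The values of σ, ω, and the confidence level below are arbitrary:

```python
import math
from scipy.stats import norm

def n_for_ci_width(sigma, omega, conf=0.95):
    # Invert the CI-width formula omega = 2 * z * sigma / sqrt(N) for N
    z = norm.ppf(0.5 + conf / 2)
    return math.ceil((2 * z * sigma / omega) ** 2)

N = n_for_ci_width(sigma=15, omega=5, conf=0.95)
print(N)
```

In the LCM case the scalar σ is replaced by proxies for Σ and μ and the width is evaluated through the expected information matrix, but the logic of the inversion is the same.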


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)