Similar Articles
20 similar articles found (search time: 15 ms)
1.
An expression is given for weighted least squares estimators of oblique common factors, constrained to have the same covariance matrix as the factors they estimate. It is shown that if, as in exploratory factor analysis, the common factors are obtained by oblique transformation from the Lawley-Rao basis, the constrained estimators are given by the same transformation. Finally, a proof of uniqueness is given. The research reported in this paper was partly supported by Natural Sciences and Engineering Research Council Grant No. A6346.
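The unconstrained weighted least squares ("Bartlett") factor score estimator that the constrained estimator above refines can be sketched as follows. This is a minimal illustration under hypothetical choices (a known simple-structure loading matrix, independent factors, equal unique variances); the paper's covariance-matching constraint is not imposed.

```python
import numpy as np

# Sketch of the classic (unconstrained) WLS "Bartlett" factor score estimator,
#   f_hat = (L' Psi^-1 L)^-1 L' Psi^-1 x.
# All numerical values are hypothetical illustration choices.
rng = np.random.default_rng(0)
p, k, n = 6, 2, 500

L = np.zeros((p, k))                      # simple-structure loading matrix
L[:3, 0] = 0.7                            # first three items load on factor 1
L[3:, 1] = 0.7                            # last three items load on factor 2
psi = np.full(p, 0.5)                     # unique variances

f = rng.normal(size=(n, k))               # true (independent) factor scores
x = f @ L.T + rng.normal(scale=np.sqrt(psi), size=(n, p))

Psi_inv = np.diag(1.0 / psi)
B = np.linalg.solve(L.T @ Psi_inv @ L, L.T @ Psi_inv)   # k x p score weights
f_hat = x @ B.T                           # WLS factor score estimates

r1 = np.corrcoef(f[:, 0], f_hat[:, 0])[0, 1]
r2 = np.corrcoef(f[:, 1], f_hat[:, 1])[0, 1]
```

Because Bartlett scores are conditionally unbiased given the true factors, the estimated and true scores correlate strongly in this simulation.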

2.
We relate Thurstonian models for paired comparisons data to Thurstonian models for ranking data, which assign zero probabilities to all intransitive patterns. We also propose an intermediate model for paired comparisons data that assigns nonzero probabilities to all transitive patterns and to some, but not all, intransitive patterns. There is a close correspondence between the multidimensional normal ogive model employed in educational testing and Thurstone's model for paired comparisons data under multiple judgment sampling with minimal identification restrictions. Like the normal ogive model, Thurstonian models have two formulations, a factor analytic and an IRT formulation. We use the factor analytic formulation to estimate this model from the first- and second-order marginals of the contingency table using estimators proposed by Muthén. We also propose a statistic to assess the fit of these models to the first- and second-order marginals of the contingency table. This is important, as a model may reproduce the estimated thresholds and tetrachoric correlations well, yet fail to reproduce the marginals of the contingency table if the assumption of multivariate normality is incorrect. A simulation study investigates the performance of three alternative limited-information estimators, which differ in the procedure used in their final stage: unweighted least squares (ULS), diagonally weighted least squares (DWLS), and full weighted least squares (WLS). Both ULS and DWLS show good performance with medium-size problems and small samples, with a slightly better performance for the ULS estimator. This paper is based on the author's doctoral dissertation; Ulf Böckenholt, advisor. The final stages of this research took place while the author was at the Department of Statistics and Econometrics, Universidad Carlos III de Madrid.
The author is indebted to Adolfo Hernández for stimulating discussions that helped improve this paper, and to Ulf Böckenholt and the Associate Editor for a number of helpful suggestions on a previous draft.

3.
Liu Hongyun, Luo Fang, Wang Yue, Zhang Yu. Acta Psychologica Sinica, 2012, 44(1):121-132
The authors briefly review categorical confirmatory factor analysis (CCFA) models in the SEM framework and models relating test items to latent abilities in the MIRT framework, and summarize the main parameter estimation methods under each framework. A simulation study compares the WLSc and WLSMV estimators from the SEM framework with the MLR and MCMC estimators from the MIRT framework. The results show that: (1) WLSc yields the most biased parameter estimates and suffers from convergence problems; (2) as sample size increases, the precision of all item parameter estimates improves; WLSMV and MLR differ very little in precision and in most conditions are no worse than MCMC; (3) except for WLSc, estimation precision increases with the number of items per dimension; (4) test dimensionality strongly affects the discrimination and difficulty parameters, whereas its effect on item factor loadings and thresholds is comparatively small; (5) the precision of item parameter estimates depends on the number of dimensions an item measures, with items measuring a single dimension estimated more precisely. The paper also offers suggestions on issues to note when applying the two approaches in practice.

4.
A two-step weighted least squares estimator for multiple factor analysis of dichotomized variables is discussed. The estimator is based on the first and second order joint probabilities. Asymptotic standard errors and a model test are obtained by applying the Jackknife procedure.
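The jackknife step used above to obtain standard errors can be sketched generically. The estimator plugged in here (the sample mean, for which the delete-one jackknife SE reproduces s/√n exactly) is an illustration, not the paper's factor-model statistic.

```python
import numpy as np

# Minimal sketch of the delete-one jackknife standard error for a generic
# estimator. Illustrated with the sample mean, whose jackknife SE equals the
# classical s/sqrt(n) exactly -- a convenient correctness check.
def jackknife_se(data, estimator):
    n = len(data)
    # Recompute the estimator n times, leaving out one observation each time.
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

rng = np.random.default_rng(1)
x = rng.normal(size=200)
se = jackknife_se(x, np.mean)
classic = x.std(ddof=1) / np.sqrt(len(x))
```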

5.
A model containing linear and nonlinear parameters (e.g., a spatial multidimensional scaling model) is viewed as a linear model with free and constrained parameters. Since the rank deficiency of the design matrix for the linear model determines the number of side conditions needed to identify its parameters, the design matrix acts as a guide in identifying the parameters of the nonlinear model. Moreover, if the design matrix and the uniqueness conditions constitute an orthogonal linear model, then the associated error sum of squares may be expressed in a form which separates the free and constrained parameters. This immediately provides least squares estimates of the free parameters, while simplifying the least squares problem for those which are constrained. When the least squares estimates for a nonlinear model are obtained in this way, i.e., by conceptualizing it as a submodel, the final error sum of squares for the nonlinear model will be a restricted minimum whenever the side conditions of the model become real restrictions upon its submodel. In this case the design matrix for the embracing orthogonal model serves as a guide in introducing parameters into the nonlinear model as well as in identifying these parameters. The method of overwriting a nonlinear model with an orthogonal linear model is illustrated with two different spatial analyses of a three-way preference table.

6.
This study examines the unscaled and scaled root mean square error of approximation (RMSEA), comparative fit index (CFI), and Tucker–Lewis index (TLI) of diagonally weighted least squares (DWLS) and unweighted least squares (ULS) estimators in structural equation modeling with ordered categorical data. We show that the number of categories and the threshold values used for categorization can adversely affect the DWLS unscaled and scaled fit indices, as well as the ULS scaled fit indices, in the population when the analysis model is misspecified and the threshold structure is saturated. Consequently, a severely misspecified model may be considered acceptable, depending on how the underlying continuous variables are categorized. The corresponding CFI and TLI are less dependent on the categorization than the RMSEA but are less sensitive to model misspecification in general. In contrast, the number of categories and threshold values do not affect the ULS unscaled fit indices in the population.
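The three fit indices discussed above are computed from a model chi-square and a baseline chi-square by standard formulas, sketched below; the estimator-specific scaling corrections studied in the paper are omitted, and the numerical inputs are hypothetical.

```python
import math

# Standard sample formulas for RMSEA, CFI, and TLI given a model chi-square
# (chi2, df), a baseline chi-square (chi2_b, df_b), and sample size n.
# Scaled/robust variants would adjust chi2 and df first; omitted here.
def rmsea(chi2, df, n):
    return math.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))

def cfi(chi2, df, chi2_b, df_b):
    return 1.0 - max(chi2 - df, 0.0) / max(chi2_b - df_b, chi2 - df, 0.0)

def tli(chi2, df, chi2_b, df_b):
    return ((chi2_b / df_b) - (chi2 / df)) / ((chi2_b / df_b) - 1.0)

# Hypothetical values: model chi2 = 85 on 40 df, baseline chi2 = 900 on 55 df, n = 500.
fit = (rmsea(85, 40, 500), cfi(85, 40, 900, 55), tli(85, 40, 900, 55))
```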

7.
A program is described for fitting a regression model in which the relationship between the dependent and the independent variables is described by two regression equations, one for each of two mutually exclusive ranges of the independent variable. The point at which the change from one equation to the other occurs is often unknown, and thus must be estimated. In cognitive psychology, such models are relevant for studying the phenomenon of strategy shifts. The program uses a (weighted) least squares algorithm to estimate the regression parameters and the change point. The algorithm always finds the global minimum of the error sum of squares. The model is applied to data from a mental-rotation experiment. The program’s estimates of the point at which the strategy shift occurs are compared with estimates obtained from a nonlinear least squares minimization procedure in SPSSX.
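The "global minimum" guarantee above can be obtained by exhaustive search: try every candidate change point, fit separate least squares lines to the two ranges, and keep the split with the smallest total error sum of squares. The sketch below illustrates that idea on synthetic data; the original program's exact algorithm and weighting may differ.

```python
import numpy as np

# Grid-search sketch of two-phase regression: for every admissible split point,
# fit an ordinary least squares line to each range and keep the split with the
# smallest total error sum of squares (hence a global minimum over splits).
def two_phase_fit(x, y, min_seg=3):
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = (np.inf, None)
    for c in range(min_seg, len(x) - min_seg):
        sse = 0.0
        for xs, ys in ((x[:c], y[:c]), (x[c:], y[c:])):
            A = np.vstack([xs, np.ones_like(xs)]).T
            coef, res, *_ = np.linalg.lstsq(A, ys, rcond=None)
            sse += res[0] if res.size else np.sum((ys - A @ coef) ** 2)
        if sse < best[0]:
            best = (sse, x[c])
    return best  # (minimum SSE, estimated change point)

x = np.arange(40, dtype=float)
y = np.where(x < 20, 2.0 * x, 5.0 + 0.5 * x)   # regime change at x = 20
sse, cp = two_phase_fit(x, y)
```

With noiseless piecewise-linear data the search recovers the true change point with zero error sum of squares.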

8.
Li CH. Psychological Assessment, 2012, 24(3):770-776
Of the several measures of optimism presently available in the literature, the Life Orientation Test (LOT; Scheier & Carver, 1985) has been the most widely used in empirical research. This article explores, confirms, and cross-validates the factor structure of the Chinese version of the LOT with ordinal data by using robust weighted least squares (robust WLS) estimation within the Taiwanese cultural context. Results of exploratory and confirmatory factor analyses using 3 different samples (Ntotal = 1,119) show that the factor structure of the Chinese version of the LOT is better conceptualized as a correlated 2-factor model than as a single-factor model. The composite reliability was 0.7 for the "disagreement on optimism" factor and 0.74 for the "agreement on optimism" factor. In addition, comparisons of the 2 estimators using empirical and simulation data suggest that robust WLS is less biased than maximum likelihood (ML) for estimating factor loadings and interfactor correlations in the factor analytic model of the Chinese version of the LOT.

9.
A Monte Carlo study compared four approaches to growth curve analysis of subjects assessed repeatedly with the same set of dichotomous items: a two‐step procedure first estimating latent trait measures using MULTILOG and then using a hierarchical linear model to examine the changing trajectories with the estimated abilities as the outcome variable; a structural equation model using modified weighted least squares (WLSMV) estimation; and two approaches in the framework of multilevel item response models, namely a hierarchical generalized linear model using Laplace estimation and Bayesian analysis using Markov chain Monte Carlo (MCMC). These four methods have similar power in detecting the average linear slope across time. MCMC and Laplace estimates perform relatively better on the bias of the average linear slope and corresponding standard error, as well as the item location parameters. For the variance of the random intercept, and the covariance between the random intercept and slope, all estimates are biased in most conditions. For the random slope variance, only Laplace estimates are unbiased when there are eight time points.

10.
Weighted least squares fitting using ordinary least squares algorithms
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. This approach consists of iteratively performing (steps of) existing algorithms for ordinary least squares (OLS) fitting of the same model. The approach is based on minimizing a function that majorizes the WLS loss function. The generality of the approach implies that, for every model for which an OLS fitting algorithm is available, the present approach yields a WLS fitting algorithm. In the special case where the WLS weight matrix is binary, the approach reduces to missing data imputation. This research has been made possible by a fellowship from the Royal Netherlands Academy of Arts and Sciences to the author.
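The majorization idea can be sketched in the simplest setting, WLS regression: scale the weights into (0, 1], then repeatedly blend the observed responses with the current fitted values and apply one OLS step to the blended data. The iterates converge to the WLS solution, which the sketch checks against the closed form; this is an illustration of the general principle, not the paper's code.

```python
import numpy as np

# Majorization sketch: WLS regression solved purely by repeated OLS steps.
# With weights w scaled so that max(w) = 1, blending
#   y_tilde = w*y + (1 - w)*(X @ b)
# and refitting by OLS majorizes the WLS loss, so the iterates converge to the
# WLS solution. Data are synthetic.
rng = np.random.default_rng(2)
n, p = 60, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)
w = rng.uniform(0.2, 1.0, size=n)
w = w / w.max()                               # majorization needs weights <= 1

b = np.zeros(p)
for _ in range(500):
    y_tilde = w * y + (1 - w) * (X @ b)       # impute down-weighted responses
    b, *_ = np.linalg.lstsq(X, y_tilde, rcond=None)

# Closed-form WLS solution for comparison.
b_wls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
```

When the weights are binary, the blending step literally fills unweighted cells with the current model fit, which is the missing-data-imputation special case mentioned above.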

11.
Regularization, or shrinkage estimation, refers to a class of statistical methods that constrain the variability of parameter estimates when fitting models to data. These constraints move parameters toward a group mean or toward a fixed point (e.g., 0). Regularization has gained popularity across many fields for its ability to increase predictive power over classical techniques. However, articles published in JEAB and other behavioral journals have yet to adopt these methods. This paper reviews some common regularization schemes and speculates as to why articles published in JEAB do not use them. In response, we propose our own shrinkage estimator that avoids some of the possible objections associated with the reviewed regularization methods. Our estimator works by mixing weighted individual and group (WIG) data rather than by constraining parameters. We test this method on a problem of model selection. Specifically, we conduct a simulation study on the selection of matching‐law‐based punishment models, comparing WIG with ordinary least squares (OLS) regression, and find that, on average, WIG outperforms OLS in this context.
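The "mix individual and group data" idea can be sketched in a deliberately simplified form: a subject-level estimate is a weighted blend of that subject's own data and the pooled group data. The mixing weight `m` below is a hypothetical tuning constant and the blended statistic is just a mean; the paper's WIG estimator mixes the data within a full model-fitting procedure and selects its weighting differently.

```python
import numpy as np

# Simplified shrinkage-by-data-mixing sketch: blend a subject's own estimate
# with the group estimate. m = 1 recovers the purely individual estimate;
# m = 0 pools completely. m is a hypothetical tuning constant.
def wig_mean(subject_data, group_data, m=0.7):
    return m * np.mean(subject_data) + (1 - m) * np.mean(group_data)

group = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # pooled data, mean 3.0
subject = np.array([5.0, 7.0])                   # one subject's data, mean 6.0

est = wig_mean(subject, group, m=0.7)            # shrinks 6.0 toward 3.0
```

Shrinking the noisy two-observation subject mean toward the group mean trades a little bias for a reduction in variance, which is the mechanism behind the predictive gains described above.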

12.
Robust schemes in regression are adapted to mean and covariance structure analysis, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is properly weighted according to its distance, based on first and second order moments, from the structural model. A simple weighting function is adopted because of its flexibility with changing dimensions. The weight matrix is obtained from an adaptive way of using residuals. Test statistic and standard error estimators are given, based on iteratively reweighted least squares. The method reduces to a standard distribution-free methodology if all cases are equally weighted. Examples demonstrate the value of the robust procedure. The authors acknowledge the constructive comments of three referees and the Editor that led to an improved version of the paper. This work was supported by National Institute on Drug Abuse Grants DA01070 and DA00017 and by the University of North Texas Faculty Research Grant Program.
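The iteratively reweighted least squares idea underlying the method can be sketched in the simpler regression setting from which it is adapted: each case is down-weighted according to the size of its current residual (a Huber-type weight is used here as an illustration), and the weighted normal equations are re-solved until the coefficients stabilize. The paper's case weights are based on distances from the structural model, not raw regression residuals.

```python
import numpy as np

# IRLS sketch for robust regression with a Huber-type weight function:
# large residuals (relative to a robust scale estimate) get weight c/|u| < 1,
# small residuals keep weight 1, and the weighted LS fit is iterated.
def irls_huber(X, y, c=1.345, n_iter=50):
    b = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
    for _ in range(n_iter):
        r = y - X @ b
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale
        u = np.abs(r / s)
        w = np.where(u <= c, 1.0, c / u)              # Huber weights
        b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return b

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.3, size=100)
y[:5] += 15.0                                         # five gross outliers
b_robust = irls_huber(X, y)
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

As in the robust SEM setting, equal weights would reduce the procedure to the ordinary (non-robust) fit, which here is visibly pulled toward the outliers.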

13.
Several algorithms for covariance structure analysis are considered in addition to the Fletcher-Powell algorithm. These include the Gauss-Newton, Newton-Raphson, Fisher Scoring, and Fletcher-Reeves algorithms. Two methods of estimation are considered, maximum likelihood and weighted least squares. It is shown that the Gauss-Newton algorithm which in standard form produces weighted least squares estimates can, in iteratively reweighted form, produce maximum likelihood estimates as well. Previously unavailable standard error estimates to be used in conjunction with the Fletcher-Reeves algorithm are derived. Finally all the algorithms are applied to a number of maximum likelihood and weighted least squares factor analysis problems to compare the estimates and the standard errors produced. The algorithms appear to give satisfactory estimates but there are serious discrepancies in the standard errors. Because it is robust to poor starting values, converges rapidly and conveniently produces consistent standard errors for both maximum likelihood and weighted least squares problems, the Gauss-Newton algorithm represents an attractive alternative for at least some covariance structure analyses.Work by the first author has been supported in part by Grant No. Da01070 from the U. S. Public Health Service. Work by the second author has been supported in part by Grant No. MCS 77-02121 from the National Science Foundation.
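A minimal Gauss-Newton iteration can be sketched on an ordinary nonlinear least squares problem: linearize the residuals through the Jacobian at the current parameters and solve a linear least squares update. The curve y = a·exp(b·x) is a hypothetical illustration, far simpler than a covariance structure model, and no reweighting is shown.

```python
import numpy as np

# Gauss-Newton sketch for fitting y = a * exp(b * x) by least squares:
# at each step solve the linearized problem J d = r for the update d.
def gauss_newton(x, y, theta, n_iter=25):
    for _ in range(n_iter):
        a, b = theta
        f = a * np.exp(b * x)
        r = y - f                                     # current residuals
        J = np.column_stack([np.exp(b * x),           # df/da
                             a * x * np.exp(b * x)])  # df/db
        theta = theta + np.linalg.lstsq(J, r, rcond=None)[0]
    return theta

x = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * x)                            # noiseless: a=2, b=-1.5
theta_hat = gauss_newton(x, y, theta=np.array([1.0, 0.0]))
```

On this noiseless, well-conditioned problem the iteration converges rapidly from a rough starting value, which is the practical behavior the abstract highlights.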

14.
We conducted a Monte Carlo study to investigate the performance of the polychoric instrumental variable estimator (PIV) in comparison to unweighted least squares (ULS) and diagonally weighted least squares (DWLS) in the estimation of a confirmatory factor analysis model with dichotomous indicators. The simulation involved 144 conditions (1,000 replications per condition) that were defined by a combination of (a) two types of latent factor models, (b) four sample sizes (100, 250, 500, 1,000), (c) three factor loadings (low, moderate, strong), (d) three levels of non‐normality (normal, moderately, and extremely non‐normal), and (e) whether the factor model was correctly specified or misspecified. The results showed that when the model was correctly specified, PIV produced estimates that were as accurate as ULS and DWLS. Furthermore, the simulation showed that PIV was more robust to structural misspecifications than ULS and DWLS.

15.
This paper examines the implications of violating assumptions concerning the continuity and distributional properties of data in establishing measurement models in social science research. The General Health Questionnaire-12 uses an ordinal response scale. Responses to the GHQ-12 from 201 Hong Kong immigrants on arrival in Australia showed that the data were not normally distributed. A series of confirmatory factor analyses using either a Pearson product-moment or a polychoric correlation input matrix and employing either maximum likelihood, weighted least squares or diagonally weighted least squares estimation methods were conducted on the data. The parameter estimates and goodness-of-fit statistics provided support for using polychoric correlations and diagonally weighted least squares estimation when analyzing ordinal, nonnormal data.

16.
A central assumption that is implicit in estimating item parameters in item response theory (IRT) models is the normality of the latent trait distribution, whereas a similar assumption made in categorical confirmatory factor analysis (CCFA) models is the multivariate normality of the latent response variables. Violation of the normality assumption can lead to biased parameter estimates. Although previous studies have focused primarily on unidimensional IRT models, this study extended the literature by considering a multidimensional IRT model for polytomous responses, namely the multidimensional graded response model. Moreover, this study is one of the few to specifically compare the performance of full-information maximum likelihood (FIML) estimation versus robust weighted least squares (WLS) estimation when the normality assumption is violated. The research also manipulated the number of nonnormal latent trait dimensions. Results showed that FIML consistently outperformed WLS when there were one or multiple skewed latent trait distributions. More interestingly, the bias of the discrimination parameters was non-ignorable only when the corresponding factor was skewed. Having other skewed factors did not further exacerbate the bias, whereas biases of boundary parameters increased as more nonnormal factors were added. The item parameter standard errors recovered well with both estimation algorithms regardless of the number of nonnormal dimensions.

17.
Observational data typically contain measurement errors. Covariance-based structural equation modelling (CB-SEM) is capable of modelling measurement errors and yields consistent parameter estimates. In contrast, methods of regression analysis using weighted composites as well as a partial least squares approach to SEM facilitate the prediction and diagnosis of individuals/participants. But regression analysis with weighted composites has been known to yield attenuated regression coefficients when predictors contain errors. Contrary to the common belief that CB-SEM is the preferred method for the analysis of observational data, this article shows that regression analysis via weighted composites yields parameter estimates with much smaller standard errors, and thus corresponds to greater values of the signal-to-noise ratio (SNR). In particular, the SNR for the regression coefficient via the least squares (LS) method with equally weighted composites is mathematically greater than that by CB-SEM if the items for each factor are parallel, even when the SEM model is correctly specified and estimated by an efficient method. Analytical, numerical and empirical results also show that LS regression using weighted composites performs as well as or better than the normal maximum likelihood method for CB-SEM under many conditions even when the population distribution is multivariate normal. Results also show that the LS regression coefficients become more efficient when considering the sampling errors in the weights of composites than those that are conditional on weights.

18.
The well‐known problem of fitting the exploratory factor analysis model is reconsidered, where the usual least squares goodness‐of‐fit function is replaced by a more resistant discrepancy measure, based on a smooth approximation of the ℓ1 norm. Fitting the factor analysis model to the sample correlation matrix is a complex matrix optimization problem which requires the structure preservation of the unknown parameters (e.g. positive definiteness). The projected gradient approach is a natural way of solving such data matching problems, as it is especially designed to follow the geometry of the model parameters. Two reparameterizations of the factor analysis model are considered. The approach leads to globally convergent procedures for simultaneous estimation of the factor analysis matrix parameters. Numerical examples illustrate the algorithms and factor analysis solutions.

19.
When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized method of moments (GMM) estimation techniques in multilevel modeling, the authors present a series of estimators along a robust to efficient continuum. This continuum depends on the assumptions that the analyst makes regarding the extent of the correlated effects. It is shown that the GMM approach provides an overarching framework that encompasses well-known estimators such as fixed and random effects estimators and also provides more options. These GMM estimators can be expressed as instrumental variable (IV) estimators which enhances their interpretability. Moreover, by exploiting the hierarchical structure of the data, the current technique does not require additional variables unlike traditional IV methods. Further, statistical tests are developed to compare the different estimators. A simulation study examines the finite sample properties of the estimators and tests and confirms the theoretical order of the estimators with respect to their robustness and efficiency. It further shows that not only are regression coefficients biased, but variance components may be severely underestimated in the presence of correlated effects. Empirical standard errors are employed as they are less sensitive to correlated effects when compared to model-based standard errors. An example using student achievement data shows that GMM estimators can be effectively used in a search for the most efficient among unbiased estimators. This research was supported by the National Academy of Education/Spencer Foundation and the National Science Foundation, grant number SES-0436274. 
We thank the editor, associate editor, and referees for detailed feedback that helped improve the paper.
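The instrumental variable interpretation of the GMM estimators can be sketched in the simplest single-regressor case: when a regressor is correlated with the error, an instrument (correlated with the regressor but not with the error) yields a consistent slope while OLS does not. The data-generating values below are hypothetical, and the multilevel structure that supplies the instruments in the paper is not modeled.

```python
import numpy as np

# IV sketch: x is endogenous (correlated with the structural error e), so the
# OLS slope is inconsistent; the instrument z gives b_iv = (z'y)/(z'x), which
# is consistent in this centred single-regressor setting.
rng = np.random.default_rng(4)
n = 20000
z = rng.normal(size=n)                           # instrument
e = rng.normal(size=n)                           # structural error
x = 0.8 * z + 0.6 * e + rng.normal(size=n)       # endogenous regressor
y = 2.0 * x + e                                  # true slope = 2

b_ols = (x @ y) / (x @ x)                        # biased upward (plim 2.3 here)
b_iv = (z @ y) / (z @ x)                         # consistent for 2.0
```

The GMM framework generalizes this moment condition E[z·(y − xb)] = 0 to many instruments and multilevel random components, which is what produces the robust-to-efficient continuum described above.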

20.
Ab Mooijaart. Psychometrika, 1984, 49(1):143-145
FACTALS is a nonmetric common factor analysis model for multivariate data whose variables may be nominal, ordinal, or interval. In FACTALS, an Alternating Least Squares algorithm is utilized which is claimed to be monotonically convergent. In this paper it is shown that this algorithm is based upon an erroneous assumption, namely that the least squares loss function (which is in this case a non-scale-free loss function) can be transformed into a scale-free loss function. A consequence is that monotonic convergence of the algorithm cannot be guaranteed.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号