Similar documents
20 similar documents found (search time: 31 ms)
1.
Ab Mooijaart, Psychometrika 1984, 49(1), 143-145
FACTALS is a nonmetric common factor analysis model for multivariate data whose variables may be nominal, ordinal or interval. FACTALS uses an alternating least squares algorithm that is claimed to be monotonically convergent. In this paper it is shown that this algorithm is based upon an erroneous assumption, namely that the least squares loss function (in this case a non-scale-free loss function) can be transformed into a scale-free loss function. A consequence is that monotonic convergence of the algorithm cannot be guaranteed.

2.
Weighted least squares fitting using ordinary least squares algorithms
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. This approach consists of iteratively performing (steps of) existing algorithms for ordinary least squares (OLS) fitting of the same model. The approach is based on minimizing a function that majorizes the WLS loss function. The generality of the approach implies that, for every model for which an OLS fitting algorithm is available, the present approach yields a WLS fitting algorithm. In the special case where the WLS weight matrix is binary, the approach reduces to missing data imputation. This research has been made possible by a fellowship from the Royal Netherlands Academy of Arts and Sciences to the author.
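As an illustrative aside, the binary-weight special case described above amounts to iterating ordinary least squares on imputed data: zero-weight responses are filled in with the current model fit, and the model is then refit by OLS. A minimal sketch for a linear regression model, with hypothetical data and variable names:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = rng.normal(size=(n, 2))
beta_true = np.array([2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)
w = (rng.random(n) < 0.6).astype(float)   # binary WLS weights (1 = keep, 0 = drop)

# Direct binary-weight WLS solution, for reference
beta_wls = np.linalg.lstsq(X * w[:, None], w * y, rcond=None)[0]

# Majorization scheme: alternate (a) imputing the zero-weight responses
# with the current model fit and (b) refitting the same model by plain OLS
beta = np.zeros(2)
for _ in range(1000):
    y_imp = w * y + (1 - w) * (X @ beta)   # fill zero-weight cells with the fit
    beta = np.linalg.lstsq(X, y_imp, rcond=None)[0]
```

Each OLS refit decreases the WLS loss, so the iterates converge to the direct binary-weight WLS solution.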

3.
A direct method for handling incomplete data in general covariance structural models is investigated. Asymptotic statistical properties of the generalized least squares method are developed. It is shown that this approach has very close relationships with the maximum likelihood approach. Iterative procedures for obtaining the generalized least squares estimates, the maximum likelihood estimates, as well as their standard error estimates are derived. Computer programs for the confirmatory factor analysis model are implemented. A longitudinal data set is used as an example to illustrate the results. This research was supported in part by Research Grant DA01070 from the U.S. Public Health Service. The author is indebted to anonymous reviewers for some very valuable suggestions. Computer funding is provided by the Computer Services Centre, The Chinese University of Hong Kong.

4.
Observational data typically contain measurement errors. Covariance-based structural equation modelling (CB-SEM) is capable of modelling measurement errors and yields consistent parameter estimates. In contrast, regression analysis using weighted composites, as well as the partial least squares approach to SEM, facilitates the prediction and diagnosis of individual participants. However, regression analysis with weighted composites is known to yield attenuated regression coefficients when predictors contain errors. Contrary to the common belief that CB-SEM is the preferred method for the analysis of observational data, this article shows that regression analysis via weighted composites yields parameter estimates with much smaller standard errors, and thus greater values of the signal-to-noise ratio (SNR). In particular, the SNR for the regression coefficient via the least squares (LS) method with equally weighted composites is mathematically greater than that by CB-SEM if the items for each factor are parallel, even when the SEM model is correctly specified and estimated by an efficient method. Analytical, numerical and empirical results also show that LS regression using weighted composites performs as well as or better than the normal maximum likelihood method for CB-SEM under many conditions, even when the population distribution is multivariate normal. Results further show that LS regression coefficients that account for sampling error in the composite weights are more efficient than those that are conditional on fixed weights.

5.
Green solved the problem of least-squares estimation of several criteria subject to the constraint that the estimates have an arbitrary fixed covariance or correlation matrix. In the present paper an omission in Green's proof is discussed and resolved. Furthermore, it is shown that some recently published solutions for estimating oblique factor scores are special cases of Green's solution for the case of fixed covariance matrices.

6.
Bruce Bloxom, Psychometrika 1979, 44(4), 473-484
A method is developed for estimating the response time distribution of an unobserved component in a two-component serial model, assuming the components are stochastically independent. The estimate of the component's density function is constrained only to be unimodal and non-negative. Numerical examples suggest that the method can yield reasonably accurate estimates with sample sizes of 300 and, in some cases, with sample sizes as small as 100. The author wishes to thank David Kohfeld, Jim Ramsay, Jim Townsend and two anonymous referees for a number of useful and stimulating comments on an earlier version of this paper.

7.
Several algorithms for covariance structure analysis are considered in addition to the Fletcher-Powell algorithm. These include the Gauss-Newton, Newton-Raphson, Fisher Scoring, and Fletcher-Reeves algorithms. Two methods of estimation are considered, maximum likelihood and weighted least squares. It is shown that the Gauss-Newton algorithm, which in standard form produces weighted least squares estimates, can, in iteratively reweighted form, produce maximum likelihood estimates as well. Previously unavailable standard error estimates to be used in conjunction with the Fletcher-Reeves algorithm are derived. Finally, all the algorithms are applied to a number of maximum likelihood and weighted least squares factor analysis problems to compare the estimates and the standard errors produced. The algorithms appear to give satisfactory estimates, but there are serious discrepancies in the standard errors. Because it is robust to poor starting values, converges rapidly, and conveniently produces consistent standard errors for both maximum likelihood and weighted least squares problems, the Gauss-Newton algorithm represents an attractive alternative for at least some covariance structure analyses. Work by the first author has been supported in part by Grant No. DA01070 from the U.S. Public Health Service. Work by the second author has been supported in part by Grant No. MCS 77-02121 from the National Science Foundation.
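For orientation, a Gauss-Newton iteration for a generic nonlinear least squares problem linearizes the model at the current estimate and takes an OLS step on the residuals. The covariance structure setting of the paper is more involved, but the mechanics can be sketched on a hypothetical one-parameter model (all values here are illustrative, not from the paper):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20)
a_true = 1.5
y = np.exp(a_true * t)           # noiseless data from the model y = exp(a * t)

a = 1.0                          # rough starting value
for _ in range(50):
    r = y - np.exp(a * t)                          # current residuals
    J = (t * np.exp(a * t))[:, None]               # Jacobian d(model)/da
    a += np.linalg.lstsq(J, r, rcond=None)[0][0]   # OLS step on linearized model
```

On this zero-residual problem the iteration converges quadratically near the solution, which is the behaviour that makes Gauss-Newton attractive when starting values are reasonable.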

8.
Simple tableau algorithms are given for the noniterative instrumental variable (FABIN 2) and two stage regression (FABIN 3) factor loading estimates of Hägglund. Corresponding generalized least squares estimates of factor covariances and unique variances are introduced. An example is given for the purpose of illustration and comparison. This research was supported by NSF Grant MCS-8301587.

9.
Bruce Bloxom, Psychometrika 1978, 43(3), 397-408
A gradient method is used to obtain least squares estimates of parameters of the m-dimensional euclidean model simultaneously in N spaces, given the observation of all pairwise distances of n stimuli for each space. The procedure can estimate an additive constant as well as stimulus projections and the metric of the reference axes of the configuration in each space. Each parameter in the model can be fixed to equal some a priori value, constrained to be equal to any other parameter, or free to take on any value in the parameter space. Two applications of the procedure are described.
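A bare-bones version of the idea can be sketched for a single space, without the additive constant or the reference-axis metric: gradient descent on the least squares loss over squared Euclidean distances. The configuration, start values, and step size below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
X_true = rng.normal(size=(6, 2))                 # n = 6 stimuli in m = 2 dimensions
diff_t = X_true[:, None, :] - X_true[None, :, :]
D2 = (diff_t ** 2).sum(-1)                       # observed squared distances

def loss(X):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return ((d2 - D2) ** 2).sum()

X = 0.1 * rng.normal(size=(6, 2))                # small random start configuration
loss0 = loss(X)
step = 5e-4
for _ in range(20000):
    diff = X[:, None, :] - X[None, :, :]
    d2 = (diff ** 2).sum(-1)
    # gradient of the double sum over ordered pairs (i, j):
    # d/dX_i = 8 * sum_j (d2_ij - D2_ij) * (x_i - x_j)
    grad = 8.0 * ((d2 - D2)[:, :, None] * diff).sum(axis=1)
    X -= step * grad
```

The recovered configuration matches the data only up to rotation, reflection, and translation, which is why the full model constrains or fixes parameters to identify a solution.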

10.
The problem of minimizing a general matrix trace function, possibly subject to certain constraints, is approached by means of majorizing this function by one having a simple quadratic shape and whose minimum is easily found. It is shown that the parameter set that minimizes the majorizing function also decreases the matrix trace function, which in turn provides a monotonically convergent algorithm for minimizing the matrix trace function iteratively. Three algorithms based on majorization for solving certain least squares problems are shown to be special cases. In addition, by means of several examples, it is noted how algorithms may be provided for a wide class of statistical optimization tasks for which no satisfactory algorithms seem available. The Netherlands organization for scientific research (NWO) is gratefully acknowledged for funding this project. This research was conducted while the author was supported by a PSYCHON grant (560-267-011) from this organization. The author is obliged to Jos ten Berge, Willem Heiser, and Wim Krijnen for helpful comments on an earlier version of this paper.

11.
A generalization of Takane's algorithm for DEDICOM
An algorithm is described for fitting the DEDICOM model for the analysis of asymmetric data matrices. This algorithm generalizes an algorithm suggested by Takane in that it uses a damping parameter in the iterative process. Takane's algorithm does not always converge monotonically. Based on the generalized algorithm, a modification of Takane's algorithm is suggested such that this modified algorithm converges monotonically. It is suggested to choose as starting configurations for the algorithm those configurations that yield closed-form solutions in some special cases. Finally, a sufficient condition is described for monotonic convergence of Takane's original algorithm. Financial support by the Netherlands organization for scientific research (NWO) is gratefully acknowledged. The authors are obliged to Richard Harshman.

12.
A central assumption that is implicit in estimating item parameters in item response theory (IRT) models is the normality of the latent trait distribution, whereas a similar assumption made in categorical confirmatory factor analysis (CCFA) models is the multivariate normality of the latent response variables. Violation of the normality assumption can lead to biased parameter estimates. Although previous studies have focused primarily on unidimensional IRT models, this study extended the literature by considering a multidimensional IRT model for polytomous responses, namely the multidimensional graded response model. Moreover, this study is one of the few studies to specifically compare the performance of full-information maximum likelihood (FIML) estimation versus robust weighted least squares (WLS) estimation when the normality assumption is violated. The research also manipulated the number of nonnormal latent trait dimensions. Results showed that FIML consistently outperformed WLS when there were one or multiple skewed latent trait distributions. More interestingly, the bias of the discrimination parameters was non-ignorable only when the corresponding factor was skewed. Having other skewed factors did not further exacerbate the bias, whereas biases of boundary parameters increased as more nonnormal factors were added. The item parameter standard errors were recovered well by both estimation algorithms regardless of the number of nonnormal dimensions.

13.
We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method avoids the principal limitation of partial least squares (i.e., the lack of a global optimization procedure) while fully retaining all the advantages of partial least squares (e.g., less restricted distributional assumptions and no improper solutions). The method is also versatile enough to capture complex relationships among variables, including higher-order components and multi-group comparisons. A straightforward estimation algorithm is developed to minimize the criterion. The work reported in this paper was supported by Grant A6394 from the Natural Sciences and Engineering Research Council of Canada to the second author. We wish to thank Richard Bagozzi for permitting us to use his organizational identification data and Wynne Chin for providing PLS-Graph 3.0.

14.
Klaas Nevels, Psychometrika 1989, 54(2), 339-343
In FACTALS an alternating least squares algorithm is utilized. Mooijaart (1984) has shown that this algorithm is based upon an erroneous assumption. This paper gives a proper solution for the loss function used in FACTALS.

15.
Background: Although there have been numerous studies conducted on the psychometric properties of Biggs' Learning Process Questionnaire (LPQ), these have involved the use of traditional omnibus measures of scale quality such as corrected item total correlations, internal consistency estimates of reliability, and factor analysis. However, these omnibus measures of scale quality are sample dependent and fail to model item responses as a function of trait level. Since the item-trait relationship is typically nonlinear, traditional factor analytic methods are inappropriate. Aims: The purpose of this study was to identify a unidimensional subset of LPQ items and examine the effectiveness of these items and their options in discriminating between changes in the underlying trait level. In addition to assessing item quality, we were interested in assessing overall scale quality with non-sample-dependent measures. Method: The sample was split into two nearly equal halves, and a unidimensional subset of items was identified in one of these samples and cross-validated in the other. The nonlinear relationship between the probability of endorsing an item option and the underlying trait level was modelled using a nonparametric latent trait technique known as kernel smoothing and implemented with the program TestGraf. After item and scale quality were established, maximum likelihood estimates of participants' trait level were obtained and used to examine grade and gender differences. Results: A unidimensional subset of 16 deep and achieving items was identified. Slightly more than half of these items needed some of their options combined so that the probability of endorsing an item option as a function of increasing trait level corresponded to the ideal rank ordering of the item options. With this adjustment, scale quality as measured by the information function and standard error function was found to be good. However, no statistically significant gender differences were observed and, although statistically significant grade differences were observed, they were not substantively meaningful. Conclusions: The use of nonparametric kernel-smoothing techniques is advocated over parametric latent trait methods for the analysis of attitudinal and psychological measures involving polychotomous ordered-response categories. It is also suggested that latent trait methods are more appropriate than traditional test-based measures for studying differential item functioning both within and between cultures. Nonparametric kernel-smoothing techniques hold particular promise in identifying and understanding cross-cultural differences in student approaches to learning at both the item and scale level.

16.
This paper is concerned with the study of covariance structural models in several populations. Estimation theory of the parameters that are subject to general functional restraints is developed based on the generalized least squares approach. Asymptotic properties of the constrained estimator are studied; and asymptotic chi-square tests are presented to evaluate appropriate model comparisons. The method of multipliers and the standard reparametrization technique are discussed in obtaining the estimates. The methodology is demonstrated by a set of real data. Computer facilities were provided by the Computer Services Center, The Chinese University of Hong Kong. The authors are indebted to several anonymous reviewers for suggestions for improvement of this paper.

17.
Circumplex models for correlation matrices
Structural models that yield circumplex inequality patterns for the elements of correlation matrices are reviewed. Particular attention is given to a stochastic process defined on the circle proposed by T. W. Anderson. It is shown that the Anderson circumplex contains the Markov Process model for a simplex as a limiting case when a parameter tends to infinity. Anderson's model is intended for correlation matrices with positive elements. A replacement for Anderson's correlation function that permits negative correlations is suggested. It is shown that the resulting model may be reparametrized as a factor analysis model with nonlinear constraints on the factor loadings. An unrestricted factor analysis, followed by an appropriate rotation, is employed to obtain parameter estimates. These estimates may be used as initial approximations in an iterative procedure to obtain minimum discrepancy estimates. Practical applications are reported. Presented as the 1992 Psychometric Society Presidential Address. I am greatly indebted to Stephen Du Toit for help in the development of the computer program employed here. Part of this research was carried out at the University of South Africa and at the Institute for Statistical Research of the South African Human Sciences Research Council.
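The Markov simplex mentioned as the limiting case corresponds to the familiar correlation function rho^|i - j|, in which correlations decay monotonically with the lag between variables. A small numerical sketch (the value of rho and the matrix size are arbitrary) showing the decay pattern and positive definiteness:

```python
import numpy as np

rho, n = 0.6, 6
lag = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
C = rho ** lag   # Markov simplex: correlation decays with |i - j|

eigs = np.linalg.eigvalsh(C)   # all positive for |rho| < 1: valid correlation matrix
```

A circumplex, by contrast, would make the correlation depend on circular distance around the circle, so entries first decrease and then increase again along each row.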

18.
We consider the problem of least-squares fitting of squared distances in unfolding. An alternating procedure is proposed which fixes the row or column configuration in turn and finds the global optimum of the objective criterion with respect to the free parameters, iterating in this fashion until convergence is reached. A considerable simplification in the algorithm results, namely that this conditional global optimum is identified by performing a single unidimensional search for each point, irrespective of the dimensionality of the unfolding solution. This work originally formed part of a doctoral thesis (Greenacre, 1978) presented at the University of Paris VI. The authors acknowledge the helpful comments of John Gower during the first author's sabbatical at Rothamsted Experimental Station. The authors are also indebted to Alexander Shapiro, who came up with the proof of the key result which the authors had long suspected, but had not proved, namely that the smallest root of function (13) provides the global minimum of function (7). The constructive comments of the referees of this paper are acknowledged with thanks. This research was supported in part by the South African Council for Scientific and Industrial Research.

19.
An important feature of distance-based principal components analysis is that the variables can be optimally transformed. For monotone spline transformation, a nonnegative least-squares problem with a length constraint has to be solved in each iteration. As an alternative to the algorithm of Lawson and Hanson (1974), we propose the Alternating Length-Constrained Non-Negative Least-Squares (ALC-NNLS) algorithm, which minimizes the nonnegative least-squares loss function over the parameters under a length constraint, by alternatingly minimizing over one parameter while keeping the others fixed. Several properties of the new algorithm are discussed. A Monte Carlo study is presented which shows that for most cases in distance-based principal components analysis, ALC-NNLS performs as well as, or sometimes better than, the method of Lawson and Hanson in terms of the quality of the solution. Supported by The Netherlands Organization for Scientific Research (NWO) by grant nr. 030-56403 for the "PIONEER" project "Subject Oriented Multivariate Analysis" to the third author. We would like to thank the anonymous referees for their valuable remarks that have improved the quality of this paper.
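The one-parameter-at-a-time strategy can be illustrated on plain nonnegative least squares, leaving aside the length constraint that ALC-NNLS adds: each update minimizes the loss exactly over a single coefficient with the others held fixed, then projects it onto the nonnegativity constraint. Data and names below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 5))
b_true = np.array([0.5, 0.0, 1.2, 0.0, 0.3])   # nonnegative target coefficients
y = A @ b_true

b = np.zeros(5)
col_sq = (A ** 2).sum(axis=0)   # squared column norms
r = y - A @ b                   # residual, maintained incrementally
for _ in range(200):            # full sweeps over the coordinates
    for k in range(5):
        # exact minimizer of ||y - A b||^2 over b[k] alone, projected onto b[k] >= 0
        bk = max(0.0, b[k] + (A[:, k] @ r) / col_sq[k])
        r -= A[:, k] * (bk - b[k])
        b[k] = bk
```

Because the loss is a strictly convex quadratic here, the coordinate sweeps converge to the unique constrained minimum; the length constraint in ALC-NNLS modifies each one-dimensional subproblem but keeps this alternating structure.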

20.
Current practice in factor analysis typically involves analysis of correlation rather than covariance matrices. We study whether the standard z-statistic that evaluates whether a factor loading is statistically necessary is correctly applied in such situations and more generally when the variables being analyzed are arbitrarily rescaled. Effects of rescaling on estimated standard errors of factor loading estimates, and the consequent effect on z-statistics, are studied in three variants of the classical exploratory factor model under canonical, raw varimax, and normal varimax solutions. For models with analytical solutions we find that some of the standard errors as well as their estimates are scale equivariant, while others are invariant. For a model in which an analytical solution does not exist, we use an example to illustrate that neither the factor loading estimates nor the standard error estimates possess scale equivariance or invariance, implying that different conclusions could be obtained with different scalings. Together with the prior findings on parameter estimates, these results provide new guidance for a key statistical aspect of factor analysis. We gratefully acknowledge the help of the Associate Editor and three referees whose constructive comments led to an improved version of the paper. This work was supported by National Institute on Drug Abuse Grants DA01070 and DA00017 and by the University of North Texas Faculty Research Grant Program.
