20 matching records found (search time: 0 ms)
1.
In the last decade several algorithms have been developed for computing the greatest lower bound to reliability, or the constrained minimum-trace communality solution, in factor analysis. In this paper the convergence properties of these methods are examined. Instead of using Lagrange multipliers, a new theorem is applied that gives a sufficient condition for a symmetric matrix to be Gramian. Whereas computational pitfalls can be constructed for two methods suggested by Woodhouse and Jackson, it is shown that a slightly modified version of one method suggested by Bentler and Woodward can safely be applied to any set of data. A uniqueness proof for the desired solution is offered. The authors are obliged to Charles Lewis and Dirk Knol for helpful comments, and to Frank Brokken and Henk Camstra for developing computer programs.
2.
The subject of factor indeterminacy has a vast history in factor analysis (Guttman, 1955; Lederman, 1938; Wilson, 1928). It has led to strong differences in opinion (Steiger, 1979). The current paper gives necessary and sufficient conditions for the observability of factors in terms of the parameter matrices and a finite number of variables. Five conditions are given which rigorously define indeterminacy. It is shown that (un)observable factors are (in)determinate. Specifically, the indeterminacy proof by Guttman is extended to Heywood cases. The results are illustrated by two examples, and implications for indeterminacy are discussed.
3.
A jackknife-like procedure is developed for producing standard errors of estimate in maximum likelihood factor analysis. Unlike earlier methods based on information theory, the procedure developed is computationally feasible on larger problems. Unlike earlier methods based on the jackknife, the present procedure is not plagued by the factor alignment problem, the Heywood case problem, or the necessity to jackknife by groups. Standard errors may be produced for rotated and unrotated loading estimates using either orthogonal or oblique rotation, as well as for estimates of unique factor variances and common factor correlations. The total cost for larger problems is a small multiple of the square of the number of variables times the number of observations used in the analysis. Examples are given to demonstrate the feasibility of the method. The research done by R. I. Jennrich was supported in part by NSF Grant MCS 77-02121. The research done by D. B. Clarkson was supported in part by NSERC Grant A3109.
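The grouped jackknife that this procedure improves upon is easy to sketch generically: delete one group of observations at a time, recompute the statistic, and pool the spread of the leave-group-out values. The function below is a minimal illustration of that idea only; the random balanced grouping, the scalar statistic, and the group count are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def jackknife_by_groups_se(data, statistic, n_groups=10, seed=0):
    """Delete-one-group jackknife standard error of statistic(data).

    data: (n, p) array; statistic: maps an array to a scalar.
    Generic sketch of grouped jackknifing, not the factor-loading
    procedure of the paper.
    """
    n = data.shape[0]
    rng = np.random.default_rng(seed)
    groups = rng.permutation(n) % n_groups       # random balanced groups
    pseudo = []
    for g in range(n_groups):
        pseudo.append(statistic(data[groups != g]))  # group g deleted
    pseudo = np.asarray(pseudo)
    k = n_groups
    # grouped jackknife variance: (k-1)/k * sum((theta_g - mean)^2)
    var = (k - 1) / k * np.sum((pseudo - pseudo.mean()) ** 2)
    return np.sqrt(var)
```

For the sample mean this estimator recovers, in expectation, the usual standard error of the mean, which makes a quick sanity check possible.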
4.
Douglas B. Clarkson 《Psychometrika》1979,44(3):297-314
The jackknife by groups and modifications of the jackknife by groups are used to estimate standard errors of rotated factor loadings for selected populations in common factor model maximum likelihood factor analysis. Simulations are performed in which t-statistics based upon these jackknife estimates of the standard errors are computed. The validity of the t-statistics and their associated confidence intervals is assessed. Methods are given through which the computational efficiency of the jackknife may be greatly enhanced in the factor analysis model. Computing assistance was obtained from the Health Sciences Computing Facility, UCLA, sponsored by NIH Special Research Resources Grant RR-3. The author wishes to thank his doctoral committee co-chairmen, Drs. James W. Frane and Robert I. Jennrich, UCLA, for their contributions to this research.
5.
In the course of developing the minres method of factor analysis the troublesome situation of communalities greater than one arose. This problem, referred to as the generalized Heywood case, is resolved in this paper by means of a process of minimizing the sum of squares of off-diagonal residuals. The resulting solution is superior to the otherwise very efficient original minres method without requiring additional computing time. Both authors were with the System Development Corporation when this work was done.
6.
John B. Carroll 《Psychometrika》1953,18(1):23-38
It is proposed that a satisfactory criterion for an approximation to simple structure is the minimization of the sums of cross-products (across factors) of squares of factor loadings. This criterion is completely analytical and yields a unique solution; it requires no plotting, nor any decisions as to the clustering of variables into subgroups. The equations involved appear to be capable only of iterative solution; for more than three or four factors the computations become extremely laborious but may be feasible for high-speed electronic equipment. Either orthogonal or oblique solutions may be achieved. For illustrations, the Johnson-Reynolds study of flow and selection factors and the Thurstone box problem are reanalyzed. The presence of factorially complex tests produces a type of hyperplanar fit which the investigator may desire to adjust by graphical rotations; the smaller the number of such tests, the closer the criterion comes to approximating simple structure.
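The criterion itself is easy to state in matrix terms. The sketch below evaluates it for a given loading matrix; the algebraic identity used (the sum over factor pairs of squared-loading cross-products equals half of the squared row sum minus the fourth-power sum) is elementary, and the iterative rotation that minimizes the criterion is not shown.

```python
import numpy as np

def carroll_criterion(loadings):
    """Carroll's (1953) simple-structure criterion: the sum, over
    variables, of cross-products across factor pairs of *squared*
    loadings. Smaller values mean each variable loads mainly on one
    factor. Sketch of the criterion only, not the rotation algorithm."""
    sq = np.asarray(loadings, dtype=float) ** 2    # squared loadings, (p, m)
    total = sq.sum(axis=1) ** 2                    # (sum_j a_ij^2)^2 per variable
    quart = (sq ** 2).sum(axis=1)                  # sum_j a_ij^4 per variable
    # sum over j < k of a_ij^2 * a_ik^2, summed over variables i
    return 0.5 * np.sum(total - quart)
```

An identity loading matrix (perfect simple structure) gives a criterion of zero, while a factorially complex variable such as loadings (0.7, 0.7) contributes 0.49 × 0.49.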
7.
Raymond F. Koopman 《Psychometrika》1978,43(1):109-110
It is shown that the common and unique variance estimates produced by Martin & McDonald's Bayesian estimation procedure for the unrestricted common factor model have a predictable sum which is always greater than the maximum likelihood estimate of the total variance. This fact is used to justify a suggested simple alternative method of specifying the Bayesian parameters required by the procedure.
8.
Otto P. van Driel 《Psychometrika》1978,43(2):225-243
In the applications of maximum likelihood factor analysis the occurrence of boundary minima instead of proper minima is no exception at all. In the past the causes of such improper solutions could not be detected. This was impossible because the matrices containing the parameters of the factor analysis model were kept positive definite. By dropping these constraints, it becomes possible to distinguish between the different causes of improper solutions. In this paper some of the most important causes are discussed and illustrated by means of artificial and empirical data. The author is indebted to H. J. Prins for stimulating and encouraging discussions.
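A minimal first diagnostic for the boundary minima discussed here is to inspect the estimated unique variances: a value at or below zero signals a Heywood case rather than a proper interior minimum. The function below is an illustrative check only (the tolerance and labels are assumptions); the paper's analysis of the *causes* requires examining the unconstrained solution itself.

```python
import numpy as np

def classify_solution(unique_variances, tol=1e-6):
    """Flag improper (boundary) factor solutions from the estimated
    unique variances. Simple diagnostic sketch; thresholds are
    illustrative assumptions."""
    psi = np.asarray(unique_variances, dtype=float)
    if np.any(psi < -tol):
        return "improper: negative unique variance"
    if np.any(np.abs(psi) <= tol):
        return "boundary: zero unique variance (Heywood case)"
    return "proper"
```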
9.
Klaas Nevels 《Psychometrika》1989,54(2):339-343
In FACTALS an alternating least squares algorithm is utilized. Mooijaart (1984) has shown that this algorithm is based upon an erroneous assumption. This paper gives a proper solution for the loss function used in FACTALS.
10.
A new algorithm to obtain the least-squares or MINRES solution in common factor analysis is presented. It is based on the up-and-down Marquardt algorithm developed by the present authors for a general nonlinear least-squares problem. Experiments with some numerical models and some empirical data sets showed that the algorithm worked nicely and that SMC (Squared Multiple Correlation) performed best among four sets of initial values for common variances, but that the solution might sometimes be very sensitive to fluctuations in the sample covariance matrix. Numerical computations were performed on a NEAC S-1000 computer in the Computer Center, Osaka University.
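The MINRES loss and the SMC starting values mentioned above can both be written down directly; the Marquardt iteration itself is omitted from this sketch.

```python
import numpy as np

def minres_loss(R, loadings):
    """Sum of squared off-diagonal residuals of R - L L^T: the
    least-squares (MINRES) loss. Sketch of the objective only."""
    L = np.atleast_2d(loadings)
    resid = R - L @ L.T
    mask = ~np.eye(R.shape[0], dtype=bool)   # the diagonal is ignored
    return np.sum(resid[mask] ** 2)

def smc_start(R):
    """Squared multiple correlations, the initial communalities that
    performed best in the experiments reported above."""
    Rinv = np.linalg.inv(R)
    return 1.0 - 1.0 / np.diag(Rinv)
```

For a correlation matrix generated exactly by one factor, the loss at the true loadings is zero and the SMCs fall strictly between 0 and 1.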
11.
Harshman's DEDICOM model provides a framework for analyzing square but asymmetric matrices of directional relationships among n objects or persons in terms of a small number of components. One version of DEDICOM ignores the diagonal entries of the matrices. A straightforward computational solution for this model is offered in the present paper. The solution can be interpreted as a generalized Minres procedure suitable for handling asymmetric matrices.
12.
Michael W. Browne 《Psychometrika》1988,53(4):585-589
Algebraic properties of the normal theory maximum likelihood solution in factor analysis regression are investigated. Two commonly employed measures of the within sample predictive accuracy of the factor analysis regression function are considered: the variance of the regression residuals and the squared correlation coefficient between the criterion variable and the regression function. It is shown that this within sample residual variance and within sample squared correlation may be obtained directly from the factor loading and unique variance estimates, without use of the original observations or the sample covariance matrix.
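The observation can be illustrated by computing both accuracy measures from the model-implied covariance matrix ΛΛ' + Ψ alone, with no access to the raw data. This sketch routes through the implied covariance rather than the paper's closed-form expressions in the loadings and unique variances, and the convention that the first variable is the criterion is an assumption for illustration.

```python
import numpy as np

def fa_regression_summary(loadings, unique_vars, criterion=0):
    """Within-sample residual variance and squared correlation for
    factor analysis regression, computed from Sigma = L L^T + diag(psi)
    alone -- no observations or sample covariance needed. Sketch, not
    the paper's closed forms."""
    L = np.atleast_2d(loadings)
    Sigma = L @ L.T + np.diag(unique_vars)   # model-implied covariance
    idx = [i for i in range(Sigma.shape[0]) if i != criterion]
    s_yy = Sigma[criterion, criterion]
    s_yx = Sigma[criterion, idx]
    S_xx = Sigma[np.ix_(idx, idx)]
    # standard linear-regression identities on the implied covariance
    resid_var = s_yy - s_yx @ np.linalg.solve(S_xx, s_yx)
    r_squared = 1.0 - resid_var / s_yy
    return resid_var, r_squared
```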
13.
The factor analysis of repeated measures psychiatric data presents interesting challenges for researchers in terms of identifying the latent structure of an assessment instrument. Specifically, repeated measures contain both within- and between-individual sources of variance. Although a number of techniques exist for separating out these two sources of variance, all are problematic. Recently, researchers have proposed that exploratory multilevel factor analysis (MFA) be used to appropriately analyze the latent structure of repeated measures data. The chief objective of this report is to provide a didactic step-by-step guide on how MFA may be applied to psychiatric data. In the discussion, we describe difficulties associated with MFA and consider challenges in factor analyzing life event appraisals in psychiatric samples.
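The within/between separation that MFA starts from can be sketched as a pair of scatter matrices: a pooled within-individual matrix and a between-individual matrix, which MFA then factors separately. The divisors below are conventional ANOVA-style choices, not the specific MFA estimators.

```python
import numpy as np

def within_between_cov(data, groups):
    """Split repeated-measures data into a pooled within-individual
    covariance matrix and a between-individual covariance matrix.
    Sketch with conventional (assumed) scaling."""
    data = np.asarray(data, dtype=float)
    groups = np.asarray(groups)
    ids = np.unique(groups)
    grand = data.mean(axis=0)
    n, p = data.shape
    Sw = np.zeros((p, p))
    Sb = np.zeros((p, p))
    for g in ids:
        block = data[groups == g]
        mu = block.mean(axis=0)
        dev = block - mu
        Sw += dev.T @ dev                    # within-individual scatter
        d = (mu - grand)[:, None]
        Sb += len(block) * (d @ d.T)         # between-individual scatter
    return Sw / (n - len(ids)), Sb / (len(ids) - 1)
```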
14.
15.
An inter-battery method of factor analysis
Ledyard R Tucker 《Psychometrika》1958,23(2):111-136
The inter-battery method of factor analysis was devised to provide information relevant to the stability of factors over different selections of tests. Two batteries of tests, postulated to depend on the same common factors, but not parallel tests, are given to one sample of individuals. Factors are determined from the correlation of the tests in one battery with the tests in the other battery. These factors are only those that are common to the two batteries. No communality estimates are required. A statistical test is provided for judging the minimum number of factors involved. Rotation of axes is carried out independently for the two batteries. A final step provides the correlation between factors determined by scores on the tests in the two batteries. The correlations between corresponding factors are taken as factor reliability coefficients. This research was jointly supported by Princeton University and the Office of Naval Research under contract N6onr-270-20 and the National Science Foundation under grant NSF G-642; Harold Gulliksen, principal investigator. The preparation of this paper and the accompanying material has been aided by the Educational Testing Service. The author is grateful to Professors Harold Gulliksen and Samuel S. Wilks for their many most helpful comments and suggestions.
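With modern linear algebra, factors "common to the two batteries" can be extracted from the cross-correlation matrix by a singular value decomposition, which makes clear why no communality estimates are required: only the between-battery correlations enter. This is a sketch in today's notation, not Tucker's original computing scheme.

```python
import numpy as np

def inter_battery_factors(R12, n_factors):
    """Factor the cross-correlation matrix R12 between two batteries.
    From the SVD R12 = U D V^T, the loadings U D^{1/2} (battery 1) and
    V D^{1/2} (battery 2) reproduce R12 with n_factors components.
    SVD-based sketch of the inter-battery idea."""
    U, d, Vt = np.linalg.svd(np.asarray(R12, dtype=float))
    k = n_factors
    root = np.sqrt(d[:k])
    A = U[:, :k] * root      # battery-1 loadings
    B = Vt[:k].T * root      # battery-2 loadings
    return A, B
```

For a rank-one cross-correlation matrix, one inter-battery factor reproduces it exactly.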
16.
ROSNER B 《Psychometrika》1948,13(3):181-184
Factorial analysis begins with an n × n correlation matrix R, whose principal diagonal entries are unknown. If the common test space of the battery is under investigation, the communality of each test is entered in the appropriate diagonal cell. This value is the portion of the test's variance shared with others in the battery. The communalities must be so estimated that R will maintain the rank determined by its side entries, after the former have been inserted. Previous methods of estimating the communalities have involved a certain arbitrariness, since they depended on selecting test subgroups or parts of the data in R. A theory is presented showing that this difficulty can be avoided in principle. In its present form, the theory is not offered as a practical computing procedure. The basis of the new method lies in the Cayley-Hamilton theorem: any square matrix satisfies its own characteristic equation.
17.
It is shown that the PAR Derivative-Free Nonlinear Regression program in BMDP can be used to fit structural equation models,
producing generalized least squares estimates, standard errors, and goodness-of-fit test statistics. Covariance structure
models more general than LISREL can be analyzed. The approach is particularly useful for dealing with new non-standard models
and experimenting with alternate methods of estimation.
The research of the second author was supported by NSF grant MCS 83-01587.
We wish to thank our referees for some very valuable suggestions.
18.
19.
Harold P. Bechtoldt 《Psychometrika》1961,26(4):405-432
Note is taken of four related sources of confusion as to the usefulness of Thurstone's factor analysis model, and of their resolutions. One resolution uses Tucker's distinction between exploratory and confirmatory analyses. Eight analyses of two sets of data demonstrate the procedures and results of a confirmatory study with statistical tests of some, but not all, relevant hypotheses in an investigation of the stability (invariance) hypothesis. The empirical results provide estimates, as substitutes for unavailable sampling formulations, of the effects of variation in diagonal values, in method of factoring, and in samples of cases. Implications of these results are discussed. The computational costs of this study were defrayed, in part, by a research small grant M-1922 from the National Institute of Health, and, in part, by support under project 176-0002 by the University of Iowa Computing Center, Dr. J. P. Dolch, Director. The assistance of Dr. Kern Dickman and Mr. Leonard Wevrick of the University of Illinois and of Mr. Norman Luther of the University of Iowa in handling the computing problems is gratefully acknowledged.
20.
Sangit Chatterjee 《The British journal of mathematical and statistical psychology》1984,37(2):252-262
The sampling variability of estimated factor loadings is neglected in modern factor analysis: such investigations are generally normal-theory based and asymptotic in nature. The bootstrap, a computer-based methodology, is described and then applied to demonstrate how the sampling variability of the estimates of factor loadings can be estimated for a given set of data. The issue of the number of factors to be retained in a factor model is also addressed. The bootstrap is shown to be an effective data-analytic tool for computing various statistics of interest which are otherwise intractable.
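A bootstrap of loading standard errors takes only a few lines: resample rows with replacement, re-extract, and take the spread across replicates. Here a first-principal-factor extraction stands in for a full factoring routine (an assumption for brevity), and the sign fix guards against the alignment indeterminacy across replicates.

```python
import numpy as np

def first_factor_loadings(X):
    """Loadings on the first principal factor of the correlation
    matrix: a dependency-free stand-in for a full factor extraction."""
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)               # ascending eigenvalues
    v = vecs[:, -1] * np.sqrt(vals[-1])          # largest root's loadings
    return v if v.sum() >= 0 else -v             # fix sign indeterminacy

def bootstrap_loading_se(X, n_boot=200, seed=0):
    """Bootstrap standard errors of the loadings: resample rows with
    replacement, re-extract, and take the spread across replicates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    reps = np.array([first_factor_loadings(X[rng.integers(0, n, n)])
                     for _ in range(n_boot)])
    return reps.std(axis=0, ddof=1)
```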