Similar Articles
20 similar articles found (search time: 31 ms)
1.
Yutaka Kano 《Psychometrika》1990,55(2):277-291
Based on the usual factor analysis model, this paper investigates the relationship between improper solutions and the number of factors, and discusses the properties of the noniterative estimation method of Ihara and Kano in exploratory factor analysis. The consistency of the Ihara and Kano estimator is shown to hold even for an overestimated number of factors, which provides a theoretical basis for the rare occurrence of improper solutions and for a new method of choosing the number of factors. The comparative study of their estimator and that based on maximum likelihood is carried out by a Monte Carlo experiment.The author would like to express his thanks to Masashi Okamoto and Masamori Ihara for helpful comments and to the editor and referees for critically reading the earlier versions and making many valuable suggestions. He also thanks Shigeo Aki for his comments on physical random numbers.  相似文献   

2.
A distinction is made between statistical inference and psychometric inference in factor analysis. After reviewing Rao's canonical factor analysis (CFA), a fundamental statistical method of factoring, a new method of factor analysis based upon the psychometric concept of generalizability is described. This new procedure (alpha factor analysis, AFA) determines factors which have maximum generalizability in the Kuder-Richardson, or alpha, sense. The two methods, CFA and AFA, each have the important property of giving the same factors regardless of the units of measurement of the observable variables. In determining factors, the principal distinction between the two methods is that CFA operates in the metric of the unique parts of the observable variables while AFA operates in the metric of the common (communality) parts. On the other hand, the two methods are substantially different as to how they establish the number of factors. CFA answers this crucial question with a statistical test of significance while AFA retains only those alpha factors with positive generalizability. This difference is discussed at some length. A brief outline of a computer program for AFA is described and an example of the application of AFA is given. The first version of this paper was prepared while the senior author was a U. S. Public Health Service Fellow at the Center for Advanced Study in the Behavioral Sciences and while the junior author was Director of Research of the Palo Alto Public Schools.
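The common-part metric that distinguishes AFA can be sketched directly. The following is a single-pass illustration of the alpha factor analysis eigenproblem, assuming the communalities h² are already known (the published method iterates over them); the function name and interface are our illustration, not the authors' program:

```python
import numpy as np

def alpha_factor_analysis(R, h2):
    """One pass of alpha factor analysis: factor the correlation
    matrix rescaled into the metric of the common parts, and keep
    factors with positive generalizability (alpha > 0, which is
    equivalent to a rescaled eigenvalue greater than 1)."""
    p = R.shape[0]
    H_inv = np.diag(1.0 / np.sqrt(h2))
    # Replace the unit diagonal of R with the communalities, then
    # rescale by H^{-1} so the analysis runs in the common-part metric.
    C = H_inv @ (R - np.diag(1.0 - h2)) @ H_inv
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0               # alpha > 0  <=>  eigenvalue > 1
    alpha = (p / (p - 1.0)) * (1.0 - 1.0 / eigvals[keep])
    # Map the kept eigenvectors back to the observed-variable metric.
    loadings = np.diag(np.sqrt(h2)) @ eigvecs[:, keep] * np.sqrt(eigvals[keep])
    return loadings, alpha
```

For a correlation matrix generated by a single common factor, this retains exactly one factor with generalizability 1 and recovers the loadings up to sign.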

3.
Although the Bock–Aitkin likelihood-based estimation method for factor analysis of dichotomous item response data has important advantages over classical analysis of item tetrachoric correlations, a serious limitation of the method is its reliance on fixed-point Gauss-Hermite (G-H) quadrature in the solution of the likelihood equations and likelihood-ratio tests. When the number of latent dimensions is large, computational considerations require that the number of quadrature points per dimension be few. But with large numbers of items, the dispersion of the likelihood, given the response pattern, becomes so small that the likelihood cannot be accurately evaluated with the sparse fixed points in the latent space. In this paper, we demonstrate that substantial improvement in accuracy can be obtained by adapting the quadrature points to the location and dispersion of the likelihood surfaces corresponding to each distinct pattern in the data. In particular, we show that adaptive G-H quadrature, combined with mean and covariance adjustments at each iteration of an EM algorithm, produces an accurate fast-converging solution with as few as two points per dimension. Evaluations of this method with simulated data are shown to yield accurate recovery of the generating factor loadings for models of up to eight dimensions. Unlike an earlier application of adaptive Gibbs sampling to this problem by Meng and Schilling, the simulations also confirm the validity of the present method in calculating likelihood-ratio chi-square statistics for determining the number of factors required in the model. Finally, we apply the method to a sample of real data from a test of teacher qualifications.

4.
The special characteristics of items (low reliability, confounds by minor, unwanted covariance, and the likelihood of a general factor) and a better understanding of factor analysis mean that the default procedure of many statistical packages (Little Jiffy) is no longer adequate for exploratory item factor analysis. It produces too many factors and precludes a general factor even when that means the factors extracted are nonreplicable. More appropriate procedures that reduce these problems are presented, along with guidance on how to select the sample, the sample size required, and how to select items for scales. Proposed scales can be evaluated by their correlations with the factors; a new procedure for doing so eliminates the biased values produced by correlating them with either total or factor scores. The role of exploratory factor analysis relative to cluster analysis and confirmatory factor analysis is noted.

5.
Fisher's method of maximum likelihood is applied to the problem of estimation in factor analysis, as initiated by Lawley, and found to lead to a generalization of the Eckart matrix approximation problem. The solution of this in a special case is applied to show how test fallibility enters into factor determination, it being noted that the method of communalities underestimates the number of factors. Dr. George Brown of Princeton University has independently made the same suggestion in some unpublished work.

6.
When some of the observed variates do not conform to the model under consideration, they can have a serious effect on the results of statistical analysis. In factor analysis, a model with inconsistent variates may result in improper solutions. This article proposes a useful method for identifying a variate as inconsistent in factor analysis. The procedure is based on the likelihood principle. Several statistical properties, such as the effect of misspecified hypotheses, the problem of multiple comparisons, and robustness to violation of distributional assumptions, are investigated. The procedure is illustrated by some examples.

7.
A Monte Carlo study assessed the effect of sampling error and model characteristics on the occurrence of nonconvergent solutions, improper solutions and the distribution of goodness-of-fit indices in maximum likelihood confirmatory factor analysis. Nonconvergent and improper solutions occurred more frequently for smaller sample sizes and for models with fewer indicators of each factor. Effects of practical significance due to sample size, the number of indicators per factor and the number of factors were found for GFI, AGFI, and RMR, whereas no practical effects were found for the probability values associated with the chi-square likelihood ratio test. James Anderson is now at the J. L. Kellogg Graduate School of Management, Northwestern University. The authors gratefully acknowledge the comments and suggestions of Kenneth Land and the reviewers, and the assistance of A. Narayanan with the analysis. Support for this research was provided by the Graduate School of Business and the University Research Institute of the University of Texas at Austin.

8.
Factor analysis is regularly used for analyzing survey data. Missing data, data with outliers and consequently nonnormal data are very common for data obtained through questionnaires. Based on covariance matrix estimates for such nonstandard samples, a unified approach for factor analysis is developed. By generalizing the approach of maximum likelihood under constraints, statistical properties of the estimates for factor loadings and error variances are obtained. A rescaled Bartlett-corrected statistic is proposed for evaluating the number of factors. Equivariance and invariance of parameter estimates and their standard errors for canonical, varimax, and normalized varimax rotations are discussed. Numerical results illustrate the sensitivity of classical methods and advantages of the proposed procedures. This project was supported by a University of North Texas Faculty Research Grant, Grant #R49/CCR610528 for Disease Control and Prevention from the National Center for Injury Prevention and Control, and Grant DA01070 from the National Institute on Drug Abuse. The results do not necessarily represent the official view of the funding agencies. The authors are grateful to three reviewers for suggestions that improved the presentation of this paper.

9.
FACTOR: A computer program to fit the exploratory factor analysis model
Exploratory factor analysis (EFA) is one of the most widely used statistical procedures in psychological research. It is a classic technique, but statistical research into EFA is still quite active, and various new developments and methods have been presented in recent years. The authors of the most popular statistical packages, however, do not seem very interested in incorporating these new advances. We present the program FACTOR, which was designed as a general, user-friendly program for computing EFA. It implements traditional procedures and indices and incorporates the benefits of some more recent developments. Two of the traditional procedures implemented are polychoric correlations and parallel analysis, the latter of which is considered to be one of the best methods for determining the number of factors or components to be retained. Good examples of the most recent developments implemented in our program are (1) minimum rank factor analysis, which is the only factor method that allows one to compute the proportion of variance explained by each factor, and (2) the simplimax rotation method, which has proved to be the most powerful rotation method available. Of these methods, only polychoric correlations are available in some commercial programs. A copy of the software, a demo, and a short manual can be obtained free of charge from the first author.

10.
孔明  卞冉  张厚粲 《心理科学》2007,30(4):924-925,918
Parallel analysis is a method used in exploratory factor analysis to determine the number of factors to retain. The methods commonly used for this purpose, the eigenvalue-greater-than-one rule and the scree plot, each have their shortcomings; parallel analysis provides an alternative approach to deciding how many factors to keep. This article describes the steps of parallel analysis, its underlying logic, and the software used to carry it out, and illustrates with an example how to apply it in exploratory factor analysis.
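The procedure the article describes can be sketched generically. This is an illustration of Horn-style parallel analysis using random normal data of the same shape as the sample; the 95th-percentile threshold is one common convention and is an assumption here, not something taken from the article:

```python
import numpy as np

def parallel_analysis(data, n_sims=100, percentile=95, seed=0):
    """Retain factors whose observed correlation-matrix eigenvalues
    exceed the chosen percentile of eigenvalues obtained from random
    normal data of the same shape (Horn's parallel analysis)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.standard_normal((n, p))
        sim_eigs[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    thresholds = np.percentile(sim_eigs, percentile, axis=0)
    n_retain = int(np.sum(obs_eigs > thresholds))
    return n_retain, obs_eigs, thresholds
```

Unlike the eigenvalue-greater-than-one rule, the cutoff here reflects how large eigenvalues get by chance alone at the given sample size and number of variables.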

11.
Factor analysis and AIC
The information criterion AIC was introduced to extend the method of maximum likelihood to the multimodel situation. It was obtained by relating the successful experience of the order determination of an autoregressive model to the determination of the number of factors in the maximum likelihood factor analysis. The use of the AIC criterion in the factor analysis is particularly interesting when it is viewed as the choice of a Bayesian model. This observation shows that the area of application of AIC can be much wider than the conventional i.i.d. type models on which the original derivation of the criterion was based. The observation of the Bayesian structure of the factor analysis model leads us to the handling of the problem of improper solution by introducing a natural prior distribution of factor loadings. The author would like to express his thanks to Jim Ramsay, Yoshio Takane, Donald Ramirez and Hamparsum Bozdogan for helpful comments on the original version of the paper. Thanks are also due to Emiko Arahata for her help in computing.
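The order-selection logic carried over from autoregressive modelling can be sketched as follows. This is a generic EM implementation of maximum likelihood factor analysis with the usual free-parameter count, not Akaike's own program; the iteration cap and the small floor on the uniquenesses are arbitrary choices of ours:

```python
import numpy as np

def fa_em(S, n, k, n_iter=500):
    """ML factor analysis of a p x p covariance matrix S (from n
    observations) with k factors, via the EM algorithm; returns
    loadings, uniquenesses, and the maximised log-likelihood."""
    p = S.shape[0]
    rng = np.random.default_rng(k)
    lam = 0.1 * rng.standard_normal((p, k))   # random starting loadings
    psi = np.diag(S).copy()                   # starting uniquenesses
    for _ in range(n_iter):
        sigma = lam @ lam.T + np.diag(psi)
        beta = np.linalg.solve(sigma, lam).T  # k x p regression weights
        ezz = np.eye(k) - beta @ lam + beta @ S @ beta.T
        lam = S @ beta.T @ np.linalg.inv(ezz)
        # Floor the uniquenesses to guard against Heywood cases.
        psi = np.maximum(np.diag(S - lam @ beta @ S), 1e-6)
    sigma = lam @ lam.T + np.diag(psi)
    _, logdet = np.linalg.slogdet(sigma)
    ll = -0.5 * n * (p * np.log(2 * np.pi) + logdet
                     + np.trace(np.linalg.solve(sigma, S)))
    return lam, psi, ll

def fa_aic(S, n, k):
    """AIC = -2 log L + 2m, with m = p(k + 1) - k(k - 1)/2 free
    parameters (loadings plus uniquenesses, minus rotational freedom)."""
    _, _, ll = fa_em(S, n, k)
    m = S.shape[0] * (k + 1) - k * (k - 1) // 2
    return -2.0 * ll + 2.0 * m
```

On a covariance matrix generated by a one-factor model, the one-factor AIC should come out below that of over-factored models: the extra factors cannot improve the fit, while the penalty term grows.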

12.
A direct method for handling incomplete data in general covariance structural models is investigated. Asymptotic statistical properties of the generalized least squares method are developed. It is shown that this approach has very close relationships with the maximum likelihood approach. Iterative procedures for obtaining the generalized least squares estimates, the maximum likelihood estimates, as well as their standard error estimates are derived. Computer programs for the confirmatory factor analysis model are implemented. A longitudinal type data set is used as an example to illustrate the results. This research was supported in part by Research Grant DAD1070 from the U.S. Public Health Service. The author is indebted to anonymous reviewers for some very valuable suggestions. Computer funding is provided by the Computer Services Centre, The Chinese University of Hong Kong.

13.
This article presents the results of two Monte Carlo simulation studies of the recovery of weak factor loadings, in the context of confirmatory factor analysis, for models that do not exactly hold in the population. This issue has not been examined in previous research. Model error was introduced using a procedure that allows for specifying a covariance structure with a specified discrepancy in the population. The effects of sample size, estimation method (maximum likelihood vs. unweighted least squares), and factor correlation were also considered. The first simulation study examined recovery for models correctly specified with the known number of factors, and the second investigated recovery for models incorrectly specified by underfactoring. The results showed that recovery was not affected by model discrepancy for the correctly specified models but was affected for the incorrectly specified models. Recovery improved in both studies when factors were correlated, and unweighted least squares performed better than maximum likelihood in recovering the weak factor loadings.

14.
A plausible s-factor solution for many types of psychological and educational tests is one that exhibits a general factor and s − 1 group or method related factors. The bi-factor solution results from the constraint that each item has a nonzero loading on the primary dimension and at most one of the s − 1 group factors. This paper derives a bi-factor item-response model for binary response data. In marginal maximum likelihood estimation of item parameters, the bi-factor restriction leads to a major simplification of likelihood equations and (a) permits analysis of models with large numbers of group factors; (b) permits conditional dependence within identified subsets of items; and (c) provides more parsimonious factor solutions than an unrestricted full-information item factor analysis in some cases. Supported by the Cognitive Science Program, Office of Naval Research, under grant #N00014-89-J-1104. We would like to thank Darrell Bock for several helpful suggestions.

15.
Simultaneous factor analysis in several populations
This paper is concerned with the study of similarities and differences in factor structures between different groups. A common situation occurs when a battery of tests has been administered to samples of examinees from several populations. A very general model is presented, in which any parameter in the factor analysis models (factor loadings, factor variances, factor covariances, and unique variances) for the different groups may be assigned an arbitrary value or constrained to be equal to some other parameter. Given such a specification, the model is estimated by the maximum likelihood method, yielding a large-sample χ² test of goodness of fit. By computing several solutions under different specifications one can test various hypotheses. The method is capable of dealing with any degree of invariance, from the one extreme, where nothing is invariant, to the other extreme, where everything is invariant. Neither the number of tests nor the number of common factors need be the same for all groups, but, to be at all interesting, it is assumed that there is a common core of tests in each battery that is the same or at least content-wise comparable. This research was supported by grant NSF-GB-12959 from the National Science Foundation. My thanks are due to Michael Browne for his comments on an earlier draft of this paper and to Marielle van Thillo who checked the mathematical derivations and wrote and debugged the computer program SIFASP. Now at Statistics Department, University of Uppsala, Sweden.

16.
A reliability coefficient for maximum likelihood factor analysis
Maximum likelihood factor analysis provides an effective method for estimation of factor matrices and a useful test statistic in the likelihood ratio for rejection of overly simple factor models. A reliability coefficient is proposed to indicate quality of representation of interrelations among attributes in a battery by a maximum likelihood factor analysis. Usually, for a large sample of individuals or objects, the likelihood ratio statistic could indicate that an otherwise acceptable factor model does not exactly represent the interrelations among the attributes for a population. The reliability coefficient could indicate a very close representation in this case and be a better indication as to whether to accept or reject the factor solution. This research was supported by the Personnel and Training Research Programs Office of the Office of Naval Research under contract US NAVY/00014-67-A-0305-0003. Critical review of the development and suggestions by Richard Montanelli were most helpful.
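The coefficient described appears to be what is now usually called the Tucker-Lewis index. Assuming the standard formula, based on the chi-square/df ratios of the independence (null) model and the fitted k-factor model, the coefficient is one line of arithmetic:

```python
def tucker_lewis(chi2_null, df_null, chi2_model, df_model):
    """Reliability coefficient computed from the likelihood-ratio
    chi-squares of the independence model and the k-factor model;
    values near 1 indicate the factor model represents the
    interrelations nearly as well as possible."""
    null_ratio = chi2_null / df_null
    return (null_ratio - chi2_model / df_model) / (null_ratio - 1.0)
```

This illustrates the abstract's point: with a huge sample a model can fail the exact likelihood-ratio test (chi2_model noticeably above df_model) and yet score close to 1 here, signalling a very close, if not exact, representation.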

17.
A jackknife-like procedure is developed for producing standard errors of estimate in maximum likelihood factor analysis. Unlike earlier methods based on information theory, the procedure developed is computationally feasible on larger problems. Unlike earlier methods based on the jackknife, the present procedure is not plagued by the factor alignment problem, the Heywood case problem, or the necessity to jackknife by groups. Standard errors may be produced for rotated and unrotated loading estimates using either orthogonal or oblique rotation as well as for estimates of unique factor variances and common factor correlations. The total cost for larger problems is a small multiple of the square of the number of variables times the number of observations used in the analysis. Examples are given to demonstrate the feasibility of the method. The research done by R. I. Jennrich was supported in part by NSF Grant MCS 77-02121. The research done by D. B. Clarkson was supported in part by NSERC Grant A3109.

18.
Influence analysis is an important component of data analysis, and the local influence approach has been widely applied to many statistical models to identify influential observations and assess minor model perturbations since the pioneering work of Cook (1986). The approach is often adopted to develop influence analysis procedures for factor analysis models with ranking data. However, as this well-known approach is based on the observed-data likelihood, which involves multidimensional integrals, directly applying it to develop influence analysis procedures for factor analysis models with ranking data is difficult. To address this difficulty, a Monte Carlo expectation-maximization (MCEM) algorithm is used to obtain the maximum-likelihood estimate of the model parameters, and measures for influence analysis based on the conditional expectation of the complete-data log-likelihood at the E-step of the MCEM algorithm are then obtained. Very little additional computation is needed to compute the influence measures, because it is possible to make use of the by-products of the estimation procedure. Influence measures based on several typical perturbation schemes are discussed in detail, and the proposed method is illustrated with two real examples and an artificial example.

19.
In restricted statistical models, since the first derivatives of the likelihood displacement are often nonzero, the commonly adopted formulation for local influence analysis is not appropriate. However, there are two kinds of model restrictions under which the first derivatives of the likelihood displacement are still zero. General formulas for assessing local influence under these restrictions are derived and applied to factor analysis, as the restriction usually used in factor analysis satisfies the conditions. Various influence schemes are introduced and a comparison with the influence function approach is discussed. It is also shown that local influence for factor analysis is invariant to the scale of the data and is independent of the rotation of the factor loadings. The authors are most grateful to the referees, the Associate Editor, and the Editor for helpful suggestions for improving the clarity of the paper.

20.
The “factor” analyses published by Schultz, Kaye, and Hoyer (1980) confused component and factor analysis and led in this case as in many others to unwarranted conclusions. They used component analysis to develop factor models that were subjected to restricted (confirmatory) maximum likelihood analysis, but the final models for which good fits with the observed correlations were obtained were not common factor models. They were, however, discussed as such and conclusions drawn accordingly. When their correlation matrices are analyzed by the principal factors method, two factors are sufficient to account for the intercorrelations. These two factors generally support the a priori expectation of a difference between intelligence tasks and spontaneous flexibility tasks. They are also quite similar in younger and older subjects, when similarity is judged in terms of factor pattern. Factor loadings for the younger subjects, however, are much smaller than expectations based on the respective ranges of talent in the two groups of subjects or on past experience with similar tests in undergraduate student populations.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)