Similar Literature
 20 similar records retrieved.
1.
Exploratory factor analysis (EFA) is often conducted with ordinal data (e.g., items with 5-point responses) in the social and behavioral sciences. These ordinal variables are often treated as if they were continuous in practice. An alternative strategy is to assume that a normally distributed continuous variable underlies each ordinal variable. The EFA model is specified for these underlying continuous variables rather than the observed ordinal variables. Although these underlying continuous variables are not observed directly, their correlations can be estimated from the ordinal variables. These correlations are referred to as polychoric correlations. This article is concerned with ordinary least squares (OLS) estimation of parameters in EFA with polychoric correlations. Standard errors and confidence intervals for rotated factor loadings and factor correlations are presented. OLS estimates and the associated standard error estimates and confidence intervals are illustrated using personality trait ratings from 228 college students. Statistical properties of the proposed procedure are explored using a Monte Carlo study. The empirical illustration and the Monte Carlo study showed that (a) OLS estimation of EFA is feasible with large models, (b) point estimates of rotated factor loadings are unbiased, (c) point estimates of factor correlations are slightly negatively biased with small samples, and (d) standard error estimates and confidence intervals perform satisfactorily at moderately large samples.  相似文献   
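As a hedged illustration of the estimation idea described above (not the authors' implementation), the sketch below fits an unrotated EFA model to a correlation matrix, such as a polychoric matrix estimated elsewhere, by ordinary (unweighted) least squares: for candidate uniquenesses it takes the best rank-k approximation of the reduced correlation matrix and minimizes the squared residuals. The function name `ols_efa` and its defaults are made up for the example; rotation, standard errors, and confidence intervals are omitted.

```python
# Minimal OLS (unweighted least squares) EFA extraction from a correlation matrix.
# Assumes the (e.g., polychoric) correlation matrix R has already been estimated.
import numpy as np
from scipy.optimize import minimize

def ols_efa(R, n_factors):
    """Minimize ||R - (L L' + diag(psi))||_F^2 over the uniquenesses psi."""
    p = R.shape[0]

    def loadings_for(psi):
        # Best rank-k approximation of the reduced correlation matrix R - diag(psi).
        vals, vecs = np.linalg.eigh(R - np.diag(psi))
        top = np.argsort(vals)[::-1][:n_factors]
        return vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))

    def objective(psi):
        L = loadings_for(psi)
        resid = R - (L @ L.T + np.diag(psi))
        return np.sum(resid ** 2)

    start = np.full(p, 0.5)  # crude starting uniquenesses
    fit = minimize(objective, start, bounds=[(0.005, 0.995)] * p, method="L-BFGS-B")
    return loadings_for(fit.x), fit.x  # unrotated loadings, estimated uniquenesses
```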

2.
Exploratory factor analysis (EFA) is a commonly used statistical technique for examining the relationships between variables (e.g., items) and the factors (e.g., latent traits) they depict. There are several decisions that must be made when using EFA, with one of the more important being choice of the rotation criterion. This selection can be arduous given the numerous rotation criteria available and the lack of research/literature that compares their function and utility. Historically, researchers have chosen rotation criteria based on whether or not factors are correlated and have failed to consider other important aspects of their data. This study reviews several rotation criteria, demonstrates how they may perform with different factor pattern structures, and highlights for researchers subtle but important differences between each rotation criterion. The choice of rotation criterion is critical to ensure researchers make informed decisions as to when different rotation criteria may or may not be appropriate. The results suggest that depending on the rotation criterion selected and the complexity of the factor pattern matrix, the interpretation of the interfactor correlations and factor pattern loadings can vary substantially. Implications and future directions are discussed.  相似文献   
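Because the abstract turns on the choice of rotation criterion, here is a minimal numpy sketch of one widely used orthogonal criterion, varimax, via the standard SVD-based orthomax iteration. Oblique criteria (e.g., oblimin, geomin) require different algorithms and are what produce the interfactor correlations discussed above; this is an illustrative sketch, not code from the study.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthomax rotation of a loading matrix; gamma=1 gives varimax, gamma=0 quartimax."""
    p, k = loadings.shape
    T = np.eye(k)
    score = 0.0
    for _ in range(max_iter):
        L = loadings @ T
        # Update target for the orthomax criterion.
        B = loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
        U, s, Vt = np.linalg.svd(B)
        T = U @ Vt
        if s.sum() < score * (1.0 + tol):   # stop when the criterion no longer improves
            break
        score = s.sum()
    return loadings @ T, T  # rotated loadings, rotation matrix
```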

3.
Exploratory factor analysis is a popular statistical technique used in communication research. Although exploratory factor analysis (EFA) and principal components analysis (PCA) are different techniques, PCA is often employed incorrectly to reveal latent constructs (i.e., factors) of observed variables, which is the purpose of EFA. PCA is more appropriate for reducing measured variables into a smaller set of variables (i.e., components) while retaining as much of the total variance in the measured variables as possible. Furthermore, the popular use of varimax rotation raises some concerns about the relationships among the factors that researchers claim to discover. This paper discusses the distinct purposes of PCA and EFA, using two data sets as examples to highlight the differences in results between these procedures, and also reviews the use of each technique in three major communication journals: Communication Monographs, Human Communication Research, and Communication Research.  相似文献
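To make the PCA/EFA distinction concrete, the hedged sketch below computes component loadings from the full correlation matrix and, for contrast, principal-axis factor loadings from a reduced correlation matrix whose diagonal holds iteratively updated communalities. Both functions are illustrative and are not taken from the paper.

```python
import numpy as np

def pca_loadings(R, k):
    """Component loadings: eigenvectors of R scaled by sqrt(eigenvalues); total variance is retained."""
    vals, vecs = np.linalg.eigh(R)
    top = np.argsort(vals)[::-1][:k]
    return vecs[:, top] * np.sqrt(vals[top])

def paf_loadings(R, k, n_iter=50):
    """Principal-axis factoring: model only common variance by replacing the diagonal
    of R with communalities (initialized at squared multiple correlations)."""
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        R_reduced = R - np.diag(np.diag(R)) + np.diag(h2)
        vals, vecs = np.linalg.eigh(R_reduced)
        top = np.argsort(vals)[::-1][:k]
        L = vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))
        h2 = np.sum(L ** 2, axis=1)     # updated communalities
    return L
```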

4.
Exploratory Factor Analysis (EFA) is a widely used statistical technique to discover the structure of latent unobserved variables, called factors, from a set of observed variables. EFA exploits the property of rotation invariance of the factor model to enhance factors' interpretability by building a sparse loading matrix. In this paper, we propose an optimization-based procedure to give meaning to the factors arising in EFA by means of an additional set of variables, called explanatory variables, which may include in particular the set of observed variables. A goodness-of-fit criterion is introduced which quantifies the quality of the interpretation given this way. Our methodology also exploits the rotational invariance of EFA to obtain the best orthogonal rotation of the factors in terms of the goodness-of-fit while making them match some of the explanatory variables, thus going beyond traditional rotation methods. Therefore, our approach allows the analyst to interpret the factors not only in terms of the observed variables, but also in terms of a broader set of variables. Our experimental results demonstrate how our approach enhances interpretability in EFA, first with an empirical dataset concerning volumes of reservoirs in California, and second with a synthetic data example.  相似文献
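The paper's procedure rotates the factors to match explanatory variables under an explicit goodness-of-fit criterion. As a hedged illustration of the underlying idea only (not the authors' optimization), an orthogonal rotation of a loading or factor-structure matrix toward a user-supplied target matrix of the same shape can be obtained with orthogonal Procrustes:

```python
import numpy as np

def procrustes_rotation(L, target):
    """Orthogonal matrix T minimizing ||L @ T - target||_F (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(L.T @ target)
    T = U @ Vt
    return L @ T, T  # rotated matrix, rotation
```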

5.
Exploratory factor analysis (EFA) is a widely used statistical method in traffic and transportation research, particularly for the development and validation of measurement instruments. This article critically examines current practices in conducting and reporting EFA in published transportation studies. One hundred and eighty papers published between 2016 and 2018 were examined, of which eighty-two were included in the present study after applying eligibility criteria. The review suggests that the quality of EFA reported in the field is routinely poor: (a) researchers fail to provide sufficient information to be able to adequately assess the appropriateness and quality of both the input data and the reported output; and (b) the decisions underlying the choices of EFA methods are not justified and rely mostly on procedures advised against, particularly the Little-Jiffy approach. In summary, a significant gap between current practice and experts' recommendations exists. We provide some guidelines that may help in conducting, reporting and reviewing EFA in transportation research.  相似文献   

6.
Parallel analysis (PA) is an often-recommended approach for assessment of the dimensionality of a variable set. PA is known in different variants, which may yield different dimensionality indications. In this article, the authors considered the most appropriate PA procedure to assess the number of common factors underlying ordered polytomously scored variables. They proposed minimum rank factor analysis (MRFA) as an extraction method, rather than the currently applied principal component analysis (PCA) and principal axes factoring. A simulation study, based on data with major and minor factors, showed that all procedures consistently point at the number of major common factors. A polychoric-based PA slightly outperformed a Pearson-based PA, but convergence problems may hamper its empirical application. In empirical practice, PA-MRFA with a 95% threshold based on polychoric correlations or, in case of nonconvergence, Pearson correlations with mean thresholds appear to be a good choice for identification of the number of common factors. PA-MRFA is a common-factor-based method and performed best in the simulation experiment. PA based on PCA with a 95% threshold is second best, as this method showed good performances in the empirically relevant conditions of the simulation experiment.  相似文献   
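For orientation, the sketch below implements the common PCA-based variant of parallel analysis with a 95% threshold on Pearson correlations, i.e., the "second best" procedure in the abstract; the proposed PA-MRFA and the polychoric-based versions require an MRFA extraction and polychoric estimation that are not shown. Names and defaults are illustrative.

```python
import numpy as np

def parallel_analysis_pca(X, n_sims=500, percentile=95, seed=0):
    """Retain components whose observed correlation-matrix eigenvalues exceed the chosen
    percentile of eigenvalues obtained from random normal data of the same size."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    random_eigs = np.empty((n_sims, p))
    for s in range(n_sims):
        Z = rng.standard_normal((n, p))
        random_eigs[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    threshold = np.percentile(random_eigs, percentile, axis=0)
    return int(np.sum(observed > threshold)), observed, threshold
```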

7.
Quantifying construct validity: two simple measures   (Times cited: 5; self-citations: 0; other citations: 5)
Construct validity is one of the most central concepts in psychology. Researchers generally establish the construct validity of a measure by correlating it with a number of other measures and arguing from the pattern of correlations that the measure is associated with these variables in theoretically predictable ways. This article presents 2 simple metrics for quantifying construct validity that provide effect size estimates indicating the extent to which the observed pattern of correlations in a convergent-discriminant validity matrix matches the theoretically predicted pattern of correlations. Both measures, based on contrast analysis, provide simple estimates of validity that can be compared across studies, constructs, and measures meta-analytically, and can be implemented without the use of complex statistical procedures that may limit their accessibility.  相似文献
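One of the two indices from this line of work (r_alerting-CV) is essentially the correlation between the theoretically predicted and the observed pattern of convergent-discriminant correlations; the second index additionally adjusts for the intercorrelations among the criterion measures and is not sketched here. The sketch and the numbers below are hypothetical, shown only to make the idea concrete.

```python
import numpy as np

def r_alerting_cv(observed_rs, predicted_pattern):
    """Correlation between the observed validity correlations and the
    theoretically predicted pattern (contrast weights)."""
    return np.corrcoef(observed_rs, predicted_pattern)[0, 1]

# Hypothetical example: predicted pattern (contrast weights) vs. observed correlations.
predicted = np.array([0.6, 0.5, 0.2, 0.0, -0.3])
observed = np.array([0.55, 0.40, 0.25, 0.05, -0.20])
print(round(r_alerting_cv(observed, predicted), 3))
```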

8.
Exploratory factor analysis: a review of the past 10 years   (Times cited: 13; self-citations: 0; other citations: 0)
Objectives: (1) to introduce the basic views of the international psychometric community on several major issues in exploratory factor analysis; (2) to systematically review how this technique was used in Chinese psychological research over the past 10 years (1991-2000); and (3) to highlight issues that deserve attention when applying the technique, so that it can play a greater role in Chinese psychological research. Method: The authors searched Acta Psychologica Sinica and Psychological Science for articles related to exploratory factor analysis published between 1991 and 2000, coded the feature articles in which exploratory factor analysis was the primary research method, and tabulated frequencies and percentage distributions. Results: Some problems remain in the otherwise active use of this advanced statistical technique in Chinese psychological research, chiefly: (1) a tendency to rely mechanically on a single method when deciding on the number of factors; (2) heavy use of orthogonal rotation; (3) over-reliance on SPSS; and (4) insufficient reporting of important information and results from the factor analysis process. Conclusion: Exploratory factor analysis has been widely applied over the past decade; if some of the views of international colleagues are absorbed, the technique will see even broader and more effective use in Chinese psychological research.  相似文献

9.
Confirmatory factor analysis (CFA) is widely used for examining hypothesized relations among ordinal variables (e.g., Likert-type items). A theoretically appropriate method fits the CFA model to polychoric correlations using either weighted least squares (WLS) or robust WLS. Importantly, this approach assumes that a continuous, normal latent process determines each observed variable. The extent to which violations of this assumption undermine CFA estimation is not well-known. In this article, the authors empirically study this issue using a computer simulation study. The results suggest that estimation of polychoric correlations is robust to modest violations of underlying normality. Further, WLS performed adequately only at the largest sample size but led to substantial estimation difficulties with smaller samples. Finally, robust WLS performed well across all conditions.  相似文献   
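The polychoric correlations that such CFA models are fitted to are commonly estimated with a two-step approach: thresholds from the marginal proportions of each ordinal variable, then a one-dimensional likelihood search for the latent correlation. The sketch below is a bare-bones illustration of that idea (no standard errors, no safeguards for sparse tables), not the estimator used in the simulation study.

```python
import numpy as np
from scipy import optimize, stats

def polychoric(x, y):
    """Two-step polychoric correlation for two ordinal variables (any discrete coding)."""
    def thresholds(v):
        cats = np.unique(v)
        cum = np.cumsum([np.mean(v == c) for c in cats])[:-1]
        # +/-8 stand in for +/-infinity on the standard normal scale.
        return np.concatenate(([-8.0], stats.norm.ppf(cum), [8.0])), cats

    tx, cats_x = thresholds(x)
    ty, cats_y = thresholds(y)
    counts = np.array([[np.sum((x == a) & (y == b)) for b in cats_y] for a in cats_x])

    def bvn_cdf(a, b, rho):
        return stats.multivariate_normal(mean=[0.0, 0.0],
                                         cov=[[1.0, rho], [rho, 1.0]]).cdf([a, b])

    def negloglik(rho):
        ll = 0.0
        for i in range(len(cats_x)):
            for j in range(len(cats_y)):
                # Probability of cell (i, j) under the bivariate normal model.
                p = (bvn_cdf(tx[i + 1], ty[j + 1], rho) - bvn_cdf(tx[i], ty[j + 1], rho)
                     - bvn_cdf(tx[i + 1], ty[j], rho) + bvn_cdf(tx[i], ty[j], rho))
                ll += counts[i, j] * np.log(max(p, 1e-12))
        return -ll

    fit = optimize.minimize_scalar(negloglik, bounds=(-0.999, 0.999), method="bounded")
    return fit.x
```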

10.
Data in psychology are often collected using Likert‐type scales, and it has been shown that factor analysis of Likert‐type data is better performed on the polychoric correlation matrix than on the product‐moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real‐data example indicates that estimates by ridge GLS are 9–20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich‐type standard errors following the ridge GLS methods also perform reasonably well.  相似文献   
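For reference, the (A)GLS discrepancy function discussed here minimizes a weighted quadratic form in the residual polychoric correlations; LS and DWLS correspond to replacing the inverse weight matrix with the identity or with the inverse of its diagonal, and ridge-type methods stabilize a near-singular weight matrix by adding a multiple of the identity. The ridge form shown is a generic regularization written for illustration, not necessarily the exact ridge GLS estimator proposed in the paper.

```latex
% r: vector of sample polychoric correlations; \rho(\theta): model-implied correlations;
% \widehat{W}: estimate of the asymptotic covariance matrix of r.
F_{\mathrm{AGLS}}(\theta) = \bigl(r - \rho(\theta)\bigr)^{\top}\,\widehat{W}^{-1}\,\bigl(r - \rho(\theta)\bigr), \qquad
F_{\mathrm{LS}}(\theta) = \bigl(r - \rho(\theta)\bigr)^{\top}\bigl(r - \rho(\theta)\bigr),
\]
\[
F_{\mathrm{DWLS}}(\theta) = \bigl(r - \rho(\theta)\bigr)^{\top}\bigl[\operatorname{diag}\widehat{W}\bigr]^{-1}\bigl(r - \rho(\theta)\bigr), \qquad
\text{ridge-type: replace } \widehat{W} \text{ by } \widehat{W} + aI,\; a > 0.
```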

11.
Ruscio J, Roche B. Psychological Assessment, 2012, 24(2): 282-292
Exploratory factor analysis (EFA) is used routinely in the development and validation of assessment instruments. One of the most significant challenges when one is performing EFA is determining how many factors to retain. Parallel analysis (PA) is an effective stopping rule that compares the eigenvalues of randomly generated data with those for the actual data. PA takes into account sampling error, and at present it is widely considered the best available method. We introduce a variant of PA that goes even further by reproducing the observed correlation matrix rather than generating random data. Comparison data (CD) with known factorial structure are first generated using 1 factor, and then the number of factors is increased until the reproduction of the observed eigenvalues fails to improve significantly. We evaluated the performance of PA, CD with known factorial structure, and 7 other techniques in a simulation study spanning a wide range of challenging data conditions. In terms of accuracy and robustness across data conditions, the CD technique outperformed all other methods, including a nontrivial superiority to PA. We provide program code to implement the CD technique, which requires no more specialized knowledge or skills than performing PA.  相似文献   

12.
Exploratory factor analysis (EFA) is an extremely popular method for determining the underlying factor structure for a set of variables. Due to its exploratory nature, EFA is notorious for being conducted with small sample sizes, and recent reviews of psychological research have reported that between 40% and 60% of applied studies have 200 or fewer observations. Recent methodological studies have addressed small size requirements for EFA models; however, these models have only considered complete data, which are the exception rather than the rule in psychology. Furthermore, the extant literature on missing data techniques with small samples is scant, and nearly all existing studies focus on topics that are not of primary interest to EFA models. Therefore, this article presents a simulation to assess the performance of various missing data techniques for EFA models with both small samples and missing data. Results show that deletion methods do not extract the proper number of factors and estimate the factor loadings with severe bias, even when data are missing completely at random. Predictive mean matching is the best method overall when considering extracting the correct number of factors and estimating factor loadings without bias, although 2-stage estimation was a close second.  相似文献   
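Predictive mean matching, the best-performing method in this simulation, replaces each missing value with an observed value borrowed from a "donor" case whose regression-predicted score is close to that of the incomplete case. The sketch below is a single-imputation, non-Bayesian illustration for one incomplete variable; multiple-imputation implementations (e.g., mice-style) additionally draw the regression coefficients and repeat the imputation several times. All names are illustrative.

```python
import numpy as np

def pmm_impute(y, X, n_donors=5, seed=0):
    """Impute missing entries of y by predictive mean matching on fully observed predictors X."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    miss = np.isnan(y)
    Xd = np.column_stack([np.ones(len(y)), X])            # design matrix with intercept
    beta, *_ = np.linalg.lstsq(Xd[~miss], y[~miss], rcond=None)
    pred = Xd @ beta                                       # predicted means for all cases
    y_imp = y.copy()
    y_obs, pred_obs = y[~miss], pred[~miss]
    for i in np.where(miss)[0]:
        donors = np.argsort(np.abs(pred_obs - pred[i]))[:n_donors]
        y_imp[i] = y_obs[donors[rng.integers(n_donors)]]   # borrow an observed value at random
    return y_imp
```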

13.
Many researchers studying the effectiveness of working in groups have compared group performance with the scores of individuals combined into nominal groups. Traditionally, methods for forming nominal groups have been shown to be poor, and more recent procedures (Wright, 2007) are difficult to use for complex designs and are inflexible. A new procedure is introduced and tested in which thousands of possible combinations of nominal groups are sampled. Sample characteristics, such as the mean, variance, and distribution, of all these sets are calculated, and the set that is most representative of all of these sets is returned. The user can choose among different ways of conceptualizing the meaning of most representative, but on the basis of simulations and the fact that most subsequent statistical procedures are based on the mean and variance, we argue that finding the set with the mean and variance most similar to the means of the representative statistics for all of the sets is the preferred approach. The algorithm is implemented in a stand-alone C++ executable program and as an R function. Both of these allow anyone to use the procedures freely.  相似文献   
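As a hedged sketch of the sampling idea (not the published C++/R implementation), the function below draws many random partitions of individual scores into nominal groups, scores each group, and returns the partition whose mean and variance of group scores are closest, in standardized distance, to the averages over all sampled partitions. The best-member (maximum) scoring rule is an assumption made for the example; the actual procedure supports other definitions of "most representative."

```python
import numpy as np

def representative_nominal_groups(scores, group_size, n_sets=5000, seed=0):
    """Return the sampled partition into nominal groups whose mean and variance of
    group scores are most typical of all sampled partitions."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    n_groups = len(scores) // group_size
    partitions, means, variances = [], [], []
    for _ in range(n_sets):
        perm = rng.permutation(len(scores))[: n_groups * group_size]
        group_scores = scores[perm].reshape(n_groups, group_size).max(axis=1)  # best-member rule (assumption)
        partitions.append(perm.reshape(n_groups, group_size))
        means.append(group_scores.mean())
        variances.append(group_scores.var(ddof=1))
    means, variances = np.array(means), np.array(variances)
    distance = (((means - means.mean()) / means.std()) ** 2
                + ((variances - variances.mean()) / variances.std()) ** 2)
    return partitions[int(np.argmin(distance))]  # member indices for each nominal group
```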

14.
A scale-invariant index of factorial simplicity is proposed as a summary statistic for principal components and factor analysis. The index ranges from zero to one, and attains its maximum when all variables are simple rather than factorially complex. A scale-free oblique factor transformation method is developed to maximize the index. In addition, a new orthogonal rotation procedure is developed. These factor transformation methods are implemented using rapidly convergent computer programs. Observed results indicate that the procedures produce meaningfully simple factor pattern solutions. This investigation was supported in part by a Research Scientist Development Award (K02-DA00017) and research grants (MH24149 and DA01070) from the U.S. Public Health Service. The assistance of Andrew L. Comrey, Henry F. Kaiser, Bonnie Barron, Marion Hee, and several anonymous reviewers is gratefully acknowledged.  相似文献
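The proposed index is not reproduced in the abstract. As a related, clearly labeled substitute, the sketch below computes a row-wise, scale-free simplicity measure in the spirit of Kaiser's index of factorial simplicity: it also runs from 0 to 1 and reaches 1 when every variable loads on a single factor. It is not the index or the transformation method proposed in this article.

```python
import numpy as np

def factorial_simplicity(L):
    """Row-wise simplicity of a loading matrix L (p variables x k factors): 1 when each
    variable loads on only one factor, 0 when its squared loadings are equal across factors."""
    L2 = L ** 2
    k = L.shape[1]
    num = np.sum(k * np.sum(L2 ** 2, axis=1) - np.sum(L2, axis=1) ** 2)
    den = np.sum((k - 1) * np.sum(L2, axis=1) ** 2)
    return num / den
```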

15.
The aims of the study were (i) to analyse a Norwegian version of the NEO Personality Inventory (NEO-PI), using both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA); (ii) to compare the results of the two factor analytic strategies, both within the present study and across different studies; and (iii) to discuss possible causes of discrepant findings (across factor-analytic methods and across samples). The sample comprised 961 subjects representative of the non-institutionalized Norwegian adult population. Using an EFA strategy, very high coefficients of factor comparability (r=0.93–0.99) across sexes were found. None of the five main domains turned out to be as homogeneous as suggested by the original five-factor model, but most of the deviations from the assumed simple structure were comparable to results from recent American studies. However, none of the revised EFA-based models were supported using CFA methods. Moreover, a large number of modifications were necessary to obtain a model with acceptable fit. It is argued that these discrepant findings can be accounted for, at least in part, by (i) consequences of different model acceptance criteria in the EFA and CFA tradition, (ii) the inherent logical–semantical structure of the NEO-PI, and (iii) consequences of selection effects (factorial invariance problem). © 1997 by John Wiley & Sons, Ltd.  相似文献   
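Coefficients of factor comparability across samples or sexes are often computed as Tucker's congruence coefficients between matched columns of two loading matrices; whether that is the exact coefficient used in this study is an assumption of the sketch, which is shown only to make the cross-group comparison concrete.

```python
import numpy as np

def tucker_congruence(L1, L2):
    """Tucker's coefficient of congruence between matched factors (columns) of two
    loading matrices of the same shape."""
    num = np.sum(L1 * L2, axis=0)
    den = np.sqrt(np.sum(L1 ** 2, axis=0) * np.sum(L2 ** 2, axis=0))
    return num / den   # one value per factor; 1.0 means identical up to scaling
```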

16.
In this article, a Windows program for analyzing measurement invariance in two different populations is described. Factor analysis is a common way of assessing measurement invariance, and restricted factor analysis is now the most popular method. However, applied researchers have usually found that the theoretical advantages of restricted factor analysis do not always apply in practical situations. For example, when the participant sample is large, as is the case in Internet-based questionnaires, the available software for restricted factor analysis might fail to converge on a solution. Our program is based on unrestricted factor analysis and considers the three parameters that define factor invariance: difficulties, discriminations, and residual variances. The statistical significance of the tests for evaluating invariance is obtained using Bootstrap resampling procedures. A real-life example demonstrates the usefulness of the program.  相似文献   

17.
This paper examines the implications of violating assumptions concerning the continuity and distributional properties of data in establishing measurement models in social science research. The General Health Questionnaire-12 uses an ordinal response scale. Responses to the GHQ-12 from 201 Hong Kong immigrants on arrival in Australia showed that the data were not normally distributed. A series of confirmatory factor analyses using either a Pearson product-moment or a polychoric correlation input matrix and employing either maximum likelihood, weighted least squares or diagonally weighted least squares estimation methods were conducted on the data. The parameter estimates and goodness-of-fit statistics provided support for using polychoric correlations and diagonally weighted least squares estimation when analyzing ordinal, nonnormal data.  相似文献   

18.
Several procedures that use summary data to test hypotheses about Pearson correlations and ordinary least squares regression coefficients have been described in various books and articles. To our knowledge, however, no single resource describes all of the most common tests. Furthermore, many of these tests have not yet been implemented in popular statistical software packages such as SPSS and SAS. In this article, we describe all of the most common tests and provide SPSS and SAS programs to perform them. When they are applicable, our code also computes 100 × (1 − α)% confidence intervals corresponding to the tests. For testing hypotheses about independent regression coefficients, we demonstrate one method that uses summary data and another that uses raw data (i.e., Potthoff analysis). When the raw data are available, the latter method is preferred, because use of summary data entails some loss of precision due to rounding.  相似文献
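Two of the most common summary-data procedures alluded to above, comparing Pearson correlations from two independent samples and building a 100 × (1 − α)% confidence interval for a single correlation, reduce to the Fisher z transformation. The sketch below is a generic Python illustration, not the SPSS/SAS code provided with the article.

```python
import numpy as np
from scipy import stats

def compare_independent_rs(r1, n1, r2, n2):
    """Fisher z test of H0: rho1 = rho2 for correlations from two independent samples."""
    z = (np.arctanh(r1) - np.arctanh(r2)) / np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return z, 2 * stats.norm.sf(abs(z))          # test statistic, two-sided p value

def r_confidence_interval(r, n, alpha=0.05):
    """100*(1 - alpha)% confidence interval for a single Pearson correlation."""
    half_width = stats.norm.ppf(1 - alpha / 2) / np.sqrt(n - 3)
    return np.tanh(np.arctanh(r) - half_width), np.tanh(np.arctanh(r) + half_width)
```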

19.
Presented is a sample of computerized methods aimed at multidimensional scaling and psychometric item analysis that offer a dynamic graphical interface to execute analyses and help visualize the results. These methods show how the Lisp-Stat programming language and the ViSta statistical program can be jointly applied to develop powerful computer applications that enhance dynamic graphical analysis methods. The feasibility of this combined strategy relies on two main features: (1) The programming architecture of ViSta enables users to add new statistical methods as plug-ins, which are integrated into the program environment and can make use of all the functions already available in ViSta (e.g., data manipulation, editing, printing); and (2) the set of powerful statistical and graphical functions integrated into the Lisp-Stat programming language provides the means for developing statistical methods with dynamic graphical visualizations, which can be implemented as ViSta plug-ins.  相似文献   

20.
Attacks by communication scholars on exploratory factor analysis (EFA) have cast doubt on prior findings based on the technique. The present study is one in a series of studies performed to test the ability of EFA to produce results that replicate known dimensions in a data set. It was designed to determine which of 7 initial extraction techniques produce the highest factor fidelity across 5 item distribution shapes and 3 sample sizes. Monte-Carlo-created data sets with known factors, known item distribution shapes, and a 30% error rate were submitted to EFA. Results from an analysis of variance (ANOVA) indicate that image analysis reaches perfect factor fidelity with a smaller number of cases regardless of item distribution shape. This and other results reported in this study suggest that findings based on EFA should be viewed with cautious optimism and be evaluated according to the findings from this and similar studies.  相似文献
