Similar Documents
20 similar documents found.
1.
Multitrait-multimethod (MTMM) matrices are often analyzed by means of confirmatory factor analysis (CFA). However, fitting MTMM models often leads to improper solutions or nonconvergence. In an attempt to overcome these problems, various alternative CFA models have been proposed, but none of them completely eliminates improper solutions. In the present paper, an approach is proposed in which improper solutions are ruled out altogether and convergence is guaranteed. The approach is based on constrained variants of components analysis (CA). Besides never yielding improper solutions, these methods have the advantage of providing component scores, which can subsequently be used to relate the components to external variables. The new methods are illustrated on simulated as well as empirical data sets. This research has been made possible by a fellowship from the Royal Netherlands Academy of Arts and Sciences to the first author. The authors are obliged to three anonymous reviewers and an associate editor for constructive suggestions on the first version of this paper.

2.
A Monte Carlo approach was employed to investigate the interpretability of improper solutions caused by sampling error in maximum likelihood confirmatory factor analysis. Four models were studied with two sample sizes. Of the overall goodness-of-fit indices provided by the LISREL VI program, significant differences between improper and proper solutions were found only for the root mean square residual. As expected, indicators of the factor on which the negative uniqueness estimate occurred had biased loadings, and the correlations of that factor with other factors were also biased. In contrast, the loadings of indicators on other factors, and the intercorrelations among those factors, showed no bias of practical significance. For initial solutions with one negative uniqueness estimate, three respecifications were studied: fix the uniqueness at .00, fix it at .20, or constrain the domain of the solution to be proper. For alternate, respecified solutions that converged and were proper, the constrained solutions and the solutions with uniqueness fixed at .00 were equivalent. The mean goodness-of-fit and pattern coefficient values for the original improper solutions were not meaningfully different from those obtained under the constrained and fixed-at-.00 respecifications. This investigation was supported in part by a grant from the Baylor University Research Committee (#018-F83-URC). The authors gratefully acknowledge the comments and suggestions of Claes Fornell and Roger E. Kirk, and the assistance of Timothy J. Vance with the analysis.
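A minimal sketch of the kind of Monte Carlo check the study describes, assuming an invented one-factor population rather than the article's actual four models and sample sizes: fit by unconstrained maximum likelihood, then flag replications whose uniqueness estimates go negative. The loadings, sample size, and replication count below are all illustrative choices, not the study's design.

```python
# Sketch only: invented one-factor population; flags improper (negative-
# uniqueness) and nonconvergent ML solutions across Monte Carlo replications.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
lam_pop = np.array([0.8, 0.7, 0.6, 0.5])     # illustrative population loadings
psi_pop = 1.0 - lam_pop**2                   # population uniquenesses
p = len(lam_pop)

def ml_discrepancy(theta, S):
    # F_ML = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p, psi left unconstrained
    lam, psi = theta[:p], theta[p:]
    Sigma = np.outer(lam, lam) + np.diag(psi)
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:
        return np.inf                        # Sigma must stay positive definite
    return logdet + np.trace(S @ np.linalg.inv(Sigma)) \
        - np.linalg.slogdet(S)[1] - p

n, reps, improper, nonconvergent = 75, 200, 0, 0
for _ in range(reps):
    X = rng.standard_normal((n, 1)) @ lam_pop[None, :] \
        + rng.standard_normal((n, p)) * np.sqrt(psi_pop)
    S = np.cov(X, rowvar=False)
    res = minimize(ml_discrepancy, np.r_[lam_pop, psi_pop], args=(S,),
                   method="Nelder-Mead", options={"maxiter": 5000})
    if not res.success:
        nonconvergent += 1
    elif np.any(res.x[p:] < 0):              # a Heywood case
        improper += 1
print(f"improper: {improper}/{reps}, nonconvergent: {nonconvergent}/{reps}")
```

Each of the article's respecifications then amounts to refitting with the offending uniqueness fixed (at .00 or .20) or with the uniquenesses bounded below at zero.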

3.
A Monte Carlo study assessed the effect of sampling error and model characteristics on the occurrence of nonconvergent solutions, improper solutions, and the distribution of goodness-of-fit indices in maximum likelihood confirmatory factor analysis. Nonconvergent and improper solutions occurred more frequently for smaller sample sizes and for models with fewer indicators per factor. Effects of practical significance due to sample size, the number of indicators per factor, and the number of factors were found for GFI, AGFI, and RMR, whereas no practical effects were found for the probability values associated with the chi-square likelihood ratio test. James Anderson is now at the J. L. Kellogg Graduate School of Management, Northwestern University. The authors gratefully acknowledge the comments and suggestions of Kenneth Land and the reviewers, and the assistance of A. Narayanan with the analysis. Support for this research was provided by the Graduate School of Business and the University Research Institute of the University of Texas at Austin.
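For reference, a sketch of the three indices for which practical effects were found, using what I take to be the standard LISREL-era definitions (worth verifying against the program manual). Here S is the sample covariance matrix, Sigma the model-implied matrix, and df the model degrees of freedom; the function names are my own.

```python
import numpy as np

def fit_indices(S, Sigma, df):
    """GFI, AGFI, and RMR under the usual ML-based definitions (sketch)."""
    p = S.shape[0]
    A = np.linalg.solve(Sigma, S)                  # Sigma^{-1} S
    I = np.eye(p)
    gfi = 1.0 - np.trace((A - I) @ (A - I)) / np.trace(A @ A)
    agfi = 1.0 - (p * (p + 1) / (2.0 * df)) * (1.0 - gfi)
    tri = np.tril_indices(p)                       # lower triangle with diagonal
    rmr = np.sqrt(np.mean((S - Sigma)[tri] ** 2))  # root mean square residual
    return gfi, agfi, rmr
```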

4.
When some of the observed variates do not conform to the model under consideration, they can seriously affect the results of statistical analysis. In factor analysis, a model with inconsistent variates may yield improper solutions. In this article a method for identifying inconsistent variates in factor analysis is proposed. The procedure is based on the likelihood principle. Several statistical properties, such as the effect of misspecified hypotheses, the problem of multiple comparisons, and robustness to violation of distributional assumptions, are investigated. The procedure is illustrated with examples.

5.
Yutaka Kano. Psychometrika, 1990, 55(2): 277-291.
Based on the usual factor analysis model, this paper investigates the relationship between improper solutions and the number of factors, and discusses the properties of the noniterative estimation method of Ihara and Kano in exploratory factor analysis. The consistency of the Ihara and Kano estimator is shown to hold even for an overestimated number of factors, which provides a theoretical basis both for the rare occurrence of improper solutions and for a new method of choosing the number of factors. A comparative study of their estimator and the maximum likelihood estimator is carried out by a Monte Carlo experiment. The author would like to express his thanks to Masashi Okamoto and Masamori Ihara for helpful comments and to the editor and referees for critically reading the earlier versions and making many valuable suggestions. He also thanks Shigeo Aki for his comments on physical random numbers.
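A sketch of the noniterative estimator in question, as I understand its general form: psi_i = s_ii - s_iA inv(S_BA) s_Bi for disjoint index sets A and B, each of size equal to the number of factors and neither containing i. The index-set choice below is naive and purely illustrative; how the sets should actually be selected is part of what the method specifies.

```python
import numpy as np

def noniterative_psi(S, i, A, B):
    """Ihara-Kano-style unique-variance estimate for variable i (sketch)."""
    return S[i, i] - S[i, A] @ np.linalg.solve(S[np.ix_(B, A)], S[B, i])

# Single-factor special case: psi_i = s_ii - s_ij * s_ik / s_jk.
lam = np.array([0.9, 0.8, 0.7, 0.6])
Sigma = np.outer(lam, lam) + np.diag(1 - lam**2)   # population covariance
print(noniterative_psi(Sigma, 0, A=[1], B=[2]))    # exactly 1 - 0.9**2 = 0.19
```

At population values the estimate is exact; the point of the paper is how its sampling behavior holds up when the number of factors is overestimated.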

6.
The latent trait-state-error model (TSE) and the latent state-trait model with autoregression (LST-AR) represent creative structural equation methods for examining the longitudinal structure of psychological constructs. Application of these models has been somewhat limited by empirical or conceptual problems. In the present study, Monte Carlo analysis revealed that TSE models tend to generate improper solutions when N is too small, when waves are too few, and when occasion factor stability is either too large or too small. Mathematical analysis of the LST-AR model revealed that it is limited to constructs that become more highly autocorrelated over time. The trait-state-occasion model has fewer empirical problems than the TSE model and is more broadly applicable than the LST-AR model.

7.
Kohei Adachi. Psychometrika, 2013, 78(2): 380-394.
Rubin and Thayer (Psychometrika, 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution, with positive unique variances and factor correlations no greater than one in absolute value, whenever the covariance matrix to be analyzed and the initial matrices (including the unique variances and inter-factor correlations) are positive definite. We further demonstrate numerically that the EM algorithm yields proper solutions for data that lead the prevailing gradient algorithms for factor analysis to produce improper solutions. The numerical studies also show that, in real computations with limited numerical precision, Rubin and Thayer's original formulas for confirmatory factor analysis can make factor correlation matrices asymmetric, so that the EM algorithm fails to converge. This problem can be overcome by replacing the original formulas with ones that guarantee the symmetry of the factor correlation matrices, or with the formulas used to prove the above fact.
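For concreteness, a sketch of the EM iteration for the orthogonal exploratory case (the paper also covers the confirmatory case with factor correlations, which is not reproduced here). The starting values and tolerance are arbitrary choices of mine.

```python
import numpy as np

def em_factor_analysis(S, m, n_iter=500, tol=1e-10):
    """EM for the orthogonal m-factor model Sigma = L L' + diag(psi) (sketch)."""
    p = S.shape[0]
    L = np.linalg.cholesky(S)[:, :m]    # arbitrary full-rank start
    psi = np.full(p, 0.5)               # strictly positive start
    for _ in range(n_iter):
        Sigma = L @ L.T + np.diag(psi)
        beta = L.T @ np.linalg.inv(Sigma)               # E-step: E[z|x] = beta x
        Ezz = np.eye(m) - beta @ L + beta @ S @ beta.T  # E-step: E[zz'|x] per case
        L_new = S @ beta.T @ np.linalg.inv(Ezz)         # M-step: loadings
        psi_new = np.diag(S - L_new @ beta @ S).copy()  # M-step: uniquenesses
        if np.max(np.abs(psi_new - psi)) < tol:
            return L_new, psi_new
        L, psi = L_new, psi_new
    return L, psi
```

With S and the starting matrices positive definite, the psi updates remain positive in exact arithmetic, which is in the spirit of the propriety result the paper proves.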

8.
Based on the results of factor analyses conducted on 14 different data sets, Digman proposed a model of two higher-order factors, or metatraits, that subsumed the Big Five personality traits. In the current article, problems in Digman's analyses were explicated, and more appropriate analyses were then conducted using the same 14 correlation matrices from Digman's study. The resulting two-factor model produced improper solutions, poor model fit indices, or both, in almost all of the 14 data sets, raising serious doubts about the validity of Digman's proposed model.

9.
Analytic bifactor rotations have recently been developed and made generally available, but they are not well understood. The Jennrich-Bentler analytic bifactor rotations (bi-quartimin and bi-geomin) are an alternative to, and arguably an improvement upon, the less technically sophisticated Schmid-Leiman orthogonalization. We review the technical details that underlie the Schmid-Leiman and Jennrich-Bentler bifactor rotations, using simulated data structures to illustrate important features and limitations. For the Schmid-Leiman, we review the problem of inaccurate parameter estimates caused by the linear dependencies, sometimes called "proportionality constraints," that are required to expand a p correlated factors solution into a (p + 1) (bi)factor space. We also review the complexities involved when the data depart from perfect cluster structure (e.g., items cross-loading on group factors). For the Jennrich-Bentler rotations, we describe problems in parameter estimation caused by departures from perfect cluster structure. In addition, we illustrate the related problems of (a) solutions that are not invariant under different starting values (i.e., local minima problems) and (b) group factors collapsing onto the general factor. Recommendations for substantive researchers include examining all local minima and applying multiple exploratory techniques in an effort to identify an accurate model.
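A sketch of the Schmid-Leiman expansion the article reviews, using an invented perfect-cluster higher-order solution: a first-order pattern L1 and second-order loadings L2 of the correlated factors on a single general factor. The toy matrices are my own.

```python
import numpy as np

L1 = np.array([[0.7, 0.0],      # first-order pattern: 3 items per factor
               [0.6, 0.0],
               [0.5, 0.0],
               [0.0, 0.8],
               [0.0, 0.7],
               [0.0, 0.6]])
L2 = np.array([[0.8],           # loadings of the two first-order factors on g
               [0.6]])

general = L1 @ L2                           # loadings on the general factor
u2 = np.sqrt(1.0 - (L2**2).ravel())         # second-order unique std. deviations
group = L1 * u2                             # orthogonalized group-factor loadings
print(np.hstack([general, group]))
```

Because `general` and `group` are both built from the rows of L1, each item's ratio of general to group loading is fixed by L2, which is exactly the linear dependency ("proportionality constraint") the article discusses.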

10.
Psychologists are directed by ethical guidelines in most areas of their practice. However, there are very few guidelines for conducting data analysis in research. The aim of this article is to address the need for more extensive ethical guidelines for researchers who have completed data collection and are beginning their data analyses. Improper data analysis is an ethical issue because it can result in publishing false or misleading conclusions. This article includes a review of the ethical implications of improper data analysis and potential causes of unethical practices. In addition, current guidelines in psychology and other fields (e.g., the American Psychological Association and American Statistical Association ethics codes) were used to inform a list of recommendations for ethical conduct in data analysis appropriate for researchers in psychology.

11.
The overarching purpose of this article is to present a nonmathematical introduction to the application of confirmatory factor analysis (CFA), within the framework of structural equation modeling, to psychological assessment instruments. In the interest of clarity and ease of understanding, I model an exploratory factor analysis (EFA) structure in addition to first- and second-order CFA structures. All factor analytic structures are based on the same measuring instrument, the Beck Depression Inventory-II (BDI-II; Beck, Steer, & Brown, 1996). Following a "walk" through the general process of CFA modeling, I identify several common misconceptions and improper application practices with respect to both EFA and CFA and offer caveats with a view to preventing further proliferation of these pervasive practices.

12.
Anne Boomsma. Psychometrika, 1985, 50(2): 229-242.
In the framework of a robustness study on maximum likelihood estimation with LISREL, three types of problems are dealt with: nonconvergence, improper solutions, and the choice of starting values. The purpose of the paper is to illustrate why, and to what extent, these problems are of importance for users of LISREL. The ways in which these issues may affect the design and conclusions of robustness research are also discussed.

13.
Critical examination is made of the recent controversy over the value of Monte Carlo techniques in nonmetric multidimensional scaling procedures. The case is presented that the major relevance of Monte Carlo studies is not for the local minima problem but for the meaningfulness of the obtained solutions.

14.
To execute a motor solution to a given task, individuals search through the space of movement possibilities, guided by information that arises from interaction with the task and environment. Through this search, individuals seek to avoid inappropriate solutions that correspond to local minima in the task space. The processes that lead some, but not all, individuals to avoid local minima and find solutions are not yet understood. Based on the tenets of ecological psychology for perception and action, we examined in two experiments the hypothesis that an incapacity to differentiate errors (performance of an inappropriate solution) from inherent variability interferes with the perception of properties of the task space and results in a longer time spent performing an inappropriate solution before other solutions are explored. Inherent variability was shown to be a direct predictor of changes in search strategies. We also found that the specifics of the search patterns could predict performance on the task. Thus, the pattern of motion through the task space affords perception of specific properties of this space, guiding individuals in the evolving dynamics of exploration and exploitation.

15.
In exploratory factor analysis, factor rotation is conducted to improve model interpretability. A promising and increasingly popular factor rotation method is geomin rotation. Geomin rotation, however, frequently encounters multiple local solutions. We report a simulation study that explores the frequency of local solutions in geomin rotation and the implications of this phenomenon. The findings include: (1) multiple local solutions exist for geomin rotation in a variety of situations; (2) ε = .01 provides satisfactory rotated factor loadings in most situations; (3) 100 random starts appear sufficient to examine the multiple-solution phenomenon; and (4) a population global solution may correspond to a sample local solution rather than the sample global solution.
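A sketch of the random-starts diagnostic the study's design implies, shrunk to the orthogonal two-factor case so that the rotation reduces to a single angle (geomin is typically applied obliquely, so this is illustrative only). The unrotated loading matrix is invented; the criterion is Q(Lambda) = sum_i (prod_j (lambda_ij^2 + eps))^(1/m), here with eps = .01.

```python
import numpy as np
from scipy.optimize import minimize_scalar

A = np.array([[0.8, 0.3],       # invented unrotated loading matrix
              [0.7, 0.2],
              [0.6, 0.1],
              [0.2, 0.7],
              [0.1, 0.8],
              [0.3, 0.6]])
eps, m = 0.01, A.shape[1]

def geomin(theta):
    c, s = np.cos(theta), np.sin(theta)
    L = A @ np.array([[c, -s], [s, c]])               # rotated loadings
    return np.sum(np.prod(L**2 + eps, axis=1) ** (1.0 / m))

# Many random starts; distinct converged criterion values signal local minima.
rng = np.random.default_rng(1)
values = []
for start in rng.uniform(0, 2 * np.pi, size=100):
    res = minimize_scalar(geomin, bracket=(start, start + 0.1))
    values.append(round(res.fun, 6))
print(sorted(set(values)))
```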

16.
Factor analysis and AIC
The information criterion AIC was introduced to extend the method of maximum likelihood to the multimodel situation. It was obtained by relating the successful experience of order determination for autoregressive models to the determination of the number of factors in maximum likelihood factor analysis. The use of the AIC criterion in factor analysis is particularly interesting when it is viewed as the choice of a Bayesian model. This observation shows that the area of application of AIC can be much wider than the conventional i.i.d.-type models on which the original derivation of the criterion was based. The observation of the Bayesian structure of the factor analysis model leads to a treatment of the problem of improper solutions through the introduction of a natural prior distribution on the factor loadings. The author would like to express his thanks to Jim Ramsay, Yoshio Takane, Donald Ramirez and Hamparsum Bozdogan for helpful comments on the original version of the paper. Thanks are also due to Emiko Arahata for her help in computing.
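A sketch of the basic AIC comparison that motivates the paper (the Bayesian treatment of improper solutions is not reproduced): AIC = -2 log L + 2k, with k = pm + p - m(m-1)/2 free parameters for an m-factor model on p variables (loadings plus uniquenesses, minus rotational indeterminacy). The synthetic data and the use of scikit-learn are my own choices.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
lam = rng.uniform(0.4, 0.9, size=(8, 2))     # invented: 8 variables, 2 factors
X = rng.standard_normal((500, 2)) @ lam.T + rng.standard_normal((500, 8)) * 0.6
n, p = X.shape

for m in range(1, 5):
    fa = FactorAnalysis(n_components=m).fit(X)
    loglik = n * fa.score(X)                 # score() is mean log-likelihood
    k = p * m + p - m * (m - 1) // 2         # free parameters of the m-factor model
    print(f"m={m}  AIC={-2 * loglik + 2 * k:.1f}")
```

On data generated this way, the minimum-AIC model should typically be the two-factor one, which is the order-determination use the abstract describes.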

17.
This paper calls for consideration of a new class of preventive interventions designed explicitly to prevent comorbidity of psychiatric disorders. Epidemiologic data show that successful interventions of this type could be extremely valuable, as up to half of lifetime psychiatric disorders, and an even larger percentage of chronic and seriously impairing disorders, occur in people with a prior history of some other disorder. Furthermore, a review of etiologic hypotheses concerning the causes of comorbidity suggests that interventions aimed at primary prevention of secondary disorders might be feasible. However, more basic risk-factor research on the causes of comorbidity is needed before we can clearly assess feasibility and identify promising intervention targets. A number of methodological problems arise in carrying out this type of formative research. These problems are reviewed, and suggestions are offered for solutions involving innovations in measurement, design, and data analysis.

18.
Some contributions to maximum likelihood factor analysis
A new computational method for the maximum likelihood solution in factor analysis is presented. This method takes into account the fact that the likelihood function may not attain its maximum at a point of the parameter space where all unique variances are positive. Instead, the maximum may be attained on the boundary of the parameter space, where one or more of the unique variances are zero. It is demonstrated that such improper (Heywood) solutions occur more often than is usually expected. A general procedure to deal with such improper solutions is proposed. The proposed methods are illustrated using two small sets of empirical data, and results obtained from the analyses of many other data sets are reported. These analyses verify that the new computational method converges rapidly and that the maximum likelihood solution can be determined very accurately. A by-product of the method is a large-sample estimate of the variance-covariance matrix of the estimated unique variances, which can be used to set up approximate confidence intervals for communalities and unique variances. The first part of this work was done at the University of Uppsala, Sweden, and supported by the Swedish Council for Social Science Research. The second part was done at Educational Testing Service and supported by a grant (NSF-GB-1985) from the National Science Foundation to Educational Testing Service. The author is deeply indebted to Dr. D. N. Lawley, whose contributions amounted nearly to coauthorship. The author also wishes to thank Mr. G. Gruvaeus for much valuable assistance in constructing the computer program.
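A sketch of the boundary phenomenon this method is built around: maximize the likelihood subject to psi >= 0, so a Heywood case appears as a uniqueness estimate sitting exactly on the zero boundary. The covariance matrix below is invented so that the first two variables correlate more strongly than a one-factor model with positive uniquenesses can accommodate, and L-BFGS-B merely stands in for the paper's computational method.

```python
import numpy as np
from scipy.optimize import minimize

S = np.array([[1.0, 0.9, 0.3, 0.3],    # invented matrix prone to a Heywood case
              [0.9, 1.0, 0.2, 0.2],
              [0.3, 0.2, 1.0, 0.3],
              [0.3, 0.2, 0.3, 1.0]])
p = S.shape[0]

def neg_loglik_kernel(theta):
    # ln|Sigma| + tr(S Sigma^-1); constant terms dropped for optimization
    lam, psi = theta[:p], theta[p:]
    Sigma = np.outer(lam, lam) + np.diag(psi)
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:
        return 1e10                    # crude barrier against indefinite Sigma
    return logdet + np.trace(S @ np.linalg.inv(Sigma))

bounds = [(None, None)] * p + [(0.0, None)] * p   # proper region: psi >= 0
res = minimize(neg_loglik_kernel, np.r_[np.full(p, 0.7), np.full(p, 0.5)],
               method="L-BFGS-B", bounds=bounds)
print("uniquenesses:", res.x[p:].round(4))  # expect an estimate at the boundary
```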

19.
Category estimates of spectral saturation were obtained from three observers under neutral adaptation and under three conditions of chromatic adaptation. The data so obtained show that chromatic adaptation shifts the location of minimal spectral saturation toward the spectral locus of the adapting light. The existence of secondary minima and enhancement effects in spectral saturation is also noted.

20.
Parts of visual objects: an experimental test of the minima rule
Three experiments were conducted to test Hoffman and Richards's (1984) hypothesis that, for purposes of visual recognition, the human visual system divides three-dimensional shapes into parts at negative minima of curvature. In the first two experiments, subjects observed a simulated object (a surface of revolution) rotating about a vertical axis, followed by a display of four alternative parts, and were asked to select a part that came from the object. Two of the four parts were divided at negative minima of curvature and two at positive maxima. When both a minima part and a maxima part from the object were presented on each trial (Experiment 1), most of the correct responses were minima parts (101 versus 55). When only one part from the object (either a minima part or a maxima part) was shown on each trial (Experiment 2), accuracy on trials with correct minima parts and correct maxima parts did not differ significantly. However, some subjects indicated that they reversed figure and ground, thereby changing maxima parts into minima parts. In Experiment 3, subjects marked apparent part boundaries: 81% of these marks indicated minima parts, 10% indicated maxima parts, and 9% were at other positions. These results provide converging evidence, from two different methods, supporting Hoffman and Richards's minima rule.
