Similar documents
 Found 20 similar documents (search time: 19 ms)
1.
A Monte Carlo study assessed the effect of sampling error and model characteristics on the occurrence of nonconvergent solutions, improper solutions, and the distribution of goodness-of-fit indices in maximum likelihood confirmatory factor analysis. Nonconvergent and improper solutions occurred more frequently for smaller sample sizes and for models with fewer indicators per factor. Practically significant effects of sample size, the number of indicators per factor, and the number of factors were found for GFI, AGFI, and RMR, whereas no practical effects were found for the probability values associated with the chi-square likelihood ratio test. James Anderson is now at the J. L. Kellogg Graduate School of Management, Northwestern University. The authors gratefully acknowledge the comments and suggestions of Kenneth Land and the reviewers, and the assistance of A. Narayanan with the analysis. Support for this research was provided by the Graduate School of Business and the University Research Institute of the University of Texas at Austin.
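The study's core finding — improper solutions become more frequent as sample size shrinks — is easy to reproduce in miniature. The sketch below is an illustration, not the authors' design: it uses the just-identified one-factor, three-indicator model, for which the ML solution has a closed form, and counts improper solutions (negative unique variances or negative squared loadings) across replications; the loading value and sample sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_factor_ml(S):
    """Closed-form ML solution for 1 factor, 3 indicators (just-identified):
    lambda_1^2 = s12*s13/s23, etc.; psi_i = s_ii - lambda_i^2."""
    lam2 = np.array([
        S[0, 1] * S[0, 2] / S[1, 2],
        S[0, 1] * S[1, 2] / S[0, 2],
        S[0, 2] * S[1, 2] / S[0, 1],
    ])
    psi = np.diag(S) - lam2          # unique variances
    return lam2, psi

# population: standardized loadings of .5 on a single factor
lam = 0.5
Sigma = np.full((3, 3), lam * lam) + np.diag(np.full(3, 1.0 - lam * lam))

def improper_rate(n, reps=300):
    """Share of replications yielding an improper solution
    (a Heywood case, psi < 0, or a negative squared loading)."""
    bad = 0
    for _ in range(reps):
        X = rng.multivariate_normal(np.zeros(3), Sigma, size=n)
        S = np.cov(X, rowvar=False)
        lam2, psi = one_factor_ml(S)
        if np.any(psi < 0) or np.any(lam2 < 0):
            bad += 1
    return bad / reps

rate_small, rate_large = improper_rate(20), improper_rate(500)
print(rate_small, rate_large)
```

With these settings the small-sample rate is substantially higher than the large-sample one, mirroring the abstract's conclusion about sample size.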

2.
An expression is given for weighted least squares estimators of oblique common factors, constrained to have the same covariance matrix as the factors they estimate. It is shown that if, as in exploratory factor analysis, the common factors are obtained by oblique transformation from the Lawley-Rao basis, the constrained estimators are given by the same transformation. Finally, a proof of uniqueness is given. The research reported in this paper was partly supported by Natural Sciences and Engineering Research Council Grant No. A6346.

3.
In applications of maximum likelihood factor analysis, boundary minima instead of proper minima are by no means exceptional. In the past, the causes of such improper solutions could not be detected, because the matrices containing the parameters of the factor analysis model were kept positive definite. By dropping these constraints, it becomes possible to distinguish between the different causes of improper solutions. In this paper some of the most important causes are discussed and illustrated by means of artificial and empirical data. The author is indebted to H. J. Prins for stimulating and encouraging discussions.

4.
We analytically derive the fixed-effects estimates in unconditional linear growth curve models by typical linear mixed-effects modelling (TLME) and by a pattern-mixture (PM) approach for random-slope-dependent, two-missing-pattern missing-not-at-random (MNAR) longitudinal data. Results showed that when the missingness mechanism is random-slope-dependent MNAR, TLME estimates of both the mean intercept and mean slope are biased because of incorrect weights used in the estimation. More specifically, the estimate of the mean slope is biased towards the mean slope of the completers, whereas the estimate of the mean intercept is biased in the opposite direction. We also discuss why the PM approach can provide unbiased fixed-effects estimates for random-coefficients-dependent MNAR data but does not work well for missing-at-random or outcome-dependent MNAR data. A small simulation study was conducted to illustrate the results and to compare TLME and PM. Results from an empirical data analysis showed that the conceptual finding generalizes to real conditions even when some assumptions of the analytical derivation are not met. Implications of the analytical and empirical results are discussed, and sensitivity analysis is suggested for longitudinal data analysis with missing data.
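The direction of the bias is easy to see in a toy simulation. This is only a sketch of the general idea: the effect sizes and dropout rule below are invented, and averaging per-person OLS slopes over patterns is a crude stand-in for a full pattern-mixture analysis, not the paper's derivation. When dropout depends on the latent slope, the completers' mean slope overshoots the population value, while mixing pattern-specific estimates by their observed proportions recovers it.

```python
import numpy as np

rng = np.random.default_rng(2)

# linear growth: y_it = b0_i + b1_i * t + noise, true mean slope = 1.0
n, T = 2000, 5
t = np.arange(T, dtype=float)
b0 = rng.normal(10.0, 2.0, n)
b1 = rng.normal(1.0, 0.5, n)
y = b0[:, None] + b1[:, None] * t + rng.normal(0.0, 1.0, (n, T))

# random-slope-dependent MNAR with two patterns:
# subjects with smaller latent slopes are more likely to drop out after t = 2
p_drop = 1.0 / (1.0 + np.exp(-2.0 * (1.0 - b1)))
dropout = rng.random(n) < p_drop

def ols_slope(ti, yi):
    return np.polyfit(ti, yi, 1)[0]

# completers-only estimate of the mean slope (biased upward here)
slope_completers = np.mean([ols_slope(t, y[i]) for i in range(n) if not dropout[i]])

# pattern-mixture-style estimate: per-pattern slopes (dropouts keep t = 0..2),
# mixed with the observed pattern proportions
slopes = np.array([ols_slope(t[:3], y[i, :3]) if dropout[i] else ols_slope(t, y[i])
                   for i in range(n)])
slope_pm = slopes.mean()

print(slope_completers, slope_pm)
```

The completer estimate lands well above the true mean slope of 1.0, while the mixture estimate stays close to it — the qualitative pattern the abstract derives analytically.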

5.
Marsh HW  Wen Z  Hau KT 《Psychological Methods》2004,9(3):275-300
Interactions between (multiple-indicator) latent variables are rarely used because of implementation complexity and competing strategies. Based on 4 simulation studies, the traditional constrained approach performed more poorly than did 3 new approaches: unconstrained, generalized appended product indicator, and quasi-maximum-likelihood (QML). The authors' new unconstrained approach was easiest to apply. All 4 approaches were relatively unbiased for normally distributed indicators, but the constrained and QML approaches were more biased for nonnormal data; the size and direction of the bias varied with the distribution but not with the sample size. QML had more power, but this advantage was qualified by consistently higher Type I error rates. The authors also compared general strategies for defining product indicators to represent the latent interaction factor.

6.
The inability of assessment center (AC) researchers to find admissible solutions for confirmatory factor analytic (CFA) models that include dimensions has led some to conclude that ACs do not measure dimensions at all. This study investigated whether increasing the indicator–factor ratio facilitates the achievement of convergent and admissible CFA solutions in 2 independent ACs. Results revealed that, when models specify multiple behavioral checklist items as manifest indicators of each latent dimension, all of the AC CFA models tested were identified and returned proper solutions. When armed with the ability to undertake a full set of model comparisons using model fit rather than solution convergence and admissibility as comparative criteria, we found clear evidence for modest dimension effects. These results suggest that the frequent failure to find dimensions in models of the internal structure of ACs is a methodological artifact, and that one way to increase the likelihood of reaching a proper solution is to increase the number of manifest indicators per dimension factor. In addition, across-exercise dimension ratings and the overall assessment rating were both strongly correlated with dimension and exercise factors, indicating that regardless of how an AC is scored, exercise variance will continue to play a key role in the scoring of ACs.

7.
Multitrait-multimethod (MTMM) matrices are often analyzed by means of confirmatory factor analysis (CFA). However, fitting MTMM models often leads to improper solutions or non-convergence. In an attempt to overcome these problems, various alternative CFA models have been proposed, but none of them has completely eliminated the problem of improper solutions. In the present paper, an approach is proposed in which improper solutions are ruled out altogether and convergence is guaranteed. The approach is based on constrained variants of components analysis (CA). Besides the fact that these methods do not give improper solutions, they have the advantage of providing component scores which can later be used to relate the components to external variables. The new methods are illustrated by means of simulated as well as empirical data sets. This research has been made possible by a fellowship from the Royal Netherlands Academy of Arts and Sciences to the first author. The authors are obliged to three anonymous reviewers and an associate editor for constructive suggestions on the first version of this paper.

8.
Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant for linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys–Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis.
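The fit/complexity logic behind such tests can be sketched with the standard encompassing-prior estimator. This is background machinery, not one of the paper's proposed methods, and the two-group setup, the vague N(0, 100) prior, and the effect size are all invented for illustration: the Bayes factor of an inequality-constrained hypothesis against the unconstrained one is the posterior probability that the constraint holds ("fit") divided by its prior probability ("complexity").

```python
import numpy as np

rng = np.random.default_rng(3)

# toy data for H1: mu1 > mu2, known unit error variance for simplicity
n = 50
x1 = rng.normal(0.4, 1.0, n)
x2 = rng.normal(0.0, 1.0, n)

def posterior_draws(x, draws=200_000, prior_var=100.0, sigma2=1.0):
    """Conjugate normal posterior for the mean under a N(0, prior_var) prior."""
    post_var = 1.0 / (1.0 / prior_var + len(x) / sigma2)
    post_mean = post_var * x.sum() / sigma2
    return rng.normal(post_mean, np.sqrt(post_var), draws)

m1, m2 = posterior_draws(x1), posterior_draws(x2)

fit = np.mean(m1 > m2)      # Pr(mu1 > mu2 | data), estimated by Monte Carlo
complexity = 0.5            # Pr(mu1 > mu2) under the symmetric prior
bf_1u = fit / complexity    # Bayes factor of H1 against the unconstrained model

print(bf_1u)
```

Because fit cannot exceed 1, this Bayes factor is capped at 1/complexity (here 2) no matter how strong the data are — the evidence saturates, which is the flavour of boundedness issue the paper's information-paradox discussion is concerned with.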

9.
Yutaka Kano 《Psychometrika》1990,55(2):277-291
Based on the usual factor analysis model, this paper investigates the relationship between improper solutions and the number of factors, and discusses the properties of the noniterative estimation method of Ihara and Kano in exploratory factor analysis. The consistency of the Ihara and Kano estimator is shown to hold even for an overestimated number of factors, which provides a theoretical basis for the rare occurrence of improper solutions and for a new method of choosing the number of factors. A comparative study of their estimator and that based on maximum likelihood is carried out by a Monte Carlo experiment. The author would like to express his thanks to Masashi Okamoto and Masamori Ihara for helpful comments and to the editor and referees for critically reading the earlier versions and making many valuable suggestions. He also thanks Shigeo Aki for his comments on physical random numbers.

10.
Egeland J  Bosnes O  Johansen H 《Assessment》2009,16(3):292-300
Confirmatory factor analyses (CFA) of the Wechsler Adult Intelligence Scale-III (WAIS-III) lend partial support to the four-factor model proposed in the test manual. However, the Arithmetic subtest has been especially difficult to allocate to one factor. Using the new Norwegian WAIS-III version, we tested factor models differing in the number of factors and in the placement of the Arithmetic subtest in a mixed clinical sample (n = 272). Only the four-factor solutions had adequate goodness-of-fit values. Allowing Arithmetic to load on both the Verbal Comprehension and Working Memory factors provided a more parsimonious solution compared to considering the subtest only as a measure of Working Memory. Effects of education were particularly high for both the Verbal Comprehension tests and Arithmetic.

11.
Measurement models of subjective child utility were derived from theoretical typologies of expected consequences of children. Maximum likelihood estimates of the goodness-of-fit of each model to observed covariation among 18 indicators of subjective child utility were obtained, using data from 349 wives and 349 husbands. The best-fitting models included three distinct factors: personal rewards, personal costs, and normative rewards.

12.
A general approach to confirmatory maximum likelihood factor analysis
We describe a general procedure by which any number of parameters of the factor analytic model can be held fixed at any values and the remaining free parameters estimated by the maximum likelihood method. The generality of the approach makes it possible to deal with all kinds of solutions: orthogonal, oblique, and various mixtures of these. By choosing the fixed parameters appropriately, factors can be defined to have desired properties, making subsequent rotation unnecessary. The goodness of fit of the maximum likelihood solution under the hypothesis represented by the fixed parameters is tested by a large-sample χ² test based on the likelihood ratio technique. A by-product of the procedure is an estimate of the variance-covariance matrix of the estimated parameters, from which approximate confidence intervals for the parameters can be obtained. Several examples illustrating the usefulness of the procedure are given. This work was supported by a grant (NSF-GB 1985) from the National Science Foundation to Educational Testing Service.
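The procedure is straightforward to sketch numerically (a minimal illustration, not Jöreskog's own algorithm — the two-factor pattern, sample size, and optimizer below are arbitrary choices): fix some loadings at zero, minimize the ML discrepancy F = log|Σ(θ)| + tr(S Σ(θ)⁻¹) − log|S| − p over the free parameters, and refer (n − 1)·F at the minimum to a χ² distribution with df equal to the number of observed moments minus the number of free parameters.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# fixed-pattern hypothesis: items 0-2 load only on factor 1, items 3-5 only on factor 2
p, k = 6, 2
pattern = np.zeros((p, k), dtype=bool)
pattern[:3, 0] = True
pattern[3:, 1] = True
m = pattern.sum()                          # number of free loadings

def unpack(theta):
    L = np.zeros((p, k))
    L[pattern] = theta[:m]
    psi = theta[m:m + p]                                   # unique variances
    phi = np.array([[1.0, theta[-1]], [theta[-1], 1.0]])   # factor correlation
    return L, psi, phi

def f_ml(theta, S):
    L, psi, phi = unpack(theta)
    Sigma = L @ phi @ L.T + np.diag(psi)
    return (np.linalg.slogdet(Sigma)[1] + np.trace(np.linalg.solve(Sigma, S))
            - np.linalg.slogdet(S)[1] - p)

# simulate data from the hypothesized structure
L_true = np.zeros((p, k)); L_true[:3, 0] = 0.7; L_true[3:, 1] = 0.7
phi_true = np.array([[1.0, 0.3], [0.3, 1.0]])
common = L_true @ phi_true @ L_true.T
Sigma_true = common + np.diag(1.0 - np.diag(common))
n = 500
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
S = np.cov(X, rowvar=False)

theta0 = np.concatenate([np.full(m, 0.5), np.full(p, 0.5), [0.0]])
bounds = [(None, None)] * m + [(0.01, None)] * p + [(-0.99, 0.99)]
res = minimize(f_ml, theta0, args=(S,), method="L-BFGS-B", bounds=bounds)

df = p * (p + 1) // 2 - len(theta0)        # 21 observed moments - 13 free parameters
chi2 = (n - 1) * res.fun                   # likelihood-ratio test statistic
print(chi2, df)
```

Since the data were generated from the hypothesized structure, the resulting χ² should be unremarkable relative to its df; misspecifying the fixed zeros would inflate it.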

13.
Based on the results from factor analyses conducted on 14 different data sets, Digman proposed a model of two higher-order factors, or metatraits, that subsumed the Big Five personality traits. In the current article, problems in Digman's analyses were explicated, and more appropriate analyses were then conducted using the same 14 correlation matrices from Digman's study. The resultant two-factor model produced improper solutions, poor model fit indices, or both, in almost all of the 14 data sets and thus raised serious doubts about the veracity of Digman's proposed model.

14.
Several recent works have analysed the factorial structure of well-being measures. The aim of our study is to analyse the factorial structure of a widely used well-being scale, Ryff's Scales of Psychological Well-being, in a specific subpopulation of the Spanish population, the elderly. For this subpopulation, the construct of well-being has been employed in most theoretical models that explain quality of life, and its role is therefore pivotal. The sample comprised 169 elderly people (65 years or more), sampled within the Valencian Community. The 54-item version of Ryff's scales was used. An item-parcelling process was employed before the confirmatory factor analyses, yielding a total of 18 well-being indicators. Confirmatory factor analyses were specified and tested, covering all theoretical and empirical solutions found in the literature, whether in the general population or in specific populations from different cultural contexts. Goodness-of-fit results were similar to those found in the literature. The best-fitting solutions were a six-factor model with correlated factors, as defended by the scale's authors, and a five-factor correlated solution collapsing environmental mastery and self-acceptance into a single factor.

15.
This experiment explores the attributional consequences of asking constraining performance-relevant questions, biased to elicit favorable or unfavorable remarks from the person being evaluated. After hearing the performer deliver a speech, all subjects played the role of questioner in a feedback interview purportedly designed to help the speechmaker improve future performance. Subjects asked biased questions that focused on either the positive or negative aspects of the speech and heard answers that were either congruent or incongruent with the biasing implications of their questions. Results showed that speech performance ratings were influenced by the direction of the performer's self-evaluative comments, even when these comments were in line with, and thus potentially constrained by, the evaluator's own behavior. Implications of biased inquiry strategies in organizational contexts where the evaluator is the source of constraint are discussed.

16.
Kohei Adachi 《Psychometrika》2013,78(2):380-394
Rubin and Thayer (Psychometrika, 47:69–76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one, when the covariance matrix to be analyzed and the initial matrices including unique variances and inter-factor correlations are positive definite. We further numerically demonstrate that the EM algorithm yields proper solutions for data which lead the prevailing gradient algorithms for factor analysis to produce improper solutions. The numerical studies also show that, in real computations with limited numerical precision, Rubin and Thayer's (Psychometrika, 47:69–76, 1982) original formulas for confirmatory factor analysis can make factor correlation matrices asymmetric, so that the EM algorithm fails to converge. However, this problem can be overcome by using an EM algorithm in which the original formulas are replaced by those guaranteeing the symmetry of factor correlation matrices, or by formulas used to prove the above fact.
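The EM updates themselves are only a few lines. The sketch below is a generic implementation of EM for the orthogonal factor model, not Rubin and Thayer's exact confirmatory formulas, and the example covariance matrix is constructed for illustration; note that the unique-variance update stays positive here, consistent with the property the paper proves for positive-definite inputs.

```python
import numpy as np

rng = np.random.default_rng(5)

def em_fa(S, k, n_iter=2000):
    """EM for the orthogonal factor model S ≈ L L' + diag(psi)."""
    p = S.shape[0]
    L = rng.normal(0.3, 0.1, (p, k))   # arbitrary start
    psi = np.full(p, 0.5)
    for _ in range(n_iter):
        Sigma = L @ L.T + np.diag(psi)
        beta = np.linalg.solve(Sigma, L).T                  # E-step weights L' Sigma^{-1}
        Ezz = np.eye(k) - beta @ L + beta @ S @ beta.T      # E[zz' | data]
        L = S @ beta.T @ np.linalg.inv(Ezz)                 # M-step: loadings
        psi = np.diag(S - L @ beta @ S).copy()              # M-step: unique variances
    return L, psi

# a clean one-factor population matrix: loadings .7, unique variances .51
p = 6
lam = np.full((p, 1), 0.7)
S = lam @ lam.T + np.diag(np.full(p, 0.51))

L_hat, psi_hat = em_fa(S, k=1)
print(np.round(np.abs(L_hat.ravel()), 3), np.round(psi_hat, 3))
```

On this noiseless input EM recovers the generating loadings (up to sign) and unique variances, and every psi remains strictly positive throughout.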

17.
An exploratory factor analysis of the reflectance spectral distributions of a sample of natural and man-made objects yields a factor pattern remarkably similar to psychophysical color-matching curves. The goodness-of-fit indices from a maximum likelihood confirmatory factor model with fixed factor loadings specified by empirical trichromatic color-matching data indicate that the human visual system performs near the optimum for an ideal trichromatic system composed of three linear components. An unconstrained four-factor maximum likelihood model fits significantly better than a three-factor unconstrained model, suggesting that a color metric is better represented in four dimensions than in a three-dimensional space. This fourth factor can be calculated as a nonlinear interaction term between the first three factors: thus, a trichromatic input is sufficient to compute a color space of four dimensions. The visual system may exploit this nonlinear dependency in the spectral environment in order to obtain a four-dimensional color space without the biological cost of a fourth color receptor.

18.
Magnitude estimates of loudness were collected for several variations in the schedule of signal presentations. For wide ranges (about 50 dB centered at 65 dB), the conditions were: random selection of 21 signals equally spaced in decibels, constrained selection so that each signal was used equally often but successive signals were always close together, constrained selection in which successive signals were always far apart, and random selection from a nonuniform distribution that consisted of two groups of equally spaced signals separated by a gap of 24 dB. In addition, two other ranges, 10 and 30 dB, were run with random selection of 21 equally spaced signals. The measures examined were: mean magnitude estimate as a function of signal intensity, coefficient of variation of the ratio of successive responses as a function of signal separation, and the correlation of the logarithm of successive responses as a function of signal separation. The basic question was whether all of the different schedules of signal presentation produce data that can be viewed as selections from appropriate regions of the 50-dB random signal selection data. To a degree, this was found, but with systematic exceptions.

19.
Cross-classified random effects modelling (CCREM) is a special case of multi-level modelling in which the units of one level are nested within two cross-classified factors. Typically, CCREM analyses omit the random interaction effect of the cross-classified factors. We investigate the impact of omitting the interaction effect on parameter estimates and standard errors. Results from a Monte Carlo simulation study indicate that, for fixed effects, both the coefficient estimates and the accompanying standard error estimates are unbiased. For random effects, results at level 2, but not at level 1, are affected by the presence of an interaction variance and/or a correlation between the residuals of the level-2 factors. Results from the analysis of the Early Childhood Longitudinal Study and the National Educational Longitudinal Study agree with those obtained from simulated data. We recommend that researchers attempt to include interaction effects of cross-classified factors in their models.

20.
叶宝娟  温忠粦 《心理科学》2013,36(3):728-733
In research areas such as psychology, education, and management, two-level (hierarchical) data structures are common: students are nested within classes, and employees are nested within firms. In two-level designs, observations are not independent, so applying a single-level reliability formula directly overestimates test reliability. Previous studies have discussed how to estimate the reliability of a unidimensional test more accurately in two-level research. This study points out shortcomings of the existing estimation formulas, derives a new reliability formula from a two-level confirmatory factor analysis model, illustrates the computation with an example, and provides a simple computer program.
