Similar Documents
1.
The sampling variability of estimated factor loadings is often neglected in modern factor analysis; when it is examined, the investigation is generally normal-theory based and asymptotic in nature. The bootstrap, a computer-based methodology, is described and then applied to demonstrate how the sampling variability of estimated factor loadings can be estimated for a given set of data. The issue of the number of factors to be retained in a factor model is also addressed. The bootstrap is shown to be an effective data-analytic tool for computing various statistics of interest that are otherwise intractable.
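A minimal sketch of how such a bootstrap might be set up, assuming a numeric data matrix `X` with observations in rows and using scikit-learn's `FactorAnalysis` for extraction; the function name, the number of factors, and the sign-alignment step are illustrative choices, not prescribed by the abstract:

```python
# Hypothetical illustration: nonparametric bootstrap standard errors for
# factor loadings. Resample rows with replacement, refit the factor model,
# align each bootstrap factor's sign with the original solution, and take
# the standard deviation of the replicated loadings.
import numpy as np
from sklearn.decomposition import FactorAnalysis

def bootstrap_loading_se(X, n_factors=2, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    ref = FactorAnalysis(n_components=n_factors).fit(X).components_.T  # p x k loadings
    boot = np.empty((n_boot,) + ref.shape)
    for b in range(n_boot):
        Xb = X[rng.integers(0, n, size=n)]                  # resample observations
        Lb = FactorAnalysis(n_components=n_factors).fit(Xb).components_.T
        signs = np.sign(np.sum(Lb * ref, axis=0))           # resolve sign indeterminacy
        signs[signs == 0] = 1.0
        boot[b] = Lb * signs
    return ref, boot.std(axis=0, ddof=1)                    # loadings and bootstrap SEs
```

With more than one factor retained, a fuller treatment would also match factor order and rotation across bootstrap samples (for example by Procrustes alignment), as later entries in this list discuss.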

2.
Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate normality. For nonlinear PCA, however, standard options for establishing stability are not provided. The authors use the nonparametric bootstrap procedure to assess the stability of nonlinear PCA results, applied to empirical data. They use confidence intervals for the variable transformations and confidence ellipses for the eigenvalues, the component loadings, and the person scores. They discuss the balanced version of the bootstrap, bias estimation, and Procrustes rotation. To provide a benchmark, the same bootstrap procedure is applied to linear PCA on the same data. On the basis of the results, the authors advise using at least 1,000 bootstrap samples, using Procrustes rotation on the bootstrap results, examining the bootstrap distributions along with the confidence regions, and merging categories with small marginal frequencies to reduce the variance of the bootstrap results.
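For the linear-PCA benchmark, the recommended recipe (at least 1,000 bootstrap samples, Procrustes rotation of each bootstrap solution towards the original one, percentile confidence regions) might be sketched as follows; the 95% level and SciPy's `orthogonal_procrustes` are illustrative choices rather than details taken from the paper:

```python
# Hypothetical illustration: percentile bootstrap confidence intervals for
# linear PCA component loadings, with Procrustes alignment of every bootstrap
# solution towards the reference solution from the full sample.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def pca_loadings(X, k):
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T * (s[:k] / np.sqrt(X.shape[0] - 1))     # p x k component loadings

def bootstrap_pca_ci(X, k=2, n_boot=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    ref = pca_loadings(X, k)
    boot = np.empty((n_boot,) + ref.shape)
    for b in range(n_boot):
        Xb = X[rng.integers(0, X.shape[0], size=X.shape[0])]
        Lb = pca_loadings(Xb, k)
        R, _ = orthogonal_procrustes(Lb, ref)               # rotate towards the reference
        boot[b] = Lb @ R
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return ref, lo, hi                                      # loadings with lower/upper bounds
```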

3.
Multi-level simultaneous component analysis (MLSCA) was designed for the exploratory analysis of hierarchically ordered data. MLSCA specifies a component model for each level in the data, where appropriate constraints express possible similarities between groups of objects at a certain level, yielding four MLSCA variants. The present paper discusses different bootstrap strategies for estimating confidence intervals (CIs) on the individual parameters. In selecting a proper strategy, the main issues to address are the resampling scheme and the non-uniqueness of the parameters. The resampling scheme depends on which level(s) in the hierarchy are considered random, and which fixed. The degree of non-uniqueness depends on the MLSCA variant, and, in two variants, the extent to which the user exploits the transformational freedom. A comparative simulation study examines the quality of bootstrap CIs of different MLSCA parameters. Generally, the quality of bootstrap CIs appears to be good, provided the sample sizes are sufficient at each level that is considered to be random. The latter implies that if more than a single level is considered random, the total number of observations necessary to obtain reliable inferential information increases dramatically. An empirical example illustrates the use of bootstrap CIs in MLSCA.

4.
The article describes 6 issues influencing standard errors in exploratory factor analysis and reviews 7 methods of computing standard errors for rotated factor loadings and factor correlations. These 7 methods are the augmented information method, the nonparametric bootstrap method, the infinitesimal jackknife method, the method using the asymptotic distributions of unrotated factor loadings, the sandwich method, the parametric bootstrap method, and the jackknife method. Standard error estimates are illustrated using a personality study with 537 men and an intelligence study with 145 children.

5.
Exploratory methods using second-order components and second-order common factors were proposed. The second-order components were obtained from the resolution of the correlation matrix of obliquely rotated first-order principal components. The standard errors of the estimates of the second-order component loadings were derived from an augmented information matrix with restrictions for the loadings and associated parameters. The second-order factor analysis proposed was similar to the classical method in that the factor correlations among the first-order factors were further resolved by the exploratory method of factor analysis. However, in this paper the second-order factor loadings were estimated by generalized least squares using the asymptotic variance-covariance matrix for the first-order factor correlations. The asymptotic standard errors for the estimates of the second-order factor loadings were also derived. A numerical example was presented with simulated results.

6.
Earlier research has shown that bootstrap confidence intervals from principal component loadings give a good coverage of the population loadings. However, this only applies to complete data. When data are incomplete, missing data have to be handled before analysing the data. Multiple imputation may be used for this purpose. The question is how bootstrap confidence intervals for principal component loadings should be corrected for multiply imputed data. In this paper, several solutions are proposed. Simulations show that the proposed corrections for multiply imputed data give a good coverage of the population loadings in various situations.

7.
Relationships between the results of factor analysis and component analysis are derived when oblique factors have independent clusters with equal variances of unique factors. The factor loadings are analytically shown to be smaller than the corresponding component loadings while the factor correlations are shown to be greater than the corresponding component correlations. The condition for the inequality of the factor/component contributions is derived in the case with different variances for unique factors. Further, the asymptotic standard errors of parameter estimates are obtained for a simplified model with the assumption of multivariate normality, which shows that the component loading estimate is more stable than the corresponding factor loading estimate.

8.
A Monte Carlo experiment is conducted to investigate the performance of the bootstrap methods in normal theory maximum likelihood factor analysis both when the distributional assumption is satisfied and when it is violated. The parameters and their functions of interest include unrotated loadings, analytically rotated loadings, and unique variances. The results reveal that (a) bootstrap bias estimation sometimes performs poorly for factor loadings and nonstandardized unique variances; (b) bootstrap variance estimation performs well even when the distributional assumption is violated; (c) bootstrap confidence intervals based on the Studentized statistics are recommended; (d) if a structural hypothesis about the population covariance matrix is taken into account, then the bootstrap distribution of the normal theory likelihood ratio test statistic is close to the corresponding sampling distribution, with a slightly heavier right tail.

9.
刘彦楼 《心理学报》2022,54(6):703-724
In cognitive diagnosis models, standard errors (Standard Error, SE; or the variance-covariance matrix) and confidence intervals (Confidence Interval, CI) have important theoretical and practical value for quantifying the uncertainty of model parameter estimates, testing differential item functioning, item-level model comparison, Q-matrix validation, and exploring attribute hierarchies. This study proposes two new methods for computing SEs and CIs: the parallel parametric bootstrap and the parallel nonparametric bootstrap. Simulation studies show that, when the model is correctly specified, both methods perform well in computing the SEs and CIs of the model parameters under high- and medium-quality item conditions; when redundant model parameters are present, the SEs and CIs of most of the permissible parameters still perform well under high- and medium-quality item conditions. An empirical data example demonstrates the value of the new methods and the gains in computational efficiency.
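The parallel bootstrap itself is generic; a minimal sketch of the nonparametric variant is shown below, with `fit_model` standing in as a hypothetical placeholder for whatever routine estimates the cognitive diagnosis model parameters (the original study's estimator is not reproduced here):

```python
# Hypothetical illustration: bootstrap replications distributed over CPU cores.
# `fit_model` is a placeholder that takes a data matrix and returns a parameter
# vector; any estimator with that interface could be plugged in.
import numpy as np
from joblib import Parallel, delayed

def parallel_nonparametric_bootstrap(data, fit_model, n_boot=1000, n_jobs=-1, seed=0):
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    index_sets = [rng.integers(0, n, size=n) for _ in range(n_boot)]  # resampling plans
    estimates = np.asarray(
        Parallel(n_jobs=n_jobs)(delayed(fit_model)(data[idx]) for idx in index_sets)
    )
    se = estimates.std(axis=0, ddof=1)                       # bootstrap standard errors
    ci = np.percentile(estimates, [2.5, 97.5], axis=0)       # 95% percentile intervals
    return se, ci
```

The parametric variant differs only in how the replicate data sets are generated: they are simulated from the fitted model rather than resampled from the observed responses.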

10.
In a meta-analysis, the unknown parameters are often estimated using maximum likelihood, and inferences are based on asymptotic theory. It is assumed that, conditional on study characteristics included in the model, the between-study distribution and the sampling distributions of the effect sizes are normal. In practice, however, samples are finite, and the normality assumption may be violated, possibly resulting in biased estimates and inappropriate standard errors. In this article, we propose two parametric and two nonparametric bootstrap methods that can be used to adjust the results of maximum likelihood estimation in meta-analysis and illustrate them with empirical data. A simulation study, with raw data drawn from normal distributions, reveals that the parametric bootstrap methods and one of the nonparametric methods are generally superior to the ordinary maximum likelihood approach but suffer from a bias/precision tradeoff. We recommend using one of these bootstrap methods, but without applying the bias correction.
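A minimal sketch of the nonparametric version of this idea for a random-effects model, assuming observed effect sizes `y` with known sampling variances `v`; the simple maximum likelihood routine and the choice to resample whole studies are illustrative, not the authors' implementation:

```python
# Hypothetical illustration: nonparametric bootstrap around a maximum
# likelihood random-effects meta-analysis. Studies are resampled with
# replacement and the model is refitted to each bootstrap sample.
import numpy as np
from scipy.optimize import minimize

def ml_meta(y, v):
    def negloglik(theta):
        mu, log_tau2 = theta
        s2 = v + np.exp(log_tau2)                            # total variance per study
        return 0.5 * np.sum(np.log(s2) + (y - mu) ** 2 / s2)
    res = minimize(negloglik, x0=[y.mean(), np.log(y.var() + 1e-6)], method="Nelder-Mead")
    mu, log_tau2 = res.x
    return mu, np.exp(log_tau2)                              # pooled effect and tau^2

def bootstrap_meta(y, v, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(y)
    reps = []
    for _ in range(n_boot):
        idx = rng.integers(0, k, size=k)                     # resample studies
        reps.append(ml_meta(y[idx], v[idx]))
    reps = np.asarray(reps)
    return reps.std(axis=0, ddof=1), np.percentile(reps, [2.5, 97.5], axis=0)
```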

11.
Mediation models are often used as a means to explain the psychological mechanisms between an independent and a dependent variable in the behavioral and social sciences. A major limitation of the unstandardized indirect effect calculated from raw scores is that it cannot be interpreted as an effect-size measure. In contrast, the standardized indirect effect calculated from standardized scores can be a good candidate as a measure of effect size because it is scale invariant. In the present article, 11 methods for constructing the confidence intervals (CIs) of the standardized indirect effects were evaluated via a computer simulation. These included six Wald CIs, three bootstrap CIs, one likelihood-based CI, and the PRODCLIN CI. The results consistently showed that the percentile bootstrap, the bias-corrected bootstrap, and the likelihood-based approaches had the best coverage probability. Mplus, LISREL, and Mx syntax were included to facilitate the use of these preferred methods in applied settings. Future issues on the use of the standardized indirect effects are discussed.
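A minimal sketch of the percentile bootstrap (one of the best-performing methods) for a simple X to M to Y model, with all variables standardized inside each bootstrap sample so that the product a*b is on the standardized scale; the regression routine and the 5,000 replications are illustrative choices, not the article's code (the article itself provides Mplus, LISREL, and Mx syntax):

```python
# Hypothetical illustration: percentile bootstrap CI for the standardized
# indirect effect a*b in a single-mediator model.
import numpy as np

def std_indirect(x, m, y):
    zx, zm, zy = [(v - v.mean()) / v.std(ddof=1) for v in (x, m, y)]
    a = np.polyfit(zx, zm, 1)[0]                             # path a: M regressed on X
    design = np.column_stack([zx, zm, np.ones_like(zx)])
    b = np.linalg.lstsq(design, zy, rcond=None)[0][1]        # path b: Y on M, controlling X
    return a * b

def percentile_ci(x, m, y, n_boot=5000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    ab = np.empty(n_boot)
    for r in range(n_boot):
        i = rng.integers(0, n, size=n)                       # resample cases, keeping triples intact
        ab[r] = std_indirect(x[i], m[i], y[i])
    return std_indirect(x, m, y), np.percentile(ab, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```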

12.
Interval estimates – estimates of parameters that include an allowance for sampling uncertainty – have long been touted as a key component of statistical analyses. There are several kinds of interval estimates, but the most popular are confidence intervals (CIs): intervals that contain the true parameter value in some known proportion of repeated samples, on average. The width of confidence intervals is thought to index the precision of an estimate; CIs are thought to be a guide to which parameter values are plausible or reasonable; and the confidence coefficient of the interval (e.g., 95%) is thought to index the plausibility that the true parameter is included in the interval. We show in a number of examples that CIs do not necessarily have any of these properties, and can lead to unjustified or arbitrary inferences. For this reason, we caution against relying upon confidence interval theory to justify interval estimates, and suggest that other theories of interval estimation should be used instead.

13.
The mathematical connection between canonical correlation analysis (CCA) and covariance structure analysis was first discussed through the Multiple Indicators and Multiple Causes (MIMIC) approach. However, the MIMIC approach has several technical and practical challenges. To address these challenges, a comprehensive COSAN modeling approach is proposed. Specifically, we define four COSAN-CCA models to correspond with four possible combinations of the data to be analyzed and the unique parameters to be estimated. In terms of the data, one can analyze either the unstandardized or standardized variables. In terms of the unique parameters, one can estimate either the weights or loadings. Besides the unique parameters of each COSAN-CCA model, all four COSAN-CCA models also estimate the canonical correlations as their common parameters. Taken together, the four COSAN-CCA models provide the correct point estimates and standard error estimates for all commonly used CCA parameters. Two numeric examples are used to compare the standard error estimates obtained from the MIMIC approach and the COSAN modeling approach. Moreover, the standard error estimates from the COSAN modeling approach are validated by a simulation study and the asymptotic theory. Finally, software implementation and future extensions are discussed.

14.
15.
Commonly used formulae for standard error (SE) estimates in covariance structure analysis are derived under the assumption of a correctly specified model. In practice, a model is at best only an approximation to the real world. It is important to know whether the estimates of SEs as provided by standard software are consistent when a model is misspecified, and to understand why if not. Bootstrap procedures provide nonparametric estimates of SEs that automatically account for distribution violation. It is also necessary to know whether bootstrap estimates of SEs are consistent. This paper studies the relationship between the bootstrap estimates of SEs and those based on asymptotics. Examples are used to illustrate various versions of asymptotic variance–covariance matrices and their validity. Conditions for the consistency of the bootstrap estimates of SEs are identified and discussed. Numerical examples are provided to illustrate the relationship of different estimates of SEs and covariance matrices.

16.
A new method for the analysis of linear models that have autoregressive errors is proposed. The approach is not only relevant in the behavioral sciences for analyzing small-sample time-series intervention models, but it is also appropriate for a wide class of small-sample linear model problems in which there is interest in inferential statements regarding all regression parameters and autoregressive parameters in the model. The methodology includes a double application of bootstrap procedures. The first application is used to obtain bias-adjusted estimates of the autoregressive parameters. The second application is used to estimate the standard errors of the parameter estimates. Theoretical and Monte Carlo results are presented to demonstrate asymptotic and small-sample properties of the method; examples that illustrate advantages of the new approach over established time-series methods are described.
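A minimal sketch of the double bootstrap for the special case of a regression with AR(1) errors, assuming a response vector `y` and a design matrix `X` that already includes an intercept column; the concrete resampling details are simplified relative to the article's general treatment:

```python
# Hypothetical illustration of the two-stage idea: bootstrap once to
# bias-adjust the autoregressive estimate, then bootstrap again under the
# bias-adjusted model to obtain standard errors for all parameters.
import numpy as np

def fit_ols_ar1(y, X):
    b = np.linalg.lstsq(X, y, rcond=None)[0]                 # OLS regression coefficients
    e = y - X @ b
    rho = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)       # lag-1 autoregressive estimate
    u = e[1:] - rho * e[:-1]                                 # innovation residuals
    return b, rho, u - u.mean()

def simulate(X, b, rho, u, rng):
    n = X.shape[0]
    e = np.zeros(n)
    draws = rng.choice(u, size=n, replace=True)              # resample innovations
    for t in range(1, n):
        e[t] = rho * e[t - 1] + draws[t]
    return X @ b + e

def double_bootstrap(y, X, n_bias=500, n_se=500, seed=0):
    rng = np.random.default_rng(seed)
    b, rho, u = fit_ols_ar1(y, X)
    # stage 1: estimate and remove the small-sample bias in rho
    rho_reps = [fit_ols_ar1(simulate(X, b, rho, u, rng), X)[1] for _ in range(n_bias)]
    rho_adj = 2 * rho - np.mean(rho_reps)
    # stage 2: standard errors under the bias-adjusted error model
    reps = []
    for _ in range(n_se):
        bb, rb, _ = fit_ols_ar1(simulate(X, b, rho_adj, u, rng), X)
        reps.append(np.r_[bb, rb])
    reps = np.asarray(reps)
    return np.r_[b, rho_adj], reps.std(axis=0, ddof=1)       # estimates and bootstrap SEs
```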

17.
To deal with missing data that arise due to participant nonresponse or attrition, methodologists have recommended an “inclusive” strategy where a large set of auxiliary variables are used to inform the missing data process. In practice, the set of possible auxiliary variables is often too large. We propose using principal components analysis (PCA) to reduce the number of possible auxiliary variables to a manageable number. A series of Monte Carlo simulations compared the performance of the inclusive strategy with eight auxiliary variables (inclusive approach) to the PCA strategy using just one principal component derived from the eight original variables (PCA approach). We examined the influence of four independent variables: magnitude of correlations, rate of missing data, missing data mechanism, and sample size on parameter bias, root mean squared error, and confidence interval coverage. Results indicate that the PCA approach results in unbiased parameter estimates and potentially more accuracy than the inclusive approach. We conclude that using the PCA strategy to reduce the number of auxiliary variables is an effective and practical way to reap the benefits of the inclusive strategy in the presence of many possible auxiliary variables.
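A minimal sketch of the PCA strategy, assuming complete auxiliary variables and using scikit-learn's `IterativeImputer` as a stand-in for whatever missing-data routine (multiple imputation or FIML) is actually used; none of these implementation details come from the original study:

```python
# Hypothetical illustration: summarize many auxiliary variables with their
# first principal component and carry that single score into the imputation
# of the incomplete analysis variables.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def impute_with_pca_auxiliary(analysis_vars, auxiliary_vars, seed=0):
    aux_z = (auxiliary_vars - auxiliary_vars.mean(axis=0)) / auxiliary_vars.std(axis=0, ddof=1)
    pc1 = PCA(n_components=1).fit_transform(aux_z)           # one auxiliary score per observation
    combined = np.column_stack([analysis_vars, pc1])         # analysis variables + auxiliary score
    imputed = IterativeImputer(random_state=seed).fit_transform(combined)
    return imputed[:, :analysis_vars.shape[1]]               # drop the auxiliary column
```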

18.
Standard errors for rotated factor loadings
Beginning with the results of Girshick on the asymptotic distribution of principal component loadings and those of Lawley on the distribution of unrotated maximum likelihood factor loadings, the asymptotic distribution of the corresponding analytically rotated loadings is obtained. The principal difficulty is the fact that the transformation matrix which produces the rotation is usually itself a function of the data. The approach is to use implicit differentiation to find the partial derivatives of an arbitrary orthogonal rotation algorithm. Specific details are given for the orthomax algorithms and an example involving maximum likelihood estimation and varimax rotation is presented.

19.
Some relationships between factors and components
The asymptotic correlations between the estimates of factor and component loadings are obtained for the exploratory factor analysis model with the assumption of a multivariate normal distribution for manifest variables. The asymptotic correlations are derived for the cases of unstandardized and standardized manifest variables with orthogonal and oblique rotations. Based on the above results, the asymptotic standard errors for estimated correlations between factors and components are derived. Further, the asymptotic standard error of the mean squared canonical correlation for factors and components, which is an overall index for the closeness of factors and components, is derived. The results of a Monte Carlo simulation are presented to show the usefulness of the asymptotic results in the data with a finite sample size.

20.
Ab Mooijaart 《Psychometrika》1985,50(3):323-342
Factor analysis for nonnormally distributed variables is discussed in this paper. The main difference between our approach and more traditional approaches is that not only second order cross-products (like covariances) are utilized, but also higher order cross-products. It turns out that under some conditions the parameters (factor loadings) can be uniquely determined. Two estimation procedures will be discussed. One method gives Best Generalized Least Squares (BGLS) estimates, but is computationally very heavy, in particular for large data sets. The other method is a least squares method which is computationally less heavy. In one example the two methods will be compared by using the bootstrap method. In another example real life data are analyzed.
