Similar Documents
20 similar documents found (search time: 0 ms)
1.
The asymptotic standard errors for the Procrustes solutions are derived for orthogonal rotation, direct oblique rotation, and indirect oblique rotation. The standard errors for the first two rotations are obtained using the augmented information matrices. For the indirect oblique solution, the standard errors of the rotated parameters are derived from the information matrix of the unrotated loadings using the chain rule for information matrices. For all three types of rotation, the standard errors of rotated parameters are presented for unstandardized and standardized manifest variables. Numerical examples show the similarity of theoretical and simulated values.
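For context, the orthogonal Procrustes solution whose standard errors are treated above has a classic closed form via the singular value decomposition. The sketch below is illustrative only (the loading matrix and rotation angle are hypothetical), and shows the solution itself, not the paper's standard-error derivation:

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Orthogonal rotation T minimizing ||A @ T - B||_F (classic SVD solution)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Rotate a hypothetical loading matrix by a known angle, then recover the rotation.
angle = 0.3
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
A = np.array([[0.8, 0.1], [0.7, 0.2], [0.1, 0.9], [0.2, 0.6]])
T = orthogonal_procrustes(A, A @ R)   # T recovers R exactly here
```

Because `A.T @ B` is nonsingular in this example, the product `U @ Vt` is the unique polar factor, so the known rotation is recovered exactly.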

2.
Concise formulas for the standard errors of component loading estimates   (total citations: 1; self-citations: 0; citations by others: 1)
Concise formulas for the asymptotic standard errors of component loading estimates were derived. The formulas cover the cases of principal component analysis for unstandardized and standardized variables with orthogonal and oblique rotations. The formulas can be used under any distributions for observed variables as long as the asymptotic covariance matrix for sample covariances/correlations is available. The estimated standard errors in numerical examples were shown to be equivalent to those by the methods using information matrices. The author is indebted to anonymous reviewers for the corrections and suggestions on this study, which have led to improvements of earlier versions of this article.

3.
A jackknife-like procedure is developed for producing standard errors of estimate in maximum likelihood factor analysis. Unlike earlier methods based on information theory, the procedure developed is computationally feasible on larger problems. Unlike earlier methods based on the jackknife, the present procedure is not plagued by the factor alignment problem, the Heywood case problem, or the necessity to jackknife by groups. Standard errors may be produced for rotated and unrotated loading estimates using either orthogonal or oblique rotation, as well as for estimates of unique factor variances and common factor correlations. The total cost for larger problems is a small multiple of the square of the number of variables times the number of observations used in the analysis. Examples are given to demonstrate the feasibility of the method. The research done by R. I. Jennrich was supported in part by NSF Grant MCS 77-02121. The research done by D. B. Clarkson was supported in part by NSERC Grant A3109.

4.
The jackknife by groups and modifications of the jackknife by groups are used to estimate standard errors of rotated factor loadings for selected populations in common factor model maximum likelihood factor analysis. Simulations are performed in which t-statistics based upon these jackknife estimates of the standard errors are computed. The validity of the t-statistics and their associated confidence intervals is assessed. Methods are given through which the computational efficiency of the jackknife may be greatly enhanced in the factor analysis model. Computing assistance was obtained from the Health Sciences Computing Facility, UCLA, sponsored by NIH Special Research Resources Grant RR-3. The author wishes to thank his doctoral committee co-chairmen, Drs. James W. Frane and Robert I. Jennrich, UCLA, for their contributions to this research.
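The delete-one-group jackknife underlying entries 3 and 4 can be sketched generically. In the sketch below the estimator is a simple sample mean rather than a rotated factor loading, and both the data and the interleaved grouping scheme are illustrative assumptions:

```python
import math

def grouped_jackknife_se(data, n_groups, estimator):
    """Delete-one-group jackknife standard error of estimator(data)."""
    # Illustrative partition: interleave observations into n_groups groups.
    groups = [data[i::n_groups] for i in range(n_groups)]
    # Leave-one-group-out estimates.
    theta_partial = []
    for g in range(n_groups):
        kept = [x for i, grp in enumerate(groups) if i != g for x in grp]
        theta_partial.append(estimator(kept))
    theta_bar = sum(theta_partial) / n_groups
    # Jackknife variance: (G - 1)/G * sum_g (theta_g - theta_bar)^2
    var = (n_groups - 1) / n_groups * sum((t - theta_bar) ** 2 for t in theta_partial)
    return math.sqrt(var)

data = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1, 2.0, 2.2]
se = grouped_jackknife_se(data, n_groups=5,
                          estimator=lambda xs: sum(xs) / len(xs))
```

In the factor-analytic setting the same scheme applies, except that `estimator` refits and rotates the factor model on each leave-one-group-out sample, which is where the alignment issues discussed above arise.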

5.
Lord and Wingersky have developed a method for computing the asymptotic variance-covariance matrix of maximum likelihood estimates for item and person parameters under some restrictions on the estimates which are needed in order to fix the latent scale. The method is tedious, but can be simplified for the Rasch model when one is only interested in the item parameters. This is demonstrated here under a suitable restriction on the item parameter estimates.

6.
The purpose of this study was to investigate and compare the performance of a stepwise variable selection algorithm to traditional exploratory factor analysis. The Monte Carlo study included six factors in the design: the number of common factors; the number of variables explained by the common factors; the magnitude of factor loadings; the number of variables not explained by the common factors; the type of anomaly evidenced by the poorly explained variables; and sample size. The performance of the methods was evaluated in terms of selection and pattern accuracy, and bias and root mean squared error of the structure coefficients. Results indicate that the stepwise algorithm was generally ineffective at excluding anomalous variables from the factor model. The poor selection accuracy of the stepwise approach suggests that it should be avoided.

7.
Factor analysis is regularly used for analyzing survey data. Missing data, data with outliers, and consequently nonnormal data are very common for data obtained through questionnaires. Based on covariance matrix estimates for such nonstandard samples, a unified approach for factor analysis is developed. By generalizing the approach of maximum likelihood under constraints, statistical properties of the estimates for factor loadings and error variances are obtained. A rescaled Bartlett-corrected statistic is proposed for evaluating the number of factors. Equivariance and invariance of parameter estimates and their standard errors for canonical, varimax, and normalized varimax rotations are discussed. Numerical results illustrate the sensitivity of classical methods and advantages of the proposed procedures. This project was supported by a University of North Texas Faculty Research Grant, Grant #R49/CCR610528 for Disease Control and Prevention from the National Center for Injury Prevention and Control, and Grant DA01070 from the National Institute on Drug Abuse. The results do not necessarily represent the official view of the funding agencies. The authors are grateful to three reviewers for suggestions that improved the presentation of this paper.

8.
This paper analyzes the sum score based (SSB) formulation of the Rasch model, where items and sum scores of persons are considered as factors in a logit model. After reviewing the evolution leading to the equality between their maximum likelihood estimates, the SSB model is then discussed from the point of view of pseudo-likelihood and of misspecified models. This is then employed to provide new insights into the origin of the known inconsistency of the difficulty parameter estimates in the Rasch model. The main results consist of exact relationships between the estimated standard errors for both models; and, for the ability parameters, an upper bound for the estimated standard errors of the Rasch model in terms of those for the SSB model, which are more easily available. The authors acknowledge partial financial support from the FONDECYT Project No. 1060722 from the Chilean Government, and the BIL05/03 grant to P. De Boeck, E. Lesaffre and G. Molenberghs (Flanders) for a collaboration with G. del Pino, E. San Martín, F. Quintana and J. Manzi (Chile).

9.
Aim: Cognitive errors (CE) and coping strategies (CS) can bear weight on how individuals relate to others and perceive interpersonal relationships. However, there is little research into how clients' erroneous beliefs and maladaptive coping strategies can interfere with the therapeutic process. This study utilised a sample of healthy clients to explore the relationship between their CEs and CSs and their evaluation of therapy. Method: Therapy sessions of undergraduate student clients (N = 26) were rated using the Cognitive Error Rating Scale (CERS, 3rd edition), the Coping Patterns Rating Scale (CPRS), the Session Evaluation Questionnaire (SEQ) and the Session Impact Scale (SIS). Results: Clients who engaged in dichotomous thinking endorsed problem solving less and were more likely to feel unsupported and misunderstood by the therapist. Clients who discounted the positive tended to feel more pressured and judged by therapists. Conversely, those who engaged in problem solving were more likely to find sessions deeper and more valuable as compared to those who reacted to stressful events by submission, escape, or opposition. Implications: A better understanding of how and when a client's cognitive errors and coping mechanisms are at play during therapy can help clinicians address them and intervene appropriately.

10.
Yutaka Kano, Psychometrika, 1990, 55(2): 277-291
Based on the usual factor analysis model, this paper investigates the relationship between improper solutions and the number of factors, and discusses the properties of the noniterative estimation method of Ihara and Kano in exploratory factor analysis. The consistency of the Ihara and Kano estimator is shown to hold even for an overestimated number of factors, which provides a theoretical basis for the rare occurrence of improper solutions and for a new method of choosing the number of factors. The comparative study of their estimator and that based on maximum likelihood is carried out by a Monte Carlo experiment. The author would like to express his thanks to Masashi Okamoto and Masamori Ihara for helpful comments and to the editor and referees for critically reading the earlier versions and making many valuable suggestions. He also thanks Shigeo Aki for his comments on physical random numbers.

11.
In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process.

12.
A closed-form estimator of the uniqueness (unique variance) in factor analysis is proposed. It has analytically desirable properties: consistency, asymptotic normality, and scale invariance. The estimation procedure is illustrated by application to two data sets, Emmett's data and Holzinger and Swineford's data. The new estimator is shown to yield values rather close to the maximum likelihood estimator.
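The idea behind closed-form (noniterative) uniqueness estimators can be shown in a few lines for the one-factor case: under x_i = λ_i f + e_i, the identity λ_i² = σ_ia σ_ib / σ_ab holds for any two other variables a and b, so the uniqueness follows directly from three covariances. The "triad" sketch below is only this one-factor illustration with hypothetical loadings, not the paper's estimator in full generality:

```python
# One-factor "triad" sketch of a closed-form uniqueness estimate.
# Assumes x_i = lam_i * f + e_i, so lam_i^2 = S[i][a] * S[i][b] / S[a][b]
# for any two other variables a, b, and hence psi_i = S[i][i] - lam_i^2.
def uniqueness_triad(S, i, a, b):
    lam_sq = S[i][a] * S[i][b] / S[a][b]
    return S[i][i] - lam_sq

# Exact covariance matrix built from hypothetical loadings/uniquenesses
# (chosen so every variance equals 1.0).
lam = [0.8, 0.7, 0.6, 0.5]
psi = [0.36, 0.51, 0.64, 0.75]
S = [[lam[i] * lam[j] + (psi[i] if i == j else 0.0) for j in range(4)]
     for i in range(4)]
psi0 = uniqueness_triad(S, 0, 1, 2)   # recovers psi[0] = 0.36 exactly
```

With sample rather than exact covariances, different choices of the pair (a, b) give different estimates, which is where averaging and the asymptotic analysis in papers of this kind come in.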

13.
A general theory for parametric inference in contingency tables is outlined. Estimation of polychoric correlations is seen as a special case of this theory. The asymptotic covariance matrix of the estimated polychoric correlations is derived for the case when the thresholds are estimated from the univariate marginals and the polychoric correlations are estimated from the bivariate marginals for given thresholds. Computational aspects are also discussed. The research was supported by the Swedish Council for Research in the Humanities and Social Sciences (HSFR) under the program Multivariate Statistical Analysis. The author thanks a reviewer for pointing out an error in the original version of the paper.
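The first stage of the two-stage scheme described above (thresholds from univariate marginals) reduces to inverting the standard normal CDF at the cumulative category proportions. A minimal sketch, with hypothetical category counts:

```python
from statistics import NormalDist

def thresholds_from_marginals(counts):
    """Thresholds tau_k = Phi^{-1}(P(X <= k)) from observed category counts."""
    n = sum(counts)
    cum = 0
    taus = []
    for c in counts[:-1]:   # the last cumulative proportion is 1, so no threshold
        cum += c
        taus.append(NormalDist().inv_cdf(cum / n))
    return taus

# Hypothetical counts for a 3-category ordinal item: proportions 0.2, 0.5, 0.3.
taus = thresholds_from_marginals([20, 50, 30])
```

The second stage, estimating the polychoric correlation from the bivariate table with these thresholds held fixed, requires maximizing a bivariate-normal likelihood and is not shown here.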

14.
Jansen (1984) gave explicit formulas for the computation of the second-order derivatives of the elementary symmetric functions. But they are only applicable to those pairs of items which have unequal parameters. It is shown here that for equal parameters similar explicit formulas do exist, too, facilitating the application of the Newton-Raphson procedure to estimate the parameters in the Rasch model and related models according to the conditional maximum likelihood principle. The author wishes to thank the reviewers for their helpful comments on an earlier draft of this paper.
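The elementary symmetric functions themselves are computed by a standard summation recurrence, adding one item at a time; the derivative formulas discussed above build on the same quantities. A minimal sketch (using the common convention ε_i = exp(-β_i) for the Rasch item parameters):

```python
def elementary_symmetric(eps):
    """Return [gamma_0, ..., gamma_n], where gamma_r is the sum over all
    r-subsets of eps of the product of their elements (gamma_0 = 1)."""
    gamma = [1.0]
    for e in eps:
        # Recurrence: gamma_r^(new) = gamma_r + e * gamma_{r-1}
        gamma = ([1.0]
                 + [gamma[r] + e * gamma[r - 1] for r in range(1, len(gamma))]
                 + [e * gamma[-1]])
    return gamma

# Hypothetical item parameters eps_i = exp(-beta_i).
g = elementary_symmetric([0.5, 2.0, 1.0])
# g[1] = 0.5 + 2 + 1, g[2] = 0.5*2 + 0.5*1 + 2*1, g[3] = 0.5*2*1
```

First derivatives with respect to ε_i are the elementary symmetric functions of the remaining items, so the same routine applied to `eps` with item i removed yields them; the paper's contribution concerns the second-order derivatives when parameters coincide.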

15.
In exploratory factor analysis, latent factors and factor loadings are seldom interpretable until analytic rotation is performed. Typically, the rotation problem is solved by numerically searching for an element in the manifold of orthogonal or oblique rotation matrices such that the rotated factor loadings minimize a pre-specified complexity function. The widely used gradient projection (GP) algorithm, although simple to program and able to deal with both orthogonal and oblique rotation, is found to suffer from slow convergence when the number of manifest variables and/or the number of latent factors is large. The present work examines the effectiveness of two Riemannian second-order algorithms, which respectively generalize the well-established truncated Newton and trust-region strategies for unconstrained optimization in Euclidean spaces, in solving the rotation problem. When approaching a local minimum, the second-order algorithms usually converge superlinearly or even quadratically, better than first-order algorithms that only converge linearly. It is further observed in Monte Carlo studies that, compared to the GP algorithm, the Riemannian truncated Newton and trust-region algorithms require not only much fewer iterations but also much less processing time to meet the same convergence criterion, especially in the case of oblique rotation.
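A bare-bones version of the first-order GP iteration (orthogonal case, with the quartimax criterion chosen here for brevity) shows the structure the second-order Riemannian methods improve upon: a gradient step followed by an SVD projection back onto the orthogonal manifold. The loading matrix and step-size scheme below are illustrative assumptions:

```python
import numpy as np

def quartimax(L):
    """Quartimax complexity (to be minimized) and its gradient dQ/dL."""
    return -np.sum(L ** 4) / 4.0, -L ** 3

def gp_rotate(A, criterion, alpha=1.0, tol=1e-8, max_iter=500):
    """First-order gradient projection rotation, orthogonal case (sketch)."""
    T = np.eye(A.shape[1])
    f, dL = criterion(A @ T)
    for _ in range(max_iter):
        improved = False
        for _ in range(20):                      # backtracking line search
            U, _, Vt = np.linalg.svd(T - alpha * (A.T @ dL))
            T_new = U @ Vt                       # project onto orthogonal matrices
            f_new, dL_new = criterion(A @ T_new)
            if f_new < f - tol:
                improved = True
                break
            alpha /= 2
        if not improved:                         # no further decrease found
            break
        T, f, dL = T_new, f_new, dL_new
        alpha *= 2
    return A @ T, T

# Rotate a hypothetical simple-structure loading matrix away, then recover it.
angle = 0.5
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
L0 = np.array([[0.8, 0.0], [0.7, 0.0], [0.0, 0.8], [0.0, 0.7]])
A = L0 @ R
L_rot, T = gp_rotate(A, quartimax)
```

Each iteration costs one SVD of a k-by-k matrix plus criterion evaluations; the linear convergence of this plain gradient step is exactly the behavior that motivates the truncated Newton and trust-region alternatives studied above.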

16.
Equivalence of marginal likelihood of the two-parameter normal ogive model in item response theory (IRT) and factor analysis of dichotomized variables (FA) was formally proved. The basic result on the dichotomous variables was extended to multicategory cases, both ordered and unordered categorical data. Pair comparison data arising from multiple-judgment sampling were discussed as a special case of the unordered categorical data. A taxonomy of data for the IRT and FA models was also attempted. The work reported in this paper has been supported by Grant A6394 to the first author from the Natural Sciences and Engineering Research Council of Canada.

17.
Several algorithms for covariance structure analysis are considered in addition to the Fletcher-Powell algorithm. These include the Gauss-Newton, Newton-Raphson, Fisher Scoring, and Fletcher-Reeves algorithms. Two methods of estimation are considered, maximum likelihood and weighted least squares. It is shown that the Gauss-Newton algorithm, which in standard form produces weighted least squares estimates, can, in iteratively reweighted form, produce maximum likelihood estimates as well. Previously unavailable standard error estimates to be used in conjunction with the Fletcher-Reeves algorithm are derived. Finally, all the algorithms are applied to a number of maximum likelihood and weighted least squares factor analysis problems to compare the estimates and the standard errors produced. The algorithms appear to give satisfactory estimates, but there are serious discrepancies in the standard errors. Because it is robust to poor starting values, converges rapidly, and conveniently produces consistent standard errors for both maximum likelihood and weighted least squares problems, the Gauss-Newton algorithm represents an attractive alternative for at least some covariance structure analyses. Work by the first author has been supported in part by Grant No. DA01070 from the U.S. Public Health Service. Work by the second author has been supported in part by Grant No. MCS 77-02121 from the National Science Foundation.
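The plain Gauss-Newton update solves (J'J) δ = J'r at each iteration, with J'J also serving as the basis for the standard-error estimates mentioned above. The sketch below applies it to a toy nonlinear least-squares fit, unrelated to the covariance-structure setting except as an illustration of the update:

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, max_iter=50, tol=1e-12):
    """Plain Gauss-Newton: repeatedly solve (J'J) step = J'r and update theta."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        r = residual(theta)
        J = jacobian(theta)
        step = np.linalg.solve(J.T @ J, J.T @ r)   # normal equations
        theta = theta - step
        if np.linalg.norm(step) < tol:
            break
    return theta

# Toy example: fit y = exp(-b * t) by least squares (true b = 0.5).
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.exp(-0.5 * t)
res = lambda th: np.exp(-th[0] * t) - y
jac = lambda th: (-t * np.exp(-th[0] * t)).reshape(-1, 1)
b_hat = gauss_newton(res, jac, [1.0])
```

The iteratively reweighted variant discussed in the abstract inserts a weight matrix W into both normal-equation products (J'WJ and J'Wr), updating W at each step; the inverse of J'WJ at convergence then supplies the standard errors.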

18.
In item response models of the Rasch type (Fischer & Molenaar, 1995), item parameters are often estimated by the conditional maximum likelihood (CML) method. This paper addresses the loss of information in CML estimation by using the information concept of F-information (Liang, 1983). This concept makes it possible to specify the conditions for no loss of information and to define a quantification of information loss. For the dichotomous Rasch model, the derivations will be given in detail to show the use of the F-information concept for making comparisons for different estimation methods. It is shown that by using CML for item parameter estimation, some information is almost always lost. But compared to JML (joint maximum likelihood) as well as to MML (marginal maximum likelihood) the loss is very small. The reported efficiency in the use of information of CML to JML and to MML in several comparisons is always larger than 93%, and in tests with a length of 20 items or more, larger than 99%.

19.
王孟成 (Wang Mengcheng) & 邓俏文 (Deng Qiaowen), 《心理学报》 (Acta Psychologica Sinica), 2016, (11): 1489-1498
This study used Monte Carlo simulation to examine the role of auxiliary variables when modeling missing data with full-information maximum likelihood estimation. Specifically, it examined how the accuracy of parameter estimates is affected by the joint missingness mechanism of the auxiliary and analysis variables, the joint missingness rate, the strength of their correlation, the number of auxiliary variables, and sample size. The results show that, when the auxiliary and analysis variables are jointly missing: (1) results are more prone to bias when the auxiliary variable is missing completely at random; (2) for the MAR-MAR combination of mechanisms, including a single auxiliary variable is beneficial, whereas for the MAR-MCAR and MAR-MNAR combinations, including more than one auxiliary variable works better; and (3) including auxiliary variables that are only weakly correlated with the analysis variables is also beneficial.

20.
The method of finding the maximum likelihood estimates of the parameters in a multivariate normal model with some of the component variables observable only in polytomous form is developed. The main stratagem used is a reparameterization which converts the corresponding log likelihood function to an easily handled one. The maximum likelihood estimates are found by a Fletcher-Powell algorithm, and their standard error estimates are obtained from the information matrix. When the dimension of the random vector observable only in polytomous form is large, obtaining the maximum likelihood estimates is computationally rather labor expensive. Therefore, a more efficient method, the partition maximum likelihood method, is proposed. These estimation methods are demonstrated by real and simulated data, and are compared by means of a simulation study.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号