Similar documents
20 similar documents retrieved (search time: 15 ms)
1.
The common maximum likelihood (ML) estimator for structural equation models (SEMs) has optimal asymptotic properties under ideal conditions (e.g., correct structure, no excess kurtosis, etc.) that are rarely met in practice. This paper proposes model-implied instrumental variable – generalized method of moments (MIIV-GMM) estimators for latent variable SEMs that are more robust than ML to violations of both the model structure and distributional assumptions. Under less demanding assumptions, the MIIV-GMM estimators are consistent, asymptotically unbiased, asymptotically normal, and have an asymptotic covariance matrix. They are “distribution-free,” robust to heteroscedasticity, and have overidentification goodness-of-fit J-tests with asymptotic chi-square distributions. In addition, MIIV-GMM estimators are “scalable” in that they can estimate and test the full model or any subset of equations, and hence allow better pinpointing of those parts of the model that fit and do not fit the data. An empirical example illustrates MIIV-GMM estimators. Two simulation studies explore their finite sample properties and find that they perform well across a range of sample sizes.
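As a rough, self-contained illustration of the equation-by-equation instrumental-variable idea behind MIIV estimation (plain two-stage least squares rather than the GMM weighting and J-tests described above), the following sketch estimates one structural equation with an endogenous regressor; the toy data-generating model and variable names are hypothetical.

```python
# A minimal sketch of equation-by-equation instrumental-variable estimation
# (2SLS), the building block behind MIIV-type estimators.  The data-generating
# model and the variable names below are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical toy model: x is endogenous (correlated with the error u),
# z1 and z2 are valid instruments (correlated with x, independent of u).
z1, z2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z1 + 0.5 * z2 + 0.6 * u + rng.normal(size=n)   # endogenous regressor
y = 1.0 + 2.0 * x + u                                     # structural equation

X = np.column_stack([np.ones(n), x])        # regressors (with intercept)
Z = np.column_stack([np.ones(n), z1, z2])   # instruments (with intercept)

# Two-stage least squares: project X onto the instrument space, then regress y.
X_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta_2sls = np.linalg.solve(X_hat.T @ X, X_hat.T @ y)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)   # inconsistent because x is endogenous
print("OLS estimate :", beta_ols)    # slope biased upward, away from 2.0
print("2SLS estimate:", beta_2sls)   # slope close to the true value 2.0
```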

2.
The effects of N and communality on the variability of zero and nonzero factor loadings were assessed using a Monte Carlo approach. It was found that increasing N or communality resulted in decreased sampling error of individual factor loadings, but for zero loadings N was found to have the greatest influence. It was also found that distributions of factor loadings become relatively elongated as communality increases.
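A minimal Monte Carlo sketch of the kind of question studied above: how N and communality affect the sampling variability of a factor loading. The one-factor design, the loading values, and the use of the first principal component of the sample correlation matrix as a stand-in loading estimator are illustrative assumptions, not the original study's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_loading_sd(n, loading, p=6, reps=500):
    """SD of an estimated loading across Monte Carlo replications.

    One-factor model: x_j = loading * f + sqrt(1 - loading**2) * e_j,
    so the population communality of every variable is loading**2.
    The loading is estimated from the first principal component of the
    sample correlation matrix (a simple stand-in for a factor extraction).
    """
    estimates = []
    for _ in range(reps):
        f = rng.normal(size=(n, 1))
        e = rng.normal(size=(n, p))
        x = loading * f + np.sqrt(1 - loading**2) * e
        r = np.corrcoef(x, rowvar=False)
        vals, vecs = np.linalg.eigh(r)
        v = vecs[:, -1] * np.sqrt(vals[-1])   # first-component "loadings"
        v = v * np.sign(v.sum())              # resolve sign indeterminacy
        estimates.append(v[0])
    return np.std(estimates)

for n in (100, 400):
    for loading in (0.5, 0.8):                # communalities 0.25 and 0.64
        sd = simulate_loading_sd(n, loading)
        print(f"N={n:4d}, communality={loading**2:.2f}: SD of loading = {sd:.3f}")
```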

3.
A common criticism of iterative least squares estimates of communality is that method of initial estimation may influence stabilized values. As little systematic research on this topic has been performed, the criticism appears to be based on cumulated experience with empirical data sets. In the present paper, two studies are reported in which four types of initial estimate (unities, squared multiple correlations, highest r, and zeroes) and four levels of convergence criterion were employed using four widely available computer packages (BMDP, SAS, SPSS, and SOUPAC). The results suggest that initial estimates have no effect on stabilized communality estimates when a stringent criterion for convergence is used, whereas initial estimates appear to affect stabilized values employing rather gross convergence criteria. There were no differences among the four computer packages for matrices without Heywood cases.
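A minimal sketch of iterated communality estimation under the four kinds of initial estimate named above (unities, squared multiple correlations, highest r, and zeros), using simple principal-axis refactoring with a stringent convergence criterion; the four-variable correlation matrix is hypothetical and the procedure is not tied to any of the packages listed in the abstract.

```python
import numpy as np

def iterate_communalities(R, n_factors, start, tol=1e-8, max_iter=2000):
    """Iterated principal-axis communality estimates.

    At each step the current communalities replace the diagonal of R, the
    reduced matrix is factored, and the communalities are recomputed as row
    sums of squared loadings over the retained factors.
    """
    h2 = np.asarray(start, float).copy()
    for _ in range(max_iter):
        R_reduced = R.copy()
        np.fill_diagonal(R_reduced, h2)
        vals, vecs = np.linalg.eigh(R_reduced)
        idx = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
        new_h2 = (loadings**2).sum(axis=1)
        if np.max(np.abs(new_h2 - h2)) < tol:   # stringent convergence criterion
            return new_h2
        h2 = new_h2
    return h2

# Hypothetical 4-variable correlation matrix, used only for illustration.
R = np.array([[1.00, 0.60, 0.50, 0.40],
              [0.60, 1.00, 0.45, 0.35],
              [0.50, 0.45, 1.00, 0.55],
              [0.40, 0.35, 0.55, 1.00]])

smc = 1 - 1 / np.diag(np.linalg.inv(R))              # squared multiple correlations
starts = {"unities":   np.ones(4),
          "SMCs":      smc,
          "highest r": np.abs(R - np.eye(4)).max(axis=1),
          "zeros":     np.zeros(4)}
for name, start in starts.items():
    print(name, np.round(iterate_communalities(R, n_factors=1, start=start), 4))
```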

4.
Estimation of effect size is of interest in many applied fields such as Psychology, Sociology and Education. However, there are few nonparametric estimators of effect size proposed in the existing literature, and little is known about the distributional characteristics of these estimators. In this article, two estimators based on the sample quantiles are proposed and studied. The first one is the estimator suggested by Hedges and Olkin (see page 93 of Hedges & Olkin, 1985) for the situation where a treatment effect is evaluated against a control group (Case A). A modified version of the robust estimator by Hedges and Olkin is also proposed for the situation where two parallel treatments are compared (Case B). Large sample distributions of both estimators are derived. Their asymptotic relative efficiencies with respect to the normal maximum likelihood estimators under several common distributions are evaluated. The robust properties of the proposed estimators are discussed with respect to the sample-wise breakdown points proposed by Akritas (1991). Simulation studies are provided in which the performance characteristics of the proposed estimators are compared to those of the nonparametric estimators by Kraemer and Andrews (1982). Interval estimation of the effect sizes is also discussed. In an example, interval estimates for the data set in Kraemer and Andrews (1982) are calculated for both Cases A and B.
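The exact quantile-based estimators studied in the article are not reproduced here; the sketch below shows one common robust analogue for Case A, the difference of medians scaled by the control group's interquartile range rescaled to the standard-deviation metric under normality, purely to fix ideas.

```python
import numpy as np
from scipy import stats

def quantile_effect_size(treatment, control):
    """Illustrative quantile-based effect size (Case A: treatment vs. control).

    Median difference divided by the control group's interquartile range,
    rescaled so that the denominator estimates the standard deviation under
    normality (the IQR of a normal distribution is about 1.349 * sigma).
    """
    q75, q25 = np.percentile(control, [75, 25])
    robust_sd = (q75 - q25) / (stats.norm.ppf(0.75) - stats.norm.ppf(0.25))
    return (np.median(treatment) - np.median(control)) / robust_sd

rng = np.random.default_rng(2)
control = rng.normal(loc=0.0, scale=1.0, size=200)
treatment = rng.normal(loc=0.5, scale=1.0, size=200)   # true effect size 0.5
print(round(quantile_effect_size(treatment, control), 3))
```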

5.
It is common practice in both randomized and quasi-experiments to adjust for baseline characteristics when estimating the average effect of an intervention. The inclusion of a pre-test, for example, can reduce both the standard error of this estimate and—in non-randomized designs—its bias. At the same time, it is also standard to report the effect of an intervention in standardized effect size units, thereby making it comparable to other interventions and studies. Curiously, the estimation of this effect size, including covariate adjustment, has received little attention. In this article, we provide a framework for defining effect sizes in designs with a pre-test (e.g., difference-in-differences and analysis of covariance) and propose estimators of those effect sizes. The estimators and approximations to their sampling distributions are evaluated using a simulation study and then demonstrated using an example from published data.
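A minimal sketch of one common convention for a covariate-adjusted effect size in a pre-test design: the treatment coefficient from an ANCOVA-style regression of the post-test on treatment and pre-test, standardized by the unadjusted pooled post-test standard deviation. This is an illustrative convention with simulated data, not necessarily the estimator proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300

# Hypothetical randomized design: the pre-test predicts the post-test,
# and treatment shifts the post-test by about 0.4 within-group SD.
pre = rng.normal(size=n)
treat = rng.integers(0, 2, size=n)
post = 0.6 * pre + 0.4 * treat + rng.normal(scale=0.8, size=n)

# ANCOVA adjustment: regress post on an intercept, treatment, and the pre-test.
X = np.column_stack([np.ones(n), treat, pre])
coef, *_ = np.linalg.lstsq(X, post, rcond=None)
adjusted_diff = coef[1]                      # covariate-adjusted mean difference

# Standardize by the unadjusted pooled within-group SD of the post-test,
# keeping the effect size comparable to an unadjusted Cohen's d.
g0, g1 = post[treat == 0], post[treat == 1]
pooled_sd = np.sqrt(((len(g0) - 1) * g0.var(ddof=1) + (len(g1) - 1) * g1.var(ddof=1))
                    / (len(g0) + len(g1) - 2))
print("adjusted effect size:", round(adjusted_diff / pooled_sd, 3))
```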

6.
This paper is a presentation of the statistical sampling theory of stepped-up reliability coefficients when a test has been divided into any number of equivalent parts. Maximum-likelihood estimators of the reliability are obtained and shown to be biased. Their sampling distributions are derived and form the basis of the definition of new unbiased estimators with known sampling distributions. These unbiased estimators have a smaller sampling variance than the maximum-likelihood estimators and are, because of this and some other favorable properties, recommended for general use. On the basis of the variances of the unbiased estimators the gain in accuracy in estimating reliability connected with further division of a test can be expressed explicitly. The limits of these variances and thus the limits of accuracy of estimation are derived. Finally, statistical small sample tests of the reliability coefficient are outlined. This paper also covers the sampling distribution of Cronbach's coefficient alpha.
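As background for the quantities discussed above, the sketch below computes coefficient alpha from k equivalent parts and the Spearman-Brown "stepped-up" reliability implied by the average part intercorrelation; the simulated parallel parts are illustrative.

```python
import numpy as np

def cronbach_alpha(parts):
    """Coefficient alpha for an (n_persons, k_parts) score matrix."""
    k = parts.shape[1]
    item_vars = parts.var(axis=0, ddof=1)
    total_var = parts.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def spearman_brown(r_part, k):
    """Stepped-up reliability of a test built from k parallel parts with intercorrelation r_part."""
    return k * r_part / (1 + (k - 1) * r_part)

# Hypothetical parallel parts: common true score plus independent errors.
rng = np.random.default_rng(4)
n, k = 500, 4
true_score = rng.normal(size=(n, 1))
parts = true_score + rng.normal(scale=1.0, size=(n, k))   # each part's reliability = 0.5

corr = np.corrcoef(parts, rowvar=False)
mean_r = corr[np.triu_indices(k, 1)].mean()
print("coefficient alpha   :", round(cronbach_alpha(parts), 3))
print("Spearman-Brown (k=4):", round(spearman_brown(mean_r, k), 3))   # both near 0.8
```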

7.
Four issues are discussed concerning Thurstone's discriminal processes: the distributions governing the representation, the nature of the response decision rules, the relation of the mean representation to physical characteristics of the stimulus, and factors affecting the variance of the representation. A neural schema underlying the representation is proposed which involves samples in time of pulse trains on individual neural fibers, estimators of parameters of the several pulse trains, samples of neural fibers, and an aggregation of the estimates over the sample. The resulting aggregated estimate is the Thurstonian representation. Two estimators of pulse rate, which is monotonic with signal intensity, are timing and counting ratios and two methods of aggregation are averaging and maximizing. These lead to very different predictions in a speed-accuracy experiment; data indicate that both estimators are available and the aggregation is by averaging. Magnitude estimation data are then used both to illustrate an unusual response rule and to study the psychophysical law. In addition, the pattern of variability and correlation of magnitude estimates on successive trials is interpreted in terms of the sample size over which the aggregation takes place. Neural sample size is equated with selective attention, and is an important factor affecting the variability of the representation. It accounts for the magical number seven phenomenon in absolute identification and predicts the impact of nonuniform distributions of intensities on the absolute identification of two frequencies. 1977 Psychometric Society Presidential Address. This work was supported in part by a grant of the National Science Foundation to Harvard University. I wish to express my appreciation to S. Burbeck, D. M. Green, M. Shaw, and B. Wandell for their useful comments on an earlier draft of this paper.

8.
This article derives a standard normal-based power method polynomial transformation for Monte Carlo simulation studies, approximating distributions, and fitting distributions to data based on the method of percentiles. The proposed method is used primarily when (1) conventional (or L) moment-based estimators such as skew (or L-skew) and kurtosis (or L-kurtosis) are unknown or (2) data are unavailable but percentiles are known (e.g., standardized test score reports). The proposed transformation also has the advantage that solutions to polynomial coefficients are available in simple closed form and thus obviates numerical equation solving. A procedure is also described for simulating power method distributions with specified medians, inter-decile ranges, left-right tail-weight ratios (skew function), tail-weight factors (kurtosis function), and Spearman correlations. The Monte Carlo results presented in this study indicate that the estimators based on the method of percentiles are substantially superior to their corresponding conventional product-moment estimators in terms of relative bias. It is also shown that the percentile power method can be modified for generating nonnormal distributions with specified Pearson correlations. An illustration shows the applicability of the percentile power method technique to publicly available statistics from the Idaho state educational assessment.
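A minimal sketch of the third-order power method transformation underlying both the conventional and the percentile-based variants: a standard normal variate passed through Y = c0 + c1·Z + c2·Z² + c3·Z³. The coefficients below are illustrative placeholders; in the method of percentiles they would be solved in closed form from a specified median, inter-decile range, and tail-weight functions.

```python
import numpy as np

def power_method_sample(c, size, rng):
    """Draw from a power-method distribution Y = c0 + c1*Z + c2*Z**2 + c3*Z**3."""
    z = rng.normal(size=size)
    return c[0] + c[1] * z + c[2] * z**2 + c[3] * z**3

rng = np.random.default_rng(5)
# Illustrative coefficient set producing a right-skewed, heavy-tailed shape.
c = (-0.2, 0.9, 0.2, 0.03)
y = power_method_sample(c, size=100_000, rng=rng)

q = np.percentile(y, [10, 25, 50, 75, 90])
print("median             :", round(q[2], 3))
print("inter-decile range :", round(q[4] - q[0], 3))
# One simple percentile-based skew measure: right tail weight over left tail weight.
print("tail-weight ratio  :", round((q[4] - q[2]) / (q[2] - q[0]), 3))
```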

9.
At least four approaches have been used to estimate communalities that will leave an observed correlation matrix R Gramian and with minimum rank. It has long been known that the square of the observed multiple-correlation coefficient is a lower bound to any communality of a variable of R. This lower bound actually provides a best possible estimate in several senses. Furthermore, under certain conditions basic to the Spearman-Thurstone common-factor theory, the bound must equal the communality in the limit as the number of observed variables increases. Otherwise, this type of theory cannot hold for R. This research was facilitated by a grant from the Lucius N. Littauer Foundation to the American Committee for Social Research in Israel in order to promote methodological work of the Israel Institute of Applied Social Research.
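The squared multiple correlation referred to above has a simple closed form in terms of the inverse correlation matrix, SMC_j = 1 − 1/(R⁻¹)_jj; a short sketch with a hypothetical correlation matrix:

```python
import numpy as np

# Hypothetical correlation matrix, used only to illustrate the computation.
R = np.array([[1.00, 0.56, 0.48],
              [0.56, 1.00, 0.42],
              [0.48, 0.42, 1.00]])

# SMC_j = 1 - 1 / (R^{-1})_{jj}: the squared multiple correlation of variable j
# with the remaining variables, a lower bound to its communality.
smc = 1 - 1 / np.diag(np.linalg.inv(R))
print(np.round(smc, 4))
```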

10.
Several theorems concerning properties of the communality of a test in the Thurstone multiple factor theory are established. The following theorems are applicable to a battery of n tests which are describable in terms of r common factors, with orthogonal reference vectors. 1. The communality of a test j is equal to the square of the multiple correlation of test j with the r reference vectors. 2. The communality of a test j is equal to the square of the multiple correlation of test j with the r reference vectors and the n - 1 remaining tests. Corollary: The square of the multiple correlation of a test j with the n - 1 remaining tests is equal to or less than the communality of test j. It cannot exceed the communality. 3. The square of the multiple correlation of a test j with the n - 1 remaining tests equals the communality of test j if the group of tests contains r statistically independent tests each with a communality of unity. 4. With correlation coefficients corrected for attenuation, when the number of tests increases indefinitely while the rank of the correlational matrix remains unchanged, the communality of a test j equals the square of the multiple correlation of test j with the n - 1 remaining tests. 5. With raw correlation coefficients, it is shown in a special case that the square of the multiple correlation of a test j with the n - 1 remaining tests approaches the communality of test j as a limit when the number of tests increases indefinitely while the rank of the correlational matrix remains the same. This has not yet been proved for the general case. The author wishes to express his appreciation of the encouragement and assistance given him by Dr. L. L. Thurstone.
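In modern notation, Theorem 1 and the corollary to Theorem 2 amount to the following statements for the orthogonal common-factor model (a restatement under those assumptions, not a quotation from the paper):

```latex
x_j = \sum_{k=1}^{r} a_{jk} f_k + u_j, \qquad
h_j^2 = \sum_{k=1}^{r} a_{jk}^2 = R^2_{\,j \cdot f_1 \dots f_r}, \qquad
R^2_{\,j \cdot (\text{other } n-1 \text{ tests})} \le h_j^2 .
```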

11.
This study in parametric test theory deals with the statistics of reliability estimation when scores on two parts of a test follow a binormal distribution with equal (Case 1) or unequal (Case 2) expectations. In each case biased maximum-likelihood estimators of reliability are obtained and converted into unbiased estimators. Sampling distributions are derived. Second moments are obtained and utilized in calculating mean square errors of estimation as a measure of accuracy. A rank order of four estimators is established. There is a uniformly best estimator. Tables of absolute and relative accuracies are provided for various reliability parameters and sample sizes.

12.
An examination of the determinantal equation associated with Rao's canonical factors suggests that Guttman's best lower bound for the number of common factors corresponds to the number of positive canonical correlations when squared multiple correlations are used as the initial estimates of communality. When these initial communality estimates are used, solving Rao's determinantal equation (at the first stage) permits expressing several matrices as functions of factors that differ only in the scale of their columns; these matrices include the correlation matrix with units in the diagonal, the correlation matrix with squared multiple correlations as communality estimates, Guttman's image covariance matrix, and Guttman's anti-image covariance matrix. Further, the factor scores associated with these factors can be shown to be either identical or simply related by a scale change. Implications for practice are discussed, and a computing scheme which would lead to an exhaustive analysis of the data with several optional outputs is outlined.
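Guttman's strongest lower bound mentioned above can be computed directly: replace the diagonal of R by the squared multiple correlations and count the positive eigenvalues of the reduced matrix. The correlation matrix in the sketch is hypothetical.

```python
import numpy as np

# Hypothetical correlation matrix, used only for illustration.
R = np.array([[1.00, 0.62, 0.54, 0.10, 0.08],
              [0.62, 1.00, 0.58, 0.12, 0.09],
              [0.54, 0.58, 1.00, 0.15, 0.11],
              [0.10, 0.12, 0.15, 1.00, 0.47],
              [0.08, 0.09, 0.11, 0.47, 1.00]])

smc = 1 - 1 / np.diag(np.linalg.inv(R))     # squared multiple correlations
R_reduced = R.copy()
np.fill_diagonal(R_reduced, smc)            # R with SMCs as communality estimates

eigenvalues = np.linalg.eigvalsh(R_reduced)
lower_bound = int((eigenvalues > 0).sum())  # Guttman's strongest lower bound
print("eigenvalues:", np.round(eigenvalues, 3))
print("lower bound on the number of common factors:", lower_bound)
```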

13.
An expression is given for weighted least squares estimators of oblique common factors, constrained to have the same covariance matrix as the factors they estimate. It is shown that if, as in exploratory factor analysis, the common factors are obtained by oblique transformation from the Lawley-Rao basis, the constrained estimators are given by the same transformation. Finally, a proof of uniqueness is given. The research reported in this paper was partly supported by Natural Sciences and Engineering Research Council Grant No. A6346.

14.
In an effort to find accurate alternatives to the usual confidence intervals based on normal approximations, this paper compares four methods of generating second‐order accurate confidence intervals for non‐standardized and standardized communalities in exploratory factor analysis under the normality assumption. The methods to generate the intervals employ, respectively, the Cornish–Fisher expansion and the approximate bootstrap confidence (ABC), and the bootstrap‐t and the bias‐corrected and accelerated bootstrap (BCa). The former two are analytical and the latter two are numerical. Explicit expressions of the asymptotic bias and skewness of the communality estimators, used in the analytical methods, are derived. A Monte Carlo experiment reveals that the performance of central intervals based on normal approximations is a consequence of imbalance of miscoverage on the left‐ and right‐hand sides. The second‐order accurate intervals do not require symmetry around the point estimates of the usual intervals and achieve better balance, even when the sample size is not large. The behaviours of the second‐order accurate intervals were similar to each other, particularly for large sample sizes, and no method performed consistently better than the others.
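The four interval methods compared in the article are not reproduced here; the sketch below shows only a plain percentile bootstrap interval for a one-factor communality, with the squared first-principal-component loading of the sample correlation matrix standing in for a maximum-likelihood extraction. The simulated design and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def communality_estimate(x):
    """Crude one-factor communality of the first variable: squared loading on the
    first principal component of the sample correlation matrix (a stand-in for a
    maximum-likelihood factor extraction; it overstates the communality somewhat,
    but only the interval mechanics matter here)."""
    r = np.corrcoef(x, rowvar=False)
    vals, vecs = np.linalg.eigh(r)
    loadings = vecs[:, -1] * np.sqrt(vals[-1])
    return loadings[0] ** 2

# Simulated one-factor data (loading 0.7 on every variable, communality 0.49).
n, p, lam = 300, 5, 0.7
f = rng.normal(size=(n, 1))
x = lam * f + np.sqrt(1 - lam**2) * rng.normal(size=(n, p))

# Plain percentile bootstrap interval (simpler than the bootstrap-t or BCa
# intervals studied in the article, shown here only to fix ideas).
boot = np.array([communality_estimate(x[rng.integers(0, n, size=n)])
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"point estimate {communality_estimate(x):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```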

15.
There is no shortage of recommendations regarding the appropriate sample size to use when conducting a factor analysis. Suggested minimums for sample size include from 3 to 20 times the number of variables and absolute ranges from 100 to over 1,000. For the most part, there is little empirical evidence to support these recommendations. This simulation study addressed minimum sample size requirements for 180 different population conditions that varied in the number of factors, the number of variables per factor, and the level of communality. Congruence coefficients were calculated to assess the agreement between population solutions and sample solutions generated from the various population conditions. Although absolute minimums are not presented, it was found that, in general, minimum sample sizes appear to be smaller for higher levels of communality; minimum sample sizes appear to be smaller for higher ratios of the number of variables to the number of factors; and when the variables-to-factors ratio exceeds 6, the minimum sample size begins to stabilize regardless of the number of factors or the level of communality.
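The congruence coefficient used to compare population and sample solutions is Tucker's phi, the uncentred cosine between two loading vectors; the loading vectors in the sketch are hypothetical.

```python
import numpy as np

def congruence(a, b):
    """Tucker's congruence coefficient between two factor-loading vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum())

population = [0.70, 0.65, 0.60, 0.55, 0.50]   # hypothetical population loadings
sample = [0.74, 0.58, 0.63, 0.49, 0.55]       # hypothetical sample loadings
print(round(congruence(population, sample), 3))   # values near 1 indicate close agreement
```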

16.
Perception of social distributions
Accurate representation of the distribution of social attitudes and behaviors can guide effective social behavior and is often essential for correct inferences. We examined the accuracy of people's beliefs about the distributions of a large number of attitudinal and behavioral dimensions. In two studies we measured actual attitudes and behaviors in a student population, and we assessed beliefs by asking subjects to estimate the distribution of 100 students on these dimensions. We examined the accuracy of subjects' perceptions of the means, standard deviations, and distribution shapes. Subjects showed a number of systematic biases, including overestimation of dispersion and overestimation of the means of behavioral distributions and a false consensus bias, but their overall accuracy was impressive.

17.
The posterior analysis in estimating factor scores in a confirmatory factor analysis model with polytomous, censored or truncated data is investigated in this paper. For the above three types of data, posterior distributions of the factor scores are studied, and the estimators of the factor scores are obtained as the location parameters of the posterior distributions. The accuracy of Bayesian estimates is studied via simulation studies. This research was supported by a Hong Kong UGC grant.

18.
This paper is a presentation of an essential part of the sampling theory of the error variance and the standard error of measurement. An experimental assumption is that several equivalent tests with equal variances are available. These may be either final forms of the same test or obtained by dividing one test into several parts. The simple model of independent and normally distributed errors of measurement with zero mean is employed. No assumption is made about the form of the distributions of true and observed scores. This implies unrestricted freedom in defining the population. First, maximum-likelihood estimators of the error variance and the standard error of measurement are obtained, their sampling distributions given, and their properties investigated. Then unbiased estimators are defined and their distributions derived. The accuracy of estimation is given special consideration from various points of view. Next, rigorous statistical tests are developed to test hypotheses about error variances on the basis of one and two samples. Also the construction of confidence intervals is treated. Finally, Bartlett's test of homogeneity of variances is used to provide a multi-sample test of equality of error variances.
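Under the equivalent-parts model assumed above (part score X_ij = T_i + e_ij for person i on part j, with independent, homoscedastic, zero-mean errors), the within-person mean square gives a simple estimator of the per-part error variance, and the standard error of measurement of the k-part total follows by scaling. This is a standard restatement under those assumptions, not necessarily the maximum-likelihood or unbiased estimators derived in the paper.

```latex
\hat{\sigma}_e^{2} \;=\; \frac{1}{n(k-1)} \sum_{i=1}^{n}\sum_{j=1}^{k}
\bigl(X_{ij}-\bar{X}_{i\cdot}\bigr)^{2},
\qquad
\widehat{\mathrm{SEM}}_{\text{total}} \;=\; \sqrt{k}\,\hat{\sigma}_{e}.
```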

19.
The asymptotic distributions of Brogden's and Lord's modified sample biserial correlation coefficients are derived. The asymptotic variances of these estimators are evaluated for bivariate normal populations and compared to the asymptotic variance of the maximum likelihood estimator. The author would like to thank the referees for several suggestions which improved the presentation of the paper.

20.
This paper examines tensions between two visions of schooling. One stresses social cohesion (i.e., common beliefs, shared activities, and caring relations between members). The other emphasizes strong academic mission (i.e., values and practices that reinforce high standards for student performance). Though not incongruous, numerous organizational studies reveal the potential for social cohesion and communality to be achieved at the expense of academic demand or “press.” To examine their separate and joint effects, measures of academic press and communality are developed from NELS:88 First Follow-up data. Results of hierarchical regression analyses indicate (1) significant links between academic press and student achievement; (2) that academic press has its greatest achievement effect among low-SES schools; (3) that a strong sense of community may have a negative impact on achievement in low-SES schools with weak academic press; and (4) that for low- and middle-SES schools, the greatest achievement effects follow from strong combinations of communality and academic press. These findings highlight an important additional component of the “school as community” model, indicating that for most schools, academic press serves as a key prerequisite for the positive achievement effects of communality. The author wishes to express appreciation to the National Science Foundation for its support under the grant “Improving Mathematics and Science Learning: A School and Classroom Approach” (RED-9255880). The views expressed here are those of the author, and no official endorsement by the National Science Foundation is intended or should be inferred. Correspondence concerning this article should be sent to Roger C. Shouse, The Pennsylvania State University, 315 Rackley Building, University Park, PA 16803, U.S.A.
