Similar Articles
20 similar articles found (search time: 62 ms)
1.
2.
This paper presents briefly the rationale of the tetrachoric correlation coefficient. Pearson's results are outlined and several estimates of the coefficient are given. These estimates are compared with Pearson's expressions to determine the relative accuracy of the various approximations in determining the tetrachoric correlation coefficient.

Preparation of this paper was supported in part by Fellowship 1-F1-MH-24, 324-01, from the National Institute of Mental Health; and in part by the Tri-Ethnic Research Project, Grant 3M-9156 from the National Institute of Mental Health to the Institute of Behavioral Science, University of Colorado. This paper comprises Publication Number 57 of the Institute. The author would like to thank D. E. Bailey for his helpful comments and criticisms.

3.
In calculations of the discriminating-power parameter of the normal ogive model, Bock and Lieberman compared estimates derived from their maximum-likelihood solution with those derived from the heuristic solution. The two sets of estimates were in excellent agreement provided the heuristic solution used accurate tetrachoric correlation coefficients. Three computer methods for the calculation of the tetrachoric correlation were examined for accuracy and speed. The routine by Saunders was identified as an acceptably accurate method for calculating the tetrachoric correlation coefficient.

This research was supported in part by NSF Grant E 1930 to The University of Chicago. The author wishes to thank Dr. David R. Saunders and Dr. Ledyard Tucker for the use of their original materials and Dr. R. Darrell Bock for his many helpful suggestions and his ready counsel throughout the course of this investigation.

4.
Algebraic properties of the normal theory maximum likelihood solution in factor analysis regression are investigated. Two commonly employed measures of the within sample predictive accuracy of the factor analysis regression function are considered: the variance of the regression residuals and the squared correlation coefficient between the criterion variable and the regression function. It is shown that this within sample residual variance and within sample squared correlation may be obtained directly from the factor loading and unique variance estimates, without use of the original observations or the sample covariance matrix.

5.
An approximation to the sampling distribution of Kuder-Richardson reliability formula 20 is derived, using its algebraic equivalent obtained through an items-by-subjects analysis of variance. The theoretical distribution is compared to empirical estimates of the sampling distribution to assess how crucial certain assumptions are. The use of the theoretical distribution for testing hypotheses and deriving confidence intervals is illustrated. A table of equations for approximating 80, 90, and 95 per cent confidence intervals is presented, with N ranging from 40 to 500.
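The KR-20 coefficient that this sampling theory concerns can be computed directly from a persons-by-items matrix of 0/1 scores. A minimal sketch (the use of the population variance, ddof = 0, is an assumption; some treatments divide by n − 1):

```python
# Kuder-Richardson formula 20 from a persons-by-items matrix of 0/1 scores.

def kr20(scores):
    """scores: list of per-person lists of 0/1 item responses."""
    n_items = len(scores[0])
    n_persons = len(scores)
    # Sum of item-level p_j * q_j, where p_j is the proportion passing item j
    pq_sum = 0.0
    for j in range(n_items):
        p = sum(person[j] for person in scores) / n_persons
        pq_sum += p * (1.0 - p)
    # Variance of total scores (population form, ddof = 0)
    totals = [sum(person) for person in scores]
    mean = sum(totals) / n_persons
    var_total = sum((t - mean) ** 2 for t in totals) / n_persons
    return (n_items / (n_items - 1)) * (1.0 - pq_sum / var_total)

data = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(kr20(data))  # → 0.75
```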

6.
Under certain assumptions an expression, in terms of item difficulties and intercorrelations, is derived for the curvilinear correlation of test score on the ability underlying the test, this ability being defined as the common factor of the item tetrachoric intercorrelations corrected for guessing. It is shown that this curvilinear correlation is equal to the square root of the test reliability. Numerical values for these curvilinear correlations are presented for a number of hypothetical tests, defined in terms of their item parameters. These numerical results indicate that the reliability and the curvilinear correlation will be maximized by (1) minimizing the variability of item difficulty and (2) making the level of item difficulty somewhat easier than the halfway point between a chance percentage of correct answers and 100 per cent correct answers.

7.
A method of estimating the product moment correlation from the polychoric series is developed. This method is shown to be a generalization of the method which uses the tetrachoric series to obtain the tetrachoric correlation. Although this new method involves more computational labor, it is shown to be superior to older methods for data grouped into a small number of classes.

8.
A dilemma was created for factor analysts by Ferguson (Psychometrika, 1941, 6, 323–329) when he demonstrated that test items or sub-tests of varying difficulty will yield a correlation matrix of rank greater than 1, even though the material from which the items or sub-tests are drawn is homogeneous, although homogeneity of such material had been defined operationally by factor analysts as having a correlation matrix of rank 1. This dilemma has been resolved as a case of ambiguity, which lay in (1) failure to specify whether homogeneity was to apply to content, difficulty, or both, and (2) failure to state explicitly the kind of correlation to be used in obtaining the matrix. It is demonstrated that (1) if the material is homogeneous in both content and difficulty, a matrix of rank 1 is obtained regardless of the coefficient used; but (2) if content is homogeneous but difficulty is not, the homogeneity of the content can be demonstrated only by using the tetrachoric correlation coefficient in deriving the matrix; and that the use of the phi coefficient (Pearsonian r) will disclose only the nonhomogeneity of the difficulty and lead to a series of constant error factors as contrasted with content factors. Since varying difficulty of items (and possibly of sub-tests) is desirable as well as practically unavoidable, it is recommended that all factor analysis problems be carried out with tetrachoric correlations. While no one would want to obtain the constant error factors by factor analysis (difficulty being more easily obtained by counting passes), their importance for test construction is pointed out.

9.
Hamilton, M. (1948). Psychometrika, 13(4), 259–267.
This article offers a new nomogram for the tetrachoric correlation coefficient, together with a correcting table. The development of the nomogram is described and directions for its use are included.

10.
Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true abilities and estimates. Due to the bias of IRT ability estimates, the parallel-forms reliability coefficient is not generally equal to the squared-correlation reliability coefficient. It is shown algebraically that the parallel-forms reliability coefficient is expected to be greater than the squared-correlation reliability coefficient, but the difference would be negligible in a practical sense.
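For the unbiased classical baseline, the two definitions coincide: with an unbiased estimate X = T + E, corr(X₁, X₂) equals corr(T, X)². A quick simulation sketch of that baseline (the N(0, 1) abilities and unit error variance are illustrative assumptions, not the paper's IRT setting, where estimator bias breaks the equality):

```python
import math
import random

random.seed(1)

def corr(x, y):
    """Product-moment correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

n = 20000
true_ability = [random.gauss(0, 1) for _ in range(n)]
form1 = [t + random.gauss(0, 1) for t in true_ability]  # parallel form 1
form2 = [t + random.gauss(0, 1) for t in true_ability]  # parallel form 2

parallel_forms = corr(form1, form2)            # definition (a)
squared_corr = corr(true_ability, form1) ** 2  # definition (b)
print(parallel_forms, squared_corr)            # both near 0.5 here
```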

11.
Monte Carlo simulations were conducted to compare the performance of the traditional (Fisher, 1954) and mean (Hunter & Schmidt, 1990) estimators of the sampling variance of correlations in meta-analysis. The mean estimator differs from the traditional estimator in that it uses the mean observed correlation, averaged across studies, in the sampling variance formula. The simulations investigated the homogeneous case (i.e., no true correlation variance across studies) and the heterogeneous case (i.e., true correlation variance across studies). Results reveal that, compared to the traditional estimator, the mean estimator provides less negatively biased estimates of sampling variance in the homogeneous and heterogeneous cases and more positively biased estimates in the heterogeneous case. Thus, results support the use of the mean estimator unless strong, theory-based hypotheses regarding moderating effects exist.
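The two estimators differ only in which correlation enters the large-sample variance formula (1 − r²)² / (n − 1): the traditional estimator plugs in each study's observed rᵢ, the mean estimator plugs in r̄. A sketch of both, with illustrative numbers:

```python
# Traditional (per-study r) vs. mean (r-bar) estimators of the sampling
# variance of a correlation, using Var(r) ~ (1 - r^2)^2 / (n - 1).

def sampling_variances(rs, ns):
    """rs: observed correlations; ns: sample sizes, one per study."""
    r_bar = sum(rs) / len(rs)
    traditional = [(1 - r ** 2) ** 2 / (n - 1) for r, n in zip(rs, ns)]
    mean_based = [(1 - r_bar ** 2) ** 2 / (n - 1) for n in ns]
    return traditional, mean_based

rs = [0.2, 0.3, 0.4]
ns = [101, 101, 101]
trad, mean_est = sampling_variances(rs, ns)
print(trad)      # varies with each study's observed r
print(mean_est)  # identical across studies here, since only r-bar enters
```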

12.
A study is made of the extent to which correlations between items and between tests are affected by the difficulties of the items involved and by chance success through guessing. The Pearsonian product-moment coefficient does not necessarily give a correct indication of the relation between items or sets of items, since it tends to decrease as the items or tests become less similar in difficulty. It is suggested that the tetrachoric correlation coefficient can properly be used for estimating the correlation between the continua underlying items or sets of items even though they differ in difficulty, and a method for correcting a 2 × 2 table for the effect of chance is proposed.

The opinions expressed in this article are the private ones of the writer and are not to be construed as official or reflecting the views of the Navy Department or the naval service at large. The writer is indebted to Lt. C. L. Vaughn, H (S) USNR, for critical comments on this paper.
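The product-moment coefficient on dichotomous items is the phi coefficient; a minimal sketch of its computation from a 2 × 2 table (the cell labels are the usual convention, with a and d the concordant cells):

```python
import math

# Phi coefficient (product-moment correlation on 0/1 data) for a 2x2 table:
#                  passed item 2   failed item 2
#  passed item 1        a               b
#  failed item 1        c               d

def phi(a, b, c, d):
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

print(phi(40, 10, 10, 40))  # → 0.6
```

For this table the tetrachoric estimate is noticeably larger, which is the gap between the two coefficients that the paper examines.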

13.
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.

14.
The sampling properties of four item discrimination indices (biserial r, Cook's index B, the U–L 27 per cent index, and Delta P) were investigated when small samples drawn from actual test data rather than constructed data were employed. The empirical results indicated that the mean index values approximated the population values and that values of the standard deviations computed from large-sample formulas were good approximations to the standard deviations of the observed distributions based on samples of size 120 or less. Goodness-of-fit tests comparing the observed distributions with the corresponding distribution of the product-moment correlation coefficient based upon a bivariate normal population indicated that this correlational model was inappropriate for the data. The lack of adequate mathematical models for the sampling distributions of item discrimination indices suggests that one should avoid indices whose only real reason for existence was the simplification of computational procedures.

The research reported herein was performed pursuant to a contract (OE-2-10-071) with the United States Office of Education, Department of Health, Education and Welfare.
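Of the four indices, the U–L 27 per cent index is the simplest: the proportion passing the item among the top 27% of total scorers minus the proportion among the bottom 27%. A sketch; the tie-handling at the cut points (plain truncation after rounding) is an assumption:

```python
# U-L 27 per cent discrimination index for one item.

def ul27(item, totals):
    """item: 0/1 responses to one item; totals: total test scores, aligned."""
    order = sorted(range(len(totals)), key=lambda i: totals[i])  # ascending
    k = max(1, int(round(0.27 * len(totals))))
    lower = [item[i] for i in order[:k]]   # bottom 27% of total scorers
    upper = [item[i] for i in order[-k:]]  # top 27% of total scorers
    return sum(upper) / k - sum(lower) / k

item = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]     # responses to one item
totals = [10, 9, 9, 8, 7, 6, 5, 4, 3, 2]  # total test scores
print(ul27(item, totals))  # → 1.0 (item perfectly separates high/low groups)
```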

15.
A two-facet measurement model with broad application in the behavioral sciences is identified, and its coefficient of generalizability (CG) is examined. A normalizing transformation is proposed, and an asymptotic variance expression is derived. Three other multifaceted measurement models and CGs are identified, and variance expressions are presented. Next, an empirical investigation of the procedures follows, and it is shown that, in most cases, Type I error control in inferential applications is precise, and that the estimates are relatively efficient compared with the correlation coefficient. Implications for further research and for practice are noted. In an Appendix, four additional models, CGs, and variance expressions are presented.

The research reported herein formed part of a doctoral dissertation conducted by Marsha Schroeder (Schroeder, 1986), under the direction of Ralph Hakstian, at the University of British Columbia. We acknowledge with thanks the contributions to this research of Todd Rogers and James Steiger. We are also very indebted to an anonymous reviewer who provided some important clarifications in connection with two of the models considered. Some support for this research was provided by a grant to Ralph Hakstian from the Natural Sciences and Engineering Research Council of Canada.

16.
Although the Bock–Aitkin likelihood-based estimation method for factor analysis of dichotomous item response data has important advantages over classical analysis of item tetrachoric correlations, a serious limitation of the method is its reliance on fixed-point Gauss-Hermite (G-H) quadrature in the solution of the likelihood equations and likelihood-ratio tests. When the number of latent dimensions is large, computational considerations require that the number of quadrature points per dimension be few. But with large numbers of items, the dispersion of the likelihood, given the response pattern, becomes so small that the likelihood cannot be accurately evaluated with the sparse fixed points in the latent space. In this paper, we demonstrate that substantial improvement in accuracy can be obtained by adapting the quadrature points to the location and dispersion of the likelihood surfaces corresponding to each distinct pattern in the data. In particular, we show that adaptive G-H quadrature, combined with mean and covariance adjustments at each iteration of an EM algorithm, produces an accurate fast-converging solution with as few as two points per dimension. Evaluations of this method with simulated data are shown to yield accurate recovery of the generating factor loadings for models of up to eight dimensions. Unlike an earlier application of adaptive Gibbs sampling to this problem by Meng and Schilling, the simulations also confirm the validity of the present method in calculating likelihood-ratio chi-square statistics for determining the number of factors required in the model. Finally, we apply the method to a sample of real data from a test of teacher qualifications.
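The adaptation idea can be seen in one dimension: recentering and rescaling the G-H nodes to a N(μ, σ²) weight turns the two-point rule into ½f(μ − σ) + ½f(μ + σ) (nodes μ + √2·σ·xᵢ with xᵢ = ±1/√2, equal weights), which is exact for polynomials up to degree 3. A sketch of this adaptation only, not the full Bock–Aitkin EM machinery:

```python
# Two-point Gauss-Hermite quadrature adapted to a N(mu, sigma^2) weight:
# integral of f(t) * N(t; mu, sigma^2) dt  ≈  0.5*f(mu - sigma) + 0.5*f(mu + sigma)

def adaptive_gh2(f, mu, sigma):
    return 0.5 * f(mu - sigma) + 0.5 * f(mu + sigma)

# E[theta^2] under N(3, 2^2) is mu^2 + sigma^2 = 13; the adapted two-point
# rule recovers it exactly, because the rule is exact through degree 3.
print(adaptive_gh2(lambda t: t * t, 3.0, 2.0))  # → 13.0
```

Fixed points centered at the prior rather than at each pattern's posterior lose exactly this kind of accuracy when the posterior is narrow and far from the origin.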

17.
A method of estimating the parameters of the normal ogive model for dichotomously scored item-responses by maximum likelihood is demonstrated. Although the procedure requires numerical integration in order to evaluate the likelihood equations, a computer implemented Newton-Raphson solution is shown to be straightforward in other respects. Empirical tests of the procedure show that the resulting estimates are very similar to those based on a conventional analysis of item difficulties and first factor loadings obtained from the matrix of tetrachoric correlation coefficients. Problems of testing the fit of the model, and of obtaining invariant parameters are discussed.

Research reported in this paper was supported by NSF Grant 1025 to the University of Chicago.

18.

Two extreme approximations, namely the Voigt- and Reuss-type approximations, have been used to estimate the effective electrostrictive coefficients of isotropic or anisotropic (as in the dc electric-field-biased piezoelectric mode) relaxor-based ferroelectric ceramics. It is shown that, for a dense ceramic with cubic crystallites, both simple approximations give very similar results and can be used for such estimates. However, for common ceramics containing pores, the Voigt and Reuss approximations yield only extreme upper and lower bounds respectively, and a more appropriate approach is needed.

19.
A table is developed and presented to facilitate the computation of the Pearson Q3 (cosine method) estimate of the tetrachoric correlation coefficient. Data are presented concerning the accuracy of Q3 as an estimate of the tetrachoric correlation coefficient, and it is compared with the results obtainable from the Chesire, Saffir, and Thurstone tables for the same four-fold frequency tables.

The authors are indebted to Mr. John Scott, Chief of the Test Development Section of the U.S. Civil Service Commission, for his encouragement and to Miss Elaine Ambrifi and Mrs. Elaine Nixon for the large amount of computational work involved in this paper.
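The cosine-method estimate itself is a closed-form function of the four cell frequencies, so a sketch is short (a and d are the concordant cells, b and c the discordant ones):

```python
import math

# Pearson's "cosine method" Q3 estimate of the tetrachoric correlation:
#   r  ≈  cos( pi * sqrt(b*c) / (sqrt(a*d) + sqrt(b*c)) )

def q3(a, b, c, d):
    sad, sbc = math.sqrt(a * d), math.sqrt(b * c)
    return math.cos(math.pi * sbc / (sad + sbc))

print(q3(25, 25, 25, 25))  # ~0.0: no association
print(q3(40, 10, 10, 40))  # ~0.809, versus phi = 0.6 for the same table
```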

20.
Monte Carlo procedures are used to study the sampling distribution of the Hoyt reliability coefficient. Estimates of mean, variance, and skewness are made for the case of the Bower-Trabasso concept identification model. Given the Bower-Trabasso assumptions, the Hoyt coefficient of a particular concept identification experiment is shown to be statistically unlikely.
