Similar Articles
20 similar articles found (search time: 15 ms)
1.
An estimate and an upper-bound estimate for the reliability of a test composed of binary items is derived from the multidimensional latent trait theory proposed by Bock and Aitkin (1981). The estimate derived here is similar to internal consistency estimates (such as coefficient alpha) in that it is a function of the correlations among test items; however, it is not a lower-bound estimate, as all other similar methods are. An upper bound to reliability that is less than unity does not exist in the context of classical test theory. The richer theoretical background provided by Bock and Aitkin's latent trait model has allowed the development of an index (called here) that is always greater than or equal to the reliability coefficient for a test (and is less than or equal to one). The upper-bound estimate of reliability has practical uses, one of which makes use of the greatest lower bound.
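For reference, coefficient alpha, the classical lower-bound estimate this abstract contrasts with, can be computed directly from item scores. A minimal numpy sketch (the function name and the simulated data are ours):

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha: a lower-bound internal-consistency estimate.

    scores: (n_subjects, n_items) array of item scores.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Four roughly parallel items driven by one latent trait.
rng = np.random.default_rng(0)
trait = rng.normal(size=500)
items = trait[:, None] + 0.5 * rng.normal(size=(500, 4))
alpha = cronbach_alpha(items)
```

With strongly correlated items like these, alpha lands close to 1; with independent items it drops toward 0.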

2.
For measuring the same variables on different occasions, two procedures for canonical analysis with stationary compositing weights are developed. The first, SUMCOV, maximizes the sum of the covariances of the canonical variates subject to norming constraints. The second, COLLIN, maximizes the largest root of the covariances of the canonical variates subject to norming constraints. A characterization theorem establishes a model building approach. Both methods are extended to allow for Cohort Sequential Designs. Finally a numerical illustration utilizing Nesselroade and Baltes data is presented. The authors wish to thank John Nesselroade for permitting us to use the data whose analysis we present.

3.
Huynh Huynh, Psychometrika, 1981, 46(3): 295-305
Procedures are described for the analysis of profiles of means in repeated measures designs under order restriction for patterns of mean change. The exact likelihood ratio is derived for the case of two groups of subjects, and a computationally simple alternative to the exact likelihood ratio test is provided for designs involving more than two groups. Tables of critical values are provided for the case of simple-order alternatives. The author wishes to thank Joseph C. Saunders for his editorial assistance.

4.
A distinction is drawn between redundancy measurement and the measurement of multivariate association for two sets of variables. Several measures of multivariate association between two sets of variables are examined. It is shown that all of these measures are generalizations of the (univariate) squared-multiple correlation; all are functions of the canonical correlations, and all are invariant under linear transformations of the original sets of variables. It is further shown that the measures can be considered to be symmetric and are strictly ordered for any two sets of observed variables. It is suggested that measures of multivariate relationship may be used to generalize the concept of test reliability to the case of vector random variables.

5.
Although the Bock–Aitkin likelihood-based estimation method for factor analysis of dichotomous item response data has important advantages over classical analysis of item tetrachoric correlations, a serious limitation of the method is its reliance on fixed-point Gauss-Hermite (G-H) quadrature in the solution of the likelihood equations and likelihood-ratio tests. When the number of latent dimensions is large, computational considerations require that the number of quadrature points per dimension be few. But with large numbers of items, the dispersion of the likelihood, given the response pattern, becomes so small that the likelihood cannot be accurately evaluated with the sparse fixed points in the latent space. In this paper, we demonstrate that substantial improvement in accuracy can be obtained by adapting the quadrature points to the location and dispersion of the likelihood surfaces corresponding to each distinct pattern in the data. In particular, we show that adaptive G-H quadrature, combined with mean and covariance adjustments at each iteration of an EM algorithm, produces an accurate fast-converging solution with as few as two points per dimension. Evaluations of this method with simulated data are shown to yield accurate recovery of the generating factor loadings for models of up to eight dimensions. Unlike an earlier application of adaptive Gibbs sampling to this problem by Meng and Schilling, the simulations also confirm the validity of the present method in calculating likelihood-ratio chi-square statistics for determining the number of factors required in the model. Finally, we apply the method to a sample of real data from a test of teacher qualifications.
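The gain from adapting quadrature points can be seen already in one dimension: shifting and scaling the Gauss-Hermite nodes to the location and spread of a narrow likelihood recovers its integral with only two points, while the same two fixed points centered at zero miss it entirely. A hedged numpy sketch of this idea (function and variable names are ours, not the authors'):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # nodes for weight exp(-t^2/2)

def adaptive_gh(log_f, mu, sigma, n_points=2):
    """Integrate f over the real line with G-H nodes adapted to an
    integrand of approximate location mu and scale sigma."""
    t, w = hermegauss(n_points)
    x = mu + sigma * t                        # shifted and scaled nodes
    vals = np.exp(log_f(x) + t ** 2 / 2.0)    # divide out the e^{-t^2/2} weight
    return sigma * np.sum(w * vals)

# A "likelihood" concentrated at 5 with spread 0.1; its integral is exactly 1.
log_f = lambda x: -0.5 * ((x - 5.0) / 0.1) ** 2 - np.log(0.1 * np.sqrt(2 * np.pi))
adapted = adaptive_gh(log_f, mu=5.0, sigma=0.1)   # accurate with 2 points
fixed = adaptive_gh(log_f, mu=0.0, sigma=1.0)     # same 2 points, unadapted
```

Because the integrand here is exactly Gaussian, the adapted two-point rule is essentially exact, whereas the unadapted nodes at ±1 see only negligible likelihood mass.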

6.
Algebraic properties of the normal theory maximum likelihood solution in factor analysis regression are investigated. Two commonly employed measures of the within sample predictive accuracy of the factor analysis regression function are considered: the variance of the regression residuals and the squared correlation coefficient between the criterion variable and the regression function. It is shown that this within sample residual variance and within sample squared correlation may be obtained directly from the factor loading and unique variance estimates, without use of the original observations or the sample covariance matrix.
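The claim can be made concrete: once loadings and uniquenesses are in hand, the model-implied covariance matrix determines the regression and both accuracy measures, so no raw observations are needed. A sketch for a one-factor model with standardized variables (function name and example values are ours):

```python
import numpy as np

def fa_regression_accuracy(loadings, uniquenesses, criterion=0):
    """Residual variance and squared correlation for regressing one
    variable on the rest, from the implied covariance L L' + Psi."""
    L = np.atleast_2d(loadings).reshape(len(uniquenesses), -1)
    Sigma = L @ L.T + np.diag(uniquenesses)        # model-implied covariance
    idx = [i for i in range(Sigma.shape[0]) if i != criterion]
    syy = Sigma[criterion, criterion]
    sxy = Sigma[np.ix_(idx, [criterion])]
    Sxx = Sigma[np.ix_(idx, idx)]
    explained = float(sxy.T @ np.linalg.solve(Sxx, sxy))
    return syy - explained, explained / syy        # residual variance, R^2

# Loadings .8/.7/.6 with uniquenesses 1 - loading^2 (unit variances).
resid_var, r2 = fa_regression_accuracy([0.8, 0.7, 0.6], [0.36, 0.51, 0.64])
```

Since the criterion has unit variance here, the residual variance and R^2 sum to one, exactly as the decomposition requires.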

7.
To date, there is a lack of satisfactory inferential techniques for the analysis of multivariate data in factorial designs, when only minimal assumptions on the data can be made. Presently available methods are limited to very particular study designs or assume either multivariate normality or equal covariance matrices across groups, or they do not allow for an assessment of the interaction effects across within-subjects and between-subjects variables. We propose and methodologically validate a parametric bootstrap approach that does not suffer from any of the above limitations, and thus provides a rather general and comprehensive methodological route to inference for multivariate and repeated measures data. As an example application, we consider data from two different Alzheimer’s disease (AD) examination modalities that may be used for precise and early diagnosis, namely, single-photon emission computed tomography (SPECT) and electroencephalogram (EEG). These data violate the assumptions of classical multivariate methods, and indeed classical methods would not have yielded the same conclusions with regard to some of the factors involved.
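The core parametric-bootstrap idea, fitting the model under the null and resampling from the fitted model to obtain a reference distribution, can be sketched in the simplest two-group univariate case. This is only a toy version under normality with unequal variances, not the authors' multivariate procedure; all names are ours:

```python
import numpy as np

def parametric_bootstrap_p(x, y, n_boot=2000, seed=1):
    """Bootstrap p-value for a mean difference with unequal variances.

    Resamples from normals fitted under the null of a common mean,
    keeping each group's own variance."""
    rng = np.random.default_rng(seed)
    observed = abs(x.mean() - y.mean())
    mu0 = np.concatenate([x, y]).mean()      # common mean fitted under the null
    sx, sy = x.std(ddof=1), y.std(ddof=1)    # group-specific scales kept
    exceed = 0
    for _ in range(n_boot):
        xb = rng.normal(mu0, sx, size=x.size)
        yb = rng.normal(mu0, sy, size=y.size)
        exceed += abs(xb.mean() - yb.mean()) >= observed
    return (exceed + 1) / (n_boot + 1)

rng = np.random.default_rng(0)
p = parametric_bootstrap_p(rng.normal(0, 1, 60), rng.normal(2, 2, 60))
```

With a true mean difference of 2 the bootstrap p-value comes out very small, as it should.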

8.
The association structure between manifest variables arising from the single-factor model is investigated using partial correlations. The additional insights to the practitioner provided by partial correlations for detecting a single-factor model are discussed. The parameter space for the partial correlations is presented, as are the patterns of signs in a matrix containing the partial correlations that are not compatible with a single-factor model.
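Partial correlations given all remaining variables can be read off the inverse of the correlation matrix, which makes sign-pattern checks of this kind easy to carry out in practice. A sketch using a single-factor correlation matrix with positive loadings (all names are ours):

```python
import numpy as np

def partial_correlations(R):
    """Pairwise partial correlations given all other variables,
    computed from the inverse of the correlation matrix R."""
    P = np.linalg.inv(R)
    d = np.sqrt(np.diag(P))
    pcor = -P / np.outer(d, d)     # negate and rescale the precision matrix
    np.fill_diagonal(pcor, 1.0)
    return pcor

# Correlation matrix implied by one factor with positive loadings.
lam = np.array([0.8, 0.7, 0.6, 0.5])
R = np.outer(lam, lam) + np.diag(1.0 - lam ** 2)
pc = partial_correlations(R)
```

For this positive-loading single-factor model every off-diagonal partial correlation is positive; a matrix of partials that cannot be brought to such a pattern is the kind of evidence against a single factor the abstract describes.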

9.
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

10.
Constrained canonical correlation
This paper explores some of the problems associated with traditional canonical correlation. A response surface methodology is developed to examine the stability of the derived linear functions, where one wishes to investigate how much the coefficients can change and still be in an ε-neighborhood of the globally optimum canonical correlation value. In addition, a discrete (or constrained) canonical correlation method is formulated where the derived coefficients of these linear functions are constrained to be in some small set, e.g., {1, 0, –1}, to aid in the interpretation of the results. An example concerning the psychographic responses of Wharton MBA students of the University of Pennsylvania regarding driving preferences and life-style considerations is provided. Wayne S. DeSarbo, Robert Jausman, Shen Lin, and Wesley Thompson are all Members of Technical Staff at Bell Laboratories. We wish to express our gratitude to the editor and reviewers of this paper for their insightful remarks.
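Restricting the weights to a small set such as {1, 0, -1} makes exhaustive search feasible when each set has only a handful of variables. A toy brute-force sketch of the constrained idea (the paper's actual method is more sophisticated; all names here are ours):

```python
import itertools
import numpy as np

def constrained_cancorr(X, Y, values=(-1, 0, 1)):
    """Best |correlation| between composites X a and Y b with the
    entries of a and b restricted to `values` (exhaustive search)."""
    best_r, best_a, best_b = 0.0, None, None
    for a in itertools.product(values, repeat=X.shape[1]):
        if not any(a):                       # skip the all-zero weight vector
            continue
        u = X @ np.array(a, dtype=float)
        for b in itertools.product(values, repeat=Y.shape[1]):
            if not any(b):
                continue
            v = Y @ np.array(b, dtype=float)
            r = abs(np.corrcoef(u, v)[0, 1])
            if r > best_r:
                best_r, best_a, best_b = r, a, b
    return best_r, best_a, best_b

rng = np.random.default_rng(0)
z = rng.normal(size=200)                     # shared signal across the two sets
X = np.column_stack([z + rng.normal(size=200), rng.normal(size=200)])
Y = np.column_stack([z + rng.normal(size=200), rng.normal(size=200)])
r, a, b = constrained_cancorr(X, Y)
```

The integer weights trade a little correlation for direct interpretability; by construction the search can never do worse than any single-variable pairing.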

11.
When multiple items are clustered around a reading passage, the local independence assumption in item response theory is often violated. The amount of information contained in an item cluster is usually overestimated if violation of local independence is ignored and items are treated as locally independent when in fact they are not. In this article we provide a general method that adjusts for the inflation of information associated with a test containing item clusters. A computational scheme is presented for the evaluation of the adjustment factor for clusters in the restrictive case of two items per cluster, and in the general case of more than two items per cluster. The methodology was motivated by a study of the NAEP Reading Assessment. We present a simulation study along with an analysis of a NAEP data set. The research was supported under the National Assessment of Educational Progress (Grant No. R999G30002) as administered by the Office of Educational Research and Improvement, U.S. Department of Education. This work was started when the author was at the Division of Statistics and Psychometrics at the Educational Testing Service. The author thanks Juliet Shaffer, Bob Mislevy, Eric Bradlow, three reviewers and an associate editor for their helpful comments on the paper.

12.
Consider a repeated measurements study in which a single observation is taken at each of t (t > 2) equally spaced time points for each of n subjects. This paper uses a result of Box to find the appropriate multiplicative correction term for the degrees of freedom in the analysis of variance table for the case when there is equal variability per time point, and when the correlation between observations u time units apart is equal to ρ^u. For large values of t, the multiplicative correction has a lower bound approximately equal to 2.5/(t – 1). Dr. Fleiss is also with the Biometrics Research Unit of the New York State Psychiatric Institute. This work was supported in part by grant DE 04068 from the National Institute of Dental Research.

13.
The validity conditions for univariate repeated measures designs are described. Attention is focused on the sphericity requirement. For a v degree of freedom family of comparisons among the repeated measures, sphericity exists when all contrasts contained in the v-dimensional space have equal variances. Under nonsphericity, upper and lower bounds on the test size and power of a priori repeated measures F tests are derived. The effects of nonsphericity are illustrated by means of a set of charts. The charts reveal that small departures from sphericity (.97 ≤ ε < 1.00) can seriously affect test size and power. It is recommended that separate rather than pooled error term procedures be routinely used to test a priori hypotheses. Appreciation is extended to Milton Parnes for his insightful assistance.

14.
A. P. Grieve, Psychometrika, 1984, 49(2): 257-267
The locally best invariant test statistic for testing sphericity of normal distributions is shown to be a simple function of the Box/Geisser-Greenhouse degrees of freedom correction factor in a repeated measures design. Because of this relationship it provides a more intuitively appealing test of the necessary and sufficient conditions for valid F-tests in repeated measures analysis of variance than the likelihood ratio test. The properties of the two tests are compared and tables of the critical values of the Box/Geisser-Greenhouse correction factor are given.
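The Box/Geisser-Greenhouse correction factor itself is a simple function of the covariance matrix of the repeated measures: epsilon equals 1 under sphericity and is bounded below by 1/(t - 1). A numpy sketch (the function name is ours):

```python
import numpy as np

def gg_epsilon(S):
    """Box / Geisser-Greenhouse epsilon for a t x t covariance matrix S."""
    t = S.shape[0]
    H = np.eye(t) - np.ones((t, t)) / t   # centering matrix
    Sc = H @ S @ H                        # double-centered covariance
    return np.trace(Sc) ** 2 / ((t - 1) * np.trace(Sc @ Sc))

spherical = gg_epsilon(np.eye(4))                        # sphericity holds
nonspherical = gg_epsilon(np.diag([1.0, 2.0, 4.0, 8.0])) # unequal variances
```

Multiplying both numerator and denominator degrees of freedom of the repeated measures F test by this epsilon gives the corrected test.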

15.
The Buss-Durkee Hostility Inventory is a widely employed multidimensional measure of aggression. Two studies, each involving the administration of both two-choice and seven-choice response format versions of the instrument, were conducted to determine if (1) the theoretical scales could be reproduced empirically, (2) the change in response format either changes or improves the structure, and (3) the structure of either response format replicates across administrations. The two-choice version provided some support for the present theoretical scoring but was not very stable across administrations. The seven-choice version resulted in a structure that was different from both the two-choice structure and theoretical scoring but was more stable across administrations.

16.
Repeated measures designs in psychology have traditionally been analyzed by the univariate mixed model approach, in which the repeated measures effect is tested against an error term based on the subject by treatment interaction. This paper considers the extension of this analysis to designs in which the individual repeated measures are multivariate. Sufficient conditions for a valid multivariate mixed model analysis are given, and a test is described to determine whether or not given data satisfy these conditions.

17.
A method is presented for constructing a covariance matrix Σ0* that is the sum of a matrix Σ(γ0) that satisfies a specified model and a perturbation matrix E, such that Σ0* = Σ(γ0) + E. The perturbation matrix is chosen in such a manner that a class of discrepancy functions F(Σ0*, Σ(γ0)), which includes normal theory maximum likelihood as a special case, has the prespecified parameter value γ0 as minimizer and a prespecified minimum δ. A matrix constructed in this way seems particularly valuable for Monte Carlo experiments as the covariance matrix for a population in which the model does not hold exactly. This may be a more realistic conceptualization in many instances. An example is presented in which this procedure is employed to generate a covariance matrix among nonnormal, ordered categorical variables which is then used to study the performance of a factor analysis estimator. We are grateful to Alexander Shapiro for suggesting the proof of the solution in section 2.

18.
A chain of lower-bound inequalities leading to the greatest lower bound to reliability is established for the internal consistency of a composite of unit-weighted components. The chain includes the maximum split-half coefficient, the lowest coefficient consistent with nonimaginary common factors, and the lowest coefficient consistent with nonimaginary common and unique factors. Optimization theory is utilized to determine the conditions that are requisite for the inequalities. Convergence proofs demonstrate that the coefficients can be attained. Rapid algorithms obtain estimates of the coefficients with sample data. The theory yields methods for splitting items into maximally similar sets and for exploratory factor analysis based on a theoretical solution to the communality problem.

19.
Equivalence tests are an alternative to traditional difference-based tests for demonstrating a lack of association between two variables. While there are several recent studies investigating equivalence tests for comparing means, little research has been conducted on equivalence methods for evaluating the equivalence or similarity of two correlation coefficients or two regression coefficients. The current project proposes novel tests for evaluating the equivalence of two regression or correlation coefficients, derived from the two one-sided tests (TOST) method (Schuirmann, 1987, J. Pharmacokinet. Biopharm., 15, 657) and an equivalence test by Anderson and Hauck (1983, Stat. Commun., 12, 2663). A simulation study was used to evaluate the performance of these tests and compare them with the common, yet inappropriate, method of assessing equivalence using non-rejection of the null hypothesis in difference-based tests. Results demonstrate that equivalence tests have more accurate probabilities of declaring equivalence than difference-based tests. However, equivalence tests require large sample sizes to ensure adequate power. We recommend the Anderson-Hauck equivalence test over the TOST method for comparing correlation or regression coefficients.
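A TOST-style equivalence test for two independent correlations can be built from Fisher's z transform: equivalence is declared only if both one-sided tests reject. This is a generic sketch of the TOST logic, not the exact procedure proposed in the paper; function names and the margin are ours:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def tost_two_correlations(r1, n1, r2, n2, margin=0.3):
    """TOST p-value for equivalence of two independent correlations.

    The margin is on Fisher's z scale; a small p-value means both
    one-sided nulls (difference <= -margin, difference >= margin)
    are rejected, i.e. equivalence is declared."""
    d = np.arctanh(r1) - np.arctanh(r2)           # difference of Fisher z's
    se = sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    p_lower = 1.0 - norm_cdf((d + margin) / se)   # H0: d <= -margin
    p_upper = norm_cdf((d - margin) / se)         # H0: d >=  margin
    return max(p_lower, p_upper)

similar = tost_two_correlations(0.52, 200, 0.48, 200)
different = tost_two_correlations(0.20, 200, 0.70, 200)
```

Note how the roles reverse relative to a difference test: near-identical correlations yield a small TOST p-value (equivalence), while clearly different ones yield a large p-value.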

20.
Multilevel covariance structure models have become increasingly popular in the psychometric literature in the past few years to account for population heterogeneity and complex study designs. We develop practical simulation-based procedures for Bayesian inference of multilevel binary factor analysis models. We illustrate how Markov chain Monte Carlo procedures such as Gibbs sampling and Metropolis-Hastings methods can be used to perform Bayesian inference, model checking and model comparison without the need for multidimensional numerical integration. We illustrate the proposed estimation methods using three simulation studies and an application involving students' achievement results in different areas of mathematics. The authors thank Ian Westbury, University of Illinois at Urbana-Champaign, for kindly providing the SIMS data for the application.
