Similar Literature
20 similar records found.
1.
The performance of five simple multiple-imputation methods for dealing with missing data was compared. In addition, random imputation and multivariate normal imputation were used as the lower and upper benchmarks, respectively. Test data were simulated and item scores were deleted such that they were missing completely at random, missing at random, or not missing at random. Cronbach's alpha, Loevinger's scalability coefficient H, and the item cluster solution from Mokken scale analysis of the complete data were compared with the corresponding results based on the data including imputed scores. Three of the multiple-imputation methods (two-way with normally distributed errors, corrected item-mean substitution with normally distributed errors, and response function) produced discrepancies in Cronbach's alpha, Loevinger's H, and the Mokken cluster solution that were smaller than the discrepancies produced by the upper benchmark, multivariate normal imputation.
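For readers who want to see what the two-way method above looks like in practice, here is a minimal sketch (our illustration, not the authors' code) of two-way imputation with normally distributed errors, assuming the usual formulation in which an imputed score is the person mean plus the item mean minus the overall mean, plus a normal error whose standard deviation is estimated from the observed residuals:

```python
import numpy as np

def two_way_imputation(X, rng=None):
    """Two-way imputation with normally distributed errors (a sketch).

    X : 2-D float array of item scores with np.nan for missing entries.
    Returns a copy of X with missing entries imputed.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(X, dtype=float)
    obs = ~np.isnan(X)

    person_mean = np.nanmean(X, axis=1, keepdims=True)
    item_mean = np.nanmean(X, axis=0, keepdims=True)
    overall_mean = np.nanmean(X)

    # Two-way prediction for every cell: person mean + item mean - overall mean.
    pred = person_mean + item_mean - overall_mean

    # Error SD estimated from the residuals of the observed scores.
    resid = (X - pred)[obs]
    sigma = resid.std(ddof=1)

    X_imp = X.copy()
    miss = ~obs
    X_imp[miss] = pred[miss] + rng.normal(0.0, sigma, size=miss.sum())
    return X_imp

# Example: impute m = 5 data sets and pool Cronbach's alpha across them,
# as in multiple imputation.
rng = np.random.default_rng(1)
X = rng.integers(0, 5, size=(100, 10)).astype(float)
X[rng.random(X.shape) < 0.1] = np.nan          # 10% MCAR missingness
alphas = []
for _ in range(5):
    Xc = two_way_imputation(X, rng)
    item_var = Xc.var(axis=0, ddof=1)
    total_var = Xc.sum(axis=1).var(ddof=1)
    J = Xc.shape[1]
    alphas.append(J / (J - 1) * (1 - item_var.sum() / total_var))
print("pooled Cronbach's alpha:", np.mean(alphas))
```

In practice the imputed values would also be rounded or clipped to the admissible score range before analysis.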

2.
Missing data, such as item responses in multilevel data, are ubiquitous in educational research settings. Researchers in the item response theory (IRT) context have shown that ignoring such missing data can create problems in the estimation of the IRT model parameters. Consequently, several imputation methods for dealing with missing item data have been proposed and shown to be effective when applied with traditional IRT models. Additionally, a nonimputation direct likelihood analysis has been shown to be an effective tool for handling missing observations in clustered data settings. This study investigates the performance of six simple imputation methods, which have been found to be useful in other IRT contexts, versus a direct likelihood analysis, in multilevel data from educational settings. Multilevel item response data were simulated on the basis of two empirical data sets, and some of the item scores were deleted such that they were missing completely at random or missing at random. An explanatory IRT model was used to model the complete, incomplete, and imputed data sets. We show that direct likelihood analysis of the incomplete data sets produced unbiased parameter estimates comparable to those from a complete-data analysis. Multiple-imputation versions of the two-way mean and corrected item-mean substitution methods displayed varying degrees of effectiveness in imputing data that could in turn produce unbiased parameter estimates. The simple random imputation, adjusted random imputation, item-mean substitution, and regression imputation methods appeared to be less effective in imputing missing item scores in multilevel data settings.

3.
Multiple imputation under a two-way model with error is a simple and effective method that has been used to handle missing item scores in unidimensional test and questionnaire data. Extensions of this method to multidimensional data are proposed. A simulation study investigates whether these extensions produce biased estimates of important statistics in multidimensional data, and compares them with the lower benchmark, listwise deletion, and with two-way imputation with error and multivariate normal imputation. The new methods produce smaller bias in several psychometrically interesting statistics than the existing two-way-with-error and multivariate normal imputation methods. One of the new methods is clearly preferable for handling missing item scores in multidimensional test data.

4.
Testing the homogeneity of covariances (homoscedasticity) among several groups has many applications in statistical analysis. In the context of incomplete-data analysis, tests of homoscedasticity among groups of cases with identical missing-data patterns have been proposed to test whether data are missing completely at random (MCAR). These tests of MCAR require a large total sample size n and/or large group sample sizes n_i, and they usually fail when applied to nonnormal data. Hawkins (Technometrics 23:105-110, 1981) proposed a test of multivariate normality and homoscedasticity that is an exact test for complete data when the n_i are small. This paper proposes a modification of this complete-data test to improve its performance, and extends its application to testing homoscedasticity and MCAR when data are multivariate normal and incomplete. Moreover, it is shown that the statistic used in the Hawkins test, in conjunction with a nonparametric k-sample test, yields a nonparametric test of homoscedasticity that works well for both normal and nonnormal data. It is explained how a combination of the proposed normal-theory Hawkins test and the nonparametric test can be employed to test for homoscedasticity, MCAR, and multivariate normality. Simulation studies show that the newly proposed tests generally outperform their existing competitors in terms of Type I error rates, and a power study of the proposed tests indicates good power. The proposed methods use appropriate imputations to fill in missing data. Methods of multiple imputation are described, and one of them is employed to confirm the results of our single-imputation methods. Examples are provided in which multiple imputation enables one to identify groups whose covariance matrices differ from those of the majority of the other groups.
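The Hawkins statistic itself is built from a transformation of Mahalanobis-type distances; as a crude, hedged stand-in for the nonparametric k-sample idea sketched above (this is not the Hawkins test, and the grouping and test choice are our assumptions), one can compare the distributions of squared Mahalanobis distances across groups with a k-sample test:

```python
import numpy as np
from scipy import stats

def group_mahalanobis(groups):
    """Squared Mahalanobis distances of cases from their group mean,
    using the pooled within-group covariance (complete data assumed)."""
    pooled = sum((len(g) - 1) * np.cov(g, rowvar=False) for g in groups)
    pooled /= sum(len(g) for g in groups) - len(groups)
    inv = np.linalg.inv(pooled)
    d2 = []
    for g in groups:
        c = g - g.mean(axis=0)
        d2.append(np.einsum('ij,jk,ik->i', c, inv, c))
    return d2

# Under homoscedasticity the d2 values are (asymptotically) exchangeable
# across groups, so a k-sample test (here Kruskal-Wallis) gives a crude
# nonparametric check; a stricter analysis would follow Hawkins (1981).
rng = np.random.default_rng(0)
g1 = rng.multivariate_normal([0, 0], np.eye(2), size=60)
g2 = rng.multivariate_normal([1, 1], 3 * np.eye(2), size=60)  # inflated cov
d2 = group_mahalanobis([g1, g2])
print(stats.kruskal(*d2))
```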

5.
Maydeu-Olivares and Joe (J. Am. Stat. Assoc. 100:1009-1020, 2005; Psychometrika 71:713-732, 2006) introduced classes of chi-square tests for (sparse) multidimensional multinomial data based on low-order marginal proportions. Our extension provides general conditions under which quadratic forms in linear functions of cell residuals are asymptotically chi-square. The new statistics need not be based on margins, and can be used for one-dimensional multinomials. We also provide theory that explains why limited-information statistics have good power regardless of sparseness. We show how quadratic-form statistics can be constructed that are more powerful than X² and yet have an approximate chi-square null distribution in finite samples with large models. Examples with models for truncated count data and binary item response data illustrate the theory.
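To make the quadratic-form construction concrete, here is a hedged sketch for the simplest case of a fully specified null (no estimated parameters), where a quadratic form in the low-order margin residuals is asymptotically chi-square; with estimated parameters, the residual covariance must additionally account for estimation error, as in the theory above:

```python
import itertools
import numpy as np
from scipy import stats

J, N = 3, 2000
cells = np.array(list(itertools.product([0, 1], repeat=J)))  # 2^J patterns

# Fully specified null: independent Bernoulli items (no estimated params).
probs = np.array([0.3, 0.5, 0.7])
pi = np.prod(np.where(cells == 1, probs, 1 - probs), axis=1)

# Operator T picking out first- and second-order margins of the cells.
rows = [cells[:, i].astype(float) for i in range(J)]
rows += [(cells[:, i] * cells[:, j]).astype(float)
         for i in range(J) for j in range(i + 1, J)]
T = np.array(rows)

# Simulate data under the null and form the margin residuals.
rng = np.random.default_rng(3)
counts = rng.multinomial(N, pi)
p_hat = counts / N
e = T @ (p_hat - pi)

# Asymptotic covariance of sqrt(N)*(p_hat - pi) is diag(pi) - pi pi'.
Gamma = np.diag(pi) - np.outer(pi, pi)
M = T @ Gamma @ T.T
stat = N * e @ np.linalg.pinv(M) @ e
df = np.linalg.matrix_rank(M)
print("X =", stat, " df =", df, " p =", stats.chi2.sf(stat, df))
```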

6.
In the diagnostic evaluation of educational systems, self-reports are commonly used to collect both cognitive and orectic data. For various reasons, some of the students' data in these self-reports are frequently missing. The main goal of this research is to compare the performance of different imputation methods for missing data in the context of the evaluation of educational systems. On an empirical database of 5,000 subjects, 72 conditions were simulated: three levels of missing data, three types of loss mechanisms, and eight imputation methods. The levels of missing data were 5%, 10%, and 20%. The loss mechanisms were missing completely at random, moderately conditioned, and strongly conditioned. The eight imputation methods were: listwise deletion; replacement by the scale mean, the item mean, the subject mean, or the corrected subject mean; multiple regression; and the expectation-maximization (EM) algorithm with and without auxiliary variables. The results indicate that recovery of the data is most accurate when an appropriate combination of different recovery methods is used. When a case is partially incomplete, the subject mean works very well, whereas for completely missing records, multiple imputation with the EM algorithm is recommended. This combination is especially recommended when data loss is greater and the loss mechanism is more strongly conditioned. Lastly, the results are discussed and some future lines of research are outlined.

7.
A new method, with an accompanying program in Matlab code, is proposed for testing item-performance models on empirical databases. The method uses data intraclass correlation statistics as expected correlations against which one compares simple functions of the correlations between model predictions and observed item performance. It rests on a data population model whose validity for the data at hand is suitably tested, and which has been verified for three behavioral-measure databases. Contrary to the usual model selection criteria, this method provides an effective way of testing both underfitting and overfitting, answering the usually neglected question: does this model suitably account for these data?

8.
The use of item responses from questionnaire data is ubiquitous in social science research. One side effect of using such data is that researchers must often account for item-level missingness. Multiple imputation is one of the most widely used missing-data handling techniques. The traditional multiple-imputation approach in structural equation modeling has a number of limitations. Motivated by Lee and Cai's approach, we propose an alternative method for conducting statistical inference from multiple imputation in categorical structural equation modeling. We examine the performance of the proposed method in a simulation study and illustrate it with an empirical data set.

9.
Item response theory (IRT) is one of the modern educational and psychological measurement theories used for objective measurement, and it is widely applied in the analysis of large-scale tests, where missing data are very common. For the two-parameter logistic model (2PLM) in IRT, existing EM algorithms handle missing responses and missing abilities only under the missing-completely-at-random mechanism. This study derives an EM algorithm that ignores missing responses under the 2PLM, and further proposes an EM algorithm for handling missing responses and missing abilities under the missing-at-random mechanism, as well as a multiple imputation method that accounts for the uncertainty in both ability estimates and item responses. The results show that, across various missingness mechanisms, missingness proportions, and test designs, the EM algorithm that ignores missing responses and the multiple imputation method both perform well.
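As a hedged illustration of the ignoring strategy this abstract describes (a generic Bock-Aitkin-style EM written by us, not the authors' algorithm), the essential point is that under MAR each person's E-step likelihood is computed over observed responses only:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def em_2pl_ignore_missing(X, n_quad=21, n_iter=30):
    """EM for the 2PL that simply skips NaN responses in the E-step.

    X : persons x items array of 0/1 with np.nan for missing responses.
    Returns discriminations a and difficulties b (a sketch, not
    production code; under MAR the missing entries can be ignored).
    """
    N, J = X.shape
    theta = np.linspace(-4, 4, n_quad)
    w = norm.pdf(theta); w /= w.sum()                 # quadrature weights
    a, b = np.ones(J), np.zeros(J)
    obs = ~np.isnan(X)

    for _ in range(n_iter):
        P = 1 / (1 + np.exp(-a * (theta[:, None] - b)))   # n_quad x J
        # E-step: person-by-node likelihood over OBSERVED items only.
        L = np.ones((N, n_quad))
        for j in range(J):
            o = obs[:, j]
            x = X[o, j]
            L[o] *= np.where(x[:, None] == 1, P[:, j], 1 - P[:, j])
        post = L * w
        post /= post.sum(axis=1, keepdims=True)
        # M-step: expected counts per node, again from observed entries only.
        for j in range(J):
            o = obs[:, j]
            n_k = post[o].sum(axis=0)
            r_k = (post[o] * X[o, j][:, None]).sum(axis=0)

            def nll(par, n_k=n_k, r_k=r_k):
                p = 1 / (1 + np.exp(-par[0] * (theta - par[1])))
                p = np.clip(p, 1e-9, 1 - 1e-9)
                return -(r_k * np.log(p) + (n_k - r_k) * np.log(1 - p)).sum()

            a[j], b[j] = minimize(nll, [a[j], b[j]], method="Nelder-Mead").x
    return a, b
```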

10.
In ordinary least squares (OLS) regression, researchers are often interested in knowing whether a set of parameters differs from zero. With complete data, this can be assessed using the gain-in-prediction test, hierarchical multiple regression, or an omnibus F test. In substantive research scenarios, however, missing data often exist. In the context of multiple imputation, one of the current state-of-the-art missing-data strategies, there are several analogous multiparameter tests of the joint significance of a set of parameters, and these test statistics can be referenced to various distributions to make statistical inferences. However, little is known about the performance of these tests, and virtually no study has compared their Type I error rates and statistical power in scenarios typical of behavioral science data (e.g., small to moderate samples). This paper uses Monte Carlo simulation to examine the performance of these multiparameter test statistics for multiple imputation under a variety of realistic conditions. We provide a number of practical recommendations for substantive researchers based on the simulation results, and illustrate the calculation of these test statistics with an empirical example.
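For orientation, one widely used multiparameter pooling statistic in this family is the Wald-type D1 of Li, Raghunathan, and Rubin (1991); the sketch below follows the formulas as we understand them (the small-sample degrees-of-freedom rule in particular varies across software and should be treated as an assumption):

```python
import numpy as np
from scipy import stats

def pool_D1(estimates, covariances, Q0=None):
    """Wald-type D1 statistic for a k-dim parameter over m imputations.

    estimates   : list of m length-k arrays of point estimates.
    covariances : list of m (k x k) within-imputation covariance matrices.
    Q0          : null value (defaults to the zero vector).
    """
    Q = np.asarray(estimates)                        # m x k
    m, k = Q.shape
    Q0 = np.zeros(k) if Q0 is None else np.asarray(Q0)
    Qbar = Q.mean(axis=0)
    Ubar = np.mean(covariances, axis=0)              # within-imputation variance
    B = np.cov(Q, rowvar=False, ddof=1)              # between-imputation variance
    r1 = (1 + 1/m) * np.trace(B @ np.linalg.inv(Ubar)) / k
    d = Qbar - Q0
    D1 = d @ np.linalg.inv(Ubar) @ d / (k * (1 + r1))
    # Denominator df per Li, Raghunathan & Rubin (1991), as we read it.
    t = k * (m - 1)
    if t > 4:
        nu = 4 + (t - 4) * (1 + (1 - 2/t) / r1) ** 2
    else:
        nu = t * (1 + 1/k) * (1 + 1/r1) ** 2 / 2
    return D1, k, nu, stats.f.sf(D1, k, nu)
```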

11.
Missing values are a practical issue in the analysis of longitudinal data. Multiple imputation (MI) is a well-known likelihood-based method with optimal properties in terms of efficiency and consistency if the imputation model is correctly specified. Doubly robust (DR) weighting-based methods protect against misspecification bias if one of the models, though not necessarily both, for the data or for the mechanism leading to missing data is correct. We propose a new imputation method that combines the simplicity of MI with the protection of the DR approach. This method integrates MI and DR to protect against misspecification of the imputation model under a missing-at-random assumption. It avoids the analytical complications of missing data, particularly in multivariate settings, and is easy to implement in standard statistical packages. Moreover, the proposed method works very well with intermittent patterns of missingness, where other DR methods cannot be used. Simulation experiments show that the proposed approach achieves improved performance when one of the models is correct. The method is applied to data from the fireworks disaster study, a randomized clinical trial comparing therapies in disaster-exposed children. We conclude that the new method increases the robustness of imputations.

12.
Incomplete or missing data are a common problem in almost all areas of empirical research. It is well known that simple ad hoc methods such as complete-case analysis or mean imputation can lead to biased and/or inefficient estimates. The method of maximum likelihood works well; however, when the missing-data mechanism is neither missing completely at random (MCAR) nor missing at random (MAR), it too can result in incorrect inference. Statistical tests for MCAR have been proposed, but they are restricted to a certain class of problems. The idea of sensitivity analysis as a means of detecting the missing-data mechanism has been proposed in the statistics literature in conjunction with selection models, in which the data and the missing-data mechanism are modeled jointly. Our approach is different: we do not model the missing-data mechanism but instead use the data at hand to examine the sensitivity of a given model to it. Our methodology is meant to raise a flag for researchers when the assumptions of MCAR (or MAR) do not hold. To our knowledge, no specific proposal for sensitivity analysis has been set forth in the area of structural equation models (SEM). This article gives a specific method for performing post-modeling sensitivity analysis using a statistical test and graphs. A simulation study assesses the methodology in the context of structural equation models and shows the method's success, especially when the sample size is 300 or more and the percentage of missing data is 20% or more. The method is also used to study a set of real data measuring physical and social self-concepts in 463 Nigerian adolescents, using a factor analysis model.

13.
When Cognitive Diagnosis Meets Computerized Adaptive Testing: CD-CAT
Ying Cheng, Psychometrika, 2009, 74(4): 619-632
Computerized adaptive testing (CAT) is a mode of testing that enables more efficient and accurate recovery of one or more latent traits. Traditionally, CAT is built upon item response theory (IRT) models that assume unidimensionality. The problem of how to build CAT upon latent class models (LCM), however, was not investigated until recently, when Tatsuoka (J. R. Stat. Soc., Ser. C, Appl. Stat. 51:337-350, 2002) and Tatsuoka and Ferguson (J. R. Stat. Soc., Ser. B 65:143-157, 2003) established a general theorem on the asymptotically optimal sequential selection of experiments for classifying finite, partially ordered sets. Xu, Chang, and Douglas (paper presented at the annual meeting of the National Council on Measurement in Education, Montreal, Canada, 2003) then tested two heuristics, one based on Kullback-Leibler information and the other on Shannon entropy, in a simulation study grounded in Tatsuoka's theoretical work on computerized adaptive testing. In this paper, we showcase the application of the optimal sequential selection methodology to item selection in CAT built upon cognitive diagnostic models. Two new heuristics are proposed and compared against the randomized item selection method and the two heuristics investigated by Xu et al. Finally, we show the connection between the Kullback-Leibler-information-based approaches and the Shannon-entropy-based approach, as well as the connection between algorithms built upon LCM and those built upon IRT models.
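One concrete instance of such a heuristic is posterior-weighted Kullback-Leibler item selection; the sketch below assumes a DINA model with known slip and guess parameters (our assumptions for illustration, not necessarily the paper's exact setup) and scores each candidate item by the posterior-weighted sum of Bernoulli KL divergences between the current profile estimate and every other attribute profile:

```python
import itertools
import numpy as np

def dina_p(alpha, q, slip, guess):
    """P(correct | attribute profile alpha) under the DINA model."""
    eta = np.all(alpha[:, None, :] >= q[None, :, :], axis=2)  # profiles x items
    return np.where(eta, 1 - slip, guess)

def pwkl_index(post, p_all, alpha_hat_idx, admin):
    """Posterior-weighted KL index for each not-yet-administered item."""
    p0 = p_all[alpha_hat_idx]                  # response probs at current estimate
    kl = (p0 * np.log(p0 / p_all)              # itemwise Bernoulli KL divergence
          + (1 - p0) * np.log((1 - p0) / (1 - p_all)))
    idx = post @ kl                            # weight by the profile posterior
    idx[admin] = -np.inf                       # exclude already-used items
    return idx

K, J = 3, 10
rng = np.random.default_rng(7)
profiles = np.array(list(itertools.product([0, 1], repeat=K)))
q = rng.integers(0, 2, size=(J, K)); q[q.sum(axis=1) == 0, 0] = 1
slip, guess = np.full(J, 0.1), np.full(J, 0.2)
p_all = dina_p(profiles, q, slip, guess)           # 2^K x J

post = np.full(len(profiles), 1 / len(profiles))   # flat posterior to start
admin = np.zeros(J, dtype=bool)
alpha_hat_idx = int(post.argmax())
print("next item:", int(pwkl_index(post, p_all, alpha_hat_idx, admin).argmax()))
```

After each response, the posterior over profiles is updated by Bayes' rule and the index is recomputed, which is the adaptive part of the procedure.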

14.
The performance of multiple imputation in questionnaire data has been studied in various simulation studies. In practice, however, questionnaire data are usually more complex than simulated data. For example, items may be counterindicative or may have unacceptably low factor loadings on every subscale, or completely missing subscales may complicate computations. In this article, we studied how well multiple imputation recovered the results of several psychometrically important statistics in a data set with such properties. Analysis of this data set revealed that multiple imputation recovered the results of these analyses well. A simulation study further showed that multiple imputation produced only small bias in these statistics for simulated data sets with the same properties.

15.
Standard factorial designs in psycholinguistics have recently been complemented by large-scale databases providing empirical constraints at the level of item performance. At the same time, the development of precise computational architectures has led modelers to compare item-level performance with item-level predictions. It has been suggested, however, that item performance includes a large amount of undesirable error variance that should be quantified to determine the amount of reproducible variance that models should account for. In the present study, we provide a simple and tractable statistical analysis of this issue. We also report practical solutions for estimating the amount of reproducible variance for any database that conforms to the additive decomposition of the variance. A new empirical database consisting of the word identification times of 140 participants on 120 words is then used to test these practical solutions. Finally, we show that increases in the amount of reproducible variance are accompanied by the detection of new sources of variance.
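One common way to operationalize such an estimate (a hedged sketch under the additive participants-by-items decomposition mentioned above, not necessarily the authors' exact estimator) is the reliability of the item means, an ICC(C,k)-type quantity obtained from a two-way ANOVA:

```python
import numpy as np

def reproducible_variance_proportion(rt):
    """Proportion of between-item variance that is reproducible,
    via a two-way (participants x items) ANOVA decomposition.

    rt : participants x items array of response times, complete data.
    Returns an ICC(C,k)-style estimate: (MS_items - MS_resid) / MS_items.
    """
    P, J = rt.shape
    grand = rt.mean()
    item_mean = rt.mean(axis=0)
    pers_mean = rt.mean(axis=1)
    ss_item = P * ((item_mean - grand) ** 2).sum()
    ss_pers = J * ((pers_mean - grand) ** 2).sum()
    ss_resid = ((rt - grand) ** 2).sum() - ss_item - ss_pers
    ms_item = ss_item / (J - 1)
    ms_resid = ss_resid / ((J - 1) * (P - 1))
    return (ms_item - ms_resid) / ms_item
```

A model's item-level correlation with the data can then be judged against this ceiling rather than against 1.0.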

16.
As a procedure for handling missing data, multiple imputation consists of estimating the missing data multiple times to create several complete versions of an incomplete data set. All of these data sets are analyzed by the same statistical procedure, and the results are pooled for interpretation. So far, no explicit rules for pooling the F tests of (repeated-measures) analysis of variance have been defined. In this article we outline the appropriate procedure for pooling analysis of variance (ANOVA) results across multiply imputed data sets. It involves reformulating the ANOVA model as a regression model using effect coding of the predictors, and then applying the existing combination rules for regression models. The proposed procedure is illustrated with three example data sets, whose pooled results provide plausible F and p values.
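As a hedged sketch of the reformulation step (the combination rules themselves are the standard multiparameter pooling rules, such as the D1 statistic sketched under item 10 above), a one-way factor with g levels becomes g - 1 effect-coded predictors, and the pooled omnibus F test is the joint test that those coefficients are zero:

```python
import numpy as np

def effect_code(factor):
    """Effect (deviation) coding: g levels -> g-1 columns of +1/0/-1,
    with the last level coded -1 on every column."""
    levels = sorted(set(factor))
    g = len(levels)
    Z = np.zeros((len(factor), g - 1))
    for i, f in enumerate(factor):
        j = levels.index(f)
        if j == g - 1:
            Z[i, :] = -1.0
        else:
            Z[i, j] = 1.0
    return Z

def ols(y, Z):
    """OLS fit; returns the factor coefficients and their covariance."""
    X = np.column_stack([np.ones(len(y)), Z])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1:], cov[1:, 1:]       # drop the intercept
```

Fitting this regression in each imputed data set and feeding the per-imputation (coefficients, covariance) pairs into a D1-style pooling routine yields the pooled F and p values described in the abstract.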

17.
A Two-Tier Full-Information Item Factor Analysis Model with Applications
Li Cai, Psychometrika, 2010, 75(4): 581-612
Motivated by Gibbons et al.'s (Appl. Psychol. Meas. 31:4-19, 2007) full-information maximum marginal likelihood item bifactor analysis for polytomous data, and by Rijmen, Vansteelandt, and De Boeck's (Psychometrika 73:167-182, 2008) work on computationally efficient estimation algorithms for latent variable models, a two-tier item factor analysis model is developed in this research. The modeling framework subsumes standard multidimensional IRT models, bifactor IRT models, and testlet response theory models as special cases. Features of the model lead to a reduction in the dimensionality of the latent variable space and, consequently, to significant computational savings. An EM algorithm for full-information maximum marginal likelihood estimation is developed. Simulations and real-data demonstrations confirm the accuracy and efficiency of the proposed methods. Three real data sets, from a large-scale educational assessment, a longitudinal public health survey, and a scale-development study measuring patient-reported quality-of-life outcomes, are analyzed to illustrate the model's broad range of applicability.

18.
A new observable consequence of the property of invariant item ordering is presented, which holds under Mokken's double monotonicity model for dichotomous data. The observable consequence is an invariant ordering of the item-total regressions. Kendall's measure of concordance W and a weighted version of this measure are proposed as measures of this property. Karabatsos and Sheu (Appl. Psychol. Meas. 28:110-125, 2004) proposed a Bayesian procedure that can be used to determine whether the property of an invariant ordering of the item-total regressions should be rejected for a set of items. An example illustrates the application of the procedures to empirical data.
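As a rough illustration of the concordance check (a generic construction of our own, not Karabatsos and Sheu's Bayesian procedure): approximate each item-total regression by the item means within total-score groups, rank the items within each group, and compute Kendall's W over the resulting rankings:

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w_item_orderings(X, n_groups=5):
    """Kendall's W over item orderings across score groups.

    X : persons x items array of dichotomous (0/1) scores.
    W near 1 supports an invariant ordering of the item-total
    regressions; this is a rough diagnostic only (ties are handled
    by average ranks, and we group on the total rather than the
    rest score for simplicity).
    """
    N, J = X.shape
    total = X.sum(axis=1)
    edges = np.quantile(total, np.linspace(0, 1, n_groups + 1))
    grp = np.clip(np.searchsorted(edges[1:-1], total, side="right"),
                  0, n_groups - 1)

    ranks = []
    for g in range(n_groups):
        means = X[grp == g].mean(axis=0)   # item means within score group g
        ranks.append(rankdata(means))
    R = np.array(ranks)                    # n_groups x J rank matrix

    m = n_groups
    Rj = R.sum(axis=0)
    S = ((Rj - Rj.mean()) ** 2).sum()
    return 12 * S / (m ** 2 * (J ** 3 - J))
```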

19.
Single-word naming is one of the most widely used experimental paradigms for studying how we read words. Following the seminal study by Spieler and Balota (Psychological Science 8:411-416, 1997), accounting for variance in item-level naming databases has become a major challenge for computational models of word reading. Using a new large-scale database of naming responses, we first provide a precise estimate of the amount of reproducible variance that models should try to account for with such databases. Second, using an item-level measure of delayed naming, we show that it captures not only the variance usually explained by onset phonetic properties, but also an additional portion of the variance related to output processes. Finally, by comparing the item means from this new database with those reported in a previous study, we found that the two sets of item response times were highly reliable (r = .94) once the variance related to onset phonetic properties and voice-key sensitivity was factored out. Overall, the present results provide new guidelines for testing computational models of word naming with item-level databases.

20.
This article compares a variety of imputation strategies for ordinal missing data on Likert-scale variables (number of categories = 2, 3, 5, or 7) in terms of recovering reliability coefficients, mean scale scores, and the regression coefficients for predicting one scale score from another. The examined strategies include imputing with normal data models (with naïve rounding or without rounding), with latent variable models, and with categorical data models such as discriminant analysis and binary logistic regression (for dichotomous data only) and multinomial and proportional-odds logistic regression (for polytomous data only). The results suggest that both the normal-model approach without rounding and the latent-variable-model approach perform well for either dichotomous or polytomous data, regardless of sample size, missing-data proportion, and asymmetry of the item distributions. The discriminant analysis approach also performs well for dichotomous data. Naïvely rounding normal imputations or using logistic regression models to impute ordinal data is not recommended, as these can lead to substantial bias in all or some of the parameters.
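As a hedged sketch of one categorical-model strategy named above, multinomial logistic imputation for a polytomous item (the variable names are illustrative), fit the model on the observed cases and draw each missing value from the predicted category probabilities; a full multiple-imputation procedure would also redraw the model parameters, e.g. via a bootstrap, before each imputation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def impute_multinomial(y, X, m=5, rng=None):
    """Draw m imputations of a categorical vector y (np.nan = missing)
    from a multinomial logistic regression of y on predictors X.
    Note: parameter uncertainty is NOT propagated in this sketch."""
    rng = np.random.default_rng() if rng is None else rng
    obs = ~np.isnan(y)
    model = LogisticRegression(max_iter=1000)
    model.fit(X[obs], y[obs].astype(int))
    proba = model.predict_proba(X[~obs])       # rows sum to 1
    imputations = []
    for _ in range(m):
        draws = np.array([rng.choice(model.classes_, p=p) for p in proba])
        y_imp = y.copy()
        y_imp[~obs] = draws
        imputations.append(y_imp)
    return imputations

# Illustrative use with a 5-category Likert item y and two predictors.
rng = np.random.default_rng(11)
X = rng.normal(size=(300, 2))
y = np.clip(np.round(2 + X @ [0.8, -0.5] + rng.normal(size=300)), 0, 4)
y[rng.random(300) < 0.15] = np.nan             # 15% missingness
imps = impute_multinomial(y, X, m=5, rng=rng)
print([np.bincount(imp.astype(int)).tolist() for imp in imps[:2]])
```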
