Similar documents
20 similar documents retrieved
1.
Methodologists have developed mediation analysis techniques for a broad range of substantive applications, yet methods for estimating mediating mechanisms with missing data have been understudied. This study outlined a general Bayesian missing data handling approach that can accommodate mediation analyses with any number of manifest variables. Computer simulation studies showed that the Bayesian approach produced frequentist coverage rates and power estimates that were comparable to those of maximum likelihood with the bias-corrected bootstrap. We share an SAS macro that implements Bayesian estimation and use 2 data analysis examples to demonstrate its use.
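The essential output of such an approach is a posterior summary of the indirect effect a*b. A minimal sketch, assuming independent normal posteriors for the two path coefficients (the function name and all inputs here are hypothetical and are not taken from the SAS macro mentioned in the abstract):

```python
import random

def indirect_credible_interval(a_mean, a_sd, b_mean, b_sd,
                               n_draw=5000, alpha=0.05, seed=7):
    """Approximate credible interval for the indirect effect a*b by
    simulating independent draws from assumed normal posteriors of the
    two path coefficients (a sketch, not the full Bayesian model)."""
    rng = random.Random(seed)
    # draw a and b, form the product, and read off percentile bounds
    draws = sorted(rng.gauss(a_mean, a_sd) * rng.gauss(b_mean, b_sd)
                   for _ in range(n_draw))
    lo = draws[int((alpha / 2) * n_draw)]
    hi = draws[int((1 - alpha / 2) * n_draw) - 1]
    return lo, hi
```

Because the product of two normals is skewed, the percentile interval is deliberately asymmetric around the point estimate a_mean * b_mean.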

2.
This article compares a variety of imputation strategies for ordinal missing data on Likert scale variables (number of categories = 2, 3, 5, or 7) in recovering reliability coefficients, mean scale scores, and regression coefficients of predicting one scale score from another. The examined strategies include imputing using normal data models with naïve rounding/without rounding, using latent variable models, and using categorical data models such as discriminant analysis and binary logistic regression (for dichotomous data only), multinomial and proportional odds logistic regression (for polytomous data only). The results suggest that both the normal model approach without rounding and the latent variable model approach perform well for either dichotomous or polytomous data regardless of sample size, missing data proportion, and asymmetry of item distributions. The discriminant analysis approach also performs well for dichotomous data. Naïvely rounding normal imputations or using logistic regression models to impute ordinal data are not recommended, as they can potentially lead to substantial bias in all or some of the parameters.

3.
Analyses of multivariate data are frequently hampered by missing values. Until recently, the only missing-data methods available to most data analysts have been relatively ad hoc practices such as listwise deletion. Recent dramatic advances in theoretical and computational statistics, however, have produced a new generation of flexible procedures with a sound statistical basis. These procedures involve multiple imputation (Rubin, 1987), a simulation technique that replaces each missing datum with a set of m > 1 plausible values. The m versions of the complete data are analyzed by standard complete-data methods, and the results are combined using simple rules to yield estimates, standard errors, and p-values that formally incorporate missing-data uncertainty. New computational algorithms and software described in a recent book (Schafer, 1997a) allow us to create proper multiple imputations in complex multivariate settings. This article reviews the key ideas of multiple imputation, discusses the software programs currently available, and demonstrates their use on data from the Adolescent Alcohol Prevention Trial (Hansen & Graham, 1991).
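The "simple rules" for combining the m sets of results are Rubin's (1987) pooling rules: average the point estimates, and combine within- and between-imputation variance into a total variance. A minimal sketch (the helper name pool_rubin is invented for illustration):

```python
import math

def pool_rubin(estimates, variances):
    """Pool m complete-data estimates and their squared standard errors
    (variances) using Rubin's (1987) combining rules."""
    m = len(estimates)
    qbar = sum(estimates) / m                    # pooled point estimate
    ubar = sum(variances) / m                    # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    t = ubar + (1 + 1 / m) * b                   # total variance
    return qbar, math.sqrt(t)
```

The (1 + 1/m) factor inflates the between-imputation component to account for using a finite number of imputations, which is why the pooled standard error exceeds the average complete-data standard error.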

4.
The performance of five simple multiple imputation methods for dealing with missing data was compared. In addition, random imputation and multivariate normal imputation were used as the lower and upper benchmarks, respectively. Test data were simulated and item scores were deleted such that they were either missing completely at random, missing at random, or not missing at random. Cronbach's alpha, Loevinger's scalability coefficient H, and the item cluster solution from Mokken scale analysis of the complete data were compared with the corresponding results based on the data including imputed scores. Three multiple-imputation methods (two-way with normally distributed errors, corrected item-mean substitution with normally distributed errors, and response function) produced discrepancies in Cronbach's coefficient alpha, Loevinger's coefficient H, and the cluster solution from Mokken scale analysis that were smaller than those of the upper benchmark, multivariate normal imputation.

5.
A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are compared through the analysis of data sets famous in the equating literature. Also, the classical percentile-rank, linear, and mean equating models are each proven to be a special case of a Bayesian model under a highly informative choice of prior distribution.
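For context, the classical percentile-rank (equipercentile) equating that the abstract treats as a special case can be sketched as follows. This is an unsmoothed toy version for integer-scored tests, and the function names are hypothetical:

```python
def percentile_ranks(scores, max_score):
    """Percentile rank of each integer score 0..max_score: the
    proportion scoring below plus half the proportion at that score."""
    n = len(scores)
    counts = [0] * (max_score + 1)
    for s in scores:
        counts[s] += 1
    below, pr = 0, []
    for c in counts:
        pr.append((below + c / 2) / n)
        below += c
    return pr

def equipercentile_equate(x_scores, y_scores, max_score):
    """Equate each form-X score to the form-Y score whose percentile
    rank is closest (unsmoothed equipercentile equating)."""
    pr_x = percentile_ranks(x_scores, max_score)
    pr_y = percentile_ranks(y_scores, max_score)
    return [min(range(max_score + 1), key=lambda k: abs(pr_y[k] - p))
            for p in pr_x]
```

If form Y is uniformly one point harder than form X, the conversion table recovers the one-point shift, which is the sanity check operational equating software applies to toy data.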

6.
Exploratory factor analysis (EFA) is an extremely popular method for determining the underlying factor structure for a set of variables. Due to its exploratory nature, EFA is notorious for being conducted with small sample sizes, and recent reviews of psychological research have reported that between 40% and 60% of applied studies have 200 or fewer observations. Recent methodological studies have addressed small-sample-size requirements for EFA models; however, these studies have only considered complete data, which are the exception rather than the rule in psychology. Furthermore, the extant literature on missing data techniques with small samples is scant, and nearly all existing studies focus on topics that are not of primary interest to EFA models. Therefore, this article presents a simulation to assess the performance of various missing data techniques for EFA models with both small samples and missing data. Results show that deletion methods do not extract the proper number of factors and estimate the factor loadings with severe bias, even when data are missing completely at random. Predictive mean matching is the best method overall when considering extracting the correct number of factors and estimating factor loadings without bias, although 2-stage estimation was a close second.
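Predictive mean matching, the best-performing method in this simulation, can be illustrated with a deliberately simplified single-predictor sketch (a deterministic nearest-donor variant; real PMM implementations draw a donor at random from several nearest neighbors, and the function name here is hypothetical):

```python
def pmm_impute(x, y):
    """Single-predictor predictive mean matching: fit y ~ x on the
    complete cases, then fill each missing y (None) with the observed y
    whose fitted value is closest to the missing case's predicted value.
    Deterministic nearest-donor variant for illustration only."""
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    n = len(obs)
    mx = sum(xi for xi, _ in obs) / n
    my = sum(yi for _, yi in obs) / n
    # OLS slope and intercept on the complete cases
    b = (sum((xi - mx) * (yi - my) for xi, yi in obs)
         / sum((xi - mx) ** 2 for xi, _ in obs))
    a = my - b * mx
    fitted = [(a + b * xi, yi) for xi, yi in obs]
    # borrow the observed value of the nearest donor on the fitted scale
    return [yi if yi is not None
            else min(fitted, key=lambda t: abs(t[0] - (a + b * xi)))[1]
            for xi, yi in zip(x, y)]
```

Because imputed values are always borrowed from observed donors, PMM never produces implausible values (e.g., a loading-relevant item score outside its scale range), which is one reason it behaves well in small samples.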

7.
Traditional structural equation modeling (SEM) techniques have trouble dealing with incomplete and/or nonnormal data that are often encountered in practice. Yuan and Zhang (2011a) developed a two-stage procedure for SEM to handle nonnormal missing data and proposed four test statistics for overall model evaluation. Although these statistics have been shown to work well with complete data, their performance for incomplete data has not been investigated in the context of robust statistics.

Focusing on a linear growth curve model, a systematic simulation study is conducted to evaluate the accuracy of the parameter estimates and the performance of five test statistics including the naive statistic derived from normal distribution based maximum likelihood (ML), the Satorra-Bentler scaled chi-square statistic (RML), the mean- and variance-adjusted chi-square statistic (AML), Yuan-Bentler residual-based test statistic (CRADF), and Yuan-Bentler residual-based F statistic (RF). Data are generated and analyzed in R using the package rsem (Yuan & Zhang, 2011b).

Based on the simulation study, we observe the following: (a) The traditional normal-distribution-based method cannot yield accurate parameter estimates for nonnormal data, whereas the robust method obtains much more accurate model parameter estimates for nonnormal data and performs almost as well as the normal-distribution-based method for normally distributed data. (b) As sample size increases, or as the missing rate or the number of outliers decreases, the parameter estimates are less biased and the empirical distributions of the test statistics are closer to their nominal distributions. (c) The ML test statistic does not work well for nonnormal or missing data. (d) For nonnormal complete data, CRADF and RF work relatively better than RML and AML. (e) For missing completely at random (MCAR) data, in almost all cases, RML and AML work better than CRADF and RF. (f) For nonnormal missing at random (MAR) data, CRADF and RF work better than AML. (g) The performance of the robust method does not seem to be influenced by the symmetry of the outliers.

8.
Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches.

9.
Evaluating the fit of a structural equation model via bootstrap requires a transformation of the data so that the null hypothesis holds exactly in the sample. For complete data, such a transformation was proposed by Beran and Srivastava (1985) for general covariance structure models and applied to structural equation modeling by Bollen and Stine (1992). An extension of this transformation to missing data was presented by Enders (2002), but it is an approximate rather than an exact solution, with the degree of approximation unknown. In this article, we provide several approaches to obtaining an exact solution. First, an explicit solution is given for the special case in which the sample covariance matrix within each missing data pattern is invertible. Second, two iterative algorithms are described for obtaining an exact solution in the general case. We evaluate the rejection rates of the bootstrapped likelihood ratio statistic obtained via the new procedures in a Monte Carlo study. Our main finding is that model-based bootstrap with incomplete data performs quite well across a variety of distributional conditions, missing data mechanisms, and proportions of missing data. We illustrate the new procedures using empirical data on 26 cognitive ability measures in junior high school students, published in Holzinger and Swineford (1939).

10.
11.
12.
A Monte Carlo study compared the statistical performance of standard and robust multilevel mediation analysis methods to test indirect effects for a cluster randomized experimental design under various departures from normality. The performance of these methods was examined for an upper-level mediation process, where the indirect effect is a fixed effect and a group-implemented treatment is hypothesized to impact a person-level outcome via a person-level mediator. Two methods—the bias-corrected parametric percentile bootstrap and the empirical-M test—had the best overall performance. Methods designed for nonnormal score distributions exhibited elevated Type I error rates and poorer confidence interval coverage under some conditions. Although preliminary, the findings suggest that new mediation analysis methods may provide for robust tests of indirect effects.
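The bias-corrected percentile bootstrap named above can be sketched for a simple single-level indirect effect a*b; this is an illustrative reduction of the multilevel setting described in the abstract, with hypothetical function names:

```python
import random
from statistics import NormalDist

def slope(u, v):
    """OLS slope of v regressed on u."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    return num / sum((ui - mu) ** 2 for ui in u)

def bc_bootstrap_indirect(x, m, y, n_boot=500, alpha=0.05, seed=1):
    """Bias-corrected percentile bootstrap CI for the indirect effect
    a*b (a: slope of m on x; b: slope of y on m). Single-level sketch."""
    ab_hat = slope(x, m) * slope(m, y)
    rng = random.Random(seed)
    n = len(x)
    boots = []
    for _ in range(n_boot):
        s = [rng.randrange(n) for _ in range(n)]
        xs, ms, ys = [x[i] for i in s], [m[i] for i in s], [y[i] for i in s]
        boots.append(slope(xs, ms) * slope(ms, ys))
    boots.sort()
    nd = NormalDist()
    # bias-correction constant from the share of bootstrap estimates below ab_hat
    p = sum(b < ab_hat for b in boots) / n_boot
    p = min(max(p, 1 / n_boot), 1 - 1 / n_boot)  # guard against inv_cdf(0) or inv_cdf(1)
    z0 = nd.inv_cdf(p)
    zlo, zhi = nd.inv_cdf(alpha / 2), nd.inv_cdf(1 - alpha / 2)
    def pick(z):
        k = int(nd.cdf(2 * z0 + z) * n_boot)
        return boots[min(n_boot - 1, max(0, k))]
    return ab_hat, (pick(zlo), pick(zhi))
```

The bias correction shifts the percentile endpoints by 2*z0, which is what lets the interval track the skewed sampling distribution of a product of coefficients.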

13.
In some popular test designs (including computerized adaptive testing and multistage testing), many item pairs are not administered to any test takers, which may complicate dimensionality analyses. In this paper, a modified DETECT index is proposed for performing dimensionality analyses on response data from such designs. It is proven that, under certain conditions, the modified DETECT index can successfully find the dimensionality-based partition of items. Furthermore, the modified DETECT index is decomposed into two parts, which can serve as indices of the reliability of results from the DETECT procedure when response data are judged to be multidimensional. A simulation study shows that the modified DETECT index can successfully recover the dimensional structure of response data under reasonable specifications. Finally, the modified DETECT procedure is applied to real response data from two-stage tests to demonstrate how to utilize these indices and interpret their values in dimensionality analyses.

14.
15.
Researchers studying the movements of the human body often encounter data measured in angles (e.g., angular displacements of joints). The evaluation of these circular data requires special statistical methods. The authors introduce a new test for the analysis of order-constrained hypotheses for circular data. Through this test, researchers can evaluate their expectations regarding the outcome of an experiment directly by representing their ideas in the form of a hypothesis containing inequality constraints. The resulting data analysis is generally more powerful than one using standard null hypothesis testing. Two examples of circular data from human movement science are presented to illustrate the use of the test. Results from a simulation study show that the test performs well.

16.
To examine how fear of missing out, ego depletion, and relational self-construal affect social networking site (SNS) addiction, this study, grounded in the limited self-control model, surveyed 526 college-student users of WeChat Moments or QQ Zone with questionnaires. Results showed that: (1) controlling for gender, fear of missing out significantly and positively predicted SNS addiction; (2) fear of missing out also predicted SNS addiction indirectly through the mediating effect of ego depletion; (3) relational self-construal moderated the first half of the path by which fear of missing out predicts SNS addiction via ego depletion. Specifically, compared with individuals low in relational self-construal, the fear of missing out of individuals high in relational self-construal influenced their SNS addiction more strongly through ego depletion.

17.
Yang Jinmei, Xian Guicai, Zhu Zhen. 《心理科学》 (Psychological Science), 2002, 25(3): 366, 362
The two-hand coordination test is a commonly used method for measuring an individual's distribution of attention. Allocating attention to two or more different objects or activities at the same time is called attention distribution (divided attention). The quality of attention distribution is an important condition for completing complex work. If a driver cannot simultaneously distribute attention across different activities, he or she cannot be a qualified driver. For many other occupations as well, attention distribution is an important psychological trait. Accurately assessing an individual's psychological traits matters both for career choice and for performance on the job.

18.
Mediation analysis allows the examination of effects of a third variable (mediator/confounder) in the causal pathway between an exposure and an outcome. The general multiple mediation analysis method (MMA), proposed by Yu et al., improves on traditional methods (e.g., estimation of natural and controlled direct effects) by allowing multiple mediators/confounders to be considered simultaneously and by permitting linear and nonlinear predictive models for estimating mediation/confounding effects. Previous studies find that, compared with non-Hispanic cancer survivors, Hispanic survivors are more likely to endure anxiety and depression after cancer diagnosis. In this paper, we applied MMA to the MY-Health study to identify mediators/confounders and to quantify the indirect effect of each identified mediator/confounder in explaining ethnic disparities in anxiety and depression among cancer survivors enrolled in the study. We considered a number of socio-demographic variables, tumor characteristics, and treatment factors as potential mediators/confounders and found that most of the ethnic differences in anxiety or depression between Hispanic and non-Hispanic white cancer survivors were explained by younger age at diagnosis, lower education level, lower rates of employment, lower likelihood of being born in the USA, less insurance coverage, and less social support among Hispanic patients.

19.
Based on item response theory, this study proposes a new DIF-detection method with high power and a small Type I error rate: the LP (Likelihood Procedure) method, illustrated here with DIF testing of items under the 2PLM. We examine the effectiveness of the LP method by comparing it with three commonly used DIF-detection methods: the MH method, Lord's chi-square test, and Raju's area measure, and we explore how sample size, test length, differences between the ability distributions of the focal and reference groups, DIF magnitude, and related factors may affect its effectiveness. The simulation study yields the following conclusions: (1) the LP method is more sensitive and more robust than the MH method and Lord's chi-square test; (2) the LP method is more reasonable than Raju's area measure; (3) the power of the LP method increases as sample size or DIF magnitude increases; (4) when the reference and focal groups do not differ in ability, the LP method has higher power under all conditions than when the two groups differ in ability; (5) the LP method has good power for both uniform and nonuniform DIF, with higher power for uniform DIF. The LP method can be easily extended to multidimensional and polytomously scored items.

20.
One hundred and twenty-three students in a fitness training course were assigned at random to a "do your best," "very hard," or "highly improbable" goal condition after five weeks of baseline training on a three-minute sit-up task. A four-week test period followed in which subjects' task goals and efficacy expectations were measured once each week prior to an assessment of their performance. Based on cognitive mediation theory (Garland, 1985), a causal model was presented in which individual task goals are proposed to influence performance through their influence on self-efficacy. Path analyses on the data over each of the four test weeks provided support for the proposed model.
