3 similar records retrieved (search time: 0 ms)
1.
W. Holmes Finch Maria E. Hernández Finch Brian F. French 《International Journal of Testing》2016,16(1):21-53
Differential item functioning (DIF) assessment is key in score validation. When DIF is present, scores may not accurately reflect the construct of interest for some groups of examinees, leading to incorrect conclusions from the scores. Given rising immigration and the increased reliance of educational policymakers on cross-national assessments such as the Programme for International Student Assessment, the Trends in International Mathematics and Science Study, and the Progress in International Reading Literacy Study (PIRLS), DIF with regard to native language is of particular interest in this context. However, given differences in language and culture, assuming similar cross-national DIF may lead to mistaken conclusions about the impact of immigration status and native language on test performance. The purpose of this study was to use model-based recursive partitioning (MBRP) to investigate uniform DIF in PIRLS items across European nations. Results demonstrated that DIF based on mother's language was present for several items on a PIRLS assessment, but that the patterns of DIF were not the same across all nations.
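To make the uniform-DIF idea concrete, the sketch below shows the standard logistic-regression DIF test that tree-based approaches like MBRP generalize: a likelihood-ratio comparison of an ability-only model against a model that adds a group term. This is not the paper's MBRP procedure (commonly implemented as partykit::mob in R); the data are synthetic and all variable names (total_score, native_lang, item_correct) are hypothetical.

```python
# Minimal sketch of a uniform DIF screen via logistic regression on one item.
# Synthetic data and hypothetical variable names; run separately per country to
# mimic a cross-national comparison, which MBRP instead searches for automatically.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(42)
n = 2000
total_score = rng.normal(0.0, 1.0, n)      # proxy for reading ability
native_lang = rng.integers(0, 2, n)        # 1 = test language spoken at home
# Simulate an item with uniform DIF: the group effect shifts difficulty at all ability levels.
logit = 0.8 * total_score + 0.5 * native_lang - 0.2
item_correct = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Reduced model: ability only. Full model: ability + group (the uniform DIF term).
X_reduced = sm.add_constant(total_score)
X_full = sm.add_constant(np.column_stack([total_score, native_lang]))
fit_reduced = sm.Logit(item_correct, X_reduced).fit(disp=False)
fit_full = sm.Logit(item_correct, X_full).fit(disp=False)

# Likelihood-ratio test: a significant improvement flags uniform DIF for this item.
lr_stat = 2.0 * (fit_full.llf - fit_reduced.llf)
p_value = chi2.sf(lr_stat, df=1)
print(f"LR = {lr_stat:.2f}, p = {p_value:.4f}")
```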
2.
Xinru Li Elise Dusseldorp Jacqueline J. Meulman 《The British journal of mathematical and statistical psychology》2017,70(1):118-136
In the framework of meta-analysis, moderator analysis is usually performed only univariately. When several study characteristics are available that may account for the treatment effect, standard meta-regression has difficulty identifying interactions between them. To overcome this problem, meta-CART has been proposed: an approach that applies classification and regression trees (CART) to identify interactions, and then subgroup meta-analysis to test the significance of moderator effects. The previous version of meta-CART had shortcomings: when applying CART, the sample sizes of the studies were not taken into account, and the effect sizes were dichotomized around the median value. This article therefore proposes new meta-CART extensions, weighting study effect sizes by their accuracy and using a regression tree to avoid dichotomization. In addition, new pruning rules are proposed. The performance of all versions of meta-CART was evaluated via a Monte Carlo simulation study. The simulation results revealed that meta-regression trees with random-effects weights and a 0.5-standard-error pruning rule perform best. The required sample size for meta-CART to achieve satisfactory performance depends on the number of study characteristics, the magnitude of the interactions, and the residual heterogeneity.
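A rough sketch of the weighted regression-tree step is given below: estimate residual heterogeneity, form random-effects weights, and grow a tree on the raw effect sizes so moderator interactions are found without dichotomization. The 0.5-standard-error pruning rule from the article is not available in scikit-learn, so this sketch simply caps tree depth; the studies, moderators, and names (mods, vi, yi) are synthetic and hypothetical.

```python
# Sketch of a weighted meta-regression tree in the spirit of meta-CART (simplified pruning).
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(7)
K = 150                                                  # number of studies
mods = rng.integers(0, 2, size=(K, 3)).astype(float)     # binary study characteristics
vi = rng.uniform(0.01, 0.06, size=K)                     # within-study sampling variances
true_effect = 0.2 + 0.4 * mods[:, 0] * mods[:, 1]        # effect driven by an interaction
yi = true_effect + rng.normal(0.0, np.sqrt(vi + 0.01))   # observed effect sizes

# DerSimonian-Laird estimate of residual heterogeneity tau^2, then random-effects weights.
w_fixed = 1.0 / vi
mu_fixed = np.sum(w_fixed * yi) / np.sum(w_fixed)
Q = np.sum(w_fixed * (yi - mu_fixed) ** 2)
C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (K - 1)) / C)
w_random = 1.0 / (vi + tau2)

# Weighted regression tree on the continuous effect sizes (no median split),
# so recovered splits reflect moderator interactions directly.
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=10, random_state=0)
tree.fit(mods, yi, sample_weight=w_random)
print(export_text(tree, feature_names=["mod_a", "mod_b", "mod_c"]))
```

In the full approach, the terminal nodes of such a tree would then be treated as subgroups and tested with a subgroup (mixed-effects) meta-analysis.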
3.
The purpose of this treatment effectiveness study was to evaluate the flexible application of a manualized cognitive behavioral treatment (CBT) for PTSD and related symptoms in survivors of the 9/11 terrorist attack on the World Trade Center. Treatment delivery ranged from 12 to 25 sessions; therapist experience ranged from no prior training to extensive training in CBT; and training and supervision of clinicians in the treatment manual were considerably less than would be required in a randomized clinical trial (RCT). Paired t-tests demonstrated significant pre-post reductions in symptoms of PTSD and depression for the flexible application of the treatment. A benchmarking analysis revealed that the moderate-to-large effect sizes found for these variables were similar to those obtained in an RCT of the same treatment. Furthermore, effect sizes on measures of outcomes particularly relevant to this population of mass violence survivors, such as functional impairment, use of alcohol and drugs to cope, and use of social support to cope, were also medium to large.
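For readers unfamiliar with the pre-post analysis described above, the sketch below shows a paired t-test followed by a standardized effect size of the kind that could be benchmarked against an RCT. The scores are made-up placeholders, not data from the study, and the choice of d formula (averaged pre/post standard deviations) is one of several common conventions.

```python
# Minimal sketch of a pre-post paired t-test and a standardized effect size (placeholder data).
import numpy as np
from scipy import stats

pre = np.array([62, 58, 71, 65, 55, 69, 60, 74, 57, 66], dtype=float)   # e.g., PTSD severity at intake
post = np.array([41, 45, 52, 39, 38, 50, 44, 58, 36, 47], dtype=float)  # severity after treatment

# Paired t-test for the pre-post change.
t_stat, p_value = stats.ttest_rel(pre, post)

# Cohen's d using the average of the pre and post standard deviations (one common pre-post convention).
diff = pre - post
d_av = diff.mean() / np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {d_av:.2f}")
```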