Similar Literature
20 similar documents found
1.
The issue of publication bias in psychological science is one that has remained difficult to address despite decades of discussion and debate. The current article examines a sample of 91 recent meta-analyses published in American Psychological Association and Association for Psychological Science journals and the methods used in these analyses to identify and control for publication bias. Of the 91 studies analyzed, 64 (70%) made some effort to analyze publication bias, and 26 of those 64 (41%) reported finding evidence of bias. Approaches to controlling publication bias were heterogeneous among studies. Of these studies, 57 (63%) attempted to find unpublished studies to control for publication bias. Nonetheless, those studies that included unpublished studies were just as likely to find evidence for publication bias as those that did not. Furthermore, authors of meta-analyses themselves were overrepresented in unpublished studies acquired, as compared with published studies, suggesting that searches for unpublished studies may increase rather than decrease some sources of bias. A subset of 48 meta-analyses for which study sample sizes and effect sizes were available was further analyzed with a conservative and newly developed tandem procedure for assessing publication bias. Results indicated that publication bias was worrisome in about 25% of meta-analyses. Meta-analyses that included unpublished studies were more likely to show bias than those that did not, likely due to selection bias in unpublished literature searches. Sources of publication bias and implications for the use of meta-analysis are discussed.
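The tandem procedure mentioned above triangulates several bias indices rather than relying on any single test. As a rough illustration of one common ingredient (not the authors' exact implementation), the following Python sketch computes Egger's regression test for funnel-plot asymmetry from study effect sizes and standard errors; the function name and inputs are our own.

```python
import numpy as np
from scipy import stats

def eggers_test(effects, std_errors):
    """Egger's regression test: regress the standardized effect (effect/SE)
    on precision (1/SE); an intercept far from zero suggests small-study
    effects consistent with publication bias."""
    y = np.asarray(effects) / np.asarray(std_errors)   # standardized effects
    x = 1.0 / np.asarray(std_errors)                   # precision
    X = np.column_stack([np.ones_like(x), x])          # intercept + slope
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    k = len(y)
    sigma2 = resid @ resid / (k - 2)                   # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_stat = beta[0] / np.sqrt(cov[0, 0])              # test the intercept
    p_value = 2 * stats.t.sf(abs(t_stat), df=k - 2)
    return beta[0], p_value
```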

2.
We respond to Tibon Czopp and Zeligman's (2016) critique of our systematic reviews and meta-analyses of 65 Rorschach Comprehensive System (CS) variables published in Psychological Bulletin (2013). The authors endorsed our supportive findings but critiqued the same methodology when used for the 13 unsupported variables. Unfortunately, their commentary was based on significant misunderstandings of our meta-analytic method and results, such as the beliefs that we used introspectively assessed criteria in classifying levels of support and that we reported only a subset of our externally assessed criteria. We systematically address their arguments that our construct label and criterion variable choices were inaccurate and that, as a result, meta-analytic validity for these 13 CS variables was artificially low. For example, the authors created new construct labels for these variables, which they called “the customary CS interpretation,” but they neither described their methodology nor provided evidence that their labels would yield better validity than ours. They cite studies they believe we should have included; we explain how these studies did not fit our inclusion criteria and that including them would actually have reduced the relevant CS variables' meta-analytic validity. Ultimately, criticisms alone cannot change meta-analytic support from negative to positive; Tibon Czopp and Zeligman would need to conduct their own construct validity meta-analyses.

3.
The authors examine 3 methods of combining new studies into existing meta-analyses: (a) adding the new study or studies to the database and recalculating the meta-analysis (the medical model); (b) using the Bayesian procedure advocated by F. L. Schmidt and J. E. Hunter (1977) and F. L. Schmidt, J. E. Hunter, K. Pearlman, and G. S. Shane (1979) to update the meta-analysis; and (c) using the Bayesian methods advocated by these authors and M. T. Brannick (2001) and M. T. Brannick, S. M. Hall, and Y. Liu (2002) to estimate study-specific parameters. Method b was found to severely overweight new studies relative to the previous studies contained in the meta-analysis, and Method c was found to do the same while also requiring an assumption with a low prior probability of being correct, causing the method to violate Bayesian principles. The authors present an alternative Bayesian procedure that does not suffer from these drawbacks and yields meta-analytic results very similar to those obtained with the medical model. They recommend use of the medical model or this alternative Bayesian procedure.
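To make the comparison concrete, here is a minimal Python sketch (with made-up numbers) showing why the medical model is attractive: under a fixed-effect, inverse-variance model, recomputing the meta-analysis after adding a new study gives exactly the same answer as a conjugate normal Bayesian update that takes the old pooled estimate as the prior. This is our illustration of the principle, not the authors' procedure.

```python
import numpy as np

def fixed_effect_pool(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate and its variance."""
    w = 1.0 / np.asarray(variances)
    return np.sum(w * effects) / np.sum(w), 1.0 / np.sum(w)

# Existing meta-analysis (hypothetical numbers).
old_effects = np.array([0.30, 0.45, 0.25, 0.38])
old_vars    = np.array([0.02, 0.03, 0.015, 0.025])
new_effect, new_var = 0.55, 0.04

# (a) "Medical model": add the new study and recompute from scratch.
pooled_a, _ = fixed_effect_pool(np.append(old_effects, new_effect),
                                np.append(old_vars, new_var))

# (b) Conjugate normal update: old pooled estimate serves as the prior.
prior_mean, prior_var = fixed_effect_pool(old_effects, old_vars)
w_prior, w_new = 1.0 / prior_var, 1.0 / new_var
pooled_b = (w_prior * prior_mean + w_new * new_effect) / (w_prior + w_new)

print(pooled_a, pooled_b)  # identical under the fixed-effect model
```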

4.
Peer reporting interventions (i.e., Positive Peer Reporting and tootling) are commonly used peer-mediated interventions in schools. These interventions involve training students to make reports about peers' prosocial behaviors, whether in oral or written form. Although peer reporting interventions have been included in meta-analyses of group contingencies, this study is the first meta-analytic review of single-case research focusing exclusively on peer reporting interventions. The literature search and application of inclusion criteria yielded 21 studies examining the impact of a peer reporting intervention on student behavior compared to baseline conditions. All studies used single-case experimental designs including at least three demonstrations of an effect and at least three data points per phase. Several aspects of studies, participants, and interventions were coded. Log response ratios and Tau were calculated as effect size estimates. Effect size estimates were synthesized in a multi-level meta-analysis with random effects for (a) studies and (b) cases within studies. Overall results indicated peer reporting interventions had a non-zero and positive impact on student outcomes. This was also true when data were subset by outcome (i.e., disruptive behavior, academically engaged behavior, and social behavior). Results were suggestive of more between- than within-study variability. Moderator analyses were conducted to identify aspects of studies, participants, or peer reporting interventions associated with differential effectiveness. Moderator analyses suggested published studies were associated with higher effect sizes than unpublished studies (i.e., theses/dissertations). This meta-analysis suggests peer reporting interventions are effective in improving student behavior compared to baseline conditions. Implications and directions for future investigation are discussed.
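For readers unfamiliar with the two effect size metrics named above, the following Python sketch computes a log response ratio and a Parker-style Tau for a single baseline/treatment phase pair; the argument names and the improvement-direction flag are our own simplifications.

```python
import numpy as np
from itertools import product

def log_response_ratio(baseline, treatment):
    """LRR = ln(mean of treatment phase / mean of baseline phase).
    Positive values indicate an increase from baseline (e.g., engagement)."""
    return np.log(np.mean(treatment) / np.mean(baseline))

def tau_nonoverlap(baseline, treatment, increase_is_improvement=True):
    """Tau as a nonoverlap index: compare every baseline point with every
    treatment point; Tau = (improving pairs - deteriorating pairs) / pairs."""
    sign = 1 if increase_is_improvement else -1
    pos = neg = 0
    for a, b in product(baseline, treatment):
        if sign * (b - a) > 0:
            pos += 1
        elif sign * (b - a) < 0:
            neg += 1
    return (pos - neg) / (len(baseline) * len(treatment))
```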

5.
GENDER DIFFERENCES IN SELF-REPORTED POSTTRAUMATIC GROWTH: A META-ANALYSIS
A meta-analysis was conducted to examine the direction and magnitude of gender differences in self-reported posttraumatic growth. Results from 70 studies (N = 16,076) revealed a small to moderate gender difference (g = .27, 95% CI = .21–.32), with women reporting more posttraumatic growth than men. Moderator analyses were then conducted to identify possible sources of these differences. The following moderators were examined: mean age of sample, measure used, nature of the stressful event, language of the measure, and type of sample (i.e., community samples, college students, or mixed). The only significant moderator was age, with women reporting incrementally more posttraumatic growth as the mean age of the sample increased (B = .004, p < .01, SE = .001, Q = 9.13). To check for publication bias, effect sizes were compared across published and unpublished research. The size of the gender difference was not significantly different between published (g = .30, 95% CI = .23–.38) and unpublished (g = .22, 95% CI = .12–.31) studies. The present findings indicate that modest but reliable gender differences exist in posttraumatic growth even when unpublished data are included in the analyses. Possible explanations for these findings and suggestions for future research are discussed.
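As a reference for how a bias-corrected standardized mean difference like the g reported here is obtained, a minimal Python sketch using the standard textbook formulas (not code from the study):

```python
import numpy as np
from scipy import stats

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g (bias-corrected standardized mean difference) with 95% CI."""
    df = n1 + n2 - 2
    pooled_sd = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / pooled_sd
    J = 1 - 3 / (4 * df - 1)                 # small-sample correction factor
    g = J * d
    var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    half = stats.norm.ppf(0.975) * np.sqrt(var_g)
    return g, (g - half, g + half)
```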

6.
Publication bias is the disproportionate representation of studies with large effects and statistically significant findings in the published research literature. If publication bias occurs in single-case research design studies on applied behavior-analytic (ABA) interventions, it can result in inflated estimates of ABA intervention effects. We conducted an empirical evaluation of publication bias on an evidence-based ABA intervention for children diagnosed with autism spectrum disorder, response interruption and redirection (RIRD). We determined effect size estimates for published and unpublished studies using 3 metrics: percentage of nonoverlapping data (PND), Hedges' g, and log response ratios (LRR). Omnibus effect size estimates across all 3 metrics were positive, supporting that RIRD is an effective treatment for reducing problem behavior maintained by nonsocial consequences. We observed larger PND for published compared to unpublished studies, small and nonsignificant differences in LRR for published compared to unpublished studies, and significant differences in Hedges' g for published compared to unpublished studies, with published studies showing slightly larger effects. We found little, if any, difference in methodological quality between published and unpublished studies. While RIRD appears to be an effective intervention for challenging behavior maintained by nonsocial consequences, our results reflect some degree of publication bias in the RIRD research literature.
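PND, the simplest of the three metrics above, can be computed directly; here is a hedged Python sketch, with the direction flag reflecting that RIRD targets the reduction of problem behavior (argument names are ours):

```python
import numpy as np

def pnd(baseline, treatment, decrease_is_improvement=True):
    """Percentage of nonoverlapping data: share of treatment-phase points
    that fall below the lowest baseline point (for behavior reduction) or
    above the highest baseline point (for behavior increase)."""
    baseline = np.asarray(baseline)
    treatment = np.asarray(treatment)
    if decrease_is_improvement:          # e.g., problem behavior under RIRD
        return 100.0 * np.mean(treatment < baseline.min())
    return 100.0 * np.mean(treatment > baseline.max())
```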

7.
Meta-analytic reviews are an important tool for advancing science and guiding evidence-based practice. Publication bias is one of the greatest threats to meta-analytic reviews. This paper assesses the degree of publication bias in four previously published meta-analytic datasets from various fields of study in the organizational sciences. Of these datasets, one appears to be relatively unaffected by publication bias while the others seem to be noticeably influenced by this bias. Our “null” result (i.e., a prior meta-analytic estimate is unlikely to have been affected by publication bias) increases our confidence in the accuracy of our cumulative knowledge. Yet, our other findings suggest the presence of publication bias and point to the need for caution and further research.

8.
The application of meta-analysis holds much appeal for single-case consultation outcome research. We review a meta-analytic method for using within-study treatment effect sizes in reporting consultation outcomes. The strengths and limitations of traditional group design meta-analysis are examined. Various methods for analyzing single-case outcomes are discussed briefly, followed by an examination of the use of meta-analysis in single-case reviews across independent studies. Within-study meta-analytic results are presented that were derived from treatments implemented in consultations in natural settings. To conclude the article, an illustration is offered of a single-case data analysis display that incorporates meta-analytic results along with other indices of treatment outcome. Recommendations are provided for using meta-analytic methods to evaluate outcomes of single-case consultation treatment.

9.
Previous research has suggested that judgment calls (i.e., methodological choices made in the process of conducting a meta-analysis) have a strong influence on meta-analytic findings, calling their robustness into question. However, prior research applies case study comparison or reanalysis of a few meta-analyses with a focus on a few selected judgment calls. These studies neglect the fact that different judgment calls are related to each other and simultaneously influence the outcomes of a meta-analysis, and that meta-analytic findings can vary due to non–judgment call differences between meta-analyses (e.g., variations of effects over time). The current study analyzes the influence of 13 judgment calls in 176 meta-analyses in marketing research by applying a multivariate, multilevel meta-meta-analysis. The analysis considers simultaneous influences from different judgment calls on meta-analytic effect sizes and controls for alternative explanations based on non–judgment call differences between meta-analyses. The findings suggest that judgment calls have only a minor influence on meta-analytic findings, whereas non–judgment call differences between meta-analyses are more likely to explain differences in meta-analytic findings. The findings support the robustness of meta-analytic results and conclusions.

10.
A meta-analytic test of intergroup contact theory
The present article presents a meta-analytic test of intergroup contact theory. With 713 independent samples from 515 studies, the meta-analysis finds that intergroup contact typically reduces intergroup prejudice. Multiple tests indicate that this finding appears not to result from either participant selection or publication biases, and the more rigorous studies yield larger mean effects. These contact effects typically generalize to the entire outgroup, and they emerge across a broad range of outgroup targets and contact settings. Similar patterns also emerge for samples with racial or ethnic targets and samples with other targets. This result suggests that contact theory, devised originally for racial and ethnic encounters, can be extended to other groups. A global indicator of Allport's optimal contact conditions demonstrates that contact under these conditions typically leads to even greater reduction in prejudice. Closer examination demonstrates that these conditions are best conceptualized as an interrelated bundle rather than as independent factors. Further, the meta-analytic findings indicate that these conditions are not essential for prejudice reduction. Hence, future work should focus on negative factors that prevent intergroup contact from diminishing prejudice as well as the development of a more comprehensive theory of intergroup contact.

11.
Previous studies have concluded that cognitive ability tests are not predictively biased against Hispanic American job applicants because test scores generally overpredict, rather than underpredict, their job performance. However, we highlight two important shortcomings of these past studies and use meta-analytic and computation modeling techniques to address these two shortcomings. In Study 1, an updated meta-analysis of the Hispanic–White mean difference (d-value) on job performance was carried out. In Study 2, computation modeling was used to correct the Study 1 d-values for indirect range restriction and combine them with other meta-analytic parameters relevant to predictive bias to determine how often cognitive ability test scores underpredict Hispanic applicants’ job performance. Hispanic applicants’ job performance was underpredicted by a small to moderate amount in most conditions of the computation model. In contrast to previous studies, this suggests cognitive ability tests can be expected to exhibit predictive bias against Hispanic applicants much of the time. However, some conditions did not exhibit underprediction, highlighting that predictive bias depends on various selection system parameters, such as the criterion-related validity of cognitive ability tests and other predictors used in selection. Regardless, our results challenge “lack of predictive bias” as a rationale for supporting test use.
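The article corrects d-values for indirect range restriction, which requires additional artifact information beyond what is shown here; as a simpler illustration of the general idea, the Python sketch below applies the classic Thorndike Case II correction for direct range restriction, together with the usual d-to-r conversions. The function names and the choice of Case II are our simplifications, not the authors' procedure.

```python
import numpy as np

def correct_direct_range_restriction(r_restricted, u):
    """Thorndike Case II correction for DIRECT range restriction.
    u = SD(unrestricted) / SD(restricted) on the predictor.
    (The indirect-restriction correction used in the article is more involved.)"""
    r = r_restricted
    return r * u / np.sqrt(1 + r**2 * (u**2 - 1))

def d_to_r(d, n1, n2):
    """Convert a standardized mean difference to a point-biserial r."""
    a = (n1 + n2)**2 / (n1 * n2)     # adjustment for unequal group sizes
    return d / np.sqrt(d**2 + a)

def r_to_d(r):
    """Convert r back to d (equal-n approximation)."""
    return 2 * r / np.sqrt(1 - r**2)
```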

12.
Body Image, 2014, 11(3): 251–259
Weight bias exists across many important life domains, necessitating interventions designed to reduce weight-biased attitudes and beliefs. Though the effectiveness of weight bias interventions has been questioned, to our knowledge no meta-analysis of these interventions has been conducted. This meta-analysis evaluated the impact of weight bias interventions on weight-biased attitudes and beliefs and explored potential moderators. Interventions were eligible if they used an adult sample and a validated measure of weight-biased attitudes, which resulted in the inclusion of 30 studies represented in 29 articles. A random effects approach using inverse weights resulted in a mean effect size estimate of g = −0.33 (lower scores indicate less weight bias) for both attitudes and beliefs. Intervention type, publication type, and population type were not significant moderators but demonstrated noteworthy trends. Results reveal a small, positive effect of weight bias interventions on weight-biased attitudes and beliefs and provide useful information for future interventions.
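A "random effects approach using inverse weights" like the one described is commonly implemented with the DerSimonian-Laird estimator; the following Python sketch shows that computation, under our assumption that this or a similar estimator was used:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    y = np.asarray(effects)
    v = np.asarray(variances)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)          # fixed-effect mean
    Q = np.sum(w * (y - fixed)**2)             # heterogeneity statistic
    k = len(y)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)         # between-study variance
    w_star = 1.0 / (v + tau2)                  # random-effects inverse weights
    mean_re = np.sum(w_star * y) / np.sum(w_star)
    se_re = np.sqrt(1.0 / np.sum(w_star))
    return mean_re, se_re, tau2
```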

13.
Although meta-analyses are often used to inform practitioners and researchers, the resulting effect sizes can be artificially inflated due to publication bias. There are a number of methods to protect against, detect, and correct for publication bias. Currently, it is unknown to what extent scholars publishing meta-analyses within school psychology journals use these methods to address publication bias and whether more recently published meta-analyses more frequently utilize these methods. A historical review of every meta-analysis published to date within the most prominent school psychology journals (N = 10) revealed that 88 meta-analyses were published from 1980 to early 2019. Exactly half of them included grey literature, and 60% utilized methods to detect and correct for publication bias. The most common methods were visual analysis of a funnel plot, Orwin's failsafe N, Egger's regression, and the trim and fill procedure. None of these methods were used in more than 20% of the studies. About half of the studies incorporated one method, 20% incorporated two methods, 7% incorporated three methods, and none incorporated all four methods. These methods were most evident in studies published recently. Similar to other fields, the true estimates of effects from meta-analyses published in school psychology journals may not be available, and practitioners may be utilizing interventions that are, in fact, not as strong as believed. Practitioners, researchers employing meta-analysis techniques, education programs, and editors and peer reviewers in school psychology should continue to guard against publication bias using these methods.
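Of the four methods listed, Orwin's failsafe N is the simplest to state; a Python sketch with our parameter names:

```python
def orwin_failsafe_n(k, mean_effect, criterion_effect, missing_effect=0.0):
    """Orwin's failsafe N: number of unretrieved studies averaging
    `missing_effect` needed to pull the observed mean effect of `k` studies
    down to `criterion_effect`."""
    return k * (mean_effect - criterion_effect) / (criterion_effect - missing_effect)

# e.g., 30 studies with mean g = 0.45; pulling the mean down to g = 0.20
# would require about 38 unretrieved null-effect studies:
# orwin_failsafe_n(30, 0.45, 0.20)  -> 37.5
```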

14.
The term “multilevel meta-analysis” is encountered not only in applied research studies, but in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term “multilevel meta-analysis” is redundant since all meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike and differences are noted in the output provided and estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.
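The equivalence the tutorial describes can be summarized in two lines of multilevel notation; these are the standard random-effects meta-analysis equations (our rendering, consistent with the tutorial's claim):

```latex
\begin{aligned}
\text{Level 1 (within study):}\quad & y_i = \theta_i + e_i, & e_i &\sim N(0, v_i),\ v_i \text{ known} \\
\text{Level 2 (between studies):}\quad & \theta_i = \mu + u_i, & u_i &\sim N(0, \tau^2) \\
\text{Combined:}\quad & y_i = \mu + u_i + e_i &&
\end{aligned}
```

Fixing τ² = 0 yields the common-effect model, and adding study-level covariates at level 2 yields mixed-effects meta-regression, which is how the standard meta-analytic models arise as special cases of one multilevel model.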

15.
Vevea JL, Woods CM. Psychological Methods, 2005, 10(4): 428–443
Publication bias, sometimes known as the "file-drawer problem" or "funnel-plot asymmetry," is common in empirical research. The authors review the implications of publication bias for quantitative research synthesis (meta-analysis) and describe existing techniques for detecting and correcting it. A new approach is proposed that is suitable for application to meta-analytic data sets that are too small for the application of existing methods. The model estimates parameters relevant to fixed-effects, mixed-effects, or random-effects meta-analysis contingent on a hypothetical pattern of bias that is fixed independently of the data. The authors illustrate this approach for sensitivity analysis using 3 data sets adapted from a commonly cited reference work on research synthesis (H. M. Cooper & L. V. Hedges, 1994).
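The approach described fixes a weight function over p-value intervals a priori and re-estimates the mean effect under that hypothetical selection pattern. The Python sketch below implements that idea for a random-effects model by maximum likelihood with simple numerical integration; it is our reconstruction of the method's logic, not the authors' code, and all names are assumptions.

```python
import numpy as np
from scipy import stats, optimize

def vw_sensitivity(effects, variances, cutpoints, weights):
    """Re-estimate (mu, tau^2) under a FIXED selection function: a study whose
    one-tailed p-value falls in interval j is observed with relative
    probability weights[j]; `cutpoints` are the interval boundaries."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    cut = np.asarray(cutpoints, float)
    w = np.asarray(weights, float)

    def wfun(p):
        return w[np.searchsorted(cut, p)]       # weight for each p-value

    def nll(params):
        mu, log_tau2 = params
        tau2 = np.exp(log_tau2)
        total = 0.0
        for yi, vi in zip(y, v):
            s = np.sqrt(vi + tau2)
            p_obs = stats.norm.sf(yi / np.sqrt(vi))        # one-tailed p
            num = np.log(wfun(p_obs)) + stats.norm.logpdf(yi, mu, s)
            # Normalizing constant: E[w(p(Y))] under the model, by quadrature.
            yg = np.linspace(mu - 8 * s, mu + 8 * s, 1601)
            pg = stats.norm.sf(yg / np.sqrt(vi))
            dens = stats.norm.pdf(yg, mu, s)
            denom = np.sum(wfun(pg) * dens) * (yg[1] - yg[0])
            total += num - np.log(denom)
        return -total

    res = optimize.minimize(nll, x0=[y.mean(), np.log(0.01)],
                            method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])           # mu_hat, tau2_hat

# e.g., severe one-tailed selection: significant results always observed,
# nonsignificant ones with relative probability 0.2:
# mu, tau2 = vw_sensitivity(y, v, cutpoints=[0.05], weights=[1.0, 0.2])
```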

16.
Single case design (SCD) experiments in the behavioral sciences utilize just one participant from whom data is collected over time. This design permits causal inferences to be made regarding various intervention effects, often in clinical or educational settings, and is especially valuable when between-participant designs are not feasible or when interest lies in the effects of an individualized treatment. Regression techniques are the most common quantitative practice for analyzing time series data and provide parameter estimates for both treatment and trend effects. However, the presence of serially correlated residuals, known as autocorrelation, can severely bias inferences made regarding these parameter estimates. Despite the severity of the issue, few researchers test or correct for the autocorrelation in their analyses.

Shadish and Sullivan (in press) recently conducted a meta-analysis of over 100 studies in order to assess the prevalence of the autocorrelation in the SCD literature. Although they found that the meta-analytic weighted average of the autocorrelation was close to zero, the distribution of autocorrelations was found to be highly heterogeneous. Using the same set of SCDs, the current study investigates various factors that may be related to the variation in autocorrelation estimates (e.g., study and outcome characteristics). Multiple moderator variables were coded for each study and then used in a metaregression in order to estimate the impact these predictor variables have on the autocorrelation.

The current study investigates the autocorrelation using a multilevel meta-analytic framework. Although meta-analyses involve nested data structures (e.g., effect sizes nested within studies nested within journals), there are few instances of meta-analysts utilizing multilevel frameworks with more than two levels. This is likely attributable to the fact that very few software packages allow for meta-analyses to be conducted with more than two levels, and those that do provide sparse documentation on how to implement these models. The proposed presentation discusses methods for carrying out a multilevel meta-analysis. The presentation also discusses the findings from the metaregression on the autocorrelation and the implications these findings have for SCDs.
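For context, the lag-1 autocorrelation at issue is computed as below (a minimal Python sketch; in practice one would typically estimate it from the residuals of a model that first removes phase and trend effects):

```python
import numpy as np

def lag1_autocorrelation(series):
    """Sample lag-1 autocorrelation of a single-case time series:
    r1 = sum((y_t - ybar)(y_{t+1} - ybar)) / sum((y_t - ybar)^2).
    Note: this estimator is biased toward zero in the short series
    typical of single-case designs."""
    y = np.asarray(series, dtype=float)
    d = y - y.mean()
    return np.sum(d[:-1] * d[1:]) / np.sum(d**2)
```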

17.
Li Chaoping, Meng Xue, Xu Yan, Lan Yuanmei. Acta Psychologica Sinica, 2023, 55(2): 257–271
To clarify the unique influence of family-supportive supervisor behavior on employees and to compare its different mechanisms, this study conducted a meta-analysis of 164 articles comprising 204 independent samples, 340 effect sizes, and 91,145 employees. Results showed: (1) Compared with general supervisor support, family-supportive supervisor behavior had a stronger positive influence on employees' task performance, innovative behavior, and life satisfaction. (2) Work-to-family conflict (a resource perspective), leader-member exchange (an exchange perspective), and affective commitment (an affective perspective) each explained the mechanism through which family-supportive supervisor behavior affects employees, and the three complemented one another. Specifically, all three mediated the effect of family-supportive supervisor behavior on task performance; leader-member exchange and affective commitment mediated the relationship between family-supportive supervisor behavior and innovative behavior; and work-to-family conflict and leader-member exchange mediated the effect of family-supportive supervisor behavior on life satisfaction. These findings provide reliable conclusions about the effects of family-supportive supervisor behavior and deepen understanding of its mechanisms.

18.
A relatively large literature has demonstrated that sexual orientation can be judged accurately from a variety of minimal cues, including facial appearance. Untested in this work, however, is the influence that individual differences in prejudice against gays and lesbians may exert upon perceivers’ judgments. Here, we report the results of a meta-analysis of 23 unpublished studies testing the relationship between anti-gay bias and the categorization of sexual orientation from faces. Aggregating data from multiple measures of bias using a variety of methods in three different countries over a period of 8 years, we found a small but significant negative relationship between accuracy and prejudice that was homogeneous across the samples tested. Thus, individuals reporting higher levels of anti-gay bias appear to be less accurate judges of sexual orientation.

19.
We react to the Van Iddekinge, Roth, Raymark, and Odle-Dusseau (2012a) meta-analysis of the relationship between integrity test scores and work-related criteria, the earlier Ones, Viswesvaran, and Schmidt (1993) meta-analysis of those relationships, the Harris et al. (2012) and Ones, Viswesvaran, and Schmidt (2012) responses, and the Van Iddekinge, Roth, Raymark, and Odle-Dusseau (2012b) rebuttal. We highlight differences between the findings of the 2 meta-analyses by focusing on studies that used predictive designs, applicant samples, and non-self-report criteria. We conclude that study exclusion criteria, correction for artifacts, and second-order sampling error are not likely explanations for the differences in findings. The lack of detailed documentation of all effect size estimates used in either meta-analysis makes it impossible to ascertain the bases for the differences in findings. We call for increased detail in meta-analytic reporting and for better information sharing among the parties producing and meta-analytically integrating validity evidence.

20.
The inclusion of grey literature in meta-analyses and reviews is controversial. We examine both the advantages and challenges of including grey literature in meta-analyses. An exemplar meta-analysis of behavioral parenting interventions on parent behavior, child behavior, and parent adjustment outcomes is used to demonstrate these issues. The exemplar also explores how the inclusion of grey literature influences outcomes, including whether effect sizes are affected, and describes the challenges of searching for grey literature using traditional search engines such as Google and Yahoo. Homogeneity and publication bias are also examined. Based on these results, recommendations are presented for meta-analysts and researchers.
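Since the exemplar examines homogeneity, it may help to recall the standard Cochran's Q and I² computations, sketched here in Python under the usual inverse-variance setup (variable names are ours):

```python
import numpy as np
from scipy import stats

def heterogeneity(effects, variances):
    """Cochran's Q test of homogeneity and the I^2 statistic (in percent)."""
    y = np.asarray(effects)
    w = 1.0 / np.asarray(variances)
    pooled = np.sum(w * y) / np.sum(w)         # fixed-effect pooled mean
    Q = np.sum(w * (y - pooled)**2)
    df = len(y) - 1
    p = stats.chi2.sf(Q, df)
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return Q, p, I2
```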
