Similar Literature (20 results)
1.
This article reports the results of a study that located, digitized, and coded all 809 single-case designs appearing in 113 studies published in 2008 in 21 journals across a variety of fields in psychology and education. Coded variables included the specific kind of design, number of cases per study, number of outcomes, data points and phases per case, and autocorrelations for each case. Although studies of the effects of interventions are a minority in these journals, within that category, single-case designs are used more frequently than randomized or nonrandomized experiments. The modal study uses a multiple-baseline design with 20 data points for each of three or four cases, where the aim of the intervention is to increase the frequency of a desired behavior, but these characteristics vary widely across studies. The average autocorrelation is close to, but significantly different from, zero, and the autocorrelations are significantly heterogeneous across cases. The results have implications for the contributions of single-case designs to evidence-based practice and suggest a number of future research directions.
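As a minimal sketch of what coding an "autocorrelation for each case" might look like, the snippet below computes an ordinary lag-1 autocorrelation for one case's data series. The series and the simple estimator are illustrative assumptions, not the coding procedure used in the study.

```python
import numpy as np

def lag1_autocorrelation(y):
    """Ordinary lag-1 autocorrelation estimate for a single data series."""
    y = np.asarray(y, dtype=float)
    d = y - y.mean()
    # Ratio of the lag-1 autocovariance to the variance (common divisors cancel).
    return np.sum(d[1:] * d[:-1]) / np.sum(d ** 2)

# Hypothetical outcome series for one case (baseline followed by treatment observations).
case_series = [4, 5, 3, 6, 5, 7, 9, 8, 10, 9, 11, 12]
print(round(lag1_autocorrelation(case_series), 3))
```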

2.
Multilevel models (MLM) have been used as a method for analyzing multiple-baseline single-case data. However, some concerns can be raised because the models that have been used assume that the Level-1 error covariance matrix is the same for all participants. The purpose of this study was to extend the application of MLM of single-case data in order to accommodate across-participant variation in the Level-1 residual variance and autocorrelation. This more general model was then used in the analysis of single-case data sets to illustrate the method, to estimate the degree to which the autocorrelation and residual variances differed across participants, and to examine whether inferences about treatment effects were sensitive to whether or not the Level-1 error covariance matrix was allowed to vary across participants. The results from the analyses of five published studies showed that when the Level-1 error covariance matrix was allowed to vary across participants, some relatively large differences in autocorrelation estimates and error variance estimates emerged. The changes in modeling the variance structure did not change the conclusions about which fixed effects were statistically significant in most of the studies, but there was one exception. The fit indices did not consistently support selecting either the more complex covariance structure, which allowed the covariance parameters to vary across participants, or the simpler covariance structure. Given the uncertainty in model specification that may arise when modeling single-case data, researchers should consider conducting sensitivity analyses to examine the degree to which their conclusions are sensitive to modeling choices.
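This is not the multilevel model developed in the article, but a minimal sketch of the underlying idea that the Level-1 autocorrelation and residual variance can differ across participants: fit a separate AR(1) regression for each participant and compare the estimates. The simulated data frame, column names, and the use of statsmodels' GLSAR are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.regression.linear_model import GLSAR

# Hypothetical long-format multiple-baseline data: one row per observation.
rng = np.random.default_rng(1)
rows = []
for pid, (rho, sigma) in enumerate([(0.1, 1.0), (0.5, 2.0), (0.3, 0.5)]):
    e = 0.0
    for t in range(30):
        e = rho * e + rng.normal(0, sigma)     # participant-specific AR(1) Level-1 errors
        phase = int(t >= 15)                   # 0 = baseline, 1 = treatment
        rows.append({"participant": pid, "phase": phase, "y": 3 + 2 * phase + e})
df = pd.DataFrame(rows)

# Separate AR(1) regression per participant: participant-specific rho and residual variance.
for pid, d in df.groupby("participant"):
    X = np.column_stack([np.ones(len(d)), d["phase"].to_numpy()])
    model = GLSAR(d["y"].to_numpy(), X, rho=1)   # rho=1 -> estimate one AR coefficient
    res = model.iterative_fit(maxiter=10)
    print(f"participant {pid}: rho = {model.rho[0]:.2f}, "
          f"residual variance = {res.scale:.2f}, treatment effect = {res.params[1]:.2f}")
```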

3.
The last 10 years have seen great progress in the analysis and meta-analysis of single-case designs (SCDs). This special issue includes five articles that provide an overview of current work on that topic, including standardized mean difference statistics, multilevel models, Bayesian statistics, and generalized additive models. Each article analyzes a common example and presents syntax or macros for carrying out the analyses. These articles are followed by commentaries from single-case design researchers and journal editors. This introduction briefly describes each article and then discusses several issues that must be addressed before we can know which analyses will eventually be best to use in SCD research. These issues include modeling trend, modeling error covariances, computing standardized effect size estimates, assessing statistical power, incorporating more accurate models of outcome distributions, exploring whether Bayesian statistics can improve estimation given the small samples common in SCDs, and the need for annotated syntax and graphical user interfaces that make complex statistics accessible to SCD researchers. The article then discusses reasons why SCD researchers are likely to incorporate statistical analyses into their research more often in the future, including changing expectations and contingencies regarding SCD research from outside SCD communities, changes and diversity within SCD communities, corrections of erroneous beliefs about the relationship between SCD research and statistics, and demonstrations of how statistics can help SCD researchers better meet their goals.

4.
Using a low point estimate of autocorrelation to justify analyzing single-case data with the general linear model (GLM) is questioned. Monte Carlo methods are used to examine the degree to which bias in the estimate of autocorrelation depends on the complexity of the linear model used to describe the data. A method is then illustrated for determining the range of autocorrelation parameters that could reasonably have led to the observed autocorrelation. The argument for using a GLM analysis can be strengthened when the GLM analysis functions appropriately across the range of plausible autocorrelations. For situations in which the GLM analysis does not function appropriately across this range, a method is provided for adjusting the confidence intervals to ensure adequate coverage probabilities for specified levels of autocorrelation.
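A minimal Monte Carlo sketch in the spirit of this study: simulate AR(1) errors, fit ordinary least squares models of increasing complexity (level only versus level plus trend), and compare the average residual lag-1 autocorrelation with the generating parameter. The parameter values and estimator are illustrative assumptions, not the article's simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(n, rho, sigma=1.0):
    """Generate an AR(1) error series of length n."""
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(0, sigma)
    return e

def residual_lag1(y, X):
    """Lag-1 autocorrelation of OLS residuals from regressing y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    r = r - r.mean()
    return np.sum(r[1:] * r[:-1]) / np.sum(r ** 2)

n, rho, reps = 20, 0.3, 5000
t = np.arange(n)
X_level = np.ones((n, 1))                       # intercept-only model
X_trend = np.column_stack([np.ones(n), t])      # intercept plus linear trend

est_level, est_trend = [], []
for _ in range(reps):
    y = 5 + simulate_ar1(n, rho)                # no true trend in the generated data
    est_level.append(residual_lag1(y, X_level))
    est_trend.append(residual_lag1(y, X_trend))

print(f"true rho = {rho}")
print(f"mean estimate, level-only model : {np.mean(est_level):.3f}")
print(f"mean estimate, level+trend model: {np.mean(est_trend):.3f}")
```

Both estimates are biased downward, and the bias grows as the fitted model becomes more complex, which is the point the abstract makes about model complexity.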

5.
Single-case design (SCD) experiments in the behavioral sciences utilize just one participant from whom data are collected over time. This design permits causal inferences to be made regarding various intervention effects, often in clinical or educational settings, and is especially valuable when between-participant designs are not feasible or when interest lies in the effects of an individualized treatment. Regression techniques are the most common quantitative practice for analyzing time series data and provide parameter estimates for both treatment and trend effects. However, the presence of serially correlated residuals, known as autocorrelation, can severely bias inferences made regarding these parameter estimates. Despite the severity of the issue, few researchers test or correct for autocorrelation in their analyses.

Shadish and Sullivan (in press) recently conducted a meta-analysis of over 100 studies in order to assess the prevalence of autocorrelation in the SCD literature. Although they found that the meta-analytic weighted average of the autocorrelation was close to zero, the distribution of autocorrelations was highly heterogeneous. Using the same set of SCDs, the current study investigates various factors that may be related to the variation in autocorrelation estimates (e.g., study and outcome characteristics). Multiple moderator variables were coded for each study and then used in a meta-regression in order to estimate the impact these predictor variables have on the autocorrelation.

The current study investigates the autocorrelation within a multilevel meta-analytic framework. Although meta-analyses involve nested data structures (e.g., effect sizes nested within studies nested within journals), there are few instances of meta-analysts utilizing multilevel frameworks with more than two levels. This is likely attributable to the fact that very few software packages allow meta-analyses to be conducted with more than two levels, and those that do provide only sparse documentation on how to implement these models. The proposed presentation discusses methods for carrying out a multilevel meta-analysis. The presentation also discusses the findings from the meta-regression on the autocorrelation and the implications these findings have for SCDs.
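A minimal sketch of the simpler building block behind this approach: an inverse-variance weighted meta-regression of autocorrelation estimates on a study-level moderator, here fitted with statsmodels' WLS. The data, moderator, and fixed-effect weighting are illustrative assumptions and do not reproduce the multilevel model described above.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-case autocorrelation estimates, their sampling variances,
# and one study-level moderator (e.g., series length).
autocorr = np.array([0.05, -0.10, 0.20, 0.35, 0.00, 0.15])
sampling_var = np.array([0.04, 0.05, 0.03, 0.02, 0.06, 0.03])
series_length = np.array([10, 12, 20, 30, 8, 25])

# Inverse-variance weighted meta-regression (fixed-effect weights for simplicity).
X = sm.add_constant(series_length)
fit = sm.WLS(autocorr, X, weights=1.0 / sampling_var).fit()
print(fit.params)   # intercept and moderator slope
print(fit.bse)      # standard errors
```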

6.
Randomization tests are nonparametric statistical tests that obtain their validity by computationally mimicking the random assignment procedure that was used in the design phase of a study. Because randomization tests do not rely on a random sampling assumption, they can provide a better alternative than parametric statistical tests for analyzing data from single-case designs. In this article, an R package is described for use in designing single-case phase (AB, ABA, and ABAB) and alternation (completely randomized, alternating treatments, and randomized block) experiments, as well as for conducting statistical analyses on data gathered by means of such designs. The R code is presented in a step-by-step way, which at the same time clarifies the rationale behind single-case randomization tests.
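The article's package is written in R; as a language-agnostic illustration of the logic, here is a minimal Python sketch of a randomization test for an AB phase design in which the randomized element is the intervention start point and the test statistic is the difference in phase means. The data, the minimum phase length, and the assumption that the start point was randomly selected from the admissible set are illustrative.

```python
import numpy as np

def ab_randomization_test(y, actual_start, min_phase_len=3):
    """Randomization test for an AB design: p-value for the observed phase-mean
    difference relative to all admissible intervention start points."""
    y = np.asarray(y, dtype=float)
    n = len(y)

    def stat(start):
        return y[start:].mean() - y[:start].mean()

    observed = stat(actual_start)
    # All start points that leave at least min_phase_len observations per phase.
    admissible = range(min_phase_len, n - min_phase_len + 1)
    stats = np.array([stat(s) for s in admissible])
    return np.mean(np.abs(stats) >= abs(observed))   # two-sided p-value

# Hypothetical AB series; the intervention was actually introduced at observation 8.
series = [3, 4, 2, 5, 4, 3, 4, 5, 8, 9, 7, 10, 9, 11, 10, 12]
print(ab_randomization_test(series, actual_start=8))
```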

7.
Methods for meta-analyzing single-case designs (SCDs) are needed to inform evidence-based practice in clinical and school settings and to draw broader and more defensible generalizations in areas where SCDs comprise a large part of the research base. The most widely used outcomes in single-case research are measures of behavior collected using systematic direct observation, which typically take the form of rates or proportions. For studies that use such measures, one simple and intuitive way to quantify effect sizes is in terms of proportionate change from baseline, using an effect size known as the log response ratio. This paper describes methods for estimating log response ratios and combining the estimates using meta-analysis. The methods are based on a simple model for comparing two phases, where the level of the outcome is stable within each phase and the repeated outcome measurements are independent. Although autocorrelation will lead to biased estimates of the sampling variance of the effect size, meta-analysis of response ratios can be conducted with robust variance estimation procedures that remain valid even when sampling variance estimates are biased. The methods are demonstrated using data from a recent meta-analysis on group contingency interventions for student problem behavior.
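A minimal sketch of the log response ratio for one case, with a delta-method estimate of its sampling variance and a simple fixed-effect weighted average across cases. The data are hypothetical, and the variance formula assumes independent observations within phases, which, as the abstract notes, autocorrelation will violate; it also omits the robust variance estimation step.

```python
import numpy as np

def log_response_ratio(baseline, treatment):
    """Log response ratio and a delta-method sampling variance (independence assumed)."""
    a, b = np.asarray(baseline, float), np.asarray(treatment, float)
    lrr = np.log(b.mean() / a.mean())
    var = (a.var(ddof=1) / (len(a) * a.mean() ** 2)
           + b.var(ddof=1) / (len(b) * b.mean() ** 2))
    return lrr, var

# Hypothetical rate data (e.g., problem behaviors per session) for three cases.
cases = [
    ([8, 7, 9, 8, 10], [4, 3, 5, 4]),
    ([12, 10, 11, 13], [6, 5, 7, 6, 5]),
    ([6, 7, 5, 6, 7, 6], [3, 2, 3, 4]),
]
effects, variances = zip(*(log_response_ratio(a, b) for a, b in cases))
weights = 1.0 / np.array(variances)
pooled = np.sum(weights * np.array(effects)) / np.sum(weights)
print(f"pooled log response ratio = {pooled:.3f}  (response ratio = {np.exp(pooled):.2f})")
```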

8.
Because effect size estimates in psychology are often inaccurate, designing studies with high statistical power poses a practical challenge. This challenge can be addressed by performing sequential analyses while data collection is still in progress. At an interim analysis, data collection can be stopped whenever the results are convincing enough to conclude that an effect is present, more data can be collected, or the study can be terminated when it is extremely unlikely that the predicted effect would be observed if data collection were continued. Such interim analyses can be performed while controlling the Type 1 error rate. Sequential analyses can greatly improve the efficiency with which data are collected. Additional flexibility is provided by adaptive designs, in which sample sizes are increased on the basis of the observed effect size. The need for pre-registration, ways to prevent experimenter bias, and a comparison between Bayesian approaches and null-hypothesis significance testing (NHST) are discussed. Sequential analyses, which are widely used in large-scale medical trials, provide an efficient way to perform high-powered informative experiments. I hope this introduction will provide a practical primer that allows researchers to incorporate sequential analyses into their research.
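A minimal simulation sketch of the Type 1 error point: with one interim and one final look, testing each look at alpha = .05 inflates the false-positive rate, whereas a Pocock-style corrected per-look alpha (approximately .0294 for two looks) keeps the overall rate near .05. The sample sizes and the use of a simple t-test are illustrative assumptions, not the article's worked examples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def sequential_false_positive_rate(alpha_per_look, n_interim=20, n_final=40, reps=10000):
    """Proportion of null (no-effect) studies rejected at either look."""
    rejections = 0
    for _ in range(reps):
        x = rng.normal(0, 1, n_final)            # data generated under the null hypothesis
        y = rng.normal(0, 1, n_final)
        for n in (n_interim, n_final):           # interim look, then final look
            p = stats.ttest_ind(x[:n], y[:n]).pvalue
            if p < alpha_per_look:
                rejections += 1
                break                            # stop at the first rejection
    return rejections / reps

print("naive .05 at each look      :", sequential_false_positive_rate(0.05))
print("Pocock-style .0294 per look :", sequential_false_positive_rate(0.0294))
```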

9.
Single-case designs are a class of repeated measures experiments used to evaluate the effects of interventions for small or specialized populations, such as individuals with low-incidence disabilities. There has been growing interest in systematic reviews and syntheses of evidence from single-case designs, but there remains a need to further develop appropriate statistical models and effect sizes for data from the designs. We propose a novel model for single-case data that exhibit nonlinear time trends created by an intervention that produces gradual effects, which build up and dissipate over time. The model expresses a structural relationship between a pattern of treatment assignment and an outcome variable, making it appropriate for both treatment reversal and multiple baseline designs. It is formulated as a generalized linear model so that it can be applied to outcomes measured as frequency counts or proportions, both of which are commonly used in single-case research, while providing readily interpretable effect size estimates such as log response ratios or log odds ratios. We demonstrate the gradual effects model by applying it to data from a single-case study and examine the performance of proposed estimation methods in a Monte Carlo simulation of frequency count data.
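This is not the gradual effects model itself, but a minimal sketch of the generalized-linear-model framing it builds on: a Poisson regression of a frequency-count outcome on a treatment-phase indicator with a log link, whose phase coefficient is interpretable as a log response ratio. The data and the statsmodels call are illustrative assumptions; the article's model additionally parameterizes how the effect builds up and dissipates over time.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical frequency counts for one case: baseline phase, then treatment phase.
counts = np.array([7, 9, 8, 10, 9, 8, 5, 4, 3, 4, 2, 3])
phase = np.array([0] * 6 + [1] * 6)

# Poisson GLM with a log link: the phase coefficient is a log response ratio.
X = sm.add_constant(phase)
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
log_rr = fit.params[1]
print(f"log response ratio = {log_rr:.3f}, response ratio = {np.exp(log_rr):.2f}")
```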

10.
Numerous ways to meta-analyze single-case data have been proposed in the literature; however, consensus has not been reached on the most appropriate method. One method that has been proposed involves multilevel modeling. For this study, we used Monte Carlo methods to examine the appropriateness of Van den Noortgate and Onghena's (2008) raw-data multilevel modeling approach for the meta-analysis of single-case data. Specifically, we examined the fixed effects (e.g., the overall average treatment effect) and the variance components (e.g., the between-person within-study variance in the treatment effect) in a three-level multilevel model (repeated observations nested within individuals, nested within studies). More specifically, bias of the point estimates, confidence interval coverage rates, and interval widths were examined as a function of the number of primary studies per meta-analysis, the modal number of participants per primary study, the modal series length per primary study, the level of autocorrelation, and the variances of the error terms. The degree to which the findings of this study are supportive of using Van den Noortgate and Onghena's (2008) raw-data multilevel modeling approach to meta-analyzing single-case data depends on the particular parameter of interest. Estimates of the average treatment effect tended to be unbiased and produced confidence intervals that tended to overcover, but did come close to the nominal level as Level-3 sample size increased. Conversely, estimates of the variance in the treatment effect tended to be biased, and the confidence intervals for those estimates were inaccurate.

11.
In this commentary, we add to the spirit of the articles appearing in the special series devoted to meta- and statistical analysis of single-case intervention-design data. Following a brief discussion of historical factors leading to our initial involvement in statistical analysis of such data, we discuss: (a) the value added by including statistical-analysis recommendations in the What Works Clearinghouse Standards for single-case intervention designs; (b) the importance of visual analysis in single-case intervention research, along with the distinctive role that could be played by single-case effect-size measures; and (c) the elevated internal validity and statistical-conclusion validity afforded by the incorporation of various forms of randomization into basic single-case design structures. For the future, we envision more widespread application of quantitative analyses, as critical adjuncts to visual analysis, in both primary single-case intervention research studies and literature reviews in the behavioral, educational, and health sciences.

12.
A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of the treatment effect from baseline to treatment phases in standard deviation units. In this paper, the method is applied to two published examples using common single-case designs (i.e., withdrawal and multiple-baseline). The results from these studies are described, and the method is evaluated against ten desirable criteria for single-case effect sizes. Based on the results of this application, we conclude with observations about the use of GLS as a support to visual analysis, provide recommendations for future research, and describe implications for practice.
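A minimal sketch of the general idea, not the authors' exact procedure: fit a regression with AR(1) errors using statsmodels' GLSAR and divide the estimated phase effect by the residual standard deviation to express the baseline-to-treatment change in standard deviation units. The withdrawal-design data are hypothetical.

```python
import numpy as np
from statsmodels.regression.linear_model import GLSAR

# Hypothetical withdrawal (ABAB) data and its phase indicator (0 = baseline, 1 = treatment).
y = np.array([3, 4, 3, 5, 4, 8, 9, 8, 10, 9, 4, 5, 4, 3, 9, 10, 9, 11], dtype=float)
phase = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1])

X = np.column_stack([np.ones(len(y)), phase])
model = GLSAR(y, X, rho=1)               # regression with AR(1) errors
res = model.iterative_fit(maxiter=10)

effect = res.params[1]                   # baseline-to-treatment change in level
sd_resid = np.sqrt(res.scale)            # residual standard deviation
print(f"AR(1) estimate = {model.rho[0]:.2f}, standardized effect = {effect / sd_resid:.2f}")
```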

13.
Peer reporting interventions (i.e., Positive Peer Reporting and tootling) are commonly used peer-mediated interventions in schools. These interventions involve training students to make reports about peers' prosocial behaviors, whether in oral or written form. Although peer reporting interventions have been included in meta-analyses of group contingencies, this study is the first meta-analytic review of single-case research focusing exclusively on peer reporting interventions. The literature search and application of inclusion criteria yielded 21 studies examining the impact of a peer reporting intervention on student behavior compared to baseline conditions. All studies used single-case experimental designs including at least three demonstrations of an effect and at least three data points per phase. Several aspects of studies, participants, and interventions were coded. Log response ratios and Tau were calculated as effect size estimates. Effect size estimates were synthesized in a multi-level meta-analysis with random effects for (a) studies and (b) cases within studies. Overall results indicated peer reporting interventions had a non-zero and positive impact on student outcomes. This was also true when data were subset by outcome (i.e., disruptive behavior, academically engaged behavior, and social behavior). Results were suggestive of more between- than within-study variability. Moderator analyses were conducted to identify aspects of studies, participants, or peer reporting interventions associated with differential effectiveness. Moderator analyses suggested published studies were associated with higher effect sizes than unpublished studies (i.e., theses/dissertations). This meta-analysis suggests peer reporting interventions are effective in improving student behavior compared to baseline conditions. Implications and directions for future investigation are discussed.

14.
Manolov, R., Arnau, J., Solanas, A., & Bono, R. (2010). Psicothema, 22(4), 1026-1032.
The present study evaluates the performance of four methods for estimating regression coefficients used to make statistical decisions about intervention effectiveness in single-case designs. Ordinary least squares estimation is compared with two correction techniques that deal with general trend and with a procedure that eliminates autocorrelation whenever it is present. Type I error rates and statistical power are studied for experimental conditions defined by the presence or absence of treatment effect (change in level or in slope), general trend, and serial dependence. The results show that empirical Type I error rates do not approach the nominal ones in the presence of autocorrelation or general trend when ordinary and generalized least squares are applied. The techniques controlling trend show lower false alarm rates, but prove to be insufficiently sensitive to existing treatment effects. Consequently, the use of the statistical significance of the regression coefficients for detecting treatment effects is not recommended for short data series.

15.
This article describes a linear modeling approach for the analysis of single-case designs (SCDs). Effect size measures in SCDs have been defined and studied for the situation where there is a level change without a time trend. However, when there are level and trend changes, effect size measures are either defined in terms of changes in R² or defined separately for changes in slopes and intercept coefficients. We propose an alternate effect size measure that takes into account changes in slopes and intercepts in the presence of serial dependence and provides an integrated procedure for the analysis of SCDs through estimation and inference based directly on the effect size measure. A Bayesian procedure is described to analyze the data and draw inferences in SCDs. A multilevel model that is appropriate when several subjects are available is integrated into the Bayesian procedure to provide a standardized effect size measure comparable to effect size measures in a between-subjects design. The applicability of the Bayesian approach for the analysis of SCDs is demonstrated through an example.

16.
This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between-case variance to total variance (between-case plus within-case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs.
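A deliberately simplified sketch of the idea behind a between-case standardized mean difference: the phase-mean change averaged over cases, divided by the square root of the sum of between-case and within-case variance. It omits the autocorrelation adjustment and small-sample corrections that the article's d incorporates, and all data are hypothetical.

```python
import numpy as np

# Hypothetical baseline (A) and treatment (B) observations for three cases.
cases = [
    {"A": [3, 4, 3, 5, 4], "B": [7, 8, 7, 9]},
    {"A": [6, 5, 6, 7],    "B": [9, 10, 9, 11, 10]},
    {"A": [2, 3, 2, 3, 4], "B": [5, 6, 5, 6]},
]

diffs, case_means, within_vars = [], [], []
for c in cases:
    a, b = np.asarray(c["A"], float), np.asarray(c["B"], float)
    diffs.append(b.mean() - a.mean())
    case_means.append(np.concatenate([a, b]).mean())
    # Pooled within-phase variance for this case.
    within_vars.append((a.var(ddof=1) * (len(a) - 1) + b.var(ddof=1) * (len(b) - 1))
                       / (len(a) + len(b) - 2))

between_var = np.var(case_means, ddof=1)   # crude between-case variance from case means
within_var = np.mean(within_vars)
d = np.mean(diffs) / np.sqrt(between_var + within_var)
print(f"simplified between-case d = {d:.2f}")
```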

17.
Visual analysis is the dominant method of analysis for single-case time series. The literature assumes that visual analysts will be conservative judges. We show that previous research into visual analysis has not adequately examined false alarm and miss rates or the effect of serial dependence. In order to measure false alarm and miss rates while varying serial dependence, amount of random variability, and effect size, 37 students undertaking a postgraduate course in single-case design and analysis were required to assess the presence of an intervention effect in each of 27 AB charts constructed using a first-order autoregressive model. Three levels of effect size and three levels of variability, representative of values found in published charts, were combined with autocorrelation coefficients of 0, 0.3 and 0.6 in a factorial design. False alarm rates were surprisingly high (16% to 84%). Positive autocorrelation and increased random variation both significantly increased the false alarm rates and interacted in a nonlinear fashion. Miss rates were relatively low (0% to 22%) and were not significantly affected by the design parameters. Thus, visual analysts were not conservative, and serial dependence did influence judgment.

18.
Meta-analytic methods provide a way to synthesize data across treatment evaluation studies. However, these well-accepted methods are infrequently applied to behavior-analytic studies. Multilevel models may be a promising method for meta-analyzing single-case data. This technical article provides a primer on conducting a multilevel-model analysis of single-case designs with AB phases, using data from the differential-reinforcement-of-low-rate behavior literature. We provide details, recommendations, and considerations for searching for appropriate studies, organizing the data, and conducting the analyses. All data sets are available to allow the reader to follow along with this primer. The purpose of this technical article is to equip behavior analysts with the minimum needed to complete a meta-analysis that summarizes the current state of affairs in the science and practice of behavior analysis. Moreover, we aim to demonstrate the value of analyses of this sort for behavior analysis.
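The article works in a dedicated meta-analytic framework; as a minimal sketch of the underlying two-level idea (observations nested within cases, with a case-varying treatment effect), here is a statsmodels mixed-model fit. The long-format data frame, column names, and random-effects structure are illustrative assumptions, not the article's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format AB-phase data: repeated observations nested within cases.
rng = np.random.default_rng(7)
rows = []
for case in range(6):
    effect = 2.0 + rng.normal(0, 0.5)            # case-specific treatment effect
    for t in range(16):
        phase = int(t >= 8)                      # 0 = baseline, 1 = treatment
        rows.append({"case": case, "phase": phase,
                     "y": 5 + effect * phase + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# Two-level model: random intercept and random treatment effect across cases.
model = smf.mixedlm("y ~ phase", df, groups=df["case"], re_formula="~phase")
fit = model.fit()
print(fit.summary())
```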

19.
The work of Huitema (1985) on autocorrelation in behavioral data suggests that the use of conventional statistical methods is justified. The present study revisits the problem of autocorrelation by analyzing 100 baselines from small-sample designs published in the Journal of Applied Behavior Analysis during 1992. The results show a negative bias in the autocorrelations, especially with very small samples. The autocorrelation values are normally distributed, and the method of Davies, Trigg, and Newbold (1977) is the most accurate for calculating the standard deviation.

20.
Rabin, C. (1981). Family Process, 20(3), 351-366.
This paper discusses the need for the development of new research designs for family therapy evaluation and one way of meeting that need. Single-case research designs have gained acceptance as bona fide experimental designs for evaluating the effectiveness of intervention techniques in drug and psychotherapy outcome research. Although the need for more outcome studies on the effectiveness of family therapy has been frequently noted, there has been virtually no use of the single-case design in family therapy outside of the behavior modification literature on families. This paper presents potential benefits of applying single-case designs to the practice of family therapy. Preliminary guidelines are suggested for the application of these designs.
