Similar articles
1.
If single-case experimental designs are to be used to establish guidelines for evidence-based interventions in clinical and educational settings, numerical values that reflect treatment effect sizes are required. The present study compares four recently developed procedures for quantifying the magnitude of intervention effect using data with known characteristics. Monte Carlo methods were used to generate AB design data with potential confounding variables (serial dependence, linear and curvilinear trend, and heteroscedasticity between phases) and two types of treatment effect (level and slope change). The results suggest that data features are important for choosing the appropriate procedure and, thus, inspecting the graphed data visually is a necessary initial stage. In the presence of serial dependence or a change in data variability, the nonoverlap of all pairs (NAP) and the slope and level change (SLC) were the only techniques of the four examined that performed adequately. Introducing a data correction step in NAP renders it unaffected by linear trend, as is also the case for the percentage of nonoverlapping corrected data and SLC. The performance of these techniques indicates that professionals' judgments concerning treatment effectiveness can be readily complemented by both visual and statistical analyses. A flowchart to guide selection of techniques according to the data characteristics identified by visual inspection is provided.
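The basic (uncorrected) NAP index compared in this study can be sketched as follows — a minimal illustration, not the authors' trend-corrected variant, and it assumes that higher scores indicate improvement:

```python
def nap(baseline, treatment):
    """Nonoverlap of All Pairs: the proportion of (baseline, treatment)
    pairs in which the treatment value improves on the baseline value,
    with ties counting half.  Assumes higher scores mean improvement."""
    pairs = [(a, b) for a in baseline for b in treatment]
    wins = sum(1.0 for a, b in pairs if b > a)
    ties = sum(0.5 for a, b in pairs if b == a)
    return (wins + ties) / len(pairs)

print(nap([2, 3, 4], [5, 6, 7]))  # complete nonoverlap -> 1.0
```

A NAP of 1.0 means every treatment-phase point exceeds every baseline point; 0.5 indicates chance-level overlap.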

2.
3.
The authors implemented a small-series (N = 3) single-case research design to assess the effectiveness of a 9-session creative arts therapy treatment program for adult survivors of domestic violence. Analysis of participants' scores on the Outcome Questionnaire (OQ-45.2) and Brief Resilience Scale using the percentage of nonoverlapping data procedure yielded treatment effects indicating that a creative arts therapy treatment program may be effective for reducing mental health symptoms and improving resiliency. It is recommended that this body of research be extended to other educational, work, and health settings.
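The percentage of nonoverlapping data (PND) procedure used here can be sketched in a few lines — a generic version, assuming higher scores indicate improvement (for outcomes where lower is better, the comparison direction flips):

```python
def pnd(baseline, treatment):
    """Percentage of Nonoverlapping Data: the share of treatment-phase
    points that exceed the highest baseline point (assuming higher
    scores indicate improvement)."""
    ceiling = max(baseline)
    above = sum(1 for b in treatment if b > ceiling)
    return 100.0 * above / len(treatment)
```

PND of 100 means every treatment point cleared the most extreme baseline point; values below about 50 are conventionally read as no reliable effect.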

4.
Manolov R, Arnau J, Solanas A, Bono R. Psicothema, 2010, 22(4), 1026-1032
The present study evaluates the performance of four methods for estimating regression coefficients used to make statistical decisions about intervention effectiveness in single-case designs. Ordinary least squares estimation is compared to two correction techniques dealing with general trend and a procedure that eliminates autocorrelation whenever it is present. Type I error rates and statistical power are studied for experimental conditions defined by the presence or absence of treatment effect (change in level or in slope), general trend, and serial dependence. The results show that empirical Type I error rates do not approach the nominal ones in the presence of autocorrelation or general trend when ordinary and generalized least squares are applied. The techniques controlling trend show lower false alarm rates, but prove to be insufficiently sensitive to existing treatment effects. Consequently, the use of the statistical significance of the regression coefficients for detecting treatment effects is not recommended for short data series.
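A common four-parameter OLS regression for an AB design, of the kind whose coefficients these methods test, can be sketched as follows (a generic parameterization, not necessarily the exact model specification compared in the study):

```python
import numpy as np

def fit_level_slope_change(y, n_a):
    """OLS fit for an AB design with n_a baseline points: estimates
    baseline level, baseline trend, change in level at the phase
    break, and change in slope after it."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    phase = (t >= n_a).astype(float)            # 0 in baseline, 1 in treatment
    X = np.column_stack([np.ones_like(t), t, phase, phase * (t - n_a)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(["level", "trend", "level_change", "slope_change"], beta))
```

With autocorrelated errors the point estimates remain unbiased, but the usual t-tests on these coefficients are what the study shows to be unreliable.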

5.
The current review and analysis investigated the presence of serial dependency (or autocorrelation) in single-subject applied behavior-analytic research. Although this question has been well researched, few studies have controlled for the number of data points that appeared in the time series and, thus, for the negative bias of the r coefficient and the power to detect true serial dependency effects. Therefore, all baseline graphs that appeared in the Journal of Applied Behavior Analysis (JABA) between 1968 and 1993 that provided more than 30 data points were examined for the presence of serial dependency (N = 103). Results indicated that 12% of the baseline graphs provided a significant lag-1 autocorrelation, and over 83% of them had coefficients with absolute value less than or equal to .25. The distribution of the lag-1 autocorrelation coefficients had a mean of .10. Subsequent distributions of partial autocorrelations at lags two through seven had smaller means, indicating that as the distance between observations (i.e., the lag) increases, serial dependency decreases. Although serial dependency did not appear to be a common property of the single-subject behavioral experiments, it is recommended that, whenever statistical analyses are contemplated, its presence should always be examined. Alternatives for coping with the presence of significant levels of serial dependency are discussed in terms of: (a) using alternative statistical procedures (e.g., ARIMA models, randomization tests, Shewhart quality-control charts); (b) correcting the statistics of traditional parametric procedures (e.g., t, F); or (c) using the autocorrelation coefficient as an indicator and estimate of reliable intervention effects.
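The lag-1 autocorrelation coefficient examined throughout this literature is the conventional sample statistic, sketched here for illustration:

```python
def lag1_autocorrelation(x):
    """Conventional lag-1 autocorrelation r1: cross-products of
    successive deviations from the series mean, divided by the total
    sum of squared deviations."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den
```

This estimator carries the negative small-sample bias (roughly -1/n) that the review notes most earlier studies failed to control for.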

6.
This study reports an independent replication of Parson and Reid's (1993) Group active treatment procedure for adults with developmental disabilities. This study differed from the earlier study in that it took place over a 12-month period, and involved all scheduled leisure activities from Monday through Sunday at all scheduled times of the day. In this study, maladaptive behaviors decreased, the percentage of clients with materials increased, the percentage of clients receiving interactions increased, and the percentage of clients receiving social reinforcement increased as a function of the introduction of the Group active treatment. The data in the intervention phase were highly variable, indicating that staff performance was a function of variables other than the intervention. Nevertheless, this procedure led to a sustained improvement in the use of leisure time throughout the intervention period.

7.
Researchers apply individual person fit analyses as a procedure for checking model-data fit for individual test-takers. When a test-taker misfits, it means that the inferences from their test score regarding what they know and can do may not be accurate. One problem in applying individual person fit procedures in practice is the question of how much misfit it takes to make the test score an untrustworthy estimate of achievement. In this paper, we argue that if a person's responses generally follow a monotonic pattern, the resulting test score is "good enough" to be interpreted and used. We present an approach that applies statistical procedures from the Rasch and Mokken measurement perspectives to examine individual person fit based on this good enough criterion in real data from a performance assessment. We discuss how these perspectives may facilitate thinking about applying individual person fit procedures in practice.
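One rough operationalization of the "monotonic pattern" idea is counting Guttman errors — a sketch for illustration, not the specific Rasch or Mokken statistics the paper applies:

```python
def guttman_errors(responses):
    """Count Guttman errors in a 0/1 response vector whose items are
    ordered from easiest to hardest: each pair (easier item failed,
    harder item passed) counts as one error.  Zero errors means the
    response pattern is perfectly monotonic."""
    errors = 0
    for i in range(len(responses)):
        for j in range(i + 1, len(responses)):
            if responses[i] == 0 and responses[j] == 1:
                errors += 1
    return errors
```

A test-taker with few Guttman errors would, on this criterion, have a score "good enough" to interpret even if a formal fit statistic flags mild misfit.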

8.
The case-based time-series design is a viable methodology for treatment outcome research. However, the literature has not fully addressed the problem of missing observations with such autocorrelated data streams. Mainly, to what extent do missing observations compromise inference when observations are not independent? Do the available missing data replacement procedures preserve inferential integrity? Does the extent of autocorrelation matter? We use Monte Carlo simulation modeling of a single-subject intervention study to address these questions. We find power sensitivity to be within acceptable limits across four proportions of missing observations (10%, 20%, 30%, and 40%) when missing data are replaced using the Expectation-Maximization Algorithm, more commonly known as the EM Procedure (Dempster, Laird, & Rubin, 1977). This applies to data streams with lag-1 autocorrelation estimates under 0.80. As autocorrelation estimates approach 0.80, the replacement procedure yields an unacceptable power profile. The implications of these findings and directions for future research are discussed.
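The flavor of EM-based replacement can be conveyed with a schematic loop for a zero-mean AR(1) series — a deliberately simplified sketch (E-step: impute each missing interior value by its conditional mean given its neighbors; M-step: re-estimate the AR coefficient), not the full EM Procedure of Dempster, Laird, & Rubin (1977) as implemented in the study:

```python
def em_impute_ar1(y, n_iter=50):
    """EM-style imputation for a zero-mean AR(1) series with missing
    interior values marked as None.  E-step: replace each missing y[t]
    by phi*(y[t-1]+y[t+1])/(1+phi**2), its conditional mean under
    AR(1); M-step: re-estimate phi by conditional least squares."""
    miss = [t for t, v in enumerate(y) if v is None]
    z = [0.0 if v is None else float(v) for v in y]   # start at the mean
    phi = 0.0
    for _ in range(n_iter):
        for t in miss:                                 # E-step
            z[t] = phi * (z[t - 1] + z[t + 1]) / (1.0 + phi ** 2)
        num = sum(z[t] * z[t + 1] for t in range(len(z) - 1))
        den = sum(v * v for v in z[:-1])
        phi = num / den                                # M-step
    return z, phi
```

On a series that is exactly AR(1) apart from the gap, the loop recovers both the missing value and the autocorrelation parameter.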

9.
One of the most popular paradigms to use for studying human reasoning involves the Wason card selection task. In this task, the participant is presented with four cards and a conditional rule (e.g., "If there is an A on one side of the card, there is always a 2 on the other side"). Participants are asked which cards should be turned to verify whether or not the rule holds. In this simple task, participants consistently provide answers that are incorrect according to formal logic. To account for these errors, several models have been proposed, one of the most prominent being the information gain model (Oaksford & Chater, Psychological Review, 101, 608-631, 1994). This model is based on the assumption that people independently select cards based on the expected information gain of turning a particular card. In this article, we present two estimation methods to fit the information gain model: a maximum likelihood procedure (programmed in R) and a Bayesian procedure (programmed in WinBUGS). We compare the two procedures and illustrate the flexibility of the Bayesian hierarchical procedure by applying it to data from a meta-analysis of the Wason task (Oaksford & Chater, Psychological Review, 101, 608-631, 1994). We also show that the goodness of fit of the information gain model can be assessed by inspecting the posterior predictives of the model. These Bayesian procedures make it easy to apply the information gain model to empirical data. Supplemental materials may be downloaded along with this article.
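The maximum-likelihood fitting idea can be illustrated with a toy stand-in: estimating a single card's selection probability by maximizing a binomial log-likelihood over a grid. This is only a hypothetical simplification of the estimation machinery, not the information gain model itself (which parameterizes selection probabilities via expected information gain):

```python
import math

def card_selection_mle(selections, trials, grid=2001):
    """Toy ML fit: a card is selected independently with probability p;
    maximize the binomial log-likelihood of `selections` out of
    `trials` over a grid of p values.  For this one-parameter toy
    model the MLE is simply the observed selection proportion."""
    best_p, best_ll = None, -math.inf
    for k in range(1, grid - 1):          # skip endpoints to avoid log(0)
        p = k / (grid - 1)
        ll = selections * math.log(p) + (trials - selections) * math.log(1 - p)
        if ll > best_ll:
            best_p, best_ll = p, ll
    return best_p
```

Real fits replace the grid search with a numerical optimizer and a likelihood that links all four cards' probabilities to the model's rarity parameters.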

10.
A multiple baseline design was used to evaluate the effects of Van Houten and Thompson's (1976) explicit timing procedure on problem completion rates and accuracy levels in African-American third-grade students. During the explicit timing phase, students were told that they were being timed and were instructed to circle the last problem completed at each 1-min interval. Results showed that the explicit timing procedure increased problem completion rates. A decreasing trend in percentage of problems correct also occurred. Exploratory data analysis suggested that decreases in accuracy were not caused by the explicit timing procedure and did not occur in students who had attained high levels of preintervention accuracy. Discussion focuses on recommendations for educators who wish to use timing procedures to increase students' rates of accurate responding.

11.
Conventionally, fitting a mathematical model to empirically derived data is achieved by varying model parameters to minimize the deviations between expected and observed values in the dependent dimension. However, when functions to be fit are multivalued (e.g., an ellipse), conventional model fitting procedures fail. A novel (n+1)-dimensional [(n+1)-D] model fitting procedure is presented which can solve such problems by transforming the n-D model and data into (n+1)-D space and then minimizing deviations in the constructed dimension. While the (n+1)-D procedure provides model fits identical to those obtained with conventional methods for single-valued functions, it also extends parameter estimation to multivalued functions.
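The idea can be illustrated with a circle, the simplest multivalued curve: lift the 2-D data into a constructed third dimension z = x² + y² and minimize deviations in z rather than in y. This sketch uses the well-known algebraic circle fit, assumed here as a concrete instance of the general procedure:

```python
import numpy as np

def fit_circle(x, y):
    """Fit a circle (a multivalued function of x) by constructing a
    third dimension z = x**2 + y**2 and solving the linear
    least-squares problem z = 2a*x + 2b*y + c.  Residuals are
    deviations in the constructed dimension, not in y.
    Returns the center (a, b) and radius r = sqrt(c + a**2 + b**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    z = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return a, b, float(np.sqrt(c + a ** 2 + b ** 2))
```

A conventional fit of y on x cannot represent the two branches of the circle at once; the lifted problem is single-valued in z and solvable by ordinary least squares.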

12.
Recent research has shown that, in response time (RT) tasks, the go/no-go response procedure produces faster (and less noisy) RTs and fewer errors than the two-choice response procedure in children, although these differences are substantially smaller in college-aged adults. Here we examined whether the go/no-go procedure should be preferred over the two-choice procedure in RT experiments with older adults (i.e., another population with slower and more error-prone responding than college-aged individuals). To that end, we compared these response procedures in two experiments with older adults (Mage = 83 years): a visual word recognition task (lexical decision) and a perceptual task (numerosity discrimination). A group of young adults (Mage = 31 years) served as a control. In the lexical decision experiment, results showed a go/no-go advantage in the mean RTs and in the error rates for words; however, this was not accompanied by less noisy RT data. The magnitude of the word-frequency effect was similar in the two response procedures. The numerosity discrimination experiment did not reveal any clear differences across response procedures, except that the RTs were noisier in the go/no-go procedure. Therefore, we found no compelling reasons why the go/no-go procedure should be preferred over the two-choice procedure in RT experiments with older adults.

13.
Publication bias is the disproportionate representation of studies with large effects and statistically significant findings in the published research literature. If publication bias occurs in single-case research design studies on applied behavior-analytic (ABA) interventions, it can result in inflated estimates of ABA intervention effects. We conducted an empirical evaluation of publication bias on an evidence-based ABA intervention for children diagnosed with autism spectrum disorder, response interruption and redirection (RIRD). We determined effect size estimates for published and unpublished studies using 3 metrics: percentage of nonoverlapping data (PND), Hedges' g, and log response ratios (LRR). Omnibus effect size estimates across all 3 metrics were positive, supporting that RIRD is an effective treatment for reducing problem behavior maintained by nonsocial consequences. We observed larger PND for published compared to unpublished studies, small and nonsignificant differences in LRR for published compared to unpublished studies, and significant differences in Hedges' g for published compared to unpublished studies, with published studies showing a slightly larger effect. We found little, if any, difference in methodological quality between published and unpublished studies. While RIRD appears to be an effective intervention for challenging behavior maintained by nonsocial consequences, our results reflect some degree of publication bias present in the RIRD research literature.
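Two of the three metrics can be sketched directly; these are the standard meta-analytic formulas (small-sample-corrected standardized mean difference and log ratio of phase means), which may differ in detail from the single-case variants the study applied:

```python
import math

def hedges_g(a, b):
    """Hedges' g between phase A and phase B: standardized mean
    difference using the pooled SD, with the small-sample correction
    J = 1 - 3/(4*df - 1)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((v - ma) ** 2 for v in a) / (na - 1)
    vb = sum((v - mb) ** 2 for v in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    df = na + nb - 2
    return (1 - 3 / (4 * df - 1)) * (mb - ma) / sp

def log_response_ratio(a, b):
    """Log response ratio: ln(mean of phase B / mean of phase A)."""
    return math.log((sum(b) / len(b)) / (sum(a) / len(a)))
```

PND, the third metric, is a simple nonoverlap count (share of treatment points beyond the most extreme baseline point) and is well known to be sensitive to a single outlying baseline observation — one reason the three metrics can disagree about publication bias.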

14.
Users of interobserver agreement statistics have heretofore ignored the problem of autocorrelation in behavior sequences when testing the statistical significance of agreement measures. Due to autocorrelation, traditional reliability tests based on the 2 × 2 contingency-table model (e.g., kappa, phi) are incorrect. Correct tests can be developed by using the bivariate time series as a statistical model. Seen from this perspective, testing the significance of interobserver agreement becomes formally equivalent to testing the significance of the lag-zero cross-correlation between two time series. The robust procedure known as the jackknife is suggested for this purpose.
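The lag-zero cross-correlation and a jackknife standard error for it can be sketched as follows — a simple delete-one version for illustration; a time-series application might delete blocks rather than single observations to respect the autocorrelation:

```python
import math

def cross_corr0(x, y):
    """Lag-zero cross-correlation: the Pearson r of the paired series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def jackknife_se(x, y):
    """Delete-one jackknife standard error of the lag-zero
    cross-correlation: recompute r with each observation pair removed,
    then combine the leave-one-out estimates."""
    n = len(x)
    thetas = [cross_corr0(x[:i] + x[i + 1:], y[:i] + y[i + 1:])
              for i in range(n)]
    mean = sum(thetas) / n
    return math.sqrt((n - 1) / n * sum((t - mean) ** 2 for t in thetas))
```

Dividing the observed r by its jackknife standard error gives an approximate test statistic that, unlike the contingency-table tests, does not assume independent observations.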

15.
Prior to a three-way component analysis of a three-way data set, it is customary to preprocess the data by centering and/or rescaling them. Harshman and Lundy (1984) considered that three-way data actually consist of a three-way model part, which in fact pertains to ratio scale measurements, as well as additive “offset” terms that turn the ratio scale measurements into interval scale measurements. They mentioned that such offset terms might be estimated by incorporating additional components in the model, but discarded this idea in favor of an approach to remove such terms from the model by means of centering. Then estimates for the three-way component model parameters are obtained by analyzing the centered data. In the present paper, the possibility of actually estimating the offset terms is taken up again. First, it is mentioned in which cases such offset terms can be estimated uniquely. Next, procedures are offered for estimating model parameters and offset parameters simultaneously, as well as successively (i.e., providing offset term estimates after the three-way model parameters have been estimated in the traditional way on the basis of the centered data). These procedures are provided for both the CANDECOMP/PARAFAC model and the Tucker3 model extended with offset terms. The successive and the simultaneous approaches for estimating model and offset parameters have been compared on the basis of simulated data. It was found that both procedures perform well when the fitted model captures at least all offset terms actually underlying the data. The simultaneous procedures performed slightly better than the successive procedures. If fewer offset terms are fitted than actually underlie the data, the results are considerably poorer, but in these cases the successive procedures performed better than the simultaneous ones.
All in all, it can be concluded that the traditional approach for estimating model parameters can hardly be improved upon, and that offset terms can sufficiently well be estimated by the proposed successive approach, which is a simple extension of the traditional approach.
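The centering operation at the heart of the traditional approach can be sketched in one line: subtracting the mean across one mode of a three-way array removes any additive offset that is constant in that mode (a minimal illustration, not the paper's estimation procedures):

```python
import numpy as np

def center_first_mode(X):
    """Center a three-way array across its first mode: subtracting the
    mode-A mean removes additive offset terms that are constant over
    that mode, which is how centering turns interval-scale entries
    back into (approximately) ratio-scale ones for the component
    model."""
    return X - X.mean(axis=0, keepdims=True)
```

After centering, the mean over the centered mode is exactly zero in every fiber, so a single-mode offset contributes nothing to the subsequent component fit.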

16.
Single case design (SCD) experiments in the behavioral sciences utilize just one participant from whom data is collected over time. This design permits causal inferences to be made regarding various intervention effects, often in clinical or educational settings, and is especially valuable when between-participant designs are not feasible or when interest lies in the effects of an individualized treatment. Regression techniques are the most common quantitative practice for analyzing time series data and provide parameter estimates for both treatment and trend effects. However, the presence of serially correlated residuals, known as autocorrelation, can severely bias inferences made regarding these parameter estimates. Despite the severity of the issue, few researchers test or correct for the autocorrelation in their analyses.

Shadish and Sullivan (in press) recently conducted a meta-analysis of over 100 studies in order to assess the prevalence of the autocorrelation in the SCD literature. Although they found that the meta-analytic weighted average of the autocorrelation was close to zero, the distribution of autocorrelations was found to be highly heterogeneous. Using the same set of SCDs, the current study investigates various factors that may be related to the variation in autocorrelation estimates (e.g., study and outcome characteristics). Multiple moderator variables were coded for each study and then used in a metaregression in order to estimate the impact these predictor variables have on the autocorrelation.

The current study investigates the autocorrelation using a multilevel meta-analytic framework. Although meta-analyses involve nested data structures (e.g., effect sizes nested within studies nested within journals), there are few instances of meta-analysts utilizing multilevel frameworks with more than two levels. This is likely attributable to the fact that very few software packages allow for meta-analyses to be conducted with more than two levels, and those that do allow this provide sparse documentation on how to implement these models. The proposed presentation discusses methods for carrying out a multilevel meta-analysis. The presentation also discusses the findings from the metaregression on the autocorrelation and the implications these findings have on SCDs.
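The "meta-analytic weighted average" that Shadish and Sullivan report is built from the basic inverse-variance pooling step, sketched here as a fixed-effect version (the multilevel models discussed above add random effects on top of this building block):

```python
def inverse_variance_pool(estimates, variances):
    """Fixed-effect meta-analytic pooling: weight each study estimate
    by the inverse of its sampling variance; the pooled variance is
    the reciprocal of the summed weights."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    var = 1.0 / sum(weights)
    return est, var
```

When the estimates are heterogeneous, as the autocorrelations here were, a random-effects or multilevel model widens the pooled variance accordingly.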

17.
Abstract

Background: Masculinizing mastectomy is the most requested gender affirming surgery (GAS) in trans men, followed by genital GAS. Mastectomy and total laparoscopic hysterectomy, with or without bilateral salpingo-oophorectomy (TLH ± BSO), can both be performed in one single operation session. However, data on complication rates of the combined procedure are scarce and no consensus exists on the preferred order of procedures.

Aims: To compare safety outcomes between mastectomy performed in a single procedure with those when performed in a combined procedure and assess whether the order of procedures matters when they are combined.

Methods: A retrospective chart review was performed of trans men who underwent masculinizing mastectomy with or without TLH ± BSO in a combined session. The effects of the surgical procedure on complication and reoperation rate of the chest were assessed using logistic regression.

Results: In total, 480 trans men were included in the study. Of these, 212 patients underwent the combined procedure. The gynecological procedure was performed first in 152 (71.7%) patients. In the total sample, postoperative hematoma of the chest occurred in 11.3%; 16% in the combined versus 7.5% in the single mastectomy group (p = 0.001). Reoperations due to hematoma of the chest were performed in 7.5% of all patients; 10.8% in the combined versus 4.9% in the single mastectomy group (p = 0.017). The order of procedures in the combined group had no significant effect on postoperative hematoma of the chest (p = 0.856), and reoperations (p = 0.689).

Conclusion: Combining masculinizing mastectomy with TLH ± BSO in one session was associated with significantly more hematoma and reoperations compared with separately performing mastectomy. This increased risk of complications after a combined procedure should be considered when deciding on surgical options. The order of procedures in a combined procedure did not have an effect on safety outcomes.

18.
Numerous studies have demonstrated that disruptive classroom behavior can be decreased by delivering tokens contingent upon periods of time during which children do not engage in it or by removing tokens contingent upon its occurrence. To date, the best controlled of these studies have consistently reported the two procedures to be equally effective. However, in these studies, token contingencies have been combined with instructions regarding the contingencies. The present study compared these two procedures when no instructions were given regarding the token contingencies. Token delivery was not effective in decreasing disruptive behavior in any of the children, while a combination of token delivery and removal was effective for three of four children. The results suggest that the combined procedure may be effective with certain populations that are not readily controlled by instructions.

19.
In the applied context, short time-series designs are suitable to evaluate a treatment effect. These designs present serious problems given autocorrelation among the data and the small number of observations involved. This paper describes analytic procedures that have been applied to data from short time series, and an alternative: a new version of the generalized least squares method that simplifies estimation of the error covariance matrix. Using the results of a simulation study and assuming a stationary first-order autoregressive model, it is proposed that the original observations and the design matrix be transformed by means of the square root or Cholesky factor of the inverse of the covariance matrix. This provides a solution to the problem of estimating the parameters of the error covariance matrix. Finally, the results of the simulation study obtained using the proposed generalized least squares method are compared with those obtained by the ordinary least squares approach. The probability of Type I error associated with the proposed method is close to the nominal value for all values of rho1 and n investigated, especially for positive values of rho1. The proposed generalized least squares method corrects the effect of autocorrelation on the test's power.
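The Cholesky-based transformation can be sketched directly: build the AR(1) correlation matrix, take the Cholesky factor of its inverse, whiten y and the design matrix with it, and run OLS on the transformed data. This is a sketch of the square-root idea with rho assumed known, not the paper's estimation of rho from the data:

```python
import numpy as np

def ar1_gls(y, X, rho):
    """GLS for AR(1) errors via whitening.  V[i, j] = rho**|i - j| is
    the AR(1) correlation matrix; with L @ L.T = inv(V), transforming
    by L.T makes the errors uncorrelated, so OLS on the transformed
    data gives the GLS estimate."""
    y, X = np.asarray(y, float), np.asarray(X, float)
    n = len(y)
    V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    L = np.linalg.cholesky(np.linalg.inv(V))   # L @ L.T = inv(V)
    yt, Xt = L.T @ y, L.T @ X
    beta, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
    return beta
```

With rho = 0 the transformation reduces to the identity and the estimate coincides with ordinary least squares.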

20.
Autocorrelation and partial autocorrelation, which provide a mathematical tool to understand repeating patterns in time series data, are often used to facilitate the identification of model orders of time series models (e.g., moving average and autoregressive models). Asymptotic methods for testing autocorrelation and partial autocorrelation, such as the 1/T approximation method and Bartlett's formula method, may fail in finite samples and are vulnerable to non-normality. Resampling techniques such as the moving block bootstrap and the surrogate data method are competitive alternatives. In this study, we use a Monte Carlo simulation study and a real data example to compare asymptotic methods with the aforementioned resampling techniques. For each resampling technique, we consider both the percentile method and the bias-corrected and accelerated method for interval construction. Simulation results show that the surrogate data method with percentile intervals yields better performance than the other methods. An R package pautocorr is used to carry out tests evaluated in this study.
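The resampling logic can be illustrated with a permutation variant: shuffling the series destroys its serial order, so the shuffled lag-1 autocorrelations approximate the no-autocorrelation null, and an observed r1 outside the percentile interval suggests significant serial dependence. This is a simple stand-in for illustration, not the moving block bootstrap or surrogate-data schemes the study evaluates:

```python
import numpy as np

def r1(x):
    """Lag-1 autocorrelation (deviations from the series mean)."""
    x = np.asarray(x, float) - np.mean(x)
    return float(np.sum(x[:-1] * x[1:]) / np.sum(x * x))

def permutation_null_interval(x, n_resamples=1000, alpha=0.05, seed=0):
    """Percentile interval of r1 under random reshufflings of the
    series, approximating the null distribution of 'no serial
    dependence' for these particular values."""
    rng = np.random.default_rng(seed)
    stats = [r1(rng.permutation(x)) for _ in range(n_resamples)]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```

For a strongly trending series, the observed r1 falls far above the upper percentile bound, flagging significant positive autocorrelation.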
