Similar Documents
20 similar documents found (search time: 31 ms)
1.
In their criticism of B. E. Wampold and R. C. Serlin's analysis of treatment effects in nested designs, M. Siemer and J. Joormann argued that providers of services should be considered a fixed factor because typically providers are neither randomly selected from a population of providers nor randomly assigned to treatments, and statistical power to detect treatment effects is greater in the fixed than in the mixed model. The authors of the present article argue that if providers are considered fixed, conclusions about the treatment must be conditioned on the specific providers in the study, and they show that in this case generalizing beyond these providers incurs inflated Type I error rates.
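The distinction matters for the F ratio itself: with providers treated as random and nested within treatment, the provider-within-treatment mean square is the error term for the treatment test, whereas the fixed-provider analysis divides by the within-provider mean square. The sketch below illustrates the two ratios on simulated data under assumed values (balanced design, made-up variance components, no true treatment effect); it is not the analysis from either article.

```python
# Minimal sketch (not the articles' code): contrast the F ratio for the
# treatment effect under the mixed model (providers random, nested in
# treatment) with the fixed-provider analysis. All numbers are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t, p, n = 2, 5, 20                 # treatments, providers per treatment, clients per provider
provider_sd, error_sd = 0.5, 1.0   # provider effects exist, treatment effect is zero

# data[i, j, k]: client k of provider j within treatment i
data = (rng.normal(0, provider_sd, (t, p, 1)) +
        rng.normal(0, error_sd, (t, p, n)))

grand = data.mean()
treat_means = data.mean(axis=(1, 2))
prov_means = data.mean(axis=2)

ss_treat = p * n * np.sum((treat_means - grand) ** 2)
ss_prov = n * np.sum((prov_means - treat_means[:, None]) ** 2)
ss_within = np.sum((data - prov_means[:, :, None]) ** 2)

ms_treat = ss_treat / (t - 1)
ms_prov = ss_prov / (t * (p - 1))
ms_within = ss_within / (t * p * (n - 1))

# Mixed model: providers-within-treatment is the error term for the treatment test.
F_mixed = ms_treat / ms_prov
p_mixed = stats.f.sf(F_mixed, t - 1, t * (p - 1))

# Fixed-provider model: within-provider error is the denominator,
# so the conclusion is conditional on these specific providers.
F_fixed = ms_treat / ms_within
p_fixed = stats.f.sf(F_fixed, t - 1, t * p * (n - 1))

print(f"mixed  F={F_mixed:.2f}, p={p_mixed:.3f}")
print(f"fixed  F={F_fixed:.2f}, p={p_fixed:.3f}")
```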

2.
Although the consequences of ignoring a nested factor on decisions to reject the null hypothesis of no treatment effects have been discussed in the literature, typically researchers in applied psychology and education ignore treatment providers (often a nested factor) when comparing the efficacy of treatments. The incorrect analysis, however, not only invalidates tests of hypotheses, but it also overestimates the treatment effect. Formulas were derived and a Monte Carlo study was conducted to estimate the degree to which the F statistic and treatment effect size measures are inflated by ignoring the effects due to providers of treatments. These untoward effects are illustrated with examples from psychotherapeutic treatments.
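A minimal Monte Carlo sketch of the point under assumed values (zero treatment effect, a provider ICC of .20, balanced design); it is not the authors' derivation or simulation code. Ignoring providers pools clients as if they were independent, while the provider-means analysis respects the nesting.

```python
# Minimal Monte Carlo sketch, not the paper's code: with a zero treatment
# effect but nonzero provider effects, compare the rejection rate of a
# one-way ANOVA that ignores providers with an ANOVA on provider means.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
t, p, n, reps, alpha = 2, 5, 20, 2000, 0.05
icc_provider = 0.2                               # share of variance due to providers
prov_sd, err_sd = np.sqrt(icc_provider), np.sqrt(1 - icc_provider)

reject_pooled = reject_means = 0
for _ in range(reps):
    data = (rng.normal(0, prov_sd, (t, p, 1)) +
            rng.normal(0, err_sd, (t, p, n)))
    # Wrong: treat all t*p*n clients as independent within treatment.
    _, p_pooled = f_oneway(*[data[i].ravel() for i in range(t)])
    # Better (balanced case): analyze the p provider means per treatment.
    _, p_means = f_oneway(*[data[i].mean(axis=1) for i in range(t)])
    reject_pooled += p_pooled < alpha
    reject_means += p_means < alpha

print("Type I rate, providers ignored :", reject_pooled / reps)   # inflated above .05
print("Type I rate, provider means    :", reject_means / reps)    # near .05
```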

3.
This article proposes an approach to modelling partially cross-classified multilevel data in which some of the level-1 observations are nested within one random factor and some are cross-classified by two random factors. A simulation study compares the proposed approach with two other commonly used approaches that treat the partially cross-classified data as either fully nested or fully cross-classified. Results show that the proposed approach demonstrates desirable performance in terms of parameter estimates and statistical inferences. Both the fully nested model and the fully cross-classified model suffer from biased estimates of some variance components and from distorted statistical inferences for some fixed effects. Results also indicate that the proposed model is robust against cluster size imbalance.

4.
Cross-classified random effects modelling (CCREM) is a special case of multilevel modelling in which the units of one level are nested within two cross-classified factors. Typically, CCREM analyses omit the random interaction effect of the cross-classified factors. We investigate the impact of omitting this interaction effect on parameter estimates and standard errors. Results from a Monte Carlo simulation study indicate that, for fixed effects, both the coefficient estimates and the accompanying standard error estimates are unbiased. For random effects, results are affected at level 2, but not at level 1, by the presence of an interaction variance and/or a correlation between the residuals of the level-2 factors. Results from analyses of the Early Childhood Longitudinal Study and the National Educational Longitudinal Study agree with those obtained from the simulated data. We recommend that researchers attempt to include interaction effects of cross-classified factors in their models.

5.
Experiments that involve nested structures may assign treatment conditions either to entire groups (such as classrooms or schools) or individuals within groups (such as students). Although typically the interest in field experiments is in determining the significance of the overall treatment effect, it is equally important to examine the inconsistency of the treatment effect in different groups. This study provides methods for computing power of tests for the variability of treatment effects across level-2 and level-3 units in three-level designs, where, for example, students are nested within classrooms and classrooms are nested within schools and random assignment takes place at the first or the second level. The power computations take into account nesting effects at the second (e.g., classroom) and at the third (e.g., school) level as well as sample size effects (e.g., number of level-1 and level-2 units). The methods can also be applied to quasi-experimental studies that examine the significance of the variation of group differences in an outcome or associations between predictors and outcomes across level-2 and level-3 units.

6.
While conventional hierarchical linear modeling is applicable to purely hierarchical data, a multiple membership random effects model (MMrem) is appropriate for nonpurely nested data wherein some lower-level units manifest mobility across higher-level units. Although a few recent studies have investigated the influence of cluster-level residual non-normality on hierarchical linear modeling estimation for purely hierarchical data, no research has examined the statistical performance of an MMrem given residual non-normality. The purpose of the present study was to extend prior research on the influence of residual non-normality from purely nested data structures to multiple membership data structures. Employing a Monte Carlo simulation study, this research inquiry examined two-level MMrem parameter estimate biases and inferential errors. Simulation factors included the level-two residual distribution, sample sizes, intracluster correlation coefficient, and mobility rate. Results showed that estimates of fixed effect parameters and the level-one variance component were robust to level-two residual non-normality. The level-two variance component, however, was sensitive to level-two residual non-normality and sample size. Coverage rates of the 95% credible intervals deviated from the nominal value assumed when level-two residuals were non-normal. These findings can be useful in the application of an MMrem to account for the contextual effects of multiple higher-level units.

7.
Diversity is a popular topic among academics and practitioners alike. It is also a topic that is surrounded by controversies and passionate opinions. This makes understanding the ambiguous consequences of diversity a highly interesting and puzzling endeavor. To facilitate understanding of diversity's effects in workgroups, I present an overview of the state of the art in diversity research and discuss the potential problems and benefits that are associated with group diversity. Moreover, I discuss how research on diversity interventions uses this problems-versus-potential approach to distinguish moderators of diversity's effects. Based on this overview, I argue that, just as contingencies are important for predicting diversity's effects, they also play a crucial role in predicting the effectiveness of diversity interventions. As such, the current overview stresses the lack of main effects of both diversity and diversity interventions. Finally, I discuss recent work illustrating these contingencies and conclude that positive diversity mindsets (favorable mental representations of group diversity) are a necessary prerequisite for preventing the problems and promoting the potential of group diversity.

8.
Field experiments with nested structures are becoming increasingly common, especially designs that randomly assign entire clusters, such as schools, to a treatment and a control group. In such large-scale cluster randomized studies the challenge is to obtain sufficient power for the test of the treatment effect. The objective is to maximize power without adding many clusters, which would make the study much more expensive. In this article I discuss how power estimates of tests of treatment effects in balanced cluster randomized designs are affected by covariates at different levels. I use third-grade data from Project STAR, a field experiment about class size, to demonstrate how covariates that explain a considerable proportion of variance in outcomes increase power significantly. When lower-level covariates are group-mean centered and clustering effects are larger, top-level covariates increase power more than lower-level covariates. In contrast, when clustering effects are smaller and lower-level covariates are grand-mean centered or uncentered, lower-level covariates increase power more than top-level covariates.
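A rough normal-approximation power sketch for a balanced two-arm cluster randomized design with a standardized outcome, in the spirit of the covariate argument above; the function name, parameter values, and R-squared inputs are illustrative assumptions, not Project STAR quantities or the article's own computations.

```python
# Rough power sketch (normal approximation, balanced two-arm cluster
# randomized design with a standardized outcome). r2_cluster / r2_student
# are the shares of level-2 and level-1 variance explained by covariates.
import numpy as np
from scipy.stats import norm

def crt_power(delta, J, n, icc, r2_cluster=0.0, r2_student=0.0, alpha=0.05):
    """Approximate power to detect a standardized effect `delta` with J
    clusters (split evenly across arms) of n students each."""
    var_effect = 4 * (icc * (1 - r2_cluster) +
                      (1 - icc) * (1 - r2_student) / n) / J
    lam = delta / np.sqrt(var_effect)
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(lam - z) + norm.cdf(-lam - z)

# Covariates that explain cluster-level variance buy the most power
# when the clustering effect (ICC) is sizable.
print(crt_power(0.3, J=40, n=25, icc=0.2))                   # no covariates
print(crt_power(0.3, J=40, n=25, icc=0.2, r2_cluster=0.5))   # school-level covariate
print(crt_power(0.3, J=40, n=25, icc=0.2, r2_student=0.5))   # student-level covariate
```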

9.
Experimental disclosure and its moderators: a meta-analysis
Disclosing information, thoughts, and feelings about personal and meaningful topics (experimental disclosure) is purported to have various health and psychological consequences (e.g., J. W. Pennebaker, 1993). Although the results of 2 small meta-analyses (P. G. Frisina, J. C. Borod, & S. J. Lepore, 2004; J. M. Smyth, 1998) suggest that experimental disclosure has a positive and significant effect, both used a fixed effects approach, limiting generalizability. Moreover, many studies of experimental disclosure that were not included in those analyses have since been completed. One hundred forty-six randomized studies of experimental disclosure were collected and included in the present meta-analysis. Results of random effects analyses indicate that experimental disclosure is effective, with a positive and significant average r effect size of .075. In addition, a number of moderators were identified.
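For readers unfamiliar with the random-effects pooling step, here is an illustrative DerSimonian-Laird computation on made-up correlations and sample sizes; it is not the authors' analysis of the 146 studies.

```python
# Illustrative random-effects pooling (DerSimonian-Laird); the correlations
# and sample sizes below are invented. Correlations are converted to
# Fisher's z for pooling and back-transformed at the end.
import numpy as np
from scipy.stats import norm

r = np.array([0.02, 0.10, 0.08, 0.15, 0.05])   # study effect sizes (r)
n = np.array([120, 80, 200, 60, 150])           # study sample sizes

z = np.arctanh(r)                               # Fisher z
v = 1 / (n - 3)                                 # sampling variances of z
w = 1 / v

# Heterogeneity (Q) and DerSimonian-Laird tau^2
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / c)

# Random-effects weights, pooled estimate, and 95% CI
w_star = 1 / (v + tau2)
z_re = np.sum(w_star * z) / np.sum(w_star)
se_re = np.sqrt(1 / np.sum(w_star))
ci = np.tanh(z_re + norm.ppf([0.025, 0.975]) * se_re)

print(f"pooled r = {np.tanh(z_re):.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}], tau^2 = {tau2:.4f}")
```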

10.
Choice of the appropriate model in meta-analysis is often treated as an empirical question that is answered by examining the amount of variability in the effect sizes. When all of the observed variability in the effect sizes can be accounted for by sampling error alone, a set of effect sizes is said to be homogeneous and a fixed-effects model is typically adopted. Whether a set of effect sizes is homogeneous is usually tested with the so-called Q test. In this paper, a variety of alternative homogeneity tests – the likelihood ratio, Wald and score tests – are compared with the Q test in terms of their Type I error rate and power for four different effect size measures. Monte Carlo simulations show that the Q test keeps the tightest control of the Type I error rate, although the results emphasize the importance of large sample sizes within the set of studies. The results also suggest under what conditions the power of the tests can be considered adequate.
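A small sketch of how such a Type I error check can be run for the Q test with standardized mean differences; the design (10 studies, 30 subjects per arm, zero common effect) and the variance approximation for d are assumptions, not the paper's simulation conditions. Rerunning with smaller per-arm sizes illustrates the sample-size sensitivity the authors emphasize.

```python
# Sketch: empirical Type I rate of the Q homogeneity test when every study
# shares the same true (zero) effect but within-study variances are estimated.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
k, n, reps, alpha = 10, 30, 3000, 0.05     # studies, subjects per arm, replications

rejections = 0
for _ in range(reps):
    g1 = rng.normal(0, 1, (k, n))
    g2 = rng.normal(0, 1, (k, n))
    sp = np.sqrt((g1.var(axis=1, ddof=1) + g2.var(axis=1, ddof=1)) / 2)
    d = (g1.mean(axis=1) - g2.mean(axis=1)) / sp   # standardized mean differences
    v = 2 / n + d ** 2 / (4 * n)                   # approximate sampling variance of d
    w = 1 / v
    Q = np.sum(w * (d - np.sum(w * d) / np.sum(w)) ** 2)
    rejections += Q > chi2.ppf(1 - alpha, df=k - 1)

print("empirical Type I rate of Q:", rejections / reps)
```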

11.
M > 1     
Increasingly, communication experiments are incorporating replications/actors for the purpose of controlling confounds and increasing generalizability. If replications are considered to be samples of possible treatment implementations, treating the replication factor as random is more appropriate than treating it as fixed. Study 1 shows that treating sampled replications as a fixed effect leads to potentially serious alpha inflation in the test of the treatment effect, whereas treating sampled replications as random holds alpha at its nominal level. Study 2 addresses a common objection to treating replications as random: the argument that doing so will lead to unacceptably low power in statistical testing. Although experiments with very few replications are likely to be deficient in power, the results of Study 2 establish that power can be improved to an unexpected degree by a relatively modest increase in the number of replications.

12.
We developed masked visual analysis (MVA) as a structured complement to traditional visual analysis. The purpose of the present investigation was to compare the effects of computer-simulated MVA of a four-case multiple-baseline (MB) design in which the phase lengths are determined by an ongoing visual analysis (i.e., response-guided) versus those in which the phase lengths are established a priori (i.e., fixed criteria). We observed an acceptably low probability (less than .05) of false detection of treatment effects. The probability of correctly detecting a true effect frequently exceeded .80 and was higher when: (a) the masked visual analyst extended phases based on an ongoing visual analysis, (b) the effects were larger, (c) the effects were more immediate and abrupt, and (d) the effects of random and extraneous error factors were simpler. Our findings indicate that MVA is a valuable combined methodological and data-analysis tool for single-case intervention researchers.

13.
If utterances are the observational unit of analysis and there are no sequential patterns to the interaction, two alternative statistical models may be applied. A hierarchical design in which utterances are nested within subjects and subjects are nested within treatment condition is considered. One method of analysis pools utterances within treatment condition; the other method collapses across utterances to obtain subject means. Inappropriate application of the pooling model instead of the subject means model can lead to Type I errors, decreased generalizability, and inflated variance estimates that attenuate univariate and multivariate correlations. The effect of utterance reliability on the apparent unidimensionality and parallelism of multiple indicators is also discussed.

14.
A one-way random effects model for trimmed means
The random effects ANOVA model plays an important role in many psychological studies, but the usual model suffers from at least two serious problems. The first is that even under normality, violating the assumption of equal variances can have serious consequences in terms of Type I errors or significance levels, and it can affect power as well. The second and perhaps more serious concern is that even slight departures from normality can result in a substantial loss of power when testing hypotheses. Jeyaratnam and Othman (1985) proposed a method for handling unequal variances, under the assumption of normality, but no results were given on how their procedure performs when distributions are nonnormal. A secondary goal in this paper is to address this issue via simulations. As will be seen, problems arise with both Type I errors and power. Another secondary goal is to provide new simulation results on the Rust-Fligner modification of the Kruskal-Wallis test. The primary goal is to propose a generalization of the usual random effects model based on trimmed means. The resulting test of no differences among J randomly sampled groups has certain advantages in terms of Type I errors, and it can yield substantial gains in power when distributions have heavy tails and outliers. This last feature is very important in applied work because recent investigations indicate that heavy-tailed distributions are common. Included is a suggestion for a heteroscedastic Winsorized analog of the usual intraclass correlation coefficient.
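As a building-block illustration only (not the article's full heteroscedastic trimmed-means test), the sketch below computes 20% trimmed means and Winsorized variances for heavy-tailed groups, the ingredients such robust procedures are assembled from; the data and trimming proportion are assumptions.

```python
# Building blocks for trimmed-means procedures: 20% trimmed group means and
# Winsorized variances on simulated heavy-tailed (t with 3 df) groups.
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(3)
# J = 4 randomly sampled groups with heavy-tailed errors and shifted centers
groups = [rng.standard_t(3, 25) + shift for shift in (0.0, 0.2, 0.0, 0.4)]

for j, x in enumerate(groups):
    tmean = stats.trim_mean(x, proportiontocut=0.2)          # 20% trimmed mean
    xw = np.asarray(winsorize(x, limits=(0.2, 0.2)))         # 20% Winsorized sample
    wvar = np.var(xw, ddof=1)                                # Winsorized variance
    print(f"group {j}: mean={x.mean():+.2f}  trimmed mean={tmean:+.2f}  "
          f"Winsorized var={wvar:.2f}")
```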

15.
Research conclusions in the social sciences are increasingly based on meta-analysis, making questions of the accuracy of meta-analysis critical to the integrity of the base of cumulative knowledge. Both fixed effects (FE) and random effects (RE) meta-analysis models have been used widely in published meta-analyses. This article shows that FE models typically manifest a substantial Type I bias in significance tests for mean effect sizes and for moderator variables (interactions), while RE models do not. Likewise, FE models, but not RE models, yield confidence intervals for mean effect sizes that are narrower than their nominal width, thereby overstating the degree of precision in meta-analysis findings. This article demonstrates analytically that these biases in FE procedures are large enough to create serious distortions in conclusions about cumulative knowledge in the research literature. We therefore recommend that RE methods routinely be employed in meta-analysis in preference to FE methods.

16.
We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data from multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates this nested structure in time series data. By explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients through the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to multi-subject functional neuroimaging data for brain connectivity analysis, in which time series measurements are nested within subjects.

17.
When the underlying variances are unknown and/or unequal, using the conventional F test is problematic in the two-factor hierarchical data structure. Prompted by the approximate test statistics (Welch and Alexander-Govern methods), the authors develop four new heterogeneous test statistics to test factor A and factor B nested within A for the unbalanced fixed-effect two-stage nested design under variance heterogeneity. The actual significance levels and statistical power of the test statistics were compared in a simulation study. The results show that the proposed procedures maintain better Type I error rate control and have greater statistical power than the conventional F test under various conditions. Therefore, the proposed test statistics are recommended in terms of robustness and ease of implementation.
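Recent SciPy releases (1.7+) ship the one-way Alexander-Govern test that this line of work builds on; the sketch below contrasts it with the conventional F test on made-up heteroscedastic groups. It is only the single-factor building block, not the authors' nested-design statistics.

```python
# Contrast the conventional one-way F test with the heteroscedasticity-robust
# Alexander-Govern test on groups with unequal variances (simulated data).
import numpy as np
from scipy.stats import f_oneway, alexandergovern

rng = np.random.default_rng(4)
a = rng.normal(0.0, 1.0, 30)
b = rng.normal(0.0, 3.0, 15)      # smaller group, much larger variance
c = rng.normal(0.5, 3.0, 15)

print("conventional F   :", f_oneway(a, b, c))
print("Alexander-Govern :", alexandergovern(a, b, c))
```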

18.
Our goal is to provide empirical scientists with practical tools and advice with which to test hypotheses related to individual differences in intra-individual variability using the mixed-effects location-scale model. To that end, we evaluate Type I error rates and power to detect and predict individual differences in intra-individual variability using this model and provide empirically based guidelines for building scale models that include random and/or systematically varying fixed effects. We also provide two power simulation programs that allow researchers to conduct a priori empirical power analyses. Our results aligned with statistical power theory in that greater power was observed for designs with more individuals, more repeated occasions, greater proportions of variance available to be explained, and larger effect sizes. In addition, our results indicated that Type I error rates were acceptable in situations where individual differences in intra-individual variability were not initially detectable, as well as when the scale-model individual-level predictor explained all initially detectable individual differences in intra-individual variability. We conclude our paper by providing study design and model building advice for those interested in using the mixed-effects location-scale model in practice.
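To make the model concrete, here is a data-generating sketch of a two-level mixed-effects location-scale model with random intercepts in both the mean (location) and log-variance (scale) submodels; the parameter values are arbitrary assumptions, and this is not the authors' estimation code or their power simulation programs.

```python
# Data-generating sketch of a two-level mixed-effects location-scale model:
# each person gets a random intercept for the mean and for the log of the
# within-person residual variance, optionally predicted by a person-level x.
import numpy as np

rng = np.random.default_rng(5)
n_persons, n_occasions = 100, 10
x = rng.normal(size=n_persons)                 # person-level predictor

beta0, beta1 = 0.0, 0.3                        # location (mean) submodel
alpha0, alpha1 = 0.0, 0.4                      # scale (log within-person variance) submodel
tau_u, tau_v = 0.5, 0.3                        # SDs of random location / scale effects

u = rng.normal(0, tau_u, n_persons)            # random intercepts, mean
v = rng.normal(0, tau_v, n_persons)            # random intercepts, log-variance
mu = beta0 + beta1 * x + u
sigma = np.exp(0.5 * (alpha0 + alpha1 * x + v))   # person-specific residual SD

y = rng.normal(mu[:, None], sigma[:, None], (n_persons, n_occasions))
print("spread of within-person SDs across persons:",
      np.round(np.std(y, axis=1, ddof=1).std(), 3))
```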

19.
Numerous ways to meta-analyze single-case data have been proposed in the literature; however, consensus has not been reached on the most appropriate method. One method that has been proposed involves multilevel modeling. For this study, we used Monte Carlo methods to examine the appropriateness of Van den Noortgate and Onghena's (2008) raw-data multilevel modeling approach for the meta-analysis of single-case data. Specifically, we examined the fixed effects (e.g., the overall average treatment effect) and the variance components (e.g., the between-person within-study variance in the treatment effect) in a three-level multilevel model (repeated observations nested within individuals, nested within studies). More specifically, bias of the point estimates, confidence interval coverage rates, and interval widths were examined as a function of the number of primary studies per meta-analysis, the modal number of participants per primary study, the modal series length per primary study, the level of autocorrelation, and the variances of the error terms. The degree to which the findings of this study are supportive of using Van den Noortgate and Onghena's (2008) raw-data multilevel modeling approach to meta-analyzing single-case data depends on the particular parameter of interest. Estimates of the average treatment effect tended to be unbiased and produced confidence intervals that tended to overcover, but did come close to the nominal level as Level-3 sample size increased. Conversely, estimates of the variance in the treatment effect tended to be biased, and the confidence intervals for those estimates were inaccurate.

20.
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods.

This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), the partial posterior predictive method (Biesanz, Falk, & Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx.

Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control the Type I error rate and have good coverage rates.
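A simplified sketch of the percentile-bootstrap interval for an indirect effect a*b, using observed variables and ordinary least squares rather than the latent-variable models fit with R/OpenMx in the study; the data and sample size are simulated assumptions.

```python
# Percentile bootstrap for an indirect effect a*b with observed variables:
# a = slope of M on X, b = slope of Y on M controlling for X.
import numpy as np

rng = np.random.default_rng(6)
n, a_true, b_true = 200, 0.39, 0.39
x = rng.normal(size=n)
m = a_true * x + rng.normal(size=n)
y = b_true * m + rng.normal(size=n)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                            # slope of M on X
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]),
                        y, rcond=None)[0][2]              # slope of Y on M given X
    return a * b

boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)                           # resample cases with replacement
    boot[i] = indirect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, 95% percentile CI = [{lo:.3f}, {hi:.3f}]")
```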
