Similar Articles
20 similar articles retrieved.
1.
Repeated measures analyses of variance are the method of choice in many studies from experimental psychology and the neurosciences. Data from these fields are often characterized by small sample sizes, high numbers of factor levels of the within-subjects factor(s), and nonnormally distributed response variables such as response times. For a design with a single within-subjects factor, we investigated Type I error control in univariate tests with corrected degrees of freedom, the multivariate approach, and a mixed-model (multilevel) approach (SAS PROC MIXED) with Kenward–Roger’s adjusted degrees of freedom. We simulated multivariate normal and nonnormal distributions with varied population variance–covariance structures (spherical and nonspherical), sample sizes (N), and numbers of factor levels (K). For normally distributed data, as expected, the univariate approach with Huynh–Feldt correction controlled the Type I error rate with only very few exceptions, even if sample sizes as low as three were combined with high numbers of factor levels. The multivariate approach also controlled the Type I error rate, but it requires N > K. PROC MIXED often showed acceptable control of the Type I error rate for normal data, but it also produced several liberal or conservative results. For nonnormal data, all of the procedures showed clear deviations from the nominal Type I error rate in many conditions, even for sample sizes greater than 50. Thus, none of these approaches can be considered robust if the response variable is nonnormally distributed. The results indicate that both the variance heterogeneity and covariance heterogeneity of the population covariance matrices affect the error rates.
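
The univariate corrected-df approach referred to above rescales the ANOVA degrees of freedom by an estimated sphericity coefficient (epsilon). As a minimal illustration of the textbook Greenhouse–Geisser and Huynh–Feldt estimates (not code from the study itself), the sketch below computes both from a subjects-by-levels data matrix; the function name and the simulated example are ours.

```python
import numpy as np

def sphericity_epsilons(data):
    """Greenhouse-Geisser and Huynh-Feldt epsilon estimates (capped at 1).

    data: array of shape (n subjects, k within-subject levels).
    """
    n, k = data.shape
    s = np.cov(data, rowvar=False)                      # k x k sample covariance
    # Double-center the covariance matrix (Box's formulation of sphericity).
    s_dc = s - s.mean(axis=0, keepdims=True) - s.mean(axis=1, keepdims=True) + s.mean()
    eps_gg = np.trace(s_dc) ** 2 / ((k - 1) * np.sum(s_dc ** 2))
    # Classical Huynh-Feldt adjustment for a single-group within-subjects design.
    eps_hf = (n * (k - 1) * eps_gg - 2) / ((k - 1) * (n - 1 - (k - 1) * eps_gg))
    return min(eps_gg, 1.0), min(eps_hf, 1.0)

# Example: N = 10 simulated subjects measured under K = 4 levels.
rng = np.random.default_rng(1)
print(sphericity_epsilons(rng.normal(size=(10, 4))))
```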

2.
The data obtained from one‐way independent groups designs are typically non‐normal in form and rarely equally variable across treatment populations (i.e. population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e. the analysis of variance F test) typically provides invalid results (e.g. too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non‐normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied either to the usual least squares estimators of central tendency and variability, or the Welch test with robust estimators (i.e. trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non‐normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non‐normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non‐normal.
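
For reference, a minimal sketch of the Welch heteroscedastic one-way test applied to the usual least squares estimators (one of the previously recommended procedures named above) follows; the trimmed-mean variant and the parametric bootstrap tests are not reproduced here, and the example data are hypothetical.

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's heteroscedastic one-way ANOVA; returns (F, df1, df2, p)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    j = len(groups)
    n = np.array([g.size for g in groups])
    m = np.array([g.mean() for g in groups])
    v = np.array([g.var(ddof=1) for g in groups])
    w = n / v                                      # precision weights
    grand = np.sum(w * m) / np.sum(w)
    a = np.sum(w * (m - grand) ** 2) / (j - 1)
    lam = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    b = 1 + 2 * (j - 2) / (j ** 2 - 1) * lam
    f = a / b
    df1, df2 = j - 1, (j ** 2 - 1) / (3 * lam)
    return f, df1, df2, stats.f.sf(f, df1, df2)

# Example with three heteroscedastic groups.
rng = np.random.default_rng(2)
g1, g2, g3 = rng.normal(0, 1, 20), rng.normal(0, 3, 15), rng.normal(0.5, 5, 10)
print(welch_anova(g1, g2, g3))
```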

3.
Researchers often want to demonstrate a lack of interaction between two categorical predictors on an outcome. To justify a lack of interaction, researchers typically accept the null hypothesis of no interaction from a conventional analysis of variance (ANOVA). This method is inappropriate as failure to reject the null hypothesis does not provide statistical evidence to support a lack of interaction. This study proposes a bootstrap‐based intersection–union test for negligible interaction that provides coherent decisions between the omnibus test and post hoc interaction contrast tests and is robust to violations of the normality and variance homogeneity assumptions. Further, a multiple comparison strategy for testing interaction contrasts following a non‐significant omnibus test is proposed. Our simulation study compared the Type I error control, omnibus power and per‐contrast power of the proposed approach to the non‐centrality‐based negligible interaction test of Cheng and Shao (2007, Statistica Sinica, 17, 1441). For 2 × 2 designs, the empirical Type I error rates of the Cheng and Shao test were very close to the nominal α level when the normality and variance homogeneity assumptions were satisfied; however, only our proposed bootstrapping approach was satisfactory under non‐normality and/or variance heterogeneity. In general a × b designs, although the omnibus Cheng and Shao test, as expected, is the most powerful, it is not robust to assumption violation and results in incoherent omnibus and interaction contrast decisions that are not possible with the intersection–union approach.
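
The published intersection–union procedure is not reproduced here; the sketch below only conveys the general flavor of an equivalence-style bootstrap decision for a single 2 × 2 interaction contrast: resample within cells, form a percentile interval for the contrast, and call the interaction negligible if the interval lies inside a prespecified band (-delta, delta). The band delta, the function name, and the cell data are all hypothetical.

```python
import numpy as np

def negligible_interaction_2x2(cells, delta, n_boot=5000, alpha=0.05, seed=0):
    """Percentile-bootstrap equivalence check for the 2x2 interaction contrast.

    cells: dict with keys (i, j) in {0,1}x{0,1} mapping to 1-D arrays of scores.
    The interaction is declared negligible if the bootstrap CI for
    psi = (m00 - m01) - (m10 - m11) lies entirely inside (-delta, delta).
    """
    rng = np.random.default_rng(seed)
    def contrast(sample):
        m = {key: np.mean(vals) for key, vals in sample.items()}
        return (m[(0, 0)] - m[(0, 1)]) - (m[(1, 0)] - m[(1, 1)])
    boot = np.empty(n_boot)
    for b in range(n_boot):
        resampled = {key: rng.choice(vals, size=vals.size, replace=True)
                     for key, vals in cells.items()}
        boot[b] = contrast(resampled)
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return {"psi_hat": contrast(cells), "ci": (lo, hi),
            "negligible": (-delta < lo) and (hi < delta)}

# Hypothetical cell data with additive (non-interacting) effects.
rng = np.random.default_rng(3)
cells = {(i, j): rng.normal(i * 0.5 + j * 0.3, 1.0, 25) for i in (0, 1) for j in (0, 1)}
print(negligible_interaction_2x2(cells, delta=0.4))
```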

4.
A new procedure analogous to the analysis of variance (ANOVA), called the bisquare-weighted ANOVA (bANOVA), is described. When a traditional ANOVA is calculated, using samples from a distribution with heavy tails, the Type I error rates remain in check, but the Type II error rates increase, relative to those across samples from a normal distribution. The bANOVA is robust with respect to deviations from a normal distribution, maintaining high power with normal and heavy-tailed distributions alike. The more popular rank ANOVA (rANOVA) is also described briefly. However, the rANOVA is not as robust to large deviations from normality as is the bANOVA, and it generates high Type I error rates when applied to three-way designs.
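
The bANOVA itself is not reproduced here, but its basic ingredient—a Tukey bisquare (biweight) estimate of location that downweights observations far out in the tails—can be sketched as follows. The tuning constant 4.685 is the conventional choice for the bisquare, not necessarily the value used by the authors.

```python
import numpy as np

def bisquare_location(x, c=4.685, tol=1e-8, max_iter=100):
    """Iteratively reweighted Tukey bisquare (biweight) M-estimate of location."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    scale = 1.4826 * np.median(np.abs(x - mu))        # MAD rescaled to the normal
    if scale == 0:
        return mu
    for _ in range(max_iter):
        u = (x - mu) / (c * scale)
        w = np.where(np.abs(u) < 1, (1 - u ** 2) ** 2, 0.0)   # bisquare weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# Heavy-tailed sample: the bisquare estimate is barely moved by the gross outlier.
print(bisquare_location([2.1, 1.9, 2.4, 2.0, 2.2, 15.0]))
```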

5.
In a recent article in The Journal of General Psychology, J. B. Hittner, K. May, and N. C. Silver (2003) described their investigation of several methods for comparing dependent correlations and found that all can be unsatisfactory, in terms of Type I errors, even with a sample size of 300. More precisely, when researchers test at the .05 level, the actual Type I error probability can exceed .10. The authors of this article extended J. B. Hittner et al.'s research by considering a variety of alternative methods. They found 3 that avoid inflating the Type I error rate above the nominal level. However, a Monte Carlo simulation demonstrated that when the underlying distribution of scores violated the assumption of normality, 2 of these methods had relatively low power and had actual Type I error rates well below the nominal level. The authors report comparisons with E. J. Williams' (1959) method.
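
Williams' (1959) statistic mentioned above compares two dependent correlations that share a common variable. A minimal sketch of its commonly presented form is given below as an illustration, not as the authors' simulation code; the variable names are ours.

```python
import numpy as np
from scipy import stats

def williams_t(r_yx1, r_yx2, r_x1x2, n):
    """Williams' t for H0: rho(y, x1) = rho(y, x2), with y shared by both correlations."""
    det_r = 1 - r_yx1**2 - r_yx2**2 - r_x1x2**2 + 2 * r_yx1 * r_yx2 * r_x1x2
    r_bar = (r_yx1 + r_yx2) / 2
    num = (r_yx1 - r_yx2) * np.sqrt((n - 1) * (1 + r_x1x2))
    den = np.sqrt(2 * ((n - 1) / (n - 3)) * det_r + r_bar**2 * (1 - r_x1x2)**3)
    t = num / den
    return t, 2 * stats.t.sf(abs(t), df=n - 3)

# Example: r(y,x1) = .50, r(y,x2) = .35, r(x1,x2) = .40, n = 100.
print(williams_t(0.50, 0.35, 0.40, 100))
```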

6.
We examined nine adaptive methods of trimming, that is, methods that empirically determine when data should be trimmed and the amount to be trimmed from the tails of the empirical distribution. Over the 240 empirical values collected for each method investigated, in which we varied the total percentage of data trimmed, sample size, degree of variance heterogeneity, pairing of variances and group sizes, and population shape, one method resulted in exceptionally good control of Type I errors. However, under less extreme cases of non‐normality and variance heterogeneity a number of methods exhibited reasonably good Type I error control. With regard to the power to detect non‐null treatment effects, we found that the choice among the methods depended on the degree of non‐normality and variance heterogeneity. Recommendations are offered.

7.
The goal of this study was to investigate the performance of Hall’s transformation of the Brunner-Dette-Munk (BDM) and Welch-James (WJ) test statistics and Box-Cox’s data transformation in factorial designs when normality and variance homogeneity assumptions were violated separately and jointly. On the basis of unweighted marginal means, we performed a simulation study to explore the operating characteristics of the methods proposed for a variety of distributions with small sample sizes. Monte Carlo simulation results showed that when data were sampled from symmetric distributions, the error rates of the original BDM and WJ tests were scarcely affected by the lack of normality and homogeneity of variance. In contrast, when data were sampled from skewed distributions, the original BDM and WJ rates were not well controlled. Under such circumstances, the results clearly revealed that Hall’s transformation of the BDM and WJ tests provided generally better control of Type I error rates than did the same tests based on Box-Cox’s data transformation. Among all the methods considered in this study, we also found that Hall’s transformation of the BDM test yielded the best control of Type I errors, although it was often less powerful than either of the WJ tests when both approaches reasonably controlled the error rates.
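
Of the two remedies compared above, the Box–Cox data transformation is available in standard libraries; a minimal usage sketch follows (the transform requires strictly positive data, and the lognormal example is hypothetical). Hall's transformation of the test statistics is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.lognormal(mean=0.0, sigma=0.8, size=60)     # skewed, strictly positive data

# SciPy estimates the Box-Cox lambda by maximum likelihood and returns the
# transformed data; a lambda near 0 corresponds to a log transform.
x_bc, lam = stats.boxcox(x)
print(f"estimated lambda = {lam:.3f}")
print(f"skewness before = {stats.skew(x):.2f}, after = {stats.skew(x_bc):.2f}")
```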

8.
Categorical moderators are often included in mixed-effects meta-analysis to explain heterogeneity in effect sizes. An assumption in tests of categorical moderator effects is that of a constant between-study variance across all levels of the moderator. Although it rarely receives serious thought, there can be statistical ramifications to upholding this assumption. We propose that researchers should instead default to assuming unequal between-study variances when analysing categorical moderators. To achieve this, we suggest using a mixed-effects location-scale model (MELSM) to allow group-specific estimates for the between-study variance. In two extensive simulation studies, we show that in terms of Type I error and statistical power, little is lost by using the MELSM for moderator tests, but there can be serious costs when an equal variance mixed-effects model (MEM) is used. Most notably, in scenarios with balanced sample sizes or equal between-study variance, the Type I error and power rates are nearly identical between the MEM and the MELSM. On the other hand, with imbalanced sample sizes and unequal variances, the Type I error rate under the MEM can be grossly inflated or overly conservative, whereas the MELSM does comparatively well in controlling the Type I error across the majority of cases. A notable exception where the MELSM did not clearly outperform the MEM was in the case of few studies (e.g., 5). With respect to power, the MELSM had similar or higher power than the MEM in conditions where the latter produced non-inflated Type I error rates. Together, our results support the idea that assuming unequal between-study variances is preferred as a default strategy when testing categorical moderators.
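
The MELSM itself requires specialized software, so the sketch below only conveys the underlying idea of unequal between-study variances: estimating a separate DerSimonian–Laird tau-squared within each level of the categorical moderator. It illustrates the concept rather than the authors' model, and the effect sizes and sampling variances are hypothetical.

```python
import numpy as np

def dl_tau2(effects, variances):
    """DerSimonian-Laird estimate of the between-study variance tau^2."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1 / v
    y_bar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_bar) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

# Hypothetical effect sizes and sampling variances for two moderator levels:
# estimating tau^2 separately per level allows the between-study variances to differ.
level_a = dl_tau2([0.20, 0.35, 0.10, 0.28], [0.02, 0.03, 0.02, 0.04])
level_b = dl_tau2([0.60, 0.05, 0.90, -0.10], [0.02, 0.03, 0.02, 0.04])
print(level_a, level_b)
```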

9.
When more than one significance test is carried out on data from a single experiment, researchers are often concerned with the probability of one or more Type I errors over the entire set of tests. This article considers several methods of exercising control over that probability (the so-called family-wise Type I error rate), provides a schematic that can be used by a researcher to choose among the methods, and discusses applications to contingency tables.
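
One widely used procedure of the kind discussed above is Holm's step-down adjustment, which controls the family-wise Type I error rate over a set of p values; a minimal sketch follows. It is offered as a generic example, not as the article's specific recommendation.

```python
import numpy as np

def holm_adjust(p_values, alpha=0.05):
    """Holm's step-down procedure: rejection decisions at family-wise level alpha."""
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)                      # test p values from smallest to largest
    m = p.size
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (m - rank):       # compare k-th smallest p to alpha/(m-k+1)
            reject[idx] = True
        else:
            break                              # once one test fails, all later ones fail
    return reject

print(holm_adjust([0.001, 0.02, 0.04, 0.30]))
```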

10.
Serlin, R. C. (2000). Psychological Methods, 5(2), 230–240.
Monte Carlo studies provide the information needed to help researchers select appropriate analytical procedures under design conditions in which the underlying assumptions of the procedures are not met. In Monte Carlo studies, the 2 errors that one could commit involve (a) concluding that a statistical procedure is robust when it is not or (b) concluding that it is not robust when it is. In previous attempts to apply standard statistical design principles to Monte Carlo studies, the less severe of these errors has been wrongly designated the Type I error. In this article, a method is presented for controlling the appropriate Type I error rate; the determination of the number of iterations required in a Monte Carlo study to achieve desired power is described; and a confidence interval for a test's true Type I error rate is derived. A robustness criterion is also proposed that is a compromise between W. G. Cochran's (1952) and J. V. Bradley's (1978) criteria.
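
Because the empirical rejection rate in a Monte Carlo study is a binomial proportion, a simple normal-approximation sketch of (a) a confidence interval for a test's true Type I error rate and (b) the number of iterations needed for a target margin of error is shown below. It illustrates the general logic only, not Serlin's specific derivations or the proposed robustness criterion.

```python
import numpy as np
from scipy import stats

def type1_ci(rejections, iterations, conf=0.95):
    """Normal-approximation CI for the true Type I error rate of a test."""
    p_hat = rejections / iterations
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    half = z * np.sqrt(p_hat * (1 - p_hat) / iterations)
    return p_hat - half, p_hat + half

def iterations_needed(alpha=0.05, margin=0.005, conf=0.95):
    """Iterations required so the empirical rate falls within +/- margin of alpha."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    return int(np.ceil(z ** 2 * alpha * (1 - alpha) / margin ** 2))

print(type1_ci(rejections=620, iterations=10_000))   # hypothetical simulation outcome
print(iterations_needed())                           # iterations for a +/-0.005 margin at alpha = .05
```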

11.
Three approaches to the analysis of main and interaction effect hypotheses in nonorthogonal designs were compared in a 2×2 design for data that was neither normal in form nor equal in variance. The approaches involved either least squares or robust estimators of central tendency and variability and/or a test statistic that either pools or does not pool sources of variance. Specifically, we compared the ANOVA F test which used trimmed means and Winsorized variances, the Welch-James test with the usual least squares estimators for central tendency and variability and the Welch-James test using trimmed means and Winsorized variances. As hypothesized, we found that the latter approach provided excellent Type I error control, whereas the former two did not. Financial support for this research was provided by grants to the first author from the Natural Sciences and Engineering Research Council of Canada (#OGP0015855) and the Social Sciences and Humanities Research Council (#410-95-0006). The authors would like to express their appreciation to the Associate Editor as well as the reviewers who provided valuable comments on an earlier version of this paper.

12.
A problem arises in analyzing the existence of interdependence between the behavioral sequences of two individuals: tests involving a statistic such as chi-square assume independent observations within each behavioral sequence, a condition which may not exist in actual practice. Using Monte Carlo simulations of binomial data sequences, we found that the use of a chi-square test frequently results in unacceptable Type I error rates when the data sequences are autocorrelated. We compared these results to those from two other methods designed specifically for testing for intersequence independence in the presence of intrasequence autocorrelation. The first method directly tests the intersequence correlation using an approximation of the variance of the intersequence correlation estimated from the sample autocorrelations. The second method uses tables of critical values of the intersequence correlation computed by Nakamura et al. (J. Am. Stat. Assoc., 1976, 71, 214–222). Although these methods were originally designed for normally distributed data, we found that both methods produced much better results than the uncorrected chi-square test when applied to binomial autocorrelated sequences. The superior method appears to be the variance approximation method, which resulted in Type I error rates that were generally less than or equal to 5% when the level of significance was set at .05.
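
The variance-approximation method referred to above rests on the classical large-sample result that, for two mutually independent but autocorrelated sequences, the variance of their lag-0 cross-correlation is approximately the sum over lags of the products of the two autocorrelation functions, divided by the series length. The sketch below implements that generic approximation with a fixed maximum lag; it is not the authors' exact procedure for binomial sequences, and the simulated two-state sequences are hypothetical.

```python
import numpy as np
from scipy import stats

def autocorr(x, max_lag):
    """Sample autocorrelations r(1), ..., r(max_lag)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.sum(x ** 2)
    return np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, max_lag + 1)])

def intersequence_test(x, y, max_lag=10):
    """Test the lag-0 correlation of two sequences, allowing autocorrelation within each."""
    n = len(x)
    r_xy = np.corrcoef(x, y)[0, 1]
    rx, ry = autocorr(x, max_lag), autocorr(y, max_lag)
    var_r = (1 + 2 * np.sum(rx * ry)) / n          # Bartlett-type variance approximation
    z = r_xy / np.sqrt(var_r)
    return z, 2 * stats.norm.sf(abs(z))

# Two independent autocorrelated binary sequences (hypothetical example).
rng = np.random.default_rng(5)
def ar_binary(n, p_stay=0.8):
    s = [rng.integers(0, 2)]
    for _ in range(n - 1):
        s.append(s[-1] if rng.random() < p_stay else 1 - s[-1])
    return np.array(s)

print(intersequence_test(ar_binary(200), ar_binary(200)))
```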

13.
This article considers the problem of comparing two independent groups in terms of some measure of location. It is well known that with Student's two-independent-sample t test, the actual level of significance can be well above or below the nominal level, confidence intervals can have inaccurate probability coverage, and power can be low relative to other methods. A solution to deal with heterogeneity is Welch's (1938) test. Welch's test deals with heteroscedasticity but can have poor power under arbitrarily small departures from normality. Yuen (1974) generalized Welch's test to trimmed means; her method provides improved control over the probability of a Type I error, but problems remain. Transformations for skewness improve matters, but the probability of a Type I error remains unsatisfactory in some situations. We find that a transformation for skewness combined with a bootstrap method improves Type I error control and probability coverage even if sample sizes are small.
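
Yuen's (1974) trimmed-means generalization of Welch's test, mentioned above, can be sketched as follows with 20% trimming; the skewness transformation and the bootstrap step studied in the article are not included, and the example data are hypothetical.

```python
import numpy as np
from scipy import stats
from scipy.stats import mstats

def yuen_test(x, y, trim=0.2):
    """Yuen's two-sample test on trimmed means with Winsorized variances."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    def pieces(a):
        n = a.size
        g = int(np.floor(trim * n))
        h = n - 2 * g                                        # effective sample size
        s2w = np.var(np.asarray(mstats.winsorize(a, limits=(trim, trim))), ddof=1)
        d = (n - 1) * s2w / (h * (h - 1))
        return stats.trim_mean(a, trim), d, h
    m1, d1, h1 = pieces(x)
    m2, d2, h2 = pieces(y)
    t = (m1 - m2) / np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    return t, df, 2 * stats.t.sf(abs(t), df)

rng = np.random.default_rng(6)
print(yuen_test(rng.lognormal(0, 1, 30), rng.lognormal(0.3, 1.5, 25)))
```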

14.
15.
In a comparison of 2 treatments, if outcome scores are denoted by X in 1 condition and by Y in the other, stochastic equality is defined as P(X < Y) = P(X > Y). Tests of stochastic equality can be affected by characteristics of the distributions being compared, such as heterogeneity of variance. Thus, various robust tests of stochastic equality have been proposed and are evaluated here using a Monte Carlo study with sample sizes ranging from 10 to 30. Three robust tests are identified that perform well in Type I error rates and power except when extremely skewed data co-occur with very small n. When tests of stochastic equality might be preferred to tests of means is also considered.
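
The quantity under test is the probability that a score from one condition exceeds a score from the other. The sketch below estimates that probability (counting ties as one half) and applies the Brunner–Munzel test available in SciPy, which is one robust test of stochastic equality but not necessarily among the specific tests evaluated in the article; the example data are hypothetical.

```python
import numpy as np
from scipy import stats

def prob_x_less_than_y(x, y):
    """Estimate p = P(X < Y) + 0.5 * P(X = Y); stochastic equality means p = 0.5."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    diff = y[None, :] - x[:, None]                 # all pairwise comparisons
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 25)
y = rng.normal(0.3, 2.0, 25)                       # shifted and more variable

print("p-hat =", prob_x_less_than_y(x, y))
# Brunner-Munzel test of H0: p = 0.5 (stochastic equality).
print(stats.brunnermunzel(x, y))
```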

16.
The authors conducted a Monte Carlo simulation of 8 statistical tests for comparing dependent zero-order correlations. In particular, they evaluated the Type I error rates and power of a number of test statistics for sample sizes (Ns) of 20, 50, 100, and 300 under 3 different population distributions (normal, uniform, and exponential). For the Type I error rate analyses, the authors evaluated 3 different magnitudes of the predictor-criterion correlations (rho(y,x1) = rho(y,x2) = .1, .4, and .7). For the power analyses, they examined 3 different effect sizes or magnitudes of discrepancy between rho(y,x1) and rho(y,x2) (values of .1, .3, and .6). They conducted all of the simulations at 3 different levels of predictor intercorrelation (rho(x1,x2) = .1, .3, and .6). The results indicated that both Type I error rate and power depend not only on sample size and population distribution, but also on (a) the predictor intercorrelation and (b) the effect size (for power) or the magnitude of the predictor-criterion correlations (for Type I error rate). When the authors considered Type I error rate and power simultaneously, the findings suggested that O. J. Dunn and V. A. Clark's (1969) z and E. J. Williams's (1959) t have the best overall statistical properties. The findings extend and refine previous simulation research and as such, should have greater utility for applied researchers.

17.
Many books on statistical methods advocate a ‘conditional decision rule’ when comparing two independent group means. This rule states that the decision as to whether to use a ‘pooled variance’ test that assumes equality of variance or a ‘separate variance’ Welch t test that does not should be based on the outcome of a variance equality test. In this paper, we empirically examine the Type I error rate of the conditional decision rule using four variance equality tests and compare this error rate to the unconditional use of either of the t tests (i.e. irrespective of the outcome of a variance homogeneity test) as well as several resampling‐based alternatives when sampling from 49 distributions varying in skewness and kurtosis. Several unconditional tests including the separate variance test performed as well as or better than the conditional decision rule across situations. These results extend and generalize the findings of previous researchers who have argued that the conditional decision rule should be abandoned.
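
The conditional decision rule discussed above can be sketched as follows, using the Brown–Forsythe (median-centered Levene) test as the preliminary variance-equality check; the unconditional Welch test is simply the separate-variance test applied regardless of that check. The choice of preliminary test and the .05 gate are illustrative and are not necessarily among the four variance tests examined in the paper.

```python
import numpy as np
from scipy import stats

def conditional_t_test(x, y, gate_alpha=0.05):
    """Conditional decision rule: choose pooled vs. Welch t based on a variance test."""
    # Brown-Forsythe (median-centered Levene) test for equal variances.
    var_p = stats.levene(x, y, center='median').pvalue
    equal_var = var_p >= gate_alpha
    t, p = stats.ttest_ind(x, y, equal_var=equal_var)
    return {"variance_test_p": var_p, "pooled_used": equal_var, "t": t, "p": p}

def unconditional_welch(x, y):
    """Always use the separate-variance (Welch) t test."""
    return stats.ttest_ind(x, y, equal_var=False)

rng = np.random.default_rng(8)
x, y = rng.normal(0, 1, 20), rng.normal(0, 3, 40)
print(conditional_t_test(x, y))
print(unconditional_welch(x, y))
```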

18.
Empirical Type I error and power rates were estimated for (a) the doubly multivariate model, (b) the Welch-James multivariate solution developed by Keselman, Carriere and Lix (1993) using Johansen's results (1980), and for (c) the multivariate version of the modified Brown-Forsythe (1974) procedure. The performance of these procedures was investigated by testing within-blocks sources of variation in a multivariate split-plot design containing unequal covariance matrices. The results indicate that the doubly multivariate model did not provide effective Type I error control, while the Welch-James procedure provided robust and powerful tests of the within-subjects main effect; however, this approach provided liberal tests of the interaction effect. The results also indicate that the modified Brown-Forsythe procedure provided robust tests of within-subjects main and interaction effects, especially when the design was balanced or when group sizes and covariance matrices were positively paired.

19.
Researchers can adopt one of many different measures of central tendency to examine the effect of a treatment variable across groups. These include least squares means, trimmed means, M‐estimators and medians. In addition, some methods begin with a preliminary test to determine the shapes of distributions before adopting a particular estimator of the typical score. We compared a number of recently developed adaptive robust methods with respect to their ability to control Type I error and their sensitivity to detect differences between the groups when data were non‐normal and heterogeneous, and the design was unbalanced. In particular, two new approaches to comparing the typical score across treatment groups, due to Babu, Padmanabhan, and Puri, were compared to two new methods presented by Wilcox and by Keselman, Wilcox, Othman, and Fradette. The procedures examined generally resulted in good Type I error control and therefore, on the basis of this criterion, it would be difficult to recommend one method over the other. However, the power results clearly favour one of the methods presented by Wilcox and Keselman; indeed, in the vast majority of the cases investigated, this most favoured approach had substantially larger power values than the other procedures, particularly when there were more than two treatment groups.

20.
In a variety of measurement situations, the researcher may wish to compare the reliabilities of several instruments administered to the same sample of subjects. This paper presents eleven statistical procedures which test the equality of m coefficient alphas when the sample alpha coefficients are dependent. Several of the procedures are derived in detail, and numerical examples are given for two. Since all of the procedures depend on approximate asymptotic results, Monte Carlo methods are used to assess the accuracy of the procedures for sample sizes of 50, 100, and 200. Both control of Type I error and power are evaluated by computer simulation. Two of the procedures are unable to control Type I errors satisfactorily. The remaining nine procedures perform properly, but three are somewhat superior in power and Type I error control. A more detailed version of this paper is also available.
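
The coefficient being compared across instruments is coefficient (Cronbach's) alpha; a minimal sketch of its computation from a subjects-by-items score matrix follows. None of the eleven comparison procedures is reproduced here, and the simulated test data are hypothetical.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (subjects x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                                  # number of items
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)           # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item test administered to 50 subjects.
rng = np.random.default_rng(9)
true_score = rng.normal(size=(50, 1))
items = true_score + rng.normal(scale=0.8, size=(50, 5))
print(round(cronbach_alpha(items), 3))
```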
