Similar documents
20 similar documents found (search time: 46 ms)
1.
Wilcox, Keselman, Muska and Cribbie (2000) found a method for comparing the trimmed means of dependent groups that performed well in simulations, in terms of Type I errors, with a sample size as small as 21. Theory and simulations indicate that little power is lost under normality when using trimmed means rather than untrimmed means, and trimmed means can result in substantially higher power when sampling from a heavy‐tailed distribution. However, trimmed means suffer from two practical concerns described in this paper. Replacing trimmed means with a robust M‐estimator addresses these concerns, but control over the probability of a Type I error can be unsatisfactory when the sample size is small. Methods based on a simple modification of a one‐step M‐estimator that address the problems with trimmed means are examined. Several omnibus tests are compared, one of which performed well in simulations, even with a sample size of 11.
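To make the two estimators concrete, here is a minimal numpy sketch of a 20% trimmed mean and a Huber-type one-step M-estimator with the conventional cutoff k = 1.28. This is an illustrative sketch under standard definitions, not the paper's modified estimator; the data values are made up:

```python
import numpy as np

def trimmed_mean(x, prop=0.2):
    """Mean after removing the lowest and highest `prop` fraction of values."""
    x = np.sort(np.asarray(x, dtype=float))
    g = int(prop * len(x))               # number trimmed from each tail
    return x[g:len(x) - g].mean()

def one_step_m_estimator(x, k=1.28):
    """One-step M-estimator of location: values beyond k * MADN of the
    median are flagged; the remaining values plus a correction term
    form the estimate."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    madn = np.median(np.abs(x - med)) / 0.6745   # MAD rescaled for normality
    low = np.sum(x < med - k * madn)             # count flagged in lower tail
    high = np.sum(x > med + k * madn)            # count flagged in upper tail
    middle = x[(x >= med - k * madn) & (x <= med + k * madn)]
    return (k * madn * (high - low) + middle.sum()) / (len(x) - low - high)

x = np.array([1., 2., 2., 3., 3., 3., 4., 4., 5., 40.])  # one extreme outlier
tm = trimmed_mean(x, 0.2)
osm = one_step_m_estimator(x)
```

Both estimates stay near 3 despite the outlier at 40, whereas the plain mean is 6.7; that resistance to heavy tails is what drives the power results described above.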

2.
A well-known concern regarding the usual linear regression model is multicollinearity. As the strength of the association among the independent variables increases, the squared standard error of regression estimators tends to increase, which can seriously impact power. This paper examines heteroscedastic methods for dealing with this issue when testing the hypothesis that all of the slope parameters are equal to zero via a robust ridge estimator that guards against outliers in the dependent variable. Included are results related to leverage points, meaning outliers among the independent variables. In various situations, the proposed method increases power substantially.
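As a sketch of the core idea only (ordinary ridge regression, not the paper's robust, outlier-resistant version), adding lam·I to X'X before solving stabilizes the slope estimates when predictors are nearly collinear. The simulated data and ridge constant below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.standard_normal(n)
x2 = x1 + 0.05 * rng.standard_normal(n)   # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.standard_normal(n)

def ridge(X, y, lam):
    """Ridge estimator (X'X + lam I)^{-1} X'y; lam = 0 gives least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)      # unstable: x1 and x2 are nearly redundant
b_ridge = ridge(X, y, 5.0)    # shrunk toward zero, much less variable
```

The ridge coefficient vector is always no longer than the least-squares vector, which is the stabilizing effect the paper exploits before making it robust to outliers.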

3.
The statistical significance levels of the Wilcoxon-Mann-Whitney test and the Kruskal-Wallis test are substantially biased by heterogeneous variances of treatment groups—even when sample sizes are equal. Under these conditions, the Type I error probabilities of the nonparametric tests, performed at the .01, .05, and .10 significance levels, increase by as much as 40%-50% in many cases and sometimes as much as 300%. The bias increases systematically as the ratio of standard deviations of treatment groups increases and remains fairly constant for various sample sizes. There is no indication that Type I error probabilities approach the significance level asymptotically as sample size increases.
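The kind of bias reported here can be reproduced with a small Monte Carlo check. The settings below are illustrative (equal means so the null is true, equal n, a 4:1 ratio of standard deviations), not the study's exact design:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
reps, n = 4000, 15
hits = 0
for _ in range(reps):
    x = rng.normal(0, 1, n)    # both groups centered at 0 (H0 true)
    y = rng.normal(0, 4, n)    # but with a 4:1 standard deviation ratio
    if mannwhitneyu(x, y, alternative='two-sided').pvalue < 0.05:
        hits += 1
rate = hits / reps             # empirical Type I error at nominal .05
```

Even with equal sample sizes, the empirical rejection rate drifts above the nominal .05 level, consistent with the 40%-50% inflation the abstract describes.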

4.
The problem of comparing two independent groups based on multivariate data is considered. Many such methods have been proposed, but it is difficult to gain a perspective on the extent to which the groups differ. The basic strategy here is to determine a robust measure of location for each group, project the data onto the line connecting these measures of location, and then compare the groups based on the ordering of the projected points. In the univariate case the method uses the same measure of effect size employed by the Wilcoxon–Mann–Whitney test. Under general conditions, the projected points are dependent, causing difficulties when testing hypotheses. Two methods are found to be effective when trying to avoid Type I error probabilities above the nominal level. The relative merits of the two methods are discussed. The projected data provide not only a useful (numerical) measure of effect size, but also a graphical indication of the extent to which groups differ.
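A stripped-down version of the projection strategy looks like the sketch below. It substitutes coordinatewise medians for the paper's robust multivariate location estimator, and it reports a plain Wilcoxon–Mann–Whitney p-value on the projected scores, which, as the abstract notes, ignores the dependence among projected points that the paper's two methods are designed to handle. The simulated groups are illustrative:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def projected_comparison(X, Y):
    """Project both samples onto the line joining the groups' locations,
    then compare the projected (univariate) scores."""
    cx, cy = np.median(X, axis=0), np.median(Y, axis=0)   # simple robust locations
    d = (cy - cx) / np.linalg.norm(cy - cx)               # unit direction between them
    px, py = X @ d, Y @ d
    effect = np.mean(px[:, None] < py[None, :])           # P(X-score < Y-score)
    pval = mannwhitneyu(px, py, alternative='two-sided').pvalue
    return effect, pval

rng = np.random.default_rng(7)
X = rng.standard_normal((40, 3))
Y = rng.standard_normal((40, 3)) + 1.0    # shifted in every coordinate
effect, pval = projected_comparison(X, Y)
```

The `effect` value is on the same probability scale as the Wilcoxon–Mann–Whitney effect size: 0.5 means complete overlap, values near 1 mean strong separation along the connecting line.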

5.
Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite—taking multiple parameter values—such as the alternative hypothesis in a t-test. We found that, for these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can, in some situations, greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
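Situation (1) can be illustrated with a toy simulation. Here the Bayes factor compares two point hypotheses with known variance, H0: mu = 0 versus H1: mu = 1, while the data actually come from mu = 0.5, a "combination" of the two. Monitoring the Bayes factor after every observation crosses an evidence threshold far more often than a single fixed-n look. All settings are illustrative, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(3)
reps, n_max, threshold = 2000, 100, 3.0
hit_monitoring = hit_fixed = 0
for _ in range(reps):
    x = rng.normal(0.5, 1.0, n_max)   # truth lies between H0 and H1
    # For N(x; 1, 1) vs N(x; 0, 1), the log likelihood ratio per
    # observation is x - 0.5, so log BF10 is a cumulative sum:
    log_bf = np.cumsum(x - 0.5)
    if np.any(log_bf > np.log(threshold)):   # stop as soon as BF10 > 3
        hit_monitoring += 1
    if log_bf[-1] > np.log(threshold):       # single look at n = 100
        hit_fixed += 1
monitoring_rate = hit_monitoring / reps
fixed_rate = hit_fixed / reps
```

Under the in-between truth the log Bayes factor is a driftless random walk, so optional stopping lets its excursions be harvested: the monitoring rate substantially exceeds the fixed-n rate, even though each Bayes factor remains correctly interpretable.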

6.
The statistical significance levels of the Wilcoxon-Mann-Whitney test and the Kruskal-Wallis test are substantially biased by heterogeneous variances of treatment groups, even when sample sizes are equal. Under these conditions, the Type I error probabilities of the nonparametric tests, performed at the .01, .05, and .10 significance levels, increase by as much as 40%-50% in many cases and sometimes as much as 300%. The bias increases systematically as the ratio of standard deviations of treatment groups increases and remains fairly constant for various sample sizes. There is no indication that Type I error probabilities approach the significance level asymptotically as sample size increases.

7.
What should we do when we discover that our assessment of probabilities is incoherent? I explore the hypothesis that there is a logic of incoherence—a set of universally valid rules that specify how incoherent probability assessments are to be repaired. I examine a pair of candidate‐rules of incoherence logic that have been employed in philosophical reconstructions of scientific arguments. Despite their intuitive plausibility, both rules turn out to be invalid. There are presently no viable candidate‐rules for an incoherence logic on the table. Other ways of dealing with incoherence are surveyed, and found either to be unsatisfactory or to rely on a logic of incoherence in the end. The resolution of these antagonistic conclusions is left to future researchers.

8.
Categorical moderators are often included in mixed-effects meta-analysis to explain heterogeneity in effect sizes. An assumption in tests of categorical moderator effects is that of a constant between-study variance across all levels of the moderator. Although it rarely receives serious thought, there can be statistical ramifications to upholding this assumption. We propose that researchers should instead default to assuming unequal between-study variances when analysing categorical moderators. To achieve this, we suggest using a mixed-effects location-scale model (MELSM) to allow group-specific estimates for the between-study variance. In two extensive simulation studies, we show that in terms of Type I error and statistical power, little is lost by using the MELSM for moderator tests, but there can be serious costs when an equal variance mixed-effects model (MEM) is used. Most notably, in scenarios with balanced sample sizes or equal between-study variance, the Type I error and power rates are nearly identical between the MEM and the MELSM. On the other hand, with imbalanced sample sizes and unequal variances, the Type I error rate under the MEM can be grossly inflated or overly conservative, whereas the MELSM does comparatively well in controlling the Type I error across the majority of cases. A notable exception where the MELSM did not clearly outperform the MEM was in the case of few studies (e.g., 5). With respect to power, the MELSM had similar or higher power than the MEM in conditions where the latter produced non-inflated Type I error rates. Together, our results support the idea that assuming unequal between-study variances is preferred as a default strategy when testing categorical moderators.

9.
Methods for comparing means are known to be highly nonrobust in terms of Type II errors. The problem is that slight shifts from normal distributions toward heavy-tailed distributions inflate the standard error of the sample mean. In contrast, the standard error of various robust measures of location, such as the one-step M-estimator, are relatively unaffected by heavy tails. Wilcox recently examined a method of comparing the one-step M-estimators of location corresponding to two independent groups which provided good control over the probability of a Type I error even for unequal sample sizes, unequal variances, and different shaped distributions. There is a fairly obvious extension of this procedure to pairwise comparisons of more than two independent groups, but simulations reported here indicate that it is unsatisfactory. A slight modification of the procedure is found to give much better results, although some caution must be taken when there are unequal sample sizes and light-tailed distributions. An omnibus test is examined as well.

10.
We examined nine adaptive methods of trimming, that is, methods that empirically determine when data should be trimmed and the amount to be trimmed from the tails of the empirical distribution. Over the 240 empirical values collected for each method investigated, in which we varied the total percentage of data trimmed, sample size, degree of variance heterogeneity, pairing of variances and group sizes, and population shape, one method resulted in exceptionally good control of Type I errors. However, under less extreme cases of non‐normality and variance heterogeneity a number of methods exhibited reasonably good Type I error control. With regard to the power to detect non‐null treatment effects, we found that the choice among the methods depended on the degree of non‐normality and variance heterogeneity. Recommendations are offered.

11.
The Type I error probability and the power of the independent samples t test, performed directly on the ranks of scores in combined samples in place of the original scores, are known to be the same as those of the non‐parametric Wilcoxon–Mann–Whitney (WMW) test. In the present study, simulations revealed that these probabilities remain essentially unchanged when the number of ranks is reduced by assigning the same rank to multiple ordered scores. For example, if 200 ranks are reduced to as few as 20, or 10, or 5 ranks by replacing sequences of consecutive ranks by a single number, the Type I error probability and power stay about the same. Significance tests performed on these modular ranks consistently reproduce familiar findings about the comparative power of the t test and the WMW tests for normal and various non‐normal distributions. Similar results are obtained for modular ranks used in comparing the one‐sample t test and the Wilcoxon signed ranks test.
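The construction is easy to state in code: pool the samples, rank them, collapse runs of consecutive ranks into a handful of "modular" ranks, and run an ordinary two-sample t test on those. A sketch under that reading of the abstract (bin counts and data are illustrative):

```python
import numpy as np
from scipy.stats import rankdata, ttest_ind

def modular_rank_ttest(x, y, n_bins):
    """t test on combined-sample ranks after collapsing N ranks into
    n_bins modular ranks (consecutive ranks share a single value)."""
    combined = np.concatenate([x, y])
    r = rankdata(combined)                    # ranks 1..N (midranks for ties)
    m = np.ceil(r * n_bins / len(combined))   # collapse to n_bins values
    return ttest_ind(m[:len(x)], m[len(x):])

rng = np.random.default_rng(5)
x = rng.standard_normal(200)
y = rng.standard_normal(200) + 0.7            # shifted group
res_full = modular_rank_ttest(x, y, 400)      # all 400 distinct ranks
res_coarse = modular_rank_ttest(x, y, 10)     # only 10 modular ranks
```

With a genuine shift between groups, both the full-rank and the coarse 10-bin versions detect it, mirroring the abstract's finding that power is essentially unchanged by the collapsing.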

12.
Experience with real data indicates that psychometric measures often have heavy-tailed distributions. This is known to be a serious problem when comparing the means of two independent groups because heavy-tailed distributions can have a serious effect on power. Another problem that is common in some areas is outliers. This paper suggests an approach to these problems based on the one-step M-estimator of location. Simulations indicate that the new procedure provides very good control over the probability of a Type I error even when distributions are skewed, have different shapes, and the variances are unequal. Moreover, the new procedure has considerably more power than Welch's method when distributions have heavy tails, and it compares well to Yuen's method for comparing trimmed means. Wilcox's median procedure has about the same power as the proposed procedure, but Wilcox's method is based on a statistic that has a finite sample breakdown point of only 1/n, where n is the sample size. Comments on other methods for comparing groups are also included.

13.
This paper reports on a simulation study that evaluated the performance of five structural equation model test statistics appropriate for categorical data. Both Type I error rate and power were investigated. Different model sizes, sample sizes, numbers of categories, and threshold distributions were considered. Statistics associated with both the diagonally weighted least squares (cat‐DWLS) estimator and with the unweighted least squares (cat‐ULS) estimator were studied. Recent research suggests that cat‐ULS parameter estimates and robust standard errors slightly outperform cat‐DWLS estimates and robust standard errors (Forero, Maydeu‐Olivares, & Gallardo‐Pujol, 2009). The findings of the present research suggest that the mean‐ and variance‐adjusted test statistic associated with the cat‐ULS estimator performs best overall. A new version of this statistic now exists that does not require a degrees‐of‐freedom adjustment (Asparouhov & Muthén, 2010), and this statistic is recommended. Overall, the cat‐ULS estimator is recommended over cat‐DWLS, particularly in small to medium sample sizes.

14.
Adverse impact evaluations often call for evidence that the disparity between groups in selection rates is statistically significant, and practitioners must choose which test statistic to apply in this situation. To identify the most effective testing procedure, the authors compared several alternate test statistics in terms of Type I error rates and power, focusing on situations with small samples. Significance testing was found to be of limited value because of low power for all tests. Among the alternate test statistics, the widely-used Z-test on the difference between two proportions performed reasonably well, except when sample size was extremely small. A test suggested by G. J. G. Upton (1982) provided slightly better control of Type I error under some conditions but generally produced results similar to the Z-test. Use of the Fisher Exact Test and Yates's continuity-corrected chi-square test are not recommended because of overly conservative Type I error rates and substantially lower power than the Z-test.
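For reference, the pooled two-proportion Z-test examined here has the standard form below (stdlib-only sketch; the selection counts are made up for illustration):

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Pooled Z-test for H0: p1 = p2, two-sided normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion under H0
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))  # standard normal CDF
    return z, 2 * (1 - phi(abs(z)))               # statistic, two-sided p

z, pval = two_proportion_z(30, 100, 50, 100)      # selection rates .30 vs .50
```

With 30/100 versus 50/100 selected, z is about -2.89 and the two-sided p is about .004; with the very small samples the paper focuses on, the same disparity in rates can easily fail to reach significance, which is the low-power problem the authors report.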

15.
The purpose of this study was to evaluate a modified test of equivalence for conducting normative comparisons when distribution shapes are non‐normal and variances are unequal. A Monte Carlo study was used to compare the empirical Type I error rates and power of the proposed Schuirmann–Yuen test of equivalence, which utilizes trimmed means, with that of the previously recommended Schuirmann and Schuirmann–Welch tests of equivalence when the assumptions of normality and variance homogeneity are satisfied, as well as when they are not satisfied. The empirical Type I error rates of the Schuirmann–Yuen were much closer to the nominal α level than those of the Schuirmann or Schuirmann–Welch tests, and the power of the Schuirmann–Yuen was substantially greater than that of the Schuirmann or Schuirmann–Welch tests when distributions were skewed or outliers were present. The Schuirmann–Yuen test is recommended for assessing clinical significance with normative comparisons.
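A Schuirmann-style two one-sided tests (TOST) procedure built on Yuen's trimmed-mean statistic can be sketched as follows. This is an illustrative reconstruction from the standard Yuen and TOST formulas, not the authors' code; the equivalence margin and data are made up:

```python
import numpy as np
from scipy.stats import t as tdist

def trim_stats(x, prop=0.2):
    """Trimmed mean, Yuen's squared-SE term, and effective sample size."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    g = int(prop * n)
    tmean = x[g:n - g].mean()
    w = x.copy()
    w[:g], w[n - g:] = x[g], x[n - g - 1]          # winsorize the tails
    h = n - 2 * g
    d = (n - 1) * w.var(ddof=1) / (h * (h - 1))    # Yuen's variance term
    return tmean, d, h

def schuirmann_yuen(x, y, delta, prop=0.2):
    """TOST on trimmed means: conclude equivalence when both one-sided
    Yuen tests against the margins -delta and +delta are significant."""
    t1, d1, h1 = trim_stats(x, prop)
    t2, d2, h2 = trim_stats(y, prop)
    se = np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    diff = t1 - t2
    p_low = 1 - tdist.cdf((diff + delta) / se, df)   # H0: diff <= -delta
    p_high = tdist.cdf((diff - delta) / se, df)      # H0: diff >= +delta
    return max(p_low, p_high)                        # equivalence if < alpha

rng = np.random.default_rng(11)
p = schuirmann_yuen(rng.standard_normal(40), rng.standard_normal(40), delta=1.0)
```

Because the trimmed means and winsorized variances ignore the tails, this statistic keeps its nominal level when the samples are skewed or contain outliers, which is why the abstract recommends it over the mean-based Schuirmann variants.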

16.
Because effect size estimates in psychology are often inaccurate, designing studies with high statistical power poses a practical challenge. This challenge can be addressed by performing sequential analyses while the data collection is still in progress. At an interim analysis, data collection can be stopped whenever the results are convincing enough to conclude that an effect is present, more data can be collected, or the study can be terminated whenever it is extremely unlikely that the predicted effect will be observed if data collection were continued. Such interim analyses can be performed while controlling the Type I error rate. Sequential analyses can greatly improve the efficiency with which data are collected. Additional flexibility is provided by adaptive designs where sample sizes are increased on the basis of the observed effect size. The need for pre‐registration, ways to prevent experimenter bias, and a comparison between Bayesian approaches and null‐hypothesis significance testing (NHST) are discussed. Sequential analyses, which are widely used in large‐scale medical trials, provide an efficient way to perform high‐powered informative experiments. I hope this introduction will provide a practical primer that allows researchers to incorporate sequential analyses in their research. Copyright © 2014 John Wiley & Sons, Ltd.
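Why interim looks need an adjusted threshold can be shown with a short simulation: testing at .05 at every look inflates the overall Type I error, while a Pocock-style boundary (nominal alpha of about .0221 per look for three looks) restores it. The look schedule and replication count are illustrative:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
looks = (20, 40, 60)            # interim analyses at these per-group sizes
reps = 2000

def rejection_rate(alpha):
    """Share of null experiments declared significant at any of the looks."""
    hits = 0
    for _ in range(reps):
        x, y = rng.standard_normal(60), rng.standard_normal(60)  # H0 true
        if any(ttest_ind(x[:n], y[:n]).pvalue < alpha for n in looks):
            hits += 1
    return hits / reps

naive = rejection_rate(0.05)     # uncorrected .05 at every look
pocock = rejection_rate(0.0221)  # Pocock boundary for 3 looks, overall .05
```

The uncorrected strategy rejects a true null well above 5% of the time, while the adjusted boundary brings the overall rate back near the nominal level; this is the Type I error control the primer describes.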

17.
Researchers can adopt one of many different measures of central tendency to examine the effect of a treatment variable across groups. These include least squares means, trimmed means, M‐estimators and medians. In addition, some methods begin with a preliminary test to determine the shapes of distributions before adopting a particular estimator of the typical score. We compared a number of recently developed adaptive robust methods with respect to their ability to control Type I error and their sensitivity to detect differences between the groups when data were non‐normal and heterogeneous, and the design was unbalanced. In particular, two new approaches to comparing the typical score across treatment groups, due to Babu, Padmanabhan, and Puri, were compared to two new methods presented by Wilcox and by Keselman, Wilcox, Othman, and Fradette. The procedures examined generally resulted in good Type I error control and therefore, on the basis of this criterion, it would be difficult to recommend one method over the others. However, the power results clearly favour one of the methods presented by Wilcox and Keselman; indeed, in the vast majority of the cases investigated, this most favoured approach had substantially larger power values than the other procedures, particularly when there were more than two treatment groups.

18.
Quantiles are widely used in both theoretical and applied statistics, and it is important to be able to deploy appropriate quantile estimators. To improve performance in the lower and upper quantiles, especially with small sample sizes, a new quantile estimator is introduced which is a weighted average of all order statistics. The new estimator, denoted NO, has desirable asymptotic properties. Moreover, in most experimental settings it offers efficiency advantages over four other estimators: the Harrell–Davis quantile estimator, the default quantile estimator of the R programming language, the Sfakianakis–Verginis SV2 quantile estimator, and a kernel quantile estimator. The NO quantile estimator is also utilized in comparing two independent groups with a percentile bootstrap method and, as expected, it is more successful than other estimators in controlling Type I error rates.
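The abstract does not give the NO estimator's weights, but the comparison estimator it names, Harrell–Davis, illustrates the "weighted average of all order statistics" idea: each order statistic receives a weight from increments of a Beta((n+1)q, (n+1)(1-q)) distribution function. A sketch:

```python
import numpy as np
from scipy.stats import beta

def harrell_davis(x, q):
    """Harrell-Davis estimator of the q-th quantile: a weighted average of
    all order statistics, with Beta((n+1)q, (n+1)(1-q)) cdf-increment weights."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    a, b = (n + 1) * q, (n + 1) * (1 - q)
    i = np.arange(1, n + 1)
    w = beta.cdf(i / n, a, b) - beta.cdf((i - 1) / n, a, b)  # weights sum to 1
    return float(np.sum(w * x))

vals = np.arange(1.0, 10.0)   # 1..9, symmetric around 5
med = harrell_davis(vals, 0.5)
```

Because every order statistic contributes, the estimate varies smoothly with q, which is what gives this family its efficiency advantage over single-order-statistic quantile estimators in small samples.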

19.
A composite step‐down procedure, in which a set of step‐down tests are summarized collectively with Fisher's combination statistic, was considered to test for multivariate mean equality in two‐group designs. An approximate degrees of freedom (ADF) composite procedure based on trimmed/Winsorized estimators and a non‐pooled estimate of error variance is proposed, and compared to a composite procedure based on trimmed/Winsorized estimators and a pooled estimate of error variance. The step‐down procedures were also compared to Hotelling's T2 and Johansen's ADF global procedure based on trimmed estimators in a simulation study. Type I error rates of the pooled step‐down procedure were sensitive to covariance heterogeneity in unbalanced designs; error rates were similar to those of Hotelling's T2 across all of the investigated conditions. Type I error rates of the ADF composite step‐down procedure were insensitive to covariance heterogeneity and less sensitive to the number of dependent variables when sample size was small than error rates of Johansen's test. The ADF composite step‐down procedure is recommended for testing hypotheses of mean equality in two‐group designs except when the data are sampled from populations with different degrees of multivariate skewness.

20.
A problem arises in analyzing the existence of interdependence between the behavioral sequences of two individuals: tests involving a statistic such as chi-square assume independent observations within each behavioral sequence, a condition which may not exist in actual practice. Using Monte Carlo simulations of binomial data sequences, we found that the use of a chi-square test frequently results in unacceptable Type I error rates when the data sequences are autocorrelated. We compared these results to those from two other methods designed specifically for testing for intersequence independence in the presence of intrasequence autocorrelation. The first method directly tests the intersequence correlation using an approximation of the variance of the intersequence correlation estimated from the sample autocorrelations. The second method uses tables of critical values of the intersequence correlation computed by Nakamura et al. (J. Am. Stat. Assoc., 1976, 71, 214–222). Although these methods were originally designed for normally distributed data, we found that both methods produced much better results than the uncorrected chi-square test when applied to binomial autocorrelated sequences. The superior method appears to be the variance approximation method, which resulted in Type I error rates that were generally less than or equal to 5% when the level of significance was set at .05.
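The failure of the uncorrected chi-square test is easy to reproduce: generate two mutually independent but internally autocorrelated binary sequences (here via a two-state Markov chain, a design chosen for illustration, not the paper's exact simulation) and tabulate how often the paired-state contingency test rejects:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)

def markov_binary(n, stay=0.9):
    """Binary sequence from a symmetric two-state Markov chain; stay > 0.5
    yields positive autocorrelation within the sequence."""
    s = np.empty(n, dtype=int)
    s[0] = rng.integers(2)
    for t in range(1, n):
        s[t] = s[t - 1] if rng.random() < stay else 1 - s[t - 1]
    return s

reps, hits = 500, 0
for _ in range(reps):
    a, b = markov_binary(200), markov_binary(200)   # independent of each other
    table = [[int(np.sum((a == i) & (b == j))) for j in (0, 1)] for i in (0, 1)]
    _, p, _, _ = chi2_contingency(table)
    if p < 0.05:
        hits += 1
rate = hits / reps      # empirical Type I error at nominal .05
```

Because autocorrelation shrinks the effective number of independent observations while the chi-square statistic still uses the full sequence length, the rejection rate climbs far above the nominal 5%, matching the paper's finding.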


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号