Similar Documents
20 similar documents retrieved.
1.
A composite step‐down procedure, in which a set of step‐down tests are summarized collectively with Fisher's combination statistic, was considered to test for multivariate mean equality in two‐group designs. An approximate degrees of freedom (ADF) composite procedure based on trimmed/Winsorized estimators and a non‐pooled estimate of error variance is proposed, and compared to a composite procedure based on trimmed/Winsorized estimators and a pooled estimate of error variance. The step‐down procedures were also compared to Hotelling's T² and Johansen's ADF global procedure based on trimmed estimators in a simulation study. Type I error rates of the pooled step‐down procedure were sensitive to covariance heterogeneity in unbalanced designs; error rates were similar to those of Hotelling's T² across all of the investigated conditions. Type I error rates of the ADF composite step‐down procedure were insensitive to covariance heterogeneity and, when sample size was small, less sensitive to the number of dependent variables than the error rates of Johansen's test. The ADF composite step‐down procedure is recommended for testing hypotheses of mean equality in two‐group designs except when the data are sampled from populations with different degrees of multivariate skewness.
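As a sketch of the combining step only: Fisher's combination statistic pools a set of step-down p-values into a single chi-square test. The p-values below are illustrative, not from the study, and the trimmed/Winsorized ADF machinery is not shown.

```python
from scipy import stats

# Hypothetical p-values from a set of step-down tests on three dependent variables.
step_down_pvalues = [0.041, 0.170, 0.008]

# Fisher's combination statistic: -2 * sum(ln p_i),
# chi-square with 2q degrees of freedom under the joint null.
chi2_stat, combined_p = stats.combine_pvalues(step_down_pvalues, method="fisher")
print(f"Fisher chi-square = {chi2_stat:.3f}, combined p = {combined_p:.4f}")
```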

2.
Empirical Type I error and power rates were estimated for (a) the doubly multivariate model, (b) the Welch-James multivariate solution developed by Keselman, Carriere and Lix (1993) using Johansen's (1980) results, and (c) the multivariate version of the modified Brown-Forsythe (1974) procedure. The performance of these procedures was investigated by testing within-blocks sources of variation in a multivariate split-plot design containing unequal covariance matrices. The results indicate that the doubly multivariate model did not provide effective Type I error control, while the Welch-James procedure provided robust and powerful tests of the within-subjects main effect; however, this approach provided liberal tests of the interaction effect. The results also indicate that the modified Brown-Forsythe procedure provided robust tests of within-subjects main and interaction effects, especially when the design was balanced or when group sizes and covariance matrices were positively paired.

3.
A Monte Carlo simulation was conducted to compare five pairwise multiple comparison procedures. The number of means varied from 4 to 6 and the sample size ratio varied from 1 to 60. Procedures were evaluated on the basis of Type I errors, any‐pair power and all‐pairs power. Four procedures were shown to be conservative, while the fifth provided adequate control of Type I errors only for restricted values of sample size ratios. No procedure was found to be uniformly most powerful. The Tukey‐Kramer procedure was found to provide the best any‐pair power when applied without requiring a significant overall F test. In most cases, the Hayter‐Fisher modification of the Tukey‐Kramer was found to provide very good any‐pair power and to be uniformly more powerful than the Tukey‐Kramer when a significant overall F test is required. A partition‐based version of Peritz's method usually provided the greatest all‐pairs power. A modification of the Shaffer‐Welsch procedure was found to be useful in certain conditions.
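For readers who want to try the Tukey-Kramer procedure itself, a minimal sketch using statsmodels with made-up data (the group means, sizes, and seed are illustrative only, not the study's simulation design):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Illustrative unbalanced design: four groups, sample size ratio up to 60.
sizes = [5, 20, 60, 300]
means = [0.0, 0.0, 0.3, 0.3]
scores = np.concatenate([rng.normal(loc=m, size=n) for m, n in zip(means, sizes)])
groups = np.repeat(["g1", "g2", "g3", "g4"], sizes)

# Tukey-Kramer: studentized-range critical values with the unequal-n adjustment.
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```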

4.
We discuss the statistical testing of three relevant hypotheses involving Cronbach's alpha: one where alpha equals a particular criterion; a second testing the equality of two alpha coefficients for independent samples; and a third testing the equality of two alpha coefficients for dependent samples. For each of these hypotheses, various statistical tests have been proposed. Over the years, these tests have depended on progressively fewer assumptions. We propose a new approach to testing the three hypotheses that relies on even fewer assumptions, is especially suited for discrete item scores, and can be applied easily to tests containing large numbers of items. The new approach uses marginal modelling. We compared the Type I error rate and the power of the marginal modelling approach to several of the available tests in a simulation study using realistic conditions. We found that the marginal modelling approach had the most accurate Type I error rates, whereas the power was similar across the statistical tests.
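The quantity under test is Cronbach's alpha; as a reference point, a minimal sketch of the coefficient itself (the marginal-modelling tests proposed in the paper are not shown, and the simulated item scores are illustrative):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_persons, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=1.0, size=(200, 5))  # five noisy indicators
print(f"alpha = {cronbach_alpha(items):.3f}")
```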

5.
Although many common uses of p-values for making statistical inferences in contemporary scientific research have been shown to be invalid, no one, to our knowledge, has adequately assessed the main original justification for their use, which is that they can help to control the Type I error rate (Neyman & Pearson, 1928, 1933). We address this issue head-on by asking a specific question: Across what domain, specifically, do we wish to control the Type I error rate? For example, do we wish to control it across all of science, across all of a specific discipline such as psychology, across a researcher's active lifetime, across a substantive research area, across an experiment, or across a set of hypotheses? In attempting to answer these questions, we show that each one leads to troubling dilemmas wherein controlling the Type I error rate turns out to be inconsistent with other scientific desiderata. This inconsistency implies that we must make a choice. In our view, the other scientific desiderata are much more valuable than controlling the Type I error rate and so it is the latter, rather than the former, with which we must dispense. But by doing so—that is, by eliminating the Type I error justification for computing and using p-values—there is even less reason to believe that p is useful for validly rejecting null hypotheses than previous critics have suggested.

6.
Two types of global testing procedures for item fit to the Rasch model were evaluated using simulation studies. The first type incorporates three tests based on first‐order statistics: van den Wollenberg's Q1 test, Glas's R1 test, and Andersen's LR test. The second type incorporates three tests based on second‐order statistics: van den Wollenberg's Q2 test, Glas's R2 test, and a non‐parametric test proposed by Ponocny. The Type I error rates and the power against the violation of parallel item response curves, unidimensionality and local independence were analysed in relation to sample size and test length. In general, the outcomes indicate a satisfactory performance of all tests, except the Q2 test, which exhibits an inflated Type I error rate. Further, it was found that both types of tests have power against all three types of model violation. A possible explanation lies in the interdependencies among the assumptions underlying the model.

7.
This essay responds to Daniel O'Keefe's “Against Familywise Alpha Adjustment.” Whereas O'Keefe maintains that one should never attempt to control Type I error introduced when many statistical tests are conducted, this essay argues that alpha adjustment should be applied only in the narrowly circumscribed instance when the researcher wants to make a strong claim that there is no Type I error in a specific collection of tests.
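As a concrete illustration of what familywise alpha adjustment does, here is a sketch using Holm's step-down method as one common choice; the p-values are hypothetical and not tied to the essay:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from a family of tests in one study.
pvals = [0.003, 0.021, 0.048, 0.310]

# Holm step-down adjustment controls the familywise Type I error rate at alpha.
reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(list(zip(pvals, p_adjusted.round(3), reject)))
```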

8.
A Monte Carlo simulation was conducted to compare pairwise multiple comparison procedures. The number of means varied from 4 to 8 and the sample sizes varied from 2 to 500. Procedures were evaluated on the basis of Type I errors, any‐pair power and all‐pairs power. Two modifications of the Games and Howell procedure were shown to make it conservative. No procedure was found to be uniformly most powerful. For any‐pair power, the Games and Howell procedure was found to be generally most powerful, even when applied at more stringent levels to control Type I errors. For all‐pairs power, the Peritz procedure applied with modified Brown–Forsythe tests was found to be most powerful in most conditions.
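A sketch of the core Games-Howell computation for a single pair, assuming scipy's studentized-range distribution is available (scipy >= 1.7); this is an illustrative re-implementation, not the simulation code of the study:

```python
import numpy as np
from scipy import stats

def games_howell_pair(x, y, k):
    """p-value for one of the k(k-1)/2 pairwise comparisons among k means:
    Welch standard error and df, referred to the studentized-range distribution."""
    n1, n2 = len(x), len(y)
    v1, v2 = x.var(ddof=1) / n1, y.var(ddof=1) / n2
    # Welch-Satterthwaite degrees of freedom for this pair
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    q = abs(x.mean() - y.mean()) / np.sqrt((v1 + v2) / 2)  # studentized-range scale
    return stats.studentized_range.sf(q, k, df)

rng = np.random.default_rng(2)
a, b = rng.normal(0, 1, 10), rng.normal(0.8, 3, 40)  # unequal n and unequal variance
print(f"p = {games_howell_pair(a, b, k=4):.4f}")
```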

9.
Preliminary tests of equality of variances used before a test of location are no longer widely recommended by statisticians, although they persist in some textbooks and software packages. The present study extends the findings of previous studies and provides further reasons for discontinuing the use of preliminary tests. The study found Type I error rates of a two‐stage procedure, consisting of a preliminary Levene test on samples of different sizes with unequal variances, followed by either a Student pooled‐variances t test or a Welch separate‐variances t test. Simulations disclosed that the two‐stage procedure fails to protect the significance level and usually makes the situation worse. Earlier studies have shown that preliminary tests often adversely affect the size of the test, and also that the Welch test is superior to the t test when variances are unequal. The present simulations reveal that changes in Type I error rates are greater when sample sizes are smaller, when the difference in variances is slight rather than extreme, and when the significance level is more stringent. Furthermore, the validity of the Welch test deteriorates if it is used only on those occasions where a preliminary test indicates it is needed. Optimum protection is assured by using a separate‐variances test unconditionally whenever sample sizes are unequal.
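The two-stage procedure under criticism is easy to state in code; a minimal sketch (illustrative data, scipy for both tests):

```python
import numpy as np
from scipy import stats

def two_stage_t(x, y, alpha=0.05):
    """The two-stage procedure the study argues against: a preliminary Levene
    test chooses between the pooled-variances and Welch separate-variances t tests."""
    _, p_levene = stats.levene(x, y)
    equal_var = p_levene > alpha          # 'variances look equal' -> pool
    return stats.ttest_ind(x, y, equal_var=equal_var)

rng = np.random.default_rng(3)
x, y = rng.normal(0, 1, 10), rng.normal(0, 2, 40)   # unequal n with unequal variances
print(two_stage_t(x, y))
# The study's recommendation with unequal n: skip the preliminary test and
# use the Welch test unconditionally, i.e. stats.ttest_ind(x, y, equal_var=False).
```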

10.
In a computer simulation study, random samples from a uniform density were substituted for each of two independent samples from normal and various nonnormal densities. This procedure was compared with conventional ranking and with Bell and Doksum's (1965) procedure, which substituted random normal deviates for initial sample values. After performing the Student t test, the program transformed the initial scores and performed additional t tests on ranks, random uniform scores, and random normal scores. For several distributions, the test on random normal scores was more powerful than the others, consistent with known asymptotic results. The probabilities of Type I and Type II errors of the test on random uniform scores were nearly the same as those of the Mann-Whitney-Wilcoxon test, for all distributions examined.
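A minimal sketch of the normal-scores substitution in the style of Bell and Doksum (swap in rng.uniform for the random uniform scores variant); the data and seed are illustrative:

```python
import numpy as np
from scipy import stats

def normal_scores_t(x, y, rng):
    """Bell-Doksum style test: replace the pooled observations, by rank,
    with sorted random normal deviates, then run the two-sample t test.
    Using rng.uniform instead gives the random uniform scores variant."""
    pooled = np.concatenate([x, y])
    ranks = stats.rankdata(pooled, method="ordinal").astype(int) - 1
    scores = np.sort(rng.normal(size=pooled.size))  # sorted random normal scores
    transformed = scores[ranks]
    return stats.ttest_ind(transformed[: len(x)], transformed[len(x):])

rng = np.random.default_rng(4)
x, y = rng.exponential(1.0, 25), rng.exponential(1.0, 25) + 0.5  # skewed populations
print(normal_scores_t(x, y, rng))
```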

11.
Score tests for identifying locally dependent item pairs have been proposed for binary item response models. In this article, both the bifactor and the threshold shift score tests are generalized to the graded response model. For the bifactor test, the generalization is straightforward; it adds one secondary dimension associated only with one pair of items. For the threshold shift test, however, multiple generalizations are possible: in particular, conditional, uniform, and linear shift tests are discussed in this article. Simulation studies show that all of the score tests have accurate Type I error rates given large enough samples, although their small‐sample behaviour is not as good as that of Pearson's Χ² and M2 as proposed in other studies for the purpose of local dependence (LD) detection. All score tests have the highest power to detect the LD that is consistent with their parametric form, and in this case they are uniformly more powerful than Χ² and M2; even wrongly specified score tests are more powerful than Χ² and M2 in most conditions. An example using empirical data is provided for illustration.

12.
The goal of this study was to investigate the performance of Hall’s transformation of the Brunner-Dette-Munk (BDM) and Welch-James (WJ) test statistics and the Box-Cox data transformation in factorial designs when normality and variance homogeneity assumptions were violated separately and jointly. On the basis of unweighted marginal means, we performed a simulation study to explore the operating characteristics of the methods proposed for a variety of distributions with small sample sizes. Monte Carlo simulation results showed that when data were sampled from symmetric distributions, the error rates of the original BDM and WJ tests were scarcely affected by the lack of normality and homogeneity of variance. In contrast, when data were sampled from skewed distributions, the original BDM and WJ rates were not well controlled. Under such circumstances, the results clearly revealed that Hall’s transformation of the BDM and WJ tests provided generally better control of Type I error rates than did the same tests based on the Box-Cox data transformation. Among all the methods considered in this study, we also found that Hall’s transformation of the BDM test yielded the best control of Type I errors, although it was often less powerful than either of the WJ tests when both approaches reasonably controlled the error rates.
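Only the Box-Cox step is easy to show compactly; a minimal sketch (Hall's transformation of the BDM and WJ statistics is not reproduced here, and the lognormal sample is illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=100)   # positively skewed sample

# Box-Cox requires strictly positive data; lambda is chosen by maximum likelihood.
transformed, lam = stats.boxcox(skewed)
print(f"lambda = {lam:.3f}, skewness before/after = "
      f"{stats.skew(skewed):.2f} / {stats.skew(transformed):.2f}")
```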

13.
For one‐way fixed effects ANOVA, it is well known that the conventional F test of the equality of means is not robust to unequal variances, and numerous methods have been proposed for dealing with heteroscedasticity. On the basis of extensive empirical evidence of Type I error control and power performance, Welch's procedure is frequently recommended as the major alternative to the ANOVA F test under variance heterogeneity. To enhance its practical usefulness, this paper considers an important aspect of Welch's method in determining the sample size necessary to achieve a given power. Simulation studies are conducted to compare two approximate power functions of Welch's test for their accuracy in sample size calculations over a wide variety of model configurations with heteroscedastic structures. The numerical investigations show that Levy's (1978a) approach is clearly more accurate than the formula of Luh and Guo (2011) for the range of model specifications considered here. Accordingly, computer programs are provided to implement the technique recommended by Levy for power calculation and sample size determination within the context of the one‐way heteroscedastic ANOVA model.
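A sketch of Welch's heteroscedastic one-way ANOVA statistic itself, implemented from the standard formulas (the power-function comparison of Levy versus Luh and Guo is not shown; the groups are illustrative):

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's heteroscedastic one-way ANOVA (Welch, 1951)."""
    k = len(groups)
    n = np.array([len(g) for g in groups])
    means = np.array([np.mean(g) for g in groups])
    w = n / np.array([np.var(g, ddof=1) for g in groups])  # precision weights
    grand = np.sum(w * means) / np.sum(w)                  # weighted grand mean
    numer = np.sum(w * (means - grand) ** 2) / (k - 1)
    lam = 3 * np.sum((1 - w / w.sum()) ** 2 / (n - 1)) / (k ** 2 - 1)
    f_stat = numer / (1 + 2 * lam * (k - 2) / 3)
    df2 = 1 / lam                                          # error degrees of freedom
    return f_stat, k - 1, df2, stats.f.sf(f_stat, k - 1, df2)

rng = np.random.default_rng(6)
groups = [rng.normal(0, s, m) for s, m in zip([1, 2, 4], [10, 20, 40])]
print(welch_anova(*groups))   # (F, df1, df2, p)
```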

14.
The two‐sample Student t test of location was performed on random samples of scores and on rank‐transformed scores from normal and non‐normal population distributions with unequal variances. The same test also was performed on scores that had been explicitly selected to have nearly equal sample variances. The desired homogeneity of variance was brought about by repeatedly rejecting pairs of samples having a ratio of standard deviations that exceeded a predetermined cut‐off value of 1.1, 1.2, or 1.3, while retaining pairs with ratios less than the cut‐off value. Despite this forced conformity with the assumption of equal variances, the tests on the selected samples were no more robust than tests on unselected samples, and in most cases substantially less robust. Under conditions where sample sizes were unequal, so that Type I error rates were inflated and power curves were atypical, the selection procedure produced still greater inflation and distortion of the power curves.
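The selection procedure is straightforward to simulate; a minimal sketch that estimates the empirical Type I error rate after screening on the sample SD ratio (all parameter values are illustrative, not those of the study):

```python
import numpy as np
from scipy import stats

def selected_type1_rate(n1, n2, sd1, sd2, cutoff, n_pairs, rng):
    """Empirical Type I error rate of the pooled t test after the selection step:
    discard sample pairs whose SD ratio meets or exceeds the cutoff."""
    rejections = kept = 0
    while kept < n_pairs:
        x, y = rng.normal(0, sd1, n1), rng.normal(0, sd2, n2)
        s1, s2 = x.std(ddof=1), y.std(ddof=1)
        if max(s1, s2) / min(s1, s2) >= cutoff:
            continue                      # forced 'homogeneity of variance'
        kept += 1
        rejections += stats.ttest_ind(x, y).pvalue < 0.05
    return rejections / n_pairs

rng = np.random.default_rng(7)
print(selected_type1_rate(n1=10, n2=30, sd1=1.0, sd2=1.5, cutoff=1.3,
                          n_pairs=1000, rng=rng))
```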

15.
Random item effects models provide a natural framework for the exploration of violations of measurement invariance without the need for anchor items. Within the random item effects modelling framework, Bayesian tests (Bayes factor, deviance information criterion) are proposed which enable multiple marginal invariance hypotheses to be tested simultaneously. The performance of the tests is evaluated with a simulation study which shows that the tests have high power and low Type I error rate. Data from the European Social Survey are used to test for measurement invariance of attitude towards immigrant items and to show that background information can be used to explain cross‐national variation in item functioning.

16.
In applications of item response theory, assessment of model fit is a critical issue. Recently, limited‐information goodness‐of‐fit testing has received increased attention in the psychometrics literature. In contrast to full‐information test statistics such as Pearson’s X² or the likelihood ratio G², these limited‐information tests utilize lower‐order marginal tables rather than the full contingency table. A notable example is Maydeu‐Olivares and colleagues’ M2 family of statistics based on univariate and bivariate margins. When the contingency table is sparse, tests based on M2 retain better Type I error rate control than the full‐information tests and can be more powerful. While in principle the M2 statistic can be extended to test hierarchical multidimensional item factor models (e.g., bifactor and testlet models), the computation is non‐trivial. To obtain M2, a researcher often has to obtain (many thousands of) marginal probabilities, derivatives, and weights. Each of these must be approximated with high‐dimensional numerical integration. We propose a dimension reduction method that can take advantage of the hierarchical factor structure so that the integrals can be approximated far more efficiently. We also propose a new test statistic that can be substantially better calibrated and more powerful than the original M2 statistic when the test is long and the items are polytomous. We use simulations to demonstrate the performance of our new methods and illustrate their effectiveness with applications to real data.

17.
We study several aspects of bootstrap inference for covariance structure models based on three test statistics, including Type I error, power and sample‐size determination. Specifically, we discuss conditions for a test statistic to achieve a more accurate level of Type I error, both in theory and in practice. Details on power analysis and sample‐size determination are given. For data sets with heavy tails, we propose applying a bootstrap methodology to a transformed sample by a downweighting procedure. One of the key conditions for safe bootstrap inference is generally satisfied by the transformed sample but may not be satisfied by the original sample with heavy tails. Several data sets illustrate that, by combining downweighting and bootstrapping, a researcher may find a nearly optimal procedure for evaluating various aspects of covariance structure models. A rule for handling non‐convergence problems in bootstrap replications is proposed.
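As a loose illustration of combining downweighting with bootstrapping (a Huber-type downweighting by Mahalanobis distance and a percentile bootstrap; the paper's specific test statistics and transformation are not reproduced, and all choices below are assumptions):

```python
import numpy as np

def huber_downweight(X, c=2.5):
    """Huber-type downweighting: shrink rows with large Mahalanobis distance
    toward the mean so the transformed sample has lighter tails."""
    mu = X.mean(axis=0)
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.einsum("ij,jk,ik->i", X - mu, prec, X - mu))
    w = np.minimum(1.0, c / d)            # full weight inside, c/d outside
    return mu + w[:, None] * (X - mu)

def bootstrap_stat(X, stat, n_boot, rng):
    """Nonparametric bootstrap distribution of a statistic of the sample."""
    n = len(X)
    return np.array([stat(X[rng.integers(0, n, n)]) for _ in range(n_boot)])

rng = np.random.default_rng(8)
X = rng.standard_t(df=3, size=(200, 4))   # heavy-tailed data
Xw = huber_downweight(X)
largest_eig = lambda s: np.linalg.eigvalsh(np.cov(s, rowvar=False))[-1]
boots = bootstrap_stat(Xw, largest_eig, n_boot=500, rng=rng)
print(np.percentile(boots, [2.5, 97.5]))  # percentile interval for the statistic
```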

18.
The Type I error rates and powers of three recent tests for analyzing nonorthogonal factorial designs under departures from the assumptions of homogeneity and normality were evaluated using Monte Carlo simulation. Specifically, this work compared the performance of the modified Brown-Forsythe procedure, the generalization of Box's method proposed by Brunner, Dette, and Munk, and the mixed-model procedure adjusted by the Kenward-Roger solution available in the SAS statistical package. With regard to robustness, the three approaches adequately controlled Type I error when the data were generated from symmetric distributions; however, this study's results indicate that, when the data were extracted from asymmetric distributions, the modified Brown-Forsythe approach controlled the Type I error slightly better than the other procedures. With regard to sensitivity, the higher power rates were obtained when the analyses were done with the MIXED procedure of the SAS program. Furthermore, results also identified that, when the data were generated from symmetric distributions, little power was sacrificed by using the generalization of Box's method in place of the modified Brown-Forsythe procedure.

19.
Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
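A simplified sketch of a permutation test of ab for the single-mediator model, permuting X to generate the reference distribution (the iterative confidence-interval methods and the provided SPSS/SAS macros are not reproduced; the permutation scheme here is one simple choice, not necessarily the study's):

```python
import numpy as np

def ab_estimate(x, m, y):
    """a from the regression of M on X; b from the regression of Y on X and M."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def permutation_test_ab(x, m, y, n_perm, rng):
    """Permuting x breaks both paths, giving a reference
    distribution for ab under the null of no mediation."""
    observed = ab_estimate(x, m, y)
    null = np.array([ab_estimate(rng.permutation(x), m, y) for _ in range(n_perm)])
    return observed, np.mean(np.abs(null) >= abs(observed))  # two-sided p

rng = np.random.default_rng(9)
n = 100
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)          # a path
y = 0.4 * m + rng.normal(size=n)          # b path, no direct effect of x
print(permutation_test_ab(x, m, y, n_perm=1000, rng=rng))
```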

20.
Researchers often test for a lack of association between variables. A lack of association is usually established by demonstrating a non‐significant relationship with a traditional test (e.g., Pearson's r). However, for logical as well as statistical reasons, such conclusions are problematic. In this paper, we discuss and compare the empirical Type I error and power rates of three lack of association tests. The results indicate that large, sometimes very large, sample sizes are required for the test statistics to be appropriate. What is especially problematic is that the required sample sizes may exceed what is practically feasible for the conditions that are expected to be common among researchers in psychology. This paper highlights the importance of using available lack of association tests, instead of traditional tests of association, for demonstrating the independence of variables, and qualifies the conditions under which these tests are appropriate.
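The abstract does not name the three tests, but as one common lack-of-association (equivalence) test, here is a sketch of TOST on Fisher's z for Pearson's r; the equivalence bound of .2 is an arbitrary illustration:

```python
import numpy as np
from scipy import stats

def tost_correlation(r, n, bound=0.2):
    """Equivalence test for Pearson's r via Fisher's z: conclude 'no meaningful
    association' only if r is significantly inside (-bound, +bound).
    The TOST p-value is the larger of the two one-sided p-values."""
    se = 1 / np.sqrt(n - 3)
    z = np.arctanh(r)
    p_lower = stats.norm.sf((z - np.arctanh(-bound)) / se)  # H0: rho <= -bound
    p_upper = stats.norm.cdf((z - np.arctanh(bound)) / se)  # H0: rho >= +bound
    return max(p_lower, p_upper)

# A non-significant r in a small sample is not evidence of no association:
print(f"n=30,  r=0.10: p = {tost_correlation(0.10, 30):.3f}")    # inconclusive
print(f"n=500, r=0.02: p = {tost_correlation(0.02, 500):.4f}")   # equivalence supported
```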
