Similar Articles
 20 similar articles found (search time: 31 ms)
1.
While effect size estimates, post hoc power estimates, and a priori sample size determination are becoming a routine part of univariate analyses involving measured variables (e.g., ANOVA), such measures and methods have not been articulated for analyses involving latent means. The current article presents standardized effect size measures for latent mean differences inferred from both structured means modeling and MIMIC approaches to hypothesis testing about differences among means on a single latent construct. These measures are then related to post hoc power analysis, a priori sample size determination, and a relevant measure of construct reliability. I wish to convey my appreciation to the reviewers and Associate Editor, whose suggestions extended and strengthened the article's content immensely, and to Ralph Mueller of The George Washington University for enhancing the clarity of its presentation.
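A minimal sketch of the MIMIC route in R (the abstract names no software; lavaan, the data frame `dat`, and the indicators `x1`–`x4` are assumptions here, and the d-type scaling shown is one convention, not necessarily the article's exact measure):

```r
# Sketch (not the article's code): a MIMIC model for a latent mean
# difference in lavaan; 'dat' with indicators x1-x4 and a 0/1 'group'
# variable is hypothetical.
library(lavaan)

model <- '
  eta =~ x1 + x2 + x3 + x4   # single latent construct
  eta ~  group               # MIMIC: group difference on the latent mean
'
fit <- sem(model, data = dat)
pe  <- parameterEstimates(fit)

b   <- pe$est[pe$lhs == "eta" & pe$op == "~"  & pe$rhs == "group"]
psi <- pe$est[pe$lhs == "eta" & pe$op == "~~" & pe$rhs == "eta"]

# A d-type latent effect size: the group path scaled by the latent
# (residual) standard deviation, in the spirit of standardized latent
# mean difference measures.
d_latent <- b / sqrt(psi)
```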

2.
Two different approaches have been used to derive measures of effect size. One approach is based on the comparison of treatment means. The standardized mean difference is an appropriate measure of effect size when one is merely comparing two treatments, but there is no satisfactory analogue for comparing more than two treatments. The second approach is based on the proportion of variance in the dependent variable that is explained by the independent variable. Estimates have been proposed for both fixed-factor and random-factor designs, but their sampling properties are not well understood. Nevertheless, measures of effect size can allow quantitative comparisons to be made across different studies, and they can be a useful adjunct to more traditional outcome measures such as test statistics and significance levels.
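Both families of measures are easy to compute directly; a base-R sketch with simulated data (all values illustrative):

```r
# Minimal illustration of the two families of effect size measures
# discussed above, using simulated data (all names hypothetical).
set.seed(1)
g <- factor(rep(c("a", "b", "c"), each = 30))
y <- rnorm(90, mean = c(0, 0.4, 0.8)[as.integer(g)])

# (1) Standardized mean difference for two of the treatments:
ya <- y[g == "a"]; yb <- y[g == "b"]
sp <- sqrt(((length(ya) - 1) * var(ya) + (length(yb) - 1) * var(yb)) /
           (length(ya) + length(yb) - 2))            # pooled SD
d  <- (mean(yb) - mean(ya)) / sp                     # Cohen's d

# (2) Proportion of variance explained across all treatments:
av   <- anova(lm(y ~ g))
eta2 <- av["g", "Sum Sq"] / sum(av[["Sum Sq"]])      # eta-squared
```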

3.
Contrasts of means are often of interest because they describe the effect size among multiple treatments. High-quality inference of population effect sizes can be achieved through narrow confidence intervals (CIs). Given the close relation between CI width and sample size, we propose two methods to plan the sample size for an ANCOVA or ANOVA study, so that a sufficiently narrow CI for the population (standardized or unstandardized) contrast of interest will be obtained. The standard method plans the sample size so that the expected CI width is sufficiently small. Since CI width is a random variable, the expected width being sufficiently small does not guarantee that the width obtained in a particular study will be sufficiently small. An extended procedure ensures with some specified, high degree of assurance (e.g., 90% of the time) that the CI observed in a particular study will be sufficiently narrow. We also discuss the rationale and usefulness of two different ways to standardize an ANCOVA contrast, and compare three types of standardized contrast in the ANCOVA/ANOVA context. All of the methods we propose have been implemented in the freely available MBESS package in R so that they can be easily applied by researchers.
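A base-R sketch of the standard (expected-width) criterion for an unstandardized ANOVA contrast; the article's own implementation lives in MBESS, and the plug-in sigma, weights, and target width below are placeholder assumptions (using sigma for E(s) is itself a slight approximation):

```r
# Smallest per-group n for which the (approximate) expected CI width for
# an unstandardized contrast is at most omega.
n_for_contrast_width <- function(w, sigma, omega, alpha = 0.05, J = length(w)) {
  n <- 2
  repeat {
    df    <- J * (n - 1)                        # one-way ANOVA error df
    se    <- sigma * sqrt(sum(w^2) / n)         # SE of the contrast
    width <- 2 * qt(1 - alpha / 2, df) * se     # full CI width
    if (width <= omega) return(n)
    n <- n + 1
  }
}

n_for_contrast_width(w = c(1, -1, 0), sigma = 1, omega = 0.5)
```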

4.
Several studies have demonstrated that the fixed-sample stopping rule (FSR), in which the sample size is determined in advance, is less practical and efficient than are sequential-stopping rules. The composite limited adaptive sequential test (CLAST) is one such sequential-stopping rule. Previous research has shown that CLAST is more efficient in terms of sample size and power than are the FSR and other sequential rules and that it reflects more realistically the practice of experimental psychology researchers. The CLAST rule has been applied only to the t test of mean differences with two matched samples and to the chi-square independence test for twofold contingency tables. The present work extends previous research on the efficiency of CLAST to multiple group statistical tests. Simulation studies were conducted to test the efficiency of the CLAST rule for the one-way ANOVA for fixed effects models. The ANOVA general test and two linear contrasts of multiple comparisons among treatment means are considered. The article also introduces four rules for allocating N observations to J groups under the general null hypothesis and three allocation rules for the linear contrasts. Results show that the CLAST rule is generally more efficient than the FSR in terms of sample size and power for one-way ANOVA tests. However, the allocation rules vary in their optimality and have a differential impact on sample size and power. Thus, selecting an allocation rule depends on the cost of sampling and the intended precision.
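The flavor of such rules can be simulated directly; a schematic sketch (the stopping boundaries and sample-size limits below are placeholders, not the tabled CLAST criteria):

```r
# Schematic sequential-stopping simulation in the spirit of CLAST.
run_sequential <- function(delta = 0.5, n0 = 10, n_max = 40, step = 2,
                           p_lo = 0.01, p_hi = 0.36) {
  x <- rnorm(n0); y <- rnorm(n0, delta)
  repeat {
    p <- t.test(x, y)$p.value
    if (p <= p_lo)          return(list(reject = TRUE,  n = length(x)))
    if (p >= p_hi)          return(list(reject = FALSE, n = length(x)))
    if (length(x) >= n_max) return(list(reject = p <= 0.05, n = length(x)))
    x <- c(x, rnorm(step)); y <- c(y, rnorm(step, delta))  # add observations
  }
}

res <- replicate(2000, run_sequential(), simplify = FALSE)
mean(sapply(res, `[[`, "reject"))   # empirical power
mean(sapply(res, `[[`, "n"))        # average sample size per group
```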

5.
Experience with real data indicates that psychometric measures often have heavy-tailed distributions. This is known to be a serious problem when comparing the means of two independent groups because heavy-tailed distributions can have a serious effect on power. Another problem that is common in some areas is outliers. This paper suggests an approach to these problems based on the one-step M-estimator of location. Simulations indicate that the new procedure provides very good control over the probability of a Type I error even when distributions are skewed, have different shapes, and the variances are unequal. Moreover, the new procedure has considerably more power than Welch's method when distributions have heavy tails, and it compares well to Yuen's method for comparing trimmed means. Wilcox's median procedure has about the same power as the proposed procedure, but Wilcox's method is based on a statistic that has a finite sample breakdown point of only 1/n, where n is the sample size. Comments on other methods for comparing groups are also included.
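A generic one-step (Huber-type) M-estimator of location is short to state in base R; this is a sketch of the class of estimator studied, not the paper's exact proposal:

```r
# One-step M-estimator: a single Newton step from the median,
# with MAD scaling and a Huber psi function.
one_step_m <- function(x, k = 1.28) {
  s   <- mad(x)                      # MAD, scaled for normal consistency
  u   <- (x - median(x)) / s
  psi <- pmax(-k, pmin(k, u))        # Huber psi
  median(x) + s * sum(psi) / sum(abs(u) <= k)
}

x <- c(rnorm(40), rt(10, df = 1))    # heavy-tailed contamination
one_step_m(x); mean(x)               # the mean chases the outliers
```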

6.
The issue of the sample size necessary to ensure adequate statistical power has been the focus of considerable attention in scientific research. Conventional presentations of sample size determination do not consider budgetary and participant allocation scheme constraints, although there is some discussion in the literature. The introduction of additional allocation and cost concerns complicates study design, although the resulting procedure permits a practical treatment of sample size planning. This article presents exact techniques for optimizing sample size determinations in the context of Welch's (Biometrika, 29, 350–362, 1938) test of the difference between two means under various design and cost considerations. The allocation schemes include cases in which (1) the ratio of group sizes is given and (2) one sample size is specified. The cost implications suggest optimally assigning subjects (1) to attain maximum power performance for a fixed cost and (2) to meet a designated power level for the least cost. The proposed methods provide useful alternatives to the conventional procedures and can be readily implemented with the developed R and SAS programs that are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
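For intuition, the classical cost-optimal allocation for two means with unequal variances sets n1/n2 = (σ1/σ2)√(c2/c1); a sketch of spending a fixed budget under this rule (all costs and SDs below are illustrative assumptions, and the article's exact techniques go further):

```r
# Allocate a fixed budget B across two groups with per-subject costs
# c1, c2 and standard deviations sigma1, sigma2.
allocate_budget <- function(B, c1, c2, sigma1, sigma2) {
  r  <- (sigma1 / sigma2) * sqrt(c2 / c1)   # optimal ratio n1 / n2
  n2 <- B / (c1 * r + c2)                   # spend the whole budget
  c(n1 = floor(r * n2), n2 = floor(n2))
}

allocate_budget(B = 3000, c1 = 20, c2 = 10, sigma1 = 2, sigma2 = 1)
```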

7.
For one‐way fixed effects ANOVA, it is well known that the conventional F test of the equality of means is not robust to unequal variances, and numerous methods have been proposed for dealing with heteroscedasticity. On the basis of extensive empirical evidence of Type I error control and power performance, Welch's procedure is frequently recommended as the major alternative to the ANOVA F test under variance heterogeneity. To enhance its practical usefulness, this paper considers an important aspect of Welch's method in determining the sample size necessary to achieve a given power. Simulation studies are conducted to compare two approximate power functions of Welch's test for their accuracy in sample size calculations over a wide variety of model configurations with heteroscedastic structures. The numerical investigations show that Levy's (1978a) approach is clearly more accurate than the formula of Luh and Guo (2011) for the range of model specifications considered here. Accordingly, computer programs are provided to implement the technique recommended by Levy for power calculation and sample size determination within the context of the one‐way heteroscedastic ANOVA model.
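Power for Welch's heteroscedastic ANOVA can also be checked by brute-force simulation with base R's oneway.test (the means, SDs, and group sizes below are illustrative assumptions, not the paper's configurations):

```r
# Simulated power of Welch's one-way test under heteroscedasticity.
welch_anova_power <- function(n, means, sds, alpha = 0.05, nsim = 5000) {
  g <- factor(rep(seq_along(means), times = n))
  mean(replicate(nsim, {
    y <- rnorm(sum(n), mean = rep(means, n), sd = rep(sds, n))
    oneway.test(y ~ g, var.equal = FALSE)$p.value < alpha
  }))
}

welch_anova_power(n = c(20, 25, 30), means = c(0, 0.4, 0.8), sds = c(1, 1.5, 2))
```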

8.
The point-biserial correlation is a commonly used measure of effect size in two-group designs. New estimators of point-biserial correlation are derived from different forms of a standardized mean difference. Point-biserial correlations are defined for designs with either fixed or random group sample sizes and can accommodate unequal variances. Confidence intervals and standard errors for the point-biserial correlation estimators are derived from the sampling distributions for pooled-variance and separate-variance versions of a standardized mean difference. The proposed point-biserial confidence intervals can be used to conduct directional two-sided tests, equivalence tests, directional non-equivalence tests, and non-inferiority tests. A confidence interval for an average point-biserial correlation in meta-analysis applications performs substantially better than the currently used methods. Sample size formulas for estimating a point-biserial correlation with desired precision and testing a point-biserial correlation with desired power are proposed. R functions are provided that can be used to compute the proposed confidence intervals and sample size formulas.
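The point-biserial correlation is simply the Pearson correlation between the outcome and a 0/1 group indicator; the sketch below shows the estimate and, for comparison only, the conventional Fisher-z interval rather than the article's improved intervals (data simulated):

```r
# Point-biserial r and a conventional benchmark CI.
set.seed(2)
grp <- rep(0:1, c(40, 50))
y   <- rnorm(90, mean = 0.6 * grp, sd = 1 + 0.5 * grp)  # unequal variances

r_pb <- cor(y, grp)          # point-biserial correlation
cor.test(y, grp)$conf.int    # standard Fisher-z interval, for comparison
```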

9.
Bonett DG. Psychological Methods, 2008, 13(2): 99–109
Most psychology journals now require authors to report a sample value of effect size along with hypothesis testing results. The sample effect size value can be misleading because it contains sampling error. Authors often incorrectly interpret the sample effect size as if it were the population effect size. A simple solution to this problem is to report a confidence interval for the population value of the effect size. Standardized linear contrasts of means are useful measures of effect size in a wide variety of research applications. New confidence intervals for standardized linear contrasts of means are developed and may be applied to between-subjects designs, within-subjects designs, or mixed designs. The proposed confidence interval methods are easy to compute, do not require equal population variances, and perform better than the currently available methods when the population variances are not equal.
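The separate-variance building block behind such intervals is a Satterthwaite-type CI for an unstandardized contrast; a base-R sketch (the article's standardized-contrast intervals involve additional variance terms not shown here):

```r
# Separate-variance CI for a linear contrast of J means,
# with Satterthwaite degrees of freedom.
contrast_ci <- function(ybar, s2, n, w, alpha = 0.05) {
  est <- sum(w * ybar)
  v   <- sum(w^2 * s2 / n)                         # separate variances
  df  <- v^2 / sum((w^2 * s2 / n)^2 / (n - 1))     # Satterthwaite df
  est + c(-1, 1) * qt(1 - alpha / 2, df) * sqrt(v)
}

contrast_ci(ybar = c(10, 12, 15), s2 = c(4, 9, 16),
            n = c(25, 30, 20), w = c(-1, 0.5, 0.5))
```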

10.
In conventional frequentist power analysis, one often uses an effect size estimate, treats it as if it were the true value, and ignores uncertainty in the effect size estimate for the analysis. The resulting sample sizes can vary dramatically depending on the chosen effect size value. To resolve the problem, we propose a hybrid Bayesian power analysis procedure that models uncertainty in the effect size estimates from a meta-analysis. We use observed effect sizes and prior distributions to obtain the posterior distribution of the effect size and model parameters. Then, we simulate effect sizes from the obtained posterior distribution. For each simulated effect size, we obtain a power value. With an estimated power distribution for a given sample size, we can estimate the probability of reaching a power level or higher and the expected power. With a range of planned sample sizes, we can generate a power assurance curve. Both the conventional frequentist and our Bayesian procedures were applied to conduct prospective power analyses for two meta-analysis examples (testing standardized mean differences in example 1 and Pearson's correlations in example 2). The advantages of our proposed procedure are demonstrated and discussed.
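A minimal sketch of the assurance idea, with a normal posterior standing in for the meta-analytic result (its mean and SD are placeholder assumptions, not the article's fitted posterior):

```r
# Hybrid power analysis sketch: propagate effect-size uncertainty
# through a standard power calculation.
set.seed(3)
delta_post <- rnorm(5000, mean = 0.35, sd = 0.10)   # posterior draws

power_at <- function(n, deltas)
  sapply(deltas, function(d)
    power.t.test(n = n, delta = abs(d), sd = 1)$power)  # two-sided: sign immaterial

pw <- power_at(100, delta_post)
mean(pw)          # expected power at n = 100 per group
mean(pw >= 0.80)  # assurance: Pr(power >= .80)
# Repeating over a grid of n values traces out a power assurance curve.
```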

11.
In eye movements, saccade trajectory deviation has often been used as a physiological operationalization of visual attention, distraction, or the visual system’s prioritization of different sources of information. However, there are many ways to measure saccade trajectories and to quantify their deviation. This may lead to noncomparable results and poses the problem of choosing a method that will maximize statistical power. Using data from existing studies and from our own experiments, we used principal components analysis to carry out a systematic quantification of the relationships among eight different measures of saccade trajectory deviation and their power to detect the effects of experimental manipulations, as measured by standardized effect size. We concluded that (1) the saccade deviation measure is a good default measure of saccade trajectory deviation, because it is somewhat correlated with all other measures and shows relatively high effect sizes for two well-known experimental effects; (2) more generally, measures made relative to the position of the saccade target are more powerful; and (3) measures of deviation based on the early part of the saccade are made more stable when they are based on data from an eyetracker with a high sampling rate. Our recommendations may be of use to future eye movement researchers seeking to optimize the designs of their studies.
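The style of analysis described is straightforward in base R; a sketch with a hypothetical 8-column matrix of deviation measures standing in for real eye-tracking data:

```r
# PCA over correlated deviation measures plus a per-measure effect size.
set.seed(10)
dev  <- matrix(rnorm(200 * 8), 200, 8) %*% chol(0.4 + 0.6 * diag(8))
cond <- rep(0:1, each = 100)                 # experimental manipulation

pca <- prcomp(dev, scale. = TRUE)            # shared structure of measures
summary(pca)$importance[, 1:3]

cohens_d <- apply(dev, 2, function(m)        # standardized effect size
  (mean(m[cond == 1]) - mean(m[cond == 0])) /
    sqrt((var(m[cond == 1]) + var(m[cond == 0])) / 2))
```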

12.
Repeated measures designs have been widely employed in psychological experimentation; however, such designs have rarely been analyzed by means of permutation procedures. In the present paper certain aspects of hypothesis tests in a particular repeated measures design (one non-repeated factor (A) and one repeated factor (B) with K subjects per level of A) were investigated by means of permutation rather than sampling processes. The empirical size and power of certain normal theory F-tests obtained under permutation were compared to their nominal normal theory values. Data sets were established in which various combinations of kurtosis of subject means and intra-subject variance heterogeneity existed in order that their effect upon the agreement of these two models could be ascertained. The results indicated that except in cases of high intra-subject variance heterogeneity, the usual F-tests on B and AB exhibited approximately the same size and power characteristics whether based upon a permutation or normal theory sampling basis. This research was prepared under Contract No. 2593 from the Cooperative Research Branch of the U. S. Office of Education.
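A within-subject permutation test of a repeated factor B can be sketched in a few lines of base R (simulated long-format data; not the original study's code, which also crosses a non-repeated factor A):

```r
# Permute the B labels independently within each subject and recompute F.
set.seed(4)
K <- 12; b <- 3
dat <- data.frame(subj = factor(rep(1:K, each = b)),
                  B    = factor(rep(1:b, times = K)),
                  y    = rnorm(K * b) + rep(c(0, 0.3, 0.6), times = K))

F_B <- function(d) anova(lm(y ~ subj + B, data = d))["B", "F value"]

obs  <- F_B(dat)
perm <- replicate(999, {
  d <- dat
  d$B <- factor(ave(as.integer(d$B), d$subj, FUN = sample))  # within-subject shuffle
  F_B(d)
})
mean(c(obs, perm) >= obs)   # permutation p-value
```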

13.
The allocation of sufficient participants into different experimental groups for various research purposes under given constraints is an important practical problem faced by researchers. We address the problem of sample size determination between two independent groups for unequal and/or unknown variances when both the power and the differential cost are taken into consideration. We apply the well‐known Welch approximate test to derive various sample size allocation ratios by minimizing the total cost or, equivalently, maximizing the statistical power. Two types of hypotheses including superiority/non‐inferiority and equivalence of two means are each considered in the process of sample size planning. A simulation study is carried out and the proposed method is validated in terms of Type I error rate and statistical power. As a result, the simulation study reveals that the proposed sample size formulas are very satisfactory under various variances and sample size allocation ratios. Finally, a flowchart, tables, and figures of several sample size allocations are presented for practical reference.
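The power side of this calculation can be approximated with a noncentral t and Satterthwaite degrees of freedom; a sketch of finding the cheapest (n1, n2) at a fixed allocation ratio for the superiority case (the ratio, costs, and SDs below are illustrative assumptions, not the article's derived optima):

```r
# Approximate power of the two-sample Welch test.
welch_t_power <- function(n1, n2, delta, s1, s2, alpha = 0.05) {
  v   <- s1^2 / n1 + s2^2 / n2
  df  <- v^2 / ((s1^2 / n1)^2 / (n1 - 1) + (s2^2 / n2)^2 / (n2 - 1))
  ncp <- delta / sqrt(v)
  qu  <- qt(1 - alpha / 2, df)
  pt(qu, df, ncp, lower.tail = FALSE) + pt(-qu, df, ncp)
}

# Cheapest (n1, n2) meeting a target power at fixed ratio r = n1/n2.
min_cost_n <- function(r, delta, s1, s2, c1, c2, target = 0.80) {
  n2 <- 2
  while (welch_t_power(ceiling(r * n2), n2, delta, s1, s2) < target) n2 <- n2 + 1
  c(n1 = ceiling(r * n2), n2 = n2, cost = c1 * ceiling(r * n2) + c2 * n2)
}

min_cost_n(r = 2, delta = 0.5, s1 = 2, s2 = 1, c1 = 15, c2 = 30)
```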

14.
A new approach is presented for the interpretation of differences among means and proportions. Post hoc techniques, such as Tukey's honestly significant difference procedure, have interpretive problems related to intransitive decisions and technical issues arising from unequal sample sizes or heterogeneity of variance. These concerns can be avoided by considering ordered subsets of means and by using an information criterion to select among competing models. This paired-comparisons information-criterion (PCIC) approach is holistic in nature and does not depend on interpreting a series of statistical tests. Simulation results suggest that a protected version of the PCIC procedure is desirable to minimize failures to detect the null case. This technique is illustrated for independent means, proportions, and means from repeated measures.
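A simplified sketch of the model-selection idea, using AIC to compare clusterings of ordered means on simulated data (the article's specific paired-comparisons criterion differs from this plain AIC comparison):

```r
# Each candidate model collapses adjacent (ordered) groups into clusters;
# an information criterion picks the best-supported clustering.
set.seed(5)
g <- factor(rep(1:3, each = 25))
y <- rnorm(75, mean = c(0, 0, 0.7)[as.integer(g)])

clusterings <- list(c(1, 1, 1),   # all means equal
                    c(1, 1, 2),   # {1,2} vs {3}
                    c(1, 2, 2),   # {1} vs {2,3}
                    c(1, 2, 3))   # all different
aics <- sapply(clusterings, function(cl) {
  f <- factor(cl[as.integer(g)])
  AIC(if (nlevels(f) > 1) lm(y ~ f) else lm(y ~ 1))
})
clusterings[[which.min(aics)]]    # selected partition of the means
```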

15.
The variable criteria sequential stopping rule (vcSSR) is an efficient way to add sample size to planned ANOVA tests while holding the observed rate of Type I errors, αo, constant. The only difference from regular null hypothesis testing is that criteria for stopping the experiment are obtained from a table based on the desired power, rate of Type I errors, and beginning sample size. The vcSSR was developed using between-subjects ANOVAs, but it should work with p values from any type of F test. In the present study, the αo remained constant at the nominal level when using the previously published table of criteria with repeated measures designs with various numbers of treatments per subject, Type I error rates, values of ρ, and four different sample size models. New power curves allow researchers to select the optimal sample size model for a repeated measures experiment. The criteria held αo constant either when used with a multiple correlation that varied the sample size model and the number of predictor variables, or when used with MANOVA with multiple groups and two levels of a within-subject variable at various levels of ρ. Although not recommended for use with χ2 tests such as the Friedman rank ANOVA test, the vcSSR produces predictable results based on the relation between F and χ2. Together, the data confirm the view that the vcSSR can be used to control Type I errors during sequential sampling with any t- or F-statistic rather than being restricted to certain ANOVA designs.  相似文献   

16.
Wilcox, Keselman, Muska and Cribbie (2000) found a method for comparing the trimmed means of dependent groups that performed well in simulations, in terms of Type I errors, with a sample size as small as 21. Theory and simulations indicate that little power is lost under normality when using trimmed means rather than untrimmed means, and trimmed means can result in substantially higher power when sampling from a heavy‐tailed distribution. However, trimmed means suffer from two practical concerns described in this paper. Replacing trimmed means with a robust M‐estimator addresses these concerns, but control over the probability of a Type I error can be unsatisfactory when the sample size is small. Methods based on a simple modification of a one‐step M‐estimator that address the problems with trimmed means are examined. Several omnibus tests are compared, one of which performed well in simulations, even with a sample size of 11.
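The trade-off motivating these estimators is easy to see by simulation; a base-R sketch comparing the sampling variability of the mean, a 20% trimmed mean, and the median under normal versus heavy-tailed data (one-step M-estimators behave similarly to the trimmed mean here):

```r
# Monte Carlo standard errors of three location estimators, n = 21.
set.seed(6)
est_sd <- function(rdist, n = 21, nsim = 10000)
  c(mean    = sd(replicate(nsim, mean(rdist(n)))),
    trimmed = sd(replicate(nsim, mean(rdist(n), trim = 0.2))),
    median  = sd(replicate(nsim, median(rdist(n)))))

est_sd(rnorm)                          # little lost under normality
est_sd(function(n) rt(n, df = 2))      # large gain with heavy tails
```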

17.
It is difficult to obtain adequate power to test a small effect size with a set criterion alpha of 0.05. Most likely an inferential test will fail to reach statistical significance and will not be published. Rarely, statistical significance will be obtained, and an exaggerated effect size will be calculated and reported. Accepting all inferential probabilities and their associated effect sizes, rather than only significant ones, could solve these exaggeration problems. Graphs generated through Monte Carlo methods are presented to illustrate this. The first graph presents effect sizes (Cohen's d) as lines from 1 to 0, with probabilities on the Y axis and the number of measures on the X axis; it shows that effect sizes of .5 or less should yield non-significance with sample sizes below 120 measures. The other graphs show results with as many as 10 small-sample replications. As sample size increases and measurement accuracy emerges, the sample means converge on the effect size.
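The exaggeration problem described is easy to reproduce by Monte Carlo; a base-R sketch with an assumed true effect of d = 0.2 and n = 30 per group:

```r
# Under a significance filter, the studies that happen to reach p < .05
# report inflated effect sizes.
set.seed(7)
true_d <- 0.2; n <- 30
sim <- replicate(10000, {
  x <- rnorm(n); y <- rnorm(n, true_d)
  d <- (mean(y) - mean(x)) / sqrt((var(x) + var(y)) / 2)
  c(sig = t.test(x, y)$p.value < 0.05, d = d)
})
mean(sim["sig", ])                    # power: only a minority significant
mean(sim["d", sim["sig", ] == 1])     # average d among significant studies: inflated
```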

18.
An approach to sample size planning for multiple regression is presented that emphasizes accuracy in parameter estimation (AIPE). The AIPE approach yields precise estimates of population parameters by providing necessary sample sizes in order for the likely widths of confidence intervals to be sufficiently narrow. One AIPE method yields a sample size such that the expected width of the confidence interval around the standardized population regression coefficient is equal to the width specified. An enhanced formulation ensures, with some stipulated probability, that the width of the confidence interval will be no larger than the width specified. Issues involving standardized regression coefficients and random predictors are discussed, as are the philosophical differences between AIPE and the power analytic approaches to sample size planning.
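The AIPE logic can be checked by simulation: examine the distribution of CI widths for a standardized slope at a candidate n (the predictor correlation, coefficients, and target width below are assumptions, and the article's analytic formulas replace this brute force):

```r
# Distribution of CI widths for a standardized regression coefficient.
set.seed(8)
ci_width <- function(n, rho = 0.3) {
  x1 <- rnorm(n); x2 <- rho * x1 + sqrt(1 - rho^2) * rnorm(n)
  y  <- 0.3 * x1 + 0.2 * x2 + rnorm(n)
  diff(confint(lm(scale(y) ~ scale(x1) + scale(x2)))["scale(x1)", ])
}

w <- replicate(2000, ci_width(n = 200))
mean(w)            # expected width at n = 200
mean(w <= 0.25)    # assurance that the observed width is narrow enough
```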

19.
The equality of two group variances is frequently tested in experiments. However, criticisms of null hypothesis statistical testing on means have recently arisen and there is interest in other types of statistical tests of hypotheses, such as superiority/non-inferiority and equivalence. Although these tests have become more common in psychology and social sciences, the corresponding sample size estimation for these tests is rarely discussed, especially when the sampling unit costs are unequal or group sizes are unequal for two groups. Thus, for finding optimal sample size, the present study derived an initial allocation by approximating the percentiles of an F distribution with the percentiles of the standard normal distribution and used the exhaustion algorithm to select the best combination of group sizes, thereby ensuring the resulting power reaches the designated level and is maximal with a minimal total cost. In this manner, optimization of sample size planning is achieved. The proposed sample size determination has a wide range of applications and is efficient in terms of Type I errors and statistical power in simulations. Finally, an illustrative example from a report by the Health Survey for England, 1995–1997, is presented using hypertension data. For ease of application, four R Shiny apps are provided and benchmarks for setting equivalence margins are suggested.
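One way to build an equivalence test for two variances is from two one-sided F tests (TOST); a sketch with an illustrative margin of Δ = 2 (this construction is a generic stand-in, not the article's procedure, and the margin is an assumption):

```r
# Conclude equivalence if the variance ratio is significantly below
# Delta AND significantly above 1/Delta.
var_equiv <- function(x, y, Delta = 2, alpha = 0.05) {
  f   <- var(x) / var(y)
  df1 <- length(x) - 1; df2 <- length(y) - 1
  p_upper <- pf(f / Delta, df1, df2)                      # H0: ratio >= Delta
  p_lower <- pf(f * Delta, df1, df2, lower.tail = FALSE)  # H0: ratio <= 1/Delta
  c(ratio = f, equivalent = p_upper < alpha && p_lower < alpha)
}

set.seed(9)
var_equiv(rnorm(80, sd = 1), rnorm(100, sd = 1.1))
```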

20.
The validity conditions for univariate repeated measures designs are described. Attention is focused on the sphericity requirement. For a v degree of freedom family of comparisons among the repeated measures, sphericity exists when all contrasts contained in the v-dimensional space have equal variances. Under nonsphericity, upper and lower bounds on test size and power of a priori repeated measures F tests are derived. The effects of nonsphericity are illustrated by means of a set of charts. The charts reveal that small departures from sphericity (.97 < ε < 1.00) can seriously affect test size and power. It is recommended that separate rather than pooled error term procedures be routinely used to test a priori hypotheses. Appreciation is extended to Milton Parnes for his insightful assistance.
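The degree of nonsphericity is conventionally summarized by ε; a base-R sketch of the Greenhouse-Geisser estimate computed from a covariance matrix of the repeated measures (the covariance matrix below is artificial):

```r
# Greenhouse-Geisser epsilon from a k x k covariance matrix S:
# epsilon = 1 when sphericity holds exactly, down to 1/(k-1) at worst.
gg_epsilon <- function(S) {
  k <- nrow(S)
  C <- contr.helmert(k)                        # k x (k-1) orthogonal contrasts
  C <- sweep(C, 2, sqrt(colSums(C^2)), "/")    # orthonormalize columns
  E <- t(C) %*% S %*% C                        # covariance of the contrasts
  sum(diag(E))^2 / ((k - 1) * sum(E * E))
}

S <- matrix(0.5, 4, 4); diag(S) <- 1:4         # artificial covariance matrix
gg_epsilon(S)
```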
