Similar Documents
20 similar documents found (search time: 46 ms)
1.
Rules of thumb for power in multiple regression research abound. Most such rules dictate the necessary sample size, but they are based only upon the number of predictor variables, usually ignoring other critical factors necessary to compute power accurately. Other guides to power in multiple regression typically use approximate rather than precise equations for the underlying distribution; entail complex preparatory computations; require interpolation with tabular presentation formats; run only under software such as Mathematica or SAS that may not be immediately available to the user; or are sold to the user as parts of power computation packages. In contrast, the program we offer herein is immediately downloadable at no charge, runs under Windows, is interactive, self-explanatory, flexible enough to fit the user’s own regression problems, and is as accurate as single-precision computation ordinarily permits.
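The core computation such a program performs reduces to evaluating the noncentral F distribution. A minimal Python sketch is given below; the function name and parameterization are illustrative (not taken from the program described), and it assumes Cohen's conventions f² = R²/(1 − R²) and λ = f²(u + v + 1):

```python
from scipy.stats import f as f_dist, ncf

def regression_power(r2, n_predictors, n, alpha=0.05):
    """Power of the omnibus F test in multiple regression."""
    u = n_predictors               # numerator df
    v = n - n_predictors - 1       # denominator (error) df
    f2 = r2 / (1 - r2)             # Cohen's effect size f^2
    lam = f2 * (u + v + 1)         # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, u, v)
    return ncf.sf(f_crit, u, v, lam)  # P(noncentral F > critical F)
```

Because the exact noncentral F is used rather than an approximation, no interpolation in tables is needed: one call returns power for any combination of R², number of predictors, n, and α.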

2.
Computer programs for statistical power analysis typically require the user to provide a series of values and respond by reporting the corresponding power. These programs provide essentially the same functions as a published text, albeit in a more convenient form. In this paper, we describe a program that instead uses innovative graphic techniques to provide insight into the interaction among the factors that determine power. For example, for t tests, the means and standard deviations of the two distributions, sample sizes, and alpha are displayed as bar graphs. As the researcher modifies these values, the corresponding values of beta (also displayed as a bar graph) and power are updated and displayed immediately. By displaying all of the factors that are instrumental in determining power, the program ensures that each will be addressed. By allowing the user to determine the impact that any modifications will have on power, the program encourages an appropriate balance between alpha and beta while working within the constraints imposed by a limited sample size. The program also allows the user to generate tables and graphs to document the impact of the various factors on power. In addition, the program enables the user to run on-screen Monte Carlo simulations to demonstrate the importance of adequate statistical power, and as such, it can serve as a unique educational tool.

3.
An approach to sample size planning for multiple regression is presented that emphasizes accuracy in parameter estimation (AIPE). The AIPE approach yields precise estimates of population parameters by providing necessary sample sizes in order for the likely widths of confidence intervals to be sufficiently narrow. One AIPE method yields a sample size such that the expected width of the confidence interval around the standardized population regression coefficient is equal to the width specified. An enhanced formulation ensures, with some stipulated probability, that the width of the confidence interval will be no larger than the width specified. Issues involving standardized regression coefficients and random predictors are discussed, as are the philosophical differences between AIPE and the power analytic approaches to sample size planning.
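The basic AIPE idea can be sketched as follows: increase n until the expected confidence-interval width falls below the target. This is a simplified stand-in for the method in the article, using a normal approximation and a textbook standard-error formula for a standardized coefficient; the function name, SE formula, and stopping rule are my assumptions, not the article's exact procedure (which uses noncentral distributions):

```python
import math
from scipy.stats import norm

def aipe_n(width, r2_full, r2_j, p, alpha=0.05):
    """Smallest n whose approximate CI for a standardized regression
    coefficient has expected width <= `width`.
    r2_full: model R^2; r2_j: R^2 of predictor j on the other predictors."""
    z = norm.ppf(1 - alpha / 2)
    n = p + 2                              # smallest n with positive error df
    while True:
        # approximate SE of a standardized coefficient
        se = math.sqrt((1 - r2_full) / ((1 - r2_j) * (n - p - 1)))
        if 2 * z * se <= width:            # full CI width under normality
            return n
        n += 1
```

Halving the target width roughly quadruples the required n, which is the characteristic trade-off the AIPE approach makes explicit.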

4.
Long JD, Psychological Methods, 2005, 10(3): 329-351
Often quantitative data in the social sciences have only ordinal justification. Problems of interpretation can arise when least squares multiple regression (LSMR) is used with ordinal data. Two ordinal alternatives are discussed, dominance-based ordinal multiple regression (DOMR) and proportional odds multiple regression. The Q2 statistic is introduced for testing the omnibus null hypothesis in DOMR. A simulation study is discussed that examines the actual Type I error rate and power of Q2 in comparison to the LSMR omnibus F test under normality and non-normality. Results suggest that Q2 has favorable sampling properties as long as the sample size-to-predictors ratio is not too small, and Q2 can be a good alternative to the omnibus F test when the response variable is non-normal.

5.
Power analysis guides researchers in planning how much data to collect. This article describes BWPower, a computer program for the Windows 95 environment that performs power analyses for research designs that may or may not include both between- and within-subjects factors. We discuss how BWPower easily accommodates both between- and within-subjects factors and provide examples of BWPower’s use in performing power analyses on designs with only between-subjects factors, designs with only repeated measures, and mixed between- and within-subjects designs. We highlight the major features of BWPower’s user interface, such as the ability to iteratively increment or decrement the number of subjects and the automatic recalculation of power when the number of subjects or effect sizes is changed.

6.
In contrast to prospective power analysis, retrospective power analysis provides an estimate of the statistical power of a hypothesis test after an investigation has been conducted rather than before. In this article, three approaches to obtaining point estimates of power and an interval estimation algorithm are delineated. Previous research on the bias and sampling error of these estimates is briefly reviewed. Finally, an SAS macro that calculates the point and interval estimates is described. The macro was developed to estimate the power of an F test (obtained from analysis of variance, multiple regression analysis, or any of several multivariate analyses), but it may be easily adapted for use with other statistics, such as chi-square tests or t tests.
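The simplest of the point estimators the article discusses is the plug-in estimate, in which the observed F is treated as if it fixed the noncentrality parameter. A Python sketch is shown below as a hypothetical stand-in for the SAS macro (the estimator λ̂ = dfn · F_obs is a common choice; it inherits the bias and sampling error the article reviews):

```python
from scipy.stats import f as f_dist, ncf

def retrospective_power(f_obs, dfn, dfd, alpha=0.05):
    """Plug-in point estimate of retrospective power:
    estimate the noncentrality as lambda-hat = dfn * F_obs."""
    lam = dfn * f_obs
    f_crit = f_dist.ppf(1 - alpha, dfn, dfd)
    return ncf.sf(f_crit, dfn, dfd, lam)
```

Because λ̂ is itself a random quantity, this point estimate can be quite variable, which is why the article pairs it with an interval estimation algorithm.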

7.
The effects of a treatment or an intervention on a count outcome are often of interest in applied research. When controlling for additional covariates, a negative binomial regression model is usually applied to estimate conditional expectations of the count outcome. The difference in conditional expectations under treatment and under control is then defined as the (conditional) treatment effect. While traditionally aggregates of these conditional treatment effects (e.g., average treatment effects) are computed by averaging over the empirical distribution, a recently proposed moment-based approach allows for computing aggregate effects as a function of distribution parameters. The moment-based approach makes it possible to control for (latent) multivariate normally distributed covariates and provides more reliable inferences under certain conditions. In this paper we propose three different ways to account for non-normally distributed continuous covariates in this approach: an alternative, known non-normal distribution; a plausible factorization of the joint distribution; and an approximation using finite Gaussian mixtures. A saturated model is used for categorical covariates, making a distributional assumption obsolete. We further extend the moment-based approach to allow for multiple treatment conditions and the computation of conditional effects for categorical covariates. An illustrative example highlighting the key features of our extension is provided.

8.
We show that power and sample size tables developed by Cohen (1988, pp. 289–354, 381–389) produce incorrect estimates for factorial designs: power is underestimated, and sample size is overestimated. The source of this bias is shrinkage in the implied value of the noncentrality parameter, λ, caused by using Cohen’s adjustment to n for factorial designs (pp. 365 and 396). The adjustment was intended to compensate for differences in the actual versus presumed (by the tables) error degrees of freedom; however, more accurate estimates are obtained if the tables are used without adjustment. The problems with Cohen’s procedure were discovered while testing subroutines in DATASIM 1.2 for computing power and sample size in completely randomized, randomized-blocks, and split-plot factorial designs. The subroutines give the user the ability to generate power and sample size tables that are as easy to use as Cohen’s, but that eliminate the conservative bias of his tables. We also implemented several improvements relative to “manual” use of Cohen’s tables: (1) Since the user can control the specific values of 1 − β, n, and f used on the rows and columns of the table, interpolation is never required; (2) exact as opposed to approximate solutions for the noncentral F distribution are employed; (3) solutions for factorial designs, including those with repeated measures factors, take into account the actual error degrees of freedom for the effect being tested; and (4) provision is made for the computation of power for applications involving the doubly noncentral F distribution.

9.
We identify potential problems in the statistical analysis of social cognition model data, with special emphasis on the theories of reasoned action (TRA) and planned behaviour (TPB). Some statistical guidelines are presented for empirical studies of the TRA and the TPB based upon multiple linear regression and structural equation modelling (SEM). If the model is tested using multiple regression, the assumptions of this technique must be considered and variables transformed if necessary. Adjusted R2 (not R2) should be used as a measure of explained variance and semipartial correlations are useful in assessing each component's unique contribution to explained variance. R2 is not an indicator of model adequacy and residuals should be examined. Expectancy-value variables that are the product of expectancy and value measures represent the interaction term in a multiple regression and should not be used. SEM approaches make explicit the assumptions of unidimensionality of constructs in the TRA/TPB, assumptions that might usefully be challenged by competing models with multidimensional constructs. Finally, statistical power and sample size should be considered for both approaches. Inattention to any of these aspects of analysis threatens the validity of TRA/TPB research.
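The two quantities these guidelines recommend, adjusted R² and squared semipartial correlations, are straightforward to compute. A small self-contained sketch (function names are illustrative; the squared semipartial of predictor j is computed here as the drop in R² when j is removed):

```python
import numpy as np

def r2(y, X):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

def adjusted_r2(y, X):
    """Adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - p - 1)."""
    n, p = X.shape
    return 1 - (1 - r2(y, X)) * (n - 1) / (n - p - 1)

def squared_semipartials(y, X):
    """Unique contribution of each predictor: drop in R^2 on its removal."""
    full = r2(y, X)
    return [full - r2(y, np.delete(X, j, axis=1)) for j in range(X.shape[1])]
```

Adjusted R² is always at most R² and penalizes the number of predictors, which is why the guidelines prefer it as the reported measure of explained variance.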

10.
To facilitate the computation of statistical power for analysis of variance, Cohen developed the index of effect size f, defined as the SD between groups divided by the SD within groups. A microcomputer program for statistical power allows the user to compute the value of f in any of several ways: by specifying the mean and SD for every cell in the ANOVA; by specifying the mean value for the two extreme cells and the pattern of dispersion for the remaining cells; by estimating the proportion of variance in the dependent variable that will be explained by group membership; and/or with reference to conventions for small, medium, and large effects. The program will compute power for any single set of parameters; it will also allow the user to generate tables and graphs showing how power will vary as a function of effect size, sample size, and α.

11.
Relative ingroup prototypicality (RIP) is an important concept in the ingroup projection model (IPM) of social discrimination and tolerance. This paper reviews measures of RIP currently in use and critically examines how the notion of RIP is captured by statistical tests treating RIP as a single variable. It is concluded that composite measures of RIP imply multiple statistical hypotheses that have previously been confounded. The value of an alternative multiple regression approach is illustrated in a study testing the hypothesis of a negative relationship between RIP and outgroup attitudes. Results based on the conventional univariate analyses would have confirmed or disconfirmed the hypothesis depending on the scoring method. In contrast, the multiple regression approach described in this paper resolves this ambiguity by suggesting that only outgroup prototypicality may be necessary to predict outgroup attitudes. Copyright © 2008 John Wiley & Sons, Ltd.

12.
M Massironi, Perception, 1988, 17(5): 681-694
It is possible to produce outline drawings that are perceived as representations of sheets or plates folded over themselves. However, only some of the many possible representations are immediately and necessarily perceived as such. Investigations were carried out to find out which elements must be included in a drawing if a subject is to perceive folding. Four necessary, though not individually sufficient, factors were detected. Other factors which are not necessary but which can intensify the perception of folding were also found. The four necessary factors are: (i) the existence of two phenomenically overlapping figures; (ii) at least one side of the upper figure must perfectly coincide with one side of the lower figure, this common side being defined as the folding line; (iii) the two phenomenically overlapping areas must be on the same side of the folding line; (iv) three segments must converge at the ends of the folding line. Some cognitive processes which appear to be involved in the phenomenon are also discussed.

13.
A system is described that meets some user requirements of programming ease, general applicability, and simultaneous multiple S and laboratory operation. The system is based on an extremely flexible time-sharing real-time monitor and a user-level task-oriented programming language which together free the user from all multiple S bookkeeping programming. The conceptually simple language consists of simple commands for operations at the level of experimental procedure, such as displaying stimuli, collecting and timing responses, providing time delays, and recording data. Other commands support string manipulation, arithmetic, and disk I/O. The system is programmed only for the IBM 1800; however, it represents a successful approach to laboratory computerization.

14.
This paper describes the practical steps necessary to write logfiles for recording user actions in event-driven applications. Data logging has long been used as a reliable method to record all user actions, whether assessing new software or running a behavioral experiment. With the widespread introduction of event-driven software, the logfile must enable accurate recording of all the user’s actions, whether with the keyboard or another input device. Logging is only an effective tool when it can accurately and consistently record all actions in a format that aids the extraction of useful information from the mass of data collected. Logfiles are often presented as one of many methods that could be used, and here a technique is proposed for the construction of logfiles for the quantitative assessment of software from the user’s point of view.

15.
16.
The performance of the asymptotic method for comparing the squared multiple correlations of non-nested models was investigated. Specifically, the increase in a given regression model's R2 when one predictor is added was compared to the increase in the same model's R2 when another predictor is added. This comparison can be used to determine predictor importance and is the basis for procedures such as Dominance Analysis. Results indicate that the asymptotic procedure provides the expected coverage rates for sample sizes of 200 or more, but in many cases much higher sample sizes are required to achieve adequate power. Guidelines and computations are provided for the determination of adequate sample sizes for hypothesis testing.
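The quantity being compared here, the R² increase from adding one candidate predictor, is easy to compute directly; the asymptotic inference on the difference is what requires large samples. A minimal sketch of the point estimates (names are illustrative; this does not reproduce the asymptotic test itself):

```python
import numpy as np

def r2(y, X):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1 - ((y - X1 @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()

def delta_r2(y, X_base, x_new):
    """Increase in R^2 when a single candidate predictor joins the model."""
    return r2(y, np.column_stack([X_base, x_new])) - r2(y, X_base)
```

Comparing `delta_r2(y, X, x_a)` against `delta_r2(y, X, x_b)` gives the two non-nested increments whose difference the asymptotic method tests.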

17.
In ordinary least squares (OLS) regression, researchers often are interested in knowing whether a set of parameters is different from zero. With complete data, this could be achieved using the gain in prediction test, hierarchical multiple regression, or an omnibus F test. However, in substantive research scenarios, missing data often exist. In the context of multiple imputation, one of the current state-of-the-art missing data strategies, there are several different analogous multi-parameter tests of the joint significance of a set of parameters, and these multi-parameter test statistics can be referenced to various distributions to make statistical inferences. However, little is known about the performance of these tests, and virtually no research study has compared the Type I error rates and statistical power of these tests in scenarios that are typical of behavioral science data (e.g., small to moderate samples, etc.). This paper uses Monte Carlo simulation techniques to examine the performance of these multi-parameter test statistics for multiple imputation under a variety of realistic conditions. We provide a number of practical recommendations for substantive researchers based on the simulation results, and illustrate the calculation of these test statistics with an empirical example.

18.
When estimating multiple regression models with incomplete predictor variables, it is necessary to specify a joint distribution for the predictor variables. A convenient assumption is that this distribution is a multivariate normal distribution, which is also the default in many statistical software packages. This distribution will in general be misspecified if predictors with missing data have nonlinear effects (e.g., x²) or are included in interaction terms (e.g., x·z). In the present article, we introduce a factored regression modeling approach for estimating regression models with missing data that is based on maximum likelihood estimation. In this approach, the model likelihood is factorized into a part that is due to the model of interest and a part that is due to the model for the incomplete predictors. In three simulation studies, we showed that the factored regression modeling approach produced valid estimates of interaction and nonlinear effects in regression models with missing values on categorical or continuous predictor variables under a broad range of conditions. We developed the R package mdmb, which facilitates a user-friendly application of the factored regression modeling approach, and present a real-data example that illustrates the flexibility of the software.

19.
Ding Yating, Wu Lin, Journal of Psychological Science, 2022, 45(5): 1267-1272
Individuals with depression tend to post tweets carrying depressive signals on social networking platforms. By analyzing this textual information with natural language processing and extracting and summarizing users' linguistic features, the depressive status of potential users can be predicted. Because of the sensitivity of private information and the immaturity of the relevant technology, practical problems have arisen, such as information acquisition versus privacy violation, algorithmic bias and information misjudgment, information rights versus information interests, and ambiguous definitions of responsibility and authority, which have become constraints on further development. Upgrading algorithmic techniques, improving laws and regulations, and strengthening industry ethical constraints are important measures for avoiding moral risks.

20.
Responses by rats on an earn lever made available food pellets that were delivered to a food cup by responses on a second, collect, lever. The rats could either collect and immediately consume or accumulate (defined as the percentage of multiple earn responses and as the number of pellets earned before a collect response) earned pellets. In Experiment 1, accumulation varied as a function of variations in the earn or collect response requirements and whether the earn and collect levers were proximal (31 cm) or distal (248 cm) to one another. Some accumulation occurred under all but one of the conditions, but generally was higher when the earn and collect levers were distal to one another, particularly when the earn response requirement was fixed-ratio (FR) 1. In Experiment 2, the contributions of responses and time to accumulation were assessed by comparing an FR 20 earn response requirement to a condition in which only a single earn response was required at the end of a time interval nominally yoked to the FR interval. When 248 cm separated the earn and collect levers, accumulation was always greater in the FR condition, and it was not systematically related to reinforcement rate. In Experiment 3, increasing the earn response requirement with a progressive-ratio schedule that reset only with a collect response increased the likelihood of accumulation when the collect and earn levers were 248 cm apart, even though such accumulation increased the next earn response requirement. Reinforcer accumulation is an understudied dimension of operant behavior that relates to the analysis of such phenomena as hoarding and self-control, in that they too involve accumulating versus immediately collecting or consuming reinforcers.
