Similar Articles (20 results)
1.
Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder’s BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
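As a hedged illustration of the kind of default analysis described here (not the authors' own code), the Morey and Rouder BayesFactor package that underlies JASP can run a Bayesian two-sample t-test in a few lines of R; the data below are simulated.

```r
# Minimal sketch: a default Bayesian two-sample t-test with the BayesFactor
# package, the R library on which JASP's t-test is partly based. Simulated data.
library(BayesFactor)

set.seed(123)
control      <- rnorm(40, mean = 0.0, sd = 1)
experimental <- rnorm(40, mean = 0.5, sd = 1)

# Bayes factor for H1 (a difference, default Cauchy prior on effect size)
# against H0 (no difference).
bf <- ttestBF(x = control, y = experimental, rscale = "medium")
bf

# Posterior samples for the effect size and other parameters, if estimation
# rather than testing is of interest.
post <- posterior(bf, iterations = 5000)
summary(post)
```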

2.
Bayesian parameter estimation and Bayesian hypothesis testing present attractive alternatives to classical inference using confidence intervals and p values. In part I of this series we outline ten prominent advantages of the Bayesian approach. Many of these advantages translate to concrete opportunities for pragmatic researchers. For instance, Bayesian hypothesis testing allows researchers to quantify evidence and monitor its progression as data come in, without needing to know the intention with which the data were collected. We end by countering several objections to Bayesian hypothesis testing. Part II of this series discusses JASP, a free and open source software program that makes it easy to conduct Bayesian estimation and testing for a range of popular statistical scenarios (Wagenmakers et al. this issue).

3.
Statistical inference plays a key role in scientific research, yet the classical method most commonly used in current research, null hypothesis significance testing (NHST), is difficult to understand and is consequently misused or abused by some researchers. Some researchers have proposed the Bayes factor as an alternative and/or complementary statistical method. The Bayes factor is a central tool in Bayesian statistics for model comparison and hypothesis testing, and it can be interpreted as the degree of support for the null hypothesis H0 or the alternative hypothesis H1. Compared with NHST it has the following advantages: it considers H0 and H1 simultaneously and can be used to support H0; it is not "severely" biased against H0; it allows the strength of evidence to be monitored as it accumulates; and it is unaffected by the sampling plan. Bayes factors can now be computed conveniently with the open statistical software JASP, and this article demonstrates the procedure with a Bayesian t-test. Bayes factors are of considerable value to researchers in psychology, but users should ensure that the choice of prior distribution is reasonable and keep the data-analysis process transparent and open.
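As a hedged sketch of the point that a Bayes factor can quantify support for H0 as well as for H1 (simulated data, not from the article), the BayesFactor package in R reports BF10 directly and its reciprocal gives BF01:

```r
# Sketch: evidence for the null. Two groups are simulated with identical means,
# so the Bayes factor should tend to favour H0 over H1.
library(BayesFactor)

set.seed(42)
a <- rnorm(100, mean = 0, sd = 1)
b <- rnorm(100, mean = 0, sd = 1)

bf10 <- ttestBF(x = a, y = b)   # evidence for H1 relative to H0
bf01 <- 1 / bf10                # evidence for H0 relative to H1
bf01
# Conventional (heuristic) reading: BF01 between 3 and 10 counts as "moderate"
# evidence for H0; these labels are rules of thumb, not package output.
```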

4.
This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.
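As a concrete, hedged illustration of how directly interpretable a Bayesian result can be (a toy example, not one taken from the article), consider estimating a proportion with a conjugate beta prior in R:

```r
# Minimal worked example (not from the article): Bayesian estimation of a
# proportion with a conjugate Beta prior, showing how prior and data combine
# into a directly interpretable posterior.
a_prior <- 1; b_prior <- 1          # Beta(1, 1): uniform prior on the proportion
successes <- 7; failures <- 3       # observed data: 7 of 10 trials

a_post <- a_prior + successes       # posterior is Beta(8, 4)
b_post <- b_prior + failures

post_mean <- a_post / (a_post + b_post)               # about 0.67
cred_int  <- qbeta(c(0.025, 0.975), a_post, b_post)   # central 95% credible interval

post_mean
cred_int   # read directly as: the proportion lies in this range with 95% probability
```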

5.
Bayesian approaches to data analysis are considered within the context of behavior analysis. The paper distinguishes between Bayesian inference, the use of Bayes Factors, and Bayesian data analysis using specialized tools. Given the importance of prior beliefs to these approaches, the review addresses those situations in which priors have a big effect on the outcome (Bayes Factors) versus a smaller effect (parameter estimation). Although there are many advantages to Bayesian data analysis from a philosophical perspective, in many cases a behavior analyst can be reasonably well‐served by the adoption of traditional statistical tools as long as the focus is on parameter estimation and model comparison, not null hypothesis significance testing. A strong case for Bayesian analysis exists under specific conditions: When prior beliefs can help narrow parameter estimates (an especially important issue given the small sample sizes common in behavior analysis) and when an analysis cannot easily be conducted using traditional approaches (e.g., repeated measures censored regression).

6.
As Bayesian methods become more popular among behavioral scientists, they will inevitably be applied in situations that violate the assumptions underpinning typical models used to guide statistical inference. With this in mind, it is important to know something about how robust Bayesian methods are to the violation of those assumptions. In this paper, we focus on the problem of contaminated data (such as data with outliers or conflicts present), with specific application to the problem of estimating a credible interval for the population mean. We evaluate five Bayesian methods for constructing a credible interval, using toy examples to illustrate the qualitative behavior of different approaches in the presence of contaminants, and an extensive simulation study to quantify the robustness of each method. We find that the “default” normal model used in most Bayesian data analyses is not robust, and that approaches based on the Bayesian bootstrap are only robust in limited circumstances. A simple parametric model based on Tukey’s “contaminated normal model” and a model based on the t-distribution were markedly more robust. However, the contaminated normal model had the added benefit of estimating which data points were discounted as outliers and which were not.
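The t-distribution model that the study finds markedly more robust can be written as a scale mixture of normals and sampled with a short Gibbs sampler. The base-R sketch below is a hedged illustration of that idea (fixed degrees of freedom, reference priors), not the authors' implementation.

```r
# Hedged sketch: a robust credible interval for a population mean using a
# t-likelihood expressed as a scale mixture of normals, sampled with Gibbs.
set.seed(1)
y  <- c(rnorm(45, mean = 10, sd = 2), 25, 28, 31)   # data with a few outliers
n  <- length(y)
nu <- 4                                             # fixed degrees of freedom

n_iter <- 5000
mu     <- mean(y)                                   # initial values
sigma2 <- var(y)
draws  <- numeric(n_iter)

for (t in seq_len(n_iter)) {
  # latent weights: outlying points receive small lambda and are down-weighted
  lambda <- rgamma(n, shape = (nu + 1) / 2,
                   rate  = (nu + (y - mu)^2 / sigma2) / 2)
  # mean, given the weights (flat prior on mu)
  mu <- rnorm(1, mean = sum(lambda * y) / sum(lambda),
              sd = sqrt(sigma2 / sum(lambda)))
  # scale, with the reference prior 1/sigma^2
  sigma2 <- 1 / rgamma(1, shape = n / 2, rate = sum(lambda * (y - mu)^2) / 2)
  draws[t] <- mu
}

quantile(draws[-(1:1000)], c(0.025, 0.975))   # robust 95% credible interval for the mean
```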

7.
Ashby, Maddox and Lee (Psychological Science, 5(3), 144) argue that it can be inappropriate to fit multidimensional scaling (MDS) models to similarity or dissimilarity data that have been averaged across subjects. They demonstrate that the averaging process tends to make dissimilarity data more amenable to metric representations, and conduct a simulation study showing that noisy data generated using one distance metric, when averaged, may be better fit using a different distance metric. This paper argues that a Bayesian measure of MDS models has the potential to address these difficulties, because it takes into account data-fit, the number of dimensions used by an MDS representation, and the precision of the data. A method of analysis based on the Bayesian measure is demonstrated through two simulation studies with accompanying theoretical analysis. In the first study, it is shown that the Bayesian analysis rejects those MDS models showing better fit to averaged data using the incorrect distance metric, while accepting those that use the correct metric. In the second study, different groups of simulated ‘subjects’ are assumed to use different underlying configurations. In this case, the Bayesian analysis rejects MDS representations where a significant proportion of subjects use different configurations, or when their dissimilarity judgments contain significant amounts of noise. It is concluded that the Bayesian analysis provides a simple and principled means for systematically accepting and rejecting MDS models derived from averaged data.

8.
Statistical inference (including interval estimation and model selection) is increasingly used in the analysis of behavioral data. As with many other fields, statistical approaches for these analyses traditionally use classical (i.e., frequentist) methods. Interpreting classical intervals and p‐values correctly can be burdensome and counterintuitive. By contrast, Bayesian methods treat data, parameters, and hypotheses as random quantities and use rules of conditional probability to produce direct probabilistic statements about models and parameters given observed study data. In this work, we reanalyze two data sets using Bayesian procedures. We precede the analyses with an overview of the Bayesian paradigm. The first study reanalyzes data from a recent study of controls, heavy smokers, and individuals with alcohol and/or cocaine substance use disorder, and focuses on Bayesian hypothesis testing for covariates and interval estimation for discounting rates among various substance use disorder profiles. The second example analyzes hypothetical environmental delay‐discounting data. This example focuses on using historical data to establish prior distributions for parameters while allowing subjective expert opinion to govern the prior distribution on model preference. We review the subjective nature of specifying Bayesian prior distributions but also review established methods to standardize the generation of priors and remove subjective influence while still taking advantage of the interpretive advantages of Bayesian analyses. We present the Bayesian approach as an alternative paradigm for statistical inference and discuss its strengths and weaknesses.

9.
Several authors have suggested the use of multilevel models for the analysis of data from single case designs. Multilevel models are a logical approach to analyzing such data, and deal well with the possible different time points and treatment phases for different subjects. However, they are limited in several ways that are addressed by Bayesian methods. For small samples Bayesian methods fully take into account uncertainty in random effects when estimating fixed effects; the computational methods now in use can fit complex models that represent accurately the behavior being modeled; groups of parameters can be more accurately estimated with shrinkage methods; prior information can be included; and interpretation is more straightforward. The computer programs for Bayesian analysis allow many (nonstandard) nonlinear models to be fit; an example using floor and ceiling effects is discussed here.

10.
Optional stopping refers to the practice of peeking at data and then, based on the results, deciding whether or not to continue an experiment. In the context of ordinary significance-testing analysis, optional stopping is discouraged, because it necessarily leads to increased type I error rates over nominal values. This article addresses whether optional stopping is problematic for Bayesian inference with Bayes factors. Statisticians who developed Bayesian methods thought not, but this wisdom has been challenged by recent simulation results of Yu, Sprenger, Thomas, and Dougherty (2013) and Sanborn and Hills (2013). In this article, I show through simulation that the interpretation of Bayesian quantities does not depend on the stopping rule. Researchers using Bayesian methods may employ optional stopping in their own research and may provide Bayesian analysis of secondary data regardless of the employed stopping rule. I emphasize here the proper interpretation of Bayesian quantities as measures of subjective belief on theoretical positions, the difference between frequentist and Bayesian interpretations, and the difficulty of using frequentist intuition to conceptualize the Bayesian approach.
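A hedged sketch of the kind of simulation described here: observations are added in batches, the Bayes factor is recomputed at each interim look, and sampling stops once the evidence for either hypothesis is strong enough. The data are simulated under H0 using the BayesFactor package; the batch size and threshold below are arbitrary choices for illustration.

```r
# Sketch of optional stopping with Bayes factors: add observations in batches,
# recompute the Bayes factor after each batch, and stop early if the evidence
# for either hypothesis exceeds a threshold. Data are simulated under H0.
library(BayesFactor)

set.seed(2023)
max_n <- 200; batch <- 10; threshold <- 10
x <- c(); y <- c()

repeat {
  x <- c(x, rnorm(batch, 0, 1))
  y <- c(y, rnorm(batch, 0, 1))
  bf10 <- extractBF(ttestBF(x = x, y = y))$bf   # numeric BF10 at this interim look
  cat(sprintf("n per group = %3d, BF10 = %6.3f\n", length(x), bf10))
  if (bf10 > threshold || bf10 < 1 / threshold || length(x) >= max_n) break
}
```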

11.
梁莘娅, 杨艳云. 《心理科学》, 2016, 39(5): 1256-1267
Structural equation modeling is widely used for statistical analysis in psychology, education, and the social sciences. The most commonly used estimators in structural equation modeling are based on the normal distribution, such as maximum likelihood. These methods rest on two assumptions. First, the theoretical model must correctly reflect the relations among the variables (the structural assumption). Second, the data must follow a multivariate normal distribution (the distributional assumption). When these assumptions are violated, normal-theory estimators can yield incorrect chi-square statistics, incorrect fit indices, and biased parameter estimates and standard errors. In practice, almost no theoretical model explains the relations among variables exactly, and data are frequently non-normal. New estimation methods have therefore been developed that either do not assume multivariate normality or correct the results distorted by non-normality. Two currently popular approaches are robust maximum likelihood estimation and Bayesian estimation. Robust maximum likelihood applies the Satorra and Bentler (1994) adjustments to the chi-square statistic and the standard errors of the parameter estimates, while the parameter estimates themselves are identical to those obtained with ordinary maximum likelihood. Bayesian estimation is based on Bayes' theorem: the posterior distribution of the parameters is obtained by multiplying the prior distribution by the likelihood of the data, and the posterior is typically approximated with Markov chain Monte Carlo algorithms. Previous comparisons of these two methods have been limited to conditions in which the theoretical model is correct; the present study focuses on conditions in which the theoretical model is misspecified and also considers non-normally distributed data. Confirmatory factor models were used, and all data were generated by computer simulation according to three factors: eight factor structures, three variable distributions, and three sample sizes, yielding 72 simulation conditions (72 = 8 × 3 × 3). In each condition 2,000 data sets were generated, and each data set was fitted with two models, one correct and one misspecified, using both estimation methods: robust maximum likelihood and Bayesian estimation with noninformative priors. Results were evaluated in terms of model rejection rates, model fit, parameter estimates, and the standard errors of the parameter estimates. The results show that with sufficient sample size the two methods produce very similar parameter estimates. When the data are non-normal, Bayesian estimation rejects the misspecified model more often than robust maximum likelihood. However, when the sample size is small and the data are normally distributed, Bayesian estimation shows little or no advantage in rejecting the misspecified model or in parameter estimation, and under some conditions it performs worse than robust maximum likelihood.
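As a hedged sketch of the two estimators being compared (not the study's simulation code), the same one-factor confirmatory model can be fitted with robust maximum likelihood in the lavaan package and with Bayesian estimation in the blavaan package; the package choices and defaults here are assumptions, and blavaan needs a working MCMC backend.

```r
# Hedged sketch (not the study's code): one-factor CFA fitted with robust ML
# (Satorra-Bentler corrections) in lavaan and with Bayesian MCMC in blavaan.
library(lavaan)
library(blavaan)

set.seed(7)
f <- rnorm(300)
d <- data.frame(x1 = 0.8 * f + rnorm(300, sd = 0.6),
                x2 = 0.7 * f + rnorm(300, sd = 0.7),
                x3 = 0.6 * f + rnorm(300, sd = 0.8))

model <- ' factor =~ x1 + x2 + x3 '

fit_robust <- cfa(model, data = d, estimator = "MLM")  # robust ML with Satorra-Bentler corrections
fit_bayes  <- bcfa(model, data = d)                    # MCMC with the package's default diffuse priors

summary(fit_robust, fit.measures = TRUE)
summary(fit_bayes)
```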

12.
Most researchers have specific expectations concerning their research questions. These may be derived from theory, empirical evidence, or both. Yet despite these expectations, most investigators still use null hypothesis testing to evaluate their data, that is, when analysing their data they ignore the expectations they have. In the present article, Bayesian model selection is presented as a means to evaluate the expectations researchers have, that is, to evaluate so-called informative hypotheses. Although the methodology to do this has been described in previous articles, these are rather technical and have mainly been published in statistical journals. The main objective of the present article is to provide a basic introduction to the evaluation of informative hypotheses using Bayesian model selection. Moreover, what is new in comparison to previous publications on this topic is that we provide guidelines on how to interpret the results. Bayesian evaluation of informative hypotheses is illustrated using an example concerning psychosocial functioning and the interplay between personality and support from family.

13.
In this paper, normal/independent distributions, including but not limited to the multivariate t distribution, the multivariate contaminated distribution, and the multivariate slash distribution, are used to develop a robust Bayesian approach for analyzing structural equation models with complete or missing data. In the context of a nonlinear structural equation model with fixed covariates, robust Bayesian methods are developed for estimation and model comparison. Results from simulation studies are reported to reveal the characteristics of estimation. The methods are illustrated by using a real data set obtained from diabetes patients.

14.
Many psychologists and social scientists are unaware of the field of military psychology. Although marginally aware of Division 19: Military Psychology of the American Psychological Association (APA), a number of psychologists have very mistaken ideas about what military psychology includes and the uses thereof (APA Monitor, 1984). The purpose of this special issue is to present some research conducted under the rubric of military psychology. This issue of the Journal of Applied Social Psychology (JASP) may provide some preliminary answers to the frequently asked questions: What is this creature called military psychology? Who does it? What kinds of research are classified as military psychology?

15.
Growth curve models have been widely used to analyse longitudinal data in social and behavioural sciences. Although growth curve models with normality assumptions are relatively easy to estimate, practical data are rarely normal. Failing to account for non-normal data may lead to unreliable model estimation and misleading statistical inference. In this work, we propose a robust approach for growth curve modelling using conditional medians that are less sensitive to outlying observations. Bayesian methods are applied for model estimation and inference. Based on the existing work on Bayesian quantile regression using asymmetric Laplace distributions, we use asymmetric Laplace distributions to convert the problem of estimating a median growth curve model into a problem of obtaining the maximum likelihood estimator for a transformed model. Monte Carlo simulation studies have been conducted to evaluate the numerical performance of the proposed approach with data containing outliers or leverage observations. The results show that the proposed approach yields more accurate and efficient parameter estimates than traditional growth curve modelling. We illustrate the application of our robust approach using conditional medians based on a real data set from the Virginia Cognitive Aging Project.
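The core trick described in the abstract, estimating a median growth curve by treating it as maximum likelihood under an asymmetric Laplace distribution, can be sketched in a few lines of base R. This is a hedged toy illustration (a single linear trajectory, scale fixed at 1), not the authors' Bayesian model.

```r
# Toy sketch: for tau = 0.5 the asymmetric Laplace log-likelihood is, up to a
# constant, minus the sum of absolute residuals, so maximizing it recovers the
# conditional median. Simulated linear-growth data contaminated with outliers.
set.seed(99)
occasion <- rep(0:4, times = 50)                       # 5 occasions, 50 cases stacked
y <- 2 + 1.5 * occasion + rnorm(length(occasion), sd = 1)
out_idx <- sample(length(y), 10)
y[out_idx] <- y[out_idx] + 15                          # heavy positive outliers

check_loss <- function(u, tau = 0.5) u * (tau - (u < 0))   # quantile check loss rho_tau(u)

# Negative asymmetric Laplace log-likelihood (scale fixed at 1, constants dropped)
neg_loglik <- function(beta, tau = 0.5) {
  sum(check_loss(y - (beta[1] + beta[2] * occasion), tau))
}

optim(c(0, 0), neg_loglik)$par   # median-curve intercept and slope (robust to outliers)
coef(lm(y ~ occasion))           # least-squares mean curve, pulled up by the contamination
```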

16.
There has been a recent increase in interest in Bayesian analysis. However, little effort has been made thus far to directly incorporate background knowledge via the prior distribution into the analyses. This process might be especially useful in the context of latent growth mixture modeling when one or more of the latent groups are expected to be relatively small due to what we refer to as limited data. We argue that the use of Bayesian statistics has great advantages in limited data situations, but only if background knowledge can be incorporated into the analysis via prior distributions. We highlight these advantages through a data set including patients with burn injuries and analyze trajectories of posttraumatic stress symptoms using the Bayesian framework following the steps of the WAMBS-checklist. In the included example, we illustrate how to obtain background information using previous literature based on a systematic literature search and by using expert knowledge. Finally, we show how to translate this knowledge into prior distributions and we illustrate the importance of conducting a prior sensitivity analysis. Although our example is from the trauma field, the techniques we illustrate can be applied to any field.

17.
This article is a how-to guide on Bayesian computation using Gibbs sampling, demonstrated in the context of Latent Class Analysis (LCA). It is written for students in quantitative psychology or related fields who have a working knowledge of Bayes Theorem and conditional probability and have experience in writing computer programs in the statistical language R. The overall goals are to provide an accessible and self-contained tutorial, along with a practical computation tool. We begin with how Bayesian computation is typically described in academic articles. Technical difficulties are addressed by a hypothetical, worked-out example. We show how Bayesian computation can be broken down into a series of simpler calculations, which can then be assembled together to complete a computationally more complex model. The details are described much more explicitly than what is typically available in elementary introductions to Bayesian modeling so that readers are not overwhelmed by the mathematics. Moreover, the provided computer program shows how Bayesian LCA can be implemented with relative ease. The computer program is then applied in a large, real-world data set and explained line-by-line. We outline the general steps in how to extend these considerations to other methodological applications. We conclude with suggestions for further readings.
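As a hedged, much-simplified companion to the tutorial described here (not the article's own program), the following base-R code implements a Gibbs sampler for a two-class latent class model with binary items:

```r
# Hedged sketch: Gibbs sampler for a two-class latent class model with binary
# items, using Beta(1,1) priors throughout. Data are simulated.
set.seed(11)
N <- 500; J <- 5
true_theta <- rbind(c(.9, .9, .8, .8, .7),   # class 1 item-endorsement probabilities
                    c(.2, .1, .3, .2, .3))   # class 2
true_z <- sample(1:2, N, replace = TRUE, prob = c(.4, .6))
y <- matrix(rbinom(N * J, 1, true_theta[true_z, ]), N, J)

n_iter <- 2000
pi1    <- 0.5                            # P(class 1)
theta  <- matrix(0.5, 2, J)              # item probabilities per class
keep   <- matrix(NA, n_iter, 2 + 2 * J)

for (s in seq_len(n_iter)) {
  # 1. sample class memberships given the current parameters
  log_p1 <- log(pi1)     + y %*% log(theta[1, ]) + (1 - y) %*% log(1 - theta[1, ])
  log_p2 <- log(1 - pi1) + y %*% log(theta[2, ]) + (1 - y) %*% log(1 - theta[2, ])
  p1 <- 1 / (1 + exp(log_p2 - log_p1))
  z  <- ifelse(runif(N) < p1, 1, 2)
  # 2. sample the class proportion
  n1  <- sum(z == 1)
  pi1 <- rbeta(1, 1 + n1, 1 + N - n1)
  # 3. sample item-endorsement probabilities for each class
  for (k in 1:2) {
    yk <- y[z == k, , drop = FALSE]
    theta[k, ] <- rbeta(J, 1 + colSums(yk), 1 + nrow(yk) - colSums(yk))
  }
  keep[s, ] <- c(pi1, 1 - pi1, theta[1, ], theta[2, ])
}

colMeans(keep[-(1:500), ])   # posterior means (class labels may switch between runs)
```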

18.
19.
In most research, linear regression analyses are performed without taking into account published results (i.e., reported summary statistics) of similar previous studies. Although the prior density in Bayesian linear regression could accommodate such prior knowledge, formal models for doing so are absent from the literature. The goal of this article is therefore to develop a Bayesian model in which a linear regression analysis on current data is augmented with the reported regression coefficients (and standard errors) of previous studies. Two versions of this model are presented. The first version incorporates previous studies through the prior density and is applicable when the current and all previous studies are exchangeable. The second version models all studies in a hierarchical structure and is applicable when studies are not exchangeable. Both versions of the model are assessed using simulation studies. Performance for each in estimating the regression coefficients is consistently superior to using current data alone and is close to that of an equivalent model that uses the data from previous studies rather than reported regression coefficients. Overall the results show that augmenting data with results from previous studies is viable and yields significant improvements in the parameter estimation.
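A drastically simplified, hedged sketch of the underlying idea (not the article's model): treat a previous study's reported coefficient and standard error as a normal prior and combine it with the current estimate by precision weighting. The "reported" values below are made up for illustration.

```r
# Simplified sketch: normal prior built from a previous study's reported
# coefficient and standard error, combined with the current data's estimate
# by precision weighting (normal-normal conjugacy, approximate normality assumed).
set.seed(5)
x <- rnorm(50)
y <- 0.3 * x + rnorm(50)

prior_b  <- 0.25                      # coefficient reported by an earlier study (made up)
prior_se <- 0.10                      # its reported standard error (made up)

fit     <- lm(y ~ x)
curr_b  <- coef(fit)["x"]
curr_se <- summary(fit)$coefficients["x", "Std. Error"]

w_prior <- 1 / prior_se^2             # precision of the prior
w_curr  <- 1 / curr_se^2              # precision of the current estimate

post_mean <- (w_prior * prior_b + w_curr * curr_b) / (w_prior + w_curr)
post_sd   <- sqrt(1 / (w_prior + w_curr))

c(posterior_mean = unname(post_mean), posterior_sd = post_sd)
```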

20.
Choice confidence is a central measure in psychological decision research, often being reported on a probabilistic scale. Simple mechanisms that describe the psychological processes underlying choice confidence, including those based on error and confirmation biases, have typically received support via fits to data averaged over subjects. While averaged data ease model development, they can also destroy important aspects of the confidence data distribution. In this paper, we develop a hierarchical model of raw confidence judgments using the beta distribution, and we implement two simple confidence mechanisms within it. We use Bayesian methods to fit the hierarchical model to data from a two-alternative confidence experiment, and we use a variety of Bayesian tools to diagnose shortcomings of the simple mechanisms that are overlooked when applied to averaged data. BUGS code for estimating the models is also supplied.
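A hedged sketch of the data-generating idea in this abstract (not the paper's BUGS model): confidence judgments on the probability scale are modelled with a beta distribution reparameterized by a mean and a precision, with subject-level means drawn from a group-level distribution. Fitting such a model would be done with MCMC software such as BUGS or JAGS, as in the paper.

```r
# Sketch: simulating hierarchical beta-distributed confidence judgments with a
# mean/precision parameterization: Beta(mu * kappa, (1 - mu) * kappa).
set.seed(3)
n_subj <- 20; n_trials <- 40

group_mu <- 0.75                                            # group-average confidence
subj_mu  <- plogis(rnorm(n_subj, qlogis(group_mu), 0.4))    # subject means on (0, 1)
kappa    <- 15                                              # precision: larger = less trial-to-trial noise

confidence <- sapply(subj_mu, function(m)
  rbeta(n_trials, shape1 = m * kappa, shape2 = (1 - m) * kappa))

dim(confidence)                       # trials x subjects matrix of simulated judgments
round(colMeans(confidence)[1:5], 2)   # per-subject mean confidence for the first subjects
```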
