Found 20 similar documents (search time: 15 ms)
1.
Eric-Jan Wagenmakers, Jonathon Love, Maarten Marsman, Tahira Jamil, Alexander Ly, Josine Verhagen, Ravi Selker, Quentin F. Gronau, Damian Dropmann, Bruno Boutin, Frans Meerhoff, Patrick Knight, Akash Raj, Erik-Jan van Kesteren, Johnny van Doorn, Martin Šmíra, Sacha Epskamp, Alexander Etz, Dora Matzke, Tim de Jong, Don van den Bergh, Alexandra Sarafoglou, Helen Steingroever, Koen Derks, Jeffrey N. Rouder, Richard D. Morey 《Psychonomic bulletin & review》2018,25(1):58-76
Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder’s BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
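As a rough illustration of the kind of quantity such a test reports, here is a minimal Python sketch of a Bayes factor for a one-sample t-test using the BIC approximation. This is not the default JZS prior that JASP and the BayesFactor package use; the function name, data, and the BIC shortcut are illustrative choices.

```python
import math

def bic_bayes_factor_ttest(x, mu0=0.0):
    """Approximate BF01 (evidence for the null mu = mu0) for a one-sample
    t-test via the BIC approximation: BF01 ~ exp((BIC_alt - BIC_null) / 2).
    A sketch only; default Bayes factors use a proper prior on effect size."""
    n = len(x)
    xbar = sum(x) / n
    sse_null = sum((xi - mu0) ** 2 for xi in x)   # mean fixed at mu0
    sse_alt = sum((xi - xbar) ** 2 for xi in x)   # mean estimated freely
    bic_null = n * math.log(sse_null / n) + 1 * math.log(n)  # 1 free param (sigma)
    bic_alt = n * math.log(sse_alt / n) + 2 * math.log(n)    # 2 free params (mu, sigma)
    return math.exp((bic_alt - bic_null) / 2)
```

Values of BF01 above 1 favor the null; values below 1 favor the alternative, so evidence can be quantified in either direction rather than only "rejected or not".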
2.
Jeffrey N. Rouder Julia M. Haaf Joachim Vandekerckhove 《Psychonomic bulletin & review》2018,25(1):102-113
In the psychological literature, there are two seemingly different approaches to inference: that from estimation of posterior intervals and that from Bayes factors. We provide an overview of each method and show that a salient difference is the choice of models. The two approaches as commonly practiced can be unified with a certain model specification, now popular in the statistics literature, called spike-and-slab priors. A spike-and-slab prior is a mixture of a null model, the spike, with an effect model, the slab. The estimate of the effect size here is a function of the Bayes factor, showing that estimation and model comparison can be unified. The salient difference is that common Bayes factor approaches provide for privileged consideration of theoretically useful parameter values, such as the value corresponding to the null hypothesis, while estimation approaches do not. Both approaches, either privileging the null or not, are useful depending on the goals of the analyst.
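The spike-and-slab idea can be sketched in a few lines for the simplest case: a sample mean with known noise, a point-null spike, and a normal slab on the effect. All numerical settings here are illustrative assumptions, not values from the paper.

```python
import math

def normal_pdf(x, var):
    """Density of Normal(0, var) at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def spike_and_slab(xbar, n, sigma=1.0, tau=1.0, prior_spike=0.5):
    """Posterior probability of the spike (null) and the Bayes factor BF01
    for a sample mean xbar of n observations with known sd sigma.
    Spike: effect exactly 0.  Slab: effect ~ Normal(0, tau^2)."""
    se2 = sigma ** 2 / n
    m_spike = normal_pdf(xbar, se2)            # marginal likelihood under the spike
    m_slab = normal_pdf(xbar, se2 + tau ** 2)  # marginal likelihood under the slab
    bf01 = m_spike / m_slab
    post_spike = prior_spike * bf01 / (prior_spike * bf01 + 1 - prior_spike)
    return post_spike, bf01
```

The posterior spike probability is a monotone function of BF01, which is the unification the abstract describes: model comparison (the Bayes factor) and estimation (the mixture posterior) are two views of the same object.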
3.
We demonstrate the use of three popular Bayesian software packages that enable researchers to estimate parameters in a broad class of models that are commonly used in psychological research. We focus on WinBUGS, JAGS, and Stan, and show how they can be interfaced from R and MATLAB. We illustrate the use of the packages through two fully worked examples; the examples involve a simple univariate linear regression and fitting a multinomial processing tree model to data from a classic false-memory experiment. We conclude with a comparison of the strengths and weaknesses of the packages. Our example code, data, and this text are available via https://osf.io/ucmaz/.
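For the simple-regression case, the posterior that WinBUGS, JAGS, or Stan would approximate by sampling is available in closed form when the noise sd is known and the priors are conjugate normals. The sketch below computes that posterior mean directly; the function name, prior variance, and known-sigma assumption are illustrative simplifications, not the paper's worked example.

```python
def bayes_simple_regression(xs, ys, sigma=1.0, prior_var=100.0):
    """Closed-form posterior mean for y = a + b*x + Normal(0, sigma^2) noise,
    with independent Normal(0, prior_var) priors on intercept a and slope b.
    Posterior precision A = prior_prec*I + X'X/sigma^2; mean = A^{-1} X'y/sigma^2."""
    s2, p = sigma ** 2, 1.0 / prior_var
    n = len(xs)
    sx = sum(xs); sxx = sum(x * x for x in xs)
    sy = sum(ys); sxy = sum(x * y for x, y in zip(xs, ys))
    # 2x2 posterior precision matrix for (a, b)
    a11 = p + n / s2;  a12 = sx / s2
    a21 = sx / s2;     a22 = p + sxx / s2
    det = a11 * a22 - a12 * a21
    # right-hand side X'y / sigma^2, then solve by hand (2x2 inverse)
    r1, r2 = sy / s2, sxy / s2
    mean_a = (a22 * r1 - a12 * r2) / det
    mean_b = (a11 * r2 - a21 * r1) / det
    return mean_a, mean_b
```

With a weak prior (large `prior_var`) the posterior mean approaches the least-squares fit, which is a useful sanity check when comparing MCMC output from the three packages.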
4.
D. Trafimow (2003) presented an analysis of null hypothesis significance testing (NHST) using Bayes's theorem. Among other points, he concluded that NHST is logically invalid, but that logically valid Bayesian analyses are often not possible. The latter conclusion reflects a fundamental misunderstanding of the nature of Bayesian inference. This view needs correction, because Bayesian methods have an important role to play in many psychological problems where standard techniques are inadequate. This comment, with the help of a simple example, explains the usefulness of Bayesian inference for psychology.
5.
Peter Sedlmeier 《Behavior research methods》1997,29(3):328-336
To date, attempts to teach Bayesian inference to nonexperts have not met with much success. BasicBayes, the computerized tutor presented here, is an attempt to change this state of affairs. BasicBayes is based on a novel theoretical framework about Bayesian reasoning recently introduced by Gigerenzer and Hoffrage (1995). This framework focuses on the connection between “cognitive algorithms” and “information formats.” BasicBayes teaches people how to translate Bayesian text problems into frequency formats, which have been shown to entail computationally simpler cognitive algorithms than those entailed by probability formats. The components and mode of functioning of BasicBayes are described in detail. Empirical evidence demonstrates the effectiveness of BasicBayes in teaching people simple Bayesian inference. Because of its flexible system architecture, BasicBayes can also be used as a research tool.
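The frequency-format translation that such tutors teach is mechanical enough to state as code. The sketch below converts a probability-format problem into natural frequencies in a hypothetical reference population; the function name, the population size, and the example numbers are illustrative, not taken from BasicBayes itself.

```python
def to_frequency_format(base_rate, hit_rate, false_alarm_rate, population=1000):
    """Translate a probability-format Bayesian problem into natural
    frequencies: counts of true and false positives in an imagined
    population.  The posterior P(H | positive) is then simply read off
    as hits / (hits + false alarms), with no explicit Bayes' theorem."""
    with_h = base_rate * population          # people with the condition
    without_h = population - with_h          # people without it
    hits = hit_rate * with_h                 # true positives
    false_alarms = false_alarm_rate * without_h  # false positives
    posterior = hits / (hits + false_alarms)
    return hits, false_alarms, posterior
```

For example, a 1% base rate, 80% hit rate, and 9.6% false-alarm rate yield roughly 8 true positives against roughly 95 false positives per 1,000 people, making the counterintuitively low posterior (under 8%) visible at a glance.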
6.
Philosophical Studies - Some arguments include imperative clauses. For example: ‘Buy me a drink; you can’t buy me that drink unless you go to the bar; so, go to the bar!’ How...
7.
Recent advances in the cognitive psychology of inference have been of great interest to philosophers of science. The present paper reviews one such area, namely studies based upon Wason's “4-card” selection task. It is argued that interpretation of the results of the experiments is complex, because a variety of inference strategies may be used by subjects to select evidence needed to confirm or disconfirm a hypothesis. Empirical evidence suggests that which strategy is used depends in part on the semantic, syntactic, and pragmatic context of the inference problem at hand. Since the factors of importance are also present in real-world science, and similarly complicate its interpretation, the selection task, though it does not present a “quick fix”, represents a kind of microcosm of great utility for the understanding of science. Several studies which have examined selection strategies in more complex problem-solving environments are also reviewed, in an attempt to determine the limits of generalizability of the simpler selection tasks. Certain interpretational misuses of laboratory research are described, and a claim is made that the issue of whether or not scientists are rational should be approached by philosophers and psychologists with appropriate respect for the complexities of the issue.
8.
One of the most popular paradigms to use for studying human reasoning involves the Wason card selection task. In this task, the participant is presented with four cards and a conditional rule (e.g., “If there is an A on one side of the card, there is always a 2 on the other side”). Participants are asked which cards should be turned to verify whether or not the rule holds. In this simple task, participants consistently provide answers that are incorrect according to formal logic. To account for these errors, several models have been proposed, one of the most prominent being the information gain model (Oaksford & Chater, Psychological Review, 101, 608–631, 1994). This model is based on the assumption that people independently select cards based on the expected information gain of turning a particular card. In this article, we present two estimation methods to fit the information gain model: a maximum likelihood procedure (programmed in R) and a Bayesian procedure (programmed in WinBUGS). We compare the two procedures and illustrate the flexibility of the Bayesian hierarchical procedure by applying it to data from a meta-analysis of the Wason task (Oaksford & Chater, Psychological Review, 101, 608–631, 1994). We also show that the goodness of fit of the information gain model can be assessed by inspecting the posterior predictives of the model. These Bayesian procedures make it easy to apply the information gain model to empirical data. Supplemental materials may be downloaded along with this article from .
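The expected-information-gain idea at the heart of the model can be sketched as follows. This toy version assumes just two hypotheses (the rule holds as a dependence vs. the card faces are independent) and illustrative rarity values for P(p) and P(q); it is not Oaksford and Chater's exact parameterization, and all names are mine.

```python
import math

def entropy(ps):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def expected_info_gain(likelihoods, prior=(0.5, 0.5)):
    """Expected reduction in uncertainty about two hypotheses from turning
    one card.  likelihoods[h] = (P(back shows q | h), P(back shows not-q | h))."""
    h_prior = entropy(prior)
    gain = 0.0
    for outcome in (0, 1):
        p_out = sum(prior[h] * likelihoods[h][outcome] for h in (0, 1))
        if p_out == 0:
            continue
        post = [prior[h] * likelihoods[h][outcome] / p_out for h in (0, 1)]
        gain += p_out * (h_prior - entropy(post))
    return gain

# Illustrative rarity values (requires a <= b for the dependence model).
a, b = 0.1, 0.2  # P(p), P(q)
# Turning the "p" card: under dependence the back must show q;
# under independence it shows q with probability b.
gain_p = expected_info_gain(((1.0, 0.0), (b, 1 - b)))
# Turning the "q" card: under dependence P(p on back | q) = a / b;
# under independence P(p on back | q) = a.
gain_q = expected_info_gain(((a / b, 1 - a / b), (a, 1 - a)))
```

Comparing such gains across the four cards is what lets the model rank cards by informativeness rather than by logical necessity, which is how it accounts for participants' "illogical" selections.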
9.
Bayesian inference for graphical factor analysis models
We generalize factor analysis models by allowing the concentration matrix of the residuals to have nonzero off-diagonal elements. The resulting model is named the graphical factor analysis model. Allowing a structure of associations gives information about the correlation left unexplained by the unobserved variables, which can be used in both the confirmatory and exploratory contexts. We first present a sufficient condition for global identifiability of this class of models with a generic number of factors, thereby extending the results in Stanghellini (1997) and Vicard (2000). We then consider the issue of model comparison and show that fast local computations are possible for this purpose, if the conditional independence graphs on the residuals are restricted to be decomposable and a Bayesian approach is adopted. To achieve this aim, we propose a new reversible jump MCMC method to approximate the posterior probabilities of the considered models. We then study the evolution of political democracy in 75 developing countries based on eight measures of democracy in two different years.
We acknowledge support from M.U.R.S.T. of Italy and from the European Science Foundation H.S.S.S. Network. We are grateful to the referees and the Editor for many useful suggestions and comments which led to a substantial improvement of the paper. We also thank Nanny Wermuth for stimulating discussions and Kenneth A. Bollen for kindly providing us with the data-set.
10.
Lee MD 《Behavior research methods》2008,40(2):450-456
This article describes and demonstrates the BayesSDT MATLAB-based software package for performing Bayesian analysis with equal-variance Gaussian signal detection theory (SDT). The software uses WinBUGS to draw samples from the posterior distribution of six SDT parameters: discriminability, hit rate, false alarm rate, criterion, and two alternative measures of bias. The software either provides a simple MATLAB graphical user interface or allows a more general MATLAB function call to produce graphs of the posterior distribution for each parameter of interest for each data set, as well as to return the full set of posterior samples.
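The quantities whose posteriors such a package targets are easy to state as point estimates. The sketch below computes equal-variance SDT discriminability and criterion from raw counts; the function name and the +0.5 continuity correction are my own illustrative choices (the Bayesian treatment replaces these point estimates with full posterior samples).

```python
from statistics import NormalDist

def sdt_parameters(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT point estimates from raw counts.
    Uses a +0.5 correction so perfect hit or false-alarm rates do not
    produce infinite z-scores."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)             # discriminability d'
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias c
    return d_prime, criterion
```

For example, 70 hits out of 100 signal trials against 20 false alarms out of 100 noise trials gives a d' of roughly 1.35 with a slightly conservative criterion.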
11.
Over the last decade, the popularity of Bayesian data analysis in the empirical sciences has greatly increased. This is partly due to the availability of WinBUGS, a free and flexible statistical software package that comes with an array of predefined functions and distributions, allowing users to build complex models with ease. For many applications in the psychological sciences, however, it is highly desirable to be able to define one’s own distributions and functions. This functionality is available through the WinBUGS Development Interface (WBDev). This tutorial illustrates the use of WBDev by means of concrete examples, featuring the expectancy-valence model for risky behavior in decision making, and the shifted Wald distribution of response times in speeded choice.
12.
We introduce a graphical framework for Bayesian inference that is sufficiently general to accommodate not just the standard case but also recent proposals for a theory of quantum Bayesian inference wherein one considers density operators rather than probability distributions as representative of degrees of belief. The diagrammatic framework is stated in the graphical language of symmetric monoidal categories and of compact structures and Frobenius structures therein, in which Bayesian inversion boils down to transposition with respect to an appropriate compact structure. We characterize classical Bayesian inference in terms of a graphical property and demonstrate that our approach eliminates some purely conventional elements that appear in common representations thereof, such as whether degrees of belief are represented by probabilities or entropic quantities. We also introduce a quantum-like calculus wherein the Frobenius structure is noncommutative and show that it can accommodate Leifer’s calculus of “conditional density operators”. The notion of conditional independence is also generalized to our graphical setting and we make some preliminary connections to the theory of Bayesian networks. Finally, we demonstrate how to construct a graphical Bayesian calculus within any dagger compact category.
13.
14.
A Samuels 《The Journal of analytical psychology》1992,37(2):127-147
The paper is a critical study of the intellectual relations of analytical psychology and national socialism. I try to show that it was Jung's attempt to establish a psychology of nations that brought him into the same frame as Nazi anti-Semitic ideology. In addition, Jung was absorbed by the question of leadership, also a pressing issue during the 1930s. Exploring these ideas as thoroughly as possible leads to a kind of reparation, for I think that post-Jungians do have reparation to make. Then it is possible to revalue Jung's overall project in more positive terms. By coupling a less simplistic methodology and a more sensitive set of values to Jung's basic intuitions about the importance of a psychology of cultural difference, analytical psychology has something to offer a depth psychology that is concerned with processes of political and social transformation.
15.
Albert MK 《Perception》2000,29(5):601-608
The task of human vision is to reliably infer useful information about the external environment from images formed on the retinae. In general, the inference of scene properties from retinal images is not deductive; it requires knowledge about the external environment. Further, it has been suggested that the environment must be regular in some way in order for any scene properties to be reliably inferred. In particular, Knill and Kersten [1991, in Pattern Recognition by Man and Machine Ed. R J Watt (London: Macmillan)] and Jepson et al [1996, in Bayesian Approaches to Perception Eds D Knill, W Richards (Cambridge: Cambridge University Press)] claim that, given an 'unbiased' prior probability distribution for the scenes being observed, the generic viewpoint assumption is not probabilistically valid. However, this claim depends upon the use of representation spaces that may not be appropriate for the problems they consider. In fact, it is problematic to define a rigorous criterion for a probability distribution to be considered 'random' or 'regularity-free' in many natural domains of interest. This problem is closely related to Bertrand's paradox. I propose that, in the case of 'unbiased' priors, the reliability of inferences based on the generic viewpoint assumption depends partly on whether or not an observed coincidence in the image involves features known to be on the same object. This proposal is based on important differences between the distributions associated with: (i) a 'random' placement of features in 3-D, and (ii) the positions of features on a 'randomly shaped' and 'randomly posed' 3-D object. Similar considerations arise in the case of inferring 3-D motion from image motion.
16.
Generalization, similarity, and Bayesian inference
Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models.
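A minimal version of this Bayesian account can be computed directly: hypotheses are candidate consequential regions (here, intervals on a line), the likelihood follows the "size principle" (smaller regions are favored), and generalization to a probe stimulus is the posterior mass of regions containing it. The grid bounds, step size, and function name below are illustrative assumptions, not the paper's formulation.

```python
def generalization_gradient(x, ys, lo=-10.0, hi=10.0, step=0.5):
    """Bayesian generalization from one observed stimulus x.  Hypotheses
    are intervals [l, u] on a grid with a uniform prior; the likelihood
    of x under a hypothesis is 1/|h| if x lies inside it, else 0 (the
    size principle).  Generalization to probe y is the posterior
    probability that y falls in the same consequential region as x."""
    n = int((hi - lo) / step) + 1
    grid = [lo + i * step for i in range(n)]
    weights = {}
    total = 0.0
    for l in grid:
        for u in grid:
            if l < u and l <= x <= u:
                w = 1.0 / (u - l)        # size principle
                weights[(l, u)] = w
                total += w
    return [sum(w for (l, u), w in weights.items() if l <= y <= u) / total
            for y in ys]
```

The resulting gradient decays with distance from the observed stimulus, approximating the exponential form of Shepard's universal law.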
17.
18.
Lei Shi Thomas L. Griffiths Naomi H. Feldman Adam N. Sanborn 《Psychonomic bulletin & review》2010,17(4):443-464
Probabilistic models have recently received much attention as accounts of human cognition. However, most research in which probabilistic models have been used has been focused on formulating the abstract problems behind cognitive tasks and their optimal solutions, rather than on mechanisms that could implement these solutions. Exemplar models are a successful class of psychological process models in which an inventory of stored examples is used to solve problems such as identification, categorization, and function learning. We show that exemplar models can be used to perform a sophisticated form of Monte Carlo approximation known as importance sampling and thus provide a way to perform approximate Bayesian inference. Simulations of Bayesian inference in speech perception, generalization along a single dimension, making predictions about everyday events, concept learning, and reconstruction from memory show that exemplar models can often account for human performance with only a few exemplars, for both simple and relatively complex prior distributions. These results suggest that exemplar models provide a possible mechanism for implementing at least some forms of Bayesian inference.
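The exemplars-as-importance-sampling correspondence can be sketched in a few lines: stored exemplars play the role of samples from the prior, each weighted by the likelihood of the current observation. The example below uses an illustrative conjugate setup (normal prior, normal likelihood) so the estimate can be checked against the exact posterior mean of 0.8; all names and values are mine.

```python
import math
import random

def exemplar_importance_sampling(exemplars, likelihood):
    """Exemplar-based approximate Bayesian inference as importance
    sampling: exemplars act as draws from the prior, and each is weighted
    by the likelihood of the current observation.  Returns the weighted
    (posterior-mean) estimate."""
    weights = [likelihood(e) for e in exemplars]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, exemplars)) / total

# Exemplars drawn from a Normal(0, 1) prior over the latent quantity mu;
# observation 1.0 with Normal(mu, 0.5) likelihood.  The exact posterior
# mean is 1.0 * (4 / (4 + 1)) = 0.8.
random.seed(1)
prior_samples = [random.gauss(0.0, 1.0) for _ in range(20000)]
obs = 1.0
est = exemplar_importance_sampling(
    prior_samples, lambda mu: math.exp(-(obs - mu) ** 2 / (2 * 0.5 ** 2)))
```

The same weighted average works with "only a few exemplars", just with more Monte Carlo noise, which is the sense in which exemplar storage can implement approximate Bayesian inference.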
19.
G E Zuriff 《The Psychoanalytic quarterly》1992,61(1):18-36
Psychoanalytic developmental theory has been profoundly influenced by recent observational research on infants. Although it is commonly held that these new data refute earlier theories of infancy, an examination of the evidence indicates otherwise. Much of the disagreement between the two is based on differences over the definition of such concepts as "self" and "self/other differentiation" and over strategies of theoretical inference. Inferences about the subjective experience of infants are best viewed as theoretical postulates rather than empirical statements or metaphors.
20.
Sue Finch, Geoff Cumming, Jennifer Williams, Lee Palmer, Elvira Griffith, Chris Alders, James Anderson, Olivia Goodman 《Behavior research methods, instruments & computers》2004,36(2):312-324
Geoffrey Loftus, Editor of Memory & Cognition from 1994 to 1997, strongly encouraged presentation of figures with error bars and avoidance of null hypothesis significance testing (NHST). The authors examined 696 Memory & Cognition articles published before, during, and after the Loftus editorship. Use of figures with bars increased to 47% under Loftus's editorship and then declined. Bars were rarely used for interpretation, and NHST remained almost universal. Analysis of 309 articles in other psychology journals confirmed that Loftus's influence was most evident in the articles he accepted for publication, but was otherwise limited. An e-mail survey of authors of papers accepted by Loftus revealed some support for his policy, but allegiance to traditional practices as well. Reform of psychologists' statistical practices would require more than editorial encouragement.