Similar Documents
A total of 20 similar documents were found (search time: 15 ms).
1.
In the psychological literature, there are two seemingly different approaches to inference: that from estimation of posterior intervals and that from Bayes factors. We provide an overview of each method and show that a salient difference is the choice of models. The two approaches as commonly practiced can be unified with a certain model specification, now popular in the statistics literature, called spike-and-slab priors. A spike-and-slab prior is a mixture of a null model, the spike, with an effect model, the slab. The estimate of the effect size here is a function of the Bayes factor, showing that estimation and model comparison can be unified. The salient difference is that common Bayes factor approaches provide for privileged consideration of theoretically useful parameter values, such as the value corresponding to the null hypothesis, while estimation approaches do not. Both approaches, either privileging the null or not, are useful depending on the goals of the analyst.
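
To make the unification concrete, the toy sketch below (not the authors' code; the normal spike-and-slab setup, the prior scale tau and the prior spike probability are illustrative assumptions) computes, for a normal mean with known sampling variance, the Bayes factor for the spike versus the slab and the model-averaged effect estimate that follows from it.

    # Illustrative sketch: spike-and-slab inference for a normal mean with known
    # sampling variance, showing how the Bayes factor and the model-averaged
    # effect estimate are linked. All settings are assumptions.
    import numpy as np
    from scipy.stats import norm

    def spike_and_slab(xbar, n, sigma=1.0, tau=0.5, prior_spike=0.5):
        se = sigma / np.sqrt(n)                      # standard error of the mean
        m_spike = norm.pdf(xbar, loc=0.0, scale=se)  # marginal likelihood under the point null
        m_slab = norm.pdf(xbar, loc=0.0, scale=np.sqrt(tau**2 + se**2))  # slab: N(0, tau^2) on the effect
        bf01 = m_spike / m_slab                      # Bayes factor in favour of the spike
        post_spike = prior_spike * bf01 / (prior_spike * bf01 + 1 - prior_spike)
        shrunk = xbar * tau**2 / (tau**2 + se**2)    # posterior mean of the effect under the slab
        averaged = (1 - post_spike) * shrunk         # model-averaged effect estimate
        return bf01, post_spike, averaged

    print(spike_and_slab(xbar=0.3, n=50))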

2.
Bayesian inference for graphical factor analysis models (cited by 1 in total: 0 self-citations, 1 by others)
We generalize factor analysis models by allowing the concentration matrix of the residuals to have nonzero off-diagonal elements. The resulting model is called the graphical factor analysis model. Allowing a structure of associations gives information about the correlation left unexplained by the unobserved variables, which can be used both in the confirmatory and exploratory context. We first present a sufficient condition for global identifiability of this class of models with a generic number of factors, thereby extending the results in Stanghellini (1997) and Vicard (2000). We then consider the issue of model comparison and show that fast local computations are possible for this purpose, if the conditional independence graphs on the residuals are restricted to be decomposable and a Bayesian approach is adopted. To achieve this aim, we propose a new reversible jump MCMC method to approximate the posterior probabilities of the considered models. We then study the evolution of political democracy in 75 developing countries based on eight measures of democracy in two different years. We acknowledge support from M.U.R.S.T. of Italy and from the European Science Foundation H.S.S.S. Network. We are grateful to the referees and the Editor for many useful suggestions and comments which led to a substantial improvement of the paper. We also thank Nanny Wermuth for stimulating discussions and Kenneth A. Bollen for kindly providing us with the data-set.
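
As a minimal illustration of the model class (an assumed one-factor example, not the authors' code or data), the sketch below builds the covariance matrix implied by a set of loadings plus a residual concentration matrix with one nonzero off-diagonal entry.

    # Illustrative construction of the covariance implied by a graphical factor
    # analysis model: one factor plus residuals whose concentration (inverse
    # covariance) matrix has an edge between the third and fourth residuals.
    import numpy as np

    Lambda = np.array([[0.8], [0.7], [0.6], [0.5]])    # assumed loadings, one factor

    Omega = np.diag([2.0, 2.0, 2.5, 2.5])              # residual concentration matrix
    Omega[2, 3] = Omega[3, 2] = -0.8                   # nonzero off-diagonal element

    Sigma = Lambda @ Lambda.T + np.linalg.inv(Omega)   # implied covariance of the observables
    print(np.round(Sigma, 3))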

3.
Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder’s BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
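
For readers who want to see what sits behind a default Bayesian t-test of this kind, the sketch below numerically integrates a Cauchy prior on effect size to obtain a JZS-style Bayes factor. It is an illustrative reimplementation under assumed settings (one-sample test, prior scale r = 0.707), not JASP or BayesFactor code.

    import numpy as np
    from scipy import integrate, stats

    def jzs_like_bf10(t, n, r=0.707):
        """One-sample Bayes factor with a Cauchy(0, r) prior on the effect size.
        Illustrative sketch of the default-prior approach, not JASP/BayesFactor code."""
        df = n - 1
        # Marginal likelihood of the observed t under H1: average the noncentral t
        # density over the Cauchy prior on the standardized effect size delta.
        def integrand(delta):
            return stats.nct.pdf(t, df, delta * np.sqrt(n)) * stats.cauchy.pdf(delta, 0, r)
        m1, _ = integrate.quad(integrand, -np.inf, np.inf)
        m0 = stats.t.pdf(t, df)   # likelihood of t under H0 (delta = 0)
        return m1 / m0

    print(jzs_like_bf10(t=2.5, n=30))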

4.
Bayesian parameter estimation and Bayesian hypothesis testing present attractive alternatives to classical inference using confidence intervals and p values. In part I of this series we outline ten prominent advantages of the Bayesian approach. Many of these advantages translate to concrete opportunities for pragmatic researchers. For instance, Bayesian hypothesis testing allows researchers to quantify evidence and monitor its progression as data come in, without needing to know the intention with which the data were collected. We end by countering several objections to Bayesian hypothesis testing. Part II of this series discusses JASP, a free and open source software program that makes it easy to conduct Bayesian estimation and testing for a range of popular statistical scenarios (Wagenmakers et al. this issue).

5.
Lee MD, Wagenmakers EJ. Psychological Review, 2005, 112(3): 662-8; discussion 669-74
D. Trafimow (2003) presented an analysis of null hypothesis significance testing (NHST) using Bayes's theorem. Among other points, he concluded that NHST is logically invalid, but that logically valid Bayesian analyses are often not possible. The latter conclusion reflects a fundamental misunderstanding of the nature of Bayesian inference. This view needs correction, because Bayesian methods have an important role to play in many psychological problems where standard techniques are inadequate. This comment, with the help of a simple example, explains the usefulness of Bayesian inference for psychology.

6.
Probabilistic models have recently received much attention as accounts of human cognition. However, most research using probabilistic models has focused on formulating the abstract problems behind cognitive tasks and their optimal solutions, rather than on mechanisms that could implement these solutions. Exemplar models are a successful class of psychological process models in which an inventory of stored examples is used to solve problems such as identification, categorization, and function learning. We show that exemplar models can be used to perform a sophisticated form of Monte Carlo approximation known as importance sampling and thus provide a way to perform approximate Bayesian inference. Simulations of Bayesian inference in speech perception, generalization along a single dimension, making predictions about everyday events, concept learning, and reconstruction from memory show that exemplar models can often account for human performance with only a few exemplars, for both simple and relatively complex prior distributions. These results suggest that exemplar models provide a possible mechanism for implementing at least some forms of Bayesian inference.
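
The exemplar-as-importance-sampler idea can be sketched in a few lines (a toy with an assumed Gaussian prior and likelihood, not the authors' simulations): stored exemplars act as draws from the prior, and likelihood-proportional weights turn them into an approximate posterior expectation.

    import numpy as np

    rng = np.random.default_rng(0)
    exemplars = rng.normal(loc=0.0, scale=2.0, size=20)   # stored examples, assumed prior N(0, 2^2)

    def posterior_mean_via_exemplars(x_obs, exemplars, noise_sd=1.0):
        # Weight each stored exemplar by the likelihood of the new observation,
        # then self-normalize: importance sampling with the prior as proposal.
        weights = np.exp(-0.5 * ((x_obs - exemplars) / noise_sd) ** 2)
        weights /= weights.sum()
        return np.sum(weights * exemplars)                # approximate E[theta | x_obs]

    print(posterior_mean_via_exemplars(1.5, exemplars))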

7.
Latent trait models for binary responses to a set of test items are considered from the point of view of estimating latent trait parameters θ = (θ1, ..., θn) and item parameters β = (β1, ..., βk), where βj may be vector valued. With θ considered a random sample from a prior distribution with parameter φ, the estimation of (β, φ) is studied under the theory of the EM algorithm. An example and computational details are presented for the Rasch model. This work was supported by Contract No. N00014-81-K-0265, Modification No. P00002, from Personnel and Training Research Programs, Psychological Sciences Division, Office of Naval Research. The authors wish to thank an anonymous reviewer for several valuable suggestions.
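
A compact sketch of marginal-maximum-likelihood EM for the Rasch model is given below, with a standard-normal ability prior handled by quadrature and one Newton step per item in each M-step. It follows the general recipe summarized above, but every implementation detail (grid, starting values, simulated data) is an assumption rather than the paper's code.

    import numpy as np

    def rasch_mml_em(X, n_quad=21, n_iter=50):
        """EM estimation of Rasch item difficulties with a standard-normal ability prior."""
        n_persons, n_items = X.shape
        nodes = np.linspace(-4, 4, n_quad)                 # quadrature grid for theta
        weights = np.exp(-0.5 * nodes**2)
        weights /= weights.sum()
        b = np.zeros(n_items)                              # item difficulties
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-(nodes[:, None] - b[None, :])))     # n_quad x n_items
            # E-step: posterior weight of each quadrature node for each person.
            loglik = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T         # n_persons x n_quad
            post = np.exp(loglik - loglik.max(axis=1, keepdims=True)) * weights
            post /= post.sum(axis=1, keepdims=True)
            n_k = post.sum(axis=0)                         # expected number of persons per node
            r_jk = post.T @ X                              # expected correct responses per node and item
            # M-step: one Newton step for each item difficulty.
            grad = (n_k[:, None] * p - r_jk).sum(axis=0)
            hess = (n_k[:, None] * p * (1 - p)).sum(axis=0)
            b = b + grad / hess
        return b

    rng = np.random.default_rng(1)
    theta = rng.normal(size=500)
    b_true = np.array([-1.0, 0.0, 0.5, 1.0])
    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_true[None, :])))
    X = (rng.random(P.shape) < P).astype(float)
    print(rasch_mml_em(X))                                 # should land near b_true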

8.
A Monte Carlo study was conducted to investigate the ability of three estimation criteria to recover the parameters of Case V and Case III models from comparative judgment data. Significant differences in recovery are shown to exist. This research was supported by grant SOC76-20517 from the National Science Foundation. The authors acknowledge with appreciation comments received on an earlier draft by Harold Lindman, Joseph Zinnes, and anonymous reviewers.

9.
In this paper, we address the use of Bayesian factor analysis and structural equation models to draw inferences from experimental psychology data. While such applications are non-standard, the models are generally useful for the unified analysis of multivariate data that stem from, e.g., subjects’ responses to multiple experimental stimuli. We first review the models and the parameter identification issues inherent in them. We then provide details on model estimation via JAGS and on Bayes factor estimation. Finally, we use the models to re-analyze experimental data on risky choice, comparing the approach to simpler alternative methods.

10.
A fundamental issue for theories of human induction is to specify constraints on potential inferences. For inferences based on shared category membership, an analogy, and/or a relational schema, it appears that the basic goal of induction is to make accurate and goal-relevant inferences that are sensitive to uncertainty. People can use source information at various levels of abstraction (including both specific instances and more general categories), coupled with prior causal knowledge, to build a causal model for a target situation, which in turn constrains inferences about the target. We propose a computational theory in the framework of Bayesian inference and test its predictions (parameter-free for the cases we consider) in a series of experiments in which people were asked to assess the probabilities of various causal predictions and attributions about a target on the basis of source knowledge about generative and preventive causes. The theory proved successful in accounting for systematic patterns of judgments about interrelated types of causal inferences, including evidence that analogical inferences are partially dissociable from overall mapping quality.

11.
We present a hierarchical Bayes approach to modeling parameter heterogeneity in generalized linear models. The model assumes that there are relevant subpopulations and that within each subpopulation the individual-level regression coefficients have a multivariate normal distribution. However, class membership is not known a priori, so the heterogeneity in the regression coefficients becomes a finite mixture of normal distributions. This approach combines the flexibility of semiparametric latent class models that assume common parameters for each subpopulation and the parsimony of random effects models that assume normal distributions for the regression parameters. The number of subpopulations is selected to maximize the posterior probability of the model being true. Simulations are presented which document the performance of the methodology for synthetic data with known heterogeneity and number of subpopulations. An application is presented concerning preferences for various aspects of personal computers.
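
The heterogeneity prior described above can be illustrated by simulation (all numbers are invented): each individual's coefficient vector is drawn from a finite mixture of multivariate normals, one component per latent subpopulation.

    import numpy as np

    rng = np.random.default_rng(3)
    mix_weights = np.array([0.6, 0.4])                      # subpopulation proportions
    component_means = np.array([[1.0, -0.5], [2.5, 0.8]])   # class-specific mean coefficients
    component_covs = np.array([[[0.2, 0.0], [0.0, 0.2]],
                               [[0.3, 0.1], [0.1, 0.3]]])

    def draw_individual_coefficients(n):
        # Latent class label first, then a draw from that class's normal distribution.
        classes = rng.choice(len(mix_weights), size=n, p=mix_weights)
        betas = np.array([rng.multivariate_normal(component_means[c], component_covs[c])
                          for c in classes])
        return betas, classes

    betas, classes = draw_individual_coefficients(1000)
    print(betas.mean(axis=0), np.bincount(classes) / 1000)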

12.
Mair P, von Eye A. Psychological Methods, 2007, 12(2): 139-156
In this article, the authors have 2 aims. First, hierarchical, nonhierarchical, and nonstandard log-linear models are defined. Second, application scenarios are presented for nonhierarchical and nonstandard models, with illustrations of where these scenarios can occur. Parameters can be interpreted in regard to their formal meaning and in regard to their magnitude. The interpretation of the meaning of parameters is the main focus of this article. Design matrices are used to describe the hypotheses tested in models and to illustrate cases in which parameters are interpretable. Also, design matrices are used to show where and how nonstandard models differ from standard hierarchical models. Coding schemes are discussed, in particular, dummy coding and effects coding. Data examples are given with data and models discussed in the literature.
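
The two coding schemes discussed in the article can be made concrete with a small design-matrix sketch (illustrative only; a single three-level factor is assumed).

    import numpy as np

    levels = np.array([0, 1, 2, 0, 1, 2])                  # observed levels of one factor

    def dummy_coding(levels, n_levels=3):
        # Reference-category (dummy) coding: the last level is the reference.
        D = np.zeros((len(levels), n_levels - 1))
        for j in range(n_levels - 1):
            D[:, j] = (levels == j).astype(float)
        return D

    def effects_coding(levels, n_levels=3):
        # Effects coding: the reference level is coded -1 in every column.
        E = dummy_coding(levels, n_levels)
        E[levels == n_levels - 1, :] = -1.0
        return E

    intercept = np.ones((len(levels), 1))
    print(np.hstack([intercept, dummy_coding(levels)]))    # dummy-coded design matrix
    print(np.hstack([intercept, effects_coding(levels)]))  # effects-coded design matrix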

13.
A test score on a psychological test is usually expressed as a normed score, representing its position relative to test scores in a reference population. These norms typically depend on predictor(s) such as age. The test score distribution conditional on predictors is estimated using regression, which may need large normative samples to estimate the relationships between the predictor(s) and the distribution characteristics properly. In this study, we examine to what extent this burden can be alleviated by using prior information in the estimation of new norms with Bayesian Gaussian distributional regression. In a simulation study, we investigate to what extent this norm estimation is more efficient and how robust it is to prior model deviations. We varied the prior type, prior misspecification and sample size. In our simulated conditions, using a fixed effects prior resulted in more efficient norm estimation than a weakly informative prior as long as the prior misspecification was not age dependent. With the proposed method and reasonable prior information, the same norm precision can be achieved with a smaller normative sample, at least in empirical problems similar to our simulated conditions. This may help test developers to achieve cost-efficient high-quality norms. The method is illustrated using empirical normative data from the IDS-2 intelligence test.
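
The basic operation behind such norms can be sketched with a toy Gaussian model whose mean is linear in age (all coefficients are invented; the authors' distributional-regression model is far more flexible): a raw score is converted into a position relative to same-age peers.

    from scipy.stats import norm

    def normed_score(raw, age, b0=20.0, b1=0.8, sd=5.0):
        """Toy normed-score computation under an assumed Gaussian model whose mean
        is linear in age and whose SD is constant; the coefficients are made up."""
        mu = b0 + b1 * age              # age-conditional mean in the reference population
        z = (raw - mu) / sd             # position relative to same-age peers
        percentile = norm.cdf(z) * 100
        standard_score = 100 + 15 * z   # an IQ-style standard score
        return z, percentile, standard_score

    print(normed_score(raw=48, age=30))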

14.
Generalization, similarity, and Bayesian inference (cited by 1 in total: 0 self-citations, 1 by others)
Tenenbaum JB, Griffiths TL. The Behavioral and Brain Sciences, 2001, 24(4): 629-40; discussion 652-791
Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models.
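
The Bayesian recasting of generalization can be illustrated with a one-dimensional toy: interval hypotheses on a grid, a uniform prior, and the size-principle likelihood. The grid, range, and example values below are assumptions for illustration, not the paper's exact model.

    import numpy as np
    from itertools import combinations

    def generalization_prob(examples, y, lo=0.0, hi=10.0, grid=41):
        """P(y belongs to the concept | examples) for interval hypotheses on a grid,
        a uniform prior over intervals, and the size-principle likelihood."""
        points = np.linspace(lo, hi, grid)
        x_min, x_max = min(examples), max(examples)
        n = len(examples)
        num = den = 0.0
        for l, u in combinations(points, 2):       # every candidate interval [l, u]
            if l <= x_min and x_max <= u:          # hypothesis consistent with the examples
                lik = (1.0 / (u - l)) ** n         # size principle: tighter intervals weigh more
                den += lik
                if l <= y <= u:
                    num += lik
        return num / den

    # Generalization falls off with distance from the observed examples.
    print([round(generalization_prob([4.8, 5.0, 5.2], y), 3) for y in (5.5, 6.5, 8.0)])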

15.
Bayesian estimation and testing of structural equation models (cited by 2 in total: 0 self-citations, 2 by others)
The Gibbs sampler can be used to obtain samples of arbitrary size from the posterior distribution over the parameters of a structural equation model (SEM) given covariance data and a prior distribution over the parameters. Point estimates, standard deviations and interval estimates for the parameters can be computed from these samples. If the prior distribution over the parameters is uninformative, the posterior is proportional to the likelihood, and asymptotically the inferences based on the Gibbs sample are the same as those based on the maximum likelihood solution, for example, output from LISREL or EQS. In small samples, however, the likelihood surface is not Gaussian and in some cases contains local maxima. Nevertheless, the Gibbs sample comes from the correct posterior distribution over the parameters regardless of the sample size and the shape of the likelihood surface. With an informative prior distribution over the parameters, the posterior can be used to make inferences about the parameters of underidentified models, as we illustrate on a simple errors-in-variables model. We thank David Spiegelhalter for suggesting applying the Gibbs sampler to structural equation models to the first author at a 1994 workshop in Wiesbaden. We thank Ulf Böckenholt, Chris Meek, Marijtje van Duijn, Clark Glymour, Ivo Molenaar, Steve Klepper, Thomas Richardson, Teddy Seidenfeld, and Tom Snijders for helpful discussions, mathematical advice, and critiques of earlier drafts of this paper.
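
A stripped-down sketch of the approach for a simple errors-in-variables model is shown below, with the error variances treated as known so that the two Gibbs updates stay short; the paper's treatment also samples variance parameters and handles full structural equation models. All numerical settings are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated data: latent xi, observed x = xi + noise and y = beta * xi + noise.
    n = 200
    xi_true = rng.normal(0.0, 1.0, n)
    x = xi_true + rng.normal(0.0, 0.5, n)
    y = 0.7 * xi_true + rng.normal(0.0, 0.5, n)
    tau2, sx2, sy2, prior_var = 1.0, 0.25, 0.25, 10.0   # treated as known / prior variance of beta

    beta, draws = 0.0, []
    for it in range(2000):
        # Sample the latent scores xi_i given beta and the data (conjugate normal update).
        prec_xi = 1/tau2 + 1/sx2 + beta**2/sy2
        mean_xi = (x/sx2 + beta*y/sy2) / prec_xi
        xi = mean_xi + rng.normal(0.0, np.sqrt(1/prec_xi), n)
        # Sample the structural coefficient beta given xi and y (conjugate normal update).
        prec_b = 1/prior_var + np.sum(xi**2)/sy2
        mean_b = (np.sum(xi*y)/sy2) / prec_b
        beta = mean_b + rng.normal(0.0, np.sqrt(1/prec_b))
        if it >= 500:                                    # discard burn-in
            draws.append(beta)

    draws = np.array(draws)
    print(draws.mean(), np.percentile(draws, [2.5, 97.5]))   # point and interval estimates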

16.
To date, attempts to teach Bayesian inference to nonexperts have not met with much success. BasicBayes, the computerized tutor presented here, is an attempt to change this state of affairs. BasicBayes is based on a novel theoretical framework about Bayesian reasoning recently introduced by Gigerenzer and Hoffrage (1995). This framework focuses on the connection between “cognitive algorithms” and “information formats.” BasicBayes teaches people how to translate Bayesian text problems into frequency formats, which have been shown to entail computationally simpler cognitive algorithms than those entailed by probability formats. The components and mode of functioning of BasicBayes are described in detail. Empirical evidence demonstrates the effectiveness of BasicBayes in teaching people simple Bayesian inference. Because of its flexible system architecture, BasicBayes can also be used as a research tool.
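
The contrast between the two information formats can be made concrete with a short worked example (the numbers are an invented textbook-style problem, not taken from BasicBayes): the probability format applies Bayes' theorem directly, while the frequency format phrases the same computation as counts.

    # Probability format: apply Bayes' theorem to the stated probabilities.
    base_rate, hit_rate, false_alarm = 0.01, 0.80, 0.096     # invented example values
    posterior = (hit_rate * base_rate) / (
        hit_rate * base_rate + false_alarm * (1 - base_rate))

    # Frequency format: the same computation phrased as counts out of 10,000 people.
    population = 10_000
    sick = round(base_rate * population)                     # 100 people have the condition
    sick_positive = round(hit_rate * sick)                   # 80 of them test positive
    healthy_positive = round(false_alarm * (population - sick))   # 950 false positives
    posterior_freq = sick_positive / (sick_positive + healthy_positive)

    print(posterior, posterior_freq)                         # both are roughly 0.078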

17.
We report five experiments in which the role of background beliefs in social judgments of posterior probability was investigated. From a Bayesian perspective, people should combine prior probabilities (or base rates) and diagnostic evidence with equal weighting, although previous research shows that base rates are often underweighted. These experiments were designed so that either piece of information was supplied either by personal beliefs or by presented statistics, and regression analyses were performed on individual participants to assess the relative influence of information. We found that both prior probabilities and diagnostic information significantly influenced judgments, whether supplied by beliefs or by statistical information, but that belief-based information tended to dominate the judgments made.

18.
In a latent class IRT model in which the latent classes are ordered on one dimension, the class-specific response probabilities are subject to inequality constraints. The number of these inequality constraints increases dramatically with the number of response categories per item, if assumptions like monotonicity or double monotonicity of the cumulative category response functions are postulated. A Markov chain Monte Carlo method, the Gibbs sampler, can sample from the multivariate posterior distribution of the parameters under the constraints. Bayesian model selection can be done by posterior predictive checks and Bayes factors. A simulation study is done to evaluate results of the application of these methods to ordered latent class models in three realistic situations. Also, an example of the presented methods is given for existing data with polytomous items. It can be concluded that the Bayesian estimation procedure can handle the inequality constraints on the parameters very well. However, the application of Bayesian model selection methods requires more research.

19.
One of the most popular paradigms for studying human reasoning involves the Wason card selection task. In this task, the participant is presented with four cards and a conditional rule (e.g., “If there is an A on one side of the card, there is always a 2 on the other side”). Participants are asked which cards should be turned to verify whether or not the rule holds. In this simple task, participants consistently provide answers that are incorrect according to formal logic. To account for these errors, several models have been proposed, one of the most prominent being the information gain model (Oaksford & Chater, Psychological Review, 101, 608–631, 1994). This model is based on the assumption that people independently select cards based on the expected information gain of turning a particular card. In this article, we present two estimation methods to fit the information gain model: a maximum likelihood procedure (programmed in R) and a Bayesian procedure (programmed in WinBUGS). We compare the two procedures and illustrate the flexibility of the Bayesian hierarchical procedure by applying it to data from a meta-analysis of the Wason task (Oaksford & Chater, Psychological Review, 101, 608–631, 1994). We also show that the goodness of fit of the information gain model can be assessed by inspecting the posterior predictives of the model. These Bayesian procedures make it easy to apply the information gain model to empirical data. Supplemental materials may be downloaded along with this article.
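
The quantity at the heart of the model, the expected reduction in uncertainty about the competing hypotheses from turning a particular card, can be sketched generically as follows. The specific priors and outcome likelihoods used by Oaksford and Chater are not reproduced here; the inputs in the example are purely illustrative.

    import numpy as np

    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def expected_information_gain(prior, likelihoods):
        """Expected reduction in Shannon entropy over hypotheses from one observation.
        prior[h] = P(hypothesis h); likelihoods[h][o] = P(outcome o | hypothesis h)."""
        prior = np.asarray(prior, dtype=float)
        like = np.asarray(likelihoods, dtype=float)
        p_outcome = prior @ like                         # marginal probability of each outcome
        eig = entropy(prior)
        for o, p_o in enumerate(p_outcome):
            if p_o > 0:
                posterior = prior * like[:, o] / p_o     # Bayes' rule given outcome o
                eig -= p_o * entropy(posterior)
        return eig

    # Two hypotheses (e.g., the rule holds vs. p and q are independent) and two
    # possible outcomes on the back of a card; the numbers are purely illustrative.
    print(expected_information_gain([0.5, 0.5], [[0.9, 0.1], [0.2, 0.8]]))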

20.
This article describes and demonstrates the BayesSDT MATLAB-based software package for performing Bayesian analysis with equal-variance Gaussian signal detection theory (SDT). The software uses WinBUGS to draw samples from the posterior distribution of six SDT parameters: discriminability, hit rate, false alarm rate, criterion, and two alternative measures of bias. The software either provides a simple MATLAB graphical user interface or allows a more general MATLAB function call to produce graphs of the posterior distribution for each parameter of interest for each data set, as well as to return the full set of posterior samples.
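
As a quick companion to the Bayesian treatment (not part of BayesSDT, and the two bias measures shown need not be the ones the package reports), the sketch below computes standard equal-variance Gaussian SDT point estimates from a 2x2 response table.

    from scipy.stats import norm

    def sdt_point_estimates(hits, misses, false_alarms, correct_rejections):
        """Equal-variance Gaussian SDT point estimates; assumes rates strictly
        between 0 and 1 (otherwise a correction would be needed)."""
        h = hits / (hits + misses)                               # hit rate
        f = false_alarms / (false_alarms + correct_rejections)   # false-alarm rate
        zh, zf = norm.ppf(h), norm.ppf(f)
        d_prime = zh - zf                                        # discriminability
        c = -0.5 * (zh + zf)                                     # criterion
        beta = float(norm.pdf(zh) / norm.pdf(zf))                # likelihood-ratio bias
        c_rel = c / d_prime                                      # criterion relative to d'
        return {"H": h, "F": f, "d'": d_prime, "c": c, "beta": beta, "c'": c_rel}

    print(sdt_point_estimates(hits=70, misses=30, false_alarms=20, correct_rejections=80))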

