Similar Documents
20 similar documents found (search time: 546 ms)
1.
Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant under linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys–Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis.
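The "posterior probability that the inequality constraints hold" idea in this abstract can be illustrated with a minimal Monte Carlo sketch in the encompassing-prior style. All numbers below (group means, standard errors, the vague prior scale) are illustrative assumptions, not taken from the paper; the Bayes factor of H1: mu1 > mu2 against the unconstrained model is the ratio of posterior to prior probability that the constraint holds, and the Bayes factor against the complement follows from the corresponding odds.

```python
import numpy as np

rng = np.random.default_rng(7)
n_draws = 200_000

# Hypothetical summary data for two groups (means, standard errors).
m1, se1 = 0.40, 0.10
m2, se2 = 0.10, 0.10

# Vague encompassing prior on both means (identical, centred at 0).
prior_mu1 = rng.normal(0.0, 10.0, n_draws)
prior_mu2 = rng.normal(0.0, 10.0, n_draws)

# Normal approximation to the posterior of each mean.
post_mu1 = rng.normal(m1, se1, n_draws)
post_mu2 = rng.normal(m2, se2, n_draws)

# Proportion of draws satisfying the inequality constraint mu1 > mu2.
f_prior = np.mean(prior_mu1 > prior_mu2)   # ~0.5 by symmetry
f_post = np.mean(post_mu1 > post_mu2)

# Bayes factor of the constrained hypothesis against the unconstrained
# model, and against its complement mu1 <= mu2.
bf_1u = f_post / f_prior
bf_1c = (f_post / (1 - f_post)) / (f_prior / (1 - f_prior))
print(bf_1u, bf_1c)
```

Note how bf_1u is bounded above by 1/f_prior = 2 no matter how strong the data are, which is exactly the convergence-to-a-constant behaviour the abstract describes for tests against the unconstrained model; the test against the complement (bf_1c) does not have this ceiling.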

2.
Informative hypotheses are increasingly being used in psychological sciences because they adequately capture researchers’ theories and expectations. In the Bayesian framework, the evaluation of informative hypotheses often makes use of default Bayes factors such as the fractional Bayes factor. This paper approximates and adjusts the fractional Bayes factor such that it can be used to evaluate informative hypotheses in general statistical models. In the fractional Bayes factor, a fraction parameter must be specified that controls the amount of information in the data used for specifying an implicit prior. The remaining fraction is used for testing the informative hypotheses. We discuss different choices of this parameter and present a scheme for setting it. Furthermore, a software package is described which computes the approximated adjusted fractional Bayes factor. Using this software package, psychological researchers can easily evaluate informative hypotheses by means of Bayes factors. Two empirical examples are used to illustrate the procedure.

3.
When analyzing repeated measurements data, researchers often have expectations about the relations between the measurement means. The expectations can often be formalized using equality and inequality constraints between (i) the measurement means over time, (ii) the measurement means between groups, (iii) the means adjusted for time-invariant covariates, and (iv) the means adjusted for time-varying covariates. The result is a set of informative hypotheses. In this paper, the Bayes factor is used to determine which hypothesis receives most support from the data. A pivotal element in the Bayesian framework is the specification of the prior. To avoid subjective prior specification, training data in combination with restrictions on the measurement means are used to obtain so-called constrained posterior priors. A simulation study and an empirical example from developmental psychology show that this prior results in Bayes factors with desirable properties.

4.
In the psychological literature, there are two seemingly different approaches to inference: that from estimation of posterior intervals and that from Bayes factors. We provide an overview of each method and show that a salient difference is the choice of models. The two approaches as commonly practiced can be unified with a certain model specification, now popular in the statistics literature, called spike-and-slab priors. A spike-and-slab prior is a mixture of a null model, the spike, with an effect model, the slab. The estimate of the effect size here is a function of the Bayes factor, showing that estimation and model comparison can be unified. The salient difference is that common Bayes factor approaches provide for privileged consideration of theoretically useful parameter values, such as the value corresponding to the null hypothesis, while estimation approaches do not. Both approaches, either privileging the null or not, are useful depending on the goals of the analyst.
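The unification the abstract describes can be sketched in a few lines. In this hypothetical example (the observed effect, its standard error, and the unit-normal slab are assumptions for illustration), the Bayes factor between slab and spike determines the posterior probability of the slab, which in turn shrinks the model-averaged effect-size estimate, so estimation is indeed a function of the Bayes factor:

```python
from scipy import stats

# Spike-and-slab sketch: spike at delta = 0, slab delta ~ N(0, 1),
# equal prior odds; observed effect d with standard error se (illustrative).
d, se = 0.40, 0.15

# Marginal likelihood of the observed effect under each component.
m_spike = stats.norm(0.0, se).pdf(d)                   # delta = 0
m_slab = stats.norm(0.0, (1.0 + se**2) ** 0.5).pdf(d)  # delta ~ N(0, 1)

bf10 = m_slab / m_spike        # Bayes factor, slab vs. spike
p_slab = bf10 / (1.0 + bf10)   # posterior probability of the slab

# Conditional posterior mean of delta under the slab (normal-normal
# update), shrunk further by the spike's posterior mass at delta = 0:
post_mean_slab = d / (1.0 + se**2)
estimate = p_slab * post_mean_slab  # model-averaged effect estimate
print(bf10, estimate)
```

The spike contributes zero to the average, so the estimate pulls toward 0 exactly in proportion to the evidence against the slab, which is the "privileged consideration" of the null value that pure interval estimation lacks.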

5.
The development of cognitive models involves the creative scientific formalization of assumptions, based on theory, observation, and other relevant information. In the Bayesian approach to implementing, testing, and using cognitive models, assumptions can influence both the likelihood function of the model, usually corresponding to assumptions about psychological processes, and the prior distribution over model parameters, usually corresponding to assumptions about the psychological variables that influence those processes. The specification of the prior is unique to the Bayesian context, but often raises concerns that lead to the use of vague or non-informative priors in cognitive modeling. Sometimes the concerns stem from philosophical objections, but more often practical difficulties with how priors should be determined are the stumbling block. We survey several sources of information that can help to specify priors for cognitive models, discuss some of the methods by which this information can be formalized in a prior distribution, and identify a number of benefits of including informative priors in cognitive modeling. Our discussion is based on three illustrative cognitive models, involving memory retention, categorization, and decision making.

6.
Bayes factor approaches for testing interval null hypotheses
Psychological theories are statements of constraint. The role of hypothesis testing in psychology is to test whether specific theoretical constraints hold in data. Bayesian statistics is well suited to the task of finding supporting evidence for constraint, because it allows for comparing evidence for 2 hypotheses against one another. One issue in hypothesis testing is that constraints may hold only approximately rather than exactly, and the reason for small deviations may be trivial or uninteresting. In the large-sample limit, these uninteresting, small deviations lead to the rejection of a useful constraint. In this article, we develop several Bayes factor 1-sample tests for the assessment of approximate equality and ordinal constraints. In these tests, the null hypothesis covers a small interval of non-0 but negligible effect sizes around 0. These Bayes factors are alternatives to previously developed Bayes factors, which do not allow for interval null hypotheses, and may especially prove useful to researchers who use statistical equivalence testing. To facilitate adoption of these Bayes factor tests, we provide easy-to-use software.
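An interval-null Bayes factor of the general kind described here can be sketched with normal prior and posterior approximations. The effect size, its standard error, the unit-normal prior, and the "negligible" half-width eps = 0.1 below are all illustrative assumptions, not the paper's exact test: the Bayes factor compares how the data shift the odds of the effect lying inside versus outside the interval.

```python
from scipy import stats

# Hypothetical one-sample setting: observed standardized effect d with
# standard error se; prior on the effect delta ~ N(0, 1).
d, se = 0.15, 0.08
prior = stats.norm(0.0, 1.0)

# Conjugate normal update for delta.
prior_var, like_var = 1.0**2, se**2
post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
post_mean = post_var * (d / like_var)
posterior = stats.norm(post_mean, post_var**0.5)

eps = 0.1  # interval null H0: |delta| < eps (negligible effects)
p0_prior = prior.cdf(eps) - prior.cdf(-eps)
p0_post = posterior.cdf(eps) - posterior.cdf(-eps)

# BF of the interval null against its complement |delta| >= eps:
# ratio of posterior odds to prior odds of landing in the interval.
bf_01 = (p0_post / (1 - p0_post)) / (p0_prior / (1 - p0_prior))
print(bf_01)
```

Unlike a point null, this interval null can accumulate genuine support when the posterior concentrates on small effect sizes, which is why such tests suit researchers who would otherwise use equivalence testing.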

7.
This paper presents a hierarchical Bayes circumplex model for ordinal ratings data. The circumplex model was proposed to represent the circular ordering of items in psychological testing by imposing inequalities on the correlations of the items. We provide a specification of the circumplex, propose identifying constraints and conjugate priors for the angular parameters, and accommodate theory-driven constraints in the form of inequalities. We investigate the performance of the proposed MCMC algorithm and apply the model to the analysis of value priorities data obtained from a representative sample of Dutch citizens. We wish to thank Michael Browne and two anonymous reviewers for their comments. The data for this study were collected as part of the project AIR2-CT94-1066, sponsored by the European Commission.

8.
The software package Bain can be used for the evaluation of informative hypotheses with respect to the parameters of a wide range of statistical models. For pairs of hypotheses the support in the data is quantified using the approximate adjusted fractional Bayes factor (BF). Currently, the data have to come from one population or have to consist of samples of equal size obtained from multiple populations. If samples of unequal size are obtained from multiple populations, the BF can be shown to be inconsistent. This paper examines how the approach implemented in Bain can be generalized such that multiple-population data can properly be processed. The resulting multiple-population approximate adjusted fractional Bayes factor is implemented in the R package Bain.

9.
Bayesian approaches to data analysis are considered within the context of behavior analysis. The paper distinguishes between Bayesian inference, the use of Bayes Factors, and Bayesian data analysis using specialized tools. Given the importance of prior beliefs to these approaches, the review addresses those situations in which priors have a big effect on the outcome (Bayes Factors) versus a smaller effect (parameter estimation). Although there are many advantages to Bayesian data analysis from a philosophical perspective, in many cases a behavior analyst can be reasonably well‐served by the adoption of traditional statistical tools as long as the focus is on parameter estimation and model comparison, not null hypothesis significance testing. A strong case for Bayesian analysis exists under specific conditions: When prior beliefs can help narrow parameter estimates (an especially important issue given the small sample sizes common in behavior analysis) and when an analysis cannot easily be conducted using traditional approaches (e.g., repeated measures censored regression).

10.
We study various axioms of discrete probabilistic choice, measuring how restrictive they are, both alone and in the presence of other axioms, given a specific class of prior distributions over a complete collection of finite choice probabilities. We do this by using Monte Carlo simulation to compute, for a range of prior distributions, probabilities that various simple and compound axioms hold. For example, the probability of the triangle inequality is usually many orders of magnitude higher than the probability of random utility. While neither the triangle inequality nor weak stochastic transitivity imply the other, the conditional probability that one holds given the other holds is greater than the marginal probability, for all priors in the class we consider. The reciprocal of the prior probability that an axiom holds is an upper bound on the Bayes factor in favor of a restricted model, in which the axiom holds, against an unrestricted model. The relatively high prior probability of the triangle inequality limits the degree of support that data from a single decision maker can provide in its favor. The much lower probability of random utility implies that the Bayes factor in favor of it can be much higher, for suitable data.
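The Monte Carlo strategy the abstract describes is easy to reproduce for the smallest non-trivial case. The sketch below assumes three alternatives and a uniform prior over the three free binary choice probabilities (one member of the class of priors the paper considers); it estimates the prior probability that the triangle inequality holds, the prior probability of weak stochastic transitivity, and the conditional probability of the former given the latter:

```python
import itertools
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Uniform prior over the three binary choice probabilities for items a,b,c.
p_ab = rng.uniform(size=n)
p_bc = rng.uniform(size=n)
p_ac = rng.uniform(size=n)

# Full collection of choice probabilities, with p(y,x) = 1 - p(x,y).
p = {('a', 'b'): p_ab, ('b', 'a'): 1 - p_ab,
     ('b', 'c'): p_bc, ('c', 'b'): 1 - p_bc,
     ('a', 'c'): p_ac, ('c', 'a'): 1 - p_ac}

# Triangle inequality: p(x,z) <= p(x,y) + p(y,z) for all distinct x,y,z.
tri = np.ones(n, dtype=bool)
for x, y, z in itertools.permutations('abc'):
    tri &= p[(x, z)] <= p[(x, y)] + p[(y, z)]

# Weak stochastic transitivity: p(x,y)>=.5 and p(y,z)>=.5 imply p(x,z)>=.5.
wst = np.ones(n, dtype=bool)
for x, y, z in itertools.permutations('abc'):
    wst &= ~((p[(x, y)] >= .5) & (p[(y, z)] >= .5) & (p[(x, z)] < .5))

cond = (tri & wst).mean() / wst.mean()
print(tri.mean(), wst.mean(), cond)
```

For three alternatives the triangle-inequality volume works out to 2/3 and weak stochastic transitivity to 3/4, so the reciprocal 1/(2/3) = 1.5 caps the Bayes factor a single decision maker's data can produce in favor of the triangle inequality, exactly the limitation the abstract notes; the simulation also shows the conditional probability exceeding the marginal one.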

11.
Klotzke, Konrad, & Fox, Jean-Paul. Psychometrika, 2019, 84(3), 649–672

A multivariate generalization of the log-normal model for response times is proposed within an innovative Bayesian modeling framework. A novel Bayesian Covariance Structure Model (BCSM) is proposed, where the inclusion of random-effect variables is avoided, while their implied dependencies are modeled directly through an additive covariance structure. This makes it possible to jointly model complex dependencies due to, for instance, the test format (e.g., testlets, complex constructs), time limits, or features of digitally based assessments. A class of conjugate priors is proposed for the random-effect variance parameters in the BCSM framework. They support testing the presence of random effects, reduce boundary effects by allowing non-positive (co)variance parameters, and support accurate estimation even for very small true variance parameters. The conjugate priors under the BCSM lead to efficient posterior computation. Bayes factors and the Bayesian Information Criterion are discussed for the purpose of model selection in the new framework. In two simulation studies, a satisfying performance of the MCMC algorithm and of the Bayes factor is shown. In comparison with parameter expansion through a half-Cauchy prior, estimates of variance parameters close to zero show no bias, and undercoverage of credible intervals is avoided. An empirical example showcases the utility of the BCSM for response times to test the influence of item presentation formats on the test performance of students in a Latin square experimental design.


12.
Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite—taking multiple parameter values—such as the alternative hypothesis in a t-test. We found that, for these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can, in some situations, greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
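The "strong mathematical guarantee" referred to here is a martingale bound: when the null is a simple point hypothesis and data are generated under it, the probability that the Bayes factor ever exceeds k is at most 1/k, no matter how aggressively one monitors. The sketch below checks that baseline case by simulation (the prior scale, evidence threshold, and monitoring horizon are illustrative choices, and this is the simple-null setting where the bound does apply, not the composite/heterogeneous settings the paper analyzes):

```python
import numpy as np

rng = np.random.default_rng(3)
runs, horizon = 2000, 1000

# Data generated under a point null H0: x_i ~ N(0, 1), sigma known.
x = rng.normal(0.0, 1.0, (runs, horizon))
n = np.arange(1, horizon + 1)
xbar = np.cumsum(x, axis=1) / n

# Analytic sequential BF10 for H1: mu ~ N(0, 1) against H0: mu = 0.
bf10 = np.sqrt(1.0 / (n + 1)) * np.exp(xbar**2 * n**2 / (2 * (n + 1)))

# Optional stopping: stop as soon as BF10 >= 3 at any interim look.
hit = (bf10 >= 3).any(axis=1)
print(hit.mean())  # bounded above by 1/3 under H0, however long you monitor
```

Even with a look after every single observation, the hit rate stays below 1/3; the paper's point is that this comfort evaporates once the truth is a mixture or the tested hypothesis is composite.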

13.
Multilevel autoregressive models are especially suited for modeling between-person differences in within-person processes. Fitting these models with Bayesian techniques requires the specification of prior distributions for all parameters. Often it is desirable to specify prior distributions that have negligible effects on the resulting parameter estimates. However, the conjugate prior distribution for covariance matrices—the Inverse-Wishart distribution—tends to be informative when variances are close to zero. This is problematic for multilevel autoregressive models, because autoregressive parameters are usually small for each individual, so that the variance of these parameters will be small. We performed a simulation study to compare the performance of three Inverse-Wishart prior specifications suggested in the literature, when one or more variances for the random effects in the multilevel autoregressive model are small. Our results show that the prior specification that uses plug-in ML estimates of the variances performs best. We advise always including a sensitivity analysis for the prior specification for covariance matrices of random parameters, especially in autoregressive models, and including a data-based prior specification in this analysis. We illustrate such an analysis by means of an empirical application on repeated measures data on worrying and positive affect.
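The informativeness near zero that the abstract warns about is easy to see by sampling from a common "default" Inverse-Wishart prior. This is a small sketch, not the paper's simulation design; the dimension, degrees of freedom, and identity scale are illustrative defaults:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# A common "weakly informative" default: Inverse-Wishart with
# df = dim + 1 and identity scale, for a 2x2 random-effects covariance.
draws = stats.invwishart.rvs(df=3, scale=np.eye(2), size=100_000,
                             random_state=rng)
var1 = draws[:, 0, 0]  # implied marginal prior on the first variance

# How much prior mass sits near zero? For the small random-effect
# variances typical of autoregressive parameters, essentially none.
print(np.mean(var1 < 0.01), np.median(var1))
```

The marginal prior on each variance here is inverse-gamma, which places virtually no mass below 0.01 while centering far above it, so a true random-effect variance near zero gets pushed upward; this is why the paper recommends sensitivity analyses and data-based (plug-in) scale choices.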

14.
The Savage–Dickey density ratio is a simple method for computing the Bayes factor for an equality constraint on one or more parameters of a statistical model. In regression analysis, this includes the important scenario of testing whether one or more of the covariates have an effect on the dependent variable. However, the Savage–Dickey ratio only provides the correct Bayes factor if the prior distribution of the nuisance parameters under the nested model is identical to the conditional prior under the full model given the equality constraint. This condition is violated for multiple regression models with a Jeffreys–Zellner–Siow prior, which is often used as a default prior in psychology. Besides linear regression models, the limitation of the Savage–Dickey ratio is especially relevant when analytical solutions for the Bayes factor are not available. This is the case for generalized linear models, non-linear models, or cognitive process models with regression extensions. As a remedy, the correct Bayes factor can be computed using a generalized version of the Savage–Dickey density ratio.
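In the well-behaved case (no nuisance-parameter mismatch), the Savage–Dickey ratio is just the posterior density at the constrained value divided by the prior density there. A minimal conjugate-normal sketch with illustrative numbers:

```python
from scipy import stats

# Conjugate normal example with known sigma = 1:
# H0: mu = 0 versus H1: mu ~ N(0, 1). Numbers are illustrative.
n, xbar, sigma = 25, 0.30, 1.0

# Posterior of mu under H1 (normal-normal conjugacy).
post_var = 1.0 / (1.0 + n / sigma**2)
post_mean = post_var * n * xbar / sigma**2
posterior = stats.norm(post_mean, post_var**0.5)
prior = stats.norm(0.0, 1.0)

# Savage-Dickey: BF01 = posterior density at mu = 0 / prior density there.
bf01 = posterior.pdf(0.0) / prior.pdf(0.0)
print(bf01)
```

Because this equals the ratio of marginal likelihoods only when the prior on any nuisance parameters under H0 matches the conditional prior under H1 at mu = 0, JZS-style dependent priors in regression break the identity, which is the gap the generalized version closes.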

15.
Progress in science often comes from discovering invariances in relationships among variables; these invariances often correspond to null hypotheses. As is commonly known, it is not possible to state evidence for the null hypothesis in conventional significance testing. Here we highlight a Bayes factor alternative to the conventional t test that will allow researchers to express preference for either the null hypothesis or the alternative. The Bayes factor has a natural and straightforward interpretation, is based on reasonable assumptions, and has better properties than other methods of inference that have been advocated in the psychological literature. To facilitate use of the Bayes factor, we provide an easy-to-use, Web-based program that performs the necessary calculations.
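The Bayes factor alternative highlighted here is the JZS (Jeffreys–Zellner–Siow) t test, which places a Cauchy prior on effect size via a normal-on-delta, inverse-gamma-on-g mixture and reduces to a one-dimensional integral over g. A sketch of that integral (the t value and sample size passed in at the end are illustrative, not from the paper):

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, N):
    """One-sample JZS Bayes factor for H1 (Cauchy effect) vs H0: delta = 0."""
    nu = N - 1
    # Null marginal likelihood, up to a constant shared with the numerator.
    h0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    def integrand(g):
        # Marginal likelihood under H1 given g, weighted by the
        # inverse-gamma(1/2, 1/2) mixing density on g.
        return ((1 + N * g) ** -0.5
                * (1 + t**2 / ((1 + N * g) * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))

    h1, _ = integrate.quad(integrand, 0, np.inf)
    return h1 / h0

print(jzs_bf10(t=2.24, N=80))  # illustrative t value and sample size
```

Unlike a p value, BF10 drops below 1 when t is small, so the same machinery can state evidence for the invariance (the null) rather than only against it.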

16.
In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration‐based set of hypotheses containing equality constraints on the means, or a theory‐based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory‐based hypotheses) has advantages over exploration (i.e., examining all possible equality‐constrained hypotheses). Furthermore, examining reasonable order‐restricted hypotheses has more power to detect the true effect/non‐null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory‐based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number).

17.
Testing hypotheses on specific environmental causal effects on behavior
There have been strong critiques of the notion that environmental influences can have an important effect on psychological functioning. The substance of these criticisms is considered in order to infer the methodological challenges that have to be met. Concepts of cause and of the testing of causal effects are discussed with a particular focus on the need to consider sample selection and the value (and limitations) of longitudinal data. The designs that may be used to test hypotheses on specific environmental risk mechanisms for psychopathology are discussed in relation to a range of adoption strategies, twin designs, various types of "natural experiments," migration designs, the study of secular change, and intervention designs. In each case, consideration is given to the need for samples that "pull-apart" variables that ordinarily go together, specific hypotheses on possible causal processes, and the specification and testing of key assumptions. It is concluded that environmental risk hypotheses can be (and have been) put to the test but that it is usually necessary to use a combination of research strategies.

18.
In traditional approaches to structural equations modeling, variances of latent endogenous variables cannot be specified or constrained directly and, consequently, are not identified, unless certain precautions are taken. The usual method for achieving identification has been to fix one factor loading for each endogenous latent variable at unity. An alternative approach is to fix variances using newer constrained estimation algorithms. This article examines the philosophy behind such constraints and shows how their appropriate use is neither as straightforward nor as noncontroversial as portrayed in textbooks and computer manuals. The constraints on latent variable variances can interact with other model constraints to interfere with the testing of certain kinds of hypotheses and can yield incorrect standardized solutions with some popular software.

19.
Analyses are mostly executed at the population level, whereas in many applications the interest is on the individual level instead of the population level. In this paper, multiple N = 1 experiments are considered, where participants perform multiple trials with a dichotomous outcome in various conditions. Expectations with respect to the performance of participants can be translated into so-called informative hypotheses. These hypotheses can be evaluated for each participant separately using Bayes factors. A Bayes factor expresses the relative evidence for two hypotheses based on the data of one individual. This paper proposes to “average” these individual Bayes factors in the gP-BF, the average relative evidence. The gP-BF can be used to determine whether one hypothesis is preferred over another for all individuals under investigation. This measure provides insight into whether the relative preference of a hypothesis from a pre-defined set is homogeneous over individuals. Two additional measures are proposed to support the interpretation of the gP-BF: the evidence rate (ER), the proportion of individual Bayes factors that support the same hypothesis as the gP-BF, and the stability rate (SR), the proportion of individual Bayes factors that express a stronger support than the gP-BF. These three statistics can be used to determine the relative support in the data for the informative hypotheses entertained. Software is available that can be used to execute the approach proposed in this paper and to determine the sensitivity of the outcomes with respect to the number of participants and within condition replications.
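Given per-participant Bayes factors, the three summary statistics can be sketched in a few lines. This sketch reads the "average relative evidence" as a geometric mean of the individual Bayes factors (an assumption on my part, consistent with averaging evidence on the log scale); the individual Bayes factors fed in at the end are hypothetical:

```python
import numpy as np

def aggregate_bfs(bfs):
    """Aggregate N = 1 Bayes factors: gP-BF, evidence rate, stability rate."""
    bfs = np.asarray(bfs, dtype=float)
    gp_bf = np.exp(np.mean(np.log(bfs)))  # geometric mean of individual BFs

    # Evidence rate: proportion of individuals whose BF supports the
    # same hypothesis as the gP-BF (same side of 1).
    er = np.mean((bfs > 1) == (gp_bf > 1))

    # Stability rate: proportion of individual BFs expressing stronger
    # support than the gP-BF itself.
    if gp_bf > 1:
        sr = np.mean(bfs > gp_bf)
    else:
        sr = np.mean(bfs < gp_bf)
    return gp_bf, er, sr

# Hypothetical individual Bayes factors for H1 vs H2, one per participant.
gp_bf, er, sr = aggregate_bfs([5.2, 3.5, 0.8, 7.4, 2.2, 4.0])
print(gp_bf, er, sr)
```

A gP-BF above 1 with a low evidence rate would flag heterogeneity: the averaged evidence favors the hypothesis even though several individuals do not, which is precisely the situation ER and SR are meant to expose.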

20.
When testing hypotheses, two important problems that applied statisticians must consider are whether a large enough sample was used, and what to do when the frequently adopted homogeneity of variance assumption is violated. The first goal in this paper is to briefly review exact solutions to these problems. In the one-way ANOVA, for example, these procedures tell an experimenter whether enough observations were sampled so that the power will be at least as large as some pre-specified level. If too few observations were sampled, the procedure indicates how many more observations are required. The solution is exact, which is in contrast to another well-known procedure described in the paper. Also, the variances are allowed to be unequal. The second goal is to review how the techniques used to test hypotheses have also been used to solve problems in selection.
