Similar Articles
20 similar articles found (search time: 46 ms)
1.
A Bayesian Model II approach to the estimation of proportions in m groups (discussed by Novick, Lewis, and Jackson) is extended to obtain posterior marginal distributions for the proportions. It is anticipated that these will be useful in applications (such as Individually Prescribed Instruction) where decisions are to be made separately for each proportion, rather than jointly for the set of proportions. In addition, the approach is extended to allow greater use of prior information than previously, and the specification of this prior information is discussed. We are grateful to a reviewer for suggestions that made possible a more concise and complete presentation of our work.
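
For orientation, the within-group updating that such marginal posteriors build on is the standard beta-binomial result; a minimal sketch, assuming x_j successes in n_j trials for group j and a Beta(a, b) prior (one common way to write the pooling, not necessarily the authors' exact parameterization):
\[
\pi_j \mid x_j \sim \mathrm{Beta}(a + x_j,\; b + n_j - x_j), \qquad j = 1, \dots, m,
\]
with the Model II treatment placing a further prior on the hyperparameters so that information is shared across the m groups.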

2.
Multilevel analyses are often used to estimate the effects of group-level constructs. However, when using aggregated individual data (e.g., student ratings) to assess a group-level construct (e.g., classroom climate), the observed group mean might not provide a reliable measure of the unobserved latent group mean. In the present article, we propose a Bayesian approach that can be used to estimate a multilevel latent covariate model, which corrects for the unreliable assessment of the latent group mean when estimating the group-level effect. A simulation study was conducted to evaluate the choice of different priors for the group-level variance of the predictor variable and to compare the Bayesian approach with the maximum likelihood approach implemented in the software Mplus. Results showed that, under problematic conditions (i.e., small number of groups, predictor variable with a small ICC), the Bayesian approach produced more accurate estimates of the group-level effect than the maximum likelihood approach did.
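
One common parameterization of the multilevel latent covariate model (a sketch, not necessarily the exact specification used in the article) separates the latent group mean from the observed one:
\[
x_{ij} = \mu_{x_j} + \varepsilon_{ij}, \qquad
y_{ij} = \beta_w \,(x_{ij} - \mu_{x_j}) + \beta_b\, \mu_{x_j} + u_j + e_{ij},
\]
where the observed group mean \(\bar{x}_{\cdot j}\) estimates the latent \(\mu_{x_j}\) with reliability \(\tau_x^2 / (\tau_x^2 + \sigma_x^2 / n_j)\). A small ICC (small \(\tau_x^2\) relative to \(\sigma_x^2\)) and few groups make this reliability low, which is exactly the problematic condition targeted by the simulation.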

3.
Conventional Bayesian analysis for proportions is not appropriate for proportions with a background effect, for example, for the proportion of correct responses in a sensory difference test using a forced-choice method with a guessing probability. The main difficulty is that it is not reasonable to assume that a proportion with a background effect has a beta prior distribution. A generalized posterior distribution for proportions is derived in this paper. It includes the standard beta posterior distribution for a proportion without background effect as a special case. Bayesian inference and Bayesian sample size determination for proportions based on the generalized posterior distribution are discussed.
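
A sketch of the construction, assuming a guessing probability p_0 and a Beta(a, b) prior on the latent proportion of discriminators \(\pi\) (my notation, not necessarily the paper's): the probability of a correct response is \(p_c = p_0 + (1 - p_0)\pi\), so with x correct responses in n trials the posterior kernel is
\[
p(\pi \mid x) \;\propto\; \bigl[p_0 + (1 - p_0)\pi\bigr]^{x}\, \bigl[(1 - p_0)(1 - \pi)\bigr]^{\,n - x}\, \pi^{a-1} (1 - \pi)^{b-1},
\]
which is no longer a beta density; setting \(p_0 = 0\) recovers the usual Beta(a + x, b + n - x) posterior as the special case mentioned above.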

4.
This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam’s window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
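
To make the basic idea concrete, here is a minimal Python sketch of a model-averaged propensity score: enumerate covariate subsets, weight each logistic model by exp(-BIC/2) as a rough stand-in for its posterior model probability, and average the fitted scores. This is a generic illustration only, not the article's MCMC procedure nor the BMA package's algorithm, and brute-force enumeration is feasible only for a handful of covariates (hence Occam's window in the article). The function name and data layout are assumptions.

import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

def bma_propensity_scores(X, t):
    """Approximate model-averaged propensity scores.
    X: (n, p) covariate array; t: length-n array of 0/1 treatment indicators."""
    n, p = X.shape
    subsets = [s for k in range(1, p + 1) for s in combinations(range(p), k)]
    scores, bics = [], []
    for s in subsets:
        Xs = X[:, list(s)]
        # large C gives an (almost) unpenalized maximum-likelihood logistic fit
        model = LogisticRegression(C=1e6, max_iter=1000).fit(Xs, t)
        ps = model.predict_proba(Xs)[:, 1]
        loglik = np.sum(t * np.log(ps) + (1 - t) * np.log(1 - ps))
        k_par = Xs.shape[1] + 1                  # slopes plus intercept
        bics.append(k_par * np.log(n) - 2 * loglik)
        scores.append(ps)
    # exp(-BIC/2), normalized, approximates posterior model probabilities
    w = np.exp(-(np.array(bics) - np.min(bics)) / 2)
    w /= w.sum()
    return np.average(np.vstack(scores), axis=0, weights=w)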

5.
We present an introduction to Bayesian inference as it is used in probabilistic models of cognitive development. Our goal is to provide an intuitive and accessible guide to the what, the how, and the why of the Bayesian approach: what sorts of problems and data the framework is most relevant for, and how and why it may be useful for developmentalists. We emphasize a qualitative understanding of Bayesian inference, but also include information about additional resources for those interested in the cognitive science applications, mathematical foundations, or machine learning details in more depth. In addition, we discuss some important interpretation issues that often arise when evaluating Bayesian models in cognitive science.
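
The computation at the heart of all such models is Bayes' rule, shown here for candidate hypotheses h given data d:
\[
P(h \mid d) \;=\; \frac{P(d \mid h)\, P(h)}{\sum_{h'} P(d \mid h')\, P(h')},
\]
i.e., the learner's posterior belief combines the prior plausibility of a hypothesis with how well it predicts the observed data.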

6.

Near-death experiences (NDEs) are usually associated with positive affect; however, a small proportion are considered distressing. We aimed to look into the proportion of distressing NDEs in a sample of NDE narratives, categorise distressing narratives according to Greyson and Bush’s classification (inverse, void or hellish), and compare distressing and “classical” NDEs. Participants wrote down their experience, completed the Memory Characteristics Questionnaire (assessing the phenomenology of memories) and the Greyson scale (characterising content of NDEs). The proportion of suicide attempts, as well as the content and intensity of distressing and classical NDEs, were compared using frequentist and Bayesian statistics. Distressing NDEs represent 14% of our sample (n = 123). We identified 8 inverse, 8 hellish, and 1 void account. The proportion of suicide survivors is higher in distressing NDEs as compared to classical ones. Finally, memories of distressing NDEs appear as phenomenologically detailed as classical ones. Distressing NDEs deserve careful consideration to ensure their integration into experiencers’ identity.

7.
Adam Corner & Ulrike Hahn, Synthese, 2013, 190(16): 3579-3610
Norms—that is, specifications of what we ought to do—play a critical role in the study of informal argumentation, as they do in studies of judgment, decision-making and reasoning more generally. Specifically, they guide a recurring theme: are people rational? Though rules and standards have been central to the study of reasoning, and behavior more generally, there has been little discussion within psychology about why (or indeed if) they should be considered normative despite the considerable philosophical literature that bears on this topic. In the current paper, we ask what makes something a norm, with consideration both of norms in general and a specific example: norms for informal argumentation. We conclude that it is both possible and desirable to invoke norms for rational argument, and that a Bayesian approach provides solid normative principles with which to do so.

8.

This paper describes a method to measure the sensitivity of an individual to different facial expressions. It shows that individual participants are more sensitive to happy than to fearful expressions and that the differences are statistically significant using the model-comparison approach. Sensitivity is measured by asking participants to discriminate between an emotional facial expression and a neutral expression of the same face. The expression was diluted to different degrees by combining it in different proportions with the neutral expression using morphing software. Sensitivity is defined as the proportion of neutral expression in a stimulus required for participants to discriminate the emotional expression on 75% of presentations. Individuals could reliably discriminate happy expressions diluted with a greater proportion of the neutral expression compared with that required for discrimination of fearful expressions. This tells us that individual participants are more sensitive to happy compared with fearful expressions. Sensitivity is equivalent when measured on two different testing sessions, and greater sensitivity to happy expressions is maintained with short stimulus durations and stimuli generated using different morphing software. The increased sensitivity to happy compared with fearful expressions was affected by smaller image sizes for some participants. Application of the approach for use with clinical populations, as well as understanding the relative contribution of perceptual processing and affective processing in facial expression recognition, is discussed.
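
A minimal sketch of how such a 75% threshold can be read off a fitted psychometric function. The data, the logistic form, and the parameterization here are hypothetical, and intensity is coded as the proportion of the emotional expression in the morph (the complement of the proportion of neutral expression used in the article), so this illustrates the idea rather than the authors' exact procedure.

import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, x0, k):
    """Logistic psychometric function rising from 0.5 (chance level in a
    two-alternative discrimination) toward 1.0 as intensity x increases."""
    return 0.5 + 0.5 / (1.0 + np.exp(-k * (x - x0)))

# hypothetical data: proportion of the emotional expression in each morph
# and the observed proportion of correct discriminations at that level
intensity = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.75])
p_correct = np.array([0.52, 0.58, 0.70, 0.86, 0.95, 0.99])

(x0, k), _ = curve_fit(psychometric, intensity, p_correct, p0=[0.2, 10.0])

# this function equals 0.75 exactly at x = x0, so x0 is the 75% threshold
print(f"75% discrimination threshold (proportion of emotion): {x0:.3f}")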


9.
This article examines a Bayesian nonparametric approach to model selection and model testing, which is based on concepts from Bayesian decision theory and information theory. The approach can be used to evaluate the predictive utility of any model that is either probabilistic or deterministic, with that model analyzed under either the Bayesian or classical-frequentist approach to statistical inference. Conditional on an observed set of data, generated from some unknown true sampling density, the approach identifies the “best” model as the one that predicts a sampling density that explains the most information about the true density. Furthermore, in the approach, the decision is to reject a model when it does not explain enough information about the true density (according to a straightforward calibration of the Kullback-Leibler divergence measure). The posterior estimate of the true density is based on a Bayesian nonparametric prior that can give positive support to the entire space of sampling densities (defined on some sample space). This article also discusses the theoretical and practical advantages of the Bayesian nonparametric approach over all other types of model selection procedures, and over any model testing procedure that depends on interpreting a p-value. Finally, the Bayesian nonparametric approach is illustrated on four real data sets, in the comparison and testing of order-constrained models, cognitive models, models of choice behavior, and a test of a general psychometric model.
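
For reference, the information measure at the heart of this procedure is the Kullback-Leibler divergence from the (estimated) true sampling density f to a model's predicted density g:
\[
D_{\mathrm{KL}}(f \,\|\, g) \;=\; \int f(y)\,\log\frac{f(y)}{g(y)}\, dy \;\ge\; 0,
\]
with equality only when g = f. The decision rule described above rejects a model when this lost information exceeds a calibrated threshold; the calibration itself is specific to the paper and is only paraphrased here.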

10.
In the field of cognitive psychology, the p-value hypothesis test has established a stranglehold on statistical reporting. This is unfortunate, as the p-value provides at best a rough estimate of the evidence that the data provide for the presence of an experimental effect. An alternative and arguably more appropriate measure of evidence is conveyed by a Bayesian hypothesis test, which prefers the model with the highest average likelihood. One of the main problems with this Bayesian hypothesis test, however, is that it often requires relatively sophisticated numerical methods for its computation. Here we draw attention to the Savage–Dickey density ratio method, a method that can be used to compute the result of a Bayesian hypothesis test for nested models and under certain plausible restrictions on the parameter priors. Practical examples demonstrate the method’s validity, generality, and flexibility.
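
The Savage-Dickey identity itself is simple to state. For a point null H0: θ = θ0 nested in H1, and under the usual restriction that the prior on the remaining parameters is the same under both hypotheses,
\[
\mathrm{BF}_{01} \;=\; \frac{p(\theta = \theta_0 \mid y, H_1)}{p(\theta = \theta_0 \mid H_1)},
\]
so the Bayes factor can be read off as the ratio of the posterior to the prior density of θ under H1, evaluated at the null value (for instance, estimated from MCMC output with a density estimator at θ0).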

11.
Few dispute that our models are approximations to reality. Yet when it comes to structural equation models (SEMs), we use estimators that assume true models (e.g. maximum likelihood) and that can create biased estimates when the model is inexact. This article presents an overview of the Model Implied Instrumental Variable (MIIV) approach to SEMs from Bollen (1996). The MIIV estimator using Two Stage Least Squares (2SLS), MIIV-2SLS, has greater robustness to structural misspecifications than system-wide estimators. In addition, the MIIV-2SLS estimator is asymptotically distribution free. Furthermore, MIIV-2SLS has equation-based overidentification tests that can help pinpoint misspecifications. Beyond these features, the MIIV approach has other desirable qualities. MIIV methods apply to higher-order factor analyses, categorical measures, growth curve models, dynamic factor analysis, and nonlinear latent variables. Finally, MIIV-2SLS permits researchers to estimate and test only the latent variable model or any other subset of equations. In addition, other MIIV estimators beyond 2SLS are available. Despite these promising features, research is needed to better understand its performance under a variety of conditions that represent empirical applications. Empirical and simulation examples in the article illustrate the MIIV orientation to SEMs and highlight the R package MIIVsem that implements MIIV-2SLS.
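
For a single equation y = Yβ + e with model-implied instruments V, the 2SLS estimator takes the familiar form (a generic statement of the estimator, not the full MIIV machinery):
\[
\hat{\beta}_{\mathrm{2SLS}} \;=\; \bigl(\hat{Y}'\hat{Y}\bigr)^{-1}\hat{Y}' y,
\qquad \hat{Y} = V\,(V'V)^{-1} V' Y,
\]
i.e., each explanatory variable is first regressed on the instruments and the fitted values are used in the second-stage regression; when there are more instruments than explanatory variables, the surplus is what yields the equation-level overidentification tests mentioned above.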

12.
Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach–Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight into latent psychological processes of interest.
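
One generic way to write such a hierarchy (a sketch; the article's exact parameterization may differ): the response time and choice of person i on trial j in condition c follow a Wiener first-passage process with boundary separation, non-decision time, starting point, and drift rate, and the person-level parameters are drawn from group-level distributions, e.g.
\[
(\mathrm{RT}_{ijc}, \mathrm{choice}_{ijc}) \sim \mathrm{Wiener}(\alpha_i, \tau_i, \beta_i, \delta_{ic}),
\qquad \delta_{ic} \sim \mathcal{N}(\mu_{\delta_c}, \sigma_\delta^2),
\]
so that sparse individual data are stabilized by shrinkage toward the group-level means while individual differences remain explicit.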

13.
The "wisdom of crowds" in making judgments about the future or other unknown events is well established. The average quantitative estimate of a group of individuals is consistently more accurate than the typical estimate, and is sometimes even the best estimate. Although individuals' estimates may be riddled with errors, averaging them boosts accuracy because both systematic and random errors tend to cancel out across individuals. We propose exploiting the power of averaging to improve estimates generated by a single person by using an approach we call dialectical bootstrapping. Specifically, it should be possible to reduce a person's error by averaging his or her first estimate with a second one that harks back to somewhat different knowledge. We derive conditions under which dialectical bootstrapping fosters accuracy and provide an empirical demonstration that its benefits go beyond reliability gains. A single mind can thus simulate the wisdom of many.
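
The arithmetic behind the benefit of averaging is straightforward. If e1 and e2 are two estimates of a true value t, then by the triangle inequality
\[
\Bigl|\tfrac{e_1 + e_2}{2} - t\Bigr| \;\le\; \frac{|e_1 - t| + |e_2 - t|}{2},
\]
with strict inequality whenever the two estimates bracket the truth: the error of the average is never worse than the average error of the two estimates, and it is strictly smaller when the errors point in opposite directions.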

14.
There has been a recent increase in interest in Bayesian analysis. However, little effort has been made thus far to directly incorporate background knowledge via the prior distribution into the analyses. This process might be especially useful in the context of latent growth mixture modeling when one or more of the latent groups are expected to be relatively small due to what we refer to as limited data. We argue that the use of Bayesian statistics has great advantages in limited data situations, but only if background knowledge can be incorporated into the analysis via prior distributions. We highlight these advantages through a data set including patients with burn injuries and analyze trajectories of posttraumatic stress symptoms using the Bayesian framework following the steps of the WAMBS checklist. In the included example, we illustrate how to obtain background information using previous literature based on a systematic literature search and by using expert knowledge. Finally, we show how to translate this knowledge into prior distributions and we illustrate the importance of conducting a prior sensitivity analysis. Although our example is from the trauma field, the techniques we illustrate can be applied to any field.

15.
The ability to understand the goals that drive another person’s actions is an important social and cognitive skill. This is no trivial task, because any given action may in principle be explained by different possible goals (e.g., one may wave one's arm to hail a cab or to swat a mosquito). To select which goal best explains an observed action is a form of abduction. To explain how people perform such abductive inferences, Baker, Tenenbaum, and Saxe (2007) proposed a computational-level theory that formalizes goal inference as Bayesian inverse planning (BIP). It is known that general Bayesian inference, be it exact or approximate, is computationally intractable (NP-hard). As the time required for computationally intractable computations grows excessively fast when scaled from toy domains to the real world, it seems that such models cannot explain how humans can perform Bayesian inferences quickly in real-world situations. In this paper we investigate how the BIP model can nevertheless explain how people are able to make goal inferences quickly. The approach that we propose builds on taking situational constraints explicitly into account in the computational-level model. We present a methodology for identifying situational constraints that render the model tractable. We discuss the implications of our findings and reflect on how the methodology can be applied to alternative models of goal inference and Bayesian models in general.
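
At the computational level, the inference that BIP formalizes is, schematically,
\[
P(\text{goal} \mid \text{actions}, \text{environment}) \;\propto\; P(\text{actions} \mid \text{goal}, \text{environment})\; P(\text{goal}),
\]
where the likelihood term assumes the agent plans approximately rationally toward the goal. The intractability arises because evaluating this likelihood, and normalizing over all candidate goals, requires solving (near-)optimal planning problems, which is what the proposed situational constraints are meant to keep manageable.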

16.

We define a desirability effect as the inflation of the judged probability of desirable events or the diminution of the judged probability of undesirable events. A series of studies of this effect are reported. In the first four experiments, subjects were presented with visual stimuli (a grid matrix in two colours, or a jar containing beads in two colours), and asked to estimate the probability of drawing at random one of the colours. The estimated probabilities for a defined draw were not higher when the draw entailed a gain than when it entailed a loss. In the fifth and sixth experiments, subjects read short stories each describing two contestants competing for some desirable outcome (e.g. parents fighting for child custody, or firms bidding for a contract). Some judged the probability that A would win, others judged the desirability that A would win. Story elements that enhanced a contestant's desirability did not cause the favoured contestant to be judged more likely to win. Only when a contestant's desirability was enhanced by promising the subject a payoff contingent on that contestant's victory was there some slight evidence for a desirability effect: contestants were judged more likely to win when the subject expected a monetary prize if they won than when the subject expected a prize if the other contestant won. In the last experiment, subjects estimated the probability of an over-20-point weekly change in the Dow Jones average, and were promised prizes contingent on such a change either occurring or failing to occur. They were also given a monetary incentive for accuracy. Subjects who desired a small change. We conclude that desirability effects, when they exist, operate by biasing the evidence brought to mind regarding the event in question, but when a given body of evidence is considered, its judged probability is not influenced by desirability considerations.

17.
Bayesian statistical inference offers a principled and comprehensive approach for relating psychological models to data. This article presents Bayesian analyses of three influential psychological models: multidimensional scaling models of stimulus representation, the generalized context model of category learning, and a signal detection theory model of decision making. In each case, the model is recast as a probabilistic graphical model and is evaluated in relation to a previously considered data set. In each case, it is shown that Bayesian inference is able to provide answers to important theoretical and empirical questions easily and coherently. The generality of the Bayesian approach and its potential for the understanding of models and data in psychology are discussed.
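
As an illustration of the recasting, the equal-variance signal detection model can be written as a small probabilistic graphical model (a common textbook parameterization, not necessarily the exact one used in the article): with h hits out of s signal trials and f false alarms out of n noise trials,
\[
h \sim \mathrm{Binomial}(\theta_h, s), \quad f \sim \mathrm{Binomial}(\theta_f, n), \qquad
\theta_h = \Phi\!\bigl(\tfrac{d}{2} - c\bigr), \quad \theta_f = \Phi\!\bigl(-\tfrac{d}{2} - c\bigr),
\]
with priors placed on discriminability d and criterion c; Bayesian inference then yields joint posterior distributions for d and c rather than point estimates.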

18.
This article examines the charge that the approach D. Stephen Long identifies as “ecclesial ethics” is a world-denying approach. The article examines typologies that pit world-affirmers against world-deniers, showing how “neo-Augustinians” end up on both sides of this divide, depending on who is constructing the typology. The article argues that these typologies are inaccurate, distorting, and often self-contradictory. It offers an alternative etiology, making a case that “ecclesial ethics” can be understood as a development of the progressive wing of Catholic thought that surfaced in Vatican II. The article examines Giuseppe Dossetti’s advocacy of a Gospel sine glossa at Vatican II, and argues that this type of ethics has deep roots in a Catholic sacramental theology. Finally, the article examines Henri de Lubac’s work as exemplary of such a sacramental theology. The article concludes that the basis of “ecclesial ethics” is a deeply sacramental view of creation being transformed by the grace of God through Jesus Christ.

19.
On what model should a modern multi-cultural democracy work? Spinosa et al. have argued that the political order should be sustained by a set of common values instilled in the citizens, without, however, any common rank order among these values. I argue that the multi-cultural state should rather conform to what I call the Secular Model, according to which the citizens need not share any basic values at all. On the Secular Model, people individually stick to the existing constitution (only) as long as they each feel that they have good reasons to do so. To be sure, each citizen of a multi-cultural state does need a feeling of community identity, a ‘we’ ideology, but it is desirable that each individual can have more than one such identity. It is also important that each individual can shift as he or she pleases, from one such identity to another. So this kind of identity should not be moulded by the state, but by various different free associations, independent of the state.

20.
Konrad Klotzke & Jean-Paul Fox, Psychometrika, 2019, 84(3): 649-672

A multivariate generalization of the log-normal model for response times is proposed within an innovative Bayesian modeling framework. A novel Bayesian Covariance Structure Model (BCSM) is proposed, in which the inclusion of random-effect variables is avoided, while their implied dependencies are modeled directly through an additive covariance structure. This makes it possible to jointly model complex dependencies due to, for instance, the test format (e.g., testlets, complex constructs), time limits, or features of digitally based assessments. A class of conjugate priors is proposed for the random-effect variance parameters in the BCSM framework. These priors give support to testing the presence of random effects, reduce boundary effects by allowing non-positive (co)variance parameters, and support accurate estimation even for very small true variance parameters. The conjugate priors under the BCSM lead to efficient posterior computation. Bayes factors and the Bayesian Information Criterion are discussed for the purpose of model selection in the new framework. In two simulation studies, a satisfying performance of the MCMC algorithm and of the Bayes factor is shown. In comparison with parameter expansion through a half-Cauchy prior, estimates of variance parameters close to zero show no bias, and undercoverage of credible intervals is avoided. An empirical example showcases the utility of the BCSM for response times to test the influence of item presentation formats on the test performance of students in a Latin square experimental design.
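
The core idea can be sketched as follows (my notation, simplified to a single testlet-type dependency): instead of adding a random effect with variance τ to the log response-time model, the dependency it would imply is written directly into the covariance matrix,
\[
\boldsymbol{\Sigma} \;=\; \operatorname{diag}(\sigma_1^2, \dots, \sigma_p^2) \;+\; \tau\, \mathbf{1}\mathbf{1}',
\]
where τ is the common covariance among the affected response times and only needs to keep Σ positive definite, so it may be zero or slightly negative; allowing τ ≤ 0 is what reduces the boundary problem at zero and supports testing whether the random effect is present at all.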


