Similar literature
20 similar documents retrieved (search time: 46 ms)
1.
Informative hypotheses are increasingly being used in the psychological sciences because they adequately capture researchers’ theories and expectations. In the Bayesian framework, the evaluation of informative hypotheses often makes use of default Bayes factors such as the fractional Bayes factor. This paper approximates and adjusts the fractional Bayes factor such that it can be used to evaluate informative hypotheses in general statistical models. In the fractional Bayes factor, a fraction parameter must be specified that controls the amount of information in the data used for specifying an implicit prior; the remaining fraction is used for testing the informative hypotheses. We discuss different choices of this parameter and present a scheme for setting it. Furthermore, a software package is described that computes the approximated adjusted fractional Bayes factor. Using this software package, psychological researchers can evaluate informative hypotheses by means of Bayes factors in a straightforward manner. Two empirical examples are used to illustrate the procedure.
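For reference, the fractional Bayes factor that this line of work builds on has the standard form below (generic notation following O'Hagan's original proposal; this is not the paper's exact approximate adjusted variant):

```latex
% Fractional Bayes factor, generic form: a fraction b of the likelihood
% implicitly specifies the prior, the remaining 1 - b is used for testing.
BF^{b}_{12} \;=\; \frac{q_1(b)}{q_2(b)},
\qquad
q_i(b) \;=\;
\frac{\int \pi_i(\theta_i)\, L_i(\theta_i \mid y)\, d\theta_i}
     {\int \pi_i(\theta_i)\, L_i(\theta_i \mid y)^{\,b}\, d\theta_i},
\qquad 0 < b < 1 .
```

The larger b is, the more of the data's information is spent on the implicit prior rather than on testing; the paper's scheme for setting the fraction parameter addresses exactly this trade-off.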

2.
The Savage–Dickey density ratio is a simple method for computing the Bayes factor for an equality constraint on one or more parameters of a statistical model. In regression analysis, this includes the important scenario of testing whether one or more of the covariates have an effect on the dependent variable. However, the Savage–Dickey ratio only provides the correct Bayes factor if the prior distribution of the nuisance parameters under the nested model is identical to the conditional prior under the full model given the equality constraint. This condition is violated for multiple regression models with a Jeffreys–Zellner–Siow prior, which is often used as a default prior in psychology. Besides linear regression models, the limitation of the Savage–Dickey ratio is especially relevant when analytical solutions for the Bayes factor are not available. This is the case for generalized linear models, non-linear models, or cognitive process models with regression extensions. As a remedy, the correct Bayes factor can be computed using a generalized version of the Savage–Dickey density ratio.
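As a reminder of the identity at issue (standard textbook form, not taken from this particular paper): for H0: δ = δ0 nested in H1 with nuisance parameters ψ,

```latex
% Savage-Dickey density ratio; valid only if p(psi | delta = delta_0, H_1) = p(psi | H_0)
BF_{01} \;=\; \frac{p(\delta = \delta_0 \mid y, H_1)}{p(\delta = \delta_0 \mid H_1)} .

% One standard generalization (cf. Verdinelli & Wasserman, 1995) adds a
% correction factor when that prior condition is violated:
BF_{01} \;=\; \frac{p(\delta = \delta_0 \mid y, H_1)}{p(\delta = \delta_0 \mid H_1)}
\;\times\;
\mathbb{E}_{\psi \mid y,\, \delta_0,\, H_1}\!\left[
  \frac{p(\psi \mid H_0)}{p(\psi \mid \delta = \delta_0, H_1)}
\right].
```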

3.
Analyses are mostly executed at the population level, whereas in many applications the interest is in the individual rather than the population level. In this paper, multiple N = 1 experiments are considered, where participants perform multiple trials with a dichotomous outcome in various conditions. Expectations with respect to the performance of participants can be translated into so-called informative hypotheses. These hypotheses can be evaluated for each participant separately using Bayes factors. A Bayes factor expresses the relative evidence for two hypotheses based on the data of one individual. This paper proposes to “average” these individual Bayes factors in the gP-BF, the average relative evidence. The gP-BF can be used to determine whether one hypothesis is preferred over another for all individuals under investigation. This measure provides insight into whether the relative preference of a hypothesis from a pre-defined set is homogeneous over individuals. Two additional measures are proposed to support the interpretation of the gP-BF: the evidence rate (ER), the proportion of individual Bayes factors that support the same hypothesis as the gP-BF, and the stability rate (SR), the proportion of individual Bayes factors that express stronger support than the gP-BF. These three statistics can be used to determine the relative support in the data for the informative hypotheses entertained. Software is available that can be used to execute the approach proposed in this paper and to determine the sensitivity of the outcomes with respect to the number of participants and within-condition replications.
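A minimal sketch of how such summaries could be computed from a vector of individual Bayes factors. It assumes the gP-BF is the geometric mean of the individual Bayes factors (consistent with "average relative evidence") and implements ER and SR exactly as verbally defined above; the function and variable names are illustrative, not the authors' software.

```python
import numpy as np

def gp_bf_summary(bf):
    """Summarise individual Bayes factors BF_i (H1 vs H2) over participants.

    Assumption: the gP-BF is the geometric mean of the individual Bayes factors.
    ER = proportion of individual BFs preferring the same hypothesis as the gP-BF;
    SR = proportion of individual BFs expressing stronger support than the gP-BF.
    """
    bf = np.asarray(bf, dtype=float)
    gp_bf = np.exp(np.mean(np.log(bf)))     # geometric mean (assumption)
    if gp_bf >= 1:                          # gP-BF prefers H1
        er = np.mean(bf > 1)
        sr = np.mean(bf > gp_bf)
    else:                                   # gP-BF prefers H2
        er = np.mean(bf < 1)
        sr = np.mean(bf < gp_bf)
    return gp_bf, er, sr

# Example: Bayes factors of five participants for H1 versus H2
print(gp_bf_summary([3.2, 5.1, 0.8, 4.4, 2.0]))
```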

4.
Model selection is a central issue in mathematical psychology. One useful criterion for model selection is generalizability; that is, the chosen model should yield the best predictions for future data. Some researchers in psychology have proposed that the Bayes factor can be used for assessing model generalizability. An alternative method, known as the generalization criterion, has also been proposed for the same purpose. We argue that these two methods address different levels of model generalizability (local and global), and will often produce divergent conclusions. We illustrate this divergence by applying the Bayes factor and the generalization criterion to a comparison of retention functions. The application of alternative model selection criteria will also be demonstrated within the framework of model generalizability.

5.
A mixture model for repeated measures based on nonlinear functions with random effects is reviewed. The model can include individual schedules of measurement, data missing at random, and nonlinear functions of the random effects, of covariates, and of residuals. Individual group membership probabilities and individual random effects are obtained as empirical Bayes predictions. Although this is a complicated model that combines a mixture of populations, nonlinear regression, and hierarchical models, it is straightforward to estimate by maximum likelihood using SAS PROC NLMIXED. Many different models can be studied with this procedure. The model is more general than those that can be estimated with most special-purpose computer programs currently available because the response function is essentially any form of nonlinear regression. Examples and sample code are included to illustrate the method.
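In generic notation (a sketch of this model class, not the reviewed paper's exact parameterization), the marginal likelihood being maximized for a mixture of nonlinear random-effects models is:

```latex
% Finite mixture of K nonlinear mixed-effects models; g_k is any nonlinear
% response function of covariates x_ij, fixed effects beta_k, and random
% effects b_i. Individual measurement schedules enter through n_i and x_ij.
f(\mathbf{y}_i) \;=\; \sum_{k=1}^{K} \pi_k
\int \prod_{j=1}^{n_i}
  f\!\big(y_{ij} \mid g_k(\mathbf{x}_{ij}, \boldsymbol{\beta}_k, \mathbf{b}_i),\, \sigma_k^2\big)\,
  \phi(\mathbf{b}_i \mid \mathbf{0}, \boldsymbol{\Psi}_k)\, d\mathbf{b}_i ,

% with empirical Bayes posterior class-membership probabilities
\hat{P}(c_i = k \mid \mathbf{y}_i) \;\propto\; \hat{\pi}_k\, \hat{f}_k(\mathbf{y}_i).
```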

6.
Linear dynamical system theory is a broad theoretical framework that has been applied in various research areas such as engineering, econometrics and, recently, psychology. It quantifies the relations between observed inputs and outputs that are connected through a set of latent state variables. State space models are used to investigate the dynamical properties of these latent quantities. These models are especially of interest in the study of emotion dynamics, with the system representing the evolving emotion components of an individual. However, for simultaneous modeling of individual and population differences, a hierarchical extension of the basic state space model is necessary. Therefore, we introduce a Bayesian hierarchical model with random effects for the system parameters. Further, we apply our model to data that were collected using the Oregon adolescent interaction task: 66 normal and 67 depressed adolescents engaged in a conflict-oriented interaction with their parents, and second-to-second physiological and behavioral measures were obtained. System parameters in normal and depressed adolescents were compared, which led to interesting discussions in the light of findings in recent literature on the links between cardiovascular processes, emotion dynamics and depression. We illustrate that our approach is flexible and general: the model can be applied to any time series for multiple systems (where a system can represent any entity) and, moreover, one is free to focus on various components of this versatile model.
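The basic linear Gaussian state-space model referred to here has the familiar textbook form sketched below; the hierarchical extension then places population distributions on the person-specific system parameters (the exact random-effects structure used in the paper is not reproduced here).

```latex
% State equation and observation equation for one individual at time t
\mathbf{x}_t = A\,\mathbf{x}_{t-1} + B\,\mathbf{u}_t + \mathbf{w}_t, \qquad \mathbf{w}_t \sim N(\mathbf{0}, Q),
\mathbf{y}_t = C\,\mathbf{x}_t + D\,\mathbf{u}_t + \mathbf{v}_t, \qquad\;\, \mathbf{v}_t \sim N(\mathbf{0}, R).

% Hierarchical (random-effects) extension, sketched: person-specific system
% parameters drawn from a population distribution, e.g.
\operatorname{vec}(A_i) \;\sim\; N(\boldsymbol{\mu}_A, \Sigma_A).
```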

7.
8.
The software package Bain can be used for the evaluation of informative hypotheses with respect to the parameters of a wide range of statistical models. For pairs of hypotheses the support in the data is quantified using the approximate adjusted fractional Bayes factor (BF). Currently, the data have to come from one population or have to consist of samples of equal size obtained from multiple populations. If samples of unequal size are obtained from multiple populations, the BF can be shown to be inconsistent. This paper examines how the approach implemented in Bain can be generalized such that multiple-population data can properly be processed. The resulting multiple-population approximate adjusted fractional Bayes factor is implemented in the R package Bain.

9.
In the psychological literature, there are two seemingly different approaches to inference: that from estimation of posterior intervals and that from Bayes factors. We provide an overview of each method and show that a salient difference is the choice of models. The two approaches as commonly practiced can be unified with a certain model specification, now popular in the statistics literature, called spike-and-slab priors. A spike-and-slab prior is a mixture of a null model, the spike, with an effect model, the slab. The estimate of the effect size here is a function of the Bayes factor, showing that estimation and model comparison can be unified. The salient difference is that common Bayes factor approaches provide for privileged consideration of theoretically useful parameter values, such as the value corresponding to the null hypothesis, while estimation approaches do not. Both approaches, either privileging the null or not, are useful depending on the goals of the analyst.
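A compact statement of the unification described above, in standard spike-and-slab notation (not the paper's exact prior settings): the effect parameter receives a mixture prior, and its posterior inclusion odds are the prior odds rescaled by the Bayes factor, so the model-averaged effect estimate is indeed a function of BF01.

```latex
% Spike-and-slab prior on an effect delta; delta_{0} denotes a point mass at zero
\delta \;\sim\; \pi_0\, \delta_{\{0\}} \;+\; (1 - \pi_0)\, N(0, \sigma_\delta^2).

% Posterior odds of the spike (null) versus the slab (effect)
\frac{P(\delta = 0 \mid y)}{P(\delta \neq 0 \mid y)}
\;=\; BF_{01} \times \frac{\pi_0}{1 - \pi_0},

% so the posterior for delta mixes a point mass at zero and the slab posterior,
% with mixing weights driven by BF_{01}.
```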

10.
One of the most important methodological problems in psychological research is assessing the reasonableness of null models, which typically constrain a parameter to a specific value such as zero. The Bayes factor has recently been advocated in the statistical and psychological literature as a principled means of measuring the evidence in data for various models, including those where parameters are set to specific values. Yet, it is rarely adopted in substantive research, perhaps because of the difficulties in computation. Fortunately, for this problem, the Savage–Dickey density ratio (Dickey & Lientz, 1970) provides a conceptually simple approach to computing the Bayes factor. Here, we review methods for computing the Savage–Dickey density ratio, and highlight an improved method, originally suggested by Gelfand and Smith (1990) and advocated by Chib (1995), that outperforms those currently discussed in the psychological literature. The improved method is based on conditional quantities, which may be integrated by Markov chain Monte Carlo sampling to estimate Bayes factors. These conditional quantities efficiently utilize all the information in the MCMC chains, leading to accurate estimation of Bayes factors. We demonstrate the method by computing Bayes factors in one-sample and one-way designs, and show how it may be implemented in WinBUGS.
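The sketch below illustrates the idea of averaging conditional densities over MCMC draws for a one-sample design. It is a minimal Python illustration under assumed priors (mu ~ N(0, tau2), sigma² ~ Inverse-Gamma(a, b), chosen independently so the Savage–Dickey condition holds), not the paper's WinBUGS implementation; all names and defaults are illustrative.

```python
import numpy as np
from scipy import stats

def sd_bayes_factor(y, tau2=1.0, a=1.0, b=1.0, n_iter=20000, burn=2000, seed=1):
    """Rao-Blackwellised Savage-Dickey estimate of BF_01 for H0: mu = 0.

    Model (assumption for this sketch): y_i ~ N(mu, sigma^2), mu ~ N(0, tau2),
    sigma^2 ~ Inv-Gamma(a, b), sampled with a simple Gibbs sampler.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n = len(y)
    sigma2 = np.var(y)
    cond_dens = []                                   # p(mu = 0 | sigma2_t, y) per draw
    for t in range(n_iter):
        v = 1.0 / (n / sigma2 + 1.0 / tau2)          # conditional posterior of mu
        m = v * y.sum() / sigma2
        mu = rng.normal(m, np.sqrt(v))
        sigma2 = stats.invgamma.rvs(a + n / 2,
                                    scale=b + 0.5 * np.sum((y - mu) ** 2),
                                    random_state=rng)
        if t >= burn:
            cond_dens.append(stats.norm.pdf(0.0, m, np.sqrt(v)))
    post_at_zero = np.mean(cond_dens)                # averaged conditional density
    prior_at_zero = stats.norm.pdf(0.0, 0.0, np.sqrt(tau2))
    return post_at_zero / prior_at_zero              # Savage-Dickey ratio

print(sd_bayes_factor(np.random.default_rng(0).normal(0.3, 1.0, 30)))
```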

11.
Intensive longitudinal studies are becoming progressively more prevalent across many social science areas, and especially in psychology. New technologies such as smartphones, fitness trackers, and the Internet of Things make it much easier than in the past to collect data for intensive longitudinal studies, providing an opportunity to look deep into the underlying characteristics of individuals at a high temporal resolution. In this paper we introduce a new modelling framework for latent curve analysis that is more suitable for the analysis of intensive longitudinal data than existing latent curve models. Specifically, through the modelling of an individual-specific continuous-time latent process, some unique features of intensive longitudinal data are better captured, including intensive measurements in time and unequally spaced time points of observations. Technically, the continuous-time latent process is modelled by a Gaussian process model. This model can be regarded as a semi-parametric extension of the classical latent curve models and falls under the framework of structural equation modelling. Procedures for parameter estimation and statistical inference are provided under an empirical Bayes framework and evaluated by simulation studies. We illustrate the use of the proposed model through the analysis of an ecological momentary assessment data set.
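A generic sketch of this kind of model (not necessarily the authors' exact specification): each individual's latent curve is a draw from a Gaussian process evaluated at that person's own, possibly unequally spaced, measurement times.

```latex
% Observation j of person i at time t_ij
y_{ij} \;=\; \mu(t_{ij}) + \eta_i(t_{ij}) + \varepsilon_{ij},
\qquad \varepsilon_{ij} \sim N(0, \sigma^2),

% Individual-specific continuous-time latent process
\eta_i(\cdot) \;\sim\; \mathcal{GP}\big(0,\, k(\cdot,\cdot)\big),
\qquad
k(t, t') \;=\; \tau^2 \exp\!\left\{ -\frac{(t - t')^2}{2\ell^2} \right\},

% e.g. a squared-exponential kernel (illustrative choice); any schedule
% t_{i1} < ... < t_{i n_i} yields a finite-dimensional multivariate normal
% for (eta_i(t_{i1}), ..., eta_i(t_{i n_i})).
```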

12.
One basic and important problem in two-level structural equation modeling is to find a good model for the observed sample data. This article demonstrates the use of the well-known Bayes factor in the Bayesian literature for hypothesis testing and model comparison in general two-level structural equation models. It is shown that the proposed methodology is flexible, and can be applied to situations with a wide variety of nonnested models. Moreover, some problems encountered in using existing methods for goodness-of-fit assessment of the proposed model can be alleviated. An illustrative example with some real data from an AIDS care study is presented.

13.
Maximum likelihood estimation of the linear factor model for continuous items assumes normally distributed item scores. We consider deviations from normality by means of a skew-normally distributed factor model or a quadratic factor model. We show that the item distributions under a skew-normal factor are equivalent to those under a quadratic model up to third-order moments. The reverse only holds if the quadratic loadings are equal to each other and within certain bounds. We illustrate that observed data which follow any skew-normal factor model can be so well approximated with the quadratic factor model that the models are empirically indistinguishable, and that the reverse does not hold in general. The choice between the two models to account for deviations from normality is illustrated by an empirical example from clinical psychology.
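In generic single-factor notation (a sketch only; the paper's results concern the multivariate item distributions up to third-order moments), the two competing specifications are:

```latex
% Skew-normal factor model: linear measurement, skew-normal factor
X_j = \nu_j + \lambda_j\, \xi + \varepsilon_j, \qquad \xi \sim \mathrm{SN}(0, 1, \alpha),

% Quadratic factor model: normal factor, quadratic measurement
X_j = \nu_j + \lambda_{1j}\, \xi + \lambda_{2j}\, \xi^2 + \varepsilon_j, \qquad \xi \sim N(0, 1).
```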

14.
Partridge and Lerner (2007), in a secondary analysis of the New York Longitudinal Study, employed a chronometric polynomial growth curve model to argue that the developmental course of difficult temperament follows a non-linear trajectory over the first 5 years of life. The free curve slope intercept (FCSI) growth curve model of Meredith and Tisak (1990) is presented as a preferable conceptual alternative because it contains a number of currently popular statistical models, including repeated measures multivariate analysis of variance, factor mean, linear growth, linear factor analysis, and hierarchical linear models as special cases. As such, researchers can compare the fit of each of these models relative to the FCSI model, and, at times, to each other. The present paper conducts a re-analysis of the data, and establishes that the fit of the FCSI model is arguably better than other statistical alternatives. The FCSI model is also used as the basis for identifying subgroups of individuals with their qualitatively distinct growth patterns within a growth mixture modeling framework.

15.
We introduce a Fourier transformation technique that enables one to derive closed-form expressions of performance measures (e.g., hit and false alarm rates) of simulation-based models of recognition memory. Application of the technique is demonstrated using the bind cue decide model of episodic memory (BCDMEM; Dennis, S., & Humphreys, M.S. (2001). A context noise model of episodic word recognition. Psychological Review, 108(2), 452-478). In addition to reducing the time required to test the model, which for models like BCDMEM can be excessive, asymptotic expressions of the measures reveal heretofore unknown properties of the model, such as model predictions being dependent on vector length.

16.
There is much empirical evidence that randomized response methods improve the cooperation of the respondents when asking sensitive questions. The traditional methods for analysing randomized response data are restricted to univariate data and only allow inferences at the group level due to the randomized response sampling design. Here, a novel beta-binomial model is proposed for analysing multivariate individual count data observed via a randomized response sampling design. This new model allows for the estimation of individual response probabilities (response rates) for multivariate randomized response data utilizing an empirical Bayes approach. A common beta prior specifies that individuals in a group are tied together and the beta prior parameters are allowed to be cluster-dependent. A Bayes factor is proposed to test for group differences in response rates. An analysis of a cheating study, where 10 items measure cheating or academic dishonesty, is used to illustrate application of the proposed model.
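A sketch of the model structure the abstract describes, in generic randomized-response notation; the constants c and d below depend on the randomization device and are assumptions of this illustration, not values from the paper.

```latex
% Individual i in group g answers n_i sensitive items under a randomized-response design
y_i \mid \pi_i \;\sim\; \mathrm{Binomial}\big(n_i,\; c + d\,\pi_i\big),
% c + d * pi_i is the probability of an observed "yes" induced by the device,
% pi_i the individual's true (sensitive) response rate.

\pi_i \;\sim\; \mathrm{Beta}(\alpha_g, \beta_g),
% group/cluster-dependent beta prior; empirical Bayes yields individual estimates
% of pi_i, and a Bayes factor can compare (alpha_g, beta_g) across groups.
```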

17.
A multitrait-multimethod model with minimal assumptions
Michael Eid, Psychometrika, 2000, 65(2): 241–261
A new model of confirmatory factor analysis (CFA) for multitrait-multimethod (MTMM) data sets is presented. It is shown that this model can be defined by only three assumptions in the framework of classical psychometric test theory (CTT). All other properties of the model, particularly the uncorrelatedness of the trait factors with the method factors, are logical consequences of the definition of the model. In the model proposed there are as many trait factors as different traits considered, but the number of method factors is one fewer than the number of methods included in an MTMM study. The covariance structure implied by this model is derived, and it is shown that this model is identified even under conditions under which other CFA-MTMM models are not. The model is illustrated by two empirical applications. Furthermore, its advantages and limitations are discussed with respect to previously developed CFA models for MTMM data.
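A sketch of the resulting measurement structure in common notation (a paraphrase of the abstract, not the paper's exact equations): one method serves as a reference and receives no method factor, which is why there are K − 1 method factors for K methods.

```latex
% Indicator of trait j measured by the reference method (k = 1)
Y_{j1} = \lambda_{j1}\, T_j + E_{j1},

% Indicator of trait j measured by a non-reference method k = 2, ..., K
Y_{jk} = \lambda_{jk}\, T_j + \gamma_{jk}\, M_k + E_{jk},

% with trait factors T_j and method factors M_k uncorrelated as a consequence
% of the model definition (per the abstract).
```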

18.
19.
Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models which assume different parameters to cause observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute force integration, we exploit general purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA’s inferential properties, in simulation studies.
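A bare-bones sketch of the brute-force Monte-Carlo integration being referred to, stabilized with log-sum-exp. `sample_prior` and `log_likelihood` are hypothetical placeholders that a real application would replace with an LBA prior sampler and density from an existing implementation; the GPU parallelization described in the paper is not shown.

```python
import numpy as np
from scipy.special import logsumexp

def log_marginal_likelihood(data, sample_prior, log_likelihood,
                            n_draws=50_000, seed=0):
    """Brute-force Monte Carlo estimate of log p(data | model).

    Implements p(y | M) ~= (1/N) * sum_n p(y | theta_n), theta_n ~ prior,
    on the log scale. `sample_prior(rng)` returns one prior draw and
    `log_likelihood(theta, data)` returns its log density (placeholders).
    """
    rng = np.random.default_rng(seed)
    log_liks = np.array([log_likelihood(sample_prior(rng), data)
                         for _ in range(n_draws)])
    return logsumexp(log_liks) - np.log(n_draws)   # log of the Monte Carlo average

# Bayes factor of model 1 over model 2 (hypothetical priors/likelihoods):
# log_bf_12 = (log_marginal_likelihood(data, prior1, loglik1)
#              - log_marginal_likelihood(data, prior2, loglik2))
```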

20.
In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.
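For context, a standard encompassing-prior representation of the Bayes factor for an inequality-constrained hypothesis (textbook form, shown here for a hypothetical ordering of three variances against an unconstrained hypothesis Hu, not the paper's exact derivation):

```latex
% f_1: posterior probability that the constraints hold under the unconstrained model
% c_1: prior probability that the constraints hold under the unconstrained model
BF_{1u} \;=\; \frac{P(\sigma_1^2 < \sigma_2^2 < \sigma_3^2 \mid y, H_u)}
                   {P(\sigma_1^2 < \sigma_2^2 < \sigma_3^2 \mid H_u)}
         \;=\; \frac{f_1}{c_1}.
```

In this representation, the "adjustment" of the fractional Bayes factor described above concerns making the prior probability c1 properly reflect the parsimony of the inequality-constrained hypothesis.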
