Similar literature (20 records)
1.
This tutorial explains the foundation of approximate Bayesian computation (ABC), an approach to Bayesian inference that does not require the specification of a likelihood function, and hence that can be used to estimate posterior distributions of parameters for simulation-based models. We discuss briefly the philosophy of Bayesian inference and then present several algorithms for ABC. We then apply these algorithms in a number of examples. For most of these examples, the posterior distributions are known, and so we can compare the estimated posteriors derived from ABC to the true posteriors and verify that the algorithms recover the true posteriors accurately. We also consider a popular simulation-based model of recognition memory (REM) for which the true posteriors are unknown. We conclude with a number of recommendations for applying ABC methods to solve real-world problems.
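A minimal ABC rejection sketch in Python (not taken from the tutorial itself; the toy model, prior, summary statistic, and tolerance below are illustrative assumptions):

```python
import numpy as np

def abc_rejection(observed, simulate, sample_prior, distance, eps, n_keep=1000):
    """Basic ABC rejection: keep prior draws whose simulated data fall
    within tolerance eps of the observed data under the given distance."""
    kept = []
    while len(kept) < n_keep:
        theta = sample_prior()
        if distance(simulate(theta), observed) <= eps:
            kept.append(theta)
    return np.array(kept)

# Toy example: infer the mean of a normal with known sd = 1,
# using the sample mean as the summary statistic.
rng = np.random.default_rng(0)
obs_summary = rng.normal(loc=0.5, scale=1.0, size=100).mean()
posterior_draws = abc_rejection(
    observed=obs_summary,
    simulate=lambda th: rng.normal(loc=th, scale=1.0, size=100).mean(),
    sample_prior=lambda: rng.normal(loc=0.0, scale=2.0),   # assumed N(0, 2) prior
    distance=lambda a, b: abs(a - b),
    eps=0.05,
)
print(posterior_draws.mean(), posterior_draws.std())
```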

2.
Multinomial processing tree models are widely used in many areas of psychology. A hierarchical extension of the model class is proposed, using a multivariate normal distribution of person-level parameters with the mean and covariance matrix to be estimated from the data. The hierarchical model allows one to take variability between persons into account and to assess parameter correlations. The model is estimated using Bayesian methods with a weakly informative hyperprior distribution and a Gibbs sampler based on two steps of data augmentation. Estimation, model checks, and hypothesis tests are discussed. The new method is illustrated using a real data set, and its performance is evaluated in a simulation study.
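A small data-generating sketch of this person-level structure; the probit mapping from the latent scale to MPT probabilities and the specific mean and covariance values are assumptions made here for illustration, not values from the paper:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_participants = 40

# Person-level MPT parameters on an unbounded (probit) scale:
# multivariate normal with mean mu and covariance Sigma to be estimated.
mu = np.array([0.8, 0.0])                 # e.g. detection d, guessing g (hypothetical)
Sigma = np.array([[0.5, 0.2],
                  [0.2, 0.3]])
latent = rng.multivariate_normal(mu, Sigma, size=n_participants)

# Map back to the unit interval to obtain each person's MPT probabilities;
# the correlation between d and g is carried by the off-diagonal of Sigma.
d, g = norm.cdf(latent).T
```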

3.
Hierarchical (or multilevel) statistical models have become increasingly popular in psychology in the last few years. In this article, we consider the application of multilevel modeling to the ex-Gaussian, a popular model of response times. We compare single-level and hierarchical methods for estimation of the parameters of ex-Gaussian distributions. In addition, for each approach, we compare maximum likelihood estimation with Bayesian estimation. A set of simulations and analyses of parameter recovery show that although all methods perform adequately well, hierarchical methods are better able to recover the parameters of the ex-Gaussian, by reducing variability in the recovered parameters. At each level, little overall difference was observed between the maximum likelihood and Bayesian methods.
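A single-level sketch of the ex-Gaussian (the convolution of a normal and an exponential component) with a maximum likelihood fit via SciPy; the parameter values are invented for illustration, and the hierarchical and Bayesian variants compared in the article are not shown:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, sigma, tau = 0.40, 0.05, 0.10      # seconds; illustrative RT-scale values
rts = rng.normal(mu, sigma, size=500) + rng.exponential(tau, size=500)

# SciPy parameterizes the ex-Gaussian as exponnorm(K, loc, scale)
# with mu = loc, sigma = scale, tau = K * scale.
K, loc, scale = stats.exponnorm.fit(rts)
print({"mu": loc, "sigma": scale, "tau": K * scale})
```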

4.
Traditionally, multinomial processing tree (MPT) models are applied to groups of homogeneous participants, where all participants within a group are assumed to have identical MPT model parameter values. This assumption is unreasonable when MPT models are used for clinical assessment, and it often may be suspect for applications to ordinary psychological experiments. One method for dealing with parameter variability is to incorporate random effects assumptions into a model. This is achieved by assuming that participants’ parameters are drawn independently from some specified multivariate hyperdistribution. In this paper we explore the assumption that the hyperdistribution consists of independent beta distributions, one for each MPT model parameter. These beta-MPT models are ‘hierarchical models’, and their statistical inference is different from the usual approaches based on data aggregated over participants. The paper provides both classical (frequentist) and hierarchical Bayesian approaches to statistical inference for beta-MPT models. In simple cases the likelihood function can be obtained analytically; however, for more complex cases, Markov Chain Monte Carlo algorithms are constructed to assist both approaches to inference. Examples based on clinical assessment studies are provided to demonstrate the advantages of hierarchical MPT models over aggregate analysis in the presence of individual differences.
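A data-generating sketch of the beta-MPT idea for a one-high-threshold model; the choice of MPT model and the Beta shape parameters are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n_participants, n_old, n_new = 30, 40, 40

# Hyperdistribution: independent Beta distributions, one per MPT parameter.
d = rng.beta(8, 4, size=n_participants)    # detection probability per person
g = rng.beta(3, 3, size=n_participants)    # guessing probability per person

# One-high-threshold MPT: P(hit) = d + (1 - d) * g,  P(false alarm) = g.
hits = rng.binomial(n_old, d + (1 - d) * g)
false_alarms = rng.binomial(n_new, g)
```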

5.
Structural equation models are very popular for studying relationships among observed and latent variables. However, the existing theory and computer packages are developed mainly under the assumption of normality, and hence cannot be satisfactorily applied to non‐normal and ordered categorical data that are common in behavioural, social and psychological research. In this paper, we develop a Bayesian approach to the analysis of structural equation models in which the manifest variables are ordered categorical and/or from an exponential family. In this framework, models with a mixture of binomial, ordered categorical and normal variables can be analysed. Bayesian estimates of the unknown parameters are obtained by a computational procedure that combines the Gibbs sampler and the Metropolis–Hastings algorithm. Some goodness‐of‐fit statistics are proposed to evaluate the fit of the posited model. The methodology is illustrated by results obtained from a simulation study and analysis of a real data set about non‐adherence of hypertension patients in a medical treatment scheme.

6.
Current modeling of response times on test items has been strongly influenced by the paradigm of experimental reaction-time research in psychology. For instance, some of the models have a parameter structure that was chosen to represent a speed-accuracy tradeoff, while others equate speed directly with response time. Also, several response-time models seem to be unclear as to the level of parametrization they represent. A hierarchical framework for modeling speed and accuracy on test items is presented as an alternative to these models. The framework allows a “plug-and-play approach” with alternative choices of models for the response and response-time distributions as well as the distributions of their parameters. Bayesian treatment of the framework with Markov chain Monte Carlo (MCMC) computation facilitates the approach. Use of the framework is illustrated for the choice of a normal-ogive response model, a lognormal model for the response times, and multivariate normal models for their parameters with Gibbs sampling from the joint posterior distribution. This study received funding from the Law School Admission Council (LSAC). The opinions and conclusions contained in this paper are those of the author and do not necessarily reflect the policy and position of LSAC. The author is indebted to the American Institute of Certified Public Accountants for the data set in the empirical example and to Rinke H. Klein Entink for his computational assistance.
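A data-generating sketch of one such choice of component models (normal-ogive accuracy plus lognormal response times, with correlated person parameters at the second level); all numeric values and the specific correlation are assumptions for illustration, and the Gibbs estimation step is not shown:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n_persons, n_items = 200, 20

# Person parameters: ability (theta) and speed (tau), correlated at level 2.
theta, tau = rng.multivariate_normal(
    [0.0, 0.0], [[1.0, 0.3], [0.3, 0.25]], size=n_persons).T

# Item parameters for the response and response-time components.
a = rng.lognormal(0.0, 0.3, n_items)        # discrimination
b = rng.normal(0.0, 1.0, n_items)           # difficulty
alpha = rng.lognormal(0.5, 0.2, n_items)    # time discrimination (inverse residual sd)
beta = rng.normal(1.0, 0.3, n_items)        # time intensity

# Normal-ogive accuracy model and lognormal response-time model.
p_correct = norm.cdf(a * (theta[:, None] - b))
responses = rng.binomial(1, p_correct)
log_times = beta - tau[:, None] + rng.normal(0.0, 1.0 / alpha, size=(n_persons, n_items))
```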

7.
This paper introduces a new technique for estimating the parameters of models with continuous latent data. Using the Rasch model as an example, it is shown that existing Bayesian techniques for parameter estimation, such as the Gibbs sampler, are not always easy to implement. Then, a new sampling-based Bayesian technique, called the DA-T-Gibbs sampler, is introduced. The DA-T-Gibbs sampler relies on the particular latent data structure of latent response models to simplify the computations involved in parameter estimation. This research was supported by the Dutch National Research Council (NWO) (grant number 575-30-001).

8.
An item response theory (IRT) model is used as a measurement error model for the dependent variable of a multilevel model. The dependent variable is latent but can be measured indirectly by using tests or questionnaires. The advantage of using latent scores as dependent variables of a multilevel model is that it offers the possibility of modelling response variation and measurement error and separating the influence of item difficulty and ability level. The two‐parameter normal ogive model is used for the IRT model. It is shown that the stochastic EM algorithm can be used to obtain parameter estimates that are close to the maximum likelihood estimates. This algorithm is easily implemented. The estimation procedure is compared to an implementation of the Gibbs sampler in a Bayesian framework. Examples using real data are given.

9.
Nonlinear latent variable models are specified that include quadratic forms and interactions of latent regressor variables as special cases. To estimate the parameters, the models are put in a Bayesian framework with conjugate priors for the parameters. The posterior distributions of the parameters and the latent variables are estimated using Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis-Hastings algorithm. The proposed estimation methods are illustrated by two simulation studies and by the estimation of a non-linear model for the dependence of performance on task complexity and goal specificity using empirical data.

10.
Bayesian estimation and testing of structural equation models
The Gibbs sampler can be used to obtain samples of arbitrary size from the posterior distribution over the parameters of a structural equation model (SEM) given covariance data and a prior distribution over the parameters. Point estimates, standard deviations and interval estimates for the parameters can be computed from these samples. If the prior distribution over the parameters is uninformative, the posterior is proportional to the likelihood, and asymptotically the inferences based on the Gibbs sample are the same as those based on the maximum likelihood solution, for example, output from LISREL or EQS. In small samples, however, the likelihood surface is not Gaussian and in some cases contains local maxima. Nevertheless, the Gibbs sample comes from the correct posterior distribution over the parameters regardless of the sample size and the shape of the likelihood surface. With an informative prior distribution over the parameters, the posterior can be used to make inferences about the parameters of underidentified models, as we illustrate on a simple errors-in-variables model. We thank David Spiegelhalter for suggesting applying the Gibbs sampler to structural equation models to the first author at a 1994 workshop in Wiesbaden. We thank Ulf Böckenholt, Chris Meek, Marijtje van Duijn, Clark Glymour, Ivo Molenaar, Steve Klepper, Thomas Richardson, Teddy Seidenfeld, and Tom Snijders for helpful discussions, mathematical advice, and critiques of earlier drafts of this paper.

11.
The main purpose of this article is to develop a Bayesian approach for structural equation models with ignorable missing continuous and polytomous data. Joint Bayesian estimates of thresholds, structural parameters and latent factor scores are obtained simultaneously. The idea of data augmentation is used to solve the computational difficulties involved. In the posterior analysis, in addition to the real missing data, latent variables and latent continuous measurements underlying the polytomous data are treated as hypothetical missing data. An algorithm that embeds the Metropolis-Hastings algorithm within the Gibbs sampler is implemented to produce the Bayesian estimates. A goodness-of-fit statistic for testing the posited model is presented. It is shown that the proposed approach is not sensitive to prior distributions and can handle situations with a large number of missing patterns whose underlying sample sizes may be small. Computational efficiency of the proposed procedure is illustrated by simulation studies and a real example. The work described in this paper was fully supported by a grant from the Research Grants Council of the HKSAR (Project No. CUHK 4088/99H). The authors are greatly indebted to the Editor and anonymous reviewers for valuable comments in improving the paper; and also to D. E. Morisky and J.A. Stein for the use of their AIDS data set.

12.
The Reduced Reparameterized Unified Model (Reduced RUM) is a diagnostic classification model for educational assessment that has received considerable attention among psychometricians. However, the computational options for researchers and practitioners who wish to use the Reduced RUM in their work, but do not feel comfortable writing their own code, are still rather limited. One option is to use a commercial software package such as Latent GOLD or Mplus that offers an implementation of the expectation maximization (EM) algorithm for fitting (constrained) latent class models. But using a latent class analysis routine as a vehicle for fitting the Reduced RUM requires that it be re-expressed as a logit model, with constraints imposed on the parameters of the logistic function. This tutorial demonstrates how to implement marginal maximum likelihood estimation using the EM algorithm in Mplus for fitting the Reduced RUM.
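For reference, the Reduced RUM item response function itself can be written in a few lines; the numeric values below are hypothetical, and the constrained-logit re-expression used for the Mplus implementation is not reproduced here:

```python
import numpy as np

def reduced_rum_prob(alpha, q, pi_star, r_star):
    """P(X_j = 1 | alpha) = pi*_j * prod_k r*_jk ** (q_jk * (1 - alpha_k))."""
    alpha = np.asarray(alpha)
    return pi_star * np.prod(r_star ** (q * (1 - alpha)))

# Hypothetical item measuring attributes 1 and 3 of K = 3.
q = np.array([1, 0, 1])
pi_star = 0.9                                  # P(correct) for full masters
r_star = np.array([0.4, 1.0, 0.6])             # penalties for unmastered attributes
print(reduced_rum_prob([1, 0, 1], q, pi_star, r_star))   # 0.9
print(reduced_rum_prob([0, 0, 1], q, pi_star, r_star))   # 0.9 * 0.4 = 0.36
```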

13.
Two‐level structural equation models with mixed continuous and polytomous data and nonlinear structural equations at both the between‐groups and within‐groups levels are important but difficult to deal with. A Bayesian approach is developed for analysing this kind of model. A Markov chain Monte Carlo procedure based on the Gibbs sampler and the Metropolis-Hastings algorithm is proposed for producing joint Bayesian estimates of the thresholds, structural parameters and latent variables at both levels. Standard errors and highest posterior density intervals are also computed. A procedure for computing the Bayes factor, based on the key idea of path sampling, is established for model comparison.

14.
Recent advancements in Bayesian modeling have allowed for likelihood-free posterior estimation. Such estimation techniques are crucial to the understanding of simulation-based models, whose likelihood functions may be difficult or even impossible to derive. However, current approaches are limited by their dependence on sufficient statistics and/or tolerance thresholds. In this article, we provide a new approach that requires no summary statistics, error terms, or thresholds and is generalizable to all models in psychology that can be simulated. We use our algorithm to fit a variety of cognitive models with known likelihood functions to ensure the accuracy of our approach. We then apply our method to two real-world examples to illustrate the types of complex problems our method solves. In the first example, we fit an error-correcting criterion model of signal detection, whose criterion dynamically adjusts after every trial. We then fit two models of choice response time to experimental data: the linear ballistic accumulator model, which has a known likelihood, and the leaky competing accumulator model, whose likelihood is intractable. The estimated posterior distributions of the two models allow for direct parameter interpretation and model comparison by means of conventional Bayesian statistics—a feat that was not previously possible.
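A compressed sketch of the kernel-density flavor of likelihood-free estimation (one Metropolis-Hastings step for a toy shifted-Wald response-time model); the toy model, proposal scale, and flat prior are assumptions made here, not the authors' exact algorithm:

```python
import numpy as np
from scipy.stats import gaussian_kde

def approx_log_like(data, theta, simulate, n_sim=5000):
    """Approximate the likelihood with a kernel density estimate fitted
    to a large set of data simulated from the model at theta."""
    kde = gaussian_kde(simulate(theta, n_sim))
    dens = np.maximum(kde(data), 1e-300)   # guard against log(0) underflow
    return np.sum(np.log(dens))

rng = np.random.default_rng(5)
simulate = lambda th, n: th[0] + rng.wald(th[1], 1.0, size=n)   # shift + Wald RTs
observed = simulate((0.20, 0.50), 300)

# One Metropolis-Hastings step using the approximated likelihood.
theta = np.array([0.25, 0.45])
curr = approx_log_like(observed, theta, simulate)
prop_theta = theta + rng.normal(0.0, 0.01, size=2)
prop = approx_log_like(observed, prop_theta, simulate)
if np.log(rng.uniform()) < prop - curr:          # flat prior assumed
    theta, curr = prop_theta, prop
```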

15.
Lee MD, Vanpaemel W. Cognitive Science, 2008, 32(8): 1403-1424
This article demonstrates the potential of using hierarchical Bayesian methods to relate models and data in the cognitive sciences. This is done using a worked example that considers an existing model of category representation, the Varying Abstraction Model (VAM), which attempts to infer the representations people use from their behavior in category learning tasks. The VAM allows for a wide variety of category representations to be inferred, but this article shows how a hierarchical Bayesian analysis can provide a unifying explanation of the representational possibilities using 2 parameters. One parameter controls the emphasis on abstraction in category representations, and the other controls the emphasis on similarity. Using 30 previously published data sets, this work shows how inferences about these parameters, and about the category representations they generate, can be used to evaluate data in terms of the ongoing exemplar versus prototype and similarity versus rules debates in the literature. Using this concrete example, this article emphasizes the advantages of hierarchical Bayesian models in converting model selection problems to parameter estimation problems, and providing one way of specifying theoretically based priors for competing models.

16.
Present optimization techniques in latent class analysis apply the expectation maximization algorithm or the Newton-Raphson algorithm for optimizing the parameter values of a prespecified model. These techniques can be used to find maximum likelihood estimates of the parameters, given the specified structure of the model, which is defined by the number of classes and, possibly, fixation and equality constraints. The model structure is usually chosen on theoretical grounds. A large variety of structurally different latent class models can be compared using goodness-of-fit indices of the chi-square family, Akaike’s information criterion, the Bayesian information criterion, and various other statistics. However, finding the optimal structure for a given goodness-of-fit index often requires a lengthy search in which all kinds of model structures are tested. Moreover, solutions may depend on the choice of initial values for the parameters. This article presents a new method by which one can simultaneously infer the model structure from the data and optimize the parameter values. The method consists of a genetic algorithm in which any goodness-of-fit index can be used as a fitness criterion. In a number of test cases in which data sets from the literature were used, it is shown that this method provides models that fit equally well as or better than the models suggested in the original articles.

17.
18.
Sik-Yum Lee. Psychometrika, 2006, 71(3): 541-564
A Bayesian approach is developed for analyzing nonlinear structural equation models with nonignorable missing data. The nonignorable missingness mechanism is specified by a logistic regression model. A hybrid algorithm that combines the Gibbs sampler and the Metropolis–Hastings algorithm is used to produce the joint Bayesian estimates of structural parameters, latent variables, parameters in the nonignorable missing model, as well as their standard error estimates. A goodness-of-fit statistic for assessing the plausibility of the posited nonlinear structural equation model is introduced, and a procedure for computing the Bayes factor for model comparison is developed via path sampling. Results obtained with respect to different missing data models and different prior inputs are compared via simulation studies. In particular, it is shown that in the presence of nonignorable missing data, results obtained by the proposed method with a nonignorable missing data model are significantly better than those that are obtained under the missing at random assumption. A real example is presented to illustrate the newly developed Bayesian methodologies. This research is fully supported by a grant (CUHK 4243/03H) from the Research Grant Council of the Hong Kong Special Administrative Region. The authors are thankful to the editor and reviewers for valuable comments for improving the paper, and also to ICPSR and the relevant funding agency for allowing the use of the data. Requests for reprints should be sent to Professor S.Y. Lee, Department of Statistics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong.

19.
In this paper we argue that model selection, as commonly practised in psychometrics, violates certain principles of coherence. On the other hand, we show that Bayesian nonparametrics provides a coherent basis for model selection, through the use of a ‘nonparametric’ prior distribution that has a large support on the space of sampling distributions. We illustrate model selection under the Bayesian nonparametric approach, through the analysis of real questionnaire data. Also, we present ways to use the Bayesian nonparametric framework to define very flexible psychometric models, through the specification of a nonparametric prior distribution that supports all distribution functions for the inverse link, including the standard logistic distribution functions. The Bayesian nonparametric approach provides a coherent method for model selection that can be applied to any statistical model, including psychometric models. Moreover, under a ‘non‐informative’ choice of nonparametric prior, the Bayesian nonparametric approach is easy to apply, and selects the model that maximizes the log likelihood. Thus, under this choice of prior, the approach can be extended to non‐Bayesian settings where the parameters of the competing models are estimated by likelihood maximization, and it can be used with any psychometric software package that routinely reports the model log likelihood.

20.
A multidimensional testlet-effect Rasch model
First, this paper clarifies the essential nature of a "testlet": a set of items that share a common stimulus. On this basis, testlet effects are classified into within-item unidimensional testlet effects and within-item multidimensional testlet effects. Second, dichotomous and polytomous multidimensional testlet-effect Rasch models are developed within the Rasch framework to better handle within-item multidimensional testlet effects. Finally, simulation results show that the new models are valid and reasonable. Comparisons with the Rasch testlet model and the partial credit model indicate that (1) when within-item multidimensional testlet effects are present, separating only the obvious bundled testlet effects while ignoring other latent testlet effects still yields biased parameter estimates and may even overestimate test reliability; and (2) the new models are more general: even when the response data contain no testlet effects, or only within-item unidimensional testlet effects, analyzing the test with the new models still yields satisfactory parameter estimates.
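A sketch of the within-item multidimensional testlet idea on the logit scale; the additive parameterization and all numbers here are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

def rasch_testlet_prob(theta, b, gamma, loadings):
    """P(X = 1) with additive testlet effects:
    logit P = theta - b + sum_d loadings_d * gamma_d."""
    eta = theta - b + np.dot(loadings, gamma)
    return 1.0 / (1.0 + np.exp(-eta))

# Hypothetical item that shares stimuli with two testlets at once
# (a within-item multidimensional testlet effect).
theta, b = 0.5, 0.2                      # person ability, item difficulty
gamma = np.array([0.3, -0.1])            # person-specific testlet effects
loadings = np.array([1, 1])              # the item belongs to both testlets
print(rasch_testlet_prob(theta, b, gamma, loadings))
```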
