Similar Articles
20 similar articles found.
1.
In this paper we argue that model selection, as commonly practised in psychometrics, violates certain principles of coherence. On the other hand, we show that Bayesian nonparametrics provides a coherent basis for model selection, through the use of a ‘nonparametric’ prior distribution that has a large support on the space of sampling distributions. We illustrate model selection under the Bayesian nonparametric approach, through the analysis of real questionnaire data. Also, we present ways to use the Bayesian nonparametric framework to define very flexible psychometric models, through the specification of a nonparametric prior distribution that supports all distribution functions for the inverse link, including the standard logistic distribution functions. The Bayesian nonparametric approach provides a coherent method for model selection that can be applied to any statistical model, including psychometric models. Moreover, under a ‘non-informative’ choice of nonparametric prior, the Bayesian nonparametric approach is easy to apply, and selects the model that maximizes the log likelihood. Thus, under this choice of prior, the approach can be extended to non-Bayesian settings where the parameters of the competing models are estimated by likelihood maximization, and it can be used with any psychometric software package that routinely reports the model log likelihood.
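The closing observation lends itself to a concrete illustration. Below is a minimal Python sketch of selecting among models by maximized log likelihood, as the abstract notes the approach reduces to under the 'non-informative' prior; the candidate families and simulated data are arbitrary stand-ins, not anything from the paper.

```python
# Sketch: under the 'non-informative' nonparametric prior, model selection
# reduces to comparing maximized log likelihoods, which any package that
# reports log L supports. Candidates and data are illustrative stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=200)   # stand-in for item scores

candidates = {"normal": stats.norm, "logistic": stats.logistic, "t": stats.t}
loglik = {}
for name, dist in candidates.items():
    params = dist.fit(data)                        # maximum likelihood fit
    loglik[name] = dist.logpdf(data, *params).sum()

best = max(loglik, key=loglik.get)
print({k: round(v, 2) for k, v in loglik.items()}, "-> selected:", best)
```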

2.
Most past research on sequential sampling models of decision-making has assumed a time homogeneous process (i.e., parameters such as drift rates and boundaries are constant and do not change during the deliberation process). This has largely been due to the theoretical difficulty in testing and fitting more complex models. In recent years, the development of simulation-based modeling approaches matched with Bayesian fitting methodologies has opened the possibility of developing more complex models such as those with time-varying properties. In the present work, we discuss a piecewise variant of the well-studied diffusion decision model (termed pDDM) that allows evidence accumulation rates to change during the deliberation process. Given the complex, time-varying nature of this model, standard Bayesian parameter estimation methodologies cannot be used to fit the model. To overcome this, we apply a recently developed simulation-based, hierarchical Bayesian methodology called the probability density approximation (PDA) method. We provide an analysis of this methodology and present results of parameter recovery experiments to demonstrate the strengths and limitations of this approach. With those established, we fit pDDM to data from a perceptual experiment where information changes during the course of trials. This extensible modeling platform opens the possibility of applying sequential sampling models to a range of complex non-stationary decision tasks.
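A rough sketch of what a single piecewise-diffusion trial looks like, assuming one drift switch and illustrative parameter values (the paper's actual model specification and fitted values are not reproduced here):

```python
# Sketch of one pDDM trial: drift switches from v1 to v2 at t_switch,
# mirroring the time-varying evidence described above. All values are
# illustrative. The PDA method would approximate the likelihood by
# kernel-smoothing response-time distributions simulated this way.
import numpy as np

def simulate_pddm(v1, v2, t_switch, a=1.0, z=0.5, sigma=1.0, dt=0.001, rng=None):
    rng = rng or np.random.default_rng()
    x, t = z * a, 0.0                      # start point between boundaries 0 and a
    while 0.0 < x < a:
        v = v1 if t < t_switch else v2     # piecewise drift rate
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, int(x >= a)                  # (response time, 1 = upper boundary)

rng = np.random.default_rng(1)
trials = [simulate_pddm(0.8, -0.4, t_switch=0.3, rng=rng) for _ in range(1000)]
print(np.mean([rt for rt, _ in trials]), np.mean([resp for _, resp in trials]))
```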

3.
Vrieze, S. I. (2012). Psychological Methods, 17(2), 228-243.
This article reviews the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. Theoretical statistical results in regression are discussed, and more important issues are illustrated with novel simulations involving latent variable models including factor analysis, latent profile analysis, and factor mixture models. Asymptotically, the BIC is consistent, in that it will select the true model if, among other assumptions, the true model is among the candidate models considered. The AIC is not consistent under these circumstances. When the true model is not in the candidate model set, the AIC is efficient, in that it will asymptotically choose whichever model minimizes the mean squared error of prediction/estimation. The BIC is not efficient under these circumstances. Unlike the BIC, the AIC also has a minimax property, in that it can minimize the maximum possible risk in finite sample sizes. In sum, the AIC and BIC have quite different properties that require different assumptions, and applied researchers and methodologists alike will benefit from improved understanding of the asymptotic and finite-sample behavior of these criteria. The ultimate decision to use the AIC or BIC depends on many factors, including the loss function employed, the study's methodological design, the substantive research question, and the notion of a true model and its applicability to the study at hand.
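For reference, the two criteria have simple closed forms: AIC = 2k - 2 log L and BIC = k log n - 2 log L. A small sketch with hypothetical fits:

```python
# The two criteria contrasted above, computed from a model's maximized log
# likelihood (logL), parameter count k, and sample size n. Fits are hypothetical.
import numpy as np

def aic(logL, k):
    return 2 * k - 2 * logL

def bic(logL, k, n):
    return k * np.log(n) - 2 * logL

n = 300
print(aic(-512.3, 5), bic(-512.3, 5, n))   # a 5-parameter latent variable model
print(aic(-505.9, 8), bic(-505.9, 8, n))   # an 8-parameter rival; BIC penalizes more
```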

4.
5.
6.
This paper considers mixtures of structural equation models with an unknown number of components. A Bayesian model selection approach is developed based on the Bayes factor. A procedure for computing the Bayes factor is developed via path sampling, which has a number of nice features. The key idea is to construct a continuous path linking the competing models; then the Bayes factor can be estimated efficiently via grids in [0, 1] and simulated observations that are generated by the Gibbs sampler from the posterior distribution. Bayesian estimates of the structural parameters, latent variables, as well as other statistics can be produced as by-products. The properties and merits of the proposed procedure are discussed and illustrated by means of a simulation study and a real example.
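The path-sampling idea can be shown in miniature. The sketch below uses a Gaussian toy problem in which the path distribution can be sampled exactly, so no Gibbs sampler is needed and the estimate can be checked against a known answer; everything about the toy is an assumption for illustration, not the paper's procedure.

```python
# Toy path-sampling estimate of a log ratio of normalizing constants: the
# geometric path between two Gaussian kernels is itself Gaussian, so exact
# draws replace the Gibbs sampler. The true answer is log(sigma).
import numpy as np

rng = np.random.default_rng(2)
sigma = 2.0

def log_f0(th):                    # unnormalized model 0: N(0, 1) kernel
    return -0.5 * th**2

def log_f1(th):                    # unnormalized model 1: N(0, sigma^2) kernel
    return -0.5 * th**2 / sigma**2

ts = np.linspace(0.0, 1.0, 21)     # grid in [0, 1], as in the procedure above
means = []
for t in ts:
    prec = (1 - t) + t / sigma**2  # precision of the Gaussian path density q_t
    th = rng.normal(0.0, 1.0 / np.sqrt(prec), size=20000)
    means.append(np.mean(log_f1(th) - log_f0(th)))   # Monte Carlo E_t[U]

means = np.array(means)
log_bf = np.sum((means[1:] + means[:-1]) / 2 * np.diff(ts))  # trapezoidal rule
print(log_bf, "vs true value", np.log(sigma))
```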

7.
The development of cognitive models involves the creative scientific formalization of assumptions, based on theory, observation, and other relevant information. In the Bayesian approach to implementing, testing, and using cognitive models, assumptions can influence both the likelihood function of the model, usually corresponding to assumptions about psychological processes, and the prior distribution over model parameters, usually corresponding to assumptions about the psychological variables that influence those processes. The specification of the prior is unique to the Bayesian context, but often raises concerns that lead to the use of vague or non-informative priors in cognitive modeling. Sometimes the concerns stem from philosophical objections, but more often practical difficulties with how priors should be determined are the stumbling block. We survey several sources of information that can help to specify priors for cognitive models, discuss some of the methods by which this information can be formalized in a prior distribution, and identify a number of benefits of including informative priors in cognitive modeling. Our discussion is based on three illustrative cognitive models, involving memory retention, categorization, and decision making.
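As a small illustration of the general idea, prior information can be encoded in a conjugate distribution and updated by data; the retention setup and all numbers below are assumptions for illustration, not taken from the paper.

```python
# Illustrative only: prior information about a retention rate encoded in a
# conjugate Beta prior, then updated by hypothetical data.
from scipy import stats

prior = stats.beta(14, 6)                  # centered on 0.7, moderately informative
print(prior.mean(), prior.interval(0.95))

# After observing 37 of 50 items retained (Beta-Binomial conjugacy):
posterior = stats.beta(14 + 37, 6 + 13)
print(posterior.mean(), posterior.interval(0.95))
```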

8.
9.
This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA; this approach ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam’s window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
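A hedged sketch of the approximate (non-MCMC) variant: candidate propensity score equations are weighted by BIC-based posterior model probabilities, the approximation the BMA package also relies on. The covariates, candidate model set, and data below are hypothetical.

```python
# Sketch of BIC-approximated Bayesian model averaging over propensity score
# equations. Candidate models and data are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
X = rng.normal(size=(n, 3))
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
treat = rng.binomial(1, p_true)

models = {"x1": [0], "x1+x2": [0, 1], "x1+x2+x3": [0, 1, 2]}
fits = {name: sm.Logit(treat, sm.add_constant(X[:, cols])).fit(disp=0)
        for name, cols in models.items()}

bic = np.array([fits[name].bic for name in models])
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()                                    # posterior model probabilities

ps = sum(wi * fits[name].predict(sm.add_constant(X[:, models[name]]))
         for wi, name in zip(w, models))        # model-averaged propensity score
print(dict(zip(models, np.round(w, 3))), ps[:3])
```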

10.
The Multilevel Latent Class Model (MLCM) proposed by Vermunt (2003) has been shown to be an excellent framework for analyzing nested data with assumed discrete latent constructs. The nonparametric version of the MLCM assumes two levels of discrete latent components to describe the dependency observed in data. Model selection is an important step in any statistical modeling. The task of model selection for the MLCM amounts to deciding on the number of discrete latent components at both the higher and lower levels, and is more challenging than for standard Latent Class Models. In this article, simulation studies were conducted to systematically examine the effects of sample size, cluster/class distinctness, and the number of latent clusters and classes on the performance of various information criteria in recovering the true latent structure. Results of the simulation studies are summarized and presented. The final section presents remarks and recommendations about the simultaneous decision regarding the number of latent classes and clusters when applying MLCMs to analyze empirical data.
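The grid-and-criterion selection logic the simulations evaluate can be miniaturized; in the sketch below a one-level Gaussian mixture stands in for the two-level MLCM, so this is an analogy only, not the model itself.

```python
# Analogy only: fit a grid of component counts and compare information
# criteria, the selection logic studied above for the two-level MLCM.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
data = np.vstack([rng.normal(-2, 1, (150, 2)), rng.normal(2, 1, (150, 2))])

for k in range(1, 5):
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    print(k, round(gm.bic(data), 1), round(gm.aic(data), 1))
```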

11.
Formal models in psychology are used to make theoretical ideas precise and allow them to be evaluated quantitatively against data. We focus on one important, but under-used and incorrectly maligned, method for building theoretical assumptions into formal models, offered by the Bayesian statistical approach. This method involves capturing theoretical assumptions about the psychological variables in models by placing informative prior distributions on the parameters representing those variables. We demonstrate this approach of casting basic theoretical assumptions in an informative prior by considering a case study that involves the generalized context model (GCM) of category learning. We capture existing theorizing about the optimal allocation of attention in an informative prior distribution to yield a model that is higher in psychological content and lower in complexity than the standard implementation. We also highlight that formalizing psychological theory within an informative prior distribution allows standard Bayesian model selection methods to be applied without concerns about the sensitivity of results to the prior. We then use Bayesian model selection to test the theoretical assumptions about optimal allocation formalized in the prior. We argue that the general approach of using psychological theory to guide the specification of informative prior distributions is widely applicable and should be routinely used in psychological modeling.
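A sketch of the central device, assuming a two-dimensional stimulus space, an assumed 'optimal' attention weight of 0.8, and an arbitrary prior concentration; none of these values come from the paper.

```python
# Sketch: the GCM's attention-weighted similarity with a theory-informed
# prior on the attention weight w. The 'optimal' weight and prior
# concentration are illustrative assumptions.
import numpy as np
from scipy import stats

def gcm_similarity(x, exemplar, w, c=1.0):
    # attention-weighted city-block distance over two stimulus dimensions
    d = w * abs(x[0] - exemplar[0]) + (1 - w) * abs(x[1] - exemplar[1])
    return np.exp(-c * d)

w_opt = 0.8                                          # assumed optimal weight
prior_w = stats.beta(w_opt * 20, (1 - w_opt) * 20)   # informative prior near 0.8
print(prior_w.mean(), prior_w.interval(0.95))
print(gcm_similarity((0.2, 0.5), (0.3, 0.9), w=w_opt))
```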

12.
The multinomial (Dirichlet) model, derived from de Finetti's concept of exchangeability, is proposed as a general Bayesian framework to test axioms on data, in particular, deterministic axioms characterizing theories of choice or measurement. For testing, the proposed framework does not require a deterministic axiom to be cast in a probabilistic form (e.g., casting deterministic transitivity as weak stochastic transitivity). The generality of this framework is demonstrated through empirical tests of 16 different axioms, including transitivity, consequence monotonicity, segregation, additivity of joint receipt, stochastic dominance, coalescing, restricted branch independence, double cancellation, triple cancellation, and the Thomsen condition. The model generalizes many previously proposed methods of axiom testing under measurement error, is analytically tractable, and provides a Bayesian framework for the random relation approach to probabilistic measurement (J. Math. Psychol. 40 (1996) 219). A hierarchical and nonparametric generalization of the model is discussed.
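In the binary-choice special case the Dirichlet reduces to Beta distributions, and the posterior probability of an axiom-consistent region can be estimated by Monte Carlo. The counts and the transitivity-style constraint below are illustrative assumptions, not the paper's data or exact formulation.

```python
# Sketch: Beta posteriors over pairwise choice probabilities, with the
# posterior mass of an axiom-consistent region estimated by simulation.
import numpy as np

rng = np.random.default_rng(5)
# (prefer, not prefer) counts for pairs ab, bc, ac, with Beta(1, 1) priors
counts = {"ab": (18, 7), "bc": (16, 9), "ac": (14, 11)}
draws = {k: rng.beta(x + 1, y + 1, size=50000) for k, (x, y) in counts.items()}

violates = (draws["ab"] > 0.5) & (draws["bc"] > 0.5) & (draws["ac"] <= 0.5)
print("posterior P(axiom-consistent region):", 1 - violates.mean())
```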

13.
14.
Computerized adaptive testing under nonparametric IRT models
Nonparametric item response models have been developed as alternatives to the relatively inflexible parametric item response models. An open question is whether it is possible and practical to administer computerized adaptive testing with nonparametric models. This paper explores the possibility of computerized adaptive testing when using nonparametric item response models. A central issue is that the derivatives of item characteristic curves may not be estimated well, which eliminates the availability of the standard maximum Fisher information criterion. As alternatives, procedures based on Shannon entropy and Kullback–Leibler information are proposed. For a long test, these procedures, which do not require the derivatives of the item characteristic curves, become equivalent to the maximum Fisher information criterion. A simulation study is conducted to study the behavior of these two procedures, compared with random item selection. The study shows that the procedures based on Shannon entropy and Kullback–Leibler information perform similarly in terms of root mean square error, and perform much better than random item selection. The study also shows that item exposure rates need to be addressed for these methods to be practical. The authors would like to thank Hua Chang for his help in conducting this research.
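A sketch of a Kullback–Leibler item-selection rule, assuming a 2PL item bank and a simple integration interval around the current ability estimate; the paper's exact index may differ in its details.

```python
# Sketch of KL-based item selection: choose the item whose response
# distribution at the current ability estimate diverges most from those at
# nearby abilities. Bank parameters and the interval are illustrative.
import numpy as np

rng = np.random.default_rng(6)
a = rng.uniform(0.8, 2.0, 100)        # discriminations
b = rng.normal(0.0, 1.0, 100)         # difficulties

def p2pl(theta, a_j, b_j):
    return 1.0 / (1.0 + np.exp(-a_j * (theta - b_j)))

def kl_index(theta_hat, a_j, b_j, delta=1.0, grid=41):
    thetas = np.linspace(theta_hat - delta, theta_hat + delta, grid)
    p0, p = p2pl(theta_hat, a_j, b_j), p2pl(thetas, a_j, b_j)
    kl = p0 * np.log(p0 / p) + (1 - p0) * np.log((1 - p0) / (1 - p))
    return kl.mean() * 2 * delta      # simple quadrature over the interval

theta_hat, administered = 0.3, {5, 17}
best = max((kl_index(theta_hat, a[j], b[j]), j)
           for j in range(100) if j not in administered)
print("next item:", best[1])
```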

15.
16.
A key problem in statistical modeling is model selection, that is, how to choose a model at an appropriate level of complexity. This problem appears in many settings, most prominently in choosing the number of clusters in mixture models or the number of factors in factor analysis. In this tutorial, we describe Bayesian nonparametric methods, a class of methods that side-steps this issue by allowing the data to determine the complexity of the model. This tutorial is a high-level introduction to Bayesian nonparametric methods and contains several examples of their application.
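One canonical example of letting the data determine complexity is the Dirichlet process. The sketch below draws cluster counts from its Chinese restaurant process representation, showing how the number of clusters grows with the data rather than being fixed in advance.

```python
# Chinese restaurant process: each observation joins an existing cluster in
# proportion to its size, or opens a new one in proportion to alpha.
import numpy as np

def crp_cluster_counts(n, alpha, rng):
    counts = [1]                               # first observation opens a cluster
    for _ in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)                   # open a new cluster
        else:
            counts[k] += 1
    return counts

rng = np.random.default_rng(7)
for alpha in (0.5, 1.0, 5.0):
    print(alpha, "->", len(crp_cluster_counts(500, alpha, rng)), "clusters")
```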

17.
18.
The purpose of the popular Iowa gambling task is to study decision making deficits in clinical populations by mimicking real-life decision making in an experimental context. Busemeyer and Stout [Busemeyer, J. R., & Stout, J. C. (2002). A contribution of cognitive decision models to clinical assessment: Decomposing performance on the Bechara gambling task. Psychological Assessment, 14, 253-262] proposed an “Expectancy Valence” reinforcement learning model that estimates three latent components that are assumed to jointly determine choice behavior in the Iowa gambling task: weighing of wins versus losses, memory for past payoffs, and response consistency. In this article we explore the statistical properties of the Expectancy Valence model. We first demonstrate the difficulty of applying the model on the level of a single participant, we then propose and implement a Bayesian hierarchical estimation procedure to coherently combine information from different participants, and we finally apply the Bayesian estimation procedure to data from an experiment designed to provide a test of specific influence.
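A minimal sketch of the three-component structure described above, with illustrative parameter values; sign conventions and details vary slightly across published versions of the model, so treat this as an approximation rather than the authors' exact specification.

```python
# Sketch of the Expectancy Valence components: a win/loss weight w, a
# learning rate a for expectancy updating, and a consistency parameter c
# that sharpens the softmax over trials. All values are illustrative.
import numpy as np

def ev_model(wins, losses, choices, w=0.4, a=0.2, c=0.5):
    ev = np.zeros(4)                        # expectancies for the four decks
    probs = []
    for t, (win, loss, d) in enumerate(zip(wins, losses, choices), start=1):
        theta = (t / 10.0) ** c             # trial-dependent response consistency
        p = np.exp(theta * ev) / np.exp(theta * ev).sum()
        probs.append(p[d])                  # probability of the observed choice
        v = w * win - (1 - w) * loss        # valence: weighted wins vs. losses
        ev[d] += a * (v - ev[d])            # delta-rule expectancy update
    return np.array(probs)

print(ev_model(wins=[100, 50], losses=[0, 250], choices=[0, 1]))
```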

19.
Hyperspace analog to language (HAL) is a high-dimensional model of semantic space that uses the global co-occurrence frequency of words in a large corpus of text as the basis for a representation of semantic memory. In the original HAL model, many parameters were set without any a priori rationale. We have created and publicly released a computer application, the High Dimensional Explorer (HiDEx), that makes it possible to systematically alter the values of these parameters to examine their effect on the co-occurrence matrix that instantiates the model. We took an empirical approach to understanding the influence of the parameters on the measures produced by the models, looking at how well matrices derived with different parameters could predict human reaction times in lexical decision and semantic decision tasks. New parameter sets give us measures of semantic density that improve the model’s ability to predict behavioral measures. Implications for such models are discussed.
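HAL's core computation is a weighted co-occurrence count; below is a toy sketch with a ramped (linearly decaying) window, the kind of parameter HiDEx lets the user vary. The corpus and window size are toys, not HiDEx defaults.

```python
# Toy HAL-style co-occurrence matrix: words within a window of each other
# are counted, with closer neighbors weighted more heavily.
import numpy as np

corpus = "the cat sat on the mat the dog sat on the log".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
window = 4

cooc = np.zeros((len(vocab), len(vocab)), dtype=int)
for i, w in enumerate(corpus):
    for d in range(1, window + 1):          # words following position i
        if i + d < len(corpus):
            cooc[idx[w], idx[corpus[i + d]]] += window - d + 1  # closer = heavier
print(vocab)
print(cooc)
```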

20.