Similar documents (20 results)
1.
The speed-accuracy trade-off (SAT) is a ubiquitous phenomenon in experimental psychology. One popular strategy for controlling SAT is to use the response signal paradigm. This paradigm produces time-accuracy curves (or SAT functions), which can be compared across different experimental conditions. The typical approach to analyzing time-accuracy curves involves the comparison of goodness-of-fit measures (e.g., adjusted R²), as well as interpretation of point estimates. In this article, we examine the implications of this approach and discuss a number of alternative methods that have been successfully applied in the cognitive modeling literature. These methods include model selection criteria (the Akaike information criterion and the Bayesian information criterion) and interval estimation procedures (bootstrap and Bayesian). We demonstrate the utility of these methods with a hypothetical data set.
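To make the kind of comparison described above concrete, here is a minimal sketch assuming a hypothetical exponential-approach SAT function and simulated d' data (not the authors' actual analysis): it fits a full and a reduced SAT model, reports AIC and BIC for each, and bootstraps an interval for the asymptote.

```python
# Illustrative sketch only: a hypothetical exponential-approach SAT function
# and simulated d' data, used to show an AIC/BIC comparison and a bootstrap
# interval of the kind the abstract discusses (not the authors' analysis).
import numpy as np
from scipy.optimize import curve_fit

def sat_full(t, lam, beta, delta):
    # d'(t) = lambda * (1 - exp(-beta * (t - delta))), zero before the intercept delta
    return lam * (1 - np.exp(-beta * np.clip(t - delta, 0, None)))

def sat_reduced(t, lam, beta):
    # reduced model: intercept fixed at 0
    return lam * (1 - np.exp(-beta * t))

def gaussian_aic_bic(y, yhat, k):
    # AIC/BIC under Gaussian residuals; k counts only the curve parameters
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

rng = np.random.default_rng(0)
t = np.linspace(0.1, 3.0, 12)                      # processing times (s)
d_obs = sat_full(t, 2.5, 1.8, 0.3) + rng.normal(0, 0.15, t.size)

p_full, _ = curve_fit(sat_full, t, d_obs, p0=[2.0, 1.0, 0.2])
p_red, _ = curve_fit(sat_reduced, t, d_obs, p0=[2.0, 1.0])
print("full    AIC, BIC:", gaussian_aic_bic(d_obs, sat_full(t, *p_full), 3))
print("reduced AIC, BIC:", gaussian_aic_bic(d_obs, sat_reduced(t, *p_red), 2))

# Nonparametric bootstrap interval for the asymptote (lambda) of the full model.
boot = []
for _ in range(1000):
    idx = rng.integers(0, t.size, t.size)
    try:
        b, _ = curve_fit(sat_full, t[idx], d_obs[idx], p0=p_full, maxfev=5000)
        boot.append(b[0])
    except RuntimeError:
        continue
print("95% bootstrap CI for lambda:", np.percentile(boot, [2.5, 97.5]))
```

The same resampling loop can be pointed at any parameter of interest, and the AIC/BIC comparison extends directly to more than two candidate functional forms.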

2.
We describe a formal framework for analyzing how statistical properties of natural environments and the process of natural selection interact to determine the design of perceptual and cognitive systems. The framework consists of two parts: a Bayesian ideal observer with a utility function appropriate for natural selection, and a Bayesian formulation of Darwin's theory of natural selection. Simulations of Bayesian natural selection were found to yield new insights, for example, into the co-evolution of camouflage, color vision, and decision criteria. The Bayesian framework captures and generalizes, in a formal way, many of the important ideas of other approaches to perception and cognition.

3.
Two psychometric models are presented for evaluating the difficulty of the distractors in multiple-choice items. They are based on the criterion of rising distractor selection ratios, which facilitates interpretation of the subject and item parameters. Statistical inferential tools are developed in a Bayesian framework: modal a posteriori estimation by application of an EM algorithm and model evaluation by monitoring posterior predictive replications of the data matrix. An educational example with real data is included to exemplify the application of the models and compare them with the nominal categories model. This research was supported by the DGI grant BSO2002-01485. I would like to thank Eric Maris and Vicente Ponsoda for their advice, Juan Botella for providing the data for the empirical application, and three anonymous reviewers for their comments that were essential for improving the quality of the paper.

4.
Factor analysis and AIC
The information criterion AIC was introduced to extend the method of maximum likelihood to the multimodel situation. It was obtained by relating the successful experience of the order determination of an autoregressive model to the determination of the number of factors in the maximum likelihood factor analysis. The use of the AIC criterion in factor analysis is particularly interesting when it is viewed as the choice of a Bayesian model. This observation shows that the area of application of AIC can be much wider than the conventional i.i.d. type models on which the original derivation of the criterion was based. The observation of the Bayesian structure of the factor analysis model leads us to the handling of the problem of improper solutions by introducing a natural prior distribution of factor loadings. The author would like to express his thanks to Jim Ramsay, Yoshio Takane, Donald Ramirez and Hamparsum Bozdogan for helpful comments on the original version of the paper. Thanks are also due to Emiko Arahata for her help in computing.
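As a rough illustration of the use case this abstract describes (choosing the number of factors by AIC in maximum likelihood factor analysis), the sketch below relies on scikit-learn's FactorAnalysis and a standard free-parameter count; both are stand-ins, not Akaike's original implementation.

```python
# Illustrative sketch only: choose the number of factors by minimizing AIC.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n, p, true_k = 500, 10, 3
loadings = rng.normal(size=(p, true_k))
X = rng.normal(size=(n, true_k)) @ loadings.T + rng.normal(scale=0.5, size=(n, p))

def fa_aic(X, k):
    n_obs, n_vars = X.shape
    fa = FactorAnalysis(n_components=k).fit(X)
    loglik = fa.score(X) * n_obs                       # score() returns the mean log-likelihood
    n_params = n_vars * k + n_vars - k * (k - 1) // 2  # loadings + uniquenesses - rotational freedom
    return 2 * n_params - 2 * loglik

aics = {k: fa_aic(X, k) for k in range(1, 7)}
print("AIC-selected number of factors:", min(aics, key=aics.get))
```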

5.
In mathematical modeling of cognition, it is important to have well-justified criteria for choosing among differing explanations (i.e., models) of observed data. This paper introduces a Bayesian model selection approach that formalizes Occam’s razor, choosing the simplest model that describes the data well. The choice of a model is carried out by taking into account not only the traditional model selection criteria (i.e., a model’s fit to the data and the number of parameters) but also the extension of the parameter space, and, most importantly, the functional form of the model (i.e., the way in which the parameters are combined in the model’s equation). An advantage of the approach is that it can be applied to the comparison of non-nested models as well as nested ones. Application examples are presented and implications of the results for evaluating models of cognition are discussed.
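The quantity doing the work in this kind of Occam's razor argument is the marginal likelihood; the generic form below (notation assumed here, not necessarily the paper's symbols) shows how posterior odds weigh both fit and flexibility.

```latex
% Marginal likelihood of model M_i and the resulting posterior odds (generic notation).
p(y \mid M_i) = \int p(y \mid \theta_i, M_i)\, p(\theta_i \mid M_i)\, d\theta_i,
\qquad
\frac{p(M_1 \mid y)}{p(M_2 \mid y)}
  = \frac{p(y \mid M_1)}{p(y \mid M_2)} \cdot \frac{p(M_1)}{p(M_2)}
```

A model whose functional form or parameter range lets it fit many data patterns must spread its prior predictive mass thinly, so its marginal likelihood for any one observed data set tends to be lower than that of a simpler model that fits the data about as well.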

6.
Formal models in psychology are used to make theoretical ideas precise and allow them to be evaluated quantitatively against data. We focus on one important (but under-used and incorrectly maligned) method for building theoretical assumptions into formal models, offered by the Bayesian statistical approach. This method involves capturing theoretical assumptions about the psychological variables in models by placing informative prior distributions on the parameters representing those variables. We demonstrate this approach of casting basic theoretical assumptions in an informative prior by considering a case study that involves the generalized context model (GCM) of category learning. We capture existing theorizing about the optimal allocation of attention in an informative prior distribution to yield a model that is higher in psychological content and lower in complexity than the standard implementation. We also highlight that formalizing psychological theory within an informative prior distribution allows standard Bayesian model selection methods to be applied without concerns about the sensitivity of results to the prior. We then use Bayesian model selection to test the theoretical assumptions about optimal allocation formalized in the prior. We argue that the general approach of using psychological theory to guide the specification of informative prior distributions is widely applicable and should be routinely used in psychological modeling.

7.
In many types of statistical modeling, inequality constraints are imposed between the parameters of interest. As we will show in this paper, the DIC (i.e., the posterior Deviance Information Criterion proposed as a Bayesian model selection tool by Spiegelhalter, Best, Carlin, & van der Linde, 2002) fails when comparing inequality constrained hypotheses. In this paper, we will derive the prior DIC and show that it also fails when comparing inequality constrained hypotheses. However, it will be shown that a modification of the prior predictive loss function that is minimized by the prior DIC renders a criterion that does have the properties needed in order to be able to compare inequality constrained hypotheses. This new criterion will be called the Prior Information Criterion (PIC) and will be illustrated and evaluated using simulated data and examples. The PIC has a close connection with the marginal likelihood in combination with the encompassing prior approach, and both methods will be compared. All in all, the main message of the current paper is: (1) do not use the classical DIC when evaluating inequality constrained hypotheses; use the PIC instead; and (2) the PIC is considered a proper model selection tool in the context of evaluating inequality constrained hypotheses.
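For reference, the classical posterior DIC that the abstract argues against is the Spiegelhalter et al. (2002) criterion, written here in generic notation; the prior DIC and the PIC modification are developed in the paper itself.

```latex
% Posterior DIC of Spiegelhalter et al. (2002), generic notation.
D(\theta) = -2 \log p(y \mid \theta), \qquad
p_D = \overline{D(\theta)} - D(\bar{\theta}), \qquad
\mathrm{DIC} = D(\bar{\theta}) + 2\,p_D = \overline{D(\theta)} + p_D
```

Here \(\overline{D(\theta)}\) is the posterior mean deviance and \(\bar{\theta}\) the posterior mean of the parameters.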

8.
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood estimation methods (conditional, marginal, and joint). Three information criterion fit indices (the Akaike information criterion, the Bayesian information criterion, and the sample-size-adjusted BIC) were used in a simulation study and an empirical study. Findings of this study showed that the spurious latent class problem was observed with marginal maximum likelihood and joint maximum likelihood estimation. However, conditional maximum likelihood estimation showed no over-extraction problem with non-normal ability distributions.

9.
Vrieze SI. Psychological Methods, 2012, 17(2): 228-243
This article reviews the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. Theoretical statistical results in regression are discussed, and more important issues are illustrated with novel simulations involving latent variable models including factor analysis, latent profile analysis, and factor mixture models. Asymptotically, the BIC is consistent, in that it will select the true model if, among other assumptions, the true model is among the candidate models considered. The AIC is not consistent under these circumstances. When the true model is not in the candidate model set, the AIC is efficient, in that it will asymptotically choose whichever model minimizes the mean squared error of prediction/estimation. The BIC is not efficient under these circumstances. Unlike the BIC, the AIC also has a minimax property, in that it can minimize the maximum possible risk in finite sample sizes. In sum, the AIC and BIC have quite different properties that require different assumptions, and applied researchers and methodologists alike will benefit from improved understanding of the asymptotic and finite-sample behavior of these criteria. The ultimate decision to use the AIC or BIC depends on many factors, including the loss function employed, the study's methodological design, the substantive research question, and the notion of a true model and its applicability to the study at hand.
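The two criteria under discussion differ only in their complexity penalty; for a model with \(k\) free parameters, maximized likelihood \(\hat{L}\), and sample size \(n\):

```latex
\mathrm{AIC} = -2 \log \hat{L} + 2k, \qquad
\mathrm{BIC} = -2 \log \hat{L} + k \log n
```

Because the BIC penalty grows with \(\log n\), superfluous parameters are eventually rejected (consistency), whereas the constant AIC penalty favors whichever candidate minimizes prediction error when no candidate is exactly true (efficiency), which is the trade-off the article elaborates.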

10.
The authors identify and provide an integration of 3 criteria for establishing cue-search hierarchies in inferential judgment. Cues can be ranked by information value according to expected information gain (Bayesian criterion), cue-outcome correlation (correlational criterion), or ecological validity (accuracy criterion). All criteria significantly predicted information acquisition behavior; however, in 3 experiments, the most successful predictor was the correlational criterion (followed by the Bayesian). Although participants showed sensitivity to task constraints, searching for less information when it was more expensive (Experiment 1) and when under time constraints (Experiment 2), concomitant changes in the relative frequency of acquisition of cues with different information values were not observed. A rational analysis illustrates why such changes in the frequency of acquisition would be beneficial, and reasons for the failure to observe such behavior are discussed.
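Of the three criteria, the Bayesian one is the least self-explanatory; a generic statement of expected information gain for a cue \(C\) about an outcome \(O\) (notation assumed here, not necessarily the authors' exact formulation) is:

```latex
\mathrm{EIG}(C) = H(O) - \sum_{c} p(C = c)\, H(O \mid C = c),
\qquad
H(O) = -\sum_{o} p(o) \log p(o)
```

Under this criterion, cues would be searched in decreasing order of expected reduction in outcome uncertainty.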

11.
We propose a hierarchical Bayesian model for analyzing multi-site experimental fMRI studies. Our method takes the hierarchical structure of the data (subjects are nested within sites, and there are multiple observations per subject) into account and allows for modeling between-site variation. Using posterior predictive model checking and model selection based on the deviance information criterion (DIC), we show that our model provides a good fit to the observed data by sharing information across the sites. We also propose a simple approach for evaluating the efficacy of the multi-site experiment by comparing the results to those that would be expected in hypothetical single-site experiments with the same sample size.

12.
13.
In this study, eight statistical selection strategies were evaluated for selecting the parameterizations of log-linear models used to model the distributions of psychometric tests. The selection strategies included significance tests based on four chi-squared statistics (likelihood ratio, Pearson, Freeman–Tukey, and Cressie–Read) and four additional strategies (Akaike information criterion (AIC), Bayesian information criterion (BIC), consistent Akaike information criterion (CAIC), and a measure attributed to Goodman). The strategies were evaluated in simulations for different log-linear models of univariate and bivariate test-score distributions and two sample sizes. Results showed that all eight selection strategies were most accurate for the largest sample size considered. For univariate distributions, the AIC selection strategy was especially accurate for selecting the correct parameterization of a complex log-linear model and the likelihood ratio chi-squared selection strategy was the most accurate strategy for selecting the correct parameterization of a relatively simple log-linear model. For bivariate distributions, the likelihood ratio chi-squared, Freeman–Tukey chi-squared, BIC, and CAIC selection strategies had similarly high selection accuracies.

14.
The Bayesian information criterion (BIC) has sometimes been used in SEM, even when a frequentist approach is adopted. Using simple mediation and moderation models as examples, we form a posterior probability distribution over a finite set of candidate models from their BIC values, which we call the BIC posterior, to assess model selection uncertainty. This approach is simple but rarely used. The posterior probability distribution can be used to form a credibility set of models and to incorporate prior probabilities for model comparison and selection. The approach was validated in a large-scale simulation, which showed that the approximation via the BIC posterior is very good except when both the sample size and the magnitude of the parameters are small. We applied the BIC posterior to a real data set; it has the advantages of flexibility in incorporating priors, addressing overfitting, and giving a full picture of the posterior distribution with which to assess model selection uncertainty.
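A minimal sketch of the BIC-posterior computation described above, using hypothetical BIC values and optional prior model probabilities (this is the standard BIC approximation to posterior model probabilities, not the authors' code):

```python
# Illustrative sketch only: turn BIC values for a finite set of candidate
# models into approximate posterior model probabilities ("BIC posterior"),
# with optional prior probabilities. The BIC values below are hypothetical.
import numpy as np

def bic_posterior(bics, priors=None):
    bics = np.asarray(bics, dtype=float)
    priors = np.ones_like(bics) if priors is None else np.asarray(priors, dtype=float)
    w = np.exp(-0.5 * (bics - bics.min())) * priors    # subtract the minimum for numerical stability
    return w / w.sum()

bics = [1012.3, 1009.8, 1015.1]                        # e.g., three candidate mediation models
print(bic_posterior(bics))                             # equal prior probabilities
print(bic_posterior(bics, priors=[0.5, 0.25, 0.25]))   # informative prior probabilities
```

A credibility set can then be formed by adding models in decreasing order of posterior probability until the cumulative probability exceeds a chosen level such as .95.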

15.
This article examines a Bayesian nonparametric approach to model selection and model testing, which is based on concepts from Bayesian decision theory and information theory. The approach can be used to evaluate the predictive utility of any model that is either probabilistic or deterministic, with that model analyzed under either the Bayesian or classical-frequentist approach to statistical inference. Conditional on an observed set of data, generated from some unknown true sampling density, the approach identifies the “best” model as the one that predicts a sampling density that explains the most information about the true density. Furthermore, in the approach, the decision is to reject a model when it does not explain enough information about the true density (according to a straightforward calibration of the Kullback-Leibler divergence measure). The posterior estimate of the true density is based on a Bayesian nonparametric prior that can give positive support to the entire space of sampling densities (defined on some sample space). This article also discusses the theoretical and practical advantages of the Bayesian nonparametric approach over all other types of model selection procedures, and over any model testing procedure that depends on interpreting a p-value. Finally, the Bayesian nonparametric approach is illustrated on four real data sets, in the comparison and testing of order-constrained models, cognitive models, models of choice-behavior, and a test of a general psychometric model.
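The calibration mentioned in the abstract is built on the Kullback-Leibler divergence from the estimated true density \(f\) to a model's predicted density \(g\); in generic notation:

```latex
% Kullback-Leibler divergence (generic notation; the paper supplies the calibration and cutoff).
D_{\mathrm{KL}}(f \,\|\, g) = \int f(x) \log \frac{f(x)}{g(x)}\, dx \;\ge\; 0
```

Equality holds only when \(g\) matches \(f\); a model is rejected when its divergence implies that it explains too little of the information in \(f\). The specific cutoff is the paper's calibration and is not reproduced here.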

16.
The use of objective criterion measures in validation research raises the issue of criterion contamination. Several methods of treating sales measures in the empirical keying of a biodata instrument are compared. The relative contaminating effects of local economic conditions and company factors are evaluated. This study is an example of how practitioners can use their knowledge about the selection context to develop acceptable criterion norming strategies.

17.
Applications of standard item response theory models assume local independence of items and persons. This paper presents polytomous multilevel testlet models for dual dependence due to item and person clustering in testlet-based assessments with clustered samples. Simulation and survey data were analysed with a multilevel partial credit testlet model. This model was compared with three alternative models – a testlet partial credit model (PCM), multilevel PCM, and PCM – in terms of model parameter estimation. The results indicated that the deviance information criterion was the fit index that always correctly identified the true multilevel testlet model based on the quantified evidence in model selection, while the Akaike and Bayesian information criteria could not identify the true model. In general, the estimation model and the magnitude of item and person clustering impacted the estimation accuracy of ability parameters, while only the estimation model and the magnitude of item clustering affected the item parameter estimation accuracy. Furthermore, ignoring item clustering effects produced higher total errors in item parameter estimates but did not have much impact on the accuracy of ability parameter estimates, while ignoring person clustering effects yielded higher total errors in ability parameter estimates but did not have much effect on the accuracy of item parameter estimates. When both clustering effects were ignored in the PCM, item and ability parameter estimation accuracy was reduced.

18.
19.
This study assessed the relative accuracy of 3 techniques (local validity studies, meta-analysis, and Bayesian analysis) for estimating test validity, incremental validity, and adverse impact in the local selection context. Bayes-analysis involves combining a local study with nonlocal (meta-analytic) validity data. Using tests of cognitive ability and personality (conscientiousness) as predictors, an empirically driven selection scenario illustrates conditions in which each of the 3 estimation techniques performs best. General recommendations are offered for how to estimate local parameters, based on true population variability and the number of studies in the meta-analytic prior. Benefits of empirical Bayesian analysis for personnel selection are demonstrated, and equations are derived to help guide the choice of a local validity technique (i.e., meta-analysis vs. local study vs. Bayes-analysis).
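As a rough indication of what combining a local study with a meta-analytic prior looks like, a generic normal-normal (precision-weighted) empirical Bayes update is sketched below; this is a standard formulation offered for orientation, not necessarily the paper's exact equations.

```latex
% Generic normal-normal empirical Bayes combination of a local validity estimate
% with a meta-analytic prior (a standard formulation, not the paper's equations).
\hat{\rho}_{\text{Bayes}}
  = \frac{\rho_{\text{meta}}/\tau^{2} + r_{\text{local}}/\sigma_{e}^{2}}
         {1/\tau^{2} + 1/\sigma_{e}^{2}}
```

Here \(\rho_{\text{meta}}\) and \(\tau^{2}\) are the mean and between-study variance from the meta-analytic prior, and \(r_{\text{local}}\) and \(\sigma_{e}^{2}\) are the local validity estimate and its sampling-error variance; the local study dominates when it is large and precise, and the meta-analytic prior dominates when true population variability is small.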

20.
In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented.
