Similar Documents
20 similar documents were retrieved.
1.
Tversky (1972) has proposed a family of models for paired-comparison data that generalize the Bradley-Terry-Luce (BTL) model and can, therefore, apply to a diversity of situations in which the BTL model is doomed to fail. In this article, we present a Matlab function that makes it easy to specify any of these general models (EBA, Pretree, or BTL) and to estimate their parameters. The program eliminates the time-consuming task of constructing the likelihood function by hand for every single model. The usage of the program is illustrated by several examples. Features of the algorithm are outlined. The purpose of this article is to facilitate the use of probabilistic choice models in the analysis of data resulting from paired comparisons.
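As a rough illustration of the model class such a program estimates (not the authors' Matlab code), the sketch below fits a simple BTL model to paired-comparison counts by maximum likelihood in Python; the win counts, the exponential parameterization, and the choice of reference item are all hypothetical.

```python
# Minimal BTL (Bradley-Terry-Luce) maximum-likelihood sketch (illustrative only).
# wins[i, j] counts how often option i was chosen over option j.
import numpy as np
from scipy.optimize import minimize

wins = np.array([[0, 12, 15],
                 [8,  0, 11],
                 [5,  9,  0]])          # hypothetical choice frequencies
n = wins.shape[0]

def neg_log_lik(theta):
    # Scale values v_i = exp(theta_i); the last value is fixed at 1 for identifiability.
    v = np.exp(np.append(theta, 0.0))
    ll = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                ll += wins[i, j] * np.log(v[i] / (v[i] + v[j]))
    return -ll

fit = minimize(neg_log_lik, x0=np.zeros(n - 1), method="BFGS")
v_hat = np.exp(np.append(fit.x, 0.0))
print("estimated BTL scale values (last fixed to 1):", v_hat)
```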

2.
The paper presents a straightforward extension of the Bradley-Terry-Luce model (BTL model) that can be derived from the logistic threshold model of psychophysics which assumes that psychometric functions are logistic probability functions. It is shown that (under weak side conditions) the logistic threshold model is a submodel of the extended BTL model. Moreover, representation and uniqueness theorems are proven that provide some evidence that the extended BTL model is a useful and widely applicable generalization of the ordinary BTL model. Finally, the logistic shape of the psychometric function is derived from axioms about binary choice probabilities. This characterization of the logistic threshold model can replace goodness-of-fit tests for the logistic probability distribution.
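The following small Python check illustrates, under assumed scale values, the equivalence the paper builds on: a logistic psychometric function of utility differences induces the same binary choice probabilities as a BTL representation with v = exp(u). It is only a numerical illustration, not the paper's representation theorem.

```python
# Numerical check that a logistic function of utility differences,
# P(a, b) = 1 / (1 + exp(-(u_a - u_b))), equals the BTL form v_a / (v_a + v_b)
# when v = exp(u). The scale values are hypothetical.
import numpy as np

u = {"a": 0.4, "b": -0.3, "c": 1.1}     # hypothetical scale values

def p_logistic(x, y):
    return 1.0 / (1.0 + np.exp(-(u[x] - u[y])))

def p_btl(x, y):
    v = {k: np.exp(val) for k, val in u.items()}
    return v[x] / (v[x] + v[y])

for pair in [("a", "b"), ("b", "c"), ("a", "c")]:
    assert np.isclose(p_logistic(*pair), p_btl(*pair))
print("logistic-difference and BTL choice probabilities coincide for these values")
```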

3.
Multinomial processing tree (MPT) models are statistical models that allow for the prediction of categorical frequency data by sets of unobservable (cognitive) states. In MPT models, the probability that an event belongs to a certain category is a sum of products of state probabilities. AppleTree is a computer program for Macintosh for testing user-defined MPT models. It can fit model parameters to empirical frequency data, provide confidence intervals for the parameters, generate tree graphs for the models, and perform identifiability checks. In this article, the algorithms used by AppleTree and the handling of the program are described.
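As a minimal, hypothetical example of the MPT model class (not AppleTree itself), the sketch below fits the one-high-threshold recognition model to made-up frequencies by multinomial maximum likelihood; note how each category probability is a sum of products of state probabilities.

```python
# One-high-threshold MPT model fitted by multinomial maximum likelihood.
# The frequencies are hypothetical; this only sketches the model class.
import numpy as np
from scipy.optimize import minimize

old_freq = np.array([75, 25])           # old items: (hits, misses)
new_freq = np.array([30, 70])           # new items: (false alarms, correct rejections)

def neg_log_lik(params):
    D, g = params                       # D = detection probability, g = guessing probability
    p_hit = D + (1 - D) * g             # sum of products of state probabilities
    p_fa = g
    return -(old_freq[0] * np.log(p_hit) + old_freq[1] * np.log(1 - p_hit)
             + new_freq[0] * np.log(p_fa) + new_freq[1] * np.log(1 - p_fa))

fit = minimize(neg_log_lik, x0=[0.5, 0.5], bounds=[(1e-6, 1 - 1e-6)] * 2)
print("estimated D and g:", fit.x)
```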

4.
The paper presents different representation theorems for the Bradley-Terry-Luce (BTL) models of Beaver and Gokhale and of Davidson and Beaver. In particular, algorithms that can be used in constructing BTL scales are provided. The uniqueness theorems show that the Davidson-Beaver model should be preferred to the Beaver-Gokhale model since the multiplicative order effect parameter is uniquely determined whereas the additive effect parameter is merely a ratio scale. Finally, a relationship to the simple BTL model is established. Let p(a, b) denote the probability that a is chosen when (a, b) is presented in a fixed order. Then the probabilities p(a, b) satisfy the Beaver-Gokhale model if and only if the balanced probabilities pb(a, b) := ½(p(a, b) + 1 - p(b, a)) satisfy the simple BTL model.
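The balancing relation quoted above is easy to verify numerically. In the hedged sketch below, the scale values and the additive order bias are hypothetical; it only shows that the balanced probabilities recover the simple BTL value.

```python
# Numerical illustration of the balancing relation pb(a, b) = 0.5 * (p(a, b) + 1 - p(b, a)):
# an additive order advantage cancels out, so the balanced probabilities satisfy the
# simple BTL model. Scale values and bias are hypothetical.
import numpy as np

v = {"a": 2.0, "b": 1.0}                # BTL scale values
bias = 0.05                             # additive advantage for the first-presented option

def p_ordered(x, y):
    return v[x] / (v[x] + v[y]) + bias  # order-biased choice probability

p_balanced = 0.5 * (p_ordered("a", "b") + 1 - p_ordered("b", "a"))
print("raw p(a,b):      ", p_ordered("a", "b"))
print("balanced pb(a,b):", p_balanced, " BTL value:", v["a"] / (v["a"] + v["b"]))
```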

5.
Multinomial processing tree (MPT) models are a family of stochastic models for psychology and related sciences that can be used to model observed categorical frequencies as a function of a sequence of latent states. For the analysis of such models, the present article presents a platform-independent computer program called multiTree, which simplifies the creation and the analysis of MPT models. This makes them more convenient to implement and analyze. Also, multiTree offers advanced modeling features. It provides estimates of the parameters and their variability, goodness-of-fit statistics, hypothesis testing, checks for identifiability, parametric and nonparametric bootstrapping, and power analyses. In this article, the algorithms underlying multiTree are given, and a user guide is provided. The multiTree program can be downloaded from http://psycho3.uni-mannheim.de/multitree.

6.
In the present article, a flexible and fast computer program, called fast-dm, for diffusion model data analysis is introduced. Fast-dm is free software that can be downloaded from the authors' websites. The program allows estimating all parameters of Ratcliff's (1978) diffusion model from the empirical response time distributions of any binary classification task. Fast-dm is easy to use: it reads input data from simple text files, while program settings are specified by commands in a control file. With fast-dm, complex models can be fitted, where some parameters may vary between experimental conditions, while other parameters are constrained to be equal across conditions. Detailed directions for use of fast-dm are presented, as well as results from three short simulation studies exemplifying the utility of fast-dm.
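For readers unfamiliar with the model, the sketch below simulates single trials of the Wiener diffusion process whose parameters fast-dm estimates; the parameter values are hypothetical, and the code illustrates the model itself, not fast-dm's estimation procedure.

```python
# Simple simulation of the Wiener diffusion process with drift rate v, threshold
# separation a, starting point z, and non-decision time t0 (hypothetical values).
import numpy as np

def simulate_trial(v=0.8, a=1.2, z=0.6, t0=0.3, dt=0.001, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    x, t = z, 0.0
    while 0.0 < x < a:                  # accumulate noisy evidence until a boundary is hit
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t + t0, x >= a               # response time and which boundary was reached

rng = np.random.default_rng(1)
trials = [simulate_trial(rng=rng) for _ in range(2000)]
rts_upper = [rt for rt, upper in trials if upper]
print("upper-boundary proportion:", len(rts_upper) / len(trials))
print("mean RT (upper boundary):", np.mean(rts_upper))
```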

7.
This article uses a general latent variable framework to study a series of models for nonignorable missingness due to dropout. Nonignorable missing data modeling acknowledges that missingness may depend not only on covariates and observed outcomes at previous time points, as with the standard missing at random assumption, but also on latent variables such as values that would have been observed (missing outcomes), developmental trends (growth factors), and qualitatively different types of development (latent trajectory classes). These alternative predictors of missing data can be explored in a general latent variable framework with the Mplus program. A flexible new model uses an extended pattern-mixture approach where missingness is a function of latent dropout classes in combination with growth mixture modeling. A new selection model not only allows an influence of the outcomes on missingness but allows this influence to vary across classes. Model selection is discussed. The missing data models are applied to longitudinal data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study, the largest antidepressant clinical trial in the United States to date. Despite the importance of this trial, STAR*D growth model analyses using nonignorable missing data techniques have not been explored until now. The STAR*D data are shown to feature distinct trajectory classes, including a low class corresponding to substantial improvement in depression, a minority class with a U-shaped curve corresponding to transient improvement, and a high class corresponding to no improvement. The analyses provide a new way to assess drug efficacy in the presence of dropout.

8.
The comparative format used in ranking and paired comparisons tasks can significantly reduce the impact of uniform response biases typically associated with rating scales. Thurstone's (1927, 1931) model provides a powerful framework for modeling comparative data such as paired comparisons and rankings. Although Thurstonian models are generally presented as scaling models, that is, stimuli-centered models, they can also be used as person-centered models. In this article, we discuss how Thurstone's model for comparative data can be formulated as item response theory models so that respondents' scores on underlying dimensions can be estimated. Item parameters and latent trait scores can be readily estimated using a widely used statistical modeling program. Simulation studies show that item characteristic curves can be accurately estimated with as few as 200 observations and that latent trait scores can be recovered to a high precision. Empirical examples are given to illustrate how the model may be applied in practice and to recommend guidelines for designing ranking and paired comparisons tasks in the future.
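A stripped-down illustration of the Thurstonian idea (Case V rather than the authors' full IRT formulation) is given below; the latent mean utilities are hypothetical.

```python
# Thurstone Case V sketch: each stimulus has a normally distributed utility, and the
# probability of preferring i to j is Phi((mu_i - mu_j) / sqrt(2)). Means are hypothetical.
import numpy as np
from scipy.stats import norm

mu = np.array([0.0, 0.5, 1.3])          # latent mean utilities of three stimuli

def p_prefer(i, j):
    return norm.cdf((mu[i] - mu[j]) / np.sqrt(2.0))

for i in range(3):
    for j in range(i + 1, 3):
        print(f"P(stimulus {i} preferred to stimulus {j}) = {p_prefer(i, j):.3f}")
```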

9.
10.
Based on the prototype-heuristic theory of insight, two experiments examined the temporal offset and position effects of prototype activation. Experiment 1 used a 2 (riddle type) × 2 (prototype presentation time) mixed design. The results showed that accuracy was significantly higher for loose-chunk character riddles than for tight-chunk riddles, and that accuracy was significantly higher when the prototype clue had disappeared than when it remained visible. Experiment 2 used a 2 (prototype position) × 2 (riddle type) × 2 (prototype presentation time) mixed design. The results not only replicated Experiment 1 but also revealed a significant interaction of prototype position, riddle type, and presentation time: when the prototype preceded the target riddle, accuracy for tight-chunk riddles was significantly higher when the clue had disappeared than when it remained; when the prototype followed the target riddle, accuracy for loose-chunk riddles was significantly higher when the clue had disappeared than when it remained. These findings indicate that temporal offset and position effects of the prototype exist in insight problem solving with Chinese character riddles, and that these effects, together with task difficulty, jointly influence problem solving.

11.
Studies in the social and behavioral sciences often involve categorical data, such as ratings, and define latent constructs underlying the research issues as being discrete. In this article, models with discrete latent variables (MDLV) for the analysis of categorical data are grouped into four families, defined in terms of two dimensions (time and sampling) of the data structure. A MATLAB toolbox (referred to as the “MDLV toolbox”) was developed for applying these models in practical studies. For each family of models, model representations and the statistical assumptions underlying the models are discussed. The functions of the toolbox are demonstrated by fitting these models to empirical data from the European Values Study. The purpose of this article is to offer a framework of discrete latent variable models for data analysis, and to develop the MDLV toolbox for use in estimating each model under this framework. With this accessible tool, the application of data modeling with discrete latent variables becomes feasible for a broad range of empirical studies.
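As one hypothetical member of the MDLV family, the sketch below runs a bare-bones EM algorithm for a two-class latent class model with binary items on simulated data; it is meant only to convey the kind of model the toolbox estimates, not its algorithms or the European Values Study data.

```python
# Bare-bones EM for a two-class latent class model with three binary items (simulated data).
import numpy as np

rng = np.random.default_rng(0)
true_pi = np.array([0.6, 0.4])                           # class sizes
true_p = np.array([[0.9, 0.8, 0.7], [0.2, 0.3, 0.1]])    # item-endorsement probs per class
z = rng.choice(2, size=500, p=true_pi)
X = (rng.random((500, 3)) < true_p[z]).astype(float)

pi, p = np.array([0.5, 0.5]), np.array([[0.6, 0.6, 0.6], [0.4, 0.4, 0.4]])
for _ in range(200):
    # E-step: posterior class membership probabilities
    like = np.stack([np.prod(p[c] ** X * (1 - p[c]) ** (1 - X), axis=1) for c in range(2)], axis=1)
    post = like * pi
    post /= post.sum(axis=1, keepdims=True)
    # M-step: update class sizes and conditional item probabilities
    pi = post.mean(axis=0)
    p = (post.T @ X) / post.sum(axis=0)[:, None]
print("estimated class sizes:", pi)
print("estimated item probabilities:\n", p)
```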

12.
The article presents a Bayesian model of causal learning that incorporates generic priors: systematic assumptions about abstract properties of a system of cause-effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes, that is, causes that are few in number and high in their individual powers to produce or prevent effects. The SS power model couples these generic priors with a causal generating function based on the assumption that unobservable causal influences on an effect operate independently (P. W. Cheng, 1997). The authors tested this and other Bayesian models, as well as leading nonnormative models, by fitting multiple data sets in which several parameters were varied parametrically across multiple types of judgments. The SS power model accounted for data concerning judgments of both causal strength and causal structure (whether a causal link exists). The model explains why human judgments of causal structure (relative to a Bayesian model lacking these generic priors) are influenced more by causal power and the base rate of the effect and less by sample size. Broader implications of the Bayesian framework for human learning are discussed.
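The sketch below illustrates the ingredients named in the abstract with a grid approximation: a noisy-OR generating function for one candidate cause plus a background cause, combined with a prior that favors strong causal powers. The specific prior form, grid, and counts are illustrative assumptions, not the authors' exact SS power model.

```python
# Grid-based posterior over causal power under a noisy-OR generating function
# (Cheng, 1997) and an assumed "strength-favoring" prior. Counts are hypothetical.
import numpy as np

n_c, k_c = 20, 16        # cause present: effect occurred on 16 of 20 trials
n_b, k_b = 20, 4         # cause absent:  effect occurred on  4 of 20 trials

w = np.linspace(0.001, 0.999, 200)            # candidate causal power of the cause
b = np.linspace(0.001, 0.999, 200)            # background causal power
W, B = np.meshgrid(w, b, indexing="ij")

p_with = W + B - W * B                        # noisy-OR: P(effect | cause present)
p_without = B                                 # P(effect | cause absent)
log_lik = (k_c * np.log(p_with) + (n_c - k_c) * np.log(1 - p_with)
           + k_b * np.log(p_without) + (n_b - k_b) * np.log(1 - p_without))
log_prior = np.log(np.exp(5 * W) + np.exp(5 * (1 - W)))   # assumed prior pushing w toward 0 or 1
post = np.exp(log_lik + log_prior)
post /= post.sum()
print("posterior mean causal power of the candidate cause:", (post * W).sum())
```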

13.
The nonlinear random coefficient model has become increasingly popular as a method for describing individual differences in longitudinal research. Although promising, the nonlinear model is not used as often as it might be because software options are still somewhat limited. In this article we show that a specialized version of the model can be fit to data using SEM software. The specialization is to a model in which the parameters that enter the function in a linear manner are random, whereas those that enter nonlinearly are common to all individuals. Although this kind of function is not as general as the fully nonlinear model, it still is applicable to many different data sets. Two examples are presented to show how the models can be estimated using popular SEM computer programs.

14.
The problem of deciding whether a set of mental test data is consistent with any one of a large class of item response models is considered. The classical assumption of local independence is weakened to a new condition, local nonnegative dependence (LND). Necessary and sufficient conditions are derived for an LND item response model to fit a set of data. This leads to a condition that a set of data must satisfy if it is to be representable by any item response model that assumes both local independence and monotone item characteristic curves. An example is given to show that LND is strictly weaker than local independence. Thus rejection of LND models implies rejection of all item response models that assume local independence for a given set of data.

This research was supported in part by Grant NIE-G-78-0157 to ETS from the NIE, by the Program Statistics Research Project, and by TOEFL Program Research. I would like to thank Dr. Douglas Jones of ETS for stimulating discussions during the early stages of this research, Dr. Frederick Lord of ETS for his encouragement of this work and comments on earlier drafts of this paper, and Professor Robert Berk of Rutgers University for pointing out that conditions (a), (b), and (c) of Theorem 2 were also sufficient for LND and Monotonicity. Dr. Donald Alderman of ETS provided financial support for the development of a computer program to apply these results to data from the TOEFL program.
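One simple observable consequence in the spirit of this result is that, under local independence with monotone item characteristic curves, all inter-item covariances must be nonnegative. The sketch below checks this on simulated responses; it is a heuristic screen, not the paper's full set of necessary and sufficient conditions.

```python
# Check that all pairwise covariances between item scores are nonnegative, a
# consequence of local independence plus monotone item characteristic curves.
# Responses are simulated from a simple logistic item response model.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.standard_normal(1000)                       # latent ability
difficulty = np.array([-1.0, 0.0, 1.0, 0.5])
X = (rng.random((1000, 4)) < 1 / (1 + np.exp(-(theta[:, None] - difficulty)))).astype(int)

cov = np.cov(X, rowvar=False)
off_diag = cov[~np.eye(4, dtype=bool)]
print("smallest inter-item covariance:", off_diag.min())
print("consistent with the nonnegative-dependence requirement:", bool(off_diag.min() >= -1e-3))
```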

15.
In divided-attention tasks, responses are faster when two target stimuli are presented, and thus one is redundant, than when only a single target stimulus is presented. Raab (1962) suggested an account of this redundant-targets effect in terms of a race model in which the response to redundant target stimuli is initiated by the faster of two separate target detection processes. Such models make a prediction about the probability distributions of reaction times that is often called the race model inequality, and it is often of interest to test this prediction. In this article, we describe a precise algorithm that can be used to test the race model inequality and present MATLAB routines and a Pascal program that implement this algorithm.
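The quantity at the heart of the test is the race model inequality, F_redundant(t) <= F_A(t) + F_B(t). The simplified sketch below estimates empirical distribution functions on simulated reaction times and checks the inequality on a grid of time points; the published algorithm prescribes specific quantile-based steps that are not reproduced here.

```python
# Simplified race-model-inequality check: compare the empirical CDF of redundant-target
# RTs against the sum of the single-target CDFs. RTs are simulated, not real data.
import numpy as np

rng = np.random.default_rng(0)
rt_a = rng.gamma(5, 60, 300)            # single-target condition A (ms, simulated)
rt_b = rng.gamma(5, 65, 300)            # single-target condition B
rt_ab = rng.gamma(5, 50, 300)           # redundant-target condition

def ecdf(sample, t):
    return np.mean(sample[:, None] <= t, axis=0)

t_grid = np.linspace(100, 800, 50)
violation = ecdf(rt_ab, t_grid) - (ecdf(rt_a, t_grid) + ecdf(rt_b, t_grid))
print("largest violation of the race model inequality:", violation.max())
print("inequality violated at some t:", bool((violation > 0).any()))
```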

16.
This paper presents two experiments where participants had to approximate function values at various generalization points of a square, using given function values at a small set of data points. A representative set of standard function approximation models was trained to exactly fit the function values at the data points, and the models' responses at the generalization points were compared to those of humans. A large class of possible models (including the two best predictors identified) was then defined, and the maximal prediction accuracy attainable within this class was evaluated. A new model of quick multivariate function approximation belonging to this class was proposed. Its prediction accuracy was close to the maximum possible, and significantly better than that of all other models tested. The new model also provided a significant account of human response variability. Finally, it was shown that this model is particularly suitable for problems in which the visual system can perform some specific structuring of the data space. This model is therefore considered a suitable starting point for further investigations into quick multivariate function approximation, which is to date an inadequately explored question in cognitive psychology.

17.
Measurements of people’s causal and explanatory models are frequently key dependent variables in investigations of concepts and categories, lay theories, and health behaviors. A variety of challenges are inherent in the pen-and-paper and narrative methods commonly used to measure such causal models. We have attempted to alleviate these difficulties by developing a software tool, ConceptBuilder, for automating the process and ensuring accurate coding and quantification of the data. In this article, we present ConceptBuilder, a multiple-use tool for data gathering, data entry, and diagram display. We describe the program’s controls, report the results of a usability test of the program, and discuss some technical aspects of the program. We also describe ConceptAnalysis, a companion program for generating data matrices and analyses, and ConceptViewer, a program for viewing the data exactly as drawn.

18.
Some years ago, Beem (1993, 1995) described a program for fitting two regression lines with an unknown change point (Segcurve). He suggested that such models are useful for the analysis of a variety of phenomena and gave an example of an application to the study of strategy shifts in a mental rotation task. This technique has also proven to be very fruitful for investigating strategy use and strategy shifts in other cognitive tasks. Recently, Beem (1999) developed SegcurvN, which fits n regression lines with (n - 1) unknown change points. In the present article we present this new technique and demonstrate the usefulness of a three-phase segmented linear regression model for the identification of strategies and strategy shifts in cognitive tasks by applying it to data from a numerosity judgment experiment. The advantages and shortcomings of this technique are evaluated.
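The core idea of fitting regression lines with an unknown change point can be conveyed by a simple grid search, as in the sketch below; the data are simulated and the code is an illustration of the two-phase case, not Beem's Segcurve or SegcurvN.

```python
# Two-phase segmented regression by grid search: try each candidate breakpoint,
# fit a line to each segment, and keep the breakpoint with the smallest residual
# sum of squares. Data are simulated with a true change point near x = 20.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(1, 41, dtype=float)
y = np.where(x <= 20, 2.0 * x, 40.0 + 0.3 * (x - 20)) + rng.normal(0, 1.5, x.size)

def rss_for_break(k):
    rss = 0.0
    for xs, ys in [(x[:k], y[:k]), (x[k:], y[k:])]:
        coef = np.polyfit(xs, ys, 1)
        rss += np.sum((ys - np.polyval(coef, xs)) ** 2)
    return rss

candidates = range(3, x.size - 2)        # keep at least 3 points in each segment
best_k = min(candidates, key=rss_for_break)
print("estimated change point near x =", x[best_k])
```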

19.
The past decade has seen a noticeable shift in missing data handling techniques that assume a missing at random (MAR) mechanism, where the propensity for missing data on an outcome is related to other analysis variables. Although MAR is often reasonable, there are situations where this assumption is unlikely to hold, leading to biased parameter estimates. One such example is a longitudinal study of substance use where participants with the highest frequency of use also have the highest likelihood of attrition, even after controlling for other correlates of missingness. There is a large body of literature on missing not at random (MNAR) analysis models for longitudinal data, particularly in the field of biostatistics. Because these methods allow for a relationship between the outcome variable and the propensity for missing data, they require a weaker assumption about the missing data mechanism. This article describes 2 classic MNAR modeling approaches for longitudinal data: the selection model and the pattern mixture model. To date, these models have been slow to migrate to the social sciences, in part because they required complicated custom computer programs. These models are now quite easy to estimate in popular structural equation modeling programs, particularly Mplus. The purpose of this article is to describe these MNAR modeling frameworks and to illustrate their application on a real data set. Despite their potential advantages, MNAR-based analyses are not without problems and also rely on untestable assumptions. This article offers practical advice for implementing and choosing among different longitudinal models.
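A toy illustration of the pattern-mixture logic is given below: estimates are formed within each dropout pattern and combined by the pattern proportions, with an explicitly assumed offset (a sensitivity parameter) standing in for the unidentified part. The data, dropout mechanism, and offset are all hypothetical, and the sketch is not a substitute for the Mplus models discussed in the article.

```python
# Pattern-mixture toy example: combine pattern-specific means, weighting by pattern
# proportions, under an assumed offset for the dropout pattern. All values simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
y = rng.normal(5, 2, n)                                # outcome at the final wave
dropout = rng.random(n) < 1 / (1 + np.exp(-(y - 6)))   # heavier users drop out more (MNAR)
observed_mean = y[~dropout].mean()                     # naive complete-case estimate

delta = 1.0                                            # assumed shift for dropouts (sensitivity parameter)
pattern_means = np.array([observed_mean, observed_mean + delta])
pattern_props = np.array([(~dropout).mean(), dropout.mean()])
print("naive complete-case mean:  ", observed_mean)
print("pattern-mixture estimate:  ", pattern_means @ pattern_props)
print("true mean of all outcomes: ", y.mean())
```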

20.
In behavioral research, PARAFAC analysis, a three-mode generalization of standard principal component analysis (PCA), is often used to disclose the structure of three-way three-mode data. To get insight into the underlying mechanisms, one often wants to relate the component matrices resulting from such a PARAFAC analysis to external (two-way two-mode) information, regarding one of the modes of the three-way data. To this end, linked-mode PARAFAC-PCA analysis can be used, in which the three-way and the two-way data set, which have one mode in common, are simultaneously analyzed. More specifically, a PARAFAC and a PCA model are fitted to the three-way and the two-way data, respectively, restricting the component matrix for the common mode to be equal in both models. Until now, however, no software program has been publicly available to perform such an analysis. Therefore, in this article, the LMPCA program, a free and easy-to-use MATLAB graphical user interface, is presented to perform a linked-mode PARAFAC-PCA analysis. The LMPCA software can be obtained from the authors at http://ppw.kuleuven.be/okp/software/LMPCA. For users who do not have access to MATLAB, a stand-alone version is provided.
