Similar Articles
20 similar articles found.
1.
Tversky (1972) has proposed a family of models for paired-comparison data that generalize the Bradley-Terry-Luce (BTL) model and can, therefore, apply to a diversity of situations in which the BTL model is doomed to fail. In this article, we present a Matlab function that makes it easy to specify any of these general models (EBA, Pretree, or BTL) and to estimate their parameters. The program eliminates the time-consuming task of constructing the likelihood function by hand for every single model. The usage of the program is illustrated by several examples. Features of the algorithm are outlined. The purpose of this article is to facilitate the use of probabilistic choice models in the analysis of data resulting from paired comparisons.
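As a rough illustration of the estimation problem such a toolbox automates, here is a minimal Python sketch (not the authors' Matlab function) that fits BTL scale values to a hypothetical win-count matrix by maximum likelihood:

```python
# Minimal sketch: maximum-likelihood estimation of BTL scale values
# from a paired-comparison count matrix (hypothetical data, 3 items).
import numpy as np
from scipy.optimize import minimize

wins = np.array([[0, 15, 18],   # wins[i, j] = times item i chosen over j
                 [5,  0, 12],
                 [2,  8,  0]])

def neg_log_lik(theta):
    v = np.exp(np.append(theta, 0.0))            # fix last value for identifiability
    p = v[:, None] / (v[:, None] + v[None, :])   # P(i beats j) = v_i / (v_i + v_j)
    mask = ~np.eye(len(v), dtype=bool)
    return -np.sum(wins[mask] * np.log(p[mask]))

fit = minimize(neg_log_lik, x0=np.zeros(wins.shape[0] - 1))
v_hat = np.exp(np.append(fit.x, 0.0))
print("estimated BTL scale values:", (v_hat / v_hat.sum()).round(3))
```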

2.
We present an application, using Excel, that can solve best-fitting parameters for multinomial models. Multinomial modeling has become increasingly popular and can be used in a variety of domains, such as memory, perception, and other domains in which processes are assumed to be dissociable. We offer an application that can be used for a variety of psychological models and can be used on both PC and Macintosh platforms. We illustrate the use of our program by analyzing data from a source memory experiment.
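The same maximum-likelihood logic can be sketched outside Excel. The following Python example (illustrative only, with made-up counts) fits a simple two-parameter one-high-threshold multinomial model:

```python
# Fitting a two-parameter multinomial (one-high-threshold) model by
# maximum likelihood: D = detection probability, g = guessing rate.
import numpy as np
from scipy.optimize import minimize

hits, misses = 70, 30        # responses to old items (hypothetical)
fas,  crs    = 20, 80        # responses to new items (hypothetical)

def neg_log_lik(params):
    D, g = params
    p_hit = D + (1 - D) * g  # model-predicted response probabilities
    p_fa  = g
    return -(hits * np.log(p_hit) + misses * np.log(1 - p_hit)
             + fas * np.log(p_fa) + crs * np.log(1 - p_fa))

fit = minimize(neg_log_lik, x0=[0.5, 0.5], bounds=[(1e-6, 1 - 1e-6)] * 2)
print("D = %.3f, g = %.3f" % tuple(fit.x))
```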

3.
The PARELLA model is a probabilistic parallelogram model that can be used for the measurement of latent attitudes or latent preferences. The data analyzed are the dichotomous responses of persons to stimuli, with a one (zero) indicating agreement (disagreement) with the content of the stimulus. The model provides a unidimensional representation of persons and items. The response probabilities are a function of the distance between person and stimulus: the smaller the distance, the larger the probability that a person will agree with the content of the stimulus. An estimation procedure based on expectation maximization and marginal maximum likelihood is developed and the quality of the resulting parameter estimates evaluated. I gratefully acknowledge Ivo Molenaar and Wijbrandt van Schuur for their advice and encouragement during the course of the investigation, Derk-Jan Kiewiet who constructed the program for the ML estimator for the person parameter and Anne Boomsma, Wendy Post, Tom Snijders, and David Thissen for their comments on smaller aspects of the investigation.
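A minimal sketch of the PARELLA response function as it is commonly stated (agreement probability falls with the person-stimulus distance, with a power parameter gamma governing discrimination); the Python below is illustrative, not the estimation software described:

```python
# PARELLA response function as commonly stated:
# P(agree) = 1 / (1 + |theta - delta|^(2*gamma)),
# so agreement is most likely when person theta is close to item delta.
import numpy as np

def parella_p(theta, delta, gamma=1.0):
    return 1.0 / (1.0 + np.abs(theta - delta) ** (2.0 * gamma))

thetas = np.linspace(-3, 3, 7)
print([round(parella_p(t, delta=0.0, gamma=1.0), 3) for t in thetas])
```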

4.
5.
6.
At least two types of models, the vector model and the unfolding model, can be used for the analysis of dichotomous choice data taken from, for example, the pick any/n method. Previous vector threshold models have difficulty with the estimation of nuisance parameters such as the individual vectors and thresholds. This paper proposes a new probabilistic vector threshold model in which, unlike the former vector models, the angle that defines an individual vector is a random variable, and the marginal maximum likelihood estimation method using the expectation-maximization algorithm is adopted to avoid incidental parameters. The paper also discusses which of the two models is more appropriate to account for dichotomous choice data. Two sets of dichotomous choice data are analyzed by the model.
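As a hedged illustration of the model's core idea (not the paper's estimation procedure), the following Python simulation draws random individual angles and marks a stimulus as picked when its projection on the resulting vector exceeds a threshold; all coordinates and parameter values are hypothetical:

```python
# Simulating a probabilistic vector-threshold model for pick-any data:
# the individual's angle is a random variable, and stimulus j is picked
# when its projection on the resulting unit vector exceeds a threshold.
import numpy as np

rng = np.random.default_rng(0)
stimuli = rng.normal(size=(5, 2))          # hypothetical 2-D stimulus coordinates
mu_angle, kappa, tau = 0.8, 8.0, 0.3       # mean angle, concentration, threshold

angles = rng.vonmises(mu_angle, kappa, size=10000)  # random individual angles
vectors = np.column_stack([np.cos(angles), np.sin(angles)])
picked = vectors @ stimuli.T > tau         # does the projection exceed tau?
print("marginal pick probabilities:", picked.mean(axis=0).round(3))
```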

7.
A ratio scale of subjective magnitude is developed from paired-comparison data. The model attempts to combine arguments of Restle, Ekman, and Luce relating data obtained from paired comparisons and corresponding data from direct psychophysical scaling methods at the ratio level. The basic set-theoretical model involves the use of imperfectly nested sets. A numerical example illustrates the application of the theory. This research was supported by the Swedish Social Science Research Council and by the Swedish Board of Computing Machinery. I am indebted to Mr. U. Forsberg for computational assistance.
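A sketch of the general Ekman/Luce-style idea the paper builds on (not the nested-set model itself): under a ratio rule, v_a / v_b = p(a,b) / p(b,a), so ratio-scale values can be recovered from choice proportions. The Python below uses hypothetical, internally consistent proportions:

```python
# Recovering ratio-scale values from paired-comparison proportions
# under a BTL-type ratio rule: v_i / v_j = p[i, j] / p[j, i].
import numpy as np

p = np.array([[0.50, 0.75, 0.90],   # p[i, j] = proportion choosing i over j
              [0.25, 0.50, 0.70],   # (hypothetical data)
              [0.10, 0.30, 0.50]])

ratios = p / p.T                    # implied pairwise scale ratios
v = np.exp(np.log(ratios).mean(axis=1))  # geometric-mean aggregation
print("ratio-scale values (item 0 = 1):", (v / v[0]).round(3))
```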

8.
We report a formal model of transitive inference based on protocols from experiments on squirrel monkeys solving the 5-term series problem (McGonigle & Chalmers, 1977, 1992). These studies generate databases featuring transitive choice, task transfer (where at first a significant decrement is observed, and later substantial improvement without explicit training), and, finally, a Symbolic Distance Effect (SDE) based on decision-time data. Using a rule-based (production) system, we first established rule stacks at the group, then at the individual level, on the basis of triadic transfer performance first recorded in the McGonigle and Chalmers (1977) study. The models for each subject then accommodated data from the more intensive, later study with the same subjects (McGonigle & Chalmers, 1992). We found the initial model capable of dealing with all choice and reaction-time phenomena reported thus far, with only small changes in a rule search procedure. In common with an independent assay by McGonigle and Chalmers (1992), our model-based reassessment of decision times indicates that a major source of reaction time variation is item prominence in the rule stack rather than interitem (ordinal) distance per se.
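A toy Python sketch of this rule-stack style of model (the rules here are hypothetical, not the authors' actual productions): rules are scanned top-down, the first applicable rule decides, and the number of rules scanned can stand in for decision time, reflecting item prominence in the stack rather than ordinal distance:

```python
# Toy rule-stack chooser for a 5-term series A > B > C > D > E.
RULES = [("avoid", "E"), ("choose", "A"), ("choose", "B"), ("avoid", "D")]

def choose(pair):
    """Scan the stack top-down; the first applicable rule decides.
    The step count is a stand-in for decision time (item prominence)."""
    for steps, (action, item) in enumerate(RULES, start=1):
        if action == "choose" and item in pair:
            return item, steps
        if action == "avoid" and item in pair and len(pair) == 2:
            other = [x for x in pair if x != item][0]
            return other, steps

print(choose(("B", "D")))   # -> ('B', 3): "choose B" fires on the third scan
print(choose(("C", "D")))   # -> ('C', 4): decided by "avoid D"
```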

9.
An objective technique for estimating the kinetics of dark adaptation is presented, with which one can evaluate models with multiple parameters, evaluate several models of dark adaptation simultaneously, and rapidly analyze large data sets. Another advantage is the ability to simultaneously estimate transition times and rates of sensitivity recovery. Finally, this nonlinear regression technique does not require that the distributional properties of the data be transformed, and thus, parameter estimates are in meaningful units and reflect the actual rate of recovery of sensitivity.
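A hedged sketch of the approach (not the authors' code): fit a piecewise recovery function in which the recovery rates and the transition time are estimated jointly by nonlinear regression. The data and parameter values below are simulated:

```python
# Jointly estimating recovery rates and a transition time (e.g., the
# rod-cone break) by nonlinear least squares on simulated thresholds.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, s0, rate1, t_break, rate2):
    """Log threshold: slope rate1 before the break, rate2 after."""
    return np.where(t < t_break,
                    s0 + rate1 * t,
                    s0 + rate1 * t_break + rate2 * (t - t_break))

t = np.linspace(0, 30, 60)                            # minutes in the dark
y = recovery(t, 4.0, -0.30, 10.0, -0.08) \
    + np.random.default_rng(1).normal(0, 0.05, t.size)

params, _ = curve_fit(recovery, t, y, p0=[3.0, -0.2, 8.0, -0.05])
print("s0=%.2f rate1=%.3f break=%.1f min rate2=%.3f" % tuple(params))
```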

10.
Multilevel modeling provides the ability to simultaneously evaluate the discounting of individuals and groups by examining choices between smaller sooner and larger later rewards. A multilevel logistic regression approach is advocated in which sensitivity to relative reward magnitude and relative delay are considered as separate contributors to choice. Examples of how to fit choice data using multilevel logistic models are provided to help researchers in the adoption of these methods.
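A sketch of the advocated approach, assuming statsmodels' Bayesian mixed GLM as one way to fit a multilevel logistic model in Python; the data are simulated, with relative magnitude and relative delay as separate predictors and subject-level random intercepts:

```python
# Multilevel logistic model of intertemporal choice (simulated data):
# P(choose larger-later) ~ relative magnitude + relative delay,
# with random intercepts for subjects.
import numpy as np, pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(2)
n = 600
df = pd.DataFrame({
    "subject": rng.integers(0, 20, n).astype(str),
    "log_mag_ratio": rng.uniform(0.1, 1.5, n),    # log(larger / smaller reward)
    "log_delay_ratio": rng.uniform(0.1, 3.0, n),  # log(later / sooner delay)
})
logit = 1.5 * df.log_mag_ratio - 0.8 * df.log_delay_ratio
df["choice"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = BinomialBayesMixedGLM.from_formula(
    "choice ~ log_mag_ratio + log_delay_ratio",
    {"subject": "0 + C(subject)"}, df)
print(model.fit_vb().summary())
```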

11.
A lexicographic rule orders multi-attribute alternatives in the same way as a dictionary orders words. Although no utility function can represent lexicographic preference over continuous, real-valued attributes, a constrained linear model suffices for representing such preferences over discrete attributes. We present an algorithm for inferring lexicographic structures from choice data. The primary difficulty in using such data is that it is seldom possible to obtain sufficient information to estimate individual-level preference functions. Instead, one needs to pool the data across latent clusters of individuals. We propose a method that identifies latent clusters of subjects, and estimates a lexicographic rule for each cluster. We describe an application of the method using data collected by a manufacturer of television sets. We compare the predictions of the model with those obtained from a finite-mixture, multinomial-logit model.
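A minimal sketch of the lexicographic rule itself (cluster identification and estimation are beyond this illustration; the attribute order and level rankings below are hypothetical): compare alternatives attribute by attribute in importance order and decide at the first attribute on which they differ.

```python
# Lexicographic choice over discrete attributes: the most important
# differing attribute decides, like alphabetical order in a dictionary.
ATTR_ORDER = ["brand", "screen", "price"]              # most to least important
LEVEL_RANK = {                                         # lower rank = preferred
    "brand":  {"A": 0, "B": 1},
    "screen": {"large": 0, "small": 1},
    "price":  {"low": 0, "high": 1},
}

def lex_choice(x, y):
    for attr in ATTR_ORDER:
        rx, ry = LEVEL_RANK[attr][x[attr]], LEVEL_RANK[attr][y[attr]]
        if rx != ry:
            return x if rx < ry else y
    return x  # identical on all attributes: tie-break arbitrarily

tv1 = {"brand": "B", "screen": "large", "price": "low"}
tv2 = {"brand": "B", "screen": "small", "price": "low"}
print(lex_choice(tv1, tv2))  # decided by "screen": tv1 wins
```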

12.
13.
A contextual model of concurrent-chains choice
An extension of the generalized matching law incorporating context effects on terminal-link sensitivity is proposed as a quantitative model of behavior under concurrent chains. The contextual choice model makes many of the same qualitative predictions as the delay-reduction hypothesis, and assumes that the crucial contextual variable in concurrent chains is the ratio of average times spent, per reinforcement, in the terminal and initial links; this ratio controls differential effectiveness of terminal-link stimuli as conditioned reinforcers. Ninety-two concurrent-chains data sets from 19 published studies were fitted to the model. Averaged across all studies, the model accounted for 90% of the variance in pigeons' relative initial-link responding. The model therefore demonstrates that a matching law analysis of concurrent chains (the assumption that relative initial-link responding equals relative terminal-link value) remains quantitatively viable. Because the model reduces to the generalized matching law when terminal-link duration is zero, it provides a quantitative integration of concurrent schedules and concurrent chains.
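A hedged sketch of one commonly cited form of the contextual choice model (Grace, 1994); the function and parameter names below are illustrative:

```python
# One commonly cited form of the contextual choice model:
# response ratio = bias * (reinforcement-rate ratio)^a1
#                       * (terminal-link value ratio)^(a2 * Tt/Ti),
# where Tt/Ti is the ratio of average terminal- to initial-link time.
def ccm_response_ratio(r1, r2, v1, v2, tt_over_ti, b=1.0, a1=1.0, a2=1.0):
    return b * (r1 / r2) ** a1 * (v1 / v2) ** (a2 * tt_over_ti)

# Longer terminal links (larger Tt/Ti) amplify terminal-link value differences:
for tt_over_ti in (0.2, 1.0, 3.0):
    print(tt_over_ti, round(ccm_response_ratio(1, 1, 2, 1, tt_over_ti), 2))
```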

14.
A modular package of computer programs is described. The package is designed for parameter fitting in psychology, and includes a program for plotting the fitted curves.
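A generic sketch of the fit-then-plot workflow such a package supports (illustrative Python, not the package described):

```python
# Fit a model to data by nonlinear least squares, then plot data and fit.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * (1 - np.exp(-b * x))      # example: exponential learning curve

x = np.linspace(0, 10, 25)
y = model(x, 0.9, 0.5) + np.random.default_rng(3).normal(0, 0.03, x.size)

params, _ = curve_fit(model, x, y, p0=[1.0, 1.0])
plt.scatter(x, y, label="data")
plt.plot(x, model(x, *params), label="fit: a=%.2f, b=%.2f" % tuple(params))
plt.legend(); plt.show()
```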

15.
This article presents the current state of a work in progress, whose objective is to better understand the effects of factors that significantly influence the performance of latent semantic analysis (LSA). A difficult task, which consisted of answering (French) biology multiple choice questions, was used to test the semantic properties of the truncated singular space and to study the relative influence of the main parameters. Dedicated software was designed to fine-tune the LSA semantic space for the multiple choice questions task. With optimal parameters, the performance of our simple model was, surprisingly, equal or superior to that of seventh- and eighth-grade students. This indicates that the semantic spaces were quite good despite their low dimensions and the small sizes of the training data sets. In addition, we present an original entropy global weighting of the answers’ terms for each of the multiple choice questions, which was necessary to achieve the model’s success.
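A sketch of the general pipeline described: entropy-based global term weighting, truncated SVD, and cosine similarity between a question and each candidate answer. The toy English corpus below stands in for the French biology materials:

```python
# Log-entropy weighting + truncated SVD + cosine similarity for MCQ answering.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

corpus = ["cells contain a nucleus", "the nucleus holds dna",
          "plants use photosynthesis", "photosynthesis needs light"]
vectorizer = CountVectorizer().fit(corpus)
X = vectorizer.transform(corpus).toarray().astype(float)

# log-entropy global weighting: down-weight terms spread evenly over documents
p = X / np.maximum(X.sum(axis=0), 1.0)
plogp = np.zeros_like(p)
plogp[p > 0] = p[p > 0] * np.log(p[p > 0])
g = 1.0 + plogp.sum(axis=0) / np.log(len(corpus))
svd = TruncatedSVD(n_components=2).fit(np.log(X + 1.0) * g)

def vec(text):
    counts = vectorizer.transform([text]).toarray()
    return svd.transform(np.log(counts + 1.0) * g)[0]

q = vec("what does the nucleus hold")
for answer in ["dna", "light"]:           # pick the answer most similar to q
    a = vec(answer)
    cos = a @ q / (np.linalg.norm(a) * np.linalg.norm(q) + 1e-12)
    print(answer, round(cos, 3))
```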

16.
A subject in a two-choice situation characteristically makes several observing responses before performing the final choice. This behavior can be described by means of a random walk model. The present paper explores some possibilities as to how this model can be extended to include choice time. The assumption is made that the duration of each step in the random walk is a random variable which is exponentially distributed. With this assumption, one can predict the probability distributions of the choice times as well as the moments of these distributions. The author gratefully acknowledges his debt to W. K. Estes and C. J. Burke. This study was initiated while the author held a USPHS postdoctoral fellowship at Indiana University.
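A simulation sketch of the model described: a random walk between two absorbing barriers whose step durations are exponentially distributed, yielding choice probabilities and choice-time distributions jointly. Parameter values are arbitrary:

```python
# Random walk with exponentially distributed step durations:
# the walk's absorption gives the choice, the summed durations the choice time.
import numpy as np

rng = np.random.default_rng(4)

def trial(p_up=0.55, barrier=5, rate=10.0):
    pos, t = 0, 0.0
    while abs(pos) < barrier:
        pos += 1 if rng.random() < p_up else -1
        t += rng.exponential(1.0 / rate)     # step duration ~ Exp(rate)
    return (pos > 0), t

results = [trial() for _ in range(5000)]
choices = np.array([c for c, _ in results])
times = np.array([t for _, t in results])
print("P(choose A) = %.3f" % choices.mean())
print("mean choice time: A = %.3f s, B = %.3f s"
      % (times[choices].mean(), times[~choices].mean()))
```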

17.
18.
Multimethod factor scores were derived from measures of ACT aptitude, ACT nonacademic achievement, and the Omnibus Personality Inventory. A sample of 89 students whose freshman major was engineering and whose junior-year majors spanned a variety of nonengineering fields represented individuals who had made an unrealistic vocational choice as freshmen. The junior-year majors of these students were classified by Holland's theory of vocational choice, and the relationship between the factor scores and Holland categories was shown by the technique of spatial configuration. These data were employed to illustrate how counseling practice could be integrated with vocational theory.

19.
Almost all models of response time (RT) use a stochastic accumulation process. To account for the benchmark RT phenomena, researchers have found it necessary to include between-trial variability in the starting point and/or the rate of accumulation, both in linear (R. Ratcliff & J. N. Rouder, 1998) and nonlinear (M. Usher & J. L. McClelland, 2001) models. The authors show that a ballistic (deterministic within-trial) model using a simplified version of M. Usher and J. L. McClelland's (2001) nonlinear accumulation process with between-trial variability in accumulation rate and starting point is capable of accounting for the benchmark behavioral phenomena. The authors successfully fit their model to R. Ratcliff and J. N. Rouder's (1998) data, which exhibit many of the benchmark phenomena.
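An illustrative simulation in the spirit of this model class: accumulation is deterministic within a trial (truncation at zero supplying the nonlinearity), with between-trial variability in starting point and rate only. All parameter values are hypothetical:

```python
# Ballistic accumulator: no within-trial noise; RT variability comes
# entirely from trial-to-trial variation in start points and rates.
import numpy as np

rng = np.random.default_rng(5)

def ballistic_trial(threshold=1.0, lam=0.2, dt=0.001, t_max=5.0):
    x = rng.uniform(0.0, 0.3, size=2)        # between-trial start-point variability
    v = rng.normal([1.0, 0.8], 0.3)          # between-trial rate variability
    t = 0.0
    while t < t_max:
        # deterministic growth; truncation at zero supplies the nonlinearity
        x = np.maximum(x + (v + lam * x) * dt, 0.0)
        t += dt
        if x.max() >= threshold:
            return int(np.argmax(x)), t
    return -1, t_max                         # no response within t_max

outcomes = [ballistic_trial() for _ in range(2000)]
rts = np.array([t for r, t in outcomes if r == 0])
print("P(resp 0) = %.3f, mean RT = %.3f s" % (len(rts) / len(outcomes), rts.mean()))
```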

20.
There are many ways in which to estimate thresholds from psychometric functions. However, almost nothing is known about the relationships between these estimates. In the present experiment, Monte Carlo techniques were used to compare psychometric thresholds obtained using six methods. Three psychometric functions were simulated using Naka-Rushton and Weibull functions and a probit/logit function combination. Thresholds were estimated using probit, logit, and normit analyses and least-squares regressions of untransformed or z-score- and logit-transformed probabilities versus stimulus strength. Histograms were derived from 100 thresholds using each of the six methods for various sampling strategies of each psychometric function. Thresholds from probit, logit, and normit analyses were remarkably similar. Thresholds from z-score- and logit-transformed regressions were more variable, and linear regression produced biased threshold estimates under some circumstances. Considering the similarity of thresholds, the speed of computation, and the ease of implementation, logit and normit analyses provide effective alternatives to the current "gold standard" of probit analysis for the estimation of psychometric thresholds.
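As a sketch of one of the compared methods, the following Python simulates binomial data from a Weibull psychometric function and estimates the threshold by logit analysis (a binomial GLM on stimulus strength):

```python
# Simulate a Weibull psychometric function, then estimate the threshold
# by maximum-likelihood logit analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
x = np.linspace(0.2, 2.0, 8)                 # stimulus strengths
p_true = 1 - np.exp(-(x / 1.0) ** 2)         # Weibull psychometric function
n = 50
k = rng.binomial(n, p_true)                  # "yes" responses per level

X = sm.add_constant(x)
fit = sm.GLM(np.column_stack([k, n - k]), X,
             family=sm.families.Binomial()).fit()
b0, b1 = fit.params
print("logit threshold (p = 0.5): %.3f" % (-b0 / b1))
```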
