Similar documents
20 similar documents found (search time: 46 ms)
1.
We study various axioms of discrete probabilistic choice, measuring how restrictive they are, both alone and in the presence of other axioms, given a specific class of prior distributions over a complete collection of finite choice probabilities. We do this by using Monte Carlo simulation to compute, for a range of prior distributions, probabilities that various simple and compound axioms hold. For example, the probability of the triangle inequality is usually many orders of magnitude higher than the probability of random utility. While neither the triangle inequality nor weak stochastic transitivity imply the other, the conditional probability that one holds given the other holds is greater than the marginal probability, for all priors in the class we consider. The reciprocal of the prior probability that an axiom holds is an upper bound on the Bayes factor in favor of a restricted model, in which the axiom holds, against an unrestricted model. The relatively high prior probability of the triangle inequality limits the degree of support that data from a single decision maker can provide in its favor. The much lower probability of random utility implies that the Bayes factor in favor of it can be much higher, for suitable data.
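A minimal Monte Carlo sketch of the approach described above, under a uniform prior over the three binary choice probabilities for alternatives a, b, c (the paper uses a richer class of priors; the uniform prior and three-alternative setup here are illustrative assumptions, not the authors' exact design):

```python
import random

ITEMS = ("a", "b", "c")

def triangle_inequality(p):
    # Triangle inequality: p(x, z) <= p(x, y) + p(y, z) for all distinct x, y, z
    return all(p[(x, z)] <= p[(x, y)] + p[(y, z)]
               for x in ITEMS for y in ITEMS for z in ITEMS
               if len({x, y, z}) == 3)

def weak_stochastic_transitivity(p):
    # WST: p(x, y) >= 1/2 and p(y, z) >= 1/2 together imply p(x, z) >= 1/2
    return all(not (p[(x, y)] >= 0.5 and p[(y, z)] >= 0.5) or p[(x, z)] >= 0.5
               for x in ITEMS for y in ITEMS for z in ITEMS
               if len({x, y, z}) == 3)

random.seed(0)
n = 100_000
ti = wst = both = 0
for _ in range(n):
    pab, pbc, pac = random.random(), random.random(), random.random()
    p = {("a", "b"): pab, ("b", "a"): 1 - pab,
         ("b", "c"): pbc, ("c", "b"): 1 - pbc,
         ("a", "c"): pac, ("c", "a"): 1 - pac}
    t, w = triangle_inequality(p), weak_stochastic_transitivity(p)
    ti += t
    wst += w
    both += t and w
print(f"P(TI) ~= {ti / n:.3f}  P(WST) ~= {wst / n:.3f}  P(TI | WST) ~= {both / wst:.3f}")
```

The Monte Carlo estimates of the marginal and conditional probabilities can then be compared, and the reciprocal of an estimated prior probability gives the Bayes-factor bound the abstract describes.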

2.
It is shown that a unidimensional monotone latent variable model for binary items implies a restriction on the relative sizes of item correlations: The negative logarithm of the correlations satisfies the triangle inequality. This inequality is not implied by the condition that the correlations are nonnegative, the criterion that coefficient H exceeds 0.30, or manifest monotonicity. The inequality implies both a lower bound and an upper bound for each correlation between two items, based on the correlations of those two items with every possible third item. It is discussed how this can be used in Mokken’s (A theory and procedure of scale-analysis, Mouton, The Hague, 1971) scale analysis.
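The implied bounds can be checked directly: writing d_ij = -log r_ij, the triangle inequality d_ij <= d_ik + d_kj translates into r_ik * r_jk <= r_ij <= min(r_ik / r_jk, r_jk / r_ik). A small sketch with a hypothetical 3-item correlation matrix (the numbers are invented for illustration):

```python
def log_triangle_violations(R, tol=1e-12):
    """Return (i, j, k, lower, r_ij, upper) tuples where the correlation
    matrix R breaks the bounds implied by a unidimensional monotone
    latent variable model: r_ik * r_jk <= r_ij <= min(r_ik/r_jk, r_jk/r_ik)."""
    n = len(R)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(n):
                if k in (i, j):
                    continue
                lower = R[i][k] * R[j][k]
                upper = min(R[i][k] / R[j][k], R[j][k] / R[i][k])
                if not (lower - tol <= R[i][j] <= upper + tol):
                    out.append((i, j, k, lower, R[i][j], upper))
    return out

# Hypothetical 3-item matrix: all correlations are nonnegative, yet every
# pair violates a bound (e.g. r_01 = 0.2 < r_02 * r_12 = 0.36), showing
# that nonnegativity alone does not imply the inequality.
R_bad = [[1.0, 0.2, 0.6],
         [0.2, 1.0, 0.6],
         [0.6, 0.6, 1.0]]
print(len(log_triangle_violations(R_bad)))  # 3 violated pairs
```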

3.
Self-reflecting signed orders were proposed as an aid in estimating preference between subsets of items on the basis of limited information. The data of a signed order are preference comparisons that consider the desirability of excluding items from a subset as well as including items in a subset. This paper investigates signed orders within the setting of probabilistic choice and random utility theory. It focuses on linear signed order polytopes and their relationship to binary choice probabilities. The theory applies both to individual choice behavior and to group decision making. Copyright 2001 Academic Press.

4.
A new probabilistic losses questionnaire as well as Kirby's delayed gains questionnaire and a previously developed delayed losses questionnaire were administered to a large online sample. Almost all participants showed the positive discounting choice pattern expected on the Kirby questionnaire, decreasing their choice of a delayed gain as time to its receipt increased. In contrast, approximately 15% of the participants showed negative discounting on the delayed losses questionnaire and/or the probabilistic losses questionnaire, decreasing their choice of an immediate loss as time to a delayed loss decreased and/or decreasing their choice of a certain loss as likelihood of the probabilistic loss increased. Mixture model analysis confirmed the existence of these negative discounting subgroups. The inconsistent findings observed in previous research involving delayed/probabilistic losses may be due to differences in the proportion of negative discounters who participated in previous studies. Further research is needed to determine how negative discounting of delayed and probabilistic losses manifests itself in everyday decisions. It should be noted that the presence of individuals who show atypical choice patterns when losses are involved may pose challenges for efforts to modify discounting in order to ameliorate behavioral problems, especially because many such problems concern choices that have negative consequences, often delayed and/or probabilistic.

5.
A probabilistic model for choice, and preference, is introduced that includes (Tversky's) elimination by aspects model, and the random utility model, as special cases. The model is based on a covert sequential elimination process, the element that is finally chosen in a simple choice experiment being the eventual lone survivor of the elimination process. The model leads us to question the usual form of simple choice experiments, in which a subject must (eventually) choose one of the currently available alternatives, and to suggest that a much more realistic experimental design would allow the subject the no-choice option, i.e., he may refuse to accept any of the currently available alternatives.

6.
7.
We conceptualize probabilistic choice as the result of the simultaneous pursuit of multiple goals in a vector optimization representation, which is reduced to a scalar optimization that implies goal balancing. The majority of prior theoretical and empirical work on such probabilistic choice is based on random utility models, the most basic of which assume that each choice option has a valuation that has a deterministic (systematic) component plus a random component determined by some specified distribution. An alternate approach to probabilistic choice has considered maximization of one quantity (e.g., utility), subject to constraints on one or more other quantities (e.g., cost). The multiple goal perspective integrates the results regarding the well-studied multinomial logit model of probabilistic choice that has been derived from each of the above approaches; extends the results to other models in the generalized extreme value (GEV) class; and relates them to recent axiomatic work on the utility of gambling.
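The multinomial logit model that both derivations lead to fits in a few lines; in the sketch below, the `temperature` parameter plays the role of the goal-balancing weight on entropy (equivalently, the scale of the Gumbel noise in the random utility derivation):

```python
import math

def mnl_choice_probs(values, temperature=1.0):
    """Multinomial logit choice probabilities. The same formula arises from
    (a) a random utility model with i.i.d. Gumbel noise on each valuation and
    (b) maximizing expected value plus `temperature` times the entropy of the
    choice distribution (the goal-balancing view)."""
    m = max(v / temperature for v in values)           # stabilize the exponentials
    expv = [math.exp(v / temperature - m) for v in values]
    z = sum(expv)
    return [e / z for e in expv]

probs = mnl_choice_probs([1.0, 2.0, 3.0])
# Higher-valued options are chosen more often, but choice stays probabilistic;
# as temperature -> 0 the rule approaches deterministic maximization.
print(probs)
```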

8.
A binary detection task, free from sensory components, is investigated. A deterministic model prescribing a fixed cutoff point is confirmed; a probabilistic model, which generalizes Lee’s micromatching model for externally distributed stimuli, is rejected.

9.
The question of whether humans represent grammatical knowledge as a binary condition on membership in a set of well‐formed sentences, or as a probabilistic property has been the subject of debate among linguists, psychologists, and cognitive scientists for many decades. Acceptability judgments present a serious problem for both classical binary and probabilistic theories of grammaticality. These judgements are gradient in nature, and so cannot be directly accommodated in a binary formal grammar. However, it is also not possible to simply reduce acceptability to probability. The acceptability of a sentence is not the same as the likelihood of its occurrence, which is, in part, determined by factors like sentence length and lexical frequency. In this paper, we present the results of a set of large‐scale experiments using crowd‐sourced acceptability judgments that demonstrate gradience to be a pervasive feature in acceptability judgments. We then show how one can predict acceptability judgments on the basis of probability by augmenting probabilistic language models with an acceptability measure. This is a function that normalizes probability values to eliminate the confounding factors of length and lexical frequency. We describe a sequence of modeling experiments with unsupervised language models drawn from state‐of‐the‐art machine learning methods in natural language processing. Several of these models achieve very encouraging levels of accuracy in the acceptability prediction task, as measured by the correlation between the acceptability measure scores and mean human acceptability values. We consider the relevance of these results to the debate on the nature of grammatical competence, and we argue that they support the view that linguistic knowledge can be intrinsically probabilistic.
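One widely used normalization of this kind is the syntactic log-odds ratio (SLOR), which subtracts a unigram (lexical frequency) log-probability from the model log-probability and divides by sentence length; the paper's exact measure may differ, and the numeric log-probabilities below are hypothetical:

```python
def slor(sentence_logprob, unigram_logprob, length):
    """Syntactic log-odds ratio: remove the lexical-frequency and
    sentence-length confounds from a raw language model score."""
    return (sentence_logprob - unigram_logprob) / length

# Hypothetical log-probabilities: two sentences with the same raw model
# score, where the longer sentence built from rarer words receives the
# higher (more acceptable) normalized score.
short_common = slor(-30.0, -35.0, 6)
long_rare = slor(-30.0, -42.0, 8)
print(short_common, long_rare)
```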

10.
After more than 50 years of probabilistic choice modeling in economics, marketing, political science, psychology, and related disciplines, theoretical and computational advances give scholars access to a sophisticated array of modeling and inference resources. We review some important, but perhaps often overlooked, properties of major classes of probabilistic choice models. For within‐respondent applications, we discuss which models require repeated choices by an individual to be independent and response probabilities to be stationary. We show how some model classes, but not others, are invariant over variable preferences, variable utilities, or variable choice probabilities. These models, but not others, accommodate pooling of responses or averaging of choice proportions within participant when underlying parameters vary across observations. These, but not others, permit pooling/averaging across respondents in the presence of individual differences. We also review the role of independence and stationarity in statistical inference, including for probabilistic choice models that, themselves, do not require those properties. Copyright © 2017 John Wiley & Sons, Ltd.

11.
Identification Model Based on the Maximum Information Entropy Principle
A new theoretical approach to stimulus identification is proposed through a probabilistic multidimensional model based on the maximum information entropy principle. The approach enables us to derive the multidimensional scaling (MDS) choice model, without appealing to Luce's choice rule and without defining a similarity function. It also clarifies the relationship between the MDS choice model and the optimal version of the identification model based on Ashby's general recognition theory; it is shown theoretically that the identification model derived from the new approach includes these two models as special cases. Finally, as an application of our approach, a model of similarity judgment is proposed and compared with Ashby's extended similarity model. Copyright 2001 Academic Press.

12.
Analysis of binary choice behavior in iterated tasks with immediate feedback reveals robust deviations from maximization that can be described as indications of 3 effects: (a) a payoff variability effect, in which high payoff variability seems to move choice behavior toward random choice; (b) underweighting of rare events, in which alternatives that yield the best payoffs most of the time are attractive even when they are associated with a lower expected return; and (c) loss aversion, in which alternatives that minimize the probability of losses can be more attractive than those that maximize expected payoffs. The results are closer to probability matching than to maximization. Best approximation is provided with a model of reinforcement learning among cognitive strategies (RELACS). This model captures the 3 deviations, the learning curves, and the effect of information on uncertainty avoidance. It outperforms other models in fitting the data and in predicting behavior in other experiments.
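A toy value-learning simulation (much simpler than RELACS, and only an illustration of the payoff variability effect; all parameter values are invented) shows how higher payoff variance moves choice proportions away from maximization and toward random choice:

```python
import math
import random

def simulate_binary_choice(mean_a, mean_b, noise_sd, trials=2000,
                           lr=0.1, temp=0.2, seed=0):
    """Toy sketch, not the RELACS model: two options with mean payoffs
    mean_a > mean_b, each realized payoff perturbed by Gaussian noise.
    Returns the proportion of choices of the better option A."""
    rng = random.Random(seed)
    q = [0.0, 0.0]                 # learned values for options A and B
    means = [mean_a, mean_b]
    picks_a = 0
    for _ in range(trials):
        p_a = 1.0 / (1.0 + math.exp(-(q[0] - q[1]) / temp))  # softmax choice
        choice = 0 if rng.random() < p_a else 1
        picks_a += (choice == 0)
        reward = means[choice] + rng.gauss(0, noise_sd)
        q[choice] += lr * (reward - q[choice])               # delta-rule update
    return picks_a / trials

low_var = simulate_binary_choice(1.0, 0.0, noise_sd=0.1)
high_var = simulate_binary_choice(1.0, 0.0, noise_sd=5.0)
print(low_var, high_var)  # choice moves toward 0.5 under high payoff variability
```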

13.
General goodness of fit tests for probabilistic response models are developed. The tests are applicable in psychophysics, in the theory of choice behavior and in mathematical learning theories. The necessary and sufficient constraints that a measurement model puts on the response probabilities are used for testing this model. In addition, representation theorems for some models are proved and the goodness of fit to experimental data is considered.

14.
At least two types of models, the vector model and the unfolding model, can be used for the analysis of dichotomous choice data taken from, for example, the pick any/n method. Previous vector threshold models have difficulty estimating nuisance parameters such as the individual vectors and thresholds. This paper proposes a new probabilistic vector threshold model in which, unlike the former vector models, the angle that defines an individual vector is a random variable, and in which the marginal maximum likelihood estimation method using the expectation-maximization algorithm is adopted to avoid incidental parameters. The paper also discusses which of the two models is more appropriate for accounting for dichotomous choice data. Two sets of dichotomous choice data are analyzed with the model.

15.
The authors introduce subset conjunction as a classification rule by which an acceptable alternative must satisfy some minimum number of criteria. The rule subsumes conjunctive and disjunctive decision strategies as special cases. Subset conjunction can be represented in a binary-response model, for example, in a logistic regression, using only main effects or only interaction effects. This results in a confounding of the main and interaction effects when there is little or no response error. With greater response error, a logistic regression, even if it gives a good fit to data, can produce parameter estimates that do not reflect the underlying decision process. The authors propose a model in which the binary classification of alternatives into acceptable/unacceptable categories is based on a probabilistic implementation of a subset-conjunctive process. The satisfaction of decision criteria biases the odds toward one outcome or the other. The authors then describe a two-stage choice model in which a (possibly large) set of alternatives is first reduced using a subset-conjunctive rule, after which an alternative is selected from this reduced set of items. They describe methods for estimating the unobserved consideration probabilities from classification and choice data, and illustrate the use of the models for cancer diagnosis and consumer choice. They report the results of simulations investigating estimation accuracy, incidence of local optima, and model fit. The authors thank the Editor, the Associate Editor, and three anonymous reviewers for their constructive suggestions, and also thank Asim Ansari and Raghuram Iyengar for their helpful comments. They also thank Sawtooth Software, McKinsey and Company, and Intelliquest for providing the PC choice data, and the University of Wisconsin for making the breast-cancer data available at the machine learning archives.
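The deterministic core of the subset-conjunctive rule is a one-liner: accept an alternative iff at least m of its n binary criteria are satisfied, with m = n recovering the pure conjunctive rule and m = 1 the disjunctive rule. A minimal sketch (the threshold and criteria below are hypothetical; the paper's full model adds a probabilistic response layer on top of this rule):

```python
def subset_conjunctive(criteria_met, m):
    """Subset conjunction: accept an alternative iff at least m of its
    binary criteria are satisfied. m == len(criteria_met) is the pure
    conjunctive rule; m == 1 is the disjunctive rule."""
    return sum(bool(c) for c in criteria_met) >= m

# Hypothetical screening on three criteria with threshold m = 2.
print(subset_conjunctive([True, True, False], 2))   # True: 2 of 3 criteria met
print(subset_conjunctive([True, False, False], 2))  # False: only 1 criterion met
```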

16.
A common practice in cognitive modeling is to develop new models specific to each particular task. We question this approach and draw on an existing theory, instance‐based learning theory (IBLT), to explain learning behavior in three different choice tasks. The same instance‐based learning model generalizes accurately to choices in a repeated binary choice task, in a probability learning task, and in a repeated binary choice task within a changing environment. We assert that, although the three tasks are different, the source of learning is equivalent and therefore, the cognitive process elicited should be captured by one single model. This evidence supports previous findings that instance‐based learning is a robust learning process that is triggered in a wide range of tasks from the simple repeated choice tasks to the most dynamic decision making tasks. Copyright © 2010 John Wiley & Sons, Ltd.

17.
18.
Hierarchical classes models are models for N-way N-mode data that represent the association among the N modes and simultaneously yield, for each mode, a hierarchical classification of its elements. In this paper we present a stochastic extension of the hierarchical classes model for two-way two-mode binary data. In line with the original model, the new probabilistic extension still represents both the association among the two modes and the hierarchical classifications. A fully Bayesian method for fitting the new model is presented and evaluated in a simulation study. Furthermore, we propose tools for model selection and model checking based on Bayes factors and posterior predictive checks. We illustrate the advantages of the new approach with applications in the domain of the psychology of choice and psychiatric diagnosis. Iwin Leenen is now at the Instituto Mexicano de Investigación de Familia y Población (IMIFAP), Mexico. The research reported in this paper was partially supported by the Spanish Ministerio de Educación y Ciencia (programa Ramón y Cajal) and by the Research Council of K.U.Leuven (PDM/99/037, GOA/2000/02, and GOA/2005/04). The authors are grateful to Johannes Berkhof for fruitful discussions.

19.
Discounting is a useful framework for understanding choice involving a range of delayed and probabilistic outcomes (e.g., money, food, drugs), but relatively few studies have examined how people discount other commodities (e.g., entertainment, sex). Using a novel discounting task, where the length of a line represented the value of an outcome and was adjusted using a staircase procedure, we replicated previous findings showing that individuals discount delayed and probabilistic outcomes in a manner well described by a hyperbola-like function. In addition, we found strong positive correlations between discounting rates of delayed, but not probabilistic, outcomes. This suggests that discounting of delayed outcomes may be relatively predictable across outcome types but that discounting of probabilistic outcomes may depend more on specific contexts. The generality of delay discounting and potential context dependence of probability discounting may provide important information regarding factors contributing to choice behavior.
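The hyperbola-like functions referred to above are commonly V = A/(1 + kD) for delay D and V = A/(1 + h*theta) for the odds against receipt theta = (1 - p)/p; a minimal sketch with hypothetical parameter values (the exact functional form fit in the study may include an additional exponent):

```python
def hyperbolic_value(amount, delay, k):
    """Hyperbola-like delay discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

def probabilistic_value(amount, p, h):
    """Probability discounting on the odds against receipt:
    V = A / (1 + h * theta), with theta = (1 - p) / p."""
    odds_against = (1.0 - p) / p
    return amount / (1.0 + h * odds_against)

# Hypothetical rates: a steeper discounter (larger k) devalues the same
# delayed reward much more.
print(hyperbolic_value(100, 30, 0.01))     # 100 / 1.3 ≈ 76.9
print(hyperbolic_value(100, 30, 0.10))     # 100 / 4.0 = 25.0
print(probabilistic_value(100, 0.5, 1.0))  # even odds, h = 1: 50.0
```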

20.
A model for preferential and triadic choice is derived in terms of weighted sums of central F distribution functions. This model is a probabilistic generalization of Coombs' (1964) unfolding model, and special cases, such as the model of Zinnes and Griggs (1974), can be derived easily from it. This new form extends previous work by Mullen and Ennis (1991) and provides more insight into the same problem that they discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号