Similar Articles
20 similar articles found (search time: 46 ms)
1.
Discrete choice experiments—selecting the best and/or worst from a set of options—are increasingly used to provide more efficient and valid measurement of attitudes or preferences than conventional methods such as Likert scales. Discrete choice data have traditionally been analyzed with random utility models that have good measurement properties but provide limited insight into cognitive processes. We extend a well-established cognitive model, which has successfully explained both choices and response times for simple decision tasks, to complex, multi-attribute discrete choice data. The fits, and parameters, of the extended model for two sets of choice data (involving patient preferences for dermatology appointments, and consumer attitudes toward mobile phones) agree with those of standard choice models. The extended model also accounts for choice and response time data in a perceptual judgment task designed in a manner analogous to best–worst discrete choice experiments. We conclude that several research fields might benefit from discrete choice experiments, and that the particular accumulator-based models of decision making used in response time research can also provide process-level instantiations for random utility models.
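The accumulator idea referenced in this abstract can be illustrated with a minimal race sketch. This is a simplified linear-ballistic-style race, not the paper's model; all parameter values are hypothetical:

```python
import random

def race_trial(drifts, threshold=1.0, start_max=0.5, drift_sd=0.3, rng=random):
    """One trial of a simplified linear-ballistic-style race: each option's
    accumulator starts at a uniform point below threshold and rises linearly
    at a normally perturbed drift rate; the first accumulator to reach the
    threshold determines both the choice and the response time."""
    best_choice, best_time = None, float("inf")
    for i, drift in enumerate(drifts):
        rate = max(rng.gauss(drift, drift_sd), 1e-6)  # keep rates positive
        start = rng.uniform(0.0, start_max)
        finish = (threshold - start) / rate
        if finish < best_time:
            best_choice, best_time = i, finish
    return best_choice, best_time

def choice_proportions(drifts, n_trials=5000, seed=1):
    """Simulate many trials and return the proportion of wins per option."""
    rng = random.Random(seed)
    wins = [0] * len(drifts)
    for _ in range(n_trials):
        choice, _ = race_trial(drifts, rng=rng)
        wins[choice] += 1
    return [w / n_trials for w in wins]
```

Options with higher drift (stronger evidence) win the race more often and faster, which is how such models yield both choice probabilities and response times.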

2.
In this paper we show how utility, lottery dependent utility, and weighted utility models can be calibrated using algebraic and statistical techniques. The models are empirically compared in laboratory (student subjects) and real settings (sales force personnel of the Los Angeles Times). In our empirical comparison we evaluate two aspects: the extent to which observed preferences are consistent with each model, and predictive accuracy of the models on a holdout sample. The results indicate that only about 20% of the observed choice patterns in our experimental design are consistent with the expected utility model, 50% with the weighted utility model, and 90% with the general lottery dependent utility model. On individual level predictions to the holdout sample, however, the expected utility model does as well as the other two models. This latter finding is robust across different measurement and estimation methods and student and non-student subjects.

3.
Most empirical models of choice in economics and consumer research assume that the decision maker assesses all alternatives and information in a perfect information-processing sense. The complexity of the choice environment, the ability of the individual to make complex decisions, and the effect of choice context on the decision strategy are generally not considered in statistical model development. One of the reasons for this omission is that theoretical literature on choice complexity and imperfect ability to choose that has developed in psychology and behavioral decision theory (BDT) literatures has not been translated into empirical econometric analysis. Second, the data used in economics and consumer research studies tend to be somewhat different from the data structures used in psychology and BDT literatures. In this paper we outline a theoretical model that simultaneously considers task complexity, effort applied by the consumer, ability to choose, and choice. We then construct a measure of task complexity and incorporate this in an analysis of a number of data series based on the random utility framework. We also examine the performance of our measure of task complexity in a composite data set that allows for increased variability in factors affecting decision context. Our approach provides a mechanism to link research in BDT and econometric modeling of consumer choice. Our findings suggest that task complexity does affect inferences about choice model parameters and that context effects, such as complexity, have a systematic impact on the parameters of econometric models of choice. The modeling approach provides a mechanism for inclusion of results from psychology and BDT in traditional economic models of consumer choice.

4.
Using the Iowa Gambling Task (Bechara et al., 1994 version), this study measured the decision-making function of 222 incarcerated male offenders across eight offense categories and 32 ordinary men, and applied a prospect utility learning model to analyze the psychological deficits underlying affective decision making in the different offender types. The offender groups did not differ significantly from controls in how often they chose deck 1, chose deck 2 significantly more often, and chose decks 3 and 4 significantly less often. Violent and gang-related offenders were insensitive to both gains and losses and discounted past expected utility rapidly; (abstinent) drug users, drug traffickers, thieves, and robbers processed reward normally but were insensitive to punishment; economic offenders showed the lowest choice consistency, and sex offenders the highest. The results indicate that all offender types exhibited decision-making deficits on the Iowa Gambling Task, but that these deficits stem from different underlying psychological impairments.

5.
We conceptualize probabilistic choice as the result of the simultaneous pursuit of multiple goals in a vector optimization representation, which is reduced to a scalar optimization that implies goal balancing. The majority of prior theoretical and empirical work on such probabilistic choice is based on random utility models, the most basic of which assume that each choice option has a valuation that has a deterministic (systematic) component plus a random component determined by some specified distribution. An alternate approach to probabilistic choice has considered maximization of one quantity (e.g., utility), subject to constraints on one or more other quantities (e.g., cost). The multiple goal perspective integrates the results regarding the well-studied multinomial logit model of probabilistic choice that has been derived from each of the above approaches; extends the results to other models in the generalized extreme value (GEV) class; and relates them to recent axiomatic work on the utility of gambling.
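The multinomial logit model this abstract refers to has a simple closed-form choice rule. A minimal sketch (the standard formula, not code from the paper):

```python
import math

def mnl_probabilities(utilities, scale=1.0):
    """Multinomial logit choice rule: P(i) = exp(s*v_i) / sum_j exp(s*v_j).
    This is the closed form obtained from a random utility model with
    i.i.d. Gumbel-distributed noise added to each systematic utility v_j."""
    exps = [math.exp(scale * v) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, `mnl_probabilities([1.0, 0.0])` assigns the first option probability e/(1+e) ≈ 0.73; the `scale` parameter controls how deterministic the choices are.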

6.
A simple model for the utility of gambling
A model of the utility of gambling is presented in a modified von Neumann-Morgenstern format. Axioms imply a utility function that preserves preferences between sure things and between gambles. The addition of a utility of gambling term to the expected utility of a gamble preserves preference comparisons between gambles and sure things. Aspects of the utility of gambling are noted, and comparisons are made to standard concepts of risk attitudes. The author is indebted to Joseph Sani for valuable discussions on the topic of this paper.

7.
This paper examines the management of foreign exchange risk in multinational corporations in light of the conclusions of previous empirical and theoretical investigations into decision making under uncertainty. Cognitive perceptions of risk and uncertainty are shown to underlie the hedging decisions made by corporate treasury managers, which are often demonstrably sub-optimal in a Bayesian expected utility framework. The findings suggest that simple principal-agent approaches to explaining seemingly sub-optimal corporate risk management preferences are inadequate inasmuch as they fail to account for the markedly different perspectives on risk and uncertainty taken by financial economists (qua economists) and corporate financial risk managers.

8.
An important reason why people deviate from expected utility is reference-dependence of preferences, implying loss aversion. Bleichrodt [Bleichrodt H. (2007). Reference-dependent utility with shifting reference points and incomplete preferences. Journal of Mathematical Psychology, 51, 266-276] argued that in the empirically realistic case where the reference point is always an element of the decision maker’s opportunity set, reference-dependent preferences have to be taken as incomplete. This incompleteness is a consequence of reference-dependence and is different in nature from the type of incompleteness usually considered in the literature. It cannot be handled by existing characterizations of reference-dependence, which all assume complete preferences. This paper presents new preference foundations that extend reference-dependent expected utility to cover this case of incompleteness caused by reference-dependence. The paper uses intuitive axioms that are easy to test. Two special cases of reference-dependent expected utility are also characterized: one model in which utility is decomposed into a normative and a psychological component and one model in which loss aversion is constant. The latter model has been frequently used in empirical research on reference-dependence.
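For reference, constant loss aversion is commonly written in a form like the following (one standard formulation; conventions differ across authors, so this should be checked against the paper itself):

```latex
% Reference-dependent value of outcome x relative to reference point r,
% with basic utility u and loss-aversion coefficient \lambda > 1:
U(x \mid r) =
\begin{cases}
u(x) - u(r) & \text{if } u(x) \ge u(r), \\
\lambda \, [\, u(x) - u(r) \,] & \text{if } u(x) < u(r).
\end{cases}
```

Losses relative to the reference point are weighted more heavily than equal-sized gains, which is the empirical signature of loss aversion.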

9.
We study various axioms of discrete probabilistic choice, measuring how restrictive they are, both alone and in the presence of other axioms, given a specific class of prior distributions over a complete collection of finite choice probabilities. We do this by using Monte Carlo simulation to compute, for a range of prior distributions, probabilities that various simple and compound axioms hold. For example, the probability of the triangle inequality is usually many orders of magnitude higher than the probability of random utility. While neither the triangle inequality nor weak stochastic transitivity imply the other, the conditional probability that one holds given the other holds is greater than the marginal probability, for all priors in the class we consider. The reciprocal of the prior probability that an axiom holds is an upper bound on the Bayes factor in favor of a restricted model, in which the axiom holds, against an unrestricted model. The relatively high prior probability of the triangle inequality limits the degree of support that data from a single decision maker can provide in its favor. The much lower probability of random utility implies that the Bayes factor in favor of it can be much higher, for suitable data.
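The Monte Carlo approach described above can be sketched for a single axiom and a single triple of options. Assuming, purely for illustration, independent uniform priors on the three binary choice probabilities (the paper considers a whole class of priors), the prior probability of the triangle inequality comes out near 2/3:

```python
import random

def triangle_inequality_holds(p_ab, p_bc, p_ac):
    """Marschak's triangle inequality for one triple of options a, b, c,
    with binary choice probabilities satisfying p(y, x) = 1 - p(x, y).
    Requiring p(x, z) <= p(x, y) + p(y, z) for every ordering of the
    triple reduces to the two-sided condition checked here."""
    return (p_ab + p_bc - 1.0) <= p_ac <= (p_ab + p_bc)

def prior_probability_triangle(n_draws=200_000, seed=0):
    """Monte Carlo estimate of the prior probability that the axiom holds
    when the three probabilities are i.i.d. uniform on [0, 1]."""
    rng = random.Random(seed)
    hits = sum(
        triangle_inequality_holds(rng.random(), rng.random(), rng.random())
        for _ in range(n_draws)
    )
    return hits / n_draws
```

As the abstract notes, the reciprocal of such a prior probability bounds the Bayes factor a restricted (axiom-satisfying) model can earn against an unrestricted one.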

10.
In expected utility many results have been derived that give necessary and/or sufficient conditions for a multivariate utility function to be decomposable into lower-dimensional functions. In particular, multilinear, multiplicative and additive decompositions have been widely discussed. These utility functions can be more easily assessed in practical situations. In this paper we present a theory of decomposition in the context of nonadditive expected utility such as anticipated utility or Choquet expected utility. We show that many of the results used in conventional expected utility carry over to these more general frameworks. If preferences over lotteries depend only on the marginal probability distributions, then in expected utility the utility function is additively decomposable. We show that in anticipated utility the marginality condition implies not only that the utility function is additively decomposable but also that the distortion function is the identity function. We further demonstrate that a decision maker who is bivariate risk neutral has a utility function that is additively decomposable and a distortion function q for which q(½) = ½.
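For reference, the anticipated (rank-dependent) utility functional the abstract builds on can be written as follows (ranking conventions vary across authors; this is one common form):

```latex
% Anticipated utility of a lottery L giving outcome x_i with probability p_i,
% outcomes ranked from worst to best (x_1 \le \dots \le x_n),
% with distortion function q (q(0)=0, q(1)=1):
AU(L) = \sum_{i=1}^{n}
  \left[ q\!\left(\sum_{j=i}^{n} p_j\right) - q\!\left(\sum_{j=i+1}^{n} p_j\right) \right] u(x_i)
```

Setting q to the identity recovers expected utility, which is why the paper's marginality result (additively decomposable u together with q(p) = p) collapses the model back to the classical additive case.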

11.
A random utility model of choice was developed by combining the basic ideas of the well-known theories of Thurstone and Restle. The new model has exactly the same number of free parameters as Tversky's Elimination-by-Aspects model. Furthermore, both models were found to fit, with equal accuracy, the data reported by Rumelhart and Greeno, and Tversky. It was concluded that although the two theories are not identical, they may be difficult to discriminate empirically.

12.
13.
The first aim of this study was to test the self-consistency model (SCM) of subjective confidence as it applies to personal preferences. According to SCM, participants presented with a two-alternative forced-choice (2AFC) item draw a small sample of representations of the item. Their confidence reflects the extent to which the choice is representative of the population of representations associated with the item, and the likelihood of making that choice on subsequent occasions. The second aim was to use confidence judgment as a clue to the dynamics of online preference construction. Participants were presented with 2AFC items measuring everyday personal preferences. The task was presented five times. In line with SCM, (i) when participants changed their preferences across presentations, they were systematically more confident when they made their more frequent choice; (ii) confidence in a choice in the item's first presentation predicted the likelihood of repeating that choice in subsequent presentations; (iii) despite the idiosyncratic nature of personal preferences, confidence was higher for consensual than for nonconsensual preferences; (iv) when participants predicted the preferences of others, they were also more confident when their predictions agreed with those of others; and (v) the confidence/accuracy correlation for predictions was positive for consensually correct but negative for consensually wrong predictions. These results suggest that confidence in preferences can help separate between the stable and variable contributions to preference construction in terms of the population of representations available in memory and the representations that are accessible at the time of preference solicitation, respectively. Copyright © 2012 John Wiley & Sons, Ltd.

14.
Current psychometric models of choice behavior are strongly influenced by Thurstone’s (1927, 1931) experimental and statistical work on measuring and scaling preferences. Aided by advances in computational techniques, choice models can now accommodate a wide range of different data types and sources of preference variability among respondents induced by such diverse factors as person-specific choice sets or different functional forms for the underlying utility representations. At the same time, these models are increasingly challenged by behavioral work demonstrating the prevalence of choice behavior that is not consistent with the underlying assumptions of these models. I discuss new modeling avenues that can account for such seemingly inconsistent choice behavior and conclude by emphasizing the interdisciplinary frontiers in the study of choice behavior and the resulting challenges for psychometricians. The author would like to thank R. Darrell Bock whose work inspired many of the ideas presented here. The paper benefitted from helpful comments by Albert Maydeu-Olivares and Rung-Ching Tsai. The reported research was supported in parts by the Social Sciences and Humanities Research Council of Canada.

15.
At least two types of models, the vector model and the unfolding model, can be used for the analysis of dichotomous choice data taken from, for example, the pick any/n method. The previous vector threshold models have a difficulty with estimation of nuisance parameters such as the individual vectors and thresholds. This paper proposes a new probabilistic vector threshold model, where, unlike the former vector models, the angle that defines an individual vector is a random variable, and where the marginal maximum likelihood estimation method using the expectation-maximization algorithm is adopted to avoid incidental parameters. The paper also discusses which of the two models is more appropriate to account for dichotomous choice data. Two sets of dichotomous choice data are analyzed by the model.

16.
A linear utility model is introduced for optimal selection when several subpopulations of applicants are to be distinguished. Using this model, procedures are described for obtaining optimal cutting scores in subpopulations in quota-free as well as quota-restricted selection situations. The cutting scores are optimal in the sense that they maximize the overall expected utility of the selection process. The procedures are demonstrated with empirical data.
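The quota-free version of the cutting-score idea can be sketched by brute force. This is an illustrative toy, not the paper's procedure; the applicant data and utility values are hypothetical:

```python
def optimal_cutting_score(applicants, reject_utility=0.0):
    """Quota-free selection: everyone scoring at or above the cutoff is
    selected. Each applicant is a (predictor_score, utility_if_selected)
    pair; rejected applicants contribute `reject_utility`. The cutoff
    maximizing total expected utility is found by brute force over the
    observed scores (plus the option of selecting nobody)."""
    ordered = sorted(applicants)
    candidates = [score for score, _ in ordered] + [float("inf")]
    best_cut, best_total = None, float("-inf")
    for cut in candidates:
        total = sum(u if score >= cut else reject_utility
                    for score, u in ordered)
        if total > best_total:
            best_cut, best_total = cut, total
    return best_cut, best_total
```

With applicants `[(1, -2.0), (2, -1.0), (3, 1.0), (4, 2.0)]`, the utility-maximizing cutoff is 3: it selects exactly the applicants whose expected utility of selection is positive.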

17.
This paper details the results of an empirical investigation of the random errors associated with decomposition estimates of multiattribute utility. In a riskless setting, two groups of subjects were asked to evaluate multiattribute alternatives both holistically and with the use of an additive decomposition. For one group, the alternatives were described in terms of three attributes, and for the other in terms of five. Estimates of random error associated with the various elicitations (holistic, single-attribute utility, scaling constants, or weights) were obtained using a test-retest format. It was found for both groups that the additive decomposition had significantly smaller levels of random error than the holistic evaluation. However, the number of attributes did not seem to make a significant difference to the amount of random error associated with the decomposition estimates. The levels of error found in the various elicitations were consistent with theoretical bounds that have recently been proposed in the literature. These results show that the structure imposed on the problem through decomposition results in measurable improvement in quality of the multiattribute utility judgements, and contribute to a greater understanding of the decomposition method in decision analysis.

18.
A combined multi-attribute utility and expectancy-value model has repeatedly been found to yield a worse fit to choices than to preference ratings. The present study investigated two possible explanations for this finding. First, people's belief-value structures may change in the choice task as they try to find the best alternative. Second, a difficult choice task may cause the decision maker to use simplifying heuristics. In the first of two experiments, subjective belief-value structures were measured on two occasions separated by about one week. Immediately before the second measurement, different groups of subjects performed a choice task, gave preference ratings, or performed a control task. The results did not support an interpretation of the greater difficulty of predicting choices in terms of changes in belief-value structures. However, the notion of simplifying heuristics received support by the finding that adopting simpler versions of the original model improved the predictions of the choices. In the second experiment, beliefs were measured immediately before or after each of a series of choices or preference ratings. The results indicated that although temporary changes in beliefs may occur, they can hardly provide a full account of the differential predictability of preferences and choices.

19.
A lexicographic rule orders multi-attribute alternatives in the same way as a dictionary orders words. Although no utility function can represent lexicographic preference over continuous, real-valued attributes, a constrained linear model suffices for representing such preferences over discrete attributes. We present an algorithm for inferring lexicographic structures from choice data. The primary difficulty in using such data is that it is seldom possible to obtain sufficient information to estimate individual-level preference functions. Instead, one needs to pool the data across latent clusters of individuals. We propose a method that identifies latent clusters of subjects, and estimates a lexicographic rule for each cluster. We describe an application of the method using data collected by a manufacturer of television sets. We compare the predictions of the model with those obtained from a finite-mixture, multinomial-logit model.
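The lexicographic rule itself is straightforward to state in code (the attribute names below are hypothetical, loosely echoing the television-set application):

```python
def lexicographic_rank(alternatives, attribute_order):
    """Order multi-attribute alternatives the way a dictionary orders
    words: compare on the most important attribute first and fall back
    to later attributes only to break ties. Each alternative is a dict
    mapping attribute name -> level, with higher levels preferred."""
    def key(alt):
        # Negate so that higher attribute levels sort first.
        return tuple(-alt[a] for a in attribute_order)
    return sorted(alternatives, key=key)
```

Note that no single weighted sum of the attributes can reproduce this ordering in general, which is why the paper works with a constrained linear representation over discrete attribute levels instead.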

20.
There are two general methods of cross-validation: (a) empirical estimation, and (b) formula estimation. In choosing a specific cross-validation procedure, one should consider both costs (e.g., inefficient use of available data in estimating regression parameters) and benefits (e.g., accuracy in estimating population cross-validity). Empirical cross-validation methods involve significant costs, since they are typically laborious and wasteful of data, but under conditions represented in Monte Carlo studies, they are generally not more accurate than formula estimates. Consideration of costs and benefits suggests that empirical estimation methods are typically not worth the cost, except in a limited number of cases in which Monte Carlo sampling assumptions are not met in the derivation sample. Designs which use multiple samples to estimate the cross-validity of a single regression equation are clearly preferable to single-sample designs; the latter are never expected to be more accurate than formula estimates and thus are never worth the cost. Multi-equation designs are more accurate than single equation designs, but they appear to estimate the wrong parameter, and thus are difficult to interpret.
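The formula-estimation route is typically based on shrinkage formulas such as Wherry's adjusted R² and Browne's (1975) cross-validity estimate. A sketch with the formulas as they are commonly cited (worth verifying against the original sources before use):

```python
def wherry_adjusted_r2(r2, n, k):
    """Wherry's shrinkage estimate of the population squared multiple
    correlation, from sample R^2 with n cases and k predictors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def browne_cross_validity(r2, n, k):
    """Browne's (1975) formula estimate of squared cross-validity: the
    R^2 expected when the sample regression weights are applied to new
    cases from the same population. Uses Wherry's estimate as input."""
    rho2 = wherry_adjusted_r2(r2, n, k)
    return ((n - k - 3) * rho2**2 + rho2) / ((n - 2 * k - 2) * rho2 + k)
```

For example, with R² = 0.5, n = 100 and k = 5 predictors, the estimated cross-validity is smaller than the shrunken R², reflecting the additional loss from applying sample weights to new cases.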


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号