20 related documents found.
1.
Stochastic dominance concerns conditions on outcome probabilities that are necessary and sufficient for one act to be (strictly) preferred to another according to all preference relations that share certain properties, one of which customarily is an Archimedean property sufficient to entail existence of real-valued representations. We relax this assumption to permit linear lexicographic utility of finite and known dimensionality. In some situations, levels of the lexicographic hierarchy could correspond to explicit criteria or attributes. In our model, subjective probabilities emerge as matrix premultipliers of the outcome utility vectors. We thus obtain matrix probability generalizations of the familiar cumulative probability conditions for stochastic dominance.
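For background only (these are the familiar scalar conditions that the paper generalizes to matrix probabilities, not the paper's own result): for cumulative distribution functions F and G over outcomes, first- and second-order stochastic dominance are usually written as

\[ F \succeq_{1} G \iff F(x) \le G(x) \ \text{for all } x, \qquad F \succeq_{2} G \iff \int_{-\infty}^{x} F(t)\,dt \le \int_{-\infty}^{x} G(t)\,dt \ \text{for all } x, \]

with strict inequality for at least one x in the strict versions; first-order dominance is equivalent to weakly higher expected utility for every nondecreasing utility function.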
2.
Ralf M. Bader 《Australasian journal of philosophy》2018,96(3):498-507
This paper addresses the problem of opaque sweetening and argues that one should use stochastic dominance in comparing lotteries even when dealing with incomplete orderings that allow for non-comparable outcomes.
3.
Kazimierz Zaras 《Journal of Multi-Criteria Decision Analysis》1999,8(5):291-297
Given a finite set A of actions evaluated by a set of attributes, preferential information is considered in the form of a pairwise comparison table including pairs of actions from subset B⊂A described by stochastic dominance relations on particular attributes and a total order on the decision attribute. Using a rough sets approach for the analysis of this subset of preference relations, a set of decision rules is obtained, and these are applied to the set A\B of potential actions. Because the rough sets approach looks for a reduction of the set of attributes, it makes it possible to operate with multi‐attribute stochastic dominance over a reduced number of attributes. Copyright © 1999 John Wiley & Sons, Ltd.
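As an illustration of the attribute-wise stochastic dominance relations from which the pairwise comparison table is built, a minimal sketch follows (not the authors' rough sets procedure; the function name and example data are assumed for illustration):

```python
import numpy as np

def first_order_dominates(p_a, p_b):
    """Return True if the distribution of action A first-order stochastically
    dominates that of action B on one attribute, given probabilities over a
    common ascending grid of outcome levels."""
    cdf_a, cdf_b = np.cumsum(p_a), np.cumsum(p_b)
    # A dominates B if its CDF never lies above B's and lies strictly below it somewhere.
    return bool(np.all(cdf_a <= cdf_b + 1e-12) and np.any(cdf_a < cdf_b - 1e-12))

# Two actions compared on one attribute with three ordered outcome levels.
p_a = np.array([0.1, 0.3, 0.6])   # A puts more probability on the better outcomes
p_b = np.array([0.3, 0.4, 0.3])
print(first_order_dominates(p_a, p_b))  # True
```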
4.
Rainer Dyckerhoff 《Journal of Multi-Criteria Decision Analysis》1994,3(1):41-58
In expected utility many results have been derived that give necessary and/or sufficient conditions for a multivariate utility function to be decomposable into lower-dimensional functions. In particular, multilinear, multiplicative and additive decompositions have been widely discussed. These utility functions can be more easily assessed in practical situations. In this paper we present a theory of decomposition in the context of nonadditive expected utility such as anticipated utility or Choquet expected utility. We show that many of the results used in conventional expected utility carry over to these more general frameworks. If preferences over lotteries depend only on the marginal probability distributions, then in expected utility the utility function is additively decomposable. We show that in anticipated utility the marginality condition implies not only that the utility function is additively decomposable but also that the distortion function is the identity function. We further demonstrate that a decision maker who is bivariate risk neutral has a utility function that is additively decomposable and a distortion function q for which q(½) = ½.
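As background (the standard rank-dependent form usually meant by "anticipated utility", not a result specific to this paper): for a lottery with outcomes x_1 ≤ ... ≤ x_n, probabilities p_i, utility u and distortion function q with q(0) = 0 and q(1) = 1,

\[ V = \sum_{i=1}^{n} \left[ q\!\left(\sum_{j=i}^{n} p_j\right) - q\!\left(\sum_{j=i+1}^{n} p_j\right) \right] u(x_i), \]

which reduces to ordinary expected utility \(\sum_i p_i\,u(x_i)\) exactly when q is the identity function, as in the marginality result stated above.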
5.
Gregory S. Parnell David W. Hughes Roger Chapman Burk Patrick J. Driscoll Paul D. Kucik Benjamin L. Morales Lawrence R. Nunn 《Journal of Multi-Criteria Decision Analysis》2013,20(1-2):49-60
Criteria are the central focus of multi‐criteria decision analysis. Many authors have suggested using our values (or preferences) to define the criteria we use to evaluate alternatives. Value‐focused thinking (VFT) is an important philosophy that advocates a more fundamental view of values in our decision making in our private and professional lives. VFT proponents advocate starting first with our values and then using our values to create decision opportunities, evaluate alternatives and finally develop improved alternatives. It has been 20 years since VFT was first introduced by Ralph Keeney. This paper surveys the VFT literature to provide a comprehensive summary of the significant applications, describe the main research developments and identify areas for future research. Reviewing the scope and magnitude of VFT applications and the key theoretical developments since VFT was introduced in 1992, we found 89 papers written in 29 journals from 1992 to 2010. We developed about 20 research questions covering the type of article (application, theory, case study, etc.), the size of the decision space (which, when given, ranged from $200K to billions of dollars), the contribution documented in the article (application benefits) and the research contributions (categorized by preferences, uncertainties and alternatives). After summarizing the answers to these questions, we conclude the paper with suggestions for improving VFT applications and potential future research. We found a large number of significant VFT applications and several useful research contributions. We also found an increasing number of VFT papers written by international authors. Copyright © 2012 John Wiley & Sons, Ltd.
6.
We propose and test a novel approach for eliciting subjective joint probabilities. In the proposed approach, judges compare pairs of possible outcomes and identify which of the two is more likely and by how much. These pair‐wise comparative judgments create a matrix of ratio judgments from which the target probabilities are extracted using the rows' (or columns') geometric means. In Study 1, subjects provided direct assessments of the likelihood of joint events (e.g., sunny days and stock market gains) and also made pair‐wise comparisons of the same joint events. Subjects in Study 2 learnt the distribution of hypothetical event pairs and provided direct and ratio estimates. In both studies, the ratio estimates were significantly more accurate than the direct estimates. The results suggest that it is possible to elicit probabilistic estimates without explicitly asking for probabilities and that the pair‐wise approach is a candidate for complementing or replacing traditional elicitation approaches. Copyright © 2016 John Wiley & Sons, Ltd.
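A minimal sketch of the row-geometric-mean extraction step described above; the example ratio matrix, function name and values are assumptions for illustration, and the actual elicitation procedure involves more than this single computation:

```python
import numpy as np

def probabilities_from_ratio_matrix(R):
    """Extract probability estimates from a matrix of pairwise ratio judgments.

    R[i, j] is the judged ratio P(event i) / P(event j); the diagonal is 1 and,
    for a perfectly consistent judge, R[j, i] = 1 / R[i, j].
    """
    row_gm = np.prod(R, axis=1) ** (1.0 / R.shape[1])  # geometric mean of each row
    return row_gm / row_gm.sum()                       # normalise so the estimates sum to one

# Example: three joint events judged pairwise.
R = np.array([
    [1.0, 2.0, 4.0],    # event 1 judged twice as likely as event 2, four times as likely as event 3
    [0.5, 1.0, 2.0],
    [0.25, 0.5, 1.0],
])
print(probabilities_from_ratio_matrix(R))  # approximately [0.571, 0.286, 0.143]
```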
7.
Melvin R. Novick 《Psychometrika》1980,45(4):411-424
In this paper, modern statistics is considered as a branch of psychometrics and the question of how the central problems of statistics can be resolved using psychometric methods is investigated. Theories and methods developed in the fields of test theory, scaling, and factor analysis are related to the principal problems of modern statistical theory and method. Topics surveyed include assessment of probabilities, assessment of utilities, assessment of exchangeability, preposterior analysis, adversary analysis, multiple comparisons, the selection of predictor variables, and full-rank ANOVA. Reference is made to some literature from the field of cognitive psychology to indicate some of the difficulties encountered in probability and utility assessment. Some methods for resolving these difficulties using the Computer-Assisted Data Analysis (CADA) Monitor are described, as is some recent experimental work on utility assessment. 1980 Psychometric Society presidential address. I am indebted to Paul Slovic and David Libby for valuable consultation on the issues discussed in this paper and to Nancy Turner and Laura Novick for assistance in preparation. Research reported herein was supported under contract number N00014-77-C-0428 from the Office of Naval Research to The University of Iowa, Melvin R. Novick, principal investigator. Opinions expressed herein reflect those of the author and not those of sponsoring agencies.
8.
Casper Storm Hansen 《Thought: A Journal of Philosophy》2015,4(4):213-214
This paper describes a scenario in which a person in his afterlife will with probability 1 spend twice as many days in Heaven as in Hell, but, even though Heaven is as good as Hell is bad, his expected utility for any given day in that afterlife is negative.
9.
A Theoretical Review of Psychological Factors in Risky Decision Making
Theories of risky decision making offer two fairly typical explanations of whether or not people choose to take risks. One explanation attributes risky decision making to basic processes shared by all people, that is, to "colder" psychological and cognitive processes; theories of this kind hold that risky choices arise from basic human psychological and perceptual mechanisms. The other explanation attributes risky decision making to "hotter" affective and motivational processes; theories of this kind hold that situational and personality factors strengthen the motivation for risky decisions and lead to individual differences in risky decision making. This paper provides a fairly systematic review of the different theories behind these two explanations.
10.
ERIK CARLSON 《Theoria》2007,73(1):3-27
Many philosophers have claimed that extensive or additive measurement is incompatible with the existence of “higher values”, any amount of which is better than any amount of some other value. In this paper, it is shown that higher values can be incorporated in a non‐standard model of extensive measurement, with values represented by sets of ordered pairs of real numbers, rather than by single reals. The suggested model is mathematically fairly simple, and it applies to structures including negative as well as positive values.
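As a point of comparison only (the paper's own model represents values by sets of ordered pairs rather than by this ordering), the "higher value" phenomenon is often illustrated with a lexicographic order on pairs (a, b), where a records the amount of the higher value and b the amount of the lower one:

\[ (a_1, b_1) \succ (a_2, b_2) \iff a_1 > a_2 \ \text{ or }\ \bigl(a_1 = a_2 \text{ and } b_1 > b_2\bigr). \]

Under this ordering any increase in a outweighs any amount of b, which is exactly the property that blocks representation by a single real number.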
11.
PETER LAM HERBERT MOSKOWITZ THOMAS EPPEL JEN TANG 《Journal of Multi-Criteria Decision Analysis》1997,6(1):25-40
Traditionally, parameters of multiattribute utility models, representing a decision maker's preference judgements, are treated deterministically. This may be unrealistic, because assessment of such parameters is potentially fraught with imprecisions and errors. We thus treat such parameters as stochastic and investigate how their associated imprecision/errors are propagated in an additive multiattribute utility function in terms of the aggregate variance. Both a no information and a rank order case regarding the attribute weights are considered, assuming a uniform distribution over the feasible region of attribute weights constrained by the respective information assumption. In general, as the number of attributes increases, the variance of the aggregate utility in both cases decreases and approaches the same limit, which depends only on the variances as well as the correlations among the single-attribute utilities. However, the marginal change in aggregate utility variance decreases rather rapidly, and hence decomposition as a variance reduction mechanism is generally useful but becomes relatively ineffective if the number of attributes exceeds about 10. Moreover, it was found that positively correlated utilities increase the aggregate utility variance; hence, every effort should be made to avoid positive correlations between the single-attribute utilities. We also provide guidelines for determining under what condition and to what extent a decision maker should decompose to obtain an aggregate utility variance that is smaller than that of holistic assessments. Extensions of the current model and empirical research to support some of our behavioural assumptions are discussed. © 1997 John Wiley & Sons, Ltd.
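For context, the generic variance-propagation identity behind these results (the paper's no-information and rank-order cases additionally treat the weights themselves as uniformly distributed, which is not shown here): for an additive aggregate utility U = Σ_i w_i u_i with fixed weights and stochastic single-attribute utilities u_i,

\[ \operatorname{Var}(U) = \sum_{i} w_i^{2}\,\sigma_i^{2} + \sum_{i \ne j} w_i\,w_j\,\rho_{ij}\,\sigma_i\,\sigma_j, \]

where σ_i² is the variance of u_i and ρ_ij the correlation between u_i and u_j; the second sum makes explicit why positive correlations inflate the aggregate variance.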
12.
Alfonso Mateos Antonio Jiménez José F. Blanco 《Journal of Multi-Criteria Decision Analysis》2012,19(3-4):129-142
In multi‐attribute utility theory, it is often not easy to elicit precise values for the scaling weights representing the relative importance of criteria. A very widespread approach is to gather incomplete information. A recent family of approaches for dealing with such situations, known as dominance measuring methods, uses information about each alternative's intensity of dominance. Different dominance measuring methods have been proposed, and simulation studies have been carried out to compare these methods with each other and with other approaches, but only when ordinal information about weights is available. In this paper, we use Monte Carlo simulation techniques to analyse the performance of such methods and adapt them to deal with weight intervals, weights fitting independent normal probability distributions or weights represented by fuzzy numbers. Moreover, dominance measuring method performance is also compared with a widely used methodology for dealing with incomplete information on weights, stochastic multicriteria acceptability analysis (SMAA). SMAA is based on exploring the weight space to describe the evaluations that would make each alternative the preferred one. Copyright © 2012 John Wiley & Sons, Ltd.
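A minimal sketch of the weight-space exploration idea behind SMAA as described above (uniform sampling over the weight simplex and a first-rank acceptability index); the additive value model, the function name and the example data are assumptions for illustration, not the methods compared in the paper:

```python
import numpy as np

def first_rank_acceptability(values, n_samples=100_000, seed=None):
    """Estimate, for each alternative, the share of uniformly sampled weight
    vectors under which it achieves the highest additive value.

    values -- (n_alternatives, n_criteria) array of per-criterion scores in [0, 1]
    """
    rng = np.random.default_rng(seed)
    n_alt, n_crit = values.shape
    weights = rng.dirichlet(np.ones(n_crit), size=n_samples)  # uniform on the weight simplex
    totals = weights @ values.T                               # (n_samples, n_alternatives)
    winners = totals.argmax(axis=1)
    return np.bincount(winners, minlength=n_alt) / n_samples

# Three alternatives scored on three criteria.
values = np.array([
    [0.9, 0.2, 0.5],
    [0.5, 0.8, 0.4],
    [0.4, 0.5, 0.9],
])
print(first_rank_acceptability(values, seed=0))  # estimated first-rank acceptability indices
```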
13.
Robert D. Rupert 《Australasian journal of philosophy》2013,91(3):559-562
A main thread of the debate over mathematical realism has come down to whether mathematics does explanatory work of its own in some of our best scientific explanations of empirical facts. Realists argue that it does; anti-realists argue that it doesn't. Part of this debate depends on how mathematics might be able to do explanatory work in an explanation. Everyone agrees that it's not enough that there merely be some mathematics in the explanation. Anti-realists claim there is nothing mathematics can do to make an explanation mathematical; realists think something can be done, but they are not clear about what that something is. I argue that many of the examples of mathematical explanations of empirical facts in the literature can be accounted for in terms of Jackson and Pettit's [1990] notion of program explanation, and that mathematical realists can use the notion of program explanation to support their realism. This is exactly what has happened in a recent thread of the debate over moral realism (in this journal). I explain how the two debates are analogous and how moves that have been made in the moral realism debate can be made in the mathematical realism debate. However, I conclude that one can be a mathematical realist without having to be a moral realist.
14.
Tomohiro Hayashida Ichiro Nishizaki Yoshifumi Ueda Hikaru Honda 《Journal of Multi-Criteria Decision Analysis》2012,19(5-6):227-245
Tokachi sub‐prefecture in Hokkaido is one of the most famous dairy and crop farming regions in Japan. It is known that Tokachi is faced with various difficult problems such as soil degradation, water contamination and unpleasant odours because of the excessive use of chemical fertilizers and inappropriate treatment of livestock excretion. In this paper, we focus on Shihoro town where agricultural outputs are relatively large in Tokachi, and propose collaborative circulating farming with collective operations between arable and cattle farmers. Under the assumption that the decision‐maker in this problem is a representative of a farming organization who hopes for sustainable agricultural development and values the intentions of local residents including arable and cattle farmers in this region, we employ multi‐attribute utility theory in order to evaluate multiple alternatives of the farming management problem. Copyright © 2012 John Wiley & Sons, Ltd.
15.
We investigate methods developed in multiple criteria decision‐making that use ordinal information to estimate numerical values. Such methods can be used to estimate attribute weights, attribute values, or event probabilities given ranks or partial ranks. We first review related studies and then develop a generalized rank‐sum (GRS) approach in which we provide a derivation of the rank‐sum approach that had been previously proposed. The GRS approach allows for incorporating the concept of degree of importance (or, difference in likelihood with respect to probabilities and difference in value for attribute values), information that most other rank‐based formulas do not utilize. We then present simulation results comparing the GRS method with other rank‐based formulas such as the rank order centroid method, and comparing the GRS method with as many as three levels of importance (i.e., GRS‐3) against Simos' procedure (which can also incorporate degree of importance). To our surprise, our results show that incorporating the additional information (i.e., the degree of importance) in both GRS‐3 and Simos' procedure did not result in better performance than the rank order centroid method or GRS. Further research is needed to investigate the modelling of such extra information. We also explore the scenario in which a decision‐maker has indifference judgments and cannot provide a complete rank order. Copyright © 2014 John Wiley & Sons, Ltd.
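For reference only (these are the standard formulas for the existing rank-based schemes named above; the paper's own GRS formula is not reproduced here), the rank-sum and rank order centroid weights for n attributes ranked i = 1 (most important) through n are

\[ w_i^{\mathrm{RS}} = \frac{2\,(n+1-i)}{n\,(n+1)}, \qquad w_i^{\mathrm{ROC}} = \frac{1}{n} \sum_{k=i}^{n} \frac{1}{k}. \]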
16.
Early Positive Information Impacts Final Evaluations: No Deliberation‐Without‐Attention Effect and a Test of a Dynamic Judgment Model
Claudia González Vallejo Jiuqing Cheng Nathaniel Phillips Janna Chimeli Francis Bellezza Jason Harman G. Daniel Lassiter Matthew J. Lindberg 《Journal of Behavioral Decision Making》2014,27(3):209-225
Evaluation judgments were affected by information order and not by subsequent unconscious versus conscious deliberation. In three experiments, we examined the influence of early positive information on final evaluations of four objects. Based on a task analysis, we predicted primacy effects in judgments in a sequential data acquisition task. Thinking periods following presentation were used to manipulate conscious or unconscious processing. In all three studies, we found no effects of the thinking manipulations but instead found reliable order effects. We developed and tested an online judgment model on the basis of the belief updating model of Hogarth and Einhorn. The model accounted for a large proportion of the individual-level variability, and model comparison tests supported the presence of a primacy effect. Copyright © 2013 John Wiley & Sons, Ltd.
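For context, a commonly cited statement of the Hogarth and Einhorn belief-adjustment model (background only; the authors' online judgment model adapts it and is not reproduced here) updates the belief S_k after the k-th piece of evidence x_k as

\[ S_k = S_{k-1} + w_k\,\bigl[s(x_k) - R\bigr], \qquad w_k = \begin{cases} \alpha\,S_{k-1} & \text{if } s(x_k) \le R,\\[2pt] \beta\,(1 - S_{k-1}) & \text{if } s(x_k) > R, \end{cases} \]

where s(x_k) is the evaluation of the evidence, R is the reference point (the prior belief S_{k-1} in estimation mode, a constant in evaluation mode), and α and β capture sensitivity to negative and positive evidence; order effects such as primacy or recency then depend on the evidence sequence and these weights.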
17.
Madjid Tavana 《Journal of Multi-Criteria Decision Analysis》2002,11(2):75-96
The vast amount of information that must be considered to solve inherently ill‐structured and complex strategic problems creates a need for tools to help decision makers (DMs) recognize the complexity of this process and develop a rational model for strategy evaluation. Over the last several decades, a philosophy and a body of intuitive and analytical methods have been developed to assist DMs in the evaluation of strategic alternatives. However, the intuitive methods lack a structured framework for the systematic evaluation of strategic alternatives, while the analytical methods are not intended to capture intuitive preferences. Euclid is a simple yet sophisticated multiobjective value analysis model that attempts to uncover some of the complexities inherent in the evaluation of strategic alternatives. The proposed model uses a series of intuitive and analytical methods, including environmental scanning, the analytic hierarchy process (AHP), subjective probabilities, and the theory of displaced ideal, to plot strategic alternatives on a matrix based on their Euclidean distance from the ideal alternative. Euclid is further compared to the quantitative strategic planning matrix (QSPM) in a real-world application. The information provided by the users shows that Euclid can significantly enhance decision quality and the DM's confidence. Euclid is not intended to replace the DMs; rather, it provides a systematic approach to support, supplement, and ensure the internal consistency of their judgments through a series of logically sound techniques. Copyright © 2003 John Wiley & Sons, Ltd.
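A minimal sketch of the distance-to-ideal step described above (ranking alternatives by Euclidean distance from the best weighted score achieved on each criterion, in the spirit of the displaced ideal); the normalisation, weighting scheme and names are assumptions for illustration, not the Euclid model itself:

```python
import numpy as np

def distances_to_ideal(scores, weights):
    """Euclidean distance of each alternative from the ideal alternative,
    where the ideal takes the best weighted score achieved on each criterion.

    scores  -- (n_alternatives, n_criteria) array, higher is better
    weights -- criterion weights summing to one (e.g. derived with AHP)
    """
    weighted = scores * weights          # weight each criterion
    ideal = weighted.max(axis=0)         # best weighted score per criterion (the "ideal" alternative)
    return np.linalg.norm(weighted - ideal, axis=1)

scores = np.array([
    [0.7, 0.9, 0.4],
    [0.8, 0.5, 0.6],
    [0.3, 0.6, 0.9],
])
weights = np.array([0.5, 0.3, 0.2])
print(distances_to_ideal(scores, weights))  # smaller distance = closer to the ideal alternative
```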
18.
Sunita S. Ahlawat 《Journal of Behavioral Decision Making》1999,12(1):71-88
Sequential processing of evidence may lead to a recency effect, a potential bias in judgment. The present research seeks to extend the literature on recency effects by assessing the potential moderating influence of team work: whether group decision making moderates the severity of the recency effects predicted by Hogarth and Einhorn (1992), and whether group processing influences the accuracy of, and confidence in, memory for evidence. Experienced auditors from a Big‐6 accounting firm made audit judgments, either individually or as groups. They were randomly assigned to one of two levels of evidence presentation order. After performing the judgment task, participants completed two evidence recognition tests. Consistent with prior findings, recency effects on judgments were observed, but only for individuals. Group judgments or audit reports were not affected by recency. Order effects, however, did not translate into different choices of audit reports, and did not persist in the memories of either individuals or groups. As expected, group memory was more accurate than individual memory, and groups were more confident than individuals. Overall, confidence in accurate memories was greater than in inaccurate ones. Copyright © 1999 John Wiley & Sons, Ltd.
19.
Bruce Rayno Gregory S. Parnell Roger C. Burk Brian W. Woodruff 《Journal of Multi-Criteria Decision Analysis》1997,6(6):344-354
The Air Force SPACECAST 2020 study identified and prioritized some future space systems and technologies required in the next century. This paper presents the results of research on the SPACECAST 2020 value model. The model determines and prioritizes future space systems' utility towards controlling and exploiting space. This research identifies the assumptions and simplifications in the additive utility function and assesses modifications. The research shows that mutual utility independence of the mission areas is a reasonable assumption. Mutual utility independence allows the use of the multiplicative and multilinear utility functions. Also, the SPACECAST 2020 scoring functions used the same scoring scale and measured only the utility of future capabilities. This study makes modifications to the SPACECAST 2020 measure-of-merit scoring functions. It replaces most of these functions with concave, convex, linear or ‘S’-shaped scoring functions whose capability ranges are expanded to include both current and future capabilities. The modified scoring functions and alternative utility functions do not alter the SPACECAST 2020 results but do improve the credibility and future usefulness of the model. This study shows that the initial assumption of using an additive utility function is also valid. © 1997 John Wiley & Sons, Ltd.
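For reference (the standard Keeney and Raiffa forms that mutual utility independence licenses; the SPACECAST 2020 model's own scaling constants are not given here), the additive and multiplicative multiattribute utility functions with single-attribute utilities u_i and scaling constants k_i are

\[ U(x) = \sum_{i=1}^{n} k_i\,u_i(x_i) \quad\text{(additive, when } \sum_i k_i = 1\text{)}, \qquad 1 + K\,U(x) = \prod_{i=1}^{n}\bigl[1 + K\,k_i\,u_i(x_i)\bigr] \quad\text{(multiplicative, with } K \text{ solving } 1 + K = \prod_i (1 + K\,k_i)\text{)}. \]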
20.
Jessica Shaw Rebecca Campbell Debi Cain 《American journal of community psychology》2016,58(3-4):446-462
Prior research has documented the problematic community response to sexual assault: the majority of sexual assaults reported to police are never prosecuted. Social dominance theory suggests that this response is a form of institutional discrimination, intended to maintain existing social structures, and that police personnel likely draw upon shared ideologies to justify their decision‐making in sexual assault case investigations. This study drew upon social dominance theory to examine how police justified their investigatory decisions to identify potential leverage points for change. The study revealed that the likelihood of a case referral to the prosecutor increased with each additional investigative step completed; of the different types of justifications provided by police for a less‐than‐thorough investigative response and stalled case, blaming the victim for the poor police investigation proved to be the most damaging to case progression; and the type of explanation provided by police was impacted by specific case variables. As suggested by social dominance theory, the study demonstrates that police rely on several different mechanisms to justify their response to sexual assault; implementing criminal justice system policies that target and interrupt these mechanisms has the potential to improve this response, regardless of specific case factors.