20 similar documents found (search time: 15 ms)
1.
Philippe Delquié 《Journal of Behavioral Decision Making》2008,21(1):91-109
Participants in four studies were faced with everyday‐life decision scenarios involving risk, such as purchasing an airline ticket whose price may change. They were asked to state their maximum willingness to pay (WTP) for resolving the uncertainty with either perfect information or an option. The two are strategically equivalent and should therefore be valued in the same way. Across all experiments, individuals tend to value the option more than the information. In addition, the distributions of responses reveal frequent and gross violations of normative bounds. Contrary to normative predictions, no relationship is found between individuals' valuation of the gamble and their reported WTP for information or option. Furthermore, although participants pay attention to the probabilities of outcomes in valuing the gambles, they ignore the probabilities in valuing uncertainty resolution. Several reasoning heuristics that participants use to come up with a value of information are identified, and it appears that the Information and Option contexts tend to trigger different heuristics. Shift in reference point and regret are consistent with the Information–Option valuation discrepancy. Copyright © 2007 John Wiley & Sons, Ltd.
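The strategic equivalence the abstract relies on can be checked with a small worked example. The sketch below uses a hypothetical version of the airline-ticket scenario; all prices and probabilities are assumptions for illustration, not figures from the study.

```python
# Hypothetical airline-ticket numbers (assumptions, not study data):
# the ticket costs 100 now; tomorrow it is 140 or 80 with equal probability.
price_today = 100.0
scenarios = [(140.0, 0.5), (80.0, 0.5)]   # (tomorrow's price, probability)

# No information, no option: buy now or wait blindly, whichever is
# cheaper in expectation.
expected_wait = sum(price * p for price, p in scenarios)   # 110.0
cost_baseline = min(price_today, expected_wait)            # 100.0

# Perfect information: learn tomorrow's price today, then buy at the
# cheaper moment in each scenario.
cost_info = sum(min(price_today, price) * p for price, p in scenarios)

# Option: lock in the right to buy at today's price, then tomorrow pay
# the lower of the strike (100) and the market price -- the same payoff.
cost_option = sum(min(price_today, price) * p for price, p in scenarios)

wtp_information = cost_baseline - cost_info   # 10.0
wtp_option = cost_baseline - cost_option      # 10.0
print(wtp_information, wtp_option)
```

The option lets the traveller act exactly as if tomorrow's price were already known, which is why the two WTP values coincide under the normative analysis.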
2.
Previous research has shown that preferences for options, such as gambles, can reverse depending on the response mode. These preference reversals have been demonstrated when tasks were performed sequentially. That is, subjects completed one task before beginning another. In an attempt to eliminate preference reversals, we asked subjects to perform tasks simultaneously. That is, subjects made two types of responses for each pair of gambles before evaluating the next pair. In the condition with no financial incentives, preference reversal rates were slightly reduced. In another condition, subjects were paid for their participation and they were allowed to play a gamble with real monetary compensation. A gamble pair was randomly selected, and if a subject's responses in the two tasks were consistent for that pair, he or she was allowed to play the ‘preferred’ gamble. Otherwise, the experimenter selected the gamble from the pair. With these financial incentives, systematic preference reversals were eliminated for two of the three task combinations. Preference reversals continued to occur for attractiveness ratings versus selling prices, although, even for that pair of tasks, the reversal rate was significantly reduced. For all three task pairs, preference orders from the two tasks appeared to merge into more consistent orders.
3.
A simple model for the utility of gambling
Peter C. Fishburn 《Psychometrika》1980,45(4):435-448
A model of the utility of gambling is presented in a modified von Neumann-Morgenstern format. Axioms imply a utility function that preserves preferences between sure things and between gambles. The addition of a utility of gambling term to the expected utility of a gamble preserves preference comparisons between gambles and sure things. Aspects of the utility of gambling are noted, and comparisons are made to standard concepts of risk attitudes. The author is indebted to Joseph Sani for valuable discussions on the topic of this paper.
4.
Michael Scholz 《Journal of Multi-Criteria Decision Analysis》2016,23(1-2):63-71
One‐switch utility functions model situations in which the preference between two alternatives switches only once as the outcome of one attribute of both alternatives changes from low to high. Recent research cites evidence that the sum of exponential functions (sumex) is the most convincing type for modelling one‐switch utility functions. Sumex functions allow modelling of exactly one preferential switch and are convenient for estimating one‐switch utility functions. However, it is so far unclear whether sumex functions are suitable for modelling preferential switches that are perceivable by a decision maker. This paper first analyses how differently the utility of two alternatives before and after a preferential switch can be modelled with sumex functions, given that the preferential switch is caused by a particular improvement in an attribute outcome. It thereafter investigates how accurately decision makers perceive such utility differences. Copyright © 2015 John Wiley & Sons, Ltd.
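As a concrete illustration of the one-switch property, the sketch below evaluates a 50/50 gamble against a sure amount under a sumex utility function. The coefficients, the gamble and the sure amount are assumptions chosen so that the preference switches exactly once as initial wealth grows; they are not parameters from the paper.

```python
import math

# Hypothetical sumex (sum-of-exponentials) utility:
# u(x) = -exp(-0.05*x) - exp(-0.002*x). Coefficients are illustrative.
def sumex_utility(x, a1=0.05, a2=0.002):
    return -math.exp(-a1 * x) - math.exp(-a2 * x)

def preference(wealth):
    """+1 if a 50/50 gamble over {0, 100} is preferred to a sure 45,
    -1 otherwise, at the given initial wealth."""
    eu_gamble = 0.5 * sumex_utility(wealth) + 0.5 * sumex_utility(wealth + 100)
    eu_sure = sumex_utility(wealth + 45)
    return 1 if eu_gamble > eu_sure else -1

# Scan wealth levels and count preference switches: sumex permits
# exactly one, here from the sure thing to the gamble.
prefs = [preference(w) for w in range(0, 301)]
switches = sum(1 for p, q in zip(prefs, prefs[1:]) if p != q)
print(prefs[0], prefs[-1], switches)  # -1 1 1
```

At low wealth the strongly risk-averse exponential term dominates and the sure amount wins; as wealth grows that term decays faster, the near-linear term takes over, and the preference flips once.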
5.
6.
Gregory S. Parnell, David W. Hughes, Roger Chapman Burk, Patrick J. Driscoll, Paul D. Kucik, Benjamin L. Morales, Lawrence R. Nunn 《Journal of Multi-Criteria Decision Analysis》2013,20(1-2):49-60
Criteria are the central focus of multi‐criteria decision analysis. Many authors have suggested using our values (or preferences) to define the criteria we use to evaluate alternatives. Value‐focused thinking (VFT) is an important philosophy that advocates a more fundamental role for values in decision making in our private and professional lives. VFT proponents advocate starting with our values and then using them to create decision opportunities, evaluate alternatives and, finally, develop improved alternatives. It has been 20 years since VFT was first introduced by Ralph Keeney. This paper surveys the VFT literature to provide a comprehensive summary of the significant applications, describe the main research developments and identify areas for future research. Reviewing the scope and magnitude of VFT applications and the key theoretical developments since VFT was introduced in 1992, we found 89 papers written in 29 journals from 1992 to 2010. We developed about 20 research questions covering the type of article (application, theory, case study, etc.), the size of the decision space (which, when given, ranged from $200K to billions of dollars), the contribution documented in the article (application benefits) and the research contributions (categorized by preferences, uncertainties and alternatives). After summarizing the answers to these questions, we conclude the paper with suggestions for improving VFT applications and potential future research. We found a large number of significant VFT applications and several useful research contributions. We also found an increasing number of VFT papers written by international authors. Copyright © 2012 John Wiley & Sons, Ltd.
7.
Rainer Dyckerhoff 《Journal of Multi-Criteria Decision Analysis》1994,3(1):41-58
In expected utility many results have been derived that give necessary and/or sufficient conditions for a multivariate utility function to be decomposable into lower-dimensional functions. In particular, multilinear, multiplicative and additive decompositions have been widely discussed. These utility functions can be more easily assessed in practical situations. In this paper we present a theory of decomposition in the context of nonadditive expected utility such as anticipated utility or Choquet expected utility. We show that many of the results used in conventional expected utility carry over to these more general frameworks. If preferences over lotteries depend only on the marginal probability distributions, then in expected utility the utility function is additively decomposable. We show that in anticipated utility the marginality condition implies not only that the utility function is additively decomposable but also that the distortion function is the identity function. We further demonstrate that a decision maker who is bivariate risk neutral has a utility function that is additively decomposable and a distortion function q for which q(½) = ½.
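The role of the distortion function can be sketched numerically: in anticipated (rank-dependent) utility, decumulative probabilities are passed through a distortion function q before being turned into decision weights, and the identity distortion recovers ordinary expected utility. The lottery, utility and distortion below are illustrative assumptions, not examples from the paper.

```python
# Sketch of anticipated (rank-dependent) utility: outcomes are ranked
# from worst to best, and decumulative probabilities are transformed
# by a distortion function q before weighting utilities.
def anticipated_utility(lottery, u, q):
    """lottery: list of (outcome, probability); u: utility; q: distortion."""
    ranked = sorted(lottery, key=lambda op: op[0])  # worst to best
    total = 0.0
    tail = 1.0  # probability of receiving this outcome or better
    for outcome, prob in ranked:
        weight = q(tail) - q(tail - prob)
        total += u(outcome) * weight
        tail -= prob
    return total

lottery = [(0, 0.25), (50, 0.5), (100, 0.25)]
u = lambda x: x              # linear utility, for clarity
identity = lambda t: t
pessimistic = lambda t: t ** 2   # convex q overweights bad outcomes

eu = anticipated_utility(lottery, u, identity)      # 50.0, plain EU
au = anticipated_utility(lottery, u, pessimistic)   # 31.25, pessimistic
print(eu, au)
```

With the identity distortion the decision weights are just the probabilities, so the value equals expected utility; the convex distortion shifts weight toward the worst outcome and lowers the evaluation.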
8.
This paper presents a practical implementation of multicriteria methodologies based on the UTA model by Jacquet‐Lagrèze and Siskos and the Quasi‐UTA model by Beuthe and Scannella, which are specified with a non‐linear, but piecewise linear, additive utility function. In contrast with the general UTA model, the Quasi‐UTA specification structures the partial utilities as recursive exponential functions of only one curvature parameter. This allows for a reduction of the quantity of information necessary to build the utility function. The software MUSTARD implements different variants of these models. Firstly, it offers the basic deterministic UTA model of disaggregation, but also its first programmed stochastic version. In both cases, the software proceeds stepwise and interactively, helping the decision maker to formulate the problem and state preferences between projects; in the stochastic case, the decision maker is even helped to build the criteria distributions. The Quasi‐UTA specification can be introduced in this disaggregation model. Secondly, the software offers an aggregation model whereby the Quasi‐UTA partial utility functions are built separately through specific questioning processes. The questions relating to deterministic criteria are of the ‘direct rating’ type, while those of the stochastic criteria are either of the ‘variable probability’ or the ‘variable outcome’ type. The criteria weights can be assessed by the ‘swing weight’ method or by a UTA‐II side‐program. As an example as well as a test of the Quasi‐UTA aggregation approach, the paper presents its application to a real problem of selecting road investment projects in Belgium. Several experts and civil servants were interviewed, and their individual utility functions derived. The projects are ranked according to their rate of return, which is computed on the basis of the projects' certainty‐equivalent money value. Copyright © 2002 John Wiley & Sons, Ltd.
9.
10.
11.
Nader Shoaibi 《Ratio》2021,34(1):7-19
The idea that logic is in some sense normative for thought and reasoning is a familiar one. Some of the most prominent figures in the history of philosophy including Kant and Frege have been among its defenders. The most natural way of spelling out this idea is to formulate wide‐scope deductive requirements on belief which rule out certain states as irrational. But what can account for the truth of such deductive requirements of rationality? By far, the most prominent responses draw in one way or another on the idea that belief aims at the truth. In this paper, I consider two ways of making this line of thought more precise and I argue that they both fail. In particular, I examine a recent attempt by Epistemic Utility Theory to give a veritist account of deductive coherence requirements. I argue that despite its proponents’ best efforts, Epistemic Utility Theory cannot vindicate such requirements.
12.
Theresa M. Marteau, Marie Johnston, Jane Kidd, Susan Michie, Rachel Cook, Joan Slack 《Psychology & health》2013,28(1-2):13-22
The purpose of the current study was to determine which psychological models are most useful in predicting uptake of a prenatal screening test, maternal-serum alphafetoprotein screening for spina bifida and Down's syndrome. 1000 women eligible for the test completed standardised self-report questionnaires at two routine visits to an antenatal clinic prior to the time when the test could take place. 902 underwent the screening test; 51 declined the test; and 47 did not undergo the test, giving no reason for this to staff. Knowledge of the test, the subjective expected utility attached to the test, and attitudes to doctors and medicine were all significant predictors of uptake behaviour. Results of a discriminant function analysis demonstrated distinct psychological processes underlying each of these three uptake behaviours, explaining 21% of the variance in uptake of screening. If uptake of screening is examined not as a dichotomous variable but as a group of behaviours, predictive models are identified accordingly. This would lead to models of health-related behaviours as heterogeneous rather than homogeneous phenomena, predicted and influenced by different causes.
13.
Tony Rosqvist 《Journal of Multi-Criteria Decision Analysis》2001,10(4):205-218
Investments on capital goods are assessed with respect to the life cycle profit as well as the economic lifetime of the investment. The outcome of an investment with respect to these economic criteria is generally non‐deterministic. An assessment of different investment options thus requires probabilistic modelling to explicitly account for the uncertainties. A process for the assessment of life cycle profit and the evaluation of the adequacy of the assessment is developed. The primary goal of the assessment process is to aid the decision‐maker in structuring and quantifying investment decision problems characterized by multiple criteria and uncertainty. The adequacy of the assessment process can be evaluated by probabilistic criteria indicating the degree of uncertainty in the assessment. Bayesian inference is used to re‐evaluate the initial assessment, as evidence of the system performance becomes available. Thus authentication of contracts of guarantee is supported. Numerical examples are given to demonstrate features of the described life cycle profit assessment process. Copyright © 2001 John Wiley & Sons, Ltd.
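The Bayesian re-evaluation step can be sketched in a few lines. The toy model below uses a conjugate Gamma-Poisson update of a failure rate to re-assess expected annual profit; the numbers and the model choice are assumptions for illustration and may differ from the paper's actual formulation.

```python
# Toy Bayesian re-evaluation: a Gamma(alpha, beta) prior on a machine's
# failure rate (failures per year) is updated with Poisson-distributed
# operating evidence, and expected profit is re-assessed.
# All figures are illustrative assumptions, not data from the paper.
alpha_prior, beta_prior = 2.0, 1.0        # prior mean: 2 failures/year
failures_observed, years_observed = 3, 4  # evidence from operation

# Conjugate update: alpha += failures, beta += exposure time.
alpha_post = alpha_prior + failures_observed
beta_post = beta_prior + years_observed
rate_prior = alpha_prior / beta_prior     # 2.0
rate_post = alpha_post / beta_post        # 1.0

# Re-evaluate expected annual profit: revenue minus expected repair cost.
revenue, repair_cost = 100_000.0, 10_000.0
profit_prior = revenue - rate_prior * repair_cost   # 80000.0
profit_post = revenue - rate_post * repair_cost     # 90000.0
print(profit_prior, profit_post)
```

The same mechanism supports verifying guarantees: as evidence accumulates, the posterior failure rate (and hence the profit assessment) can be compared against the contractually promised performance.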
14.
Tomohiro Hayashida, Ichiro Nishizaki, Yoshifumi Ueda, Hikaru Honda 《Journal of Multi-Criteria Decision Analysis》2012,19(5-6):227-245
Tokachi sub‐prefecture in Hokkaido is one of the most famous dairy and crop farming regions in Japan. It is known that Tokachi is faced with various difficult problems such as soil degradation, water contamination and unpleasant odours because of the excessive use of chemical fertilizers and inappropriate treatment of livestock excretion. In this paper, we focus on Shihoro town where agricultural outputs are relatively large in Tokachi, and propose collaborative circulating farming with collective operations between arable and cattle farmers. Under the assumption that the decision‐maker in this problem is a representative of a farming organization who hopes for sustainable agricultural development and values the intentions of local residents including arable and cattle farmers in this region, we employ multi‐attribute utility theory in order to evaluate multiple alternatives of the farming management problem. Copyright © 2012 John Wiley & Sons, Ltd.
15.
Alfonso Mateos, Antonio Jiménez, José F. Blanco 《Journal of Multi-Criteria Decision Analysis》2012,19(3-4):129-142
In multi‐attribute utility theory, it is often not easy to elicit precise values for the scaling weights representing the relative importance of criteria. A very widespread approach is to gather incomplete information. A recent approach for dealing with such situations is to use information about each alternative's intensity of dominance, known as dominance measuring methods. Different dominance measuring methods have been proposed, and simulation studies have been carried out to compare these methods with each other and with other approaches but only when ordinal information about weights is available. In this paper, we use Monte Carlo simulation techniques to analyse the performance of and adapt such methods to deal with weight intervals, weights fitting independent normal probability distributions or weights represented by fuzzy numbers. Moreover, dominance measuring method performance is also compared with a widely used methodology dealing with incomplete information on weights, the stochastic multicriteria acceptability analysis (SMAA). SMAA is based on exploring the weight space to describe the evaluations that would make each alternative the preferred one. Copyright © 2012 John Wiley & Sons, Ltd.
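The kind of weight-space exploration used in such simulation studies can be sketched compactly. The toy SMAA-style computation below samples weights from intervals, normalizes them and tallies rank-1 acceptability indices; the alternatives, scores and intervals are invented for illustration, not taken from the paper.

```python
import random

# SMAA-style sketch: sample criterion weights from intervals, normalize,
# and tally how often each alternative comes out on top (rank-1
# acceptability). All inputs below are illustrative assumptions.
random.seed(42)

# Per-criterion utilities of three alternatives on two criteria.
alternatives = {"A": [0.9, 0.2], "B": [0.5, 0.6], "C": [0.1, 1.0]}
weight_intervals = [(0.3, 0.7), (0.3, 0.7)]  # incomplete weight information

def sample_weights(intervals):
    raw = [random.uniform(lo, hi) for lo, hi in intervals]
    s = sum(raw)
    return [w / s for w in raw]  # normalize so weights sum to 1

n_samples = 10_000
wins = {name: 0 for name in alternatives}
for _ in range(n_samples):
    w = sample_weights(weight_intervals)
    scores = {name: sum(wi * ui for wi, ui in zip(w, utils))
              for name, utils in alternatives.items()}
    wins[max(scores, key=scores.get)] += 1

acceptability = {name: wins[name] / n_samples for name in alternatives}
print(acceptability)
```

In this toy instance alternative B never ranks first for any feasible weight vector, despite being "balanced" across both criteria; that is the sort of insight exploring the weight space makes visible.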
16.
In this study we clarified the multiple‐level effects of Confucian‐related work values, including self‐discipline and interpersonal ethics, on the performance of individuals and the team. Empirical data from 70 work teams with 472 team members in Taiwanese enterprises were collected to test our hypotheses. Results showed that, at the group level, shared team values of interpersonal ethics were positively related to team performance through the partial mediation of team cooperation. As a cross‐level effect, shared team values of interpersonal ethics and individual members' self‐discipline values were both positively related to individual performance. Implications of Confucian dynamism work values for contemporary organizational management and indigenous Chinese team theories are discussed.
17.
Michel Sanchez‐Cardenas 《The International Journal of Psycho-Analysis》2005,86(6):1695-1711
The author interprets Jules Verne's Journey to the centre of the Earth with the help of Matte Blanco's theoretical framework, which describes the principle of symmetry and the principle of generalization. The first states that, from the moment an element or a proposition becomes conscious, it coexists in the unconscious with its symmetrically opposite form. The second refers to the confusion of elements once they have been apprehended by thought as containing a common point; they are put into larger and larger groups which merge into an indivisible whole. Verne's novel is built on paired elements which become symmetrized (e.g. distinct minerals vs molten lava; scientific rationality vs madness; the living vs the dead, etc.). These elements in turn become confused with one another, thanks largely to the novel's atmosphere of oral incorporation. This allows the fusion between subject and object, and, in particular, between the orphaned hero and his dead (Earth) mother. The novel's narrative evolution through three stages (separation, fusion and de‐fusion, which are paralleled by rational, irrational and rational thought) can thus be understood as a mourning process. Similar processes can be found in other literary works.
18.
Mostafa Taqavi 《World Futures: Journal of General Evolution》2020,76(1):1-16
In this article, Pitt’s and Sharif’s models of technology are discussed. These models are based on two different conceptions of technology: technology as “instrument” and as “making use of instrument.” Sharif considers technology as a collection of empowering tools, including technoware, humanware, infoware and orgaware. On the other hand, Pitt sees technology as “humanity at work.” Based on his definition, Pitt proposes a model of technology with three components: first-order transformation, second-order transformation, and the assessment of feedback mechanism. In this article this model will be explained and criticized. After that, Sharif’s model is criticized in the light of Pitt’s theory and it will be shown that Pitt’s model provides a better understanding of different aspects of technology. For example, it will be argued how Pitt’s model is efficient in explaining the dynamicity, transfer and control of technology along with its soft dimensions, while Sharif’s model is incapable of doing so. In the next part, Pitt’s model is criticized and it is shown that the mechanism of knowledge progress suggested by this model is controversial and that Pitt’s framework cannot support the idea of indigenous technology. Furthermore, the ability of Pitt’s model to describe different technological phenomena is called into question, since this model provides a superficial view of the complexity of an assessment of technology’s consequences. Finally, a list is proposed that contains minimal requirements that every model of technology is expected to explain. It is incumbent on technology theoreticians to consider this list.
19.
The rhetorical foundation of philosophical argumentation
Michel Meyer 《Argumentation》1988,2(2):255-269
The rejection of rhetoric has been a constant theme in Western thought since Plato. The presupposition of such a debasement lies at the foundation of a certain view of Reason that I have called propositionalism, and which is analyzed in this article. The basic tenets of propositionalism are that truth is exclusive, i.e. it does not allow for any alternative, and that there is always only one proposition which must be true, the opposite one being false. Necessity and uniqueness are the ideals of propositionalism. But the question of the necessity of such a necessity is bound to arise. Foundationalism and propositionalism are intrinsically related. Since necessity excludes alternatives, rhetoric, which is based on the possibility of opposite standpoints, is unavoidably devalued as the crippled child of Reason, identical to sophistry or eristic. But propositionalism cannot justify itself and provide a justification for its own foundation without circularity or contradiction. Since it responds to the problem of eradicating problems and alternatives through propositional entities, propositionalism is ultimately based on questioning, to which it replies in the mode of denial. The unavowed foundation of Reason is therefore the question of questioning, even though this very question is suppressed as propositionalism. The trace of such a question is not only historical, but can also be seen, for instance, in the role played by the principle of contradiction in the constitution of propositional Reason (Aristotle): opposite propositions are not the expression of a problematic situation; they are either possible or successively unique propositions. We want to replace propositionalism by problematology, which allows for the conceptualization of alternatives, thereby rendering a true rhetoric possible. Argumentation can then no longer be equated with eristic, as propositionalism maintained. Rationality must be seen as having questioning as its true starting-point.
Reason must be rhetorical if it is to survive the death of propositionalism which took place after the radical criticisms of Marx, Nietzsche and Freud. Even if it is still hard for philosophers and rhetoricians to think within another framework, and even though they prefer endlessly to deconstruct the old one instead of changing it, problematology is bound to impose itself as the new voice for rationality, because Reason has always endeavored to solve problems. Propositionalism has been only one way of conceiving problems, based on the view that solutions could be but the suppression of questioning.
20.