Similar Articles
19 similar articles found.
1.
    
Criteria are the central focus of multi-criteria decision analysis. Many authors have suggested using our values (or preferences) to define the criteria we use to evaluate alternatives. Value-focused thinking (VFT) is an important philosophy that advocates a more fundamental role for values in decision making in our private and professional lives. VFT proponents advocate starting with our values and then using them to create decision opportunities, evaluate alternatives and, finally, develop improved alternatives. Twenty years have passed since VFT was first introduced by Ralph Keeney. This paper surveys the VFT literature to provide a comprehensive summary of the significant applications, describe the main research developments and identify areas for future research. We review the scope and magnitude of VFT applications and the key theoretical developments since VFT was introduced in 1992, identifying 89 papers published in 29 journals from 1992 to 2010. We develop about 20 research questions covering the type of article (application, theory, case study, etc.), the size of the decision space (which, when given, ranged from $200K to billions of dollars), the contribution documented in the article (application benefits) and the research contributions (categorized by preferences, uncertainties and alternatives). After summarizing the answers to these questions, we conclude the paper with suggestions for improving VFT applications and potential future research. We found a large number of significant VFT applications and several useful research contributions, as well as an increasing number of VFT papers written by international authors. Copyright © 2012 John Wiley & Sons, Ltd.

2.
One-switch utility functions model situations in which the preference between two alternatives switches only once as the outcome of one attribute of both alternatives changes from low to high. Recent research cites evidence that the sum of exponential functions (sumex) is the most convincing functional form for modelling one-switch utility functions. Sumex functions can model exactly one preferential switch and are convenient for estimating one-switch utility functions. However, it has so far been unclear whether sumex functions are suitable for modelling preferential switches that are perceivable by a decision maker. This paper first analyses how large a utility difference between two alternatives before and after a preferential switch can be modelled with sumex functions, given that the switch is caused by a particular improvement in an attribute outcome. It then investigates how accurately decision makers perceive such utility differences. Copyright © 2015 John Wiley & Sons, Ltd.
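
The abstract rests on the fact that a sumex utility admits at most one preferential switch. The sketch below is my own numerical illustration with assumed parameters and hypothetical lotteries, not the paper's model: the expected-utility difference between two alternatives, viewed as a function of a common shift t of the attribute, is again a sum of two exponentials and therefore changes sign at most once.

    import numpy as np

    a, b, c, d = 1.0, 0.05, 2.0, 0.5              # assumed sumex parameters
    def u(x):
        return -a * np.exp(-b * x) - c * np.exp(-d * x)

    # two hypothetical lotteries over one attribute: (outcomes, probabilities)
    A = (np.array([0.0, 10.0]), np.array([0.5, 0.5]))
    B = (np.array([4.0]), np.array([1.0]))

    def eu_diff(t):                                # E[u(A + t)] - E[u(B + t)]
        return A[1] @ u(A[0] + t) - B[1] @ u(B[0] + t)

    ts = np.linspace(0.0, 20.0, 2001)
    signs = np.sign([eu_diff(t) for t in ts])
    print("preference switches:", np.count_nonzero(np.diff(signs)))   # at most one for sumex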

3.
Given a finite set A of actions evaluated by a set of attributes, preferential information is considered in the form of a pairwise comparison table including pairs of actions from a subset B ⊆ A described by stochastic dominance relations on particular attributes and a total order on the decision attribute. Using a rough sets approach for the analysis of the subset of preference relations, a set of decision rules is obtained, and these are applied to the set A\B of potential actions. The rough sets approach of looking for a reduction of the set of attributes gives us the possibility of operating on a multi-attribute stochastic dominance for a reduced number of attributes. Copyright © 1999 John Wiley & Sons, Ltd.
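
As a small illustration of the per-attribute building block mentioned above (not the paper's rough-set procedure itself), the sketch below checks first-degree stochastic dominance between the outcome distributions of two hypothetical actions on one attribute; dominance relations established this way are the kind of input that populates the pairwise comparison table.

    # illustrative only: hypothetical outcome distributions of two actions on one attribute
    def fsd(dist_a, dist_b, grid):
        """True if distribution A first-degree stochastically dominates B on `grid`."""
        cdf = lambda dist, x: sum(p for v, p in dist if v <= x)
        return all(cdf(dist_a, x) <= cdf(dist_b, x) for x in grid)

    act1 = [(2, 0.2), (3, 0.3), (5, 0.5)]          # (value, probability) pairs
    act2 = [(1, 0.3), (3, 0.4), (4, 0.3)]
    grid = sorted({v for d in (act1, act2) for v, _ in d})
    print("action 1 FSD action 2:", fsd(act1, act2, grid))   # True for these numbers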

4.
This paper uses a simulation approach to investigate how different attribute weighting techniques affect the quality of decisions based on multiattribute value models. The weighting methods considered include equal weighting of all attributes, two methods for using judgments about the rank ordering of weights, and a method for using judgments about the ratios of weights. The question addressed is: how well does each method perform when based on judgments of attribute weights that are unbiased but subject to random error? The simulation results indicate that ratio weights were either better than rank order weights (when error in the ratio weights was small or moderate) or tied with them (when error was large). Both ratio weights and rank order weights were substantially superior to the equal weights method in all cases studied. Our findings suggest that it will usually be worth the extra time and effort required to assess ratio weights. In cases where the extra time or effort required is too great, rank order weights will usually give a good approximation to the true weights. Comparisons of the two rank-order weighting methods favored the rank-order-centroid method over the rank-sum method. © 1998 John Wiley & Sons, Ltd.
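
For reference, the weighting schemes being compared have simple closed forms. The sketch below (my own illustration, with hypothetical ratio judgements) computes equal, rank-sum, rank-order-centroid and normalised ratio weights for n ranked attributes.

    import numpy as np

    def rank_order_centroid(n):
        # ROC weight of the attribute ranked i-th (i = 1 most important): w_i = (1/n) * sum_{k=i}^{n} 1/k
        return np.array([sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)])

    def rank_sum(n):
        # RS weight: w_i = 2 * (n + 1 - i) / (n * (n + 1))
        return np.array([2.0 * (n + 1 - i) / (n * (n + 1)) for i in range(1, n + 1)])

    def ratio_weights(ratios):
        # elicited importance ratios relative to the least important attribute, normalised to sum to one
        r = np.asarray(ratios, dtype=float)
        return r / r.sum()

    n = 4
    print("equal    :", np.full(n, 1.0 / n))
    print("rank sum :", rank_sum(n))
    print("ROC      :", rank_order_centroid(n))
    print("ratio    :", ratio_weights([4.0, 3.0, 1.5, 1.0]))   # hypothetical judgements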

5.
Considering that the absence of measurement error in research is a rare phenomenon and its effects can be dramatic, we examine the impact of measurement error on propensity score (PS) analysis used to minimize selection bias in behavioral and social observational studies. A Monte Carlo study was conducted to explore the effects of measurement error on the treatment effect and balance estimates in PS analysis across seven different PS conditioning methods. In general, the results indicate that even low levels of measurement error in the covariates lead to substantial bias in estimates of treatment effects and concomitant reduction in confidence interval coverage across all methods of conditioning on the PS.
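
A rough Monte Carlo sketch of the phenomenon described above, using an assumed data-generating process and inverse-probability weighting as one of the possible ways of conditioning on the propensity score (the study itself compares seven conditioning methods):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 20_000
    x_true = rng.normal(size=n)                        # true confounder
    t = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x_true)))
    y = 1.0 * t + 1.0 * x_true + rng.normal(size=n)    # true treatment effect = 1.0

    for reliability in (1.0, 0.8, 0.5):                # share of true-score variance in the covariate
        err_sd = np.sqrt((1 - reliability) / reliability)
        x_obs = x_true + rng.normal(scale=err_sd, size=n)
        X = x_obs.reshape(-1, 1)
        ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
        w = t / ps + (1 - t) / (1 - ps)                # inverse-probability weights
        ate = (np.sum(w * t * y) / np.sum(w * t)
               - np.sum(w * (1 - t) * y) / np.sum(w * (1 - t)))
        print(f"reliability {reliability:.1f}: IPW ATE estimate = {ate:.2f} (true 1.0)")

With perfect reliability the weighted estimate recovers the true effect; as error in the covariate grows, residual confounding biases the estimate, which is the pattern the abstract reports.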

6.
Tokachi sub-prefecture in Hokkaido is one of the most famous dairy and crop farming regions in Japan. Tokachi faces various difficult problems, such as soil degradation, water contamination and unpleasant odours, because of the excessive use of chemical fertilizers and inappropriate treatment of livestock excretion. In this paper, we focus on Shihoro town, whose agricultural output is relatively large within Tokachi, and propose collaborative circulating farming with collective operations between arable and cattle farmers. Under the assumption that the decision-maker in this problem is a representative of a farming organization who hopes for sustainable agricultural development and values the intentions of local residents, including arable and cattle farmers in the region, we employ multi-attribute utility theory to evaluate multiple alternatives for the farming management problem. Copyright © 2012 John Wiley & Sons, Ltd.
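
For readers unfamiliar with the evaluation step, a bare-bones additive multi-attribute utility calculation is sketched below. The attributes, scaling constants and single-attribute utilities are hypothetical, and the additive form is only one common special case of the theory the paper applies.

    import numpy as np

    attributes = ["profit", "odour reduction", "soil quality", "labour burden"]
    weights = np.array([0.4, 0.2, 0.25, 0.15])         # assumed scaling constants, summing to one
    # single-attribute utilities in [0, 1] for three hypothetical farming alternatives
    scores = np.array([
        [0.7, 0.3, 0.4, 0.8],   # status quo
        [0.6, 0.7, 0.7, 0.5],   # partial collective operation
        [0.5, 0.9, 0.9, 0.3],   # full collaborative circulating farming
    ])
    for name, util in zip(["status quo", "partial", "full collaboration"], scores @ weights):
        print(f"{name:20s} overall utility = {util:.2f}")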

7.
    
This study focuses on the effect of restrictive information display on decision performance. Specifically, the study examines whether two channeled computerized information display systems and one non-channeled system result in significant differences in decision accuracy. The two channeled information display systems are designed to encourage the two general information processing patterns commonly observed in the experimental literature on multi-alternative, multi-attribute choice decisions: information processing by alternative and information processing by attribute. An information display program was developed that used restrictive information display to operationalize the channeled versus non-channeled manipulation. Channeling was implemented either by displaying information only by alternative or by displaying information only by attribute. The task was an operations scheduling problem that subjects completed under three levels of time pressure. The results indicate statistically significant effects on decision accuracy for both the type of information display and the time pressure manipulations. The highest decision accuracy was observed when information was displayed by alternative and when subjects were under the highest level of time pressure. Copyright © 2002 John Wiley & Sons, Ltd.

8.
    
Investments in capital goods are assessed with respect to the life cycle profit as well as the economic lifetime of the investment. The outcome of an investment with respect to these economic criteria is generally non-deterministic. An assessment of different investment options thus requires probabilistic modelling to explicitly account for the uncertainties. A process for the assessment of life cycle profit and the evaluation of the adequacy of the assessment is developed. The primary goal of the assessment process is to aid the decision-maker in structuring and quantifying investment decision problems characterized by multiple criteria and uncertainty. The adequacy of the assessment process can be evaluated by probabilistic criteria indicating the degree of uncertainty in the assessment. Bayesian inference is used to re-evaluate the initial assessment as evidence of the system performance becomes available; in this way, the verification of contracts of guarantee is also supported. Numerical examples are given to demonstrate features of the described life cycle profit assessment process. Copyright © 2001 John Wiley & Sons, Ltd.
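
The Bayesian re-evaluation step can be illustrated with a conjugate normal model for annual profit with known observation variance; all numbers below are hypothetical and the paper's actual probabilistic model is not reproduced here.

    import numpy as np

    prior_mean, prior_var = 100.0, 30.0 ** 2     # prior on expected annual profit (k€), assumed
    obs_var = 20.0 ** 2                          # assumed variance of observed annual profit
    evidence = np.array([70.0, 85.0, 90.0])      # observed profits once the system is in use

    k = len(evidence)
    post_var = 1.0 / (1.0 / prior_var + k / obs_var)
    post_mean = post_var * (prior_mean / prior_var + evidence.sum() / obs_var)
    print(f"posterior mean = {post_mean:.1f}, posterior sd = {np.sqrt(post_var):.1f}")

    horizon = 15                                 # hypothetical planning horizon in years
    print(f"expected life cycle profit over {horizon} years ~ {horizon * post_mean:.0f} k€")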

9.
10.
    
This note concerns two issues left unresolved in our study of lexicographic-order preservation and stochastic dominance in settings where preferences are represented by utility vectors, ordered lexicographically, and judgements emerge as matrices that premultiply utility vectors in expected utility sums. First, a generalization of the 'Conjecture Σ', which implied transitivity of a stochastic dominance relation under non-vacuous resolution-level information, is proved. Second, this paper comments on using resolution-level information in higher as well as in first degree stochastic dominance analysis. Copyright © 1999 John Wiley & Sons, Ltd.

11.
    
Decision-making is frequently affected by uncertainty and/or incomplete information, which turn decision-making into a complex task. It is often the case that some of the actors involved in decision-making are not sufficiently familiar with all of the issues to make the appropriate decisions. In this paper, we are concerned with missing information. Specifically, we deal with the problem of consistently completing an analytic hierarchy process comparison matrix and make use of graph theory to characterize such a completion. The characterization describes the set of solutions as a linear manifold, gives its degrees of freedom and, in particular, characterizes when the solution is unique, a result already known in the literature, for which we provide a completely independent proof. Additionally, in the case of nonuniqueness, we reduce the problem to the solution of nonsingular linear systems. In addition to obtaining the priority vector, our investigation also focuses on building the complete pairwise comparison matrix, a crucial step in the necessary interaction with the experts (balancing synthetic consistency and personal judgement). The performance of the proposed approach is confirmed.
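
A minimal sketch of the basic idea (not the paper's graph-theoretic characterization): when the known comparisons connect all alternatives, each missing entry can be filled consistently by multiplying known ratios along a path, after which any column of the completed matrix yields the priority vector. The judgements below are hypothetical.

    import numpy as np
    from itertools import product

    known = {(0, 1): 2.0, (1, 2): 3.0, (2, 3): 0.5}    # hypothetical judgements a_ij = w_i / w_j
    n = 4
    A = np.full((n, n), np.nan)
    np.fill_diagonal(A, 1.0)
    for (i, j), v in known.items():
        A[i, j], A[j, i] = v, 1.0 / v

    # repeatedly apply the consistency condition a_ik = a_ij * a_jk until no gaps remain
    changed = True
    while changed and np.isnan(A).any():
        changed = False
        for i, j, k in product(range(n), repeat=3):
            if np.isnan(A[i, k]) and not np.isnan(A[i, j]) and not np.isnan(A[j, k]):
                A[i, k] = A[i, j] * A[j, k]
                A[k, i] = 1.0 / A[i, k]
                changed = True

    print(np.round(A, 3))
    w = A[:, 0] / A[:, 0].sum()      # any column of a consistent matrix gives the priorities
    print("priority vector:", np.round(w, 3))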

12.
    
Numerical decision analysis (NDA), derived from statistical decision theory, is very well known. Verbal decision analysis (VDA), oriented towards so-called unstructured problems in which qualitative and uncertain factors dominate, is a newer direction in decision theory and practice. Verbal and numerical decision analyses (DAs) have previously been compared in an experimental setting with groups of students. This paper presents the results of a comparison in the context of live practical tasks. Both approaches were applied to two comparable practical decisions faced by Russian and US government agencies, each involving a choice among oil and gas transportation options. The resulting methodological insights are generalized into a systematic comparison of the strong and weak features of each approach. Copyright © 2000 John Wiley & Sons, Ltd.

13.
Wu Rui, Ding Shuliang, Gan Dengwen. Acta Psychologica Sinica, 2010, 42(3): 434-442
Testlets appear increasingly often in many kinds of examinations. Equating tests that contain testlets with standard IRT models may distort the equating results because the local dependence within testlets is ignored. To address this problem, we used the testlet-based two-parameter testlet model (2PTM) together with the IRT characteristic-curve equating method, took the error of the estimated equating coefficients as the evaluation criterion, and ran extensive Monte Carlo simulation experiments under several conditions, using the Wilcoxon signed-rank test as the basis for comparison. The results show that the 2PTM, which accounts for local dependence, produced significantly smaller equating errors than the 2PLM in the great majority of conditions. In addition, six different equating criteria were applied to equate the 2PTM, and their relative merits under different conditions were evaluated.
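
For orientation, the characteristic-curve (Stocking-Lord) equating criterion used with the ordinary 2PL can be sketched as below; the common-item parameters are hypothetical and the testlet model (2PTM) studied in the paper is not implemented here.

    import numpy as np
    from scipy.optimize import minimize

    def p2pl(theta, a, b):
        # 2PL item response probabilities on a theta grid (D = 1.7)
        return 1.0 / (1.0 + np.exp(-1.7 * a * (theta[:, None] - b)))

    # hypothetical common-item parameters on the old-form and new-form scales
    a_old = np.array([1.0, 1.4, 0.8]); b_old = np.array([-0.5, 0.2, 1.0])
    a_new = np.array([1.1, 1.3, 0.9]); b_new = np.array([-0.3, 0.4, 1.2])
    theta = np.linspace(-4, 4, 41)

    def stocking_lord(AB):
        A, B = AB
        tcc_old = p2pl(theta, a_old, b_old).sum(axis=1)
        tcc_new = p2pl(theta, a_new / A, A * b_new + B).sum(axis=1)   # new-form items rescaled
        return np.sum((tcc_old - tcc_new) ** 2)

    res = minimize(stocking_lord, x0=[1.0, 0.0], method="Nelder-Mead")
    print("equating coefficients A, B =", np.round(res.x, 3))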

14.
    
A new multilevel latent state graded response model for longitudinal multitrait-multimethod (MTMM) measurement designs combining structurally different and interchangeable methods is proposed. The model allows researchers to examine construct validity over time and to study the change and stability of constructs and method effects based on ordinal response variables. We show how Bayesian estimation techniques can address a number of important issues that typically arise in longitudinal multilevel MTMM studies and facilitate the estimation of the model presented. Estimation accuracy and the impact of between- and within-level sample sizes as well as different prior specifications on parameter recovery were investigated in a Monte Carlo simulation study. Findings indicate that the parameters of the model presented can be accurately estimated with Bayesian estimation methods in the case of low convergent validity with as few as 250 clusters and more than two observations within each cluster. The model was applied to well-being data from a longitudinal MTMM study assessing the change and stability of life satisfaction and subjective happiness in young adults after high-school graduation. Guidelines for empirical applications are provided, and advantages and limitations of a Bayesian approach to estimating longitudinal multilevel MTMM models are discussed.

15.
Wang Mengcheng, Deng Qiaowen. Acta Psychologica Sinica, 2016, (11): 1489-1498
This study used Monte Carlo simulation to examine the role of auxiliary variables when missing data are handled with full-information maximum likelihood (FIML) estimation. Specifically, we examined how the joint missing-data mechanism of the auxiliary and analysis variables, their joint missingness rate, the strength of their correlation, the number of auxiliary variables and the sample size affect the accuracy of the parameter estimates. The results show that, when the auxiliary and analysis variables are missing jointly: (1) with auxiliary variables that are missing completely at random, the results are more prone to bias; (2) under the MAR-MAR combination of mechanisms, including a single auxiliary variable is beneficial, whereas under the MAR-MCAR or MAR-MNAR combinations, including more than one auxiliary variable works better; and (3) including auxiliary variables that are only weakly correlated with the analysis variables is also beneficial.

16.
    
Multilevel structural equation models are increasingly applied in psychological research. With increasing model complexity, estimation becomes computationally demanding, and small sample sizes pose further challenges to estimation methods relying on asymptotic theory. Recent developments in Bayesian estimation techniques may help to overcome the shortcomings of classical estimation techniques. The use of potentially inaccurate prior information may, however, have detrimental effects, especially in small samples. The present Monte Carlo simulation study compares the statistical performance of classical estimation techniques with Bayesian estimation using different prior specifications for a two-level SEM with either continuous or ordinal indicators. Using two software programs (Mplus and Stan), differential effects of between- and within-level sample sizes on estimation accuracy were investigated. Moreover, it was tested to what extent inaccurate priors may have detrimental effects on parameter estimates in categorical indicator models. For continuous indicators, Bayesian estimation did not show performance advantages over ML. For categorical indicators, Bayesian estimation outperformed WLSMV only in the case of strongly informative accurate priors. Weakly informative inaccurate priors did not degrade the performance of the Bayesian approach, while strongly informative inaccurate priors led to severely biased estimates even with large sample sizes. With diffuse priors, Stan yielded better results than Mplus in terms of parameter estimates.

17.
Fang Jie, Zhang Minqiang. Acta Psychologica Sinica, 2012, 44(10): 1408-1420
Because the sampling distribution of the mediation effect ab is usually not normal, researchers have in recent years proposed three classes of methods that place no restrictions on the sampling distribution of ab and that are suitable for small and medium samples: the distribution-of-the-product method, the nonparametric bootstrap and Markov chain Monte Carlo (MCMC) methods. A simulation study compared the performance of the three classes of methods in mediation analysis. The results show that: (1) the MCMC method with prior information gave the most accurate point estimates of ab; (2) the MCMC method with prior information had the highest statistical power, but at the cost of underestimating the Type I error rate, while the bias-corrected nonparametric percentile bootstrap had the next-highest power, at the cost of overestimating the Type I error rate; and (3) the MCMC method with prior information gave the most accurate interval estimates of the mediation effect. The results suggest using the MCMC method with prior information when such information is available, and the bias-corrected nonparametric percentile bootstrap otherwise.
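
As a concrete illustration of one of the three classes of methods compared (the bias-corrected nonparametric percentile bootstrap), the sketch below computes a BC bootstrap confidence interval for ab on simulated data; the variable names and true path coefficients are illustrative only.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n = 200
    x = rng.normal(size=n)
    m = 0.4 * x + rng.normal(size=n)               # a-path = 0.4
    y = 0.5 * m + 0.1 * x + rng.normal(size=n)     # b-path = 0.5, so ab = 0.20

    def ab_hat(x, m, y):
        a = np.polyfit(x, m, 1)[0]                 # slope of m regressed on x
        design = np.column_stack([m, x, np.ones_like(x)])
        b = np.linalg.lstsq(design, y, rcond=None)[0][0]   # slope of y on m, controlling for x
        return a * b

    est = ab_hat(x, m, y)
    boot = np.empty(2000)
    for i in range(boot.size):
        idx = rng.integers(0, n, n)                # resample cases with replacement
        boot[i] = ab_hat(x[idx], m[idx], y[idx])

    # bias correction: shift the percentile cut-offs by z0 = Phi^{-1}( P(boot < est) )
    z0 = stats.norm.ppf(np.mean(boot < est))
    lo, hi = stats.norm.cdf(2 * z0 + stats.norm.ppf([0.025, 0.975]))
    ci = np.quantile(boot, [lo, hi])
    print(f"ab = {est:.3f}, BC 95% bootstrap CI = [{ci[0]:.3f}, {ci[1]:.3f}]")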

18.
EM and beyond
The basic theme of the EM algorithm, to repeatedly use complete-data methods to solve incomplete-data problems, is also a theme of several more recent statistical techniques. These techniques (multiple imputation, data augmentation, stochastic relaxation, and sampling importance resampling) combine simulation techniques with complete-data methods to attack problems that are difficult or impossible for EM. A preliminary version of this article was the Keynote Address at the 1987 European Meeting of the Psychometric Society, June 24-26, 1987, in Enschede, The Netherlands. The author wishes to thank the editor and reviewers for helpful comments.
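
To make the theme concrete, here is a textbook-style EM sketch (not taken from the article): a two-component Gaussian mixture with unit variances, where the component labels play the role of the missing data and the M-step applies the complete-data estimator with the labels replaced by their expectations.

    import numpy as np

    rng = np.random.default_rng(3)
    data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

    pi, mu = 0.5, np.array([-1.0, 1.0])        # crude starting values; unit variances assumed
    for _ in range(100):
        # E-step: posterior probability that each point came from component 0
        # (the common normalising constant of the two unit-variance densities cancels)
        d0 = pi * np.exp(-0.5 * (data - mu[0]) ** 2)
        d1 = (1 - pi) * np.exp(-0.5 * (data - mu[1]) ** 2)
        r0 = d0 / (d0 + d1)
        # M-step: complete-data MLEs with the unknown labels replaced by r0
        pi = r0.mean()
        mu = np.array([np.average(data, weights=r0), np.average(data, weights=1 - r0)])

    print(f"mixing weight ~ {pi:.2f}, means ~ {mu.round(2)}")   # expect roughly 0.3 and [-2, 3]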

19.
    
Approximately counting and sampling knowledge states from a knowledge space is a problem that is of interest for both applied and theoretical reasons. However, many knowledge spaces used in practice are far too large for standard statistical counting and estimation techniques to be useful. Thus, in this work we use an alternative technique for counting and sampling knowledge states from a knowledge space. This technique is based on a procedure variously known as subset simulation, the Holmes-Diaconis-Ross method, or multilevel splitting. We make extensive use of Markov chain Monte Carlo methods and, in particular, Gibbs sampling, and we analyse and test the accuracy of our results in numerical experiments.
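
A schematic multilevel-splitting sketch in the spirit of the procedure named above, applied to a deliberately tiny example and not reproducing the authors' implementation: the knowledge space induced by a chain of prerequisites over m items, for which the exact number of states (m + 1) is known and can be compared with the estimate. Levels are defined by the number of violated prerequisite pairs, and a Metropolis bit-flip sampler plays the role of the MCMC step.

    import numpy as np

    rng = np.random.default_rng(0)
    m = 20
    pairs = [(i, i + 1) for i in range(m - 1)]       # item i is a prerequisite of item i + 1

    def violations(K):
        return sum(K[b] and not K[a] for a, b in pairs)

    N, p0 = 2000, 0.1                                # particles per level, target survival fraction
    K = rng.integers(0, 2, size=(N, m), dtype=bool)  # uniform random subsets of the item set
    v = np.array([violations(k) for k in K])
    prob, level = 1.0, int(v.max())
    while level > 0:
        level = max(0, min(level - 1, int(np.quantile(v, p0))))   # next, stricter level
        prob *= np.mean(v <= level)                  # conditional-probability estimate
        seeds = K[v <= level]
        K = seeds[rng.integers(0, len(seeds), N)].copy()
        v = np.array([violations(k) for k in K])
        for _ in range(10):                          # Metropolis sweeps inside the level set
            for s in range(N):
                j = rng.integers(m)
                cand = K[s].copy(); cand[j] = ~cand[j]
                cv = violations(cand)
                if cv <= level:                      # symmetric proposal, uniform target
                    K[s], v[s] = cand, cv

    print(f"estimated number of states ~ {prob * 2 ** m:.0f}  (exact: {m + 1})")

The estimate is stochastic, but with these settings it is typically of the right order of magnitude, which is the point of the method: the target proportion, 21 states out of 2^20 subsets, is far too small for naive Monte Carlo counting.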
