Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper details the results of an empirical investigation of the random errors associated with decomposition estimates of multiattribute utility. In a riskless setting, two groups of subjects were asked to evaluate multiattribute alternatives both holistically and with the use of an additive decomposition. For one group, the alternatives were described in terms of three attributes, and for the other in terms of five. Estimates of random error associated with the various elicitations (holistic, single-attribute utility, scaling constants, or weights) were obtained using a test-retest format. It was found for both groups that the additive decomposition had significantly smaller levels of random error than the holistic evaluation. However, the number of attributes did not seem to make a significant difference to the amount of random error associated with the decomposition estimates. The levels of error found in the various elicitations were consistent with theoretical bounds that have recently been proposed in the literature. These results show that the structure imposed on the problem through decomposition results in measurable improvement in quality of the multiattribute utility judgements, and contribute to a greater understanding of the decomposition method in decision analysis.
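The additive decomposition discussed above combines separately elicited single-attribute utilities with weights. A minimal sketch, with entirely hypothetical attributes, utility functions, and weights:

```python
def additive_utility(x, u, w):
    """U(x) = sum_i w_i * u_i(x_i), with weights w_i summing to 1."""
    assert abs(sum(w) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(wi * ui(xi) for xi, ui, wi in zip(x, u, w))

# Three hypothetical attributes of a job offer, each rescaled to a
# 0-1 single-attribute utility:
u = [lambda s: min(s / 100_000, 1.0),   # salary (USD/year)
     lambda c: max(1 - c / 60, 0.0),    # commute (minutes)
     lambda v: min(v / 30, 1.0)]        # vacation (days)
w = [0.5, 0.3, 0.2]

score = additive_utility((80_000, 30, 15), u, w)
# 0.5*0.8 + 0.3*0.5 + 0.2*0.5 = 0.65
```

Each single-attribute utility and each weight can be elicited (and re-elicited, in a test-retest format) independently, which is what lets decomposition dilute random error relative to a single holistic judgment.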

2.
Utility independence is a central condition in multiattribute utility theory, where attributes of outcomes are aggregated in the context of risk. The aggregation of attributes in the absence of risk is studied in conjoint measurement. In conjoint measurement, standard sequences have been widely used to empirically measure and test utility functions, and to theoretically analyze them. This paper shows that utility independence and standard sequences are closely related: utility independence is equivalent to a standard sequence invariance condition when applied to risk. This simple relation between two widely used conditions in adjacent fields of research is surprising and useful. It facilitates the testing of utility independence because standard sequences are flexible and can avoid cancellation biases that affect direct tests of utility independence. Extensions of our results to nonexpected utility models can now be provided easily. We discuss applications to the measurement of quality-adjusted life-years (QALY) in the health domain.

3.
We show that only two simple trade-off judgments are sufficient to determine whether the multiplicative multiattribute model assumes its additive form, regardless of the number of attributes in the model. This additivity condition offers a useful alternative to the test based on multiattribute lotteries commonly presented in textbooks. It can make the determination of additivity easier and more reliable. © 1997 John Wiley & Sons, Ltd.
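For context, the standard result behind this additivity question can be sketched as follows: the multiplicative model 1 + kU = Π(1 + k·k_i·u_i) collapses to the additive form U = Σ k_i·u_i exactly when the scaling constants k_i sum to 1 (the master constant k then tends to 0). The check below is illustrative; the paper's own contribution is a two-judgment trade-off test for this condition, which is not reproduced here.

```python
def is_additive(k, tol=1e-9):
    """Multiplicative model 1 + kU = prod(1 + k*k_i*u_i) reduces to the
    additive form U = sum(k_i*u_i) iff the scaling constants sum to 1."""
    return abs(sum(k) - 1.0) < tol

print(is_additive([0.5, 0.3, 0.2]))  # True: additive form applies
print(is_additive([0.6, 0.5, 0.4]))  # False: genuinely multiplicative
```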

4.
In expected utility many results have been derived that give necessary and/or sufficient conditions for a multivariate utility function to be decomposable into lower-dimensional functions. In particular, multilinear, multiplicative and additive decompositions have been widely discussed. These utility functions can be more easily assessed in practical situations. In this paper we present a theory of decomposition in the context of nonadditive expected utility such as anticipated utility or Choquet expected utility. We show that many of the results used in conventional expected utility carry over to these more general frameworks. If preferences over lotteries depend only on the marginal probability distributions, then in expected utility the utility function is additively decomposable. We show that in anticipated utility the marginality condition implies not only that the utility function is additively decomposable but also that the distortion function is the identity function. We further demonstrate that a decision maker who is bivariate risk neutral has a utility function that is additively decomposable and a distortion function q for which q(½) = ½.
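The role of the distortion function in anticipated (rank-dependent) utility can be illustrated with a short sketch, using a hypothetical utility function and lottery. When the distortion q is the identity, the model reduces to ordinary expected utility, which is the benchmark the marginality result above pins down:

```python
def anticipated_utility(outcomes, probs, u, q):
    """Rank-dependent (anticipated) utility: rank outcomes from best to
    worst; decision weights are increments of the distortion q applied
    to cumulative probabilities."""
    ranked = sorted(zip(outcomes, probs), key=lambda t: u(t[0]), reverse=True)
    total, cum = 0.0, 0.0
    for x, p in ranked:
        total += (q(cum + p) - q(cum)) * u(x)
        cum += p
    return total

u = lambda x: x ** 0.5                        # hypothetical utility function
outcomes, probs = [100.0, 25.0, 0.0], [0.2, 0.5, 0.3]

# With the identity distortion, anticipated utility equals expected utility:
eu = sum(p * u(x) for x, p in zip(outcomes, probs))
au = anticipated_utility(outcomes, probs, u, q=lambda p: p)
```

A convex distortion (e.g. `q=lambda p: p**2`) would shift weight toward the worse-ranked outcomes, modelling pessimism.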

5.
Multiattribute analysis depends on measurement of values and weights. Unless these measures reflect the decision maker's true values and weights, the multiattribute formula may put a less-preferred alternative in first place. To avoid such disordinality requires stringent measurement conditions: First, the values and weights must be on linear (equal interval) or ratio (known zero) scales. Second, these scales must satisfy a condition of common unit across disparate attribute dimensions. Most methods of range adjustment beg both of these measurement questions. Functional measurement theory can solve both problems and so can be useful in multiattribute analysis. Past work has established the operation of a general cognitive algebra as an empirical reality. The averaging model, in particular, makes possible the definition and estimation of weights and values as distinct psychological parameters. It can also solve the problem of common unit. Cognitive algebra thus provides a grounded theoretical foundation on which to develop self-estimation methodology, in which decision makers provide direct estimates of their values and weights. The logic is straightforward. Functional measurement can analyze global judgments to obtain validated psychological scales. These scales may then be used as validational criteria for the self-estimates. Procedures to eliminate biases in the self-estimates can thus be tested and refined in well-learned multiattribute tasks, such as judgments of meals, in which global judgments are trustworthy. Once developed, such self-estimation procedures may be used with some confidence for general multiattribute analysis. A number of studies from 20-odd years of work on the theory of information integration are summarized to show good, although not unmixed, promise for self-estimation.

6.
The “Peak-End rule”, which averages only the most extreme (Peak) and the final (End) impressions, is often a better predictor of overall evaluations of experiences than average impressions. We investigate the similarity between the evaluations of experiences based on Peak-End and average impressions. We show that the use of the Peak-End rule in cross-experience comparisons can be compatible with preferences for experiences that are better on average. Two conditions are shown to make rankings of experiences similar regardless of the aggregation rule: (i) individual heterogeneity in the perception of stimuli, and (ii) persistence in impressions. We describe their effects theoretically, and obtain empirical estimates using data from previous research. Higher estimates are shown to increase correlational measures of association between the Peak-End and average impressions. The high association is shown to be not only a theoretical possibility but an empirical fact.
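The two aggregation rules being compared can be sketched in a few lines; the impression series below is hypothetical:

```python
def peak_end(impressions):
    """Average only the most extreme (Peak) and the final (End) impression."""
    peak = max(impressions, key=abs)
    return (peak + impressions[-1]) / 2

def mean_impression(impressions):
    """Ordinary average of all moment-by-moment impressions."""
    return sum(impressions) / len(impressions)

episode = [2, 5, 9, 4, 6]        # hypothetical moment-by-moment ratings
pe = peak_end(episode)           # (9 + 6) / 2 = 7.5
avg = mean_impression(episode)   # 26 / 5 = 5.2
```

The paper's question is when rankings of whole experiences by `peak_end` and by `mean_impression` coincide, not which single number is "right".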

7.
8.
Three studies examined the motives underlying people’s desire to punish. In previous research, participants have read hypothetical criminal scenarios and assigned “fair” sentences to the perpetrators. Systematic manipulations within these scenarios revealed high sensitivity to factors associated with motives of retribution, but low sensitivity to utilitarian motives. This research identifies the types of information that people seek when punishing criminals, and explores how different types of information affect punishments and confidence ratings. Study 1 demonstrated that retribution information is more relevant to punishment than either deterrence or incapacitation information. Study 2 traced the information that people actually seek when punishing others and found a consistent preference for retribution information. Finally, Study 3 confirmed that retribution information increases participant confidence in assigned punishments. The results thus provide converging evidence that people punish primarily on the basis of retribution.

9.
We examined age-related changes of executive functions by means of random noun generation. Consistent with previous observations on random letter generation, older participants produced more prepotent responses than younger ones. In the case of random noun generation, prepotent responses are nouns of the same category as the preceding noun. In contrast to previous observations, older participants exhibited stronger repetition avoidance and a stronger tendency toward local evenness—that is, toward equal frequencies of the alternative responses even in short subsequences. These data suggest that at higher adult age inhibition of prepotent responses is impaired. In addition, strategic attentional processes of response selection are strengthened, in particular the application of a heuristic for randomness. In this sense response selection is more controlled in older than in younger adults.

10.
11.
This experiment sought to determine whether previously found metric violations of additive expectancy-value models (cf. J. C. Shanteau, Journal of Experimental Psychology, 1974, 103, 680–691; J. G. Lynch and J. L. Cohen, Journal of Personality and Social Psychology, 1978, 36, 1138–1151) were attributable to the inappropriateness of these models or to nonlinearities in the relationship between numerical ratings and underlying psychological impressions. Undergraduate participants performed two tasks employing the same experimental stimuli. In the first task, they rated the subjective values of hypothetical bets, judged separately and in combination. In the second task, they made pairwise comparisons of the same bets in terms of preference. The use of the same experimental stimuli in both tasks allowed a test of alternative models of utility judgment through application of the criterion of scale convergence (M. H. Birnbaum & C. T. Veit, Perception and Psychophysics, 1974, 15, 7–15). Results suggested that the additive expectancy-value model of judgments of the utilities of combinations of outcomes should be replaced by a weighted averaging rule in which the weight given to the value of each outcome in the averaging process is greater when this value is negative and extreme than when it is neutral.

12.
Objectives: The aim of this study was to further examine the relationship between the Quiet Eye (QE; Vickers, 1996) and performance. We aimed to scrutinise the relationship between QE and shot outcome and to replicate the robust relationship between QE and expertise. Based on recent findings (Cooke et al., 2015) showing that motor planning is dependent upon the outcome of a previous attempt, we wanted to examine the influence of prior performance on the functionality of the QE. Design: We performed a 2 (expertise) x 2 (outcome) mixed design study. Participants performed golf putts until they had achieved 5 successful (hits) and 5 unsuccessful (misses) attempts. Methods: 18 experienced and 21 novice golfers participated in the study. Putts were taken from ten feet while wearing a mobile eye tracker. Results: Experienced golfers had consistently longer QE durations than novices, but there was no difference in QE between randomly chosen hits and misses. However, QE durations were significantly longer on hits directly following a miss, but significantly shorter on misses following a miss. Conclusions: This is the first study to have examined QE duration as a consequence of prior performance. Our findings highlight the important role of QE in recovering from an error and improving performance. The findings add further support for the response programming function of the QE, as additional ‘programming’ was needed to recover from an error. Findings also highlight the potential for a link between QE and the allocation of attentional resources to the task (effort).

13.
Mistakes in skilled performance are often observed to be slower than correct actions. This error slowing has been associated with cognitive control processes involved in performance monitoring and error detection. A limited literature on skilled actions, however, suggests that preerror actions may also be slower than accurate actions. This contrasts with findings from unskilled, discrete trial tasks, where preerror performance is usually faster than accurate performance. We tested 3 predictions about error-related behavioural changes in continuous typing performance. We asked participants to type 100 sentences without visual feedback. We found that (a) performance before errors was no different in speed than that before correct key-presses, (b) error and posterror key-presses were slower than matched correct key-presses, and (c) errors were preceded by greater variability in speed than were matched correct key-presses. Our results suggest that errors are preceded by a behavioural signature, which may indicate breakdown of fluid cognition, and that the effects of error detection on performance (error and posterror slowing) can be dissociated from breakdown effects (preerror increase in variability).

14.
New technologies have expanded the available methods to help individuals learn or re-learn motor skills. Despite equivocal evidence for the impact of robotic guidance for motor skill acquisition (Marchal-Crespo, McHughen, Cramer, & Reinkensmeyer, 2010), we have recently shown that robotic guidance mixed with unassisted practice can significantly improve the learning of a golf putting task (Bested & Tremblay, 2018). To understand the mechanisms associated with this new mixed approach (i.e., unassisted and robot-guided practice) for the learning of a golf putting task, the current study aimed to determine if such mixed practice extends to one’s ability to detect errors. Participants completed a pre-test, an acquisition phase, as well as immediate, delayed (24-h), and transfer post-tests. During the pre-test, kinematic data from the putter was converted into highly accurate, consistent, and smooth trajectories delivered by a robot arm. During acquisition, 2 groups performed putts towards 3 different targets with robotic guidance on either 0% or 50% of acquisition trials. Only the 50% guidance group significantly reduced ball endpoint distance and variability, as well as ball endpoint error estimations, between the pre-test and the post-tests (i.e., immediate retention, 24-h retention, and 24-h transfer). The current study showed that allowing one to experience both robotic guidance and unassisted (i.e., errorful) performances enhances one’s ability to detect errors, which can explain the beneficial motor learning effects of a mixed practice schedule.

15.
In the organizational behaviour and organizational psychology literature, individual errors are considered either as sources of blame (error-prevention culture) or as sources of learning and something to be encouraged in order to promote innovation (error-management culture). While we can assume that a third perspective exists somewhere in between, error management is usually considered as the best solution. Yet scholars have tended to neglect the planned and directed transition from a pure error-prevention to an error-management culture. We thus examine to what extent and under what conditions an organization can culturally transform the representation of individual errors through its business leaders. To answer this question, we conducted a qualitative study on the case of a French insurance company. We portray a realistic image of the promotion of an error management culture, pointing out certain limitations and constraints, while nonetheless identifying some conditions for successful error reframing.

16.
This article explores the consequences for factorial additivity in a Sternberg [(1969). The discovery of processing stages: Extensions of Donders' method. In W. G. Koster (Ed.), Attention and Performance II, Acta Psychologica, 30, 276–315] additive-factors paradigm of the assumptions adopted by models of perception that relate the representation of a stimulus to decision time. Three example models, signal detection theory with the latency-distance hypothesis, stochastic general recognition theory, and a random walk model of exemplar classification, are interrogated to determine what type of interaction they predict factors will yield in a hypothetical factorial (choice) reaction time experiment in which the ‘empirical’ factors’ effects are manifest as parameter changes. All frameworks make the critical assumption that decision time depends on the perceptual representation of the stimulus as well as the architecture. As a consequence, nonadditivity of factors thought to affect different “stages” in the classical approach emerges within the current modeling approach. The nature of this influence is revealed through analytic investigations and simulation. Earlier empirical findings of failures of selective influence that have defied adequate explanation are reinterpreted in light of the present findings.

17.
We conceptualize probabilistic choice as the result of the simultaneous pursuit of multiple goals in a vector optimization representation, which is reduced to a scalar optimization that implies goal balancing. The majority of prior theoretical and empirical work on such probabilistic choice is based on random utility models, the most basic of which assume that each choice option has a valuation that has a deterministic (systematic) component plus a random component determined by some specified distribution. An alternate approach to probabilistic choice has considered maximization of one quantity (e.g., utility), subject to constraints on one or more other quantities (e.g., cost). The multiple goal perspective integrates the results regarding the well-studied multinomial logit model of probabilistic choice that has been derived from each of the above approaches; extends the results to other models in the generalized extreme value (GEV) class; and relates them to recent axiomatic work on the utility of gambling.
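The multinomial logit model referred to above has a compact closed form; a minimal sketch with hypothetical valuations (the random utility derivation assumes i.i.d. Gumbel-distributed noise on each option's valuation):

```python
import math

def logit_choice_probs(v):
    """Multinomial logit choice rule: P(i) = exp(v_i) / sum_j exp(v_j)."""
    m = max(v)                           # subtract max for numerical stability
    exps = [math.exp(vi - m) for vi in v]
    z = sum(exps)
    return [e / z for e in exps]

probs = logit_choice_probs([1.0, 2.0, 0.5])
# Probabilities sum to 1; the option with the highest valuation is chosen
# most often, but every option retains positive probability.
```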

18.
Visuospatial tasks are particularly proficient at eliciting gender differences during neuropsychological performance. Here we tested the hypothesis that gender and education are related to different types of visuospatial errors on a task of line orientation that allowed the independent scoring of correct responses ("hits", or H) and one type of incorrect responses ("commission errors", or CE). We studied 343 volunteers of roughly comparable ages and with different levels of education. Education and gender were significantly associated with H scores, which were higher in men and in the groups with higher education. In contrast, the differences between men and women on CE depended on education. We concluded that (I) the ability to find the correct responses differs from the ability to avoid the wrong responses amidst an array of possible alternatives, and that (II) education interacts with gender to promote a stable performance on CE earlier in men than in women.

19.
Three studies involving 176 undergraduates examined the personality-related correlates of tendencies to slow down following errors in choice reaction time tasks. Such tendencies were hypothesized to tap individual differences in threat reactivity processes and therefore be relevant to the prediction of phobic-like fear (Study 1) and displayed anxiety as rated by informants (Studies 2 and 3). However, on the basis of the idea that high levels of extraversion may suppress threat reactivity processes, it was hypothesized that extraversion and post-error slowing tendencies would interact in predicting the dependent measures. The studies supported the latter interactive hypothesis in that post-error slowing tendencies were predictive of displayed anxiety at low, but not high, levels of extraversion. The discussion focuses on the respective roles of error-reactivity processes and extraversion in predicting behavioral inhibition and displayed anxiety.

20.
The principle of ‘divide and conquer’ (DAC) suggests that complex decision problems should be decomposed into smaller, more manageable parts, and that these parts should be logically aggregated to derive an overall value for each alternative. Decompositional procedures have been contrasted with holistic evaluations that require decision makers to simultaneously consider all the relevant attributes of the alternatives under consideration (Fischer, 1977). One area where decompositional procedures have a clear advantage over holistic procedures is in the reduction of random error (Ravinder, 1992; Ravinder and Kleinmuntz, 1991; Kleinmuntz, 1990). Adopting the framework originally developed by Ravinder and colleagues, this paper details the results of a study of the random error variances associated with another popular multi-criteria decision-making technique, the Analytic Hierarchy Process (AHP) (Saaty, 1977, 1980), as well as the random error variances of a holistic version of the Analytic Hierarchy Process (Jensen, 1983). In addition, data concerning various psychometric properties (e.g. the convergent validity and temporal stability) and values of AHP inconsistency are reported for both the decompositional and holistic evaluations. The results of the study show that the Ravinder and Kleinmuntz (1991) error-propagation framework extends to the AHP and decompositional AHP judgments are more consistent than their holistic counterparts. Copyright © 2001 John Wiley & Sons, Ltd.
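The decompositional step of the AHP derives priority weights from a matrix of pairwise importance judgments, classically as its normalized principal eigenvector (Saaty, 1977). A minimal sketch using power iteration; the 3x3 comparison matrix is hypothetical:

```python
def ahp_weights(A, iters=200):
    """Priority weights as the normalized principal eigenvector of the
    pairwise-comparison matrix A, computed by power iteration."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]          # renormalize each step
    return w

# Hypothetical comparison matrix: A[i][j] is the judged importance of
# criterion i over criterion j on Saaty's 1-9 scale, with reciprocals
# below the diagonal.
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_weights(A)   # sums to 1; the first criterion gets the largest weight
```

A matrix that is not perfectly consistent (a_ik ≠ a_ij·a_jk) still yields weights, and the degree of inconsistency is exactly what the study above measures for decompositional versus holistic judgments.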


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)