Similar Articles
12 similar articles found.
1.
Probabilistic models of set-dependent and attribute-level best-worst choice
We characterize a class of probabilistic choice models where the choice probabilities depend on two scales, one with a value for each available option and the other with a value for the set of available options. Then, we develop similar results for a task in which a person is presented with a profile of attributes, each at a pre-specified level, and chooses the best or the best and the worst of those attribute-levels. The latter design is an important variant on previous designs using best-worst choice to elicit preference information, and there is a variety of evidence that it yields reliable, interpretable data. Nonetheless, the data from a single such task cannot yield separate measures of the “importance” of an attribute and the “utility” of an attribute-level. We discuss various empirical designs, involving more than one task of the above general type, that may allow such separation of importance and utility.
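As a point of reference for the kind of model characterized here, the following Python sketch computes best-worst choice probabilities under the common maxdiff form, in which the probability of picking x as best and y as worst from a set A is proportional to b(x)/b(y). This is only an illustrative best-worst model with hypothetical scale values, not the article's specific set-dependent representation.

```python
from itertools import permutations

def maxdiff_probability(best, worst, options, b):
    """P(best=x, worst=y | A) proportional to b(x)/b(y),
    a common probabilistic best-worst (maxdiff) choice model."""
    numerator = b[best] / b[worst]
    denominator = sum(b[p] / b[q] for p, q in permutations(options, 2))
    return numerator / denominator

# Hypothetical scale values for four attribute-levels
b = {"A": 3.0, "B": 2.0, "C": 1.5, "D": 0.5}
options = list(b)

print(maxdiff_probability("A", "D", options, b))   # most likely best-worst pair
print(sum(maxdiff_probability(x, y, options, b)
          for x, y in permutations(options, 2)))   # probabilities sum to 1
```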

2.
This article continues our exploration of the utility of gambling, but here under the assumption of a non-additive but p(olynomial)-additive representation of joint receipt. That assumption causes the utility of gambling term to have a multiplicative impact, rather than just an additive one, on what amounts to ordinary weighted utility. Assuming the two rational recursions known as branching and upper gamble decomposition, we investigate separately the rational property of segregation and the non-rational one of duplex decomposition. Under segregation, we show that each pair of disjoint events either exhibits weight complementarity or both support no utility of gambling, a property called UofG-singular. We develop representations for both cases. The former representation is a simple weighted utility with each term multiplied by a function of the event underlying that branch. The latter representation is an ordinary rank-dependent utility with Choquet weights but with no utility of gambling. Under duplex decomposition, we show that weights have the intuitively unacceptable property that there is essentially no dependence upon the events, making duplex decomposition, in this context, of little behavioral interest.
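The "ordinary rank-dependent utility with Choquet weights" mentioned above has a standard computational form. The sketch below is a minimal illustration of that general form with an assumed concave utility and an assumed probability-weighting function; it is not the article's specific representation.

```python
def rank_dependent_utility(branches, utility, weight):
    """Rank-dependent utility of a gamble given as (outcome, probability) pairs.
    Outcomes are ranked from best to worst; each branch receives the Choquet
    weight W(cum. prob. including branch) - W(cum. prob. excluding branch)."""
    ranked = sorted(branches, key=lambda br: utility(br[0]), reverse=True)
    total, cum = 0.0, 0.0
    for outcome, prob in ranked:
        total += utility(outcome) * (weight(cum + prob) - weight(cum))
        cum += prob
    return total

# Hypothetical utility and weighting functions
u = lambda x: x ** 0.8        # concave utility of money
w = lambda p: p ** 0.7        # simple probability-weighting function

gamble = [(100, 0.5), (0, 0.5)]
print(rank_dependent_utility(gamble, u, w))
```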

3.
A standard approach to distinguishing people’s risk preferences is to estimate a random utility model using a power utility function to characterize the preferences and a logit function to capture choice consistency. We demonstrate that with often-used choice situations, this model suffers from empirical underidentification, meaning that parameters cannot be estimated precisely. With simulations of estimation accuracy and Kullback–Leibler divergence measures, we examined factors that potentially mitigate this problem. First, using a choice set that guarantees a switch in the utility order between two risky gambles in the range of plausible values leads to higher estimation accuracy than randomly created choice sets or the purpose-built choice sets common in the literature. Second, parameter estimates are regularly correlated, which contributes to empirical underidentification. Examining standardizations of the utility scale, we show that they mitigate this correlation and additionally improve the estimation accuracy for choice consistency. Yet, they can have detrimental effects on the estimation accuracy of risk preference. Finally, we also show how repeated versus distinct choice sets and an increase in observations affect estimation accuracy. Together, these results should help researchers make informed design choices to estimate parameters in the random utility model more precisely.
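A minimal sketch of the random utility model described above, combining a power utility function with a logistic (logit) choice rule. The gamble values and parameter settings are hypothetical; note how the risk-preference parameter alpha and the consistency parameter phi jointly shape the choice probability, which is the root of the underidentification the article studies.

```python
import math

def expected_power_utility(gamble, alpha):
    """Expected utility of a gamble [(outcome, probability), ...] with u(x) = x**alpha."""
    return sum(p * (x ** alpha) for x, p in gamble)

def choice_probability(gamble_a, gamble_b, alpha, phi):
    """Logit probability of choosing gamble A over gamble B;
    phi governs choice consistency (larger = more deterministic)."""
    diff = expected_power_utility(gamble_a, alpha) - expected_power_utility(gamble_b, alpha)
    return 1.0 / (1.0 + math.exp(-phi * diff))

# Hypothetical choice situation: a safe option versus a risky gamble
safe  = [(30, 1.0)]
risky = [(100, 0.4), (0, 0.6)]
print(choice_probability(risky, safe, alpha=0.7, phi=0.2))
```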

4.
Three severely mentally retarded adolescents were studied under discrete-trial procedures in which a choice was arranged between edible reinforcers that differed in magnitude and, in some conditions, delay. In the absence of delays, the larger reinforcer was consistently chosen. Under conditions in which the smaller reinforcer was not delayed, increasing the delay to delivery of the larger reinforcer decreased the percentage of trials in which that reinforcer was chosen. All subjects directed the majority of choice responses to the smaller reinforcer when the larger reinforcer was sufficiently delayed, although the value at which this occurred differed across subjects. Under conditions in which the larger reinforcer initially was sufficiently delayed to result in preference for the smaller one, progressively increasing the delay to both reinforcers in 5-s increments increased the percentage of trials on which the larger reinforcer was chosen. At sufficiently long delays, 2 of the subjects consistently chose the larger, but more delayed, reinforcer, and the 3rd subject chose that reinforcer on half of the trials. These results are consistent with the findings of prior studies in which adult humans responded to terminate noise and pigeons responded to produce food.

5.
Three experiments were conducted to examine the effects of exposure to a poisoned conspecific on subsequent food aversion in rats. In Experiment 1A, rats that had been aversively conditioned to a cocoa-flavored food were exposed to a poisoned conspecific that had eaten the same food. On the subsequent choice test, the animals increased their aversion to that food. These results were reconfirmed in Experiment 1B, in which a cinnamon-flavored food was used as the stimulus. In Experiment 2, subjects were first exposed to a poisoned conspecific and then conditioned to the food which the conspecific had eaten. On the test, they showed no sign of increased aversion to that food.

6.
7.
This article introduces the Attitudinal Entropy (AE) framework, which builds on the Causal Attitude Network model that conceptualizes attitudes as Ising networks. The AE framework rests on three propositions. First, attitude inconsistency and instability are two related indications of attitudinal entropy, a measure of randomness derived from thermodynamics. Second, energy of attitude configurations serves as a local processing strategy to reduce the global entropy of attitude networks. Third, directing attention to and thinking about attitude objects reduces attitudinal entropy. We first discuss several determinants of attitudinal entropy reduction and show that several findings in the attitude literature, such as the mere thought effect on attitude polarization and the effects of heuristic versus systematic processing of arguments, follow from the AE framework. Second, we discuss the AE framework’s implications for ambivalence and cognitive dissonance.
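To make the Ising-network framing concrete, the sketch below computes the energy and the Shannon entropy of a tiny network of binary (+1/−1) attitude nodes under a Boltzmann distribution. The thresholds, couplings, and the inverse-temperature parameter standing in for attention/thought are hypothetical; this illustrates only the general Ising formalism, not the article's specific model.

```python
import math
from itertools import product

def energy(state, thresholds, couplings):
    """Ising energy: E(x) = -sum_i tau_i * x_i - sum_{i<j} omega_ij * x_i * x_j."""
    e = -sum(t * s for t, s in zip(thresholds, state))
    for (i, j), w in couplings.items():
        e -= w * state[i] * state[j]
    return e

def boltzmann_entropy(thresholds, couplings, beta):
    """Shannon entropy of the Boltzmann distribution over all configurations;
    a larger beta (more attention/thought) concentrates probability mass."""
    states = list(product([-1, 1], repeat=len(thresholds)))
    weights = [math.exp(-beta * energy(s, thresholds, couplings)) for s in states]
    z = sum(weights)
    probs = [w / z for w in weights]
    return -sum(p * math.log(p) for p in probs if p > 0)

tau = [0.2, -0.1, 0.3]                            # hypothetical node thresholds
omega = {(0, 1): 0.5, (1, 2): 0.4, (0, 2): 0.3}   # hypothetical positive couplings
print(boltzmann_entropy(tau, omega, beta=0.5))
print(boltzmann_entropy(tau, omega, beta=2.0))    # more "thought" -> lower entropy
```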

8.
9.
Measures of epistemic utility are used by formal epistemologists to make determinations of epistemic betterness among cognitive states. The Brier rule is the most popular choice (by far) among formal epistemologists for such a measure. In this paper, however, we show that the Brier rule is sometimes seriously wrong about whether one cognitive state is epistemically better than another. In particular, there are cases where an agent gets evidence that definitively eliminates a false hypothesis (and the probabilities assigned to the other hypotheses stay in the same ratios), but where the Brier rule says that things have become epistemically worse. Along the way to this ‘elimination experiment’ counter-example to the Brier rule as a measure of epistemic utility, we identify several useful monotonicity principles for epistemic betterness. We also reply to several potential objections to this counter-example.
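The kind of case the abstract describes can be reproduced numerically. The sketch below uses hypothetical credences (not the paper's own example): evidence eliminates a false hypothesis, the remaining credences are renormalized in the same ratios, and yet the Brier inaccuracy score increases.

```python
def brier_inaccuracy(credences, true_index):
    """Sum of squared distances between credences and the truth-values
    (1 for the true hypothesis, 0 for the others); lower is epistemically better."""
    return sum((c - (1.0 if i == true_index else 0.0)) ** 2
               for i, c in enumerate(credences))

# Hypothetical credences over three hypotheses; H1 (index 0) is true.
before = [0.1, 0.1, 0.8]

# Evidence eliminates the false hypothesis H2 (index 1);
# the remaining credences keep their ratios and are renormalized.
after = [0.1 / 0.9, 0.0, 0.8 / 0.9]

print(brier_inaccuracy(before, true_index=0))  # ~1.46
print(brier_inaccuracy(after,  true_index=0))  # ~1.58 -- worse, despite better evidence
```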

10.
Subjects worked on a task that was described as either easy or difficult. When the task was thought to be difficult, subjects high in resultant achievement motivation performed better than those low in resultant achievement motivation. However, when the task was perceived as easy, the high motive group performed worse than the low group. These results confirm a prediction from Kukla's attributional theory of performance, in which resultant achievement motivation is conceived as a measure of perceived ability. They are not, however, derivable from Spence's theory of the effects on performance of objective task difficulty, nor from Weiner's hypothesis concerning the motivational effects of success and failure. On the other hand, Kukla's theory provides an explanation for both the data usually cited in support of Spence's position and those taken to confirm Weiner's hypothesis. The relationship between the present results and Atkinson's theory of achievement motivation, which also hypothesizes an effect of perceived difficulty on performance, is discussed.

11.
It is frequently assumed that the mental activity which leads to a given response is made up of separable components or processes. One or more of the processes are assumed to contribute to the time required to respond. Computation of the mean, variance, and distribution of the reaction time is relatively straightforward when all processes are arranged in series or parallel. However, such is not the case when the processes have complex arrangements. A solution to a useful special case of the above problem is proposed. Specifically, it is shown that simple computations yield closed form expressions for the mean, variance, and distribution of reaction time when the processes can be arranged in a stochastic PERT network and when the durations of individual processes are sums of mutually independent, exponentially distributed random variables. The method of solution relies on the construction of an Order-of-Processing (OP) diagram from the original PERT network representation of behavior.
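As a concrete illustration of the simple serial case mentioned above, the sketch below computes the exact mean and variance of a reaction time built from independent exponential stages in series and checks them by Monte Carlo simulation. The stage rates are hypothetical, and the article's general stochastic-PERT/OP-diagram machinery is not reproduced here.

```python
import random
import statistics

def serial_exponential_moments(rates):
    """Mean and variance of a sum of independent exponential stages.
    For rate lambda: mean = 1/lambda and variance = 1/lambda**2; both add across stages."""
    mean = sum(1.0 / r for r in rates)
    var = sum(1.0 / r ** 2 for r in rates)
    return mean, var

def simulate_serial_rt(rates, n=100_000, seed=1):
    """Monte Carlo reaction times for the same serial arrangement."""
    rng = random.Random(seed)
    return [sum(rng.expovariate(r) for r in rates) for _ in range(n)]

rates = [1 / 80, 1 / 150, 1 / 60]        # hypothetical stage rates (events per ms)
mean, var = serial_exponential_moments(rates)
samples = simulate_serial_rt(rates)
print(mean, var)                          # exact: mean 290 ms, variance 32500 ms^2
print(statistics.mean(samples), statistics.variance(samples))  # close to the exact values
```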

12.
A quantitative scale for identifying cardiac versus vascular reactors, the balance of blood pressure equivalents (BE), was newly advocated and compared with a recently proposed alternative, the hemodynamic profile (HP) of Gregg, Matyas and James (2002). BE was defined as BE = (ΔQ/Q₀)P₀ − (ΔR/R₀)P₀, where P₀, Q₀, and R₀ are the mean blood pressure (P), cardiac output (Q), and total peripheral resistance (R) during baseline, and ΔQ and ΔR are the change scores of Q and R from baseline to stress, respectively. The index was named BE because the two terms in the formula express the changes in Q and R in their blood pressure equivalents. The BE and HP scales were compared theoretically, in a newly introduced pressor space, with respect to their orthogonality to the extent of pressure elevation (ΔP), and then practically, using hypothetical data. In summary, data points that lie close together in the pressor space are judged to have similar hemodynamic balance whether the BE or the HP scale is used. A merit of the BE scale is that it supports an intuitive understanding of the hemodynamics during stress; a demerit is that it cannot maintain a quasi-orthogonal relationship with ΔP when Q or R changes profoundly under stress.
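A small numerical sketch of the BE index as defined above. The baseline and stress values are hypothetical; ΔQ and ΔR are stress-minus-baseline change scores, and a positive BE indicates that the cardiac-output term dominates the pressor response.

```python
def blood_pressure_equivalents(p0, q0, r0, q_stress, r_stress):
    """BE = (dQ/Q0)*P0 - (dR/R0)*P0: the cardiac and vascular contributions to the
    pressor response, each expressed in blood-pressure units, then differenced."""
    dq = q_stress - q0
    dr = r_stress - r0
    return (dq / q0) * p0 - (dr / r0) * p0

# Hypothetical baseline: P0 = 90 mmHg, Q0 = 5 L/min, so R0 = P0/Q0 = 18 mmHg/(L/min)
p0, q0, r0 = 90.0, 5.0, 18.0

# Hypothetical stress response: cardiac output rises to 6 L/min,
# while total peripheral resistance falls slightly to 17 mmHg/(L/min)
print(blood_pressure_equivalents(p0, q0, r0, q_stress=6.0, r_stress=17.0))  # positive -> cardiac reactor
```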
