54 results found (search time: 0 ms)
21.
A fundamental assumption of prospect theory is gain–loss separability (GLS): the assertion that the overall utility of a prospect can be expressed as a function of the utilities of its positive and negative components. Violations of GLS may limit the generalization of results from studies of single‐domain prospects to mixed prospects and may systematically distort the predictions of the theory. Violations also have implications for how choices with positive and negative components should be presented to decision makers. Previous studies, using different elicitation methods, have documented different rates, and types, of systematic violations of GLS. We discuss the differences between two specific elicitation methods (binary choice and certainty equivalents) and report results of a new study of GLS using both methods and randomly generated prospects. We compare the extent and nature of GLS violations under the two elicitation methods using between‐subject and within‐subject analyses. We find (i) systematic violations of GLS under both methods, (ii) higher rates of violations under choice, (iii) higher sensitivity to the outcomes for the certainty equivalents, which is consistent with the predictions of the scale‐compatibility hypothesis, and (iv) different patterns of violations under the two methods, which are explained by method‐specific preferences. We discuss the psychological mechanisms underlying the findings and the implications for presenting information with gain and loss components. Copyright © 2012 John Wiley & Sons, Ltd.
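For readers who want the condition in symbols, the following is a standard formalization of gain–loss separability for a two-outcome mixed prospect; the notation (F, F⁺, F⁻, V) is generic and is not taken from the article itself.

```latex
% Gain-loss separability (GLS), written for a mixed prospect
% F = (x, p; y, q) with a gain x > 0 and a loss y < 0.
% F^+ and F^- denote its gain part and loss part:
%   F^+ = (x, p; 0),   F^- = (y, q; 0).
% GLS asserts that the overall valuation decomposes additively,
\[
  V(F) \;=\; V\!\left(F^{+}\right) \;+\; V\!\left(F^{-}\right),
\]
% so preferences over mixed prospects are pinned down by valuations
% elicited separately in the gain and loss domains. A violation occurs,
% for example, when F^+ is preferred to G^+ and F^- is preferred to G^-,
% yet G is preferred to F.
```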
22.
23.
We define a desirability effect as the inflation of the judged probability of desirable events or the diminution of the judged probability of undesirable events. A series of studies of this effect is reported. In the first four experiments, subjects were presented with visual stimuli (a grid matrix in two colours, or a jar containing beads in two colours) and asked to estimate the probability of drawing one of the colours at random. The estimated probabilities for a defined draw were not higher when the draw entailed a gain than when it entailed a loss. In the fifth and sixth experiments, subjects read short stories, each describing two contestants competing for some desirable outcome (e.g. parents fighting for child custody, or firms bidding for a contract). Some judged the probability that A would win; others judged the desirability that A would win. Story elements that enhanced a contestant's desirability did not cause the favoured contestant to be judged more likely to win. Only when a contestant's desirability was enhanced by promising the subject a payoff contingent on that contestant's victory was there some slight evidence for a desirability effect: contestants were judged more likely to win when the subject expected a monetary prize if they won than when the subject expected a prize if the other contestant won. In the last experiment, subjects estimated the probability of an over-20-point weekly change in the Dow Jones average, and were promised prizes contingent on such a change either occurring or failing to occur. They were also given a monetary incentive for accuracy. Subjects who desired a large change did not judge it more probable than subjects who desired a small change. We conclude that desirability effects, when they exist, operate by biasing the evidence brought to mind regarding the event in question, but that when a given body of evidence is considered, its judged probability is not influenced by desirability considerations.
24.
25.
Cohen's κ measures the improvement in classification above chance level, and it is the most popular measure of interjudge agreement. Yet there is considerable confusion about its interpretation. Specifically, researchers often ignore the fact that the observed level of matched agreement is bounded from above and below, and that the bounds are a function of the particular marginal distributions of the table. We propose that these bounds should be used to rescale the components of κ (observed and expected agreement). Rescaling κ in this manner results in κ′, a measure that was originally proposed by Cohen (1960) and was largely ignored in both research and practice. This measure provides a common scale for agreement measures of tables with different marginal distributions. It reaches the maximal value of 1 when the judges show the highest level of agreement possible, given their marginal disagreements. We conclude that κ′ should be used to measure the level of matched agreement contingent on a particular set of marginal distributions. The article provides a framework and a set of guidelines that facilitate comparisons between various types of agreement tables. We illustrate our points with simulations and real data from two studies: one involving judges' ratings of baseball players and one involving ratings of essays in high-stakes tests.
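To make the rescaling idea concrete, here is a minimal numerical sketch: it computes Cohen's κ from a square agreement table, together with the largest matched agreement the marginals allow, and reports the κ/κ_max ratio described in Cohen (1960). The function name and the exact form of the rescaling are illustrative assumptions; consult the article for the precise definition of κ′.

```python
import numpy as np

def kappa_with_marginal_bounds(table):
    """Cohen's kappa plus a marginal-bounded rescaling (illustrative sketch).

    `table` is a k x k contingency table of judge A (rows) vs judge B (columns).
    """
    table = np.asarray(table, dtype=float)
    n = table.sum()
    row = table.sum(axis=1) / n           # judge A's marginal distribution
    col = table.sum(axis=0) / n           # judge B's marginal distribution

    p_obs = np.trace(table) / n           # observed matched agreement
    p_exp = np.sum(row * col)             # agreement expected by chance
    p_max = np.sum(np.minimum(row, col))  # largest matched agreement the marginals allow

    kappa = (p_obs - p_exp) / (1.0 - p_exp)
    kappa_max = (p_max - p_exp) / (1.0 - p_exp)
    # Rescaled index: observed improvement relative to the best improvement
    # attainable given the judges' marginal disagreements (equals 1 when
    # p_obs == p_max). This is the kappa/kappa_max ratio from Cohen (1960);
    # the article's kappa-prime may be defined differently.
    kappa_rescaled = kappa / kappa_max
    return kappa, kappa_max, kappa_rescaled

# Example: two judges classifying 100 essays into three categories
table = [[30, 5, 0],
         [10, 25, 5],
         [0, 5, 20]]
print(kappa_with_marginal_bounds(table))
```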
26.
Prior findings suggest that managers often choose ranges to communicate uncertainty about future earnings. We analyze earnings forecasts over 11 years and find that firms with higher earnings uncertainty are more likely to issue range estimates. We then study investors' attitudes toward forecast precision and argue that investors' evaluations of forecasts can be explained by a sequential, non-compensatory, two-stage process: first, investors determine whether a point or a range estimate is more appropriate for a particular domain, based on the congruence principle; then, they seek the most precise reasonable range to maximize informativeness. Results from three experiments indicate that the preference for (im)precision is non-monotonic: it peaks at low levels of imprecision and diminishes as the range widens. This pattern is consistent with participants' desire for congruent and informative estimates and supports the claim that investors favor forecasts that are as precise as the available information warrants, but no more precise.
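As an illustration only, the sketch below encodes one way such a sequential, non-compensatory rule could be operationalized: the congruence screen is applied first and is never traded off against precision, and informativeness is maximized only among the survivors. The parameter names, thresholds, and scoring are hypothetical and are not taken from the article.

```python
def preferred_forecast(candidates, domain_is_uncertain, warranted_width):
    """Illustrative two-stage, non-compensatory screen over forecast formats.

    Each candidate is a (low, high) pair; a point estimate has low == high.
    Stage 1 (congruence): keep only the format that matches the domain --
    ranges for uncertain domains, point estimates otherwise. This screen is
    absolute and is never traded off against precision.
    Stage 2 (informativeness): among the survivors, prefer the narrowest
    range that is still at least as wide as the information warrants,
    i.e. as precise as warranted but not more precise.
    """
    is_range = lambda c: c[1] > c[0]
    stage1 = [c for c in candidates if is_range(c) == domain_is_uncertain]
    warranted = [c for c in stage1 if (c[1] - c[0]) >= warranted_width] or stage1
    return min(warranted, key=lambda c: c[1] - c[0]) if warranted else None

# Example: an uncertain domain where the information supports a width of about 0.30;
# the over-precise (0.95, 1.05) and the over-wide (0.5, 1.5) both lose to (0.8, 1.2).
print(preferred_forecast([(1.0, 1.0), (0.95, 1.05), (0.8, 1.2), (0.5, 1.5)],
                         domain_is_uncertain=True, warranted_width=0.30))
```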
27.
L. Brenner's (2000) critique of I. Erev, T. S. Wallsten and D. V. Budescu (1994) focuses on their (a) use of a model to explain the paradox of the same data appearing to suggest over- and underconfidence, depending on how they are analyzed; (b) definitions of true judgment and error; and (c) specific use of judgments transformed to log-odds and a model formulated in those terms. The authors of the present article strongly disagree with the first point and discuss the importance of using models to interpret data. With regard to the second, the authors admit that the constructs of true judgment and error are poorly named but dispute L. Brenner's specific criticisms. Concerning the third, the authors had not claimed that the log-odds metric has any special status in judgment research and thus agree with L. Brenner's basic point.
28.
A general method is presented for comparing the relative importance of predictors in multiple regression. Dominance analysis (D. V. Budescu, 1993), a procedure that is based on an examination of the R² values for all possible subset models, is refined and extended by introducing several quantitative measures of dominance that differ in the strictness of the dominance definition. These are shown to be intuitive, meaningful, and informative measures that can address a variety of research questions pertaining to predictor importance. The bootstrap is used to assess the stability of dominance results across repeated sampling, and it is shown that these methods provide the researcher with more insights into the pattern of importance in a set of predictors than were previously available.
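To make the all-subsets logic concrete, here is a minimal sketch of a general-dominance-style computation: for each predictor, it averages the increment in R² from adding that predictor to every possible subset of the remaining predictors (averaging within each subset size, then across sizes). The implementation details (OLS R², the simulated example data) are generic choices for illustration and are not necessarily the exact measures defined in the article.

```python
import numpy as np
from itertools import combinations

def r_squared(X, y, cols):
    """R^2 of an OLS regression of y on the predictors in `cols` (plus intercept)."""
    Z = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def general_dominance(X, y):
    """Average usefulness of each predictor across all subsets of the others
    (a sketch of the 'general dominance' idea; see the article for the full
    hierarchy of dominance definitions)."""
    p = X.shape[1]
    weights = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        by_size = []
        for size in range(p):  # subsets of the other predictors, size 0 .. p-1
            incs = [r_squared(X, y, list(S) + [j]) - (r_squared(X, y, list(S)) if S else 0.0)
                    for S in combinations(others, size)]
            by_size.append(np.mean(incs))
        weights[j] = np.mean(by_size)  # average within size, then across sizes
    return weights

# Example with simulated data: three predictors, two of them truly relevant
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)
weights = general_dominance(X, y)
print(weights, weights.sum())  # the weights sum to the full-model R^2
```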
29.
There is strong evidence that groups perform better than individuals do on intellective tasks with demonstrably correct solutions. Typically, these studies assume that group members share common goals. The authors extend this line of research by replacing standard face-to-face group interactions with competitive auctions, allowing for conflicting individual incentives. In a series of studies involving the well-known Wason selection task, they demonstrate that competitive auctions induce learning effects as impressive as those of standard group interactions, and they uncover specific and general knowledge transfers from these institutions to new reasoning problems. The authors identify payoff feedback and information pooling as the driving factors underlying these findings, and they explain these factors within the theoretical framework of collective induction.
30.
We derive an analytic model of the inter-judge correlation as a function of five underlying parameters. Inter-cue correlation and the number of cues capture our assumptions about the environment, while differentiations between cues, the weights attached to the cues, and (un)reliability describe assumptions about the judges. We study the relative importance of, and the interrelations between, these five factors with respect to inter-judge correlation. Results highlight the centrality of the inter-cue correlation. We test the model's predictions with empirical data and illustrate its relevance. For example, we show that, typically, additional judges increase efficacy at a greater rate than additional cues.
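Since the analytic form itself is not reproduced in the abstract, the sketch below simply simulates the setting it describes: judges form linear combinations of equicorrelated cues, with individual weight deviations and added noise standing in for differential weighting and (un)reliability, and the inter-judge correlation is estimated from the simulated judgments. The parameter names, default values, and generating process are illustrative assumptions, not the article's model.

```python
import numpy as np

def simulate_interjudge_correlation(n_judges=10, n_cues=4, cue_corr=0.3,
                                    weight_sd=0.3, noise_sd=1.5,
                                    n_cases=5000, seed=0):
    """Monte Carlo estimate of the average pairwise correlation between judges.

    Cues are equicorrelated standard normals (cue_corr); each judge uses base
    weights of 1 plus individual deviations (weight_sd) and adds independent
    noise (noise_sd), capturing differential weighting and (un)reliability.
    Returns the mean off-diagonal judge-judge correlation.
    """
    rng = np.random.default_rng(seed)
    # Equicorrelated cue covariance matrix: 1 on the diagonal, cue_corr elsewhere
    sigma = np.full((n_cues, n_cues), cue_corr) + (1 - cue_corr) * np.eye(n_cues)
    cues = rng.multivariate_normal(np.zeros(n_cues), sigma, size=n_cases)
    weights = 1.0 + weight_sd * rng.normal(size=(n_judges, n_cues))
    judgments = cues @ weights.T + noise_sd * rng.normal(size=(n_cases, n_judges))
    corr = np.corrcoef(judgments, rowvar=False)
    return corr[~np.eye(n_judges, dtype=bool)].mean()

# Example: raising the inter-cue correlation raises the inter-judge correlation
for rho in (0.0, 0.3, 0.6):
    print(rho, round(simulate_interjudge_correlation(cue_corr=rho), 3))
```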