1.
Hindsight bias was studied in the context of the accident at the Chernobyl nuclear power plant, which took place on April 26, 1986. Need for cognition, an individual-difference factor related to the motivation to process information, was expected to moderate the occurrence of hindsight bias. Probability estimates of many casualties due to the use of nuclear power in the Netherlands were obtained from 212 individuals two months before the accident at Chernobyl. These estimates were compared with similar estimates made in hindsight by the same individuals five months after the accident. Loglinear analyses revealed a systematic hindsight bias; however, the direction of the bias was contrary to expectations. In hindsight, individuals gave lower probabilities than they actually did two months before the Chernobyl accident. These results reveal a reverse hindsight bias. As hypothesized, need for cognition moderated hindsight bias: individuals low and medium in need for cognition expressed a systematic reverse hindsight bias, whereas individuals high in need for cognition did not. High-need-for-cognition individuals also showed higher literal consistency between the two measurements, which supports a memory explanation of the moderating effect of need for cognition.
2.
In expected utility theory, many results have been derived that give necessary and/or sufficient conditions for a multivariate utility function to be decomposable into lower-dimensional functions. In particular, multilinear, multiplicative and additive decompositions have been widely discussed. These decomposed utility functions can be assessed more easily in practical situations. In this paper we present a theory of decomposition in the context of nonadditive expected utility, such as anticipated utility or Choquet expected utility. We show that many of the results used in conventional expected utility carry over to these more general frameworks. If preferences over lotteries depend only on the marginal probability distributions, then in expected utility the utility function is additively decomposable. We show that in anticipated utility the marginality condition implies not only that the utility function is additively decomposable but also that the distortion function is the identity function. We further demonstrate that a decision maker who is bivariate risk neutral has a utility function that is additively decomposable and a distortion function q for which q(½) = ½.
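For orientation only, here is a minimal sketch in standard notation (not taken from the paper) of the objects the abstract refers to: the anticipated (rank-dependent) utility of a discrete lottery with a distortion function q, and what additive decomposability of a multivariate utility function means.

```latex
% Sketch in standard notation (an illustration, not the paper's own statements).
% Anticipated (rank-dependent) utility of a lottery with outcomes x_1 <= ... <= x_n
% and probabilities p_1, ..., p_n, using utility u and distortion function q:
V \;=\; \sum_{i=1}^{n}\left[\, q\!\left(\sum_{j=i}^{n} p_j\right) - q\!\left(\sum_{j=i+1}^{n} p_j\right) \right] u(x_i),
\qquad q(0)=0,\ q(1)=1.
% Additive decomposability of a multivariate utility function:
u(x^{1},\dots,x^{m}) \;=\; \sum_{k=1}^{m} u_k\!\left(x^{k}\right).
% Expected utility is recovered when q is the identity, q(p) = p.
```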
3.
Predictions of uncertain events are often described in terms of what can or what will happen. How are such statements used by speakers, and what are they perceived to mean? Participants in four experiments were presented with distributions of variable product characteristics and were asked to generate natural, meaningful sentences containing either will or can. Will was typically associated with either low or intermediate numeric values, whereas can consistently suggested high (maximum) values. For instance, laptop batteries lasting from 1.5 to 3.5 hours will last for 1.5 hours or for 2.5 hours, but they can last for 3.5 hours. The same response patterns were found for positive and negative events. In will-statements, the most frequent scalar modifiers were at least and about, whereas in can-statements the most frequent modifier was up to. A fifth experiment showed that will indicates an outcome that may be certain but more often simply probable. Can means possible, but even can-statements are perceived to imply probable outcomes. This could create a communication paradox, because most speakers use can to describe outcomes that, because of their extremity, are at the same time quite unlikely. Copyright © 2011 John Wiley & Sons, Ltd.
4.
The vast amount of information that must be considered to solve inherently ill-structured and complex strategic problems creates a need for tools that help decision makers (DMs) recognize the complexity of this process and develop a rational model for strategy evaluation. Over the last several decades, a philosophy and a body of intuitive and analytical methods have been developed to assist DMs in the evaluation of strategic alternatives. However, the intuitive methods lack a structured framework for the systematic evaluation of strategic alternatives, while the analytical methods are not intended to capture intuitive preferences. Euclid is a simple yet sophisticated multiobjective value analysis model that attempts to uncover some of the complexities inherent in the evaluation of strategic alternatives. The proposed model uses a series of intuitive and analytical methods, including environmental scanning, the analytic hierarchy process (AHP), subjective probabilities, and the theory of the displaced ideal, to plot strategic alternatives on a matrix based on their Euclidean distance from the ideal alternative. Euclid is further compared to the quantitative strategic planning matrix (QSPM) in a real-world application. The information provided by the users shows that Euclid can significantly enhance decision quality and the DM's confidence. Euclid is not intended to replace the DMs; rather, it provides a systematic approach to support, supplement, and ensure the internal consistency of their judgments through a series of logically sound techniques. Copyright © 2003 John Wiley & Sons, Ltd.
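As an illustration of the core ranking step described above (measuring each alternative's weighted Euclidean distance from an ideal alternative), here is a minimal sketch; the function name, scoring scale, and numbers are hypothetical and not taken from the paper.

```python
import numpy as np

def distance_from_ideal(scores, weights):
    """Weighted Euclidean distance of each alternative from the ideal alternative.

    scores  : (n_alternatives, n_criteria) array, higher = better on every criterion
    weights : (n_criteria,) array of criterion weights (e.g., from AHP), summing to 1
    """
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    ideal = scores.max(axis=0)                   # ideal alternative: best observed value per criterion
    diffs = (scores - ideal) * np.sqrt(weights)  # weight each criterion's deviation
    return np.linalg.norm(diffs, axis=1)         # smaller distance = closer to the ideal

# Hypothetical example: three strategic alternatives scored on four criteria
scores = [[7, 5, 8, 6],
          [6, 8, 7, 5],
          [9, 4, 6, 7]]
weights = [0.4, 0.3, 0.2, 0.1]
print(distance_from_ideal(scores, weights))
```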
5.
Two studies tested whether people interpreted verbal chance terms in a self-serving manner. Participants read statements describing the likelihood of events in their own future and in the future of a randomly chosen other. They interpreted the chance terms numerically. Chance terms were interpreted as denoting a higher probability when they were used to describe the likelihood of pleasant events in one's own future than when they were used to describe the likelihood of pleasant events in someone else's future (Study 1). Similarly, chance terms were interpreted as denoting a lower probability when they were used to describe the likelihood of unpleasant events in one's own future than when they were used to describe the likelihood of unpleasant events in someone else's future (Studies 1 and 2). These differences occurred primarily when the risk statements were threatening. Copyright © 2005 John Wiley & Sons, Ltd.
6.
The Dutch Identity: A new tool for the study of item response models
The Dutch Identity is a useful way to reexpress the basic equations of item response models that relate the manifest probabilities to the item response functions (IRFs) and the latent trait distribution. The identity may be exploited in several ways. For example: (a) to suggest how item response models behave for large numbers of items (they are approximate submodels of second-order loglinear models for 2^J tables); (b) to suggest new ways to assess the dimensionality of the latent trait (principal components analysis of matrices composed of second-order interactions from loglinear models); (c) to give insight into the structure of latent class models; and (d) to illuminate the problem of identifying the IRFs and the latent trait distribution from sample data. This research was supported in part by contract number N00014-87-K-0730 from the Cognitive Science Program of the Office of Naval Research. I realized the usefulness of the identity in Theorem 1 while lecturing in the Netherlands during October 1986. Because this was in no small part due to the stimulating psychometric atmosphere there, I call the result the Dutch Identity.
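A minimal sketch of the model class mentioned in point (a), in standard notation rather than the paper's own: a second-order loglinear model for the 2^J table of J binary item responses.

```latex
% Second-order loglinear model for the 2^J table of J binary item responses
% (standard notation; a sketch, not the paper's derivation):
\log P(X_1 = x_1, \dots, X_J = x_J)
  \;=\; \mu \;+\; \sum_{j=1}^{J} \lambda_j x_j \;+\; \sum_{j<k} \lambda_{jk}\, x_j x_k,
  \qquad x_j \in \{0,1\}.
% Point (b) above refers to a principal components analysis of the matrix
% of second-order interaction terms \lambda_{jk}.
```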
7.
8.
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified.
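The abstract does not reproduce the formulas behind the event-based estimates, so the sketch below uses a common contingency measure, Delta-P, purely as an illustration; the function and the counts are hypothetical and are not claimed to be the paper's method.

```python
def delta_p(n_resp_sr, n_resp_no_sr, n_noresp_sr, n_noresp_no_sr):
    """Contingency strength as Delta-P:
    P(reinforcer | response) - P(reinforcer | no response).

    n_resp_sr      : responses followed by a reinforcer
    n_resp_no_sr   : responses not followed by a reinforcer
    n_noresp_sr    : reinforcers delivered without a preceding response
    n_noresp_no_sr : observation units with neither response nor reinforcer
    """
    p_sr_given_resp = n_resp_sr / (n_resp_sr + n_resp_no_sr)
    p_sr_given_noresp = n_noresp_sr / (n_noresp_sr + n_noresp_no_sr)
    return p_sr_given_resp - p_sr_given_noresp

# Hypothetical counts illustrating a strong response-reinforcer dependency
print(delta_p(n_resp_sr=45, n_resp_no_sr=5, n_noresp_sr=10, n_noresp_no_sr=90))  # 0.8
```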
9.
Teigen KH, Keren G. Cognition, 2003, 87(2): 55-71
Outcome expectations can be expressed prospectively in terms of probability estimates, and retrospectively in terms of surprise. Surprise ratings and probability estimates differ, however, in some important ways. Surprises are generally created by low-probability outcomes, yet, as shown by several experiments, not all low-probability outcomes are equally surprising. To account for surprise, we propose a contrast hypothesis according to which the level of surprise associated with an outcome is mainly determined by the extent to which it contrasts with the default, expected alternative. Three ways by which contrasts can be established are explored: contrasts due to relative probabilities, where the obtained outcome is less likely than a default alternative; contrasts formed by novelty and change, where a contrast exists between the obtained outcome and the individual's previous experience; and contrasts due to the perceptual or conceptual distance between the expected and the obtained. In all these cases, greater contrast was accompanied by higher ratings of surprise.
10.
This approach does not define a probability measure by syntactical structures. It reveals a link between modal logic and mathematical probability theory. This is shown (1) by adding an operator (and two further connectives and constants) to a system of lower predicate calculus and (2) by examining the models of that extended system. These models are models of the modal system S5 (without the Barcan formula) in which a usual probability measure is defined on the set of possible worlds. Mathematical probability models can be seen as models of S5.
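In standard Kripke-model notation (a sketch under assumptions, not the author's formalism), the kind of structure described above can be pictured as an S5 model whose set of possible worlds carries an ordinary probability measure.

```latex
% Sketch in standard notation (an assumption for illustration, not the paper's formalism):
% an S5 Kripke model equipped with a probability measure on its possible worlds.
M \;=\; (W,\, R,\, V,\, \mu), \qquad
R \ \text{an equivalence relation on}\ W \ \text{(the S5 condition)}, \qquad
\mu \ \text{a probability measure on}\ W.
% The probability attached to a formula \varphi is the measure of the worlds where it holds:
P(\varphi) \;=\; \mu\bigl(\{\, w \in W : M, w \models \varphi \,\}\bigr).
```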