Similar references
20 similar references retrieved.
2.
When asked to estimate the probability of outcomes of draws from a binomial population, student subjects tend to report p values that clearly exceed the objective ones. The probability of specific binomial sequences was found to be even more overestimated, while the answers became much more conservative when the outcomes were grouped into a few categories. These findings were replicated in a second experiment, where the probability of heights in a male and a female student population was estimated. When the task was to estimate frequency of occurrence, instead of probability, the answers became more realistic. The conclusion is drawn that the direct p estimates are relatively independent of frequency judgments, the chief determinant being the properties of the particular sample to be evaluated, irrespective of the number and probabilities of other possible samples.
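The arithmetic behind this overestimation is easy to make concrete. The sketch below computes the three kinds of objective targets the subjects judged: a specific sequence, a single outcome count, and a grouped category. The sample size n = 10 and p = 0.5 are illustrative assumptions, not the experiment's actual materials.

```python
from math import comb

# Objective probabilities for draws from a binomial population.
# n = 10 and p = 0.5 are illustrative assumptions.
n, p = 10, 0.5

# Probability of one specific sequence of n outcomes (e.g., HTHHTHTTHH):
p_sequence = p**n                                   # 1/1024, about 0.001

# Probability of the corresponding single outcome (e.g., exactly 6 heads):
k = 6
p_outcome = comb(n, k) * p**k * (1 - p)**(n - k)    # about 0.205

# Probability of a grouped category (e.g., "6 or more heads"):
p_grouped = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

print(p_sequence, p_outcome, p_grouped)             # ~0.001, ~0.205, ~0.377
```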

4.
Cory F. Juhl, Synthese, 1996, 109(3): 293–309
Subjective Bayesians typically find the following objection difficult to answer: some joint probability measures lead to intuitively irrational inductive behavior, even in the long run. Yet well-motivated ways to restrict the set of reasonable prior joint measures have not been forthcoming. In this paper I propose a way to restrict the set of prior joint probability measures in particular inductive settings. My proposal is the following: where there exists some successful inductive method for getting to the truth in some situation, we ought to employ a (joint) probability measure that is inductively successful in that situation, if such a measure exists. In order to show that the restriction is possible to meet in a broad class of cases, I prove a Bayesian Completeness Theorem, which says that for any solvable inductive problem of a certain broad type, there exist probability measures that a Bayesian could use to solve the problem. I then briefly compare the merits of my proposal with two other well-known proposals for constraining the class of admissible subjective probability measures, the leave-the-door-ajar condition and the maximize-entropy condition.

The author owes special thanks to Kevin Kelly for a number of helpful ideas for the proof of the Bayesian Completeness Theorem, as well as other aspects of the paper. Thanks also to Clark Glymour for some helpful suggestions for improvement of an earlier draft. Part of the work leading to this paper was funded by a Summer Research Grant from the University Research Institute of the University of Texas at Austin.
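The flavor of the completeness claim can be illustrated with a toy inductive problem. The sketch below is not Juhl's construction; it only shows, with invented hypotheses, how a full-support prior lets a Bayesian identify which of finitely many deterministic data streams is being observed, arguably the simplest kind of solvable inductive problem.

```python
# Toy illustration (not Juhl's construction): if a problem is solvable at
# all, some prior lets a Bayesian solve it. Here, any prior with full
# support works, since the posterior eventually concentrates on the one
# hypothesis consistent with the observed data. Streams are invented.
hypotheses = {
    "h1": [0, 0, 0, 0, 0],
    "h2": [0, 0, 1, 0, 0],
    "h3": [0, 1, 1, 0, 1],
}
prior = {h: 1 / len(hypotheses) for h in hypotheses}  # full-support prior

def posterior(prior, data):
    """Condition on an observed prefix: a deterministic hypothesis gets
    likelihood 1 if consistent with the prefix, 0 otherwise."""
    likes = {h: float(s[: len(data)] == data) for h, s in hypotheses.items()}
    z = sum(prior[h] * likes[h] for h in hypotheses)
    return {h: prior[h] * likes[h] / z for h in hypotheses}

print(posterior(prior, [0, 0]))     # h3 ruled out; h1 and h2 split the mass
print(posterior(prior, [0, 0, 1]))  # posterior concentrates fully on h2
```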

6.
Interpersonal variability in understanding linguistic probabilities can adversely affect decision making. Using the fact that everyone judges canonical probability events similarly in a manner consistent with axiom systems that yield a probability measure, we developed and tested a method for comparing the meanings of probability phrases across individuals. An experiment demonstrated that despite extreme heterogeneity in participants' linguistic probability lexicons, interpersonal similarity in phrase meaning is well predicted by phrase rank order within the lexicons. Thus, equally ranked phrases have similar meanings, and individual differences in linguistic probabilities may simply be explained by the phrases people use at each rank.
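A minimal sketch of the rank-matching idea, with invented lexicons and numeric meanings: the two people below use entirely different phrases and numbers, yet pairing phrases by rank within each lexicon aligns their meanings reasonably well.

```python
# Invented lexicons: each person's probability phrases and the numeric
# meaning they attach to each phrase (all values are assumptions).
a = {"doubtful": 0.15, "possible": 0.40, "likely": 0.70, "almost certain": 0.95}
b = {"unlikely": 0.10, "maybe": 0.35, "probable": 0.65, "sure thing": 0.90}

rank_a = sorted(a, key=a.get)   # phrases ordered by numeric meaning
rank_b = sorted(b, key=b.get)

# Equally ranked phrases end up with similar numeric meanings.
for pa, pb in zip(rank_a, rank_b):
    print(f"rank-matched: {pa!r} ({a[pa]:.2f}) ~ {pb!r} ({b[pb]:.2f})")
```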

8.
A stochastic model of the calibration of subjective probabilities based on support theory (Rottenstreich & Tversky, 1997; Tversky & Koehler, 1994) is presented. This model extends support theory—a general representation of probability judgment—to the domain of calibration, the analysis of the correspondence between subjective and objective probability. The random support model can account for the common finding of overconfidence, and also predicts the form of the relationship between overconfidence and item difficulty (the “hard–easy effect”). The parameters of the model have natural psychological interpretations, such as discriminability between correct and incorrect hypotheses, and extremity of judgment. The random support model can be distinguished from other stochastic models of calibration by: (a) using fewer parameters, (b) eliminating the use of variable cutoffs by mapping underlying support directly into judged probability, (c) allowing validation of model parameters with independent assessments of support, and (d) applying to a wide variety of tasks by framing probability judgment in the integrative context of support theory.
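A hedged simulation in the style of a random support model (the lognormal support distributions, parameter values, and extremity exponent are assumptions for illustration, not the paper's fitted model) reproduces the qualitative pattern: confidence exceeds accuracy most for hard items, and the gap shrinks as discriminability grows, i.e., the hard–easy effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(discriminability, extremity=2.0, trials=100_000):
    # Support for the correct and incorrect hypotheses; lognormal is one
    # conventional choice, and these parameters are assumptions.
    s_c = rng.lognormal(mean=discriminability, sigma=1.0, size=trials)
    s_i = rng.lognormal(mean=0.0, sigma=1.0, size=trials)
    # Support is mapped directly into the judged probability of the chosen
    # (stronger) hypothesis; `extremity` exaggerates the support ratio.
    top, bottom = np.maximum(s_c, s_i), np.minimum(s_c, s_i)
    confidence = top**extremity / (top**extremity + bottom**extremity)
    accuracy = np.mean(s_c > s_i)   # choosing the stronger hypothesis
    return confidence.mean(), accuracy

for d in (0.25, 0.75, 1.5):         # hard -> easy items
    conf, acc = simulate(d)
    print(f"discriminability {d}: confidence {conf:.2f} vs accuracy {acc:.2f}")
```

For hard items (low discriminability) mean confidence far exceeds accuracy; for easy items the two nearly coincide, which is the predicted form of the hard–easy effect.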

9.
When Ss revise subjective probabilities, in the light of new evidence, a common finding is that they are conservative with respect to Bayes' theorem; revisions are too small. One kind of hypothesis to account for this is ‘model specific’, assuming a breakdown in an otherwise potentially Bayesian process. The other kind assumes that statistically irrelevant, task-specific information is processed. An example of the latter is the commitment hypothesis, assuming a commitment building up to the indications of early evidence, causing Ss to lag behind Bayes' theorem in their later judgements. Evidence is presented suggesting that Ss are not necessarily sensitive to mere sub-sets of a sequence, but that this form of suboptimality may result from overall sequence structure; specifically from a bias against long runs of like evidence. This would fit with findings from other areas of research, and would suggest that there is a general form of suboptimality operating which is relevant to all sequential processing tasks.
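The benchmark against which conservatism is measured is plain Bayesian updating. A worked example in the style of the classic bookbag-and-poker-chips task (the 70/30 urn composition is the standard illustration, assumed here rather than taken from this paper): after four like draws, Bayes' theorem dictates a posterior near 0.97, far more extreme than the revisions subjects typically report.

```python
# Bayesian revision for a sequence of red/blue draws from one of two urns.
# Urn A is 70% red, urn B is 30% red (standard illustrative assumption).
def bayes_posterior(draws, p_red_A=0.7, p_red_B=0.3, prior_A=0.5):
    """Posterior probability of urn A after a sequence of 'R'/'B' draws."""
    odds = prior_A / (1 - prior_A)
    for d in draws:
        if d == "R":
            odds *= p_red_A / p_red_B                # likelihood ratio 7/3
        else:
            odds *= (1 - p_red_A) / (1 - p_red_B)    # likelihood ratio 3/7
    return odds / (1 + odds)

print(bayes_posterior("RRRR"))   # ~0.967; conservative Ss report far less
print(bayes_posterior("RRRB"))   # ~0.845
```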

10.
Studies in subjective probability IV: probabilities, confidence, and luck
Probably because of formal advantages, probabilities are often regarded as more basic than other dimensions of attitudes towards uncertain events (beliefs, confidences, doubts, statements of hope and fear, good and bad luck, etc.). In a series of experiments, some of these concepts were empirically compared by asking students to give their views on a variety of uncertain events, ranging from future examinations to lotteries. Confidence turned out to be closely related to perceived chance, but very imperfectly to the subjective probability of the event in question, except when all outcomes are judged equally due to chance. Judgments of good and bad luck were still more independent of the probabilities involved, even in a chance situation. It is concluded that subjective probability plays a secondary role in assessments of confidence as well as of luck, and is poorly suited as a common measure of the varieties of subjective uncertainty. A final experiment suggests that the subjective and statistical conceptions of uncertainty have partially opposing connotations: "an uncertain future" seems to be interpreted subjectively as an open future with restricted possibilities of prediction.

11.
As part of a method for assessing health risks associated with primary National Ambient Air Quality Standards, T. B. Feagans and W. F. Biller (Research Triangle Park, North Carolina: EPA Office of Air Quality Planning and Standards, May 1981) developed a technique for encoding experts' subjective probabilities regarding dose–response functions. The encoding technique is based on B. O. Koopman's (Bulletin of the American Mathematical Society, 1940, 46, 763–764; Annals of Mathematics, 1940, 41, 269–292) probability theory, which does not require probabilities to be sharp, but rather allows lower and upper probabilities to be associated with an event. Uncertainty about a dose–response function can be expressed either in terms of the response rate expected at a given concentration or, conversely, in terms of the concentration expected to support a given response rate. Feagans and Biller (1981, cited above) derive the relation between the two conditional probabilities, which is easily extended to upper and lower conditional probabilities. These relations were treated as coherence requirements in an experiment utilizing four ozone and four lead experts as subjects, each providing judgments on two separate occasions. Four subjects strongly satisfied the coherence requirements in both conditions, and three more did so in the second session only. The eighth subject also improved in Session 2. Encoded probabilities were highly correlated between the two sessions, but changed from the first to the second in a manner that improved coherence and reflected greater attention to certain parameters of the dose–response function.
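A plausible reconstruction of the coherence relation the abstract describes, assuming a nondecreasing dose–response function f (the abstract does not state the formula; this is inferred from monotonicity): the event "the response rate at concentration c is at least r" coincides with the event "the concentration c_r needed to produce rate r is at most c", so their probabilities, and likewise their lower and upper probabilities, must agree.

```latex
% Assuming f nondecreasing, \{f(c) \ge r\} = \{c_r \le c\}, hence
P\bigl(f(c) \ge r\bigr) = P\bigl(c_r \le c\bigr), \qquad
\underline{P}\bigl(f(c) \ge r\bigr) = \underline{P}\bigl(c_r \le c\bigr), \qquad
\overline{P}\bigl(f(c) \ge r\bigr) = \overline{P}\bigl(c_r \le c\bigr).
```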

12.
John C. Harsanyi, Synthese, 1983, 57(3): 341–365
It is argued that we need a richer version of Bayesian decision theory, admitting both subjective and objective probabilities and providing rational criteria for choice of our prior probabilities. We also need a theory of tentative acceptance of empirical hypotheses. There is a discussion of subjective and of objective probabilities and of the relationship between them, as well as a discussion of the criteria used in choosing our prior probabilities, such as the principles of indifference and of maximum entropy, and the simplicity ranking of alternative hypotheses.
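The maximum-entropy principle mentioned here has a standard worked form. The sketch below solves Jaynes' dice problem (the mean constraint of 4.5 is the textbook illustration, not Harsanyi's example): among all distributions over the six faces with a given mean, choose the one with the largest entropy. With no constraint beyond normalization, the same criterion returns the uniform distribution, which is the principle of indifference.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum-entropy distribution over die faces 1..6 with a fixed mean of 4.5
# (Jaynes' dice problem; the constraint value is the standard illustration).
faces = np.arange(1, 7)

def neg_entropy(p):
    return np.sum(p * np.log(p))   # minimize -H(p) = maximize entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},            # normalization
    {"type": "eq", "fun": lambda p: (p * faces).sum() - 4.5},  # mean constraint
]
res = minimize(neg_entropy, x0=np.full(6, 1 / 6),
               bounds=[(1e-9, 1)] * 6, constraints=constraints)
print(res.x)   # exponentially tilted toward high faces; drop the mean
               # constraint and the optimum is uniform (indifference)
```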

13.
Locksley, Borgida, Brekke, and Hepburn (1980) assert that subjects fall prey to the base-rate fallacy when they make stereotype-related trait judgments. They found that subjects ignored their stereotypes when trait judgments were made in the presence of trait-related behavioral information. The present article reexamines those findings with respect to two issues: (a) the use of a normative criterion in comparison with subjects' judgments and (b) the level of analysis (group vs. individual) of subjects' judgments. We conducted a replication of the Locksley et al. (1980) Study 2, and the results were examined with respect to these two issues. We found no support for the base-rate fallacy. When a Bayesian normative criterion was constructed for each subject based on the subject's own stereotype judgments and was compared with assertiveness judgments made in the presence of individuating information, there was no evidence that subjects ignored or underused their stereotypes as the base-rate fallacy predicts.
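The normative criterion at issue is ordinary Bayesian conditioning with the subject's own stereotype judgment as the base rate. A minimal sketch, with all numbers invented for illustration:

```python
# P(assertive | behavior) by Bayes' theorem, using the subject's own
# stereotype judgment as the base rate. All values are illustrative.
def posterior_assertive(base_rate, p_behavior_if_assertive, p_behavior_if_not):
    num = base_rate * p_behavior_if_assertive
    return num / (num + (1 - base_rate) * p_behavior_if_not)

# A subject whose stereotype says 60% of the group is assertive, observing
# behavior twice as likely from assertive people:
print(posterior_assertive(0.60, 0.8, 0.4))   # 0.75: the base rate still matters
```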

15.
Psychometric functions and the associated indices of discriminative performance (i.e., the point of subjective equality [PSE], just noticeable difference, and Weber fraction) were obtained with the method of constant stimuli using perceptual and remembered line-length standards. Three important results were obtained. First, comparisons with a perceptual or a remembered standard were sensitive to variations of absolute stimulus differences with a common ratio; that is, Weber's law was violated. Second, relative to discriminative performance with the longest and shortest remembered standards, comparisons involving mid-range remembered standards displayed increased variability in the PSE and inflated Weber fractions, characteristic of a reduction in the quality of the memorial representation. Finally, large and negative time-order errors (TOE) were observed for successive line judgments but not for those involving remembered standards. The implications of these findings for research concerned with the relationships between perception and memory, as well as the TOE phenomenon, are discussed.
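How the reported indices fall out of a fitted psychometric function can be sketched as follows. The data points are invented, and the cumulative Gaussian form and the 25–75% definition of the JND are conventional choices rather than details taken from this paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Invented constant-stimuli data: comparison line lengths (mm) and the
# proportion of "longer than the standard" responses at each length.
lengths = np.array([90, 94, 98, 102, 106, 110])
p_longer = np.array([0.05, 0.20, 0.45, 0.60, 0.85, 0.95])

def cum_gauss(x, mu, sigma):
    return norm.cdf(x, mu, sigma)   # cumulative-Gaussian psychometric function

(mu, sigma), _ = curve_fit(cum_gauss, lengths, p_longer, p0=[100, 5])
pse = mu                                                     # 50% point
jnd = (norm.ppf(0.75, mu, sigma) - norm.ppf(0.25, mu, sigma)) / 2
weber = jnd / 100   # standard assumed to be 100 mm
print(f"PSE={pse:.1f} mm, JND={jnd:.1f} mm, Weber fraction={weber:.3f}")
```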

16.
Humans and other primates are able to make relative magnitude comparisons, both with perceptual stimuli and with symbolic inputs that convey magnitude information. Although numerous models of magnitude comparison have been proposed, the basic question of how symbolic magnitudes (e.g., size or intelligence of animals) are derived and represented in memory has received little attention. We argue that symbolic magnitudes often will not correspond directly to elementary features of individual concepts. Rather, magnitudes may be formed in working memory based on computations over more basic features stored in long-term memory. We present a model of how magnitudes can be acquired and compared based on BARTlet, a representationally simpler version of Bayesian Analogy with Relational Transformations (BART; Lu, Chen, & Holyoak, 2012). BARTlet operates on distributions of magnitude variables created by applying dimension-specific weights (learned with the aid of empirical priors derived from pre-categorical comparisons) to more primitive features of objects. The resulting magnitude distributions, formed and maintained in working memory, are sensitive to contextual influences such as the range of stimuli and polarity of the question. By incorporating psychological reference points that control the precision of magnitudes in working memory and applying the tools of signal detection theory, BARTlet is able to account for a wide range of empirical phenomena involving magnitude comparisons, including the symbolic distance effect and the semantic congruity effect. We discuss the role of reference points in cognitive and social decision-making, and implications for the evolution of relational representations.
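A much-simplified sketch of the pipeline described above (this is not BARTlet itself; the features, weights, and noise level are invented): magnitudes are computed as weighted sums of stored features, held in working memory with noise, and compared in signal-detection fashion, which already yields a symbolic distance effect.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented primitive features per animal: (body-mass cue, height cue).
features = {
    "rabbit": np.array([0.2, 0.3]),
    "dog":    np.array([0.5, 0.5]),
    "horse":  np.array([0.9, 0.9]),
}
size_weights = np.array([0.7, 0.3])   # assumed dimension-specific weights

def magnitude(animal, noise_sd=0.15, samples=10_000):
    # Noisy working-memory trace of the derived magnitude.
    mu = features[animal] @ size_weights
    return rng.normal(mu, noise_sd, size=samples)

def p_choose_larger(a, b):
    return np.mean(magnitude(a) > magnitude(b))

# Symbolic distance effect: closer magnitudes -> less decisive comparison.
print(p_choose_larger("horse", "rabbit"))   # ~0.999
print(p_choose_larger("dog", "rabbit"))     # ~0.90, closer to chance
```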

17.
Low numerical probabilities tend to be directionally ambiguous, meaning they can be interpreted either positively, suggesting the occurrence of the target event, or negatively, suggesting its non-occurrence. High numerical probabilities, however, are typically interpreted positively. We argue that the greater directional ambiguity of low numerical probabilities may make them more susceptible than high probabilities to contextual influences. Results from five experiments supported this premise, with perceived base rate affecting the interpretation of an event’s numerical posterior probability more when it was low than high. The effect is consistent with a confirmatory hypothesis testing process, with the relevant perceived base rate suggesting the directional hypothesis which people then test in a confirmatory manner.
