Similar Literature
20 similar documents found.
1.
Theories of subjective probability are viewed as formal languages for analyzing evidence and expressing degrees of belief. This article focuses on two probability languages, the Bayesian language and the language of belief functions (Shafer, 1976). We describe and compare the semantics (i.e., the meaning of the scale) and the syntax (i.e., the formal calculus) of these languages. We also investigate some of the designs for probability judgment afforded by the two languages.
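As an informal illustration of the belief-function calculus mentioned here, the sketch below combines two mass assignments with Dempster's rule of combination; the frame {a, b} and all mass values are invented for the example, not taken from the article.

```python
# A minimal sketch of Dempster's rule of combination for two mass
# functions over the frame {a, b}. All numbers are illustrative only.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (A, p), (B, q) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q                    # mass assigned to the empty set
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

frame = frozenset({"a", "b"})
m1 = {frozenset({"a"}): 0.6, frame: 0.4}         # partial support for a
m2 = {frozenset({"a"}): 0.3, frozenset({"b"}): 0.5, frame: 0.2}
print(dempster_combine(m1, m2))
```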

2.
People often have knowledge about the chances of events but are unable to express their knowledge in the form of coherent probabilities. This study proposed to correct incoherent judgment via an optimization procedure that seeks the (coherent) probability distribution nearest to a judge's estimates of chance. This method was applied to the chances of simple and complex meteorological events, as estimated by college undergraduates. No judge responded coherently, but the optimization method found close (coherent) approximations to their estimates. Moreover, the approximations were reliably more accurate than the original estimates, as measured by the quadratic scoring rule. Methods for correcting incoherence facilitate the analysis of expected utility and allow human judgment to be more easily exploited in the construction of expert systems.
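A minimal sketch of this kind of coherence-restoring optimization, assuming a least-squares criterion and a toy problem with two events A and B (the criterion and the numbers are our own, not necessarily those used in the study):

```python
# Find the coherent distribution over the four atoms of two events A, B
# whose implied event probabilities are closest (least squares) to a
# judge's raw, incoherent estimates.
import numpy as np
from scipy.optimize import minimize

# Hypothetical raw judgments: P(A), P(B), P(A and B), P(A or B)
judged = np.array([0.6, 0.5, 0.4, 0.8])   # incoherent: 0.6 + 0.5 - 0.4 != 0.8

# Atoms: p = [P(A&B), P(A&~B), P(~A&B), P(~A&~B)]
M = np.array([[1, 1, 0, 0],    # P(A)
              [1, 0, 1, 0],    # P(B)
              [1, 0, 0, 0],    # P(A and B)
              [1, 1, 1, 0]])   # P(A or B)

def loss(p):
    return np.sum((M @ p - judged) ** 2)

res = minimize(loss, x0=np.full(4, 0.25), method="SLSQP",
               bounds=[(0, 1)] * 4,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1}])

coherent = M @ res.x
print("coherent approximation:", np.round(coherent, 3))
```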

3.
Existing research on category-based induction has primarily focused on reasoning about blank properties, or predicates that are designed to elicit little prior knowledge. Here, we address reasoning about nonblank properties. We introduce a model of conditional probability that assumes that the conclusion prior probability is revised to the extent warranted by the evidence in the premise. The degree of revision is a function of the relevance of the premise category to the conclusion and the informativeness of the premise statement. An algebraic formulation with no free parameters accurately predicted conditional probabilities for single- and two-premise conditionals (Experiments 1 and 3), as well as problems involving negative evidence (Experiment 2).
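One toy way to realize this qualitative description (our own illustrative formula, not the authors' algebraic formulation) is to raise the conclusion prior toward 1 by an exponent that shrinks as the premise becomes more relevant and more informative:

```python
# Toy revision rule: revised = prior ** alpha, with alpha in [0, 1].
# Higher relevance and informativeness shrink alpha, pushing the
# revised probability further above the prior. Invented for illustration.
def revised_probability(prior, relevance, informativeness):
    """prior in (0, 1); relevance and informativeness in [0, 1]."""
    alpha = ((1 - relevance) / (1 + relevance)) ** informativeness
    return prior ** alpha

print(revised_probability(0.30, relevance=0.0, informativeness=1.0))  # 0.30 (no revision)
print(revised_probability(0.30, relevance=0.8, informativeness=1.0))  # ~0.87
```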

4.
5.
Representativeness is the name given to the heuristic people often employ when they judge the probability of a sample by how well it represents certain salient features of the population from which it was drawn. The representativeness heuristic has also been used to account for how people judge the probability that a given population is the source of some sample. The latter probability, however, depends on other factors (e.g., the population's prior probability) as well as on the sample characteristics. A review of existing evidence suggests that the ignoring of such factors, a central finding of the heuristics approach to judgment under uncertainty, is a phenomenon which is conceptually distinct from the representativeness heuristic. These factors (base rates, sample size, and predictability) do not always exert the proper influence on people's first-order probability judgments, but they are not ignored when people make second-order (i.e., confidence) judgments. Other fallacies and biases in subjective evaluations of probability are, however, direct causal results of the employment of representativeness. For example, representativeness may be applied to the wrong features. Most devastating, perhaps, is that subjective probability judgments obey a logic of representativeness judgments, even though probability ought to obey an altogether different logic. Yet although the role of representativeness judgments in probability estimation leaves a lot to be desired, it is hard to envision prediction and inference completely unaided by representativeness.

6.
What makes some explanations better than others? This paper explores the roles of simplicity and probability in evaluating competing causal explanations. Four experiments investigate the hypothesis that simpler explanations are judged both better and more likely to be true. In all experiments, simplicity is quantified as the number of causes invoked in an explanation, with fewer causes corresponding to a simpler explanation. Experiment 1 confirms that all else being equal, both simpler and more probable explanations are preferred. Experiments 2 and 3 examine how explanations are evaluated when simplicity and probability compete. The data suggest that simpler explanations are assigned a higher prior probability, with the consequence that disproportionate probabilistic evidence is required before a complex explanation will be favored over a simpler alternative. Moreover, committing to a simple but unlikely explanation can lead to systematic overestimation of the prevalence of the cause invoked in the simple explanation. Finally, Experiment 4 finds that the preference for simpler explanations can be overcome when probability information unambiguously supports a complex explanation over a simpler alternative. Collectively, these findings suggest that simplicity is used as a basis for evaluating explanations and for assigning prior probabilities when unambiguous probability information is absent. More broadly, evaluating explanations may operate as a mechanism for generating estimates of subjective probability.
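The core quantitative claim can be illustrated with a toy Bayesian calculation (all priors and likelihoods are invented): if simplicity inflates the prior of the one-cause explanation, the two-cause explanation needs disproportionately strong likelihood support before its posterior odds exceed 1.

```python
# Posterior odds of a complex (two-cause) explanation over a simple
# (one-cause) one, given simplicity-based priors. Numbers are illustrative.
def posterior_odds(prior_simple, prior_complex, lik_simple, lik_complex):
    """Odds > 1 mean the complex explanation is favored a posteriori."""
    return (prior_complex * lik_complex) / (prior_simple * lik_simple)

# Simplicity-based priors: the one-cause explanation is favored 4:1 a priori.
print(posterior_odds(0.8, 0.2, 0.5, 1.0))   # 0.5  -> simple still wins
print(posterior_odds(0.8, 0.2, 0.2, 1.0))   # 1.25 -> complex finally favored
```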

7.
John Forge, Erkenntnis, 1990, 33(3): 371-390
Conclusion: By using the concept of a uniformity, the Structuralists have given us a most useful means of representing approximations. In the second section of this paper, I have made use of this technique to show how we can deal with errors of measurement (imprecise explananda) in the context of theoretical explanation. As well as (I hope) providing further demonstration of the power of the Structuralist approach, this also serves to support the ontic conception of explanation by showing that it can help us resolve substantial problems in the theory of explanation. I would like to thank Professor C. U. Moulines for his kindness in reading an earlier draft of this paper, and in particular for suggesting that I mention the points made in footnotes 12 and 13. I am also most grateful to this journal's referee for many helpful comments whereby the paper has been much improved.

8.
The influence of hierarchy on probability judgment
Lagnado DA, Shanks DR, Cognition, 2003, 89(2): 157-178
Consider the task of predicting which soccer team will win the next World Cup. The bookmakers may judge Brazil to be the team most likely to win, but also judge it most likely that a European rather than a Latin American team will win. This is an example of a non-aligned hierarchy structure: the most probable event at the subordinate level (Brazil wins) appears to be inconsistent with the most probable event at the superordinate level (a European team wins). In this paper we exploit such structures to investigate how people make predictions based on uncertain hierarchical knowledge. We distinguish between aligned and non-aligned environments, and conjecture that people assume alignment. Participants were exposed to a non-aligned training set in which the most probable superordinate category predicted one outcome, whereas the most probable subordinate category predicted a different outcome. In the test phase participants allowed their initial probability judgments about category membership to shift their final ratings of the probability of the outcome, even though all judgments were made on the basis of the same statistical data. In effect people were primed to focus on the most likely path in an inference tree, and neglect alternative paths. These results highlight the importance of the level at which statistical data are represented, and suggest that when faced with hierarchical inference problems people adopt a simplifying heuristic that assumes alignment.
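A small numerical sketch of a non-aligned hierarchy like the World Cup example (team probabilities are invented): the single most probable team is Brazil, yet the most probable region is Europe.

```python
# Non-aligned hierarchy: the modal subordinate event (Brazil wins) belongs
# to a region that is not the modal superordinate event (Europe wins).
continent = {"Brazil": "Latin America", "Argentina": "Latin America",
             "Germany": "Europe", "France": "Europe", "Spain": "Europe"}
p_team = {"Brazil": 0.25, "Argentina": 0.05,
          "Germany": 0.15, "France": 0.15, "Spain": 0.10}
# The remaining probability mass lies on other teams and is ignored here.

best_team = max(p_team, key=p_team.get)
p_region = {}
for team, p in p_team.items():
    p_region[continent[team]] = p_region.get(continent[team], 0.0) + p

print(best_team)                                # Brazil (0.25)
print(max(p_region, key=p_region.get))          # Europe (0.40 > 0.30)
```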

9.
This paper seeks to meet the need for a general treatment of the problem of error in classification. Within an m-attribute classificatory system, an object's typical subclass is that subclass to which it is most often allocated under repeated experimentally independent applications of the classificatory criteria. In these terms, an error of classification is an atypical subclass allocation. This leads to definition of probabilities O of occasional subclass membership, probabilities T of typical subclass membership, and probabilities E of error or, more generally, occasional subclass membership conditional upon typical subclass membership. In the relationship f: (O, T, E) the relative incidence of independent O, T, and E values is such that generally one can specify O values given T and E, but one cannot generally specify T and E values given O. Under the restrictions of homogeneity of E values for all members of a given typical subclass, mutual stochastic independence of errors of classification, and suitable conditions of replication, one can find particular systems O = f(T, E) which are solvable for T and E given O. A minimum of three replications of occasional classification is necessary for a solution of systems for marginal attributes, and a minimum of two replications is needed with any cross-classification. Although for such systems one can always specify T and E values given O values, the solution is unique for dichotomous systems only. With grateful acknowledgement to the Rockefeller Foundation; and to the United States Department of Health, Education, and Welfare, Public Health Service, for N. I. M. H. Grant M-3950.
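For the simplest dichotomous case, the forward relation O = f(T, E) can be sketched as follows (our own minimal reading of the framework, assuming a homogeneous error probability and independent replications):

```python
# Forward model for a dichotomous system: T is the probability of typical
# membership in class 1, E the probability that an occasional classification
# departs from the typical one (assumed homogeneous and independent).
from math import comb

def occasional_prob(T: float, E: float) -> float:
    """P(an object is classified into class 1 on a single occasion)."""
    return T * (1 - E) + (1 - T) * E

def prob_k_of_n(T: float, E: float, n: int, k: int) -> float:
    """P(exactly k of n independent replications classify into class 1)."""
    typical_1 = comb(n, k) * (1 - E) ** k * E ** (n - k)
    typical_2 = comb(n, k) * E ** k * (1 - E) ** (n - k)
    return T * typical_1 + (1 - T) * typical_2

print(occasional_prob(0.7, 0.1))        # 0.66
print(prob_k_of_n(0.7, 0.1, 3, 3))      # P(all three replications say class 1)
```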

10.
General features of a probability model for errors of classification are recapitulated as an introduction to particular cases and applications. Several models for dichotomous and nondichotomous systems are examined in sufficient detail to elaborate a procedure for dealing with any particular case. The system O = f(T, E) has empirical reference where, as statistic or parameter, probability of occasional subclass membership is given by observation, and one seeks to recover T and E values from O. A procedure for relating models and data is described. Applications of the concepts and methods are illustrated for several areas of psychological research. With grateful acknowledgment to the Rockefeller Foundation; and to the United States Department of Health, Education, and Welfare, Public Health Service, for N. I. M. H. Grant M-3950.

11.
12.
Three experiments show that understanding of biases in probability judgment can be improved by extending the application of the associative-learning framework. In Experiment 1, the authors used M. A. Gluck and G. H. Bower's (1988a) diagnostic-learning task to replicate apparent base-rate neglect and to induce the conjunction fallacy in a later judgment phase as a by-product of the conversion bias. In Experiment 2, the authors found stronger evidence of the conversion bias with the same learning task. In Experiment 3, the authors changed the diagnostic-learning task to induce some conjunction fallacies that were not based on the conversion bias. The authors show that the conjunction fallacies obtained in Experiment 3 can be explained by adding an averaging component to M. A. Gluck and G. H. Bower's model.
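The learning component referred to here is an adaptive network trained with the delta (LMS) rule; the sketch below is a generic version of such a network on an invented diagnostic task, not a reconstruction of the authors' stimuli or parameters.

```python
# Generic delta-rule network for a diagnostic-learning task: symptom cues
# are inputs, the disease outcome is the teacher, weights updated by LMS.
import numpy as np

rng = np.random.default_rng(0)
n_cues, lr = 4, 0.05
w = np.zeros(n_cues)

def trial(p_rare=0.25):
    """Sample a patient: rare vs. common disease, then symptoms given disease."""
    disease = rng.random() < p_rare                   # True = rare disease
    p_sym = [0.6, 0.4, 0.3, 0.2] if disease else [0.2, 0.3, 0.4, 0.6]
    cues = (rng.random(n_cues) < p_sym).astype(float)
    return cues, float(disease)

for _ in range(1000):
    x, t = trial()
    y = w @ x                                         # network output
    w += lr * (t - y) * x                             # delta rule update

print(np.round(w, 3))                                 # cue weights after learning
```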

13.
The purpose of this paper is to present a two-phase hypothesis generation model to describe behavior in multiple-cue probability learning tasks with nonmetric cues. The model assumes that on each trial the subject generates two sets of hypotheses: (a) a hypothesis concerning which cue dimension (or pattern) will lead to a correct prediction on that trial and (b) a hypothesis concerning which response will be correct given the cue dimension attended to on that trial. Five hundred twelve subjects were assigned to 20 groups in a binary choice task involving two binary cue dimensions. Each group observed cues which differed in validity. Analysis of the data indicated that subjects attend to both cue dimensions in making judgments even when one cue has zero validity. A test of the fit of the observed data to the asymptotic response proportions predicted by the model indicated a reasonable fit.
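A rough sketch of the two-phase idea (our own simplification; the attention weights and response probabilities are invented): the judge first samples a cue dimension to attend to, then samples a response given that dimension's value.

```python
# Two-phase prediction: phase 1 picks a cue dimension to attend to,
# phase 2 picks a response hypothesis given the attended cue's value.
import random

def two_phase_prediction(cues, attn_weights, response_given_cue):
    """cues: dim -> observed binary value; attn_weights: dim -> attention prob.;
    response_given_cue: (dim, value) -> probability of predicting outcome 1."""
    dims = list(attn_weights)
    dim = random.choices(dims, weights=[attn_weights[d] for d in dims])[0]
    p1 = response_given_cue[(dim, cues[dim])]
    return 1 if random.random() < p1 else 0

cues = {"shape": 1, "color": 0}
attn = {"shape": 0.7, "color": 0.3}              # some attention even to a weak cue
resp = {("shape", 1): 0.8, ("shape", 0): 0.2,
        ("color", 1): 0.5, ("color", 0): 0.5}
print(sum(two_phase_prediction(cues, attn, resp) for _ in range(10000)) / 10000)
```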

14.
15.
Order of information plays a crucial role in the process of updating beliefs across time. In fact, the presence of order effects makes a classical or Bayesian approach to inference difficult. As a result, the existing models of inference, such as the belief-adjustment model, merely provide an ad hoc explanation for these effects. We postulate a quantum inference model for order effects based on the axiomatic principles of quantum probability theory. The quantum inference model explains order effects by transforming a state vector with different sequences of operators for different orderings of information. We demonstrate this process by fitting the quantum model to data collected in a medical diagnostic task and a jury decision-making task. To further test the quantum inference model, a new jury decision-making experiment is developed. Using the results of this experiment, we compare the quantum inference model with two versions of the belief-adjustment model, the adding model and the averaging model. We show that both the quantum model and the adding model provide good fits to the data. To distinguish the quantum model from the adding model, we develop a new experiment involving extreme evidence. The results from this new experiment suggest that the adding model faces limitations when accounting for tasks involving extreme evidence, whereas the quantum inference model does not. Ultimately, we argue that the quantum model provides a more coherent account for order effects that was not possible before.
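A toy sketch of the mechanism (not the authors' fitted model): the two pieces of evidence correspond to non-commuting projectors, so applying them to the belief state in different orders, renormalizing after each step, yields different final probabilities of "guilty". The three-dimensional space, the subspaces, and the initial state are invented for illustration.

```python
# Order effects from non-commuting projectors applied to a belief state.
import numpy as np

def projector(*vectors):
    """Orthogonal projector onto the span of the given vectors."""
    V = np.column_stack(vectors)
    return V @ np.linalg.pinv(V)

e1, e2, e3 = np.eye(3)
P_guilty = projector(e1)                              # "guilty" subspace
P_evidA = projector(e1, e2)                           # subspace consistent with evidence A
P_evidB = projector((e1 + e3) / np.sqrt(2), e2)       # subspace consistent with evidence B

psi0 = np.ones(3) / np.sqrt(3)                        # initial, uncommitted belief state

def p_guilty_after(*projs):
    psi = psi0
    for P in projs:                                   # project, then renormalize
        psi = P @ psi
        psi = psi / np.linalg.norm(psi)
    return float(psi @ P_guilty @ psi)

print(p_guilty_after(P_evidA, P_evidB))               # ~0.17 (A first)
print(p_guilty_after(P_evidB, P_evidA))               # 0.50  (B first)
```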

16.
Contingency information is information about empirical associations between possible causes and outcomes. In the present research, it is shown that, under some circumstances, there is a tendency for negative contingencies to lead to positive causal judgments and for positive contingencies to lead to negative causal judgments. If there is a high proportion of instances in which a candidate cause (CC) being judged is present, these tendencies are predicted by weighted averaging models of causal judgment. If the proportion of such instances is low, the predictions of weighted averaging models break down. It is argued that one of the main aims of causal judgment is to account for occurrences of the outcome. Thus, a CC is not given a high causal judgment if there are few or no occurrences of it, regardless of the objective contingency. This argument predicts that, if there is a low proportion of instances in which a CC is present, causal judgments are determined mainly by the number of Cell A instances (i.e., CC present, outcome occurs), and that this explains why weighted averaging models fail to predict judgmental tendencies under these circumstances. Experimental results support this argument.
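Both the objective contingency (Delta-P) and a toy weighted-averaging judgment can be computed from the four cells of a 2x2 cause/outcome table; the cell weights and frequencies below are invented, chosen so that the contingency is negative while the averaging-style judgment comes out positive because the candidate cause is present on most trials.

```python
# Delta-P versus a toy weighted-averaging judgment from a 2x2 table.
# Cells: a = cause & outcome, b = cause & no outcome,
#        c = no cause & outcome, d = no cause & no outcome.
def delta_p(a, b, c, d):
    return a / (a + b) - c / (c + d)

def weighted_average(a, b, c, d, w=(4, 2, 2, 1)):
    """Confirming cells (a, d) count positively, disconfirming cells (b, c)
    negatively; the weights are arbitrary illustrative values."""
    wa, wb, wc, wd = w
    return (wa * a - wb * b - wc * c + wd * d) / (wa * a + wb * b + wc * c + wd * d)

# Cause present on 40 of 50 trials; objective contingency is negative.
print(delta_p(30, 10, 9, 1))             # -0.15
print(weighted_average(30, 10, 9, 1))    # ~0.52, i.e. a positive judgment
```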

17.
The authors provide evidence that people typically evaluate conditional probabilities by subjectively partitioning the sample space into n interchangeable events, editing out events that can be eliminated on the basis of conditioning information, counting remaining events, then reporting probabilities as a ratio of the number of focal to total events. Participants' responses to conditional probability problems were influenced by irrelevant information (Study 1), small variations in problem wording (Study 2), and grouping of events (Study 3), as predicted by the partition-edit-count model. Informal protocol analysis also supports the authors' interpretation. A 4th study extends this account from situations where events are treated as interchangeable (chance and ignorance) to situations where participants have information they can use to distinguish among events (uncertainty).
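A minimal sketch of the partition-edit-count account (the die example is ours, not from the paper): partition the sample space into interchangeable events, edit out those ruled out by the conditioning information, and report the ratio of focal to remaining events.

```python
# Partition-edit-count: probability as a ratio of focal to remaining events.
def partition_edit_count(events, eliminated, focal):
    remaining = [e for e in events if e not in eliminated]
    in_focus = [e for e in remaining if e in focal]
    return len(in_focus) / len(remaining)

# "A die was rolled and the result is known to be even.
#  What is the probability it is greater than 3?"
events = {1, 2, 3, 4, 5, 6}
eliminated = {1, 3, 5}                    # ruled out by the condition "even"
focal = {4, 5, 6}                         # "greater than 3"
print(partition_edit_count(events, eliminated, focal))   # 2/3
```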

18.
A theoretical explanation for the classical distinction between the conversion and dissociative hysterias was advanced based on previous research with cases of conversion hysteria and multiple personality. The principles were illustrated and extended using Rorschach and Hand Test data from a fugue state.

19.
In this paper I draw on Einstein's distinction between "principle" and "constructive" theories to isolate two levels of physical theory that can be found in both classical and (special) relativistic physics. I then argue that when we focus on theoretical explanations in physics, i.e. explanations of physical laws, the two leading views on explanation, Salmon's "bottom-up" view and Kitcher's "top-down" view, accurately describe theoretical explanations for a given level of theory. I arrive at this conclusion through an analysis of explanations of mass-energy equivalence in special relativity.

20.