Similar documents
20 similar documents retrieved.
1.
Angelo Gilio, Synthese, 2005, 146(1-2): 139-152
We study a probabilistic logic based on the coherence principle of de Finetti and a related notion of generalized coherence (g-coherence). We examine probabilistic conditional knowledge bases associated with imprecise probability assessments defined on arbitrary families of conditional events. We introduce a notion of conditional interpretation defined directly in terms of precise probability assessments. We also examine a property of strong satisfiability which is related to the notion of toleration well known in default reasoning. In our framework we give more general definitions of the notions of probabilistic consistency and probabilistic entailment of Adams. We also recall a notion of strict p-consistency and some related results. Moreover, we give new proofs of some results obtained in probabilistic default reasoning. Finally, we examine the relationships between conditional probability rankings and the notions of g-coherence and g-coherent entailment.
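To make the coherence idea concrete, here is a minimal Python sketch of a de Finetti-style coherence check for the special case of a precise, unconditional assessment, phrased as a linear-programming feasibility problem; the g-coherence of imprecise conditional assessments studied in the paper requires a more involved sequence of such programs, and the events and numbers below are purely illustrative.

import numpy as np
from scipy.optimize import linprog

# Coherence check (unconditional special case): an assessment p_i on events E_i
# is coherent iff some probability distribution over the atoms of the generated
# partition reproduces every p_i.
def is_coherent(event_atoms, assessment, n_atoms):
    """event_atoms: one set of atom indices per event; assessment: the p_i."""
    A_eq = [np.ones(n_atoms)]                     # atom probabilities sum to 1
    b_eq = [1.0]
    for atoms, p in zip(event_atoms, assessment):
        row = np.zeros(n_atoms)
        row[list(atoms)] = 1.0                    # P(E_i) = p_i
        A_eq.append(row)
        b_eq.append(p)
    res = linprog(c=np.zeros(n_atoms), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0.0, 1.0)] * n_atoms, method="highs")
    return res.success                            # feasible <=> coherent

# Atoms 0..3 of the partition generated by A = {0,1}, B = {1,2}, A∩B = {1}.
print(is_coherent([{0, 1}, {1, 2}, {1}], [0.6, 0.5, 0.4], n_atoms=4))   # True
print(is_coherent([{0, 1}, {1, 2}, {1}], [0.3, 0.3, 0.4], n_atoms=4))   # False: P(A∩B) > P(A)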

2.
Conspiracy theories as alternative explanations for events and states of affairs enjoy widespread popularity. We test one possible explanation for why people are prone to conspiratorial thinking: We hypothesize that conspiratorial thinking as an explanation for events increases as the probability of those events decreases. In order to test this hypothesis, we have conducted five experiments in which participants were exposed to different information about probabilities of fictional events. The results of all experiments support the hypothesis: The lower the probability of an event, the more strongly participants embrace conspiratorial explanations. Conspiratorial thinking, we conclude, potentially represents a cognitive heuristic: A coping mechanism for uncertainty.

3.
Alexander R. Pruss, Synthese, 2014, 191(15): 3525-3540
Consider the regularity thesis that each possible event has non-zero probability. Hájek challenges this in two ways: (a) there can be nonmeasurable events that have no probability at all and (b) on a large enough sample space, some probabilities will have to be zero. But arguments for the existence of nonmeasurable events depend on the axiom of choice (AC). We shall show that the existence of anything like regular probabilities is by itself enough to imply a weak version of AC sufficient to prove the Banach–Tarski Paradox on the decomposition of a ball into two equally sized balls, and hence to show the existence of nonmeasurable events. This provides a powerful argument against unrestricted orthodox Bayesianism that works even without AC. A corollary of our formal result is that if every partial order extends to a total preorder while maintaining strict comparisons, then the Banach–Tarski Paradox holds. This yields an argument that incommensurability cannot be avoided in ambitious versions of decision theory.

4.
Bayesian models of cognition hypothesize that human brains make sense of data by representing probability distributions and applying Bayes’ rule to find the best explanation for available data. Understanding the neural mechanisms underlying probabilistic models remains important because Bayesian models provide a computational framework, rather than specifying mechanistic processes. Here, we propose a deterministic neural-network model which estimates and represents probability distributions from observable events—a phenomenon related to the concept of probability matching. Our model learns to represent probabilities without receiving any representation of them from the external world, but rather by experiencing the occurrence patterns of individual events. Our neural implementation of probability matching is paired with a neural module applying Bayes’ rule, forming a comprehensive neural scheme to simulate human Bayesian learning and inference. Our model also provides novel explanations of base-rate neglect, a notable deviation from Bayes’ rule.
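A minimal sketch, not the authors' network, of the two ingredients described here: a unit that learns an event's probability purely from its occurrence pattern via a delta rule (one simple route to probability matching), followed by a separate Bayes'-rule step applied to the learned estimates; the probabilities and sample sizes are made up for illustration.

import random

def learn_probability(occurrences, lr=0.05):
    estimate = 0.5                               # no probability is ever given as input
    for occurred in occurrences:                 # occurred is 1 or 0
        estimate += lr * (occurred - estimate)   # delta rule toward the observed pattern
    return estimate

random.seed(0)
# Two hypothetical causes; the learner only ever observes individual events.
prior_h1 = learn_probability([1 if random.random() < 0.3 else 0 for _ in range(2000)])
prior_h2 = 1.0 - prior_h1
lik_d_h1 = learn_probability([1 if random.random() < 0.8 else 0 for _ in range(2000)])
lik_d_h2 = learn_probability([1 if random.random() < 0.2 else 0 for _ in range(2000)])

# Bayes module: posterior for h1 given a datum d, computed from the learned estimates.
posterior_h1 = prior_h1 * lik_d_h1 / (prior_h1 * lik_d_h1 + prior_h2 * lik_d_h2)
print(round(prior_h1, 2), round(posterior_h1, 2))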

5.
Subadditivity in Memory for Personal Events
People’s subjective probability judgments of external events are often subadditive (i.e., the probability estimates of component parts of a single event sum to greater than one)—a clear violation of the extensional nature of probability theory. We show that people’s frequency judgments of personal events can also be subadditive. We found subadditivity even when component events made up a proper subset of a wider composite event. Our findings imply that the somewhat arbitrary choice of the specificity with which questions are asked can produce widely different reports for the same composite events.

6.
7.
This study examines the distribution and calibration of probability assessments given to general knowledge questions and to questions concerning future events. Two experiments revealed that: (1) People use certainty responses less frequently in response to questions concerning then-future events than to general knowledge questions, even when the then-future event questions are easier than the general knowledge questions. (2) Indonesian students, previously thought to have little grasp of probabilistic thinking, are able to give realistic probability assessments for then-future events. Cultural and task influences on our findings are discussed in relation to a procedural model of the processes involved in answering a question. We conclude that, as most applications of decision analysis involve future uncertainty, research in probability assessment should concentrate on questions concerning future events rather than on general knowledge questions.

8.
We generalize the concept of a ‘ranking associated with a linear order’ from linear orders to arbitrary finite binary relations. Using the concept of the differential of an object in a binary relation as a theoretical primitive, we axiomatically introduce several measurement scales, some of which include the generalized ranking as a special case. We provide a computational formula for this generalized ranking, discuss its many elegant properties, and offer some illustrative examples.
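A minimal sketch of one natural reading of such a ranking, taking the differential of an object to be the number of objects it dominates minus the number that dominate it; this reading is an assumption made here for illustration, since the paper introduces its primitive axiomatically, and the objects and relation below are invented.

def differential_ranking(objects, relation):
    """relation: set of ordered pairs (a, b) meaning 'a is above b'."""
    diff = {x: 0 for x in objects}
    for a, b in relation:
        if a != b:
            diff[a] += 1          # a dominates one more object
            diff[b] -= 1          # b is dominated by one more object
    # Higher differential = higher rank; ties share a rank, as in a total preorder.
    return sorted(objects, key=lambda x: -diff[x]), diff

objects = ["a", "b", "c", "d"]
relation = {("a", "b"), ("a", "c"), ("b", "c"), ("d", "c"), ("b", "d")}
order, diff = differential_ranking(objects, relation)
print(order, diff)   # ['a', 'b', 'd', 'c'] with differentials a=2, b=1, d=0, c=-3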

9.
Charles G. Morgan, Topoi, 1999, 18(2): 97-116
In this paper we examine the thesis that the probability of the conditional is the conditional probability. Previous work by a number of authors has shown that in standard numerical probability theories, the addition of the thesis leads to triviality. We introduce very weak, comparative conditional probability structures and discuss some extremely simple constraints. We show that even in such a minimal context, if one adds the thesis that the probability of a conditional is the conditional probability, then one trivializes the theory. Another way of stating the result is that the conditional of conditional probability cannot be represented in the object language on pain of trivializing the theory.
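For orientation, the classic numerical triviality argument alluded to above (the standard Lewis-style derivation, not the comparative result of this paper) runs roughly as follows: assuming $P(A \to B) = P(B \mid A)$ holds and is preserved under conditioning, then whenever $P(A \wedge B) > 0$ and $P(A \wedge \neg B) > 0$,

$$
\begin{aligned}
P(A \to B) &= P(A \to B \mid B)\,P(B) + P(A \to B \mid \neg B)\,P(\neg B)\\
           &= P(B \mid A \wedge B)\,P(B) + P(B \mid A \wedge \neg B)\,P(\neg B)\\
           &= 1\cdot P(B) + 0\cdot P(\neg B) = P(B),
\end{aligned}
$$

so $P(B \mid A) = P(B)$: the thesis forces independence across the board, which is the triviality in question.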

10.
We furnish a characterization of the representability of an interval order through a pair of continuous real-valued functions which in addition represent two total preorders associated with the given interval order. Our techniques lean on the key concept of a biorder. We introduce the concept of a natural topology for an interval order, and through this concept we extend the classical biorder approach to the continuous case.
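For orientation, the standard numerical representation behind such results (stated here from the general literature on interval orders, not as the paper's specific theorem): a numerical representation of an interval order $\prec$ on $X$ is a pair of real-valued functions $u, v$ with $u \le v$ such that

$$ x \prec y \iff v(x) < u(y), $$

where $u$ and $v$ can typically be required to also represent the two total preorders (the traces) naturally associated with $\prec$; the question of when such $u$ and $v$ can be taken continuous is what the paper's characterization addresses.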

11.
Journal of Applied Logic, 2014, 12(4): 462-476
We extend the framework of Inductive Logic to Second Order languages and introduce Wilmers' Principle, a rational principle for probability functions on Second Order languages. We derive a representation theorem for functions satisfying this principle and investigate its relationship with the first order principles of Regularity and Super Regularity.

12.
Patrick Maher, Synthese, 2010, 172(1): 119-127
Bayesian decision theory is here construed as explicating a particular concept of rational choice and Bayesian probability is taken to be the concept of probability used in that theory. Bayesian probability is usually identified with the agent’s degrees of belief but that interpretation makes Bayesian decision theory a poor explication of the relevant concept of rational choice. A satisfactory conception of Bayesian decision theory is obtained by taking Bayesian probability to be an explicatum for inductive probability given the agent’s evidence.

13.
14.
Studies in subjective probability III: The unimportance of alternatives
In four experiments, student subjects were asked to estimate probabilities for a list of two to ten exhaustive, non-chance events, covering a variety of situations, both of prediction and diagnosis. Only in the two-alternative case did a majority give estimates which add up to unity (or 100%). As the number of alternatives increased, the total probability increased far beyond 100%, indicating a non-distributional probability concept. For instance, the probability that a person has committed murder was considered to be quite independent of his being one of three or one of five suspects. Even when subjects were told that the list should be extended with two additional alternatives and were allowed to correct their earlier estimates, few thought it necessary to do so, and corrections went equally in both directions.

15.
Probability is usually closely related to Boolean structures, i.e., Boolean algebras or propositional logic. Here we show how probability can be combined with non-Boolean structures, and in particular non-Boolean logics. The basic idea is to describe uncertainty by (Boolean) assumptions, which may or may not be valid. The uncertain information then depends on these uncertain assumptions, scenarios or interpretations. We propose to describe information in information systems, as introduced by Scott into domain theory. This captures a wide range of systems of practical importance such as many propositional logics, first order logic, systems of linear equations, inequalities, etc. It thus covers both symbolic as well as numerical systems. Assumption-based reasoning then allows one to deduce supporting arguments for hypotheses. A probability structure imposed on the assumptions makes it possible to quantify the reliability of these supporting arguments and thus to introduce degrees of support for hypotheses. Information systems and related information algebras are formally introduced and studied in this paper as the basic structures for assumption-based reasoning. The probability structure is then formally represented by random variables with values in information algebras. Since these are in general non-Boolean structures, some care must be exercised in order to introduce these random variables. It is shown that this theory leads to an extension of the Dempster–Shafer theory of evidence and that information algebras in fact provide a natural framework for this theory.
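A minimal sketch of the basic mechanism described here, with invented assumption names, probabilities and entailment/contradiction tests: scenarios are truth assignments to independent Boolean assumptions, and the degree of support of a hypothesis is the probability mass of the non-contradictory scenarios whose arguments entail it, normalized by the mass of all non-contradictory scenarios.

from itertools import product

def degree_of_support(assumptions, p, entails, contradictory):
    """assumptions: list of names; p: name -> probability of being valid;
    entails(scenario): True if the scenario supports the hypothesis;
    contradictory(scenario): True if the scenario is ruled out."""
    support, consistent_mass = 0.0, 0.0
    for values in product([True, False], repeat=len(assumptions)):
        scenario = dict(zip(assumptions, values))
        prob = 1.0
        for a in assumptions:
            prob *= p[a] if scenario[a] else 1.0 - p[a]
        if contradictory(scenario):
            continue
        consistent_mass += prob
        if entails(scenario):
            support += prob
    return support / consistent_mass if consistent_mass > 0 else 0.0

# Toy use: the hypothesis is supported when a1 holds, or both a2 and a3 hold;
# scenarios with a3 but neither a1 nor a2 are (say) contradictory.
val = degree_of_support(
    ["a1", "a2", "a3"], {"a1": 0.7, "a2": 0.5, "a3": 0.4},
    entails=lambda s: s["a1"] or (s["a2"] and s["a3"]),
    contradictory=lambda s: (not s["a1"]) and (not s["a2"]) and s["a3"],
)
print(round(val, 3))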

16.
Existing research on category-based induction has primarily focused on reasoning about blank properties, or predicates that are designed to elicit little prior knowledge. Here, we address reasoning about nonblank properties. We introduce a model of conditional probability that assumes that the conclusion prior probability is revised to the extent warranted by the evidence in the premise. The degree of revision is a function of the relevance of the premise category to the conclusion and the informativeness of the premise statement. An algebraic formulation with no free parameters accurately predicted conditional probabilities for single- and two-premise conditionals (Experiments 1 and 3), as well as problems involving negative evidence (Experiment 2).

17.
Erik Weber, Synthese, 1999, 118(3): 479-499
This article has three aims. The first is to give a partial explication of the concept of unification. My explication will be partial because I confine myself to the unification of particular events, do not consider events of a quantitative nature, and discuss only deductive cases. The second aim is to analyze how unification can be reached. My third aim is to show that unification is an intellectual benefit. Instead of being an intellectual benefit, unification could be an intellectual harm, i.e., a state of mind we should try to avoid by all means. By calling unification an intellectual benefit, we claim that this form of understanding has an intrinsic value for us. I argue that unification really has this alleged intrinsic value.

18.
Jürgen Humburg, Topoi, 1986, 5(1): 39-50
The aim of my book is to explain the content of the different notions of probability. Based on a concept of logical probability, modified as compared with Carnap's, we succeed, by means of the mathematical results of de Finetti, in defining the concept of statistical probability. The starting point is the fundamental idea that certain phenomena are of the same kind, that certain occurrences can be repeated, that certain experiments are identical. For this idea we introduce the notion of a concept K of similarity. From a concept K of similarity we derive logically some probability-theoretic conclusions: if the events $E(\nu)$ are similar (of the same kind) on the basis of such a concept K, then intersections of $n$ of these events are equiprobable on the basis of K; in formulae, $E(\nu_1)\cdots E(\nu_n) \sim_K E(\nu'_1)\cdots E(\nu'_n)$ for $\nu_i \neq \nu_j$ and $\nu'_i \neq \nu'_j$ whenever $i \neq j$. On the basis of some further axioms, a partial comparative probability structure results from K, which forms the starting point of our further investigations and which we call logical probability on the basis of K.

We investigate a metrisation of this partial comparative structure, i.e., normed σ-additive functions $m_K$ which are compatible with this structure; we call these functions $m_K$ measure-functions in relation to K. The measure-functions may be interpreted as subjective probabilities of individuals who accept the concept K. It then holds that, for each measure-function, the limit of the relative frequencies in a sequence of the $E(\nu)$ exists with measure one. Where all measure-functions coincide on an event, we speak of a quantitative logical probability, which is the common measure of this event. In formulae, $l_K(\lim_{n\to\infty} h_n \text{ exists}) = 1$; in words, there is quantitative logical probability one that the limit of the relative frequencies exists. Another way of saying this is that the event that $\lim_{n\to\infty} h_n$ exists is a maximal element in the comparative structure resulting from K. Therefore we are entitled to introduce this limit and call it the statistical probability P. With the aid of the measure-functions it is possible to calculate the speed of this convergence; the analogue of the Bernoulli inequality holds: $m_K(|h_n - P| \le \varepsilon) \ge 1 - \tfrac{1}{4n\varepsilon^2}$. It is further possible in the work to obtain relationships for the concept of statistical independence which are expressed in terms of the comparative probability.

The theory has a special significance for quantum mechanics: the similarity of the phenomena in the domain of quantum mechanics explains their statistical behaviour. The usual mathematical statistics are explained in my book, but on the basis of this new theory it seems more expedient to use, besides the notion of statistical probability, also the notion of logical probability; the notion of subjective probability has only a heuristic function in my system. The following dualism is to be noted: the statistical behaviour of similar phenomena may be described, on the one hand, according to the model of classical probability theory by means of a number called statistical probability; on the other hand, we may express all formulae by means of a function, called the statistical probability function. This function is defined as the limit of the relative frequencies depending on the respective state of the universe. The statistical probability function is the primary notion, and the notion of statistical probability is derived from it: it is defined as the value of the statistical probability function at the true, unknown state of the universe.

As far as the Hume problem, the problem of inductive inference, is concerned, the book seems to give an example of how to solve it. The developed notions such as concept, measure-function, logical probability, etc. seem to be important beyond the concept of similarity. The present work is a summary of my book Grundzüge zu einem neuen Aufbau der Wahrscheinlichkeitstheorie [5]. For this reason I have frequently dispensed with providing proofs, and in this connection refer the interested reader to my book.
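As a small numerical illustration (not from the book) of the reconstructed frequency bound, the following sketch simulates many sequences of similar events with statistical probability P and compares the empirical rate at which the relative frequency h_n stays within ε of P against the lower bound 1 − 1/(4nε²); all parameter values are made up.

import random

random.seed(1)
P, n, eps, runs = 0.3, 500, 0.05, 2000
inside = 0
for _ in range(runs):
    h_n = sum(random.random() < P for _ in range(n)) / n   # relative frequency after n trials
    inside += abs(h_n - P) <= eps
print(inside / runs, 1 - 1 / (4 * n * eps ** 2))           # empirical rate vs. lower bound 0.8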

19.
Much of learning and reasoning occurs in pedagogical situations—situations in which a person who knows a concept chooses examples for the purpose of helping a learner acquire the concept. We introduce a model of teaching and learning in pedagogical settings that predicts which examples teachers should choose and what learners should infer given a teacher’s examples. We present three experiments testing the model predictions for rule-based, prototype, and causally structured concepts. The model shows good quantitative and qualitative fits to the data across all three experiments, predicting novel qualitative phenomena in each case. We conclude by discussing implications for understanding concept learning and implications for theoretical claims about the role of pedagogy in human learning.
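One common way such teacher-learner models are made explicit is as a pair of coupled equations solved by fixed-point iteration, roughly P_teacher(d | h) ∝ P_learner(h | d) and P_learner(h | d) ∝ P_teacher(d | h) P(h); the sketch below implements that generic idea under these assumed equations, not necessarily the paper's exact model, and the consistency matrix and prior are invented for the toy example.

import numpy as np

def pedagogical_fixed_point(consistent, prior, n_iter=100):
    """consistent[h, d] = 1 if example d is consistent with hypothesis h."""
    teacher = consistent / consistent.sum(axis=1, keepdims=True)   # start: uniform over consistent examples
    for _ in range(n_iter):
        learner = teacher * prior[:, None]                         # P(h|d) ∝ P_teacher(d|h) P(h)
        learner = learner / learner.sum(axis=0, keepdims=True)
        teacher = learner * consistent                             # P(d|h) ∝ P_learner(h|d), restricted to consistent d
        teacher = teacher / teacher.sum(axis=1, keepdims=True)
    return teacher, learner

# Toy example: 3 hypothetical nested rule hypotheses, 4 candidate examples.
consistent = np.array([[1, 1, 0, 0],
                       [1, 1, 1, 0],
                       [1, 1, 1, 1]], dtype=float)
prior = np.ones(3) / 3
teacher, learner = pedagogical_fixed_point(consistent, prior)
print(teacher.round(3))   # which examples a teacher of each hypothesis should choose
print(learner.round(3))   # what a learner should infer from each chosen example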

20.
When people estimate the probability of an event using a list that includes all or most of the possible events, their estimate of that probability is lower than if the other possible events are not explicitly identified on the list (i.e., are collapsed into an all-other-possibilities category). This list-length (or pruning) effect has been demonstrated to occur even for people who have expertise or considerable knowledge in the event domain. We reasoned that the experts used in previous studies would be unlikely to have probabilistic representations of their problem domains (e.g., auto mechanics, auditors, hospitality managers). We used baseball experts (n = 35) and novices (n = 56) on the assumption that expertise in baseball almost certainly involves mental representations of probability for various baseball events. Subjects estimated the frequency of hits, walks, strikeouts, putouts, and “all other” outcomes for an average major league player in 100 times at bat. Other subjects estimated these event outcome frequencies in a short-list condition (e.g., strikeouts, walks, and “all other”). Strong list-length effects were observed with novices; the frequency estimate for strikeouts, for example, was nearly twice as high in the short-list condition as in the long-list condition. Experts, however, showed no list-length effect and their estimated probabilities were very near the actual (normatively correct) probabilities in all conditions. We argue that the omission effect can be overridden by strong mental representations of the family of possible events and/or a clear knowledge of the probabilities associated with the events. As well, we argue that list-length effects seem to result at least in part from an anchoring-and-adjustment strategy.
