Similar articles (20 found)
1.
Jon Williamson 《Synthese》2011,178(1):67-85
Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating. The paper also reviews some existing criticisms and justifications of conditionalisation, arguing in particular that the diachronic Dutch book justification fails because diachronic Dutch book arguments are subject to a reductio: in certain circumstances one can Dutch book an agent however she changes her degrees of belief. One may also criticise objective Bayesianism on the grounds that its norms are not compulsory but voluntary, the result of a stance. It is argued that this second objection also misses the mark, since objective Bayesian norms are tied up in the very notion of degrees of belief.
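The maximum entropy principle that the abstract appeals to can be illustrated on a finite outcome space. The sketch below is our own toy illustration (not Williamson's construction): Jaynes's classic dice problem of finding, among all distributions over {1,…,6} with mean 4.5, the one of maximum entropy. The solution has the Gibbs form p_i ∝ exp(λi), so a one-dimensional bisection on λ suffices; `maxent_dice` is a hypothetical helper name.

```python
import math

def maxent_dice(target_mean, n=6, tol=1e-10):
    """Maximum-entropy distribution over {1, ..., n} with E[X] = target_mean.
    The maxent solution has the form p_i ∝ exp(lam * i); the mean is strictly
    increasing in lam, so we find lam by bisection."""
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, n + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, n + 1), w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in range(1, n + 1)]
    z = sum(w)
    return [wi / z for wi in w]

# Uniform distribution (lam = 0) has mean 3.5; constraining the mean to 4.5
# tilts probability toward high faces, but as evenly as the constraint allows.
p = maxent_dice(4.5)
```

Conditionalisation, by contrast, can only zero out excluded outcomes and renormalise; the two rules can disagree once constraints go beyond "event E occurred", which is the divergence at issue in the paper.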

2.
A group is often construed as one agent with its own probabilistic beliefs (credences), which are obtained by aggregating those of the individuals, for instance through averaging. In their celebrated “Groupthink”, Russell et al. (2015) require group credences to undergo Bayesian revision whenever new information is learnt, i.e., whenever individual credences undergo Bayesian revision based on this information. To obtain a fully Bayesian group, one should often extend this requirement to non‐public or even private information (learnt by not all or just one individual), or to non‐representable information (not representable by any event in the domain where credences are held). I propose a taxonomy of six types of ‘group Bayesianism’. They differ in the information for which Bayesian revision of group credences is required: public representable information, private representable information, public non‐representable information, etc. Six corresponding theorems establish how individual credences must (not) be aggregated to ensure group Bayesianism of any type, respectively. Aggregating through standard averaging is never permitted; instead, different forms of geometric averaging must be used. One theorem—that for public representable information—is essentially Russell et al.'s central result (with minor corrections). Another theorem—that for public non‐representable information—fills a gap in the theory of externally Bayesian opinion pooling.
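The contrast between standard (linear) averaging and geometric averaging can be made concrete: conditioning a linear average of credences on an event generally differs from averaging the conditioned credences, while geometric pooling commutes with conditioning. A minimal sketch with invented credence values (our illustration, not the paper's theorems):

```python
def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def linear_pool(p, q, w=0.5):
    # Standard averaging of two credence functions
    return [w * a + (1 - w) * b for a, b in zip(p, q)]

def geometric_pool(p, q, w=0.5):
    # Weighted geometric average, renormalised
    return normalize([a ** w * b ** (1 - w) for a, b in zip(p, q)])

def condition(p, event):
    # Bayesian conditioning on an event given as a 0/1 indicator list
    return normalize([pi * e for pi, e in zip(p, event)])

p1 = [0.6, 0.3, 0.1]   # agent 1's credences over three worlds
p2 = [0.1, 0.3, 0.6]   # agent 2's credences
E  = [1, 1, 0]         # publicly learnt event: "not world 3"

lin_pool_then_cond = condition(linear_pool(p1, p2), E)
lin_cond_then_pool = linear_pool(condition(p1, E), condition(p2, E))

geo_pool_then_cond = condition(geometric_pool(p1, p2), E)
geo_cond_then_pool = geometric_pool(condition(p1, E), condition(p2, E))
```

The linear order-of-operations gap is what rules out standard averaging for a Bayesian group; geometric pooling's commutation is the "externally Bayesian" property the abstract mentions.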

3.
Inductive logic admits a variety of semantics (Haenni et al. (2011) [7, Part 1]). This paper develops semantics based on the norms of Bayesian epistemology (Williamson, 2010 [16, Chapter 7]). Section 1 introduces the semantics and then, in Section 2, the paper explores methods for drawing inferences in the resulting logic and compares the methods of this paper with the methods of Barnett and Paris (2008) [2]. Section 3 then evaluates this Bayesian inductive logic in the light of four traditional critiques of inductive logic, arguing (i) that it is language independent in a key sense, (ii) that it admits connections with the Principle of Indifference but these connections do not lead to paradox, (iii) that it can capture the phenomenon of learning from experience, and (iv) that while the logic advocates scepticism with regard to some universal hypotheses, such scepticism is not problematic from the point of view of scientific theorising.

4.
Objective Bayesianism with predicate languages
Jon Williamson 《Synthese》2008,163(3):341-356
Objective Bayesian probability is often defined over rather simple domains, e.g., finite event spaces or propositional languages. This paper investigates the extension of objective Bayesianism to first-order logical languages. It is argued that the objective Bayesian should choose a probability function, from all those that satisfy constraints imposed by background knowledge, that is closest to a particular frequency-induced probability function which generalises the λ = 0 function of Carnap’s continuum of inductive methods.

5.
Andreoletti  Mattia  Oldofredi  Andrea 《Topoi》2019,38(2):477-485

Medical research makes intensive use of statistics in order to support its claims. In this paper we make explicit an epistemological tension between the conduct of clinical trials and their interpretation: statistical evidence is sometimes discarded on the basis of (often implicit) underlying Bayesian reasoning. We suggest that acknowledging the potential of Bayesian statistics might help clarify and improve comprehension of medical research. Nevertheless, although Bayesianism may provide a better account of scientific inference than the standard frequentist approach, Bayesian statistics is rarely adopted in clinical research. The main reason lies in the supposedly subjective elements characterizing this perspective. Hence, we discuss this objection by presenting so-called Reference analysis, a formal method developed in the context of objective Bayesian statistics in order to define priors that have minimal or no impact on posterior probabilities. Furthermore, according to this method only the available data are relevant sources of information, so that it resists the most common criticisms against Bayesianism.


6.
Belief revision is the problem of finding the most plausible explanation for an observed set of evidence. It has many applications in scientific domains such as natural language understanding, medical diagnosis and computational biology. Bayesian networks (BNs) are an important probabilistic graphical formalism widely used for belief revision tasks. In a BN, belief revision can be achieved by finding the maximum a posteriori (MAP) assignment; finding MAP is an NP-hard problem. In previous work, we showed how to find the MAP assignment in a BN using High Order Recurrent Neural Networks (HORN) through an intermediate representation of Cost-Based Abduction. This method eliminates the need to explicitly construct the energy function in two steps, objective and constraints. This paper builds on that previous work by providing the theoretical foundation and proving that the resultant HORN used to find MAP is strongly equivalent to the original BN it tries to solve.
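For small networks the MAP assignment can be found by brute-force enumeration of the joint distribution. The sketch below is a toy illustration of the MAP problem itself (not the HORN method the abstract describes); the network and all CPT numbers are invented:

```python
from itertools import product

# Toy BN: Burglary -> Alarm <- Earthquake (all variables boolean).
# CPT numbers below are invented for illustration.
P_B = {0: 0.99, 1: 0.01}                 # P(Burglary)
P_E = {0: 0.98, 1: 0.02}                 # P(Earthquake)
P_A = {(0, 0): 0.001, (0, 1): 0.3,       # P(Alarm=1 | B, E)
       (1, 0): 0.9,   (1, 1): 0.95}

def joint(b, e, a):
    """Joint probability factorised along the BN structure."""
    pa1 = P_A[(b, e)]
    return P_B[b] * P_E[e] * (pa1 if a == 1 else 1 - pa1)

def map_assignment(alarm):
    """Brute-force MAP over (B, E) given the observed Alarm value."""
    return max(product([0, 1], repeat=2),
               key=lambda be: joint(be[0], be[1], alarm))

best = map_assignment(alarm=1)  # most plausible explanation of the alarm
```

Enumeration is exponential in the number of variables, which is why the NP-hardness of MAP motivates heuristic encodings like the HORN approach.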

7.
8.
Conclusions. Probabilities are important in belief updating, but probabilistic reasoning does not subsume everything else (as the Bayesian would have it). On the contrary, Bayesian reasoning presupposes knowledge that cannot itself be obtained by Bayesian reasoning, making generic Bayesianism an incoherent theory of belief updating. Instead, it is indefinite probabilities that are of principal importance in belief updating. Knowledge of such indefinite probabilities is obtained by some form of statistical induction, and inferences to non-probabilistic conclusions are carried out in accordance with the statistical syllogism. Such inferences have been the focus of much attention in the nonmonotonic reasoning literature, but the logical complexity of such inference has not been adequately appreciated.

9.
Rational agents have (more or less) consistent beliefs. Bayesianism is a theory of consistency for partial belief states. Rational agents also respond appropriately to experience. Dogmatism is a theory of how to respond appropriately to experience. Hence, Dogmatism and Bayesianism are theories of two very different aspects of rationality. It's surprising, then, that in recent years it has become common to claim that Dogmatism and Bayesianism are jointly inconsistent: how can two independently consistent theories with distinct subject matter be jointly inconsistent? In this essay I argue that Bayesianism and Dogmatism are inconsistent only with the addition of a specific hypothesis about how the appropriate responses to perceptual experience are to be incorporated into the formal models of the Bayesian. That hypothesis isn't essential either to Bayesianism or to Dogmatism, and so Bayesianism and Dogmatism are jointly consistent. That leaves the matter of how experiences and credences are related, and so in the remainder of the essay I offer an alternative account of how perceptual justification, as the Dogmatist understands it, can be incorporated into the Bayesian formalism.

10.
Many philosophers have claimed that Bayesianism can provide a simple justification for hypothetico-deductive (H-D) inference, long regarded as a cornerstone of the scientific method. Following up a remark of van Fraassen (1985), we analyze a problem for the putative Bayesian justification of H-D inference in the case where what we learn from observation is logically stronger than what our theory implies. Firstly, we demonstrate that in such cases the simple Bayesian justification does not necessarily apply. Secondly, we identify a set of sufficient conditions for the mismatch in logical strength to be justifiably ignored as a “harmless idealization”. Thirdly, we argue, based upon scientific examples, that the pattern of H-D inference of which there is a ready Bayesian justification is only rarely the pattern that one actually finds at work in science. Whatever the other virtues of Bayesianism, the idea that it yields a simple justification of a pervasive pattern of scientific inference appears to have been oversold.
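The "simple Bayesian justification" at issue is just this: if H entails E and 0 < P(E) < 1, then P(H|E) = P(H)/P(E) > P(H), so observing E confirms H. A toy numeric check, including a case where the observed evidence is strictly stronger than what H implies (our own example, not one of the paper's scientific cases):

```python
# Uniform probability over a die roll.
# Hypothesis H = "outcome is 2", which entails E = "outcome is even".
outcomes = {1, 2, 3, 4, 5, 6}
H = {2}
E = {2, 4, 6}        # exactly what H implies
E_strong = {4}       # what we actually observe: stronger than E, excludes H

def pr(A):
    return len(A & outcomes) / len(outcomes)

def pr_given(A, B):
    return len(A & B) / len(B)

confirm = pr_given(H, E)           # conditioning on what H implies raises P(H)
undermine = pr_given(H, E_strong)  # a logically stronger observation can refute H
```

The gap between `confirm` and `undermine` is the mismatch the paper analyses: the Bayesian story covers learning exactly E, whereas real observation reports are usually stronger.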

11.
In this paper some parts of the model theory for logics based on generalised Kripke semantics are developed. Löwenheim-Skolem theorems and some applications of ultraproduct constructions for generalised Kripke models with variable universe are investigated using similar theorems of the model theory for classical logic. The results are generalizations of the theorems of [4].

12.
Ronald N. Giere 《Synthese》1969,20(3):371-387
A comparison of Neyman's theory of interval estimation with the corresponding subjective Bayesian theory of credible intervals shows that the Bayesian approach to the estimation of statistical parameters allows experimental procedures which, from the orthodox objective viewpoint, are clearly biased and clearly inadmissible. This demonstrated methodological difference focuses attention on the key difference in the two general theories, namely, that the orthodox theory is supposed to provide a known average frequency of successful estimates, whereas the Bayesian account provides only a coherent ordering of degrees of belief and a subsequent maximization of subjective expected utilities. To rebut the charge of allowing biased procedures, the Bayesian must attack the foundations of orthodox, objectivist methods. Two apparently popular avenues of attack are briefly considered and found wanting. The first is that orthodox methods fail to apply to the single case. The second is that orthodox methods are subject to a typical Humean regress. The conclusion is that orthodox objectivist methods remain viable in the face of the subjective Bayesian alternative — at least with respect to the problem of statistical estimation.

13.
Humans are adept at inferring the mental states underlying other agents’ actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents’ behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent’s behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an “intentional stance” [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a “teleological stance” [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.  相似文献
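The inverse-planning idea can be sketched in a few lines: assume a Boltzmann-rational action likelihood (actions that bring the agent closer to its goal are exponentially more probable) and invert it with Bayes' rule to get a posterior over goals. This is a toy one-dimensional corridor under our own assumptions (names, `BETA` value, and setup all invented), not the authors' maze stimuli or model variants:

```python
import math

GOALS = {"left_end": 0, "right_end": 4}   # goal locations in a 5-cell corridor
BETA = 2.0  # rationality parameter: higher = closer to perfectly rational

def step(s, a):
    """Deterministic transition: move one cell left or right, clipped."""
    return max(0, min(4, s + (1 if a == "right" else -1)))

def action_likelihood(a, s, goal):
    """Boltzmann-rational policy: prefer actions that reduce distance to goal."""
    scores = {act: math.exp(-BETA * abs(step(s, act) - goal))
              for act in ("left", "right")}
    return scores[a] / sum(scores.values())

def goal_posterior(start, actions, prior=None):
    """Bayesian inversion: P(goal | actions) ∝ P(actions | goal) * P(goal)."""
    prior = prior or {g: 1 / len(GOALS) for g in GOALS}
    post = {}
    for g, cell in GOALS.items():
        s, lik = start, 1.0
        for a in actions:
            lik *= action_likelihood(a, s, cell)
            s = step(s, a)
        post[g] = prior[g] * lik
    z = sum(post.values())
    return {g: v / z for g, v in post.items()}

# An agent starting in the middle moves right twice.
post = goal_posterior(start=2, actions=["right", "right"])
```

Two rightward steps make the right-end goal overwhelmingly probable, mirroring how observers infer goals from short action sequences.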

14.
Chalmers (Mind 120(479): 587–636, 2011a) presents an argument against “referentialism” (and for his own view) that employs Bayesianism. He aims to make progress in a debate over the objects of belief, which seems to be at a standstill between referentialists and non-referentialists. Chalmers’ argument, in sketch, is that Bayesianism is incompatible with referentialism, and natural attempts to salvage the theory, Chalmers contends, require giving up referentialism. Given the power and success of Bayesianism, the incompatibility is prima facie evidence against referentialism. In this paper, I review Chalmers’ arguments and give some responses on behalf of the referentialist.

15.
Brian Hill 《Studia Logica》2008,89(3):291-323
In the companion paper (Towards a “sophisticated” model of belief dynamics. Part I), a general framework for realistic modelling of instantaneous states of belief and of the operations involving them was presented and motivated. In this paper, the framework is applied to the case of belief revision. A model of belief revision shall be obtained which, firstly, recovers the Gärdenfors postulates in a well-specified, natural yet simple class of particular circumstances; secondly, can accommodate iterated revisions, recovering several proposed revision operators for iterated revision as special cases; and finally, offers an analysis of Rott’s recent counterexample to several Gärdenfors postulates [32], elucidating in what sense it fails to be one of the special cases to which these postulates apply.

16.
Matteo Colombo 《Synthese》2018,195(11):4817-4838
The rise of Bayesianism in cognitive science promises to shape the debate between nativists and empiricists into more productive forms—or so have claimed several philosophers and cognitive scientists. The present paper explicates this claim, distinguishing different ways of understanding it. After clarifying what is at stake in the controversy between nativists and empiricists, and what is involved in current Bayesian cognitive science, the paper argues that Bayesianism offers not a vindication of either nativism or empiricism, but one way to talk precisely and transparently about the kinds of mechanisms and representations underlying the acquisition of psychological traits without a commitment to an innate language of thought.

17.
This paper concerns the extent to which uncertain propositional reasoning can track probabilistic reasoning, and addresses kinematic problems that extend the familiar lottery paradox. An acceptance rule assigns to each Bayesian credal state p a propositional belief revision method ${\sf B}_{p}$, which specifies an initial belief state ${\sf B}_{p}(\top)$ that is revised to the new propositional belief state ${\sf B}_{p}(E)$ upon receipt of information E. An acceptance rule tracks Bayesian conditioning when ${\sf B}_{p}(E) = {\sf B}_{p|_{E}}(\top)$, for every E such that p(E) > 0; namely, when acceptance by propositional belief revision equals Bayesian conditioning followed by acceptance. Standard proposals for uncertain acceptance and belief revision do not track Bayesian conditioning. The “Lockean” rule that accepts propositions above a probability threshold is subject to the familiar lottery paradox (Kyburg 1961), and we show that it is also subject to new and more stubborn paradoxes when the tracking property is taken into account. Moreover, we show that the familiar AGM approach to belief revision (Harper, Synthese 30(1–2):221–262, 1975; Alchourrón et al., J Symb Log 50:510–530, 1985) cannot be realized in a sensible way by any uncertain acceptance rule that tracks Bayesian conditioning. Finally, we present a plausible, alternative approach that tracks Bayesian conditioning and avoids all of the paradoxes. It combines an odds-based acceptance rule proposed originally by Levi (1996) with a non-AGM belief revision method proposed originally by Shoham (1987).
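The Lockean rule and the lottery paradox it falls to can be sketched directly (a minimal model of our own; the paper's stronger tracking paradoxes are not shown):

```python
# Fair lottery with exactly one winning ticket; world w means "ticket w wins".
N = 1000
THRESHOLD = 0.99   # Lockean acceptance threshold

worlds = range(N)  # uniform prior over which ticket wins

def pr(prop):
    """Probability of a proposition, given as a predicate on worlds."""
    return sum(1 for w in worlds if prop(w)) / N

def loses(i):
    """Proposition L_i: ticket i loses."""
    return lambda w: w != i

# Each L_i has probability (N-1)/N = 0.999 > threshold, so each is accepted...
accepted = [i for i in range(N) if pr(loses(i)) > THRESHOLD]

# ...yet the conjunction of all accepted propositions says no ticket wins,
# which has probability zero and contradicts the certainty that some ticket wins.
p_all_lose = pr(lambda w: all(w != i for i in range(N)))
```

The accepted set is thus probabilistically consistent proposition by proposition but logically inconsistent as a whole, which is exactly why threshold acceptance cannot track Bayesian conditioning.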

18.
The idea of a probabilistic logic of inductive inference based on some form of the principle of indifference has always retained a powerful appeal. However, up to now all modifications of the principle have failed. In this paper, a new formulation of such a principle is provided that avoids generating paradoxes and inconsistencies. Given these results, the thesis that probabilities cannot be logical quantities, determined in an objective way through some form of the principle of indifference, is no longer supportable. The paper then investigates some implications of the new principle of indifference. To conclude, a re-examination of the foundations of so-called objective Bayesian inference is called for.

19.
Wesley Salmon and John Earman have presented influential Bayesian reconstructions of Thomas Kuhn’s account of theory-change. In this paper I argue that all attempts to give a Bayesian reading of Kuhn’s philosophy of science are fundamentally misguided due to the fact that Bayesian confirmation theory is in fact inconsistent with Kuhn’s account. The reasons for this inconsistency are traced to the role the concept of incommensurability plays with reference to the ‘observational vocabulary’ within Kuhn’s picture of scientific theories. The upshot of the discussion is that it is impossible to integrate both Kuhn’s claims and Bayesianism within a coherent account of theory-change.

20.
A widespread assumption in the contemporary discussion of probabilistic models of cognition, often attributed to the Bayesian program, is that inference is optimal when the observer's priors match the true priors in the world—the actual “statistics of the environment.” But in fact the idea of a “true” prior plays no role in traditional Bayesian philosophy, which regards probability as a quantification of belief, not an objective characteristic of the world. In this paper I discuss the significance of the traditional Bayesian epistemic view of probability and its mismatch with the more objectivist assumptions about probability that are widely held in contemporary cognitive science. I then introduce a novel mathematical framework, the observer lattice, that aims to clarify this issue while avoiding philosophically tendentious assumptions. The mathematical argument shows that even if we assume that “ground truth” probabilities actually do exist, there is no objective way to tell what they are. Different observers, conditioning on different information, will inevitably have different probability estimates, and there is no general procedure to determine which one is right. The argument sheds light on the use of probabilistic models in cognitive science, and in particular on what exactly it means for the mind to be “tuned” to its environment.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号