Similar Documents
20 similar documents found.
1.
The paper starts by describing and clarifying what Williamson calls the consequence fallacy. I show two ways in which one might commit the fallacy. The first, which is rather trivial, involves overlooking background information; the second, which is philosophically more interesting, involves overlooking prior probabilities. In the following section, I describe a powerful form of sceptical argument, which is the main topic of the paper, elaborating on previous work by Huemer. The argument attempts to show the impossibility of defeasible justification, justification based on evidence which does not entail the (allegedly) justified proposition or belief. I then discuss the relation between the consequence fallacy, or some similar enough reasoning, and that form of argument. I argue that one can resist that form of sceptical argument if one gives up the idea that a belief cannot be justified unless it is supported by the totality of the evidence available to the subject—a principle entailed by many prominent epistemological views, most clearly by epistemological evidentialism. The justification, in the relevant cases, should instead derive solely from the prior probability of the proposition. A justification of this sort, that does not rely on evidence, would amount to a form of entitlement, in (something like) Crispin Wright’s sense. I conclude with some discussion of how to understand prior probabilities, and how to develop the notion of entitlement in an externalist epistemological framework.

2.
Vague subjective probability may be modeled by means of a set of probability functions, so that the represented opinion has only a lower and upper bound. The standard rule of conditionalization can be straightforwardly adapted to this. But this combination has difficulties which, though well known in the technical literature, have not been given sufficient attention in probabilist or Bayesian epistemology. Specifically, updating on apparently irrelevant bits of news can be destructive of one’s explicit prior expectations. Stability of vague subjective opinion appears to need a more complex model.
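To make the worry concrete, here is a minimal sketch of the kind of dilation phenomenon the abstract alludes to: every member of a credal set agrees that a fair coin lands heads with probability 1/2, yet after conditioning on an apparently irrelevant event E whose bearing on the coin is left unconstrained, the set of posteriors spreads out over the whole unit interval. The particular example and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Credal set: all joint distributions over (H, E) with P(H) = P(E) = 1/2
# but with the dependence between H and E completely unconstrained.
# Each member is parameterized by q = P(H and E), which may lie anywhere
# in the Frechet interval [0, 1/2].
qs = np.linspace(0.0, 0.5, 101)

prior_H = [0.5 for q in qs]                   # every member agrees: P(H) = 1/2
posterior_H_given_E = [q / 0.5 for q in qs]   # P(H | E) = P(H and E) / P(E)

print("P(H) before the news:", min(prior_H), "to", max(prior_H))
print("P(H | E) after the news:", min(posterior_H_given_E), "to",
      max(posterior_H_given_E))
# The sharp prior 0.5 dilates to the vacuous interval [0, 1].
```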

3.
Whether humans can accurately make decisions in line with Bayes’ rule has been one of the most important yet contentious topics in cognitive psychology. Though a number of paradigms have been used for studying Bayesian updating, rarely have subjects been allowed to use their own preexisting beliefs about the prior and the likelihood. A study is reported in which physicians judged the posttest probability of a diagnosis for a patient vignette after receiving a test result, and the physicians’ posttest judgments were compared to the normative posttest probability calculated from their own beliefs about the test’s sensitivity and false positive rate (the likelihood ratio) and the prior probability of the diagnosis. On the one hand, the posttest judgments were strongly related to the physicians’ beliefs about both the prior probability and the likelihood ratio, and the priors were used considerably more strongly than in previous research. On the other hand, both the prior and the likelihoods were still not used quite as much as they should have been, and there was evidence of other nonnormative aspects to the updating, such as updating independent of the likelihood beliefs. By focusing on how physicians use their own prior beliefs for Bayesian updating, this study provides insight into how well experts perform probabilistic inference in settings in which they rely upon their own prior beliefs rather than experimenter-provided cues. It suggests that there is reason to be optimistic about experts’ abilities, but that there is still considerable need for improvement.
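For reference, the normative calculation such judgments are compared against is Bayes’ rule, shown here in odds form. The sketch below is a generic version with made-up numbers; the priors, sensitivities, and false positive rates actually elicited from the physicians are not reproduced here.

```python
def posttest_probability(prior, sensitivity, false_positive_rate, positive=True):
    """Normative post-test probability via Bayes' rule in odds form.

    prior: pre-test probability of the diagnosis
    sensitivity: P(positive test | disease)
    false_positive_rate: P(positive test | no disease)
    """
    if positive:
        lr = sensitivity / false_positive_rate               # LR+ for a positive result
    else:
        lr = (1 - sensitivity) / (1 - false_positive_rate)   # LR- for a negative result
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical numbers for illustration only:
# pre-test probability 10%, sensitivity 80%, false positive rate 10%.
print(posttest_probability(0.10, 0.80, 0.10))   # ~0.47
```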

4.
Daniel Steel, Synthese, 2007, 156(1): 53-77
The likelihood principle (LP) is a core issue in disagreements between Bayesian and frequentist statistical theories. Yet statements of the LP are often ambiguous, while arguments for why a Bayesian must accept it rely upon unexamined implicit premises. I distinguish two propositions associated with the LP, which I label LP1 and LP2. I maintain that there is a compelling Bayesian argument for LP1, based upon strict conditionalization, standard Bayesian decision theory, and a proposition I call the practical relevance principle. In contrast, I argue that there is no similarly compelling argument for or against LP2. I suggest that these conclusions lead to a restrictedly pluralistic view of Bayesian confirmation measures.

5.
The conflict of narrowness and precision in direct inference occurs if a body of evidence contains estimates for frequencies in a certain reference class and less precise estimates for frequencies in a narrower reference class. To develop a solution to this conflict, I draw on ideas developed by Paul Thorn and John Pollock. First, I argue that Kyburg and Teng’s solution to the conflict of narrowness and precision leads to unreasonable direct inference probabilities. I then show that Thorn’s recent solution to the conflict leads to unreasonable direct inference probabilities. Based on my analysis of Thorn’s approach, I propose a natural distribution for a Bayesian analysis of the data directly obtained from studying members of the narrowest reference class.

6.
Keith Lehrer, Synthese, 1983, 57(3): 283-295
Weighted averaging is a method for aggregating the totality of information, both regimented and unregimented, possessed by an individual or group of individuals. The application of such a method may be warranted by a theorem of the calculus of probability, by simple conditionalization, or by Jeffrey's formula for probability kinematics, all of which average in terms of the prior probability of evidence statements. Weighted averaging may, however, be applied as a method of rational aggregation of the probabilities of diverse perspectives or persons in cases in which the weights cannot be articulated as the prior probabilities of statements of evidence. The method is justified by Wagner's theorem, which shows that any method satisfying the conditions of the Irrelevance of Alternatives and Zero Unanimity must, when applied to three or more alternatives, be weighted averaging.
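As a concrete instance of averaging in terms of the prior probability of evidence statements, here is a minimal sketch of Jeffrey's formula for probability kinematics; the numbers are hypothetical and serve only to show the weighted-average form of the rule.

```python
import numpy as np

# Jeffrey's rule as a weighted average: the new probability of A is the
# average of the old conditional probabilities P(A | E_i), weighted by the
# new probabilities q_i assigned to the evidence partition {E_1, ..., E_n}.
def jeffrey_update(p_A_given_E, new_partition_probs):
    p_A_given_E = np.asarray(p_A_given_E, dtype=float)
    q = np.asarray(new_partition_probs, dtype=float)
    assert np.isclose(q.sum(), 1.0), "new partition probabilities must sum to 1"
    return float(p_A_given_E @ q)

# Hypothetical numbers: P(A|E1) = 0.9, P(A|E2) = 0.2; experience shifts the
# probability of E1 to 0.7 and of E2 to 0.3.
print(jeffrey_update([0.9, 0.2], [0.7, 0.3]))   # 0.69
```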

7.
Joel Pust, Synthese, 2013, 190(9): 1489-1501
Terence Horgan defends the thirder position on the Sleeping Beauty problem, claiming that Beauty can, upon awakening during the experiment, engage in “synchronic Bayesian updating” on her knowledge that she is awake now in order to justify a 1/3 credence in heads. In a previous paper, I objected that epistemic probabilities are equivalent to rational degrees of belief given a possible epistemic situation and so the probability of Beauty’s indexical knowledge that she is awake now is necessarily 1, precluding such updating. In response, Horgan maintains that the probability claims in his argument are to be taken, not as claims about possible rational degrees of belief, but rather as claims about “quantitative degrees of evidential support.” This paper argues that the most plausible account of quantitative degree of support, when conjoined with any of the three major accounts of indexical thought in such a way as to plausibly constrain rational credence, contradicts essential elements of Horgan’s argument.

8.
A widespread assumption in the contemporary discussion of probabilistic models of cognition, often attributed to the Bayesian program, is that inference is optimal when the observer's priors match the true priors in the world—the actual “statistics of the environment.” But in fact the idea of a “true” prior plays no role in traditional Bayesian philosophy, which regards probability as a quantification of belief, not an objective characteristic of the world. In this paper I discuss the significance of the traditional Bayesian epistemic view of probability and its mismatch with the more objectivist assumptions about probability that are widely held in contemporary cognitive science. I then introduce a novel mathematical framework, the observer lattice, that aims to clarify this issue while avoiding philosophically tendentious assumptions. The mathematical argument shows that even if we assume that “ground truth” probabilities actually do exist, there is no objective way to tell what they are. Different observers, conditioning on different information, will inevitably have different probability estimates, and there is no general procedure to determine which one is right. The argument sheds light on the use of probabilistic models in cognitive science, and in particular on what exactly it means for the mind to be “tuned” to its environment.

9.
In diagnostic causal reasoning, the goal is to infer the probability of causes from one or multiple observed effects. Typically, studies investigating such tasks provide subjects with precise quantitative information regarding the strength of the relations between causes and effects or sample data from which the relevant quantities can be learned. By contrast, we sought to examine people’s inferences when causal information is communicated through qualitative, rather vague verbal expressions (e.g., “X occasionally causes A”). We conducted three experiments using a sequential diagnostic inference task, where multiple pieces of evidence were obtained one after the other. Quantitative predictions of different probabilistic models were derived using the numerical equivalents of the verbal terms, taken from an unrelated study with different subjects. We present a novel Bayesian model that allows for incorporating the temporal weighting of information in sequential diagnostic reasoning, which can be used to model both primacy and recency effects. On the basis of 19,848 judgments from 292 subjects, we found a remarkably close correspondence between the diagnostic inferences made by subjects who received only verbal information and those of a matched control group to whom information was presented numerically. Whether information was conveyed through verbal terms or numerical estimates, diagnostic judgments closely resembled the posterior probabilities entailed by the causes’ prior probabilities and the effects’ likelihoods. We observed interindividual differences regarding the temporal weighting of evidence in sequential diagnostic reasoning. Our work provides pathways for investigating judgment and decision making with verbal information within a computational modeling framework.
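The sketch below illustrates one simple way temporal weighting can be layered on top of sequential Bayesian updating: each piece of evidence contributes its likelihood raised to a step-specific weight, so equal weights give ordinary Bayes, increasing weights give recency, and decreasing weights give primacy. This is a generic illustration with hypothetical numbers, not a reconstruction of the paper's specific model.

```python
import numpy as np

# Sequential diagnostic updating with temporal weighting of the evidence.
# With all weights equal to 1 this reduces to ordinary Bayesian updating.
def weighted_sequential_posterior(prior, likelihoods, weights):
    post = np.asarray(prior, dtype=float)          # P(cause_i) before any evidence
    for lik, w in zip(likelihoods, weights):       # lik[i] = P(e_t | cause_i)
        post = post * np.asarray(lik, dtype=float) ** w
        post /= post.sum()                         # renormalize after each piece of evidence
    return post

prior = [0.5, 0.5]                                 # two candidate causes
evidence_likelihoods = [[0.8, 0.3], [0.4, 0.6]]    # hypothetical verbal terms mapped to numbers

print(weighted_sequential_posterior(prior, evidence_likelihoods, [1.0, 1.0]))  # unweighted Bayes
print(weighted_sequential_posterior(prior, evidence_likelihoods, [0.5, 1.5]))  # recency-weighted
```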

10.
Like scientists, children seek ways to explain causal systems in the world. But are children scientists in the strict Bayesian tradition of maximizing posterior probability? Or do they attend to other explanatory considerations, as laypeople and scientists – such as Einstein – do? Four experiments support the latter possibility. In particular, we demonstrate in four experiments that 4‐ to 8‐year‐old children, like adults, have a robust latent scope bias that leads to inferences that do not maximize posterior probability. When faced with two explanations equally consistent with observed data, where one explanation makes an unverified prediction, children consistently preferred the explanation that does not make this prediction (Experiment 1), even if the prior probabilities are identical (Experiment 3). Additional evidence suggests that this latent scope bias may result from the same explanatory strategies used by adults (Experiments 1 and 2), and can be attenuated by strong prior odds (Experiment 4). We argue that children, like adults, rely on ‘explanatory virtues’ in inference – a strategy that often leads to normative responses, but can also lead to systematic error.

11.
Conclusions: Probabilities are important in belief updating, but probabilistic reasoning does not subsume everything else (as the Bayesian would have it). On the contrary, Bayesian reasoning presupposes knowledge that cannot itself be obtained by Bayesian reasoning, making generic Bayesianism an incoherent theory of belief updating. Instead, it is indefinite probabilities that are of principal importance in belief updating. Knowledge of such indefinite probabilities is obtained by some form of statistical induction, and inferences to non-probabilistic conclusions are carried out in accordance with the statistical syllogism. Such inferences have been the focus of much attention in the nonmonotonic reasoning literature, but the logical complexity of such inference has not been adequately appreciated.

12.
We view a perceptual capacity as a nondeductive inference, represented as a function from a set of premises to a set of conclusions. The application of the function to a single premise to produce a single conclusion is called a "percept" or "instantaneous percept." We define a stable percept as a convergent sequence of instantaneous percepts. Assuming that the sets of premises and conclusions are metric spaces, we introduce a strategy for acquiring stable percepts, called directed convergence. We consider probabilistic inferences, where the premise and conclusion sets are spaces of probability measures, and in this context we study Bayesian probabilistic/recursive inference. In this type of Bayesian inference the premises are probability measures, and the prior as well as the posterior is updated nontrivially at each iteration. This type of Bayesian inference is distinguished from classical Bayesian statistical inference, where the prior remains fixed and the posterior evolves by conditioning on successively more punctual premises. We indicate how the directed convergence procedure may be implemented in the context of Bayesian probabilistic/recursive inference. We discuss how the L∞ metric can be used to give numerical control of this type of Bayesian directed convergence.
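The following sketch shows one way numerical control via the L∞ (sup-norm) metric might look: iterate an update on a discrete posterior and stop once successive posteriors are within a tolerance in the sup norm. The update used here is ordinary Bayesian conditioning on repeated observations, which is simpler than the paper's recursive prior-updating scheme; the distributions and tolerance are made up for illustration.

```python
import numpy as np

def linf_distance(p, q):
    # Sup-norm distance between two discrete distributions.
    return float(np.max(np.abs(np.asarray(p) - np.asarray(q))))

def stable_posterior(prior, likelihood_per_obs, observations, tol=1e-2):
    post = np.asarray(prior, dtype=float)
    for obs in observations:
        new_post = post * likelihood_per_obs[:, obs]
        new_post /= new_post.sum()
        if linf_distance(new_post, post) < tol:   # convergence in the L-infinity metric
            return new_post                       # declare the percept stable
        post = new_post
    return post

prior = np.array([0.5, 0.5])                      # two candidate interpretations
likelihood = np.array([[0.7, 0.3],                # P(observation | interpretation 0)
                       [0.4, 0.6]])               # P(observation | interpretation 1)
print(stable_posterior(prior, likelihood, [0] * 10))
```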

13.
Kahneman and Tversky (1973) described an effect they called ‘insensitivity to prior probability of outcomes’, later dubbed base rate neglect: people's tendency to underweight prior information in favor of new data. As probability theory requires that prior probabilities be taken into account, via Bayes’ theorem, the fact that most people fail to do so has been taken as evidence of human irrationality and, by others, of a mismatch between our cognitive processes and the questions being asked (Cosmides & Tooby, 1996). In contrast to both views, we suggest that simplistic Bayesian updating using base rates is not necessarily rational. To that end, we present experiments in which base rate neglect is often the right strategy, and show that people's base rate usage varies systematically as a function of the extent to which the data that make up a base rate are perceived as trustworthy.
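For orientation, the arithmetic that taking the base rate into account requires is shown below on Tversky and Kahneman's well-known cab problem; this is a standard textbook illustration, not one of the experiments reported in this paper.

```python
# The cab problem: 85% of cabs are Green, 15% are Blue; a witness identifies
# the cab as Blue and is correct 80% of the time. People typically answer
# about 0.8, neglecting the 0.15 base rate; Bayes' theorem gives about 0.41.
base_rate_blue = 0.15
witness_accuracy = 0.80

p_says_blue_and_blue = witness_accuracy * base_rate_blue                # 0.12
p_says_blue_and_green = (1 - witness_accuracy) * (1 - base_rate_blue)   # 0.17
posterior_blue = p_says_blue_and_blue / (p_says_blue_and_blue + p_says_blue_and_green)

print(round(posterior_blue, 3))   # 0.414
```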

14.
Many have argued that a rational agent's attitude towards a proposition may be better represented by a probability range than by a single number. I show that in such cases an agent will have unstable betting behaviour, and so will behave in an unpredictable way. I use this point to argue against a range of responses to the ‘two bets’ argument for sharp probabilities.

15.
The attribution made by an observer (O) to an actor in the forced compliance situation was regarded as a probability revision process which can be described by a Bayesian inference model. Os' perceptions of the forced compliance situation were analyzed in terms of the input components into the Bayesian model: prior probabilities of the relevant attitudes and the diagnostic values of the behaviors which the actor may choose. In order to test propositions made by attribution theory about such perceptions (Kelley, 1967; Messick, 1971), Os viewed actors under conditions of Low Inducement (LI) and High Inducement (HI). Before observing the actor's decision, Os estimated the prior probabilities of the relevant attitudes and the conditional probabilities of compliance and refusal given each of the attitudes. After observing the actor's decision, Os estimated the posterior probabilities of the attitudes. As expected, in the LI condition, compared to the HI condition, compliance was seen as less probable and more diagnostic about the actor's attitudes, and the posterior probability of the corresponding attitude was higher. Contrary to expectations, within both conditions, compliance, compared to refusal, was seen as less diagnostic and more probable.

16.
Ruurik Holm, Synthese, 2013, 190(18): 4001-4007
This article discusses the classical problem of zero probability of universal generalizations in Rudolf Carnap’s inductive logic. A correction rule for updating the inductive method on the basis of evidence will be presented. It will be shown that this rule has the effect that infinite streams of uniform evidence assume a non-zero limit probability. Since Carnap’s inductive logic is based on finite domains of individuals, the probability of the corresponding universal quantification changes accordingly. This implies that universal generalizations can receive positive prior and posterior probabilities, even for (countably) infinite domains.
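To see the problem the correction rule is meant to address, consider Laplace's rule of succession (the λ = 2, two-cell case of Carnap's continuum). However much uniform evidence has been observed, the probability that the next N individuals are all F telescopes to (n + 1)/(n + N + 1), which tends to 0 as N grows, so a universal generalization over an infinite domain gets probability 0. The sketch below, with made-up numbers, just evaluates that product; it illustrates the classical problem, not the paper's correction rule.

```python
from fractions import Fraction

# Probability, under Laplace's rule of succession, that the next N
# individuals are all F, given n individuals observed so far, all F.
# Each factor is (n + 1 + i) / (n + 2 + i); the product telescopes to
# (n + 1) / (n + N + 1) and goes to 0 as N grows.
def prob_next_N_all_F(n, N):
    p = Fraction(1)
    for i in range(N):
        p *= Fraction(n + 1 + i, n + 2 + i)
    return p

n = 100  # hypothetical: 100 observed individuals, every one of them F
for N in (10, 1000, 100000):
    print(N, float(prob_next_N_all_F(n, N)))
# 10 -> ~0.91, 1000 -> ~0.09, 100000 -> ~0.001
```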

17.
18.
We introduce a graphical framework for Bayesian inference that is sufficiently general to accommodate not just the standard case but also recent proposals for a theory of quantum Bayesian inference wherein one considers density operators rather than probability distributions as representative of degrees of belief. The diagrammatic framework is stated in the graphical language of symmetric monoidal categories and of compact structures and Frobenius structures therein, in which Bayesian inversion boils down to transposition with respect to an appropriate compact structure. We characterize classical Bayesian inference in terms of a graphical property and demonstrate that our approach eliminates some purely conventional elements that appear in common representations thereof, such as whether degrees of belief are represented by probabilities or entropic quantities. We also introduce a quantum-like calculus wherein the Frobenius structure is noncommutative and show that it can accommodate Leifer's calculus of "conditional density operators". The notion of conditional independence is also generalized to our graphical setting and we make some preliminary connections to the theory of Bayesian networks. Finally, we demonstrate how to construct a graphical Bayesian calculus within any dagger compact category.

19.
Bayesian models of cognition hypothesize that human brains make sense of data by representing probability distributions and applying Bayes’ rule to find the best explanation for available data. Understanding the neural mechanisms underlying probabilistic models remains important because Bayesian models provide a computational framework, rather than specifying mechanistic processes. Here, we propose a deterministic neural-network model which estimates and represents probability distributions from observable events—a phenomenon related to the concept of probability matching. Our model learns to represent probabilities without receiving any representation of them from the external world, but rather by experiencing the occurrence patterns of individual events. Our neural implementation of probability matching is paired with a neural module applying Bayes’ rule, forming a comprehensive neural scheme to simulate human Bayesian learning and inference. Our model also provides novel explanations of base-rate neglect, a notable deviation from Bayes.

20.
The capability of the human brain for Bayesian inference was assessed by manipulating probabilistic contingencies in an urn-ball task. Event-related potentials (ERPs) were recorded in response to stimuli that differed in their relative frequency of occurrence (.18 to .82). Averaged ERPs with sufficient signal-to-noise ratio (relative frequency of occurrence > .5) were used for further analysis. Research hypotheses about relationships between probabilistic contingencies and ERP amplitude variations were formalized as (in-)equality constrained hypotheses. Conducting Bayesian model comparisons, we found that manipulations of prior probabilities and likelihoods were associated with separately modifiable and distinct ERP responses. P3a amplitudes were sensitive to the degree of prior certainty such that higher prior probabilities were related to larger frontally distributed P3a waves. P3b amplitudes were sensitive to the degree of likelihood certainty such that lower likelihoods were associated with larger parietally distributed P3b waves. These ERP data suggest that these antecedents of Bayesian inference (prior probabilities and likelihoods) are coded by the human brain.
