Similar Documents
20 similar documents found.
1.
A measure of coherence is said to be truth conducive if and only if a higher degree of coherence (as measured) results in a higher likelihood of truth. Recent impossibility results strongly indicate that there are no (non-trivial) probabilistic coherence measures that are truth conducive. Indeed, this holds even if truth conduciveness is understood in a weak ceteris paribus sense (Bovens & Hartmann, 2003, Bayesian epistemology. New York, Oxford: Oxford University Press; Olsson, 2005, Against coherence: Truth, probability and justification. Oxford: Oxford University Press). This raises the problem of how coherence could nonetheless be an epistemically important property. Our proposal is that coherence may be linked in a certain way to reliability. We define a measure of coherence to be reliability conducive if and only if a higher degree of coherence (as measured) results in a higher probability that the information sources are reliable. Restricting ourselves to the most basic case, we investigate which coherence measures in the literature are reliability conducive. It turns out that, while a number of measures fail to be reliability conducive, except possibly in a trivial and uninteresting sense, Shogenji’s measure and several measures generated by Douven and Meijs’s recipe are notable exceptions to this rule.
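For reference, the Shogenji measure discussed here has a standard published form: the ratio of the joint probability of the propositions to the product of their marginals. Below is a minimal sketch on a toy finite probability space; the space and the example events are my own illustration, not anything from the paper.

```python
# Shogenji coherence: C_S(A1,...,An) = P(A1 & ... & An) / (P(A1) * ... * P(An)).
# C_S = 1 marks probabilistic independence; C_S > 1 marks mutual support.
from functools import reduce
from math import prod

def shogenji(worlds, props):
    """worlds: dict mapping world -> probability; props: list of sets of worlds."""
    p = lambda event: sum(worlds[w] for w in event)
    return p(reduce(set.intersection, props)) / prod(p(a) for a in props)

worlds = {w: 1/8 for w in range(8)}   # uniform toy space
A = {0, 1, 2, 3}                      # P(A) = 1/2
B = {0, 1, 4, 5}                      # P(B) = 1/2, independent of A
C = {0, 1, 2, 4}                      # P(C) = 1/2, overlaps A on 3/8

print(shogenji(worlds, [A, B]))       # 1.0 (independent)
print(shogenji(worlds, [A, C]))       # 1.5 (mutually supporting)
```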

2.
Bovens and Hartmann (Bayesian Epistemology, Oxford: Oxford University Press, 2003) propose to analyze coherence as a confidence-boosting property. On the basis of this idea, they construct a new probabilistic theory of coherence. In this paper, I will attempt to show that the resulting measure of coherence clashes with some of the intuitions that motivate it. Also, I will try to show that this clash is not due to the view of coherence as a confidence-boosting property or to the general features of the model that Bovens and Hartmann use to analyze coherence. It will turn out that there is at least one other measure that is similarly based on the concept of a confidence-boosting property, but does not have the same counterintuitive results.

3.
Tomoji Shogenji. Synthese, 2007, 157(3): 361–372
This paper aims to reconcile (i) the intuitively plausible view that a higher degree of coherence among independent pieces of evidence makes the hypothesis they support more probable, and (ii) the negative results in Bayesian epistemology to the effect that there is no probabilistic measure of coherence such that a higher degree of coherence among independent pieces of evidence makes the hypothesis they support more probable. I consider a simple model in which the negative result appears in a stark form: the prior probability of the hypothesis and the individual vertical relations between each piece of evidence and the hypothesis completely determine the conditional probability of the hypothesis given the total evidence, leaving no room for the lateral relation (such as coherence) among the pieces of evidence to play any role. Despite this negative result, the model also reveals that a higher degree of coherence is indirectly associated with a higher conditional probability of the hypothesis because a higher degree of coherence indicates stronger individual supports. This analysis explains why coherence appears truth-conducive, but in a way that defeats the idea of coherentism, since the lateral relation (such as coherence) plays no independent role in the confirmation of the hypothesis. An earlier version of this paper was presented at the workshop Coherence: Interpreting the Impossibility Results held in Lund, Sweden. I would like to thank the participants of the workshop, especially Erik J. Olsson, who subsequently sent me written comments. I would also like to thank Ken Akiba for comments on precursors of this paper.
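The "stark form" can be made explicit with a textbook identity (my rendering, not the paper's own notation): if the two pieces of evidence are conditionally independent given H and given ¬H, Bayes' theorem yields

```latex
P(H \mid E_1 \wedge E_2)
  = \frac{P(H)\,P(E_1 \mid H)\,P(E_2 \mid H)}
         {P(H)\,P(E_1 \mid H)\,P(E_2 \mid H)
          + P(\neg H)\,P(E_1 \mid \neg H)\,P(E_2 \mid \neg H)}
```

Every term on the right-hand side is either the prior or an individual (vertical) likelihood, so the lateral relation between E_1 and E_2 is screened off, exactly as the abstract says; coherence can at most serve as a symptom of strong individual support.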

4.
Stefan Schubert. Synthese, 2012, 187(2): 607–621
A measure of coherence is said to be reliability conducive if and only if a higher degree of coherence (as measured) of a set of testimonies implies a higher probability that the witnesses are reliable. Recently, it has been proved that the Shogenji measure of coherence is reliability conducive in restricted scenarios (e.g., Olsson and Schubert, Synthese, 157: 297–308, 2007). In this article, I investigate whether the Shogenji measure, or any other coherence measure, is reliability conducive in general. An impossibility theorem is proved to the effect that this is not the case. I conclude that coherence is not reliability conducive.

5.
Mark Siebel. Erkenntnis, 2005, 63(3): 335–360
It is shown that the probabilistic theories of coherence proposed up to now produce a number of counter-intuitive results. The last section provides some reasons for believing that no probabilistic measure will ever be able to adequately capture coherence. First, there can be no function whose arguments are nothing but tuples of probabilities, and which assigns different values to pairs of propositions {A, B} and {A, C} if A implies both B and C, or their negations, and if P(B)=P(C). But such sets may indeed differ in their degree of coherence. Second, coherence is sensitive to explanatory relations between the propositions in question. Explanation, however, can hardly be captured solely in terms of probability.
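The first point can be illustrated with a toy case (the example is mine, not Siebel's). Draw one card from a standard deck and let A = "the card is the seven of spades", B = "the card is black", and C = "the card is a spade or a heart". Then A entails both B and C, and

```latex
P(A \wedge B) = P(A \wedge C) = P(A) = \tfrac{1}{52},
\qquad P(B) = P(C) = \tfrac{1}{2}
```

The probability profiles of {A, B} and {A, C} coincide, so any function that takes only these probabilities as arguments must assign the two pairs the same degree of coherence, whatever their intuitive difference.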

6.
Coherentism in epistemology has long suffered from lack of formal and quantitative explication of the notion of coherence. One might hope that probabilistic accounts of coherence such as those proposed by Lewis, Shogenji, Olsson, Fitelson, and Bovens and Hartmann will finally help solve this problem. This paper shows, however, that those accounts have a serious common problem: the problem of belief individuation. The coherence degree that each of the accounts assigns to an information set (or the verdict it gives as to whether the set is coherent tout court) depends on how beliefs (or propositions) that represent the set are individuated. Indeed, logically equivalent belief sets that represent the same information set can be given drastically different degrees of coherence. This feature clashes with our natural and reasonable expectation that the coherence degree of a belief set does not change unless the believer adds essentially new information to the set or drops old information from it; or, to put it simply, that the believer cannot raise or lower the degree of coherence by purely logical reasoning. None of the accounts in question can adequately deal with coherence once logical inferences get into the picture. Toward the end of the paper, another notion of coherence that takes into account not only the contents but also the origins (or sources) of the relevant beliefs is considered. It is argued that this notion of coherence is of dubious significance, and that it does not help solve the problem of belief individuation.
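To see the problem concretely, here is a toy computation (my illustration, using the Shogenji measure as one of the accounts in question): the sets {A, B} and {A ∧ B, A ∨ B} entail one another, so they carry the same information, yet they receive different coherence scores.

```python
# Toy demonstration that logically equivalent belief sets can get
# different Shogenji scores (the space and numbers are illustrative only).
from functools import reduce
from math import prod

def shogenji(worlds, props):
    p = lambda event: sum(worlds[w] for w in event)
    return p(reduce(set.intersection, props)) / prod(p(a) for a in props)

worlds = {w: 1/8 for w in range(8)}   # uniform toy space
A = {0, 1, 2, 3}                      # P(A) = 1/2
B = {0, 1, 2, 4}                      # P(B) = 1/2, positively relevant to A

# {A, B} and {A & B, A | B} are interderivable, yet:
print(shogenji(worlds, [A, B]))          # 1.5
print(shogenji(worlds, [A & B, A | B]))  # 1.6
```

Purely logical repackaging moves the score from 1.5 to 1.6, which is exactly the violation of the expectation described above.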

7.
In his groundbreaking book, Against Coherence (2005), Erik Olsson presents an ingenious impossibility theorem that appears to show that there is no informative relationship between probabilistic measures of coherence and higher likelihood of truth. Although Olsson's result provides an important insight into probabilistic models of epistemological coherence, the scope of his negative result is more limited than generally appreciated. The key issue is the role conditional independence conditions play within the witness testimony model Olsson uses to establish his result. Olsson maintains that his witness model yields charitable ceteris paribus conditions for any theory of probabilistic coherence. Not so. In fact, Olsson's model, like Bayesian witness models in general, selects a peculiar class of models that are in no way representative of the range of options available to coherence theorists. Recent positive results suggest that there is a way to develop a formal theory of coherence after all. Further, although Bayesian witness models are not conducive to the truth, they are conducive to reliability.

8.
Staffan Angere. Synthese, 2007, 157(3): 321–335
The impossibility results of Bovens and Hartmann (2003, Bayesian epistemology. Oxford: Clarendon Press) and Olsson (2005, Against coherence: Truth, probability and justification. Oxford: Oxford University Press) show that the link between coherence and probability is not as strong as some have supposed. This paper is an attempt to bring out a way in which coherence reasoning nevertheless can be justified, based on the idea that, even if it does not provide an infallible guide to probability, it can give us an indication thereof. It is further shown that this actually is the case for several of the coherence measures discussed in the literature so far. We also discuss how this affects the possibility of using coherence as a means of epistemic justification.

9.
The credible intervals that people set around their point estimates are typically too narrow (cf. Lichtenstein, Fischhoff, & Phillips, 1982). That is, a set of many such intervals does not contain the actual values of the criterion variables as often as it should given the probability assigned to this event for each estimate. The typical interpretation of such data is that people are overconfident about the accuracy of their judgments. This paper presents data from two studies showing the typical levels of overconfidence for individual estimates of unknown quantities. However, data from the same subjects on a different measure of confidence for the same items, their own global assessment for the set of multiple estimates as a whole, showed significantly lower levels of confidence and overconfidence than their average individual assessment for items in the set. It is argued that the event and global assessments of judgment quality are fundamentally different and are affected by unique psychological processes. Finally, we discuss the implications of a difference between confidence in single and multiple estimates for confidence research and theory.

10.
This paper presents a new methodology for estimating the weights or saliences of subcriteria (attributes) in a composite criterion measure. The inputs to the estimation procedure consist of (i) a set of stimuli or objects, with each stimulus defined by its subcriteria profile (set of attribute values), and (ii) the set of paired comparison dominance (e.g., preference) judgments on the stimuli made by a single judge (expert) in terms of the global criterion. A criterion of fit is developed and its optimization via linear programming is illustrated with an example. The procedure is generalized to estimate a common set of weights when the pairwise judgments on the stimuli are made by more than one judge. The procedure is computationally efficient and has been applied in developing a composite criterion of managerial success yielding high concurrent validity. This methodology can also be used to perform ordinal multiple regression, i.e., multiple regression with an ordinally scaled dependent variable and a set of intervally scaled predictor variables. The approach is further extended to internal analysis (unfolding) using the vector model of preference and to the additive model of conjoint measurement.
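Here is a minimal sketch of the kind of linear program the abstract describes, as I reconstruct it; the margin eps, the sum-to-one normalization, and all data are illustrative assumptions rather than the paper's exact formulation. The idea: find nonnegative attribute weights w such that whenever stimulus i is judged to dominate stimulus j, the weighted score of i exceeds that of j, paying a slack penalty wherever this fails.

```python
# Fit-criterion sketch: minimize total slack over violated paired
# comparisons, subject to w >= 0 and sum(w) = 1 (my reconstruction).
import numpy as np
from scipy.optimize import linprog

X = np.array([[7.0, 2.0, 5.0],    # one row per stimulus: attribute profile
              [4.0, 6.0, 3.0],
              [2.0, 3.0, 8.0]])
prefs = [(0, 1), (1, 2)]          # (i, j): judge prefers stimulus i to j
m, k, eps = X.shape[1], len(prefs), 1e-3

# Decision variables z = [w_1..w_m, s_1..s_k]; minimize the sum of slacks s.
c = np.concatenate([np.zeros(m), np.ones(k)])
A_ub = np.zeros((k, m + k))
for r, (i, j) in enumerate(prefs):
    A_ub[r, :m] = -(X[i] - X[j])  # encodes w.(x_i - x_j) + s_r >= eps
    A_ub[r, m + r] = -1.0
b_ub = np.full(k, -eps)
A_eq = np.concatenate([np.ones(m), np.zeros(k)])[None, :]   # sum(w) = 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
print(res.x[:m])                  # estimated attribute weights
```

linprog's default variable bounds of (0, None) supply the nonnegativity of both the weights and the slacks.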

11.
Roger Rosenkrantz. Synthese, 1971, 23(2-3): 167–205
Synopsis.
I. I set out my view that all inference is essentially deductive and pinpoint what I take to be the major shortcomings of the induction rule.
II. The import of data depends on the probability model of the experiment, a dependence ignored by the induction rule. Inductivists admit background knowledge must be taken into account but never spell out how this is to be done. As I see it, that is the problem of induction.
III. The induction rule, far from providing a method of discovery, does not even serve to detect pattern. Knowing that there is uniformity in the universe is no help to discovering laws. A critique of Reichenbach's justification of the straight rule is constructed along these lines.
IV. The induction rule, by itself, cannot account for the varying rates at which confidence in a hypothesis mounts with data. The mathematical analysis of this salient feature of inductive reasoning requires prior probabilities. We also argue, against orthodox statisticians, that prior probabilities make a substantive contribution to the objectivity of inductive methods, viz. to the design of experiments and the selection of decision rules.
V. Carnap's general criticisms of various estimation rules, like the straight rule and the impervious rule, are seen to be misguided when the prior densities to which they correspond are taken into account.
VI. Analysis of Hempel's definition of confirmation qua formalization of the enumerative (naive) conception of instancehood. We show that, from the standpoint of the quantitative measure P(H|E)/P(H) for the degree to which E confirms H, Hempel's classificatory concept yields correct results only for sampling at large from a finite population with a two-way classification all of whose compositions are equally probable. We extend the analysis to Goodman's paradox, finding cases in which grue-like hypotheses do receive as much confirmation as their opposite numbers. We argue, moreover, the irrelevancy of entrenchment, and maintain that Goodman's paradox is no more than a straightforward counter-example to the enumerative conception of instancehood embodied in Hempel's definition.
VII. We rebut the objection that prior probabilities, qua inputs of Bayesian analysis, can only be obtained by enumerative induction (insofar as they are objective). The divergence in the prior densities of two rational agents is less a function of subjectivity, we maintain, than of vagueness.
VIII. Our concluding remarks stress that, for Bayesians, there is no problem of induction in the usual sense.
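For concreteness, the quantitative measure cited in VI, with toy numbers of my own:

```latex
c(H, E) = \frac{P(H \mid E)}{P(H)};
\quad\text{e.g., } P(H) = 0.2,\; P(H \mid E) = 0.6
\;\Rightarrow\; c = 3
```

so E triples the probability of H; values above 1 confirm, values below 1 undermine.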

12.
Luca Moretti. Synthese, 2007, 157(3): 309–319
Recent works in epistemology show that the claim that coherence is truth conducive – in the sense that, given suitable ceteris paribus conditions, more coherent sets of statements are always more probable – is dubious and possibly false. From this, it does not follow that coherence is a useless notion in epistemology and philosophy of science. Dietrich and Moretti (Philosophy of Science 72(3): 403–424, 2005) have proposed a formal account of how coherence is confirmation conducive, that is, of how the coherence of a set of statements facilitates the confirmation of such statements. This account is grounded in two confirmation transmission properties that are satisfied by some of the measures of coherence recently proposed in the literature. These properties explicate everyday and scientific uses of coherence. In this paper, I review the main findings of Dietrich and Moretti (2005) and define two evidence-gathering properties that are satisfied by the same measures of coherence and constitute further ways in which coherence is confirmation conducive. At least one of these properties vindicates important applications of the notion of coherence in everyday life and in science.

13.
Stefan Schubert. Synthese, 2012, 187(2): 305–319
A measure of coherence is said to be reliability conducive if and only if a higher degree of coherence (as measured) results in a higher likelihood that the witnesses are reliable. Recently, it has been proved that several coherence measures proposed in the literature are reliability conducive in a restricted scenario (Olsson and Schubert 2007, Synthese 157: 297–308). My aim is to investigate which coherence measures turn out to be reliability conducive in the more general scenario where it is any finite number of witnesses that give equivalent reports. It is shown that only the so-called Shogenji measure is reliability conducive in this scenario. I take that to be an argument for the Shogenji measure being a fruitful explication of coherence.

14.
Stefan Schubert. Erkenntnis, 2011, 74(2): 263–275
A measure of coherence is said to be reliability conducive if and only if a higher degree of coherence (as measured) among testimonies implies a higher probability that the witnesses are reliable. Recently, it has been proved that several coherence measures proposed in the literature are reliability conducive in scenarios of equivalent testimonies (Olsson and Schubert 2007; Schubert, to appear). My aim is to investigate which coherence measures turn out to be reliability conducive in the more general scenario where the testimonies do not have to be equivalent. It is shown that four measures are reliability conducive in the present scenario, all of which are ordinally equivalent to the Shogenji measure. I take that to be an argument for the Shogenji measure being a fruitful explication of coherence.

15.
David H. Glass. Erkenntnis, 2005, 63(3): 375–385
Two of the probabilistic measures of coherence discussed in this paper take probabilistic dependence into account and so depend on prior probabilities in a fundamental way. An example is given which suggests that this prior-dependence can lead to potential problems. Another coherence measure is shown to be independent of prior probabilities in a clearly defined sense and consequently is able to avoid such problems. The issue of prior-dependence is linked to the fact that the first two measures can be understood as measures of coherence as striking agreement, while the third measure represents coherence as agreement. Thus, prior (in)dependence can be used to distinguish different conceptions of coherence.
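On a common reading (my gloss; the abstract names neither measure), the contrast is between the prior-sensitive Shogenji measure and the overlap measure associated with Glass and Olsson:

```latex
C_S(A, B) = \frac{P(A \wedge B)}{P(A)\,P(B)},
\qquad
C_O(A, B) = \frac{P(A \wedge B)}{P(A \vee B)}
```

For logically equivalent A and B, C_O = 1 no matter how probable they are (agreement), while C_S = 1/P(A) grows as the shared content becomes more improbable (striking agreement), which is one way to see the prior (in)dependence at issue.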

16.
In 1975, An Essay on Knowledge Formation by H. Törnebohm was published in this Journal. Its content, in revised form, was included in a 1983 work in Swedish on knowledge development. HT defines his confirmation criterion in terms of a measure of truth degree T, which is based on a measure of matching M, which is also used as a measure of the degree to which a proposition p (a hypothesis) is supported or undermined by another proposition q (the evidence for p). M is defined in terms of a measure of the content C.

17.
Debates about the utility of p values and correct ways to analyze data have inspired new guidelines on statistical inference by the American Psychological Association (APA) and changes in the way results are reported in other scientific journals, but their impact on the Journal of the Experimental Analysis of Behavior (JEAB) has not previously been evaluated. A content analysis of empirical articles published in JEAB between 1992 and 2017 investigated whether statistical and graphing practices changed during that time period. The likelihood that a JEAB article reported a null hypothesis significance test, included a confidence interval, or depicted at least one figure with error bars has increased over time. Features of graphs in JEAB, including the proportion depicting single‐subject data, have not changed systematically during the same period. Statistics and graphing trends in JEAB largely paralleled those in mainstream psychology journals, but there was no evidence that changes to APA style had any direct impact on JEAB. In the future, the onus will continue to be on authors, reviewers and editors to ensure that statistical and graphing practices in JEAB continue to evolve without interfering with characteristics that set the journal apart from other scientific journals.

18.
19.
Sequential effects are ubiquitous in decision-making, but nowhere more than in the absolute identification task, where participants must identify stimuli from a set of items that vary on a single dimension. A number of competing explanations for these sequential effects have been proposed, and recently Matthews and Stewart (2009a, The effect of inter-stimulus interval on sequential effects in absolute identification, The Quarterly Journal of Experimental Psychology, 62, 2014–2029) showed that manipulating the time between decisions is useful in discriminating between these accounts. We use a Bayesian hierarchical regression model to show that inter-trial interval has an influence on behaviour when it varies across different blocks of trials, but not when it varies from trial to trial. We discuss the implications of both our and Matthews and Stewart's results on the effect of inter-trial interval for theories of sequential effects.

20.
This paper starts by considering an argument for thinking that predictive processing (PP) is representational. This argument suggests that the Kullback–Leibler (KL) divergence provides an accessible measure of misrepresentation, and therefore a measure of representational content in hierarchical Bayesian inference. The paper then argues that while the KL divergence is a measure of information, it does not establish a sufficient measure of representational content. We argue that this follows from the fact that the KL divergence is a measure of relative entropy, which can be shown to be the same as covariance (through a set of additional steps). It is well known that facts about covariance do not entail facts about representational content. So there is no reason to think that the KL divergence is a measure of (mis-)representational content. This paper thus provides an enactive, non-representational account of Bayesian belief optimisation in hierarchical PP.
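For reference, the standard definition of the quantity at issue (a generic sketch of the textbook formula, not the paper's derivation):

```python
# Kullback-Leibler divergence D_KL(P || Q) = sum_i p_i * log(p_i / q_i)
# for discrete distributions: the relative entropy of P with respect to Q.
import numpy as np

def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                  # terms with p_i = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.7, 0.2, 0.1]               # e.g. the agent's posterior "beliefs"
q = [0.5, 0.3, 0.2]               # e.g. the generative model's prediction
print(kl_divergence(p, q))        # ~0.085; zero iff p == q, and asymmetric
```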
