Similar documents: 20 results found (search time 62 ms)
1.
It has been common wisdom for centuries that scientific inference cannot be deductive; if it is inference at all, it must be a distinctive kind of inductive inference. According to demonstrative theories of induction, however, important scientific inferences are not inductive in the sense of requiring ampliative inference rules at all. Rather, they are deductive inferences with sufficiently strong premises. General considerations about inferences suffice to show that there is no difference in justification between an inference construed demonstratively or ampliatively. The inductive risk may be shouldered by premises or rules, but it cannot be shirked. Demonstrative theories of induction might, nevertheless, better describe scientific practice. And there may be good methodological reasons for constructing our inferences one way rather than the other. By exploring the limits of these possible advantages, I argue that scientific inference is neither of essence deductive nor of essence inductive.
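To make the contrast concrete (a schematic gloss of my own, not taken from the abstract): the same generalization can be construed with an ampliative rule, or as a deduction from a strengthened premise; the inductive risk is merely relocated from the rule to the premise.

```latex
% Ampliative construal: the rule itself carries the inductive risk.
\frac{\text{All observed } F\text{s are } G\text{s}}
     {\text{All } F\text{s are } G\text{s}}
\ \text{(ampliative rule)}
\qquad
% Demonstrative construal: a uniformity premise carries the risk,
% and the rule is ordinary modus ponens.
\frac{\begin{array}{c}
        \text{All observed } F\text{s are } G\text{s}\\[2pt]
        \text{If all observed } F\text{s are } G\text{s, then all } F\text{s are } G\text{s}
      \end{array}}
     {\text{All } F\text{s are } G\text{s}}
\ \text{(deductive)}
```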

2.
In this article I give an overview of some recent work in philosophy of science dedicated to analysing the scientific process in terms of (conceptual) mathematical models of theories and the various semantic relations between such models, scientific theories, and aspects of reality. In current philosophy of science, the most interesting questions centre around the ways in which writers distinguish between theories and the mathematical structures that interpret them and in which they are true, i.e. between scientific theories as linguistic systems and their non-linguistic models. In the philosophy of science literature there are two main approaches to the structure of scientific theories: the statement or syntactic approach—advocated by Carnap, Hempel and Nagel—and the non-statement or semantic approach—advocated, among others, by Suppes, the structuralists, Beth, Van Fraassen, Giere, and Wójcicki. In conclusion, I briefly review some of the usual realist-inspired questions about the possibility and character of relations between scientific theories and reality as implied by the various approaches I discuss in the course of the article. The models of a scientific theory should indeed be adequate to the phenomena, but if the theory is ‘adequate’ to (true in) its conceptual (mathematical) models as well, we have a model-theoretic realism that addresses the possible meaning and reference of ‘theoretical entities’ without relapsing into the metaphysics typical of the usual scientific realist approaches.

3.
Causal explanation and scientific realism
It is widely believed that many of the competing accounts of scientific explanation have ramifications which are relevant to the scientific realism debate. I claim that the two issues are orthogonal. For definiteness, I consider Cartwright's argument that causal explanations secure belief in theoretical entities. In Section I, van Fraassen's anti-realism is reviewed; I argue that this anti-realism is, prima facie, consistent with a causal account of explanation. Section II reviews Cartwright's arguments. In Section III, it is argued that causal explanations do not license the sort of inferences to theoretical entities that would embarrass the anti-realist. Section IV examines the epistemic commitments involved in accepting a causal explanation. Section V presents my conclusions: contra Cartwright, the anti-realist may incorporate a causal account of explanation into his vision of science in an entirely natural way.

4.
Henrik Hållsten, Synthese (1999) 120(1): 49–59
Any theory of explanation must account for the explanatory successes of statistical scientific theories. This should not be done by endorsing determinism. These considerations have been taken as sufficient ground for rejecting the demand that explanations be deductive. The arguments for doing so, in Coffa (1974) and Salmon (1977, 1984, 1988), are, however, not persuasive. Deductivism is a viable position. Considering the doubts that can be raised against the explanatory validity of probabilistic causal relations, and the intuitive plausibility of deductivism, it is also a commendable position, though elaboration is needed in accounting for some of the uses of statistical theories in explanations. This revised version was published online in June 2006 with corrections to the Cover Date.

5.
Joel Pust, Synthese (1996) 108(1): 89–104
Hilary Kornblith (1993) has recently offered a reliabilist defense of the use of the Law of Small Numbers in inductive inference. In this paper I argue that Kornblith's defense of this inferential rule fails for a number of reasons. First, I argue that the sort of inferences that Kornblith seeks to justify are not really inductive inferences based on small samples. Instead, they are knowledge-based deductive inferences. Second, I address Kornblith's attempt to find support in the work of Dorrit Billman and I try to show that close attention to the workings of her computational model reveals that it does not support Kornblith's argument. While the knowledge required to ground the inferences in question is perhaps inductively derived, Billman's work does not support the notion that small samples provide a reliable basis for our generalizing inferences.
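A toy simulation (my own illustration, not Billman's computational model) of the underlying point: generalizing from a small sample is reliable only given background knowledge that the sampled kind is uniform; without that knowledge, the same rule fails.

```python
import random

def generalize_from_sample(population, k):
    """Infer 'all members have the majority property of a size-k sample';
    return True iff that generalization is actually correct."""
    sample = random.sample(population, k)
    prediction = max(set(sample), key=sample.count)
    return all(x == prediction for x in population)

random.seed(0)
homogeneous = ["conducts electricity"] * 1000        # a uniform kind (e.g. a metal)
heterogeneous = ["white"] * 600 + ["brown"] * 400    # a variable kind (e.g. bird color)

trials = 1000
hom_hits = sum(generalize_from_sample(homogeneous, 3) for _ in range(trials))
het_hits = sum(generalize_from_sample(heterogeneous, 3) for _ in range(trials))

print(hom_hits / trials)   # 1.0: size-3 samples suffice for the uniform kind
print(het_hits / trials)   # 0.0: and always fail for the variable kind
```

The small sample does the work only because the uniformity of the kind is already known; on Pust's diagnosis, that knowledge turns the inference into a knowledge-based deduction rather than a genuinely small-sample induction.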

6.
This paper deals with meta-statistical questions concerning frequentist statistics. In Sections 2 to 4 I analyse the dispute between Fisher and Neyman on the so-called logic of statistical inference, a polemic that accompanied the development of mathematical statistics. My conclusion is that, whenever mathematical statistics makes it possible to draw inferences, it uses only deductive reasoning. I therefore reject Fisher's inductive approach to statistical estimation theory and adhere to Neyman's deductive one. On the other hand, I assert that the Neyman–Pearson testing theory, as well as Fisher's tests of significance, properly belongs to decision theory, not to logic, whether deductive or inductive. I thus also disagree with Costantini's view of Fisher's testing model as a theory of hypothetico-deductive inferences. In Section 5 I reject the early Hacking's evidentialist criticisms of the Neyman–Pearson theory of statistics (NPT), as well as the later Hacking's interpretation of NPT as a theory of probable inference; in both cases Hacking misses the point. I conclude by claiming that Mayo's conception of the Neyman–Pearson testing theory, as a model of learning from experience, offers no advantages over Neyman's behavioristic model.
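The deductive character claimed for Neyman's approach can be sketched with a standard binomial test (my own construction, not an example from the paper): the rejection region and its error-rate guarantees are fixed before any data are seen, as theorems of probability, so nothing ampliative is inferred from the observations.

```python
from math import comb

# Neyman-Pearson test of H0: p = 0.5 against H1: p = 0.8, n = 20 Bernoulli trials.
n, alpha = 20, 0.05

def binom_tail(n, p, c):
    """P(X >= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

# Fix the rejection region {X >= c} in advance so that the type-I error
# rate is at most alpha; the guarantee is derived, not induced from data.
c = next(c for c in range(n + 1) if binom_tail(n, 0.5, c) <= alpha)

print(c)                                  # 15: smallest cutoff with size <= 0.05
print(round(binom_tail(n, 0.5, c), 4))    # 0.0207: achieved type-I error rate
print(round(binom_tail(n, 0.8, c), 4))    # 0.8042: power against H1
```

On the behavioristic reading, observing the data then merely triggers one of two predetermined actions (reject or accept); the only reasoning involved is the deductive derivation of the error rates above.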

7.
In section 1, I develop epistemic communism, my view of the function of epistemically evaluative terms such as ‘rational’. The function is to support the coordination of our belief‐forming rules, which in turn supports the reliable acquisition of beliefs through testimony. This view is motivated by the existence of valid inferences that we hesitate to call rational. I defend the view against the worry that it fails to account for a function of evaluations within first‐personal deliberation. In the rest of the paper, I then argue, on the basis of epistemic communism, for a view about rationality itself. I set up the argument in section 2 by saying what a theory of rational deduction is supposed to do. I claim that such a theory would provide a necessary, sufficient, and explanatorily unifying condition for being a rational rule for inferring deductive consequences. I argue in section 3 that, given epistemic communism and the conventionality that it entails, there is no such theory. Nothing explains why certain rules for deductive reasoning are rational.

8.
C. A. Hooker, Synthese (1994) 99(2): 181–231
In his book Minimal Rationality (1986), Christopher Cherniak draws deep and widespread conclusions from our finitude, and not only for philosophy but also for a wide range of science as well. Cherniak's basic idea is that traditional philosophical theories of rationality represent idealisations that are inaccessible to finite rational agents. It is the purpose of this paper to apply a theory of idealisation in science to Cherniak's arguments. The heart of the theory is a distinction between idealisations that represent reversible, solely quantitative simplifications and those that represent irreversible, degenerate idealisations which collapse out essential theoretical structure. I argue that Cherniak's position is best understood as assigning the latter status to traditional rationality theories and that, so understood, his arguments may be illuminated, expanded, and certain common criticisms of them rebutted. The result, however, is a departure from traditional, formalist theories of rationality of a more radical kind than Cherniak contemplates, with widespread ramifications for philosophical theory, especially philosophy of science itself. I would like to thank Professor R. E. Butts and the Department of Philosophy at the University of Western Ontario, Canada, for generous support and stimulating discussion during the research leave at which time this paper was prepared, and the University of Newcastle and its vice-chancellor, Professor K. Morgan, for support. I am greatly indebted to extended discussion with Professor H. I. Brown, to thoughtful comments from two anonymous Synthese referees, and to discussion with Professor W. Harper; between them they have sharpened and corrected the presentation at several places, especially Sections 3 (referees), 4 passim (Brown), 4.1 (referee), 4.3 (Harper). More specific acknowledgement is given as appropriate.

9.
Moti Mizrahi, Synthese (2013) 190(15): 3209–3226
In this paper, I consider the pessimistic induction construed as a deductive argument (specifically, reductio ad absurdum) and as an inductive argument (specifically, inductive generalization). I argue that both formulations of the pessimistic induction are fallacious. I also consider another possible interpretation of the pessimistic induction, namely, as pointing to counterexamples to the scientific realist’s thesis that success is a reliable mark of (approximate) truth. I argue that this interpretation of the pessimistic induction fails, too. If this is correct, then the pessimistic induction is an utter failure that should be abandoned by scientific anti-realists.

10.
Computational theories of mind assume that participants interpret information and then reason from those interpretations. Research on interpretation in deductive reasoning has claimed to show that subjects' interpretation of single syllogistic premises in an “immediate inference” task is radically different from their interpretation of pairs of the same premises in syllogistic reasoning tasks (Newstead, 1989, 1995; Roberts, Newstead, & Griggs, 2001). Narrow appeal to particular Gricean implicatures in this work fails to bridge the gap. Grice's theory taken as a broad framework for credulous discourse processing in which participants construct speakers' “intended models” of discourses can reconcile these results, purchasing continuity of interpretation through variety of logical treatments. We present exploratory experimental data on immediate inference and subsequent syllogistic reasoning. Systematic patterns of interpretation driven by two factors (whether the subject's model of the discourse is credulous, and their degree of reliance on information packaging) are shown to transcend particular quantifier inferences and to drive systematic differences in subjects' subsequent syllogistic reasoning. We conclude that most participants do not understand deductive tasks as experimenters intend, and just as there is no single logical model of reasoning, so there is no reason to expect a single “fundamental human reasoning mechanism”.
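The interpretive point can be illustrated with a small model-theoretic check (my own toy example, not the authors' experimental materials): classically, “Some A are B” does not entail “Some A are not B”, but it does in a hearer's Gricean “intended models”, where “some” is taken to implicate “not all”.

```python
from itertools import product

# Enumerate all extensions of predicates A and B over a 4-element domain.
DOMAIN = range(4)

def models():
    for bits in product([0, 1], repeat=8):
        A = {x for x in DOMAIN if bits[x]}
        B = {x for x in DOMAIN if bits[4 + x]}
        yield A, B

def some(A, B):     return len(A & B) > 0   # "Some A are B"
def some_not(A, B): return len(A - B) > 0   # "Some A are not B"

# Classical reading: does "Some A are B" entail "Some A are not B"?
classical = all(some_not(A, B) for A, B in models() if some(A, B))
print(classical)   # False: A = B = {0} is a countermodel

# Credulous/Gricean reading: the hearer assumes the speaker said "some"
# because "all" would have been false, so the intended models also
# satisfy "not all A are B" -- and there the inference goes through.
gricean = all(some_not(A, B) for A, B in models() if some(A, B) and not A <= B)
print(gricean)     # True
```

The same premise thus licenses different “immediate inferences” depending on whether the subject's model of the discourse is credulous, which is the continuity-of-interpretation point the abstract presses.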

12.
Sun Weimin, Dao (2009) 8(4): 403–423
In this essay, I examine the nature of Chinese logic and Chinese sciences in the history of China. I conclude that Chinese logic is essentially analogical, and that the Chinese did not have theoretical sciences. I then connect these two conclusions and explain why the Chinese failed to develop theoretical sciences, even though they enjoyed an advanced civilization and great scientific and technological innovations. This is because a deductive system of logic is necessary for the development of theoretical sciences, and analogical logic cannot provide the deductive connections between a theory and empirical observations required by a theoretical science. This also offers a more satisfactory answer to the long-standing Needham Problem.

13.
Informal logic emerged in the 1970s, first of all as an effort to find better ways of teaching logic at university. Informal logicians then gradually took up a growing range of theoretical issues, namely the development of an account of argument, and of good argument, that does not depend on formal logic. At the 1998 World Congress of Philosophy, Blair and I set out the theoretical consequences that informal logic brings to philosophy, one of which we called "the end of deductivism". We did not define deductivism at the time, but simply identified it with MacIntyre's pithy formulation: "any inference is either deductive or defective." In retrospect, however, our announcement of "the end of deductivism" was clearly premature. Deductivism seems to remain very much alive, and it finds support even among scholars sympathetic to informal logic: Ennis has long defended deductivism as a strategy of argument reconstruction, and Groarke defends it as a theory of argument evaluation. In this paper I argue that informal logic is best understood as the study of logic that appeals neither to deductive techniques nor to deductive norms. Once we recognize how deeply deduction and deductivism are rooted in the history of philosophy, and how firmly they have gripped our understanding of logical inquiry, we can see how difficult and how important this theoretical undertaking of informal logic is. I first clarify the meaning of "deductivism", then examine the arguments for and against it, and finally show that informal logic is an attempt to rescue logic from deductivism.

14.
This paper starts from the analysis of Hempel’s conditions of adequacy for any relation of confirmation (Hempel, 1945) presented in Huber (submitted). There I argue contra Carnap (1962, Section 87) that Hempel felt the need for two concepts of confirmation: one aiming at plausible theories and another aiming at informative theories. However, he also realized that these two concepts conflict, and he gave up the concept of confirmation aiming at informative theories. The main part of the paper works out the claim that one can have Hempel’s cake and eat it too — in the sense that there is a logic of theory assessment that takes into account both of the conflicting aspects of plausibility and informativeness. According to the semantics of this logic, α is an acceptable theory for evidence β if and only if α is both sufficiently plausible given β and sufficiently informative about β. This is spelt out in terms of ranking functions (Spohn, 1988) and shown to represent the syntactically specified notion of an assessment relation. The paper then compares these acceptability relations to explanatory and confirmatory consequence relations (Flach, 2000) as well as to nonmonotonic consequence relations (Kraus et al., 1990). It concludes by relating the plausibility-informativeness approach to Carnap’s positive relevance account, thereby shedding new light on Carnap’s analysis and solving another problem of confirmation theory. A precursor of this paper appeared as “The Logic of Confirmation and Theory Assessment” in L. Běhounek & M. Bílková (eds.), The Logica Yearbook 2004, Prague: Filosofia, 2005, 161–176.
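A minimal sketch of the ranking-function machinery behind such an acceptability criterion (the informativeness measure and the thresholds here are my own placeholders for illustration, not Huber's definitions):

```python
from itertools import product

# Worlds are truth-value assignments to (p, q). A Spohn ranking function
# assigns each world a grade of disbelief, with at least one world at 0.
WORLDS = list(product([True, False], repeat=2))
KAPPA = dict(zip(WORLDS, [0, 1, 2, 3]))   # a sample ranking

def rank(prop):
    grades = [KAPPA[w] for w in WORLDS if prop(w)]
    return min(grades) if grades else float("inf")

def cond_rank(a, b):                       # kappa(a | b)
    return rank(lambda w: a(w) and b(w)) - rank(b)

def informativeness(alpha, beta):          # crude proxy: share of beta-worlds excluded
    beta_worlds = [w for w in WORLDS if beta(w)]
    return sum(not alpha(w) for w in beta_worlds) / len(beta_worlds)

def acceptable(alpha, beta, t=0.5):
    believed = cond_rank(lambda w: not alpha(w), beta) > 0   # alpha believed given beta
    return believed and informativeness(alpha, beta) >= t

beta   = lambda w: w[0]           # evidence: p
taut   = lambda w: True           # plausible but uninformative
strong = lambda w: w[0] and w[1]  # plausible and informative
denial = lambda w: not w[0]       # informative but implausible

print(acceptable(taut, beta), acceptable(strong, beta), acceptable(denial, beta))
# False True False
```

Only the theory that is both believed given the evidence and informative about it comes out acceptable, which is the intended two-sided shape of the assessment relation.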

15.
Nader Shoaibi, Ratio (2021) 34(1): 7–19
The idea that logic is in some sense normative for thought and reasoning is a familiar one. Some of the most prominent figures in the history of philosophy including Kant and Frege have been among its defenders. The most natural way of spelling out this idea is to formulate wide‐scope deductive requirements on belief which rule out certain states as irrational. But what can account for the truth of such deductive requirements of rationality? By far, the most prominent responses draw in one way or another on the idea that belief aims at the truth. In this paper, I consider two ways of making this line of thought more precise and I argue that they both fail. In particular, I examine a recent attempt by Epistemic Utility Theory to give a veritist account of deductive coherence requirements. I argue that despite its proponents’ best efforts, Epistemic Utility Theory cannot vindicate such requirements.
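The veritist strategy the paper examines can be sketched as an accuracy-dominance argument (a standard textbook construction, not Shoaibi's own example): credences that violate a deductive coherence requirement, here cr(A) ≤ cr(B) whenever A entails B, are Brier-dominated by coherent ones.

```python
# Suppose A entails B, so the world (A true, B false) is impossible.
worlds = [(1, 1), (0, 1), (0, 0)]

def brier(cred, world):
    """Inaccuracy: sum of squared distances of credences from truth values."""
    return sum((c - v) ** 2 for c, v in zip(cred, world))

incoherent = (0.9, 0.5)   # cr(A) > cr(B), violating the coherence requirement

# Grid-search for a coherent pair that is strictly more accurate
# in every possible world (an accuracy dominator).
grid = [i / 100 for i in range(101)]
dominators = [(a, b) for a in grid for b in grid
              if a <= b and all(brier((a, b), w) < brier(incoherent, w)
                                for w in worlds)]

print((0.7, 0.7) in dominators)   # True: e.g. the projection onto the coherent set
print(len(dominators) > 0)        # True: the incoherent credences are dominated
```

The veritist then argues that being dominated in this way is what makes the incoherent state irrational; Shoaibi's challenge targets whether this vindicates the deductive requirement itself.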

16.
In this essay I argue against I. Bernard Cohen's influential account of Newton's methodology in the Principia: the ‘Newtonian Style’. The crux of Cohen's account is the successive adaptation of ‘mental constructs’ through comparisons with nature. In Cohen's view there is a direct dynamic between the mental constructs and physical systems. I argue that his account is essentially hypothetical‐deductive, which is at odds with Newton's rejection of the hypothetical‐deductive method. An adequate account of Newton's methodology needs to show how Newton's method proceeds differently from the hypothetical‐deductive method. In the constructive part I argue for my own account, which is model based: it focuses on how Newton constructed his models in Book I of the Principia. I will show that Newton understood Book I as an exercise in determining the mathematical consequences of certain force functions. The growing complexity of Newton's models is a result of exploring increasingly complex force functions (intra‐theoretical dynamics) rather than a successive comparison with nature (extra‐theoretical dynamics). Nature did not enter the scene here. This intra‐theoretical dynamics is related to the ‘autonomy of the models’.

17.
Some twenty years ago, Bogen and Woodward challenged one of the fundamental assumptions of the received view, namely the theory-observation dichotomy, and argued for the introduction of the further category of scientific phenomena. The latter, Bogen and Woodward stressed, are usually unobservable and inferred from what is indeed observable, namely scientific data. Crucially, Bogen and Woodward claimed that theories predict and explain phenomena, but not data. But then, of course, the thesis of theory-ladenness, which has it that our observations are influenced by the theories we hold, cannot apply. On the basis of two case studies, I want to show that this consequence of Bogen and Woodward’s account is rather unrealistic. More importantly, I also object to Bogen and Woodward’s view that the reliability of data, which constitutes the precondition for data-to-phenomena inferences, can be secured without the theory one seeks to test. The case studies I revisit have figured heavily in the publications of Bogen and Woodward and others: the discovery of weak neutral currents and the discovery of the zebra pattern of magnetic anomalies. I show that, in the latter case, data can be ignored if they appear to be irrelevant from a particular theoretical perspective (TLI) and that, in the former case, the tested theory can be critical for the assessment of the reliability of the data (TLA). I argue that both TLI and TLA are much stronger senses of theory-ladenness than the classical thesis and that neither TLI nor TLA can be accommodated within Bogen and Woodward’s account.

18.
For deductive reasoning to be justified, it must be guaranteed to preserve truth from premises to conclusion; and for it to be useful to us, it must be capable of informing us of something. How can we capture this notion of information content, whilst respecting the fact that the content of the premises, if true, already secures the truth of the conclusion? This is the problem I address here. I begin by considering and rejecting several accounts of informational content. I then develop an account on which informational contents are indeterminate in their membership. This allows there to be cases in which it is indeterminate whether a given deduction is informative. Nevertheless, on the picture I present, there are determinate cases of informative (and determinate cases of uninformative) inferences. I argue that the model I offer is the best way for an account of content to respect the meaning of the logical constants and the inference rules associated with them without collapsing into a classical picture of content, unable to account for informative deductive inferences.

19.
Moral philosophers are, among other things, in the business of constructing moral theories. And moral theories are, among other things, supposed to explain moral phenomena. Consequently, one's views about the nature of moral explanation will influence the kinds of moral theories one is willing to countenance. Many moral philosophers are (explicitly or implicitly) committed to a deductive model of explanation. As I see it, this commitment lies at the heart of the current debate between moral particularists and moral generalists. In this paper I argue that we have good reasons to give up this commitment. In fact, I show that an examination of the literature on scientific explanation reveals that we are used to, and comfortable with, non‐deductive explanations in almost all areas of inquiry. As a result, I argue that we have reason to believe that moral explanations need not be grounded in exceptionless moral principles.

20.
In this paper, I examine Wilfrid Sellars’ famous Myth of Jones. I argue that the myth provides an ontologically austere account of thoughts and beliefs that makes sense of the full range of our folk psychological abilities. Sellars’ account draws on both Gilbert Ryle and Ludwig Wittgenstein: Ryle provides Sellars with the resources to make thoughts metaphysically respectable, and Wittgenstein the resources to make beliefs rationally criticisable. By combining these insights into a single account, Sellars is able to see reasons as causes and, hence, to respect the full range of our folk psychological generalisations. This is achieved by modelling folk psychological practice on theoretical reasoning. But despite frequent misinterpretation, Sellars does not claim that thoughts and beliefs are theoretical concepts. Folk psychological explanation is therefore not itself theoretical, and so cannot be replaced by scientific theory; scientific concepts will not eliminate folk psychological ones. Sellars thus avoids eliminativism.
