Similar Articles
A total of 20 similar articles were found (search time: 15 ms)
1.
2.
The second printing of Principia Mathematica in 1925 offered Russell an occasion to assess some criticisms of the Principia and make some suggestions for possible improvements. In Appendix A, Russell offered *8 as a new quantification theory to replace *9 of the original text. As Russell explained in the new introduction to the second edition, the system of *8 sets out quantification theory without free variables. Unfortunately, the system has not been well understood. This paper shows that Russell successfully antedates Quine's system of quantification theory without free variables. It is shown as well that, as with Quine's system, a slight modification yields a quantification theory inclusive of the empty domain.

3.
This paper discusses the history of the confusion and controversies over whether the definition of consequence presented in the 11-page 1936 Tarski consequence-definition paper is based on a monistic fixed-universe framework—like Begriffsschrift and Principia Mathematica. Monistic fixed-universe frameworks, common in pre-WWII logic, keep the range of the individual variables fixed as ‘the class of all individuals’. The contrary alternative is that the definition is predicated on a pluralistic multiple-universe framework—like the 1931 Gödel incompleteness paper. A pluralistic multiple-universe framework recognizes multiple universes of discourse serving as different ranges of the individual variables in different interpretations—as in post-WWII model theory. In the early 1960s, many logicians—mistakenly, as we show—held the ‘contrary alternative’ that Tarski 1936 had already adopted a Gödel-type, pluralistic, multiple-universe framework. We explain that Tarski had not yet shifted out of the monistic, Frege–Russell, fixed-universe paradigm. We further argue that between his Principia-influenced pre-WWII Warsaw period and his model-theoretic post-WWII Berkeley period, Tarski's philosophy underwent many other radical changes.

4.
In his new introduction to the 1925 second edition of Principia Mathematica, Russell maintained that by adopting Wittgenstein's idea that a logically perfect language should be extensional, mathematical induction could be rectified for finite cardinals without the axiom of reducibility. In Appendix B, Russell set forth a proof. Gödel caught a defect in the proof at *89.16, so that the matter of rectification remained open. Myhill later arrived at a negative result: Principia with extensionality principles and without reducibility cannot recover mathematical induction. The finite cardinals are indefinable in it. This paper shows that while Gödel and Myhill are correct, Russell was not wrong. The 1925 system employs a different grammar than the original Principia. A new proof for *89.16 is given and induction is recovered.

5.
Anders Kraal, Synthese, 2014, 191(7): 1493–1510
I argue that three main interpretations of the aim of Russell's early logicism in The Principles of Mathematics (1903) are mistaken, and propose a new interpretation. According to this new interpretation, the aim of Russell's logicism is to show, in opposition to Kant, that mathematical propositions have a certain sort of complete generality which entails that their truth is independent of space and time. I argue that on this interpretation two often-heard objections to Russell's logicism, deriving from Gödel's incompleteness theorem and from the non-logical character of some of the axioms of Principia Mathematica respectively, can be seen to be inconclusive. I then proceed to identify two challenges that Russell's logicism, as presently construed, faces, but argue that these challenges do not appear unanswerable.

6.
The paper first formalizes the ramified type theory as (informally) described in the Principia Mathematica [32]. This formalization stays close to the ideas of the Principia while also meeting contemporary requirements on formality and accuracy, and is therefore a new addition to the existing literature on the Principia (such as [25], [19], [6] and [7]). As an alternative, notions from the ramified type theory are expressed in a lambda-calculus style. This situates the type system of Russell and Whitehead in a modern setting. Both formalizations are inspired by current developments in research on type theory and typed lambda calculus; see [3].
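As a rough, self-contained illustration of what ramification involves (a toy encoding of my own, not the paper's formalization), a ramified type can be modelled as a type shape paired with an order, where a propositional function's order must strictly exceed the orders of its argument types and predicative types take the lowest admissible order:

```python
# Toy encoding of ramified types (illustrative only; not the paper's formalization).
# A ramified type pairs a shape (individual or propositional function) with an
# order; a well-formed function type must sit at an order strictly above the
# orders of all of its argument types.

from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Individual:
    order: int = 0                      # individuals sit at order 0

@dataclass(frozen=True)
class Function:
    args: Tuple["RType", ...]           # types of the arguments
    order: int                          # order of the propositional function

RType = Union[Individual, Function]

def well_formed(t: RType) -> bool:
    """Ramification constraint: a function's order exceeds every argument's order."""
    if isinstance(t, Individual):
        return True
    return all(well_formed(a) for a in t.args) and all(t.order > a.order for a in t.args)

def predicative(t: Function) -> bool:
    """Predicative types take the lowest order compatible with their arguments."""
    return t.order == max(a.order for a in t.args) + 1

# Example: phi is a first-order (predicative) property of individuals;
# psi is a second-order property of such first-order properties.
phi = Function(args=(Individual(),), order=1)
psi = Function(args=(phi,), order=2)
assert well_formed(psi) and predicative(phi) and predicative(psi)
```

This sketch only captures the ordering constraint on argument types; a faithful formalization of the ramified theory, as in the paper, also has to track the orders of the variables bound inside a propositional function.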

7.
In this article I examine the status of putative aesthetic judgements in science and mathematics. I argue that if the judgements at issue are taken to be genuinely aesthetic they can be divided into two types, positing either a disjunction or connection between aesthetic and epistemic criteria in theory/proof assessment. I show that both types of claim face serious difficulties in explaining the purported role of aesthetic judgements in these areas. I claim that the best current explanation of this role, McAllister's ‘aesthetic induction’ model, fails to demonstrate that the judgements at issue are genuinely aesthetic. I argue that, in light of these considerations, there are strong reasons for suspecting that many, and perhaps all, of the supposedly aesthetic claims are not genuinely aesthetic but are in fact ‘masked’ epistemic assessments.

8.
A three-valued propositional logic is presented, within which the three values are read as ‘true’, ‘false’ and ‘nonsense’. A three-valued extended functional calculus, unrestricted by the theory of types, is then developed. Within the latter system, Bochvar analyzes the Russell paradox and the Grelling-Weyl paradox, formally demonstrating the meaninglessness of both.
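For orientation, the ‘internal’ connectives of a Bochvar-style three-valued logic are standardly presented by truth tables on which ‘nonsense’ is infectious: any meaningless component makes the whole compound meaningless, while the classical values behave as usual. The sketch below illustrates that standard scheme in my own rendering, not Bochvar's notation:

```python
# Illustrative sketch of Bochvar-style "internal" three-valued connectives:
# the value N ('nonsense') is infectious, so any operand equal to N makes the
# compound N; on the classical values T and F the connectives behave classically.

T, F, N = "T", "F", "N"

def neg(a: str) -> str:
    if a == N:
        return N
    return F if a == T else T

def conj(a: str, b: str) -> str:
    if N in (a, b):
        return N
    return T if (a, b) == (T, T) else F

def disj(a: str, b: str) -> str:
    if N in (a, b):
        return N
    return T if T in (a, b) else F

def impl(a: str, b: str) -> str:
    if N in (a, b):
        return N
    return T if a == F or b == T else F

# A compound built from a meaningless component is itself meaningless:
assert conj(T, N) == N and impl(N, T) == N and neg(N) == N
```

This is only a schematic illustration of how assigning the value ‘nonsense’ blocks the usual derivation of a contradiction from a paradoxical sentence.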

9.
This essay will focus briefly on (1) a definitional and (2) an epistemic analysis of Stewart Guthrie's cultural-anthropological theory of anthropomorphism in his book Faces in the Clouds. In Part I of the essay, I will examine specific definitional claims about religion that Guthrie advances in chapter 1 (‘The Need for a Theory’) and chapter 3 (‘The Origin of Anthropomorphism’). In Part II, crucial statements in chapter 6 (‘Anthropomorphism in Philosophy and Science’) and chapter 7 (‘Religion as Anthropomorphism’) raise questions about Guthrie's epistemic assumptions: that in philosophy and science the objects identified as anthropomorphic have been critically recognized as errors and wisely set aside at the margins of those enterprises, whereas in religion the objects identified as anthropomorphic have always stood at its centre. Guthrie employs five theoretical criteria (of observability, simplicity, generality, fallibility, and probability) to explain why religion always anthropomorphizes. The essay concludes with a formal question about the epistemic status of Guthrie's observability and universality criteria.

10.
Some hold that the lesson of Russell’s paradox and its relatives is that mathematical reality does not form a ‘definite totality’ but rather is ‘indefinitely extensible’. There can always be more sets than there ever are. I argue that certain contact puzzles are analogous to Russell’s paradox this way: they similarly motivate a vision of physical reality as iteratively generated. In this picture, the divisions of the continuum into smaller parts are ‘potential’ rather than ‘actual’. Besides the intrinsic interest of this metaphysical picture, it has important consequences for the debate over absolute generality. It is often thought that ‘indefinite extensibility’ arguments at best make trouble for mathematical platonists; but the contact arguments show that nominalists face the same kind of difficulty, if they recognize even the metaphysical possibility of the picture I sketch.

11.
Scott Stapleford, Synthese, 2013, 190(18): 4065–4075
Mark Nelson argues that we have no positive epistemic duties. His case rests on the evidential inexhaustibility of sensory and propositional evidence—what he calls their ‘infinite justificational fecundity’. It is argued here that Nelson’s reflections on the richness of sensory and propositional evidence do make it doubtful that we ever have an epistemic duty to add any particular beliefs to our belief set, but that they fail to establish that we have no positive epistemic duties whatsoever. A theory of epistemic obligation based on Kant’s idea of an imperfect duty is outlined. It is suggested that such a theory is consistent with the inexhaustibility of sensory and propositional evidence. Finally, one feature of our epistemic practice suggestive of the existence of imperfect epistemic duties is identified and promoted.

12.
13.
‘Fred must open the door’ concerns Fred’s obligations. This obligative meaning is turned off by adding aspect: ‘Fred must have opened/be opening/have been opening the door’ are one and all epistemic. Why? In a nutshell: obligative ‘must’ operates on procedural contents of imperative sentences, epistemic ‘must’ on propositional contents of declarative sentences; and adding aspect converts procedural into propositional content.

14.
Accounts of virtue suffer a conflation problem when they appear unable to preserve intuitive distinctions between types of virtue. In this essay I argue that a number of influential attempts to preserve the distinction between moral and epistemic virtues fail, on the grounds that they characterize virtuous traits in terms of ‘characteristic motivation’. I claim that this does not distinguish virtuous traits at the level of value-conferring quality, and I propose that the best alternative is to distinguish them at the level of good produced. It follows from this that a consequentialist account is best placed to avoid a conflation of moral and epistemic virtue.

15.
The philosophical case for extended cognition is often made with reference to ‘extended-memory cases’ (e.g. Clark & Chalmers 1998); though, unfortunately, proponents of the hypothesis of extended cognition (HEC) as well as their adversaries have failed to appreciate the kinds of epistemological problems extended-memory cases pose for mainstream thinking in the epistemology of memory. It is time to give these problems a closer look. Our plan is as follows: in §1, we argue that an epistemological theory remains compatible with HEC only if its epistemic assessments do not violate what we call ‘the epistemic parity principle’. In §2, we show how the constraint of respecting the epistemic parity principle stands in what appears to be a prima facie intractable tension with mainstream thinking about cases of propositional memory. We then outline and evaluate in §3 several lines of response.

16.
What Russell regarded as the ‘chief outcome’ of his 1914 Lowell Lectures at Harvard can only be fully appreciated, I argue, if one embeds the outcome back into the ‘classificatory problem’ that many at the time were heavily engaged in. The problem focused on the place and relationships between the newly formed or recently professionalized disciplines such as psychology, Erkenntnistheorie, physics, logic and philosophy. The prime metaphor used in discussions about the classificatory problem by British philosophers was a spatial one, with such motifs as ‘standpoints’, ‘place’ and ‘perspectives’ in the space of knowledge. In fact, Russell’s construction of a six-dimensional perspectival space was meant precisely to be a timely solution to the widely discussed classificatory problem.

17.
Several philosophers have inquired into the metaphysical limits of conceptual engineering: ‘Can we engineer? And if so, to what extent?’. This paper is not concerned with answering these questions. It does concern itself, however, with the limits of conceptual engineering, albeit in a largely unexplored sense: it cares about the normative, rather than about the metaphysical limits thereof. I first defend an optimistic claim: I argue that the ameliorative project has, so far, been too modest; there is little value theoretic reason to restrict the project to remedying deficient representational devices, rather than go on a more ambitious quest: conceptual improvement. That being said, I also identify a limitation to the optimistic claim: I show that the ‘should’ in ameliorative projects suffers from a ‘wrong-kind-of-reasons’ problem. Last but not least, I sketch a proposal of normative constraining meant to address both the above results. The proposal gives primacy to epistemic constraints: accordingly, a concept should be ameliorated only insofar as this does not translate into epistemic loss.

18.
In this paper I argue for a doctrine I call ‘infallibilism’, which I stipulate to mean that If S knows that p, then the epistemic probability of p for S is 1. Some fallibilists will claim that this doctrine should be rejected because it leads to scepticism. Though it's not obvious that infallibilism does lead to scepticism, I argue that we should be willing to accept it even if it does. Infallibilism should be preferred because it has greater explanatory power than fallibilism. In particular, I argue that an infallibilist can easily explain why assertions of ‘p, but possibly not-p’ (where the ‘possibly’ is read as referring to epistemic possibility) are infelicitous, in terms of the knowledge rule of assertion. But a fallibilist cannot. Furthermore, an infallibilist can explain the infelicity of utterances of ‘p, but I don't know that p’ and ‘p might be true, but I'm not willing to say that for all I know, p is true’, and why when a speaker thinks p is epistemically possible for her, she will agree (if asked) that for all she knows, p is true. The simplest explanation of these facts entails infallibilism. Fallibilists have tried and failed to explain the infelicity of ‘p, but I don't know that p’, but have not even attempted to explain the last two facts. I close by considering two facts that seem to pose a problem for infallibilism, and argue that they don't.
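Stated schematically (the symbols below are my own shorthand, not the author's), the doctrine argued for reads:

```latex
% Infallibilism, rendered schematically (my notation): K_S p abbreviates
% "S knows that p", and Pr_S(p) is the epistemic probability of p for S.
\[
  K_S\, p \;\longrightarrow\; \Pr\nolimits_S(p) = 1
\]
```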

19.
Andrew Roos, Ratio, 2004, 17(2): 207–217
In chapter seven, ‘Self-Identification’, of his challenging book The Varieties of Reference, Gareth Evans attempts to give an account of how it is that one is able to think about oneself self-consciously. On Evans’ view, when one attempts to think of oneself self-consciously, that person is having what he calls an ‘I’ thought. Since these ‘I’ thoughts are a case of reference, more specifically self-reference, Evans thinks that these thoughts can be explained by employing the same theoretical framework that he uses to explain other kinds of reference. Evans thinks all thoughts are essentially structured, and this means that they must fall under his ‘generality constraint’. Since ‘I’ thoughts are also ‘thoughts’ they are essentially structured as well, and they too must be subject to the generality constraint. The radical implication of this is that Evans thinks that if ‘I’ thoughts are subject to the generality constraint, then he can show that self-reference must be reference to a thing which we can locate on a spatio-temporal map. In this article I hope to accomplish three things. First, I will spell out in detail the argument Evans uses to arrive at his claim that self-reference must be reference to something located on a spatio-temporal map. Second, I will raise an objection, which states that Evans’ conclusion that self-reference must involve spatio-temporal location is not a consequence of the generality constraint. Finally I will argue that Evans’ conclusion that self-reference must involve spatio-temporal location is in fact in tension with the generality constraint, rather than being an implication of it.

20.
This paper is meant to link the philosophical debate concerning the underdetermination of theories by evidence with a rather significant socio-political issue that has been taking place in Canada over the past few years: the so-called ‘death of evidence’ controversy. It places this debate within a broader philosophical framework by discussing the connection between evidence and theory; by bringing out the role of epistemic values in the so-called scientific method; and by examining the role of social values in science. While it should be admitted that social values play an important role in science, the key question for anyone who advocates this view is: what and whose values? The way it is answered makes an important epistemic difference to how the relation between evidence and theory is appraised. I first review various arguments for the claim that evidence underdetermines theory and show their presuppositions and limitations, using conceptual analysis and historical examples. After broaching the relation between evidence and method in science by highlighting the need to incorporate epistemic values into the scientific method, my discussion focuses on recent arguments for the role of social values in science. Finally, I address the implications of the approach outlined for the current ‘death of evidence’ debate in Canada.

