20 similar articles found; search time: 15 ms
1.
Hartry Field has recently examined the question whether our logical and mathematical concepts are referentially indeterminate. In his view, (1) certain logical notions, such as second-order quantification, are indeterminate, but (2) important mathematical notions, such as the notion of finiteness, are not (they are determinate). In this paper, I assess Field's analysis, and argue that claims (1) and (2) turn out to be inconsistent. After all, given that the notion of finiteness can only be adequately characterized in pure second-order logic, if Field is right in claiming that second-order quantification is indeterminate (see (1)), it follows that finiteness is also indeterminate (contrary to (2)). After arguing that Field is committed to these claims, I provide a diagnosis of why this inconsistency emerged, and I suggest an alternative, consistent picture of the relationship between logical and mathematical indeterminacy.
2.
3.
Matthias Schirn 《Erkenntnis》2003,59(2):203-232
In Die Grundlagen der Arithmetik, Frege attempted to introduce cardinal numbers as logical objects by means of a second-order abstraction principle which is now widely known as ‘Hume's Principle’ (HP): the number of Fs is identical with the number of Gs if and only if F and G are equinumerous. The attempt miscarried, because in its role as a contextual definition HP fails to fix uniquely the reference of the cardinality operator ‘the number of Fs’. This problem of referential indeterminacy is usually called ‘the Julius Caesar problem’. In this paper, Frege's treatment of the problem in Grundlagen is critically assessed. In particular, I try to shed new light on it by paying special attention to the framework of his logicism in which it appears embedded. I argue, among other things, that the Caesar problem, which is supposed to stem from Frege's tentative inductive definition of the natural numbers, is only spurious, not genuine; that the genuine Caesar problem deriving from HP is a purely semantic one and that the prospects of removing it by explicitly defining cardinal numbers as objects which are not classes are presumably poor for Frege. I conclude by rejecting two closely connected theses concerning Caesar put forward by Richard Heck: (i) that Frege could not abandon Axiom V because he could not solve the Julius Caesar problem without it; (ii) that (by his own lights) his logicist programme in Grundgesetze der Arithmetik failed because he could not overcome that problem.
4.
Alessandro Torza 《Philosophy and phenomenological research》2020,101(2):365-382
The threat of ontological deflationism (the view that disagreement about what there is can be non-substantive) is averted by appealing to realism about fundamental structure—or so Ted Sider tells us. In this paper, the notion of structural indeterminacy is introduced as a particular case of metaphysical indeterminacy; then it is argued that structural indeterminacy is not only compatible with a metaphysics of fundamental structure, but can even safeguard it from a crucial objection; finally, it is shown that, if there are instances of structural indeterminacy, a hitherto unacknowledged variety of ontological deflationism will arise. Unless structure is shown to be determinate, ontological deflationism remains a live option. Furthermore, I consider whether structural indeterminacy could be challenged by adopting a naturalistic epistemology of structure; the question is answered in the negative on the basis of a formal result concerning theory choice. Finally, I submit a new way of articulating the epistemology of structure, which hinges on the very possibility of structural indeterminacy.
5.
Ralph J. Greenspan 《Science and engineering ethics》2012,18(3):447-452
Reductionist explanations in biology generally assume that biological mechanisms are highly deterministic and basically similar between individuals. A contrasting view has emerged recently that takes into account the degeneracy of biological processes: the ability to arrive at a given endpoint by a variety of available paths, even within the same individual. This perspective casts significant doubt on the prospects for predicting behavior accurately based on brain imaging or genotyping, and on the ability of neuroscience to stipulate ethics.
6.
7.
Philosophical Studies -
8.
9.
10.
11.
Philosophia - The analysis of the derogatory aspect of slurs has recently aroused interest among philosophers of language. A puzzling element is its erratic behaviour in embeddings, for...
12.
Philosophical Studies - The ‘no-difference problem’ challenges us to explain in which way the occurrence of an aggregate effect gives us reason to act in a specific way, although our...
13.
14.
Cristian Constantinescu 《Ethical Theory and Moral Practice》2012,15(1):57-70
Two competing accounts of value incomparability have been put forward in the recent literature. According to the standard account, developed most famously by Joseph Raz, ‘incomparability’ means determinate failure of the three classic value relations (better than, worse than, and equally good): two value-bearers x and y are incomparable with respect to a value V if and only if (i) it is false that x is better than y with respect to V, (ii) it is false that x is worse than y with respect to V, and (iii) it is false that x and y are equally good with respect to V. Most philosophers have followed Raz in adopting this account of incomparability. Recently, however, John Broome has advocated an alternative view, on which value incomparability is explained in terms of vagueness or indeterminacy. In this paper I aim to further Broome's view in two ways. Firstly, I supply independent reasons for thinking that the phenomenon of value incomparability is indeed a matter of the indeterminacy inherent in our comparative predicates. Secondly, I defend Broome's account by warding off several objections that worry him, due mainly to Erik Carlson and Ruth Chang.
15.
Avrutin and Hickok (1993) argue that agrammatic patients have the ability to represent nonreferential or "government" chains ("who... e") but not referential or "binding" chains ("which girl... e"). By contrast, we propose the "referential representation hypothesis," which suggests that agrammatics attempt to cope with their well-known capacity limitations by favoring referential or content-based representations. This predicts that agrammatic patients' performance should degrade noticeably as task demands increase, and that referential demands should take priority over computational ones. In a semantic task, referential phrases should lead to better or more accurate performance. In syntactic tasks, the availability of a referential or content-based representation will interfere with the development of a syntactic representation, resulting in worse syntactic performance on referential phrases than on nonreferential ones. This predicts that agrammatic patients should incorrectly accept (resumptive) pronoun sentences with a referential wh-phrase, because the pronoun will find the semantic or discourse referent of the referential wh-phrase and take it as an antecedent. However, they should reject a (resumptive) pronoun in a sentence with the nonreferential question constituent "who" or "what." "Who" and "what" will remain in syntactic form, since they have only grammatical content and therefore will have only a "nonreferential" syntactic representation. Consequently, they cannot serve as the antecedent of the pronoun. These predictions were largely confirmed by the results of a grammaticality judgement study. Agrammatics performed well on questions with pragmatic biases but failed to distinguish reliably between grammatical and ungrammatical questions where pragmatic biases were neutralized. They assigned especially low ratings to object gap sentences with referential wh-constituents, as predicted. They assigned relatively high ratings to ungrammatical subject pronoun sentences with either type of wh-constituent. The agrammatics accepted ungrammatical reflexive sentences even though syntactic number and gender features alone could have been used to judge the sentences correctly. We attribute this, too, to the unavailability of a reliable syntactic representation of those phrases with referential or extragrammatical semantic content.
16.
17.
Cees van Leeuwen 《Psychological research》1990,52(1):1-4
We are less prone today than in Köhler's time to believe in a unified science. The Gestalt program could therefore safely abandon the psychophysical isomorphism heuristic if it wished to. Indeterminacy of the heuristic might be a reason for doing so. The determination of the components involved in the isomorphism, those of the visual field as well as the electrocortical events, would have to occur simultaneously. In the Gestalt program, however, components are determined by their position in the whole. They therefore cannot be compared across different contexts, so no independent test of a candidate heuristic is possible.
18.
19.
George Darby 《Australasian journal of philosophy》2013,91(2):227-245
There has been recent interest in formulating theories of non-representational indeterminacy. The aim of this paper is to clarify the relevance of quantum mechanics to this project. Quantum-mechanical examples of vague objects displaying indeterminate identity have been offered by various authors, in the face of the famous Evans argument that such an idea is incoherent. It has also been suggested that the quantum-mechanical treatment of state-dependent properties exhibits metaphysical indeterminacy. In both cases it is important to consider the details of the metaphysical account and the way in which the quantum phenomenon is captured within it. Indeed, if we adopt a familiar way of thinking about indeterminacy and apply it in a natural way to quantum mechanics, we run into illuminating difficulties and see that the case is far less straightforward than might be hoped.
20.
Philip Kremer 《Synthese》2014,191(8):1757-1760
In ‘Fair Infinite Lotteries’ (FIL), Wenmackers and Horsten use non-standard analysis to construct a family of nicely behaved hyperrational-valued probability measures on sets of natural numbers. Each probability measure in FIL is determined by a free ultrafilter on the natural numbers: distinct free ultrafilters determine distinct probability measures. The authors reply to a worry about a consequent ‘arbitrariness’ by remarking, “A different choice of free ultrafilter produces a different ... probability function with the same standard part but infinitesimal differences.” They illustrate this remark with the example of the sets of odd and even numbers. Depending on the ultrafilter, either each of these sets has probability 1/2, or the set of odd numbers has a probability infinitesimally higher than 1/2 and the set of even numbers one infinitesimally lower. The point of the current paper is simply that the amount of indeterminacy is much greater than acknowledged in FIL: there are sets of natural numbers whose probability is far more indeterminate than that of the set of odd or the set of even numbers.