Similar Documents
20 similar documents found.
1.
This paper presents and defends a definition of vagueness, compares it favourably with alternative definitions, and draws out some consequences of accepting this definition for the project of offering a substantive theory of vagueness. The definition is roughly this: a predicate ‘F’ is vague just in case for any objects a and b, if a and b are very close in respects relevant to the possession of F, then ‘Fa’ and ‘Fb’ are very close in respect of truth. The definition is extended to cover vagueness of many-place predicates, of properties and relations, and of objects. Some of the most important advantages of the definition are that it captures the intuitions which motivate the thought that vague predicates are tolerant, without leading to contradiction, and that it yields a clear understanding of the relationships between higher-order vagueness, sorites susceptibility, blurred boundaries, and borderline cases. The most notable consequence of the definition is that the correct theory of vagueness must countenance degrees of truth.
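As a hedged gloss (the notation below is mine, not the paper’s), the closeness definition can be displayed degree-theoretically, which also makes visible why it pushes towards degrees of truth:

```latex
% Sketch only: \approx_F abbreviates "very close in the respects relevant to possessing F",
% and [[.]] is assumed to be a degree of truth in [0,1].
\[
\mathrm{Vague}(F) \;\iff\; \forall a\,\forall b\;
\Bigl( a \approx_{F} b \;\rightarrow\; \bigl|\,[\![Fa]\!] - [\![Fb]\!]\,\bigr| \text{ is small} \Bigr)
\]
% Tolerance is recovered without contradiction: closeness guarantees closeness of
% truth value, not identity of truth value, so no sorites chain forces F to hold everywhere.
```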

2.
Fallibilists about looks deny that the relation of looking the same as is non-transitive. Regarding familiar examples of coloured patches suggesting that such a relation is non-transitive, they argue that, in fact, indiscriminable adjacent patches may well look different, despite their perceptual indiscriminability: it’s just that we cannot notice the relevant differences in the chromatic appearances of such patches. In this paper, I present an argument that fallibilism about looks requires commitment to an empirically false consequence. To succeed in deflecting putative cases of non-transitivity, fallibilists would have to claim that there can’t be any perceptual limitations of any kind on human chromatic discrimination. But there are good reasons to think such limitations exist.

3.
Based on a close study of benchmark examples in default reasoning, such as the Nixon Diamond and the Penguin Principle, this paper provides an in-depth analysis of the basic features of default reasoning. We formalize default inferences based on Modus Ponens for Default Implication, and mark the distinction between “local inferences” (to infer a conclusion from a subset of the given premises) and “global inferences” (to infer a conclusion from the entire set of given premises). These conceptual analyses are captured by a formal semantics built upon the set-selection function technique. A minimal logic system M of default reasoning that accommodates Modus Ponens for Default Implication and is suitable for local inferences is proposed, and its soundness is proved. Translated from Zhexue Yanjiu 哲学研究 (Philosophical Studies), 2003 (special issue), by Ye Feng.
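A minimal sketch of the local/global distinction on the Nixon Diamond (the data structures, rule names, and consistency check below are my own illustration, not the paper’s formal system M):

```python
# Nixon Diamond: Quakers are normally pacifists, Republicans normally are not.
# "Local" inference uses a subset of the premises; "global" inference uses them all.

DEFAULTS = [
    ("quaker", "pacifist"),
    ("republican", "-pacifist"),   # "-" marks the negation of a literal
]
FACTS = {"quaker", "republican"}   # Nixon is both a Quaker and a Republican

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def closure(facts, defaults):
    """Apply Modus Ponens for Default Implication until nothing new follows,
    refusing to add a literal whose negation has already been derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in defaults:
            if premise in derived and conclusion not in derived \
                    and negate(conclusion) not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Local inferences from subsets of the premises: each yields a definite answer.
print("pacifist" in closure({"quaker"}, DEFAULTS))        # True
print("-pacifist" in closure({"republican"}, DEFAULTS))   # True

# Global inference from the entire premise set: whichever default is tried first
# blocks the other, so neither conclusion is robustly derivable.
print(closure(FACTS, DEFAULTS))
```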

4.
In this paper, I argue that those who accept the conceptualist view in the philosophy of perception should reject the traditional view that colour indiscriminability is non-transitive. I start by outlining the general strategy that conceptualists have adopted in response to the familiar ‘fineness of grain’ objection, and I show why a commitment to what I call the indiscriminability claim seems to form a natural part of this strategy. I then show how, together, the indiscriminability claim and the non-transitivity claim (the claim that colour indiscriminability is non-transitive) entail a further, suspicious-looking claim that I call the problematic claim. My argument then splits into two parts. In the first part, I show why the conceptualist does indeed need to reject the problematic claim. Given that this claim is jointly entailed by the indiscriminability claim and the non-transitivity claim, the conceptualist is then left with a straight choice: reject the indiscriminability claim, or reject the non-transitivity claim. In the second part, I then explain why the conceptualist should choose the latter option.

5.
Allwein, Gerard; MacCaull, Wendy. Studia Logica, 2001, 68(2): 173–228
Gelfand quantales are complete unital quantales with an involution, *, satisfying the property that for any element a, if a ⊗ b ≤ a for all b, then a ⊗ a* ⊗ a = a. A Hilbert-style axiom system is given for a propositional logic, called Gelfand Logic, which is sound and complete with respect to Gelfand quantales. A Kripke semantics is presented for which the soundness and completeness of Gelfand Logic is shown. The completeness theorem relies on a Stone-style representation theorem for complete lattices. A Rasiowa/Sikorski-style semantic tableau system is also presented, with the property that if all branches of a tableau are closed, then the formula in question is a theorem of Gelfand Logic. An open branch in a completed tableau guarantees the existence of a Kripke model in which the formula is not valid; hence it is not a theorem of Gelfand Logic.

6.
We investigated the ability of subjects to discriminate sugars with a whole-mouth forced-choice paradigm, in which a standard solution was compared with a test solution of varied concentration. Discrimination probabilities were U-shaped functions of test concentration: for 6 subjects and pairwise combinations of fructose, glucose, and sucrose, discriminability always declined to chance over a narrow range of test concentrations. At concentrations ≤ 100 mM, maltose was indiscriminable from fructose, but it was discriminable at higher concentrations for 4 subjects. By analogy with the monochromacy of night vision, whereby any two lights are indiscriminable when their relative intensities are suitably adjusted, we call the gustatory indiscriminability of these sugars monogeusia. The simplest account of monogeusia is that all information about the indiscriminable sugars is represented by a single neural signal that varies only in magnitude. The discriminability of maltose from the other sugars at higher concentrations is consistent with the hypothesis that maltose also activates a second gustatory code.

7.
God’s silence     
Vagueness manifests itself (among other things) in our inability to find boundaries to the extension of vague predicates. A semantic theory of vagueness plans to justify this inability in terms of the vague semantic rules governing language and thought. According to a supporter of semantic theory, the inability to find such a boundary is not dependent on epistemic limits and an omniscient being like God would be equally unable. Williamson (Vagueness, 1994) argued that cooperative omniscient beings adequately instructed would find a precise boundary in a sorites series and that, for this reason, the semantic theory misses its target, while Hawthorne (Philosophical Studies 122:1–25, 2005) stood with the semantic theorists and argued that the linguistic behaviour of a cooperative omniscient being like God would clearly demonstrate that he does not find a precise boundary in the sorites series. I argue that Hawthorne’s definition of God’s cooperative behaviour cannot be accepted and that, contrary to what has been assumed by both Williamson and Hawthorne, an omniscient being like God cannot be a cooperative evaluator of a semantic theory of vagueness.

8.
This paper is about the Problem of Order, which is basically the problem of how to account both for the distinctness of facts like a’s preceding b and b’s preceding a, and for the identity of facts like a’s preceding b and b’s succeeding a. It has been shown that the Standard View fails to account for the second part and is therefore to be replaced. One of the contenders is Anti-Positionalism. As has recently been pointed out, however, Anti-Positionalism falls prey to a regress argument which purports to prove its failure. In this paper we spell out this worry, show that the worry is a serious one, and distinguish four possible strategies for Anti-Positionalism to deal with it.

9.
Analogies must be symmetric. If a is like b, then b is like a. So if a has property R, and if R is within the scope of the analogy, then b (probably) has R. However, analogical arguments generally single out, or depend upon, only one of a or b to serve as the basis for the inference. In this respect, analogical arguments are directed by an asymmetry. I defend the importance of this neglected (even when explicitly mentioned) feature in understanding analogical arguments.

10.
According to contextualism about vagueness, the content of a vague predicate is context sensitive. On this view, when item a is in the penumbra of the vague predicate ‘F’, speakers may (truly) utter ‘Fa’, or they may (truly) utter ‘not‐Fa’, without contravening the literal meaning of ‘F’. Unlike its more popular variants, the version of contextualism I defend rejects the principle of tolerance, a principle according to which small differences should not affect the applicability of a vague predicate. My goal is to show how such a rejection allows for a plausible treatment of higher‐order vagueness, and for a dissolution of paradoxes of higher‐order vagueness.

11.
Prawitz proved a theorem, formalising ‘harmony’ in Natural Deduction systems, which showed that, corresponding to any deduction, there is one to the same effect in which no formula occurrence is both the consequence of an application of an introduction rule and the major premise of an application of the related elimination rule. As Gentzen ordered the rules, certain rules in Classical Logic had to be excepted, but if we see the appropriate rules instead as rules for Contradiction, then we can extend the theorem to the classical case. Properly arranged, there is a thoroughgoing ‘harmony’ in the classical rules. Indeed, as we shall see, they are, all together, far more ‘harmonious’ in the general sense than has been commonly observed. As this paper will show, the appearance of disharmony has only arisen because of the illogical way in which natural deduction rules for Classical Logic have been presented.
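For concreteness, here is the standard conjunction detour that such a theorem removes (a textbook reduction, assumed here rather than taken from the paper):

```latex
% A "maximal" occurrence of A \wedge B: conclusion of (\wedge I) and
% major premise of (\wedge E). The detour reduces to the left subproof alone.
\[
\dfrac{\dfrac{\;\dfrac{\mathcal{D}_1}{A} \qquad \dfrac{\mathcal{D}_2}{B}\;}{A \wedge B}\;(\wedge I)}{A}\;(\wedge E)
\quad\rightsquigarrow\quad
\dfrac{\mathcal{D}_1}{A}
\]
```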

12.
Charlie Pelling. Synthese, 2011, 178(3): 437–459
According to the epistemic theory of hallucination, the fundamental psychological nature of a hallucinatory experience is constituted by its being ‘introspectively indiscriminable’, in some sense, from a veridical experience of a corresponding type. How is the notion of introspective indiscriminability to which the epistemic theory appeals best construed? Following M. G. F. Martin, the standard assumption is that the notion should be construed in terms of negative epistemics: in particular, it is assumed that the notion should be explained in terms of the impossibility that a hallucinator might possess a certain type of knowledge on a certain basis. I argue that the standard assumption is mistaken. I argue that the relevant notion of introspective indiscriminability is better construed in terms of positive epistemics: in particular, I argue that the notion is better explained by reference to the fact that it would be rational for a hallucinator positively to make a certain type of judgement, were that judgement made on a certain basis.

13.
We propose a framework which extends Antitonic Logic Programs [Damásio and Pereira, in: Proc. 6th Int. Conf. on Logic Programming and Nonmonotonic Reasoning, Springer, 2001, p. 748] to an arbitrary complete bilattice of truth-values, where belief and doubt are explicitly represented. Inspired by Ginsberg and Fitting's bilattice approaches, this framework allows a precise definition of important operators found in logic programming, such as explicit and default negation. In particular, it leads to a natural semantical integration of explicit and default negation through the Coherence Principle [Pereira and Alferes, in: European Conference on Artificial Intelligence, 1992, p. 102], according to which explicit negation entails default negation. We then define Coherent Answer Sets and the Paraconsistent Well-Founded Model semantics, generalizing many paraconsistent semantics for logic programs, in particular the Paraconsistent Well-Founded Semantics with eXplicit negation (WFSXp) [Alferes et al., J. Automated Reas. 14 (1) (1995) 93–147; Damásio, PhD thesis, 1996]. The framework is an extension of Antitonic Logic Programs for most cases, and is general enough to capture Probabilistic Deductive Databases, Possibilistic Logic Programming, Hybrid Probabilistic Logic Programs, and Fuzzy Logic Programming. Thus, we have a powerful mathematical formalism for dealing simultaneously with default reasoning, paraconsistency, and uncertainty. Results are provided about how our semantical framework deals with inconsistent information and with its propagation by the rules of the program.
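A rough illustration of the Coherence Principle at work (the notation and the two-rule program are mine, chosen only to show the intended entailment):

```latex
% Coherence Principle: explicit negation entails default negation,
% for every objective literal A.
\[
\neg A \;\Rightarrow\; \mathit{not}\, A
\]
% Hypothetical example program:
%     a  <-  not b.
%     \neg b.
% Because \neg b holds, coherence forces not b to hold as well,
% so the first rule fires and a comes out true in the coherent model.
```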

14.
The Logic of Knowledge Based Obligation
Deontic Logic goes back to Ernst Mally’s 1926 work, Grundgesetze des Sollens: Elemente der Logik des Willens [Mally, E.: 1926, Grundgesetze des Sollens: Elemente der Logik des Willens, Leuschner & Lubensky, Graz], where he presented axioms for the notion ‘p ought to be the case’. Some difficulties were found in Mally’s axioms, and the field has developed considerably since. The Logic of Knowledge goes back to Hintikka’s work Knowledge and Belief [Hintikka, J.: 1962, Knowledge and Belief: An Introduction to the Logic of the Two Notions, Cornell University Press], in which he proposed formal logics of knowledge and belief. This field has also developed a great deal and is now the subject of the TARK conferences. However, there has been relatively little work combining the two notions of knowledge (belief) with the notion of obligation (see, however, [Lomuscio, A. and Sergot, M.: 2003, Studia Logica 75, 63–92; Moore, R. C.: 1990, in J. F. Allen, J. Hendler and A. Tate (eds.), Readings in Planning, Morgan Kaufmann Publishers, San Mateo, CA]). In this paper we point out that an agent’s obligations are often dependent on what the agent knows; indeed, one cannot reasonably be expected to respond to a problem if one is not aware of its existence. For instance, a doctor cannot be expected to treat a patient unless she is aware of the fact that he is sick, and this creates a secondary obligation on the patient or someone else to inform the doctor of his situation. In other words, many obligations are situation dependent and only apply in the presence of the relevant information. Thus a case for combining Deontic Logic with the Logic of Knowledge is clear. We introduce the notion of knowledge based obligation and offer an S5, history based Kripke semantics to express this notion, as this semantics enables us to represent how information is transmitted among agents and how knowledge changes over time as a result of communications. We consider both the case of an absolute obligation (although dependent on information) and the (defeasible) notion of an obligation which may be overridden by more relevant information. For instance, a physician who is about to inject a patient with drug d may find out that the patient is allergic to d and that she should use d′ instead. Dealing with the second kind of case requires a resort to non-monotonic reasoning and the notion of justified belief, which is stronger than plain belief but weaker than absolute knowledge in that it can be overridden. This notion of justified belief also creates a derived notion of default obligation, where an agent has, as far as the agent knows, an obligation to do some action a. A dramatic application of this notion is our analysis of the Kitty Genovese case where, in 1964, a young woman was stabbed to death while 38 neighbours watched from their windows but did nothing. The reason was not indifference: none of the neighbours had even a default obligation to act, even though, as a group, they did have an obligation to take some action to protect Kitty. Earlier versions of this paper were presented at the conferences SEP-2004 and DALT-2004.
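The doctor example can be miniaturised as follows; this is my own two-world S5 toy (plain possible-worlds, not the paper’s history-based semantics), in which an obligation counts as knowledge based only when it holds throughout the agent’s information set:

```python
# Hypothetical two-world model for the doctor example.
WORLDS = ["sick", "healthy"]

def ought_treat(world):
    # Ground-level obligation: treating is obligatory exactly where the patient is sick.
    return world == "sick"

def accessible(world, informed):
    # The doctor's S5 accessibility: a singleton information cell if informed, otherwise both worlds.
    return [world] if informed else list(WORLDS)

def knows(world, informed, prop):
    """K_doctor(prop): prop holds at every world the doctor considers possible."""
    return all(prop(w) for w in accessible(world, informed))

def knowledge_based_obligation(world, informed):
    """The doctor has a knowledge-based obligation to treat only if she knows
    that treatment is obligatory."""
    return knows(world, informed, ought_treat)

actual = "sick"
print(knowledge_based_obligation(actual, informed=False))  # False: she is unaware of the illness
print(knowledge_based_obligation(actual, informed=True))   # True: informing her creates the obligation
```

This also shows why the secondary obligation to inform arises: telling the doctor is precisely what turns the latent obligation into a knowledge-based one.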

15.
In this paper we investigate a semantics for first-order logic originally proposed by R. van Rooij to account for the idea that vague predicates are tolerant, that is, for the principle that if x is P, then y should be P whenever y is similar enough to x. The semantics, which makes use of indifference relations to model similarity, rests on the interaction of three notions of truth: the classical notion, and two dual notions simultaneously defined in terms of it, which we call tolerant truth and strict truth. We characterize the space of consequence relations definable in terms of those and discuss the kind of solution this gives to the sorites paradox. We discuss some applications of the framework to the pragmatics and psycholinguistics of vague predicates, in particular regarding judgments about borderline cases.
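As a hedged summary (standard notation for this kind of framework; the paper’s own symbols may differ), the two derived notions are defined from classical satisfaction and a per-predicate indifference relation:

```latex
% Tolerant (t) and strict (s) satisfaction, defined from classical satisfaction
% \models and the indifference relation \sim_P modelling similarity for P.
\begin{align*}
M \models^{t} P(a) &\iff \exists b\,\bigl( a \sim_{P} b \;\wedge\; M \models P(b) \bigr)
  && \text{(tolerant truth)}\\[2pt]
M \models^{s} P(a) &\iff \forall b\,\bigl( a \sim_{P} b \;\rightarrow\; M \models P(b) \bigr)
  && \text{(strict truth)}
\end{align*}
% Duality: P(a) is strictly true exactly when \neg P(a) is not tolerantly true.
% Tolerance holds tolerantly: if a is classically P and a \sim_P b, then b is tolerantly P.
```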

16.
This paper uses a non-distributive system of Boolean fractions (a|b), where a and b are 2-valued propositions or events, to express uncertain conditional propositions and conditional events. These Boolean fractions, ‘a if b’ or ‘a given b’, ordered pairs of events which did not exist for the founders of quantum logic, can better represent uncertain conditional information, just as integer fractions can better represent partial distances on a number line. Since the indeterminacy of some pairs of quantum events is due to the mutual inconsistency of their experimental conditions, this algebra of conditionals can express indeterminacy. In fact, this system is able to express the crucial quantum concepts of orthogonality, simultaneous verifiability, compatibility, and the superposition of quantum events, all without resorting to Hilbert space. A conditional (a|b) is said to be “inapplicable” (or “undefined”) in those instances or models for which b is false. Otherwise the conditional takes the truth-value of proposition a. Thus the system is technically 3-valued, but the third value has nothing to do with a state of ignorance, nor with some half-truth. People already routinely put statements into three categories: true, false, or inapplicable. As such, this system applies to macroscopic as well as microscopic events. Two conditional propositions turn out to be simultaneously verifiable just in case the truth of one implies the applicability of the other. Furthermore, two conditional propositions (a|b) and (c|d) reside in a common Boolean sub-algebra of the non-distributive system of conditional propositions just in case b = d, i.e., their conditions are equivalent. Since all aspects of quantum mechanics can be represented with this near-classical logic, there is no need to adopt Hilbert space logic as ordinary logic, just a need perhaps to adopt propositional fractions to do logic, just as we long ago adopted integer fractions to do arithmetic. The algebra of Boolean fractions is a natural, near-Boolean extension of Boolean algebra adequate to express quantum logic. While this paper explains one group of quantum anomalies, it nevertheless leaves no less mysterious the ‘influence-at-a-distance’ quantum entanglement phenomena. A quantum realist must still embrace non-local influences to hold that “hidden variables” are the measured properties of particles. But that seems easier than imagining wave–particle duality and instant collapse, as offered by proponents of the standard interpretation of quantum mechanics. Partial support for this work is gratefully acknowledged from the In-House Independent Research Program and from Code 2737 at the Space & Naval Warfare Systems Center (SSC-SD), San Diego, CA 92152-5001. Presently this work is supported by Data Synthesis, 2919 Luna Avenue, San Diego, CA 92117.
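A small sketch of the three-valued reading of (a|b) over a finite sample space (class and method names are mine; I also read “the truth of one implies the applicability of the other” disjunctively, which is an interpretive assumption):

```python
# Boolean fractions (a|b) over a finite sample space: inapplicable where b fails,
# otherwise carrying the truth value of a.

class Conditional:
    def __init__(self, a, b):
        self.a = frozenset(a)   # consequent event
        self.b = frozenset(b)   # conditioning event

    def value(self, outcome):
        """True/False where the conditional applies, None where it is inapplicable."""
        if outcome not in self.b:
            return None          # condition false: (a|b) is inapplicable here
        return outcome in self.a

    def applicable_set(self, space):
        return {w for w in space if w in self.b}

    def truth_set(self, space):
        return {w for w in space if self.value(w) is True}

def simultaneously_verifiable(c1, c2, space):
    """Per the abstract: the truth of one implies the applicability of the other."""
    return (c1.truth_set(space) <= c2.applicable_set(space)
            or c2.truth_set(space) <= c1.applicable_set(space))

space = {1, 2, 3, 4, 5, 6}                    # one roll of a die
even_given_high = Conditional({2, 4, 6}, {4, 5, 6})
low_given_odd = Conditional({1, 2}, {1, 3, 5})
print(even_given_high.value(5))                                          # False: applicable but not true
print(low_given_odd.value(4))                                            # None: inapplicable
print(simultaneously_verifiable(even_given_high, low_given_odd, space))  # False
```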

17.
Tan, Yao-Hua. Synthese, 1997, 110(3): 357–379
Currently there is hardly any connection between the philosophy of science and Artificial Intelligence research. We argue that both fields can benefit from each other. As an example of this mutual benefit we discuss the relation between Inductive-Statistical Reasoning and Default Logic. One of the main topics in AI research is the study of common-sense reasoning with incomplete information. Default logic was developed specifically to formalise this type of reasoning. We show that there is a striking resemblance between inductive-statistical reasoning and default logic. A central theme in the logical positivist study of inductive-statistical reasoning, such as Hempel’s Criterion of Maximal Specificity, turns out to be equally important in default logic. We also discuss to what extent the relevance of the results of Logical Positivism to AI research could contribute to a reevaluation of Logical Positivism in general.
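To make the resemblance concrete (the Tweety example is a standard textbook illustration, not one taken from the paper):

```latex
% A Reiter-style normal default: "birds normally fly".
\[
\frac{\mathit{Bird}(x) : \mathit{Flies}(x)}{\mathit{Flies}(x)}
\]
% The corresponding Hempelian inductive-statistical argument:
\[
\frac{P(\mathit{Flies} \mid \mathit{Bird}) \approx 1 \qquad \mathit{Bird}(\mathit{tweety})}
     {\mathit{Flies}(\mathit{tweety})}
\;\;[\text{practically certain}]
\]
% In both cases more specific information (e.g. Penguin(tweety)) defeats the
% inference: the Criterion of Maximal Specificity and default-logic consistency
% checks play structurally similar roles.
```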

18.
It is a commonplace that the extensions of most, perhaps all, vague predicates vary with such features as comparison class and paradigm and contrasting cases. My view proposes another, more pervasive contextual parameter. Vague predicates exhibit what I call open texture: in some circumstances, competent speakers can go either way in the borderline region. The shifting extensions and anti-extensions of vague predicates are tracked by what David Lewis calls the “conversational score”, and are regulated by what Kit Fine calls penumbral connections, including a principle of tolerance. As I see it, vague predicates are response-dependent, or, better, judgement-dependent, at least in their borderline regions. This raises questions concerning how one reasons with such predicates. In this paper, I present a model theory for vague predicates, so construed. It is based on an overall supervaluationist-style framework, and it invokes analogues of Kripke structures for intuitionistic logic. I argue that the system captures, or at least nicely models, how one ought to reason with the shifting extensions (and anti-extensions) of vague predicates, as borderline cases are called and retracted in the course of a conversation. The model theory is illustrated with a forced-march sorites series, and also with a thought experiment in which vague predicates interact with so-called future contingents. I show how to define various connectives and quantifiers in the language of the system, and how to express various penumbral connections and the principle of tolerance. The project fits into one of the topics of this special issue: in the course of reasoning, even with the external context held fixed, it is uncertain what the future extension of the vague predicates will be; yet we still manage to reason with them. The system is based on that developed, more fully, in my Vagueness in Context, Oxford: Oxford University Press, 2006, but some criticisms and replies to critics are incorporated.

19.
In a definition (∀x)((x ∈ r) ↔ D[x]) of the set r, the definiens D[x] must not depend on the definiendum r. This implies that all quantifiers in D[x] are independent of r and of (∀x). This cannot be implemented in traditional first-order logic, but it can be expressed in IF logic. Violations of such independence requirements are what created the typical paradoxes of set theory. Poincaré’s Vicious Circle Principle was intended to bar such violations. Russell nevertheless misunderstood the principle; for him a set a can depend on another set b only if (b ∈ a) or (b ⊆ a). Likewise, the truth of an ordinary first-order sentence with the Gödel number of r is undefinable in Tarski’s sense because the quantifiers of the definiens depend unavoidably on r.

20.
We add a limited but useful form of quantification to Coalition Logic, a popular formalism for reasoning about cooperation in game-like multi-agent systems. The basic constructs of Quantified Coalition Logic (QCL) allow us to express such properties as “every coalition satisfying property P can achieve φ” and “there exists a coalition C satisfying property P such that C can achieve φ”. We give an axiomatisation of QCL, and show that while it is no more expressive than Coalition Logic, it is nevertheless exponentially more succinct. The complexity of QCL model checking for symbolic and explicit state representations is shown to be no worse than that of Coalition Logic, and satisfiability for QCL is shown to be no worse than satisfiability for Coalition Logic. We illustrate the formalism by showing how to succinctly specify such social choice mechanisms as majority voting, which in Coalition Logic require specifications that are exponentially long in the number of agents.
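As a rough illustration of what the quantified constructs express (my own explicit-state toy, not the paper’s axiomatisation or model-checking algorithm), here is a brute-force check of “some majority coalition can achieve phi”, which also shows where the exponential blow-up that QCL avoids comes from:

```python
from itertools import combinations

# Hypothetical explicit-state effectivity model: AGENTS is the grand coalition,
# CAN_ACHIEVE maps a coalition to the goals it can bring about.
AGENTS = {"a1", "a2", "a3", "a4", "a5"}
CAN_ACHIEVE = {
    frozenset({"a1", "a2", "a3"}): {"phi"},
    frozenset({"a4", "a5"}): set(),
}

def can_achieve(coalition, goal):
    return goal in CAN_ACHIEVE.get(frozenset(coalition), set())

def exists_coalition(predicate, goal):
    """Some coalition satisfying the predicate can achieve the goal.
    Naive quantification enumerates all 2^n coalitions; a QCL formula expresses
    the same property without listing them, which is the succinctness gain."""
    for size in range(len(AGENTS) + 1):
        for coalition in combinations(sorted(AGENTS), size):
            if predicate(set(coalition)) and can_achieve(coalition, goal):
                return True
    return False

majority = lambda c: len(c) > len(AGENTS) // 2
print(exists_coalition(majority, "phi"))   # True: {a1, a2, a3} is a majority achieving phi
```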
