Similar Literature
20 matching documents found.
1.
Ernest W. Adams, Synthese (2005) 146(1-2): 129-138
Syllogisms like Barbara, “If all S is M and all M is P, then all S is P”, are here analyzed not in terms of the truth of their categorical constituents, “all S is M”, etc., but rather in terms of the corresponding proportions, e.g., of Ss that are Ms. This allows us to consider the inferences’ approximate validity, and whether the fact that most Ss are Ms and most Ms are Ps guarantees that most Ss are Ps. It turns out that no standard syllogism is universally valid in this sense, but special ‘default rules’ govern approximate reasoning of this kind. Special attention is paid to inferences involving existential propositions of the “Some S is M” form, where it does not make sense to say “Almost some S is M”, but where it is important that in everyday speech, “Some” does not mean “At least one”, but rather “A not insignificant number”.
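A concrete counterexample makes the failure of proportional Barbara easy to see. The sketch below is my own illustration (not taken from the paper): it builds sets in which most Ss are Ms and most Ms are Ps, yet no S is P.

```python
def proportion(xs, ys):
    """Proportion of members of xs that are also members of ys."""
    xs = set(xs)
    return len(xs & set(ys)) / len(xs)

S = set(range(0, 10))    # ten objects
M = set(range(4, 100))   # overlaps the "top" of S
P = set(range(10, 100))  # misses S entirely

assert proportion(S, M) > 0.5  # most Ss are Ms (6/10)
assert proportion(M, P) > 0.5  # most Ms are Ps (90/96)
print(proportion(S, P))        # yet no S is P
```

Running the sketch prints a proportion of 0: the two "most" premises are satisfied while the "most Ss are Ps" conclusion fails completely, which is the sense in which the syllogism is not even approximately valid in general.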

2.
Default reasoning occurs whenever the truth of the evidence available to the reasoner does not guarantee the truth of the conclusion being drawn. Despite this, one is entitled to draw the conclusion “by default” on the grounds that we have no information which would make us doubt that the inference should be drawn. It is the type of conclusion we draw in the ordinary world and ordinary situations in which we find ourselves. Formally speaking, ‘nonmonotonic reasoning’ refers to argumentation in which one uses certain information to reach a conclusion, but where it is possible that adding some further information to those very same premises could make one want to retract the original conclusion. It is easily seen that the informal notion of default reasoning manifests a type of nonmonotonic reasoning. Generally speaking, default statements are said to be true about the class of objects they describe, despite the acknowledged existence of “exceptional instances” of the class. In the absence of explicit information that an object is one of the exceptions we are enjoined to apply the default statement to the object. But further information may later tell us that the object is in fact one of the exceptions. So this is one of the points where nonmonotonicity resides in default reasoning. The informal notion has been seen as central to a number of areas of scholarly investigation, and we canvass some of them before turning our attention to its role in AI. It is because ordinary people so cleverly and effortlessly use default reasoning to solve interesting cognitive tasks that nonmonotonic formalisms were introduced into AI, and we argue that this is a form of psychologism, despite the fact that it is not usually recognized as such in AI. We close by mentioning some of the results from our empirical investigations that we believe should be incorporated into nonmonotonic formalisms.
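The nonmonotonicity described above can be shown in a few lines. The toy sketch below is my own illustration (the names `EXCEPTIONS` and `flies` are hypothetical, not from the abstract): a default conclusion is drawn from the available facts, and adding a further fact to the very same premises retracts it.

```python
# Known exception classes for the default "birds fly".
EXCEPTIONS = {"penguin", "ostrich"}

def flies(facts):
    """Conclude 'flies' by default from 'bird', unless the facts
    explicitly place the object in an exception class."""
    return "bird" in facts and not (facts & EXCEPTIONS)

assert flies({"bird"})                 # default conclusion drawn
assert not flies({"bird", "penguin"})  # added premise retracts it
```

Classical consequence is monotonic (a conclusion survives any added premise); here the superset `{"bird", "penguin"}` loses a conclusion that `{"bird"}` supported, which is exactly the retraction pattern the abstract describes.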

3.
We investigated whether two basic forms of deductive inference, Modus Ponens and Disjunctive Syllogism, occur automatically and without awareness. In Experiment 1, we used a priming paradigm with a set of conditional and disjunctive problems. For each trial, two premises were shown. The second premise was presented at a rate designed to be undetectable. After each problem, participants had to evaluate whether a newly-presented target number was odd or even. The target number matched or did not match a conclusion endorsed by the two previous premises. We found that when the target matched the conclusion of a Modus Ponens inference, the evaluation of the target number was reliably faster than baseline even when participants reported that they were not aware of the second premise. This priming effect did not occur for any other valid or invalid inference that we tested, including the Disjunctive Syllogism. In Experiment 2, we used a forced-choice paradigm in which we found that some participants were able to access some information about the second premise when their attention was explicitly directed to it. In Experiment 3, we showed that the priming effect for Modus Ponens was also present in participants who could not access any information about the second premise. In Experiment 4 we explored whether spatial relations (e.g., "a before b") or sentences with quantifiers (e.g., "all a with b") could generate a priming effect similar to the one observed for Modus Ponens. A priming effect was found for Modus Ponens only, not for the other relations tested. These findings show that the Modus Ponens inference, in contrast to other deductive inferences, can be carried out automatically and unconsciously. Furthermore, our findings suggest that critical deductive inference schemata can be included in the range of high-level cognitive activities that are carried out unconsciously.

4.
Personality signatures are sets of if-then rules describing how a given person would feel or act in a specific situation. These rules can be used as the major premise of a deductive argument, but they are mostly processed for social cognition purposes; and this common usage is likely to leak into the way they are processed in a deductive reasoning context. It is hypothesised that agreement with a Modus Ponens argument featuring a personality signature as its major premise is affected by the reasoner's own propensity to display this personality signature. To test this prediction, Modus Ponens arguments were constructed from conditionally phrased items extracted from available personality scales. This allowed recording of (a) agreement with the conclusion of these arguments, and (b) the reasoner's propensity to display the personality signature, using as a proxy this reasoner's score on the personality scale without the items used in the argument. Three experiments (N = 256, N = 318, N = 298) applied this procedure to Fairness, Responsive Joy, and Self-Control. These experiments yielded very comparable effects, establishing that a reasoner's propensity to display a given personality signature determines this reasoner's agreement with the conclusion of a Modus Ponens argument featuring the personality signature.

5.
“Surrender; therefore, surrender or fight” is apparently an argument corresponding to an inference from an imperative to an imperative. Several philosophers, however (Williams 1963; Wedeking 1970; Harrison 1991; Hansen 2008), have denied that imperative inferences exist, arguing that (1) no such inferences occur in everyday life, (2) imperatives cannot be premises or conclusions of inferences because it makes no sense to say, for example, “since surrender” or “it follows that surrender or fight”, and (3) distinct imperatives have conflicting permissive presuppositions (“surrender or fight” permits you to fight without surrendering, but “surrender” does not), so issuing distinct imperatives amounts to changing one’s mind and thus cannot be construed as making an inference. In response I argue inter alia that, on a reasonable understanding of ‘inference’, some everyday-life inferences do have imperatives as premises and conclusions, and that issuing imperatives with conflicting permissive presuppositions does not amount to changing one’s mind.

6.
We report an experiment in which we test the possible influence of the tense of the verb and explicit negatives with indicative conditionals. We tested the effects of systematically negating the constituents of four fundamental inferences based on conditionals in three different tenses (present tense, past tense, future tense): Modus Ponens (i.e., inferences of the form: if p then q; p; therefore q), Modus Tollens (if p then q; not-q; therefore not-p), Affirmation of the Consequent (if p then q; q; therefore p), and Denial of the Antecedent (if p then q; not-p; therefore not-q). The latter two inferences are invalid for true conditionals, but are valid for bi-conditionals (if, and only if, p then q). The participants drew their own conclusions from premises about letters and numbers on cards. We discuss the results in relation to an affirmation premise bias, a negative conclusion bias, and a double negation effect. We outline the importance of our findings for theories about conditional and counterfactual thinking.
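The validity facts this abstract relies on can be checked mechanically by enumerating truth assignments. The sketch below is my own illustration, reading the conditional as the material conditional and the bi-conditional as material equivalence:

```python
from itertools import product

def implies(p, q):
    return (not p) or q  # material conditional

def iff(p, q):
    return p == q        # bi-conditional

def valid(conn, minor, concl):
    """An inference 'conn(p, q); minor; therefore concl' is valid iff
    the conclusion holds in every assignment satisfying both premises."""
    return all(concl(p, q)
               for p, q in product([True, False], repeat=2)
               if conn(p, q) and minor(p, q))

# Modus Ponens and Modus Tollens are valid for the conditional:
assert valid(implies, lambda p, q: p, lambda p, q: q)
assert valid(implies, lambda p, q: not q, lambda p, q: not p)

# Affirmation of the Consequent and Denial of the Antecedent are not...
assert not valid(implies, lambda p, q: q, lambda p, q: p)
assert not valid(implies, lambda p, q: not p, lambda p, q: not q)

# ...but both become valid for the bi-conditional:
assert valid(iff, lambda p, q: q, lambda p, q: p)
assert valid(iff, lambda p, q: not p, lambda p, q: not q)
```

The last two assertions reproduce the abstract's point that AC and DA, while fallacies for a true conditional, are valid once the premise is read as "if, and only if, p then q".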

7.
Causal conditional reasoning means reasoning from a conditional statement that refers to causal content. We argue that data from causal conditional reasoning tasks tell us something not only about how people interpret conditionals, but also about how they interpret causal relations. In particular, three basic principles of people's causal understanding emerge from previous studies: the modal principle, the exhaustive principle, and the equivalence principle. Restricted to the four classic conditional inferences—Modus Ponens, Modus Tollens, Denial of the Antecedent, and Affirmation of the Consequent—causal conditional reasoning data are only partially able to support these principles. We present three experiments that use concrete and abstract causal scenarios and combine inference tasks with a new type of task in which people reformulate a given causal situation. The results provide evidence for the proposed representational principles. Implications for theories of the naïve understanding of causality are discussed.

8.
People accept the conclusions of valid conditional inferences (e.g., if p then q; p; therefore q) less, the more disablers (circumstances that prevent q from happening even though p is true) exist. We investigated whether rules that through their phrasing exclude disablers evoke higher acceptance ratings than rules that do not exclude disablers. In three experiments we re-phrased content-rich conditionals from the literature as either universal or existential rules and embedded these rules in Modus Ponens and Modus Tollens inferences. In Experiments 2 and 3, we also used abstract rules. The acceptance of conclusions increased when the rule was phrased with “all” instead of “some”, and the number of disablers had a greater impact on existential rules than on universal rules. Further, the effect of the quantifier was more pronounced for abstract rules and when tested within subjects. We discuss the relevance of phrasing, quantifiers, and knowledge in reasoning.

9.
Ariel Cohen, Studia Logica (2008) 90(3): 369-383
Most solutions to the sorites reject its major premise, i.e. the quantified conditional (roughly: for every n, if n grains make a heap, then so do n − 1). This rejection appears to imply a discrimination between two elements that are supposed to be indiscriminable. Thus, the puzzle of the sorites involves in a fundamental way the notion of indiscriminability. This paper analyzes this relation and formalizes it, in a way that makes the rejection of the major premise more palatable. The intuitive idea is that we consider two elements indiscriminable by default, i.e. unless we know some information that discriminates between them. Specifically, following Rough Set Theory, two elements are defined to be indiscernible if they agree on the vague property in question. Then, a is defined to be indiscriminable from b if a is indiscernible by default from b. That is to say, a is indiscriminable from b if it is consistent to assume that a and b agree on the relevant vague property. Indiscernibility by default is formalized with the use of Default Logic, and is shown to have intuitively desirable properties: it is entailed by equality, is reflexive and symmetric. And while the relation is neither transitive nor substitutive, it is “almost” substitutive. This definition of indiscriminability is incorporated into three major theories of vagueness, namely the supervaluationist, epistemic, and contextualist views. Each one of these theories is reduced to a different strategy dealing with multiple extensions in Default Logic, and the rejection of the major premise is shown to follow naturally. Thus, while the proposed notion of indiscriminability does not solve the sorites by itself, it does make the unintuitive conclusion of many of its proposed solutions—the rejection of the major premise—a bit easier to accept.

10.
Inductive probabilistic reasoning is understood as the application of inference patterns that use statistical background information to assign (subjective) probabilities to single events. The simplest such inference pattern is direct inference: from “70% of As are Bs” and “a is an A” infer that a is a B with probability 0.7. Direct inference is generalized by Jeffrey’s rule and the principle of cross-entropy minimization. To adequately formalize inductive probabilistic reasoning is an interesting topic for artificial intelligence, as an autonomous system acting in a complex environment may have to base its actions on a probabilistic model of its environment, and the probabilities needed to form this model can often be obtained by combining statistical background information with particular observations made, i.e., by inductive probabilistic reasoning. In this paper a formal framework for inductive probabilistic reasoning is developed: syntactically it consists of an extension of the language of first-order predicate logic that allows one to express statements about both statistical and subjective probabilities. Semantics for this representation language are developed that give rise to two distinct entailment relations: a relation ⊨ that models strict, probabilistically valid inferences, and a second relation that models inductive probabilistic inferences. The inductive entailment relation is obtained by implementing cross-entropy minimization in a preferred model semantics. A main objective of our approach is to ensure that for both entailment relations complete proof systems exist. This is achieved by allowing probability distributions in our semantic models that use non-standard probability values. A number of results are presented that show that in several important aspects the resulting logic behaves just like a logic based on real-valued probabilities alone.
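The two inference patterns named in the abstract can be sketched numerically. The encoding below is mine (a toy lookup table, not the paper's first-order language): direct inference reads a single-case probability off a statistical statement, and Jeffrey's rule generalizes it to an uncertain reference partition.

```python
# "70% of As are Bs" as a statistical background statement.
stats = {("A", "B"): 0.7}

def direct_inference(ref_class, target):
    """From 'p of ref_class are target' and 'a is a ref_class',
    assign P(target(a)) = p."""
    return stats[(ref_class, target)]

def jeffrey(cond_probs, partition_probs):
    """Jeffrey's rule: P(B) = sum_i P(B | A_i) * P(A_i)
    over an exhaustive partition A_1, ..., A_n."""
    return sum(c * p for c, p in zip(cond_probs, partition_probs))

# Direct inference: a is an A, so P(B(a)) = 0.7.
assert direct_inference("A", "B") == 0.7

# If instead a is an A only with probability 0.8, and (hypothetically)
# P(B | not-A) = 0.1, Jeffrey's rule mixes the two conditionals:
assert abs(jeffrey([0.7, 0.1], [0.8, 0.2]) - 0.58) < 1e-9
```

Note that direct inference is the special case of Jeffrey's rule in which the partition probability of the reference class is 1.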

11.
True beliefs and truth‐preserving inferences are, in some sense, good beliefs and good inferences. When an inference is valid though, it is not merely truth‐preserving, but truth‐preserving in all cases. This motivates my question: I consider a Modus Ponens inference, and I ask what its validity in particular contributes to the explanation of why the inference is, in any sense, a good inference. I consider the question under three different definitions of ‘case’, and hence of ‘validity’: (i) the orthodox definition given in terms of interpretations or models, (ii) a metaphysical definition given in terms of possible worlds, and (iii) a substitutional definition defended by Quine. I argue that the orthodox notion is poorly suited to explain what's good about a Modus Ponens inference. I argue that there is something good that is explained by a certain kind of truth across possible worlds, but the explanation is not provided by metaphysical validity in particular; nothing of value is explained by truth across all possible worlds. Finally, I argue that the substitutional notion of validity allows us to correctly explain what is good about a valid inference.

12.
Markus Knauff, Topoi (2007) 26(1): 19-36
The aim of this article is to strengthen links between cognitive brain research and formal logic. The work covers three fundamental sorts of logical inferences: reasoning in the propositional calculus, i.e. inferences with the conditional “if...then”, reasoning in the predicate calculus, i.e. inferences based on quantifiers such as “all”, “some”, “none”, and reasoning with n-place relations. Studies with brain-damaged patients and neuroimaging experiments indicate that such logical inferences are implemented in overlapping but different bilateral cortical networks, including parts of the fronto-temporal cortex, the posterior parietal cortex, and the visual cortices. I argue that these findings show that we do not use a single deterministic strategy for solving logical reasoning problems. This account resolves many disputes about how humans reason logically and why we sometimes deviate from the norms of formal logic.

13.
Deductive inference is usually regarded as being “tautological” or “analytical”: the information conveyed by the conclusion is contained in the information conveyed by the premises. This idea, however, clashes with the undecidability of first-order logic and with the (likely) intractability of Boolean logic. In this article, we address the problem both from the semantic and the proof-theoretical point of view. We propose a hierarchy of propositional logics that are all tractable (i.e. decidable in polynomial time), although by means of growing computational resources, and converge towards classical propositional logic. The underlying claim is that this hierarchy can be used to represent increasing levels of “depth” or “informativeness” of Boolean reasoning. Special attention is paid to the most basic logic in this hierarchy, the pure “intelim logic”, which satisfies all the requirements of a natural deduction system (allowing both introduction and elimination rules for each logical operator) while admitting of a feasible (quadratic) decision procedure. We argue that this logic is “analytic” in a particularly strict sense, in that it rules out any use of “virtual information”, which is chiefly responsible for the combinatorial explosion of standard classical systems. As a result, analyticity and tractability are reconciled and growing degrees of computational complexity are associated with the depth at which the use of virtual information is allowed.

14.
In certain contexts reasoners reject instances of the valid Modus Ponens and Modus Tollens inference form in conditional arguments. Byrne (1989) observed this suppression effect when a conditional premise is accompanied by a conditional containing an additional requirement. In an earlier study, Rumain, Connell, and Braine (1983) observed suppression of the invalid inferences “the denial of the antecedent” and “the affirmation of the consequent” when a conditional premise is accompanied by a conditional containing an alternative requirement. Here we present three experiments showing that the results of Byrne (1989) and Rumain et al. (1983) are influenced by the answer procedure. When reasoners have to evaluate answer alternatives that only deal with the inferences that can be made with respect to the first conditional, then suppression is observed (Experiment 1). However, when reasoners are also given answer alternatives about the second conditional (Experiment 2), no suppression is observed. Moreover, contrary to the hypothesis of Byrne (1989), at least some of the reasoners do not combine the information of the two conditionals and do not give a conclusion based on the combined premise. Instead, we hypothesise that some of the reasoners have reasoned in two stages. In the first stage, they form a putative conclusion on the basis of the first conditional and the categorical premise, and in the second stage, they amend the putative conclusion in the light of the information in the second premise. This hypothesis was confirmed in Experiment 3. Finally, the results are discussed with respect to the mental model theory and reasoning research in general.

15.
Given that A is longer than B, and that B is longer than C, even 5-year-old children can infer that A is longer than C. Theories of reasoning based on formal rules of inference invoke simple axioms ("meaning postulates") to capture such transitive inferences. An alternative theory proposes instead that reasoners construct mental models of the situation described by the premises in order to draw such inferences. An unexpected consequence of the model theory is that if adult reasoners construct simple models of typical situations, then they should infer transitive relations where, in certain cases, none exists. We report four studies corroborating the occurrence of these "pseudo-transitive" fallacies. Experiment 1 established that individuals' diagrams of certain non-transitive relations yield transitive conclusions. Experiment 2 showed that these premises also give rise to fallacious transitive inferences. Experiment 3 established that when the context suggested alternatives to the simple models, the participants made fewer errors. Experiment 4 showed that tense is an important aspect of meaning which affects whether individuals draw transitive conclusions. We discuss the implications of these results for various theories of reasoning.

16.
There has been considerable work on practical reasoning in artificial intelligence and also in philosophy. Typically, such reasoning includes premises regarding means–end relations. A clear semantics for such relations is needed in order to evaluate proposed syllogisms. In this paper, we provide a formal semantics for means–end relations, in particular for necessary and sufficient means–end relations. Our semantics includes a non-monotonic conditional operator, so that related practical reasoning is naturally defeasible. This work is primarily an exercise in conceptual analysis, aimed at clarifying and eventually evaluating existing theories of practical reasoning (pending a similar analysis regarding desires, intentions and other relevant concepts).
“They were in conversation without speaking. They didn’t need to speak. They just changed reality so that they had spoken.” (Terry Pratchett, Reaper Man)

17.
Classic deductive logic entails that once a conclusion is sustained by a valid argument, the argument can never be invalidated, no matter how many new premises are added. This derived property of deductive reasoning is known as monotonicity. Monotonicity is thought to conflict with the defeasibility of reasoning in natural language, where the discovery of new information often leads us to reject conclusions that we once accepted. This perceived failure of monotonic reasoning to observe the defeasibility of natural-language arguments has led some philosophers to abandon deduction itself (!), often in favor of new, non-monotonic systems of inference known as ‘default logics’. But these radical logics (e.g., Ray Reiter's default logic) introduce their desired defeasibility at the expense of other, equally important intuitions about natural-language reasoning. And, as a matter of fact, if we recognize that monotonicity is a property of the form of a deductive argument and not its content (i.e., the claims in the premise(s) and conclusion), we can see how the common-sense notion of defeasibility can actually be captured by a purely deductive system.

18.
In research on the recognition heuristic (Goldstein & Gigerenzer, Psychological Review, 109, 75–90, 2002), knowledge of recognized objects has been categorized as “recognized” or “unrecognized” without regard to the degree of familiarity of the recognized object. In the present article, we propose a new inference model—familiarity-based inference. We hypothesize that when subjective knowledge levels (familiarity) of recognized objects differ, the degree of familiarity of recognized objects will influence inferences. Specifically, people are predicted to infer that the more familiar object in a pair of two objects has a higher criterion value on the to-be-judged dimension. In two experiments, using a binary choice task, we examined inferences about the populations of pairs of cities. Results support the predictions of familiarity-based inference. Participants inferred that the more familiar city in a pair was more populous. Statistical modeling showed that individual differences in familiarity-based inference lie in the sensitivity to differences in familiarity. In addition, we found that familiarity-based inference can generally be regarded as an ecologically rational inference. Furthermore, when cue knowledge about the inference criterion was available, participants made inferences based on the cue knowledge about population instead of familiarity. Implications of the role of familiarity in psychological processes are discussed.

19.
In this paper, it is argued that Ferguson’s (2003, Argumentation 17, 335–346) recent proposal to reconcile monotonic logic with defeasibility has three counterintuitive consequences. First, the conclusions that can be derived from his new rule of inference are vacuous, a point that was already made against default logics when there are conflicting defaults. Second, his proposal requires a procedural “hack” to break the symmetry between the disjuncts of the tautological conclusions to which his proposal leads. Third, Ferguson’s proposal amounts to arguing that all everyday inferences are sound by definition. It is concluded that the informal-logic response to defeasibility, that an account of the context in which inferences are sound or unsound is required, still stands. It is also observed that another possible response is given by Bayesian probability theory (Oaksford and Chater, in press, Bayesian Rationality: The Probabilistic Approach to Human Reasoning, Oxford University Press, Oxford, UK; Hahn and Oaksford, in press, Synthese).

20.
Probabilistic inference forms lead from point probabilities of the premises to interval probabilities of the conclusion. The probabilistic version of Modus Ponens, for example, licenses the inference from \({P(A) = \alpha}\) and \({P(B|A) = \beta}\) to \({P(B)\in [\alpha\beta, \alpha\beta + 1 - \alpha]}\). We study generalized inference forms with three or more premises. The generalized Modus Ponens, for example, leads from \({P(A_{1}) = \alpha_{1}, \ldots, P(A_{n})= \alpha_{n}}\) and \({P(B|A_{1} \wedge \cdots \wedge A_{n}) = \beta}\) to a corresponding interval for P(B). We present the probability intervals for the conclusions of the generalized versions of Cut, Cautious Monotonicity, Modus Tollens, Bayes’ Theorem, and some SYSTEM O rules. Recently, Gilio has shown that generalized inference forms “degrade”: more premises lead to less precise conclusions, i.e., to wider probability intervals for the conclusion. We also study Adams’ probability preservation properties in generalized inference forms. Special attention is devoted to zero probabilities of the conditioning events. These zero probabilities often lead to different intervals in the coherence and the Kolmogorov approaches.
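The two-premise interval stated in the abstract is easy to compute, and the degradation phenomenon can be reproduced with a simple bound. The sketch below is mine: the generalized version uses the Fréchet lower bound on the conjunction of the antecedents, which recovers the two-premise formula for n = 1 and widens as premises are added; the paper's exact intervals may differ.

```python
def mp_interval(alpha, beta):
    """Probabilistic Modus Ponens: from P(A) = alpha and P(B|A) = beta,
    P(B) lies in [alpha*beta, alpha*beta + 1 - alpha]."""
    return (alpha * beta, alpha * beta + 1 - alpha)

def generalized_mp_interval(alphas, beta):
    """Sketch of generalized Modus Ponens: P(A_i) = alpha_i and
    P(B | A_1 & ... & A_n) = beta. Uses the Frechet lower bound
    gamma = max(0, 1 - sum_i (1 - alpha_i)) on the conjunction."""
    gamma = max(0.0, 1 - sum(1 - a for a in alphas))
    return (gamma * beta, gamma * beta + 1 - gamma)

# n = 1 reduces to the two-premise formula:
two = mp_interval(0.9, 0.8)
gen = generalized_mp_interval([0.9], 0.8)
assert all(abs(x - y) < 1e-12 for x, y in zip(two, gen))

# Degradation: adding a premise widens the interval.
lo1, hi1 = generalized_mp_interval([0.9], 0.8)
lo2, hi2 = generalized_mp_interval([0.9, 0.9], 0.8)
assert lo2 <= lo1 and hi2 >= hi1
```

With α = 0.9 and β = 0.8, one premise gives roughly [0.72, 0.82], while two premises of probability 0.9 each widen the bound to roughly [0.64, 0.84], which is the "more premises, less precise conclusions" effect the abstract attributes to Gilio.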


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)