Similar documents
20 matching documents were retrieved.
1.
This paper raises a problem for contrastivist accounts of knowledge. It is argued that contrastivism cannot both provide a modest solution to the sceptical paradox (that is, one on which we have knowledge of a wide range of ordinary empirical propositions whilst failing to know the various anti-sceptical hypotheses entailed by them) and, at the same time, retain a contrastivist version of the closure principle for knowledge.

2.
采用2Wingdings 2MC@2Wingdings 2MC@2混合设计和相关分析法,考察个体的分析性认知风格对其在完成有、无冲突的推理判断任务时的逻辑反应倾向和冲突探查过程的影响。结果表明分析性认知风格不会直接影响被试完成推理任务的逻辑反应倾向性,高、低分析性认知风格倾向组被试在逻辑反应正确率上不存在显著差异; 但对个体的冲突探查过程会有影响,高、低认知风格倾向组被试在反应自信率上存在显著差异,且冲突探查大小与分析性认知风格显著负相关,这一结果表明那些高分析性认知风格倾向的个体在完成冲突任务时,更可能探查到刻板反应与遵从逻辑规则做出的反应之间的冲突。  相似文献   

3.
Feeling we’re biased: Autonomic arousal and reasoning conflict
Human reasoning is often biased by intuitive beliefs. A key question is whether the bias results from a failure to detect that the intuitions conflict with logical considerations or from a failure to discard these tempting intuitions. The present study addressed this unresolved debate by focusing on conflict-related autonomic nervous system modulation during biased reasoning. Participants’ skin conductance responses (SCRs) were monitored while they solved classic syllogisms in which a cued intuitive response could be inconsistent or consistent with the logically correct response. Results indicated that all reasoners showed increased SCRs when solving the inconsistent conflict problems. Experiment 2 validated that this autonomic arousal boost was absent when people were not engaged in an active reasoning task. The presence of a clear autonomic conflict response during reasoning lends credence to the idea that reasoners have a “gut” feeling that signals that their intuitive response is not logically warranted. Supplemental materials for this article may be downloaded from http://cabn.psychonomic-journals.org/content/supplemental.

4.
This paper explores Kornblith’s proposal in Knowledge and its Place in Nature that knowledge is a natural kind that can be elucidated and understood in scientific terms. Central to Kornblith’s development of this proposal is the claim that there is a single category of unreflective knowledge that is studied by cognitive ethologists and is the proper province of epistemology. This claim is challenged on the grounds that even unreflective knowledge in language-using humans reflects forms of logical reasoning that are in principle unavailable to nonlinguistic animals.

5.
Marilyn Ford 《Synthese》2005,146(1-2):71-92
Three studies of human nonmonotonic reasoning are described. The results show that people find such reasoning quite difficult, although being given problems with known subclass-superclass relationships is helpful. The results also show that recognizing differences in the logical strengths of arguments is important for the nonmonotonic problems studied. For some of these problems, specificity (which is traditionally considered paramount in drawing appropriate conclusions) was irrelevant and so should have led to a “can’t tell” response; however, people could give rational conclusions based on differences in the logical consequences of arguments. The same strategy also works for problems where specificity is relevant, suggesting that in fact specificity is not paramount. Finally, results showed that subjects’ success at responding appropriately to nonmonotonic problems involving conflict relies heavily on the ability to appreciate differences in the logical strength of simple, non-conflicting statements.

6.
This paper argues that logical inferentialists should reject multiple-conclusion logics. Logical inferentialism is the position that the meanings of the logical constants are determined by the rules of inference they obey. As such, logical inferentialism requires a proof-theoretic framework within which to operate. However, in order to fulfil its semantic duties, a deductive system has to be suitably connected to our inferential practices. I argue that, contrary to an established tradition, multiple-conclusion systems are ill-suited for this purpose because they fail to provide a ‘natural’ representation of our ordinary modes of inference. Moreover, the two most plausible attempts at bringing multiple conclusions into line with our ordinary forms of reasoning, the disjunctive reading and the bilateralist denial interpretation, are unacceptable by inferentialist standards.

7.
Limitations of working memory are proposed as a major determinant of problem difficulty in the THOG task, a logical reasoning task that uses an exclusive disjunction and requires hypothetico-deductive reasoning. Four experiments with students of mathematics or psychology were used to test the hypotheses that, first, guiding participants' attention facilitates the task and, second, the use of paper and pencil as an external problem representation relieves working memory load. Focusing participants' attention upon a critical aspect of the task does not improve solution rates. Students of mathematics were better than students of psychology, but only if they were allowed to use paper and pencil or to work on the task repeatedly. These results partially support the working memory hypothesis. They point toward the importance of training and practice in relatively simple meta-cognitive skills in logical reasoning. Received: 20 March 2000 / Accepted: 22 January 2001
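The exclusive disjunction at the heart of the THOG task can be made concrete with a short enumeration. The sketch below is a minimal Python illustration, not taken from the paper; the figure set and the premise "the black diamond is a THOG" follow the standard textbook version of the task. It enumerates the experimenter's possible colour-shape hypotheses and classifies the remaining figures.

```python
from itertools import product

# Standard THOG setup: four figures, each a colour plus a shape.
FIGURES = [("black", "diamond"), ("white", "diamond"),
           ("black", "circle"), ("white", "circle")]

def is_thog(figure, hypothesis):
    """A figure is a THOG iff it has exactly one of the chosen colour/shape."""
    colour, shape = figure
    chosen_colour, chosen_shape = hypothesis
    return (colour == chosen_colour) != (shape == chosen_shape)  # exclusive disjunction

# The experimenter secretly wrote down one colour and one shape.
hypotheses = list(product(["black", "white"], ["diamond", "circle"]))

# Given premise: the black diamond is a THOG. Keep only hypotheses consistent with it.
consistent = [h for h in hypotheses if is_thog(("black", "diamond"), h)]

# Classify every other figure across all surviving hypotheses.
for fig in FIGURES[1:]:
    verdicts = {is_thog(fig, h) for h in consistent}
    status = ("definitely a THOG" if verdicts == {True}
              else "definitely not a THOG" if verdicts == {False}
              else "undecidable")
    print(fig, "->", status)
```

Running the sketch reproduces the classic answer: the white circle is definitely a THOG, while the white diamond and black circle definitely are not.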

8.
The paper proposes a revised logic of rights in order to accommodate moral conflict. There are often said to be two rival philosophical accounts of rights with respect to moral conflict. Specificationists about rights insist that rights cannot conflict, since they reflect overall deontic conclusions. Generalists instead argue that rights reflect pro tanto constraints on behaviour. After offering an overview of the debate between generalists and specificationists with respect to rights, I outline the challenge of developing a logic of rights-reasoning that is compatible with generalism. I then proceed to offer a new logical framework, which utilizes a simple non-monotonic logic of practical reasoning. Both generalist and specificationist interpretations of the logic are explored. The revised logic shows that traditional characterizations of the debate between specificationists and generalists obscure other relevant philosophical positions.

9.
10.
艾炎  胡竹菁 《心理科学》2018,(4):869-875
By manipulating the independent variables of consistency (conflict vs. no conflict) and problem type (base-rate problems vs. conjunction problems), this study tested whether biased responses in reasoning and judgment arise from a failure to detect conflict or a failure to inhibit it. The response time and response confidence results support the inhibition-failure account: participants detected the conflict successfully, and biased responses arose because they failed to inhibit the dominant intuition. The conflict detection size results, however, indicate individual differences in conflict detection: some participants detected the conflict between the two processes flawlessly (flawless detection), whereas others did so only loosely (lax detection), and this was moderated by problem type.
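As a rough illustration of the "conflict detection size" measure referred to in entries 2 and 10, the sketch below is a hypothetical Python example; the column names, the confidence scale, and the 0.5-point cut-off for "lax" detection are assumptions for illustration and are not taken from the papers. It computes, per participant, the drop in response confidence on conflict trials relative to no-conflict trials and classifies detectors accordingly.

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial, with a 1-7 confidence rating.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "condition":   ["conflict", "conflict", "no_conflict", "no_conflict"] * 2,
    "confidence":  [3.0, 4.0, 6.5, 6.0, 6.0, 6.5, 6.5, 6.0],
})

# Conflict detection size: mean confidence on no-conflict trials
# minus mean confidence on conflict trials (larger = stronger detection).
mean_conf = trials.pivot_table(index="participant", columns="condition",
                               values="confidence", aggfunc="mean")
mean_conf["detection_size"] = mean_conf["no_conflict"] - mean_conf["conflict"]

# An arbitrary cut-off, for illustration only: a sizeable confidence drop
# counts as clear detection, a negligible drop as "lax" detection.
mean_conf["detector_type"] = mean_conf["detection_size"].apply(
    lambda d: "flawless-like" if d >= 0.5 else "lax")

print(mean_conf[["detection_size", "detector_type"]])
```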

11.
Coherentism in epistemology has long suffered from lack of formal and quantitative explication of the notion of coherence. One might hope that probabilistic accounts of coherence such as those proposed by Lewis, Shogenji, Olsson, Fitelson, and Bovens and Hartmann will finally help solve this problem. This paper shows, however, that those accounts have a serious common problem: the problem of belief individuation. The coherence degree that each of the accounts assigns to an information set (or the verdict it gives as to whether the set is coherent tout court) depends on how beliefs (or propositions) that represent the set are individuated. Indeed, logically equivalent belief sets that represent the same information set can be given drastically different degrees of coherence. This feature clashes with our natural and reasonable expectation that the coherence degree of a belief set does not change unless the believer adds essentially new information to the set or drops old information from it; or, to put it simply, that the believer cannot raise or lower the degree of coherence by purely logical reasoning. None of the accounts in question can adequately deal with coherence once logical inferences get into the picture. Toward the end of the paper, another notion of coherence that takes into account not only the contents but also the origins (or sources) of the relevant beliefs is considered. It is argued that this notion of coherence is of dubious significance, and that it does not help solve the problem of belief individuation.
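The belief-individuation problem described above is easy to see with Shogenji's measure, which divides the probability of the conjunction by the product of the individual probabilities. The sketch below is a small numerical illustration; the probability assignments are invented for the example and are not drawn from the paper. The sets {A, B} and {A ∧ B} carry the same information, yet receive different coherence scores.

```python
from math import prod

def shogenji_coherence(joint_prob, marginal_probs):
    """Shogenji's measure: P(A1 & ... & An) / (P(A1) * ... * P(An))."""
    return joint_prob / prod(marginal_probs)

# Invented toy probabilities: P(A) = 0.6, P(B) = 0.5, P(A & B) = 0.4.
p_a, p_b, p_ab = 0.6, 0.5, 0.4

# Same information, two individuations:
two_beliefs = shogenji_coherence(p_ab, [p_a, p_b])  # belief set {A, B}
one_belief = shogenji_coherence(p_ab, [p_ab])       # belief set {A & B}

print(f"{{A, B}}:     {two_beliefs:.3f}")  # 0.4 / (0.6 * 0.5) = 1.333
print(f"{{A and B}}:  {one_belief:.3f}")   # trivially 1.0 for any single belief
```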

12.
The Effects of Learning Support on Computer-Simulation-Based Discovery Learning
This study examined the effects of interpretative support (IS), aimed at the meaningfulness of discovery activities, experimental support (ES), aimed at their systematic and logical character, and learners' reasoning ability on simulation-based scientific discovery learning. A simulation program on buoyancy was designed and developed. The participants were 80 second-year junior-high students from Beijing No. 14 Middle School, and a 2 (with/without IS) × 2 (with/without ES) × 3 (reasoning ability) mixed design was used. The results showed significant main effects of reasoning ability on the tests of principled knowledge and intuitive understanding and on the quality of the experiments learners designed. IS had significant main effects on the tests of principled knowledge and flexible application. ES interacted significantly with reasoning ability on the principled-knowledge test, and ES had a significant main effect on the quality of the experiments learners designed. These results indicate that the meaningfulness and the systematic, logical character of discovery activities strongly constrain simulation-based discovery learning, and that learning support should be designed for these two aspects.
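For readers who want to see what the analysis of such a 2 × 2 × 3 design looks like in practice, the sketch below is a generic factorial-ANOVA example in Python. The data are randomly generated, the variable names are placeholders, and it treats all three factors as between-subjects, which is a simplification of the mixed design reported above.

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_per_cell = 10  # placeholder cell size

# Placeholder data for a 2 (IS) x 2 (ES) x 3 (reasoning ability) design.
cells = [(is_, es, ab) for is_ in ("IS", "no_IS")
         for es in ("ES", "no_ES")
         for ab in ("low", "mid", "high")]
rows = []
for is_, es, ab in cells:
    scores = rng.normal(loc=50, scale=10, size=n_per_cell)
    rows += [{"IS": is_, "ES": es, "ability": ab, "score": s} for s in scores]
df = pd.DataFrame(rows)

# Full-factorial ANOVA on the (simulated) principled-knowledge scores.
model = ols("score ~ C(IS) * C(ES) * C(ability)", data=df).fit()
print(anova_lm(model, typ=2))
```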

13.
Nicholas Asher 《Topoi》1994,13(1):37-49
A fundamental question in reasoning about change is, what information does a reasoning agent infer about later times from earlier times? I will argue that reasoning about change by an agent is to be modeled in terms of the persistence of the agent's beliefs over time rather than the persistence of truth and that such persistence is explained by pragmatic factors about how agents acquire information from other agents rather than by general principles of persistence about states of the world. AI accounts of persistence have focused on ‘closed world’ examples of change, in which the agent believes that the truth of a proposition is unaltered so long as he or she has no evidence that it has been changed. AI principles of persistence seem plausible in a closed world where one assumes the agent knows everything that is happening. If one drops the assumption of omniscience, however, the analysis of persistence is implausible. To get a good account of persistence and reasoning about change, I argue we should examine ‘open world’ examples of change, in which the agent is ignorant of some of the changes occurring in the world. In open world examples of change, persistence must be formulated, I argue, as a pragmatic principle about the persistence of beliefs. After elaborating my criticisms of current accounts of persistence, I examine how such pragmatic principles fare with the notorious examples of reasoning about action that have collectively characterized the so-called frame problem.

14.
A probability heuristic model (PHM) for syllogistic reasoning is proposed. An informational ordering over quantified statements suggests simple probability based heuristics for syllogistic reasoning. The most important is the "min-heuristic": choose the type of the least informative premise as the type of the conclusion. The rationality of this heuristic is confirmed by an analysis of the probabilistic validity of syllogistic reasoning which treats logical inference as a limiting case of probabilistic inference. A meta-analysis of past experiments reveals close fits with PHM. PHM also compares favorably with alternative accounts, including mental logics, mental models, and deduction as verbal reasoning. Crucially, PHM extends naturally to generalized quantifiers, such as Most and Few, which have not been characterized logically and are, consequently, beyond the scope of current mental logic and mental model theories. Two experiments confirm the novel predictions of PHM when generalized quantifiers are used in syllogistic arguments. PHM suggests that syllogistic reasoning performance may be determined by simple but rational informational strategies justified by probability theory rather than by logic.
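The min-heuristic lends itself to a direct sketch. The snippet below is an illustrative Python rendering based on the abstract's description, not the authors' implementation; the informativeness ordering over quantifier types, from All down to Some-not, is assumed here following the usual presentation of PHM.

```python
# Assumed informativeness ordering over quantified statement types,
# most to least informative: All > Most > Few > Some > None > Some-not.
INFORMATIVENESS = {"All": 6, "Most": 5, "Few": 4, "Some": 3, "None": 2, "Some-not": 1}

def min_heuristic(premise1_type, premise2_type):
    """Predict the conclusion type: that of the least informative premise."""
    return min(premise1_type, premise2_type, key=INFORMATIVENESS.__getitem__)

# Example: an "All" premise combined with a "Some" premise
# yields a "Some" conclusion type.
print(min_heuristic("All", "Some"))   # -> Some
print(min_heuristic("Most", "None"))  # -> None
```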

15.
Jaegwon Kim’s supervenience/exclusion argument attempts to show that non-reductive physicalism is incompatible with mental causation. This influential argument can be seen as relying on the following principle, which I call “the piggyback principle”: If, with respect to an effect, E, an instance of a supervenient property, A, has no causal powers over and above, or in addition to, those had by its supervenience base, B, then the instance of A does not cause E (unless A is identical with B). In their “Epiphenomenalism: The Dos and the Don’ts,” Larry Shapiro and Elliott Sober employ a novel empirical approach to challenge the piggyback principle. Their empirical approach pulls from the experiments of August Weismann regarding the inheritance of acquired characteristics. Through an examination of Weismann’s experiments, Shapiro and Sober extract lessons in reasoning about the epiphenomenalism of a property. And according to these empirically drawn lessons, the piggyback principle is a don’t. My primary aim in this paper is to defend the piggyback principle against Shapiro and Sober’s empirical approach.

16.
17.
Default reasoning occurs whenever the truth of the evidence available to the reasoner does not guarantee the truth of the conclusion being drawn. Despite this, one is entitled to draw the conclusion “by default” on the grounds that we have no information which would make us doubt that the inference should be drawn. It is the type of conclusion we draw in the ordinary world and ordinary situations in which we find ourselves. Formally speaking, ‘nonmonotonic reasoning’ refers to argumentation in which one uses certain information to reach a conclusion, but where it is possible that adding some further information to those very same premises could make one want to retract the original conclusion. It is easily seen that the informal notion of default reasoning manifests a type of nonmonotonic reasoning. Generally speaking, default statements are said to be true about the class of objects they describe, despite the acknowledged existence of “exceptional instances” of the class. In the absence of explicit information that an object is one of the exceptions we are enjoined to apply the default statement to the object. But further information may later tell us that the object is in fact one of the exceptions. So this is one of the points where nonmonotonicity resides in default reasoning. The informal notion has been seen as central to a number of areas of scholarly investigation, and we canvass some of them before turning our attention to its role in AI. It is because ordinary people so cleverly and effortlessly use default reasoning to solve interesting cognitive tasks that nonmonotonic formalisms were introduced into AI, and we argue that this is a form of psychologism, despite the fact that it is not usually recognized as such in AI. We close by mentioning some of the results from our empirical investigations that we believe should be incorporated into nonmonotonic formalisms.
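The retraction behaviour described above can be illustrated in a few lines of code. The sketch below is a deliberately naive Python illustration, not any particular nonmonotonic formalism from the literature: it draws the default conclusion "Tweety flies" from "birds typically fly" and then retracts it when the further premise "Tweety is a penguin" is added.

```python
def concludes_flies(facts):
    """Naive default inference: birds fly by default,
    unless the facts list the individual as a known exception."""
    is_bird = "bird" in facts
    is_exception = "penguin" in facts or "ostrich" in facts
    return is_bird and not is_exception

# With the initial premises, the default conclusion is drawn ...
print(concludes_flies({"bird"}))             # True: Tweety flies, by default

# ... but adding information makes us retract it (nonmonotonicity).
print(concludes_flies({"bird", "penguin"}))  # False: the conclusion is withdrawn
```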

18.
Using the analogical transfer paradigm, the present study investigated the competing explanations of Girotto and Legrenzi (Psychological Research 51: 129–135, 1993) and Griggs, Platt, Newstead, and Jackson (Thinking and Reasoning 4: 1–14, 1998) for facilitation on the SARS version of the THOG problem, a hypothetico-deductive reasoning task. Girotto and Legrenzi argue that facilitation is based on logical analysis of the task [System 2 reasoning in Evans’s (Trends in Cognitive Sciences 7: 454–459, 2003) dual-process account of reasoning] while Griggs et al. maintain that facilitation is due to an attentional heuristic produced by the wording of the problem (System 1 reasoning). If Girotto and Legrenzi are correct, then System 2 reasoning, which is volitional and responsible for deductive reasoning, should be elicited, and participants should comprehend the solution principle of the THOG task and exhibit analogical transfer. However, if Griggs et al. are correct, then System 1 reasoning, which is responsible for heuristic problem solving strategies such as an attentional heuristic, should occur, and participants should not abstract the solution principle and transfer should not occur. Significant facilitation (68 and 82% correct) was only observed for the two SARS source problems, but significant analogical transfer did not occur. This lack of transfer suggests that System 1 reasoning was responsible for the facilitation observed in the SARS problem, supporting Griggs et al.’s attentional heuristic explanation. The present results also underscore the explanatory value of using analogical transfer rather than facilitation as the criterion for problem understanding.

19.
I propose a few epistemological and methodological reflexions to account for intercultural daily communication. These reflexions emerged during a sociological research in Mendoza, Argentina, with Huarpes Indigenous students at the University of Cuyo. I observed that Indigenous people became quasi “ethnographers” of diverse environments. To make intelligible their classmates’ behavior, and to account for their own behavior, Huarpes follow, in diverse environments and interactions, public rules of meaning. The objective of this paper is twofold: (a) to stress the methodological scope of ordinary communication and ordinary reasoning in order to study understanding between people from different groups and categories, and (b) to contest a kind of “pessimist” standpoint in social sciences and philosophy according to which the use of ordinary language reduces possibilities for understanding. Interviews, participant observation in natural situations, and a review of literature about language and understanding are the basis of this paper.

20.
Based on a close study of benchmark examples in default reasoning, such as the Nixon Diamond and the Penguin Principle, this paper provides an in-depth analysis of the basic features of default reasoning. We formalize default inferences based on Modus Ponens for Default Implication, and mark the distinction between “local inferences” (to infer a conclusion from a subset of given premises) and “global inferences” (to infer a conclusion from the entire set of given premises). These conceptual analyses are captured by a formal semantics that is built upon the set-selection function technique. A minimal logic system M of default reasoning that accommodates Modus Ponens for Default Implication and is suitable for local inferences is proposed, and its soundness is proved. __________ Translated from Zhexue Yanjiu 哲学研究 (Philosophical Studies), 2003 (special issue) by Ye Feng
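The distinction between local and global inferences can be seen in the Nixon Diamond mentioned above. The sketch below is a toy Python illustration, not the semantics of system M: it applies defaults separately to two subsets of the premises and then to the whole set. Each subset licenses a conclusion by a naive Modus Ponens for default implication, but the full premise set yields conflicting candidate conclusions and thus no definite verdict.

```python
# Defaults as (antecedent, consequent) pairs, read "antecedent typically implies consequent".
DEFAULTS = [("quaker", "pacifist"), ("republican", "not-pacifist")]

def default_conclusions(facts, defaults):
    """Fire every default whose antecedent holds (naive Modus Ponens for
    default implication), then report the conclusions or flag a conflict."""
    fired = {cons for ante, cons in defaults if ante in facts}
    if "pacifist" in fired and "not-pacifist" in fired:
        return "conflict: no definite conclusion"
    return fired or "no conclusion"

nixon = {"quaker", "republican"}

# Local inferences: each uses only a subset of the premises about Nixon.
print(default_conclusions({"quaker"}, DEFAULTS))      # {'pacifist'}
print(default_conclusions({"republican"}, DEFAULTS))  # {'not-pacifist'}

# Global inference: using the entire premise set exposes the conflict.
print(default_conclusions(nixon, DEFAULTS))           # conflict: no definite conclusion
```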
