Similar Articles
20 similar articles found (search time: 31 ms)
1.
A popular approach to solving the Causal Exclusion Problem is to adopt a counterfactual theory of causation. In this paper, I distinguish three versions of the Causal Exclusion Argument. I argue that the counterfactualist approach can block the first two exclusion arguments, because the Causal Inheritance Principle and the Upward Causation Principle, on which the first and second arguments respectively rest, are problematic from the perspective of the counterfactual account of causation. However, I attempt to show that the counterfactualist approach is unable to refute a sophisticated version (i.e. the third version) of the exclusion argument, because the Downward Causation Principle, a premise of the third exclusion argument, is actually implied by the counterfactual theory of causation. Therefore, even if other theories of causation might help the non-reductive physicalist to solve the exclusion problem, the counterfactual theory of causation cannot.

2.
3.
For over 20 years, Jaegwon Kim’s Causal Exclusion Argument has stood as the major hurdle for non-reductive physicalism. If successful, Kim’s argument would show that the high-level properties posited by non-reductive physicalists must either be identical with lower-level physical properties, or else must be causally inert. The most prominent objection to the Causal Exclusion Argument—the so-called Overdetermination Objection—points out that there are some notions of causation that are left untouched by the argument. If causation is simply counterfactual dependence, for example, then the Causal Exclusion Argument fails. Thus, much of the existing debate turns on the issue of which account of causation is appropriate. In this paper, however, I take a bolder approach and argue that Kim’s preferred version of the Causal Exclusion Argument fails no matter what account one gives of causation. Any notion of causation that is strong enough to support the premises of the argument is too strong to play the role required in the logic of the argument. I also consider a second version of the Causal Exclusion Argument, and suggest that although it may avoid the problems of the first version, it begs the question against a particular form of non-reductive physicalism, namely emergentism.

4.
The argument for modal collapse is partly responsible for the widespread rejection of the so-called Principle of Sufficient Reason (PSR) in recent times. This paper discusses the PSR against the background of the recent debate about grounding and develops principled reasons for rejecting the argument from modal collapse.

5.
The Acquaintance Principle has been the subject of extensive debate in philosophical aesthetics. In one of the most recent developments, it has become popular to claim that some works of conceptual art are counterexamples to it. It is further claimed that this is a genuinely new problem in the sense that it is a problem even for versions of the Acquaintance Principle modified to deal with previous objections. I argue that this is essentially correct; however, the claim as it stands needs some work. I draw attention to, and defend, two assumptions on which the claim rests but which have so far gone unrecognized. I also address an objection that has recently been made to the claim and threatens to raise further complications for it. In doing this, we arrive at a fuller, more robust version of the initial claim.

6.
Gold's [1967. Language identification in the limit. Information and Control, 10, 447-474] celebrated work on learning in the limit has been taken, by many cognitive scientists, to have powerful negative implications for the learnability of language from positive data (i.e., from mere exposure to linguistic input). This provides one of several lines of argument that language acquisition must draw on other sources of information, including innate constraints on learning. We consider an ‘ideal learner’ that applies a Simplicity Principle to the problem of language acquisition. The Simplicity Principle chooses the hypothesis that provides the briefest representation of the available data—here, the data are the linguistic input to the child. The Simplicity Principle allows learning from positive evidence alone, given quite weak assumptions, in apparent contrast to results on language learnability in the limit (e.g., Gold, 1967). These results provide a framework for reconsidering the learnability of various aspects of natural language from positive evidence, which has been at the center of theoretical debate in research on language acquisition and linguistics.
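The Simplicity Principle described here is in effect a minimum description length (MDL) criterion, which a short sketch can make concrete. Everything below (the toy corpus, the two candidate grammars, and the crude description-cost formula) is an illustrative assumption rather than the authors' actual model; the point is only that an over-general grammar pays for its extra expressive power in longer data codes, so positive evidence alone can favour the tighter hypothesis.

```python
import math
from itertools import product

# Toy corpus standing in for the child's linguistic input
# (positive evidence only; no negative examples).
corpus = ["ab", "abab", "ababab", "ab", "abab"]

def uniform_over(strings):
    """Uniform probability model over a finite language."""
    p = 1.0 / len(strings)
    return {s: p for s in strings}

# Two hypothetical candidate grammars, each represented by the finite
# language it generates.  Illustrative stand-ins, not Gold's classes.
any_ab = ["".join(t) for n in range(1, 7) for t in product("ab", repeat=n)]
hypotheses = {
    "over-general: any a/b string": uniform_over(any_ab),
    "tight: (ab)^n only": uniform_over(["ab" * n for n in range(1, 4)]),
}

# Crude description cost for a grammar: log of its language size plus
# a fixed overhead (an assumption made purely for illustration).
def description_length(model):
    return math.log2(len(model)) + 8.0

def data_length(model, data):
    """Shannon code length of the corpus under the model, in bits."""
    total = 0.0
    for s in data:
        if s not in model:            # the grammar cannot generate this datum
            return float("inf")
        total += -math.log2(model[s])
    return total

# Simplicity Principle: choose the hypothesis whose two-part code
# (grammar plus data encoded under the grammar) is briefest.
scores = {name: description_length(m) + data_length(m, corpus)
          for name, m in hypotheses.items()}
print(scores)
print("chosen:", min(scores, key=scores.get))   # the tight grammar wins
```

On this toy corpus the tight grammar wins by a wide margin (roughly 18 bits against 50), which is the MDL analogue of learning from positive evidence alone: no negative examples were needed to penalize the over-general grammar.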

7.
We postulate the Testing Principle: that individuals 'act like statisticians' when they face uncertainty in a decision problem, ranking alternatives to the extent that available evidence allows. The Testing Principle implies that completeness of preferences, rather than the sure-thing principle, is violated in the Ellsberg Paradox. In the experiment, subjects chose between risky and uncertain acts in modified Ellsberg-type urn problems, with sample information about the uncertain urn. Our results show, consistent with the Testing Principle, that the uncertain urn is chosen more often when the sample size is larger, holding constant a measure of ambiguity (the proportion of balls of unknown colour in the urn). The Testing Principle rationalises the Ellsberg Paradox. Behaviour consistent with the principle leads to a reduction in Ellsberg-type violations as the statistical quality of sample information is improved, holding ambiguity constant. The Testing Principle also provides a normative rationale for the Ellsberg Paradox that is consistent with procedural rationality.
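For readers who want the paradox itself on the table: the sketch below checks the classic single-urn Ellsberg setup (30 red balls, 60 black or yellow in unknown proportion), which I use for illustration rather than the modified urns of this experiment. The typical choice pattern cannot be rationalized by any single subjective probability under expected utility, which is why something (on the authors' diagnosis, completeness of preferences) has to give.

```python
# Classic three-colour Ellsberg urn: 30 red balls and 60 balls that are
# black or yellow in unknown proportion.  Typical subjects bet on red
# rather than black, yet bet on black-or-yellow rather than red-or-yellow.
# Under expected utility both preferences are jointly unsatisfiable, as
# this exhaustive grid search over candidate values of P(black) shows.
P_RED = 1.0 / 3.0

def prefers_red_to_black(p_black):
    return P_RED > p_black                     # first choice: the red bet

def prefers_black_yellow(p_black):
    p_yellow = 2.0 / 3.0 - p_black
    # second choice: black-or-yellow beats red-or-yellow,
    # which reduces to p_black > 1/3
    return p_black + p_yellow > P_RED + p_yellow

grid = [q / 1000.0 for q in range(0, 668)]     # candidate P(black) values
consistent = [q for q in grid
              if prefers_red_to_black(q) and prefers_black_yellow(q)]
print(consistent)   # []  (no single probability rationalizes both bets)
```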

8.
The gap/gapless processing debate in the psycholinguistics literature contrasts two processing models: one that assumes the trace-based Government and Binding (or Principles and Parameters) Grammar and the (augmented) Active Filler Strategy and one that assumes the traceless Dependency Categorial Grammar and the Principle of Dependency Formation. This paper reports on an experiment that found new evidence against the gapless/traceless model, considers why such evidence was not found in previous studies, and explores whether a parser that combines a (partially) traceless grammar and the augmented Active Filler Strategy can explain the current finding.

9.
Peter J. Lewis, Synthese, 2010, 175(3): 369-382
All parties to the Sleeping Beauty debate agree that it shows that some cherished principle of rationality has to go. Thirders think that it is Conditionalization and Reflection that must be given up or modified; halfers think that it is the Principal Principle. I offer an analysis of the Sleeping Beauty puzzle that allows us to retain all three principles. In brief, I argue that Sleeping Beauty’s credence in the uncentered proposition that the coin came up heads should be 1/2, but her credence in the centered proposition that the coin came up heads and it is Monday should be 1/3. I trace the source of the earlier mistakes to an unquestioned assumption in the debate, namely that an uncentered proposition is just a special kind of centered proposition. I argue that the falsity of this assumption is the real lesson of the Sleeping Beauty case.
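A small simulation makes the two numbers vivid. This is a sketch of long-run frequencies only, under the standard protocol (one awakening on heads, two on tails); whether those frequencies settle the credence questions is of course exactly what the philosophical debate is about.

```python
import random

# Frequencies corresponding to the uncentered proposition (heads, per
# experimental run) and the centered proposition (heads-and-Monday,
# per awakening event).
random.seed(0)
runs = 100_000
heads_runs = 0
awakenings = []                      # one record per awakening: (coin, day)

for _ in range(runs):
    coin = random.choice(["heads", "tails"])
    if coin == "heads":
        heads_runs += 1
        awakenings.append(("heads", "Monday"))      # woken once
    else:
        awakenings.append(("tails", "Monday"))      # woken twice
        awakenings.append(("tails", "Tuesday"))

print(heads_runs / runs)                            # about 1/2
heads_monday = sum(a == ("heads", "Monday") for a in awakenings)
print(heads_monday / len(awakenings))               # about 1/3
```

Per run of the experiment heads occurs about half the time, matching Lewis's 1/2 for the uncentered proposition; among awakening events, heads-and-Monday occurs about a third of the time, matching his 1/3 for the centered one.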

10.
Does knowledge depend in any interesting way on our practical interests? This is the central question in the pragmatic encroachment debate. Pragmatists defend the affirmative answer to this question while purists defend the negative answer. The literature contains two kinds of arguments for pragmatism: principle-based arguments and case-based arguments. Principle-based arguments derive pragmatism from principles that connect knowledge to practical interests. Case-based arguments rely on intuitions about cases that differ with respect to practical interests. I argue that there are insurmountable problems for both kinds of arguments, and that it is therefore unclear what motivates pragmatism.

11.
Nick Bostrom, Synthese, 2001, 127(3): 359-387
The Doomsday argument purports to show that the risk of the human species going extinct soon has been systematically underestimated. This argument has something in common with controversial forms of reasoning in other areas, including: game theoretic problems with imperfect recall, the methodology of cosmology, the epistemology of indexical belief, and the debate over so-called fine-tuning arguments for the design hypothesis. The common denominator is a certain premiss: the Self-Sampling Assumption. We present two strands of argument in favor of this assumption. Through a series of thought experiments we then investigate some bizarre prima facie consequences – backward causation, psychic powers, and an apparent conflict with the Principal Principle.
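A toy calculation makes the role of the Self-Sampling Assumption explicit. The two hypotheses, the 50/50 prior, and the birth rank below are illustrative numbers of the kind used in informal presentations of the argument, not figures taken from the paper.

```python
# Toy Doomsday calculation under the Self-Sampling Assumption (SSA):
# treat your birth rank as a uniform random draw from everyone who will
# ever live.  All numbers below are illustrative assumptions.
N_SHORT = 200e9       # "doom soon": 200 billion humans in total
N_LONG = 200e12       # "doom late": 200 trillion humans in total
PRIOR_SHORT = 0.5     # even prior odds before considering birth rank
rank = 60e9           # roughly the birth rank of someone alive today

# SSA likelihood of observing a given birth rank: 1/N when rank <= N.
def likelihood(rank, n_total):
    return 1.0 / n_total if rank <= n_total else 0.0

num = PRIOR_SHORT * likelihood(rank, N_SHORT)
den = num + (1.0 - PRIOR_SHORT) * likelihood(rank, N_LONG)
print(num / den)      # about 0.999: posterior shifts sharply toward doom soon
```

The shift from a 0.5 prior to a posterior near 0.999 is the systematic underestimate the argument alleges; everything turns on treating one's birth rank as a random sample, which is precisely what the Self-Sampling Assumption asserts.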

12.
Sir Karl Popper is one of the few authors to have discussed the reduction of chemistry. His approach consists of what I term naturalistic reduction, which I suggest bears close similarities to the way in which scientists regard reduction. The present article aims to build on Popper's insights into the nature of reduction in science and more specifically to suggest an approach to characterizing a specific sense of the notion of approximate reduction in the context of chemistry. In the course of the discussion, one of Popper's better known passages on the reduction of chemistry is analysed in some detail.

13.
Metaphysical rationalism, the doctrine which affirms the Principle of Sufficient Reason (the PSR), is out of favor today. The best argument against it is that it appears to lead to necessitarianism, the claim that all truths are necessarily true. Whatever the intuitive appeal of the PSR, the intuitive appeal of the claim that things could have been otherwise is greater. This problem did not go unnoticed by the great metaphysical rationalists Spinoza and Leibniz. Spinoza’s response was to embrace necessitarianism. Leibniz’s response was to argue that, despite appearances, rationalism does not lead to necessitarianism. This paper examines the debate between these two rationalists and concludes that Leibniz has persuasive grounds for his opinion. This has significant implications both for the plausibility of the PSR and for our understanding of modality.

14.
The complexity of chromium chemistry makes it an ideal example of how the Principle of Expediency, first articulated by sanitary pioneer Earle Phelps, can be used in setting standards. Expediency, defined by Phelps as “the attempt to reduce the numerical measure of probable harm, or the logical measure of existing hazard, to the lowest level that is practicable and feasible within the limitations of financial resources and engineering skill”, can take on negative connotations unless subject to ethical guidance. In this paper we argue that without ethical principles as a rubric for negotiating environmental regulations, communities run the risk of slipping from the Principle of Expediency as defined by Phelps to the alternative usage of expediency, meaning that which does not reflect ethical consideration or concern beyond self-serving interest. Three ethical ideals—justice, mercy and humility—are suggested as values to be considered while resolving regulatory issues related to environmental protection. The Principle of Expediency serves as a working principle, but not as a rigid algorithm, for setting regulatory limits for environmental concentrations of waste products like chromium. This paper is based on a dissertation submitted in partial fulfillment of the PhD degree by Lauren Bartlett, Duke University, 1997. An earlier version of this paper was presented at a mini-conference, Practicing and Teaching Ethics in Engineering and Computing, held during the Sixth Annual Meeting of the Association for Practical and Professional Ethics, Washington, D.C., March 8–9, 1997. This paper is one of a series edited by Michael C. Loui. See Volume 3, No. 4, 1997 for other papers in this series.

15.
A long-lasting debate in the field of implicit learning is whether participants can learn without acquiring conscious knowledge. One crucial problem is that no clear criterion exists for identifying participants who possess explicit knowledge. Here, we propose a method for diagnosing, during a serial reaction time task, those participants who acquire conscious knowledge. We first validated this method by using Stroop-like material during training. Then we assessed participants’ knowledge with the Inclusion/Exclusion task (Experiment 1) and the wagering task (Experiment 2). Both experiments confirmed that for participants diagnosed as having acquired conscious knowledge about the underlying sequence the Stroop congruency effect disappeared, whereas for participants not diagnosed as possessing conscious knowledge it only slightly decreased. In addition, both experiments revealed that only participants diagnosed as conscious were able to strategically use their acquired knowledge. Thus, our method makes it possible to reliably distinguish between participants with and without conscious knowledge.

16.
This paper responds to, and extends, the debate between Gelade, Wall, Symon and Hodgkinson in JOOP. In concluding that JOOP is fulfilling its remit for robust information exchange between research and practice, four lines of argument are proposed: (i) the Principle of Scientific Replication warrants full details of study methods being routinely published, (ii) any divide is reflective of a perfectly natural distance between the two wings of the discipline and is not necessarily harmful as long as sufficient bridging mechanisms exist, (iii) several strategic-level bridging mechanisms do exist but need to be better utilized and (iv) as JOOP will be unable to be all things to all readers, its most suitable niche remains as a scientific outlet for pragmatic research in IWO psychology internationally.

17.
This paper reflects on a critique of cosmopolitanism mounted by Tom Campbell, who argues that cosmopolitans place undue stress on the issue of global justice. Campbell argues that aid for the impoverished needy in the third world, for example, should be given on the Principle of Humanity rather than on the Principle of Justice. This line of thought is also pursued by ‘Liberal Nationalists’ like Yael Tamir and David Miller. Thomas Nagel makes a similar distinction and questions whether the ideal of justice can even be meaningfully applied on a global scale. The paper explores whether the distinction between the Principle of Humanity and the Principle of Justice might be a false dichotomy in that both principles could be involved in humanitarian assistance. It will suggest that both principles might be grounded in an ethics of caring and that the ethics of caring cannot be so sharply distinguished from the discourse of justice and of rights. As a result, the Principle of Humanity and the Principle of Justice cannot be so sharply distinguished either. It is because we care about others as human beings (Principle of Humanity) that we pursue justice for them (Principle of Justice) and the alleviation of their avoidable suffering.

18.
If we are to constrain our place in the world, two principles are often appealed to in science. According to the Copernican Principle, we do not occupy a privileged position within the Universe. The Cosmological Principle, on the other hand, says that our observations would be roughly the same if we were located at any other place in the Universe. In our paper we analyze these principles from a logical and philosophical point of view. We show how they are related, how they can be supported, and what use is made of them. Our main results are: 1. There is a logical gap between both principles insofar as the Cosmological Principle is significantly stronger than the Copernican Principle. 2. A step that is often taken to establish the Cosmological Principle on the basis of the Copernican Principle and observations is not incontestable as it stands, but it can be supplemented with a different argument. 3. The Cosmological Principle might be crucial for cosmology to the extent that it is not supported by empirical evidence.

19.
The average net income of physicians in the USA is more than four times the average net income of people working in all domestic industries in the USA. When critics suggest that physicians make too much money, defenders typically appeal to the following four prominent principles of economic justice: Aristotle's Income Principle, the Free Market Principle, the Utilitarian Income Principle, and Rawls' Difference Principle. I shall show that no matter which of these four principles is assumed, the present high incomes of physicians cannot be defended.

20.
Two compelling principles, the Reasonable Range Principle and the Preservation of Irrelevant Evidence Principle, state necessary conditions that any response to peer disagreement ought to satisfy. The Reasonable Range Principle maintains that a resolution to a peer disagreement should not fall outside the range of views expressed by the peers in their dispute, whereas the Preservation of Irrelevant Evidence (PIE) Principle maintains that a resolution strategy should be able to preserve unanimous judgments of evidential irrelevance among the peers. No standard Bayesian resolution strategy satisfies the PIE Principle, however, and we give a loss-aversion argument in support of PIE and against Bayes. The theory of imprecise probability allows one to satisfy both principles, and we introduce the notion of a set-based credal judgment to frame and address a range of subtle issues that arise in peer disagreements.
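A small numeric example (my own, with assumed credences, not one from the paper) shows the kind of PIE failure at stake: straight averaging of two peers' credence functions, a representative Bayesian-style resolution, destroys a unanimous judgment of evidential irrelevance that a set-based credal resolution trivially preserves.

```python
# Two peers each treat propositions X and Y as independent, so each
# judges X evidentially irrelevant to Y.  Averaging their credence
# functions breaks that unanimous judgment; keeping the *set* of both
# functions (a set-based credal state) preserves it in every member.
peers = [
    {"X": 0.2, "Y": 0.2},   # peer A: P(X and Y) = P(X)P(Y) = 0.04
    {"X": 0.8, "Y": 0.8},   # peer B: P(X and Y) = P(X)P(Y) = 0.64
]

def joint(p):
    """Each peer's joint credence in X-and-Y under independence."""
    return p["X"] * p["Y"]

avg_x = sum(p["X"] for p in peers) / 2          # 0.5
avg_y = sum(p["Y"] for p in peers) / 2          # 0.5
avg_joint = sum(joint(p) for p in peers) / 2    # (0.04 + 0.64) / 2 = 0.34

print(avg_joint, avg_x * avg_y)   # 0.34 vs 0.25: X now "relevant" to Y
# Under the averaged credence, P(Y | X) = 0.34 / 0.5 = 0.68, not 0.5:
# the pooled agent treats X as evidence for Y, violating PIE, even
# though both peers unanimously judged X irrelevant to Y.
```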
