Similar Articles (20 results)
1.
Ausonio Marras 《Synthese》2006,151(3):561-569
In this paper I examine Jaegwon Kim’s view that emergent properties are irreducible to the base properties on which they supervene. Kim’s view assumes a model of ‘functional reduction’ which he claims to be substantially different from the traditional Nagelian model. I dispute this claim and argue that the two models are only superficially different, and that on either model, properly understood, it is possible to draw a distinction between a property’s being reductively identifiable with its base property and a property’s being reductively explainable in terms of it. I propose that we should take as the distinguishing feature of emergent properties that they be truly novel properties, i.e., ontologically distinct from the ‘base’ properties which they supervene on. This only requires that emergent properties cannot be reductively identified with their base properties, not that they cannot be reductively explained in terms of them. On this conception the set of emergent properties may well include mental properties as conceived by nonreductive physicalists.

2.
The paper sets out a new strategy for theory reduction by means of functional sub‐types. This strategy is intended to get around the multiple realization objection. We use Kim's argument for token identity (ontological reductionism) based on the causal exclusion problem as starting point. We then extend ontological reductionism to epistemological reductionism (theory reduction). We show how one can distinguish within any functional type between functional sub‐types. Each of these sub‐types is coextensive with one type of realizer. By this means, a conservative theory reduction is in principle possible, despite multiple realization. We link this account with Nagelian reduction, as well as with Kim's functional reduction.

3.
4.
According to Putnam the reference of natural kind terms is fixed by the world, at least partly; whether two things belong to the same kind depends on whether they obey the same objective laws. We show that Putnam's criterion of substance identity only “works” if we read “objective laws” as “OBJECTIVE LAWS”. Moreover, at least some of the laws of some of the special sciences have to be included. But what we consider to be good special sciences and what not depends upon our values. Hence, “objective laws” cannot be read as “OBJECTIVE LAWS”. It follows that the reference of natural kind terms cannot be fixed by the world, not even partly. The final conclusion applies to a variety of realisms. This revised version was published online in August 2006 with corrections to the Cover Date.

5.
In the on-going debate between scientific realism and its various opponents, a crucial role in challenging the realist claim that success of scientific theories must be attributed to their approximate truth is played by the so-called pessimistic meta-induction: Arguing that the history of science boils down to a succession of theories which, though successful at a time, were eventually discarded only to be replaced by alternative theories which in turn met with the same fate, it purports to show that the empirical success of scientific theories cannot have any bearing on claims about their truth-likeness. Yet, the same historical record suggests a possible strategy to counter this argument. Far from being a barren wasteland, it contains cases which point to the need of adopting a selective attitude when passing judgments about the truth-likeness of theories. In this vein, Psillos has proposed the so-called divide et impera move. It consists in pointing out that responsibility for the success of a theory should be attributed to an indispensable core element in it, acting in unison with other elements reflecting the concrete historical conditions under which the theory was formulated. In what follows it is argued that the discovery of Kepler’s first two laws and the transition to Newtonian mechanics provide a notable case to exhibit the divide et impera move. All the more so, since, as will be shown, Kepler himself employs a variant of this move, a fact that sheds light on the philosophical implications of his theories. As a result, it will be argued that the appropriate selective attitude towards theories becomes solidly warranted if wedded to the diachronic element in the process of a theory’s development.

6.
Kim On Reduction     
A. Marras 《Erkenntnis》2002,57(2):231-257
In Mind in a Physical World (1998), Jaegwon Kim has recently extended his ongoing critique of `non-reductive materialist' positions in philosophy of mind by arguing that Nagel's model of reduction is the wrong paradigm in terms of which to contest the issue of psychophysical reduction, and that an altogether different model of scientific reduction – a functional model of reduction – is needed. In this paper I argue, first, that Kim's conception of the Nagelian model is substantially impoverished and potentially misleading; second, that his own functional model is problematic in several respects; and, third, that the basic idea underlying his functional model can well be accommodated within a properly reinterpreted Nagelian model. I conclude with some reflections on the issue of psychophysical reduction.

7.
Nancy Cartwright (1983, 1999) argues that (1) the fundamental laws of physics are true when and only when appropriate ceteris paribus modifiers are attached and that (2) ceteris paribus modifiers describe conditions that are almost never satisfied. She concludes that when the fundamental laws of physics are true, they don't apply in the real world, but only in highly idealized counterfactual situations. In this paper, we argue that (1) and (2) together with an assumption about contraposition entail the opposite conclusion – that the fundamental laws of physics do apply in the real world. Cartwright extracts from her thesis about the inapplicability of fundamental laws the conclusion that they cannot figure in covering-law explanations. We construct a different argument for a related conclusion – that forward-directed idealized dynamical laws cannot provide covering-law explanations that are causal. This argument is neutral on whether the assumption about contraposition is true. We then discuss Cartwright's simulacrum account of explanation, which seeks to describe how idealized laws can be explanatory. This revised version was published online in July 2006 with corrections to the Cover Date.

8.
Toby Handfield 《Synthese》2008,160(2):297-308
This paper develops two ideas with respect to dispositional properties: (1) Adapting a suggestion of Sungho Choi, it appears the conceptual distinction between dispositional and categorical properties can be drawn in terms of susceptibility to finks and antidotes. Dispositional properties, unlike categorical ones, are not susceptible to intrinsic finks, nor are they remediable by intrinsic antidotes. (2) If correct, this suggests the possibility that some dispositions, namely those which lack any causal basis, may be insusceptible to any fink or antidote. Since finks and antidotes are a major obstacle to a conditional analysis of dispositions, these unfinkable dispositions may be successfully analysed by the conditional analysis. This result is of importance for those who think that the fundamental properties might be dispositions which lack any distinct causal basis, because it suggests that these properties, if they exist, can be analysed by simple conditionals and that they will not be subject to ceteris paribus laws.

9.
I outline and motivate a way of implementing a closest world theory of indicatives, appealing to Stalnaker’s framework of open conversational possibilities. Stalnakerian conversational dynamics helps us resolve two outstanding puzzles for such a theory of indicative conditionals. The first puzzle, concerning so-called ‘reverse Sobel sequences’, can be resolved by conversational dynamics in a theory-neutral way: the explanation works as much for Lewisian counterfactuals as for the account of indicatives developed here. Resolving the second puzzle, by contrast, relies on the interplay between the particular theory of indicative conditionals developed here and Stalnakerian dynamics. The upshot is an attractive resolution of the so-called “Gibbard phenomenon” for indicative conditionals.

10.
Randomized controlled trials (RCTs) are widely taken as the gold standard for establishing causal conclusions. Ideally conducted, they ensure that the treatment ‘causes’ the outcome—in the experiment. But where else? This is the venerable question of external validity. I point out that the question comes in two importantly different forms: Is the specific causal conclusion warranted by the experiment true in a target situation? What will be the result of implementing the treatment there? This paper explains how the probabilistic theory of causality implies that RCTs can establish causal conclusions and thereby provides an account of what exactly that causal conclusion is. Clarifying the exact form of the conclusion shows just what is necessary for it to hold in a new setting and also how much more is needed to see what the actual outcome would be there were the treatment implemented.
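The in-experiment causal conclusion an ideal RCT licenses can be illustrated with a toy simulation; the Gaussian baseline, effect size, and function names below are illustrative assumptions, not from the paper:

```python
import random

def rct(effect, n=10_000, seed=1):
    """Simulate an idealized RCT: randomization balances unobserved
    heterogeneity across arms, so the treatment/control difference in
    means estimates the treatment effect in this experimental
    population (and only there, absent further assumptions)."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        baseline = rng.gauss(0, 1)  # unobserved individual variation
        if rng.random() < 0.5:
            treated.append(baseline + effect)
        else:
            control.append(baseline)
    return sum(treated) / len(treated) - sum(control) / len(control)

est = rct(effect=2.0)
print(round(est, 2))  # close to the true in-experiment effect of 2.0
```

Note that nothing in the simulation warrants exporting `est` to a target population with a different baseline distribution, which is exactly the external-validity gap the abstract describes.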

11.
Herman Dooyeweerd (1985) argued that among the modalities making up the fabric of reality a specifically economic one is to be found. The aim of the present paper is to discuss the texture of such a modality and how it both differentiates and intertwines with others. For an updated brief, albeit cogent and analytically lucid presentation of the Law Framework ontology, see Clouser (2009). Dooyeweerd’s view entails that the proper object of economics is irreducible to that of other disciplines, but a non-reductionist view of the object of economics presupposes that the nuclear meaning of that discipline has been clearly delimited: this is required in order to determine its nature and separate identity as a scientific discipline. By ‘the nuclear meaning of a discipline’ I understand a pre-theoretical delimitation of its field of research, such as that of the field of physics, characterized by the laws governing force and energy. Within one and the same field there may be many theories, theories competing to explain the same phenomena, or dealing with phenomena so different that it is nearly impossible to trace conceptual connections among them. This last situation calls for a unified-field theory. In the second section of this paper I will attempt to defend a rather commonly accepted definition of the field of economics that sees this discipline as a science of choice. In the third I will show how the analytical conception it involves can be naturally complemented with a classificatory one. According to a classificatory conception, the aggregated social-level phenomena, patterns and regularities economic theories usually deal with, are economic in that sense, even though they are not prima facie cases of individual behavior, or are unintended consequences of aggregated individual choices. In the fourth I will discuss the meaning of the most general, supra-arbitrary economic laws—the modal laws of economics. 
In the final section I will offer a non-reductionist view of economics that nevertheless takes into account its intertwining with other spheres.

12.
This paper discusses Husserl’s views on physical theories in the first volume of his Logical Investigations, and compares them with those of his contemporaries Pierre Duhem and Henri Poincaré. Poincaré’s views serve as a bridge to a discussion of Husserl’s almost unknown views on physical geometry from about 1890 on, which in comparison even with Poincaré’s—not to say Frege’s—or almost any other philosopher of his time, represented a rupture with the philosophical tradition and were much more in tune with the physical geometry underlying the Einstein-Hilbert general theory of relativity developed more than two decades later.

13.
Max Kistler 《Synthese》2006,151(3):347-354
I analyse Rueger’s application of Kim’s model of functional reduction to the relation between the thermal conductivities of metal bars at macroscopic and atomic scales. 1) I show that it is a misunderstanding to accuse the functional reduction model of not accounting for the fact that there are causal powers at the micro-level which have no equivalent at the macro-level. The model not only allows but requires that the causal powers by virtue of which a functional predicate is defined are only a subset of the causal powers of the properties filling the functional specification. 2) The fact that the micro-equation does not converge to the macro-equation in general but only under the constraint of a “solvability condition” does not show that reduction is impossible, as Rueger claims, but only that reduction requires inter-level constraints. 3) Rueger tries to analyse inter-level reduction with the conceptual means of intra-level reduction. This threatens the coherence of his analysis, given that it makes no sense to ascribe macroproperties such as thermal conductivity to entities at the atomic level. Ignoring the distinction between these two senses of “reduction” is especially confusing because they have opposite directions: in intra-level reduction, the more detailed account reduces to the less detailed one, whereas in inter-level reduction, the less detailed theory is reduced to the more detailed one. 4) Finally I criticize Rueger’s way of using Wimsatt’s criteria for emergence in terms of non-aggregativity to construct a concept of synchronic emergence. It is wrong to require, over and above non-aggregativity, irreducibility as a criterion for emergence.

14.
We examine some assumptions about the nature of ‘levels of reality’ in the light of examples drawn from physics. Three central assumptions of the standard view of such levels (for instance, Oppenheim and Putnam 1958) are (i) that levels are populated by entities of varying complexity, (ii) that there is a unique hierarchy of levels, ranging from the very small to the very large, and (iii) that the inhabitants of adjacent levels are related by the parthood relation. Using examples from physics, we argue that it is more natural to view the inhabitants of levels as the behaviors of entities, rather than entities themselves. This suggests an account of reduction between levels, according to which one behavior reduces to another if the two are related by an appropriate limit relation. By considering cases where such inter-level reduction fails, we show that the hierarchy of behaviors differs in several respects from the standard hierarchy of entities. In particular, while on the standard view, lower-level entities are ‘micro’ parts of higher-level entities, on our view, a system’s macro-level behavior can be seen as a (‘non-spatial’) part of its micro-level behavior. We argue that this second hierarchy is not really in conflict with the standard view and that it better suits examples of explanation in science.

15.
Dan Mcarthur 《Synthese》2006,151(2):233-255
In this paper I argue against Nancy Cartwright’s claim that we ought to abandon what she calls “fundamentalism” about the laws of nature and adopt instead her “dappled world” hypothesis. According to Cartwright we ought to abandon the notion that fundamental laws (even potentially) apply universally; instead we should consider the law-like statements of science, including the laws of fundamental physics, to apply in highly qualified ways within narrow, non-overlapping and ontologically diverse domains. For Cartwright, “laws” are just locally applicable refinements of a more open-ended concept of capacities. By providing a critique of the dappled world approach’s central notion of open-ended capacities and substituting this concept with an account of properties drawn from recent writing on the subject of structural realism I show that a form of fundamentalism is viable. I proceed from this conclusion to show that this form of fundamentalism provides a superior reading of case studies, such as the effective field theory program (EFT) in quantum field theory, than the “dappled world” view. The case study of the EFT program demonstrates that ontological variability between theoretical domains can be accounted for without altogether abandoning fundamentalism or adopting Cartwright’s more implausible theses.

16.
We introduce two abstract, causal schemata used during causal learning. (1) Tolerance is when an effect diminishes over time, as an entity is repeatedly exposed to the cause (e.g., a person becoming tolerant to caffeine). (2) Sensitization is when an effect intensifies over time, as an entity is repeatedly exposed to the cause (e.g., an antidepressant becoming more effective through repeated use). In Experiment 1, participants observed either of these cause-effect data patterns unfolding over time and exhibiting the tolerance or sensitization schemata. Participants inferred stronger causal efficacy and made more confident and more extreme predictions about novel cases than in a condition with the same data appearing in a random order over time. In Experiment 2, the same tolerance/sensitization scenarios occurred either within one entity or across many entities. In the many-entity conditions, when the schemata were violated, participants made much weaker inferences. Implications for causal learning are discussed.
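The two schemata describe monotone data patterns over repeated exposures, and the control condition presents the same values in random order. A minimal sketch of the stimulus structure; the decay and growth rates are invented for illustration:

```python
import random

def tolerance(trials, start=1.0, decay=0.8):
    """Effect diminishes with each exposure (e.g., caffeine tolerance)."""
    return [start * decay**t for t in range(trials)]

def sensitization(trials, start=0.2, growth=1.25):
    """Effect intensifies with each exposure (e.g., an antidepressant)."""
    return [start * growth**t for t in range(trials)]

tol = tolerance(5)
sen = sensitization(5)

# The schema is carried entirely by the ordering: shuffling the same
# values (the paper's random-order condition) destroys the pattern
# while leaving the data identical as a multiset.
shuffled = tol[:]
random.seed(0)
random.shuffle(shuffled)

assert tol == sorted(tol, reverse=True)  # monotone decrease
assert sen == sorted(sen)                # monotone increase
assert sorted(shuffled) == sorted(tol)   # same data, different order
```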

17.
This paper presents an attempt to integrate theories of causal processes—of the kind developed by Wesley Salmon and Phil Dowe—into a theory of causal models using Bayesian networks. We suggest that arcs in causal models must correspond to possible causal processes. Moreover, we suggest that when processes are rendered physically impossible by what occurs on distinct paths, the original model must be restricted by removing the relevant arc. These two techniques suffice to explain cases of late pre-emption and other cases that have proved problematic for causal models.
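The arc-removal idea can be sketched as a toy operation on a directed graph; the late-pre-emption variable names below (two throws, one shattering) are illustrative, not the authors':

```python
# Toy causal model as a set of directed arcs. On the authors' proposal,
# an arc is admissible only if a causal process (in Salmon/Dowe's sense)
# could connect the two variables; when events on a distinct path make
# that process physically impossible, the arc is removed.

arcs = {
    ("main_throw", "bottle_shatters"),
    ("backup_throw", "bottle_shatters"),
}

def restrict(arcs, impossible):
    """Drop arcs whose underlying causal process has been ruled out."""
    return {a for a in arcs if a not in impossible}

# Late pre-emption: the main throw shatters the bottle first, so no
# process can any longer connect the backup throw to the shattering.
restricted = restrict(arcs, {("backup_throw", "bottle_shatters")})
assert restricted == {("main_throw", "bottle_shatters")}
```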

18.
Opponents of ceteris paribus laws are apt to complain that the laws are vague and untestable. Indeed, claims to this effect are made by Earman, Roberts and Smith in this volume. I argue that these kinds of claims rely on too narrow a view about what kinds of concepts we can and do regularly use in successful sciences and on too optimistic a view about the extent of application of even our most successful non-ceteris paribus laws. When it comes to testing, we test ceteris paribus laws in exactly the same way that we test laws without the ceteris paribus antecedent. But at least when the ceteris paribus antecedent is there we have an explicit acknowledgment of important procedures we must take in the design of the experiments – i.e., procedures to control for “all interferences”, even those we cannot identify under the concepts of any known theory. This revised version was published online in July 2006 with corrections to the Cover Date.

19.
In this paper, we make a preliminary attempt to approach the phenomenon of prospective memory (PM) from the point of view of Haken’s theory of synergetics, which refers to complex, self-organizing systems in general, and to brain functioning and cognition in particular. In the following, we consider one form of PM only: the so-called event- or cue-dependent PM. We first interpret cue-dependent PM in terms of synergetics and then apply the mathematical formalism of synergetics.

20.
Following the September 2001 terrorist attacks on the United States, much support for torture interrogation of terrorists has emerged in the public forum, largely based on the “ticking bomb” scenario. Although deontological and virtue ethics provide incisive arguments against torture, they do not speak directly to scientists and government officials responsible for national security in a utilitarian framework. Drawing from criminology, organizational theory, social psychology, the historical record, and my interviews with military professionals, I assess the potential of an official U.S. program of torture interrogation from a practical perspective. The central element of program design is a sound causal model relating input to output. I explore three principal models of how torture interrogation leads to truth: the animal instinct model, the cognitive failure model, and the data processing model. These models show why torture interrogation fails overall as a counterterrorist tactic. They also expose the processes that lead from a precision torture interrogation program to breakdowns in key institutions—health care, biomedical research, police, judiciary, and military. The breakdowns evolve from institutional dynamics that are independent of the original moral rationale. The counterargument, of course, is that in a society destroyed by terrorism there will be nothing to repair. That is why the actual causal mechanism of torture interrogation in curtailing terrorism must be elucidated by utilitarians rather than presumed.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)