Similar Documents
20 similar documents found (search time: 31 ms)
1.
The inference from determinism to predictability, though intuitively plausible, needs to be qualified in an important respect. We need to distinguish between two different kinds of predictability. On the one hand, determinism implies external predictability, that is, the possibility for an external observer, not part of the universe, to predict, in principle, all future states of the universe. On the other hand, embedded predictability, that is, the possibility for a subsystem embedded in the universe to make such predictions, does not obtain in a deterministic universe. By revitalizing an older result—the paradox of predictability—we demonstrate that, even in a deterministic universe, there are fundamental, non-epistemic limitations on the ability of one subsystem embedded in the universe to predict the future behaviour of other subsystems embedded in the same universe. As an explanation, we put forward the hypothesis that these limitations arise because the predictions themselves are physical events that are part of the law-like causal chain of events in the deterministic universe. While the limitations on embedded predictability cannot in any direct way provide evidence of free human agency, we conjecture that, even in a deterministic universe, human agents have take-it-or-leave-it control over revealed predictions of their future behaviour.

2.
The goal of behavioral neuroscience is to map psychological concepts onto physiological and anatomical concepts and vice versa. The present paper reflects on some of the hidden obstacles that have to be overcome in order to find unique psychophysiological relationships. These include: (1) the different status of the concepts defined in the two domains (ontological subjectivity in psychology, ontological objectivity in physiology); (2) the distinct hierarchical levels to which concepts from the two domains may belong; (3) the ambiguity of concepts, which—due to limited measurement resolution or definitional shortcomings—sometimes fail to pick out unique states or processes; and (4) ignored context dependencies. Moreover, it is argued that, given the gigantic number of states and state changes possible in a nervous system, it seems unlikely that neuroscience can provide exact causal explanations and predictions of behavior. Rather, as in statistical thermodynamics, the transition from the microlevel to the macrolevel of explanation is possible only with probabilistic uncertainty.

3.
Accounts of ontic explanation have often been devised so as to provide an understanding of mechanism and of causation. Ontic accounts differ quite radically in their ontologies, and one of the latest additions to this tradition, proposed by Peter Machamer, Lindley Darden and Carl Craver, reintroduces the concept of activity. In this paper I ask whether this influential, activity-based account of mechanisms is viable as an ontic account. I focus on polygenic scenarios—scenarios in which the causal truths depend on more than one cause. The importance of polygenic causation was noticed early on by Mill (1893). It has since been shown to be a problem both for causal-law approaches to causation (Cartwright 1983) and for accounts of causation cast in terms of capacities (Dupré 1993; Glennan 1997, pp. 605–626). However, whereas mechanistic accounts seem attractive precisely because they promise to handle complicated causal scenarios, polygenic causation needs to be examined more thoroughly in the emerging literature on activity-based mechanisms. The activity-based account proposed in Machamer et al. (2000, pp. 1–25) is problematic as an ontic account, I will argue. It seems necessary to ask, of any ontic account, how well it performs in causal situations where—at the explanandum level of the mechanism—no activity occurs. In addition, it should be asked how well the activity-based account performs in situations where there are too few activities around to match the polygenic causal origin of the explanandum. The first situation presents an explanandum-problem and the second an explanans-problem, I will argue, both of which threaten activity-based frameworks.

4.
An algebraic approach to programs called recursive coroutines—due to Janicki [3]—is based on the idea of treating certain complex algorithms as algebraic models of those programs. Complex algorithms generalize pushdown algorithms, which are the algebraic models of recursive procedures (see Mazurkiewicz [4]). LCA—the logic of complex algorithms—was formulated in [11]. It formalizes the algorithmic properties of a class of deterministic programs, called here complex recursive programs or interacting stacks-programs, for which complex algorithms constitute mathematical models. LCA is in a sense an extension of algorithmic logic as initiated by Salwicki [14] and of the extended algorithmic logic EAL as formulated and examined by the present author in [8], [9], [10]. In LCA, as in EAL, ω⁺-valued logic is applied as a tool to construct the control systems (stacks) occurring in the corresponding algorithms. The aim of this paper is to give a complete axiomatization of LCA and to prove a completeness theorem. Logic of complex algorithms was presented at FCT'79 (International Symposium on Fundamentals of Computation Theory, Berlin 1979).
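The flavour of "interacting stacks-programs" can be pictured with a small toy program. The sketch below is my own loose illustration in Python, not Janicki's or the author's formalism: two round-robin-scheduled coroutines communicate through a pair of explicit stacks.

# Loose illustration (mine, not the paper's formalism): two coroutines,
# each reading and writing a shared pair of explicit stacks.

def producer(stack_a, stack_b):
    for item in ("x", "y", "z"):
        stack_a.append(item)   # push work for the consumer
        yield                  # hand control back to the scheduler
    while stack_b:
        print("producer got ack:", stack_b.pop())
        yield

def consumer(stack_a, stack_b):
    while True:
        if stack_a:
            stack_b.append(stack_a.pop().upper())  # acknowledge in upper case
        yield

a, b = [], []
p, c = producer(a, b), consumer(a, b)
for _ in range(8):    # naive round-robin scheduler
    next(p, None)     # the None default tolerates producer termination
    next(c)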

5.
There is an assumption common in the philosophy of mind literature that kinds in our sciences—or causal kinds, at least—are individuated by the causal powers that objects have in virtue of the properties they instantiate. While this assumption might not be problematic by itself, some authors take the assumption to mean that falling under a kind and instantiating a property amount to the same thing. I call this assumption the “Property-Kind Individuation Principle”. A problem with this principle arises because there are cases where we can sort objects by their possession of common causal powers, and yet those objects do not intuitively form a causal kind. In this short note, I discuss why the Property-Kind Individuation Principle is thus not a warranted metaphysical assumption.

6.
Peter Fazekas, Erkenntnis, 2009, 71(3): 303–322
The present paper surveys the three most prominent accounts in contemporary debates over how sound reduction should be executed. The classical Nagelian model of reduction derives the laws of the target theory from the laws of the base theory plus some auxiliary premises (so-called bridge laws) connecting the entities of the target and base theories. The functional model of reduction emphasizes causal definitions of the target entities, referring to their causal relations to base entities. The new-wave model of reduction deduces not the original target theory but an analogous image of it, which remains inside the vocabulary of the base theory. One of the fundamental motivations of both the functional and the new-wave model is to show that bridge laws can be evaded. The present paper argues that bridge laws—in the original Nagelian sense—are inevitable, i.e. that none of these models can evade them. On the one hand, the functional model of reduction needs bridge laws, since its fundamental concept, functionalization, is an inter-theoretical process dealing with entities of two different theories. Theoretical entities of different theories (in the general heterogeneous case) do not have common causal relations, so the functionalization of an entity—without bridge laws—can only be executed within the framework of its own theory. On the other hand, the so-called images of the new-wave account cannot be constructed without the use of bridge laws. These connecting principles are needed to guide the process of deduction within the base theory; without them one would not be able to recognize whether the deduced structure was an image of the target theory.

7.
This is a largely expository paper in which the following simple idea is pursued. Take the truth value of a formula to be the set of agents that accept the formula as true. This means we work with an arbitrary (finite) Boolean algebra as the truth-value space. When this is properly formalized, complete modal tableau systems exist, and there are natural versions of bisimulations that behave well from an algebraic point of view. There remain significant problems concerning the proper formalization, in this context, of natural-language statements, particularly those involving negative knowledge and common knowledge. A case study is presented which brings these problems to the fore. None of the basic material presented here is new to this paper—all of it has appeared in several papers over many years, by the present author and by others. Much of the development in the literature is more general than here—we have confined things to the Boolean case for simplicity and clarity. Most proofs are omitted, but several of the examples are new. The main virtue of the present paper is its coherent presentation of a systematic point of view—identify the truth value of a formula with the set of those who say the formula is true.
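The core idea is easy to make concrete. Below is a minimal sketch of my own in Python (the agent names and helper functions are illustrative, not from the paper): the truth value of a formula is the set of agents who accept it, and the connectives become the operations of the Boolean algebra of subsets.

# Minimal sketch: truth values as sets of agents.

AGENTS = frozenset({"alice", "bob", "carol"})

def conj(p, q):
    """Agents accepting 'p and q': those accepting both conjuncts."""
    return p & q

def disj(p, q):
    """Agents accepting 'p or q': those accepting at least one disjunct."""
    return p | q

def neg(p):
    """Agents accepting 'not p': the complement within AGENTS."""
    return AGENTS - p

raining = frozenset({"alice", "bob"})   # accepted by alice and bob only
windy = frozenset({"carol"})

assert neg(raining) == frozenset({"carol"})
assert conj(raining, neg(windy)) == frozenset({"alice", "bob"})
assert disj(raining, neg(raining)) == AGENTS   # top of the algebra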

8.
9.
Some philosophers have argued that, so long as two neural events within a subject are both of the same type and both carry the same content, these events may jointly constitute a single mental token, regardless of the sort of causal relation they bear to each other. These philosophers have used this claim—which I call the “singularity-through-redundancy” position—in order to argue that a split-brain subject normally has a single stream of consciousness, disjunctively realized across the two hemispheres. This paper argues, against this position, that the kind of causal relations multiple neural events bear to each other constrains the mental tokens with which realist functionalists can identify them.

10.
John A. Schuster, Synthese, 2012, 185(3): 467–499
One of the chief concerns of the young Descartes was with what he, and others, termed “physico-mathematics”. This signalled a questioning of the Scholastic Aristotelian view of the mixed mathematical sciences as subordinate to natural philosophy, non-explanatory, and merely instrumental. Somehow, the mixed mathematical disciplines were now to become intimately related to natural philosophical issues of matter and cause. That is, they were to become more “physicalised”, more closely intertwined with natural philosophising, regardless of which species of natural philosophy one advocated. A curious, short-lived yet portentous epistemological conceit lay at the core of Descartes’ physico-mathematics—the belief that solid geometrical results in the mixed mathematical sciences literally offered windows into the realm of natural philosophical causation, that in such cases one could literally “see the causes”. Optics took pride of place within Descartes’ physico-mathematics project, because he believed it offered unique possibilities for the successful vision of causes. This paper traces Descartes’ early physico-mathematical program in optics—its origins, pitfalls and successes—which was crucial in providing Descartes resources for his later work in systematic natural philosophy. It explores how Descartes exploited his discovery of the law of refraction of light—an achievement well within the bounds of traditional mixed mathematical optics—in order to derive, in the manner of physico-mathematics, causal knowledge about light, and indeed insight into the principles of a “dynamics” that would provide the laws of corpuscular motion and tendency to motion in his natural philosophical system.

11.
This paper addresses a problem that arises when it comes to inferring deterministic causal chains from pertinent empirical data. It will be shown that to every deterministic chain there exists an empirically equivalent common cause structure. Thus, our overall conviction that deterministic chains are one of the most ubiquitous (macroscopic) causal structures is underdetermined by empirical data. It will be argued that even though the chain and its associated common cause model are empirically equivalent there exists an important asymmetry between the two models with respect to model expansions. This asymmetry might constitute a basis on which to disambiguate corresponding causal inferences on non-empirical grounds.
Michael Baumgartner
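The underdetermination claim has a familiar probabilistic analogue that is easy to simulate. The sketch below is my own toy illustration in Python, not the paper's construction (which concerns strictly deterministic chains): data generated from the chain A → B → C satisfy A ⊥ C | B, the same conditional independence a common-cause fork A ← B → C would produce, so observational data alone cannot tell the two structures apart.

# Toy probabilistic analogue (mine, not the paper's deterministic case).
import random

def sample_chain(n=100_000):
    data = []
    for _ in range(n):
        a = random.random() < 0.5
        b = a if random.random() < 0.9 else not a   # B generated from A
        c = b if random.random() < 0.9 else not b   # C generated from B
        data.append((a, b, c))
    return data

def p_c_given_ab(data, b_val):
    """P(C=True | A=a, B=b_val) for both values of a."""
    out = {}
    for a_val in (True, False):
        rows = [c for (a, b, c) in data if a == a_val and b == b_val]
        out[a_val] = round(sum(rows) / len(rows), 3)
    return out

data = sample_chain()
print(p_c_given_ab(data, True))   # both entries ~0.9: once B is fixed,
print(p_c_given_ab(data, False))  # both entries ~0.1: A tells us nothing about C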

12.
John Neil Martin, Synthese, 2008, 165(1): 31–51
Though acknowledged by scholars, Plato’s identification of the Beautiful and the Good has generated little interest, even in aesthetics, where the moral concepts are a current topic. The view is suspect because, for example, it is easy to find examples of ugly saints and beautiful sinners. In this paper the thesis is defended using ideas from Plato’s ancient commentators, the Neoplatonists. Most interesting is Proclus, who applied to value theory a battery of linguistic tools with fixed semantic properties—comparative adjectives, associated gradable adjectives, mass nouns, and predicate negations—all with a semantics that demands a privative scale of value. It is shown how it is perfectly possible to interpret value terms Platonically over privative Boolean algebras so that beautiful and good diverge while, at higher levels, other value terms are coextensional. Considerations are offered that this structure conforms to actual usage.

13.
In this paper, I distinguish causal from logical versions of the direct argument for incompatibilism. I argue that, contrary to appearances, causal versions are better equipped to withstand an important recent challenge to the direct-argument strategy. The challenge involves arguing that support for the argument’s pivotal inference principle falls short just when it is needed most, namely when a deterministic series runs through an agent’s unimpaired deliberations. I then argue that, while there are limits to what causal versions can accomplish, they can be used to buttress the ultimacy argument, another important argument for incompatibilism.

14.
This paper presents an attempt to integrate theories of causal processes—of the kind developed by Wesley Salmon and Phil Dowe—into a theory of causal models using Bayesian networks. We suggest that arcs in causal models must correspond to possible causal processes. Moreover, we suggest that when processes are rendered physically impossible by what occurs on distinct paths, the original model must be restricted by removing the relevant arc. These two techniques suffice to explain cases of late preemption and other cases that have proved problematic for causal models.
Toby Handfield
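The arc-removal idea can be rendered schematically. The following is my own minimal sketch in Python, not the authors' formalism; the scenario, data structure and names are invented for illustration: each arc of a causal model is annotated with the causal process that would realize it, and arcs whose process has been rendered physically impossible are pruned before the model is evaluated.

# Schematic rendering (data structure and names mine, not the authors').

model = {
    ("assassin_shoots", "victim_dies"): "assassin's bullet trajectory",
    ("backup_shoots", "victim_dies"): "backup's bullet trajectory",
}

def restrict(model, impossible_processes):
    """Return the model with arcs lacking a possible process removed."""
    return {arc: process for arc, process in model.items()
            if process not in impossible_processes}

# Late preemption: the assassin's bullet arrives first, so the process
# realizing the backup arc cannot occur and that arc is pruned.
print(restrict(model, {"backup's bullet trajectory"}))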

15.
In signal detection theory (SDT), responses are governed by perceptual noise and a flexible decision criterion. Recent criticisms of SDT (see, e.g., Balakrishnan, 1999) have identified violations of its assumptions, and researchers have suggested that SDT fundamentally misrepresents perceptual and decision processes. We hypothesize that, instead, these violations of SDT stem from decision noise: the inability to use deterministic response criteria. In order to investigate this hypothesis, we present a simple extension of SDT—the decision noise model—with which we demonstrate that shifts in a decision criterion can be masked by decision noise. In addition, we propose a new statistic that can help identify whether the violations of SDT stem from perceptual or from decision processes. The results of a stimulus classification experiment—together with model fits to past experiments—show that decision noise substantially affects performance. These findings suggest that decision noise is important across a wide range of tasks and needs to be better understood in order to accurately measure perceptual processes.
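The masking effect can be previewed with a toy equal-variance SDT simulation. The sketch below is my own illustration, not the authors' decision noise model or statistic: jittering the criterion from trial to trial pulls hit and false-alarm rates toward one another even though perceptual d′ is held fixed, because the decision variable minus a noisy criterion has variance 1 + σ²_criterion.

# Toy equal-variance SDT simulation with a noisy decision criterion.
# Parameter names and values are mine, chosen for illustration.
import random

def simulate(d_prime=1.5, criterion=0.75, crit_sd=0.0, n=200_000):
    hits = false_alarms = 0
    for _ in range(n):
        c = random.gauss(criterion, crit_sd)   # criterion drawn each trial
        if random.gauss(d_prime, 1.0) > c:     # signal trial
            hits += 1
        if random.gauss(0.0, 1.0) > c:         # noise trial
            false_alarms += 1
    return hits / n, false_alarms / n

print(simulate(crit_sd=0.0))   # deterministic criterion: rates far apart
print(simulate(crit_sd=1.0))   # decision noise: rates pulled together,
                               # mimicking a loss of measured sensitivity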

16.
Harry Frankfurt has famously criticized the principle of alternate possibilities—the principle that an agent is morally responsible for performing some action only if able to have done otherwise than to perform it—on the grounds that it is possible for an agent to be morally responsible for performing an action that is inevitable for the agent, when the reasons for which the agent lacks alternate possibilities are not the reasons for which the agent has acted. I argue that an incompatibilist about determinism and moral responsibility can safely ignore so-called “Frankfurt-style cases” and continue to argue for incompatibilism on the grounds that determinism rules out the ability to do otherwise. My argument relies on a simple—indeed, simplistic—weakening of the principle of alternate possibilities that is explicitly designed to be immune to Frankfurt-style criticism. This alternative to the principle of alternate possibilities is so simplistic that it will no doubt strike many readers as philosophically fallow. I argue that it is not. I argue that the addition of one highly plausible premise allows the modified principle to be employed in an argument for incompatibilism that begins with the observation that determinism rules out the ability to do otherwise. On the merits of this argument I conclude that deterministic moral responsibility is impossible and that Frankfurt’s criticism of the principle of alternate possibilities—even if successful to that end—may be safely ignored.
Richard M. Glatz

17.
Randomized controlled trials (RCTs) are widely taken as the gold standard for establishing causal conclusions. Ideally conducted, they ensure that the treatment ‘causes’ the outcome—in the experiment. But where else? This is the venerable question of external validity. I point out that the question comes in two importantly different forms: Is the specific causal conclusion warranted by the experiment true in a target situation? And what will be the result of implementing the treatment there? This paper explains how the probabilistic theory of causality implies that RCTs can establish causal conclusions, and thereby provides an account of what exactly that causal conclusion is. Clarifying the exact form of the conclusion shows just what is necessary for it to hold in a new setting, and also how much more is needed to see what the actual outcome would be there were the treatment implemented.
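On the probabilistic theory of causality the underlying reasoning can be reconstructed roughly as follows (the notation and decomposition are the standard ones, not quotations from the paper). Randomization makes treatment assignment probabilistically independent of every causally homogeneous background context \(K_i\), so the raw difference in outcome rates is a weighted average of the context-specific differences:

\[
  P(O \mid T) - P(O \mid \neg T)
  \;=\; \sum_i P(K_i)\,\bigl[\,P(O \mid T \wedge K_i) - P(O \mid \neg T \wedge K_i)\,\bigr].
\]

A positive experimental difference therefore guarantees that \(T\) raises the probability of \(O\) in at least one context \(K_i\) present in the experimental population; exporting even that conclusion to a new setting requires that the same contexts, with comparable weights, be present there.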

18.
This article argues against the view that affirmative action is wrong because it involves assigning group rights. First, affirmative action does not have to proceed by assigning rights at all. Second, there are, in fact, legitimate “group rights” both legal and moral; there are collective rights—which are exercised by groups—and membership rights—which are rights people have in virtue of group membership. Third, there are continuing harms that people suffer as blacks and claims to remediation for these harms can fairly treat the (social) property of being black as tracking the victims of those harms. Affirmative action motivated in this way aims to respond to individual wrongs; wrongs that individuals suffer, as it happens, in virtue of their membership in groups. Finally, the main right we have when we are being considered for jobs and places at colleges is that we be treated according to procedures that are morally defensible. Morally acceptable procedures sometimes take account of the fact that a person is a member of a certain social group.

19.
We consider the problems arising from using sequences of experiments to discover the causal structure among a set of variables, none of which is known ahead of time to be an “outcome”. In particular, we present various approaches to resolving conflicts in the experimental results that arise from sampling variability in the experiments. We provide a sufficient condition that allows for pooling of data from experiments with different joint distributions over the variables. Satisfaction of the condition allows for an independence test with a greater sample size, which may resolve some of the conflicts in the experimental results. The pooling condition has its own problems, but should—due to its generality—be informative for techniques of meta-analysis.
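The payoff of pooling is easy to see in a toy case. The numbers, helper function, and the assumption that the two experiments may legitimately be pooled are my own illustration, not the paper's condition or data: each small experiment falls short of significance on a Pearson chi-square independence test, while the pooled table clears it.

# Illustrative sketch: pooling raises the sample size of an independence test.

def chi2_2x2(table):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

exp1 = [[18, 12], [11, 19]]   # hints at dependence, small n
exp2 = [[20, 10], [13, 17]]   # hints the same way, also small n
pooled = [[18 + 20, 12 + 10], [11 + 13, 19 + 17]]

for name, t in [("exp1", exp1), ("exp2", exp2), ("pooled", pooled)]:
    # exp1 ~ 3.27 and exp2 ~ 3.30 fall below the .05 critical value of 3.84
    # (df = 1); the pooled statistic ~ 6.54 exceeds it.
    print(name, round(chi2_2x2(t), 2))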

20.
Toby Handfield, Synthese, 2008, 160(2): 297–308
This paper develops two ideas with respect to dispositional properties: (1) Adapting a suggestion of Sungho Choi, it appears the conceptual distinction between dispositional and categorical properties can be drawn in terms of susceptibility to finks and antidotes. Dispositional properties, unlike categorical ones, are not susceptible to intrinsic finks, nor are they remediable by intrinsic antidotes. (2) If correct, this suggests the possibility that some dispositions—those which lack any causal basis—may be insusceptible to any fink or antidote. Since finks and antidotes are a major obstacle to a conditional analysis of dispositions, these unfinkable dispositions may be successfully analysed by the conditional analysis. This result is of importance for those who think that the fundamental properties might be dispositions lacking any distinct causal basis, because it suggests that such properties, if they exist, can be analysed by simple conditionals and will not be subject to ceteris paribus laws.
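For reference, the simple conditional analysis at issue has the familiar form below (standard notation, not quoted from the paper), with \(\Box\!\!\rightarrow\) the counterfactual conditional:

\[
  x \text{ is disposed to manifest } M \text{ in response to stimulus } S
  \;\iff\;
  (Sx \;\Box\!\!\rightarrow\; Mx).
\]

Finks and antidotes are the stock counterexamples to the two directions of this biconditional; the point at issue is that dispositions lacking any distinct causal basis admit neither, so for them the analysis can stand unqualified.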
