Similar Documents
20 similar documents found (search time: 31 ms)
1.
A scientific explanatory project, part-whole explanation, and a kind of science, part-whole science, are premised on identifying, investigating, and using parts and wholes. In the biological sciences, mechanistic, structuralist, and historical explanations are part-whole explanations. Each expresses different norms, explananda, and aims. Each is associated with a distinct partitioning frame for abstracting kinds of parts. These three explanatory projects complement one another, providing an integrative vision of the whole system, as is shown for a detailed case study: the tetrapod limb. My diagnosis of part-whole explanation in the biological sciences as well as in other domains exploring evolved, complex, and integrated systems (e.g., psychology and cognitive science) cross-cuts standard philosophical categories of explanation: causal explanation and explanation as unification. Part-whole explanation is itself one essential aspect of part-whole science.

2.
This paper considers the way mathematical and computational models are used in network neuroscience to deliver mechanistic explanations. Two case studies are considered: Recent work on klinotaxis by Caenorhabditis elegans, and a long-standing research effort on the network basis of schizophrenia in humans. These case studies illustrate the various ways in which network, simulation, and dynamical models contribute to the aim of representing and understanding network mechanisms in the brain, and thus, of delivering mechanistic explanations. After outlining this mechanistic construal of network neuroscience, two concerns are addressed. In response to the concern that functional network models are nonexplanatory, it is argued that functional network models are in fact explanatory mechanism sketches. In response to the concern that models which emphasize a network’s organization over its composition do not explain mechanistically, it is argued that this emphasis is both appropriate and consistent with the principles of mechanistic explanation. What emerges is an improved understanding of the ways in which mathematical and computational models are deployed in network neuroscience, as well as an improved conception of mechanistic explanation in general.

3.
Mechanistic explanation is at present the received view of scientific explanation. One of its central features is the idea that mechanistic explanations are both “downward looking” and “upward looking”: they explain by offering information about the internal constitution of the mechanism as well as the larger environment in which the mechanism is situated. That is, they offer both constitutive and contextual explanatory information. Adequate mechanistic explanations, on this view, accommodate the full range of explanatory factors both “above” and “below” the target phenomenon. The aim of this paper is to demonstrate that mechanistic explanation cannot furnish both constitutive and contextual information simultaneously, because these are different types of explanation with distinctly different aims. Claims that they can, I argue, depend on several intertwined confusions concerning the nature of explanation. Particularly, such claims tend to conflate mechanistic and functional explanation, which I argue ought to be understood as distinct. Conflating them threatens to oversell the explanatory power of mechanisms and obscures the means by which they explain. I offer two broad reasons in favor of distinguishing mechanistic and functional explanation: the first concerns the direction of explanation of each, and the second concerns the type of questions to which these explanations offer answers. I suggest an alternative picture on which mechanistic explanation is understood as fundamentally constitutive, and according to which an adequate understanding of a phenomenon typically requires supplementing the mechanistic explanation with a functional explanation.

4.
This paper aims to explore mechanistic and teleological explanations of consciousness. In terms of mechanistic explanations, it critiques various existing views, especially those embodied by existing computational cognitive models. In this regard, the paper argues in favor of the explanation based on the distinction between localist (symbolic) representation and distributed representation (as formulated in the connectionist literature), which reduces the phenomenological difference to a mechanistic difference. Furthermore, to establish a teleological explanation of consciousness, the paper discusses the issue of the functional role of consciousness on the basis of the aforementioned mechanistic explanation. A proposal based on synergistic interaction between the conscious and the unconscious is advanced that encompasses various existing views concerning the functional role of consciousness. This two-step deepening explanation has some empirical support, in the form of a cognitive model and various cognitive data that it captures.

5.
I argued in Karl Marx's Theory of History that the central claims of historical materialism are functional explanations, and I said that functional explanations are consequence explanations, ones, that is, in which something is explained by its propensity to have a certain kind of effect. I also claimed that the theory of chance variation and natural selection sustains functional explanations, and hence consequence explanations, of organismic equipment. In Section I, I defend the thesis that historical materialism offers functional or consequence explanations, and I reject Jon Elster's contention that game theory can, and should, assume a central role in the Marxist theory of society. In Section II, I contrast functional and consequence explanation, thereby revising the position of Karl Marx's Theory of History, and I question whether evolutionary biology supports functional explanations. Section III is a critique of Elster's views on functional explanation, and Sections IV and V defend consequence explanation against metaphysical and epistemological doubts. A concluding section summarizes my present understanding of the status of historical materialist explanations.

6.
Till Grüne-Yanoff. Synthese, 2009, 169(3): 539–555
It is often claimed that artificial society simulations contribute to the explanation of social phenomena. At the hand of a particular example, this paper argues that artificial societies often cannot provide full explanations, because their models are not or cannot be validated. Despite that, many feel that such simulations somehow contribute to our understanding. This paper tries to clarify this intuition by investigating whether artificial societies provide potential explanations. It is shown that these potential explanations, if they contribute to our understanding, considerably differ from potential causal explanations. Instead of possible causal histories, simulations offer possible functional analyses of the explanandum. The paper discusses how these two kinds of explanatory strategies differ, and how potential functional explanations can be appraised.

7.
When someone encounters an explanation perceived as weak, this may lead to a feeling of deprivation or tension that can be resolved by engaging in additional learning. This study examined to what extent children respond to weak explanations by seeking additional learning opportunities. Seven- to ten-year-olds (N = 81) explored questions and explanations (circular or mechanistic) about 12 animals using a novel Android tablet application. After rating the quality of an initial explanation, children could request and receive additional information or return to the main menu to choose a new animal to explore. Consistent with past research, there were both developmental and IQ-related differences in how children evaluated explanation quality. But across development, children were more likely to request additional information in response to circular explanations than mechanistic explanations. Importantly, children were also more likely to request additional information in direct response to explanations that they themselves had assigned low ratings, regardless of explanation type. In addition, there was significant variability in both children's explanation evaluation and their exploration, suggesting important directions for future research. The findings support the deprivation theory of curiosity and offer implications for education.

8.
David Barrett. Synthese, 2014, 191(12): 2695–2714
Piccinini and Craver (Synthese 183:283–311, 2011) argue for the surprising view that psychological explanation, properly understood, is a species of mechanistic explanation. This contrasts with the ‘received view’ (due, primarily, to Cummins and Fodor) which maintains a sharp distinction between psychological explanation and mechanistic explanation. The former is typically construed as functional analysis, the analysis of some psychological capacity into an organized series of subcapacities without specifying any of the structural features that underlie the explanandum capacity. The latter idea, of course, sees explanation as a matter of describing structures that maintain (or produce) the explanandum capacity. In this paper, I defend the received view by criticizing Piccinini and Craver’s argument for the claim that psychological explanation is not distinct from mechanistic explanation, and by showing how psychological explanations can possess explanatory force even when nothing is known about the underlying neurological details. I conclude with a few brief criticisms of the enterprise of mechanistic explanation in general.

9.
Philippe Huneman. Synthese, 2010, 177(2): 213–245
This paper argues that besides mechanistic explanations, there is a kind of explanation that relies upon “topological” properties of systems in order to derive the explanandum as a consequence, and which does not consider mechanisms or causal processes. I first investigate topological explanations in the case of ecological research on the stability of ecosystems. Then I contrast them with mechanistic explanations, thereby distinguishing the kind of realization they involve from the realization relations entailed by mechanistic explanations, and explain how both kinds of explanations may be articulated in practice. The second section, expanding on the case of ecological stability, considers the phenomenon of robustness at all levels of the biological hierarchy in order to show that topological explanations are indeed pervasive there. Reasons are suggested for this, in which “neutral network” explanations are singled out as a form of topological explanation that spans across many levels. Finally, I appeal to the distinction of explanatory regimes to cast light on a controversy in philosophy of biology, the issue of contingency in evolution, which is shown to essentially involve issues about realization.

10.
Jonathan Waskan. Synthese, 2011, 183(3): 389–408
Resurgent interest in both mechanistic and counterfactual theories of explanation has led to a fair amount of discussion regarding the relative merits of these two approaches. James Woodward is currently the pre-eminent counterfactual theorist, and he criticizes the mechanists on the following grounds: Unless mechanists about explanation invoke counterfactuals, they cannot make sense of claims about causal interactions between mechanism parts or of causal explanations put forward absent knowledge of productive mechanisms. He claims that these shortfalls can be offset if mechanists will just borrow key tenets of his counterfactual theory of causal claims. What mechanists must bear in mind, however, is that by pursuing this course they risk both the assimilation of mechanistic theories of explanation into Woodward’s own favored counterfactual theory and the marginalization of mechanistic explanations to a proper subset of all explanations. An outcome more favorable to mechanists might be had by pursuing an actualist-mechanist theory of the contents of causal claims. While it may not seem obvious at first blush that such an approach is workable, even in principle, recent empirical research into causal perception, causal belief, and mechanical reasoning provides some grounds for optimism.

11.
Mechanistic accounts of explanation have recently found popularity within philosophy of science. Here, we introduce the idea of an extended mechanistic explanation, which makes explicit room for the role of environment in explanation. After delineating Craver and Bechtel’s (2007) account, we argue this suggestion is not sufficiently robust when we take seriously the mechanistic environment and modeling practices involved in studying contemporary complex biological systems. Our goal is to extend the already profitable mechanistic picture by pointing out the importance of the mechanistic environment. It is our belief that extended mechanistic explanations, or mechanisms that take into consideration the temporal sequencing of the interplay between the mechanism and the environment, allow for mechanistic explanations regarding a broader group of scientific phenomena.

12.
Adrian Mitchell Currie. Synthese, 2014, 191(6): 1163–1183
Geologists, paleontologists, and other historical scientists are frequently concerned with narrative explanations targeting single cases. I show that two distinct explanatory strategies are employed in narratives, simple and complex. A simple narrative has minimal causal detail and is embedded in a regularity, whereas a complex narrative is more detailed and not embedded. The distinction is illustrated through two case studies: the ‘snowball earth’ explanation of Neoproterozoic glaciation and recent attempts to explain gigantism in Sauropods. This distinction is revelatory of historical science. I argue that at least sometimes which strategy is appropriate is not a pragmatic issue, but turns on the nature of the target. Moreover, the distinction reveals a counterintuitive pattern of progress in some historical explanation: shifting from simple to complex. Sometimes, historical scientists rightly abandon simple, unified explanations in favour of disunified, complex narratives. Finally, I compare narrative and mechanistic explanation, arguing that mechanistic approaches are inappropriate for complex narrative explanations.

13.
Biological realism (Revonsuo, 2001, 2006) states that dreaming is a biological phenomenon and therefore explainable in naturalistic terms, similar to the explanation of other biological phenomena. In the biological sciences, the structure of explanations can be described with the help of a framework called ‘multilevel explanation’. The multilevel model provides a context that helps to clarify what needs to be explained and how, and how to place different theories into the same model. Here, I will argue that the multilevel framework would be useful when we try to construct scientific explanations of dreaming.

14.
The topic of history-of-science explanation is first briefly introduced as a generally important one for the light it may shed on action theory, on the logic of discovery, and on philosophy's relations with historiography of science, intellectual history, and the sociology of knowledge. Then some problems and some conclusions are formulated by reference to some recent relevant literature: a critical analysis of Laudan's views on the role of normative evaluations in rational explanations occasions the result that one must make a conceptual distinction between evaluations and explanations of belief, and that there are at least three subclasses of the latter: rational, critical, and theoretical; I then discuss the problem of whether explanations of discoveries are self-evidencing and predictive by focusing on views of Hempel and Nickles, and I attempt a formalization of some aspects of the problem. Finally, a more systematic and concrete analysis is undertaken by using as an example the explanation of Galileo's rejection of space-proportionality, and it is argued that the historical explanation of scientific beliefs is a type of logical analysis.

15.
In this article, I develop an account of the use of intentional predicates in cognitive neuroscience explanations. As pointed out by Maxwell Bennett and Peter Hacker, intentional language abounds in neuroscience theories. According to Bennett and Hacker, the subpersonal use of intentional predicates results in conceptual confusion. I argue against this overly strong conclusion by evaluating the contested language use in light of its explanatory function. By employing conceptual resources from the contemporary philosophy of science, I show that although the use of intentional predicates in mechanistic explanations sometimes leads to explanatorily inert claims, intentional predicates can also successfully feature in mechanistic explanations as tools for the functional analysis of the explanandum phenomenon. Despite the similarities between my account and Daniel Dennett's intentional-stance approach, I argue that intentional stance should not be understood as a theory of subpersonal causal explanation, and therefore cannot be used to assess the explanatory role of intentional predicates in neuroscience. Finally, I outline a general strategy for answering the question of what kind of language can be employed in mechanistic explanations.

16.
Larry Wright and others have advanced causal accounts of functional explanation, designed to alleviate fears about the legitimacy of such explanations. These analyses take functional explanations to describe second-order causal relations. These second-order relations are conceptually puzzling. I present an account of second-order causation from within the framework of Eells' probabilistic theory of causation; the account makes use of the population-relativity of causation that is built into this theory.

17.
In this paper, I propose two theses, and then examine what the consequences of those theses are for discussions of reduction and emergence. The first thesis is that what have traditionally been seen as robust reductions of one theory or one branch of science by another more fundamental one are largely a myth. Although there are such reductions in the physical sciences, they are quite rare, and depend on special requirements. In the biological sciences, these prima facie sweeping reductions fade away, like the body of the famous Cheshire cat, leaving only a smile. ... The second thesis is that the “smiles” are fragmentary, patchy explanations, and though patchy and fragmentary, they are very important, potentially Nobel-prize-winning advances. To get the best grasp of these “smiles,” I want to argue that we need to return to the roots of discussions and analyses of scientific explanation more generally, and not focus mainly on reduction models, though three conditions based on earlier reduction models are retained in the present analysis. I briefly review the scientific explanation literature as it relates to reduction, and then offer my account of explanation. The account of scientific explanation I present is one I have discussed before, but in this paper I try to simplify it, and characterize it as involving field elements (FE) and a preferred causal model system (PCMS). In an important sense, this FE and PCMS analysis locates an “explanation” in a typical scientific research article. This FE and PCMS account is illustrated using a recent set of neurogenetic papers on two kinds of worm foraging behaviors: solitary and social feeding. One of the preferred model systems from a 2002 Nature article in this set is used to exemplify the FE and PCMS analysis, which is shown to have both reductive and nonreductive aspects.
The paper closes with a brief discussion of how this FE and PCMS approach differs from and is congruent with Bickle’s “ruthless reductionism” and the recently revived mechanistic philosophy of science of Machamer, Darden, and Craver.

18.
Young children often endorse explanations of the natural world that appeal to functions or purpose—for example, that rocks are pointy so animals can scratch on them. By contrast, most Western-educated adults reject such explanations. What accounts for this change? We investigated 4- to 5-year-old children’s ability to generalize the form of an explanation from examples by presenting them with novel teleological explanations, novel mechanistic explanations, or no explanations for 5 nonliving natural objects. We then asked children to explain novel instances of the same objects and novel kinds of objects. We found that children were able to learn and generalize explanations of both types, suggesting an ability to draw generalizations over the form of an explanation. We also found that teleological and mechanistic explanations were learned and generalized equally well, suggesting that if a domain-general teleological bias exists, it does not manifest as a bias in learning or generalization.

19.
Are explanations of different kinds (formal, mechanistic, teleological) judged differently depending on their contextual utility, defined as the extent to which they support the kinds of inferences required for a given task? We report three studies demonstrating that the perceived “goodness” of an explanation depends on the evaluator’s current task: Explanations receive a relative boost when they support task-relevant inferences, even when all three explanation types are warranted. For example, mechanistic explanations receive higher ratings when participants anticipate making further inferences on the basis of proximate causes than when they anticipate making further inferences on the basis of category membership or functions. These findings shed light on the functions of explanation and support pragmatic and pluralist approaches to explanation.

20.
Lotem Elber-Dorozko, Oron Shagrir. Synthese, 2019, 199(1): 43–66

It is generally accepted that, in the cognitive and neural sciences, there are both computational and mechanistic explanations. We ask how computational explanations can integrate into the mechanistic hierarchy. The problem stems from the fact that implementation and mechanistic relations have different forms. The implementation relation, from the states of an abstract computational system (e.g., an automaton) to the physical, implementing states is a homomorphism mapping relation. The mechanistic relation, however, is that of part/whole; the explaining features in a mechanistic explanation are the components of the explanandum phenomenon and their causal organization. Moreover, each component in one level of mechanism is constituted and explained by components of an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent states and properties that implement them. How, then, do the computational and the implementational integrate to create the mechanistic hierarchy? After explicating the general problem (Sect. 2), we further demonstrate it through a concrete example, of reinforcement learning, in the cognitive and neural sciences (Sects. 3 and 4). We then examine two possible solutions (Sect. 5). On one solution, the mechanistic hierarchy embeds computational and implementational properties at the same levels. This picture fits with the view that computational explanations are mechanistic sketches. On the other solution, there are two separate hierarchies, one computational and another implementational, which are related by the implementation relation. This picture fits with the view that computational explanations are functional and autonomous explanations. It is less clear how these solutions fit with the view that computational explanations are full-fledged mechanistic explanations.
Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches (Sect. 6).
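The "homomorphism mapping relation" the abstract appeals to can be made concrete with a toy sketch (not taken from the paper; the automaton, the physical states, and the names ABSTRACT_STEP, PHYS_STEP, and phi are all hypothetical, chosen purely for illustration): a mapping phi from fine-grained physical states to abstract automaton states implements the automaton when it commutes with the dynamics of both systems.

```python
# Toy illustration of an implementation relation as a homomorphism.
# All names here are hypothetical examples, not from the paper.

# Abstract computational system: a two-state automaton (its transition function).
ABSTRACT_STEP = {"even": "odd", "odd": "even"}

# Physical implementing system: four medium-dependent states and their dynamics.
# Several physical states realize each abstract state.
PHYS_STEP = {"v0": "v1", "v1": "v2", "v2": "v3", "v3": "v0"}

# The implementation mapping phi: many physical states to one abstract state.
phi = {"v0": "even", "v1": "odd", "v2": "even", "v3": "odd"}

def is_homomorphism():
    """phi is a homomorphism iff mapping-then-stepping equals
    stepping-then-mapping for every physical state p:
    phi(PHYS_STEP(p)) == ABSTRACT_STEP(phi(p))."""
    return all(phi[PHYS_STEP[p]] == ABSTRACT_STEP[phi[p]] for p in PHYS_STEP)

print(is_homomorphism())  # True: the physical dynamics implement the automaton
```

Note that phi is many-to-one and says nothing about parts and wholes, which is exactly the contrast the abstract draws with the mechanistic (component/whole) relation.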



Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号