Similar Literature

20 similar documents found.
1.
Some twenty years ago, Bogen and Woodward challenged one of the fundamental assumptions of the received view, namely the theory-observation dichotomy, and argued for the introduction of the further category of scientific phenomena. The latter, Bogen and Woodward stressed, are usually unobservable and inferred from what is indeed observable, namely scientific data. Crucially, Bogen and Woodward claimed that theories predict and explain phenomena, but not data. But then, of course, the thesis of theory-ladenness, which has it that our observations are influenced by the theories we hold, cannot apply. On the basis of two case studies, I want to show that this consequence of Bogen and Woodward’s account is rather unrealistic. More importantly, I also object to Bogen and Woodward’s view that the reliability of data, which constitutes the precondition for data-to-phenomena inferences, can be secured without the theory one seeks to test. The case studies I revisit have figured heavily in the publications of Bogen and Woodward and others: the discovery of weak neutral currents and the discovery of the zebra pattern of magnetic anomalies. I show that, in the latter case, data can be ignored if they appear to be irrelevant from a particular theoretical perspective (TLI) and that, in the former case, the tested theory can be critical for the assessment of the reliability of the data (TLA). I argue that both TLI and TLA are much stronger senses of theory-ladenness than the classical thesis and that neither TLI nor TLA can be accommodated within Bogen and Woodward’s account.

2.
Jochen Apel, Synthese, 2011, 182(1):23-38
In this paper I offer an appraisal of James Bogen and James Woodward’s distinction between data and phenomena which pursues two objectives. First, I aim to clarify the notion of a scientific phenomenon. Such a clarification is required because despite its intuitive plausibility it is not exactly clear how Bogen and Woodward’s distinction has to be understood. I reject one common interpretation of the distinction, endorsed for example by James McAllister and Bruce Glymour, which identifies phenomena with patterns in data sets. Furthermore, I point out that other interpretations of Bogen and Woodward’s distinction do not specify the relationship between phenomena and theories in a satisfying manner. In order to avoid this problem I propose a contextual understanding of scientific phenomena according to which phenomena are states of affairs which play specific roles in scientific practice and to which we adopt a special epistemic attitude. Second, I evaluate the epistemological significance of Bogen and Woodward’s distinction with respect to the debate between scientific realists and constructive empiricists. Contrary to what Bogen and Woodward claim, I argue that the distinction does not provide a convincing argument against constructive empiricism.

3.
Bogen and Woodward claim that the function of scientific theories is to account for 'phenomena', which they describe both as investigator-independent constituents of the world and as corresponding to patterns in data sets. I argue that, if phenomena are considered to correspond to patterns in data, it is inadmissible to regard them as investigator-independent entities. Bogen and Woodward's account of phenomena is thus incoherent. I offer an alternative account, according to which phenomena are investigator-relative entities. All the infinitely many patterns that data sets exhibit have equal intrinsic claim to the status of phenomenon: each investigator may stipulate which patterns correspond to phenomena for him or her. My notion of phenomena accords better both with experimental practice and with the historical development of science. This revised version was published online in July 2006 with corrections to the Cover Date.

4.
The distinction between data and phenomena introduced by Bogen and Woodward (Philosophical Review 97(3):303–352, 1988) was meant to help account for scientific practice, especially in relation to scientific theory testing. Their article and the subsequent discussion are primarily viewed as internal to philosophy of science. We shall argue that the data/phenomena distinction can be used much more broadly in modelling processes in philosophy.

5.
James F. Woodward, Synthese, 2011, 182(1):165-179
This paper provides a restatement and defense of the data/phenomena distinction introduced by Jim Bogen and me several decades ago (e.g., Bogen and Woodward, The Philosophical Review, 303–352, 1988). Additional motivation for the distinction is introduced, ideas surrounding the distinction are clarified, and an attempt is made to respond to several criticisms.

6.
Depending on different positions in the debate on scientific realism, there are various accounts of the phenomena of physics. For scientific realists like Bogen and Woodward, phenomena are matters of fact in nature, i.e., the effects explained and predicted by physical theories. For empiricists like van Fraassen, the phenomena of physics are the appearances observed or perceived by sensory experience. Constructivists, however, regard the phenomena of physics as artificial structures generated by experimental and mathematical methods. My paper investigates the historical background of these different meanings of “phenomenon” in the traditions of physics and philosophy. In particular, I discuss Newton’s account of the phenomena and Bohr’s view of quantum phenomena, their relation to the philosophical discussion, and to data and evidence in current particle physics and quantum optics.

7.
Ioannis Votsis, Synthese, 2011, 182(1):89-100
In a recent paper James Bogen and James Woodward denounce a set of views on confirmation that they collectively brand ‘IRS’. The supporters of these views cast confirmation in terms of Inferential Relations between observational and theoretical Sentences. Against IRS accounts of confirmation, Bogen and Woodward unveil two main objections: (a) inferential relations are not necessary to model confirmation relations since many data are neither in sentential form nor can they be put in such a form and (b) inferential relations are not sufficient to model confirmation relations because the former cannot capture evidentially relevant factors about the detection processes and instruments that generate the data. In this paper I have a two-fold aim: (i) to show that Bogen and Woodward fail to provide compelling grounds for the rejection of IRS models and (ii) to highlight some of the models’ neglected merits.

8.
Eran Tal, Synthese, 2011, 182(1):117-129
This paper draws attention to an increasingly common method of using computer simulations to establish evidential standards in physics. By simulating an actual detection procedure on a computer, physicists produce patterns of data (‘signatures’) that are expected to be observed if a sought-after phenomenon is present. Claims to detect the phenomenon are evaluated by comparing such simulated signatures with actual data. Here I provide a justification for this practice by showing how computer simulations establish the reliability of detection procedures. I argue that this use of computer simulation undermines two fundamental tenets of the Bogen–Woodward account of evidential reasoning. Contrary to Bogen and Woodward’s view, computer-simulated signatures rely on ‘downward’ inferences from phenomena to data. Furthermore, these simulations establish the reliability of experimental setups without physically interacting with the apparatus. I illustrate my claims with a study of the recent detection of the superfluid-to-Mott-insulator phase transition in ultracold atomic gases.

9.
Michela Massimi, Synthese, 2011, 182(1):101-116
This paper investigates some metaphysical and epistemological assumptions behind Bogen and Woodward’s data-to-phenomena inferences. I raise a series of points and suggest an alternative possible Kantian stance about data-to-phenomena inferences. I clarify the nature of the suggested Kantian stance by contrasting it with McAllister’s view about phenomena as patterns in data sets.

10.
Syllogisms are arguments about the properties of entities. They consist of 2 premises and a conclusion, which can each be in 1 of 4 "moods": All A are B, Some A are B, No A are B, and Some A are not B. Their logical analysis began with Aristotle, and their psychological investigation began over 100 years ago. This article outlines the logic of inferences about syllogisms, which includes the evaluation of the consistency of sets of assertions. It also describes the main phenomena of reasoning about properties. There are 12 extant theories of such inferences, and the article outlines each of them and describes their strengths and weaknesses. The theories are of 3 main sorts: heuristic theories that capture principles that could underlie intuitive responses, theories of deliberative reasoning based on formal rules of inference akin to those of logic, and theories of deliberative reasoning based on set-theoretic diagrams or models. The article presents a meta-analysis of these extant theories of syllogisms using data from 6 studies. None of the 12 theories provides an adequate account, and so the article concludes with a guide, based on its qualitative and quantitative analyses, of how best to make progress toward a satisfactory theory.

11.
This article tests whether individual differences in inferring one trait from another (intertrait inferences) can be linked to lay beliefs about the malleability of personality (person theories). It finds that holding the belief that personality is malleable (incremental theory) rather than fixed (entity theory) at the time of inferences is associated with less extreme inferences involving semantically related (but not unrelated) traits. Although person theories have been assumed to be stable over time, existing short-term test-retest coefficients do not capture their instability over a longer period. These results can illuminate interrater discrepancies in assessments of personality pathology and job performance, enrich understanding of such phenomena as stereotyping and impression formation, refine the interpretation of past research involving person theories, and inform research planning.

12.
Stern HS, Psychological Methods, 2005, 10(4):494-499
I. Klugkist, O. Laudy, and H. Hoijtink (2005) presented a Bayesian approach to analysis of variance models with inequality constraints. Constraints may play 2 distinct roles in data analysis. They may represent prior information that allows more precise inferences regarding parameter values, or they may describe a theory to be judged against the data. In the latter case, the authors emphasized the use of Bayes factors and posterior model probabilities to select the best theory. One difficulty is that interpretation of the posterior model probabilities depends on which other theories are included in the comparison. The posterior distribution of the parameters under an unconstrained model allows one to quantify the support provided by the data for inequality constraints without requiring the model selection framework.
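The closing suggestion, quantifying support for an inequality constraint from the unconstrained posterior, can be sketched with a small Monte Carlo example. The data, group means, and normal approximation to the posterior below are illustrative assumptions, not the models from the papers under discussion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: three groups with (assumed) increasing true means.
groups = [rng.normal(loc, 1.0, size=30) for loc in (0.0, 0.4, 0.9)]

# Approximate unconstrained posterior of each group mean: with a flat
# prior, mean | data is roughly Normal(sample mean, s^2 / n).
draws = 100_000
post = np.column_stack([
    rng.normal(np.mean(g), np.std(g, ddof=1) / np.sqrt(len(g)), size=draws)
    for g in groups
])

# Posterior probability that the inequality constraint m1 < m2 < m3 holds,
# estimated as the fraction of unconstrained draws satisfying it.
support = float(np.mean((post[:, 0] < post[:, 1]) & (post[:, 1] < post[:, 2])))
print(round(support, 3))
```

Dividing this posterior proportion by the corresponding prior proportion (1/6 for a random ordering of three exchangeable means) yields, under the encompassing-prior approach, an approximate Bayes factor for the constrained against the unconstrained model, without enumerating a full set of competing theories.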

13.
We report four experiments investigating conjunctive inferences (from a conjunction and two conditional premises) and disjunctive inferences (from a disjunction and the same two conditionals). The mental model theory predicts that the conjunctive inferences, which require one model, should be easier than the disjunctive inferences, which require multiple models. Formal rule theories predict either the opposite result or no difference between the inferences. The experiments showed that the inferences were equally easy when the participants evaluated given conclusions, but that the conjunctive inferences were easier than the disjunctive inferences (1) when the participants drew their own conclusions, (2) when the conjunction and disjunction came last in the premises, (3) in the time the participants spent reading the premises and in responding to given conclusions, and (4) in their ratings of the difficulty of the inferences. The results support the model theory and demonstrate the importance of reasoners' inferential strategies.

14.
This paper compares two theories and their two corresponding computational models of human moral judgment. In order to better address psychological realism and generality of theories of moral judgment, more detailed and more psychologically nuanced models are needed. In particular, a motivationally based theory of moral judgment (and its corresponding computational model) is developed in this paper that provides a more accurate account of human moral judgment than an existing emotion-reason conflict theory. Simulations based on the theory capture and explain a range of relevant human data. They account not only for the original data that were used to support the emotion-reason conflict theory, but also for a wider range of data and phenomena.

15.
This article evaluates two theoretical accounts of how sarcasm is understood; the traditional model, which asserts that listeners derive a counterfactual inference from the sarcastic comment, and relevance theory, which asserts that listeners recognize sarcasm as a scornful echo of a previous assertion. Evidence from normal speakers provides only partial support for both theories. Evidence from brain-injured populations suggests that aspects of the pragmatic process can be arrested in ways not predicted by either theory. It is concluded that sarcasm is more effortful to process than nonsarcastic comments and that inferences about the facts of the situation and the mental state of the speaker (e.g., attitudes, knowledge, and intentions) are important to comprehending sarcasm. It is questioned whether inferences about mental state are relatively more difficult for brain-injured subjects and, if so, whether this is a continuum of difficulty or reflects reliance upon different cognitive processes.

16.
Brent Mundy, Erkenntnis, 1990, 33(3):345-369
The view that scientific theories are partially interpreted deductive systems (theoretical deductivism) is defended against recent criticisms by Hempel. Hempel argues that the reliance of theoretical inferences (both from observation to theory and also from theory to theory) upon ceteris paribus conditions or provisos must prevent theories from establishing deductive connections among observations. In reply I argue, first, that theoretical deductivism does not in fact require the establishing of such deductive connections: I offer alternative H-D analyses of these inferences. Second, I argue that when the refined character of scientific observation is taken into account, we find that a theory may after all establish such deductive connections among scientific observations, without reliance on provisos. These conclusions are based on the multi-level Popperian contextualist account of empirical interpretation sketched in a previous paper. As before, I claim that the supposed objections to theoretical deductivism depend upon questionable empiricist theses unnecessarily conjoined with theoretical deductivism by the Logical Positivists. Theoretical deductivism itself is unaffected by these arguments, and remains (when empirical interpretation is properly analyzed) the best account of scientific theories. This paper develops points first made very briefly in my forthcoming review (c). I would like to thank Professor Hempel for correspondence regarding an earlier version of that review, and Professor Demopoulos for commissioning the review.

17.
We asked people to validate conditional inferences (e.g., "A, therefore C" with "if A then C"). People are more likely to look for falsifications ("A and not-C") versus confirmations ("A and C") given a forced choice. Second, falsification rates are lower for logically valid versus invalid inferences. Logically valid inferences are inferences that follow necessarily. Experiment 1 (N = 96) shows that emphasising this logicality constraint increases falsification rates in the validation task and corroborates that validation-by-falsification increases logically correct inference evaluations. Experiment 2 (N = 41) corroborates, in the other direction, that people who are more likely to make logically correct evaluations show higher falsification performance in the validation task. The results support mental-models theory and suggest that alternative theories similarly need to specify how people would go about looking for counterexamples. We proffer such a specification for two alternatives to the model theory.

18.
Three versions of the additive theories of behavioral contrast
The additive theories of behavioral contrast state that contrast will occur only when two types of responses interact during multiple schedules. Three more specific versions of the theories may be defined according to how they distinguish these two types of responses. A strong version physically distinguishes them. A second version distinguishes them according to the theoretical processes which control them. A weak version distinguishes them on the basis of the environmental relations which control them. Only the weak version of the theories is currently testable. The weak theory should be tested by establishing each of the two environmental relations independently and then combining them to assess their effect on behavior. Because this test is not usually performed, many of the results which have been taken to support or contradict the additive theories are actually ambiguous.

19.
A fundamental issue for theories of human induction is to specify constraints on potential inferences. For inferences based on shared category membership, an analogy, and/or a relational schema, it appears that the basic goal of induction is to make accurate and goal-relevant inferences that are sensitive to uncertainty. People can use source information at various levels of abstraction (including both specific instances and more general categories), coupled with prior causal knowledge, to build a causal model for a target situation, which in turn constrains inferences about the target. We propose a computational theory in the framework of Bayesian inference and test its predictions (parameter-free for the cases we consider) in a series of experiments in which people were asked to assess the probabilities of various causal predictions and attributions about a target on the basis of source knowledge about generative and preventive causes. The theory proved successful in accounting for systematic patterns of judgments about interrelated types of causal inferences, including evidence that analogical inferences are partially dissociable from overall mapping quality.

20.