171.
Alvin Plantinga has famously argued that metaphysical naturalism is self-defeating, and cannot be rationally accepted. I distinguish between two different ways of understanding this argument, which I call the "probabilistic inference conception" and the "process characteristic conception". I argue that the former is what critics of the argument usually presuppose, whereas most critical responses fail when one assumes the latter conception. To illustrate this, I examine three standard objections to Plantinga's evolutionary argument against naturalism: the Perspiration Objection, the Tu Quoque Objection, and the "Why Can't the Naturalist Just Add a Little Something?" Objection. I show that Plantinga's own responses to these objections fail, and propose counterexamples to his first two principles of defeat. I then go on to construct more adequate responses to these objections, using the distinctions I develop in the first part of the paper.
172.
In this paper a theory of finitistic and frequentistic approximations — in short: f-approximations — of probability measures P over a countably infinite outcome space N is developed. The family of subsets of N for which f-approximations converge to a frequency limit forms a pre-Dynkin system D. The limiting probability measure over D can always be extended to a probability measure over the σ-algebra σ(D) generated by D, but this measure is not always σ-additive. We conclude that probability measures can be regarded as idealizations of limiting frequencies if and only if σ-additivity is not assumed as a necessary axiom for probabilities. We prove that σ-additive probability measures can be characterized in terms of so-called canonical and in terms of so-called full f-approximations. We also show that every non-σ-additive probability measure is f-approximable, though neither canonically nor fully f-approximable. Finally, we transfer our results to probability measures on open or closed formulas of first-order languages.
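The central phenomenon can be illustrated with natural density, the most familiar frequency limit on N. Below is a minimal Python sketch; the sets and truncation points are chosen for illustration and are not from the paper:

```python
from fractions import Fraction

def rel_freq(A, n):
    """Relative frequency of the set A among the first n naturals 1..n."""
    return Fraction(sum(1 for k in range(1, n + 1) if A(k)), n)

evens = lambda k: k % 2 == 0
singleton_7 = lambda k: k == 7

for n in (10, 100, 10_000):
    print(n, rel_freq(evens, n), rel_freq(singleton_7, n))
# The frequencies for the evens converge to 1/2, while every singleton
# gets limiting frequency 0.  Summing the singleton limits over all of N
# gives 0, not 1, so the limiting "measure" (natural density) is finitely
# additive but not sigma-additive -- exactly the tension between limiting
# frequencies and sigma-additivity that the paper's f-approximations
# make precise.
```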
173.
Belief is a central focus of inquiry in the philosophy of religion and indeed in the field of religion itself. No one conception of belief is central in all these cases, and sometimes the term ‘belief’ is used where ‘faith’ or ‘acceptance’ would better express what is intended. This paper sketches the major concepts in the philosophy of religion that are expressed by these three terms. In doing so, it distinguishes propositional belief (belief that) from both objectual belief (believing something to have a property) and, more importantly, belief in (a trusting attitude that is illustrated by at least many paradigm cases of belief in God). Faith is shown to have a similar complexity, and even propositional faith divides into importantly different categories. Acceptance differs from both belief and faith in that at least one kind of acceptance is behavioral in a way neither of the other two elements is. Acceptance of a proposition, it is argued, does not entail believing it, nor does believing entail acceptance in any distinctive sense of the latter term. In characterizing these three notions (and related ones), the paper provides some basic materials important both for understanding a person’s religious position and for appraising its rationality. The nature of religious faith and some of the conditions for its rationality, including some deriving from elements of an ethics of belief, are explored in some detail.  相似文献   
174.
Chaos-related obstructions to predictability have been used to challenge accounts of theory validation based on the agreement between theoretical predictions and experimental data (Rueger & Sharp, 1996, The British Journal for the Philosophy of Science, 47, 93–112; Koperski, 1998, Philosophy of Science, 40, 194–212). These challenges are incomplete in two respects: (a) they do not show that chaotic regimes are unpredictable in principle (i.e., with unbounded resources) and, as a result, that there is something conceptually wrong with idealized expectations of correct predictions from acceptable theories, and (b) they do not explore whether chaos-induced predictive failures of deterministic models can be remedied by stochastic modeling. In this paper we appeal to an asymptotic analysis of state space trajectories and their numerical approximations to show that chaotic regimes are deterministically unpredictable even with unbounded resources. Additionally, we explain why stochastic models of chaotic systems, while predictively successful in some cases, are in general predictively as limited as deterministic ones. We conclude by suggesting that the way in which scientists deal with such principled obstructions to predictability calls for a more comprehensive approach to theory validation, on which experimental testing is augmented by a multifaceted mathematical analysis of theoretical models, capable of identifying chaos-related predictive failures as due to principled limitations which the world itself imposes on any less-than-omniscient epistemic access to some natural systems. We give special thanks to two anonymous reviewers for their helpful comments that have substantially contributed to the final version of this paper.
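The predictive failure at issue is easy to dramatize numerically with the logistic map, a textbook chaotic system. The sketch below shows only the finite-precision case (the map, perturbation size, and step counts are illustrative choices, not the authors' asymptotic analysis, which covers even the unbounded-resource case):

```python
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10  # two states closer than any realistic measurement
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}  |x - y| = {abs(x - y):.3e}")
# The gap grows roughly like exp(lambda * step), with Lyapunov exponent
# lambda = ln 2 for r = 4, so after about 35 steps the two trajectories
# are effectively uncorrelated: any fixed-precision specification of the
# initial state exhausts its predictive content in finitely many steps.
```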
175.
Colin Howson, Synthese, 2007, 156(3): 491–512
Many people regard utility theory as the only rigorous foundation for subjective probability, and even de Finetti thought the betting approach supplemented by Dutch Book arguments only good as an approximation to a utility-theoretic account. I think that there are good reasons to doubt this judgment, and I propose an alternative, in which the probability axioms are consistency constraints on distributions of fair betting quotients. The idea itself is hardly new: it is in de Finetti and also Ramsey. What is new is that it is shown that probabilistic consistency and consequence can be defined in a way formally analogous to the way these notions are defined in deductive (propositional) logic. The result is a free-standing logic which does not pretend to be a theory of rationality and is therefore immune to, among other charges, that of “logical omniscience”.
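For readers unfamiliar with the betting framework: a bet on A at quotient q and stake S pays S(1 − q) if A occurs and costs Sq otherwise, and quotients violating the additivity axiom can be turned into a sure loss. A minimal sketch with invented quotients, not an example from the paper:

```python
def bet_payoff(q, stake, occurs):
    """Bettor's net payoff on a bet at quotient q: win stake*(1-q) if the
    event occurs, lose stake*q otherwise."""
    return stake * (1 - q) if occurs else -stake * q

# Incoherent quotients: q(A) + q(not-A) = 1.2 > 1.
q_A, q_notA = 0.7, 0.5

# The opponent sells the bettor a unit-stake bet on A and one on not-A.
for A_occurs in (True, False):
    total = bet_payoff(q_A, 1.0, A_occurs) + bet_payoff(q_notA, 1.0, not A_occurs)
    print(f"A = {A_occurs}:  bettor's net payoff = {total:+.2f}")
# Either way the bettor loses 0.20 (= q_A + q_notA - 1): the kind of
# inconsistency, in the logical sense Howson develops, that the
# additivity axiom rules out.
```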
176.
The reference class problem is your problem too
Alan Hájek, Synthese, 2007, 156(3): 563–585
The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified. The problem is usually regarded as one specifically for the frequentist interpretation of probability and is often considered fatal to it. I argue that versions of the classical, logical, propensity and subjectivist interpretations also fall prey to their own variants of the reference class problem. Other versions of these interpretations apparently evade the problem. But I contend that they are all “no-theory” theories of probability: accounts that leave quite obscure why probability should function as a guide to life, a suitable basis for rational inference and action. The reference class problem besets those theories that are genuinely informative and that plausibly constrain our inductive reasonings and decisions. I distinguish a “metaphysical” and an “epistemological” reference class problem. I submit that we can dissolve the former problem by recognizing that probability is fundamentally a two-place notion: conditional probability is the proper primitive of probability theory. However, I concede that the epistemological problem remains.
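A toy computation makes the problem concrete (all counts below are invented for illustration): one and the same individual inherits different probabilities from different reference classes.

```python
from fractions import Fraction

# Hypothetical population counts: (class size, deaths before 70) -- invented.
classes = {
    "smokers":         (1000, 400),
    "joggers":         (1000, 100),
    "smokers who jog": (100,  25),
}

for name, (size, deaths) in classes.items():
    print(f"P(dies before 70 | {name}) = {Fraction(deaths, size)}")
# A smoking jogger belongs to all three classes, and the three relative
# frequencies (2/5, 1/10, 1/4) disagree.  Nothing in the frequency data
# privileges one class: that is the metaphysical reference class problem.
# Hájek's proposal is to take conditional probabilities like these as
# primitive rather than to seek "the" unconditional probability.
```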
177.
It was investigated whether commonly used factor score estimates lead to the same reproduced covariance matrix of observed variables. This was achieved by means of Schönemann and Steiger’s (1976) regression component analysis, since it is possible to compute the reproduced covariance matrices of the regression components corresponding to different factor score estimates. It was shown that Thurstone’s, Ledermann’s, Bartlett’s, Anderson-Rubin’s, McDonald’s, Krijnen, Wansbeek, and Ten Berge’s, as well as Takeuchi, Yanai, and Mukherjee’s score estimates reproduce the same covariance matrix. In contrast, Harman’s ideal variables score estimates lead to a different reproduced covariance matrix.
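The equivalence can be checked numerically. Under the common factor model the observed covariance is Σ = ΛΛ′ + Ψ, the regression (Thurstone) weights are Σ⁻¹Λ, the Bartlett weights are Ψ⁻¹Λ(Λ′Ψ⁻¹Λ)⁻¹, and Harman's ideal-variables weights are Λ(Λ′Λ)⁻¹. The sketch below uses an invented loading matrix, and the reproduction formula ΣW(W′ΣW)⁻¹W′Σ is my rendering of the covariance reproduced by the regression components, not code from the paper:

```python
import numpy as np

# Invented two-factor model for six observed variables.
L = np.array([[.8, 0], [.7, 0], [.6, .2], [0, .8], [.1, .7], [0, .6]])
Psi = np.diag(1 - (L ** 2).sum(axis=1))   # uniquenesses
Sigma = L @ L.T + Psi                     # model-implied covariance

def reproduced(W):
    """Covariance reproduced by the regression components with weights W."""
    M = W.T @ Sigma @ W                   # component covariance
    P = Sigma @ W @ np.linalg.inv(M)      # component pattern
    return P @ M @ P.T

W_thurstone = np.linalg.solve(Sigma, L)
W_bartlett = np.linalg.solve(Psi, L) @ np.linalg.inv(L.T @ np.linalg.solve(Psi, L))
W_harman = L @ np.linalg.inv(L.T @ L)     # ideal variables

# Thurstone and Bartlett weights span the same column space, so the
# reproduced covariance agrees; Harman's weights generally do not.
print(np.allclose(reproduced(W_thurstone), reproduced(W_bartlett)))  # True
print(np.allclose(reproduced(W_thurstone), reproduced(W_harman)))    # False
```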
178.
179.
The Y2K Bug, the programming glitch expected to derail computerized systems worldwide when the year changed from 1999 to 2000, provided a rich context for examining anticipatory coping and preparatory behaviors. In the last 2 months of 1999, 697 respondents completed an online survey of proactivity, worry about Y2K, dispositional optimism, primary and secondary control-oriented coping efforts, estimates of Y2K-related disruptions, and household preparations. Higher levels of proactivity, worry, and optimism were independently associated with greater self-reported preparations. These predictors were positively associated with greater primary control-oriented coping efforts, but showed differential relations to secondary control efforts, such as accepting the situation or trusting a higher power, especially among participants who thought the damage would be severe and lasting. Implications for understanding multiple ways of coping with potential stressors are discussed.
Lisa G. Aspinwall
180.
D. A. Lagnado & D. R. Shanks, Cognition, 2002, 83(1): 81–112
Why are people's judgments incoherent under probability formats? Research in an associative learning paradigm suggests that after structured learning participants give judgments based on predictiveness rather than normative probability. This is because people's learning mechanisms attune to statistical contingencies in the environment, and they use these learned associations as a basis for subsequent probability judgments. We introduced a hierarchical structure into a simulated medical diagnosis task, setting up a conflict between predictiveness and coherence. Thus, a target symptom was more predictive of a subordinate disease than of its superordinate category, even though the latter included the former. Under a probability format participants tended to violate coherence and make ratings in line with predictiveness; under a frequency format they were more normative. These results are difficult to explain within a unitary model of inference, whether associative or frequency-based. In the light of this, and other findings in the judgment and learning literature, a dual-component model is proposed.
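The engineered conflict can be reconstructed with a toy trial set (counts invented, not the authors' stimuli): the symptom's learned contingency with the subordinate disease exceeds its contingency with the superordinate category, even though coherence requires P(superordinate | symptom) ≥ P(subordinate | symptom).

```python
from fractions import Fraction

# Invented learning trials: (symptom status, disease).  d1 is a subtype
# of the superordinate category D = {d1, d2}.
trials = [("S", "d1")] * 20 + [("noS", "d2")] * 20 + [("noS", "none")] * 20

def p(disease_set, symptom):
    """Relative frequency of the disease set given symptom status."""
    sub = [t for t in trials if t[0] == symptom]
    return Fraction(sum(1 for t in sub if t[1] in disease_set), len(sub))

for name, ds in [("d1", {"d1"}), ("D", {"d1", "d2"})]:
    dP = p(ds, "S") - p(ds, "noS")        # contingency (predictiveness)
    print(f"P({name}|S) = {p(ds, 'S')},  deltaP = {dP}")
# P(d1|S) = P(D|S) = 1, but the symptom's contingency with d1 (deltaP = 1)
# exceeds its contingency with D (deltaP = 1/2), because D also occurs
# without the symptom.  Judges who track contingency will rate the
# subordinate d1 above its superordinate D, violating P(D|S) >= P(d1|S).
```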