181.
In this paper a theory of finitistic and frequentistic approximations (in short: f-approximations) of probability measures P over a countably infinite outcome space N is developed. The family of subsets of N for which f-approximations converge to a frequency limit forms a pre-Dynkin system D ⊆ ℘(N). The limiting probability measure over D can always be extended to a probability measure over ℘(N), but this measure is not always σ-additive. We conclude that probability measures can be regarded as idealizations of limiting frequencies if and only if σ-additivity is not assumed as a necessary axiom for probabilities. We prove that σ-additive probability measures can be characterized in terms of so-called canonical and in terms of so-called full f-approximations. We also show that every non-σ-additive probability measure is f-approximable, though neither canonically nor fully f-approximable. Finally, we transfer our results to probability measures on open or closed formulas of first-order languages.
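For readers unfamiliar with the construction, the following sketch states the standard limiting-relative-frequency definitions the abstract presupposes; the notation is ours, and the closure properties are only indicated, not proved:

```latex
% Relative frequency of A ⊆ N along a fixed outcome sequence (x_i):
\mathrm{freq}_n(A) \;=\; \frac{1}{n}\,\bigl|\{\, i \le n : x_i \in A \,\}\bigr|,
\qquad
P(A) \;=\; \lim_{n \to \infty} \mathrm{freq}_n(A) \quad \text{(when the limit exists)}.
% D = { A ⊆ N : lim_n freq_n(A) exists } contains N and is closed under
% complements and finite disjoint unions -- a pre-Dynkin system, but in
% general not a σ-algebra.
```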
182.
Franz Huber, Synthese (2008) 161(1): 89–118
The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen, B. C., 1983, Theory comparison and relevant evidence. In J. Earman (Ed.), Testing scientific theories (pp. 27–42). Minneapolis: University of Minnesota Press). Sections 1–3 contain the general plausibility-informativeness theory of theory assessment. In a nutshell, the message is (1) that there are two values a theory should exhibit: truth and informativeness, measured respectively by a truth indicator and a strength indicator; (2) that these two values are conflicting in the sense that the former is a decreasing and the latter an increasing function of the logical strength of the theory to be assessed; and (3) that in assessing a given theory by the available data one should weigh between these two conflicting aspects in such a way that any surplus in informativeness succeeds, if the shortfall in plausibility is small enough. Particular accounts of this general theory arise by inserting particular strength indicators and truth indicators. In Section 4 the theory is spelt out for the Bayesian paradigm of subjective probabilities. It is then compared to incremental Bayesian confirmation theory. Section 4 closes by asking whether it is likely to be lovely. Section 5 discusses a few problems of confirmation theory in the light of the present approach. In particular, it is briefly indicated how the present account gives rise to a new analysis of Hempel’s conditions of adequacy for any relation of confirmation (Hempel, C. G., 1945, Studies in the logic of confirmation. Mind, 54, 1–26, 97–121), differing from the one Carnap gave in § 87 of his Logical foundations of probability (1962, Chicago: University of Chicago Press). Section 6 addresses the question of justification any theory of theory assessment has to face: why should one stick to theories given high assessment values rather than to any other theories? The answer given by the Bayesian version of the account presented in Section 4 is that one should accept theories given high assessment values because, in the medium run, theory assessment almost surely takes one to the most informative among all true theories when presented separating data. The concluding Section 7 continues the comparison between the present account and incremental Bayesian confirmation theory.
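As a toy illustration of the trade-off described in (1)–(3), the sketch below scores two hypotheses under the Bayesian reading. The particular indicators (the posterior P(H|E) for plausibility, P(¬H|¬E) for informativeness) and the equal weighting are illustrative assumptions, not necessarily Huber's own measures:

```python
# Toy sketch of the plausibility-informativeness trade-off. Illustrative
# assumptions: plausibility = P(H|E), informativeness = P(not-H|not-E);
# both indicator choices and the 50/50 weighting are ours.
def assess(p_h, p_e_given_h, p_e_given_not_h, weight=0.5):
    """Weighted assessment value of hypothesis H given evidence E."""
    p_not_h = 1 - p_h
    p_e = p_e_given_h * p_h + p_e_given_not_h * p_not_h
    plausibility = p_e_given_h * p_h / p_e                       # P(H|E)
    informativeness = (1 - p_e_given_not_h) * p_not_h / (1 - p_e)  # P(~H|~E)
    return weight * plausibility + (1 - weight) * informativeness

# A logically weaker theory: higher prior, but its negation is harder to
# rule out. A stronger theory: lower prior, but far more informative.
print(assess(p_h=0.6, p_e_given_h=0.8, p_e_given_not_h=0.4))  # ~0.708
print(assess(p_h=0.3, p_e_given_h=0.9, p_e_given_not_h=0.2))  # ~0.804
```

On these invented numbers the logically stronger, less plausible theory receives the higher assessment value, exactly the surplus-outweighs-shortfall verdict the abstract describes.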
183.
Igor Douven, Synthese (2008) 164(1): 19–44
According to so-called epistemic theories of conditionals, the assertability/acceptability/acceptance of a conditional requires the existence of an epistemically significant relation between the conditional’s antecedent and its consequent. This paper points to some linguistic data that our current best theories of the foregoing type appear unable to explain. Further, it presents a new theory of the same type that does not have that shortcoming. The theory is then defended against some seemingly obvious objections.
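A schematic sketch of the general shape of such theories follows; the threshold and the particular evidential-relevance condition are illustrative assumptions, not Douven's actual proposal:

```python
# Schematic acceptability test in the spirit of epistemic theories of
# conditionals: "If A then C" is acceptable when C is probable given A
# AND A is evidentially relevant to C. Threshold and relevance condition
# are illustrative assumptions, not the paper's exact theory.
def acceptable(p_c_given_a, p_c, threshold=0.9):
    high_enough = p_c_given_a >= threshold   # P(C|A) is high
    supports = p_c_given_a > p_c             # A raises the probability of C
    return high_enough and supports

print(acceptable(p_c_given_a=0.95, p_c=0.50))  # True: high and supported
print(acceptable(p_c_given_a=0.95, p_c=0.95))  # False: no epistemic link
```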
184.
When natural language input contains grammatical forms that are used probabilistically and inconsistently, learners will sometimes reproduce the inconsistencies; but sometimes they will instead regularize the use of these forms, introducing a consistency in the language that was not present in the input. In this paper we ask what produces such regularization. We conducted three artificial language experiments, varying the types of inconsistency with which determiners are used, and comparing adult and child learners. In Experiment 1 we presented adult learners with scattered inconsistency (the use of multiple determiners, varying in frequency, in the same context) and found that adults will reproduce these inconsistencies at low levels of scatter, but at very high levels of scatter will regularize the determiner system, producing the most frequent determiner form almost all the time. In Experiment 2 we showed that this is not merely the result of frequency: when determiners are used with low frequencies but in consistent contexts, adults will learn all of the determiners veridically. In Experiment 3 we compared adult and child learners, finding that children will almost always regularize inconsistent forms, whereas adult learners will only regularize the most complex inconsistencies. Taken together, these results suggest that regularization processes in natural language learning, such as those seen in the acquisition of language from non-native speakers or in the formation of young languages, may depend crucially on the nature of language learning by young children.
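A toy simulation (input frequencies invented) contrasts the two learner profiles the experiments distinguish:

```python
import random

# Toy contrast of the two learner profiles reported above (the determiner
# forms and their frequencies are made up for illustration): a probability
# matcher reproduces the input inconsistency; a regularizer produces the
# most frequent form almost all the time.
input_freqs = {"ka": 0.6, "po": 0.25, "ne": 0.1, "ta": 0.05}  # scattered input

def probability_matcher(freqs, n=1000):
    forms, weights = zip(*freqs.items())
    return random.choices(forms, weights=weights, k=n)

def regularizer(freqs, n=1000):
    majority = max(freqs, key=freqs.get)
    return [majority] * n  # a consistency the input never had

print(probability_matcher(input_freqs).count("ka") / 1000)  # ~0.6, mirrors input
print(regularizer(input_freqs).count("ka") / 1000)          # 1.0, regularized
```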
185.
A perplexing yet persistent empirical finding is that individuals assess probabilities in words and in numbers nearly equivalently, and theorists have called for future research to search for factors that cause differences. This study uses an accounting context in which individuals are commonly motivated to reach preferred (rather than accurate) conclusions. Within this context, I predict new differences between verbal and numerical probability assessments, as follows: first, individuals will justify an optimistic verbal assessment (e.g., somewhat possible) by retaining the option of re-defining it, in case of negative outcomes, as though the phrase really means something different, and, for that matter, means more things. This re-definition will maintain some connection to the original meaning of the phrase, but de-emphasized relative to the new meaning. Second, based on this behavior, I also predict individuals’ verbal probability assessments to be (1) more biased and yet (2) perceived as more justifiable than their numerical assessments. I find supportive evidence in an experiment designed to test the hypotheses. This study contributes to motivated reasoning and probability assessment theories (1) with new evidence of how individuals can word-smith in multiple attributes of a phrase to justify reaching a preferred conclusion, and (2) with new, reliable differences between verbal and numerical probability assessments. This study has important theoretical and practical implications relevant to organizational contexts in which people assess the likelihoods of uncertainties in words or numbers, and with motivations to reach a preferred conclusion.
186.
Chaos-related obstructions to predictability have been used to challenge accounts of theory validation based on the agreement between theoretical predictions and experimental data (Rueger & Sharp, 1996, The British Journal for the Philosophy of Science, 47, 93–112; Koperski, 1998, Philosophy of Science, 40, 194–212). These challenges are incomplete in two respects: (a) they do not show that chaotic regimes are unpredictable in principle (i.e., with unbounded resources) and, as a result, that there is something conceptually wrong with idealized expectations of correct predictions from acceptable theories, and (b) they do not explore whether chaos-induced predictive failures of deterministic models can be remedied by stochastic modeling. In this paper we appeal to an asymptotic analysis of state space trajectories and their numerical approximations to show that chaotic regimes are deterministically unpredictable even with unbounded resources. Additionally, we explain why stochastic models of chaotic systems, while predictively successful in some cases, are in general predictively as limited as deterministic ones. We conclude by suggesting that the way in which scientists deal with such principled obstructions to predictability calls for a more comprehensive approach to theory validation, one on which experimental testing is augmented by a multifaceted mathematical analysis of theoretical models, capable of identifying chaos-related predictive failures as due to principled limitations which the world itself imposes on any less-than-omniscient epistemic access to some natural systems. We give special thanks to two anonymous reviewers for their helpful comments, which have substantially contributed to the final version of this paper.
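The practical face of the point is easy to exhibit numerically. In the sketch below (the logistic map at r = 4, a standard chaotic regime), two trajectories differing by 10⁻¹⁰ in their initial condition become macroscopically separated within a few dozen iterations, so any finite-precision measurement of the initial state eventually yields no predictive information:

```python
# Sensitive dependence on initial conditions in the logistic map x -> r*x*(1-x).
# Two trajectories starting 1e-10 apart diverge to O(1) separation quickly.
def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.2, 0.2 + 1e-10
for step in range(60):
    x, y = logistic(x), logistic(y)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: separation = {abs(x - y):.3e}")
```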
187.
Colin Howson, Synthese (2007) 156(3): 491–512
Many people regard utility theory as the only rigorous foundation for subjective probability, and even de Finetti thought the betting approach supplemented by Dutch Book arguments only good as an approximation to a utility-theoretic account. I think that there are good reasons to doubt this judgment, and I propose an alternative, in which the probability axioms are consistency constraints on distributions of fair betting quotients. The idea itself is hardly new: it is in de Finetti and also Ramsey. What is new is that it is shown that probabilistic consistency and consequence can be defined in a way formally analogous to the way these notions are defined in deductive (propositional) logic. The result is a free-standing logic which does not pretend to be a theory of rationality and is therefore immune to, among other charges, that of “logical omniscience”.
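A minimal sketch of the consistency idea (quotients and stakes invented): betting quotients over a partition that fail to sum to 1 expose the bettor to a guaranteed loss, while coherent quotients do not:

```python
# Dutch Book in miniature: the bettor pays q * stake for a ticket on each
# cell of a partition; each ticket pays `stake` if its cell obtains, and
# exactly one cell obtains. Quotient values are illustrative.
def sure_loss(quotients, stake=1.0):
    """Guaranteed net loss for the bettor (positive means a Dutch Book)."""
    total_paid = sum(q * stake for q in quotients.values())
    return total_paid - stake  # exactly one ticket pays out `stake`

print(sure_loss({"rain": 0.7, "no rain": 0.5}))  # 0.2 sure loss: inconsistent
print(sure_loss({"rain": 0.7, "no rain": 0.3}))  # 0.0: consistent quotients
```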
188.
The reference class problem is your problem too
Alan Hájek, Synthese (2007) 156(3): 563–585
The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified. The problem is usually regarded as one specifically for the frequentist interpretation of probability and is often considered fatal to it. I argue that versions of the classical, logical, propensity and subjectivist interpretations also fall prey to their own variants of the reference class problem. Other versions of these interpretations apparently evade the problem. But I contend that they are all “no-theory” theories of probability - accounts that leave quite obscure why probability should function as a guide to life, a suitable basis for rational inference and action. The reference class problem besets those theories that are genuinely informative and that plausibly constrain our inductive reasonings and decisions. I distinguish a “metaphysical” and an “epistemological” reference class problem. I submit that we can dissolve the former problem by recognizing that probability is fundamentally a two-place notion: conditional probability is the proper primitive of probability theory. However, I concede that the epistemological problem remains.
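A minimal numerical illustration of the problem (all counts invented): one and the same case receives different frequency-based probabilities under different reference classes, while each conditional probability remains perfectly well defined:

```python
# The same individual belongs to every one of these classes, yet the
# frequency-based probability assigned to their case shifts with the
# class chosen. Each CONDITIONAL probability is unambiguous, which is
# the two-place move the abstract recommends. Counts are invented.
classes = {
    "smokers under 40":      (30, 1000),   # (deaths, class size)
    "smokers, any age":      (90, 1000),
    "under-40s, any habits": (12, 1000),
}
for name, (deaths, size) in classes.items():
    print(f"P(death | {name}) = {deaths / size:.3f}")
```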
189.
It was investigated whether commonly used factor score estimates lead to the same reproduced covariance matrix of observed variables. This was achieved by means of Schönemann and Steiger’s (1976) regression component analysis, since it is possible to compute the reproduced covariance matrices of the regression components corresponding to different factor score estimates. It was shown that Thurstone’s, Ledermann’s, Bartlett’s, Anderson-Rubin’s, McDonald’s, Krijnen, Wansbeek, and Ten Berge’s, as well as Takeuchi, Yanai, and Mukherjee’s score estimates reproduce the same covariance matrix. In contrast, Harman’s ideal variables score estimates lead to a different reproduced covariance matrix.
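For concreteness, here is a minimal numpy sketch of two of the estimators compared; the loadings and the observation are invented, and the paper's regression-component machinery itself is not reproduced:

```python
import numpy as np

# Two classical factor score estimators for the orthogonal factor model
# x = L f + e with Cov(x) = L L' + Psi. All numbers are invented.
L = np.array([[0.8], [0.7], [0.6]])          # loadings: 3 variables, 1 factor
Psi = np.diag([0.36, 0.51, 0.64])            # unique variances
Sigma = L @ L.T + Psi                        # implied covariance of x

W_thurstone = L.T @ np.linalg.inv(Sigma)                    # regression weights
W_bartlett = np.linalg.solve(L.T @ np.linalg.inv(Psi) @ L,
                             L.T @ np.linalg.inv(Psi))      # Bartlett weights

x = np.array([[0.9], [0.5], [0.3]])          # one observation (column vector)
print(W_thurstone @ x)   # Thurstone/regression score estimate
print(W_bartlett @ x)    # Bartlett (conditionally unbiased) estimate
```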
190.
Bayesian estimation of a multilevel IRT model using Gibbs sampling
In this article, a two-level regression model is imposed on the ability parameters in an item response theory (IRT) model. The advantage of using latent rather than observed scores as dependent variables of a multilevel model is that it offers the possibility of separating the influence of item difficulty and ability level and modeling response variation and measurement error. Another advantage is that, contrary to observed scores, latent scores are test-independent, which offers the possibility of using results from different tests in one analysis where the parameters of the IRT model and the multilevel model can be concurrently estimated. The two-parameter normal ogive model is used for the IRT measurement model. It will be shown that the parameters of the two-parameter normal ogive model and the multilevel model can be estimated in a Bayesian framework using Gibbs sampling. Examples using simulated and real data are given.
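A compressed sketch of one Gibbs step for the measurement part follows. It uses Albert (1992)-style data augmentation for the normal ogive, holds the item parameters fixed, and substitutes a simple normal prior for the level-2 regression, so it illustrates the sampling scheme rather than the paper's full estimator:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def gibbs_step_theta(y, a, b, theta, mu=0.0, sigma2=1.0):
    """One data-augmentation update of abilities for the 2PNO model,
    P(y_ij = 1) = Phi(a_j * theta_i - b_j), with theta_i ~ N(mu, sigma2).
    Item parameters (a, b) are held fixed in this sketch; a full sampler
    would also update them and the level-2 regression coefficients."""
    eta = np.outer(theta, a) - b                  # n x k linear predictor
    # 1) Augment: Z_ij ~ N(eta_ij, 1), truncated to (0, inf) if y_ij = 1
    #    and to (-inf, 0) otherwise (bounds standardized around eta).
    lo = np.where(y == 1, -eta, -np.inf)
    hi = np.where(y == 1, np.inf, -eta)
    z = truncnorm.rvs(lo, hi, loc=eta, scale=1.0, random_state=rng)
    # 2) Draw each theta_i from its normal full conditional.
    prec = np.sum(a ** 2) + 1.0 / sigma2
    mean = ((z + b) @ a + mu / sigma2) / prec
    return rng.normal(mean, np.sqrt(1.0 / prec))

# One sweep over a tiny simulated data set:
y = rng.integers(0, 2, size=(5, 3))               # 5 persons, 3 items
a, b = np.array([1.0, 1.2, 0.8]), np.array([0.0, -0.5, 0.5])
print(gibbs_step_theta(y, a, b, theta=np.zeros(5)))
```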