351.
Sebastian Sequoiah-Grayson 《Journal of Philosophical Logic》2008,37(1):67-94
This article provides the first comprehensive reconstruction and analysis of Hintikka’s attempt to obtain a measure of the information yield of deductive inferences. The reconstruction is detailed by necessity, owing to the originality of Hintikka’s contribution. The analysis turns out to be destructive: it dismisses Hintikka’s distinction between surface information and depth information as having no utility for obtaining a measure of the information yield of deductive inferences. Hintikka is right to identify the failure of canonical information theory to give an account of the information yield of deductions as a scandal; however, this article demonstrates that his attempt to provide such an account fails. It fails primarily because it applies only to a restricted set of deductions in the polyadic predicate calculus, and fails to apply at all to deductions in the monadic predicate calculus and the propositional calculus. Corollaries of these facts include a number of undesirable and counterintuitive results concerning the proposed relation of linguistic meaning (and hence synonymy) to surface information. Some of these results contradict Hintikka’s stated aims, whilst others are seen to be false. The consequence is that the problem of obtaining a measure of the information yield of deductive inferences remains open. The failure of Hintikka’s proposal suggests that a purely syntactic approach to the problem be abandoned in favour of an intrinsically semantic one.
352.
Rapid acquisition of preference in concurrent chains when alternatives differ on multiple dimensions of reinforcement
Pigeons responded in a concurrent-chains procedure in which terminal-link reinforcer variables were changed unpredictably across sessions. In Experiment 1, the terminal-link schedules were fixed-interval (FI) 8 s and FI 16 s, and the reinforcer magnitudes were 2 s and 4 s. In Experiment 2, the probability of reinforcement (100% or 50%) was varied with immediacy and magnitude. Multiple-regression analyses showed that pigeons' initial-link response allocation was determined by current-session reinforcer variables, similar to previous studies that varied only immediacy (Grace, Bragason, & McLean, 2003). Sensitivity coefficients were positive and statistically significant for all reinforcer variables in both experiments. Analyses of responding within individual sessions showed that final levels of preference for dominated sessions, in which all reinforcer variables favored the same terminal link, were more extreme than for tradeoff sessions, in which at least one reinforcer variable favored each alternative. This result implies that response allocation was determined by multiple reinforcer variables within individual sessions, consistent with the concatenated matching law. However, in Experiment 2 there was a nonlinear (sigmoidal) relationship between response allocation and relative value, which suggests the possibility that reinforcer variables may interact during acquisition, contrary to the matching law.
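The concatenated matching law invoked in this abstract has a standard additive log-ratio form. The sketch below is a generic illustration only: the function and variable names are ours, and the sensitivity values are invented, not taken from the study.

```python
import math

def log_response_ratio(dims, sens, bias=0.0):
    """Concatenated matching law: predicted log initial-link response
    ratio is a bias term plus a sensitivity-weighted log ratio for each
    reinforcer dimension (immediacy, magnitude, probability, ...)."""
    return bias + sum(a * math.log10(x1 / x2)
                      for a, (x1, x2) in zip(sens, dims))

# Dominated session: immediacy, magnitude, and probability all favor
# terminal link 1 (FI 8 s vs FI 16 s, 4 s vs 2 s, 100% vs 50%).
dominated = log_response_ratio([(1/8, 1/16), (4.0, 2.0), (1.0, 0.5)],
                               sens=[0.5, 0.5, 0.5])

# Trade-off session: immediacy favors link 1, magnitude favors link 2,
# probability is equal.
tradeoff = log_response_ratio([(1/8, 1/16), (2.0, 4.0), (1.0, 1.0)],
                              sens=[0.5, 0.5, 0.5])

assert dominated > tradeoff  # preference more extreme when dimensions agree
```

Because the dimensions contribute additively on this account, preference in dominated sessions comes out more extreme than in trade-off sessions, which is the pattern the study reports.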
353.
The idea that people often make probability judgments by a heuristic short-cut, the representativeness heuristic, has been widely influential, but also criticized for being vague. The empirical trademark of the heuristic is characteristic deviations between normative probabilities and judgments (e.g., the conjunction fallacy, base-rate neglect). In this article the authors contrast two hypotheses concerning the cognitive substrate of the representativeness heuristic, the prototype hypothesis (Kahneman & Frederick, 2002) and the exemplar hypothesis (Juslin & Persson, 2002), in a task especially designed to elicit representativeness effects. Computational modelling and an experiment reveal that representativeness effects are evident early in training and persist longer in a more complex task environment, and that the data are best accounted for by a model implementing the exemplar hypothesis.
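The exemplar hypothesis contrasted here can be illustrated with a generic exemplar model of probability judgment. This is a sketch only: the exponential similarity rule and the parameter value are standard textbook choices, not necessarily those of Juslin and Persson's implementation.

```python
def exemplar_prob(probe, exemplars_a, exemplars_b, s=0.5):
    """Generic exemplar model: the judged probability that the probe
    belongs to category A is its summed similarity to stored A exemplars,
    normalized by total summed similarity. Similarity decays
    exponentially with the number of mismatching features."""
    def sim(x, y):
        mismatches = sum(xi != yi for xi, yi in zip(x, y))
        return s ** mismatches

    sum_a = sum(sim(probe, e) for e in exemplars_a)
    sum_b = sum(sim(probe, e) for e in exemplars_b)
    return sum_a / (sum_a + sum_b)

# A probe identical to one stored A exemplar is judged likely to be A,
# even without any explicit base-rate information.
p = exemplar_prob(probe=(1, 0, 1),
                  exemplars_a=[(1, 0, 1), (1, 1, 1)],
                  exemplars_b=[(0, 1, 0)])
assert p > 0.5
```

Judgments driven by similarity to stored instances rather than by normative probability are exactly what produces representativeness-style deviations in a model of this kind.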
354.
Error probabilities for inference of causal directions
Jiji Zhang 《Synthese》2008,163(3):409-418
A main message from the causal modelling literature in the last several decades is that, under some plausible assumptions, there can be statistically consistent procedures for inferring (features of) the causal structure of a set of random variables from observational data. But whether we can control the error probabilities with a finite sample size depends on the kind of consistency the procedures can achieve. It has been shown that in general, under the standard causal Markov and Faithfulness assumptions, the procedures can only be pointwise but not uniformly consistent without substantial background knowledge. This implies the impossibility of choosing a finite sample size to control the worst-case error probabilities. In this paper, I consider the simpler task of inferring causal directions when the skeleton of the causal structure is known, and establish a similarly negative result concerning the possibility of controlling error probabilities. Although the result is negative in form, it has an interesting positive implication for causal discovery methods.
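The pointwise/uniform distinction driving this result is a quantifier shift: under pointwise consistency, the sample size needed to bound the error probability may depend on the unknown distribution, so no single finite sample size controls the worst case. A standard formulation, in our notation (with $\hat{G}_n$ the structure inferred from $n$ samples and $G(P)$ the true structure for distribution $P$ in the model class $\mathcal{P}$), is:

```latex
\text{Pointwise: } \forall P \in \mathcal{P} \;\; \forall \varepsilon > 0 \;\; \exists N \;\; \forall n \ge N : \;
  \Pr_P\!\left[\hat{G}_n \ne G(P)\right] < \varepsilon
\qquad
\text{Uniform: } \forall \varepsilon > 0 \;\; \exists N \;\; \forall P \in \mathcal{P} \;\; \forall n \ge N : \;
  \Pr_P\!\left[\hat{G}_n \ne G(P)\right] < \varepsilon
```

Under uniform consistency a single $N$ works for every $P$ in the class, which is exactly the guarantee shown to be unattainable here.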
355.
356.
Darrell P. Rowbottom 《Studia Logica》2007,87(1):65-71
It is a common view that the axioms of probability can be derived from the following assumptions: (a) probabilities reflect (rational) degrees of belief; (b) degrees of belief can be measured as betting quotients; and (c) a rational agent must select betting quotients that are coherent. In this paper, I argue that a consideration of reasonable betting behaviour, with respect to the alleged derivation of the first axiom of probability, suggests that (b) and (c) are incorrect. In particular, I show how a rational agent might assign a ‘probability’ of zero to an event which she is sure will occur.
357.
Lombrozo T 《Cognitive psychology》2007,55(3):232-257
What makes some explanations better than others? This paper explores the roles of simplicity and probability in evaluating competing causal explanations. Four experiments investigate the hypothesis that simpler explanations are judged both better and more likely to be true. In all experiments, simplicity is quantified as the number of causes invoked in an explanation, with fewer causes corresponding to a simpler explanation. Experiment 1 confirms that all else being equal, both simpler and more probable explanations are preferred. Experiments 2 and 3 examine how explanations are evaluated when simplicity and probability compete. The data suggest that simpler explanations are assigned a higher prior probability, with the consequence that disproportionate probabilistic evidence is required before a complex explanation will be favored over a simpler alternative. Moreover, committing to a simple but unlikely explanation can lead to systematic overestimation of the prevalence of the cause invoked in the simple explanation. Finally, Experiment 4 finds that the preference for simpler explanations can be overcome when probability information unambiguously supports a complex explanation over a simpler alternative. Collectively, these findings suggest that simplicity is used as a basis for evaluating explanations and for assigning prior probabilities when unambiguous probability information is absent. More broadly, evaluating explanations may operate as a mechanism for generating estimates of subjective probability.
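The finding that a complex explanation needs disproportionate evidence before it is favored falls out of a toy Bayes calculation in which the simpler explanation gets the higher prior. All numbers below are invented for illustration.

```python
def posterior_simple(prior_simple, lik_simple, lik_complex):
    """Posterior probability of the simple (one-cause) explanation when
    it competes with a complex (multi-cause) alternative, by Bayes' rule
    over the two hypotheses."""
    num = prior_simple * lik_simple
    return num / (num + (1 - prior_simple) * lik_complex)

# Simplicity-favoring prior: the evidence makes the complex explanation
# more than twice as likely, yet the simple one is still preferred.
p = posterior_simple(prior_simple=0.8, lik_simple=0.3, lik_complex=0.7)
assert p > 0.5

# Only unambiguous probabilistic evidence flips the preference.
p2 = posterior_simple(prior_simple=0.8, lik_simple=0.1, lik_complex=0.9)
assert p2 < 0.5
```

With a 0.8 prior on the simple hypothesis, the complex alternative must be roughly four times as likely on the evidence before its posterior overtakes the simple one, mirroring the pattern across Experiments 2 to 4.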
358.
Fred Kronz 《Journal of Philosophical Logic》2007,36(4):449-472
A non-monotonic theory of probability is put forward and shown to have applicability in the quantum domain. It is obtained simply by replacing Kolmogorov’s positivity axiom, which places the lower bound for probabilities at zero, with an axiom that reduces that lower bound to minus one. Kolmogorov’s theory of probability is monotonic, meaning that the probability of A is less than or equal to that of B whenever A entails B. The new theory violates monotonicity, as its name suggests; yet many standard theorems are also theorems of the new theory, since Kolmogorov’s other axioms are retained. What is of particular interest is that the new theory can accommodate quantum phenomena (photon polarization experiments) while preserving Boolean operations, unlike Kolmogorov’s theory. Although non-standard notions of probability have been discussed extensively in the physics literature, they have received very little attention in the philosophical literature. One likely explanation for that difference is that their applicability is typically demonstrated in esoteric settings that involve technical complications. That barrier is effectively removed for non-monotonic probability theory by providing it with a homely setting in the quantum domain. Although the initial steps taken in this paper are quite substantial, there is much else to be done, such as demonstrating the applicability of non-monotonic probability theory to other quantum systems and elaborating the interpretive framework that is provisionally put forward here. Such matters will be developed in other works.
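The axiomatic modification described in this abstract, and why it breaks monotonicity, can be stated compactly. This is our reconstruction from the abstract's description, not a quotation of the paper's axioms:

```latex
\text{Kolmogorov's positivity axiom } P(A) \ge 0
\quad\text{is replaced by}\quad
P(A) \ge -1,
\text{with normalization and finite additivity retained:}\quad
P(\Omega) = 1, \qquad
P(A \cup B) = P(A) + P(B) \ \text{whenever}\ A \cap B = \emptyset.
```

Monotonicity follows from additivity together with positivity, since $A \subseteq B$ gives $P(B) = P(A) + P(B \setminus A)$. Once $P(B \setminus A)$ is allowed to be as low as $-1$, $P(B)$ can fall below $P(A)$, which is the monotonicity violation the abstract names.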
359.
Winston R. Sieck Edgar C. Merkle Trisha Van Zandt 《Organizational behavior and human decision processes》2007
The ASC model of choice and confidence in general knowledge proposes that respondents first Assess the familiarity of presented options, and then use the high-familiarity option as a retrieval cue to Search memory for the purposes of Constructing an explanation about why that high-familiarity option is true. The ASC process implies that overconfidence results in part from a tendency to fixate on the high-familiarity option, to the neglect of the other option. If this implication is true, then judgment tasks requiring respondents to evaluate each option independently should result in reduced overconfidence as compared with standard judgment tasks. Two experiments tested this implication, and found that confidence and overconfidence were reduced when respondents evaluated options independently. The findings support the proposal that option fixation contributes to overconfidence, and also clarify the limitations of random error explanations of overconfidence.
360.
Using an experiment with undergraduate students as participants, the authors examined the possible influence of exemplar representativeness on feature induction under category uncertainty. The results showed that the combined conditional probability of the predicted feature relative to the target feature influenced feature induction, whereas exemplar representativeness did not: no representativeness effect emerged. A strategy based on the combined conditional probability of the predicted feature can account for this result.