11.
Overconfidence is often regarded as one of the most prevalent judgment biases. Several studies show that overconfidence can lead to suboptimal decisions by investors, managers, or politicians. Recent research, however, questions whether overconfidence should be regarded as a bias at all, showing that standard “overconfidence” findings can easily be explained by different degrees of knowledge among agents plus a random error in predictions. We contribute to this ongoing research by extensively analyzing interval estimates for knowledge questions, for real financial time series, and for artificially generated charts. We suggest a new method to measure overconfidence in interval estimates, based on the implied probability mass behind a stated prediction interval. We document overconfidence patterns that are difficult to reconcile with rationality of agents and that cannot be explained by differences in knowledge, since differences in knowledge do not exist in our task. Furthermore, we show that overconfidence measures are reliable in the sense that there are stable individual differences in the degree of overconfidence in interval estimates, thereby testing an important assumption of behavioral economics and behavioral finance models. We do this in a “field experiment,” for different levels of subject expertise (students on the one hand, professional traders and investment bankers on the other), over time, using different miscalibration metrics, and with tasks that avoid common weaknesses such as a non-representative selection of trick questions. Copyright © 2012 John Wiley & Sons, Ltd.
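As a minimal illustration of the idea behind interval-based miscalibration measures (our own sketch, not the authors' exact metric), one can compare the nominal coverage of stated prediction intervals with their empirical hit rate; the implied-probability-mass method described above refines this by inferring how much subjective probability a stated interval actually carries:

```python
# Our own sketch, not the paper's metric: compare the nominal coverage of
# stated 90% prediction intervals with the empirical hit rate.
intervals = [(95, 105), (80, 90), (100, 130), (60, 75)]  # stated 90% intervals
realized = [110, 85, 120, 90]                            # realized outcomes

hits = sum(lo <= x <= hi for (lo, hi), x in zip(intervals, realized))
hit_rate = hits / len(realized)
print(f"nominal coverage: 0.90, empirical hit rate: {hit_rate:.2f}")
# An empirical hit rate far below the nominal level signals overconfidence:
# the stated intervals are too narrow for the coverage they claim.
```

Here only two of four realized values fall inside their stated 90% intervals, the pattern a well-calibrated forecaster should produce only rarely.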
12.
According to what is now commonly referred to as “the Equation” in the literature on indicative conditionals, the probability of any indicative conditional equals the probability of its consequent given its antecedent. Philosophers widely agree in their assessment that the triviality arguments of Lewis and others have conclusively shown the Equation to be tenable only at the expense of the view that indicative conditionals express propositions. This study challenges the correctness of that assessment by presenting data that cast doubt on an assumption underlying all triviality arguments.
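The Equation itself is easy to state concretely. The following toy example (our own, using a uniform die) computes the conditional probability that the Equation assigns to an indicative conditional:

```python
# Toy illustration of "the Equation": P(if A then C) = P(C | A).
# Six equiprobable worlds; A = "die shows even", C = "die shows > 3".
worlds = [1, 2, 3, 4, 5, 6]
A = {w for w in worlds if w % 2 == 0}   # {2, 4, 6}
C = {w for w in worlds if w > 3}        # {4, 5, 6}

p_A = len(A) / len(worlds)
p_A_and_C = len(A & C) / len(worlds)
p_C_given_A = p_A_and_C / p_A
print(p_C_given_A)  # 2/3: the value the Equation assigns to "if even, then > 3"
```

The triviality arguments concern whether any proposition-expressing conditional can have this conditional probability as its probability across all rational credence functions, not whether the conditional probability itself is well defined.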
13.
The perception of target events presented in a rapid stream of non-targets is impaired for early target positions but gradually improves thereafter, a phenomenon known as attentional awakening that has been associated with better resource allocation. It is unclear, though, whether improved resource allocation and attentional awakening are a consequence of the temporal context, that is, the position of the target event in the stimulus stream, or are due to a simple expectancy or foreperiod effect. Expectancy is an alternative explanation of attentional awakening because it depends on the a posteriori probabilities, which increase with target position when all target positions are equally likely. To differentiate between the expectancy and temporal context accounts, the a priori (objective) probability of target position was defined such that the a posteriori probability would be high for early and late, and low for intermediate, target positions. EEG was recorded, and the P3 ERP evoked by target events was derived as an indicator of resource allocation. A robust attentional awakening effect was observed. The relationships of performance measures and P3 amplitude with target position, a priori probability, and a posteriori probability were analyzed. In contrast to target position, a posteriori probability had little impact on performance and did not moderate the association between P3 amplitude and performance. The results also indicate that, despite the evident role of target position in resource allocation and in the perception of target events in rapid stimulus streams, target position is likely not the only relevant variable. Nevertheless, the findings suggest that whereas the temporal context of a rapid serial event is a key player in resource allocation to, and perception of, the event, expectancy is of very little consequence.
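The a priori/a posteriori distinction at the heart of this design can be sketched numerically (illustrative numbers, not the study's actual parameters): when all target positions are equally likely a priori, the a posteriori probability, i.e., the probability that the target occurs at position t given that it has not yet appeared, rises steeply with position:

```python
# Illustrative sketch: a posteriori probability of the target at position t
# is the a priori probability conditioned on the target not having
# appeared at any earlier position.
a_priori = [0.2, 0.2, 0.2, 0.2, 0.2]  # five equally likely target positions

a_posteriori = []
remaining = 1.0  # probability that the target has not yet appeared
for p in a_priori:
    a_posteriori.append(p / remaining)
    remaining -= p
print([round(p, 3) for p in a_posteriori])  # rises toward 1.0 at late positions
```

This monotone rise under a uniform a priori distribution is exactly why expectancy mimics attentional awakening, and why the study had to decouple the two probability profiles.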
14.
Abstract

This paper explores the relationship between our attempts to define the aims of analysis and the acceptance of probability in New Physics. It draws attention to the well-documented influence of physicists on both Jung and Bion. It presents an argument for process-based aims rather than the recognition of innate knowledge as an aim. Two processes from different traditions (Jungian and Kleinian) are suggested as central to the aims of analysis: containment and coniunctio. The coniunctio/disiunctio axis in Jung's writing is paralleled with the Ps↔D axis of the post-Kleinians. Clinical material is presented supporting the necessity of differential aims and illustrating development/stasis along this axis. A case is made for analysis to embrace the reality of uncertainty and to work with the psychic obstacles, in patients and in analysts, that result from coming to terms with probability.
15.
Automated reasoning about uncertain knowledge has many applications. One difficulty in developing such systems is the lack of a completely satisfactory integration of logic and probability. We address this problem directly. Expressive languages like higher-order logic are ideally suited for representing and reasoning about structured knowledge, while uncertain knowledge can be modeled by using graded probabilities rather than binary truth values. The main technical problem studied in this paper is the following: given a set of sentences, each having some probability of being true, what probability should be ascribed to other (query) sentences? A natural wish-list is that the probability distribution (i) is consistent with the knowledge base, (ii) allows for a consistent inference procedure, and in particular (iii) reduces to deductive logic in the limit of probabilities 0 and 1, (iv) allows (Bayesian) inductive reasoning and (v) learning in the limit, and in particular (vi) allows confirmation of universally quantified hypotheses/sentences. We translate this wish-list into technical requirements on a prior probability and show that probabilities satisfying all our criteria exist. We also give explicit constructions and several general characterizations of probabilities that satisfy some or all of the criteria, together with various (counter)examples, and derive necessary and sufficient conditions for extending beliefs about finitely many sentences to suitable probabilities over all sentences, in particular least dogmatic or least biased ones. We conclude with a brief outlook on how the developed theory might be used and approximated in autonomous reasoning agents. Our theory is a step towards a globally consistent and empirically satisfactory unification of probability and logic.
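A drastically simplified propositional analogue of the core question (our own sketch; the paper works with higher-order logic) is Nilsson-style probabilistic entailment: given probabilities for some sentences, which probabilities for a query sentence are consistent with them? Here we fix P(A) = 0.7 and P(A → B) = 0.9 (material conditional) and search over distributions on the four truth assignments for the admissible range of P(B):

```python
# Probabilistic-entailment sketch: brute-force the consistent range of P(B)
# given P(A) = 0.7 and P(A -> B) = P(not-A or B) = 0.9, by enumerating
# distributions over the four worlds (A,B) in {TT, TF, FT, FF} on a grid.
import itertools

grid = [i * 0.05 for i in range(21)]
lo, hi = 1.0, 0.0
for p_tt, p_tf, p_ft in itertools.product(grid, repeat=3):
    p_ff = 1.0 - p_tt - p_tf - p_ft
    if p_ff < -1e-9:
        continue  # not a probability distribution
    if abs((p_tt + p_tf) - 0.7) > 1e-9:          # constraint P(A) = 0.7
        continue
    if abs((p_tt + p_ft + p_ff) - 0.9) > 1e-9:   # constraint P(not-A or B) = 0.9
        continue
    p_b = p_tt + p_ft
    lo, hi = min(lo, p_b), max(hi, p_b)
print(round(lo, 2), round(hi, 2))  # 0.6 0.9
```

The knowledge base pins P(B) only to an interval; selecting one distinguished distribution from such a set, e.g. a least dogmatic one, is the kind of choice the paper's criteria are designed to constrain.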
16.
We argue that in spite of their apparent dissimilarity, the methodologies employed in the a priori and a posteriori assessment of probabilities can both be justified by appeal to a single principle of inductive reasoning, viz., the principle of symmetry. The difference between these two methodologies consists in the way in which information about the single-trial probabilities in a repeatable chance process is extracted from the constraints imposed by this principle. In the case of a posteriori reasoning, these constraints inform the analysis by fixing an a posteriori determinant of the probabilities, whereas, in the case of a priori reasoning, they imply certain claims which then serve as the basis for subsequent probabilistic deductions. In a given context of inquiry, the particular form which a priori or a posteriori reasoning may take depends, in large part, on the strength of the underlying symmetry assumed: the stronger the symmetry, the more information can be acquired a priori and the less information about the long-run behavior of the process is needed for an a posteriori assessment of the probabilities. In the context of this framework, frequency-based reasoning emerges as a limiting case of a posteriori reasoning, and reasoning about simple games of chance as a limiting case of a priori reasoning. Between these two extremes, both a priori and a posteriori reasoning can take a variety of intermediate forms.
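One concrete intermediate form between the two extremes (our own illustration, not an example from the paper) is Laplace's rule of succession, which blends an a priori symmetry assumption, a uniform prior over the single-trial probability, with observed frequencies:

```python
# Laplace's rule of succession: with a uniform prior over the single-trial
# probability, observing s successes in n trials gives
# P(next trial succeeds) = (s + 1) / (n + 2).
def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

print(rule_of_succession(0, 0))       # 0.5 -- pure a priori symmetry, no data
print(rule_of_succession(7, 10))      # data begin to dominate the prior
print(rule_of_succession(700, 1000))  # approaches the observed frequency 0.7
```

With no data the symmetry assumption alone fixes the estimate; as trials accumulate, the estimate converges to the relative frequency, recovering the frequency-based limiting case.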
17.
ABSTRACT

The probability of an event occurring and the reward associated with it can both modulate behaviour: response times decrease for stimuli that are either more rewarding or more likely. These two factors can be combined into an Expected Value (EV) associated with the event (i.e., probability of the event × reward magnitude). In four experiments we investigate the effect of reward and probability on both saccadic and manual responses. When tested separately, we find evidence for both a reward effect and a probability effect across response types. When manipulations of reward magnitude and event probability were combined, the probability modulations dominated, and the data were not well accounted for by the EV. However, a post-hoc model that included an additional intrinsic reward associated with responding provided an excellent account of the data. We argue that reward consists of both an explicit and an intrinsic component. In our task, the saccadic and manual responses are linked to the information provided by the targets and the goals of the task, and successful completion of these is in itself rewarding. As a result, targets associated with a higher probability of being presented carry a higher intrinsic reward.
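The contrast between the plain EV model and the post-hoc intrinsic-reward model can be sketched as follows (the function names and numbers are our own hypothetical choices, not the authors' fitted parameters):

```python
# Plain expected value: probability and explicit reward trade off symmetrically.
def ev(p, r):
    return p * r

# Post-hoc variant: an intrinsic reward r_int for successfully completing the
# response is added to the explicit reward before weighting by probability.
def ev_intrinsic(p, r, r_int=10.0):
    return p * (r + r_int)

# Two targets with identical plain EV...
print(ev(0.8, 1.0), ev(0.2, 4.0))                      # 0.8 vs 0.8
# ...but once intrinsic reward is included, probability dominates.
print(ev_intrinsic(0.8, 1.0), ev_intrinsic(0.2, 4.0))  # 8.8 vs 2.8
```

Because r_int multiplies with probability, a large intrinsic component makes value differences track probability far more than explicit reward, matching the reported dominance of probability modulations.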
18.
The Monty Hall dilemma (MHD) is a notorious probability problem with a counterintuitive solution. There is a strong tendency to stay with the initial choice, despite the fact that switching doubles the probability of winning. The current randomised experiment investigates whether feedback over a series of trials improves behavioural performance on the MHD and increases understanding of the problem. Feedback was either conditional or non-conditional, and was given either in frequency format or in percentage format. Results show that people learn to switch most when receiving conditional feedback in frequency format. However, problem understanding does not improve as a consequence of receiving feedback. Our study confirms the dissociation between behavioural performance on the MHD, on the one hand, and actual understanding of the MHD, on the other. We discuss how this dissociation can be understood.
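The claim that switching doubles the probability of winning is easy to check by simulation (our own sketch of the standard game, not the experiment's feedback procedure):

```python
# Monty Hall simulation: switching wins about 2/3 of the time, staying 1/3.
import random

def play(switch, rng):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Host opens a door that is neither the contestant's pick nor the car.
    opened = next(d for d in doors if d != pick and d != car)
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)  # seeded for reproducibility
n = 100_000
wins_switch = sum(play(True, rng) for _ in range(n)) / n
wins_stay = sum(play(False, rng) for _ in range(n)) / n
print(f"switch: {wins_switch:.3f}, stay: {wins_stay:.3f}")  # ~0.667 vs ~0.333
```

Staying wins only when the initial pick was the car (probability 1/3); switching wins in exactly the complementary cases, hence the factor of two.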
19.
John Maynard Keynes claimed that not all probabilities were comparable. Frank Ramsey argued that they were, and that Keynes's views to the contrary rested on a confusion of degree of entailment and degree of belief. We will argue that Keynes and Ramsey largely talked past each other, and yet that there are issues of great significance underlying their dispute. In particular, the simple principle of maximizing expected utility may be seen in a new light as one step of a rich and complex process.
20.
Oaksford and Chater (2014, Thinking and Reasoning, 20, 269–295, doi:10.1080/13546783.2013.877401) critiqued the logic programming (LP) approach to nonmonotonicity and proposed that a Bayesian probabilistic approach to conditional reasoning provides a more empirically adequate theory. The current paper is a reply to Stenning and van Lambalgen's rejoinder to that earlier paper, entitled ‘Logic programming, probability, and two-system accounts of reasoning: a rejoinder to Oaksford and Chater’ (2016), in Thinking and Reasoning. It is argued that causation is basic in human cognition and that explaining how abnormality lists are created in LP requires causal models. Each specific rejoinder to the original critique is then addressed. While many areas of agreement are identified, with respect to the key differences it is concluded that the current evidence favours the Bayesian approach, at least for the moment.