121.
Research on probability judgment has traditionally emphasized that people are susceptible to biases because they rely on "variable substitution": the assessment of normative variables is replaced by the assessment of heuristic, subjective variables. A recent proposal is that many of these biases may instead derive from constraints on cognitive integration, where the capacity-limited and sequential nature of controlled judgment promotes linear additive integration, in contrast to many of the integration rules of probability theory (Juslin, Nilsson, & Winman, 2009). A key implication of this theory is that it should be possible to improve people's probabilistic reasoning by recasting probability problems in logarithmic formats that require additive rather than multiplicative integration. Three experiments demonstrate that recasting tasks in a way that allows people to arrive at the answers by additive integration decreases cognitive biases, and that while people can rapidly learn to produce the correct answers in an additive format, they have great difficulty doing so with a multiplicative format.
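The additive-versus-multiplicative point can be made concrete with a minimal sketch (the probability values are hypothetical, not taken from the experiments): the conjunction rule of probability theory is multiplicative, but the same computation becomes a sum in a logarithmic format.

```python
import math

# Multiplicative format: the conjunction rule requires multiplying
# probabilities (hypothetical component probabilities).
p_a, p_b = 0.8, 0.5
p_conjunction = p_a * p_b  # requires multiplicative integration

# Additive (logarithmic) format: the same task becomes a sum, which the
# theory predicts is easier for capacity-limited, sequential judgment.
log_a, log_b = math.log(p_a), math.log(p_b)
log_conjunction = log_a + log_b

# The two formats agree: exponentiating the sum recovers the product.
assert abs(math.exp(log_conjunction) - p_conjunction) < 1e-12
```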
122.
Probability is usually closely related to Boolean structures, i.e., Boolean algebras or propositional logic. Here we show how probability can be combined with non-Boolean structures, and in particular non-Boolean logics. The basic idea is to describe uncertainty by (Boolean) assumptions, which may or may not be valid. The uncertain information then depends on these uncertain assumptions, scenarios, or interpretations. We propose to describe information in information systems, as introduced by Scott into domain theory. This captures a wide range of systems of practical importance, such as many propositional logics, first-order logic, systems of linear equations, inequalities, etc. It thus covers both symbolic and numerical systems. Assumption-based reasoning then allows one to deduce supporting arguments for hypotheses. A probability structure imposed on the assumptions permits one to quantify the reliability of these supporting arguments and thus to introduce degrees of support for hypotheses. Information systems and related information algebras are formally introduced and studied in this paper as the basic structures for assumption-based reasoning. The probability structure is then formally represented by random variables with values in information algebras. Since these are in general non-Boolean structures, some care must be exercised in order to introduce these random variables. It is shown that this theory leads to an extension of the Dempster–Shafer theory of evidence and that information algebras in fact provide a natural frame for this theory.
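The core mechanism — probabilities on Boolean assumptions inducing degrees of support for hypotheses — can be sketched as follows. The two assumptions, their priors, and the entailment relation here are hypothetical stand-ins for an arbitrary knowledge base, not the paper's formal machinery:

```python
from itertools import product

# Two independent Boolean assumptions with hypothetical prior probabilities.
p = {"a1": 0.7, "a2": 0.4}

# Hypothetical knowledge base: hypothesis h is deducible when a1 holds,
# or when both assumptions fail.
def entails_h(a1, a2):
    return a1 or (not a1 and not a2)

# Degree of support for h: total probability of the scenarios
# (assumption valuations) under which h is deducible.
support = 0.0
for a1, a2 in product([True, False], repeat=2):
    if entails_h(a1, a2):
        pr = (p["a1"] if a1 else 1 - p["a1"]) * (p["a2"] if a2 else 1 - p["a2"])
        support += pr

# 0.28 (T,T) + 0.42 (T,F) + 0.18 (F,F) = 0.88
assert abs(support - 0.88) < 1e-12
```

In Dempster–Shafer terms, this support is the belief assigned to h by the mass distribution that the assumption probabilities induce.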
123.
This paper argues that "articulation" is a core concept of Laclau and Mouffe's post-Marxism, and that its basic meaning is "linking". The paper traces the etymology of "articulation" and the historical evolution of its meaning, and analyzes its basic sense and characteristics in depth. It argues that articulation is the key concept for understanding post-Marxism's logic of contingency: articulation is a discursive, contingent constructive practice that takes place between subject identities. Yet although Laclau and Mouffe's notion of articulation strives to avoid the essentialism and reductionism of modern philosophy, it still cannot escape the suspicion of a refined "idealist" tendency and an upward reductionism, of which we should be clearly aware.
124.
On "True Contradictions" and the Philosophical Value of Paraconsistent Logic
Since the late twentieth century, no other logic has, like paraconsistent logic, attracted extremely sharp criticism even while producing rich research results, as if it were itself a meaningful "contradiction". Paraconsistent systems constructed on the dialetheist view that there are "true contradictions" attempt, remarkably, to accommodate the paradoxes, dialectical contradictions, and even logical contradictions that we encounter. Paraconsistent logic seeks to coexist permanently with paradox; we, however, may accept this logical technique as a way of temporarily setting such contradictions aside in order to develop other aspects of a theory. Paraconsistent logic also has an important descriptive function in modeling the fault-tolerance mechanisms of the human brain, which is of considerable practical significance for making computers more intelligent.
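The defining feature of a paraconsistent logic — that a contradiction does not entail everything — can be illustrated with a small sketch. Priest's three-valued Logic of Paradox (LP) is used here as one standard paraconsistent system; it is an illustrative stand-in, since the abstract does not commit to a particular system:

```python
# LP truth values: 1 = true, 0 = false, 0.5 = "both" (a true contradiction).
T, B, F = 1.0, 0.5, 0.0
DESIGNATED = {T, B}  # in LP, both "true" and "both" are designated

def neg(x): return 1.0 - x
def conj(x, y): return min(x, y)
def disj(x, y): return max(x, y)

# Explosion (from p and not-p, infer any q) fails: with p = B and q = F,
# the premise p ∧ ¬p is designated, yet q is not.
p, q = B, F
assert conj(p, neg(p)) in DESIGNATED
assert q not in DESIGNATED
```

This is exactly the fault-tolerance the abstract points to: a local contradiction is contained instead of trivializing the whole theory.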
125.
The paper studies first-order extensions of classical systems of modal logic (see Chellas, 1980, part III). We focus on the role of the Barcan formulas. It is shown that these formulas correspond to fundamental properties of neighborhood frames. The results have interesting applications in epistemic logic. In particular, we suggest that the proposed models can be used to study monadic operators of probability (Kyburg, 1990) and likelihood (Halpern & Rabin, 1987).
126.
We propose a framework which extends Antitonic Logic Programs [Damásio and Pereira, in: Proc. 6th Int. Conf. on Logic Programming and Nonmonotonic Reasoning, Springer, 2001, p. 748] to an arbitrary complete bilattice of truth-values, where belief and doubt are explicitly represented. Inspired by Ginsberg and Fitting's bilattice approaches, this framework allows a precise definition of important operators found in logic programming, such as explicit and default negation. In particular, it leads to a natural semantic integration of explicit and default negation through the Coherence Principle [Pereira and Alferes, in: European Conference on Artificial Intelligence, 1992, p. 102], according to which explicit negation entails default negation. We then define Coherent Answer Sets and the Paraconsistent Well-Founded Model semantics, generalizing many paraconsistent semantics for logic programs, in particular the Paraconsistent Well-Founded Semantics with eXplicit negation (WFSXp) [Alferes et al., J. Automated Reas. 14 (1) (1995) 93–147; Damásio, PhD thesis, 1996]. The framework is an extension of Antitonic Logic Programs for most cases, and is general enough to capture Probabilistic Deductive Databases, Possibilistic Logic Programming, Hybrid Probabilistic Logic Programs, and Fuzzy Logic Programming. Thus, we have a powerful mathematical formalism for dealing simultaneously with default, paraconsistent, and uncertainty reasoning. Results are provided about how our semantic framework deals with inconsistent information and with its propagation by the rules of the program.
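The simplest complete bilattice with explicitly separated belief and doubt is Belnap's FOUR, which underlies the Ginsberg–Fitting approaches the abstract cites. A minimal sketch (the pair encoding is one common convention, not the paper's own formalization):

```python
# Belnap's bilattice FOUR: each value is a (belief, doubt) pair of bits,
# so belief and doubt are represented independently.
NONE, TRUE, FALSE, BOTH = (0, 0), (1, 0), (0, 1), (1, 1)

# Truth ordering: more belief, less doubt.
def t_join(x, y):  # disjunction
    return (max(x[0], y[0]), min(x[1], y[1]))

def t_meet(x, y):  # conjunction
    return (min(x[0], y[0]), max(x[1], y[1]))

# Knowledge ordering: more of both belief and doubt.
def k_join(x, y):  # combining information from two sources
    return (max(x[0], y[0]), max(x[1], y[1]))

# Explicit negation swaps belief and doubt.
def neg(x):
    return (x[1], x[0])

# Conflicting sources yield the inconsistent value BOTH, which is kept
# and propagated rather than causing explosion.
assert k_join(TRUE, FALSE) == BOTH
assert neg(TRUE) == FALSE and neg(BOTH) == BOTH
```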
127.
Over recent years, various semantics have been proposed for dealing with updates in the setting of logic programs. The availability of different semantics naturally raises the question of which are most adequate to model updates. A systematic approach to facing this question is to identify general principles against which such semantics can be evaluated. In this paper we motivate and introduce one such new principle: the refined extension principle. This principle is complied with by the stable model semantics for (single) logic programs. It turns out that none of the existing semantics for logic program updates, even though they generalise the stable model semantics, comply with this principle. For this reason, we define a refinement of the dynamic stable model semantics for Dynamic Logic Programs that complies with the principle.
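The stable model semantics the abstract takes as its baseline can be sketched by brute force for propositional programs: guess a candidate set of atoms, take the Gelfond–Lifschitz reduct, and keep the candidate if it equals the reduct's least model. The two-rule program below is a hypothetical example, not one from the paper:

```python
from itertools import chain, combinations

# A rule is (head, positive_body, negative_body).
# Hypothetical even loop through negation:  a :- not b.   b :- not a.
program = [("a", [], ["b"]), ("b", [], ["a"])]
atoms = {"a", "b"}

def reduct_consequences(program, candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body meets the
    # candidate, delete remaining negative literals, take the least model.
    rules = [(h, pos) for h, pos, neg in program if not set(neg) & candidate]
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in rules:
            if set(pos) <= model and h not in model:
                model.add(h)
                changed = True
    return model

def stable_models(program, atoms):
    subsets = chain.from_iterable(
        combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    return [set(s) for s in subsets
            if reduct_consequences(program, set(s)) == set(s)]

# The even loop has exactly the two expected stable models.
assert stable_models(program, atoms) == [{"a"}, {"b"}]
```

A semantics for *updates* must then decide which such models survive when a second program revises the first — the question the refined extension principle is meant to adjudicate.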
128.
Popper uses the "Humean challenge" as a justification for his falsificationism. It is claimed that in his basic argument he confuses two different doubts: (a) the Humean doubt (Popper's problem of induction), and (b) the "Popperian" doubt whether – presupposing that there are laws of nature – the laws we accept are in fact valid. Popper's alleged solution of the problem of induction does not solve the problem in a straightforward way (as Levison and Salmon have remarked before). But if Popper's solution of the Humean challenge is re-interpreted as being close to Kant's, it makes sense. Even though Popper explicitly rejects Kant's synthetic judgements a priori, it is claimed here that this is because he misinterprets Kant's argument. Had he understood Kant correctly, he should have been a modern "Kantianer"!
129.
130.
We present a set-theoretic model of the mental representation of classically quantified sentences (All P are Q, Some P are Q, Some P are not Q, and No P are Q). We take inclusion, exclusion, and their negations to be primitive concepts. We show that although these sentences are known to have a diagrammatic expression (in the form of the Gergonne circles) that constitutes a semantic representation, these concepts can also be expressed syntactically in the form of algebraic formulas. We hypothesized that the quantified sentences have an abstract underlying representation common to the formulas and their associated sets of diagrams (models). We derived 9 predictions (3 semantic, 2 pragmatic, and 4 mixed) regarding people's assessment of how well each of the 5 diagrams expresses the meaning of each of the quantified sentences. We report the results from 3 experiments using Gergonne's (1817) circles or an adaptation of Leibniz's (1903/1988) lines as external representations and show them to support the predictions.
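The five Gergonne diagrams correspond to the five possible set relations between the terms P and Q. A minimal sketch (the relation names and example sets are illustrative, not the paper's notation):

```python
# Classify two sets into one of the five Gergonne relations.
def gergonne_relation(P, Q):
    if P == Q:
        return "identity"
    if P < Q:
        return "P strictly included in Q"
    if Q < P:
        return "Q strictly included in P"
    if P & Q:
        return "partial overlap"
    return "disjoint"

# "All P are Q" is compatible with exactly two of the five diagrams:
assert gergonne_relation({1, 2}, {1, 2}) == "identity"
assert gergonne_relation({1}, {1, 2}) == "P strictly included in Q"
# "No P are Q" is compatible only with disjointness:
assert gergonne_relation({1}, {2}) == "disjoint"
```

Each quantified sentence is thus associated with the subset of the five relations that make it true, which is the sense in which the diagrams form its set of models.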