Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Inductive probabilistic reasoning is understood as the application of inference patterns that use statistical background information to assign (subjective) probabilities to single events. The simplest such inference pattern is direct inference: from “70% of As are Bs” and “a is an A” infer that a is a B with probability 0.7. Direct inference is generalized by Jeffrey’s rule and the principle of cross-entropy minimization. Adequately formalizing inductive probabilistic reasoning is an interesting topic for artificial intelligence, as an autonomous system acting in a complex environment may have to base its actions on a probabilistic model of its environment, and the probabilities needed to form this model can often be obtained by combining statistical background information with particular observations, i.e., by inductive probabilistic reasoning. In this paper a formal framework for inductive probabilistic reasoning is developed: syntactically it consists of an extension of the language of first-order predicate logic that can express statements about both statistical and subjective probabilities. Semantics for this representation language are developed that give rise to two distinct entailment relations: a relation ⊨ that models strict, probabilistically valid inferences, and a second relation that models inductive probabilistic inferences. The inductive entailment relation is obtained by implementing cross-entropy minimization in a preferred model semantics. A main objective of our approach is to ensure that complete proof systems exist for both entailment relations. This is achieved by allowing probability distributions in our semantic models that use non-standard probability values. A number of results show that in several important respects the resulting logic behaves just like a logic based on real-valued probabilities alone.
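The direct-inference pattern quoted in this abstract is simple enough to work through concretely. The sketch below is not from the paper; it is a minimal illustration, assuming the statistical statement "70% of As are Bs" and then applying Jeffrey's rule for the case where the evidence "a is an A" is itself uncertain (the value P(B | not A) = 0.1 is an invented background statistic).

```python
# Minimal sketch of direct inference and Jeffrey's rule (illustrative
# only; not the paper's formal semantics).

def direct_inference(stat_prob: float) -> float:
    """From '<stat_prob> of As are Bs' and 'a is an A',
    assign P(a is a B) = stat_prob."""
    return stat_prob

def jeffrey_update(p_b_given_a: float, p_b_given_not_a: float,
                   new_p_a: float) -> float:
    """Jeffrey's rule: revise P(B) when the probability of the
    evidence A shifts to new_p_a (an uncertain observation)."""
    return p_b_given_a * new_p_a + p_b_given_not_a * (1.0 - new_p_a)

# Direct inference: 70% of As are Bs, and a is an A.
print(direct_inference(0.7))            # 0.7

# Jeffrey's rule: we are only 80% sure that a is an A, and we assume
# P(B | not A) = 0.1 as background statistics (invented for the example).
print(jeffrey_update(0.7, 0.1, 0.8))    # 0.7*0.8 + 0.1*0.2 = 0.58
```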

2.
This paper describes the integration of zChaff and MiniSat, currently two leading SAT solvers, with Higher Order Logic (HOL) theorem provers. Both SAT solvers generate resolution-style proofs for (instances of) propositional tautologies. These proofs are verified by the theorem provers. The presented approach significantly improves the provers' performance on propositional problems, and exhibits counterexamples for unprovable conjectures. It is also shown that LCF-style theorem provers can serve as viable proof checkers even for large SAT problems. An efficient representation of the propositional problem in the theorem prover turns out to be crucial; several possible solutions are discussed.
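As a hedged illustration of the kind of checking involved (not the zChaff/MiniSat integration itself), the sketch below replays a single propositional resolution step, with clauses represented as frozensets of integer literals. A trusted checker that replays each step of a solver-emitted resolution proof is the general idea behind verifying such proofs in a theorem prover.

```python
# Illustrative sketch of checking resolution steps, with clauses as
# frozensets of integer literals (positive = variable, negative = its
# negation). This mirrors the idea of replaying a SAT solver's
# resolution proof in a trusted checker; it is not the HOL integration
# described in the paper.

def resolve(c1: frozenset, c2: frozenset, pivot: int) -> frozenset:
    """Resolve c1 and c2 on `pivot`; reject illegal steps."""
    assert pivot in c1 and -pivot in c2, \
        "pivot must occur positively in c1 and negatively in c2"
    return (c1 - {pivot}) | (c2 - {-pivot})

# Derive the empty clause (a refutation) from {p}, {-p, q}, {-q}:
c1, c2, c3 = frozenset({1}), frozenset({-1, 2}), frozenset({-2})
step1 = resolve(c1, c2, 1)      # {2}, i.e. the unit clause {q}
empty = resolve(step1, c3, 2)   # frozenset() -- refutation complete
print(step1, empty)
```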

3.
Ibens, Ortrun. Studia Logica, 2002, 70(2): 241-270
Automated theorem proving amounts to solving search problems in search spaces that are usually enormous. Much research therefore focuses on search-space reduction. Our approach reduces the search space that arises when so-called connection tableau calculi are used for first-order automated theorem proving. It uses disjunctive constraints over first-order equations to compress certain parts of this search space. We present the basics of our constrained-connection-tableau calculi, a constraint extension of connection tableau calculi, and deal with the efficient handling of constraints during the search process. The new techniques are integrated into the automated connection tableau prover Setheo.

4.
Irrelevant clauses in resolution problems increase the search space, making proofs hard to find in a reasonable amount of processor time. Simple relevance filtering methods, based on counting symbols in clauses, improve the success rate for a variety of automatic theorem provers under various initial settings. We have designed these techniques as part of a project to link automatic theorem provers to the interactive theorem prover Isabelle. We have tested them on problems involving thousands of clauses, which yield poor results without filtering. Our methods should be applicable to other tasks where the resolution problems are produced mechanically and where completeness is less important than achieving a high success rate with limited processor time.
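The symbol-counting idea lends itself to a compact sketch. The scoring heuristic below is a hypothetical simplification, not Isabelle's actual filter: it keeps clauses whose symbols overlap sufficiently with the goal's symbols, growing the relevant-symbol set as clauses are accepted.

```python
# Hedged sketch of a symbol-count relevance filter, in the spirit of the
# approach described above (a simplified heuristic, not Isabelle's code).
# Each axiom is paired with the set of function/predicate symbols
# occurring in it.

def relevance_pass(goal_syms: set, clauses: list, threshold: float) -> list:
    """Keep clauses whose fraction of symbols shared with the goal meets
    the threshold; accepted clauses contribute their symbols back, so the
    relevant set grows as the pass proceeds."""
    relevant = set(goal_syms)
    kept = []
    for syms, clause in clauses:
        if not syms:
            continue
        score = len(syms & relevant) / len(syms)
        if score >= threshold:
            kept.append(clause)
            relevant |= syms
    return kept

axioms = [({"append", "nil"}, "append(nil,Xs,Xs)"),
          ({"rev", "append"}, "rev axiom"),
          ({"sqrt", "pi"}, "irrelevant analysis fact")]
# Goal mentions rev and nil; the analysis fact is filtered out.
print(relevance_pass({"rev", "nil"}, axioms, threshold=0.5))
```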

5.
Population counts and longitude and latitude coordinates were estimated for the 50 largest cities in the United States by computational linguistic techniques and by human participants. The mathematical technique Latent Semantic Analysis applied to newspaper texts produced similarity ratings between the 50 cities that allowed for a multidimensional scaling (MDS) of these cities. MDS coordinates correlated with the actual longitude and latitude of these cities, showing that cities that are located together share similar semantic contexts. This finding was replicated using a first-order co-occurrence algorithm. The computational estimates of geographical location as well as population were akin to human estimates. These findings show that language encodes geographical information that language users in turn may use in their understanding of language and the world.
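A minimal sketch of the MDS step follows, assuming a precomputed city-by-city semantic similarity matrix (the values here are invented; the paper derived them with LSA over newspaper text).

```python
# Hedged sketch: recover 2-D coordinates from pairwise semantic
# dissimilarities via multidimensional scaling (MDS). The similarity
# values are made up for illustration.
import numpy as np
from sklearn.manifold import MDS

cities = ["New York", "Los Angeles", "Chicago", "Houston"]
similarity = np.array([[1.0, 0.3, 0.6, 0.4],
                       [0.3, 1.0, 0.4, 0.5],
                       [0.6, 0.4, 1.0, 0.5],
                       [0.4, 0.5, 0.5, 1.0]])
dissimilarity = 1.0 - similarity

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
for city, (x, y) in zip(cities, coords):
    print(f"{city}: ({x:+.2f}, {y:+.2f})")
# The recovered axes are arbitrary up to rotation and reflection; the
# paper correlated them with actual longitude and latitude.
```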

6.
THINKER is an automated natural deduction first-order theorem-proving program. This paper reports on how it was adapted to prove theorems in modal logic. The method employed is an indirect semantic method, obtained by considering the semantic conditions involved in being a valid argument in these modal logics. The method is extended from propositional modal logic to predicate modal logic, and issues concerning the domain of quantification and existence in a world's domain are discussed. Finally, we examine the issues involved in adding identity to the theorem prover in the realm of modal predicate logic. Various alternatives are discussed.
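One well-known way to make a first-order prover handle modal logic is the standard translation over Kripke semantics; the sketch below shows that general idea, not THINKER's exact encoding, which the abstract only characterizes as an "indirect semantic method". Formulas are represented as invented nested tuples for brevity.

```python
# Hedged sketch of the standard translation from propositional modal
# logic into first-order logic over Kripke semantics -- the general idea
# behind indirect semantic methods, not THINKER's exact encoding.
from itertools import count

def standard_translation(formula, world="w0", fresh=None):
    """Translate a modal formula (nested tuples) into a FOL string;
    `fresh` generates new world variables for each box."""
    fresh = fresh or count(1)
    kind = formula[0]
    if kind == "atom":
        return f"P_{formula[1]}({world})"
    if kind == "not":
        return f"~({standard_translation(formula[1], world, fresh)})"
    if kind == "->":
        a = standard_translation(formula[1], world, fresh)
        b = standard_translation(formula[2], world, fresh)
        return f"({a} -> {b})"
    if kind == "box":
        v = f"w{next(fresh)}"
        body = standard_translation(formula[1], v, fresh)
        return f"(forall {v}. (R({world},{v}) -> {body}))"
    raise ValueError(f"unknown connective: {kind}")

# Axiom K: [](p -> q) -> ([]p -> []q), rendered as a first-order formula.
K = ("->", ("box", ("->", ("atom", "p"), ("atom", "q"))),
           ("->", ("box", ("atom", "p")), ("box", ("atom", "q"))))
print(standard_translation(K))
```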

7.
Recent years have seen considerable interest in procedures for computing finite models of first-order logic specifications. One of the major paradigms, MACE-style model building, is based on reducing model search to a sequence of propositional satisfiability problems and applying (efficient) SAT solvers to them. A problem with this method is that it does not scale well, because the propositional formulas to be considered may become very large. We propose instead to reduce model search to a sequence of satisfiability problems consisting of function-free first-order clause sets, and to apply (efficient) theorem provers capable of deciding such problems. The main appeal of this method is that first-order clause sets grow more slowly than their propositional counterparts, thus allowing for more space-efficient reasoning. In this paper we describe our proposed reduction in detail and discuss how it is integrated into the Darwin prover, our implementation of the Model Evolution calculus. The results are general, however, as our approach can be used in principle with any system that decides the satisfiability of function-free first-order clause sets. To demonstrate its practical feasibility, we tested our approach on all satisfiable problems from the TPTP library. Our methods can solve a significant subset of these problems, which overlaps but is not included in the subset of problems solvable by state-of-the-art finite model builders such as Paradox and Mace4.
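The MACE-style baseline that this paper improves upon can be sketched in miniature: fix a domain size n, ground the first-order clauses over the domain, and search for a satisfying propositional assignment. The clause set below is invented, and a real system would hand the ground clauses to a SAT solver rather than brute-force them.

```python
# Hedged sketch of MACE-style flattening: look for a finite model of
# size n by grounding first-order clauses over the domain {0..n-1} and
# testing propositional assignments. Brute force keeps the example
# self-contained; a real system calls a SAT solver at this point.
from itertools import product

def find_model(n: int):
    # Invented clauses over unary predicates P, Q:
    #   forall x. P(x) or Q(x)
    #   forall x. not P(x) or not Q(x)
    # Ground atoms P(0..n-1), Q(0..n-1) give 2n propositional variables.
    for bits in product([False, True], repeat=2 * n):
        P, Q = bits[:n], bits[n:]
        if all((P[x] or Q[x]) and not (P[x] and Q[x]) for x in range(n)):
            return {"P": P, "Q": Q}
    return None  # no model of this size

print(find_model(2))  # e.g. {'P': (False, False), 'Q': (True, True)}
```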

8.
The System for Automated Deduction (SAD) is developed in the framework of the Evidence Algorithm research project and is intended for the automated processing of mathematical texts. The SAD system works on three levels of reasoning: (a) the level of text presentation, where proofs are written in a formal natural-like language for subsequent verification; (b) the level of foreground reasoning, where a particular theorem-proving problem is simplified and decomposed; and (c) the level of background deduction, where exhaustive combinatorial inference search in classical first-order logic is applied to prove end subgoals.

We present an overview of SAD, describing the ideas behind the project, the system's design, and the process of problem formalization in the fashion of SAD. We show that the choice of classical first-order logic as the background logic of SAD is not too restrictive. For example, we can handle binders like Σ or lim without resorting to second-order logic or to a full-powered set theory. We illustrate our approach with a series of examples, in particular with a classical problem.


9.
10.
Sledgehammer is a tool that harnesses external first-order automatic theorem provers (ATPs) to discharge interactive proof obligations arising in Isabelle/HOL. We extended it with LEO-II and Satallax, the two most prominent higher-order ATPs, improving its performance on higher-order problems. To explore their usefulness, these ATPs are measured against first-order ATPs and built-in Isabelle tactics on a variety of benchmarks from Isabelle and the TPTP library. Sledgehammer provides an ideal test bench for individual features of LEO-II and Satallax, revealing areas for improvement.

11.
In the recent paper “Naive modus ponens”, Zardini presents some brief considerations against an approach to semantic paradoxes that rejects the transitivity of entailment. The problem with the approach is, according to Zardini, that the failure of a meta-inference closely resembling modus ponens clashes both with the logical idea of modus ponens as a valid inference and with the semantic idea of the conditional as requiring that a true conditional cannot have a true antecedent and a false consequent. I respond on behalf of the non-transitive approach. I argue that the meta-inference in question is independent of the logical idea of modus ponens, and that the semantic idea of the conditional as formulated by Zardini is inadequate for his purposes because it is spelled out in a vocabulary not suitable for evaluating the adequacy of the conditional in semantics for non-transitive entailment. I proceed to generalize the semantic idea of the conditional and show that the most popular semantics for non-transitive entailment satisfies the new formulation.

12.
Floris Roelofsen. Synthese, 2013, 190(1): 79-102
In classical logic, the proposition expressed by a sentence is construed as a set of possible worlds, capturing the informative content of the sentence. However, sentences in natural language are not only used to provide information, but also to request information. Thus, natural language semantics requires a logical framework whose notion of meaning does not only embody informative content, but also inquisitive content. This paper develops the algebraic foundations for such a framework. We argue that propositions, in order to embody both informative and inquisitive content in a satisfactory way, should be defined as non-empty, downward closed sets of possibilities, where each possibility in turn is a set of possible worlds. We define a natural entailment order over such propositions, capturing when one proposition is at least as informative and inquisitive as another, and we show that this entailment order gives rise to a complete Heyting algebra, with meet, join, and relative pseudo-complement operators. Just as in classical logic, these semantic operators are then associated with the logical constants in a first-order language. We explore the logical properties of the resulting system and discuss its significance for natural language semantics. We show that the system essentially coincides with the simplest and most well-understood existing implementation of inquisitive semantics, and that its treatment of disjunction and existentials also concurs with recent work in alternative semantics. Thus, our algebraic considerations do not lead to a wholly new treatment of the logical constants, but rather provide more solid foundations for some of the existing proposals.
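A toy-scale, hedged illustration of the algebra described (not the paper's formalism): represent possibilities as frozensets of worlds and propositions as downward-closed sets of possibilities; meet and join are then plain intersection and union, and entailment is the subset order.

```python
# Toy sketch of inquisitive-semantics propositions over a finite set of
# worlds: a possibility is a frozenset of worlds, and a proposition is a
# non-empty, downward closed set of possibilities. Meet/join are
# intersection/union, matching the Heyting-algebra structure described
# above, restricted to this finite approximation.
from itertools import combinations

def downward_close(alternatives):
    """Close a set of maximal possibilities under subsets."""
    closed = set()
    for alt in alternatives:
        for r in range(len(alt) + 1):
            closed.update(frozenset(s) for s in combinations(alt, r))
    return closed

w1, w2 = "w1", "w2"
# "p" is true in w1 and w2 -> informative but not inquisitive:
P = downward_close([frozenset({w1, w2})])
# "p or q" raises the issue of which disjunct holds -> two alternatives:
P_or_Q = downward_close([frozenset({w1}), frozenset({w2})])

meet = P & P_or_Q   # conjunction
join = P | P_or_Q   # disjunction
# Entailment is the subset order: the more inquisitive proposition
# entails the less inquisitive one here.
print(P_or_Q <= P)                     # True
print(sorted(map(sorted, meet)))       # [[], ['w1'], ['w2']]
```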

13.
Different reasoning systems have different strengths and weaknesses, and often it is useful to combine these systems to gain as much as possible from their strengths and retain as little as possible from their weaknesses. Of particular interest is the integration of first-order and higher-order techniques. First-order reasoning systems, on the one hand, have reached considerable strength in some niches, but in many areas of mathematics they still cannot reliably solve relatively simple problems, for example, when reasoning about sets, relations, or functions. Higher-order reasoning systems, on the other hand, can solve problems of this kind automatically. But the complexity inherent in their calculi prevents them from solving a whole range of problems. However, while many problems cannot be solved by any one system alone, they can be solved by a combination of these systems. We present a general agent-based methodology for integrating different reasoning systems. It provides a generic integration framework which facilitates cooperation between diverse reasoners, but can also be refined to enable more efficient, specialist integrations. We empirically evaluate its usefulness, effectiveness and efficiency by case studies involving the integration of first-order and higher-order automated theorem provers, computer algebra systems, and model generators.

14.
Semantic associations and elaborative inference
In this article, a theoretical framework is proposed for the inference processes that occur during reading. According to the framework, inferences can vary in the degree to which they are encoded. This notion is supported by three experiments in this article that show that degree of encoding can depend on the amount of semantic-associative information available to support the inference processes. In the experiments, test words that express possible inferences from texts are presented for recognition. When testing is delayed, with other texts and test items intervening between a text and its test word, performance depends on the amount of semantic-associative information in the text. If the inferences represented by the test words are not supported by semantic associates in the text, they appear to be only minimally encoded (replicating McKoon & Ratcliff, 1986), but if they are supported by semantic associates, they are strongly encoded. With immediate testing, only 250 ms after the text, performance is shown to depend on semantic-associative information, not on textual information. This suggests that it is the fast availability of semantic information that allows it to support inference processes.

15.
A new theory of problem solving is presented, which embeds problem solving in the theory of action; in this theory, a problem is just a difficult action. Making this work requires a sophisticated language for talking about plans and their execution. This language allows a broad range of types of action, and can also be used to express rules for choosing and scheduling plans. To ensure flexibility, the problem solver consists of an interpreter driven by a theorem prover which actually manipulates formulas of the language. Many examples of the use of the system are given, including an extended treatment of the world of blocks. Limitations and extensions of the system are discussed at length. It is concluded that a rule-based problem solver is necessary and feasible, but that much more work remains to be done on the underlying theory of planning and acting.

16.
Cognition and Instruction, 2013, 31(1): 49-101
We present information-processing models of different levels of knowledge for understanding the language used in the texts of arithmetic word problems, for forming semantic models of the situations that the texts describe, and for making the inferences needed to answer the questions in the problems. In the simplest cognitive models, inferences are limited to properties of sets that exist in a semantic model. In more complex cognitive models, relations between sets are represented internally and support more complex reasoning. Performance on three sets of problems by kindergarten through third-grade students was used to test the models. Global tests provided support for the models; these included measures of scalability and the frequencies of individual children's patterns of solutions that agreed with the models' predictions. Performance on problems involving combinations and changes of sets was explained better by the cognitive models than performance on problems involving comparisons. Comparisons may require a more advanced understanding of numbers as values of operators rather than only as cardinalities of sets.
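A hedged, toy-scale sketch of the simplest level of model described above: a semantic model tracks sets and their cardinalities, and inference is limited to operations on sets that exist in the model. The problem text and class design are invented for illustration, not taken from the paper.

```python
# Toy sketch of the simplest cognitive model: the semantic model holds
# only sets (as name -> cardinality), and answering a question means
# reading off or updating a set that already exists in the model.

class SemanticModel:
    def __init__(self):
        self.sets = {}                    # set name -> cardinality

    def exists(self, name, size):         # "Joe had 3 marbles."
        self.sets[name] = size

    def change_in(self, name, amount):    # "Tom gave him 5 more."
        self.sets[name] += amount

    def how_many(self, name):             # "How many marbles now?"
        return self.sets[name]

m = SemanticModel()
m.exists("joe_marbles", 3)
m.change_in("joe_marbles", 5)
print(m.how_many("joe_marbles"))  # 8
```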

17.
Embodied experience and linguistic meaning
What role do people's embodied experiences play in their use and understanding of meaning? Most theories in cognitive science view meaning in terms of propositional structures that may be combined to form higher-order complexes in representing the meanings of conversations and texts. A newer approach seeks to capture meaning in terms of high-dimensional semantic space. Both views reflect the idea that meaning is best understood as abstract and disembodied symbols. My aim in this article is to make the case for an embodied view of linguistic meaning. This view challenges traditional approaches to linguistic meaning (although it may not be entirely incompatible with them). I discuss several new lines of research from both linguistics and psychology that explore the importance of embodied perception and action in people's understanding of words, phrases, and texts. These data provide strong evidence for the idea that significant aspects of thought and language arise from, and are grounded in, embodiment.

18.
This paper presents the architecture and functionality of a logic prover designed for question answering. The approach transforms questions and answer passages into logic representations based on syntactic, semantic, and contextual information. World knowledge supplements the linguistic, ontological, and temporal axioms supplied to the prover, which renders a deep understanding of the relationship between the question and the answer text. The trace of the proofs provides a basis for generating human-comprehensible answer justifications. The results show that the prover boosts the performance of the Question Answering system on TREC 2004 questions by 12%.

19.
20.
Processing language requires the retrieval of concepts from memory in response to an ongoing stream of information. This retrieval is facilitated if one can infer the gist of a sentence, conversation, or document and use that gist to predict related concepts and disambiguate words. This article analyzes the abstract computational problem underlying the extraction and use of gist, formulating this problem as a rational statistical inference. This leads to a novel approach to semantic representation in which word meanings are represented in terms of a set of probabilistic topics. The topic model performs well in predicting word association and the effects of semantic association and ambiguity on a variety of language-processing and memory tasks. It also provides a foundation for developing more richly structured statistical models of language, as the generative process assumed in the topic model can easily be extended to incorporate other kinds of semantic and syntactic structure.
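The core prediction step of a topic model can be shown in a few lines: the probability of a word given a document's gist is obtained by marginalizing over topics. The two distributions below are invented toy numbers; a real model would learn them from a corpus (e.g., via Gibbs sampling for LDA).

```python
# Hedged toy sketch of the topic-model prediction described above:
#   P(word | doc) = sum over topics z of P(word | z) * P(z | doc).
# The distributions are invented for illustration, not learned.

p_word_given_topic = {
    "finance": {"bank": 0.30, "money": 0.40, "river": 0.00, "water": 0.30},
    "nature":  {"bank": 0.10, "money": 0.00, "river": 0.50, "water": 0.40},
}

def p_word(word, p_topic_given_doc):
    """Marginalize over topics to predict a word from a document's gist."""
    return sum(p_word_given_topic[z][word] * pz
               for z, pz in p_topic_given_doc.items())

# A document whose gist is mostly financial disambiguates "bank":
finance_doc = {"finance": 0.9, "nature": 0.1}
nature_doc = {"finance": 0.1, "nature": 0.9}
print(p_word("bank", finance_doc))  # 0.9*0.30 + 0.1*0.10 = 0.28
print(p_word("river", nature_doc))  # 0.1*0.00 + 0.9*0.50 = 0.45
```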
