Probabilities on Sentences in an Expressive Logic

Authors: Marcus Hutter, John W. Lloyd, Kee Siong Ng, William T.B. Uther

Affiliations:
1. Research School of Computer Science, The Australian National University, Australia
2. EMC Greenplum and The Australian National University, Australia
3. National ICT Australia and University of New South Wales, Australia

Abstract: Automated reasoning about uncertain knowledge has many applications. One difficulty when developing such systems is the lack of a completely satisfactory integration of logic and probability. We address this problem directly. Expressive languages like higher-order logic are ideally suited for representing and reasoning about structured knowledge. Uncertain knowledge can be modeled by using graded probabilities rather than binary truth values. The main technical problem studied in this paper is the following: Given a set of sentences, each having some probability of being true, what probability should be ascribed to other (query) sentences? A natural wish-list is that the probability distribution (i) is consistent with the knowledge base, (ii) allows for a consistent inference procedure, and in particular (iii) reduces to deductive logic in the limit of probabilities being 0 and 1, (iv) allows (Bayesian) inductive reasoning and (v) learning in the limit, and in particular (vi) allows confirmation of universally quantified hypotheses/sentences. We translate this wish-list into technical requirements for a prior probability and show that probabilities satisfying all our criteria exist. We also give explicit constructions and several general characterizations of probabilities that satisfy some or all of the criteria, together with various (counter)examples. We further derive necessary and sufficient conditions for extending beliefs about finitely many sentences to suitable probabilities over all sentences, in particular least dogmatic or least biased ones. We conclude with a brief outlook on how the developed theory might be used and approximated in autonomous reasoning agents. Our theory is a step towards a globally consistent and empirically satisfactory unification of probability and logic.

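The central question stated in the abstract — given probabilities on some sentences, what probability should a query sentence receive? — can be illustrated in the simplest propositional setting. The sketch below is an illustrative toy only, not the paper's higher-order construction: assuming a knowledge base P(A) = 0.8 and P(A → B) = 0.9 over two atoms, it computes the tight bounds on the query P(B) by enumerating the vertices of the polytope of probability distributions over the four truth assignments.

```python
from fractions import Fraction as F
from itertools import combinations

# Worlds over atoms A, B: (A,B) in {TT, TF, FT, FF}, with weights w1..w4.
# Knowledge base (hypothetical toy numbers): P(A) = 8/10, P(A -> B) = 9/10.
# A -> B is false only in world TF. Query: bounds on P(B) = w1 + w3.
rows = [
    ([F(1), F(1), F(1), F(1)], F(1)),        # weights sum to 1
    ([F(1), F(1), F(0), F(0)], F(8, 10)),    # P(A): worlds TT, TF
    ([F(1), F(0), F(1), F(1)], F(9, 10)),    # P(A -> B): worlds TT, FT, FF
]
objective = [F(1), F(0), F(1), F(0)]         # P(B): worlds TT, FT

def solve3(cols):
    """Solve the 3x3 system restricted to basic columns `cols`; None if singular."""
    A = [[rows[i][0][j] for j in cols] for i in range(3)]
    b = [rows[i][1] for i in range(3)]
    for p in range(3):                        # Gauss-Jordan with partial pivoting
        piv = next((r for r in range(p, 3) if A[r][p] != 0), None)
        if piv is None:
            return None
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(3):
            if r != p and A[r][p] != 0:
                f = A[r][p] / A[p][p]
                A[r] = [a - f * c for a, c in zip(A[r], A[p])]
                b[r] -= f * b[p]
    return [b[i] / A[i][i] for i in range(3)]

# The extrema of a linear objective over this polytope occur at its vertices,
# i.e. at basic feasible solutions: pick 3 of the 4 weights, set the rest to 0.
values = []
for cols in combinations(range(4), 3):
    sol = solve3(cols)
    if sol is None or any(v < 0 for v in sol):
        continue                              # singular or infeasible basis
    w = [F(0)] * 4
    for j, v in zip(cols, sol):
        w[j] = v
    values.append(sum(o * x for o, x in zip(objective, w)))

print(min(values), max(values))               # P(B) lies in [7/10, 9/10]
```

Any probability consistent with this knowledge base must assign P(B) between 0.7 and 0.9; the additional criteria in the abstract (non-dogmatism, convergence, confirmation of universal hypotheses) then constrain which single value within such bounds a prior should pick.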
Keywords: Higher-order logic; Probability on sentences; Gaifman; Cournot; Induction; Confirmation; Learning; Prior; Knowledge; Entropy
This article is indexed in ScienceDirect and other databases.