Abstract: | Lexical ambiguity can be syntactic, if it involves more than one grammatical category for a single word, or semantic, if more than one meaning can be associated with a word. In this article we discuss the application of a Bayesian-network model to the resolution of lexical ambiguities of both types. The network we propose comprises a parsing subnetwork, which can be constructed automatically for any context-free grammar, and a subnetwork for semantic analysis, which, in the spirit of Fillmore's (1968) case grammars, seeks to fill the required cases of all candidates for the verb of the sentence. Solving for the highest joint probability of the variables conditioned on the evidence presented to the network yields the most likely verb candidate with its meaning, along with its cases and their respective meanings. This is achieved by fixing the values of all evidence nodes concurrently and then performing a stochastic simulation in which the remaining nodes are updated probabilistically with a high degree of parallelism. The process of disambiguation is directed neither by the syntax nor by the semantics, but rather by the interrelation between the two subnetworks. The Bayesian-network model allows us to express this interrelation between the two subnetworks and among their constituents in a direct and rigorous way that, together with the convergence properties of the stochastic simulation, yields a very robust model. |
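The following is a minimal sketch, not the authors' implementation, of the kind of stochastic simulation the abstract describes: evidence nodes are clamped, the remaining nodes are repeatedly resampled from their conditional distributions given their Markov blankets (Gibbs sampling), and the most frequently visited joint assignment is read off as the most likely interpretation. The tiny three-node network, its probability tables, and all names (SENSE, ROLE, READING) are invented for illustration only.

```python
# Hypothetical toy network:  SENSE -> READING <- ROLE
# READING stands in for an observed (evidence) node, e.g. a word form;
# SENSE and ROLE stand in for hidden syntactic/semantic choices.
import random
from collections import Counter

P_SENSE = {True: 0.4, False: 0.6}            # prior over a hidden sense
P_ROLE = {True: 0.5, False: 0.5}             # prior over a hidden case role
P_READING = {                                # P(reading | sense, role)
    (True, True): 0.9, (True, False): 0.6,
    (False, True): 0.3, (False, False): 0.1,
}

def p_reading(reading, sense, role):
    p = P_READING[(sense, role)]
    return p if reading else 1.0 - p

def resample_sense(state):
    """Sample SENSE from P(sense | Markov blanket), with READING clamped."""
    weights = {s: P_SENSE[s] * p_reading(state["reading"], s, state["role"])
               for s in (True, False)}
    return random.random() < weights[True] / sum(weights.values())

def resample_role(state):
    """Sample ROLE from P(role | Markov blanket), with READING clamped."""
    weights = {r: P_ROLE[r] * p_reading(state["reading"], state["sense"], r)
               for r in (True, False)}
    return random.random() < weights[True] / sum(weights.values())

def gibbs(n_samples=20000, burn_in=1000):
    # Evidence node READING is fixed; hidden nodes are updated stochastically.
    state = {"reading": True, "sense": True, "role": True}
    counts = Counter()
    for t in range(n_samples):
        state["sense"] = resample_sense(state)
        state["role"] = resample_role(state)
        if t >= burn_in:
            counts[(state["sense"], state["role"])] += 1
    # Most frequently visited (sense, role) configuration and its count.
    return counts.most_common(1)[0]

if __name__ == "__main__":
    print(gibbs())
```

In the model described in the article, the hidden nodes would include the parsing subnetwork's syntactic variables and the semantic subnetwork's case-role variables, so each resampling step lets syntactic and semantic evidence influence one another rather than letting either level drive the disambiguation alone.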