Graphical models and variational approximation
Michael I. Jordan
University of California, Berkeley
Abstract
Probabilistic models have become increasingly prominent in
artificial intelligence in recent years. General inference
algorithms have been discovered that apply to a wide class of
interesting and useful models known as "graphical models"
(a class that includes Bayesian networks and Markov random
fields). These
algorithms essentially treat probability theory as a combinatorial
calculus, and make creative use of graph theory to stave off
the inevitable exponential growth in complexity. There is another
feature of probability theory, however, that recommends it as
a general tool for AI. Probability involves taking averages, and
when averaging is present, complex models can be probabilistically
simple. In this talk, I discuss variational methodology, which
aims to leverage averaging as a computational tool. Indeed, the
variational approach provides a general framework for approximate
inference in graphical models. I will discuss applications of
these ideas to a variety of probabilistic graphical models,
including layered networks with logistic or noisy-OR nodes,
coupled hidden Markov models, factorial hidden Markov models,
hidden Markov decision trees, and hidden Markov models with
long-range dependencies.
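
As a minimal sketch of the central bound (notation is
illustrative: x denotes the evidence, z the hidden variables of a
graphical model, and q a tractable "variational" distribution),
Jensen's inequality lets the average pass through the logarithm:

    \log p(x) = \log \sum_z q(z) \, \frac{p(x,z)}{q(z)}
              \geq \sum_z q(z) \, \log \frac{p(x,z)}{q(z)} .

Maximizing the right-hand side over q within a tractable family
(for example, a fully factorized mean-field family) tightens the
bound and turns approximate inference into an optimization
problem.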