Parameter Learning and Convergent Inference for Dense Random Fields

Philipp Krähenbühl and Vladlen Koltun

Stanford University

ICML 2013

Abstract:
Dense random fields are models in which all pairs of variables are directly connected by pairwise potentials. It has recently been shown that mean field inference in dense random fields can be performed efficiently and that these models enable significant accuracy gains in computer vision applications. However, parameter estimation for dense random fields is still poorly understood. In this paper, we present an efficient algorithm for learning parameters in dense random fields. All parameters are estimated jointly, thus capturing dependencies between them. We show that gradients of a variety of loss functions over the mean field marginals can be computed efficiently. The resulting algorithm learns parameters that directly optimize the performance of mean field inference in the model. As a supporting result, we present an efficient inference algorithm for dense random fields that is guaranteed to converge.
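To give a concrete picture of the inference the abstract refers to, here is a minimal sketch of mean field updates for a dense CRF with a Gaussian pairwise kernel and Potts compatibility. Note that the paper's key efficiency result relies on high-dimensional filtering to evaluate the message passing in linear time; this toy version computes the O(N^2) pairwise sums directly, so it only illustrates the update equations on tiny problems. All names, shapes, and the single-kernel choice are illustrative, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    """Row-wise softmax, numerically stabilized."""
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mean_field(unary, feats, w=1.0, theta=1.0, n_iters=10):
    """Naive mean field inference for a dense CRF (illustrative only).

    unary: (N, L) unary potentials (negative log-probabilities).
    feats: (N, D) feature vectors (e.g. position + color).
    Uses a single Gaussian kernel and Potts compatibility [l != l'].
    """
    N, L = unary.shape
    # Gaussian kernel over all pairs of feature vectors -- this O(N^2)
    # step is what the paper replaces with fast high-dimensional filtering.
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * theta ** 2))
    np.fill_diagonal(K, 0.0)          # no message from a variable to itself

    Q = softmax(-unary)               # initialize marginals from the unaries
    for _ in range(n_iters):
        msg = K @ Q                   # message passing: sum_j k(f_i, f_j) Q_j(l)
        # Potts compatibility: the penalty for label l is the incoming
        # mass on all labels other than l.
        pairwise = w * (msg.sum(axis=1, keepdims=True) - msg)
        Q = softmax(-unary - pairwise)
    return Q
```

For example, two nearby points with confident, agreeing unaries will pull an ambiguous neighbor toward their label, which is the smoothing behavior that makes these models effective for dense labeling tasks.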

Files:
Paper, supplementary material.

Code:
here. This version includes both the learning and inference code. It is somewhat faster than the previous one, but has additional dependencies on Eigen and liblbfgs (both included).

Older version:
You can find an older version of the code, along with all the unary potentials, here.