Upsampling Range Data in Dynamic Environments
CVPR 2010
Abstract
We present a flexible method for fusing information from optical and
range sensors based on an accelerated high-dimensional filtering
approach. Our system
takes as input a sequence of monocular camera images as well as a
stream of sparse range measurements as obtained from a laser or other sensor system.
In contrast with existing approaches, we do not assume that
the depth and color data streams have the same data rates or that the observed scene
is fully static. Our method produces a dense, high-resolution depth
map of the scene, automatically generating confidence values for every interpolated depth point.
We describe how to integrate priors on object shape, motion and appearance and how to achieve
an efficient implementation using parallel processing hardware such
as GPUs.
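To make the core idea concrete, here is a minimal, self-contained sketch of joint bilateral depth upsampling: each output depth is a weighted average of nearby sparse range samples, with weights combining spatial proximity and similarity in a color guide image, and the accumulated filter weight doubling as a confidence value. This brute-force C++ version is illustrative only; it is not our release code (which filters in a higher-dimensional space, including time, using the accelerated data structures cited below), and all names and parameters in it are made up for the example.

#include <cmath>
#include <vector>

// Densify a sparse depth map using a grayscale guide image.
// depth[i] < 0 marks a pixel with no range measurement.
// On return, confidence[i] holds the accumulated filter weight at pixel i,
// so small values flag poorly supported interpolations.
std::vector<float> jointBilateralUpsample(
    const std::vector<float>& guide,   // grayscale guide image, size w*h
    const std::vector<float>& depth,   // sparse depth, -1 where missing
    std::vector<float>& confidence,    // output: per-pixel confidence
    int w, int h,
    int radius = 8,                    // spatial window half-size (pixels)
    float sigmaS = 4.0f,               // spatial Gaussian std. dev. (pixels)
    float sigmaR = 0.1f)               // guide-intensity Gaussian std. dev.
{
    std::vector<float> out(w * h, -1.0f);
    confidence.assign(w * h, 0.0f);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float num = 0.0f, den = 0.0f;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                    float d = depth[ny * w + nx];
                    if (d < 0.0f) continue;  // no range sample here
                    // Weight combines spatial proximity and guide similarity.
                    float dc = guide[y * w + x] - guide[ny * w + nx];
                    float wgt = std::exp(-(dx * dx + dy * dy) / (2.0f * sigmaS * sigmaS)
                                         - (dc * dc) / (2.0f * sigmaR * sigmaR));
                    num += wgt * d;
                    den += wgt;
                }
            }
            if (den > 0.0f) out[y * w + x] = num / den;
            confidence[y * w + x] = den;
        }
    }
    return out;
}

Conceptually, the method in the paper replaces the explicit window above with a single Gaussian filter in a higher-dimensional position/color/time space, evaluated efficiently with the data structures referenced at the bottom of this page; that is what makes dense output and dynamic scenes tractable.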
Paper
Adobe Acrobat PDF (1.4 MB)
Videos
Example of processing the highway1 dataset: AVI Video (4.1 MB)
Example of processing the synthetic1 dataset: AVI Video (9.8 MB)
Data Sets
One download: All 8 data sets
Individual data sets:
Implementations
(Note: all analysis in the paper uses the GPU version of the algorithm)
Here is the GPU implementation, as described in the paper. It runs on Linux
with NVIDIA's CUDA installed (in theory it should also run on Windows, but
that theory has not been tested).
A README file and sample Makefile are included: download
Here is the CPU implementation, also with a README and
sample Makefile: download.
For image processing operations and data visualization, we recommend ImageStack. ImageStack also includes a more general joint bilateral filter implementation for the CPU, written by Andrew Adams.
For more information about the original versions of the data structures
used in our GPU and CPU code, see:
Fast High-Dimensional Filtering Using the Permutohedral Lattice
and
Gaussian Kd-Trees for Fast High Dimensional Filtering