Camera Placement Considering Occlusion for Robust Motion Capture
Xing Chen, James Davis
Abstract
In multi-camera tracking systems, camera placement can have a significant impact on overall performance. In feature-based motion capture systems, degradation comes from two major sources: low image resolution and target occlusion. To achieve better tracking and to automate the camera placement process, a quantitative metric is needed to evaluate the quality of multi-camera configurations. We propose a quality metric that estimates the error caused by both image resolution and occlusion. It includes a probabilistic occlusion model that reflects the dynamic self-occlusion of the target. Using this metric, we show the impact of occlusion on optimal camera pose by analyzing several camera configurations. Finally, we show camera placement examples that demonstrate how this metric can be applied toward the automatic design of more accurate and robust tracking systems.
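A minimal sketch of such a combined metric, assuming a simple 1/sin θ triangulation model for resolution error and an additive penalty for occlusion (these functional forms, parameter names, and default values are illustrative assumptions, not the paper's actual formulation):

```python
import math

def expected_uncertainty(theta_rad, p_fail, pixel_angle=1e-3, penalty=1.0):
    """Illustrative combined quality metric: expected 3D uncertainty at a target point.

    theta_rad   -- angle between the two camera viewing rays at the target
    p_fail      -- probability that occlusion leaves fewer than two usable views
    pixel_angle -- angular uncertainty of a single image measurement (assumed)
    penalty     -- error assigned when tracking fails due to occlusion (assumed)
    """
    # Two rays meeting at angle theta: an angular error in each ray yields a
    # positional error that grows roughly as 1/sin(theta).
    resolution_term = pixel_angle / math.sin(theta_rad)
    # Expected error: resolution-limited error when tracking succeeds, plus a
    # fixed penalty weighted by the probability of occlusion-induced failure.
    return (1 - p_fail) * resolution_term + p_fail * penalty
```

In this sketch, `p_fail` would be supplied by the probabilistic occlusion model; lower metric values indicate better camera configurations.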
Selected Figures
Left: Two cameras constrained to move on an outer sphere try to cover an inner spherical target space; the angle at which their viewing directions intersect at the sphere center is θ. The occlusion and resolution metrics are minimized at different angles. Center: 3D uncertainty vs. θ considering resolution only. Right: 3D uncertainty vs. θ considering occlusion only. (θ ranges from 0 to 180 degrees.)
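The resolution-only behavior in the center plot can be mimicked with a small sweep over θ, assuming a simple 1/sin θ triangulation-uncertainty model (an assumption for illustration, not the paper's derivation):

```python
import math

def triangulation_uncertainty(theta_deg, pixel_angle=1e-3):
    # Positional uncertainty when intersecting two rays at angle theta:
    # it grows as 1/sin(theta), blowing up as the rays become parallel
    # (theta near 0 or 180 degrees).
    return pixel_angle / math.sin(math.radians(theta_deg))

# Sweep theta over (0, 180) and find the angle minimizing the uncertainty.
angles = range(5, 180, 5)
best_angle = min(angles, key=triangulation_uncertainty)  # → 90
```

Under this toy model the resolution-only optimum is at θ = 90°; the occlusion metric, per the caption, favors a different angle.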
3D uncertainty vs. number of cameras. Error due to imager resolution is nearly minimized by only two cameras. Many more cameras are needed to ensure robustness against occlusion.
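The contrast in the caption above can be illustrated with two toy models (the 1/√n averaging model and the independent per-camera occlusion probability are assumptions for illustration, not the paper's analysis):

```python
import math

def resolution_error(n, per_camera_sigma=1.0):
    # Averaging n independent measurements: error falls as 1/sqrt(n), so most
    # of the benefit arrives with the first two or three cameras.
    return per_camera_sigma / math.sqrt(n)

def occlusion_failure(n, p_occluded=0.5):
    # Probability that fewer than two of n cameras see the target, assuming
    # each camera is independently occluded with probability p_occluded.
    p_none = p_occluded ** n
    p_one = n * (1 - p_occluded) * p_occluded ** (n - 1)
    return p_none + p_one
```

Comparing the two: going from 2 to 10 cameras improves resolution error by only a factor of √5, while the occlusion-failure probability drops by well over an order of magnitude, consistent with the caption's observation.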