Presented at the European Conference on Computer Vision 2002
Abstract
We propose a new tracking technique that captures non-rigid motion by exploiting a space-time rank constraint. Most tracking methods rely on a prior model to deal with challenging local features, and such a model usually has to be trained on carefully hand-labeled example data before the tracking algorithm can be used. Our new model-free tracking technique overcomes these limitations by redefining the problem: instead of first training a model and then tracking the model parameters, we derive trajectory constraints first and then estimate the model. This significantly reduces the search space and allows for better feature disambiguation than would be possible with traditional trackers. We demonstrate that sampling in the trajectory space, rather than in the space of shape configurations, allows us to track challenging footage without the use of prior models.
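To make the space-time rank constraint concrete, the sketch below states it in the measurement-matrix notation commonly used by factorization methods; the symbols $W$, $u_{f,p}$, $v_{f,p}$, $Q$, $C$ and the rank bound $r$ are introduced here for exposition and are assumptions, not notation taken verbatim from the paper.

% Stack the 2D image positions of P tracked features over F frames
% into a single measurement matrix W:
\[
W =
\begin{bmatrix}
u_{1,1} & \cdots & u_{1,P} \\
v_{1,1} & \cdots & v_{1,P} \\
\vdots  &        & \vdots  \\
u_{F,1} & \cdots & u_{F,P} \\
v_{F,1} & \cdots & v_{F,P}
\end{bmatrix}
\in \mathbb{R}^{2F \times P},
\qquad
\operatorname{rank}(W) \le r \ll \min(2F, P).
\]
% Under a low-dimensional deformation model the feature trajectories span a
% low-rank subspace, so W admits a factorization
%   W \approx Q C,  with Q \in R^{2F x r} (basis trajectories)
%                   and  C \in R^{r x P}  (per-feature coefficients).
% Each candidate trajectory therefore only needs to be searched within the
% span of the r basis trajectories rather than frame by frame, which is the
% sense in which the constraint reduces the search space.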