Presented at the
European Conference on Computer Vision 2002
We propose a new tracking technique that captures non-rigid motion by exploiting a space-time rank constraint. Most tracking methods use a prior model to deal with challenging local features; the model usually has to be trained on carefully hand-labeled example data before the tracking algorithm can be used. Our new model-free tracking technique overcomes such limitations by redefining the problem: instead of first training a model and then tracking the model parameters, we derive trajectory constraints first and then estimate the model. This reduces the search space significantly and allows for feature disambiguation that would not be possible with traditional trackers. We demonstrate that sampling in the trajectory space, instead of in the space of shape configurations, allows us to track challenging footage without the use of prior models.
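The core idea behind a trajectory rank constraint can be illustrated with a minimal sketch (not the paper's exact formulation, and all names here are hypothetical): if every point's trajectory is a linear combination of a small number K of basis trajectories, then the F x P matrix of stacked trajectories has rank at most K, and noisy tracks can be projected back onto that low-dimensional trajectory subspace with a truncated SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
F, P, K = 50, 20, 3                      # frames, points, basis trajectories

# Synthetic data: each point's trajectory is a mix of K basis trajectories,
# so the clean F x P measurement matrix W has rank at most K.
basis = rng.standard_normal((F, K))      # hypothetical basis trajectories
coeffs = rng.standard_normal((K, P))     # per-point mixing coefficients
W = basis @ coeffs                       # clean trajectories, rank <= K
W_noisy = W + 0.01 * rng.standard_normal((F, P))  # simulated tracking noise

# Enforce the rank constraint: keep only the top-K singular components.
# By the Eckart-Young theorem this is the closest rank-K matrix to W_noisy.
U, s, Vt = np.linalg.svd(W_noisy, full_matrices=False)
W_denoised = U[:, :K] @ np.diag(s[:K]) @ Vt[:K, :]

print(np.linalg.matrix_rank(W))                     # rank of clean data
print(np.linalg.norm(W_denoised - W, "fro"))        # error after projection
```

Constraining candidate trajectories to such a low-rank subspace is what shrinks the search space: a tracker need only consider motions expressible in the subspace, rather than arbitrary per-frame shape configurations.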