- Introduction
- Parametrizing the Problem
- Exploiting the MATLAB API
- Experiments and Results
- Conclusions
- References

In this document, I describe how I implemented an optimization routine using the MATLAB optimization toolbox [5]. The aim was to get accurate, consistent values for the calibration parameters while keeping the system scalable to 128 cameras. Here, I talk primarily about optimization; motivation for the problem can be found in my project proposal.

The next section describes implementation decisions I had to make, and issues in parametrizing the problem with its constraints. Thereafter, I discuss making the best use of the MATLAB API for the particular optimization problem I wish to solve. I present initial results on a data set acquired from a four-camera prototype (the array is still under construction), and conclude with analysis and thoughts on how these techniques could be applied to other problems in my research.

The MATLAB function `lsqnonlin` is the ideal candidate to minimize this objective in the L^{2} norm. There is an advantage to the special structure of a nonlinear least-squares problem over a general minimization (`fminunc`, `fmincon`): the gradient and Hessian of the 2-norm of the objective function can be easily computed using the Jacobian and previously computed Hessians [6, p. 134]. The drawback, at least in MATLAB, is that we cannot specify additional constraints; we have to solve an unconstrained problem. Moreover, we cannot eliminate potential outliers during the minimization. We stick with `lsqnonlin`, however, because we can encode the important constraints in the parametrization described below, and our feature detector conservatively filters out outliers.

My first implementation simply called `lsqnonlin` without any information on sparsity or partial derivatives. This worked better than I expected: although the optimization was slow and terminated because it exceeded the threshold on the number of function evaluations, the residual was significantly reduced by that stage. The main observation from the MATLAB output was that most of the function evaluations were being used to estimate the Jacobian. Indeed, there were 79 function evaluations per iteration, of which 78 were used for finite differences. This inspired the first enhancement.
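In outline, the naive invocation is a single call to `lsqnonlin`; the names below (`reproj_residuals`, `p0`) are illustrative placeholders, not the actual implementation:

```matlab
% Naive minimization: no sparsity information, no analytic derivatives,
% so the Jacobian is estimated column-by-column with finite differences.
% reproj_residuals returns the vector of reprojection errors for a given
% parameter vector; p0 is the initial estimate (both names hypothetical).
opts = optimset('Display', 'iter');
[p, resnorm] = lsqnonlin(@reproj_residuals, p0, [], [], opts);
```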
I specified the sparsity structure of the Jacobian through the `JacobPattern` option in the options passed to `lsqnonlin`. The results are encouraging: the number of function evaluations drops to 19 per iteration, consistent with the fact that there are at most 18 non-zero entries in any row of the Jacobian. Adding this option lets the minimization proceed until the limit on iterations is reached, and we even see a slightly smaller residual in the MATLAB output.
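A sketch of how this looks, assuming a hypothetical helper `build_jacobian_pattern` that returns a sparse 0/1 matrix marking the possibly non-zero entries of the Jacobian:

```matlab
% With 'JacobPattern', lsqnonlin perturbs groups of structurally
% independent variables together, so far fewer function evaluations
% are needed to estimate the Jacobian by finite differences.
Jpattern = build_jacobian_pattern(num_observations, num_cameras); % hypothetical
opts = optimset('JacobPattern', Jpattern, 'Display', 'iter');
[p, resnorm] = lsqnonlin(@reproj_residuals, p0, [], [], opts);
```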

The next enhancement was to supply analytic derivatives. The `'DerivativeCheck'` option, which compares programmer-supplied derivatives to those estimated via finite differencing, was extremely helpful in debugging, but even with it the sources of error often took hours to track down. Fortunately, I managed to end up with a reasonably modular, bug-free implementation. Since the difference between the analytic derivatives and those estimated by finite differencing was on the order of 10^{-5}, I was not surprised that there was no increase in accuracy. The good news is that having programmer-supplied Jacobians permits us to speed up the computation by exploiting sparsity. This will be important as the size of the problem increases with more cameras.
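A minimal sketch of the mechanism, using a toy residual (not the calibration objective) so the analytic Jacobian is easy to verify by hand:

```matlab
% Toy fit of y = exp(a*t) + b. The objective returns the analytic
% Jacobian as a second output; 'Jacobian','on' tells lsqnonlin to use
% it, and 'DerivativeCheck','on' compares it against finite differences
% before the minimization proceeds.
function [F, J] = toy_resid(p, t, y)
    F = exp(p(1)*t) + p(2) - y;              % residual vector
    J = [t .* exp(p(1)*t), ones(size(t))];   % columns: dF/da, dF/db
end

% t and y are column vectors of sample points and noisy observations.
opts = optimset('Jacobian', 'on', 'DerivativeCheck', 'on');
p = lsqnonlin(@(p) toy_resid(p, t, y), [0.1; 0], [], [], opts);
```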

The minimization only ever requires the products J*Y, J^{T}*Y, and J^{T}*J*Y. MATLAB accepts a user-specified function for evaluating these products, so the full Jacobian need never be formed during the minimization. I implemented a sparse representation of the Jacobian and a function to compute the required products, and passed them to the minimization by enabling the `'JacobMult'` option. As expected, this runs much faster.
This is a tremendous win, both in terms of time and space. The full Jacobian is MxN, where M depends on the number of observations we are fitting a model to, and N is a constant plus 12 times the number of cameras. The sparse Jacobian, however, has no more than 18M non-zero entries. This is very encouraging, as it *does not* increase when we add more cameras, beyond the fact that we will have more observations.
Since flop counts are no longer available in MATLAB, I am not able to provide an accurate measure of the increase in speed. However, it is very likely this will be a major source of speedup when we calibrate the entire array of 128 cameras.
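A sketch of the `'JacobMult'` interface as I used it: the objective's second output becomes `Jinfo`, which MATLAB hands to a user-supplied multiply function along with a flag selecting which product to form. Here `Jinfo` is assumed to hold the sparse Jacobian itself, and `reproj_residuals_sparse` is a hypothetical name:

```matlab
% Multiply function for the 'JacobMult' option. lsqnonlin calls this
% instead of forming the full Jacobian: flag > 0 asks for J*Y,
% flag < 0 for J'*Y, and flag == 0 for J'*(J*Y).
function W = jmfun(Jinfo, Y, flag)
    if flag > 0
        W = Jinfo * Y;
    elseif flag < 0
        W = Jinfo' * Y;
    else
        W = Jinfo' * (Jinfo * Y);
    end
end

% reproj_residuals_sparse returns [F, Jinfo], with Jinfo a sparse matrix.
opts = optimset('Jacobian', 'on', 'JacobMult', @jmfun);
[p, resnorm] = lsqnonlin(@reproj_residuals_sparse, p0, [], [], opts);
```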

| Implementation | Normalized Residual Error | Time per Iteration | Termination | MATLAB Output |
| --- | --- | --- | --- | --- |
| Naive, unoptimized | 0.3929 pixels | 7.1 sec | Exceeded number of function evaluations | naive.txt |
| Used Jacobian structure | 0.3908 pixels | 2.5 sec | Exceeded maximum number of iterations | jpattern.txt |
| Jacobian evaluated analytically | 0.3905 pixels | 0.53 sec | Exceeded maximum number of iterations | sparsity.txt |

Unfortunately, the completion of the camera array took longer than I anticipated, so I was not able to run the optimization on 128 cameras. However, the calibration code will be ready when construction of the array is finished.

- [1] The Light Field Camera Array
- [2] The Camera Calibration Toolbox (MATLAB), Jean-Yves Bouguet
- [3] Intel OpenCV Computer Vision Library (C++)
- [4] A Flexible Technique for Camera Calibration, Zhengyou Zhang, MSR Tech. Report MSR-TR-98-71
- [5] The MATLAB Optimization Toolbox Guide, The MathWorks
- [6] Practical Optimization, Philip Gill, Walter Murray, Margaret Wright, *Academic Press 1999*