My final project will be to implement a set of tools for performing image-based lighting. The theory will be based on Paul Debevec's Siggraph 1998 paper titled "Rendering Synthetic Objects into Real Scenes" (http://www.debevec.org/).
Image-based lighting techniques have become increasingly important in recent years, particularly in the film and television post-production industry. IBL provides a practical, more structured approach to producing convincing images of synthetic objects integrated into a photograph of a real scene.
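The core compositing step in Debevec's approach is differential rendering: render the local scene twice (with and without the synthetic objects), add the difference — shadows, reflections, interreflections — to the original photograph, and take the rendered pixels directly where an object covers the frame. Below is a minimal sketch of that idea in Python with NumPy; the function name and array conventions are my own assumptions, not code from the paper.

```python
import numpy as np

def differential_composite(background, with_objects, without_objects, object_mask):
    """Sketch of Debevec-style differential rendering (hypothetical helper).

    background      -- photograph of the real scene (H x W x 3, float)
    with_objects    -- rendering of the local scene plus synthetic objects
    without_objects -- rendering of the local scene alone
    object_mask     -- H x W array, 1.0 where a synthetic object covers the pixel
    """
    # The difference between the two renderings captures the change the
    # objects cause in the local scene (shadows, reflected light, etc.).
    error = with_objects - without_objects

    # Outside the objects, add that change to the original photograph;
    # on the objects themselves, use the rendered pixels directly.
    composite = background + error
    composite = np.where(object_mask[..., None] > 0.5, with_objects, composite)
    return np.clip(composite, 0.0, None)
```

The appeal of this formulation is that the local-scene geometry and reflectance only need to be approximately right: modeling errors largely cancel in the difference term.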
A fair amount of work has been done in this field. Jaemin Lee has done some interesting work in image-based lighting, but replicating it would not be feasible in the time available. Paul Debevec's work is probably the most complete, so I have chosen to start from it.
My tasks can be roughly broken down as follows:
Having done industry work in post-production before coming to Stanford, my primary interest is in building a system that supports a reasonable workflow for creating an image with image-based lighting, from data acquisition through to the final image.
With such a system, there are many kinds of images I could try to reproduce. More interesting, however, would be creating enhanced images that are difficult or impossible to obtain in real life, such as rendering a new statue into the middle of the Stanford Quad.