# Assignment 3 Camera Simulation

## Compound Lens Simulator

### Description of implementation approach and comments

I made an effort to take this one step at a time, so that I wouldn't run into a big problem with no idea what to fix.

My first step was to set up the rendering system and make sure I got the raster->camera and camera->world coordinate transformations correct. I tested this by setting the direction vector normal to the film plane (i.e., (0,0,1) in camera coordinates) and letting everything run, which should produce a very small orthographic projection. This is what I got:

which looks like it should. It's the very middle of the image, flipped upside down as a proper film plane image should be.
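The raster-to-camera mapping being tested can be sketched as follows (a minimal standalone version with illustrative names, not pbrt's actual transform code; film dimensions here are assumed parameters):

```cpp
#include <cassert>
#include <cmath>

struct Point3 { double x, y, z; };

// Map a raster-space sample (in pixels) to a point on the film plane in
// camera space. The film is centered on the optical axis at z = 0, and the
// image is flipped in both x and y, as on a physical film plane.
Point3 RasterToCamera(double px, double py,
                      int xres, int yres,
                      double filmWidth, double filmHeight) {
    double u = px / xres;  // normalized [0,1) across the raster
    double v = py / yres;
    return Point3{
        -(u - 0.5) * filmWidth,   // flip horizontally about the axis
        -(v - 0.5) * filmHeight,  // flip vertically about the axis
        0.0                       // film plane sits at z = 0
    };
}
```

With this mapping, the raster center lands on the optical axis and the corners map to the opposite film corners, which is what the flipped test image above should show.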

Next, I got the lens files reading in properly and verified the output. I wrote a ray-sphere intersection routine and tested it using only the back lens element, with rays orthogonal to the film plane, which produced the following output of intersections:
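The ray-sphere test boils down to solving a quadratic; a minimal standalone sketch (names are illustrative, not pbrt's; lens surfaces are spheres centered on the optical axis):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static double Dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns true and the nearest positive hit distance t if the ray
// o + t*d intersects the sphere |p - center| = radius.
bool IntersectSphere(const Vec3 &o, const Vec3 &d,
                     const Vec3 &center, double radius, double *t) {
    Vec3 oc{o.x - center.x, o.y - center.y, o.z - center.z};
    double a = Dot(d, d);
    double b = 2.0 * Dot(oc, d);
    double c = Dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return false;      // ray misses the sphere
    double sq = std::sqrt(disc);
    double t0 = (-b - sq) / (2.0 * a); // nearer root
    double t1 = (-b + sq) / (2.0 * a); // farther root
    if (t0 > 1e-9) { *t = t0; return true; }
    if (t1 > 1e-9) { *t = t1; return true; }
    return false;                      // sphere is behind the ray origin
}
```

For the lens simulator, the hit point must additionally lie within the element's aperture radius of the optical axis, but the core test is the quadratic above.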


which looks suitably like a spherical lens. Next, I aimed the rays at the lensUV point, after converting the sample to a disk with ConcentricSampleDisk, and got what I expected:
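The square-to-disk mapping here is the Shirley-Chiu concentric mapping that pbrt provides as ConcentricSampleDisk; a standalone sketch (this mirrors the published algorithm, though pbrt's own version uses floats):

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

// Map a uniform sample (u1, u2) in [0,1)^2 to a point on the unit disk,
// preserving relative areas (Shirley-Chiu concentric mapping). Scaling the
// result by the rear element's aperture radius gives the lens sample point.
void ConcentricSampleDisk(double u1, double u2, double *dx, double *dy) {
    // Map the unit square to [-1,1]^2.
    double sx = 2.0 * u1 - 1.0;
    double sy = 2.0 * u2 - 1.0;
    if (sx == 0.0 && sy == 0.0) { *dx = *dy = 0.0; return; }
    double r, theta;
    if (std::fabs(sx) > std::fabs(sy)) {
        r = sx;
        theta = (PI / 4.0) * (sy / sx);
    } else {
        r = sy;
        theta = (PI / 2.0) - (PI / 4.0) * (sx / sy);
    }
    *dx = r * std::cos(theta);
    *dy = r * std::sin(theta);
}
```

The concentric mapping distorts less than the polar (r = sqrt(u1)) mapping, which keeps the lens samples well stratified.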

### Final Images Rendered with 512 samples per pixel

| Lens | My Implementation | Reference |
|------|-------------------|-----------|
| Telephoto | | |
| Double Gauss | | |
| Wide Angle | | |
| Fisheye | | |

### Final Images Rendered with 4 samples per pixel

| Lens | My Implementation | Reference |
|------|-------------------|-----------|
| Telephoto | | |
| Double Gauss | | |
| Wide Angle | | |
| Fisheye | | |

## Experiment with Exposure

| Image with aperture fully open | Image with half-radius aperture |
|--------------------------------|---------------------------------|
| | |
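For reference, the expected brightness change is set by aperture area: halving the radius quarters the gathered light, a two-stop difference. A minimal sketch of that relationship (illustrative helper names):

```cpp
#include <cassert>
#include <cmath>

// Relative exposure of a circular aperture vs. a reference radius:
// gathered light scales with aperture area, i.e. with radius squared.
double RelativeExposure(double radius, double referenceRadius) {
    return (radius * radius) / (referenceRadius * referenceRadius);
}

// Exposure difference in photographic stops (factors of two).
double StopsDifference(double radius, double referenceRadius) {
    return std::log2(RelativeExposure(radius, referenceRadius));
}
```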


## Autofocus Simulation

### Description of implementation approach and comments

I took the distance approach for computing focus. In order to get the distance information back to the camera class, I needed to modify several parts of pbrt; I took pointers from Jared Sohn's webpage, which describes a pbrt plugin that enhances the film class. I modified the Ray class in geometry.h to store the intersection point in the ray, and then added code in scene.cpp to actually record the intersection point. Back in realistic.cpp, I can then simply query ray.p to get the world coordinates of the ray's intersection point. I compute the position of the front of the camera in world coordinates and take the vector distance between the two points. From this, I can compute a depth map of the scene, as seen here:
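The distance computation can be sketched as follows, assuming (as in my modified pbrt) that each traced ray carries its recorded world-space hit point; Distance mirrors pbrt's geometry helper, while FocusDepth and the other names here are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };

// Euclidean distance between two world-space points.
double Distance(const Point3 &a, const Point3 &b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Average the hit distances of the samples in the autofocus zone to get a
// single focus depth; hitPoints holds the recorded intersection points and
// cameraFront is the front of the camera in world coordinates.
double FocusDepth(const std::vector<Point3> &hitPoints,
                  const Point3 &cameraFront) {
    double sum = 0.0;
    for (const Point3 &p : hitPoints)
        sum += Distance(p, cameraFront);
    return sum / hitPoints.size();
}
```

Averaging over the autofocus zone is what makes 8 samples per pixel sufficient despite the noise in the per-ray depth values.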

The above image is from the cone example. You can see that close objects are dark, while farther objects (like the wall in the background) are bright, indicating a large distance. Since the depth map is still noisy, I need to run about 8 samples per pixel to get an accurate enough sampling, which is still far fewer samples than the "focus by image sharpness" technique requires.

### Final Images Rendered with 512 samples per pixel

| Lens | Adjusted film distance | My Implementation | Reference |
|------|------------------------|-------------------|-----------|
| Double Gauss 1 mm | | | |
| Double Gauss 2 mm | | | |
| Telephoto mm | | | |

## Any Extras
