Why raytrace?
Why not raytrace?
A compromise?
We will use this geometry information to render the raytraced image in OpenGL in a way that lets us move OpenGL objects around the scene and interact with those objects haptically.
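It helps to be explicit about what "geometry information" could mean here. A minimal sketch, assuming the raytracer keeps a color, a hit depth, a surface normal, and the id of the object each primary ray hit; the struct and field names are illustrative, not the project's actual data structures:

```cpp
// Hypothetical per-pixel record the raytracer could keep alongside the image.
// Names and fields are assumptions for illustration only.
struct PixelGeometry {
    float r, g, b;        // shaded color for this pixel
    float depth;          // distance along the primary ray to the first hit
    float nx, ny, nz;     // surface normal at the hit point
    int   objectId;       // which scene object the ray hit (-1 for a miss)
};
```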
Finally, a screenshot...
What you can't tell yet is that this is not simply being rendered as a texture-mapped quad in OpenGL. It's actually a point cloud, with each pixel in the image being rendered as one GL_POINT. Because we're rendering with an orthographic camera, all the points line up (no perspective is applied). But if you could swing the camera out to one side of the box, you would see a cloud of points, spread out in the z-dimension.
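Concretely, the point cloud can be drawn roughly like this. The sketch below is not the project's actual code: it assumes the raytrace pass left behind one per-pixel record of the kind shown earlier, uses legacy fixed-function OpenGL for brevity, and sets up an orthographic projection so that, viewed head-on, the points line up into the original image.

```cpp
// A minimal sketch of drawing the raytraced image as a point cloud:
// one OpenGL point per pixel, pushed back along z by the depth the
// raytracer found. Function and type names are illustrative.

#include <GL/gl.h>
#include <vector>

struct PixelGeometry {    // same hypothetical per-pixel record as above
    float r, g, b;        // shaded color
    float depth;          // distance to the first hit along the primary ray
    float nx, ny, nz;     // surface normal at the hit point
    int   objectId;       // -1 if the ray missed everything
};

void drawPointCloud(const std::vector<PixelGeometry>& pixels,
                    int width, int height)
{
    // Orthographic camera: no perspective, so the points line up when
    // viewed straight on; swing the camera aside and they spread in z.
    // The z range is chosen to comfortably contain the scene depths.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, width, 0.0, height, -1000.0, 1000.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glPointSize(1.0f);
    glBegin(GL_POINTS);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const PixelGeometry& p = pixels[y * width + x];
            if (p.objectId < 0) continue;              // skip missed rays
            glColor3f(p.r, p.g, p.b);
            glVertex3f(x + 0.5f, y + 0.5f, -p.depth);  // one point per pixel
        }
    }
    glEnd();
}
```

Immediate mode keeps the sketch short; a real renderer would more likely upload the cloud once into a vertex buffer and draw it with a single glDrawArrays(GL_POINTS, ...) call.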
Prepare to be immersed
Learning to love our z-buffer
Feel the box...