A subset of these readings is cited in the online lecture notes, and a subset of those will be handed out in class. In the list below, the ".comment:" fields were added informally by Marc or Bennett and are not guaranteed to be correct. If you see an error, let us know. The ".file: directory/filename" fields refer to subdirectories of http://graphics.stanford.edu/~levoy/downloaded/, or of /u/levoy/downloaded/ if you're logged on to one of the graphics lab machines. These files have typically been downloaded from the web, so they should be easy to find using Google.
Adams, A., The Camera, Little, Brown, and Co., 1976.
London, B., Upton, J., Photography, fifth edition, HarperCollins, 1994.
Hedgecoe, J., The Photographer's Handbook, third edition, Alfred A. Knopf, 1993.
Hedgecoe, J., The New Manual of Photography, Dorling Kindersley, 2003.
Frost, L., The Complete Guide to Night and Low-Light Photography, Watson-Guptill, 1999.
Hunter, F., Fuqua, P., Light Science & Magic, 2nd edition, Focal Press, 1997.
Frost, L., Panoramic Photography, David and Charles, 2005.
Peterson, B., Learning to See Creatively, Watson-Guptill, 1988.
Professional Photographic Illustration, Antonio LoSapio, ed., Kodak, 1994.
Ray, S., Scientific Photography and Applied Imaging, Focal Press, 1999.
Hope, T., Extreme Photography, RotoVision, 2004.
Renner, E., Pinhole Photography, Focal Press, 2000.
Hecht, E., Optics, second edition, Addison-Wesley, 1987.
Kingslake, R., Optics in Photography, SPIE Press, 1992.
Kingslake, R., Optical System Design, Academic Press, 1983.
Kingslake, R., A History of the Photographic Lens, Academic Press, 1989.
Smith, W. J., Modern Optical Engineering, McGraw-Hill, 2000.
Goldberg, N., Camera technology: the dark side of the lens, Academic Press, 1992.
Kolb, C., Mitchell, D., Hanrahan, P., A Realistic Camera Model for Computer Graphics, Proc. Siggraph '95.
Manly, P.L., Unusual Telescopes, Cambridge University Press, 1992.
Adelson, E.H., Bergen, J.R., The plenoptic function and the elements of early vision, In Computational Models of Visual Processing, M. Landy and J.A. Movshon, eds., MIT Press, Cambridge, 1991. .file: image-based-rendering/adelson-plenoptic-cmvp91.pdf
Levoy, M., Hanrahan, P., Light field rendering, Proc. Siggraph '96. URL: http://graphics.stanford.edu/papers/light/
Gortler, S.J., Grzeszczuk, R., Szeliski, R., Cohen, M.F., The Lumigraph, Proc. Siggraph '96. .file: image-based-rendering/gortler-lumigraph-sig96.pdf
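To make the two-plane parameterization used by the two papers above concrete, here is a minimal Python sketch of looking up one ray in a light slab. The array layout LF[s,t,u,v], the unit-square planes at z=0 and z=1, and nearest-neighbor sampling are illustrative assumptions; both papers actually interpolate (quadrilinearly in Levoy and Hanrahan).

    import numpy as np

    def sample_ray(LF, origin, direction):
        # LF[s, t, u, v]; the (u,v) plane sits at z=0 and the (s,t) plane
        # at z=1, both spanning [0,1] x [0,1] (an assumed layout).
        # origin, direction: length-3 numpy arrays.
        ns, nt, nu, nv = LF.shape
        t0 = (0.0 - origin[2]) / direction[2]   # ray parameter at z=0
        t1 = (1.0 - origin[2]) / direction[2]   # ray parameter at z=1
        u, v = origin[:2] + t0 * direction[:2]
        s, t = origin[:2] + t1 * direction[:2]
        if not (0 <= u <= 1 and 0 <= v <= 1 and 0 <= s <= 1 and 0 <= t <= 1):
            return 0.0                          # ray misses the light slab
        # Nearest-neighbor lookup; the papers interpolate instead.
        return LF[int(s * (ns - 1) + 0.5), int(t * (nt - 1) + 0.5),
                  int(u * (nu - 1) + 0.5), int(v * (nv - 1) + 0.5)]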
Gershun, A., The Light Field, Moscow, 1936. Translated by P. Moon and G. Timoshenko, Journal of Mathematics and Physics, Vol. XVIII, MIT, 1939, pp. 51-151.
Moon, P., Spencer, D.E., The Photic Field, MIT Press, 1981.
Langer, M.S., Zucker, S.W., What is a light source? Proc. CVPR '97. .comment: taxonomy of various light sources using 4D light field rep .file: image-based-rendering/langer-light-source-cvpr97.pdf
Sloan, P.-P., Cohen, M.F., Gortler, S.J., Time Critical Lumigraph Rendering, Proc. 1997 Symposium on Interactive 3D Graphics. .file: image-based-rendering/sloan-timecritical-i3d97.pdf
Camahort, E., Lerios, A., Fussell, D., Uniformly sampled light fields, Proc. Eurographics Rendering Workshop '98. .comment: 2 parameterizations: sphere x sphere, and direction x position .file: image-based-rendering/camahort-uniformLF-rend98.pdf
Camahort, E., Fussell, D., A Geometric Study of Light Field Representations, Technical Report TR99-35, Department of Computer Sciences, The University of Texas at Austin, 1999. .comment: analysis of errors in various light field configurations .file: image-based-rendering/camahort-lightfield-tr99-35.pdf
Camahort, E., 4D Light-Field Modeling and Rendering, PhD dissertation, University of Texas at Austin, 2001. .comment: summary of alternative parameterizations of light fields .file: image-based-rendering/camahort-dissertation.pdf
Chai, J.-X., Tong, X., Chan, S.-C., Shum, H.-Y., Plenoptic Sampling, Proc. Siggraph 2000. .comment: Z-disparity versus light field sampling rate, Fourier analysis .file: image-based-rendering/shum-plenoptic-sig00.pdf
Lin, Z., Shum, H.-Y., On the number of samples needed in light field rendering with constant-depth assumption, Proc. CVPR 2000. .comment: more disparity analysis, ideal st-plane = harmonic mean depth .file: image-based-rendering/shum-light-field-sampling-cvpr2000.pdf
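Spelling out the prescription in the comment above (our notation: z_min and z_max are the nearest and farthest scene depths), the ideal constant-depth st-plane sits at the harmonic mean

    \frac{1}{z_{\mathrm{opt}}} = \frac{1}{2} \left( \frac{1}{z_{\min}} + \frac{1}{z_{\max}} \right)

which equalizes the worst-case disparity error on either side of the plane, since disparity is proportional to 1/z.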
Kang, S.B., Seitz, S.M., Sloan, P.-P., Visual tunnel analysis for visibility prediction and camera planning, Proc. CVPR 2000. .comment: view planning for light field acquisition, in particular, .comment: defines a visual tunnel P(x,y,z,theta,w) and derived functions: .comment: for flatland, f(x,y) = density of rays available at that point, .comment: and viewing volume within which virtual cameras with specific .comment: orientations and FOV could be populated from the light field .file: image-based-rendering/kang-visual-tunnel-cvpr2000.pdf
Halle, M., Multiple viewpoint rendering, Proc. Siggraph '98. .file: image-based-rendering/halle-multiview-rendering-sig98.pdf
Isaksen, A., McMillan, L., Gortler, S.J., Dynamically reparameterized light fields, Proc. Siggraph 2000. .file: image-based-rendering/isaksen-reparameterized-sig00.pdf
Buehler, C., Bosse, M., McMillan, L., Gortler, S., Cohen, M., Unstructured Lumigraph rendering, Proc. Siggraph 2001. .file: image-based-rendering/buehler-unstructured-sig01.pdf
Lin, Z., Wong, T.-T., Shum, H.-Y., Relighting with the Reflected Irradiance Field: Representation, Sampling and Reconstruction, Proc. CVPR 2001. .comment: fixed viewpoint, point light moves across a plane, so still 4D .comment: object depth and surface BRDF -> est. of radiometric error .comment: ignores interreflection, see also Koudelka et al., CVPR 2001
Miller, G., Volumetric Hyper-Reality: A Computer Graphics Holy Grail for the 21st Century?, Proc. Graphics Interface '95, Canadian Information Processing Society, 1995, pp. 56-64.
Teller, S., Bala, K., Dorsey, J., Conservative radiance interpolants for ray tracing, Proc. Eurographics Rendering Workshop '96. .comment: aliasing in ray tracing due to gaps, blockers, funnels, peaks .comment: radiance interpolation within nodes of a hierarchy of 4D .comment: ray spaces defined by two planes, similar to light fields, .comment: for the radiance leaving each surface in a scene, .comment: also uses lazy evaluation as observer moves
Miller, G., Rubin, S., Ponceleon, D., Lazy decompression of surface light fields for precomputed global illumination, Proc. Eurographics Rendering Workshop '98. .comment: per-surface light fields, DCT-based compression
Wood, D.N., Azuma, D.I., Aldinger, K., Curless, B., Duchamp, T., Salesin, D.H., Stuetzle, W., Surface Light Fields for 3D Photography, Proc. Siggraph 2000. .file: image-based-rendering/wood-surfacelfs-sig00.pdf
Chen, W.-C., Bouguet, J.-Y., Chu, M.H., Grzeszczuk, R., Light field mapping: efficient representation and hardware rendering of surface light fields, Proc. Siggraph 2002.
Wilburn, B., Smulski, M., Lee, K., Horowitz, M.A., The Light Field Video Camera, Proc. SPIE Electronic Imaging 2002. .file: sensing-hardware/wilburn-lfcamera-spie02.pdf
Vaish, V., Wilburn, B., Levoy, M., Using Plane + Parallax for Calibrating Dense Camera Arrays, Proc. CVPR 2004. URL: http://graphics.stanford.edu/papers/plane+parallax_calib/
The CMU Virtualized Reality dome URL: http://www-2.cs.cmu.edu/afs/cs/project/VirtualizedR/www/VirtualizedR.html
Kanade, T., Saito, H., Vedula, S., The 3D Room: Digitizing Time-Varying 3D Events by Synchronized Multiple Video Streams, Technical Report CMU-RI-TR-98-34, Carnegie Mellon University, 1998. .comment: 49 video cameras streaming raw video to 17 PCs .file: image-based-rendering/kanade-3droom-tr98.pdf
Naemura, T., Yoshida, T., Harashima, H., 3-D computer graphics based on integral photography, Optics Express, Vol. 8, No. 2, February 12, 2001. .comment: lens array -> HDTV signal -> traditional light field rendering .comment: low-res (54 x 63 pixels, 20 x 20 angles), no depth assumption .comment: or constant Z depth assumption .comment: see also Ooi, ICIP 2001 .file: stereoscopic-displays/naemura-lens-array-optics01.pdf
Ooi, R., Hamamoto, T., Naemura, T., Aizawa, K., Pixel Independent Random Access Image Sensor for Real Time Image-Based Rendering System, Proc. ICIP 2001. .comment: CMOS sensor with selectable readout region .comment: see also Naemura, Optics Express, 2001 .file: image-based-rendering/ooi-array-icip02.pdf
Schirmacher, H., Li, M., Seidel, H.-P., On-the-fly processing of generalized Lumigraphs, Proc. Eurographics 2001. .comment: "Lumishelf" of 6 firewire cameras (3 x 2 array) .file: image-based-rendering/schirmacher-lumigraphs-eg01.pdf
Yang, J.C., Everett, M., Buehler, C., McMillan, L., A Real-Time Distributed Light Field Camera, Proc. Eurographics Rendering Workshop 2002. .file: sensing-hardware/mcmillan-array-rend02.pdf
Adelson, E.H., Wang, J.Y.A., Single Lens Stereo with a Plenoptic Camera, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 14, No. 2, 1992, pp. 99-106. .comment: lenticular screen over sensor, one main lens, to extract depth .file: sensing-hardware/adelson-plenoptic-pami92.pdf
Taylor, D., Virtual Camera Movement: The Way of the Future?, American Cinematographer, Vol. 77, No. 9, September, 1996, pp. 93-100.
Chen, S.E., QuickTime VR - An Image-Based Approach to Virtual Environment Navigation, Proc. Siggraph '95.
Szeliski, R. and Shum, H.Y. Creating Full View Panoramic Image Mosaics and Environment Maps. Computer Graphics Proceedings, Annual Conference Series, 1997, pp. 251-258. .file: image-based-rendering/szeliski-mosaics-tr97.pdf
Shum, H., He, L.-W., Rendering with Concentric Mosaics, Proc. Siggraph '99. .file: image-based-rendering/shum-concentric-sig99.pdf
Quan, L., Lu, L., Shum, H., Lhuillier, M., Concentric Mosaic(s), Planar Motion and 1D Cameras, Proc. ICCV 2001. .comment: decomposes CM into 1D perspective x 1D affine cameras, .comment: implications for calibration and shape-from-motion
Aliaga, D.G., Carlbom, I., Plenoptic Stitching: A Scalable Method for Reconstructing Interactive Walkthroughs, Proc. SIGGRAPH '01. .comment: A 360-degree panoramic camera moves along multiple arbitrary paths through a scene (unoccluded), creating a sparsely populated 4D light field parameterized by (x,y) position on the plane containing the camera centers and (u,v) on a closed, possibly sinuous, vertical wall that surrounds the scene.
Aliaga, D.G., Funkhouser, T., Yanovsky, D., Carlbom, I., Sea of images, IEEE CG&A, Vol. 23, No. 6, November/December 2003. .comment: A 10,000-image horizontal-parallax-only light field acquired from a camera on a moving cart.
Debevec, P., Hawkins, T., Tchou, C., Duiker, H.-P., Sarokin, W., Sagar, M., Acquiring the Reflectance Field of a Human Face, Proc. Siggraph 2000. .file: image-based-rendering/debevec-reflectance-sig00.pdf
Debevec, P., Image-based lighting, IEEE Computer Graphics and Applications, Vol. 22, No. 2, March/April, 2002. .comment: Summary of his Siggraph papers, with nice images.
Masselus, V., Peers, P., Dutre, P., Willems, Y.D., Relighting with 4D incident light fields, Proc. Siggraph 2003. .comment: Projector illuminates scene with patterns, camera records it. Single rectangles or non-overlapping patterns of rectangles of pixels are used. Rectangles are projected directly onto scene without a diffuser; compare to Schechner's multiplexed illumination.
Schechner, Y., Nayar, S., A Theory of Multiplexed Illumination, Proc. ICCV 2003. .file: image-processing/schechner-multiplexed-iccv03.pdf .comment: Uses Hadamard matrices to illuminate an object over a series of trials, then inverts the matrix to compute its response to directional illumination. Compare to Masselus, Proc. Siggraph 2003.
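A toy demultiplexing sketch in the spirit of the scheme described above, assuming numpy/scipy. The 0/1 pattern matrix built here from a Hadamard matrix is invertible, so per-source images can be recovered from the multiplexed captures; the paper's actual S-matrix construction and SNR analysis differ in detail.

    import numpy as np
    from scipy.linalg import hadamard

    n = 8                                  # number of sources; power of two for hadamard()
    S = (hadamard(n) + 1) // 2             # 0/1 on-off lighting patterns (invertible)

    per_source = np.random.rand(n, 1000)   # stand-in for n single-source images (pixel vectors)
    captured = S @ per_source              # what the camera records under each pattern
    recovered = np.linalg.solve(S, captured)   # demultiplex by inverting the patterns
    assert np.allclose(recovered, per_source)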
Goesele, M., Granier, X., Heidrich, W., Seidel, H.-P., Accurate light source acquisition and rendering, Proc. Siggraph 2003. .comment: Measurement of the light field emanating from a luminaire like a flashlight. Uses a grayscale slide to apply a pre-filter during acquisition, one for positive and one for negative filter coefficients. Future work includes using an LCD to vary the filter and a robot to move the aperture across the source.
Herman, G.T., Image Reconstruction from Projections, Academic Press, New York, 1980.
Gering, D.T. and Wells III, W.M., Object Modeling using Tomography and Photography, Proc. CVPR '99. .comment: reconstruction from backprojection of light fields .file: shape-from-light-fields/gering-backprojection-cvpr99.pdf
Seitz, S.M., Dyer, C.R., Photorealistic Scene Reconstruction by Voxel Coloring, Int. J. Computer Vision, Vol. 35, No. 2, 1999, pp. 151-173. .comment: the first paper on voxel algorithms for shape from many cameras .file: shape-from-light-fields/seitz-voxel-coloring-ijcv99.pdf
De Bonet, J., Viola, P., Roxels: responsibility weighted 3D volume reconstruction, Proc. ICCV '99. .comment: ART-like iterative backprojection/forward projection .file: shape-from-light-fields/debonet-roxels-iccv99.pdf
Broadhurst, A., Drummond, T., Cipolla, R., A Probabilistic Framework for Space Carving, Proc. ICCV 2001. .comment: similar to De Bonet and Viola, ICCV '99
Dachille IX, F., Mueller, K., Kaufman, A., Volumetric Backprojection, Proc. 2000 Symposium on Volume Visualization and Graphics, ACM, October, 2000. .comment: backprojection for global illumination, but should suffer from .comment: self-illumination like volumetric shadowing algorithms; also for .comment: shape-from-light-fields using filtered backprojection and ART
Marks, D.L., Stack, R.A., Brady, D.J., Munson, D.C., Jr., Brady, R.B., Visible Cone-Beam Tomography With a Lensless Interferometric Camera, Science, Vol. 284, No. 5423, June 25, 1999, pp. 2164-2166. .comment: lensless (infinite depth-of-field) interferometric images .comment: of opaque object followed by tomographic reconstruction, .comment: see also Fetterman et al., (Optics Express 7(5)) .file: shape-from-light-fields/marks-visible-tomography-science99.pdf
Fetterman, M.R., Tan, E., Ying, L., Stack, R.A., Marks, D., Feller, S., Cull, E., Sullivan, J., Munson, D.C., Jr., Thoroddsen, S.T., Brady, D.J., Tomographic imaging of foam, Optics Express, Vol. 7, No. 5, August 28, 2000. .comment: tomographic reconstruction of pins and foam from light images .comment: see also Marks, Stack, et al (Science 284(5423)) .file: shape-from-light-fields/fetterman-tomography-foam-optics00.pdf
See also the section on "optical tomography" in Berthold Horn's course on computational imaging.
Matusik, W., Buehler, C., Raskar, R., Gortler, S.J., McMillan, L., Image-Based Visual Hulls, Proc. Siggraph 2000. .file: image-based-rendering/buehler-visualhulls-sig00.pdf
Faugeras, O., Keriven, R., Complete Dense Stereovision using Level Set Methods, Proc. ECCV '98. .file: shape-from-X/faugeras-levelset-eccv98.pdf .comment: Determining shape from a circle of cameras (extendable to a surface of cameras) by evolving a surface toward points in 3-space that maximize a photo-consistency metric, continually updating this volumetric function to account for occlusion by the evolving surface. The metric is cross-correlation of camera intensities, which differs from the one in Seitz's voxel coloring algorithm. Seems to work, but no running times are given, nor any evaluation of convergence, robustness to noise, lack of texture, changes in the number of cameras, etc.
Schechner, Y., Kiryati, N., Depth from defocus vs. stereo: How different really are they? IJCV, Vol. 39, No. 2, 2000, pp. 141-162. .folder: shape-from-X .file: shape-from-X/schechner-focus-stereo-ijcv00.pdf .comment: Argues that depth from single-view or multiple-view stereo, depth from motion, depth from motion blur, depth from defocus, and depth from focus all perform triangulation within some 1D or 2D aperture (called the "baseline" for depth from stereo, and the "path" for depth from motion), and can be compared on this basis. For example, the techniques are largely similar in their sensitivity to repeating patterns, called "matching ambiguity" in depth from stereo, and "aliasing" in depth from defocus. However, the authors argue that depth from focus is less prone to aliasing than depth from defocus. Specifically, for repeating patterns, the output of a focus operator will have a unique maximum value, but will decrease non-monotonically with changes in aperture size or axial motion of the focal plane. This means that whereas a depth from defocus computation, e.g. estimating depth from a pinhole image and one defocused image, may return the wrong depth, a depth from focus computation, i.e. estimating depth by searching for the focus setting at which the operator returns its maximum, will always return the correct depth.
On the other hand, the techniques differ in the shape and sampling of the aperture: 1D discrete for stereo, 1D near-continuous for motion, and 2D continuous for focus. The difference in shape leads to a difference in sensitivity to occlusion. Specifically, occlusions occur more often in the 2D aperture of a focus method than in the 1D aperture of a stereo method, but are less disruptive. The difference in sampling (i.e. stereo uses two or more pinholes, while focus uses a continuum of rays) means that focus methods admit more light for a given baseline length, making them less sensitive to noise. Also, 2D apertures are immune to the "aperture problem" that plagues 1D aperture methods such as stereo or motion.
The paper contains no discussion of stereo, motion, or focus operators.
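A minimal sketch of the depth-from-focus search described above: sweep the focal plane, apply a focus operator at each pixel, and report the setting that maximizes it. The Laplacian-magnitude operator and the focal-stack layout are illustrative choices, not the paper's.

    import numpy as np
    from scipy.ndimage import laplace

    def depth_from_focus(focal_stack):
        # focal_stack: (n_settings, H, W) images, one per focus setting.
        sharpness = np.stack([np.abs(laplace(img)) for img in focal_stack])
        return np.argmax(sharpness, axis=0)   # index of sharpest setting per pixel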
Chen, S.E., Williams, L., View Interpolation for Image Synthesis, Proc. Siggraph '93. .comment: the first view interpolation paper, assumes Z-information .file: image-based-rendering/chen-viewinterp-sig93.pdf
McMillan, L., Bishop, G., Plenoptic Modeling: An Image-Based Rendering System, Proc. Siggraph '95 .file: image-based-rendering/mcmillan-plenoptic-sig95.pdf
Sillion, F., Drettakis, G., Bodelet, B., Efficient impostor manipulation for real-time visualization of urban scenery, Computer Graphics Forum (Proc. Eurographics '97), Vol. 16, No. 3, 1997, pp. 207-218. .comment: textured range images (=3D imposters) for bkg, geometry for fgd .file: image-based-rendering/sillion-imposter-eg97.pdf
Mark, W.R., McMillan, L., Bishop, G., Post-Rendering 3D Warping. Proc. 1997 Symposium on Interactive 3D Graphics. .comment: view interpolation (with Z) instead of rendering every frame .file: image-based-rendering/mark-warping-i3d97.pdf & -plates.pdf
Shade, J., Gortler, S.J., He, L., Szeliski, R., Layered Depth Images, Proc. Siggraph '98. .file: image-based-rendering/shade-layereddepth-sig98.pdf
Debevec, P., Yu, Y., Borshukov, G., Efficient view-dependent image-based rendering with projective texture-mapping, Proc. Eurographics Rendering Workshop '98. .comment: compute visibility map from each camera, split polygons, .comment: view-dependent texture mapping by hardware-projected textures
Pulli, K., Surface Reconstruction and Display from Range and Color Data, PhD dissertation, University of Washington, 1997. .file: 3D-shape-acquisition/pulli-dissertation.pdf .comment: chapters on global registration, image-guided registration, .comment: viewpoint-dependent textures
Aliaga, D.G., Lastra, A.A., Automatic Image Placement to Provide a Guaranteed Frame Rate, Proc. Siggraph '99. .comment: image caching, with guaranteed frame rate .file: image-based-rendering/aliaga-framerate.sig99.pdf
Magnor, M., Geometry-Adaptive Multi-View Coding Techniques for Image-based Rendering, PhD dissertation, University of Erlangen, 2000. .file: image-based-rendering/magnor-dissertation.pdf
Magnor, M., Girod, B., Data Compression for Light-Field Rendering, IEEE Transactions on circuits and systems for video technology, Vol. 10, No. 3, April, 2000. .comment: disparity-compensated compression and interpolation .file: image-based-rendering/magnor-lfcomp-tcsvt00.pdf
Magnor, M., Girod, B., Model-based coding of multi-viewpoint imagery, Proc. VCIP 2000. .comment: surface-based light fields using vision-based 3D voxel model .folder: image-based-rendering
Tong, X., Gray, R.M., Coding of multi-view images for immersive viewing, Proc. ICASSP 2000.
Maciel, P.W.C., Shirley, P., Visual navigation of large environments using textured clusters, Proc. 1995 Symposium on Interactive 3D Graphics, ACM, 1995, pp. 95-102. .comment: general LOD framework with (static) sprites .file: image-based-rendering/maciel-navigation-i3d95.pdf
Aliaga, D.G., Visualization of complex models using dynamic texture-based simplification, Proc. Visualization '96, IEEE Computer Society Press, October, 1996, pp. 101-106. .comment: texture as surrogate for geometry in the usual way, but morph .comment: remaining geometry to match texture rather than warping texture .file: image-based-rendering/aliaga-caching-vis96.pdf
Shade, J., Lischinski, D., DeRose, T.D., Snyder, J., Salesin, D.H., Hierarchical image caching for accelerated walkthroughs of complex environments, Proc. Siggraph '96. .comment: render portions of scene, cache images, map onto planes .file: image-based-rendering/shade-caching-sig96.pdf
Schaufler, G., Sturzlinger, W., A three-dimensional image cache for virtual reality, Computer Graphics Forum (Proc. Eurographics '96), Vol. 15, No. 3, 1996, pp. 227-235. .comment: very similar to Shade et al., Siggraph '96 .file: image-based-rendering/schaufler-cache-eg96.pdf
Wilson, A., Mayer-Patel, K., Manocha, D., Spatially-encoded far-field representations for interactive walkthroughs, Proc. Multimedia 2001. .comment: scene divided into cells with far-field images on each wall, .comment: similar to Regan's concentric environment maps, .comment: n-D MPEG encoding used to capture inter-cell coherence, .comment: general survey of IBR, with many references .file: image-based-rendering/wilson-spatialvideo-mm01.pdf
Seitz, S., Dyer, C, View Morphing, Proc. Siggraph '96. .file: image-based-rendering/seitz-view-morphing-sig96.pdf
Debevec, P.E., Taylor, C.J., Malik, J., Modeling and Rendering Architecture from Photographs, Proc. Siggraph '96. .comment: also introduces view-dependent texture mapping .file: image-based-rendering/debevec-modeling-sig96.pdf
Debevec, P., Rendering Synthetic Objects Into Real Scenes: Bridging Traditional and Image-Based Graphics With Global Illumination and High Dynamic Range Photography, Proc. Siggraph '98.
Irani, M., and Peleg, S., Improving Resolution by Image Registration. Graphical Models and Image Processing. Vol. 53, No. 3, May, 1991, pp. 231-239.
Mann, S., Picard, R., Virtual Bellows: Constructing High Quality Stills From Video, Proc. IEEE Int. Conf. on Image Processing, 1994. .comment: superresolution via upsampling -> adding -> deblurring .file: image-based-rendering/mann-stillsfromvideo-cip94.pdf
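A bare-bones sketch of the upsample -> add -> deblur pipeline in the comment above, assuming known integer shifts on the high-resolution grid; real systems estimate the registration, the entries below add robust estimators, and the final deblurring step is omitted here.

    import numpy as np

    def shift_and_add(frames, shifts, factor):
        # frames: list of (H, W) images; shifts: per-frame (dy, dx) offsets
        # in high-res pixels, each in [0, factor); factor: upsampling factor.
        H, W = frames[0].shape
        acc = np.zeros((H * factor, W * factor))
        cnt = np.zeros_like(acc)
        for img, (dy, dx) in zip(frames, shifts):
            acc[dy::factor, dx::factor] += img   # scatter samples onto the HR grid
            cnt[dy::factor, dx::factor] += 1
        return acc / np.maximum(cnt, 1)          # average; deblurring would follow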
Zomet, A., Rav-Acha, A., Peleg, S., Robust Super-Resolution, Proc. CVPR 2001. .comment: median estimator to remove outliers before super-resolution
Zomet, A., Peleg, S., Super-Resolution from Multiple Images having Arbitrary Mutual Motion, in S. Chaudhuri (ed.), Super-Resolution Imaging, Kluwer Academic, September 2001.
Shechtman, E., Caspi, Y., Irani, M., Increasing Space-Time Resolution in Video, Proc. ECCV '02. .file: image-processing/irani-spacetimeSR-eccv02.pdf .comment: Super-resolution in a video cube. Success depends on the regularization operator, which is not described in detail here but is taken from Shin, Paik, et al. (SPIE 2001).
Zhao, W.Y., Sawhney, H.S., Is super-resolution with optical flow feasible? Proc. ECCV 2002. .file: image-processing/zhao-superres-eccv02.pdf .comment: Super-resolution from image sequences is feasible only if noise is low and if flow is "consistent." They propose a bundle adjustment algorithm to obtain such a flow.
Baker, S., Kanade, T., Limits on super-resolution and how to break them, IEEE Trans. PAMI, Vol. 24, No. 8, August, 2002. .file: image-processing/baker-superres-limits-pami02.pdf
Finkelstein, A., Jacobs, C.E., Salesin, D.H., Multiresolution Video, Proc. Siggraph '96.
Heeger, D.J., Bergen, J.R., Pyramid-Based Texture Analysis/Synthesis, Proc. Siggraph '95. .comment: a good representative of statistical synthesis techniques
Efros, A., Leung, T., Texture synthesis by non-parametric sampling, Proc. ICCV '99. .comment: for each pixel, search for similar neighborhoods in sample, .comment: very slow, but the basis for Wei and Levoy and others .file: texture-synthesis/efros-texture-iccv99.pdf
Wei, L.-Y., Levoy, M., Fast texture synthesis using tree-structured vector quantization, Proc. Siggraph 2000. .comment: based on Efros and Leung
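A drastically simplified (and slow) sketch of the Efros-Leung idea described above: grow the output in raster order, matching the already-synthesized causal neighborhood against every neighborhood of the sample. The window size, corner seeding, and taking the single best match (the paper samples randomly among near-best matches) are our simplifications.

    import numpy as np

    def synthesize(sample, out_h, out_w, w=5):
        h = w // 2
        sh, sw = sample.shape
        pad = np.pad(sample, h, mode='wrap')
        # Every w x w neighborhood of the (toroidally wrapped) sample.
        patches = np.array([pad[i:i+w, j:j+w]
                            for i in range(sh) for j in range(sw)])
        mask = np.ones((w, w))        # 1 marks already-synthesized (causal) pixels
        mask[h, h:] = 0
        mask[h+1:, :] = 0
        out = np.zeros((out_h, out_w))
        out[:w, :w] = sample[:w, :w]  # seed the corner from the sample
        opad = np.pad(out, h)
        for i in range(out_h):
            for j in range(out_w):
                if i < w and j < w:
                    continue          # part of the seed
                nb = opad[i:i+w, j:j+w]   # window centered on output pixel (i, j)
                d = (((patches - nb) ** 2) * mask).sum(axis=(1, 2))
                out[i, j] = patches[np.argmin(d)][h, h]
                opad[i + h, j + h] = out[i, j]
        return out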
Praun, E., Finkelstein, A., Hoppe, H., Lapped textures, Proc. Siggraph 2000. .comment: manually specified patches laid down randomly on 2D manifold, .comment: works surprisingly well for some textures, extremely fast .file: texture-synthesis/praun-lapped-sig00.pdf
Efros, A.A., Freeman, W.T., Image quilting for texture synthesis and transfer, Proc. Siggraph 2001. .comment: Efros-Leung search + minimum-error cut between patches .file: texture-synthesis/efros-quilting-sig01.pdf
Ashikhmin, M., Synthesizing Natural Textures, Proc. 2001 Symposium on Interactive 3D Graphics, ACM. .file: texture-synthesis/ashikhmin-texture-i3d01.pdf .comment: Hybrid of Wei and Levoy (Siggraph 2000) and Praun's lapped textures: tries to build extended patches, randomly restarting at borders; more successful for some textures than for others. Also proposes an iterative method for texturing an existing image.
Ashikhmin, M., Shirley, P., Steerable illumination textures, ACM Transactions on Graphics, Vol. 21, No. 1, January, 2002. .comment: Sums basis textures to simulate illumination, including shadows and interreflections; because the basis functions are steerable, you can "steer" the lighting, hence the shadows.
Hertzmann, A., Jacobs, C.E., Oliver, N., Curless, B., Salesin, D.H., Image Analogies, Proc. Siggraph '01. .file: texture-synthesis/hertmann-analogies-sig01.pdf
Sawhney, H.S., Guo, Y., Hanna, K., Kumar, R., Adkins, S., Zhou, S., Hybrid Stereo Camera: An IBR Approach for Synthesis of Very High Resolution Stereoscopic Image Sequences, Proc. Siggraph '01. .file: texture-synthesis/sawhney-stereo-sig01.pdf
Mann, S., Picard, R.W., On being 'undigital' with digital cameras: Extending Dynamic Range by Combining Differently Exposed Pictures, M.I.T. Media Laboratory Perceptual Computing Section Technical Report No. TR-323. IS&T's 48th Annual Conference, May 7-11, 1995, pp. 422-428. .folder: image-based-rendering
Debevec, P., Malik, J., Recovering High Dynamic Range Radiance Maps from Photographs, Proc. Siggraph '97. .file: sensing-hardware/debevec-highrange-sig97.pdf
Mitsunaga, T., Nayar, S., Radiometric self calibration, Proc. CVPR '99. .comment: creating high-dynamic range radiance maps from ratios of pixel .comment: values in multiple exposures *without* knowing exposure times, .comment: as is required in Debevec and Malik .file: sensing-hardware/mitsunaga-calibration-cvpr99.pdf
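Merging multiple exposures is simple once the response is linearized; here is a minimal sketch assuming a linear sensor and known exposure times. Debevec and Malik additionally recover the nonlinear response curve, and Mitsunaga and Nayar dispense with known exposure times; the hat-shaped weighting is a common choice, not specifically theirs.

    import numpy as np

    def merge_exposures(images, times):
        # images: (n, H, W) linear pixel values in [0, 1]; times: n exposure times.
        w = 1.0 - np.abs(2.0 * images - 1.0)   # trust mid-tones, not clipped ends
        w = np.maximum(w, 1e-4)                # avoid division by zero
        radiance = images / np.asarray(times)[:, None, None]
        return (w * radiance).sum(axis=0) / w.sum(axis=0)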
Nayar, S.K., Mitsunaga, T., High dynamic range imaging: spatially varying pixel exposures, Proc. CVPR 2000. .comment: collaboration with Sony on placing a mask in front of sensor .file: sensing-hardware/nayar-high-dyanmic-range-cvpr2000.pdf
Aggarwal, M., Ahuja, N., High dynamic range panoramic imaging, Proc. ICCV 2001. .comment: stepped grayscale mask placed over stepwise rotating sensor .comment: survey of high dynamic range imaging .comment: (see also Schechner and Nayar in these proceedings)
Nayar, S.K., Branzoi, V., Adaptive Dynamic Range Imaging: Optical Control of Pixel Exposures Over Space and Time, Proc. ICCV 2003. .comment: Using a liquid crystal light modulator to limit the light reaching each pixel, thereby allowing HDR acquisition. Scoops an idea by Bolas & McDowall. See also Nayar et al., Proc. CVPR 2004.
Nayar, S.K., Branzoi, V., Boult, T.E., Programmable Imaging using a Digital Micromirror Array, Proc. CVPR 2004. .file: sensing-hardware/nayar-dmd-cvpr04.pdf (unrevised version?) .comment: HDR imaging by bouncing the scene off a DLP chip and into a camera. Proposes three methods: (1) globally decreasing the exposure by displaying a graylevel on the DLP chip, allowing the camera to continue to use its full integration time almost everywhere in the image, hence 8 bits x 8 bits = 16 bits, (2) displaying a checkerboard on the DLP, like Nayar and Mitsunaga (CVPR 2000), and (3) displaying the captured image itself on the DLP chip (requires two acquisitions), allowing the camera to continue to use nearly its full integration time *at every pixel*. See also Nayar and Branzoi, Proc. ICCV 2003. They also propose using the DLP to implement intra-pixel spatial convolution of the captured image in real time, assuming the DLP has higher spatial resolution than the (video) camera. They even propose using the DLP to perform pixel-independent processing, such as feature matching, in which the multiplication is performed by modulating the DLP image, leaving only an integration to be performed by summing all the pixels in the camera. Finally, by flipping all the mirrors in the DLP, they can shift the field of view by 20 degrees. What they'd *like* to have is programmable mirror angles, allowing truly adaptive optics.
Cohen, J., Tchou, C., Hawkins, T., Debevec, P., Real-time High Dynamic Range Texture Mapping, Proc. Eurographics Rendering Workshop 2001. .file: image-based-rendering/debevec-highrange-rend01.pdf
DiCarlo, J., Wandell, B., Rendering High Dynamic Range Images, Proc. SPIE Electronic Imaging 2000, Vol. 3965, San Jose, CA, January 2000. .file: sensing-hardware/wandell-dynamic-range-spie00.pdf
Wandell, B., Catrysse, P., DiCarlo, J., Yang, D., El Gamal, A., Multiple Capture Single Image Architecture with a CMOS Sensor, Proc. Chiba Conference on Multispectral Imaging, Chiba, Japan, 1999. .file: sensing-hardware/wandell-MCSI-chiba99.pdf .comment: trading off spatial resolution and dynamic range
Tumblin, J. and Rushmeier, H., Tone reproduction for realistic images, IEEE Computer Graphics and Applications, Vol. 13, No. 6, November, 1993, pp. 42-48. .comment: dealing with gamma correction, CRT contrast, ambient .comment: illumination, and computed luminance all at once, tone .comment: reproduction operator
Tumblin, J., Turk, G., LCIS: A Boundary Hierarchy For Detail-Preserving Contrast Reduction, Proc. Siggraph '99. .file: sensing-hardware/tumblin-highrange-sig99.pdf
Fattal, R., Lischinski, D., Werman, M., Gradient domain high dynamic range compression, Proc. Siggraph 2002.
Durand, F., Dorsey, J., Fast bilateral filtering for the display of high-dynamic-range images, Proc. Siggraph 2002.
Reinhard, E., Stark, M., Shirley, P., Ferwerda, J., Photographic tone reproduction for digital images, Proc. Siggraph 2002. .comment: Uses Ansel Adams's Zone System.
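The global part of Reinhard et al.'s operator is compact enough to sketch here; the paper adds a local dodging-and-burning pass on top, and key = 0.18 corresponds to middle grey in Zone System terms.

    import numpy as np

    def tonemap_global(lum, key=0.18, eps=1e-6):
        # lum: (H, W) array of scene luminances.
        log_avg = np.exp(np.mean(np.log(lum + eps)))   # log-average luminance
        L = key * lum / log_avg                        # scale the scene to the chosen key
        L_white = L.max()                              # smallest luminance mapped to white
        return L * (1.0 + L / L_white**2) / (1.0 + L)  # display luminance in [0, 1]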
Edgerton, H., Stopping Time, Abrams, 1987. .comment: his classic high-speed photographs
Ray, S.F., ed., High Speed Photography, Focal Press, 1997.
Wilburn, B., Joshi, N., Vaish, V., Levoy, M., Horowitz, M., High Speed Video Using a Dense Camera Array, Proc. CVPR 2004. URL: http://graphics.stanford.edu/papers/highspeedarray/index.html
Wilburn, B., Joshi, N., Chou, K., Levoy, M., Horowitz, M., Spatiotemporal Sampling and Interpolation for Dense Camera Arrays, URL: https://graphics.stanford.edu/papers/st_interp/
Nayar, S.K., Karmarkar, A., 360 x 360 mosaics, Proc. CVPR 2000. .comment: panning a camera around the equator, (1) using a parabolic .comment: mirror each frame sees (overlapping) longitudinal strips, .comment: which are then mosaiced together to make a 360 x 360, .comment: or (2) using a conical mirror each frame sees a single arc .comment: of longitude, with each latitude ray smeared into a line .comment: segment on the sensor, thereby permitting superresolution .file: sensing-hardware/nayar-360-360-cvpr2000.pdf
Schechner, Y., Nayar, S., Generalized Mosaicing, Proc. ICCV 2001. .comment: continuous grayscale mask placed over moving color camera, or .comment: continuous rainbow mask placed over moving B&W camera .comment: special technique for registering differently filtered images .comment: suggests varying defocus, suggests many variables at once, etc. .comment: (see also Aggarwal and Ahuja in these proceedings) .comment: later tried polarization and extended depth-of-field, see: .comment: http://www.cs.columbia.edu/CAVE/, demos, video demonstrations
Dowski, E.R., Johnson, G.E., Wavefront Coding: A modern method of achieving high performance and/or low cost imaging systems, Proc. SPIE, August, 1999. .comment: xy-separable sinusoidal (anamorphic) lens -> afocal image .comment: in which objects at different depths are equally misfocused -> .comment: digital processing -> extended depth-of-field, seems to work! .file: image-processing/dowski-wavefront-coding-spie99.pdf .comment: see also Dowski (OSA '95) and Bradburn (Applied Optics '97) .comment: see also http://www.cdm-optics.com, esp. .comment: http://www.cdm-optics.com/wave/pubs/papers/edf/paper.html
Dowski, E.R., An Information Theory Approach to Incoherent Information Processing Systems, Signal Recovery and Synthesis V, OSA Technical Digest Series, pp. 106-108, March, 1995. .comment: theory behind Dowski and Johnson (SPIE '99) .comment: see also Bradburn et al. (Applied Optics '97) .file: image-processing/dowski-infotheory-osa95.pdf
See also the section on "coded aperture imaging" in Berthold Horn's course on computational imaging.
Ogden, J.M., Adelson, E.H., Bergen, J.R., Burt, P.J., Pyramid-Based Computer Graphics, RCA Engineer, Vol. 30, No. 5, Sept./Oct., 1985, pp. 4-15. .folder: texture-rendering .comment: includes extended depth-of-field, .comment: cited by Haeberli's Graphica Obscura article on depth of field, .comment: see http://www.sgi.com/grafica/depth/
Yang, D.X.D., El Gamal, A., Fowler, B., Tian, H., A 640×512 CMOS Image Sensor with Ultra Wide Dynamic Range Floating-Point Pixel-Level ADC, Proc. IEEE International Solid-State Circuits Conference (ISSCC), 1999.
Chen, T., Catrysse, P., El Gamal, A., Wandell, B., How Small Should Pixel Size Be? Proc. SPIE Electronic Imaging 2000, Vol. 3965, San Jose, CA, January 2000. .file: sensing-hardware/gamal-pixel-size-spie00.pdf .comment: trading off spatial resolution, dynamic range, and SNR
El Gamal, A., Yang, D., Fowler, B., Pixel Level Processing - Why, What, and How? Proc. SPIE Electronic Imaging '99, Vol. 3650, January 1999. .file: sensing-hardware/gamal-pixel-processing-spie99.pdf .comment: CMOS imaging, analog versus digital interpixel processing
Lim, S.H., El Gamal, A., Integrating Image Capture and Processing -- Beyond Single Chip Digital Camera, Proc. SPIE Electronic Imaging 2001, Vol. 4306, San Jose, CA, January 2001. .file: sensing-hardware/gamal-motion-est-spie01.pdf .comment: motion estimation from high-speed imaging (10,000fps)
Liu, X.Q., El Gamal, A., Photocurrent Estimation from Multiple Non-destructive Samples in a CMOS Image Sensor, Proc. SPIE Electronic Imaging 2001, Vol. 4306, San Jose, CA, January 2001. .file: sensing-hardware/gamal-high-range-spie01.pdf .comment: improvement (for darks) on their high dynamic range sensor
Kleinfelder, S., Lim, S., Liu, X., El Gamal, A., A 10,000 Frames/s 0.18 µm CMOS Digital Pixel Sensor with Pixel-Level Memory, Proc. 2001 International Solid-State Circuits Conference. .comment: latest 10,000 frame/sec CMOS sensor .file: sensing-hardware/gamal-10kfps-isscc01.pdf
Agarwala, A., Dontcheva, M., Agrawala, M., Drucker, S., Colburn, A., Curless, B., Salesin, D., Cohen, M., Interactive digital photomontage, Proc. SIGGRAPH 2004. .comment: Compendium of cool ways to combine multiple images for interesting effects.
Petschnigg, G., Agrawala, M., Hoppe, H., Szeliski, R., Cohen, M., Toyama, K., Digital Photography with Flash and No-Flash Image Pairs, Proc. SIGGRAPH 2004. .comment: Reducing noise by combining flash and no-flash image pairs. Grew out of a student project (by Georg Petschnigg) in this course!
Agrawal, A., Raskar, R., Nayar, S.K., Li, Y., Removing photography artifacts using gradient projection and flash-exposure sampling, Proc. SIGGRAPH 2005. .comment: Removing reflections of the flash, or of the scene from the flash, and other artifacts.
Raskar, R., Tan, K.-H., Feris, R., Yu, J., Turk, M., Non-photorealistic camera: depth edge detection and stylized rendering using multi-flash imaging, Proc. SIGGRAPH 2004. .comment: Flash from multiple locations. Identifies depth discontinuities. Can be used for NPR rendering, vision algorithms.
Srinivasan, S., Chellappa, R., Image sequence stabilization, mosaicking, and superresolution, In Handbook of Image and Video Processing (chapter 3.13), Al Bovik, ed., Academic Press, 2000. .comment: survey of these subjects, including ideas for the future
Srinivasan, S., Chellappa, R., Image stabilization and mosaicking using the overlapped basis optical flow field, Proc. ICIP '97. .comment: paper underlying their chapter (3.13) in Al Bovik's book .file: image-processing/srinivasan-stabilization-icip97.pdf
Buehler, C., Bosse, M., McMillan, L., Non-Metric Image-Based Rendering for Video Stabilization, Proc. CVPR 2001.
Kemp, M., The Science of Art, Yale University Press, 1990.
Cole, A., Perspective, Dorling Kindersley, 1992.
Leonardo Da Vinci, Leonardo on Painting, translated by M. Kemp and M. Walker, Yale University Press, 1989.
Rademacher, P., Bishop, G., Multiple-Center-of-Projection Images, Proc. SIGGRAPH '98. .file: image-based-rendering/rademacher-mcop-sig98.pdf
Andrew Davidhazy's articles and examples of panoramic, strip, and peripheral (rollout) photographs URL: http://www.rit.edu/~andpph/
Zomet, A., Peleg, S., Arora, C., Rectified mosaicing: mosaics without the curl, Proc. CVPR 2000. .comment: if the camera is tilted, the resulting panorama is curved, .comment: solved by cutting imagery into strips and unkeystoning each one
Rousso, B., Peleg, S., Finci, I., Rav-Acha, A., Universal mosaicing using pipe projection, Proc. ICCV '98. .comment: if a camera moves or zooms toward a focus-of-expansion (FOE), .comment: for each frame, project one circle of pixels around the FOE .comment: onto the surface of a pipe extruded along the 3D camera motion, .comment: creates a single pipe-shaped mosaic for the image sequence, .comment: which can then be reprojected onto a plane for viewing, if .comment: camera moves and scene is not flat, result is multi-perspective
Peleg, S., Herman, J., Panoramic Mosaics by Manifold Projection, Proc. CVPR '97. .comment: swept video camera -> grab central column from each frame -> .comment: panoramic image, result is ultra-wide-angle planar projection .file: image-based-rendering/peleg-panoramic-cvpr97.pdf
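The central-column trick in the comment above is nearly a one-liner; this toy version assumes a camera panning at constant speed and takes one column per frame (real systems estimate inter-frame motion and cut strips of matching width).

    import numpy as np

    def strip_panorama(frames):
        # frames: (n_frames, H, W) video from a steadily panning camera.
        center = frames.shape[2] // 2
        return frames[:, :, center].T   # (H, n_frames) panoramic image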
Gu, X., Gortler, S.J., Cohen, M.F., Polyhedral geometry and the two-plane parameterization, Proc. Eurographics Rendering Workshop '97. .file: image-based-rendering/gortler-polyhedral-rend97.pdf .comment Some observations on the geometric and algebraic structure of linear subspaces of 4D space, i.e. slices of light fields, e.g. what is the space of all lines passing through a point, line segment, or triangle. No applications given, but presages Cohen's video cube and Zomet's cross-slit panoramas. See also Yu and McMillan.
Yu, J., McMillan, L., General Linear Cameras, Proc. ECCV 2004. .file: perspective/mcmillan-cameras-eccv04.pdf .comment: Gu et al.'s missing camera, and some others.
Seitz, S., The Space of All Stereo Images, Proc. ICCV 2001.
Agrawala, M., Zorin, D., Munzner, T., Artistic multiprojection rendering, Proc. Eurographics Rendering Workshop 2000.
Weinshall, D., Lee, M.S., Brodsky, T., Trajkovic, M., Feldman, D., New View Generation with a Bi-centric Camera, Proc. ECCV '02. .file: image-based-rendering/weinshall-bicentric-eccv02.pdf .comment: Panoramas from a translating, sideways-looking camera, where the horizontal and vertical centers of projection lie at different points along the virtual camera's optical axis, applied to architectural interiors. Also movie sequences composed of such panoramas, where one center of projection moves. Note: straight lines map to hyperbolae in these images, and these distortions change over time in the movies. Finally, since they take strips from each image, and the scene is not flat, there are discontinuities at strip boundaries in the panoramas. These discontinuities move coherently in the movies.
Zomet, A., Feldman, D., Peleg, S., Weinshall, D., Non-Perspective Imaging and Rendering with the Crossed-Slits Projection. Technical report #2002-41, Hebrew University, July, 2002. .file: image-based-rendering/zomet-xslits-TR02.pdf .comment: Generalization of Weinshall et al. (ECCV '02). Two slits (line segments) in arbitrary position. Parameterization of the set of rays passing through the slits is forced by their intersection with a rectangularly parameterized image plane in general position. Analyzes the algebraic and geometric properties of these images, including the effects of rotating the image plane and tilting the slits. Also discusses variants on slits, including one linear and one circular. Applications include sideways-looking video taken from a helicopter.
Nayar, S.K., Catadioptric omnidirectional camera, Proc. CVPR '97. .comment: hemispherical video camera using a mirror and a single sensor
Nayar, S.K., Peri, V., Folded catadioptric cameras, Proc. CVPR '99. .comment: imaging systems with two conic mirrors followed by lenses
Baker, S., Nayar, S., A Theory of Catadioptric Image Formation, Proc. ICCV '98. .file: sensing-hardware/baker-nayar-catadioptric-iccv98.pdf .comment: see later and expanded version in IJCV '99
Baker, S., Nayar, S., A Theory of Single-Viewpoint Catadioptric Image Formation, Proc. IJCV '99. .file: sensing-hardware/baker-nayar-catadioptric-ijcv99.pdf .comment: tutorial on geometrical optics of mirror/lens systems .comment: earlier version appeared in ICCV '98
Gluckman, J., Nayar, S.K., Rectified catadioptric stereo sensors, Proc. CVPR 2000. .comment: using planar mirrors to obtain stereo views using one camera, .comment: arrangements that yield parallel scanlines in both cameras .file: sensing-hardware/nayar-rectified-stereo-cvpr2000.pdf
Swaminathan, R., Grossberg, M., Nayar, S., Caustics of Catadioptric Cameras, Proc. ICCV 2001. .comment: locus of viewpoints of single-reflector conical systems
Swaminathan, R., Nayar, S.K., Grossberg, M.D., Framework for Designing Catadioptric Projection and Imaging Systems, IEEE International Workshop on Projector-Camera Systems (PROCAMS-2003), At ICCV 2003. .file: projectors/nayar-desiging-procams03.pdf .comment: Designing a mirror to allow undistorted projection onto any surface without digital resampling.
Raskar, R., Shader Lamps: Animating Real Objects with Image-Based Illumination, Proc. Eurographics Rendering Workshop 2001. .file: projectors/raskar-shaderlamps-egrw01.pdf .comment: Revision of UNC TR. Also discusses inverse global illumination, because projected light will unavoidably scatter from receiving surfaces onto other surfaces. It might be possible to partially correct for this effect, but never eliminate it. (Think of a surface with white projected onto it that is adjacent to one that is intended to stay black.)
Raskar, R., Welch, G., Cutts, M., Lake, A., Stesin, L., Fuchs, H., The Office of the Future: A Unified Approach to Image-Based Modeling and Spatially Immersive Displays, Proc. SIGGRAPH '98. .file: image-based-rendering/fuchs-office-future-sig98.pdf
Raskar, R., Brown, M.S., Yang, R., Chen, W.-C., Welch, G., Towles, H., Seales, B., Fuchs, H., Multi-projector displays using camera-based registration, Proc. Visualization '99, IEEE Computer Society Press, October, 1999, pp. 161-168. .comment: Structured light 3D scanning + 3D merging + image-guided warp to produce seamless albeit geometrically imperfect calibration.
Jaynes, C., Webb, S., Steele, R.M., Brown, M., Seales, W.B., Dynamic Shadow Removal from Front Projection Displays, Proc. Visualization 2001, ACM, October, 2001. .comment: Shadow cancelation if two projector images overlap on screen. See also Rehg et al. in ICARV '02, and Cham et al. in PROCAMS-2003.
Rehg, J.M., Flagg, M., Cham, T.-J., Sukthankar, R., Sukthankar, G., Projected Light Displays Using Visual Feedback, Intl. Conf. on Control, Automation, Robotics, and Vision (ICARV '02). .file: projectors/rehg-projected-icarv02.pdf .comment: Shadow cancelation and elimination of projector illumination of speaker. See also their more complete paper in PROCAMS-2003. See also Jaynes et al. in Vis '01.
Cham, T.-J., Rehg, J.M., Sukthankar, R., Sukthankar, G., Shadow Elimination and Occluder Light Suppression for Multi-Projector Displays, IEEE International Workshop on Projector-Camera Systems (PROCAMS-2003), At ICCV 2003. .file: projectors/cham-shadows-procams03.pdf .comment: Camera must be placed sufficiently to the side that its view of the screen is not occluded. This is a significant limitation for long-range applications. Uses serial perturbation of the brightness of one projector after another to discover which one is causing each observed shadow. In synthetic aperture illumination, by contrast, we don't vary projectors separately. See also their earlier paper in ICARV '02. See also Jaynes et al. in Vis '01.
Raskar, R., Beardsley, P., A Self-Correcting Projector, Proc. CVPR 2001. .comment: Projector + camera + tilt sensor + assumed vertical wall. Allows automatic correction for projector keystone and tilt.
Raskar, R., Baar, J.v., Beardsley, P., Willwacher, T., Rao, S., Forlines, C., iLamps: Geometrically Aware and Self-Configuring Projectors, Proc. Siggraph 2003. .comment: Single or clusters of projectors that adapt themselves to the available display surface, even if curved, and to each other. Projectors include cameras, tilt sensors, and wireless LAN cards.
Jaynes, C., Ramakrishnan, D., Super-Resolution Composition in Multi-Projector Displays, IEEE International Workshop on Projector-Camera Systems (PROCAMS-2003), At ICCV 2003. .comment: Using overlapped projectors with pre-sharpened imagery to boost resolution.
Levoy, M., Chen, B., Vaish, V., Horowitz, M., McDowall, I., Bolas, M., Synthetic aperture confocal imaging, Proc. SIGGRAPH 2004. URL: http://graphics.stanford.edu/papers/confocal/
Stark, M., Stereo Technology, Web page, circa 1995. .file: stereoscopic-displays/stark-stereo-technology.html .comment good survey of technologies, with citations of patents
Halle, M., Autostereoscopic displays and computer graphics Computer Graphics, ACM SIGGRAPH, 31(2), May 1997, pp 58-62. .file: stereoscopic-displays/halle-autostereoscopic-disptech97.pdf
Okoshi, T., Three-Dimensional Imaging Techniques, Academic Press, 1976.
Benton, S. A. and Lucente, M. Interactive Computation of Display Holograms. Proc. Computer Graphics International '92, June, 1992.
Klug, M.A., Halle, M.W., Hubel, P.M., Full Color Ultragrams, Proc. SPIE: Practical Holography VI, Vol. 1667, 1992.
St. Hilaire, P., Modulation Transfer Function of Holographic Stereograms, Proc. SPIE: Applications of Optical Holography, Toshio Honda, ed., Vol. 2577, 1995, pp. 41-49.
Rusinkiewicz, S., Hall-Holt, O., Stripe Boundary Codes for Real-Time Structured-Light Range Scanning of Moving Objects, Proc. ICCV '01. URL: http://graphics.stanford.edu/papers/realtimerange/
Rusinkiewicz, S., Real-time Acquisition and Rendering of Large 3D Models, PhD dissertation, Stanford University. URL: http://graphics.stanford.edu/papers/smr_thesis/
Rusinkiewicz, S., Hall-Holt, O., Levoy, M., Real-Time 3D Model Acquisition, Proc. Siggraph '02. URL: https://graphics.stanford.edu/papers/rt_model/
Chang, N.L., Efficient Dense Correspondences using Temporally Encoded Light Patterns, IEEE International Workshop on Projector-Camera Systems (PROCAMS-2003), At ICCV 2003. .file: 3D-shape-acquisition/chang-codedlight-iccv03.pdf .comment: Szymon's system analyzes horizontally adjacent camera pixels. This works only if the camera's scanlines are parallel to the projector-camera baseline. This limits the placement of multiple cameras. The proposed system relaxes this constraint by successively projecting vertical stripes (like Szymon), then horizontal stripes. His camera can therefore be anywhere, and he can employ multiple cameras. He doesn't attempt to make his system real-time. He mentions multiple projectors and one camera as an alternative, but he doesn't elaborate.
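A generic decoder for temporally encoded stripe patterns of this family, assuming plain binary codes with a captured inverse of each pattern for robust thresholding; the actual codes in Chang's and Rusinkiewicz's systems differ.

    import numpy as np

    def decode_columns(images, inverses):
        # images, inverses: (n_bits, H, W) captures of each projected stripe
        # pattern and its inverse, most-significant bit first (our convention).
        bits = images > inverses                  # robust per-pixel thresholding
        weights = 1 << np.arange(bits.shape[0])[::-1]
        return (bits * weights[:, None, None]).sum(axis=0)   # projector column per pixel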
Oh, B.M., Chen, M., Dorsey, J., Durand, F., Image-based modeling and photo editing, Proc. Siggraph 2001.
Siggraph '98 course #15, Debevec et al. URL: http://www.cs.berkeley.edu/~debevec/IBMR98/
Siggraph '99 course #39, Debevec et al. URL: http://www.debevec.org/IBMR99/
CMU 15-869, Seitz and Heckbert URL: http://www-2.cs.cmu.edu/~ph/869/www/869.html
Karlsruhe IBMR-Focus resource page URL: http://i31www.ira.uka.de/~oel/ibmr-focus/
UNC IBR resource page URL: http://www.cs.unc.edu/~ibr/
Aaron Isaksen's web page on autostereoscopic display of light fields URL: http://graphics.lcs.mit.edu/~aisaksen/projects/autostereoscopic/
Intel's work on surface light fields URL: http://www.intel.com/research/mrl/research/lfm/
U. Washington's work on surface light fields URL: http://grail.cs.washington.edu/projects/slf/
Korean page of links for autostereoscopic displays: URL: http://vr.kjist.ac.kr/~3D/Research/Stereo/display.html
The Page of Omnidirectional Vision URL: http://www.cis.upenn.edu/~kostas/omni.html