CS348n: Neural Representations and

Generative Models for 3D Geometry

Leonidas Guibas

Spring 2022-23

The goal of this course is to cover recent neural techniques for the generation of synthetic 3D content, including object and scene models as well as animations. This is an area of intense current exploration and activity.

3D design has historically been the purview of specialists in game design or movie special effects. Machine learning, however, can exploit priors and patterns extracted from large 2D and 3D data corpora to democratize 3D design and make the process accessible to a much broader audience. Generative AI has recently made great strides in the language (e.g., ChatGPT) and image domains, including cross-modal image generation from text, synthesizing outputs of striking realism (e.g., DALL-E 2, Stable Diffusion, Imagen). There has also been a flurry of efforts in 3D content generation (e.g., DreamFusion, Magic3D), but these remain a work in progress, possibly hampered by the relative scarcity of available 3D data.

This course will survey the wide variety of neural architectures and techniques available to generate 3D content (object and scene models, static as well as dynamic) in either explicit or implicit form.

Topics to be covered include:

  • classical 3D geometry representations (voxels, point clouds, meshes, parametric and implicit surfaces)
  • neural architectures for geometric data: point clouds
  • generative models: VAEs and coordinate-based implicit representation networks (e.g., DeepSDF)
  • 3D generative adversarial networks (GANs)
  • public data sets of 3D shapes and scenes
  • neural ODEs and normalizing flows, autoregressive models
  • denoising diffusion and score-based models
  • hierarchical generation of structure and geometry
  • vector graphics, deep architectures for meshes
  • conditional generation: shape completion, shape from image
  • geometry generation from language
  • neural radiance fields (NeRFs) and neural rendering
  • shape variation generation, continuous and discrete
  • scene generation, object placement, interaction synthesis
  • shape / scene edits and deformations
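To make the implicit-representation idea from the list above concrete: an implicit surface is the zero level set of a scalar field f(x), and neural approaches such as DeepSDF learn f with a network. The following minimal numpy sketch (illustrative only, not course material) uses an analytic sphere SDF in place of a learned network and rasterizes it into an occupancy voxel grid, linking the implicit and voxel representations mentioned above.

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside, zero on the surface,
    positive outside. A learned SDF network would replace this function."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Sample a 32^3 voxel grid over the box [-1.5, 1.5]^3 and mark as occupied
# every voxel whose center lies inside the implicit surface f(x) = 0.
n = 32
axis = np.linspace(-1.5, 1.5, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
occupancy = sphere_sdf(grid.reshape(-1, 3)) < 0

# The fraction of occupied voxels approximates the ratio of sphere volume
# (4/3 * pi) to box volume (27), i.e. roughly 0.155.
print(occupancy.mean())
```

The same grid-evaluation step is how an implicit shape is typically converted to an explicit one in practice, e.g. by running marching cubes on the sampled SDF values.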

The coursework will involve three brief assignments on a selection of the above topics, a class presentation, plus a small project. Class participation, discussion and teamwork will be strongly encouraged.


These pages are maintained by Leonidas Guibas guibas@cs.stanford.edu.
Last update April 17, 2023.