Semantically-Enriched 3D Models for Common-sense Knowledge
Stanford University
The resulting annotated models are part of the ShapeNet dataset
CVPR 2015 Vision meets Cognition Workshop
Abstract
We identify a set of physical properties and connect them to 3D models, creating a richly annotated 3D model dataset with data on physical sizes, static support, attachment surfaces, material compositions, and weights. To collect these physical property priors, we leverage observations of 3D models within 3D scenes as well as information from images and text. By augmenting 3D models with these properties, we create a semantically rich, multi-layered dataset of common indoor objects. We demonstrate the usefulness of these annotations for improving 3D scene synthesis systems, enabling faceted semantic queries into 3D model datasets, and reasoning about how people can manipulate objects using weight and static friction estimates.
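As a rough illustration of the kind of manipulation reasoning the abstract alludes to (a sketch, not the paper's actual formulation), the snippet below uses an annotated object weight and an assumed coefficient of static friction to estimate the horizontal force needed to slide an object, then compares it against an assumed human push-force budget. The friction coefficients, the 250 N budget, and the function names are illustrative assumptions, not values from the dataset.

```python
# Illustrative sketch (not the paper's method): combining a weight annotation
# with an assumed static friction coefficient to judge whether a person could
# slide an object across a floor.

G = 9.81  # gravitational acceleration, m/s^2

# Assumed static friction coefficients for (object material, floor material)
# pairs; these are generic textbook-style values, not dataset annotations.
MU_STATIC = {
    ("wood", "wood"): 0.4,
    ("metal", "wood"): 0.3,
    ("rubber", "wood"): 0.7,
}

def force_to_slide(mass_kg: float, material: str, floor: str = "wood") -> float:
    """Minimum horizontal force (N) to overcome static friction: F = mu * m * g."""
    mu = MU_STATIC.get((material, floor), 0.5)  # fall back to a generic coefficient
    return mu * mass_kg * G

def person_can_slide(mass_kg: float, material: str, max_push_n: float = 250.0) -> bool:
    """Compare the required force against an assumed human push-force budget (N)."""
    return force_to_slide(mass_kg, material) <= max_push_n

# Example: a 35 kg wooden desk on a wood floor.
print(force_to_slide(35.0, "wood"))    # ~137 N
print(person_can_slide(35.0, "wood"))  # True under the assumed 250 N budget
```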