Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis
Data Formats
For 32³, we provide train and test data as HDF5 files. We also provide the raw scanned and complete data at 32³ and 128³ in custom binary formats:
- Partial Data (*.sdf), binary layout:
  dimX    # uint64
  dimY    # uint64
  dimZ    # uint64
  data    # (dimX*dimY*dimZ) floats for sdf values
- Complete Data (*.df), binary layout:
  dimX    # uint64
  dimY    # uint64
  dimZ    # uint64
  data    # (dimX*dimY*dimZ) floats for df values
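A minimal loader sketch for these binary volumes, assuming the values are stored as 32-bit floats and that the flat array can be reshaped to (dimX, dimY, dimZ); both the float width and the axis ordering are assumptions, not part of the layout above. The HDF5 snippet only lists dataset keys, since the internal structure of the h5 archives is not described here, and the file path is a placeholder.

```python
import numpy as np
import h5py

def read_volume(path):
    """Read a *.sdf / *.df file: three uint64 dims, then dimX*dimY*dimZ float values."""
    with open(path, "rb") as f:
        dim_x, dim_y, dim_z = (int(d) for d in np.fromfile(f, dtype=np.uint64, count=3))
        values = np.fromfile(f, dtype=np.float32, count=dim_x * dim_y * dim_z)  # assumes float32
    # Axis order of the reshape is an assumption; transpose if your code expects a different layout.
    return values.reshape(dim_x, dim_y, dim_z)

# Inspect an HDF5 train/test archive: just list whatever datasets it contains.
with h5py.File("h5_shapenet_dim32_sdf/train.h5", "r") as h5:  # hypothetical path after extraction
    print(list(h5.keys()))
```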
Input Data
We provide virtually scanned partial models from ShapeNet Core, as well as the corresponding distance transforms of the complete models. Files are structured as [class id]/[model id]__[trajectory id]__.[sdf/df] (see the parsing sketch after the download list below).
- h5_shapenet_dim32_sdf.zip (30GB)
- shapenet_dim32_df.zip (4GB)
- shapenet_dim32_sdf.zip (11GB)
- shapenet_dim64_df.zip (30GB)
- shapenet_dim128_df.zip (256GB)
- shapenet_dim32_sdf_pc.zip (10GB)
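As a small illustration of the naming convention above, the following sketch walks an extracted archive and splits the [class id]/[model id]__[trajectory id]__ pattern; the root directory name is a placeholder for wherever the data was unzipped.

```python
from pathlib import Path

root = Path("shapenet_dim32_sdf")  # placeholder: directory the archive was extracted to
for sdf_path in sorted(root.glob("*/*.sdf")):
    class_id = sdf_path.parent.name
    # stem looks like "[model id]__[trajectory id]__"
    model_id, trajectory_id = sdf_path.stem.split("__")[:2]
    print(class_id, model_id, trajectory_id)
```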
Benchmark
We provide two synthetic test benchmarks of 1200 partial models each (shapenet model id list here). The images benchmark contains models scanned with a single depth image from a horizontal camera, while the scans benchmark contains models scanned along a trajectory of at least one depth image. We also provide a test benchmark on real scan data based on the dataset from Qi et al. 2016, containing instances from the chair, desk, nightstand, sofa, and table categories.
- h5_test-real_dim32_sdf.zip (5MB)
- test-real_dim32_sdf.zip (4MB)
- h5_test-images_dim32_sdf.zip (150MB)
- test-images_dim32_sdf.zip (18MB)
- h5_test-scans_dim32_sdf.zip (180MB; see input data above for the corresponding data)
- test-images_dim128_sdf.zip (960MB)
- test-images_dim32_sdf_pc.zip (13MB)
- for test-scans, see point cloud data above for the respective shapenet model ids
Our results:
- output-test-real-32.zip (10MB)
- output-test-real-128.zip (100MB)
- output-test-images-32.zip (110MB)
- output-test-images-128.zip (800MB)
- output-test-scans-32.zip (110MB)
- output-test-scans-128.zip (800MB)
Trained models (including classifier): trained_models.zip (740MB)
ℓ1 norm to ground truth distance field (masked)

| method | scans, 32³ | scans, 128³ | images, 32³ | images, 128³ |
|---|---|---|---|---|
| epn-unet-class + synth [1] | 0.309 | 1.80 | 0.374 | 1.89 |
| epn-unet + synth [1] | 0.310 | 1.82 | 0.379 | 1.91 |
| epn-class + synth [1] | 0.376 | 1.92 | 0.483 | 2.16 |
| epn + synth [1] | 0.382 | 1.94 | 0.512 | 2.33 |
| 3D ShapeNets [2] | - | - | 0.905 | 3.70 |
| ShapeRecon [3] | - | - | 0.970 | 4.63 |
| Poisson Surface Reconstruction [4,5] | - | - | 1.91 | 8.46 |
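The error metric above is an ℓ1 distance between predicted and ground-truth distance fields, averaged over a voxel mask. A minimal sketch of such an evaluation follows; how the mask is constructed (for example, restricting to voxels unobserved in the partial input) is an assumption here, not the exact leaderboard protocol.

```python
import numpy as np

def masked_l1(pred_df, gt_df, mask):
    """Mean absolute difference between two distance fields over the masked voxels.

    pred_df, gt_df: float arrays of shape (dimX, dimY, dimZ)
    mask: boolean array of the same shape selecting the voxels to evaluate
    """
    assert pred_df.shape == gt_df.shape == mask.shape
    return float(np.abs(pred_df[mask] - gt_df[mask]).mean())
```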
To add your results to the leaderboard, please email Angela Dai.
[1] A. Dai, C. Qi, M. Nießner. Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis. CVPR 2017.
[2] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, J. Xiao. 3D ShapeNets: A Deep Representation for Volumetric Shapes. CVPR 2015.
[3] J. Rock, T. Gupta, J. Thorsen, J. Gwak, D. Shin, D. Hoiem. Completing 3D Object Shape from One Depth Image. CVPR 2015.
[4] M. Kazhdan, M. Bolitho, H. Hoppe. Poisson Surface Reconstruction. Eurographics Symposium on Geometry Processing 2006.
[5] M. Kazhdan, H. Hoppe. Screened Poisson Surface Reconstruction. SIGGRAPH 2013.