•0:45, 1:30 for intro (through list of problems), 32:15 for entire talk (yeah, right)
•I have given many talks about the Digital Michelangelo Project
–in past talks, I stressed positive results
•but…
–not everything went well on the project, and
–to date, we have not produced a complete archive of 3D models of the statues we scanned 3 years ago
•so…
–this will be an anti-Digital Michelangelo Project talk
–What went wrong during the scanning?
–What continues to go wrong as we process our data?
–Why is 3D scanning so hard?
34:15 total + 30% = ~45 minutes
•0:15
•This is not intended to be a talk on the Digital Michelangelo Project
–so here is the extent of my executive summary of that project,
–as well as a proof that we did obtain some useful data
•0:30
•I’ll organize my talk around these eight problems
–not an exhaustive list – there are other hard problems
–my choices are clearly biased by our experiences in the DMP
–on the other hand, these problems are not unique to our project - so I hope you will find them interesting
•By necessity…
–I’ll be interleaving consideration of these problems with a description of our scanning pipeline,
–so let’s start off with that…
•0:30, 2:30 through St. Matthew’s face
•Cyberware
–triangulation laser scanner
–custom design
•Faro + 3D Scanners
–also a triangulation laser scanner
–on an articulated arm
–off-the-shelf
•for tight spots
–though we didn’t use it much – it was tiring to use
•Cyra
–time-of-flight
–prototype
0:45
4:30 through Matthew
•0:30
0:30
•0:30
•Let’s cut to the chase
–here we are in the Galleria dell’Accademia
–scanning Michelangelo’s unfinished statue of the apostle St. Matthew
–you can see the David in the background
•In the middle image
–we’re acquiring geometry
–you can see the laser stripe sweeping across the face of St. Matthew
•In the right image
–we’re acquiring color
–you can see our white spotlight on the statue’s neck
0:15
•0:30
•Here’s the range image acquired during one sweep of the laser stripe
–the texture you see… is Michelangelo’s chisel marks
–here’s a plot through his cheek
–each valley is the groove left by one tooth of a multi-tooth chisel
–possibly this two-tooth chisel called a Dente di cane ("dog-tooth")
•The spacing between our sample points (in X and Y) is ¼ mm
–our depth resolution is 50 microns, but
–how good is that 50 microns?…
0:30
•0:45, 3:00 for entire problem
•dirty -> dark -> dropouts in range data
•shiny -> false returns -> outliers in range data
•fuzzy – like hair
•scattering – i.e. translucent
•transparent – esp. objects that change ray paths, like lenses
•2:00 including next slide
•When a laser beam strikes a block of marble, it scatters beneath the surface
–this gives marble its distinctive glow, which you can see here
•So how does this subsurface scattering affect range scanning?
–working with the National Research Council of Canada
–we've formed the following hypothesis
•Normally, when the laser beam strikes an object,
–it's reflected from the surface towards the camera
–this is the basis for laser triangulation
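The triangulation step can be sketched as a 2-D ray intersection – a minimal illustration only; the baseline and angles below are arbitrary values, not our scanner's actual geometry:

```python
import math

def triangulate(baseline, laser_angle, camera_angle):
    """Locate the illuminated point by intersecting two rays in 2-D:
    the laser ray from the origin, and the camera's viewing ray from a
    sensor offset by `baseline` along x. Both angles are measured up
    from the baseline, in radians. Returns (x, z)."""
    t_laser = math.tan(laser_angle)   # laser ray:  z = t_laser * x
    t_cam = math.tan(camera_angle)    # camera ray: z = t_cam * (baseline - x)
    x = baseline * t_cam / (t_laser + t_cam)
    return x, t_laser * x

# symmetric 45-degree setup: the point sits midway, at depth baseline/2
x, z = triangulate(1.0, math.radians(45), math.radians(45))
```

Subsurface scattering corrupts exactly this computation: the camera no longer sees the true surface intersection point, so the recovered depth is biased.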
•When the beam strikes a marble surface
–it refracts, then forms a volume of scattered light beneath the surface
–the camera sees a refracted view of this volume
•Ignoring the refraction for a moment,
–the centroid of this volume is clearly displaced horizontally from where the laser struck the surface
–this displacement causes a systematic bias in the computed depth
–we’ve observed 40 microns
•More importantly,
–the shape of the volume varies randomly across the surface with the marble crystal structure
–giving rise to noise in the range data – almost ¼ mm
–this noise fundamentally limits the accuracy with which we can scan marble objects
•In this case, the noise is about 0.1 mm
–twice this bad, or worse, on the highly polished statue of Night
•Fortunately, the noise changes randomly also with direction of incidence of the laser, so
–we can reduce the noise to some extent by taking multiple scans from different directions and averaging them
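A toy simulation of why averaging helps, assuming the per-scan noise is independent and Gaussian (the 0.1 mm figure comes from above; everything else is invented):

```python
import random
import statistics

random.seed(1)
TRUE_DEPTH, NOISE_MM, N, TRIALS = 50.0, 0.1, 16, 2000

def one_scan():
    # one range sample; the scattering noise is modeled as Gaussian
    return TRUE_DEPTH + random.gauss(0.0, NOISE_MM)

single = [one_scan() for _ in range(TRIALS)]
averaged = [statistics.mean(one_scan() for _ in range(N)) for _ in range(TRIALS)]

# averaging N independent scans shrinks the random noise by ~sqrt(N);
# note it would NOT remove the systematic 40-micron bias
ratio = statistics.stdev(single) / statistics.stdev(averaged)
```

With N = 16 scans the measured noise ratio comes out near √16 = 4.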
•0:15
•In addition to the level of polish
–some marbles are naturally more translucent than others, and hence scatter more
–Michelangelo’s Pieta is famous for its extraordinary translucency
–this statue may be unscannable
•Of course,
–this statue may be unscannable for another reason…
•1:30 + 1:15 movie excerpt (skip to 2/3 point), 3:00 for entire problem
•fig 4
–in the DMP, we could roll our scanner – this helped us scan grooves of different orientation
•0:10
•0:45, 2:45 for entire problem
•Although it may not be immediately apparent,
–the problem of reaching all surfaces on an object is
–intimately related to the problem of ensuring safety for the object during scanning
•There are actually several safety issues…
•not a problem for marble statues
–we also scanned some old violins – and we had to “run a calculation”
–in general, energy deposition is not a problem for rangefinders that use scanning (i.e. moving) laser beams
–the spotlight we used for color acquisition deposits more energy on the object than the moving laser beam
•avoiding collisions…
–yes, collisions - between the scanner and the object being scanned
•1:30
•fig1
–circumscribe it with a circle
–think of this circle as a conservative convex hull
•fig 2
–to scan this part of the object reasonably perpendicularly requires a standoff equal in size to the diameter of the circle
–since most triangulation scanners have a fixed standoff, this is a large and cumbersome scanner
–you could move the scanner inside the circle – allowing a smaller standoff, but this introduces the possibility of collision
•fig 3
–of course, sometimes we can’t avoid moving inside the circle
–this greatly increases the chances of a collision
•summarizing
–cannot completely scan an arbitrary statue from outside its convex hull
–eventually something – a scan head, or a mirror – must get close to the statue
•0:30
•How do you avoid colliding with a statue?
–we used a variety of techniques, which I won’t describe in detail
•we were lucky
–in 5 months of around-the-clock scanning, we didn't damage anything
•but there's no silver bullet for this problem
–it will always be with us
–and there will be accidents
•1:15, 2:30 for entire problem
•So far we’ve been talking mainly about problems at the scale of individual range samples
–let’s switch scales
•There are many geometric configurations of scanners that we could have built for the DMP
–can digitize a tall statue, and
–can capture chisel marks
•the ratio between these two scales
–what one might call the geometric dynamic range of a scanning system
–is what makes the design problem hard
•Why is it hard to provide a 20,000:1 dynamic range?
–scanning at this resolution, using today’s triangulation technology, implies a small working volume
–given a 14cm stripe, and overlapping the stripes, it would take 30 stripes to get around the David
–for each stripe, the scanner must be repositioned,
–and not every one will capture useful geometry
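The arithmetic behind the 20,000:1 figure and the stripe count, with assumed dimensions (the David's height, girth, and the stripe overlap fraction are guesses for illustration):

```python
import math

# assumed dimensions for illustration
statue_height_mm = 5000.0         # the David is roughly 5 m tall
sample_spacing_mm = 0.25          # 1/4 mm sample spacing
dynamic_range = statue_height_mm / sample_spacing_mm   # 20000.0

stripe_mm = 140.0                 # 14 cm laser stripe
overlap = 0.3                     # assumed fractional overlap between stripes
girth_mm = 3000.0                 # assumed circumference at the statue's widest
stripes = math.ceil(girth_mm / (stripe_mm * (1.0 - overlap)))   # ~31
```

Under these assumptions the loop count lands right around the 30 repositionings quoted above.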
•0:30
•It’s surprisingly difficult to design a system
–with this much flexibility
–that is also field-deployable – you can carry it into a museum in pieces and assemble it there
•Our Siggraph 2000 paper talks at length about the design of our gantry…
•0:15
•we made 104 uncalibrated moves to scan St. Matthew – a relatively simple statue
0:15
•0:30
•rolled the gantry (or worse - remounted the scan head) 480 times
–and the gantry, with its extra trusses and counterweights, weighed 800 kilograms – nearly a ton
0:45
4:00 through architectural reps
•1:15
•in the field
–i.e. using a field-deployable gantry
•One of the fundamental design choices we made in the DMP
–rotating scan head instead of translating (as the primary scanning motion)
–because it reduced the chances of colliding with the statue during scanning
•It is fundamentally hard to make a rotating scanner accurate
–because rotational errors are magnified by the lever arm of the standoff
–1/100 degree of rotational error → ~0.2 mm error at 1 m standoff – comparable to our ¼ mm range sample spacing
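The lever-arm magnification, as a small-angle back-of-the-envelope check:

```python
import math

standoff_mm = 1000.0                 # 1 m standoff (the lever arm)
err_deg = 0.01                       # 1/100 degree of rotational error
# small-angle approximation: arc length = radius * angle (in radians)
err_mm = standoff_mm * math.radians(err_deg)   # ~0.17 mm
```

The error lands a bit under the ¼ mm sample spacing, but on the same order, so pointing accuracy directly limits range accuracy.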
•Also,
–remounting the scan head… is not repeatable
•For this reason, and others,
–it is essential to have a way to recalibrate a large reconfigurable scanner in the field
•We didn’t have a way to do this,
–and the quality of our data suffered as a result
•1:15 up to (but not including) What really happens?, 3:00 for entire range processing pipeline
•1:45
•unstable on smooth surfaces
–Rusinkiewicz01 reduces this problem by distributing point pairs more uniformly around the Gaussian sphere of directions
–but this doesn’t solve the problem in all situations
•distributes errors unevenly
–for example if one part of the statue was overscanned (scanned multiple times), hence sampled more densely
–a pair of undulating overlapped surfaces will dominate over a smaller but reliable “lock and key”
»suggesting that perhaps sum-of-squares is not a good error metric for these algorithms
»perhaps maximum error – the L-infinity norm – is better
–there are many such sources of bias in global registration algorithms
–still an open problem
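A toy illustration of the bias: a densely overscanned region dominates a sum-of-squares metric, while a maximum-error (L-infinity) metric is still governed by the badly-fit feature. All numbers are invented:

```python
# residual distances (mm) after a candidate global alignment; all invented
dense_overlap = [0.2] * 1000   # overscanned, undulating overlap region
lock_and_key = [2.0] * 5       # small but reliable feature, badly misfit

residuals = dense_overlap + lock_and_key
sum_sq = sum(r * r for r in residuals)   # 1000*0.04 + 5*4.0 = 60.0
l_inf = max(residuals)                   # 2.0

# the dense region contributes 40 of the 60 mm^2, so a least-squares
# solver will happily trade away the feature's fit; the L-infinity
# metric depends only on the worst residual
```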
•0:45, 2:30 for entire color pipeline
•discard specular pixels
–we haven’t yet characterized specularity, although we have the data we need to do it
•correct for irradiance
–converts color to diffuse reflectance
–sometimes called de-shading
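The de-shading step can be sketched as a per-pixel division under the Lambertian model; the function name and the grazing-angle cutoff are illustrative, not our actual pipeline:

```python
def deshade(observed, irradiance, cos_theta, eps=1e-3):
    """Convert an observed pixel value to diffuse reflectance under the
    Lambertian model: observed = reflectance * irradiance * cos_theta.
    cos_theta is dot(surface normal, direction to the light). Pixels at
    grazing incidence are unreliable, so they are discarded."""
    if cos_theta < eps:
        return None   # discard: grazing or back-facing
    return observed / (irradiance * cos_theta)

albedo = deshade(0.4, 1.0, 0.8)   # -> 0.5
```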
•1:00
•treated reflectance as Lambertian
–Oren and Nayar (and others) have shown the error in this approximation
•used aggregate surface normals
–caused fine-scale geometry to pollute diffuse reflectance
–made reflectance less useful for scientific applications
•ignored interreflections
–Nayar’s work on Shape from Interreflections suggests that this problem can be addressed, although it would be expensive
•0:15 (no time for movie)
•0:30 through David’s heads
0:15 for 2 slides
•0:15 (defer to James’s talk)
•0:30, 2:15 for entire problem
•large datasets
–one of the implications of the 20,000:1 dynamic range
•It’s hard to convey just how much data we acquired of the David
–I’ll try to do it with a sequence of zooms
•1:00
•range images
–keep our data in the form of regular arrays as long as possible
•eventually, we need to convert the data to polygons
–the output of our merging algorithm is a polygon mesh
–Schroeder’s semi-regular meshes may point toward an alternative approach
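Triangulating a regular range image is straightforward because the grid supplies connectivity for free; a minimal sketch (the discontinuity threshold is an invented parameter):

```python
def mesh_range_image(z, max_jump):
    """Triangulate a regular grid of depth values. Each grid cell yields
    two triangles, but a triangle is dropped if any vertex is a dropout
    (None) or if its depth spread exceeds max_jump -- the same test that
    decides whether neighboring samples are actually connected."""
    rows, cols = len(z), len(z[0])
    tris = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            corners = [(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)]
            for a, b, c in ((0, 2, 1), (1, 2, 3)):   # two triangles per cell
                depths = [z[corners[k][0]][corners[k][1]] for k in (a, b, c)]
                if any(d is None for d in depths):
                    continue                          # dropout in range data
                if max(depths) - min(depths) > max_jump:
                    continue                          # depth discontinuity
                tris.append(tuple(corners[k][0] * cols + corners[k][1]
                                  for k in (a, b, c)))
    return tris
```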
•(skip over list of requirements)
•0:45 including brief demo and streaming Qsplat, but skipping details
•other aspects of handling large datasets
–structure of interactive programs that manipulate the data
•2:00 for entire problem
•metadata
–since 3D scanning is not (yet) standardized
•secure viewers
–so far we’ve been allowed to distribute our data only to scholars
–we’d like to give it to artists, art schools, public schools
–but the Italian authorities don’t want the models appearing in video games,
–and they don’t want unauthorized physical replicas
–secure viewers would allow downloading and viewing, but not extracting and storing
–this is an unsolved problem across the computer graphics industry
•watermarking
–may be unsolvable
–legal action may be the only effective solution to this problem
•longevity
–very unlikely that our digital data will last longer than the statue
–of course, this is a problem for all digital archives, not just archives of 3D content
•1:30 for this slide, 23:00 from beginning through this slide
•noisy topology
–Schroeder’s group has observed that the genus of David’s head is over 300!
•geometric signal processing
–as opposed to combinatorial geometry
•not unorganized points
–at least in a sweeping scanner
–the same assumptions that permit us to identify an observation as a range sample
–allow us to decide if it is connected to its neighbor
–if one of these assumptions is violated, then both are
–connectivity can be used in alignment, merging, compression, hole filling, etc.
•lines of sight
–like connectivity, it’s another aspect of the structure of a range scan
–James Davis will show tomorrow how it can be used to advantage during hole filling
•almost nothing for this slide, 2:00 for all games
•…I didn’t want to make this entirely a negative talk…
•1:00
•volumetric scan conversion
–Cyra time-of-flight scan of a tree (actually a movie set), represented as a cloud of points
–the best representation for range data like this may not be points or a mesh – it might be a volume
–simply by counting the number of range samples that fall in a voxel
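The counting step reduces to binning samples into a sparse voxel grid; a minimal sketch with invented sample coordinates:

```python
from collections import Counter

def voxelize(points, voxel_size):
    """Bin range samples into voxels; the per-voxel sample count is the
    scalar value stored in the (sparse) volume."""
    counts = Counter()
    for x, y, z in points:
        counts[(int(x // voxel_size),
                int(y // voxel_size),
                int(z // voxel_size))] += 1
    return counts

# three samples, two of which fall in the same 1-unit voxel
vol = voxelize([(0.2, 0.3, 0.1), (0.4, 0.9, 0.5), (2.5, 0.1, 0.1)], 1.0)
```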
•volume texture synthesis
–generating more “tree-like vegetation”
–from acquired data
•0:30
•in range images
–or semi-regular meshes,
–or merged surface meshes
•0:30
•to anybody considering doing this,
–sensor fusion is hard
•0:15 for this slide, 3:30 for entire final questions
•1:15 for entire automatic 3D scanning
0:15 for sequence
•6 DOF
–to allow access to any point from the laser plane at any orientation
•a system like this has never been built
–a “grand challenge” for our field
•1:00
•rapid prototyping is improving
–new technologies come on the market
–clearly catering to the graphics community – all over Siggraph
•color printing
–some companies can do spatially-varying primary colors, but
–subtle color mixtures are still elusive
•how to match BRDF
–a holy grail for rapid prototyping
–Is there a technology that can “print” (i.e. match) a specified reflectance characteristic?
•1:00
•fly around it
–as opposed to making a physical replica, for example
•Will IBR replace 3D scanning?
–a special version of the general question: Will image-based rendering replace model-based rendering?
–specific versions of this question have been posed for volume rendering, and for point rendering – Will they take over the world?
•You can see my Yes’s and No’s
–I suspect the answer is:  Maybe, for some applications, but not for all.
0:45
Total conclusions = 1:15