CSL 859: Advanced Computer Graphics
Dept. of Computer Sc. & Engg., IIT Delhi
Monday 15 November 2010
Point Based Representation
• Point sampling of a surface: used for mesh construction, or kept mesh-less
• Samples often come from laser scanning, or even natural-light capture
• How do you render? How do you perform other processing (visibility, collision, etc.)?
• Related concepts: image-based representation, particle systems
Laser Range Scanning
• Laser scanners sample points on a 3D shape, often on a grid
• Large number of samples: most surfaces have high-frequency detail
• Example: the Digital Michelangelo Project
Point-Based Rendering
Surfels
[Pfister et al., SIGGRAPH 2000]
Sampling Known Objects
• 3D “rasterization” into Layered Depth Images: cast rays through the object along each axis, advancing each ray by regular increments
• Store pre-filtered texture colors at the points: project the surfel into texture space and filter to get a single color
• Disk radius ≥ maximum inter-sample distance
• Dyadically larger radii: r_k = r_0 · 2^k
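The dyadic radii above can be sketched directly; the starting radius and number of levels below are illustrative, not values from the slides:

```python
# Sketch of the dyadic surfel-mipmap radii: each prefiltered texture
# sample k uses a tangent disk of radius r_k = r_0 * 2^k.

def mipmap_radii(r0, levels):
    """Radii of the tangent disks used for the prefiltered colors."""
    return [r0 * (2 ** k) for k in range(levels)]

print(mipmap_radii(1.0, 4))  # [1.0, 2.0, 4.0, 8.0]
```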
Elliptical Weighted Average (EWA)
• Filter shape (e.g., Gaussian) is usually circular/spherical
• We want that shape on screen ⇒ an ellipse in object space
• EWA filter = low-pass filter convolved with the warped reconstruction filter
[Figure: reconstruction kernel (volume), projection to screen, convolution with the low-pass filter]
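A minimal sketch of the EWA idea: warp a circular Gaussian reconstruction kernel into screen space with a 2×2 Jacobian J, add an identity low-pass filter, and evaluate the resulting elliptical Gaussian. The Jacobian and variances here are illustrative assumptions, not taken from any particular renderer:

```python
import math

def ewa_weight(dx, dy, J, var_r=1.0, var_h=1.0):
    """Elliptical Gaussian weight at screen offset (dx, dy) from the splat center."""
    # Screen-space covariance of the warped reconstruction kernel:
    # V = var_r * J * J^T, plus var_h * I for the low-pass filter.
    a = var_r * (J[0][0] ** 2 + J[0][1] ** 2) + var_h
    b = var_r * (J[0][0] * J[1][0] + J[0][1] * J[1][1])
    c = var_r * (J[1][0] ** 2 + J[1][1] ** 2) + var_h
    det = a * c - b * b
    # Inverse of the 2x2 covariance (the conic matrix of the ellipse).
    ia, ib, ic = c / det, -b / det, a / det
    q = ia * dx * dx + 2 * ib * dx * dy + ic * dy * dy
    return math.exp(-0.5 * q)

# The weight is maximal at the splat center and falls off elliptically.
print(ewa_weight(0.0, 0.0, [[1.0, 0.0], [0.0, 1.0]]))  # 1.0
```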
Storing surfels
• Three layered depth images (LDIs), one per axis; LDI = image with multiple <depth, color> samples per pixel; together they make a Layered Depth Cube (LDC)
• Pixel spacing h0 related to the expected screen resolution
• Octree with node = b×b block of pixels; bottom-up construction; pixel spacing h0 · 2^i at level i
• Filter down each block
Grossman and Dally [12] also use view-independent texture filtering and store one texture sample per surfel. Since we use a modified z-buffer algorithm to resolve visibility (Section 7.3), not all surfels may be available for image reconstruction, which leads to texture aliasing artifacts. Consequently, we store several (typically three or four) prefiltered texture samples per surfel. Tangent disks with dyadically larger radii are mapped to texture space and used to compute the prefiltered colors. Because of its similarity to mipmapping [13], we call this a surfel mipmap. Figure 4b shows the elliptical footprints in texture space of consecutively larger tangent disks.
6 Data Structure
We use the LDC tree, an efficient hierarchical data structure, to store the LDCs acquired during sampling. It allows us to quickly estimate the number of projected surfels per pixel and to trade rendering speed for higher image quality.
6.1 The LDC Tree
Chang et al. [4] use several reference depth images of a scene to construct the LDI tree. The depth image pixels are resampled onto multiple LDI tree nodes using splatting [29]. We avoid these interpolation steps by storing LDCs at each node in the octree that are subsampled versions of the highest resolution LDC.
The octree is recursively constructed bottom up, and its height is selected by the user. The highest resolution LDC, acquired during geometry sampling, is stored at the lowest level. If the highest resolution LDC has a pixel spacing of h0, then the LDC at level i has a pixel spacing of h0 · 2^i. The LDC is subdivided into blocks with user-specified dimension b, i.e., the LDIs in a block have b×b layered depth pixels. b is the same for all levels of the tree. Figure 5a shows two levels of an LDC tree in a 2D drawing. In the figure, neighboring blocks are differently shaded,

Figure 5: Two levels of the LDC tree (shown in 2D).

and empty blocks are white. Blocks on higher levels of the octree are constructed by subsampling their children by a factor of two. Figure 5b shows the next level of the LDC tree. Note that surfels at higher levels of the octree reference surfels in the LDC of level 0, i.e., surfels that appear in several blocks of the hierarchy are stored only once and shared between blocks.
Empty blocks (shown as white squares in the figure) are not stored. Consequently, the block dimension b is not related to the dimension of the highest resolution LDC and can be selected arbitrarily. Choosing b = 1 makes the LDC tree a fully volumetric octree representation. For a comparison between LDCs and volumes see [19].
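The bottom-up construction can be sketched as follows: level i keeps the level-0 surfels whose grid indices are multiples of 2^i (subsampling by a factor of two per level), so the pixel spacing grows dyadically and higher levels share the level-0 surfel data. The dictionary layout is an illustrative assumption:

```python
def build_ldc_levels(level0, height):
    """level0: dict mapping (ix, iy) grid indices to surfel data."""
    levels = [level0]
    for i in range(1, height):
        step = 2 ** i  # pixel spacing h0 * 2^i at level i
        levels.append({idx: s for idx, s in level0.items()
                       if idx[0] % step == 0 and idx[1] % step == 0})
    return levels

level0 = {(x, y): "surfel" for x in range(8) for y in range(8)}
levels = build_ldc_levels(level0, 3)
print([len(l) for l in levels])  # [64, 16, 4]
```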
6.2 3-to-1 Reduction
To reduce storage and rendering time it is often useful to optionally reduce the LDCs to one LDI on a block-by-block basis. Because this typically corresponds to a three-fold increase in warping speed, we call this step 3-to-1 reduction. First, surfels are resampled to integer grid locations of ray intersections as shown in Figure 6. Currently we use nearest neighbor interpolation, although a more sophisticated filter, e.g., splatting as in [4], could easily be implemented. The resampled surfels of the block are then stored in a single LDI.

Figure 6: 3-to-1 reduction example (LDI 1 and LDI 2 surfels resampled onto grid locations).

The reduction and resampling process degrades the quality of the surfel representation, both for shape and for shade. Resampled surfels from the same surface may have very different texture colors and normals. This may cause color and shading artifacts that are worsened during object motion. In practice, however, we did not encounter severe artifacts due to 3-to-1 reduction. Because our rendering pipeline handles LDCs and LDIs the same way, we could store blocks with thin structures as LDCs, while all other blocks could be reduced to single LDIs.
As in Section 5.2, we can determine bounds on the surfel density on the surface after 3-to-1 reduction. Given a sampling LDI with pixel spacing h0, the maximum distance between adjacent surfels on the object surface is the same as in the original LDC tree, while the minimum distance between surfels increases due to the elimination of redundant surfels, making the imaginary Delaunay triangulation on the surface more uniform.
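A sketch of the resampling step above: surfels from the block's LDIs are snapped to the integer grid locations of one LDI by nearest-neighbor interpolation, and redundant surfels landing in the same cell are eliminated. The data layout and keep-first policy are illustrative assumptions:

```python
def reduce_3_to_1(surfels, h):
    """surfels: list of (x, y, depth, color); h: pixel spacing."""
    grid = {}
    for (x, y, depth, color) in surfels:
        key = (round(x / h), round(y / h))  # nearest grid location
        if key not in grid:                 # eliminate redundant surfels
            grid[key] = (depth, color)
    return grid

surfels = [(0.1, 0.0, 1.0, "red"), (0.05, 0.02, 1.0, "red"),
           (1.1, 0.0, 2.0, "blue")]
print(len(reduce_3_to_1(surfels, 1.0)))  # 2
```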
7 The Rendering Pipeline
The rendering pipeline takes the surfel LDC tree and renders it using hierarchical visibility culling and forward warping of blocks. Hierarchical rendering also allows us to estimate the number of projected surfels per output pixel. For maximum rendering efficiency, we project approximately one surfel per pixel and use the same resolution for the z-buffer as in the output image. For maximum image quality, we project multiple surfels per pixel, use a finer resolution of the z-buffer, and high quality image reconstruction.
7.1 Block Culling
We traverse the LDC tree from top (the lowest resolution blocks) to bottom (the highest resolution blocks). For each block, we first perform view-frustum culling using the block bounding box. Next, we use visibility cones, as described in [11], to perform the equivalent of backface culling of blocks. Using the surfel normals, we precompute a visibility cone per block, which gives a fast, conservative visibility test: no surfel in the block is visible from any viewpoint within the cone. In contrast to [11], we perform all visibility tests hierarchically in the LDC tree, which makes them more efficient.
7.2 Block Warping
During rendering, the LDC tree is traversed top to bottom [4]. To choose the octree level to be projected, we conservatively estimate for each block the number of surfels per pixel. We can choose one surfel per pixel for fast rendering or multiple surfels per pixel for supersampling. For each block at a given tree level, the number of surfels per pixel is determined by i_max, the maximum distance between adjacent surfels in image space. We estimate i_max by dividing the maximum length of the projected four major diagonals of the block bounding box by the block dimension b. This is correct for orthographic projection. However, the error introduced by using perspective projection is small because a block typically projects to a small number of pixels.
For each block, i_max is compared to the radius of the desired pixel reconstruction filter.
Projection
• View-frustum culling
• Traverse blocks to find the right resolution: one surfel per pixel, or n for supersampling
• i_max = length of projected block diagonals / b; if i_max > filter footprint, traverse children
• Warp blocks to screen space: fast incremental algorithms [Grossman & Dally], only a few operations per surfel (fewer than a matrix multiplication, thanks to the regularity of the samples)
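The i_max test above can be sketched directly; the function takes pre-projected diagonal lengths rather than doing the projection itself, and all values below are illustrative:

```python
def imax(projected_diagonals, b):
    """projected_diagonals: screen-space lengths of the 4 major box diagonals."""
    return max(projected_diagonals) / b

def needs_refinement(projected_diagonals, b, filter_radius):
    # Traverse children while surfel spacing exceeds the filter footprint.
    return imax(projected_diagonals, b) > filter_radius

print(needs_refinement([32.0, 30.0, 28.0, 31.0], b=16, filter_radius=1.0))
# True: 32/16 = 2 pixels between surfels exceeds the 1-pixel filter, so recurse
```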
Filling
• Use a z-buffer; each surfel covers an area: project this area orthographically and scan-convert it
• Approximate the ellipse with a bounding parallelogram; depth varies linearly along the surfel normal
• May leave holes (magnification or surfel orientation)
• Shading: per-surfel Phong illumination; also incorporates environment/normal maps
• Reconstruct the image by filling holes between projected surfels: apply a filter in screen space; super-sample to improve quality
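A minimal sketch of z-buffered splatting as in the slide: each projected surfel covers a small screen-space area (here an axis-aligned square standing in for the bounding parallelogram), and visibility is resolved per pixel with a depth test. Resolution, radii, and colors are illustrative:

```python
W, H = 8, 8
zbuf = [[float("inf")] * W for _ in range(H)]
image = [[None] * W for _ in range(H)]

def splat(x, y, z, radius, color):
    """Write a square splat centered at (x, y) with a per-pixel z test."""
    for py in range(max(0, y - radius), min(H, y + radius + 1)):
        for px in range(max(0, x - radius), min(W, x + radius + 1)):
            if z < zbuf[py][px]:      # z-buffer visibility test
                zbuf[py][px] = z
                image[py][px] = color

splat(3, 3, 2.0, 1, "near")
splat(3, 3, 5.0, 2, "far")  # loses where the near splat already wrote
print(image[3][3], image[5][5])  # near far
```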
Splatting and Reconstruction
QSplat
Primary goal is interactive rendering of very large point-data sets
Built for the Digital Michelangelo Project
[Rusinkiewicz and Levoy, SIGGRAPH 2000]
Sphere Trees
A hierarchy of spheres, with leaves containing single vertices
Each sphere stores center, radius, normal, normal cone width, and color (optional)
Tree built the same way one would build a kd-tree (median-cut method)
Rendered really large models for its day; focus on memory layout and data quantization
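The median-cut construction can be sketched as follows: split the points at the median along the axis of largest extent, recursing until leaves hold single vertices. The bounding sphere here is a simple centroid/max-distance sphere, a stand-in rather than QSplat's exact construction, and normals/colors are omitted:

```python
import math

def build_sphere_tree(points):
    """points: list of (x, y, z) tuples; returns a nested dict hierarchy."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    cz = sum(p[2] for p in points) / len(points)
    r = max(math.dist((cx, cy, cz), p) for p in points)
    node = {"center": (cx, cy, cz), "radius": r, "children": []}
    if len(points) > 1:
        # median cut along the axis of largest extent
        axis = max(range(3), key=lambda a: max(p[a] for p in points)
                                         - min(p[a] for p in points))
        pts = sorted(points, key=lambda p: p[axis])
        mid = len(pts) // 2
        node["children"] = [build_sphere_tree(pts[:mid]),
                            build_sphere_tree(pts[mid:])]
    return node

tree = build_sphere_tree([(0, 0, 0), (2, 0, 0), (1, 3, 0), (1, 1, 4)])
print(len(tree["children"]))  # 2
```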
Rendering Sphere Trees
Start at the root; do a visibility test: view frustum, plus back-face culling based on the normal cone
Recurse or draw: recurse based on the projected area of the sphere, with an adaptive threshold
To draw, use normals for lighting and the z-buffer to resolve occlusion
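The recurse-or-draw decision can be sketched as follows (visibility culling omitted for brevity). The projection model, radius divided by depth, is a deliberate simplification of perspective projection:

```python
def render(node, depth, threshold, out):
    """Collect the spheres that would be drawn as splats."""
    projected = node["radius"] / depth   # crude projected-size estimate
    if node["children"] and projected > threshold:
        for child in node["children"]:   # too big on screen: refine
            render(child, depth, threshold, out)
    else:
        out.append(node)                 # draw this sphere as a splat

leaf = {"radius": 0.1, "children": []}
root = {"radius": 1.0, "children": [leaf, leaf]}
splats = []
render(root, depth=2.0, threshold=0.25, out=splats)
print(len(splats))  # 2: root is refined (0.5 > 0.25), both leaves are drawn
```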
Splat Shape
• Several options: square (OpenGL “point”), circle (triangle fan or texture-mapped square), Gaussian (requires two passes)
• Can squash splats depending on viewing angle; sometimes causes holes at silhouettes, which can be fixed by bounding the squash factor
Splat Shape
Splat Silhouettes
Few Splats
Many Splats
Surface Reconstruction: Cocone
• p+ ≡ pole of p: the point in the Voronoi cell of p farthest from p
• ε < 0.1 → the vector from p to p+ is within π/8 of the true surface normal, i.e., the surface is nearly flat within the cell
[Figure: the Voronoi cell of a sample p, with its pole p+]
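The pole definition above can be sketched directly: given a sample p and the vertices of its Voronoi cell, the pole p+ is the cell vertex farthest from p, and the direction from p to p+ approximates the surface normal. The inputs are illustrative; a real implementation would obtain the cell from a computed Voronoi diagram:

```python
import math

def pole(p, cell_vertices):
    """The Voronoi cell vertex farthest from the sample p."""
    return max(cell_vertices, key=lambda v: math.dist(p, v))

def normal_estimate(p, cell_vertices):
    """Unit vector from p toward its pole, approximating the normal."""
    q = pole(p, cell_vertices)
    d = math.dist(p, q)
    return tuple((qi - pi) / d for qi, pi in zip(q, p))

p = (0.0, 0.0, 0.0)
cell = [(0.1, 0.0, 0.0), (0.0, 0.2, 0.0), (0.0, 0.0, 3.0)]
print(normal_estimate(p, cell))  # (0.0, 0.0, 1.0)
```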
Sample Reconstructed Surfaces
courtesy Tamal Dey, OSU
Moving Least Squares
For a point set P, MLS defines a projection operator ψ; the MLS surface is its fixed-point set: MLS(P) = {x : ψ(P, x) = x}
ψ(P, x): find a local reference plane near x
Then compute a local polynomial approximation p(u,v) to the surface over that plane
MLS
courtesy Luiz Velho
Reference domain
• The local plane is computed so as to minimize a locally weighted sum of squared distances of the points p_i to the plane
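A hedged sketch of this weighted fit: here a height-field plane z = a·x + b·y + c is fit by minimizing a Gaussian-weighted sum of squared residuals around the evaluation point. A full MLS solver instead minimizes true point-to-plane distances over all plane orientations; this linear variant only illustrates the weighted least-squares step:

```python
import math

def weighted_plane(points, x0, h):
    """points: (x, y, z) samples; x0: evaluation point; h: kernel width."""
    S = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0, 0.0, 0.0]
    for (x, y, z) in points:
        # Gaussian weight theta(|p_i - x0|), as in MLS
        w = math.exp(-((x - x0[0])**2 + (y - x0[1])**2 + (z - x0[2])**2) / h**2)
        phi = (x, y, 1.0)
        for i in range(3):
            rhs[i] += w * phi[i] * z
            for j in range(3):
                S[i][j] += w * phi[i] * phi[j]

    # Solve the 3x3 normal equations S [a b c]^T = rhs by Cramer's rule.
    def det3(m):
        return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det3(S)
    coeffs = []
    for k in range(3):
        Sk = [row[:] for row in S]
        for i in range(3):
            Sk[i][k] = rhs[i]
        coeffs.append(det3(Sk) / d)
    return coeffs  # a, b, c

pts = [(0, 0, 1.0), (1, 0, 1.0), (0, 1, 1.0), (1, 1, 1.0)]
a, b, c = weighted_plane(pts, (0.5, 0.5, 1.0), 1.0)
print(round(c, 6))  # 1.0: a flat point set recovers the plane z = 1
```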
Light Field
• Uniformly sample two planes: a plane of cameras (u,v) and an image plane (s,t)
• Many plane-pairs are needed to cover all viewing directions
• Need compression: quantization
Rendering
• For each desired ray: compute its intersection with the (u,v) and (s,t) planes
• Take the closest stored ray, or filter from the closest rays
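The lookup above can be sketched as follows: intersect the ray with the (u,v) plane, placed here at z = 0, and the (s,t) plane at z = 1, then fetch the nearest stored sample. The plane placement and the sample table are illustrative assumptions, and a real renderer would interpolate neighbors rather than take a single nearest ray:

```python
def ray_to_uvst(origin, direction):
    """Map a ray to its two-plane (u, v, s, t) coordinates."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    t0 = (0.0 - oz) / dz          # hit the (u,v) plane at z = 0
    t1 = (1.0 - oz) / dz          # hit the (s,t) plane at z = 1
    return (ox + t0 * dx, oy + t0 * dy, ox + t1 * dx, oy + t1 * dy)

def lookup(table, uvst):
    # nearest stored ray; real renderers filter from the closest rays
    key = tuple(round(c) for c in uvst)
    return table.get(key)

table = {(0, 0, 1, 0): "sample_a", (0, 0, 0, 1): "sample_b"}
uvst = ray_to_uvst(origin=(0.0, 0.0, 0.0), direction=(1.0, 0.0, 1.0))
print(lookup(table, uvst))  # sample_a
```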
Compression
• Vector quantization: build a codebook of 4D tiles; each tile is replaced by an index into the codebook
• Example: 2×2×2×2 tiles with a 16-bit index gives 24:1 compression
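The arithmetic behind the 24:1 figure, assuming 24-bit RGB samples (the bit depth is not stated on the slide): a 2×2×2×2 tile holds 16 samples, i.e., 384 bits, replaced by one 16-bit codebook index:

```python
tile_samples = 2 * 2 * 2 * 2    # samples in one 4D tile
bits_per_sample = 24            # RGB color; an assumption here
index_bits = 16                 # codebook index replacing the tile
ratio = tile_samples * bits_per_sample / index_bits
print(ratio)  # 24.0
```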