Real-Time Collision Deformations using Graphics Hardware

Pawel Wrotek    Alexander Rice    Morgan McGuire∗

Brown University

Figure 1: (a) A low-polygon video game car drives down a highway. (b) The driver slams into the guard rail and spins out. (c) The collision with the rail results in a dent along the side of the car, which we simulate in the car's bump map. (d) The car strikes the opposite rail, smashing the front. The damage to the hood of the car is too extensive to represent with a bump map (the bump map in this case became fully saturated at the lowest value) so we instead use a pre-deformed mesh. (e) What the side of the car looks like without bump mapping. (f) Wireframe for the side of the car; unmodified by collision. (g) Wireframe for the front of the car; predeformed.

Abstract

We present a method for efficiently simulating minor deformations that result from collisions. Our method alters only bump maps and leaves mesh geometry unchanged. It is well suited to games because although the results are not physically correct, they are plausible and are computed in real-time.

In a real-time simulation the CPU already carries a high load due to game logic, I/O, and physical simulation. To achieve high performance, we move the deformation computation off the CPU. The task of computing surface deformations and collisions for physics is very similar to that of rendering computational solid geometry objects. We exploit this observation by "rendering" the intersection to an off-screen buffer using graphics hardware and parallel texture map operations.

1 Introduction

This paper proposes a new method for plausible simulation of minor deformations that result from collisions. The state of the art for representing model shape for real-time 3D rendering

∗ e-mail: {pwrotek,acrice,morgan}@cs.brown.edu


has three levels of detail: polygons for macro-structure, bump (scalar height displacement) and normal maps for meso-structure, and BRDF approximations for micro-structure. Minor deformations affect meso-structure, so we simulate them by changing the bump and normal maps only. Very tiny deformations such as scratches affect the micro-structure, which is modeled by the BRDF; we do not address this in our paper but leave it as possible future work. Macro-structure affects the silhouette; it is generally on the order of tens of pixels. Micro-structure affects reflectance characteristics but not shading normals and is subpixel. Meso-structure features therefore contain a few pixels each when rendered, e.g. dimples on a face, raised bricks on a wall, and most relevant to deformation, good-sized dents like the ones left in your car by a runaway shopping cart.

A variety of deformation algorithms have been proposed in the past for macro-structure. Muller et al. [2001] propose a method for deforming malleable materials using volumetric meshes, and Rezk-Salama et al. [2001] propose a way to use the GPU to speed up deformations in volume rendering applications by moving calculations of the mesh alteration into hardware. Our method can be used in conjunction with these to create even more plausible results. For example, in figure 1d, the car's geometry is affected by a pre-computed (in this case, artist-created) macro-deformation, while its side is deformed with our method.

Because our technique does not modify the underlying mesh, it preserves polygon count and does not interfere with other mesh-based algorithms for collision detection, physical simulation, spatial subdivision, level of detail, animation, and shadow casting.

2 Background: Simulation

2.1 Deformation Model

Figure 2: A stress-strain curve for both tension and compression, the vertical axis representing stress and the horizontal representing strain. Note that in our case the vertical axis is independent; however, the relation is invertible and we follow mechanical engineering literature, which plots stress vertically.

Mechanical engineers model deformations of solid (versus liquid) objects with a stress-strain curve [Crandall et al. 1999], like the one shown in figure 2. The vertical axis is stress, which has pressure units of Pa = N/m². The horizontal axis is percent strain, the percent change in length of an object due to stress along that dimension. The curves are identical whether the stress is due to tension or compression.

The curve has three phases. Under small force per unit area, the surface deforms like a spring, where displacement is linear in force (x = F/k_s). The slope of the curve during this elastic deformation phase is Young's Elastic Modulus, E, and for most metals is on the order of 10¹⁰ N/m².

When the pressure becomes sufficient to break bonds within the material (called the yield stress, S_yield), plastic deformation begins, which creates permanent shape changes. We model the plastic deformation curve as a line with a shallower slope, P < E, although materials do exist that have nonlinear responses.

When the pressure falls, materials experience the springback phase. During springback the displacement decreases with Young's Modulus until pressure reaches zero. Note that if the peak pressure never exceeded the yield stress, there will be no permanent deformation because the displacement retreats along the original curve.

We simulate only the permanent deformation, not the temporary elastic deformation that will spring back. The net percent deformation in response to stress S is thus:

percentDeform = plastic − springback    (1)

percentDeform = (S_yield/E + (S − S_yield)/P) − S/E    (2)

percentDeform = (S − S_yield)(1/P − 1/E)    (3)

for S > S_yield, and percentDeform = 0 otherwise.
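As a concrete check, the derivation above can be sketched in a few lines of code. This is our illustration, not the paper's implementation; the function name and the example constants in the comments are ours.

```python
def percent_deform(S, S_yield, E, P):
    """Equation (3): net percent strain remaining after springback.

    S and S_yield are stresses in Pa; E and P are the slopes of the
    elastic and plastic portions of the stress-strain curve.
    """
    if S <= S_yield:
        # Peak stress never left the elastic phase: full springback,
        # no permanent deformation.
        return 0.0
    return (S - S_yield) * (1.0 / P - 1.0 / E)

# Example: a metal-like material with E = 1e10 Pa and P = 1e9 Pa.
# Below yield there is no permanent deformation; above yield the
# deformation grows linearly in the excess stress S - S_yield.
```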

The parameters E, S_yield, and P are material dependent and are available from reference texts and online resources like eFunda (http://www.efunda.com). However, most interesting objects are composites and the raw constants are not directly useful. For example, the body panels of a car are a composite of plastic, steel, and air, and deform more like hard clay than solid steel. A set of composite constants must be synthesized to use the material within the stress-strain model. In practice, one obtains a composite constant from a raw material number by consulting with a materials engineer (when accuracy is critical), or by simply choosing between the raw material upper and lower bounds, e.g. "divide the solid aluminum constant by 100 to simulate a thin car door, and add 0.001 Pa for the plastic inside." Throughout this paper we give the constants we used for various experiments.

To compute deformations, we must reconcile the mechanical engineer's stress (N/m²) model with the instantaneous impulse (N·s) collision model used for real-time simulation:

j = −N_C [1 + min(ε_A, ε_B)] (N_C · (v_B − v_A)) / (m_A⁻¹ + m_B⁻¹)    (4)

where N_C is the collision normal, ε is the coefficient of restitution, v is velocity, and m is mass. See Guendelman et al. [2003] for details.

In the instantaneous model, the coefficient of restitution describes the amount of momentum conserved during a collision. The momentum lost becomes sound, heat, and deformation. Each object in a collision has its own coefficient. The collision is processed with some combination of the two, which is typically the minimum. It is common practice to use a constant value for ε in computer graphics, but this is an oversimplification of the collision process. In a more realistic model, the value of ε would change with the momentum transfer. For example, two steel balls colliding at low speed lose almost no momentum because they are in the elastic deformation part of the stress-strain curve and no net deformation occurs (the molecular bonds act like springs and restore energy). However, two steel balls colliding at very high speed will undergo plastic deformation and energy will be lost. The low speed collision should be modeled with ε = 1 and the high speed collision with ε < 1.

To work with the instantaneous model and constant ε value, we use the trends of the stress-strain model but not the actual units and constants. We represent the elastic deformation and the yield stress as an impulse threshold Y below which no deformation occurs, and a scale factor κ that abstracts the term (1/P − 1/E):

percentDeform = κ_A max(0, ||j_{ε=1}|| − Y_A)    (5)

where Y_A = 1/(1 − ε_A) − 1    (6)

and j_{ε=1} is the impulse that would be experienced by object A due to object B if the collision were perfectly elastic. The product of percentDeform and the thickness of object A along the collision axis (in practice, we approximate this with the width for a thick, solid object like a brick and with a constant thickness for a hollow or shell object like a car body) is the maximum penetration depth that should result from the collision. We used κ = 0.1 m/s to generate the results shown in the paper.
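Equations (5) and (6) can be sketched for scalar inputs as follows. This is our illustration only; in the paper, ||j_{ε=1}|| comes from the full impulse of equation (4), and the function names and example values here are assumptions.

```python
def yield_threshold(eps):
    """Equation (6): the impulse threshold Y below which no permanent
    deformation occurs. Y grows without bound as eps -> 1, so a
    perfectly elastic object never deforms permanently."""
    return 1.0 / (1.0 - eps) - 1.0

def percent_deform(j_elastic, eps, kappa):
    """Equation (5): deformation is proportional to the elastic-impulse
    magnitude in excess of the threshold, scaled by kappa."""
    return kappa * max(0.0, j_elastic - yield_threshold(eps))
```

Note how the model degenerates sensibly at the extremes: with eps = 0 (fully plastic) the threshold is zero and every impulse deforms, while as eps approaches 1 the threshold diverges and nothing does.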

2.2 Physical Simulation

We use a physics simulator based on Guendelman et al. [2003], with collision detection handled by the G3D (http://g3d-cpp.sf.net) and OPCODE (http://codercorner.com/Opcode.htm) libraries. During the simulator's collision detection phase, we apply our technique to deform colliding objects, but only if the collision impulse is above some minimum threshold. This threshold keeps us from getting bogged down with too many weak collisions, assuring that we deform objects only when the impact is strong enough to leave a mark. We do not deform objects during the contact resolution phase in order to further speed up the simulation. Granted, in reality a heavy object can deform a soft, malleable surface by simply resting on it, but we choose to ignore this case in favor of speed.

We can adjust collision and contact normals by the normal map when calculating impulse. This changes the direction of the objects' resultant velocities after collision in a way that makes it appear like the objects are reacting to each other's normal-mapped surfaces. Many of our results are generated with this feature disabled. See section 5 for discussion.

While we find the Guendelman simulator to be convenient and powerful, our deformationmethod can just as easily be worked into any other type of physical simulator.

3 Parameterization and Rendering

Each object in the simulation has an associated bump map, which may be either artist created or uniformly 0.5 for a "flat" surface. Each point on the bump map must correspond to a unique surface location. Note that it is a common practice among artists to use texture memory efficiently by tiling maps or re-using patches (e.g., figure 3). Such maps must be expanded and their patches duplicated before they can be used with our method, increasing the memory required.

Our method works best with a mesh parameterization like the one proposed by Sheffer and de Sturler [2001] that minimizes distortion, providing uniform bump map resolution across the object's surface.


Figure 3: (a) The Space Marine model from Natural Selection uses the same texture for the left and right sides of the body, a typical modeling trick to save texture memory. Our algorithm cannot work with this one-to-many mapping; the texture parameterization for the bump map must be one-to-one. (b) The cow model shows a one-to-one mapping compatible with our algorithm.

Figure 4: Parallax bump mapping. (Figure labels: view vector v, imaginary surface, actual surface, surface points p1 and p2, bump-map heights b1 and b2.)

For rendering we use parallax bump mapping with offset limiting [Kaneko et al. 2001; Welsh 2004], which approximates both self-occlusion and shading for a rough surface. Briefly, this algorithm works as follows:

1. For each point p1 being rendered, form the view vector v from the eye to p1. At point p1 the corresponding point b1 in the bump map stores a height value.

2. Back-track along the view vector until the height off the surface is equal to the height stored at b1. Using the assumption that the height values are locally similar, this gives point b2.

3. Project point b2 onto the surface of the object at point p2.

4. Set the texture and normal values at point p1 using the values from point p2.

This method works well if the surface has only low frequencies (i.e. no sharp edges). High frequencies violate the assumption that height values in the bump map are locally similar. In that case, artifacts appear giving the appearance of two surfaces, one floating above another.

We pack the normal and bump map into a single texture with values [R,G,B,A] = [N_x, N_y, N_z, h], where N is the tangent space normal and h is the surface displacement.

4 Deforming an Object on Collision

When two objects collide, we alter their momentum as if they were rigid bodies but we deform them as if they were malleable. The fact that we deform the objects does not affect the computations of the simulator in any way beyond the loss of momentum absorbed in ε.

For a deformation to be plausible, its size and shape must reflect the size, shape, and elasticity of the object that caused it. For example, a sphere leaves a different mark than a cube after colliding with a surface, and a steel sphere will leave a mark on a clay cube but a clay cube will not dent a steel ball.

To approximate the size and shape of a deformation, we cause the objects to interpenetrate during collision, and then measure the shape of the resulting overlap between the objects. Intuitively, this overlap is the amount by which the surface of one object must recede to accommodate the other object. We scale the depth of the deformation by the result of the stress-strain model, taking into account the coefficient of restitution of the object being deformed.

Akeley and Jermoluk [1988] first used the depth and stencil buffers to compute object intersections and other CSG operations for rendering purposes. We extend their method with an address map for taking the results back into texture space and a technique for choosing appropriate projection and modelview matrices so that the entire intersection is in view. Govindaraju et al. [2003] also use depth and stencil buffers to detect penetration; however, they are only interested in computing collisions, and do not find the shape and texture space mapping of the interpenetration.

We update the objects' bump maps with the results of their respective deformations, and leave the meshes unaltered.

4.1 Calculating Deformations

We now present the method to deform the bump maps of objects A and B on collision using the variables in table 1. Throughout this section we will refer to lines of pseudocode given in figure 11.

Figure 5: (a) A blue plane A and an orange ball B prior to collision; we compute the deformation for A. (b) Orthographic camera setup used to compute deformations on the GPU. (c) Front faces of plane A rendered. (d) Back faces of ball B rendered where they are deeper than the previously rendered faces of plane A (i.e. where B penetrates A). (e) The resulting deformation on the surface of the plane.

We assume a simulator that returns the world space collision location P_C, normal N_C, and penetration depth d_C when objects A and B collide. We deform the objects separately and give


Table 1: Collision and Bump Map Variables

C_A[i,j]: Color buffer at pixel [i,j] used as an address map for object A
D_A[i,j]: Depth buffer at pixel [i,j] for object A
ΔD[i,j]: Penetration depth at pixel [i,j] near collision area
h_A[i,j]: Tangent-space bump-map elevation for object A at [i,j] before deformation (m)
h'_A[i,j]: Bump-map elevation after deformation (m)
N_C: World-space collision normal; object A "owns" this normal, meaning that it points away from the surface of A
P_C: World-space collision point (m)
d_C: Penetration depth for collision (m)
t_A: Thickness of object A, or of its walls if hollow (m)
percentDeform_A: Percent deformation of A's surface
s_cam: Distance between collision and orthogonal camera's near plane (m)
z_near: World-space z-coordinate of orthogonal camera's near plane
z_far: World-space z-coordinate of orthogonal camera's far plane
k: Maximum bump map depth in world-space (m); 0.1 m in our examples
k': Maximum bump map depth in world-space used when altering depth buffer values (m); 0.02 m in our examples

the remainder of the discussion from the perspective of object A; it should be repeated for object B with the subscripts swapped, except where we explicitly note that the direction of the collision normal is assumed relative to the first object.

We begin by retracting the objects by d_C along the collision normal to the initial point of contact¹. To create the net plastic deformation², we next advance A into B along the collision normal by the product of thickness t_A and percentDeform_A, which is derived from equation 5 [BumpDeformA, lines 1-2].

Once the objects have been positioned, we place an orthographic projection camera at P_C + N_C · s_cam, with view vector −N_C and up vector set to whichever of the world space x- or y-axes produces the smaller absolute dot product. We select s_cam such that the camera's orthogonal view volume contains the bounding boxes of both objects [BumpDeformA, lines 4-10]. With this setup, the camera faces towards A (i.e. towards the collision location) and B is closer to the camera than A, except for where the two objects intersect (Figure 5b). Since the overlap

¹ There are two kinds of collision detection. Reactive systems allow interpenetration to occur and then correct it (or step backwards in time). Predictive systems detect that a collision will occur in the next frame and advance precisely to the time of collision. For a predictive system, it is not necessary to retract the objects because they never interpenetrate.

² We have also obtained good, but less physically motivated, results by allowing the initial interpenetration to be the net plastic deformation, omitting the rollback and the percentDeform_A computation.


can at most be as large as the smaller of the two objects, we scale the viewport to fit the smaller object³.
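The camera placement just described can be sketched as follows. Vector names follow the paper, but the helper itself and its tuple representation are our illustration.

```python
def ortho_camera(P_C, N_C, s_cam):
    """Place the deformation camera at P_C + N_C * s_cam, looking
    along -N_C. The up vector is whichever world axis (x or y) has
    the smaller absolute dot product with the view vector, which
    avoids a degenerate up direction."""
    position = tuple(p + n * s_cam for p, n in zip(P_C, N_C))
    view = tuple(-n for n in N_C)
    # |view . x_axis| = |view[0]| and |view . y_axis| = |view[1]|
    up = (1.0, 0.0, 0.0) if abs(view[0]) < abs(view[1]) else (0.0, 1.0, 0.0)
    return position, view, up
```

For a collision normal pointing straight up, the view direction is straight down and the world x-axis (which is perpendicular to it) is chosen as up.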

Bump values range from 0 (max indentation) to 1 (max bump) in the bump map. We shift by 0.5 and multiply by k to achieve an apparent world-space range [−k/2, k/2]. The range k can be chosen for each object independently based on scene scale; we use 0.1 m in our experiments, where most objects are vehicle-sized (approx. 3 meters).

To determine the size and shape of the deformation, we use the orthographic projection camera to render both objects to the back buffer in sequence, reading back the result between objects. To determine which portion of the bump map needs to be deformed we use an address map. When rendering the two objects, we color each pixel so that the x and y texture coordinates are the red and green channels. The color of a pixel is thus the 2D address of the bump map texel corresponding to it. The blue channel is always 0, so it is a mask for the object. We execute the following steps (with no lighting or parallax bump mapping):

1. Clear the frame buffer with color [0,0,1].

2. Render the front faces of A (Figure 5c) with color = texture coordinate [BumpDeformA, lines 21-24].

3. Read back depth buffer D_A (which now holds the "highest" points on A) and color buffer C_A.

4. Clear the color buffer (leaving the values in the depth buffer) and set the depth test to pass when the new pixel is farther from the camera than the old one (GL_GREATER).

5. Render the back faces of B with color = texture coordinate [BumpDeformA, lines 30-33]. Only those pixels where B is "lower" than A, i.e. wherever B penetrates A, are rendered because of the depth test (Figure 5d).

6. Read back the depth buffer D_B (containing the "lowest" points on B in the area of overlap) and the color buffer C_B.

See figure 6 for a flow-chart depicting the contents of the buffers that are used during these steps. We use the two depth buffers, D_A and D_B, to compute the depth of the deformation. The difference ΔD[i,j] = D_B[i,j] − D_A[i,j] measures B's geometry penetration into A at this location. Since the objects are bump mapped, we want the deformation to reflect not only the objects' geometries, but also the information from those bump maps. For this, we use a fragment shader during steps 2 and 5 that alters the values which are put into the depth buffers. For each pixel, our shader uses the corresponding value from the address map to index into the object's bump map, adjusts the pixel's depth by the resulting height value, and sets gl_FragDepth accordingly. The shader adjusts the depth differently based on whether we are on step 2 or 5. Since we render the front faces of A during step 2, we subtract the bump map value from the geometric depth at this step [RenderPassA, line 2], while during step 5, when we render the back faces of B, we instead add the bump map value to the geometric depth [RenderPassB, line 2]. Note that we can manipulate the depth values in this manner since our orthographic camera matrix produces linear depth, not hyperbolic depth like a perspective projection.
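The shader's depth adjustment can be sketched on the CPU like this. This is our illustration, not the paper's shader; gl_FragDepth would be set to the returned value, and the bump-to-depth conversion uses the reduced range k′.

```python
def adjusted_depth(geom_depth, h, k_prime, z_near, z_far, front_faces_of_A):
    """Convert a bump height h in [0, 1] to a linear depth offset and
    apply it: subtract for the front faces of A (render step 2), add
    for the back faces of B (render step 5). Only valid for linear
    (orthographic) depth, not hyperbolic perspective depth."""
    offset = (h * k_prime) / (z_far - z_near)  # world dist -> depth units
    if front_faces_of_A:
        return geom_depth - offset  # bumps on A rise toward the camera
    return geom_depth + offset      # bumps on B sink away from it
```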

To alter the depth buffers based on the objects' bump maps, we must convert the bump map values into depth values. To do this, we first multiply the bump map values by our value for the maximum bump map depth in world-space to get a world-space distance, and then we divide by the distance between the orthogonal camera's near and far clipping planes (z_far − z_near)

³ Of course, a more sophisticated method for packing objects into rectangles would slightly increase the usable resolution.


to get the corresponding depth value. However, we cannot simply use k as the maximum world-space bump map depth. If we did so, we would in effect be treating the bump map information exactly as geometry information, but since the objects collide based solely on their actual geometries, this extra information would cause the overlap to be too large. This would result in the deformation being too deep to be represented in the bump map. Since we cannot detect collisions between bump maps (we must rely only on the objects' geometry for this), but we still want the information stored in the bump maps to have some effect on the deformation, we choose to decrease the importance of the bump map values in relation to geometry when adjusting the depth buffer. To do this, we select a k' that is smaller than the k which we later use for updating the bump maps (section 4.2). As a result, the values in the depth buffer store information from the bump maps along with the geometry depth, but the bump map is scaled so that the resulting deformation is not too deep.

Note that the bump and normal maps are in tangent space, while the collision depth ΔD is in "collision space," meaning that the depth values all lie along the axis that is defined by the camera's view vector, −N_C. Therefore, when we update the bump maps based on ΔD, we introduce an error unless N_C is perfectly parallel with the interpolated world-space vertex normal N (i.e. the tangent space z-axis) of the pixel being deformed. The deformation will have the correct depth, but it might be in the wrong direction. In the extreme, an object might brush the very edge of a sphere and create a deep groove perpendicular to the collision direction. One could project ΔD onto N by multiplying it by N · N_C. This produces deformations that "fade out" faster than they should, but it prevents excessively deep dents. Alternatively, one could divide ΔD by the dot product to create a deformation that has the correct depth along N_C, but might be much too deep along an orthogonal axis.

There is no obviously correct solution, since the fundamental problem is that bump maps cannot be displaced except along the underlying tangent space z-axis. We assume that N_C · N is close to 1.0 and leave ΔD unmodified. When the collision is between two faces, and the objects are smooth, this assumption holds. When the assumption is violated the results are still plausible because true collisions are chaotic events involving multiple interactions. Additionally, we avoid division by the dot product, which may be nearly zero.
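The three options just discussed, given d = N · N_C, can be compared in a small sketch. This is our illustration; the mode names are ours, and the paper's choice corresponds to leaving ΔD unmodified.

```python
def reconcile_delta(delta_d, d, mode="unmodified"):
    """delta_d: collision-space penetration depth; d = N . N_C."""
    if mode == "project":
        return delta_d * d   # fades out too fast, but bounds dent depth
    if mode == "divide":
        return delta_d / d   # correct depth along N_C; unsafe as d -> 0
    return delta_d           # paper's choice: assume d is close to 1
```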

4.2 Updating the Bump Map

We want to deform any pixel [i,j] where there is overlap between A and B. Overlap occurs only at pixels where both objects have been rendered (if C_A[i,j] = (r_A, g_A, b_A) and C_B[i,j] = (r_B, g_B, b_B), then b_A = b_B = 0) and where there is a difference between the values of the two depth buffers (ΔD[i,j] ≠ 0). For each pixel where there is overlap between A and B, we want to decrease the value of h_A[r_A, g_A] by ΔD[i,j]. However, note that we cannot simply iterate over each pixel [i,j] and do this, because the texel h_A[r_A, g_A] might affect multiple pixels of C_A during rendering, which would cause multiple pixels C_A[i,j] to map to the same texel h_A[r_A, g_A]. If this is the case, then changing the value of h_A[r_A, g_A] for every pixel that maps to it causes the result to be inaccurate. The deformation should be performed only once on any given texel, and so we want to choose one of the pixels that map to the texel, ignore any others, and alter the value of the texel accordingly. We choose to use the pixel with the largest corresponding ΔD[i,j].

To update the bump map, we first draw h_A to the back buffer, so that pixel h_A[i,j] is drawn at position [i,j] in the buffer [BumpDeformA, lines 40-41]. We then execute the following steps for each pixel [i,j]:

1. From the saved buffers, get the values (r_A, g_A, b_A) = C_A[i,j], (r_B, g_B, b_B) = C_B[i,j] and ΔD[i,j] = D_B[i,j] − D_A[i,j] [BumpDeformA, lines 45-47].


2. If b_A ≠ 0 or b_B ≠ 0 or ΔD[i,j] = 0, there is no overlap at this pixel, so skip the next steps and move on to the next pixel [BumpDeformA, lines 49-50].

3. From the bump map, get the value h_A = h_A[r_A, g_A] [BumpDeformA, line 52].

4. Compute the new height value such that h'_A = h_A − (ΔD[i,j] · (z_far − z_near))/k. The new height value is clamped to stay within the range [0, 1] [BumpDeformA, lines 53-54].

5. Render a point at position [r_A, g_A] to the back buffer with color (h'_A, h'_A, h'_A) and depth h'_A [BumpDeformA, lines 56-57].

When this has been done for all pixels [i,j], the back buffer now stores A's bump map with the updated height values drawn on top in the correct positions. The way in which the depth of the rendered points is set ensures that points with a smaller height (more deformation) are drawn in front of points with a larger height (less deformation). Therefore, if multiple pixels C_A[i,j] were mapped to the same texel h_A[r_A, g_A], in the end we will only get the deformation that resulted from the pixel with the greatest corresponding ΔD[i,j]. We get A's updated bump map by reading it from the back buffer.
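A CPU sketch of this update follows, using a dictionary keyed by texel address in place of the depth-buffer trick: taking the minimum resulting height per texel has the same effect as drawing the deeper point in front. The buffer layouts and names here are simplified assumptions, not the paper's GPU implementation.

```python
def update_bump_map(h_A, C_A, C_B, D_A, D_B, k, z_near, z_far):
    """h_A maps a texel address (r, g) -> height in [0, 1]; the C and D
    arguments map a pixel (i, j) -> color / depth from the two render
    passes. Returns the updated copy of h_A."""
    new_h = dict(h_A)
    for ij, (rA, gA, bA) in C_A.items():
        bB = C_B[ij][2]
        delta_d = D_B[ij] - D_A[ij]
        if bA != 0 or bB != 0 or delta_d == 0:
            continue  # no overlap between A and B at this pixel
        h_new = h_A[(rA, gA)] - delta_d * (z_far - z_near) / k
        h_new = max(0.0, min(1.0, h_new))  # clamp to [0, 1]
        # Smallest height (deepest dent) wins, mimicking the depth test.
        new_h[(rA, gA)] = min(new_h[(rA, gA)], h_new)
    return new_h
```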

We do not set A's bump map to the updated version yet because we still need to deform object B and we do not want A's newly acquired deformation to affect this. We move A back to its original position [BumpDeformA, line 64] and then repeat the whole process to deform B. The only difference this time around is that we move B into A based on t_B and percentDeform_B, and we update h_B instead of h_A. Once we also have B's updated bump map, we move B back to its original position and then set both objects' bump maps to the updated versions.

5 Results

Figure 1 shows a good result on a typical video game scene. In image 1a, a car drives along a highway and is in perfect condition. The car is in a low-polygon game format (Renderware, the engine used in the well-known title Grand Theft Auto III). It is bump mapped; however, the bump map is uniform everywhere. The driver loses control and smashes into the guard rail in 1b. At this point, our system responds to the collision by altering the momentum of the car and deforming both the car and the guard rail's bump maps. The large dent on the side of the car is visible in 1c as it spins out across the highway. Finally, as the car crashes into the opposite guard rail in 1d, we use traditional mesh deformation on the hood of the car.

Figure 7 shows how our algorithm takes into account the bump maps of colliding objects. One face of the cube is bump mapped with the shape of a cow. When this face collides with a flat wall, the deformation left in the wall reflects both the cube's geometry and the surface information stored in the bump map.

Finally, figure 8 shows a straightforward extension of our method to contact collisions, where the tank treads leave trails in the ground. The naive implementation needs a single bump map to cover the entire terrain; see section 6 for a brief discussion of bump map memory conservation.

As an extension to deformations, we use the normal map to change collision normals (previous approaches use the mesh normal only). However, we found that this can cause simulation failures under rolling contact. Figure 10 shows one such failure. In the figure, a sphere rolls down an incline, where the underlying geometry is a horizontal plane and the incline is only present in the bump map. In this situation, the collision location is on the horizontal plane but the collision normal is that of the incline. The sphere rolls at the angle of the incline, so it falls through the plane. Because of the problems (particularly for contact collisions) caused by altering collision normals, most of our results were produced with this feature disabled.

However, it can give increased realism for moving collisions, and would presumably work well for contact collisions were it extended with a method for efficiently detecting collision locations from the bump map instead of the underlying geometry.

6 Discussion and Limitations

Although our method is physically motivated, the results differ from physically correct ones in the following ways. Real objects are continuous at meso-scale, but our bump maps are discrete in both range and spatial dimensions. We follow the computer graphics impulse collision model, in which the coefficient of restitution is independent of momentum. Our deformation magnitude is simplified from the true stress-strain model. We reduce the impact of the bump map itself on the deformation shape and do not take into account the potential difference in orientation between the tangent space and collision space.
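The impulse model referred to above can be written in one line. The sketch below uses the standard graphics formulation with hypothetical helper names, not our exact code; it shows why a momentum-independent coefficient of restitution e makes the separation speed a fixed fraction of the closing speed at any impact velocity.

```c
#include <assert.h>
#include <math.h>

/* Normal impulse magnitude for two rigid bodies of masses ma and mb
 * closing at relative speed vrel along the contact normal, with a
 * constant coefficient of restitution e (standard impulse model). */
static double impulse(double vrel, double e, double ma, double mb) {
    return -(1.0 + e) * vrel / (1.0 / ma + 1.0 / mb);
}

/* Relative normal velocity after applying the impulse. */
static double post_vrel(double vrel, double e, double ma, double mb) {
    return vrel + impulse(vrel, e, ma, mb) * (1.0 / ma + 1.0 / mb);
}
```

For any masses and any closing speed, the post-collision separation speed is exactly e times the closing speed; a real material would show restitution varying with impact momentum.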

Large-scale deformations that change an object's overall structure cannot be performed by our method, since we do not alter an object's mesh. In figure 1d, the kind of damage needed to recreate the effect of having the car's front crushed cannot be represented with a bump map. Our method would "flatten" the portion of the bump map covering the front of the car, but since this would not affect the structure of the car, we cannot hope to achieve the kind of effect given by mesh deformation.

Dents can only be so deep before the visual effect of parallax bump mapping breaks down. Repeated deformations in the same area will eventually saturate, leaving a flat surface. In the future we would like to implement a "crumple" effect in which a dent leaves crater-like rings by actually increasing the height values around the dent, to avoid saturation and add realism.

There is also a tradeoff between dynamic range and scale resolution for the bump map. Larger values of k give larger dynamic range (avoiding saturation) but poorer discrimination between values, leading to stair-step crater edges in the extreme case.

We require both a 1:1 mapping between points and the bump map and a fairly high resolution bump map. Both are increasingly common in video games; however, our technique requires a distinct bump map for every object, which is a sizable memory increase. One solution to this problem (which we have not explored) is to use a single bump map for all instances of a particular model that have not been deformed, and allocate an individual bump map only after collision. The bump maps for objects that are unlikely to be seen again, e.g., dead enemies or long-passed scenery, can be reclaimed. This is analogous to the common practice in games of placing temporary decal textures on walls to mimic bullet holes or explosion residue.
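The allocation scheme suggested above amounts to copy-on-write sharing. A minimal sketch, with hypothetical names and untested against any engine:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MAP_SIZE (256 * 256)

/* Each instance starts out pointing at the model's shared, pristine
 * bump map and receives a private copy only when first deformed. */
typedef struct {
    const unsigned char *shared;  /* model's pristine bump map */
    unsigned char *owned;         /* NULL until first collision */
} InstanceBump;

/* Called when a collision is about to deform this instance. */
static unsigned char *bump_for_writing(InstanceBump *b) {
    if (!b->owned) {
        b->owned = malloc(MAP_SIZE);
        memcpy(b->owned, b->shared, MAP_SIZE);
    }
    return b->owned;
}

static const unsigned char *bump_for_rendering(const InstanceBump *b) {
    return b->owned ? b->owned : b->shared;
}

/* Reclaim the private copy, e.g. for long-passed scenery. */
static void bump_reclaim(InstanceBump *b) {
    free(b->owned);
    b->owned = NULL;
}
```

Undeformed instances cost no extra memory, and reclaiming an instance silently reverts it to the pristine shared map, matching the decal analogy in the text.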

Since we update the bump maps based on the contents of the address buffers from our rendering passes, we can only update those texels that have been rendered to the address buffers. If we undersample the address maps, gaps will appear in the deformed region of the bump map. This occurs where the surface normal is far from the collision normal and when the address map has lower resolution than the projected bump map. We have little control over the surface normal issue: very sharp objects will receive incorrect deformations if they collide point-on (a sharp object should be blunted by the collision, which is something a bump map cannot represent anyway, so this is not much of a loss). To guarantee sufficient resolution for surfaces that are oriented to the collision normal, we recommend an address map of at least the same resolution as the bump maps. Since bump maps tend to be lower resolution than texture maps and most collisions involve only a small portion of the bump map, this guideline should generally provide more than enough resolution in the address map.

The final step of our deformation method, where we update the objects' bump maps by rendering points to the screen over the original bump maps, is currently performed on the CPU. To move our algorithm fully to the GPU, the process of determining each point's position, color, and depth can be moved to a vertex shader that follows the steps described in section 4.2 and uses texture lookups to get the necessary information from the color and depth buffers (which can be saved as textures) and the two bump maps being modified. To access all that information, this vertex shader would need to perform five texture lookups: one for each of the two color buffers and two depth buffers, and one for the bump map. Current (DirectX SM3.0 compliant) GPU technology allows texture lookups from at most four textures within the vertex shader, which is not enough for our purposes. So while our hypothetical shader cannot be implemented currently, it should be only a matter of time before GPU technology advances to provide the necessary number of texture lookups. The advantage of the GPU-only approach is that bump maps would never be transferred back to main memory.

Because the underlying mesh is unchanged, and using normal-mapped normals can result in simulation failure, when objects collide the physics simulation can appear incorrect if bump map "geometry" differs greatly from mesh geometry. Additionally, because our method relies on extra rendering passes to compute deformations, using high-polygon models can cause large drops in frame rate when resolving collisions.

7 Acknowledgements

We thank Matt Scheuring (Tri-Ocean Nathiq Engineering, Ltd.) for helping us derive a physically realistic model of deformation, Alla Sheffer (University of British Columbia) for providing us with parameterized meshes, editor Ronen Barzel for dramatically improving the presentation of our technique, Charlie Cleveland (Unknown Worlds Entertainment) for permission to use the Space Marine model from Natural Selection, and Chrominance for the Mercedes-Benz CL600 Grand Theft Auto 3 (i.e., RenderWare) model from http://www.polycount.com. Morgan's research is supported by an NVIDIA Fellowship. All results were produced on GeForce 6800 cards donated by NVIDIA.

References

AKELEY, K., AND JERMOLUK, T. 1988. High-performance polygon rendering. In Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press, 239–246.

CRANDALL, S. H., LARDNER, T. J., AND DAHL, N. C. 1999. An Introduction to the Mechanics of Solids: Second Edition with SI Units. McGraw-Hill, August.

GOVINDARAJU, N. K., REDON, S., LIN, M. C., AND MANOCHA, D. 2003. CULLIDE: interactive collision detection between complex models in large environments using graphics hardware. In Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Conference on Graphics Hardware, Eurographics Association, 25–32.

GUENDELMAN, E., BRIDSON, R., AND FEDKIW, R. 2003. Nonconvex rigid bodies with stacking. ACM Trans. Graph. 22, 3, 871–878.

KANEKO, T., TAKAHEI, T., INAMI, M., KAWAKAMI, N., YANAGIDA, Y., MAEDA, T., AND TACHI, S. 2001. Detailed shape representation with parallax mapping. In ICAT.

MÜLLER, M., MCMILLAN, L., DORSEY, J., AND JAGNOW, R. 2001. Real-time simulation of deformation and fracture of stiff materials. In Proceedings of the Eurographics Workshop in Manchester, UK.

REZK-SALAMA, C., SCHEUERING, M., SOZA, G., AND GREINER, G. 2001. Fast volumetric deformation on general purpose hardware. In Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Workshop on Graphics Hardware, ACM Press, 17–24.

SHEFFER, A., AND DE STURLER, E. 2001. Parameterization of faceted surfaces for meshing using angle based flattening. Engineering with Computers 17, 3, 326–337.

WELSH, T. 2004. Parallax mapping with offset limiting: A per-pixel approximation of uneven surfaces. Tech. rep., Infiscape Corporation.

Figure 6: A view of the buffers created by the rendering passes shown in figure 5, as well as the resulting deformed bump maps. Note that the shape of the deformation on h′B is skewed to fit the parameterization of the sphere. This is a result of using the color buffer as an address map, which guarantees that we get a properly shaped deformation when we render the sphere with the bump map.

Figure 7: (a - b) A box with a bump-mapped cow collides with a plane, causing the plane to deform. (c) The deformation reflects both the box geometry and bump map.

Figure 8: Bump map deformations are used to create tracks in the ground and dents in the bunker's walls.

Figure 9: (1 - 3) A cow collides with a ground plane, causing the plane to deform. (4) The deformation does not alter the plane's geometry. (5 - 6) Ground plane's bump map before and after collision. (7 - 8) Ground plane's normal map before and after collision.

Figure 10: (Left) A ball rolling down a geometric plane. (Right) A ball collides with a flat geometric plane with a bump-mapped ramp and uses the normal-mapped normals to resolve the collision. The ball rolls into the plane because it is attempting to roll down a plane that doesn't exist.

Page 17: Real-Time Collision Deformations using Graphics Hardware · meshes and Rezk-Salama et al [2001] propose a way to use the GPU to speed up deformations in volume rendering applications

BumpDeformA (CPU)
1  deformA = tA · κA · max(0, ||j||ε=1 − (1/(1 − εA) − 1))
2  posA = NC · (deformA − dC)
3
4  poscam = PC + NC · scam
5  lookcam = −NC
6  upcam = (0, 1, 0)
7  if (abs(upcam · lookcam) > 0.9)
8      upcam = (1, 0, 0)
9  create orthographic projection camera at poscam with look vector lookcam and up vector upcam
10 resize the camera's viewport to fit the smaller of A and B
11
12 glDisable(GL_LIGHTING)
13 glEnable(GL_DEPTH_TEST)
14 glDepthMask(GL_TRUE)
15 glEnable(GL_CULL_FACE)
16 glDrawBuffer(GL_BACK)
17 glReadBuffer(GL_BACK)
18 glClearColor(0, 0, 1, 1)
19 glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
20
21 glDepthFunc(GL_LESS)
22 glCullFace(GL_BACK)
23 set fragment shader RenderPassA
24 render(A)
25 glReadPixels(x, y, w, h, GL_RGB, GL_UNSIGNED_BYTE, CA)
26 glReadPixels(x, y, w, h, GL_DEPTH_COMPONENT24_ARB, GL_FLOAT, DA)
27
28 glClear(GL_COLOR_BUFFER_BIT)
29
30 glDepthFunc(GL_GREATER)
31 glCullFace(GL_FRONT)
32 set fragment shader RenderPassB
33 render(B)
34 glReadPixels(x, y, w, h, GL_RGB, GL_UNSIGNED_BYTE, CB)
35 glReadPixels(x, y, w, h, GL_DEPTH_COMPONENT24_ARB, GL_FLOAT, DB)
36
37 glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
38 glDepthFunc(GL_LESS)
39
40 glRasterPos2i(x, y)
41 glDrawPixels(w, h, GL_RGB, GL_UNSIGNED_BYTE, hA)
42
43 glBegin(GL_POINTS)
44 for each pixel [i, j]:
45     (rA, gA, bA) = CA[i, j]
46     (rB, gB, bB) = CB[i, j]
47     ΔD[i, j] = DB[i, j] − DA[i, j]
48
49     if (bA = 1 or bB = 1 or ΔD[i, j] = 0)
50         go to next pixel
51
52     hA = hA[rA, gA]
53     h′A = hA − (ΔD[i, j] · (zfar − znear))/k
54     clamp(h′A, 0, 1)
55
56     glColor3f(h′A, h′A, h′A)
57     glVertex3f(rA, gA, h′A)
58 glEnd()
59
60 glCopyTexImage2D(hA, 0, GL_RGB, x, y, w, h, 0)
61
62 position of A = posA

RenderPassA (GPU)
1 h = texture2D(hA, gl_TexCoord[0].xy)
2 gl_FragDepth = gl_FragCoord.z − h · k′/(zfar − znear)
3 gl_FragColor = (gl_TexCoord[0].xy, 0)

RenderPassB (GPU)
1 h = texture2D(hB, gl_TexCoord[0].xy)
2 gl_FragDepth = gl_FragCoord.z + h · k′/(zfar − znear)
3 gl_FragColor = (gl_TexCoord[0].xy, 0)

Figure 11: Pseudocode