Facial Expressions & Rigging. CSE169: Computer Animation. Instructor: Steve Rotenberg. UCSD, Winter 2004

Slide 1: Facial Expressions & Rigging
CSE169: Computer Animation
Instructor: Steve Rotenberg
UCSD, Winter 2004

Slide 2: Turning in Projects
turnin tool: http://acs.ucsd.edu/info/turnin.php

Slide 3: Matrix Computations

Slide 4: Vector Dot Vector

Slide 5: Vector Dot Matrix

Slide 6: Matrix Dot Matrix

Slide 7: Homogeneous Vectors
Technically, homogeneous vectors are 4D vectors that get projected into the 3D w=1 space.

Slide 8: Homogeneous Vectors
Vectors representing a position in 3D space can just be written as v = (x, y, z, 1). Vectors representing a direction are written v = (x, y, z, 0). The only time the w coordinate will be something other than 0 or 1 is in the projection phase of rendering, which is not our problem.

Slide 9: Homogeneous Matrices
A homogeneous (4x4) matrix is written with row vectors a, b, c, and d:

    M = | ax ay az 0 |
        | bx by bz 0 |
        | cx cy cz 0 |
        | dx dy dz 1 |

The only time the right hand column will not be (0,0,0,1) is during view projection.

Slide 10: Position Vector Dot Matrix

Slides 11-13: [Diagrams: the point v = (.5, .5, 0, 1) in Local Space (origin (0,0,0)) is transformed by matrix M, whose axis vectors a, b and position d place it in World Space as the transformed vertex v']

Slide 14: Inversion
If M transforms v into world space, then M^-1 transforms v back into local space.

Slide 15: Direction Vector Dot Matrix

Slide 16: Matrix Dot Matrix (4x4)
The row vectors of M·N are the row vectors of M transformed by matrix N. Notice that a, b, and c transform as direction vectors and d transforms as a position.

Slide 17: Identity
Take one more look at the identity matrix. Its a axis lines up with x, b lines up with y, and c lines up with z. Position d is at the origin. Therefore, it represents a transformation with no rotation or translation.

Slide 18: Human Facial Expressions

Slide 19: Facial Features
Key facial features: deformable skin, hair, eyes, articulated jaw (teeth), tongue, inside of mouth. Each of these may require a different technical strategy.

Slide 20: Facial Muscles

Slide 21: Universal Expression Groups
Sadness, Anger, Happiness, Fear, Disgust,
Surprise

Slide 22: FACS
Facial Action Coding System (Ekman). Describes a set of Action Units (AUs) that correspond to basic actions (some map to individual muscles, but others involve multiple muscles, or even joint motion). Examples:
1. Inner Brow Raiser (Frontalis, Pars Medialis)
2. Outer Brow Raiser (Frontalis, Pars Lateralis)
14. Dimpler (Buccinator)
17. Chin Raiser (Mentalis)
19. Tongue Out
20. Lip Stretcher (Risorius)
29. Jaw Thrust
30. Jaw Sideways
31. Jaw Clencher

Slide 23: FACS
Expressions are built from basic action units. Happiness:
1. Inner Brow Raiser (Frontalis, Pars Medialis)
6. Cheek Raiser (Orbicularis Oculi, Pars Orbitalis)
12. Lip Corner Puller (Zygomaticus Major)
14. Dimpler (Buccinator)

Slide 24: Emotional Axes
Emotional states can loosely be graphed on a 2-axis system: X = Happy/Sad, Y = Excited/Relaxed.

Slide 25: Wrinkles
Wrinkles are important visual indicators of facial expressions, and have often been overlooked in computer animation.

Slide 26: Facial Expression Reading
Books:
Computer Facial Animation (Parke, Waters)
The Artist's Complete Guide to Facial Expression (Faigin)
The Expression of the Emotions in Man and Animals (Darwin)
Papers:
A Survey of Facial Modeling and Animation Techniques (Noh)

Slide 27: Facial Modeling

Slide 28:
Preparing the facial geometry and all the necessary expressions can be a lot of work. There are several categories of facial modeling techniques:
Traditional modeling (in an interactive 3D modeler)
Photograph & digitize (in 2D with a mouse)
Sculpt & digitize (with a 3D digitizer)
Scanning (laser)
Vision (2D image or video)

Slide 29: Traditional Modeling

Slide 30: Photograph & Digitize

Slide 31: Sculpt & Digitize

Slide 32: Laser Scan

Slide 33: Computer Vision

Slide 34: Facial Expression Techniques

Slide 35:
Texture swapping/blending, Bones (joints & smooth skin), Shape interpolation (morphing), Artificial muscles (FFDs), Anatomical simulation

Slide 36: Texture Based Methods
A very low quality approach to doing facial expressions is simply to swap or blend between
various texture maps on the face. Obviously, this is pretty cheesy and doesn't really model actual skin deformations. It might be acceptable for low detail (distant) characters in video games. However, texture based methods can be combined with geometric methods to achieve certain useful effects: wrinkles, vascular expression (blushing).

Slide 37: Bone Based Methods
Using joints & skinning to do the jaw and eyes makes a lot of sense. One can also use a pretty standard skeleton system to do facial muscles and skin deformations, using the blend weights in the skinning. This gives quite a lot of control and is adequate for medium quality animation.

Slide 38: Shape Interpolation Methods
One of the most popular methods in practice is to use shape interpolation. Several different expressions are sculpted and blended to generate a final expression. One can interpolate the entire face (happy to sad) or more localized zones (left eyelid, brow, nostril flare).

Slide 39: Pose Space Deformation
PSD is an advanced shape interpolation technique that can be used for both facial expressions and other skin deformations. Read the paper about it on the class web page.

Slide 40: Artificial Muscle Methods
With this technique, muscles are modeled as deformations that affect local regions of the face. The deformations can be built from simple operations, joints, interpolation targets, FFDs, or other techniques.

Slide 41: Artificial Muscles

Slide 42: Anatomical Methods
One can also do detailed simulations involving a rigid skull, volume preserving muscle tissue, and deformable skin.

Slide 43: Anatomical Methods

Slide 44: Shape Interpolation

Slide 45:
Shape interpolation allows blending between and combining several pre-sculpted expressions to generate a final expression. It is a very popular technique, as it ultimately can give total control over every vertex if necessary. However, it tends to require a lot of set up time. It goes by many names: Morphing, Morph Targets, Multi-Target Blending, Vertex Blending, Geometry
Interpolation, etc.

Slide 46: Interpolation Targets
One starts with a 3D model for the face in a neutral expression, known as the base. Then, several individual targets are created by moving vertices from the base model. The topology of the targets must be the same as the base model (i.e., same number of verts & triangles, and same connectivity). Each target is controlled by a DOF Φi that will range from 0 to 1.

Slide 47: Shape Interpolation Algorithm
To compute a blended vertex position:

    v' = v_base + Σi Φi · (v_i - v_base)

The blended position is the base position plus a contribution from each target whose DOF value is greater than 0 (targets with a DOF value of 0 are essentially off and have no effect). If multiple targets affect the same vertex, their results combine in a reasonable way.

Slide 48: Position Interpolation
[Diagram: the base vertex v_base blended toward targets v_6 and v_14 with Φ6 = 0.5 and Φ14 = 0.25, i.e., v' = v_base + 0.5(v_6 - v_base) + 0.25(v_14 - v_base)]

Slide 49: Normal Interpolation
To compute the blended normal:

    n' = n_base + Σi Φi · (n_i - n_base), then normalize n'

Note: if the normal is going to undergo further processing (i.e., skinning), we might be able to postpone the normalization step until later.

Slide 50: Shape Interpolation and Skinning
Usually, the shape interpolation is done in the skin's local space. After the shape is blended, it can be attached to the skeleton through the smooth skinning algorithm.

Slide 51: Morph Target DOFs
We need DOFs to control the interpolation. They will generally range from 0 to 1. This is why it is nice to have a DOF class that can be used by joints, morph targets, or anything else we may want to animate. Higher level code does not need to distinguish between animating an elbow DOF and animating an eyebrow DOF.

Slide 52: Target Storage
Morph targets can take up a lot of memory. This is a big issue for video games, but less of a problem in movies.
The base model is typically stored in whatever fashion a 3D model would be stored internally (verts, normals, triangles, texture maps, texture coordinates). The targets, however, don't need all of that information, as much of it will remain constant (triangles, texture maps). Also, most target expressions will only modify a small percentage of the verts. Therefore, the targets really only need to store the positions and normals of the vertices that have moved away from the base position (and the indices of those verts).

Slide 53: Target Storage
Also, we don't need to store the full position and normal, only the difference from the base position and base normal; i.e., rather than storing v_3, we store v_3 - v_base. There are two main advantages of doing this:
Fewer vector subtractions at runtime (saves time)
As the deltas will typically be small, we should be able to get better compression (saves space)

Slide 54: Target Storage
In a pre-processing step, the targets are created by comparing a modified model to the base model and writing out the difference. The information can be contained in something like this:

    class MorphTarget {
        int NumVerts;
        int *Index;
        Vector3 *DeltaPosition;
        Vector3 *DeltaNormal;
    };

Slide 55: Morph::Update()

    for (i = each vertex in base model) {
        v[i] = v_base[i];
        n[i] = n_base[i];
    }
    for (j = each target) {
        if (DOF[j] == 0) continue;
        for (i = each vertex in target[j]) {
            v[target[j].Index[i]] += DOF[j] * target[j].DeltaPosition[i];
            n[target[j].Index[i]] += DOF[j] * target[j].DeltaNormal[i];
        }
    }
    for (i = each vertex in base model) {   // skip this if we will do it later
        n[i].Normalize();
    }

Slide 56: Colors and Other Properties
In addition to interpolating the positions and normals, one can interpolate other per-vertex data: colors, alpha, texture coordinates, auxiliary shader properties.

Slide 57: Wrinkles
One application of auxiliary data interpolation is adding wrinkles. Every vertex stores an auxiliary property indicating how wrinkled
that area is. On the base model, this property would probably be 0 in most of the verts, indicating an unwrinkled state. Target expressions can have this property set at or near 1 in wrinkled areas.