Versatile Human Behavior Generation via Dynamic, Data-Driven Control

Tao Yu, COMP 768

Page 1:

Versatile Human Behavior Generation via Dynamic, Data-Driven Control

Tao Yu, COMP 768

Page 2:

Motivation

Motion of virtual characters is prevalent in: games, movies (visual effects), virtual reality, and more…

FIFA 2006 (EA)

NaturalMotion endorphin

Page 3:

Motivation

What virtual characters should be able to do:

1. Lots of behaviors - leaping, grasping, moving, looking, attacking

2. Exhibit personality - move “sneakily” or “aggressively”

3. Awareness of environment - balance/posture adjustments

4. Physical force-induced movements (jumping, falling, swinging)

Page 4:

Outline

Motion generation techniques
  Motion capture and key-framing
  Data-driven synthesis
  Physics-based animation
  Hybrid approaches

Dynamic motion controllers
  Quick ragdoll introduction
  Controllers

Transitioning between simulation and motion data
  Motion search – when and where
  Simulation-driven transition – how

Page 5:

Mocap and Key-framing

(+) Captures style and subtle nuances
(+) Absolute control: "wyciwyg" (what you capture is what you get)
(-) Difficult to adapt, edit, and reuse
(-) Not physically dynamic, especially for highly dynamic motion

Page 6:

Data-driven synthesis

Generate motion from examples:
  Blending, displacement maps
  Kinematic controllers built upon existing data
  Optimization / learned statistical models

(+) Creators retain control: creators define all the rules for movement
(-) Violates the "checks and balances" of motion: motion control abuses its power over physics
(-) Limits emergent behavior

Page 7:

Physics-based animation

Ragdoll simulation
Dynamic controllers

(+) Interacts well with the environment
(-) "Ragdoll" movement is lifeless
(-) Difficult to develop complex behaviors

Page 8:

Hybrid approaches

Mocap → stylistic realism
Physical simulation → physical realism

Hybrid approaches:
• Combine the best of both approaches
• Activate either one when most appropriate
• Add life to ragdolls using control systems (only simulate behaviors that are manageable)

Page 9:

A high-level example

Page 10:

Outline

Motion generation techniques
  Motion capture and key-framing
  Data-driven synthesis
  Physics-based animation
  Hybrid approaches

Dynamic motion controllers
  Quick ragdoll introduction
  Controllers

Transitioning between simulation and motion data
  Motion search – when and where
  Simulation-driven transition – how

Page 11:

Overview of dynamic controllers

Decision making: objectives, current state (x[t]) → desired motion (xd[t])
Motion control: desired motion (xd[t]), current state (x[t]) → motor forces (u[t])
Physics: current state (x[t]), motor forces (u[t]) → next state (x[t+1])

[Block diagram: Decision Making → Motion Control → Physics, with the new state x[t+1] fed back to both.]

xd[t] = Goal(x[t])
u[t] = MC(xd[t] − x[t])
x[t+1] = P(x[t], u[t])
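
To make the loop concrete, here is a minimal Python sketch of one control cycle; goal, motion_control, and physics_step are hypothetical stand-ins for the Goal, MC, and P boxes above, not any particular engine's API.

    import numpy as np

    def goal(x):
        """Decision making: objectives + current state -> desired state xd[t].
        (Hypothetical: here the objective is simply to drive all DOFs to rest.)"""
        return np.zeros_like(x)

    def motion_control(x_d, x, gain=50.0):
        """Motion control: tracking error -> motor forces, u[t] = MC(xd[t] - x[t])."""
        return gain * (x_d - x)

    def physics_step(x, u, dt=1.0 / 60.0):
        """Physics: x[t+1] = P(x[t], u[t]); a placeholder one-step integration."""
        return x + dt * u

    x = np.random.randn(30)          # current state x[t], e.g. 30 joint DOFs
    for t in range(600):             # run the loop for 10 seconds at 60 Hz
        x_d = goal(x)                # xd[t]  = Goal(x[t])
        u = motion_control(x_d, x)   # u[t]   = MC(xd[t] - x[t])
        x = physics_step(x, u)       # x[t+1] = P(x[t], u[t])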

Page 12:

Physics: setting up ragdolls

Given a dynamics engine:
  Set a primitive for each body part
  Set mass and inertial properties
  Create 1-, 2-, or 3-DOF joints between parts
  Set joint-limit constraints for each joint
  Apply external forces (gravity, impacts, etc.)

The dynamics engine supplies:
  Updated positions/orientations
  Collision resolution with the world

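As a concrete illustration, the sketch below builds a two-part arm with the PyODE bindings for ODE (the engine used in the live demo on Page 15). The shapes, masses, and joint limits are made-up values for illustration, and exact call names can differ between binding versions.

    import ode

    world = ode.World()
    world.setGravity((0.0, -9.81, 0.0))           # external force: gravity

    def make_part(total_mass, lx, ly, lz, pos):
        """One body part: box primitive with mass and inertia set."""
        body = ode.Body(world)
        m = ode.Mass()
        m.setBoxTotal(total_mass, lx, ly, lz)     # mass + inertia tensor
        body.setMass(m)
        body.setPosition(pos)
        return body

    upper_arm = make_part(2.0, 0.1, 0.3, 0.1, (0.0, 1.5, 0.0))
    lower_arm = make_part(1.5, 0.1, 0.3, 0.1, (0.0, 1.2, 0.0))

    elbow = ode.HingeJoint(world)                 # a 1-DOF joint between parts
    elbow.attach(upper_arm, lower_arm)
    elbow.setAnchor((0.0, 1.35, 0.0))
    elbow.setAxis((1.0, 0.0, 0.0))
    elbow.setParam(ode.ParamLoStop, 0.0)          # joint-limit constraints
    elbow.setParam(ode.ParamHiStop, 2.5)

    for _ in range(60):                           # engine supplies updated state
        world.step(1.0 / 60.0)
    print(lower_arm.getPosition())

Collision resolution with the world would additionally need an ode.Space and per-part geoms, omitted here for brevity.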

Page 13:

Controller types

Basic joint-torque controller
  Low-level control
  Sparse pose control (may be specified by an artist)
  Continuous control (e.g., tracking mocap data)

Hierarchical controller
  Layered controllers
  A higher-level controller determines the correct desired values for the low level
  Derived from sensor or state information: support polygon, center of mass, body contacts, etc.

Page 14:

Joint-torque controller

Proportional-derivative (PD servo) controller

Actuate each joint toward its desired target:

τ = ks(θdes − θ) + kd(θ̇des − θ̇)

Acts like a damped spring attached to the joint (rest position at the desired angle).

θdes is the desired joint angle and θ is the current angle; ks and kd are the spring and damper gains.
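
A toy 1-DOF version of this servo in Python; the unit joint inertia and hand-picked gains are assumptions of the sketch.

    import numpy as np

    def pd_torque(th_des, th, thd_des, thd, ks=300.0, kd=30.0):
        """PD servo: tau = ks*(th_des - th) + kd*(thd_des - thd)."""
        return ks * (th_des - th) + kd * (thd_des - thd)

    theta, theta_dot, dt = 0.0, 0.0, 1.0 / 600.0
    for _ in range(6000):                       # 10 simulated seconds
        tau = pd_torque(np.pi / 4, theta, 0.0, theta_dot)
        theta_dot += tau * dt                   # unit inertia: acceleration = tau
        theta += theta_dot * dt                 # semi-implicit Euler step
    print(theta)                                # settles near pi/4 with these gains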

Page 15:

Live demo

Created with

http://www.ode.org

Page 16:

Outline

Motion generation techniques
  Motion capture and key-framing
  Data-driven synthesis
  Physics-based animation
  Hybrid approaches

Dynamic motion controllers
  Quick ragdoll introduction
  Controllers

Transitioning between simulation and motion data
  Motion search – when and where
  Simulation-driven transition – how

Page 17:

Simulating falling and recovering behavior [Mandel 2004]

Page 18:

Transitioning between Techniques

Motion data → simulation
  When: significant external forces are applied to the virtual character.
  How: simply initialize the simulation with the pose and velocities extracted from the motion data.

Simulation → motion data
  When and where: when some appropriate pose is reached (hard to decide); at the motion frame closest to the simulated pose.
  How: drive the simulation toward the matched motion data using a PD controller.
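
The motion-data → simulation direction is simple enough to sketch directly: take the pose from the current frame and estimate velocities by finite differences over the neighboring frames. The frame array below is a hypothetical stand-in for a real clip.

    import numpy as np

    def sim_initial_state(frames, i, dt):
        """Pose from frame i; velocities by central differences over neighbors."""
        pose = frames[i]
        vel = (frames[i + 1] - frames[i - 1]) / (2.0 * dt)
        return pose, vel

    frames = np.random.randn(100, 54)     # hypothetical clip: 100 frames x 54 DOFs
    pose0, vel0 = sim_initial_state(frames, 42, dt=1.0 / 120.0)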

Page 19:

Motion state spaces

State space of the data-driven technique: any pose present in the motion database.
State space of the dynamics-based technique: the set of poses allowed by physical constraints.

The latter is larger because it:
  can produce motion that is difficult to animate or capture
  includes a large set of unnatural poses

A correspondence must be made to allow transitions between the two.

Page 20:

Motion searching

Problem: find the nearest matches in the motion database to the current simulated motion.

Approach:
1. Data representation
   • Joint positions
2. Process into a spatial data structure
   • kd-tree / bbd-tree (box decomposition)
3. Search the structure at runtime
   • The query pose comes from the simulation
   • Approximate nearest-neighbor (ANN) search
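
A sketch of steps 2 and 3 using SciPy's kd-tree, whose eps parameter gives exactly this kind of (1+ε)-approximate search; the database here is random stand-in data, with each pose flattened to root-relative 3-D joint positions.

    import numpy as np
    from scipy.spatial import cKDTree

    N, J = 10000, 15                       # hypothetical database: N poses, J joints
    database = np.random.randn(N, 3 * J)   # root-relative joint positions, flattened

    tree = cKDTree(database)               # step 2: built once, offline

    query_pose = np.random.randn(3 * J)    # step 3: query comes from the simulation
    dist, idx = tree.query(query_pose, k=5, eps=0.1)   # (1+eps)-approximate NN
    print(idx)                             # candidate frames, nearest first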

Page 21:

Data Representation: Joint Positions

We need a representation that allows numerical comparison of body postures.

Joint angles are not as discriminating as joint positions.

Ignore root translation and align about the vertical axis. We may also want to include joint velocities: velocity is accounted for by taking the surrounding frames into the distance computation.

Page 22:

Distance metric

D(F1, F2) = Σj=1..J wj ‖p1,j − T p2,j‖²

J – number of joints
wj – joint weight
p – global position of a joint
T – transformation to align the first frame

[Figure: original joint positions vs. aligned positions]
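
A direct numpy transcription of the metric under the definitions above; representing T as a 4×4 homogeneous transform is an assumption of this sketch.

    import numpy as np

    def pose_distance(p1, p2, w, T):
        """Sum over joints of wj * ||p1_j - T p2_j||^2, with T a 4x4 transform."""
        p2_aligned = p2 @ T[:3, :3].T + T[:3, 3]
        return float(np.sum(w * np.sum((p1 - p2_aligned) ** 2, axis=1)))

    J = 15
    p1, p2 = np.random.randn(J, 3), np.random.randn(J, 3)   # two poses to compare
    w = np.ones(J)                                          # per-joint weights wj
    T = np.eye(4)                                           # alignment transform
    print(pose_distance(p1, p2, w, T))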

Page 23:

Searching process

Approximate nearest-neighbor (ANN) search:
  First find the cell containing the query point in the spatial data structure built over the input data points.
  A randomized search then examines surrounding cells containing points within the given ε threshold of the actual nearest-neighbor distance.

Results are guaranteed to be within a factor of (1 + ε) of the distance to the actual nearest neighbors.

Expected query time is logarithmic in n (with constants depending on ε and the dimension), after O(n log n) construction.

Much better in practice than exact nearest-neighbor search as the dimensionality of the points increases.

Page 24:

Speeding up search

Curse of dimensionality

Search each joint position separately; pair more joints together to increase accuracy.

n 3-DOF searches are faster than one 3n-DOF search…
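
One way to realize this per-joint strategy: build a 3-D tree per joint, query each separately, then rescore the union of candidates against the full pose. The union-and-rescore merging step is an assumption of this sketch; the slide does not specify how per-joint results are combined.

    import numpy as np
    from scipy.spatial import cKDTree

    N, J = 10000, 15
    joints = np.random.randn(N, J, 3)                      # database joint positions
    trees = [cKDTree(joints[:, j, :]) for j in range(J)]   # one 3-D tree per joint

    def candidate_frames(query, k=20):
        """Union of per-joint nearest neighbors, rescored on the full pose."""
        query = query.reshape(J, 3)
        cands = set()
        for j, tree in enumerate(trees):
            _, idx = tree.query(query[j], k=k)             # cheap 3-DOF search
            cands.update(int(i) for i in idx)
        cands = np.array(sorted(cands))
        err = np.sum((joints[cands] - query) ** 2, axis=(1, 2))
        return cands[np.argsort(err)]                      # best full-body matches first

    print(candidate_frames(np.random.randn(J * 3))[:5])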

Page 25:

Simulating behavior

Model the reaction to impacts that cause loss of balance.
Two controllers handle the before-contact and after-contact phases, respectively.
Ensure a transition to a balanced posture in the motion data.

Page 26:

Fall controller

Aim: produce biomechanically inspired, protective behaviors in response to the many different ways a human may fall to the ground.

Page 27:

Fall controller

Continuous control strategy

Four controller states according to falling direction: backward, forward, right, left.

During each state, one or both arms are controlled to track the predicted landing positions of the shoulders. The goal of a controlled arm is to have the wrist intersect the line between the shoulder and its predicted landing position.

A small natural bend is added at the elbow, and the desired angles for the rest of the body are set to their values at the time the fall controller is activated.

Page 28:

Fall controller

Determine the controller state:

θ is the facing direction of the character; v is the average velocity of the limbs.
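
The slide's selection rule did not survive extraction; the sketch below shows one plausible reconstruction, classifying the horizontal limb velocity in the character's facing frame into the four states.

    import numpy as np

    def fall_state(theta, v):
        """Assumed rule: rotate the average limb velocity v into the facing
        frame (theta about the vertical axis) and pick the dominant direction."""
        c, s = np.cos(theta), np.sin(theta)
        vx = c * v[0] + s * v[2]       # sideways component in the body frame
        vz = -s * v[0] + c * v[2]      # forward component in the body frame
        if abs(vz) >= abs(vx):
            return "forward" if vz > 0.0 else "backward"
        return "right" if vx > 0.0 else "left"

    print(fall_state(theta=0.3, v=np.array([0.2, -0.1, -1.5])))   # -> "backward"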

Page 29:

Fall controller

Determine the target shoulder joint angles:
  They can change as the simulation steps forward.
  The gains ks and kd are properly tuned.

Page 30:

Settle controller

Aim: drive the character toward a similar motion clip at an appropriate time.

Begins when the hands impact the ground. Two states:

Absorb impact: gains are adjusted to reduce hip and upper-body velocity. Lasts half a second before the next state.

ANN search: find a frame in the motion database that is close to the currently simulated posture; use the found frame as the target while continuing to absorb the impact. The simulated motion is then smoothly blended into the motion data.

Final results demo

Page 31:

An alternative for response motion synthesis [Zordan 2005]

Problem: generating dynamic response motion to an external impact.

Insight: dynamics is often only needed for a short time (a burst). After that, the utility of the dynamics decreases due to the lack of good behavior control.

Return to mocap once the character becomes "conscious" again.

Page 32:

Generating dynamic response motion

1. Transition to simulation when the impact takes place.
2. Search the motion data for a transition-to sequence similar to the simulated response motion.
3. Run a second simulation with a joint-torque controller actuating the character toward the matching motion.
4. Blend at the end to eliminate the discontinuity between the simulated and transition-to motions.

Page 33:

Motion selection

Aim: find a transition-to motion.

Windows of frames are compared between the simulation and the motion data. Frames are aligned so that the root position and orientation of the start frame in each window coincide.

Distance between a simulation window and a motion-data window:

D = Σi wi Σb ( wpb ‖pb,i − p′b,i‖ + wθb ‖θb,i − θ′b,i‖ )

pb, θb: body-part position and orientation
wi: window weight, a quadratic function with its highest value at the start frame, decreasing for subsequent frames
wpb, wθb: linear and angular distance scales for each body part

Page 34:

Transition motion synthesis

Aim: generate the motion to fill the gap between the beginning of the interaction and the found motion data.

Realized in two steps:
  Run a second simulation that tracks an intermediate sequence
  Blend the physically generated motion into the transition-to motion data

Page 35:

Transition motion synthesis

Simulation 2

An inertia-scaled PD servo is used to compute the torque at each joint.

The tracked sequence is generated by blending the start and end frames using SLERP with an ease-in/ease-out curve.

A deliberate delay in tracking is introduced to make the reaction look realistic.
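
A sketch of the tracked sequence for a single joint using SciPy's Slerp, with a smoothstep curve standing in for the ease-in/ease-out and a small assumed delay; the endpoint orientations are made-up values.

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def ease(t):
        """Ease-in/ease-out (smoothstep) remapping of normalized time."""
        return t * t * (3.0 - 2.0 * t)

    # Start frame (simulated pose) and end frame (transition-to motion data).
    keyrots = Rotation.from_euler("xyz", [[0.0, 0.0, 0.0], [0.5, 1.0, -0.3]])
    slerp = Slerp([0.0, 1.0], keyrots)

    delay, duration, dt = 0.1, 0.5, 1.0 / 60.0   # deliberate tracking delay (s)
    for i in range(int(duration / dt)):
        t = np.clip((i * dt - delay) / (duration - delay), 0.0, 1.0)
        target = slerp(ease(t))       # desired orientation fed to the PD servo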

Page 36:

Conclusion

Hybrid approaches
  Complex dynamic behaviors are hard to model physically.
  A viable option for synthesizing character motion under a wider range of situations.
  Able to incorporate unpredictable interactions, especially in games.

Making it more practical
  Automatic computation of motion-controller parameters [Allen 2007]
  Speeding up the search via a pre-learned model [Zordan 2007]

Page 37:

References

MANDEL, M. 2004. Versatile and interactive virtual humans: Hybrid use of data-driven and dynamics-based motion synthesis. Master's thesis, Carnegie Mellon University.

ZORDAN, V. B., MAJKOWSKA, A., CHIU, B., AND FAST, M. 2005. Dynamic response for motion capture animation. ACM Trans. Graph. 24, 3, 697–701.

ALLEN, B., CHU, D., SHAPIRO, A., AND FALOUTSOS, P. 2007. On the beat! Timing and tension for dynamic characters. In Proc. ACM SIGGRAPH/Eurographics Symposium on Computer Animation.

ZORDAN, V. B., MACCHIETTO, A., MEDINA, J., SORIANO, M., AND WU, C.-C. 2007. Interactive dynamic response for games. In Proc. ACM SIGGRAPH Sandbox Symposium.