
Agents and Avatars

Ruth Aylett

Overview

Agents and Avatars
Believability v naturalism
Building the body
H-anim
Moving and sensing

IVAs

Intelligent Virtual Agents (IVAs)
– Also:
• Synthetic characters
• Embodied conversational agents (ECAs)
• Virtual humans
• BUT do not have to be humanoid…
– Embodied and autonomous
• Require a control architecture, or 'agent mind'
• Insides v outsides: combining AI with graphics
– 'Inhabit' a virtual environment

Why IVAs?

Adding life to a VE
– Animals, birds, insects
– Crowds
• E.g. students in the virtual campus
– Increase sense of presence

As a guide or teacher
– Front-end to embedded knowledge in a VE

As a character in a story
– Computer games

Interface agents
– On web pages to make them more human
– Personal representative
– Sales rep

Why IVAs? - 2

As virtual actors
– Instead of extras
– Immersive Education Media Stage

As part of a simulation
– Hostage release training
– Battlefield medical training

Scientific investigation
– Building evacuation
– Testing for disability friendliness
– Ecology and animal behaviour

Avatars

Hindu Sanskrit term: "Representation of a deity in visible form"
– Snow Crash (Neal Stephenson, 1992): "A graphical representation of yourself"

VE representation of the user
– Embodiment need not be humanoid

Driven by the user
– So NOT autonomous
– A mapping rather than a control problem

Video avatars

"Magic video mirror" with back-projection
Video image mixed with computer-generated overlay

Creating a video avatar (ALIVE, MIT)
• Unencumbered interaction between human visitor and virtual character based on position, postures, and gestures
• Main focus on "virtual presence"

http://www.ai.univie.ac.at/oefai/agents/

Believability

Term introduced by Joe Bates of the Oz group at CMU in the 1990s
– Combined art and technology

Very hard to define
– A willing suspension of disbelief?
– Seem like 'real' characters?
– The 'illusion of life'?
– Willingness to attribute an internal state
• Ascribe intentionality

Believability and Naturalism

Are they the same thing?
Graphics people seem to think so
– Is Mickey a real mouse?
– Is he a believable character?

The uncanny valley

The 'uncanny valley'
– Work by a Japanese researcher, Masahiro Mori
– Acceptability v naturalism
– Acceptability goes negative as the character becomes 'nearly human'

SEE: http://en.wikipedia.org/wiki/Uncanny_Valley

The problem of expectations

Humans have hard-wired expectations
– Used to interpret inter-personal behaviour
– A fundamental social skill

Need to invoke this very carefully
– Acceptability of movement
• Lip sync a key problem
– Interactive responsiveness
• Memory of interaction a key issue
• Games create a real problem with instant rewind

Building a body

Use of a 3D modelling package
– 3ds Character Studio; Poser, etc.

Creation of skeleton
– Required for animation later

Polygonal body
– But overall count must be low for real-time interaction: low thousands or fewer if possible

Texture to cover body
– One body, many textures?

Stage 1: Image Capture
– Texture images × 4
– Silhouette images × 4

Stage 2: Deform the Mesh
– Generic avatar: polygonal structure, 1500 polygons
– Change data deforms the mesh

Stage 3: Apply the Textures
– Apply the textures to the deformed mesh

The touch-up process

What sort of motion?

Walking around
– Jumping, running, swimming
– Human and non-human

Picking things up
– Agent-agent interaction

Gesture

Talking heads
– Facial muscles, lip synch

Group movement
– Crowds, flocks, herds

Moving the body

Animation
– Drawing on cartoon animation

Motion capture
– Drawing on gait analysis and then film

Physically-based modelling (PBM)
– Drawing on robotics
• Analytically calculated
• Learned

Animation

Time consuming for good results
– Requires artistic skills

The main character, Woody, in Toy Story had:
– 700 degrees of freedom (200 for the face and fifty for the mouth)
– 150 people (at Pixar) would generate 3 minutes of animation a week

Can it be standardised?

Self-animation

Poses new problems:
– Parametrisable animation very desirable
• E.g. a walk that could be reused with different stride length or foot height
– Melding and combining of animations 'on the fly' (see the sketch after this list)
• Not just a morphing problem
• 'Starting position'?
– Extra actions inserted by the character
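A minimal sketch of melding two animations on the fly, assuming each pose is a map from joint names to unit quaternions sampled from a clip; slerp and blend_poses are illustrative helpers, not a particular engine's API:

import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                     # take the shorter arc
        q1, dot = tuple(-c for c in q1), -dot
    if dot > 0.9995:                  # nearly parallel: lerp and renormalise
        q = [a + t * (b - a) for a, b in zip(q0, q1)]
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def blend_poses(pose_a, pose_b, weight):
    """Blend two poses ({joint_name: quaternion}) with weight in [0, 1]."""
    return {joint: slerp(pose_a[joint], pose_b[joint], weight) for joint in pose_a}

# Cross-fading from 'walk' into 'run': each frame, sample both clips at the
# current time and ramp the blend weight from 0 to 1 over the transition.

Blending joint rotations this way avoids the artefacts of morphing vertex positions directly, but it does not by itself solve the 'starting position' problem of aligning the two clips in time and space.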

Standardising animation?

Animation depends on the structure of the skeleton being animated

Can we standardise a humanoid skeleton?
– H-anim is just such an attempt
– Originally in the context of VRML
– See www.h-anim.org

H-anim

A set of standard components
– Humanoid: root of a figure
– Joint: attached using a transform specifying the current state of articulation, plus the geometry associated with the attached body part
– Segment: specifies attributes of the physical links between joints
– Site: where semantics can be added
– Displacer: range of movement allowed for the object in which it is embedded

H-anim

And in X3D…

<HAnimJoint DEF='hanim_l_hip' center='0.0961 0.9124 -0.0001' name='l_hip'>
  <HAnimSegment DEF='hanim_l_thigh' name='l_thigh'/>
  <HAnimJoint DEF='hanim_l_knee' center='0.1040 0.4867 0.0308' name='l_knee'>
    <HAnimSegment DEF='hanim_l_calf' name='l_calf'/>
    <HAnimJoint DEF='hanim_l_ankle' center='0.1101 0.0656 -0.0736' name='l_ankle'>
      <HAnimSegment DEF='hanim_l_hindfoot' name='l_hindfoot'/>
    </HAnimJoint>
  </HAnimJoint>
</HAnimJoint>

Motion capture

Electro-magnetic or by camera

Produces the most 'natural' results
– Extensively used in film for graphical extras
– Can be used on avatars very successfully to transmit user movement to their graphical representation

Even more problems for self-animation
– Harder to parametrise
– Worse combining/melding problems

Mocap process

Calibration
Capture
3D Position Reconstruction
Fitting to the Skeleton
Post Processing

Calibration

Triangulate camera positions and set the origin

Wand
– Calibrate and triangulate cameras and the capture volume

L-frame
– Define ground plane, up vector and origin

[Figure: triangulating a marker from two cameras via angles θ1 and θ2]

3D Position Reconstruction (Utopia)

3D Position Reconstruction (Reality)
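A minimal sketch of the idealised reconstruction step, assuming two calibrated cameras whose centres and marker ray directions are already known; the midpoint method here is an illustrative choice, not the output of any particular mocap toolchain:

import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Estimate a marker's 3D position as the midpoint of the shortest
    segment between two camera rays (c = camera centre, d = unit direction)."""
    r = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b
    if abs(denom) < 1e-9:            # rays almost parallel: unstable, give up
        return None
    t1 = (b * e - c * d) / denom     # parameter of closest point on ray 1
    t2 = (a * e - b * d) / denom     # parameter of closest point on ray 2
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

In reality markers go missing, merge or get swapped between frames, which is why the tracking step on the next slide is needed.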

Multiple Hypothesis Tracking

For a small number of markers, size and occlusions are a problem
– Multiple Hypothesis Tracking

Ringer, et al., 2002


Fitting to the Skeleton

Utopian approach
– 10–20% length changes

Markers on both sides

Joint displacement
– Use rotation angles only

Post Processing

Motion Editing
– Cut, Copy, Paste

Motion Warping (see the sketch below)
– Speed up or Slow Down
– Rotate, Scale or Translate

Motion Signal Processing
– Smoother Motions
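A minimal sketch of two of these operations on a single motion channel (a list of per-frame joint values): time warping by resampling, and signal smoothing with a moving average. Both are illustrative stand-ins for what mocap editing packages provide:

def time_warp(frames, speed):
    """Resample a channel to play 'speed' times faster (>1) or slower (<1)."""
    n_out = max(2, int(len(frames) / speed))
    out = []
    for i in range(n_out):
        t = i * (len(frames) - 1) / (n_out - 1)    # fractional source frame
        lo = int(t)
        hi = min(lo + 1, len(frames) - 1)
        a = t - lo
        out.append((1 - a) * frames[lo] + a * frames[hi])
    return out

def smooth(frames, window=5):
    """Moving-average filter: removes marker jitter at the cost of fine detail."""
    half = window // 2
    return [sum(frames[max(0, i - half):i + half + 1]) /
            len(frames[max(0, i - half):i + half + 1])
            for i in range(len(frames))]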

Physically-based modelling

Extending robot motor control

Forward kinematics (see the sketch below)
– If you move the joints by so much…
– Where does the end effector go?

Inverse kinematics
– If you want the end effector at x, y, z…
– How much should which joints move?

Computationally demanding and often not very naturalistic
– But the most flexible option
– Animation blending often added…
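A minimal sketch of both questions for a planar two-link limb; the link lengths and the single 'elbow-down' IK solution are illustrative assumptions:

import math

def fk_2link(theta1, theta2, l1=0.4, l2=0.4):
    """Forward kinematics: given the joint angles (radians),
    where does the end effector end up?"""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def ik_2link(x, y, l1=0.4, l2=0.4):
    """Inverse kinematics: given a target (x, y) for the end effector,
    how much should each joint move? Returns None if out of reach."""
    cos_t2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_t2) > 1.0:
        return None                      # target outside the workspace
    theta2 = math.acos(cos_t2)           # 'elbow-down' solution only
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

For a full body the same questions involve many coupled joints and extra constraints, which is where the computational cost and the unnatural-looking solutions come from.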

Ragdoll models

Multiple rigid body simulation
– Each tied to a bone in the skeleton
– Constraints for joint movement

Extended into procedural animation of the whole body
– Natural Motion: Euphoria

Fish example: spring-mass model (see the sketch below)
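A minimal spring-mass sketch of the kind the fish model builds on: point masses joined by damped springs, advanced with explicit Euler integration; the constants and the 2D setting are illustrative only:

def spring_force(pa, va, pb, vb, rest, k=50.0, damping=0.5):
    """Damped spring force acting on mass A from spring A-B (2D tuples)."""
    dx, dy = pb[0] - pa[0], pb[1] - pa[1]
    length = (dx * dx + dy * dy) ** 0.5 or 1e-9
    nx, ny = dx / length, dy / length                # unit vector A -> B
    rel_v = (vb[0] - va[0]) * nx + (vb[1] - va[1]) * ny
    f = k * (length - rest) + damping * rel_v        # Hooke term + damping
    return f * nx, f * ny

def step(positions, velocities, masses, springs, dt=0.002):
    """One explicit Euler step; 'springs' is a list of (i, j, rest_length)."""
    forces = [[0.0, 0.0] for _ in positions]
    for i, j, rest in springs:
        fx, fy = spring_force(positions[i], velocities[i],
                              positions[j], velocities[j], rest)
        forces[i][0] += fx; forces[i][1] += fy
        forces[j][0] -= fx; forces[j][1] -= fy
    for i, m in enumerate(masses):
        velocities[i] = (velocities[i][0] + dt * forces[i][0] / m,
                         velocities[i][1] + dt * forces[i][1] / m)
        positions[i] = (positions[i][0] + dt * velocities[i][0],
                        positions[i][1] + dt * velocities[i][1])

In the fish work the 'muscle' springs have rest lengths that are varied over time, so swimming emerges from the physics rather than from keyframes.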

Learning to move

Use of AI learning algorithms
– Move
– Evaluate and score
– Keep movements which work well (see the sketch below)

Karl Sims' 'Blockies'
– Early 1990s

Terzopoulos - Fish Learning
http://www.csri.utoronto.ca/~dt/
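A minimal sketch of the 'move, evaluate and score, keep what works' loop as simple hill-climbing over a vector of motor-controller parameters; the evaluate function (e.g. distance swum in a physics simulation) is a stand-in the caller must supply:

import random

def learn_to_move(evaluate, n_params=8, iterations=500, step=0.1):
    """Keep perturbing the controller parameters, keeping changes that
    score better under 'evaluate' (e.g. distance travelled)."""
    best = [random.uniform(-1.0, 1.0) for _ in range(n_params)]
    best_score = evaluate(best)
    for _ in range(iterations):
        candidate = [p + random.gauss(0.0, step) for p in best]   # move
        score = evaluate(candidate)                               # evaluate and score
        if score > best_score:                                    # keep what works
            best, best_score = candidate, score
    return best, best_score

Sims' creatures used a richer evolutionary search over both bodies and controllers, but the underlying generate-evaluate-keep loop is the same.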

Virtual sensors

Local v global interaction
– Global: read from world data-structures
– Local: 'sense' the environment

Global approach very common
– In most computer games
– Efficient, easy to test

Advantages of local sensing

Scales well
– Not affected by the global size of the environment

Agent has independence of the environment
– Up to a point

Makes for believability
– Agent perceives what it should
– Can't see you round corners
– Emergent complexity

TeleTubby Sensors

Forward ray-tracing sensor has a range of seven metres and sweeps 45°, five times/sec (see the sketch below)

One vector sensor directed vertically downwards; its intersection with the ground is continually detected

All these sensors are attached to the geometry of the agent
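A minimal sketch of this style of local sensing, where cast_ray(origin, direction, max_range) is an assumed engine-specific query returning a hit distance or None:

import math

def sweep_sensor(agent_pos, agent_heading, cast_ray,
                 sweep_deg=45.0, rays=9, max_range=7.0):
    """Sweep a fan of rays ahead of the agent and return the nearest hit
    distance, or None if nothing lies within range."""
    nearest = None
    for i in range(rays):
        # Spread the rays evenly across the sweep, centred on the heading.
        offset = math.radians(-sweep_deg / 2 + i * sweep_deg / (rays - 1))
        angle = agent_heading + offset
        direction = (math.cos(angle), math.sin(angle))
        hit = cast_ray(agent_pos, direction, max_range)
        if hit is not None and (nearest is None or hit < nearest):
            nearest = hit
    return nearest

Because the query starts from the agent's own position and heading, the agent only perceives what is in front of it, which is exactly the believability property argued for above.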

Sensing by message passing

Requires an architecture that distributes events as messages (see the sketch below)
– Concept of locale
• What is local for local sensing?
– Semantically determined: e.g. a room

Scenegraphs not set up for this style of message passing
– Think of routing in VRML for example
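A minimal sketch of locale-based event distribution, assuming the VE tags each event with a semantic locale such as a room; LocaleBus is an illustrative name, not part of VRML or any scenegraph API:

from collections import defaultdict

class LocaleBus:
    """Deliver events only to agents subscribed to the same locale,
    so 'sensing' stays local without scanning global world data."""
    def __init__(self):
        self._subscribers = defaultdict(list)     # locale name -> callbacks

    def subscribe(self, locale, callback):
        self._subscribers[locale].append(callback)

    def publish(self, locale, event):
        for callback in self._subscribers[locale]:
            callback(event)

# bus = LocaleBus()
# bus.subscribe('kitchen', lambda e: print('agent senses:', e))
# bus.publish('kitchen', {'type': 'door_opened'})   # only kitchen agents notified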

Credits

Edward Tse, University of Calgary