The Basics Guide

Copyright and Disclaimer

© 2009 Autodesk, Inc. All rights reserved. Except as otherwise permitted by Autodesk, Inc., this publication, or parts thereof, may not be reproduced in any form, by any method, for any purpose.

Certain materials included in this publication are reprinted with the permission of the copyright holder.

The following are registered trademarks or trademarks of Autodesk, Inc., and/or its subsidiaries and/or affiliates in the USA and other countries: 3DEC (design/logo), 3December, 3December.com, 3ds Max, ADI, Algor, Alias, Alias (swirl design/logo), AliasStudio, Alias|Wavefront (design/logo), ATC, AUGI, AutoCAD, AutoCAD Learning Assistance, AutoCAD LT, AutoCAD Simulator, AutoCAD SQL Extension, AutoCAD SQL Interface, Autodesk, Autodesk Envision, Autodesk Intent, Autodesk Inventor, Autodesk Map, Autodesk MapGuide, Autodesk Streamline, AutoLISP, AutoSnap, AutoSketch, AutoTrack, Backburner, Backdraft, Built with ObjectARX (logo), Burn, Buzzsaw, CAiCE, Can You Imagine, Character Studio, Cinestream, Civil 3D, Cleaner, Cleaner Central, ClearScale, Colour Warper, Combustion, Communication Specification, Constructware, Content Explorer, Create>what's>Next> (design/logo), Dancing Baby (image), DesignCenter, Design Doctor, Designer's Toolkit, DesignKids, DesignProf, DesignServer, DesignStudio, Design|Studio (design/logo), Design Web Format, Discreet, DWF, DWG, DWG (logo), DWG Extreme, DWG TrueConvert, DWG TrueView, DXF, Ecotect, Exposure, Extending the Design Team, Face Robot, FBX, Fempro, Filmbox, Fire, Flame, Flint, FMDesktop, Freewheel, Frost, GDX Driver, Gmax, Green Building Studio, Heads-up Design, Heidi, HumanIK, IDEA Server, i-drop, ImageModeler, iMOUT, Incinerator, Inferno, Inventor, Inventor LT, Kaydara, Kaydara (design/logo), Kynapse, Kynogon, LandXplorer, Lustre, MatchMover, Maya, Mechanical Desktop, Moldflow, Moonbox, MotionBuilder, Movimento, MPA, MPA (design/logo), Moldflow Plastics Advisers, MPI, Moldflow Plastics Insight, MPX, MPX (design/logo), Moldflow Plastics Xpert, Mudbox, Multi-Master Editing, NavisWorks, ObjectARX, ObjectDBX, Open Reality, Opticore, Opticore Opus, Pipeplus, PolarSnap, PortfolioWall, Powered with Autodesk Technology, Productstream, ProjectPoint, ProMaterials, RasterDWG, Reactor, RealDWG, Real-time Roto, REALVIZ, Recognize, Render Queue, 
Retimer, Reveal, Revit, Showcase, ShowMotion, SketchBook, Smoke, Softimage, Softimage|XSI (design/logo), Sparks, SteeringWheels, Stitcher, Stone, StudioTools, Topobase, Toxik, TrustedDWG, ViewCube, Visual, Visual Construction, Visual Drainage, Visual Landscape, Visual Survey, Visual Toolbox, Visual LISP, Voice Reality, Volo, Vtour, Wire, Wiretap, WiretapCentral, XSI, and XSI (design/logo).

Python is a registered trademark of Python Software Foundation. All other brand names, product names or trademarks belong to their respective holders.

Disclaimer

THIS PUBLICATION AND THE INFORMATION CONTAINED HEREIN IS MADE AVAILABLE BY AUTODESK, INC. "AS IS." AUTODESK, INC. DISCLAIMS ALL WARRANTIES, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE REGARDING THESE MATERIALS.

Documentation Team

Judy Bayne, Grahame Fuller, Amy Green, Edna Kruger, and Naomi Yamamoto.

November 2009


Contents

Welcome to Autodesk® Softimage®! . . . . . . . . . . . . . . . . . 9

Section 1: Introducing Softimage . . . 13
• The Softimage Interface . . . 14
• Getting Commands and Tools . . . 16
• Setting Values for Properties . . . 18
• Working with Views . . . 21
• Working in 3D Views . . . 23
• Exploring Your Scene . . . 32

Section 2: Elements of a Scene . . . 37
• What’s in a Scene? . . . 38
• Selecting Elements . . . 39
• Objects . . . 45
• Properties . . . 51
• Components and Clusters . . . 54
• Parameter Maps . . . 56

Section 3: Moving in 3D Space . . . 61
• Coordinate Systems . . . 62
• Transformations . . . 64
• Center Manipulation . . . 70
• Freezing Transformations . . . 70
• Resetting Transformations . . . 70
• Setting Neutral Poses . . . 70
• Transform Setup . . . 71
• Transformations and Hierarchies . . . 71
• Snapping . . . 72

Section 4: Organizing Your Data . . . 73
• Where Files Get Stored . . . 74
• Scenes . . . 75
• Projects . . . 78
• Models . . . 79
• Importing and Exporting . . . 82

Section 5: General Modeling . . . 83
• Overview of Modeling . . . 84
• Geometric Objects . . . 85
• Accessing Modeling Commands . . . 88
• Starting from Scratch . . . 89
• Operator Stack . . . 91
• Modeling Relations . . . 94
• Attribute Transfer (GATOR) . . . 95
• Manipulating Components . . . 95
• Deformations . . . 102

Section 6: Curves . . . 103
• About Curves . . . 104
• Drawing Curves . . . 104
• Manipulating Curve Components . . . 107
• Modifying Curves . . . 110
• Creating Curves from Other Objects . . . 110
• Importing EPS Files . . . 111


Section 7: Polygon Mesh Modeling . . . 113
• Overview of Polygon Mesh Modeling . . . 114
• About Polygon Meshes . . . 114
• Converting Curves to Polygon Meshes . . . 118
• Drawing Polygons . . . 119
• Subdividing . . . 120
• Drawing Edges . . . 121
• Extruding Components . . . 122
• Removing Polygon Mesh Components . . . 123
• Combining Polygon Meshes . . . 124
• Symmetrizing Polygons . . . 125
• Cleaning Up Meshes . . . 126
• Reducing Polygons . . . 126
• Polygon Normals . . . 127
• Subdivision Surfaces . . . 128

Section 8: NURBS Surface Modeling . . . 131
• About Surfaces . . . 132
• Building Surfaces . . . 133
• Modifying Surfaces . . . 134
• Projecting and Trimming with Curves . . . 135
• Surface Meshes . . . 136

Section 9: Animation . . . 139
• Bringing It to Life . . . 140
• Playing the Animation . . . 143
• Previewing Animation . . . 145
• Animating with Keys . . . 146
• Animating Transformations . . . 151
• Editing Keys and Function Curves . . . 154
• Layering Animation . . . 159
• Constraints . . . 160
• Path Animation . . . 163
• Linking Parameters . . . 164
• Expressions . . . 166
• Copying Animation . . . 168
• Scaling and Offsetting Animation . . . 169
• Plotting (Baking) Animation . . . 170
• Removing Animation . . . 170

Section 10: Character Animation . . . 171
• Character Animation in a Nutshell . . . 172
• Setting Up Your Character . . . 175
• Building Skeletons for Characters . . . 177
• Enveloping . . . 181
• Rigging a Character . . . 187
• Animating Characters with FK and IK . . . 190
• Walkin’ the Walk Cycle . . . 194
• Motion Capture . . . 195
• Making Faces with Face Robot . . . 198

Section 11: Shape Animation . . . 201
• Things are Shaping Up . . . 202
• Using Construction Modes for Shape Animation . . . 204
• Creating and Animating Shapes in the Shape Manager . . . 205
• Selecting Target Shapes to Create Shape Keys . . . 206
• Storing and Applying Shape Keys . . . 207
• Using the Animation Mixer for Shape Animation . . . 208
• Mixing the Weights of Shape Keys . . . 209

Section 12: Actions and the Animation Mixer . . . 211
• What Is Nonlinear Animation? . . . 212
• The Animation Mixer . . . 213
• Storing Animation in Action Sources . . . 214
• Working with Clips in the Animation Mixer . . . 216
• Mixing the Weights of Action Clips . . . 217
• Modifying and Offsetting Action Clips . . . 218
• Sharing Animation between Models . . . 220
• Adding Audio to the Mix . . . 222

Section 13: Simulation . . . 223
• Simulated Effects . . . 224
• Making Things Move with Forces . . . 225
• Hair and Fur . . . 227
• Rigid Body Dynamics . . . 232
• Cloth Dynamics . . . 237
• Soft Body Dynamics . . . 239

Section 14: ICE: The Interactive Creative Environment . . . 241
• What is ICE? . . . 242
• The ICE Tree View . . . 244
• ICE Simulations . . . 247
• Forces and ICE Simulations . . . 250
• ICE Deformations . . . 252
• Building ICE Trees . . . 255
• ICE Compounds . . . 267

Section 15: ICE Particles . . . 271
• Making ICE Particle Effects . . . 272
• Particles that Bounce, Splash, Stick, Slide, and Flow . . . 277
• Particle Goals . . . 279
• Spawning New Particles . . . 281
• Particle Strands . . . 283
• Particle Instances . . . 285
• ICE Particle States . . . 287
• ICE Rigid Bodies . . . 289
• ICE Particle Shaders . . . 292

Section 16: Shaders . . . 295
• The Shader Library . . . 296
• About Surface Shaders . . . 300
• Basic Surface Color Attributes . . . 302
• Reflectivity, Transparency, and Refraction . . . 303
• Applying Shaders to Scene Elements . . . 306
• The Render Tree . . . 307
• Building Shader Networks . . . 310
• Creating Shader Compounds . . . 312

Section 17: Materials . . . 315
• About Materials . . . 316
• The Material Manager . . . 317
• Creating and Assigning Materials . . . 319
• Material Libraries . . . 321

Section 18: Texturing . . . 323
• How Surface and Texture Shaders Work Together . . . 324
• Types of Textures . . . 325
• Applying Textures . . . 326
• Texture Projections and Supports . . . 327
• Editing Texture Projections . . . 333
• UV Coordinates . . . 335
• Editing UV Coordinates in the Texture Editor . . . 336
• Texture Layers . . . 338
• Bump Maps and Displacement Maps . . . 342
• Reflection Maps . . . 344
• Baking Textures with RenderMap . . . 345
• Painting Colors at Vertices . . . 346


Section 19: Lighting . . . 347
• Types of Lights . . . 348
• Placing Lights . . . 349
• Setting Light Properties . . . 350
• Selective Lights . . . 352
• Creating Shadows . . . 352
• Global Illumination . . . 355
• Caustics . . . 357
• Final Gathering . . . 358
• Ambient Occlusion . . . 359
• Image-Based Lighting . . . 359
• Light Effects . . . 360

Section 20: Cameras . . . 361
• Types of Cameras . . . 362
• The Camera Rig . . . 363
• Working with Cameras . . . 364
• Setting Camera Properties . . . 365
• Lens Shaders . . . 366
• Motion Blur . . . 368

Section 21: Rendering . . . 369
• Rendering Overview . . . 370
• Render Passes . . . 371
• Render Channels . . . 375
• Setting Rendering Options . . . 375
• Different Ways to Render . . . 379

Section 22: Compositing and 2D Paint . . . 381
• Softimage Illusion . . . 382
• Adding Images and Render Passes . . . 383
• Adding and Connecting Operators . . . 384
• Editing and Previewing Operators . . . 386
• Rendering Effects . . . 387
• 2D Paint . . . 388
• Vector Paint vs. Raster Paint . . . 389
• Painting Strokes and Shapes . . . 390
• Merging and Cloning . . . 392

Section 23: Customizing Softimage . . . 393
• Plug-ins and Add-ons . . . 394
• Toolbars and Shelves . . . 395
• Custom and Proxy Parameters . . . 396
• Scripts . . . 399
• Key Maps . . . 400
• Other Customizations . . . 401


Welcome to Autodesk® Softimage®!

Softimage is a powerful 3D system that integrates modeling, animation, simulation, compositing, and rendering into a single, seamless environment. Softimage incorporates many standard 3D tools and functions, but goes far beyond that in terms of tool sophistication and artistic control.

The Interface

Softimage’s interface is laid out to give you both a large viewing area and easy access to all the tools you need, all the time. You can easily resize any panel or viewport in the Softimage interface, as well as customize the layout to exactly what you want.

Modeling

The modeling tools are designed for creating and editing seamless animated models of any sort. Softimage offers many tools for creating, editing, and deforming polygons and subdivision surfaces, as well as NURBS curves and surfaces.

Animation

Softimage provides you with a complete set of both low-level and high-level animation tools. All the fundamental low-level tools are there: keyframing, the fcurve editor, the dopesheet, constraints, linked parameters, and expressions. You can also layer keyframe animation on top of existing animation, such as motion capture (mocap) data.
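The low-level toolset revolves around function curves (fcurves): a parameter's keys store frame and value pairs, and the curve supplies the in-between values. The sketch below illustrates that idea only. It is not Softimage's actual fcurve API, and it assumes simple linear interpolation, whereas real fcurves also offer spline and stepped segments.

```python
# Minimal fcurve sketch: keys are (frame, value) pairs, and evaluating
# between keys interpolates linearly. Values clamp outside the key range.

def evaluate_fcurve(keys, frame):
    """Linearly interpolate a sorted list of (frame, value) keys."""
    keys = sorted(keys)
    if frame <= keys[0][0]:
        return keys[0][1]          # clamp before the first key
    if frame >= keys[-1][0]:
        return keys[-1][1]         # clamp after the last key
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

# Keys on an object's X position: frame 1 -> 0.0, frame 50 -> 10.0
keys = [(1, 0.0), (50, 10.0)]
print(evaluate_fcurve(keys, 25.5))  # halfway between the keys -> 5.0
```

Editing keys in the fcurve editor amounts to moving these pairs around and changing how the curve interpolates between them.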

Shape animation is achieved using a number of techniques and tools, including the popular and easy-to-use shape manager.

For high-level animation, you have the animation mixer which lets you mix, transition, and combine all forms of animation, shapes, and audio in a nonlinear and non-destructive manner.
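The mixing idea is easiest to picture with shape keys: each key stores per-point offsets from the base geometry, and weights control how much of each key contributes. The sketch below uses made-up point data and none of Softimage's actual cluster or mixer machinery; it only shows the weighted-sum concept.

```python
# Toy shape-key mixing: add each shape's per-point deltas onto the base
# positions, scaled by that shape's weight. Data here is hypothetical.

def mix_shapes(base_points, shape_deltas, weights):
    """Blend weighted shape-key deltas onto the base point positions."""
    result = []
    for i, point in enumerate(base_points):
        x, y, z = point
        for name, deltas in shape_deltas.items():
            w = weights.get(name, 0.0)
            dx, dy, dz = deltas[i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        result.append((x, y, z))
    return result

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
shapes = {"smile": [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0)]}
print(mix_shapes(base, shapes, {"smile": 0.5}))
# -> [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0)]
```

Animating the weights over time is what turns a set of stored shapes into shape animation.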

Character Animation

Building and animating characters is fully supported with all the regular animation tools, as well as special character tools such as skeletons that use inverse kinematics, envelopes and weight maps, and easy-to-create character rigs and rigging tools. As well, you can retarget any type of animation, including mocap data, to any type of rig.
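Inverse kinematics computes joint rotations from a goal position. A minimal planar two-bone solve using the law of cosines gives the flavor of the math; Softimage's IK handles full 3D chains, joint limits, and preferred angles, none of which appear in this sketch.

```python
import math

# Toy 2-bone planar IK: given two bone lengths and a 2D target, return
# the two joint angles. Out-of-reach targets clamp to full extension.

def two_bone_ik(l1, l2, tx, ty):
    """Return (shoulder, elbow) angles in radians for a planar chain."""
    d = min(math.hypot(tx, ty), l1 + l2)
    # Law of cosines gives the elbow's interior angle; clamp for safety.
    cos_elbow = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder = direction to target minus the triangle's inner angle.
    cos_inner = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow

# Fully extended along X: both angles come out near zero.
s, e = two_bone_ik(1.0, 1.0, 2.0, 0.0)
print(round(s, 6), round(e, 6))
```

FK is the opposite direction: you set the angles and the chain's end lands wherever the geometry puts it.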

The Face Robot module lets you make faces in a unique way. You first set up a facial rig by going through several simple stages. Once the facial rig is created, you can animate the facial controls and sculpt and tune the soft facial tissue using a special set of tools.

Copyright © 2005 by Paramount Pictures Corporation and Viacom International Inc. All Rights Reserved. Nickelodeon, Barnyard and all related titles, logos and characters are trademarks of Viacom International Inc.


Simulation

You can simulate almost any kind of natural, or unnatural, phenomena you can think of using rigid bodies, soft bodies, or cloth — or grow some hair! Simulation-type objects can then be influenced by forces and collisions to create simulated animations.
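The basic loop behind such simulations is simple to sketch: apply forces, integrate velocity and position each frame, and respond to collisions. Below is a toy explicit-Euler version with a single gravity force and a ground plane. It is nothing like Softimage's production rigid-body or cloth solvers; it only shows the shape of the computation.

```python
# Bare-bones simulation step: gravity accelerates the velocity, the
# velocity moves the position, and hitting the ground plane y = 0
# reflects the vertical velocity with some energy loss.

GRAVITY = (0.0, -9.8, 0.0)

def step(pos, vel, dt, bounce=0.5):
    """Advance one time step with a simple ground-plane collision."""
    vel = tuple(v + g * dt for v, g in zip(vel, GRAVITY))
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    if pos[1] < 0.0:                       # collided with the ground
        pos = (pos[0], 0.0, pos[2])
        vel = (vel[0], -vel[1] * bounce, vel[2])
    return pos, vel

pos, vel = (0.0, 5.0, 0.0), (1.0, 0.0, 0.0)
for _ in range(30):                        # simulate 30 frames at 1/30 s
    pos, vel = step(pos, vel, 1.0 / 30.0)
print(pos[1] < 5.0)  # the object has fallen under gravity -> True
```

Real solvers replace each of these steps with something far more robust, but the force-integrate-collide cycle is the same.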

ICE: Interactive Creative Environment

ICE is a visual programming environment available directly within the Softimage interface. Using a node-based data tree format, you can modify how any tool works, create custom tools and effects, and see the results interactively, all without scripting a line of code. ICE is currently used mostly for creating particle and deformation effects.
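The node-based idea itself is easy to picture: each node computes a value from the outputs of the nodes plugged into it, and evaluation pulls data through the tree. The toy data-flow evaluator below shares nothing with the real ICE runtime beyond that concept; the node names and operations are invented for illustration.

```python
# Toy data-flow graph: each node is (kind, payload). A "const" node
# carries a value; other nodes name an operation and their input nodes.

OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def evaluate(graph, node):
    """Recursively pull a node's value through its inputs."""
    kind, payload = graph[node]
    if kind == "const":
        return payload
    inputs = [evaluate(graph, name) for name in payload]
    return OPS[kind](*inputs)

# (strength + 1) * speed, wired as a small tree of five nodes
graph = {
    "strength": ("const", 2.0),
    "speed": ("const", 3.0),
    "one": ("const", 1.0),
    "sum": ("add", ("strength", "one")),
    "out": ("mul", ("sum", "speed")),
}
print(evaluate(graph, "out"))  # -> 9.0
```

Rewiring the connections changes the result without rewriting any code, which is the appeal of working visually.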

Using ICE trees, you can create almost any type of particle effect you want. You can make natural phenomena, such as smoke, fire, and rain, but you can also make objects or characters act in a simulated environment: rocks tumbling, glass pieces breaking, grass or hair growing, or humans running about.

Shaders and Texturing

Using a graphical node-based connection tool called the render tree, you can create an unlimited range of materials by connecting any type of shader to any object. You can also project 2D and 3D textures into texture spaces, which can then be manipulated like a 3D object.
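At the heart of most surface shaders in a network like this is a diffuse term. As a rough illustration (real Softimage and mental ray shaders add specular, transparency, texture inputs, and much more), Lambert diffuse is just the albedo scaled by the cosine between the surface normal and the light direction.

```python
# Lambert diffuse: per-channel color = albedo * max(0, N dot L).
# Vectors are plain 3-tuples; this is a concept sketch, not a shader API.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def lambert(albedo, normal, light_dir):
    """Diffuse response of a surface point to one light."""
    n, l = normalize(normal), normalize(light_dir)
    n_dot_l = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * n_dot_l for c in albedo)

# Light shining straight down the surface normal: full albedo
print(lambert((0.8, 0.2, 0.2), (0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))
# -> (0.8, 0.2, 0.2)
```

In the render tree, the albedo input would typically be fed by a texture node rather than a constant color.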

Rendering

Drawing upon the integration of mental ray® rendering technology, Softimage offers full-resolution, interactive rendering, caustics, global illumination, and motion blur, not only for the final render, but also within a render region that can be drawn in any Softimage viewport. The render region renders everything in Softimage, letting you adjust your render parameters at any stage of modeling or animating, even during playback.

As well, you can embed unlimited render passes into a single scene and, for each pass, generate multiple rendered channels such as specular or reflections. Softimage’s render passes and render channels are extremely easy to create, customize, and edit.

Painting and Compositing

Softimage has a built-in compositor, called Softimage Illusion. Softimage Illusion is designed to edit textures and image-based lighting in real time. You can use it to rough out final shots, touch up your textures, morph, warp and rig images, create custom mattes, and tweak the results of a multi-pass render, all within Softimage.


About this Guide

This guide provides an overview of the main features, tools, and workflows of Softimage, helping you get a headstart in understanding and using the software:

• If you’re new to Softimage, it gives you a foot in the proverbial Softimage door. You may be new to 3D, or just new to Softimage but familiar with other 3D software packages. Either way, you can skim through this guide and quickly see what’s possible in Softimage, as well as discover what the different tools and elements are called.

• If you’re an old hand at Softimage, this guide may provide you with a quick start for areas of Softimage that you’ve never needed to use before. For example, if modeling is your thing and now you have to do some animation, this guide can help you get a sense of what’s possible in animation and what tools you can use.

This guide has been updated for Softimage 2010, but because it covers the fundamental concepts and workflows of Softimage, the information it contains will apply to Softimage well beyond this version.

If you’re eager to take Softimage for a spin, there’s enough information in this guide to get you started without needing to do more homework. Many workflow overviews are included, as well as command names that tell you where to find things.

Remember that all the detailed information and procedures are covered in the Softimage User’s Guide and the Softimage SDK Guide available from the Help menu on the main menu bar in Softimage (or press the F1 key): we’ve just filtered out the main goodies for you here.

Now, go fire up Softimage and have some fun!

The Softimage Documentation Team


Section 1

Introducing Softimage

New to Softimage? Take a quick guided tour through the interface and basic operations.

What you’ll find in this section ...

• The Softimage Interface

• Getting Commands and Tools

• Setting Values for Properties

• Working with Views

• Working in 3D Views

• Exploring Your Scene


The Softimage Interface

Welcome to your new home—the Softimage interface. The interface is composed of several toolbars and panels surrounding the viewports that display the elements in your scene. Each part of the interface is designed to help you accomplish different aspects of your work.

The image below shows the default layout. Take a minute to become familiar with the names and locations of the parts of the interface. You can toggle parts of the standard layout using View > Optional Panels. Other layouts are available from the View > Layouts menu. You can even create your own layout for a customized workflow.

Softimage has many preferences for many tools, editors, and working methods (choose File > Preferences). If you want to change something, chances are there’s a preference for it!

[Screenshot of the default interface layout, with its parts labeled A to G as described below.]


Sample Content

Softimage ships with a sample database XSI_SAMPLES containing scenes, models, presets, scripts, and other goodies. Open a Softimage file browser (View > General > Browser or press 5 at the top of the keyboard), then click Paths and choose Sample Project.

A Title bar

Displays the version of Softimage, your license type, and the name of the open project and scene.

B Viewports

Lets you view the contents of your scene in different ways. You can resize, hide, and mute viewports in any combination.

See Working with Views on page 21 for details.

C Main menu bar

D Main Toolbar

Contains commands and tools for different aspects of 3D work. Press 1 for the Model toolbar, 2 for Animate, 3 for Render, 4 for Simulate, and Ctrl+2 for Hair. You can also access these controls from the main menu bar.

For more information about other controls that can be displayed in this area, see The Main Toolbar, Weight Paint Panel, and Palette Toolbar on page 16 and Switching Toolbars on page 16.

E Icons

Switch between toolbar and other panels, or choose viewport presets.

See The Main Toolbar, Weight Paint Panel, and Palette Toolbar on page 16 as well as Viewport Presets on page 22 for details.

F Main command panel (MCP)

Contains frequently used commands grouped by category. Switch between the MCP, KP/L, and MAT panels using the tabs at lower right.

See The MCP, KP/L, and MAT Panels on page 17 for details.

G Lower interface controls

The controls at the bottom of the interface include a command box, script editor icon, the mouse/status line, the timeline, the playback panel, and the animation panel.

Page 16: XSI guia basica

Section 1 • Introducing Softimage


Getting Commands and Tools

There are several different types of menus in Softimage. Each menu typically contains a mixture of commands and tools:

• Commands have an immediate effect on the scene, for example, duplicating the selected object.

• Tools activate a mode that requires mouse interaction, for example, selecting elements, translating an object, orbiting the camera, or drawing polygons and curves. A tool stays active until you deactivate it by pressing Esc or by activating a different tool.

Menu Buttons

Buttons with a triangle open up a menu of commands and tools. You can middle-click on a menu button to repeat the last action you performed on that menu.

Context Menus

You can right-click on elements in the views to open a menu with items that relate to the element under the mouse pointer. This is a quick and convenient way to access commands and tools, for example, when modeling.

• In the explorer or schematic view, right-click on an element to open its context menu.

• In a 3D view, Alt+right-click (Ctrl+Alt+right-click on Linux) on an object to open its context menu, or on the background to open the Camera View menu.

• When object components like points, polygons, or edges are selected, right-click anywhere on the object for the selected components’ context menu. Right-click anywhere else for the Camera View menu.

• Some tools like the Tweak Components tool have their own right-click menus with options specific for each tool.

The Main Toolbar, Weight Paint Panel, and Palette Toolbar

The three buttons at the lower left switch between the main toolbar, the weight paint panel, and the palette:

• The main toolbar is where you’ll do most of your work.

• The weight paint panel contains a specialized layout for editing envelope weights. See The Weight Paint Panel on page 182.

• The palette contains some wire color and display mode presets, as well as a custom toolbar where you can store custom commands.

Switching Toolbars

The main toolbar on the left side of the interface can display categories for modeling, animation, rendering, simulation, and hair. You can switch between these categories by clicking on the toolbar’s title as shown at right, or by pressing 1, 2, 3, 4, or Ctrl+2 (use the number keys at the top of the keyboard, not on the numeric keypad).

If you prefer, you can also access the same commands from the main menu bar:

[Screenshot: the buttons for switching between the main toolbar, palette, and weight paint panel.]

Page 17: XSI guia basica


The MCP, KP/L, and MAT Panels

The three tabs at the bottom of the panel on the right side of the interface switch between the MCP, KP/L, and MAT panels:

• MCP is the main command panel. It is divided into sub-panels with controls for selection, transformation, constraints, snapping, and editing. The tools and commands available here are described in context throughout this guide.

• KP/L contains the keying panel as well as controls for working with animation and scene layers. See Keying Parameters in the Keying Panel on page 147, Layering Animation on page 159, and Scene Layers on page 49.

• MAT is the material panel. It provides similar controls to the texture layer editor, but in a different arrangement. See Texture Layers on page 338.

Collapsing MCP Panels

You can collapse panels in the MCP by right-clicking on their main menu buttons. To expand a collapsed panel, simply right-click on it again. This is useful when working on small monitors, like on laptops.

Tearing Off Menus

To tear off a menu, click on the dotted line at the top of a menu or submenu and drag to any area in the interface.

The menu stays open in a floating window until you close it.

Hotkeys: Sticky or Supra

Using hotkeys, tools can be activated in either of two modes:

• Sticky: Press and release the key quickly. The tool stays active until you activate a different tool or press Esc.

• Supra: Press and hold the key to temporarily override the current tool. The new tool stays active only while the key is held down. When you release the key, the previous tool is reactivated.
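The two activation modes can be pictured with a small state sketch. This is purely illustrative: the threshold value and tool names are hypothetical, not Softimage's internal implementation.

```python
STICKY_THRESHOLD = 0.3  # seconds; hypothetical cutoff for a "quick" press


class ToolHotkeys:
    """Illustrative sketch of sticky vs. supra hotkey activation."""

    def __init__(self, default_tool="select"):
        self.current = default_tool
        self.previous = None

    def key_down(self, tool):
        # Pressing a tool's hotkey always activates it immediately.
        self.previous = self.current
        self.current = tool

    def key_up(self, held_for):
        if held_for < STICKY_THRESHOLD:
            # Sticky: a quick press-and-release leaves the tool active.
            self.previous = None
        else:
            # Supra: releasing after a hold restores the previous tool.
            self.current = self.previous
```

A quick tap of a hotkey leaves the new tool active, while a long hold reverts to whatever tool was active before.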

Repeating Commands and Tools

Press . (period) to repeat the last command, and press , (comma) to reactivate the last tool (other than selection, navigation, or transformation).

Page 18: XSI guia basica


Setting Values for Properties

Property editors are where you’ll find an element’s properties. They are a basic tool that you use constantly to define and modify elements in a scene. Select an object or property and press Enter to open its property editor, or click its icon in an explorer. In addition to property editors, you can enter values in many of the text boxes in the main command panel, such as the Transform panel, and use virtual sliders to change values for marked parameters in the explorer.

[Screenshot: a property editor, with callouts A to L marking the controls described below.]

A The title bar of the property shows the name of the element being edited. When multiple elements are selected for editing, the title bar shows “multi”.

B Control how property editors update:

• Focus updates only for properties of the same type when other elements are selected.

• Recycle updates with the properties of the currently selected elements.

• Lock does not update when other elements are selected.

C Click the key button to set or remove a key on all parameters in all property sets in the editor.

Right-click the key button to access a menu of commands that affect all parameters.

Use the arrows to move to the previous or next key on any parameter.

D The arrow buttons move along the sequence of property editors (up a level, previous, and next).

E Revert changes, or save and load presets.

F Use the tabs to quickly move between different property sets in an editor.

Click the triangle to collapse a property set (like Scene Material in this picture) or expand it (like Phong).

For help on the parameters in a property set, click the corresponding help icon (?).

G Within a property set like Phong, tabs switch between groups of parameters.

H The animation icon shows if and how the parameter is animated.

Click to set or remove a key.

Right-click to access animation commands for that parameter.

I Drag a slider to change values.

To change R, G, and B values simultaneously for a color, press Ctrl while dragging any one of them.

Page 19: XSI guia basica


Entering Values Outside of Slider Ranges

Many parameters with sliders let you set values outside of the slider range. For example, the range of the Local Transform property editor’s Position sliders is between -50 and +50, but objects can be much farther from their parent’s origin than that.

If a parameter supports values outside of the slider range, you can set such values by typing them into the associated numeric box or by pressing Alt while using the virtual slider tool.

When you set a value outside the slider range, the displayed range automatically expands to twice the current value. For example, if the default range of a parameter is between 0 and 10 and you set the value to 15, the new range is 0 to 30. However, the change is not permanent—if you set the parameter to a value within the default range and then close and reopen the property editor, the displayed range is back to its default.
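The range-expansion rule can be sketched as follows. This is an illustration of the rule as just described, not Softimage's code:

```python
def expanded_range(default_min, default_max, value):
    """Expand a slider's displayed range to twice an out-of-range value.

    The expansion is display-only: reopening the property editor with an
    in-range value restores the default range.
    """
    lo, hi = default_min, default_max
    if value > hi:
        hi = 2 * value
    elif value < lo:
        lo = 2 * value
    return lo, hi
```

For example, a 0-to-10 slider set to 15 displays a 0-to-30 range, matching the example above.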

Virtual Sliders

Virtual sliders let you do the job of a slider without having to open up a property editor. Select one or more objects, mark the desired parameters, then press F4 and middle-drag in a 3D view. Use Ctrl, Shift, and Ctrl+Shift to change increments, and Alt to extend beyond the slider’s display range.

J Type a numerical value in a text box to change the parameter’s values precisely. You can sometimes enter values beyond the slider range.

• Drag the mouse in a circular motion over the text box to change values (scrubbing). Scrub clockwise to increase and counterclockwise to decrease.

• Increment values using [ and ]. Ctrl and Shift change the increment size. For example, press Ctrl+] to increment by 10. You can also press Ctrl or Shift with the arrow keys to change values by increments.

• Enter relative values with the addition (+), subtraction (-), multiplication (*), and division (/) symbols after the value. For example, 2- decreases the value by 2. On the other hand, -2 enters negative two.

• With multiple elements, use l(min, max) for a linear range, r(min, max) for random values, and g(mean, var) for a normal distribution.
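The relative-entry syntax and the l(min, max) multi-element fill can be sketched like this (an illustration of the entry behavior described above, not the actual parser):

```python
def apply_relative(current, entry):
    """'2-' subtracts 2, '2+' adds 2, '2*' multiplies, '2/' divides.

    A leading sign is part of an absolute value, so '-2' just means -2.
    """
    entry = entry.strip()
    if len(entry) > 1 and entry[-1] in "+-*/":
        n = float(entry[:-1])
        return {"+": current + n, "-": current - n,
                "*": current * n, "/": current / n}[entry[-1]]
    return float(entry)


def linear_fill(lo, hi, count):
    """Sketch of l(min, max): spread values linearly over the elements."""
    if count == 1:
        return [lo]
    step = (hi - lo) / (count - 1)
    return [lo + i * step for i in range(count)]
```

So with three selected objects, an l(0, 10)-style fill would assign evenly spaced values from 0 to 10 across them.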

K Click a color box to open the color editors, from which you can pick or define the colors you want. See Color Editors on page 20.

You can copy colors by dragging and dropping one color box onto another.

Click the label below the box to cycle the color space for the sliders through RGB, HLS, and HSV.

L The connection icon links a parameter value to a shader, weight map, or texture map which modulates it.

Click the icon to inspect the connected element, or right-click for options.

Page 20: XSI guia basica


Color Editors

Instead of using the RGB color sliders, you can click on a color box to open a color editor.

A To set a color, click in the color area and then adjust it using the slider. To select which color components appear in the color area and which one appears on the slider, click the “>” button.

B The color box on the left shows the previous color for reference.

C The color box on the right shows the current color.

D Use the numeric boxes to set color values precisely. To select a color model, click the “>” button.

[Screenshot: a color editor, with callouts A to L marking the controls described below.]

E To pick a color:

• Click the color picker button (the eyedropper) and click anywhere in the Softimage window. This tool can be especially useful when trying to match a color in the Image Clip editor.

• On Windows systems, you can click outside of the Softimage window to pick a color, even though the mouse pointer does not show that the color picker is active outside of the window. This does not work on Linux systems, but you can import an image clip and load it into the Image Clip editor as a workaround.

• To cancel the color picker, click the right mouse button.

The color picker takes the color you see on the screen rather than the true color of the objects. There may be rounding errors because most display adapters have only 256 levels for each of the RGB channels.

F Click on the browse (...) button to open the full color editor, where you can use additional controls.

G Click the palette button to choose a preset color.

H Click the “>” button to open the menu shown.

I The Color Area commands specify the configuration of the color area and slider.

J The Numeric Entry commands select the color model for the numeric boxes.

K The Normalized option specifies whether numeric values are represented as real numbers in the range [0.0–1.0] or as integers in the range [0, 255].

L The Gamma Correction option toggles gamma correction display for all color controls in the color editor.
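The two numeric representations in the Normalized option map onto each other with a simple conversion, sketched here for a single channel:

```python
def to_integer(channel):
    """Convert a normalized [0.0, 1.0] channel to an integer in [0, 255]."""
    return round(channel * 255)


def to_normalized(value):
    """Convert an integer [0, 255] channel to a normalized float."""
    return value / 255
```

For example, a normalized value of 1.0 corresponds to the integer 255, and 0.0 to 0.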

Page 21: XSI guia basica


Working with Views

Views provide a window into the current scene, whether they display a 3D view of geometric objects such as in the Camera view or a hierarchical view of the data such as in the explorer. Views can be displayed docked in a viewport, or floating in separate windows.

Views Docked in the Viewports

There are four viewports in the view manager at the center of the default Softimage layout. Each viewport is identified by a letter. When you start Softimage, viewport A (top left) shows the Top orthographic view, viewport B (top right) shows the Camera perspective view, viewport C (bottom left) shows the Front orthographic view, and viewport D (bottom right) shows the Right orthographic view.

Switching Views in the Viewports

You can change the view displayed by a viewport using the menu on the left of its title bar. Middle-click to display the previous view.

The 3D views show the geometry of your scene and include:

• Any cameras that are present in your scene.

• The orthographic Top, Front, and Right views.

• The User view, which is not a real camera but an extra perspective view that you can navigate in without modifying your main camera setup or its animation.

• Any spotlights that are present in your scene.

• The Object view, which shows the selected object in isolation.

See Working in 3D Views on page 23.

The other views include alternative representations of your scene data such as the explorer or the schematic views (see Exploring Your Scene on page 32), as well as tools for specialized tasks.

Resizing Viewports

Viewports can be resized, maximized, or expanded vertically and horizontally. Drag the horizontal and vertical splitter bars (or their intersection) to resize the viewports. Middle-click the bars to reset them.

Page 22: XSI guia basica


Use the Resize icon at the right of a viewport’s toolbar to maximize, expand, and restore:

• Left-click to maximize a viewport, or restore a maximized viewport. Alternatively, press F12 while the pointer is over the viewport.

• Middle-click to expand or restore horizontally.

• Ctrl+middle-click to expand or restore vertically.

• Right-click on the Resize icon to open a menu as shown.

Viewport Presets

Instead of switching views and resizing viewports manually, you can use the buttons at the lower left to display various preset combinations.

Muting and Soloing Viewports

The letter identifier in the upper-left corner of the title bar allows you to mute and solo viewports. Muting a viewport’s neighbors helps speed up its refresh rate.

• Middle-click the letter to mute the viewport. A muted viewport does not update until you un-mute it. The letter of a muted viewport is displayed in orange. Middle-click the letter again to un-mute the viewport.

• Click the letter to solo the viewport. Soloing a viewport mutes all the others. The letter of a soloed viewport is displayed in green. Click the letter again to un-solo the viewport.
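The mute/solo behavior described above amounts to a small amount of per-viewport state, sketched here purely as an illustration (not Softimage code):

```python
class ViewportStates:
    """Track muted viewports; soloing one mutes all the others."""

    def __init__(self, letters=("A", "B", "C", "D")):
        self.muted = {letter: False for letter in letters}

    def toggle_mute(self, letter):
        # Middle-clicking a letter mutes or un-mutes that viewport.
        self.muted[letter] = not self.muted[letter]

    def solo(self, letter):
        # Clicking a letter solos it: every other viewport is muted.
        for other in self.muted:
            self.muted[other] = (other != letter)
```

Soloing viewport B, for example, leaves B updating while A, C, and D stop refreshing.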

To control how viewports update when playing back animation, see Selecting a Viewport for Playback on page 143.

Floating Views

You can open views as floating windows using the first group of submenus on the View menu. Some floating views also have shortcut keys. Depending on the type of view, you can have multiple windows of the same type open at the same time.

You can adjust floating windows in the usual ways:

• To move a window, drag its title bar.

• To resize a window, drag its borders.

• To bring a window to the front and display it on top of other windows, click in it.

• To close a window, click x in the top right corner.

• To minimize a window, click _ in the top right corner.

You can cycle through all open windows, whether minimized or not, using Ctrl+Tab. Use Shift+Ctrl+Tab to cycle backwards.

You can collapse a floating view by double-clicking on its title bar. When collapsed, only the title bar is visible and you can still move it around by dragging. To expand a collapsed view, double-click on the title bar again; the view is restored at its current location.

A Word about the Active Window

The active window is always the one directly under the mouse pointer—it’s the one that has “focus” and accepts keyboard and mouse input even if it is not on top.

For example, you can open a floating explorer window, then move the pointer over the camera viewport and press F to frame the selected elements. If you pressed F while the pointer was still over the explorer, the list would have expanded and scrolled to find the next selected object.

Be careful that you don’t accidentally send commands to the wrong window.

Page 23: XSI guia basica


Working in 3D Views

3D views are where you view, edit, and manipulate the geometric elements of your scene.

[Screenshot: a viewport title bar, with callouts A to H marking the controls described below.]

A Viewport letter identifier: Click to solo the viewport or middle-click to mute it.

B Views menu: Choose which view to display in the viewport.

C Memo cams: Store up to 4 views for quick recall. Left-click to recall, middle-click to save, Ctrl+middle-click to overwrite, and right-click to clear.

D Camera icon menu: Navigate and frame elements in the scene.

E Eye icon menu (Show menu): Specify which object types, components, and attributes are visible in the viewports. Hold down the Shift key to keep the menu open while you choose multiple options.

F XYZ buttons: Click X to view the right side, Y to view the top side, and Z to view the front side. Middle-click to view the left, bottom, and back sides, respectively. These commands change the viewpoint, but unlike the Top, Front, and Right views selected from the Views menu, you can still orbit afterwards. Click again to return to the previous viewpoint.

G Display Mode menu: Specifies how scene elements are displayed: wireframe, shaded, and other options.

H Resize icon: Resizes viewports to full-screen, horizontal, or vertical layouts.

• Click to maximize and restore.

• Middle-click to expand and restore horizontally.

• Ctrl+middle-click to expand and restore vertically.

• Right-click for a menu.

Page 24: XSI guia basica


Types of 3D Views

There are many ways to view your scene in the 3D views. These viewing modes are available from the Views menu in viewports and from the View menu in the object view.

Except for camera views, all of the viewing modes are “viewpoints”. Like camera views, viewpoints show you the geometry of objects in a scene. They can be previewed in the render region, but they cannot be rendered to file like camera views.

Camera Views

Camera views let you display your scene in a 3D view from the point of view of a particular camera.

The Render Pass view is also a camera view: it shows the viewpoint of the camera associated with the current render pass. Only a camera associated with a render pass is used in a final render.

Spotlight Views

Spotlight views let you select from a list of spotlights available in the scene. Selecting a spotlight switches the active 3D view to that spotlight's point of view, set according to the direction of its light cone.

Top, Front, and Right Views

The Top, Front, and Right views are parallel projection views, called such because the object’s projection lines do not converge in these views. Because of this, the distance between an object and the viewpoint has no influence on the scale of the object. If one object is close and an identical object is farther away, both appear to be the same size.

The Top, Front, and Right views are also orthographic, which means that the viewpoint is perpendicular (orthogonal) to specific planes:

• The Top view faces the XZ plane.

• The Front view faces the XY plane.

• The Right view faces the YZ plane.

You cannot orbit the camera in an orthographic view.
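The difference between parallel and perspective projection can be seen in a minimal sketch. The focal length of 1.0 is an arbitrary assumption for illustration:

```python
def perspective_project(x, y, z, focal=1.0):
    # Perspective: projected size shrinks as distance z from the camera grows.
    return (focal * x / z, focal * y / z)


def parallel_project(x, y, z):
    # Parallel (orthographic): depth z has no effect on projected size.
    return (x, y)
```

An object twice as far away appears half as big in a perspective view, but exactly the same size in a parallel projection, which is why identical objects at different depths match up in the Top, Front, and Right views.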

User View (Viewports Only)

The User view is a viewpoint that shows objects in a scene from a virtual camera's point of view, but is not actually linked to a scene camera or spotlight.

The User point of view can be placed at any position and at any angle. You can orbit, dolly, zoom, and pan in this view. It’s useful for navigating the scene without changing the render camera’s position and zoom settings.

[Illustrations of the Top, Front, and Right orthographic views.]

Page 25: XSI guia basica


The Object View

The object view is a 3D view that displays only the selected scene elements. It has standard display and show menus, and works the same way as any 3D view in most respects. Selection, navigation, framing, and so on work as they do in any viewport. There are also some custom viewing options, available from the object view’s View menu, that make it easier to work with local 3D selections.

To open the object view, do one of the following:

• From any viewport’s views menu, choose Object View.

or

• From the main menu, choose View > General > Object View.

[Screenshot: the object view toolbar, with callouts A to G marking the controls described below.]

A View menu: Choose the viewpoint to display, and set various viewing options. This is similar to the viewports’ Views menu, but includes special viewing controls for the object view.

B Show menu (equivalent to the eye icon menu): Specify which object types, components, and attributes are visible in the viewports. Hold down the Shift key to keep the menu open while you choose multiple options.

C Memo cams: Store up to 4 views for quick recall. Left-click to recall, middle-click to save, Ctrl+middle-click to overwrite, and right-click to clear.

D XYZ buttons: Click X to view the right side, Y to view the top side, and Z to view the front side. Middle-click to view the left, bottom, and back sides, respectively. These commands change the viewpoint, but unlike the Top, Front, and Right views in viewports, you can still orbit afterwards. Also unlike in the viewports, these are not temporary overrides: you cannot click them again to return to the previous viewpoint.

E Lock: Prevent the view from updating when you select a different object in another view. Click again to unlock.

F Update: Refresh the view if it is locked.

G Display Mode menu: Specifies how scene elements are displayed: wireframe, shaded, and other options.

Page 26: XSI guia basica


Navigating in 3D Views

In 3D views, a set of navigation controls and shortcut keys lets you control the viewpoint. You can use these controls and keys to zoom in and out, frame objects, as well as orbit, track, and dolly among other things.

Activating Navigation Tools

Most navigation tools have a corresponding shortcut key so you can quickly activate them from the keyboard. However, some tools are only available from a viewport’s camera icon menu. In either case, activating a navigation tool makes it the current tool for all 3D views, including object views which do not have an equivalent to the camera icon menu.

After you activate a tool, check the mouse bar at the bottom of the Softimage interface to see which mouse button does what.

The navigation tools and commands, with their shortcut keys:

Zoom [mouse wheel]: By default, zooms in and out in various views and editors.

You can control how the mouse wheel is used for zooming in your Tools > Camera preferences.

Navigation [S]: Combines the most common navigation tools:

• Pan (track) with the left mouse button.

• Dolly with the middle mouse button.

• Orbit with the right mouse button.

In your Tools > Camera preferences, you can change the order of the mouse buttons as well as remap this tool to the Alt key.

Selecting navigation tools from the camera icon menu activates them for all 3D views.

Pan/Zoom [Z]: Moves the camera laterally, or changes the field of view:

• Pan (track) with the left mouse button.

• Zoom in with the middle mouse button.

• Zoom out with the right mouse button.

In your Tools > Camera preferences, you can activate Zoom On Cursor to center the zoom wherever the mouse pointer is located.

Rectangular Zoom [Shift+Z]: Zooms onto a specific area:

• Draw a diagonal with the left mouse button to fit the corresponding rectangle in the view.

• Draw a diagonal with the right mouse button to fit the current view in the corresponding rectangle.

In perspective (non-orthographic) views, rectangular zoom activates pixel zoom mode, which offsets and enlarges the view without changing the camera's pose or field of view.

Orbit [O]: Rotates a camera, spotlight, or user viewpoint around its point of interest. This is sometimes called tumbling or arc rotation.

• Use the left mouse button to orbit freely.

• Use the middle mouse button to orbit horizontally.

• Use the right mouse button to orbit vertically.

In your Tools > Camera preferences, you can set Orbit Around Selection.

Dolly [P]: Moves the camera forward and back. Use the different mouse buttons to dolly at different speeds. In orthographic views, dollying is equivalent to zooming.

Roll [L]: Rotates a perspective view along its Z axis. Use the different mouse buttons to roll at different speeds.

Frame [F]: Frames the selected elements in the view under the mouse pointer.

Frame (All Views) [Shift+F]: Frames the selected elements in all open views.

Frame All [A]: Frames the entire scene in the view under the mouse pointer.

Frame All (All Views) [Shift+A]: Frames the entire scene in all open views.

Center [Alt+C]: Centers the selected elements in the view under the mouse pointer. Centering is similar to framing, but without any zooming or dollying. The camera is tracked horizontally and vertically so that the selected elements are at the center of the viewport.

Center (All Views) [Shift+Alt+C]: Centers the selected elements in all open views.

Reset [R]: Resets the view under the mouse pointer to its default viewpoint.

Page 27: XSI guia basica

In addition to the above, there are other tools available on the camera icon menu, such as pivot, walk, fly, and so on.

Undoing Camera Navigation

As you navigate in a 3D view, you may want to undo one or more camera moves. Luckily, there is a separate camera undo stack that lets you undo navigation in 3D views.

To undo a camera move, press Alt+Z. To redo an undone camera move, press Alt+Y.

Display Modes

You can display scene objects in different ways by choosing various display modes from a 3D view's Display Mode menu. The Display Mode menu always displays the name of the current display mode, such as Wireframe.

Wireframe

Shows the geometric object as its edges only, drawn as lines resembling a model made of wire. All edges are displayed, without removing hidden parts or filling surfaces.

Bounding Box

Reduces all scene objects to simple cubes. This speeds up the redrawing of the scene because fewer details are calculated in the screen refresh.

Page 28: XSI guia basica


Depth Cue

Applies a fade to visible objects, based on their distance from the camera, in order to convey depth. You can set the depth cue range to the scene, selection, or a custom start and end point. Objects within the range fade as they near the edge of the range, while objects completely outside the range are made invisible. You can also display depth cue fog to give a stronger indication of fading.
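The fade can be modeled as a simple linear falloff over the depth cue range. This is an illustration of the behavior described, not the renderer's actual code, and the linear falloff is an assumption:

```python
def depth_cue_factor(distance, start, end):
    """Visibility factor for an object at a given camera distance.

    Returns 1.0 before the range start, fades linearly toward 0.0 at the
    range end, and 0.0 (invisible) for anything beyond the range.
    """
    if distance <= start:
        return 1.0
    if distance >= end:
        return 0.0
    return 1.0 - (distance - start) / (end - start)
```

With a range of 10 to 20, for example, an object at distance 15 is shown at half strength, and one at distance 25 is invisible.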

Hidden Line Removal

Shows only the edges of objects that are facing the camera. Edges that are hidden from view by the surface in front of them are not displayed.

Constant

Ignores the orientation of surfaces and instead considers them to be pointing directly toward an infinite light source. All the object’s surface triangles are considered to have the same orientation and be the same distance from the light. This results in an object that appears to have no shading.

This mode is useful when you want to concentrate on the silhouettes of objects.

Shaded

Provides an OpenGL hardware-shaded view of your scene that shows shading, material color, and transparency, but not textures, shadows, reflections, or refraction. By default, selected objects have their wireframes superimposed, making it easy to manipulate points and other components.

Page 29: XSI guia basica


Textured

Similar to Shaded, but also shows image-based textures (not procedural textures).

Textured Decal

This is like the Textured viewing mode, but textures are displayed with constant lighting. The net effect is a general “brightening” of your textures and an absence of shadow. This allows you to see a texture on any part of an object regardless of how well that part is lit.

Realtime Shaders

Evaluates the realtime shaders that have been applied to objects. In the example shown here, the same textures have been used as for the non-realtime shaders, so the result is similar to the textured mode. Several realtime display modes are available, depending on your graphics card:

• OpenGL: displays realtime shader attributes for objects that have been textured using OpenGL realtime shaders.

• Cg: displays realtime shader attributes for objects that have been textured using Cg realtime shaders as well as Softimage’s Cg-compatible MetaShaders.

• DirectX: displays realtime shader attributes for objects that have been textured using DirectX realtime shaders.

Page 30: XSI guia basica


Rotoscopy

Rotoscopy is the use of images in the background of the 3D views. You can use rotoscopy in different 3D views (Front, Top, Right, User, Camera, etc.) and any display mode (Wireframe, Shaded, etc.). Furthermore, you can use different images for each view.

• Single images are useful as guides for modeling in the orthographic views.

• Image sequences or clips are useful for matching animation with footage of live action in the perspective views.

To load an image in a view, choose Rotoscope from the Display Mode menu and select an image and other options.

There are two types of rotoscoped images:

• By default, rotoscoped images in perspective views have Image Placement set to Attached to Camera. This means that they follow the camera as it moves and zooms so that you can match animation with live action plates.

• On the other hand, rotoscoped images that are displayed in the orthographic views (Front, Top, and Right) have the Image Placement option set to Fixed by default. This allows you to navigate the camera while modeling without losing the alignment between the image and the modeled geometry.

Fixed images are sometimes called image planes, and they can be displayed in all views, not just the one for which they were defined.

Navigating with Images Attached to the Camera

Normally, when a rotoscoped image or sequence is attached to the camera, it is fully displayed in the background no matter how the camera is zoomed, panned, or framed. However, you can activate Pixel Zoom mode if you need to maintain the alignment between objects in the scene and the background, for example, if you want to temporarily zoom into a portion of the scene.

[Illustrations of rotoscoped image placement: Attached to Camera, Fixed, and Pixel Zoom.]

Page 31: XSI guia basica


In Pixel Zoom mode, you can:

• Zoom (Z + middle or right mouse button, S + middle mouse button)

• Pan (Z + left mouse button, S + left mouse button)

• Frame (F for selection, A for all)

The original view is restored when you exit Pixel Zoom mode. Be careful not to orbit, dolly, roll, pivot, or track because these actions change the camera’s transformations and will not be undone when you deactivate Pixel Zoom.

Setting Viewing Options and Preferences

There are several places you can go to set options and preferences related to viewing.

Colors

You can modify scene, element, and component colors (such as the viewport background) by choosing Scene Colors from any viewport’s camera icon menu. For instance, by default a selected object is displayed in white and an unselected object is displayed in black; points are displayed in blue, knots are displayed in brown, and so on.

Camera and 3D Views Display

You can set display options to control how cameras and views display scene objects. These camera display options can be set for individual 3D views, or for all 3D views at once.

• To open an individual 3D view’s Camera Display property editor, choose Display Options from any viewport or object view’s Display Mode menu.

• To open the Camera Display property editor for all 3D views, choose Display > Display Options (all cameras) from the main menu.

Object Visibility

Each object in the scene has its own set of visibility controls that allow you to control how objects appear in the scene, or whether they appear at all, as well as how shadows, reflections, transparency, final gathering, and other attributes are rendered.

For example, you may wish to temporarily exclude objects from a render but retain them in the scene. This can come in handy when you are working with complex objects and want to reduce lengthy refresh times.

You can open an object’s Visibility property editor from the explorer by clicking the Visibility icon in the object’s hierarchy.

Object Display

You can control how individual objects are displayed in a 3D view. Giving an object or objects different display characteristics is particularly useful in heavily animated scenes.

For example, if you want to tweak a static object within a scene that has a complex animated character, you could set the character in wireframe display mode while adjusting the lighting of your static object in shaded mode.

You can open an object’s Display property editor from the explorer by clicking the Display icon in the object’s hierarchy.

The ability to view different objects in different display modes works only when you turn off Override Object Properties in a view’s Display Mode menu.

Section 1 • Introducing Softimage

Exploring Your Scene

Three of the most important tools for exploring your scene are the explorer, the quick filter box, and the schematic.

The Explorer

The explorer displays the contents of your scene in a hierarchical structure called a tree. This tree can show objects as well as their properties as a list of nodes that expand from the top root. You normally use the explorer as an adjunct while working in Softimage, for example, to find or select elements.

To open an explorer in a floating window, press 8 at the top of the keyboard, or choose View > General > Explorer from the main menu.
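The parent-to-child structure the explorer walks can be modeled with a small sketch (illustrative only; the node names and the print_tree helper are hypothetical, not the Softimage object model):

```python
# Minimal sketch (not the Softimage API): the explorer's tree as
# parent -> children relations, printed with indentation per depth.
def print_tree(node, children, depth=0):
    """Return the node and all its descendants as indented lines."""
    lines = [("  " * depth) + node]
    for child in children.get(node, []):
        lines.extend(print_tree(child, children, depth + 1))
    return lines

# Hypothetical scene: the root owns a camera, a light, and a sphere
# that carries two property nodes.
scene = {
    "Scene_Root": ["Camera", "light", "sphere"],
    "sphere": ["Visibility", "Kinematics"],
}
print("\n".join(print_tree("Scene_Root", scene)))
```

Expanding and collapsing the tree in the explorer amounts to showing or hiding deeper levels of exactly this structure.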


A Scope of elements to view. See Setting the Scope of the Explorer on page 33.

B Viewing and sorting options.

C Filters for displaying element types. See Filtering the Display on page 33.

D Lock and update. This works only when the scope is set to Selection.

E Search by name, type, or keyword.

F Expand and collapse the tree.

G Click an icon to open its property editor.

H Click a name to select. Use Shift to select ranges and Ctrl to toggle-select.

Middle-click to branch-select.

Right-click for a context menu.

I You can pan the view by dragging up and down in an empty area within the explorer.

You can also use the mouse wheel to scroll up and down. First make sure the explorer has focus by clicking anywhere in the explorer.


Keeping Track of Selected Elements

If you have selected objects, their nodes are highlighted in the explorer. If their nodes are not visible, choose View > Find Next Selected Node. The explorer scrolls up or down to display the first object node in the order of its selection. Each time you choose this option, the explorer scrolls up or down to display the next selected node. After the last selected item, the explorer goes back to the first.

Choose View > Track Selection if you want to automatically scroll the explorer so that the node of the first selected object is always visible.

Setting the Scope of the Explorer

The Scope button determines the range of elements to display. You can display entire scenes, specific parts, and so on.

The Selection option in the explorer’s scope menu isolates the selected object. If you click the Lock button with the Selection option active, the explorer continues to display the property nodes of the currently selected objects, even if you go on to select other objects in other views. When Lock is on, you can also select another object and click Update to lock on to it and update the display.

Filtering the Display

Filters control which types of nodes are displayed in the explorer. For example, you can choose to display objects only, or objects and properties but not clusters or parameters, and so on. By displaying exactly the types of elements you want to work with, you can find things more quickly without scrolling through a forest of nodes.

The basic filters are available on the Filters menu (between the View menu and the Lock button). The label on the menu button shows the current filter. The filters that are available on the menu depend on the scope. For example, when the scope is Scene Root, the Filters menu offers several different preset combinations of filters, followed by specific filters that you can toggle on or off individually.
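The idea of preset filter combinations built from individual toggles can be sketched like this (the preset names and node types below are illustrative, not Softimage's actual filter list):

```python
# Sketch of explorer filtering: a preset is a union of node-type
# toggles; only nodes whose type passes the active filter are shown.
PRESETS = {
    "Objects Only": {"object"},
    "Objects and Properties": {"object", "property"},
    "All": {"object", "property", "cluster", "parameter"},
}

def filter_nodes(nodes, preset):
    """Keep only nodes whose type is enabled by the chosen preset."""
    allowed = PRESETS[preset]
    return [name for name, node_type in nodes if node_type in allowed]

nodes = [("sphere", "object"), ("Visibility", "property"),
         ("point_cluster", "cluster"), ("posx", "parameter")]
filter_nodes(nodes, "Objects and Properties")  # ['sphere', 'Visibility']
```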

A Click the Scope button to select the range of elements to view.

B The current scope is indicated by the button label. It is also bulleted in the list.

C The bold item in the menu indicates the last selected scope. Middle-click the Scope button to quickly select this view.


Preset display filter combinations.

Individual display filter toggles.


Other Explorer Views

You can view other smaller versions of the explorer (pop-up explorers) elsewhere in the interface. They are used to view the properties of selected scene elements.

Select Panel Explorer

Explorer filter buttons in the Select panel offer a shortcut by instantly displaying filtered information on specific aspects of currently selected objects.

The Explore button opens a pop-up menu of additional filters for specifying the type of information you wish to obtain on the scene.

Click outside a pop-up explorer to close it.

Object Explorers

You can quickly display a pop-up explorer for a single object—just select the object and press Shift+F3. If the object has no synoptic property or annotation, you can simply press F3. Click outside the pop-up explorer or press those keys again to close it.

The Quick Filter Box

The Quick Filter box on the main Softimage menu bar lets you find scene objects by name.

A Explorer filter buttons

1 Example: Click the Selection filter button...

2 ...to display a pop-up explorer showing all property nodes associated with the selected object.

A Enter part of the name to search for. Softimage waits for you to pause typing before it displays the search results. You can continue typing to modify the search string, and the updated results will be displayed when you pause again.

Softimage finds the elements that contain the search string anywhere in their names (substring search). Strings are not case-sensitive.

Alternatively, you can also use wildcards and a subset of regex (regular expressions) just like in the explorer.

B Recall a recent search string.

C Clear the search string and close the search results.

D Open the floating Scene Search window with the current search and additional options.



The Schematic View

The schematic view presents the scene in a hierarchical structure so that you can analyze the way a scene is constructed. It includes graphical links that show the relationships between objects, as well as material and texture nodes to indicate how each object is defined.

To open a schematic view in a floating window, press 9 at the top of the keyboard, or choose View > General > Schematic from the main menu.

• Press the spacebar and click to select nodes. Use the left mouse button for node selection, the middle mouse button for branch selection, and the right mouse button for tree selection.

• Press M to click and drag nodes to new locations. The schematic remembers the location of nodes, so you can arrange them as you please.

• Press S or Z to pan and zoom.

Relationships between elements are displayed as lines called links. You can display or hide links for different types of relationship using the Show menu.

You can also click a parent-child link to select the child. This is useful if you have located the parent but can’t find the child in a jumbled hierarchy. Again, use the left, middle, or right mouse buttons to select the child in node, branch, or tree modes.

When other types of link are displayed, you can click and drag across the link to select the corresponding operator, such as a constraint or expression. When a link is selected, you can press Enter to open the property editor related to the associated relationship (if applicable), or press Delete to remove the operator.

E The search results are listed here. They obey the current settings in the Scene Search view for sorting and name/path display.

•To select an element, click on it.

•To select a range of elements, click on the first one and then Shift+click on the last one.

•To toggle-select an element, Ctrl+click on it.

•To deselect an element, Ctrl+Shift+click on it.

•To rectangle-select a range of elements, click in the background first and then drag across the elements to select. This is easier if only names are displayed, rather than paths.

•To select all elements found, press Ctrl+A.

•To rename the selected elements, press F2.

•Right-click on any element for a context menu. If you right-click on a selected element, then some commands apply to all selected elements.

F To dismiss the list of results, click anywhere outside the pop-up or press Escape.
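The Quick Filter's matching rules described above (case-insensitive substring search, with optional wildcards) can be sketched as follows (quick_filter is a hypothetical helper; Softimage's own matcher also accepts a subset of regular expressions, which this sketch omits):

```python
import fnmatch

# Sketch of the Quick Filter's matching: a plain string matches
# anywhere in a name, case-insensitively; a query containing
# wildcards is matched against the whole name instead.
def quick_filter(names, query):
    q = query.lower()
    if any(ch in q for ch in "*?[]"):
        # wildcard search over the whole name
        return [n for n in names if fnmatch.fnmatchcase(n.lower(), q)]
    # case-insensitive substring search
    return [n for n in names if q in n.lower()]

names = ["Sphere", "sphere_copy", "Camera_Root", "spot_light"]
quick_filter(names, "sph")    # ['Sphere', 'sphere_copy']
quick_filter(names, "s*t")    # ['spot_light']
```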



A Scope: Show the entire scene, the current selection, or the current layer.

B Edit: Access navigation and selection commands.

C Show: Set filters that specify which elements to display.

D View: Set various viewing options.

E Memo cams: Store up to 4 views for quick recall. Left-click to recall, middle-click to save, Ctrl+middle-click to overwrite, and right-click to clear.

F Lock: Prevent the view from updating when you select a different object in another view (if Scope = Selection). Click again to unlock.

G Update: Refresh the view if it is locked.

H To select a node, click its label. Middle-click to branch-select and right-click to tree-select.

To open a node’s property editor, click its icon or double-click its label.

Alt+right-click (Ctrl+Alt+right-click on Linux) on a node to open a context menu for the node.

Press F2 to rename the selected node.

I Alt+right-click (Ctrl+Alt+right-click on Linux) in an empty area to quickly access a number of viewing and navigation commands.


Section 2

Elements of a Scene

This section provides a guide to the objects, properties, and components you will find in Softimage scenes, and describes some of the workflows for working with them.

What you’ll find in this section ...

• What’s in a Scene?

• Selecting Elements

• Objects

• Properties

• Components and Clusters

• Parameter Maps


What’s in a Scene?

Scenes contain objects. In turn, objects can have components and properties.

Objects

Objects are elements that you can put in your scene. They have a position in space, and can be transformed by translating, rotating, and scaling. Examples of objects include lights, cameras, bones, nulls, and geometric objects. Geometric objects are those with points, such as polygon meshes, surfaces, curves, particles, hair, and lattices.

Components

Components are the subelements that define the shape of geometric objects: points, edges, polygons, and so on. You can deform a geometric object by moving its components. Components can be grouped into clusters for ease of selection and other purposes.

Properties

Properties control how an object looks and behaves: its color, position, selectability, and so on. Each property contains one or more parameters that can be set to different values.

Properties can be applied to elements directly, or they can be applied at a higher level and passed down (propagated) to the children elements in a hierarchy.

Element Names

All elements have a name. For example, if you choose Get > Primitive > Polygon Mesh > Sphere, the new sphere is called sphere by default, but you can rename it if you want. In fact, it’s a good idea to get into the habit of giving descriptive names to elements to keep your scenes understandable. You can see the names in the explorer and schematic views, and you can even display them in the 3D views.

You can typically name an element when you create it. You can rename an object at any time by choosing Rename from a context menu or pressing F2 in the explorer or schematic.

Softimage restricts the valid characters in element names to a–z, A–Z, 0–9, and the underscore (_) to keep them variable-safe for scripting. You can also use a hyphen (-) but it is not recommended. Invalid characters are automatically converted to underscores. In addition, element names cannot start with a digit; Softimage automatically adds an underscore at the beginning. If necessary, Softimage adds a number to the end of names to keep them unique within their namespace.
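These cleanup rules can be sketched in a few lines (legal_name is an illustrative helper mimicking the behavior described above, not the XSI API):

```python
import re

# Sketch of the name-cleanup rules described above: invalid
# characters become underscores, a leading digit gets an underscore
# prefix, and a numeric suffix keeps names unique.
def legal_name(name, existing=()):
    name = re.sub(r"[^A-Za-z0-9_]", "_", name)   # invalid chars -> _
    if name and name[0].isdigit():               # cannot start with a digit
        name = "_" + name
    unique, n = name, 1
    while unique in existing:                    # append a number if taken
        unique = name + str(n)
        n += 1
    return unique

legal_name("my sphere!")                 # 'my_sphere_'
legal_name("3rd_wheel")                  # '_3rd_wheel'
legal_name("cube", existing={"cube"})    # 'cube1'
```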

Points on different geometry types: polygon mesh, curve, surface, and lattice.


Selecting Elements

Selecting is fundamental to any software program. In Softimage, you select objects, components and other elements to modify and manipulate them.

In Softimage, you can select any object, component, property, group, cluster, operator, pass, partition, source, clip, and so on; in short, just about anything that can appear in the explorer. The only elements that you can't select are individual parameters, which are marked for animation instead of being selected.

Overview of Selection

To select an object in a 3D or schematic view, press the space bar and click on it. Use the left mouse button for single objects (nodes), the middle mouse button for branches, and the right mouse button for trees and chains.

To select components, first select one or more geometric objects, then press a hotkey for a component selection mode (such as T for rectangle point selection), and click on the components. Use the middle mouse button for clusters.

For elements with no predefined hotkey, you can manually activate a selection tool and a selection filter.

In all cases:

• Shift+click adds to the selection.

• Ctrl+click toggle-selects.

• Ctrl+Shift+click deselects.

• Alt lets you select loops and ranges. You can use Alt in combination with Shift, Ctrl, and Ctrl+Shift.

A Select menu: Access a variety of selection tools and commands.

B Select icon: Reactivates the last active selection tool and filter.

C Filter buttons: Select objects or their components, such as points, curves, etc.

D Object Selection and Sub-object Selection text boxes: Enter the name of the object and its components you want to select. You can use * and other wildcards to select multiple objects and properties.

E Explore menu and explorer filter buttons: Display the current scene hierarchy, current selection, or the clusters or properties of the current selection.

These buttons are particularly useful because they display pre-filtered information but don’t take up a viewport.


F Group/Cluster button: Selects groups and clusters.

G Center button: Not used for selection.

H Hierarchy navigation: Select an object’s sibling or parent.

Page 40: XSI guia basica

Section 2 • Elements of a Scene

40 • Softimage

Selection Tools

To select something in the 3D views, a selection tool must be active. Softimage offers a choice of several selection tools, each with a different mouse interaction: Rectangle, Lasso, Raycast, and others. The choice of selection tool is partly a matter of personal preference, and partly a matter of what is easiest or best to use in a particular situation. They are all available from the Select > Tools menu or hotkeys.

Rectangle Selection Tool

Rectangle selection is sometimes called marquee selection. You select elements by dragging diagonally to define a rectangle that encompasses the desired elements.

Raycast Selection Tool

The Raycast tool casts rays from under the mouse pointer into the scene—elements that get hit by these rays as you click or drag the mouse are affected. Raycast never selects elements that are occluded by other elements.

Lasso Selection Tool

The Lasso tool lets you select one or more elements by drawing a free-form shape around them. This is especially useful for selecting irregularly shaped sets of components.

Freeform Selection Tool

The Freeform tool lets you select elements by drawing a line across them. This is particularly useful for selecting a series of edges when modeling with polygon meshes, or for selecting a series of curves in order for lofting or creating hair from curves, as well as in many other situations.
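The geometric difference between two of these tools can be sketched in 2D (a minimal illustration, not Softimage code: rectangle selection reduces to a bounding-box test, while lasso selection is a point-in-polygon test against the drawn shape):

```python
# Sketch contrasting two selection tools: rectangle selection is a
# bounding-box test; lasso selection is a point-in-polygon test
# against the free-form shape you draw (even-odd ray casting).
def in_rectangle(p, corner_a, corner_b):
    (x, y), (ax, ay), (bx, by) = p, corner_a, corner_b
    return min(ax, bx) <= x <= max(ax, bx) and min(ay, by) <= y <= max(ay, by)

def in_lasso(p, polygon):
    """Even-odd rule: count edge crossings of a ray cast from p."""
    x, y = p
    inside = False
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray's height
            cross_x = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < cross_x:
                inside = not inside
    return inside

in_rectangle((2, 2), (0, 0), (5, 5))        # True
in_lasso((1, 1), [(0, 0), (4, 0), (0, 4)])  # True (inside the triangle)
```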

Selection Hotkeys

space bar: Select objects with the Rectangle selection tool, in either supra or sticky mode.

E: Select edges with the Rectangle selection tool, in either supra or sticky mode.

T: Select points with the Rectangle selection tool, in either supra or sticky mode.

Y: Select polygons with the Rectangle selection tool, in either supra or sticky mode.

U: Select polygons with the Raycast selection tool, in either supra or sticky mode.

I: Select edges with the Raycast selection tool, in either supra or sticky mode.

' (apostrophe): Select hair tips with the Rectangle selection tool, in either supra or sticky mode.

F7: Activate the Rectangle selection tool using the current filter.

F8: Activate the Lasso selection tool using the current filter.

F9: Activate the Freeform selection tool using the current filter.

F10: Activate the Raycast selection tool using the current filter.

Shift+F10: Activate the Rectangle-Raycast selection tool using the current filter.

Ctrl+F7: Activate the Object filter with the current selection tool.

Ctrl+F8: Activate the Point filter with the current selection tool.

Ctrl+F9: Activate the Edge filter with the current selection tool.

Ctrl+F10: Activate the Polygon filter with the current selection tool.

Alt+space bar: Activate the last-used selection filter and tool.


Rectangle-Raycast Tool

The Rectangle-Raycast selection tool is a mixture of the Rectangle and Raycast tools. You select by dragging a rectangle to enclose the desired elements, like the Rectangle tool. Elements that are occluded behind others in Hidden Line Removal, Constant, Shaded, Textured, and Textured Decal display modes are ignored, like the Raycast tool.

Paint Selection Tool

The Paint selection tool lets you use a brush to select components. It is limited to selecting points (on polygon meshes and NURBS), edges, and polygons. The brush's radius controls the size of the area selected by each stroke; you can adjust it interactively by pressing R and dragging to the left or right. Use the left mouse button to select and the right mouse button to deselect. Press Ctrl to toggle-select.

Selection Filters

Selection filters determine what you can select in the 3D and schematic views. You can restrict the selection to a specific type of object, component, or property. Press Shift while activating a new filter to keep the current selection, allowing you to select a mixture of component types.

Selection and Hierarchies

You can select objects in hierarchies in several ways: node, branch, and tree.

Node Selection

Left-click to node-select an object. Node selection is the simplest way in which an object can be selected. When you node-select an object, only it is selected. If you apply a property to a node-selected object, that property is not inherited by its descendants.

A Selection filter buttons: Select objects or their components in the 3D views. The component buttons are contextual: they change depending on what type of object is currently selected.

B Click the triangle for additional filters.

C Click the bottom button to re-activate the last filter.


Effect of node-selecting Object.


Branch Selection

Middle-click to branch-select an object. When you branch-select an object, its descendants “inherit” the selection status and are highlighted in light gray. You would branch-select an object when you want to apply a property that gets inherited by all the object’s descendants.

Tree Selection

Right-click to tree-select an object. This selects the object’s topmost ancestor in branch mode. For kinematic chains, right-clicking will select the entire chain.

Selecting Ranges and Loops of Components

Use the Alt key to select ranges or loops of components. Softimage tries to find a path between two components that you pick. In the case of ranges, it selects all components along the path between the picked components. In the case of loops, it extends the path, if possible, and selects all components along the entire path.

• For polygon meshes, you can select ranges or loops of points, edges, or polygons. Several strategies are used to find a path, but priority is given to borders and quadrilateral topology.

• For NURBS curves and surfaces, you can select ranges or loops of points, knots, or knot curves. Points and knots must lie in the same U or V row. In addition, paths and loops stop at junctions between subsurfaces on assembled surface meshes.
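The path-finding idea behind range selection can be sketched as a shortest-path search (the helper and the toy adjacency graph of components are illustrative; Softimage's real strategies, such as favoring borders and quadrilateral topology, are not modeled):

```python
from collections import deque

# Sketch of range selection: given a component adjacency graph,
# selecting a range picks every component on a path between the
# anchor and the end component (breadth-first search).
def range_select(adjacency, anchor, end):
    """BFS from anchor to end; return the components on the path."""
    previous = {anchor: None}
    queue = deque([anchor])
    while queue:
        node = queue.popleft()
        if node == end:
            break
        for neighbor in adjacency[node]:
            if neighbor not in previous:
                previous[neighbor] = node
                queue.append(neighbor)
    path, node = [], end
    while node is not None:          # walk back along BFS parents
        path.append(node)
        node = previous[node]
    return path[::-1]

# A strip of four components connected in a row.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
range_select(adjacency, 0, 3)        # [0, 1, 2, 3]
```

Loop selection works the same way, except the path is then extended in both directions for as long as a continuation exists.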

Range Selection

Alt+click to select a range of components using any selection tool (except Paint). This allows you to select the interconnected components that lie on a path between two components you pick.

Effect of branch-selecting Object.

Effect of tree-selecting Object.

1 First specify the anchor.

2 Then specify the end component to select the range of components in-between.



1. Select the first “anchor” component normally.

2. Alt+click on the second component. Note that the anchor component is highlighted in light blue as a visual reference while the Alt key is pressed.

All components between the two components on a path become selected.

3. Use the following key and mouse combinations to further refine the selection:

- Use Shift to add individual components to the selection as usual. When you add ranges or loops with Alt+Shift, the last component added to the selection becomes the new anchor. To start a new range anchored at the end of the previous range, first reselect the last component by Shift+clicking or Alt+Shift+clicking; once you have selected a new anchor, you can Alt+Shift+click to add another range to the selection.

- Use Ctrl to toggle-select. Once you have selected a new anchor, you can Alt+Ctrl+click to toggle the selection of a range.

- Use Ctrl+Shift to deselect. Once you have selected a new anchor, you can Alt+Ctrl+Shift+click to deselect a range.

Loop Selection

Alt+middle-click to select a loop of components using any selection tool (except Paint). When you select a loop of components, Softimage finds a path between two components that you pick. It then extends the path in both directions, if it is possible, and selects all components along the extended path.

1. Do one of the following:

- Select the first “anchor” component normally, then Alt+middle-click on the second component. Note that the anchor component is highlighted in light blue as a visual reference while the Alt key is pressed.

or

- Alt+middle-click to select two adjacent components in a single mouse movement.

All components on an extended path connecting the two components become selected.

1 First specify the anchor.

2 Then specify another component to select the entire loop of components.



Note that for edges, the direction is implied, so you only need to Alt+middle-click on a single edge. However, for parallel edge loops, you still need to specify two edges as described previously.

2. Use the following key and mouse combinations to further refine the selection:

- Use Shift to add individual components to the selection as usual. When you add ranges or loops with Alt+Shift, the last component added to the selection becomes the anchor for the new loop. Once you have selected a new anchor, you can Alt+Shift+middle-click to add another loop to the selection.

- Use Ctrl to toggle-select. Once you have selected a new anchor, you can Alt+Ctrl+middle-click to toggle the selection of a loop.

- Use Ctrl+Shift to deselect. Once you have selected a new anchor, you can Alt+Ctrl+Shift+middle-click to deselect a loop.

Modifying the Selection

The Select menu has a variety of commands you can use to modify the selection. For example, among many other things, you can:

• Invert the selection.

• Grow or shrink a component selection (polygon meshes only).

• Select adjacent points, edges, or polygons.

Defining Selectability

You can make an object unselectable in the 3D and schematic views by opening up its Visibility properties and turning off Selectability. This can come in handy and speed up your workflow if you are working in a very dense scene and there are one or more objects that you don’t wish to select.

Unselectable objects are displayed in dark gray in the wireframe and schematic views. Regardless of whether an object’s Selectability is on or off, you can always select it using the explorer or using its name.

The selectability of an object can also be affected by its membership in a group or layer.


Objects

Objects can be duplicated, cloned, and organized into hierarchies, groups, and layers.

Duplicating and Cloning Objects

Duplicating an object creates an independent copy: modifying the original after duplication has no effect on the copy. Cloning creates a linked copy: modifying the geometry of the original affects the clone, but you can still make additional changes to the clone without affecting anything else. All the related commands can be found in Edit > Duplicate/Instantiate.

Duplicating Objects

When an object is duplicated, the original and its duplicates can be modified separately with no effect on each other.

To duplicate an object, select it and choose Edit > Duplicate/Instantiate > Duplicate Single or press Ctrl+D. The object is duplicated using the current options and the copy is immediately selected. You may need to move it away from the original. By default, any transformation you apply is remembered for the next duplicate.

To make multiple copies, choose Edit > Duplicate/Instantiate > Duplicate Multiple or press Ctrl+Shift+D. Specify the number of copies and the incremental transformations to apply to each one.

Other commands in the Edit > Duplicate/Instantiate menu let you duplicate symmetrically, from animation, and so on.
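The way Duplicate Multiple accumulates incremental transformations can be sketched numerically (duplicate_multiple and the step values are illustrative; real copies carry full transforms, not just a height and an angle):

```python
# Sketch of Duplicate Multiple: each copy applies the incremental
# transform on top of the previous copy's. Here each step rises
# 0.5 units and turns 30 degrees, like a spiral staircase.
def duplicate_multiple(copies, step_y, step_deg):
    """Return (height, angle_in_degrees) for each generated copy."""
    transforms = []
    y, angle = 0.0, 0.0
    for _ in range(copies):
        y += step_y          # incremental translation accumulates
        angle += step_deg    # incremental rotation accumulates
        transforms.append((y, angle % 360))
    return transforms

duplicate_multiple(5, 0.5, 30)
# [(0.5, 30.0), (1.0, 60.0), (1.5, 90.0), (2.0, 120.0), (2.5, 150.0)]
```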

Example: Applying multiple transformations to duplicated objects

1 Select the object (a step) to be duplicated and transformed.

2 With the step selected, press Ctrl+Shift+D. Specify 5 copies and a transformation to apply to each.

3 Result: Five copies of the original step are generated, with each duplicate translated, rotated, and scaled to give the appearance of a flight of spiral stairs.

Note: The center of the step was repositioned to the right so that the step could be rotated along its right edge.


Cloning Objects

When an object is cloned, editing the original object affects all the clones but editing one of the clones has no effect on the others.

You can clone objects using the Clone commands on the Edit > Duplicate/Instantiate menu.

Clones are displayed in the explorer with a cyan c superimposed on the model icon. In the schematic view, they are represented by trapezoids with the label Cl.

Hierarchies

Hierarchies describe the relationship between objects, usually using a combination of parent-child and tree analogies, as you do with a family tree. Objects can be associated to each other in a hierarchy for a number of reasons, such as to make manipulation easier, to propagate applied properties, or to animate children in relation to a parent. For example, the parent-child relationship means that any properties applied to the parent (in branch mode) also affect the child.

In a hierarchy there is a parent, its children, its grandchildren, and so on:

• A root is a node at the base of either a branch or the entire tree.

• A tree is the whole hierarchy of nodes stemming from a common root.

• A branch is a subtree consisting of a node and all its descendants.

• Nodes with the same parent are called siblings.
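The three selection modes map naturally onto this terminology; a minimal sketch (the hierarchy and helper functions are illustrative, not the Softimage API):

```python
# Sketch of the three selection modes in a hierarchy: node selects
# one object, branch selects an object plus its descendants, and
# tree selects everything from the topmost ancestor down.
parent_of = {"hip": None, "spine": "hip", "arm_L": "spine", "arm_R": "spine"}

def children_of(node):
    return [c for c, p in parent_of.items() if p == node]

def branch_select(node):
    """The node and all of its descendants."""
    selected = [node]
    for child in children_of(node):
        selected += branch_select(child)
    return selected

def tree_select(node):
    """Climb to the root, then branch-select from there."""
    while parent_of[node] is not None:
        node = parent_of[node]
    return branch_select(node)

branch_select("spine")   # ['spine', 'arm_L', 'arm_R']
tree_select("arm_L")     # ['hip', 'spine', 'arm_L', 'arm_R']
```

A property applied in branch mode would propagate to exactly the nodes branch_select returns.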

Clone in the explorer. Clone in the schematic view.


Creating Hierarchies

You can create a hierarchy by selecting an object and activating the Parent tool from the Constrain panel (or pressing the / key). Click on another object to make it the child of the selected object, or middle-click to make the selected object the child of the picked object. Continue picking objects or right-click to exit the tool.

You can also create hierarchies by dragging and dropping in the explorer:

In the schematic, you can create a hierarchy by pressing Alt while dragging a node onto a new parent.

Cutting Links in a Hierarchy

You will often need to cut the hierarchical link between a parent and its child or children. If the child is also a parent, the links to its own children are not affected.

Select the child and click Cut in the Constrain panel, or press Ctrl+/. A cut object becomes a child of its model; if an object is cut from its model, it becomes a child of the parent model.

Deleting an Object in a Hierarchy

If you delete an object with children, it is replaced by a null with the same name in order to preserve the hierarchy structure. Deleting this null just replaces it with another one. To get rid of the object completely, first cut its children if you want to keep them, or branch-select the object to remove it together with its children.

Groups

You can organize 3D objects, cameras, and lights into groups for the purpose of selection, applying operations, assigning properties and shaders, and attaching materials and textures.

For example, you can add several objects to a group, and then apply a property like Display, Geometry Approximation, or a material to the group. The group's properties override the members' own.

Besides being able to organize objects into groups, you can also create a group of groups. An object can be a member of more than one group. Groups, however, can't be placed in hierarchies: they can only live immediately beneath the scene root or a model.

1 Make the ball_child a child of the ball_parent by dropping its node onto the ball_parent’s node.

2 The ball_child is now under the ball_parent’s node.


In Softimage, groups are a tool for organizing and sharing properties.

• If you are familiar with Autodesk® Maya® and want to use groups to control transformations, for example, in a character rig, use transform groups instead.

• If you are familiar with Autodesk® 3ds Max®, note that you don’t need to open a group to select its members individually. You can always select either the group as a whole or any of its members.


Creating Groups

To create a group, select some objects and click Group in the Edit panel or press Ctrl+G. In the Group property editor, enter a name for your group and set the View and Render Visibility, Selectability, and Animation Ghosting options.

All selected objects are grouped together. In the explorer, you can see the group with all its members within it.

Selecting Groups

You can select groups in the 3D and schematic views using the Group selection button or the = key. Note that the Group button changes to the Cluster button when a component filter is active.

Once a group is selected, you can select all its members using Select > Select Members/Components. The members of the group are selected as multiple objects. If you want to select a single member of a group, simply select it normally in any 3D, explorer, or schematic view.

Adding and Removing Elements from Groups

To add objects to a group, select the group and add the objects you want to the selection. In the Edit panel, click the + button (next to the Group button). You can also drag objects onto a group in an explorer view.

If an object is a member of just one group, you can ungroup it by just selecting it and clicking the – button (next to the Group button). If an object is a member of multiple groups, you must select the group to remove it from before selecting the object. Alternatively, use the context menu in the explorer.

Removing Groups

You can remove a group by selecting it and pressing Delete. When you delete groups, only the group node and its properties are deleted, not the member objects themselves.

Group selection (or use = key)

Add to Group

Remove from Group

Right-click on name of object within the group to be removed and choose Remove from Group.


Scene Layers

Scene layers are containers — similar to groups or render passes — that help you organize, view, display, and edit the contents of your scene. For example, you can put different objects into different scene layers and then hide a particular layer when you don’t want to see that part of your scene. Or you might want to make a scene layer’s objects unselectable if the scene is getting too complex to select objects accurately. You can create as many layers as your scene requires.

The main differences between a scene layer and a group are that every object is a member of a layer (the default layer, if you haven't created any others) and that an object cannot belong to more than one layer.
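The contrast with groups can be sketched in a few lines of illustrative Python. The class and method names below are hypothetical, not part of the Softimage object model; the point is only the membership rules: exactly one layer per object, any number of groups.

```python
# Toy model of layer vs. group membership (illustrative names, not the SDK).
class Scene:
    def __init__(self):
        self.layer_of = {}   # object -> its single layer
        self.groups = {}     # group name -> set of member objects

    def add_object(self, obj):
        self.layer_of[obj] = "default"      # every object starts in the default layer

    def move_to_layer(self, obj, layer):
        self.layer_of[obj] = layer          # replaces the previous layer membership

    def add_to_group(self, obj, group):
        self.groups.setdefault(group, set()).add(obj)   # no exclusivity for groups

s = Scene()
s.add_object("sphere")
s.move_to_layer("sphere", "props")          # now in "props", no longer in "default"
s.add_to_group("sphere", "shiny")
s.add_to_group("sphere", "background")      # can be in many groups at once
print(s.layer_of["sphere"], sorted(s.groups))  # props ['background', 'shiny']
```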

Scene Layer Attributes

Each scene layer has four main attributes: viewport visibility, rendering visibility, selectability, and animation ghosting. You can activate or deactivate each of these attributes for each layer in the scene. Scene layers can also have custom properties such as wireframe color and geometry approximation.

Scene Layers in the Explorer

You can view and edit scene layers in the explorer. This is most useful when you wish to move several objects between layers, since you can quickly drag and drop them from one layer to another.

The Scene Layer Manager

The scene layer manager is a grid-style view from which you can quickly view and edit all of the layers in a scene. You can use the layer control to do things like add objects to — or remove them from — layers, create new scene layers, toggle scene layer attributes, select objects in a scene layer, and so on.

To open the scene layer manager in a floating window, press 6 at the top of the keyboard, or choose View > General > Scene Layer Manager from the main menu. The scene layer manager is also available on the KP/L panel.

A • The Layers menu contains commands for creating layers, moving selected objects into the current layer, and so on. Other commands are available by right-clicking in the grid.

• The View menu contains various display preferences, including how layers should be sorted and which columns are visible. Press and hold Shift to keep the menu open while you toggle multiple items.

B Scene layers are represented as indented rows. Right-click anywhere in the row for various commands that affect the corresponding layer.



C The current layer is indicated by a green background and a double chevron. To make a layer current, click in the leftmost column of the corresponding row.

D Scene layer groups are represented as rows with a light gray background. Right-click anywhere in the row for various commands that affect all layers in the group. Click the triangle at left to hide or display the rows of its individual layers.

E To rename a layer or group, double-click on its name, type a new name, and press Enter.

You can select multiple layers for certain commands by clicking on their names. To select a range, click on the first layer and then Shift+click on the last, or drag across the desired rows. To add individual layers to the selection, Ctrl+click on their rows. Note that selecting layers in the grid in this way simply selects them for certain commands in the scene layer manager—it does not affect the global scene selection.

F Scene layer attributes: wireframe color, view visibility, render visibility, selectability, and animation ghosting.

•Click in a cell to toggle its value.

•Click+drag to toggle multiple cells in a rectangular area.

•Right-click on a column heading and choose Check All or Uncheck All.

•Double-click on a color swatch to set the wireframe color and other display attributes.

G Use the cells of a layer group to control all layers in the group. You can still change the settings of individual layers afterward. When different layers in the group have different values, the cell has a light gray checkmark.

H Right-click on a column heading and choose Check All or Uncheck All.

Resize a column by dragging the borders of its heading.


Properties

A property is a set of related parameters that controls some aspect of objects in a scene.

Applying Properties

You can apply many properties using the Get > Property menu of any toolbar. This applies the default preset of a property’s parameter values to the selected objects, possibly replacing an existing version of the same property.

Editing Properties

To edit an existing property, open its property editor by clicking on the property node in an explorer. A handy way to do this is to press F3 to see a mini-explorer for the selected object, or click the Selection button at the bottom of the Select menu. You can also right-click on Selection to display properties according to type.

How Properties Are Propagated

Objects can inherit properties from many different sources. This inheritance is called propagation.

For some properties, such as Display and Geometry Approximation, an object can have only one at a time. If it inherits the same property from more than one source, the source with the highest “strength” is used.

In increasing order of strength, the possible sources of property propagation are:

• Scene Default: This is the weakest source. If an object does not inherit a property from anywhere else, it uses the scene’s default values. For example, if an object has never had a material applied to it, it uses the scene default material.

• Branch: If a parent has a property applied when it is branch-selected, its children all inherit the property.

• Local: If a child inherits a branch property from its parent, but has the same property applied directly to it, it uses its local values.

• Cluster: Materials, textures, and other properties applied to a cluster take precedence over those applied to the object.

• Group: If an object is a member of a group, then any properties applied to the group take precedence over local and branch properties. Similarly, if a cluster is a member of a group, any properties applied to the group take precedence over those applied directly to the cluster.

• Layer: Any properties applied to an object’s layer take precedence over group, local, and branch properties.

• Partition: Properties applied to a partition of a render pass have the highest priority of all when that render pass is current.
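As a sketch of how this ordering behaves, the list above can be modeled as a simple resolver. The `resolve` function and source names are illustrative only, not part of the Softimage SDK (and the sketch ignores the nuance that partitions only apply while their render pass is current):

```python
# Weakest to strongest, per the propagation order described above.
STRENGTH = ["scene_default", "branch", "local", "cluster", "group", "layer", "partition"]

def resolve(sources):
    """sources: dict mapping a propagation source to the property value it supplies.
    Returns the value from the strongest source present."""
    winner = None
    for name in STRENGTH:          # iterate weakest to strongest
        if name in sources:
            winner = sources[name]  # a stronger source overrides the current winner
    return winner

# A local material overrides an inherited branch material...
print(resolve({"scene_default": "gray", "branch": "checker", "local": "blue"}))  # blue
# ...but a layer property overrides both.
print(resolve({"branch": "checker", "local": "blue", "layer": "red"}))  # red
```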

Click Selection, then click a property icon, or right-click Selection.


For other types of properties, an object can have many at the same time. For example, an object can have several local annotations as well as several annotations inherited from different ancestors, groups, and so on.

Simple Propagation: In this sphere hierarchy, each sphere is parented to the one above it. Because the larger sphere was branch-selected when the texture was applied, every sphere beneath it inherits the checkerboard texture.

Branch Propagation: One sphere was branch-selected and given a cloud texture. The remaining sphere retains the checkerboard texture because it is on another branch.

Local Material/Texture Application: One sphere was single-selected and given a blue surface. This applies a local material/texture to the selected object only, and none of its children; the sphere's children still inherit the checkerboard texture, despite the local texture assigned to their parent.

Reverting to the Scene's Default Material: The larger sphere was single-selected and has had its material deleted. Since the other spheres can no longer inherit their texture from the parent (because it has been deleted), they revert to the scene's default gray (or another color you've defined).


Viewing Propagation in the Explorer

You can also set the following options in the explorer’s View menu:

• Local Properties displays only those properties that have been applied directly to an object.

• Applied Properties shows all properties that are active on an object, no matter how they are propagated.

Creating Presets of Property Settings

You can save property settings as a preset. Presets are data files with a .preset file extension that contain property information. Presets let you work more efficiently because you can save the modified properties and reuse them as needed, as well as transfer settings between scenes. For quick access, you can also place presets on a toolbar.

To save or load a preset, click the button at the top of a property editor. The saved preset contains values for only the parameter set currently selected on the property set tabs in the property editor. For materials and shaders, it also contains parameter settings for any connected shaders. Presets do not contain any animation—only the current parameter values are stored. If there is a render region open when you save a preset, it will be used as a thumbnail.

A Properties that are applied in branch-mode, and therefore propagated, are marked with B.

B Shared properties such as materials are shown in italics. The property’s source (where it’s propagated from) is shown in parentheses.

If no source is shown, then it is inherited from the scene root.

Save/Load Presets


Components and Clusters

Components are elements, like points and edges, that define the shape of 3D objects. Clusters are named groups of components.

Displaying Components

You can display the various component types in a specific 3D view using the individual options available from its eye icon (Show menu) or in all open 3D views using the options on the Display > Attributes menu on the main menu bar.

For more options, you can set the visibility options in the Camera Visibility property editor: click a 3D view’s eye icon (Show menu) and choose Visibility Options, or Display > Visibility Options for all open 3D views.

Note that when you activate a component selection filter, the corresponding components are automatically displayed in the 3D views.

Clusters

A cluster is a named set of components that are grouped together for a specific modeling, animation, or texturing purpose. Grouping and naming components makes it easier to work with the same components again and again. For example, by grouping all the points that form an eyebrow, you can easily deform the eyebrow as if it were an object instead of reselecting the same points each time you work with it. You can also apply operators like deformations or Cloth to specific clusters instead of an entire object.

You can define as many clusters on an object as you like, and the same component can belong to a number of different clusters.

You can define clusters for points, edges, polygons, subsurfaces, and other components. Each cluster can contain one type of component. For example, a cluster can contain points or polygons, but not both.

Creating Clusters

To create a cluster, select some components and click Cluster on the Edit panel (the Cluster button changes to Group when objects are selected). As soon as the cluster is created, it is selected and you can press Enter to open its property editor and change its name.

To create a cluster whose components aren’t already in other clusters, choose Edit > Create Non-overlapping Cluster instead. You can also use Edit > Create Cluster with Center to make a cluster with a null “center” that you can transform and animate. If you prefer to use a different object as a center, simply create a cluster and apply Deform > Cluster Center manually.

Eye icon

Spinning top with two clusters

Top

Bottom

Clusters may shift if you edit an operator in an object’s construction history and add components before the position where the cluster was created.


Adding and Removing Components from Clusters

To add components to a cluster, select the cluster and add the components you want to the selection. In the Edit panel, click the + button (next to the Cluster button).

To remove components from a cluster, select the cluster, add the components to remove to the selection, and click the – button.

Selecting Clusters

You can select clusters using the Clusters button at the bottom of the Select panel, or in any other explorer.

You can also select clusters in a 3D view when a component selection filter is active. Simply activate the Cluster button at the top of the Select panel, or press =, or use the middle mouse button while clicking on any component in the cluster.

Removing Clusters

To remove a cluster, select it and press Delete. Removing a cluster removes the group, but does not remove the individual components from the object.

Manipulating Components and Clusters

Not every type of component or cluster can be directly manipulated in Softimage. You can select and manipulate points, edges, and polygons in the 3D views, and you can select and manipulate texture UV coordinates (samples) in the texture editor.

• You can transform points, edges, and polygons in 3D space. This is a fundamental part of modeling an object’s shape.

• You can apply deformations to deform points, edges, and polygons in the same way that you apply them to objects.

• You cannot animate component and cluster transformations directly. Instead, you can use a deformer such as a cluster center or volume deformer and animate the deformer, or you can use shape animation.

When you add components to an object, any new components that are surrounded by similar components in a cluster are automatically added to the cluster.

Add to Cluster

Remove from Cluster


Parameter Maps

Certain parameters are mappable—you can vary the parameter’s value across an object’s geometry by connecting a weight map, texture map, vertex color property, or other cluster property. This allows you to, for example, control the amplitude of a deformation or the emission rate of a particle system across an object’s surface.

Mappable parameters have a connection icon in their property editors that allows you to drive the value using a map.

Which Parameters Are Mappable?

Almost any parameter with a connection icon in its property editor is mappable. These parameters include:

• Certain deformation parameters, such as Amplitude in the Push operator or Strength in the Smooth operator.

• The Multiplier parameter in the Polygon Reduction operator.

• Edge and vertex crease values.

• Various simulation parameters, such as the length and density of hair, the stiffness of cloth, and so on.

• Shapes in the animation mixer.

What Can You Connect to Mappable Parameters?

You can connect just about any cluster property to a mappable parameter. The most useful properties include the following:

• Weight maps allow you to start from a base map such as a constant value or gradient, and then paint values on top.

• Texture maps consist of an image file or sequence, and a set of UV coordinates. They are similar to ordinary textures, but are connected to parameters instead of shaders.

• Vertex color properties are color values stored at each polynode or texture sample of a geometric object.

In addition to the attributes listed above, you can connect mappable parameters to other cluster properties, including UV coordinates (texture projections), shapes, user normals, and envelope weights. While these may not always be useful for driving modeling and simulation parameters, the ability to connect to these properties may be useful for custom developers.

Connecting Maps

No matter what type of map you want to connect to a parameter, the basic procedure is the same. In a property editor, click on the connection icon of a mappable parameter and choose Connect. A pop-up explorer opens—navigate through the explorer and pick the desired map:

• Weight maps are found under the appropriate cluster.

• Texture maps are properties directly under the object. They can also be found under the appropriate cluster. Make sure you don’t accidentally select the texture projection.

• Vertex color properties are also found under the appropriate cluster.

The connection icon changes to show that a map is connected. When a map is connected, you can click on this icon to open the map’s property editor.

If you connect a map that has multiple components, like an RGBA color, to a parameter that has a single dimension, like Amplitude, you can use the options in the Map Adaptor to control the conversion.
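As a rough illustration of the kind of conversion a map adaptor performs, here is one way a four-component RGBA value could collapse to a single scalar. The mode names and weights below are assumptions for the sketch, not a list of the actual Map Adaptor options:

```python
# Illustrative RGBA -> scalar conversion (hypothetical modes, not Softimage's).
def rgba_to_scalar(r, g, b, a, mode="average"):
    if mode == "average":
        return (r + g + b) / 3.0               # simple mean of the color channels
    if mode == "luminance":
        return 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 luma weights
    if mode == "alpha":
        return a                               # use the alpha channel directly
    raise ValueError("unknown mode: %s" % mode)

print(rgba_to_scalar(1.0, 0.5, 0.0, 1.0))      # 0.5
print(rgba_to_scalar(1.0, 1.0, 1.0, 0.25, "luminance"))
print(rgba_to_scalar(0.0, 0.0, 0.0, 0.25, "alpha"))   # 0.25
```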

To disconnect a weight map, right-click on the connection icon of a connected parameter and choose Disconnect.

Connection icon

unconnected connected


Weight Maps

Weight maps are properties of point clusters on geometric objects. They associate each point in a cluster with a weight value. Each cluster can have multiple weight maps, so you can modulate different parameters on different operators in different ways.

Each weight map has its own operator stack. When you create a weight map, a WeightMapOp operator sets the base map, which can be constant or one of a variety of gradients. Then when you paint on the weight map, the strokes are added to a WeightPainter operator on top of the WeightMapOp in the stack. Like other elements with operator stacks, you can freeze a weight map to discard its history and simplify your scene data.
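The operator-stack behavior described above can be sketched with a toy model. The class below is purely illustrative (it is not the Softimage WeightMapOp or WeightPainter API), but it mirrors the idea of a base map, painted strokes layered on top, and freezing collapsing the two:

```python
# Toy model of a weight map's operator stack (names are illustrative only).
class WeightMap:
    def __init__(self, n_points, base=0.0):
        self.base = [base] * n_points   # the base map (constant or gradient)
        self.strokes = []               # painted deltas layered on top

    def paint(self, index, delta):
        self.strokes.append((index, delta))

    def values(self):
        out = list(self.base)
        for i, d in self.strokes:
            out[i] = min(1.0, max(0.0, out[i] + d))  # weights clamped to [0, 1]
        return out

    def freeze(self):
        # Collapse base + strokes into a new base; the stroke history is discarded.
        self.base, self.strokes = self.values(), []

wm = WeightMap(4, base=0.5)
wm.paint(1, 0.3)     # add weight at point 1
wm.paint(2, -0.5)    # remove weight at point 2
print(wm.values())   # [0.5, 0.8, 0.0, 0.5]
wm.freeze()          # same values, but the individual strokes are gone
```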

The following steps present a quick overview of the workflow for using weight maps.

1. Start with an object.

2. Optionally, select some points or a cluster.

3. Apply a weight map using Get > Property > Weight Map.

4. Press W to activate the Paint tool, then use the mouse to paint on the weight map.

- Press R and drag the mouse to control the brush radius.

- Press E and drag the mouse to control the opacity.

- Press Ctrl+W to open the Brush properties to set other parameters.

In the default paint mode (normal, also called additive), use the left mouse button to add paint and the right mouse button to remove weight. Press Alt to smooth.

5. Connect the weight map to drive the value of a parameter—for example in the image below, it is driving the Amplitude of a Push deformation.

• To connect maps to hair parameters, you must first transfer the maps from the emitter to the hair object.

• In the case of weight maps and deformations, you can simply select the weight map and then apply the deformation instead of manually connecting it. Since the weight map is selected by default as soon as you create it, this technique is quick and easy.

Selected cluster

Blank weight map, ready for painting

A spot of paint and it's as good as new!


6. You can reselect the weight map and continue to paint on it to modify the effect further.

Freezing Weight Maps

Weight maps can be frozen to simplify your scene’s data. Freezing collapses the weight map generator (the base constant or gradient map you chose when you created the weight map) together with any strokes you have applied.

To freeze a weight map, select it and click the Freeze button on the Edit panel. After you have frozen a weight map, you can still add new strokes but you cannot change the base map or delete any strokes you performed before freezing.

Texture Maps

Texture maps consist of an image file or sequence, and a set of UV coordinates. They are similar to ordinary textures, but are used to control operator parameters instead of surface colors.

Creating Texture Maps

To create a texture map, you select the texture projection method and then link an image file to it.

1. Apply a texture projection and texture maps to the selected object by doing one of the following:

- If the object already has a set of UV coordinates (texture projection) that you want to use, select it and choose Get > Property > Texture Map > Texture Map.

This creates a blank texture map property for the object and opens a blank Texture Map property editor in which you need to set the texture projection and select an image that will be used as the map (as described in the next steps).

or

- To create a new texture projection for the map, select the object and choose Get > Property > Texture Map > projection type (such as Cylindrical, Spherical, UV, or XZ) that is appropriate for the shape of the object.

This creates a texture map property and texture projection for the object, but doesn’t open the Texture Map property editor. Now you must open the Texture Map property editor to associate the image to this projection to use as the map (in the explorer, click the Texture Map property under the object).

If your object has multiple maps, you may need to select the desired one before you can paint on it. You can do this easily using Explore > Property Maps from the Select panel.

A slight Push is all that’s needed.

HDR images are fully supported. Floating-point values are not truncated.


2. In the Clip section of the Texture Map property editor, select an image or sequence to use as the map. If there isn’t already a clip for the desired image, click New to create one.

3. In the UV Property area beneath the image, select an existing texture projection or create a New texture projection (if there isn’t already one) that is appropriate to the shape of the object or how you want to project the mapped image.

Editing Texture Maps

To edit the UV coordinates of a texture map's projection, select the object and open the texture editor. If necessary, use the Clips menu to display the correct image and the UVs menu to display the correct projection.

If you do this, you should make sure that the operator connected to the texture map is above the modeling region of the construction history, for example, in the animation region. Otherwise, the UV edits are “above” the operator and appear to have no effect. You can move the operator back to the modeling region when you are done.


Section 3

Moving in 3D Space

Working in 3D space is fundamental to Softimage. You will use the transformation tools constantly as you model and animate objects and components.

What you’ll find in this section ...

• Coordinate Systems

• Transformations

• Center Manipulation

• Freezing Transformations

• Resetting Transformations

• Setting Neutral Poses

• Transform Setup

• Transformations and Hierarchies

• Snapping


Coordinate Systems

Softimage uses coordinate systems, also called reference frames, to describe the position of objects in 3D space.

Cartesian Coordinates

One essential concept that a first-time user of 3D computer graphics should understand is the notion of working within a virtual three-dimensional space using a two-dimensional user interface.

Softimage uses the classical Euclidean/Cartesian mathematical representation of space. The Cartesian coordinate system is based on three perpendicular axes, X, Y, and Z, intersecting at one point. This reference point is called the origin. You can find it by looking at the center of the grid in any of the 3D windows.

XYZ Axes

Softimage uses a “Y-up” system, in which the Y axis represents height. Some other applications are “Z-up”, which is worth keeping in mind if you are familiar with other software or are importing data into Softimage.

A small icon representing the three axes and their directions is shown in the corner of 3D views. The icon’s three axes are represented by color-coded vectors: red for X, green for Y, and blue for Z.

XYZ Coordinates

With the Cartesian coordinate system, you can locate any point in space using three coordinates. Positions are measured from the origin, which is at (0, 0, 0). For example, if X = +2, Y = +1, Z = +3, a point would be located to the right of, above, and in front of the origin.

XZ, XY, YZ Planes

Since you are working with a two-dimensional interface, spatial planes are used to locate points in three-dimensional space. The perpendicular axes extend as spatial planes: XZ, XY, and YZ. In the 3D views, these planes correspond to three of the parallel projection windows: Top, Front, and Right. Imagine that the XZ, XY, and YZ planes are folded together like the top, front, and right side of a box. This helps you keep a sense of orientation when you are working within the parallel projection windows.

An easy way to remember the color coding is RGB = XYZ. This mnemonic is repeated throughout Softimage: object centers, manipulators, axis controls on the Transform panel, and so on.

Location = (2, 1, 3)

Z = 3

Y = 1

X = 2

Origin


Global and Local Coordinate Systems

The location of an object in 3D space is defined by a point called its center. This location can be described in more than one way or according to more than one frame of reference. For example, the global position is expressed in relation to the scene’s origin. The local position is expressed in terms of the center of the object’s parent.

The center of an object is only a reference—it is not necessarily in the middle of the object because it can be relocated (as well as rotated and scaled). The position, orientation, and scaling (collectively known as the pose) of the object’s center defines the frame of reference for the local poses of its own children.

Softimage Units

Throughout Softimage, lengths are measured in Softimage units. How big is a Softimage unit? It is an arbitrary, relative value that can be anything you want: a foot, 10 cm, or anything else.

However, it is generally recommended that you avoid making your objects too big, too small, or too far from the scene origin. This is because rounding errors can accumulate in mathematical calculations, resulting in imprecisions or even jittering in object positions. As a general rule of thumb, an entire character should not fit within 1 or 2 units, nor exceed 1000 units.
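The rounding-error advice follows directly from how floating-point numbers work: the absolute gap between representable values grows with magnitude. A short Python check (using NumPy's float32, a common precision for graphics data) makes this concrete:

```python
# The spacing between adjacent representable float32 values grows with
# distance from zero, which is why far-away or huge objects can jitter.
import numpy as np

for pos in (1.0, 1000.0, 1e6):
    spacing = np.spacing(np.float32(pos))   # gap to the next representable value
    print(pos, float(spacing))
# Near the origin the gap is about 1e-7 units; at 1e6 units it is 0.0625 units,
# easily large enough to show up as visible popping in an animation.
```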

The Softimage units used for objects also matter when creating dynamic simulations, where objects have mass or density and are affected by forces such as gravity. For simulations, Softimage assumes that 1 unit is 10 cm by default, but you can change this by changing the strength of gravity.

Scene origin

Object and its center

Parent


Transformations

Transformations are fundamental to 3D. They include the basic operations of scaling, rotating, and translating: scaling affects an element’s size, rotation affects an element’s orientation, and translation affects an element’s position. Transformations are sometimes called SRTs.

You transform by selecting an object or components, activating a transform tool, then clicking and dragging a manipulator in a 3D view.

Local versus Global Transformations

There are two types of transformation values that can be stored for animation: local and global. Local transformations are stored relative to an object’s parent, while global ones are stored relative to the origin of the scene’s global coordinate system. The global transformation values are the final result of all the local transformations that are propagated down the object hierarchy from parent to child.

You can animate either the local or the global transformation values. It’s usually better to animate the local transformations—this lets you move the parent while all objects in the hierarchy keep their relative positions rather than staying in place.
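As a minimal sketch of how global values are accumulated down a hierarchy, consider translation only (real transforms also involve rotation and scaling, but the accumulation principle is the same):

```python
# Global position as the accumulation of local translations down a hierarchy
# (translation-only sketch; rotation and scaling omitted for brevity).
def global_position(chain):
    """chain: list of local (x, y, z) translations, ordered root to object."""
    gx = gy = gz = 0.0
    for lx, ly, lz in chain:
        gx, gy, gz = gx + lx, gy + ly, gz + lz
    return (gx, gy, gz)

# Parent at (2, 0, 0), child stored locally at (1, 1, 0) relative to it:
print(global_position([(2.0, 0.0, 0.0), (1.0, 1.0, 0.0)]))  # (3.0, 1.0, 0.0)
# Moving the parent changes the child's global position but not its local values.
```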

Transforming Interactively

1 Select objects or components to transform and activate a tool:
– Scale (press x)
– Rotate (press c)
– Translate (press v)

2 Set the manipulation mode. See Manipulation Modes on page 65.

3 If desired, specify the active axes. See Specifying Axes on page 67.

4 If desired, set the pivot. See Setting the Pivot on page 67.

5 Click and drag on the manipulator. See Using the Transform Manipulators on page 68.


Manipulation Modes

When you transform interactively, you always do so using one of several modes set on the Transform panel: View, Local, Global, etc. The mode determines the axes and the default pivot used for manipulation. If an object isn’t transforming as you expected, it’s possible that you need to change the manipulation mode. It is important to remember that the mode does not affect the values stored for animation (local versus global), it only affects your interaction with the transform tool.

Global

Global translations and rotations are performed along the scene’s global axes.

Local

Local transformations are performed along the axes of the object’s local coordinate system as defined by its center. This is the only true mode available for scaling—scaling is always performed along an object’s own axes.

View

View translations and rotations are performed with respect to the 3D view. The plane in which the object moves depends on whether you are manipulating it in the Camera, Top, Front, Right, or other view.

Par

Par, or parent, translations and rotations use the axes of the object’s parent. For translation, this is the only mode where the axes of interaction correspond exactly to the coordinates of the object’s local position for the purpose of animation. When you activate individual axes on the Transform panel, the corresponding local position parameters are automatically marked. To activate Par for rotations, activate Add and press Ctrl.

Object is transformed...

...using global axes as the reference.

Object is transformed...

...using the object’s own local axes as the reference.

If you are using the SRT manipulators in a perspective view like Camera or User, View mode uses the global scene axes.

Object is transformed using the axes of the 3D view as the reference.

Object is transformed...

...using the local space of its parent as the reference.


Add

Add, or additive, mode is only available for rotation. It lets you directly control the object’s local X, Y, and Z rotations as stored relative to its parent. This mode is especially useful when animating bones and other objects in hierarchies.

For rotations, this is the only mode where the axes of interaction correspond exactly to the coordinates of the object's local orientation for the purpose of animation. When you activate individual axes on the Transform panel, the corresponding local orientation parameters are automatically marked.

Uni

Uni, or uniform, is available only for scaling. It is not really a mode but it modifies the way objects are scaled locally. It scales along all active local axes at the same time with a single mouse button. You can activate and deactivate axes as described in Specifying Axes on page 67. You can also temporarily turn on Uni by pressing Shift while scaling.

Vol

Like Uni, Vol or volume is available only for scaling and is a modifier rather than a mode. It scales along one or two local axes, while automatically compensating the other axes so that the volume of the object’s bounding box remains constant.
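The idea behind the compensation can be sketched numerically: if the active axes are scaled by some factor, the inactive axes are scaled by the inverse of that factor distributed among them, so the product of all three factors (and therefore the bounding-box volume) stays constant. The following Python sketch illustrates the principle only; it is not Softimage’s actual implementation.

```python
import math

def volume_compensate(factor, active=("x",)):
    """Scale the active axes by `factor` and compensate the inactive
    axes so the product of all three factors stays 1 (constant
    bounding-box volume). An illustration of Vol mode's principle."""
    axes = ("x", "y", "z")
    inactive = [a for a in axes if a not in active]
    # Volume contribution of the active axes:
    active_factor = factor ** len(active)
    # Spread the inverse evenly over the remaining axes:
    comp = active_factor ** (-1.0 / len(inactive))
    return {a: (factor if a in active else comp) for a in axes}

factors = volume_compensate(2.0, active=("x",))
# Scaling X by 2 shrinks Y and Z by 1/sqrt(2) each.
volume_factor = factors["x"] * factors["y"] * factors["z"]
```

With two active axes, the single remaining axis absorbs the full inverse factor; with all three axes active there would be nothing left to compensate, which is consistent with Vol applying to one or two axes only.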

Ref

Ref, or reference, mode lets you translate an object along the X, Y, and Z axes of another element or an arbitrary reference plane. Right-click on Ref to set the reference.

Par mode is not available for components. In its place, Object mode uses the local coordinates of the object that “owns” the components.

Object is transformed using the local space of a picked object as its reference.


Plane

Plane mode lets you drag an object along the XZ plane of another element or an arbitrary reference plane. Right-click on Plane to choose the plane.

Specifying Axes

When transforming interactively, you can specify which axes are active using the x, y, and z icons in the Transform panel. For example, you can activate rotation in Y only, or deactivate translation only in Z. Active icons are colored, and inactive icons are gray.

• Click an axis icon to activate it and deactivate the others.

• Shift+click an axis icon to activate it without affecting the others.

• Ctrl+click an axis icon to toggle it.

• Click the All Axes icon to activate all three axes.

• Ctrl+click the All Axes icon to toggle all three axes.

If Allow Double-click to Toggle Active Axes is on in the Transform preferences, then you can also specify transformation axes by double-clicking in the 3D views while a transformation tool is active:

• Double-click on a single axis to activate it and deactivate the others.

• If only one axis is currently active, double-click on it to activate all three axes.

• Shift+double-click on an axis to toggle it on or off individually. (If it is the only active axis, it will be deactivated and both of the other two axes will be activated).

Setting the Pivot

When transforming elements interactively, you can set the pivot by pressing the Alt key while a transformation tool is active. The pivot defines the position around which elements are rotated or scaled (center of transformation). When translating and snapping, the pivot is the position that snaps to the target.

1. Make sure that Transform > Modify Object Pivot is set to the desired value:

- Off (unchecked) to set the tool pivot used for interactive manipulation only. This is useful if you are simply moving elements into place. The tool pivot is normally reset when you change the selection. However, you can lock and reset the position manually.

- On (checked) to modify the object pivot. The object pivot acts like a center for the object’s local transformations. It is used when playing back animated transformations, and is also the object’s default pivot for manipulation. You can animate the object pivot to create effects such as a rolling cube.

2. Activate a transform tool.

Object is transformed using the local space of a user-defined plane in space.



3. Do any of the following:

- Alt+drag the manipulator’s center, or one of its axes, to change the position of the pivot manually. You can use snapping, as well as change manipulation modes on the Transform panel.

- Alt+click in a geometry view. The pivot snaps to the closest point, edge midpoint, polygon midpoint, or object center among the selected objects. This lets you easily rotate or scale an object about one of its components.

- Alt+middle-click to reset the pivot to the default.

You can lock the pivot by pressing Alt, clicking on the triangle below the pivot icon, and choosing Lock. The tool pivot remains at its current location, even if you change the selection.

Using the Transform Manipulators

Translate Manipulator

Rotate Manipulator

Scale Manipulator

In addition to dragging the handles, you can:

• Middle-click and drag anywhere in the 3D views to translate along the axis that most closely matches the drag direction.

• Click and drag anywhere in the 3D views (except on the manipulator) to perform different actions, depending on the setting for Click Outside Manipulator in the Tools > Transform preferences.

• Right-click on the manipulator to open a context menu, where you can set the manipulation mode and other options.

Pivot icon

Click and drag on a single axis to translate along it.

Click and drag between two axes to translate along the corresponding plane.

Click and drag on the center to translate in the viewing plane.

Click and drag on a single ring to rotate around that axis.

Click and drag on the silhouette to rotate about the viewing axis. This does not work in Add mode.

Click and drag on the ball to rotate freely. This does not work in Add mode.

Click and drag on a single axis to scale along it.

Click and drag along the diagonal between two axes to scale both those axes uniformly.

Click and drag the center left or right to scale all active axes uniformly.


Setting Values Numerically

As an alternative to transforming objects interactively, you can enter numerical values in the boxes on the Transform panel:

• In Global mode, values are relative to the scene origin.

• In Ref mode, values are relative to the active reference plane.

• In View mode, values can be either global or relative to the object’s parent depending on what’s set in your preferences.

• In all other modes, values are relative to the object’s parent.

Transformation Preferences

Transform > Transform Preferences contains several settings that affect the display, interaction, and other options of the transformation tools. Since you will be spending a great deal of your time transforming things, it’s a good idea to explore these and find the settings that are most comfortable for you.

Hierarchical (Softimage) versus Classic Scaling

Hierarchical (Softimage) scaling uses the local axes of child objects when their parent is scaled. This maintains the relative shape of the children without shearing if they are rotated with respect to their parent.

When this option is off, the result is called classic scaling—children are scaled along their parent’s axes and may be sheared with non-uniform scaling. Classic scaling is recommended if you are exchanging data with other applications, such as game engines, motion capture systems, or 3D applications that do not understand Softimage scaling.
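The difference can be seen in simple matrix terms. With classic scaling, the parent’s scale is applied after the child’s rotation, so the child’s axes can lose their orthogonality (shear); with hierarchical scaling, the scale is applied along the child’s own axes and orthogonality is preserved. A 2D Python sketch of this, with made-up values (not Softimage code):

```python
import math

def matmul(a, b):
    """2x2 matrix product, row-major lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
R = [[c, -s], [s, c]]          # child rotated 45 degrees relative to parent
S = [[2.0, 0.0], [0.0, 1.0]]   # parent scaled 2x in X only

# Classic scaling: the parent's scale acts along the PARENT's axes,
# i.e. after the child's rotation -- the child's axes end up skewed.
classic = matmul(S, R)
# Hierarchical (Softimage) scaling: the scale acts along the CHILD's
# own local axes, so the child keeps its shape.
hierarchical = matmul(R, S)

def column_dot(m):
    """Dot product of the matrix's column vectors; 0 means no shear."""
    return m[0][0] * m[0][1] + m[1][0] * m[1][1]
```

A nonzero dot product between a matrix’s column vectors means the transformed axes are no longer perpendicular, which appears visually as shearing.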

You specify which method to use for each child in its Local Transform property. You can also set the default value used for all new objects.

To specify hierarchical or classic scaling

1. Select one or more child objects and open their Local Transform property editor.

2. On the Scaling tab, turn Hierarchical (Softimage) Scaling off or on. If it is off, classic scaling is used.

To set the default scaling mode used for all new objects

1. Choose File > Preferences from the main menu bar.

2. Click General.

3. Toggle Use Classical Scaling for Newly Created Objects.

Parent and child branch-selected before scaling.

Scaled in Y using hierarchical scaling.

Scaled in Y using classic scaling.


Center Manipulation

Center manipulation lets you move the center of an object without moving its points. This changes the default pivot point used for rotation and scaling. You can manipulate the center by using Center mode interactively, or by using commands on the Transform menu (Move Center to Vertices and Move Center to Bounding Box).

It’s important to note that center manipulation is actually a deformation. As the center is moved, the geometry is compensated to stay in place. Because it is a deformation, you cannot manipulate the center of non-geometric objects. This includes nulls, bones, implicit objects, control objects, and anything else without points.

Freezing Transformations

The Transform > Freeze commands reset an object’s size, orientation, or location to the default values without moving the object’s geometry in global space. For instance, freezing an object’s translation moves its center to (0, 0, 0) in its parent’s space without visibly displacing its points.

Like center manipulation, freezing transformations is actually a deformation. As the center is transformed, the geometry is compensated to stay in place.
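Numerically, freezing a translation amounts to moving the offset out of the transform and into the points, so global positions are unchanged. A translation-only Python sketch with illustrative values:

```python
# Illustration only -- Softimage implements this as a deformation.
center = [3.0, 1.0, -2.0]                    # object's local translation
points = [[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]]  # geometry, relative to center

# Global position of each point before freezing:
global_before = [[p[i] + center[i] for i in range(3)] for p in points]

# Freeze translation: the center goes to (0, 0, 0) while every point
# absorbs the old offset.
frozen_center = [0.0, 0.0, 0.0]
frozen_points = [[p[i] + center[i] for i in range(3)] for p in points]

global_after = [[p[i] + frozen_center[i] for i in range(3)]
                for p in frozen_points]
# Nothing moves globally: global_after equals global_before.
```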

Resetting Transformations

The Transform > Reset commands return an object’s local scaling, rotation, and translation to the default values. This effectively removes any transformations applied since the object was created or parented, or since its transformations were frozen.

If you want an object to return to a pose other than the origin of its parent’s space when you reset its transformations, set a neutral pose for it.

Setting Neutral Poses

The Transform > Set Neutral commands “zero out” an object’s transformations. This is useful if you want an object to return to a pose other than the origin of its parent’s space when you reset its transformations. For example, you can set the neutral pose of a chain bone so that it returns to a “natural” position when you reset it. Neutral poses are also useful for visualizing the transformation values—it’s easier to imagine a rotation from 0 to 45 degrees than from 78.4 to 123.4 degrees.

The neutral pose acts as an offset for the object’s local transformation values, as if there were an intermediate null between the object and its parent in the hierarchy. The neutral pose values are stored in the object’s Local Transform property, and can be viewed or modified on the Neutral Pose tab of that property editor.
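In terms of raw values, setting a neutral pose transfers the current local value into the neutral offset and zeroes the local value, leaving the combined result unchanged. A translation-only Python sketch with made-up numbers:

```python
# Sketch of the neutral pose as an intermediate offset (translation
# only, for simplicity): combined = parent + neutral + local.
parent = 10.0
local = 78.4      # awkward local value before setting a neutral pose

global_before = parent + 0.0 + local

# Set Neutral: the current local value becomes the neutral offset,
# and the local value is zeroed -- the object does not move.
neutral, local_after = local, 0.0

global_after = parent + neutral + local_after
# Animation now reads from 0: it is easier to key 0 -> 45
# than 78.4 -> 123.4.
```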

When you set the neutral pose, any existing animation of the local transformation values is interpreted with respect to the new pose. This may give unexpected results when you play back the animation. For that reason, you should set the neutral pose before animating the transformations of an object.

If you remove the neutral pose using Transform > Remove Neutral Pose, the neutral pose values are added to the local transformation before being reset to the defaults. The object does not move in global space as a result.

If a neutral pose exists when you freeze an object’s transformations, the object’s center moves to the neutral pose instead of the origin of its parent’s space. If you want the object’s center to be at the origin, you should remove the neutral pose in addition to freezing the transformations. You can perform these two operations in either order.


Transform Setup

The Transform Setup property lets you define a preferred transformation for an object. When you select that object, its preferred transformation tool is automatically activated. Of course, you can still choose a different tool and change transformation options manually if you want to.

Transform setups are particularly useful when building animation rigs for characters. If you are using an object to control a character’s head orientation, you can set its preferred transformation to rotation. If you are using another object to control the character’s center of gravity (COG), you can set its preferred transformation to translation. When you select the head control, the Rotate tool is automatically activated, and then when you select the COG control, the Translate tool is automatically activated.

You apply a Transform Setup property by choosing Get > Property > Transform Setup from any toolbar and then setting all the options. You can modify the options later by opening the property from the explorer.

While Transform Setups are useful for many tasks, like animating a rig, at other times you don’t want the current tool to keep changing as you select objects. In these cases, you can ignore Transform Setups for all objects in your scene by turning off Transform > Enable Transformation Setups. Turn it back on to resume using the preferred tool of each object.

Transformations and Hierarchies

Transformations are propagated down hierarchies. Each object’s local position is stored relative to its parent. It’s as if the parent’s center is the origin of the child’s world.
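This propagation can be written as a composition of transforms: a child’s global transform is its parent’s global transform applied to the child’s local one. A simplified 2D sketch (rotation angle plus translation; illustrative only, not Softimage’s actual matrix layout):

```python
import math

def compose(parent, child_local):
    """Compose two 2D transforms given as (angle, tx, ty): the child's
    local transform is expressed in the parent's space."""
    pa, px, py = parent
    ca, cx, cy = child_local
    # Rotate the child's local translation by the parent's angle,
    # then offset by the parent's translation.
    gx = px + cx * math.cos(pa) - cy * math.sin(pa)
    gy = py + cx * math.sin(pa) + cy * math.cos(pa)
    return (pa + ca, gx, gy)

# Parent rotated 90 degrees, sitting at (5, 0); child stored at
# local position (1, 0) -- i.e. one unit along the parent's X axis.
parent = (math.pi / 2, 5.0, 0.0)
child = (0.0, 1.0, 0.0)
angle, gx, gy = compose(parent, child)
# The child ends up at (5, 1): its local X axis is the parent's Y axis.
```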

Basics of Transforming Hierarchies

Objects in hierarchies behave differently when they are transformed, depending on whether the objects are node-selected or branch-selected. By default:

• If an object is branch-selected, then its children are transformed as well. You can change this behavior by modifying the parent constraint on the Options tab of the child’s Local Transform property editor.

• If an object is node-selected, then children with local animation follow the parent. This is because the local animation values are stored relative to the parent’s center. However, what happens to non-animated children depends on the ChldComp (Child Transform Compensation) option on the Constrain panel.

Child Transform Compensation

The ChldComp option on the Constrain panel controls what happens to non-animated children if an object is node-selected and transformed.

• If this option is off, all children with an active parent constraint follow the parent. You cannot move the parent without moving its children.

• If this option is on, the children are not visibly affected. Their local transformations are compensated so that they maintain the same global position, orientation, and size.

Child Transform Compensation does not affect what happens when a child has local animation on the corresponding transformation parameters nor when the parent is branch-selected.
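The compensation amounts to recomputing the child’s local values against the parent’s new transform so that the global result is unchanged. A translation-only sketch with illustrative numbers:

```python
# Translation-only illustration of ChldComp (not Softimage code).
parent_global = 4.0
child_local = 3.0
child_global = parent_global + child_local    # 7.0

parent_global_new = 9.0                       # parent node-moved by +5

# ChldComp off: the child keeps its local value and follows the parent.
child_global_following = parent_global_new + child_local

# ChldComp on: the local value is recomputed so the global one holds.
child_local_compensated = child_global - parent_global_new
child_global_compensated = parent_global_new + child_local_compensated
```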


Snapping

Snapping lets you align components and objects when moving or adding them. You can snap to targets like objects, components, and the viewport grids, or you can snap by increments.

Snapping to Targets

Use the Snap panel to activate snapping to targets.

The grid used for snapping depends on the manipulation mode:

• Global, Local, Par, Object, and Ref use the Snap Increments set in the Transform > Transform Preferences. They do not use the visible floor/grid displayed in 3D views.

• View mode uses the Floor/Grid Setup set in the Camera Visibility property editor (Shift+s over a specific 3D view, or Display > Visibility Options (All Cameras)).

• Plane mode uses the Snap Size set in the Reference Plane property editor.

Incremental Snapping

When translating, rotating, and scaling elements, you can snap incrementally. Instead of snapping to a target, elements jump in discrete increments from their current values. This is useful if you want to move an element by exact multiples of a certain value, but keep it offset from the global grid.

To snap incrementally:

• Press Shift while rotating or translating an element.

• Press Ctrl while scaling (Shift is used for scaling uniformly).
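The behavior can be sketched as rounding the drag distance, not the absolute position, to the nearest multiple of the increment, so the element stays offset from the global grid. An illustration of the idea only, not Softimage’s implementation:

```python
def snap_increment(start, dragged, increment):
    """Jump in discrete steps of `increment` away from the starting
    value, instead of snapping to absolute grid positions."""
    steps = round((dragged - start) / increment)
    return start + steps * increment

# An element at 0.3 dragged to roughly 2.2 with a 0.5 increment lands
# on 2.3: four steps of 0.5 from 0.3, still offset from the grid.
snapped = snap_increment(0.3, 2.2, 0.5)
```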

You can set the Snap Increments using Transform > Transform Preferences.

Activate or deactivate snapping.

Use Ctrl to temporarily toggle the current state.

Set a variety of options from the menu.

Specify the type of target: points, curves/edges, facets, or the grid.

Right-click to select various sub-types.


Section 4

Organizing Your Data

Working in Softimage involves saving and retrieving files between systems. A typical project in Softimage contains many files that need to be easily accessible to you or members of your workgroup. Softimage provides data management features that help you optimize your production pipeline.

What you’ll find in this section ...

• Where Files Get Stored

• Scenes

• Projects

• Models

• Importing and Exporting


Where Files Get Stored

There are two types of files in Softimage: project files and application data files.

Project files include scenes as well as any accompanying files such as texture images, referenced models, cached simulations, rendered pictures, and so on. They are stored in various subfolders of a main project folder.

Application data files are not specific to a single project. They include presets and various customizations you can make or install, such as commands, keyboard mappings, toolbars, shelves, views, layouts, plug-ins, add-ons, and so on. The application data files can be stored in various subfolders at one of three locations:

• User is the location for your personal customizations. Typically, it is C:\users\username\Autodesk\Softimage_2010 on Windows or ~/Autodesk/Softimage_2010 on Linux.

• Workgroup is the location for customizations that are shared among a group of users working on the same local area network.

• Installation (Factory) is the location for presets and sample customizations that ship with Softimage. It is located in the directory where the Softimage program files are installed. It is not recommended that you store your own customizations here.

Setting a Workgroup

Workgroups provide a method for easily sharing customizations among a group of people working on the same project. Simply set your workgroup path to a shared location on your local network, and you can take advantage of any presets, plug-ins, add-ons, shaders, toolbars, views, and layouts that are installed there.

The workgroup is usually created by a technical director or site supervisor. To connect to an existing workgroup, choose File > Plug-in Manager, click the Workgroups tab, click Connect, and specify the location.

Whenever you use a Softimage file browser to access files on disk, you can quickly switch among your project, user, workgroup, and installation locations using the Paths button.


Scenes

A scene file contains all the information necessary to identify and position all the models and their animation, lights, cameras, textures, and so on for rendering. All the elements of a scene are compiled into a single file with an .scn extension.

The Softimage title bar identifies the name of the current scene and the project in which it resides.

Merging Scenes combines objects in any number of Softimage scenes. When you merge a scene into the current scene, it is automatically loaded as a model.

Press the Ctrl key as you drag and drop a scene (*.scn) file from an external window into a 3D view to merge it as a model under the scene root.

Manage scenes and their associated projects using the Project Manager. You can also create, open, and save scenes to different projects from here.

A New Scene is automatically generated when you start Softimage or create a new project. You can also create a new scene any time while you work. Every new scene is created in the active project and its name appears as “Untitled” in the Softimage title bar.

Choose Edit > Delete All from the Edit panel in the main command panel or press Ctrl+Delete to clear the workspace before creating a new scene.

Open a scene.

or

Open a recently used scene.

You can also drag and drop a scene (*.scn) file from an external window into a 3D view to open the scene. Note that you cannot drag and drop scenes from external windows on Linux systems.

When you open a scene file, a temporary “lock” file is created. Anyone else who opens the file in the meantime must work on a copy, and any changes to the scene must be saved under a different file name. The lock file is deleted when you close the scene.

Choose Preferences > Data Management to set options for backing up, autosaving, recovering, and debugging your scenes.

Save or Save As to update the existing scene or save it to a new name in the current project.

Import and export scenes from and to other 3D or CAD/CAM programs, saved in the dotXSI™, COLLADA, FBX, DirectX, IGES, and OBJ formats.

The File Menu contains most of the commands for creating, opening, and managing scenes.


Managing External Files in Scenes

Scenes can reference many external files such as referenced models, texture images, action sources, and audio clips. Some of these referenced files may be located outside of your project structure. When you save a scene, the path information that lets Softimage locate and refer to these external files is saved as well.

As you develop the scene, you’ll probably need to perform some clean-up and management operations on its external files. For example, you might need to update some paths or locate a missing image. You can do all this, as well as perform other file management tasks, using the external files manager.

Choose File > External Files to open the external files manager.

The grid lists all of the external files for the scene/model specified in the left-hand pane, and of the type specified in the File Type list.

The controls for viewing and managing external files.

Click here to refresh the list of files.

The left pane allows you to choose whether to show all external files used by the scene, or only those used by a particular model.

Files with invalid paths are highlighted in red.

Selected files are highlighted in green.


Displaying Scene Information

You can obtain important statistics for your scene by choosing Edit > Info Scene from the Edit panel or by pressing Ctrl+Enter. This information can be helpful when evaluating a scene’s complexity for the purpose of optimization.

Getting and Setting Data in the Scene TOC

Scene files can be further modified through their scene TOC. The scene TOC (scene table of contents) is an XML-based file that contains scene information. It has the same name and is stored in the same folder as the corresponding scene file, with a .scntoc extension.

By default, the scene TOC is created automatically when you save a scene. When you open a scene file, Softimage looks for a corresponding scene TOC file. If it is found, Softimage automatically applies the information it contains.

This lets you use a text editor or XML editor to change the path for external files such as referenced models or texture images, change render options, change the current render pass, and so on.
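For instance, a script could repoint a referenced model’s path before the scene is opened. In the Python sketch below, the element and attribute names are hypothetical and for illustration only; inspect an actual .scntoc file written by Softimage for the real schema.

```python
import xml.etree.ElementTree as ET

# A hypothetical TOC fragment -- the element and attribute names are
# illustrative, not the actual .scntoc schema.
toc_text = """<xsi_file type="SceneTOC">
  <Models>
    <Model name="porsche" path="C:/old_project/Models/porsche.emdl"/>
  </Models>
</xsi_file>"""

root = ET.fromstring(toc_text)
# Repoint the referenced model at a new location.
for model in root.iter("Model"):
    if model.get("name") == "porsche":
        model.set("path", "C:/new_project/Models/porsche.emdl")

updated = ET.tostring(root, encoding="unicode")
```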


Projects

In Softimage, you always work within the structure of a project. A project is a system of folders that contain the scenes you build and the external files referenced by those scenes.

Projects are used to keep your work organized and provide a level of consistency that can simplify production for a workgroup. A project can exist locally on your machine or can be shared from a network drive.

When you open Softimage for the first time, an untitled scene is created in the XSI_SAMPLES factory project. You can set your own project as the default project that opens with Softimage. The project name in the title bar at the top of the Softimage interface is the active project.

Project lists are text-based files with an .xsiprojects file name extension. You can build, manage and distribute your project lists among members of your workgroup using the Project Manager.

Scan for projects in a specified path and add them to the project list.

Select a project from the project list.

Sets the selected project as the active project.

Sort projects by Name, Origin (factory [F], user [U], and workgroup [W]), or none.

Sets the default project that opens automatically when you start Softimage.

Location of your project folder.

The tool for managing multiple projects and scenes. You can create new projects and scenes, open existing projects and scenes, scan your system for projects, delete projects, as well as add and remove projects from the project list.

Export the list of projects and have all members of the workgroup import it.

The Project Manager

The Project Structure

Subfolders created in every new project folder store and organize the elements of your work such as rendered pictures, scenes, material libraries, external action sources, etc.


Models

Models are like “mini scenes” that can be easily reused in scenes and projects. They act as a container for objects, usually hierarchies of objects, and many of their properties. Models contain not just the objects’ geometry but also the function curves, shaders, mixer information, groups, and other properties. They can also contain internal expressions and constraints; that is, those expressions and constraints that refer only to elements within the model’s hierarchy.

There are two types of models:

• Local models are specific to a single scene.

• Referenced models are external files that can be reused in many scenes.

Models and Namespaces

Each model defines its own namespace. This means that each object in a model’s hierarchy must have a unique name, but objects in different models can have the same name. For example, two characters in the same scene can both have chains named left_arm and right_arm if they are in different models.

All models exist in the namespace of the scene. This means that each model must have its own unique name, even if it is within the hierarchy of another model.

Namespaces let you reuse animations that have been stored as actions. If an action contains animation for one model’s left_arm chain, you can apply the action to another model and it automatically connects to the second model’s left_arm. If your models contain elements with different naming schemes, for example, LeftArm and L_ARM, you can use connection mapping templates to specify the proper connections.

Creating Local Models

To create a model in your scene, select the elements you want it to contain and choose Create > Model from the Model toolbar.

At this point, the model has its own namespace and its own mixer, so it can share action sources with other models in the same scene. It can also be instantiated or duplicated within the same scene. If that’s all you need a model for, you do not need to export and import it.

You can add elements to the model by parenting them to the model hierarchy. To remove elements, cut them from the hierarchy.

“Club bot” model structure contains many things that define the character.


Exporting Models

Use File > Export > Model to export models created in Softimage for use in other scenes. Using models to export objects is the main way of sharing objects between scenes.

When you export a model, a copy is saved as an independent file. The file names of exported models have an .emdl extension.

The original model remains in the scene. If you ever need to modify the model, you can change it in the original scene, and then re-export it using the same file name. If other scenes use that file as a referenced model, they will update automatically when you open them. If you imported the file into another scene as a local model, you must delete the model from that scene and re-import it from the file to obtain the updated version.

Importing Local Models

When you import a model locally instead of as a referenced model, its data becomes part of your scene. It is as if the model was created directly in the scene—there is no live link to the .emdl file. You can make any changes you want to the model and its children.

To import a model locally, choose File > Import > Model from the main menu. You can also drag an .emdl file from a browser or a link on a Net View page and drop it onto the background of a 3D view. On Windows, you can also drag an .emdl file from a folder window.

Importing Referenced Models

Referenced models are models that are imported using File > Import > Referenced Model or converted to referenced using Edit > Model > Convert to Referenced. Their data is not stored in the scene—it is referenced from an external .emdl or .xsi file. Changes made to the external model are reflected in your scene the next time you open the scene or update the reference.

For example, let’s say that you’re modeling a car that will be used in various scenes, but the animator needs to start animating with the car on another computer before you can finish the details. You export the car as porsche.emdl, which the animator can import into her scene while you continue your work. Any changes that the animator makes to the car, such as setting keys or expressions, are automatically stored in the model’s delta in the scene.

When you’re done modeling the car, you can re-export using the same file name. Now when the animator loads the scene or updates the referenced model, all the changes you made are automatically reflected in the car in her scene. After the model is updated, Softimage reapplies the changes stored in the delta to the model within the animator’s scene.

Referenced models also let you work at different levels of detail. You can have a low-resolution model for fast interaction while animating, a medium-resolution model for more accurate previewing, and a high-resolution model for the final results.

Referenced models are indicated in the explorer by a white man icon. The default name of this node depends on the name of the external file, but you can change it if you want. The name of the active resolution appears in square brackets after the model’s name. The name of a delta’s target model appears after the delta’s name.

Use the Modify > Model menu on the Model toolbar to set the current resolution, or to temporarily offload models.


You can change a referenced model’s parameter values, animate them, apply new properties, and so on. These changes are stored in the delta and reapplied when the model is updated. There are some changes you can’t make, such as adding an object to the hierarchy or deleting a property.

Whatever changes you perform, make sure that they are selected in the delta’s Recorded/Applied Modifications property; otherwise, they will be lost the next time the model is updated.

Instantiating Models

An instance is an exact replica of a model. Any type of model can be instanced. You can create as many instances as you like using the commands on the Edit > Duplicate/Instantiate menu, and position them anywhere in your scene. When you modify the original “master” model, all instances update automatically.

Instances are useful because they require very little memory: only the transformations of the instance root are stored. However, you cannot modify, for example, an instance’s geometry or material.

Instantiation has the following advantages:

• Instances use much less disk space than duplicates or clones because you’re not duplicating the geometry.

• Editing multiple identical objects is very simple because you only have to edit the original.

• Wireframe, shading, and memory operations are much faster.

Instances are displayed in the explorer with a cyan i superimposed on the model icon. In the schematic view, they are represented by trapezoids with the label I.

Parameters display a white lock icon but they can still be modified and animated.

Instance in the explorer.

Instance in the schematic view.


Importing and Exporting

In any production pipeline, you will need to import and export scene data for reuse in other scenes or software packages.

Softimage provides a number of importers and exporters available from the File > Import, File > Export, and File > Crosswalk menus. Softimage also supports many other file types such as audio, video, various graphics and middleware formats, as well as specialized scene elements such as function curves, actions, and motion capture data.

Importing and Exporting with Crosswalk

Crosswalk is a set of plug-ins and converters that lets you transfer assets such as scenes and models between Softimage and other programs in your pipeline such as Autodesk Maya and Autodesk 3ds Max. The Crosswalk converters are available in Softimage from File > Crosswalk. You can download the latest version of Crosswalk from www.autodesk.com/softimage-crosswalk.

FBX, Collada, and dotXSI

You can use Crosswalk in Softimage to import and export scenes and models in FBX (.fbx), Collada (.dae, .xml), and dotXSI (.xsi) formats.

3ds Max and Maya

Crosswalk plug-ins for Maya and 3ds Max allow you to import and export dotXSI files in those programs. This allows you to share assets back and forth with Softimage.

Crosswalk SDK

You can use the templates and examples provided in the Crosswalk SDK to create converters that translate dotXSI files into your own custom formats, such as for games content.

Importing and Exporting with Point Oven

Point Oven is a suite of plug-ins, available from within Softimage, that lets you simplify your Softimage scenes by baking vertex and function curve data. These plug-ins also streamline your pipeline by transferring data between applications that use Point Oven.

The Softimage Point Oven plug-ins let you load and save various types of data: you can import and export Lightwave Object (LWO2) files, bake vertices to MDD files, import and export Point Oven scenes (PSC), export Lightwave scenes (LWS), export Messiah scenes (FXS), and import MDD files.

You can access the Point Oven plug-ins from the File > Import > Point Oven and File > Export > Point Oven menus.

Importing and Exporting Obj Files

Use File > Import > Obj File and File > Export > Obj File to transfer Wavefront Obj data back and forth with other programs that support this format.

Importing and Exporting Other Formats

In addition to the formats explicitly mentioned here, Softimage supports a large number of other formats for scenes, animation, motion capture, images, and so on.

Basics • 83

Section 5

General Modeling

Modeling is the task of creating the objects that you will animate and render. No matter what type of object you are modeling, the same basic concepts and techniques apply. This section explores the aspects of modeling that aren't specific to any particular type of geometry, such as curves, polygon meshes, or NURBS surfaces.

What you’ll find in this section ...

• Overview of Modeling

• Geometric Objects

• Accessing Modeling Commands

• Starting from Scratch

• Operator Stack

• Modeling Relations

• Attribute Transfer (GATOR)

• Manipulating Components

• Deformations

Overview of Modeling

1. Start with a basic object, such as a primitive cube.

2. Add more subdivisions to work with.

3. Rough out the basic shape of the object.

4. Iteratively refine the object, moving points and adding more detail where required.

5. Once the modeling is done, the object is ready to be textured and animated.

If changes are necessary, you can still perform modeling operations on the animated, textured object.

Geometric Objects

By definition, geometric objects have points. The set of these points and their positions determine the shape of an object and are often called the object’s geometry. The number of points and how they are connected is called its topology.

No matter the type of geometry, Softimage allows you to select, manipulate, and deform points in the same way.
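The geometry/topology distinction can be sketched in a few lines of Python (an illustrative data layout, not the Softimage SDK): point positions are the geometry, while the point count and connectivity are the topology.

```python
# Sketch of the geometry/topology distinction: point positions are the
# geometry; the number of points and how they are connected (faces) are
# the topology.

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # geometry
faces = [(0, 1, 2)]                              # topology (connectivity)

def topology(points, faces):
    return (len(points), tuple(faces))

before = topology(points, faces)
points[2] = (0.0, 2.0)                     # deforming a point changes geometry...
assert topology(points, faces) == before   # ...but not topology

points.append((1.0, 1.0))                  # adding a point and a face
faces.append((1, 3, 2))                    # changes the topology
assert topology(points, faces) != before
```

Deformations change only the geometry; operations like subdividing or adding polygons change the topology.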

Types of Geometry

The main types of renderable geometry in Softimage are polygon meshes and NURBS surfaces. In addition, there are other types of geometry that you can use for specialized purposes.

Polygon Meshes

Polygon meshes are quilts of polygons joined at their edges and vertices. One advantage of polygon meshes is that they allow for almost arbitrary topology—you are not limited to rectangular patches and you can add extra points for more detail where needed.

On the other hand, polygon meshes may require very heavy geometry (that is, many points) to approximate smoothly curved objects. However, you can subdivide them to create “virtual” geometry that is smoother.

NURBS Surfaces

Surfaces are two-dimensional NURBS (non-uniform rational B-splines) patches defined by intersecting curves in the U and V directions. In a cubic NURBS surface, the surface is mathematically interpolated between the control points, resulting in a smooth shape with relatively few control points.

The accuracy of NURBS makes them ideal for smooth, manufactured shapes like car and aeroplane bodies. One limitation of surfaces is that they are always four-sided.

A polygon mesh sphere

A subdivision surface created from a cube.

NURBS surfaces allow for smooth geometry with relatively few control points.

Curves

In Softimage, curves are one-dimensional NURBS of linear or cubic degree. Cubic curves with Bézier knots can be manipulated as if they are Bézier curves.

Curves have points, but they are not renderable because they have no thickness. Nevertheless, they have many uses, such as serving as the basis for constructing polygon meshes and surfaces, acting as paths for objects to move along, controlling deformations like deform by curve and deform by spine, and so on.
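To illustrate how a cubic curve is smoothly interpolated between its control points, here is a minimal evaluation of one uniform, non-rational cubic B-spline segment in Python. This is a deliberate simplification of full NURBS (no knot vector, all weights equal to 1), not Softimage's implementation.

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Point at parameter t in [0, 1] on a uniform cubic B-spline segment.

    Uniform and non-rational, i.e. a simplification of full NURBS
    (no knot vector, all control-point weights equal to 1).
    """
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# The curve is interpolated *between* the control points: with collinear,
# evenly spaced controls, the evaluated point stays on the same line.
pt = cubic_bspline_point((0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), 0.5)
assert abs(pt[0] - 1.5) < 1e-9
assert abs(pt[1]) < 1e-9
```

The four basis weights always sum to 1, which is why moving a single control point pulls the curve smoothly rather than making it pass through the point.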

Lattices

Lattices are a hybrid between geometric objects and control objects. Although they have points, they do not render and are used only to deform other geometric objects.

Particles

Particles are disconnected points in a point cloud. They are often emitted in simulations to create a variety of effects, such as fire, water, and smoke.

In Softimage, point clouds are controlled by ICE trees. See ICE Particles on page 271.

Hair

Hair objects let you use guide hairs to control a full head of render hairs. You can style the hairs manually as well as apply a dynamic simulation.

Density

Density refers to the number of points on an object. Part of the art of modeling is controlling the balance of density. Generally speaking, you need more density in areas where an object has high detail or needs to deform smoothly. However, too much density means that an object will be unnecessarily slow to load, update, and render.
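The cost of density grows quickly. As a sketch of the arithmetic (using a grid as the simplest case), doubling the subdivisions in both directions roughly quadruples the point count:

```python
# Sketch of how density grows with subdivisions: a grid with u x v
# subdivisions has (u + 1) * (v + 1) points, so doubling both subdivision
# counts roughly quadruples the point count, and with it the load,
# update, and render cost.

def grid_point_count(subdiv_u, subdiv_v):
    return (subdiv_u + 1) * (subdiv_v + 1)

assert grid_point_count(8, 8) == 81
assert grid_point_count(16, 16) == 289   # roughly 4x the points of the 8x8 grid
```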

A simple cubic NURBS curve.

Normals

On polygon meshes and surfaces, the control points form bounded areas. Normals are vectors perpendicular to these areas on the surface: they indicate the visible side of the object and how its surface is oriented, and they are used to compute shading between surface triangles.

Normals are represented by thin blue lines. To display or hide them, click the eye icon (Show menu) of a 3D view and choose Normals.

When normals are oriented in the wrong direction, they cause modeling or rendering problems. You can invert them using Modify > Surface > Invert Normals or Modify > Poly. Mesh > Invert Normals on the Model toolbar.

If an object was generated from curves, you can also invert its normals by inverting one or more of its generator curves with Modify > Curve > Inverse.
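A face normal is simply the normalized cross product of two edge vectors, which is also why inverting normals amounts to reversing the winding order of the points. A minimal sketch in Python (illustrative, not the Softimage SDK):

```python
# Sketch of a face normal: the normalized cross product of two edge
# vectors. Reversing the winding order of the points flips the normal,
# which is what "invert normals" amounts to.

def face_normal(p0, p1, p2):
    ux, uy, uz = (p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2])
    vx, vy, vz = (p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2])
    nx, ny, nz = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

a, b, c = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
assert face_normal(a, b, c) == (0.0, 0.0, 1.0)    # faces +Z
assert face_normal(a, c, b) == (0.0, 0.0, -1.0)   # reversed winding: inverted
```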

Normals should point toward the camera; if they point away from it, the orientation is wrong.

Accessing Modeling Commands

The modeling tools can be found, not surprisingly, on the Model toolbar. The context menu also contains many of the most useful modeling commands that apply to the current selection.

Model Toolbar

You’ll find the Model toolbar at the far left of the screen. These commands are also available from the main menu.

Context Menus

Many modeling commands are available from context menus. The context menu appears when you Alt+right-click in the 3D views (Ctrl+Alt+right-click on Linux).

• If you click a selected object, the menu items apply to all selected objects. On Windows, you can also press the context-menu key (next to the right Ctrl key on some keyboards).

• If you click an unselected object, the menu items apply only to that object.

• When components are selected, you can right-click anywhere on the object that “owns” the selected components. The items on the context menu apply to the selected components.

• If you click over an empty area of a 3D view, the menu items apply to the view itself.

To display the Model toolbar:

– Click the toolbar title and choose Model.

or

– Press 1 at the top of the keyboard.

Get commands: Create generic elements, including primitive objects, cameras, and lights (also available on the Animate, Render, and Simulate toolbars).

Create commands: Draw new objects or generate them from existing ones.

Modify commands: Change an object's topology or deform its geometry.

If the Palette or Paint panel is currently displayed, first click the Toolbar icon or press Ctrl+1.

Starting from Scratch

When modeling, you need to start somewhere. You can:

• Get a basic shape from the Primitive menu.

• Create text.

• Generate an object from a curve.

Primitives

Primitives are basic shapes like cubes, grids, and spheres. You can add them to a scene and then modify them as you wish. For example, you can start with a sphere and move points to create a head. You can then attach eyeballs and ears to the head and put the whole head on a model of a character.

There are several different primitive shapes for each geometry type. Each primitive shape has parameters that are particular to it—for example, a sphere has a radius that you can specify, a cube has a length, a cylinder has both height and radius, and so on.

There are also several parameters that are common to all or to several primitive shapes: Subdivisions, Start and End Angles, and Close End.

Getting Primitives

You add a primitive object to the scene by choosing an option from the Get > Primitive menu on any of the toolbars at the left of the main window.

1. Choose Get > Primitive.

2. Choose an item from the submenus:

- Curve displays a submenu from which you can choose an available NURBS curve shape.

- Polygon Mesh displays a submenu from which you can choose an available polygon mesh shape.

- Surface displays a submenu from which you can choose an available NURBS surface shape.

3. Set the parameters as desired. The geometric primitives (curves, polygon meshes, and surfaces) have certain typical controls:

- The shape-specific page contains the basic characteristics of the shape. Each shape has different characteristics; for example, a sphere has one radius and a torus has two.

- The Geometry page controls how the implicit shape is subdivided when converted into a surface. More subdivisions yield more points, resulting in greater detail but heavier geometry.

Text

You can create text in Softimage, as well as import it from RTF (rich text format) files. Text is not a type of geometric object in Softimage; instead, text information is immediately converted to curves. After that, the curves can be optionally converted to planar or extruded polygon meshes.

Creating Text

• Choose one of the following commands from the Model toolbar:

- Create > Text > Curves creates a Text primitive and converts it to a curve object.

- Create > Text > Planar Mesh creates a Text primitive, converts it to a curve object, and then finally converts the curve to a polygon mesh with the Extrusion Length set to 0. The curve object is automatically hidden.

- Create > Text > Solid Mesh creates a Text primitive, converts it to a curve object, and then finally converts the curve to a polygon mesh with the Extrusion Length set to 0.5 by default. Once again, the curve object is automatically hidden.

In each case, a property editor is displayed with pages for the text, curve, and mesh settings.

Objects from Curves

You can generate polygon meshes and surfaces from curves using the first group of commands in the Create > Surf. Mesh menu or the Create > Poly. Mesh menu on the Model toolbar.

The commands and the general procedures on these two menus are the same—the only difference is the type of object that is created.

Text property editor pages: enter text and font properties, convert text to curves, and optionally convert curves to polygon meshes.

Create polygon mesh from curves

Create surface from curves

1. Select the first input curve, then add the remaining input curves (if any) to the selection.

Different commands require different numbers of input curves. For example, Revolution Around Axis requires only one curve, while Loft allows for any number of profile curves to define the cross-section.

You are not limited to curve objects. You can also select curves on surfaces, including any combination of isolines, knot curves, boundaries, surface curves, and trim curves. For example, you can create a loft surface that joins two surface boundaries while passing through other curves.

2. Choose one of the commands from the first group in the Create > Surf. Mesh or Create > Poly. Mesh menu on the Model toolbar.

3. In the property editor that opens, adjust the parameters as desired. For more information, refer to the Softimage Reference by clicking on the ? in the property editor.

Operator Stack

The operator stack (also known as the modifier stack or construction history) is fundamental to modeling in Softimage. Every time you perform a modeling operation, such as modify the topology or apply a deformation, an operator is added to the stack. Operators propagate their effects upwards through the stack, with the output of one operator being the input of the next. At any time, you can go back and modify or delete operators in the stack.

Viewing and Modifying Operators

You can view the operator stack of an object in an explorer if Operators is active in the Filters menu. The operator stack is under the first subnode of an object in the explorer, typically named Polygon Mesh, NURBS Surface Mesh, NURBS Curve List, and so on.

For example, suppose you get a primitive polygon mesh grid, apply a twist, then randomize the surface. The operator stack shows the operators that have been applied. You can open the property page of any operator by clicking on its icon, and then modify values. Any changes you make are passed up through the history and reflected in the final object.

For example, you can:

• Change the size of the grid in its Geometry node.

• Change the angle, offset, and axis of the twist in Twist Op.

• Change the random displacement parameters in Randomize Op.
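The propagation behavior described above can be sketched in Python. This is a toy stack over scalar values, with operator names echoing the text; the real operators act on full geometry, so the implementation is illustrative only.

```python
# Sketch of an operator stack: each operator takes the previous result as
# its input, so editing a value on an operator lower in the stack
# re-propagates through everything above it.

class Operator:
    def __init__(self, name, fn, **params):
        self.name, self.fn, self.params = name, fn, params

def evaluate(base, stack):
    values = list(base)
    for op in stack:                         # output of one op feeds the next
        values = [op.fn(v, **op.params) for v in values]
    return values

base = [0.0, 1.0, 2.0]
stack = [
    Operator("Scale Op", lambda v, factor: v * factor, factor=2.0),
    Operator("Offset Op", lambda v, amount: v + amount, amount=1.0),
]
assert evaluate(base, stack) == [1.0, 3.0, 5.0]

# Go back and modify an operator mid-stack; the final result updates.
stack[0].params["factor"] = 3.0
assert evaluate(base, stack) == [1.0, 4.0, 7.0]

# Deleting an operator also re-evaluates cleanly.
del stack[1]
assert evaluate(base, stack) == [0.0, 3.0, 6.0]
```

Note that nothing is baked until you freeze: the base values stay untouched, and the final object is always re-derived by running the history.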

Example of extruding a profile curve along a guide curve.

Click an operator's icon to open its property editor. Click its name to select the operator; then you can press Enter to open the editor, or press Delete to remove the operator.

Construction Modes and Regions

The construction history is divided into four regions: Modeling, Shape Modeling, Animation, and Secondary Shape Modeling. The purpose of these regions is to keep the construction history clean and well ordered by allowing you to classify operators according to how you intend to use them.

For example, when you apply a deformation, you might be building the object’s basic geometry (Modeling), or creating a shape key for use with shape animation (Shape Modeling), or creating an animated effect (Animation), or creating a shape key to tweak an enveloped object (Secondary Shape Modeling).

Here is a quick overview of the workflow for using construction modes:

1. Set the current construction mode using the selector on the main menu bar.

2. Continue modeling objects by applying new operators. New deformations (operations that only change the positions of points) are applied at the top of the current region, and new topology modifiers (operators that change the number of components) are always applied at the top of the Modeling region. If you apply a deformation in the wrong region, you can move it by dragging and dropping in the explorer.

3. At any time as you work, you can display the final result (the result of all operators in all regions) or just the current region (the result of all operators in the current region and those below it) by selecting an option from the Construction Mode Display submenu of the Display Mode menu at the top right of a viewport:

- Result (top) always shows the final result of all operators, no matter which construction mode is current.

- Sync with construction mode shows the result of the operators in the current construction region and below.

You can even have different displays in different views so, for example, you can see and move points in one view in Modeling mode while you see the results after enveloping and other deformations in another view.

To quickly open the last operator in the selected object’s stack, press Ctrl+End or choose Edit > Properties > Last Operator in Stack.

If you modify specific components, then go back earlier in the stack and change the number of subdivisions, you’ll probably get undesirable results because the indices of the affected points have changed.
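The insertion rules in step 2 can be sketched in Python. The region names come from the text; the list-based data model and the `add_operator` helper are purely illustrative, not how Softimage stores the stack internally.

```python
# Sketch of region-aware insertion: deformations go on top of the current
# region; topology modifiers always go on top of the Modeling region.

REGIONS = ["Modeling", "Shape Modeling", "Animation",
           "Secondary Shape Modeling"]

def add_operator(stack, op_name, kind, current_mode):
    """stack: list of (region, op_name) pairs, ordered bottom to top."""
    region = "Modeling" if kind == "topology" else current_mode
    # Insert just above the last operator already in that region or below it.
    pos = 0
    for i, (r, _) in enumerate(stack):
        if REGIONS.index(r) <= REGIONS.index(region):
            pos = i + 1
    stack.insert(pos, (region, op_name))

stack = [("Modeling", "Grid Geometry")]
add_operator(stack, "Twist Op", "deform", "Modeling")
add_operator(stack, "Envelope Op", "deform", "Animation")
# A topology change lands at the top of Modeling, below the Animation region.
add_operator(stack, "Subdivide Op", "topology", "Animation")
assert stack == [("Modeling", "Grid Geometry"), ("Modeling", "Twist Op"),
                 ("Modeling", "Subdivide Op"), ("Animation", "Envelope Op")]
```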

The four construction regions, from the bottom of the stack up:

• Modeling: Create the basic shape and topology of an object. Use Freeze M to freeze this region.

• Shape Modeling: Define shapes for animation.

• Animation: Apply envelopes or other animated deformations.

• Secondary Shape Modeling: Define shapes on top of envelopes, e.g., muscle bulges.

Changing the Order of Operators

You can change the order of operators in an object’s stack by dragging and dropping them in an explorer view. You must always drop the operator onto the operator or marker that is immediately below the position where you want the dragged operator to go.

Be aware that you might not always get the results you expect, particularly if you move topology operators or move other operators across topology operators, because operators that previously affected certain components may now affect different ones. In addition, some deformation operators like MoveComponent or Offset may not give expected results when moved because they store offsets for point positions whose reference frames may be different at another location in the stack.

When you try to drag and drop an operator, Softimage evaluates the implications of the change to make sure it creates no dependency cycles in the data. If it detects a dependency, it will not let you drop the operator in that location. Moving an operator up often works better than moving it down—this is because of hidden cluster creation operators on which some operators depend.

Freezing the Operator Stack

When you are satisfied with an object, you can freeze all or part of its operator stack. This removes the current history—as a result, the object requires less memory and is quicker to update. However, you can no longer go back and change values.

• To freeze the entire stack, select the object and click Freeze on the Edit panel.

• To freeze just the modeling region, select the object and click Freeze M.

• To freeze from a specific operator down, select the operator in an explorer and click Freeze.

Collapsing Deformation Operators

Sometimes, it is useful to “freeze” certain operators in the stack without freezing earlier operators that are lower in the stack. For example, you might have many MoveComponent operators that are slowing down your scene, but you don’t want to lose an animated deformation or a generator (if your object has a modeling relation that you want to keep).

In these cases, you can collapse several deformation operators into a single Offset operator. The Offset operator is a single deformation that contains the net effect of the collapsed deformations at the current frame. Simply select the deformation operators in an explorer and choose Edit > Operator > Collapse Operators.

• Freezing removes any animation on the modeling operators (such as the angle of a Twist deformation). The values at the current frame are used.

• For hair objects, the Hair Generator and Hair Dynamics operators are never removed.
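The collapse can be sketched as follows: evaluate the deformations once, keep only the net per-point displacement, and replace the chain with a single offset. A minimal sketch in Python, assuming points as 2D tuples and deformations as plain functions (illustrative, not the Softimage SDK):

```python
# Sketch of collapsing deformation operators into a single Offset operator:
# the stored offsets are the net per-point displacement of the collapsed
# ops, so the result is identical but the history is much shorter.

def apply_stack(points, deformers):
    for deform in deformers:
        points = [deform(p) for p in points]
    return points

points = [(0.0, 0.0), (1.0, 0.0)]
deformers = [
    lambda p: (p[0] + 1.0, p[1]),        # e.g. one MoveComponent op
    lambda p: (p[0], p[1] + 2.0),        # another deformation
]
after = apply_stack(points, deformers)

# Collapse: keep only the net per-point displacement as one Offset op.
offsets = [(a[0] - p[0], a[1] - p[1]) for a, p in zip(after, points)]

def offset_op(pts, offsets=offsets):
    return [(p[0] + o[0], p[1] + o[1]) for p, o in zip(pts, offsets)]

assert offset_op(points) == after        # same result, a single operator
```

This also shows why offsets can misbehave when moved in the stack: they are raw displacements captured in one reference frame, with no knowledge of what produced them.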

Modeling Relations

When you generate an object from other objects, a modeling relation is established. For example, if you create a surface by extruding one curve along another curve, the resulting surface is linked to its generator curves. If you modify the curves, the surface updates automatically. The modeling relation is sometimes called construction history in other software.

You can modify the generated object in any way you like, for example, by moving points or applying a deformation. When you modify the generators, the generated object is updated while any modifications you have made to it are preserved.

You can display the modeling relations:

• In a 3D view, click the eye icon (Show menu) and make sure that Relations is on.

• In a schematic view, make sure that Show > Operator Links is on.

If the selected object has a modeling relation, it is linked to its input objects by lines. A label on the line identifies the type of relation (such as wave or revolution) and the name of the input object. You can click the line to select the corresponding operator.

Modeling relation: the road was created by extruding a cross-section along a guide. When the original guide was deformed into a loop, the road was updated automatically.

If you delete the input objects, the generated object is removed as well. To avoid this, freeze the generated object or at least the generator operator before deleting the inputs. If you use the Delete button in the Inputs section of the generator’s property editor, the generator is automatically frozen first.

Attribute Transfer (GATOR)

You can transfer and merge clusters and their properties from object to object. The cluster properties that you can transfer this way include materials, texture UV coordinates, vertex colors, property weight maps, envelope weights, and shape animation.

Attributes can be transferred in two ways:

• If you are generating a polygon mesh object from others, for example using Merge or Subdivision, use the controls in the generator’s property editor to transfer attributes from the input objects to the generated objects.

• Otherwise, select the target object, choose Get > Property > GATOR, pick one or more input objects, and right-click to end the picking session. You can use any combination of polygon meshes and NURBS surfaces.
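As an illustration of the idea behind attribute transfer, here is a closest-point sketch in Python: each target point takes the attribute value (here, a weight) of the nearest source point. The real GATOR tool interpolates values across the source surface; nearest-point lookup is a deliberate simplification.

```python
# Sketch of attribute transfer by closest point, in the spirit of GATOR.
# Each target point inherits the weight of the nearest source point.

def transfer(source_points, source_weights, target_points):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    result = []
    for tp in target_points:
        nearest = min(range(len(source_points)),
                      key=lambda i: dist2(source_points[i], tp))
        result.append(source_weights[nearest])
    return result

src_pts = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
src_wts = [0.25, 0.75]
tgt_pts = [(1.0, 0.0, 0.0), (9.0, 1.0, 0.0)]
assert transfer(src_pts, src_wts, tgt_pts) == [0.25, 0.75]
```

The same lookup works for any per-point attribute, which is why one tool can move envelope weights, vertex colors, and weight maps alike.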

Manipulating Components

Tweak Component is the main tool for moving components. It allows you to translate, rotate, and scale points, polygons, and edges. You can use it in two ways:

• Click and drag components for a fast, uninterrupted interaction.

• Select a component and then use the manipulator for a more controlled interaction.

To use the Tweak Component tool

1. Select a geometric object.

2. Activate the Tweak Component tool by pressing m or choosing Modify > Component > Tweak Component Tool from the Model toolbar.

Note that if a curve is selected, then pressing m activates the Direct Manipulation tool instead. However, you can still use Tweak Component with curves by choosing it from the toolbar menu.

3. Move the mouse pointer over the object in any geometry view. As the pointer moves, the component under the pointer is highlighted.

The Tweak Component tool will not highlight backfacing components, or components that are occluded by parts of the same object. When there are multiple types of components within the picking radius, priority is given first to points, then to edges, and finally to polygons.

4. Do one of the following:

- Click+drag to perform a simple transformation on the highlighted component. If all axes are active on the Transform panel, translation occurs in the viewing plane and scaling is uniform in local space. If one or more axes have been toggled off, translation and scaling use the current manipulation mode and active axes set on the Transform panel. For example, to translate along a point’s normal, activate Local and the Y axis only.

GATOR property editor options: transfer and merge surface attributes, transfer and merge animation attributes, or transfer and merge specific attributes manually.

Rotation uses the current manipulation mode and the Y axis by default, but you can select a different axis by deactivating the others.

- Click and release the mouse button to select the highlighted component. A manipulator appears (unless you’ve toggled it off). You can use the manipulator to transform the selection, or if you prefer you can first modify the selection, change the pivot, and set other options.

The Tweak Component tool uses the Ctrl, Shift, and Alt modifier keys with the left and middle mouse buttons to perform different functions—look at the mouse/status line at the bottom of the Softimage window for brief descriptions, or read the rest of this section for the details. The right mouse button opens a context menu.

5. The Tweak Component tool remains active, so you can repeat steps 3 and 4 to manipulate other components.

When you have finished, deactivate the tool by pressing Esc or activating a different tool.

Switching between Translation, Rotation, and Scaling

The Tweak Component tool lets you translate, rotate, or scale components. Select the desired transformation using the v, c, and x keys—press and release a key to change the transformation (sticky mode) or press and hold a key to temporarily override the current transformation (supra mode).

• To translate, press v or choose Translate from the context menu.

• To rotate, press c or choose Rotate from the context menu.

Drag an axis to translate in the corresponding direction.

Drag the center to translate freely in the viewing plane.

Drag an axis to rotate in the corresponding direction.

• To scale, press x or choose Scale from the context menu.

The mouse pointer updates to reflect the current action. You can also press Tab to cycle through the three actions, or Shift+Tab to cycle in reverse order.

To activate the standard Translate, Rotate, or Scale tools, you must either deactivate the Tweak Component tool before pressing v, c, or x, or use the t, r, or s buttons on the Transform panel.

Setting Manipulation Modes

The Tweak Component tool uses the manipulation modes shown on the Transform panel. They affect the axes and pivot used for the transformation.

• Global transformations are performed along the scene’s global axes.

• Local transformations use the component’s own reference frame. In this mode, Y is the normal direction.

• View transformations are performed with respect to the viewing plane of the 3D view.

• Object transformations are performed in the local coordinate system of the object that contains the components.

• Ref, or reference, mode lets you transform elements using another component or object as the reference frame. See Setting the Pivot on page 98.

• Plane mode is similar to Ref. It uses the same axes as Ref but the object center as the pivot.

Activating Axes

You can activate or deactivate axes on the Transform panel:

• Click an axis icon to activate it and deactivate the others.

• Shift+click an axis icon to activate it without affecting the others.

• Ctrl+click an axis icon to toggle it.

• Click the All Axes icon to activate all three axes.

• Ctrl+click the All Axes icon to toggle all three axes.

Alternatively, if the Tweak manipulator is displayed, you can activate a single axis by double-clicking it. Double-click the same axis again to reactivate all axes, or double-click a different one to activate it instead.

Drag an axis to scale in the corresponding direction.

Drag the center to scale uniformly.

Individual axes

All Axes

Selecting Components

The Tweak Component tool lets you select components in a similar way to the standard selection tools, but there are some differences.

Selecting, Deselecting, and Extending the Selection

Use the following keyboard and mouse combinations for selection:

• Click a component to select it.

• Shift+click a component to add it to the selection.

• Shift+middle-click to toggle-select a component.

• Ctrl+Shift+click to deselect a component.

• To quickly deselect all components, click anywhere outside the object.

Note that you can only multi-select components of the same type. You cannot select a heterogeneous collection of points, edges, and polygons.

Selecting Loops and Ranges

Use the Alt key to select loops or ranges of components.

To select loops or ranges of components

1. Click to select the first or “anchor” component.

2. Do one of the following:

- Alt+click on a second component to select all components on a path between the two components.

- Alt+middle-click on a second component to select all components in the loop that contains both components.

3. To select additional loops or ranges, use Shift+click to specify a new anchor and then Alt+Shift+click for a new range or Alt+Shift+middle-click for a new loop.

Note that for edge loops, the direction is implied so you can simply Alt+middle-click on an edge to select the loop and then Alt+Shift+middle-click to select additional loops. However, to select parallel edge loops, you still need to specify two components as described above.

Selecting by Type

The Tweak Component tool allows you to manipulate points, edges, and polygons, but you can limit it to a particular type of component if you desire. Use the context menu to activate Tweak All, Points, Edges, Polygons, or Points + Edges.

Setting the Pivot

You can quickly set the pivot by middle-clicking on a component. For example, to rotate a polygon about one of its edges, simply click to select the polygon and then middle-click to specify the edge as the reference. The manipulator does not react to middle-clicks unless Shift is pressed, so you can pick a component even if the manipulator is covering it in a view.

Middle-clicking temporarily switches to Ref manipulation mode. As soon as you select a new component, the previous manipulation mode is restored. If you want to transform several components about the same reference one after another, you should manually switch to Ref mode and then middle-click to specify the reference. In this way, the reference frame does not revert to the default when you select a new component to manipulate.

Using Proportional Modeling

When you manipulate points, edges, and polygons, you can use proportional modeling. When this option is on, neighboring components are affected as well, with a falloff that depends on distance. Proportional modeling is sometimes known as “magnet” or “soft selection”.

To activate proportional modeling, click the Prop button on the Transform panel.

Components that are affected by the proportional falloff are highlighted, and the Distance Limit is displayed as a circle.

You can change the Distance Limit interactively when proportional modeling is active by pressing and holding r while dragging the mouse left or right. You can change the Falloff (Bias) profile by pressing and holding Shift+r while dragging the mouse.

To change other proportional settings, right-click on Prop.
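The core of proportional modeling is a distance-weighted move. A minimal sketch in Python, assuming 2D points and a linear falloff (Softimage also offers shaped Falloff (Bias) profiles; this is illustrative, not the actual tool):

```python
# Sketch of proportional modeling: moving one point also moves its
# neighbors, weighted by a falloff that reaches zero at the distance limit.

def proportional_move(points, index, delta, limit):
    px, py = points[index]
    moved = []
    for (x, y) in points:
        d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
        weight = max(0.0, 1.0 - d / limit)   # linear falloff as a stand-in
        moved.append((x + delta[0] * weight, y + delta[1] * weight))
    return moved

points = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)]
out = proportional_move(points, 0, (0.0, 2.0), limit=2.0)
assert out[0] == (0.0, 2.0)     # the selected point gets the full move
assert out[1] == (1.0, 1.0)     # a neighbor within the limit: half effect
assert out[2] == (5.0, 0.0)     # outside the distance limit: unmoved
```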

Sliding Components

You can slide components with the Tweak Component tool. This helps to preserve the contours of objects as you tweak them.

Sliding an edge moves its endpoints along the adjacent edges by an equal percentage. Sliding a point or a polygon clamps the associated points to the nearest location on the surface of the mesh, as if they had been shrinkwrapped to the original untweaked object. Sliding works only on polygon mesh components.

To activate or deactivate sliding:

• While the Tweak Component tool is active, do one of the following:

- Press j. Press and release the key to toggle sliding on or off (sticky mode) or press and hold it to temporarily override the current behavior (supra mode).

- Click the on-screen Slide icon at the bottom of the view.

- Right-click and choose Slide Components.

Proportional modeling off / Proportional modeling on

Selected edge loop, the effect of sliding, and the effect of ordinary translation for comparison.

Slide Components button

Snapping

You can use the Ctrl key to snap while using the Tweak Component tool:

• Press Ctrl to toggle snapping to targets on or off (depending on its current setting on the Snap panel) while translating.

• Press Ctrl to snap by increments while scaling.

For more information about snapping options, see Snapping on page 72.

Welding Points

You can interactively weld pairs of points on polygon meshes while using the Tweak Component tool. Welding merges points into a single vertex.

To weld points

1. While the Tweak Component tool is active, toggle Weld Points on by doing one of the following:

- Press l. Press and release the key to toggle welding on or off (sticky mode) or press and hold it to temporarily override the current behavior (supra mode).

- Click the on-screen Weld Points icon at the bottom of the view.

- Right-click and choose Weld Points.

2. Click and drag a point. As you move the mouse pointer, the point snaps to points within the region.

3. Release the mouse button over the point you want to weld to.

Note that interactive welding uses the same snapping region size as the Snap tool. You can modify the region size using the Snap menu.

4. Repeat steps 2 and 3 to weld more points, if desired. When you have finished welding, toggle Weld Points off.
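Conceptually, a weld collapses one vertex onto another and updates every polygon that referenced it. The sketch below shows the bookkeeping involved; it is a simplified illustration (assuming `dst` comes before `src` in the point list), not Softimage's implementation.

```python
def weld_points(points, polygons, src, dst):
    """Weld vertex `src` onto vertex `dst`: every polygon that referenced
    `src` now references `dst`, and `src` is dropped from the point list.
    Indices above `src` shift down by one, and consecutive duplicate
    indices (collapsed edges) are removed from each polygon."""
    def remap(i):
        if i == src:
            i = dst
        return i - 1 if i > src else i

    new_points = [p for i, p in enumerate(points) if i != src]
    new_polygons = []
    for poly in polygons:
        mapped = [remap(i) for i in poly]
        # drop consecutive duplicates created by the collapsed edge
        cleaned = [v for k, v in enumerate(mapped) if v != mapped[k - 1]]
        new_polygons.append(cleaned)
    return new_points, new_polygons

# Welding one corner of a quad onto its neighbor turns it into a triangle.
pts, polys = weld_points([(0, 0), (1, 0), (1, 1), (0, 1)], [[0, 1, 2, 3]],
                         src=1, dst=0)
```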

Hiding the Manipulator

If you don’t like working with the manipulator, you can hide or unhide it by clicking the on-screen button at the bottom of the view or by choosing Toggle Manipulator from the context menu.

When the manipulator is off, the Tweak Component tool is always in click-and-drag mode:

• If all axes are active on the Transform panel, translation occurs in the viewing plane and scaling is uniform in local space. If one or more axes have been toggled off, translation and scaling use the current manipulation mode and active axes set on the Transform panel.

• Rotation uses the current manipulation mode and the Y axis by default, but you can select a different axis by deactivating the others.

Weld Points button

Toggle Manipulator button

Manipulating Components Symmetrically

Symmetrical manipulation lets you move points and other components while maintaining the symmetry of an object. Any manipulation performed on components on one side is mirrored to the corresponding components on the other side. Components that lie directly on the plane of symmetry are “locked down”: they can be moved only along the plane of symmetry itself.

There are two ways to do this in Softimage:

• To move components symmetrically in “live” mode, simply activate Sym on the Transform panel. Softimage automatically finds symmetrical components (within a small tolerance) and moves them, too.

• If you will need to maintain a correspondence between points even after an object is no longer symmetrical, you first need to apply a symmetry map (Get > Property > Symmetry Map) while the object is still symmetrical. This allows you to manipulate components symmetrically after a character has been enveloped and posed, for example.

To specify the plane of symmetry or set other options, right-click on Sym.
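Finding a component's symmetric partner "within a small tolerance" can be sketched as a mirror-and-match search. The YZ plane (x = 0) is assumed here for the symmetry plane, and the function names are hypothetical; in Softimage the plane is configurable in the Sym options.

```python
def mirror_yz(p):
    """Reflect a point across the YZ plane (x = 0)."""
    return (-p[0], p[1], p[2])

def find_symmetric(points, index, tol=1e-4):
    """Return the index of the point that mirrors points[index] across
    the YZ plane, within tolerance, or None if there is no match.
    Points lying on the plane map to themselves."""
    target = mirror_yz(points[index])
    for i, p in enumerate(points):
        if all(abs(a - b) <= tol for a, b in zip(p, target)):
            return i
    return None

points = [(-1.0, 2.0, 0.0), (1.0, 2.0, 0.0), (0.0, 3.0, 0.0)]
partner = find_symmetric(points, 0)   # the mirror of point 0 is point 1
on_plane = find_symmetric(points, 2)  # a point on the plane mirrors to itself
```

A point on the plane being its own mirror image is exactly why such points are "locked down": any mirrored move off the plane would contradict itself.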

Alternatives to the Tweak Component Tool

In addition to the Tweak Component tool, Softimage provides many other ways to manipulate components. For example, you could use the regular selection and transformation tools, or some of the other tools on the Modify > Component menu.

Deformations

Deformations are operators that change the shape of geometric objects. Softimage provides a large variety of deformation types available from the Modify > Deform menu of the Model and Simulate toolbars as well as the Deform > Deform menu of the Animate toolbar.

Some deformations, like Bend and Twist, are very simple. Others, like Lattice and Curve, use additional objects to control the effect.

Deformations can be used either as modeling tools or animation tools. Depending on the type of deformation, you can animate the deformation’s own parameters, such as the amplitude of a Push, or the properties of a controlling object, such as the center of a Wave.
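A Push, for example, simply displaces each point along its normal by an amplitude, which is why animating that one parameter animates the whole effect. A minimal sketch (assuming precomputed unit normals; not the actual Push operator):

```python
def push(points, normals, amplitude):
    """Displace each point along its (unit) normal by `amplitude`."""
    return [tuple(p + amplitude * n for p, n in zip(pt, nrm))
            for pt, nrm in zip(points, normals)]

# Pushing a point on a unit sphere outward by 0.5 along its radial normal.
pushed = push([(1.0, 0.0, 0.0)], [(1.0, 0.0, 0.0)], 0.5)
```

Keying `amplitude` over time would inflate and deflate the object, which is the sense in which a deformation's own parameters are animatable.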

Examples of Deformations

Here are just some examples of the many types of deformation and their possible uses.

Deformation by Curve

Lattice Deformation

Wave Deformation

Muting Deformations

All deformations can be muted, which temporarily disables their effect. To mute a deformation, activate Mute in its property editor. Alternatively, right-click on its operator in an explorer and choose Mute.

Object and curve before the deformation is applied

Object and curve after the deformation is applied

Circular wave

Planar wave

Section 6

Curves

Softimage provides a full set of tools for creating and editing curves in 3D space. Although curves can’t be rendered by themselves, they form the basis of many modeling and animation techniques.

What you’ll find in this section ...

• About Curves

• Drawing Curves

• Manipulating Curve Components

• Modifying Curves

• Inverting Curves

• Importing EPS Files

About Curves

In Softimage, you can use curves to:

• Build objects, for example, by revolving, extruding, or using Curves to Mesh.

• Deform objects, for example, using curve or spine deformations.

• Create paths and trajectories for animation.

Curves are linear (degree 1) or cubic (degree 3) NURBS (Non-Uniform Rational B-Splines). NURBS are a class of curves that computers can easily manipulate, allowing for a great deal of flexibility in modeling.

Curve Components

Curves have many components. You can display these components using the options on a viewport’s Show menu (eye icon) and select them using the filters on the Select panel.

Drawing Curves

Softimage has tools and commands that let you draw and manipulate curves in a variety of ways.

In Softimage, you can draw and manipulate two types of curve: linear and cubic. Linear curves are composed of straight segments, and cubic curves are composed of curved segments.

On a cubic curve, each knot can have a multiplicity of 1, 2, or 3. This value refers to the number of control points associated with the knot. In general, knots with higher multiplicity are less smooth but provide more control over the trace of the curve. A knot with multiplicity 3 is like a Bézier point, with one control point at the position of the knot and the other two control points acting as the tangent handles.

The Tweak Curve tool allows you to manipulate these knots in a Bézier-like manner—see Manipulating Curve Components on page 107. Whether the back and forward tangents remain aligned depends on how you manipulate them—it is not a property of the knot itself.
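One way to see why a multiplicity-3 knot behaves like a Bézier point: when every knot has full multiplicity, de Boor's B-spline evaluation reduces to de Casteljau's Bézier algorithm, and the curve passes through the end control points. The sketch below uses scalar control points and unit weights (i.e., a plain B-spline rather than a full rational NURBS) purely for illustration.

```python
def de_boor(x, t, c, p=3):
    """Evaluate a degree-p B-spline with knot vector t and control
    points c (scalars, for simplicity) at parameter x."""
    n = len(c)
    # find the knot span k with t[k] <= x < t[k+1], clamping at the end
    if x >= t[n]:
        k = n - 1
    else:
        k = p
        while t[k + 1] <= x:
            k += 1
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Full-multiplicity knots make the cubic span a Bezier segment:
knots = [0.0] * 4 + [1.0] * 4
ctrl = [0.0, 1.0, 3.0, 4.0]
start = de_boor(0.0, knots, ctrl)  # the curve passes through the first point
mid = de_boor(0.5, knots, ctrl)    # the Bezier midpoint (c0+3c1+3c2+c3)/8
end = de_boor(1.0, knots, ctrl)    # and through the last point
```

With lower multiplicities the curve no longer interpolates the control points, which is the trade-off between smoothness and direct control described above.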

Cubic curves are interpolated between points.

Hulls join points.

Knots lie on the curve.

NURBS Boundaries show the beginning of the curve (U = 0).

Segments are the spans between knots.

Linear Curve

Cubic Curve: knot has multiplicity 1.

Cubic Curve: knot has multiplicity 2.

Cubic Curve: knot has multiplicity 3 (Bézier).

Bézier knots also allow you to create straight segments by rotating the tangents to point at adjacent knots, so that four control points are lined up in a row. Again, whether the control points remain lined up depends on how you manipulate the adjacent knots—it is not a property of the segment. See Drawing a Combination of Linear and Curved Segments on page 106.

You can draw cubic or linear curves by clicking to place control points or to place knots. Use one of the following commands from the Create > Curve menu of the Model or Animate toolbar:

• Draw Cubic by CVs allows you to place control points (also known as control vertices or CVs). The curve does not pass through the locations you click but is a weighted interpolation between the control points. As you add more points, the existing knot positions may change but the point positions do not.

• Draw Cubic by Bézier-Knot Points allows you to place knots of multiplicity 3. The curve passes through the points you click. As you add more knots, the positions of the control points are automatically adjusted to ensure maximum smoothness of the curve as the curve passes through the existing knot positions.

• Draw Cubic by Knot Points allows you to place knots of multiplicity 1. Again, the curve always passes through the locations you click and the positions of the control points are automatically adjusted as you add more knots.

• Draw Linear allows you to draw lines of connected straight segments (sometimes called polylines). The straight segments meet at the locations you click.

To add points or knots to an existing curve, use the corresponding commands on the Modify > Curve menu. To remove points or knots, select them and press Delete.

The choice between linear, cubic Bézier, and cubic non-Bézier drawing tools depends on the situation. When creating profiles for modeling, linear curves give a good sense of the final result. For paths, you’ll want cubic curves—non-Bézier curves are smoother but you may find Bézier curves easier to control. Bézier curves also give you the ability to have sharp corners, and to mix curved and straight segments. The choice between placing control points or placing knots to draw cubic non-Bézier curves is simply a matter of personal preference.

While drawing a curve:

• To add a point at the end of the curve, use the left mouse button.

• To add a point between two existing points, use the middle mouse button.

• To add a point before the first point, first right-click and choose LMB = Add at Start and then use the left mouse button. To return to adding points at the end of the curve, first right-click and choose LMB = Add at End.

• Other useful commands are available on the context menu when you right-click: Open/Close, Invert, Start New Curve, and, of course, Exit Tool.

Before you release the mouse button, you can drag the mouse to adjust the point’s location. Snapping can also be very useful for controlling the position of points and knots. While drawing, you can move any point or knot by pressing and holding m while dragging to activate the Tweak Curve tool in supra mode.

Broken tangents create a sharp corner.

Four control points create a straight segment when they are lined up.

Drawing a Combination of Linear and Curved Segments

Although Softimage does not support having linear and cubic NURBS segments in the same subcurve, you can use Bézier knots to obtain straight segments on a cubic curve:

• If you have already begun drawing a linear curve, make it cubic using Modify > Curve > Raise Degree and then use Modify > Curve > Add Point Tool by Bézier-Knot Points to draw curved sections. If you want the last-drawn segment to remain straight, press Shift while adding knots to preserve the existing trace.

• If you have already begun drawing a cubic curve, place the knots where you want them and then straighten the desired segments as described in Creating Straight Segments on page 109.

Straight segments are not inherently linear; whether they remain straight depends on how you manipulate them. Using the Tweak Curve tool to move a knot preserves the linearity, but the linearity breaks if you move a tangent or use another tool.

Setting Knot Multiplicity

You can change the multiplicity of a knot to suit your needs. For example, reducing the multiplicity makes a curve smoother, but increasing the multiplicity to 3 allows you to use Bézier controls and make sharp angles.

1. Select one or more knots on a cubic curve. To affect all knots on one or more curves, select the curve objects instead.

2. Choose one of the following commands from the Modify > Curve menu of the Model toolbar:

- Make Knots Bezier sets the multiplicity of the selected knots to 3.

- Make Knots Non-Bezier sets the multiplicity of the selected knots to 1.

- Set Knots Multiplicity opens the Set Crv Knot Multiplicity Op property editor, where you can set the multiplicity of the selected knots to 0, 1, 2, or 3. Setting it to 0 is equivalent to removing the knot.

If you will be using curves as profiles for modeling, you should draw them in a counterclockwise direction. This ensures that the normals of any surface or polygon mesh you create from the curves will be oriented correctly. If you will be using curves as paths for animation or extruding, you should draw them from beginning to end. Otherwise, you may need to invert the curves or generated objects later.

Manipulating Curve Components

The main tool for manipulating curve components is Tweak Curve. It allows you to manipulate curves in a Bézier-like manner. In addition to Bézier knots, you can manipulate non-Bézier knots, control points, and isopoints.

1. Select a curve and activate the Tweak Curve tool by pressing m or choosing Modify > Curve > Tweak Curve from the Model toolbar.

Note that pressing m when a curve is not selected will activate the Tweak Component tool instead.

2. As you move the mouse pointer close to a knot, the manipulator jumps to it. Click and drag the manipulator’s handles to adjust the knot’s position, tangent angle, or tangent length.

Handle on a Bézier knot

Drag the square handle to move the tangent freely.

Use the middle mouse button to drag one side independently. Once the tangent is broken in this way, the handles always move independently until you align them again.

Shift+drag to scale the tangent length without affecting the slope. Again, use the middle mouse button to scale one side independently.

Drag the round handle to rotate the tangent without changing its length.

Use the middle mouse button to rotate one side independently.

If the handles have been broken and you want to maintain their relative angle while rotating them, right-click on the manipulator and choose LMB Binds Broken Tangents.

Drag the central knot to move it freely. The tangent handles maintain their relative positions to the knot, unless an adjacent segment is linear (four control points lined up). In that case, the tangent handles are automatically adjusted to maintain the linearity of the segment.

Use the middle mouse button to drag the central knot while leaving the tangent points in place.

Handle on a non-Bézier point

Drag the round handle to rotate the tangent without changing its length.

Drag a control point to move it and affect the trace of the curve indirectly.

Drag the square handle to move the tangent freely.

Press Shift to scale the tangent length without affecting the slope.

Drag the knot (or isopoint) to move it freely.

You can also:

- Click and drag a control point to move it to a new location.

- Select an isopoint by clicking on a curve segment between knots. A manipulator appears at the isopoint. To select an isopoint that is very close to a knot, you can click on the curve farther away and then slide the mouse pointer closer before releasing the button.

- Right-click on a knot or isopoint manipulator to access a context menu containing commands that affect that point, as well as other tool options.

Note that if you right-click on a selected knot (or on another part of the curve while knots are selected), the context menu is different (although many of the same items are available on both menus). In this case, the commands apply to all selected knots and not just the one under the mouse pointer.

- Click and drag a rectangle across one or more knots to select them. Use Shift to add to the selection, Ctrl to toggle, or Ctrl+Shift to deselect. This allows you to apply commands to multiple selected knots using the context menu or the Modify > Curve menu.

3. The Tweak Curve tool remains active, so you can repeat step 2 as often as you like. When you have finished, exit the tool by pressing Esc or activating a different tool.

Note that if you move an isopoint that is adjacent to Bézier knots, the tangents will break. If desired, first add a Bézier knot at the isopoint’s location to preserve continuity.

Breaking and Aligning Bézier Tangents

On a Bézier knot, the back and forward tangents can have different orientations. When the tangents are “broken” or “unlinked” in this way, the result is a sharp corner.

Breaking Tangents

To break Bézier tangents and adjust the handles independently of each other, use the middle mouse button while using the Tweak Curve tool.

Aligning Tangents

After tangent handles have been broken, they can be realigned to make the curve smooth again at that point. Select one or more Bézier knots and choose one of the following commands from the Modify > Curve menu on the Model toolbar:

• Align Bezier Handles sets the slopes of both tangents to their average orientation.

• Align Bezier Handles Back to Forward sets the slope of the back tangent equal to that of the forward tangent.

• Align Bezier Handles Forward to Back sets the slope of the forward tangent equal to that of the back tangent.

“Back” and “forward” are considered in terms of the curve’s parameterization from start to end point.

Broken tangents

Aligned tangents

Creating Straight Segments

You can create straight segments on curves using the commands available on the Modify > Curve menu of the Model toolbar, or on the context menu of the Tweak Curve tool. Softimage creates Bézier knots, if necessary, and rotates the appropriate tangents to point at the adjacent knots. Once a straight segment has been created this way, the Tweak Curve tool maintains the linearity when you move the adjacent knots. However, the segment will revert to a curve if you adjust the tangent handles, or if you use a different tool to move control points.

To straighten segments between knots

1. Select the knots at both ends of each segment you want to straighten. You must do this individually for each segment you want to straighten, even if segments are consecutive.

2. Choose Modify > Curve > Make Knot Segments Linear from the Model toolbar.

The segments between selected knots become straight.

To straighten segments adjacent to a knot

1. Select a curve.

2. Activate the Tweak Curve tool (press m).

3. Move the mouse pointer over an unselected knot.

4. Right-click and choose one of the following commands from the context menu:

- Make Adjacent Knot Segments Linear straightens both segments connected to the knot.

- Make Fwd Knot Segment Linear straightens the forward segment.

- Make Bwd Knot Segment Linear straightens the back segment.

“Back” and “forward” are considered in terms of the curve’s parameterization from start to end point.

Alternatives to the Tweak Curve Tool

In addition to the Tweak Curve tool, Softimage provides many other ways to manipulate components. For example, you could use the regular selection and transformation tools, or some of the other tools on the Modify > Component menu.

Modifying Curves

The Modify > Curve menu of the Model toolbar contains a variety of commands you can use to modify curves in various ways. Two of the more common modifications are inverting and opening/closing, but there are other operations you can perform as well.

Opening and Closing Curves

Modify > Curve > Open/Close opens a closed curve and closes an open one.

Inverting Curves

Modify > Curve > Invert switches the start and end points of a curve. The result is as if you had drawn the curve clockwise instead of counterclockwise or vice versa.

For example, if an object uses the curve as a path, it moves in the opposite direction once you invert the curve. Similarly, if a surface has been built from the curve and its operator stack was not frozen, its normals become reversed.

Creating Curves from Other Objects

Many of the commands on the Create > Curve menu of the Model toolbar allow you to create curves based on other objects in your scene. The illustrations here give you an idea of just some of the possibilities.

Extracting Curve Segments

Fitting Curves onto Curves

Creating Curves from Intersecting Surfaces

Open curve Closed curve

Original curve Extracted segment

Original sketched curve. New curve fitted onto the sketched curve.

Intersection between two surfaces

Blending Curves

Filleting Curves

Creating Curves from Animation

If you have animated the translation of an object, you can use Tools > Plot > Curve from the Animate toolbar to plot the motion of its center and generate a curve. For example, this can be used to create a trajectory curve. You can also plot the movement of a selected point or cluster.

Importing EPS Files

Use File > Import > EPS File from the main menu to import curves saved as EPS (encapsulated PostScript) and AI (Adobe Illustrator) files from a drawing program. Once in Softimage, you can convert them to polygon meshes using Create > Poly. Mesh > Curves to Mesh to create planar or extruded logos.

Preparing EPS and AI Files for Import

There are some restrictions on the files you can import. Follow these guidelines:

• Make sure the file contains only curves. Convert text and other elements to outlines.

• Save or export the file as version 8 or earlier.

• Do not include a TIFF preview header.

Original curves New blend curve

Intersecting curves Fillet between them

Section 7

Polygon Mesh Modeling

Polygon meshes are one of the basic renderable geometry types in Softimage. They are ideally suited for modeling non-organic objects with hard edges and corners, but they can also be used to approximate smooth, organic objects. Polygon meshes are widely used in game development because of the requirements of most game engines, and they are also the basis of subdivision surfaces.

What you’ll find in this section ...

• Overview of Polygon Mesh Modeling

• About Polygon Meshes

• Converting Curves to Polygon Meshes

• Drawing Polygons

• Subdividing

• Drawing Edges

• Extruding Components

• Removing Polygon Mesh Components

• Combining Polygon Meshes

• Symmetrizing Polygons

• Cleaning Up Meshes

• Reducing Polygons

• Subdivision Surfaces

Overview of Polygon Mesh Modeling

There are three basic approaches to modeling with polygon meshes.

Box Modeling

Box modeling starts with a primitive like a cube, then adds subdivision and shapes it by deforming, adding edges, extruding, and so on.

Modeling with Curves

When you model with curves, you begin with curves outlining the basic shape of your object and convert them to polygon meshes. You can then continue to add detail using any techniques you like.

Polygon-by-polygon Modeling

With polygon-by-polygon modeling, you draw each polygon directly.

About Polygon Meshes

When working with polygon meshes, there are some basic concepts you should understand.

Polygons

A polygon is a closed 2D shape formed by straight edges. The edges meet at points called vertices; a polygon always has exactly as many vertices as edges. The simplest polygon is a triangle.

Polygons are classified by the number of edges or vertices. Triangles and quadrilaterals (or quads) are the most commonly used for modeling. Triangles have the advantage of always being planar, while quads give better results when used as the basis of subdivision surfaces. Certain game engines may require that objects be composed entirely of triangles or quads.

Polygons that are very long and thin, or that have extremely sharp angles, can give poor results when deforming or shading. Polygons that are regularly shaped, with all edges and angles being almost equal, generally give the best results.

Triangle Quad N-gon

Polygon Meshes

A polygon mesh is a 3D object composed of one or more polygons. Typically these polygons share edges to form a three-dimensional patchwork.

However, a single polygon mesh object can also contain discontiguous sections that are not connected by edges. These disconnected polygon “islands” can be created by drawing them directly or by combining existing polygon meshes.

Types of Polygon Mesh Components

Polygon meshes contain several different types of component: points (vertices), edges, and polygons.

• Points are the vertices of the polygons. Each point can be shared by many adjacent polygons in the same mesh.

• Edges are the straight line segments that join two adjacent points. Edges can be shared by no more than two polygons.

Edges that are not shared represent the boundary of the polygon mesh object and are displayed in light blue if Boundaries and Hard Edges are visible in a 3D view.

• Polygons are the closed shapes that make up the “tiles” of the mesh.

Planar and Non-planar Polygons

When an individual polygon on a polygon mesh is completely flat, it is called planar. All its vertices lie in the same plane, and are thus coplanar. Planar polygons give better results when rendering.

Triangles are always planar because any three points define a plane. However, quadrilaterals and other polygons can become non-planar, particularly as you move vertices around in 3D space. When objects are automatically tessellated before rendering, non-planar polygons are divided into triangles. However, other applications such as game engines may not support non-planar polygons properly.
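The planarity test itself is simple geometry: a polygon is planar when every vertex lies in the plane defined by its first three. A sketch using a scalar triple product (illustrative helper names, not a Softimage API):

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def is_planar(vertices, tol=1e-6):
    """A polygon is planar if every vertex lies in the plane of the
    first three: each remaining vertex's offset from that plane
    (dotted with the plane normal) must be near zero."""
    if len(vertices) <= 3:
        return True  # any three points define a plane
    v0 = vertices[0]
    normal = cross(sub(vertices[1], v0), sub(vertices[2], v0))
    return all(abs(dot(normal, sub(v, v0))) <= tol for v in vertices[3:])

flat_quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
bent_quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, -0.5)]  # point moved down
```

Moving a single vertex of the quad off the plane, as in the figure, is exactly what makes `bent_quad` fail the test.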

A polygon mesh sphere

Edge

Polygon

Point

Non-planar polygon created by moving a point below the ground plane.

Planar polygon on the ground plane with normals visible.

Valid Meshes

Softimage has strict rules for valid polygon mesh structures and won’t let you create an invalid mesh. Some of the rules are:

• Every point must belong to at least one polygon.

• Every edge must belong to at least one polygon.

• A given point can be used only once in the same polygon.

• All edges of a single polygon must be connected to each other. Among other things, this means that you cannot have a hole in a single polygon. To get a hole in a polygon mesh, you must have at least two polygons.

• Edges cannot be shared by more than two polygons. Tri-wings are not supported. To connect three polygons in this way, a double edge is required.

• Softimage does support one case of non-manifold geometry. A single point can be shared by two otherwise unconnected parts of a single mesh object.

If you export geometry from Softimage, remember that such geometry may not be considered valid by other applications.
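The "no more than two polygons per edge" rule can be checked by counting how often each undirected edge appears across the mesh's polygons. A sketch of that check (hypothetical function names, polygons given as lists of vertex indices):

```python
from collections import Counter

def edge_counts(polygons):
    """Count how many polygons use each undirected edge of a mesh."""
    counts = Counter()
    for poly in polygons:
        for i, v in enumerate(poly):
            a, b = v, poly[(i + 1) % len(poly)]
            counts[(min(a, b), max(a, b))] += 1
    return counts

def has_tri_wing(polygons):
    """An edge shared by more than two polygons (a 'tri-wing') makes
    the mesh invalid in Softimage."""
    return any(n > 2 for n in edge_counts(polygons).values())

fan = [[0, 1, 2], [0, 1, 3], [0, 1, 4]]  # edge (0, 1) used by three polygons
pair = [[0, 1, 2], [0, 2, 3]]            # edge (0, 2) shared by exactly two
```

Edges with a count of 1 are the light-blue boundary edges described above; a count of 3 or more is the tri-wing case that Softimage rejects.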

Controlling Shading on Meshes

Use the mesh’s Geometry Approximation property to control whether the shading is smooth or faceted across polygons. If the object doesn’t already have a Geometry Approximation property, choose Get > Property > Geometry Approximation from any toolbar.

The Discontinuity parameters on the Polygon Mesh page of the Geometry Approximation property editor control whether the objects are faceted or smooth at the edges.

Hole in a polygon mesh: at least two polygons are required.

A non-manifold geometry that is valid in Softimage.

Faceted polygons are appropriate for geometric shapes like dice.

Smooth polygons are appropriate for organic shapes like faces.

The illusion of smoothness is created by averaging the normals of adjacent polygons. When normals are averaged in this way, the shading is a smooth gradient along the surface of a polygon. When normals are not averaged, there is an abrupt change of shading at the polygon edges.

The Automatic option lets you turn off the averaging of normals for sharper edges, and the Discontinuity Angle lets you specify how sharp edges must be before they appear faceted. If the dihedral angle (the angle between the normals) of two adjacent polygons is less than the Discontinuity Angle, the normals are averaged; otherwise, they are not.

You can achieve different effects by adjusting these two parameters:

• If Automatic is on, then the Angle determines the threshold for faceted polygons.

• If Automatic is on and Angle is 0, the object is completely faceted.

• If Automatic is off, the object is completely smooth.
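The logic above can be sketched directly: compute the dihedral angle from the two unit normals and compare it to the threshold. The function names are hypothetical; this mirrors the described Geometry Approximation behavior rather than quoting Softimage's code.

```python
import math

def dihedral_angle(n1, n2):
    """Angle in degrees between the unit normals of two adjacent polygons."""
    d = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(d))

def shared_edge_is_smooth(n1, n2, automatic, angle):
    """With Automatic off, the surface is completely smooth; with it on,
    normals are averaged only when the dihedral angle is below the
    Discontinuity Angle."""
    if not automatic:
        return True
    return dihedral_angle(n1, n2) < angle

up = (0.0, 0.0, 1.0)
side = (1.0, 0.0, 0.0)  # a 90-degree edge, like the corner of a cube
smooth = shared_edge_is_smooth(up, side, automatic=True, angle=60.0)
```

A cube edge (90 degrees) stays faceted under a 60-degree threshold, while gently curved surfaces, whose adjacent normals differ by only a few degrees, shade smoothly.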

Discontinuity on Selected Edges

In addition to setting the geometry approximation for an entire object, you can make selected edges discontinuous by marking them as “hard” using Modify > Component > Mark Hard Edge/Vertex from the Model toolbar. Hard edges are displayed in dark blue when Boundaries and Hard Edges is checked on a viewport’s Show menu (eye icon).

Dihedral angles: flatter edges have small angles and sharper edges have large angles.

Flat edges: normalsaveraged, smooth shading

Sharp edges: normals not averaged, faceted

Selected edges marked as hard.

Converting Curves to Polygon Meshes

Use Create > Poly. Mesh > Curves to Mesh from the Model toolbar to create a polygon mesh based on the selected curves.

Tessellating

Tessellation is the process of tiling the curves’ shapes with polygons. Softimage offers three different tessellation methods:

• Minimum Polygon Count uses the least number of polygons possible but yields irregular polygons.

• Delaunay generates a mesh composed entirely of triangular polygons. This method gives consistent and predictable results, and in particular, it will not give different results if the curves are rotated.

• Medial Axis creates concentric contour lines along the medial axes (averages between the input boundary curves), morphing from one boundary shape to the next. This method creates mainly quads with some triangles, so it is well-suited for subdivision surfaces.

Other Options

In addition to controlling the tessellation, there are many other options to control holes, extrusion, beveling, embossing, and so on.

Exterior closed curves become disjoint parts of the same mesh object.

Interior closed curves can become holes.

Drawing Polygons

Modify > Poly. Mesh > Add/Edit Polygon Tool is a multi-purpose tool that lets you draw polygons interactively by placing vertices. You can use it to add polygons to an existing mesh, to add or remove points on existing polygons, or to create a new polygon mesh object.

1. Do one of the following:

- To create a new polygon mesh object, first make sure that no polygon meshes are currently selected.

or

- To add polygons to an existing polygon mesh object, select the mesh first.

or

- To add or remove points on an existing polygon in an existing polygon mesh object, select that polygon.

2. Choose Modify > Poly. Mesh > Add/Edit Polygon Tool from the Model toolbar or press n.

3. Do one of the following:

- Click in a 3D view to add a point. If necessary, you can adjust the position by moving the mouse pointer before releasing the button.

or

- Click an existing point on another polygon in the same mesh to attach the current polygon to it.

or

- Click an existing edge of another polygon in the same mesh to attach the current polygon to it.

or

- Left-click and drag on a vertex of the current polygon to move it.

or

- Middle-click a vertex of the current polygon to remove it.

As you move the mouse pointer, the edges that would be created are outlined in red. To insert the new point between a different pair of vertices of the current polygon, first move the mouse across the edge connecting them.

The direction of the normals is determined by the direction in which you draw the vertices. If the vertices are drawn in a counterclockwise direction, the normals face toward the camera and if drawn clockwise, they face away from the camera. As you draw, red arrows indicate the order of the vertices.

4. When you have finished drawing a polygon, do one of the following:

- To start a new polygon and automatically share an edge with the current one, first move the mouse pointer across the desired edge and then click the middle mouse button. Repeat step 3 as necessary.

or

- To start a new polygon without automatically sharing an edge, click the right mouse button. Repeat step 3 as necessary.

or

- When you are finished drawing polygons, exit the Add/Edit Polygon tool by clicking the right mouse button twice in a row, by choosing a different tool, or by pressing Esc.
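As a side note on the winding rule in step 3, the direction of a polygon's normal can be computed from its vertex order with generic vector math (Newell's method). This sketch is illustrative only and is not part of the Softimage scripting API:

```python
# Compute a polygon's unit normal from its vertex winding (Newell's method).
def polygon_normal(verts):
    nx = ny = nz = 0.0
    for i, (x1, y1, z1) in enumerate(verts):
        x2, y2, z2 = verts[(i + 1) % len(verts)]
        nx += (y1 - y2) * (z1 + z2)
        ny += (z1 - z2) * (x1 + x2)
        nz += (x1 - x2) * (y1 + y2)
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

# A square in the XY plane, seen from +Z (that is, from the camera):
ccw = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]  # counterclockwise
cw = list(reversed(ccw))                            # clockwise

polygon_normal(ccw)  # (0.0, 0.0, 1.0): faces the camera
polygon_normal(cw)   # (0.0, 0.0, -1.0): faces away
```

With a camera looking down the -Z axis, a +Z normal points toward the viewer, matching the counterclockwise rule described above.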

Page 120: XSI guia basica

Section 7 • Polygon Mesh Modeling

120 • Softimage

Subdividing

You can subdivide polygon meshes to add more detail where needed.

Subdividing Polygons and Edges Evenly

You can subdivide polygons and edges evenly using Modify > Poly. Mesh > Subdivide Polygons/Edges from the Model toolbar. Select specific polygons or edges first, or just select a polygon mesh object to subdivide all polygons.

For polygons, you can choose different subdivision types:

For edges, you can connect the new points and extend the subdivision to a loop of parallel edges (that is, the opposite edges of quad polygons):

Subdividing Polygons with Smoothing

You can subdivide and smooth selected polygons using Modify > Poly. Mesh > Local Subdivision from the Model toolbar.

Splitting Edges

You can split edges interactively using Modify > Poly. Mesh > Split Edge Tool from the Model toolbar. Activate this tool then click an edge to split it. Use the middle mouse button to split parallel edges. Press Ctrl while clicking to bisect edges evenly.

Other Ways to Subdivide

The Modify > Poly. Mesh menu of the Model toolbar contains many other tools and commands that can subdivide and add detail to polygon meshes. For example:

• Add Vertex Tool

• Split Polygon Tool

• Split Edges (with split control)

• Dice Polygons

• Slice Polygons

Polygon subdivision types: Plus, Diamond, Triangles, X.

Edge subdivision options: Parallel Edge Loop and Connect both off; Connect on; Parallel Edge Loop and Connect both on; Parallel Edge Loop on.

Page 121: XSI guia basica

Drawing Edges

Choose Modify > Poly. Mesh > Add Edge Tool from the Model toolbar to split or cut polygons interactively by drawing new edges. You can use this tool to freeform or redraw your object’s flow lines.

1. Select a polygon mesh object.

2. Choose Modify > Poly. Mesh > Add Edge Tool from the Model toolbar or press \ .

3. Start a new edge by clicking on an existing edge or point.

You can also press Alt while clicking to start in the middle of a polygon and automatically connect to the nearest edge by a triangle.

4. If desired, click in the interior of a polygon to add a point. You can repeat this step to add as many interior points as you like, creating a polyline, before terminating it.

5. Terminate the new edge by clicking or middle-clicking on an existing edge or point.

You can also:

- Press Ctrl while clicking or middle-clicking an edge to bisect it evenly.

- Press Shift while clicking or middle-clicking an edge to ensure that the angle between the new edge and the target edge snaps to multiples of the Snap Increments - Rotate value set in your Transform preferences. For example, if Snap Increments - Rotate is 15, then the new edge will snap at 15 degrees, 30 degrees, 45 degrees, and so on. Angles are calculated in screen space.

- Press Ctrl+Shift while clicking or middle-clicking an edge to attach the new edge at a right angle to the target edge. The angle is calculated in object space.

- Press Alt while clicking in the middle of the polygon to add a point and connect it to the nearest edge by a triangle.
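The Shift behavior above is plain rounding arithmetic. A minimal sketch, assuming Snap Increments - Rotate is set to 15:

```python
# Snap an angle (in degrees) to the nearest multiple of the rotate increment.
def snap_angle(angle, increment=15.0):
    return round(angle / increment) * increment

snap_angle(22.0)  # 15.0
snap_angle(23.0)  # 30.0
snap_angle(44.0)  # 45.0
```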

If you are trying to attach a new edge to an existing edge or vertex, and the target does not become highlighted when you move the pointer over it, it means that you cannot attach the new edge at that location because it would create an invalid mesh.

6. To continue adding edges starting at a new location, right-click and then repeat steps 3 to 5.

To exit the Add Edge tool, press Esc or choose a different tool.

Click inside a polygon to add an interior point.

Click to continue drawing edges from the last point.

Middle-click to continue drawing edges from the previous point.

You cannot attach the edge to this point.

Page 122: XSI guia basica

Extruding Components

You can extrude polygon mesh components to create local details, such as indentations or protuberances like limbs and tentacles. You can extrude polygons, edges, or points.

Extruding Components

1. Select one or more components on a polygon mesh, and then press Ctrl+d or choose Edit > Duplicate/Instantiate > Duplicate Single.

2. Use the transform tools or the Tweak Component tool to translate, rotate, and scale the extruded components as desired.

If you want to adjust other properties, open the Extrude Op property editor in the stack.

Extruding with Options

To display additional options when extruding, select one or more components and press Ctrl+Shift+d or choose Modify > Polygon Mesh > Extrude Along Axis. This lets you control whether adjacent components are extruded separately or together, as well as specify the subdivisions, inset, transformations, and other values.

Extruding Along a Curve

You can get more control over the shape of an extrusion by using a curve. Select one or more components, choose Modify > Polygon Mesh > Extrude Along Curve, and then pick the curve.

Duplicating Polygons

Duplicating is similar to extruding, but the polygons are not connected to the original geometry. This is useful for building repeating forms like steps or railings. Choose Modify > Polygon Mesh > Duplicate, or check Duplicate Polygons in the Extrude Op property editor.

Page 123: XSI guia basica

Removing Polygon Mesh Components

There are several different ways to remove polygon mesh components using different commands from the Modify > Poly. Mesh menu: Delete Components, Collapse Components, Dissolve Components, and Dissolve and Clean Adjacent Vertices.

When components are selected, pressing Delete performs different actions:

• Points and edges are dissolved and adjacent vertices are cleaned.

• Polygons are deleted.

Deleting Polygon Mesh Components

Deleting removes selected components and anything attached to them, leaving empty holes.

Collapsing Polygon Mesh Components

Collapsing removes selected components and reattaches the adjacent ones, creating no new holes.

Dissolving Components

Dissolving removes selected components and then fills in the holes with new polygons.

Dissolving Components and Cleaning Vertices

Cleaning automatically collapses vertices that are shared by only two edges after dissolving, but were shared by more before.

Deleting selected point

Collapsing selected edge

Dissolving selected polygons

Vertices already shared by two edges are not collapsed. Vertices shared by three or more edges are not collapsed.

Vertices shared by two edges after dissolving are collapsed.

Selected polygons will be dissolved.

Before and after dissolving and cleaning vertices.

Page 124: XSI guia basica

Combining Polygon Meshes

You can combine two or more polygon mesh objects into a single new one. Select all the meshes you want to combine, then choose Create > Poly. Mesh > Blend or Merge from the Model toolbar.

The two commands differ in how they treat boundary edges on different objects when the boundaries are close to each other.

• With Blend, nearby boundaries on different objects are joined by new polygons.

• With Merge, nearby boundaries on different objects are merged into a single edge at the average position.

There is a Tolerance parameter for determining the maximum distance in Softimage units between boundaries for them to be considered “nearby”.

Other Ways of Combining Meshes

You can also combine meshes using the Boolean commands on the Create > Poly. Mesh and Modify > Poly. Mesh menus.

Blended object: near boundaries are joined.

Far boundaries are not joined

Original objects

Merged object: near boundaries are merged.

Far boundaries are not merged

Page 125: XSI guia basica

Symmetrizing Polygons

You can model one half of a polygon mesh object and then symmetrize it. This creates new polygons that mirror the geometry on the original side.

1. Model the polygons on one side of the object. In the example below, an ornamental curlicue was added to the hilt of the dagger.

2. Prepare the other side of the object for symmetrization. For example, if you intend to merge the symmetrized portions by welding or bridging, then you may need to create holes for the new polygons to fit and add vertices to aid the merge.

3. Select the polygons to be symmetrized. You can symmetrize the whole object or just a portion.

4. Choose Modify > Poly. Mesh > Symmetrize Polygons from the Model toolbar.

5. In the Symmetrize Polygon Op property editor, set the parameters as desired, for example, to specify the plane of symmetry.

Model one side of the object.

Prepare the other side.

Select the desired polygons.

The finished dagger.

Page 126: XSI guia basica

Cleaning Up Meshes

You can filter polygon mesh objects to clean them up. Filtering removes components that match certain criteria, for example, small components that represent insignificant detail.

Filtering Edges

Modify > Poly. Mesh > Filter Edges on the Model toolbar removes edges by collapsing them based on either their length or angle. In both cases, you can protect boundary edges using Keep Border Edges Intact.

Edge filtering is especially useful for reducing the triangulation on polygon meshes generated by Boolean operations.

Filtering Points

Modify > Poly. Mesh > Filter Points on the Model toolbar welds together vertices that are within a specified distance from each other. Among other things, this can be very useful for fixing disconnected polygons in “exploded” meshes which can occur when meshes are exported from some other programs.

• Average position welds each clump of points in the selection together at their average position.

• Selected point welds each clump of points in the selection together at the position of the point that is nearest to the average position.

• Unselected point welds each selected point to an unselected point on the same object.
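Distance-based welding can be pictured as grouping points into clumps and replacing each clump with one position. Here is a naive sketch of the Average position mode; it is illustrative only, not the actual Softimage algorithm:

```python
# Group points that lie within `tolerance` of some point already in a clump,
# then weld each clump at its average position.
def weld_points(points, tolerance):
    clumps = []
    for p in points:
        for clump in clumps:
            if any(sum((a - b) ** 2 for a, b in zip(p, q)) <= tolerance ** 2
                   for q in clump):
                clump.append(p)
                break
        else:
            clumps.append([p])
    return [tuple(sum(axis) / len(clump) for axis in zip(*clump))
            for clump in clumps]

weld_points([(0, 0, 0), (0.05, 0, 0), (1, 0, 0)], tolerance=0.1)
# the two nearby points weld to (0.025, 0, 0); the far point is untouched
```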

Filtering Polygons

Modify > Poly. Mesh > Filter Polygons removes polygons based on their area or their dihedral angles:

• When you filter polygons by angle, adjacent polygons are merged together if their dihedral angle is less than the threshold you specify. Small angles correspond to flat areas, so this method preserves sharp detail.

• When you filter polygons by area, the smallest polygons are removed. This eliminates small, “noisy” details.

Reducing Polygons

The Modify > Poly. Mesh > Polygon Reduction command on the Model toolbar lightens a heavy object by reducing the number of polygons, while still retaining a useful fidelity to the shape of the original high-resolution version. For example, you can use polygon reduction to meet maximum polygon counts for game content, or to reduce file size and rendering times by simplifying background objects.

Polygon reduction also allows you to generate several versions of an object at different levels of detail (LODs).

Polygon reduction works by collapsing edges into points. Edges are chosen according to their “energy”, which is a metric based on their length, orientation, and other criteria. In addition, you have options to control the extent to which certain features, such as quad polygons, are preserved by the process.
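The edge-collapse idea can be sketched in a few lines. Edge length stands in for the energy metric here; the real metric also weighs orientation and other criteria. Illustrative only:

```python
# Collapse the lowest-"energy" edge into its midpoint and remap the mesh.
def collapse_cheapest_edge(verts, edges):
    # verts: list of (x, y, z); edges: list of (i, j) vertex-index pairs
    def energy(edge):
        a, b = verts[edge[0]], verts[edge[1]]
        return sum((p - q) ** 2 for p, q in zip(a, b))  # squared length

    i, j = min(edges, key=energy)
    verts[i] = tuple((p + q) / 2 for p, q in zip(verts[i], verts[j]))
    remapped = set()
    for a, b in edges:
        a = i if a == j else a
        b = i if b == j else b
        if a != b:  # the collapsed edge itself disappears
            remapped.add((min(a, b), max(a, b)))
    return verts, sorted(remapped)

verts, edges = collapse_cheapest_edge(
    [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (2.0, 0.0, 0.0)],
    [(0, 1), (1, 2), (0, 2)])
# the short edge (0, 1) collapses; edges (1, 2) and (0, 2) merge into (0, 2)
```

Repeating this collapse until a target polygon count is reached is the essence of the reduction process described above.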

Page 127: XSI guia basica

Polygon Normals

Shading normals are vectors that are perpendicular to the surface of polygons at each corner. They control how polygon meshes are shaded. If the normals are averaged across an edge or corner, the shading is smooth. If they are not averaged, the shading is faceted and the edge is considered “hard”.
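The averaging can be sketched numerically. Assume two faces meet at an edge, one facing straight out and one tilted 45 degrees (an illustration only):

```python
# With averaged (smooth) shading, corners on the shared edge use the
# normalized average of the two face normals; with a hard edge, each
# corner keeps its own face normal.
def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

left_face = (0.0, 0.0, 1.0)              # facing +Z
right_face = normalize((1.0, 0.0, 1.0))  # tilted 45 degrees

smooth = normalize(tuple(a + b for a, b in zip(left_face, right_face)))
# `smooth` points halfway between the faces (22.5 degrees), so shading
# blends across the edge instead of showing a facet
```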

To display normals on selected objects, click on a view’s Show menu (eye icon) and choose Normals.

In Softimage, polygon meshes can have auto normals or user normals:

• Auto normals are calculated automatically based on a mesh’s geometry.

• User normals are custom-defined.

Controlling Auto Normals

The best way to control auto normals on a polygon mesh is to apply a Geometry Approximation property (from the Get > Property menu) if there isn’t already one on the object, and turn off Discontinuity: Automatic. Then, manually mark any edges or vertices you want to be hard by selecting them and using Modify > Component > Mark Hard Edge/Vertex on the Model toolbar.

Controlling User Normals

Instead of relying on the automatically generated normals, you can specify custom normals to use for shading. These custom normals are called user normals, or explicit normals in some other programs including 3ds Max. User normals allow you to create things like a box with rounded corners using a minimum number of polygons.

On a cube with beveled edges, the interpolation of the automatic normals creates a gradation in the shading across the large, flat sides. To create the illusion of a box with rounded corners, you can set user normals so that their interpolation produces the correct shading.

There are two main ways to set user normals:

• Activate Modify > Component > Tweak User Normals Tool on the Model toolbar, and then drag normals interactively in the viewports.

• Select points, polygons, and edges and then use the commands on the Modify > Component > Set User Normals submenu.

Page 128: XSI guia basica

Subdivision Surfaces

Subdivision surfaces (sometimes called “subdees”) allow you to create smooth, high-resolution polygon meshes from lower-resolution ones. They provide the smoothness of NURBS surfaces with the local detail and texturing capabilities of polygon meshes.

Applying Geometry Approximation

You can turn a polygon mesh object into a subdivision surface by pressing + and – on the numeric keypad. This applies a local Geometry Approximation property if there isn’t already one, and sets the subdivision level for render and display. The higher the subdivision level, the smoother the object.
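Higher levels are smoother but much heavier: for an all-quad hull, each Catmull-Clark level splits every quad into four. A quick count, assuming an all-quad hull:

```python
# Quads after n Catmull-Clark levels, starting from an all-quad hull.
def quads_at_level(base_quads, level):
    return base_quads * 4 ** level

quads_at_level(6, 0)  # a cube hull: 6 quads
quads_at_level(6, 2)  # 96 quads
quads_at_level(6, 3)  # 384 quads
```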

The original geometry forms a hull that is used to control the shape of the smoothed, “proxy” geometry. You can toggle the display of the hull and the subdivision surface on the Show menu (eye icon).

Subdivision Rules

Softimage gives you a choice of several subdivision rules (smoothing algorithms): Catmull-Clark, XSI-Doo-Sabin, and linear. In addition, you have the option of using Loop for triangles when using Catmull-Clark or linear.

The subdivision rule is set in the Polygon Mesh property editor.

Catmull-Clark

The Catmull-Clark subdivision algorithm produces rounder shapes. The generated polygons are all quadrilateral.
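One way to see why the result is all-quad: a single Catmull-Clark step splits every n-sided polygon into n quads, one per original corner. Counting this for a small mesh:

```python
# Face count after one Catmull-Clark step: each n-gon becomes n quads.
def faces_after_one_step(face_sides):
    return sum(face_sides)

faces_after_one_step([4] * 6)  # a cube: 24 quads
faces_after_one_step([3, 3])   # two triangles: 6 quads
```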

XSI-Doo-Sabin

The XSI-Doo-Sabin subdivision algorithm is a variation of the standard Doo-Sabin algorithm. It produces more geometry than Doo-Sabin, but it works better with cluster properties such as texture UVs, vertex colors, and weight maps, as well as with creases.

Polymesh hull

Subdivision surface

Catmull-Clark Subdivision

XSI-Doo-Sabin Subdivision

Page 129: XSI guia basica

Linear Subdivision

Linear subdivision does not perform any smoothing, so the object’s shape is unchanged. It is useful when you want an object to deform smoothly without rounding its contours.

Loop Subdivision

With the Catmull-Clark and linear subdivision methods, you have the option of using Loop subdivision for triangles. The Loop method subdivides triangles into smaller triangles rather than into quads, which gives better results when smoothing and shading.

Creases

Subdivision surfaces typically produce a smooth result because the original vertex positions are averaged during the subdivision process. However, you can still create sharp spikes and creases in subdivision surfaces. This is done by adjusting the hardness value of points or edges on the hull. The harder a component, the more strongly it “pulls” on the resulting subdivision surface.

Use Modify > Component > Mark Hard Edge/Vertex to make components completely hard, or Set Edge/Vertex Crease Value to apply an adjustable value.

Other Methods of Subdividing

• You can create a new object that is a smoother, denser version of an existing one using Create > Poly. Mesh > Subdivision from the Model toolbar.

• You can create a new object that is a smoother, denser version based on the Geometry Approximation settings of an existing object using Edit > Duplicate/Instantiate > Duplicate Using Geometry Approx.

Linear Subdivision

Catmull-Clark with Loop

Catmull-Clark

Page 130: XSI guia basica

Page 131: XSI guia basica

Section 8

NURBS Surface Modeling

NURBS surfaces are one of the basic types of renderable geometry in Softimage. They are rectangular patches that allow for very smooth shapes with relatively few control points. Surfaces can model precise shapes using less geometry than polygon meshes and they’re ideal for smooth, manufactured objects like car and aeroplane bodies.

What you’ll find in this section ...

• About Surfaces

• Building Surfaces

• Modifying Surfaces

• Projecting and Trimming with Curves

• Surface Meshes

Page 132: XSI guia basica

About Surfaces

In Softimage, surfaces are NURBS patches. Mathematically, they are an interconnected patchwork of smaller surfaces defined by intersecting NURBS curves.

Components of Surfaces

You can display surface components and attributes in the 3D views, as well as select them for various tasks.

• Points are the control points of the curves that define the surface. Their positions define the shape of the surface.

• NURBS hulls are display lines that join consecutive control points. It can be useful to display them when working with curves and surfaces.

• Surface knots are the knots of the curves that define the surface; they lie on the surface where the U and V curve segments meet.

• Knot curves (sometimes called isoparams or isoparms) are sets of connected knots along U or V—they are the “wires” shown in wireframe views. You can select knot curves and use them, for example, to build other surfaces using the Loft operator.

• Isolines are not true components. They are, in fact, arbitrary lines of constant U or V on a surface. You can use the U and V Isoline selection filter to help you pick isolines for lofting and other operations.

Points define and control the surface.

You can display lines between points.

Knots lie on the surface.

Knot curves connect knots.

Isolines are arbitrary lines on the surface in U or V.

Page 133: XSI guia basica

Building Surfaces

The commands on the Create > Surf. Mesh menu can be used to build NURBS surfaces in a variety of ways. The first set of commands generates surfaces from curves—see Objects from Curves on page 90 for an overview of the basic procedure. Here are a few examples of some of the other ways you can build surfaces.

Blending Surfaces

Blending creates a new surface that fills the gap between the selected boundaries on two other surfaces.

Merging Surfaces

Merging two surfaces creates a third surface that spans the originals. You have the option of also selecting an intermediary curve for the merged surface to pass through.

Filleting Intersections

A fillet is a surface that smooths the intersection of two others, like a molding between a wall and a ceiling.

Input surfaces Resulting blend

Input surfaces Single merged surface

Input surfaces Resulting fillet Shaded view

Page 134: XSI guia basica

Modifying Surfaces

You can modify surfaces in a variety of ways using the commands in the Modify > Surface menu of the Model toolbar, for instance, by adding and removing knot curves. Here are a few examples of some other ways of modifying surfaces.

Inverting Normals

If the normals of a surface are pointing in the wrong direction, you can invert them.

Opening and Closing Surfaces

You can open a closed surface and close an open surface. A surface can be open in both U and V like a grid, closed in both like a torus, or open in one and closed in the other like a tube.

Extending Surfaces

You can extend a surface from the selected boundary to a curve.

Inverting a surface

Open Closed

Page 135: XSI guia basica

Projecting and Trimming with Curves

You can project curves onto surfaces and then use the result to remove a portion of the surface, or for any other modeling purpose. This is useful for modeling manufactured objects like car parts with holes or for creating smooth surfaces that aren’t four-sided like a standard NURBS patch.

What Are Surface and Trim Curves?

Both surface and trim curves involve projecting a curve object onto a NURBS surface. The difference is whether the result is used to remove a portion of the surface or not.

Surface Curves

If the curve object is just projected and nothing more, the result is called a surface curve. It is a new component of the surface. This surface curve can be used like any other curve component of the surface (isoline, knot curve, and so on) for modeling operations like Loft, Extend to Curve, and others.

Trim Curves

If you use the curve to remove part of the surface, it is called a trim curve.

Trimming affects the visible portion of the surface. All the underlying points are still there and you can still affect the surface’s shape by moving points in the trimmed area.

Projecting or Trimming by Curves

Select a NURBS surface object, choose Modify > Surface > Trim by Projection from the Model toolbar, and then pick a curve object. The curves are projected onto the surface and, by default, the surface is trimmed using all projected curves.

In the Trim Surface by Space Curve property editor, do any of the following:

• To trim the surface using only some of the projected curves, click Pick Trims and then pick the desired surface curves. Right-click when you have finished picking.

• To trim the surface using all the projected curves, click Trim with All.

• To project the curve onto the surface, click Project All.

Curve object

NURBS surface

Surface curve

Trim curve

Page 136: XSI guia basica

• Use Is Boundary to choose whether to trim the inside or the outside.

• Use Projection Precision to control the precision used to calculate the projection. If the shape of the projected curve is not accurate, increase this value. However, high values take longer to calculate and may slow down your computer. For best performance, set this parameter to the lowest value that gives good results.

Deleting Trims

Deleting a trim allows you to remove a trim operation even after you have frozen the surface’s operator stack. Set the selection filter to Trim Curve, select one or more trim curves on the surface, and choose Modify > Surface > Delete Trim from the Model toolbar.

Surface Meshes

Surface meshes provide a way to assemble multiple surfaces into a single object that remains seamless under animation and deformation.

1. Create a collection of separate surfaces. These will become the surface mesh’s subsurfaces.

2. Optionally, line up pairs of boundaries by selecting them and choosing Create > Surf Mesh > Snap Boundary from the Model toolbar.

Line the surfaces up into a basic configuration.

This illustration shows a common configuration for a leg or arm.

Snap opposite boundaries together to connect the surfaces across the junction.

Page 137: XSI guia basica

3. Select all the surfaces and choose Create > Surf Mesh > Assemble. The surfaces are assembled into a single surface mesh. The continuity manager ensures that the continuity is preserved at the seams.

4. You can now deform and animate the surface mesh as desired.

Excluding Points from Continuity Management

All assembled surface meshes have a special cluster called NonFixingPointsCluster. If a point on a subsurface boundary is in this cluster, its continuity is not managed by the surface continuity manager (SCM) when Don’t Fix the Tagged Points is on. The other points on the same junction are not affected. This lets you create holes in the surface mesh for mouths, eyes, and so on.

If you ever freeze the assembled surface, you will need to reapply the surface continuity manager manually using Create > Surf Mesh > Continuity Manager.

Notice how the assembled surface mesh blends smoothly across the junctions.

Page 138: XSI guia basica

Page 139: XSI guia basica

Section 9

Animation

To animate means to make things come alive, and life is always signified by change: growth, movement, dynamism. In Softimage, everything can be animated, and animation is the process of changing things over time. For example, you can make a cat leap on a chair, a camera pan across a scene, a chameleon change color, or a face change shape.

What you’ll find in this section ...

• Animating with Keys

• Animating Transformations

• Playing the Animation

• Editing Keys and Function Curves

• Layering Animation

• Constraints

• Path Animation

• Linking Parameters

• Expressions

• Copying Animation

• Scaling and Offsetting Animation

• Plotting (Baking) Animation

• Removing Animation

Page 140: XSI guia basica

Bringing It to Life

The animation tools in Softimage let you create animation quickly so that you can spend your time editing movements, changing the timing, and trying out different techniques for perfecting the job. Softimage gives you the control and quick feedback you need to produce great animation. Basically, if you want to make something move, Softimage has the tools.

What Can You Animate in Softimage?

You can animate every scene element and most of their parameters—in effect, if a parameter exists on a property page, it can probably be animated.

• Motion: Probably the most common form of animation, this involves transforming an object by either moving (translating), rotating, or scaling (resizing) it. Special character tools let you easily animate humans, animals, and all manner of fantastical creatures. You can also use dynamic simulations to create movement according to the physical forces of nature.

• Geometry: You can animate an object’s geometry by changing values such as U and V subdivision, radius, length, or scale. You can also use numerous deformation tools and skeletons to bend, twist, and contort your object.

• Appearance: Material, textures, visibility, lighting, and transparency are just some of the parameters controlling appearance that can be changed over time.

The Highs and Lows of Animation

One of the most important features of Softimage is its high and low-level approach to animation:

Low-level animation means getting down to the parameters of an object and animating their values. Keyframing is the most common method of direct animation, but you can also use path animation, constraints, linked parameters, expressions, and scripted operators for creating animation control relationships.

High-level animation means that you are working with animation in a way that is nonlinear (the animation is independent of the timeline) and non-destructive (any modifications do not destroy your original animation data).

Motion, geometry deformations, and appearances can all be animated in Softimage.

Page 141: XSI guia basica

You store animation or shapes in sources, then use the animation mixer to edit, mix, and reuse those sources as clips.

To use these levels together, you can animate at a low level by keyframing a specific parameter, then store that animation and others into action sources and mix them together in the animation mixer to animate at a high level. This allows you to easily manage complex animation yet retain the ability to work at the most granular level.

So Many Choices ...

Softimage provides you with many choices of tools and techniques for animating: explore and decide which tool lets you animate in the most effective way. In most projects, you will probably combine several of these tools to get the best results.

• The most basic method of animation is keying. You set parameter values at specific frames, and then set keys for these values. The values for the frames between the keys are calculated by interpolation.

• Create animation relationships between objects at the lowest (parameter) level. These include constraints, path animation, linked parameters, expressions, and scripted operators.

Keyframed (low-level) animation can be contained in action sources, then brought into the animation mixer as a clip (high level).

Page 142: XSI guia basica

• Character animation tools offer you control for creating and animating skeletons. You can animate them with forward or inverse kinematics, apply mocap data, add an enveloping model, set up a rig, and fine-tune the skeleton’s movements in a myriad of ways to get just the right motion.

• The animation mixer is a powerful editing tool that is nonlinear and non-destructive. Any type of animation that you generate can be stored and reused later, on the same model or a different one. You can also mix different types of animation together and weight them against each other.

• Shape animation lets you change the geometry of an object over time. To do this, you deform the object into different shapes using any type of deformation tool, then store shape keys for each pose that you want to animate.

• Dynamic simulations let you create realistic motion with natural forces acting on rigid bodies, soft bodies, cloth, hair, and particles (done with ICE). With simulations, you can create animation that could be difficult or time-consuming to achieve with other animation techniques.
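The in-between values mentioned for keying above can be sketched with the simplest case, linear interpolation between two keys. Function curves normally default to spline interpolation, so this is only an illustration:

```python
# Linearly interpolate a parameter value at `frame` from a sorted key list.
def value_at(keys, frame):
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)
    raise ValueError("frame is outside the keyed range")

keys = [(1, 0.0), (25, 10.0)]  # value 0 keyed at frame 1, 10 at frame 25
value_at(keys, 13)  # 5.0, halfway between the keys
```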

Animation and Models

Models in Softimage are data containers (like mini scenes) that make it easy to organize elements that need to be kept together, such as all the parts that make up a character.

The main reason for using models for animation is that they provide the easiest way to import and export animated objects between scenes, and to copy animation between objects.

Models also make it easy to use the animation mixer. Each model can have only one Mixer node that contains mixer and animation data. This means that if you have many objects in a scene that use the mixer and each is within a model, you can copy animation from one object to another.

Page 143: XSI guia basica

Playing the Animation

The first thing you need to do before starting an animation is to set up your frame rate and format to match the medium in which you will be saving the final animation. In animation, the smallest unit of time is the amount required to display a single frame. The speed at which frames are displayed, or the frame rate, is always determined by how the final animation will be viewed. If you are compositing your animation with other film or video footage, it’s usually best for the animation to be at the same frame rate as the footage.

When you change the timing of the animation, you change the way that the actions look. This means that the timing that looked correct while you were previewing it in Softimage may not look as good on video or film. For example, an action that spans 24 frames would take one second on film; changing the frame rate to suit North American video at 30 fps would cause the same 24 frames to span 0.8 seconds.
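The frame-rate arithmetic above is simple enough to sketch in plain Python (an illustration only, not the Softimage API):

```python
def duration_seconds(frame_count, fps):
    """Duration of a span of frames at a given frame rate."""
    return frame_count / fps

# The same 24-frame action at two frame rates:
film = duration_seconds(24, 24)   # film at 24 fps: 1.0 second
ntsc = duration_seconds(24, 30)   # North American video at 30 fps: 0.8 seconds
```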

You can set up the default frame format and frame rate preferences for your scene using the options in the Output Format preferences property editor (choose File > Preferences). These settings propagate to many other parts of Softimage that depend on timing. Regardless of whether you enter time code or a frame number as the frame format, Softimage internally converts your entry into time code.

Selecting a Viewport for Playback

To optimize playback speed, you can specify a single viewport for playback (viewport B by default). When the playback is over, the other viewports are updated to the current frame. If you scrub in the timeline, however, all viewports are updated at each frame.

To select a viewport for this, choose All Views, Active View, or a specific viewport (A, B, C, or D) from the Playback > Playback View menu. You can also set it as a preference in the Interaction Preferences.

Setting up the timing for your animation is the first thing you should do before you start. You can set the frame rate and frame format in the Output Format preferences.

These settings affect many areas of Softimage, including the timeline and playback controls.

Section 9 • Animation

144 • Softimage

Using the Timeline and the Playback Controls

A big part of the animation process is the constant tweaking and replaying of the animation to see that you get things right. There are different ways of playing back animation in the viewports, but the most common way is by dragging the playback cursor in the timeline and using the playback controls below the timeline.

Before you start playing back the animation, you should set up the time range, the time display format, and the timeline’s start and end frames. These define the range of frames that you can play in the scene.

• The time range determines the global range of frames, and the range slider in it lets you play back a smaller range of frames within the global range. If you are working with an animation sequence that is very long, you can focus on just a subsection of frames which you can easily change and move along the timeline. You can set the global length by entering frame numbers in the boxes at either end of the time range.

• The timeline displays which frames can be played, which is linked to the range slider. The current frame of the animation is indicated by the playback cursor (the vertical red bar), which you can drag to different frames. You can set the scene’s length by entering frame numbers in the boxes at either end of the timeline.

• The controls in the Playback panel below the timeline allow you to view and play animations, simulations, and audio in different ways.

Timeline

Time range

Playback menu displays many playback options, such as for setting preferences, opening the flipbook, setting real-time play rates, and setting the current viewport.

Increment Backward/Forward moves the currently displayed frame backward/forward by predefined increments (default is 1).

Start/First Frame displays (resets) the first frame at the beginning of the timeline.

End/Last Frame displays the last frame at the end of the timeline.

Play Backward plays/stops the animation or simulation in the backward direction (to the left on timeline). Click this icon to play from the last frame on the timeline; click it again to stop playback; middle-click to play from the current frame.

Note that you can only play simulations backwards if you have cached them.

Play Forward plays/stops the animation or simulation in the forward direction (to the right on timeline). Click this icon to play from the first frame on the timeline; click it again to stop playback; middle-click it to play from the current frame.

Loop repeats the animation or simulation in a continuous loop.

Audio toggles sound on/off during playback. It is on by default. When the audio is off (muted), the icon appears highlighted.

All/RT toggles between playing back frame by frame (All) or in real time (RT).


Previewing Animation

You can capture and cache images from an animation sequence and play them back in a flipbook to help you see the animation in real time. Anything that is shown in the viewport you choose is captured—render region, rotoscoped scene with background, or any display mode (wireframe, textured, shaded, etc.). For example, you may want to set the display mode to Hidden Line Removal for a “pencil test” effect.

You can include audio files to play back with the flipbook, especially useful for lip synching. You can also export flipbooks in a variety of standard formats, such as AVI and QuickTime.

Creating a Flipbook

1. In the viewport whose images you want to capture, set the display options as you like. Then click the camera icon in that viewport and choose Start Capture.

2. In the Capture Viewport dialog box, set the options for the flipbook’s file name, image size, format, sequence, padding, and frame rate.

3. View the flipbook in the Softimage flipbook or in the native media player on your computer. You can open the Softimage flipbook by choosing Flipbook from the Playback menu.

Ghosting

Animation ghosting, also known as onion-skinning, lets you display a series of snapshots of animated objects at frames or keyframes behind and/or ahead of the current frame. This lets you visualize an object’s motion, helping you improve its timing and flow. You can display an object’s geometry, points, centers, trails, and velocity vectors as ghosts.

Ghosting works for any object that moves in 3D space, either by having its transformation parameters (scaling, rotation, and translation) animated in any way, or by having its geometry changed by shape animation or deformations (including envelopes), or with simulated rigid bodies, soft bodies, or cloth.

Ghosting is set per object by selecting the Ghosting option in the object’s Visibility property editor. Once this is done, you can set ghosting per scene layer or per group, in their respective property editors.

To see ghosting in a 3D view, such as a viewport, choose the Animation Ghosting command in the Display Mode menu of a 3D view, then set up the ghost display options in the Camera Display property editor.


Animating with Keys

Keyframing (or “keying”) is the process of animating values over time. In traditional animation, an animator draws the extreme (or critical) poses at the appropriate frames (key frames), thus creating “snapshots” of movement at specific moments.

As in traditional animation, a keyframe in Softimage is also a “snapshot” of one or more values at a given frame, but unlike traditional animation, Softimage handles the in-betweening for you, computing the intermediate values between keyframes by interpolation.

You can set keys for just about anything in Softimage that has a value: this includes an object’s transformation, geometry, colors, textures, lighting, and visibility.

You can set keys for any animatable parameter in any order and at any time. When you add a new key, Softimage recalculates the interpolation between the previous and next keys. If you set a key for a parameter at a frame that already has a key set for that parameter, the new key overwrites the old one.

When you set keys on a parameter’s value, a function curve (or fcurve) is created. An fcurve is a graph that represents the changes of a parameter’s values over time, as well as how the interpolation between the keys occurs. When you edit an fcurve, you change the animation.
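As a rough sketch of how an fcurve behaves (a hypothetical class, not the Softimage object model): keys map frames to values, a new key at an existing frame overwrites the old one, and intermediate frames are computed by interpolation. Linear interpolation is used here for simplicity.

```python
class FCurve:
    """Minimal fcurve sketch: frame -> value keys, linear interpolation."""

    def __init__(self):
        self.keys = {}  # a new key at an existing frame overwrites the old one

    def set_key(self, frame, value):
        self.keys[frame] = value

    def eval(self, frame):
        frames = sorted(self.keys)
        if frame <= frames[0]:
            return self.keys[frames[0]]
        if frame >= frames[-1]:
            return self.keys[frames[-1]]
        for f0, f1 in zip(frames, frames[1:]):
            if f0 <= frame <= f1:
                t = (frame - f0) / (f1 - f0)
                return self.keys[f0] + t * (self.keys[f1] - self.keys[f0])

# Keys at frames 1, 50, and 100; frames in between are interpolated.
fc = FCurve()
fc.set_key(1, 0.0)
fc.set_key(50, 10.0)
fc.set_key(100, 0.0)
```

Evaluating `fc.eval(75)` returns the halfway value between the keys at frames 50 and 100, and setting another key at frame 50 replaces the existing one.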

Methods of Keying

There are a number of ways to set keys in Softimage, depending on the type of workflow you’re used to and the tools you need for your production. Whichever method you choose, the result is the same: keyframes are created.

There are three main keying workflows from which to choose:

• Keyable parameters on the keying panel

• Character key sets

• Marked parameters (and marking sets)

Always Set the Keying Preference First!

Before you start setting keys, you need to set a preference that determines the way in which you key: with keyable parameters, with character key sets, or with marked parameters.

This preference determines which parameters are keyed when you save a key by pressing K, by clicking the keyframe icon in the Animation panel, or by choosing the Save Key command from the Animation menu.

To set the preference, click the Save Key preference button in the Animation panel, then select an option from the menu.

Keys set at frames 1, 50, and 100. Intermediate frames are interpolated automatically.


Keying Parameters in the Keying Panel

Using the keying panel (click the KP/L tab on the main command panel), you can quickly and easily change values and set keys for specific parameters of a selected object. The parameters that are displayed in the keying panel are called keyable parameters.

If you’re using the Maya interaction model, Softimage is automatically set up to work in this manner.

Once you have set up the object’s keying panel with the keyable parameters you want, you simply select that object and press K or click the keyframe icon to set a key on whatever is in its keying panel.

Overview of Using the Keying Panel

1 Set the Save Key preference to Key All Keyable.

2 Select an object and open the keying panel (click the KP/L tab).

3 If you need to add other keyable parameters to the keying panel, select them in the keyable parameters editor.

4 Go to a frame where you want to set a key.

5 Change the values for the selected object’s keyable parameters.

6 Set a key for the keyable parameters.



Keying with Character Key Sets

Character key sets are sets of keyable parameters that you create for an object or hierarchy for quick and easy keying. Once you have created key sets, you don’t need to select an object first to key its parameters—just press K or click the keyframe icon and whatever is in the current character key set is keyed.

If you’re transferring from another 3D software, you may prefer this method of working.

Character key sets let you keep the same set of parameters available for any object or hierarchy for easy keying, such as only the rotation parameters for the upper body control in a rig.

Overview of Using Character Key Sets

1 Create a character key set that includes the parameters you want to key on an object.

2 Set the current character key set. If you just created a character key set, it is set as the current one.

3 Set the Save Key preference to Key Character Key Set.

4 Go to a frame where you want to set a key.

5 Change the values for the parameters in the set.

6 Set a key for the parameters in the current character key set.



Keying Marked Parameters

Marking parameters is a way of identifying which parameters you want to use for a specific animation task, such as keying. By keying only the marked parameters, you can keep the animation information small and specific to the selected object.

Overview of Marking Parameters

Keying with Marking Sets

You can also create marking sets, which are similar to character key sets. You can have only one marking set per object at a time. Marking sets make it easy to key in hierarchies because each object within that structure can have its own marking set, such as a marking set of rotation parameters for bones, or a marking set of translation parameters for IK effectors.

• To create a marking set, select an object and mark the parameters you want to keep in the set. Then press Ctrl+Shift+M.

• To key marking sets, select one or more objects with a marking set. Then press Ctrl+M to activate the marking set, then set a key by pressing K. Press Alt+K to set a branch key, which is useful for working with characters and other hierarchies.


1 Set the Save Key preference to Key Marked Parameters.

2 Select the object you want to animate and go to the frame at which you want to set a key.

3 Mark the parameters you want to key.

You can mark parameters by clicking them in the marked parameter list (in the lower-right of the interface), a property editor, the explorer, or the keying panel. Marked parameters are highlighted in yellow.

Transformation parameters are automatically marked when you activate a transformation tool.

4 Set the marked parameter values for the selected object.

5 Set a key for the marked parameters at this frame.


Setting Keys on Individual Parameters

In addition to the three main keying workflows, you can also set keys directly on individual parameters in these different ways. These methods work regardless of the keying preference that you have selected.


A Click the keyframe icon to set keys on, or remove keys from, all or only marked parameters on the property page.

B Click the animation icon to set keys on, or remove keys from, only that parameter. You can also right-click it and choose Set Key or Remove Key from the menu.

C In an explorer, right-click a parameter’s animation icon and choose Set Key or Remove Key from the menu.

D Click the autokey button to automatically set a key each time you change a parameter’s values.

E Choose Animation > Set Keys at Multiple Frames to set keys for the parameters’ current values at the multiple frames that you enter. This is handy for setting up basic keyframes for pose-to-pose animation.


Animating Transformations

Animating the transformations (scaling, rotation, and translation) of objects is something that you will be doing frequently. It is one of the most fundamental things to animate in Softimage.

You can find transformation parameters in the object’s Kinematics node in the explorer. Kinematics in this case refers to “movement,” not to inverse or forward kinematics as is used in skeleton animation.

Animating Local or Global Transformations

You can animate objects either in terms of their parents (local animation) or in terms of the scene’s world origin (global animation).

It’s usually better to animate the local transformations because you usually animate relative to the object’s parent instead of animating relative to the world origin. Animating locally lets you branch-select an object’s parent and move it while all objects in the hierarchy keep their relative positions to the parent.

If you animate both the local and the global transformations, the global animation takes precedence.
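The local-versus-global distinction can be sketched with translation only (an illustration; the real transform math composes full scaling, rotation, and translation):

```python
def global_position(parent_global, local):
    """A child's global position is its local (parent-relative) position
    offset by the parent's global position (translation only, for illustration)."""
    return tuple(p + l for p, l in zip(parent_global, local))

# A child keyed locally at (0, 2, 0) keeps that offset wherever the parent goes:
child_world = global_position((5, 0, 0), (0, 2, 0))  # parent moved to (5, 0, 0)
```

Because the local values are stored relative to the parent, moving the parent carries the child along without touching the child's animation.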

Manipulation Modes versus Transformation Values

When you transform an object interactively in a 3D view, you use one of several modes that determine which coordinate system to use for manipulation. The manipulation mode affects the interaction only, the resulting values of which you see in the Transform panel.

This is important to know, particularly for understanding the Local manipulation mode: the values shown in the Transform panel while using a transformation tool may not be the same as the local transform values that are stored for the object: that is, the values that you animate.

So, how do you manipulate an object so that the values on the Transform panel are the same as the stored values for local animation? You rotate in Add mode or translate in Par mode. These are the only two manipulation modes that transform in the same way as local animation: they are both relative to the object’s parent.

A Within the Kinematics node are the Global Transform and Local Transform nodes, referring to the type of transformation.

B Within each of the Transform nodes are the Pos (position, also called translation), Ori (orientation, also called rotation), and Scl (scale) folders.

C Each of the Pos, Ori, and Scl folders contains the X, Y, and Z parameters corresponding to each axis.

Manipulation modes for the current transformation (in this case, translation).

Of course, you can always set and animate the values as you like directly in the object’s Local Transform or Global Transform property editor.

Marking Transformation Parameters

When you activate any of the transformation tools, all three of their corresponding local transformation parameters (X, Y, Z) are automatically marked.

For example, when you rotate in Local mode, all three rotation axes are marked automatically, even if only one rotation axis is selected.

To have only specific axes X, Y, or Z marked, you can rotate in Add mode or translate in Par mode.

Or you can choose Transform > Automark Active Transform Axes: then when you click a transformation’s specific axis button (such as the Rotation’s Y button) on the Transform panel, only that axis is marked, regardless of the current manipulation mode.

Remembering Transformation Tools for an Object

When you’re manipulating or animating an object, you often use the same transformation tool for it, such as always using the Rotate tool for bones in a skeleton. You can create a transform setup property (choose Get > Property > Transform Setup) for an object so that the same transformation tool is automatically activated when you select that object.

This is very useful for working quickly with control objects in a character rig—for example, when you select the head’s effector, the Translate tool is automatically activated.


Animating Transformations in Hierarchies

Transformations are propagated down through hierarchies so that each object’s local position is stored relative to its parent. Objects in hierarchies behave differently when they are transformed, depending on whether the objects are node-selected (left-click) or branch-selected (middle-click). By default:

• When you branch-select a parent object and animate its transformation, the animation is propagated to its children.

• When you node-select a parent and animate its transformation, its children are not transformed unless their respective local transformations are animated. For example, suppose the child’s local translation is animated but its rotation isn’t: if you translate the parent, the child follows; however if you rotate the parent, the child stays put.

This is because animation on the local transformations is stored relative to the parent’s center. You can make unanimated children follow the parent with the Child Transform Compensation command (or ChldComp button) on the Constrain panel.

• When you animate a child object, its animation is always done relative to its parent (local animation).

• When you animate anything in global, it’s always done in relation to the world origin: it does not matter if your objects are in a hierarchy or not. Nothing is inherited if you have global transformation keys because they override any parent-to-child inheritance.

Animating Rotations

When you animate rotations in Softimage, you normally use three separate function curves that are connected to the X, Y, and Z rotation parameters. These three rotation parameters are called Euler angles. Euler interpolation works well when the axis of interpolation coincides with one of the XYZ rotation axes, but is not as good at interpolating arbitrary orientations. Euler angles can also suffer from gimbal lock, which is the phenomenon of two rotational axes aligning with each other so that they both point in the same direction.

To solve this, you can change the order in which the rotation axes are evaluated (by default, it’s XYZ), which changes where the gimbal lock occurs. As well, you can convert Euler fcurves to quaternion.

Quaternion interpolation provides smooth interpolation with any sequence of rotations. The XYZ angles are treated as a single unit to determine an object’s orientation, so they are not restricted to a particular order of rotation axes. Quaternions interpolate along the shortest path between two rotations. You can create quaternion fcurves by setting quaternion keys directly, or by converting Euler fcurves to quaternion using the Animation > Convert commands in the Animation panel. You can always convert back to Euler fcurves in the same way.
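Shortest-path quaternion interpolation can be sketched with a standard slerp (a generic textbook implementation, not Softimage’s internal code):

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z).
    Flipping one quaternion when the dot product is negative ensures the
    interpolation takes the shortest path between the two rotations."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                    # shortest-path fix
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:                 # nearly identical: lerp and renormalize
        out = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in out))
        return tuple(c / n for c in out)
    theta = math.acos(dot)           # angle between the two quaternions
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))
```

Halfway between the identity and a 90-degree rotation about X, slerp yields exactly the 45-degree rotation, with no risk of gimbal lock.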

Skeleton chains are an exception to these hierarchy animation rules because the end location of one chain element always determines the start location of the next one in the chain.

Cone rotated 90 degrees in X and Y.

Euler interpolation of the rotation values.

Quaternion interpolation of the rotation values.


Editing Keys and Function Curves

After you have set keys to animate a parameter’s value, you can edit the keys and the function curve (or fcurve) to edit the animation. An fcurve is a graph that represents the changes of a parameter’s values over time, as well as how the interpolation between the keys occurs.

Softimage has several tools that help you edit keys and function curves:

• Editing keys in the timeline is the easiest and most direct method for working with keys.

• The dopesheet lets you work with keys as well, but with tools and in a larger view—great for working at the scene level when you’re offsetting and scaling animation.

• The fcurve editor is the most sophisticated view that gives you the best tools for making the fcurves exactly as you want them.

Editing Keys in the Timeline

You can view and edit keys in the timeline similar to how you do in the dopesheet. The advantage of doing this in the timeline, of course, is that you don’t need to open up a separate window for the dopesheet: the keys are right there. This lets you keep the object that you’re animating in full view at all times.

Once you have selected an animated object, you can easily move its keys, cut or copy and paste its keys, and scale a region of keys, all within the timeline. This is especially useful for blocking out rough animations before you do more detailed editing. You can also select single keys and move, cut, copy, and paste them.

A Keys are displayed as red lines in the timeline.

B Right-click in the timeline to open a menu of options for displaying and editing the keys.

C Press Shift+drag to draw a region, then drag it to a new area on the timeline. Press Ctrl while dragging to copy the keys, or choose Copy and Paste from the right-click menu.

D Press Shift+click to select a single key; you can then move it, cut or copy it, and paste it.

E You can scale a region by dragging either of its ends in the appropriate direction.



Editing Keys in the Dopesheet

The dopesheet provides you with a way of viewing and editing key animation. Similar to a cel animator’s dopesheet, it shows your entire animated sequence, frame by frame.

Because you can see your whole animation in the dopesheet, it makes an ideal tool for editing overall motion and timing. For example, if you wanted to change a 100-frame sequence to 200 frames, you would simply stretch (scale) the animation segment on the track to be 200 frames long.

You can modify your animation sequences by editing regions of keys on the tracks with standard operations such as moving, scaling, copying, cutting, and pasting. You can delete them, shift them left and right, scale them—all with or without a ripple. Summary tracks help you see the animation for the whole scene or just the selected objects.

To open a dopesheet, you can open the animation editor (press 0 [zero]), then choose Editor > Dopesheet from its command bar. Or choose it in a viewport, like any other view.
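The 100-to-200-frame retiming example above can be sketched as a small key-scaling helper (a hypothetical function; the ripple rule for keys after the region is an assumption):

```python
def scale_region(keys, start, end, factor):
    """Retime keys inside [start, end]: scale their offsets from the region
    start, and ripple later keys by the change in region length."""
    shift = (end - start) * factor - (end - start)
    out = {}
    for frame, value in keys.items():
        if start <= frame <= end:
            frame = start + (frame - start) * factor
        elif frame > end:
            frame += shift   # ripple: keys after the region slide along
        out[frame] = value
    return out

# Stretching a 100-frame sequence to 200 frames halves its apparent speed:
retimed = scale_region({1: 0.0, 50: 1.0, 100: 0.0}, 0, 100, 2.0)
```

The keys keep their values; only their timing changes, which is exactly what scaling a region in the dopesheet does.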


A The Explorer, Lock, and Update buttons apply only to the animation explorer (4).

B Timeline. Click and drag the red playback cursor in it to “scrub” through the animation.

C Summary tracks display keys for all objects in the scene or all objects currently displayed in the dopesheet.

D Animation explorer displays the parameters of objects that you select.

E Regions (press Q) let you edit multiple keys, including moving them, scaling them, copying and pasting them, and deactivating animation.

F The keys represent the keyframes of the selected parameter’s animation. Each colored block is one frame long. You can edit (move, copy, paste) individual keys on tracks.

G The tracks display and let you manipulate the animation keys. You can expand and collapse tracks to view exactly what you want.


Editing Function Curves

When you set keyframes to animate a parameter, a function curve, or fcurve, is created. An fcurve is a representation of the animated parameter’s values over time. You can edit fcurves in the fcurve editor, which lives in the animation editor and is its default editor. You can also display the dopesheet, expression editor, and scripted operator editor in the animation editor.

The fcurve editor is an ideal tool to help you control the animation’s speed and interpolation, as well as easily adding and deleting keys.

Press the 0 (zero) key to open the animation editor in a floating window, or you can open it in any viewport. If you open it with an object already selected, its fcurves automatically appear in the fcurve editor.

The graph in the fcurve editor is where you manipulate the fcurve: time is shown along the graph’s X axis (horizontal), while the parameter’s value is plotted along the graph’s Y axis (vertical).

The shape of the fcurve shows how the parameter’s value changes over time. On the fcurve, keyframes are represented by key points (also referred to as keys) and the interpolation between them is represented by segments of the curve linking the key points. You can change the interpolation for a segment or for the whole fcurve.

The slope of the curve between keys determines the rate of change in the animation, while the handles at each key let you define the fcurve’s slope in the same way that control points define Bézier curves.
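The relationship between keys, handles, and the curve segment can be sketched as a cubic Bézier evaluation (an illustration of the general technique; Softimage’s exact curve math is not claimed here):

```python
def bezier_segment(p0, h0, h1, p1, t):
    """Evaluate a cubic Bezier at t in [0, 1]. p0 and p1 are the key values;
    h0 and h1 are handle values shaping the slope out of p0 and into p1."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * h0 + 3 * u * t**2 * h1 + t**3 * p1
```

The curve always passes through the keys (t = 0 and t = 1), while dragging the handle values h0 and h1 bends the interpolation in between, just as slope handles do in the fcurve editor.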



Editing a Function Curve’s Slope

The fcurve’s slope determines the rate of change in the animation. By modifying the slope, you change the acceleration or deceleration in or out from a key, making the animation change rapidly or slowly, or even reversing it.

You can change the slope of any fcurve that uses spline interpolation by using the two handles (called slope handles) that extend out from a key. By modifying the handles’ length and direction, you can define the way the curve moves into and out from each key. You can change the length and angle of each handle in unison or individually.

The slope handles are tangent to the curve at their key when Unified Slope Orientation is on. (A) This keeps the acceleration and deceleration smooth, but you can also turn off this option to “break” the slope at a certain point. (B) This creates a sudden animation acceleration or deceleration, or change of direction altogether.

A Command bar contains menu commands and icons to edit fcurves in many different ways.

B Animation explorer displays the parameters of objects that you select.

C Values for the parameter are shown on the graph’s Y (vertical) axis.

D Timeline. Time is shown on the graph’s X (horizontal) axis. Click and drag the red playback cursor in it to “scrub” through the animation.

E Selected fcurves are white. When not selected, the curves for X, Y, and Z parameters are red, green, and blue, respectively. You can also change the color of any fcurve you like.

F The keys on the fcurves represent the keyframes of the selected parameter’s animation. You must select an fcurve before you can select its keys. Selected keys are red with slope handles. Unselected keys match the color of their fcurve.

G The slope handles (tangents) at each key indicate the rate at which an fcurve’s value changes at that key. These handles appear only on keys of fcurves that have spline interpolation.

H Types of interpolation:

• By default, fcurves use spline interpolation to calculate intermediate values. The curves ease into and out of each key, resulting in a smooth transition.

• Linear interpolation connects keys by straight line segments. This creates a constant speed with sudden changes at each key.

• Constant interpolation repeats the value of a key until the next one. This creates sudden changes at keys and static positions between keys, such as for animating a cut from one camera to another.
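The three interpolation types can be contrasted in a small sketch (the "spline" branch uses a smoothstep ease as a stand-in for the real handle-driven curve shape):

```python
def interpolate(v0, v1, t, mode="spline"):
    """Value between two keys, t in [0, 1], under three interpolation types."""
    if mode == "constant":
        return v0                     # hold the first key's value until the next
    if mode == "linear":
        return v0 + t * (v1 - v0)     # straight segment: constant speed
    s = t * t * (3 - 2 * t)           # smoothstep: eases in and out of the keys
    return v0 + s * (v1 - v0)
```

Near the keys the spline mode moves more slowly than linear (the ease), while constant mode does not move at all until the next key is reached.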



Ways of Editing Function Curves and Keys

When you select one or more fcurves, any modifications you perform are done only to them. You can select keys on the selected fcurves to edit only them, including regions of keys on fcurves.


A Move fcurves and keys in X (horizontally) to change the time or in Y (vertically) to change the values.

B Add or delete keys on an fcurve.

C Create regions (press Q) of keys for editing. Drag the region up or down to move the keys, or drag the region’s handles to scale it.

D Copy and paste an fcurve and keys. You can also set paste options to control how keys are pasted: whether they replace the selection or are added to it.

E Scale fcurves or regions of keys. When you shorten the length, you speed up the animation; increasing the length slows it down. Scaling vertically changes the values.

F Cycle the fcurves for repetitive motions. You can create basic cycles, or relative cycles that are progressively offset, such as when creating a walk cycle.


Layering Animation

Animation layering lets you add one or more levels of animation on top of the base animation on an object’s parameters. You usually layer animation when you need to add an offset to an object’s base animation without changing the original, such as with mocap data. You can only add keys in the layers, and the existing base animation must be either action clips or fcurves.

Animation layers are non-destructive, meaning that they don’t alter your base animation in any way: the keys in the layers always remain as a separate entity. Layering allows you to experiment with different effects on your animations and build several variations, each in its own layer.

For example, let’s say that you’ve imported a mocap action clip of a character running down a flight of stairs. However, in your current scene, the stairs are shallower than those used for the mocap session, so the character steps “through” the stairs instead of on them.

To fix this problem, you create an animation layer, offset the contact points for the character’s feet so that they step on the stairs, then set keys. The result is an offset animation that sits on top of the mocap data: you don’t need to touch the original mocap clip at all. You can then easily edit the fcurves for the animation layer, tweaking it as you like.

Animation layers are actually controlled and managed in the animation mixer, but you don’t need to access the mixer for creating and setting keys in layers. You can use the Animation Layers panel (click the KP/L tab on the main command panel) to do this. However, you may want to use the animation mixer for added control over each layer, such as for setting each layer’s weight.
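Conceptually, the layered result at any frame is the base animation’s value plus each layer’s weighted offset (a sketch of the idea, not the mixer’s actual evaluation):

```python
def layered_value(base, layers, frame):
    """Evaluate base animation plus weighted layer offsets at a frame.
    The layers never modify the base keys themselves (non-destructive)."""
    value = base(frame)
    for weight, layer in layers:
        value += weight * layer(frame)
    return value

# Base mocap curve plus a small corrective offset layer at full weight:
base = lambda f: f * 0.5                       # stand-in for a mocap fcurve
fix = lambda f: 1.0 if f >= 10 else 0.0        # offset keyed in a layer
```

Removing the layer (or setting its weight to zero) recovers the untouched base animation, which is why layering is non-destructive.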


Page 160: XSI guia basica


Overview of Layering Animation

There are different ways in which you can work with animation layers in Softimage, but here’s a simple overview just to get you started.

1 Make sure the objects are in a model structure.

2 Animate the objects. This animation is the base layer. You cannot create animation layers without first having a base layer.

3 Create an animation layer in the Animation Layers panel.

4 Select the animated objects, change their values, and set keys for them in the layer you created.

5 Edit the layer’s fcurves.

6 Collapse the layer to combine its animation with the base layer of animation.

Constraints

Constraining is a way of increasing the speed and efficiency with which you animate. It lets you animate one object “via” another one’s animation. You can constrain different properties of one object, such as position or direction, to those of an animated object. Then when the animated object moves, the constrained object follows in the same way.

There are a number of types of constraints in Softimage:

• Constraining transformations: in position, orientation, direction, scaling, pose (all transformations), and symmetry.

• Constraining in space: by distance, or between 2, 3, or any number of points.

• Constraining to objects: to clusters, surfaces and curves, bounding volumes, and bounding planes.

For many of the constraints, you can add tangency or up-vector directions to the mix. The tangency and up-vector constraints are properties of several constraint types that determine the direction in which the constrained object should point. For example, if you apply a Direction constraint to an object, you can also add an up-vector (Y axis) to control the “roll” of the direction-constrained object.

Radar dish constrained by direction to the plane: The X axis of the radar dish continually points in the direction of the plane’s center.
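The Direction constraint in the radar dish example boils down to aiming an axis along the vector between the two objects’ centers. A plain-Python sketch of that idea (illustrative only, not the Softimage API):

```python
import math

def direction_constraint(obj_center, target_center):
    """Unit vector from the constrained object's center toward the
    constraining object's center -- the direction the constrained
    axis (e.g. the radar dish's X axis) is made to point."""
    d = [t - o for o, t in zip(obj_center, target_center)]
    length = math.sqrt(sum(c * c for c in d))
    return tuple(c / length for c in d)

# Radar dish at the origin, plane at (3, 4, 0):
print(direction_constraint((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # (0.6, 0.8, 0.0)
```

An up-vector would then resolve the remaining freedom (the “roll”) around this aim axis.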

Page 161: XSI guia basica


Overview of Constraining Objects

Creating Offsets between Constrained Objects

When you constrain an object, you often need to offset it in some way from the constraining object. This could be an offset in position, orientation, or scaling. For example, if you position-constrain one object to another without an offset, both objects end up sharing the same position (“on top” of each other), so you need to offset them.


1 Select the object to be constrained.

2 Choose the constraint command from the Constrain menu.

3 Pick the constraining (control) object. The constraint is created between the objects.

4 Adjust the constraint in the property editor that opens. You can see constraint information in the viewport if you click the eye icon in a viewport’s menu bar and select Relations.

Position constraint without offset: The position of the constrained object’s center matches that of the constraining object’s center.

Position constraint with offset: An offset is applied to the position of the constrained object’s center.

Constrained object (airplane)

Constraining object (magnet)

Page 162: XSI guia basica


With almost all types of constraints, you can set offsets using the controls in their property editors. The offset is set between the centers of the constrained and constraining objects on any axis.

To set an offset interactively, you can use the CnsComp button (Constraint Compensation) on the Constrain panel. With compensation, you can interactively offset the constrained object from the constraining object and animate it independently while keeping the constraint.

Blending Constraints

You can blend multiple constraints on an object with each other, as well as blend constraints with other animation on the constrained object. You set the Blend Weight parameter’s value in each constraint’s property editor to blend the weight (or “strength”) of one constraint against the others. And, of course, you can animate the blending to have it change over time.

Blending is done in the order in which you applied the constraints, from the first-applied constraint to the last. Each constraint takes the previous result and gives a new one based on the value you set. For example, if you have three position constraints on an object, you can have the object placed exactly in the center of them.

In the example on the right, the cone has three blended position constraints to keep it positioned in the middle of the triangle formed by objects A, B, and C:

A First to A with a blend weight of 1.

B Next to B with a blend weight of 0.5.

C Lastly to C with a blend weight of 0.333.
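The sequential blend described above can be verified numerically. This is a plain-Python sketch of the idea (illustrative only, not the Softimage API):

```python
def blend_position(constraints, start=(0.0, 0.0, 0.0)):
    """Apply position constraints in order; each Blend Weight mixes
    the previous result toward the new target position."""
    pos = start
    for target, weight in constraints:
        pos = tuple(p + weight * (t - p) for p, t in zip(pos, target))
    return pos

A, B, C = (0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0)
center = blend_position([(A, 1.0), (B, 0.5), (C, 1.0 / 3.0)])
print(center)  # ~(1.0, 1.0, 0.0): the centroid of triangle ABC
```

Weights of 1, 1/2, and 1/3 give each of the three targets an equal one-third share of the final result, which is why the cone lands in the middle of the triangle.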

You can see the order of the constraints as well as their blend weight values in a viewport if you click the eye icon in a viewport and select Relations and Relations Info.


Page 163: XSI guia basica


Path Animation

A path provides a route in global space for an object to follow in order to get from one point to another. The object stays on the path because its center is constrained to the curve for the duration of the animation.

You can create path animation in Softimage using a number of methods, each one having its own advantages:

• The quickest and easiest way of animating an object along a path is by using the Create > Path > Set Path command and picking the curve to be used as the path. There’s no need to set keyframes—just set the start and end frames. The object is automatically constrained to the path and animated along the percentage of the curve’s length.

• Constrain an object to a curve using the Curve (Path) constraint and manually set keys for the percentage of the path traveled.

• Choose the Create > Path > Set Trajectory command and pick a curve to use as the trajectory: the curve’s knots act as indicators of the object’s position at each frame.

• Move an object about your scene and save path keys with the Create > Path > Save Key on Path command at different positions—the path curve is created automatically as you go.

• Convert the existing movement of an object into a path using the Create > Path > Convert Position Fcurves to Path command.
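All of these methods share one core idea: the object’s position is driven by a percentage traveled along the curve’s length. A plain-Python sketch of that evaluation, approximating the curve as a 2D polyline (illustrative only, not the Softimage API):

```python
import math

def point_on_path(points, percentage):
    """Position at a given percentage (0-100) of the total length
    of a path, here approximated as a 2D polyline."""
    lengths = [math.dist(a, b) for a, b in zip(points, points[1:])]
    target = sum(lengths) * percentage / 100.0
    for (ax, ay), (bx, by), seg in zip(points, points[1:], lengths):
        if target <= seg:
            t = target / seg  # fraction of this segment
            return (ax + t * (bx - ax), ay + t * (by - ay))
        target -= seg
    return points[-1]

# An L-shaped path, 20 units long in total:
path = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
print(point_on_path(path, 25.0))  # (5.0, 0.0)
print(point_on_path(path, 75.0))  # (10.0, 5.0)
```

Animating the percentage from 0 to 100 between the start and end frames is what moves the object along the path.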

After you’ve created path animation, you can modify the animation by changing the timing of the object on the path (choose the Create > Path > Path Retime command), or by moving, adding, or removing points on the path curve as you would to edit any curve.

For example, using the Path Retime command, you can shorten (and therefore speed up) a path animation that originally ran from frame 1 to 100 so that it runs from frame 20 to 70. You can even reverse the animation—for example, enter 100 as the start and 1 as the end frame.
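Retiming is a linear remapping from the original frame range to the new one. A plain-Python sketch of the mapping (illustrative only, not the Softimage API):

```python
def retime(frame, old_range, new_range):
    """Map a frame from the original range to the retimed range.
    Entering a reversed new range plays the animation backward."""
    (o0, o1), (n0, n1) = old_range, new_range
    t = (frame - o0) / (o1 - o0)  # 0 at old start, 1 at old end
    return n0 + t * (n1 - n0)

print(retime(1, (1, 100), (20, 70)))    # 20.0 -- new start
print(retime(100, (1, 100), (20, 70)))  # 70.0 -- new end
print(retime(1, (1, 100), (100, 1)))    # 100.0 -- reversed playback
```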

Want to convert a path animation to translation? Plot the position of the path-animated object, then apply the result to the object or as an action in the animation mixer.

A The dotted line is connected to the center of the constraining curve. You can select the line and press Enter to open the PathCns or TrajectoryCns property editor.

B A triangle represents a locked-path key.

C A square represents a key saved on the path.

D A circle represents a key set directly from a property page or the animation editor. These are the only type of keys found on trajectories.

You can see path information in a viewport if you click the eye icon in a viewport and select Relations.


Page 164: XSI guia basica


Linking Parameters

When you create linked parameters, also known as driven keys, you create a relationship in which one parameter depends on the animation state of another. In Softimage, you can create simple one-to-one links with one parameter controlling another, or you can have multiple parameters controlling one parameter.

After you link parameters, you set the values that you want the parameters to have, relative to a certain condition (when A does this, B does this).

You can link any animatable parameters together—from translation to color—to create some very interesting or unusual animation conditions. For example, you could create a chameleon effect so that when object A approaches object B, it changes color. Basically, if you can animate a parameter, you can link it.

There are three basic ways in which you can link parameters. You can:

• Create simple one-to-one links with one parameter driving one or more other parameters. When you link one parameter to another, a relationship is established that makes the value of the linked parameter depend on the value of the driving parameter.

• Drive a single parameter with the combined animation values of multiple parameters. This allows you to create more complex relationships, where many parameter values are interpolated to create an output value for one parameter.

• Drive a single parameter with the whole orientation of an object.
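The relationship set with relative states can be pictured as a curve over the driving value. The sketch below uses simple linear interpolation between the stored states as a stand-in for the link relationship (plain Python, not the Softimage API; the linear interpolation is a simplification of the l_fcv behavior):

```python
def driven_value(states, driving):
    """states: (driving_value, driven_value) pairs, as set with
    Set Relative Values. Interpolate between them and clamp at
    the ends (linear stand-in for the link's fcurve)."""
    states = sorted(states)
    if driving <= states[0][0]:
        return states[0][1]
    if driving >= states[-1][0]:
        return states[-1][1]
    for (d0, v0), (d1, v1) in zip(states, states[1:]):
        if d0 <= driving <= d1:
            t = (driving - d0) / (d1 - d0)
            return v0 + t * (v1 - v0)

# Venus flytrap: jaw rotation Z driven by the fly's position X.
# Jaw open (0 deg) when the fly is far (posx 10), snapped shut
# (-60 deg) when the fly arrives (posx 0).
states = [(0.0, -60.0), (10.0, 0.0)]
print(driven_value(states, 5.0))   # -30.0 -- halfway closed
print(driven_value(states, 20.0))  # 0.0 -- clamped past the last state
```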

Overview of Linking Parameters

To open the Parameter Connection Editor, choose View > Animation > Parameter Connection Editor. Then follow these steps:

Venus flytrap eyes its victim. Its jaw’s rotation Z parameter is linked to the position X parameter of the fly that is animated along a path.


Page 165: XSI guia basica


1 Select an object, then select one or more of its parameters in the Driven Target explorer. These are the parameters whose values will be controlled by the driving parameter.

2 Click the lock icon to prevent the explorer from changing when you select other objects.

3 Select an object, then select one of its parameters in the Driving Source explorer. This is the parameter whose values will control the linked parameters.

If you are driving a single parameter with multiple parameters, select two or more of the parameters (Ctrl+click) here. These are the parameters whose interpolated values will control the linked parameter.

4 Select Link With from the link list.

If you are driving a single parameter with multiple parameters, select Link With Multi.

5 Click the Link button.

A link relationship is established between the parameters. An l_fcv expression appears in the Definition text box and the animation icon of the linked parameter displays an “L” to indicate this.

If you are driving a single parameter with multiple parameters, an l_interp expression appears in the Definition text box.

6 Set the driving and linked parameters’ values as you want them to be relative to each other, then click the Set Relative Values button.

Repeat this step for each relative state you want to set.

Page 166: XSI guia basica


Expressions

Expressions are mathematical formulas that you can use to control any parameter that can be animated, such as translation, rotation, scaling, materials, colors, or textures. Expressions are useful for creating regular or mechanical movements, such as oscillations or rotating wheels. As well, they allow you to create almost any connection you like between any parameters, from simple “A = B” relationships to very complex ones using predefined variables, standard math functions, random number generators, and more.
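For example, a regular oscillation like the kind an expression typically drives is just a formula evaluated per frame. Here is a plain-Python stand-in (XSI’s own expression syntax and built-in variables are documented in the Expression Function Reference):

```python
import math

def oscillation(frame, amplitude=2.0, period=50.0):
    """Value of a sine-wave "expression" at a given frame: regular
    back-and-forth motion with the given amplitude and period."""
    return amplitude * math.sin(2.0 * math.pi * frame / period)

print(oscillation(0.0))             # 0.0 -- at rest at frame 0
print(round(oscillation(12.5), 6))  # 2.0 -- peak a quarter period in
```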

However you use expressions, you will find that they are very powerful because they allow you to animate precisely, right down to the parameter level. Once you’re more experienced using them, you can create all sorts of custom setups, like character rigs and animation control systems.

Overview of Writing an Expression


1 Select an object and open the expression editor by pressing Ctrl+9.

2 Select the target, which is the parameter controlled by the expression.

The Current Value box below it shows the value of the expression at the current frame.

3 Enter the expression in the expression pane by typing directly or by choosing items from the Function, Object, and Param menus.

You can also enter parameter names by typing their script names and then pressing F12. This prompts you with a list of possible parameters in context.

You can copy, cut, and paste in the expression pane using standard keyboard shortcuts (Ctrl+C, Ctrl+X, and Ctrl+V, respectively).

4 The message pane updates as you work, letting you know whether the expression is valid or not.

5 Click the Validate and Apply buttons to validate and then apply the expression.

For a complete description and syntax of all the functions and constants available, refer to the Expression Function Reference (choose Help > User’s Guides).

Page 167: XSI guia basica


How to create a simple equal (=) expression: 3 ways

Use any of these methods to create a simple equal expression between two parameters:

A In a property editor, drag an unanimated parameter’s animation icon onto another parameter’s animation icon. That icon then shows an equal sign, and the parameter’s value is made equal to the first parameter’s value.

B In the explorer, drag the name of an unanimated parameter and drop it on another parameter’s name.

C In the parameter connection editor, set up the Driving Source and Target parameters, then select Equals (=) Expression.


Page 168: XSI guia basica


Copying Animation

There are different levels at which you can copy animation in Softimage: between parameters, between objects, or between models. Here are some of the main ways to do this.

• You can copy any type of animation between selected objects, models, or parameters using the Copy Animation commands from the Animation menu in the Animation panel.

• You can copy keys between parameters or objects in the dopesheet, or copy function curves and keys between parameters or objects in the fcurve editor.

In the dopesheet, you can copy animation from one model to another, or from one hierarchy of objects to another within the same model. For example, you can paste a walk cycle animation from the Bob model to the Fred model, as long as Fred has the same parameter names as Bob.

• Store an object’s animation in an action source and copy it between models, which is especially useful for exchanging animation between scenes.

• You can copy animation between any parameters in the explorer or a property editor in a number of ways:

A In the explorer, drag the name of an animated parameter and drop it on another parameter’s name.

B In a property editor, drag the animation icon of an animated parameter and drop it on another parameter’s animation icon.

C In either the explorer or a property editor, right-click the animation icon of an animated parameter and choose Copy Animation. Paste this on another parameter with the Paste Animation command.

D In the explorer, you can drag an entire folder from one object onto another object’s folder of the same name, such as the Pos folder, which contains translation (position) parameters.


Page 169: XSI guia basica


Scaling and Offsetting Animation

If you find that your whole animation is a bit too long or too short, or you just want to offset it by a few frames, you can do so with the Sequence Animation commands from the Animation menu in the Animation panel. They give you control over animation by offsetting or scaling (shortening or lengthening) the motion of all objects, selected objects, or just the marked parameters of selected objects.

You can offset or scale all function curves, either using explicit values or by retiming the animation to fit a specified frame range. You can even easily reverse an animation.
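Both operations are simple transformations of the key times. A plain-Python sketch over a list of (frame, value) keys (illustrative only, not the Softimage API):

```python
def scale_keys(keys, factor):
    """Scale key times about frame 0: doubling the length of the
    motion halves its speed."""
    return [(frame * factor, value) for frame, value in keys]

def offset_keys(keys, frames):
    """Shift every key later (or earlier) by a number of frames."""
    return [(frame + frames, value) for frame, value in keys]

keys = [(1, 0.0), (50, 5.0), (100, 0.0)]
print(scale_keys(keys, 2))    # [(2, 0.0), (100, 5.0), (200, 0.0)]
print(offset_keys(keys, 20))  # [(21, 0.0), (70, 5.0), (120, 0.0)]
```

These two calls correspond to the scaled and offset fcurves in the captions below.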

You can also use the dopesheet to offset or scale animation for an object or even the scene, especially using its summary tracks.

A The selected fcurve (white) has been scaled to twice its length. The ghosted fcurve (black) shows the original fcurve’s size.

B The selected fcurve has been offset by about 20 frames.

C The selected fcurve has been retimed so that a range of 125 frames in the middle of it has been compressed into a range of 80 frames.


Page 170: XSI guia basica


Plotting (Baking) Animation

When you plot the animation on an object using the commands in the Tools > Plot menu on the Animate toolbar, the animation is evaluated frame by frame and function curves are created.

Plotting is useful for generating function curves from any type of animation or simulation, such as from the simulation of a spring-based tail on a dog, or plotting mocap animation from a rig. You can also plot the animation of a constrained object and then remove its constraints so that only the plotted animation remains on the object.
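The frame-by-frame evaluation can be sketched in plain Python (illustrative only, not the Softimage API):

```python
def plot_animation(evaluate, start_frame, end_frame):
    """Bake any animation -- constraints, simulation, mocap -- by
    evaluating it at every frame and storing the results as keys."""
    return [(frame, evaluate(frame))
            for frame in range(start_frame, end_frame + 1)]

# Plot a constrained object's X position over frames 1-5. The keys
# stand on their own, so the constraint could then be removed.
constrained_x = lambda frame: frame * 0.5
print(plot_animation(constrained_x, 1, 5))
# [(1, 0.5), (2, 1.0), (3, 1.5), (4, 2.0), (5, 2.5)]
```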

Plotting is done by first creating an action source. You can choose to either keep or delete this action source after the animation has been plotted:

• You can apply the plotted animation (fcurves) immediately to the object and delete the action source.

• You can apply the plotted animation (fcurves) to the object and also keep them stored in an action source. This may be useful if you’re using the animation mixer.

• You can keep the action source of the plotted animation (fcurves) but not have it applied to the object immediately. This may be useful for creating a library of action sources that can be applied to the same or even a different object.

Removing Animation

There are different levels at which you can remove animation in Softimage: between parameters, between objects, or between models. Here are some of the main ways to do this.

• You can remove any type of animation from selected objects, models, or parameters using the Remove Animation commands from the Animation menu in the Animation panel.

• You can remove all keys from parameters or objects in the timeline or in the dopesheet, or remove fcurves or all keys from parameters or objects in the fcurve editor.

• When you remove keys from an fcurve, a flat (static) fcurve remains. To remove the static fcurve, choose Remove Animation > from All Parameters, Static Fcurves from the Animation menu.

• In the dopesheet, you can easily remove all animation from a model or from a hierarchy of objects using its summary tracks.

• To remove animation from parameters in a property editor, right-click the keyframe icon at the top of the editor and choose Remove Animation. This removes animation from all or marked animated parameters on that property page.

• To remove animation from parameters in the explorer or a property editor, right-click the animation icon of an animated parameter and choose Remove Animation.

Animation of an object constrained between two points is plotted.

Page 171: XSI guia basica


Section 10

Character Animation

Character animation is all about bringing your characters to life, whether it’s some guy dancing in a club, a dog catching a frisbee, or a simple bouncing ball with personality to spare.

Even though you’re working in a virtual environment, your job is to make these characters seem believable in their movements and expression. In Softimage, you’ll find everything you need to make any type of character come alive.

What you’ll find in this section ...

• Character Animation in a Nutshell

• Setting Up Your Character

• Building Skeletons for Characters

• Enveloping

• Rigging a Character

• Animating Characters with FK and IK

• Walkin’ the Walk Cycle

• Motion Capture

• Making Faces with Face Robot

Page 172: XSI guia basica


Character Animation in a Nutshell

Softimage has many tools to help you create and animate your characters. Some of them are tools designed for character animation, such as inverse kinematics, while others are part of the standard Softimage tool set, such as modeling and keying tools.

The following outline gives you an idea of which steps to take and which tools to use for developing and animating characters in Softimage.

1 Model the body geometry that is to be used as the envelope (skin).

You can use either a low- or high-resolution version of the envelope. A low-res envelope lets you work out the animation with it as a reference, but doesn’t hinder the refresh speed. You can later switch to the high-res version for the final animation and rendering.

2 Create a model structure for your character, starting with the body geometry.

Then as you create the other elements (skeleton, rig controls, Mixer node), you put them in the model to keep all the character’s elements together. This makes it easy to copy or export your character later on.

3 Build a skeleton to provide a framework for the character, and to pose or deform it intuitively.

The structure of your character’s skeleton determines every aspect of how it will move. With the envelope as a guide, you can create the bones for the skeleton and assemble them into a hierarchy.

4 Create a rig using different control objects to help you pose and animate the character more quickly and accurately than without a rig.

While simple characters may not require a rig, a character that is complex or needs to do complicated movements will need one.

Page 173: XSI guia basica


5 Apply the envelope to the skeleton. This also involves setting how the different parts of the envelope are weighted to the different bones in the skeleton.

You should also save a reference pose of the envelope before you start animating, as a home base to which you can return.

6 Animate the skeleton using inverse kinematics (IK) and forward kinematics (FK).

You can also apply mocap data to your character to animate it, including retargeting the data onto different characters with the MOTOR tools.

7 Adjust the animation using any of the animation tools in Softimage, such as the dopesheet, the fcurve (animation) editor, animation layers, or the animation mixer.

For example, you may want to fix foot sliding in the fcurve editor, add a progressive offset to a walk cycle in the mixer, or add a few keyframes on top of some mocap data with animation layers.

Page 174: XSI guia basica


Getting Started with Ready-Made Characters

Looking for a quick way to get started with characters in Softimage? Check out the ready-made models in the Get > Primitive > Model and Get > Primitive > Character menus. Here are just a few of the characters you’ll meet on these menus:

All predefined skeletons, bodies, characters, and rigs are implemented as models. As well, most of the bipeds share the same basic hierarchy structure that you can see in the explorer, making it easy to share animation later, especially if you’re using actions in the animation mixer.

Making Custom Characters and Faces

The Character Designer (choose Get > Primitive > Character > Man Maker) loads a generic male body; you then use sliders in a property editor to interactively manipulate individual body and head features. You can create many bodies, each with its own distinctive look, yet have all of them share the same underlying topology.

The Face Maker (choose Get > Primitive > Character > Face Maker) loads a predefined low-resolution polygon mesh head (male or female). This lets you create any number of different faces with the same topology, allowing you to easily copy shape animation keys between them. Perfect for testing out some shape animation!

Complete Woman Skeleton and Biped Character

XSI Man Armored and Elephant

Man Maker

Face Maker

Page 175: XSI guia basica


Setting Up Your Character

How you set up your character determines its destiny in many different ways. Here are some issues to think about while you’re planning out your character animation.

Putting the Character’s Elements into Models

Models in Softimage are containers that make it easy to organize scene elements that need to be kept together. A character’s skeleton hierarchy, rig controls, envelope geometry, and groups are often kept together within a model. The main reason for using models with character animation is that they provide the easiest way to import and export characters between scenes and to copy animation between characters.

You can refine your rigs and character models over the course of a production without fear of losing animation. For example, character animators can start roughing out animation with a simple rig and low-resolution proxy model while the other creative work is still in progress. As long as you keep the rig controls’ names and their coordinate space consistent, all the animation is kept and can be reapplied as the character and rigging both get more complex.

Another reason to work with models is to easily use the animation mixer. Each model can only have one Mixer node. If you have many characters in a scene but they aren’t within models, you have only one Mixer node for the whole scene (under the scene root, which is technically a model) which means that you can’t copy animation from one character to another.

Organizing Your Character into Scene Layers and Groups

Scene layers let you divide up different scene elements into groupings whose visibility, selectability, renderability, and ghosting can be controlled. Press 6 to open the scene layer manager and set up the layers.

For example, you can separate the character’s envelope (geometry), its skeleton, and its control objects for the rig each into different layers.

Layers, however, live only at the scene level, so if you’re importing and exporting models between scenes, they’re not going to include any layer information. This is where groups can be of help.

Groups let you keep certain character elements together for easy selection, such as all objects that are to be enveloped. Groups are properties of a model, so you can export them with your character model.

Page 176: XSI guia basica


Tools for Easy Viewing and Selecting

When you’re animating a skeleton, you may want to work with a low-resolution version of the envelope on the skeleton. This helps you get a sense of how the animation will work with the final envelope. However, working with enveloped skeletons can make it difficult to view or select chain elements. To help you with this, Softimage has several viewing and selection options, with the most common ones shown here.

You can set up a character synoptic view for other members of your team, allowing them to use your character easily. Synoptic views allow you and others to quickly access commands and data related to a specific object or model. They consist of a simple HTML image map stored as a separate file outside of the Softimage scene file. The HTML file is then linked to a scene element.

Clicking on a hot spot in the image either opens another synoptic view or runs a linked script. You can include all sorts of information about the character, set up hotspots for selecting body parts, setting keys on different elements, running a script, etc.

Shadow icons are displayed here as cylinders for many bones. These shadows have been resized and offset from the bone to make them easy to see and grab. You can also color-code the shadows to identify different groups of controls.

You can also change the shape, color, and size of the chain elements themselves (such as resizing the bones), including having no chain element displayed at all.

X-ray shading lets you see and select the underlying chains while still seeing the shaded surface of the envelope.

You can display the chains in screen (bones inside) or overlay (bones on top) modes.

Synoptic views

Click on a hot spot on the synoptic image to run the script that is linked to that image.

Page 177: XSI guia basica


Building Skeletons for Characters

Skeletons provide an intuitive way to pose and animate your character. A well-constructed skeleton can be used for a wide variety of poses and actions. Skeletons in Softimage are made up of bones that are linked together by joints that can rotate. The combination of bones and joints is referred to generically as a chain in Softimage because you can use chains for animating any type of object, not just humans or creatures. Chains have several elements, each of which has an important part to play, as shown below.

The root is a null that is the starting point of the chain. It is the parent of all other elements in the chain.

Because the first joint is local to the root, the root’s position and rotation determine the position and rotation of the rest of the chain.

The bones are connected by joints. A bone always rotates about its joint, which is at its top. The first bone rotates around the root.

The first bone in the chain is a child of the root, and all other bones are children of their preceding bones.

Keying the rotation of bones is how you animate with forward kinematics (FK).
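Because each bone rotates about the joint at its top, rotations accumulate down the chain from the root. A plain-Python sketch of 2D forward kinematics (illustrative only, not the Softimage API):

```python
import math

def fk_joint_positions(root, bone_lengths, joint_angles_deg):
    """2D forward kinematics: each bone rotates about the joint at
    its top, so joint rotations accumulate down the chain."""
    x, y = root
    heading = 0.0
    positions = [(x, y)]
    for length, angle in zip(bone_lengths, joint_angles_deg):
        heading += math.radians(angle)  # child inherits parent rotation
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((x, y))
    return positions

# Two bones of length 1; first joint keyed to 90 degrees, second to 0.
# The whole chain points straight "up": (0,0) -> (0,1) -> (0,2).
for px, py in fk_joint_positions((0.0, 0.0), [1.0, 1.0], [90.0, 0.0]):
    print(round(px, 6), round(py, 6))
```

Inverse kinematics solves the opposite problem: given a goal position for the effector, find the joint angles.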

A joint is the connection between elements in a chain: between bones in the chain, between the root and the first bone, and between the last bone and the effector. By default, joints are not shown but you can easily display them.

• In a 2D chain, the joints act as hinges, restricting movement so that it’s easier to create typical limb actions, such as bending an arm or leg. Only its first joint at the root acts as a ball joint, allowing a free range of movement: when using IK, the rest of the 2D chain’s joints rotate only on the root’s Z axis, like hinges. Of course, you can rotate the joints of a 2D chain in any direction with FK, but this is overridden as soon as you invoke IK.

• In a 3D chain, the joints can move any which way they like. All of its joints are like ball joints that can rotate freely on any axis, allowing you to animate wiggly objects like a tail or seaweed.

The effector is a null that is the last part of a chain. Moving the effector invokes inverse kinematics (IK), which modifies the angles of all the joints in that chain.

When you create a chain, the effector is a child of the root, not the preceding bone.

Anatomy of a skeleton

Page 178: XSI guia basica


Creating Skeletons

Drawing chains is pretty simple in Softimage: you choose the Create > Skeleton > Draw 2D Chain or 3D Chain command on the Animate toolbar and click where you want the root, joints, and effector to be. Here are some tips to help you draw chains:

• Draw the chains in relation to the default pose of the envelope that you’re planning to use. This means you don’t have to spend as much time adjusting each bone’s size and position later.

• Draw the chain with at least a slight bend to determine its direction of movement when using IK. Drawing bones in a straight line can result in unpredictable bending.

• If you want two chains to be mirrored, such as a character’s arms or legs, you can draw one and have the other one created at the same time. Just activate symmetry (Sym) mode and then draw a chain.

After you have created the chains for a character’s skeleton, you need to organize them in a hierarchy. Hierarchies are parent-child relationships that make it easy to animate the skeleton. There are many different ways in which you can set up a hierarchy, depending on the skeleton’s structure and the type of movements that the character needs to make.

1 Choose the Create > Skeleton > Draw 2D Chain or Draw 3D Chain command.

2 Click once to create the root and first joint.

3 Click again to create the first bone and second joint.

4 Click once more to create another bone and joint.

Tip: You can try out the joint’s location by keeping the mouse button held down as you drag. The bone and joint are not created until you let go of the mouse button.

5 When you’re ready to finish, right-click to create the effector and end the chain.

Part of a skeleton hierarchy structure shown in the schematic view. In this case, the spine root is the parent of the leg roots, spine, and spine effector.

These elements are, in turn, parents of the legs, neck, shoulders, spine, and so on.

How to create a hierarchy: Select the node you want to be the parent, click the Parent button, and then pick the elements that will be its children. Right-click to end parenting mode.

OR: In an explorer, drag the nodes you want to be children and drop them onto the node that will be the parent.

Page 179: XSI guia basica


Building Skeletons for Characters

Hold That Pose!

When you’re creating a skeleton, it’s a good idea to save it in a default position (pose) before it’s animated or enveloped. This way you have a solid reference point to revert to when enveloping and animating the skeleton. This pose is known as the neutral pose, reference pose, base pose, or bind pose, and is usually set up so that the character has outstretched arms and legs (a T-pose), making it easy to weight the envelope and adjust its textures.

To save the skeleton in a pose, you can create an action source using the Skeleton > Store Skeleton Pose command. To return to this pose at any time, you apply it to your character with the Skeleton > Apply Skeleton Pose command.

Because this pose is saved in an action source, you can pop it into the animation mixer to do nonlinear animation. For example, you could use this pose, as well as other stored action poses, to block out a rough animation for the character in the mixer.

Neutral Poses for Easy Keying

While a character’s reference or neutral T-pose makes it easy to weight the envelope and adjust its textures, it’s not the best pose for animating. This is because it can create local transformation values that are not easy to key. For example, if you load the default skeleton that comes in Softimage and you want to key the rotation of the finger bones, you’ll see that the bones’ local rotation values are difficult numbers to use for keying because they often involve several decimal places.

To solve this problem, pose your character how you want for its neutral pose and then simply choose the Skeleton > Create Neutral Pose command. This creates a neutral pose that uses zero for its local transformation values (0 for rotation and translation, 1 for scaling). Basically, this neutral pose acts as an offset for the object’s current local transformation values. To return to this neutral pose, you can enter zero in the Transform panel (“zero out” the values).

Then when you key the character’s values, they reflect the relative difference from zero, and not a number that’s difficult to use. For example, when you key a hand bone at coordinates (0, 3, 0), you know that it’s 3 units in the Y axis above the neutral pose.
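The offset mechanics can be sketched in plain Python (an illustrative sketch only; the function and value names below are made up, not the Softimage API):

```python
# Illustrative sketch of how a neutral pose acts as an offset for local
# transformation values (hypothetical names, not the Softimage API).

def make_neutral_pose(absolute_rotation):
    """Store the current absolute rotation as the neutral pose offset."""
    return dict(absolute_rotation)

def local_value(absolute_rotation, neutral_pose, axis):
    """Keyable local value = absolute value minus the neutral offset."""
    return absolute_rotation[axis] - neutral_pose[axis]

# Awkward absolute values like these are hard to key directly.
bone_abs = {"x": 12.5, "y": -3.5, "z": 0.25}
neutral = make_neutral_pose(bone_abs)

# Right after creating the neutral pose, every local value is exactly 0,
# so "zeroing out" the Transform panel returns to this pose.
zeroed = [local_value(bone_abs, neutral, a) for a in "xyz"]   # [0.0, 0.0, 0.0]

# Rotate the bone 30 degrees in X: the keyed value is a clean, relative 30.
bone_abs["x"] += 30.0
keyed_x = local_value(bone_abs, neutral, "x")                 # 30.0
```

The point of the offset is visible in the last line: the key stores the relative difference from zero, not the awkward absolute number.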

Character in his neutral pose for weighting and texturing.

If you store a skeleton pose of this position, it’s easy to return to it at any point of your character’s development.

Hand bone rotated and keyed. Notice how the rotation values are easy to understand because they’re using 0 as a reference.

Branch-selected hand bone in neutral pose at 0.

Page 180: XSI guia basica


Making Adjustments to a Skeleton

Even though you’ve created your skeleton with the envelope in mind, you always need to resize bones, chains, or a whole skeleton to achieve the exact structure you want. You may also need to add bones to the skeleton or remove them from it.

It’s usually better to modify a skeleton before you apply the envelope to it so that you don’t have to reweight the envelope to the bones. However, you can change the skeleton after it’s been enveloped, and decide whether to have the envelope adjust to the skeleton or not.

You can add bones to a chain using the Create > Skeleton > Add Bone to Chain command.

Click at the point where you want the new bone to end, and the new bone is added between the last bone and the effector.

Keep on adding as many bones as you like, then right-click to end the mode.

You can’t select and delete individual bones from a chain because of their hierarchy dependencies, but you can branch-select (middle-click) a chain and then delete it.

If there are children in that chain that you want to keep, make sure to Cut their links before deleting the chain, and then reparent them to the modified chain.

Adding bones

Removing bones

Use the Move Joint tool to move the knee joint to a new position. The bones connected above and below this joint are resized.

The easiest way to resize bones is to use the Create > Skeleton > Move Joint/Branch tool (press Ctrl+J).

This tool lets you interactively resize bones by moving any chain element to a new location. The bones that are immediately connected to that chain element are resized and rotated to fit the chain element’s new location.

Resizing bones

Modifying bones for an enveloped skeleton

Moving the knee joint using Move Branch resizes only the bone above it: this joint’s children are moved as a group but are not resized.

If you resize or add bones to a skeleton that’s already enveloped, the envelope automatically adjusts to the new skeleton. This means that you may need to adjust the weighting on the envelope.

If you want to resize bones without having the envelope adjust to the new size, you set a new reference pose with the Deform > Envelope > Set Reference Pose command.

Page 181: XSI guia basica


Enveloping

An envelope is an object that deforms automatically, based on the pose of its skeleton or other deformers. In this way, for example, a character moves as you animate its skeleton. The process of setting up an envelope is sometimes called skinning or boning.

Every point in an envelope is assigned to one or more deformers. For each point, weights control the relative influence of its deformers. Each point on an envelope has a total weight of 100, which is divided between the deformers to which it is assigned. For example, if a point is weighted by 75 to the femur and 25 to the tibia, then the femur pulls on the point three times more strongly than the tibia.
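The weighting arithmetic can be sketched as a simple blend. This is an illustrative simplification (real envelopes blend full deformer transforms, not just translations, and these names are made up):

```python
# Sketch of weighted deformation: each point's weights total 100, and
# each deformer moves the point by its fraction of the deformer's motion.
# (Simplified to translations only; names are illustrative.)

def deform_point(rest_pos, weights, deformer_offsets):
    """Blend deformer translations by weight; weights sum to 100."""
    x, y, z = rest_pos
    for name, w in weights.items():
        dx, dy, dz = deformer_offsets[name]
        x += w / 100.0 * dx
        y += w / 100.0 * dy
        z += w / 100.0 * dz
    return (x, y, z)

# A point weighted 75 to the femur and 25 to the tibia: when the femur
# moves 4 units in X and the tibia stays put, the point follows 3 units,
# i.e. the femur pulls three times more strongly.
weights = {"femur": 75, "tibia": 25}
offsets = {"femur": (4.0, 0.0, 0.0), "tibia": (0.0, 0.0, 0.0)}
moved = deform_point((0.0, 1.0, 0.0), weights, offsets)   # (3.0, 1.0, 0.0)
```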

Setting Envelopes

1. Make sure the envelope and deformers are in the reference pose (sometimes called a bind pose). The reference pose determines how points are initially assigned and weighted. It’s best to choose a reference pose that makes it easy to see and control how points will be assigned.

2. Select the objects, hierarchies, or clusters to become envelopes.

3. Choose Deform > Envelope > Set Envelope from the Animate toolbar.

If the current construction mode is not Animation, you are prompted to apply the envelope operator in the animation region of the operator stack anyway. In most cases, this is probably what you want.

4. Pick the objects that will act as deformers. You are not restricted to skeleton bones; you can pick any object. Left-click to pick individual objects and middle-click to pick branches. You can also pick groups in the explorer—this is equivalent to picking every object in the group individually. If you make a mistake, Ctrl+click to undo the last pick.

5. When you have finished picking deformers, right-click to terminate the picking session. Each deformer is assigned a color, and points that are weighted 50% or more toward a particular deformer are displayed in the same color.

Use the Automatic Envelope Assignment property editor to adjust the basic settings.

6. Move the deformers to see how the envelope deforms. If necessary, you can now change the deformers to which points are assigned, as well as modify the envelope weights using the methods described in the next few sections.

If you ever need to reopen the Automatic Envelope Assignment property editor, you can find it in the envelope weight stack in an explorer.
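One way a proximity-based initial assignment might work is sketched below. This is an assumption for illustration only; Softimage’s actual automatic assignment algorithm is more sophisticated:

```python
import math

# Hypothetical proximity weighting: each deformer gets a weight inversely
# proportional to its distance from the point, normalized to total 100.
# (Illustrative only -- not the actual Automatic Envelope Assignment.)

def auto_weights(point, bone_positions):
    """bone_positions: {deformer name: (x, y, z)}."""
    inv = {name: 1.0 / max(math.dist(point, pos), 1e-9)
           for name, pos in bone_positions.items()}
    total = sum(inv.values())
    return {name: 100.0 * v / total for name, v in inv.items()}

# A point 1 unit from the femur and 3 units from the tibia ends up
# weighted roughly 75/25 in the femur's favor.
w = auto_weights((0.0, 1.0, 0.0),
                 {"femur": (0.0, 2.0, 0.0), "tibia": (0.0, -2.0, 0.0)})
```

Whatever the exact formula, the key behavior is the same: nearer deformers receive more of the point’s 100 total, and the reference pose fixes those distances.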

Page 182: XSI guia basica


The Weight Paint Panel

The weight paint panel is very useful when modifying weights. It combines several features from the weight editor, brush properties, and the Animate toolbar. To display the weight paint panel, press Ctrl+3 or click the weight paint panel icon at the bottom of the toolbar.

Painting Envelope Weights

You can use the Paint tool to adjust envelope weights. This lets you use a brush to apply and remove weights on points in the 3D views.

1. Select an envelope.

2. Activate the Paint tool using the weight paint panel or by pressing w.

3. Pick a deformer for which you want to paint weights by selecting it in the list in the weight paint panel or by pressing d while picking it in a 3D view.

4. If desired, set the paint mode. Most of the time you will be using Add (additive) but Smooth, Erase, and Abs (absolute) are also sometimes useful.

5. If desired, adjust the brush properties:

- Use the r key to change the brush radius interactively.

- Use the e key to change the opacity interactively.

- Set other options in the Brush Properties editor (Ctrl+w).

6. Click and drag to paint on points on the envelope. In normal (additive) paint mode:

- To add weight, use the left mouse button.

Weight Paint Panel controls:

• Open the weight editor.

• Choose a paint mode.

• Set paint density.

• Set brush size.

• Update continuously (on) or only when the mouse button is released (off).

• Pick a deformer for painting from the 3D views.

• Select the deformer with the most influence on the point you pick.

• Click to pick a deformer for painting; right-click for other options.

• Change the color of the current deformer.

• Set the weight assignment of selected points to the current deformer numerically.

• Numeric weight assignment options.

• Smooth weights on the object or selected points.

• Reassign points to other deformers.

• Freeze the initial weight assignment and any modifications.

• Activate the Paint tool.

• Display only the current deformer’s weight map.

Page 183: XSI guia basica


- To remove weight, either use the right mouse button or press Shift+left mouse button.

- To smooth weight values between deformers, press Alt+left mouse button.

7. Repeat steps 3 to 6 for other deformers and points until you are satisfied with the weighting.
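One way additive painting can keep each point’s total at 100 is to scale the other deformers down as weight is added, roughly like this (an illustrative normalization sketch, not the actual Weight Painter operator):

```python
def paint_add(weights, deformer, amount):
    """Add weight to one deformer; rescale the others so the point's
    total stays at 100 (illustrative normalization sketch)."""
    new = min(100.0, weights.get(deformer, 0.0) + amount)
    others = {n: w for n, w in weights.items() if n != deformer}
    rest = sum(others.values())
    remaining = 100.0 - new
    out = {n: (w / rest * remaining if rest else 0.0)
           for n, w in others.items()}
    out[deformer] = new
    return out

# A 50/50 point painted with 25 more weight toward the femur becomes
# 75/25, and the total stays at 100.
w = paint_add({"femur": 50.0, "tibia": 50.0}, "femur", 25.0)
```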

Smoothing Envelope Weights

In addition to painting in Smooth mode, you can select an envelope or specific points and click Apply Smooth on the weight panel. This applies a Smooth Envelope Weight operator with several options.
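Smoothing can be pictured as pulling each point’s weights toward the average of its neighbors’ weights, for example (a minimal one-pass sketch; the real Smooth Envelope Weight operator exposes more options):

```python
def smooth_weights(weights, neighbors, strength=0.5):
    """One smoothing pass: blend each point's weights toward the
    average of its neighbors' weights (illustrative sketch)."""
    out = {}
    for point, w in weights.items():
        nbrs = neighbors[point]
        avg = {d: sum(weights[n].get(d, 0.0) for n in nbrs) / len(nbrs)
               for d in w}
        out[point] = {d: w[d] + strength * (avg[d] - w[d]) for d in w}
    return out

# Two adjacent points with opposite hard assignments soften to 50/50,
# removing the crease at the boundary between deformers.
pts = {0: {"a": 100.0, "b": 0.0}, 1: {"a": 0.0, "b": 100.0}}
sm = smooth_weights(pts, neighbors={0: [1], 1: [0]})
```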

Mirroring Envelope Weights Symmetrically

You can mirror the envelope weighting symmetrically. This lets you set up the weighting on one half of your character and then copy the weights to the corresponding points and deformers on the other half.

First, you must establish the correspondence between symmetrical points and deformers using Deform > Envelope > Create Symmetry Mapping Template from the Animate toolbar. Then, you can select properly weighted points and copy their values to the other side using Deform > Envelope > Mirror Weights.
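Conceptually, the symmetry mapping template is a pair of lookup tables: mirrored points and mirrored deformers. A sketch with hypothetical names (not the Softimage API):

```python
def mirror_weights(weights, point_map, deformer_map):
    """Copy each mapped point's weights to its mirror point, swapping
    deformers via the symmetry mapping (illustrative sketch)."""
    out = {p: dict(w) for p, w in weights.items()}
    for src, dst in point_map.items():
        out[dst] = {deformer_map[d]: w for d, w in weights[src].items()}
    return out

# Weight a left-hip point, then mirror it to the right side.
w = {"hip_L": {"femur_L": 75.0, "tibia_L": 25.0}}
m = mirror_weights(w,
                   point_map={"hip_L": "hip_R"},
                   deformer_map={"femur_L": "femur_R", "tibia_L": "tibia_R"})
```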

Reassigning Points to Specific Deformers

You can reassign points to specific deformers. This is useful in case the automatic assignment did not assign the points to the desired bones.

1. Select points on the envelope.

2. Choose Deform > Envelope > Reassign Locally on the Animate toolbar, or click Local Reassign on the weight paint panel.

3. Pick one or more of the original deformers.

Note: If your envelope has multiple maps (for example, a weight map in addition to an envelope weight map), you may need to select the envelope weight map explicitly before you can paint on it. A quick way is to select the enveloped geometry object, then choose Explore > Property Maps from the Select panel and select the map to paint on.

These points are incorrectly assigned to this deformer.

Page 184: XSI guia basica


Setting Weights Numerically

The weight editor allows you to modify envelope weight assignments numerically. You can open the weight editor by pressing Ctrl+e or by clicking Weight Editor on the weight panel.

Weight editor controls:

• Set weight of selected cells.

• Control display of enveloped objects.

• Control display of points and deformers.

• Weight assignment options.

• Deformers are listed in columns. Right-click for display options. Drag a column border to resize.

• Multiple envelopes: double-click to expand and collapse, or right-click for more options. If some points aren’t fully weighted, the name is shown in red. Hover the mouse pointer over the name to see how many points aren’t fully weighted.

• Points are listed in rows. Click to select, right-click for display options. Drag a row border to resize. Points that aren’t fully weighted are shown in red.

• Selected cells are highlighted.

• Smooth weights on the object or selected points.

• Reassign points to other deformers.

• Transfer cell selection to 3D views.

• Lock weights.

• Non-zero weights are shaded.

• Limit the number of deformers per point.

• Freeze the envelope operator stack.

• Points with more deformers than the limit are shown in yellow, as are envelopes with such points.

Page 185: XSI guia basica


Locking Envelope Weights

You can lock or “hold” the values of envelope weights using the weight editor, the Envelope menu of the Animate toolbar, or the context menu in the deformer list of the weight panel. Locking prevents you from accidentally modifying points that you have carefully adjusted when you are working on other points. It is also useful for setting exact numeric values while keeping Normalize on so that points don’t inadvertently become partially weighted to no deformer. If you need to modify locked points later, you must first unlock them. Points that are locked for all deformers are drawn in black in the 3D views.
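The interaction between locking and normalization can be sketched like this (illustrative only, not the actual weight editor logic):

```python
def set_weight_with_locks(weights, locked, deformer, value):
    """Set one unlocked deformer's weight exactly; renormalize the other
    unlocked deformers so the total stays at 100, leaving locked
    weights untouched (illustrative sketch)."""
    out = dict(weights)
    out[deformer] = value
    free = [n for n in out if n != deformer and n not in locked]
    fixed = sum(out[n] for n in locked)
    pool = sum(out[n] for n in free)
    target = 100.0 - value - fixed
    for n in free:
        out[n] = (out[n] / pool * target) if pool else target / len(free)
    return out

# The spine weight is locked at 40; setting the clavicle to exactly 30
# makes only the rib absorb the difference.
w = set_weight_with_locks({"spine": 40.0, "rib": 40.0, "clavicle": 20.0},
                          locked={"spine"}, deformer="clavicle", value=30.0)
```

This is why locking pairs well with Normalize: the exact value you set stays put, and only unlocked weights move to keep the point fully weighted.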

Freezing Envelope Weights

When you freeze envelope weights using Freeze Weights on the weight paint panel, the weight map’s operator stack is collapsed, removing the original Automatic Envelope Assignment property along with any Weight Painter, Modify Envelope Weight, and Smooth Envelope Weight operators that have been applied. This reduces the amount of stored data and increases performance, but also has a number of other effects:

• The initial envelope weights can no longer be recalculated—it’s as if the envelope was imported as is.

• If you change the reference pose, you can no longer change the initial envelope weights based on the new pose.

• If you add a deformer to an envelope, you can no longer recalculate the weights automatically. The envelope points are all weighted 0 to the new deformer, and you must assign weights manually.

However, you can still add new paint strokes, smooth weights, and edit weights numerically after freezing. In addition, you can still reassign points locally to other deformers.

Using Envelope Presets

You can use the commands on the File menu of the weight editor to save and load presets of envelope weights. This can be useful if you want to experiment with modifying weights—you can save the current weights and reload them later if you don’t like the results.

To share presets between different envelopes, the envelopes must meet the following conditions:

• They must have exactly the same topology. This includes both the number of points and their connections.

If you added points after you created a preset, and then reapply the preset to the modified geometry, the new points are not weighted to any deformer until you assign them manually.

• Their deformers must have the same names.

The easiest way to meet these conditions is to simply duplicate a model containing an envelope and its deformers.
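The topology and naming conditions can be sketched as a simple check-and-apply. The preset format below is hypothetical (Softimage’s actual preset files are richer):

```python
def apply_weight_preset(preset, point_count, deformer_names):
    """Apply a saved weight preset to an envelope with matching deformer
    names; points added after the preset was created receive no weights
    and must be assigned manually (illustrative sketch)."""
    if set(preset["deformers"]) != set(deformer_names):
        raise ValueError("preset and envelope deformer names must match")
    weights = [dict(w) for w in preset["weights"]]   # one dict per point
    while len(weights) < point_count:                # newly added points
        weights.append({d: 0.0 for d in deformer_names})
    return weights

preset = {"deformers": ["femur", "tibia"],
          "weights": [{"femur": 100.0, "tibia": 0.0}]}

# Reapplying to a mesh that gained a point: the new point is unweighted.
w = apply_weight_preset(preset, point_count=2,
                        deformer_names=["femur", "tibia"])
```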

Changing Reference Poses

After an envelope has been assigned, you can change the reference pose of the envelope. The reference pose is the stance that the envelope and its deformers return to when you use the Reset Actor command. It is also the pose that determines the initial weighting of points to deformers based on proximity.

First mute the envelope, then adjust the positions of the envelope and deformers. Next, select both the envelope and the deformers and choose Deform > Envelope > Set Reference Pose from the Animate toolbar. Finally, unmute the envelope.

Page 186: XSI guia basica


Adding and Removing Deformers

After you have applied an envelope, you can add and remove deformers. To add deformers, select the envelope, choose Deform > Envelope > Set Envelope from the Animate toolbar, pick the new deformers, and right-click when you have finished. If the envelope weights have been frozen or if Automatically Reassign Envelope When Adding Deformers is off, no points are weighted to the new deformers so you must do that manually. Otherwise, the initial weight assignments are recalculated and any modifications you made to them are preserved.

To remove deformers, simply choose Deform > Envelope > Remove Deformers from the Animate toolbar, pick the deformers to remove, and right-click when you are finished.

Modifying Enveloped Objects

Sometimes, after carefully assigning weights manually, you discover that you need to make a substantial change to the enveloped object, such as adding points. Luckily, you do not need to redo all your weighting—you can add and move points after enveloping.

When you add a point to an enveloped object, it is automatically weighted based on the surrounding points. It is better to add new points before removing old ones—this means that there is more weight information for the new points. You can assign the new points to specific deformers and modify weights as with any point on the envelope.

If you want to apply a deformation or move points on an enveloped object, make sure to first set the construction mode based on what you want to accomplish. For example:

• If you want to modify the base shape of the envelope, set the construction mode to Modeling.

• If you want to author shape keys on top of the envelope, for example, to create muscle bulges, set the construction mode to Secondary Shape Modeling.

Limiting the Number of Deformers per Point

You can limit the number of deformers to which each point’s weight is assigned. This can be especially important for game characters, because some game engines have a limit on the number of deformers.

1. Set the maximum number of deformers on the weight editor’s command bar.

If a point’s weight is assigned to more than this number of deformers, its row is shown in yellow in the weight editor. If an envelope has any such points, its row is shown in yellow, too.

2. To try to fix these points automatically, click Enforce Limit. A Limit Envelope Deformers operator is applied, and its property page is opened automatically. By default, the limit is the one you set on the command bar, but you can change it for individual operators.

If a point has more than the maximum number of deformers, the operator unassigns the deformers with the lowest weights and then normalizes the weight among the remainder. However, it will respect locked weights—locked weights are never changed, even if other deformers have greater weight. If there aren’t enough unlocked weights to modify, then the total weight might not add up to 100%.
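The enforcement rule described above (drop the lowest-weighted unlocked deformers, renormalize the remainder, never touch locked weights) can be sketched as:

```python
def enforce_limit(weights, limit, locked=frozenset()):
    """Unassign the lowest-weighted unlocked deformers until at most
    `limit` deformers carry weight, then renormalize the unlocked
    survivors toward a 100 total (illustrative sketch)."""
    out = dict(weights)
    for name, _ in sorted(weights.items(), key=lambda kv: kv[1]):
        if sum(1 for w in out.values() if w > 0.0) <= limit:
            break
        if name not in locked:   # locked weights are never changed
            out[name] = 0.0
    lost = 100.0 - sum(out.values())
    free = [n for n, w in out.items() if w > 0.0 and n not in locked]
    pool = sum(out[n] for n in free)
    for n in free:
        out[n] += lost * out[n] / pool if pool else 0.0
    return out

# Four deformers limited to two: the two smallest are unassigned and
# the survivors are renormalized (50/30 becomes 62.5/37.5).
w = enforce_limit({"a": 50.0, "b": 30.0, "c": 15.0, "d": 5.0}, limit=2)
```

Note how the sketch mirrors the caveat in the text: if too many weights are locked, the unlocked pool may be empty and the total cannot be brought back to 100.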

Maximum number of deformers

Page 187: XSI guia basica


Rigging a Character

Control rigs allow for “puppeteering” a character, helping you easily pose and animate it. Once a control rig is set up properly, you can animate more quickly and accurately than without one.

There are a number of tools in Softimage to help you create a rig for your character. You can use them to create control objects and constrain them to the skeleton, and to create shadow rigs and manage the constraints between them and their parent rigs.

You can also use the prefab guides and rigs in Softimage to help you get going quickly. These are available for biped, dog-leg biped, and quadruped characters. The rigs are skeletons that include control objects that you can position and orient to animate the various parts of the character’s body.

Shadow Rigs and Exporting Animation

Shadow rigs are simpler rigs that are constrained to the more complex main rig used for animating the character. Shadow rigs are usually used for exporting animation, such as to a game or crowd engine or to other 3D software programs.

You can load a basic shadow rig with the Get > Primitive > Model > Biped - Box command. You can also create a shadow rig from a guide with the Character > Hierarchy from Guide command, or generate a shadow rig at the same time that you create a prefab rig.

To transfer the animation from the complex (animated) rig to its shadow rig, you plot the animation while the shadow rig is still constrained to the complex rig. Then you can export the shadow rig or just its animation.

Separate controls for the chest, upper body, and hips let you position and rotate each area individually.

Feet have three controls to allow for complex angles and foot rolls.

Ready-made (prefab) biped rig that comes with Softimage

Volume indicators help you work with envelopes.

You can create either a quaternion or regular chain spine and head.

Animated main rig

Animation transferred to shadow rig while it’s constrained to the main rig.

Page 188: XSI guia basica


Creating Your Own Rig

There are a number of tools in Softimage to help you create a rig for your character. You can create primitive control objects (such as spheres and cubes) or sophisticated control elements (such as spines and spring-based tails) and constrain them to the skeleton. Expressions and scripted operators on these controls give you ultimate control over your character’s animation. There are also tools to help you easily create shadow rigs and manage the constraints between them and their parent rigs.

1. Create control objects out of primitive objects or curves for each skeleton element you want to control. You can also create your own objects to look like the body parts you’re controlling, such as the feet, hands, head, or hips.

You can create a simple but flexible spine with the Create > Skeleton > Create Spine command. This creates a quaternion-blended spine for controlling a character the way you like. You constrain the top and bottom vertebrae to hip and chest control objects that you create.

Create spring-based tail or ear controls using the Create > Skeleton > Create Tail command. Spring-based controls use dynamics to make them react to motion, such as bouncing when a character runs or jumps.

2. Constrain the control object to its skeleton element using constraints from the Constrain menu. The pose constraint is often used because it constrains all transformations (SRT) of the control object to its skeleton element.

Use up-vector constraints for controlling the resolution plane of the arms and legs when using IK. Put the control objects behind the legs or arms and constrain them to the thigh or upper-arm bones using the Create > Skeleton or Constrain > Chain Up Vector command.

3. Create an object, such as a null, and make it the parent of all skeleton and rig control objects. Also make sure that all the rig control objects are within the character’s model. You can also create a Transform Group in which a null becomes an invisible parent of all selected objects.

Page 189: XSI guia basica


Using Prefab Guides and Rigs

You can use the prefab guides and rigs in Softimage to get going quickly. These are available for biped, dog-leg biped, and quadruped characters. The resulting rigs created from the guides are skeletons that include control objects that you can position and orient to animate the various parts of the character’s body.

You can customize these guides and rigs so that they contain only the elements you need. They can be used as a starting point for different rigging styles, and technical directors can write their own proportioning script to attach their own rig to a guide.

The guides have synoptic views to help you select and animate the rig controls: select any control and press F3. There are also preset character key sets and action sources to help you animate the rig.

1. Create a guide by choosing Character > Biped Guide (or quadruped or biped dog-leg) and adjust it to fit your character’s envelope. Drag the red cubes to resize the different parts of the body. You can use symmetry to resize the limbs on both sides of the body at the same time.

2. When the guide is fitted to the envelope, create a rig based on it by choosing Character > Rig from Biped Guide. The rig is a skeleton that also includes standard Softimage objects as control objects. You can also create tail, ear, and belly controls that are driven by springs. This lets you create secondary animation on these body parts using dynamics.

3. Apply the body geometry as an envelope to the rig, using the envelope_group in the rig’s model to apply it to the correct parts of the rig.

4. Position and rotate the rig controls and key them to animate the various parts of the skeleton.

Page 190: XSI guia basica


Animating Characters with FK and IK

Skeletons provide an intuitive way to pose and animate your model. A well-constructed skeleton can be used for a wide variety of poses and actions, in much the same way as the skeletons in our bodies can. How parts of the skeleton move relative to each other is determined by the way your skeleton hierarchy is built and by whether and how objects are constrained to each other.

Before you start animating your character, it is important to understand how animating transformations work in Softimage. There are several issues related to local and global animation, as well as animating transformations in skeleton hierarchies (see Animating Transformations on page 151).

You animate skeletons using inverse kinematics (IK) and forward kinematics (FK). The method you choose depends on what type of motion you’re trying to achieve. Of course, you can animate with both IK and FK on the same chain and then blend between them, allowing you the flexibility to animate as you like.

Animating with Forward Kinematics

Forward kinematics, or FK as it is usually known, allows for complete control of the chain’s behavior. When you animate with FK, you rotate a bone into position, which sets the angle of its joint, and then key the bone’s rotation values (its orientation). Each movement needs to be planned to create the resulting animation. For example, to bend an arm, you start from the “top” and move down by rotating the upper arm bone, the forearm bone, and finally the hand bone.

With FK, you can:

• Key the exact orientation (in X, Y, Z) of a joint. This prevents any surprises from occurring when 2D chains flatten on their resolution plane.

• Control certain joints that are difficult to animate, such as shoulders and arms.

• Have a movement properly “follow through”, such as giving a good, hard kick to a football.

To help make keying easier, you can create a character key set that contains all the rotation parameters for the bones. Then you can quickly key using this set. In a similar way, you can use the keying panel to key only the rotation parameters that you have set as “keyable” for the bones.
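The top-down rotation logic of FK can be sketched in two dimensions: each bone’s orientation is its joint angle added to its parent’s accumulated heading. This is an illustrative sketch, not the Softimage solver:

```python
import math

# 2D forward kinematics sketch: walk down the chain from the root,
# accumulating each joint's rotation, and return every joint position.

def fk_positions(root, bone_lengths, joint_angles_deg):
    """Return root, joint, and effector positions for a planar chain."""
    x, y = root
    heading = 0.0
    positions = [(x, y)]
    for length, angle in zip(bone_lengths, joint_angles_deg):
        heading += math.radians(angle)   # child inherits parent's rotation
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((x, y))
    return positions

# Upper arm rotated 90 degrees up, forearm bent 90 degrees back:
# the elbow ends near (0, 2) and the effector near (2, 2).
pts = fk_positions((0.0, 0.0), bone_lengths=[2.0, 2.0],
                   joint_angles_deg=[90.0, -90.0])
```

This is also why FK keys are predictable: the keyed angles fully determine the pose, with no solver in between.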

Forward kinematics: bones in the arm are rotated and keyed in order from the upper arm down to move from an outstretched position to a raised position with a flexed wrist.

To animate with FK:

1. Select a bone or the control rig object to which a bone is constrained.

2. Click the Rotate (r) button in the Transform panel or press C.

3. Rotate the bone into position on any axis (X, Y, Z).

4. Key the bone’s rotation values.

You could also animate with FK by first translating the chain’s effector (invoking IK) to move the bones into position, and then tweaking each bone’s rotation as necessary. When things are in position, choose Create > Skeleton > Key All Bone Rotations to set rotation keys for all the bones in that chain.

Page 191: XSI guia basica


Animating with Inverse Kinematics

Inverse kinematics, usually referred to as simply IK, is a goal-oriented way of animating: you define the chain’s goal position by placing its effector where you want, then Softimage calculates the angles at which the previous joints in the chain must rotate so that the chain can reach that goal.

IK is an intuitive way of animating because it’s how you probably think of movement. For example, when you want to grab an apple, you think about moving your hand to the apple (goal-oriented), not rotating your shoulder first, then your arm, and then your hand.

With IK, you can:

• Easily try out different poses. Dragging an effector to reach a goal is intuitive for certain types of actions.

• Quickly animate simple movements, including 2D chains that have a limited range of movement.

• Easily set up poses for a chain by positioning the effector, then keying either the effector’s translation (IK) or the bones’ rotation values (FK).

Translation values on effectors of chains created in Softimage are local to the effector’s parent (by default, the chain root). By not having the effector tied to its preceding bone, you are free to create local animation on the effector that can be translated with its parent. However, many animators prefer to constrain effectors and bones to a separate hierarchy of control objects (control rigs) so that they never animate the skeleton itself directly.

To help make keying easier, you can create a character key set that contains all the translation parameters for the effector. Then you can quickly key using this set. In a similar way, you can use the keying panel to key only the translation parameters that you have set as “keyable” for the effector.
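For a two-bone chain, the goal-oriented solve can be approximated analytically with the law of cosines in the resolution plane. This is a sketch under that simplifying assumption; the built-in Softimage solver also handles longer chains, preferred angles, and 3D orientation:

```python
import math

# Analytic two-bone IK in the plane: given bone lengths and an effector
# goal, compute the root and elbow rotations (illustrative sketch).

def two_bone_ik(l1, l2, target):
    """Return (root_angle, elbow_angle) in degrees."""
    tx, ty = target
    d = min(math.hypot(tx, ty), l1 + l2 - 1e-9)    # clamp to reachable
    cos_elbow = (l1 * l1 + l2 * l2 - d * d) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    cos_root = (l1 * l1 + d * d - l2 * l2) / (2 * l1 * d)
    root = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_root)))
    return math.degrees(root), math.degrees(elbow)

# A 2+2 leg reaching for (2, 2): the root stays at 0 degrees and the
# knee bends 90 degrees, placing the effector exactly on the goal.
root, elbow = two_bone_ik(2.0, 2.0, (2.0, 2.0))
```

The subtraction in `root` picks one of the two mirror solutions; in practice the chain’s preferred angle (and an up-vector constraint) decides which side the knee bends toward.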

Inverse kinematics: the leg’s effector is branch-selected (middle-clicked) and translated to move the leg from a standing position to doing the can-can.

To animate with IK:

1. Select the chain’s effector or the control rig object to which the effector is constrained.

2. Click the Translate (t) button in the Transform panel or press V.

3. Move the effector so that the chain is in the position you want.

4. Key the effector’s translation values.

You could also constrain the effector to a curve with the Constrain > Path command and animate it with path animation. The chain is solved in the same way as if you keyed the effector’s positions.

Page 192: XSI guia basica


Basic Concepts for Inverse Kinematics

There are two fundamental concepts you should understand when working in IK: the chain’s preferred angle and its resolution plane.

When you draw a chain, you usually draw it with a bend to be able to predict its behavior when using IK. This bend is called the chain’s preferred angle. When you move the effector, the chain’s built-in solver computes a solution that considers these angles and the effector’s position.

You can change the joint’s preferred angle to get the correct skeleton structure for the animation that you want to create. This solves the IK in a new way, affecting the movement of the whole chain. You can also reset a bone’s rotation to the value of its preferred rotation, which resets the chain to its pose when you created it.

With 2D chains, the preferred axis of a chain (the X axis, by default) is perpendicular to the plane in which Softimage tries to keep the chain when moving the effector. This plane is referred to as the general orientation or resolution plane of a chain. It is in the space of this plane that the IK system resolves the joints’ rotations when you move the effector.

The resolution plane of this skeleton’s leg is shown with a gray triangle, connecting the root, the effector, and the knee joint. This plane is defined by the first joint’s XY plane, and any joint rotations stay aligned with this plane.

When the first joint is rotated, the resolution plane rotates accordingly, and all joint rotations remain on the resulting resolution plane.

Chain is drawn with a slight bend to determine its direction of movement when using IK.

This determines the preferred angle of rotation for each bone’s joint.

Using an up-vector constraint for chains, you can constrain the orientation of a chain to prevent it from flipping when it crosses certain zones.

The up-vector constraint forces the Y axis of a chain to point to a constraining object so that the solver knows exactly how to resolve the chain’s rotations.

You add up-vector constraints to the first bone of a chain because that is the bone that determines the resolution plane.
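The geometry behind this is simple: three points define a plane. A minimal sketch (plain Python, not Softimage code) computes the resolution plane’s normal from the chain root, the effector, and the up-vector object:

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def resolution_plane_normal(root, effector, up_object):
    """Normal of the triangle spanned by the chain root, the effector,
    and the up-vector object; joint rotations are resolved in this plane."""
    return cross(sub(effector, root), sub(up_object, root))
```

Because the up-vector object pins down the third point, the plane (and therefore the chain’s bend direction) can no longer flip when the effector crosses over the root.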

First point: joint 1 at the chain root

Second point: the effector

Third point: a null constrained by an up-vector constraint

Resolution plane (gray triangle)

Figure panels: Preferred angle • Resolution plane • Constraining the chain to prevent flipping


Animating Characters with FK and IK

Blending between FK and IK Animation

When you’re animating a skeleton, you may need to use both FK and IK animation on the same chain. For example, you want to use IK to have the hand grab at something, but to get a more convincing swing from the shoulder, you need to use FK.

In Softimage, it’s easy to blend between FK and IK using the Blend FK/IK slider in the Kinematics Chain property editor. This slider controls the influence that IK and FK both have on a chain, smoothly blending the results of bone rotation and effector translation.

By blending, you can animate with rotations to get a good “whip” effect (FK), and then blend in specific grabbing/punching/kicking (goal-oriented IK) movements, or mix goal-oriented movements (IK) against motion capture data (FK).
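The slider’s effect on the chain can be sketched as a weighted mix of the two solutions. A real chain blends rotations, which calls for quaternion-style interpolation; this plain per-angle version is only meant to show the idea behind the Blend FK/IK value:

```python
def blend_fk_ik(fk_angles, ik_angles, blend):
    """blend = 0 gives pure FK, 1 gives pure IK, values between mix them.
    A minimal sketch: each joint angle is interpolated linearly
    between its FK (keyed rotation) and IK (solved) value."""
    return [(1.0 - blend) * f + blend * i
            for f, i in zip(fk_angles, ik_angles)]
```

Keying `blend` itself over time, as described above, is what lets the chain hand off smoothly from the FK swing to the IK grab.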

Solving the Dreaded Gimbal Lock

When you’re setting up a character, you should consider how the bones will be rotating for each body part so that you can choose the proper rotation order for them.

While the default rotation order of XYZ works for some body parts, there are certain body parts or movements for which this order can cause gimbal lock. Gimbal lock is a state that Euler angles go through when two rotation axes overlap. The angle values can change drastically when rotations are interpolated through it.

Changing the rotation order can often solve the gimbal lock. You can change the order in which an object is rotated about its parent’s axes by selecting a Rotation > Order in the bone’s Local Transform > SRT property page (select the bone and press Ctrl+K).

You can also convert the rotation angles from Euler to quaternion using the Animation > Convert to Quaternion command in the Animation panel. Quaternion rotation angles produce a smooth interpolation which helps to prevent gimbal lock.
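A small numeric sketch (not Softimage code) makes the axis overlap concrete. Assuming Euler angles applied in X-then-Y-then-Z order, setting the middle (Y) rotation to 90 degrees collapses the X and Z rotations onto the same axis: two different Euler triples then produce the identical orientation, so curves interpolated through this region can swing wildly.

```python
import math

def rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_xyz(x, y, z):
    """Orientation from Euler angles applied in X, then Y, then Z order."""
    return mul(rz(z), mul(ry(y), rx(x)))
```

With Y at 90 degrees, `euler_xyz(10°, 90°, 20°)` and `euler_xyz(30°, 90°, 40°)` yield the same matrix: only the difference between the X and Z angles survives, which is exactly the lost degree of freedom that quaternion interpolation avoids.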

Animate the chain in FK (key the bone’s rotation parameters), as well as in IK (key the effector’s position).

Here, the blue ghost above the arm shows the chain at full FK; the red ghost below the arm shows the chain at full IK.

Drag the Blend FK/IK slider to set the value you want between FK (0) and IK (1). The chain interpolates smoothly between its IK and FK positions.

Set keys for the Blend FK/IK values at the appropriate frames where you want the blend to start and finish.

To help you see how the chain is blending, you can use ghosting. Ghosts are shown for the full FK and IK positions of the chains.


Walkin’ the Walk Cycle

A walk cycle is probably the most common task you’re going to do as an animator. You can do this with traditional tools, such as keying and the fcurve editor, but Softimage provides other excellent tools to help you animate your character. These include all the tools shown in this section, as well as the animation mixer.

You can store the walk cycle in an action source, then bring that source into the mixer to cycle it. Once in the mixer, you can reverse it, stretch it out or compress it to change the timing, cycle it, move it around in time, mix it with other actions, and more—all in a nondestructive way.

Key the position and rotation of the character’s arms, legs, and hips on one side of the body. Key the 5 basic poses at frames 1, 5, 9, etc., or frames 1, 6, 11, depending on your character’s stride.

The start and end poses must match so that the motion can be properly cycled in the animation mixer.

If the feet slide when they’re on the ground, you can fix it by making the fcurve interpolation flat between the pose keys. Open the animation (fcurve) editor, select the keys on the fcurves, and choose Keys > Zero Slope Orientation.
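Why a flat slope fixes the slide can be seen in the interpolation math. A sketch of a cubic segment with zero tangents at both keys (the effect Zero Slope Orientation has on a pair of keys):

```python
def flat_key_interp(t0, v0, t1, v1, t):
    """Cubic ease between two keys with zero slope at both ends.
    The value holds nearly flat around each key, so a planted foot
    barely moves near its pose frames instead of drifting."""
    u = (t - t0) / (t1 - t0)
    s = u * u * (3.0 - 2.0 * u)   # smoothstep basis: flat at u=0 and u=1
    return v0 + (v1 - v0) * s
```

Near either key the derivative is almost zero, so the foot stays put; the motion happens in the middle of the segment instead.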

The fcurve editor is the tool to help you fine-tune the walk’s fcurves in many ways.

You can use rotoscoped images of models as templates on which to base the character’s poses before keying them.

You’ll need to tweak your character’s walk afterward to make it look natural and appropriate for the character.

Tip: It helps to make the arms and legs of the left and right side in different colors. Here, the right leg and arm are in black.

Save the finished walk cycle in an action source using the Action > Store > Fcurves command.

Repeat the same poses for the other side of the body on frames 21, 25, 29, and 33 (the first pose is the same as the last pose of the side you just did).

Cycle the walk clip in the mixer by dragging one of the clip’s lower corners. You can also quicken or slow down the walk pace, blend it with another action, or create a transition to yet another action, such as to a run cycle.

Use the cid clip effect variable to add a progressive forward offset to a stationary cycle.
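Assuming `cid` holds the number of completed repetitions of the clip (an assumption about the variable’s meaning), the arithmetic behind a progressive offset looks like this sketch, written as plain Python rather than a clip effect expression:

```python
def cycled_position(t, cycle_length, stride, base_pose):
    """A stationary cycle repeats base_pose; adding one stride per
    completed cycle turns walking-in-place into forward travel."""
    cid = int(t // cycle_length)          # cycle index (assumed meaning of cid)
    local_t = t - cid * cycle_length      # time within the current repetition
    return base_pose(local_t) + cid * stride
```

Each repetition of the clip plays back identically, but the accumulated `cid * stride` term pushes the character one stride farther forward per cycle.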


Open the animation mixer, and load the action source into it by right-clicking on a green track and choosing Insert Source.

This creates an action clip for the walk cycle on that track.


Motion Capture

Motion captured animation (usually known as mocap) offers a way to animate a character based on motion that is electronically gathered from a human or animal. This is useful for animating actions that are particularly difficult to do well with keyframing or other methods of animation creation. In Softimage, you can import mocap data and apply it onto rigs, as well as retarget animation from BVH or C3D mocap files to rigs.

Importing Acclaim and Biovision Mocap Data

You can import motion capture information into Softimage using the File > Import > Acclaim and Biovision commands. Once the files are imported, you can constrain the skeletons to a rig and plot the mocap data into fcurves so that you can edit the animation.

Acclaim Skeleton files (ASF) contain information about the hierarchy and base pose of the skeleton. The animation for this skeleton is saved in an accompanying Acclaim Motion Capture (AMC) file. Biovision (BVH) files contain information about the hierarchy of the skeleton.

Adding Offsets to Mocap Data

It’s inevitable: the director took a look at the mocap animation for this character. It looks good but now he has some comments and wants to make a few changes. This can be problematic when the change affects a key pose or move because many other moves and poses are usually linked to it.

Luckily, in Softimage you can easily add non-destructive offsets to mocap data in any of these ways:

• Creating animation layers: Create a layer of keys as an offset to mocap animation. Layers let you keyframe as you would normally, but those keys are kept in a separate layer of animation so that they don’t affect the base mocap animation. After you’ve added one or more layers of keys and you’re happy with the results, you can collapse the layers to “bake” them into the base layer of animation.

• Mixing fcurves with an action clip: Normally, when there is an action clip in the mixer, it overrides any other animation on that object that covers the same frames. However, you can blend fcurves directly with an action clip over the same frames. This allows you to blend mixer animation with scene level animation.

Mocap files with hierarchy imported as bone chains.

Mocap files with hierarchy imported as nulls.

Club-bot with a mocap run action clip in the animation mixer.

The left leg and arm are rotated a bit and then keyed as an offset to the clip.


• Creating action clip effects in the mixer: Clip effects let you adjust the animation in an action clip without affecting the original animation in the action source. Clip effects add values “on top” of a clip, such as noise or offsets.

Working with High-density Fcurves

When you import motion capture data, the fcurves often have many keys, usually one per frame. A high-density fcurve is difficult to edit because if you change even a few keys, you then have to adjust many other keys to retain the overall shape of the curve.

Because editing these fcurves is not always easy, there are tools in the fcurve editor that can help you work with them: the HLE (high-level editing) tool and the curve processing tools (for smoothing, resampling, and fitting curves).

Retargeting Animation with MOTOR

Retargeting allows you to transfer any type of animation between characters, regardless of their size or proportions. Retargeting involves first tagging (identifying) the elements of a rig, then transferring animation from another rig or a mocap data file to the target rig. The animation is retargeted to the new rig as it’s transferred. The retargeted animation is “live” on the rig, controlled by the retargeting operators that live on the tagged rig elements. Because of this, you can adjust the animation on the rig at any time so that the motion is exactly as you like. If you want to commit the retargeted animation to fcurves, you can plot it on the rig.

While you can retarget any type of animation between characters, it is especially useful for reusing motion capture data to animate many different characters with the same movements, such as you would for a game. For example, you can reuse a basic run mocap file for many characters and then adjust the animation for each one as you like by adding offsets in different animation layers. Using the retargeting and layering tools in Softimage, you can quickly test out many variations of animation on the characters.
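One piece of what retargeting has to do can be sketched in a few lines: compensating for the size difference between rigs. This toy version only scales root translations by a size ratio; the actual retargeting operators also remap rotations and handle differing hierarchies.

```python
def retarget_translation(src_positions, src_hip_height, dst_hip_height):
    """Scale source root translations by the size ratio of the two rigs,
    so a taller character covers proportionally more ground per step.
    A deliberate simplification of what retargeting operators compute."""
    scale = dst_hip_height / src_hip_height
    return [(x * scale, y * scale, z * scale) for (x, y, z) in src_positions]
```

Without this kind of compensation, a run captured on a short performer would leave a tall character shuffling with tiny steps.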

Using the commands in the Tools > MOTOR menu on the Animate toolbar, you can perform all of these tasks:

• Tag rig elements so that animation can be retargeted onto them.

• Retarget any type of animation from one rig to another.

• Retarget animation from BVH or C3D mocap files to a rig.

• Adjust the retargeted animation on the rig, such as by setting position and rotation offsets for the whole rig or just certain elements.

• Save any type of retargeted animation in a normalized motion format (.motor file) so that it can be loaded and retargeted on any tagged rig. This makes it easy to build up libraries of animation that can be used across all your rigs.

The HLE tool in the fcurve editor lets you shape an fcurve in an overall fashion, like lattices shaping an object’s geometry.

The HLE tool creates a sculpting curve that has few keys (shown here in green), but each one refers to a group of points on the dense fcurve.
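The grouping idea can be sketched as follows. This is a deliberately blocky simplification: the real HLE sculpt curve falls off smoothly across neighboring keys rather than offsetting rigid groups.

```python
def sculpt_dense_curve(dense_values, group_size, sculpt_offsets):
    """Each sparse sculpt key offsets a whole group of dense keys,
    so moving one control reshapes a region of the fcurve at once."""
    out = list(dense_values)
    for g, off in enumerate(sculpt_offsets):
        for i in range(g * group_size, min((g + 1) * group_size, len(out))):
            out[i] += off
    return out
```

Editing a handful of sculpt keys instead of hundreds of mocap keys is what makes the dense curve workable.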


• Plot the retargeted animation on a rig into fcurves so that you can keep and edit the animation.

Before you start tagging the character elements or retargeting animation, make sure that the skeleton or rig is in a model. Retargeting can work only within model structures.

Tagging a rig’s elements

Tagging tells Softimage which part is which on your character, such as its hips, chest, legs, root, and so on. You tag the rig controls or skeleton parts that you use to animate the character. These tags are used to create a map (template) for that character.

Retargeting animation between rigs

When you retarget animation between rigs, the retargeting operator figures out which rig elements match based on their tags. Then it maps and generates the animation that is transferred to the target rig.

The animation between the two rigs is a live link that allows for interaction.

Select a rig and choose the Tools > MOTOR > Tag Rig command to tag its elements.

Once you have tagged a rig, you can use it for retargeting with another rig or with mocap data.

You can then save the mocap animation on the rig in a .motor file so that you can apply it to any tagged rig of the same structure.

Retargeting mocap data from a file to a rig

You can retarget mocap data from either C3D or BVH files to a tagged rig.

C3D rig

Biovision rig

Select the source rig, then press Ctrl and select the target rig.

Then choose the Tools > MOTOR > Rig to Rig command to retarget the animation from the source to the target rig.

If you want to save the animation on the target rig, you must plot (bake) it into fcurves.

Choose the Tools > MOTOR > Mocap to Rig command to load either a C3D or Biovision file and apply it to a rig in Softimage.


Making Faces with Face Robot

Face Robot is a suite of tools that work together to help you easily rig and animate life-like human and humanoid faces, no matter what that face may look like!

Face Robot lets you quickly set up a facial rig by taking you through several required stages. Once the facial rig is created, you can animate the facial controls and sculpt and tune the soft facial tissue using Face Robot-specific tools, as well as some standard Softimage ones. When you’re done, you can export the Face Robot head in different ways, including as a games rig or a shape animation rig.

Face Robot has its own interface layout and operators that are separate from the rest of the Softimage interface layout. As a result, you need to enable a special mode that opens the Face Robot layout and loads its operators: choose Face Robot > Enable Face Robot from the main menu at the top of the Softimage window.

To start out with Face Robot, you load in a head model from the Face Robot layout’s Stage 1 panel. You then follow the instructions on the first four stage panels in Face Robot to create a solved head.

A solved head is one that has been processed by Face Robot and contains all the necessary objects and operators, as is shown on the right.

Once the head is solved, you can move freely between Stage 5 and Stage 6 to animate and sculpt the face.

• Stage 1: Assemble: Load in a single head model and possibly face parts (such as eyeballs, teeth, and a tongue). These need to be polygon meshes.

• Stage 2: Pick: Identify each face part for Face Robot (head, eyeballs, teeth, and an optional tongue). A visual guide on the panel helps you through this process.

• Stage 3: Landmarks: Pick the landmark points on the face that tell Face Robot the head’s size and proportions. A visual guide on the panel helps you through this process.

• Stage 4: Fit: Make any adjustments to the facial controls that are generated from the landmarks.

• Stage 5: Act: Set keyframes on the face’s animation controls or apply facial motion capture files (C3D) to them. You can retarget the mocap, and also blend it with keyframes.

• Stage 6: Tune: Use different tools to adjust the deformation of the face’s soft tissue to achieve the range of facial expressions that your character needs to make.


A Main menu bar contains all standard menu commands. This is the same as in the main Softimage interface.

B The Face Robot panel gives you access to all six Face Robot stages for completing your facial animation.

C Click this button to hide/display the Face Robot panel and enlarge the viewport.

D Click this button to display/hide the Softimage main command panel (MCP).

E Click this button to display/hide the standard Softimage tool bars.


Section 11

Shape Animation

Shape animation is the process of deforming an object over time. You take “snapshots” called shape keys of the object in different poses, then you blend these poses over time to animate them.

Softimage offers a number of tools with which you can create shape animation, allowing you to choose the method that works for you.

What you’ll find in this section ...

• Different Tools for Animating Shapes

• Shape Animation on Clusters

• Using Construction Modes for Shape Animation

• Creating and Animating Shapes in the Shape Manager

• Selecting Target Shapes to Create Shape Keys

• Storing and Applying Shape Keys

• Using the Animation Mixer for Shape Animation

• Mixing the Weights of Shape Keys


Things are Shaping Up

With shape animation, you can change the shape of an object over time. To do this, you move the object’s clusters of points in different ways, then store shape keys for each pose of these clusters that you want.

You can create shape keys from any kind of deformation to produce shape animation. For example, you can store shape keys for clusters on an object by moving points or by deforming by spline, such as for facial animation and lip-syncing. Or you can create a shape key for an object’s overall deformation using envelopes, lattices, or any of the standard deform operators (Bend, Bulge, Twist, etc.).

In Softimage, all shape animation is done on clusters. This means that you can have multiple clusters animated at the same time on the same object, such as a cluster for each eyebrow, one for the upper lip, one for the lower lip, etc. Or you can treat a complete object as one cluster, such as a head, and store shape keys for it.

You can use surface or polygon objects to create shape animation, or even curves, particles, and lattices—any geometry that has a static number of points.

Different Tools for Animating Shapes

Shape animation in Softimage uses the animation mixer under the hood to do its work. You can also use the animation mixer to do your shape work, but there are other methods too. You can:

• Use the shape manager to easily create and animate shape keys. This is probably the fastest and easiest way to work.

• Create shape keys for a base object from a group of target shapes (sometimes called morphing or blend shapes).

• Store shape keys, then apply them at different frames.

You can use the animation mixer with any of these methods. It is a powerful tool that gives you a high degree of flexibility in reworking your shape animation in a nonlinear way. Because shape animation is essentially pose-based, you can easily reorder the poses in time, reuse the same pose several times, and mix the poses together as you like, in the animation mixer. You can even add audio clips to the mixer to synchronize your shape animation to sound, such as for lip syncing.

Shape Animation and Models

Before you start to animate shapes, it’s a good idea to create a model containing the object that is to be shape-animated. This puts the object under its own Model node and creates a Mixer node for that model that contains all its shape keys. This way, the shape keys are stored with the model rather than just being in the entire scene.

You can then reuse the model with its shape animation in another scene, import and export the model with all its shapes and mixer, or duplicate the model with its shape animation.

Shape animation is done for this face by simply moving the points in different clusters on the head object, then storing a shape key for each cluster’s pose.

You could also treat the whole head object as a cluster and deform its points in the same way, then store shape keys for each pose for the object.


Object with cluster

Object with tagged points. A cluster of these points is automatically created when you store a shape key.

Whole object. A cluster including all points on head is automatically created when you store a shape key.

Click the Clusters button on the Select panel to see a list of the object’s clusters.

Always store shape keys using the same cluster of points. When you deform an object, but store a shape key only for a cluster of points on that object, the deformed points that don’t belong to that cluster snap back to their original position when you change frames.

To make it easier to use the same cluster, give the cluster a descriptive name as soon as you create it.

All shape animation is done on clusters. You can have multiple clusters on the same object, or you can treat an object as one cluster. You can even store shape keys for tagged points that are not saved as a cluster.

Shape Animation on Clusters

Shape reference modes control how the shape behaves when the base shape is deformed in Modeling mode.

You should select a reference mode before you store shape keys on a cluster.

Local Relative Mode: Shape deforms with object.

Shape Reference Modes

A shape source is the shape that you have stored and is usually referred to as a shape key. By storing several shapes for an object, you can build up a library of sources. Shape sources are stored in the model’s Mixer > Sources > Shape folder.

A shape clip is an instance of that source on a track in the animation mixer. Even if you don’t use the mixer for shape animation, a clip is always created when you create a shape key.

Shape Sources and Clips

Shape key on a single cluster

Absolute Mode: Shape stays locked in place as object deforms.

Object Relative Mode: Shape deforms with object but keeps original orientation.


Using Construction Modes for Shape Animation

When you’re creating shapes, you can use any number of deformation operators, including envelopes, as the tools for sculpting the shapes. Because you can use these deformation operators for tasks other than shape animation, you need to let Softimage know how you want to use them. For example, when you apply a deformation, you could be building the object’s basic geometry (modeling), or creating a shape key for use with shape animation (shape modeling), or creating an animated deformation effect (animation).

To tell Softimage how you’re using the deformation, you need to select the correct construction mode: Modeling, Shape Modeling, Animation, or Secondary Shape. The mode puts the deformation operator in one of four regions in the object’s construction history that corresponds to that mode. These regions keep the construction history clean and well ordered by allowing you to classify operators according to how you want to use them.

Here is a quick overview of how you can use the four different construction modes for doing shape animation:

Switch to Shape Modeling mode to create shape keys. These shape keys are set in reference to the object’s base shape (each cluster is an offset from the base).


If the object is to be an envelope for a skeleton, switch to Animation mode and apply it as an envelope.

In this case, the jaw bone is rotated to help deform the envelope for lip syncing.

To fix any geometry problems due to the envelope’s animation, switch to Secondary Shape mode and create shape keys in reference to the animated envelope’s geometry.

For example, you can fix up the shape in the corner of the mouth in relation to the jaw opening and deforming the envelope.

Select one of the four construction modes from the list in the menu bar at the top of the Softimage window.

In Modeling mode, create and deform the object to be shape-animated.

This is the base shape for the object, which is a result of all the operators in the Modeling region of the object’s construction history.

When you create shape keys, they are stored as the difference of point positions from this base shape’s geometry.

Markers in the explorer divide up the object’s construction history into regions that correspond to the four construction modes.

Deformation operators are kept in their appropriate region.

Creating and Animating Shapes in the Shape Manager

The shape manager provides you with an environment for creating, editing, and animating shapes. To help you work efficiently, the shape manager has a viewer that immediately displays the results of the changes as you make them to the object.

When you create a new shape in the shape manager, a shape key is added to the object’s Mixer > Sources > Shape list and shape clips are created for the object in the animation mixer.


With an object selected, select Shape or an existing shape in the shape list.

Duplicate the shape and rename it.

Deform the object or cluster into a new shape in the shape viewer.

Repeat these two steps to create a library of different shapes for this object.

Go to the next frame at which you want to set a key, change the values of the weight sliders, and set another key. Continue on in this manner.

On the Animate tab, set the values of the shape weight sliders until you get the shape you want. Notice the object update in the shape viewer as you change the slider values.

Set a key at this frame.


Open the shape manager in a viewport or in a floating window (choose View > Animation > Shape Manager).


Selecting Target Shapes to Create Shape Keys

Selecting target shapes (a technique also known as morphing or blend shapes) lets you deform an object using a series of objects that are each deformed into a different shape (called target shapes). These objects must have the same type of geometry and the same topology (number and arrangement of points) as the base object that they shape-animate. The easiest way to ensure this is to duplicate the base object that you want to shape-animate, and then deform each copy in a different way to create a target shape.

Selecting target shapes sets up a relation between the base object and the shape keys, allowing you to fine-tune the target shapes and have those adjustments appear on the base object. For example, if your client thinks that the nose is too long on one of the target shapes, all you have to do is change the nose on that target shape and the nose on the base object is updated. You can also choose to break the relationship between the base object and its target shapes to keep performance optimal.

Create the base object in a neutral pose. This is the object to be deformed with the target shapes.

Duplicate the object and deform each copy into a different shape (a target shape), such as for phonemes.

Move them out of the way of the camera.


Select the base object and choose Deform > Shape > Select Shape Key. Then pick each of the target shapes in the order that you want to create shape keys for the object.

Select Shape Modeling Mode from the Construction Mode list.

Label the first shape key created in the Name text box, such as face. The other shape keys use this name plus a number, such as face1, face2, etc.

For each target shape you pick, a shape key is added to the model’s Mixer > Sources > Shape folder.

To create the animation, set the values for each shape key’s weight slider in the animation mixer or in the Shape Weights custom parameter set.


In either the mixer or the parameter set, click the weight slider’s animation icon to key this value at this frame.

Storing and Applying Shape Keys

When you store and apply shape keys, you create a shape source in the model’s Mixer > Sources > Shape folder, as well as a shape clip in the animation mixer.

If you want to use the mixer for doing your shape animation, this is an easy way to work because the clips are set up for you. In the mixer, you can then change the length of the clips, create transitions between clips, change the weight of the clips, and so on.

If you don’t want to use the mixer, storing and applying shape keys is still an easy way to work because everything is set up “under the hood” in the mixer for you.

You can then animate the shape weights in the Shape Weights custom parameter set that is automatically created for you. This custom parameter set contains a proxy of each shape key’s weight slider.

You can also simply store shape keys and then apply them to the object or cluster later. When you store shape keys, a shape key is created for the current shape and added to the model’s list of shape sources, but it does not create a shape clip in the mixer. Storing shape keys is a good way to build up a library of shapes: when you’re ready to apply the shape keys, you can load them into the animation mixer to create shape clips. Or if you don’t want to use the mixer, you can simply apply the shape keys to the object or cluster at different frames.

Select a cluster of points or the whole object (creates one cluster for the object).

Select Shape Modeling Mode from the Construction Mode list.

Deform the cluster or object into a shape that you want to store, then choose Deform > Shape > Store and Apply Shape Key.

When you store and apply, the shape key is applied to the cluster or object at the current frame. A shape clip for this shape key is also created in the animation mixer.

Go to the frame at which you want to set a shape key.

Go to the next frame at which you want to set a shape key, deform the cluster or object, and store and apply another shape key.


You can edit the shape animation in the mixer. You can resize and layer the clips, and add transitions between the clips for a smooth change between shapes.

You can also animate the weight of each shape clip against each other in the mixer or in the Shape Weights custom parameter set.



Using the Animation Mixer for Shape Animation

Once you have created shape keys, you can use the animation mixer to sequence and mix them as shape clips. This lets you easily move shape clips around in a nonlinear way and change the weighting between two or more clips where they overlap in time.

The first step to using shape keys in the mixer is to add them as shape clips to a shape track. If you stored and applied shape keys or selected shape keys, this is automatically done for you.

Shape clips do not actually contain animation: they are simply static poses. This is why you need to create transitions between them and/or weight their shapes against each other to animate. Transitions create smoother and more complex animation than simply setting shape keys at different frames with no transitions or weighting.

Once you have added shape clips to the animation mixer, you can use any of the mixer’s features to move, reorder, copy, scale, trim, and blend them.

To add a shape key as a clip to a track in the mixer, right-click on a blue shape track and choose Insert Source, then pick the source (shape key) you’ve stored.

You can also drag a shape key from the model’s Mixer > Sources > Shapes folder in the explorer and drop it on a blue shape track.

Notice how the shape interpolates over time, from clip to clip.

You can make composite shapes by creating compound clips for different clusters on the same frames of different tracks.

For example, one compound clip could drive the eyebrow cluster of a character while another clip drives the mouth cluster.

Create a sequence of shapes by placing clips one after another, using transitions to smooth the changes between them.

You can easily reorder the shape clips in time on the tracks, or duplicate a clip to repeat a shape several times over the animation. Because each shape clip refers to the source, you don’t need to duplicate the source.

Page 209: XSI guia basica


Mixing the Weights of Shape Keys

Shape clips don’t contain any animation—they are simply static poses. As a result, one way to create animation with shapes is to animate the weight of each shape. Weighting is always done in relation to another shape key. This means that shape keys have to be overlapping in time with at least one other shape key to be weighted.

The higher the weight value, the more strongly a clip contributes to the combined animation. For example, if you set the weight’s value to 1, the clip’s contribution to the animation is 100% of its weight.
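The arithmetic behind this is a simple weighted sum. Here is a minimal sketch in plain Python (this is illustrative only, not Softimage code; the point data, shape names, and `mix_shapes` helper are invented for the example): each shape key stores per-point offsets from the base shape, and the weighted offsets are summed onto the base.

```python
# Sketch of weighted shape mixing (illustrative only, not Softimage code).
# Each shape key is stored as per-point offsets (deltas) from the base shape.

def mix_shapes(base, shape_deltas, weights):
    """Return the mixed shape: each base point plus the sum of weighted deltas."""
    mixed = []
    for i, point in enumerate(base):
        offset = sum(w * deltas[i] for deltas, w in zip(shape_deltas, weights))
        mixed.append(point + offset)
    return mixed

base = [0.0, 0.0, 0.0]        # three points on a simplified 1D "surface"
smile = [1.0, 0.5, 0.0]       # shape key A, stored as deltas
frown = [-1.0, -0.5, 0.0]     # shape key B, stored as deltas

# A weight of 1.0 means the clip contributes 100% of its shape.
print(mix_shapes(base, [smile, frown], [1.0, 0.0]))  # pure smile: [1.0, 0.5, 0.0]
print(mix_shapes(base, [smile, frown], [0.5, 0.5]))  # opposite shapes cancel out
```

Animating the weights over time, as the mixer does, amounts to re-evaluating this sum with new weight values at each frame.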

You can mix shape key weights in different ways, depending on how you created the shape keys in the first place and on how you like to work. You can mix shape key weights:

• Using the shape manager.

• Using the animation mixer.

• Using a custom parameter set, either the Shape Weights one or one you set up yourself.

The advantage of having a custom control panel is that you can have all the sliders in one property editor that you can easily move around in the workspace. As well, you can key all the sliders’ values at once by clicking the property set’s keyframe icon.

Click the Shape Weights icon beneath the shape-animated object in the explorer to open the custom parameter set.

No matter which tool you use, the basic process is the same: go to the frame you want, set each shape weight’s value, then click the keyframe or animation icon to set a key. You can then edit the resulting weight fcurve in the animation editor as you would any other fcurve.

How to Mix and Key Shape Weights in the Mixer

1. Put clips on different tracks and overlap them where you want to mix them. In most cases, this is for the whole duration of the scene.

2. Move to the frame at which you want to set a key.

3. Set a weight value for each clip at this frame.

4. Click each weight's animation icon to set a key for this value at this frame.

5. After you are done setting keys for the weights, you can edit the resulting weight fcurves. Right-click the weight's animation icon and choose Animation Editor.

Red curves in the clip display its weight values.

Page 210: XSI guia basica


Normalized or Additive Weighting

One of the most important things to understand about weighting is to know whether weights are normalized (averaged) or additive. You can control how the weights of clips are combined, depending on whether or not you select the Normalize option in the Mixer Properties.

You’ll know that shapes are normalized if they seem to average or “smooth” each other out, or if different clusters on the same object affect each other when they shouldn’t (such as an eyebrow affecting the mouth shape). You may want to use the normalized mode if you’re mixing together shapes for a whole object.

In many cases, you will probably want the weight to be additive instead of normalized, such as if you’re mixing different clusters on one face over the same frames. This adds the shapes together but doesn’t “blend” them together.

Additive mix of Shapes 1 and 2: the shapes are literally added together to create a composite result. You can also exaggerate shapes by setting weight values higher than 1.

Normalized mix of Shapes 1 and 2: the shapes are averaged, resulting in a combination of the shapes. The total weight value of the two shapes equals 1.
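The difference between the two modes can be sketched numerically. This is an illustrative Python sketch, not Softimage code; the function names are invented. Additive mixing sums the weighted offsets as-is, while normalized mixing first divides each weight by the total so the weights sum to 1.

```python
# Additive vs. normalized weighting (illustrative sketch, not Softimage code).

def additive_mix(deltas, weights):
    # Weighted offsets are summed as-is; the result can exceed the original
    # shapes, which is how weights above 1 exaggerate a shape.
    return sum(w * d for w, d in zip(weights, deltas))

def normalized_mix(deltas, weights):
    # Weights are rescaled to sum to 1, so the result is an average.
    total = sum(weights)
    return sum((w / total) * d for w, d in zip(weights, deltas))

shape1, shape2 = 1.0, 1.0   # the same offset on one point, from two shapes

print(additive_mix([shape1, shape2], [1.0, 1.0]))    # 2.0 (added together)
print(normalized_mix([shape1, shape2], [1.0, 1.0]))  # 1.0 (averaged)
```

This is why normalized shapes seem to "smooth" each other out: each clip's contribution is scaled down by every other overlapping clip, even clips driving unrelated clusters.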

Page 211: XSI guia basica


Section 12

Actions and the Animation Mixer

Actions are “packages” of low-level animation, such as function curves, expressions, constraints, and linked parameters. By creating a package that represents the animation, you can work at a higher level of animation that is not restricted by time.

The animation mixer is the tool that lets you work with actions, all in a nonlinear and non-destructive way.

What you’ll find in this section ...

• What Is Nonlinear Animation?

• The Animation Mixer

• Storing Animation in Action Sources

• Working with Clips in the Animation Mixer

• Mixing the Weights of Action Clips

• Modifying and Offsetting Action Clips

• Sharing Animation between Models

• Adding Audio to the Mix

Page 212: XSI guia basica


What Is Nonlinear Animation?

Nonlinear animation is a way of animating that does not restrict you to a fixed time frame. You store animation into a package called an action source, then load this package in the animation mixer. In the mixer, you can layer and mix the animation sequences at a higher level in a nonlinear and non-destructive way. You can reuse and fine-tune animation you’ve created with keyframes, expressions, constraints, and shape animation (shape keys stored in shape sources). You can even add audio clips to the mixer to help synchronize it with the animation. And at any time, you can go back and modify the animation data at the lower levels, without needing to begin again and redo all your work.

When you bring an action source into the animation mixer, it becomes a clip. In the mixer, you can move an action clip around anywhere in time, squeeze or stretch its length as you like, apply one action after another in sequences, and combine two or more actions together to create a new animation. On the frames “covered” by the clip, the data stored in the source drives the object’s animation.

The animation mixer is well-suited for editing existing material and bringing together all the pieces of an animation. In it, you can assemble all the bits and pieces you’ve imported from different scenes and models to help you build them into a final animation.

If you’re modifying someone else’s animation, you don’t really have to deconstruct their work—just add a layer with your own animation. You can even modify the existing animation with a clip effect, acting as a separate and removable layer on top of the original animation.

Models and the Mixer

Models provide a way of organizing the objects in a scene, like a mini scene. You should always put your object structures within a model so that you have a Mixer node for it, because each model can have only one Mixer node. This node contains mixer data, such as action sources, mixer tracks, clips, transitions, and compounds.

If the characters in the scene aren’t within models, you have only one Mixer node for the whole scene (in the Scene Root) which means that you can’t easily copy animation from one model to another.

There are a number of ways in which you can share animation between models, whether they are in the same scene or different scenes. You can copy action sources, clips, compound clips, and even a model’s whole Mixer node between models. And when you duplicate a model, all sources and clips and mixer information are also duplicated.

Club_bot model structure contains many elements, including a Mixer node that has its action sources.

Page 213: XSI guia basica


The Animation Mixer

The animation mixer gives you high-level control over animation because you can layer and mix sequences in a nonlinear and non-destructive way, making it the ideal tool to use for complex animation. The animation mixer looks like a digital video editor, but instead of editing video sequences, you create animation sequences, transitions, and mixes. It helps you reuse and fine-tune animation you’ve created with keyframes, expressions, and constraints.

You can use the animation mixer with animation data (action sources), shape animation data (shape keys as shape sources), and add audio files for synchronization. Once you have a library of action sources created, you bring them into the mixer as action clips.

Each action clip is an instance of its action source. The original animation data stays untouched, making it easy to experiment with the animation without fear of destroying anything. You can always go back and change the original data and all your changes will automatically be applied; or you can add animation on top of the original animation source, as you may want to do with motion capture data.

On the frames “covered” by the clip, the data stored in the source drives the animation for the object. The mixer overrides any other animation that is on the object at that frame, unless you set a special option that mixes an action clip with fcurves on the object over the same frames.

You can ripple, mute, solo, and ghost all clips on a track.

Clips appear as colored bars according to their type. Create sequences of clips on the same track or on different tracks.

Mix overlapping clips by setting and animating their weight values in the weight panel.

The playback cursor shows the current frame on the timeline.

Icons indicate the type of track and let you select the track.

Multiple tracks let you overlap clips in time and mix their weights.

Animation (action) tracks are green, audio tracks are sand-colored, and shape tracks are blue.

Tracks are the background on which you add and sequence clips in the mixer. You can sequence one clip after another on the same track or different tracks. To overlap clips in time for mixing, they must be on separate tracks.

You can display the animation mixer in any viewport, or display it in a floating window by pressing Alt+0 (zero).

Select an object, then click the Update icon in the mixer to see its tracks and clips.

To add a track, press Shift+A, Shift+S, or Shift+U to add animation (action), shape, or audio tracks, respectively. You can also choose a type from the Track menu.

Page 214: XSI guia basica


Storing Animation in Action Sources

Action sources are packages of animation that you can use in the animation mixer. This is where the animation lives. You can package function curves, expressions, constraints, and linked parameters into a source, as well as rigid body or ICE simulations. You can create an entire library of actions, like walk cycles or jumps, and then share them among any number of models.

When you create an action source, it is saved in the Sources > model folder for the scene, which you can find in the explorer. This lets you see all sources for all models in the scene. However, for convenience, a copy of the source is available in the model’s Mixer > Sources > Animation folder. The name of this source is in italics to indicate that it’s a copy of the original source.

How to Create Action Sources and Clips

1. Animate an object or model. Each animation sequence here will be stored in its own source. (The example sources shown are named Arm wave, Step and look, and Ground jimmy.)

2. Select the animated object and choose an appropriate command from the Actions > Store menu. This stores the animation in an action source.

3. Right-click on a track and choose Insert Source. An action clip is created. You can also drag a source from the model's Sources folder in the explorer and drop it on a track.

4. Once the clip is in the mixer, you can manipulate it in many ways. Here are some ideas ...

You can composite actions by adding clips for different parameters on the same frames of different tracks. Here, the top clip drives the legs of the character while the bottom clip drives the arms.

You can use the mixer as a simple sequencing tool that lets you position and scale multiple clips on a single track.

You may find pose-to-pose animation easy to do with the mixer: save static poses of a character, load the actions onto the tracks in sequence, and then create transitions between the poses.

Page 215: XSI guia basica


Changing What’s in an Action Source

After you have created an action source, you can modify the original animation data stored in its source, remove items from it, or even add keys to fcurves in the source. When you modify the source, you change the animation for all action clips that were created from that source and refer to it.

Because editing an action source is destructive (you’re changing the original animation data), you should always make a backup copy of it before editing. This is also useful to do if you don’t want all action clips to share the same source (duplicate the source before creating clips from it).

You can access the animation data in an action source by right-clicking an action clip and choosing Source, or by right-clicking and choosing Animation Editor to access the source's fcurves.

To add keys to a source, use the Action Key button in the mixer’s command bar.

Restoring the Original Animation to an Object

You can return to the original animation stored in an action source at any time by applying that action source to the object. This is useful if you removed the animation when you created the action source. You can also apply the animation in the source to another model.

To apply the action source to a model, you simply select the source in the model’s Mixer > Sources > Animation folder in the explorer and choose the Actions > Apply > Action command.

Creating Action Sources from Clips

Because applying works only on sources, you can’t use it on clips. But what do you do when you want to combine some clips? You can select the clips and choose Clip > Freeze to New Source or Clip > Merge to New Source in the mixer to create a new source. You can then apply this new source to the model with the Actions > Apply > Action command.

Click this button to access the source's fcurves or constraints (depending on the type of animation in the source).

If expressions are stored in the source, enter information in a Value cell to edit them.

You can also deactivate or remove certain parameters in the source.

If you want to modify an action clip without affecting the source, you must use clip effects.

Select the action source in the model’s Mixer > Sources > Animation node, then choose the Actions > Apply > Action command to restore it to that object.

Page 216: XSI guia basica


Working with Clips in the Animation Mixer

Clips are instances of action sources that you have created. While sources contain data such as function curves, clips don’t actually contain any animation: they simply reference the animation in the source and wrap it with timing information. You can create multiple clips from the same source and modify the clips independently of each other without affecting the animation data in the source.

Clips are represented by boxes on tracks in the mixer that you can move, scale, copy, trim, cycle, bounce, etc. Clips define the range of frames over which the animation items in the source are active and play back. You can also create compound clips which are a way of packaging multiple clips together so that you can work with larger amounts of animation data more easily.

Selection icons let you either select clips only, or select and move them.

Click and drag in the middle of either end of a clip to scale it.

Drag on either of the clip’s lower corners to cycle it.

Transitions interpolate from one clip to the next, making the animation flow smoothly between clips rather than jerk suddenly at the start of the next clip.

If you’re working in a pose-to-pose method of animation using pose-based action clips, you need to use transitions to prevent a blocky-looking animation.

Press Ctrl while dragging the clip to copy it. You can copy clips between different models’ mixers this way, one clip at a time.

Select and drag a clip to move it somewhere else on the same track or a different track of the same type (action, shape, or audio).

Press Ctrl+drag on either of the clip’s lower corners to bounce it.

Drag on either of the clip’s upper corners to hold the clip’s first or last frames for any number of frames.

To add a clip to a track in the mixer, right-click on a track and choose Insert Source, then pick the source you’ve stored.

You can also drag a source from the model’s Sources folder in the explorer and drop it on a track in the mixer.

Add markers to clips to annotate them with information, such as to synchronize action or shape clips with audio clips.

Create thumbnails for each clip to help quickly identify what’s in them.

Page 217: XSI guia basica


Mixing the Weights of Action Clips

One of the most powerful features of the animation mixer is its ability to mix the weight of clips against each other. When two or more clips overlap in time and drive the same objects, you can mix them by setting their weights. By adjusting the weight of a clip, you can control how much of an influence it has compared to the other clips in the resulting animation. The higher the mix weight, the more strongly a clip contributes to the animation. Mixing compound clips is an easy way to blend animation at an even higher level.

You can set keys on each clip’s weight to animate the changes. When the weight is animated, a weight fcurve is created that you can adjust like any other fcurve.

How to Mix and Key Action Clip Weights

1. Put clips on different tracks and overlap them where you want to mix them. This can also be for the duration of the scene.

2. Move to the frame at which you want to set a key.

3. Set a weight value for each clip at this frame.

4. Click each weight's animation icon to set a key for this value at this frame.

5. After you're done setting keys for the weights, you can edit the resulting weight fcurves. Right-click the weight's animation icon and choose Animation Editor.

You can also create a custom parameter set, then drag and drop the animation icons from each action clip weight in the mixer into the parameter set to make proxies of those weight sliders.

Red curves on the clip display its weight values.

For the club-bot here, an arm wave action is being mixed with a dejected turn action.

You can control how the weights of clips are combined using the Normalize option in the Mixer Properties:

• When Normalize is on, the weight values of the separate clips are averaged out. This is useful if you're blending similar actions, such as two leg actions of a character.

• When Normalize is off, mixes are additive, meaning that the weight values of the separate clips are added on top of each other. This is useful if you're weighting dissimilar actions against each other, such as weighting arm and leg actions of a character.

Page 218: XSI guia basica


Mixing Fcurves with Action Clips

Normally, when there is an action clip in the mixer, it overrides any other animation on that object that covers the same frames. However, by selecting the Mix Current Animation option in the Mixer Properties editor, you can blend fcurves on the object directly with an action clip over the same frames.

For example, you can paste a clip in the mixer that contains the final animation for an object, then you can blend it with other fcurve animation you have added to that object, such as a slight offset or a minor adjustment to a mocap clip.

Being able to mix clips directly with fcurves means that you can easily create animation using the mixer, as well as using it for blending and tweaking final animations. You can keep manipulating and setting keys for the animated object and not have to make its animation into a clip to blend it with another clip.
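Numerically, this kind of blend is a linear interpolation between the clip's value and the fcurve's value. A hypothetical sketch (the `blend` helper and the angle values are invented for illustration; this is not how Softimage exposes the evaluation):

```python
# Sketch of blending an fcurve value with an action clip value by a mix
# weight (illustrative only; not Softimage's actual evaluation code).

def blend(clip_value, fcurve_value, mix_weight):
    """mix_weight 0.0 -> pure clip animation, 1.0 -> pure fcurve animation."""
    return (1.0 - mix_weight) * clip_value + mix_weight * fcurve_value

# A mocap clip says an arm joint is at 30 degrees; a hand-keyed fcurve
# nudges it to 40. Keying the mix weight blends the adjustment in and out.
print(blend(30.0, 40.0, 0.0))   # 30.0 (clip only)
print(blend(30.0, 40.0, 0.5))   # 35.0 (half-blended)
print(blend(30.0, 40.0, 1.0))   # 40.0 (fcurve only)
```

Keying the Mix Weight parameter over time, as described above, corresponds to animating the third argument of this interpolation.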

Modifying and Offsetting Action Clips

If you want to modify an action clip that contains animation data from fcurves, you can create a clip effect. A clip effect is a package of any number of variables and functions that you use to modify the data in the action source. Each clip effect is an independent package, associated with its action clip, and sits “on top” of the clip’s original action source animation without touching it.

Because the effect is an independent unit, you can easily activate or deactivate it, allowing you to toggle between the clip’s original animation and the animation modifications in the clip effect. This makes it easy to test out changes to your animation.

You may need to edit a clip’s animation for a number of reasons:

• Add a progressive offset (using the cid variable) to a stationary walk cycle so that a character moves forward with each cycle.

• Animation coming from a library of stored actions often needs to be modified to fit a particular goal or environment. For example, you have a walk cycle, but the character must now step over an obstacle, so you have to move the leg over the obstacle.

• Animation that was originally created or captured for a given character must be applied to a different character that has different proportions.

• Animation with numerous keys, such as motion capture animation, must be adjusted, but you don’t want to touch the original animation because it can be difficult to edit.

Open the Mixer Properties editor and select Mix Current Animation. Then adjust the leg and arm a bit (as below right) and key it.

The Mix Weight value determines how much influence the fcurve animation has over the animation in the clip.

Key this parameter to blend the fcurves in and out of the action clips.

Club-bot with a run action clip active in the animation mixer.

Moving a key point in a mocap fcurve results in a peak in the curve.

Page 219: XSI guia basica


Offsetting Clip Values

Offsetting actions is a task that you will probably perform frequently. This lets you move an object in local space so that its animation occurs in a different location from where it was originally defined.

To offset a clip’s values, you can:

• Click the Offset Map button in the mixer’s command bar.

• Choose the Set Offset Map - Changed Parameters command, which compares the current value of all parameters driven by the clip and sets an offset if there is a difference.

• Choose the Effect > Set Offset Keys - Marked Parameters command, which is the same as creating a clip effect, except that the clip effect's offset expression is created for you.

• Choose the Set Pose Offset command to offset all transformations (scaling, rotation, and translation). All parameters to be offset are calculated together as a whole instead of as independent entities. The pose offset is especially useful for offsetting an object’s rotation, as well as position. As with clip effects, pose offsets sit “on top” of a clip’s animation.

How to Add a Clip Effect to a Clip

1. Right-click an action clip and choose Clip Properties.

2. In the Instanced Action property editor, click the Clip Item Information tab.

3. Enter formulas for any item's expression to create a clip effect.

4. The clip effect is created and displayed as a yellow bar above the clip.

The cid variable in a clip effect is the cycle ID number. The cycle ID can be used to progressively offset a parameter in an action, such as for having a walk cycle move forward. The Cycle ID of the current frame is in the Time Control property editor (select the clip and press Ctrl+T).

For example, with a clip effect expression like (cid * 10) + this, the parameter value of the action is used for the duration of the original clip, then 10 is added for the first cycle, 20 is added for the second cycle, and so on.
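The effect of that expression can be sketched in Python (illustrative only; here `this` stands for the parameter's original value from the source and `cid` for the cycle number, matching the expression above, while the `offset_value` helper is invented):

```python
# Sketch of the progressive offset expression (cid * 10) + this.
# cid = cycle number (0 during the original clip); this = source value.

def offset_value(this, cid, stride=10):
    return (cid * stride) + this

# A stationary walk cycle whose root position oscillates around 0
# moves forward 10 units with each cycle:
for cid in range(3):
    print(cid, offset_value(0.0, cid))  # 0 0.0 / 1 10.0 / 2 20.0
```

Because the offset grows with the cycle number, the cycled clip produces a character that actually travels forward rather than walking in place.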


Original position on the left, with the foot in the ball.

The leg effector is translated to a position where Club-bot is just about to kick the ball, and an offset key is set.

Page 220: XSI guia basica


Changing Time Relationships (Timewarps)

A timewarp basically defines the speed of the animation in a clip. Timewarps change the relationship between the local time of the clip and the time of its parent (either a compound clip or the entire scene), while taking into account other settings such as scaling and cycling. You can make a clip speed up, slow down, and reverse itself in a nonlinear way (such as making a character run or walk backwards).

When you apply a timewarp to a compound clip, it creates an overall effect that encompasses all clips that are contained within the compound clip.

If your clip is cycled or bounced, the timewarp can either be repeated on each cycle or bounce or encompass the duration of the whole extrapolated clip (the warp is not repeated with each cycle or bounce). This means, for example, that the overall animation on a cycled clip could increase in speed with each cycle.

You can apply a timewarp by right-clicking a clip and choosing Time Properties, or by selecting a clip and pressing Ctrl+T. The Warp page is home to both the Do Warp and Clip Warp options. Use the Clip Warp option for applying a warp over an extrapolated clip to warp its overall animation.
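Conceptually, a timewarp is a remapping function applied to the clip's normalized local time. The sketch below uses invented function names to illustrate the idea only; it is not Softimage's implementation:

```python
# Sketch of a timewarp as a time-remapping function
# (conceptual illustration, not Softimage's implementation).

def linear_time(t, clip_start, clip_length):
    """Un-warped: local time advances uniformly, 0..1 over the clip."""
    return (t - clip_start) / clip_length

def reverse_warp(u):
    """A warp that plays the clip backwards."""
    return 1.0 - u

def ease_warp(u):
    """A warp that starts slow and speeds up (u squared)."""
    return u * u

t = 30.0  # scene frame
u = linear_time(t, clip_start=20.0, clip_length=40.0)  # 0.25 through the clip
print(reverse_warp(u))  # 0.75: sampling near the end of the source
print(ease_warp(u))     # 0.0625: still near the start, so playback is slowed
```

Whether the warp function is applied once over the whole extrapolated clip or re-applied to each cycle or bounce corresponds to the Clip Warp choice described above.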

Sharing Animation between Models

One of the great things about actions is that you can use them again and again. You can create an action for one model and then use it again to animate another model in the same or another scene. You can even use the same action for different objects within the same model.

There are a number of ways in which you can share animation between models, whether they are in the same scene or a different scene:

• Copy action sources and compound sources between models in the same scene.

• Copy action clips and compound clips (which lets you combine a number of clips non-destructively) between models.

• Save an action source as a preset to copy action sources between models in different scenes.

• Create an external action source in a separate file in different formats (.xsi or .eani) to be used in other Softimage scenes.

• Import and export action sources in different file formats to be used in other scenes or other software packages.

• Import and export a model’s animation mixer as a preset (.xsimixer) to copy it to models in the same scene or another scene.

These two models can share actions easily because they have similar hierarchies.

Page 221: XSI guia basica


Copying Action Sources between Models

If you want to share an action source between models in the same scene, you can drag-and-drop one from the model’s Mixer > Sources > Animation folder in the explorer onto the mixer of another model. This makes a copy of that action source for the model.

To copy compound sources between models, press Ctrl while you drag the compound action source from the model’s Mixer > Sources > Animation to a track in the other model’s mixer.

Mapping Model Elements for Sharing

Sharing actions is possible because each model has its own namespace. This means that each object in a single model's hierarchy must have a unique name, but objects in different models can have the same name. For example, if an action contains animation for Bob's left_arm, you can apply the action to Biff's model and it automatically connects to Biff's left_arm element.

If the names for some of the objects and parameter names in the source don’t match when you’re copying sources between models, the Action Connection Resolution dialog box opens up in which you can resolve how the object or parameters are mapped.

You can also create connection-mapping templates to specify the proper connections between models before you copy action sources between models. These templates set up rules for mapping the object and parameter names stored in the action sources, such as when similar elements use different naming schemes (for example, L_ARM and LeftArm).

To create a connection-mapping template, open the animation mixer and choose Effect > Create Empty Connection Template. A template is created for the current model and the Connection Map property editor opens. Once you have created an empty connection-mapping template, you can add and modify the rules as you like.
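A connection-mapping template is essentially a set of name-substitution rules. A minimal sketch of the idea in Python (the rules, names, and `resolve` helper are invented for illustration; Softimage stores its rules in the Connection Map property, not in code like this):

```python
import re

# Sketch of connection-mapping rules: rewrite object names stored in an
# action source so they resolve on a model with a different naming scheme.

RULES = [
    (re.compile(r"^L_(\w+)$"), r"Left\1"),   # L_ARM -> LeftARM
    (re.compile(r"^R_(\w+)$"), r"Right\1"),  # R_LEG -> RightLEG
]

def resolve(name):
    for pattern, repl in RULES:
        if pattern.match(name):
            return pattern.sub(repl, name)
    return name  # names with no rule map to themselves

print(resolve("L_ARM"))    # LeftARM
print(resolve("spine01"))  # spine01 (unchanged)
```

Ordinary name matching (the automatic left_arm-to-left_arm case described earlier) is just the fall-through branch where no rule applies.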

1. Open an explorer and expand the Model node for the model from which you want to copy the action source (the original).

2. Open the animation mixer for the model to which you want to copy the action source (the target).

3. Drag a source from the original model's Mixer > Sources > Animation folder in the explorer and drop it on a track in the animation mixer of the target model.

Jaiqua’s (on the left) elements are mapped to the corresponding ones on the Club-bot using a connection-mapping template.

This is set up before action sources are shared between them.

Page 222: XSI guia basica


Adding Audio to the Mix

You can add audio files to your scenes using the animation mixer. This allows you to adjust the timing of your animations by using the sound as a reference. For example, you can use an audio file as reference for lip syncing with a shape-animated face, or sync up some special effect noise with an animation. Or you could load an audio file to do some previsualization or storyboarding as you’re experimenting with your animation project.

Sound files are added as audio clips on tracks in the animation mixer in the same way that you load action and shape sources as clips on tracks. Once you have an audio clip in the mixer, you can move it along the track, copy it, scale it, add markers to it, and mute or solo it.

The following process shows how you can easily load and play sound files in the animation mixer.

How to Synchronize Audio with Animation

In the Playback panel, click the All button so that RT (real-time playback) is active.

1. Load an audio source file on an audio track in the animation mixer to create an audio clip. To do this, right-click a tan-colored audio track and choose Load Source from File.

2. Play the audio clip using the regular playback controls below the timeline, including scrubbing in the timeline and looping.

3. Move the playback cursor to the portion of the audio wave you want to mark. Create markers with the Create Marker tool in the mixer by pressing the M key, then dragging over a range of frames on the clip. Markers let you delimit different portions of the audio clip and give their wave patterns a corresponding meaningful name to help you synchronize more easily with the animation.

4. Adjust the animation of the character (such as facial animation) to match the marked audio waveforms. To help do this, you can view the audio waveform in the timeline or the fcurve editor to sync with the animation, or create a flipbook to preview the animation with audio.

5. When you're satisfied with the results, do a final render and use an editing suite to add the sound to the final animation.

Toggle the sound on and off by clicking the headphones icon (on or muted).

Page 223: XSI guia basica


Section 13

Simulation

Imagine a scene with an alien climbing out of her space ship: it has just crashed to the ground after breaking through fence posts like match sticks, smoke streaming out of the engine. As she stares at the burning rubble that was once her home in the skies, a single tear rolls down her cheek. She stumbles through a raging snow storm, the howling wind whipping through her hair and tearing at her cape.

You can use all the simulation powers in Softimage to create your own compelling scenes—all the tools are there for you.

What you’ll find in this section ...

• Simulated Effects

• Making Things Move with Forces

• Hair and Fur

• Rigid Body Dynamics

• Soft Body Dynamics

• Cloth Dynamics

Page 224: XSI guia basica


Simulated Effects

In Softimage, you can simulate almost any kind of natural, or unnatural, phenomena you can think of. To simulate these phenomena, you must first make objects into rigid bodies, soft bodies, or cloth, generate hair from an emitter, or create ICE particles. Only these types of objects can be influenced by forces and collisions to create simulations.

Forces make simulated objects move and add realism. As well, you can create collisions using any type and number of obstacles for any type of simulated object.

About Particles in Softimage

The Particles, Fluid, and Explode operators that existed in Softimage for many versions (now referred to as legacy particles) have been removed from Softimage to make room for ICE particles.

If you’re used to working with the legacy particle system, you’re going to recognize some of the same concepts and features in ICE particles, but that’s where it ends. Everything for ICE particles works in a completely different system.

ICE (Interactive Creative Environment) is a visual programming environment designed to easily create particle effects, and much more, by connecting data nodes together to create an ICE tree. You may find the learning curve for using the ICE tree a little steep at first, depending on what you want to do and what your technical level is, but soon you’ll find yourself connecting nodes together like a pro!

For information, see ICE: The Interactive Creative Environment on page 241 and ICE Particles on page 271.

[Figure: examples of simulated object types: particles, hair, cloth, and rigid bodies.]


Making Things Move with Forces

Forces make simulated objects move according to different types of forces in nature. Each force in Softimage has a control object that you can select, translate, rotate, and scale like any other object in a scene. For example, you can animate the rotation of a fan’s control object to create the effect of a classic oscillating fan. Scaling a force’s control object changes its strength as well as its size.

Each simulated object can have multiple natural forces applied to it, and the same force can be applied to any number of simulated objects.

Creating and Applying a Force

You can apply a force to hair, soft bodies, and cloth as described below.

1. Select the hair, cloth, or soft body object to which you want to apply the force.

2. Create a force from the Get > Force menu on the Simulate toolbar. The force is automatically applied to the selected object.

You could also select the hair object and apply an existing force to it by choosing Modify > Environment > Apply Force on the Hair toolbar, or select the cloth/soft body object and choose Cloth/Soft Body > Modify > Apply Force on the Simulate toolbar.

For rigid bodies, the process is simpler: simply create a force from the Get > Force menu and it is applied to all rigid bodies in the current simulation environment.

To use forces on ICE particles, see Forces and ICE Simulations on page 250.

Types of Forces

You can use any of these forces with hair, ICE particles, and rigid bodies, but not all forces work with soft body or cloth.


A Gravity applies a force that defines an acceleration over time. To get the correct gravitational behavior from simulated objects, their size must be taken into consideration.

B The Fan creates a “local” effect of wind blowing through a cylinder so that everything inside the cylinder is affected.

C An Eddy force simulates the effect of a vacuum or local turbulence by creating a vortex force field inside a cylinder.

D The Drag force opposes the movement of simulated objects, as if they were in a fluid.

E The Vortex simulates a spiralling, swirling movement.

F The Wind is a directional force with velocity and strength. It generates a force that speeds up simulated objects to a target velocity.

G The Turbulence force builds a wind field to let you imitate turbulence effects, such as the violent gusts of air that occur when an airplane lands.

H The Toric force simulates the effect of a vacuum or local turbulence by creating a vortex force field inside a torus.

I The Attractor force attracts or repels simulated objects much like a magnet attracts/repels iron filings.
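Whatever the force type, each one ultimately contributes a per-frame acceleration to the simulated points. The sketch below is purely conceptual (it is not Softimage's solver code, and all names and constants are illustrative) but shows the distinct behaviors of three of these forces: gravity as a constant acceleration, drag opposing current motion, and wind accelerating the object toward a target velocity.

```python
# Conceptual sketch of three force types acting on one simulated point.
# NOT Softimage's solver; just the underlying physics, simplified.

def step(pos, vel, dt, gravity=(0.0, -9.8, 0.0),
         drag=0.1, wind_target=(5.0, 0.0, 0.0), wind_strength=0.5):
    ax, ay, az = gravity                      # gravity: constant acceleration
    # drag: opposes current motion, as if moving through a fluid
    ax -= drag * vel[0]
    ay -= drag * vel[1]
    az -= drag * vel[2]
    # wind: speeds the point up toward a target velocity
    ax += wind_strength * (wind_target[0] - vel[0])
    ay += wind_strength * (wind_target[1] - vel[1])
    az += wind_strength * (wind_target[2] - vel[2])
    vel = (vel[0] + ax * dt, vel[1] + ay * dt, vel[2] + az * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt, pos[2] + vel[2] * dt)
    return pos, vel

pos, vel = (0.0, 10.0, 0.0), (0.0, 0.0, 0.0)
for _ in range(30):                           # simulate 30 frames (1 second)
    pos, vel = step(pos, vel, dt=1.0 / 30.0)
```

After a second of simulation the point has started falling under gravity while the wind pushes it sideways toward the target velocity.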


[Figure: the control objects for force types A through I.]


Hair and Fur

In Softimage you can make all sorts of hairy and furry things—from Lady Godiva to wolves, bears, and grass. Hair in Softimage is a fully integrated hair generator that interacts with other elements in the scene. If you apply dynamics to the hair, the dynamics operator calculates the movement of the hair according to the velocity of the emitter object and any forces that are applied to the hair object.

Hair comes with a set of styling tools that let you groom and style the hair almost as easily as if it were on your own head. You can control the styling hairs one at a time, or grab many and style them as a whole.

To control the rendered look, you can use two special shaders designed for hair, or you can use any other Softimage shader with hair. And as with all things rendered in Softimage, you can use the render region to preview accurate results.

Hair is represented by two types of hairs: guide hairs and render hairs. Guide hairs are segmented curves that are used for styling, while render hairs are the “filler” hairs that are generated from and interpolated between the guide hairs. Render hairs are the only hairs that are actually rendered.

Guide hairs shown in white (selected). These are the hairs that you style.

The render hairs are interpolated between the guide hairs—these are the hairs that are rendered.
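The interpolation idea is simple: each render hair is a blend of the nearby styled guide hairs. Here is a minimal sketch of that concept (not Softimage's actual generation algorithm; the two-guide linear blend and the weights are illustrative assumptions).

```python
# Sketch: "render hairs" as weighted blends of styled guide hairs.
# Each hair is a list of (x, y) segment points. Illustrative only.

def blend_hairs(guide_a, guide_b, w):
    """Interpolate one filler hair between two guides (w in [0, 1])."""
    return [((1 - w) * ax + w * bx, (1 - w) * ay + w * by)
            for (ax, ay), (bx, by) in zip(guide_a, guide_b)]

guide_a = [(0.0, 0.0), (0.0, 1.0), (0.5, 2.0)]   # styled to lean right
guide_b = [(2.0, 0.0), (2.0, 1.0), (1.5, 2.0)]   # styled to lean left
# Spread three render hairs between the two guides:
render_hairs = [blend_hairs(guide_a, guide_b, w) for w in (0.25, 0.5, 0.75)]
```

Styling the guides therefore indirectly shapes every filler hair between them, which is why you only ever groom the guide hairs.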

Overview of Growing and Grooming Hair

1. Emit hair from an object, cluster, or curves.

2. Style the guide hairs using tools on the Hair toolbar.

3. Apply dynamics to have hair respond to movement, forces, and collisions.

4. Select obstacles for hair collisions.

5. Adjust the default hair shader or apply another one to the hair.

6. View and set up how the render hairs look.


Basic Grooming 101

When you’re styling, you always work with the guide hairs: these are the hairs that are similar to and behave like segmented IK chains. In fact, you can grab a hair tip and position it the same way as you would the effector on an IK chain.

Because guide hairs are actual geometry, you can use all of the standard Deformation tools on them to come up with some groovy hairdos! Lattices, envelopes, deform by cluster center, randomize, and deform by volume usually produce the best results. However, if you animate the deformations, you cannot then use dynamics on the hair.

Select tips, points, or entire strands of hair to style in any way. Here, just the tips of some hair strands are selected.

Copy the style to another hair object.

Comb the hair in the desired direction, such as in the negative Y direction. Maybe use Puff to give some lift at the roots.

Use the Clump tool to bring hair strands or points together or fan them out.

When you use a styling tool after selecting Tip, press Alt+spacebar to return to the Tip selection tool.

You can find all styling tools on the Hair toolbar (press Ctrl+2).

Change the length of the guide hairs using the Cut tool or the Scale tool.

Translate and rotate specific tips or points of hair.

Use the Brush tool to sculpt hairs with a natural falloff, like proportional modeling.

You can deform the shape of the hair using any deformation tool, like a lattice.

To have smoother animation, activate Stretchy mode to allow the hair segments to stretch along with the deformation.


Making Hair Move with Dynamics

When you apply dynamics to hair, you make it possible for the hair to move according to the velocity of the hair emitter object, like long hair whipping around as a character turns her head quickly. The dynamics calculations also take into account any forces applied to hair, such as gravity or wind, as well as any collisions of the hair with obstacles.

You can also use dynamics as a styling tool by freezing the hair when it’s at a state that you like. For example, apply dynamics, apply some wind to the hair, then freeze the hair when it has that wind-swept look.

Getting the Look with Render Hairs

The render hairs are the “filler” hairs that are generated from and interpolated between the guide hairs. And as their name implies, render hairs are the hairs that are actually rendered. You can change the look of a hair style quite a lot by modifying the render hairs.

How to apply dynamics to hair

1. Select the hair and choose Create > Dynamics on the Hair toolbar.

2. Animate the hair emitter object’s translation or rotation, or apply a force to the hair to make it move.

3. Play through the simulation—you may want to loop it.

4. Adjust the hair’s Stiffness, Wiggle, and Dampening parameters, if necessary.

5. Set the Cache to Read&Write, then play the simulation to cache it to a file for faster playback and scrubbing. Caching also helps produce more consistent rendering results.

Tip: Click the Style button on the Hair toolbar to toggle the dynamics state. You can style the hair only when dynamics is off.

Change the number of segments to change the hair’s resolution. Use a higher amount for curly or wavy hair.

Set the hair’s density according to a weight or texture map so that you can create some bald spots or sparser growth.

You can also use cut maps for the render hair length so that some areas have shorter hair than others according to a weight map.

Set the number of render hairs to be rendered, then decide which percentage of this value you want to display. To work quickly, display a low percentage, then display the full amount of hair for the final render.

Set the render hair root and tip thickness separately.

Add kink, waves, and frizz to render hairs to change their shape.


Hair Shaders and Rendering

Rendering hair is similar to rendering any other object in Softimage. You can use all standard lighting techniques (including final gathering and global illumination), set shadows, and apply motion blur. Hair is rendered as a special hair primitive geometry by the mental ray renderer.

While you can use any type of Softimage shader on hair, the Hair Renderer and Hair Geo shaders give you the most control for making the hair look the way you want. You can determine different coloring, transparency, and translucency anywhere along the length of the hair, such as at the roots and tips.

The Hair Geo shader lets you set the coloring, transparency, and translucency using gradient sliders, which give you lots of control over where the shading occurs along the hair strand.

You can even add incandescence to make the hair “glow”.

The Hair Renderer shader gives you control over coloring, transparency, and shadows along the hair strands. You can also optimize the render and take advantage of final gathering.

To get started with some hair coloring, choose View > General > Preset Manager, then drag and drop a preset from the Materials > Hair tab onto a hair object. These presets use the Hair Renderer shader.

How to attach shaders to hair

1. Select the hair and open a render tree (press 7). This tree shows the default shader connection when you create hair.

2. To connect other Softimage shaders to the hair, disconnect the current Hair shader. Then you can load and connect another shader directly to the hair’s Material node. For example, you can attach a Toon Paint or standard surface shader to the Surface and Shadow inputs of the hair’s Material node to change the hair’s color.

3. To switch to the Hair Geo shader, choose Nodes > Hair > Hair Geometry Shading and attach it to the hair’s Material node in the same way as the Hair Renderer shader.

[Figure: incandescence on the inner part of the hair strand; incandescence on the rim of the hair strand.]


Connecting a Texture Map to Hair Color Parameters

A texture map is the combination of a texture projection plus an image file whose pattern of colors you want to map. Instead of a value being applied over the surface as with a weight map, a texture map applies a color. When mapping a texture to the hair color parameters in the hair shaders, the color of each individual strand is derived from the texture color found at the root of the hair.
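In other words, each strand does a single texture lookup at its root. A tiny sketch of that idea (this is not Softimage's shader code; the 2x2 "image", nearest-texel filtering, and the UV values are illustrative assumptions):

```python
# Sketch: each strand's color comes from the texture texel under its root UV.
# A 2x2 list of RGB tuples stands in for the mapped image. Illustrative only.

image = [[(255, 0, 0), (0, 255, 0)],    # row 0: red, green
         [(0, 0, 255), (255, 255, 0)]]  # row 1: blue, yellow

def root_color(u, v, img):
    """Nearest-texel lookup of the texture color at a root's (u, v)."""
    rows, cols = len(img), len(img[0])
    col = min(int(u * cols), cols - 1)
    row = min(int(v * rows), rows - 1)
    return img[row][col]

# Two roots landing on different texels give differently colored strands:
strand_1 = root_color(0.1, 0.1, image)   # lands on the red texel
strand_2 = root_color(0.9, 0.1, image)   # lands on the green texel
```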

Unlike other geometry in Softimage, hair is not a typical surface so you can’t apply projections directly to it. Instead, you need to create a texture map property for the hair emitter object first, and then transfer it to the hair itself.

To do this, apply a texture map to the hair emitter using one of the Get > Property > Texture Map commands, associate an image to this projection to use as the map, then transfer the texture map from the hair emitter to the hair object itself using the Transfer Map button on the Hair toolbar.

Rendering Objects (Instances) in Place of Hairs

Replacing hairs with objects allows you to use any type of geometry in a hair simulation. You can replace hair with one or more geometric objects (referred to as instances) to create many different effects. For example, you could instance a feather object for a bird or instance a leaf object to create a jungle of lush vegetation.

The instanced geometry can be animated, such as its local rotation or scaling, or animated with deformations. This allows you to animate the hair without needing to use dynamics, such as instancing wriggling snakes on a head to transform an ordinary character into Medusa!

To render instances for the hairs, simply put the objects you want to instance into a group, and each object in the group is assigned to a guide hair using the Instancing options in the Hair property editor. The instanced geometry is calculated at render time, so you’ll only see the effect in a render region or when you render the frames of your scene.

You can choose whether to replace the render hairs or just the guide hairs. You can also control how the instances are assigned to the hair (randomly or using weight map values), as well as control their orientation by using a tangent map or by having them follow an object’s direction.

You can change the color of the hair using a texture map connected to the hair shaders’ color parameters.

Transfer the texture map from the hair emitter to the hair object using the Transfer Map button.

You can render instances of 3D objects as hair instead of the hair’s geometry.

The instance objects can even be animated!


Rigid Body Dynamics

Rigid body dynamics let you create realistic motion using rigid body objects (referred to as rigid bodies), which are objects that do not deform in a collision. With rigid body dynamics, you can create animation that could be difficult or time-consuming to achieve with other animation techniques, such as keyframing. For instance, you can easily make effects such as curling rocks colliding and rebounding off each other, a brick wall crumbling into pieces, or a saloon door swinging on its hinges.

You can make a regular object into a rigid body by simply selecting it and choosing a Create > Rigid Body command from the Simulate toolbar. This applies rigid body properties to that object, which include the object’s physical and collision properties, such as its mass or density, center of mass, elasticity, and friction.

The center of mass is the location at which a rigid body spins around itself when dynamics is applied (forces and/or collisions). By default, the center of mass is at the same location as the object’s center, but you can move it to wherever you like.
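The sliding-box example that follows comes down to simple statics: the box starts tipping once its center of mass passes the supporting edge, so shifting the mass toward a corner makes it tumble sooner. A hedged sketch of just that arithmetic (illustrative point masses, not Softimage's dynamics code):

```python
# Sketch: a box starts tipping once its center of mass passes the
# supporting edge. Illustrative only; not Softimage's rigid body solver.

def center_of_mass(point_masses):
    """Center of mass of (position, mass) pairs along one axis."""
    total = sum(m for _, m in point_masses)
    return sum(x * m for x, m in point_masses) / total

def starts_tipping(com_x, edge_x):
    """True once the COM is past the edge of the supporting surface."""
    return com_x > edge_x

# Uniform box centered at x=0 vs. mass shifted toward the right corner:
uniform = center_of_mass([(-1.0, 1.0), (1.0, 1.0)])
shifted = center_of_mass([(-1.0, 0.2), (1.0, 1.8)])
```

With the table edge at, say, x = 0.5, the uniform box is still supported while the shifted one has already begun to tumble.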

Center of mass at default location of object’s center.

Notice how the box bounces a bit in the middle before falling off the edge.

Center of mass is moved to the bottom right corner of the object.

Notice how the box hits the edge and tumbles more quickly with more spinning.

How to Create a Rigid Body Simulation

1. Select an object and choose either Create > Rigid Body > Active Rigid Body or Passive Rigid Body from the Simulate toolbar. A simulation environment is automatically created in which the rigid body dynamics are calculated.

2. Apply a force to the scene, such as gravity. The force is added to the simulation environment. If a rigid body is animated, you don’t need a force to make it move: just make sure to use its animation as its initial state for the simulation.

3. Have two or more rigid bodies collide—make their geometries intersect at any time other than at the first frame. Here, the floor is set as an obstacle by making it a passive rigid body.

4. Set up the playback for the environment. This includes the duration of the simulation, the playback mode, and caching the simulation.

5. Play the simulation!

Tip: Animation ghosting lets you display a series of snapshots of the rigid bodies at frames behind and/or ahead of the current frame. You can preview the simulation result without having to run the simulation!


Simulation Environments

All elements that are part of a rigid body simulation are controlled within a simulation environment. A simulation environment is a set of connection groups, one for each type of element in the simulation.

A simulation environment is created as soon as you make an object into a rigid body. You can also create more environments so that you have multiple simulation environments in one scene.

The dynamics operator solves the simulation for all elements that are in this environment. You have a choice of dynamics operators in Softimage: PhysX or ODE. PhysX is the default operator, offering stable and accurate collisions with many rigid bodies in a scene, even when using the rigid body’s actual shape as the collision geometry. ODE is a free, open-source library for simulating rigid body dynamics.

Adding Forces to the Environment

When you create a force in a scene, that force is automatically added to the Forces group in the current simulation environment and the dynamics solver calculates all active rigid bodies’ movements according to the force. If there are other simulations in the scene (such as particles or hair), they are not affected by the force unless you specifically apply it to them.

After you apply the force, you can adjust its weight individually on the rigid bodies. For example, you may want to have only 50% of a gravity force’s weight applied to a specific rigid body, while you want 100% of the gravity’s weight used on all the other rigid bodies in the simulation.
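Per-body force weighting is just a multiplier on the force before it is applied to each body. A minimal sketch of that arithmetic (the body names and weight values are made up for illustration; this is not the Softimage API):

```python
# Sketch: scaling one force's effect per rigid body via a weight.
# Names and numbers are illustrative; not Softimage code.

GRAVITY = -9.8   # acceleration applied at full (100%) weight

# Hypothetical per-body weights: one body feels only half the gravity.
bodies = {"crate": 1.0, "floaty_crate": 0.5}

def weighted_accel(weight, force=GRAVITY):
    """Acceleration a body actually receives from a weighted force."""
    return weight * force

accels = {name: weighted_accel(w) for name, w in bodies.items()}
```

The fully weighted body falls at the normal rate, while the 50% body accelerates half as hard under the same gravity force.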

Passive or Active?

Rigid bodies can be either active or passive:

• Active rigid bodies are affected by dynamics, meaning that they can be moved by forces and collisions with other rigid bodies.

• Passive rigid bodies participate in the simulation but are not affected by dynamics; that is, they do not move as a result of forces or collisions with other rigid bodies. They can, however, be animated. You often use passive objects as stationary obstacles or as stationary objects in conjunction with rigid constraints (as an anchor point).

You can easily change the state of a rigid body by toggling the Passive option in the rigid body’s property editor.

You can see the current simulation environment by using the Curr. Envir. scope in the explorer. Or use the Environments scope to see all simulation environments in the scene.

All elements involved in the rigid body simulation are contained within this environment.

The pool table is a passive rigid body, while the white ball is an active rigid body with the gravity force applied.

The ball rebounds off the table but the table does not move.


Animation or Simulation?

You can apply rigid body dynamics to objects that are animated or not:

• If the rigid bodies are animated, you can use their animation (position, rotation, and linear/angular velocity) for the initial state of the simulation. When you apply a force to an animated rigid body, the force takes over the object’s movement as soon as the simulation starts.

• If the rigid bodies are not animated, you need to apply a force to make them move.

You can easily animate the active/passive state of a rigid body to achieve various effects: simply animate the Passive option on and off in the rigid body’s property editor.

Creating Collisions with Rigid Bodies

Rigid bodies are all collision objects—you don’t need to specifically set an object as an obstacle with rigid bodies. For example, to animate billiard balls colliding with each other, you simply make the balls into rigid bodies. Then when they come in contact with each other, they all react to the collision.

At least one rigid body must be active to create a collision. When you have collisions between two or more active objects, they all move because they are all affected by the dynamics.

You can put rigid bodies into different collision layers, which lets you create exclusive groups of rigid bodies that can collide only with each other. By putting rigid bodies that don’t need to collide together in different layers, you can lessen the collision processing time.
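Conceptually, collision layers act as a filter on which pairs of bodies are even tested against each other. A small sketch of that filtering (the body names and layer numbers are invented for illustration; this is not Softimage's collision code):

```python
# Sketch: collision layers as a pair filter. Bodies only generate a
# collision test when they share a layer. Illustrative only.

from itertools import combinations

# Hypothetical scene: two balls in layer 1, two debris chunks in layer 2.
layers = {"ballA": 1, "ballB": 1, "debris1": 2, "debris2": 2}

def colliding_pairs(layer_of):
    """All body pairs that share a layer and so must be collision-tested."""
    return [(a, b) for a, b in combinations(sorted(layer_of), 2)
            if layer_of[a] == layer_of[b]]

pairs = colliding_pairs(layers)
```

Of the six possible pairs among four bodies, only the two same-layer pairs remain, which is exactly how layering cuts collision processing time.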

Animation: The billiard ball is a passive rigid body whose rotation and translation are animated to make it move to the table’s edge. A gravity force has been applied to the simulation environment.

Simulation: When the ball reaches the edge of the table, the ball’s state is switched from passive to active, the simulation takes over, and gravity makes the ball fall down.

All billiard balls are assigned as active rigid bodies. When the white ball (circled) hits them, they all react to the collision.


Elasticity and Friction

All rigid bodies use a set of collision properties to calculate their reactions to each other during a collision, including elasticity and friction.

• Elasticity is the amount of kinetic energy that is retained when an object collides with another object. For example, when a billiard ball hits the table, elasticity influences how much the ball rebounds.

• Friction is the resisting force that determines how much energy is lost by an object as it moves along the surface of another. For example, a billiard ball rolling along a table has a lower friction value than a rubber ball along a table. Likewise, a billiard ball rolling on a carpet would have more friction than if it was rolling on a marble floor.
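The effect of both properties can be sketched in a couple of lines: elasticity scales how much of the incoming speed survives a bounce, and friction drains speed while the body slides. This is a deliberately simplified restitution-style model with made-up numbers, not Softimage's collision response code:

```python
# Sketch: elasticity as the fraction of speed kept in a rebound,
# friction as speed lost while sliding. Illustrative only.

def bounce(v_down, elasticity):
    """Rebound speed after hitting the table (elasticity in [0, 1])."""
    return -elasticity * v_down

def slide(v, friction, dt):
    """Sliding speed after a simple friction decay over one step."""
    return max(0.0, v - friction * dt)

billiard = bounce(-4.0, 0.8)     # lively ball: rebounds at 3.2 up
bean_bag = bounce(-4.0, 0.1)     # barely rebounds at all
on_marble = slide(2.0, friction=0.5, dt=1.0)   # keeps most of its speed
on_carpet = slide(2.0, friction=1.5, dt=1.0)   # loses far more speed
```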

Collision Geometry Types

The collision type is the geometry used for the collision, which can be a bounding box/capsule/sphere, a convex hull, or the actual shape of the rigid body’s geometry.

• Bounding shapes (capsules, spheres, and boxes) provide a quick solution for collisions when shape accuracy is not an issue or the bounding shape’s geometry is close enough to the shape of the rigid body.

• Actual Shape provides an accurate collision but takes longer to calculate than bounding shapes or convex hulls. This is useful for rigid body geometry that is irregular in shape or has holes, dips, or peaks that you want to consider for the collision, such as this bowl with cherries falling inside of it.

• Convex hulls give a quick approximation of a rigid body’s shape, with the results similar to a box being shrinkwrapped around the rigid body. They have the advantage of being very fast. Any dips or holes in the rigid body geometry are not calculated, but it is otherwise the same as the rigid body’s original shape.

Actual Shape provides an accurate collision using the rigid body’s original shape.

Convex hull doesn’t calculate the dip in this bowl, but is otherwise the same as the bowl’s shape.

Bounding shapes: box, sphere, and capsule
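The speed difference between these geometry types is easy to see for bounding boxes: an overlap test is just a few comparisons per axis, with no mesh data involved. A hedged sketch (axis-aligned boxes and the bowl/cherry numbers are illustrative, not Softimage code):

```python
# Sketch: why bounding boxes are the fastest collision type; an
# axis-aligned overlap test is six comparisons. Illustrative only.
# A box is ((min_x, min_y, min_z), (max_x, max_y, max_z)).

def aabb_overlap(a, b):
    """True if two axis-aligned bounding boxes intersect."""
    (a_min, a_max), (b_min, b_max) = a, b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i]
               for i in range(3))

bowl       = ((0.0, 0.0, 0.0), (4.0, 2.0, 4.0))
cherry_in  = ((1.0, 1.0, 1.0), (1.5, 1.5, 1.5))   # inside the bowl's box
cherry_far = ((9.0, 0.0, 0.0), (9.5, 0.5, 0.5))   # well away from it
```

Note the trade-off the text describes: a cherry hovering in the bowl's hollow would still register against the bowl's bounding box, which is exactly why Actual Shape (or a convex hull) exists for irregular geometry.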


Constraints between Rigid Bodies

You can set constraints between rigid bodies to limit a rigid body to a specific type of movement. For example, you could create a trap door that has a hinge at one of its ends. Then when some crates fall on the trap door, the collision causes the trap door to open up and the crates fall through it.

Rigid body constraints are actual objects that you can transform (translate, rotate, and scale), select, and delete like any other 3D object in Softimage.

You can constrain two rigid bodies together, a single rigid body to a point in global space, or constrain several active rigid bodies together as a chain.

Types of rigid body constraints: Hinge, Slider, Ball and socket, Spring, and Fixed.

How to constrain rigid bodies

In this example, A is a passive rigid body and B is an active rigid body.

1. Choose a constraint from the Create > Rigid Body > Rigid Constraint menu, then left-click to pick the position for the constraint object.

2. Pick the first constrained rigid body (A). The constraint object connects to its center.

3. Pick the second constrained rigid body (B). The constraint connects to its center, joining the two rigid bodies together.

Rigid body B’s resulting movement with gravity applied. Notice how the constraint object is attached to both rigid bodies’ centers.

To constrain multiple rigid bodies to one, choose a command from the Create > Rigid Body > Multi Constraint Tool menu.


Cloth Dynamics

The cloth simulator uses a spring-based model for animating cloth dynamics. You can specify and control the mass of the fabric, the friction, and the degree of stiffness, allowing you to simulate different materials such as leather, silk, dough, or even paper.

Cloth deformation is controlled by a virtual “spring net” made up of three different types of springs, each controlling a different kind of deformation: shearing, stretching, and bending.

After you set up how the cloth is deformed according to its own “internal” spring-based forces, you can then affect how it’s deformed using external forces, such as gravity, wind, fans, and eddies.

As well, you can have the cloth collide with external objects or with itself. The obstacles can be animated or deformed and interact with the cloth model according to the cloth’s and obstacle’s friction.

Although you can apply cloth only to single objects, you could create a larger object (such as a garment) made of multiple NURBS surface patches stitched together using any number of points.

You must first assemble the different patches into a single surface mesh object, then apply cloth to that object. Set the Stitching parameters in the ClothOp property editor to create seams between the different NURBS surfaces of the same surface mesh model.

Low resistance to Shear.

Low resistance to Bend.

Low resistance to Stretch.

• Bend controls the resistance to bending. With low values, the cloth moves very freely, like silk; with high values, the cloth appears like rigid linen or even leather.

• Stretch controls the resistance to stretching, which is the elasticity of the material. Low values allow the cloth to deform without resistance, while higher values prevent the cloth from having elasticity.

• Shear controls the resistance to shearing (crosswise stretching), keeping as close to the original shape as possible. Try decreasing this value if the cloth’s wrinkling is too rigid.
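Each spring type is the same Hooke-style mechanism with a different stiffness and connectivity. The sketch below shows only the core idea, a force proportional to how far two points are from their rest distance, with the stiffness values invented for illustration; it is not Softimage's cloth solver:

```python
# Sketch of the spring idea behind cloth: a Hooke force pulls two
# connected points toward their rest distance, with per-type stiffness.
# Illustrative only; not Softimage's ClothOp implementation.

import math

def spring_force(p_a, p_b, rest_len, stiffness):
    """Signed force magnitude: >0 pulls together, <0 pushes apart."""
    d = math.dist(p_a, p_b)
    return stiffness * (d - rest_len)

# Two neighboring cloth points stretched 50% past rest length. A stiff
# "Stretch" spring resists much harder than a soft "Bend" spring:
stretch = spring_force((0.0, 0.0), (1.5, 0.0), rest_len=1.0, stiffness=50.0)
bend    = spring_force((0.0, 0.0), (1.5, 0.0), rest_len=1.0, stiffness=5.0)
```

Raising one stiffness relative to the others is what moves the material from silk-like toward leather-like behavior.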

To give you a head start on creating cloth, there are several presets in the Cloth property editor that let you quickly simulate the look and behavior of different materials, such as leather, paper, silk, or pizza dough.

Paper and Silk presets.


How to apply cloth to an object

1. Select Animation as the Construction Mode. This tells Softimage that you want to use cloth as an animated deformation.

2. Select an object and choose Create > Cloth > From Selection from the Simulate toolbar.

3. Apply forces to make the cloth move. Here, a little gravity and a large fan are applied to create the effect of a strong wind blowing on the flag.

4. Set the cloth’s physical properties such as mass, friction, and resistance to shearing, bending, and stretching.

5. Select objects as obstacles for collisions and choose Cloth > Modify > Set Obstacle. You can also have the cloth collide with itself by activating Self Collision in the ClothOp property page.

6. Play the simulation. To calculate the whole simulation more quickly, go to the last frame of the simulation. You can cache the simulation to files for faster playback, as well as to scrub the simulation and play it backwards.

You can also set clusters of points to define specific areas of a cloth that you want to be affected by the cloth simulation, then use the Nail parameter to nail down these clusters. For example, you can anchor clusters at the sides or corners of a flag to keep it from blowing away in the wind. As well, you can animate the Nail parameter on or off, making it easy to create the effect of a cloth being grabbed and then let go.


Soft Body Dynamics

As the name would indicate, soft bodies are objects that easily deform when they collide with obstacles. In fact, the main reason to create soft bodies is to have collisions with obstacles. You can, for example, use soft body to deform a beach ball being blown across the sand and have it get squashed when it collides with a pail.

Soft body is a deform operator, meaning that it moves only an object’s vertices, never the object’s center. Soft body computes the movements and deformations of the object by means of a spring-based lattice whose resolution you can define using the Sampling parameter in the SoftBodyOp property editor.

You can use soft body on clusters (such as points and polygons), allowing only that part of an object to be deformed by soft body. For example, you can have the cluster of points that form a character’s belly be deformed by soft body for some jelly-like fun!

If the soft-body object is animated, you can either preserve its animation or recalculate it according to any forces you apply, such as wind and gravity. If you keep the object’s animation, soft body acts only as a deformer on the object, but does not influence its movement.

If you want to convert the soft body simulation to animation, you can plot it as shape animation using the Tools > Plot > Shape command on the Animate toolbar.

How to apply soft body to an object or cluster

1. Select Animation as the Construction Mode. This tells Softimage that you want to use soft body as an animated deformation.

2. Select an object or cluster and choose Create > Soft Body > From Selection from the Simulate toolbar. The object can also be animated.

3. Set the soft body physical properties such as mass, friction, stiffness, and plasticity. To give you a head start, click a button on the Presets page to quickly make the object behave like a rubber ball, an air bag, and more.

4. Apply a gravity and/or wind force. If the soft body is not already animated, you need to apply a force to make it move.

5. Select objects as obstacles for collisions and choose Soft Body > Modify > Set Obstacle.

Then play the simulation and watch the ball bounce!


Section 14

ICE: The Interactive Creative Environment

ICE is a graph-based system for controlling deformations and particle effects in Softimage. You can quickly create an effect by connecting a few nodes, or you can dig deeper and use ICE as a complete visual programming environment.

This section describes some of the basic concepts of ICE. The next section, ICE Particles on page 271, describes the workflow for using the predefined ICE compounds to create particle systems.

What you’ll find in this section ...

• What is ICE?

• The ICE Tree View

• ICE Simulations

• Forces and ICE Simulations

• ICE Deformations

• Building ICE Trees

• ICE Compounds


What is ICE?

ICE is a node-based system for controlling all the attributes that define a deformation or particle effect. There are two parts to ICE:

• At its basic level, ICE is a complete visual programming environment. You can combine basic nodes for getting data, modifying data, setting data, and controlling execution flow into elaborate ICE trees. You can easily experiment, in a way that you can’t when writing code, by simply connecting nodes and seeing the results immediately in the viewports. When you’re done, you can package your tree into reusable compounds that you can use in other scenes, share with your team, or even put online to share with the Softimage community.

• On top of that level, Softimage comes with a comprehensive set of predefined compounds for particle simulations. For simple effects, you can connect compounds that define forces or basic behaviors like sticking and bouncing. For more complex effects, you can use the predefined state machine to switch between several behaviors on a per-particle basis.

You can use ICE to:

• Completely control particle systems. You can add and remove points on point clouds. You can move points directly, or apply a simulation using particle or rigid body behavior.

• Deform various geometry types, including polygon meshes, NURBS surfaces, curves, lattices, and point clouds. However, you cannot add or remove components on any geometry type except point clouds.

You cannot use ICE on hair, non-ICE (legacy) particle clouds, groups, or branches.

There are three ways you can approach ICE:

• You can simply use the predefined compounds and adjust their input values to create basic effects.

• At the other extreme, you can dive right in and create your own custom effects from scratch using the base nodes.

• Between the two extremes, you can start with the factory compounds and then modify or augment them with extra nodes to create your own variations of effects.

Under the hood, many nodes connected together in the point cloud’s ICE tree are doing all the work.


A Few Things to Know About ICE...

It’s All About the Nodes

Nodes are the building blocks for ICE: they are operators that work on object data. Some nodes get data from the scene, and some modify and process this data. They have input and output ports that allow them to be connected to each other.

Compounds

Compounds are the “über nodes” of the ICE world. They can contain a whole ICE tree or just parts of it. Compounds make it easy to create more complex effects in the ICE tree because they package numerous nodes into one. And because they’re in a package, you can easily bring compounds into other scenes or share them with other users.

You can connect compounds in the same way that you do for nodes in the ICE tree. As well, you can open up a compound to edit it or just to see what makes it tick.

Softimage ships with many compounds that are designed specifically for particle and deformation workflows. You can find these on the Tasks tab of the preset manager in the ICE Tree view.

The ICETree Node

The ICETree node is like Grand Central Station for an ICE tree: it’s the main operator that processes all the data that flows into it. Nodes in the tree must be connected to it in order to be evaluated.

You can have multiple ICE trees per object as long as each ICETree operator has a different name—and you can easily rename it in the explorer.

Attributes

Attributes are at the heart of ICE. Attributes are data that is associated with objects, or with components such as points, edges, polygons, and nodes. With attributes, you can get and set information such as a particle’s color or shape, or an object’s point position. Almost every ICE tree involves getting and setting attributes in some way.

Attributes can be inherent (always part of the scene), predefined (innately understood by certain base ICE nodes, but dynamic in that they only exist when they are set), or custom (create your own).

Two nodes with ports connected together.

Compound with several input ports.

Some of the many attributes that are available for point clouds.

You can view attributes in an explorer.


The ICE Tree View

The ICE tree view is where you build ICE trees by connecting nodes.

You can open an ICE tree view in a floating window by pressing Alt+9 or by choosing View > General > ICE Tree.

To display an ICE Layout with the ICE tree view embedded, choose View > Layouts > ICE.


A Memo Cams. Save and restore up to four views:

Left-click to recall stored view.

Middle-click to store current view.

Ctrl+middle-click to overwrite stored view with current view.

Right-click to clear stored view.

B Lock. Prevents the view from updating when you select other objects in the scene.

C Refresh. When the view is locked, forces it to update with the current selection in the scene.

D Clear. Clears the view.

E Opens the preset manager in a floating window.

F Displays or hides the preset manager embedded in the left panel (J).

G Displays or hides the local explorer embedded in the right panel (L).

H Bird’s Eye View. Click to view a specific area of the workspace, or drag to scroll. Toggle it on or off with Show > Bird’s Eye View.


ICE Nodes in the Preset Manager

In the preset manager, ICE nodes are separated into two tabs:

• The Tasks tab contains higher-level compounds for accomplishing specific tasks. You can select a task (Particles or Deformation) from the drop-down, and then select a sub-task from the list below.

• The Tools tab contains base nodes and general utility compounds for performing basic operations, like getting data, setting data, adding values, etc.

You can drag a node from the preset manager into an ICE tree and connect it to the graph.

I Control timers and display performance highlights. This is an advanced feature used for profiling and optimizing the performance of ICE trees.

J Embedded preset manager.

You can press Ctrl+F to quickly put the cursor in the preset manager’s text box so that you can start typing a search string. Pressing Ctrl+F will also temporarily display the preset manager if it is hidden.

K ICE tree workspace.

Connect nodes by dragging an output port from the right side of one node onto an input port on the left side of another node. You can connect the same output to as many inputs as you want.

Open a node’s property editor by double-clicking on it. This lets you set parameters that cannot be driven by connections.

Right-click on a node, on a port, or on the background for various options.

Hover the mouse pointer over a connection to highlight the connected ports. If a port is not visible because it has been collapsed or because the view is zoomed out too far, information about the port is displayed in a pop-up.

The nodes in the tree can be base nodes or compound nodes. Compounds are encapsulated subtrees built from base nodes and other compounds. Base nodes have a single border and compound nodes have a double border. See ICE Compounds on page 267 for information on building and exporting your own compounds.

Nodes that cannot be evaluated because of a structural error are displayed in red. Other nodes that will not be evaluated because of an error in their branch are displayed in yellow. See Debugging ICE Trees on page 264.

L Local explorer. When there are multiple ICE trees on the same object, click to select the one to view. You can also click on a material to switch to the render tree view.


Anatomy of an ICE Tree

The following illustration shows a typical ICE tree for a simple particle system. To see some examples of how to build up an ICE tree, check out the three tutorials at the end of this guide.

A Data flows “downstream” from left to right along connections from one node’s output ports to the next node’s input ports. Each connection represents a data set.

B The ICETree node is the main operator that processes all the data that flows into it. Nodes must be connected to it to be evaluated.


C Execution flows sequentially from top to bottom along the input ports of the ICETree node (and any other type of Execute node).

Because the nodes are evaluated in order, it matters where you plug them in. Sometimes one operation requires another to be done first so that it can be evaluated properly.

D Nodes that are connected to an Emit node’s Execute on Emit port are applied only to new points that are generated on the current frame. They are not applied to all particles on every frame.

E Nodes that are connected to the root node are executed on every frame. You can control which data gets set on which elements by using If and Filter nodes in the upstream branches.

The simulation framework resets every particle’s force to 0 at the end of each frame, so forces must be reapplied at every frame, which is why the Add Forces node is plugged into the ICETree node and not the Emit node.

F The Simulate Particles node is the “standard” particles node that updates the position and velocity of each particle at each frame based on mass and force.

You could use the Simulate Rigid Bodies node instead to make particles into rigid bodies. Particles can then collide with each other and with other objects that are set as obstacles.

You do not need to include a simulation node in your tree—if you prefer, you can set point positions directly.
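The per-frame behavior described above — forces reset and reapplied every frame, then a Simulate Particles-style update of velocity and position from mass and force — can be sketched in plain Python. This is a conceptual sketch only, not Softimage's API; the `Particle` class, `step` function, and fixed time step are illustrative assumptions:

```python
# Conceptual sketch of a Simulate Particles-style update step.
# Not Softimage code: Particle, step, and dt are illustrative names.

class Particle:
    def __init__(self, position, velocity, mass=1.0):
        self.position = position      # [x, y, z]
        self.velocity = velocity
        self.mass = mass
        self.force = [0.0, 0.0, 0.0]  # reset to zero at the end of each frame

def step(particles, forces, dt=1.0 / 30.0):
    """Advance one frame: accumulate forces, then integrate."""
    for p in particles:
        # Forces were zeroed last frame, so they must be reapplied now
        # (this is why Add Forces plugs into the ICETree node, not the Emit node).
        for f in forces:
            p.force = [a + b for a, b in zip(p.force, f)]
        accel = [c / p.mass for c in p.force]
        p.velocity = [v + a * dt for v, a in zip(p.velocity, accel)]
        p.position = [x + v * dt for x, v in zip(p.position, p.velocity)]
        p.force = [0.0, 0.0, 0.0]     # the simulation framework resets the force

p = Particle([0.0, 0.0, 0.0], [0.0, 0.0, 0.0])
gravity = [0.0, -9.8, 0.0]
step([p], [gravity])                  # after one frame, p has started to fall
```

Because each call to `step` starts from the previous frame's positions and velocities, the result of any frame depends on the frame before it — the defining property of a simulation.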


ICE Simulations

As with animation, a simulation calculates the way in which an object changes over time. However, with a simulation, the result of the current frame depends on the result of the previous frame.

With ICE, you can create both particle and deformation simulations.

• You can emit and change particles in a point cloud for effects such as cigarette smoke curling as it rises, leaves falling lazily to the ground, vines growing up out of the ground, or even crowds of people milling about in the street.

• You can deform various geometry types, including polygon meshes, NURBS surfaces, curves, lattices, and point clouds, to create effects such as turbulent ocean waves, gentle ripples on a pond, or ribbons twisting in the wind.

ICE snow particles fly from the point of impact of the boulder with the snow on the hill.

An ICE deformation also occurs on the hill as the boulder rolls down it, crushing the snow as it goes.

The point cloud’s simulated ICE tree emits the snow particles and makes them move.

A simulated ICE tree also exists for the polygon mesh hill’s deformation effect.


Simulations and the Construction Regions

An ICETree node can be either simulated or not: the only difference between the two is the ICETree operator’s position in the object’s construction stack.

When you create a simulated ICETree node, the Simulation and Post-Simulation regions are created in the object’s construction stack, and the ICETree operator is placed in the Simulation region.

Operators in the Simulation region calculate the result of the current frame based on the previous frame rather than on the construction regions that are below it. This is true not only for ICE trees, but for all operators in the Simulation region. For example, if you apply a non-ICE Twist deformation with a small Angle value in the Simulation region and play back the scene, the object becomes progressively more twisted.

Operators in the Post-Simulation region are applied on top of the simulation. You could use the Post-Simulation region to apply a deformation, such as a lattice, on top of a particle simulation.

When the simulation is not active, the operators in the Simulation region are skipped. On the first frame that the simulation is active, the operators below the Simulation region are evaluated to define the default initial state but the operators in the Simulation region are not evaluated—this means that if you are emitting particles, for example, they will appear on the second frame of the simulation. While the simulation is active, the operators below the Simulation region are not re-evaluated.

You can turn a simulated tree into a non-simulated one by moving it to another region, like Modeling, and vice versa. However, remember that the lower regions are not re-evaluated when the simulation is active if the Simulation region exists in an object’s construction stack. To fix this, you can select and delete the Simulation region marker from the construction operator stack. Both the Simulation and Post-Simulation region markers are removed if either one is deleted, but operators in these regions are not removed and can be moved to the desired regions afterward.

The Simulation Environment

A simulation environment is automatically created when a simulated ICETree node is created. This simulation environment houses the Simulation Time Control, the cache files, and any non-ICE forces used in the simulation.

The Simulation Time Control property is where you set the frame range during which the simulation is active. It’s also where you set the Play Mode, which controls how the simulation plays back: Live, Standard, or Interactive.

To play the simulation, use the standard playback controls below the timeline to play, scrub, or jog forward. Since simulations depend on the previous frame, the viewports do not update if you play, scrub, or jog backwards unless the simulation has been cached. If you jump to a later frame, the intervening frames are calculated in the background.

Setting the ICE Simulation’s Initial State

By default, the initial state of a simulation is the result of the operators in the construction regions that are below the Simulation region on the first frame that the simulation is active.

However, with simulations you often need to have a certain state be the first frame of the simulation, such as a candle already burning or rigid bodies already settled. You can select any frame in an existing simulation and use that as the initial state by choosing ICE > Edit > Set Initial State from the Simulate toolbar.


Caching ICE Simulations

Much of the work in creating a convincing simulation is the process of trial and error. Caching can help you try out different combinations of settings until you find the right effect. Caching stores the current simulation frames into a file that you can play back using the ICE tree, the animation mixer, or simply the playback controls and the timeline.

Once cache files are in the animation mixer, you can scale, trim, cycle (loop), and blend them in the same way as action clips.

There are three file formats from which you can choose to create cache files: the default ICECache file format, the PC2 file format, and the Custom file format (if you create your own custom plug-in for caching).

There are three ways of caching ICE simulations:

A Use the Tools > Plot > Write Geometry Cache command on the Animate toolbar to plot any type of simulation (except hair) or animation into cache files. Then select an object and load the cache files on it with the Plot > Load Geometry Cache command. This brings them into the animation mixer.

B Use the Caching option in the Simulation Time Control to cache the simulation frames from any simulated object into an action source, which you can then bring into the animation mixer.

C Use the Cache on File node in the ICE tree to write the simulation or animation data stored on an ICE object to a cache file, which you can bring into the animation mixer. You can also read the cache data with this node.

A

B

C


Forces and ICE Simulations

In the ICE tree, you can make simulated particles and deformed objects move according to different types of forces. Each simulated object can have multiple forces applied to it.

You can use either of these types of forces in an ICE tree:

• The forces that are available from the Get > Primitive > Forces menu on any toolbar (see Making Things Move with Forces on page 225).

• The ICE forces that are available as compounds in the ICE tree view’s preset manager or Nodes menu.

You can also create your own force compounds using the nodes found within ICE.

The main ICE force is the Add Forces compound, which is a hub for all the forces in your ICE tree. It adds up the effect of all forces that are plugged into it, then outputs a single force (vector). The order in which the forces are plugged into the Add Forces compound is not important.

If nothing is plugged into the Add Forces compound, you can use it to set a simple directional force on each axis.
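Conceptually, Add Forces just performs vector addition, which is why connection order cannot change the result — addition is commutative. A generic Python sketch (illustrative only; `add_forces` is not a Softimage function):

```python
def add_forces(*forces):
    """Sum any number of 3D force vectors into one output vector."""
    total = [0.0, 0.0, 0.0]
    for f in forces:
        total = [t + c for t, c in zip(total, f)]
    return total

gravity = [0.0, -9.0, 0.0]
wind = [2.0, 0.0, 0.5]
drag = [-0.5, 0.0, 0.0]

# Plug-in order doesn't matter:
assert add_forces(gravity, wind, drag) == add_forces(drag, gravity, wind)
```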

ICE Forces

A Gravity applies a force that defines an acceleration over time. To get the correct gravitational behavior from objects or particles, their size must be taken into consideration.

B The Surface force attracts particles/objects to or repels them from an object’s surface. While this force is similar to creating goals for particles, this force keeps the particles moving around (“swarming”) the surface object instead of stopping once they reach the goal.

C The Wind is a directional force with velocity and strength. It generates a force that speeds up particles or objects to a target velocity.

D The Null Controller force uses a null to attract or repel particles/objects, much like how particles move toward or away from a goal object. Changing the icon shape of the null (to something like Rings, Square, or Circle) changes the behavior of this force.

E The Neighboring Particles force attracts particles to each other when they get within a certain range, but there is no friction between the particles so they don’t stay clumped together—they keep moving.

F The Drag force opposes the movement of simulated objects, as if they were in a fluid.

G The Coagulate force attracts points toward their neighbors to form clumps. Once the points get within a certain range of each other, the friction (drag) slows them down.

H The Point force attracts particles/objects to or repels them from a position in space that you define.



Types of ICE Forces


ICE Deformations

Any ICE tree that modifies point positions on an object without adding or deleting points can be considered a deformation. With ICE, you can deform various geometry types, including polygon meshes, NURBS surfaces, curves, lattices, and point clouds. However, you cannot add or remove components on any geometry type except point clouds.

A deformer works by getting current point positions, modifying them based on other variables, then setting new positions. This means that you can create your own custom deformers with ICE.
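That get–modify–set pattern is easy to sketch outside of ICE. Here is a minimal push-style deformer in generic Python (illustrative only; in an ICE tree the equivalent would be a Get Data node, math nodes, and a Set Data node):

```python
def push_deform(positions, normals, amplitude):
    """Get current point positions, offset each one along its own normal,
    and return the new positions (the classic push deformer)."""
    return [
        [p + amplitude * n for p, n in zip(pos, nrm)]
        for pos, nrm in zip(positions, normals)
    ]

points = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
normals = [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]
pushed = push_deform(points, normals, 0.5)
# Each point moves 0.5 units along its own normal (here, +Y).
```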

You can create three types of deformations with ICE: simulated, animated, and non-time based.

The snow on the polygon mesh hill crushes under the weight of the boulder as it rolls down the hill.

The simulated ICE tree for the polygon mesh hill’s deformation effect. A Bulge operation is used along with turbulence.


Simulated Deformations

To create a simulated deformation in ICE, you need to use a Simulated ICETree node. You can then change the object point positions as you like with any type of deformer, including one of your own design.

As an example, the Footprints compound creates a simple deformation. It lowers the points of an object where the surface of another geometric object (the deformer) is below them in the object’s local Y axis. The points stay deformed during the simulation, so you can move the deformer to create more indentations. When you return to the first frame of the simulation, the geometry returns to its initial undeformed state.

Time-based, Non-simulated Deformations

You can also use ICE to create deformations that are time-based, but not simulated in that they are not in the Simulation region of the construction stack and therefore do not depend on the previous frame’s point positions.

One way to do this is to simply animate the input port values of the ICE tree. Another way is to include time-dependent nodes in the ICE tree, such as a Turbulence node. This node creates a coherent noise pattern that varies continuously in space, as well as optionally in time.

Here, the Turbulence node is used to set the point positions in Y. Space Frequency was set differently in X and Z, resulting in long, thin ripples.

There are also several Turbulize compounds based on this node, but designed to work with specific situations. You can find them in the preset manager.
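The ripple effect described above can be approximated with any coherent pattern that varies in space. This generic sketch uses a sine-based stand-in for the Turbulence node — ICE's node uses true noise, so the function below is only an illustration — with a different space frequency in X and Z to produce long, thin ripples:

```python
import math

def ripple_y(x, z, freq_x=2.0, freq_z=0.25, amplitude=0.3):
    """Set a point's Y from its X/Z position. Unequal frequencies
    stretch the pattern into long, thin ripples along Z."""
    return amplitude * math.sin(freq_x * x) * math.cos(freq_z * z)

# Displace a row of points: Y varies quickly in X, slowly in Z.
heights = [ripple_y(x * 0.5, 0.0) for x in range(5)]
```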

1 Select the geometric object to be deformed and choose Deform > Footprints (ICE) from the Model, Animate, or Simulate toolbar. This creates an ICE tree for this object.

Alternatively, you can get the Footprints compound from the preset manager and set up this tree yourself.

2 Pick the geometric object to act as the deformer. In this case, it’s the infamous foot!

3 Play the scene to run the simulation, then move the deformer to create indentations in the object.



Non–time-based Deformations

You can create deformations that are not time-based but instead depend on the position of deformer objects or other factors to modify point positions. The deformation can then be controlled by animating the deformers in any way.

The following example is a variation of the Push deformation that uses the proximity of a null to displace points along their normals.


Building ICE Trees

ICE allows you to create operators by building a network of nodes called an ICE tree. The ICE tree makes it easy to build up an effect by connecting pieces of data together.

The real work in creating an ICE tree, however, is finding out the type of data you can use and then figuring out how to connect that data together to achieve the effect you want.

Using the compounds that come with Softimage can get you pretty far for some effects, but you might need to work at the base node level at some point. While this isn’t rocket science, it’s also not trivial. The level at which you get into tree building depends on what you want or need to do, as well as how comfortable you are with math and programming concepts.

Overview of How to Create an ICE Tree

This is a basic workflow for creating ICE trees.

1 Select the geometric object to which you want to apply an ICE tree.

2 Display the ICE tree view by pressing Alt+9.

Click the Update button in the ICE tree view to show the selection. The view will be empty if there’s no ICE tree on the object yet.

3 Create an ICE tree or a simulated ICE tree (for particle or deformation simulations) by choosing Create > ICE Tree or Simulated ICE Tree. This creates the ICETree node.

4 Add nodes to the workspace in a variety of ways, such as by dragging them from the preset manager into the ICE tree workspace or by choosing them from the Nodes menu.

You can also get data from a scene element. An easy way to do this is to select the object and press F3 so that a floating explorer opens, then drag the emitter’s name from there into the ICE tree workspace. This adds a pre-filled Get Data node for that object.

5 Connect the nodes together to achieve the effect you want. This is where all the thinking and work takes place!

You can also open a node’s property editor to edit parameters that are not (or cannot be) driven by connections.

Right-click on a node, on a port, or on the background for various options.

6 If the tree is a particle simulation, add either the Simulate Particles or Simulate Rigid Bodies node to make sure that it updates properly at each frame.

7 You can create a compound node and export it for reuse in other trees and scenes.




The Way Trees Work

Each connection in a tree represents a set of data, with one value per element of the set. For example, if you get Self.PointPosition, the set consists of one 3D vector per point of the “Self” object (the object with the ICE tree).

When tracing the logic and connections of an ICE tree, you can think of the nodes as working on all members of the data set at once, or you can concentrate on what happens to a single representative of that set.

When you combine a single constant value (or something else in the singleton context) with a data set, it gets combined with every member of the set. For example, you can add the same value to all members of a set, or you can multiply them all by the same number, and so on.

When you combine two data sets, the corresponding members of the set are combined. For example, if you add Self.PointPosition and Self.PointNormal as you might do in a Push-type deformation, then each point’s position vector is added to its own normal vector. This is why component contexts must be the same when you combine them—there must be the same number of elements and there must be a correspondence between the members.

A data set is not an array, or at least, it’s not exposed as an array in ICE. Traditional programming concepts related to arrays do not apply. You do not need to use the nodes in the Array category to work with data sets (unless your data set actually contains arrays, for example Self.PointNeighbors, and even then you can connect directly to many nodes without worrying too much about the fact that the data consists of arrays). You do not need to iterate on the members of the data set—just plug the data into another node, such as a Math node, to process the data.
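The two combination rules above — a singleton combines with every member of a set, and two sets combine member-by-member — can be sketched generically. In ICE this happens implicitly, with no loops in the tree; the plain Python below only illustrates the semantics (values are simplified to one scalar per point):

```python
def combine(a, b, op):
    """Combine per-element, broadcasting a singleton over a data set."""
    if not isinstance(a, list):           # singleton + set
        return [op(a, y) for y in b]
    if not isinstance(b, list):           # set + singleton
        return [op(x, b) for x in a]
    # Set + set: contexts must match (same element count, corresponding members).
    assert len(a) == len(b), "contexts must be compatible"
    return [op(x, y) for x, y in zip(a, b)]

point_pos = [1.0, 2.0, 3.0]               # one value per point
point_nrm = [0.5, 0.5, 0.5]

# Set + set: each point's position combines with its own normal.
pushed = combine(point_pos, point_nrm, lambda x, y: x + y)

# Singleton + set: one constant applies to every member.
scaled = combine(point_pos, 2.0, lambda x, y: x * y)
```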

Connecting Nodes

In general, you connect ICE nodes by clicking and dragging an output port from the right of one node onto the input port on the left of another node. You can connect the same output to as many inputs as you want. Data flows along the connection from the first node and is processed by the second node.

Some nodes, such as Execute, Add, Multiply, and so on, allow an unlimited number of input connections. These nodes have special virtual ports identified as “New (port name)”. You can connect to the “New” port to create a new port, or right-click on an existing port to manually insert and remove ports.

There are some special factors that determine whether you can connect two ports together:

• The type of the data, as indicated by the port colors.

• The context of the data.

• The structure of the data: either single or array (ordered set).

When you connect to an input port, any existing animation on the port’s value is lost.


Data Types

The data type defines the kind of values that a port can pass or accept, such as Boolean, integer, scalar or vector. The data type is identified by the color of the port.

You cannot connect two ports if their data types are incompatible. However, you can convert between many data types using the different Conversion nodes.

Here are the types of data you might see:

• Polymorphic: Accepts a variety of data types. See Polymorphic Ports on page 259.

• Boolean: A Boolean value: True or False.

• Integer: A positive or negative number without decimal fractions, for example, 7, –2, or 0.

• Scalar: A real number represented as a decimal value, for example, 3.14. Internally this is a single-precision float value.

• 2D Vector: A two-dimensional vector [x, y] whose entries are scalars, for example, a UV coordinate.

• 3D Vector: A three-dimensional vector [x, y, z] whose entries are scalars, for example, a position, velocity, or force.

• 4D Vector: A four-dimensional vector [w, x, y, z] whose entries are scalars.

• Quaternion: A quaternion [x, y, z, w]. Quaternions are usually used to represent an orientation. They can be easily blended and interpolated, and help address gimbal-lock problems when dealing with animated rotations.

• Rotation: A rotation represented by an axis vector [x, y, z] and an angle in degrees.

• 3x3 Matrix: A 3-by-3 matrix whose entries are real numbers. 3x3 matrices are often used to represent rotation and scaling.

• 4x4 Matrix: A 4-by-4 matrix whose entries are real numbers. 4x4 matrices are often used to represent transformations (scaling, rotation, and translation).

• Shape: A primitive geometrical shape, or a reference to the shape of an object in the scene. This data type is used to determine the shape of particles.

• Geometry: A reference to a geometric object in the scene, such as a polygon mesh, NURBS curve, NURBS surface, or point cloud. You can sample the surface of a geometry to generate surface locations for emitting particles.

• Surface Location: A location on the surface of a geometric object. The locator is “glued” to the surface so that even if the object transforms and deforms, the locator moves with the object and stays in the same relative position.

• Execution: Not a data type in the conventional sense. You connect Execution ports, such as the output of a Set Data node, into an Execute or root node to control the flow of execution in the tree.

• Reference: Also not a data type in the conventional sense. This is a reference to an object, parameter, or attribute in the scene, expressed as a character string. You can daisy-chain these as described in Daisy-chaining References on page 261.


Polymorphic Ports

Polymorphic ports can accept several different data types. For example, the Add node can be used to add together two or more integers, or two or more scalars, or two or more vectors, and so on.

Once you connect a value to a polymorphic port, its port type becomes resolved. Other input and output ports on the same node and on connected nodes may also become resolved and only accept specific data types. This reflects the fact that, for example, you cannot add an integer to a vector.

Even after a port’s type has been resolved, you can still change it by replacing the connection with a different data type. However, this works only if the port is not resolved by other connections in the tree.

If a port’s type is unresolved, you cannot set values in its property editor. Once it is resolved, the appropriate controls appear in the property editor. Different data types use different controls: for example, checkboxes for Booleans, sliders for scalars, and so on.

While polymorphic ports accept several data types, they don’t necessarily accept all types of connection. For example, the ports of a Pass Through node accept any type of value, but it doesn’t make sense to use a Multiply by Scalar node with a Boolean value.

Data Context

In ICE, attributes are always associated with elements, either objects or one of their component types such as points, polygons, edges, and so on. For example, “sphere.PointNormal” consists of one 3D vector for each point of the object called “sphere”; in other words, the context is per point of sphere.

For two ports to be connectable, their contexts must be compatible. Context is determined by two factors:

• The type of element associated with the data: object or a specific component type (points, polygons, etc.).

• The object that owns the components.

The data context gets propagated through node connections in the same way as the data types of polymorphic nodes.
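The compatibility rules described above can be sketched in a few lines of Python. This is a conceptual model only (the `Context` class and `compatible` function are invented for illustration, not part of Softimage): singleton data mixes with almost anything, while per-component data must match both the component type and the owning object.

```python
# Illustrative sketch of ICE context compatibility (NOT the Softimage API).
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    element: str  # "singleton", "point", "edge", "polygon", "sample"
    owner: str    # name of the owning object, e.g. "sphere"; "" for scene data

def compatible(a, b):
    # Singleton data is usually compatible with any other context...
    if a.element == "singleton" or b.element == "singleton":
        return True
    # ...otherwise both the component type and the owning object must match.
    return a.element == b.element and a.owner == b.owner

per_point_sphere = Context("point", "sphere")
object_position = Context("singleton", "")
per_point_grid = Context("point", "grid")
```

For example, `compatible(object_position, per_point_sphere)` holds (you can add an object's position to another object's point positions), while `compatible(per_point_sphere, per_point_grid)` does not.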

Before anything is connected, the Add node’s ports are unresolved (black).

Once a node is connected to Value1, then Value2 and Result become resolved. In this case, they are yellow for 3D vectors.

Before any connection, the Add node’s property editor is blank.

After connection, controls appear for Value2. There are no controls for Value1 because it is being driven by the connection.

Page 260: XSI guia basica

Section 14 • ICE: The Interactive Creative Environment

260 • Softimage

The different types of context are summarized in the following table:

Context Description

Singleton A data set containing exactly one value: for example, an object’s position, a bone’s length, or a mesh’s volume. The singleton context includes data that is associated directly with objects rather than their components, as well as scene data such as the current time and the frame rate.

Singleton data is always compatible with other singleton data. Singleton data is usually compatible with other contexts as well; for example, you can add the position of one object to the point positions of another object.

Point A data set containing one value for each point of a geometric object (point cloud, polygon mesh, NURBS surface, curve, lattice, etc.): for example, point positions or envelope weight assignments.

Line A data set containing one value for each edge, subcurve, or surface boundary: for example, edge lengths.

Face A data set containing one value for each polygon or subsurface: for example, polygon normals or polygon areas.

Sample A data set containing one value for each texture sample of a geometry. A sample is usually a polygon node on a polygon mesh, but there can also be samples on NURBS surfaces and curves.

Node In some cases, the context is bound to a node in the ICE tree. For example, the Generate Sample Set node generates a set of random point locators on the surface of a geometric object. The size of the set of point locators does not necessarily match the number of any kind of element in the scene; it is controlled by the rate parameter of the Generate Sample Set node.

Node-bound contexts are typically incompatible with each other. For example, if you generate two sets of locations on the same geometry, they are bound to different nodes and cannot be combined.

Specifying Scene References

Certain nodes can refer to elements in the scene using strings as references. For example, references can specify things like:

• Attributes to get or set.

• Point clouds to which to add particles.

• Geometric objects to query for closest points, etc.

References are resolved by name. Character strings are not case-sensitive. Object, property, and attribute names are separated by a period (.), for example, “grid.PointPosition” or “sphere.cls.WeightMapCls.Weight_Map.Weights”.

You can specify a scene reference by using controls in a node’s property editor to enter, explore for, or pick elements in the scene. Alternatively, you can right-click on a node and choose Explore for Port Data.

A Type the reference. Use periods to separate objects, properties, and attributes. Strings are not case-sensitive. Use the token “self” to refer to the object on which the tree exists. You can also use the tokens “this” (same as “self”) and “this_model” (the model that contains the object with the tree).

B Click Explore, expand the tree, and choose an element. The tree shows the attributes that you can get from the current element name path or location. This list includes predefined attributes and any custom attributes (including those defined in unconnected Set Data nodes).

C Click Pick and then pick an element from a viewport, explorer, or schematic view.

D You can combine methods A and B: for example, type “self”, click Explore, and then choose an attribute such as PointPosition.

Page 261: XSI guia basica


Daisy-chaining References

You can use the In Name and Out Name ports to connect references on Get Data and other nodes in sequence, like a daisy chain. For example, you can get “sphere” and then use that to get “sphere.PointPosition”, “sphere.PointNormal”, and so on. If you want to change “sphere” to “torus” later on, there’s only one node that needs to be changed. This is particularly useful when creating compounds, because you only need to expose the leftmost reference.

References that are connected in this way are concatenated, so for example Get Data (“Self”) plugged into Get Data (“PointPosition”) results in “Self.PointPosition”. You do not need to worry about periods at the beginning or end of the references—periods are automatically added or removed as necessary.
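The period handling described above can be sketched as a tiny string-joining routine. This is a conceptual illustration of the behavior, not Softimage's actual implementation; the `chain` function name is invented.

```python
# Illustrative sketch of how daisy-chained references concatenate
# (conceptual; NOT the actual Softimage implementation).
def chain(in_name, out_name):
    # Periods at the ends of either part are added or removed as needed,
    # so "Self." joined with ".PointPosition" still yields "Self.PointPosition".
    left = in_name.strip(".")
    right = out_name.strip(".")
    return ".".join(part for part in (left, right) if part)

assert chain("Self", "PointPosition") == "Self.PointPosition"
assert chain("sphere.", ".PointNormal") == "sphere.PointNormal"
```

Note how changing only the leftmost link (for example, `"sphere"` to `"torus"`) changes every concatenated reference downstream, which is exactly why daisy-chaining makes compounds easy to retarget.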

When a node has a reference connected to its In Name port, the Explore button and the Explore for Port Data command both start from the current path. For example, if you click Explore in the property editor of a leaf (leftmost) Get Data node, you can select anything starting from the scene root. However, if a Get Data node has a reference to a geometric object connected to its In Name port, you can select properties and attributes on that object.

Tokens in References

The token “self” always refers to the object on which the ICE tree is directly applied. This token allows you to create trees that are easily reusable because they don’t depend on specific object names.

Other tokens that you can use are “this” (same as “self”) and “this_model” (refers to the model that contains the object with the ICE tree).

If you have built an ICE tree using specific object names and want to make it more generic so that you can make a compound to use on other objects, you can automatically replace the object name with “Self” using User Tools > Replace Object Name with Self (Nested Compounds).

Resolving Scene References

Scene references are automatically maintained as you modify the scene.

• If you have an object called “sphere” and you rename it to “ball”, references to “sphere” are automatically updated to “ball”.

• If you delete the object named “sphere” instead, any references to it are invalid and the affected nodes become red. If you later add another object named “sphere”, or rename an existing object to “sphere”, then the references become resolved again.

• If you add the object named “sphere” to a model named “Fluffy”, references to “sphere” are automatically updated to “Fluffy.sphere”. If the ICE tree is on an object in the Fluffy model, the references are updated to “this_model.sphere” instead.

Page 262: XSI guia basica


Getting and Setting Data in ICE Trees

Almost every ICE tree involves getting data, performing calculations, and then setting data. You can get and set any data using Get Data, Set Data, and other nodes found in the Data Access category of the Tools tab in the preset manager. There are also some compounds for getting and setting specific data on the Task > Particles or Deformations tabs.

You can get any data in the scene. Once you have a Get Data node in your tree, you can specify or modify the reference.

You can set only certain data:

• Some intrinsic attributes, such as PointPosition or EdgeCrease. Other attributes are read-only, like PointNormal and PolygonArea.

• Any dynamic attribute, including predefined ones like Force, Velocity, and so on.

• Any property in Softimage except for kinematics.

Getting Data

You get data using Get Data nodes. You can add a Get Data node to your scene by dragging it from the preset manager (it’s in the Data Access category of the Tools tab) or by selecting it from the Nodes > Data Access menu. You can also get a specific object or other element by dragging its name from any explorer view. Once you have a Get Data node in your tree, you can specify or modify the reference as described in Specifying Scene References on page 260.

You can get data by explicit string references or at locations.

• When you get data by an explicit string reference, you get a set of values with one value for each component. For example, if you get “sphere.PointNormal”, you get one 3D vector for each point of the sphere object; in other words, the context is per point of sphere.

• When you get data at a location, the context depends on the context of the set of locations that is connected to the Source port of the Get Data node. For example, if you start by getting “grid.PointPosition”, then use that to get the closest location on sphere, and in turn use that to get PointNormal, the data consists of normals on the sphere but the context is per point of the grid. If instead you started by getting “grid.PolygonPosition”, the context would be per polygon of the grid.

Getting Data at Locations

To get data at a location, plug any location data into a Get Data node’s Source port. When a location is plugged into the Source port of a Get Data node in this way, its Explore button shows only the attributes that are available at that location.

Page 263: XSI guia basica


You can use this technique to get data from other objects using geometry queries like Get Closest Location nodes. For example, you can get PointNormal at the closest location on a sphere.

If an attribute is stored on points, you can still get it at an arbitrary location. The value is interpolated among the neighboring point values.
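The interpolation can be pictured as a weighted average over the location's neighboring points, with weights that sum to one (barycentric-style). The sketch below is conceptual; the function name and data layout are invented for illustration and are not Softimage's API.

```python
# Illustrative sketch: interpolating a per-point attribute at an arbitrary
# surface location (conceptual; NOT the Softimage API).
def interpolate_at_location(point_values, neighbor_ids, weights):
    """Weighted average of the values stored on the neighboring points."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(point_values[i] * w for i, w in zip(neighbor_ids, weights))

# A location inside a triangle whose corner points carry scalar values:
values = {0: 0.0, 1: 1.0, 2: 0.5}
print(interpolate_at_location(values, [0, 1, 2], [0.25, 0.25, 0.5]))  # 0.5
```

Getting PointPosition at a location is the same idea applied to 3D vectors, which is why it yields an ordinary position you can use elsewhere in the tree.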

You can convert a location on a geometry into a position (3D vector) by getting the PointPosition attribute at that location.

Reusing Get Data Nodes

You can connect the same Get Data node to as many nodes as you want if you need the same data elsewhere in the tree. However, if the data has changed in between, the Get Data node returns the new data later in the tree.

The Get Self.Foo node returns different values to Stuff and More Stuff because Self.Foo was set in between.

Page 264: XSI guia basica


Setting Data

To set data, use the Set Data compound. You can find this node in the Data Access category of the Tools tab in the preset manager, or on the Nodes > Data Access menu. Simply specify the desired reference and value, either through connections or directly in the property editor. See Specifying Scene References on page 260.

Not all attributes can be set. Read-only attributes like NbPoints are not shown in the Set Data node’s explorer.

You can set data using an explicit string reference only. You cannot set data at locations. To set an attribute, you must be in the appropriate context. For example, to set PointPosition, you must be in the per point context of the appropriate object.

If data has been set for some but not all components in a data set, uninitialized components have default values: zero for most data types, false for Booleans, identity for matrices, black for color, etc.
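These fallback defaults can be summarized as a simple per-type lookup. The sketch below is conceptual (the type names and table are illustrative, not Softimage's; the color alpha of 1 is an assumption).

```python
# Illustrative sketch of per-type defaults for uninitialized components
# (conceptual; type names are NOT Softimage's).
DEFAULTS = {
    "integer": 0,
    "scalar": 0.0,
    "boolean": False,
    "vector3": (0.0, 0.0, 0.0),
    "color": (0.0, 0.0, 0.0, 1.0),  # black (alpha of 1 is an assumption here)
    "matrix44": tuple(tuple(1.0 if r == c else 0.0 for c in range(4))
                      for r in range(4)),  # identity matrix
}

def value_for(component_values, index, data_type):
    # Components that were never set fall back to the type's default value.
    return component_values.get(index, DEFAULTS[data_type])
```

For example, reading a scalar attribute on a point that was never written returns `0.0` rather than an error.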

Setting Custom Attributes

To create a custom attribute, simply use a Set Data node and make up a new attribute name. Don’t forget to include the full reference including the object name, for example, “PointCloud.my_custom_attribute”.

You can use custom attributes to store any type of value, including locations. The context and data type of custom attributes are determined by the connected nodes. If the data type is undetermined, the Set Data node is in error (red) —you can use a node from the Constant category to force a specific data type. If the context is undetermined, it defaults to the object context. However, this context can be changed to a component context if you connect nodes that force a different context, as long as there are no conflicting constraints on the context.

Debugging ICE Trees

ICE includes some basic tools that help you identify and correct the different types of problems you may encounter when building ICE trees.

Structural Problems

Structural problems are caused by incompatible data types, contexts, or structures in the tree. Nodes that are in error because of structural problems are displayed in red, and other nodes in that branch that will not be evaluated because of the error are displayed in yellow. If you have red nodes, or if you cannot connect nodes that you think should be connectable, then your tree has structural problems.

Messages on ports and nodes help you identify structural problems:

• Hover the mouse pointer over a port to display a pop-up message showing the data types, context, and structure that the port supports: for example, “Array of 3D Vector per Point of PointCloud.pointcloud”.

• To see more detailed information about a port, right-click over a port or connection and then choose Log port type details. Information is logged to the history pane of the script editor.

• If a node is red (in error), hover the mouse pointer over it (not over a port) to see the first error message. To see all error messages, right-click over the node and choose Show Messages.

• When you drag an output port onto an incompatible input port, a pop-up message informs you of the conflict and shows the data types, contexts, and structure that are supported by the two ports.

Page 265: XSI guia basica


Logical Problems

If a tree is working but not doing what you think it should be doing, it may be that the values being passed to ports are not what you expect them to be.

You can display port values in the 3D views by right-clicking on a connection and choosing Show Values. There are several options for controlling the color, style, and placement of the information.

When port values are displayed, a V icon appears on the connection. Click the icon to change display properties, or right-click and choose Hide Values to remove the display.

Performance Problems

You can profile the performance of ICE trees by displaying execution times directly on nodes in the ICE tree viewer. This shows you which nodes take the most processing time, and lets you see where you can try to optimize the tree.

Displaying values on this connection.

A Start Performance Timers. Activates and deactivates performance logging. Typically, you activate this and then play back or advance frames.

B Reset Performance Timers. Clears the performance numbers. When you have made changes and want to start logging the new performance values, click this.

C Performance Highlight. Choose one:

• No Highlight. Displays nodes and ports normally.

• Time (Top Thread). Shows the performance of the worst thread per node. The number on the root ICETree node is still the total for the entire tree and its inputs.

• Time (All Threads). Shows the total performance of all threads per node.

D Update. You may need to click this to see new values.


Page 266: XSI guia basica


Adding a Comment or Two

When you’re building a tree, it’s immensely useful to write down notes about it as you go, especially when a tree grows many branches. You can easily do this by adding comments to individual nodes in a tree or to a group of nodes.

To add a comment to a single node, right-click on it and choose Create Comment. Enter the comment text and set its color.

To add a comment that is not connected to any specific node, use a Comment node.

To add a comment to a group of nodes, use a Group Comment node. To move the comment along with the node group, middle-click and drag in the comment area. Group Comment colors are visible in the bird’s-eye view, so they are a handy way of visually organizing your trees.

Page 267: XSI guia basica


ICE Compounds

Compounds are ICE nodes that are built from other nodes, which can be base nodes or even other compounds.

You can use compounds to simplify and organize your ICE trees to make them easier to read and understand, but the real advantage of compounds is that you can export them and reuse them in other ICE trees and scenes, as well as share them with other users.

Softimage includes many pre-built compounds for performing specific tasks. You can find these in the preset manager in the ICE tree view. These compounds are built from the same nodes that are also available in the preset manager. Inspecting the supplied compounds is a great way to see how ICE trees work. You can then edit these compounds to use them as a base for building your own effect.

Overview of How to Create and Use ICE Compounds

1 You can’t store the ICETree node in a compound, so insert an Execute node to merge all the root connections into a single output. To do this, right-click the ICETree node and choose Insert Execute Node.

2 Select all the nodes you want to save in your compound. To keep the compound generic, you should leave out object-specific nodes (such as particle emitter data) so that you can apply this effect to any appropriate object in any scene.

3 Convert the selected subtree into a compound: choose Compounds > Create Compound from the ICE tree toolbar.

4 Edit the compound—see Editing Compounds on page 268.

5 Export the compound—see Exporting Compounds on page 269.

6 You can modify the compound and re-export it—see Versioning Compounds on page 270.


Page 268: XSI guia basica


Editing Compounds

When you edit a compound, you can change the compound name and expose different ports of the nodes inside so that they are easily accessible from your compound later on.


Page 269: XSI guia basica


Parts of the Compound Editor

Exporting Compounds

Compounds are XML-based files that contain all the connections and data of all the nodes in the tree. They are saved as .xsicompound files.

Exporting a compound allows you to use it in other trees and scenes, and to share it with other users, for example via Softimage|NET.

To export a compound, right-click on it (not over a port), choose Export Compound, and then specify a file name and location.

You can then bring your exported compounds into an ICE tree in the usual ways: from the preset manager, from the Nodes menu, by using Compounds > Import Compound, or by dragging the file from a Softimage file browser or folder window.

A Opens the compound editor. Move the mouse over a compound node, and click the e icon that pops up. Or right-click on a compound node (not over a port) and choose Edit Compound.

B Compound name. To change the name, double-click and type a new one.

C Category is used to organize exported compounds on the Tool tab of the preset manager. To change the category, double-click and type a different name. To create a new category, simply enter a new category name. The new category is automatically added to the preset manager and Nodes menu when the compound is exported. If a compound has no category, it does not appear on the Tool tab of the preset manager.

D Tasks are used to further organize exported compounds by workflow on the Task tab of the preset manager. Double-click to enter or change a comma-separated list of tasks. Use a slash to separate tasks and subtasks, for example, “task/subtask,task1/subtask1”. To create a new task or subtask, simply enter new names. New tasks and subtasks are automatically added to the preset manager when the compound is exported. If a compound has no task, it does not appear on the Task tab of the preset manager.

E Modify the tree by adding, editing, and connecting nodes in the usual way.

F Expand or collapse the list of exposed input parameters (shown expanded). When the list is collapsed, you can display the name of a port by hovering the mouse pointer over its connection.

G Expose a new input port or parameter. Drag this icon onto a node’s input. Unlike ports, parameters don’t display a circle next to their labels but you can still drag this icon onto them to expose parameters such as references.

H Exposed input ports and parameters. Double-click on a port’s name to change it while the list is expanded. Drag the circle icon onto another node to share the input. Right-click on a specific port to change the order, remove it, or set properties.

I Expand or collapse the list of exposed output parameters (shown collapsed). Here again, when the list is collapsed, you can display the name of a port by hovering the mouse pointer over its connection.

J Expose a new output port. Drag an output port from any node onto the black circle. You can have as many output ports as you want.

K Exposed output ports. Double-click on a port’s name to change it while the list is expanded. Right-click on a specific port to change the order, remove it, or set properties. Not all properties apply to output ports.

L Exit and return to the parent tree.

If two or more compounds have the same name, Softimage logs a warning message telling you the locations of the version that will be used and the versions that will be ignored.

Page 270: XSI guia basica


Versioning Compounds

Softimage uses a built-in versioning system to manage updates to exported compounds. You should use this versioning system instead of renaming .xsicompound files manually; otherwise, you may end up with multiple compounds that share the same name and version. If this happens, Softimage warns you about the location of the file that will be used and the files that will be ignored.

The major and minor version numbers are stored in the .xsicompound file. Major version changes are for large functional changes, while minor version changes are for bug fixes and small adjustments.
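One plausible way such a scheme can resolve several same-named compound files is to prefer the highest (major, minor) pair and warn about the rest. The sketch below is a conceptual model only; the function, file names, and resolution policy are assumptions for illustration, not Softimage's actual loader.

```python
# Illustrative sketch of resolving same-named compounds by version
# (conceptual; NOT Softimage's actual loader or policy).
def pick_compound(candidates):
    """candidates: list of (path, major, minor). Highest version wins."""
    best = max(candidates, key=lambda c: (c[1], c[2]))
    # The remaining files would be ignored; a real tool would log a warning
    # listing them, as described above.
    ignored = [c[0] for c in candidates if c is not best]
    return best[0], ignored

used, ignored = pick_compound([
    ("user/Emit.1.0.xsicompound", 1, 0),       # hypothetical paths
    ("factory/Emit.1.2.xsicompound", 1, 2),
])
```

Comparing `(major, minor)` as a tuple keeps the ordering correct across major-version boundaries, e.g. 2.0 beats 1.9.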

If you modify a compound in an ICE tree and don’t export the new version, it is identified by an asterisk.

Compounds that already exist in a scene are not updated automatically even if new versions are available. You can update them individually, or by using the Compound Version Manager (Compounds > Compound Version Manager).

Page 271: XSI guia basica


Section 15

ICE Particles

ICE is a complete visual programming environment that allows you to create particle effects.

In the real world, you think of particles as being small pieces of matter such as dust, sea salt, water droplets, sand, smoke, or sparks from a fire. With ICE particles, you can create all these types of natural phenomena and so much more!

What you’ll find in this section ...

• Making ICE Particle Effects

• Particles that Bounce, Splash, Stick, Slide, and Flow

• Particle Goals

• Spawning New Particles

• Particle Strands

• Particle Instances

• ICE Particle States

• ICE Rigid Bodies

• ICE Particle Shaders

Page 272: XSI guia basica


Making ICE Particle Effects

In Softimage, ICE particles are simply points in a point cloud that are simulated using nodes in the ICE tree.

While that doesn’t sound too exciting, you can actually create any type of particle effect you want with them: you can make natural phenomena such as smoke, fire, and sparks. But you can also make objects or characters act like particles: rocks tumbling, glass pieces breaking, grass growing, or humans running about.

Creating ICE Particles

You can create ICE trees on a point cloud to create particle simulations. This point cloud can simply exist in the scene or it can have its points (particles) emitted from a scene element.

You can emit particles from polygon meshes and NURBS surfaces, from within object volumes, from curves, from nulls, from multiple objects and groups of objects, or even from any random position in global space.

ICE firework particles are emitted from different positions in space. When they reach a certain position, they explode into a new cloud of spawned particles.

The point cloud’s simulated ICE tree emits the particles and uses a state system to determine the condition under which the fireworks will explode and spawn a new cloud.

Page 273: XSI guia basica



Overview of ICE Particle Workflow


Page 274: XSI guia basica



Page 275: XSI guia basica


1 Create a point cloud or emit particles: The simplest way is to select one or more objects to be the particle emitter(s) and then choose ICE > Create > Emit Particles from Selection on the Simulate toolbar. This automatically creates a point cloud and sets up certain nodes in the ICE Tree for that point cloud.

You can also set up these nodes in the ICE tree from scratch.

2 Open the ICE tree view: press Alt+9 or choose ICE > Edit > Open ICE Tree on the Simulate toolbar to open it in a floating window.

A • The ICETree node is the main processing operator in an ICE tree. Because this is a particle simulation, the ICETree node type is simulated.

B • The disc is the particle emitter object. The Get Data node for it simply gets the disc’s object data so that it can be used in the ICE tree.

C • The Emit compound is responsible for emitting the particles and setting certain particle attributes (such as size, color, velocity, mass, shape, etc.) at emission time. At every frame, it adds points to the point cloud.

The Emit compounds are always plugged into the top of the ICETree node in a particle simulation because you need to emit the particles before anything else can happen to them.

D • The Simulate Particles node updates the position and velocity of each particle at each frame based on its mass, position, and velocity of the previous frame.

This node is usually plugged into the bottom of the ICETree node because it needs to take all information from the nodes that precede it and then use that information to update each particle at each frame.

3 Edit the Emit parameters: These define how the particles will look and act when they are emitted: set the particle rate, speed, orientation, direction, color, mass, etc.

4 Delete particles at their age limit: The Set Particle Age Limit compound determines how long the particle will live, then the Delete Particles at Age Limit compound does its job.

If you don’t put a limit on their age, the particles live for the duration of the simulation, which you may want for some effects.

5 Add forces to make the particles move. The Add Forces compound is a hub into which other forces can be connected. Here, only the Turbulence value is modifying the force, but you could easily add other forces.

6 Build the particle ICE tree: Plug in different nodes for different effects. Remember this:

• When you plug nodes into the ICETree node, their output gets evaluated at every frame. You want to do this if you want the particle data to be updated throughout the simulation, not just when the particles are emitted.

• When you plug nodes into any of the Emit compounds, their output is evaluated only once, upon particle emission. This means that data from this node won’t change the particles during the rest of the simulation.

• You can connect ports together only if their data matches in type and context.

7 Create a compound: This step is not necessary, but creating a compound of this particle effect lets you use it in other scenes or share it with others.

8 Render the particles as volumes using ICE particle shaders, or render particles as surfaces using Softimage surface shaders.

Page 276: XSI guia basica


Setting Up a Particle Emission From Scratch

You can create any type of particle emission by creating and connecting nodes yourself in a point cloud’s ICE Tree.


1 Create a point cloud by choosing Get > Primitive > Point Cloud > Empty Cloud (or any of the shapes) from any toolbar.

2 In the ICE tree view, create a Simulated ICE Tree node: from the menu bar of the ICE Tree, choose Create > Simulated ICE Tree.

3 Drag the emitter’s name from an explorer into the ICE Tree view to create a Get Data node for it. An easy way is to select the object and press F3 so that a floating explorer opens, then drag the emitter’s name from there into the ICE Tree.

4 Drag one of the Emit compounds from the preset manager into the ICE tree view.

5 Drag the Simulate Particles node from the preset manager into the ICE tree view.

Plug all the nodes together as shown here. You can then continue to build your ICE tree as you like.

Page 277: XSI guia basica


Particles that Bounce, Splash, Stick, Slide, and Flow

There are several compounds that let you control a particle’s motion and the way it interacts with object surfaces. These compounds are fairly complete within themselves, but you can also use them in conjunction with State systems as part of a larger effect. Within a State system, you can choose one of these compounds to create the effect that happens when a trigger compound’s value is reached: for example, new particles can be spawned when they collide with an obstacle.

When particles collide with an obstacle, the obstacle’s actual geometry is used as the collision shape, and the particle’s size is taken into account in the collision. However, if you’re using instances as the particle geometry, an approximating box (or sphere) is created around each instance: its actual shape is not used.

Bouncing

Using the Bounce Off Surface compound, you can make particles bounce off any number of obstacles upon collision. This is useful for creating ballistics, fragments, rain, or other debris bouncing off surfaces.

Sticking - and Letting Go

Using the Stick to Surface compound, you can make particles stick to an obstacle’s surface upon collision and remain there for the duration of their lifetime, such as paint being sprayed onto a surface. If the obstacle’s geometry is deform- or shape-animated, the particles will follow the shape of the changing surface.

If you want the particles to unstick, you can set up some condition that makes them fall off, such as the obstacle being at a certain angle or a force threshold being reached.

Making a Splash

The Emit Splash from Surface Collision compound lets you emit a splash of new particles upon impact with an obstacle in a collision. This can be useful for creating effects like dust puffing up as a foot steps on the ground, mud splashing up as a ball hits a mud puddle, or sparks flying as two metal objects collide.

Sliding and Dripping Off

Using the Slide on Surface compound, you can make particles cling to and then slide on an obstacle’s surface. Obstacles can be shape-animated and the particles will follow the shape of the changing surface.

You can set the conditions in which the particles will drip off the obstacle, such as the obstacle being at a certain angle or a force threshold being reached. This is useful for creating sweat, water condensation, or other liquid drops sliding down a surface and then dripping off.

Flowing Along a Curve

You can make particles flow along a curve using the appropriately-named Flow Along Curve compound. This is useful for when you need particles to follow a path or direction, such as a school of fish swimming and turning suddenly, blood cells flowing through arteries, or lava oozing in streams down a mountain.

Flowing Around an Object

You can make particles flow around an obstacle using the Flow Around Surface compound. This is useful for doing effects such as water flowing around a rock in a stream, or for doing crowds/flocking simulations where the characters need to move around an obstacle.

Page 278: XSI guia basica


(Figures: Bounce, Stick, Flow Along, Slide, Flow Around, Splash.)

Page 279: XSI guia basica


Particle Goals

When you create a goal for particles, the particles are attracted to it or repelled from it, similar to magnets. With goals, you can create a number of particle effects, such as drops of water forming into a puddle, paint being sprayed over a surface, or butterflies following the infamous ClubBot.

When a particle is born, it is assigned to a location on the goal object that you have defined, and it evolves towards this location throughout its life. This can be a random location on the goal, the location on the goal that is closest to the particle, or any location that you specify on the goal.

The particles try to reach the position and/or shape of the goal objects, even as the goal moves or its surface is deformed. When the particles reach the goal, their velocity decreases and they stop until the goal moves or is deformed again.

Goals are part of the overall particle simulation, which means that any particles that are progressing toward a goal can also react to any other forces that are applied to them. In fact, goals are a force on particles, similar to how an attraction force works.

Creating goals requires the Move Towards Goal compound. This compound lets you do two things: choose the location on the goal object to which the particles are attracted (or repelled) and define how the particles move toward the goal, such as their speed, acceleration, and alignment with the goal.
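The way particles settle onto a goal can be pictured as a damped attraction step: accelerate toward the goal, and bleed off velocity so the particle stops on arrival rather than orbiting. The sketch below is purely illustrative (the names and damping scheme are assumptions, not the Move Towards Goal compound's internals):

```python
def step_towards_goal(position, velocity, goal,
                      strength=0.5, damping=0.8, dt=1.0):
    """Minimal sketch of goal behavior: accelerate toward the goal location
    and damp the velocity, so particles settle when they arrive and move
    again if the goal moves. (Illustrative only.)"""
    to_goal = tuple(g - p for g, p in zip(goal, position))
    velocity = tuple(damping * (v + strength * d * dt)
                     for v, d in zip(velocity, to_goal))
    position = tuple(p + v * dt for p, v in zip(position, velocity))
    return position, velocity
```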

Moving Toward One Goal

You can set up a simple goal ICE tree with particles moving toward one goal, as you see on the left with the butterflies fluttering toward the walking ClubBot.

Moving Towards Two Goals

You can use two Move Towards Goal compounds with two goals and the If node to have particles move to two goals at once based on a condition that you set up.

Moving From Goal to Goal

If you want to have particles move from one goal to another, you can create several sets of “Move Towards Goal+goal object” nodes, then plug each set into the Multi Goal Sequencer compound.

Spawning New Particles

Spawning generates new particles (points) from existing particles. These new particles are often referred to as particle trails. Spawning makes it easy to create effects such as fireworks, laser shots, streams of falling rain, or smoke trails.

To spawn particles, you can use several different Spawn compounds, either on their own or as part of a larger effect via a State system:

• Spawn Trails is the basic compound that creates particle trails.

• Spawn on Collision spawns particles upon collision with an object.

• Spawn on Trigger spawns particles when a trigger value is reached.

Each of the Spawn compounds is based on the Clone Point node. This node creates new particles that are exact replicas of the original particles. The points, including all of their attributes (except ID, which is unique), are copied from a point cloud and then added to either the same point cloud or to another point cloud that you select.

If you spawn particles into the same point cloud, the shaders and forces on the spawned particles are the same as for the original point cloud. You can, however, add new attributes to the spawned particles to change their color, size, shape, and so on.

Spawning into a different point cloud is similar to creating a new particle simulation because this point cloud has a separate ICE tree. You can also use different shaders for that point cloud, giving you control over the rendered look of the spawned particles.
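The Clone Point behavior described above (copy every attribute except ID) can be sketched like this, with a point cloud reduced to a plain list of attribute dictionaries purely for illustration:

```python
import itertools

_next_id = itertools.count(1000)  # fresh, unique IDs for spawned points

def clone_point(point, target_cloud):
    """Sketch of a Clone Point style operation: copy every attribute of the
    source particle except ID, which must stay unique, then add the copy to
    the chosen point cloud (the same cloud or a different one)."""
    spawned = dict(point)            # copy all attributes (position, color, ...)
    spawned["id"] = next(_next_id)   # the one attribute that is never copied
    target_cloud.append(spawned)
    return spawned
```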

Spawning Trails

The Spawn Trails compound gives you a basic way to spawn new particles. Here, pixie dust is spawned as a trail to follow the original particle as it travels upwards.

Different sets of spawned particles create fireworks with some help from a state system.

Spawning Upon Collision

You can use the Spawn on Collision compound in conjunction with any of the Surface Interaction compounds (such as Bounce Off Surface) to have new particles spawned when a particle collides with an object.

Here, the small blue particles are spawned when an orange particle bounces on the surface of an obstacle.

Spawning on Trigger

You can use the Spawn on Trigger compound with either a State system or just a simple If node system. Either way, you need to set the condition upon which new particles are spawned.

Here, the small blue particles are spawned when the bubble-looking trail particles reach their age limit.

Particle Strands

Particle strands are solid shape trails that are drawn after a particle. These solid shapes are actually continuous segments of the shape that you have chosen for the particle, such as spheres, rectangles, boxes, discs, blobs, or even instanced particle geometry. Strands make it easy to create effects that require more solid-looking objects than trails, such as ribbons, seaweed, hair, and much more.

Using the numerous Strands compounds, you have a lot of control over the appearance and movement of strands to create many types of particle effects. There are two main compounds for creating strands, each using a different method:

• Create Strands is the basic compound that creates particle strands. You can use any particle shape for the strands.

• Generate Strand Trails lets you dynamically generate particle strands based on the length of the simulation and the number of segments, such as for “growing” things like grass or vines. One strand segment is created per second up to the maximum number of segments that you have set.

Because these two compounds create strands in different ways, you can use only one of them at a time on the same set of particles.

Create Strands

Generate Strand Trails

Modifying the Strands

You can use any of these compounds with either the Create Strands or Generate Strand Trails compound to change the strand behavior:

Viewing and Rendering Strands

You can view the strands as trails in a 3D view if you set the particle shape or display type to Segment. The Segment shape draws a line from each point position through each strand position point. You can easily simulate hair using this shape type.

If you want to see what the strands will look like rendered using any shape type, however, you need to draw a render region.

You can render strands as surfaces using the Softimage surface shaders (such as Phong, Lambert, etc.), or you can render strands as volumes with the Particle Strand Gradient shader compound connected to the Particle Volume Cloud shader (or the Particle Renderer or Particle Shaper shader compounds).

Bend Strand

Turbulize Strand

Strand Sine Wave

Twist Strand

Particle Instances

You can use any 3D geometric object, hierarchy of objects, or group of objects in place of particles to create many different effects. For example, you could use cars to create a flow of traffic or characters to create a crowd scene; or create flocking scenes with flying birds, butterflies, or insects. The object is assigned to a particle and stays with that particle for its lifetime.

Instances are exact copies of their master object, including its materials (color) and rendering information. However, instances inherit the particle’s position, velocity, orientation, and size: the instance’s transformation is not used, although children keep their relative transformation to their parent.

If you’re using instances as particle shapes in collisions with an obstacle (as rigid bodies or using a compound with surface interaction, such as Bounce Off Surface), an approximate box or sphere around the instance is used for the collision: its actual shape is not used.

To use instances as particles, you assign them to the point cloud using either the Instance Shape node or the Set Instance Geometry compound in the ICE Tree:

• If the instanced objects are not animated, you should use the Instance Shape node. This node provides the simplest and fastest way to create large numbers of instances whose geometry is not animated.

• If the instanced object is animated, you can use the Set Instance Geometry and Control Instance Animation compounds. If an object’s transformation is animated, it has to be in relation to its parent, and then you choose the parent as the instance object.

Using Groups of Instanced Objects

If you’ve selected a group of objects for the instances, you have some control over which object is instanced. The objects in the group are picked according to their creation order, as shown in the explorer. You can choose View > Reorder Tool in the explorer to change the objects’ order in the group. You can also plug a Randomize compound into the Group Object Index port to change their order randomly.

Controlling the Instance’s Animation

If the instanced objects are animated, you can create crowds or flocking scenes, such as with flying birds, butterflies, or walking characters. If you’re doing a crowd, for example, each character can walk at a different pace.

If an object’s transformation is animated, such as a walk cycle, it has to be in relation to its parent. You then select the parent as the instance object, and choose Object and Children in the Set Instance Geometry compound.

There are three compounds that help you control animated instances:

• The Set Instance Geometry compound lets you choose the instance object to use, as well as which frame of its animation to use as the starting frame for each particle.

• The Control Instance Animation compound is like a playback control for how you want the instance’s animation played during the particle simulation. For example, if the instance’s animation goes from frames 1 - 50, you can choose to use only frames 20 - 40 for its animation in the particle simulation.

• The Control Displacement Instance Animation compound scales the instanced object’s animation according to its size when it becomes a particle.

For example, in the image below are two simple animated rigs that are used as master objects: one hopping, one rolling. When they become instanced, they are much smaller than their original size, so their animation cycles must go at a faster rate to cover the same distance as the original animation.

Master objects (left) and instanced objects as particles (right), shown over several frames.
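The frame remapping and rate scaling described above come down to simple arithmetic: loop the particle's age over the chosen subrange of the instance's animation, and play faster when the instance is smaller. A hedged sketch (parameter names are assumptions, not the compounds' actual ports):

```python
def instance_frame(particle_age, start_frame, end_frame, size_ratio=1.0):
    """Map a particle's age (in frames) to a source frame of the instance's
    animation, looping over a chosen subrange (e.g. frames 20-40 of a 1-50
    cycle) and playing faster for smaller instances (size_ratio < 1)."""
    span = end_frame - start_frame + 1
    # A smaller instance covers less ground per cycle, so its cycle runs faster.
    rate = 1.0 / size_ratio if size_ratio > 0 else 1.0
    return start_frame + int(particle_age * rate) % span
```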

ICE Particle States

Particle states offer a way of dividing particles into behavior groups. States are basically a combination of two things: a trigger and an effect.

The trigger determines what causes the particles’ behavior to change, and the effect is the behavior that the particles adopt when the trigger is executed. Using these two elements, you can have many different combinations of things happening to particles. Of course, state systems can be much more elaborate than this, with many State compounds defining many behavioral changes happening to the particles.

For example, you could create fireworks: the particle trail is seen going up into the sky, then suddenly bursts into another particle cloud at the end of its lifetime and leaves trails. Or by spawning particles at every frame, you could create something simple such as the smoky trail left by a hurtling fireball.

Each State compound you define is plugged into the State Machine compound. This compound is the “grand central station” for the states. The states are executed in the order in which they’re plugged into the State Machine, from top to bottom.
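The trigger/effect pattern can be sketched as a tiny state machine: each state pairs a trigger test with an effect and a transition target. This is only an illustration of the concept; inside ICE, the State Machine compound does this work:

```python
def run_states(particle, states):
    """Minimal sketch of the trigger/effect pattern: each state is a
    (trigger, effect, next_state_id) tuple. When the trigger fires, the
    effect runs and the particle transitions to the next state."""
    trigger, effect, next_state = states[particle["state"]]
    if trigger(particle):
        effect(particle)
        particle["state"] = next_state
    return particle
```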

How to Create a Particle State System

This is an overview of the state workflow using a simple example of particles changing their size and shape when they reach their age limit.

1 Create a particle simulation. Then drag in a State Machine compound and plug it into the ICE Tree node.

2 Drag in a State compound for each behavior set you want to define. Plug each one into the State Machine compound in the order you want them executed.

3 Disconnect the Simulate Particles compound from the ICE Tree node. This is because each State compound has its own Simulate Particles node inside.

4 Give each state a unique ID to identify it in the system, and give it a unique color to help you identify each state’s particles as you work.

5 Get a trigger compound and plug it into the first State compound. Here, the trigger compound tests when the age limit of the particle is reached.

6 Define the trigger’s value. This is done by setting the particle age limit value, which is set to 2 seconds here.

7 Specify the state to which you want the particle to transition when the trigger is pulled. In this case, State 0 transitions to State 1.

8 Get one or more effect nodes or compounds and plug them into the second state. Here, these two Set compounds will set the particle shape and size when the particle age limit is reached.

9 Define the effect’s behavior. The values of the Set Particle compounds are set so that the size decreases to 0.1 and the shape is changed to a Cone when the particle age limit is reached.

10 You can keep adding State compounds and defining each trigger/effect set by following steps 4 - 9 to create more complex effects.

ICE Rigid Bodies

ICE rigid body dynamics let you create realistic motion using particles as rigid body objects, which are objects that do not deform in a collision. The PhysX dynamics engine that is used for creating non-ICE rigid bodies is also used for creating ICE rigid bodies.

With rigid body particles, you can create effects that involve many small pieces that collide or accumulate, such as bricks, stones, or anything falling in a pile or being blasted apart.

The node that makes all this rigid body action happen is the Simulate Rigid Bodies node. It updates a particle’s position, velocity, orientation, and angular velocity from the previous frame based on its rigid body attributes (such as elasticity, friction, shape, size, and scale).

Rigid body particles can collide with geometric objects (obstacles) that are set as rigid bodies—just plug them into an Obstacle > Geometry port on the Simulate Rigid Bodies node in the ICE Tree.

Rigid body particles can also collide with each other if they’re in the same point cloud. To create the illusion of particles from several point clouds colliding, you can use several emitters and/or emissions in the ICE tree of a single point cloud. Then set up the emission properties for each to look like different particles.

Passive Rigid Bodies

By default, rigid body particles are active, meaning that they change position and orientation when affected by forces and collisions with other rigid body particles in the same point cloud, as well as with obstacle objects.

Using the IsPassiveRigidBody attribute, however, you can make rigid body particles passive so that they act as obstacles: their position does not change when active rigid body particles collide with them.

This character is made up of rigid body particle cubes and is heading for a rigid body particle wall. What will happen?

Luckily for the character, he’s set as passive in this situation, so he’s unscathed by the collision with the wall.

Not so lucky this time! Here, the wall is set as passive, but the character isn’t. Ouch.

Collision Geometry

The Simulate Rigid Bodies node calculates the particle and obstacle collisions according to the shape of their collision geometry. The collision geometry used is different depending on whether the rigid bodies are particles or obstacle objects:

• For rigid body particles, this is a bounding shape (sphere, capsule, or box) that approximates the particle Shape that you have set. Bounding shapes provide a quick solution for calculating particle collisions because they don’t have to calculate detailed geometry.

• For instanced geometry on the particles, a box or sphere is used, not the instanced object’s actual geometry. This is done to make the calculation time faster.

• For rigid body obstacle objects, this is a convex hull. Convex hulls give a quick approximation of an object’s actual shape, with results similar to the object being shrinkwrapped. A convex hull doesn’t account for any dips or holes in the rigid body obstacle’s geometry, but otherwise matches the obstacle’s original shape.
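To see why bounding shapes are cheap, compare them with per-triangle tests: a sphere-sphere check is a single distance comparison, with no detailed geometry involved. A sketch of the idea (illustrative of the approximation, not the PhysX engine's actual code):

```python
import math

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Bounding-sphere collision test: two spheres intersect when the
    distance between their centers is at most the sum of their radii."""
    return math.dist(center_a, center_b) <= radius_a + radius_b
```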

Elasticity and Friction

All rigid bodies use a set of collision attributes to calculate their reactions to each other during a collision. These attributes include elasticity and friction (static and dynamic).

• Elasticity determines how much energy is retained when rigid bodies collide. For example, when a basketball hits the ground, its elasticity influences how much the ball rebounds.

• Friction is the resistive force acting between rigid bodies that tends to oppose and dampen motion. For example, a bowling ball rolling on a carpet would have more friction than if it was rolling on a wooden floor.

It’s the combination of the friction and elasticity attributes of all rigid bodies involved in a collision that determines the results. Any rigid body attribute values you set for the particles are multiplied with the obstacle’s rigid body properties that you set.

• You can set the Elasticity, StaticFriction, and DynamicFriction attributes for each rigid body particle in a point cloud using a Set Data node.

• You can set the obstacle’s rigid body properties in the Simulate Rigid Bodies node’s property editor.

Convex hull collision geometry for an obstacle. The dip in the obstacle is not calculated, so the boxes simply bounce off the top of the obstacle.

Elasticity set to 1 on only the table (obstacle).

Elasticity set to 1 on both the table (obstacle) and the particles.
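The multiplication rule above can be shown with a one-line example: the rebound scales with the product of the particle's and the obstacle's elasticity, so with both set to 1 all speed is retained, and lowering either one shrinks the bounce. Illustrative only:

```python
def bounce_speed(incoming_speed, particle_elasticity, obstacle_elasticity):
    """Sketch of the combined-elasticity rule: the particle's Elasticity
    attribute is multiplied with the obstacle's elasticity, and the product
    scales how much speed survives the bounce."""
    return incoming_speed * (particle_elasticity * obstacle_elasticity)
```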

ICE Particle Shaders

In many basic ways, rendering particles is similar to rendering any other object in Softimage. You can use shaders and all standard lighting techniques, set shadows, and apply motion blur.

To the mental ray renderer, particles (point clouds) are a surface, just like any other object in Softimage, which means that you can plug many of the regular shaders into a point cloud’s render tree to shade the particles as surfaces.

However, when you render particles, you often want them to look like certain types of volume-based phenomena, such as smoke, clouds, or fire. To help you do this, there are some special ICE shaders that are designed to create volumic effects on ICE particles.

• The Particle Volume Cloud shader is the main particle shader that renders the point cloud’s bounding box as a volume.

You can also choose Get > Material > ICE Particle Volume from the Render toolbar to do some basic shader connection work for you.

• The Particle Density shader renders noise functions as density fields to create clouds, fireballs, smoke etc. This shader helps define each particle’s shape within the point cloud’s volume so that it doesn’t look like a single volumetric mass.

• The Particle Gradient shader lets you change the color and/or density of the particles based on density, age, or any other ICE attribute that you define for the particles.

• The Fractal Scalar and Cell Scalar shaders are actually texture shaders, but they are very useful for adding noise to particle volume effects, such as smoke, clouds, and fire.

The point cloud’s render tree shows how the particles get their volume and definition from the Particle Volume and Particle Shape compounds.

The color and density are defined by the Particle Gradient shader, with a Fractal Scalar adding noise to the density.

Dragon breath particles are rendered as a volume.

The ICE Particle shaders and shader compounds can be found in the preset manager or in the Nodes menu in the render tree.

Particle Shader Compounds

Shader compounds are like ICE data compounds in that they contain several connected nodes (in this case, shader nodes). Once you have shaders hooked up together in the render tree as you like them, you can create a compound that contains all of these shaders. This allows you to create a standard particle shader effect, such as fire, that you can use in different scenes or share with other people.

Softimage ships with several particle shader compounds that you can use as a starting point for your own shader effects.

• Start out with the Particle Renderer or Particle Shaper shader compound to render a volume quickly. These compounds use the Particle Volume Cloud shader as a base.

• The Particle Gradient Fcurve compound creates a curve that you can plug into a Gradient port of a shader to control the gradient’s falloff over distance.

• The Particle Strand Gradient compound sets up a color/alpha gradient for rendering particle strands.

Connecting Particle Shaders

To apply shaders to the point cloud, you connect them in the render tree. This gives you precise control over which shaders are connected together using which ports.

The shaders that you choose to plug in to a point cloud’s Material node depend on whether you want to render the particles as a surface or as a volume.

Particle Surfaces

If you want to render particles as a surface, you can hook up any surface shader to the Surface port of the point cloud’s Material node. In fact, when you create an ICE particle simulation, the Phong shader is connected to the point cloud’s Material node by default.

Particle image sprites are rendered onto rectangle particle shapes using the Phong shader.

Particles using the Blob shape are rendered using the Lambert shader.

Particle Volume

If you want to render particles as a volume, you need to first hook up the Particle Volume Cloud shader (or the Particle Renderer shader compound) to the Volume port of the Material node.

Bringing ICE Data into the Render Tree

The ICE attribute shaders allow you to control a point cloud’s shading in the render tree based on calculations done in the point cloud’s ICE tree. To use the attribute shaders, you must make sure that the particles first have the appropriate attribute created for them in the ICE tree.

For example, you can control the particle’s transparency based on the distance to a surface by creating an ICE tree that sets a DistancetoSurface attribute, and then accessing that attribute in the render tree via an attribute shader.

Another example is to override the color of the instanced particle geometry with the particle’s Color or Init_Color attribute. You can use the Attribute Color shader to do this.

In the render tree, drag the appropriate shader from the Attributes group in the preset manager or from the Nodes menu. There is one Attribute shader per data type: Boolean, Color, Integer, Scalar, Transform, and Vector.

Dry ice particle volume is created with a combination of several ICE particle shaders.

The Fractal Scalar and Cell Scalar shaders help to give this particle volume a unique look.

Section 16 • Shaders

A shader is a miniature computer program that controls the behavior of the rendering software during, or immediately after, the rendering process. Some shaders compute the color values of pixels. Other shaders can displace or create geometry on the fly.

Shaders are used to create materials and effects in just about every part of a scene. An object’s surface and shadows are controlled by shaders. So are scene lighting and camera lens effects. Even shaders’ parameters are usually controlled by other shaders. You can even apply shaders at the render pass level to affect the entire scene.

What you’ll find in this section ...

• The Shader Library

• About Surface Shaders

• Applying Shaders to Scene Elements

• The Render Tree

• Building Shader Networks

• Creating Shader Compounds

The Shader Library

Softimage’s shaders are divided into several different categories based on how they are used in a render tree. Shaders can be quickly and easily accessed from the preset manager, the browser, an explorer view, or the Nodes menu in the render tree view.

Surface shaders are one of the most important types of shaders. All geometric objects in a scene have an associated surface shader, even if it is only the scene’s default shader. Surface shaders determine an object’s basic color and illumination characteristics. Surface shaders are also responsible for object transparency, refraction and reflectivity.

2D texture shaders apply a two-dimensional texture onto an object, just as 3D texture shaders apply a three-dimensional texture to an object. They are connected to the object’s surface shader to define the object’s texture.

Light shaders define the characteristics of the scene’s light sources. For example, a spotlight shader uses the illumination direction to attenuate the amount of light emitted. A light shader is used whenever a surface shader uses a built-in function to evaluate a light.

If shadows are used, light shaders normally cast shadow rays to detect occluding objects between the light source and the illuminated point.

Lens shaders are used when a primary ray is cast by the camera. They may modify the ray’s origin and direction to implement cameras other than the standard pinhole camera and they may modify the result of the primary ray to implement effects such as lens flares, distortion, or cartoon ink lines.

Environment shaders are used instead of surface shaders when a visible ray leaves the scene entirely without intersecting an object or when the maximum ray depth is reached. They are used to create backgrounds for scenes, create quick-rendering reflections, light scenes with High Dynamic Range Images, and so on.

Volume shaders modify rays as they pass through an object (local volume shader) or the scene as a whole (global volume shader). They can simulate effects such as clouds, smoke, and fog.

There are also particle volume shaders that help you create these same types of effects on a point cloud.

Toon shaders apply non-photorealistic or cartoon style effects to objects. They control cel-animation type properties like inking and painting.

To get a full toon effect, it’s best to use the toon material shaders in conjunction with the toon lens shaders.

BBC “Everyman”: Animation by Aldis Animation

Shadow shaders determine how the light coming from a light source is altered when it is obstructed by an object. They are used to define the way an object’s shadow is cast, such as its opacity and color.

Lightmap shaders sample object surfaces and store the result in a file that can be used later. For example, you can use a lightmap shader to bake a complex material into a single texture file. Lightmaps are also used by the Fast Subsurface Scattering and Fast Skin shaders to store information about scattered light.

Photon shaders are used for global illumination and caustics. They process light to determine how it floods the scene. Photon rays are cast from light sources rather than from a camera.

Output shaders operate on images after they are rendered but before they are written to a file. They can perform operations such as glows, blurs, background colors, and so on.

Displacement shaders alter an object’s surface by displacing its points. The resulting bumps are visibly raised and can cast shadows.

Material phenomena are combinations of shaders that are packaged into a single shader node. These are often used to create more complex rendering effects. Connecting a material phenomenon to an object’s material prevents that material from accepting other shaders directly, though you can extend the phenomenon’s effect by driving its parameters with other shaders. The Fast Subsurface Scattering and Fast Skin shaders are examples of material phenomena.

Realtime shaders allow you to use the render tree to build and control the multipass realtime rendering pipeline. You can connect these shaders together to achieve a multitude of sophisticated rendering effects, from basic surface shading to complex texture blending and reflection.

Geometry shaders are evaluated before rendering starts. This allows the shader to introduce procedural geometry into the scene. For example, a geometry shader might be used to create feathers on a bird or leaves on a tree.

Tool shaders let you create a shader from scratch or extend an existing one. Although some tool shaders can be used on their own, many of them must work in conjunction with another to achieve a highly customized effect.

Some examples of tool shaders include: Color Channels, Conversion, Image Processing, Math, Mixers, Texture Generators, Texture Space Controllers, and Texture Space Generators.

The Preset Manager

Many shaders and material (and ICE node) presets are installed with Softimage, all accessible from the preset manager.

The preset manager is available on the left side of the render tree (and the ICE tree). You can also open it as a floating window by choosing View > General > Preset Manager from the main menu.

You can apply shaders or materials by dragging and dropping them onto objects in the scene. This connects the shader or material to the object’s Material node ports.

You can also drag shaders or shader compounds into the render tree as a shader node that you can then connect to an object’s tree to build up an effect.

A Select from Materials, Shaders, or ICE Nodes type of presets.

B Select Favorites, All Nodes, or a specific category. You can add items to your Favorites for easier access to presets that you use frequently.

C Items in the selected category appear in this panel. You can drag and drop materials onto objects and material libraries, shaders onto objects and into render trees, and ICE nodes into ICE trees.

D Sets thumbnail size and arrangement.

E Refresh. Clicking this button forces an update. This may be necessary if you have moved, added, or removed preset files on disk since opening the preset manager.

F Enter all or part of a name to filter the presets that are displayed in the right panel (C). Filtering works across all categories.

In this case, grad is entered, so all shaders in all categories that have “grad” in their names appear in the right panel.

G Recalls previous filter strings.

H Clears the filter string (show all nodes). You can also delete the text string to show all nodes again.

About Surface Shaders

Surface shaders are some of the most commonly used shaders in Softimage. Each one defines an object’s basic surface characteristics, like color, transparency, reflectivity, specularity, and so on, according to a specific shading model. Shading models determine how an object’s surface reacts to scene lighting.

Phong

Uses ambient, diffuse, and specular colors. This shading model reads the surface normals’ orientation and interpolates between them to create an appearance of smooth shading. It also processes the relation between normals, the light, and the camera’s point of view to create a specular highlight.

The result is a smoothly shaded object with diffuse and ambient areas of illumination on its surface and a specular highlight so that the object appears shiny, like a billiard ball or plastic.
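The Phong model described above can be written as a short formula per color channel: color = ambient + diffuse · max(N·L, 0) + specular · max(R·V, 0)^shininess, where R is the light direction reflected about the normal. A sketch for a single light (illustrative of the shading model, not Softimage's shader code):

```python
def phong(ambient, diffuse, specular, n, l, v, shininess=32.0):
    """Classic Phong shading for one light, per color channel. All direction
    vectors (normal n, light l, view v) are assumed to be unit length."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n_dot_l = max(dot(n, l), 0.0)
    # Reflect the light direction about the normal: R = 2(N.L)N - L
    r = tuple(2.0 * dot(n, l) * ni - li for ni, li in zip(n, l))
    r_dot_v = max(dot(r, v), 0.0)
    return tuple(a + d * n_dot_l + s * (r_dot_v ** shininess)
                 for a, d, s in zip(ambient, diffuse, specular))
```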

Lambert

Uses the ambient and diffuse colors to create a matte surface with no specular highlights. It interpolates between normals of adjacent surface triangles so that the shading changes progressively, creating a matte surface. The result is a smoothly shaded object, like an egg or ping-pong ball.

Blinn

Uses diffuse, ambient, and specular color, as well as a refractive index for calculating the specular highlight. Blinn produces results that are virtually identical to Phong except that the shape of the specular highlight reflects the actual lighting more accurately when there is a high angle of incidence between the camera and the light.

Blinn is useful for rough or sharp edges and simulating a metal surface. The specular highlight also appears brighter than the Phong model.

Cook-Torrance

Uses diffuse, ambient, and specular color, as well as a refractive index used to calculate the specular highlight. It reads the surface normals’ orientation and interpolates between them to create an appearance of smooth shading. It also processes the relation between normals, the light, and the camera’s point of view to create a specular highlight.

Cook-Torrance produces results that are somewhere between Blinn and Lambert and is useful for simulating smooth and reflective objects, such as leather. Because this shading model is more complex to calculate, it takes longer to render than the other shading models.
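The differences between these shading models come down to how the diffuse and specular terms are computed. As an illustration only (this is the generic illumination math behind Lambert, Phong, and Blinn shading, not Softimage's actual shader code), a minimal Python sketch:

```python
import math

def clamp01(x):
    return max(0.0, x)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return [x / m for x in v]

def lambert_diffuse(n, l):
    # Matte term shared by all of these models: brightness depends
    # only on the angle between the surface normal and the light.
    return clamp01(dot(n, l))

def phong_specular(n, l, v, shininess):
    # Phong reflects the light direction about the normal and
    # compares it with the view direction.
    r = [2.0 * dot(n, l) * ni - li for ni, li in zip(n, l)]
    return clamp01(dot(r, v)) ** shininess

def blinn_specular(n, l, v, shininess):
    # Blinn instead uses the half-vector between light and view,
    # which changes the highlight's shape at grazing angles.
    h = normalize([li + vi for li, vi in zip(l, v)])
    return clamp01(dot(n, h)) ** shininess

def shade(ambient, diffuse, specular, n, l, v, shininess, spec_fn):
    # Final color = ambient + diffuse term + specular highlight,
    # combined per channel as described above.
    d = lambert_diffuse(n, l)
    s = spec_fn(n, l, v, shininess)
    return [a + df * d + sp * s
            for a, df, sp in zip(ambient, diffuse, specular)]
```

In these terms, Lambert is simply `shade` with a black specular color, and Phong versus Blinn is a matter of which `spec_fn` you pass in.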

Page 301

Strauss

Uses only the diffuse color to simulate a metal surface. The surface’s specular is defined with smoothness and “metalness” parameters that control the diffuse to specular ratio as well as reflectivity and highlights.

Anisotropic

Sometimes called Ward, this shading model simulates a glossy surface using ambient, diffuse, and glossy colors. To create a “brushed” effect, such as brushed aluminum, it is possible to define the specular color’s orientation based on the object’s surface orientation. The specular is calculated using UV coordinates.

Constant

Uses only the diffuse color. It ignores the orientation of surface normals. All the object’s surface triangles are considered to have the same orientation and be the same distance from the light.

It yields an object whose surface appears to have no shading at all, like a paper cutout. This can be useful when you want to add static blur to an object so that there is no specular or ambient light.

Toon

This model begins with a constant-shading-like base color. Ambient lighting, highlights, and rim lights are composited over the base color to produce the final result.

The result is a cel-animation type of shading that can vary enormously depending on how you configure the highlights and rim lights. The toon shading model is typically used in conjunction with the Toon Ink Lens shader (applied to the render pass camera), which creates the cartoon-style ink lines.

Page 302

Basic Surface Color Attributes

You can create a very specific color for an object by defining its ambient, diffuse, and specular colors separately on the Illumination page of its surface shader property editor.

To open an object’s surface shader property editor, select the object and choose Modify > Shader from the Render toolbar.

Not all shading models support all of these basic characteristics. For example, only the Phong, Blinn, Cook-Torrance and Anisotropic shading models support specular highlights (although the Strauss shader’s Smoothness and Metalness parameters affect specularity). Similarly, the Strauss shader does not support an ambient color, while most other models do.

It’s also worth noting that because different shading models compute these basic characteristics differently, the parameters that control the attributes vary from one property editor to another. For example, the Anisotropic shader has much more elaborate specular highlight controls than the Phong shader.

Diffuse

This is the color of light that the surface scatters equally in all directions, so that the surface appears equally bright from all viewing angles. It usually contributes the most to an object’s overall appearance and can be considered the “main” color of the surface.

Ambient

This color simulates a uniform non-directional lighting that pervades the entire scene. It is multiplied by the scene ambience value, and blended with the diffuse color. Often, the ambient color is set to the same value as the diffuse color, allowing the scene ambience to provide the ambient color.

Specular

This is the color of shiny highlights on the surface. It is usually set to white or to a brighter shade of the diffuse color. The size of the highlight depends on the defined Specular Decay value. Specular highlights are not visible in all shading models.

The combined result of the ambient, diffuse, and specular colors/lighting contributions.

Page 303

Reflectivity, Transparency, and Refraction

In addition to controlling an object’s basic surface shading characteristics, surface shaders also control reflectivity, transparency, and refraction. Parameters for controlling these attributes are on the Transparency/Reflection tab of the surface shader’s property editor.

To open an object’s surface shader property editor, select the object and choose Modify > Shader from the Render toolbar.

Reflectivity

A surface shader’s Reflection parameters control an object’s reflectivity. The more reflective an object is, the more other objects in the scene appear reflected in the object’s surface.

As an object becomes more reflective, its other surface parameters, such as those related to diffuse, ambient, and specular areas of illumination, become less visible. If an object’s material is fully reflective, its other material attributes are not visible at all.

Reflectivity values are defined using color sliders. Setting the color to black makes the object completely non-reflective, while setting the color to white makes it completely reflective. If necessary, you can even control reflectivity in individual color channels.
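The per-channel behavior described above amounts to a simple linear mix: as the reflectivity color approaches white, the reflected environment replaces the underlying surface shading. A hedged sketch (illustrative function names, not part of Softimage):

```python
def apply_reflectivity(surface_rgb, reflection_rgb, reflectivity_rgb):
    # Per-channel mix: 0 (black) = non-reflective, 1 (white) = fully
    # reflective. As reflectivity rises, the underlying surface
    # contribution fades out, as described in the text above.
    return [(1.0 - k) * s + k * r
            for s, r, k in zip(surface_rgb, reflection_rgb,
                               reflectivity_rgb)]
```

Setting the reflectivity color to pure red, for example, would make only the red channel reflective while leaving green and blue untouched.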

Controlling Reflectivity with Textures

You can also control reflectivity using a texture by connecting the texture to the surface shader’s reflectivity input.

Normally, grayscale images are used since black, white and shades of gray adjust reflectivity uniformly in all color channels. Black areas of the image make the corresponding portions of the object non-reflective, white areas make the corresponding portions of the object completely reflective, and gray areas make the corresponding portions of the object partially reflective.

No reflectivity in gray ball’s material

35% reflectivity

In this example, the surface shader’s reflectivity parameter is connected to a simple black and white stripe texture.

The white areas are reflective, while the black areas are not.

Page 304

Transparency

A surface shader’s Transparency parameters control an object’s transparency. The more transparent an object is, the more you can see through it.

As with reflectivity, transparency affects the visibility of an object’s other surface attributes. You can compensate for this by increasing the attributes’ values, such as changing specular color values that were 1 on an opaque object to 10 or higher on a transparent object.

Transparency values are also defined using color sliders. Setting the color to black makes the object completely opaque, while setting the color to white makes it completely transparent. If necessary, you can even control transparency in individual color channels.

Controlling Transparency with Textures

As with reflectivity, you can also control transparency using a texture by connecting the texture to the surface shader’s transparency input.

Normally, grayscale images are used since black, white and shades of gray adjust transparency uniformly in all color channels. Black areas of the image make the corresponding portions of the object opaque, white areas make the corresponding portions of the object completely transparent, and gray areas make the corresponding portions of the object partially transparent — or translucent.

75% transparency

70% transparency with 30% reflection.

In this example, the surface shader’s transparency parameter is connected to a simple black and white stripe texture.

The white areas are transparent, while the black areas are opaque.

Page 305

Refraction

When transparency is incorporated into an object’s surface definition, you can also define the refraction value. Refraction is the bending of light rays as they pass from one transparent medium to another, such as from air to glass or water.

You can set the index of refraction from a surface shader’s property editor. The default value is 1, which represents the density of air. This value allows light rays to pass straight through a transparent surface without bending. Higher values make the light rays bend, while values less than 1 make light rays bend in the opposite direction, simulating light passing from air into an even less dense material (such as a vacuum).

Refractive index values usually vary between 0 and 2, but you can type in higher values as needed.
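The bending itself follows Snell’s law, with the shader’s index-of-refraction value acting as the ratio between the two media. The sketch below is generic ray math, not Softimage code:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def refract(incident, normal, ior):
    # incident and normal are unit 3D vectors; ior is the ratio of
    # the indices of refraction of the two media (n1 / n2), e.g.
    # 1.0 / 1.5 when entering glass from air. Returns the bent ray,
    # or None on total internal reflection.
    cos_i = -dot(incident, normal)
    sin2_t = ior * ior * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return [ior * i + (ior * cos_i - cos_t) * n
            for i, n in zip(incident, normal)]
```

With a ratio of 1.0 the ray passes straight through unchanged, which matches the default behavior described above.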

Refraction value of 0.9

Refraction value of 1.1

Page 306

Applying Shaders to Scene Elements

There are a number of ways to apply and connect shaders in Softimage. You can use any of these methods depending on the tool you prefer to use or the task you need to perform.

• Render tree: You can connect many shader nodes together to build up a tree for the object. Easy! See The Render Tree on page 307.

• Material manager: You can use this tool to create and apply a material to an object. See The Material Manager on page 317.

• Preset manager: Drag and drop a shader or material preset from here onto the appropriate type of object to apply it, or drag it into the render tree as a node. See The Preset Manager on page 299.

• Netview: Drag and drop a shader or material preset from a netview window (press Alt+5) onto the appropriate type of object to apply it, or drag it into the render tree as a node.

• Render toolbar:

- Choose Get > Material on the Render toolbar to create and apply materials to selected objects.

- The Get > Shader menu has sub-menus containing shaders that you can connect to all of an object’s Material node’s input ports.

- The Get > Texture menu lists commonly used texture shaders and allows you to connect them to any combination of a surface shader’s ambient, diffuse, transparency and reflection ports.

• Shader’s property editor: A shader’s property editor contains all the parameters that you can edit. To the right of each parameter is a “plug” connection icon. Clicking this icon opens a menu that lists shaders that you can attach directly to that parameter.

• Shader stacks: Some scene elements, like render passes and cameras, have shader “stacks” in their property editors where you apply shaders that affect the whole scene rather than individual objects.

Page 307

The Render Tree

The render tree is where you can connect shader nodes together to build trees that create a visual effect for an object. You can have one render tree per object.

To open the render tree, select an object and press 7 or choose View > Rendering/Texturing > Render Tree. Click the Refresh icon in the render tree to show the shader nodes available for that object.

Shaders in the render tree are called nodes, a name that reflects their representation as containers. These nodes can be single shader nodes or shader compound nodes. Shader compounds are shader “packages” built from shader nodes and, possibly, other shader compounds.

Every shader node exposes a set of inputs and outputs (called ports) for most or all of its parameters. You connect shaders together by simply dragging a connection arrow from one shader’s output port to another shader’s input port. It’s that easy!


Page 308

Connecting Shader Nodes

You connect shader nodes by clicking and dragging an output port from the right side of one shader node onto an input port on the left side of another shader node. Data flows along the connection from the first node and is processed by the second node. All data ends up being processed by the Material node.

When a port is connected, the value of its corresponding parameter is driven by the connection, which means that you can no longer set the parameter’s value in that shader’s property editor. In fact, the parameter and its controls (checkboxes, sliders, etc.) are not even displayed. If you remove the connection, the controls reappear in the property editor.
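Conceptually, a connected port behaves like the toy node graph below: once a port is driven by an upstream node, its stored parameter value is ignored until the connection is removed. All names here are illustrative; this is not the Softimage SDK.

```python
class ShaderNode:
    def __init__(self, name, params, fn):
        self.name = name
        self.params = dict(params)  # editable defaults (the property editor)
        self.inputs = {}            # port name -> upstream node
        self.fn = fn                # how this node computes its output

    def connect(self, port, upstream):
        # Once connected, the port is driven by the upstream node's
        # output; the stored parameter value is no longer used.
        self.inputs[port] = upstream

    def evaluate(self):
        values = dict(self.params)
        for port, upstream in self.inputs.items():
            values[port] = upstream.evaluate()
        return self.fn(values)

# A texture node feeding a surface node's "diffuse" port:
texture = ShaderNode("stripes", {}, lambda v: (1.0, 0.0, 0.0))
surface = ShaderNode("phong", {"diffuse": (0.5, 0.5, 0.5)},
                     lambda v: v["diffuse"])
surface.connect("diffuse", texture)
```

Evaluating `surface` now pulls its diffuse color from the texture node rather than its own gray default, just as data in the render tree flows left to right into the Material node.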

A Memo Cams. You can save and restore up to four views of the render tree workspace.

B Lock. Prevents the view from updating when you select other objects in the scene.

C Refresh. When the view is locked, clicking this button forces it to update with the current selection in the scene.

D Clears the render tree workspace.

E Opens the preset manager in a floating window.

F Displays or hides shaderballs on the shader nodes.

G Displays or hides the preset manager embedded in the left panel (10).

H Name and path of the current Material node.

I Bird’s Eye View. Click to view a specific area of the workspace, or drag to scroll. Toggle it on or off with Show > Bird’s Eye View.

J Embedded preset manager shows all shader nodes and compounds that are available to use.

You can drag and drop shader nodes from here into the render tree workspace. You can also get shaders from the Nodes menu.

K The render tree workspace. This is where you can connect shader nodes together to build trees.

L Connection arrow between shaders’ output and input ports shows the data flow between them. Data always flows from the left to the right of the tree.

M Shader node. This shader is a texture shader, as indicated by its light green color. Each type of shader has a different color.

N Texture layers. These layers let you mix several textures together so that each texture is blended with the cumulative result of the preceding textures.

O Material node: This node acts like a placeholder for every shader that is applied to an object. Every object must have one or it won’t render. Its input ports support each type of shader.

Data travels from this node’s output port ...

... and is processed by this node via its input port.
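The texture-layer mixing just described (callout N) can be sketched as a simple fold, where each layer is blended with the cumulative result of the preceding ones. This assumes a plain linear-mix blend for illustration; real texture layers offer many blend modes:

```python
def composite_layers(base_rgb, layers):
    # Each layer is (color_rgb, weight in 0..1). Every texture is
    # blended with the cumulative result of the preceding ones,
    # like stacking semi-transparent sheets.
    result = list(base_rgb)
    for color, weight in layers:
        result = [(1.0 - weight) * r + weight * c
                  for r, c in zip(result, color)]
    return result
```

A layer with weight 1.0 completely replaces everything beneath it, while lower weights let earlier layers show through.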

Page 309

Node Color Codes

Every shader node in the render tree is color coded, as are each of its ports. This coding system helps you visualize which shaders are doing what within their render tree structures.

Shader node ports are also color coded. A node’s output is indicated by a port (colored dot) in the top right of the node, while each input port is indicated on the left side of the node. The color of a port identifies what type of input value the port will accept, and what type of value it will output.

Each type of shader node has its own color. The node types are: Realtime shader, Volume shader, Output shader, Lens/camera shader, Light shader, Material node, Material phenomenon, Surface shader, Texture shader, Lightmap shader, and Environment shader.

The following table shows which input/output port color is assigned to which type of value:

Selected nodes are highlighted in white.

Click the arrow to expand or collapse a node.

Click the port to create a connection arrow.

• Color: Returns or outputs a color (RGB) value. These ports are usually used in conjunction with the surface of an object or when defining a light or camera.

• Scalar: Represents a scalar input/output with any value between 0 and 1.

• Vector: Represents an input/output that corresponds to vector positions or coordinates.

• Boolean: Represents an input/output that corresponds to 0 or 1, or On/Off.

• Integer: Consists of a single integer value (such as 2 or 73).

• Texture/Image Clip: Accepts or returns an image file.

• RealTime: Accepts connections from other realtime shaders and outputs to other realtime shaders or to the Material node’s RealTime port.

• Lightmap: Outputs the result of a lightmap shader to the Material node’s Lightmap port.

• Material Phenomenon: Outputs the result of a material phenomenon shader to the Material node’s Material port.

Page 310

Building Shader Networks

The process of building shader networks in the render tree is best explained visually. Essentially, you create an effect by connecting shaders to an object’s material, using other shaders to control those shaders’ parameters, and so on.

There are no hard and fast rules for how shaders should be connected, and experimenting with different connections is usually rewarding.

What follows is a simple example of how to connect shaders in the render tree to build an object’s material.

1 To begin with, the mug has a Phong shader connected to its material node’s Surface port to create basic surface shading — ambient and diffuse colors, specular highlights and, in this case, some reflectivity.

2 Since there are no other objects in the scene, the mug’s reflectivity is not apparent. Connecting an Environment map shader to the material node’s Environment port makes the reflectivity visible and creates some reflections on the mug’s surface.

3 Now it’s time to add some color and detail. Connecting two textures to a Mix2Colors shader blends the textures together. The combined result is then connected to the Phong shader’s Ambient and Diffuse ports, coloring the mug’s surface.

4 Connecting a Bump Map generator shader to the material node’s Bump Map port adds some bumpiness to the mug’s surface. Note how this affects the reflections from the environment map. The mug now looks more like stoneware than porcelain.

5 Finally, connecting an Ambient Occlusion shader between the Phong shader and the material node’s Surface port darkens the mug where it occludes itself. The Phong shader’s branch, which includes the textures, is connected to the Ambient Occlusion shader’s Bright Color port, while the Dark Color is set to black. The Ambient Occlusion effect is most visible on the inside of the mug and the inner surface of the handle.


Page 311


Page 312

Creating Shader Compounds

In the render tree, you can hook shaders together and set their values to create an effect. Once you have things set up as you like, you can then create a compound of this tree that contains all the shaders.

Shader compounds allow you to create an effect and save it in one node, then use it in different scenes or share it with other users. You can expose only the parameters of each shader that you want others to see and adjust.

You can create a shader compound containing any type of shader. The compound can contain many shaders connected together, or just one shader, if you like. Softimage ships with some shader compounds for ICE particles and subsurface scattering effects — open them up and see what makes them tick!


Page 313

Overview of Creating a Shader Compound

These steps show the basic process of how to create a shader compound of your own.

1 In the render tree, select all the shader nodes you want to save in the compound. To keep the compound generic, you should leave out the Material node so that you can apply this compound to any object.

2 From the render tree toolbar, choose Compounds > Create Shader Compound.

This creates a compound named ShaderCompound that contains all of the selected shaders, which disappear from the workspace into the new compound node.

3 Click the little e on your new compound to edit it. This opens up the compound editor in which you can expose ports for the compound. Only exposed ports will be available for connections and editing back in the render tree.

4 The bar on the left shows all the exposed shader ports for your compound.

5 Click this arrow to expand or collapse the list of exposed input parameters. When the list is collapsed, you can display the name of a port by hovering the mouse pointer over its connection.

6 To expose a shader port, click the black circle beside Expose Input and drag it to a port. That port is included on the bar. Keep doing this for every port you want to expose.

7 You can rename an exposed port by right-clicking on it and choosing Properties, then entering new names:

• The Display Name is the one that is displayed in the compound node in the render tree and in the compound’s property editor. If this is blank, then the display name is the same as the Name.

• The Name is the one that is displayed in the blue bar on the left here and is used in scripting.

Double-clicking on an exposed port or right-clicking and choosing Rename sets the scripting name only, not the display name.

8 Create the output port by dragging an output port from the shader on the furthest right (the shader into which all other shaders are plugged) to the black dot on the bar on the right.

9 In the bar at the top, double-click where ShaderCompound is written and give your compound a class name (in this example, it’s Bonfire). Do the same for the Category, which is where it will show up in the groups in the preset manager, such as Particle.

If you like, you can add comments to your compound to document how everything inside it works.

10 Click the little x box in the upper-left corner to close the compound and return to the regular render tree.

11 Choose Compounds > Export Shader Compound from the render tree toolbar to export your compound so that it can be used in other scenes or by other users.

Page 314

Page 315

Section 17

Materials

In Softimage, an object’s look and feel is defined by one or more shaders that are plugged into the object’s material node. The material node itself provides access to the object’s attributes while the shaders control how those attributes appear when rendered. This section introduces ways of creating and working with materials.

What you’ll find in this section ...

• About Materials

• The Material Manager

• Creating and Assigning Materials

• Material Libraries

Page 316

About Materials

Every object needs a material. In Softimage, the term “material” is used to refer to the cumulative effect of all of the shaders that you use to alter an object’s look and feel. Strictly speaking, though, materials in Softimage are really just containers for an object’s various attributes. If an object’s material has no shaders attached to it, nothing defines the object’s look, and the object won’t render.

The easiest way to understand materials is to look at one in the render tree, where it is represented by a Material node. The Material node lists all of the inputs to a given material. These inputs are sometimes referred to as “ports.” Each port controls a set of object attributes. When the material is assigned to an object, the shaders that you connect to these ports alter the corresponding attributes.

For example, the Surface port controls object surface characteristics. By connecting a shader or a network of shaders to this port, you can change an object’s color, transparency, reflectivity, and so on. The important thing to understand is that nearly every change you make to an object’s appearance involves connecting shaders to define the object’s material.

The Default Scene Material

Every new scene has a default material, called Scene_Material, which is assigned to the scene’s root in branch mode. An object (in a hierarchy or not) that does not inherit a material from a parent, and does not have a locally-defined material, inherits the scene’s default material. In the explorer, you can view the default material in the material library’s hierarchy, or as a node of the scene root, which you can display by choosing Local Properties from the Show menu.

When you assign a local material to an object, it replaces the default scene material for that object only. If you remove or delete the object’s local material, the object inherits the default scene material again.

You can modify the default scene material as you would any other material and the changes are applied to any objects that inherit it.

If you delete the default scene material, the oldest created material in the scene becomes the new default material, and is assigned to all objects to which the previous default material was assigned (whether explicitly or through propagation).

Materials and Surface Shaders

It’s worth noting that all new materials that you create in Softimage start out with some type of surface shader attached to them. This provides basic surface shading so that the material is renderable from the beginning. For example, if you create a material from within a material library, it has a Phong shader attached to its Surface, Shadow, and Photon ports. If you create a material using a command from the Render toolbar’s Get > Material menu, you can choose a surface shader to attach to the material.

Default Scene Material

By default, new materials have a surface shader, like the Phong shader, attached to them.

Page 317

The Material Manager

The material manager is a tool that is designed for creating, managing, and editing all your materials and libraries.

You can open the material manager by pressing Ctrl+7 or choosing Modify > Materials from the Render toolbar. Its different areas are outlined here.


Page 318

A The left panel contains the explorer that has the Scene (cluster) and Image Clip tabs—see the image on the right for more details.

In the Scene explorer, you can switch between local materials (applied locally on the object or cluster itself) and applied materials.

Selecting a material in the explorer highlights it in the shelf and displays it in the bottom panel.

In the Image Clip explorer, all image clips in the scene are displayed.

B On the top, the command bar provides tools for working with materials, such as creating, duplicating, deleting, and applying them, as well as tools for managing material libraries.

C The middle right is a shelf with shaderballs for the materials in your scene. Multiple libraries appear on separate tabs.

D Click a shaderball to select the material, or drag a shaderball onto an object or cluster in the scene to apply it.

E The tabs on the bottom of the material manager can display one of several views:

• The selected material in the render tree (default view).

• The selected material in the texture layer editor.

• A list of image clips used by the selected material. Right-click a clip’s thumbnail for a context menu that allows you to edit the clip’s properties and other options. In the Material Manager preferences, you can set the size of the thumbnails used on this tab.

• A list of objects and clusters that use the selected material (Who Uses?). In the Material Manager preferences, you can set the size of the thumbnails used on this tab.

A Select the thumbnail size for the clips displayed in this list: small, medium, large, or list view. You can turn off the display of the thumbnails to optimize performance.

B Filters clips by All, Used, and Unused clips.

C Filters clips displayed by scene layer.

D Filters clips displayed by user keywords.

E Filters clips displayed by name.

F Right-click a clip to display a context menu.

G Drag and drop one or more images into the image clip explorer panel to create sources and clips.


Scene and Image Clip Explorer

Page 319

Creating and Assigning Materials

Giving an object a material is the first step in defining its look. There are a couple of different tools you can use to create new materials and assign them to objects. Once you create a material, it belongs to a material library, and you can assign it to as many objects as you’d like.

Creating a New Material

You can create materials and assign them to objects using the material manager, the explorer, or the Get > Material commands on the Render toolbar.

With any of these methods, a new material is created, consisting of a Material node with a Phong shader connected to its Surface, Shadow, and Photon ports.

The easiest way to do this is to use the material manager.

1. Select one or more objects.

2. Click the Create New Material icon, or choose a material from the Create menu.

3. Click the Assign Material to Selected Objects icon to apply the material to the objects.

Assigning Any Material

Once a material is created, you can access it from a material library and then assign it to an element in your scene. The material manager provides you with several ways of doing this, but you can also use the explorer or the Get > Material > Assign Material command on the Render toolbar.

This is one of the easiest ways to assign a material using the material manager.

1. Click a tab to select a material library.

2. Select a material you want to assign.

3. Drag the material’s shaderball and drop it on an unselected object, or on one or more selected objects to apply it.


Page 320

Assigning Materials to Polygons and Clusters

Using the same tools and techniques that you use to assign materials to objects, you can assign materials locally to selections of polygons and/or polygon clusters on a polygon mesh object. If you choose the former, a cluster is created from the selection. The cluster’s local material always overrides the one assigned to the entire object.

In the explorer, a cluster’s material appears under the cluster’s node, rather than directly under the object’s node. To access it, expand the object’s Polygon Mesh > Clusters > name of cluster node.

If you remove a material from a cluster, the cluster inherits the material that is either assigned to or inherited by the object.

Assigning Materials to Hierarchies

You can assign a material to a hierarchy’s parent object using the same tools and techniques that you use to assign materials to objects. The only thing different is that you must middle-click the parent object to branch-select it. Children in the hierarchy that don’t have a locally assigned material then inherit the parent’s material.

For example, if you have an object such as a table, you may want the legs and top to be the same color. If you assign a material to color the parent (table top), the material definition is propagated to its children (table legs).

Materials assigned to hierarchies are subject to the same rules of propagation as any other properties.

Polygon mesh object with global material assigned.

Object with specific polygons selected.

Local material assigned to selected polygons.

The object’s material is here.

The cluster’s material is here.

Simple Propagation

The larger sphere was branch-selected and given a checkerboard material. Because it was applied in branch mode, the material is inherited by all the descendants.

Local Material Application

One sphere was selected and given a blue material. This material is local for the selected object only, but not for any of its children.

Page 321

Material Libraries

Most properties in Softimage are owned by the scene elements to which they’re applied. Materials, on the other hand, belong to material libraries. Material libraries are common containers for all of the materials in a scene. Each time you create a material, it’s added to a material library. Although all of the materials in a scene belong to a library, they are used only by the objects to which they are assigned.

The material manager is designed to let you easily view and manage your material libraries. Most of the commands that you need for managing your libraries are found in the Libraries menu.

Click a library tab to switch between libraries. The selected tab becomes the current library. Unless you explicitly create a new material in another library, all newly created materials are added to the current library.

You can also manage your libraries using an explorer with its scope set to Materials (press M).

Storing materials in a library makes it easy to share a single material between several objects. It also allows you to access and edit all of the materials in a scene from a single place. Furthermore, because materials belong to libraries and not to individual objects, you can delete an object from the scene, but keep its material for later use. If you no longer want to use a material, you can simply delete it once, regardless of the number of objects to which it’s assigned.

You can create as many material libraries as you need. For example, you might want to keep separate libraries for different types of materials (wood, metals, rock, skin, scales, and so on), or create a material library for each character in your scene.

You can drag and drop materials onto the Favorites tab in the material manager to create shortcuts to materials that you want to keep handy. You can also create your own custom “favorites” tabs to collect and sort the material shortcuts as you like.

By default, material libraries are stored internally as part of the scene. However, you can store them externally, as dotXSI (.xsi) or material library (.xsiml) files, which allows you to share them between multiple scenes.

The Default Material Library

Every new scene has a material library called DefaultLib. Initially, the library contains only the default scene material, but all new materials that you create in the scene are added to the default library until you create or import a new library and set it as the current library.
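The ownership model described above — materials live in libraries, objects merely reference them, and a shared material need only be deleted once — can be sketched in a few lines. This is a minimal illustration (hypothetical classes, not the Softimage object model):

```python
# Minimal sketch of the material-library ownership model: materials belong
# to libraries, objects hold references (not copies), and objects without a
# local assignment fall back to the scene default.

class Material:
    def __init__(self, name):
        self.name = name

class MaterialLibrary:
    def __init__(self, name):
        self.name = name
        self.materials = {}

    def create_material(self, name):
        mat = Material(name)
        self.materials[name] = mat
        return mat

    def delete_material(self, name):
        # One deletion, regardless of how many objects use the material.
        del self.materials[name]

class SceneObject:
    def __init__(self, name, material=None):
        self.name = name
        self.material = material  # reference into a library, not a copy

    def resolved_material(self, default):
        # Objects with no local assignment use the scene default material.
        return self.material if self.material is not None else default

default_lib = MaterialLibrary("DefaultLib")
scene_default = default_lib.create_material("Scene_Material")
wood = default_lib.create_material("Wood")

table_top = SceneObject("table_top", wood)
table_leg = SceneObject("table_leg", wood)   # same material, shared
teapot = SceneObject("teapot")               # no local material

print(table_top.resolved_material(scene_default).name)  # Wood
print(teapot.resolved_material(scene_default).name)     # Scene_Material
```

Because `table_top` and `table_leg` reference the same `Material` instance, editing or deleting `Wood` affects both at once — the behavior the library design is meant to provide.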

Section 18

Texturing

Texturing is the process of adding color and texture to an object. You can use textures to define everything from basic surface color to more tactile characteristics like bumps or dirt. Textures can also be used to drive a wide variety of shader parameters, allowing you to create maps that define an object’s transparency, reflectivity, bumpiness, and so on.

What you’ll find in this section ...

• How Surface and Texture Shaders Work Together

• Types of Textures

• Applying Textures

• Texture Projections and Supports

• Editing Texture Projections

• UV Coordinates

• Editing UV Coordinates in the Texture Editor

• Texture Layers

• Bump Maps and Displacement Maps

• Baking Textures with RenderMap

• Painting Colors at Vertices

How Surface and Texture Shaders Work Together

Surface shaders and texture shaders work together to create an object’s look. A surface shader defines how an object responds to lighting, and defines other basic characteristics such as transparency and reflectivity. A texture shader applies either an image or a procedural texture onto the object. The texture doesn’t “cover” the surface shader; rather, it is combined with the surface shader such that the object is textured and responds correctly to scene lighting.

In most cases, a surface shader is connected to the material node’s Surface port, and then a texture shader is connected to the Ambient and Diffuse parameters of the surface shader. The following example illustrates how combining texture shaders and surface shaders affects the final result.

A texture shader connected to the Surface port of the cow’s body’s material. Note that without a surface shader, the lighting appears constant.

Using the texture shader to drive the surface shader’s Ambient and Diffuse colors produces a textured cow that responds properly to lighting.

A Blinn shader connected to the Surface port of the cow’s body’s material node. The hoofs, horns, and so on have different materials.
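The division of labor in the example above — the texture supplies color while the surface shader modulates it by the lighting — can be sketched numerically. This is a conceptual illustration with a simplified Lambert-style formula, not actual shader code:

```python
# Conceptual sketch: a texture wired straight into the Surface port gives a
# constant, unlit color, while a texture driving a surface shader's Ambient
# and Diffuse colors still responds to lighting.

def texture_only(tex_color):
    # Texture connected directly to the Surface port: lighting is ignored.
    return tex_color

def lambert(tex_color, light_intensity, ambient=0.1):
    # Texture color modulated by ambient + diffuse lighting (simplified).
    return tuple(min(1.0, c * (ambient + light_intensity)) for c in tex_color)

brown = (0.6, 0.4, 0.2)
print(texture_only(brown))   # same color everywhere on the surface
print(lambert(brown, 0.9))   # lit side: full texture color
print(lambert(brown, 0.0))   # shadow side: only the ambient term remains
```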

Types of Textures

Softimage allows you to use two different types of textures: image textures, which are separate image files applied to an object’s surface, and procedural textures, which are calculated mathematically.

Image Textures

Image textures are images that can be wrapped around an object’s surface, much like a piece of paper that’s wrapped around an object. To use a 2D texture, you start with any type of picture file (PIC, TIFF, PSD, and so on). These can be scanned photos or any files containing RGB or RGBA data that describes all of the pixels in an image.

Image Sources and Clips

Every time you select an image to use as a texture or for rotoscopy, an image clip and an image source of the selected image are created.

• An image source is not really a usable scene element. It is merely a pointer to the original image stored on disk. Image sources are listed in your scene in the Sources folder of the Scene Root. They can be stored within your project folder structure, or outside of it.

• An image clip is a copy, or instance, of an image source file. Each time you use an image source, an image clip of it is created. You can have as many clips of the same source as you wish. You can then modify the image clip without affecting the original source image.

Clips are useful because they allow you to create different representations of the same texture image (source), such as five different blur levels of the same source image. Also, clips are memory-efficient because the source is loaded only once, regardless of the number of clips created from it.

Procedural Textures

Procedural textures are generated mathematically, each according to a particular algorithm. Typically, they are used to simulate natural materials and patterns such as wood, marble, rock, veins, and so on.

Softimage’s shader library contains both 2D and 3D procedural textures. 2D procedurals are calculated on the object’s surface — according to their texture projections — while 3D procedurals are calculated through the object’s volume. In other words, unlike 2D textures, 3D textures are projected “into” objects rather than onto them. This means they can be used to represent substances having internal structure, like the rings and knots of wood.

2D textures are wrapped around objects.

3D textures are defined throughout an object.
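The 2D-versus-3D distinction can be made concrete with two toy procedurals. These formulas are illustrative assumptions, not Softimage’s actual shaders: the 2D texture is evaluated from surface UV coordinates, while the 3D texture is evaluated from a point anywhere inside the object’s volume:

```python
import math

# A 2D procedural is sampled on the surface via the projection's UVs;
# a 3D procedural is sampled through the volume, so any cut through the
# object shows consistent internal structure (like wood rings).

def checker_2d(u, v, squares=8):
    # Evaluated on the surface from texture coordinates.
    return (int(u * squares) + int(v * squares)) % 2

def wood_rings_3d(x, y, z, ring_spacing=0.25):
    # Evaluated through the volume: concentric rings around the Y axis.
    r = math.hypot(x, z)
    return int(r / ring_spacing) % 2

print(checker_2d(0.1, 0.1), checker_2d(0.1, 0.2))  # adjacent squares differ
# The same ring value at any height: the grain runs through the object.
print(wood_rings_3d(0.3, 0.0, 0.0), wood_rings_3d(0.3, 0.9, 0.0))
```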

Applying Textures

There are a number of ways to connect textures to objects in Softimage. These include:

• Using the render tree, where you can choose a texture from the Nodes > Texture menu. Once you choose a texture, it is added to the render tree workspace and you can connect it to the material’s or other shaders’ ports.

• Using the Get > Texture menu, which lists commonly used texture shaders that can be connected to any combination of a surface shader’s ambient, diffuse, transparency, and reflection ports.

• Using the parameter connection icon menu in a shader’s property editor, which lists textures that you can attach directly to the parameter. Attaching a texture to a parameter lets you control the parameter with a texture instead of a simple color or numeric value.

This is a convenient way to connect a texture to a surface shader’s Ambient and Diffuse ports immediately after applying the surface shader to the object.

Adding More Textures

To add a texture in addition to one that is already applied, choose Modify > Texture > Add from the Render toolbar.

This adds a new texture layer to the object’s surface shader. The parameters to which you add the new texture are added to the layer, and the layer’s texture is blended with them.

Choosing a texture from the Nodes > Texture menu adds it to the render tree workspace.

Choose Modify > Texture > Add from the Render toolbar.

The menu lists texture shaders that can be blended with the surface shader via a new texture layer.

Texture Projections and Supports

Typically when you apply a texture to an object, a texture projection and texture support are created.

The texture support is a graphical representation of how the texture is projected on the object. It defines the type of projection and applies textures to your 3D objects using that definition.

By default, an object’s texture support is constrained to the object; otherwise, animated objects would move through space without their projection. Transforming the texture support is a useful way of animating or repositioning a texture on an object.

Texture projections exist on the support and record the correspondence between pixels in the texture and points on the object’s surface—in other words they define where the texture is projected on the object.

You can transform a texture projection on a given support to define the part of the object to which the texture is applied. You can then add any number of projections, adjacent or overlapping, to the support.

The sphere shown below has three texture projections connected to its support. The wireframe view on the left shows how the projections are positioned, and the textured view on the right shows the rendered result.

Texture support

Texture projections

Rendered result of how the textures are projected onto this sphere.

Types of Texture Projections

Choosing the right type of texture projection is an important part of the texturing process. The more closely the projection conforms to the original shape of the object, the less you’ll have to adjust the texture to get the object looking just right. This section describes the types of texture projections that are available to you.

All of the projections described can be applied to objects from the Render toolbar’s Get > Property > Texture Projection menu.

You can also create and apply texture projections from any texture shader’s property editor. Every texture shader needs a projection to define where the texture should appear on the object.

Cylindrical Projections

If you map the picture file cylindrically, it is projected as if wrapped around a cylinder.

Spherical Projections

A standard spherical projection stretches the texture over the front of the object so that its edges meet at the back. Distortion occurs towards the pinch points at the object’s +Y and -Y poles.

Lollipop Projections

A lollipop projection is a spherical-type projection that stretches the texture over the top of the object so its corners meet on the bottom, like the wrapper of a lollipop. A single pinch-point occurs at the -Y pole.

Planar Projections

Planar projections are used for mapping textures onto an object’s XY, XZ, and YZ planes. By default, the projection plane is one pixel smaller than the surface plane, so no “streaking” or distortion occurs on the object’s other planes.
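The basic projection types above reduce to simple formulas for turning a surface point into UV coordinates. The conventions below are illustrative assumptions (axis choices and wrapping may differ from Softimage’s exact implementation), but they show why each type distorts where it does:

```python
import math

# Illustrative UV formulas for planar, cylindrical, and spherical
# projections of a 3D point (not Softimage's exact conventions).

def planar_xy(x, y, z):
    # Project straight along Z; z is ignored, which is why planar maps
    # streak on faces perpendicular to the projection plane.
    return (x, y)

def cylindrical(x, y, z):
    # Unwrap the angle around the Y axis into U; height becomes V.
    u = (math.atan2(z, x) / (2 * math.pi)) % 1.0
    return (u, y)

def spherical(x, y, z):
    # Longitude -> U, latitude -> V; V pinches at the +Y and -Y poles.
    r = math.sqrt(x * x + y * y + z * z)
    u = (math.atan2(z, x) / (2 * math.pi)) % 1.0
    v = math.acos(y / r) / math.pi
    return (u, v)

u, v = spherical(0.0, 1.0, 0.0)  # a point at the +Y pole
print(round(v, 3))               # 0.0: every U maps here -> pinch point
```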

Cubic Projections

A cubic projection assigns an object’s polygons to a specific face of the cube based either on the orientation of their normals, or their positions relative to the cubic texture support. The texture is then projected onto each face using a planar or spherical projection method.

By default, the entire texture is projected onto each face. However, you can choose from a number of different cubic projection presets. You can also transform each face of the cube individually and save the transformations as presets of your own.

UV Projections

UV projections are useful for texturing NURBS surface objects. They behave like a rubber skin stretched over the object’s surface. The points of the object correspond exactly to particular coordinates in the texture, allowing you to accurately map a texture to the object’s geometry. Even when you deform an object, its texture follows the object’s geometry.

Spatial Projections

A spatial projection is a three-dimensional UVW texture projection that has either the object’s origin or the scene’s origin as its center. Spatial projections are used to apply procedural textures that are computed mathematically, rather than being somehow wrapped around the object.

A cubic projection is applied to a cube so that the entire texture image is projected onto each face.

A cubic projection is applied to a head so that a different part of the texture image is projected onto each face.

Face labels on both figures: +X (right), -X (left), +Y (top), -Y (bottom), +Z (front), -Z (back).

A NURBS surface (left) with a wood texture applied using a planar XZ map (below left) and a UV map (below right). With the UV map applied, the pattern accurately follows the contours of the object.

By default, a spatial projection’s texture support appears in the center of the textured object’s volume.

Polygon sphere with a vein texture applied using a spatial projection.

Camera Projections

A simple and convenient way to texture objects is to project a texture from the camera onto the object’s surface, much like a slide projector does. This is useful for projecting live action backgrounds into your scene so you can model and animate your 3D elements against them.

Changing the camera’s position changes the projection’s position. Once you have positioned the texture on the surface to your liking, you can freeze the projection.

Unfolding

Unfolding creates a UV texture projection by “unwrapping” a polygon mesh object using the edges you specify as cut lines or seams. When unfolding, the cut lines are treated as if they are disconnected to create borders or separate islands in the texture projection. The result is like peeling an orange or a banana and laying the skin out flat.

Unfolding does not rely on a texture support. To adjust the projection further, edit the UV coordinates in the texture editor.

Texture image used

Wireframe view of the rendered frame.

Top view showing where the texture is projected.

Final rendered frame

In this example, the corner of a room was textured using the original texture (top left). The texture was projected from a scene camera (top right). The rendered result shows the modeled teddy bear against the projected background.

Contour Stretch UVs Projection (Polygons Only)

Contour Stretch UVs projections allow you to project a texture image onto a selection of an object’s polygons. Rather than projecting according to a specific form, however, a contour stretch projection analyzes a four-cornered selection to determine how best to stretch the polygons’ UV coordinates over the image.

Contour stretch projections are useful for a number of different texturing tasks, particularly for applying textures to tracks, and irregular, terrain-like meshes. They are also useful for fitting regular-shaped textures onto curved meshes. For example, they would be useful to place a label texture on a beer bottle, right at the junction of the bottle’s neck and body.

Contour stretch projections do not have the same alignment and positioning options as other projections. Instead, you select a stretching method that is appropriate to the selection’s topology and complexity. Also, contour stretch projections do not have a texture support; to adjust the projection further, edit the UV coordinates in the texture editor.

The contour stretch projection is ideal for texturing a curvy path like this road.
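The core idea of fitting the unit texture square to an arbitrary four-cornered region can be sketched with bilinear interpolation of the four corner points. Softimage’s actual stretching methods are more sophisticated; this is only a simplified illustration of the underlying mapping:

```python
# Hedged sketch: bilinear interpolation maps texture coordinates (u, v) in
# the unit square onto a four-cornered region such as a road segment.

def bilinear(corners, u, v):
    """Map (u, v) in [0,1]^2 into the quad corners = (p00, p10, p11, p01)."""
    (x00, y00), (x10, y10), (x11, y11), (x01, y01) = corners
    x = (1-u)*(1-v)*x00 + u*(1-v)*x10 + u*v*x11 + (1-u)*v*x01
    y = (1-u)*(1-v)*y00 + u*(1-v)*y10 + u*v*y11 + (1-u)*v*y01
    return (x, y)

# A skewed quad standing in for a curved mesh region's four corners.
quad = ((0, 0), (4, 1), (5, 4), (1, 3))
print(bilinear(quad, 0.0, 0.0))  # texture corner lands on a quad corner
print(bilinear(quad, 0.5, 0.5))  # the texture center lands mid-quad
```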

Unique UVs Projection (Polygons Only)

Unique UVs mapping applies a texture to polygon objects using one of two possible methods:

• Individual polygon packing assigns each polygon’s UV coordinates to its own distinct piece of the texture so that no one polygon’s coordinates overlap another’s.

This is useful for rendermapping polygon objects. You can apply textures to an object using a projection type appropriate to its geometry, then rendermap the object using a new Unique UVs projection to output a texture image that you can reapply to the object. The texture is applied properly to each polygon without your having to “unfold” the object to make it fit.

• Angle Grouping, after deciding on a projection direction, groups neighboring polygons whose normal directions fall within a specified angle tolerance. This process is repeated until all of the object’s polygons are in a group. The groups—or islands—are then assigned to distinct pieces of the texture so that no two islands’ coordinates overlap each other.

Unique UVs projections do not have a texture support. To adjust the projection further, edit the UV coordinates in the texture editor.

A Unique UVs projection was applied to this sphere.

The Individual Polygon Packing method produces UV coordinates that look like this: each polygon’s UV coordinates are separated from the rest of the coordinate set so they can be assigned to their own portion of the texture.

The Angle Grouping method produces “islands” of polygons.
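The Angle Grouping step can be sketched as a greedy flood fill over the mesh’s adjacency graph: starting from a seed polygon, neighbors whose normals fall within the angle tolerance of the seed are gathered into the same island. This is a simplified illustration, not Softimage’s implementation:

```python
import math

# Simplified angle-grouping sketch: gather neighboring polygons whose
# normals lie within a tolerance of the seed polygon's normal.

def angle_between(n1, n2):
    # Angle in degrees between two unit normals.
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def group_by_angle(normals, neighbors, tolerance=30.0):
    islands, assigned = [], set()
    for seed in range(len(normals)):
        if seed in assigned:
            continue
        island, stack = [], [seed]
        while stack:
            p = stack.pop()
            if p in assigned:
                continue
            assigned.add(p)
            island.append(p)
            for q in neighbors[p]:
                if q not in assigned and angle_between(normals[seed], normals[q]) <= tolerance:
                    stack.append(q)
        islands.append(sorted(island))
    return islands

# Four polygons in a strip: 0 and 1 face up, 2 and 3 face sideways.
normals = [(0, 1, 0), (0, 1, 0), (1, 0, 0), (1, 0, 0)]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(group_by_angle(normals, neighbors))  # [[0, 1], [2, 3]]
```

The two resulting islands would then be packed into non-overlapping regions of the texture, as the caption above shows.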

Editing Texture Projections

A texture projection’s property editor contains options for modifying, transforming and renaming the projection. You can open a texture projection’s property editor by selecting an object and choosing one of the following from the Render toolbar:

• Modify > Projection > Inspect Current UV opens the property editor for the object’s current texture projection. This is the projection used when the object is viewed in a textured display mode (textured, textured decal, and so on).

• Modify > Projection > Inspect All UVs opens a multi-selection property editor for all of the object’s texture projections.

• Modify > Texture > name of the texture opens the texture’s property editor. Then click the Edit button on the Texture tab (beside the Texture Projection list) to open the Texture Projection property editor.

Making Projections Implicit

You can make most texture projections into implicit projections. Implicit projections are slightly slower to render because each one performs its own projection computation (based on a predefined projection model — spherical, planar, and so on) at each pixel, as opposed to using predefined interpolated UV data like explicit projections do.

You would make a texture projection implicit to obtain a better overall result for spherical and cylindrical projections on an object, especially one with few polygons. For example, when mapping a texture onto a sphere (using either a spherical or cylindrical projection), implicit texturing produces more accurate results at the sphere’s poles than explicit projection does.

Wrapping Texture Projections

The texture projection’s wrapping options control whether the texture extends past the projection’s boundaries to wrap around the object.

The examples below show a sphere whose texture projection has been adjusted such that the texture covers only a portion of the object’s surface. You can see the effect of wrapping in different directions.

Both spheres have a texture applied to their diffuse parameter. The sphere on the left uses an explicit projection and the sphere on the right uses an implicit projection.

Four panels: No Wrapping, Wrap in U, Wrap in V, and Wrap in U and V.
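The wrapping behavior can be sketched as a coordinate lookup: inside the projection’s boundaries the coordinate is used directly, and outside them a wrapped direction repeats the texture via a modulo, while an unwrapped direction samples nothing. Representing “nothing” as `None` is an assumption for illustration:

```python
# Sketch of the wrap options: outside [0, 1], a wrapped coordinate repeats
# the texture (modulo), while an unwrapped one falls off the texture.

def sample_coord(t, wrap):
    if 0.0 <= t <= 1.0:
        return t
    return t % 1.0 if wrap else None

def lookup(u, v, wrap_u=False, wrap_v=False):
    su, sv = sample_coord(u, wrap_u), sample_coord(v, wrap_v)
    if su is None or sv is None:
        return None  # the texture does not cover this point
    return (su, sv)

print(lookup(1.25, 0.5))                            # None: no wrapping
print(lookup(1.25, 0.5, wrap_u=True))               # (0.25, 0.5): repeats in U
print(lookup(1.25, 1.5, wrap_u=True, wrap_v=True))  # repeats in both directions
```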

Transforming Texture Projections

By default, a texture projection fills the entire texture support. For example, if you apply a simple XZ Planar projection to a grid, the texture coordinates span the entire projection from one grid corner to the other. You can transform the texture projection to reposition the texture, or to make room on the support for other projections in different locations.

There are two ways to transform texture projections—using the projection manipulator in a 3D view, or by editing the scaling, rotation, and translation values in the Texture Projection property editor.

To activate the projection manipulator, press j, or choose Modify > Projection > Edit Projection Tool from the Render toolbar.

Muting and Freezing Texture Projections

Once you have scaled, rotated, or translated a texture projection to your liking, you can freeze it permanently or mute it temporarily. Freezing a texture projection is the equivalent of freezing the texturing operator stack. This is useful if you want to avoid accidentally editing or moving your texture support, especially when the object is animated.

Drag one of the corner handles or borders to scale the projection.

Drag the green arrow to scale the projection vertically.

Drag the red arrow to scale the projection horizontally.

Drag the red line to translate the projection horizontally.

Drag the green line to translate the projection vertically.

Drag the intersection of the red and green arrows to translate the projection freely.

Middle-click + drag to rotate the projection about its center.

In edit mode, the mouse cursor changes to this icon.

Right-click to switch to another projection, if one exists.

The texture projection manipulator allows you to reposition a texture projection on an object by changing the projection’s position on the texture support.

UVW Transformation controls

Alternatively, you can use the texture projection definition parameters to transform a texture on the surface of an object.
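Either way — manipulator or definition parameters — transforming a projection amounts to applying the same scale, rotation, and translation to every UV coordinate. The 2D sketch below is illustrative (the order of operations and conventions are assumptions, not Softimage’s exact definition):

```python
import math

# Sketch of transforming a projection by editing its UVW values: applying
# one scale/rotate/translate to all UVs repositions the texture on the
# support (2D case, scale -> rotate -> translate order assumed).

def transform_uv(u, v, scale=(1.0, 1.0), rotate_deg=0.0, translate=(0.0, 0.0)):
    su, sv = u * scale[0], v * scale[1]
    a = math.radians(rotate_deg)
    ru = su * math.cos(a) - sv * math.sin(a)
    rv = su * math.sin(a) + sv * math.cos(a)
    return (ru + translate[0], rv + translate[1])

# Shrink the projection to a quarter of the support and nudge it over:
print(transform_uv(1.0, 1.0, scale=(0.5, 0.5), translate=(0.25, 0.0)))  # (0.75, 0.5)
```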

UV Coordinates

Applying a texture projection to an object creates a set of texture coordinates — often called UV coordinates or simply UVs — that control where the texture corresponds to the surface of the object.

• On a polygon object, each vertex can hold multiple UV coordinates — one for each polygon corner that shares the vertex. The portion of the texture enclosed by a polygon’s UVs is mapped to the polygon.

• On NURBS objects, UV coordinates are not stored at the vertices; instead, they are generated based on a regular sampling of the object’s surface. However, as with polygon objects, the portion of the texture enclosed by, say, four UVs is mapped to the corresponding portion of the object.

You can view and adjust UV coordinates using the texture editor, where they are represented by sample points. When you select sample points, you are actually selecting the UV coordinates held at the corresponding position on the object.

For example, as you can see in the images below, the center point of a 2x2 polygon grid holds four UV coordinates. When you select the corresponding sample point in the texture editor, you are selecting all four coordinates (although it is possible to select a single polygon-corner’s UV coordinate).

In this example, the image shown at left was used to texture a 2x2 polygon grid such that each polygon’s UV coordinates were mapped to the texture differently.

This exploded view of the textured grid shows how each polygon’s UVs correspond to the texture image.

The grid’s middle vertex holds four overlapping UVs. Each UV belongs to a specific polygon and holds a coordinate which, along with the polygon’s other UV coordinates, defines the portion of the texture mapped onto that polygon.
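The per-corner storage described above can be sketched as a mapping from each polygon to the UVs of its corners. The names and structure here are a hypothetical illustration, not Softimage’s data model; the point is that one shared vertex can hold several UV coordinates, one per adjoining polygon:

```python
# Sketch of per-corner UV storage on a 2x2 polygon grid: the shared middle
# vertex (index 4) holds one UV per polygon corner, so a single sample
# point in the texture editor can represent several coordinates at once.

polygon_uvs = {
    "poly0": {4: (0.5, 0.5)},
    "poly1": {4: (0.5, 0.5)},
    "poly2": {4: (0.5, 0.5)},
    "poly3": {4: (0.5, 0.5)},
}

def uvs_at_vertex(vertex):
    """All UV coordinates held at one vertex, one per polygon corner."""
    return [uvs[vertex] for uvs in polygon_uvs.values() if vertex in uvs]

print(len(uvs_at_vertex(4)))  # 4: one UV per adjoining polygon

# Moving a single polygon-corner's UV independently (as tearing allows)
# separates the polygons at that vertex:
polygon_uvs["poly0"][4] = (0.45, 0.5)
print(len(set(uvs_at_vertex(4))))  # 2 distinct positions now
```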

Editing UV Coordinates in the Texture Editor

When you apply an image to an object, it’s unlikely that it will fit perfectly. The next step after applying the texture is to adjust what parts of the image correspond to the various parts of your object.

You can do this using the texture editor, which displays an object’s UV coordinates. These are a two-dimensional representation of the object’s geometry which, when superimposed on a texture image, shows what portion of the texture appears on any part of the object’s surface.

By selecting the object’s UV coordinates and moving them to a new location, you can control which portions of the texture correspond to different parts of the object. The texture editor has a wide variety of tools to help you select and move UV coordinates.

To open the texture editor, press 7 or choose View > Rendering/Texturing > Texture Editor from the main menu.

Texture image: the image clip currently applied to the object.

Selected UVs are highlighted red, and unselected UVs are blue.

Texture editor command bars provide quick access to commonly used texture editor commands.

UV position boxes allow you to move selected sample points to precise U and V locations.

Texture editor menu bar contains all of the texture editor commands, including those accessible from the command bars.

Texture editor workspace is where you manipulate the selected object’s UV coordinates.

Status bar displays the UV coordinates, pixel coordinates, and RGBA values of the current mouse pointer position.

Connectivity Tabs help you make sense of the object’s UVs by highlighting boundaries shared between UV “islands”.

This character and his head are separate objects, each with its own projection. Both sets of UVs are shown in the texture editor.

Choosing a Texture Image to Display

The texture editor’s Clips menu allows you to choose the image clip that is displayed in the texture editor workspace, as well as on the object in the 3D views in Textured and Textured Decal display modes while the texture editor is open. It contains an alphabetical list of all of the image clips used by the selected object(s). Choosing different clips allows you to see how the same set of UV coordinates maps different textures onto an object.

Displaying the Checkerboard

Clips > Checkerboard displays a checkerboard image in the texture editor workspace, as well as on objects in the 3D views in Textured and Textured Decal display modes while the texture editor is open. This is useful for seeing how the texture image gets stretched over different areas of an object. An even distribution of regularly sized squares indicates minimal stretching, which is usually preferable, although you may want a higher density of squares in areas of high detail such as the face of a character. You can set the number of squares in the texture editor preferences.

Dimming the Texture Image

If you’re having trouble seeing a projection’s UV coordinates in the texture editor workspace, you can dim the texture image to make the coordinates more visible. Click the Dim Image button or choose View > Dim Image.

Selecting and Editing UV Coordinates

Use the Vertex, Edge, or Polygon selection filters to select the texture samples associated with vertices, edges, or polygons. ALL lets you select the samples associated with any component type, depending on what you click on. ISL selects whole islands, and CLS restricts the selection to clusters.

Once you have selected samples, you can edit them using the transform tools (x, c, and v) or other commands.

Tearing

When tearing is off, connected and coincident UV samples are automatically affected by any manipulation even if they are not explicitly selected.

When tearing is on, it’s possible to separate samples into discontinuous islands. Polynode bisectors appear, which allow you to select individual samples at a vertex.

Polygon Bleeding

When polygon bleeding is on and you select samples, all samples belonging to the adjacent polygons become selected automatically. This allows you to move the polygons in a block without internal distortion.

Displaying an image clip in the texture editor workspace does not apply the image to the selected object.

Editing Multiple UV Sets

You can display and edit multiple UV coordinate sets simultaneously in the texture editor. Simply select multiple objects or an object with multiple UV sets, and open or update the texture editor. On the UVs menu, Shift+click to toggle the display of specific UV coordinate sets.

Any UV coordinate set that is currently displayed in the texture editor is “live” and can be edited. You can select and modify sample points on multiple coordinate sets simultaneously. You can snap sample points from one set to another, and copy and paste coordinates between sets. The different coordinate sets are independent for the purposes of operations like healing, relaxing, and matching.

Texture Layers

Texture layering is the process of mixing several textures together, one after the other, such that each texture is blended with the cumulative result of the preceding textures. In Softimage, you can use this technique to build complex effects by adding texture layers to an object’s material or its shaders.

When you add a texture layer to a shader, one or more of that shader’s parameters, or ports, is added to the layer. The layer is mixed on the selected ports, in accordance with its assigned strength, or weight, using one of several different mixing methods.

For texture layering purposes, the shader’s ports are collectively treated as the base layer with which the texture layers are blended. If some of the shader’s ports are connected to other shaders, those shaders are considered part of the base layer as well. For example, if you’ve connected a Cell texture to a Phong shader’s Ambient and Diffuse ports, the Cell texture is treated as part of the Phong’s base layer.

What makes texture layers so powerful is that at any time in the texturing process, you can add, modify, and remove any layer, giving you complete control over the resulting effect. You can also quickly and easily change the order in which layers are blended together, something that’s quite difficult to do when you mix textures using mixer shaders in the render tree. Because texture layers only affect designated ports, you can blend a number of layers with each of a shader’s attributes and create a complex effect for each.
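The cumulative blending described above can be sketched numerically: each layer is mixed with the result of everything beneath it, scaled by its weight and mask. The mixing formulas here are illustrative assumptions, not Softimage’s exact modes:

```python
# Sketch of texture layering: each layer blends with the cumulative result
# of the base layer plus all preceding layers, scaled by weight * mask.

def blend(base, layer_color, weight, mask=1.0, mode="over"):
    w = weight * mask
    if mode == "over":
        mixed = [l * w + b * (1 - w) for b, l in zip(base, layer_color)]
    elif mode == "multiply":
        mixed = [b * (l * w + (1 - w)) for b, l in zip(base, layer_color)]
    return tuple(mixed)

base = (0.8, 0.8, 0.8)                     # the shader's base color
sign = blend(base, (1.0, 0.0, 0.0), 1.0)   # layer 1: sign texture, full weight
rusty = blend(sign, (0.4, 0.2, 0.1), 0.5)  # layer 2: rust at half weight

print(sign)   # (1.0, 0.0, 0.0): a full-weight layer replaces the base
print(rusty)  # halfway between the sign color and the rust color
```

Reordering the layers just changes which result each call receives as its `base` — the ease of reordering that the text highlights.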

The parameters of the grid’s Lambert surface shader are represented in the base layers. In this case, nothing is connected to the Lambert shader’s ports, so only the base colors are shown.

The first layer adds the basic sign texture to the Ambient and Diffuse ports. The texture’s alpha channel is used to control transparency, cutting out the shape of the sign.

The second layer adds some rust. The rust texture is blended with the Ambient and Diffuse ports according to its alpha channel, and a separate mask—in this case, a weight map.

The final layer, blended with Ambient, Diffuse, and Transparency adds the bullet holes. Bump mapping is activated in the layer’s shader, creating the depression around each bullet hole.

The weather-beaten road sign shown here was created by adding three texture layers to a basic Lambert-shaded grid. The images on the left show the cumulative effect of the layers.

The Texture Layer Editor

The texture layer editor is a grid-style editor from which you can view and edit all of a shader or material’s texture layers.

The advantage of using the texture layer editor is that it packs a tremendous amount of information into a relatively compact interface. At a glance, you can see which shaders are directly connected to a shader’s port, how many texture layers have been added to the shader, how many ports those layers affect, and how and in which order the layers are blended together. Add to this the ability to modify the majority of each layer’s properties, and the texture layer editor makes for quite a powerful tool.

To open the texture layer editor, choose View > Rendering/Texturing > Texture Layer Editor from the main menu.

The Base Colors layer displays color boxes for unconnected ports.

Texture layers are blended with the base layer and with each other.

The selected shader’s ports can be added to texture layers and base layers.

Base layers represent shaders that are directly connected to the current shader’s ports.

The shader list displays all of the shaders connected to the current selection’s material. Select a shader to update the editor with its layers.

The texture controls allow you to control the texture projections assigned to selected layers’ inputs.

Layer/port controls indicate that the port has been added to the layer. An empty cell indicates that the port is not affected by the layer.

Layer controls and layer/port controls allow you to set texture layer properties.

Page 341: XSI guia basica

Basics • 341

Texture Layers

Texture Layers in the Render Tree

When a shader has one or more texture layers, a new section called Layers is added to its node in the render tree. The Layers section contains a parameter group for each of the shader’s layers.

Expanding the Layers section reveals all of the individual layer parameter groups. Expanding an individual texture layer’s parameter group reveals the ports for its Color and Mask parameters.

Layers behave exactly like any other parameter group in the render tree, meaning that you can connect shaders to texture layer parameters as you would to any other shader parameter. This lets you control each texture layer with its own branch of the render tree.

Layers section

Expanded layer parameter group

Collapsed layer parameter group

Shader ports that have been added to layers are marked with a small blue “L”.

Layer Color and Mask ports

Page 342: XSI guia basica

Section 18 • Texturing

342 • Softimage

Bump Maps and Displacement Maps

Real surfaces can be perfectly smooth, but you are far more likely to encounter surfaces with flaws, bumps, and ridges. You can add this kind of “noise” to object surfaces using bump maps and displacement maps.

Bump Maps

Bump maps use textures to perturb an object’s shading normals to create the illusion of relief on the object’s surface. Because they do not actually change the object’s geometry, they are best suited to creating fine detail that does not come too far off the surface.
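Conceptually, a bump map tilts the shading normal by the local slope of a height texture without moving any vertices. The sketch below is plain Python with invented names, not the Softimage API; it perturbs a flat surface normal using finite differences of a `height(x, y)` function:

```python
import math

# Illustrative bump-mapping sketch: the geometry is untouched; only the
# shading normal is tilted by the gradient of a height function.

def perturbed_normal(height, x, y, strength=1.0, eps=1e-3):
    """Tilt the flat normal (0, 0, 1) by the slope of height(x, y),
    then renormalize the result."""
    dhdx = (height(x + eps, y) - height(x - eps, y)) / (2 * eps)
    dhdy = (height(x, y + eps) - height(x, y - eps)) / (2 * eps)
    n = (-strength * dhdx, -strength * dhdy, 1.0)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```

A flat height function leaves the normal unchanged; a sloped one tilts it, which is what creates the illusion of relief under lighting.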

Creating a Bump Map

To give you the most control over surface bumping, the best way to create a bump map is to connect a Bumpmap shader to the Bump Map port of an object’s material node.

However, every texture shader has bump map parameters, so you can create a bump map using textures that you’ve connected to, for example, a surface shader’s Ambient and Diffuse ports.

When Not to Use Bump Maps

Because bump maps do not actually alter object geometry, their limitations can become apparent when too much relief is required.

Consider the sphere shown here: even with a very high bump step, the bumping is not convincing on the silhouette where there is no indication that the surface is raised.

In these cases, it’s better to either model the necessary geometry or to use a displacement map.

Displacement Maps

A displacement map is a scalar map that, for each point on an object’s surface, displaces the geometry in the direction of the object’s normal. Unlike regular bump mapping that “fakes” the look of relief, displacement mapping creates actual self-shadowing geometry.
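The displacement operation itself is simple to state: each surface point moves along its normal by the scalar map value. A minimal sketch (plain Python, illustrative only, not the Softimage API):

```python
# Illustrative displacement sketch: unlike bump mapping, the point's
# actual position changes, so silhouettes and self-shadowing change too.

def displace(point, normal, height, scale=1.0):
    """Return point + scale * height * normal (3-tuples and scalars)."""
    return tuple(p + scale * height * n for p, n in zip(point, normal))
```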

The sphere shown here was bump-mapped with a fine noise. A negative bump factor was used to make the white areas bump outward.

The sphere shown here was displacement-mapped using the texture shown below.

Page 343: XSI guia basica

Basics • 343

Bump Maps and Displacement Maps

Creating a Displacement Map

You create a displacement map by connecting a texture, preferably grayscale, to the Displacement port of an object’s material node. It is often helpful to add an intensity node between the map and the material node to help control the displacement.

Setting Displacement Map Parameters

In addition to any shaders that you add to the render tree to modulate displacement, the main displacement controls are on the Displacement tab of the object’s Geometry Approximation property editor. From there, you can choose the type of displacement appropriate to your object and refine the displacement effect.

When Not to Use a Displacement Map

Because they actually modify object geometry, displacement maps can take considerably longer to render than bump maps. Generally speaking, you should not use a displacement map if you can achieve a satisfactory effect using a bump map.

Using Displacement Maps and Bump Maps Together

You can use bump maps and displacement maps together to create extremely detailed surfaces. Typically, the best approach is to use a displacement map to create the coarser surface detail — major features that need to be visible at the object’s edges and can benefit from self-shadowing. You can then use the bump map to create a top layer of fine detail. The bump-mapping is applied to the displaced geometry.

The sphere on the left uses a bump map, while the one on the right uses a displacement map. In this case, the difference is slight enough that the bump map’s shorter render time makes it the better choice.

This sphere uses the texture on the left as a displacement map to create coarse surface detail, and the texture on the right as a bump map to create fine surface detail.

Page 344: XSI guia basica

Section 18 • Texturing

344 • Softimage

Reflection Maps

Reflection maps, also called environment maps, can be used to simulate an image reflected on an object’s surface, without using actual raytraced reflections. They can also be used to add an extra reflection to an object’s reflective, raytraced surface.

When an object is reflective, you can define whether the reflections on its surface are Raytracing Enabled or Environment Only. Reflection settings are found on the Transparency/Reflection tab of the object’s surface shader’s property editor (choose Modify > Shader from the Render toolbar to open the property editor).

Raytraced Reflections are slower to render because they actually compute reflections for everything around them.

Non-Raytraced Reflection Maps are much faster to compute because they simulate the reflection of a specified texture or image, defined by an environment map, on the object’s surface.

When reflection mapping is used without raytracing, only the reflection map appears on the object’s surface; when used with raytracing, the map is combined with raytraced reflections.
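Under the hood, a reflection map is sampled with the reflected view direction rather than by tracing rays against scene geometry, which is why it renders so much faster. The standard reflection formula can be sketched as follows (plain Python, illustrative only):

```python
# Illustrative sketch: the reflected view direction is what indexes an
# environment (reflection) map. r = d - 2 (d·n) n for unit normal n.

def reflect(d, n):
    """Reflect direction d about the unit surface normal n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))
```

A ray looking straight down at an upward-facing surface reflects straight back up; the environment map would then be looked up in that reflected direction.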

Raytraced reflection and reflection map: With both types of reflection activated, you get the real reflections of scene objects plus the simulated reflections from the map, producing highly detailed reflections.

Reflection map only: Using only a reflection map, no scene objects are reflected in reflective surfaces. Instead, the only reflection is the one simulated by the reflection map.

Raytraced reflection only: Note how reflective objects reflect other objects in the scene. For example, you can see the flask and the floor reflected in the retort.

You can apply a reflection map to an object by connecting an environment map shader to the Environment port of the object’s material node.

You can apply a reflection map to the entire scene by adding an environment map shader to a render pass’ shader stack.

Page 345: XSI guia basica

Basics • 345

Baking Textures with RenderMap

RenderMap allows you to capture a wide variety of surface information from scene objects, and bake that information into image files that can be reapplied to the rendermapped object and/or used for a myriad of other purposes.

RenderMap captures surface information by casting rays from a virtual camera in order to sample each point on an object’s surface. The results are rendered as one or more 2D images that you can apply to the object as you would any other 2D texture.

To rendermap an object, you need to apply a RenderMap property. Choose Get > Property > RenderMap from the Render Toolbar. This opens the RenderMap property editor, from which you can configure all of the maps that you wish to output.

The following example shows how you can use RenderMap to create a single texture (which includes lighting information) out of a complex render tree.

Displacement map

Specular map

Bump map

Color map

After RenderMap

To bake the hand’s surface attributes into a single texture file, a RenderMap property was applied to the hand, and a Surface Color map was generated. The resulting texture image was then applied directly to the Surface input of the hand’s material node. Finally, the scene lights were deleted, producing the result shown at right—a good approximation of the hand’s original appearance.

Because the hand’s illumination is baked into the rendermap image, you can get this result without using lights or an illumination shader.

Alpha map

Before RenderMap

The disembodied hand shown here was textured using a combination of several images mixed together in a complex render tree, and lit using two infinite lights. The result is a highly detailed surface that incorporates color, bump, displacement, and lighting information, and takes a fair amount of time to render.

Page 346: XSI guia basica

Section 18 • Texturing

346 • Softimage

Painting Colors at Vertices

Another way to apply color to polygon mesh objects is to paint their vertices. Vertex colors aren’t considered to be material or texture shaders: they are actually a constant color stored directly in the vertices of a polygon at the geometry level. Each vertex of a polygon has polynodes (a type of “subvertex”) that hold its UV coordinates and vertex colors.

The Color at Vertices (CAV) property allows you to color an entire polygon or just its edge rather than the actual vertex (the information is stored at the vertex level, hence the name). For example, you can paint each edge of a square polygon a different color. As a result, the center of the polygon would display a blend of each of the four colors. If necessary, you can store several color at vertices properties on the same object.

Vertex colors are often used in games because they are an efficient way of storing color information that can be used in a variety of ways, e.g., for pre-baked lighting, texture blending, and so on.
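The blending described above, where a square polygon with a different color at each corner shows a mix of all four at its center, is a bilinear interpolation. A sketch in plain Python (illustrative only, not the Softimage API):

```python
# Illustrative sketch: bilinearly interpolate four corner colors across
# a quad. At the center (u = v = 0.5), the result is the average of
# all four corners.

def quad_color(c00, c10, c01, c11, u, v):
    """Interpolate corner colors at parameter (u, v), each in 0..1."""
    def lerp(a, b, t):
        return tuple(x * (1.0 - t) + y * t for x, y in zip(a, b))
    return lerp(lerp(c00, c10, u), lerp(c01, c11, u), v)
```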

1 Choose Get > Property > Color at Vertices Map to add a CAV property to the selected object. An object can have as many CAV properties as you need.

2 Press Ctrl+W to open the Brush Properties property editor. On the Vertex Colors tab, you can choose a paint mode and color, set the brush size, set falloff and bleeding options, and so on. Basically, you’re defining how the brush strokes look.

3 Press Shift+W to activate the brush tool and paint the color (or other attribute) onto the object in any 3D view. When you move the brush into any 3D view, the view’s display mode automatically changes to Constant.

4 If you’d like, you can render the result of the color at vertices property using a Vertex RGBA shader in the render tree.

Page 347: XSI guia basica

Basics • 347

Section 19

Lighting

Conventional lighting (direct light sources), indirect lighting, and image-based lighting are all techniques that contribute to a scene’s illumination and affect the way all object surfaces appear in the rendered image.

What you’ll find in this section ...

• Types of Lights

• Placing Lights

• Setting Light Properties

• Selective Lights

• Creating Shadows

• Global Illumination

• Caustics

• Final Gathering

• Image-Based Lighting

• Light Effects

Page 348: XSI guia basica

Section 19 • Lighting

348 • Softimage

Types of Lights

Infinite (Default)

Infinite lights simulate light sources that are infinitely far from objects in the scene. There is no position associated with an infinite light, only a direction. All objects are lit by parallel light rays. The scene’s default light is infinite.

Point

Point lights cast rays in all directions from the position of the light. They are similar to light bulbs, whose light rays emanate from the bulb in all directions.

Spot

Spot lights cast rays in a cone-shape, simulating real spotlights. This is useful for lighting a specific object or area. The manipulators can be used to edit the light cone’s length, width, and falloff points.

Light Box

Light box lights simulate a light diffused through white fabric. The light and shadows created by this light are very soft. Specularity is still visible, but noticeably weaker. Manipulating the box shapes the projected light.

Neon

Neon lights simulate real-world neon lights. They are essentially point lights whose settings and shapes are altered to resemble fluorescent tubes. The manipulators can be used to change the tube into any rectangular or square shape.

You can add lights to a scene by choosing them from the Render toolbar’s Get > Primitive > Light menu.

Every light type has its own special characteristics and is represented by its own icon in 3D views.

Page 349: XSI guia basica

Basics • 349

Placing Lights

You can translate, rotate, and scale lights as you would any other object. However, scaling a light only affects the size of the icon and does not change any of the light properties.

Spotlights have a third set of manipulators that let you control their start and end falloff, as well as their spread and cone angles. Area lights also have a third set of manipulators that let you scale the geometric area from which the light rays emanate. These manipulators are discussed later in this section.

Placing Spotlights Using the Spot Light View

The Spot Light view is a 3D view that lets you select from a list of spotlights available in the scene. A spotlight view is useful for seeing what objects a spotlight is lighting, and from what angle.

Rotating an infinite light. This is the only useful transformation for infinite lights since their scale and position do not affect the lighting. Rotating the light, on the other hand, changes its direction.

Translating a point light. Rotating and scaling point lights does not affect the lighting. Translating a point light changes its position, which does change the scene lighting.

Translating a spotlight. When you translate the spotlight, it rotates automatically to point toward its interest.

Scaling a spotlight has no effect on the lighting. Since the spotlight is normally constrained to its interest, you cannot rotate it either (unless you delete the interest).

Select a spotlight from the view menu to see the scene from the light’s point of view.

Navigate in the spotlight viewport to change the position of the light.

The inner and outer circles correspond to the light’s spread angle and cone angle respectively.

The rendered result shows the scene lit from the spotlight.

Note that the light falls off exactly where the cone and spread circles indicate that it should.

1

2

3

Page 350: XSI guia basica

Section 19 • Lighting

350 • Softimage

Setting Light Properties

Once you create a light, you can edit its properties from its property editor. To open a light’s property editor, select the light and choose Modify > Shader from the Render toolbar. Some of the most commonly edited light properties are described below.

Setting Light Color

The color of a light controls the color of the rays emitted by the light. The final result depends on both the color of the light and the color of objects.

When you define the color of an object’s material, you should work with a white light because colored light sources affect the material’s appearance. You can color your light source afterward to achieve the final look of the scene.

Setting Light Intensity

You can control a light’s intensity by adjusting the Intensity slider in the light’s property editor. By default, values range from 0 to 1, but you can set much higher values if needed.

Alternatively, you can control light intensity indirectly using its color channels. Setting RGB values greater than 1 creates more intense light.
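Both controls amount to the same multiplication: the light’s contribution to a surface color is scaled by the light color and its intensity. A sketch (plain Python, illustrative names, not the Softimage API):

```python
# Illustrative sketch: a light's contribution is the component-wise
# product of surface color, light color, and intensity, so halving the
# RGB channels and halving the Intensity slider are equivalent.

def lit_color(surface, light, intensity):
    """Component-wise product: surface * light * intensity."""
    return tuple(s * l * intensity for s, l in zip(surface, light))
```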

Pale Blue Light

Pale Yellow LightWhite Light

Intensity: 0.75

Intensity: 0.5Intensity: 0.25

Page 351: XSI guia basica

Basics • 351

Setting Light Properties

Setting Light Falloff

Falloff refers to the diminishing of a light’s intensity over distance, also called attenuation. This mimics the way light behaves naturally. The falloff options are available only for point and spotlights.

You can set the distance at which the light begins to diminish, as well as the distance at which the falloff is complete (darkness). This means you can set the values so the falloff affects only those features you want. In addition, you can control how quickly or slowly the light diminishes.
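The start/end falloff behavior can be sketched with a simple attenuation function. This assumes a linear ramp between the two distances, which is the simplest case (plain Python, illustrative only, not the Softimage API):

```python
# Illustrative falloff sketch: full intensity up to the start distance,
# zero beyond the end distance, and a linear ramp in between.

def attenuate(intensity, distance, start, end):
    """Attenuate a light's intensity by distance from the light."""
    if distance <= start:
        return intensity
    if distance >= end:
        return 0.0
    return intensity * (end - distance) / (end - start)
```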

Setting a Spotlight

A spotlight casts its rays in a cone aimed at its interest. Spotlights have special parameters, called Spread and Cone Angle, that control the size and shape of the cone. You can set these options using the spotlight’s property editor or its 3D manipulators. You can also use the 3D manipulators to set the light’s falloff.

To activate a spotlight’s manipulators, select the light and press B. You can then adjust the light by dragging any of the manipulators labeled in the image below.

Start falloff = 0, End falloff = 4

Start falloff = 0, End falloff = 8

Start falloff = 6, End falloff = 8

Falloff: Start and End Falloff values using a point light with umbra = 0. The bottom corner of the chess board is at a distance of 0; the top-left corner is at 10.

The yellow line indicates the light’s spread angle.

The white line indicates the cone angle.

The inner, solid cone is the spotlight’s Spread Angle.

The wireframe outline is the spotlight’s Cone Angle.

The lower circle is the End Falloff point.

The upper circle is the Start Falloff point.

Page 352: XSI guia basica

Section 19 • Lighting

352 • Softimage

Selective Lights

When you create a light, it affects all visible objects in the scene. However, every light has a selective property that you can use to make it affect, or not affect, a designated group of objects called Associated Models. This can reduce rendering time by limiting the number of calculations per light.

You can set a light’s selective property to be Inclusive or Exclusive.

• Exclusive illuminates every object except for those in the light’s Associated Models group.

• Inclusive illuminates every object defined in the light’s Associated Models group.

Creating Shadows

You can create shadows that appear to be cast by the objects in your scene. Shadows can make all the difference in a scene: a lack of them can create a sterile environment, whereas the right amount can augment the realism of the same scene. Shadows are controlled independently for each light source, so you can have some lights casting shadows and others not.

To create a shadow using the mental ray renderer for a scene or a render pass, you must set up three things:

• A light that generates shadows.

• Objects that cast and receive shadows.

• Rendering options that render shadows.

There are three basic kinds of shadows you can create using mental ray: raytraced, shadow-mapped, and soft.

Raytraced Shadows

Raytraced shadows use the renderer’s raytracing algorithm to calculate how light rays are reflected, refracted, and obstructed. The shadows are very realistic but take longer to render than other types of shadows.

To create raytraced shadows, you need to activate shadows in the light’s property editor.

You also need to make sure that the Primary Rays Type is set to Raytracing in the renderer options.

A simple scene illuminated by a point light. None of the geometric objects are included in the light’s Associated Models list, so they are not affected by the light’s selective property.

The King piece (center) has been added to the light’s Associated Models list, making it affected by the light’s selective property.

The light has been defined as Exclusive, thereby not illuminating the objects on the light’s Associated Models list.

The light is set to Inclusive. Now the light source affects only the objects listed in the Associated Models list (only the King piece) and ignores the rest.

Activates shadows for the light.

Page 353: XSI guia basica

Basics • 353

Creating Shadows

Shadow-Mapped Shadows

Shadow-mapped shadows, also known as depth-mapped shadows, use the renderer’s scanline algorithm. They are quick to render, but not as accurate as raytraced shadows.

The shadow map algorithm calculates color and depth (z-channel) information for each pixel, based on its surface and distance from the camera. Before rendering starts, a shadow map is generated for the light. This map contains information about the scene from the perspective of the light’s origin. The information describes the distance from the light to objects in the scene and the color of the shadow on that object. During the rendering process, the map is used to determine if an object is in a shadow.
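The depth comparison at the heart of shadow mapping can be sketched as follows. The `shadow_map` dict and the small `bias` term (commonly used to avoid self-shadowing artifacts) are illustrative stand-ins, not the Softimage or mental ray API:

```python
# Illustrative shadow-map sketch: a point is in shadow if it lies
# farther from the light than the depth the map recorded for that
# direction. A plain dict keyed by pixel coordinate stands in for the
# real depth map.

def in_shadow(shadow_map, pixel, distance_to_light, bias=1e-3):
    """True if the stored depth is closer to the light than the point."""
    stored = shadow_map.get(pixel, float("inf"))
    return distance_to_light > stored + bias
```

The bias keeps a surface from incorrectly shadowing itself due to limited depth precision.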

To create shadow-mapped shadows, you need to activate shadows and configure the Shadow Map in the light’s property editor.

Then, you need to enable shadow maps in the renderer options.

Volumic Shadow Maps

Volumic shadow maps are similar to regular shadow maps, but store more detail. Instead of simply storing the distance from the light to the first object hit, the volumic shadow map algorithm raymarches through the scene from the light’s origin until it hits a fully opaque object. Along the way, it stores changes in light color or intensity along with the depth at which each change occurred. Volumic shadow maps are typically used when rendering shadows for geometry hair.

Regular shadow-mapped shadow of hair.

Volumic shadow-mapped shadow of hair.

Page 354: XSI guia basica

Section 19 • Lighting

354 • Softimage

Soft Shadows

Soft shadows are created by defining area lights, which are a special kind of point light or spotlight. The rays emanate from a geometric area instead of a single point. This is useful for creating soft shadows with both an umbra (the full shadow, where an object blocks all rays from the light) and a penumbra (the partial shadow, where an object blocks some of the rays).

The shadow’s relative softness (the relation between the umbra and penumbra) is affected by the shape and size of the light’s geometry. You can choose from four shapes and set the size as you wish.

To determine the amount of illumination on a surface, a sample of points is distributed evenly over the area light geometry. Rays are cast from each sample point; all, some, or none of the rays may be blocked by an object. This creates a smoothly graded penumbra.
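The penumbra arises from a simple fraction: how many of the area light’s sample points are visible from the shaded point. A sketch (plain Python; `blocked` is a hypothetical visibility test, not a Softimage call):

```python
# Illustrative area-light sketch: the fraction of unblocked light
# samples gives a smoothly graded penumbra instead of a hard edge.

def area_light_visibility(samples, point, blocked):
    """Return the fraction of light samples visible from `point`.
    `blocked(sample, point)` is a stand-in occlusion test."""
    visible = sum(1 for s in samples if not blocked(s, point))
    return visible / len(samples)
```

A value of 1.0 means full illumination (outside the shadow), 0.0 means the umbra, and intermediate values form the penumbra.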

To create soft shadows, you need to activate shadows in the light’s property editor. You also need to activate and configure the Area Light settings in the light’s property editor.

Finally, you need to make sure that the Primary Rays Type is set to Raytracing in the renderer options.

A rectangular area light emits light from a rectangular object like this one.

Page 355: XSI guia basica

Basics • 355

Global Illumination

Global illumination simulates the way bright light bounces off of objects and bleeds their color into surrounding surfaces. When global illumination is activated, photons emitted from a designated light travel through the scene, bounce off photon-casting objects and are stored by photon-receiving objects.

Photon casting and reception are not mutually exclusive properties: an object can do both, but only a light can emit photons. Global illumination is often used with caustics, which is also a photon effect.

The following is an overview of how to set up global illumination for the mental ray renderer.

1 Define objects as casters and receivers.

An object’s visibility property allows you to set options that control how the object responds to global illumination photons emitted from a light.

• Caster controls whether photons bounce off of the object and continue to travel through the scene. When this is off, the object simply absorbs photons.

• Receiver controls whether the object receives and stores photons. When this is off, the photon effect is not visible on the object’s surface.

• Visible controls whether the object is visible to photons at all. When this is off, photons simply pass through the object.

2 Set the light to emit global illumination photons.

Activate Global Illumination on the Photon tab of the light’s property editor.

You can then set the Intensity of the photon energy, which determines the intensity of the color that bleeds onto photon receiving objects.

You can also set the Number of Emitted Photons.

Typically, both of these values will need to be set in the tens or hundreds of thousands for the final global illumination effect.

Page 356: XSI guia basica

Section 19 • Lighting

356 • Softimage

3 Adjust the global illumination effect.

Once you’ve defined the casters, receivers, and emitting lights, you need to adjust the rendering options that control the photon effect.

On the Caustics and GI tab for the renderer, activate Global Illumination, then set these two important parameters:

• GI Accuracy specifies the number of photons that are considered when any point is rendered.

• Photon Search Radius specifies the distance from the rendered point within which photons are considered.

You’ll also need to fine-tune the photon intensity and the number of emitted photons for each of the emitting lights.

4 Increase the radiance of the receiver objects.

To further fine-tune the global illumination effect, adjust the Radiance of the global illumination receiver objects.

Radiance controls the strength of the photon effect on the object’s surface. This is useful for brightening or darkening photon lighting in specific areas of a scene. The Radiance parameter is set in each object’s surface shader.
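The roles of a photon-count accuracy setting and a search radius can be illustrated with the classic photon-density estimate: the power of the photons found within the search radius around a shaded point is divided by the area of the search disc. This is a conceptual sketch, not the mental ray implementation:

```python
import math

# Illustrative photon-gathering sketch: irradiance at a point is
# estimated from the photons found within the search radius, divided
# by the disc area they cover. More photons or a better-tuned radius
# gives a smoother, more accurate estimate.

def photon_irradiance(photon_powers, search_radius):
    """Sum the gathered photon powers over the search-disc area."""
    return sum(photon_powers) / (math.pi * search_radius ** 2)
```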

Page 357: XSI guia basica

Basics • 357

Caustics

Caustic effects recreate the way that light is distorted when it bounces off a specular surface or passes through refractive objects/volumes. The classic example is the light sparkling in the middle of a wine glass or the floor of a swimming pool. In either case, light passes through refractive surfaces and is distorted, creating complex light patterns on surfaces that it affects.

As with global illumination, caustics compute how photons emitted from a light travel across the scene and bounce over and through caster and receiver objects.

Here is an overview of setting up caustic lighting for the mental ray renderer, which is almost identical to setting up global illumination:

1 Define objects as casters and receivers.

An object’s visibility property allows you to set options that control how the object responds to caustic photons emitted from a light.

2 Set the light to emit caustic photons.

To make a light into a caustic photon emitter, activate Caustics on the Photon tab of the light’s property editor.

You can then set the Intensity of the photon energy and the Number of Emitted Photons.

3 Adjust the caustic effect.

Adjust the rendering options that control the photon effect on the Caustics and GI tab for the renderer.

Activate Caustics on this tab, then set these two important parameters:

• Caustic Accuracy specifies the number of photons that are considered when any point is rendered.

• Photon Search Radius specifies the distance from the rendered point within which photons are considered.

You’ll also need to go back to the property editors of all emitting lights and fine-tune the photon intensity and the number of emitted photons.

4 Increase the radiance of the receiver objects.

To fine-tune the caustics effect, adjust the Radiance of the caustics receiver objects.

Radiance controls the strength of the photon effect on the object’s surface. This is useful for brightening or darkening photon lighting in specific areas of a scene. The Radiance parameter is set for each object’s surface shader.

Page 358: XSI guia basica

Section 19 • Lighting

358 • Softimage

Final Gathering

Final gathering is a way of calculating indirect illumination without using photon energy. Instead of using rays cast from a light to calculate illumination, final gathering uses rays cast from each illuminated point on an object’s surface. The rays sample a hemisphere of a specified radius above each point and calculate direct and indirect illumination based on what they hit. The overall effect is that every object in the scene becomes a “light source” and influences the color and illumination of the objects and environment surrounding it.
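The gathering step can be sketched as a simple average over hemisphere samples. This is a rough conceptual illustration (a real final gathering implementation also weights samples by angle and limits ray distance); `trace` is a hypothetical stand-in that returns the illumination seen along a direction:

```python
# Illustrative final-gathering sketch: average the illumination
# returned by rays shot into the hemisphere above a shaded point.

def gather(trace, directions):
    """Average the illumination sampled along each hemisphere ray."""
    return sum(trace(d) for d in directions) / len(directions)
```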

Creating a Final Gathering Effect

Creating final gathering in a scene is more straightforward than applying caustics or global illumination. Most of the options that control the final gathering effect for the mental ray renderer are on the Final Gathering tab of the renderer options.

The final gathering Accuracy options are the main settings used to control the quality of a final gathering render.

You can use the scene objects’ visibility properties to precisely control how each object participates in final gathering calculations.

1 A camera eye ray intersects with geometry whose shading needs to calculate indirect illumination.

2 Final gathering rays are shot into the hemisphere above the intersection point to sample for illumination.

3 Indirect illumination contribution.

4 Direct illumination contribution.

5 Final gathering point is computed.

This scene was rendered using final gathering, which “collects” the indirect and direct light around illuminated points on an object’s surface to simulate real-world lighting.

Page 359: XSI guia basica

Basics • 359

Ambient Occlusion

Ambient occlusion is a fast and computationally inexpensive way to simulate indirect illumination. It works by firing sample rays into a predefined hemispherical region above a given point on an object's surface in order to determine the extent to which the point is blocked, or occluded, by other geometry.

Once the amount of occlusion has been determined, a bright and a dark color are returned for points that are unoccluded and occluded respectively. Where the object is partially occluded the bright and dark colors are mixed in accordance with the amount of occlusion.
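The mixing rule described above is a straightforward linear blend between the bright and dark colors, weighted by the occlusion fraction (plain Python, illustrative only, not the Softimage shader API):

```python
# Illustrative ambient-occlusion sketch: unoccluded points get the
# bright color, fully occluded points the dark color, and partial
# occlusion blends the two in proportion.

def occlusion_color(bright, dark, occlusion):
    """occlusion in 0..1: 0 = unoccluded, 1 = fully occluded."""
    return tuple(b * (1.0 - occlusion) + d * occlusion
                 for b, d in zip(bright, dark))
```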

In Softimage, you can create an ambient occlusion effect by connecting the Ambient Occlusion shader in the render tree. This is most commonly done at the render pass level to create an occlusion pass that can be added in and adjusted during compositing. You can also use the shader on individual objects to limit the occlusion calculation.

Image-Based Lighting

You can light your scenes with images using the Environment shader, which surrounds the scene with an image. This shader also has a set of parameters that allow you to control the image’s contribution to final gathering and reflections.

Although you can use any image to light the scene this way, you will get the best results using a High Dynamic Range (HDR) image. That’s because HDR images contain a greater range of illumination than regular images, making them better able to simulate real-world lighting.

The image above shows a scene rendered using only the Ambient Occlusion shader. The bright color is set to white and the dark color to black. This type of rendering can be composited with other passes to add the occlusion effect to the scene’s color and illumination.

Page 360: XSI guia basica

Section 19 • Lighting

360 • Softimage

Light Effects

The point light inside of this street lamp uses a flare effect. Flares are created as properties of scene lights.

In the background of the scene, you can see the effect of depth-fading. Even though it affects the entire scene, the depth fading is defined by a light’s volumic property.

The volumic light shining out from the window in the stairwell is created using a volumic property applied to a light.

Softimage includes a number of lighting effects that you can use to enhance the realism and alter the look and mood of your rendered scenes.

Different effects are applied differently. Some are applied as properties of lights, while others are defined by shaders in the render tree.

This scene uses a variety of light effects to capture the feeling of a dimly lit alley on a foggy evening.


Section 20

Cameras

Virtual cameras in Softimage are similar to physical cameras in the real world. They define the views that you can render. You can add as many cameras as you want in a scene.

You can also achieve a photorealistic motion blur effect for every object and/or camera in your scene.

What you’ll find in this section ...

• Types of Cameras

• The Camera Rig

• Working with Cameras

• Setting Camera Properties

• Lens Shaders

• Motion Blur


Types of Cameras

Each of the images below was taken from the same position, but using a different camera each time. The image on the right shows a wireframe view of the original scene, including the position of the camera.

These camera types are available from the Get > Primitive > Camera menu.

Perspective (Default)

Uses a perspective projection, which simulates depth. Perspective cameras are useful for simulating a physical camera. The default camera in any new scene is a perspective camera.

Wide Angle

Creates a wide-angle view by using a perspective projection and a large angle of view (100°). Wide-angle cameras have a very large field of view and can often distort the perspective.

Telephoto

Uses a perspective projection and a small angle of view (5°) to simulate a telephoto lens view where objects are “zoomed.”

Orthographic

Makes all of the camera rays parallel. Objects stay the same size regardless of their distance from the camera. These projections are useful for architectural and engineering renderings.


The Camera Rig

Each camera that you create is made up of three separate parts: the camera root, the camera interest, and the camera itself. If you look at a camera in the explorer, you’ll see that the camera root is the parent of both the camera and its interest. Each of these elements is displayed in the 3D views as well.

The Camera Interest

The camera’s interest—what the camera is always looking at—is represented by a null. You can translate and animate the null to change the camera’s interest.

Camera Direction

The camera icon displays a blue and a green arrow. The blue arrow shows where the camera is “looking”; that is, the direction the lens is facing. The green arrow shows the camera’s up direction, which you can change by rolling the camera (press L).

The Camera

The camera itself is represented in the 3D views by a wireframe control object that you can manipulate in 3D space. The camera has a directional constraint to the camera interest.
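The effect of the directional constraint, keeping the lens pointed at the interest, amounts to computing a look-at vector from the camera's position to the interest's position. A minimal sketch (illustrative math only, not a Softimage API call):

```python
import math

def look_at_direction(camera_pos, interest_pos):
    # Unit vector from the camera toward its interest: the direction the
    # lens faces (the blue arrow on the camera icon).
    delta = [i - c for c, i in zip(camera_pos, interest_pos)]
    length = math.sqrt(sum(x * x for x in delta))
    return tuple(x / length for x in delta)
```

Moving either the camera or the interest changes this vector, which is why translating the interest null re-aims the camera.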

The Camera Root

The camera root is represented by a null. By default, it appears in the middle of the wireframe camera, but you can translate and animate it as you would any other object. The null is useful as an extra level of control over the camera rig, allowing you to translate and animate the entire rig the same way that you animate its individual components.


Working with Cameras

Once you’ve created your cameras, you’ll probably want to move them around to capture just the right angles. You may also need to switch back and forth between different cameras to compare points of view.

Selecting Cameras and Camera Interests

Cameras or their interests can be tricky to select. Luckily, there are several ways to select either or both. You can:

• Locate the camera or interest in a 3D view and click it to select.

• From any viewport, click the camera icon on its menu bar, then choose Select Camera or Select Interest. This selects the camera used in that viewport.

• From the Select panel, choose Explore > Cameras. This opens a floating explorer that shows every camera in your scene and its interest. Select a camera or interest from the list. Of course, you can also do the same thing from a regular explorer once you locate the cameras.

Selecting Camera Views

Camera views let you display your scene in a 3D view from the point of view of a particular camera. If you have created more than one camera in your scene, you can display a different camera view in each 3D view.

Choosing a camera from a viewport’s Cameras menu switches the viewpoint to that of a “real” camera in your scene. All other views such as User, Top, Front, and Right are orthogonal viewpoints and are not associated to an actual camera.

Positioning Cameras

Once you select a camera, you can translate, rotate, and scale it as you would any other object. However, scaling a camera only affects the size of the icon and does not change any of the camera properties.

Generally, the most intuitive way of positioning cameras is to set a 3D view to a camera view and then use the 3D view navigation tools to change the camera’s position. As you navigate in the 3D view, the camera is subject to any transformations that are necessary to keep its interest in the center of its focal view.

Since positioning cameras is often a process of trial and error, you’ll probably find yourself wanting to undo and redo camera moves.

• Press Alt+Z to undo the last camera move.

• Press Alt+Y to redo the last undone camera move.

If you’ve zoomed in and out too much and your camera’s view needs to be reset, press R. This resets the camera in the 3D view under the mouse pointer.

Choose Render Pass to switch to the camera view defined for your render pass.

You can select a predefined orthographic viewpoint, but it’s not an actual camera view.

Choose a camera from the list to switch the viewport to that camera’s view.


Setting Camera Properties

The Camera property editor contains every parameter needed to define how a camera “sees” your scene.

To open the camera property editor, do one of the following:

• Select a camera and choose Modify > Shader from the Render toolbar.

• Click on the camera’s icon in the explorer.

• Double-click the camera’s node in the render tree.

• From the camera icon menu of any viewport set to the camera’s view, choose Properties.

Camera Format

The camera’s “format” refers to the picture standard that the camera is using and the corresponding picture ratio. You can also specify a custom picture standard with a picture ratio that you define. The default camera format is NTSC D1 4/3 720x486, with a picture ratio of 1.333, but several standard NTSC, PAL, HDTV, Cine, and Slide formats are also available.
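The format numbers above are related by simple arithmetic: a 720×486 grid has a ratio of about 1.481, so displaying it at a 1.333 (4:3) picture ratio implies non-square pixels. A worked example of that relationship (illustrative only, not a Softimage call):

```python
def pixel_aspect(picture_ratio, width, height):
    # Pixel aspect implied by a picture standard: the frame's picture ratio
    # divided by the ratio of the pixel grid. A value below 1.0 means the
    # pixels are taller than they are wide.
    return picture_ratio / (width / height)
```

For NTSC D1 720×486 with a 4:3 picture ratio, this gives the familiar 0.9 pixel aspect.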

Field of View

The field of view is the angular measurement of how much the camera can see at any one time. By changing the field of view, you can distort the perspective to give a narrow, peephole effect or a wide, fish-eye effect.

Using the same camera in the same location, the Vertical field of view is much smaller, thus making only a small part of the building visible.

The camera’s Vertical field of view was made large enough to accommodate the entire building. The Horizontal field of view was automatically calculated based on the aspect ratio.
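The automatic calculation the caption mentions follows the standard tangent half-angle relation between the vertical angle, the aspect ratio, and the horizontal angle. A sketch of the math (illustrative; Softimage performs this internally):

```python
import math

def horizontal_fov(vertical_fov_deg, aspect_ratio):
    # The horizontal angle of view derived from the vertical angle and the
    # picture (aspect) ratio via the tangent half-angle relation.
    half_v = math.radians(vertical_fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(aspect_ratio * math.tan(half_v)))
```

For a square (1:1) picture ratio the two angles are equal; for a 4:3 ratio the horizontal angle is wider than the vertical one.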


Setting Clipping Planes

You can use clipping planes to set the minimum and maximum viewable distances from the camera. Objects outside these planes are not visible.

By default, the near plane is very close to the camera and the far plane is very far away, so most objects are usually visible. You can set clipping planes to display or hide specific objects.
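The visibility rule described above reduces to a simple range test on an object's distance from the camera. A minimal sketch (hypothetical function name, illustrative values):

```python
def is_within_clipping(distance, near, far):
    # An object at this distance from the camera is visible only if it lies
    # between the near and far clipping planes.
    return near <= distance <= far
```

With the default planes (near very close, far very far), nearly every distance passes this test, which is why most objects are usually visible.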

Lens Shaders

Lens shaders are used to apply a variety of different effects to everything that a camera sees. Some lens shaders create generalized effects, such as depth of field, cartoon ink lines, or lens distortion. Others are more utility oriented, and do things like emulate real-world camera lenses or render depth information.

Lens shaders can be used alone, or in conjunction with other lens shaders. For example, you might want to render a bulge distortion and depth of field simultaneously. You can apply lens shaders to cameras as well as passes.

This is a camera with no clipping planes set, which means the resulting image (right) shows every object in the scene.

This is a camera with near and far clipping planes set. The near plane is between the first two buildings and the far plane is between the last two. Everything in front of the near plane and everything beyond the far plane is invisible, as seen in the resulting image (right).

Applies a shader to the camera.

Removes a shader from the shader stack.

Opens the selected shader’s property editor.

Lists every shader applied to a camera.

Lens shaders are applied via the shader stack on the Lens Shaders tab of the camera’s property editor.


Depth of Field shader

The images below and beside show this scene rendered using three different lens shaders.

Lens Effects Shader (Fisheye distortion setting)

Toon Ink Lens shader


Motion Blur

Motion blur adds realism to a scene’s moving objects by simulating the blur that results from objects passing in front of a camera lens over a specified period of exposure. In Softimage, you can easily achieve a photorealistic motion blur effect for every object and/or camera in your scene.

Creating a Motion Blur Property

To control motion blur for a specific object in a scene, you can assign it a motion blur property. This is primarily useful when you want to force motion blur off for a given object, or when you have a few objects that need deformation motion blur.

To create the motion blur property, select one or more objects and choose Get > Property > Motion Blur from the Render toolbar.

You can apply motion blur properties to cameras. This is useful when both the camera and scene objects are moving, but you only want the blur caused by the object’s movement.

Rendering Motion Blur

Motion blur is active for the scene by default. To view the motion blur of objects in a scene, activate the motion blur settings in the render region options and/or the render pass options. As long as these options are on and you have a moving object in your scene, the motion blur is visible.

First, set the scene motion blur settings, in particular the Speed option, which specifies the time interval (usually between 0 and 1) during which the geometry and any motion transformations and motion vectors are evaluated for the frame. The motion data is then passed to the renderer (by default, mental ray).

Setting the Speed value to 0 turns motion blur off. Longer (slower) shutter speeds (an interval greater than 0.6) create a wider, longer motion blur effect, simulating faster movement. Shorter (quicker) shutter speeds (an interval less than 0.3) create subtler motion blur.

You can also specify an Offset for the shutter’s time interval which allows you to push the motion blur trails, even extend them into later frames. Additionally, you can define where on the frame the blur is evaluated and rendered.

In the first image (left), a quick shutter speed (< 0.1) is used, then a slower shutter speed (middle), and finally (right) a very slow shutter speed (> 0.6).
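The Speed and Offset parameters described above define a sampling interval on the frame's timeline. A sketch of that relationship, using the thresholds quoted in the text (function names hypothetical):

```python
def shutter_interval(speed, offset=0.0):
    # The frame's sampling interval: Speed sets its length, Offset shifts it,
    # pushing the blur trail later in time.
    return (offset, offset + speed)

def blur_strength(speed):
    # Rough classification using the thresholds mentioned in the text:
    # < 0.3 is subtle, > 0.6 is pronounced, 0 disables blur entirely.
    if speed == 0:
        return "off"
    if speed < 0.3:
        return "subtle"
    if speed > 0.6:
        return "pronounced"
    return "moderate"
```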


Section 21

Rendering

Rendering is the last step in the 3D content creation process. Once you have created your objects, textured them, animated them, and so on, you can render out your scene as a sequence of 2D images.

Your ultimate goal may not be just to render, but to optimize rendering quality and speed.

What you’ll find in this section ...

• Rendering Overview

• Render Passes

• Render Channels

• Setting Rendering Options

• Different Ways to Render


Rendering Overview

The process of rendering out your scenes can vary considerably from project to project. However, here is a typical sequence of tasks you might follow when rendering:

1. Set up render passes and define their options.

Render passes let you render different aspects of your scene separately, such as a matte pass, a shadow pass, a highlight pass, or a complete beauty pass. You can define as many render passes as you want: within each pass, you can create partitions of lights and objects, then apply shaders and control their settings together.

2. Set up render channels and define their options. These allow you to output different information about the pass to separate files.

3. Set rendering options.

All objects, including lights and cameras, are defined by their rendering properties. For example, you can determine whether a geometric object is visible, whether its reflection is visible, and whether it casts shadows. Rendering properties can be set per render pass as well.

4. Preview the results of any modifications.

The viewports can display your scene in different display modes, including wireframe, hidden-line removal, shaded, and textured. In addition, you can view any portion of your scene rendered in a viewport by defining a render region, or preview a full frame using Render Preview.

5. Render the passes and their render channels.

Softimage gives you the option of rendering using any one of the following methods:

- Interactively from the Render Region.

- Interactively, using the single-frame preview tool.

- Interactively from the Softimage user interface.

- Batch rendering using the [xsi -render | xsibatch -render] command line.

- Batch rendering with scripts using the -script option at the command line.

- Using the ray3.exe command line.

- Using mental ray’s tile-based distributed rendering across several machines. To do so, you must define which machines to use and how.

6. Composite and apply effects to passes.

You can use Softimage Illusion, a compositing and effects toolset that’s fully integrated in Softimage, or you can use another post-production tool.

Softimage and mental ray®

Softimage uses mental ray as its core rendering engine. mental ray is fully integrated in Softimage, meaning that most mental ray features are exposed in Softimage’s user interface, and are easy to adjust — both while creating a scene and during the final renderings. Full integration with mental ray also allows artists to generate final-quality preview renders interactively in 3D views, using the render region.

Rendering Visibility

Every geometric object in a scene has a visibility property that controls whether it is visible when rendering, and in particular whether it is visible to various types of rays (primary, secondary, final gathering, and so on). This visibility property exists locally on every 3D object in Softimage and cannot be applied or deleted. However, visibility can be overridden at the partition level. In complex scenes, setting rendering visibility options can be difficult to manage on a per-object basis. It’s easier to partition objects and use overrides to control rendering visibility for all of the objects in a partition.

Render Passes

A render pass creates a “layer” of a scene that can be composited with any other passes to create a complete image. Passes also allow you to quickly re-render a single layer without re-rendering the entire scene. Later, you can composite the rendered passes back together, making adjustments to each layer as needed.

Each scene can contain as many render passes as you need. When you first create a scene in Softimage, it has a single pass named Default_pass. This is a “beauty pass” that is set to render every element of the scene. You can create additional passes to render specific elements and attributes as needed.

This image is the composite of all these passes. Rendering in passes allows you to tweak each isolated element separately without having to re-render your scene.

This photograph (background pass) is the background scene over which the dinosaur will be composited.

This pass is a rendered image of the dinosaur. Compositing it over the background would make the scene rather flat and unrealistic.

The matte pass “cuts out” a section of the rendered image so another image can be composited over or beneath it.

The shadow pass isolates the scene’s shadows so you can composite them in later. This allows you to edit a shadow’s blur, intensity, and color without any additional rendering.

The specular pass is used to capture an object’s highlights.


Render Pass Workflow

The following steps provide an overview of how to use render passes:

1. Create and name a new render pass.

2. Edit the pass using either Pass > Edit > Current Pass from the Render toolbar or the render manager.

3. Define partitions to edit the objects and lights in your render pass depending on what effect you want to achieve.

4. Specify the active camera for the pass.

5. Apply shaders and override properties for the pass and its partitions. An override lets you control specific shader parameters in a partition.

6. Set the pass options for each pass.

7. Set the renderer options for each pass.

8. After you have set up render passes, you can render them.

9. You can then composite and apply effects to the passes using Softimage Illusion.

Creating Passes

You will most likely want to create several passes as your scene grows in size and complexity. You can create a variety of pass types from the Render toolbar’s Pass > Edit > New Pass menu.

Setting the Current Pass

The current pass is the pass to which all pass and partition properties are applied. The current pass is also the pass displayed in 3D views when the Render Pass view is displayed in a viewport.

Setting the Pass Camera

You can specify the camera you want to use for each render pass. The active camera provides the viewpoint from which the pass is rendered. In the render pass property editor, on the Output tab, choose a camera from the Pass Camera list, which lists all of a scene’s cameras.

Choose Cameras > Render Pass from any viewport Views menu to see the current pass from the viewpoint of the active camera.

To set the current pass, click the arrow beside the Pass selection menu on the Render toolbar.

Then from the pass list, choose the render pass you want to set as current.


Creating Partitions

A partition is a division of a pass that behaves like a group. There are two types of partitions: object and light. Light partitions can only contain lights, and object partitions can only contain geometric objects.

Placing objects in partitions allows you to control their attributes by modifying them at the partition level rather than at the individual object level. The modifications affect only the objects in the partition for the specific render pass to which the partition belongs. This allows you to change object attributes on a per-pass basis.

Create an empty partition by choosing Pass > Partition > New Partition on the Render toolbar and then add elements to it. Or you can select some objects and choose the same command to create a partition that automatically includes these objects.

Viewing Passes and Partitions in the Explorer

In an explorer set the scope to Passes (press P) to see a hierarchical list of all of the render passes in your scene with their contents.


A Current pass. The current pass is always displayed in bold typeface.

Each pass has its own options. This lets you optimize your rendering by enabling only those options you need for each pass. For example, you could enable shadow calculations only in the shadow pass.

Expanding any pass node displays its renderer options, the active camera for the pass, its partitions, and any environment, output, and/or volume shaders applied to the pass as a whole.

B Pass renderer options. Depending on which renderer you have chosen for your pass, click the Hardware Renderer or mental ray icon to edit the pass’s renderer options.

You can identify whether the pass is using a local or global set of render options by the Roman or italic typeface displayed for the renderer’s node.

C Pass camera. Click the camera icon to define camera and lens-shader options for the pass. You can add new cameras to your scene and set them as active if needed.

D Background partition. Every pass is created with two background partitions which contain the scene’s objects and lights.

Background partitions usually contain every object in your scene that isn’t modified in the pass. However, nothing is stopping you from modifying the contents of these partitions as well.


Applying Shaders to Passes and Partitions

You can apply environment, volume, output, and lens shaders to an entire pass using the shader stacks in the render pass property editor.

When you apply shaders to partitions using the Get > Material command, they take precedence over the shaders applied directly to objects in the scene, but only for that pass.

Overriding Shader Parameters

You can use an override property to redefine specific shader parameters in a partition. For example, if a scene contains several hundred objects and you want to edit each object’s transparency value without modifying the original material, you can create a partition that contains the objects you want to change, and apply an override property that affects only the transparency parameter of each material.
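Conceptually, an override replaces only the marked parameters for the pass and leaves the source material untouched. A toy illustration (the dictionary representation and names are hypothetical, not the Softimage data model):

```python
def apply_overrides(material_params, override_params):
    # Parameters used for this pass: overridden keys replace the originals,
    # everything else is left as-is, and the source material is not modified.
    merged = dict(material_params)
    merged.update(override_params)
    return merged

# Example: override only transparency, leaving diffuse untouched.
glass = {"transparency": 0.2, "diffuse": (0.8, 0.8, 0.9)}
pass_params = apply_overrides(glass, {"transparency": 0.7})
```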

E Partition. A partition is a division of a pass, which behaves like a group. Partitions are used to organize scene elements within a pass.

Expanding any partition node allows you to see its contents, as well as any materials, shaders, overrides, and other properties that are applied to it.

Each pass has two default partitions: a background objects partition that contains most or all of the scene’s objects, and a background lights partition that contains most or all of the scene’s lights. You can add as many additional partitions as you need for a pass, but an object can only be in one partition per pass.

F Framebuffers. The framebuffers folder holds all the active render channels defined for the pass, including its Main render channel.

G Passes. Additional passes, including the default “beauty” pass, are listed in creation order unless you have modified the explorer’s sort order settings.

H A material is assigned to a partition. The “B” indicates that it was applied in branch mode and is propagated to every object in the partition. If any objects in the partition have local materials, they will be overridden by the partition-level material for this pass.

Applies a shader to the camera.

Removes a shader from the stack.

Opens the selected shader’s property editor.

The applied shaders are listed in the stack.

An override changes the ambient and diffuse values to black, but leaves the other values untouched.

Render Channels

Render channels are a mechanism for outputting multiple images, each containing different information, from a single pass. When you render the pass, you can specify which channels should be output in addition to the full pass. By default a Main render channel is defined for every pass (you can think of it as the “beauty” channel rendered for each pass). You can use these images at the compositing stage, the same way you would use any render pass.

The advantage of using render channels is that they are easy to define and quick to add to any pass. Preset render channels allow you to isolate scene attributes that are commonly rendered in separate passes. You do not need to create complex systems of partitions and overrides to extract a particular scene attribute. All you need is your default pass and you can quickly output the preset diffuse, specular, reflection, refraction, and irradiance render channels.

Setting Rendering Options

Rendering options are set for the scene, for your renderer of choice (by default this is mental ray), and for each render pass you define. For interactive preview renders, the render region has its own set of renderer options. You can access these rendering options from different places:

• Render toolbar: opens the scene, pass, and renderer property editors.

- Choose Render > Scene Options

- Choose Render > Pass Options (for the current pass)

- Choose Render > Renderer Options (active renderer for the current pass)

• Explorer: press 8 to open an explorer and then press P to set the scope to Passes or press U to set it to Current Pass. From there you can click the scene, pass, or renderer nodes to display their property editors.

• Render Manager: a dedicated view for editing scene, pass, and renderer options. It contains a built-in explorer view, quick access to pass rendering, rendering and output preferences, and a copy manager for your render settings. Choose Render > Render Manager from the Render toolbar.

Irradiance Channel, Reflection Channel, Refraction Channel

This scene defines six preset render channels, each extracting specific attributes of the objects’ surface materials.

Any combination of these channels can be rendered with the pass.

Ambient Channel, Diffuse Channel, Specular Channel



Anatomy of the Render Manager


A Explorer panel (left panel)

Select from the explorer the various render options available for editing. You can edit render options for the scene, for the renderer, and for each pass defined in the scene. Depending on your selection, the options are displayed in the middle or right panel.

B Render pass panel (middle panel)

When you select a render pass, the render options for the selected pass are displayed in the middle panel.

If you select multiple passes (Ctrl-select), you can simultaneously edit their common parameters. “Multi Edit” will appear at the top of the panel to indicate that you are in this mode.

C Renderer options panel (right panel)

When you select Scene Render Options or one of the global renderers (mental ray, Hardware Renderer, etc.), the options for the selected item are displayed in the right panel.

This is also the case when you select a render pass that contains a set of local render options.

If your selected passes use different renderers then “Mixed Selection” will appear at the top of the panel and no options are displayed.

D Render menu

Use these commands to render the current pass, the selected passes, all passes in the scene, the current frame, or the current frame for all passes in the scene.

E Edit menu

Edit > Override Marked Pass Parameters

Edit > Make Renderer Local to Pass

Edit > Make Pass Renderer Global

Edit > Open Rendering Preferences

Edit > Open Output Format Preferences

Edit > Copy Render Options

F Refresh

Updates the render manager after modifications.

G Passes. The render options for all the render passes defined in your scene.

The pass render options allow you to modify settings specific to each pass. You can set output paths, specify the pass camera, output your pass to a movie file, apply pass-level shaders, add render channels, and more.

H Global renderers

The render options for all available renderers.

I Scene Render Options

The scene render options allow you to modify global settings for the entire scene. You can specify things like the renderer to use, the frames to render, the basic output path and format for rendered images. You can also create custom render channels that you can add to individual passes.

J Current pass. The current pass is displayed in bold in the explorer.


Selecting a Renderer

You usually render a scene using the default mental ray rendering software, which is built into Softimage. mental ray uses three rendering algorithms: scanline, raytracing, and rasterizer. You can also use the hardware renderer, which renders whatever is displayed in a 3D view (such as a viewport in Shaded display mode).

Scanline and raytracing are normally used together. mental ray uses the scanline method until an eye ray changes direction (due to reflection or refraction and so on), at which point it switches to the raytracing method. Once it switches, it does not go back to scanline until the next eye ray is fired. Without scanline rendering, the render is usually slower. Without raytracing, transparency rays are rendered, but reflection rays cannot be cast and refraction rays are not computed.

The rasterizer accelerates rendering in large, complex scenes with a lot of motion blur. To use it, you must set special sampling options.

Scanline

Scanline rendering is a rendering method used to determine primary visible surfaces. Scene objects are projected onto a 2D viewing plane, and sorted according to their X and Y coordinates. The image is then rendered point-by-point and scanline-by-scanline, rather than object-by-object. Scanline rendering is faster than raytracing but does not produce as accurate results for reflections and refractions.

Raytracing

Raytracing calculates the light rays that are reflected, refracted, and obstructed by surfaces, producing more realistic results. Each refraction or reflection of a light ray creates a new branch of that ray when it bounces off an object and is cast in another direction. The various branches of a ray constitute a ray tree. Each new branch can be thought of as a layer: the total number of a ray’s layers represents the depth of that ray.
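The layering described above can be illustrated with a toy ray tree. The nested-list representation is purely hypothetical; it only shows how branches add up to ray depth:

```python
def ray_depth(branches):
    # Depth of a ray tree given as nested lists, where each sublist is a new
    # reflection or refraction branch spawned by the parent ray. A ray with
    # no branches has depth 1; each layer of branching adds 1.
    if not branches:
        return 1
    return 1 + max(ray_depth(b) for b in branches)
```

Renderers typically cap this depth so that rays bouncing between reflective surfaces terminate after a fixed number of layers.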

Hardware Rendering

The Softimage hardware renderer allows you to output a scene as it appears when displayed in any 3D view whose viewpoint is that of the pass camera. Most of the hardware rendering modes correspond to the 3D views’ display modes Wireframe, Shaded, Textured, and so on.

Hardware rendering is useful for generating previews of your scene using all of the display options available in 3D views. It is also useful for outputting realtime shader effects to file.

This scene was rendered using scanline rendering only. Notice how the transparency has little depth, and there is no reflection or refraction.

This scene was rendered using the raytracing render method. Notice how the glass’ reflections, transparency, and refraction are more realistic than with Scanline rendering.

Different Ways to Render

There are several ways to render a scene, from single frame previews to large sequences rendered to file. Some rendering methods are launched from Softimage’s interface, others from the command-line.

Previewing Interactively with the Render Region

You can view a rendering of any section or object in your scene quickly and easily using a render region. Rather than setting up and launching a preview, you can simply draw a render region over any 3D view and see how your scene will appear in the final render.

To draw a render region, press Q to activate the render region tool and drag in any 3D view to define the region’s rectangle. Press Shift+Q to toggle the region on and off.

You can resize and move a render region, select objects and elements within the region, as well as modify its properties to optimize your preview. Whatever is displayed inside that region is continuously updated as you make changes to the rendering properties of the objects. Only this area is refreshed when changing object, camera, and light properties, when adjusting rendering options, or when applying textures and shaders.

Comparing Render Regions

The render region has memo regions that allow you to store, compare, and recall settings. They look similar to the viewports’ memo cams, but are not saved with the scene.

The render region uses the same renderer as the final render (mental ray), so you can set the region to render your previews at final output quality. This gives you an accurate preview of what your final rendered scene will look like.

Be careful when comparing render regions. You should do this only when you are tweaking material and rendering parameters, and not making other changes to the scene. If you revert to previous settings, either accidentally or on purpose, you will lose any modeling, animation, or other changes you have made in the meantime.

Middle-click to store, and click to display. The currently displayed cache is highlighted in white. Right-click for other options.

The left side shows the stored region.

The right side shows the current settings.

Drag the swiper to show more or less of one image or the other.

Section 21 • Rendering

Previewing a Single Frame

The Render > Preview command in the Render toolbar lets you preview the current frame at fully rendered quality in a floating window. The frame is rendered using the render options for the current render pass or using the render region options defined in any of the four viewports.

Rendering to File from the Softimage Interface

You can render your passes directly from the Softimage interface. Once the pass options are set, all you need to do is start the render in any of these ways:

• To render all of your scene’s passes, click the Render Pass > All button in the Render Manager, or choose Render > Render > All Passes from the Render toolbar.

• To render the current pass, click the Render Pass > Current button in the Render Manager, or choose Render > Render > Current Pass from the Render toolbar.

• To render a selection of passes, select the passes in the explorer and click the Render Pass > Selected button in the Render Manager, or choose Render > Render > Selected Passes from the Render toolbar. The passes are rendered one after the other.

Batch Rendering (xsi -render | xsibatch -render)

You can use -render command-line options to render scenes without opening the Softimage user interface. In addition, you can export render archives from the command line. The most common rendering options are available directly from the command line, while other options can be changed by specifying a script using the -script option.

ray3.exe Rendering

You can render scenes using the mental ray standalone, ray3.exe, from a command line. Although many of the ray3.exe commands are available in the Softimage interface, you may want to use the ray3.exe command-line tool to manually override options in exported MI2 files. You can edit the MI2 files to define extra shaders, create objects, swap textures, or perform other tasks.

Distributed Rendering

Distributed rendering is a way of sharing rendering tasks among several networked machines. It uses a tile-based rendering method where each frame is broken up into segments, called tiles, which are distributed to participating machines. Each machine renders one tile at a time, until all of the frame’s tiles are rendered and the frame is reassembled. By spreading the workload this way, you can decrease overall rendering time considerably.

Once you’ve set up a distributed rendering network, rendering tasks are distributed automatically once a render is initiated on a computer. The initiating computer is referred to as the master and the other computers on the network are referred to as slaves. The master and slaves communicate via a mental ray service that listens on a designated TCP port and passes information to the mental ray renderer.
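The tile-splitting scheme described above can be sketched in a few lines of Python. This is a conceptual illustration only; the real distribution is handled internally by mental ray, and the tile size and machine names here are made up:

```python
# Conceptual sketch of tile-based distributed rendering (not the actual
# mental ray protocol): a frame is cut into tiles, tiles are dealt out
# to the participating machines, and the frame is reassembled once
# every tile has been rendered.

def make_tiles(width, height, tile_size):
    """Split a frame into (x, y, w, h) tile rectangles."""
    tiles = []
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            tiles.append((x, y,
                          min(tile_size, width - x),
                          min(tile_size, height - y)))
    return tiles

def distribute(tiles, machines):
    """Deal tiles out to machines round-robin (master plus slaves)."""
    jobs = {m: [] for m in machines}
    for i, tile in enumerate(tiles):
        jobs[machines[i % len(machines)]].append(tile)
    return jobs

tiles = make_tiles(1920, 1080, 512)   # 4 columns x 3 rows = 12 tiles
jobs = distribute(tiles, ["master", "slave1", "slave2"])
```

Because each machine works on one tile at a time, a slow machine simply picks up fewer tiles; the frame is complete when every rectangle has been returned.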


Section 22

Compositing and 2D Paint

Softimage Illusion is a fully integrated compositing, effects, and 2D paint toolset that is resolution independent and supports 8, 16, and 32-bit floating-point compositing.

You can use Softimage Illusion operators to perform compositing and effects tasks ranging from tweaking the results of a multi-pass render to creating complex special effects sequences.

The effects that you create are part of your scene: they are accessible from the explorer and to Softimage’s scripting and animation features, and they support clips and sources, as well as render passes.

What you’ll find in this section ...

• Softimage Illusion

• Adding Images and Render Passes

• Adding and Connecting Operators

• Editing and Previewing Operators

• Rendering Effects

• 2D Paint

• Vector Paint vs. Raster Paint

• Painting Strokes and Shapes

• Merging and Cloning


Softimage Illusion

The Softimage Illusion toolset consists of three core views: the FxTree, where you build networks of effects operators; the Fx Viewer, where you preview the results; and the Fx Operator Selector, from which you insert pre-connected operators into the FxTree.

Each of these views can be opened in a viewport or as a floating view (choose View > Compositing > name of view from the main menu).

There is also a Compositing layout available from the View > Layouts menu. It contains the three core Fx tools arranged in a way that makes it easy to build and preview effects. Using this layout for compositing and effects work is usually more efficient than simply opening the required views in viewports because the non-compositing tools and views are mostly hidden.

FxTree: Where you create networks of linked operators to composite images and create effects.

You can create multiple instances of the FxTree workspace — called trees — to organize effects more efficiently.

Fx Viewer: 2D viewer in which you can preview each operator to see how it contributes to the overall effect.

Fx Operator Selector: Lists all of the available compositing and effects operators.

Once you select an operator here, you can pre-set its connections to existing operators in the Fx Tree and then simultaneously insert and connect it in the Fx Tree.

Fx Operators: Operators are represented by nodes that you can link together manually or connect beforehand using the Fx Operator Selector.


Adding Images and Render Passes

Before you can composite anything, or create any effects, you need to import images into the Fx Tree. There are several ways of doing this.

Getting File Input Operators

Importing images into the FxTree creates a File Input operator for each imported image. The operator points directly to the image on disk without creating an image source or clip.

When you import an image, the File Input operator’s properties are automatically updated according to the image’s properties.

To import image files, click the Import Images button in the FxTree menu bar or choose File > Import Images from the FxTree menu bar. A browser opens from which you can select an image to import.

Getting Render Passes

In the FxTree, you can import any rendered pass (or all of them at once) from the Passes menu. A File Input node is created for each pass that you import, and the file name, start frame, and end frame are all based on the pass render options. The file extension is based on the pass’ image format and output channels.
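The way a pass’s name, frame range, and format combine into a file sequence can be sketched as follows. The frame-padded naming pattern shown is hypothetical, for illustration only, and is not necessarily the exact scheme Softimage uses:

```python
# Illustrative sketch of building a frame-padded file sequence for a
# render pass. The "name.0001.ext" pattern here is an assumption, not
# Softimage's documented naming scheme.

def pass_filenames(pass_name, start, end, ext, padding=4):
    """One file name per frame, e.g. beauty.0001.pic."""
    return ["%s.%0*d.%s" % (pass_name, padding, frame, ext)
            for frame in range(start, end + 1)]

names = pass_filenames("beauty", 1, 3, "pic")
# names == ["beauty.0001.pic", "beauty.0002.pic", "beauty.0003.pic"]
```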

Getting Image Clips

The Fx Tree has direct access to all of the image clips in your project. Inserting image clips into the FxTree creates a pair of Image Clip operators for each imported clip.

Image Clip operator pairs consist of two operators:

• Clip In (or From): reads from the image clip.

• Clip Out (or To): writes back to it.

You can modify the image clip itself by adding effects operators between the Clip In and Clip Out operators. This updates the clip wherever it is used in the scene.

The Clip In and Clip Out operators are primarily used to modify images that are used outside of the Fx Tree. For an actual composite or effect that you intend to render to file, it’s better to use File Input operators.

To import image clips, select an image clip from the FxTree’s Clips menu.

Setting Image Defaults

Before you begin building effects, you may want to adjust the Fx Tree’s image defaults to conform to your chosen picture format. The image defaults affect all operators that create an image (the Pattern Generator operator, for example), and are applied when you opt to output an operator with the default size, and/or bit-depth. Each tree that you create has its own set of image defaults that specifies the width/height, bit depth, and pixel ratio.

To set the image defaults, choose File > Tree Properties from the Fx Tree menu.

Select this option to import an image using a File Input operator.

Select this option to add all rendered passes to the Fx Tree.

Select a pass to add it to the Fx Tree.

Clip In Operator

Clip Out Operator


Adding and Connecting Operators

The FxTree is where you create networks of linked operators to composite images and create effects. Operators are represented by nodes that you can link together manually, by dragging connection lines, or connect beforehand using the Fx Operator Selector.

If you need to build several different networks, you can create multiple instances of the FxTree workspace—called trees—to organize them more efficiently. Each tree is a separate operator in the scene with its own node in the explorer.

Operator Connection Icons

• Green icons accept image inputs. You can connect almost any operator to green inputs.

• Blue icons accept matte (A) inputs, which are generally used to control transparency.

• Red icons are outputs, plain and simple.

Navigation Control: Allows you to navigate in the Fx Tree workspace when a network of operators becomes too large to display all at once.

• Dragging in the rectangle pans in the Fx Tree workspace.

• Dragging the zoom slider up and down zooms in and out.

Operator information: Positioning the mouse pointer over an operator displays information at the bottom of the Fx Tree.

Once you define all of the needed connections, middle-click an empty area of the Fx Tree workspace to add the operator.

Fx Operator Selector: A tool for inserting operators into the Fx Tree. Select an operator from the list, then consecutively middle-click the existing operators you wish to connect to its inputs and output. Middle-click in an empty area of the Fx Tree workspace to add the operator.

1 Start by adding images and/or sequences to the Fx Tree. These are the images that you want to composite together and/or build effects on.

Fx Tree Menu: Provides access to operators, render passes, image clips, and Fx Tree tools and preferences.


2 Next you need to add and connect the operators required to build your effect.

You can get any operator from the Ops menu and connect it by dragging connection lines from other operators’ outputs to its inputs.

You can also use the operator selector to pre-define operator connections before you insert the operators into the Fx Tree.

3 Once you’ve built your effect, you can render it out using a File Output operator.


Fx Operator Types

Whether you’re compositing a simple foreground image over a background, or applying a complex series of effects to an image, every step of the process is accomplished by an operator in the FxTree. By connecting these operators together, you can create composites and special effects.

Operator Type Description

Image Image operators act as the in and out points for each effect in the FxTree.

• File Input operators are placeholders for images in the tree.

• Paint Clip operators are used to import images into the FxTree for raster painting.

• Vector Paint operators are used to create vector paint layers in the FxTree.

• PSD Layer Extract operators extract a single layer from a .psd image.

• File Output operators let you set the output and rendering options for your composites and effects.

Composite Composite operators offer you several ways to combine foreground images with a background image to produce a composited result. Most compositing operators require a foreground image, a background image, and an internal or external matte.

Retiming Retiming operators allow you to change the timing of image sequences. You can, for example, convert from 24 to 30 frames per second and vice versa, interlace and de-interlace clips, and change the duration of clips by dropping frames or combining them in different ways.

Transition Transition operators create animated changes from one image clip to another. You can use transition operators to apply dissolves, fades, wipes, pushes, and peels.

Color Adjust Color adjust operators let you color correct clips in the FxTree. You can modify and animate hue, saturation, lightness, brightness, contrast, gamma, and RGB values. You can also perform various operations like inverting images, premultiplying images, and so on.

Color Curves Use the Color Curves operators to graphically adjust color components of images in the FxTree, and to extract mattes for foreground images so that you can composite them over background images.

Grain Grain operators alter the appearance of film grain in your image sequences. You can add and remove grain, as well as noise.

Optics Optics operators create optical effects in images in the FxTree. These include depth-of-field, lens flares, and flare rings.

Filter Filter operators let you control the appearance of images in the FxTree. Among other things they can reproduce the effects of different lens filters, apply blurs, and add or remove noise.

Distort Distort operators simulate 3D changes to images in the FxTree. Use these operators to apply distortions and transformations.

Transform Transform operators adjust the dimensions and/or position of images in the FxTree. Besides cropping and resizing images, you can also use the 3D Transform operator to transform an image in a simulated 3D space, as well as warp and morph images.

Plugins The Plugins operators offer a variety of patterns and special effects that you can use in your FxTrees. All of the Plugins operators are custom operators, called UFOs, that were created using the UFO SDK.

Painterly Effects®

Painterly Effects operators allow you to apply a variety of classic artistic effects to images in the FxTree. The Softimage compositor’s three sets of Painterly Effects operators let you apply effects like Chalk & Charcoal, Watercolor, Bas Relief, Palette Knife, Stained Glass, and many more, to images in the FxTree.

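As a rough illustration of what a Composite operator does, here is a minimal per-pixel “over” mix of a foreground and a background weighted by a matte. This is a generic sketch of the compositing math, not Softimage code:

```python
# Minimal per-pixel "over" composite: the matte (alpha) decides how much
# of the foreground shows through. Values are floats in [0, 1]. This is
# a conceptual sketch, not Softimage's implementation.

def over(fg, bg, matte):
    """Composite one RGB pixel: matte 1.0 shows the foreground fully,
    0.0 shows the background fully."""
    return tuple(f * matte + b * (1.0 - matte) for f, b in zip(fg, bg))

# A red foreground at 25% matte over a blue background:
pixel = over(fg=(1.0, 0.0, 0.0), bg=(0.0, 0.0, 1.0), matte=0.25)
# pixel == (0.25, 0.0, 0.75)
```

A real compositing operator applies this per pixel across the whole frame, reading the matte either from the foreground’s own alpha channel (internal matte) or from a separate image (external matte).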

Editing and Previewing Operators

A big part of building an effect is previewing operators and editing their properties. As you adjust an operator’s parameters, you can see the effect of your changes reflected in the Fx Viewer. There are several ways to edit and preview operators, but the easiest is to use the View and Edit hotspots that appear when you position the mouse pointer over an operator.

The Edit hotspot opens the operator’s property editor, while the View hotspot previews the operator in the Fx Viewer. This allows you to open one operator’s property editor while you’re previewing another operator. For example, you might want to see how color correcting one image affects the composited result of that image and another one.

Mixes the view with the Merge Source.

Image courtesy of Ouch! Animation

Compare Area: Displays a portion of one image while you're editing another image. This is useful for seeing one operator’s effect on another.

Display Area: Displays the operator that you’re previewing.

Operator Info: Displays info about the operators being viewed and edited.

Navigation Tool: Drag in the rectangle to pan. Drag on the slider to zoom.

Toggles the Compare Area.

Updates the Compare Area with the current image.

Forces the current image to fit in the viewer.

Switch viewers A and B.

Display image’s alpha channel as a red overlay.

Click the Edit hotspot to open the operator’s property editor.

Click the View hotspot to preview the operator in the Fx Viewer.

Isolate one of the image’s color channels.

Split viewers A and B.

Displays the current image at full size.


Rendering Effects

Once you have your effect looking the way you want it, you can render it to a variety of different image formats using a File Output operator.

The File Output property editor is where you set all of the effect’s output options, including the picture standard, file format, and range of frames.

Once you’ve set the output options, all you need to do is click the Render button to start the rendering process. In the Rendering window, you will get information regarding the rendering of the sequence.

Rendering Effects From the Command Line

You can also render effects non-interactively from the command line using xsi -script or xsibatch -script. Make sure that your script contains the following line (VBScript example):

RenderFxOp "OutputOperator", False

where OutputOperator is the name of the File Output operator that you want to render. The False argument specifies that the Fx Rendering dialog box should not be displayed during rendering.

Enter a valid filename, path and format here.

When the sequence is rendered, click here to open a flipbook and view it.

Click here to open the Rendering window.

Specify the range of frames to render.


2D Paint

Softimage’s compositing and effects toolset includes a 2D paint module that offers 8- and 16-bit raster and vector painting. To paint on images, you set up paint operators in the FxTree and then paint on them in the Fx Viewer, where a Paint menu gives you access to a variety of paint tools.

You work with paint operators the same way you work with other Fx operators, making it easy to touch up images, fine-tune effects, edit image clips, paint custom mattes, create write-on effects, and so on. You can also use blank paint operators to paint images from scratch.

Fx Paint Brush List: Lists all of the paint brushes available for painting strokes.

All of the brushes are presets based on the same core set of properties.

The Fx Paint Brush List is an optional view in the compositing layout (shown here).

To open: choose View > Compositing > Fx Paint Brush List from the main menu.

Fx Viewer: When you edit and preview a paint operator, the Fx Viewer is where you actually paint strokes and shapes.

Paint Operators: Behave exactly like other operators in the Fx Tree, and can be connected manually or using the operator selector.

Fx Color Selector: Allows you to choose foreground and background paint colors using a variety of different color models.

To open: position the mouse pointer in the Fx Viewer and press 1, or choose View > Compositing > Fx Color Selector from the main menu.

Paint Menu: When you edit a paint operator, the paint menu is added to the Fx Viewer, giving you access to all of the paint-related commands and tools.


Vector Paint vs. Raster Paint

Softimage’s paint tools allow for both vector- and raster-based painting. Each has its advantage, as well as its own operator to use in the Fx Tree.

Vector Paint

Vector painting is a non-destructive, shape-based process where every brush stroke is editable even after you’ve painted it. Rather than painting directly on an image, you paint on a vector shapes layer that is composited over an input image or other operator.

In the Fx Tree, you add a vector shapes layer over top of an image by connecting the image’s operator to the Vector Paint operator’s input. You can then paint on the vector shapes layer in the Fx Viewer.

A Vector Paint operator has a small paint brush/shape icon in its upper-left corner. This differentiates it from non-paint operators, which you cannot paint on, and from raster paint operators, which use a different icon.

One convenience of painting in vector paint operators is that you don’t have to manage changes to each frame the way you do with raster paint clips. Every shape in a vector paint operator is stored as part of the operator’s data, and is animatable. This allows you to paint shapes and strokes that stay in the image for as many frames as you need.

Vector paint operators are blank by default and do not have source images. Instead, they are more like other Fx operators in that they have both an input and an output and use other operators’ outputs as their sources. However, there’s nothing preventing you from keeping them blank and painting their contents from scratch.
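The difference between the two paint models can be sketched conceptually. This is illustrative Python, not Softimage’s data model: a vector layer keeps every stroke as editable data, while raster painting overwrites pixels in place:

```python
# Sketch of the vector/raster distinction described above (conceptual,
# not Softimage's data model).

class VectorLayer:
    """Non-destructive: every stroke stays editable after painting."""
    def __init__(self):
        self.shapes = []

    def paint(self, shape):
        self.shapes.append(shape)

    def move_last(self, dx, dy):
        # Editing after the fact is possible because the shape data
        # is stored, not baked into pixels.
        x, y, color = self.shapes[-1]
        self.shapes[-1] = (x + dx, y + dy, color)

class RasterImage:
    """Destructive: painting replaces the pixel, the old value is gone."""
    def __init__(self, w, h):
        self.pixels = [[(0, 0, 0)] * w for _ in range(h)]

    def paint(self, x, y, color):
        self.pixels[y][x] = color

layer = VectorLayer()
layer.paint((10, 10, "red"))
layer.move_last(5, 0)            # the stroke is now at (15, 10)

image = RasterImage(2, 2)
image.paint(0, 0, (255, 0, 0))   # the original pixel value is lost
```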

Mask Shapes

The Mask Shapes operator is an alpha-only version of the Vector Paint operator. You can use the vector paint tools in a Mask Shapes operator to paint a matte that you can use in any other Fx operator.

Raster Paint

Raster painting is the process of painting directly on an image. It is destructive, meaning that each time you paint a stroke, you’re directly altering the image’s pixels. Once you’ve painted on the image, the stroke or shape cannot be moved or altered (unless, of course, you paint a new stroke over it).

In the Fx Tree, you can paint on images or sequences (but not a movie file — .avi, QuickTime, and so on) loaded in a Paint Clip operator, which is available from the Ops menu. You can also insert a blank paint clip and configure it later. A Paint Clip operator has a small paint brush icon in its upper-left corner.

When you paint on a sequence, you can manage changes to frames using the tools on the Modified Frames tab of a Paint Clip operator’s property editor. You can revert painted frames back to their last saved state, and save changes when you’re ready to commit them.

Lists every unsaved frame that you’ve changed.

Save changes to frames.

Revert frames to their original state.

Where you manage painted frames.


Painting Strokes and Shapes

At its most basic, painting on an image is a simple matter of inserting a paint operator in the FxTree, choosing a paint color, brush and tool, and using the mouse pointer to paint in the Fx viewer.

The following is a general overview of the paint process, intended to give you an idea of the workflow, as well as a sense of where to set the options necessary for defining strokes and shapes.

1 Add a paint operator to the Fx Tree workspace and edit its properties. This activates the Fx Viewer’s paint menu, giving you access to paint tools and options.

2 Set the active paint brush from the Fx Paint Brush List. The active paint brush is used by any paint tool that can paint a stroke (the paint brush tool, the line tool, the shape tools, and so on).

3 Choose a paint tool from the Fx Viewer’s Draw menu.

4 Choose the foreground and (if needed) the background color from the Fx Color Selector.

Select a brush from the list.

Choose a brush category from the brush-type list.

If necessary, edit the tool properties. To open the tool property editor, position the mouse pointer in the Fx Viewer and press 3.

If necessary, edit the brush properties. To open the brush property editor, position the mouse pointer in the Fx Viewer and press 2.

The five most recently used colors are stored in the selector for easy access.


5 Paint on the operator in the Fx Viewer.

The Freehand Shape tool allows you to draw editable vector shapes as if you were using a pen and paper. You need only drag the paint cursor around the outline of the shape that you wish to draw.

The Freehand Shape tool is only available in vector paint operators.

The Mark Out Shape tool allows you to create an editable vector shape by clicking to define the locations of the shape's points. As you add points, each new point is connected to the previous point by a line segment. The line segment’s curve, or lack thereof, depends on the type of shape you’re drawing: Bézier, B-Spline, or Polyline.

The Mark Out Shape tool is only available in vector paint operators.

The Draw Rectangle and Draw Ellipse tools are unique in that they are the only shape tools that work in both raster paint clips and vector paint operators (all other shape tools are vector-paint only). In either mode, the shapes are drawn using the current colors and paint brush settings.

The Line tool, as you might imagine, allows you to draw straight lines. This is especially useful for painting wires out of an image or sequence.

In vector paint operators, drawing a line creates a two-point color shape drawn using the outline (stroke) only.

The Brush tool is the most basic tool for painting brush strokes. You use it to paint on images as if you were using a real paint brush, or one of the myriad tools simulated by the brush presets in the Fx paint brush list. Painting is a simple matter of clicking and dragging on a paint operator’s image.

6 If you are using vector paint operators, you can edit any vector shapes that you’ve painted. The two images below show the manipulators used to transform a vector shape and to edit a vector shape’s points.

The Flood Fill tool (not shown) fills pixels that you click, and neighboring pixels of similar color, with the specified foreground color.
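The Flood Fill behavior described above follows the classic algorithm: fill the clicked pixel, then spread to neighboring pixels whose color is within a tolerance of the clicked color. Here is a generic sketch; the tool’s actual implementation may differ:

```python
# Classic 4-connected flood fill, as described above: fill the clicked
# pixel and neighbors of similar color. A generic sketch, not the Fx
# paint tool's actual implementation. Colors are ints for simplicity.
from collections import deque

def flood_fill(img, x, y, new_color, tolerance=0):
    """img is a list of rows; fills in place and returns img."""
    target = img[y][x]
    if target == new_color:
        return img
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if (0 <= cy < len(img) and 0 <= cx < len(img[0])
                and img[cy][cx] != new_color
                and abs(img[cy][cx] - target) <= tolerance):
            img[cy][cx] = new_color
            queue.extend([(cx + 1, cy), (cx - 1, cy),
                          (cx, cy + 1), (cx, cy - 1)])
    return img

img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 1]]
flood_fill(img, 0, 0, 9)
# img == [[9, 9, 1], [9, 1, 1], [1, 1, 1]]
```

The connected region of 0s is replaced with the new color, while the 1s act as a boundary, just as differently colored pixels stop the fill in the Fx Viewer.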


Merging and Cloning

Merging and cloning are both ways of painting using an image’s pixels as the paint color. In the Brushes category of the Fx paint brush list, you’ll find the Merge brush and the Clone brush, which you can use to paint strokes and lines, or draw shapes that use a source image as the paint color.

Merging

Merging is the process of painting pixels from a source image — called the merge source — onto the corresponding portion, or a different portion, of a destination image. This is useful for painting unwanted elements, like wires, out of images. It is also useful for painting new elements into images, like the clouds in the example below.

You can set any operator in the Fx Tree as the merge source by right-clicking it and choosing Set as Paint Merge Source from the menu. This adds a small paint-bucket icon to the operator to help you identify it as the merge source.
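At the pixel level, merging amounts to copying source pixels into the destination wherever the brush touches. The following is a conceptual sketch only, not Softimage’s actual code:

```python
# Conceptual sketch of the Merge brush described above: within the
# brushed region, destination pixels are replaced by pixels read from
# the merge source image.

def merge_brush(dest, source, brushed):
    """brushed is a set of (x, y) pixels covered by the stroke."""
    for x, y in brushed:
        dest[y][x] = source[y][x]
    return dest

sky   = [["cloud", "cloud"],
         ["cloud", "cloud"]]
field = [["grass", "grass"],
         ["grass", "grass"]]

# Paint clouds into the top row of the field image:
merge_brush(field, sky, {(0, 0), (1, 0)})
# field == [["cloud", "cloud"], ["grass", "grass"]]
```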

Cloning

Cloning is the process of painting pixels from one region of an image to a different region of the same image. This can be useful for duplicating elements in an image, as in the example below. It is also often used to paint out unwanted elements. For example, you can remove wires from a clear sky by painting over them with adjacent pixels.

When you paint using the Clone brush, you’ll only see a result if you use a brush offset. The offset is the distance between the area from which you’re painting and the area to which you’re painting. You can offset the brush in any direction and use any offset distance, as long as both the source and destination cursors can be placed somewhere on the target image simultaneously.
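The role of the brush offset can be sketched in the same conceptual style: every painted pixel is read from the same image at a fixed offset from the paint cursor. Again, this is illustrative, not the tool’s actual implementation:

```python
# Sketch of the Clone brush offset described above: each brushed pixel
# (x, y) is copied from the same image at (x - dx, y - dy). Conceptual
# only; not Softimage's implementation.

def clone_brush(img, brushed, dx, dy):
    """Copy pixels from an offset source region onto the brushed pixels."""
    for x, y in brushed:
        sx, sy = x - dx, y - dy
        if 0 <= sy < len(img) and 0 <= sx < len(img[0]):
            img[y][x] = img[sy][sx]
    return img

img = [["wire", "sky"],
       ["wire", "sky"]]

# Paint out the wires using the adjacent sky pixels, one column to the
# right of the cursor (offset dx = -1):
clone_brush(img, {(0, 0), (0, 1)}, dx=-1, dy=0)
# img == [["sky", "sky"], ["sky", "sky"]]
```

With a zero offset the source and destination pixels coincide, so nothing visibly changes; that is why the Clone brush only shows a result once an offset is set.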

In this example, the image of the clouds is set as the merge source and is being painted into the image of the field, as shown below.

Merge Source

Merge Source icon

Before

Source / Destination / Offset

After

Original / Clone

In this example, the trumpet player and his shadow are cloned into the left side of the frame.


Section 23

Customizing Softimage

You can extend Softimage in a variety of ways by customizing it. Many customizations are too involved to cover here, but you can get more details in the Softimage User’s Guide and Softimage SDK Guide.

What you’ll find in this section ...

• Plug-ins and Add-ons

• Toolbars and Shelves

• Custom and Proxy Parameters

• Displaying Custom and Proxy Parameters in 3D Views

• Scripts

• Key Maps

• Other Customizations


Plug-ins and Add-ons

You can extend the functionality of Softimage using plug-ins and add-ons:

• A plug-in is a customization (for example, a command or operator) implemented in a single file (possibly with a separate help file).

• An add-on is a set of related customization files stored together in an Add-on directory. It may consist of a toolbar and its associated commands, operators, properties, and so on. Add-ons can be packaged into a single .xsiaddon file to distribute to others.

Plug-ins and add-ons can be managed using the Plug-in Manager.

Plug-in Manager

The Plug-in Manager is the central location for managing your customizations. You can display the Plug-in Manager using File > Plug-in Manager or in the Tool Development Environment (View > Layouts > Tool Development Environment).

Installing a simple plug-in is as easy as copying the script or library file to the Plugins directory of your user or workgroup location.

Installing Add-on Packages

The easiest way to install (and uninstall) a packaged add-on is to use the Plug-in Manager.

To install an .xsiaddon

1. In the Plug-in Tree, right-click User Root or the first workgroup in the tree and choose Install .xsiaddon.

If you want to install the add-on in a different workgroup, go to the Workgroup tab and move that workgroup to the top of the list. You can install add-ons only in the first workgroup.

2. In the Select Add-on File dialog box, locate the .xsiaddon file you want to install, and click OK.

To uninstall an .xsiaddon

• In the Plug-in Tree, right-click an add-on and choose Uninstall Add-on.

• You can also install an add-on by dragging an .xsiaddon file to a Softimage viewport. This installs the add-on in the User location or the first workgroup, depending on the value in the DefaultDestination tag of the .xsiaddon.

• The SDK Guides contain additional information about other methods of installing add-ons.


Toolbars and Shelves

Softimage lets you create and edit your own custom toolbars and shelves. This gives you convenient access to commands, presets, and other files.

• Toolbars contain buttons for running commands or applying presets.

• Shelves are floating windows that contain tabs. Each tab can be a toolbar, display the contents of a file directory, or hold other items.

Toolbars and shelves can be floating windows, or embedded in a view or layout. They are stored as XML-based files with the .xsitb extension in the Application\toolbars subdirectory of the user, workgroup, or factory path.

At startup, Softimage gathers the files it finds in these locations and adds them to the View > Toolbars menu. Toolbars and shelves that are found in your user location are marked with u in the View menu, and those that are found in a workgroup location are marked with w.

You can remove toolbars and shelves stored in the user location. Choose View > Manage, check any items you want to remove, and click Delete. The items are not physically deleted but they are marked for removal. When you exit Softimage, the file extensions are changed to .bak so they won’t be detected and loaded when you restart.

Custom Toolbars

You can create your own toolbar and use it to hold commonly-used tools and presets. Tools and presets are represented as buttons on the toolbar.

Softimage also includes a couple of blank toolbars that are ready for you to customize by adding your own scripts, commands, and presets:

• The lower area of the palette and script toolbar.

• The Custom tab of the main shelf (View > Optional Panels > Main Shelf).

To create a new toolbar, choose View > New Custom Toolbar.

• To add presets to the toolbar, drag them from a file browser.

• To add commands and tools, choose View > Customize Toolbar, select a command category, and drag items onto the toolbar. Use the Toolbar Widgets category to organize your toolbar.

• To add a script, drag lines from the script editor or a script file from a browser and choose Script Button.

• To remove an item from a custom toolbar, right-click on a toolbar button and choose Remove Button.

To save the toolbar, right-click on an empty area of the toolbar and choose Save or Save As.

Shelves

To create a custom shelf, choose View > New Custom Shelf. To add a tab, right-click on an empty part of the tab area and choose an item from the Add Tab menu. If no tabs have been defined yet, you can right-click anywhere in the shelf.

• Folder tabs display files in a specific directory. You can drag files like presets from a folder tab onto objects and views in Softimage.

• Toolbar tabs hold buttons for commands and presets.

• Driven tabs can be filled with scene elements such as clips by using the object model of the SDK.

To save a custom shelf, click the Options icon and choose Save or Save As.

Section 23 • Customizing Softimage

Custom and Proxy Parameters

Custom parameters are parameters that you create for your own purpose. Proxy parameters are linked copies of other parameters that you can add to your own custom parameter sets. Both custom parameters and proxy parameters are contained in custom properties, also known as custom parameter sets.

Custom Parameters

Custom parameters are parameters that you create for any specific animation purpose you want. You typically create a custom parameter and then connect it to other parameters using expressions or linked parameters. You can then use the sliders in the custom parameter set’s property editor to drive the connected parameters in your scene.

For example, you can use a set of sliders in a property editor to drive the pose of a character instead of creating a virtual control panel using 3D objects.

First, create a custom parameter set by selecting an element and using Create > Parameter > New Custom Parameter Set on the Animate toolbar, and then giving it a meaningful name.

Next, create new parameters using Create > Parameter > New Custom Parameter. If your object has only one custom parameter set, the custom parameters are placed in it. If there are multiple sets, select the desired one beforehand. If the selected object has no custom parameter set, one is created with a default name.

At this point, the custom parameter set exists only in the scene in which it was created. It is not installed at the application level. You can copy it to other objects in the same scene, or save a preset to apply it to objects in other scenes.
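The expression-driven relationship can be illustrated with a short Python sketch. This is plain Python, not the Softimage object model; the parameter names and the halving expression are invented for illustration.

```python
class Parameter:
    """A plain value holder standing in for a scene parameter."""
    def __init__(self, name, value=0.0):
        self.name = name
        self.value = value

class CustomParameter(Parameter):
    """A custom parameter drives targets through expressions, but a key
    set on it belongs to the custom parameter, not to the driven ones."""
    def __init__(self, name, value=0.0):
        super().__init__(name, value)
        self._links = []          # (target parameter, expression) pairs

    def link(self, target, expression):
        self._links.append((target, expression))

    def set_value(self, value):
        self.value = value
        for target, expression in self._links:
            target.value = expression(value)
```

For instance, a single `smile` slider could drive a mouth parameter at half strength: link it with `lambda v: 0.5 * v`, and setting `smile` to 1.0 leaves the target at 0.5 while the slider itself holds 1.0.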

If you want, you can convert the custom parameter set into a self-installing custom property by right-clicking in the light gray header bar of the property editor and choosing Migrate to Self-installed. This lets you distribute the property as a script plug-in. You can also edit the script file to control the layout and logic of the property.

Proxy Parameters

Proxy parameters are similar to custom parameters, but with a fundamental difference. Custom parameters can drive target parameters, but they remain separate parameters: when you set keyframes, you key the custom parameter, not the driven parameter. So what do you do when you want to drive the actual parameter, or create a single parameter set that holds only those existing parameters you are interested in? You use proxy parameters.

Unlike custom parameters, proxy parameters are cloned parameters: they reflect the data of another parameter in the scene. Any operation done on a proxy parameter has the same result as if it had been done on the real parameter itself (change a value, save a key, etc.).
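The pass-through behavior can be sketched in a few lines of Python (an analogy only, not the Softimage SDK): the proxy stores no value of its own, so every read and write goes straight to the real parameter.

```python
class ProxyParameter:
    """Reflects another parameter: getting or setting the proxy's value
    operates directly on the real parameter it was created from."""
    def __init__(self, target):
        self._target = target

    @property
    def value(self):
        return self._target.value

    @value.setter
    def value(self, new_value):
        self._target.value = new_value
```

Contrast this with the custom-parameter case above: here there is no second value to get out of sync, which is why keying a proxy keys the real parameter.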

While you can create proxy parameters for any purpose, it’s most likely that you will use them to create custom property pages. You can create your own property pages for just about anything you like: for example, locate all animatable parameters for an object on a single property page, making it much quicker and easier to add keys because all the animated parameters are in one place. Or as a technical director, you can expose only the necessary parameters for your animation team to use, thereby streamlining their workflow and reducing potential errors.

First, create a custom parameter set, then open an explorer and drag and drop parameters into the custom property editor or onto the custom parameter set node in an explorer. Alternatively, use Create > Parameter > New Proxy Parameter to specify parameters with a picking session.

Displaying Custom and Proxy Parameters in 3D Views

You can display and edit parameter values directly in a 3D view. This is sometimes called a heads-up display or HUD.

You do this by creating a custom parameter set whose name starts with the text DisplayInfo. You can simply display information, for example, about your company or a particular scene shot, or you can mark parameters and change their values.
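The naming convention amounts to a simple rule, sketched here in Python for illustration: a set qualifies for the heads-up display if its name starts with DisplayInfo, and the on-screen title drops the "DisplayInfo_" prefix (the example set names are invented).

```python
HUD_PREFIX = "DisplayInfo"

def is_hud_set(name):
    """A custom parameter set appears in the 3D views only if its
    name starts with DisplayInfo."""
    return name.startswith(HUD_PREFIX)

def hud_title(name):
    """The title shown at the top of the on-screen box omits the
    DisplayInfo_ prefix."""
    if name.startswith(HUD_PREFIX + "_"):
        return name[len(HUD_PREFIX) + 1:]
    return name
```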

Viewing the Information in a 3D View

To view the DisplayInfo information in a 3D View, click the eye icon in a 3D view and choose Visibility Options. On the Stats page in the Camera Visibility property editor, select Show Custom “DisplayInfo” Parameters.

Select one or more objects with a DisplayInfo custom parameter set. If nothing is selected, the DisplayInfo set of the scene root is displayed (if it has one).

Changing Parameter Values in a 3D View

You can easily modify the parameters displayed in the 3D views. There is a preference that controls the interaction:

• If Enable On-screen Editing of “DisplayInfo” Parameters is on in your Display preferences, you can modify the values as well as animate them directly in the display.

• If on-screen editing is disabled, you can still mark the parameters and modify them using the virtual slider.

If on-screen editing is enabled, the parameters appear in a transparent box in the view. The title of the parameter set is shown at the top (without the “DisplayInfo_” prefix). Each parameter has animation controls that allow you to set keys.

You can do any of the following:

• Click and drag on a parameter name to modify the value. You don’t need to explicitly activate the virtual slider tool.

- Drag to the left to decrease the value, and drag to the right to increase it.

- Press Ctrl for coarse control.

- Press Shift for fine control.

- Press Ctrl+Shift for ultra-fine control.

- Press Alt to extend beyond the range of the parameter’s slider in its property editor (if the slider range is smaller than its total range).

If the parameter that you click on is not marked, it becomes marked. If it is already marked, then all marked parameters are modified as you drag.

• Double-click on a numeric value to edit it using the keyboard. The current value is highlighted, so you can type in a new value. Only the parameter you click on is affected even if multiple parameters are marked.

• Double-click on a Boolean value to toggle it. Only the parameter you click on is affected even if multiple parameters are marked.

• Click on an animation icon to set or remove a key for the corresponding parameter.

• Right-click on an animation icon to open the animation context menu for the corresponding parameter.

• Click the triangle in the top right corner to expand or collapse the parameter set.
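The drag behavior above can be sketched as a small handler. The scaling factors and the pixel-to-value mapping here are invented for illustration; only the drag direction, the modifier roles, and the Alt clamping rule come from the list above.

```python
def drag_step(base_step, ctrl=False, shift=False):
    """Scale a drag increment by modifier keys: Ctrl coarsens, Shift
    refines, Ctrl+Shift refines further (illustrative factors)."""
    if ctrl and shift:
        return base_step * 0.01   # ultra-fine
    if shift:
        return base_step * 0.1    # fine
    if ctrl:
        return base_step * 10.0   # coarse
    return base_step

def apply_drag(value, pixels, base_step, lo=None, hi=None, alt=False,
               ctrl=False, shift=False):
    """Dragging right (positive pixels) increases the value, dragging
    left decreases it. Without Alt, the result is clamped to the slider
    range, mirroring the Alt behavior described above."""
    value += pixels * drag_step(base_step, ctrl, shift)
    if not alt:
        if lo is not None:
            value = max(lo, value)
        if hi is not None:
            value = min(hi, value)
    return value
```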

The color of the animation icon indicates the following information:

• Gray: The parameter is not animated.

• Red: There is a key for the current value at the current frame.

• Yellow: The parameter is animated by an fcurve, and the current value has been modified but not keyed.

• Green: The parameter is animated by an fcurve, and the current value is the interpolated result between keys.

• Blue: The parameter is animated by something other than an fcurve (expression, constraint, mixer, etc.).
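The color code above can be summarized as a lookup function. This Python sketch is illustrative (the argument names are invented); the state-to-color mapping itself comes from the list.

```python
def animation_icon_color(animated_by=None, keyed_at_frame=False,
                         value_modified=False):
    """Return the animation icon color for a parameter's state,
    following the color code listed above."""
    if animated_by is None:
        return "gray"     # not animated
    if animated_by != "fcurve":
        return "blue"     # expression, constraint, mixer, etc.
    if keyed_at_frame:
        return "red"      # key for the current value at the current frame
    if value_modified:
        return "yellow"   # value modified but not yet keyed
    return "green"        # interpolated result between keys
```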

If there is a DisplayInfo property on the scene root, you cannot edit its parameters on-screen unless the scene root is selected.

Scripts

Scripts are text files containing instructions for modifying data in Softimage. They provide a powerful way to automate many tasks and simplify your workflow.

The history pane contains the most recently used commands in your current session. Drag and drop lines into the editing pane to get a head start on your own scripts.

The history pane also contains messages related to importing and exporting, debugging information, and so on.

Get help on the command selected in the editing pane.

The editing pane is a text editor in which you can create scripts by typing or pasting. Right-click for a context menu.

Run the lines selected in the editing pane. If no lines are selected, the entire script is run.

The command box displays the most recent command. Modify the contents or type a new command, then press Enter to execute it. You can also recall any of the last 25 commands.

Script editor icon opens the script editor.

Key Maps

Key maps determine the keyboard combinations that are used to run commands, open windows, and activate tools. You can create your own key maps to create new key bindings or change the default ones.

Key maps are stored as XML-based .xsikm files in the \Application\keymaps subdirectory of the user, workgroup, or factory path. At startup, Softimage gathers the files it finds at these locations and makes them available for selection in the Keyboard Mapping editor.

When you change a key mapping, the new key automatically appears next to the command in menus and context menus. For some menus, you must restart Softimage to see the new label.

Open the keyboard mapping editor by choosing File > Keyboard Mapping from the main menu. Select an existing Key Map, or click New to create a new one.

Keyboard shortcuts are grouped by interface component.

Click an interface component in the Group list to display its commands and their keyboard shortcuts in the Command list.

Click a command in the Command list to display its keyboard shortcut in red.

Create or modify a shortcut by dragging a command label to a shortcut key.

Hold down the Shift, Ctrl, or Alt key while dragging to add a modifier to the new shortcut command.

Remove a shortcut key by selecting a command in the Command box and clicking Clear.

To see which command is mapped to a key, click the appropriate modifiers (Alt, Ctrl, Shift) from the check boxes or the keyboard diagram, then rest your mouse pointer over a key on the keyboard diagram.
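The lookup the editor performs (a key plus its modifiers resolve to one command) can be sketched with a dictionary keyed on normalized bindings. The class and the bindings in the example are invented for illustration; only the Ctrl/Shift/Alt modifier model comes from the text.

```python
def make_binding(key, ctrl=False, shift=False, alt=False):
    """Normalize a key plus modifier flags into a hashable binding."""
    mods = frozenset(
        name for name, on in (("Ctrl", ctrl), ("Shift", shift), ("Alt", alt))
        if on
    )
    return (key.lower(), mods)

class KeyMap:
    """Resolves key+modifier combinations to command names, so the same
    key can carry different commands under different modifiers."""
    def __init__(self):
        self._bindings = {}

    def assign(self, command, key, **mods):
        self._bindings[make_binding(key, **mods)] = command

    def lookup(self, key, **mods):
        return self._bindings.get(make_binding(key, **mods))

    def clear(self, key, **mods):
        self._bindings.pop(make_binding(key, **mods), None)
```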

The keyboard keys are color-coded to indicate the following:

• White: no keyboard shortcut has been assigned to this key.

• Beige: a keyboard shortcut from another interface component has been assigned to this key.

• Light Brown: a keyboard shortcut from the currently selected interface component has been assigned to this key.

• Red: this keyboard shortcut corresponds to the currently selected item in the Command box.

To see key conflicts with other windows, select View and choose a window from the adjacent list. Keys that are used by the selected window are highlighted in dark brown. For combinations involving modifiers, select the appropriate Ctrl, Shift, and Alt boxes or press and hold those keys on your keyboard.

Other Customizations

In addition to the customizations briefly mentioned so far, there are many other ways you can extend Softimage:

• Custom commands can automate repetitive or difficult tasks. Commands can be scripted or compiled.

• Custom operators can automatically update data in the operator stack. Operators can be scripted or compiled.

• Layouts define the main window of Softimage. You can create layouts based on your preferences or common tasks.

• Views can be floating or embedded in a layout. You can create views for specialized tasks.

• Events run automatically when certain situations occur in Softimage.

• Synoptic views allow you to run scripts by clicking hotspots in an image. For example, you can create custom control panels for a rig.

• Net View allows you to create an HTML interface for sharing scripts, models, and other data.

• Shaders give you complete control over the final look of your work.

For more information about customizing Softimage, see the SDK Guides, as well as Customization in the Softimage Guides.
