(videogame) Rendering 101


Transcript of (videogame) Rendering 101

Page 1: (videogame) Rendering 101

1

Page 2: (videogame) Rendering 101

There are plenty of beginner tutorials and books about OpenGL, DirectX, or rendering for videogames. This 101 won't be like that: it will provide a much more high-level overview, plus some notions that derive from my own experience. I'll provide plenty of pointers and links in the notes for further reading...


Page 3: (videogame) Rendering 101


Page 4: (videogame) Rendering 101


Page 5: (videogame) Rendering 101

Jim Kajiya... the author of the rendering equation (independently formulated in the same year by David Immel: http://dl.acm.org/citation.cfm?doid=15922.15901, in the context of finite-element solutions, i.e. radiosity). Stephen Hawking wrote in the preface of one of his books that his editor once told him that each equation in the book would lose him half of the audience. So he included only one, and that's what I'll do too... The equation is in an ugly form; I took the image from Wikipedia.


Page 6: (videogame) Rendering 101

To generate an image, we want to know, for every point in the scene, the light that reaches our virtual eye. The basic device is something that tells us how much light goes from a point in a given direction; this is what the lighting equation describes. We want to compute the outgoing light ("Lo", a radiance) leaving a point (the big black arrow) towards our sensor (camera/eye):

1) We need to know all the light that arrives at that point. Only lights emit light (accounted for by the term Le, which is non-zero if the point x is on a light); all the other points in the scene just reflect some energy that arrives from somewhere else... In math, this is an integral (a sum) over a hemisphere (a point can't receive light from directions behind its surface) of "Li", the incoming light. This quantity is called "irradiance".

2) Each ray of light that reaches our point is scattered by the surface material in some direction, with some intensity. A function defines the properties of the material; in the equation it is the "fr(...)" term. It's called the BRDF, bi-directional reflectance distribution function ("bi-directional" because it takes two angles: the incoming one from Li and the outgoing one we're computing Lo on). It models microscopic surface properties that we don't capture in the geometrical model itself...

What counts as "microscopic" depends on the scale of the simulation, and sometimes a given property in a scene is modelled geometrically but fades into the scale of the BRDF as the object shrinks on screen. A typical example is the ocean, for which a geometry-to-BRDF fading model was recently proposed ("Real-time Realistic Ocean Lighting using Seamless Transitions from Geometry to BRDF", Computer Graphics Forum 29).

The problem is that we don't know the incoming light ("Li") from all the directions at that point! To compute it, we need to solve another light-from-a-point-towards-a-direction problem; that is, our equation is recursive. In general, even to compute effects as simple and fundamental as shadows, we have to consider the other objects in the scene to find the color of each one (thus we call these effects "global" illumination).
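The notes above can be tied back to the equation itself; written out cleanly (standard hemisphere form, with the same symbols used in the notes):

```latex
L_o(x, \omega_o) \;=\; L_e(x, \omega_o) \;+\; \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Here Lo, Le and Li are the outgoing, emitted and incoming radiance, fr is the BRDF, and the cosine term (ωi · n) accounts for light arriving at grazing angles being spread over a larger surface area.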


Page 7: (videogame) Rendering 101

The very core of rendering is all in that equation... well, plus some signal/sampling theory (as you're going to make an image out of pixels), some knowledge about color and human perception, and a sprinkle of geometry so you have math able to represent your scene... Et voilà! (at least for static images, in theory...)
---
SmallPT, path tracing in 99 lines of C++ code: http://www.kevinbeason.com/smallpt/ (the image took 10 hours on an Intel Core 2 Quad at 2.4 GHz, using 4 threads). The CUDA version is very interesting: http://code.google.com/p/tokaspt/
Other fun source code to play with:
http://igad.nhtv.nl/~bikker/
http://www.hxa.name/minilight/
http://www.pbrt.org/
http://sunflow.sourceforge.net/


Page 8: (videogame) Rendering 101

Aaaaaarrrrghhh... We need to know the light in order to compute the light... infinite recursion! Under some conditions, though, it's like a spiral: it converges to a point, which we can compute.


Page 9: (videogame) Rendering 101

The series solution is called the "Liouville-Neumann" series. The best reference for this theory (even if it's quite heavy in terms of math) is Eric Veach's doctoral dissertation. Monte Carlo methods are not the only way to solve light transport, even if they are probably the most popular in modern global-illumination renderers. For further investigation see: Path Tracing (advanced: Bi-Directional Path Tracing, Metropolis Light Transport, Energy-Distribution Path Tracing, Quasi Monte Carlo Methods) - Radiosity - Instant Radiosity - Photon Mapping (advanced: Progressive Photon Mapping)
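The recursion and its series solution can be made concrete with a toy, discretized version of the transport problem: collapse the scene to a handful of patches and the equation becomes L = E + T L, where summing bounce after bounce is exactly the Neumann series. A minimal sketch (all numbers are invented for illustration):

```python
# Toy discretized light transport over 3 "patches":
# L = E + T L, where T scatters a fraction of each patch's energy
# to the others. Every number here is made up for illustration.
E = [1.0, 0.0, 0.0]                      # only patch 0 emits light
T = [[0.0, 0.3, 0.1],
     [0.3, 0.0, 0.2],
     [0.1, 0.2, 0.0]]                    # rows sum to < 1: some energy is lost
                                         # each bounce, so the series converges

def scatter(v):
    """One application of the transport operator T (one more bounce)."""
    return [sum(T[i][j] * v[j] for j in range(3)) for i in range(3)]

# Neumann series: L = E + T E + T^2 E + ...
# (term k is the light that has bounced exactly k times)
L = [0.0, 0.0, 0.0]
term = list(E)
for bounce in range(100):
    L = [a + b for a, b in zip(L, term)]
    term = scatter(term)

print(L)  # converged radiance: satisfies L = E + T L
```

Because each row of T sums to less than 1 (a physically plausible "no surface reflects more than it receives"), each bounce carries less energy and the spiral converges; with a row summing to 1 or more the series would diverge, which is the "under some conditions" caveat above.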


Page 10: (videogame) Rendering 101

Not computable: search for the "raytrix" at Stanford. Approximated: we consider only straight rays; the equation here does not consider atmosphere and volumes, nor that light penetrates into some materials and exits at a different point (e.g. wax, human skin); it does not consider other minor effects like fluorescence and phosphorescence, diffraction, polarization, dispersion, interference... All of these are conceptually easy to model in the equation, but trivial solutions are intractably slow.


Page 12: (videogame) Rendering 101


Page 13: (videogame) Rendering 101


Page 14: (videogame) Rendering 101


Page 15: (videogame) Rendering 101


Page 17: (videogame) Rendering 101

Writing a software rasterizer is cool.
Scanline: http://www.flipcode.com/documents/fatmap.txt http://chrishecker.com/Miscellaneous_Technical_Articles#Perspective_Texture_Mapping
Half-plane: http://www.devmaster.net/forums/showthread.php?t=1884
Reyes: http://en.wikipedia.org/wiki/Reyes_rendering http://www.aqsis.org/
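To give a taste of the half-plane approach linked above: a triangle is the intersection of three half-planes, so a pixel is inside if it sits on the correct side of all three edge functions. A minimal sketch (the helper names and the test triangle are mine, not from the linked articles):

```python
# Minimal half-plane (edge function) triangle rasterizer sketch.
# Pure Python, pixel-center sampling, no perspective or fill rules.

def edge(ax, ay, bx, by, px, py):
    """Signed area of (a, b, p): >= 0 when p is on the left of edge a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(v0, v1, v2, width, height):
    """Return the set of pixels whose centers a CCW triangle covers."""
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5          # sample at the pixel center
            inside = (edge(*v0, px, py) >= 0 and   # left of v0->v1
                      edge(*v1, px, py) >= 0 and   # left of v1->v2
                      edge(*v2, px, py) >= 0)      # left of v2->v0
            if inside:
                covered.add((x, y))
    return covered

# Edges as (ax, ay, bx, by) of a counter-clockwise triangle.
e0 = (1, 1, 14, 2)
e1 = (14, 2, 4, 13)
e2 = (4, 13, 1, 1)
pixels = rasterize(e0, e1, e2, 16, 16)
print(len(pixels), "pixels covered")
```

Real rasterizers make this fast by computing the three edge functions incrementally (they are linear in x and y) and by walking only a bounding box or tiles, but the inside test is exactly this.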


Page 20: (videogame) Rendering 101

Amazon links:
http://www.amazon.com/Real-Time-Rendering-Third-Tomas-Akenine-Moller/dp/1568814240/ref=sr_1_1?ie=UTF8&qid=1317419811&sr=8-1
http://www.amazon.com/Physically-Based-Rendering-Second-Implementation/dp/0123750792/ref=sr_1_1?ie=UTF8&qid=1317689281&sr=8-1
http://www.amazon.com/Real-Time-Collision-Detection-Interactive-Technology/dp/1558607323/ref=sr_1_13?ie=UTF8&qid=1317419811&sr=8-13
http://www.amazon.com/Computer-Graphics-3rd-Alan-Watt/dp/0201398559/ref=pd_sim_b56
http://www.amazon.com/Advanced-Global-Illumination-Second-Philip/dp/1568813074/ref=pd_sim_b2
http://www.amazon.com/Realistic-Image-Synthesis-Photon-Mapping/dp/1568814623/ref=sr_1_1?s=books&ie=UTF8&qid=1317420066&sr=1-1
http://www.amazon.com/Production-Rendering-Ian-Stephenson/dp/1852338210/ref=sr_1_1?ie=UTF8&qid=1317689536&sr=8-1


Page 21: (videogame) Rendering 101

• "mostly" local... Commonly the shading (color) of a surface depends only on the surface and the lights, plus shadows, which are the only widespread global effect.

• We still don't do our math right in a LOT of cases! Everything gets complicated when your shading in practice depends on hundreds of material parameters (and hacks), different rendering passes/effects, and a lot of source data (images, geometry...).


Page 23: (videogame) Rendering 101

Slide stolen from the Fight Night Champion GDC 2011 presentation: http://c0de517e.blogspot.com/2011/09/fight-night-champion-gdc.html --- Shaders are some sort of black box. They model a given function (a BRDF or, better, a given approximation of the whole rendering equation), and each parameter can change part of the function's shape. The art direction and visual targets define an "objective function" that we don't know in any mathematical form. Artists are "optimizers": they try to find the parameters that best fit the shader function to the art direction. More parameters, more degrees of freedom = harder work for the artists. We need to guide the artists: provide intuitive models, limit parameter ranges and dependencies. Math and physics guide our process; they make us understand what to add and why, whether a parameter makes sense or not, and what the root cause of a given visual defect is.
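The "artists as optimizers" idea can be sketched mechanically: take a toy shader with one parameter and search for the value that best matches a reference curve. Everything here (the Blinn-Phong-style lobe, the sample points, the parameter range) is invented for illustration:

```python
# Toy "artist as optimizer": fit one shader parameter (a specular
# shininess exponent) to a reference highlight falloff by brute force.

def shader(cos_h, shininess):
    """Toy specular lobe: highlight intensity vs. half-angle cosine."""
    return cos_h ** shininess

# "Art direction": a reference falloff we want to match. Here we fake
# it by evaluating the shader at a known shininess of 42.
samples = [0.99, 0.97, 0.95, 0.90, 0.80]
target  = [shader(c, 42.0) for c in samples]

# The "artist" tries every parameter value and keeps the best fit
# (least-squares error against the reference curve).
best_p, best_err = None, float("inf")
for p in range(1, 129):
    err = sum((shader(c, p) - t) ** 2 for c, t in zip(samples, target))
    if err < best_err:
        best_p, best_err = p, err

print("best shininess:", best_p)
```

The search recovers 42 because the target was generated from the same function; a real artist is doing the same loop by eye against a reference that exists only in the art direction, which is why every extra parameter multiplies the size of the space they have to explore.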


Page 25: (videogame) Rendering 101

Slide stolen from the Fight Night Champion GDC 2011 presentation: http://c0de517e.blogspot.com/2011/09/fight-night-champion-gdc.html --- Some big changes are still perceptually believable. The black-and-white image also has more global and local contrast. Global blurring does not change our ability to recognize the "human" quality of the face (but blurring only the skin detail while retaining the edges would, for example!). Most visual qualities are relative: the lips in the bottom-left picture appear redder than the ones on top, even though only the skin around them got a slight cool tint; the lips did not change. Also, some "minor" changes in absolute values can make huge differences: the bottom-mid image has a bit of green added to the midtones, while the bottom-right one only changed the reds, making them more pink.


Page 26: (videogame) Rendering 101

In detail: http://c0de517e.blogspot.com/2008/04/gpu-part-1.html http://c0de517e.blogspot.com/2008/04/how-gpu-works-part-2.html http://c0de517e.blogspot.com/2008/04/how-gpu-works-part-3.html http://c0de517e.blogspot.com/2009/05/how-gpu-works-appendix.html http://c0de517e.blogspot.com/2008/07/gpu-versus-cpu.html


Page 28: (videogame) Rendering 101


Page 29: (videogame) Rendering 101


Page 30: (videogame) Rendering 101


Page 31: (videogame) Rendering 101

Culling is needed because our scenes are often huge: we can't send _everything_ to the GPU and hope it will generate something fast enough... Many things can be overlapped between the CPU and GPU. Here we highlight only that when we send commands from the CPU to the GPU, the two chips operate in parallel, and when they need to talk to each other they need synchronization. In practice the overlap is there during the whole execution: usually, while the renderer is pulling data from the game, the GPU is still busy rendering the previous frame, for example. All real renderers also generate an image in many different passes: they might need to render the scene from the point of view of the lights (to cast shadows), generate many intermediate views and images, combine and post-process them, and use some as inputs for the rendering of others.
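The CPU/GPU overlap described above can be sketched as a producer/consumer queue: the CPU records command lists for frame N while the "GPU" drains frame N-1. The pass names and the one-frame latency here are illustrative, not any real API:

```python
from collections import deque

# Sketch of CPU/GPU frame overlap with a one-frame-deep command queue.
# The "GPU" is simulated by draining command lists; in reality both
# sides run concurrently and sync points stall whoever gets ahead.

def record_frame(n):
    """CPU side: build the command lists for frame n, in pass order."""
    return [f"frame {n}: shadow pass (render from each light)",
            f"frame {n}: main pass (reads the shadow maps)",
            f"frame {n}: post-process (tonemap, blur, ...)"]

gpu_queue = deque()
executed = []                               # what the "GPU" has run so far

for frame in range(3):
    gpu_queue.append(record_frame(frame))   # CPU submits frame N...
    if len(gpu_queue) > 1:                  # ...while the GPU drains frame N-1
        executed.extend(gpu_queue.popleft())

# At shutdown (or on an explicit sync), whatever is in flight is drained.
while gpu_queue:
    executed.extend(gpu_queue.popleft())

print(len(executed), "commands executed")
```

Note how the shadow pass is recorded before the main pass that consumes it: ordering dependencies between passes exist even though the CPU and GPU never run the same frame at the same time.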


Page 32: (videogame) Rendering 101

These images were generated by instrumenting a retail PC copy of Space Marine (Relic/THQ): http://www.spacemarine.com/


Page 33: (videogame) Rendering 101

Content production is a HUGE deal... I won't delve into any of that. We often write tools, so we should think about architectures. Iteration is fundamental on both the data (art) and code ends. Algorithms and techniques can save orders of magnitude of content work (i.e. automatic mesh processing for LODs, occlusion, reduction of the number of parameters, etc... removing work! Less work), while neat tools/UIs/workflows can "only" make work faster (same amount of work, but more efficiently done... still fundamental, especially as interactive feedback is crucial to nail a given look, to have the artists/parameter-optimizers work at their best). We are currently not great in either area: we don't engineer great workflows and UIs, and we don't spend enough money on algorithms that serve content... But we are getting better (we HAVE to, or we close). Engineering...

How do we paint our triangles? What color? Which lights? How does the light interact with the materials?

What commands do we want to send to the GPU? What objects are currently visible in the scene? What is the best way of drawing what we need to see?

How do we organize our resources? What is the source data? How do we load it? Streaming? Loading logic... How do we store it? Data pipeline... Memory is always a problem.

How do we process our vertices? Do we need to move them? Animation... Do we need to draw multiple copies? Instancing...

Sometimes we write PowerPoint presentations...
