Lighting, part 2 CSE167: Computer Graphics Instructor: Steve Rotenberg UCSD, Fall 2006.


Page 1

Lighting, part 2

CSE167: Computer Graphics

Instructor: Steve Rotenberg

UCSD, Fall 2006

Page 2

Triangle Rendering

The main stages in the traditional graphics pipeline are: Transform, Lighting, Clipping / Culling, Scan Conversion, and Pixel Rendering

Page 3

Lighting

Lighting is an area of the graphics pipeline whose complexity seems to have no limit, and it continues to be a subject of active research

Many advanced lighting techniques are very complex and require massive computational and memory resources

New algorithms continue to be developed to optimize various pieces within these different areas

The complexity of lighting within the context of photoreal rendering completely dwarfs the other areas of the graphics pipeline to the point where it can almost be said that rendering is 99% lighting

The requirements of photoreal lighting have caused radical modifications to the rendering process, to the point that modern high quality rendering bears very little resemblance to the processes we’ve studied so far

For one thing, so far, we’ve talked about rendering triangle-by-triangle, whereas photoreal rendering is generally done pixel-by-pixel, but we will look at these techniques in more detail in a later lecture

Page 4

Basic Lighting & Texturing

So far, we have mainly focused on the idea of computing lighting at the vertex and then interpolating this across a triangle (Gouraud shading), and then combining it with a texture mapped color to get the final coloring for a pixel

We also introduced the Blinn reflection model, which treats a material’s reflectivity as the sum of diffuse and specular components

We saw that zbuffering is a simple, powerful technique of hidden surface removal that allows us to render triangles in any order, and even handle situations where triangles intersect each other

We also saw that texture mapping (combined with mipmapping to fix the shimmering problems) is a nice way to add significant detail without adding tons of triangles

We haven’t covered transparency or fog yet, but we will in just a moment

This classic approach of ‘Blinn lit, Gouraud shaded, z-buffered, mipmapped triangles with transparency and fog’ essentially forms the baseline of what one needs to achieve any sort of decent quality in a 3D rendering

The biggest thing missing is shadows, but with a few tricks, one can achieve this as well as a wide variety of other effects

Page 5

Blinn-Gouraud-zbuffer-mipmap-fog-transparency

This was the state of the art in software rendering back around 1978, requiring only a couple hours to generate a decent image (on a supercomputer)

This was the state of the art in realtime graphics hardware in 1988, allowing one to render perhaps 5000 triangles per frame, at 720x480 resolution, at 60 frames per second (assuming one could afford to spend $1,000,000+ for the hardware)

By the late 1990’s, consumer hardware was available that could match that performance for under $200

The Sony PS2 essentially implements this pipeline, and can crank out maybe 50,000 triangles per frame at 60 Hz

The XBox was the first video game machine to progress beyond this basic approach, and high end PC graphics boards were starting to do it a couple years before the XBox (maybe around 2000)

Modern graphics boards support general purpose programmable transformation/lighting operations per vertex, as well as programmable per-pixel operations including Phong shading, per-pixel lighting, and more, but still operate on one triangle at a time, and so still fall within the classification of traditional pipeline renderers

Page 6

Per Vertex vs. Per Pixel Lighting

We can compute lighting per vertex and interpolate the color (Gouraud shading)

Or we can interpolate the normals and compute lighting per pixel (Phong shading)

The two approaches compute lighting at different locations, but still can use exactly the same techniques for computing the actual lighting

In either case, we are still just computing the lighting at some position with some normal
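A minimal sketch of the difference, using a stand-in shade() function and assumed helper types (not the course's own classes); the point is that the same lighting function is called either once per vertex or once per pixel:

```cpp
#include <cmath>

// Hypothetical minimal types and helpers, for illustration only.
struct Vec3 { float x, y, z; };
using Color = Vec3;
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  scale(Vec3 v, float s) { return {v.x*s, v.y*s, v.z*s}; }
static Vec3  normalize(Vec3 v) { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

// Stand-in reflection model: one assumed directional light, Lambert only.
static Color shade(const Vec3& /*position*/, const Vec3& n) {
    const Vec3 l = normalize({0.5f, 1.0f, 0.25f});        // assumed light direction
    float ndotl = std::fmax(dot(normalize(n), l), 0.0f);  // clamp backfacing light
    return {ndotl, ndotl, ndotl};                         // grey diffuse
}

// Gouraud: evaluate lighting at the three vertices; the rasterizer then
// interpolates the resulting colors across the triangle.
void lightVerticesGouraud(const Vec3 pos[3], const Vec3 nrm[3], Color out[3]) {
    for (int i = 0; i < 3; ++i) out[i] = shade(pos[i], nrm[i]);
}

// Phong: the rasterizer interpolates position and normal instead, and the
// same shade() function is evaluated once per pixel.
Color lightPixelPhong(const Vec3& pixelPos, const Vec3& pixelNrm) {
    return shade(pixelPos, pixelNrm);
}
```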

Page 7

Classic Lighting Model

The classic approach to lighting is to start by defining a set of lights in the scene. There are a variety of simple light types, but the most basic ones are directional and point lights. Each light in the scene needs to have its type specified as well as any other relevant properties (color, position, direction…). Geometric properties are usually specified in world space, although they may end up getting transformed to camera space, depending on the implementation

And a set of materials. Materials define properties such as: diffuse color, specular color, and shininess

And then a bunch of triangles. Each triangle has a material assigned to it, and triangles can also specify a normal for each vertex

Then we proceed with our rendering: when we render a triangle, we first apply the lighting model to each vertex. For each vertex, we loop through all of the lights and compute how that light interacts with the position, normal, and unlit color of the vertex, ultimately computing the total color of the light reflected in the direction of the viewer (camera)

This final color per vertex value is interpolated across the triangle in the scan conversion and then combined with a texture color at the pixel level
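As a rough sketch of this setup, with illustrative names and types (not the actual course code), the per-vertex pass over a triangle might look like this; shadeOneLight here is only a stub standing in for whatever reflection model is used:

```cpp
#include <vector>

// Hypothetical minimal types; names are illustrative.
struct Vec3 { float x, y, z; };
using Color = Vec3;

enum class LightType { Directional, Point };

struct Light {
    LightType type;
    Color     color;
    Vec3      position;   // used by point lights
    Vec3      direction;  // used by directional lights
};

struct Material {
    Color diffuse;
    Color specular;
    float shininess;
};

// Stub: stands in for the actual reflection model (Lambert, Blinn, ...).
static Color shadeOneLight(const Light& lgt, const Material& /*mtl*/,
                           const Vec3& /*pos*/, const Vec3& /*nrm*/,
                           const Vec3& /*eye*/) {
    return lgt.color;
}

// Per-vertex lighting pass for one triangle: loop over all lights and
// accumulate their contributions; the result is interpolated later and
// combined with the texture color at the pixel stage.
void lightTriangleVertices(const std::vector<Light>& lights, const Material& mtl,
                           const Vec3 pos[3], const Vec3 nrm[3], const Vec3& eye,
                           Color out[3]) {
    for (int v = 0; v < 3; ++v) {
        Color total = {0, 0, 0};
        for (const Light& lgt : lights) {
            Color c = shadeOneLight(lgt, mtl, pos[v], nrm[v], eye);
            total = {total.x + c.x, total.y + c.y, total.z + c.z};
        }
        out[v] = total;
    }
}
```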

Page 8

Lighting

[Diagram: a point on a surface with a given material and normal n, illuminated by two point lights and a directional light whose colors c1, c2, c3 arrive along directions l1, l2, l3; a view vector (labeled e and v in the figure) points toward the camera, and the question is the total reflected color c = ?]

Page 9

Incident Light

To compute a particular light’s contribution to the total vertex/pixel color, we start by computing the color of the incident light

The incident light color clgt represents the actual light reaching the surface in question

For a point light, for example, the actual incident light is going to be the color of the source, but will be attenuated based on the inverse square law (or some variation of it)

We also need to know the incident light direction. This is represented by the unit length vector l (that’s supposed to be a lower case L)

Computing the incident light color & direction is pretty straightforward, but will vary from light type to light type
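A small sketch, assuming simple inverse-square attenuation and minimal helper types (names are illustrative), of how the incident color and direction might be computed for the two basic light types:

```cpp
#include <cmath>

// Illustrative sketch; the types and the attenuation choice are assumptions.
struct Vec3 { float x, y, z; };
using Color = Vec3;
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Point light: l points from the surface toward the light, and the incident
// color clgt falls off with the inverse square of the distance (variations exist).
void incidentFromPointLight(const Vec3& lightPos, const Color& lightColor,
                            const Vec3& surfacePos, Vec3& l, Color& clgt) {
    Vec3  toLight = sub(lightPos, surfacePos);
    float dist    = std::sqrt(dot(toLight, toLight));
    l     = {toLight.x / dist, toLight.y / dist, toLight.z / dist};  // unit length
    float atten = 1.0f / (dist * dist);                              // inverse square law
    clgt  = {lightColor.x * atten, lightColor.y * atten, lightColor.z * atten};
}

// Directional light: the direction and color are the same everywhere.
void incidentFromDirectionalLight(const Vec3& lightDir /*unit, toward light*/,
                                  const Color& lightColor, Vec3& l, Color& clgt) {
    l    = lightDir;
    clgt = lightColor;
}
```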

Page 10

Reflection Models

Different materials reflect light in different patterns. The way a particular material reflects light is referred to as a reflection model (or sometimes as a lighting model, shading model, local illumination model, scattering model, or BRDF)

The reflection model takes the direction and color of the incident light coming from a particular light source and computes what color of light is going to be reflected in the direction of the camera

Perfect diffuse reflectors reflect light uniformly in every direction, and so we don’t even need to know where the camera is. However, almost any other type of material is going to scatter light differently in different directions, so we must consider the camera direction

The Lambert reflection model treats the material as an ideal diffuse reflector

The Blinn reflection model treats the material as having a sum of diffuse and specular components. Each is computed independently and simply added up

Page 11

Diffuse Reflection

The diffuse lighting is going to follow Lambert’s law and be proportional to the cosine of the angle between the normal and the light direction (in other words, n·l)

It will also be proportional to the diffuse color of the material, mdif, giving a final diffuse color of

c = mdif * clgt * (n · l)

This is the Lambert reflection model

Also, if the dot product is negative, indicating that the light is on the wrong side of the surface, we clamp it to zero

Note that the product mdif * clgt is computed by performing component-by-component multiplication (not a dot or cross product)

[Diagram: a surface with normal n, incident light of color clgt arriving along direction l, and diffuse material color mdif]
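A small sketch of this term, under assumed minimal vector types (not the course's own Vector3 class):

```cpp
#include <algorithm>

// Sketch of the Lambert diffuse term; Vec3/Color are assumed helper types.
struct Vec3 { float x, y, z; };
using Color = Vec3;
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Color mul(Color a, Color b) { return {a.x*b.x, a.y*b.y, a.z*b.z}; } // component-wise

// c = mdif * clgt * (n · l), with the dot product clamped to zero when the
// light is on the wrong side of the surface.
Color lambertDiffuse(const Color& mdif, const Color& clgt,
                     const Vec3& n, const Vec3& l) {
    float ndotl = std::max(dot(n, l), 0.0f);
    Color c = mul(mdif, clgt);                 // component-by-component multiply
    return {c.x * ndotl, c.y * ndotl, c.z * ndotl};
}
```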

Page 12

Specular Highlights

We assume that the basic material is specularly reflective (like a metal), but with a rough surface that causes the actual normals to vary at a small scale

We will say that the surface at a microscopic scale is actually composed of many tiny microfacets, which are arranged in a more or less random fashion

Page 13

Specular Highlights

The surface roughness will vary from material to material

With smooth surfaces, the microfacet normals are very closely aligned to the average surface normal

With rougher surfaces, the microfacet normals are spread around more, but we would still expect to find more facets close to the average normal than far from the average

Smooth surfaces have sharp highlights, while rougher surfaces have larger, more blurry highlights

[Images: microfacet normal distributions and the resulting highlights for polished, smooth, rough, and very rough surfaces]

Page 14

Specular Highlights

[Diagram: a surface with normal n, light direction l and incident color clgt, material color mdif, eye vector e, and halfway vector h]

To compute the highlight intensity, we start by finding the unit length halfway vector h, which is halfway between the vector l pointing to the light and the vector e pointing to the eye (camera)

h = (l + e) / |l + e|

Page 15

Specular Highlights

The halfway vector h represents the direction that a mirror-like microfacet would have to be aligned in order to cause the maximum highlight intensity

[Diagram: same configuration as the previous slide, showing n, l, clgt, mdif, e, and h]

Page 16

Specular Highlights

The microfacet normals will point more or less in the same direction as the average surface normal, so the further that h is from n, the less likely we would expect the microfacets to align

In other words, we want some sort of rule that causes highlights to increase in brightness in areas where h gets closer to n

The Blinn lighting model uses the following value for the highlight intensity:

f = (n · h)^s

where s is the shininess or specular exponent

Page 17

Specular Highlights

h·n will be 1 when h and n line up exactly and will drop off to 0 as they approach 90 degrees apart

Raising this value to an exponent retains the behavior at 0 and 90 degrees, but the dropoff increases faster as s gets higher, thus causing the highlight to get narrower

f = (n · h)^s

Page 18

Blinn Reflection Model

To account for highlights, we simply add a specular component to our existing diffuse equation:

This is essentially the Blinn reflection model. It appears in a few slightly different forms and in a wide variety of notations…

c = clgt * ( mdif * (n · l) + mspec * (n · h)^s )
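As a hedged sketch (the helper types and names are assumptions, not the course framework), the Blinn model for a single light might be implemented like this:

```cpp
#include <algorithm>
#include <cmath>

// Sketch of the Blinn model for a single light; helper types are assumptions.
struct Vec3 { float x, y, z; };
using Color = Vec3;
static Vec3  add(Vec3 a, Vec3 b)   { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  scale(Vec3 v, float s){ return {v.x*s, v.y*s, v.z*s}; }
static Vec3  normalize(Vec3 v)     { return scale(v, 1.0f / std::sqrt(dot(v, v))); }
static Color mul(Color a, Color b) { return {a.x*b.x, a.y*b.y, a.z*b.z}; }

// c = clgt * ( mdif*(n·l) + mspec*(n·h)^s ), with h = normalize(l + e)
Color blinn(const Color& clgt, const Color& mdif, const Color& mspec, float s,
            const Vec3& n, const Vec3& l, const Vec3& e) {
    Vec3  h     = normalize(add(l, e));                      // halfway vector
    float ndotl = std::max(dot(n, l), 0.0f);                 // clamp backfacing light
    float ndoth = std::pow(std::max(dot(n, h), 0.0f), s);    // highlight term (n·h)^s
    Color diffuse  = scale(mul(mdif,  clgt), ndotl);
    Color specular = scale(mul(mspec, clgt), ndoth);
    return add(diffuse, specular);
}
```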

Page 19

Multiple Lights

To account for multiple lights, we just sum up the contribution from each individual light

We have also included the cheesy ambient term mamb * camb that approximates uniform background light

c = mamb * camb + Σi clgt,i * ( mdif * (n · li) + mspec * (n · hi)^s )

Page 20

Classic Lighting Model

This classic lighting model is a decent place to start, as it allows for a few simple light types and for a reasonable range of material properties

There are several obvious things that would need to be modified, however, to improve the overall quality of the lighting

c = mamb * camb + Σi clgt,i * ( mdif * (n · li) + mspec * (n · hi)^s )

Page 21

Shadows

Adding shadows to the model requires determining if anything is between the light source and the point to be lit

If something is blocking it, then that particular light source is skipped, and doesn’t contribute to the total lighting

In a sense, we could say that the lighting model already includes this, since the incident light would just be 0 in this case

Conceptually, this is a trivial modification to the actual lighting model itself; however, implementing it requires considerable changes to the way we’ve been thinking about rendering so far

To implement shadows this way would require some type of function that takes two points in space (representing the position of the vertex and the position of the light source) and tests to see if the line segment between the two points intersects any triangle within the entire scene

Assuming we had such a function, it is pretty clear that we would want to compute shadows pixel by pixel, instead of simply at the vertices, if we expect to get sharp shadows cast onto low detail geometry

c = mamb * camb + Σi clgt,i * ( mdif * (n · li) + mspec * (n · hi)^s )
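As a conceptual sketch (names are illustrative, and the segment vs. triangle intersection test is passed in rather than implemented here), such a shadow query might look like:

```cpp
#include <functional>
#include <vector>

// Conceptual sketch only; the intersection test itself is supplied by the caller.
struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v0, v1, v2; };

using SegmentTriangleTest =
    std::function<bool(const Vec3&, const Vec3&, const Triangle&)>;

// Returns true if any triangle in the scene blocks the segment between the
// point being lit and the light position; if so, that light is skipped.
bool inShadow(const Vec3& surfacePoint, const Vec3& lightPos,
              const std::vector<Triangle>& scene,
              const SegmentTriangleTest& segmentHitsTriangle) {
    for (const Triangle& tri : scene)
        if (segmentHitsTriangle(surfacePoint, lightPos, tri))
            return true;     // occluded
    return false;            // brute force; real renderers use spatial structures
}
```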

Page 22

Direct/Indirect Light

One obvious problem stems from the fact that the model only considers light coming directly from light sources (called direct light) and drastically simplifies bounced light (called indirect light) in the ambient term

In reality, light will be shining onto a surface from every direction in the hemisphere around the surface

True, the light is most likely to be strongest in the direction of the light sources, but properly handling this issue is important to high quality rendering

To modify the model to account for light coming from every direction, we start by dropping the ambient term and replacing the summation over light directions with an integration over the hemisphere of directions. This leads to the advanced rendering topic of global illumination

c = mamb * camb + Σi clgt,i * ( mdif * (n · li) + mspec * (n · hi)^s )
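Written out in a standard form (this notation is an assumption, not taken from the slides), the sum over lights becomes an integral over the hemisphere Ω of incoming directions, weighted by the reflection model f_r:

```latex
% Standard hemisphere-integral formulation (notation assumed, not from the slides):
% f_r is the reflection model (BRDF), \Omega the hemisphere above the surface,
% and c_{lgt}(\mathbf{l}) the light arriving from direction \mathbf{l},
% whether it comes directly from a light source or has bounced off other surfaces.
c \;=\; \int_{\Omega} f_r(\mathbf{l}, \mathbf{e}) \, c_{lgt}(\mathbf{l}) \,
        (\mathbf{n}\cdot\mathbf{l}) \; d\omega
```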

Page 23

Photoreal Rendering

Two key components to photoreal rendering are:

Local illumination: accurate modeling of light scattering at surfaces (what we have been calling the reflection model)

Global illumination: accurate modeling of light propagation throughout a complex environment

Implementing these two things requires several additional complex systems that bear little direct resemblance to the two components listed

Page 24

Photoreal Rendering Features

Model local material properties: diffuse/specular scattering; transparency, refraction, absorption; subsurface scattering (translucency); volumetric scattering

Simulate global lighting behavior: shadows; diffuse interreflection (soft lighting, color bleeding…); specular interreflection & caustics

Model light sources: accurate handling of color; accurate modeling of light emission geometry; area & volumetric light sources

Cameras: simulate camera & lens effects (focus, blooming, halos, lens flares…); simulate behavior of CCD/film/retina exposure & color compensation; handle high dynamic range color information; simulate shutter & motion blur effects

etc.

Page 25

Reflection Models

There is a lot to say about reflection models

The Blinn reflection model is a very simple example. In fact, it has been shown to violate some very important physical laws and so can’t be used in photoreal renderers

There have been many advanced ones designed to model the optical behavior of a wide range of materials

However, a truly useful reflection model must satisfy a wide range of complex mathematical and user interface requirements and it has proven very difficult to come up with powerful, general purpose ones

This continues to be an active area of research in modern computer graphics

Page 26

Cook-Torrance

An early attempt at a more physically plausible reflection model is the Cook-Torrance model

The book spends some time talking about this and it is worth reading

However, the Cook-Torrance model has its share of mathematical problems and isn’t very popular in modern rendering

It was, however, one of the first of a long chain of reflection models that have been proposed in graphics literature over the years

Page 27

Transparency

Transparency is another important feature which is implemented at the pixel rendering stage

Photoreal renderers treat transparency and translucency in very sophisticated ways

For now, we will just look at some simple approaches that are compatible with the traditional graphics pipeline

Page 28

Alpha

We need some sort of way to define the transparency of an object

The most common approach is to add an additional property to material colors called alpha

Alpha actually represents the opacity of an object and ranges from 0 (totally invisible) to 1 (fully opaque). A value between 0 and 1 will be partially transparent

Alpha is usually combined with rgb colors (red, green, blue) so that colors are actually represented as rgba

The alpha value can usually be represented with an 8 bit quantity, just like red, green, and blue, so we can store colors as a 32 bit rgba value. This is convenient, as many processors use 32 bit quantities as their basic integer size

Texture maps often store alpha values per texel so that the opacity can vary across the surface of a triangle

Alpha is generally specified per vertex as well, so that we can extend our Vertex class to store a color as a Vector4 (rgba) value and use glColor4f() to pass it to GL

If alpha is not stored or specified, it is assumed to be 1.0 (fully opaque)
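As a small illustration (the byte order chosen here is arbitrary, not something the lecture specifies), packing an rgba color into a 32 bit value might look like:

```cpp
#include <algorithm>
#include <cstdint>

// Sketch of packing an rgba color into a single 32-bit value, 8 bits per
// channel, with alpha defaulting to 1.0 (fully opaque) when unspecified.
uint32_t packRGBA(float r, float g, float b, float a = 1.0f) {
    auto to8 = [](float v) -> uint32_t {
        v = std::min(std::max(v, 0.0f), 1.0f);            // clamp to [0, 1]
        return static_cast<uint32_t>(v * 255.0f + 0.5f);  // round to 8 bits
    };
    // Byte order (r in the high byte) is an arbitrary choice for this sketch.
    return (to8(r) << 24) | (to8(g) << 16) | (to8(b) << 8) | to8(a);
}
```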

Page 29

Transparency Draw Order

Transparency requires a little special handling to fit it into the traditional graphics pipeline

Because of the zbuffer, we can generally render triangles in any order without affecting the final image

With transparency, however, we actually need to render transparent surfaces in a back to front (far to near) order

This is required because the transparent surface will modify the color already stored at the pixel

If we have a blue tinted glass in front of a brick wall, we render the brick wall first, then when we render the blue glass, we need to apply a blue tinting to the brick pixels already in the framebuffer

At a minimum, we should render all opaque surfaces in a scene before rendering the transparent surfaces. For best quality, the transparent triangles should be sorted back to front, which can be an expensive operation
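A minimal sketch of this draw-order handling, with illustrative names; the actual draw call is passed in as a callback:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Sketch: transparent triangles are sorted by camera-space depth and drawn
// far-to-near, after all opaque geometry. Names are illustrative only.
struct TransparentTri {
    float depth;   // e.g., camera-space z of the triangle's centroid
    int   id;      // handle used by the actual draw call
};

void drawTransparentBackToFront(std::vector<TransparentTri>& tris,
                                const std::function<void(int)>& drawTriangle) {
    std::sort(tris.begin(), tris.end(),
              [](const TransparentTri& a, const TransparentTri& b) {
                  return a.depth > b.depth;    // farthest first
              });
    for (const TransparentTri& t : tris)
        drawTriangle(t.id);                    // blending happens per pixel
}
```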

Page 30

Computing Transparency

When we are scan converting a transparent triangle, we end up with an alpha quantity per pixel

This might come from the interpolated alpha per vertex, or it might be specified as the alpha component of a texture map, or a combination of both. We refer to this as the source alpha

We also end up with a color value per pixel which comes from the Gouraud interpolated color per vertex, or the texture map, or both. We refer to this as the source color

We can also read the color value already stored in the pixel. We call this the destination color

The final color we render into the pixel will be a linear blend of the source color and destination color. The alpha quantity controls how much we blend each one

cfinal = α * csrc + (1 − α) * cdest

Page 31

Alpha Blending

This alpha blending equation is very simple and fast and is often implemented with 8 bit fixed point operations in hardware

Many other alpha blending operations exist to achieve a wide variety of visual effects

cfinal = α * csrc + (1 − α) * cdest

Page 32

Alpha Effects

Alpha blending can be useful for a variety of effects, and rendering systems often support several alpha blending options

Some of the most useful alpha blending modes include:

No blending: cfinal = csrc

Transparency: cfinal = α * csrc + (1 − α) * cdest

Modulate: cfinal = csrc * cdest

Add: cfinal = csrc + cdest

Subtract: cfinal = cdest − csrc
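A sketch of these modes applied to one color channel (the sign convention for Subtract is an assumption):

```cpp
// Sketch of the blend modes listed above, applied per channel; csrc/cdest/alpha
// are the source color, the color already in the framebuffer, and source alpha.
enum class BlendMode { None, Transparency, Modulate, Add, Subtract };

float blendChannel(BlendMode mode, float csrc, float cdest, float alpha) {
    switch (mode) {
        case BlendMode::None:         return csrc;
        case BlendMode::Transparency: return alpha * csrc + (1.0f - alpha) * cdest;
        case BlendMode::Modulate:     return csrc * cdest;
        case BlendMode::Add:          return csrc + cdest;   // may need clamping
        case BlendMode::Subtract:     return cdest - csrc;   // sign convention assumed
    }
    return csrc;
}
```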

Page 33

Fog & Depth Cueing

Another feature built into many classic rendering pipelines is per-pixel fog or depth cueing

Depth cueing refers to the visual cues from which one can determine an object’s distance (depth)

For example, as an object recedes in the fog, its color gradually approaches the overall fog color until it disappears entirely beyond some distance

Fog is implemented in classic renderers at the pixel stage, as one has access to the depth value already being interpolated for zbuffering

Fogging is usually applied after the texture color has been looked up and combined with the Gouraud interpolated vertex color

The fogged color is a blend between the texture/Gouraud color and the fog color. The blend factor is determined by the z distance value

After we have our fogged color, we can apply transparency if desired
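A small sketch of one common choice, linear fog between assumed start and end distances (exponential falloffs are also widely used):

```cpp
#include <algorithm>

// Sketch of simple linear fog: the blend factor comes from the pixel's z
// distance, and the lit/textured color is blended toward the fog color.
float linearFogFactor(float z, float fogStart, float fogEnd) {
    float f = (z - fogStart) / (fogEnd - fogStart);
    return std::min(std::max(f, 0.0f), 1.0f);   // 0 = no fog, 1 = full fog
}

// Applied per channel after texturing, before the transparency blend.
float fogChannel(float litColor, float fogColor, float fogFactor) {
    return (1.0f - fogFactor) * litColor + fogFactor * fogColor;
}
```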

Page 34

Pixel Rendering

If we are lighting at the vertices, we still have several final computations to perform per pixel to end up with our final colors

The input to our pixel rendering phase is the output of the scan conversion phase, namely a set of interpolated properties (color, depth, texture coords…) for each pixel in the triangle

The classic operations that take place per pixel include: test the zbuffer to see if the pixel should be rendered at all; perform texture lookups (this may involve mipmapping or bilinear sampling, for example); multiply the texture color with the Gouraud interpolated color (including the alpha component); blend this result with the fog color, based on the pixel z depth; blend this result with the existing color in the framebuffer, based on the alpha value; write this final color, as well as the interpolated z depth, into the framebuffer/zbuffer

This represents a reasonable baseline for pixel rendering. One could certainly do more complex things, such as re-evaluate the entire lighting model per pixel, or more

Older graphics hardware was hardwired to perform these baseline tasks. Modern hardware allows programmable pixel shaders

Page 35

Texture Combining

There are many cases where one would want to apply more than one texture map to a triangle and combine them in some way

For example, if a character walks in front of a slide projector, we might want to combine the texture map of his shirt with the texture map of the projected slide image

Both software and hardware rendering systems typically allow support for several textures per triangle

One can apply several different texture maps, each with its own set of texture coordinates, and then define various rules on how they are combined to get the final color

The texture combining operations tend to be similar to the alpha blending operations

Texture combining enables a wide range of possibilities for tricks and special effects…
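A sketch of one such combining rule; sampleTexture is only a placeholder standing in for a real (mipmapped, bilinear) texture lookup:

```cpp
// Sketch: two textures with independent texture coordinates are sampled and
// combined. 'sampleTexture' is a placeholder; here it just returns white so
// the sketch is self-contained.
struct Color4 { float r, g, b, a; };

static Color4 sampleTexture(int /*textureId*/, float /*u*/, float /*v*/) {
    return {1.0f, 1.0f, 1.0f, 1.0f};   // placeholder lookup
}

// Example combining rule: modulate (multiply) a base texture by a second,
// projected texture, as in the slide-projector example above.
Color4 combineTextures(int baseTex, float u0, float v0,
                       int projTex, float u1, float v1) {
    Color4 base = sampleTexture(baseTex, u0, v0);
    Color4 proj = sampleTexture(projTex, u1, v1);
    return { base.r * proj.r, base.g * proj.g, base.b * proj.b, base.a };
}
```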

Page 36

Pixel Shaders

As people learned about the things that could be done by combining textures, there was a trend towards allowing more and more generality at the pixel stage of the pipeline

The concept of texture maps can be generalized to the mapping of any number of arbitrary values across the surface of triangles that can then be combined in any arbitrary way to come up with the final pixel color

The term shader has been used to refer to a user programmable coloring function that can be assigned to a surface

Many software & hardware renderers allow some sort of programmable shaders, but different systems allow different levels of flexibility

Page 37

Procedural Textures

Let’s say we want a texture map of some bricks on the side of a building

We could go photograph a brick wall and use it as a texture map

Alternatively, we could use a procedural texture to generate it on the fly

A 2D procedural texture takes a 2D texture coordinate and returns a color

In this sense, it is a lot like a texture map, except that instead of simply storing an image, a procedural texture can be programmed to implement any function desired

Procedural textures offer the potential advantage of effectively infinite resolution and freedom from tiling artifacts

They also allow for broader user flexibility and control. Plus, they can be animated…
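As an illustration only (the pattern and constants below are made up, not from the lecture), a tiny 2D procedural brick texture might look like:

```cpp
#include <cmath>

// Sketch of a 2D procedural texture: a function from (u, v) to a color.
struct Color { float r, g, b; };

Color proceduralBrick(float u, float v) {
    const float brickW = 0.25f, brickH = 0.125f, mortar = 0.02f;
    float row = std::floor(v / brickH);
    // Offset every other row of bricks by half a brick width.
    float uu = u + (static_cast<int>(row) % 2 ? brickW * 0.5f : 0.0f);
    float fu = uu - brickW * std::floor(uu / brickW);   // position within a brick
    float fv = v  - brickH * std::floor(v  / brickH);
    bool inMortar = (fu < mortar) || (fv < mortar);
    return inMortar ? Color{0.8f, 0.8f, 0.8f}           // mortar
                    : Color{0.6f, 0.25f, 0.2f};         // brick
}
```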

Page 38

Noise Functions