
Computer Graphics 1, Spring 2004 (it.uu.se)
Lecture 17, May 28, 2004, Patrick Karlsson

Course summary

Images in a computer...

 94 100 104 119 125 136 143 153 157 158
103 104 106  98 103 119 141 155 159 160
109 136 136 123  95  78 117 149 155 160
110 130 144 149 129  78  97 151 161 158
109 137 178 167 119  78 101 185 188 161
100 143 167 134  87  85 134 216 209 172
104 123 166 161 155 160 205 229 218 181
125 131 172 179 180 208 238 237 228 200
131 148 172 175 188 228 239 238 228 206
161 169 162 163 193 228 230 237 220 199

… just a large array of numbers

True-color frame buffer

Store R,G,B values directly in the frame-buffer.

Each pixel requires at least 3 bytes => 2^24 colors.

Drawing lines

Line algorithm 5, Bresenham (brushing things up a little)

Xdifference = Xend - Xstart
Ydifference = Yend - Ystart
Yerror = 2*Ydifference - Xdifference
StepEast = 2*Ydifference
StepNorthEast = 2*Ydifference - 2*Xdifference

put_pixel(x, y)
loop x Xstart+1 to Xend
    if Yerror <= 0 then
        Yerror = Yerror + StepEast
    else
        y = y + 1
        Yerror = Yerror + StepNorthEast
    end_if
    put_pixel(x, y)
end_of_loop x

Uses only integer maths => Fast!
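A minimal, runnable C sketch of the same algorithm, assuming 0 <= slope <= 1 and Xstart <= Xend; put_pixel() here just prints coordinates and stands in for a real frame-buffer write.

```c
#include <stdio.h>

/* Stand-in for an actual frame-buffer write. */
static void put_pixel(int x, int y)
{
    printf("(%d, %d)\n", x, y);
}

void bresenham_line(int xstart, int ystart, int xend, int yend)
{
    int xdiff = xend - xstart;
    int ydiff = yend - ystart;
    int yerror = 2 * ydiff - xdiff;           /* decision variable     */
    int step_east = 2 * ydiff;                /* step east only        */
    int step_northeast = 2 * (ydiff - xdiff); /* step east and up      */
    int x = xstart, y = ystart;

    put_pixel(x, y);
    for (x = xstart + 1; x <= xend; x++) {
        if (yerror <= 0) {
            yerror += step_east;
        } else {
            y += 1;
            yerror += step_northeast;
        }
        put_pixel(x, y);
    }
}

int main(void)
{
    bresenham_line(0, 0, 9, 4);   /* an approximate line of slope 0.44 */
    return 0;
}
```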


Drawing polygons

We draw our polygons scan line by scan line. This is called scan conversion.

The process is done by following the edges of the polygon, and then filling between the edges.

Bi-linear interpolation

Linear interpolation in two directions: first we interpolate in the y-direction, then we interpolate in the x-direction. Unique solution for flat objects (triangles are always flat).
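A minimal C sketch of this two-direction interpolation, assuming a value known at the four corners of a unit cell; the corner names c00, c10, c01, c11 and the fractional offsets tx, ty are illustrative, not from the course code.

```c
/* Interpolate along the two vertical edges first (y-direction),
 * then across between them (x-direction). */
double bilerp(double c00, double c10, double c01, double c11,
              double tx, double ty)
{
    double left  = c00 + ty * (c01 - c00);   /* interpolate in y, left edge  */
    double right = c10 + ty * (c11 - c10);   /* interpolate in y, right edge */
    return left + tx * (right - left);       /* interpolate in x             */
}
```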

Anti-aliasing

Due to the discrete nature of our computers, aliasing effects appear in many situations, when drawing edges, shaded surfaces, textures etc. To reduce the effect of aliasing various anti-aliasing techniques can be applied. Some examples are:


• Low-pass filtering (poor man's anti-aliasing)
• Area-sampling (colour proportional to area)
• Super-sampling (more samples than pixels)
• Dithering (anti-aliasing in the colour domain)
• MIP mapping (prefiltering for textures)

Color = the eye's and the brain's impression of electromagnetic radiation in the visual spectrum. How is color perceived?

(Figure: a light source with spectrum s(λ) illuminates a reflecting object with reflectance r(λ); a detector senses the result as r(λ), g(λ), b(λ). The detectors of the eye are rods and cones; the cones are red-, green- and blue-sensitive.)

The Fovea

There are three types of cones, S, M and L

Projection of the CIE XYZ-space


Mixing light and mixing pigment

(Figure: the RGB (additive) and CMY (subtractive) color circles; red, green and blue mix to yellow, cyan and magenta, and vice versa.)

CMY = 1 - RGB, i.e. (C, M, Y) = (1, 1, 1) - (R, G, B)

R + G + B = white (additive), R + G = Y
C + M + Y = black (subtractive), C + M = B, etc.

(CMYK common in printing, where K is black pigment)
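A minimal C sketch of the CMY = 1 - RGB relation, extended with a K (black) component the way printing pipelines commonly do; the undercolor-removal step and the function name are assumptions, not from the slides. Values are taken to be normalized to [0, 1].

```c
#include <stdio.h>

void rgb_to_cmyk(double r, double g, double b,
                 double *c, double *m, double *y, double *k)
{
    double cc = 1.0 - r;            /* cyan    = 1 - red   */
    double mm = 1.0 - g;            /* magenta = 1 - green */
    double yy = 1.0 - b;            /* yellow  = 1 - blue  */

    /* Assumed undercolor removal: the common gray part becomes black pigment. */
    double kk = cc < mm ? (cc < yy ? cc : yy) : (mm < yy ? mm : yy);
    *k = kk;
    *c = cc - kk;
    *m = mm - kk;
    *y = yy - kk;
}

int main(void)
{
    double c, m, y, k;
    rgb_to_cmyk(1.0, 0.5, 0.0, &c, &m, &y, &k);   /* an orange tone */
    printf("C=%.2f M=%.2f Y=%.2f K=%.2f\n", c, m, y, k);
    return 0;
}
```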

Affine transformations

To build objects, as well as move them around, we need to be able to transform our triangles and objects in different ways.

There are many classes of transformations.

Rigid body transformations include translation and rotation only.

Add scaling and shearing to this, and we get the class of affine transformations.

• Translate (move around)
• Rotate
• Scale
• Shear (can be constructed from scaling and rotation)

Translation

P(x,y)

P’(x’,y’)

x' = x + dx
y' = y + dy

Simply add a translation vector

Rotation around the origin

1. P in polar coordinates:

x = r cos(ϕ), y = r sin(ϕ)

2. P’ in polar coordinates:

x’ = r cos(ϕ +θ) = r cos(ϕ) cos(θ) - r sin(ϕ) sin(θ)

y' = r sin(ϕ + θ) = r cos(ϕ) sin(θ) + r sin(ϕ) cos(θ)

3. Substitute x and y in (2):

x' = x cos(θ) - y sin(θ)
y' = x sin(θ) + y cos(θ)

Arbitrary rotation

1. Translate the rotation axis to the origin
2. Rotate
3. Translate back

Scaling around the origin

x' = sx · x
y' = sy · y


Multiply by a scale factor


Affine transformations are linear!

f(α a + β b) = α f(a) + β f(b)

This implies that we need only to transform the vertices of our objects, as the inner points are built up of linear combinations of the vertex points.

p(α) = (1− α) p0 + α p1 α = 0..1


Using homogeneous coordinates

Translation: p' = T(dx, dy) p

    | x' |   | 1  0  dx | | x |
    | y' | = | 0  1  dy | | y |
    | 1  |   | 0  0   1 | | 1 |

Rotation: p' = R(θ) p

    | x' |   | cos θ  -sin θ  0 | | x |
    | y' | = | sin θ   cos θ  0 | | y |
    | 1  |   |   0       0    1 | | 1 |

Scaling: p' = S(sx, sy) p

    | x' |   | sx  0  0 | | x |
    | y' | = |  0 sy  0 | | y |
    | 1  |   |  0  0  1 | | 1 |

Shear: p' = Hx(θ) p

    | x' |   | 1  cot θ  0 | | x |
    | y' | = | 0    1    0 | | y |
    | 1  |   | 0    0    1 | | 1 |

Concatenation of transformations

p' = T^(-1)(S(R(T(p)))) = (T^(-1) S R T) p
M = T^(-1) S R T
p' = M p

Observe: Order (right to left) is important!
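A minimal C sketch of concatenating 3x3 homogeneous transforms, e.g. the translate-rotate-translate-back sequence for rotation about an arbitrary point. The Mat3 type and helper names are illustrative, not from the course code; note that the product is applied right to left, as the slide points out.

```c
#include <math.h>

typedef struct { double m[3][3]; } Mat3;

/* Compose two transforms: applying mat3_mul(a, b) to a point applies b first. */
Mat3 mat3_mul(Mat3 a, Mat3 b)
{
    Mat3 r;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            r.m[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
        }
    return r;
}

Mat3 translate(double dx, double dy)
{
    Mat3 t = {{{1, 0, dx}, {0, 1, dy}, {0, 0, 1}}};
    return t;
}

Mat3 rotate(double theta)
{
    Mat3 r = {{{cos(theta), -sin(theta), 0},
               {sin(theta),  cos(theta), 0},
               {0,           0,          1}}};
    return r;
}

/* Rotation about an arbitrary point (px, py): translate to the origin,
 * rotate, translate back -- read the product from right to left. */
Mat3 rotate_about(double theta, double px, double py)
{
    return mat3_mul(translate(px, py),
                    mat3_mul(rotate(theta), translate(-px, -py)));
}
```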

Homogeneous coordinates

What about this extra coordinate W?

Any non-zero multiple represents the same point:

    | x |     | αx |     | x/W |
    | y |  =  | αy |  =  | y/W |
    | W |     | αW |     |  1  |

Dividing through by W is called: to homogenize.

Cartesian coordinates:

    xc = xh / W
    yc = yh / W

W = 0: points at infinity = vectors, (x, y, 0).

The View transformation

Put the observer in the origin.
Align the x and y axes with those of the screen.
Right-hand coordinate system => z-axis pointing backwards.

(Figure: the world coordinate system and the view coordinate system, each with x, y, z axes.)

Simplifies light, clip, HSR and perspective calculations.

Change of Frame

The View transformation is a change of frame. A change of frame is a change of coordinate system + a change of origin. We can split this operation into: a change of coordinate system + a translation. The change of coordinate system can be seen as a rotation, but is really more general.



Change of Coordinate system

We wish to express the same point, with two different sets of basis vectors.

p = x e1 + y e2 = x' f1 + y' f2


More change of Coordsys

Assume that we know the basis vectors of the new coordinate system F in terms of the old E.

f1 = a e1 + b e2

f2 = c e1 + d e2


Change of frame, final

    (x', y')^T = M^(-1) (P - P_o)

Iff we are dealing with ON-bases (M is a pure rotation matrix, i.e. M is orthonormal), then M^T = M^(-1) =>

    (x', y')^T = M^T (P - P_o)

where M holds the new basis vectors expressed in the old ones:

    | f1 |   | a  b | | e1 |       | e1 |
    |    | = |      | |    |  = M  |    |
    | f2 |   | c  d | | e2 |       | e2 |

LookAt(eye, at, up)

Build the view-coordinate basis (f1, f2, f3) from the world-coordinate basis (e1, e2, e3), the eye and at points, and the up vector:

    f3 = (eye - at) / |eye - at|
    f1 = (up × f3) / |up × f3|
    f2 = f3 × f1
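A minimal C sketch of building the view-coordinate basis from LookAt(eye, at, up) with exactly the three formulas above; the Vec3 type and helper functions are illustrative, not the course's code.

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }

static Vec3 cross(Vec3 a, Vec3 b)
{
    Vec3 r = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return r;
}

static Vec3 normalize(Vec3 a)
{
    double len = sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    Vec3 r = { a.x / len, a.y / len, a.z / len };
    return r;
}

void look_at_basis(Vec3 eye, Vec3 at, Vec3 up, Vec3 *f1, Vec3 *f2, Vec3 *f3)
{
    *f3 = normalize(sub(eye, at));   /* "backwards" axis: z points back      */
    *f1 = normalize(cross(up, *f3)); /* view x axis                          */
    *f2 = cross(*f3, *f1);           /* view y axis, already unit length     */
}
```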

Perspective projection

The center of projection (COP) is at the origin and the view plane at z = d. Similar triangles give y'/d = y/z, so

    x' = x · d / z
    y' = y · d / z
    z' = z · d / z = d

Homogeneous coordinates

    x' = x · d / z
    y' = y · d / z
    z' = z · d / z = d

    | 1  0   0   0 | | x |   |  x  |
    | 0  1   0   0 | | y | = |  y  |
    | 0  0   1   0 | | z |   |  z  |
    | 0  0  1/d  0 | | 1 |   | z/d |

    x' = x / W = x / (z/d) = x · d / z,   and similarly for y' and z'

Use W to fit the perspective transformation into a matrix multiplication.
Note: it is not a linear transformation!
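A minimal C sketch of the perspective matrix above: put 1/d in the last row, multiply, then homogenize by dividing by W = z/d. The Vec4 type and names are illustrative; with d = 5 and z = 10 the example point ends up on the view plane z' = d.

```c
#include <stdio.h>

typedef struct { double x, y, z, w; } Vec4;

/* Rows (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,1/d,0) applied to (x, y, z, 1). */
Vec4 perspective_transform(Vec4 p, double d)
{
    Vec4 r = { p.x, p.y, p.z, p.z / d };
    return r;
}

Vec4 homogenize(Vec4 p)
{
    Vec4 r = { p.x / p.w, p.y / p.w, p.z / p.w, 1.0 };
    return r;
}

int main(void)
{
    Vec4 p = { 2.0, 4.0, 10.0, 1.0 };
    Vec4 q = homogenize(perspective_transform(p, 5.0));
    printf("projected: (%.2f, %.2f, %.2f)\n", q.x, q.y, q.z);  /* (1.00, 2.00, 5.00) */
    return 0;
}
```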


Some coordinate systems on the way

p’ = Vp * P * C * Vt * M * p

Object coordsys.
  model/object transform (M)
World coordsys.
  view transform (Vt)
View coordsys.
  optional clip transform (C), projection (P)
Window coordsys.
  viewport transform (Vp)
Device/screen coordsys.

Clipping

Remove things which are outside the View volume

(Figure: the view volume, bounded by the front and back clipping planes.)

Distortion of the view volume

Instead of writing a general clipper that works on tilted, frustum-shaped view volumes, we distort the view volume into the shape of a cube, usually centered around the origin, aligned with our axes, and of size 2x2x2.

Scissoring = Clipping in screen space

When we fill our polygons, we do not want to draw pixels which are outside the screen.

It is quite easy to insert scissoring into our polygon filling routine.

Hidden surface removal /Visible surface determination

We do not wish to see things which are hidden behind other objects.

Two main types of HSR

Object space approach: works on object level, comparing objects with each other.
    Painter's algorithm, depth sort

Image/pixel space approach: works on pixel level, comparing pixel values.
    z-Buffer algorithm, ray-casting / ray-tracing


Painter's algorithm: works the way a painter does.

Sort all objects according to their z position (VC). Draw the farthest object first and the closest last (possibly on top of others).

Object based, compares objects with each other. Hard to implement in a pipeline fashion. Makes quite a few errors.

We draw unnecessary polygons. Sorting of an almost sorted list is fast (e.g. bubble sort)!

z-Buffer algorithm

fill z-buffer with infinite distance
for all polygons
    for each pixel
        calculate z-value (linear interpolation of q = 1/z)
        if z(x,y) is closer than z-buffer(x,y)
            draw pixel
            z-buffer(x,y) = z(x,y)
        end
    end
end

• Must interpolate q = 1/z to get a perspective-correct result.
• Image/pixel space.
• Easy to implement in a pipeline structure (hardware).
• Always correct result!
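A minimal C sketch of the z-buffer test for a single pixel; the global frame buffer, z-buffer, resolution and function names are illustrative assumptions. In a real rasterizer the depth value would come from interpolating q = 1/z across the polygon, as noted above.

```c
#include <float.h>
#include <stdint.h>

#define WIDTH  640
#define HEIGHT 480

static double   zbuffer[HEIGHT][WIDTH];
static uint32_t framebuffer[HEIGHT][WIDTH];

void clear_zbuffer(void)
{
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            zbuffer[y][x] = DBL_MAX;      /* "infinite" distance */
}

/* Draw the pixel only if it is closer than what is already stored there. */
void zbuffer_put_pixel(int x, int y, double z, uint32_t color)
{
    if (z < zbuffer[y][x]) {
        framebuffer[y][x] = color;
        zbuffer[y][x] = z;
    }
}
```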

Back-face culling

We will never see the ”inside” of solid objects. Therefore, there is no reason to draw the backsides of the face polygons.We can check if we see the front side of a polygon by checking if the angle between the normal and the vector pointing towards the observer is smaller than 90 degrees.After View transformation and perspective distortion, this simply becomes a check of the z-coordinate of the normal vector.

(Figure: the COP, a polygon with normal n, and the view vector v pointing towards the observer.)

v • n > 0  =>  the polygon faces the viewer
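A minimal C sketch of the back-face test: a polygon is kept only when the dot product between its normal and the vector towards the observer is positive, i.e. the angle between them is below 90 degrees. The Vec3 type and names are illustrative.

```c
typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* n: polygon normal, v: vector from the polygon towards the observer.
 * After the view transform and perspective distortion this reduces to
 * checking the sign of the normal's z component. */
int is_front_facing(Vec3 n, Vec3 v)
{
    return dot(v, n) > 0.0;
}
```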

Adding some light

(Figure: a light source illuminating a scene.)

Ray-tracing = follow the photons! Global models are slow!

Local reflection model

(Figures: a shiny (specular) surface, a frosted (diffuse) surface, and a transparent surface, each with surface normal n.)

Ambient:   Ia = ka La

Diffuse (Lambert's Law):   Id = kd Ld cos(θ) = kd Ld (n • l)

Specular:   Is = ks Ls cos^α(ϕ) = ks Ls (v • r)^α

(Figure: surface normal n, light direction l, reflection direction r, view direction v; θ is the angle between n and l, ϕ the angle between r and v.)

The Phong reflection model: Ambient + Diffuse + Specular


Phong Reflection model

For each color (r, g, b), calculate the reflected intensity, summing over all light sources:

    I = Σ ( ka La + 1/(a + b·d + c·d^2) · ( kd Ld (n • l) + ks Ls (v • r)^α ) )

where 1/(a + b·d + c·d^2) is the distance term (d is the distance to the light source) and α is the shininess.

(Images: ambient only, diffuse only, specular with shininess = 20, and ambient + diffuse + specular combined.)
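A minimal C sketch of evaluating the Phong model above for one light source and one color channel. All vectors are assumed normalized, the negative dot products are clamped to zero (a common practical detail, not spelled out on the slide), and the names are illustrative.

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* n: normal, l: light direction, r: reflection direction, v: view direction. */
double phong_intensity(Vec3 n, Vec3 l, Vec3 r, Vec3 v,
                       double ka, double kd, double ks, double alpha,
                       double La, double Ld, double Ls,
                       double a, double b, double c, double dist)
{
    double atten = 1.0 / (a + b * dist + c * dist * dist); /* distance term      */
    double diff  = fmax(dot(n, l), 0.0);                   /* Lambert's law      */
    double spec  = pow(fmax(dot(v, r), 0.0), alpha);       /* shininess exponent */
    return ka * La + atten * (kd * Ld * diff + ks * Ls * spec);
}
```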

Polygonal Shading

Flat Gouraud Phong

Calculating the color at each pixel is computationally heavy, so we use interpolation to get color values over the polygon surface.

Flat shading
One color for the whole polygon. The normal vector is easily calculated from a cross product. Fast, but with a rather poor result. Mach bands are very prominent.


n = (b-a) × (c-a) / |(b-a) × (c-a)|


Interpolated/Gouraud shading

Calculate one color at each vertex point. Linearly interpolate the color over the surface. Normal vector needed at each vertex point. Medium speed, medium result. Commonly used.


Phong shading

Interpolate the normal vector over the surface. Calculate the color at each pixel. Normal vector needed at each vertex point. Best result. Slow in the original form.



Coordinate systems again

Starting object
Object coordsys.              -- model transf. -->
World coordsys.               -- view transf. -->
View coordsys.                -- clip distortion -->
Clip coordsys.                -- persp. distortion -->
Normalized Device coordsys. (NDC, the 2x2x2 cube)   -- orthographic proj. -->
Window coordsys.              -- viewport transf. -->
Screen/device coordsys.       Rasterization!

The Display Pipeline (the order may vary somewhat)

• Modelling, create objects (this is not really a part of the pipeline)
• Transformations, move, scale, rotate…
• View transformation, put yourself in the origin of the world
• Projection normalization, reshape the view volume
• Clipping, cut away things outside the view volume
• Object based hidden surface removal, we don't see through things
• Light and illumination, shadows perhaps
• Orthographic projection, 3D => 2D
• Rasterisation, put things onto our digital screen

(Pipeline figure: polygon in → transform → clipping → HSR → light calculations → … → texture, shading, image-based HSR ...)

Texture mapping

Instead of using a huge number of small polygons to describe the structure and texture of our surfaces, we can map an image onto the polygon. This is called texture mapping and is frequently used to enhance the realism in images without any significant loss of performance.

Texture map Our polygon

Binding the texture map

For each vertex on our polygon, we assign a texel position (u, v) in a texture map (this is really a part of the modelling).

Texture mapping

When we take a step rasterizing the polygon, we also take a step in the texture map. The code is the same linear interpolation we have already done for color and z-coordinate interpolation; there are just two new coordinates, u and v, with new slopes for them.

For each pixel to draw, we check in the texture map what the color of that pixel is.
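A minimal C sketch of that per-pixel lookup during scan conversion: u and v are stepped linearly along the span, just like color and z. The texture array, its size, the nearest-texel sampling and put_pixel are illustrative assumptions declared extern here.

```c
#include <stdint.h>

#define TEX_W 256
#define TEX_H 256

extern uint32_t texture[TEX_H][TEX_W];                     /* the bound texture map */
extern void put_pixel(int x, int y, uint32_t color);       /* frame-buffer write    */

/* Fill one scan line, stepping (u, v) along with x. */
void textured_span(int y, int xstart, int xend,
                   double u, double v, double du, double dv)
{
    for (int x = xstart; x <= xend; x++) {
        int tu = (int)(u * (TEX_W - 1));   /* nearest texel */
        int tv = (int)(v * (TEX_H - 1));
        put_pixel(x, y, texture[tv][tu]);
        u += du;                           /* step in the texture map too */
        v += dv;
    }
}
```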

Everything maps!

The use of mapping has become a commonly used way to store precalculated features to speed up rendering. Some examples are:
• Texture maps (i.e. color maps)
• Bump maps (i.e. normal displacement maps)
• Displacement maps
• Reflection maps (environment maps)
• Light maps (luminosity maps)
• Mapping of other surface properties, e.g. shininess

You name it!


Bump maps
Displace the normal vector to fool the eye into thinking it sees 3D structure. Note that the physical geometry of the object does not change (no change in outline or shadow).

Displacement maps
Real displacement of the pixels along the normal vector. Changes the physical geometry; hard to render in real time, but works nicely in ray-traced images.

Reflection maps / environment maps

Light maps

Quake uses light maps in addition to texture maps. Texture maps are used to add detail to surfaces, and light maps are used to store pre-computed illumination.

(Images: textures only; texture and light map; the light map alone.)

Perspective problems

All our mappings, as well as our Gouraud shading, do interpolation in screen space (2D). This does not work perfectly together with perspective, which is a non-linear transformation.

The solution to the problem is once again our homogeneous coordinates, and to do the texture interpolation in (u’, v’, q) instead of (u,v).

Parametric cubic curves

In order to assure C2 continuity, our functions must be of at least degree 3.

Here's what a parametric cubic spline function looks like:

x(t) = a_x t^3 + b_x t^2 + c_x t + d_x
y(t) = a_y t^3 + b_y t^2 + c_y t + d_y

Alternatively, it can be written in matrix form:


Blending functions

The contribution of each geometric factor can be considered separately; this approach gives a so-called blending function associated with each factor.

Beginning with our matrix formulation:

p(t) = t M G

By reordering our multiplications we get: p(t) = b(t) G

or simply

p(t) = Σ bi(t) pi

Bezier curves

Instead of giving gradients as numbers, we can get them by using two more control points.

p(0) = p1
p(1) = p4
p'(0) = (p2 - p1) / (1/3)
p'(1) = (p4 - p3) / (1/3)

Recursive subdivision

When rendering our Bezier curves, one can of course take tiny steps in the parameter, and plot the corresponding points.

p(t) = Σ bi(t) pi

There exists a faster way though, called recursive subdivision, where we cut the curve into smaller and smaller pieces, which we finally approximate with straight lines, as in the sketch below.
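A minimal C sketch of recursive subdivision using de Casteljau's midpoint construction to split a cubic Bezier curve in half; for simplicity it recurses to a fixed depth instead of testing flatness. Point2, draw_line and the depth parameter are illustrative assumptions, not the course's code.

```c
typedef struct { double x, y; } Point2;

static Point2 midpoint(Point2 a, Point2 b)
{
    Point2 m = { (a.x + b.x) / 2.0, (a.y + b.y) / 2.0 };
    return m;
}

extern void draw_line(Point2 a, Point2 b);   /* e.g. the Bresenham routine */

/* Split the cubic Bezier (p1..p4) at t = 1/2 and recurse until the pieces
 * are approximated by straight lines between their end points. */
void bezier_subdivide(Point2 p1, Point2 p2, Point2 p3, Point2 p4, int depth)
{
    if (depth == 0) {
        draw_line(p1, p4);
        return;
    }
    Point2 l1 = p1;
    Point2 l2 = midpoint(p1, p2);
    Point2 h  = midpoint(p2, p3);
    Point2 r3 = midpoint(p3, p4);
    Point2 r4 = p4;
    Point2 l3 = midpoint(l2, h);
    Point2 r2 = midpoint(h, r3);
    Point2 l4 = midpoint(l3, r2);            /* point on the curve at t = 1/2 */
    Point2 r1 = l4;

    bezier_subdivide(l1, l2, l3, l4, depth - 1);   /* left half  */
    bezier_subdivide(r1, r2, r3, r4, depth - 1);   /* right half */
}
```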

Cubic B-Splines

Passing through the same mathematical machinery as before, we end up with:

    | ax  ay |         | -1  3 -3  1 | | x_{i-2}  y_{i-2} |
    | bx  by |  =  1/6 |  3 -6  3  0 | | x_{i-1}  y_{i-1} |
    | cx  cy |         | -3  0  3  0 | | x_i      y_i     |
    | dx  dy |         |  1  4  1  0 | | x_{i+1}  y_{i+1} |

Or using blending polynomials:

    p(t) = 1/6 [ (1 - t)^3 p_{i-2} + (3t^3 - 6t^2 + 4) p_{i-1} + (-3t^3 + 3t^2 + 3t + 1) p_i + t^3 p_{i+1} ]

All control points have the same meaning. We pick four control points at a time with a sliding window.

C2 continuous!
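A minimal C sketch of evaluating one uniform cubic B-spline segment with the blending polynomials above, for t in [0, 1]; the Point2 type and parameter names are illustrative, not from the course code.

```c
typedef struct { double x, y; } Point2;

/* pm2, pm1, p0, p1 correspond to p_{i-2}, p_{i-1}, p_i, p_{i+1}. */
Point2 bspline_segment(Point2 pm2, Point2 pm1, Point2 p0, Point2 p1, double t)
{
    double t2 = t * t, t3 = t2 * t;
    double b0 = (1.0 - t) * (1.0 - t) * (1.0 - t) / 6.0;       /* p_{i-2} */
    double b1 = (3.0 * t3 - 6.0 * t2 + 4.0) / 6.0;             /* p_{i-1} */
    double b2 = (-3.0 * t3 + 3.0 * t2 + 3.0 * t + 1.0) / 6.0;  /* p_i     */
    double b3 = t3 / 6.0;                                      /* p_{i+1} */

    Point2 r = { b0 * pm2.x + b1 * pm1.x + b2 * p0.x + b3 * p1.x,
                 b0 * pm2.y + b1 * pm1.y + b2 * p0.y + b3 * p1.y };
    return r;
}
```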

Rendering B-splines by change of basis

As there exists a nice algorithm for rendering Bezier curves, B-splines are usually rendered by first transforming them into Bezier curves.

p(t) = t M_S G_S = t M_B G_B  =>  G_B = M_B^(-1) M_S G_S

Surfaces

Surfaces are really just the same thing as curves; we just add another dimension.

Instead of having one parameter t, we use two, s and t. Instead of having curve segments, we have surface patches.

p(s,t) = Σ Σ bi(s) bj(t) pij

The equations are separable both in (x,y,z) space and in the parameter space (s,t).


Fractals and dimension

The fractal dimension of simple self-similar curves can, by definition, be calculated as:

D = ln (n) / ln (1/s)

where n is the number of subparts and s is the scale factor in each iteration. For example, the Koch curve replaces each segment by n = 4 subparts scaled by s = 1/3, giving D = ln(4) / ln(3) ≈ 1.26.

Loosely speaking, the fractal dimension of a curve is the rate at which the length of the curve increases as the measurement scale is reduced.

Ferns and L-systems

Lindenmayer Systems. Example:

Axiom: X
Rewriting rules:
    F --> FF
    X --> F-[[X]+X]+F[+FX]-X
Turning angle: ø = 22.5

Character   Meaning
F           Move forward by line length, drawing a line
+           Turn left by turning angle
-           Turn right by turning angle
[           Push current drawing state onto stack
]           Pop current drawing state from the stack
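A minimal C sketch of the string-rewriting part of this L-system: one pass applies F --> FF and X --> F-[[X]+X]+F[+FX]-X and copies every other character unchanged. The fixed buffer size, the number of generations and the function names are illustrative assumptions; the turtle interpretation of the result is not shown.

```c
#include <stdio.h>
#include <string.h>

#define MAX_LEN 65536

/* Apply the rewriting rules once, reading from `in` and writing to `out`. */
static void rewrite(const char *in, char *out)
{
    out[0] = '\0';
    for (const char *c = in; *c; c++) {
        if (*c == 'F')
            strcat(out, "FF");
        else if (*c == 'X')
            strcat(out, "F-[[X]+X]+F[+FX]-X");
        else
            strncat(out, c, 1);   /* +, -, [ and ] are copied as-is */
    }
}

int main(void)
{
    static char a[MAX_LEN] = "X";   /* the axiom */
    static char b[MAX_LEN];

    for (int i = 0; i < 3; i++) {   /* three generations of the fern rules */
        rewrite(a, b);
        strcpy(a, b);
    }
    printf("%s\n", a);
    return 0;
}
```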

Fractal landscapes

If we are to create a mountain, for example, we start with a flat surface, find its midpoint and displace it by a random distance in the vertical direction, creating new surfaces. This procedure is repeated a finite number of times using a zero-mean Gaussian random number generator.

The fractal dimension is controlled by the variance of the random number generator.

If the variance is proportional to l^(2(2-D)), where l is the length of the line segment to be divided, the fractal dimension of the curve will be D.

Multidimensional images

A multidimensional image can be considered as a function f(x, y, z, t, b), where
z: third spatial direction
t: time sequences
b: different spectral bands

These can be combined in different ways, for example, f(x,y,z,t) is a 4D image representing a time sequence of 3D volume images

Surface versus Volume Rendering

Surface rendering
+ Standard computer graphics software can be used.
+ Supported by dedicated hardware - fast.
+ Normally a high data reduction can be achieved.
- (There is no intensity information.)
- Cutting through the data is meaningless.
- Changes in surface definition criteria mean recalculation of the data.

Volume rendering
+ Arbitrary cuts can be made, allowing the user to see inside the data.
+ Allows for display of different aspects such as maximum intensity projections, semi-transparent surfaces etc.
+ Rendering parameters can be changed interactively.
- Slow!

Marching Cubes: algorithm summary

1. Create a cube.
2. Classify each vertex.
3. Build an index.
4. Get edge list.
5. Interpolate triangle vertices.
6. Calculate and interpolate normals.



The OpenGL API is the most widely adopted 3D graphics API in the industry.

THE END

(Overview figure: light source, camera or observer, view volume, view plane; transformation, clipping, hidden surface removal, shadow, projection 3D => 2D, rasterization.)

Required reading

Literature:

Angel 3rd ed., Ch 1, 2.5, 4.1-4.9, 5.1-5.6, 5.8-5.9, 6, 7.5-7.9, 8, 9.1-9.8, 10.1-10.4, 10.6-10.10, 11.6-11.7, 12

Angel 2nd ed., Ch 1, 2.4, 4.1-4.9, 5.1-5.5, 5.7-5.8, 6, 7, 8.1-8.8, 9.1-9.4, 9.7, 10.1-10.4, 10.6-10.10, 11.6-11.7, 12

All previous hand-outs and extra material

No OpenGL knowledge will be tested!

None of the links in the literature list that start with “WWW” are required reading!

Assignments

Only assignments that are correct (graded G) before the June 7 deadline will give bonus points on the exam.

If you, for example, have been awarded "R 1.5", you will ONLY get 1.5 bonus points IF you correct your revisions BEFORE the June 7 deadline (in other words, your "R" has changed to a "G").

A strong deadline is at 23:59 on Monday June 7, 2004 (2004-06-07). Assignments turned in after this deadline WILL NOT be graded until August 31, 2004 (see below).

An even stronger deadline is at 23:59 on Tuesday August 31, 2004 (2004-08-31); if you HAVE NOT completed the assignments by then, you are referred to the NEXT time the course is given (fall 2004).

Exam

08:30-13:30 Thursday June 3 2004

Room B154 at Ekonomikum

Questions?