Engg 407 Notes
YANI'S ENGG 407 NUMERICAL METHODS NOTES (WINTER 2010)
DANIEL CHIN, UNIVERSITY OF CALGARY, MECH ENGG ([email protected])
Lecture notes from Yani's (Pouyan Jazayeri) lectures, winter 2010, at the University of Calgary.
Date: October 4, 2010.
For future emails (after I graduate): [email protected]
Part 1. Introduction
This is the Engg 407 Numerical Methods notes package that I am compiling from my handwritten notes from class. This is my first attempt at using LaTeX to create a document. In the future perhaps I will be able to fully use LaTeX in all of my mathematical courses to type notes. Unfortunately I cannot easily include figures from classes dealing with physical mechanical systems etc. I hope that during this learning process with LaTeX, I will be able to more efficiently type my notes, especially mathematical formulas.
Last updated: October 4, 2010: Finished! As of yet there are no cheap plastic Yani notes; crappy scanned versions are included.
1. Notes
Curriculum changes in 2009-2010 to gear the course to non-zoo students mean that memory and all notes regarding how computers work will not be tested.
Part 2. January Notes
2. Numerical Methods
Wednesday January 13, 2010
• Methods in which a complex mathematical problem is reformatted so it can be solved by simple arithmetic operations
• Inevitably there will be some discrepancies (errors) between the numerical results and the true results
2.1. Error. The True Error E_t is defined as

E_t = true value − approximation

The shortcoming is that the order of magnitude of the error is not accounted for, so we have the True Fractional Relative Error:

E_tf = (true value − approximation) / (true value)

The problem is that the true value is rarely known in real-world applications, so we will use the relative approximation approach to error. The relative approximation error E_A:

E_A = (current approximation − previous approximation) / (current approximation) × 100%
• The sign of E_A (+/−) is not significant, so we use |E_A|
• Typically we would like |E_A| to be smaller than a predetermined threshold (tolerance) E_S (stopping condition), therefore |E_A| < E_S
In application of numerical methods, we need to focus on two types of errors
(1) Round-off errors
(2) Truncation errors
Round-Off Errors. These errors are introduced since computers can only represent some numbers approximately. Because each number is assigned a finite (fixed) number of bits, we are limited to a finite number of significant digits.
2.2. Review on how binary works (not tested in 2009-2010). Binary is a base-2 number system. If we have 16 bits for storing signed integers, the largest signed number we can store is:
0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
The first bit is for the sign, therefore the largest number is:

2^14 + 2^13 + ... + 2^1 + 2^0 = 2^15 − 1 = 32767
Note that we are restricted to a finite range of numbers. Real numbers are stored in a floating point format:

m × b^e

where m is the mantissa (significand), b is the base of the number system used (in computers it is 2) and e is the exponent. The mantissa is usually normalized before it is stored, such as removing the leading zeroes after the decimal point. Example: 1/19 = 0.052631... in a base 10 system with a 5 digit mantissa.
We store this as 5.2631 × 10^−2 instead of 0.05263 × 10^0 to gain 2 extra significant digits. In a typical computer, real numbers are stored as per the IEEE 754 floating point format. In the double precision format, 64 bits are allocated for each number. The divisions are as follows:
• 1st bit is the sign bit of the mantissa
• next 11 bits are the signed exponent
• the final 52 bits are the mantissa itself
The largest value available for a double precision float is 1.7977 × 10^308, and the smallest available normalized positive number is 2.2251 × 10^−308. Any number between 0 and 2.2251 × 10^−308 is therefore rounded to the nearest available number, or is clipped (chopped) [truncated]. This is the finite precision available with computers.
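These double precision limits can be checked directly; a quick sketch in Python (whose floats are IEEE 754 doubles):

```python
import sys

largest = sys.float_info.max     # about 1.7977e308
smallest = sys.float_info.min    # about 2.2251e-308 (smallest normalized double)

print(largest, smallest)
print(largest * 10)        # inf: overflow past the largest representable double
print(smallest / 1e300)    # 0.0: underflow below even the subnormal range
```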
3. Truncation
Friday January 15 2010
Machine epsilon: a parameter used to indicate the level of precision offered by a computer with the IEEE 754 format. For doubles machine epsilon is 2^−52 = 2.22 × 10^−16. This is the smallest relative spacing between representable numbers (the gap between 1 and the next larger double).
Significant digits are the digits of a number that can be used with confidence. To remove uncertainty we use scientific notation to show trailing zeroes.
Number         # Sig. digits
5000           1, 2, 3, or 4 (ambiguous)
5 × 10^3       1
5.000 × 10^3   4
5008           4
5008.5         5
0.005          1
Truncation Errors: these are the result of using an approximation instead of an exact mathematical formula.
Ex:

e^x = 1 + x + x^2/2! + ... + x^n/n! + ... = Σ_{n=0}^{∞} x^n/n!
If we compute e^1.2, approximating with the first three terms, we have

e^1.2 ≈ 1 + (1.2) + (1.2)^2/2! = 2.92

[the real value is a bit different]
The truncation error is all of the remaining terms = (1.2)^3/3! + (1.2)^4/4! + ...
If we now use the first 4 terms:

e^1.2 ≈ 1 + (1.2) + (1.2)^2/2! + (1.2)^3/3! = 3.208

The relative approximation error E_A is then

E_A1 = |3.208 − 2.92| / 3.208 × 100% = 8.98%
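The two partial sums and the approximation error above can be verified with a short Python sketch:

```python
from math import factorial, exp

def taylor_exp(x, n_terms):
    """Partial sum of the Maclaurin series for e^x."""
    return sum(x**k / factorial(k) for k in range(n_terms))

three = taylor_exp(1.2, 3)   # 2.92
four = taylor_exp(1.2, 4)    # 3.208
# Relative approximation error between successive estimates
ea = abs(four - three) / four * 100
print(three, four, round(ea, 2))
print(exp(1.2))              # true value, ~3.3201
```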
Another example is the derivative of a function at a point x, defined as:

f'(x) = lim_{Δx→0} [f(x + Δx) − f(x)] / Δx

f'(x) ≈ [f(x + Δx) − f(x)] / Δx

The truncation error is introduced by using a finite (practical) value of Δx instead of the (impractical) Δx → 0.
3.1. Taylor Series Polynomial. Very important, very fundamental to numerical methods.
- Suppose we have the value of a function at a single point x_i, and we know the value of all of its derivatives (1st, 2nd, ...) at the point x_i
- Then we can calculate the value of that function at any other point x_{i+1} from our Taylor series
f(x_{i+1}) = f(x_i) + f'(x_i)h + f''(x_i)h^2/2! + f'''(x_i)h^3/3! + ...

where h = x_{i+1} − x_i. The function must also be smooth between x_i → x_{i+1}.
For example:
f(x) = sin(x)
f ′(x) = cos(x)
We have points at xi = 0:
f(0) = sin(0) = 0
f ′(0) = cos(0) = 1
f ′′(0) = − sin(0) = 0
f (3)(0) = − cos(0) = −1
f (4)(0) = sin(0) = 0
f (5)(0) = cos(0) = 1
Using the first six terms we can use the Taylor series approximation to find sin(π/3):

x_i = 0,   x_{i+1} = π/3,   h = π/3

f(π/3) = 0 + 1·(π/3) + (0/2!)(π/3)^2 + (−1/3!)(π/3)^3 + (0/4!)(π/3)^4 + (1/5!)(π/3)^5 + (H.O.T.)
Where H.O.T. is higher order terms. If we drop the higher order terms (as Yani says "drop it like it's hot"¹) we then have the truncation error and the Taylor series approximation.
f(π/3) ≈ π/3 − π^3/(3!·3^3) + π^5/(5!·3^5) = 0.866295

While our exact value is √3/2 = 0.866025.
Therefore our truncation error ≈ 2.69 × 10^−4.
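A sketch of this six-term approximation in Python:

```python
from math import pi, sqrt, factorial

def sin_taylor(x, terms=6):
    """Maclaurin approximation of sin from the first `terms` series terms
    (including the zero terms, as in the notes)."""
    # derivative values of sin at 0 cycle through 0, 1, 0, -1, ...
    coeffs = [0, 1, 0, -1, 0, 1]
    return sum(c * x**k / factorial(k) for k, c in enumerate(coeffs[:terms]))

approx = sin_taylor(pi / 3)
exact = sqrt(3) / 2
print(approx)                 # 0.866295...
print(abs(approx - exact))    # ~2.7e-4 truncation error
```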
1Actual quotation
4. Taylor Series Polynomial
Monday January 18, 2010
f(x_{i+1}) = f(x_i) + f'(x_i)h + f''(x_i)h^2/2! + f'''(x_i)h^3/3! + ...

General equation:

f(x_{i+1}) = Σ_{k=0}^{∞} f^(k)(x_i) h^k / k!
f(x_{i+1}) = f(x_i) + f'(x_i)h + f''(x_i)h^2/2! + f'''(x_i)h^3/3! + ...

where truncating after the f(x_i) term gives the zero order approximation, after the f'(x_i)h term the first order approximation, after the f''(x_i)h^2/2! term the second order approximation, after the f'''(x_i)h^3/3! term the third order approximation, and so on.
4.1. Zero Order Approximation.
In the zero order equation we only consider the ’zeroth’ term
f(xi+1) ' f(xi)
The induced truncation error (E) is the portion of the Taylor series that we did not account for. This is as follows:

E = f'(x_i)h + f''(x_i)h^2/2! + ...
4.2. First Order Approximation.
We now take into account the first derivative of the function (we add the slope)
f(x_{i+1}) ≈ f(x_i) + f'(x_i)h
Again the induced truncation error (E) is the portion of the Taylor series that we truncated (ignored):

E = f''(x_i)h^2/2! + ...
4.3. Improving the accuracy of the Taylor series approximation. We have two different ways of increasing the accuracy of our Taylor series approximation:
- increase the number of terms (higher order of approximation, higher order derivatives)
- smaller step size (h)
4.4. Maclaurin series. For cases where the initial point is x_i = 0.
Ex: f(x) = sin(x), x_i = 0, h = x_{i+1} − x_i = x_{i+1}.
We know from trigonometry that sin'(x) = cos(x) and cos'(x) = −sin(x), so:

f(0) = 0,   f'(0) = 1,   f''(0) = 0
f^(3)(0) = −1,   f^(4)(0) = 0,   f^(5)(0) = 1

We can already see that, because of the trigonometric patterns, we have a much simplified solution.
If we now replace x_{i+1} with x we have the following formula, the formula that calculators in fact use to calculate the value of sin:

sin(x) = x − x^3/3! + x^5/5! − ...
4.5. General Taylor Series information. If only n derivatives of a function are available, then

f(x_{i+1}) ≈ f(x_i) + f'(x_i)h + ... + f^(n)(x_i) h^n/n!

and exactly:

f(x_{i+1}) = f(x_i) + f'(x_i)h + ... + f^(n)(x_i) h^n/n! + R_n

where R_n is the remainder term, or truncation error. From the first term of the truncated part of the Taylor series, the closed-form expression for R_n is:

R_n = f^(n+1)(c) h^(n+1) / (n+1)!

where c is some value between x_i and x_{i+1}. From this relationship we know that R_n is proportional to a high power of the step size h:

R_n ∝ h^(n+1),   or   R_n = O(h^(n+1))

where O refers to "order of" in mathematical notation.
5. Behavior of Error and Numerical Differentiation
Wednesday January 20, 2010
5.1. Big O in terms of error. In an approximation to a mathematical function, big O shows how the error will behave as h → 0.
Example:

R_2 = f'''(x_i)h^3/3! + f^(4)(x_i)h^4/4! + ...

where the expression is a function of h only (one variable). The h^3 term dominates the expression as h → 0, therefore |R_2| ≤ const·|h^3| and R_2 = O(h^3).
Example: Taylor series for f(x) = e^x at x_i = 0:
f(x) = e^x = 1 + x + x^2/2! + x^3/3! + ...

with x_i = 0 and x_{i+1} = x.
- Use the fourth order Taylor approximation to estimate e^2
- Use the remainder expression to find the bounds on the truncation error

e^2 = 1 + 2 + 2^2/2! + 2^3/3! + 2^4/4! + R_4 ≈ 1 + 2 + 2 + 4/3 + 2/3 = 7

|R_4| ≤ k|h^5|

R_4 = f^(5)(c) h^5 / 5!

Since h = x_{i+1} − x_i = 2 − 0 = 2 and the 5th derivative of e^x is e^x:

R_4 = e^c · 2^5/5! = (4/15) e^c

where x_i < c < x_{i+1}, i.e. 0 < c < 2. Since e^c increases continuously between 0 → 2, our error bounds are:

(4/15)e^0 = 4/15 ≤ R_4 ≤ (4/15)e^2 = 1.97

4/15 < R_4 < 1.97
If we double check our answer:

e^2 = 7.389
|Δ| = 7.389 − 7 = 0.389
4/15 < 0.389 < 1.97

We can see that our range is correct!
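The estimate, the true error, and the remainder bounds can be checked numerically; a Python sketch:

```python
from math import exp, factorial

# Fourth order Taylor estimate of e^2 about x = 0 (h = 2)
estimate = sum(2**k / factorial(k) for k in range(5))   # 7.0
true_value = exp(2)                                     # 7.389...
error = true_value - estimate                           # 0.389...

# Remainder bounds: R4 = e^c * 2^5/5! = (4/15) e^c for some c in (0, 2)
lower = (4 / 15) * exp(0)   # 0.2667
upper = (4 / 15) * exp(2)   # 1.970
print(lower < error < upper)
```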
5.2. Numerical Differentiation. With the Taylor series:

f(x_{i+1}) = f(x_i) + f'(x_i)h + f''(x_i)h^2/2! + f'''(x_i)h^3/3! + ...

We rearrange our terms to solve for f'(x_i):

f'(x_i) = [f(x_{i+1}) − f(x_i)]/h − f''(x_i)h/2! − f'''(x_i)h^2/3! − ...

f'(x_i) ≈ [f(x_{i+1}) − f(x_i)]/h    (Forward Difference Formula)

The truncation error is O(h).
Backward Difference Formula: a backwards Taylor series expansion (note that x_{i+1} is replaced with x_{i−1}):

f(x_{i−1}) = f(x_i) − f'(x_i)h + f''(x_i)h^2/2! − ...

f'(x_i) ≈ [f(x_i) − f(x_{i−1})]/h    (Backwards Difference Formula)

The truncation error is still O(h).
Central Difference Method: the best approximation combines the two formulas:

f'(x_i) ≈ [f(x_{i+1}) − f(x_{i−1})]/(2h)    (Central Difference Formula)

The error is now of the order E = O(h^2).
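A small Python sketch comparing the three formulas on f(x) = sin(x) at x = 1, so the exact derivative is cos(1):

```python
from math import sin, cos

def forward(f, x, h):  return (f(x + h) - f(x)) / h
def backward(f, x, h): return (f(x) - f(x - h)) / h
def central(f, x, h):  return (f(x + h) - f(x - h)) / (2 * h)

x, h = 1.0, 0.1
exact = cos(x)
# Central difference error O(h^2) is much smaller than the O(h) formulas
print(abs(forward(sin, x, h) - exact))   # ~0.043
print(abs(backward(sin, x, h) - exact))  # ~0.041
print(abs(central(sin, x, h) - exact))   # ~0.0009
```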
6. Numerical Differentiation and N dimensional Taylor series
Friday January 22, 2010
The numerical differentiation techniques that we have used so far are called finite differences in numerical methods.
Find the numerical derivative at x_1 given:

f(x_0) = 2.2874 at x_0 = 1.0
f(x_1) = 2.6773 at x_1 = 1.1
f(x_2) = 3.0945 at x_2 = 1.2

Backward Difference:
f'(x_1) ≈ [f(x_1) − f(x_0)]/(x_1 − x_0) = 3.899

Forward Difference:
f'(x_1) ≈ [f(x_2) − f(x_1)]/(x_2 − x_1) = 4.172

Central Difference:
f'(x_1) ≈ [f(x_2) − f(x_0)]/(x_2 − x_0) = 4.0355
6.1. Finite Difference Approximation of Higher Derivatives. With x_{i+2} = x_i + 2h, the Taylor series gives:

f(x_{i+2}) ≈ f(x_i) + f'(x_i)(2h) + f''(x_i)(2h)^2/2!

And:

f(x_{i+1}) ≈ f(x_i) + f'(x_i)h + f''(x_i)h^2/2!

Subtracting twice the second from the first:

f(x_{i+2}) − 2f(x_{i+1}) = −f(x_i) + f''(x_i)h^2

Solving for f''(x_i) we can derive equations for the second derivative of the function:

f''(x_i) ≈ [f(x_{i+2}) − 2f(x_{i+1}) + f(x_i)]/h^2    (Forward Difference)

f''(x_i) ≈ [f(x_i) − 2f(x_{i−1}) + f(x_{i−2})]/h^2    (Backwards Difference)

f''(x_i) ≈ [f(x_{i+1}) − 2f(x_i) + f(x_{i−1})]/h^2    (Central Difference)

The truncation error of the central difference formula is O(h^2). Another way of viewing the central difference formula is as the change in the derivative compared to the change in the x value.
6.2. N-Dimensional Taylor Series - 1st order approximation. For 1-D functions we only need f(x), but for an N-dimensional function f(x_1, x_2, ..., x_N) we use vectors to define our values of x as a vector x:

x = [x_1, x_2, ..., x_N]^T

Therefore f(x_1, x_2, ..., x_N) = f(x).
We will now look at the differences between 1-dimensional and N-dimensional functions:

Quality            1-Dimensional        N-Dimensional
Function variable  x                    x = [x_1, x_2, ..., x_N]^T
Expansion point    x_i                  x_i = [x_{1i}, x_{2i}, ..., x_{Ni}]^T
Step size          h = x_{i+1} − x_i    h = [x_{1(i+1)} − x_{1i}, ..., x_{N(i+1)} − x_{Ni}]^T = x_{i+1} − x_i
First derivative   f'(x_i)              J(x_i) = [∂f(x)/∂x_1, ..., ∂f(x)/∂x_N]^T

The first derivative of a multi-dimensional function is called the Jacobian; J(x_i) is the Jacobian of the function f(x) at x = x_i.
The Taylor series expansion at this point for a multi-dimensional function is:

f(x_{i+1}) ≈ f(x_i) + f'(x_i)h
f(x_{i+1}) ≈ f(x_i) + J(x_i)^T · h

where T represents transposition:

f(x_{i+1}) ≈ f(x_i) + [∂f(x)/∂x_1 ... ∂f(x)/∂x_N] · [x_{1(i+1)} − x_{1i}, ..., x_{N(i+1)} − x_{Ni}]^T
7. Functions of Multiple Dimensions
Monday January 25, 2010
7.1. Example 1.

x_i = [0, 0, 0]^T,   x_{i+1} = [0.1, 0.1, 0.1]^T,   P(x) = 3x_1 + sin(x_2) + 2x_2x_3 + 1

If P(x_i) = 1, what is P(x_{i+1})?

J(x) = [∂P/∂x_1, ∂P/∂x_2, ∂P/∂x_3]^T = [3, cos(x_2) + 2x_3, 2x_2]^T

And at the point x_i:

J(x_i) = [3, cos(0), 0]^T = [3, 1, 0]^T

And from our first order Taylor series in three dimensions:

P(x_{i+1}) ≈ P(x_i) + J(x_i)^T · h ≈ 1 + [3 1 0] · [0.1, 0.1, 0.1]^T ≈ 1.4
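The same first order estimate in Python (plain arithmetic, using only the Jacobian derived above):

```python
from math import sin, cos

def P(x1, x2, x3):
    return 3 * x1 + sin(x2) + 2 * x2 * x3 + 1

# Jacobian (gradient) of P evaluated at xi = (0, 0, 0)
J = [3.0, cos(0.0) + 2 * 0.0, 2 * 0.0]   # [3, 1, 0]
h = [0.1, 0.1, 0.1]

estimate = P(0, 0, 0) + sum(j * hk for j, hk in zip(J, h))
print(estimate)           # 1.4
print(P(0.1, 0.1, 0.1))   # ~1.4198, exact value for comparison
```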
We will now try the second order approximation. We need to determine the second order derivative of the function f(x):

H(x) =
[ ∂²f/∂x_1²      ∂²f/∂x_1∂x_2   ...  ∂²f/∂x_1∂x_N ]
[ ∂²f/∂x_2∂x_1   ∂²f/∂x_2²      ...  ∂²f/∂x_2∂x_N ]
[ ...            ...            ...  ...          ]
[ ∂²f/∂x_N∂x_1   ∂²f/∂x_N∂x_2   ...  ∂²f/∂x_N²    ]

where H is called the Hessian. The Hessian can also be written as:

H(x) = [∂J(x)/∂x_1   ∂J(x)/∂x_2   ...   ∂J(x)/∂x_N]
Note that the Hessian has symmetry: H^T = H, therefore H(i,j) = H(j,i), since ∂²f/∂x_i∂x_j = ∂²f/∂x_j∂x_i.
Now we can use this result to determine the second order Taylor polynomial:

f(x_{i+1}) ≈ f(x_i) + f'(x_i)h + f''(x_i)h^2/2!

f(x_{i+1}) ≈ f(x_i) + J(x_i)^T · h + h^T · H(x_i) · h / 2!
Note that as h → 0 we may get round off errors and increase the total error.
Example: use the second order approximation to estimate f(x_{i+1}), with

x_i = [0, 0, 0]^T,   h = [0.1, 0.1, 0.1]^T,   f(x_i) = 1

J(x) = [3, cos(x_2) + 2x_3, 2x_2]^T,   J(x_i) = J(0) = [3, 1, 0]^T

H(x) =
[0   0          0]
[0   −sin(x_2)  2]
[0   2          0]

H(x_i) = H(0) =
[0 0 0]
[0 0 2]
[0 2 0]

Therefore:

f(x_{i+1}) ≈ 1 + [0.1 0.1 0.1]·[3, 1, 0]^T + (1/2)[0.1 0.1 0.1]·H(0)·[0.1, 0.1, 0.1]^T
≈ 1 + 0.4 + (1/2)[0.1 0.1 0.1]·[0, 0.2, 0.2]^T
≈ 1 + 0.4 + (1/2)(0.04)
≈ 1.42
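The second order estimate, written out in Python with the Jacobian and Hessian from above:

```python
from math import sin, cos

def f(x1, x2, x3):
    return 3 * x1 + sin(x2) + 2 * x2 * x3 + 1

h = [0.1, 0.1, 0.1]
J = [3.0, cos(0.0) + 2 * 0.0, 2 * 0.0]   # Jacobian at xi = (0, 0, 0)
H = [[0.0, 0.0, 0.0],                    # Hessian at xi = (0, 0, 0)
     [0.0, -sin(0.0), 2.0],
     [0.0, 2.0, 0.0]]

first = sum(J[k] * h[k] for k in range(3))                       # J^T h = 0.4
Hh = [sum(H[r][k] * h[k] for k in range(3)) for r in range(3)]   # H h = [0, 0.2, 0.2]
second = sum(h[r] * Hh[r] for r in range(3)) / 2                 # h^T H h / 2! = 0.02

estimate = f(0, 0, 0) + first + second
print(estimate)           # 1.42
print(f(0.1, 0.1, 0.1))   # ~1.4198
```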
7.2. Roots of Functions. The roots of a single variable function have many applications, e.g. optimization of a single variable function where f'(x) = 0. For multivariable functions it is more difficult to find optima and roots. For the roots of single variable functions we will use the following methods:
• Bracketing Methods
  – Bisection
  – False position
• Open Methods
  – Newton–Raphson
  – Secant
• Polynomials – Muller
Bracketing: we find solutions of f(x) = c and assume x ∈ [x_A, x_B]. Define g(x) = f(x) − c and find the roots of g(x) in the [x_A, x_B] bracket, then shrink the bracket.
8. Bracketing Methods
Wednesday January 27, 2010
8.1. Bisection.
- The idea is to halve the interval and discard the part that does not contain the root. Observe that if a root exists between x_A and x_B, there will be a sign change between g(x_A) and g(x_B). Therefore g(x_A) · g(x_B) < 0.
8.1.1. Bisection Algorithm.

(1) Find the midpoint between x_A and x_B: x_m = (x_A + x_B)/2
(2) Find g(x_m)
(3) Test to see if the midpoint is zero or close to it: if |g(x_m)| = 0 or ≤ E_tolerance, we are done! x_r = x_m = root
(4) See if a sign change takes place in the first half of the bracket. Else if g(x_A) · g(x_m) > 0: a positive sign means the root is in the upper half of the bracket (x_m < x_r < x_B). We now change the bounds of the bracket to the upper half: x_A = x_m, x_B = x_B. Return to step 1 and repeat.
Else if g(x_A) · g(x_m) < 0: a negative sign means the root is in the lower half of the bracket (x_A < x_r < x_m). We now take the lower half of our bracket as the new bracket: x_A = x_A, x_B = x_m. Return to step 1 and repeat.
Note: if g(x_A) · g(x_B) > 0 for the initial interval, we either have:
- no root between them, or
- an even number of roots between them.
If we have an odd number of roots, bisection will only find one root.
Tip: try to graph the function first before choosing the interval.
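The algorithm above can be sketched in Python (the tolerance and iteration cap are arbitrary choices):

```python
def bisection(g, xa, xb, tol=1e-8, max_iter=100):
    """Bisection root finder; assumes g(xa) and g(xb) have opposite signs."""
    if g(xa) * g(xb) > 0:
        raise ValueError("no sign change on [xa, xb]")
    for _ in range(max_iter):
        xm = (xa + xb) / 2
        if abs(g(xm)) <= tol:
            break
        if g(xa) * g(xm) > 0:
            xa = xm    # root is in the upper half of the bracket
        else:
            xb = xm    # root is in the lower half of the bracket
    return xm

# Root of g(x) = x^2 - 2 on [0, 2] is sqrt(2)
root = bisection(lambda x: x * x - 2, 0.0, 2.0)
print(root)   # ~1.41421356
```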
8.2. False Position (FP). We can see that bisection is not very efficient.
- We use the false position point instead of the midpoint, but otherwise the method is the same. We use a similar-triangle relationship to find our false position point x_m:

g(x_A)/(x_m − x_A) = −g(x_B)/(x_B − x_m),   x_m = unknown

x_m = [g(x_A)x_B − g(x_B)x_A] / [g(x_A) − g(x_B)]

x_m = x_B − g(x_B)(x_A − x_B) / [g(x_A) − g(x_B)]
8.3. Bracket Summary. Bracket midpoint update terms:

Bisection:       x_m = (x_A + x_B)/2
False Position:  x_m = x_B − g(x_B)(x_A − x_B)/[g(x_A) − g(x_B)]

FP places x_m closer to the root, but is slower on functions with strong curvature.
8.4. Open Methods. No bracket is required; only one initial point (a guess) is needed. Open methods are usually faster than bracketing methods, but may not always find a given root and may not converge correctly (if the guess is bad).
8.4.1. Newton–Raphson. First order approximation of the Taylor series:

g(x_{i+1}) ≈ g(x_i) + g'(x_i)(x_{i+1} − x_i)

where x_i is the current point and x_{i+1} the next point for the root. We are looking for g(x_{i+1}) = 0, therefore:

0 ≈ g(x_i) + g'(x_i)(x_{i+1} − x_i)

x_{i+1} ≈ x_i − g(x_i)/g'(x_i)    (Newton–Raphson update term)
9. Open Methods
Friday January 29, 2010
9.1. Newton-Raphson (NR). Geometrically, the tangent slope satisfies

g'(x_i) = g(x_i)/(x_i − x_{i+1})

which rearranges to the same NR update:

x_{i+1} = x_i − g(x_i)/g'(x_i)

As an algorithm:

(1) x = x_0
(2) If g(x) = 0 or |g(x)| ≤ E_tolerance, we are done: x_r = x
(3) x_new = x − g(x)/g'(x); set x = x_new, return to step 2 and repeat
Example:

g(x) = e^x − 10,   exact root ≈ 2.302
x_0 = 4,   g'(x) = e^x

g(x_0) = e^4 − 10 ≠ 0,   E = 1.70
x_1 = x_0 − g(x_0)/g'(x_0) = 4 − (e^4 − 10)/e^4 = 3.18,   E = 0.88
x_2 = 3.18 − (e^3.18 − 10)/e^3.18 = 2.59,   E = 0.293
x_3 = 2.34,   E = 0.037

We can see that the solution for x converges, and the error from the true value decreases.
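The NR algorithm for this example, as a Python sketch:

```python
from math import exp, log

def newton_raphson(g, dg, x, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x <- x - g(x)/g'(x)."""
    for _ in range(max_iter):
        if abs(g(x)) <= tol:
            break
        x = x - g(x) / dg(x)
    return x

root = newton_raphson(lambda x: exp(x) - 10, lambda x: exp(x), 4.0)
print(root)      # ~2.302585 (= ln 10)
print(log(10))   # exact root for comparison
```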
9.2. Convergence of open methods. In bisection the error roughly decreases by 1/2 every iteration: linear convergence. In NR the error is roughly proportional to the square of the previous error: quadratic convergence.
Warning: watch for inflection points and local min/max, since g'(x_i) = 0 breaks the update!
9.3. Secant Method. In the NR update x_{i+1} = x_i − g(x_i)/g'(x_i) we use the derivative. When the derivative is unknown, we use the backwards difference method to approximate g'(x_i):

g'(x_i) ≈ [g(x_{i−1}) − g(x_i)]/(x_{i−1} − x_i)

Therefore: x_{i+1} = x_i − g(x_i)(x_{i−1} − x_i)/[g(x_{i−1}) − g(x_i)]
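A Python sketch of the secant update, reusing the g(x) = e^x − 10 example from NR (the two starting points are arbitrary):

```python
from math import exp, log

def secant(g, x_prev, x, tol=1e-10, max_iter=50):
    """Secant method: NR with the derivative replaced by a difference quotient."""
    for _ in range(max_iter):
        if abs(g(x)) <= tol:
            break
        x_next = x - g(x) * (x_prev - x) / (g(x_prev) - g(x))
        x_prev, x = x, x_next
    return x

root = secant(lambda x: exp(x) - 10, 4.0, 3.5)
print(root)   # ~2.302585 (= ln 10)
```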
10. Cheap Plastic Yani Notes
Muller’s method and root finding
Hurrah, it looks like the notes are finally done!
Yani's cheap plastic example of Muller's Method

Example: use Muller's Method to find the roots of f(x) = cos(x). We know there is a root at x = π/2 = 1.5708, but pretend we do not know this and set up our initial points at x0 = 0, x1 = 0.5, x2 = 1.

First, the MATLAB code for implementing this method:

x0 = 0; x1 = 0.5; x2 = 1;
err = 100; % arbitrary large number
counter = 0;
while err > 0.001
    % Evaluate f at current points
    fx0 = cos(x0); fx1 = cos(x1); fx2 = cos(x2);
    % Calculate the coeffs for the 2nd order polynomial
    d0 = (fx1 - fx0)/(x1 - x0);
    d1 = (fx2 - fx1)/(x2 - x1);
    h0 = x1 - x0;
    h1 = x2 - x1;
    a = (d1 - d0)/(h1 + h0);
    b = a*h1 + d1;
    c = fx2;
    % Calculate the inverse of the roots of the 2nd order polynomial
    z1 = (-b + sqrt(b^2 - 4*a*c))/(2*c);
    z2 = (-b - sqrt(b^2 - 4*a*c))/(2*c);
    % Find the largest of the two
    z = max(abs(z1), abs(z2));
    % Calculate the next point and the error
    x3 = x2 + 1/z;
    err = abs(x3 - x2);
    % Update points
    x0 = x1; x1 = x2; x2 = x3;
    counter = counter + 1;
end
First iteration:
a = −0.42972, b = −0.88942, c = 0.5403
x3 = 1.4910

Second iteration:
a = −0.26592, b = −1.0686, c = 0.0797
x3 = 1.5643
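A Python port of the MATLAB listing above, so the iteration can be checked without MATLAB. One hedge: where the MATLAB code takes max(abs(z1), abs(z2)) and discards the sign of z, this port keeps the sign of the larger root, which is the safer general choice and gives the same result for this example:

```python
from math import cos, sqrt, pi

x0, x1, x2 = 0.0, 0.5, 1.0
err = 100.0   # arbitrary large number
while err > 0.001:
    fx0, fx1, fx2 = cos(x0), cos(x1), cos(x2)
    d0 = (fx1 - fx0) / (x1 - x0)
    d1 = (fx2 - fx1) / (x2 - x1)
    h0, h1 = x1 - x0, x2 - x1
    a = (d1 - d0) / (h1 + h0)
    b = a * h1 + d1
    c = fx2
    # Inverse roots z = 1/(x3 - x2) of the fitted parabola
    disc = sqrt(b * b - 4 * a * c)
    z1 = (-b + disc) / (2 * c)
    z2 = (-b - disc) / (2 * c)
    z = z1 if abs(z1) > abs(z2) else z2   # keep the sign of the larger root
    x3 = x2 + 1 / z
    err = abs(x3 - x2)
    x0, x1, x2 = x1, x2, x3

print(x2)   # ~1.5708 = pi/2
```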
Yani's cheap plastic methods on root finding with MATLAB

fzero function - single variable nonlinear zero finding

This built-in MATLAB function uses a combination of bracketing and open techniques to find the roots of nonlinear functions.

Simplified syntax: X = FZERO(FUN,X0) tries to find a zero of the function FUN near X0 (X0 is the initial guess). FUN can be specified in a number of ways. To find the roots of f(x) = x^2 sin(x) you can use:

f = @(x) x^2*sin(x); % Create an anonymous function
r1 = fzero(f, 3);    % Find the root with an initial guess of 3

Or, do it all at once:

r1 = fzero(@(x) x^2*sin(x), 3);

Example: finding the roots of f(x) = x^2 sin(x) using MATLAB:

>> f = @(x) x^2*sin(x);

MATLAB Command          Root found
>> r = fzero(f,4);      3.1416 (= π)
>> r = fzero(f,0.5);    0
>> r = fzero(f,10);     9.4248 (= 3π)

Lesson: the root obtained from fzero depends on the initial guess.
deconv function - deflating polynomials

All the methods discussed so far (Bisection, False Position, NR, Secant) can be applied to finding real roots of polynomials. For polynomials with multiple roots, polynomial deflation can be used in combination with the above methods to avoid the need for coming up with proper initial guess values.

Example: finding the roots of f(x) = x^4 − 5x^3 + 5x^2 + 5x − 6.

Suppose we start with an initial guess of x0 = 2.2 and use NR to determine that x = 2 is a root. This also means that (x − 2) is a factor of f(x). We can now deflate f(x) with the factor (x − 2) by long division:

(x^4 − 5x^3 + 5x^2 + 5x − 6) ÷ (x − 2) = x^3 − 3x^2 − x + 3,   remainder 0
Therefore, f(x) = (x − 2)(x^3 − 3x^2 − x + 3). That was a lot of work! Let's get MATLAB to do it.

MATLAB can be used to perform the polynomial deflation task. Polynomials can be entered into MATLAB by storing the coefficients as a vector. So, for x^4 − 5x^3 + 5x^2 + 5x − 6, we can create a vector called f:

>> f = [1 -5 5 5 -6]; % Coefficients in descending power

And for (x − 2) we can create a second vector called fr1:

>> fr1 = [1 -2];

The deconv function can now be used to perform the polynomial division from the last page:

>> f2 = deconv(f, fr1);

This will produce: f2 = [1 -3 -1 3]

which translates to the polynomial x^3 − 3x^2 − x + 3.

Back to our problem: f(x) = (x − 2)(x^3 − 3x^2 − x + 3) = (x − 2) f2(x).
f2(x) is the new function for which we need to find a root. Apply NR again. Suppose we converge on the root at x = 1. Now we need to deflate f2(x) by (x − 1):

(x^3 − 3x^2 − x + 3) ÷ (x − 1) = x^2 − 2x − 3

Therefore, f2(x) = (x − 1)(x^2 − 2x − 3).

Let's have MATLAB deflate f2:

>> f2 = [ 1 -3 -1 3]; % f2 = x^3 - 3x^2 - x + 3
>> fr2 = [1 -1]; % vector for the factor (x - 1)
>> f3 = deconv(f2, fr2);

This will produce: f3 = [1 -2 -3]

which translates to the polynomial x^2 − 2x − 3.
Part 3. February Notes
11. Roots of Polynomials
Monday February 1, 2010
We can use bracketing or open methods for a given polynomial, combined with polynomial deflation to avoid finding the same root twice. Problems with these methods are slow convergence under strong curvature, and complex roots.
11.1. Muller's Method. We use the second order Taylor (parabolic) approximation to determine the next estimate. We can find real, complex, single or multiple roots using this method. It can be applied to any function; it is very general. We require 3 initial points.
Parabola:

f(x) ≈ dx^2 + ex + F = A(x − x_2)^2 + B(x − x_2) + C

We model our parabola with the given points. A, B and C can be found using linear algebra.
f(x_0) = A x_0^2 + B x_0 + C
f(x_1) = A x_1^2 + B x_1 + C
f(x_2) = A x_2^2 + B x_2 + C

or, in the shifted form:

f(x_0) = A(x_0 − x_2)^2 + B(x_0 − x_2) + C
f(x_1) = A(x_1 − x_2)^2 + B(x_1 − x_2) + C
f(x_2) = C

where:

A = (δ_1 − δ_0)/(h_1 + h_0)
B = A h_1 + δ_1
δ_0 = [f(x_1) − f(x_0)]/(x_1 − x_0),   h_0 = x_1 − x_0
δ_1 = [f(x_2) − f(x_1)]/(x_2 − x_1),   h_1 = x_2 − x_1
Therefore, setting the parabola to zero at the root x_r:

0 = A(x_r − x_2)^2 + B(x_r − x_2) + C

(x_r − x_2) = [−B ± √(B^2 − 4AC)] / (2A)

We will get two roots; we choose the one that is closest to x_2 (smallest |x_r − x_2|). Once x_r is determined, we update our three points and repeat.
11.2. Complication With Muller's Method. If B^2 ≫ 4AC, then −B ± √(B^2 − 4AC) → very small, causing round off errors.
We then let z = 1/(x_r − x_2), i.e. z^(−1) = x_r − x_2:

0 = A z^(−2) + B z^(−1) + C
0 = C z^2 + B z + A

where, to avoid round off:

z = [−B ± √(B^2 − 4AC)] / (2C)

Take the root with the largest |z|; since z = 1/(x_r − x_2), this is the root with the smallest |x_r − x_2|.
To avoid calculating both roots:

z = [−B − √(B^2 − 4AC)] / (2C)   if B > 0
z = [−B + √(B^2 − 4AC)] / (2C)   if B < 0
12. Golden Section Search
Wednesday February 3, 2010
12.1. How It Works. - similar to bracketing method
Assume minimum exists between upper and lower bounds: xu and xl:
(1) Choose a distance d [d < (x_u − x_l)]
(2) Set up 2 overlapping intervals of width d: [x_l, x_1] and [x_2, x_u], where
    x_1 = x_l + d
    x_2 = x_u − d
(3) Determine f(x_1) and f(x_2)
(4) If f(x_1) < f(x_2): the minimum is in the upper interval [x_2, x_u] (x_2 becomes the new x_l)
    If f(x_1) > f(x_2): the minimum is in the lower interval [x_l, x_1] (x_1 becomes the new x_u)
(5) Return to step 2 and repeat
12.2. How To Choose d. Range of d values: (x_u − x_l)/2 < d < x_u − x_l.
A smaller d value means a smaller overlap with faster convergence, but may be inaccurate. A larger d value means a larger overlap with more accurate results, but will be slow. The Golden Ratio is the most efficient ratio that has been found. The interval is divided according to the golden ratio:
(l_1 + l_2)/l_1 = l_1/l_2

Setting Φ = l_2/l_1:

1 + Φ = 1/Φ
Φ^2 + Φ − 1 = 0
Φ = [−1 ± √(1 + 4)]/2 = (√5 − 1)/2 ≈ 0.618

Therefore d = [(√5 − 1)/2](x_u − x_l)
12.3. Golden Section Maximum Search.
If f(x_2) < f(x_1): the maximum is in the upper interval [x_2, x_u] (x_2 becomes the new x_l)
If f(x_1) < f(x_2): the maximum is in the lower interval [x_l, x_1] (x_1 becomes the new x_u)
12.4. Example. Optimize f(x) = 1 − x^2 using the golden ratio, on the interval [−0.5, 1].
d = [(√5 − 1)/2](1.5) = 0.927

x_1 = x_l + d = −0.5 + 0.927 = 0.427
x_2 = x_u − d = 1 − 0.927 = 0.073

f(x_1) = 1 − (0.427)^2 = 0.818
f(x_2) = 1 − (0.073)^2 = 0.995

Because we are looking for the max: f(x_1) < f(x_2) (the function increases to the left). Now the interval is the lower one: [x_l, x_1] = [−0.5, 0.427].
Second iteration:

d = [(√5 − 1)/2](0.927) = 0.573

x_1 = x_l + d = 0.073
x_2 = x_u − d = −0.146

f(x_1) = 1 − (0.073)^2 = 0.995
f(x_2) = 1 − (−0.146)^2 = 0.979

f(x_1) > f(x_2)
And we continue to decrease the size of the interval, until stopping conditions
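The whole search can be sketched in Python (here written for a maximum, with an arbitrary interval-width stopping tolerance):

```python
from math import sqrt

PHI = (sqrt(5) - 1) / 2   # golden ratio factor, ~0.618

def golden_max(f, xl, xu, tol=1e-6):
    """Golden section search for a maximum of f on [xl, xu]."""
    while xu - xl > tol:
        d = PHI * (xu - xl)
        x1, x2 = xl + d, xu - d
        if f(x1) < f(x2):
            xu = x1    # maximum lies in the lower interval [xl, x1]
        else:
            xl = x2    # maximum lies in the upper interval [x2, xu]
    return (xl + xu) / 2

best = golden_max(lambda x: 1 - x * x, -0.5, 1.0)
print(best)   # ~0.0, the maximum of 1 - x^2
```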
13. Parabolic Curve Fit
Friday February 5, 2010
13.1. How It Works. Similar to Muller's method:
(1) Fit a parabola to 3 points given from the function
(2) Find the min/max by taking the derivative of the parabola approximation
(3) Based on the location of the min/max found, set up 3 new points and repeat until the iteration reaches stability or a small enough interval
g(x) = Ax^2 + Bx + C

where A, B, C are unknowns. We use f(x_1), f(x_2), f(x_3) to solve:

[x_1^2  x_1  1] [A]   [f(x_1)]
[x_2^2  x_2  1] [B] = [f(x_2)]
[x_3^2  x_3  1] [C]   [f(x_3)]

And we solve for A, B, and C. Because:

g(x) = Ax^2 + Bx + C
d/dx (Ax^2 + Bx + C) = 2Ax + B

the stationary point is x_p = −B/(2A).
For the next iteration, we use xp, x1, x2, x3
Rule: choose xp and its 2 closest neighbours
13.2. Example. f(x) = |x − 1| + 1

x_1 = 2, x_2 = 0, x_3 = −1
y_1 = 2, y_2 = 2, y_3 = 3

[4  2  1 | 2]     [4  2  0 | 0]     [6  0  0 | 2]
[0  0  1 | 2]  ~  [0  0  1 | 2]  ~  [0  0  1 | 2]
[1 −1  1 | 3]     [1 −1  0 | 1]     [1 −1  0 | 1]

A = 1/3,  B = −2/3,  C = 2

g(x) = (1/3)x^2 − (2/3)x + 2
min: g'(x) = (2/3)x − 2/3
x_p = −B/(2A) = 1

We know whether it is an optimum via an analytic approach or graphs.

x_p = 1,  x_1 = 2,  x_2 = 0,  x_3 = −1
We can see that the two closest points are x_1 and x_2.
New initial points:

x_1 = 0, y_1 = 2
x_2 = 1, y_2 = 1
x_3 = 2, y_3 = 2

[4  2  1 | 2]     [2  0 −1 | 0]     [2  0  0 | 2]     [1  0  0 |  1]
[1  1  1 | 1]  ~  [1  1  1 | 1]  ~  [0  1  1 | 0]  ~  [0  1  0 | −2]
[0  0  1 | 2]     [0  0  1 | 2]     [0  0  1 | 2]     [0  0  1 |  2]

A = 1,  B = −2,  C = 2

g(x) = x^2 − 2x + 2
g'(x) = 2x − 2
x_p = −B/(2A) = −(−2)/2 = 1

Since x_p is unchanged, we are done!
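One parabolic-fit step can be sketched in Python; the Lagrange-style coefficients below are algebraically equivalent to solving the 3×3 system above. The function f(x) = |x − 1| + 1 matches the tabulated y values:

```python
def parabola_vertex(x1, y1, x2, y2, x3, y3):
    """Fit g(x) = A x^2 + B x + C through three points (Lagrange form)
    and return the vertex xp = -B/(2A)."""
    d1 = y1 / ((x1 - x2) * (x1 - x3))
    d2 = y2 / ((x2 - x1) * (x2 - x3))
    d3 = y3 / ((x3 - x1) * (x3 - x2))
    A = d1 + d2 + d3
    B = -(d1 * (x2 + x3) + d2 * (x1 + x3) + d3 * (x1 + x2))
    return -B / (2 * A)

# Points from the example: (2, 2), (0, 2), (-1, 3) on f(x) = |x - 1| + 1
xp = parabola_vertex(2, 2, 0, 2, -1, 3)
print(xp)   # 1.0, the minimum found in the first step of the notes
```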
13.3. Derivative Method. If g(x) = d/dx f(x) is known, this is the preferred method. The slope of g (the curvature of f) determines whether there is a min or max.
Ex:

f(t) = e^(−t) cos(t)
g(t) = −e^(−t) cos(t) − e^(−t) sin(t) = −e^(−t)(cos(t) + sin(t))

We then use NR or bisection to solve g(t) = 0.
13.4. Multi-Dimensional. We only focus on cases where the derivative (Jacobian) exists. The objective is to find where the Jacobian is 0. This converts a multi-dimensional optimization into solving a (possibly non-linear) system of equations.
13.5. Case 1: the Jacobian is a linear system of equations.
Example 1: optimization of f(x) = x_1^2 + x_2^2 + 3

J(x) = [∂f/∂x_1; ∂f/∂x_2] = [2x_1; 2x_2]

When J(x) = 0, the augmented system is:

[2 0 | 0]
[0 2 | 0]

so x_1 = 0, x_2 = 0.
Example 2: optimization of f(x) = x_1^2 + x_2^2 + x_1x_2 + 2x_1

J(x) = [∂f/∂x_1; ∂f/∂x_2] = [2x_1 + x_2 + 2; x_1 + 2x_2]

Setting J(x) = 0 gives the linear system

2x_1 + x_2 = −2
x_1 + 2x_2 = 0

which reduces to x_1 = −4/3, x_2 = 2/3.
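Setting the Jacobian to zero gives a plain 2×2 linear system, solvable by Cramer's rule; a Python sketch:

```python
# J(x) = 0 for f = x1^2 + x2^2 + x1*x2 + 2*x1 gives the linear system
#   2*x1 + x2 = -2
#   x1 + 2*x2 = 0
a11, a12, b1 = 2.0, 1.0, -2.0
a21, a22, b2 = 1.0, 2.0, 0.0

det = a11 * a22 - a12 * a21          # 3
x1 = (b1 * a22 - a12 * b2) / det     # Cramer's rule
x2 = (a11 * b2 - b1 * a21) / det
print(x1, x2)   # -1.333..., 0.666...  i.e. x1 = -4/3, x2 = 2/3
```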
14. Complicated Jacobian: non-linear system of equations
Monday February 8, 2010
14.1. Example 1. Optimum of f(x) = 4x_1 + 2x_2 − x_1^4 + 2x_1x_2 + x_2^2:

J(x) = [4 − 4x_1^3 + 2x_2; 2 + 2x_1 + 2x_2] → a system of non-linear equations

We need to find x so that J(x) = 0. If x_i is our current estimate for the root, the Taylor series expansion of the Jacobian at x_i is:

J(x_{i+1}) ≈ J(x_i) + H(x_i)·(x_{i+1} − x_i)    (H = Hessian)

We want our next point x_{i+1} to be the root of the Jacobian, J(x_{i+1}) = 0:

0 ≈ J(x_i) + H(x_i)·(x_{i+1} − x_i)
−J(x_i) = H(x_i)·(x_{i+1} − x_i)
x_{i+1} = x_i − H^(−1)(x_i)·J(x_i)    (Newton-Raphson update term)

H(x) = (d/dx)J(x) = [∂J(x)/∂x_1  ∂J(x)/∂x_2] =
[−12x_1^2  2]
[2         2]

Let:

x_0 = [1; 0]
J(x_0) = [4 − 4 + 0; 2 + 2 + 0] = [0; 4]   (note that at x_0, J(x) ≠ 0)
H(x_0) = [−12 2; 2 2]

x_1 = x_0 − H^(−1)(x_0)·J(x_0) = [1; 0] − [−12 2; 2 2]^(−1)·[0; 4] = [0.714; −1.714]
x_1 = [0.714; −1.714],     J(x_1) = [−0.886; 0]
x_2 = [0.605; −1.605],     J(x_2) = [−0.097; 0]
x_3 = [0.59; −1.59],       J(x_3) = [−0.0017; 0]
x_4 = [0.5898; −1.5898],   J(x_4) = [0; 0]

The optimized value of x is therefore [0.5898; −1.5898], because the Jacobian (derivative) there is J(x) = 0.
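The whole iteration can be sketched in Python. The objective below is the form whose gradient matches the Jacobian in this example (an assumption recovered from the derivatives):

```python
def J(x1, x2):
    # Gradient of the (assumed) objective f = 4x1 + 2x2 - x1^4 + 2x1x2 + x2^2
    return (4 - 4 * x1**3 + 2 * x2, 2 + 2 * x1 + 2 * x2)

def newton_step(x1, x2):
    j1, j2 = J(x1, x2)
    h11, h12, h21, h22 = -12 * x1**2, 2.0, 2.0, 2.0   # Hessian entries
    det = h11 * h22 - h12 * h21
    # x <- x - H^{-1} J, with the 2x2 inverse written out explicitly
    dx1 = (h22 * j1 - h12 * j2) / det
    dx2 = (-h21 * j1 + h11 * j2) / det
    return x1 - dx1, x2 - dx2

x1, x2 = 1.0, 0.0
for _ in range(10):
    x1, x2 = newton_step(x1, x2)
print(x1, x2)   # ~0.5898, -1.5898
```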
14.2. Solutions to Linear Algebraic Equations. In the general case we have m equations and n unknowns:

[a_11 ... a_1n] [x_1]   [b_1]
[a_21 ...     ] [x_2] = [b_2]
[ ...         ] [...]   [...]
[a_m1 ... a_mn] [x_n]   [b_m]

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

The system is characterized by the relationship between the m and n values:

m < n: underdetermined system
m = n: solution at a point (a square matrix)
m > n: overdetermined system; a (parametric) "best fit" solution
Example:

3x_1 + 2x_2 = 18
−x_1 + 2x_2 = 2

[ 3  2 | 18]     [0  8 | 24]     [0  1 | 3]     [1  0 | 4]
[−1  2 |  2]  ~  [1 −2 | −2]  ~  [1  0 | 4]  ~  [0  1 | 3]
14.3. Terms to know.
• main diagonal (a_11, a_22, a_33)
• upper triangular matrix (see below)

[a_11  a_12  a_13]
[0     a_22  a_23]
[0     0     a_33]

• matrix inversion
15. Linear Equations: Things that can go wrong

Wednesday February 10, 2010

Midterm: March 1st 6:30-8

15.1. Singular Matrices.

[ 1 1 ]   [ x1 ]   [ 1 ]
[ 2 2 ] · [ x2 ] = [ 2 ]
   A

The system is linearly dependent.
A is a singular matrix: det(A) = 0
Coincident lines: no unique solution (infinitely many points satisfy both equations)

[ 1 1 ]   [ x1 ]   [ 1 ]
[ 2 2 ] · [ x2 ] = [ 3 ]
   A′

det(A′) = 0
Parallel lines: no solutions to the equations

15.2. Near Singular Cases.

|ε| ≪ 1:   [ 1  1 ; 1+ε  1 ] [ x1 ; x2 ] = [ 1 ; 2 ]

det(A) = 1×1 − (1+ε)×1 = −ε

[ x1 ; x2 ] = A^−1 [ 1 ; 2 ] = (1/−ε) [ 1  −1 ; −(1+ε)  1 ] [ 1 ; 2 ]
            = (−1/ε) [ 1−2 ; −(1+ε)+2 ] = (1/ε) [ 1 ; ε−1 ] ≈ [ 1/ε ; −1/ε ]     (since ε−1 ≈ −1)

The solution is very sensitive to ε as ε → 0.
Round-off errors are critical.
This is an ill-conditioned system.
The system will hypothetically have a solution in extreme cases.
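The sensitivity above is easy to see numerically. A small sketch (my own illustration, not from the lectures), solving the near-singular 2×2 system by Cramer's rule for shrinking ε:

```python
# Sensitivity of the near-singular system [[1, 1], [1 + eps, 1]] x = [1, 2]:
# the exact solution is [1/eps, -(1 - eps)/eps], which blows up as eps -> 0.

def solve_2x2(a, b, c, d, r1, r2):
    """Solve [[a, b], [c, d]] [x1, x2] = [r1, r2] by Cramer's rule."""
    det = a * d - b * c
    return (r1 * d - r2 * b) / det, (a * r2 - c * r1) / det

for eps in (1e-2, 1e-6, 1e-10):
    x1, x2 = solve_2x2(1.0, 1.0, 1.0 + eps, 1.0, 1.0, 2.0)
    print(eps, x1, x2)   # x1 ~ 1/eps, x2 ~ -1/eps: tiny eps, huge solution
```

A tiny perturbation of one matrix entry changes the answer by orders of magnitude, which is exactly why round-off is critical here.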
15.3. Naïve Gaussian Elimination Algorithm. Square matrix:
n variables, n equations.
This should be review of linear algebra.

[ a11 ... a1n ]   [ x1 ]   [ b1 ]
[ ...     ... ] · [ ...] = [ ...]
[ an1 ... ann ]   [ xn ]   [ bn ]

Goal is: simple operations on A:

• Multiply (or divide) rows
• Add or subtract rows
• Switch rows

to create an upper triangular matrix, then use back substitution to find [x].

1) Create a combined (augmented) matrix:

[ a11 ... a1n | b1 ]
[ ...     ... | ...]
[ an1 ... ann | bn ]

For row m = 2, 3, 4 ... n:
row m′ = row m − (am1/a11) × row 1
a11 = pivot element in step 1

2)

[ a11 a12 ... a1n | b1 ]
[ 0   a22 ... ... | ...]
[ 0   an2 ... ann | bn ]

If the pivot element = 0 or small, we switch rows.
For m = 3 ... n: row m′ = row m − (am2/a22) × row 2

Continue until we have a triangular matrix:

[ a11 a12 ...     | b1 ]
[ 0   a22 ...     | ...]
[ 0   0   a33 ... | ...]
[ 0   0   0   ann | bn ]

Then back substitution isolates the unknowns, starting from the last row and working upward.

Naïve problems: if pivot elements are small or zero, we have to exchange rows.
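The algorithm above can be sketched in plain Python (a minimal illustration, not course code), including the row exchange for small pivots:

```python
# Gaussian elimination with partial pivoting (row switching when the pivot
# is zero or small, as the notes suggest), followed by back substitution.

def gauss_solve(A, b):
    """Solve A x = b for a square system. A and b are lists; both are copied."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for k in range(n):
        # pivot: switch in the row with the largest entry in column k
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for m in range(k + 1, n):
            factor = M[m][k] / M[k][k]
            for j in range(k, n + 1):
                M[m][j] -= factor * M[k][j]   # row m' = row m - factor * row k
    # back substitution, last row upward
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(gauss_solve([[3.0, 2.0], [-1.0, 2.0]], [18.0, 2.0]))  # the earlier example: ~ [4, 3]
```

Run on the 2×2 example from section 14.2, it recovers x1 = 4, x2 = 3.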
16. Systems of Linear Equations

Monday February 22, 2010

A[x] = B

(1) Gaussian Elimination: reduce to an upper triangular matrix and back substitute
(2) LU: decompose matrix A only

16.1. Daniel's LU method (from linear algebra).

    [ 1 2 3 ]     [ 1 0 0 ]
A = [ 1 0 1 ]     [ 0 1 0 ]  ← identity matrix
    [ 2 1 0 ]     [ 0 0 1 ]

r2 − r1, r3 − 2r1:

     [ 1  2  3 ]     [ 1 0 0 ]
A′ = [ 0 −2 −2 ]     [ 1 1 0 ]
     [ 0 −3 −6 ]     [ 2 0 1 ]

r3 − (3/2) r2:

     [ 1  2  3 ]     [ 1  0   0 ]
A″ = [ 0 −2 −2 ]     [ 1  1   0 ]
     [ 0  0 −3 ]     [ 2  3/2 1 ]

Therefore:

U = [ 1 2 3 ; 0 −2 −2 ; 0 0 −3 ]
L = [ 1 0 0 ; 1 1 0 ; 2 3/2 1 ]

Test: A = LU

A = [ 1 0 0 ; 1 1 0 ; 2 3/2 1 ] · [ 1 2 3 ; 0 −2 −2 ; 0 0 −3 ]

Matrix multiplication proves us true.
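The same elimination, keeping the multipliers in L, can be sketched as a short Doolittle-style routine (my own illustration; it assumes no pivoting is needed, as in this example):

```python
# LU decomposition without pivoting, as in the notes' 3x3 example:
# L keeps the row-elimination factors, U is the resulting upper triangle.

def lu_decompose(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for m in range(k + 1, n):
            factor = U[m][k] / U[k][k]
            L[m][k] = factor                    # store the multiplier in L
            for j in range(k, n):
                U[m][j] -= factor * U[k][j]     # eliminate below the pivot
    return L, U

A = [[1.0, 2.0, 3.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]]
L, U = lu_decompose(A)
print(L)  # [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.5, 1.0]]
print(U)  # [[1.0, 2.0, 3.0], [0.0, -2.0, -2.0], [0.0, 0.0, -3.0]]
```

This reproduces exactly the L and U worked out by hand above.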
16.2. Eigen Values and Eigen Vectors.

From [A]:  λ = eigenvalues, V = eigenvectors

Relationship: [A]V = λV

([A] − λ[I])V = 0 will happen if:

V = 0 (trivial answer)

or:

det(A − λI) = 0

det | a11−λ  ...    a1n  |
    | a21   a22−λ   ...  | = 0
    | an1    ...   ann−λ |

This gives a polynomial of order n in terms of λ.

16.3. Eigen Example.

A = [ 1 2 ; 3 4 ]

| 1−λ   2  |
| 3    4−λ | = (1−λ)(4−λ) − 6 = 0

0 = λ^2 − 5λ + 4 − 6

λ = (5 ± √33)/2 = { −0.37, 5.37 }

(A − λ1·I)V1 = 0

[ 1−(−0.37)     2      ] [ a ]   [ 0 ]
[ 3         4−(−0.37)  ] [ b ] = [ 0 ]

[ 1.37  2    | 0 ]
[ 3     4.37 | 0 ]     {Let a = 1}  b = −1.37/2

V1 = [ 1 ; −1.37/2 ]

For the typical normalization we use the unit vector:

V1 = (1/√(a^2 + b^2)) [ a ; b ] = [ 0.825 ; −0.566 ]

For λ2 = 5.37:

[ −4.37  2    | 0 ]
[ 3     −1.37 | 0 ]   →   V2 = [ 0.416 ; 0.909 ]
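The 2×2 case can be computed directly from the characteristic polynomial. A sketch (my own illustration; it assumes real eigenvalues and b ≠ 0):

```python
# Eigenvalues and normalized eigenvectors of a 2x2 matrix via the
# characteristic polynomial lambda^2 - trace*lambda + det = 0, as in the notes.
import math

def eig_2x2(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)          # assumes real eigenvalues
    lams = [(tr - disc) / 2, (tr + disc) / 2]
    vecs = []
    for lam in lams:
        # From row 1: (a - lam) x + b y = 0; pick x = 1 (assumes b != 0)
        x, y = 1.0, (lam - a) / b
        norm = math.hypot(x, y)
        vecs.append((x / norm, y / norm))        # unit-vector normalization
    return lams, vecs

lams, vecs = eig_2x2(1.0, 2.0, 3.0, 4.0)
print([round(l, 2) for l in lams])      # [-0.37, 5.37]
print([tuple(round(v, 3) for v in vec) for vec in vecs])
```

For A = [1 2; 3 4] this gives the same λ = −0.37, 5.37 and unit eigenvectors (0.825, −0.566) and (0.416, 0.909) as the hand calculation.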
17. Curve Fitting

Wednesday February 24, 2010

Curve fitting is finding the "Best Fit Line" for noisy data, while polynomial interpolation fits all points.

17.1. Linear Regression. We begin with lots of measurements:

x    y
x1   y1
x2   y2
...  ...
xn   yn

We know physical properties prior to analysis: a linear relationship between x & y:
y = p1 + p2·x
The objective is to find p1 & p2 that best represent the data (try to minimize the total squared error V = Σ Ei^2).

If data is error free:

y1 = p1 + p2·x1
y2 = p1 + p2·x2
...
yn = p1 + p2·xn

[ y1 ]   [ 1 x1 ]
[ y2 ] = [ 1 x2 ] [ p1 ]     ← overdetermined system of equations
[ ...]   [ ...  ] [ p2 ]
[ yn ]   [ 1 xn ]

M = A × P

17.2. Pseudo Inverse. We need to solve for P:

M = AP
A^T M = A^T A P     (A^T A is a square matrix)
if (A^T A)^−1 exists:
(A^T A)^−1 A^T M = P
P = (A^T A)^−1 A^T M,   where (A^T A)^−1 A^T is the pseudo-inverse of A
17.3. Linear regression example. Noisy data (true model y = x + 1):

x   y
0   1
1   2.1
2   2.9
3   3.8

Find P1, P2 if we know y = P1 + P2·x.

[ 1   ]   [ 1 0 ]
[ 2.1 ] = [ 1 1 ] [ P1 ]
[ 2.9 ]   [ 1 2 ] [ P2 ]
[ 3.8 ]   [ 1 3 ]
   M         A      P

[ 1 1 1 1 ] [ 1   ]   [ 1 1 1 1 ] [ 1 0 ]
[ 0 1 2 3 ] [ 2.1 ] = [ 0 1 2 3 ] [ 1 1 ] [ P1 ]
            [ 2.9 ]               [ 1 2 ] [ P2 ]
            [ 3.8 ]               [ 1 3 ]

[ 9.8  ]   [ 4  6  ] [ P1 ]
[ 19.3 ] = [ 6  14 ] [ P2 ]     (A^T A)

Inverting A^T A (Daniel's Method):

[ 4  6 | 1 0 ]   [ 2 3 | 1/2  0 ]   [ 2 0 | 7/5   −3/5 ]
[ 6 14 | 0 1 ] ∼ [ 0 5 | −3/2 1 ] ∼ [ 0 1 | −3/10  1/5 ]

(A^T A)^−1 = [ 7/10  −3/10 ; −3/10  1/5 ]

Therefore:

[ 7/10  −3/10 ] [ 9.8  ]   [ P1 ]   [ 1.07 ]
[ −3/10  1/5  ] [ 19.3 ] = [ P2 ] = [ 0.92 ]

y = 1.07 + 0.92x
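For this two-parameter case the normal equations have a closed form, which can be sketched as (my own illustration, not course code):

```python
# Least-squares line fit via the normal equations P = (A^T A)^(-1) A^T M,
# reproducing the notes' example with data (0,1), (1,2.1), (2,2.9), (3,3.8).

def fit_line(xs, ys):
    """Return (intercept P1, slope P2) minimizing the total squared error."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx                     # det(A^T A)
    p1 = (sxx * sy - sx * sxy) / det            # intercept
    p2 = (n * sxy - sx * sy) / det              # slope
    return p1, p2

p1, p2 = fit_line([0, 1, 2, 3], [1, 2.1, 2.9, 3.8])
print(round(p1, 2), round(p2, 2))   # 1.07 0.92
```

The result matches the hand calculation: y = 1.07 + 0.92x, close to the noise-free model y = 1 + x.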
17.4. Consider N equally spaced points.

y = P1 + P2(x), or: 1·P1 + x·P2

[ y1 ]   [ 1 1 ]
[ y2 ] = [ 1 2 ] [ P1 ]
[ ...]   [ ... ] [ P2 ]
[ yN ]   [ 1 N ]
         f1(x) f2(x)

P = (A^T A)^−1 A^T M

A^T A = [ N    Σn   ]       A^T M = [ Σ yn   ]
        [ Σn   Σn^2 ]               [ Σ n·yn ]

P = [ N   Σn   ]^−1 [ Σ yn   ]
    [ Σn  Σn^2 ]    [ Σ n·yn ]

(all sums taken over n = 1 ... N)
18. General Least Squares Formulation

Friday February 26, 2010

Extend the methodology to non-linear models.

18.1. Model.

y = p1 f1(x) + p2 f2(x) + ... + pn fn(x)

[ y1 ]   [ f1(x1) ... fn(x1) ] [ P1 ]
[ y2 ] = [ f1(x2) ... fn(x2) ] [ P2 ]
[ ...]   [  ...        ...   ] [ ...]
[ yN ]   [ f1(xN) ... fn(xN) ] [ Pn ]
  M          Model Matrix A      P
(Measurement Vector M, Parameter Vector P)

M = AP. Once again we use the least squares solution to solve for P:

P = (A^T A)^−1 A^T M,   where (A^T A)^−1 A^T is the pseudo-inverse matrix of A

18.2. Special Case: Polynomial Model. Of degree Q:

y = P0 + P1 x + P2 x^2 + ... + PQ x^Q

    [ 1  x1  x1^2  ...  x1^Q ]
A = [ 1  x2  x2^2  ...  x2^Q ]
    [ ...             ...    ]
    [ 1  xN  xN^2  ...  xN^Q ]

Example: y = P1 sin(x). Goal: find the amplitude of the function (P).

M = [ y1 ; ... ; yN ]     A = [ sin(x1) ; ... ; sin(xN) ]     P = [ P1 ]

P = (A^T A)^−1 A^T M
  = ( [sin(x1) ... sin(xN)] [ sin(x1) ; ... ; sin(xN) ] )^−1 [sin(x1) ... sin(xN)] [ y1 ; ... ; yN ]
  = [ Σ_{i=1}^{N} sin^2(xi) ]^−1 [ Σ_{i=1}^{N} sin(xi)·yi ]
18.3. Derivation of Pseudo-Inverse.

P = (A^T A)^−1 A^T M

We claim that this is the least squares approximation. Showing it for the one-parameter model P1·x:
The error associated with any point n is:

En = P1·xn − yn     (model − measurement)

J = Σ_{n=1}^{N} En^2 = Σ_{n=1}^{N} (P1·xn − yn)^2

18.4. Objective: to find P that minimizes the total squared error.

∂J/∂P1 = 0 = 2 Σ_{n=1}^{N} (P1·xn − yn)·xn

Σ_{n=1}^{N} P1·xn^2 = Σ_{n=1}^{N} yn·xn

P1 = ( Σ_{n=1}^{N} xn^2 )^−1 Σ_{n=1}^{N} xn·yn
   = ( [x1 ... xN] [ x1 ; ... ; xN ] )^−1 [x1 ... xN] [ y1 ; ... ; yN ]

Therefore: P = (A^T A)^−1 A^T M

18.5. Midterm contents. Covers notes, practice and problems.
5 short answer questions
10 multiple choice - choose the obvious

• Taylor Series
• Root finding (optimizing)
  – Single variable
  – Multi variable
• Golden ratio
• Parabolic curve fit
• Multi variable Jacobian and Hessian
Part 4. March Notes

19. Midterm Review Session

Monday March 1, 2010

19.1. Definitions. Accuracy = closeness to the true value
Precision = closeness to other results

19.2. Taylor Series.

f(x) = f(x0) + f′(x0)(x − x0) + f″(x0)(x − x0)^2/2! + ...

Example: write f(x) = e^x + sin(x) as a third order polynomial about x0 = 1

f′(x) = e^x + cos(x)
f″(x) = e^x − sin(x)
f‴(x) = e^x − cos(x)

f(x) ≈ (e + sin(1)) + (e + cos(1))(x − 1) + (1/2)(e − sin(1))(x − 1)^2 + (1/6)(e − cos(1))(x − 1)^3
        (0th order)    (1st order)          (2nd order)                  (3rd order)

Good enough for the test: don't simplify.
Truncation error (the first neglected term dominates):

R3 ≈ | f⁗(1)(x − 1)^4 / 4! | = | (e + sin(1))(x − 1)^4 / 24 |

We don't care about the sign!
19.3. Numerical Derivatives, 1st order.

Forward:             f′(xi) ≈ (f(xi+1) − f(xi)) / (xi+1 − xi)
Reverse (backward):  f′(xi) ≈ (f(xi) − f(xi−1)) / (xi − xi−1)
Central difference:  f′(xi) ≈ (f(xi+1) − f(xi−1)) / (xi+1 − xi−1)

Central difference is more accurate: error E ∝ h^2
Use simple answers since some questions need simple responses
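The accuracy difference is easy to demonstrate. A sketch (my own test case, f(x) = sin(x) at x = 1, where the exact derivative is cos(1)):

```python
# Forward vs. central difference: central error shrinks like h^2, forward only like h.
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

true = math.cos(1.0)
for h in (0.1, 0.01):
    err_fwd = abs(forward_diff(math.sin, 1.0, h) - true)
    err_cen = abs(central_diff(math.sin, 1.0, h) - true)
    print(h, err_fwd, err_cen)   # central error is much smaller at the same h
```

Shrinking h by 10 cuts the forward error by about 10 but the central error by about 100, matching E ∝ h and E ∝ h².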
19.4. Root Finding.

• Bracketing Methods
  – Bisection
  – False Position
• Open Methods
  – Newton-Raphson
  – Secant
• Muller

1-2 iterations can do (on the exam).
Bracketing: the initial bracket is [xa, xb].
Needs an odd number of roots inside: f(xa) · f(xb) < 0

NR:  xi+1 = xi − f(xi)/f′(xi)

Secant (NR is slow, or unusable, if the derivative of the function is unknown):

xi+1 = xi − f(xi)(xi−1 − xi) / (f(xi−1) − f(xi))

Muller: can handle strong curvature and complex roots. Needs 3 initial points.
Goal is to find A, B, and C:

f(x) = Ax^2 + Bx + C     for finding roots
f′(x) = 2Ax + B          for finding the optimum

[ y1 ]   [ Ax1^2 + Bx1 + C ]
[ y2 ] = [ Ax2^2 + Bx2 + C ]
[ y3 ]   [ Ax3^2 + Bx3 + C ]

The new point xnew is where the function = 0. The next iteration will utilize this new point, as well as the two closest points.
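The secant update above can be sketched as a short routine (my own illustration; the test function x² − 2 and starting points are assumptions, not from the notes):

```python
# Secant method: x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i)),
# finding the positive root of f(x) = x^2 - 2 from the starting points 1 and 2.

def secant(f, x_prev, x_curr, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        f_prev, f_curr = f(x_prev), f(x_curr)
        x_next = x_curr - f_curr * (x_prev - x_curr) / (f_prev - f_curr)
        if abs(x_next - x_curr) < tol:
            return x_next
        x_prev, x_curr = x_curr, x_next       # keep the two most recent points
    return x_curr

root = secant(lambda x: x * x - 2, 1.0, 2.0)
print(root)   # ~ 1.41421356 (sqrt(2))
```

No derivative is needed: the two most recent points supply the slope estimate that Newton-Raphson would get from f′.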
19.5. Single Variable Optimization. The optimum is where the derivative of the function f′(x) = 0.

Golden Search:

if f(x1) < f(x2):   keep [x2, xU] for a minimum,   [xL, x1] for a maximum
if f(x1) > f(x2):   keep [xL, x1] for a minimum,   [x2, xU] for a maximum

d can be chosen in [0.5 → 1] of the interval, so long as the intervals overlap.

Golden Ratio = (√5 − 1)/2 ≈ 0.62 × Total

Parabolic (Muller): covered already.

19.6. Multi-Variable Optimization.

xi+1 = xi − H^−1(xi) J(xi)
20. Interpolation

Wednesday March 3, 2010

20.1. Definition. A smooth curve that fits all given data points.

20.2. Polynomial Interpolation. The most common type (curve fitting).
Nth order polynomial: N+1 data points have only 1 Nth order polynomial that passes through all points.
The coefficients of the polynomial are what we need to solve for.
Example: 4 data points are used, so we get a cubic function with 4 unknowns A, B, C, D:

Ax^3 + Bx^2 + Cx + D = y

[ x1^3  x1^2  x1  1 ] [ A ]   [ y1 ]
[ x2^3  x2^2  x2  1 ] [ B ] = [ y2 ]
[ x3^3  x3^2  x3  1 ] [ C ]   [ y3 ]
[ x4^3  x4^2  x4  1 ] [ D ]   [ y4 ]

We solve the system of linear equations to get A, B, C, D.
Similar to linear regression, we have a polynomial function.
Number of points = number of coefficients.
Shortcomings: as N increases, there is potential for round-off error in the model, since x^N becomes either very small or very big. This leads to problems.
20.3. Lagrange Interpolation. Used to avoid numerical problems in determining the coefficients of the interpolating polynomial. Still finds the N order function that passes through N+1 data points:

fN(x) = Σ_{n=0}^{N} yn · Ln(x)

Where: Ln(x) = Π_{i=0, i≠n}^{N} (x − xi)/(xn − xi)

e.g.: L1(x) = Π_{i=0, i≠1}^{N} (x − xi)/(x1 − xi)
            = (x − x0)/(x1 − x0) · (x − x2)/(x1 − x2) · (x − x3)/(x1 − x3) · ... · (x − xN)/(x1 − xN)

Note that for the example there is no (x − x1)/(x1 − x1) term, since its denominator is zero.

Example: If we have two points (x0, y0), (x1, y1): N+1 = 2, N = 1

f1(x) = Σ_{n=0}^{1} yn Ln(x) = y0 L0(x) + y1 L1(x)
      = y0 (x − x1)/(x0 − x1) + y1 (x − x0)/(x1 − x0)

If for example the data is a linear relationship:

x0 = 1, y0 = 3, x1 = 2, y1 = 6

f1(x) = 3 (x − 2)/(1 − 2) + 6 (x − 1)/(2 − 1) = −3(x − 2) + 6(x − 1) = 3x, as expected
For more points we just need to expand further.

Example: three points:
x | 1 2 3
y | 1 4 9

f2(x) = y0 L0 + y1 L1 + y2 L2

L0 = (x − x1)(x − x2) / ((x0 − x1)(x0 − x2)) = (x − 2)(x − 3) / ((1 − 2)(1 − 3))

L1 = (x − x0)(x − x2) / ((x1 − x0)(x1 − x2)) = (x − 1)(x − 3) / ((2 − 1)(2 − 3))

f2(x) = 1·(x − 2)(x − 3)/((1 − 2)(1 − 3)) + 4·(x − 1)(x − 3)/((2 − 1)(2 − 3)) + 9·(x − 1)(x − 2)/((3 − 1)(3 − 2))
      = x^2

But for example if:
x | 1 2 3
y | 1 8 27

f2(x) = 1·(x − 2)(x − 3)/((1 − 2)(1 − 3)) + 8·(x − 1)(x − 3)/((2 − 1)(2 − 3)) + 27·(x − 1)(x − 2)/((3 − 1)(3 − 2))
      = 6x^2 − 11x + 6

Therefore we see that the Lagrangian method does not determine the coefficients directly, so round-off errors in the coefficients do not affect its sensitivity.
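The two examples above can be evaluated directly, without ever forming the coefficients (a minimal sketch, my own code):

```python
# Lagrange interpolation: evaluate f_N(x) = sum_n y_n * L_n(x) directly.

def lagrange_eval(xs, ys, x):
    total = 0.0
    for n, (xn, yn) in enumerate(zip(xs, ys)):
        # L_n(x) = product over i != n of (x - x_i)/(x_n - x_i)
        L = 1.0
        for i, xi in enumerate(xs):
            if i != n:
                L *= (x - xi) / (xn - xi)
        total += yn * L
    return total

# Three points on y = x^2: the interpolant reproduces it exactly
print(lagrange_eval([1, 2, 3], [1, 4, 9], 2.5))    # 6.25
# Three points on y = x^3: the quadratic 6x^2 - 11x + 6 through the same x values
print(lagrange_eval([1, 2, 3], [1, 8, 27], 2.5))   # 16.0
```

At x = 2.5 the second call gives 6(2.5)² − 11(2.5) + 6 = 16, matching the polynomial derived by hand.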
21. Spline Interpolation

Friday March 5, 2010

Previously we have seen polynomial interpolation (Lagrangian and normal), where an nth order polynomial fits n+1 points. The problem is that abrupt changes in data lead to oscillatory behaviour of higher order polynomials.

21.1. Definition. Piecewise spline interpolation uses lower order polynomials for subsets of the data points. We use a collection of lower order polynomials to estimate, instead of one higher order polynomial that passes through all points.

21.2. Linear Spline. (1st order)
Each point is connected with a straight line to the next point:

f(x) = { f1(x) = y1 + m1(x − x1)   x1 ≤ x ≤ x2
       { f2(x) = y2 + m2(x − x2)   x2 ≤ x ≤ x3
       { ...
       { fn(x) = yn + mn(x − xn)   xn ≤ x ≤ xn+1

Where: mi = (yi+1 − yi)/(xi+1 − xi)

Closer data points create smaller error.
The problem is the discontinuity of the 1st derivative at the knots - f(x) is not smooth.

21.3. Quadratic Spline. (2nd order)

f(x) = { f1(x) = a1x^2 + b1x + c1   x1 ≤ x ≤ x2
       { f2(x) = a2x^2 + b2x + c2   x2 ≤ x ≤ x3
       { ...
       { fn(x) = anx^2 + bnx + cn   xn ≤ x ≤ xn+1

Each quadratic spline segment fi(x) contains 3 unknowns: ai, bi, ci.
n+1 data points means that there are N segments and 3N unknowns; therefore we require 3N equations.
22. Spline Conditions

Monday March 8, 2010

For a quadratic spline that looks like:

f(x) = { f1(x) = a1x^2 + b1x + c1   x1 ≤ x ≤ x2
       { f2(x) = a2x^2 + b2x + c2   x2 ≤ x ≤ x3
       { ...
       { fn(x) = anx^2 + bnx + cn   xn ≤ x ≤ xn+1

22.1. Condition 1. The polynomial in each interval must pass through its 2 endpoints. Ex: f1(x). Or in general:

yi   = ai·xi^2   + bi·xi   + ci     (start)
yi+1 = ai·xi+1^2 + bi·xi+1 + ci     (end)

There are a total of N intervals and 2 equations per interval, for a total of 2N equations.

22.2. Condition 2. The 1st derivatives at the interior knots must be equal (continuous slope as the curve switches from one interval to the next).
Example:

f1′(x2) = f2′(x2)
2x2·a1 + b1 = 2x2·a2 + b2

Or in general:

2ai−1·xi + bi−1 = 2ai·xi + bi     for 2 ≤ i ≤ N

There are N−1 knots in the interior = N−1 equations.
From these imposed conditions we have 3N−1 equations.

22.3. Condition 3. We need one more equation to solve the system. We arbitrarily set the second derivative to zero at the 1st point x1.
1st interval: f1″(x1) = 2a1 = 0, so a1 = 0:

f1(x) = 0·x^2 + b1x + c1 = b1x + c1

There is a straight line between x1 → x2.

Now we have to solve for the 3N polynomial coefficients.
Either set up the full system of equations or use a recursive solution:

1st interval f1(x):
fits (x1, y1), (x2, y2)
a1 = 0 (from condition 3)
y1 = b1x1 + c1
y2 = b1x2 + c1
solve for b1, c1

2nd interval f2(x):
fits (x2, y2), (x3, y3):
y2 = a2x2^2 + b2x2 + c2
y3 = a2x3^2 + b2x3 + c2
and the slope condition:
f1′(x2) = f2′(x2)
2x2·a1 + b1 = 2x2·a2 + b2
Both a1 and b1 are known: 3 equations and 3 unknowns, solve for a2, b2, c2.

Repeat N times.
22.4. Quadratic spline example. Interpolate f(x) = e^x between 0 → 2, with knots at x = 0, 1, 2:

f1(x) = a1x^2 + b1x + c1     (0 ≤ x ≤ 1)
f2(x) = a2x^2 + b2x + c2     (1 ≤ x ≤ 2)

Condition 1: f(x) must pass through the points:

f1(x): { 1 = a1(0^2) + b1(0) + c1
       { e = a1(1^2) + b1(1) + c1

f2(x): { e   = a2(1^2) + b2(1) + c2
       { e^2 = a2(2^2) + b2(2) + c2

Condition 2: the derivatives at the internal knot must be equal:

f1′(1) = f2′(1):   2a1(1) + b1 = 2a2(1) + b2

Condition 3: arbitrarily set the second derivative at x1 to 0, therefore a1 = 0.

Setting up a system of equations (columns a1, b1, c1, a2, b2, c2):

[ 0 0 1  0  0  0 ] [ a1 ]   [ 1   ]
[ 1 1 1  0  0  0 ] [ b1 ]   [ e   ]
[ 0 0 0  1  1  1 ] [ c1 ] = [ e   ]
[ 0 0 0  4  2  1 ] [ a2 ]   [ e^2 ]
[ 2 1 0 −2 −1  0 ] [ b2 ]   [ 0   ]
[ 1 0 0  0  0  0 ] [ c2 ]   [ 0   ]

The first four equations are from condition 1;
The last two are from conditions 2 and 3 respectively.
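The 6×6 system above can be solved numerically to check the spline. A sketch (my own code; the elimination routine is the same idea as in section 15.3):

```python
# Solving the quadratic-spline system for f(x) = e^x with knots 0, 1, 2.
# Unknowns ordered [a1, b1, c1, a2, b2, c2]; rows are the six spline conditions.
import math

def gauss_solve(A, b):
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        for m in range(k + 1, n):
            fac = M[m][k] / M[k][k]
            for j in range(k, n + 1):
                M[m][j] -= fac * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

e = math.e
A = [[0, 0, 1, 0, 0, 0],      # f1(0) = 1
     [1, 1, 1, 0, 0, 0],      # f1(1) = e
     [0, 0, 0, 1, 1, 1],      # f2(1) = e
     [0, 0, 0, 4, 2, 1],      # f2(2) = e^2
     [2, 1, 0, -2, -1, 0],    # f1'(1) = f2'(1)
     [1, 0, 0, 0, 0, 0]]      # a1 = 0 (condition 3)
a1, b1, c1, a2, b2, c2 = gauss_solve(A, [1, e, e, e * e, 0, 0])
f2 = lambda x: a2 * x * x + b2 * x + c2
print(f2(1.0), f2(2.0))   # ~ e and e^2: the spline hits both knots
```

The solution has a1 = 0 (so the first segment is the straight line through (0, 1) and (1, e)) and a second segment that matches e^x at both of its knots.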
22.5. Cubic Splines.

fi(x) = ai·x^3 + bi·x^2 + ci·x + di

N+1 points, N segments. There are 4N unknowns.

22.6. Condition 1. The polynomial must pass through both of its end points:

yi   = ai·xi^3   + bi·xi^2   + ci·xi   + di     (start)
yi+1 = ai·xi+1^3 + bi·xi+1^2 + ci·xi+1 + di     (end)

This gives us 2N equations.

22.7. Condition 2. The first derivatives at all interior knots must be the same:

fi−1′(xi) = fi′(xi)
3ai−1·xi^2 + 2bi−1·xi + ci−1 = 3ai·xi^2 + 2bi·xi + ci

For 2 ≤ i ≤ N we get N−1 equations.

22.8. Condition 3. Set the second derivatives at all interior points equal to each other:

fi−1″(xi) = fi″(xi)
6ai−1·xi + 2bi−1 = 6ai·xi + 2bi

For 2 ≤ i ≤ N we get N−1 equations.

22.9. Condition 4. We still need 2 more equations.
Option 1: set the second derivative = 0 at the start and end points:

6a1·x1 + 2b1 = 0 = 6aN·xN + 2bN

Option 2: set the second and third derivatives = 0 at the 1st point:

6a1·x1 + 2b1 = 0
6a1 = 0
⇒ a1 = 0, b1 = 0
Therefore f1 = c1·x + d1 for the 1st interval (a straight line)

Now a recursive solution is available.
23. Numerical Integration

Wednesday March 10, 2010

If the original equation is unknown or difficult to integrate, we may have to use data points from the function to solve.
The idea is to approximate the complicated/non-existing function with an easy function fn(x) (an nth order polynomial), then integrate.
List of methods to cover:

• Rectangular
• Trapezoidal
• Simpson's:
  – 1/3 rule
  – 3/8 rule
• Gaussian Quadrature

23.1. Taylor Series Based Approximations. Using the Taylor series at a, we integrate both sides of the equation:

f(x) = f(a) + f′(a)(x − a) + f″(a)(x − a)^2/2 + ...

∫[a,b] f(x)dx = ∫[a,b] f(a)dx + ∫[a,b] f′(a)(x − a)dx + ...
                (0th order Taylor    (1st order Taylor
                 = Rectangle)         = Trapezoidal)
23.2. Rectangular Approximation.

∫[a,b] f(x)dx ≈ ∫[a,b] f(a)dx = f(a)(b − a)

Error ≈ ∫[a,b] f′(a)(x − a)dx = (f′(a)/2)(b − a)^2
23.3. Trapezoidal Approximation.

f(x) ≈ f(a) + f′(a)(x − a) + f″(a)(x − a)^2/2
       (1st order approximation)   (error term)

∫[a,b] f(x)dx ≈ ∫[a,b] f(a)dx + ∫[a,b] f′(a)(x − a)dx
              ≈ f(a)(b − a) + (f′(a)/2)(b − a)^2

Forward difference approximation: f′(a) ≈ (f(b) − f(a))/(b − a), so:

∫[a,b] f(x)dx ≈ (b − a)(f(a) + f(b))/2 = area of a trapezoid

Error:

Error = ∫[a,b] f″(a)(x − a)^2/2 dx
      = (f″(a)/2) · (x − a)^3/3 |[a,b]
      = (1/6) f″(a)(b − a)^3

Error ∝ h^3
23.4. Example. Find ∫[0, π/2] sin(x)dx using both the rectangular and trapezoidal methods.

Rectangular:

f(a) = sin(0) = 0
f(x) ≈ f(a)
∫[0, π/2] f(x)dx ≈ ∫[0, π/2] f(a)dx = 0

Trapezoidal:

∫[a,b] f(x)dx ≈ ∫[a,b] f(a)dx + ∫[a,b] f′(a)(x − a)dx

Using the forward-difference slope f′(a) ≈ (f(b) − f(a))/(b − a) = (1 − 0)/(π/2) = 2/π:

∫[0, π/2] sin(x)dx ≈ 0 + (2/π)·(π/2)^2/2 = π/4

Or directly using the trapezoid formula:

= (b − a)(f(a) + f(b))/2 = (π/2)(0 + 1)/2 = π/4

(The true value is 1, so a single trapezoid underestimates here.)
24. More Numerical Integration

Friday March 12, 2010

24.1. Other Options. Higher order polynomials than rectangular and trapezoidal can be used.
If the data points from f(x) are known yet f(x) itself is unknown but smooth on [a,b], we can use a cubic spline to approximate the function.

First we determine a cubic spline interpolation for the points.
Now each interval xi → xi+1 is approximated by fi(x) = ai·x^3 + bi·x^2 + ci·x + di.
Therefore:

∫[a,b] f(x)dx ≈ ∫[x1,x2] f1(x)dx + ∫[x2,x3] f2(x)dx + ∫[x3,x4] f3(x)dx + ... + ∫[xN,xN+1] fN(x)dx

For each interval:

∫[xi,xi+1] fi(x)dx = ∫[xi,xi+1] (ai·x^3 + bi·x^2 + ci·x + di)dx
= (ai/4)(xi+1^4 − xi^4) + (bi/3)(xi+1^3 − xi^3) + (ci/2)(xi+1^2 − xi^2) + di(xi+1 − xi)
24.2. Simpson's 1/3 Rule. Fits a 2nd order (Lagrange) polynomial to 3 equally spaced points from f(x) to integrate. With x0 = 0, x1 = h, x2 = 2h:

Lagrangian polynomial:

f2(x) = f(x0)L0(x) + f(x1)L1(x) + f(x2)L2(x)

Where: L0(x) = (x − x1)(x − x2)/((x0 − x1)(x0 − x2)) = (x − h)(x − 2h)/((−h)(−2h)) = (x − h)(x − 2h)/(2h^2)

L1(x) = (x − x0)(x − x2)/((x1 − x0)(x1 − x2)) = x(x − 2h)/(h(−h)) = −x(x − 2h)/h^2

L2(x) = (x − x0)(x − x1)/((x2 − x0)(x2 − x1)) = x(x − h)/((2h)h) = x(x − h)/(2h^2)

So all together:

f2(x) = (f(0)/h^2)·(x − h)(x − 2h)/2 − (f(h)/h^2)·x(x − 2h) + (f(2h)/h^2)·x(x − h)/2

∫[0,2h] f(x)dx ≈ ∫[0,2h] f2(x)dx
≈ (f(0)/h^2) ∫[0,2h] (x − h)(x − 2h)/2 dx − (f(h)/h^2) ∫[0,2h] x(x − 2h)dx + (f(2h)/h^2) ∫[0,2h] x(x − h)/2 dx
= (1/3)f(0)h + (4/3)f(h)h + (1/3)f(2h)h
= (h/3)(f(0) + 4f(h) + f(2h))

Hence the "1/3" in the name.

Simpson's 1/3 Rule: ∫[0,2h] f(x)dx ≈ (h/3)(f(0) + 4f(h) + f(2h))

24.3. Splitting Up The Function. Each panel has to contain 3 points. So if there are a total of 5 points:

I1 = (h/3)(f(x0) + 4f(x1) + f(x2))
I2 = (h/3)(f(x2) + 4f(x3) + f(x4))

Overall integral I = I1 + I2
= (h/3)(f(x0) + 4f(x1) + 2f(x2) + 4f(x3) + f(x4))

Note the factors for each point:

• 2 = even interior data points
• 4 = every odd data point
• 1 = endpoints
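The weight pattern 1, 4, 2, 4, ..., 1 above can be sketched as a composite routine (my own code; the sin test integral is an assumption, chosen because its exact value is 1):

```python
# Composite Simpson's 1/3 rule over n uniform intervals (n must be even).
import math

def simpson_13(f, a, b, n):
    """Integrate f over [a, b] with n intervals."""
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3 rule needs an even number of intervals")
    h = (b - a) / n
    total = f(a) + f(b)                       # endpoints: weight 1
    for i in range(1, n):
        weight = 4 if i % 2 == 1 else 2       # odd points 4, even interior points 2
        total += weight * f(a + i * h)
    return h / 3 * total

print(simpson_13(math.sin, 0.0, math.pi / 2, 10))   # ~ 1.0 (exact integral is 1)
```

Because each panel integrates a quadratic exactly (and, by symmetry, cubics too), the rule is exact for polynomials up to degree 3.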
25. Simpson's Rule

Monday March 15, 2010

25.1. Using Simpson's 1/3 Rule for Multiple Panels. Simpson's 1/3 Rule:

I ≈ (h/3)(f(x0) + 4f(x1) + f(x2))     (for 1 panel)

Points must be uniformly and linearly spaced so that h is constant.
Given N+1 uniformly spaced data points, N being an even number:

I ≈ (h/3)(f(x0) + 4f(x1) + 2f(x2) + ... + f(xN))

Factors for each point: endpoints = 1, odd = 4, even = 2

I = (h/3) [ f(x0) + 2 Σ_{i=2,4,6...}^{N−2} f(xi) + 4 Σ_{j=1,3,5...}^{N−1} f(xj) + f(xN) ]
                      (i = even)             (j = odd)

error ≈ h^5·N/180

Test: if f(x) = G (constant) for all values of x from x0 → xN, N = even:

I = (G·h/3) [ 1 + 2 Σ_{i=2,4,6...}^{N−2} 1 + 4 Σ_{j=1,3,5...}^{N−1} 1 + 1 ]
  = (G·h/3) [ 1 + 2(N − 2)/2 + 4(N/2) + 1 ]
  = (G·h/3)(2 + N − 2 + 2N) = (G·h/3)(3N)
  = G × Nh     (height × width)

25.2. Simpson's 1/3 Rule Summary.

• h must be uniform
• N (the number of intervals) must be an even number (total points must be odd)
• x0 is the 1st point
• xN is the last point
• Minimum 3 points

I = (h/3) [ f(x0) + 2 Σ_{i=2,4,6...}^{N−2} f(xi) + 4 Σ_{j=1,3,5...}^{N−1} f(xj) + f(xN) ]
25.3. Simpson's 3/8 Rule. Now every panel contains 4 equally spaced points.
A 3rd order Lagrangian polynomial is used to interpolate the function:

f3(x) = f(0)L0(x) + f(h)L1(x) + f(2h)L2(x) + f(3h)L3(x)

Where: L0(x) = (x − h)(x − 2h)(x − 3h)/((0 − h)(0 − 2h)(0 − 3h)),
       L1(x) = (x − 0)(x − 2h)(x − 3h)/((h − 0)(h − 2h)(h − 3h)), ...

∫[0,3h] f(x)dx ≈ ∫[0,3h] f3(x)dx = (complicated)

I = (3h/8)(f(0) + 3f(h) + 3f(2h) + f(3h))

Given N+1 uniformly spaced points (N = a multiple of 3):

∫[0,Nh] f(x)dx ≈ I = (3h/8) [ f(x0) + f(xN) + 3 Σ_{i=1,2,4,5,7...}^{N−1} f(xi) + 2 Σ_{j=3,6,9...}^{N−3} f(xj) ]
                                               (i ≠ multiple of 3)        (j = multiple of 3)

Simpson's 1/3 and 3/8 rules can be used together to cover all cases, for both even and odd numbers of data points.
Ex: if N = 5 we cannot use just one rule, since N isn't a multiple of 2 or 3.
26. Gaussian Quadrature

Wednesday March 17, 2010

Provides a very simple formula for numerical integration, but requires knowledge of f(x) - not just data points.
Suppose we are integrating f(x) from −1 → 1 using the trapezoidal rule.
We attempt to minimize the total error.
There will be 2 interior points x0, x1 such that the area of the trapezoid going through f(x0), f(x1) equals ∫[−1,1] f(x)dx.

We need to find the values of x0, x1 so that

Area A + Area C (overestimation error) = Area B (underestimation error)

since if A + C = B, there is no overall error.
The base of the trapezoid = 2 (constant). The two sides of the trapezoid can be expressed as f(x0), f(x1).
Area of the trapezoid (i.e. integral of the function):

∫[−1,1] f(x)dx = I = C0·f(x0) + C1·f(x1)

There are 4 unknowns: x0, C0, x1, C1, so 4 equations are needed.
Setting f(x) arbitrarily:

f(x) = 1   →  C0 + C1 = ∫[−1,1] 1 dx = 2              (1)
f(x) = x   →  C0·x0 + C1·x1 = ∫[−1,1] x dx = 0         (2)
f(x) = x^2 →  C0·x0^2 + C1·x1^2 = ∫[−1,1] x^2 dx = 2/3 (3)
f(x) = x^3 →  C0·x0^3 + C1·x1^3 = ∫[−1,1] x^3 dx = 0   (4)

Solving the previous 4 equations we have:

C0 = C1 = 1
x0 = −1/√3
x1 = 1/√3

I = ∫[−1,1] f(x)dx = f(−1/√3) + f(1/√3)

This is exact for any f(x) up to and including 3rd order.
We can now extend the idea to include any arbitrary interval.
The actual function to be integrated can have multiple panels of arbitrary length:

x0 = xi + di
x1 = xi+1 − di

Ratio: di/hi = (1 − 1/√3)/2

di = (hi/2)(1 − 1/√3)
di ≈ 0.21135·hi

Therefore: Ii = ∫[xi, xi+1] f(x)dx ≈ (hi/2)(f(x0) + f(x1))
26.1. Example. Integrate f(x) = x^2 from 0 → 2 using 2 panels of equal length.

Panel 1:
h1 = 1
d1 = 0.21135·h1 = 0.21135
x0 = 0 + d1 = d1
x1 = 1 − d1

Panel 2:
h2 = 1
d2 = 0.21135·h2 = 0.21135 = d1
x0 = 1 + d1
x1 = 2 − d1

I1 = (1/2)(f(x0) + f(x1)) = (1/2)(d1^2 + (1 − d1)^2) = 1/3

I2 = (1/2)(f(x0) + f(x1)) = (1/2)((1 + d1)^2 + (2 − d1)^2) = 7/3

I = I1 + I2 = 8/3 = 2.667

Theoretically:

∫[0,2] x^2 dx = x^3/3 |[0,2] = 8/3 = 2.667

In general, to achieve better accuracy:
Reduce panel size
Use higher order complexity (interpolation) for each panel
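The per-panel formula above can be sketched directly (my own code, reproducing this x² example):

```python
# Two-point Gauss quadrature per panel: points at x_i + d and x_{i+1} - d with
# d = (h/2)(1 - 1/sqrt(3)), weight h/2 each — exact through cubic integrands.
import math

def gauss2_panel(f, a, b):
    h = b - a
    d = h / 2 * (1 - 1 / math.sqrt(3))   # ~ 0.21132 * h
    return h / 2 * (f(a + d) + f(b - d))

def gauss2(f, a, b, panels):
    h = (b - a) / panels
    return sum(gauss2_panel(f, a + k * h, a + (k + 1) * h) for k in range(panels))

print(gauss2(lambda x: x * x, 0.0, 2.0, 2))   # 8/3 ~ 2.6667 (exact for x^2)
```

Two function evaluations per panel recover the exact 8/3, which a two-point trapezoid with endpoint samples cannot do.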
26.2. Summary of Gaussian 3 Point Quadrature. Look on Yani's cheap plastic handout:

∫[−1,1] f(x)dx = C0·f(−d) + C1·f(0) + C0·f(d)

Use the following test functions to solve for C0, C1, d:

f(x) = 1
f(x) = x
f(x) = x^2
f(x) = x^3
f(x) = x^4
f(x) = x^5

27. Cheap Plastic Yani Notes

3 Point Gaussian Quadrature
Yani's cheap, plastic handout on 3-point Gauss Quadrature

Starting point: there are 3 points, −d, 0, and d, such that:

∫[−1,1] f(x)dx = c0·f(−d) + c1·f(0) + c0·f(d)     (i)

Goal: determine c0, c1, and d.

Approach: try the following functions for f(x): f(x) = 1, x, x^2, x^3, x^4, x^5.
Evaluate eq. (i) for each of these instances of f(x). This will give us enough equations to solve for c0, c1, and d.

f(x) | ∫[−1,1] f(x)dx | c0·f(−d) + c1·f(0) + c0·f(d)        | Equation
1    | 2              | c0 + c1 + c0 = 2c0 + c1             | 2 = 2c0 + c1     (1)
x    | 0              | c0(−d) + 0 + c0(d) = 0              | 0 = 0  D'oh!
x^2  | 2/3            | c0(−d)^2 + 0 + c0(d)^2 = 2c0·d^2    | c0·d^2 = 1/3     (2)
x^3  | 0              | c0(−d)^3 + 0 + c0(d)^3 = 0          | 0 = 0  D'oh!
x^4  | 2/5            | c0(−d)^4 + 0 + c0(d)^4 = 2c0·d^4    | c0·d^4 = 1/5     (3)
x^5  | 0              | c0(−d)^5 + 0 + c0(d)^5 = 0          | 0 = 0  D'oh!

From eq. (2) and (3):

c0·d^4 / (c0·d^2) = (1/5)/(1/3)  ⇒  d^2 = 3/5  ⇒  d = √(3/5)

Sub d into (2):

c0 = (1/3)(1/d^2) = (1/3)(5/3) = 5/9

Sub c0 into (1):

c1 = 2 − 2c0 = 2 − 10/9 = 8/9

Put it all together:

∫[−1,1] f(x)dx = (5/9) f(−√(3/5)) + (8/9) f(0) + (5/9) f(√(3/5))
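The handout's final formula can be checked against the very monomials used to derive it (my own sketch):

```python
# 3-point Gauss quadrature on [-1, 1] from the handout:
# (5/9) f(-sqrt(3/5)) + (8/9) f(0) + (5/9) f(sqrt(3/5)) — exact through degree 5.
import math

def gauss3(f):
    d = math.sqrt(3 / 5)
    return 5 / 9 * f(-d) + 8 / 9 * f(0.0) + 5 / 9 * f(d)

# Exactness check against the monomial integrals used in the derivation
for power, exact in [(0, 2.0), (1, 0.0), (2, 2 / 3), (3, 0.0), (4, 2 / 5), (5, 0.0)]:
    print(power, gauss3(lambda x: x ** power), exact)
```

Three function evaluations reproduce ∫[−1,1] x^k dx exactly for k = 0 ... 5; at degree 6 the rule starts to miss, as expected.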
28. Ordinary Differential Equations

Friday March 19, 2010

Differential equations (DE's) are equations that contain derivatives.
ODE's are differential equations with only 1 independent variable.
ex:

d^2x/dt^2 + (k/m)·x = y

t = independent variable, x = dependent variable

or:

y(t)·d^2x(t)/dt^2 + dy(t)/dt + G = x(t)·y(t)

t = independent variable; x, y = dependent variables

A partial differential equation (PDE) has 2 or more independent variables.
ex (2nd order):

∂^2φ(x,y)/∂x^2 + ∂^2φ(x,y)/∂y^2 = P(x,y)

The order of a DE is the highest order of derivative present.

28.1. 1st Order ODE (of 1 variable). Problem:

dy/dx = f(x, y)

with initial condition (x0, y0).
The solution is a function y(x) that satisfies the equation and the initial condition.
Rewrite:

dy/dx = f(x, y)
dy = f(x, y)dx
∫dy = ∫f(x, y)dx

We can only integrate directly if the x and y dependence separates:

Case 1:

f(x, y) = f(x)     (function of only 1 variable)
∫dy = ∫f(x)dx      (easy to solve!)
y |[y0 → y(x)] = ∫[x0, x] f(x)dx = y(x) − y0

Case 2:

f(x, y) = g(x)·h(y) = dy/dx
∫[y0, y] (1/h(y))dy = ∫[x0, x] g(x)dx

However, many 1st order ODE's are not part of these classes:

dy/dx = f(x, y)     (not separable)

If not separable, numerical solution methods are needed.
There are infinite solutions to differential equations (like flow lines). f(x, y) = slope of the function (field arrow diagram). At each value of x and y the slope of the function changes. Hence the need for initial conditions: the same equation with different initial values gives a different unique function.

Start at the initial value (x0, y0).
Take a small step along the function slope to (x, y)′.
At the new point, repeat.
29. ODE Numerical Methods

Monday March 22, 2010

Truncation errors in numerically solving ODEs:
Numerical solutions are calculated in steps.
2 types of truncation error: local and propagated.

29.1. Local Truncation Error. Error in the trajectory of the solution due to having a finite step size (Δx).
With step size Δx, our next estimate of y(x) may be inaccurate.
A more accurate point in the trajectory of y(x) will occur if the step size is smaller.

29.2. Propagated (Accumulation) Truncation Error. Each additional step increases the truncation error:
1st step from x0 → x0 + Δx generates e1 = local truncation error of step 1
2nd step from x0 → x0 + 2Δx generates e2 = local truncation errors of steps 1 and 2
3rd step from x0 → x0 + 3Δx generates e3 = local truncation errors of steps 1, 2 and 3

The approximate value of y at each additional point is not on the true trajectory based on (x0, y0); the slope used for the next step is slightly in error.

29.3. Forward Euler's method. Objective is to find y(x) so that dy/dx = f(x, y), with initial value (x0, y0).
Methodology: calculate the slope at (x0, y0) from the DEQ:

slope = dy/dx |x0 = f(x0, y0)

To estimate the next point on the numerical solution (x1, y1), assume a linear slope between (x0, y0) and (x1, y1). Use the slope from the previous step:

slope = f(x0, y0)
y1 = y0 + f(x0, y0)·(x1 − x0)     (slope × h1)

Next step: slope = f(x1, y1) ≠ real slope

y2 = y1 + f(x1, y1)·(x2 − x1)     (h2)

There is a small error in the slope evaluation, since the calculated slope is not equal to that of the real trajectory.

In general: yi+1 = yi + f(xi, yi)·(xi+1 − xi)
            (old answer + slope × h)

Notice that this is a 1st order Taylor series approximation:

yi+1 = yi + (dy/dx)(xi+1 − xi) + (1/2)(d^2y/dx^2)·h^2 + ...

Therefore the local truncation error is the second order term:

E = (1/2)(d^2y/dx^2)·h^2
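The update rule above is a few lines of code. A sketch (my own test problem dy/dx = y with y(0) = 1, chosen because the exact solution e^x is known):

```python
# Forward Euler: y_{i+1} = y_i + f(x_i, y_i) * h, applied to dy/dx = y, y(0) = 1.
import math

def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y += f(x, y) * h          # step along the local slope
        x += h
    return y

for h in (0.1, 0.01):
    approx = euler(lambda x, y: y, 0.0, 1.0, h, round(1 / h))
    print(h, approx, abs(approx - math.e))   # global error roughly proportional to h
```

Halving h roughly halves the propagated error: the local error is O(h²), but accumulating ~1/h of them leaves a global error of O(h).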
30. 1st Order ODE Example

Wednesday March 24, 2010 (2)

t = independent variable, x = dependent variable

dx/dt = f(t, x),   t0 = 0, x0 = 0

Let dx/dt = v(t, x) (velocity).

If dx/dt = 3 = constant: at t = 1, 2, x = ?

x1 ≈ x0 + v(t0, x0)(t1 − t0) = 0 + 3 × 1 = 3
x2 ≈ x1 + v(t1, x1)(t2 − t1) = 3 + 3 × 1 = 6

If dx/dt = t + 1: at t = 1, 2, 3, x = ?

x1 ≈ x0 + v(t0, x0)(t1 − t0) = 0 + (0 + 1) × 1 = 1
x2 ≈ 1 + (1 + 1) × 1 = 3
x3 ≈ 3 + (2 + 1) × 1 = 6

Euler's Forward:
xi+1 = xi + V(ti, xi)(ti+1 − ti)

Central Difference (Trapezoidal):
xi+1 = xi + (1/2)[V(ti, xi) + V(ti+1, xi+1)](ti+1 − ti)

(2) Class was getting boring at this point. Yani was getting ready to go to the hospital for his son to be born, and we got a sub for a bit. More to follow.
Part 5. Missing Notes

Yani had his baby son Brady and took about a week off. I didn't like the sub and didn't go to class for a bit. The notes are about linear algebra and differential equations.

Shortly after Yani returned, I had to attend my grandfather's funeral [doing matlab assignments on the plane =( ] and I believe I missed two more classes.
Anyone willing to donate missing notes is welcome; use the email provided in the introduction. I might release the TeX document later too...
Part 6. April Notes

31. I Didn't Know What Was Going on

Wednesday April 7, 2010

(d^3x/dt^3)·x^2·t + d^2x/dt^2 + t^2·x = 0

Solve via the 4th order classical RK method.

(d^3x/dt^3)·x^2·t = −d^2x/dt^2 − t^2·x

d^3x/dt^3 = −(d^2x/dt^2)·(1/(x^2·t)) − t/x

31.1. Step 1. Transform the 3rd order ODE into a 1st order ODE using vectors (change of variables):

x1 = x             dx1/dt = x2
x2 = dx/dt         dx2/dt = x3
x3 = d^2x/dt^2     dx3/dt = d^3x/dt^3 = −x3·(1/(x1^2·t)) − t/x1

In the form:

d/dt [ x1 ; x2 ; x3 ] = [ x2 ; x3 ; −x3/(x1^2·t) − t/x1 ]

d/dt (x) = F(x, t)
102 DANIEL CHIN UNIVERSITY OF CALGARY MECH ENGG ([email protected])
31.2. At iteraion i:
\[ k_{1,i} = F(t_i, \vec{x}_i) = \begin{bmatrix} x_{2,i} \\ x_{3,i} \\ -\dfrac{x_{3,i}}{x_{1,i}^2 t_i} - \dfrac{t_i}{x_{1,i}} \end{bmatrix} \]
\[ k_{2,i} = F\Big(\underbrace{t_i + \frac{\Delta t}{2}}_{\text{plus half step}},\ \underbrace{\vec{x}_i + \frac{\Delta t}{2} k_{1,i}}_{\text{estimate next } \vec{x}}\Big) \]
\[ = F\left(t_i + \frac{\Delta t}{2},\ \begin{bmatrix} x_{1,i} \\ x_{2,i} \\ x_{3,i} \end{bmatrix} + \frac{\Delta t}{2}\begin{bmatrix} x_{2,i} \\ x_{3,i} \\ -\dfrac{x_{3,i}}{x_{1,i}^2 t_i} - \dfrac{t_i}{x_{1,i}} \end{bmatrix}\right) \]
\[ = F\Bigg(t_i + \frac{\Delta t}{2},\ \underbrace{\begin{bmatrix} x_{1,i} + \frac{\Delta t}{2}x_{2,i} \\ x_{2,i} + \frac{\Delta t}{2}x_{3,i} \\ x_{3,i} + \frac{\Delta t}{2}\left(-\frac{x_{3,i}}{x_{1,i}^2 t_i} - \frac{t_i}{x_{1,i}}\right) \end{bmatrix}}_{\text{Simplify: } \begin{bmatrix} U & V & W \end{bmatrix}^T}\Bigg) \]
\[ k_{2,i} = F\left(t_i + \frac{\Delta t}{2},\ \begin{bmatrix} U \\ V \\ W \end{bmatrix}\right) \]
Therefore:
\[ k_{2,i} = \begin{bmatrix} V \\ W \\ -\dfrac{W}{U^2\left(t_i + \frac{\Delta t}{2}\right)} - \dfrac{t_i + \frac{\Delta t}{2}}{U} \end{bmatrix} \]
Similar approach for k_3, k_4 (very tedious).
\[ x_{i+1} = x_i + \frac{\Delta t}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right) \]
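Rather than expanding k_3 and k_4 symbolically, the same scheme is usually coded directly. A minimal sketch follows; note the notes give no initial conditions, and F is singular at t = 0 or x_1 = 0, so any actual run would have to start away from those points. The function names are mine:

```python
import numpy as np

def F(t, x):
    # State x = [x1, x2, x3] = [x, dx/dt, d2x/dt2] from the change of variables.
    x1, x2, x3 = x
    return np.array([x2, x3, -x3 / (x1**2 * t) - t / x1])

def rk4_step(F, t, x, dt):
    """One classical 4th order Runge-Kutta step for dx/dt = F(t, x)."""
    k1 = F(t, x)
    k2 = F(t + dt / 2, x + dt / 2 * k1)
    k3 = F(t + dt / 2, x + dt / 2 * k2)
    k4 = F(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Because the state is a NumPy vector, the scalar RK formulas carry over unchanged; this is exactly why the change-of-variables step is done first.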
31.3. Example. Write the recursive equation for solving the angle of a pendulum in time.
A) Forward Euler
\[ ML^2\frac{d^2\theta}{dt^2} + \mu\frac{d\theta}{dt} + MLg\sin(\theta) = 0 \]
\[ \frac{d^2\theta}{dt^2} = -\frac{\mu}{ML^2}\frac{d\theta}{dt} - \frac{g}{L}\sin(\theta) \]
Let: x_1 = \theta, x_2 = \theta'
\[ \frac{d}{dt}[\vec{x}] = \begin{bmatrix} x_2 \\ \underbrace{-\dfrac{\mu}{ML^2}}_{a}\, x_2 \underbrace{-\dfrac{g}{L}}_{b}\sin(x_1) \end{bmatrix} \]
\[ \frac{d}{dt}[\vec{x}] = \begin{bmatrix} x_2 \\ a x_2 + b\sin(x_1) \end{bmatrix} = F(\vec{x}, t) \]
Forward Euler: \( x_{i+1} = x_i + \Delta t\, F(\vec{x}_i, t_i) \)
\[ = \begin{bmatrix} x_{1,i} \\ x_{2,i} \end{bmatrix} + \Delta t\begin{bmatrix} x_{2,i} \\ a x_{2,i} + b\sin(x_{1,i}) \end{bmatrix} = \begin{bmatrix} x_{1,i} + \Delta t\, x_{2,i} \\ x_{2,i} + \Delta t\left(a x_{2,i} + b\sin(x_{1,i})\right) \end{bmatrix} \]
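The recursion above, sketched in code. The parameter values for M, L, μ, g and the initial angle are illustrative choices of mine, not from the notes:

```python
import math

# Illustrative parameter values only (not specified in the notes).
M, L, mu, g = 1.0, 1.0, 0.1, 9.81
a = -mu / (M * L**2)   # damping coefficient from the derivation
b = -g / L             # gravity coefficient from the derivation

def pendulum_step(x1, x2, dt):
    """One forward-Euler step for the state [theta, theta']."""
    return x1 + dt * x2, x2 + dt * (a * x2 + b * math.sin(x1))

theta, omega = 0.5, 0.0   # initial angle (rad) and angular velocity
for _ in range(2000):     # march 2 seconds forward with dt = 0.001
    theta, omega = pendulum_step(theta, omega, 0.001)
```

Forward Euler tends to add energy to oscillatory systems, which is one reason the course moves on to Heun's and RK methods; a small step size keeps the error tolerable here.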
32. More Examples
Monday April 12, 2010
32.1. Adaptive Step Size. Will not be tested on the final. A constant step size can be inefficient for functions that contain both abrupt changes and slow gradual changes. Slow changes = a large step size is more efficient; abrupt changes = smaller step sizes are more efficient.
32.2. Example.
Time (t)   Measurement (x)
0          2
1          1
2          0
3          1

Solve for the trend using: A) least squares method: x(t) = a + b t; B) LU matrix; C) Polynomial interpolation; D) Quadratic spline between 0 ≤ t ≤ 2 and 2 ≤ t ≤ 3.
Model:
\[ \begin{bmatrix} y_1 \\ \vdots \\ y_N \end{bmatrix} = \begin{bmatrix} f_1(x_1) & \dots & f_K(x_1) \\ \vdots & \ddots & \vdots \\ f_1(x_N) & \dots & f_K(x_N) \end{bmatrix}\begin{bmatrix} P_1 \\ \vdots \\ P_K \end{bmatrix} \]
\[ M = AP \]
Pseudo-inverse:
\[ A^T M = (A^T A)P \]
\[ P = (A^T A)^{-1} A^T M \]
In our case:
\[ f_1(t) = 1, \qquad f_2(t) = t, \qquad P = \begin{bmatrix} a \\ b \end{bmatrix} \]
\[ \begin{bmatrix} 2 \\ 1 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix} \]
\[ P = \left(\begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{bmatrix}\right)^{-1}\begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \end{bmatrix}\begin{bmatrix} 2 \\ 1 \\ 0 \\ 1 \end{bmatrix} \]
\[ P = \begin{bmatrix} 4 & 6 \\ 6 & 14 \end{bmatrix}^{-1}\begin{bmatrix} 4 \\ 4 \end{bmatrix} \]
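As a check, the normal equations for this data set can be evaluated with NumPy (a sketch; note the fitted slope comes out negative, since the data trend downward over the first three points):

```python
import numpy as np

# Design matrix A for x(t) = a + b*t at t = 0, 1, 2, 3, and the measurements M.
A = np.array([[1, 0], [1, 1], [1, 2], [1, 3]], dtype=float)
M = np.array([2, 1, 0, 1], dtype=float)

# P = (A^T A)^{-1} A^T M, solved without forming the inverse explicitly.
P = np.linalg.solve(A.T @ A, A.T @ M)
print(P)  # approximately [1.6, -0.4]
```

Calling `np.linalg.solve` on the 2x2 normal-equations system is preferred over computing the inverse, though for a matrix this small either works.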
To solve using LU decomposition:
\[ \begin{bmatrix} 4 & 6 \\ 6 & 14 \end{bmatrix} \qquad \overbrace{\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}}^{\text{identity matrix}} \]
\[ r_2 - \tfrac{3}{2}r_1: \qquad \begin{bmatrix} 4 & 6 \\ 0 & 5 \end{bmatrix} \qquad \begin{bmatrix} 1 & 0 \\ \tfrac{3}{2} & 1 \end{bmatrix} \]
Therefore:
\[ U = \begin{bmatrix} 4 & 6 \\ 0 & 5 \end{bmatrix}, \qquad L = \begin{bmatrix} 1 & 0 \\ \tfrac{3}{2} & 1 \end{bmatrix} \]
Test:
\[ LU = \begin{bmatrix} 1 & 0 \\ \tfrac{3}{2} & 1 \end{bmatrix}\begin{bmatrix} 4 & 6 \\ 0 & 5 \end{bmatrix} = \begin{bmatrix} 4 & 6 \\ 6 & 14 \end{bmatrix} = X \]
Now:
\[ LU\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 4 \\ 4 \end{bmatrix} \]
\[ \begin{bmatrix} 1 & 0 \\ \tfrac{3}{2} & 1 \end{bmatrix}\underbrace{\begin{bmatrix} 4 & 6 \\ 0 & 5 \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix}}_{Y} = \begin{bmatrix} 4 \\ 4 \end{bmatrix} \]
\[ LY = \begin{bmatrix} 4 \\ 4 \end{bmatrix}: \qquad y_1 = 4, \qquad \tfrac{3}{2}y_1 + y_2 = 4 \]
\[ Y = \begin{bmatrix} 4 \\ -2 \end{bmatrix} = \begin{bmatrix} 4 & 6 \\ 0 & 5 \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix} \]
\[ b = -\frac{2}{5}, \qquad a = 1.6 \]
Therefore the system is:
\[ x(t) = -\frac{2}{5}t + 1.6 \]
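The same two-stage forward/back substitution can be checked numerically with the triangular factors found above (a sketch; variable names are mine):

```python
import numpy as np

L = np.array([[1.0, 0.0], [1.5, 1.0]])   # lower factor, 3/2 = 1.5
U = np.array([[4.0, 6.0], [0.0, 5.0]])   # upper factor

rhs = np.array([4.0, 4.0])
y = np.linalg.solve(L, rhs)    # forward substitution: L y = rhs  ->  y = [4, -2]
a, b = np.linalg.solve(U, y)   # back substitution: U [a, b]^T = y
```

`np.linalg.solve` is a generic solver; for large triangular systems `scipy.linalg.solve_triangular` would exploit the structure, but the result is the same: a = 1.6, b = -0.4, matching the least-squares answer.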
Quadratic Spline: 3 equations are needed for a 2nd order equation.
Between 0 ≤ t ≤ 2:
\[ f_1(t) = a_1 t^2 + b_1 t + c_1 \]
\[ f(0) = 2 = c_1 \]
\[ f(1) = 1 = a_1 + b_1 + 2 \]
\[ f(2) = 0 = 4a_1 + 2b_1 + 2 \]
Or:
\[ \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 1 & 1 \\ 4 & 2 & 1 \end{bmatrix}\begin{bmatrix} a_1 \\ b_1 \\ c_1 \end{bmatrix} \]
Solve for the factors. Between 2 ≤ t ≤ 3:
\[ f_2(t) = a_2 t^2 + b_2 t + c_2 \]
\[ f(2) = 0 = 4a_2 + 2b_2 + c_2 \]
\[ f(3) = 1 = 9a_2 + 3b_2 + c_2 \]
Additional equation: \( f_1'(2) = f_2'(2) \)
\[ 2a_1(2) + b_1 = 2a_2(2) + b_2 \]
Solve for all the factors.
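Stacking all six spline conditions into one linear system and solving (a sketch; the ordering of the unknowns is my choice):

```python
import numpy as np

# Unknowns: [a1, b1, c1, a2, b2, c2]
# Rows: f1(0)=2, f1(1)=1, f1(2)=0, f2(2)=0, f2(3)=1, f1'(2)=f2'(2)
A = np.array([
    [0, 0, 1,  0,  0, 0],
    [1, 1, 1,  0,  0, 0],
    [4, 2, 1,  0,  0, 0],
    [0, 0, 0,  4,  2, 1],
    [0, 0, 0,  9,  3, 1],
    [4, 1, 0, -4, -1, 0],
], dtype=float)
rhs = np.array([2, 1, 0, 0, 1, 0], dtype=float)

a1, b1, c1, a2, b2, c2 = np.linalg.solve(A, rhs)
```

For this data the first segment comes out linear (a1 = 0), since (0, 2), (1, 1), (2, 0) are collinear; the second segment is a genuine parabola.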
Part 7. Final Review Sheet
1 question on each part.
1) Taylor series
2) Linear equations
• Naive Gaussian Elimination
• Pivoting
• LU decomposition
• Eigenvectors / Eigenvalues
3) Linear regression - linear and non-linear least squares
4) Interpolation - Polynomial - Lagrangian. Example:
\[ f(x) = \sum_{n=0}^{N} Y_n L_n(x) = Y_0 L_0(x) + Y_1 L_1(x) \]
\[ = Y_0\left[\prod_{i=0,\ i\neq 0}^{N} \frac{x - x_i}{x_0 - x_i}\right] + Y_1\left[\prod_{i=0,\ i\neq 1}^{N} \frac{x - x_i}{x_1 - x_i}\right] \]
\[ = Y_0\left[\frac{(x - x_1)(x - x_2)\dots}{(x_0 - x_1)(x_0 - x_2)\dots}\right] + Y_1\left[\frac{(x - x_0)(x - x_2)\dots}{(x_1 - x_0)(x_1 - x_2)\dots}\right] \]
Spline interpolation
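The Lagrangian form above translates almost directly into code (a minimal sketch; the function name is mine):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for n, (xn, yn) in enumerate(zip(xs, ys)):
        Ln = 1.0   # build L_n(x) = prod over i != n of (x - xi) / (xn - xi)
        for i, xi in enumerate(xs):
            if i != n:
                Ln *= (x - xi) / (xn - xi)
        total += yn * Ln
    return total
```

By construction it reproduces the data exactly; e.g. through the first three points of the example above, `lagrange([0, 1, 2], [2, 1, 0], 1.0)` returns 1.0.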
• 1 equation / point
• make the first derivatives the same at segment endpoints: \( \lim_{x\to x^-} f'(x) = \lim_{x\to x^+} f'(x) \)
• if cubic, set the second derivatives the same
• set \( f''(x) = 0 \) at the first point
5) Integration
• Rectangular• Trapezoidal• Simpson’s• Gaussian Quadrature
(Simpson's and quadrature are given on the formula sheet)
6) ODE
• Euler
• Backwards \( f(x_{i+1}) \simeq f(x_0) + f'(x_0)\Delta x \)
• Forwards \( f(x_{i+1}) \simeq f(x_0) + f'(x_{guess})\Delta x \)
• Heun's (central difference kinda) \( f(x_{i+1}) \simeq f(x_0) + \dfrac{f'(x_0) + f'(x_{guess})}{2}\Delta x \)
• Serial RK (Formula Sheet)
• Higher order → make into system of equations
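Heun's method from the list above, as a sketch (names mine): an Euler predictor supplies the "guess", then the two slopes are averaged.

```python
def heun(f, t0, x0, dt, n):
    """Heun's method for dx/dt = f(t, x): Euler predictor, averaged corrector."""
    t, x = t0, x0
    for _ in range(n):
        k0 = f(t, x)                    # slope at the current point
        x_guess = x + dt * k0           # forward-Euler predictor
        k1 = f(t + dt, x_guess)         # slope at the predicted point
        x = x + dt * (k0 + k1) / 2      # average the two slopes
        t += dt
    return x
```

Averaging the endpoint slopes makes the method 2nd order accurate, one step up from plain Euler, at the cost of one extra function evaluation per step.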
Fin!