> 5. Numerical Integration
Review of Interpolation

Find $p_n(x)$ with $p_n(x_j) = y_j$, $j = 0, 1, 2, \ldots, n$. Solution:
$$p_n(x) = y_0\,\ell_0(x) + y_1\,\ell_1(x) + \cdots + y_n\,\ell_n(x), \qquad \ell_k(x) = \prod_{j=0,\ j\neq k}^{n} \frac{x - x_j}{x_k - x_j}.$$
Theorem
Let $y_j = f(x_j)$, with $f(x)$ smooth, and let $p_n$ interpolate $f(x)$ at $x_0 < x_1 < \cdots < x_n$. For any $x \in (x_0, x_n)$ there is a $\xi$ such that
$$f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\,\underbrace{(x - x_0)(x - x_1)\cdots(x - x_n)}_{\psi(x)}$$

Example: $n = 1$, linear:
$$f(x) - p_1(x) = \frac{f''(\xi)}{2}\,(x - x_0)(x - x_1), \qquad p_1(x) = \frac{y_1 - y_0}{x_1 - x_0}\,(x - x_0) + y_0.$$
5. Numerical Integration Math 1070
> 5. Numerical Integration > 5.1 The Trapezoidal Rule
Mixed rule
Find the area under the curve $y = f(x)$:
$$\int_a^b f(x)\,dx = \sum_{j=0}^{N-1}\ \underbrace{\int_{x_j}^{x_{j+1}} f(x)\,dx}_{\substack{\text{interpolate } f(x) \text{ on } (x_j, x_{j+1})\\ \text{and integrate exactly}}} \ \approx\ \sum_{j=0}^{N-1} (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2}$$
Trapezoidal Rule
$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx \int_{x_j}^{x_{j+1}} p_1(x)\,dx = \int_{x_j}^{x_{j+1}} \left\{ \frac{f(x_{j+1}) - f(x_j)}{x_{j+1} - x_j}\,(x - x_j) + f(x_j) \right\} dx = (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2}$$

On a single interval $[a, b]$:
$$T_1(f) = (b - a)\,\frac{f(b) + f(a)}{2} \tag{6.1}$$
Another derivation of the Trapezoidal Rule:

Seek weights $w_j, w_{j+1}$ such that
$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx w_j f(x_j) + w_{j+1} f(x_{j+1})$$
is exact on polynomials of degree 1 (i.e., on $1$ and $x$):
$$f(x) \equiv 1:\quad \int_{x_j}^{x_{j+1}} dx = x_{j+1} - x_j = w_j \cdot 1 + w_{j+1} \cdot 1$$
$$f(x) = x:\quad \int_{x_j}^{x_{j+1}} x\,dx = \frac{x_{j+1}^2}{2} - \frac{x_j^2}{2} = w_j x_j + w_{j+1} x_{j+1}$$
Example
Approximate the integral
$$I = \int_0^1 \frac{dx}{1+x}$$
The true value is $I = \ln(2) \doteq 0.693147$. Using (6.1), we obtain
$$T_1 = \frac{1}{2}\left(1 + \frac{1}{2}\right) = \frac{3}{4} = 0.75$$
and the error is
$$I - T_1(f) \doteq -0.0569 \tag{6.2}$$
To improve on approximation (6.1) when $f(x)$ is not a nearly linear function on $[a, b]$, break the interval $[a, b]$ into smaller subintervals and apply (6.1) on each subinterval. If the subintervals are small enough, then $f(x)$ will be nearly linear on each one.
Example
Evaluate the preceding example by using $T_1(f)$ on two subintervals of equal length.
For two subintervals,
$$I = \int_0^{1/2} \frac{dx}{1+x} + \int_{1/2}^{1} \frac{dx}{1+x} \doteq \frac{1}{2}\cdot\frac{1 + \frac{2}{3}}{2} + \frac{1}{2}\cdot\frac{\frac{2}{3} + \frac{1}{2}}{2}$$
$$T_2 = \frac{17}{24} \doteq 0.70833$$
and the error
$$I - T_2 \doteq -0.0152 \tag{6.3}$$
is about $\frac{1}{4}$ of that for $T_1$ in (6.2).
We derive the general formula for calculations using $n$ subintervals of equal length $h = \frac{b-a}{n}$. The endpoints of each subinterval are then
$$x_j = a + jh, \qquad j = 0, 1, \ldots, n$$
Breaking the integral into $n$ subintegrals,
$$I(f) = \int_a^b f(x)\,dx = \int_{x_0}^{x_n} f(x)\,dx = \int_{x_0}^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + \cdots + \int_{x_{n-1}}^{x_n} f(x)\,dx$$
$$\approx h\,\frac{f(x_0) + f(x_1)}{2} + h\,\frac{f(x_1) + f(x_2)}{2} + \cdots + h\,\frac{f(x_{n-1}) + f(x_n)}{2}$$

The trapezoidal numerical integration rule:
$$T_n(f) = h\left(\tfrac{1}{2}f(x_0) + f(x_1) + f(x_2) + \cdots + f(x_{n-1}) + \tfrac{1}{2}f(x_n)\right) \tag{6.4}$$
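The rule (6.4) can be sketched in a few lines of Python (the function name `trapezoid` and the driver below are our own choices, not from the text):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule T_n(f) of (6.4) on [a, b] with n equal subintervals."""
    h = (b - a) / n
    # Endpoints get weight 1/2, interior nodes weight 1.
    s = 0.5 * (f(a) + f(b))
    for j in range(1, n):
        s += f(a + j * h)
    return h * s

# Reproduce the running example: I = int_0^1 dx/(1+x) = ln 2.
f = lambda x: 1.0 / (1.0 + x)
T1 = trapezoid(f, 0.0, 1.0, 1)   # 0.75, error -0.0569 as in (6.2)
T2 = trapezoid(f, 0.0, 1.0, 2)   # 17/24 = 0.70833..., error -0.0152 as in (6.3)
```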
With a sequence of increasing values of $n$, $T_n(f)$ will usually be an increasingly accurate approximation of $I(f)$.

But which sequence of $n$ should be used? If $n$ is doubled repeatedly, then the function values used in each $T_{2n}(f)$ will include all earlier function values used in the preceding $T_n(f)$. Thus, doubling $n$ ensures that all previously computed information is used in the new calculation, making the trapezoidal rule less expensive than it would be otherwise.
$$T_2(f) = h\left(\frac{f(x_0)}{2} + f(x_1) + \frac{f(x_2)}{2}\right) \quad\text{with}\quad h = \frac{b-a}{2},\ x_0 = a,\ x_1 = \frac{a+b}{2},\ x_2 = b.$$
Also
$$T_4(f) = h\left(\frac{f(x_0)}{2} + f(x_1) + f(x_2) + f(x_3) + \frac{f(x_4)}{2}\right)$$
with
$$h = \frac{b-a}{4}, \qquad x_0 = a,\ x_1 = \frac{3a+b}{4},\ x_2 = \frac{a+b}{2},\ x_3 = \frac{a+3b}{4},\ x_4 = b.$$
In passing from $T_2(f)$ to $T_4(f)$, only $f(x_1)$ and $f(x_3)$ need to be newly evaluated.
Example

We give calculations of $T_n(f)$ for three integrals:
$$I^{(1)} = \int_0^1 e^{-x^2}\,dx \doteq 0.746824132812427$$
$$I^{(2)} = \int_0^4 \frac{dx}{1+x^2} = \tan^{-1}(4) \doteq 1.32581766366803$$
$$I^{(3)} = \int_0^{2\pi} \frac{dx}{2+\cos(x)} = \frac{2\pi}{\sqrt{3}} \doteq 3.62759872846844$$
          I(1)                I(2)                I(3)
  n       Error      Ratio    Error      Ratio    Error      Ratio
  2       1.55E-2             -1.33E-1            -5.61E-1
  4       3.84E-3    4.02     -3.59E-3   37.0     -3.76E-2   14.9
  8       9.59E-4    4.01     5.64E-4    -6.37    -1.93E-4   195.0
  16      2.40E-4    4.00     1.44E-4    3.92     -5.19E-9   37,600.0
  32      5.99E-5    4.00     3.60E-5    4.00     *
  64      1.50E-5    4.00     9.01E-6    4.00     *
  128     3.74E-6    4.00     2.25E-6    4.00     *
The error for $I^{(1)}$ and $I^{(2)}$ decreases by a factor of about 4 when $n$ is doubled; for $I^{(3)}$, the answers for $n = 32, 64, 128$ were correct up to the limits of rounding error on the computer (16 decimal digits).
> 5. Numerical Integration > 5.1.1 Simpson’s rule
To improve on $T_1(f)$ in (6.1), use quadratic interpolation to approximate $f(x)$ on $[a, b]$. Let $P_2(x)$ be the quadratic polynomial that interpolates $f(x)$ at $a$, $c = \frac{a+b}{2}$, and $b$.
$$I(f) \approx \int_a^b P_2(x)\,dx \tag{6.5}$$
$$= \int_a^b \left( \frac{(x-c)(x-b)}{(a-c)(a-b)}\,f(a) + \frac{(x-a)(x-b)}{(c-a)(c-b)}\,f(c) + \frac{(x-a)(x-c)}{(b-a)(b-c)}\,f(b) \right) dx$$
This can be evaluated directly, but it is easier with a change of variables. Let $h = \frac{b-a}{2}$ and $u = x - a$. Then
$$\int_a^b \frac{(x-c)(x-b)}{(a-c)(a-b)}\,dx = \frac{1}{2h^2}\int_a^{a+2h} (x-c)(x-b)\,dx = \frac{1}{2h^2}\int_0^{2h} (u-h)(u-2h)\,du = \frac{1}{2h^2}\left(\frac{u^3}{3} - \frac{3}{2}u^2 h + 2h^2 u\right)\bigg|_0^{2h} = \frac{h}{3}$$
and
$$S_2(f) = \frac{h}{3}\left(f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b)\right) \tag{6.6}$$
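Formula (6.6) is a one-line computation; a minimal sketch (the helper name `simpson2` is ours):

```python
def simpson2(f, a, b):
    """Basic Simpson rule S_2(f) of (6.6): quadratic interpolation at a, (a+b)/2, b."""
    h = (b - a) / 2.0
    return (h / 3.0) * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

# For f(x) = 1/(1+x) on [0, 1] this gives 25/36 = 0.69444...
```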
Example
$$I = \int_0^1 \frac{dx}{1+x}$$
Then $h = \frac{b-a}{2} = \frac{1}{2}$ and
$$S_2(f) = \frac{1/2}{3}\left(1 + 4\left(\frac{2}{3}\right) + \frac{1}{2}\right) = \frac{25}{36} \doteq 0.69444 \tag{6.7}$$
and the error is
$$I - S_2 = \ln(2) - S_2 \doteq -0.00130,$$
while the error for the trapezoidal rule (the number of function evaluations is the same for both $S_2$ and $T_2$) was
$$I - T_2 \doteq -0.0152.$$
The error in $S_2$ is smaller than that in (6.3) for $T_2$ by a factor of 12, a significant increase in accuracy.
Figure: An illustration of Simpson's rule (6.6), $y = f(x)$, $y = P_2(x)$
The rule $S_2(f)$ will be an accurate approximation to $I(f)$ if $f(x)$ is nearly quadratic on $[a, b]$. For other cases, proceed in the same manner as for the trapezoidal rule. Let $n$ be an even integer, $h = \frac{b-a}{n}$, and define the evaluation points for $f(x)$ by
$$x_j = a + jh, \qquad j = 0, 1, \ldots, n$$
We follow the idea from the trapezoidal rule, but break $[a, b] = [x_0, x_n]$ into larger intervals, each containing three interpolation node points:
$$I(f) = \int_a^b f(x)\,dx = \int_{x_0}^{x_n} f(x)\,dx = \int_{x_0}^{x_2} f(x)\,dx + \int_{x_2}^{x_4} f(x)\,dx + \cdots + \int_{x_{n-2}}^{x_n} f(x)\,dx$$
$$\approx \frac{h}{3}\big[f(x_0) + 4f(x_1) + f(x_2)\big] + \frac{h}{3}\big[f(x_2) + 4f(x_3) + f(x_4)\big] + \cdots + \frac{h}{3}\big[f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\big]$$
Simpson’s rule:
$$S_n(f) = \frac{h}{3}\big(f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + 2f(x_4) + \cdots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\big) \tag{6.8}$$
It has been among the most popular numerical integrationmethods for more than two centuries.
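The composite rule (6.8) can be sketched as follows (names ours; $n$ must be even):

```python
def simpson(f, a, b, n):
    """Composite Simpson rule S_n(f) of (6.8); n must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's rule requires an even number of subintervals")
    h = (b - a) / n
    s = f(a) + f(b)
    for j in range(1, n):
        # Odd-index interior nodes get weight 4, even-index interior nodes weight 2.
        s += (4 if j % 2 == 1 else 2) * f(a + j * h)
    return (h / 3.0) * s
```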
Example

Evaluate the integrals:
$$I^{(1)} = \int_0^1 e^{-x^2}\,dx \doteq 0.746824132812427$$
$$I^{(2)} = \int_0^4 \frac{dx}{1+x^2} = \tan^{-1}(4) \doteq 1.32581766366803$$
$$I^{(3)} = \int_0^{2\pi} \frac{dx}{2+\cos(x)} = \frac{2\pi}{\sqrt{3}} \doteq 3.62759872846844$$
          I(1)                 I(2)                I(3)
  n       Error       Ratio    Error      Ratio    Error      Ratio
  2       -3.56E-4             8.66E-2             -1.26
  4       -3.12E-5    11.4     3.95E-2    2.2      1.37E-1    -9.2
  8       -1.99E-6    15.7     1.95E-3    20.3     1.23E-2    11.2
  16      -1.25E-7    15.9     4.02E-6    485.0    6.43E-5    191.0
  32      -7.79E-9    16.0     2.33E-8    172.0    1.71E-9    37,600.0
  64      -4.87E-10   16.0     1.46E-9    16.0     *
  128     -3.04E-11   16.0     9.15E-11   16.0     *
For $I^{(1)}$ and $I^{(2)}$, the ratio by which the error decreases approaches 16. For $I^{(3)}$, the errors converge to zero much more rapidly.
> 5. Numerical Integration > 5.2 Error formulas
Theorem
Let $f \in C^2[a, b]$, $n \in \mathbb{N}$. The error in integrating
$$I(f) = \int_a^b f(x)\,dx$$
using the trapezoidal rule
$$T_n(f) = h\left[\tfrac{1}{2}f(x_0) + f(x_1) + f(x_2) + \cdots + f(x_{n-1}) + \tfrac{1}{2}f(x_n)\right]$$
is given by
$$E_n^T \equiv I(f) - T_n(f) = \frac{-h^2(b-a)}{12}\,f''(c_n) \tag{6.9}$$
where $c_n$ is some unknown point in $[a, b]$, and $h = \frac{b-a}{n}$.
Theorem
Suppose that $f \in C^2[a, b]$, and $h = \max_j (x_{j+1} - x_j)$. Then
$$\left| \int_a^b f(x)\,dx - \sum_j (x_{j+1} - x_j)\,\frac{f(x_{j+1}) + f(x_j)}{2} \right| \le \frac{b-a}{12}\,h^2 \max_{a \le x \le b} |f''(x)|.$$

Proof: Let $I_j$ be the $j$th subinterval and $p_1$ the linear interpolant on $I_j$ at $x_j, x_{j+1}$:
$$f(x) - p_1(x) = \frac{f''(\xi)}{2}\,\underbrace{(x - x_j)(x - x_{j+1})}_{\psi(x)}.$$
Local error:
$$\left| \int_{x_j}^{x_{j+1}} f(x)\,dx - (x_{j+1} - x_j)\,\frac{f(x_j) + f(x_{j+1})}{2} \right| =$$
proof
$$= \left|\int_{x_j}^{x_{j+1}} \frac{f''(\xi)}{2!}\,(x-x_j)(x-x_{j+1})\,dx\right| \le \frac{1}{2}\int_{x_j}^{x_{j+1}} |f''(\xi)|\,|x-x_j|\,|x-x_{j+1}|\,dx$$
$$\le \frac{1}{2}\max_{a\le x\le b}|f''(x)| \int_{x_j}^{x_{j+1}} (x-x_j)(x_{j+1}-x)\,dx = \frac{1}{2}\max_{a\le x\le b}|f''(x)|\,\frac{(x_{j+1}-x_j)^3}{6}.$$
Hence
$$|\text{local error}| \le \frac{1}{12}\max_{a\le x\le b}|f''(x)|\cdot h_j^3, \qquad h_j = x_{j+1}-x_j.$$
proof
Finally,
$$|\text{global error}| = \left|\int_a^b f(x)\,dx - \sum_j (x_{j+1}-x_j)\,\frac{f(x_j)+f(x_{j+1})}{2}\right|$$
$$= \left|\sum_j \left(\int_{x_j}^{x_{j+1}} f(x)\,dx - (x_{j+1}-x_j)\,\frac{f(x_j)+f(x_{j+1})}{2}\right)\right| \le \sum_j |\text{local error}|$$
$$\le \sum_j \frac{1}{12}\max_{a\le x\le b}|f''(x)|\,(x_{j+1}-x_j)^3 = \frac{1}{12}\max_{a\le x\le b}|f''(x)| \sum_{j=0}^{n-1}\underbrace{(x_{j+1}-x_j)^3}_{\le\, h^2(x_{j+1}-x_j)}$$
$$\le \frac{h^2}{12}\max_{a\le x\le b}|f''(x)| \sum_{j=0}^{n-1}\underbrace{(x_{j+1}-x_j)}_{=\,b-a}. \qquad \square$$
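The bound just proved is easy to check numerically. A small sketch (all names ours) for $f(x) = 1/(1+x)$ on $[0, 1]$, where $\max|f''| = 2$, so the bound is $h^2/6$:

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule on n equal subintervals of [a, b].
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + j * h) for j in range(1, n)))

f = lambda x: 1.0 / (1.0 + x)
exact = math.log(2.0)

def actual_error(n):
    return abs(exact - trapezoid(f, 0.0, 1.0, n))

def bound(n):
    # (b-a)/12 * h^2 * max|f''| with b-a = 1, h = 1/n, max|f''| = 2.
    return (1.0 / n) ** 2 / 6.0
```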
Example
Recall the example
$$I(f) = \int_0^1 \frac{dx}{1+x} = \ln 2$$
Here $f(x) = \frac{1}{1+x}$, $[a, b] = [0, 1]$, and $f''(x) = \frac{2}{(1+x)^3}$. Then by (6.9),
$$E_n^T(f) = -\frac{h^2}{12}\,f''(c_n), \qquad 0 \le c_n \le 1,\ h = \frac{1}{n}.$$
This cannot be computed exactly since $c_n$ is unknown. But
$$\max_{0 \le x \le 1} |f''(x)| = \max_{0 \le x \le 1} \frac{2}{(1+x)^3} = 2$$
and therefore
$$|E_n^T(f)| \le \frac{h^2}{12}(2) = \frac{h^2}{6}.$$
For $n = 1$ and $n = 2$ we have
$$|\underbrace{E_1^T(f)}_{-0.0569}| \le \frac{1}{6} \doteq 0.167, \qquad |\underbrace{E_2^T(f)}_{-0.0152}| \le \frac{(1/2)^2}{6} \doteq 0.0417.$$
A possible weakness in the trapezoidal rule can be inferred from the assumption of the theorem for the error. If $f(x)$ does not have two continuous derivatives on $[a, b]$, does $T_n(f)$ converge more slowly? YES, for some functions, especially if the first derivative is not continuous.
> 5. Numerical Integration > 5.2.1 Asymptotic estimate of Tn(f)
The error formula (6.9),
$$E_n^T(f) \equiv I(f) - T_n(f) = -\frac{h^2(b-a)}{12}\,f''(c_n),$$
can only be used to bound the error, because $f''(c_n)$ is unknown. This can be improved by a more careful consideration of the error formula. A central element of the proof of (6.9) lies in the local error
$$\int_\alpha^{\alpha+h} f(x)\,dx - h\,\frac{f(\alpha) + f(\alpha+h)}{2} = -\frac{h^3}{12}\,f''(c) \tag{6.10}$$
for some $c \in [\alpha, \alpha+h]$.
Recall the derivation of the trapezoidal rule $T_n(f)$ and use the local error (6.10):
$$E_n^T(f) = \int_a^b f(x)\,dx - T_n(f) = \int_{x_0}^{x_n} f(x)\,dx - T_n(f)$$
$$= \int_{x_0}^{x_1} f(x)\,dx - h\,\frac{f(x_0)+f(x_1)}{2} + \int_{x_1}^{x_2} f(x)\,dx - h\,\frac{f(x_1)+f(x_2)}{2} + \cdots + \int_{x_{n-1}}^{x_n} f(x)\,dx - h\,\frac{f(x_{n-1})+f(x_n)}{2}$$
$$= -\frac{h^3}{12}f''(\gamma_1) - \frac{h^3}{12}f''(\gamma_2) - \cdots - \frac{h^3}{12}f''(\gamma_n)$$
with $\gamma_1 \in [x_0, x_1]$, $\gamma_2 \in [x_1, x_2]$, $\ldots$, $\gamma_n \in [x_{n-1}, x_n]$, and
$$E_n^T(f) = -\frac{h^2}{12}\big(\underbrace{hf''(\gamma_1) + \cdots + hf''(\gamma_n)}_{=(b-a)f''(c_n)}\big), \qquad c_n \in [a, b].$$
To estimate the trapezoidal error, observe that $hf''(\gamma_1) + \cdots + hf''(\gamma_n)$ is a Riemann sum for the integral
$$\int_a^b f''(x)\,dx = f'(b) - f'(a) \tag{6.11}$$
The Riemann sum is based on the partition $[x_0, x_1], [x_1, x_2], \ldots, [x_{n-1}, x_n]$ of $[a, b]$. As $n \to \infty$, this sum approaches the integral (6.11). With (6.11), we find an asymptotic estimate (which improves as $n$ increases):
$$E_n^T(f) \approx -\frac{h^2}{12}\big(f'(b) - f'(a)\big) =: \widetilde{E}_n^T(f). \tag{6.12}$$
As long as $f'(x)$ is computable, $\widetilde{E}_n^T(f)$ will be very easy to compute.
Example
Again consider $I = \int_0^1 \frac{dx}{1+x}$. Then $f'(x) = -\frac{1}{(1+x)^2}$, and the asymptotic estimate (6.12) yields
$$\widetilde{E}_n^T = \frac{-h^2}{12}\left(\frac{-1}{(1+1)^2} - \frac{-1}{(1+0)^2}\right) = \frac{-h^2}{16}, \qquad h = \frac{1}{n}$$
and for $n = 1$ and $n = 2$:
$$\widetilde{E}_1^T = -\tfrac{1}{16} = -0.0625, \qquad \widetilde{E}_2^T \doteq -0.0156$$
$$I - T_1 \doteq -0.0569, \qquad I - T_2 \doteq -0.0152$$
The estimate $\widetilde{E}_n^T(f) = -\frac{h^2}{12}(f'(b) - f'(a))$ has several practical advantages over the earlier formula (6.9), $E_n^T(f) = -\frac{h^2(b-a)}{12}f''(c_n)$:
1. It confirms that when $n$ is doubled (or $h$ is halved), the error decreases by a factor of about 4, provided that $f'(b) - f'(a) \neq 0$. This agrees with the results for $I^{(1)}$ and $I^{(2)}$.
2. (6.12) implies that the convergence of $T_n(f)$ will be more rapid when $f'(b) - f'(a) = 0$. This is a partial explanation of the very rapid convergence observed with $I^{(3)}$.
3. (6.12) leads to a more accurate numerical integration formula by taking $\widetilde{E}_n^T(f)$ into account:
$$I(f) - T_n(f) \approx -\frac{h^2}{12}\big(f'(b) - f'(a)\big)$$
$$I(f) \approx T_n(f) - \frac{h^2}{12}\big(f'(b) - f'(a)\big) =: CT_n(f), \tag{6.13}$$
the corrected trapezoidal rule.
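The corrected rule (6.13) costs only two extra derivative evaluations. A minimal sketch (function names ours):

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule on n equal subintervals.
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + j * h) for j in range(1, n)))

def corrected_trapezoid(f, fprime, a, b, n):
    """Corrected trapezoidal rule CT_n(f) of (6.13); needs f' only at the endpoints."""
    h = (b - a) / n
    return trapezoid(f, a, b, n) - h * h / 12.0 * (fprime(b) - fprime(a))

# Running example: I = int_0^1 dx/(1+x) = ln 2, with f'(x) = -1/(1+x)^2.
f = lambda x: 1.0 / (1.0 + x)
fp = lambda x: -1.0 / (1.0 + x) ** 2
```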
Example
Recall the integral $I^{(1)}$: $I = \int_0^1 e^{-x^2}\,dx \doteq 0.74682413281243$.

  n     I - Tn(f)    Ẽn(f)       CTn(f)            I - CTn(f)    Ratio
  2     1.545E-2     1.533E-2    0.746698561877    1.26E-4
  4     3.840E-3     3.832E-3    0.746816175313    7.96E-6       15.8
  8     9.585E-4     9.580E-4    0.746823634224    4.99E-7       16.0
  16    2.395E-4     2.395E-4    0.746824101633    3.12E-8       16.0
  32    5.988E-5     5.988E-5    0.746824130863    1.95E-9       16.0
  64    1.497E-5     1.497E-5    0.746824132690    2.22E-10      16.0
Table: Example of $CT_n(f)$ and $\widetilde{E}_n(f)$

Note that the estimate
$$\widetilde{E}_n^T(f) = \frac{h^2 e^{-1}}{6}, \qquad h = \frac{1}{n}$$
is a very accurate estimator of the true error. Also, the error in $CT_n(f)$ converges to zero at a more rapid rate than does the error for $T_n(f)$: when $n$ is doubled, the error in $CT_n(f)$ decreases by a factor of about 16.
> 5. Numerical Integration > 5.2.2 Error formulae for Simpson’s rule
Theorem
Assume $f \in C^4[a, b]$ and let $n$ be an even integer. The error in using Simpson's rule is
$$E_n^S(f) = I(f) - S_n(f) = -\frac{h^4(b-a)}{180}\,f^{(4)}(c_n) \tag{6.14}$$
with $c_n \in [a, b]$ an unknown point, and $h = \frac{b-a}{n}$. Moreover, this error can be estimated with the asymptotic error formula
$$\widetilde{E}_n^S(f) = -\frac{h^4}{180}\big(f'''(b) - f'''(a)\big) \tag{6.15}$$
Note that (6.14) says that Simpson's rule is exact for all $f(x)$ that are polynomials of degree $\le 3$, whereas the quadratic interpolation on which Simpson's rule is based is exact only for $f(x)$ a polynomial of degree $\le 2$. The degree of precision being 3 leads to the power $h^4$ in the error, rather than the power $h^3$ that would have been produced on the basis of the error in quadratic interpolation. It is the higher power $h^4$, together with the simple form of the method, that historically caused Simpson's rule to become the most popular numerical integration rule.
Example
Recall (6.7), where $S_2(f)$ was applied to $I = \int_0^1 \frac{dx}{1+x}$:
$$S_2(f) = \frac{1/2}{3}\left(1 + 4\left(\frac{2}{3}\right) + \frac{1}{2}\right) = \frac{25}{36} \doteq 0.69444$$
$$f(x) = \frac{1}{1+x}, \qquad f'''(x) = \frac{-6}{(1+x)^4}, \qquad f^{(4)}(x) = \frac{24}{(1+x)^5}$$
The exact error is given by
$$E_n^S(f) = -\frac{h^4}{180}\,f^{(4)}(c_n), \qquad h = \frac{1}{n}$$
for some $0 \le c_n \le 1$. We can bound it by
$$|E_n^S(f)| \le \frac{h^4}{180}\,24 = \frac{2h^4}{15}.$$
The asymptotic error is given by
$$\widetilde{E}_n^S(f) = -\frac{h^4}{180}\left(\frac{-6}{(1+1)^4} - \frac{-6}{(1+0)^4}\right) = -\frac{h^4}{32}$$
For $n = 2$, $\widetilde{E}_n^S \doteq -0.00195$; the actual error is $-0.00130$.
The behavior of $I(f) - S_n(f)$ can be derived from (6.15):
$$\widetilde{E}_n^S(f) = -\frac{h^4}{180}\big(f'''(b) - f'''(a)\big),$$
i.e., when $n$ is doubled, $h$ is halved, and $h^4$ decreases by a factor of 16. Thus, the error $\widetilde{E}_n^S(f)$ should decrease by the same factor, provided $f'''(a) \neq f'''(b)$. This is the behavior observed with the integrals $I^{(1)}$ and $I^{(2)}$.

When $f'''(a) = f'''(b)$, the error will decrease more rapidly, which is a partial explanation of the rapid convergence for $I^{(3)}$.
The accuracy of an asymptotic error estimate
$$E_n(f) \approx \widetilde{E}_n(f), \tag{6.16}$$
such as those for $\widetilde{E}_n^T(f)$ and $\widetilde{E}_n^S(f)$, will vary with the integrand $f$, as illustrated by the two cases $I^{(1)}$ and $I^{(2)}$.

From (6.14) and (6.15) we infer that Simpson's rule will not perform as well if $f(x)$ is not four times continuously differentiable on $[a, b]$.
Example
Use Simpson's rule to approximate
$$I = \int_0^1 \sqrt{x}\,dx = \frac{2}{3}.$$

  n     Error       Ratio
  2     2.860E-2
  4     1.014E-2    2.82
  8     3.587E-3    2.83
  16    1.268E-3    2.83
  32    4.485E-4    2.83

Table: Simpson's rule for $\sqrt{x}$

The "Ratio" column shows that the convergence is much slower than the factor of 16 expected for smooth integrands.
As was done for the trapezoidal rule, a corrected Simpson's rule can be defined:
$$CS_n(f) = S_n(f) - \frac{h^4}{180}\big(f'''(b) - f'''(a)\big) \tag{6.17}$$
This will usually be a more accurate approximation than $S_n(f)$.
> 5. Numerical Integration > 5.2.3 Richardson extrapolation
The error estimates for the trapezoidal rule (6.12),
$$\widetilde{E}_n^T(f) \approx -\frac{h^2}{12}\big(f'(b) - f'(a)\big),$$
and Simpson's rule (6.15),
$$\widetilde{E}_n^S(f) = -\frac{h^4}{180}\big(f'''(b) - f'''(a)\big),$$
are both of the form
$$I - I_n \approx \frac{c}{n^p} \tag{6.18}$$
where $I_n$ denotes the numerical integral and $h = \frac{b-a}{n}$.

The constants $c$ and $p$ vary with the method and the function. For most integrands $f(x)$, $p = 2$ for the trapezoidal rule and $p = 4$ for Simpson's rule. There are other numerical methods that satisfy (6.18), with other values of $p$ and $c$. We use (6.18) to obtain a computable estimate of the error $I - I_n$, without needing to know $c$ explicitly.
Replacing $n$ by $2n$ in (6.18),
$$I - I_{2n} \approx \frac{c}{2^p n^p} \tag{6.19}$$
and comparing with (6.18),
$$2^p(I - I_{2n}) \approx \frac{c}{n^p} \approx I - I_n.$$
Solving for $I$ gives Richardson's extrapolation formula
$$(2^p - 1)I \approx 2^p I_{2n} - I_n$$
$$I \approx \frac{1}{2^p - 1}\big(2^p I_{2n} - I_n\big) \equiv R_{2n} \tag{6.20}$$
$R_{2n}$ is an improved estimate of $I$, based on using $I_n$, $I_{2n}$, $p$, and the assumption (6.18). How much more accurate it is than $I_{2n}$ depends on the validity of (6.18) and (6.19).
To estimate the error in $I_{2n}$, compare it with the more accurate value $R_{2n}$:
$$I - I_{2n} \approx R_{2n} - I_{2n} = \frac{1}{2^p - 1}\big(2^p I_{2n} - I_n\big) - I_{2n}$$
$$I - I_{2n} \approx \frac{1}{2^p - 1}\big(I_{2n} - I_n\big) \tag{6.21}$$
This is Richardson's error estimate.
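A sketch of (6.20) and (6.21) in Python (the function name `richardson` is ours):

```python
def richardson(I_n, I_2n, p):
    """Richardson extrapolation (6.20) and error estimate (6.21) from two
    computed values I_n, I_2n of a rule whose error behaves like c/n^p."""
    R_2n = (2**p * I_2n - I_n) / (2**p - 1)     # improved estimate of I
    err_est = (I_2n - I_n) / (2**p - 1)         # estimate of I - I_2n
    return R_2n, err_est

# For the trapezoidal rule use p = 2; for Simpson's rule use p = 4.
```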
Example
Using the trapezoidal rule to approximate
$$I = \int_0^1 e^{-x^2}\,dx \doteq 0.74682413281243,$$
we have
$$T_2 \doteq 0.7313702518, \qquad T_4 \doteq 0.7429840978.$$
Using (6.20), $I \approx \frac{1}{2^p-1}(2^p I_{2n} - I_n)$, with $p = 2$ and $n = 2$, we obtain
$$I \approx R_4 = \frac{1}{3}(4I_4 - I_2) = \frac{1}{3}(4T_4 - T_2) \doteq 0.7468553797.$$
The error in $R_4$ is $-0.0000312$; from a previous table, $R_4$ is more accurate than $T_{32}$. To estimate the error in $T_4$, use (6.21) to get
$$I - T_4 \approx \frac{1}{3}(T_4 - T_2) \doteq 0.00387.$$
The actual error in $T_4$ is $0.00384$; thus (6.21) is a very accurate error estimate.
> 5. Numerical Integration > 5.2.4 Periodic Integrands
Definition
A function $f(x)$ is periodic with period $\tau$ if
$$f(x) = f(x + \tau), \qquad \forall x \in \mathbb{R} \tag{6.22}$$
and this relation does not hold for any smaller value of $\tau$.

For example, $f(x) = e^{\cos(\pi x)}$ is periodic with period $\tau = 2$.

If $f(x)$ is periodic and differentiable, then its derivatives are also periodic with period $\tau$.
Consider integrating
$$I = \int_a^b f(x)\,dx$$
with the trapezoidal or Simpson's rule, and assume that $b - a$ is an integer multiple of the period $\tau$. Assume $f(x) \in C^\infty[a, b]$ (it has derivatives of every order). Then for all derivatives of $f(x)$, the periodicity of $f(x)$ implies that
$$f^{(k)}(a) = f^{(k)}(b), \qquad k \ge 0 \tag{6.23}$$
If we now look at the asymptotic error formulae for the trapezoidal and Simpson's rules, they are zero because of (6.23). Thus, the errors $E_n^T(f)$ and $E_n^S(f)$ should converge to zero more rapidly when $f(x)$ is a periodic function, provided $b - a$ is an integer multiple of the period of $f$.
The asymptotic error formulae $\widetilde{E}_n^T(f)$ and $\widetilde{E}_n^S(f)$ can be extended to higher-order terms in $h$ using the Euler-MacLaurin expansion; the higher-order terms are multiples of $f^{(k)}(b) - f^{(k)}(a)$ for all odd integers $k \ge 1$. Using this, one can prove that the errors $E_n^T(f)$ and $E_n^S(f)$ converge to zero even more rapidly than implied by the earlier comments when $f(x)$ is periodic.

Note that the trapezoidal rule is the preferred integration rule when dealing with smooth periodic integrands. The earlier results for the integral $I^{(3)}$ illustrate this.
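The rapid convergence for a smooth periodic integrand is easy to see numerically; a sketch using $I^{(3)}$ (names ours):

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule on n equal subintervals.
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + j * h) for j in range(1, n)))

# I(3): a smooth periodic integrand integrated over a full period [0, 2*pi].
f = lambda x: 1.0 / (2.0 + math.cos(x))
I3 = 2.0 * math.pi / math.sqrt(3.0)
# Because f^(k)(0) = f^(k)(2*pi) for every k, T_n converges far faster than O(h^2).
```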
Example
The ellipse with boundary
$$\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = 1$$
has area $\pi ab$. For the case in which the area is $\pi$ (and thus $ab = 1$), we study the variation of the perimeter of the ellipse as $a$ and $b$ vary.

The ellipse has the parametric representation
$$(x, y) = (a\cos\theta,\ b\sin\theta), \qquad 0 \le \theta \le 2\pi \tag{6.24}$$
By using the standard formula for the perimeter, and using the symmetry of the ellipse about the $x$-axis, the perimeter is given by
$$P = 2\int_0^\pi \sqrt{\left(\frac{dx}{d\theta}\right)^2 + \left(\frac{dy}{d\theta}\right)^2}\,d\theta = 2\int_0^\pi \sqrt{a^2\sin^2\theta + b^2\cos^2\theta}\,d\theta$$
Since $ab = 1$, we write this as
$$P(b) = 2\int_0^\pi \sqrt{\frac{1}{b^2}\sin^2\theta + b^2\cos^2\theta}\,d\theta = \frac{2}{b}\int_0^\pi \sqrt{(b^4-1)\cos^2\theta + 1}\,d\theta \tag{6.25}$$
We consider only the case $1 \le b < \infty$. Since the perimeters of the two ellipses
$$\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = 1 \qquad\text{and}\qquad \left(\frac{x}{b}\right)^2 + \left(\frac{y}{a}\right)^2 = 1$$
are equal, we can always consider the case in which the $y$-axis of the ellipse is larger than or equal to its $x$-axis; this also shows
$$P\!\left(\frac{1}{b}\right) = P(b), \qquad b > 0 \tag{6.26}$$
The integrand of P (b)
f(θ) =2b
[(b4 − 1) cos2 θ + 1
] 12
is periodic with period π. As discussed above, the trapezoidal rule is the
natural choice for numerical integration of (6.25). Nonetheless, there is a
variation in the behaviour of f(θ) as b varies, and this will affect the accuracy
of the numerical integration.
Figure: The graph of the integrand $f(\theta)$ for $b = 2, 5, 8$
  n      b = 2       b = 5        b = 8
  8      8.575517    19.918814    31.690628
  16     8.578405    20.044483    31.953632
  32     8.578422    20.063957    32.008934
  64     8.578422    20.065672    32.018564
  128    8.578422    20.065716    32.019660
  256    8.578422    20.065717    32.019709

Table: Trapezoidal rule approximation of (6.25)

Note that as $b$ increases, the trapezoidal rule converges more slowly. This is due to the integrand $f(\theta)$ changing more rapidly as $b$ increases. For large $b$, $f(\theta)$ changes very rapidly in the vicinity of $\theta = \frac{\pi}{2}$, and this causes the trapezoidal rule to be less accurate than when $b$ is smaller, near 1. To obtain a given accuracy in the perimeter $P(b)$, we must increase $n$ as $b$ increases.
Figure: The graph of the perimeter function $P(b)$ for the ellipse

The graph of $P(b)$ reveals that $P(b) \approx 4b$ for large $b$. Returning to (6.25), we have for large $b$
$$P(b) \approx \frac{2}{b}\int_0^\pi \left(b^4\cos^2\theta\right)^{1/2} d\theta = \frac{2}{b}\,b^2\int_0^\pi |\cos\theta|\,d\theta = 4b$$
We need to estimate the error in the above approximation to know when we can use it to replace $P(b)$; but it provides a way to avoid the integration of (6.25) for the most badly behaved cases.
> 5. Numerical Integration > Review and more
Review
$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx \underbrace{\int_{x_j}^{x_{j+1}} p_n(x)\,dx}_{\equiv I_j}$$
where $p_n(x)$ interpolates at points $x_j^{(0)}, x_j^{(1)}, \ldots, x_j^{(n)}$ on $[x_j, x_{j+1}]$.

Local error:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j = \int_{x_j}^{x_{j+1}} \frac{f^{(n+1)}(\xi)}{(n+1)!}\,\psi(x)\,dx$$
(the integrand is the error in interpolation), where
$$\psi(x) = (x - x_j^{(0)})(x - x_j^{(1)})\cdots(x - x_j^{(n)}).$$
Figure: The graph of $\psi(x)$ on $[x_j, x_{j+1}]$

Conclusion: exact on $\mathcal{P}_n$.
1. $|\text{local error}| \le C \max|f^{(n+1)}(x)|\,h^{n+2}$
2. $|\text{global error}| \le C \max|f^{(n+1)}(x)|\,h^{n+1}(b-a)$
Observation: If $\xi$ is a point in $(x_j, x_{j+1})$, then
$$g(\xi) = g(x_{j+1/2}) + O(h)$$
(if $g'$ is continuous), i.e.,
$$g(\xi) = g(x_{j+1/2}) + \underbrace{\underbrace{(\xi - x_{j+1/2})}_{\le h}\,g'(\eta)}_{O(h)}$$
Local error:
$$\frac{1}{(n+1)!}\int_{x_j}^{x_{j+1}} \underbrace{f^{(n+1)}(\xi)}_{= f^{(n+1)}(x_{j+1/2}) + O(h)} \psi(x)\,dx$$
$$= \underbrace{\frac{1}{(n+1)!}\,f^{(n+1)}(x_{j+1/2}) \int_{x_j}^{x_{j+1}} \psi(x)\,dx}_{\text{Dominant Term } O(h^{n+2})} + \underbrace{\frac{1}{(n+1)!}\int_{x_j}^{x_{j+1}} \underbrace{f^{(n+2)}(\eta(x))}_{\text{take max out}}\ \underbrace{(\xi - x_{j+1/2})}_{\text{integrate}}\ \psi(x)\,dx}_{\text{Higher Order Terms } O(h^{n+3}),\ \le \frac{C}{(n+1)!}\max|f^{(n+2)}|\,h^{n+3}}$$
The dominant term: case N = 1, Trapezoidal Rule
Figure: $\psi(x) = (x - x_j)(x - x_{j+1})$
The dominant term: case N = 2, Simpson’s Rule
$$\psi(x) = (x - x_j)(x - x_{j+1/2})(x - x_{j+1}) \;\Rightarrow\; \int_{x_j}^{x_{j+1}} \psi(x)\,dx = 0$$
Figure: $\psi(x) = (x - x_j)(x - x_{j+1/2})(x - x_{j+1})$
The dominant term: case N = 3, Simpson’s 3/8’s Rule
Local error $= O(h^5)$

Figure: $\psi(x) = (x - x_j)(x - x_{j+1/3})(x - x_{j+2/3})(x - x_{j+1})$
The dominant term: case N = 4
$$\int \psi(x)\,dx = 0 \;\Rightarrow\; \text{local error} = O(h^7)$$

Figure: $\psi(x)$ for the case $N = 4$
Simpson’s Rule
Is exact on $\mathcal{P}_2$ (and in fact on $\mathcal{P}_3$). With $x_{j+1/2} = (x_j + x_{j+1})/2$, seek
$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx w_j f(x_j) + w_{j+1/2} f(x_{j+1/2}) + w_{j+1} f(x_{j+1})$$
Exact on $1, x, x^2$:
$$1:\quad \int_{x_j}^{x_{j+1}} 1\,dx = x_{j+1} - x_j = w_j\cdot 1 + w_{j+1/2}\cdot 1 + w_{j+1}\cdot 1,$$
$$x:\quad \int_{x_j}^{x_{j+1}} x\,dx = \frac{x_{j+1}^2}{2} - \frac{x_j^2}{2} = w_j x_j + w_{j+1/2}\,x_{j+1/2} + w_{j+1}\,x_{j+1},$$
$$x^2:\quad \int_{x_j}^{x_{j+1}} x^2\,dx = \frac{x_{j+1}^3}{3} - \frac{x_j^3}{3} = w_j x_j^2 + w_{j+1/2}\,x_{j+1/2}^2 + w_{j+1}\,x_{j+1}^2.$$
The $3\times 3$ linear system has the solution
$$w_j = \frac{1}{6}(x_{j+1}-x_j), \qquad w_{j+1/2} = \frac{4}{6}(x_{j+1}-x_j), \qquad w_{j+1} = \frac{1}{6}(x_{j+1}-x_j).$$
Theorem
Let $h = \max_j(x_{j+1} - x_j)$ and let $I(f)$ denote the Simpson's rule approximation. Then
$$\left|\int_a^b f(x)\,dx - I(f)\right| \le \frac{b-a}{2880}\,h^4 \max_{a \le x \le b} \left|f^{(4)}(x)\right|.$$
Trapezoid rule versus Simpson's rule

$$\frac{\text{Cost in TR}}{\text{Cost in SR}} = \frac{2 \text{ function evaluations/interval} \times \text{no. intervals}}{3 \text{ function evaluations/interval} \times \text{no. intervals}} = \frac{2}{3}$$
(reducible to $\frac{1}{2}$ if storing the previous values)

$$\frac{\text{Error bound in TR}}{\text{Error bound in SR}} = \frac{h^2\,\frac{b-a}{12}}{h^4\,\frac{b-a}{2880}} = \frac{240}{h^2}.$$
E.g., for $h = \frac{1}{100}$ this ratio is $2.4 \times 10^6$; i.e., SR is more accurate than TR by a factor of $2.4 \times 10^6$.
What if there is round-off error?

Suppose we use the method with
$$f(x_j)_{\text{computed}} = f(x_j)_{\text{true}} \pm \varepsilon_j, \qquad \varepsilon_j = O(\underbrace{\text{machine precision}}_{=\varepsilon})$$
$$\int_a^b f(x)\,dx \approx \sum_j (x_{j+1} - x_j)\,\frac{f(x_{j+1})_{\text{computed}} + f(x_j)_{\text{computed}}}{2}$$
$$= \sum_j (x_{j+1} - x_j)\,\frac{f(x_{j+1}) \pm \varepsilon_{j+1} + f(x_j) \pm \varepsilon_j}{2}$$
$$= \underbrace{\sum_j (x_{j+1} - x_j)\,\frac{f(x_{j+1}) + f(x_j)}{2}}_{\text{value in exact arithmetic}} + \underbrace{\sum_j (x_{j+1} - x_j)\,\frac{\pm\varepsilon_{j+1} \pm \varepsilon_j}{2}}_{\text{contribution of round-off error, } \le\, \varepsilon(b-a)}$$
> 5. Numerical Integration > 5.3 Gaussian Numerical Integration
The numerical methods studied in the first two sections were based on integrating
1. linear (trapezoidal rule) and
2. quadratic (Simpson's rule)
interpolating polynomials, and the resulting formulae were applied on subdivisions of ever smaller subintervals.

We now consider a numerical method based on exact integration of polynomials of increasing degree; no subdivision of the integration interval $[a, b]$ is used. Recall Section 4.4 of Chapter 4 on approximation of functions.
Let $f(x) \in C[a, b]$. Then $\rho_n(f)$ denotes the smallest error bound that can be attained in approximating $f(x)$ with a polynomial $p(x)$ of degree $\le n$ on the given interval $a \le x \le b$. The polynomial $m_n(x)$ that yields this approximation is called the minimax approximation of degree $n$ for $f(x)$:
$$\max_{a \le x \le b} |f(x) - m_n(x)| = \rho_n(f) \tag{6.27}$$
and $\rho_n(f)$ is called the minimax error.
Example
Let $f(x) = e^{-x^2}$ for $x \in [0, 1]$.

  n   ρn(f)       n    ρn(f)
  1   5.30E-2     6    7.82E-6
  2   1.79E-2     7    4.62E-7
  3   6.63E-4     8    9.64E-8
  4   4.63E-4     9    8.05E-9
  5   1.62E-5     10   9.16E-10

Table: Minimax errors for $e^{-x^2}$, $0 \le x \le 1$

The minimax errors $\rho_n(f)$ converge to zero rapidly, although not at a uniform rate.
If we have a numerical integration formula that integrates low- to moderate-degree polynomials exactly, then the hope is that the same formula will integrate other functions $f(x)$ almost exactly, provided $f(x)$ is well approximated by such polynomials.
To illustrate the derivation of such integration formulae, we restrict ourselves to the integral
$$I(f) = \int_{-1}^{1} f(x)\,dx.$$
The integration formula is to have the general form (the Gaussian numerical integration method)
$$I_n(f) = \sum_{j=1}^{n} w_j f(x_j) \tag{6.28}$$
and we require that the nodes $\{x_1, \ldots, x_n\}$ and weights $\{w_1, \ldots, w_n\}$ be chosen so that $I_n(f) = I(f)$ for all polynomials $f(x)$ of as large a degree as possible.
Case n = 1
The integration formula has the form
$$\int_{-1}^{1} f(x)\,dx \approx w_1 f(x_1) \tag{6.29}$$
Using $f(x) \equiv 1$ and forcing equality in (6.29) gives $2 = w_1$. Using $f(x) = x$ gives $0 = w_1 x_1$, which implies $x_1 = 0$. Hence (6.29) becomes
$$\int_{-1}^{1} f(x)\,dx \approx 2f(0) \equiv I_1(f) \tag{6.30}$$
This is the midpoint formula, and it is exact for all linear polynomials.

To see that (6.30) is not exact for quadratics, let $f(x) = x^2$. Then the error in (6.30) is
$$\int_{-1}^{1} x^2\,dx - 2\cdot(0)^2 = \frac{2}{3} \neq 0,$$
hence (6.30) has degree of precision 1.
Case n = 2
The integration formula is
$$\int_{-1}^{1} f(x)\,dx \approx w_1 f(x_1) + w_2 f(x_2) \tag{6.31}$$
and it has four unspecified quantities: $x_1, x_2, w_1, w_2$. To determine these, we require (6.31) to be exact for the four monomials $f(x) = 1, x, x^2, x^3$, obtaining 4 equations:
$$2 = w_1 + w_2$$
$$0 = w_1 x_1 + w_2 x_2$$
$$\tfrac{2}{3} = w_1 x_1^2 + w_2 x_2^2$$
$$0 = w_1 x_1^3 + w_2 x_2^3$$
This is a nonlinear system with a solution
$$w_1 = w_2 = 1, \qquad x_1 = -\frac{\sqrt{3}}{3}, \quad x_2 = \frac{\sqrt{3}}{3} \tag{6.32}$$
and another one obtained by reversing the signs of $x_1$ and $x_2$. This yields the integration formula
$$\int_{-1}^{1} f(x)\,dx \approx f\!\left(-\frac{\sqrt{3}}{3}\right) + f\!\left(\frac{\sqrt{3}}{3}\right) \equiv I_2(f) \tag{6.33}$$
which has degree of precision 3 (exact for all polynomials of degree $\le 3$, and not exact for $f(x) = x^4$).
Example
Approximate
$$I = \int_{-1}^{1} e^x\,dx = e - e^{-1} \doteq 2.3504024$$
Using
$$\int_{-1}^{1} f(x)\,dx \approx f\!\left(-\frac{\sqrt{3}}{3}\right) + f\!\left(\frac{\sqrt{3}}{3}\right) \equiv I_2(f)$$
we get
$$I_2 = e^{-\sqrt{3}/3} + e^{\sqrt{3}/3} \doteq 2.3426961, \qquad I - I_2 \doteq 0.00771.$$
The error is quite small, considering that we are using only 2 node points.
Case n > 2
We seek the formula (6.28),
$$I_n(f) = \sum_{j=1}^{n} w_j f(x_j),$$
which has $2n$ unspecified parameters $x_1, \ldots, x_n, w_1, \ldots, w_n$, by forcing the integration formula to be exact for the $2n$ monomials
$$f(x) = 1, x, x^2, \ldots, x^{2n-1}.$$
In turn, this forces $I_n(f) = I(f)$ for all polynomials $f$ of degree $\le 2n - 1$. This leads to the following system of $2n$ nonlinear equations in $2n$ unknowns:
$$2 = w_1 + w_2 + \cdots + w_n$$
$$0 = w_1 x_1 + w_2 x_2 + \cdots + w_n x_n$$
$$\tfrac{2}{3} = w_1 x_1^2 + w_2 x_2^2 + \cdots + w_n x_n^2$$
$$\vdots$$
$$\tfrac{2}{2n-1} = w_1 x_1^{2n-2} + w_2 x_2^{2n-2} + \cdots + w_n x_n^{2n-2}$$
$$0 = w_1 x_1^{2n-1} + w_2 x_2^{2n-1} + \cdots + w_n x_n^{2n-1} \tag{6.34}$$
The resulting formula $I_n(f)$ has degree of precision $2n - 1$.
Solving this system is a formidable problem. The nodes {xi} and weights{wi} have been calculated and collected in tables for most commonly usedvalues of n.
    n   xi               wi
    2   ±0.5773502692    1.0
    3   ±0.7745966692    0.5555555556
         0.0             0.8888888889
    4   ±0.8611363116    0.3478548451
        ±0.3399810436    0.6521451549
    5   ±0.9061798459    0.2369268851
        ±0.5384693101    0.4786286705
         0.0             0.5688888889
    6   ±0.9324695142    0.1713244924
        ±0.6612093865    0.3607651730
        ±0.2386191861    0.4679139346
    7   ±0.9491079123    0.1294849662
        ±0.7415311856    0.2797053915
        ±0.4058451514    0.3818300505
         0.0             0.4179591837
    8   ±0.9602898565    0.1012285363
        ±0.7966664774    0.2223810345
        ±0.5255324099    0.3137066459
        ±0.1834346425    0.3626837834
Table: Nodes and weights for Gaussian quadrature formulae
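The tabulated values can be regenerated independently with NumPy's Gauss–Legendre routine (a sketch, assuming NumPy is available):

```python
import numpy as np

# numpy.polynomial.legendre.leggauss(n) returns the n nodes and weights
# of the Gauss-Legendre rule on [-1, 1].
for n in (2, 3, 4):
    x, w = np.polynomial.legendre.leggauss(n)
    print(n, x.round(10), w.round(10))
```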
There is also another approach to the development of the numerical integration formula (6.28), using the theory of orthogonal polynomials. From that theory, it can be shown that the nodes {x1, ..., xn} are the zeros of the Legendre polynomial of degree n on the interval [-1, 1]. Recall that these polynomials were introduced in Section 4.7. For example,

    P2(x) = (1/2)(3x^2 - 1)

and its roots are the nodes given in (6.32),

    x1 = -√3/3,  x2 = √3/3.

Since the Legendre polynomials are well known, the nodes {xj} can be found without any recourse to the nonlinear system (6.34).
The sequence of formulae (6.28) is called the Gaussian numerical integration method. From its definition, In(f) uses n nodes, and it is exact for all polynomials of degree ≤ 2n - 1. In(f) is limited to ∫_{-1}^{1} f(x)dx, an integral over [-1, 1]. But this limitation is easily removed. Given an integral

    I(f) = ∫_a^b f(x)dx                                               (6.35)

introduce the linear change of variable

    x = (b + a + t(b - a))/2,   -1 ≤ t ≤ 1                            (6.36)

transforming the integral to

    I(f) = ((b - a)/2) ∫_{-1}^{1} f̃(t)dt                              (6.37)

with

    f̃(t) = f((b + a + t(b - a))/2)

Now apply In to this new integral.
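The change of variable (6.36)–(6.37) is mechanical; a minimal sketch (the function name `gauss_ab` is my own):

```python
import math
import numpy as np

def gauss_ab(f, a, b, n):
    """n-point Gauss-Legendre quadrature mapped to [a, b] via (6.36)-(6.37)."""
    t, w = np.polynomial.legendre.leggauss(n)
    x = (b + a + t * (b - a)) / 2           # (6.36)
    return (b - a) / 2 * np.sum(w * f(x))   # (6.37)

# e.g. integrate e^x over [0, 1]; the true value is e - 1
print(abs(gauss_ab(np.exp, 0.0, 1.0, 4) - (math.e - 1)))  # very small
```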
Example

Apply Gaussian numerical integration to the three integrals

    I^(1) = ∫_0^1 e^{-x^2} dx,   I^(2) = ∫_0^4 dx/(1 + x^2),   I^(3) = ∫_0^{2π} dx/(2 + cos(x)),

which were used as examples for the trapezoidal and Simpson's rules. All are reformulated as integrals over [-1, 1]. The error results are

    n    Error in I^(1)   Error in I^(2)   Error in I^(3)
    2     2.29E-4         -2.33E-2          8.23E-1
    3     9.55E-6         -3.49E-2         -4.30E-1
    4    -3.35E-7          1.90E-3          1.77E-1
    5     6.05E-9          1.70E-3         -8.12E-2
    6    -7.77E-11         2.74E-4          3.55E-2
    7     7.89E-13        -6.45E-5         -1.58E-2
    10    *                1.27E-6          1.37E-3
    15    *                7.40E-10        -2.33E-5
    20    *                *                3.96E-7
If these results are compared to those of the trapezoidal and Simpson's rules, then Gaussian integration of I^(1) and I^(2) is much more efficient than the trapezoidal rule, but integration of the periodic integrand I^(3) is not as efficient as with the trapezoidal rule. These results are also typical for most other integrals.

Except for periodic integrands, Gaussian numerical integration is usually much more accurate than the trapezoidal and Simpson rules. This is true even for many integrals in which the integrand does not have a continuous derivative.
Example

Use Gaussian integration on

    I = ∫_0^1 √x dx = 2/3

The results are

    n    I - In      Ratio
    2   -7.22E-3
    4   -1.16E-3     6.2
    8   -1.69E-4     6.9
    16  -2.30E-5     7.4
    32  -3.00E-6     7.6
    64  -3.84E-7     7.8

where n is the number of node points. The ratio column is defined as

    (I - I_{n/2}) / (I - I_n)

and it shows that the error behaves like

    I - In ≈ c/n^3                                                    (6.38)

for some c. The error using Simpson's rule has an empirical rate of convergence proportional to only 1/n^{1.5}, a much slower rate than (6.38).
A result that relates the minimax error to the Gaussian numerical integration error.

Theorem

Let f ∈ C[a, b], n ≥ 1. Then, if we apply Gaussian numerical integration to I = ∫_a^b f(x)dx, the error in In satisfies

    |I(f) - In(f)| ≤ 2(b - a) ρ_{2n-1}(f)                             (6.39)

where ρ_{2n-1}(f) is the minimax error of degree 2n - 1 for f(x) on [a, b].
Example

Using the table

    n   ρn(f)       n    ρn(f)
    1   5.30E-2     6    7.82E-6
    2   1.79E-2     7    4.62E-7
    3   6.63E-4     8    9.64E-8
    4   4.63E-4     9    8.05E-9
    5   1.62E-5     10   9.16E-10

apply (6.39) to

    I = ∫_0^1 e^{-x^2} dx

For n = 3, the above bound implies

    |I - I3| ≤ 2 ρ5(e^{-x^2}) ≐ 3.24 × 10^{-5}.

The actual error is 9.95E-6.
Gaussian numerical integration is not as simple to use as the trapezoidal and Simpson rules, partly because the Gaussian nodes and weights do not have simple formulae and also because the error is harder to predict. Nonetheless, the increase in the speed of convergence is so rapid and dramatic in most instances that the method should always be considered seriously when one is doing many integrations. Estimating the error is quite difficult, and most people satisfy themselves by comparing two or more successive values. If n is doubled, then comparing the two successive values In and I2n is almost always adequate for estimating the error in In:

    I - In ≈ I2n - In

This is somewhat inefficient, but the speed of convergence of In is so rapid that this will still not diminish its advantage over most other methods.
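The practical estimate I − In ≈ I2n − In is easy to apply in code; a sketch:

```python
import math
import numpy as np

def In(f, n):
    """n-point Gauss-Legendre approximation of the integral of f over [-1, 1]."""
    t, w = np.polynomial.legendre.leggauss(n)
    return float(np.sum(w * f(t)))

f = lambda x: np.exp(-x**2)
true = math.sqrt(math.pi) * math.erf(1.0)  # integral of e^{-x^2} over [-1, 1]
est = In(f, 10) - In(f, 5)                 # estimates the error in I5
print(est, true - In(f, 5))                # the two agree closely
```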
> 5. Numerical Integration > 5.3.1 Weighted Gaussian Quadrature
A common problem is the evaluation of integrals of the form

    I(f) = ∫_a^b w(x) f(x)dx                                          (6.40)

with f(x) a "well-behaved" function and w(x) a possibly (and often) ill-behaved function. Gaussian quadrature has been generalized to handle such integrals for many functions w(x). Examples include

    ∫_{-1}^{1} f(x)/√(1 - x^2) dx,   ∫_0^1 √x f(x)dx,   ∫_0^1 f(x) ln(1/x)dx.

The function w(x) is called a weight function.
We begin by imitating the development given earlier in this section, and we do so for the special case of

    I(f) = ∫_0^1 f(x)/√x dx

in which w(x) = 1/√x. As before, we seek numerical integration formulae of the form

    In(f) = Σ_{j=1}^{n} wj f(xj)                                      (6.41)

and we require that the nodes {x1, ..., xn} and the weights {w1, ..., wn} be so chosen that In(f) = I(f) for polynomials f(x) of as large a degree as possible.
Case n = 1

The integration formula has the form

    ∫_0^1 f(x)/√x dx ≈ w1 f(x1)

We force equality for f(x) = 1 and f(x) = x. This leads to the equations

    w1    = ∫_0^1 1/√x dx = 2
    w1 x1 = ∫_0^1 x/√x dx = 2/3

Solving for w1 and x1, we obtain the formula

    ∫_0^1 f(x)/√x dx ≈ 2 f(1/3)                                       (6.42)

and it has degree of precision 1.
Case n = 2

The integration formula has the form

    ∫_0^1 f(x)/√x dx ≈ w1 f(x1) + w2 f(x2)                            (6.43)

We force equality for f(x) = 1, x, x^2, x^3. This leads to the equations

    w1 + w2           = ∫_0^1 1/√x dx   = 2
    w1 x1 + w2 x2     = ∫_0^1 x/√x dx   = 2/3
    w1 x1^2 + w2 x2^2 = ∫_0^1 x^2/√x dx = 2/5
    w1 x1^3 + w2 x2^3 = ∫_0^1 x^3/√x dx = 2/7

This has the solution

    x1 = 3/7 - (2/35)√30 ≐ 0.11559,   x2 = 3/7 + (2/35)√30 ≐ 0.74156
    w1 = 1 + (1/18)√30 ≐ 1.30429,     w2 = 1 - (1/18)√30 ≐ 0.69571

The resulting formula (6.43) has degree of precision 3.
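The closed-form solution can be verified against the four moment equations (the moments are ∫_0^1 x^k/√x dx = 2/(2k+1)); a sketch:

```python
import math

# Nodes/weights for w(x) = 1/sqrt(x) on [0, 1], n = 2, from the closed forms above.
s30 = math.sqrt(30)
x1, x2 = 3/7 - 2*s30/35, 3/7 + 2*s30/35
w1, w2 = 1 + s30/18, 1 - s30/18

# The rule reproduces the moments 2/(2k+1) for k = 0..3,
# confirming degree of precision 3.
for k in range(4):
    assert abs(w1 * x1**k + w2 * x2**k - 2 / (2*k + 1)) < 1e-12
print(x1, x2, w1, w2)
```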
Case n > 2

We seek formula (6.41), which has 2n unspecified parameters, x1, ..., xn, w1, ..., wn, by forcing the integration formula to be exact for the 2n monomials

    f(x) = 1, x, x^2, ..., x^{2n-1}.

In turn, this forces In(f) = I(f) for all polynomials f of degree ≤ 2n - 1. This leads to the following system of 2n nonlinear equations in 2n unknowns:

    w1 + w2 + ... + wn                               = 2
    w1 x1 + w2 x2 + ... + wn xn                      = 2/3
    w1 x1^2 + w2 x2^2 + ... + wn xn^2                = 2/5
    ...
    w1 x1^{2n-1} + w2 x2^{2n-1} + ... + wn xn^{2n-1} = 2/(4n-1)       (6.44)

The resulting formula In(f) has degree of precision 2n - 1.

As before, this system is very difficult to solve directly, but there are alternative methods of deriving {xi} and {wi}. They are based on looking at the polynomials that are orthogonal with respect to the weight function w(x) = 1/√x on the interval [0, 1].
Example

We evaluate

    I = ∫_0^1 cos(πx)/√x dx ≐ 0.74796566683146

using (6.42),

    ∫_0^1 f(x)/√x dx ≈ 2 f(1/3),   giving I1 = 2 cos(π/3) = 1.0,

and (6.43),

    ∫_0^1 f(x)/√x dx ≈ w1 f(x1) + w2 f(x2),   giving I2 ≐ 0.740519.

I2 is a reasonable estimate of I, with I - I2 ≐ 0.00745.
A general theory can be developed for the weighted Gaussian quadrature

    I(f) = ∫_a^b w(x) f(x)dx ≈ Σ_{j=1}^{n} wj f(xj) = In(f)           (6.45)

It requires the following assumptions for the weight function w(x):

1  w(x) > 0 for a < x < b;
2  for all integers n ≥ 0,

       ∫_a^b w(x)|x|^n dx < ∞

These hypotheses are the same as were assumed for the generalized least squares approximation theory following Section 4.7 of Chapter 4. This is not accidental, since both Gaussian quadrature and least squares approximation theory depend on the subject of orthogonal polynomials. The node points {xj} solving the system (6.44) are the zeros of the degree n orthogonal polynomial on [0, 1] with respect to the weight function w(x) = 1/√x.

For the generalization (6.45), the nodes {xi} are the zeros of the degree n orthogonal polynomial on [a, b] with respect to the weight function w(x).
> 5. Numerical Integration > Supplement
Gauss's idea:

The optimal abscissas of the κ-point Gaussian quadrature formula are precisely the roots of the orthogonal polynomial for the same interval and weighting function.

    ∫_a^b f(x)dx = Σ_j ∫_{xj}^{xj+1} f(x)dx                           (composite formula)

                 = Σ_j ∫_{-1}^{1} f( ((xj+1 - xj)/2) t + (xj+1 + xj)/2 ) · ((xj+1 - xj)/2) dt

and each resulting integral ∫_{-1}^{1} g(t)dt is approximated by

    ∫_{-1}^{1} g(t)dt ≈ Σ_{ℓ=1}^{κ} wℓ g(qℓ)                          (κ-point Gauss rule for max accuracy)

w1, ..., wκ: weights; q1, ..., qκ: quadrature points on (-1, 1). Exact on polynomials p ∈ P_{2κ-1}, i.e., on 1, t, t^2, ..., t^{2κ-1}.
Example: 3-point Gauss, exact on P5 ⇔ exact on 1, t, t^2, t^3, t^4, t^5

    ∫_{-1}^{1} g(t)dt ≈ w1 g(q1) + w2 g(q2) + w3 g(q3)

    ∫_{-1}^{1} 1 dt   = 2   = w1 + w2 + w3
    ∫_{-1}^{1} t dt   = 0   = w1 q1 + w2 q2 + w3 q3
    ∫_{-1}^{1} t^2 dt = 2/3 = w1 q1^2 + w2 q2^2 + w3 q3^2
    ∫_{-1}^{1} t^3 dt = 0   = w1 q1^3 + w2 q2^3 + w3 q3^3
    ∫_{-1}^{1} t^4 dt = 2/5 = w1 q1^4 + w2 q2^4 + w3 q3^4
    ∫_{-1}^{1} t^5 dt = 0   = w1 q1^5 + w2 q2^5 + w3 q3^5

Guess: q1 = -q3, q2 = 0 (q1 ≤ q2 ≤ q3), w1 = w3.
Example: 3-point Gauss, exact on P5 ⇔ exact on 1, t, t^2, t^3, t^4, t^5

With this guess:

    2 w1 + w2 = 2
    2 w1 q1^2 = 2/3
    2 w1 q1^4 = 2/5

hence

    q1 = -√(3/5),  q3 = √(3/5),  w1 = w3 = 5/9,  w2 = 8/9

A. H. Stroud and D. Secrest, "Gaussian Quadrature Formulas". Englewood Cliffs, NJ: Prentice-Hall, 1966.
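A numerical confirmation of the derived 3-point rule (a sketch; `gauss3` is my own name):

```python
import math

# Three-point Gauss rule derived above: nodes -sqrt(3/5), 0, sqrt(3/5),
# weights 5/9, 8/9, 5/9.
q = math.sqrt(3 / 5)

def gauss3(g):
    return 5/9 * g(-q) + 8/9 * g(0.0) + 5/9 * g(q)

# Exact on all monomials up to t^5 over [-1, 1].
for k in range(6):
    exact = 2 / (k + 1) if k % 2 == 0 else 0.0
    assert abs(gauss3(lambda t: t**k) - exact) < 1e-14
```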
1 The idea of Gauss

Gauss-Lobatto:

    ∫_{-1}^{1} g(t)dt ≈ w1 g(-1) + w2 g(q2) + ... + w_{k-1} g(q_{k-1}) + wk g(1)

A k-point formula with only k - 2 freely located interior nodes; it is exact on P_{2k-3} (degree of precision decreased by 2 compared with the k-point Gauss quadrature formula).
Adaptive Quadrature

Problem

Given ∫_a^b f(x)dx and ε, a preassigned tolerance, compute

    I(f) ≈ ∫_a^b f(x)dx

(a) to assured accuracy:

    |∫_a^b f(x)dx - I(f)| < ε

(b) at minimal / near-minimal cost (number of function evaluations).

Strategy: LOCALIZE!
Localization Theorem

Let I(f) = Σ_j Ij(f), where Ij(f) ≈ ∫_{xj}^{xj+1} f(x)dx. If

    |∫_{xj}^{xj+1} f(x)dx - Ij(f)| < ε(xj+1 - xj)/(b - a)    (= local tolerance),

then

    |∫_a^b f(x)dx - I(f)| < ε    (= tolerance)

Proof:

    |∫_a^b f(x)dx - I(f)| = |Σ_j ∫_{xj}^{xj+1} f(x)dx - Σ_j Ij(f)|
                          = |Σ_j ( ∫_{xj}^{xj+1} f(x)dx - Ij(f) )|
                          ≤ Σ_j |∫_{xj}^{xj+1} f(x)dx - Ij(f)|
                          < Σ_j ε(xj+1 - xj)/(b - a)
                          = (ε/(b - a)) Σ_j (xj+1 - xj) = (ε/(b - a))(b - a) = ε.
Need:

an estimator for the local error

and

a strategy:
  when to cut h, to ensure accuracy?
  when to increase h, to ensure minimal cost?

One approach: halving and doubling!

Recall: Trapezoidal rule

    Ij = (xj+1 - xj) (f(xj) + f(xj+1))/2.

A priori estimate:

    ∫_{xj}^{xj+1} f(x)dx - Ij = -((xj+1 - xj)^3/12) f''(sj)

for some sj in (xj, xj+1).
Step 1: compute Ij

    Ij = (f(xj) + f(xj+1))/2 · (xj+1 - xj)

Step 2: cut the interval in half and reuse the trapezoidal rule

    Ĩj = (f(xj) + f(x_{j+1/2}))/2 · (x_{j+1/2} - xj) + (f(x_{j+1/2}) + f(xj+1))/2 · (xj+1 - x_{j+1/2})

Error estimates (hj = xj+1 - xj):

    ∫_{xj}^{xj+1} f(x)dx - Ij = -(hj^3/12) f''(ξj) = ej                   (1st use of trapezoid rule)

    ∫_{xj}^{xj+1} f(x)dx - Ĩj = -((hj/2)^3/12) f''(η1) - ((hj/2)^3/12) f''(η2)   (2nd use of TR)
                              = (1/4)(-(hj^3/12) f''(ξj)) + O(hj^4) = (1/4) ej + O(hj^4) ≡ ẽj
So ej = 4ẽj + higher-order terms. Subtracting,

    Ĩj - Ij = ej - ẽj = 3ẽj + O(hj^4)   ⇒   ẽj = (Ĩj - Ij)/3 + higher-order terms (O(hj^4))
4-point Gauss: exact on P7

Local error: O(h^9); global error: O(h^8). A priori estimates:

    ∫_{xj}^{xj+1} f(x)dx - Ij = C (xj+1 - xj)^9 f^(8)(ξj) = C hj^9 f^(8)(ξj) = ej

    ∫_{xj}^{xj+1} f(x)dx - Ĩj = C (hj/2)^9 f^(8)(ξ'j) + C (hj/2)^9 f^(8)(ξ''j)
                              = (C/2^8) hj^9 f^(8)(ξj) + O(hj^10) = ẽj

    ⇒ Ĩj - Ij = (2^8 - 1) ẽj + O(hj^10) = 255 ẽj + O(hj^10)

    ⇒ ẽj = (Ĩj - Ij)/255 + higher-order terms (O(hj^10))
Algorithm

Input: a, b, f(x); upper error tolerance εmax; initial mesh width h.

Initialize: Integral = 0.0, xL = a, εmin = εmax / 2^{k+3}.

*   xR = xL + h
    (If xR > b, set xR ← b, do the integral one more time, and stop.)
    Compute on [xL, xR]: I, Ĩ and

        EST = | (Ĩ - I) / (2^{k+1} - 1) |    (if the rule is exact on Pk)
'Error is just right': if εmin·h/(b - a) < EST < εmax·h/(b - a):
    Integral ← Integral + Ĩ;  xL ← xR;  go to *.

'Error is too small': if EST ≤ εmin·h/(b - a):
    Integral ← Integral + Ĩ;  xL ← xR;  h ← 2h;  go to *.

'Error is too big': if EST ≥ εmax·h/(b - a):
    h ← h/2.0;  go to *.

STOP. END.
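A minimal Python sketch of this halving/doubling loop for the trapezoidal rule (k = 1, so EST = |Ĩ − I|/3); all names are my own, and the accept/double/halve thresholds follow the cases above:

```python
import math

def adaptive_trapezoid(f, a, b, eps):
    """Sketch of the halving/doubling strategy (trapezoid rule, exact on P1)."""
    eps_min = eps / 16          # = eps_max / 2^(k+3) with k = 1
    integral, xL, h = 0.0, a, (b - a) / 4
    while xL < b:
        xR = min(xL + h, b)
        hj = xR - xL
        I1 = hj * (f(xL) + f(xR)) / 2                          # one panel
        I2 = hj / 4 * (f(xL) + 2 * f((xL + xR) / 2) + f(xR))   # two half panels
        est = abs(I2 - I1) / 3                                 # EST with k = 1
        if est < eps * hj / (b - a):                           # accept the panel
            integral += I2
            xL = xR
            if est <= eps_min * hj / (b - a):                  # 'too small': double h
                h *= 2
        else:                                                  # 'too big': halve h
            h /= 2
    return integral

val = adaptive_trapezoid(math.exp, 0.0, 1.0, 1e-6)
print(abs(val - (math.e - 1)))  # below the 1e-6 tolerance
```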
Trapezium rule

    ∫_{xj}^{xj+1} f(x)dx - (xj+1 - xj)(f(xj+1) + f(xj))/2

    ∫_{xj}^{xj+1} (f(x) - p1(x))dx = ∫_{xj}^{xj+1} (f''(ξ)/2) (x - xj)(x - xj+1) dx,    ψ(x) = (x - xj)(x - xj+1)

                                   = (f''(x̄)/2) ∫_{xj}^{xj+1} ψ(x)dx  (integrate exactly)  + O(h) ∫_{xj}^{xj+1} ψ(x)dx

for a fixed point x̄ in (xj, xj+1).
The mysteries of ψ(x)

[Figure: plot of ψ(x) on xj ≤ x ≤ xj+1, with the nodes q1, ..., q7 marked on the axis; ψ oscillates between roughly -1.5×10^5 and 1.5×10^5.]

    ψ(x) = (x - q1)(x - q2) · · · (x - q7)
Error in (k+1)-point quadrature

pk(x) interpolates f(x)  ⇒  f(x) - pk(x) = (f^(k+1)(ξ)/(k+1)!) ψ(x),

with nodes (xj ≤) q1 < q2 < ... < q_{k+1} (≤ xj+1):

    ∫_{xj}^{xj+1} f(x)dx  (true)  -  ∫_{xj}^{xj+1} pk(x)dx  (approx)  =  ∫_{xj}^{xj+1} (ψ(x)/(k+1)!) f^(k+1)(ξ)dx
1. A simple error bound

Ignoring the oscillation of ψ(x):

    |error| ≤ (max|f^(k+1)|/(k+1)!) ∫_{xj}^{xj+1} |ψ(x)|dx
            ≤ (max|f^(k+1)|/(k+1)!) · h^{k+1} ∫_{xj}^{xj+1} dx        (since |x - q1| · · · |x - q_{k+1}| ≤ h^{k+1})
            ≤ (max|f^(k+1)|/(k+1)!) |xj+1 - xj|^{k+2}
[Figure: plot of ψ(x) and |ψ(x)| on xj ≤ x ≤ xj+1, with the nodes q1, ..., q7 marked on the axis.]
2. Analysis without cancelation

Let xj < ξ, x < xj+1.

Lemma

Let ξ, x ∈ (xj, xj+1). Then, by the mean value theorem,

    f^(k+1)(ξ) = f^(k+1)(x) + (ξ - x) f^(k+2)(η)

for some η between ξ and x, and |ξ - x| ≤ xj+1 - xj ≤ h.
2. Analysis without cancelation

    |error| = |true - approx|
            = | (1/(k+1)!) ∫_{xj}^{xj+1} ψ(x) [ f^(k+1)(x)  (fixed)  + (ξ - x)  (O(h))  f^(k+2)(η) ] dx |
            ≤ (|f^(k+1)(x)|/(k+1)!) | ∫_{xj}^{xj+1} ψ(x)dx |                  (= 0 if ∫_{xj}^{xj+1} ψ(x)dx = 0)
              + (1/(k+1)!) | ∫_{xj}^{xj+1} f^(k+2)(η)(ξ - x)ψ(x)dx |
            ≤ (max|f^(k+2)|/(k+1)!) ∫_{xj}^{xj+1} |ξ - x| (≤ h) · |ψ(x)| (≤ h^{k+1}) dx
            ≤ h^{k+3} max|f^(k+2)(x)|/(k+1)!

This cancelation is the mechanism behind the extra accuracy of, e.g., Simpson's rule.
ψ(x) vanishes at the k + 1 nodes (deg ψ(x) = k + 1)

Lemma

If p ∈ P_{k+1} and p(qℓ) = 0, ℓ = 1, ..., k + 1, then p(x) = Constant · ψ(x).
Questions:

1) How do we pick the points q1, ..., q_{k+1} so that

    ∫_{-1}^{1} g(x)dx ≈ w1 g(q1) + ... + w_{k+1} g(q_{k+1})           (6.46)

integrates P_{k+m} exactly?

2) What does this imply about the error?

Remark

If m = 1, pick q1, ..., q_{k+1} so that ∫_{-1}^{1} ψ(x)dx = 0; then the error converges as O(h^{k+3}).
Step 1

Let r1 be some fixed point on [-1, 1]: -1 < q1 < q2 < ... < r1 < ... < qk < q_{k+1}.

    p_{k+1}(x) = pk(x) + ψ(x) (g(r1) - pk(r1))/ψ(r1)                  (6.47)

pk interpolates g(x) at q1, ..., q_{k+1}.
Claim: p_{k+1} interpolates g(x) at the k + 2 points q1, ..., q_{k+1}, r1.
Suppose now that (6.46) is exact on P_{k+1}; then from (6.47), the error in the (k+2)-point quadrature rule, E_{k+2}, is

    ∫_{-1}^{1} g(x)dx  (true)  -  ∫_{-1}^{1} p_{k+1}(x)dx  (substitute (6.47))
      = ∫_{-1}^{1} g(x)dx - ∫_{-1}^{1} pk(x)dx                        (error in (k+1)-point rule ≡ E_{k+1})
        - ((g(r1) - pk(r1))/ψ(r1)) ∫_{-1}^{1} ψ(x)dx
Step 1

So

    E_{k+2} = E_{k+1} - ((g(r1) - pk(r1))/ψ(r1)) ∫_{-1}^{1} ψ(x)dx

Conclusion 1

If ∫_{-1}^{1} ψ(x)dx = 0, then the error in the (k+1)-point rule is exactly the same as if we had used k + 2 points.
Step 2

Let r1, r2 be fixed points in [-1, 1], and interpolate at the k + 3 points q1, ..., q_{k+1}, r1, r2:

    p_{k+2}(x) = pk(x) + ψ(x)(x - r1) (g(r2) - pk(r2))/((r2 - r1)ψ(r2))
                       + ψ(x)(x - r2) (g(r1) - pk(r1))/((r1 - r2)ψ(r1))       (6.48)

Consider the error in a rule with k + 1 + 2 points:

    error in (k+3)-point rule = ∫_{-1}^{1} g(x)dx - ∫_{-1}^{1} p_{k+2}(x)dx
      = ∫_{-1}^{1} g(x)dx - ∫_{-1}^{1} pk(x)dx
        - ((g(r2) - pk(r2))/((r2 - r1)ψ(r2))) ∫_{-1}^{1} ψ(x)(x - r1)dx
        - ((g(r1) - pk(r1))/((r1 - r2)ψ(r1))) ∫_{-1}^{1} ψ(x)(x - r2)dx.
Step 2

So

    E_{k+3} = E_{k+1} + Const · ∫_{-1}^{1} ψ(x)(x - r1)dx + Const · ∫_{-1}^{1} ψ(x)(x - r2)dx

Conclusion 2

If ∫_{-1}^{1} ψ(x)dx = 0 and ∫_{-1}^{1} x ψ(x)dx = 0, then the (k+1)-point rule has the same error as the (k+3)-point rule.
In general,

    E_{k+1+m} = E_{k+1} + C0 ∫_{-1}^{1} ψ(x)dx + C1 ∫_{-1}^{1} ψ(x) x dx + ...       (6.49)
                        + C_{m-1} ∫_{-1}^{1} ψ(x) x^{m-1} dx                         (6.50)

So

Conclusion 3

If ∫_{-1}^{1} ψ(x) x^j dx = 0 for j = 0, ..., m - 1, then the error is as good as using m extra points.
Overview

Interpolating Quadrature

Interpolate f(x) at q0, q1, q2, ..., qk ⇒ pk(x):

    f(x) - pk(x) = (f^(k+1)(ξ)/(k+1)!) (x - q0)(x - q1) ... (x - qk)

    ∫_{-1}^{1} f(x)dx - ∫_{-1}^{1} pk(x)dx = (1/(k+1)!) ∫_{-1}^{1} f^(k+1)(ξ) ψ(x)dx

Gauss rules

  pick the qℓ to maximize exactness
  what is the accuracy?
  what are the qℓ's?
Overview

    Interpolate at k + 1 + m points        Interpolate at k + 1 points
    q0, ..., qk, r1, ..., rm               q0, ..., qk
            error                                  error
              ⇓                                      ⇓

    E_{k+m} = E_k + c0 ∫_{-1}^{1} ψ(x)·1 dx + c1 ∫_{-1}^{1} ψ(x)·x dx + ... + cm ∫_{-1}^{1} ψ(x)·x^{m-1} dx

Definition

p(x) is the (μ+1)st orthogonal polynomial on [-1, 1] (weight w(x) ≡ 1) if p(x) ∈ P_{μ+1} and ∫_{-1}^{1} p(x) x^ℓ dx = 0, ℓ = 0, ..., μ, i.e., ∫_{-1}^{1} p(x) q(x)dx = 0 for all q ∈ Pμ.
Overview

Pick q0, q1, ..., qk so that

    ∫_{-1}^{1} ψ(x)·1 dx = 0
    ∫_{-1}^{1} ψ(x)·x dx = 0
    ...
    ∫_{-1}^{1} ψ(x)·x^{m-1} dx = 0

⇔   ∫_{-1}^{1} ψ(x) (deg k+1) · q(x) (deg m-1) dx = 0 for all q ∈ P_{m-1}.

So, maximum accuracy if ψ(x) is the orthogonal polynomial of degree k + 1:

    ∫_{-1}^{1} ψ(x) q(x)dx = 0 for all q ∈ Pk  ⇒  m - 1 = k,  m = k + 1

So, the Gauss quadrature points are the roots of the orthogonal polynomial.
Overview

Adaptivity

    Ĩ = Ĩ1 + Ĩ2

Trapezium rule's local error = O(h^3):

    ∫_{xj}^{xj+1} f(x)dx - I = e                                      (one trapezoid)
    ∫_{xj}^{xj+1} f(x)dx - Ĩ = ẽ = ẽ1 + ẽ2                           (two half-width trapezoids)

Since e ≈ 8ẽ1 and e ≈ 8ẽ2, we get e ≈ 4ẽ, and

    Ĩ - I = 3ẽ (+ higher-order terms)   ⇒   ẽ = (Ĩ - I)/3 (+ higher-order terms)
Overview

Final Observation

    True - I ≈ 4ẽ
    True - Ĩ ≈ ẽ      }  2 equations, 2 unknowns: ẽ, True

So we can solve for ẽ and True. Solving for True:

    True ≈ Ĩ + ẽ ≈ Ĩ + (Ĩ - I)/3 ≈ (4/3)Ĩ - (1/3)I   (+ higher-order terms)
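The "solve for True" step is just Richardson extrapolation; a numerical sketch (one trapezoid panel and its halved version, for the integral of e^x over [0, 1]):

```python
import math

f = math.exp
true = math.e - 1                                 # integral of e^x over [0, 1]
I = (f(0) + f(1)) / 2                             # one trapezoid
Itilde = 0.25 * (f(0) + 2 * f(0.5) + f(1))        # two half-width trapezoids
improved = 4 / 3 * Itilde - 1 / 3 * I             # Richardson-extrapolated value
print(abs(true - Itilde), abs(true - improved))   # the second is much smaller
```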
> 5. Numerical Integration > 5.4 Numerical Differentiation
To numerically calculate the derivative of f(x), begin by recalling the definition of the derivative:

    f'(x) = lim_{h→0} (f(x + h) - f(x))/h

This justifies using

    f'(x) ≈ (f(x + h) - f(x))/h ≡ Dh f(x)                             (6.51)

for small values of h. Dh f(x) is called a numerical derivative of f(x) with stepsize h.
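The O(h) behaviour of (6.51) is visible immediately in a quick numerical test (a sketch):

```python
import math

def Dh(f, x, h):
    """Forward difference approximation (6.51)."""
    return (f(x + h) - f(x)) / h

x = math.pi / 6
for h in (0.1, 0.05, 0.025):
    print(h, -math.sin(x) - Dh(math.cos, x, h))  # error roughly halves with h
```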
Example

Use Dh f to approximate the derivative of f(x) = cos(x) at x = π/6.

    h          Dh(f)      Error     Ratio
    0.1        -0.54243   0.04243
    0.05       -0.52144   0.02144   1.98
    0.025      -0.51077   0.01077   1.99
    0.0125     -0.50540   0.00540   1.99
    0.00625    -0.50270   0.00270   2.00
    0.003125   -0.50135   0.00135   2.00

Looking at the error column, we see the error is nearly proportional to h; when h is halved, the error is almost halved.
To explain the behaviour in this example, Taylor's theorem can be used to find an error formula. Expanding f(x + h) about x, we get

    f(x + h) = f(x) + h f'(x) + (h^2/2) f''(c)

for some c between x and x + h. Substituting on the right side of (6.51), we obtain

    Dh f(x) = (1/h){[f(x) + h f'(x) + (h^2/2) f''(c)] - f(x)} = f'(x) + (h/2) f''(c)

    f'(x) - Dh f(x) = -(h/2) f''(c)                                   (6.52)

The error is proportional to h, agreeing with the results in the table above.
For that example,

    f'(π/6) - Dh f(π/6) = (h/2) cos(c)                                (6.53)

where c is between π/6 and π/6 + h. Let us check that if c is replaced by π/6, then the RHS of (6.53) agrees with the error column in the table. As seen in the example, we use formula (6.51) with a positive stepsize h > 0. Formula (6.51) is commonly known as the forward difference formula for the first derivative. We can formally replace h by -h in (6.51) to obtain the formula

    f'(x) ≈ (f(x) - f(x - h))/h,   h > 0                              (6.54)

This is the backward difference formula for the first derivative. A derivation similar to that leading to (6.52) shows that

    f'(x) - (f(x) - f(x - h))/h = (h/2) f''(c)                        (6.55)

for some c between x and x - h. Thus, we expect the accuracy of the backward difference formula to be almost the same as that of the forward difference formula.
> 5. Numerical Integration > 5.4.1 Differentiation Using Interpolation
Let Pn(x) denote the degree n polynomial that interpolates f(x) at the n + 1 node points x0, ..., xn. To calculate f'(x) at some point x = t, use

    f'(t) ≈ Pn'(t)                                                    (6.56)

Many different formulae can be obtained by

1  varying n, and by
2  varying the placement of the nodes x0, ..., xn relative to the point t of interest.
As an especially useful example of (6.56), take

    n = 2,  t = x1,  x0 = x1 - h,  x2 = x1 + h.

Then

    P2(x) = ((x - x1)(x - x2)/(2h^2)) f(x0) + ((x - x0)(x - x2)/(-h^2)) f(x1) + ((x - x0)(x - x1)/(2h^2)) f(x2)

    P2'(x) = ((2x - x1 - x2)/(2h^2)) f(x0) + ((2x - x0 - x2)/(-h^2)) f(x1) + ((2x - x0 - x1)/(2h^2)) f(x2)

    P2'(x1) = ((x1 - x2)/(2h^2)) f(x0) + ((2x1 - x0 - x2)/(-h^2)) f(x1) + ((x1 - x0)/(2h^2)) f(x2)

            = (f(x2) - f(x0))/(2h)                                    (6.57)
The central difference formula

Replacing x0 and x2 by x1 - h and x1 + h, from (6.56) and (6.57) we obtain the central difference formula

    f'(x1) ≈ (f(x1 + h) - f(x1 - h))/(2h) ≡ D̂h f(x1),                 (6.58)

another approximation to the derivative of f(x). It will be shown below that this is a more accurate approximation to f'(x) than the forward difference formula Dh f(x) of (6.51), i.e.,

    f'(x) ≈ (f(x + h) - f(x))/h ≡ Dh f(x).
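Comparing the two formulas side by side makes the difference in accuracy concrete (a sketch):

```python
import math

x, h = math.pi / 6, 0.01
exact = -math.sin(x)                                     # = -0.5
forward = (math.cos(x + h) - math.cos(x)) / h            # (6.51), error O(h)
central = (math.cos(x + h) - math.cos(x - h)) / (2 * h)  # (6.58), error O(h^2)
print(abs(exact - forward), abs(exact - central))        # central is far smaller
```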
Theorem

Assume f ∈ C^{n+2}[a, b]. Let x0, x1, ..., xn be n + 1 distinct interpolation nodes in [a, b], and let t be an arbitrary given point in [a, b]. Then

    f'(t) - Pn'(t) = Ψn(t) f^(n+2)(c1)/(n+2)! + Ψn'(t) f^(n+1)(c2)/(n+1)!        (6.59)

with

    Ψn(t) = (t - x0)(t - x1) · · · (t - xn)

The numbers c1 and c2 are unknown points located between the maximum and minimum of x0, x1, ..., xn and t.
To illustrate this result, an error formula can be derived for the central difference formula (6.58). Since t = x1 in deriving (6.58), we find that the first term on the RHS of (6.59) is zero. Also, n = 2 and

    Ψ2(x)   = (x - x0)(x - x1)(x - x2)
    Ψ2'(x)  = (x - x1)(x - x2) + (x - x0)(x - x2) + (x - x0)(x - x1)
    Ψ2'(x1) = (x1 - x0)(x1 - x2) = -h^2

Using this in (6.59), we get

    f'(x1) - (f(x1 + h) - f(x1 - h))/(2h) = -(h^2/6) f'''(c2)         (6.60)

with x1 - h ≤ c2 ≤ x1 + h. This says that for small values of h, the central difference formula (6.58) should be more accurate than the earlier approximation (6.51), the forward difference formula, because the error term of (6.58) decreases more rapidly with h.
Example

The earlier example f(x) = cos(x) is repeated using the central difference formula (6.58) (recall x1 = π/6).

    h         D̂h(f)         Error          Ratio
    0.1       -0.49916708   -0.0008329
    0.05      -0.49979169   -0.0002083     4.00
    0.025     -0.49994792   -0.00005208    4.00
    0.0125    -0.49998698   -0.00001302    4.00
    0.00625   -0.49999674   -0.000003255   4.00

The results confirm the rate of convergence given in (6.60), and they illustrate that the central difference formula (6.58) will usually be superior to the earlier approximation, the forward difference formula (6.51).
> 5. Numerical Integration > 5.4.2 The Method of Undetermined Coefficients
The method of undetermined coefficients is a procedure used in deriving formulae for numerical differentiation, interpolation and integration. We will explain the method by using it to derive an approximation for f''(x). To approximate f''(x) at some point x = t, write

    f''(t) ≈ D^(2)_h f(t) ≡ A f(t + h) + B f(t) + C f(t - h)          (6.61)

with A, B and C unspecified constants. Replace f(t - h) and f(t + h) by the Taylor polynomial approximations

    f(t - h) ≈ f(t) - h f'(t) + (h^2/2) f''(t) - (h^3/6) f'''(t) + (h^4/24) f^(4)(t)
    f(t + h) ≈ f(t) + h f'(t) + (h^2/2) f''(t) + (h^3/6) f'''(t) + (h^4/24) f^(4)(t)        (6.62)

Including more terms would give higher powers of h; and for small values of h, these additional terms should be much smaller than the terms included in (6.62).
Substituting these approximations into the formula for D^(2)_h f(t) and collecting together common powers of h gives us

    D^(2)_h f(t) ≈ (A + B + C) f(t) + h(A - C) f'(t) + (h^2/2)(A + C) f''(t)
                   + (h^3/6)(A - C) f'''(t) + (h^4/24)(A + C) f^(4)(t)        (6.63)

To have

    D^(2)_h f(t) ≈ f''(t)

for arbitrary functions f(x), it is necessary to require

    A + B + C = 0          (coefficient of f(t))
    h(A - C) = 0           (coefficient of f'(t))
    (h^2/2)(A + C) = 1     (coefficient of f''(t))

This system has the solution

    A = C = 1/h^2,   B = -2/h^2.
This determines

    D^(2)_h f(t) = (f(t + h) - 2f(t) + f(t - h))/h^2                  (6.64)

To determine an error formula for D^(2)_h f(t), substitute A = C = 1/h^2, B = -2/h^2 into (6.63) to obtain

    D^(2)_h f(t) ≈ f''(t) + (h^2/12) f^(4)(t).

The approximation here arises from not including the corresponding higher powers of h in the Taylor polynomials (6.62). Thus,

    f''(t) - (f(t + h) - 2f(t) + f(t - h))/h^2 ≈ -(h^2/12) f^(4)(t)   (6.65)

This is an accurate estimate of the error for small values of h. Of course, in a practical situation we would not know f^(4)(t). But the error formula shows that the error decreases by a factor of about 4 when h is halved. This can be used to justify Richardson extrapolation, to obtain an even more accurate estimate of the error and of f''(t).
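The factor-of-4 behaviour predicted by (6.65) shows up immediately in a numerical test (a sketch):

```python
import math

def D2h(f, t, h):
    """Central second difference (6.64)."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

t = math.pi / 6
errs = [-math.cos(t) - D2h(math.cos, t, h) for h in (0.5, 0.25, 0.125)]
print([errs[i] / errs[i + 1] for i in range(2)])  # successive ratios near 4
```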
Example

Let f(x) = cos(x), t = π/6, and use (6.64) to calculate f''(t) = -cos(π/6).

    h         D^(2)_h(f)    Error       Ratio
    0.5       -0.84813289   -1.789E-2
    0.25      -0.86152424   -4.501E-3   3.97
    0.125     -0.86489835   -1.127E-3   3.99
    0.0625    -0.86574353   -2.819E-4   4.00
    0.03125   -0.86595493   -7.048E-5   4.00

The results shown (see the ratio column) are consistent with the error formula (6.65),

    f''(t) - (f(t + h) - 2f(t) + f(t - h))/h^2 ≈ -(h^2/12) f^(4)(t).
In the derivation of (6.65), the form (6.61),

    f''(t) ≈ D^(2)_h f(t) ≡ A f(t + h) + B f(t) + C f(t - h),

was assumed for the approximate derivative. We could equally well have chosen to evaluate f(x) at points other than those used there, for example,

    f''(t) ≈ A f(t + 2h) + B f(t + h) + C f(t)

Or, we could have chosen more evaluation points, as in

    f''(t) ≈ A f(t + 3h) + B f(t + 2h) + C f(t + h) + D f(t)

The extra degrees of freedom could have been used to obtain a more accurate approximation to f''(t), by forcing the error term to be proportional to a higher power of h.

Many of the formulae derived by the method of undetermined coefficients can also be derived by differentiating and evaluating a suitably chosen interpolation polynomial. But often, it is easier to visualize the desired formula as a combination of certain function values and to then derive the proper combination, as was done above for (6.64).
> 5. Numerical Integration > 5.4.3 Effects of error in function evaluation
The formulae derived above are useful for differentiating functions that are known analytically and for setting up numerical methods for solving differential equations. Nonetheless, they are very sensitive to errors in the function values, especially if these errors are not sufficiently small compared with the stepsize h used in the differentiation formula. To explore this, we analyze the effect of such errors in the formula D^(2)_h f(t) approximating f''(t). Rewrite (6.64), D^(2)_h f(t) = (f(t + h) - 2f(t) + f(t - h))/h^2, as

    D^(2)_h f(x1) = (f(x2) - 2f(x1) + f(x0))/h^2 ≈ f''(x1)

where x2 = x1 + h, x0 = x1 - h. Let the actual function values used in the computation be denoted by f0, f1, and f2, with

    f(xi) - fi = εi,   i = 0, 1, 2,

the errors in the function values.
Thus, the actual quantity calculated is

    D̂^(2)_h f(x1) = (f2 - 2f1 + f0)/h^2

For the error in this quantity, replace fj by f(xj) - εj, j = 0, 1, 2, to obtain

    f''(x1) - D̂^(2)_h f(x1) = f''(x1) - ([f(x2) - ε2] - 2[f(x1) - ε1] + [f(x0) - ε0])/h^2
                            = [f''(x1) - (f(x2) - 2f(x1) + f(x0))/h^2]   (≈ -(h^2/12) f^(4)(x1) by (6.65))
                              + (ε2 - 2ε1 + ε0)/h^2
                            ≈ -(h^2/12) f^(4)(x1) + (ε2 - 2ε1 + ε0)/h^2          (6.66)
The errors ε0, ε1, ε2 are generally random in some interval [-δ, δ]. If the values f0, f1, f2 are experimental data, then δ is a bound on the experimental error. Also, if these function values fi are obtained from computing f(x) on a computer, then the errors εj are a combination of rounding or chopping errors, and δ is a bound on these errors. In either case, (6.66),

    f''(x1) - D̂^(2)_h f(x1) ≈ -(h^2/12) f^(4)(x1) + (ε2 - 2ε1 + ε0)/h^2,

yields the approximate inequality

    |f''(x1) - D̂^(2)_h f(x1)| ≤ (h^2/12) |f^(4)(x1)| + 4δ/h^2         (6.67)

This error bound suggests that as h → 0, the error will eventually increase, because of the final term 4δ/h^2.
Example

Calculate D̂_h^(2) f(x1) for f(x) = cos(x) at x1 = π/6. To show the effect of rounding errors, the values fi are obtained by rounding f(xi) to six significant digits, and the errors satisfy

    |εi| ≤ 5.0 × 10⁻⁷ = δ,   i = 0, 1, 2

Other than these rounding errors, the formula D̂_h^(2) f(x1) is calculated exactly. The results are

    h            D̂_h^(2) f(x1)   Error
    0.5          -0.848128        -0.017897
    0.25         -0.861504        -0.004521
    0.125        -0.864832        -0.001193
    0.0625       -0.865536        -0.000489
    0.03125      -0.865280        -0.000745
    0.015625     -0.860160        -0.005865
    0.0078125    -0.851968        -0.014057
    0.00390625   -0.786432        -0.079593
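This experiment can be reproduced by rounding cos(xi) to six significant digits before differencing; a sketch (the round_sig helper is an assumption, not from the text):

```python
import math

def round_sig(v, ndigits=6):
    # round v to ndigits significant digits
    if v == 0.0:
        return 0.0
    return round(v, ndigits - 1 - int(math.floor(math.log10(abs(v)))))

def d2h_rounded(x, h):
    # difference quotient built from six-digit rounded values of cos
    f0 = round_sig(math.cos(x - h))
    f1 = round_sig(math.cos(x))
    f2 = round_sig(math.cos(x + h))
    return (f2 - 2 * f1 + f0) / h**2

x = math.pi / 6
exact = -math.cos(x)
for k in range(1, 9):
    h = 2.0 ** (-k)
    print(f"{h:<12.8f} {d2h_rounded(x, h):<12.6f} {exact - d2h_rounded(x, h):.6f}")
```

The printed errors shrink until about h = 0.0625 and then grow again as the noise term 4δ/h² takes over, matching the behaviour in the table.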
In this example, the bound (6.67), i.e.,

    |f''(x1) − D̂_h^(2) f(x1)| ≤ (h²/12) |f^(4)(x1)| + 4δ/h²,

becomes

    |f''(x1) − D̂_h^(2) f(x1)| ≤ (h²/12) cos(π/6) + (4/h²)(5 × 10⁻⁷)
                               ≐ 0.0722 h² + 2 × 10⁻⁶/h² ≡ E(h)

For h = 0.125, the bound E(h) ≐ 0.00126, which is not too far off from the actual error given in the table.
The bound E(h) indicates that there is a smallest useful value of h, call it h*, below which the error will begin to increase. To find it, set E'(h) = 0; its root is h*. This leads to h* ≐ 0.0726, which is consistent with the behaviour of the errors in the table.
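The root of E'(h) = 0 can be written in closed form: with M = |f^(4)(x1)| and E(h) = (h²/12)M + 4δ/h², setting E'(h) = Mh/6 − 8δ/h³ = 0 gives h* = (48δ/M)^(1/4). A quick check for this example:

```python
import math

M = math.cos(math.pi / 6)   # |f''''(x1)| for f(x) = cos(x)
delta = 5.0e-7              # bound on the rounding errors

def E(h):
    # error bound (6.67) specialized to this example
    return (h**2 / 12) * M + 4 * delta / h**2

# E'(h) = M*h/6 - 8*delta/h**3 = 0  =>  h***4 = 48*delta/M
h_star = (48 * delta / M) ** 0.25
```

This gives h* ≈ 0.0726, and E(h) is indeed smaller there than at nearby stepsizes, consistent with the table's minimum error near h = 0.0625.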
One must be very cautious in using numerical differentiation, because of the sensitivity to errors in the function values.
This is especially true if the function values are obtained empirically with relatively large experimental errors, as is common in practice. In this latter case, one should probably use a carefully prepared package program for numerical differentiation. Such programs take into account the error in the data, attempting to find numerical derivatives that are as accurate as can be justified by the data.
In the absence of such a program, one should consider producing a cubic spline function that approximates the data, and then use its derivative as a numerical derivative for the data. The cubic spline function could be based on interpolation; or, better for data with relatively large errors, construct a cubic spline that is a least squares approximation to the data. The concept of least squares approximation is introduced in Section 7.1.
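As a minimal illustration of the least squares idea (a quadratic fit rather than the cubic spline the text recommends; the sample spacing and noise level are assumptions for the demo): fit a polynomial by least squares to noisy samples centered at x1 and use its second derivative.

```python
import math, random

def d2_least_squares(us, ys):
    # Fit p(u) = a + b*u + c*u**2 by least squares. The abscissas us are
    # symmetric about 0, so the odd power sums vanish and the normal
    # equations decouple; p''(0) = 2*c estimates f'' at the center point.
    n = len(us)
    s2 = sum(u**2 for u in us)
    s4 = sum(u**4 for u in us)
    t0 = sum(ys)
    t2 = sum(y * u**2 for u, y in zip(us, ys))
    c = (n * t2 - s2 * t0) / (n * s4 - s2**2)
    return 2 * c

random.seed(1)
x1 = math.pi / 6
us = [0.05 * k for k in range(-10, 11)]     # 21 points centered at x1
ys = [math.cos(x1 + u) + random.uniform(-5e-4, 5e-4) for u in us]
est = d2_least_squares(us, ys)              # close to f''(x1) = -cos(pi/6)
```

Because the fit averages over many samples, the noise is damped rather than amplified by 1/h² as in the difference quotient; this is the same principle behind the least squares spline of Section 7.1.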