Slide 1: Spring 2003, Prof. Tim Warburton, MA557/MA578/CS557, Lecture 34
Slide 2: Today's Class
• A few ways to solve Ax=b for a sparse matrix A in Matlab.
• Followed by time for work on HW 10.
• On Friday 04/25/03 we will benefit from a special lecture on preconditioning for iterative methods, presented by Dr. D. M. Day of Sandia National Laboratories
• http://www.cs.sandia.gov/~dday/
Slide 3: Recall: Summary of Temporal Implicit Schemes
• Backwards Euler is unconditionally stable for non-negative diffusion parameter D (i.e. any dt>=0) and first order in dt.
• Crank-Nicolson is unconditionally stable for non-negative diffusion parameter D (i.e. any dt>=0) and second order in dt.
• ESDIRK4 – generalizes to fourth order in dt.
Backwards Euler:

    (M + dt L) C^{n+1} = M C^n,   i.e.   C^{n+1} = (M + dt L)^{-1} M C^n

Crank-Nicolson:

    (M + (dt/2) L) C^{n+1} = (M - (dt/2) L) C^n

where the discrete diffusion operator is built from the Dirichlet and Neumann DG derivative operators and the diffusion coefficient D:

    L = D (D_x^D D_x^N + D_y^D D_y^N)
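As a numerical illustration of these stability claims, here is a Python sketch (a stand-in for the course's MATLAB/DG code) that takes a single step of each scheme on a 1D finite-difference diffusion operator, with the mass matrix taken as the identity and a time step far beyond the explicit stability limit:

```python
import numpy as np

N, D, dt = 50, 1.0, 0.5            # grid points, diffusion coefficient, large time step
h = 1.0 / (N + 1)                  # mesh spacing; an explicit limit would be ~h^2/(2D)
# 1D finite-difference stiffness matrix L (Dirichlet boundaries)
L = (D / h**2) * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))
I = np.eye(N)

C = np.sin(np.pi * h * np.arange(1, N + 1))   # smooth initial condition

# Backwards Euler: (I + dt L) C^{n+1} = C^n
C_be = np.linalg.solve(I + dt * L, C)
# Crank-Nicolson: (I + (dt/2) L) C^{n+1} = (I - (dt/2) L) C^n
C_cn = np.linalg.solve(I + 0.5 * dt * L, (I - 0.5 * dt * L) @ C)
```

Both updates stay bounded even though dt is thousands of times the explicit limit, which is the unconditional stability the bullets describe.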
Slide 4: Backwards Euler Linear System
• Given Cn we wish to find a Cn+1 which satisfies:
• For simplicity we define:
• Note that A is a symmetric, positive definite matrix.
    (M + dt L) C^{n+1} = M C^n

Define:

    A := M + dt L
    b := M C^n
    x := C^{n+1}

then we wish to find x such that:

    A x = b
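A minimal sketch of this system in Python/SciPy, with a sparse identity standing in for the mass matrix M and a 1D second-difference matrix standing in for L (the real DG operators are built later in the lecture):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, dt = 20, 0.01
M = sp.identity(n, format='csc')                    # stand-in mass matrix
off = -np.ones(n - 1)
L = sp.diags([off, 2.0 * np.ones(n), off], [-1, 0, 1], format='csc')  # stand-in stiffness

Cn = np.random.default_rng(0).random(n)             # current solution C^n

A = (M + dt * L).tocsc()                            # A := M + dt L (symmetric positive definite)
b = M @ Cn                                          # b := M C^n
x = spla.spsolve(A, b)                              # x = C^{n+1}, solving A x = b
```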
Slide 5: Lazy Way To Build The Matrix
• Don’t tell anyone I told you this but here’s an easy way to program the construction of the DG operator.
• The first step is to understand that if I set up a vector whose only non-zero entry is the n'th one and then pass it to umDIFFUSIONop, the returned vector will be the n'th column of the A matrix.
    x_j = \delta_{jn},   j = 1, ..., rank

    v_i = \sum_{j=1}^{rank} A_{ij} x_j = \sum_{j=1}^{rank} A_{ij} \delta_{jn} = A_{in}
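The same trick in Python, with a hypothetical matrix-free operator `apply_op` standing in for umDIFFUSIONop: applying it to each unit vector recovers one column of A at a time.

```python
import numpy as np

def apply_op(x):
    """Hypothetical matrix-free operator (here, a 1D second difference)."""
    v = 2.0 * x
    v[:-1] -= x[1:]
    v[1:] -= x[:-1]
    return v

rank = 8
A = np.zeros((rank, rank))
for n in range(rank):
    e = np.zeros(rank)
    e[n] = 1.0                # x_j = delta_{jn}
    A[:, n] = apply_op(e)     # returned vector is the n'th column of A

# The recovered matrix reproduces the operator on any vector
x = np.random.default_rng(1).random(rank)
```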
Slide 6: Laziness (cont.)
• The next step is to use the sparsity of
• If I set one of the node values in the center white triangle to one and multiply by the Neumann DG derivative operator then the result vector will have non-zero entries in the red triangles and the original white triangle.
• If I take this result vector and premultiply by the Dirichlet DG derivative operator then there will be non-zero entries in the red, white and blue triangles.
    A = M + dt D (D_x^D D_x^N + D_y^D D_y^N)
Slide 7: Finding the Neighbors and Their Neighbors of an Element
• In umMESH.m there is code which computes the sparse connectivity matrix umSparseEtoE.
• For the five-element mesh shown, the matrix is:
• To find neighbors and neighbors of neighbors we consider the square of the connectivity matrix:
        1 2 3 4 5
    1 [ 1 0 0 1 0 ]
    2 [ 0 1 0 1 0 ]
    3 [ 0 0 1 1 1 ]
    4 [ 1 1 1 1 0 ]
    5 [ 0 0 1 0 1 ]

(Figure: the five-element mesh, elements numbered 1 to 5.)

    [ 1 0 0 1 0 ] [ 1 0 0 1 0 ]   [ 2 1 1 2 0 ]
    [ 0 1 0 1 0 ] [ 0 1 0 1 0 ]   [ 1 2 1 2 0 ]
    [ 0 0 1 1 1 ] [ 0 0 1 1 1 ] = [ 1 1 3 2 2 ]
    [ 1 1 1 1 0 ] [ 1 1 1 1 0 ]   [ 2 2 2 4 1 ]
    [ 0 0 1 0 1 ] [ 0 0 1 0 1 ]   [ 0 0 2 1 2 ]
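The square can be checked directly; a short Python snippet squaring the slide's connectivity matrix:

```python
import numpy as np

# Connectivity matrix from the slide: entry (i,j) = 1 if elements i and j
# share a face (or i == j)
E = np.array([[1, 0, 0, 1, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 1, 1],
              [1, 1, 1, 1, 0],
              [0, 0, 1, 0, 1]])

E2 = E @ E                     # nonzero (i,j): element j is within two elements of i
within_two = [list(np.nonzero(E2[i])[0] + 1) for i in range(5)]   # 1-based element ids
```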
Slide 8: Example (cont.)
• Double connectivity matrix:

        1 2 3 4 5
    1 [ 2 1 1 2 0 ]
    2 [ 1 2 1 2 0 ]
    3 [ 1 1 3 2 2 ]
    4 [ 2 2 2 4 1 ]
    5 [ 0 0 2 1 2 ]

• i.e. Element 1 is within two elements of 2, 3, 4
• Element 2 is within two elements of 1, 3, 4
• Element 3 is within two elements of 1, 2, 4, 5
• Element 4 is within two elements of 1, 2, 3, 5
• Element 5 is within two elements of 3, 4

(Figure: the same five-element mesh.)
Slide 9: Matlab Implementation (Connectivity)
• Here we compute the square of the element connectivity matrix:
Slide 10: Building the DG Matrix: umDIFFUSIONilu.m
16) Create sparse matrix
20-41) for each node, compute the row of the matrix
44) Compute Cholesky factorization
47) Compute incomplete Cholesky factorization.
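SciPy has no direct cholinc, but its LU analogues sketch the same two steps (44 and 47): an exact sparse factorization next to an incomplete one with a drop tolerance. The matrix here is a stand-in tridiagonal, not the DG operator:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
off = -np.ones(n - 1)
A = sp.diags([off, 2.0 * np.ones(n), off], [-1, 0, 1], format='csc')  # stand-in SPD matrix

lu = spla.splu(A)                      # exact factorization (cf. line 44's chol)
ilu = spla.spilu(A, drop_tol=1e-3)     # incomplete factorization (cf. line 47's cholinc)

b = np.ones(n)
x = lu.solve(b)                        # direct solve via the exact factors
```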
Slide 11: umDIFFUSIONpartop.m
• In this function we apply the matrix operator to a part of the vector.
Slide 12: First Part of the Part Matrix Multiply: umDIFFUSIONpartop.m
• In this function we apply the Helmholtz operator to all the elements specified in the elmts argument.
Slide 13: Applying the Helmholtz Operator: umDIFFUSIONpartop.m (cont.)
Slide 14: Driver: umDIFFUSIONdemo.m
• Driver for the implicit DG diffusion solver.
Slide 15: umDIFFUSIONrun.m
Note the changes to pcg and the call to umDIFFUSIONilu.
Slide 16: In Action
• It takes a while to build the matrix…
• We can look at the sparsity pattern of the matrix:
Slide 17: Some Options For Solving Ax=b
• We will consider two of the many options for accelerating the solution of Ax=b:
1) Cholesky factorization of A before time stepping and repeated backsolving.
2) Incomplete Cholesky factorization of A and using this as a preconditioner.
Slide 18: Option 1: Cholesky Factorization
• Use Cholesky factorization to decompose A into the product of a lower triangular matrix C and its transpose:

    A = C C^t

• Then at every time step we perform two backsolves:

    C y = b
    C^t x = y

• Each backsolve takes O(N^2) operations (N = total number of unknowns).
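The two backsolves, sketched with SciPy's dense routines on a small random SPD matrix (the lecture does this with MATLAB's chol on the sparse A):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(2)
B = rng.random((6, 6))
A = B @ B.T + 6 * np.eye(6)        # small symmetric positive definite test matrix
b = rng.random(6)

C = cholesky(A, lower=True)        # A = C C^t, with C lower triangular

y = solve_triangular(C, b, lower=True)      # backsolve 1: C y = b
x = solve_triangular(C.T, y, lower=False)   # backsolve 2: C^t x = y
```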
Slide 19: Option 1: Sparsity of Cholesky Factor
Slide 20: Option 1: Sanity Check on Cholesky Factorization
• We can check to see how stable the computation of the Cholesky factorization was:
• Not bad – we lost about 4 decimal places…
Slide 21: Option 1: Condition Number
• We can use condest to estimate the condition number of the matrix. The condition number is about 800, so we might well expect to lose about 3 decimal places in computing the factorization.
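A SciPy analogue of this check, using onenormest (the same 1-norm estimation idea behind MATLAB's condest) on a stand-in sparse matrix:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50
off = -np.ones(n - 1)
A = sp.diags([off, 2.0 * np.ones(n), off], [-1, 0, 1], format='csc')  # stand-in matrix

# cond_1(A) = ||A||_1 * ||A^{-1}||_1, each factor estimated as condest does
est = spla.onenormest(A) * spla.onenormest(spla.inv(A))
digits_lost = np.log10(est)        # rough decimal digits at risk in a factorization
```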
Slide 22: Option 1: Direct Solver Code
31-32) Note the two backsolves.
Slide 23: Option 2: Incomplete Cholesky Preconditioner
• We can use an incomplete Cholesky preconditioner in the PCG algorithm.
• The idea is to use cholinc to compute an incomplete Cholesky factorization, with a drop tolerance that determines which entries of the factor are kept.
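In SciPy the same idea reads as follows: an incomplete factorization (spilu here, since SciPy lacks an incomplete Cholesky) wrapped as a LinearOperator and handed to the conjugate gradient solver as the preconditioner:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
off = -np.ones(n - 1)
A = sp.diags([off, 2.0 * np.ones(n), off], [-1, 0, 1], format='csc')  # stand-in SPD matrix
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4)                    # incomplete factorization
Minv = spla.LinearOperator((n, n), matvec=ilu.solve)  # preconditioner applies the factor solves

x, info = spla.cg(A, b, M=Minv)                       # preconditioned conjugate gradients
```

Here info == 0 signals convergence; with a good preconditioner the iteration count drops sharply.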
Slide 24: Sparsity of Incomplete Cholesky Factor
(Figure: sparsity patterns of the two incomplete factors.)
• cholinc(A, 1e-3): only 93047 non-zero entries
• cholinc(A, '0'): only 68213 non-zero entries
Slide 25: Iteration Count Per Time Step
• Comparing unpreconditioned and preconditioned with incomplete Cholesky:
Slide 26: Option 2: Incomplete Cholesky Preconditioner
• We can use an incomplete Cholesky preconditioner in the PCG algorithm.
30) Call to PCG uses the incomplete Cholesky factorization.
Slide 27: Alternative Iterative Schemes in Matlab
• BICG, BICGSTAB, CGS, GMRES, LSQR, MINRES
• They all have the same interface.
• For details type: >> help gmres
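SciPy mirrors this family with a shared (A, b) -> (x, info) call shape (lsqr is the exception there, returning a longer tuple, so it is left out of this sketch):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
off = -np.ones(n - 1)
A = sp.diags([off, 2.0 * np.ones(n), off], [-1, 0, 1], format='csr')  # stand-in matrix
b = np.ones(n)

infos = {}
for solver in (spla.bicg, spla.bicgstab, spla.cgs, spla.gmres, spla.minres):
    x, info = solver(A, b)            # identical call signature across solvers
    infos[solver.__name__] = info     # 0 means converged
```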