An alternating gradient method for solving linear systems


Transcript of An alternating gradient method for solving linear systems


AN ALTERNATING GRADIENT METHOD FOR SOLVING LINEAR SYSTEMS

by

EVERETT CRAMER RIGLE

A THESIS

submitted to

OREGON STATE COLLEGE

in partial fulfillment of the requirements for the

degree of

MASTER OF SCIENCE

June 1959


APPROVED:

Associate Professor of Mathematics

In Charge of Major

Chairman of Department of Mathematics

Chairman of School of Science Graduate Committee

Dean of Graduate School

Date thesis is presented August 11, 1958. Typed by Everett Rigle.


TABLE OF CONTENTS

                                              PAGE

I.    INTRODUCTION                               1

II.   DEVELOPMENT OF FORMULAS                    4

III.  PROOF OF BASIC PROPERTIES                  9

IV.   ITERATIVE ROUTINE                         16

V.    EXAMPLES                                  18

VI.   CORRECTION OF ROUND-OFF ERRORS            21

VII.  TABLES                                    24

VIII. BIBLIOGRAPHY                              38


AN ALTERNATING GRADIENT METHOD FOR SOLVING LINEAR SYSTEMS

INTRODUCTION

Various gradient methods for solving systems of linear equations have been presented by Temple (5, p. 476-500), Stein (4, p. 407-413), Forsythe and Motzkin (1, p. 304-305), Hestenes and Stiefel (2, p. 409-436), and others. However, at present, there is no best method for the solution of systems of linear equations, as each method depends to some extent upon the particular system to be solved.

In the present paper, a modified gradient method is exploited to obtain the solution, if it exists, of m linear equations in n unknowns. If the solution does not exist, an indication is obtained by the process of solution.

Let the given system of equations be denoted in matrix notation by Ax - b = 0, where A is any m x n matrix, b and 0 are column vectors with m components, and x is a column vector with n components.

Define a function of x as the sum of the absolute values of the components of Ax - b. That is,

$$\phi(x) = |(Ax-b)_1| + |(Ax-b)_2| + \cdots + |(Ax-b)_m|.$$

This positive definite function has the value zero if and only if x is the solution to the given system.


φ(x) = constant represents a family of surfaces in n dimensions, with x as its center. Given any arbitrary point, determined by the position vector x_0, that point lies on some surface φ = c_1. The problem of finding a solution to the system of equations now becomes one of finding the center of the surfaces.

This will be accomplished in the following manner: The negative of the gradient of the function at x_0 will be used as a first direction vector v_0. Proceeding in the direction of v_0, a point x_1 is reached such that the distance |x - x_1| is a relative minimum with respect to this direction. The point x_1 lies in a hyperplane which is orthogonal to v_0 and passes through the center x. It also lies on some surface φ = c_2; hence, the gradient of the function at this point can be determined. The second direction vector v_1 is made orthogonal to v_0 by taking a linear combination of grad φ_1 and v_0. As before, the direction of the vector v_1 is followed until a point x_2 is reached such that the distance |x - x_2| is a relative minimum with respect to this direction. This procedure is continued, obtaining a set of mutually orthogonal direction vectors (v_0, v_1, ...) and successive new estimates x_1, x_2, .... It will be


proved later that, if a solution exists, the M-th estimate x_M will be the solution x, for some value of M ≤ min(m, n).


DEVELOPMENT OF FORMULAS

To develop the formula for grad φ, the function φ(x) is written as follows:

$$\phi(x) = |a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n - b_1| + \cdots + |a_{m1}x_1 + \cdots + a_{mn}x_n - b_m|.$$

The gradient of this function is the column vector

$$\operatorname{grad}\phi = \left(\frac{\partial\phi}{\partial x_1},\ \ldots,\ \frac{\partial\phi}{\partial x_n}\right)^{T}
= \begin{pmatrix}
a_{11}\,\operatorname{sgn}(a_{11}x_1+\cdots+a_{1n}x_n-b_1) + \cdots + a_{m1}\,\operatorname{sgn}(a_{m1}x_1+\cdots+a_{mn}x_n-b_m)\\
\vdots\\
a_{1n}\,\operatorname{sgn}(a_{11}x_1+\cdots+a_{1n}x_n-b_1) + \cdots + a_{mn}\,\operatorname{sgn}(a_{m1}x_1+\cdots+a_{mn}x_n-b_m)
\end{pmatrix}.$$

For any vector x_i, Ax_i - b = r_i. That is,

$$\begin{aligned}
a_{11}x_{i1} + a_{12}x_{i2} + \cdots + a_{1n}x_{in} - b_1 &= r_{i1}\\
a_{21}x_{i1} + a_{22}x_{i2} + \cdots + a_{2n}x_{in} - b_2 &= r_{i2}\\
&\ \ \vdots\\
a_{m1}x_{i1} + a_{m2}x_{i2} + \cdots + a_{mn}x_{in} - b_m &= r_{im}.
\end{aligned}$$


Then,

$$\operatorname{grad}\phi_i = \begin{pmatrix}
a_{11}\,\operatorname{sgn} r_{i1} + \cdots + a_{m1}\,\operatorname{sgn} r_{im}\\
\vdots\\
a_{1n}\,\operatorname{sgn} r_{i1} + \cdots + a_{mn}\,\operatorname{sgn} r_{im}
\end{pmatrix};$$

1) grad φ_i = A^T sgn r_i.

The notation sgn r_i represents the vector whose components are sgn r_{ij} (j = 1, 2, ..., m).
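For concreteness, formula 1) amounts to a single matrix-vector product with the sign vector. A minimal NumPy sketch (my own illustration; the function names are assumptions, not part of the thesis):

```python
import numpy as np

def residual(A, b, x):
    """r = Ax - b for the current estimate x."""
    return A @ x - b

def phi(A, b, x):
    """phi(x): sum of the absolute values of the components of Ax - b."""
    return np.abs(residual(A, b, x)).sum()

def grad_phi(A, b, x):
    """Formula 1): grad phi = A^T sgn r, with r = Ax - b."""
    return A.T @ np.sign(residual(A, b, x))
```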

The direction vectors are developed by defining v_0 = grad φ_0 and v_i as a linear combination of grad φ_i and the previous direction vectors. That is,

2) v_i = grad φ_i + e_{i,i-1} v_{i-1} + e_{i,i-2} v_{i-2} + ... + e_{i,0} v_0.

The direction vectors are to be mutually orthogonal; hence, v_i · v_k = 0 (i ≠ k). Then,

v_i · v_k = grad φ_i · v_k + e_{i,i-1} v_{i-1} · v_k + ... + e_{i,0} v_0 · v_k = 0.

Each term of this equation is zero except for the first term and the term containing v_k · v_k. Thus,

grad φ_i · v_k + e_{i,k} v_k · v_k = 0;

3) e_{i,k} = -grad φ_i · v_k / (v_k · v_k)        (k = 0, 1, ..., i-1).

As previously stated, the initial estimate x_0 of the solution x is to be arbitrary. Then, the successive new estimates x_1, x_2, ... are obtained by the formula

4) x_{i+1} = x_i - λ_i v_i,


where λ_i is picked such that the distance |x - x_{i+1}| will be a minimum with respect to the direction v_i. That is, v_i will be orthogonal to the vector x_{i+1} - x. Geometrically, this can be represented as follows:

[Diagram: the estimate x_i with the direction v_i drawn from it, the center x, and the new estimate x_{i+1}, which is the foot of the perpendicular dropped from x onto the line through x_i in the direction v_i.]

Let d represent the distance |x_{i+1} - x_i|; then,

d = (x_i - x) · v_i / |v_i|.

The desired vector λ_i v_i, which is represented in the diagram as x_i - x_{i+1}, can be written as d multiplied by the normalized vector v_i. That is,

λ_i v_i = [(x_i - x) · v_i / |v_i|] (v_i / |v_i|) = [(x_i - x) · v_i / (v_i · v_i)] v_i,

and

λ_i = (x_i - x) · v_i / (v_i · v_i).

Introduce now a vector u_i such that

5) v_i = A^T u_i.

Then,

λ_i = (x_i - x)^T A^T u_i / (v_i^T v_i) = [A(x_i - x)]^T u_i / (v_i^T v_i).

But Ax - b = 0, and for any vector x_i, Ax_i - b = r_i;


hence,

6) A(x - x_i) = -r_i.

Making this substitution, we now have

λ_i = r_i^T u_i / (v_i^T v_i).

However, this can be written in a more useful form as follows:

x_i = x_{i-1} - λ_{i-1} v_{i-1};
x_{i-1} = x_{i-2} - λ_{i-2} v_{i-2};
. . . . . . . . . .
x_1 = x_0 - λ_0 v_0.

Hence, x_i = x_0 - λ_0 v_0 - ... - λ_{i-1} v_{i-1}. Since v_i is orthogonal to each of v_0, ..., v_{i-1},

(x_i - x_0) · v_i = 0;
[A(x_i - x_0)] · u_i = (r_i - r_0) · u_i = 0;
r_i · u_i = r_0 · u_i.

Thus,

7) λ_i = r_0^T u_i / (v_i^T v_i).

It is necessary to have u_i in order to determine λ_i. However, it is possible to determine u_i recursively as follows:

Since v_0 = grad φ_0 = A^T sgn r_0, we have for u_0 the vector sgn r_0. If u_j, and hence v_j, are determined up through the value j = i-1, we have

v_i = grad φ_i + e_{i,i-1} v_{i-1} + e_{i,i-2} v_{i-2} + ... + e_{i,0} v_0;
v_i = A^T sgn r_i + e_{i,i-1} A^T u_{i-1} + ... + e_{i,0} A^T u_0;
v_i = A^T (sgn r_i + e_{i,i-1} u_{i-1} + e_{i,i-2} u_{i-2} + ... + e_{i,0} u_0).

Thus the necessary recurrence relation for u_i is defined by:

8) u_i = sgn r_i + e_{i,i-1} u_{i-1} + e_{i,i-2} u_{i-2} + ... + e_{i,0} u_0.


PROOF OF BASIC PROPERTIES

Proofs of the basic properties of this method are given in the following theorems.

Theorem I.

The set of vectors v_0, v_1, ... form a mutually orthogonal set.

The proof will be by induction.

By formula 2), v_1 = grad φ_1 + e_{1,0} v_0;

v_1 · v_0 = grad φ_1 · v_0 + e_{1,0} v_0 · v_0;

but, by formula 3),

e_{1,0} = -grad φ_1 · v_0 / (v_0 · v_0);

hence v_1 · v_0 = 0, which implies v_1 is orthogonal to v_0.

Assume that the direction vector v_{k-1} is orthogonal to all preceding direction vectors, that is,

v_{k-1} · v_j = 0        (j = 0, 1, ..., k-2).

By formula 2), v_k = grad φ_k + e_{k,k-1} v_{k-1} + ... + e_{k,0} v_0;

v_k · v_{k-1} = grad φ_k · v_{k-1} + e_{k,k-1} v_{k-1} · v_{k-1} + ... + e_{k,0} v_0 · v_{k-1};

v_k · v_{k-1} = grad φ_k · v_{k-1} + e_{k,k-1} v_{k-1} · v_{k-1};

but, by formula 3),

e_{k,k-1} = -grad φ_k · v_{k-1} / (v_{k-1} · v_{k-1});

hence, v_k · v_{k-1} = 0.


Similarly, for the case 0 ≤ j < k-1,

v_k · v_j = grad φ_k · v_j + e_{k,k-1} v_{k-1} · v_j + ... + e_{k,0} v_0 · v_j;

v_k · v_j = grad φ_k · v_j + e_{k,j} v_j · v_j;

but, by formula 3),

e_{k,j} = -grad φ_k · v_j / (v_j · v_j);

hence, v_k · v_j = 0. Therefore, it is proved by induction that for all k, v_k is orthogonal to all preceding direction vectors, and v_0, v_1, ... form a mutually orthogonal set.

Corollary:

Since they are orthogonal, if none of the vectors v_0, v_1, ..., v_i is the zero vector, they are linearly independent.

Theorem II.

If a solution exists, v_i ≠ 0 when r_i ≠ 0, for i ≤ R-1.

The proof will be by contradiction. Assume a solution exists when some v_i = 0 and r_i ≠ 0. Let s be the first positive integer such that v_s = 0 when r_s ≠ 0.

By formula 5), v_s = A^T u_s = 0.

This leads to two cases to be considered: A^T u_s = 0 when u_s = 0, and A^T u_s = 0 when u_s ≠ 0.


Case I. A^T u_s = 0 when u_s = 0.

By formula 8),

u_s = sgn r_s + e_{s,s-1} u_{s-1} + ... + e_{s,0} u_0.

Taking the inner product of both sides of the equation with the vector r_s,

9) u_s · r_s = sgn r_s · r_s + e_{s,s-1} u_{s-1} · r_s + ... + e_{s,0} u_0 · r_s.

However, Ax_s - b = r_s, and by formula 4),

x_s = x_{s-1} - λ_{s-1} v_{s-1}.

Substituting, we have

Ax_{s-1} - λ_{s-1} Av_{s-1} - b = r_s.

But Ax_{s-1} - b = r_{s-1}; hence,

r_s = r_{s-1} - λ_{s-1} Av_{s-1}.

Similarly,

r_{s-1} = r_{s-2} - λ_{s-2} Av_{s-2};
r_{s-2} = r_{s-3} - λ_{s-3} Av_{s-3};
. . . . . . . . . .
r_1 = r_0 - λ_0 Av_0.

Then,

r_s = r_0 - λ_0 Av_0 - λ_1 Av_1 - ... - λ_{s-1} Av_{s-1}.

Now, taking the inner product of both sides of this equation with the vector u_j,

u_j · r_s = u_j · r_0 - λ_0 u_j · Av_0 - λ_1 u_j · Av_1 - ... - λ_{s-1} u_j · Av_{s-1}.

Since v_j = A^T u_j, this can be written as follows:

u_j · r_s = u_j · r_0 - λ_0 v_j · v_0 - λ_1 v_j · v_1 - ... - λ_{s-1} v_j · v_{s-1}.


For j = 0, 1, ..., s-1, every term on the right side of the equation is zero except for the first term and the term containing v_j · v_j. Hence,

u_j · r_s = u_j · r_0 - λ_j v_j · v_j.

By formula 7),

λ_j = r_0 · u_j / (v_j · v_j),

and therefore u_j · r_s = 0 for j = 0, 1, ..., s-1. Making this substitution into equation 9) from the previous page,

u_s · r_s = sgn r_s · r_s.

By hypothesis r_s ≠ 0; hence, sgn r_s · r_s > 0. But if u_s = 0, then u_s · r_s = 0. This is a contradiction; thus, u_s ≠ 0.

Case II. A^T u_s = 0 when u_s ≠ 0.

This implies that the vector u_s is orthogonal to each of the column vectors of A. If A is of rank R, there exist R column vectors of A, (α_1, α_2, ..., α_R), which are linearly independent. Therefore, the vectors (α_1, α_2, ..., α_R, u_s) form a linearly independent set.

By formula 8),

u_s = sgn r_s + e_{s,s-1} u_{s-1} + ... + e_{s,0} u_0;
u_{s-1} = sgn r_{s-1} + e_{s-1,s-2} u_{s-2} + ... + e_{s-1,0} u_0;
. . . . . . . . . .
u_0 = sgn r_0.


Upon substitution,

u_s = sgn r_s + e_{s,s-1} sgn r_{s-1} + ... + (e_{s,0} + ... + e_{s,s-1} e_{s-1,s-2} ... e_{1,0}) sgn r_0.

Since at least one of the constants is not equal to zero, the vectors (u_s, sgn r_s, sgn r_{s-1}, ..., sgn r_0) form a linearly dependent set.

Now consider the equation r_j = Ax_j - b. The components of r_j may be written

r_{ji} = (sgn r_{ji}) |r_{ji}|        (i = 1, 2, ..., m).

If any r_{ji} = 0, then sgn r_{ji} = 0 and we may re-define the factor |r_{ji}| as follows: |r_{ji}| is the usual absolute value if r_{ji} ≠ 0, and |r_{ji}| = 1 if r_{ji} = 0. Dividing the i-th row of this system by |r_{ji}|, i = 1, 2, ..., m respectively, the system becomes:

$$\operatorname{sgn} r_j = \begin{pmatrix}
\dfrac{a_{11}}{|r_{j1}|} & \dfrac{a_{12}}{|r_{j1}|} & \cdots & \dfrac{a_{1n}}{|r_{j1}|}\\[1ex]
\dfrac{a_{21}}{|r_{j2}|} & \dfrac{a_{22}}{|r_{j2}|} & \cdots & \dfrac{a_{2n}}{|r_{j2}|}\\
\vdots & & & \vdots\\
\dfrac{a_{m1}}{|r_{jm}|} & \dfrac{a_{m2}}{|r_{jm}|} & \cdots & \dfrac{a_{mn}}{|r_{jm}|}
\end{pmatrix}
\begin{pmatrix} x_{j1}\\ x_{j2}\\ \vdots\\ x_{jn} \end{pmatrix}
- \begin{pmatrix} \dfrac{b_1}{|r_{j1}|}\\[1ex] \dfrac{b_2}{|r_{j2}|}\\ \vdots\\ \dfrac{b_m}{|r_{jm}|} \end{pmatrix}.$$

The vector sgn r_j is a linear combination of these column vectors, and the matrix composed of them is of the same rank as the matrix (A b), since the rows have merely been divided by constants. By the assumption


that a solution exists, rank (A b) = rank A. Hence, for j = 0, 1, ..., R-1, the vectors (sgn r_0, sgn r_1, sgn r_2, ..., sgn r_{R-1}) lie in a subspace of the space spanned by the R independent column vectors of A. Therefore, the vectors (u_s, α_1, α_2, ..., α_R) form a linearly dependent set. This is a contradiction; thus, if a solution exists, A^T u_s ≠ 0 for s ≤ R-1.

Theorem III.

The method presented gives an exact solution, if it exists, in M steps, where M ≤ R. (R is the dimension of the smallest space spanned by the column vectors of A.)

Let M be the first natural number, if it exists, such that x - x_0 is in the subspace spanned by v_0, v_1, ..., v_{M-1}. Then,

x - x_0 = k_0 v_0 + ... + k_{M-1} v_{M-1}        (k_i not all zero);

(x - x_0) · v_i = k_i v_i · v_i;

k_i = (x - x_0) · v_i / (v_i · v_i).

But, from formula 6),

λ_i = -(x - x_0) · v_i / (v_i · v_i);

hence, k_i = -λ_i,


and x = x_0 - λ_0 v_0 - ... - λ_{M-1} v_{M-1}.

From formula 4),

x_1 = x_0 - λ_0 v_0;
x_2 = x_1 - λ_1 v_1 = x_0 - λ_0 v_0 - λ_1 v_1;
. . . . . . . . . .
x_M = x_0 - λ_0 v_0 - ... - λ_{M-1} v_{M-1}.

Therefore, x_M = x.

Theorem IV.

If the direction vectors v_0, v_1, ..., v_{R-1} exist, then v_R is always equal to zero.

Since the direction vectors (v_0, v_1, ...) are linearly independent, the vectors (v_0, v_1, ..., v_{R-1}) span the space and v_R must equal zero.

Theorem V.

If v_i = 0 when r_i ≠ 0, then no solution exists.

For i ≤ R-1 this theorem is just the contrapositive of Theorem II. For i = R, Theorem IV states that v_R always equals zero if v_0, v_1, ..., v_{R-1} exist. If no solution exists, r_R ≠ 0, and the theorem is proved.


ITERATIVE ROUTINE

The formulas developed can now be presented in the following routine:

0) Select an arbitrary vector x_0.

Compute, for i = 0, 1, ..., M-1:

1) r_i = Ax_i - b.

2) grad φ_i = A^T sgn r_i.

3) e_{0,k} = 0;  e_{i,k} = -grad φ_i · v_k / (v_k · v_k)        (k = 0, 1, ..., i-1).

4) u_i = sgn r_i + e_{i,i-1} u_{i-1} + ... + e_{i,0} u_0.

5) v_i = A^T u_i.

6) λ_i = r_0 · u_i / (v_i · v_i).

7) x_{i+1} = x_i - λ_i v_i.
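The routine can be summarized in a short program. The following is a minimal NumPy sketch of steps 0)-7) (my own illustration, not the ALWAC program used for the thesis; the function name, the tolerance parameter, and the termination tests are assumptions):

```python
import numpy as np

def alternating_gradient(A, b, x0, tol=1e-12):
    """Sketch of the alternating gradient routine, steps 0)-7).

    Returns (x, converged). converged is False when some v_i is
    (numerically) zero while r_i is not, which signals an
    inconsistent system (Theorem V).
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float)

    m, n = A.shape
    R = min(m, n)                 # Theorem III: at most R = rank(A) <= min(m, n) steps
    r0 = A @ x - b                # r_0, kept for formula 6) of the routine
    us, vs = [], []               # previously generated u_k and v_k

    for i in range(R):
        r = A @ x - b                          # 1) r_i = A x_i - b
        if np.abs(r).sum() <= tol:             # phi(x_i) = 0: solution found
            return x, True
        grad = A.T @ np.sign(r)                # 2) grad phi_i = A^T sgn r_i
        u = np.sign(r)                         # 4) u_i = sgn r_i + sum e_{i,k} u_k
        for uk, vk in zip(us, vs):
            e = -(grad @ vk) / (vk @ vk)       # 3) e_{i,k}
            u = u + e * uk
        v = A.T @ u                            # 5) v_i = A^T u_i
        if v @ v <= tol:                       # v_i = 0 while r_i != 0: no solution
            return x, False
        lam = (r0 @ u) / (v @ v)               # 6) lambda_i = r_0.u_i / v_i.v_i
        x = x - lam * v                        # 7) x_{i+1} = x_i - lambda_i v_i
        us.append(u)
        vs.append(v)

    return x, np.abs(A @ x - b).sum() <= tol
```

In exact arithmetic the loop terminates in at most R steps (Theorem III); in floating point, the corrections developed in the section on round-off errors can be applied inside the loop.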


A summary of the important attributes of this method is listed below:

1) No restrictions of any kind are placed upon the matrix A.

2) If a solution exists, it is obtained in M ≤ R iterations.

3) If no solution exists, it is indicated in the i-th iteration (i ≤ R).

4) The given matrix A and the vector b are unaltered during the process, so that a maximum of the original data is used. The advantage of having many zeros in the matrix is preserved.

5) At the end of each iteration an estimate of the solution is given, which is an improvement over the one given in the preceding iteration.

6) At the end of any iteration one can start anew, keeping the estimate last obtained as the initial estimate.


EXAMPLES

In this section various examples will be given to demonstrate the method and exploit its use on a high speed computer. Special attention will be given to round-off errors, and comparison will be made with the Conjugate Gradient Method of Hestenes and Stiefel. With the exception of the first two, the results of all examples were obtained by use of the ALWAC III-E Computer, using a single-length floating point routine.

I. Demonstration of Process.

The process of solution for a 3 x 3 system is shown in Table I. The exact solution is obtained in three iterations.

The second example exhibits the process involved in demonstrating the inconsistency of a set of three equations in two unknowns.

II. Comparison with the Conjugate Gradient Method of Hestenes and Stiefel (2, p. 409-436).

The formulas presented by Hestenes and Stiefel for the non-symmetric case were coded using the same relative methods employed in coding the method presented in this paper. No attempt was made to optimize the coding of either method, and no correction of round-off errors was made.

Table II gives the number of main memory channels


used for storage and the running time of various systems of equations. The running time was calculated from the time of the last input until the last output was completed. This includes the time consumed in typing out each successive estimate of the solution. Due to round-off errors the exact solution was not obtained in many cases of ill-conditioned matrices. The relative accuracy of the solution of a few of the many systems tried is given by Tables III, IV, V, and VI.

A summary of the results of the comparison is stated below:

1. The Alternating Gradient Method requires more storage space in a computer. That is, 2n channels are needed in excess of the requirement for the Method of Conjugate Gradients.

2. The time involved in the use of the Alternating Gradient Method becomes increasingly greater than the time involved in the Conjugate Gradient Method as the dimension of the system increases.

3. The Alternating Gradient Method is more accurate, especially in large systems of ill-conditioned equations. However, the Conjugate Gradient Method allows continuance past the n-th step for greater accuracy. This cannot be done with the Method of Alternating Gradients. Both methods allow starting over at any


time, using the last estimate calculated as the initial vector.

III. Inconsistent Systems.

The inconsistency of a system of equations is exhibited by the direction vector v_i = 0 when r_i ≠ 0. Due to round-off errors this condition is not usually satisfied exactly. However, in all cases tried, the inner product of this vector with itself gave a zero divisor which caused the alarm to sound. Furthermore, in all cases tried where a solution did exist, none of the direction vectors was near enough to zero to sound the alarm. An example of the type of results obtained is given in Table VII.

Using the correction formulas that are developed in the next section, the direction vector did equal zero exactly in every inconsistent system tried.
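In terms of the sketch given after the iterative routine, this condition corresponds to the test that v_i · v_i is numerically zero while r_i is not. For instance, on the inconsistent system of Table I, Example 2, the hypothetical alternating_gradient helper defined there signals the failure at the second iteration:

```python
import numpy as np

# Inconsistent system of Table I, Example 2:
#   x1 - x2 = -4,   2*x1 + x2 = 1,   x1 - x2 = 2.
A = np.array([[1.0, -1.0],
              [2.0,  1.0],
              [1.0, -1.0]])
b = np.array([-4.0, 1.0, 2.0])

x, converged = alternating_gradient(A, b, x0=np.array([1.0, 1.0]))
print(converged)   # False: v_1 = 0 while r_1 != 0, so no solution exists
```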


CORRECTION OF ROUND-OFF ERRORS

The formula for the successive approximations of the solution was given as x_{i+1} = x_i - λ_i v_i. This recursive formula presents two possible errors, namely:

1. The direction vector v_i does not give the correct direction, due to round-off errors.

2. The distance, governed by λ_i, from x_i to the new approximation x_{i+1} may be incorrect due to round-off errors.

To correct the first of these, note that v_0 = A^T sgn r_0 must be very accurate, as the only processes involved are the multiplying of each component of the columns of A by ±1 and adding. Hence, by insuring that every v_i that is generated after v_0 forms a mutually orthogonal set with the previously derived direction vectors, their directions are guaranteed. A method for this type of correction is given by Cornelius Lanczos (3, p. 271).

Let v_i = theoretically correct vector;
    v_i' = actual computed vector;
    v_i'' = corrected vector;

then,

v_i'' = v_i' - Σ_{k=0}^{i-1} [(v_i' · v_k) / (v_k · v_k)] v_k.

That this method actually corrects the generated error


is proved as follows. Let e_{i,k}' = e_{i,k} + ε_k (k = 0, 1, ..., i-1) represent the accumulation of the round-off error ε_k. From formula 2),

v_i = grad φ_i + e_{i,i-1} v_{i-1} + ... + e_{i,0} v_0;
v_i' = grad φ_i + e_{i,i-1}' v_{i-1} + ... + e_{i,0}' v_0;
v_i' = grad φ_i + e_{i,i-1} v_{i-1} + ... + e_{i,0} v_0 + Σ_{k=0}^{i-1} ε_k v_k;
v_i' = v_i + Σ_{k=0}^{i-1} ε_k v_k.

Substituting this into the Lanczos formula,

v_i'' = v_i + Σ_{k=0}^{i-1} ε_k v_k - Σ_{k=0}^{i-1} [((v_i + Σ_{j=0}^{i-1} ε_j v_j) · v_k) / (v_k · v_k)] v_k.

Since the v's form a mutually orthogonal set,

v_i'' = v_i + Σ_{k=0}^{i-1} ε_k v_k - Σ_{k=0}^{i-1} [ε_k (v_k · v_k) / (v_k · v_k)] v_k;

v_i'' = v_i.

A formula for the correction of round-off errors of the second type can best be approached in the following geometrical way.

[Diagram: the hyperplane through the solution x orthogonal to v_i; the theoretically correct approximation x_{i+1}, which lies in that hyperplane; and the actually computed approximation x_{i+1}', displaced from the hyperplane by the distance ε along v_i.]


If π is the hyperplane which contains the solution x and is orthogonal to the direction vector v_i, then x_{i+1} is theoretically the next approximation of the solution x. Due to round-off errors, assume that x_{i+1}' is the actual computed approximation; then the error, ε, is the distance between x_{i+1}' and π. This distance can be represented by the inner product of the vector (x_{i+1}' - x) with the normalized vector v_i. That is,

ε = (x_{i+1}' - x) · v_i / |v_i|;

but by formula 6), this may be written

ε = r_{i+1}' · u_i / |v_i|.

The correction formula may now be written as:

x_{i+1}'' = x_{i+1}' - ε v_i / |v_i|;

x_{i+1}'' = x_{i+1}' - [(r_{i+1}' · u_i) / (v_i · v_i)] v_i.

Examples of the corrections instituted by these

formulas are given in Table VIII.
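Both corrections drop directly into the iteration loop. A minimal sketch, assuming the hypothetical alternating_gradient helper outlined after the iterative routine (the function names here are my own, not the thesis's):

```python
import numpy as np

def reorthogonalize(v, vs):
    """Correction of the first type (Lanczos): remove from the computed
    v_i any components along the previously accepted directions v_k."""
    for vk in vs:
        v = v - ((v @ vk) / (vk @ vk)) * vk
    return v

def correct_estimate(A, b, x_next, u, v):
    """Correction of the second type: pull the computed x_{i+1} back onto
    the hyperplane through the solution orthogonal to v_i, using
    x'' = x' - (r'.u_i / v_i.v_i) v_i, where r' = A x' - b."""
    r_next = A @ x_next - b
    return x_next - ((r_next @ u) / (v @ v)) * v
```

Inside the loop, the computed direction would be passed through reorthogonalize after step 5), and the new estimate through correct_estimate after step 7).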


TABLE I

DEMONSTRATION OF PROCESS

Example 1.

x_1 + x_2 - 2x_3 = -1
-x_1 + x_2 + x_3 = -2
x_1 + x_2 + x_3 = 2

       ( 1  1 -2)            (-1)
A =    (-1  1  1)        b = (-2)
       ( 1  1  1)            ( 2)

00)  x_0 = (1, 1, 1).

11)  r_0 = Ax_0 - b = (0, 1, 3) - (-1, -2, 2) = (1, 3, 1).

12)  grad φ_0 = A^T sgn r_0 = A^T (1, 1, 1) = (1, 3, 0).

13)  e_{0,0} = 0.

14)  u_0 = sgn r_0 = (1, 1, 1).

15)  v_0 = A^T u_0 = (1, 3, 0).

16)  λ_0 = r_0 · u_0 / (v_0 · v_0) = (1 + 3 + 1) / (1 + 9 + 0) = 1/2.

17)  x_1 = x_0 - λ_0 v_0 = (1, 1, 1) - (1/2)(1, 3, 0) = (1/2, -1/2, 1).

21)  r_1 = Ax_1 - b = (-2, 0, 1) - (-1, -2, 2) = (-1, 2, -1).

22)  grad φ_1 = A^T sgn r_1 = A^T (-1, 1, -1) = (-3, -1, 2).

23)  e_{1,0} = -grad φ_1 · v_0 / (v_0 · v_0) = -(-3 - 3 + 0) / 10 = 3/5.

24)  u_1 = sgn r_1 + e_{1,0} u_0 = (-1, 1, -1) + (3/5)(1, 1, 1) = (-2/5, 8/5, -2/5).

25)  v_1 = A^T u_1 = (-12/5, 4/5, 10/5).

26)  λ_1 = r_0 · u_1 / (v_1 · v_1) = 4 / (52/5) = 5/13.

27)  x_2 = x_1 - λ_1 v_1 = (1/2, -1/2, 1) - (5/13)(-12/5, 4/5, 10/5) = (37/26, -21/26, 6/26).

31)  r_2 = Ax_2 - b = (15/13, 0, -15/13).

32)  grad φ_2 = A^T sgn r_2 = A^T (1, 0, -1) = (0, 0, -3).

33)  e_{2,1} = -grad φ_2 · v_1 / (v_1 · v_1) = 6 / (52/5) = 15/26;
     e_{2,0} = -grad φ_2 · v_0 / (v_0 · v_0) = 0 / 10 = 0.

34)  u_2 = sgn r_2 + e_{2,1} u_1 + e_{2,0} u_0 = (1, 0, -1) + (15/26)(-2/5, 8/5, -2/5) = (10/13, 12/13, -16/13).

35)  v_2 = A^T u_2 = (-18/13, 6/13, -24/13).

36)  λ_2 = r_0 · u_2 / (v_2 · v_2) = (30/13) / (936/169) = 5/12.

37)  x_3 = x_2 - λ_2 v_2 = (37/26, -21/26, 6/26) - (5/12)(-18/13, 6/13, -24/13) = (2, -1, 1).

This is the exact solution.
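As a numerical cross-check, the sketch given after the iterative routine (the hypothetical alternating_gradient helper) reproduces this three-step convergence:

```python
import numpy as np

A = np.array([[ 1.0, 1.0, -2.0],
              [-1.0, 1.0,  1.0],
              [ 1.0, 1.0,  1.0]])
b = np.array([-1.0, -2.0, 2.0])

x, converged = alternating_gradient(A, b, x0=np.array([1.0, 1.0, 1.0]))
print(x, converged)   # approximately [ 2. -1.  1.] True
```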

Example 2.

x_1 - x_2 = -4
2x_1 + x_2 = 1
x_1 - x_2 = 2

       (1 -1)            (-4)
A =    (2  1)        b = ( 1)
       (1 -1)            ( 2)

00)  x_0 = (1, 1).

11)  r_0 = Ax_0 - b = (0, 3, 0) - (-4, 1, 2) = (4, 2, -2).

12)  grad φ_0 = A^T sgn r_0 = A^T (1, 1, -1) = (2, 1).

13)  e_{0,0} = 0.

14)  u_0 = sgn r_0 = (1, 1, -1).

15)  v_0 = A^T u_0 = (2, 1).

16)  λ_0 = r_0 · u_0 / (v_0 · v_0) = (4 + 2 + 2) / (4 + 1) = 8/5.

17)  x_1 = x_0 - λ_0 v_0 = (1, 1) - (8/5)(2, 1) = (-11/5, -3/5).

21)  r_1 = Ax_1 - b = (-8/5, -5, -8/5) - (-4, 1, 2) = (12/5, -6, -18/5).

22)  grad φ_1 = A^T sgn r_1 = A^T (1, -1, -1) = (-2, -1).

23)  e_{1,0} = -grad φ_1 · v_0 / (v_0 · v_0) = -(-4 - 1) / 5 = 1.

24)  u_1 = sgn r_1 + e_{1,0} u_0 = (1, -1, -1) + (1, 1, -1) = (2, 0, -2).

25)  v_1 = A^T u_1 = (2 + 0 - 2, -2 + 0 + 2) = (0, 0).

This indicates that no solution exists.


TABLE II

COMPARISON OF METHODS

Storage Space in Main Memory of Computer:

                                Alternating Gradient    Conjugate Gradient
  control routine               4 channels              4 channels
  sub-routines (floating pt.)   5 channels              5 channels
  general storage               3n+6 channels           n+5 channels
  totals                        3n+15 channels          n+14 channels

Running Time:

  Dimensions of System    Alternating Gradient      Conjugate Gradient
  2 x 2                   31 sec.                   50 sec.
  3 x 3                   1 min. 14 sec.            1 min. 10 sec.
  4 x 4                   2 min. 22 sec.            2 min. 12 sec.
  6 x 6                   6 min. 26 sec.            5 min. 26 sec.
  8 x 8                   15 min. 40 sec.           11 min. 6 sec.
  16 x 16                 1 hr. 27 min. …2 sec.     1 hr. 6 min. … sec.


TABLE III

COMPARISON OF METHODS

Two Equations in Two Unknowns:

Example 1. (well-conditioned)

Ratio of largest to smallest eigenvalues is 2.

  (1  2) (x_1)   (3)
  (1  0) (x_2) = (1)

Initial vector x_0 = (5, 8).

         Exact Solution    Alternating Gradient    Conjugate Gradient
  x_1 =  1.0000000         1.0000000               0.999999
  x_2 =  1.0000000         1.0000000               0.9999991

Example 2. (ill-conditioned)

Ratio of largest to smallest eigenvalues is 20,000.

  (200        256   ) (x_1)   (456     )
  (155.5625   199.08) (x_2) = (354.6425)

Initial vector x_0 = (5, 8).

         Exact Solution    Alternating Gradient    Conjugate Gradient
  x_1 =  1.0000000         1.0018341               0.8772738
  x_2 =  1.0000000         0.9985004               1.7127669


TABLE IV

COMPARISON OF METHODS

Four Equations in Four Unknowns:

Example 1. (well-conditioned)

Ratio of largest to smallest eigenvalues is 9.47.

  (-2  1  0  0) (x_1)   (-4)
  ( 1 -2  1  0) (x_2) = ( 5)
  ( 0  1 -2  1) (x_3)   ( 1)
  ( 0  0  1 -2) (x_4)   (-6)

Initial vector x_0 = (1, 1, 1, 1).

         Exact Solution    Alternating Gradient    Conjugate Gradient
  x_1 =   1.0000000         0.9999999               0.9999978
  x_2 =  -2.0000000        -1.9999999              -1.9999988
  x_3 =   0.0000000         0.0000000               0.0000012
  x_4 =   3.0000000         2.999998                2.999977

Example 2. (ill-conditioned)

Ratio of largest to smallest eigenvalues is 2984.

  (10  7  8  7) (x_1)   (32)
  ( 7  5  6  5) (x_2) = (23)
  ( 8  6 10  9) (x_3)   (33)
  ( 7  5  9 10) (x_4)   (31)

Initial vector x_0 = (0, 0, 0, 0).

         Exact Solution    Alternating Gradient    Conjugate Gradient
  x_1 =  1.0000000          0.9999721               1.0953595
  x_2 =  1.0000000          1.0000532               0.8401629
  x_3 =  1.0000000          0.9999796               1.0585410
  x_4 =  1.0000000          1.0000264               0.9757126


TABLE V

COMPARISON OF METHODS

Eight Equations in Eight Unknowns:

Example 1. (well-conditioned)

Ratio of largest to smallest eigenvalues is 30.39.

  (10  7  8  7  6  8  5  4) (x_1)   (55)
  ( 7  5  6  5  7  7  4  9) (x_2)   (50)
  ( 8  6 10  9  8  5  8  2) (x_3)   (56)
  ( 7  5  9 10  5  9  6  8) (x_4) = (59)
  ( 6  7  8  5  5  4  7  6) (x_5)   (48)
  ( 8  7  5  9  4 10  9  5) (x_6)   (57)
  ( 5  4  8  6  7  9 10  1) (x_7)   (50)
  ( 4  9  2  8  6  5  1  5) (x_8)   (40)

x_0 = (5, 2, -1, 4, -3, 1.5, 2, -4).

         Exact Solution    Alternating Gradient    Conjugate Gradient
  x_1 =  1.0000000          0.9999989               1.00544
  x_2 =  1.0000000          1.0000005               1.0004616
  x_3 =  1.0000000          0.9999998               0.9980522
  x_4 =  1.0000000          0.9999988               0.9957286
  x_5 =  1.0000000          1.0000008               1.0026373
  x_6 =  1.0000000          1.0000002               1.0021600
  x_7 =  1.0000000          0.9999993               0.9993067
  x_8 =  1.0000000          0.9999996               0.9988578

TABLE V (CONT.)

Example 2. (ill-conditioned)

Ratio of largest to smallest eigenvalues is 667.69.

  (1   1   1   1   1   1   1   1 ) (x_1)   (8.0)
  (1   1   1   1   1   1   1  .9 ) (x_2)   (7.9)
  (1   1   1   1   1   1  .9  .9 ) (x_3)   (7.8)
  (1   1   1   1   1  .9  .9  .9 ) (x_4) = (7.7)
  (1   1   1   1  .9  .9  .9  .9 ) (x_5)   (7.6)
  (1   1   1  .9  .9  .9  .9  .9 ) (x_6)   (7.5)
  (1   1  .9  .9  .9  .9  .9  .9 ) (x_7)   (7.4)
  (1  .9  .9  .9  .9  .9  .9  .9 ) (x_8)   (7.3)

x_0 = (5, 2, -1, 4, -3, 1.5, 2, -4).

         Exact Solution    Alternating Gradient    Conjugate Gradient
  x_1 =  1.0000000          1.0000303               0.9862018
  x_2 =  1.0000000          0.9999988               1.0203175
  x_3 =  1.0000000          1.0000105               0.9771331
  x_4 =  1.0000000          0.9999942               1.0148934
  x_5 =  1.0000000          1.0000108               0.9934853
  x_6 =  1.0000000          0.9999983               1.0067546
  x_7 =  1.0000000          0.9999836               0.9880737
  x_8 =  1.0000000          1.0000003               1.0132430


TABLE VI

COMPARISON OF METHODS

Sixteen Equations in Sixteen Unknowns:

Example 1.

1 2-7 4-1 0-2 1 4 3-2 0-1 2 1 0

2 4 3-1 4-2 2 1 2-1 0-4 1-1 0

1-13113-311102120-1 1 7 2 2 1 1 -1 0 -2 4 0 -1 -2 4 -1 -2

3 3 1 3 2-1 1 0 3 2 0--1-2-1-1 1 1 7-1 2 0 1 -4 1 1 0 1 1-1-2 2 2 1 4-1 7 1 2 2 1-1 0 1 0-1 1

- 8 1 1-1-2-2 1-1 1 1 0-1 -2 O

1-1-1 1 0 1 1-1 4 1 1 5-1-3 1 0

2 4-1-1 4 4 2 4 0-4 0-1 3-1--1

2 1-1-2 1 2 3 1-6 0 1 0 2 2-1 1

3 1 1 2 0 3 1 0- O-2-1 1-1 1 0

i i 1 2 1-1 1 0-1-1 3 4 0-2 0 1

4 3-2 3 1 4-1-3 1 0-2-1 0 1 0-2 2-2 1 1-1 0 0 2 0-1 1 1 3 2-1-1 1 1-1 4 0-1 0 1 1 1 1 3-1 O-1-2

b= (21 10 6 14 0 -2 14 8 16 10 6 11 6 -7 13).

x0= (1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1).


TABLE VI (CONT.)

         Exact Solution    Alternating Gradient    Conjugate Gradient
  x_1 =  -1.0000000         -0.9999985              -0.8892401
  x_2 =   2.0000000          2.0000001               1.8488161
  x_3 =  -2.0000000         -2.0000007              -1.824722
  x_4 =   1.0000000          0.9999993               1.1011682
  x_5 =   2.0000000          1.9999999               2.0104537
  x_6 =   1.0000000          1.0000019               0.7997739
  x_7 =  -1.0000000         -1.0000017              -0.8470299
  x_8 =   2.0000000          2.0000014               1.6985474
  x_9 =  -1.0000000         -0.9999993              -1.0599308
  x_10 =  1.0000000          0.9999978               1.5231497
  x_11 =  0.0000000          0.0000050              -0.6304432
  x_12 =  2.0000000          1.9999982               2.2931508
  x_13 =  0.0000000         -0.0000036               0.1149432
  x_14 = -1.0000000         -1.0000004              -0.9168063
  x_15 =  1.0000000          1.0000024               0.5898514
  x_16 =  0.0000000         -0.0000009               0.4474895


TABLE VII

INCONSISTENT SYSTEMS

Example 1.

  ( 1  4 -3  9) (x_1)   (11)
  (-2  7  1  0) (x_2) = (11)
  (-1  3 -2  1) (x_3)   ( 1)
  ( 2  8 -6 18) (x_4)   ( 0)

The ratio of the largest to smallest eigenvalue, excluding the zero eigenvalue, is 7.9269.

x_0 = (0, 0, 0, 0).

λ_0 = 0.0508474,  λ_1 = 0.6391669,  λ_2 = 0.0295198.

r_3 = (-5.5000004, 10.9999942, -5.5000019, 10.9999890).

v_3 = (0.0000006, -0.0000005, 0.0000012, 0.0000013).

Example 2.

  (1           .5           .333333333   .25        ) (x_1)   (1)
  (.5          .333333333   .25          .2         ) (x_2) = (0)
  (.33333333   .25          .2           .166666667 ) (x_3)   (0)
  (.71428571   .257714236   .1047619     .1428714   ) (x_4)   (0)

The ratio of the largest to smallest eigenvalues, excluding the zero eigenvalue, is 10.2273959.

x_0 = (1, 1, 1, 1).

λ_0 = 0.49084837,  λ_1 = 37.055620.

r_2 = (-0.0000000, -0.1815374, -0.333311, 0.5714285).

v_2 = (-0.0000000, -0.0000000, -0.0000000, -0.0000000).


TABLE VIII

CORRECTION OF ROUND-OFF ERRORS

Example 1.

Two Equations in Two Unknowns:

The system given in Table III, Example 2, is used as the first example. This system was ill-conditioned.

x_0 = (5, 8).

                           Method of Alternating Gradients
         Exact Solution    Without Correction    With Correction
  x_1 =  1.0000000         1.0018341             1.0018177
  x_2 =  1.0000000         0.9985004             0.9985353

Example 2.

Four Equations in Four Unknowns:

This system is given in Table IV, Example 2. The system is ill-conditioned.

x_0 = (0, 0, 0, 0).

                           Method of Alternating Gradients
         Exact Solution    Without Correction    With Correction
  x_1 =  1.0000000         0.9999721             1.0000072
  x_2 =  1.0000000         1.0000532             0.9999938
  x_3 =  1.0000000         0.9999796             1.0000041
  x_4 =  1.0000000         1.0000264             0.9999975


TABLE VIII (CONT.)

Example 3.

Eight Equations in Eight Unknowns:

This system is given in Table V, Example 2. It was not too badly conditioned, the ratio of its largest to smallest eigenvalues being 667.69.

x_0 = (5, 2, -1, 4, -3, 1.5, 2, -4).

                           Method of Alternating Gradients
         Exact Solution    Without Correction    With Correction
  x_1 =  1.0000000         1.0000303             1.0000036
  x_2 =  1.0000000         0.9999988             0.9999996
  x_3 =  1.0000000         1.0000105             0.9999992
  x_4 =  1.0000000         0.9999942             0.9999986
  x_5 =  1.0000000         1.0000108             0.9999993
  x_6 =  1.0000000         0.9999983             0.9999995
  x_7 =  1.0000000         0.9999886             0.999997
  x_8 =  1.0000000         1.0000003             0.9999998


BIBLIOGRAPHY

1. Forsythe, G. E. and T. S. Motzkin. Acceleration of the optimum gradient method. Preliminary report. Bulletin of the American Mathematical Society 57:304-305. 1951. (Abstract).

2. Hestenes, Magnus R. and Eduard Stiefel. Methods of conjugate gradients for solving linear systems. U.S. National Bureau of Standards Journal of Research 49:409-436. 1952.

3. Lanczos, Cornelius. An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. U.S. National Bureau of Standards Journal of Research 45:255-282. 1950.

4. Stein, Marvin L. Gradient methods in the solution of systems of linear equations. U.S. National Bureau of Standards Journal of Research 48:407-413. 1952.

5. Temple, G. The general theory of relaxation methods applied to linear systems. Proceedings of the Royal Society of London, Series A 169:476-500. 1939.