
Chapter 10 Matrices and Linear Equations

10.1 Introduction

Just as functions act upon numbers, we shall see that matrices act upon vectors and are mappings from one vector space to another.

10.2 Matrices and Matrix Algebra

A matrix is a rectangular array of quantities that are called the elements of the matrix. Let us consider the elements to be real numbers. Matrix A may be expressed as

A = [ a11 a12 ... a1n
      a21 a22 ... a2n
      ...
      am1 am2 ... amn ].   (1)

A horizontal line of elements is called a row, and a vertical line of elements is called a column. The first subscript on aij is the row index, and the second subscript is the column index. The matrix can be expressed in different forms. For example, a 2 × 2 matrix may be written out as

A = [ a b ; c d ] = [ a11 a12 ; a21 a22 ],   (2)

and, in compact subscript notation, an m × n matrix may be written as

A = {aij}.   (3)

The aij in Eq. (3) is called the ij element, where i = 1, …, m and j = 1, …, n.

Two matrices are said to be equal if they are of the same form and if their corresponding elements are equal.

Matrix addition. If A = {aij} and B = {bij} are any two matrices of the same form, say m × n, then their sum A + B is

A + B = {aij + bij}   (4)

and is itself an m × n matrix.

Scalar multiplication. If A = {aij} is any m × n matrix and c is any scalar, their product is defined as

cA = {c aij}.   (7)
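Since both operations in Eqs. (4) and (7) are defined elementwise, they are straightforward to express in code. A minimal Python sketch (the function names are ours, purely illustrative):

```python
# Elementwise matrix addition and scalar multiplication, per Eqs. (4) and (7).
# Matrices are represented as lists of rows.

def mat_add(A, B):
    """Sum of two m x n matrices of the same form: {a_ij + b_ij}."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "matrices must have the same form"
    return [[a + b for a, b in zip(rowA, rowB)] for rowA, rowB in zip(A, B)]

def scal_mult(c, A):
    """Scalar multiple of an m x n matrix: {c * a_ij}."""
    return [[c * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))    # [[6, 8], [10, 12]]
print(scal_mult(2, A))  # [[2, 4], [6, 8]]
```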

THEOREM 10.2.1 Properties of Matrix Addition and Scalar Multiplication
If A, B, and C are m × n matrices, O is an m × n zero matrix, and α, β are scalars, then

A + B = B + A,   (commutativity) (9a)
(A + B) + C = A + (B + C),   (associativity) (9b)
A + O = A,   (9c)
A + (−A) = O,   (9d)
α(βA) = (αβ)A,   (associativity) (9e)
(α + β)A = αA + βA,   (distributivity) (9f)
α(A + B) = αA + αB,   (distributivity) (9g)
1A = A,   (9h)
0A = O,   (9i)
αO = O.   (9j)

The definitions of addition and scalar multiplication above are identical to those introduced in Sec. 9.4 for n-tuple vectors. We may refer to the matrices

a = [ a11 ... a1n ]   and   a = [ a11 ; ... ; an1 ]   (10)

as n-dimensional row and column vectors, respectively.

Cayley product. If A = {aij} is any m × n matrix and B = {bij} is any n × p matrix, then the product AB is defined as

AB = { Σk=1..n aik bkj }.   (11)

If the number of columns of A is equal to the number of rows of B, then A and B are said to be conformable for multiplication; if not, the product AB is not defined. If we denote AB = C = {cij}, then

cij = Σk=1..n aik bkj.   (12)
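The Cayley product of Eq. (12) translates directly into a sum over the inner index k. A minimal Python sketch (names illustrative):

```python
# Cayley product of an m x n matrix A and an n x p matrix B, per Eq. (12):
# c_ij = sum over k = 1..n of a_ik * b_kj.

def mat_mult(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "A and B are not conformable for multiplication"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mult(A, B))  # [[19, 22], [43, 50]]
```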

It is extremely important to see that matrix multiplication is not, in general, commutative; that is,

AB ≠ BA   (15)

except in exceptional cases.

The system of m linear algebraic equations

a11 x1 + a12 x2 + ... + a1n xn = c1,
a21 x1 + a22 x2 + ... + a2n xn = c2,
...
am1 x1 + am2 x2 + ... + amn xn = cm   (17)

in the n unknowns x1, …, xn is equivalent to the single compact matrix equation

Ax = c,   (18)

where

A = [ a11 a12 ... a1n
      a21 a22 ... a2n
      ...
      am1 am2 ... amn ],   x = [ x1 ; ... ; xn ],   and   c = [ c1 ; ... ; cm ].   (19)

A is called the coefficient matrix.

Any n × n matrix is said to be square, and of order n; the elements a11, a22, …, ann are said to lie on the main diagonal of A.

If A = {aij} is square and p is any positive integer, we define

A^p = A A ... A   (p factors).   (21)

The familiar laws of exponents,

A^p A^q = A^(p+q),   (A^p)^q = A^(pq),   (22)

follow for any positive integers p and q.

If, in particular, the only nonzero elements of a square matrix lie on the main diagonal, A is said to be a diagonal matrix. For example,

D = [ d11 0 ... 0
      0 d22 ... 0
      ...
      0 0 ... dnn ].   (23)

If, furthermore, d11 = d22 = ... = dnn = 1, then D is called the identity matrix I. Thus,

I = [ 1 0 ... 0
      0 1 ... 0
      ...
      0 0 ... 1 ] = {δij},   (25)

where δij is the Kronecker delta symbol defined by

δij = 1 if i = j,   δij = 0 if i ≠ j.   (26)

The key property of the identity matrix is that if A is any square matrix, then

IA = AI = A.   (27)

If A is any n × n matrix, we also define

A^0 ≡ I,   (28)

where I is an n × n identity matrix.

THEOREM 10.2.2 "Exceptional" Properties of Matrix Multiplication
(i) AB ≠ BA in general.
(ii) Even if A ≠ 0, AB = AC does not imply that B = C.
(iii) AB = 0 does not imply that A = 0 and/or B = 0.
(iv) A^2 = I does not imply that A = I or A = −I.
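All four "exceptional" properties of Theorem 10.2.2 can be confirmed with small concrete matrices. A Python sketch (the particular 2 × 2 matrices are our own choices, not from the text):

```python
# Concrete 2 x 2 counterexamples for Theorem 10.2.2.

def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# (i) AB != BA in general
A = [[1, 1], [0, 0]]
B = [[1, 0], [0, 0]]
assert mat_mult(A, B) != mat_mult(B, A)

# (ii) AB = AC even though B != C and A != 0
A = [[1, 0], [0, 0]]
B = [[1, 0], [0, 0]]
C = [[1, 0], [5, 5]]
assert mat_mult(A, B) == mat_mult(A, C) and B != C

# (iii) AB = 0 with neither factor zero
D = [[0, 0], [0, 1]]
assert mat_mult(A, D) == [[0, 0], [0, 0]]

# (iv) P^2 = I although P is neither I nor -I
P = [[0, 1], [1, 0]]
assert mat_mult(P, P) == [[1, 0], [0, 1]]
print("all four 'exceptional' properties illustrated")
```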

THEOREM 10.2.3 "Ordinary" Properties of Matrix Multiplication
If α, β are scalars, and the matrices A, B, C are suitably conformable, then

A(BC) = (AB)C,   (associativity) (30a)
(αA)B = A(αB) = α(AB),   (associativity) (30b)
(A + B)C = AC + BC,   (distributivity) (30c)
C(A + B) = CA + CB,   (distributivity) (30d)
A(αB + βC) = αAB + βAC.   (linearity) (30e)

Partitioning. The idea of partitioning is that any matrix A (larger than 1 × 1) may be partitioned into a number of smaller matrices called blocks by vertical lines that extend from top to bottom and horizontal lines that extend from left to right. For example,

A = [ 2 0 3
      5 2 7
      1 3 0
      0 4 6 ] = [ A11 A12 ; A21 A22 ; A31 A32 ],   (31)

for one choice of partition lines.

EXAMPLE 8. If

A = [ 2 4 1 ; 1 3 0 ; 5 4 6 ] = [ A11 A12 ; A21 A22 ]   and   B = [ 0 1 3 ; 2 4 1 ; 5 8 2 ] = [ B11 B12 ; B21 B22 ],   (37)

then the product may be computed blockwise,

AB = [ A11 B11 + A12 B21   A11 B12 + A12 B22
       A21 B11 + A22 B21   A21 B12 + A22 B22 ].   (38)

Such block operations are valid if m = p and n = q and each Aij block is of the same form as the corresponding Bij block.

10.3 The Transpose Matrix

Given any m × n matrix A = {aij}, we define the transpose of A, denoted as AT and read as "A-transpose," by

(AT)ij = aji,   (1)

that is,

AT = [ a11 a21 ... am1
       a12 a22 ... am2
       ...
       a1n a2n ... amn ].   (2)

THEOREM 10.3.1 Properties of the Transpose

(AT)T = A,   (3a)
(A + B)T = AT + BT,   (3b)
(αA)T = αAT,   (3c)
(AB)T = BT AT.   (3d)

Proof of (3d): Let AB ≡ C = {cij}, so that

cij = Σk=1..n aik bkj.   (4)

Then

(CT)ij = cji = Σk=1..n ajk bki = Σk=1..n (BT)ik (AT)kj,   (5)

which says that CT = BT AT, that is, (AB)T = BT AT. Repeated application gives

(ABC)T = CT BT AT,   (ABCD)T = DT CT BT AT,   (7)

and so on.

Let

x = [ x1 ; ... ; xn ]   and   y = [ y1 ; ... ; yn ].

Then the standard dot product is

x · y = x1 y1 + x2 y2 + ... + xn yn = Σj xj yj,   (8)

and in matrix language

x · y = xT y.   (9)

If

AT = A,   (10)

we say that A is symmetric, and if AT = −A we say that it is skew-symmetric (or antisymmetric).
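Property (3d) is easy to spot-check numerically. A Python sketch (the matrices here are arbitrary illustrations):

```python
# Check (AB)^T = B^T A^T, Eq. (3d), on a 2x3 / 3x2 pair.

def transpose(A):
    """A^T: the ij element of A^T is a_ji, per Eq. (1)."""
    return [[A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 0], [3, -1, 4]]    # 2 x 3
B = [[2, 1], [0, 5], [-2, 3]]  # 3 x 2
assert transpose(mat_mult(A, B)) == mat_mult(transpose(B), transpose(A))
print("(AB)^T == B^T A^T verified")
```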

10.4 Determinants

We denote the determinant of an n × n matrix A = {aij} as

det A = | a11 a12 ... a1n
          a21 a22 ... a2n
          ...
          an1 an2 ... ann |.   (1)

The determinant of an n × n matrix A = {aij} is defined by the cofactor expansion

det A = Σ ajk Ajk,   (2)

where the summation is carried out on j for any fixed value of k (1 ≤ k ≤ n) or on k for any fixed value of j (1 ≤ j ≤ n). Ajk is called the cofactor of the ajk element and is defined as

Ajk = (−1)^(j+k) Mjk,   (3)

where Mjk is called the minor of ajk, namely, the determinant of the (n−1) × (n−1) matrix that survives when the row and the column containing ajk are struck out.

A square matrix A = {aij} is upper triangular if aij = 0 for all j < i and lower triangular if aij = 0 for all j > i. If a matrix is upper triangular or lower triangular, it is said to be triangular.

EXAMPLE 1. Find the determinant of the matrix

A = [ 0 2 −1 ; 4 3 5 ; 2 0 −4 ].

Expanding along the first row,

det A = a11 (−1)^(1+1) M11 + a12 (−1)^(1+2) M12 + a13 (−1)^(1+3) M13
      = a11 M11 − a12 M12 + a13 M13;

expanding along the second row,

det A = a21 (−1)^(2+1) M21 + a22 (−1)^(2+2) M22 + a23 (−1)^(2+3) M23
      = −a21 M21 + a22 M22 − a23 M23;

or, expanding down the third column,

det A = a13 (−1)^(1+3) M13 + a23 (−1)^(2+3) M23 + a33 (−1)^(3+3) M33
      = a13 M13 − a23 M23 + a33 M33
      = (−1) | 4 3 ; 2 0 | − (5) | 0 2 ; 2 0 | + (−4) | 0 2 ; 4 3 |
      = (−1)(−6) − (5)(−4) + (−4)(−8) = 58.
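The cofactor expansion of Eqs. (2)-(3) translates directly into a recursive routine. A Python sketch (illustrative only; cofactor expansion is far too slow for large n):

```python
def det(A):
    """Determinant by cofactor expansion along the first row, per Eqs. (2)-(3)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for k in range(n):
        # minor: strike out row 1 and column k+1 (0-indexed: row 0, column k)
        minor = [row[:k] + row[k+1:] for row in A[1:]]
        total += (-1) ** k * A[0][k] * det(minor)
    return total

A = [[0, 2, -1], [4, 3, 5], [2, 0, -4]]
print(det(A))  # 58, agreeing with Example 1
```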

Properties of Determinants

D1. If any row (or column) of A is modified by adding α times the corresponding elements of another row (or column) to it, yielding a new matrix B, then det B = det A. Symbolically: rj → rj + αrk.

D2. If any two rows (or columns) of A are interchanged, yielding a new matrix B, then det B = −det A. Symbolically: rj ↔ rk.

D3. If A is triangular,

A = [ a11 a12 ... a1n ; 0 a22 ... a2n ; ... ; 0 0 ... ann ]   (upper triangular)

or

A = [ a11 0 ... 0 ; a21 a22 ... 0 ; ... ; an1 an2 ... ann ]   (lower triangular),

then det A is simply the product of the diagonal elements, det A = a11 a22 ... ann.

D4. If all the elements of any row or column are zero, then det A = 0.

D5. If any two rows or columns are proportional to each other, then det A = 0.

D6. If any row (column) is a linear combination of other rows (columns), then det A = 0.

D7. If all the elements of any row or column are scaled by α, yielding a new matrix B, then det B = α det A.

D8. det(αA) = α^n det A.

D9. If any one row (or column) a of A is separated as a = b + c, then

det [ ... a ... ] = det [ ... b ... ] + det [ ... c ... ].

D10. The determinant of A and its transpose are equal,

det(AT) = det A.   (11)

D11. In general,

det(A + B) ≠ det A + det B.

D12. The determinant of a product equals the product of the determinants,

det(AB) = (det A)(det B).   (12)

10.5 Rank; Application to Linear Dependence and to Existence and Uniqueness for Ax = c

DEFINITION 10.5.1 Rank
A matrix A, not necessarily square, is of rank r, written r(A) = r, if it contains at least one r × r submatrix with nonzero determinant but no square submatrix larger than r × r with nonzero determinant. A matrix is of rank 0 if it is a zero matrix.

EXAMPLE 1. Let

A = [ 2 1 1 0 ; 0 3 3 6 ; 1 4 5 9 ].   (1)

We may regard the rows of an m × n matrix A = {aij} as n-dimensional vectors, which we call the row vectors of A and which we denote as r1, …, rm. Similarly, the columns are m-dimensional vectors, which we call the column vectors of A and which we denote as c1, …, cn. Further, we define the vector spaces span{r1, …, rm} and span{c1, …, cn} as the row and column spaces of A, respectively.

The elementary row operations:

1. Addition of a multiple of one row to another. Symbolically: rj → rj + αrk.
2. Multiplication of a row by a nonzero constant. Symbolically: rj → αrj.
3. Interchange of two rows. Symbolically: rj ↔ rk.

THEOREM 10.5.1 Elementary Row Operations and Rank
Row-equivalent matrices have the same rank. That is, elementary row operations do not alter the rank of a matrix: if A is carried into B by elementary row operations, then r(A) = r(B).

THEOREM 10.5.2 Rank and Linear Dependence
For any matrix A, the number of LI (linearly independent) row vectors is equal to the number of LI column vectors, and these, in turn, equal the rank of A.
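Because elementary row operations preserve rank (Theorem 10.5.1), the rank can be computed by Gaussian elimination and counting the nonzero pivot rows, which is far cheaper than testing submatrix determinants. A Python sketch (the `tol` tolerance is our own illustrative choice for floating-point data):

```python
def rank(A, tol=1e-12):
    """Rank via elementary row operations (Gaussian elimination with row swaps)."""
    M = [row[:] for row in A]          # work on a copy
    m, n = len(M), len(M[0])
    r, col = 0, 0
    while r < m and col < n:
        pivot = max(range(r, m), key=lambda i: abs(M[i][col]))
        if abs(M[pivot][col]) < tol:
            col += 1                   # no pivot in this column
            continue
        M[r], M[pivot] = M[pivot], M[r]            # interchange rows
        for i in range(r + 1, m):                  # eliminate below the pivot
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r, col = r + 1, col + 1
    return r

A = [[2, 1, 1, 0], [0, 3, 3, 6], [1, 4, 5, 9]]    # the matrix of Example 1
print(rank(A))  # 3
```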

EXAMPLE 5. Application to Stoichiometry. Consider the four reactions

CO + (1/2) O2 → CO2,   (4a)
H2 + (1/2) O2 → H2O,   (4b)
CH4 + (3/2) O2 → CO + 2 H2O,   (4c)
CH4 + 2 O2 → CO2 + 2 H2O,   (4d)

or, with all species moved to one side,

CO + (1/2) O2 − CO2 = 0,
H2 + (1/2) O2 − H2O = 0,
CH4 + (3/2) O2 − CO − 2 H2O = 0,
CH4 + 2 O2 − CO2 − 2 H2O = 0.   (5)

Writing (5) as a homogeneous system, with the species columns ordered CO, O2, CO2, H2, H2O, CH4, the coefficient matrix is

A = [  1  1/2  −1   0   0   0
       0  1/2   0   1  −1   0
      −1  3/2   0   0  −2   1
       0   2   −1   0  −2   1 ].   (6)

Elementary row operations reduce A to

[ 1  1/2  −1   0   0   0
  0   1    0   2  −2   0
  0   0    1   4  −2  −1
  0   0    0   0   0   0 ],   (7)

so r(A) = 3: only three of the four reactions are independent. The nonzero rows of (7) correspond to the independent reactions

CO + (1/2) O2 − CO2 = 0,
O2 + 2 H2 − 2 H2O = 0,
CO2 + 4 H2 − 2 H2O − CH4 = 0.   (8)

EXAMPLE 6. Consider the system

x1 − x2 + x3 + 3 x4 + 2 x6 = 4,
x1 + 3 x3 + 3 x4 − x5 + 6 x6 = 3,
2 x1 − x2 + 2 x3 + x4 − x5 + 7 x6 = 9,
x1 + 5 x3 + 8 x4 − x5 + 7 x6 = 1.

The augmented matrix is

A | c = [ 1 −1 1 3  0 2 | 4
          1  0 3 3 −1 6 | 3
          2 −1 2 1 −1 7 | 9
          1  0 5 8 −1 7 | 1 ].

Elementary row operations reduce it, in turn, to

[ 1 −1 1 3  0 2 |  4
  0  1 2 0 −1 4 | −1
  0  0 2 5  0 1 | −2
  0  0 0 0  0 0 |  0 ]

and finally to the reduced form

[ 1 0 0 −9/2 −1  9/2 |  6
  0 1 0 −5   −1  3   |  1
  0 0 1  5/2  0  1/2 | −1
  0 0 0  0    0  0   |  0 ].

With x4 = α1, x5 = α2, and x6 = α3 arbitrary, this gives

x1 = 6 + (9/2) α1 + α2 − (9/2) α3,
x2 = 1 + 5 α1 + α2 − 3 α3,
x3 = −1 − (5/2) α1 − (1/2) α3.

Conclusions. In vector form the solution is

x = [ 6 ; 1 ; −1 ; 0 ; 0 ; 0 ] + α1 [ 9/2 ; 5 ; −5/2 ; 1 ; 0 ; 0 ] + α2 [ 1 ; 1 ; 0 ; 0 ; 1 ; 0 ] + α3 [ −9/2 ; −3 ; −1/2 ; 0 ; 0 ; 1 ]
  = x0 + α1 x1 + α2 x2 + α3 x3.

x0 is a particular solution of Ax = c, and x1, x2, x3 are homogeneous solutions. That is, A(x0 + α1 x1 + α2 x2 + α3 x3) = c.
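The claimed solution of Example 6 can be verified directly: x0 should satisfy Ax0 = c and each xi should satisfy Axi = 0. A Python check:

```python
# Verify the particular and homogeneous solutions of Example 6.

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1, -1, 1, 3,  0, 2],
     [1,  0, 3, 3, -1, 6],
     [2, -1, 2, 1, -1, 7],
     [1,  0, 5, 8, -1, 7]]
c = [4, 3, 9, 1]

x0 = [6, 1, -1, 0, 0, 0]          # particular solution
x1 = [4.5, 5, -2.5, 1, 0, 0]      # homogeneous solutions (9/2 = 4.5, etc.)
x2 = [1, 1, 0, 0, 1, 0]
x3 = [-4.5, -3, -0.5, 0, 0, 1]

assert mat_vec(A, x0) == c
for xh in (x1, x2, x3):
    assert mat_vec(A, xh) == [0, 0, 0, 0]
print("particular and homogeneous solutions verified")
```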

Suppose that a system Ax = c, where A is m × n, has a p-parameter family of solutions

x = x0 + α1 x1 + ... + αp xp.

Then x0 is necessarily a particular solution, and x1, …, xp are necessarily LI homogeneous solutions. We call span{x1, …, xp} the solution space of the homogeneous equation Ax = 0, or the null space of A. The dimension of that null space is called the nullity of A.

THEOREM 10.5.3 Existence and Uniqueness for Ax = c
Consider the linear system Ax = c, where A is m × n. There is

1. no solution if and only if r(A|c) ≠ r(A);
2. a unique solution if and only if r(A|c) = r(A) = n;
3. an (n − r)-parameter family of solutions if and only if r(A|c) = r(A) ≡ r is less than n.

THEOREM 10.5.4 Homogeneous Case Where A Is m × n
If A is m × n, then Ax = 0

1. is consistent;
2. admits the trivial solution x = 0;
3. admits the unique solution x = 0 if and only if r(A) = n;
4. admits an (n − r)-parameter family of nontrivial solutions, in addition to the trivial solution, if and only if r(A) ≡ r < n.

THEOREM 10.5.5 Homogeneous Case Where A Is n × n
If A is n × n, then Ax = 0 admits nontrivial solutions, besides the trivial solution x = 0, if and only if det A = 0.

EXAMPLE 7. Dimensional Analysis. Consider a rectangular flat plate in steady motion through undisturbed air, as shown in Fig. 1. The object is to conduct an experimental determination of the lift force l generated on the airfoil, that is, to experimentally determine the functional dependence of l on the various relevant quantities. A list of the relevant variables is given in Table 1.

The Buckingham Pi theorem states that, given a relation among n parameters of the form

g(q1, q2, q3, …, qn) = 0,

the n parameters may be grouped into n − m independent dimensionless ratios, or Π parameters, expressed in functional form by

G(Π1, Π2, …, Π(n−m)) = 0   or   Π1 = G1(Π2, Π3, …, Π(n−m)).

The number m is usually, but not always, equal to the minimum number of independent dimensions required to specify the dimensions of all the parameters q1, q2, …, qn.

Next, we seek all possible dimensionless products of the form

Π = A^a B^b α^c V^d V0^e ρ^f μ^g l^h,

where A and B are the plate dimensions (dimension L), α the incidence angle (dimensionless), V the flight speed and V0 the speed of sound (L T^−1), ρ the air density (M L^−3), μ the viscosity (M L^−1 T^−1), and l the lift (M L T^−2). That is, we seek the exponents a, …, h such that

M^0 L^0 T^0 = (L)^a (L)^b (1)^c (L T^−1)^d (L T^−1)^e (M L^−3)^f (M L^−1 T^−1)^g (M L T^−2)^h.

Equating exponents of L, T, M on both sides, we see that a, …, h must satisfy the homogeneous linear system

a + b + d + e − 3f − g + h = 0,
−d − e − g − 2h = 0,
f + g + h = 0.   (26)

Solving Eq. (26) by Gauss elimination gives the five-parameter family of solutions

[ a ; b ; c ; d ; e ; f ; g ; h ] = α1 [ −2 ; 0 ; 0 ; −2 ; 0 ; −1 ; 0 ; 1 ]
                                  + α2 [ 1 ; 0 ; 0 ; 1 ; 0 ; 1 ; −1 ; 0 ]
                                  + α3 [ 0 ; 0 ; 0 ; −1 ; 1 ; 0 ; 0 ; 0 ]
                                  + α4 [ 0 ; 0 ; 1 ; 0 ; 0 ; 0 ; 0 ; 0 ]
                                  + α5 [ −1 ; 1 ; 0 ; 0 ; 0 ; 0 ; 0 ; 0 ],   (27)

where α1, …, α5 are arbitrary constants. With α1 = 1 and α2 = ... = α5 = 0, Eq. (27) gives a = −2, d = −2, f = −1, h = 1, and b = c = e = g = 0, and hence the nondimensional parameter becomes

Π1 = A^−2 B^0 α^0 V^−2 V0^0 ρ^−1 μ^0 l^1,

that is, Π1 = l / (ρ V^2 A^2).

Setting α2 = 1 and the other αj's = 0 gives

Π2 = ρ A V / μ   (Reynolds number, Re).

Setting α3 = −1 and the other αj's = 0 gives

Π3 = V / V0   (Mach number, M).

Setting α4 = 1 and the other αj's = 0 gives

Π4 = α   (incidence angle).

Setting α5 = 1 and the other αj's = 0 gives

Π5 = B / A   (aspect ratio, AR).

Therefore, we can conclude that

l / (ρ V^2 A^2) = f(Re, M, α, AR).   (28)
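Each exponent vector in Eq. (27) must lie in the null space of the 3 × 8 dimension-exponent matrix built from system (26). A Python check (row and column ordering as assumed above; the matrix name is ours):

```python
# Rows: exponents of L, T, M; columns: a, b, c, d, e, f, g, h, per Eq. (26).
D = [[1, 1, 0,  1,  1, -3, -1,  1],   # L
     [0, 0, 0, -1, -1,  0, -1, -2],   # T
     [0, 0, 0,  0,  0,  1,  1,  1]]   # M

solutions = [
    [-2, 0, 0, -2, 0, -1, 0, 1],   # alpha1: l / (rho V^2 A^2)
    [1, 0, 0, 1, 0, 1, -1, 0],     # alpha2: rho A V / mu  (Reynolds number)
    [0, 0, 0, -1, 1, 0, 0, 0],     # alpha3: (alpha3 = -1 gives V / V0, Mach number)
    [0, 0, 1, 0, 0, 0, 0, 0],      # alpha4: incidence angle
    [-1, 1, 0, 0, 0, 0, 0, 0],     # alpha5: B / A  (aspect ratio)
]

for s in solutions:
    assert all(sum(d * e for d, e in zip(row, s)) == 0 for row in D)
print("all five exponent vectors lie in the null space of D")
```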

10.6 Inverse Matrix, Cramer's Rule, Factorization

10.6.1 Inverse matrix

For a system of linear algebraic equations expressed as

Ax = c,   (1)

let us try to find a matrix A^−1 having the property that A^−1 A = I. Then

A^−1 A x = A^−1 c

becomes

I x = A^−1 c,

and since I x = x, we have the solution

x = A^−1 c.

We call A^−1 the inverse of A, or "A-inverse."

It can be shown that

A^−1 = (1 / det A) [ A11 A21 ... An1
                     A12 A22 ... An2
                     ...
                     A1n A2n ... Ann ],   (16)

where Ajk is the cofactor of ajk. The matrix in (16) is called the adjoint of A and is denoted as adj A, so

A^−1 = (1 / det A) adj A.   (17)

If det A ≠ 0, then A^−1 exists; in this case we say that A is invertible. If det A = 0, then A^−1 does not exist, and we say that A is singular.

EXAMPLE 2. Determine the inverse of

A = [ 3 2 −1 ; 0 1 4 ; 1 5 −2 ].

(Try it yourself before reading on.)

For

A = [ 3 2 −1 ; 0 1 4 ; 1 5 −2 ],   det A = −57 ≠ 0,

the transposed matrix of cofactors is

adj A = [ +|1 4 ; 5 −2|   −|0 4 ; 1 −2|   +|0 1 ; 1 5|
          −|2 −1 ; 5 −2|  +|3 −1 ; 1 −2|  −|3 2 ; 1 5|
          +|2 −1 ; 1 4|   −|3 −1 ; 0 4|   +|3 2 ; 0 1| ]^T
      = [ −22 −1 9 ; 4 −5 −12 ; −1 −13 3 ],

so

A^−1 = (1 / det A) adj A = −(1/57) [ −22 −1 9 ; 4 −5 −12 ; −1 −13 3 ].
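Equation (17) translates directly into code: build the transposed matrix of cofactors and divide by det A. A Python sketch using exact rational arithmetic (function names are illustrative):

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k]
               * det([row[:k] + row[k+1:] for row in A[1:]]) for k in range(n))

def inverse(A):
    """A^{-1} = (1/det A) adj A, per Eq. (17)."""
    n = len(A)
    d = det(A)
    assert d != 0, "A is singular"
    # (adj A)_ij = cofactor A_ji = (-1)^(j+i) * minor M_ji (strike row j, column i)
    adj = [[(-1) ** (i + j)
            * det([row[:i] + row[i+1:] for k, row in enumerate(A) if k != j])
            for j in range(n)] for i in range(n)]
    return [[Fraction(adj[i][j], d) for j in range(n)] for i in range(n)]

A = [[3, 2, -1], [0, 1, 4], [1, 5, -2]]
Ainv = inverse(A)
# check A * A^{-1} = I
assert all(sum(A[i][k] * Ainv[k][j] for k in range(3)) == (1 if i == j else 0)
           for i in range(3) for j in range(3))
print("adjoint inverse verified for Example 2")
```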

THEOREM 10.6.1 Inverse Matrix
Let A be n × n. If det A ≠ 0, then there exists a unique matrix A^−1 such that

A^−1 A = A A^−1 = I.   (27)

A is then said to be invertible, and its inverse is given by Eq. (17). If det A = 0, then a matrix A^−1 does not exist, and A is said to be singular.

THEOREM 10.6.2 Solution of Ax = c
Let A be n × n and det A ≠ 0. Then Ax = c admits the unique solution x = A^−1 c.

Properties of inverses:

I1. If A and B are of the same order, and invertible, then AB is too, and

(AB)^−1 = B^−1 A^−1.   (28)

I2. If A is invertible, then

(AT)^−1 = (A^−1)T   (29)

and

det(A^−1) = 1 / det A.   (30)

I3. If A is invertible, then (A^−1)^−1 = A and (A^m)^n = A^(mn) for any integers m and n (positive, negative, and zero).

I4. If A is invertible, then AB = AC implies that B = C, BA = CA implies that B = C, AB = 0 implies that B = 0, and BA = 0 implies that B = 0.

10.6.3 Cramer's rule

We have seen that if A is n × n and det A ≠ 0, then Ax = c has the unique solution

x = A^−1 c.   (38)

Eq. (38) can be expressed componentwise. Writing A^−1 = (1/det A) adj A,

[ x1 ; ... ; xn ] = (1 / det A) [ A11 A21 ... An1 ; ... ; A1n A2n ... Ann ] [ c1 ; ... ; cn ].   (39)

Equating the ith component on the left with the ith component on the right, we have the scalar statement

xi = Σj (A^−1)ij cj   (40)

for any desired i (1 ≤ i ≤ n). Or, recalling Eq. (17),

xi = (1 / det A) Σj Aji cj.   (41)

The sum Σj Aji cj is precisely a cofactor expansion, down the ith column, of the determinant of the matrix obtained from A by replacing its ith column with c. This observation gives Cramer's rule.

THEOREM 10.6.3 Cramer's Rule
If Ax = c, where A is invertible, then each component xi of x may be computed as the ratio of two determinants; the denominator is det A, and the numerator is also the determinant of the A matrix but with the ith column replaced by c.

EXAMPLE 3. Solve the system

[ 1 3 0 ; −2 3 1 ; 0 1 1 ] [ x1 ; x2 ; x3 ] = [ 5 ; 1 ; −2 ].   (42)

Cramer's rule gives

x1 = | 5 3 0 ; 1 3 1 ; −2 1 1 | / | 1 3 0 ; −2 3 1 ; 0 1 1 | = 1/8,
x2 = | 1 5 0 ; −2 1 1 ; 0 −2 1 | / | 1 3 0 ; −2 3 1 ; 0 1 1 | = 13/8,
x3 = | 1 3 5 ; −2 3 1 ; 0 1 −2 | / | 1 3 0 ; −2 3 1 ; 0 1 1 | = −29/8.   (43)
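Given a determinant routine, Cramer's rule is only a few more lines. A Python sketch (illustrative; for large systems, elimination methods are far cheaper):

```python
from fractions import Fraction

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k]
               * det([row[:k] + row[k+1:] for row in A[1:]]) for k in range(n))

def cramer(A, c):
    """Solve Ax = c: x_i = det(A with column i replaced by c) / det A."""
    d = det(A)
    assert d != 0, "Cramer's rule requires det A != 0"
    x = []
    for i in range(len(A)):
        Ai = [row[:i] + [ci] + row[i+1:] for row, ci in zip(A, c)]
        x.append(Fraction(det(Ai), d))
    return x

A = [[1, 3, 0], [-2, 3, 1], [0, 1, 1]]
c = [5, 1, -2]
print(cramer(A, c))  # [Fraction(1, 8), Fraction(13, 8), Fraction(-29, 8)]
```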

10.6.4 Evaluation of A^−1 by elementary row operations

If we solve a system Ax = c of n equations in n unknowns, or equivalently Ax = Ic, by Gauss-Jordan reduction, the result is of the form x = A^−1 c, or equivalently I x = A^−1 c. Thus, reducing the augmented matrix [A | I] to [I | A^−1] produces the inverse. For

A = [ 1 3 0 ; −2 3 1 ; 0 1 1 ],

[A | I] = [ 1 3 0 | 1 0 0 ; −2 3 1 | 0 1 0 ; 0 1 1 | 0 0 1 ]
        → [ 1 3 0 | 1 0 0 ; 0 9 1 | 2 1 0 ; 0 1 1 | 0 0 1 ]
        → ...
        → [ 1 0 0 | 1/4 −3/8 3/8 ; 0 1 0 | 1/4 1/8 −1/8 ; 0 0 1 | −1/4 −1/8 9/8 ],

so

A^−1 = [ 1/4 −3/8 3/8 ; 1/4 1/8 −1/8 ; −1/4 −1/8 9/8 ].
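The same [A | I] → [I | A^−1] reduction can be scripted. A Python sketch using exact fractions (the pivot search assumes A is invertible; names are illustrative):

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Reduce [A | I] to [I | A^{-1}] by elementary row operations."""
    n = len(A)
    # augmented matrix [A | I], in exact rational arithmetic
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)  # assumes invertible
        M[col], M[pivot] = M[pivot], M[col]        # interchange rows
        p = M[col][col]
        M[col] = [a / p for a in M[col]]           # scale pivot row to 1
        for r in range(n):
            if r != col and M[r][col] != 0:        # eliminate above and below
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[1, 3, 0], [-2, 3, 1], [0, 1, 1]]
Ainv = gauss_jordan_inverse(A)
assert [[float(v) for v in row] for row in Ainv] == [
    [0.25, -0.375, 0.375], [0.25, 0.125, -0.125], [-0.25, -0.125, 1.125]]
print("Gauss-Jordan inverse matches the text's A^{-1}")
```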

10.6.5 LU-factorization

LU-factorization is an alternative method of solution that is based upon the factorization of an n × n matrix A as a lower triangular matrix L times an upper triangular matrix U. For n = 3,

A = LU = [ l11 0 0 ; l21 l22 0 ; l31 l32 l33 ] [ u11 u12 u13 ; 0 u22 u23 ; 0 0 u33 ].

If we carry out the multiplication on the right and equate the nine elements of LU to the corresponding elements of A, we obtain nine equations in the 12 unknown lij's and uij's. Since we have more unknowns than equations, there is some flexibility in implementing the idea. According to Doolittle's method, we set each lii = 1 in L and solve uniquely for the remaining lij's and uij's.

With L and U determined, we then solve Ax = LUx = c by setting Ux = y, so that L(Ux) = c breaks into the two problems

Ly = c,   (49a)
Ux = y,   (49b)

each of which is simple because L and U are triangular. We solve Eq. (49a) for y, put that y into Eq. (49b), and then solve Eq. (49b) for x.

EXAMPLE 5. Solve

[ 2 −3 3 ; 6 −8 7 ; −2 6 −1 ] [ x1 ; x2 ; x3 ] = [ −2 ; −3 ; 3 ]

by the Doolittle LU-factorization method. We write

[ 2 −3 3 ; 6 −8 7 ; −2 6 −1 ] = [ 1 0 0 ; l21 1 0 ; l31 l32 1 ] [ u11 u12 u13 ; 0 u22 u23 ; 0 0 u33 ].

Carrying out the multiplication,

[ 2 −3 3 ; 6 −8 7 ; −2 6 −1 ] = [ u11       u12                u13
                                  l21 u11   l21 u12 + u22      l21 u13 + u23
                                  l31 u11   l31 u12 + l32 u22  l31 u13 + l32 u23 + u33 ].

In turn:

u11 = 2,
u12 = −3,
u13 = 3,
l21 = 6 / u11 = 3,
u22 = −8 − l21 u12 = −8 − (3)(−3) = 1,
u23 = 7 − l21 u13 = 7 − (3)(3) = −2,
l31 = −2 / u11 = −1,
l32 = (6 − l31 u12) / u22 = (6 − (−1)(−3)) / 1 = 3,
u33 = −1 − l31 u13 − l32 u23 = −1 − (−1)(3) − (3)(−2) = 8.

Then Eq. (49a) becomes

[ 1 0 0 ; 3 1 0 ; −1 3 1 ] [ y1 ; y2 ; y3 ] = [ −2 ; −3 ; 3 ],

which gives y = [−2, 3, −8]^T by forward substitution. Finally, Eq. (49b) becomes

[ 2 −3 3 ; 0 1 −2 ; 0 0 8 ] [ x1 ; x2 ; x3 ] = [ −2 ; 3 ; −8 ],

which gives the final solution x = [2, 1, −1]^T by back substitution.
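Doolittle's recipe, followed by the forward and back substitutions of Eqs. (49a)-(49b), can be sketched as follows (names illustrative; no pivoting, so it assumes the uii's come out nonzero, as they do in Example 5):

```python
def doolittle_lu(A):
    """Doolittle factorization A = LU with l_ii = 1."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):                     # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):                 # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def solve_lu(L, U, c):
    n = len(c)
    y = [0.0] * n
    for i in range(n):                            # forward substitution: Ly = c
        y[i] = c[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                # back substitution: Ux = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[2, -3, 3], [6, -8, 7], [-2, 6, -1]]
L, U = doolittle_lu(A)
x = solve_lu(L, U, [-2, -3, 3])
print(x)  # [2.0, 1.0, -1.0], as in Example 5
```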

10.7 Change of Basis

Let B = {e1, …, en} be a given basis for the vector space V under consideration, so that any vector x in V can be expanded as

x = x1 e1 + ... + xn en.   (1)

If we switch to some other basis B′ = {e′1, …, e′n}, then we may expand the same vector x as

x = x′1 e′1 + ... + x′n e′n.   (2)

We may expand each of the ej's in terms of B′:

e1 = q11 e′1 + ... + qn1 e′n,
...
en = q1n e′1 + ... + qnn e′n.   (3)

Putting Eq. (3) into (1) gives

x = x1 (q11 e′1 + ... + qn1 e′n) + ... + xn (q1n e′1 + ... + qnn e′n)
  = (q11 x1 + ... + q1n xn) e′1 + ... + (qn1 x1 + ... + qnn xn) e′n.   (4)

A comparison of Eqs. (2) and (4) gives the desired relations

x′1 = q11 x1 + ... + q1n xn,
...
x′n = qn1 x1 + ... + qnn xn,   (5)

or, in matrix notation,

[x]B′ = Q [x]B,   (6)

where

Q = [ q11 ... q1n ; ... ; qn1 ... qnn ]   (7)

and

[x]B = [ x1 ; ... ; xn ],   [x]B′ = [ x′1 ; ... ; x′n ].   (8)

We call [x]B the coordinate vector of the vector x with respect to the ordered basis B, and similarly for [x]B′, and we call Q the coordinate transformation matrix from B to B′.

In the remainder of this section we assume that both bases, B and B′, are ON (orthonormal). Thus, let us rewrite Eq. (3) as

ê1 = q11 ê′1 + ... + qn1 ê′n,
...
ên = q1n ê′1 + ... + qnn ê′n.   (9)

If we dot ê′1 into both sides of the first equation in Eq. (9), we obtain q11 = ê′1 · ê1. Dotting ê′2 gives q21 = ê′2 · ê1, and dotting ê′n gives qn1 = ê′n · ê1. The result is the formula

qij = ê′i · êj,   (10)

which tells us how to compute the transformation matrix Q.

Two properties of Q follow. The first is Q^−1 = Q^T: since the bases are ON, the ij element of Q^T Q is

Σk qki qkj = Σk (ê′k · êi)(ê′k · êj) = êi · êj = δij,

so that

Q^T Q = I   (11)

and hence

Q^−1 = Q^T.   (12)

The second is det Q = ±1: Q^T Q = I implies that det(Q^T Q) = det I = 1. But det(Q^T Q) = (det Q^T)(det Q) = (det Q)(det Q) = (det Q)^2. Hence det Q must be +1 or −1.

EXAMPLE 1. Consider the vector space R^2, with the ON bases B = {ê1, ê2} and B′ = {ê′1, ê′2}, where B′ is obtained from B by a counterclockwise rotation through an angle θ. From the figure, we have

q11 = ê′1 · ê1 = (1)(1) cos θ = cos θ,
q12 = ê′1 · ê2 = (1)(1) cos(π/2 − θ) = sin θ,
q21 = ê′2 · ê1 = (1)(1) cos(π/2 + θ) = −sin θ,
q22 = ê′2 · ê2 = (1)(1) cos θ = cos θ,

so that the coordinate transformation matrix is

Q = [ cos θ   sin θ
      −sin θ  cos θ ].

Hence

[ x′1 ; x′2 ] = [ cos θ sin θ ; −sin θ cos θ ] [ x1 ; x2 ].