CHAPTER 5 Linear Transformations
5.1 Linear Transformations
Note: Many different arguments may be used to prove nonlinearity; our solutions to Problems 1–23 provide a sampling.
Checking Linearity
1. T(x, y) = xy

If u = [u1, u2] and v = [v1, v2], then

T(u + v) = T(u1 + v1, u2 + v2) = (u1 + v1)(u2 + v2)
T(u) + T(v) = u1u2 + v1v2.

We see that T(u + v) ≠ T(u) + T(v), so T is not a linear transformation.
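A failed additivity check like this can also be spotted numerically; a minimal sketch in Python (the function name T and the sample vectors are our own choices, not from the text):

```python
# Numerical spot-check that T(x, y) = x*y fails additivity.
def T(x, y):
    return x * y

u, v = (1, 2), (3, 4)
lhs = T(u[0] + v[0], u[1] + v[1])   # T(u + v) = T(4, 6) = 24
rhs = T(*u) + T(*v)                 # T(u) + T(v) = 2 + 12 = 14
print(lhs == rhs)  # False: T is not additive, hence not linear
```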
2. T(x, y) = (x + y, 2y)

We can write this transformation in matrix form as

T(x, y) = [1 1; 0 2][x; y] = [x + y; 2y].
Hence, T is a linear transformation.
3. T(x, y) = (xy, 2y)

If we let u = (u1, u2), we have

cT(u) = cT(u1, u2) = c(u1u2, 2u2) = (cu1u2, 2cu2)

and

T(cu) = T(cu1, cu2) = (c²u1u2, 2cu2).

Hence T(cu) ≠ cT(u), so T is not a linear transformation.
4. T(x, y) = (x, 2, x + y)

Note that T(0, 0) = (0, 2, 0). Linear transformations always map the zero vector into the zero vector (in their respective spaces), so T is not a linear transformation.
5. T(x, y) = (x, 0, 0)

We let u = [u1, u2] and v = [v1, v2], so

T(u + v) = T(u1 + v1, u2 + v2) = (u1 + v1, 0, 0) = (u1, 0, 0) + (v1, 0, 0) = T(u) + T(v)

and

cT(u) = c(u1, 0, 0) = (cu1, 0, 0) = T(cu).

Hence, T is a linear transformation from R² to R³.
6. T(x, y) = (x, 1, y, 1)

Because T does not map the zero vector [0, 0] ∈ R² into the zero vector [0, 0, 0, 0] ∈ R⁴, T is not a linear transformation.
7. T(f) = f(0)

If f and g are continuous functions on [0, 1], then

T(f + g) = (f + g)(0) = f(0) + g(0) = T(f) + T(g)

and

T(cf) = (cf)(0) = c f(0) = cT(f).
Hence, T is a linear transformation.
8. T(f) = −f

If f and g are continuous functions on [0, 1], then

T(f + g) = −(f + g) = −f − g = T(f) + T(g)

and

T(cf) = −(cf) = c(−f) = cT(f).
Hence, T is a linear transformation.
9. T(f) = t f′(t)

If f and g are continuous functions on [0, 1], then

T(f + g) = t[f′(t) + g′(t)] = t f′(t) + t g′(t) = T(f) + T(g)

and

T(cf) = t(cf)′(t) = c t f′(t) = cT(f).
Hence, T is a linear transformation.
10. T(f) = f″ + 2f′ + 3f

If we are given that f and g are continuous functions that have two continuous derivatives, then

T(f + g) = (f + g)″ + 2(f + g)′ + 3(f + g) = (f″ + 2f′ + 3f) + (g″ + 2g′ + 3g) = T(f) + T(g)

and

T(cf) = (cf)″ + 2(cf)′ + 3(cf) = c(f″ + 2f′ + 3f) = cT(f).
Hence, T is a linear transformation.
11. T(at² + bt + c) = 2at + b

If we introduce the two vectors

p = a1t² + b1t + c1
q = a2t² + b2t + c2

then

T(p + q) = T((a1 + a2)t² + (b1 + b2)t + (c1 + c2)) = 2(a1 + a2)t + (b1 + b2)
= (2a1t + b1) + (2a2t + b2) = T(p) + T(q)

and

T(cp) = T(ca1t² + cb1t + cc1) = 2ca1t + cb1 = c(2a1t + b1) = cT(p).

Hence, the derivative transformation defined on P2 is a linear transformation.
12. T(at³ + bt² + ct + d) = a + b

If we introduce the two vectors

p = a1t³ + b1t² + c1t + d1
q = a2t³ + b2t² + c2t + d2

then

T(p + q) = T((a1 + a2)t³ + (b1 + b2)t² + (c1 + c2)t + (d1 + d2)) = (a1 + a2) + (b1 + b2)
= (a1 + b1) + (a2 + b2) = T(p) + T(q)

and

T(cp) = T(ca1t³ + cb1t² + cc1t + cd1) = ca1 + cb1 = c(a1 + b1) = cT(p).

Hence, T is a linear transformation on P3.
13. T(A) = Aᵀ. If we introduce two 2×2 matrices B and C, we have

T(B + C) = (B + C)ᵀ = Bᵀ + Cᵀ = T(B) + T(C)
T(kB) = (kB)ᵀ = kBᵀ = kT(B).

Hence, the transformation defined on M22 is a linear transformation.
14. T([a b; c d]) = det [a b; c d]

Letting

A = [a b; c d]

be an arbitrary matrix, we show that the homogeneity property T(kA) = kT(A) fails because

T(kA) = T([ka kb; kc kd]) = det [ka kb; kc kd] = k²(ad − cb) = k²T(A) ≠ kT(A)

whenever k² ≠ k (e.g., k = 2) and T(A) ≠ 0. Hence, T is not a linear transformation.
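The computation det(kA) = k² det(A) for 2×2 matrices can be illustrated numerically; a small sketch (the helper det2 and the sample entries are ours):

```python
# The 2x2 determinant is homogeneous of degree 2, not degree 1,
# so it is not a linear transformation.
def det2(a, b, c, d):
    return a * d - b * c

a, b, c, d = 1.0, 2.0, 3.0, 4.0   # det(A) = 1*4 - 2*3 = -2
k = 5.0
lhs = det2(k * a, k * b, k * c, k * d)  # det(kA) = k^2 * det(A)
rhs = k * det2(a, b, c, d)              # k * det(A)
print(lhs, rhs)  # -50.0 -10.0 — homogeneity fails
```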
15. T([a b; c d]) = Tr [a b; c d]

Let A = [a11 a12; a21 a22] and B = [b11 b12; b21 b22], so that

A + B = [a11 + b11, a12 + b12; a21 + b21, a22 + b22].

Then

T(A + B) = (a11 + b11) + (a22 + b22) = (a11 + a22) + (b11 + b22) = T(A) + T(B)

and

T(kA) = ka11 + ka22 = k(a11 + a22) = kT(A).

Hence, T is a linear transformation on M22.
16. T(x) = Ax

T(x + y) = A(x + y) = Ax + Ay = T(x) + T(y)

and T(kx) = A(kx) = k(Ax) = kT(x).

Hence, T is a linear transformation.
Integration
17. T(kf) = ∫ₐᵇ kf(t) dt = k ∫ₐᵇ f(t) dt = kT(f)

T(f + g) = ∫ₐᵇ [f(t) + g(t)] dt = ∫ₐᵇ f(t) dt + ∫ₐᵇ g(t) dt = T(f) + T(g).
Hence, T is a linear transformation.
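The linearity of the integral can also be illustrated with a numerical approximation; a sketch using a midpoint Riemann sum (the helper integrate and the sample functions are our own choices):

```python
# Additivity of integration, checked with a midpoint Riemann sum.
def integrate(f, a, b, n=10000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda t: t * t
g = lambda t: 3 * t + 1
a, b = 0.0, 1.0
lhs = integrate(lambda t: f(t) + g(t), a, b)   # T(f + g)
rhs = integrate(f, a, b) + integrate(g, a, b)  # T(f) + T(g)
print(abs(lhs - rhs) < 1e-9)  # True
```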
Linear Systems of DEs
18. T(x, y) = (x′ − y, 2x + y′)

T((x1, y1) + (x2, y2)) = T(x1 + x2, y1 + y2)
= ((x1 + x2)′ − (y1 + y2), 2(x1 + x2) + (y1 + y2)′)
= (x1′ + x2′ − y1 − y2, 2x1 + 2x2 + y1′ + y2′)
= (x1′ − y1, 2x1 + y1′) + (x2′ − y2, 2x2 + y2′) = T(x1, y1) + T(x2, y2)

T(c(x, y)) = T(cx, cy) = ((cx)′ − cy, 2(cx) + (cy)′)
= (cx′ − cy, 2cx + cy′) = c(x′ − y, 2x + y′) = cT(x, y)
19. T(x, y) = (x′ + y, y′ − 2x + y)

T(x1 + x2, y1 + y2) = ((x1 + x2)′ + (y1 + y2), (y1 + y2)′ − 2(x1 + x2) + (y1 + y2))
= (x1′ + x2′ + y1 + y2, y1′ + y2′ − 2x1 − 2x2 + y1 + y2)
= (x1′ + y1, y1′ − 2x1 + y1) + (x2′ + y2, y2′ − 2x2 + y2)
= T(x1, y1) + T(x2, y2)

T(cx, cy) = ((cx)′ + cy, (cy)′ − 2(cx) + cy)
= (cx′ + cy, cy′ − 2cx + cy) = c(x′ + y, y′ − 2x + y) = cT(x, y)
Laying Linearity on the Line
20. T(x) = |x|

T(x + y) = |x + y|, while T(x) + T(y) = |x| + |y|,

so T(x + y) ≠ T(x) + T(y) in general (e.g., x = 1, y = −1 gives 0 ≠ 2). Hence, T is not a linear transformation.
21. T(x) = ax + b

T(kx) = a(kx) + b = akx + b, while kT(x) = k(ax + b) = akx + kb,

so T(kx) ≠ kT(x) (for b ≠ 0 and k ≠ 1). Hence, T is not a linear transformation.
22. T(x) = 1/(ax + b)

Not linear: when b ≠ 0, T(0) = 1/b ≠ 0, so the zero vector does not map into the zero vector. Even when b = 0 the transformation is not linear, because T(0) = 1/(a·0) is not even defined, so the zero vector (the real number zero) cannot map into the zero vector.
23. T(x) = x²

Because

T(2 + 3) = T(5) = 25
T(2) + T(3) = 4 + 9 = 13

we have that T is not linear. (You can also find examples where the property T(cx) = cT(x) fails.)
24. T(x) = sin x

Because T(kx) = sin(kx) and kT(x) = k sin x, we have that T(kx) ≠ kT(x), so T is not a linear transformation. We could also simply note that

T(π/2 + π/2) = T(π) = sin π = 0

but

T(π/2) + T(π/2) = sin(π/2) + sin(π/2) = 1 + 1 = 2.
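The counterexample at x = y = π/2 is easy to confirm numerically (a quick sketch):

```python
import math

# sin fails additivity at x = y = pi/2, matching the text's counterexample.
lhs = math.sin(math.pi / 2 + math.pi / 2)            # sin(pi) = 0
rhs = math.sin(math.pi / 2) + math.sin(math.pi / 2)  # 1 + 1 = 2
print(lhs, rhs)
```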
25. T(x) = −3x/(2 + π)

Finally, we have a linear transformation. Any mapping of the form T(x) = ax, where a is a nonzero constant, is a linear transformation because

T(x + y) = a(x + y) = ax + ay = T(x) + T(y)
T(kx) = a(kx) = k(ax) = kT(x).

In this problem we have the nonzero constant a = −3/(2 + π).
Geometry of a Linear Transformation
26. Direct computation: the vectors [x, 0], x real, constitute the x-axis, and because each [x, 0] maps into itself, the x-axis maps into itself.

27. Direct computation: the vector [0, y] lies on the y-axis and its image [2y, y] lies on the line y = x/2, so the transformation maps vectors on the y-axis onto vectors on the line y = x/2.

28. Direct computation: the transformation T maps points (x, y) into (x + 2y, y). For example, the unit square with corners (0, 0), (1, 0), (0, 1), and (1, 1) maps into the parallelogram with corners (0, 0), (1, 0), (2, 1), and (3, 1). This transformation is called a shear mapping in the x-direction.
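The corner computation in Problem 28 can be sketched as follows (the helper name shear is ours):

```python
# Apply the shear (x, y) -> (x + 2y, y) to the unit-square corners.
def shear(p):
    x, y = p
    return (x + 2 * y, y)

corners = [(0, 0), (1, 0), (0, 1), (1, 1)]
print([shear(p) for p in corners])  # [(0, 0), (1, 0), (2, 1), (3, 1)]
```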
Geometric Interpretations in R²
29. T(x, y) = (x, −y)

This map reflects points about the x-axis. A matrix representation is

[1 0; 0 −1].
30. T(x, y) = (x, 0)

This map projects points onto the x-axis. A matrix representation is

[1 0; 0 0].
31. T(x, y) = (x, x)

This map projects points vertically onto the 45-degree line y = x. A matrix representation is

[1 0; 1 0].
Composition of Linear Transformations
32. (ST)(u + v) = S(T(u + v)) = S(T(u) + T(v)) = S(T(u)) + S(T(v)) = (ST)(u) + (ST)(v)

(ST)(cu) = S(T(cu)) = S(cT(u)) = cS(T(u)) = c(ST)(u)
Find the Standard Matrix
33. T(x, y) = x + 2y

T maps the point (x, y) ∈ R² into the real number x + 2y ∈ R. In matrix form,

T(x, y) = [1 2][x; y] = x + 2y.
34. T(x, y) = (y, −x)

T maps the point (x, y) ∈ R² into the point (y, −x) ∈ R². In matrix form,

T(x, y) = [0 1; −1 0][x; y] = [y; −x].
35. T(x, y) = (x + 2y, 2x − y)

T maps the point (x, y) ∈ R² into the point (x + 2y, 2x − y) ∈ R². In matrix form,

T(x, y) = [1 2; 2 −1][x; y] = [x + 2y; 2x − y].
36. T(x, y) = (x + 2y, 2x − y, y)

T maps the point (x, y) ∈ R² in two dimensions into the new point T(x, y) = (x + 2y, 2x − y, y) ∈ R³. In matrix form, the linear transformation T can be written

T(x, y) = [1 2; 2 −1; 0 1][x; y] = [x + 2y; 2x − y; y].
37. T(x, y, z) = (x + 2y, x − 2y, x + y − 2z)

T maps (x, y, z) ∈ R³ into (x + 2y, x − 2y, x + y − 2z) ∈ R³. In matrix form,

T(x, y, z) = [1 2 0; 1 −2 0; 1 1 −2][x; y; z] = [x + 2y; x − 2y; x + y − 2z].
38. T(v1, v2, v3) = v1 + v3

T maps the point (v1, v2, v3) ∈ R³ into the real number v1 + v3 ∈ R. In matrix form,

T(v1, v2, v3) = [1 0 1][v1; v2; v3] = v1 + v3.
39. T(v1, v2, v3) = (v1 + 2v2, −v3, v1 + 4v2 + 3v3)

T maps (v1, v2, v3) ∈ R³ into (v1 + 2v2, −v3, v1 + 4v2 + 3v3) ∈ R³. In matrix form,

T(v1, v2, v3) = [1 2 0; 0 0 −1; 1 4 3][v1; v2; v3] = [v1 + 2v2; −v3; v1 + 4v2 + 3v3].
40. T(v1, v2, v3) = (v2, v3, −v1)

T maps the point (v1, v2, v3) ∈ R³ into (v2, v3, −v1) ∈ R³. In matrix form,

T(v1, v2, v3) = [0 1 0; 0 0 1; −1 0 0][v1; v2; v3] = [v2; v3; −v1].
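A general way to find the standard matrix, used implicitly in Problems 33–40, is to apply T to the standard basis vectors: the j-th column of the matrix is T(ej). A sketch for Problem 40's map (the helper names are ours):

```python
# The j-th column of the standard matrix is T(e_j); here for
# T(v1, v2, v3) = (v2, v3, -v1).
def T(v):
    v1, v2, v3 = v
    return (v2, v3, -v1)

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
cols = [T(e) for e in basis]
# transpose the list of columns into the rows of the matrix
matrix = [list(row) for row in zip(*cols)]
print(matrix)  # [[0, 1, 0], [0, 0, 1], [-1, 0, 0]]
```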
Mapping and Images
41. T(x, y) = (y, −x)

T maps a vector [x, y] ∈ R² into the vector [y, −x] ∈ R². For u = [0, 0],

T(u) = T([0, 0]) = [0, −0] = [0, 0].

Setting

T([x, y]) = [y, −x] = w = [0, 0]

yields [x, y] = [0, 0].
42. T(x, y) = (x + y, x)

T maps a vector [x, y] ∈ R² into the vector [x + y, x] ∈ R². For u = [1, 0],

T(u) = T([1, 0]) = [1 + 0, 1] = [1, 1].

Setting

T([x, y]) = [x + y, x] = w = [3, 1]

yields x + y = 3, x = 1, which has the solution x = 1, y = 2, or [x, y] = [1, 2].
43. T(x, y, z) = (x, y + z)

T maps a vector [x, y, z] ∈ R³ into the vector [x, y + z] ∈ R². For u = [0, 1, 2],

T(u) = T([0, 1, 2]) = [0, 3].

Setting

T([x, y, z]) = [x, y + z] = w = [1, 2]

yields x = 1, y + z = 2, which has the solution x = 1, y = 2 − α, z = α, where α is any real number. These points form a line in R³: {(1, 2 − α, α) : α ∈ R}.
44. T(u1, u2) = (u1, u1 + 2u2)

T maps a vector [u1, u2] ∈ R² into the vector [u1, u1 + 2u2] ∈ R². For u = [1, 2],

T(u) = T([1, 2]) = [1, 1 + 2(2)] = [1, 5].

Setting

T([u1, u2]) = [u1, u1 + 2u2] = w = [1, 3]

yields u1 = 1, u1 + 2u2 = 3, which gives u1 = 1, u2 = 1.
45. T(u1, u2) = (u1, u1 + u2, u1 − u2)

T maps a vector [u1, u2] ∈ R² into the vector [u1, u1 + u2, u1 − u2] ∈ R³. For u = [1, 1],

T(u) = T([1, 1]) = [1, 1 + 1, 1 − 1] = [1, 2, 0].

Setting

T([u1, u2]) = [u1, u1 + u2, u1 − u2] = w = [1, 1, 0]

yields u1 = 1, u1 + u2 = 1, u1 − u2 = 0, which has no solutions. In other words, no vector [u1, u2] ∈ R² maps into [1, 1, 0] under the linear transformation T.
46. T(u1, u2) = (u2, u1, u1 + u2)

T maps a vector [u1, u2] ∈ R² into [u2, u1, u1 + u2] ∈ R³. For u = [1, 2],

T(u) = T([1, 2]) = [2, 1, 1 + 2] = [2, 1, 3].

Setting

T([u1, u2]) = [u2, u1, u1 + u2] = w = [2, 1, 3]

yields u2 = 2, u1 = 1, u1 + u2 = 3, which gives u1 = 1, u2 = 2.
47. T(u1, u2, u3) = (u1 + u3, u2 − u3)

T maps a vector [u1, u2, u3] ∈ R³ into [u1 + u3, u2 − u3] ∈ R². For u = [1, 1, 1],

T(u) = T([1, 1, 1]) = [1 + 1, 1 − 1] = [2, 0].

Setting

T([u1, u2, u3]) = [u1 + u3, u2 − u3] = w = [0, 0]

yields u1 + u3 = 0, u2 − u3 = 0, which gives u1 = −u3, u2 = u3, with u3 arbitrary. In other words, the linear transformation T maps the entire line {(−α, α, α) : α ∈ R} ⊂ R³ into [0, 0] ∈ R².
48. T(u1, u2, u3) = (u1, u2, u1 + u3)

T maps a vector [u1, u2, u3] ∈ R³ into [u1, u2, u1 + u3] ∈ R³. For u = [1, 2, 1],

T(u) = T([1, 2, 1]) = [1, 2, 1 + 1] = [1, 2, 2].

Setting

T([u1, u2, u3]) = [u1, u2, u1 + u3] = w = [0, 0, 1]

yields u1 = 0, u2 = 0, u1 + u3 = 1, which gives u1 = 0, u2 = 0, u3 = 1, so [0, 0, 1] maps into itself.
Transforming Areas
49. Computing Av, with A = [1 −1; 2 1], for the four given corner points of the unit square, we find

A[0; 0] = [0; 0], A[1; 0] = [1; 2], A[1; 1] = [0; 3], A[0; 1] = [−1; 1].

In other words, the original square (shown in gray) has area 1; the image is the parallelogram with vertices (0, 0), (1, 2), (0, 3), and (−1, 1) and area 3. Note: we have calculated the area of the parallelogram by visualizing it as composed of four right triangles, so the parallelogram has area 1 + 0.5 + 0.5 + 1 = 3.
50. Computing Av for (0, 0), (1, 1), (−1, 1), we get the respective points (0, 0), (0, 3), (−2, −1). Hence, the image of the original triangle (shown in gray) is the triangle with the new vertices (0, 0), (0, 3), (−2, −1). The original area is 1, and the new area is 3.
51. For the points

(0, 0), (1, 0), (1, 2), (0, 2),

the image is the parallelogram with vertices

(0, 0), (1, 2), (−1, 4), (−2, 2).

The original rectangle (shown in gray) has area 2; the new area is 6.
52. The determinant of

A = [1 −1; 2 1]

is det A = 3. In Problems 49–51 the area of the image is always three times the area of the original figure.
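The observation that |det A| scales areas can be checked with the shoelace formula; a sketch (the helper names apply and shoelace are ours):

```python
# Area scaling under A = [[1, -1], [2, 1]]: the shoelace formula on the
# image of the unit square gives |det A| times the original area.
def apply(A, p):
    return (A[0][0] * p[0] + A[0][1] * p[1],
            A[1][0] * p[0] + A[1][1] * p[1])

def shoelace(pts):
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

A = [[1, -1], [2, 1]]
square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # counterclockwise corners
image = [apply(A, p) for p in square]
print(shoelace(square), shoelace(image))  # 1.0 3.0 = |det A| * 1
```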
Transforming Areas Again
53. For the square of Problem 49 we compute Bv for the four corner points of the unit square, which yields

B[0; 0] = [0; 0], B[1; 0] = [2; 4], B[1; 1] = [1; 1], B[0; 1] = [−1; −3].

In other words, the image of the unit square with area 1 is the parallelogram with corners

(0, 0), (2, 4), (1, 1), (−1, −3)

with area 2.

For Problem 50, we compute Bv for the points (0, 0), (1, 1), (−1, 1) of the triangle; we get the points (0, 0), (1, 1), (−3, −7). Hence, the image of the original triangle with area 1 is the triangle with these new vertices and has area 2.

For the rectangle of Problem 51 we compute Bv for the points (0, 0), (1, 0), (1, 2), (0, 2), yielding (0, 0), (2, 4), (0, −2), (−2, −6), respectively. Hence, the image of the rectangle with area 2 is a parallelogram with area 4.

The determinant of

B = [2 −1; 4 −3]

is det B = −2; in each case the area of the transformed image is twice the area of the original figure. The absolute value of the determinant is a scale factor for the area.
Linear Transformations in the Plane
54. (a) (B) shear, in the x-direction.

(b) (E) nonlinear; linear transformations map lines into straight lines.

(c) (C) rotation; a 90-degree rotation in the counterclockwise direction.

(d) (E) nonlinear; (0, 0) must map into (0, 0) under a linear transformation.

(e) (A) scaling (dilation or contraction); contraction in both the x- and y-directions.

(f) (B) shear, in the y-direction.

(g) (D) reflection, through the x-axis.
Finding the Matrices
55. J = [0 −1; 1 0] describes (C) (90° rotation in the counterclockwise direction)

56. K = [1 0; 1 1] describes (F) (shear in the y-direction)

57. L = [1 0; 0 −1] describes (G) (reflection through the x-axis)

58. M = [1/2 0; 0 1/2] describes (E) (contraction in both the x and y directions)

59. N = [1 1; 0 1] describes (A) (shear in the x-direction)
Shear Transformation
60. (a) The matrix

[1 0; 1 1]

produces a shear of one unit in the y-direction. Figure B is a shear in the y-direction.

(b) Figure A is a shear of −1 unit and would be carried out by the matrix

[1 0; −1 1].

(c) Figure C is a shear of 1 unit in the x-direction; the matrix is

[1 1; 0 1].
Another Shear Transformation
61. For a shear of 2 units in the positive x-direction the matrix is

[1 2; 0 1].

The vertices of the r-shape are at

(0, 0), (0, 1), (0, 2), (−1, 2), and (1, 1).

Hence,

[1 2; 0 1][0; 0] = [0; 0]
[1 2; 0 1][0; 1] = [2; 1]
[1 2; 0 1][0; 2] = [4; 2]
[1 2; 0 1][−1; 2] = [3; 2]
[1 2; 0 1][1; 1] = [3; 1],

so the sheared vertices are (0, 0), (2, 1), (4, 2), (3, 2), and (3, 1), as shown in the figure.
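The five matrix–vector products above amount to applying (x, y) ↦ (x + 2y, y) to each vertex; a sketch (the helper name is ours):

```python
# The shear [1 2; 0 1] applied to the r-shape vertices of Problem 61.
def shear2(p):
    x, y = p
    return (x + 2 * y, y)

vertices = [(0, 0), (0, 1), (0, 2), (-1, 2), (1, 1)]
print([shear2(p) for p in vertices])
# [(0, 0), (2, 1), (4, 2), (3, 2), (3, 1)]
```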
Clockwise Rotation
62. A matrix that rotates points clockwise by 30° is the rotation matrix with θ = −π/6, or

Rot(−30°) = [cos(−π/6) −sin(−π/6); sin(−π/6) cos(−π/6)] = [√3/2 1/2; −1/2 √3/2].

The rotated r-shape is shown.
Pinwheel
63. (a) A negative shear of 1 in the y-direction is

[1 0; −1 1].

An easy way to see this is by observing how each point gets mapped. We have

[1 0; −1 1][x; y] = [x; −x + y] = [x; y] − [0; x].

Each point moves down by the value of its x-coordinate. In other words, the farther you are in the x-direction from the y-axis, the more the point moves down (or up, in case x is negative). Note that for the pinwheel the line that sticks out to the right is sheared down, whereas the line that sticks out to the left (in the negative x region) is sheared up. Twelve rotations of 30° give the identity matrix.

(b) (Rot(30°))ⁿ = I only when n is a multiple of 12.
Flower
64. Each individual image is sheared upwards by 1 unit, so we need the matrix

[1 0; 1 1].

We then rotate the image 24 times in either direction, each time by 360°/24 = 15°. If we go counterclockwise, we would repeatedly multiply by the matrix

Rot(15°) = [cos(π/12) −sin(π/12); sin(π/12) cos(π/12)]

24 times.
Successive Transformations
65. A matrix for a unit shear in the y-direction followed by a counterclockwise rotation of 30° would be

[√3/2 −1/2; 1/2 √3/2][1 0; 1 1] = (1/2)[√3 − 1, −1; 1 + √3, √3]

(rotation times shear). The transformed r-shape is shown.
Reflections
66. (a) A reflection about the x-axis followed by a reflection through the y-axis would be

By Bx = [−1 0; 0 1][1 0; 0 −1] = [−1 0; 0 −1],

which is a reflection through the origin. The transformed r-shape (the reflection of the r-shape through the origin) is shown.

(b) A 180° rotation in the counterclockwise direction has matrix

[cos π −sin π; sin π cos π] = [−1 0; 0 −1],

which is equivalent to the steps in part (a).
Derivative and Integral Transformations
67. (a) DI(f) = (d/dx) ∫ₐˣ f(t) dt = f(x)

(b) ID(f) = ∫ₐˣ f′(t) dt = f(x) − f(a)

(c) They commute if f(a) = 0.
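Both compositions can be compared numerically, e.g., for f(t) = cos t on an interval where f(a) ≠ 0 (the helper names and sample interval are our own choices):

```python
import math

# DI(f)(x) = f(x), while ID(f)(x) = f(x) - f(a): the two compositions
# differ by the constant f(a).
def integrate(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = math.cos
fprime = lambda t: -math.sin(t)
a, x = 0.5, 2.0

# DI: differentiate the integral (central difference on x)
DI = (integrate(f, a, x + 1e-4) - integrate(f, a, x - 1e-4)) / 2e-4
# ID: integrate the derivative
ID = integrate(fprime, a, x)
print(abs(DI - f(x)) < 1e-3, abs(ID - (f(x) - f(a))) < 1e-6)
```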
Anatomy of a Transformation
68. (a) Solving for x, y in the system

[1 −1; 1 0; 3 1; 1 0][x; y] = [0; 0; 0; 0],

we find

x − y = 0
x = 0
3x + y = 0
x = 0,

which has the unique solution x = y = 0. Hence [0, 0] is the only vector that maps to [0, 0, 0, 0] in R⁴.

(b) Setting

[1 −1; 1 0; 3 1; 1 0][x; y] = [1; 1; 1; 1],

we have

x − y = 1
x = 1
3x + y = 1
x = 1.

The first and second equations require x = 1, y = 0, which contradicts the third equation. Hence, the system has no solution, and no vector [x, y] maps to [1, 1, 1, 1] in R⁴.

(c) We can write the matrix product

[1 −1; 1 0; 3 1; 1 0][x; y] = x[1; 1; 3; 1] + y[−1; 0; 1; 0].

The range of T consists of the two-dimensional subspace of R⁴ spanned by the columns of A.
Anatomy of Another Transformation
69. (a) Solving for x, y, z in the system

[1 1 −1; 2 2 −3][x; y; z] = [0; 0],

we find

x + y − z = 0
2x + 2y − 3z = 0,

which we can solve, getting the one-dimensional subspace of R³ of solutions

[x; y; z] = α[1; −1; 0], α any real number.

(b) Setting

[1 1 −1; 2 2 −3][x; y; z] = [1; 1],

we have

x + y − z = 1
2x + 2y − 3z = 1,

which has solutions

[x; y; z] = [2 − α; α; 1] = [2; 0; 1] + α[−1; 1; 0]

for α any real number. In other words, the vectors that map into [1, 1] consist of a line in R³ passing through (2, 0, 1) in the direction of [−1, 1, 0].

(c) We can write the matrix product as

[1 1 −1; 2 2 −3][x; y; z] = x[1; 2] + y[1; 2] + z[−1; −3].

The image of T consists of the span of the vectors [1, 2] and [−1, −3], which is R².
Functionals
70. T(f) = (1/2)[f(0) + f(1)]

T(f + g) = (1/2)[(f + g)(0) + (f + g)(1)] = (1/2)[f(0) + g(0) + f(1) + g(1)]
= (1/2)[f(0) + f(1)] + (1/2)[g(0) + g(1)] = T(f) + T(g)

T(cf) = (1/2)[(cf)(0) + (cf)(1)] = c(1/2)[f(0) + f(1)] = cT(f).

Hence, T is a linear functional.
71. T(f) = ∫₀¹ |f(t)| dt

T(f + g) = ∫₀¹ |f(t) + g(t)| dt ≠ ∫₀¹ |f(t)| dt + ∫₀¹ |g(t)| dt = T(f) + T(g)

(take, for example, g = −f ≠ 0). Hence, T is not a linear functional.
72. T(f) = −2 ∫₀¹ f(t) dt

T(f + g) = −2 ∫₀¹ [f(t) + g(t)] dt = −2 ∫₀¹ f(t) dt − 2 ∫₀¹ g(t) dt = T(f) + T(g)

T(cf) = −2 ∫₀¹ cf(t) dt = c(−2 ∫₀¹ f(t) dt) = cT(f).

Hence, T is a linear functional.
73. T(f) = ∫₀¹ f²(t) dt

T(cf) = ∫₀¹ c²f²(t) dt = c² ∫₀¹ f²(t) dt ≠ c ∫₀¹ f²(t) dt = cT(f).

The preceding equation fails whenever c² ≠ c (for instance, c = −1) and ∫₀¹ f²(t) dt ≠ 0. Hence, T is not a linear functional. The additive property also does not hold.
Further Linearity Checks
74. L(y) = y′ + p(t)y

L(f + g) = (f + g)′ + p(t)(f + g) = (f′ + p(t)f) + (g′ + p(t)g) = L(f) + L(g)

L(cf) = (cf)′ + p(t)(cf) = c[f′ + p(t)f] = cL(f)
75. L(f) = ∫₀^∞ e^(−st) f(t) dt

L(f + g) = ∫₀^∞ e^(−st)[f(t) + g(t)] dt = ∫₀^∞ e^(−st) f(t) dt + ∫₀^∞ e^(−st) g(t) dt = L(f) + L(g)

L(cf) = ∫₀^∞ e^(−st) cf(t) dt = c ∫₀^∞ e^(−st) f(t) dt = cL(f)
76. L({an}) = lim_{n→∞} an

L({an} + {bn}) = lim_{n→∞} (an + bn) = lim_{n→∞} an + lim_{n→∞} bn = L({an}) + L({bn})

L({can}) = lim_{n→∞} can = c lim_{n→∞} an = cL({an})
Projections
77. The transformation T: V → W defined by T(x, y, z) = (x, y, 0) can be represented by matrix multiplication as

[1 0 0; 0 1 0; 0 0 0][x; y; z] = [x; y; 0] = x[1; 0; 0] + y[0; 1; 0].

Hence, W is the space spanned by [1, 0, 0] and [0, 1, 0], i.e., W is the xy-plane in R³. Also, T reduces to the identity on W because

[1 0 0; 0 1 0; 0 0 0][x; y; 0] = [x; y; 0]

and, hence, T is a projection onto W.
78. The transformation T: V → W defined by T(x, y, z) = (x, 0, 3x) can be represented by matrix multiplication as

[1 0 0; 0 0 0; 3 0 0][x; y; z] = [x; 0; 3x].

Hence, W is the line spanned by [1, 0, 3]. Also, T reduces to the identity on W because

[1 0 0; 0 0 0; 3 0 0][x; 0; 3x] = [x; 0; 3x]

and, hence, T is a projection onto W.
79. The transformation T: V → W defined by T(x, y, z) = (−x, 0, 3x) can be represented by matrix multiplication as

[−1 0 0; 0 0 0; 3 0 0][x; y; z] = [−x; 0; 3x].

Hence, T maps onto the space W spanned by [−1; 0; 3]. However, T does not reduce to the identity on W because

[−1 0 0; 0 0 0; 3 0 0][−x; 0; 3x] = [x; 0; −3x]

and, hence, T is not a projection onto W.
80. The transformation T: V → W defined by T(x, y, z) = (x + y, y, 0) can be represented by matrix multiplication as

[1 1 0; 0 1 0; 0 0 0][x; y; z] = [x + y; y; 0] = x[1; 0; 0] + y[1; 1; 0].

Hence, T maps onto the space W spanned by [1, 0, 0] and [1, 1, 0]. However, T does not reduce to the identity on W because

[1 1 0; 0 1 0; 0 0 0][x + y; y; 0] = [x + 2y; y; 0].

Hence, T is not a projection onto W.
Rotational Transformations
81. Using the trigonometric identities

cos(θ + α) = cos θ cos α − sin θ sin α
sin(θ + α) = sin θ cos α + cos θ sin α

we have

[cos θ −sin θ; sin θ cos θ][r cos α; r sin α] = [r cos θ cos α − r sin θ sin α; r sin θ cos α + r cos θ sin α] = [r cos(θ + α); r sin(θ + α)].

Hence, the original point (r cos α, r sin α) is mapped into (r cos(θ + α), r sin(θ + α)).
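The identity can be spot-checked numerically for particular r, α, θ (the helper names and sample values are ours):

```python
import math

# The rotation matrix sends (r cos a, r sin a) to
# (r cos(theta + a), r sin(theta + a)).
def rotate(theta, p):
    c, s = math.cos(theta), math.sin(theta)
    x, y = p
    return (c * x - s * y, s * x + c * y)

r, alpha, theta = 2.0, 0.3, 1.1
p = (r * math.cos(alpha), r * math.sin(alpha))
q = rotate(theta, p)
expected = (r * math.cos(theta + alpha), r * math.sin(theta + alpha))
print(abs(q[0] - expected[0]) < 1e-12 and abs(q[1] - expected[1]) < 1e-12)
```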
Integral Transforms
82. T(f) = ∫₀¹ K(s, t) f(t) dt is linear because

T(f + g) = ∫₀¹ K(s, t)[f(t) + g(t)] dt = ∫₀¹ K(s, t) f(t) dt + ∫₀¹ K(s, t) g(t) dt = T(f) + T(g)

T(cf) = ∫₀¹ K(s, t) cf(t) dt = c ∫₀¹ K(s, t) f(t) dt = cT(f).
Computer Lab: Matrix Machine
83. [0 1; −1 0]

(a) Only [0, 0] is not moved. All nonzero vectors are moved by this transformation.

(b) Only [0, 0] does not change direction. All nonzero vectors have their direction changed.

(c) All vectors remain constant in magnitude.

(d) The nullspace contains only [0, 0].

(e) Only [0, 1] maps onto [1, 0].

(f) The image is all of R².
84. [1 1; 1 1]

(a) Only [0, 0] is not moved. All nonzero vectors are moved.

(b) Only vectors on the line y = x do not change direction.

(c) Only [0, 0] remains constant in magnitude. The magnitudes of all nonzero vectors change.

(d) The nullspace consists of all vectors on the line y = −x.

(e) No vectors map onto the vector [1, 0].

(f) The image is the line y = x.
85. [0 1; 1 0]

(a) Only vectors in the direction of [1, 1] are not moved.

(b) Only vectors in the direction of [1, 1] do not change direction.

(c) All vectors have unchanged magnitudes.

(d) The nullspace contains only [0, 0].

(e) The only vector that maps into [1, 0] is [0, 1].

(f) The image is all of R².
86. [1 −2; −2 3]

(a) Only [0, 0] is not moved.

(b) The only vectors whose direction is not changed are those in the direction of [2, −1 − √5] or [2, −1 + √5].

(c) The only vectors with unchanged magnitude are those in the directions [1, 1] and [3, 1].

(d) The nullspace contains only [0, 0].

(e) The only vector that maps into [1, 0] is [−3, −2].

(f) The image is all of R².
87. [2 0; 0 3]

(a) Only [0, 0] is not moved.

(b) The only vectors whose direction is not changed are those in the direction of [1, 0] or [0, 1].

(c) Only [0, 0] has unchanged magnitude.

(d) The nullspace contains only [0, 0].

(e) The only vector that maps into [1, 0] is [0.5, 0].

(f) The image is all of R².
88. [1 2; 1 0]

(a) Only [0, 0] is not moved.

(b) The only vectors whose direction is not changed are those in the direction of [−1, 1] or [2, 1].

(c) The only vectors with unchanged magnitude are those in the directions [−1, 1] and [−3, 1].

(d) The nullspace contains only [0, 0].

(e) The only vector that maps into [1, 0] is [0, 0.5].

(f) The image is all of R².
Suggested Journal Entry

89. Student Project
5.2 Properties of Linear Transformations
Finding Kernels
1. T(x, y) = (x, −y)

Setting T(x, y) = (x, −y) = (0, 0) and solving for x, y, we find that the kernel of T contains only the zero vector (0, 0).
2. T(x, y, z) = (2x + y − 3z, −x + 4y + 6z)

Setting

T(x, y, z) = (2x + y − 3z, −x + 4y + 6z) = (0, 0)

we have

2x + y − 3z = 0
−x + 4y + 6z = 0

which has an entire line of solutions

{(2α, −α, α) : α is any real number}

in three-dimensional space. Hence, the kernel is a line in R³.
3. T(x, y, z) = (x, y, 0)

The transformation T(x, y, z) = (x, y, 0) in matrix form is

T(x, y, z) = [1 0 0; 0 1 0; 0 0 0][x; y; z].

By solving

[1 0 0; 0 1 0; 0 0 0][x; y; z] = [0; 0; 0]

we get the kernel of T as {(0, 0, α) : α is any real number}. In other words, the kernel consists of the z-axis in R³.
4. T(x, y, z) = (x − z, x − 2y, y − z)

Setting T(x, y, z) = (x − z, x − 2y, y − z) equal to the zero vector, we have

[1 0 −1; 1 −2 0; 0 1 −1][x; y; z] = [0; 0; 0].

Solving this system, we get the single solution (x, y, z) = (0, 0, 0). Hence, the kernel consists of only the zero vector.
5. D(f) = f′

Setting D(f) = df/dt = 0, we get the constant functions f(t) = c. Hence, the kernel of the first-derivative transformation consists of the family of constant functions.
6. D²(f) = f″

Setting D²(f) = d²f/dt² = 0, we get f(t) = c1t + c2. Hence, the kernel of the second-derivative transformation consists of the family of linear functions.
7. L(y) = y′ + p(t)y

Setting L(y) = y′ + p(t)y = 0, we get the solution functions

y(t) = c e^(−∫p(t) dt).

Hence, the kernel of this linear differential operator consists of the above family of solutions.
8. L(y) = y^(n) + a_{n−1}(t)y^(n−1) + ··· + a_1(t)y′ + a_0(t)y

Setting L(y) = 0, we solve the given nth-order linear differential equation to obtain solutions of the form

y(t) = c1y1 + c2y2 + ··· + cnyn

where y1, y2, …, yn is a basis for the solution space.
9. T(A) = Aᵀ

Setting

T([a b c; d e f]) = [a d; b e; c f] = [0 0; 0 0; 0 0]

we have Aᵀ = 0 and, hence, A is also the zero matrix. Hence, the kernel of T consists of the zero vector in M23, i.e.,

[0 0 0; 0 0 0].
10. T([a b c; d e f; g h i]) = [a 0 0; 0 e 0; 0 0 i]

Setting

T([a b c; d e f; g h i]) = [a 0 0; 0 e 0; 0 0 i] = [0 0 0; 0 0 0; 0 0 0]

we have a = e = i = 0. Hence, the kernel of T consists of matrices of the form

[0 b c; d 0 f; g h 0].
11. $T(p) = \int_0^x p(t)\,dt$

Setting $T(p) = \int_0^x p(t)\,dt = 0$ (fixed x), we see that the kernel of T consists of all functions p on $[0, x]$ whose integral is 0.
Calculus Kernels
12. $T(at^2 + bt + c) = 2at + b$

Setting

$T(at^2 + bt + c) = 2at + b = 0$

gives $a = b = 0$, with c arbitrary. Hence, the kernel of T consists of all constant functions.
13. $T(at^2 + bt + c) = 2a$

Setting

$T(at^2 + bt + c) = 2a = 0$

gives $a = 0$, with b and c arbitrary. Hence, the kernel of T consists of all polynomials in $P_2$ of the form $p(t) = bt + c$.
14. $T(at^2 + bt + c) = 0$

Setting $T(at^2 + bt + c) = 0$ imposes no condition on a, b, and c, so they are arbitrary. Hence, the kernel of T consists of all polynomials in $P_2$.
15. $T(at^3 + bt^2 + ct + d) = 6at + 2b$

Setting

$T(at^3 + bt^2 + ct + d) = 6at + 2b = 0$

gives $a = b = 0$, with c and d arbitrary. Hence, the kernel of T consists of all polynomials in $P_3$ of the form $p(t) = ct + d$.
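These polynomial kernels can be checked by symbolic differentiation; for instance, Problem 15's second-derivative operator on $P_3$ (a sketch assuming SymPy is available):

```python
import sympy as sp

a, b, c, d, t = sp.symbols('a b c d t')
p = a*t**3 + b*t**2 + c*t + d

# Problem 15's operator is the second derivative: T(p) = 6at + 2b
Tp = sp.diff(p, t, 2)
assert sp.simplify(Tp - (6*a*t + 2*b)) == 0

# Kernel elements (a = b = 0) are exactly the linear polynomials ct + d
assert sp.diff(c*t + d, t, 2) == 0
```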
Superposition Principle
16. If $\mathbf{u}_1$, $\mathbf{u}_2$ satisfy the equations

$T(\mathbf{u}_1) = \mathbf{b}_1, \qquad T(\mathbf{u}_2) = \mathbf{b}_2,$

then adding, we get

$T(\mathbf{u}_1) + T(\mathbf{u}_2) = \mathbf{b}_1 + \mathbf{b}_2.$

But by linearity we know

$T(\mathbf{u}_1 + \mathbf{u}_2) = T(\mathbf{u}_1) + T(\mathbf{u}_2)$

and, hence, $T(\mathbf{u}_1 + \mathbf{u}_2) = \mathbf{b}_1 + \mathbf{b}_2$, which shows that $\mathbf{u}_1 + \mathbf{u}_2$ is a solution of $T(\mathbf{u}) = \mathbf{b}_1 + \mathbf{b}_2$. Also, for any real number c, $T(c\mathbf{u}_1) = cT(\mathbf{u}_1) = c\mathbf{b}_1$ by linearity.
17. Direct substitution 18. Direct substitution
19. From Problems 17 and 18 we have shown

$L(\cos t - \sin t) = 4\sin t - 2\cos t$
$L(t^2 - 2) = -2t^2 - 2t + 6$

where $L(y) = y'' - y' - 2y$. By superposition we have

$L(\cos t - \sin t + t^2 - 2) = 4\sin t - 2\cos t - 2t^2 - 2t + 6.$

Hence, we have the solution

$y(t) = \cos t - \sin t + t^2 - 2.$
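The superposition computation in Problem 19 can be confirmed symbolically. A sketch (assuming SymPy is available) applying $L(y) = y'' - y' - 2y$ to the combined solution:

```python
import sympy as sp

t = sp.symbols('t')
L = lambda y: sp.diff(y, t, 2) - sp.diff(y, t) - 2*y   # L(y) = y'' - y' - 2y

y = sp.cos(t) - sp.sin(t) + t**2 - 2
rhs = 4*sp.sin(t) - 2*sp.cos(t) - 2*t**2 - 2*t + 6

# The residual L(y) - rhs should simplify to zero
print(sp.simplify(L(y) - rhs))  # 0
```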
20. If T is a linear transformation and the vectors $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n$ satisfy

$T(\mathbf{u}_1) = \mathbf{b}_1, \quad T(\mathbf{u}_2) = \mathbf{b}_2, \quad \dots, \quad T(\mathbf{u}_n) = \mathbf{b}_n,$

then adding these equations and using linearity, we have

$\mathbf{b}_1 + \mathbf{b}_2 + \cdots + \mathbf{b}_n = T(\mathbf{u}_1) + T(\mathbf{u}_2) + \cdots + T(\mathbf{u}_n) = T(\mathbf{u}_1 + \mathbf{u}_2 + \cdots + \mathbf{u}_n).$

Hence,

$\mathbf{u} = \mathbf{u}_1 + \mathbf{u}_2 + \cdots + \mathbf{u}_n$

satisfies $T(\mathbf{u}) = \mathbf{b}_1 + \mathbf{b}_2 + \cdots + \mathbf{b}_n$.

Also, for any real number c,

$c\mathbf{b}_1 + c\mathbf{b}_2 + \cdots + c\mathbf{b}_n = cT(\mathbf{u}_1) + \cdots + cT(\mathbf{u}_n) = T(c\mathbf{u}_1) + \cdots + T(c\mathbf{u}_n) = T(c\mathbf{u}_1 + c\mathbf{u}_2 + \cdots + c\mathbf{u}_n)$

by linearity. Therefore, $T(c\mathbf{u}) = c(\mathbf{b}_1 + \mathbf{b}_2 + \cdots + \mathbf{b}_n)$.
Dissecting Transformations
21. $\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$

Solving

$T(x, y) = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$

we see the solution space, or kernel of T, consists of all points in $\mathbb{R}^2$, so dim Ker(T) = 2. The image of T contains vectors of the form

$\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$

which means the range contains only the zero vector $[0, 0]$, so dim Im(T) = 0. We also know this fact because

dim Ker(T) + dim Im(T) = dim $\mathbb{R}^2$ = 2.
The transformation is neither surjective (onto) nor injective (one-to-one).
22. $\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$

Solving

$T(x, y) = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$

we see the kernel of T consists of only $[0, 0]$ in $\mathbb{R}^2$, so dim Ker(T) = 0. The image of T contains vectors of the form

$\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = x\begin{bmatrix} 1 \\ 0 \end{bmatrix} + y\begin{bmatrix} 0 \\ -1 \end{bmatrix},$

which yields all vectors in $\mathbb{R}^2$. Hence, dim Im(T) = 2; we could also find this from dim Ker(T) + dim Im(T) = dim $\mathbb{R}^2$ = 2. The transformation is injective and surjective.
23. $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$

Solving

$T(x, y) = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$

we see the kernel of T is $\{[0, \alpha] : \alpha \text{ any real number}\}$, so dim Ker(T) = 1. The image of T contains vectors of the form

$\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = x\begin{bmatrix} 1 \\ 0 \end{bmatrix} + y\begin{bmatrix} 0 \\ 0 \end{bmatrix},$

which consists of the x-axis in $\mathbb{R}^2$. Hence, dim Im(T) = 1; we could also find this from dim Ker(T) + dim Im(T) = dim $\mathbb{R}^2$ = 2.
The transformation is neither injective nor surjective.
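The dimension counts in Problems 21–23 match what a numerical rank computation gives; a quick sketch with NumPy:

```python
import numpy as np

matrices = {
    21: np.array([[0, 0], [0, 0]]),
    22: np.array([[1, 0], [0, -1]]),
    23: np.array([[1, 0], [0, 0]]),
}

for prob, A in matrices.items():
    rank = np.linalg.matrix_rank(A)   # dim Im(T)
    nullity = A.shape[1] - rank       # dim Ker(T), by the dimension theorem
    print(prob, rank, nullity)
```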
24. $\begin{bmatrix} 1 & 2 \\ 4 & 1 \end{bmatrix}$

We have the linear transformation $T: \mathbb{R}^2 \to \mathbb{R}^2$ defined by $T(\mathbf{v}) = \mathbf{A}\mathbf{v}$. The image of T is the set of all vectors

$\begin{bmatrix} 1 & 2 \\ 4 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = x\begin{bmatrix} 1 \\ 4 \end{bmatrix} + y\begin{bmatrix} 2 \\ 1 \end{bmatrix}$

for all real numbers x, y. But the vectors $[1, 4]$ and $[2, 1]$ are linearly independent, so this linear combination yields all vectors in $\mathbb{R}^2$; the image of this matrix transformation is $\mathbb{R}^2$. One can also show that the only vector this matrix maps into the zero vector is $[0, 0]$, so the kernel consists of only $[0, 0]$ and dim Ker(T) = 0. Note that dim Ker(T) + dim Im(T) = dim $\mathbb{R}^2$ = 2. The transformation is injective and surjective.
25. $\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$

We have the linear transformation $T: \mathbb{R}^2 \to \mathbb{R}^2$ defined by $T(\mathbf{v}) = \mathbf{A}\mathbf{v}$. The image of T is the set of all vectors

$\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = x\begin{bmatrix} 1 \\ 2 \end{bmatrix} + y\begin{bmatrix} 2 \\ 4 \end{bmatrix}$

for all real numbers x, y. The vectors $[1, 2]$ and $[2, 4]$ are linearly dependent, so this linear combination consists of the line in $\mathbb{R}^2$ spanned by $[1, 2]$; the image of this matrix transformation is a line in $\mathbb{R}^2$. One can also show that the solution set of $\mathbf{A}\mathbf{v} = \mathbf{0}$ is the line $x + 2y = 0$, so the kernel is a one-dimensional subspace of $\mathbb{R}^2$. Hence, dim Ker(T) = 1 and dim Im(T) = 1. The transformation is neither injective nor surjective.
26. $\begin{bmatrix} 1 & 1 \\ 4 & 1 \end{bmatrix}$

We have the linear transformation $T: \mathbb{R}^2 \to \mathbb{R}^2$ defined by $T(\mathbf{v}) = \mathbf{A}\mathbf{v}$. The dimension of the image of T is the number of linearly independent columns of A, also called the rank of the matrix. Because the two columns of

$\mathbf{A} = \begin{bmatrix} 1 & 1 \\ 4 & 1 \end{bmatrix}$

are clearly linearly independent (one is not a multiple of the other), dim Im(T) = 2. The kernel contains only $[0, 0]$, and hence dim Ker(T) = 0. The transformation is both injective and surjective.
27. $\begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 1 \end{bmatrix}$

Solving

$\begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$

we see the kernel of T is the line

$\{\alpha[-1, 0, 1] : \alpha \text{ any real number}\} \subset \mathbb{R}^3.$

Hence, dim Ker(T) = 1. The image of T consists of vectors of the form

$\begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = x\begin{bmatrix} 1 \\ 1 \end{bmatrix} + y\begin{bmatrix} 1 \\ 2 \end{bmatrix} + z\begin{bmatrix} 1 \\ 1 \end{bmatrix} = (x + z)\begin{bmatrix} 1 \\ 1 \end{bmatrix} + y\begin{bmatrix} 1 \\ 2 \end{bmatrix},$

which yields all vectors in $\mathbb{R}^2$. Hence, dim Im(T) = 2; we could also find this from dim Ker(T) + dim Im(T) = dim $\mathbb{R}^3$ = 3. The transformation is not injective, but it is surjective.
28. $\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \end{bmatrix}$

Solving

$\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$

we see the kernel of T is $\{[-2\alpha - \beta, \alpha, \beta] : \alpha, \beta \text{ any real numbers}\}$, a two-dimensional plane passing through the origin in $\mathbb{R}^3$. Hence, dim Ker(T) = 2. The image of T consists of vectors of the form

$\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = x\begin{bmatrix} 1 \\ 2 \end{bmatrix} + y\begin{bmatrix} 2 \\ 4 \end{bmatrix} + z\begin{bmatrix} 1 \\ 2 \end{bmatrix} = (x + 2y + z)\begin{bmatrix} 1 \\ 2 \end{bmatrix},$

which is a one-dimensional subspace of $\mathbb{R}^2$. Hence, dim Im(T) = 1; we could also find this from dim Ker(T) + dim Im(T) = dim $\mathbb{R}^3$ = 3. The transformation is neither injective nor surjective.
29. $\begin{bmatrix} 1 & 2 & 1 \\ 2 & 1 & 2 \end{bmatrix}$

We have the linear transformation $T: \mathbb{R}^3 \to \mathbb{R}^2$ defined by $T(\mathbf{v}) = \mathbf{A}\mathbf{v}$. The image of T is the set of all vectors

$\begin{bmatrix} 1 & 2 & 1 \\ 2 & 1 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = x\begin{bmatrix} 1 \\ 2 \end{bmatrix} + y\begin{bmatrix} 2 \\ 1 \end{bmatrix} + z\begin{bmatrix} 1 \\ 2 \end{bmatrix}$

for all real numbers x, y, z. Two of these three vectors are linearly independent, so this combination yields all vectors in $\mathbb{R}^2$; the image is $\mathbb{R}^2$. Setting $\mathbf{A}\mathbf{v} = \mathbf{0}$, we find the one-dimensional subspace

$\{[-\alpha, 0, \alpha] : \alpha \text{ any real number}\}$

of $\mathbb{R}^3$. Hence, dim Ker(T) = 1 and dim Im(T) = 2, and again dim Ker(T) + dim Im(T) = dim $\mathbb{R}^3$ = 3. The transformation is not injective, but it is surjective.
30. $\begin{bmatrix} 1 & 3 & 1 \\ 2 & 2 & 1 \end{bmatrix}$

We have the linear transformation $T: \mathbb{R}^3 \to \mathbb{R}^2$ defined by $T(\mathbf{v}) = \mathbf{A}\mathbf{v}$. The image of T is the set of all vectors

$\begin{bmatrix} 1 & 3 & 1 \\ 2 & 2 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = x\begin{bmatrix} 1 \\ 2 \end{bmatrix} + y\begin{bmatrix} 3 \\ 2 \end{bmatrix} + z\begin{bmatrix} 1 \\ 1 \end{bmatrix}$

for all real numbers x, y, z. Two of these three vectors are linearly independent, so this combination yields all vectors in $\mathbb{R}^2$; the image is $\mathbb{R}^2$. Setting $\mathbf{A}\mathbf{v} = \mathbf{0}$, we find the one-dimensional subspace

$\{[\alpha, \alpha, -4\alpha] : \alpha \text{ any real number}\}$

of $\mathbb{R}^3$. Hence, dim Ker(T) = 1 and dim Im(T) = 2, and again dim Ker(T) + dim Im(T) = dim $\mathbb{R}^3$ = 3. The transformation is not injective, but it is surjective.
31. $\begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 1 \end{bmatrix}$

Solving

$T(x, y) = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},$

we see the kernel of T contains only $[0, 0]$ in $\mathbb{R}^2$, so dim Ker(T) = 0. The image of T contains vectors

$\begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = x\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + y\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix},$

which form a two-dimensional subspace of $\mathbb{R}^3$. Hence, dim Im(T) = 2; we could also have found this from dim Ker(T) + dim Im(T) = dim $\mathbb{R}^2$ = 2. The transformation is injective but not surjective.
32. $\begin{bmatrix} 1 & 2 \\ 2 & 4 \\ 1 & 2 \end{bmatrix}$

Solving

$T(x, y) = \begin{bmatrix} 1 & 2 \\ 2 & 4 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},$

we see the kernel of T is the line $\{\alpha[-2, 1] : \alpha \text{ any real number}\} \subset \mathbb{R}^2$, so dim Ker(T) = 1. The image of T contains vectors of the form

$\begin{bmatrix} 1 & 2 \\ 2 & 4 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = (x + 2y)\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix},$

which form a one-dimensional subspace of $\mathbb{R}^3$. Hence, dim Im(T) = 1; we could also have found this from dim Ker(T) + dim Im(T) = dim $\mathbb{R}^2$ = 2. The transformation is neither injective nor surjective.
33. $\begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}$

Solving

$T(x, y) = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},$

we see the kernel of T consists of all of $\mathbb{R}^2$, so dim Ker(T) = 2. The image of T contains vectors of the form

$x\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} + y\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},$

which is the zero vector in $\mathbb{R}^3$, a zero-dimensional subspace of $\mathbb{R}^3$. Hence, the rank of T is 0. The transformation is neither injective nor surjective.
34. $\begin{bmatrix} 1 & 1 \\ 2 & 1 \\ 3 & 1 \end{bmatrix}$

We have the linear transformation $T: \mathbb{R}^2 \to \mathbb{R}^3$ defined by $T(\mathbf{v}) = \mathbf{A}\mathbf{v}$. The image of T is the set of all vectors

$\begin{bmatrix} 1 & 1 \\ 2 & 1 \\ 3 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = x\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} + y\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$

for all real numbers x, y. Because the two columns of A are linearly independent, the span of these vectors is a two-dimensional subspace (a plane through the origin) of $\mathbb{R}^3$; hence, dim Im(T) = 2. It is also clear that the only solution of the system $\mathbf{A}\mathbf{v} = \mathbf{0}$ is the zero vector, because the system describes three nonparallel lines in the plane; hence, dim Ker(T) = 0. The transformation is injective but not surjective.
35. $\begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}$

Solving

$\begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},$

we see the kernel of T contains only $[0, 0, 0]$ in $\mathbb{R}^3$, so dim Ker(T) = 0. The image of T consists of vectors of the form

$x\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + y\begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix} + z\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.$

Because the determinant of this matrix is nonzero, the three column vectors are linearly independent, so the image is $\mathbb{R}^3$. The transformation is both injective and surjective.
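The determinant test used in Problem 35 is easy to reproduce; a sketch with NumPy:

```python
import numpy as np

A = np.array([[1, 2, 1],
              [0, 1, 1],
              [0, 0, 1]])

# Upper triangular: det = product of diagonal entries = 1 != 0,
# so the columns are independent, Ker(T) = {0}, and Im(T) = R^3
det = np.linalg.det(A)
print(round(det))  # 1
```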
36. $\begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 1 \\ 2 & 3 & 2 \end{bmatrix}$

We have the linear transformation $T: \mathbb{R}^3 \to \mathbb{R}^3$ defined by $T(\mathbf{v}) = \mathbf{A}\mathbf{v}$. The image of T is spanned by the columns of A. The determinant of this matrix is 0, which tells us the number of linearly independent columns is less than three; by inspection there are at least two linearly independent columns, so the columns of A span a two-dimensional subspace (i.e., a plane) of $\mathbb{R}^3$. To find the kernel of the transformation, we solve $\mathbf{A}\mathbf{v} = \mathbf{0}$, getting

$\{[-\alpha, 0, \alpha] : \alpha \text{ any real number}\},$

a one-dimensional subspace (a line) in $\mathbb{R}^3$. Hence, dim Ker(T) = 1. The transformation is neither injective nor surjective.
37. $\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 1 \\ 1 & 1 & 1 \end{bmatrix}$

The dimension of the image of T is the number of linearly independent columns of A. Because the three columns of

$\mathbf{A} = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 1 \\ 1 & 1 & 1 \end{bmatrix}$

are linearly independent, dim Im(T) = 3. Hence, dim Ker(T) = 0, because dim Ker(T) + dim Im(T) = dim $\mathbb{R}^3$ = 3. The transformation is both injective and surjective.
38. $\begin{bmatrix} 1 & 2 & 1 \\ 3 & 2 & 2 \\ 2 & 3 & 1 \end{bmatrix}$

We show that the kernel of the matrix transformation $T(\mathbf{v}) = \mathbf{A}\mathbf{v}$ consists of only the zero vector in $\mathbb{R}^3$. To do this we set

$\begin{bmatrix} 1 & 2 & 1 \\ 3 & 2 & 2 \\ 2 & 3 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$

and solve for $[x, y, z]$. The determinant of the matrix A is 3, so the only solution of this homogeneous system is $x = y = z = 0$, which says the kernel consists of only the zero vector in $\mathbb{R}^3$. The columns of A are linearly independent (the determinant is nonzero), so the image of T is $\mathbb{R}^3$ and dim Im(T) = 3. The transformation is injective and surjective.
39. $\begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}$

First we show that the transformation T defined by $T(\mathbf{v}) = \mathbf{A}\mathbf{v}$ is injective. The system

$\begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$

has only the zero solution because the determinant of the matrix is nonzero. Therefore dim Ker(T) = 0, so that dim Im(T) = 3. Therefore T is both injective and surjective.
40. $\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$

To decide whether this matrix transformation $T: \mathbb{R}^3 \to \mathbb{R}^3$ is onto $\mathbb{R}^3$, we observe that the rank of the matrix is 2 (the matrix has two linearly independent columns), so the dimension of the range is 2. Hence, the image cannot be $\mathbb{R}^3$. To find the subspace of $\mathbb{R}^3$ onto which T maps, let $(a, b, c)$ be an arbitrary vector in $\mathbb{R}^3$ and write

$\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} a \\ b \\ c \end{bmatrix}.$

From this system we see that $x + y = a$, $y = b$, $0 = c$, which implies that

Ker(T) = $\{[0, 0, c] : c \text{ a real number}\}$ and Im(T) = $\{[a, b, 0] : a, b \text{ real numbers}\}$.

Therefore dim Ker(T) = 1 and dim Im(T) = 2, so that T is neither injective nor surjective.
Transformations and Linear Dependence
41. Because $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is a linearly dependent set, there exist scalars $c_1, c_2, c_3$, not all zero, such that $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 = \mathbf{0}$. It follows that

$T(c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3) = T(\mathbf{0}) = \mathbf{0}.$

Because T is linear, this is equivalent to $c_1 T(\mathbf{v}_1) + c_2 T(\mathbf{v}_2) + c_3 T(\mathbf{v}_3) = \mathbf{0}$. At least one of $c_1, c_2, c_3$ is nonzero, so we have shown that the set $\{T(\mathbf{v}_1), T(\mathbf{v}_2), T(\mathbf{v}_3)\}$ is linearly dependent.
42. Let T be the zero map. Then for any set $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ of vectors in $\mathbb{R}^n$, the set $\{T(\mathbf{v}_1), T(\mathbf{v}_2), T(\mathbf{v}_3)\} = \{\mathbf{0}\}$ is linearly dependent.
43. Suppose $c_1 T(\mathbf{v}_1) + c_2 T(\mathbf{v}_2) + c_3 T(\mathbf{v}_3) = \mathbf{0}$ for scalars $c_1, c_2, c_3$ and vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ in $\mathbb{R}^n$. We will show that $c_1 = c_2 = c_3 = 0$. Because T is linear, we have

$T(c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3) = \mathbf{0}$

in $\mathbb{R}^m$. By assumption, T is injective, so Ker(T) = $\{\mathbf{0}\}$. This implies that $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 = \mathbf{0}$ in $\mathbb{R}^n$. Finally, since $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is a linearly independent set, we conclude that $c_1 = c_2 = c_3 = 0$.
44. Suppose $\{\mathbf{v}_1, \mathbf{v}_2\}$ is a linearly independent set and $\{T(\mathbf{v}_1), T(\mathbf{v}_2)\}$ is a linearly dependent set. Because $\{T(\mathbf{v}_1), T(\mathbf{v}_2)\}$ is dependent, there are scalars $c_1$ and $c_2$, not both zero, such that

$c_1 T(\mathbf{v}_1) + c_2 T(\mathbf{v}_2) = \mathbf{0},$

and thus $T(c_1\mathbf{v}_1 + c_2\mathbf{v}_2) = \mathbf{0}$. Because $\{\mathbf{v}_1, \mathbf{v}_2\}$ is independent and at least one of $c_1$ or $c_2$ is nonzero, $c_1\mathbf{v}_1 + c_2\mathbf{v}_2$ is not the zero vector. Thus $T(\mathbf{v}) = \mathbf{0}$ has a nontrivial solution.
45. (a) For any $p_1$, $p_2$ in $P_2$,

$T(p_1(t) + p_2(t)) = \begin{bmatrix} (p_1 + p_2)(0) \\ (p_1 + p_2)(1) \end{bmatrix} = \begin{bmatrix} p_1(0) + p_2(0) \\ p_1(1) + p_2(1) \end{bmatrix} = \begin{bmatrix} p_1(0) \\ p_1(1) \end{bmatrix} + \begin{bmatrix} p_2(0) \\ p_2(1) \end{bmatrix} = T(p_1(t)) + T(p_2(t)).$

Also,

$T(cp(t)) = \begin{bmatrix} (cp)(0) \\ (cp)(1) \end{bmatrix} = c\begin{bmatrix} p(0) \\ p(1) \end{bmatrix} = cT(p(t)).$

Thus T is linear.

(b) $p(t) = a_2 t^2 + a_1 t + a_0$ is in Ker(T) if

$\begin{bmatrix} p(0) \\ p(1) \end{bmatrix} = \begin{bmatrix} a_0 \\ a_2 + a_1 + a_0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$

that is, if $a_0 = 0$ and $a_2 + a_1 = 0$, i.e., if $a_0 = 0$ and $a_2 = -a_1$. A basis for Ker(T) is $\{t^2 - t\}$.

(c) For $\begin{bmatrix} \alpha \\ \beta \end{bmatrix}$ in $\mathbb{R}^2$,

$T\bigl(a_2 t^2 + (\beta - \alpha - a_2)t + \alpha\bigr) = \begin{bmatrix} \alpha \\ a_2 + (\beta - \alpha - a_2) + \alpha \end{bmatrix} = \begin{bmatrix} \alpha \\ \beta \end{bmatrix},$

so T is surjective. $\left\{\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}\right\}$ is a basis for Im(T).
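Problem 45's kernel basis and surjectivity claim can be spot-checked by evaluating polynomials at t = 0 and t = 1; a sketch assuming SymPy is available:

```python
import sympy as sp

t = sp.symbols('t')
T = lambda p: (p.subs(t, 0), p.subs(t, 1))   # T(p) = (p(0), p(1))

# The kernel basis element t^2 - t vanishes at both evaluation points
assert T(t**2 - t) == (0, 0)

# Hitting an arbitrary target (alpha, beta): take p = (beta - alpha) t + alpha
alpha, beta = 5, -2
p = (beta - alpha)*t + alpha
assert T(p) == (alpha, beta)
```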
Kernels and Images
46. $T: \mathbb{M}_{22} \to \mathbb{M}_{22}$, $T(\mathbf{A}) = \mathbf{A}^T$

(i) $T\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right) = \begin{bmatrix} a & c \\ b & d \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$ only if $a = b = c = d = 0$, so Ker(T) = $\{\mathbf{0}\}$.

(ii) Dimension Theorem ⇒ dim Im(T) = 4 ⇒ Im(T) = $\mathbb{M}_{22}$.
47. $T: P_3 \to P_3$, $T(p) = p'$

$T(ax^3 + bx^2 + cx + d) = 3ax^2 + 2bx + c$

Ker(T) = $\{d : d \in \mathbb{R}\}$ (the constant polynomials)

Im(T) = $\{3ax^2 + 2bx + c : a, b, c \in \mathbb{R}\} = \{qx^2 + rx + s : q, r, s \in \mathbb{R}\}$
48. $T: \mathbb{M}_{22} \to \mathbb{M}_{22}$, $T\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right) = \begin{bmatrix} a & b \\ b & c \end{bmatrix}$

(i) Ker(T) = $\left\{\begin{bmatrix} a & b \\ c & d \end{bmatrix} \in \mathbb{M}_{22} : a = b = c = 0\right\}$

(ii) Im(T) = $\left\{a\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + b\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + c\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} : a, b, c \in \mathbb{R}\right\}$
49. $T: \mathbb{M}_{22} \to \mathbb{R}^2$, $T\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right) = \begin{bmatrix} a + b \\ c + d \end{bmatrix}$

(i) Ker(T) = $\left\{\begin{bmatrix} -b & b \\ -d & d \end{bmatrix} : b, d \in \mathbb{R}\right\}$

(ii) Dimension Theorem ⇒ Im(T) = $\mathbb{R}^2$
50. $T: \mathbb{R}^5 \to \mathbb{R}^5$, $T(a, b, c, d, e) = (a, 0, c, 0, e)$

(i) Ker(T) = $\{(a, b, c, d, e) \in \mathbb{R}^5 : a = c = e = 0\}$

(ii) Im(T) = $\{(a, b, c, d, e) \in \mathbb{R}^5 : b = d = 0\}$
51. $T: \mathbb{R}^2 \to \mathbb{R}^3$, $T(x, y) = (x + y,\; 0,\; x - y)$

(i) $(x, y) \in \text{Ker}(T) \iff x + y = 0 \text{ and } x - y = 0 \iff x = y = 0$, so Ker(T) = $\{\mathbf{0}\}$.

(ii) Im(T) = $\left\{x\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + y\begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} : x, y \in \mathbb{R}\right\}$
Examples of Matrices
52. $\mathbf{A} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
53. $\mathbf{A} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$, or any matrix for which $a_{11}$ is the only nonzero element
54. Set

$\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$ so that $a + c = 0$, $d + f = 0$, $g + i = 0$.

Set

$\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}\begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$ so that $b + 2c = 0$, $e + 2f = 0$, $h + 2i = 0$.

Each row must then be a multiple of $[1, 2, -1]$. Therefore, for

$\mathbf{A} = \begin{bmatrix} 1 & 2 & -1 \\ 1 & 2 & -1 \\ 1 & 2 & -1 \end{bmatrix}$

and $T(\mathbf{x}) = \mathbf{A}\mathbf{x}$, Ker(T) is spanned by

$\left\{\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}\right\}.$

(As a check, dim Ker(T) + dim Im(T) = 2 + rank A = 2 + 1 = 3 = dim $\mathbb{R}^3$.)
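The construction in Problem 54 can be verified directly; a sketch with NumPy using rows proportional to [1, 2, −1] (one valid choice):

```python
import numpy as np

A = np.array([[1, 2, -1],
              [1, 2, -1],
              [1, 2, -1]])

# Both prescribed kernel vectors are annihilated by A
for v in ([1, 0, 1], [0, 1, 2]):
    assert np.all(A @ np.array(v) == 0)

# rank A = 1, so dim Ker = 3 - 1 = 2, matching the two spanning vectors
print(np.linalg.matrix_rank(A))  # 1
```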
True/False Questions
55. False. Let $\mathbf{A} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$. Then $\mathbf{A}^2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$, and Ker(A) ≠ Ker(A²).
56. False. See Problem 55 for an example.
57. True. Elementary row operations will not change the solutions of $\mathbf{A}\mathbf{x} = \mathbf{0}$.
58. False. Let $\mathbf{A} = \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}$. Then Im(A) = span$\left\{\begin{bmatrix} 1 \\ 1 \end{bmatrix}\right\}$, but RREF(A) = $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ and Im(RREF(A)) = span$\left\{\begin{bmatrix} 1 \\ 0 \end{bmatrix}\right\}$.
59. False. Let $\mathbf{A} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, $\mathbf{B} = \begin{bmatrix} -1 & 0 \\ 0 & 0 \end{bmatrix}$. Then Ker(A) = Ker(B) = span$\left\{\begin{bmatrix} 0 \\ 1 \end{bmatrix}\right\}$; however, $\mathbf{A} + \mathbf{B} = \mathbf{0}$, so Ker(A + B) is all of $\mathbb{R}^2$.
60. True. Im(T) = span$\left\{\begin{bmatrix} 1 \\ 1 \end{bmatrix}\right\}$.
Detective Work
61. The rank of the matrix $\begin{bmatrix} 1 & -2 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$ is equal to the number of linearly independent columns, which is 2. Because dim Ker(T) + dim Im(T) = dim $\mathbb{R}^4$ = 4, we see that dim Ker(T) is also 2. The transformation is not injective, but because dim Im(T) = 2 = dim $\mathbb{R}^2$, it is surjective.
Detecting Dimensions
62. The rank of the matrix

$\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}$

equals the number of linearly independent columns, which is 2. Because dim Ker(T) + dim Im(T) = dim $\mathbb{R}^2$ = 2, we see that dim Ker(T) = 0. The transformation is injective but not surjective.
Still Investigating
63. For $T: \mathbb{R}^3 \to \mathbb{R}^4$, dim Im(T) is the number of linearly independent columns of the matrix A, which is 3. Because the codomain has dimension 4, the transformation T is not surjective. Also, because dim Ker(T) + dim Im(T) = dim $\mathbb{R}^3$ = 3, we see that dim Ker(T) = 0, which means that T is injective.
Dimension Theorem Again
64. For $T: \mathbb{R}^3 \to \mathbb{R}^3$, dim Im(T) is the number of linearly independent columns of the matrix A, which is 1. Hence, the transformation T is not surjective. Also, because dim Ker(T) + dim Im(T) = dim $\mathbb{R}^3$ = 3, we see that dim Ker(T) = 2, which means that T is neither injective nor surjective.
The Inverse Transformation
65. Suppose $T: \mathbf{V} \to \mathbf{W}$ is an injective and surjective linear transformation and $T^{-1}: \mathbf{W} \to \mathbf{V}$ is its inverse (i.e., $T^{-1}(\mathbf{w}) = \mathbf{v}$ if and only if $T(\mathbf{v}) = \mathbf{w}$).

$T^{-1}$ is surjective: Let $\mathbf{v} \in \mathbf{V}$. Then $T(\mathbf{v}) = \mathbf{w}$ for some $\mathbf{w} \in \mathbf{W}$. Thus $T^{-1}(\mathbf{w}) = \mathbf{v}$.

$T^{-1}$ is injective: Suppose $T^{-1}(\mathbf{w}_1) = \mathbf{v}$ and $T^{-1}(\mathbf{w}_2) = \mathbf{v}$ for some $\mathbf{w}_1, \mathbf{w}_2 \in \mathbf{W}$. Then $T(\mathbf{v}) = \mathbf{w}_1$ and $T(\mathbf{v}) = \mathbf{w}_2$. But T is a function, so $\mathbf{w}_1 = \mathbf{w}_2$.

$T^{-1}$ is linear: Suppose $\mathbf{w}_1, \mathbf{w}_2 \in \mathbf{W}$. Because T is surjective, there exist vectors $\mathbf{v}_1, \mathbf{v}_2 \in \mathbf{V}$ such that $T(\mathbf{v}_1) = \mathbf{w}_1$ and $T(\mathbf{v}_2) = \mathbf{w}_2$. Because T is linear,

$T(\mathbf{v}_1 + \mathbf{v}_2) = T(\mathbf{v}_1) + T(\mathbf{v}_2) = \mathbf{w}_1 + \mathbf{w}_2,$

so that

$T^{-1}(\mathbf{w}_1 + \mathbf{w}_2) = \mathbf{v}_1 + \mathbf{v}_2 = T^{-1}(\mathbf{w}_1) + T^{-1}(\mathbf{w}_2).$

Also, for any $\mathbf{w} \in \mathbf{W}$ and real number c, suppose $T(\mathbf{v}) = \mathbf{w}$ for some $\mathbf{v} \in \mathbf{V}$. Then $T(c\mathbf{v}) = cT(\mathbf{v}) = c\mathbf{w}$, so that $T^{-1}(c\mathbf{w}) = c\mathbf{v} = cT^{-1}(\mathbf{w})$. Therefore $T^{-1}$ is linear.
Review of Nonhomogeneous Algebraic Systems
66. $x + y = 1$. A particular solution of the nonhomogeneous equation $x + y = 1$ is $x = \tfrac{1}{2}$, $y = \tfrac{1}{2}$. The general solution of the homogeneous equation $x + y = 0$ is

$\{(c, -c) : c \text{ an arbitrary constant}\}.$

Hence, the general solution of the nonhomogeneous equation is

$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 1 \\ 1 \end{bmatrix} + c\begin{bmatrix} 1 \\ -1 \end{bmatrix}.$
67. $3x - y + z = -4$. We see that $x = 1$, $y = 1$, $z = -6$ is a particular solution of the nonhomogeneous equation (we simply picked x and y and solved for z). To find the homogeneous solutions, we let $x = \alpha$, $y = \beta$, and solve $3x - y + z = 0$ for z, yielding $z = y - 3x = \beta - 3\alpha$. Hence, the homogeneous solutions are $(x, y, z) = (\alpha, \beta, \beta - 3\alpha)$, where $\alpha$ and $\beta$ are any real numbers. Adding the particular solution to the solutions of the homogeneous equation yields

$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ -6 \end{bmatrix} + \alpha\begin{bmatrix} 1 \\ 0 \\ -3 \end{bmatrix} + \beta\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix},$

where $\alpha$, $\beta$ are arbitrary real numbers.
68. $x + 2y = 2$, $2x + y = 2$

Solving the original nonhomogeneous system, we find a particular solution $x = \tfrac{2}{3}$, $y = \tfrac{2}{3}$. Then, solving the corresponding homogeneous system, we find the only solution to be $x = 0$, $y = 0$. Hence, the general solution of the original nonhomogeneous system is

$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 2 \\ 2 \end{bmatrix} + c\begin{bmatrix} 0 \\ 0 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 2 \\ 2 \end{bmatrix}$

for any real number c.
69. $x - 2y = 5$, $2x + 4y = -5$

Solving the original nonhomogeneous system, we find a particular solution $x = \tfrac{10}{8}$, $y = -\tfrac{15}{8}$. Then, solving the corresponding homogeneous system, we find the only solution to be $x = 0$, $y = 0$. Hence, the general solution of the original nonhomogeneous system is

$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{8}\begin{bmatrix} 10 \\ -15 \end{bmatrix}$

for any real number c.
70. $x + 2y - z = 6$, $2x - y + 3z = -3$

Solving the original nonhomogeneous system, we find a particular solution $x = 0$, $y = 3$, $z = 0$. Then, solving the corresponding homogeneous system, we find $(x, y, z) = (-\alpha, \alpha, \alpha)$, where $\alpha$ is any real number. Hence, the general solution of the original nonhomogeneous system is

$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 3 \\ 0 \end{bmatrix} + \alpha\begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix}.$
71. If we reduce the augmented matrix of the original system to RREF, the last row becomes $[\,0 \;\; 0 \;\; 0 \mid 1\,]$, which corresponds to the impossible equation $0 = 1$. Hence, the system has no solution.
Review of Nonhomogeneous First-Order DEs
72. $y' - y = 3$. By inspection, we find a particular solution $y_p(t) = -3$. We then solve the corresponding linear homogeneous equation $y' - y = 0$ and get $y_h(t) = ce^t$. Hence, the general solution of the nonhomogeneous equation is

$y(t) = y_h(t) + y_p(t) = ce^t - 3.$
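Solutions like these can be verified mechanically by substituting back into the equation; a sketch (assuming SymPy is available) for $y' - y = 3$:

```python
import sympy as sp

t, c = sp.symbols('t c')
y = c*sp.exp(t) - 3               # claimed general solution

# Substitute into y' - y = 3: the residual should vanish identically
residual = sp.diff(y, t) - y - 3
print(sp.simplify(residual))  # 0
```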
73. $y' + 2y = -1$. By inspection, we find a particular solution $y_p(t) = -\tfrac{1}{2}$. We then solve the corresponding homogeneous equation and get $y_h(t) = ce^{-2t}$. Hence, the general solution of the nonhomogeneous equation is

$y(t) = y_h + y_p = ce^{-2t} - \frac{1}{2}.$
74. $y' + \dfrac{1}{t}y = \dfrac{1}{t}$. By inspection, we find a particular solution $y_p(t) = 1$. We then solve the corresponding homogeneous equation by separating variables and get

$y_h(t) = \frac{c}{t}.$

Hence, the general solution of the nonhomogeneous equation is

$y(t) = y_h + y_p = \frac{c}{t} + 1.$
75. $y' + \dfrac{1}{t^2}y = \dfrac{2}{t^2}$. By inspection, we find a particular solution $y_p(t) = 2$. We then solve the corresponding homogeneous equation

$y' + \frac{1}{t^2}y = 0$

by separating variables and get

$y_h(t) = ce^{1/t}.$

Hence, the general solution of the nonhomogeneous equation is

$y(t) = y_h + y_p = ce^{1/t} + 2.$
76. $y' + t^2 y = 3t^2$. By inspection, we find a particular solution

$y_p(t) = 3.$

We then solve the corresponding homogeneous equation

$y' + t^2 y = 0$

by separating variables and get

$y_h(t) = ce^{-t^3/3}.$

Hence, the general solution of the nonhomogeneous equation is

$y(t) = y_h + y_p = ce^{-t^3/3} + 3.$
77. $y' + ty = 1 + t^2$. By inspection, we find a particular solution

$y_p(t) = t.$

We then solve the corresponding homogeneous equation

$y' + ty = 0$

by separating variables and get

$y_h(t) = ce^{-t^2/2}.$

Hence, the general solution of the nonhomogeneous equation is

$y(t) = y_h + y_p = ce^{-t^2/2} + t.$
Review of Nonhomogeneous Second-Order DEs
78. $y'' + y' - 2y = 2t - 3$

Using the method of undetermined coefficients, we find a particular solution

$y_p(t) = -t + 1.$

We then solve the corresponding homogeneous equation

$y'' + y' - 2y = 0$

and get

$y_h(t) = c_1 e^t + c_2 e^{-2t}.$

Hence, the general solution of the nonhomogeneous equation is

$y(t) = y_h(t) + y_p(t) = c_1 e^t + c_2 e^{-2t} - t + 1.$
79. $y'' - 2y' + 2y = 4t - 6$

Using the method of undetermined coefficients, we find a particular solution

$y_p(t) = 2t - 1.$

We then solve the corresponding homogeneous equation

$y'' - 2y' + 2y = 0$

and get

$y_h(t) = c_1 e^t\cos t + c_2 e^t\sin t.$

Hence, the general solution of the nonhomogeneous equation is

$y(t) = y_h + y_p = c_1 e^t\cos t + c_2 e^t\sin t + 2t - 1.$
80. $y'' - 2y' + y = t - 3$

Using the method of undetermined coefficients, we find a particular solution

$y_p(t) = t - 1.$

We then solve the corresponding homogeneous equation

$y'' - 2y' + y = 0$

and get

$y_h(t) = c_1 e^t + c_2 te^t.$

Hence, the general solution of the nonhomogeneous equation is

$y(t) = y_h + y_p = c_1 e^t + c_2 te^t + t - 1.$
81. $y'' + y = 2t$

By inspection, we find a particular solution

$y_p(t) = 2t.$

We then solve the corresponding homogeneous equation

$y'' + y = 0$

and get

$y_h(t) = c_1\cos t + c_2\sin t.$

Hence, the general solution of the nonhomogeneous equation is

$y(t) = y_h + y_p = c_1\cos t + c_2\sin t + 2t.$
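The same substitution check works for the second-order problems; a sketch (assuming SymPy is available) verifying Problem 81's general solution of $y'' + y = 2t$:

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
y = c1*sp.cos(t) + c2*sp.sin(t) + 2*t   # y_h + y_p

# Substitute into y'' + y = 2t: the residual should vanish identically
residual = sp.diff(y, t, 2) + y - 2*t
print(sp.simplify(residual))  # 0
```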
Suggested Journal Entry I
82. Student Project
Suggested Journal Entry II
83. Student Project
SECTION 5.3 Eigenvalues and Eigenvectors 497
5.3 Eigenvalues and Eigenvectors
Computing Eigenstuff
1. $\begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}$. The characteristic equation is

$p(\lambda) = |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} 2-\lambda & 0 \\ 0 & 1-\lambda \end{vmatrix} = \lambda^2 - 3\lambda + 2 = 0,$

which has roots, or eigenvalues, $\lambda_1 = 1$, $\lambda_2 = 2$. To find the eigenvectors we substitute $\lambda_i$ into the linear system $\mathbf{A}\mathbf{v} = \lambda_i\mathbf{v}$ (or equivalently $(\mathbf{A} - \lambda_i\mathbf{I})\mathbf{v} = \mathbf{0}$) and solve for $\mathbf{v}_i$.

For $\lambda_1 = 1$, substitution yields

$\begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = 1\begin{bmatrix} x \\ y \end{bmatrix}.$

Solving this $2 \times 2$ system for x and y yields $2x = x$, $y = y$, which implies $x = 0$ with y arbitrary. Hence, there are an infinite number of eigenvectors corresponding to $\lambda_1 = 1$, namely $\mathbf{v}_1 = c[0, 1]$, where c is any nonzero real number. By a similar argument, the eigenvectors corresponding to $\lambda_2 = 2$ are $\mathbf{v}_2 = c[1, 0]$.
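Eigenvalue computations throughout this section can be confirmed numerically; a sketch with NumPy for the matrix of Problem 1:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Eigenvalues should be 1 and 2
assert np.allclose(sorted(eigvals), [1.0, 2.0])

# Each column of eigvecs satisfies A v = lambda v
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```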
2. $\begin{bmatrix} 3 & 2 \\ 2 & 0 \end{bmatrix}$. The characteristic equation is

$p(\lambda) = |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} 3-\lambda & 2 \\ 2 & -\lambda \end{vmatrix} = \lambda^2 - 3\lambda - 4 = 0,$

which has roots, or eigenvalues, $\lambda_1 = -1$, $\lambda_2 = 4$. To find the eigenvectors we substitute $\lambda_i$ into the linear system $\mathbf{A}\mathbf{v} = \lambda_i\mathbf{v}$ (or equivalently $(\mathbf{A} - \lambda_i\mathbf{I})\mathbf{v} = \mathbf{0}$) and solve for $\mathbf{v}_i$.

For $\lambda_1 = -1$, substitution yields

$\begin{bmatrix} 3 & 2 \\ 2 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = -1\begin{bmatrix} x \\ y \end{bmatrix}.$

Solving this $2 \times 2$ system for x and y yields $3x + 2y = -x$, $2x = -y$, which implies x arbitrary with $y = -2x$. Hence, there are an infinite number of eigenvectors corresponding to $\lambda_1 = -1$, namely $\mathbf{v}_1 = c[1, -2]$, where c is any nonzero real number. By a similar argument, the eigenvectors corresponding to $\lambda_2 = 4$ are $\mathbf{v}_2 = c[2, 1]$.
3. $\begin{bmatrix} 1 & 2 \\ 1 & 2 \end{bmatrix}$. The characteristic equation is

$p(\lambda) = |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} 1-\lambda & 2 \\ 1 & 2-\lambda \end{vmatrix} = \lambda^2 - 3\lambda = 0,$

which has roots $\lambda_1 = 0$, $\lambda_2 = 3$. To find the eigenvectors, we substitute $\lambda_i$ into the system $\mathbf{A}\mathbf{v} = \lambda_i\mathbf{v}$ and solve for $\mathbf{v}$.

For $\lambda_1 = 0$, substitution yields

$\begin{bmatrix} 1 & 2 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$

Solving this $2 \times 2$ system for x and y yields the single equation $x + 2y = 0$, which implies all vectors $\mathbf{v}_1 = \alpha[-2, 1]$ in $\mathbb{R}^2$ are eigenvectors, for any nonzero real number $\alpha$.

We find the eigenvectors corresponding to $\lambda_2 = 3$ by solving $\mathbf{A}\mathbf{v} = 3\mathbf{v}$, yielding the equations

$x + 2y = 3x, \qquad x + 2y = 3y,$

which give $x = y$. Hence $\mathbf{v}_2 = \alpha[1, 1]$, where $\alpha$ is any nonzero real number.
4. $\begin{bmatrix} 3 & 4 \\ -5 & -5 \end{bmatrix}$. The characteristic equation is

$p(\lambda) = |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} 3-\lambda & 4 \\ -5 & -5-\lambda \end{vmatrix} = \lambda^2 + 2\lambda + 5 = 0,$

which has roots $\lambda_1 = -1 + 2i$, $\lambda_2 = -1 - 2i$. To find the eigenvectors, we substitute $\lambda_i$ into the system $\mathbf{A}\mathbf{v} = \lambda_i\mathbf{v}$ and solve for $\mathbf{v}$.

For $\lambda_1 = -1 + 2i$, substitution yields

$\begin{bmatrix} 3 & 4 \\ -5 & -5 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = (-1 + 2i)\begin{bmatrix} x \\ y \end{bmatrix}.$

Solving this $2 \times 2$ system for x and y yields one independent equation

$(2 - i)x + 2y = 0,$

which, if we let $x = 1$, gives $y = -1 + \tfrac{1}{2}i$. Hence, we have a complex eigenvector

$\mathbf{v}_1 = \left[1,\; -1 + \tfrac{1}{2}i\right].$

Likewise, the other eigenvalue and eigenvector are the conjugates $\lambda_2 = -1 - 2i$ and $\mathbf{v}_2 = \left[1,\; -1 - \tfrac{1}{2}i\right]$. All multiples of these eigenvectors are eigenvectors; we have chosen them so the first coordinate is 1. The eigenspaces are not real.
5. $\begin{bmatrix} 1 & 3 \\ 1 & 3 \end{bmatrix}$. The characteristic equation is

$p(\lambda) = |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} 1-\lambda & 3 \\ 1 & 3-\lambda \end{vmatrix} = \lambda^2 - 4\lambda = 0,$

which has roots $\lambda_1 = 0$, $\lambda_2 = 4$. To find the eigenvectors, we substitute $\lambda_i$ into the system $\mathbf{A}\mathbf{v} = \lambda_i\mathbf{v}$ and solve for $\mathbf{v}$.

For $\lambda_1 = 0$, substitution yields

$\begin{bmatrix} 1 & 3 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$

Solving these equations, we find $x = -3y$. Picking $y = 1$, we have an eigenvector $\mathbf{v}_1 = [-3, 1]$. An eigenvector corresponding to $\lambda_2 = 4$ is $\mathbf{v}_2 = [1, 1]$. All multiples of these eigenvectors are also eigenvectors.
6. A = [3 2; −2 −3]. The characteristic equation is

   p(λ) = |A − λI| = det[3−λ, 2; −2, −3−λ] = λ² − 5 = 0,

which has roots λ₁ = √5, λ₂ = −√5. To find the eigenvectors, we substitute each λᵢ into the system Av = λᵢv and solve for v.

For λ₁ = √5, substitution yields

   [3 2; −2 −3][x; y] = √5 [x; y].

Solving this 2 × 2 system for x and y yields one independent equation

   (3 − √5)x + 2y = 0,

which gives an eigenvector

   v₁ = [1, −3/2 + (1/2)√5].

Likewise for the second eigenvalue, λ₂ = −√5, we find the eigenvector

   v₂ = [1, −3/2 − (1/2)√5].
7. A = [1 1; 1 1].

   |A − λI| = det[1−λ, 1; 1, 1−λ] = (1−λ)(1−λ) − 1 = λ² − 2λ = λ(λ − 2) = 0, so λ₁ = 0, λ₂ = 2.

To find eigenvectors for λ₁ = 0:

   [1 1 | 0; 1 1 | 0] → RREF → [1 1 | 0; 0 0 | 0] ⇒ v₁ + v₂ = 0, v₂ free.

   E₁ = span{[−1, 1]}.

To find eigenvectors for λ₂ = 2:

   [−1 1 | 0; 1 −1 | 0] → RREF → [1 −1 | 0; 0 0 | 0] ⇒ v₁ − v₂ = 0, v₂ free.

   E₂ = span{[1, 1]}.
8. A = [12 −6; 15 −7].

   |A − λI| = det[12−λ, −6; 15, −7−λ] = (12−λ)(−7−λ) + 6(15) = λ² − 5λ + 6 = (λ−2)(λ−3), so λ₁ = 2, λ₂ = 3.

To find eigenvectors for λ₁ = 2:

   [12−2, −6; 15, −7−2][v₁; v₂] = [0; 0] ⇒ 10v₁ − 6v₂ = 0 ⇒ v₂ = (5/3)v₁, v₁ free.

Taking v₁ = 3 gives the eigenvector v₁ = [3, 5], and E₁ = span{[3, 5]}.

To find eigenvectors for λ₂ = 3:

   [12−3, −6; 15, −7−3][v₁; v₂] = [0; 0] ⇒ 9v₁ − 6v₂ = 0 ⇒ v₂ = (3/2)v₁, v₁ free.

Taking v₁ = 2 gives the eigenvector v₂ = [2, 3], and E₂ = span{[2, 3]}.
9. A = [1 4; −4 11].

   det(A − λI) = det[1−λ, 4; −4, 11−λ] = (1−λ)(11−λ) + 16 = λ² − 12λ + 27 = (λ−3)(λ−9) = 0 ⇒ λ₁ = 3, λ₂ = 9.

To find eigenvectors for λ₁ = 3:

   [−2 4 | 0; −4 8 | 0] → RREF → [1 −2 | 0; 0 0 | 0] ⇒ v₁ − 2v₂ = 0, v₂ free.

   E₁ = span{[2, 1]}.

To find eigenvectors for λ₂ = 9:

   [−8 4 | 0; −4 2 | 0] → RREF → [1 −1/2 | 0; 0 0 | 0] ⇒ v₁ − (1/2)v₂ = 0, v₂ free.

   E₂ = span{[1, 2]}.
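Problems 8–10 all follow the same pattern; here is a small numerical check for Problem 9 (a sketch; the matrix and eigenpairs are those computed above):

```python
# Problem 9: A = [1 4; -4 11] with eigenpairs (3, [2, 1]) and (9, [1, 2]).
A = [[1, 4], [-4, 11]]

def matvec(M, v):
    """2x2 matrix-vector product."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

for lam, v in [(3, [2, 1]), (9, [1, 2])]:
    assert matvec(A, v) == [lam*v[0], lam*v[1]]
```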
10. A = [4 2; −3 11].

   det(A − λI) = det[4−λ, 2; −3, 11−λ] = (4−λ)(11−λ) + 6 = λ² − 15λ + 50 = (λ−10)(λ−5) = 0 ⇒ λ₁ = 5, λ₂ = 10.

To find eigenvectors for λ₁ = 5:

   [−1 2 | 0; −3 6 | 0] → RREF → [1 −2 | 0; 0 0 | 0] ⇒ v₁ − 2v₂ = 0, v₂ free.

   E₁ = span{[2, 1]}.

To find eigenvectors for λ₂ = 10:

   [−6 2 | 0; −3 1 | 0] → RREF → [1 −1/3 | 0; 0 0 | 0] ⇒ v₁ − (1/3)v₂ = 0, v₂ free.

   E₂ = span{[1, 3]}.
11. A = [3 5; −1 −1]. The characteristic equation is

   p(λ) = |A − λI| = det[3−λ, 5; −1, −1−λ] = λ² − 2λ + 2 = 0,

which has roots λ₁ = 1 + i, λ₂ = 1 − i.

To find eigenvectors for λ₁ = 1 + i:

   [3 5; −1 −1][x; y] = (1 + i)[x; y].

Solving this 2 × 2 system for x and y yields one independent equation

   (2 − i)x + 5y = 0,

which gives an eigenvector v₁ = [−2 − i, 1].

Likewise for the second eigenvalue λ₂ = 1 − i we find the eigenvector v₂ = [−2 + i, 1]. The eigenspaces are not real.
12. A = [1 1; 0 1]. The characteristic equation is

   p(λ) = |A − λI| = det[1−λ, 1; 0, 1−λ] = (1 − λ)² = 0,

which has the double root λ = 1, 1.

To find eigenvectors for λ₁ = 1:

   [1 1; 0 1][x; y] = 1[x; y].

Solving this 2 × 2 system for x and y yields

   x + y = x,  y = y,

which implies x arbitrary, y = 0. Hence, there is only one independent eigenvector,

   v₁ = [1, 0],

and E₁ = span{[1, 0]} is its eigenspace.
13. A = [2 4; −1 −2].

   λ² = 0, so λ = 0, 0.

To find eigenvectors for λ = 0:

   [2−0, 4; −1, −2−0][v₁; v₂] = [0; 0] ⇒ 2v₁ + 4v₂ = 0 ⇒ v₂ = −(1/2)v₁ ⇒ v = [−2, 1].

   E = span{[−2, 1]},  dim E = 1.
14. A = [3 0; 0 3].

   λ² − 6λ + 9 = 0, so λ = 3, 3.

To find eigenvectors for λ = 3:

   [3−3, 0; 0, 3−3][v₁; v₂] = [0; 0],

that is, 0v₁ + 0v₂ = 0, so [v₁; v₂] is an eigenvector for any choice of v₁ and v₂: every nonzero vector in R² is an eigenvector. For example, v₁ = [1, 0] and v₂ = [0, 1] are two linearly independent eigenvectors in R².

   E₁,₂ = span{[1, 0], [0, 1]} = R²,  dim E = 2.
15. A = [2 −1; 1 4].

   λ² − 6λ + 9 = 0, so λ = 3, 3.

To find eigenvectors for λ = 3:

   [2−3, −1; 1, 4−3][v₁; v₂] = [0; 0] ⇒ −v₁ − v₂ = 0 ⇒ v₂ = −v₁, v₁ free ⇒ v = [1, −1].

   E = span{[1, −1]},  dim E = 1.
16. A = [1 1; −1 −1].

   λ² = 0, so λ = 0, 0.

To find eigenvectors for λ = 0:

   [1−0, 1; −1, −1−0][v₁; v₂] = [0; 0] ⇒ v₁ + v₂ = 0 ⇒ v₂ = −v₁, v₁ free ⇒ v = [1, −1].

   E = span{[1, −1]},  dim E = 1.
Eigenvector Shortcut

17. Av = [a b; c d][v₁; v₂] = λ[v₁; v₂], with b ≠ 0.

Thus

   av₁ + bv₂ = λv₁
   cv₁ + dv₂ = λv₂,

or

   (a − λ)v₁ + bv₂ = 0
   cv₁ + (d − λ)v₂ = 0.   (*)

For b ≠ 0, the system is satisfied by

   v = [b, λ − a].

Note: It is also true that if c ≠ 0,

   v = [λ − d, c],

which can be shown in different ways. For example, setting these two expressions for v to be proportional leads to the characteristic equation for λ.
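The shortcut can be packaged as a tiny helper; the example below applies it to the matrix of Problem 8 (a sketch; `shortcut_eigenvector` is our own hypothetical name, not textbook notation):

```python
def shortcut_eigenvector(a, b, c, d, lam):
    """For a 2x2 matrix [[a, b], [c, d]] with b != 0 and eigenvalue lam,
    v = [b, lam - a] satisfies both equations of the system (*)."""
    assert b != 0
    return [b, lam - a]

# A = [[12, -6], [15, -7]] from Problem 8, eigenvalue 2:
a, b, c, d = 12, -6, 15, -7
lam = 2
v = shortcut_eigenvector(a, b, c, d, lam)   # [-6, -10], a multiple of [3, 5]
assert (a - lam)*v[0] + b*v[1] == 0         # first equation of (*)
assert c*v[0] + (d - lam)*v[1] == 0         # second equation of (*)
```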
When Shortcut Fails

18. In all three of the given matrices the shortcut v = [b, λ − a] fails for λ = 3, because it gives

   v = [0, 3 − 3] = [0, 0],

which cannot be an eigenvector.

If the matrix element b = 0, the eigenvector system (*) in Problem 17 becomes

   (a − λ)v₁ = 0
   cv₁ + (d − λ)v₂ = 0.

There are several possibilities for solving this system, depending on which factor of the first equation is zero.

• If v₁ = 0, one possibility is that v₂ = 0 as well, giving v = [0, 0]; but a zero vector cannot be an eigenvector. The only other option is that d = λ, in which case v₂ can be anything, so v = [0, 1].

• If a = λ, then v₁ can be anything, and the second equation gives v = [λ − d, c].

The second equation of the system then determines the outcome.

(a) [3 0; 5 3] has eigenvalue 3. The first eigenvector equation, 3v₁ = 3v₁, says v₁ can be anything. But the second equation, 5v₁ + (3 − 3)v₂ = 0, requires that v₁ = 0; then v₂ can be anything, so v = [0, 1].

(b) [3 0; 5 2] has eigenvalues 3 and 2. For λ₁ = 3, the first equation says 3v₁ = 3v₁, so v₁ can be anything. But the second equation, 5v₁ + 2v₂ = 3v₂, requires v₂ = 5v₁, so v₁ = [1, 5] is an eigenvector. For λ₂ = 2, the first equation 3v₁ = 2v₁ says v₁ = 0. Then the second equation 5v₁ + 2v₂ = 2v₂ says v₂ can be anything, so v₂ = [0, 1].

(c) [3 0; 0 3] has the double eigenvalue 3. Here

   3v₁ = 3v₁,  3v₂ = 3v₂,

so both v₁ and v₂ can be anything: any nonzero vector is an eigenvector. The eigenspace is two dimensional, all of R², spanned by [0, 1] and [1, 0] (or any other two linearly independent vectors).
More Eigenstuff

19. The given matrix is A = [2 0 0; 1 −1 −2; −1 0 1]. The characteristic equation is

   p(λ) = |A − λI| = det[2−λ, 0, 0; 1, −1−λ, −2; −1, 0, 1−λ] = (2 − λ)(λ + 1)(λ − 1) = 0,

which has eigenvalues λ₁ = 2, λ₂ = −1, and λ₃ = 1. To find the eigenvector corresponding to λ₁ = 2, we substitute 2 into the system Av = λv and solve for v. The system

   [2 0 0; 1 −1 −2; −1 0 1][x; y; z] = 2[x; y; z]

gives x = −c, y = −c, z = c, where c is any real number. Hence, we have the eigenvector v₁ = [−1, −1, 1]. By a similar argument, the eigenvectors corresponding to λ₂ = −1 and λ₃ = 1 are

   λ₂ = −1 ⇒ v₂ = [0, 1, 0]
   λ₃ = 1 ⇒ v₃ = [0, 1, −1].

Each eigenvalue corresponds to a one-dimensional eigenspace in R³.
20. The given matrix is A = [1 2 −1; 1 0 1; 4 −4 5]. The characteristic equation p(λ) = |A − λI| = 0 reduces to

   λ³ − 6λ² + 11λ − 6 = 0,

which has eigenvalues λ₁ = 1, λ₂ = 2, and λ₃ = 3. To find the eigenvector corresponding to λ₁ = 1, we substitute 1 into the system Av = λv and solve for v. The system

   [1 2 −1; 1 0 1; 4 −4 5][x; y; z] = 1[x; y; z]

gives x = c, y = −c, z = −2c, where c is any real number. Hence, we have the eigenvector v₁ = [1, −1, −2]. By a similar argument, the eigenvectors corresponding to λ₂ = 2 and λ₃ = 3 are

   λ₂ = 2 ⇒ v₂ = [−2, 1, 4]
   λ₃ = 3 ⇒ v₃ = [−1, 1, 4].

Each eigenvalue corresponds to a one-dimensional eigenspace in R³.
21. The given matrix is A = [1 2 2; 2 0 3; 2 3 0]. The characteristic equation p(λ) = |A − λI| = 0 reduces to

   λ³ − λ² − 17λ − 15 = 0,

which has eigenvalues λ₁ = 5, λ₂ = −3, and λ₃ = −1. To find the eigenvector corresponding to λ₁ = 5, we substitute 5 into the system Av = λv and solve for v. The system

   [1 2 2; 2 0 3; 2 3 0][x; y; z] = 5[x; y; z]

gives x = c, y = c, z = c, where c is any real number. Hence, we have the eigenvector v₁ = [1, 1, 1]. By a similar argument, the eigenvectors corresponding to λ₂ = −3 and λ₃ = −1 are

   λ₂ = −3 ⇒ v₂ = [0, 1, −1]
   λ₃ = −1 ⇒ v₃ = [−2, 1, 1].

Each eigenvalue corresponds to a one-dimensional eigenspace in R³.
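A check of the three eigenpairs in Problem 21 (a sketch using the matrix above):

```python
A = [[1, 2, 2],
     [2, 0, 3],
     [2, 3, 0]]

def matvec(M, v):
    """3x3 matrix-vector product."""
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

for lam, v in [(5, [1, 1, 1]), (-3, [0, 1, -1]), (-1, [-2, 1, 1])]:
    assert matvec(A, v) == [lam*x for x in v]
```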
22. The given matrix is A = [0 1 −1; 0 −1 1; 0 0 0]. The characteristic equation is

   p(λ) = |A − λI| = −λ²(λ + 1) = 0,

which has eigenvalues λ₁ = 0, λ₂ = 0, and λ₃ = −1. To find the eigenvectors corresponding to λ₁ or λ₂, we substitute 0 into the system Av = λv and solve for v. Doing this gives

   [0 1 −1; 0 −1 1; 0 0 0][x; y; z] = 0[x; y; z],

and y − z = 0, −y + z = 0, yielding two independent eigenvectors v₁ = [1, 0, 0], v₂ = [0, 1, 1]. By a similar argument, the eigenvector for −1 is v₃ = [1, −1, 0]. The eigenspace for λ = 0 is the two-dimensional space spanned by the vectors [1, 0, 0] and [0, 1, 1]. The eigenspace for λ₃ is the one-dimensional eigenspace spanned by [1, −1, 0].
23. The given matrix is A = [0 1 1; 1 0 1; 1 1 0]. The characteristic equation p(λ) = |A − λI| = 0 reduces to

   λ³ − 3λ − 2 = (λ + 1)²(λ − 2) = 0,

which has a double root λ₁ = λ₂ = −1 and a single root λ₃ = 2. To find the eigenvectors corresponding to the double root, we substitute λ = −1 into the system Av = λv and solve for v. Doing this gives

   [0 1 1; 1 0 1; 1 1 0][x; y; z] = −1[x; y; z],

and x = r, y = s, z = −r − s, where r, s are any real numbers. Hence, corresponding to λ = −1, we have in R³ the two-dimensional space of eigenvectors

   E₁ = span{[1, 0, −1], [0, 1, −1]}.

The eigenvector corresponding to the eigenvalue λ₃ = 2 is v₃ = [1, 1, 1], which has a one-dimensional eigenspace in R³.
24. The given matrix is A = [1 0 0; −1 3 0; 3 2 −2]. The characteristic equation is

   p(λ) = |A − λI| = (1 − λ)(3 − λ)(−2 − λ) = 0,

which has roots λ₁ = 1, λ₂ = 3, and λ₃ = −2. To find the eigenvector corresponding to λ₁ = 1, we substitute into the system Av = λ₁v and solve for v. Doing this yields an eigenvector v₁ = [6, 3, 8]. By a similar analysis, we substitute λ₂ = 3 and λ₃ = −2 into Av = λv, yielding eigenvectors v₂ = [0, 5, 2], v₃ = [0, 0, 1]. Each eigenvalue corresponds to a one-dimensional eigenspace.
25. The given matrix is A = [−1 0 1; −1 3 0; −4 13 −1]. The characteristic equation p(λ) = |A − λI| = 0 reduces to

   λ³ − λ² − λ − 2 = (λ − 2)(λ² + λ + 1) = 0,

which has roots λ₁ = 2, λ₂ = −1/2 + (√3/2)i, and λ₃ = −1/2 − (√3/2)i. To find the eigenvector corresponding to λ₁ = 2, we substitute into the system Av = λ₁v and solve for v. Doing this yields an eigenvector v₁ = [1, 1, 3]. By a similar analysis we find

   λ₂ = −1/2 + (√3/2)i ⇒ v₂ = [7/2 − (√3/2)i, 1, 5/2 + (3√3/2)i]
   λ₃ = −1/2 − (√3/2)i ⇒ v₃ = [7/2 + (√3/2)i, 1, 5/2 − (3√3/2)i].

E₁ is one dimensional in R³; λ₂ and λ₃ have no real eigenspaces.
26. The given matrix is A = [2 2 3; 1 2 1; 2 −2 1]. The characteristic equation p(λ) = |A − λI| = 0 reduces to

   λ³ − 5λ² + 2λ + 8 = (λ − 2)(λ − 4)(λ + 1) = 0,

which has roots λ₁ = 2, λ₂ = 4, and λ₃ = −1. To find the eigenvector corresponding to λ₁ = 2, we substitute into the system Av = λ₁v and solve for v. Doing this yields

   v₁ = [−2, −3, 2].

By a similar analysis, substituting λ₂ = 4 into Av = λv yields v₂ = [8, 5, 2]. The eigenvector corresponding to the eigenvalue λ₃ = −1 is v₃ = [1, 0, −1]. Each eigenvalue corresponds to a one-dimensional eigenspace in R³.
27. A = [1 0 0; −4 3 0; −4 2 1].

   det(A − λI) = det[1−λ, 0, 0; −4, 3−λ, 0; −4, 2, 1−λ] = (1 − λ)²(3 − λ) ⇒ λ₁,₂ = 1, 1 and λ₃ = 3.

To find eigenvectors for the double eigenvalue λ₁,₂ = 1:

   [0 0 0; −4 2 0; −4 2 0] → RREF → [1 −1/2 0; 0 0 0; 0 0 0] ⇒ v₁ − (1/2)v₂ = 0, with v₂, v₃ free.

   E₁,₂ = span{[1, 2, 0], [0, 0, 1]},  dim E₁,₂ = 2.

To find eigenvectors for λ₃ = 3:

   [−2 0 0; −4 0 0; −4 2 −2] → RREF → [1 0 0; 0 1 −1; 0 0 0] ⇒ v₁ = 0, v₂ − v₃ = 0, v₃ free.

   E₃ = span{[0, 1, 1]},  dim E₃ = 1.
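The repeated eigenvalue and its two-dimensional eigenspace in Problem 27 can be checked directly (a sketch with the matrix above):

```python
A = [[1, 0, 0],
     [-4, 3, 0],
     [-4, 2, 1]]

def matvec(M, v):
    """3x3 matrix-vector product."""
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

# both spanning vectors of E_{1,2} are fixed by A (eigenvalue 1) ...
for v in ([1, 2, 0], [0, 0, 1]):
    assert matvec(A, v) == v
# ... and [0, 1, 1] is an eigenvector for eigenvalue 3
assert matvec(A, [0, 1, 1]) == [0, 3, 3]
```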
28. A = [1 1 1; 0 1 1; 0 0 1].

   det(A − λI) = det[1−λ, 1, 1; 0, 1−λ, 1; 0, 0, 1−λ] = (1 − λ)³ ⇒ λ = 1, 1, 1.

To find eigenvectors for λ = 1:

   [0 1 1; 0 0 1; 0 0 0] ⇒ v₂ + v₃ = 0, v₃ = 0, v₁ free,

so v₂ = v₃ = 0 and

   E₁ = span{[1, 0, 0]},  dim E₁ = 1.
29. A = [3 −2 0; 1 0 0; −1 1 3].

   det(A − λI) = det[3−λ, −2, 0; 1, −λ, 0; −1, 1, 3−λ] = (3 − λ)·det[3−λ, −2; 1, −λ] = (3 − λ)(λ² − 3λ + 2) = (3 − λ)(λ − 2)(λ − 1) ⇒ λ₁ = 1, λ₂ = 2, λ₃ = 3.

To find eigenvectors for λ₁ = 1:

   [2 −2 0; 1 −1 0; −1 1 2] → RREF → [1 −1 0; 0 0 1; 0 0 0] ⇒ v₁ − v₂ = 0, v₃ = 0, v₂ free.

   E₁ = span{[1, 1, 0]},  dim E₁ = 1.

To find eigenvectors for λ₂ = 2:

   [1 −2 0; 1 −2 0; −1 1 1] → RREF → [1 −2 0; 0 1 −1; 0 0 0] ⇒ v₁ − 2v₂ = 0, v₂ − v₃ = 0, v₃ free.

   E₂ = span{[2, 1, 1]},  dim E₂ = 1.

To find eigenvectors for λ₃ = 3:

   [0 −2 0; 1 −3 0; −1 1 0] → RREF → [1 −3 0; 0 1 0; 0 0 0] ⇒ v₁ − 3v₂ = 0, v₂ = 0, v₃ free.

   E₃ = span{[0, 0, 1]},  dim E₃ = 1.
30. A = [0 0 2; −1 1 2; −1 0 3].

   det(A − λI) = det[−λ, 0, 2; −1, 1−λ, 2; −1, 0, 3−λ] = (1 − λ)·det[−λ, 2; −1, 3−λ] = (1 − λ)(λ² − 3λ + 2) = (1 − λ)(λ − 2)(λ − 1) ⇒ λ₁,₂ = 1, 1 and λ₃ = 2,

where we expanded along the second column, whose only nonzero entry is 1 − λ.

To find eigenvectors for the double eigenvalue λ₁,₂ = 1:

   [−1 0 2; −1 0 2; −1 0 2] → RREF → [1 0 −2; 0 0 0; 0 0 0] ⇒ v₁ − 2v₃ = 0, with v₂, v₃ free.

   E₁,₂ = span{[0, 1, 0], [2, 0, 1]},  dim E₁,₂ = 2.

To find eigenvectors for λ₃ = 2:

   [−2 0 2; −1 −1 2; −1 0 1] → RREF → [1 0 −1; 0 1 −1; 0 0 0] ⇒ v₁ − v₃ = 0, v₂ − v₃ = 0, v₃ free.

   E₃ = span{[1, 1, 1]},  dim E₃ = 1.
31. A = [2 −1 8 1; 0 4 0 0; 0 0 6 0; 0 0 0 4].

   det(A − λI) = (2 − λ)(4 − λ)²(6 − λ) ⇒ λ₁ = 2, λ₂,₃ = 4, 4, λ₄ = 6.

To find eigenvectors for λ₁ = 2:

   [0 −1 8 1; 0 2 0 0; 0 0 4 0; 0 0 0 2] → RREF → [0 1 −8 −1; 0 0 1 0; 0 0 0 1; 0 0 0 0] ⇒ v₂ − 8v₃ − v₄ = 0, v₃ = 0, v₄ = 0, v₁ free,

so v₂ = v₃ = v₄ = 0 and

   E₁ = span{[1, 0, 0, 0]},  dim E₁ = 1.

To find eigenvectors for the double eigenvalue λ₂,₃ = 4:

   [−2 −1 8 1; 0 0 0 0; 0 0 2 0; 0 0 0 0] → RREF → [1 1/2 −4 −1/2; 0 0 1 0; 0 0 0 0; 0 0 0 0] ⇒ v₁ + (1/2)v₂ − 4v₃ − (1/2)v₄ = 0, v₃ = 0, with v₂, v₄ free.

   E₂,₃ = span{[−1, 2, 0, 0], [1, 0, 0, 2]},  dim E₂,₃ = 2.

To find eigenvectors for λ₄ = 6:

   [−4 −1 8 1; 0 −2 0 0; 0 0 0 0; 0 0 0 −2] → RREF → [1 1/4 −2 −1/4; 0 1 0 0; 0 0 0 1; 0 0 0 0] ⇒ v₁ + (1/4)v₂ − 2v₃ − (1/4)v₄ = 0, v₂ = 0, v₄ = 0, v₃ free.

   E₄ = span{[2, 0, 1, 0]},  dim E₄ = 1.
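The same kind of check works in R⁴; for Problem 31 (a sketch with the matrix and eigenpairs above):

```python
A = [[2, -1, 8, 1],
     [0, 4, 0, 0],
     [0, 0, 6, 0],
     [0, 0, 0, 4]]

def matvec(M, v):
    """n x n matrix-vector product."""
    n = len(M)
    return [sum(M[i][j]*v[j] for j in range(n)) for i in range(n)]

cases = [(2, [1, 0, 0, 0]),
         (4, [-1, 2, 0, 0]), (4, [1, 0, 0, 2]),   # the two vectors spanning E_{2,3}
         (6, [2, 0, 1, 0])]
for lam, v in cases:
    assert matvec(A, v) == [lam*x for x in v]
```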
32. A = [4 0 4 0; 0 4 0 0; 0 0 8 0; 1 −2 −1 8].

   det(A − λI) = (8 − λ)·det[4−λ, 0, 4; 0, 4−λ, 0; 0, 0, 8−λ] = (4 − λ)²(8 − λ)² ⇒ λ₁,₂ = 4, 4 and λ₃,₄ = 8, 8,

where we expanded along the fourth column, whose only nonzero entry is 8 − λ.

To find eigenvectors for the double eigenvalue λ₁,₂ = 4:

   [0 0 4 0; 0 0 0 0; 0 0 4 0; 1 −2 −1 4] → RREF → [1 −2 0 4; 0 0 1 0; 0 0 0 0; 0 0 0 0] ⇒ v₁ − 2v₂ + 4v₄ = 0, v₃ = 0, with v₂, v₄ free.

   E₁,₂ = span{[2, 1, 0, 0], [−4, 0, 0, 1]},  dim E₁,₂ = 2.

To find eigenvectors for the double eigenvalue λ₃,₄ = 8:

   [−4 0 4 0; 0 −4 0 0; 0 0 0 0; 1 −2 −1 0] → RREF → [1 0 −1 0; 0 1 0 0; 0 0 0 0; 0 0 0 0] ⇒ v₁ − v₃ = 0, v₂ = 0, with v₃, v₄ free.

   E₃,₄ = span{[1, 0, 1, 0], [0, 0, 0, 1]},  dim E₃,₄ = 2.
33. A = [2 0 1 2; 0 2 0 0; 0 0 6 0; 0 0 1 4].

   det(A − λI) = det[2−λ, 0, 1, 2; 0, 2−λ, 0, 0; 0, 0, 6−λ, 0; 0, 0, 1, 4−λ]
               = (2 − λ)·det[2−λ, 0, 0; 0, 6−λ, 0; 0, 1, 4−λ]
               = (2 − λ)²(4 − λ)(6 − λ) ⇒ λ₁,₂ = 2, 2, λ₃ = 4, λ₄ = 6.

To find eigenvectors for the double eigenvalue λ₁,₂ = 2:

   [0 0 1 2; 0 0 0 0; 0 0 4 0; 0 0 1 2] → RREF → [0 0 1 2; 0 0 0 1; 0 0 0 0; 0 0 0 0] ⇒ v₃ + 2v₄ = 0, v₄ = 0, with v₁, v₂ free,

so v₃ = v₄ = 0 and

   E₁,₂ = span{[1, 0, 0, 0], [0, 1, 0, 0]},  dim E₁,₂ = 2.

To find eigenvectors for λ₃ = 4:

   [−2 0 1 2; 0 −2 0 0; 0 0 2 0; 0 0 1 0] → RREF → [1 0 −1/2 −1; 0 1 0 0; 0 0 1 0; 0 0 0 0] ⇒ v₁ − (1/2)v₃ − v₄ = 0, v₂ = 0, v₃ = 0, v₄ free.

   E₃ = span{[1, 0, 0, 1]},  dim E₃ = 1.

To find eigenvectors for λ₄ = 6:

   [−4 0 1 2; 0 −4 0 0; 0 0 0 0; 0 0 1 −2] → RREF → [1 0 −1/4 −1/2; 0 1 0 0; 0 0 1 −2; 0 0 0 0] ⇒ v₁ − (1/4)v₃ − (1/2)v₄ = 0, v₂ = 0, v₃ − 2v₄ = 0, v₄ free.

   E₄ = span{[1, 0, 2, 1]},  dim E₄ = 1.
34. A = [2 0 0 0; 1 −2 0 0; 1 0 1 0; 0 2 0 1].

Since A is lower triangular, the eigenvalues are the diagonal entries: λ₁ = 2, λ₂ = −2, λ₃,₄ = 1, 1.

To find eigenvectors for λ₁ = 2:

   [0 0 0 0; 1 −4 0 0; 1 0 −1 0; 0 2 0 −1][v₁; v₂; v₃; v₄] = [0; 0; 0; 0] ⇒ v₁ − 4v₂ = 0, v₁ − v₃ = 0, 2v₂ − v₄ = 0

⇒ v₁ free, v₂ = (1/4)v₁, v₃ = v₁, v₄ = (1/2)v₁. Taking v₁ = 4,

   v₁ = [4, 1, 4, 2],  E₁ = span{[4, 1, 4, 2]},  dim E₁ = 1.

To find eigenvectors for λ₂ = −2:

   [4 0 0 0; 1 0 0 0; 1 0 3 0; 0 2 0 3][v₁; v₂; v₃; v₄] = [0; 0; 0; 0] ⇒ 4v₁ = 0, v₁ + 3v₃ = 0, 2v₂ + 3v₄ = 0

⇒ v₁ = v₃ = 0, v₄ = −(2/3)v₂, v₂ free. Taking v₂ = 3,

   v₂ = [0, 3, 0, −2],  E₂ = span{[0, 3, 0, −2]},  dim E₂ = 1.

To find eigenvectors for the double eigenvalue λ₃,₄ = 1:

   [1 0 0 0; 1 −3 0 0; 1 0 0 0; 0 2 0 0][v₁; v₂; v₃; v₄] = [0; 0; 0; 0] ⇒ v₁ = v₂ = 0, with v₃, v₄ free.

Thus [0, 0, 1, 0] and [0, 0, 0, 1] are linearly independent eigenvectors for λ₃,₄ = 1, and

   E₃,₄ = span{[0, 0, 1, 0], [0, 0, 0, 1]},  dim E₃,₄ = 2.
Prove the Eigenspace Theorem

35. Let v₁ and v₂ be eigenvectors of an n × n matrix A corresponding to a given eigenvalue λ. Hence, we have Av₁ = λv₁ and Av₂ = λv₂. If we add these equations we get

   Av₁ + Av₂ = λv₁ + λv₂, or A(v₁ + v₂) = λ(v₁ + v₂).

Thus, v₁ + v₂ is also an eigenvector of A corresponding to λ. Now, if we multiply Av₁ = λv₁ by a constant c, we get the equation

   cAv₁ = cλv₁, or A(cv₁) = λ(cv₁).

Hence, scalar multiples of eigenvectors are also eigenvectors. We have just proven that the set of all eigenvectors corresponding to a given eigenvalue, together with the zero vector, is closed under vector addition and scalar multiplication and is, therefore, a subspace of Rⁿ.
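The closure argument can be illustrated numerically with any matrix that has a repeated eigenvalue; below we reuse the matrix of Problem 27, whose eigenvalue 1 has a two-dimensional eigenspace (a sketch):

```python
A = [[1, 0, 0],
     [-4, 3, 0],
     [-4, 2, 1]]

def matvec(M, v):
    """3x3 matrix-vector product."""
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

v1, v2 = [1, 2, 0], [0, 0, 1]          # eigenvectors for lambda = 1
s = [v1[k] + v2[k] for k in range(3)]  # their sum ...
c = [5*v1[k] for k in range(3)]        # ... and a scalar multiple
assert matvec(A, s) == s               # still an eigenvector for lambda = 1
assert matvec(A, c) == c
```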
Distinct Eigenvalues Extended

36. We wish to show that for any three distinct eigenvalues of a 3 × 3 matrix A, the corresponding eigenvectors are linearly independent. Assume λ₁, λ₂, λ₃ are distinct eigenvalues and let

   c₁v₁ + c₂v₂ + c₃v₃ = 0.

Multiplying by A yields

   A(c₁v₁ + c₂v₂ + c₃v₃) = c₁λ₁v₁ + c₂λ₂v₂ + c₃λ₃v₃ = 0.

Subtracting λ₃(c₁v₁ + c₂v₂ + c₃v₃) (whose value is zero) from this equation yields

   c₁(λ₁ − λ₃)v₁ + c₂(λ₂ − λ₃)v₂ = 0.

We have seen that the eigenvectors of any two distinct eigenvalues are linearly independent, so c₁ = c₂ = 0 in the preceding equation. But if this is true, the equation

   c₁v₁ + c₂v₂ + c₃v₃ = 0

shows that c₃ = 0 also, and hence we have proven the result for three distinct eigenvalues. We can continue this process indefinitely for any number of vectors.
Invertible Matrices

37. Suppose A is an invertible matrix with characteristic polynomial

   |A − λI| = λⁿ + a₁λⁿ⁻¹ + ⋯ + aₙ = 0.

If we let λ = 0, we have the relationship |A| = aₙ. If the matrix A is invertible we know it has a nonzero determinant; hence aₙ is different from zero, which says the characteristic polynomial does not have any zero roots, i.e., zero is not an eigenvalue.
38. Suppose A is an invertible matrix and λ is any eigenvalue of A. Then λ ≠ 0 (by Problem 37). Also, λ and its eigenvector v satisfy (A − λI)v = 0. If we premultiply this equation by A⁻¹ it yields

   A⁻¹(A − λI)v = A⁻¹0 = 0, or (I − λA⁻¹)v = 0.

Now dividing by λ, we can write

   ((1/λ)I − A⁻¹)v = 0,

which states that 1/λ is an eigenvalue of A⁻¹.
39. One example of the result in Problem 38 is

   A = [2 0; 0 3], with eigenvalues 2 and 3, for which

   A⁻¹ = [1/2 0; 0 1/3], with eigenvalues 1/2 and 1/3.
Similar Matrices

40. (a) If B ~ A, then B = P⁻¹AP for some invertible matrix P. Using the determinant of a product of matrices (Section 3.4),

   |B − λI| = |P⁻¹AP − λP⁻¹P| = |P⁻¹(A − λI)P| = |P⁻¹| |A − λI| |P| = |P⁻¹| |P| |A − λI| = |A − λI|.

Because A and B have the same characteristic polynomial, they have the same eigenvalues.
(b) A = [1 1; 0 2], P = [1 1; 1 2], P⁻¹ = [2 −1; −1 1], and

   B = P⁻¹AP = [2 −1; −1 1][1 1; 0 2][1 1; 1 2] = [2 2; 0 1].

Both matrices A, B have characteristic polynomial p(λ) = (λ − 1)(λ − 2), with eigenvalues λ₁ = 2, λ₂ = 1.

We calculate the two eigenvectors of A. For λ₁ = 2,

   Av = [1 1; 0 2][u; v] = 2[u; v] ⇒ u + v = 2u, 2v = 2v.

For convenience we let v = 1, so that u = 1. Thus, v₁ = [1, 1]. For λ₂ = 1,

   Av = [1 1; 0 2][u; v] = 1[u; v] ⇒ u + v = u, 2v = v.

Note that v = 0. For convenience, we let u = 1. Hence, v₂ = [1, 0].

Now we calculate the eigenvectors of B. For λ₁ = 2,

   Bv = [2 2; 0 1][u; v] = 2[u; v] ⇒ 2u + 2v = 2u, v = 2v.

Note that v = 0. For convenience, we let u = 1. Therefore, v₁ = [1, 0]. For λ₂ = 1,

   Bv = [2 2; 0 1][u; v] = 1[u; v] ⇒ 2u + 2v = u, v = v.

For convenience, we let v = 1, so that u = −2 and v₂ = [−2, 1].

Hence the eigenvectors of A and B are different.
Identity Eigenstuff

41. The 2 × 2 identity matrix has the characteristic equation (1 − λ)² = 0, so we have the repeated eigenvalues 1, 1. Substituting the value 1 into Iv = 1v, where v = [x, y], yields the equations x = x, y = y. Inasmuch as these equations pose no conditions on x and y, we find that every nonzero vector (x, y) in R² is an eigenvector. This is not surprising, because from a geometric point of view, eigenvectors are vectors whose direction does not change under multiplication by the matrix. Because the identity matrix leaves all vectors unchanged, every vector in R² is an eigenvector. Similar results hold in higher-dimensional spaces. In Rⁿ we have a repeated eigenvalue of 1 with multiplicity n, and n linearly independent eigenvectors.
Eigenvalues and Inversion

42. We have seen that if a matrix A has an inverse, none of its eigenvalues are zero. Hence, for an eigenvalue λ we can write

   Av = λv, so A⁻¹Av = λA⁻¹v, or v = λA⁻¹v.

Thus

   A⁻¹v = (1/λ)v,

which shows that the eigenvalues of A and A⁻¹ are reciprocals and that the two matrices have the same eigenvectors. (Creating an example is left to the reader.)
Triangular Matrices

43. A = [1 1; 0 1]. The characteristic polynomial is

   p(λ) = |A − λI| = det[1−λ, 1; 0, 1−λ] = (1 − λ)² = 0.

Hence, the eigenvalues are 1, 1, which are the elements on the diagonal of the matrix.
44. A = [2 0; −3 −1]. The characteristic polynomial is

   p(λ) = |A − λI| = det[2−λ, 0; −3, −1−λ] = (2 − λ)(−1 − λ) = 0.

Hence, the eigenvalues are 2, −1, which are the elements on the diagonal of the matrix.
45. A = [1 0 3; 0 4 1; 0 0 2]. The characteristic polynomial is

   p(λ) = |A − λI| = det[1−λ, 0, 3; 0, 4−λ, 1; 0, 0, 2−λ] = (1 − λ)(4 − λ)(2 − λ) = 0.

Hence, the eigenvalues are 1, 2, 4, which are the elements on the diagonal of the matrix.
46. As we proved in Section 3.4, Problem 15, the determinant of an n × n triangular matrix is the product of its diagonal elements. Let A be an n × n upper triangular matrix; then A − λI is also upper triangular, with diagonal entries a₁₁ − λ, a₂₂ − λ, …, aₙₙ − λ, so

   p(λ) = |A − λI| = (a₁₁ − λ)(a₂₂ − λ) ⋯ (aₙₙ − λ),

which shows that the eigenvalues of a general n × n upper triangular matrix will always be the elements on the diagonal of the matrix.
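Problems 43–46 can be spot-checked by evaluating det(A − λI) at the diagonal entries; for the 3 × 3 matrix of Problem 45 (a sketch):

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

A = [[1, 0, 3],
     [0, 4, 1],
     [0, 0, 2]]

# det(A - lambda*I) vanishes exactly at the diagonal entries 1, 4, 2
for lam in (1, 4, 2):
    AmL = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    assert det3(AmL) == 0
```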
Eigenvalues of a Transpose

47. A matrix is invertible if and only if its determinant is nonzero. We shall show that |A| = |Aᵀ|, which suffices to prove that A is invertible if and only if Aᵀ is invertible.

An inductive proof is outlined below. We want to prove that for any positive integer n, the determinant of an n × n matrix is equal to the determinant of its transpose. For n = 1, det(A) = a₁₁ = det(Aᵀ).

Now suppose that for some k, every k × k matrix has this property, and consider any (k + 1) × (k + 1) matrix B. Cofactor expansion (Section 3.4) along the first column of B gives

   det B = b₁₁C₁₁ + b₂₁C₂₁ + ⋯ + bₖ₊₁,₁Cₖ₊₁,₁.

However, each cofactor Cᵢⱼ = (−1)^(i+j) det(Mᵢⱼ) = (−1)^(i+j) det(Mᵢⱼᵀ), which is the (j, i) cofactor of Bᵀ, because the minor matrix Mᵢⱼ corresponding to the element bᵢⱼ is a k × k matrix, so det(Mᵢⱼ) = det(Mᵢⱼᵀ) by the induction hypothesis. Consequently,

   det B = b₁₁C₁₁ + b₂₁C₂₁ + ⋯ + bₖ₊₁,₁Cₖ₊₁,₁ = det(Bᵀ),

when det(Bᵀ) is expressed as an expansion along the first row of Bᵀ.

We have shown that if the result is true for one positive integer, it is true for the next. And we have shown it is true for n = 1. (We can think of the proof rippling out through all the positive integers.)
48. By Problem 40 in Section 3.1, (A − λI)ᵀ = Aᵀ − λIᵀ = Aᵀ − λI, so by Problem 47 in this section

   |Aᵀ − λI| = |(A − λI)ᵀ| = |A − λI|.

Therefore A and Aᵀ have the same characteristic polynomial and, hence, the same eigenvalues.
49. The matrices A = [1 1; 0 2] and Aᵀ = [1 0; 1 2] both have eigenvalues λ₁ = 1 and λ₂ = 2.

The eigenvectors of A satisfy

   [1 1; 0 2][x₁; x₂] = 1[x₁; x₂] and [1 1; 0 2][x₁; x₂] = 2[x₁; x₂],

with eigenvectors v₁ = [1, 0] and v₂ = [1, 1] corresponding to λ₁ and λ₂, respectively. However, the eigenvectors of Aᵀ satisfy

   [1 0; 1 2][x₁; x₂] = 1[x₁; x₂] and [1 0; 1 2][x₁; x₂] = 2[x₁; x₂],

with eigenvectors v₁ = [1, −1] and v₂ = [0, 1] corresponding to λ₁ and λ₂, respectively.

Hence, the eigenvectors of A and Aᵀ are not the same.
Orthogonal Eigenvectors

50. (a) A = [1 2; 2 4]. By direct computation we find the eigenvalues and eigenvectors of this matrix to be

   λ₁ = 0 ⇒ v₁ = [−2, 1]
   λ₂ = 5 ⇒ v₂ = [1, 2].

We see that the eigenvectors v₁, v₂ are orthogonal.

(b) Suppose A is a symmetric (real) matrix, i.e., A = Aᵀ, with nonzero eigenvectors v₁ and v₂ satisfying

   Av₁ = λ₁v₁,  Av₂ = λ₂v₂.

Then, recalling that v₁ • v₂ = v₁ᵀv₂ as a matrix product, we have

   λ₁(v₁ • v₂) = (λ₁v₁)ᵀv₂ = (Av₁)ᵀv₂ = v₁ᵀAᵀv₂ = v₁ᵀAv₂ = v₁ᵀ(λ₂v₂) = λ₂(v₁ • v₂).

Thus (λ₁ − λ₂)(v₁ • v₂) = 0. However λ₁ ≠ λ₂, so v₁ • v₂ = 0.
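Part (a) checks out numerically (a sketch using the matrix and eigenpairs above):

```python
A = [[1, 2], [2, 4]]        # symmetric

def matvec(M, v):
    """2x2 matrix-vector product."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

v1, v2 = [-2, 1], [1, 2]    # eigenvectors for eigenvalues 0 and 5
assert matvec(A, v1) == [0, 0]
assert matvec(A, v2) == [5, 10]
# distinct eigenvalues of a symmetric matrix give orthogonal eigenvectors
assert v1[0]*v2[0] + v1[1]*v2[1] == 0
```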
Another Eigenspace

51. The matrix representation of the linear transformation T(ax² + bx + c) = bx + c is

   T = [0 0 0; 0 1 0; 0 0 1],

because

   [0 0 0; 0 1 0; 0 0 1][a; b; c] = [0; b; c],

where the coordinates are taken relative to x², x, 1, respectively. Because T is a diagonal matrix, the eigenvalues are 0, 1, 1.

The eigenvector corresponding to λ₁ = 0 satisfies

   [0 0 0; 0 1 0; 0 0 1][x₁; x₂; x₃] = [0; 0; 0],

so v₁ can be [α, 0, 0] for any nonzero real number α.

The eigenvectors corresponding to the multiple eigenvalue λ₂ = λ₃ = 1 satisfy the equation

   [0 0 0; 0 1 0; 0 0 1][x₁; x₂; x₃] = [x₁; x₂; x₃].

The only condition these equations place on x₁, x₂, x₃ is x₁ = 0. Hence, we have the two-dimensional eigenspace

   E_{λ₂,λ₃} = { α[0, 0, 1] + β[0, 1, 0] : α, β any real numbers }.
Checking Up on Eigenvalues

52. (a) One can factor any quadratic with roots λ₁, λ₂ as

   (x − λ₁)(x − λ₂) = x² − (λ₁ + λ₂)x + λ₁λ₂ = 0,

which proves that the constant term in the quadratic is the product of the roots, and the coefficient of x is the negative of the sum of the roots. Hence, the product of the roots of x² + 3x + 2 = 0 is 2 and the sum is −3.

(b) Comparing the known fact

   p(λ) = λ² − tr(A)λ + |A| = 0

with the result from part (a), we conclude that the trace of a 2 × 2 matrix is the sum of the eigenvalues, and the determinant of a 2 × 2 matrix is the product of the eigenvalues.

(c) For the matrix

   A = [3 2; 2 0],

the sum of the eigenvalues is tr(A) = 3 and the determinant is |A| = −4. You can find the eigenvalues of this matrix and verify this fact.
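The trace/determinant check for part (c) (a sketch; the eigenvalues 4 and −1 come from factoring λ² − 3λ − 4 = (λ − 4)(λ + 1)):

```python
A = [[3, 2], [2, 0]]
trace = A[0][0] + A[1][1]                  # 3
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]    # -4

eigs = (4, -1)   # roots of lambda^2 - 3*lambda - 4 = 0
assert eigs[0] + eigs[1] == trace          # sum of eigenvalues = trace
assert eigs[0] * eigs[1] == det            # product of eigenvalues = determinant
```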
Looking for Matrices

53. A[0; 1] = [a b; c d][0; 1] = [b; d] = λ[0; 1] = [0; λ] ⇒ b = 0.

All matrices of the form [a 0; c d] have [0, 1] as an eigenvector.
54. A[1; 1] = [a b; c d][1; 1] = [a + b; c + d] = λ[1; 1] ⇒ a + b = λ and c + d = λ.

Thus, matrices of the form [λ−b b; λ−d d] have [1, 1] as an eigenvector with eigenvalue λ.
55. For λ = 1 with eigenvector [1, 0]:

   [a b; c d][1; 0] = [a; c] = [1; 0] ⇒ a = 1 and c = 0.

Furthermore, for eigenvector [−1, 2]:

   [a b; c d][−1; 2] = [−a + 2b; −c + 2d] = [−1; 2] ⇒ −1 + 2b = −1 and 0 + 2d = 2 ⇒ b = 0, d = 1.

Hence [1 0; 0 1] is the only matrix with eigenvalue 1 and the given eigenvectors.
56. For λ = 1 with eigenvector [1, 0]:

   [a b; c d][1; 0] = [a; c] = [1; 0] ⇒ a = 1, c = 0.

For λ = 2 with eigenvector [−1, 2]:

   [a b; c d][−1; 2] = [−a + 2b; −c + 2d] = 2[−1; 2] = [−2; 4] ⇒ −1 + 2b = −2 and 2d = 4 ⇒ b = −1/2, d = 2.

Hence [1 −1/2; 0 2] is the only matrix with the given eigenvalues and eigenvectors.
57. For the double eigenvalue λ = −1 with eigenvector [0, 2]:

   [a b; c d][0; 2] = [2b; 2d] = −1[0; 2] = [0; −2] ⇒ b = 0, d = −1.

With eigenvector [1, −1]:

   [a b; c d][1; −1] = [a − b; c − d] = [−1; 1] ⇒ a − 0 = −1 and c + 1 = 1 ⇒ a = −1, c = 0.

Hence [−1 0; 0 −1] is the only matrix with double eigenvalue −1 and the given eigenvectors.
Linear Transformations in the Plane

58. A = [1 0; 0 −1].

   |A − λI| = det[1−λ, 0; 0, −1−λ] = (1 − λ)(−1 − λ) = 0 ⇒ λ₁ = 1, λ₂ = −1.

To find eigenvectors for λ₁ = 1:

   [0 0 | 0; 0 −2 | 0] ⇒ v₂ = 0, v₁ free ⇒ v₁ = [1, 0].

To find eigenvectors for λ₂ = −1:

   [2 0 | 0; 0 0 | 0] ⇒ v₁ = 0, v₂ free ⇒ v₂ = [0, 1].
59. A = [−1 0; 0 1]. The eigenvalues are λ₁ = 1, λ₂ = −1.

To find eigenvectors for λ₁ = 1:

   [−2 0 | 0; 0 0 | 0] ⇒ v₁ = 0, v₂ free ⇒ v₁ = [0, 1].

To find eigenvectors for λ₂ = −1:

   [0 0 | 0; 0 2 | 0] ⇒ v₂ = 0, v₁ free ⇒ v₂ = [1, 0].
60. $\mathbf{A} = \begin{bmatrix} \cos \pi/4 & \sin \pi/4 \\ -\sin \pi/4 & \cos \pi/4 \end{bmatrix} = \begin{bmatrix} \sqrt{2}/2 & \sqrt{2}/2 \\ -\sqrt{2}/2 & \sqrt{2}/2 \end{bmatrix} = \frac{\sqrt{2}}{2}\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}.$
\[ |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} \sqrt{2}/2-\lambda & \sqrt{2}/2 \\ -\sqrt{2}/2 & \sqrt{2}/2-\lambda \end{vmatrix} = \lambda^2 - \sqrt{2}\,\lambda + 1 = 0. \]
Hence $\lambda_1 = \sqrt{2}/2 + (\sqrt{2}/2)i$, $\lambda_2 = \sqrt{2}/2 - (\sqrt{2}/2)i$, or $\lambda = \frac{\sqrt{2}}{2}(1 \pm i)$.

To find eigenvectors for $\lambda_1 = \frac{\sqrt{2}}{2}(1+i)$:
\[ \frac{\sqrt{2}}{2}\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \frac{\sqrt{2}}{2}(1+i)\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} \Rightarrow \begin{bmatrix} v_1 + v_2 \\ -v_1 + v_2 \end{bmatrix} = \begin{bmatrix} v_1 + iv_1 \\ v_2 + iv_2 \end{bmatrix} \Rightarrow \begin{cases} v_2 = iv_1 \\ -v_1 = iv_2 \end{cases} \Rightarrow \mathbf{v}_1 = \begin{bmatrix} 1 \\ i \end{bmatrix}. \]
Eigenvectors for $\lambda_2 = \frac{\sqrt{2}}{2}(1-i)$ are complex conjugates of $\mathbf{v}_1$, so $\mathbf{v}_2 = \begin{bmatrix} 1 \\ -i \end{bmatrix}.$
61. $\mathbf{A} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$
\[ |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} -\lambda & 1 \\ 1 & -\lambda \end{vmatrix} = \lambda^2 - 1 = 0 \Rightarrow \lambda_1 = 1,\ \lambda_2 = -1. \]
To find eigenvectors for λ1 = 1:
\[ \begin{bmatrix} -1 & 1 & 0 \\ 1 & -1 & 0 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & -1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \Rightarrow \begin{cases} v_1 - v_2 = 0 \\ v_1 \text{ free} \end{cases} \Rightarrow \mathbf{v}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}. \]
To find eigenvectors for λ2 = −1:
\[ \begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \Rightarrow \begin{cases} v_1 + v_2 = 0 \\ v_2 \text{ free} \end{cases} \Rightarrow \mathbf{v}_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}. \]
62. $\mathbf{A} = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}$
\[ |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} 1-\lambda & 0 \\ 2 & 1-\lambda \end{vmatrix} = (1-\lambda)(1-\lambda) = 0 \Rightarrow \lambda = 1,\ 1. \]
To find eigenvectors for λ = 1:
\[ \begin{bmatrix} 0 & 0 & 0 \\ 2 & 0 & 0 \end{bmatrix} \Rightarrow \begin{cases} v_1 = 0 \\ v_2 \text{ free} \end{cases} \Rightarrow \mathbf{v} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \]
Cayley-Hamilton
63. $\begin{bmatrix} 1 & 1 \\ 4 & 1 \end{bmatrix}$

The characteristic equation is
\[ p(\lambda) = |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} 1-\lambda & 1 \\ 4 & 1-\lambda \end{vmatrix} = \lambda^2 - 2\lambda - 3 = 0. \]
Substituting A into this polynomial, we can easily verify
\[ p(\mathbf{A}) = \mathbf{A}^2 - 2\mathbf{A} - 3\mathbf{I} = \begin{bmatrix} 1 & 1 \\ 4 & 1 \end{bmatrix}^2 - 2\begin{bmatrix} 1 & 1 \\ 4 & 1 \end{bmatrix} - 3\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}. \]
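The Cayley–Hamilton verification in Problem 63 can also be checked numerically. A minimal plain-Python sketch (not part of the original solution):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [4, 1]]
A2 = matmul(A, A)
I = [[1, 0], [0, 1]]
# p(A) = A^2 - 2A - 3I, computed entrywise
pA = [[A2[i][j] - 2*A[i][j] - 3*I[i][j] for j in range(2)] for i in range(2)]
print(pA)   # [[0, 0], [0, 0]] -- A satisfies its characteristic equation
```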
64. $\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$

The characteristic equation is
\[ p(\lambda) = |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} -\lambda & 1 \\ -1 & -\lambda \end{vmatrix} = \lambda^2 + 1 = 0. \]
Substituting A into this polynomial, we can easily verify
\[ p(\mathbf{A}) = \mathbf{A}^2 + \mathbf{I} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}^2 + \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}. \]
65. $\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}$

The characteristic equation is
\[ p(\lambda) = |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} 1-\lambda & 1 & 0 \\ 0 & 1-\lambda & 1 \\ 0 & 0 & 1-\lambda \end{vmatrix} = (1-\lambda)^3 = 0. \]
Substituting A for λ into this polynomial, we can easily verify
\[ p(\mathbf{A}) = (\mathbf{I} - \mathbf{A})^3 = \begin{bmatrix} 0 & -1 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{bmatrix}^3 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \]
66. $\begin{bmatrix} 1 & 1 & 2 \\ 0 & 2 & 3 \\ 1 & 0 & 4 \end{bmatrix}$

The characteristic equation is
\[ p(\lambda) = |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} 1-\lambda & 1 & 2 \\ 0 & 2-\lambda & 3 \\ 1 & 0 & 4-\lambda \end{vmatrix} = -\lambda^3 + 7\lambda^2 - 12\lambda + 7 = 0. \]
Substituting A for λ into this polynomial, we can easily verify
\[ p(\mathbf{A}) = -\mathbf{A}^3 + 7\mathbf{A}^2 - 12\mathbf{A} + 7\mathbf{I} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \]
Inverses by Cayley-Hamilton
67. $\begin{bmatrix} 2 & 0 & 0 \\ -1 & 1 & 3 \\ 1 & 0 & -1 \end{bmatrix}$

The characteristic polynomial of the matrix is
\[ p(\lambda) = \lambda^3 - 2\lambda^2 - \lambda + 2, \]
so we have the matrix equation
\[ p(\mathbf{A}) = \mathbf{A}^3 - 2\mathbf{A}^2 - \mathbf{A} + 2\mathbf{I} = \mathbf{0}. \]
Premultiplying by $\mathbf{A}^{-1}$ yields the equation
\[ \mathbf{A}^2 - 2\mathbf{A} - \mathbf{I} + 2\mathbf{A}^{-1} = \mathbf{0}, \]
so
\[ \mathbf{A}^{-1} = \frac{1}{2}\left(\mathbf{I} + 2\mathbf{A} - \mathbf{A}^2\right) = \frac{1}{2}\left\{ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} + \begin{bmatrix} 4 & 0 & 0 \\ -2 & 2 & 6 \\ 2 & 0 & -2 \end{bmatrix} - \begin{bmatrix} 4 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} \right\} = \begin{bmatrix} \frac{1}{2} & 0 & 0 \\ -1 & 1 & 3 \\ \frac{1}{2} & 0 & -1 \end{bmatrix}. \]
68. $\begin{bmatrix} 1 & 2 & -1 \\ 1 & 0 & 1 \\ 4 & -4 & 5 \end{bmatrix}$

The characteristic polynomial of the matrix is
\[ p(\lambda) = \lambda^3 - 6\lambda^2 + 11\lambda - 6, \]
so we have the matrix equation
\[ p(\mathbf{A}) = \mathbf{A}^3 - 6\mathbf{A}^2 + 11\mathbf{A} - 6\mathbf{I} = \mathbf{0}. \]
Premultiplying by $\mathbf{A}^{-1}$ yields the equation
\[ \mathbf{A}^2 - 6\mathbf{A} + 11\mathbf{I} - 6\mathbf{A}^{-1} = \mathbf{0}, \]
so
\[ \mathbf{A}^{-1} = \frac{1}{6}\left(\mathbf{A}^2 - 6\mathbf{A} + 11\mathbf{I}\right) = \frac{1}{6}\left\{ \begin{bmatrix} -1 & 6 & -4 \\ 5 & -2 & 4 \\ 20 & -12 & 17 \end{bmatrix} - 6\begin{bmatrix} 1 & 2 & -1 \\ 1 & 0 & 1 \\ 4 & -4 & 5 \end{bmatrix} + 11\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \right\} = \begin{bmatrix} \frac{2}{3} & -1 & \frac{1}{3} \\ -\frac{1}{6} & \frac{3}{2} & -\frac{1}{3} \\ -\frac{2}{3} & 2 & -\frac{1}{3} \end{bmatrix}. \]
69. The matrix $\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ has the characteristic equation
\[ \lambda^2 - (a+d)\lambda + (ad-bc) = 0, \]
so
\[ \mathbf{A}^2 - (a+d)\mathbf{A} + (ad-bc)\mathbf{I} = \mathbf{0}. \]
Premultiplying by $\mathbf{A}^{-1}$ yields the equation $\mathbf{A} - (a+d)\mathbf{I} + (ad-bc)\mathbf{A}^{-1} = \mathbf{0}$.

Solving for $\mathbf{A}^{-1}$:
\[ \mathbf{A}^{-1} = \frac{1}{ad-bc}\big((a+d)\mathbf{I} - \mathbf{A}\big) = \frac{1}{\det\mathbf{A}}\big((\operatorname{tr}\mathbf{A})\mathbf{I} - \mathbf{A}\big). \]

(a) $\begin{bmatrix} 3 & 2 \\ -2 & -3 \end{bmatrix}$

Using the preceding formula ($\det\mathbf{A} = -5$, $\operatorname{tr}\mathbf{A} = 0$) yields
\[ \mathbf{A}^{-1} = -\frac{1}{5}\left(0\cdot\mathbf{I} - \begin{bmatrix} 3 & 2 \\ -2 & -3 \end{bmatrix}\right) = \frac{1}{5}\begin{bmatrix} 3 & 2 \\ -2 & -3 \end{bmatrix} = \begin{bmatrix} \frac{3}{5} & \frac{2}{5} \\ -\frac{2}{5} & -\frac{3}{5} \end{bmatrix}. \]

(b) $\begin{bmatrix} 3 & 5 \\ -1 & -1 \end{bmatrix}$

Here $\det\mathbf{A} = 2$ and $\operatorname{tr}\mathbf{A} = 2$, so
\[ \mathbf{A}^{-1} = \frac{1}{2}\left(2\mathbf{I} - \begin{bmatrix} 3 & 5 \\ -1 & -1 \end{bmatrix}\right) = \frac{1}{2}\begin{bmatrix} -1 & -5 \\ 1 & 3 \end{bmatrix} = \begin{bmatrix} -\frac{1}{2} & -\frac{5}{2} \\ \frac{1}{2} & \frac{3}{2} \end{bmatrix}. \]
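The 2×2 inverse formula of Problem 69 is easy to implement exactly with rational arithmetic. A sketch (not from the text), using the fact that $(\operatorname{tr}\mathbf{A})\mathbf{I} - \mathbf{A} = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$:

```python
from fractions import Fraction

def inverse_2x2(A):
    """Invert a 2x2 matrix via A^{-1} = (1/det A)((tr A) I - A)."""
    a, b = A[0]
    c, d = A[1]
    det = Fraction(a*d - b*c)   # must be nonzero for A to be invertible
    # (tr A) I - A simplifies entrywise to [[d, -b], [-c, a]]
    return [[d/det, -b/det], [-c/det, a/det]]

print(inverse_2x2([[3, 2], [-2, -3]]))   # part (a): entries 3/5, 2/5, -2/5, -3/5
```

Applied to part (b), `inverse_2x2([[3, 5], [-1, -1]])` reproduces the matrix with entries −1/2, −5/2, 1/2, 3/2.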
Trace and Determinant as Parameters
70. Let $\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}.$ Then the characteristic polynomial is
\[ |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} a-\lambda & b \\ c & d-\lambda \end{vmatrix} = (a-\lambda)(d-\lambda) - bc = \lambda^2 - (a+d)\lambda + (ad-bc) = \lambda^2 - (\operatorname{Tr}\mathbf{A})\lambda + |\mathbf{A}|. \]
To find the eigenvalues λ, set $\lambda^2 - (\operatorname{Tr}\mathbf{A})\lambda + |\mathbf{A}| = 0$, so
\[ \lambda = \frac{\operatorname{Tr}\mathbf{A} \pm \sqrt{(\operatorname{Tr}\mathbf{A})^2 - 4|\mathbf{A}|}}{2}. \]
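The trace–determinant formula above can be checked against an earlier example. A sketch (not from the text) for the case of real eigenvalues, using the matrix from Problem 63:

```python
import math

def eigenvalues_2x2(A):
    """Eigenvalues from lambda = (tr A +/- sqrt(tr(A)^2 - 4 det A)) / 2.

    Assumes the discriminant is nonnegative (real eigenvalues)."""
    a, b = A[0]
    c, d = A[1]
    tr, det = a + d, a*d - b*c
    root = math.sqrt(tr*tr - 4*det)
    return ((tr + root)/2, (tr - root)/2)

print(eigenvalues_2x2([[1, 1], [4, 1]]))   # (3.0, -1.0), matching Problem 63
```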
Raising the Order
71. \[ p(\lambda) = \lambda^3 - (\operatorname{Tr}\mathbf{A})\lambda^2 + \big[(a_{11}a_{22} - a_{12}a_{21}) + (a_{11}a_{33} - a_{13}a_{31}) + (a_{22}a_{33} - a_{23}a_{32})\big]\lambda - |\mathbf{A}|. \]

If $\mathbf{A} = \begin{bmatrix} 1 & 2 & -1 \\ 1 & 0 & 1 \\ 4 & -4 & 5 \end{bmatrix}$, then $|\mathbf{A}| = 4 - 2 + 4 = 6$, $\operatorname{Tr}\mathbf{A} = 6$, and
\[ (a_{11}a_{22} - a_{12}a_{21}) + (a_{11}a_{33} - a_{13}a_{31}) + (a_{22}a_{33} - a_{23}a_{32}) = (0 - 2) + (5 + 4) + (0 + 4) = 11. \]
Therefore from the given formula, $p(\lambda) = \lambda^3 - 6\lambda^2 + 11\lambda - 6$, which agrees with the calculation from the definition of the characteristic polynomial:
\[ |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} 1-\lambda & 2 & -1 \\ 1 & -\lambda & 1 \\ 4 & -4 & 5-\lambda \end{vmatrix} = -(\lambda^3 - 6\lambda^2 + 11\lambda - 6). \]
Eigenvalues and Conversion
72. $y'' - y' - 2y = 0$ has characteristic equation $r^2 - r - 2 = 0$ with roots $r = -1$ and 2.

On the other hand, $y_1 = y$, $y_2 = y'$ yields the first-order system $y_1' = y_2$, $y_2' = 2y_1 + y_2$, which in matrix form is
\[ \begin{bmatrix} y_1' \\ y_2' \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix}. \]
The coefficient matrix of this system has the characteristic polynomial
\[ p(\lambda) = \begin{vmatrix} -\lambda & 1 \\ 2 & 1-\lambda \end{vmatrix} = \lambda^2 - \lambda - 2 = (\lambda + 1)(\lambda - 2), \]
so the roots of the characteristic equation are the same as the eigenvalues of the companion matrix.
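The agreement in Problem 72 can be confirmed numerically. A sketch (not from the text), using only the trace and determinant of the companion matrix:

```python
A = [[0, 1], [2, 1]]   # companion matrix of y'' - y' - 2y = 0
tr = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
# characteristic polynomial: lambda^2 - tr*lambda + det
p = lambda lam: lam*lam - tr*lam + det
print(tr, det)          # 1 -2  -> lambda^2 - lambda - 2, as for the DE
print(p(-1), p(2))      # 0 0  -> the DE's characteristic roots -1 and 2
```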
73. $y'' - 2y' + 5y = 0$ has characteristic equation $r^2 - 2r + 5 = 0$.

On the other hand, $y_1 = y$, $y_2 = y'$ yields the first-order system $y_1' = y_2$, $y_2' = -5y_1 + 2y_2$, which in matrix form is
\[ \begin{bmatrix} y_1' \\ y_2' \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -5 & 2 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix}. \]
The coefficient matrix of this system has the characteristic polynomial
\[ p(\lambda) = \begin{vmatrix} -\lambda & 1 \\ -5 & 2-\lambda \end{vmatrix} = \lambda^2 - 2\lambda + 5, \]
so the roots of the characteristic equation are the same as the eigenvalues of the companion matrix.
74. $y''' + 2y'' - y' - 2y = 0$ has characteristic equation $r^3 + 2r^2 - r - 2 = 0$.

On the other hand, $y_1 = y$, $y_2 = y'$, $y_3 = y''$ yields the first-order system $y_1' = y_2$, $y_2' = y_3$, $y_3' = 2y_1 + y_2 - 2y_3$, which in matrix form is
\[ \begin{bmatrix} y_1' \\ y_2' \\ y_3' \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 2 & 1 & -2 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}. \]
The coefficient matrix of this system has the characteristic equation
\[ \begin{vmatrix} -\lambda & 1 & 0 \\ 0 & -\lambda & 1 \\ 2 & 1 & -2-\lambda \end{vmatrix} = 0 \quad \Rightarrow \quad \lambda^3 + 2\lambda^2 - \lambda - 2 = 0, \]
which is the same characteristic equation as that of the third-order DE, so the roots of the characteristic equation are the same as the eigenvalues of the companion matrix.
75. $y''' - 2y'' - 5y' + 6y = 0$ has characteristic equation $r^3 - 2r^2 - 5r + 6 = 0$.

On the other hand, $y_1 = y$, $y_2 = y'$, $y_3 = y''$ yields $y_1' = y_2$, $y_2' = y_3$, $y_3' = -6y_1 + 5y_2 + 2y_3$, which in matrix form is
\[ \begin{bmatrix} y_1' \\ y_2' \\ y_3' \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & 5 & 2 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}. \]
The coefficient matrix of this system has the characteristic equation
\[ \begin{vmatrix} -\lambda & 1 & 0 \\ 0 & -\lambda & 1 \\ -6 & 5 & 2-\lambda \end{vmatrix} = 0 \quad \Rightarrow \quad (\lambda - 1)(\lambda - 3)(\lambda + 2) = 0, \]
which is the same as the characteristic equation of the third-order DE, so the eigenvalues of the companion matrix are the same as the roots of the characteristic polynomial.
Eigenfunction Boundary-Value Problems
76. $y'' + \lambda y = 0$

If λ = 0, then $y(t) = at + b$.
\[ \left. \begin{aligned} y(0) = 0 &\Rightarrow b = 0 \Rightarrow y = at \\ y(\pi) = 0 &\Rightarrow a\pi = 0 \Rightarrow a = 0 \end{aligned} \right\} \Rightarrow \text{zero solution.} \]
If λ > 0, then $y(t) = c_1\cos\sqrt{\lambda}\,t + c_2\sin\sqrt{\lambda}\,t$.
\[ \begin{aligned} y(0) = 0 &\Rightarrow c_1 = 0 \Rightarrow y = c_2\sin\sqrt{\lambda}\,t \\ y(\pi) = 0 &\Rightarrow c_2\sin\big(\pi\sqrt{\lambda}\big) = 0. \end{aligned} \]
To obtain nonzero solutions, we must have $\sin\big(\pi\sqrt{\lambda}\big) = 0$. Thus $\pi\sqrt{\lambda} = n\pi$, or $\lambda = n^2$, where n is any nonzero integer.
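The eigenvalue condition in Problem 76 is easy to probe numerically. A sketch (not part of the original solution): for λ = n² the boundary value $\sin(\pi\sqrt{\lambda})$ vanishes, while for other positive λ it does not.

```python
import math

# sin(pi*sqrt(lam)) must vanish for a nonzero solution of the BVP
boundary = lambda lam: math.sin(math.pi * math.sqrt(lam))

for lam in (1, 4, 9):                       # lam = n^2 for n = 1, 2, 3
    print(abs(boundary(lam)) < 1e-9)        # True: eigenvalues of the BVP
print(abs(boundary(2)) < 1e-9)              # False: lam = 2 is not n^2
```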
77. $y'' + \lambda y = 0$

If λ = 0, then $y(t) = at + b$, so that $y' = a$.
\[ \left. \begin{aligned} y'(0) = 0 &\Rightarrow a = 0 \Rightarrow y = b \\ y(\pi) = 0 &\Rightarrow b = 0 \end{aligned} \right\} \Rightarrow \text{zero solution.} \]
If λ > 0, then $y(t) = c_1\cos\sqrt{\lambda}\,t + c_2\sin\sqrt{\lambda}\,t$, so that $y'(t) = -c_1\sqrt{\lambda}\sin\sqrt{\lambda}\,t + c_2\sqrt{\lambda}\cos\sqrt{\lambda}\,t$.
\[ \begin{aligned} y'(0) = 0 &\Rightarrow c_2 = 0 \Rightarrow y = c_1\cos\sqrt{\lambda}\,t \\ y(\pi) = 0 &\Rightarrow c_1\cos\big(\pi\sqrt{\lambda}\big) = 0 \Rightarrow \cos\big(\pi\sqrt{\lambda}\big) = 0. \end{aligned} \]
There are nonzero solutions if $\lambda = \left(\dfrac{2n+1}{2}\right)^2$, for n an integer.
78. $y'' + \lambda y = 0$

If λ = 0, then $y(t) = at + b$, so that $y'(t) = a$.
\[ y(-\pi) = y(\pi) \Rightarrow -a\pi + b = a\pi + b \Rightarrow a = 0. \]
There are nonzero solutions if $y(t) =$ a nonzero constant.

If λ > 0, then $y(t) = c_1\cos\sqrt{\lambda}\,t + c_2\sin\sqrt{\lambda}\,t$ and $y'(t) = -c_1\sqrt{\lambda}\sin\sqrt{\lambda}\,t + c_2\sqrt{\lambda}\cos\sqrt{\lambda}\,t$.
\[ y(-\pi) = y(\pi) \Rightarrow c_1\cos\big(\pi\sqrt{\lambda}\big) - c_2\sin\big(\pi\sqrt{\lambda}\big) = c_1\cos\big(\pi\sqrt{\lambda}\big) + c_2\sin\big(\pi\sqrt{\lambda}\big) \]
\[ \Rightarrow -c_2\sin\big(\pi\sqrt{\lambda}\big) = c_2\sin\big(\pi\sqrt{\lambda}\big) \Rightarrow \sin\big(\pi\sqrt{\lambda}\big) = 0 \Rightarrow \lambda = n^2, \text{ for } n \text{ any nonzero integer.} \]
\[ y'(-\pi) = y'(\pi) \Rightarrow c_1\sqrt{\lambda}\sin\big(\pi\sqrt{\lambda}\big) + c_2\sqrt{\lambda}\cos\big(\pi\sqrt{\lambda}\big) = -c_1\sqrt{\lambda}\sin\big(\pi\sqrt{\lambda}\big) + c_2\sqrt{\lambda}\cos\big(\pi\sqrt{\lambda}\big) \]
\[ \Rightarrow c_1\sin\big(\pi\sqrt{\lambda}\big) = -c_1\sin\big(\pi\sqrt{\lambda}\big) \Rightarrow \sin\big(\pi\sqrt{\lambda}\big) = 0 \Rightarrow \lambda = n^2, \text{ for } n \text{ any nonzero integer.} \]
Computer Lab: Eigenvectors
79. Note how quickly the IDE Eigen Engine Tool lets you see the eigenvectors.
(a) $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ has eigenvalues and eigenvectors
\[ \lambda_1 = 1,\ \mathbf{v}_1 = [1,\ 0]; \qquad \lambda_2 = 1,\ \mathbf{v}_2 = [0,\ 1]. \]
From the preceding results we might conjecture that the diagonal entries of a diagonal matrix are the eigenvalues of the matrix (which is true), and that an n n× diagonal matrix
has n independent eigenvectors even if the matrix has multiple eigenvalues (which is also true). You can easily show that for an eigenvalue of multiplicity k in a diagonal matrix, k of the elements vi in the eigenvector will be free, giving an eigenspace of dimension k.
(b) $\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}$ has eigenvalues and eigenvectors
\[ \lambda_1 = 2,\ \mathbf{v}_1 = [1,\ 0]; \qquad \lambda_2 = 2,\ \mathbf{v}_2 = [0,\ 1]. \]
In this case the multiple eigenvalue 2 has two linearly independent eigenvectors. The conclusions here would be the same as in part (a).
(c) $\begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}$ has the repeated eigenvalue
\[ \lambda_1 = \lambda_2 = 2, \quad \text{with eigenvector } \mathbf{v} = [1,\ 0]. \]
In this case λ = 2 is a repeated eigenvalue, but $\mathbf{v} = [1,\ 0]$ is its only linearly independent eigenvector. A correct observation is that the 1 in the (1, 2) entry of the matrix causes the eigenvalue 2 to have only one independent eigenvector (which is true).
(d) $\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$ has eigenvalues and eigenvectors
\[ \lambda_1 = 0,\ \mathbf{v}_1 = [1,\ -1]; \qquad \lambda_2 = 2,\ \mathbf{v}_2 = [1,\ 1]. \]
The fact that the two rows of the matrix are multiples of each other (in fact they are the same) means one of the eigenvalues will be zero.
(e) $\begin{bmatrix} 1 & 4 \\ 1 & 1 \end{bmatrix}$ has eigenvalues and eigenvectors
\[ \lambda_1 = -1,\ \mathbf{v}_1 = [-2,\ 1]; \qquad \lambda_2 = 3,\ \mathbf{v}_2 = [2,\ 1]. \]
Also, because the determinant is not zero, we know the eigenvalues will both be different from zero.
(f) $\begin{bmatrix} 2 & 1 \\ -1 & 2 \end{bmatrix}$ has eigenvalues and eigenvectors
\[ \lambda_1 = 2 + i,\ \mathbf{v}_1 = [1,\ i]; \qquad \lambda_2 = 2 - i,\ \mathbf{v}_2 = [1,\ -i]. \]
We might suspect a matrix whose off-diagonal part is skew-symmetric to have complex eigenvalues.
(g) $\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$ has eigenvalues and eigenvectors
\[ \lambda_1 = 0,\ \mathbf{v}_1 = [1,\ 0]; \qquad \lambda_2 = 1,\ \mathbf{v}_2 = [0,\ 1]. \]
Note: The determinant is zero, so we know that one of the eigenvalues will be zero.
(h) $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ has eigenvalues and eigenvectors
\[ \lambda_1 = 0,\ \mathbf{v}_1 = [0,\ 1]; \qquad \lambda_2 = 1,\ \mathbf{v}_2 = [1,\ 0]. \]
Note: The determinant of the matrix is zero, so we know that one of the eigenvalues will be zero.
Suggested Journal Entry
80. Student Project
5.4 Coordinates and Diagonalization
Changing Coordinates I
1. \[ \mathbf{M}_B = \begin{bmatrix} \mathbf{b}_1 & \mathbf{b}_2 \end{bmatrix} = \begin{bmatrix} 3 & -4 \\ -2 & 3 \end{bmatrix}, \qquad \mathbf{M}_B^{-1} = \begin{bmatrix} 3 & 4 \\ 2 & 3 \end{bmatrix} \]
2. $\mathbf{u}_B = \mathbf{M}_B^{-1}\mathbf{u}_S$:
\[ \begin{bmatrix} 3 & 4 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 3 \\ 8 \end{bmatrix} = \begin{bmatrix} 41 \\ 30 \end{bmatrix}, \qquad \begin{bmatrix} 3 & 4 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 2 \\ -1 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} 3 & 4 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 4 \\ 3 \end{bmatrix} \]
3. $\mathbf{u}_S = \mathbf{M}_B\mathbf{u}_B$:
\[ \begin{bmatrix} 3 & -4 \\ -2 & 3 \end{bmatrix}\begin{bmatrix} 3 \\ -1 \end{bmatrix} = \begin{bmatrix} 13 \\ -9 \end{bmatrix}, \qquad \begin{bmatrix} 3 & -4 \\ -2 & 3 \end{bmatrix}\begin{bmatrix} 2 \\ 2 \end{bmatrix} = \begin{bmatrix} -2 \\ 2 \end{bmatrix}, \qquad \begin{bmatrix} 3 & -4 \\ -2 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 3 \\ -2 \end{bmatrix} \]
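The coordinate changes in Problems 1–3 are single matrix–vector products, so they are easy to check. A plain-Python sketch (not part of the original solution), using the basis matrix from Problem 1:

```python
def matvec(M, v):
    """Multiply a 2x2 matrix (list of rows) by a 2-vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

MB     = [[3, -4], [-2, 3]]
MB_inv = [[3, 4], [2, 3]]       # det MB = 9 - 8 = 1, so the inverse is integer

print(matvec(MB_inv, [3, 8]))    # [41, 30]: first conversion in Problem 2
print(matvec(MB, [3, -1]))       # [13, -9]: first conversion in Problem 3
```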
Changing Coordinates II
4. \[ \mathbf{M}_B = \begin{bmatrix} \mathbf{b}_1 & \mathbf{b}_2 \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}, \qquad \mathbf{M}_B^{-1} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \]
5. $\mathbf{u}_B = \mathbf{M}_B^{-1}\mathbf{u}_S$:
\[ \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} 5 \\ 4 \end{bmatrix}, \qquad \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} -1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 4 \\ 5 \end{bmatrix} = \begin{bmatrix} 13 \\ 9 \end{bmatrix} \]
6. $\mathbf{u}_S = \mathbf{M}_B\mathbf{u}_B$:
\[ \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} 2 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \end{bmatrix}, \qquad \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 2 \\ -3 \end{bmatrix}, \qquad \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \]
Changing Coordinates III
7. \[ \mathbf{M}_B = \begin{bmatrix} \mathbf{b}_1 & \mathbf{b}_2 & \mathbf{b}_3 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}, \qquad \mathbf{M}_B^{-1} = \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix} \]
8. $\mathbf{u}_B = \mathbf{M}_B^{-1}\mathbf{u}_S$:
\[ \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 2 \\ 3 \\ 0 \end{bmatrix} = \begin{bmatrix} -1 \\ 3 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 0 \\ 4 \\ 3 \end{bmatrix} = \begin{bmatrix} -4 \\ 1 \\ 3 \end{bmatrix} \]
9. $\mathbf{u}_S = \mathbf{M}_B\mathbf{u}_B$:
\[ \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ -1 \\ -1 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \\ 3 \end{bmatrix} = \begin{bmatrix} 5 \\ 4 \\ 3 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix} \]
Changing Coordinates IV
10. \[ \mathbf{M}_B = \begin{bmatrix} \mathbf{b}_1 & \mathbf{b}_2 & \mathbf{b}_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 0 & 1 \\ 0 & 1 & -1 \end{bmatrix}, \qquad \mathbf{M}_B^{-1} = \begin{bmatrix} 1 & -2 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix} \]
11. $\mathbf{u}_B = \mathbf{M}_B^{-1}\mathbf{u}_S$:
\[ \begin{bmatrix} 1 & -2 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} = \begin{bmatrix} 5 \\ -1 \\ -2 \end{bmatrix}, \qquad \begin{bmatrix} 1 & -2 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -3 \\ 1 \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} 1 & -2 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} 3 \\ 0 \\ 2 \end{bmatrix} = \begin{bmatrix} 3 \\ 2 \\ 0 \end{bmatrix} \]
12. $\mathbf{u}_S = \mathbf{M}_B\mathbf{u}_B$:
\[ \begin{bmatrix} 1 & 0 & 2 \\ 0 & 0 & 1 \\ 0 & 1 & -1 \end{bmatrix}\begin{bmatrix} -1 \\ -1 \\ -4 \end{bmatrix} = \begin{bmatrix} -9 \\ -4 \\ 3 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 0 & 2 \\ 0 & 0 & 1 \\ 0 & 1 & -1 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \\ 3 \end{bmatrix} = \begin{bmatrix} 7 \\ 3 \\ -4 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 0 & 2 \\ 0 & 0 & 1 \\ 0 & 1 & -1 \end{bmatrix}\begin{bmatrix} 3 \\ 1 \\ 4 \end{bmatrix} = \begin{bmatrix} 11 \\ 4 \\ -3 \end{bmatrix} \]
Polynomial Coordinates I
13. \[ \mathbf{M}_N = \begin{bmatrix} \mathbf{n}_1 & \mathbf{n}_2 & \mathbf{n}_3 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 1 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad \mathbf{M}_N^{-1} = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 2 & -1 \\ 0 & 0 & 1 \end{bmatrix}. \]
14. To find the coordinates of a polynomial relative to the basis $N = \{2x^2 - x,\ x^2,\ x^2 + 1\}$, we first determine the basis vectors for the new basis in terms of the standard basis and then construct $\mathbf{M}_B$ as outlined in Example 5 in Section 5.4:
\[ \mathbf{M}_B = \begin{bmatrix} 2 & 1 & 1 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \]
The polynomial $p(x) = x^2 + 2x + 3$ is represented by the coordinate vector $[1,\ 2,\ 3]$ in terms of the standard basis. We find the coordinate vector $\mathbf{p}_N = [\alpha_1,\ \alpha_2,\ \alpha_3]$ in terms of the new basis by solving the system
\[ \begin{bmatrix} 2 & 1 & 1 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}. \]
Then $\mathbf{p}_N = [\alpha_1,\ \alpha_2,\ \alpha_3] = [-2,\ 2,\ 3]$. We use the same method to obtain:
\[ \mathbf{q}_N = [0,\ 3,\ -2], \qquad \mathbf{r}_N = [-4,\ 13,\ -5]. \]
15. For the basis $N = \{2x^2 - x,\ x^2,\ x^2 + 1\}$ the vectors $u(x)$, $v(x)$, $w(x)$, whose coordinate vectors are, respectively, $[1,\ 0,\ 2]$, $[-2,\ 2,\ 3]$, $[-1,\ -1,\ 0]$, are
\[ u(x) = 1\,(2x^2 - x) + 0\,(x^2) + 2\,(x^2 + 1) = 4x^2 - x + 2, \quad \text{so } \mathbf{u}_S = \begin{bmatrix} 4 \\ -1 \\ 2 \end{bmatrix}. \]
\[ v(x) = -2\,(2x^2 - x) + 2\,(x^2) + 3\,(x^2 + 1) = x^2 + 2x + 3, \quad \text{so } \mathbf{v}_S = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}. \]
\[ w(x) = -1\,(2x^2 - x) - 1\,(x^2) + 0\,(x^2 + 1) = -3x^2 + x, \quad \text{so } \mathbf{w}_S = \begin{bmatrix} -3 \\ 1 \\ 0 \end{bmatrix}. \]
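The polynomial coordinate conversions in Problems 13–15 are just 3×3 matrix–vector products. A plain-Python sketch (not part of the original solution), recovering the coordinate vector of $p(x) = x^2 + 2x + 3$ from Problem 14:

```python
def matvec3(M, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

# Inverse basis matrix from Problem 13 (basis N = {2x^2 - x, x^2, x^2 + 1})
MN_inv = [[0, -1, 0],
          [1, 2, -1],
          [0, 0, 1]]

pS = [1, 2, 3]                      # standard coordinates of x^2 + 2x + 3
print(matvec3(MN_inv, pS))          # [-2, 2, 3], matching p_N in Problem 14
```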
Polynomial Coordinates II
16. $\mathbf{M}_Q = \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 & \mathbf{q}_3 & \mathbf{q}_4 \end{bmatrix}$. Here the new basis is
\[ Q = \{x^3,\ x^3 + x,\ x^2,\ x^2 + 1\}, \]
so
\[ \mathbf{M}_Q = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad \mathbf{M}_Q^{-1} = \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 1 \end{bmatrix}. \]
17. Here, we have
\[ p(x) = x^3 + 2x^2 + 3, \qquad q(x) = x^2 - x - 2, \qquad r(x) = x^3 + 1, \]
so to find the coordinates of p, q, r relative to the basis
\[ \{x^3,\ x^3 + x,\ x^2,\ x^2 + 1\}, \]
we must use $\mathbf{M}_Q$ from Problem 16 to find $\mathbf{p}_Q$, $\mathbf{q}_Q$, and $\mathbf{r}_Q$ by solving the following systems:
\[ \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \alpha_4 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 0 \\ 3 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ -1 \\ -2 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \delta_1 \\ \delta_2 \\ \delta_3 \\ \delta_4 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}. \]
We obtain the coordinate vectors:
\[ \mathbf{p}_Q = [\alpha_1,\ \alpha_2,\ \alpha_3,\ \alpha_4] = [1,\ 0,\ -1,\ 3], \]
\[ \mathbf{q}_Q = [\beta_1,\ \beta_2,\ \beta_3,\ \beta_4] = [1,\ -1,\ 3,\ -2], \]
\[ \mathbf{r}_Q = [\delta_1,\ \delta_2,\ \delta_3,\ \delta_4] = [1,\ 0,\ -1,\ 1]. \]
18. Here, we have $\mathbf{u}_S = \mathbf{M}_Q\mathbf{u}_Q$, using $\mathbf{M}_Q$ from Problem 16. So the standardized representations of $u(x)$, $v(x)$, $w(x)$ are
\[ \mathbf{u}_S = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \\ 0 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \\ -1 \\ 2 \end{bmatrix}, \qquad \mathbf{v}_S = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} -2 \\ 0 \\ 2 \\ 0 \end{bmatrix} = \begin{bmatrix} -2 \\ 2 \\ 0 \\ 0 \end{bmatrix}, \]
\[ \mathbf{w}_S = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 3 \\ -1 \\ 4 \\ 2 \end{bmatrix} = \begin{bmatrix} 2 \\ 6 \\ -1 \\ 2 \end{bmatrix}. \]
We can check our results by observing that $u(x)$, $v(x)$, $w(x)$ can be written in both coordinate systems:
\[ u(x) = 1\,(x^3) - 1\,(x^3 + x) + 0\,(x^2) + 2\,(x^2 + 1) = 2x^2 - x + 2, \]
\[ v(x) = -2\,(x^3) + 0\,(x^3 + x) + 2\,(x^2) + 0\,(x^2 + 1) = -2x^3 + 2x^2, \]
\[ w(x) = 3\,(x^3) - 1\,(x^3 + x) + 4\,(x^2) + 2\,(x^2 + 1) = 2x^3 + 6x^2 - x + 2. \]
Matrix Representations for Polynomial Transformations
19. $T(f)(t) = f''(t)$ and $f(t) = at^4 + bt^3 + ct^2 + dt + e$.

We first write
\[ T(f) = \frac{d^2}{dt^2}\big(at^4 + bt^3 + ct^2 + dt + e\big) = 12at^2 + 6bt + 2c. \]
We apply the matrix $\mathbf{M}_B$ that sends the coordinate vector for f into the coordinate vector of $f''$ (relative to the basis $\{t^4,\ t^3,\ t^2,\ t,\ 1\}$):
\[ \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 12 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ d \\ e \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 12a \\ 6b \\ 2c \end{bmatrix}. \]

(a) We can use the matrix $\mathbf{M}_B$ to find the second derivative of
\[ g(t) = t^4 - t^3 + t^2 - t + 1 \]
by multiplying the matrix by the coordinate vector of $g(t)$, which is $[1,\ -1,\ 1,\ -1,\ 1]$, to find
\[ \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 12 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 12 \\ -6 \\ 2 \end{bmatrix}. \]
This means that the second derivative of $g(t)$ is $g''(t) = 12t^2 - 6t + 2$, and
\[ T(g(t)) = [0,\ 0,\ 12,\ -6,\ 2]. \]
We find coordinate vectors for parts (b), (c), (d) in the same way.

(b) $[0,\ 0,\ 12,\ 0,\ 4]$ (c) $[0,\ 0,\ -48,\ 18,\ 0]$ (d) $[0,\ 0,\ 12,\ 0,\ -16]$
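The matrix representation of the second derivative in Problem 19 can be applied mechanically. A plain-Python sketch (not part of the original solution):

```python
# Matrix of T(f) = f'' acting on coordinates [a, b, c, d, e] of
# f(t) = a t^4 + b t^3 + c t^2 + d t + e
MB = [[0, 0, 0, 0, 0],
      [0, 0, 0, 0, 0],
      [12, 0, 0, 0, 0],
      [0, 6, 0, 0, 0],
      [0, 0, 2, 0, 0]]

def apply(M, v):
    """Multiply a 5x5 matrix (list of rows) by a 5-vector."""
    return [sum(M[i][j]*v[j] for j in range(5)) for i in range(5)]

g = [1, -1, 1, -1, 1]           # g(t) = t^4 - t^3 + t^2 - t + 1
print(apply(MB, g))             # [0, 0, 12, -6, 2] -> g'' = 12t^2 - 6t + 2
```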
20. $T(f)(t) = f(0)$ and $f(t) = at^4 + bt^3 + ct^2 + dt + e$.

We first write $T(f) = f(0) = e$. We then apply the matrix that sends the coordinates of f into the coordinates of $f(0)$:
\[ \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ d \\ e \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ e \end{bmatrix}. \]

(a) We can use this matrix to evaluate $g(0)$ for the function
\[ g(t) = t^4 - t^3 + t^2 - t + 1 \]
by multiplying the matrix by the coordinate vector of $g(t)$, which is $[1,\ -1,\ 1,\ -1,\ 1]$, to find
\[ \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}. \]
This means that $g(0) = 1$, so $T(g(t)) = [0,\ 0,\ 0,\ 0,\ 1]$.

Of course we would never use this idea to evaluate such a simple transformation, but it illustrates the idea. We find coordinate vectors for parts (b), (c), and (d) in the same way.

(b) $[0,\ 0,\ 0,\ 0,\ 4]$ (c) $[0,\ 0,\ 0,\ 0,\ 0]$ (d) $[0,\ 0,\ 0,\ 0,\ 16]$
21. $T(f)(t) = f'''(t)$ and $f(t) = at^4 + bt^3 + ct^2 + dt + e$.

We first write $T(f) = f''' = 24at + 6b$. We then apply the matrix that sends the coordinates of f into the coordinates of $f'''$:
\[ \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 24 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ d \\ e \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 24a \\ 6b \end{bmatrix}. \]

(a) We can use this matrix to find the third derivative of $g(t) = t^4 - t^3 + t^2 - t + 1$ by multiplying the matrix by the coordinate vector of $g(t)$, which is $[1,\ -1,\ 1,\ -1,\ 1]$, to find
\[ \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 24 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 24 \\ -6 \end{bmatrix}. \]
This means that the third derivative of $g(t)$ is $g''' = 24t - 6$ and $T(g(t)) = [0,\ 0,\ 0,\ 24,\ -6]$. We find the third derivatives in parts (b), (c), and (d) in the same way.

(b) $[0,\ 0,\ 0,\ 24,\ 0]$ (c) $[0,\ 0,\ 0,\ -96,\ 18]$ (d) $[0,\ 0,\ 0,\ 24,\ 0]$
22. $T(f)(t) = f(-t)$ and $f(t) = at^4 + bt^3 + ct^2 + dt + e$.

We write
\[ T(f) = f(-t) = at^4 - bt^3 + ct^2 - dt + e. \]
We use the matrix that sends the coordinates of f into the coordinates of $f(-t)$:
\[ \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ d \\ e \end{bmatrix} = \begin{bmatrix} a \\ -b \\ c \\ -d \\ e \end{bmatrix}. \]

(a) We evaluate $g(-t)$ for the function
\[ g(t) = t^4 - t^3 + t^2 - t + 1 \]
by multiplying by the coordinate vector of $g(t)$, which is $[1,\ -1,\ 1,\ -1,\ 1]$:
\[ \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}. \]
This means that $g(-t) = t^4 + t^3 + t^2 + t + 1$ and $T(g(t)) = [1,\ 1,\ 1,\ 1,\ 1]$.

We leave parts (b), (c), (d) for the reader to carry out in the same way.

(b) $[1,\ 0,\ 2,\ 0,\ 4]$ (c) $[-4,\ -3,\ 0,\ 0,\ 0]$ (d) $[1,\ 0,\ -8,\ 0,\ 16]$
23. $T(f)(t) = f'(t) - 2f(t)$ and $f(t) = at^4 + bt^3 + ct^2 + dt + e$.

We first write
\[ T(f) = f' - 2f = -2at^4 + (4a - 2b)t^3 + (3b - 2c)t^2 + (2c - 2d)t + (d - 2e). \]
We then apply the matrix that sends the coordinates of f into the coordinates of $f' - 2f$:
\[ \begin{bmatrix} -2 & 0 & 0 & 0 & 0 \\ 4 & -2 & 0 & 0 & 0 \\ 0 & 3 & -2 & 0 & 0 \\ 0 & 0 & 2 & -2 & 0 \\ 0 & 0 & 0 & 1 & -2 \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ d \\ e \end{bmatrix} = \begin{bmatrix} -2a \\ 4a - 2b \\ 3b - 2c \\ 2c - 2d \\ d - 2e \end{bmatrix}. \]

(a) We evaluate $g'(t) - 2g(t)$ for the function $g(t) = t^4 - t^3 + t^2 - t + 1$ by multiplying the matrix by the coordinate vector of $g(t)$, which is $[1,\ -1,\ 1,\ -1,\ 1]$, to find
\[ \begin{bmatrix} -2 & 0 & 0 & 0 & 0 \\ 4 & -2 & 0 & 0 & 0 \\ 0 & 3 & -2 & 0 & 0 \\ 0 & 0 & 2 & -2 & 0 \\ 0 & 0 & 0 & 1 & -2 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \\ 1 \end{bmatrix} = \begin{bmatrix} -2 \\ 6 \\ -5 \\ 4 \\ -3 \end{bmatrix}. \]
This means that $g'(t) - 2g(t) = -2t^4 + 6t^3 - 5t^2 + 4t - 3$, so $T(g(t)) = [-2,\ 6,\ -5,\ 4,\ -3]$.

We leave parts (b), (c), and (d) for the reader to carry out in the same way.

(b) $[-2,\ 4,\ -4,\ 4,\ -8]$ (c) $[8,\ -22,\ 9,\ 0,\ 0]$ (d) $[-2,\ 4,\ 16,\ -16,\ -32]$
24. $T(f)(t) = f''(t) + f(t)$

We write
\[ T(f) = f'' + f = at^4 + bt^3 + (c + 12a)t^2 + (d + 6b)t + (e + 2c). \]
We use the matrix that sends the coordinates of f into the coordinates of $f'' + f$:
\[ \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 12 & 0 & 1 & 0 & 0 \\ 0 & 6 & 0 & 1 & 0 \\ 0 & 0 & 2 & 0 & 1 \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ d \\ e \end{bmatrix} = \begin{bmatrix} a \\ b \\ c + 12a \\ d + 6b \\ e + 2c \end{bmatrix}. \]

(a) We evaluate $g''(t) + g(t)$ for the function $g(t) = t^4 - t^3 + t^2 - t + 1$ by multiplying the matrix by the coordinate vector of $g(t)$, which is $[1,\ -1,\ 1,\ -1,\ 1]$, to find
\[ \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 12 & 0 & 1 & 0 & 0 \\ 0 & 6 & 0 & 1 & 0 \\ 0 & 0 & 2 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \\ 13 \\ -7 \\ 3 \end{bmatrix}. \]
This means that $g''(t) + g(t) = t^4 - t^3 + 13t^2 - 7t + 3$, and $T(g(t)) = [1,\ -1,\ 13,\ -7,\ 3]$.

(b) $[1,\ 0,\ 14,\ 0,\ 8]$ (c) $[-4,\ 3,\ -48,\ 18,\ 0]$ (d) $[1,\ 0,\ 4,\ 0,\ 0]$
Diagonalization
25. $\mathbf{A} = \begin{bmatrix} 3 & 2 \\ -2 & -3 \end{bmatrix}$

The matrix A has two independent eigenvectors, which are the columns of the matrix
\[ \mathbf{P} = \begin{bmatrix} \dfrac{-3-\sqrt{5}}{2} & \dfrac{-3+\sqrt{5}}{2} \\ 1 & 1 \end{bmatrix}. \]
Hence
\[ \mathbf{P}^{-1} = \frac{1}{\sqrt{5}}\begin{bmatrix} -1 & \dfrac{-3+\sqrt{5}}{2} \\ 1 & \dfrac{3+\sqrt{5}}{2} \end{bmatrix}. \]
The eigenvalues are the diagonal elements of the matrix
\[ \mathbf{P}^{-1}\mathbf{A}\mathbf{P} = \begin{bmatrix} \sqrt{5} & 0 \\ 0 & -\sqrt{5} \end{bmatrix}. \]
Note: P is not unique.
26. $\mathbf{A} = \begin{bmatrix} 1 & -1 \\ 1 & 3 \end{bmatrix}$

The matrix has a double eigenvalue of 2 with only one independent eigenvector, $[1,\ -1]$. Hence, the matrix cannot be diagonalized.
27. $\mathbf{A} = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$

The matrix has two independent eigenvectors, which are the columns of the matrix
\[ \mathbf{P} = \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix}, \qquad \mathbf{P}^{-1} = \frac{1}{2}\begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix}. \]
Hence, the eigenvalues will be the diagonal elements of the matrix
\[ \mathbf{P}^{-1}\mathbf{A}\mathbf{P} = \frac{1}{2}\begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & 3 \end{bmatrix}. \]
Note that P is not unique.
28. $\mathbf{A} = \begin{bmatrix} 1 & 3 \\ 1 & 3 \end{bmatrix}$, $\lambda = 0,\ 4$

The matrix P that diagonalizes the given matrix A is the matrix of eigenvectors of A:
\[ \mathbf{P} = \begin{bmatrix} -3 & 1 \\ 1 & 1 \end{bmatrix}, \qquad \mathbf{P}^{-1} = -\frac{1}{4}\begin{bmatrix} 1 & -1 \\ -1 & -3 \end{bmatrix}, \qquad \mathbf{P}^{-1}\mathbf{A}\mathbf{P} = \begin{bmatrix} 0 & 0 \\ 0 & 4 \end{bmatrix}. \]
Note that P is not unique.
29. $\mathbf{A} = \begin{bmatrix} 3 & 1 \\ -1 & 5 \end{bmatrix}$

This matrix cannot be diagonalized (double eigenvalue with a single eigenvector).
30. $\mathbf{A} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$

The matrix P that diagonalizes the given matrix A is the matrix of eigenvectors of A:
\[ \mathbf{P} = \begin{bmatrix} 1 & 1 \\ -i & i \end{bmatrix}, \qquad \mathbf{P}^{-1} = \frac{1}{2}\begin{bmatrix} 1 & i \\ 1 & -i \end{bmatrix}, \qquad \mathbf{P}^{-1}\mathbf{A}\mathbf{P} = \frac{1}{2}\begin{bmatrix} 1 & i \\ 1 & -i \end{bmatrix}\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ -i & i \end{bmatrix} = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix}. \]
31. $\mathbf{A} = \begin{bmatrix} 12 & -6 \\ 15 & -7 \end{bmatrix}$

The eigenvalues of A are λ1 = 2 and λ2 = 3, so we know that A is diagonalizable.

To find the eigenvectors:

For λ1 = 2:
\[ \begin{bmatrix} 10 & -6 & 0 \\ 15 & -9 & 0 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & -\frac{3}{5} & 0 \\ 0 & 0 & 0 \end{bmatrix} \Rightarrow \begin{cases} v_1 - \frac{3}{5}v_2 = 0 \\ v_2 \text{ free} \end{cases} \]
Thus an eigenvector for λ1 = 2 is $\begin{bmatrix} 3 \\ 5 \end{bmatrix}$.

For λ2 = 3:
\[ \begin{bmatrix} 9 & -6 & 0 \\ 15 & -10 & 0 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & -\frac{2}{3} & 0 \\ 0 & 0 & 0 \end{bmatrix} \Rightarrow \begin{cases} v_1 - \frac{2}{3}v_2 = 0 \\ v_2 \text{ free} \end{cases} \]
Thus an eigenvector for λ2 = 3 is $\begin{bmatrix} 2 \\ 3 \end{bmatrix}$.

The matrix $\mathbf{P} = \begin{bmatrix} 3 & 2 \\ 5 & 3 \end{bmatrix}$ diagonalizes A, and $\mathbf{P}^{-1}\mathbf{A}\mathbf{P} = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}$.
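The diagonalization in Problem 31 can be confirmed with exact integer arithmetic, since $\det\mathbf{P} = 9 - 10 = -1$ makes $\mathbf{P}^{-1}$ an integer matrix. A plain-Python sketch (not part of the original solution):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[12, -6], [15, -7]]
P = [[3, 2], [5, 3]]
P_inv = [[-3, 2], [5, -3]]     # (1/det P) [[3, -2], [-5, 3]] with det P = -1

D = matmul(matmul(P_inv, A), P)
print(D)                        # [[2, 0], [0, 3]]: the eigenvalues on the diagonal
```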
32. The matrix $\mathbf{A} = \begin{bmatrix} 3 & 1/2 \\ 0 & 3 \end{bmatrix}$ has repeated eigenvalue λ = 3, 3.

For λ = 3:
\[ \begin{bmatrix} 0 & 1/2 & 0 \\ 0 & 0 & 0 \end{bmatrix} \Rightarrow \begin{cases} v_2 = 0 \\ v_1 \text{ free} \end{cases} \]
Therefore, there is only one linearly independent eigenvector $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$.

Since dim E = 1 < 2, A is not diagonalizable.
33. The matrix $\mathbf{A} = \begin{bmatrix} 4 & -2 \\ 1/2 & 2 \end{bmatrix}$ has repeated eigenvalue λ = 3, 3.

For λ = 3:
\[ \begin{bmatrix} 1 & -2 & 0 \\ 1/2 & -1 & 0 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & -2 & 0 \\ 0 & 0 & 0 \end{bmatrix} \Rightarrow \begin{cases} v_1 - 2v_2 = 0 \\ v_2 \text{ free} \end{cases} \]
Therefore there is only one linearly independent eigenvector $\begin{bmatrix} 2 \\ 1 \end{bmatrix}$.

Since dim E = 1 < 2, A is not diagonalizable.
34. The matrix $\mathbf{A} = \begin{bmatrix} 1 & 4 \\ -4 & 1 \end{bmatrix}$ has eigenvalues $\lambda = 1 \pm 4i$.

For $\lambda = 1 \pm 4i$, $\mathbf{v} = \begin{bmatrix} 1 \\ \pm i \end{bmatrix}$, so
\[ \mathbf{P} = \begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix} \quad\text{and}\quad \mathbf{P}^{-1}\mathbf{A}\mathbf{P} = \begin{bmatrix} 1+4i & 0 \\ 0 & 1-4i \end{bmatrix}. \]
35. $\mathbf{A} = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$ has eigenvalues 0, 2, 1 and associated eigenvectors
\[ \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix},\quad \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix},\quad \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}. \]
The matrix P that diagonalizes the given matrix A is the matrix of three eigenvectors of A:
\[ \mathbf{P} = \begin{bmatrix} -1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}, \qquad \mathbf{P}^{-1} = \frac{1}{2}\begin{bmatrix} -1 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 2 & 0 \end{bmatrix}, \qquad \mathbf{P}^{-1}\mathbf{A}\mathbf{P} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \]
Note that P is not unique.
36. $\mathbf{A} = \begin{bmatrix} 0 & -1 & 1 \\ 0 & -1 & 1 \\ 0 & 0 & 0 \end{bmatrix}$ has eigenvalues −1, 0, 0 and three linearly independent eigenvectors to use as columns of P, a matrix that diagonalizes the given matrix A:
\[ \mathbf{P} = \begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \qquad \mathbf{P}^{-1} = \begin{bmatrix} 0 & 1 & -1 \\ 0 & 0 & 1 \\ 1 & -1 & 1 \end{bmatrix}, \qquad \mathbf{P}^{-1}\mathbf{A}\mathbf{P} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \]
We can confirm that the eigenvectors (columns of P) are linearly independent because the determinant of P is nonzero.
37. $\mathbf{A} = \begin{bmatrix} 4 & 2 & 0 \\ -1 & 1 & 0 \\ 0 & 1 & 2 \end{bmatrix}$

The matrix A has eigenvalues 2, 2, 3. The double eigenvalue 2 has only one independent eigenvector $[0,\ 0,\ 1]$, so the matrix cannot be diagonalized.
38. $\mathbf{A} = \begin{bmatrix} 4 & 1 & -1 \\ 2 & 5 & -2 \\ 1 & 1 & 2 \end{bmatrix}$

The matrix A has eigenvalues 3, 3, 5, with three linearly independent eigenvectors to use as columns of the matrix P that diagonalizes the given matrix A:
\[ \mathbf{P} = \begin{bmatrix} 1 & -1 & 1 \\ 0 & 1 & 2 \\ 1 & 0 & 1 \end{bmatrix}, \qquad \mathbf{P}^{-1} = \begin{bmatrix} -\frac{1}{2} & -\frac{1}{2} & \frac{3}{2} \\ -1 & 0 & 1 \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \end{bmatrix}, \qquad \mathbf{P}^{-1}\mathbf{A}\mathbf{P} = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{bmatrix}. \]
We know the eigenvectors (columns of P) are linearly independent because the determinant of P is nonzero. Note however that P is not unique.
39. $\mathbf{A} = \begin{bmatrix} 3 & -1 & 1 \\ 7 & -5 & 1 \\ 6 & -6 & 2 \end{bmatrix}$

The eigenvalue $\lambda_1 = -4$ has eigenvector $[0,\ 1,\ 1]$. However, the double eigenvalue $\lambda_2 = \lambda_3 = 2$ has only one linearly independent eigenvector $[1,\ 1,\ 0]$. This matrix cannot be diagonalized because it has only two linearly independent eigenvectors.
40. $\mathbf{A} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}$

We find eigenvalues 0, 1, 1. The eigenvector corresponding to 0 is $[1,\ 0,\ 0]$, and there is only one independent eigenvector $[0,\ 1,\ 0]$ corresponding to λ = 1. This matrix cannot be diagonalized because it has only two linearly independent eigenvectors.
41. $\mathbf{A} = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}$

The matrix has eigenvalues 0, 1, 1. The matrix A cannot be diagonalized because the double eigenvalue has only one linearly independent eigenvector.
42. $\mathbf{A} = \begin{bmatrix} 4 & 2 & 3 \\ 2 & 1 & 2 \\ -1 & 2 & 0 \end{bmatrix}$

We find eigenvalues 1, 5, −1, and use their linearly independent eigenvectors to form the matrix P that diagonalizes A:
\[ \mathbf{P} = \begin{bmatrix} -1 & 2 & 1 \\ 0 & 1 & 2 \\ 1 & 0 & -3 \end{bmatrix}, \qquad \mathbf{P}^{-1} = \begin{bmatrix} -\frac{1}{2} & 1 & \frac{1}{2} \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ -\frac{1}{6} & \frac{1}{3} & -\frac{1}{6} \end{bmatrix}, \qquad \mathbf{P}^{-1}\mathbf{A}\mathbf{P} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & -1 \end{bmatrix}. \]
43. $\mathbf{A} = \begin{bmatrix} 1 & 0 & 0 \\ -4 & 3 & 0 \\ -4 & 2 & 1 \end{bmatrix}$

The eigenvalues of A are λ1 = 1, 1 and λ2 = 3.

For λ1 = 1:
\[ \begin{bmatrix} 0 & 0 & 0 & 0 \\ -4 & 2 & 0 & 0 \\ -4 & 2 & 0 & 0 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & -\frac{1}{2} & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \Rightarrow \begin{cases} v_1 - \frac{1}{2}v_2 = 0 \\ v_2,\ v_3 \text{ free} \end{cases} \]
There are two (linearly independent) eigenvectors: $\begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$.

At this point, we know that A is diagonalizable.

For λ2 = 3:
\[ \begin{bmatrix} -2 & 0 & 0 & 0 \\ -4 & 0 & 0 & 0 \\ -4 & 2 & -2 & 0 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \Rightarrow \begin{cases} v_1 = 0 \\ v_2 - v_3 = 0 \\ v_3 \text{ free} \end{cases} \]
An eigenvector is $\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$.

The matrix $\mathbf{P} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}$ diagonalizes A, and $\mathbf{P}^{-1}\mathbf{A}\mathbf{P} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{bmatrix}$.
44. $\mathbf{A} = \begin{bmatrix} 3 & -2 & 0 \\ 1 & 0 & 0 \\ -1 & 1 & 3 \end{bmatrix}$

To find the eigenvalues:
\[ |\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} 3-\lambda & -2 & 0 \\ 1 & -\lambda & 0 \\ -1 & 1 & 3-\lambda \end{vmatrix} = (3-\lambda)\big[(3-\lambda)(-\lambda) + 2\big] = (3-\lambda)(\lambda^2 - 3\lambda + 2) = (3-\lambda)(\lambda - 2)(\lambda - 1) = 0 \]
\[ \Rightarrow \lambda = 1,\ 2,\ 3. \]

To find an eigenvector for λ1 = 1:
\[ \begin{bmatrix} 2 & -2 & 0 \\ 1 & -1 & 0 \\ -1 & 1 & 2 \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Rightarrow \begin{cases} 2v_1 - 2v_2 = 0 \\ v_1 - v_2 = 0 \\ -v_1 + v_2 + 2v_3 = 0 \end{cases} \Rightarrow \begin{cases} v_2 = v_1 \\ v_3 = 0 \\ v_1 \text{ free} \end{cases} \]
\[ \mathbf{v}_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \qquad E_1 = \operatorname{span}\left\{\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}\right\}, \qquad \dim E_1 = 1. \]

To find an eigenvector for λ2 = 2:
\[ \begin{bmatrix} 1 & -2 & 0 \\ 1 & -2 & 0 \\ -1 & 1 & 1 \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Rightarrow \begin{cases} v_1 - 2v_2 = 0 \\ -v_1 + v_2 + v_3 = 0 \end{cases} \Rightarrow \begin{cases} v_2 = \frac{1}{2}v_1 \\ v_3 = \frac{1}{2}v_1 \\ v_1 \text{ free} \end{cases} \]
\[ \mathbf{v}_2 = \begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}, \qquad E_2 = \operatorname{span}\left\{\begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}\right\}, \qquad \dim E_2 = 1. \]

To find an eigenvector for λ3 = 3:
\[ \begin{bmatrix} 0 & -2 & 0 \\ 1 & -3 & 0 \\ -1 & 1 & 0 \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Rightarrow \begin{cases} v_1 = v_2 = 0 \\ v_3 \text{ free} \end{cases} \]
\[ \mathbf{v}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \qquad E_3 = \operatorname{span}\left\{\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\right\}, \qquad \dim E_3 = 1. \]
\[ \mathbf{P} = \begin{bmatrix} 1 & 2 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}, \qquad \mathbf{P}^{-1}\mathbf{A}\mathbf{P} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix} \]
45. A = [0, 0, 2; −1, 1, 2; −1, 0, 3]
The eigenvalues are λ1 = 1, 1 and λ2 = 2.
To find eigenvectors for λ1 = 1:
[A − I | 0] = [−1, 0, 2, 0; −1, 0, 2, 0; −1, 0, 2, 0] → RREF [1, 0, −2, 0; 0, 0, 0, 0; 0, 0, 0, 0] ⇒ v1 − 2v3 = 0, with v2 and v3 free.
Two (linearly independent) eigenvectors for λ1 = 1 are (0, 1, 0) and (2, 0, 1).
At this point, we know that A is diagonalizable.
To find an eigenvector for λ2 = 2:
[A − 2I | 0] = [−2, 0, 2, 0; −1, −1, 2, 0; −1, 0, 1, 0] → RREF [1, 0, −1, 0; 0, 1, −1, 0; 0, 0, 0, 0] ⇒ v1 − v3 = 0, v2 − v3 = 0, with v3 free.
An eigenvector for λ2 = 2 is (1, 1, 1).
The matrix P = [0, 2, 1; 1, 0, 1; 0, 1, 1] diagonalizes A, and P⁻¹AP = [1, 0, 0; 0, 1, 0; 0, 0, 2].
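The diagonalizability claim can be confirmed by counting free variables: for the double eigenvalue λ = 1 the matrix A − I must have rank 1, so that the eigenspace has dimension 3 − 1 = 2. A small sketch using exact Fraction arithmetic (illustrative only; the `rank` helper is a plain Gauss–Jordan elimination written for this check):

```python
from fractions import Fraction

def rank(M):
    # row-reduce a copy of M with exact arithmetic and count the pivots
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[0, 0, 2], [-1, 1, 2], [-1, 0, 3]]
AmI  = [[A[i][j] - (1 if i == j else 0) for j in range(3)] for i in range(3)]
Am2I = [[A[i][j] - (2 if i == j else 0) for j in range(3)] for i in range(3)]
assert rank(AmI) == 1    # eigenspace for λ = 1 has dimension 3 − 1 = 2
assert rank(Am2I) == 2   # eigenspace for λ = 2 has dimension 1
```

Since the eigenspace dimensions sum to 3, A is diagonalizable, exactly as claimed above.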
46. A = [2, 1, −8, −1; 0, 4, 0, 0; 0, 0, 6, 0; 0, 0, 0, 4]
The eigenvalues are λ1 = 4, 4, λ2 = 2, and λ3 = 6.
To find eigenvectors for λ1 = 4:
[A − 4I | 0] = [−2, 1, −8, −1, 0; 0, 0, 0, 0, 0; 0, 0, 2, 0, 0; 0, 0, 0, 0, 0] → RREF [1, −1/2, 4, 1/2, 0; 0, 0, 1, 0, 0; 0, 0, 0, 0, 0; 0, 0, 0, 0, 0] ⇒ v1 − ½v2 + 4v3 + ½v4 = 0, v3 = 0, with v2 and v4 free.
Two (linearly independent) eigenvectors for λ1 = 4 are (1, 2, 0, 0) and (−1, 0, 0, 2).
At this point, we know that A is diagonalizable. To find an eigenvector for λ2 = 2:
[A − 2I | 0] = [0, 1, −8, −1, 0; 0, 2, 0, 0, 0; 0, 0, 4, 0, 0; 0, 0, 0, 2, 0] → RREF [0, 1, −8, −1, 0; 0, 0, 1, 1/8, 0; 0, 0, 0, 1, 0; 0, 0, 0, 0, 0] ⇒ v2 − 8v3 − v4 = 0, v3 + ⅛v4 = 0, v4 = 0 (so v2 = v3 = v4 = 0), with v1 free.
An eigenvector for λ2 = 2 is (1, 0, 0, 0).
To find an eigenvector for λ3 = 6:
[A − 6I | 0] = [−4, 1, −8, −1, 0; 0, −2, 0, 0, 0; 0, 0, 0, 0, 0; 0, 0, 0, −2, 0] → RREF [1, −1/4, 2, 1/4, 0; 0, 1, 0, 0, 0; 0, 0, 0, 1, 0; 0, 0, 0, 0, 0] ⇒ v1 − ¼v2 + 2v3 + ¼v4 = 0, v2 = 0, v4 = 0, with v3 free.
An eigenvector for λ3 = 6 is (−2, 0, 1, 0).
The matrix P = [1, −1, 1, −2; 2, 0, 0, 0; 0, 0, 0, 1; 0, 2, 0, 0] diagonalizes A, and P⁻¹AP = [4, 0, 0, 0; 0, 4, 0, 0; 0, 0, 2, 0; 0, 0, 0, 6].
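As in Problem 43, the result can be verified by checking AP = PD, which avoids computing P⁻¹. A short illustrative Python check (not part of the printed solution):

```python
def matmul(A, B):
    # multiply two matrices stored as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[2, 1, -8, -1], [0, 4, 0, 0], [0, 0, 6, 0], [0, 0, 0, 4]]
P = [[1, -1, 1, -2], [2, 0, 0, 0], [0, 0, 0, 1], [0, 2, 0, 0]]
D = [[4, 0, 0, 0], [0, 4, 0, 0], [0, 0, 2, 0], [0, 0, 0, 6]]

# each column of P is an eigenvector with the matching entry of D
assert matmul(A, P) == matmul(P, D)
```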
47. A = [4, 0, 4, 0; 0, 4, 0, 0; 0, 0, 8, 0; −1, −2, 1, 8]
The eigenvalues are λ1 = 4, 4, λ2 = 8, 8.
To find eigenvectors for λ1 = 4:
[A − 4I | 0] = [0, 0, 4, 0, 0; 0, 0, 0, 0, 0; 0, 0, 4, 0, 0; −1, −2, 1, 4, 0] → RREF [1, 2, −1, −4, 0; 0, 0, 1, 0, 0; 0, 0, 0, 0, 0; 0, 0, 0, 0, 0] ⇒ v1 + 2v2 − v3 − 4v4 = 0, v3 = 0, with v2 and v4 free.
Two (linearly independent) eigenvectors for λ1 = 4 are (−2, 1, 0, 0) and (4, 0, 0, 1).
To find eigenvectors for λ2 = 8:
[A − 8I | 0] = [−4, 0, 4, 0, 0; 0, −4, 0, 0, 0; 0, 0, 0, 0, 0; −1, −2, 1, 0, 0] → RREF [1, 2, −1, 0, 0; 0, 1, 0, 0, 0; 0, 0, 0, 0, 0; 0, 0, 0, 0, 0] ⇒ v1 + 2v2 − v3 = 0, v2 = 0, with v3 and v4 free.
Two (linearly independent) eigenvectors for λ2 = 8 are (1, 0, 1, 0) and (0, 0, 0, 1).
The matrix P = [−2, 4, 1, 0; 1, 0, 0, 0; 0, 0, 1, 0; 0, 1, 0, 1] diagonalizes A, and P⁻¹AP = [4, 0, 0, 0; 0, 4, 0, 0; 0, 0, 8, 0; 0, 0, 0, 8].
48. A = [2, 0, 1, 2; 0, 2, 0, 0; 0, 0, 6, 0; 0, 0, 1, 4]
The eigenvalues are λ1 = 2, 2, λ2 = 4, λ3 = 6.
To find eigenvectors for λ1 = 2:
[A − 2I | 0] = [0, 0, 1, 2, 0; 0, 0, 0, 0, 0; 0, 0, 4, 0, 0; 0, 0, 1, 2, 0] → RREF [0, 0, 1, 2, 0; 0, 0, 0, 1, 0; 0, 0, 0, 0, 0; 0, 0, 0, 0, 0] ⇒ v3 + 2v4 = 0, v4 = 0 (so v3 = v4 = 0), with v1 and v2 free.
Two (linearly independent) eigenvectors for λ1 = 2 are (1, 0, 0, 0) and (0, 1, 0, 0).
(Note: A is diagonalizable.)
To find an eigenvector for λ2 = 4:
[A − 4I | 0] = [−2, 0, 1, 2, 0; 0, −2, 0, 0, 0; 0, 0, 2, 0, 0; 0, 0, 1, 0, 0] → RREF [1, 0, −1/2, −1, 0; 0, 1, 0, 0, 0; 0, 0, 1, 0, 0; 0, 0, 0, 0, 0] ⇒ v1 − ½v3 − v4 = 0, v2 = 0, v3 = 0, with v4 free.
An eigenvector for λ2 = 4 is (1, 0, 0, 1).
To find an eigenvector for λ3 = 6:
[A − 6I | 0] = [−4, 0, 1, 2, 0; 0, −4, 0, 0, 0; 0, 0, 0, 0, 0; 0, 0, 1, −2, 0] → RREF [1, 0, −1/4, −1/2, 0; 0, 1, 0, 0, 0; 0, 0, 1, −2, 0; 0, 0, 0, 0, 0] ⇒ v1 − ¼v3 − ½v4 = 0, v2 = 0, v3 − 2v4 = 0, with v4 free.
An eigenvector for λ3 = 6 is (1, 0, 2, 1).
The matrix P = [1, 0, 1, 1; 0, 1, 0, 0; 0, 0, 0, 2; 0, 0, 1, 1] diagonalizes A, and P⁻¹AP = [2, 0, 0, 0; 0, 2, 0, 0; 0, 0, 4, 0; 0, 0, 0, 6].
Powers of a Matrix
49. (a) Informal Proof: If A = PDP⁻¹, then

Aᵏ = (PDP⁻¹)ᵏ = (PDP⁻¹)(PDP⁻¹)⋯(PDP⁻¹)   (k times)
   = PD(P⁻¹P)D(P⁻¹P)⋯(P⁻¹P)DP⁻¹
   = PDᵏP⁻¹,

because each interior product P⁻¹P collapses to I.
Alternatively, this proposition can be proved by induction. Let Sₖ be the statement Aᵏ = PDᵏP⁻¹.

S₁ is given: A = PDP⁻¹. Assume Sₘ (and show that it implies Sₘ₊₁):

Aᵐ⁺¹ = AᵐA = (PDᵐP⁻¹)(PDP⁻¹) = PDᵐ(P⁻¹P)DP⁻¹ = PDᵐ⁺¹P⁻¹.

Therefore Sₖ is true for all positive integers k.
(b) For A = [1, 1; 4, 1], as shown in Section 5.3 Example 2, the eigenvalues are 3 and −1, with eigenvectors (1, 2) and (1, −2). Hence P = [1, 1; 2, −2] and P⁻¹ = ¼[2, 1; 2, −1], so D = [3, 0; 0, −1] and
A⁵⁰ = PD⁵⁰P⁻¹ = [1, 1; 2, −2][3⁵⁰, 0; 0, (−1)⁵⁰] · ¼[2, 1; 2, −1]

= ¼ [2(3⁵⁰) + 2(−1)⁵⁰, 3⁵⁰ − (−1)⁵⁰; 4(3⁵⁰) − 4(−1)⁵⁰, 2(3⁵⁰) + 2(−1)⁵⁰]

= (3⁵⁰/4)[2, 1; 4, 2] + ((−1)⁵⁰/4)[2, −1; −4, 2].
(c) The statement that Dⁿ is diagonal follows from the fact that the product of diagonal matrices is diagonal (see Section 3.1, Problem 64).
(d) Yes: A⁻¹ = (PDP⁻¹)⁻¹ = (P⁻¹)⁻¹D⁻¹P⁻¹ = PD⁻¹P⁻¹.

Although it is correct, this procedure is not very useful for finding the inverse of A, because the formula itself requires finding the inverse of P.
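The closed-form power in part (b) can be tested against brute-force multiplication. A sketch (illustrative only, not part of the printed solution, using exact Fraction arithmetic for P⁻¹):

```python
from fractions import Fraction

def matmul(A, B):
    # multiply two matrices stored as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 1], [4, 1]]
P = [[1, 1], [2, -2]]
Pinv = [[Fraction(1, 2), Fraction(1, 4)],
        [Fraction(1, 2), Fraction(-1, 4)]]   # ¼·[2, 1; 2, −1]
D50 = [[3**50, 0], [0, (-1)**50]]

A50 = A
for _ in range(49):                          # brute-force A^50
    A50 = matmul(A50, A)

closed = matmul(P, matmul(D50, Pinv))        # P D⁵⁰ P⁻¹
assert A50 == closed
```

The diagonalized route costs two fixed-size matrix products plus fifty scalar powers, instead of forty-nine full matrix multiplications.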
Determinants and Eigenvalues
50. A is an n × n matrix with |A − λI| = (λ1 − λ)(λ2 − λ)⋯(λn − λ), where all the λi are distinct.
(a) By the Diagonalization Theorem, A is diagonalizable, so A is similar to

D = [λ1, 0, …, 0; 0, λ2, …, 0; ⋯; 0, 0, …, λn].

By Section 3.4, Problem 31, |A| = |D| = λ1λ2⋯λn.
(b) A is diagonalizable only if the sum of the dimensions of its eigenspaces is n (Diagonalization Theorem). Otherwise A would not have sufficiently many linearly independent eigenvectors to form a basis for Rⁿ. Although |A| might still equal λ1λ2⋯λn, this result is not guaranteed in that case.
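The determinant–eigenvalue relationship in part (a) can be illustrated with the 2 × 2 matrix from Problem 55, whose characteristic polynomial factors as (λ − 2)(λ − 3). A brief sketch (illustrative, not part of the printed solution):

```python
# characteristic polynomial: λ² − 5λ + 6 = (λ − 2)(λ − 3)
A = [[1, 2], [-1, 4]]

det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
trace_A = A[0][0] + A[1][1]

assert det_A == 2 * 3      # |A| equals the product of the eigenvalues
assert trace_A == 2 + 3    # trace equals the sum of the eigenvalues
```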
Constructing Counterexamples
51. A = [1, 1; 0, 1] is invertible because |A| = 1 ≠ 0, but A is not diagonalizable. That is, λ = 1, 1, but v = (1, 0) is the only linearly independent eigenvector.
52. A = [0, 0; 0, 1] has eigenvalues λ = 0 and 1, but |A| = 0. Thus A is diagonalizable but not invertible.
53. A = [1, 1; −1, −1] has eigenvalues λ = 0, 0, but v = (1, −1) is the only linearly independent eigenvector. Thus A is not diagonalizable, and |A| = 0, so A is also not invertible.
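The three counterexamples can be checked mechanically: invertibility is a determinant test, and a 2 × 2 matrix with a double eigenvalue λ is diagonalizable only if it already equals λI. A short illustrative sketch (not part of the printed solutions):

```python
# Problem 51: invertible but not diagonalizable
A = [[1, 1], [0, 1]]
assert A[0][0] * A[1][1] - A[0][1] * A[1][0] != 0      # |A| = 1, invertible
AmI = [[A[0][0] - 1, A[0][1]], [A[1][0], A[1][1] - 1]]
assert AmI != [[0, 0], [0, 0]]    # A ≠ 1·I ⇒ the λ = 1 eigenspace is 1-dim

# Problem 52: diagonal (hence diagonalizable) but not invertible
B = [[0, 0], [0, 1]]
assert B[0][0] * B[1][1] - B[0][1] * B[1][0] == 0      # |B| = 0

# Problem 53: neither diagonalizable nor invertible
C = [[1, 1], [-1, -1]]
assert C[0][0] * C[1][1] - C[0][1] * C[1][0] == 0      # |C| = 0
assert C != [[0, 0], [0, 0]]      # C ≠ 0·I ⇒ the λ = 0 eigenspace is 1-dim
```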
Computer Lab: Diagonalization
54. (a) The given matrix has an eigenvalue of 1 with multiplicity 4 and only one linearly independent eigenvector, (1, 0, 0, 0). Hence, it cannot be diagonalized.
(b) For [−2, 1, 1, 0, 0; 1, −2, 1, 0, 0; 1, 1, −2, 0, 0; 0, 0, 0, 1, 1; 0, 0, 0, 4, 1], λ = 0, −3, −3, 3, −1, and

P = [1, −1, −1, 0, 0; 1, 0, 1, 0, 0; 1, 1, 0, 0, 0; 0, 0, 0, 1, 1; 0, 0, 0, 2, −2].
(c) For [3, 0, 0, 0; 0, 1, 1, 0; 0, 1, 1, 0; 0, 0, 0, 5], λ = 3, 0, 2, 5, and

P = [1, 0, 0, 0; 0, 1, 1, 0; 0, −1, 1, 0; 0, 0, 0, 1].
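The stated P for part (b) can be verified column by column through AP = PD; an illustrative Python check, not part of the printed solution (the eigenvalue order 0, −3, −3, 3, −1 matches the columns of P):

```python
def matmul(A, B):
    # multiply two matrices stored as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[-2, 1, 1, 0, 0],
     [1, -2, 1, 0, 0],
     [1, 1, -2, 0, 0],
     [0, 0, 0, 1, 1],
     [0, 0, 0, 4, 1]]
P = [[1, -1, -1, 0, 0],
     [1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0],
     [0, 0, 0, 1, 1],
     [0, 0, 0, 2, -2]]
D = [[0, 0, 0, 0, 0],
     [0, -3, 0, 0, 0],
     [0, 0, -3, 0, 0],
     [0, 0, 0, 3, 0],
     [0, 0, 0, 0, -1]]

assert matmul(A, P) == matmul(P, D)
```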
Similar Matrices
55. (a) Proof is provided in Section 5.3, Problem 40.
(b) Because in part (a) we showed that similar matrices have the same eigenvalues, they have the same trace (λ1 + λ2) and the same determinant (λ1λ2).
(c) Consider A = [1, 2; −1, 4] and B = [2, 0; 0, 3].

Both have the same characteristic polynomial, λ² − 5λ + 6 = 0, so they are similar and have the same eigenvalues, 2 and 3. However, the eigenvectors of A are (2, 1) and (1, 1), and the eigenvectors of B are (1, 0) and (0, 1).
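The invariants in part (b) and the caveat in part (c) can both be illustrated numerically; a brief sketch (illustrative, not part of the printed solution):

```python
A = [[1, 2], [-1, 4]]
B = [[2, 0], [0, 3]]

def trace(M):
    return M[0][0] + M[1][1]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

assert trace(A) == trace(B) == 2 + 3   # same eigenvalue sum
assert det(A) == det(B) == 2 * 3       # same eigenvalue product

# but B's eigenvector (1, 0) is not an eigenvector of A:
# A(1, 0) = (1, -1), whose second component is nonzero
assert A[1][0] != 0
```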
How Similar Are They?
56. (a) Both [4, −2; 1, 1] and [−3, 10; −3, 8] have eigenvalues 2 and 3, so that they are similar:

[4, −2; 1, 1] ~ [2, 0; 0, 3] ~ [−3, 10; −3, 8]

(b) Use A ~ [2, 0; 0, 3] ~ B.
Computer Lab: Similarity Challenge
57. (a) We need to show that A is similar to B, where

A = [1, 2, −3; 2, 0, 1; 1, −3, 1] and B = [1, −19, 58; 1, 12, −27; 5, 15, −11].
Both matrices have the same characteristic polynomial, p(λ) = −λ³ + 2λ² − 3λ + 19. There is one real eigenvalue λ1 and a complex conjugate pair of eigenvalues λ2 and λ3. Because all three eigenvalues are distinct, there are three linearly independent eigenvectors, so A and B are similar to the same diagonal matrix

D = [λ1, 0, 0; 0, λ2, 0; 0, 0, λ3].
(b) A and B both have trace 2 and determinant of 19.
Orthogonal Matrices
58. (a) Premultiplying each side of Pᵀ = P⁻¹ by P yields PPᵀ = PP⁻¹ = I.
(b) For the matrix P = [pij], we write out Pᵀ:

P = [p11, p12, …, p1n; p21, p22, …, p2n; ⋯; pn1, pn2, …, pnn],   Pᵀ = [p11, p21, …, pn1; p12, p22, …, pn2; ⋯; p1n, p2n, …, pnn].
The condition that PPᵀ = I says that the ijth element aij of the product is

aij = Σ (k = 1 to n) pik pjk = δij = { 1 if i = j; 0 if i ≠ j }.

In other words, the ijth element of PPᵀ is the dot product of the ith and jth rows of P, which is 0 if the rows are different and 1 if the rows are the same.
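The row-by-row dot-product characterization can be illustrated with a rotation matrix, which is orthogonal for every angle. A short sketch (illustrative only; the angle 0.7 is an arbitrary choice):

```python
import math

t = 0.7  # any angle; rotation matrices satisfy P Pᵀ = I
P = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

for i in range(2):
    for j in range(2):
        dot = sum(P[i][k] * P[j][k] for k in range(2))   # row i · row j
        # δ_ij: 1 on the diagonal, 0 off the diagonal
        assert math.isclose(dot, 1.0 if i == j else 0.0, abs_tol=1e-12)
```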
Orthogonally Diagonalizable Matrices
59. Computing in the usual manner for the (symmetric) matrix A = [4, 2; 2, 7], we find the eigenvalues 3 and 8 and the orthogonal eigenvectors (−2, 1) and (1, 2).
In order that these vectors form columns of an orthogonal matrix, we must first normalize them, yielding the eigenvectors of length one:

v1 = (1/√5)(−2, 1),   v2 = (1/√5)(1, 2).
We can now form the orthogonal diagonalizing matrix

P = (1/√5)[−2, 1; 1, 2],   P⁻¹ = Pᵀ = (1/√5)[−2, 1; 1, 2],
from which we can diagonalize A from the formula

P⁻¹AP = PᵀAP = (1/5)[−2, 1; 1, 2][4, 2; 2, 7][−2, 1; 1, 2] = [3, 0; 0, 8].
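The orthogonal diagonalization can be verified in floating point, using PᵀAP in place of P⁻¹AP. An illustrative sketch (not part of the printed solution):

```python
import math

s = 1 / math.sqrt(5)
A = [[4, 2], [2, 7]]
P = [[-2 * s, s], [s, 2 * s]]   # normalized eigenvectors as columns

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

PT = [[P[j][i] for j in range(2)] for i in range(2)]
D = matmul(PT, matmul(A, P))    # Pᵀ A P, valid since P⁻¹ = Pᵀ here

assert math.isclose(D[0][0], 3.0) and math.isclose(D[1][1], 8.0)
assert abs(D[0][1]) < 1e-9 and abs(D[1][0]) < 1e-9
```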
When Diagonalization Fails
60. Let A = [a, b; c, d] with double eigenvalue λ and only one linearly independent eigenvector v = (v1, v2), where v2 ≠ 0.
Let Q = [v1, 1; v2, 0]. Then

AQ = [a, b; c, d][v1, 1; v2, 0] = [av1 + bv2, a; cv1 + dv2, c] = [λv1, a; λv2, c],

because av1 + bv2 = λv1 and cv1 + dv2 = λv2, by the fact that Av = λv.
Q⁻¹ = (−1/v2)[0, −1; −v2, v1], so

Q⁻¹AQ = (−1/v2)[0, −1; −v2, v1][λv1, a; λv2, c] = (−1/v2)[−λv2, −c; 0, cv1 − av2] = [λ, c/v2; 0, a − cv1/v2] ~ A
By Section 5.3, Problem 40(a), Q⁻¹AQ ~ A, so they have the same eigenvalues. By Section 5.3, Problems 43–46, a triangular matrix has its eigenvalues on the main diagonal, so the lower-right element must also be λ. Hence λ = a − cv1/v2.
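The formula λ = a − cv1/v2 can be spot-checked with the matrix of Problem 61, which has the double eigenvalue 4 and the single eigenvector (1, −2). A brief sketch (illustrative, not part of the printed solution):

```python
from fractions import Fraction

a, b, c, d = 2, -1, 4, 6     # the matrix of Problem 61
v1, v2 = 1, -2               # its single linearly independent eigenvector
lam = 4                      # the double eigenvalue

# confirm Av = λv component by component
assert a * v1 + b * v2 == lam * v1
assert c * v1 + d * v2 == lam * v2

# the result derived above: λ = a − c·v1/v2
assert a - Fraction(c * v1, v2) == lam
```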
Triangularizing
61. A = [2, −1; 4, 6]

The characteristic polynomial of A is λ² − 8λ + 16 = 0, so λ = 4, 4.
To find eigenvectors for λ = 4:
[−2, −1; 4, 2][v1; v2] = [0; 0] ⇒ −2v1 − v2 = 0 ⇒ v2 = −2v1, with v1 free.

v = (1, −2) is the only linearly independent eigenvector.
Let Q = [1, 1; −2, 0]. Then

AQ = [2, −1; 4, 6][1, 1; −2, 0] = [4, 2; −8, 4]

Q⁻¹ = ½[0, −1; 2, 1]

Q⁻¹AQ = ½[0, −1; 2, 1][4, 2; −8, 4] = ½[8, −4; 0, 8] = [4, −2; 0, 4], the triangularization of A.
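The triangularization can be confirmed with exact arithmetic; an illustrative sketch using the Q and Q⁻¹ found above (not part of the printed solution):

```python
from fractions import Fraction

def matmul(X, Y):
    # multiply two matrices stored as lists of rows
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

A = [[2, -1], [4, 6]]
Q = [[1, 1], [-2, 0]]
Qinv = [[Fraction(0), Fraction(-1, 2)],
        [Fraction(1), Fraction(1, 2)]]       # ½·[0, −1; 2, 1]

T = matmul(Qinv, matmul(A, Q))
assert T == [[4, -2], [0, 4]]   # upper triangular with λ = 4 on the diagonal
```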
62. A = [1, 1; −1, −1]

The characteristic polynomial is λ² = 0, so λ = 0, 0. The only linearly independent eigenvector is (1, −1).

Let Q = [1, 1; −1, 0]. Then

AQ = [1, 1; −1, −1][1, 1; −1, 0] = [0, 1; 0, −1]

Q⁻¹ = [0, −1; 1, 1]

Q⁻¹AQ = [0, −1; 1, 1][0, 1; 0, −1] = [0, 1; 0, 0], the triangularization of A.
Suggested Journal Entry I
63. Student Project
Suggested Journal Entry II
64. Student Project
Suggested Journal Entry III
65. Student Project