
Solutions Manual

LINEAR SYSTEM THEORY, 2/E

Wilson J. Rugh

Department of Electrical and Computer Engineering

Johns Hopkins University


PREFACE

With some lingering ambivalence about the merits of the undertaking, but with a bit more dedication than the first time around, I prepared this Solutions Manual for the second edition of Linear System Theory. Roughly 40% of the exercises are addressed, including all exercises in Chapter 1 and all others used in developments in the text. This coverage complements the 60% of those in an unscientific survey who wanted a solutions manual, and perhaps does not overly upset the 40% who voted no. (The main contention between the two groups involved the inevitable appearance of pirated student copies and the view that an available solution spoils the exercise.)

I expect that a number of my solutions could be improved, and that some could be improved using only techniques from the text. Also the press of time and my flagging enthusiasm for text processing impeded the crafting of economical solutions; some solutions may contain too many steps or too many words. However I hope that the error rate in these pages is low and that the value of this manual is greater than the price paid.

Please send comments and corrections to the author at [email protected] or ECE Department, Johns Hopkins University, Baltimore, MD 21218 USA.


CHAPTER 1

Solution 1.1 (a) For k = 2, (A + B)² = A² + AB + BA + B². If AB = BA, then (A + B)² = A² + 2AB + B². In general if AB = BA, then the k-fold product (A + B)^k can be written as a sum of terms of the form A^j B^{k−j}, j = 0, …, k, and the number of terms that collapse to A^j B^{k−j} is given by the binomial coefficient C(k, j). Therefore AB = BA implies

    (A + B)^k = Σ_{j=0}^{k} C(k, j) A^j B^{k−j}

(b) Write

    det [λI − A(t)] = λ^n + a_{n−1}(t)λ^{n−1} + … + a1(t)λ + a0(t)

where invertibility of A(t) implies a0(t) ≠ 0. The Cayley-Hamilton theorem implies

    A^n(t) + a_{n−1}(t)A^{n−1}(t) + … + a0(t)I = 0

for all t. Multiplying through by A⁻¹(t) yields

    A⁻¹(t) = [ −a1(t)I − a2(t)A(t) − … − a_{n−1}(t)A^{n−2}(t) − A^{n−1}(t) ] / a0(t)

for all t. Since a0(t) = det [−A(t)], |a0(t)| = |det A(t)|. Assume ε > 0 is such that |det A(t)| ≥ ε for all t. Since ‖A(t)‖ ≤ α we have |a_ij(t)| ≤ α, and thus there exists a γ such that |a_j(t)| ≤ γ for all t. Then, for all t,

    ‖A⁻¹(t)‖ = ‖a1(t)I + a2(t)A(t) + … + A^{n−1}(t)‖ / |det A(t)|
             ≤ (γ + γα + … + γα^{n−2} + α^{n−1}) / ε ≜ β
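The Cayley-Hamilton inverse formula in part (b) is easy to exercise numerically. Below is a minimal sketch (not part of the original solution): `numpy.poly` supplies the characteristic-polynomial coefficients, and the formula is compared against `numpy.linalg.inv`.

```python
import numpy as np

# Sketch of Solution 1.1(b): with det(lambda*I - A) = lambda^n + a_{n-1} lambda^{n-1}
# + ... + a_0, Cayley-Hamilton gives
#   A^{-1} = -(a_1 I + a_2 A + ... + a_{n-1} A^{n-2} + A^{n-1}) / a_0.
def ch_inverse(A):
    n = A.shape[0]
    c = np.poly(A)            # c = [1, a_{n-1}, ..., a_1, a_0]
    a0 = c[-1]
    acc = np.zeros_like(A, dtype=float)
    P = np.eye(n)             # P tracks A^{j-1}
    for j in range(1, n):     # coefficient a_j multiplies A^{j-1}
        acc += c[n - j] * P
        P = P @ A
    acc += P                  # leading term A^{n-1}
    return -acc / a0

A = np.array([[2.0, 1.0], [0.5, 3.0]])
print(np.allclose(ch_inverse(A), np.linalg.inv(A)))
```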

Solution 1.2 (a) If λ is an eigenvalue of A with eigenvector p, then recursive use of Ap = λp shows that λ^k is an eigenvalue of A^k. However to show multiplicities are preserved is more difficult, and apparently requires Jordan form, or at least results on similarity to upper triangular form.
(b) If λ is an eigenvalue of invertible A, then λ is nonzero and Ap = λp implies A⁻¹p = (1/λ)p. As in (a), addressing preservation of multiplicities is more difficult.
(c) A^T has eigenvalues λ1, …, λn since det (λI − A^T) = det (λI − A)^T = det (λI − A).
(d) A^H has eigenvalues λ̄1, …, λ̄n, using (c) and the fact that the determinant (a sum of products) of a conjugate is the conjugate of the determinant. That is,

    det (λI − A^H) = det [ (λ̄I − A)^H ] = conj det (λ̄I − A)

(e) αA has eigenvalues αλ1, …, αλn since Ap = λp implies (αA)p = (αλ)p.
(f) Eigenvalues of A^T A are not nicely related to eigenvalues of A. Consider the example

    A = [ 0  α ]        A^T A = [ 0  0  ]
        [ 0  0 ] ,              [ 0  α² ]

where the eigenvalues of A are both zero, and the eigenvalues of A^T A are 0, α². (If A is symmetric, then (a) applies.)

Solution 1.3 (a) If the eigenvalues of A are all zero, then det (λI − A) = λ^n and the Cayley-Hamilton theorem shows that A is nilpotent. On the other hand if one eigenvalue, say λ1, is nonzero, let p be a corresponding eigenvector. Then A^k p = λ1^k p ≠ 0 for all k ≥ 0, and A cannot be nilpotent.
(b) Suppose Q is real and symmetric, and λ is an eigenvalue of Q. Then λ̄ also is an eigenvalue. From the eigenvalue/eigenvector equation Qp = λp we get p^H Q p = λ p^H p. Also Q p̄ = λ̄ p̄, and transposing gives p^H Q p = λ̄ p^H p. Subtracting the two results gives (λ − λ̄) p^H p = 0. Since p ≠ 0, this gives λ = λ̄, that is, λ is real.
(c) If A is upper triangular, then λI − A is upper triangular. Recursive Laplace expansion of the determinant about the first column gives

    det (λI − A) = (λ − a11) ⋯ (λ − ann)

which implies the eigenvalues of A are the diagonal entries a11, …, ann.

Solution 1.4 (a)

    A = [ 0  0 ]   implies   A^T A = [ 1  0 ]   implies   ‖A‖ = 1
        [ 1  0 ]                     [ 0  0 ]

(b)

    A = [ 3  1 ]   implies   A^T A = [ 10   6 ]
        [ 1  3 ]                     [  6  10 ]

Then

    det (λI − A^T A) = (λ − 16)(λ − 4)

which implies ‖A‖ = 4.
(c)

    A = [  0    1+i ]   implies   A^H A = [ (1+i)(1−i)      0      ]  =  [ 2  0 ]
        [ 1−i    0  ]                     [     0      (1−i)(1+i)  ]     [ 0  2 ]

This gives ‖A‖ = √2.
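The three norms can be checked directly, since the spectral norm is the largest singular value. A minimal sketch (the matrices are as in Solution 1.4):

```python
import numpy as np

# Spectral norm = largest singular value = sqrt(lambda_max(A^H A)).
A_a = np.array([[0.0, 0.0], [1.0, 0.0]])
A_b = np.array([[3.0, 1.0], [1.0, 3.0]])
A_c = np.array([[0.0, 1.0 + 1.0j], [1.0 - 1.0j, 0.0]])

for A, expected in [(A_a, 1.0), (A_b, 4.0), (A_c, np.sqrt(2.0))]:
    print(np.isclose(np.linalg.norm(A, 2), expected))
```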

Solution 1.5 Let

    A = [ 1/α   α  ]
        [  0   1/α ] ,   α > 1

Then both eigenvalues are 1/α and, using an inequality on text page 7,

    ‖A‖ ≥ max_{1≤i,j≤2} |a_ij| = α


Solution 1.6 By definition of the spectral norm, for any α ≠ 0 we can write

    ‖A‖ = max_{‖x‖=1} ‖Ax‖ = max_{‖x‖=1} ‖Ax‖/‖x‖ = max_{‖x‖=1} ‖A(αx)‖/‖αx‖ = max_{‖x‖=1/|α|} ‖Ax‖/‖x‖

Since this holds for any α ≠ 0,

    ‖A‖ = max_{x≠0} ‖Ax‖/‖x‖

Therefore

    ‖A‖ ≥ ‖Ax‖/‖x‖

for any x ≠ 0, which gives

    ‖Ax‖ ≤ ‖A‖ ‖x‖

Solution 1.7 By definition of the spectral norm,

    ‖AB‖ = max_{‖x‖=1} ‖(AB)x‖ = max_{‖x‖=1} ‖A(Bx)‖
         ≤ max_{‖x‖=1} ‖A‖ ‖Bx‖ ,   by Exercise 1.6
         = ‖A‖ max_{‖x‖=1} ‖Bx‖ = ‖A‖ ‖B‖

If A is invertible, then A A⁻¹ = I and the obvious ‖I‖ = 1 give

    1 = ‖A A⁻¹‖ ≤ ‖A‖ ‖A⁻¹‖

Therefore

    ‖A⁻¹‖ ≥ 1/‖A‖

Solution 1.8 We use the following easily verified facts about partitioned vectors:

    ‖[x1; x2]‖ ≥ ‖x1‖, ‖x2‖ ;    ‖[x1; 0]‖ = ‖x1‖ ,   ‖[0; x2]‖ = ‖x2‖

Write

    Ax = [ A11  A12 ] [ x1 ]  =  [ A11 x1 + A12 x2 ]
         [ A21  A22 ] [ x2 ]     [ A21 x1 + A22 x2 ]

Then for A11, for example,

    ‖A‖ = max_{‖x‖=1} ‖Ax‖ ≥ max_{‖x‖=1} ‖A11 x1 + A12 x2‖ ≥ max_{‖x1‖=1} ‖A11 x1‖ = ‖A11‖

The other partitions are handled similarly. The last part is easy from the definition of induced norm. For example if

    A = [ 0  A12 ]
        [ 0   0  ]

then partitioning the vector x similarly we see that

    max_{‖x‖=1} ‖Ax‖ = max_{‖x2‖=1} ‖A12 x2‖ = ‖A12‖

Solution 1.9 By the Cauchy-Schwarz inequality, and ‖x^T‖ = ‖x‖,

    |x^T A x| = |(A^T x)^T x| ≤ ‖A^T x‖ ‖x‖ ≤ ‖A^T‖ ‖x‖² = ‖A‖ ‖x‖²

This immediately gives

    x^T A x ≥ −‖A‖ ‖x‖²

If λ is an eigenvalue of A and x is a corresponding unity-norm eigenvector, then

    |λ| = |λ| ‖x‖ = ‖λx‖ = ‖Ax‖ ≤ ‖A‖ ‖x‖ = ‖A‖

Solution 1.10 Since Q = Q^T, Q^T Q = Q², and the eigenvalues of Q² are λ1², …, λn². Therefore

    ‖Q‖ = √(λmax(Q²)) = max_{1≤i≤n} |λi|

For the other equality Cauchy-Schwarz gives

    |x^T Q x| ≤ ‖x‖ ‖Qx‖ ≤ ‖Q‖ ‖x‖² = [ max_{1≤i≤n} |λi| ] x^T x

Therefore |x^T Q x| ≤ ‖Q‖ for all unity-norm x. Choosing xa as a unity-norm eigenvector of Q corresponding to the eigenvalue that yields max_{1≤i≤n} |λi| gives

    |xa^T Q xa| = [ max_{1≤i≤n} |λi| ] xa^T xa = max_{1≤i≤n} |λi|

Thus max_{‖x‖=1} |x^T Q x| = ‖Q‖.

Solution 1.11 Since ‖Ax‖ = √((Ax)^T(Ax)) = √(x^T A^T A x),

    ‖A‖ = max_{‖x‖=1} √(x^T A^T A x) = [ max_{‖x‖=1} x^T A^T A x ]^{1/2}

The Rayleigh-Ritz inequality gives, for all unity-norm x,

    x^T A^T A x ≤ λmax(A^T A) x^T x = λmax(A^T A)

and since A^T A ≥ 0, λmax(A^T A) ≥ 0. Choosing xa to be a unity-norm eigenvector corresponding to λmax(A^T A) gives

    xa^T A^T A xa = λmax(A^T A)

Thus

    max_{‖x‖=1} x^T A^T A x = λmax(A^T A)

so we have ‖A‖ = √(λmax(A^T A)).

Solution 1.12 Since A^T A > 0 we have λi(A^T A) > 0, i = 1, …, n, and (A^T A)⁻¹ > 0. Then by Exercise 1.11,

    ‖A⁻¹‖² = λmax[(A^T A)⁻¹] = 1 / λmin(A^T A)

Since det (A^T A) = Π_{i=1}^{n} λi(A^T A),

    1 / λmin(A^T A) = [ Π_{λi ≠ λmin} λi(A^T A) ] / det (A^T A) ≤ [λmax(A^T A)]^{n−1} / (det A)² = ‖A‖^{2(n−1)} / (det A)²

Therefore

    ‖A⁻¹‖ ≤ ‖A‖^{n−1} / |det A|
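This bound can be spot-checked on random invertible matrices. A minimal sketch (the small multiplicative slack only guards floating-point rounding):

```python
import numpy as np

# Spot-check of ||A^{-1}|| <= ||A||^{n-1} / |det A| from Solution 1.12.
rng = np.random.default_rng(0)
n = 4
for _ in range(100):
    A = rng.standard_normal((n, n))
    if abs(np.linalg.det(A)) < 1e-6:
        continue  # skip nearly singular samples
    lhs = np.linalg.norm(np.linalg.inv(A), 2)
    rhs = np.linalg.norm(A, 2) ** (n - 1) / abs(np.linalg.det(A))
    assert lhs <= rhs * (1 + 1e-12)
print("bound verified on random samples")
```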

Solution 1.13 Assume A ≠ 0, for the zero case is trivial. For any unity-norm x and y,

    |y^T A x| ≤ ‖y‖ ‖Ax‖ ≤ ‖y‖ ‖A‖ ‖x‖ = ‖A‖

Therefore

    max_{‖x‖=‖y‖=1} |y^T A x| ≤ ‖A‖

Now let unity-norm xa be such that ‖A xa‖ = ‖A‖, and let

    ya = A xa / ‖A‖

Then ‖ya‖ = 1 and

    ya^T A xa = xa^T A^T A xa / ‖A‖ = ‖A xa‖² / ‖A‖ = ‖A‖² / ‖A‖ = ‖A‖

Therefore

    max_{‖x‖=‖y‖=1} |y^T A x| = ‖A‖

Solution 1.14 The coefficients of the characteristic polynomial of a matrix are continuous functions of the matrix entries, since the determinant is a continuous function of the entries (a sum of products). Also the roots of a polynomial are continuous functions of the coefficients. (A proof is given in Appendix A.4 of E.D. Sontag, Mathematical Control Theory, Springer-Verlag, New York, 1990.) Since a composition of continuous functions is a continuous function, the pointwise-in-t eigenvalues of A(t) are continuous in t.
This argument gives that the (nonnegative) eigenvalues of A^T(t)A(t) are continuous in t. Then the maximum at each t is continuous in t; plot two eigenvalues and consider their pointwise maximum to see this. Finally since square root is a continuous function of nonnegative arguments, we conclude ‖A(t)‖ is continuous in t.
However for continuously-differentiable A(t), ‖A(t)‖ need not be continuously differentiable in t. Consider the example

    A(t) = [ t   0 ]          ‖A(t)‖ = { t ,   0 ≤ t ≤ 1
            [ 0  t² ] ,                { t² ,  1 < t < ∞

Clearly the time derivative of ‖A(t)‖ is discontinuous at t = 1. (This overlaps Exercise 1.18 a bit.)
Also the eigenvalues of continuously-differentiable A(t) are not necessarily continuously differentiable; consider

    A(t) = [  0  1 ]
           [ −1  t ]

An easy computation gives the eigenvalues

    λ(t) = t/2 ± √(t² − 4)/2

Thus

    λ̇(t) = 1/2 ± t / (2√(t² − 4))

and this function is not continuous at t = 2.

Solution 1.15 Clearly Q is positive definite, and by Rayleigh-Ritz if x ≠ 0,

    0 < λmin(Q) x^T x ≤ x^T Q x ≤ λmax(Q) x^T x

Choosing x as an eigenvector corresponding to λmin(Q) (respectively, λmax(Q)) shows that these inequalities are tight. Thus

    ε1 ≤ λmin(Q) ,   λmax(Q) ≤ ε2

Therefore

    λmin(Q⁻¹) = 1 / λmax(Q) ≥ 1/ε2
    λmax(Q⁻¹) = 1 / λmin(Q) ≤ 1/ε1

Thus Rayleigh-Ritz for the positive definite matrix Q⁻¹ gives

    (1/ε2) I ≤ Q⁻¹ ≤ (1/ε1) I

Solution 1.16 If W(t) − εI is symmetric and positive semidefinite for all t, then for any x,

    x^T W(t) x ≥ ε x^T x

for all t. At any value of t, let xt be an eigenvector corresponding to an eigenvalue (necessarily real) λt of W(t). Then

    xt^T W(t) xt = λt xt^T xt ≥ ε xt^T xt

That is, λt ≥ ε. This holds for any eigenvalue of W(t) and every t. Since the determinant is the product of the eigenvalues,

    det W(t) ≥ ε^n > 0

for any t.


Solution 1.17 Using the product rule to differentiate A(t)A⁻¹(t) = I yields

    Ȧ(t)A⁻¹(t) + A(t) (d/dt) A⁻¹(t) = 0

which gives

    (d/dt) A⁻¹(t) = −A⁻¹(t) Ȧ(t) A⁻¹(t)
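A central finite difference confirms the formula. The example matrix below is ours, chosen only because it stays invertible for all t; it is not from the text.

```python
import numpy as np

# Finite-difference check of d/dt A^{-1}(t) = -A^{-1} Adot A^{-1} (Solution 1.17).
def A(t):
    return np.array([[2.0 + np.sin(t), 1.0], [0.0, 1.0]])   # det = 2 + sin t >= 1

def Adot(t):
    return np.array([[np.cos(t), 0.0], [0.0, 0.0]])

t, h = 0.7, 1e-6
numeric = (np.linalg.inv(A(t + h)) - np.linalg.inv(A(t - h))) / (2 * h)
formula = -np.linalg.inv(A(t)) @ Adot(t) @ np.linalg.inv(A(t))
print(np.allclose(numeric, formula, atol=1e-8))
```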

Solution 1.18 Assuming differentiability of both x(t) and ‖x(t)‖, and using the chain rule for scalar functions,

    (d/dt) ‖x(t)‖² = 2 ‖x(t)‖ (d/dt) ‖x(t)‖

Also we can write, using the product rule and the Cauchy-Schwarz inequality,

    (d/dt) ‖x(t)‖² = (d/dt) x^T(t)x(t) = ẋ^T(t)x(t) + x^T(t)ẋ(t) = 2 x^T(t)ẋ(t) ≤ 2 ‖x(t)‖ ‖ẋ(t)‖

For t such that ‖x(t)‖ ≠ 0, comparing these expressions gives

    (d/dt) ‖x(t)‖ ≤ ‖ẋ(t)‖

If ‖x(t)‖ = 0 on a closed interval, then on that interval the result is trivial. If ‖x(t)‖ = 0 at an isolated point, then continuity arguments show that the result is valid. Note that for the differentiable function x(t) = t, ‖x(t)‖ = |t| is not differentiable at t = 0. Thus we must make the assumption that ‖x(t)‖ is differentiable. (While this inequality is not explicitly used in the book, the added differentiability hypothesis explains why we always differentiate ‖x(t)‖² = x^T(t)x(t) instead of ‖x(t)‖.)

Solution 1.19 To prove the contrapositive claim, suppose for each i, j there is a constant β_ij such that

    ∫_0^t |f_ij(σ)| dσ ≤ β_ij ,   t ≥ 0

Then by the inequality on page 7, noting that max_{i,j} |f_ij(t)| is a continuous function of t and taking the pointwise-in-t maximum,

    ∫_0^t ‖F(σ)‖ dσ ≤ ∫_0^t √(mn) max_{i,j} |f_ij(σ)| dσ
                    ≤ √(mn) ∫_0^t Σ_{i=1}^{m} Σ_{j=1}^{n} |f_ij(σ)| dσ
                    ≤ √(mn) Σ_{i=1}^{m} Σ_{j=1}^{n} β_ij < ∞ ,   t ≥ 0

The argument for Σ_{j=0}^{k} ‖F(j)‖ is similar.


Solution 1.20 If λ(t), p(t) are a pointwise-in-t eigenvalue/eigenvector pair for A⁻¹(t), then

    ‖A⁻¹(t) p(t)‖ = ‖λ(t) p(t)‖ = |λ(t)| ‖p(t)‖

Therefore, for every t,

    |λ(t)| = ‖A⁻¹(t) p(t)‖ / ‖p(t)‖ ≤ ‖A⁻¹(t)‖ ‖p(t)‖ / ‖p(t)‖ ≤ α

Since this holds for any eigenvalue/eigenvector pair,

    |det A(t)| = 1 / |det A⁻¹(t)| = 1 / |λ1(t) ⋯ λn(t)| ≥ 1/α^n > 0

for all t.

Solution 1.21 Using Exercise 1.10 and the assumptions Q(t) ≥ 0, tb ≥ ta,

    ∫_{ta}^{tb} ‖Q(σ)‖ dσ = ∫_{ta}^{tb} λmax[Q(σ)] dσ ≤ ∫_{ta}^{tb} tr [Q(σ)] dσ = tr ∫_{ta}^{tb} Q(σ) dσ

Note that

    ∫_{ta}^{tb} Q(σ) dσ ≥ 0

since for every x

    x^T [ ∫_{ta}^{tb} Q(σ) dσ ] x = ∫_{ta}^{tb} x^T Q(σ) x dσ ≥ 0

Thus, using a property of the trace on page 8 of Chapter 1, we have

    ∫_{ta}^{tb} ‖Q(σ)‖ dσ ≤ tr ∫_{ta}^{tb} Q(σ) dσ ≤ n ‖ ∫_{ta}^{tb} Q(σ) dσ ‖

Finally,

    ∫_{ta}^{tb} Q(σ) dσ ≤ εI

implies, using Rayleigh-Ritz,

    ‖ ∫_{ta}^{tb} Q(σ) dσ ‖ ≤ ε

Therefore

    ∫_{ta}^{tb} ‖Q(σ)‖ dσ ≤ nε
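The middle step rests on λmax(M) ≤ tr M ≤ n·λmax(M) for a positive semidefinite M. A minimal sketch, with a random M = B^T B standing in for the integral of Q:

```python
import numpy as np

# For symmetric M >= 0: lambda_max(M) <= tr(M) <= n * lambda_max(M)  (Solution 1.21).
rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
M = B.T @ B                      # positive semidefinite
lam_max = np.linalg.norm(M, 2)   # for M = M^T >= 0 this equals lambda_max(M)
assert lam_max <= np.trace(M) + 1e-12
assert np.trace(M) <= M.shape[0] * lam_max + 1e-12
print("trace inequalities hold")
```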


CHAPTER 2

Solution 2.3 The nominal solution for u(t) = sin 3t is y(t) = sin t. Let x1(t) = y(t), x2(t) = ẏ(t) to write the state equation

    ẋ(t) = [ x2(t)                       ]
           [ −(4/3)x1³(t) − (1/3)u(t)    ]

Computing the Jacobians and evaluating along the nominal gives the linearized state equation

    ẋδ(t) = [    0      1 ] xδ(t) + [   0  ] uδ(t)
            [ −4 sin²t  0 ]         [ −1/3 ]

    yδ(t) = [ 1  0 ] xδ(t)

where

    xδ(t) = x(t) − [ sin t ]  ,   uδ(t) = u(t) − sin 3t ,   yδ(t) = y(t) − sin t ,   xδ(0) = x(0) − [ 0 ]
                   [ cos t ]                                                                        [ 1 ]

Solution 2.5 For ũ = 0, constant nominal solutions are solutions of

    0 = x2 − 2 x1 x2 = x2 (1 − 2x1)
    0 = −x1 + x1² + x2² = x1 (x1 − 1) + x2²

Evidently there are 4 possible solutions:

    x̃a = [ 0 ]    x̃b = [ 1 ]    x̃c = [ 1/2 ]    x̃d = [  1/2 ]
         [ 0 ] ,       [ 0 ] ,       [ 1/2 ] ,       [ −1/2 ]

Since

    ∂f/∂x = [  −2x2      1 − 2x1 ]        ∂f/∂u = [ 0 ]
            [ −1 + 2x1     2x2   ] ,              [ 1 ]

evaluating at each of the constant nominals gives the corresponding 4 linearized state equations.
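The four constant nominals and the Jacobian evaluations are easy to verify. A minimal sketch (f is transcribed from the equations of this solution, with u = 0):

```python
import numpy as np

# Verify the equilibria and Jacobians of Solution 2.5.
def f(x):
    x1, x2 = x
    return np.array([x2 - 2 * x1 * x2, -x1 + x1**2 + x2**2])

def jac(x):
    x1, x2 = x
    return np.array([[-2 * x2, 1 - 2 * x1],
                     [-1 + 2 * x1, 2 * x2]])

nominals = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.5), (0.5, -0.5)]
for x in nominals:
    assert np.allclose(f(np.array(x)), 0.0)   # each is a constant nominal
    print(x, jac(np.array(x)).tolist())       # linearization A-matrix at that nominal
```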

Solution 2.7 Clearly x̃ is a constant nominal if and only if

    0 = A x̃ + b ũ

that is, if and only if A x̃ = −b ũ. There exists such an x̃ if and only if b ũ ∈ Im [A], in other words

    rank A = rank [ A  b ]

Also, x̃ is a constant nominal with c x̃ = 0 if and only if

    0 = A x̃ + b ũ
    0 = c x̃

that is, if and only if

    [ A ] x̃ = [ −b ũ ]
    [ c ]     [   0  ]

As above, this holds if and only if

    rank [ A ]  =  rank [ A  b ]
         [ c ]          [ c  0 ]

Finally, x̃ is a constant nominal with c x̃ = ũ if and only if

    0 = A x̃ + b ũ = A x̃ + b c x̃ = (A + bc) x̃

and this holds if and only if

    x̃ ∈ Ker [A + bc]

(If A is invertible, we can be more explicit. For any ũ the unique constant nominal is x̃ = −A⁻¹b ũ. Then ỹ = 0 for ũ ≠ 0 if and only if cA⁻¹b = 0, and ỹ = ũ if and only if cA⁻¹b = −1.)

Solution 2.8 (a) Since

    [ A  B ]
    [ C  0 ]

is invertible, for any K

    [ A + BK  B ]  =  [ A  B ] [ I  0 ]
    [   C     0 ]     [ C  0 ] [ K  I ]

is invertible. Let

    [ A + BK  B ] [ R1  R2 ]  =  [ I  0 ]
    [   C     0 ] [ R3  R4 ]     [ 0  I ]

Then the 1,2-block gives R2 = −(A + BK)⁻¹ B R4 and the 2,2-block gives C R2 = I, that is,

    I = −C (A + BK)⁻¹ B R4

Thus [C (A + BK)⁻¹ B]⁻¹ exists and is given by −R4.
(b) We need to show that there exists N such that

    0 = (A + BK) x̃ + BN ũ
    ũ = C x̃

The first equation gives

    x̃ = −(A + BK)⁻¹ BN ũ

Thus we need to choose N such that

    −C (A + BK)⁻¹ BN ũ = ũ

From part (a) we take N = [−C (A + BK)⁻¹ B]⁻¹ = R4.


Solution 2.10 For u(t) = ũ, x̃ is a constant nominal if and only if

    0 = (A + Dũ) x̃ + b ũ

This holds if and only if b ũ ∈ Im [A + Dũ], that is, if and only if

    rank (A + Dũ) = rank [ A + Dũ   b ũ ]

If A + Dũ is invertible, then

    x̃ = −(A + Dũ)⁻¹ b ũ   (+)

If A is invertible, then by continuity of the determinant det (A + Dũ) ≠ 0 for all ũ such that |ũ| is sufficiently small, and (+) defines a corresponding constant nominal. The corresponding linearized state equation is

    ẋδ(t) = (A + Dũ) xδ(t) + [ b − D (A + Dũ)⁻¹ b ũ ] uδ(t)
    yδ(t) = C xδ(t)

Solution 2.12 For the given nominal input, nominal output, and nominal initial state, the nominal solution satisfies

    ẋ(t) = [ 1               ]          x(0) = [  0 ]
           [ x1(t) − x3(t)   ]                  [ −3 ]
           [ x2(t) − 2 x3(t) ] ,                [ −2 ]

    1 = x2(t) − 2 x3(t)

Integrating for x1(t) and then x3(t) easily gives the nominal solution x1(t) = t, x2(t) = 2t − 3, and x3(t) = t − 2. The corresponding linearized state equation is specified by

    A = [ 0  0   0 ]        B(t) = [ 0 ]        C = [ 0  1  −2 ]
        [ 1  0  −1 ]               [ t ]
        [ 0  1  −2 ] ,             [ 0 ] ,

It is unusual that the nominal input and nominal output are constants, but the linearization is time varying.

Solution 2.14 Compute

    ż(t) = ẋ(t) − q̇(t) = A x(t) + B u(t) + A⁻¹B u̇(t)
         = A x(t) − A [−A⁻¹B u(t)] + A⁻¹B u̇(t)
         = A z(t) + A⁻¹B u̇(t)

If at any value ta > 0 we have x(ta) = q(ta), that is z(ta) = 0, and u̇(t) = 0 for t ≥ ta, that is u(t) = u(ta) for t ≥ ta, then z(t) = 0 for t ≥ ta. Thus x(t) = q(ta) for t ≥ ta, and q(t) represents what could be called an 'instantaneous constant nominal.'


CHAPTER 3

Solution 3.2 Differentiating term k+1 of the Peano-Baker series with respect to τ using the Leibniz rule gives

    ∂/∂τ [ ∫_τ^t A(σ1) ∫_τ^{σ1} A(σ2) ⋯ ∫_τ^{σk} A(σ_{k+1}) dσ_{k+1} ⋯ dσ1 ]

      = [ A(t) ∫_τ^t A(σ2) ⋯ ∫_τ^{σk} A(σ_{k+1}) dσ_{k+1} ⋯ dσ2 ] (d/dτ) t
        − [ A(τ) ∫_τ^τ A(σ2) ⋯ dσ_{k+1} ⋯ dσ2 ] (d/dτ) τ
        + ∫_τ^t A(σ1) ∂/∂τ [ ∫_τ^{σ1} A(σ2) ⋯ ∫_τ^{σk} A(σ_{k+1}) dσ_{k+1} ⋯ dσ2 ] dσ1

The first term is zero since t does not depend on τ, and the second term is zero since its inner integral runs from τ to τ. This leaves

      = ∫_τ^t A(σ1) ∂/∂τ [ ∫_τ^{σ1} A(σ2) ⋯ ∫_τ^{σk} A(σ_{k+1}) dσ_{k+1} ⋯ dσ2 ] dσ1

Repeating this process k times pushes the derivative onto the innermost integral:

    ∂/∂τ [ ∫_τ^t A(σ1) ∫_τ^{σ1} A(σ2) ⋯ ∫_τ^{σk} A(σ_{k+1}) dσ_{k+1} ⋯ dσ1 ]

      = ∫_τ^t A(σ1) ⋯ ∫_τ^{σ_{k−1}} A(σk) ∂/∂τ [ ∫_τ^{σk} A(σ_{k+1}) dσ_{k+1} ] dσk ⋯ dσ1
      = ∫_τ^t A(σ1) ⋯ ∫_τ^{σ_{k−1}} A(σk) [ −A(τ) ] dσk ⋯ dσ1
      = [ ∫_τ^t A(σ1) ∫_τ^{σ1} A(σ2) ⋯ ∫_τ^{σ_{k−1}} A(σk) dσk ⋯ dσ1 ] [ −A(τ) ]

Recognizing this as term k of the uniformly convergent series for −Φ(t, τ)A(τ) gives

    (∂/∂τ) Φ(t, τ) = −Φ(t, τ) A(τ)

(Of course it is simpler to use the formula for the derivative of an inverse matrix given in Exercise 1.17.)


Solution 3.6 Writing the state equation as a pair of scalar equations, the first one is

    ẋ1(t) = [ −t / (1 + t²) ] x1(t)

and an easy computation gives

    x1(t) = x1o / (1 + t²)^{1/2}

Then the second scalar equation becomes

    ẋ2(t) = [ −4t / (1 + t²) ] x2(t) + x1o / (1 + t²)^{1/2}

The complete solution formula gives, with some help from Mathematica,

    x2(t) = x2o / (1 + t²)² + [ ∫_0^t (1 + σ²)^{3/2} dσ ] x1o / (1 + t²)²
          = x2o / (1 + t²)² + [ √(1 + t²) (t³/4 + 5t/8) + (3/8) sinh⁻¹ t ] x1o / (1 + t²)²

If x1o = 1, then as t → ∞, x2(t) → 1/4, not zero.
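The limit 1/4 can be confirmed by integrating the pair of scalar equations directly. A minimal sketch using a basic RK4 stepper with x1o = 1, x2o = 0:

```python
# Integrate x1' = -t/(1+t^2) x1,  x2' = -4t/(1+t^2) x2 + x1  (Solution 3.6).
def rhs(t, x):
    x1, x2 = x
    return (-t / (1 + t**2) * x1,
            -4 * t / (1 + t**2) * x2 + x1)

def rk4(x, t0, t1, steps):
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = rhs(t, x)
        k2 = rhs(t + h/2, tuple(xi + h/2 * ki for xi, ki in zip(x, k1)))
        k3 = rhs(t + h/2, tuple(xi + h/2 * ki for xi, ki in zip(x, k2)))
        k4 = rhs(t + h, tuple(xi + h * ki for xi, ki in zip(x, k3)))
        x = tuple(xi + h/6 * (a + 2*b + 2*c + d)
                  for xi, a, b, c, d in zip(x, k1, k2, k3, k4))
        t += h
    return x

x1, x2 = rk4((1.0, 0.0), 0.0, 50.0, 50000)
print(x2)   # approaches 1/4, not zero
```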

Solution 3.7 From the hint, letting

    r(t) = ∫_{to}^t v(σ)φ(σ) dσ

we have ṙ(t) = v(t)φ(t), and

    φ(t) ≤ ψ(t) + r(t)   (*)

Multiplying (*) through by the nonnegative v(t) gives

    v(t)φ(t) ≤ v(t)ψ(t) + v(t)r(t)

or

    ṙ(t) − v(t)r(t) ≤ v(t)ψ(t)

Multiply both sides by the positive quantity e^{−∫_{to}^t v(τ)dτ} to obtain

    (d/dt) [ r(t) e^{−∫_{to}^t v(τ)dτ} ] ≤ v(t)ψ(t) e^{−∫_{to}^t v(τ)dτ}

Integrating both sides from to to t, and using r(to) = 0, gives

    r(t) e^{−∫_{to}^t v(τ)dτ} ≤ ∫_{to}^t v(σ)ψ(σ) e^{−∫_{to}^σ v(τ)dτ} dσ

Multiplying through by the positive quantity e^{∫_{to}^t v(τ)dτ} gives

    r(t) ≤ ∫_{to}^t v(σ)ψ(σ) e^{∫_σ^t v(τ)dτ} dσ

and using (*) yields the desired inequality.

Solution 3.10 Multiply the state equation by 2 z^T(t) to obtain

    2 z^T(t) ż(t) = (d/dt) ‖z(t)‖² = Σ_{i=1}^{n} Σ_{j=1}^{n} 2 zi(t) a_ij(t) zj(t)
                  ≤ Σ_{i=1}^{n} Σ_{j=1}^{n} 2 |a_ij(t)| |zi(t)| |zj(t)| ,   t ≥ to

At each t ≥ to let

    a(t) = 2n² max_{1≤i,j≤n} |a_ij(t)|

Note a(t) is a continuous function of t, as a quick sample sketch indicates. Then, since |zi(t)| ≤ ‖z(t)‖,

    (d/dt) ‖z(t)‖² ≤ a(t) ‖z(t)‖² ,   t ≥ to

Multiplying through by the positive quantity e^{−∫_{to}^t a(σ)dσ} gives

    (d/dt) [ e^{−∫_{to}^t a(σ)dσ} ‖z(t)‖² ] ≤ 0 ,   t ≥ to

Integrating both sides from to to t and using z(to) = 0 gives

    e^{−∫_{to}^t a(σ)dσ} ‖z(t)‖² ≤ 0 ,   t ≥ to

which implies z(t) = 0 for t ≥ to.

Solution 3.11 The vector function x(t) satisfies the given state equation if and only if it satisfies

    x(t) = xo + ∫_{to}^t A(σ)x(σ) dσ + ∫_{to}^t ∫_{to}^τ E(τ, σ)x(σ) dσ dτ + ∫_{to}^t B(σ)u(σ) dσ

Assuming there are two solutions, their difference z(t) satisfies

    z(t) = ∫_{to}^t A(σ)z(σ) dσ + ∫_{to}^t ∫_{to}^τ E(τ, σ)z(σ) dσ dτ

Interchanging the order of integration in the double integral (Dirichlet's formula) gives

    z(t) = ∫_{to}^t A(σ)z(σ) dσ + ∫_{to}^t [ ∫_σ^t E(τ, σ) dτ ] z(σ) dσ
         = ∫_{to}^t [ A(σ) + ∫_σ^t E(τ, σ) dτ ] z(σ) dσ
         ≜ ∫_{to}^t Â(t, σ) z(σ) dσ

Thus

    ‖z(t)‖ = ‖ ∫_{to}^t Â(t, σ) z(σ) dσ ‖ ≤ ∫_{to}^t ‖Â(t, σ)‖ ‖z(σ)‖ dσ

By continuity, given T > 0 there exists a finite constant α such that ‖Â(t, σ)‖ ≤ α for to ≤ σ ≤ t ≤ to + T. Thus

    ‖z(t)‖ ≤ ∫_{to}^t α ‖z(σ)‖ dσ ,   t ∈ [to, to+T]

and the Gronwall-Bellman inequality gives ‖z(t)‖ = 0 for t ∈ [to, to+T], implying that there can be no more than one solution.

Solution 3.13 From the Peano-Baker series,

    Φ(t, τ) − [ I + ∫_τ^t A(σ1) dσ1 + … + ∫_τ^t A(σ1) ∫_τ^{σ1} ⋯ ∫_τ^{σ_{k−1}} A(σk) dσk ⋯ dσ1 ]
      = Σ_{j=k+1}^{∞} ∫_τ^t A(σ1) ∫_τ^{σ1} ⋯ ∫_τ^{σ_{j−1}} A(σj) dσj ⋯ dσ1

For any fixed T > 0 there is a finite constant α such that ‖A(t)‖ ≤ α for t ∈ [−T, T], by continuity. Therefore

    ‖ Σ_{j=k+1}^{∞} ∫_τ^t A(σ1) ∫_τ^{σ1} ⋯ ∫_τ^{σ_{j−1}} A(σj) dσj ⋯ dσ1 ‖
      ≤ Σ_{j=k+1}^{∞} ‖ ∫_τ^t A(σ1) ∫_τ^{σ1} ⋯ ∫_τ^{σ_{j−1}} A(σj) dσj ⋯ dσ1 ‖
      ≤ Σ_{j=k+1}^{∞} α^j | ∫_τ^t ∫_τ^{σ1} ⋯ ∫_τ^{σ_{j−1}} 1 dσj ⋯ dσ1 |
      ≤ Σ_{j=k+1}^{∞} α^j |t − τ|^j / j!
      ≤ Σ_{j=k+1}^{∞} (α2T)^j / j! ,   t, τ ∈ [−T, T]

We need to show that given ε > 0 there exists K such that

    Σ_{j=K+1}^{∞} (α2T)^j / j! < ε   (*)

Using the hint,

    Σ_{j=k+1}^{∞} (α2T)^j / j! = Σ_{i=0}^{∞} (α2T)^{k+1+i} / (k+1+i)! ≤ [ (α2T)^{k+1} / (k+1)! ] Σ_{i=0}^{∞} (α2T)^i / k^i

If k > α2T, then

    Σ_{j=k+1}^{∞} (α2T)^j / j! ≤ [ (α2T)^{k+1} / (k+1)! ] · 1 / (1 − α2T/k) = (α2T)^{k+1} / [ (k−1)! (k+1) (k − α2T) ]

Because of the factorial in the denominator, given ε > 0 there exists a K > α2T such that (*) holds.
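The final tail estimate is easy to check numerically. A minimal sketch (the infinite sum is truncated far past numerical convergence, and c stands for α2T):

```python
import math

# Check: for k > c,  sum_{j>k} c^j/j! <= c^(k+1) / ((k-1)! (k+1) (k-c))  (Solution 3.13).
c = 3.7
for k in range(4, 12):
    tail = sum(c**j / math.factorial(j) for j in range(k + 1, 160))
    bound = c**(k + 1) / (math.factorial(k - 1) * (k + 1) * (k - c))
    assert tail <= bound
print("tail bound holds")
```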

Solution 3.15 Writing the complete solution of the state equation at tf, we need to satisfy

    Ho xo + Hf [ Φ(tf, to) xo + ∫_{to}^{tf} Φ(tf, σ)f(σ) dσ ] = h   (+)

Thus there exists a solution that satisfies the boundary conditions if and only if

    h − Hf ∫_{to}^{tf} Φ(tf, σ)f(σ) dσ ∈ Im [ Ho + Hf Φ(tf, to) ]

There exists a unique solution that satisfies the boundary conditions if Ho + Hf Φ(tf, to) is invertible. To compute a solution x(t) satisfying the boundary conditions:

(1) Compute Φ(t, to) for t ∈ [to, tf]
(2) Compute Ho + Hf Φ(tf, to)
(3) Compute ∫_{to}^{tf} Φ(tf, σ)f(σ) dσ
(4) Solve (+) for xo
(5) Set x(t) = Φ(t, to) xo + ∫_{to}^t Φ(t, σ)f(σ) dσ ,   t ∈ [to, tf]


CHAPTER 4

Solution 4.1 An easy way to compute A(t) is to use A(t) = Φ̇(t, 0)Φ(0, t). This gives

    A(t) = [ −2t  −1  ]
           [  1   −2t ]

This A(t) commutes with its integral, so we can write Φ(t, τ) as the matrix exponential

    Φ(t, τ) = exp [ ∫_τ^t A(σ) dσ ] = exp [ −(t² − τ²)    −(t − τ)   ]
                                          [   (t − τ)    −(t² − τ²)  ]

Solution 4.4 A linear state equation corresponding to the n-th order differential equation is

    ẋ(t) = [    0        1        0     ⋯       0       ]
           [    0        0        1     ⋯       0       ]
           [    ⋮                                ⋮       ]
           [    0        0        0     ⋯       1       ]
           [ −a0(t)   −a1(t)   −a2(t)   ⋯  −a_{n−1}(t)  ] x(t)

The corresponding adjoint state equation ż(t) = −A^T(t)z(t) is

    ż(t) = [  0   0  ⋯   0   a0(t)      ]
           [ −1   0  ⋯   0   a1(t)      ]
           [  0  −1  ⋯   0   a2(t)      ]
           [  ⋮               ⋮         ]
           [  0   0  ⋯  −1   a_{n−1}(t) ] z(t)

To put this in the form of an n-th order differential equation, start with

    żn(t) = −z_{n−1}(t) + a_{n−1}(t) zn(t)
    ż_{n−1}(t) = −z_{n−2}(t) + a_{n−2}(t) zn(t)

These give

    z̈n(t) = −ż_{n−1}(t) + (d/dt) [ a_{n−1}(t) zn(t) ]
           = z_{n−2}(t) − a_{n−2}(t) zn(t) + (d/dt) [ a_{n−1}(t) zn(t) ]

Next,

    ż_{n−2}(t) = −z_{n−3}(t) + a_{n−3}(t) zn(t)

gives

    (d³/dt³) zn(t) = ż_{n−2}(t) − (d/dt) [ a_{n−2}(t) zn(t) ] + (d²/dt²) [ a_{n−1}(t) zn(t) ]
                   = −z_{n−3}(t) + a_{n−3}(t) zn(t) − (d/dt) [ a_{n−2}(t) zn(t) ] + (d²/dt²) [ a_{n−1}(t) zn(t) ]

Continuing gives the n-th order differential equation

    (dⁿ/dtⁿ) zn(t) = (d^{n−1}/dt^{n−1}) [ a_{n−1}(t) zn(t) ] − (d^{n−2}/dt^{n−2}) [ a_{n−2}(t) zn(t) ]
                     + … + (−1)^n (d/dt) [ a1(t) zn(t) ] + (−1)^{n+1} a0(t) zn(t)

Solution 4.6 For the first matrix differential equation, write the transpose of the equation as (transpose and differentiation commute)

    Ẋ^T(t) = A^T(t) X^T(t) ,   X^T(to) = Xo^T

This has the unique solution X^T(t) = Φ_{A^T}(t, to) Xo^T, so that

    X(t) = Xo Φ_{A^T}^T(t, to)

In the second matrix differential equation, let Φk(t, τ) be the transition matrix for Ak(t), k = 1, 2. Then it is easy to verify (Leibniz rule) that a solution is

    X(t) = Φ1(t, to) Xo Φ2^T(t, to) + ∫_{to}^t Φ1(t, σ) F(σ) Φ2^T(t, σ) dσ

Or, one can generate this expression by using the obvious integrating factors on the left and right sides of the differential equation. (To show this is the unique solution, show that the difference Z(t) between any two solutions satisfies Ż(t) = A1(t)Z(t) + Z(t)A2^T(t), with Z(to) = 0. Integrate both sides and apply the Gronwall-Bellman inequality to show ‖Z(t)‖ is identically zero.)

Solution 4.9 Clearly A(t) commutes with its integral. Denote J = [0 1; −1 0]. Thus we compute exp(Jτ) and then replace τ by ∫_0^t a(σ) dσ. From the power series for the exponential,

    exp(Jτ) = Σ_{k=0}^{∞} (1/k!) J^k τ^k
            = Σ_{k=0}^{∞} [1/(2k)!] J^{2k} τ^{2k} + Σ_{k=0}^{∞} [1/(2k+1)!] J^{2k+1} τ^{2k+1}
            = Σ_{k=0}^{∞} [(−1)^k/(2k)!] I τ^{2k} + Σ_{k=0}^{∞} [(−1)^k/(2k+1)!] J τ^{2k+1}

            = [ cos τ    0   ]  +  [    0     sin τ ]
              [   0    cos τ ]     [ −sin τ     0   ]

            = [  cos τ   sin τ ]
              [ −sin τ   cos τ ]

Replacing τ as noted above gives Φ(t, 0).

Solution 4.10 For sufficiency, suppose Φx(t, 0) = T(t) e^{Rt}. Then T(0) = I and T(t) is continuously differentiable. Let z(t) = T⁻¹(t) x(t), so that

    Φz(t, 0) = T⁻¹(t) Φx(t, 0) T(0) = T⁻¹(t) T(t) e^{Rt} = e^{Rt}

Thus ż(t) = R z(t).
For necessity, suppose P(t) is a variable change that gives

    ż(t) = Ra z(t)

Then

    Φz(t, 0) = e^{Ra t} = P⁻¹(t) Φx(t, 0) P(0)

that is,

    Φx(t, 0) = P(t) e^{Ra t} P⁻¹(0)

Let T(t) = P(t) P⁻¹(0) and R = P(0) Ra P⁻¹(0). Then

    Φx(t, 0) = T(t) P(0) e^{P⁻¹(0)RP(0) t} P⁻¹(0)
             = T(t) P(0) [ P⁻¹(0) e^{Rt} P(0) ] P⁻¹(0)
             = T(t) e^{Rt}

Solution 4.11 Suppose

    Φ(t, 0) = e^{A1 t} e^{A2 t}

Then

    Φ̇(t, 0) = (d/dt) [ e^{A1 t} e^{A2 t} ] = e^{A1 t} (A1 + A2) e^{A2 t}
             = e^{A1 t} (A1 + A2) e^{−A1 t} · e^{A1 t} e^{A2 t}

This implies A(t) = e^{A1 t} (A1 + A2) e^{−A1 t}. Therefore A(0) = A1 + A2 is clear, and

    Ȧ(t) = A1 e^{A1 t} (A1 + A2) e^{−A1 t} + e^{A1 t} (A1 + A2) e^{−A1 t} (−A1)
          = A1 A(t) − A(t) A1

Conversely, assume A1 and A2 are such that

    Ȧ(t) = A1 A(t) − A(t) A1 ,   A(0) = A1 + A2

This matrix differential equation has a unique solution (by rewriting it as a linear vector differential equation), and from the calculation above this solution is

    A(t) = e^{A1 t} (A1 + A2) e^{−A1 t}

Since

    (d/dt) [ e^{A1 t} e^{A2 t} ] = A(t) e^{A1 t} e^{A2 t} ,   e^{A1·0} e^{A2·0} = I

we have that Φ(t, 0) = e^{A1 t} e^{A2 t}.

Solution 4.13 Writing

    (∂/∂t) ΦA(t, τ) = A(t) ΦA(t, τ) ,   ΦA(τ, τ) = I

in partitioned form shows that

    (∂/∂t) Φ21(t, τ) = A22(t) Φ21(t, τ) ,   Φ21(τ, τ) = 0

Thus Φ21(t, τ) is identically zero. But then

    (∂/∂t) Φii(t, τ) = Aii(t) Φii(t, τ) ,   Φii(τ, τ) = I

for i = 1, 2, and

    (∂/∂t) Φ12(t, τ) = A11(t) Φ12(t, τ) + A12(t) Φ22(t, τ) ,   Φ12(τ, τ) = 0

Using Exercise 4.6 with F(t) = A12(t) Φ22(t, τ) gives

    Φ12(t, τ) = ∫_τ^t Φ11(t, σ) A12(σ) Φ22(σ, τ) dσ

Solution 4.17 We need to compute a continuously-differentiable, invertible P(t) such that

    [ t  1 ]  =  P⁻¹(t) [   0      1  ] P(t) − P⁻¹(t) Ṗ(t)
    [ 1  t ]            [ 2−t²    2t  ]

Multiplying on the left by P(t), the result can be written as a dimension-4 linear state equation. Choosing the initial condition corresponding to P(0) = I, some clever guessing gives

    P(t) = [ 1  0 ]
           [ t  1 ]

Solution 4.23 Using the formula for the derivative of an inverse matrix given in Exercise 1.17,

    (∂/∂t) ΦA(−τ, −t) = (∂/∂t) ΦA⁻¹(−t, −τ) = −ΦA⁻¹(−t, −τ) [ (∂/∂t) ΦA(−t, −τ) ] ΦA⁻¹(−t, −τ)
      = −ΦA⁻¹(−t, −τ) [ −(∂/∂(−t)) ΦA(−t, −τ) ] ΦA⁻¹(−t, −τ)
      = −ΦA⁻¹(−t, −τ) [ −A(−t) ΦA(−t, −τ) ] ΦA⁻¹(−t, −τ)
      = ΦA⁻¹(−t, −τ) A(−t) = ΦA(−τ, −t) A(−t)

Transposing gives

    (∂/∂t) ΦA^T(−τ, −t) = A^T(−t) ΦA^T(−τ, −t)

Since ΦA(−τ, −τ) = I, we have F(t) = A^T(−t).
Or we can use the result of Exercise 3.2 to compute:

    (∂/∂t) ΦA(−τ, −t) = −(∂/∂(−t)) ΦA(−τ, −t) = ΦA(−τ, −t) A(−t)

This implies

    (∂/∂t) ΦA^T(−τ, −t) = A^T(−t) ΦA^T(−τ, −t)

again giving F(t) = A^T(−t).

Solution 4.25 We can write

    Φ(t+σ, σ) = I + ∫_σ^{t+σ} A(τ) dτ + Σ_{k=2}^{∞} ∫_σ^{t+σ} A(τ1) ∫_σ^{τ1} A(τ2) ⋯ ∫_σ^{τ_{k−1}} A(τk) dτk ⋯ dτ1

and

    e^{Āt(σ) t} = I + Āt(σ) t + Σ_{k=2}^{∞} (1/k!) Āt^k(σ) t^k

Then

    R(t, σ) = Φ(t+σ, σ) − e^{Āt(σ) t}
            = Σ_{k=2}^{∞} [ ∫_σ^{t+σ} A(τ1) ∫_σ^{τ1} A(τ2) ⋯ ∫_σ^{τ_{k−1}} A(τk) dτk ⋯ dτ1 − (1/k!) Āt^k(σ) t^k ]

From ‖A(t)‖ ≤ α and the triangle inequality,

    ‖R(t, σ)‖ ≤ 2 Σ_{k=2}^{∞} α^k t^k / k! = α²t² Σ_{k=2}^{∞} (2/k!) α^{k−2} t^{k−2}

Using

    2/k! ≤ 1/(k−2)! ,   k ≥ 2

gives

    ‖R(t, σ)‖ ≤ α²t² Σ_{k=2}^{∞} [1/(k−2)!] α^{k−2} t^{k−2} = α²t² e^{αt}


CHAPTER 5

Solution 5.3 Using the series definition, which involves talent in series recognition,

    A^{2k} = [ 1  0 ]        A^{2k+1} = [ 0  1 ]        k = 0, 1, …
             [ 0  1 ] ,                 [ 1  0 ] ,

gives

    e^{At} = I + [ 0  t ] + (1/2!) [ t²   0 ] + (1/3!) [ 0   t³ ] + …
                 [ t  0 ]          [ 0   t² ]          [ t³   0 ]

           = [ (e^t + e^{−t})/2   (e^t − e^{−t})/2 ]  =  [ cosh t   sinh t ]
             [ (e^t − e^{−t})/2   (e^t + e^{−t})/2 ]     [ sinh t   cosh t ]

Using the Laplace transform method,

    (sI − A)⁻¹ = [  s  −1 ]⁻¹  =  [ s/(s²−1)   1/(s²−1) ]
                 [ −1   s ]       [ 1/(s²−1)   s/(s²−1) ]

which gives again

    e^{At} = [ cosh t   sinh t ]
             [ sinh t   cosh t ]

Using the diagonalization method, computing eigenvectors for A and letting

    P = [ 1   1 ]
        [ 1  −1 ]

gives

    P⁻¹AP = [ 1   0 ]
            [ 0  −1 ]

Then

    e^{At} = P [ e^t     0    ] P⁻¹  =  [ cosh t   sinh t ]
               [  0   e^{−t}  ]         [ sinh t   cosh t ]
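Truncating the exponential series directly gives a quick check of this closed form. A minimal sketch:

```python
import numpy as np

# Check Solution 5.3: e^{At} for A = [[0,1],[1,0]] equals
# [[cosh t, sinh t],[sinh t, cosh t]], via the truncated power series.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
t = 1.3

term = np.eye(2)
expAt = np.eye(2)
for k in range(1, 30):           # series converges fast for |t| ~ 1
    term = term @ (A * t) / k    # term now holds (At)^k / k!
    expAt = expAt + term

expected = np.array([[np.cosh(t), np.sinh(t)],
                     [np.sinh(t), np.cosh(t)]])
print(np.allclose(expAt, expected))
```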

Solution 5.4 Since

    A(t) = [ t  1 ]
           [ 1  t ]

commutes with its integral,

    Φ(t, 0) = e^{∫_0^t A(σ)dσ} = exp [ t²/2    t   ]
                                     [  t     t²/2 ]

And since

    [ t²/2    0  ]        [ 0  t ]
    [  0    t²/2 ] ,      [ t  0 ]

commute,

    Φ(t, 0) = exp [ [ 1 0; 0 1 ] t²/2 ] · exp [ [ 0 1; 1 0 ] t ]

Using Exercise 5.3 gives

    Φ(t, 0) = [ e^{t²/2}     0     ] [ cosh t   sinh t ]  =  [ e^{t²/2} cosh t   e^{t²/2} sinh t ]
              [    0     e^{t²/2}  ] [ sinh t   cosh t ]     [ e^{t²/2} sinh t   e^{t²/2} cosh t ]
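The closed form can be verified against a direct numerical integration of Φ̇ = A(t)Φ, Φ(0) = I. A minimal sketch, assuming the A(t) of Solution 5.4 is [[t, 1], [1, t]]:

```python
import numpy as np

# Integrate d/dt Phi = A(t) Phi, Phi(0) = I, with a basic RK4 step and
# compare against e^{t^2/2} [[cosh t, sinh t],[sinh t, cosh t]].
def A(t):
    return np.array([[t, 1.0], [1.0, t]])

Phi = np.eye(2)
t, h, steps = 0.0, 1e-4, 10000        # integrate out to t = 1
for _ in range(steps):
    k1 = A(t) @ Phi
    k2 = A(t + h/2) @ (Phi + h/2 * k1)
    k3 = A(t + h/2) @ (Phi + h/2 * k2)
    k4 = A(t + h) @ (Phi + h * k3)
    Phi = Phi + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += h

expected = np.exp(t**2 / 2) * np.array([[np.cosh(t), np.sinh(t)],
                                        [np.sinh(t), np.cosh(t)]])
print(np.allclose(Phi, expected))
```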

Solution 5.7 To verify that

A ∫_0^t e^{Aσ} dσ = e^{At} − I

note that the two sides agree at t = 0, and the derivatives of the two sides with respect to t are identical. If A is invertible and all its eigenvalues have negative real parts, then lim_{t → ∞} e^{At} = 0. This gives

A ∫_0^∞ e^{Aσ} dσ = −I

that is,

A^{−1} = −∫_0^∞ e^{Aσ} dσ
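The identity A^{−1} = −∫_0^∞ e^{Aσ} dσ can be verified numerically with a truncated quadrature; the sketch below (an addition, assuming NumPy/SciPy) uses a simple trapezoid rule:

```python
import numpy as np
from scipy.linalg import expm

# Stable, invertible test matrix (eigenvalues -1 and -3)
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])

# Trapezoid-rule approximation of the integral of e^{A sigma} over [0, T],
# with T = 40 large enough that e^{AT} is negligible
h, N = 0.01, 4000
grid = [expm(A * (k * h)) for k in range(N + 1)]
integral = sum(0.5 * h * (grid[k] + grid[k + 1]) for k in range(N))

# A * integral = e^{AT} - I, which is approximately -I for large T
assert np.allclose(A @ integral, expm(A * N * h) - np.eye(2), atol=1e-3)
assert np.allclose(-integral, np.linalg.inv(A), atol=1e-3)
```
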

Solution 5.9 Evaluating the given expression at t = 0 gives x(0) = 0. Using the Leibniz rule to differentiate the expression gives

ẋ(t) = d/dt ∫_0^t e^{A(t−σ)} e^{D ∫_σ^t u(τ) dτ} b u(σ) dσ

     = b u(t) + ∫_0^t ∂/∂t [ e^{A(t−σ)} e^{D ∫_σ^t u(τ) dτ} b u(σ) ] dσ

Using the product rule and differentiating the power series for e^{D ∫_σ^t u(τ) dτ} gives

ẋ(t) = b u(t) + ∫_0^t [ A e^{A(t−σ)} e^{D ∫_σ^t u(τ) dτ} b u(σ) + e^{A(t−σ)} D u(t) e^{D ∫_σ^t u(τ) dτ} b u(σ) ] dσ

If we assume that AD = DA, then e^{A(t−σ)} D = D e^{A(t−σ)} and


ẋ(t) = b u(t) + A ∫_0^t e^{A(t−σ)} e^{D ∫_σ^t u(τ) dτ} b u(σ) dσ + D u(t) ∫_0^t e^{A(t−σ)} e^{D ∫_σ^t u(τ) dτ} b u(σ) dσ

     = A x(t) + D x(t) u(t) + b u(t)

Solution 5.12 We will show how to define β_0(t), . . . , β_{n−1}(t) such that

Σ_{k=0}^{n−1} β̇_k(t) P_k = Σ_{k=0}^{n−1} β_k(t) A P_k ,  Σ_{k=0}^{n−1} β_k(0) P_k = I   (*)

which then gives the desired expression by Property 5.1. From the definitions,

P_1 = A P_0 − λ_1 I , P_2 = A P_1 − λ_2 P_1 , . . . , P_{n−1} = A P_{n−2} − λ_{n−1} P_{n−2}

Also P_n = (A − λ_n I) P_{n−1} = 0 by the Cayley-Hamilton theorem, so A P_{n−1} = λ_n P_{n−1}. Now we equate coefficients of like P_k's in (*), rewritten as

Σ_{k=0}^{n−1} β̇_k(t) P_k = Σ_{k=0}^{n−1} β_k(t) [ P_{k+1} + λ_{k+1} P_k ]

to get equations for the desired β_k(t)'s:

P_0 : β̇_0(t) = λ_1 β_0(t)
P_1 : β̇_1(t) = β_0(t) + λ_2 β_1(t)
  ⋮
P_{n−1} : β̇_{n−1}(t) = β_{n−2}(t) + λ_n β_{n−1}(t)

that is,

[ β̇_0(t) ; β̇_1(t) ; ⋮ ; β̇_{n−1}(t) ] = [ λ_1  0  ⋯  0  0 ; 1  λ_2  ⋯  0  0 ; ⋮  ⋱  ⋱  ⋮  ⋮ ; 0  0  ⋯  1  λ_n ] [ β_0(t) ; β_1(t) ; ⋮ ; β_{n−1}(t) ]

With the initial condition provided by β_0(0) = 1, β_k(0) = 0, k = 1, . . . , n−1, the analytic solution of this state equation provides a solution for (*). (The resulting expression for e^{At} is sometimes called Putzer's formula.)
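Putzer's formula is easy to exercise numerically. The sketch below (an addition to the solution, assuming NumPy/SciPy) solves the bidiagonal β-equation with a general ODE solver and compares the result to a reference matrix exponential:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

def putzer_expm(A, t):
    """e^{At} via Putzer's formula: solve the bidiagonal beta-equation, then sum beta_k(t) P_k."""
    lam = np.linalg.eigvals(A)          # any fixed ordering of the eigenvalues works
    n = len(lam)
    # P_0 = I, P_k = (A - lambda_k I) P_{k-1}
    P = [np.eye(n)]
    for k in range(1, n):
        P.append((A - lam[k - 1] * np.eye(n)) @ P[-1])
    # beta'_0 = lambda_1 beta_0, beta'_k = beta_{k-1} + lambda_{k+1} beta_k, beta(0) = e_1
    M = np.diag(lam) + np.diag(np.ones(n - 1), -1)
    sol = solve_ivp(lambda s, b: M @ b, (0.0, t),
                    np.eye(n, dtype=complex)[:, 0], rtol=1e-10, atol=1e-12)
    beta = sol.y[:, -1]
    return sum(beta[k] * P[k] for k in range(n)).real

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # eigenvalues -1, -2
assert np.allclose(putzer_expm(A, 1.5), expm(A * 1.5), atol=1e-6)
```
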

Solution 5.17 Write, by Property 5.11,

Φ(t, t_o) = P^{−1}(t) e^{R(t−t_o)} P(t_o)

where P(t) is continuous, T-periodic, and invertible at each t. Let

S = P^{−1}(t_o) R P(t_o) ,  Q(t, t_o) = P^{−1}(t) P(t_o)

Then Q(t, t_o) is continuous and invertible at each t, and satisfies

Q(t+T, t_o) = P^{−1}(t+T) P(t_o) = P^{−1}(t) P(t_o) = Q(t, t_o)

with Q(t_o, t_o) = I. Also,


Φ(t, t_o) = P^{−1}(t) e^{P(t_o) S P^{−1}(t_o) (t−t_o)} P(t_o) = P^{−1}(t) P(t_o) e^{S(t−t_o)} P^{−1}(t_o) P(t_o)

          = Q(t, t_o) e^{S(t−t_o)}

Solution 5.19 From the Floquet decomposition and Property 4.9,

det Φ(T, 0) = det e^{RT} = e^{∫_0^T tr[A(σ)] dσ}

Because the integral in the exponent is positive, the product of eigenvalues of Φ(T, 0) is greater than unity, which implies that at least one eigenvalue of Φ(T, 0) has magnitude greater than unity. Thus by the argument following Example 5.12 there exist unbounded solutions.

Solution 5.20 Following the hint, define a real matrix S by

e^{S2T} = Φ^2(T, 0)

and set

Q(t) = Φ(t, 0) e^{−St}

Clearly Q(t) is real and continuous, and

Q(t+2T) = Φ(t+2T, 0) e^{−S(t+2T)} = Φ(t+2T, T) Φ(T, 0) e^{−S2T} e^{−St}

        = Φ(t+T, 0) Φ(T, 0) e^{−S2T} e^{−St} = Φ(t+T, T) Φ^2(T, 0) e^{−S2T} e^{−St}

        = Φ(t+T, T) e^{−St} = Φ(t, 0) e^{−St}

        = Q(t)

That is, Q(t) is 2T-periodic. (For a proof of the hint, see Chapter 8 of D.L. Lukes, Differential Equations: Classical to Controlled, Academic Press, 1982.)

Solution 5.22 The solution will be T-periodic for initial state x_o if and only if x_o satisfies (see text equation (32))

[ Φ^{−1}(t_o+T, t_o) − I ] x_o = ∫_{t_o}^{t_o+T} Φ(t_o, σ) f(σ) dσ

This linear equation has a solution for x_o if and only if

z_o^T ∫_{t_o}^{t_o+T} Φ(t_o, σ) f(σ) dσ = 0   (*)

for every nonzero vector z_o that satisfies

[ Φ^{−1}(t_o+T, t_o) − I ]^T z_o = 0   (**)

The solution of the adjoint state equation can be written as

z(t) = [ Φ^{−1}(t, t_o) ]^T z_o

Then by Lemma 5.14, (**) is precisely the condition that z(t) be T-periodic. Thus writing (*) in the form


0 = ∫_{t_o}^{t_o+T} z_o^T Φ(t_o, σ) f(σ) dσ = ∫_{t_o}^{t_o+T} z^T(σ) f(σ) dσ

completes the proof.

Solution 5.24 Note A = −A^T, and from Example 5.9,

e^{At} = [ cos t  sin t ; −sin t  cos t ]

Therefore all solutions of the adjoint equation are periodic, with period of the form k2π, where k is a positive integer. The forcing term has period T = 2π/ω, where we assume ω > 0. The rest of the analysis breaks down into 3 cases.

Case 1: If ω ≠ 1, 1/2, 1/3, . . . then the adjoint equation has no T-periodic solution, so the condition (Exercise 5.22)

∫_0^T z^T(σ) f(σ) dσ = 0   (+)

holds vacuously. Thus there will exist corresponding periodic solutions.

Case 2: If ω = 1, then

∫_0^T z^T(σ) f(σ) dσ = ∫_0^T z_o^T e^{Aσ} f(σ) dσ = −z_{o1} ∫_0^T sin^2 σ dσ + z_{o2} ∫_0^T cos σ sin σ dσ ≠ 0

for z_{o1} ≠ 0, so there is no periodic solution.

Case 3: If ω = 1/k, k = 2, 3, . . . , then since

∫_0^T cos σ sin(σ/k) dσ = ∫_0^T sin σ sin(σ/k) dσ = 0

the condition (+) will hold, and there exist periodic solutions.

In summary, there exist periodic solutions for all ω > 0 except ω = 1.


CHAPTER 6

Solution 6.1 If the state equation is uniformly stable, then there exists a positive γ such that for any t_o and x_o the corresponding solution satisfies

‖x(t)‖ ≤ γ ‖x_o‖ , t ≥ t_o

Given a positive ε, take δ = ε/γ. Then, regardless of t_o, ‖x_o‖ ≤ δ implies

‖x(t)‖ ≤ γ δ = ε , t ≥ t_o

Conversely, given a positive ε suppose positive δ is such that, regardless of t_o, ‖x_o‖ ≤ δ implies ‖x(t)‖ ≤ ε, t ≥ t_o. For any t_a ≥ t_o let x_a be such that

‖x_a‖ = 1 , ‖Φ(t_a, t_o) x_a‖ = ‖Φ(t_a, t_o)‖

Then x_o = δ x_a satisfies ‖x_o‖ = δ, and the corresponding solution at t = t_a satisfies

‖x(t_a)‖ = ‖Φ(t_a, t_o) x_o‖ = δ ‖Φ(t_a, t_o)‖ ≤ ε

Therefore

‖Φ(t_a, t_o)‖ ≤ ε/δ

Such an x_a can be selected for any t_a, t_o such that t_a ≥ t_o. Therefore

‖Φ(t, t_o)‖ ≤ ε/δ

for all t and t_o with t ≥ t_o, and we can take γ = ε/δ to obtain

‖x(t)‖ = ‖Φ(t, t_o) x_o‖ ≤ ‖Φ(t, t_o)‖ ‖x_o‖ ≤ γ ‖x_o‖ , t ≥ t_o

This implies uniform stability.

Solution 6.4 Using the fact that A(t) commutes with its integral,

Φ(t, τ) = e^{∫_τ^t A(σ) dσ} = I + ∫_τ^t A(σ) dσ + (1/2!) ( ∫_τ^t A(σ) dσ )^2 + ⋅⋅⋅

For any fixed τ, the entry φ_{11}(t, τ) clearly grows without bound as t → ∞, and thus the state equation is not uniformly stable.

Solution 6.6 Using elementary properties of the norm,


Φ(t, τ) = I + ∫_τ^t A(σ) dσ + ∫_τ^t A(σ_1) ∫_τ^{σ_1} A(σ_2) dσ_2 dσ_1 + ⋅⋅⋅

‖Φ(t, τ)‖ ≤ 1 + | ∫_τ^t ‖A(σ)‖ dσ | + | ∫_τ^t ‖A(σ_1)‖ | ∫_τ^{σ_1} ‖A(σ_2)‖ dσ_2 | dσ_1 | + ⋅⋅⋅

(Be careful of t < τ.) Since ‖A(t)‖ ≤ α for all t,

‖Φ(t, τ)‖ ≤ 1 + α |t−τ| + α^2 |t−τ|^2 / 2! + ⋅⋅⋅

For |t−τ| ≤ δ,

‖Φ(t, τ)‖ ≤ 1 + α δ + α^2 δ^2 / 2! + ⋅⋅⋅ = e^{α δ}
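The bound ‖Φ(t, τ)‖ ≤ e^{α|t−τ|} can be illustrated numerically by integrating the matrix differential equation for the transition matrix. The sketch below (an addition, assuming NumPy/SciPy; the particular A(t) is a hypothetical example with ‖A(t)‖ ≤ 1) checks the bound on a few intervals:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 1.0
def A(t):
    # spectral norm of [[0, a], [b, 0]] is max(|a|, |b|), so ||A(t)|| <= 1
    return np.array([[0.0, np.sin(t)],
                     [-np.cos(t), 0.0]])

def transition(t, tau):
    """Phi(t, tau): integrate X' = A(s) X from X(tau) = I."""
    sol = solve_ivp(lambda s, x: (A(s) @ x.reshape(2, 2)).ravel(),
                    (tau, t), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

for tau in (0.0, 1.0):
    for dt in (0.3, 1.0, 2.5):
        bound = np.exp(alpha * dt)       # e^{alpha |t - tau|}
        assert np.linalg.norm(transition(tau + dt, tau), 2) <= bound
```
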

Solution 6.8 See the proof of Theorem 15.2.

Solution 6.10 Write Re[λ] = −η, where η > 0 by assumption, so that

| t e^{λt} | = t e^{−ηt} , t ≥ 0

A simple maximization argument (setting the derivative to zero) gives

t e^{−ηt} ≤ 1/(η e) ≜ β , t ≥ 0

so that

| t e^{λt} | ≤ β , t ≥ 0

Using this bound we can write

| t e^{λt} | = t e^{−ηt} = t e^{−(η/2)t} e^{−(η/2)t} ≤ (2/(η e)) e^{−(η/2)t} , t ≥ 0

Similarly,

| t^2 e^{λt} | = t^2 e^{−ηt} ≤ (2/(η e)) t e^{−(η/2)t} = (2/(η e)) t e^{−(η/4)t} e^{−(η/4)t} ≤ (2/(η e)) ⋅ (4/(η e)) e^{−(η/4)t} , t ≥ 0

and continuing we get, for any j ≥ 0,

| t^j e^{λt} | ≤ (2^{j+(j−1)+ ⋅⋅⋅ +1} / (η e)^j) e^{−(η/2^j)t} , t ≥ 0

Therefore


∫_0^∞ t^j |e^{λt}| dt ≤ (2^{j+(j−1)+ ⋅⋅⋅ +1} / (η e)^j) ∫_0^∞ e^{−(η/2^j)t} dt

                     ≤ (2^{j+(j−1)+ ⋅⋅⋅ +1} / (η e)^j) ⋅ (2^j / η)

                     = 2^{2j+(j−1)+ ⋅⋅⋅ +1} / (e^j |Re[λ]|^{j+1})

Solution 6.12 By Theorem 6.4 uniform stability is equivalent to existence of a finite constant γ such that ‖e^{At}‖ ≤ γ for all t ≥ 0. Writing

e^{At} = Σ_{k=1}^m Σ_{j=1}^{σ_k} W_{kj} (t^{j−1}/(j−1)!) e^{λ_k t}

where λ_1, . . . , λ_m are the distinct eigenvalues of A, suppose

Re[λ_k] ≤ 0 , k = 1, . . . , m ; Re[λ_k] = 0 implies σ_k = 1   (*)

Since t^{j−1} |e^{λ_k t}| is bounded if Re[λ_k] < 0 (for any j), and |e^{λ_k t}| = 1 if Re[λ_k] = 0, it is clear that ‖e^{At}‖ is bounded for t ≥ 0. Thus (*) is a sufficient condition for uniform stability.

A necessary condition for uniform stability is

Re[λ_k] ≤ 0 , k = 1, . . . , m

For if Re[λ_k] > 0 for some k, the proof of Theorem 6.2 shows that ‖e^{At}‖ grows without bound as t → ∞. The gap between this necessary condition and the sufficient condition is illustrated by the two cases

A = [ 0  0 ; 0  0 ] , A = [ 0  1 ; 0  0 ]

Both satisfy the necessary condition, neither satisfies the sufficient condition, and the first case is uniformly stable while the second case is not (unbounded solutions exist, as shown by easy computation of the transition matrix). (It can be shown that a necessary and sufficient condition for uniform stability is that each eigenvalue of A has nonpositive real part and any eigenvalue of A with zero real part has algebraic multiplicity equal to its geometric multiplicity.)

Solution 6.14 Suppose γ, λ > 0 are such that

‖Φ(t, t_o)‖ ≤ γ e^{−λ(t−t_o)}

for all t, t_o such that t ≥ t_o. Then given any x_o, t_o, the corresponding solution at t ≥ t_o satisfies

‖x(t)‖ = ‖Φ(t, t_o) x_o‖ ≤ ‖Φ(t, t_o)‖ ‖x_o‖ ≤ γ e^{−λ(t−t_o)} ‖x_o‖

and the state equation is uniformly exponentially stable.

Now suppose the state equation is uniformly exponentially stable, so that there exist γ, λ > 0 such that

‖x(t)‖ ≤ γ e^{−λ(t−t_o)} ‖x_o‖ , t ≥ t_o

for any x_o and t_o. Given any t_o and t_a ≥ t_o, choose x_a such that

‖Φ(t_a, t_o) x_a‖ = ‖Φ(t_a, t_o)‖ , ‖x_a‖ = 1

Then with x_o = x_a the corresponding solution at t_a satisfies


‖x(t_a)‖ = ‖Φ(t_a, t_o) x_a‖ = ‖Φ(t_a, t_o)‖ ≤ γ e^{−λ(t_a−t_o)}

Since such an x_a can be selected for any t_o and t_a ≥ t_o, we have

‖Φ(t, τ)‖ ≤ γ e^{−λ(t−τ)}

for all t, τ such that t ≥ τ, and the proof is complete.

Solution 6.18 The variable change z(t) = P^{−1}(t) x(t) yields ż(t) = 0 if and only if

P^{−1}(t) A(t) P(t) − P^{−1}(t) Ṗ(t) = 0

for all t. This clearly is equivalent to Ṗ(t) = A(t) P(t), which is equivalent to Φ_A(t, τ) = P(t) P^{−1}(τ). Now, if P(t) is a Lyapunov transformation, that is ‖P(t)‖ ≤ ρ < ∞ and |det P(t)| ≥ η > 0 for all t, then

‖Φ_A(t, τ)‖ ≤ ‖P(t)‖ ‖P^{−1}(τ)‖ ≤ ‖P(t)‖ ‖P(τ)‖^{n−1} / |det P(τ)| ≤ ρ^n / η ≜ γ

for all t and τ.

Conversely, suppose ‖Φ_A(t, τ)‖ ≤ γ for all t and τ. Let P(t) = Φ_A(t, 0). Then ‖P(t)‖ ≤ γ and

‖P(t)‖ ≤ ‖P^{−1}(t)‖^{n−1} / |det P^{−1}(t)| = ‖P^{−1}(t)‖^{n−1} |det P(t)|

for all t. Using ‖P(t)‖ ≥ 1/‖P^{−1}(t)‖ gives

|det P(t)| ≥ 1 / ‖P^{−1}(t)‖^n

and since ‖P^{−1}(t)‖ = ‖Φ_A(0, t)‖ ≤ γ,

|det P(t)| ≥ 1 / γ^n

Thus P(t) is a Lyapunov transformation, and clearly

P^{−1}(t) A(t) P(t) − P^{−1}(t) Ṗ(t) = 0

for all t.


CHAPTER 7

Solution 7.3 Let Â = FA, and take Q = F^{−1}, which is positive definite since F is positive definite. Then since F is symmetric,

Â^T Q + Q Â = A^T F F^{−1} + F^{−1} F A = A^T + A < 0

This gives exponential stability by Theorem 7.4.

Solution 7.5 By our default assumptions, a(t) is continuous. Since Q is constant, symmetric, and positive definite, the first condition of Theorem 7.2 holds. Checking the second condition,

A^T(t) Q + Q A(t) = [ −a(t)  −a(t)/2 ; −a(t)/2  −1 ] ≤ 0

gives the requirements

a(t) ≥ 0 , 4 a(t) ≥ a^2(t)

Thus the state equation is uniformly stable if a(t) is a continuous function satisfying 0 ≤ a(t) ≤ 4 for all t.

Solution 7.6 With

Q(t) = [ a(t)  0 ; 0  1 ] ,  A^T(t) Q(t) + Q(t) A(t) + Q̇(t) = [ ȧ(t)  0 ; 0  −4 ]

we need to assume that a(t) is continuously differentiable and η ≤ a(t) ≤ ρ for some positive constants η and ρ so that the first condition of Theorem 7.4 is satisfied. For the second condition we need to assume ȧ(t) ≤ −ν for some positive constant ν. Unfortunately this implies, taking any t_o,

a(t) = a(t_o) + ∫_{t_o}^t ȧ(σ) dσ ≤ a(t_o) + ν t_o − ν t , t ≥ t_o

and for sufficiently large t the positivity condition on a(t) will be violated. Thus there is no a(t) for which the given Q(t) shows uniform exponential stability of the given state equation.

Solution 7.9 We need to assume that a(t) is continuously differentiable. Consider

Q(t) − η I = [ 2a(t)+1−η  1 ; 1  (a(t)+1)/a(t) − η ]

Suppose there exists a small, positive constant η such that


η ≤ a(t) ≤ 1/(2η)

for all t. Then

2 a(t) + 1 − η ≥ η + 1 > 1

(a(t)+1)/a(t) − η = 1 + 1/a(t) − η ≥ 1 + 2η − η = 1 + η > 1

and Q(t) − ηI ≥ 0, for all t, follows easily. Similarly, with ρ = (2η+1)/η we can show ρI − Q(t) ≥ 0 using

ρ − 2a(t) − 1 ≥ (2η+1)/η − 2 ⋅ (1/(2η)) − 1 = 1

ρ − (a(t)+1)/a(t) ≥ (2η+1)/η − 1 − 1/a(t) ≥ 1

Next consider

A^T(t) Q(t) + Q(t) A(t) + Q̇(t) = [ 2ȧ(t)−2a(t)  0 ; 0  −2a(t) − ȧ(t)/a^2(t) ] ≤ −νI

This gives that for uniform exponential stability we also need existence of a small, positive constant ν such that

ν a^2(t) − 2 a^3(t) ≤ ȧ(t) ≤ a(t) − ν/2

for all t. For example, a(t) = 1 satisfies these conditions.

Solution 7.11 Suppose that for every symmetric, positive-definite M there exists a unique, symmetric, positive-definite Q such that

A^T Q + Q A + 2µQ = −M   (*)

that is,

(A + µI)^T Q + Q (A + µI) = −M   (**)

Then by the argument above Theorem 7.11 we conclude that all eigenvalues of A + µI have negative real parts. That is, if

0 = det[ λI − (A + µI) ] = det[ (λ − µ)I − A ]

then Re[λ] < 0. Since µ > 0, this gives Re[λ − µ] < −µ, that is, all eigenvalues of A have real parts strictly less than −µ.

Now suppose all eigenvalues of A have real parts strictly less than −µ. Then, as above, eigenvalues of A + µI have negative real parts. Then by Theorem 7.11, given symmetric, positive-definite M there exists a unique, symmetric, positive-definite Q such that (**) holds, which implies (*) holds.

Solution 7.16 For arbitrary but fixed t ≥ 0, let x_a be such that

‖x_a‖ = 1 , ‖e^{At} x_a‖ = ‖e^{At}‖

By Theorem 7.11 the unique solution of QA + A^TQ = −M is the symmetric, positive-definite matrix

Q = ∫_0^∞ e^{A^Tσ} M e^{Aσ} dσ

Thus we can write


∫_t^∞ x_a^T e^{A^Tσ} M e^{Aσ} x_a dσ ≤ ∫_0^∞ x_a^T e^{A^Tσ} M e^{Aσ} x_a dσ = x_a^T Q x_a ≤ λ_max(Q) = ‖Q‖

Also, using a change of integration variable from σ to τ = σ − t,

∫_t^∞ x_a^T e^{A^Tσ} M e^{Aσ} x_a dσ = ∫_0^∞ x_a^T e^{A^T(t+τ)} M e^{A(t+τ)} x_a dτ

                                    = x_a^T e^{A^Tt} Q e^{At} x_a ≥ λ_min(Q) ‖e^{At} x_a‖^2 = ‖e^{At}‖^2 / ‖Q^{−1}‖

Therefore

‖e^{At}‖^2 / ‖Q^{−1}‖ ≤ ‖Q‖

Since t was arbitrary, this gives

max_{t ≥ 0} ‖e^{At}‖ ≤ √( ‖Q‖ ‖Q^{−1}‖ )
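This bound is easy to test numerically with M = I. The sketch below (an addition to the solution, assuming SciPy; note scipy.linalg.solve_continuous_lyapunov solves aX + Xaᴴ = q, so we pass Aᵀ to obtain AᵀQ + QA = −I) uses a stable but non-normal A whose transient overshoots 1:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[-1.0, 5.0],
              [0.0, -2.0]])          # stable but non-normal

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q;
# with a = A^T this gives A^T Q + Q A = -I, i.e. M = I
Q = solve_continuous_lyapunov(A.T, -np.eye(2))

bound = np.sqrt(np.linalg.norm(Q, 2) * np.linalg.norm(np.linalg.inv(Q), 2))
peak = max(np.linalg.norm(expm(A * t), 2) for t in np.linspace(0.0, 10.0, 401))

assert peak <= bound
assert peak > 1.0                    # there really is a transient overshoot
```
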

Solution 7.17 Let F = A + (µ−ε)I. Then ‖F‖ ≤ ‖A‖ + µ − ε, all eigenvalues of F have real parts less than −ε, and

e^{Ft} = e^{At} e^{(µ−ε)t}

Thus

e^{At} = e^{−(µ−ε)t} e^{Ft}   (*)

By Theorem 7.11 the unique solution of F^TQ + QF = −I is

Q = ∫_0^∞ e^{F^Tσ} e^{Fσ} dσ

For any n × 1 vector x,

(d/dσ) x^T e^{F^Tσ} e^{Fσ} x = x^T e^{F^Tσ} [ F^T + F ] e^{Fσ} x

                            ≥ −‖F^T + F‖ x^T e^{F^Tσ} e^{Fσ} x   (Exercise 1.9)

                            ≥ −2( ‖A‖ + µ − ε ) x^T e^{F^Tσ} e^{Fσ} x

Thus for any t ≥ 0,

−x^T e^{F^Tt} e^{Ft} x = ∫_t^∞ (d/dσ) [ x^T e^{F^Tσ} e^{Fσ} x ] dσ

                      ≥ −2( ‖A‖ + µ − ε ) ∫_t^∞ x^T e^{F^Tσ} e^{Fσ} x dσ

                      ≥ −2( ‖A‖ + µ − ε ) x^T Q x


x^T e^{F^Tt} e^{Ft} x ≤ 2( ‖A‖ + µ − ε ) x^T Q x , t ≥ 0

which gives

‖e^{Ft}‖ ≤ √( 2( ‖A‖ + µ − ε ) ‖Q‖ ) , t ≥ 0

Thus the desired inequality follows from (*).

Solution 7.19 To show uniform exponential stability of A(t), write the 1,2-entry of A(t) as a(t), and let Q(t) = q(t) I, where

q(t) = 3 for t ≤ −1/2 ; q(t) = q_{1/2}(t) for −1/2 < t < 1/2 ; q(t) = 2 + e^{−2t} for t ≥ 1/2

Here q_{1/2}(t) is a continuously-differentiable 'patch' satisfying 2 ≤ q_{1/2}(t) ≤ 3 for −1/2 < t < 1/2, and another condition to be specified below. Then we have 2I ≤ Q(t) ≤ 3I for all t. Next consider

A^T(t) Q(t) + Q(t) A(t) + Q̇(t) = [ −2q(t)+q̇(t)  a(t)q(t) ; a(t)q(t)  −6q(t)+q̇(t) ] ≤ −νI

We choose ν = 1 and show that

[ −2q(t)+q̇(t)+1  a(t)q(t) ; a(t)q(t)  −6q(t)+q̇(t)+1 ] ≤ 0

for all t. With t < −1/2 or t > 1/2 it is easy to show that q(t) − q̇(t) − 1 ≥ 0, and a patch function can be sketched such that this inequality is satisfied for −1/2 < t < 1/2. Then, for all t,

−2q(t)+q̇(t)+1 ≤ −q(t) ≤ 0 , −6q(t)+q̇(t)+1 ≤ −5q(t) ≤ 0

[−2q(t)+q̇(t)+1][−6q(t)+q̇(t)+1] − a^2(t) q^2(t) ≥ [5 − a^2(t)] q^2(t) ≥ 4 q^2(t) ≥ 0

Thus we have proven uniform exponential stability.

To show A^T(t) is not uniformly exponentially stable, write the state equation as two scalar equations to compute

Φ_{A^T(t)}(t, 0) = [ e^{−t}  0 ; (e^t−e^{−3t})/4  e^{−3t} ] , t ≥ 0

and the existence of unbounded solutions is clear.

Solution 7.20 Using the characterization of uniform stability in Exercise 6.1, given ε > 0, let δ = β^{−1}(α(ε)). Then δ > 0, since α(ε) > 0, and the inverse exists since β(⋅) is strictly increasing. Then for any t_o, and any x_o such that ‖x_o‖ ≤ δ, the corresponding solution is such that

v(t, x(t)) ≤ v(t_o, x_o) ≤ β(‖x_o‖) ≤ β(δ) = α(ε) , t ≥ t_o

Therefore

α(‖x(t)‖) ≤ v(t, x(t)) ≤ α(ε) , t ≥ t_o

But since α(⋅) is strictly increasing, this gives ‖x(t)‖ ≤ ε, t ≥ t_o, and thus the state equation is uniformly stable.


CHAPTER 8

Solution 8.3 No. The matrix

A = [ −2  √8 ; 0  −1 ]

has negative eigenvalues, but

A + A^T = [ −4  √8 ; √8  −2 ]

has an eigenvalue at zero.
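A quick numerical confirmation of this counterexample (an addition, assuming NumPy is available):

```python
import numpy as np

A = np.array([[-2.0, np.sqrt(8.0)],
              [0.0, -1.0]])

# both eigenvalues of A are negative ...
assert max(np.linalg.eigvals(A).real) < 0

# ... yet A + A^T has determinant (-4)(-2) - 8 = 0, hence an eigenvalue at zero
eigs = np.linalg.eigvalsh(A + A.T)
assert np.isclose(eigs.max(), 0.0, atol=1e-12)
assert eigs.min() < 0
```
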

Solution 8.6 Viewing F(t) x(t) as a forcing term, for any t_o, x_o, and t ≥ t_o we can write

x(t) = Φ_{A+F}(t, t_o) x_o = Φ_A(t, t_o) x_o + ∫_{t_o}^t Φ_A(t, σ) F(σ) x(σ) dσ

which gives, for suitable constants γ, λ > 0,

‖x(t)‖ ≤ γ e^{−λ(t−t_o)} ‖x_o‖ + ∫_{t_o}^t γ e^{−λ(t−σ)} ‖F(σ)‖ ‖x(σ)‖ dσ

Thus

e^{λt} ‖x(t)‖ ≤ γ e^{λt_o} ‖x_o‖ + ∫_{t_o}^t γ ‖F(σ)‖ e^{λσ} ‖x(σ)‖ dσ

and the Gronwall-Bellman inequality (Lemma 3.2) implies

e^{λt} ‖x(t)‖ ≤ γ e^{λt_o} ‖x_o‖ e^{∫_{t_o}^t γ ‖F(σ)‖ dσ}

Therefore


‖x(t)‖ ≤ γ e^{−λ(t−t_o)} e^{∫_{t_o}^t γ ‖F(σ)‖ dσ} ‖x_o‖

        ≤ γ e^{−λ(t−t_o)} e^{∫_{t_o}^∞ γ ‖F(σ)‖ dσ} ‖x_o‖

        ≤ γ e^{γβ} e^{−λ(t−t_o)} ‖x_o‖

and we conclude the desired uniform exponential stability.

Solution 8.8 We can follow the proof of Theorem 8.7 (first and last portions) to show that the solution

Q(t) = ∫_0^∞ e^{A^T(t)σ} e^{A(t)σ} dσ

of

A^T(t) Q(t) + Q(t) A(t) = −I

is continuously differentiable and satisfies, for all t,

ηI ≤ Q(t) ≤ ρI

where η and ρ are positive constants. Then with

F(t) = A(t) − (1/2) Q^{−1}(t) Q̇(t)

an easy calculation shows

F^T(t) Q(t) + Q(t) F(t) + Q̇(t) = A^T(t) Q(t) + Q(t) A(t) = −I

Thus

ẋ(t) = F(t) x(t)

is uniformly exponentially stable by Theorem 7.4.

Solution 8.9 As in Exercise 8.8 we have, for all t,

ηI ≤ Q(t) ≤ ρI

which implies

‖Q^{−1}(t)‖ ≤ 1/η

Also, by the middle portion of the proof of Theorem 8.7,

‖Q̇(t)‖ ≤ 2 ‖Ȧ(t)‖ ‖Q(t)‖^2

Therefore

‖(1/2) Q^{−1}(t) Q̇(t)‖ ≤ βρ^2/η

for all t. Write

ẋ(t) = A(t) x(t) = [ A(t) − (1/2) Q^{−1}(t) Q̇(t) ] x(t) + (1/2) Q^{−1}(t) Q̇(t) x(t)

     ≜ F(t) x(t) + (1/2) Q^{−1}(t) Q̇(t) x(t)


Then the complete solution formula gives

x(t) = Φ_F(t, t_o) x_o + ∫_{t_o}^t Φ_F(t, σ) (1/2) Q^{−1}(σ) Q̇(σ) x(σ) dσ

and the result of Exercise 8.8 implies that there exist positive constants γ, λ such that, for any t_o and t ≥ t_o,

‖x(t)‖ ≤ γ e^{−λ(t−t_o)} ‖x_o‖ + ∫_{t_o}^t γ e^{−λ(t−σ)} (βρ^2/η) ‖x(σ)‖ dσ

Therefore

e^{λt} ‖x(t)‖ ≤ γ e^{λt_o} ‖x_o‖ + ∫_{t_o}^t (γβρ^2/η) e^{λσ} ‖x(σ)‖ dσ

and the Gronwall-Bellman inequality (Lemma 3.2) implies

e^{λt} ‖x(t)‖ ≤ γ e^{λt_o} ‖x_o‖ e^{∫_{t_o}^t (γβρ^2/η) dσ}

Thus

‖x(t)‖ ≤ γ e^{−(λ − γβρ^2/η)(t−t_o)} ‖x_o‖

Now, writing the left side as ‖Φ_A(t, t_o) x_o‖ and for any t_o and t ≥ t_o choosing the appropriate unity-norm x_o gives

‖Φ_A(t, t_o)‖ ≤ γ e^{−(λ − γβρ^2/η)(t−t_o)}

For β sufficiently small this gives the desired uniform exponential stability. (Note that Theorem 8.6 also can be used to conclude that uniform exponential stability of ẋ(t) = F(t) x(t) implies uniform exponential stability of

ẋ(t) = [ F(t) + (1/2) Q^{−1}(t) Q̇(t) ] x(t) = A(t) x(t)

for β sufficiently small.)

Solution 8.10 With F(t) = A(t) + (µ/2)I we have that ‖F(t)‖ ≤ α + µ/2, Ḟ(t) = Ȧ(t), and the eigenvalues of F(t) satisfy Re[λ_F(t)] ≤ −µ/2. The unique solution of

F^T(t) Q(t) + Q(t) F(t) = −I

is

Q(t) = ∫_0^∞ e^{F^T(t)σ} e^{F(t)σ} dσ

As in the proof of Theorem 8.7, there is a constant ρ such that ‖Q(t)‖ ≤ ρ for all t. Now, for any n × 1 vector z,

(d/dσ) z^T e^{F^T(t)σ} e^{F(t)σ} z = z^T e^{F^T(t)σ} [ F^T(t) + F(t) ] e^{F(t)σ} z

                                  ≥ −(2α + µ) z^T e^{F^T(t)σ} e^{F(t)σ} z

Thus for any τ ≥ 0,


−z^T e^{F^T(t)τ} e^{F(t)τ} z = ∫_τ^∞ (d/dσ) [ z^T e^{F^T(t)σ} e^{F(t)σ} z ] dσ

                            ≥ −(2α + µ) ∫_τ^∞ z^T e^{F^T(t)σ} e^{F(t)σ} z dσ

                            ≥ −(2α + µ) ∫_0^∞ z^T e^{F^T(t)σ} e^{F(t)σ} z dσ

                            = −(2α + µ) z^T Q(t) z

Thus

e^{F^T(t)τ} e^{F(t)τ} ≤ (2α + µ) Q(t) , τ ≥ 0

and using

e^{F(t)τ} = e^{A(t)τ} e^{(µ/2)τ} , τ ≥ 0

gives

‖e^{A(t)τ}‖ ≤ √( (2α+µ)ρ ) e^{−(µ/2)τ} , τ ≥ 0

Solution 8.11 Write (the chain rule is valid since u(t) is a scalar)

q̇(t) = −A^{−1}(u(t)) [ (dA/du)(u(t)) u̇(t) ] A^{−1}(u(t)) b(u(t)) − A^{−1}(u(t)) (db/du)(u(t)) u̇(t)

     ≜ −B(t) u̇(t)

Then

ẋ(t) = A(u(t)) x(t) + b(u(t))

     = A(u(t)) [ x(t) − q(t) ] + A(u(t)) q(t) + b(u(t)) = A(u(t)) [ x(t) − q(t) ]

gives

(d/dt) [ x(t) − q(t) ] = A(u(t)) [ x(t) − q(t) ] + B(t) u̇(t)   (*)

Since

‖(d/dt) A(u(t))‖ = ‖(dA/du)(u(t)) u̇(t)‖ = ‖(dA/du)(u(t))‖ |u̇(t)|

we can conclude from Theorem 8.7 that for δ sufficiently small, and u(t) such that |u̇(t)| ≤ δ for all t, there exist positive constants γ and η (depending on u(t)) such that

‖Φ_{A(u(t))}(t, σ)‖ ≤ γ e^{−η(t−σ)} , t ≥ σ ≥ 0

But the smoothness assumptions on A(⋅) and b(⋅) and the bounds on u(t) also give that there exists a positive constant β such that ‖B(t)‖ ≤ β for t ≥ 0. Thus the solution formula for (*) gives

‖x(t) − q(t)‖ ≤ γ ‖x(0) − q(0)‖ + γβδ/η , t ≥ 0

for u(t) as above, and the claimed result follows.


CHAPTER 9

Solution 9.7 Write

[ B  (A−βI)B  (A−βI)^2B  ⋅⋅⋅ ] = [ B  AB−βB  A^2B−2βAB+β^2B  ⋅⋅⋅ ]

= [ B  AB  A^2B  ⋅⋅⋅ ] [ I_m  −βI_m  β^2I_m  ⋅⋅⋅ ; 0  I_m  −2βI_m  ⋅⋅⋅ ; 0  0  I_m  ⋅⋅⋅ ; ⋮  ⋮  ⋮  ⋱ ]

Clearly the two controllability matrices have the same rank. (The solution is even easier using rank tests from Chapter 13.)

Solution 9.8 Since A has negative-real-part eigenvalues,

Q = ∫_0^∞ e^{At} B B^T e^{A^Tt} dt

is well defined, symmetric, and

AQ + QA^T = ∫_0^∞ [ A e^{At} B B^T e^{A^Tt} + e^{At} B B^T e^{A^Tt} A^T ] dt

          = ∫_0^∞ (d/dt) [ e^{At} B B^T e^{A^Tt} ] dt

          = −BB^T

Also it is clear that Q is positive semidefinite. If it is not positive definite, then for some nonzero, n × 1 x,

0 = x^T Q x = ∫_0^∞ x^T e^{At} B B^T e^{A^Tt} x dt = ∫_0^∞ ‖x^T e^{At} B‖^2 dt

Thus x^T e^{At} B = 0 for all t ≥ 0, and it follows that


0 = (d^j/dt^j) [ x^T e^{At} B ] |_{t=0} = x^T A^j B

for j = 0, 1, 2, . . . . But this implies

x^T [ B  AB  ⋅⋅⋅  A^{n−1}B ] = 0

which contradicts the controllability hypothesis. Thus Q is positive definite.
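This Lyapunov-equation characterization of the controllability Gramian can be checked numerically. The sketch below (an addition to the solution, assuming SciPy; scipy.linalg.solve_continuous_lyapunov solves aX + Xaᴴ = q, which is exactly AQ + QAᵀ = −BBᵀ here):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Controllable pair with stable A (eigenvalues -1, -2)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

# Q solves AQ + QA^T = -BB^T, matching the integral definition of Q
Q = solve_continuous_lyapunov(A, -B @ B.T)
assert np.allclose(A @ Q + Q @ A.T, -B @ B.T)

# controllability of (A, B) makes Q positive definite
assert np.all(np.linalg.eigvalsh(Q) > 0)
```
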

Solution 9.9 Suppose λ is an eigenvalue of A, and p is a corresponding left eigenvector. Then p ≠ 0, and

p^T A = λ p^T

This implies both

p^H A = λ̄ p^H , A^T p = λ p

Now suppose Q is as claimed. Then

p^H A Q p + p^H Q A^T p = λ̄ p^H Q p + λ p^H Q p = −p^H B B^T p

that is,

2 Re[λ] p^H Q p = −p^H B B^T p   (*)

This gives Re[λ] ≤ 0 since Q is positive definite. Now suppose Re[λ] = 0. Then (*) gives p^H B = 0. Also, for j = 1, 2, . . . ,

p^H A^j B = λ̄ p^H A^{j−1} B = ⋅⋅⋅ = λ̄^j p^H B = 0

Thus

p^H [ B  AB  ⋅⋅⋅  A^{n−1}B ] = 0

which contradicts the controllability assumption. Therefore Re[λ] < 0.

Solution 9.10 Let

W_y(t_o, t_f) ≜ ∫_{t_o}^{t_f} C(t_f) Φ(t_f, t) B(t) B^T(t) Φ^T(t_f, t) C^T(t_f) dt

If W_y(t_o, t_f) is invertible, given any x(t_o) = x_o choose

u(t) = −B^T(t) Φ^T(t_f, t) C^T(t_f) W_y^{−1}(t_o, t_f) C(t_f) Φ(t_f, t_o) x_o

Then the corresponding complete solution of the state equation gives

y(t_f) = C(t_f) Φ(t_f, t_o) x_o − ∫_{t_o}^{t_f} C(t_f) Φ(t_f, σ) B(σ) B^T(σ) Φ^T(t_f, σ) C^T(t_f) dσ W_y^{−1}(t_o, t_f) C(t_f) Φ(t_f, t_o) x_o = 0

and we have shown output controllability on [t_o, t_f].


Now suppose the state equation is output controllable on [t_o, t_f], but that W_y(t_o, t_f) is not invertible. Then there exists a p × 1 vector y_a ≠ 0 such that y_a^T W_y(t_o, t_f) y_a = 0. Using by now familiar arguments, this gives

y_a^T C(t_f) Φ(t_f, t) B(t) = 0 , t ∈ [t_o, t_f]

Consider the initial state

x_o = Φ(t_o, t_f) C^T(t_f) [ C(t_f) C^T(t_f) ]^{−1} y_a

which is well defined and nonzero since rank C(t_f) = p. There exists an input u_a(t) such that

0 = C(t_f) Φ(t_f, t_o) x_o + ∫_{t_o}^{t_f} C(t_f) Φ(t_f, σ) B(σ) u_a(σ) dσ

  = y_a + ∫_{t_o}^{t_f} C(t_f) Φ(t_f, σ) B(σ) u_a(σ) dσ

Premultiplying by y_a^T gives

0 = y_a^T y_a

This contradicts y_a ≠ 0, and thus W_y(t_o, t_f) is invertible. The rank assumption on C(t_f) is needed in the necessity proof to guarantee that x_o is well defined. For m = p = 1, invertibility of W_y(t_o, t_f) is equivalent to existence of a t_a ∈ (t_o, t_f) such that

C(t_f) Φ(t_f, t_a) B(t_a) ≠ 0

That is, there exists a t_a ∈ (t_o, t_f) such that the output response at t_f to an impulse input at t_a is nonzero.

Solution 9.11 From Exercise 9.10, since rank C = p, the state equation is output controllable if and only if for some fixed t_f > 0,

W_y ≜ ∫_0^{t_f} C e^{A(t_f−t)} B B^T e^{A^T(t_f−t)} C^T dt

is invertible. We will show this holds if and only if

rank [ CB  CAB  ⋅⋅⋅  CA^{n−1}B ] = p

by showing equivalence of the negations. If W_y is not invertible, there exists a nonzero p × 1 vector y_a such that y_a^T W_y y_a = 0. Thus

y_a^T C e^{A(t_f−t)} B = 0 , t ∈ [0, t_f]

Differentiating repeatedly, and evaluating at t = t_f gives

y_a^T C A^j B = 0 , j = 0, 1, . . .

Thus

y_a^T [ CB  CAB  ⋅⋅⋅  CA^{n−1}B ] = 0

and this implies

rank [ CB  CAB  ⋅⋅⋅  CA^{n−1}B ] < p

Conversely, if the rank condition fails, then there exists a nonzero y_a such that y_a^T C A^j B = 0, j = 0, . . . , n−1. Then


y_a^T C e^{A(t_f−t)} B = y_a^T C Σ_{k=0}^{n−1} α_k(t_f−t) A^k B = 0 , t ∈ [0, t_f]

Therefore y_a^T W_y y_a = 0, which implies that W_y is not invertible.

For m = p = 1 argue as in Solution 9.10 to show that a linear state equation is output controllable if and only if its impulse response (equivalently, transfer function) is not identically zero.
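The rank test just derived is easy to implement. The sketch below (an addition; the function name and the test matrices are hypothetical examples, assuming NumPy) applies it to a pair of systems where the outcome is clear by inspection:

```python
import numpy as np

def output_controllable(A, B, C):
    """Rank test from Exercise 9.11: rank [CB CAB ... CA^{n-1}B] = p (assumes rank C = p)."""
    n = A.shape[0]
    blocks = []
    Ak_B = B
    for _ in range(n):
        blocks.append(C @ Ak_B)
        Ak_B = A @ Ak_B
    return np.linalg.matrix_rank(np.hstack(blocks)) == C.shape[0]

# decoupled modes: u drives only x1
A = np.diag([-1.0, -2.0])
B = np.array([[1.0], [0.0]])
assert output_controllable(A, B, np.array([[1.0, 0.0]]))       # y = x1 is driven by u
assert not output_controllable(A, B, np.array([[0.0, 1.0]]))   # y = x2 never sees u
```
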

Solution 9.17 Beginning with

y(t) = c(t) x(t)

ẏ(t) = ċ(t) x(t) + c(t) ẋ(t) = [ ċ(t) + c(t) A(t) ] x(t) + c(t) b(t) u(t) = L_1(t) x(t) + L_0(t) b(t) u(t)

it is easy to show by induction that

y^{(k)}(t) = L_k(t) x(t) + Σ_{j=0}^{k−1} (d^{k−j−1}/dt^{k−j−1}) [ L_j(t) b(t) u(t) ] , k = 1, 2, . . .

Now if

L_n(t) M^{−1}(t) ≜ [ α_0(t)  α_1(t)  ⋅⋅⋅  α_{n−1}(t) ]

then

Σ_{i=0}^{n−1} α_i(t) L_i(t) = [ α_0(t)  ⋅⋅⋅  α_{n−1}(t) ] [ L_0(t) ; ⋮ ; L_{n−1}(t) ] = L_n(t)

Thus we can write

y^{(n)}(t) − Σ_{i=0}^{n−1} α_i(t) y^{(i)}(t) = L_n(t) x(t) + Σ_{j=0}^{n−1} (d^{n−j−1}/dt^{n−j−1}) [ L_j(t) b(t) u(t) ]

      − Σ_{i=0}^{n−1} α_i(t) L_i(t) x(t) − Σ_{i=0}^{n−1} α_i(t) Σ_{j=0}^{i−1} (d^{i−j−1}/dt^{i−j−1}) [ L_j(t) b(t) u(t) ]

  = Σ_{j=0}^{n−1} (d^{n−j−1}/dt^{n−j−1}) [ L_j(t) b(t) u(t) ] − Σ_{i=0}^{n−1} α_i(t) Σ_{j=0}^{i−1} (d^{i−j−1}/dt^{i−j−1}) [ L_j(t) b(t) u(t) ]

This is in the desired form of an nth-order differential equation.


CHAPTER 10

Solution 10.2 We show equivalence of full-rank failure in the respective controllability and observability matrices, and thus conclude that one realization is controllable and observable (minimal) if and only if the other is controllable and observable (minimal). First,

rank [ B  AB  ⋅⋅⋅  A^{n−1}B ] < n

if and only if there exists a nonzero, n × 1 vector q such that

q^T B = q^T AB = ⋅⋅⋅ = q^T A^{n−1}B = 0

This holds if and only if

q^T B = q^T (A+BC)B = ⋅⋅⋅ = q^T (A+BC)^{n−1}B = 0

which is equivalent to

rank [ B  (A+BC)B  ⋅⋅⋅  (A+BC)^{n−1}B ] < n

Similarly,

rank [ C ; CA ; ⋮ ; CA^{n−1} ] < n

if and only if there exists a nonzero, n × 1 vector p such that

Cp = CAp = ⋅⋅⋅ = CA^{n−1}p = 0

This is equivalent to

Cp = C(A+BC)p = ⋅⋅⋅ = C(A+BC)^{n−1}p = 0

which is equivalent to

rank [ C ; C(A+BC) ; ⋮ ; C(A+BC)^{n−1} ] < n

Solution 10.9 Since


C(t) B(σ) = H(t) F(σ)   (*)

for all t, σ, picking an appropriate t_o and t_f > t_o,

M_x(t_o, t_f) W_x(t_o, t_f) = ∫_{t_o}^{t_f} C^T(t) H(t) dt ∫_{t_o}^{t_f} F(σ) B^T(σ) dσ   (**)

where the left side is a product of invertible matrices by minimality. Therefore the two matrices on the right side are invertible. Let

P^{−1} = M_x^{−1}(t_o, t_f) ∫_{t_o}^{t_f} C^T(t) H(t) dt

Then multiply both sides of (*) by C^T(t) and integrate with respect to t to obtain

M_x(t_o, t_f) B(σ) = ∫_{t_o}^{t_f} C^T(t) H(t) dt F(σ)

for all σ. That is,

B(σ) = P^{−1} F(σ)

for all σ. Similarly, (*) gives

C(t) W_x(t_o, t_f) = H(t) ∫_{t_o}^{t_f} F(σ) B^T(σ) dσ

that is,

C(t) = H(t) ∫_{t_o}^{t_f} F(σ) B^T(σ) dσ W_x^{−1}(t_o, t_f)

But (**) then gives

∫_{t_o}^{t_f} F(σ) B^T(σ) dσ W_x^{−1}(t_o, t_f) = [ ∫_{t_o}^{t_f} C^T(t) H(t) dt ]^{−1} M_x(t_o, t_f) = P

so we have

C(t) = H(t) P

for all t. Noting that 0 = P^{−1} ⋅ 0 ⋅ P, we have that P is a change of variables relating the two zero-A minimal realizations. Since a change of variables always can be used to obtain a zero-A realization, this shows that any two minimal realizations of a given weighting pattern are related by a variable change.

Solution 10.11 Evaluating

X(t+σ) = X(t) X(σ)

at σ = −t gives that X(t) is invertible, and X^{−1}(t) = X(−t) for all t. Differentiating with respect to t, and with respect to σ, and using

(∂/∂t) X(t+σ) = (∂/∂σ) X(t+σ)

gives


[ (d/dt) X(t) ] X(σ) = X(t) [ (d/dσ) X(σ) ]

which implies

(d/dσ) X(σ) = X(−t) [ (d/dt) X(t) ] X(σ)

Integrate both sides with respect to t from a fixed t_o to a fixed t_f > t_o to obtain

(t_f − t_o) (d/dσ) X(σ) = ∫_{t_o}^{t_f} X(−t) [ (d/dt) X(t) ] dt X(σ)

Now let

A = (1/(t_f − t_o)) ∫_{t_o}^{t_f} X(−t) [ (d/dt) X(t) ] dt

to write

(d/dσ) X(σ) = A X(σ) , X(0) = I

This implies X(σ) = e^{Aσ}. (Of course there are quicker ways. For example note that

(∂/∂t) X(t+σ) = (∂/∂σ) X(t+σ) = X(t) (d/dσ) X(σ)

Evaluating at σ = 0 gives Ẋ(t) = X(t) Ẋ(0), which implies

X(t) = X(0) e^{Ẋ(0)t} = e^{Ẋ(0)t}

Also the result holds for continuous solutions of the functional equation, though the proof is much more difficult.)

Solution 10.12 If rank G_i = r_i we can write (admittedly using a matrix factorization unreviewed in the text)

G_i = C_i B_i

where C_i is p × r_i, B_i is r_i × m, and both have rank r_i. Then it is easy to check that

A = block diagonal { −λ_i I_{r_i} , i = 1, . . . , r } , B = [ B_1 ; ⋮ ; B_r ] , C = [ C_1  ⋅⋅⋅  C_r ]

is a realization of G(s) of dimension r_1 + ⋅⋅⋅ + r_r = n. We need only show that this realization is controllable and observable. Write

[ B  AB  ⋅⋅⋅  A^{n−1}B ] = [ B_1  0  ⋅⋅⋅  0 ; 0  B_2  ⋅⋅⋅  0 ; ⋮  ⋮  ⋱  ⋮ ; 0  0  ⋅⋅⋅  B_r ] [ I_m  −λ_1 I_m  ⋅⋅⋅  (−λ_1)^{n−1} I_m ; ⋮  ⋮  ⋮ ; I_m  −λ_r I_m  ⋅⋅⋅  (−λ_r)^{n−1} I_m ]

On the right side the first matrix has full row rank n, while the second has full row rank due to its Vandermonde structure and the fact that λ_1, . . . , λ_r are distinct. This shows controllability. A similar argument shows observability. (Controllability and observability can be shown more easily using rank tests developed in Chapter 13.)


CHAPTER 11

Solution 11.4 Since

rank [ b  Ab ] = rank [ 1  −1 ; 1  −1 ] = 1

the state equation is not minimal. It is easy to compute the impulse response:

G(t, σ) = C(t) e^{A(t−σ)} B = (t^2 + 1) e^{−(t−σ)}

Then a factorization is obvious, giving a minimal realization

ẋ(t) = e^t u(t)

y(t) = (t^2 + 1) e^{−t} x(t)

Solution 11.7 For the given impulse response,

Γ_{22}(t, σ) = [ 1 + e^{2t}/2 + e^{2σ}/2  e^{2σ} ; e^{2t}  0 ]

It is easy to check that rank Γ_{22}(t, σ) = 2 for all t, σ, and a little more calculation shows that rank Γ_{33}(t, σ) = 2. Then a minimal realization is, using formulas in the proof of Theorem 11.3,

F(t, σ) = Γ_{22}(t, σ)

F_c(t, σ) = [ 1 + e^{2t}/2 + e^{2σ}/2  e^{2σ} ]

C(t) = F_c(t, t) F^{−1}(t, t) = [ 1  0 ]

B(t) = F_r(t, t) = [ 1 + e^{2t} ; e^{2t} ]

A(t) = F_s(t, t) F^{−1}(t, t) = [ e^{2t}  0 ; 2e^{2t}  0 ] F^{−1}(t, t) = [ 0  1 ; 0  2 ]

Solution 11.12 The infinite Hankel matrix is


Γ = [ 1  1  1  . . . ]
    [ 1  1  1  . . . ]
    [ 1  1  1  . . . ]
    [ ⋮  ⋮  ⋮       ]

and clearly the rank condition in Theorem 11.7 is satisfied with l = k = n = 1. Then, following the proof of Theorem 11.7,

F = Fs = Fc = Fr = H1 = H1s = 1

and a minimal (dimension-1) realization is

ẋ(t) = x(t) + u(t)

y(t) = x(t)

For the truncated sequence,

Γ = [ 1  1  1  0  . . . ]
    [ 1  1  0  0  . . . ]
    [ 1  0  0  0  . . . ]
    [ 0  0  0  0  . . . ]
    [ ⋮  ⋮  ⋮  ⋮       ]

The rank condition in Theorem 11.7 is satisfied with l = k = n = 3. Taking

F = H3 = [ 1  1  1 ]       Fs = H3s = [ 1  1  0 ]
         [ 1  1  0 ]  ,               [ 1  0  0 ]
         [ 1  0  0 ]                  [ 0  0  0 ]

Fc = [ 1  1  1 ] ,   Fr = [ 1 ]
                          [ 1 ]
                          [ 1 ]

gives a minimal realization specified by

A = Fs F^{−1} = [ 0  1  0 ]       B = [ 1 ]       C = [ 1  0  0 ]
                [ 0  0  1 ]  ,        [ 1 ]  ,
                [ 0  0  0 ]           [ 1 ]

(This is an example of ‘Silverman’s formulas’ in Exercise 11.13. Also, it is not hard to see that truncation of the sequence after any finite number n of 1’s will lead to a minimal realization of dimension n.)
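The computation for the truncated sequence can be sketched numerically (NumPy; the matrices are exactly the H3, H3s, Fc, Fr above), confirming that the resulting realization reproduces the Markov parameters 1, 1, 1, 0, 0, . . . :

```python
import numpy as np

# Hankel matrices of the truncated Markov sequence 1, 1, 1, 0, 0, ...
H3 = np.array([[1., 1., 1.],
               [1., 1., 0.],
               [1., 0., 0.]])
H3s = np.array([[1., 1., 0.],   # shifted Hankel matrix
                [1., 0., 0.],
                [0., 0., 0.]])

A = H3s @ np.linalg.inv(H3)     # A = Fs F^{-1}
B = np.array([[1.], [1.], [1.]])
C = np.array([[1., 0., 0.]])

# Markov parameters C A^j B reproduce the truncated sequence
markov = [(C @ np.linalg.matrix_power(A, j) @ B).item() for j in range(6)]
print(markov)
```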

Solution 11.13 Writing the rank-n infinite Hankel matrix as

Γ = [ G0    G1    G2    . . . ]
    [ G1    G2    G3    . . . ]
    [ ⋮     ⋮     ⋮          ]
    [ Gn−1  Gn    Gn+1  . . . ]
    [ ⋮     ⋮     ⋮          ]

suppose for some 1 ≤ i ≤ n a left-to-right column search yields that the first linearly dependent column is column i. Then there exist scalars α0, . . . , αi−2 such that column i is given by the linear combination


[ Gi−1   ]        [ G0   ]                  [ Gi−2   ]
[ Gi     ]        [ G1   ]                  [ Gi−1   ]
[ ⋮      ]  = α0  [ ⋮    ]  + . . . + αi−2  [ ⋮      ]
[ Gn−2+i ]        [ Gn−1 ]                  [ Gn−3+i ]

By ignoring the top entry, this linear combination shows that column i+1 is given by the same linear combination of the i−1 columns to its left, and so on. Thus by the rank assumption on Γ there cannot exist such an i, and the first n columns of Γ are linearly independent. A similar argument shows that the first n columns of Γn,n+j are linearly independent, for every j ≥ 0, and thus that Γnn is invertible.

It remains only to show that the given A, B, C provides a realization for G(s), since minimality is then immediate. Premultiplication by Γnn verifies

Γnn^{−1} [ Gk     ]
         [ ⋮      ]  = e_{k+1} ,   k = 0, . . . , n−1
         [ Gn+k−1 ]

Then, since A = Γnns Γnn^{−1},

A [ Gk     ]                   [ Gk+1 ]
  [ ⋮      ]  = Γnns e_{k+1} = [ ⋮    ]  ,   k = 0, . . . , n−1
  [ Gn+k−1 ]                   [ Gn+k ]

Now, CB = G0, and

CA^j B = CA^{j−1} A [ G0   ]  = CA^{j−1} [ G1 ]  = . . . = C [ Gj     ]  = Gj ,   j = 1, . . . , n
                    [ ⋮    ]             [ ⋮  ]              [ ⋮      ]
                    [ Gn−1 ]             [ Gn ]              [ Gn−1+j ]

To complete the verification we use the fact that each dependent column of Γn,n+j is given by the same linear combination of n columns to its left. This follows by writing column n+1 of Γ as a linear combination of the first n (linearly independent) columns, and deleting partitions from the top of the resulting expression. This implies that multiplying any column of Γn,n+j by A gives the next column to the right. Thus

CA^{n+j} B = CA^j [ Gn    ]  = C [ Gn+j    ]  = Gn+j ,   j = 1, 2, . . .
                  [ ⋮     ]      [ ⋮       ]
                  [ G2n−1 ]      [ G2n−1+j ]


CHAPTER 12

Solution 12.1 If the state equation is uniformly bounded-input, bounded-output stable, then it is clear from the definition that given δ we can take ε = η δ.

Now suppose the ε, δ condition holds. In particular we can take δ = 1 and assume ε is such that, for any to,

||u(t)|| ≤ 1 ,   t ≥ to

implies

||y(t)|| ≤ ε ,   t ≥ to

Now suppose u(t) is any bounded input signal. Given to let µ = sup_{t ≥ to} ||u(t)||. Note µ > 0 can be assumed, for otherwise we have a trivial case. Then ||u(t)||/µ ≤ 1 for all t ≥ to, and the zero-state response to u(t) satisfies

||y(t)|| = || ∫_{to}^{t} G(t, σ) u(σ) dσ || = µ || ∫_{to}^{t} G(t, σ) u(σ)/µ dσ || ≤ µ ε = ε sup_{t ≥ to} ||u(t)|| ,   t ≥ to

Thus we have

sup_{t ≥ to} ||y(t)|| ≤ ε sup_{t ≥ to} ||u(t)||

and conclude uniform bounded-input, bounded-output stability, with η = ε.

Solution 12.8 For any δ > 0, and constant A and B,

W(t−δ, t) = ∫_{t−δ}^{t} e^{A(t−δ−σ)} B B^T e^{A^T(t−δ−σ)} dσ

Changing the variable of integration from σ to τ = t−σ yields

W(t−δ, t) = e^{−Aδ} ∫_{0}^{δ} e^{Aτ} B B^T e^{A^T τ} dτ · e^{−A^T δ}

It is easy to prove (by showing the equivalence of the negations by contradiction, as in the proof of Theorem 9.5) that this is positive definite if and only if


rank [ B  AB  . . .  A^{n−1}B ] = n

Then given δ we can take

ε = λmin ( e^{−Aδ} ∫_{0}^{δ} e^{Aτ} B B^T e^{A^T τ} dτ · e^{−A^T δ} )

For a time-varying example, take scalar a(t) = 0, b(t) = e^{−t/2}. Then

W(t−δ, t) = e^{−t}(e^{δ} − 1)

Given any δ > 0, W(t−δ, t) > 0 for all t, but there exists no ε > 0 such that

W(t−δ, t) ≥ ε

for all t.
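The scalar counterexample is easy to check numerically (NumPy; the closed form is the one derived above, cross-checked here by trapezoidal quadrature):

```python
import numpy as np

# scalar example: a(t) = 0, b(t) = e^{-t/2}, so the Gramian over
# [t - delta, t] has the closed form W = e^{-t} (e^delta - 1)
def W(t, delta):
    return np.exp(-t) * (np.exp(delta) - 1.0)

delta, t0 = 1.0, 3.0
sig = np.linspace(t0 - delta, t0, 20001)
quad = np.sum((np.exp(-sig[:-1]) + np.exp(-sig[1:])) / 2.0 * np.diff(sig))
assert abs(quad - W(t0, delta)) < 1e-8   # quadrature matches closed form

ts = np.array([0.0, 5.0, 10.0, 20.0])
print(W(ts, delta))   # positive for every t, yet no uniform lower bound
```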

Solution 12.9 Consider a scalar state equation

ẋ(t) = b(t) u(t)

y(t) = x(t)

where b(t) is a ‘smooth bump function’ described as follows. It is a continuous, nonnegative function that is zero for t ∉ [0, 1], and has unit area on [0, 1]. Then for any input signal the zero-state response satisfies

|y(t)| ≤ ∫_{0}^{1} b(σ) |u(σ)| dσ

for any t. Thus for any to and any t ≥ to,

|y(t)| ≤ ∫_{0}^{1} b(σ) dσ · sup_{t ≥ to} |u(t)| ≤ sup_{t ≥ to} |u(t)|

and the state equation is uniformly bounded-input, bounded-output stable with η = 1. However if we consider a bounded input that is continuous and satisfies

u(t) = { 1 ,   0 ≤ t ≤ 1
       { 0 ,   t ≥ 2

then lim_{t → ∞} u(t) = 0, but y(t) = 1 for t ≥ 1.

The result is true in the time-invariant case, however. Suppose

∫_{0}^{∞} ||G(t)|| dt = ρ < ∞

and suppose u(t) is continuous, and u(t) → 0 as t → ∞. Then u(t) is bounded, and we let µ = sup_{t ≥ 0} ||u(t)||. Now given ε > 0, pick T1 > 0 such that

∫_{T1}^{∞} ||G(t)|| dt ≤ ε/(2µ)

and pick T2 > 0 such that


||u(t)|| ≤ ε/(2ρ) ,   t ≥ T2

Let T = 2 max [T1, T2]. Then for t ≥ T, splitting the integral at T/2 (where T/2 ≥ T2),

||y(t)|| ≤ ∫_{0}^{t} ||G(t−σ)|| ||u(σ)|| dσ ≤ µ ∫_{0}^{T/2} ||G(t−σ)|| dσ + (ε/(2ρ)) ∫_{T/2}^{t} ||G(t−σ)|| dσ

Changing the variables of integration gives

||y(t)|| ≤ µ ∫_{t−T/2}^{t} ||G(τ)|| dτ + (ε/(2ρ)) ∫_{0}^{t−T/2} ||G(τ)|| dτ

         ≤ µ · ε/(2µ) + (ε/(2ρ)) · ρ = ε

using t − T/2 ≥ T/2 ≥ T1. This shows that y(t) → 0 as t → ∞.

Solution 12.11 The hypotheses imply that given ε > 0 there exist δ1, δ2 > 0 such that if

||xo|| < δ1 ;   ||u(t)|| < δ2 ,   t ≥ to

where u(t) is n × 1, then the solution of

ẋ(t) = A(t) x(t) + u(t) ,   x(to) = xo

satisfies

||x(t)|| < ε ,   t ≥ to

In particular, with xo = 0, this shows that if ||u(t)|| < δ2 for t ≥ to, then the corresponding zero-state solution of the state equation

ẋ(t) = A(t) x(t) + u(t)

y(t) = x(t)   (*)

satisfies ||y(t)|| < ε for t ≥ to. But this implies uniform bounded-input, bounded-output stability by Exercise 12.1. Thus there exists a finite constant α such that the impulse response of (*), which is identical to the transition matrix of A(t), satisfies

∫_{to}^{t} ||Φ(t, σ)|| dσ ≤ α

for all t, to such that t ≥ to. Since A(t) is bounded, this gives uniform exponential stability of

ẋ(t) = A(t) x(t)

by Theorem 6.8.

Solution 12.12 Suppose the impulse response is G(t), where G(t) = 0 for t < 0. For u(t) = e^{−λt}, t ≥ 0,


∫_{0}^{∞} y(t) e^{−ηt} dt = ∫_{0}^{∞} ( ∫_{0}^{t} G(t−σ) e^{−λσ} dσ ) e^{−ηt} dt = ∫_{0}^{∞} ( ∫_{0}^{∞} G(t−σ) e^{−λσ} dσ ) e^{−ηt} dt

                          = ∫_{0}^{∞} ( ∫_{0}^{∞} G(t−σ) e^{−ηt} dt ) e^{−λσ} dσ

where all integrals are well-defined because of the stability assumption, and λ, η > 0. Changing the variable of integration in the inner integral from t to γ = t−σ gives

∫_{0}^{∞} y(t) e^{−ηt} dt = ∫_{0}^{∞} ( ∫_{0}^{∞} G(γ) e^{−ηγ} dγ ) e^{−ησ} e^{−λσ} dσ

                          = G(s)|_{s = η} ∫_{0}^{∞} e^{−(η+λ)σ} dσ

                          = G(η) / (η + λ)

Without the stability assumption we can say that U(s) = 1/(s+λ) for Re[s] > −λ, and the integral for G(s) converges for Re[s] > Re[p1], . . . , Re[pn], where p1, . . . , pn are the poles of G(s). Thus

Y(s) = G(s)/(s+λ) = ∫_{0}^{∞} y(t) e^{−st} dt

is valid for Re[s] > −λ, Re[p1], . . . , Re[pn]. This implies that if

η > −λ, Re[p1], . . . , Re[pn]

then

∫_{0}^{∞} y(t) e^{−ηt} dt = G(η)/(η+λ)

even though y(t) may be unbounded.
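The identity ∫ y(t)e^{−ηt} dt = G(η)/(η+λ) is easy to confirm numerically for a concrete stable example (the data below — G(s) = 1/(s+1), λ = 2, η = 0.5 — are hypothetical choices for the sketch, not from the exercise):

```python
import numpy as np

lam, eta = 2.0, 0.5                 # hypothetical values, both > 0
G = lambda s: 1.0 / (s + 1.0)       # stable example: g(t) = e^{-t}

# zero-state response to u(t) = e^{-lam t}: y(t) = (e^{-t} - e^{-lam t})/(lam - 1)
t = np.linspace(0.0, 40.0, 400001)
y = (np.exp(-t) - np.exp(-lam * t)) / (lam - 1.0)

integrand = y * np.exp(-eta * t)
lhs = np.sum((integrand[:-1] + integrand[1:]) / 2.0 * np.diff(t))  # trapezoid rule
rhs = G(eta) / (eta + lam)
print(lhs, rhs)
```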

Solution 12.14 Given u(t), t ≥ 0, and xo, suppose x(t) is a solution of the given state equation. Then with v(t) = y(t) = C x(t) we have

ẋ(t) = A x(t) + B u(t) ,   x(0) = xo

ż(t) = AP z(t) + AB(CB)^{−1}C x(t)

     = AP z(t) + A(I − P) x(t) ,   z(0) = xo

Thus

ẋ(t) − ż(t) = AP [ x(t) − z(t) ] + B u(t) ,   x(0) − z(0) = 0

and this gives

x(t) − z(t) = ∫_{0}^{t} e^{AP(t−σ)} B u(σ) dσ

Since PB = 0 and

e^{AP(t−σ)} = Σ_{i=0}^{n−1} αi(t−σ) (AP)^i


we get

x(t) − z(t) = ∫_{0}^{t} α0(t−σ) B u(σ) dσ

Then

w(t) = −(CB)^{−1}CAP z(t) − (CB)^{−1}CAB(CB)^{−1}C x(t) + (CB)^{−1}C ẋ(t)

     = −(CB)^{−1}CAP z(t) − (CB)^{−1}CAB(CB)^{−1}C x(t) + (CB)^{−1}CA x(t) + (CB)^{−1}CB u(t)

     = −(CB)^{−1}CAP z(t) + (CB)^{−1}CA [ −B(CB)^{−1}C + I ] x(t) + u(t)

     = (CB)^{−1}CAP [ x(t) − z(t) ] + u(t)

     = (CB)^{−1}CAP ∫_{0}^{t} α0(t−σ) B u(σ) dσ + u(t)

Again using PB = 0 gives

w(t) = u(t) ,   t ≥ 0

To address stability, since PB = 0 we see that P is not invertible. Thus AP is not invertible, which implies the second state equation is never exponentially stable. The scalar case with A = −1, B = C = 1 is uniformly bounded-input, bounded-output stable, but the resulting

ż(t) = −v(t)

w(t) = v(t) + v̇(t)

is not, as the bounded input v(t) = cos(e^t) shows.


CHAPTER 13

Solution 13.1 Suppose n = 2 and A has complex eigenvalues. Let

A = [ a11  a12 ]       b = [ b1 ]
    [ a21  a22 ]  ,        [ b2 ]

Then A has eigenvalues

( a11 + a22 ± √( (a11+a22)² − 4(a11 a22 − a12 a21) ) ) / 2

and since the eigenvalues are complex,

(a11+a22)² − 4(a11 a22 − a12 a21) = (a11−a22)² + 4 a12 a21 < 0   (*)

Supposing that det [ b  Ab ] = 0, we will show that if b ≠ 0 we get a contradiction. For

0 = det [ b  Ab ] = a21 b1² − a12 b2² − (a11−a22) b1 b2

implies

(a11−a22)² b1² b2² = (a21 b1² − a12 b2²)²   (**)

If b1 = 0, b2 ≠ 0, then (**) implies a12 = 0, which contradicts (*). If b1 ≠ 0, b2 = 0, then (**) implies a21 = 0, which contradicts (*). If b1 ≠ 0 and b2 ≠ 0, then multiplying (*) by b1² b2² and using (**) gives

(a21 b1² − a12 b2²)² + 4 a12 a21 b1² b2² < 0

or,

(a21 b1² + a12 b2²)² < 0

which is a contradiction. Thus det [ b  Ab ] ≠ 0 for every b ≠ 0.

Conversely, suppose det [ b  Ab ] ≠ 0 for every b ≠ 0. If A has real eigenvalues, let p be a left eigenvector of A corresponding to λ, and take b ≠ 0 such that b^T p = 0. (Note b and p are real.) Then

p^T A = λ p^T ,   p^T b = 0

which implies that the state equation is not controllable for this b, a contradiction. Therefore A cannot have real eigenvalues, so it must have complex eigenvalues. (For the more challenging version of the problem, we can show controllability for all nonzero b implies n = 2 by using a (real) P to transform A to real Jordan form. Then for n > 2 pick a left eigenvector p of P^{−1}AP and a real b ≠ 0 such that p^T P^{−1} b = 0 to obtain a contradiction.)
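A numerical spot-check of the first direction (NumPy; the particular A below is a hypothetical example with complex eigenvalues): for this A, det [b Ab] = 3b1² + 2b2² − 2b1b2 is a positive definite quadratic form, so it is nonzero for every b ≠ 0.

```python
import numpy as np

# A has complex eigenvalues: trace 0, det 5, so eigenvalues ±j·sqrt(5)
A = np.array([[1., -2.],
              [3., -1.]])
assert np.all(np.abs(np.linalg.eigvals(A).imag) > 0)

# det [b Ab] = a21 b1^2 - a12 b2^2 - (a11 - a22) b1 b2 = 3 b1^2 + 2 b2^2 - 2 b1 b2
rng = np.random.default_rng(0)
dets = [np.linalg.det(np.column_stack([b, A @ b]))
        for b in rng.standard_normal((1000, 2))]
print(min(abs(d) for d in dets) > 0)
```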


Solution 13.4 We need to show that

rank [ B  AB  . . .  A^{n−1}B ] = n ,   rank [ A  B ]  = n + p   (+)
                                             [ C  D ]

if and only if the (n+p)-dimensional state equation

ż(t) = [ A  0 ] z(t) + [ B ] u(t)   (++)
       [ C  0 ]        [ D ]

is controllable. First suppose (+) holds but (++) is not controllable. Then there exists a complex so such that

rank [ soIn − A    0      B ]  < n + p   (*)
     [ −C         soIp    D ]

Since rank [ soI − A   B ] = n, this implies

rank [ −C   soIp   D ] < p

In turn, this implies so = 0, so that (*) becomes

rank [ −A   0   B ]  < n + p
     [ −C   0   D ]

and this contradicts the second rank condition in (+).

Conversely, supposing (++) is controllable, then

rank [ B   AB    A²B    . . . ]  = n + p
     [ D   CB    CAB    . . . ]

This implies

rank [ B  AB  . . .  A^{n−1}B ] = n

in other words, the first rank condition in (+) holds. Now suppose

rank [ A  B ]  < n + p
     [ C  D ]

Then

rank [ soIn − A    0      B ]  < n + p   at so = 0
     [ −C         soIp    D ]

that is, the controllability rank test for (++),

rank [ soIn+p − [ A  0 ; C  0 ]    [ B ; D ] ] < n + p

fails at so = 0, and this implies that (++) is not controllable. The contradiction shows that the second rank condition in (+) holds.

Solution 13.5 Since J has a single eigenvalue λ, controllability is equivalent to the condition

rank [ λI − J   B ] = n

From the form of the matrix λI − J it is clear that a necessary and sufficient condition for controllability is that the set of rows of B corresponding to zero rows of λI − J must be a linearly independent set of 1 × m vectors.

In the general Jordan form case, applying this condition for each eigenvalue λi gives a necessary and sufficient condition for controllability. (Note that independence of one set of such rows of B (corresponding to one distinct eigenvalue) from another set of such rows of B (corresponding to another distinct eigenvalue) is not required.)
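The row condition can be illustrated numerically (NumPy; the Jordan matrix and B matrices below are hypothetical examples): the rows of B matching the last row of each Jordan block must be independent.

```python
import numpy as np

lam = 2.0
# one eigenvalue, two Jordan blocks (sizes 2 and 1)
J = np.array([[lam, 1., 0.],
              [0., lam, 0.],
              [0., 0., lam]])

def controllable(J, B):
    n = J.shape[0]
    ctrb = np.hstack([np.linalg.matrix_power(J, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(ctrb) == n

# rows of B matching zero rows of lam*I - J are rows 1 and 2 (0-indexed)
B_good = np.array([[0., 0.],
                   [1., 0.],
                   [0., 1.]])   # those rows independent -> controllable
B_bad = np.array([[5., 7.],
                  [1., 2.],
                  [2., 4.]])    # rows 1, 2 dependent -> not controllable

print(controllable(J, B_good), controllable(J, B_bad))
```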


Solution 13.10 Since

[ P^{−1}B   (P^{−1}AP)P^{−1}B   . . .   (P^{−1}AP)^{n−1}P^{−1}B ] = P^{−1} [ B  AB  . . .  A^{n−1}B ]

and controllability indices are defined by a left-to-right linear independence search, it is clear that controllability indices are unaffected by state variable changes.

For the second part, let rk be the number of linearly dependent columns in A^k B that arise in the left-to-right column search of [ B  AB  . . .  A^{n−1}B ]. Note r0 = 0 since rank B = m. Then rk is the number of controllability indices that have value ≤ k. This is because for each of the rk columns of the form A^k Bi that are dependent, we have ρi ≤ k, since for j > 0 the vector A^{k+j} Bi also will be dependent on columns to its left. Thus for k = 1, . . . , m, rk − rk−1 gives the number of controllability indices with value k. Writing

[ BG  ABG  . . .  A^k BG ] = [ B  AB  . . .  A^k B ] · block diagonal { G , . . . , G }

and using the invertibility of G shows that the same sequence of rk’s are generated by left-to-right column search in [ BG  ABG  . . .  A^{n−1}BG ].

Solution 13.11 For the time-invariant case, if

p^T A = λ p^T ,   p^T B = 0

implies p = 0, then

p^T (A + BK) = λ p^T ,   p^T B = 0

obviously implies p = 0. Therefore controllability of the open-loop state equation implies controllability of the closed-loop state equation.

In the time-varying case, suppose the open-loop state equation is controllable on [to, tf]. Thus given x(to) = xo there exists an input signal ua(t) such that the corresponding solution xa(t) satisfies xa(tf) = 0. Then the closed-loop state equation

ż(t) = [ A(t) + B(t)K(t) ] z(t) + B(t) v(t)

with initial state z(to) = xo and input va(t) = ua(t) − K(t) xa(t) has the solution z(t) = xa(t). Thus z(tf) = 0. Since this argument applies for any xo, the closed-loop state equation is controllable on [to, tf].

Solution 13.12 By controllability, we can apply a variable change to controller form, with

Ā = Ao + Bo U P^{−1} = P A P^{−1} ,   B̄ = Bo R = P B

Then we can choose K̄ such that

Ā + B̄K̄ = [ 0     1     . . .   0     ]
          [ ⋮            ⋱     ⋮     ]
          [ 0     0     . . .   1     ]
          [ −p0   −p1   . . .   −pn−1 ]

Now we want to compute b such that


B̄b = [ 0 ]
     [ ⋮ ]
     [ 0 ]
     [ 1 ]

Using × to denote various unimportant entries, write

B̄b = Bo R b

where

Bo = block diagonal { [ 0  . . .  0  1 ]^T (ρi × 1) , i = 1, . . . , m } ,   R = [ 1  ×  . . .  × ]
                                                                                [ 0  1  . . .  × ]
                                                                                [ ⋮      ⋱    ⋮ ]
                                                                                [ 0  0  . . .  1 ]

This gives a set of equations of the form

0 = b1 + Σ_{i=2}^{m} ×i bi

0 = b2 + Σ_{i=3}^{m} ×i bi

⋮

0 = b_{m−1} + × bm

1 = bm

Clearly there is a solution for the entries of b, regardless of the ×’s. Now it is easy to conclude controllability of the single-input state equation by calculation of the form of the controllability matrix. Then changing to the original state variables gives the result, since controllability is preserved: in the original variables, take K = K̄P and the same b. For an example showing that b alone does not suffice, take Exercise 13.11 with all ×’s zero.
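A small numerical illustration of why the gain K is essential (NumPy; the pair A = 0, B = I below is a hypothetical example, not the one in the text): no single column direction b can work alone, but a prior feedback K makes the single-input pair controllable.

```python
import numpy as np

def ctrb(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# two-input example where no b alone works: A = 0, B = I
A = np.zeros((2, 2))
B = np.eye(2)
b = np.array([[0.], [1.]])
assert np.linalg.matrix_rank(ctrb(A, B)) == 2        # (A, B) controllable
assert np.linalg.matrix_rank(ctrb(A, B @ b)) == 1    # (A, Bb) is not

# with feedback K applied first, (A + BK, Bb) becomes controllable
K = np.array([[0., 1.],
              [0., 0.]])
print(np.linalg.matrix_rank(ctrb(A + B @ K, B @ b)))
```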

Solution 13.14 Supposing the rank of the controllability matrix is q, Theorem 13.1 gives an invertible Pa such that

Pa^{−1} A Pa = [ A11  A12 ]       Pa^{−1} B = [ B1 ]       C Pa = [ C1  C2 ]
               [ 0    A22 ]  ,                [ 0  ]  ,

where A11 is q × q and the state equation defined by C1, A11, B1 is controllable. Now suppose

rank [ C1            ]
     [ C1 A11        ]  = l
     [ ⋮             ]
     [ C1 A11^{n−1}  ]

Applying Theorem 13.12 there is an invertible Pb such that with

P = [ Pb   0    ]
    [ 0    In−q ]

we have


P^{−1}(Pa^{−1} A Pa)P = [ Â11  0    Â13 ]       P^{−1}(Pa^{−1} B) = [ B̂1 ]
                        [ Â21  Â22  Â23 ]  ,                       [ B̂2 ]   (*)
                        [ 0    0    Â33 ]                          [ 0  ]

C Pa P = [ Ĉ1  0  Ĉ2 ]

where Â11 is l × l, and in fact Â33 = A22 and Ĉ2 = C2. It is easy to see that the state equation formed from Ĉ1, Â11, B̂1 is both controllable and observable. Also an easy calculation using block triangular structure shows that the impulse response of the state equation defined by (*) is

Ĉ1 e^{Â11 t} B̂1

It remains only to show that l = s. Using the effect of variable changes on the controllability and observability matrices and the special structure of (*) give

[ C         ]                            [ Ĉ1            ]
[ CA        ] [ B  AB  . . .  A^{n−1}B ] = [ Ĉ1 Â11       ] [ B̂1  Â11 B̂1  . . .  Â11^{n−1} B̂1 ]
[ ⋮         ]                            [ ⋮             ]
[ CA^{n−1}  ]                            [ Ĉ1 Â11^{n−1}  ]

Thus

rank ( [ Ĉ1 ; Ĉ1 Â11 ; . . . ; Ĉ1 Â11^{n−1} ] [ B̂1  Â11 B̂1  . . .  Â11^{n−1} B̂1 ] ) = s

But

rank [ Ĉ1 ; Ĉ1 Â11 ; . . . ; Ĉ1 Â11^{l−1} ] = rank [ B̂1  Â11 B̂1  . . .  Â11^{l−1} B̂1 ] = l

and so we must have l = s.


CHAPTER 14

Solution 14.2 For any tf > 0,

W = ∫_{0}^{tf} e^{−At} B B^T e^{−A^T t} dt

is symmetric and positive definite by controllability, and

A W + W A^T = − ∫_{0}^{tf} (d/dt) [ e^{−At} B B^T e^{−A^T t} ] dt = − e^{−A tf} B B^T e^{−A^T tf} + B B^T

Letting K = −B^T W^{−1}, we have

(A + BK) W + W (A + BK)^T = − ( e^{−A tf} B B^T e^{−A^T tf} + B B^T )   (*)

Suppose λ is an eigenvalue of A + BK. Then λ is an eigenvalue of (A + BK)^T, and we let p ≠ 0 be a corresponding eigenvector. Then

(A + BK)^T p = λ p

Also,

p^H (A + BK) = λ̄ p^H

Pre- and post-multiplying (*) by p^H and p, respectively, gives

2 Re[λ] p^H W p ≤ 0

which implies Re[λ] ≤ 0. Further, if Re[λ] = 0, then

p^H ( e^{−A tf} B B^T e^{−A^T tf} + B B^T ) p = 0

Thus p^H B = 0, and this gives

p^H A B = p^H (A + BK − BK) B = p^H (A + BK) B − p^H B K B = λ̄ p^H B = 0

Continuing this calculation for p^H A² B, and so on, gives

p^H [ B  AB  . . .  A^{n−1}B ] = 0

which contradicts controllability of the given state equation.
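The stabilizing property of K = −B^T W^{−1} can be sketched numerically (NumPy; the pair A, B below is a hypothetical unstable but controllable example, and W is approximated by trapezoidal quadrature):

```python
import numpy as np

# hypothetical unstable but controllable pair
A = np.array([[1., 0.],
              [1., -2.]])
B = np.array([[1.],
              [0.]])

def expm(M, t):
    # matrix exponential via eigendecomposition (M diagonalizable here)
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real

# W = integral over [0, tf] of e^{-At} B B^T e^{-A^T t} dt, tf = 1
ts = np.linspace(0.0, 1.0, 2001)
vals = np.array([expm(A, -t) @ B @ B.T @ expm(A, -t).T for t in ts])
dt = ts[1] - ts[0]
W = (vals[0] + vals[-1]) / 2.0 * dt + vals[1:-1].sum(axis=0) * dt

K = -B.T @ np.linalg.inv(W)             # the gain from the argument above
closed = np.linalg.eigvals(A + B @ K)
print(closed.real.max())                # negative: A + BK is stabilized
```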


Solution 14.5

(a) For any n × 1 vector x,

x^H (A + A^T) x = x^H A x + x^H A^T x ≥ −2 αm x^H x

If λ is an eigenvalue of A, and x is a unity-norm eigenvector corresponding to λ, then

A x = λ x ,   x^H A^T = λ̄ x^H

and we conclude

λ + λ̄ ≥ −2 αm

Therefore any eigenvalue of A satisfies Re[λ] ≥ −αm, and this implies that for α > αm all eigenvalues of A + αI have positive real parts. Therefore all eigenvalues of −(A^T + αI) = (−A − αI)^T have negative real parts.

(b) Using Theorem 7.11, with α > αm, the unique solution of

Q (−A − αI)^T + (−A − αI) Q = −B B^T   (*)

is

Q = ∫_{0}^{∞} e^{−(A + αI)t} B B^T e^{−(A^T + αI)t} dt

Clearly Q is positive semidefinite. If x^T Q x = 0, then

x^T e^{−(A + αI)t} B = 0 ,   t ≥ 0

and the usual sequential differentiation and evaluation at t = 0 gives a contradiction to controllability. Thus Q is positive definite.

(c) Now consider the linear state equation

ż(t) = ( A + αI − B B^T Q^{−1} ) z(t)   (**)

Using (*) to write B B^T Q^{−1} gives

ż(t) = −Q (A + αI)^T Q^{−1} z(t)

But Q [ −(A + αI)^T ] Q^{−1} has negative-real-part eigenvalues, which proves that (**) is exponentially stable.

(d) Invoking Lemma 14.6 gives that

ż(t) = ( A − B B^T Q^{−1} ) z(t)

is exponentially stable with rate α > αm.

Solution 14.6 Given a controllable linear state equation

ẋ(t) = A x(t) + B u(t)

by Exercise 13.12 we can choose an m × n matrix K and an m × 1 vector b such that

ẋ(t) = (A + BK) x(t) + (Bb) u(t)

is a controllable single-input state equation. By a single-input controller form calculation, it is clear that we can choose a 1 × n gain k that yields a closed-loop state equation with any specified characteristic polynomial. That is,


A + BK + Bbk = A + B(K + bk)

has the specified characteristic polynomial. Thus for the original state equation, the feedback law

u(t) = (K + bk) x(t)

yields a closed-loop state equation with specified characteristic polynomial.

Solution 14.8 Without loss of generality we can assume the change of variables in Theorem 13.1 has been performed so that

A = [ A11  A12 ]       B = [ B1 ]
    [ 0    A22 ]  ,        [ 0  ]

where A11 is q × q, and

rank [ λI − A11   B1 ] = q

for all complex values of λ. Then the eigenvalues of A comprise the eigenvalues of A11 and the eigenvalues of A22. Also, for any complex λ,

rank [ λI − A   B ] = rank [ λI − A11   −A12       B1 ]  = q + rank [ λI − A22 ]   (+)
                           [ 0          λI − A22   0  ]

Now suppose rank [ λI − A   B ] = n for all nonnegative-real-part eigenvalues of A. Then by (+) any such eigenvalue must be an eigenvalue of A11, which implies that all eigenvalues of A22 have negative real parts. But we can compute an m × q matrix K1 such that A11 + B1K1 has negative-real-part eigenvalues. So setting K = [ K1  0 ] we have that

A + BK = [ A11 + B1K1   A12 ]
         [ 0            A22 ]

has negative-real-part eigenvalues.

On the other hand, if there exists a K = [ K1  K2 ] such that

A + BK = [ A11 + B1K1   A12 + B1K2 ]
         [ 0            A22        ]

has negative-real-part eigenvalues, then A22 has negative-real-part eigenvalues. Thus if Re[λ] ≥ 0, then (+) gives

rank [ λI − A   B ] = q + n − q = n

Solution 14.9 For controllability assume A and B have been transformed to controller form by a state variable change. By Exercise 13.10 this does not alter the controllability indices. Then it is easy to show that A + BLC and B are in controller form with the same block sizes, regardless of L and C. Thus the controllability indices do not change. Similar arguments apply in the case of observability.

Solution 14.10 For any L, using properties of the trace,


tr[A + BLC] = tr[A] + tr[BLC] = tr[A] + tr[CBL] = tr[A] > 0

Since the trace is the sum of the eigenvalues, at least one eigenvalue of A + BLC has positive real part, regardless of L.
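The trace invariance is easy to check numerically (NumPy; the data below are hypothetical, chosen so that CB = 0 and tr A > 0 as in the exercise's hypotheses):

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[2., 1.],
              [0., -1.]])     # tr A = 1 > 0
B = np.array([[1.], [0.]])
C = np.array([[0., 1.]])      # C B = 0

for _ in range(100):
    L = rng.standard_normal((1, 1))
    M = A + B @ L @ C
    assert abs(np.trace(M) - np.trace(A)) < 1e-12     # trace unchanged
    assert np.linalg.eigvals(M).real.max() > 0        # still unstable
print("trace (hence instability) preserved for all sampled L")
```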

Solution 14.12 Write the kth row of G(s) in terms of the kth row Ck of C as

Ck (sI − A)^{−1} B = Σ_{j=0}^{∞} Ck A^j B s^{−(j+1)}

The kth relative degree κk is such that, since L_A^j[Ck](t) B(t) = Ck A^j B,

Ck B = . . . = Ck A^{κk−2} B = 0 ,   Ck A^{κk−1} B ≠ 0

Thus in the kth row of G(s), the minimum difference between the denominator and numerator polynomial degrees among the entries Gk1(s), . . . , Gkm(s) is κk.


CHAPTER 15

Solution 15.2 The closed-loop state equation can be written as

ẋ(t) = A x(t) + BM z(t) + BN v(t)

     = A x(t) + BM z(t) + BNC [ L z(t) + x(t) ]

ż(t) = F z(t) + GC [ L z(t) + x(t) ]

Making the variable change w(t) = x(t) + L z(t) gives the description

ẇ(t) = A x(t) + BM z(t) + BNC w(t) + LF z(t) + LGC w(t)

     = A x(t) + [ BM + LF ] z(t) + [ BN + LG ] C w(t)

     = [ A − HC ] w(t)

ż(t) = F z(t) + GC w(t)

Thus the closed-loop state equation in matrix form is

[ ẇ(t) ]   [ A − HC   0 ] [ w(t) ]
[ ż(t) ] = [ GC       F ] [ z(t) ]

and the result is clear.

Solution 15.4 Following the hint, write for any τ

∫_{τ}^{τ+δ} ||B(σ)||² dσ = ∫_{τ}^{τ+δ} || Φ(σ, τ) Φ(τ, σ) B(σ) B^T(σ) Φ^T(τ, σ) Φ^T(σ, τ) || dσ

                        ≤ ∫_{τ}^{τ+δ} ||Φ(σ, τ)||² || Φ(τ, σ) B(σ) B^T(σ) Φ^T(τ, σ) || dσ

Since A(t) is bounded, by Exercise 6.6 there is a positive constant γ such that ||Φ(σ, τ)||² ≤ γ² for σ ∈ [τ, τ+δ]. And since

∫_{τ}^{τ+δ} Φ(τ, σ) B(σ) B^T(σ) Φ^T(τ, σ) dσ ≤ ε1 I

Exercise 1.21 gives, for any τ,


∫_{τ}^{τ+δ} ||B(σ)||² dσ ≤ γ² ∫_{τ}^{τ+δ} || Φ(τ, σ) B(σ) B^T(σ) Φ^T(τ, σ) || dσ ≤ γ² n ε1 =: β1

Now for any τ, and t ∈ [τ + kδ, τ + (k+1)δ], k = 0, 1, . . . ,

∫_{τ}^{t} ||B(σ)||² dσ ≤ ∫_{τ}^{τ+(k+1)δ} ||B(σ)||² dσ ≤ Σ_{j=0}^{k} ∫_{τ+jδ}^{τ+(j+1)δ} ||B(σ)||² dσ ≤ (k+1) β1 ≤ [ 1 + (t−τ)/δ ] β1

This bound is independent of k, so letting β2 = β1/δ we have

∫_{τ}^{t} ||B(σ)||² dσ ≤ β1 + β2 (t−τ)

for all t, τ with t ≥ τ. (Of course this provides a simplification of the hypotheses of Theorem 15.5 for the bounded-A(t) case.)

Solution 15.6 Write the given state equation in the partitioned form

[ ża(t) ]   [ A11  A12 ] [ za(t) ]   [ B1 ]
[ żb(t) ] = [ A21  A22 ] [ zb(t) ] + [ B2 ] u(t)

y(t) = [ Ip  0 ] [ za(t) ]
                 [ zb(t) ]

and the reduced-dimension observer and feedback in the form

żc(t) = [ A22 − H A12 ] zc(t) + [ B2 − H B1 ] u(t) + [ A21 + (A22 − H A12)H − H A11 ] za(t)

ẑb(t) = zc(t) + H za(t)

u(t) = K1 za(t) + K2 ẑb(t) + N r(t)

It is predictable that writing the overall closed-loop state equation in terms of the variables za(t), zb(t), and eb(t) = zb(t) − ẑb(t) is revealing. This gives

[ ża(t) ]   [ A11 + B1K1   A12 + B1K2   −B1K2        ] [ za(t) ]   [ B1N ]
[ żb(t) ] = [ A21 + B2K1   A22 + B2K2   −B2K2        ] [ zb(t) ] + [ B2N ] r(t)
[ ėb(t) ]   [ 0            0            A22 − H A12  ] [ eb(t) ]   [ 0   ]

y(t) = [ Ip  0  0 ] [ za(t) ]
                    [ zb(t) ]
                    [ eb(t) ]

Thus we see that the eigenvalues of the closed-loop state equation are provided by the n eigenvalues of A + BK and the (n−p) eigenvalues of A22 − H A12. Furthermore, the block triangular structure gives the closed-loop transfer function as


Y(s) = [ Ip  0 ] (sI − A − BK)^{−1} B N R(s)

which is the same as if a static state feedback gain K were used.
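The eigenvalue-separation conclusion rests on the spectrum of a block upper-triangular matrix being the union of the spectra of its diagonal blocks, which is quickly confirmed numerically (NumPy; the blocks below are random hypothetical stand-ins for A + BK and A22 − H A12):

```python
import numpy as np

rng = np.random.default_rng(3)
M11 = rng.standard_normal((3, 3))   # stands in for A + BK
M22 = rng.standard_normal((2, 2))   # stands in for A22 - H A12
M12 = rng.standard_normal((3, 2))
M = np.block([[M11, M12],
              [np.zeros((2, 3)), M22]])

eig_all = np.sort_complex(np.linalg.eigvals(M))
eig_parts = np.sort_complex(np.concatenate([np.linalg.eigvals(M11),
                                            np.linalg.eigvals(M22)]))
print(np.allclose(eig_all, eig_parts))
```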

Solution 15.9 Similar in style to Solution 14.8.

Solution 15.10 Since

u = Hz + Jv = Hz + JC2 x + JD21 r + JD22 u

we assume that I − JD22 is invertible, and let L = (I − JD22)^{−1} to write

u = LH z + LJC2 x + LJD21 r

Then, substituting for u,

ẋ = (A + BLJC2) x + BLH z + BLJD21 r

ż = (GC2 + GD22LJC2) x + (F + GD22LH) z + (GD21 + GD22LJD21) r

y = (C1 + D1LJC2) x + D1LH z + D1LJD21 r

This gives the closed-loop coefficients

Â = [ A + BLJC2         BLH         ]       B̂ = [ BLJD21            ]
    [ GC2 + GD22LJC2    F + GD22LH  ]  ,         [ GD21 + GD22LJD21 ]

Ĉ = [ C1 + D1LJC2   D1LH ] ,   D̂ = D1LJD21

These expressions can be rewritten using

L = (I − JD22)^{−1} = I + J (I − D22J)^{−1} D22

which follows from Exercise 28.2 or is easily verified using the identity in Exercise 28.1.


CHAPTER 16

Solution 16.4 By Theorem 16.16 there exist polynomial matrices X(s), Y(s), A(s), and B(s) such that

N(s) X(s) + D(s) Y(s) = Ip   (*)

Na(s) A(s) + Da(s) B(s) = Ip   (**)

Since D^{−1}(s) N(s) = Da^{−1}(s) Na(s), we have Na(s) = Da(s) D^{−1}(s) N(s). Substituting this into (**) gives

Da(s) D^{−1}(s) N(s) A(s) + Da(s) B(s) = Ip

that is,

N(s) A(s) + D(s) B(s) = D(s) Da^{−1}(s)

Similarly, N(s) = D(s) Da^{−1}(s) Na(s), and substituting into (*) gives

Na(s) X(s) + Da(s) Y(s) = Da(s) D^{−1}(s)

Therefore D(s) Da^{−1}(s) and [ D(s) Da^{−1}(s) ]^{−1} both are polynomial matrices, and thus both are unimodular.

Solution 16.5 From the given equality,

NL(s) D(s) − DL(s) N(s) = 0

and since N(s) and D(s) are right coprime there exist polynomial matrices X(s) and Y(s) such that

X(s) D(s) − Y(s) N(s) = I

Putting these two equations together gives

[ X(s)    Y(s)  ] [ D(s)  ]   [ I ]
[ NL(s)   DL(s) ] [ −N(s) ] = [ 0 ]

It remains only to prove unimodularity. Since NL(s) and DL(s) are left coprime, there exist polynomial matrices A(s) and B(s) such that

DL(s) A(s) + NL(s) B(s) = I

That is,

[ X(s)    Y(s)  ] [ D(s)    B(s) ]   [ I   X(s)B(s) + Y(s)A(s) ]
[ NL(s)   DL(s) ] [ −N(s)   A(s) ] = [ 0   I                   ]

Multiplying on the right by


[ I   −[ X(s)B(s) + Y(s)A(s) ] ]
[ 0   I                        ]

gives

[ X(s)    Y(s)  ] [ D(s)    −D(s)[X(s)B(s) + Y(s)A(s)] + B(s) ]
[ NL(s)   DL(s) ] [ −N(s)   N(s)[X(s)B(s) + Y(s)A(s)] + A(s)  ] = I

That is,

[ X(s)    Y(s)  ]^{−1}   [ D(s)    −D(s)[X(s)B(s) + Y(s)A(s)] + B(s) ]
[ NL(s)   DL(s) ]      = [ −N(s)   N(s)[X(s)B(s) + Y(s)A(s)] + A(s)  ]

which is another polynomial matrix. Thus

[ X(s)    Y(s)  ]
[ NL(s)   DL(s) ]

is unimodular.

Solution 16.7 The relationship

(Pρ s + Pρ−1)^{−1} = R1 s + R0

holds if R1 and R0 are such that

I = (Pρ s + Pρ−1)(R1 s + R0) = Pρ R1 s² + (Pρ R0 + Pρ−1 R1) s + Pρ−1 R0

Taking R0 = Pρ−1^{−1} and R1 = −Pρ−1^{−1} Pρ Pρ−1^{−1}, it remains to verify that Pρ R1 = 0. We have

I = (Pρ s^ρ + . . . + P0)(Qη s^η + . . . + Q0) = Pρ Qη s^{η+ρ} + (Pρ Qη−1 + Pρ−1 Qη) s^{η+ρ−1} + . . .

with Pρ−1 and Qη−1 invertible. Therefore

Pρ Qη = 0 ,   Pρ Qη−1 + Pρ−1 Qη = 0   (+)

The second equation gives

Pρ = −Pρ−1 Qη Qη−1^{−1}

Then we can write

R1 = Qη Qη−1^{−1} Pρ−1^{−1}

and the first equation in (+) gives Pρ R1 = 0. In summary,

(Pρ s + Pρ−1)^{−1} = Qη Qη−1^{−1} Pρ−1^{−1} s + Pρ−1^{−1}

and thus Pρ s + Pρ−1 is unimodular.
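The degree-one inverse formula can be checked numerically on a small example (NumPy; the particular P1, P0 below — satisfying P1 P0^{−1} P1 = 0 with P0 invertible — are hypothetical data for the sketch):

```python
import numpy as np

# unimodular pencil P(s) = P1 s + P0, i.e. [[1, s], [0, 1]]
P1 = np.array([[0., 1.],
               [0., 0.]])
P0 = np.eye(2)

R0 = np.linalg.inv(P0)
R1 = -np.linalg.inv(P0) @ P1 @ np.linalg.inv(P0)   # the R1 of the solution

# (P1 s + P0)(R1 s + R0) should equal I at every s
for s in [-2.0, 0.0, 1.5, 10.0]:
    assert np.allclose((P1 * s + P0) @ (R1 * s + R0), np.eye(2))
print("P(s)^{-1} is itself a degree-one polynomial: R1 s + R0")
```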

Solution 16.10 Since N(s) D^{−1}(s) = N̄(s) D̄^{−1}(s) both are coprime right polynomial fraction descriptions, there exists a unimodular U(s) such that D̄(s) = D(s) U(s). Suppose for some integer 1 ≤ J ≤ m we have

ck[D̄] = ck[D] ,  k = 1, . . . , J−1 ;   cJ[D̄] < cJ[D]

Writing D(s) and D̄(s) in terms of columns Dk(s) and D̄k(s) and writing the (i, j)-entry of U(s) as uij(s) give


D̄k(s) = D1(s) u1,k(s) + . . . + DJ(s) uJ,k(s) + . . . + Dm(s) um,k(s) ,   k = 1, . . . , m

Using a similar column notation for Dhc and Dl(s) gives

D̄k^{hc} s^{ck[D̄]} + D̄k^{l}(s) = [ D1^{hc} s^{c1[D]} + D1^{l}(s) ] u1,k(s) + . . . + [ DJ^{hc} s^{cJ[D]} + DJ^{l}(s) ] uJ,k(s) + . . . + [ Dm^{hc} s^{cm[D]} + Dm^{l}(s) ] um,k(s) ,   k = 1, . . . , m

We claim that

ck[D̄] = max_{j = 1, . . . , m} { cj[D] + degree uj,k(s) }

This is shown by an argument using linear independence of D1^{hc}, . . . , Dm^{hc} as follows. Let

c = max_{j = 1, . . . , m} { cj[D] + degree uj,k(s) }

and let µj,k be the coefficient of s^{c − cj[D]} in uj,k(s). Then not all the µj,k are zero, and the vector coefficient of the s^c term on the right side is

Σ_{j=1}^{m} µj,k Dj^{hc}

By linear independence this sum is nonzero, which implies ck[D̄] = c.

Now, using the definition of J,

ck[D̄] < cJ[D] ≤ . . . ≤ cm[D] ,   k = 1, . . . , J−1

and this, together with cJ[D̄] < cJ[D], implies uJ,k(s) = . . . = um,k(s) = 0 for k = 1, . . . , J. Thus U(s) has the form

U(s) = [ Ua(s)             Ub(s) ]
       [ 0_{(m−J+1) × J}   Uc(s) ]

where Ua(s) is (J−1) × J, from which rank U(s) ≤ m−1 for all values of s. This contradicts unimodularity. Thus cJ[D̄] = cJ[D]. The proof is complete since the roles of D(s) and D̄(s) can be reversed.


CHAPTER 17

Solution 17.1 If

ẋ(t) = A x(t) + B u(t)

y(t) = C x(t)

is a realization of G^T(s), then

ż(t) = A^T z(t) + C^T v(t)

w(t) = B^T z(t)

is a realization for G(s) since

G(s) = [ G^T(s) ]^T = [ C (sI − A)^{−1} B ]^T = B^T (sI − A^T)^{−1} C^T

Furthermore, easy calculation of the controllability and observability matrices of the two realizations shows that one is minimal if and only if the other is. Now, if N(s) and D(s) give a coprime left polynomial fraction description for G(s), then there exist polynomial matrices X(s) and Y(s) such that

N(s) X(s) + D(s) Y(s) = I

Therefore

X^T(s) N^T(s) + Y^T(s) D^T(s) = I

which implies that N^T(s) and D^T(s) are right coprime. Also, since D(s) is row reduced, D^T(s) is column reduced. Thus we can write down a controller-form minimal realization for G^T(s) = N^T(s) [ D^T(s) ]^{−1} as per Theorem 17.4, and this provides a minimal realization for G(s) by the correspondence above.
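The transpose correspondence is easy to verify pointwise in s (NumPy; random hypothetical matrices A, B, C stand in for a realization of G^T(s)):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((1, 3))

# (A, B, C) realizes G^T(s) = C (sI - A)^{-1} B;
# then (A^T, C^T, B^T) realizes G(s) = B^T (sI - A^T)^{-1} C^T
for s in [2.0 + 1.0j, -5.0, 0.7j]:
    GT = C @ np.linalg.inv(s * np.eye(3) - A) @ B
    G = B.T @ np.linalg.inv(s * np.eye(3) - A.T) @ C.T
    assert np.allclose(GT.T, G)
print("transpose duality verified at sampled points of s")
```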

Solution 17.3 Proof of Theorem 17.7: From Theorem 13.17 we have

Q^{−1} A Q = Ao^T + Q^{−1} V Bo^T ,   C Q = S Bo^T

Transposing (6) gives

∆(s) Bo^T = s Ψ^T(s) − Ψ^T(s) Ao^T   (+)

and (13) implies

∆(s) = D(s) S + Ψ^T(s) Q^{−1} V

Substituting into (+) gives

D(s) S Bo^T = Ψ^T(s) [ sI − Ao^T − Q^{−1} V Bo^T ]

-69-

Page 72: Solutions Manual LINEAR SYSTEM THEORY, 2/E

Linear System Theory, 2/E Solutions Manual

Therefore

SBoT [ sI − Ao

T − Q−1VBoT ]−1

= D−1(s)ΨT(s)

Using the definition of N (s),

D−1(s)N (s) = SBoT [ sI − (Ao

T + Q−1VBoT) ]−1

Q−1B

= CQ[ sI − Q−1AQ ]−1Q−1B

= C (sI − A)−1B

Note that D (s) is row reduced since Dlr = S−1, which is invertible. Finally, if the state equation is controllable aswell as observable, hence minimal, then it is clear from the definition of D (s) that the degree of the polynomialfraction description equals the dimension of the minimal realization. Therefore D−1(s)N (s) is a coprime leftpolynomial fraction description.

Solution 17.5 Suppose there is a nonzero h with the property that for each uo there is an xo such that

h C e^{At} xo + ∫_0^t h C e^{A(t−σ)} B uo e^{so σ} dσ = 0 , t ≥ 0

Suppose G(s) = N(s) D^{−1}(s) is a coprime right polynomial fraction description. Then taking Laplace transforms gives

h C (sI − A)^{−1} xo + h N(s) D^{−1}(s) uo (s − so)^{−1} = 0

that is,

(s − so) h C (sI − A)^{−1} xo + h N(s) D^{−1}(s) uo = 0

If so is not a pole of G(s), then D(so) is invertible. Thus evaluating at s = so gives

h N(so) D^{−1}(so) uo = 0

and we have that if so is not a pole of G(s), then for every uo

h N(so) uo = 0

Thus h N(so) = 0, that is, rank N(so) < p ≤ m, which implies that so is a transmission zero.

Conversely, suppose so is a transmission zero that is not a pole of G(s). Then for a right-coprime polynomial fraction description G(s) = N(s) D^{−1}(s) we have that D(so) is invertible, and rank N(so) < p ≤ m. Thus there exists a nonzero 1 × p vector h such that h N(so) = 0. Using the identity (just as in the proof of Theorem 17.13)

(so I − A)^{−1} (s − so)^{−1} = (sI − A)^{−1} (so I − A)^{−1} + (sI − A)^{−1} (s − so)^{−1}

we can write for any uo and the choice xo = (so I − A)^{−1} B uo,

L[ h C e^{At} xo + ∫_0^t h C e^{A(t−σ)} B uo e^{so σ} dσ ] = h N(so) D^{−1}(so) uo (s − so)^{−1} = 0

That is, h has the property that for any uo there is an xo such that

h C e^{At} xo + ∫_0^t h C e^{A(t−σ)} B uo e^{so σ} dσ = 0 , t ≥ 0


Solution 17.9 Using a coprime right polynomial fraction description

G(s) = N(s) D^{−1}(s) = N(s) adj D(s) / det D(s)

suppose for some i, j and complex so we have

∞ = |Gij(so)| = |[ N(so) adj D(so) ]ij| / |det D(so)|

Since the numerator is the magnitude of a polynomial, it is finite for every so, and this implies det D(so) = 0, that is, so is a pole of G(s).

Now suppose so is such that det D(so) = 0. By coprimeness of the right polynomial fraction description N(s) D^{−1}(s), there exist polynomial matrices X(s) and Y(s) such that

X(s) N(s) + Y(s) D(s) = Im

for all s. Therefore

[ X(s) G(s) + Y(s) ] D(s) = Im

for all s, and thus

det [ X(s) G(s) + Y(s) ] det D(s) = 1

for all s. This implies that at s = so we must have

|det [ X(so) G(so) + Y(so) ]| = ∞

Since the entries of the polynomial matrices X(so) and Y(so) are finite, some entry of G(so) must have infinite magnitude.

CHAPTER 18

Solution 18.2
(a) If x ∈ A(A^{−1}V), then clearly x ∈ Im[A], and there exists y ∈ A^{−1}V such that x = Ay, which implies x ∈ V. Therefore A(A^{−1}V) ⊂ V ∩ Im[A]. Conversely, suppose x ∈ V ∩ Im[A]. Then x ∈ Im[A] implies there exists y such that x = Ay, and x ∈ V implies y ∈ A^{−1}V. Thus x ∈ A(A^{−1}V), that is, V ∩ Im[A] ⊂ A(A^{−1}V).

(b) If x ∈ V + Ker[A], then we can write

x = xa + xb , xa ∈ V , xb ∈ Ker[A]

and Ax = Axa ∈ AV. Thus x ∈ A^{−1}(AV), which gives V + Ker[A] ⊂ A^{−1}(AV). Conversely, if x ∈ A^{−1}(AV), then there exists y ∈ V such that Ax = Ay, that is, A(x−y) = 0. Thus writing

x = y + (x−y) ∈ V + Ker[A]

gives A^{−1}(AV) ⊂ V + Ker[A].

(c) If AV ⊂ W, then using (b) gives A^{−1}(AV) = V + Ker[A] ⊂ A^{−1}W. Thus V ⊂ A^{−1}W. Conversely, V ⊂ A^{−1}W implies, using (a),

AV ⊂ A(A^{−1}W) = W ∩ Im[A]

Therefore AV ⊂ W.

Solution 18.4 For x ∈ Wa ∩ V + Wb ∩ V, write

x = xa + xb , xa ∈ Wa ∩ V , xb ∈ Wb ∩ V

Then xa, xb ∈ V, and xa ∈ Wa, xb ∈ Wb, which imply xa + xb ∈ V and xa + xb ∈ Wa + Wb, that is,

x = xa + xb ∈ (Wa + Wb) ∩ V

and we have shown that

Wa ∩ V + Wb ∩ V ⊂ (Wa + Wb) ∩ V

For the second part, if Wa ⊂ V, then x ∈ (Wa + Wb) ∩ V implies x ∈ V and x ∈ Wa + Wb. We can write

x = xa + xb , xa ∈ Wa ⊂ V , xb ∈ Wb

But x − xa = xb ∈ V, so we have x ∈ Wa + Wb ∩ V. This gives

(Wa + Wb) ∩ V ⊂ Wa + Wb ∩ V

The reverse containment follows from the first part since Wa ⊂ V implies Wa = Wa ∩ V.


Solution 18.9 Clearly C<A | B> = Y if and only if

rank ( C [ B  AB  ⋯  A^{n−1}B ] ) = p

and thus the proof involves showing that the rank condition is equivalent to positive definiteness of

∫_0^{tf} C e^{A(tf−t)} B B^T e^{A^T(tf−t)} C^T dt

This is carried out in Solution 9.11.

Solution 18.10 We show equivalence of the negations. First suppose 0 ≠ V ⊂ Ker[C] is a controlled invariant subspace. Then picking a friend F of V we have

(A + BF)V ⊂ V ⊂ Ker[C]

Selecting 0 ≠ xo ∈ V, this gives

e^{(A + BF)t} xo ∈ V , t ≥ 0

and thus

C e^{(A + BF)t} xo = 0 , t ≥ 0

Thus the closed-loop state equation is not observable, since the zero-input response to xo ≠ 0 is identical to the zero-input response to the zero initial state.

Conversely, suppose the closed-loop state equation is not observable for some F. Then

N = ∩_{k=0}^{n−1} Ker[ C (A + BF)^k ] ≠ 0

Thus 0 ≠ xo ∈ N implies, using the Cayley-Hamilton theorem,

0 = C xo = C (A + BF) xo = C (A + BF)² xo = ⋯

That is, (A + BF) xo ∈ N, which gives (A + BF)N ⊂ N. Clearly N ⊂ Ker[C], so N is a nonzero controlled invariant subspace contained in Ker[C].

Solution 18.11 Let P1, . . . , Pq be a basis for B ∩ R = Im[B1] + . . . + Im[Bq], let P1, . . . , Pr be a basis for R, let P1, . . . , Pc be a basis for <A | B>, and let P1, . . . , Pn be a basis for X. Then for i = 1, . . . , q, Bi ∈ B ∩ R, and for i = q+1, . . . , m, Bi ∉ R, Bi ∈ <A | B>. Thus P^{−1}B = B̄ has the form

B̄ = [ B̄11             B̄12              ]
    [ 0_{(r−q) × q}   B̄22              ]
    [ 0_{(c−r) × q}   B̄32              ]
    [ 0_{(n−c) × q}   0_{(n−c) × (m−q)} ]

If B1, . . . , Bq are linearly independent and we choose Pj = Bj, j = 1, . . . , q, then B̄11 = Iq. Finally, since <A | B> is invariant for A,

P^{−1}AP = [ Ā11            Ā12 ]
           [ 0_{(n−c) × c}  Ā22 ]

CHAPTER 19

Solution 19.1 First we show

(W + S)^⊥ = W^⊥ ∩ S^⊥

An n × 1 vector x satisfies x ∈ (W + S)^⊥ if and only if x^T(w + s) = 0 for all w ∈ W and s ∈ S. This is equivalent to x^T w + x^T s = 0 for all w ∈ W and s ∈ S, and by taking first s = 0 and then w = 0 this is equivalent to x^T w = 0 for all w ∈ W and x^T s = 0 for all s ∈ S. These conditions hold if and only if x ∈ W^⊥ and x ∈ S^⊥, that is, x ∈ W^⊥ ∩ S^⊥.

Next we show

(A^T S)^⊥ = A^{−1} S^⊥

An n × 1 vector x satisfies x ∈ (A^T S)^⊥ if and only if x^T y = 0 for all y ∈ A^T S, which holds if and only if x^T A^T z = 0 for all z ∈ S, which is the same as (Ax)^T z = 0 for all z ∈ S, which is equivalent to Ax ∈ S^⊥, which is equivalent to x ∈ A^{−1} S^⊥.

Finally we prove that (S^⊥)^⊥ = S. It is easy to show that S ⊂ (S^⊥)^⊥ since x ∈ S implies y^T x = 0 for all y ∈ S^⊥, that is, x^T y = 0 for all y ∈ S^⊥, which implies x ∈ (S^⊥)^⊥.

To show (S^⊥)^⊥ ⊂ S, suppose 0 ≠ x ∈ (S^⊥)^⊥. Then for all y ∈ S^⊥ we have x^T y = 0. That is, if y^T z = 0 for all z ∈ S, then x^T y = 0. Equivalently, if z^T y = 0 for all z ∈ S, then x^T y = 0. Thus

Ker[ z^T ] = Ker[ x^T ; z^T ] , for all z ∈ S  (*)

This implies x ∈ S, for if not, then for any z ∈ S,

rank [ z^T ] < rank [ x^T ; z^T ]

By the matrix fact in the Hint, this implies

dim Ker[ x^T ; z^T ] < dim Ker[ z^T ]

which contradicts (*).

Solution 19.2 By induction we will show that (W^k)^⊥ = V^k, where V^k is generated by the algorithm for V* in Theorem 19.3:

V^0 = K
V^{k+1} = K ∩ A^{−1}(V^k + B) = V^k ∩ A^{−1}(V^k + B)

For k = 0 the claim becomes (K^⊥)^⊥ = K, which is established in Exercise 19.1. So suppose for some nonnegative integer K we have (W^K)^⊥ = V^K. Then, using Exercise 19.1,

(W^{K+1})^⊥ = [ W^K + A^T(W^K ∩ B^⊥) ]^⊥
            = (W^K)^⊥ ∩ [ A^T(W^K ∩ B^⊥) ]^⊥
            = V^K ∩ [ A^T( (V^K)^⊥ ∩ B^⊥ ) ]^⊥

But further use of Exercise 19.1 gives

[ A^T( (V^K)^⊥ ∩ B^⊥ ) ]^⊥ = A^{−1} [ (V^K)^⊥ ∩ B^⊥ ]^⊥ = A^{−1}(V^K + B)

Thus

(W^{K+1})^⊥ = V^K ∩ A^{−1}(V^K + B) = V^{K+1}

This completes the induction proof, and gives V* = V^n = (W^n)^⊥.

Solution 19.4 We establish the Hint by induction, for F a friend of V*. For k = 1,

Σ_{j=1}^{k} (A + BF)^{j−1}(B ∩ V*) = B ∩ V* = V* ∩ (A·0 + B) = R^1

Assume now that for some positive integer K we have

Σ_{j=1}^{K} (A + BF)^{j−1}(B ∩ V*) = R^K = V* ∩ (A R^{K−1} + B)

Then

Σ_{j=1}^{K+1} (A + BF)^{j−1}(B ∩ V*) = B ∩ V* + (A + BF) Σ_{j=1}^{K} (A + BF)^{j−1}(B ∩ V*)
                                    = B ∩ V* + (A + BF) R^K  (+)

From the algorithm, R^K ⊂ R^n ⊂ V*, thus

(A + BF) R^K ⊂ (A + BF) V* ⊂ V*

Using the second part of Exercise 18.4 gives

B ∩ V* + (A + BF) R^K = [ B + (A + BF) R^K ] ∩ V*

Since (A + BF) R^K + B = A R^K + B, the right side of (+) can be rewritten as

B ∩ V* + (A + BF) R^K = V* ∩ [ A R^K + B ] = R^{K+1}

This completes the induction proof of the Hint, and Theorem 19.6 gives R* = R^n.


Solution 19.7 The closed-loop state equation

ẋ(t) = (A + BF) x(t) + (E + BK) w(t) + B G v(t)
y(t) = C x(t)

is disturbance decoupled if and only if

C (sI − A − BF)^{−1}(E + BK) = 0

That is, if and only if

<A + BF | Im[E + BK]> ⊂ Ker[C]  (*)

Thus we want to show that there exist F and K such that (*) holds if and only if Im[E] ⊂ V* + B, where V* is the maximal controlled invariant subspace contained in Ker[C] for the plant.

First suppose F and K are such that (*) holds. Since <A + BF | Im[E + BK]> is invariant under (A + BF), it is a controlled invariant subspace contained in Ker[C] for the plant. Then

Im[E + BK] ⊂ <A + BF | Im[E + BK]> ⊂ V*

That is, for any x ∈ X there is a v ∈ V* such that (E + BK)x = v. Therefore

Ex = v + B(−Kx)

which implies Im[E] ⊂ V* + B.

Conversely, suppose Im[E] ⊂ V* + B, where V* is the maximal controlled invariant subspace contained in Ker[C] for the plant. We first show how to compute K such that Im[E + BK] ⊂ V*. Then we can pick any friend F of V* and the proof will be finished since we will have

<A + BF | Im[E + BK]> ⊂ V* ⊂ Ker[C]

If w1, . . . , wq is a basis for W, then there exist v1, . . . , vq ∈ V* and u1, . . . , uq ∈ U such that

E wj = vj + B uj , j = 1, . . . , q

Let

K = [ −u1 ⋯ −uq ] [ w1 ⋯ wq ]^{−1}

Then

(E + BK) wj = E wj + B K wj = vj + B uj + B [ −u1 ⋯ −uq ] ej = vj , j = 1, . . . , q

That is, K is such that

Im[E + BK] ⊂ V*

Solution 19.11 Note first that

span{ pr+1, . . . , pn } = R2*

Since R1* ⊂ K1 = Ker[C2] and R2* ⊂ K2 = Ker[C1], we have that in the new coordinates,

C̄1 = C1 P = [ C11  0  0 ]
C̄2 = C2 P = [ 0  C22  0 ]

Since Im[B G1] ⊂ B ∩ R1* ⊂ R1* and B G1 = P B̄1 we have

B̄1 = [ B11 ; 0 ; B13 ]

Similarly, Im[B G2] ⊂ B ∩ R2* ⊂ R2* gives

B̄2 = [ 0 ; B22 ; B23 ]

Finally, (A + BF) Ri* ⊂ Ri*, i = 1, 2, and (A + BF)P = P Ā give

Ā = [ A11  0    0   ]
    [ 0    A22  0   ]
    [ A31  A32  A33 ]

That is, with z(t) = P^{−1} x(t), the closed-loop state equation takes the partitioned form

ża(t) = A11 za(t) + B11 r1(t)
żb(t) = A22 zb(t) + B22 r2(t)
żc(t) = A31 za(t) + A32 zb(t) + A33 zc(t) + B13 r1(t) + B23 r2(t)
y1(t) = C11 za(t)
y2(t) = C22 zb(t)

CHAPTER 20

Solution 20.1 A sketch shows that v(t) is a sequence of unit-height rectangular pulses, occurring every T seconds, with the width of the k th pulse given by k/5, k = 0, . . . , 5. This is a piecewise-continuous (actually, piecewise-constant) input, and the continuous-time solution formula gives

z(t) = e^{F(t−to)} z(to) + ∫_{to}^{t} e^{F(t−σ)} G v(σ) dσ

Evaluate this at t = (k+1)T and to = kT to get

z[(k+1)T] = e^{FT} z(kT) + ∫_{kT}^{(k+1)T} e^{F(kT+T−σ)} G v(σ) dσ

Let τ = kT + T − σ in the integral, to obtain

z[(k+1)T] = e^{FT} z(kT) + ∫_0^T e^{Fτ} G v(kT+T−τ) dτ

Then the special form of v(t) gives

z[(k+1)T] = e^{FT} z(kT) + ∫_{T−|u(k)|T}^{T} e^{Fτ} G dτ · sgn[u(k)]

The integral term is not linear in the input sequence u(k), so we approximate the integral when |u(k)| is small. Changing integration variable to γ = T − τ, another way to write the integral term is

e^{FT} ∫_0^{|u(k)|T} e^{−Fγ} G dγ · sgn[u(k)]

For |u(k)| small,

∫_0^{|u(k)|T} e^{−Fγ} dγ = ∫_0^{|u(k)|T} ( I − Fγ + ⋯ ) dγ ≈ |u(k)| T I

Then since |u(k)| sgn[u(k)] = u(k), this gives the approximate, linear, discrete-time state equation

z[(k+1)T] = e^{FT} z(kT) + e^{FT} G T u(k)

Solution 20.4 For a constant nominal input u(k) = ū, constant nominal solutions are given by

x̄ = [ ū ; 2ū ] , ȳ = ū²

Easy calculation gives the linearized state equation

xδ(k+1) = [ 0  −1 ; −1  0 ] xδ(k) + [ 2 ; 4ū ] uδ(k)
yδ(k) = [ 2ū  −1 ] xδ(k) + 2ū uδ(k)

Since A^k = (−1)^k I and CB = 0, the zero-state solution formula easily gives

yδ(k) = 2ū uδ(k)

Thus the zero-state behavior of the linearized state equation is that of a pure gain.

Solution 20.10 Computing Φ(j+q, j) for the first few values of q ≥ 0 easily leads to the general formula for Φ(k, j):

Φ(k, j) = [ 0                                     a1(k−1) a2(k−2) a1(k−3) a2(k−4) ⋯ a1(j) ]
          [ a2(k−1) a1(k−2) a2(k−3) a1(k−4) ⋯ a2(j)  0                                   ] , k−j odd, ≥ 1

Φ(k, j) = [ a1(k−1) a2(k−2) a1(k−3) a2(k−4) ⋯ a2(j)  0                                   ]
          [ 0                                     a2(k−1) a1(k−2) a2(k−3) a1(k−4) ⋯ a1(j) ] , k−j even, ≥ 2
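The alternating-product pattern can be verified by direct multiplication; the sketch below assumes the exercise's antidiagonal form A(k) = [ 0  a1(k) ; a2(k)  0 ] with arbitrary sample coefficients:

```python
import numpy as np

# Assumed system from the exercise: A(k) = [[0, a1(k)], [a2(k), 0]]
a1 = [1.0 + 0.1 * i for i in range(10)]   # arbitrary sample coefficients
a2 = [2.0 - 0.1 * i for i in range(10)]

def A(k):
    return np.array([[0.0, a1[k]], [a2[k], 0.0]])

def phi_direct(k, j):
    """Transition matrix Φ(k, j) = A(k-1)···A(j) by direct multiplication."""
    P = np.eye(2)
    for i in range(j, k):
        P = A(i) @ P
    return P

def alt_prod(start_with_a1, k, j):
    """Alternating product a1(k-1) a2(k-2) ··· (or starting with a2) down to index j."""
    p, use_a1 = 1.0, start_with_a1
    for i in range(k - 1, j - 1, -1):
        p *= a1[i] if use_a1 else a2[i]
        use_a1 = not use_a1
    return p

# k - j even: diagonal; k - j odd: antidiagonal
assert np.allclose(phi_direct(7, 3),
                   np.diag([alt_prod(True, 7, 3), alt_prod(False, 7, 3)]))
assert np.allclose(phi_direct(6, 3),
                   [[0.0, alt_prod(True, 6, 3)], [alt_prod(False, 6, 3), 0.0]])
```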

Solution 20.11 By definition, for k ≥ j+1,

Φ_F(k, j) = F(k−1) F(k−2) ⋯ F(j+1) F(j)
          = A^T(1−k) A^T(2−k) ⋯ A^T(−1−j) A^T(−j) , k ≥ j+1

Therefore, for k ≥ j+1,

Φ_F^T(k, j) = A(−j) A(−j−1) ⋯ A(−k+2) A(−k+1)

However, for −j+1 ≥ −k+2, that is, k ≥ j+1,

Φ_A(−j+1, −k+1) = A(−j) A(−j−1) ⋯ A(−k+2) A(−k+1)

and a comparison gives

Φ_{A^T(−k)}(k, j) = Φ_{A(k)}^T(−j+1, −k+1) , k ≥ j+1

Solution 20.14 For k ≥ k1+1 ≥ ko+1 we can write, somewhat cleverly,

‖Φ(k, ko)‖ (k−k1) = Σ_{j=k1}^{k−1} ‖Φ(k, ko)‖
                  ≤ Σ_{j=k1}^{k−1} ‖Φ(k, j)‖ ‖Φ(j, ko)‖

Clearly this gives

‖Φ(k, ko)‖ ≤ (1/(k−k1)) Σ_{j=k1}^{k−1} ‖Φ(k, j)‖ ‖Φ(j, ko)‖ , k ≥ k1+1 ≥ ko+1

Solution 20.16 Given A(k) and F we want P(k) to satisfy

F = P^{−1}(k+1) A(k) P(k)

for all k. Assuming F is invertible and A(k) is invertible for every k, it is easy to verify that

P(k) = Φ_A(k, 0) F^{−k}

is the correct choice. Obviously if F = I, then the variable change is P(k) = Φ_A(k, 0). Using this in Example 20.19, where

A(k) = [ 1  a(k) ; 0  1 ]

gives

P(k) = Φ_A(k, 0) = [ 1  Σ_{i=0}^{k−1} a(i) ; 0  1 ] , k ≥ 1

and

P^{−1}(k+1) = Φ_A(0, k+1) = [ 1  −Σ_{i=0}^{k} a(i) ; 0  1 ] , k ≥ 0

Then an easy multiplication verifies the property.
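The F = I case of this variable change, P(k) = Φ_A(k, 0), works for any invertible A(k), which is easy to check numerically (the A(k) below are arbitrary near-identity sample matrices):

```python
import numpy as np

# With F = I, the variable change P(k) = Φ_A(k, 0) gives P^{-1}(k+1) A(k) P(k) = I
rng = np.random.default_rng(1)
As = [np.eye(2) + 0.3 * rng.standard_normal((2, 2)) for _ in range(6)]  # sample data

def phi(k):
    """Φ_A(k, 0) = A(k-1)···A(0), with Φ_A(0, 0) = I."""
    P = np.eye(2)
    for i in range(k):
        P = As[i] @ P
    return P

for k in range(5):
    F = np.linalg.inv(phi(k + 1)) @ As[k] @ phi(k)
    assert np.allclose(F, np.eye(2))
```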

CHAPTER 21

Solution 21.3 Using z-transforms,

(zI − A)^{−1} = [ z  −1 ; 12  z+7 ]^{−1} = (1/(z²+7z+12)) [ z+7  1 ; −12  z ]

and

Y(z) = z c(zI−A)^{−1} xo + c(zI−A)^{−1} b U(z)
     = (z/(z²+7z+12)) [ −z−19  z−1 ] [ 1/20 ; 1/20 ] + ((z−1)/(z²+7z+12)) · (z/(z−1))
     = 0

Therefore the complete solution is y(k) = 0, k ≥ 0.

Solution 21.4 First compute the corresponding discrete-time state equation

x[(k+1)T] = F x(kT) + g u(kT)
y(kT) = h x(kT)

Using A² = 0, it is easy to compute

F = e^{AT} = [ 1  T ; 0  1 ] , g = ∫_0^T e^{Aσ} b dσ = [ T²/2 ; T ]

and h = c. The transfer functions are

Y(s)/U(s) = c (sI−A)^{−1} b = [ 0  1 ] [ s  −1 ; 0  s ]^{−1} [ 0 ; 1 ] = 1/s

and

Z[y(kT)]/Z[u(kT)] = h (zI−F)^{−1} g = [ 0  1 ] [ z−1  −T ; 0  z−1 ]^{−1} [ T²/2 ; T ] = T/(z−1)
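The sampled transfer function h(zI−F)^{−1}g = T/(z−1) can be spot-checked numerically (T = 0.5 is an arbitrary choice):

```python
import numpy as np

T = 0.5
F = np.array([[1.0, T], [0.0, 1.0]])
g = np.array([T**2 / 2, T])
h = np.array([0.0, 1.0])

for z in (2.0, -1.5, 0.3):  # arbitrary sample points, z != 1
    Hd = h @ np.linalg.inv(z * np.eye(2) - F) @ g
    assert np.isclose(Hd, T / (z - 1))
```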

Solution 21.7
(a) The solution formula gives, using a standard formula for a finite geometric sum,

x(k) = (1+r/l)^k xo + Σ_{j=0}^{k−1} (1+r/l)^{k−j−1} b
     = (1+r/l)^k xo + b (1+r/l)^{k−1} [ (1 − 1/(1+r/l)^k) / (1 − 1/(1+r/l)) ]
     = (1+r/l)^k (xo + bl/r) − bl/r

(b) In one year a deposit xo yields

x(l) = (1+r/l)^l xo

so

effective interest rate = [ ((1+r/l)^l xo − xo) / xo ] × 100% = [ (1+r/l)^l − 1 ] × 100%

For r = 0.05, l = 2, the effective interest rate is 5.06%. For r = 0.05, l = 12, the effective interest rate is 5.12%.

(c) Set

0 = x(19) = (1.05)^19 [ xo + (−50,000)/0.05 ] + 50,000/0.05

and solve to obtain xo = $604,266. Of course this means you have actually won only $654,266, but congratulations remain appropriate.
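The arithmetic in parts (b) and (c) is easy to reproduce:

```python
# Effective annual rate for nominal rate r compounded l times per year,
# and the lottery present-value computation from part (c).
r = 0.05
eff2 = (1 + r / 2) ** 2 - 1     # semiannual compounding
eff12 = (1 + r / 12) ** 12 - 1  # monthly compounding
assert round(eff2, 4) == 0.0506
assert round(eff12, 4) == 0.0512

# 0 = 1.05^19 (xo - 50000/0.05) + 50000/0.05  =>  xo = 10^6 (1 - 1.05^-19)
xo = 1e6 * (1 - 1.05 ** -19)
assert abs(xo - 604266) < 1     # xo ≈ $604,266
```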

Solution 21.9 With T = Td/l and v(t) = v(kT), kT ≤ t < (k+1)T, evaluate the solution formula

z(t) = e^{F(t−τ)} z(τ) + ∫_τ^t e^{F(t−σ)} G v(σ−Td) dσ , t ≥ τ

at t = (k+1)T, τ = kT to obtain

z[(k+1)T] = e^{FT} z(kT) + ∫_0^T e^{Fτ} dτ G v[(k−l)T]
          ≜ A z(kT) + B v[(k−l)T]

Defining

x(k) = [ z(kT) ; v[(k−1)T] ; v[(k−2)T] ; ⋯ ; v[(k−l)T] ] , u(k) = v(kT) , y(k) = y(kT)

we get

x(k+1) = [ A  0  0  ⋯  0  B ]        [ 0 ]
         [ 0  0  0  ⋯  0  0 ]        [ 1 ]
         [ 0  1  0  ⋯  0  0 ] x(k) + [ 0 ] u(k) , x(0) = [ z(0) ; v(−T) ; v(−2T) ; ⋯ ; v(−lT) ]
         [ ⋮              ⋮ ]        [ ⋮ ]
         [ 0  0  0  ⋯  1  0 ]        [ 0 ]

y(k) = [ C  0  ⋯  0 ] x(k)

The dimension of the initial state is n+l. The transfer function of this state equation is the same as the transfer function of

z(k+1) = A z(k) + B u(k−l)
y(k) = C z(k)

Taking the z-transform, using the right shift property, gives

Y(z) = C (zI−A)^{−1} B z^{−l} U(z)

Solution 21.12 Easy calculation shows that for

Ma = [ 1  0 ; 0  0 ] , Mb = [ 0  1 ; 0  0 ]

Ma has a square root, with √Ma = Ma, but Mb does not. (Indeed, if X² = Mb, then X⁴ = Mb² = 0, so X is nilpotent; but then X² = 0 ≠ Mb.)
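With the matrices as reconstructed here, the two facts behind this example are one-line checks: Ma is idempotent (so it is its own square root), and Mb squares to zero (which is what blocks any square root):

```python
import numpy as np

Ma = np.array([[1.0, 0.0], [0.0, 0.0]])
Mb = np.array([[0.0, 1.0], [0.0, 0.0]])

# Ma is idempotent, so √Ma = Ma
assert np.allclose(Ma @ Ma, Ma)

# Mb is nilpotent: any X with X @ X = Mb would give X^4 = Mb^2 = 0,
# forcing X^2 = 0 for a 2x2 nilpotent X — a contradiction
assert np.allclose(Mb @ Mb, np.zeros((2, 2)))
```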

Solution 21.13 By Lemma 21.6, given any ko there is a K-periodic solution of the forced state equation if and only if there is an xo satisfying

[ I − Φ(ko+K, ko) ] xo = Σ_{j=ko}^{ko+K−1} Φ(ko+K, j+1) f(j)  (*)

Similarly there is a K-periodic solution of the unforced state equation if and only if there is a zo satisfying

[ I − Φ(ko+K, ko) ] zo = 0  (**)

Since there is no zo ≠ 0 satisfying (**), it follows that [ I − Φ(ko+K, ko) ] is invertible. This implies that for each ko there exists a unique xo satisfying (*). For this xo the forced state equation has a K-periodic solution. However, if there is a zo ≠ 0 satisfying (**), (*) might still have a solution if the right side is in the range of [ I − Φ(ko+K, ko) ].

Solution 21.14 Since the forced state equation has no K-periodic solutions, for any ko there is by Exercise 21.13 a zo ≠ 0 such that the solution of

z(k+1) = A(k) z(k) , z(ko) = zo

is K-periodic. Thus by Lemma 21.6,

[ I − Φ(ko+K, ko) ] zo = 0

and therefore [ I − Φ(ko+K, ko) ] is not invertible. Since there are no solutions to

[ I − Φ(ko+K, ko) ] xo = Σ_{j=ko}^{ko+K−1} Φ(ko+K, j+1) f(j)

we have by linear algebra that there exists a nonzero, n × 1 vector p such that

[ I − Φ(ko+K, ko) ]^T p = 0

and

p^T Σ_{j=ko}^{ko+K−1} Φ(ko+K, j+1) f(j) ≜ q ≠ 0

Now pick any xo. Then it is easy to show that the corresponding solution satisfies p^T x(ko+jK) = p^T xo + jq, j = 1, 2, . . . . This shows that the solution is unbounded.

CHAPTER 22

Solution 22.1 Similar to Solution 6.1.

Solution 22.4 If the state equation is uniformly exponentially stable, then there exist γ ≥ 1 and 0 ≤ λ < 1 such that

‖Φ(k, j)‖ ≤ γ λ^{k−j} , k ≥ j

Equivalently, for every k,

‖Φ(k+j, k)‖ ≤ γ λ^j , j ≥ 0

which implies

φj = sup_k ‖Φ(k+j, k)‖ ≤ γ λ^j

Then

lim_{j→∞} φj^{1/j} ≤ lim_{j→∞} (γ^{1/j} λ) = λ lim_{j→∞} γ^{1/j} = λ < 1

Now suppose

lim_{j→∞} (φj)^{1/j} < 1

Picking 0 < ε < 1 there exists a positive integer J such that

φj^{1/j} < 1−ε , j ≥ J

Let λ = 1−ε and

γ = (1/λ^J) max[ max_{1≤j≤J} φj , 1 ]

Then for j ≤ J,

‖Φ(k+j, k)‖ ≤ sup_k ‖Φ(k+j, k)‖ = φj ≤ max_{1≤j≤J} φj ≤ γ λ^J ≤ γ λ^j

Similarly, for j > J,

‖Φ(k+j, k)‖ ≤ sup_k ‖Φ(k+j, k)‖ = φj < (1−ε)^j = λ^j ≤ γ λ^j

This implies uniform exponential stability.

Solution 22.6 For λ = 0 the problem is trivial, so suppose λ ≠ 0 and write

k |λ|^k = k ( e^{ln |λ|} )^k , k ≥ 0

Let η = −ln |λ|, so that η > 0 since |λ| < 1. Then

max_{k≥0} k |λ|^k ≤ max_{t≥0} t e^{−ηt}

and a simple maximization argument (as in Exercise 6.10) gives

max_{t≥0} t e^{−ηt} ≤ 1/(ηe)

Therefore

k |λ|^k ≤ 1/(−e ln |λ|) ≜ β , k ≥ 0

To get a decaying exponential bound, write

k |λ|^k = [ k ( √|λ| )^k ] ( √|λ| )^k ≤ 2β ( √|λ| )^k , k ≥ 0

Then

Σ_{k=0}^{∞} k |λ|^k ≤ 2β / (1 − √|λ|)

For j > 1 write

k^j |λ|^k = [ k ( |λ|^{1/(j+1)} )^k ] ⋯ [ k ( |λ|^{1/(j+1)} )^k ] · ( |λ|^{1/(j+1)} )^k

with j bracketed factors, and proceed as above.
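Both bounds are easy to sanity-check numerically for a sample |λ| < 1:

```python
import math

# Check k·λ^k ≤ β = 1/(−e·ln λ) and k·λ^k ≤ 2β·(√λ)^k for a sample 0 < λ < 1
lam = 0.9
beta = 1.0 / (-math.e * math.log(lam))

peak = max(k * lam**k for k in range(200))
assert peak <= beta + 1e-12

for k in range(200):
    assert k * lam**k <= 2 * beta * math.sqrt(lam) ** k + 1e-12
```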

Solution 22.7 Use the fact from Exercise 20.11 that

Φ_{A^T(−k)}(k, j) = Φ_{A(k)}^T(−j+1, −k+1) , k ≥ j

Then A(k) is uniformly exponentially stable if and only if there exist γ ≥ 1 and 0 ≤ λ < 1 such that

‖Φ_{A(k)}(k, j)‖ ≤ γ λ^{k−j} , k ≥ j

This is equivalent to

‖Φ_{A(k)}^T(k, j)‖ ≤ γ λ^{k−j} , k ≥ j

which is equivalent to

‖Φ_{A(k)}^T(−j+1, −k+1)‖ ≤ γ λ^{(−j+1)−(−k+1)} , −j+1 ≥ −k+1

which is equivalent to

‖Φ_{A^T(−k)}(k, j)‖ ≤ γ λ^{k−j} , k ≥ j

which is equivalent to uniform exponential stability of A^T(−k).

However for the case of A^T(k), consider the example where A(k) is 3-periodic with

A(0) = [ 1/2  0 ; 0  2 ] , A(1) = [ 1/2  0 ; 0  1/2 ] , A(2) = [ 0  1/2 ; 2  0 ]

Then

Φ_{A(k)}(3, 0) = [ 0  1/2 ; 1/2  0 ]

and it is easy to conclude uniform exponential stability. However

Φ_{A^T(k)}(3, 0) = [ 0  2 ; 1/8  0 ]

and it is easy to see that there will be unbounded solutions.

CHAPTER 23

Solution 23.1 With Q = qI, where q > 0, we compute A^T(k) Q A(k) − Q to get the sufficient condition for uniform exponential stability:

a1²(k), a2²(k) ≤ 1 − ν/q , ν > 0

Thus the state equation is uniformly exponentially stable if there exists a constant α < 1 such that for all k

|a1(k)|, |a2(k)| ≤ α

With

Q = [ q1  0 ; 0  q2 ]

where q1, q2 > 0, the sufficient condition for uniform exponential stability becomes existence of a constant ν > 0 such that for all k,

a1²(k) ≤ (q2 − ν)/q1 , a2²(k) ≤ (q1 − ν)/q2

These conclusions show uniform exponential stability under weaker conditions, where one bounded coefficient can be larger than unity if the other bounded coefficient is suitably small. For example, suppose sup_k |a2(k)| = α < ∞. Then we can take q1 = α² + 0.01, q2 = 1, and ν = 0.01 to conclude uniform exponential stability if a1²(k) ≤ 0.99/(α² + 0.01) for all k.

Solution 23.4 Using the transition matrix computed in Exercise 20.10, an easy computation gives that

Q(k) = I + Σ_{j=k+1}^{∞} Φ^T(j, k) Φ(j, k)

is a diagonal matrix with

q11(k) = 1 + a2²(k) + a1²(k+1) a2²(k) + a2²(k+2) a1²(k+1) a2²(k) + a1²(k+3) a2²(k+2) a1²(k+1) a2²(k) + ⋯
q22(k) = 1 + a1²(k) + a2²(k+1) a1²(k) + a1²(k+2) a2²(k+1) a1²(k) + a2²(k+3) a1²(k+2) a2²(k+1) a1²(k) + ⋯

Since this Q(k) is guaranteed to satisfy I ≤ Q(k) and A^T(k) Q(k) A(k) − Q(k) ≤ −I for all k, a sufficient condition for uniform exponential stability is existence of a constant ρ such that q11(k), q22(k) ≤ ρ for all k. Clearly this holds if a1²(k), a2²(k) ≤ α < 1 for all k, but it also holds under weaker conditions. For example suppose the α-bound is violated only for k = 0, and

a1²(0) > 1 , a1²(0) a2²(1) < α

Then we can conclude uniform exponential stability. (More sophisticated analyses should be possible . . . .)

Solution 23.6 If the state equation is exponentially stable, then by Theorem 23.7 there is for any symmetric M a unique symmetric Q such that

A^T Q A − Q = −M

Write

M = [ m1  m2 ; m2  m3 ] , Q = [ q1  q2 ; q2  q3 ]

and write the discrete-time Lyapunov equation as the vector equation

[ −1   0      a0² ] [ q1 ]   [ −m1 ]
[  0  −1−a0   a0  ] [ q2 ] = [ −m2 ]
[  1  −2      0   ] [ q3 ]   [ −m3 ]

The condition

det [ −1  0  a0² ; 0  −1−a0  a0 ; 1  −2  0 ] ≠ 0

reduces to the condition a0 ≠ 0, 1, −2. Assuming this condition we compute Q for M = I, and use the fact that Q > 0 since M > 0. The expression

[ q1 ; q2 ; q3 ] = [ −1  0  a0² ; 0  −1−a0  a0 ; 1  −2  0 ]^{−1} [ −1 ; 0 ; −1 ]

gives

Q = (1/(a0(a0+2)(a0−1))) [ −a0(a0²+a0+2)  −2a0 ; −2a0  −2(a0+1) ]

By Sylvester's criterion, Q > 0 if and only if

a0(a0+2) > 0 , (1−a0)(a0+2) > 0  (+)

Note that these conditions subsume the conditions assumed above. Now suppose the conditions in (+) hold. Then for M = I > 0 there is a solution Q > 0 to the discrete-time Lyapunov equation. Thus the state equation is exponentially stable. That is, the conditions in (+) are necessary and sufficient for exponential stability.
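For a value of a0 satisfying (+), the closed-form Q can be checked against the Lyapunov equation directly. In this sketch the companion-form A below is an assumption about the exercise data, chosen to be consistent with the coefficient array above:

```python
import numpy as np

a0 = 0.5                                   # satisfies a0(a0+2) > 0 and (1-a0)(a0+2) > 0
A = np.array([[0.0, 1.0], [-a0, -1.0]])    # assumed exercise data (companion form)

D = a0 * (a0 + 2) * (a0 - 1)
Q = np.array([[-a0 * (a0**2 + a0 + 2), -2 * a0],
              [-2 * a0, -2 * (a0 + 1)]]) / D

# Q solves the discrete-time Lyapunov equation with M = I, and is positive definite
assert np.allclose(A.T @ Q @ A - Q, -np.eye(2))
assert np.all(np.linalg.eigvalsh(Q) > 0)
```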

Solution 23.10 Suppose λ is an eigenvalue of A with eigenvector p. Then since M, Q ≥ 0 satisfy

A^T Q A − Q = −M

we have

p^H A^T Q A p − p^H Q p = −p^H M p

That is,

( |λ|² − 1 ) p^H Q p = −p^H M p

If p^H M p > 0, then |λ|² − 1 < 0, which gives |λ| < 1. But suppose p^H M p = 0. Then for k ≥ 0,

0 = |λ|^{2k} p^H M p = λ̄^k p^H M p λ^k = p^H (A^T)^k M A^k p
  = (Re[p])^T (A^T)^k M A^k (Re[p]) + (Im[p])^T (A^T)^k M A^k (Im[p])

Since M ≥ 0, this implies

0 = (Re[p])^T (A^T)^k M A^k (Re[p]) = (Im[p])^T (A^T)^k M A^k (Im[p])

By hypothesis this implies

lim_{k→∞} A^k (Re[p]) = lim_{k→∞} A^k (Im[p]) = 0

Therefore

lim_{k→∞} A^k p = lim_{k→∞} λ^k p = 0

which implies |λ| < 1.

CHAPTER 24

Solution 24.1 Since

A^T(k) A(k) = [ a2²(k)  0 ; 0  a1²(k) ]

it is clear that

λmax^{1/2}(k) = max[ |a1(k)|, |a2(k)| ]

Thus Corollary 24.3 states that the state equation is uniformly stable if there exists a constant γ such that

Π_{i=j}^{k} max[ |a1(i)|, |a2(i)| ] ≤ γ  (#)

for all k, j with k ≥ j. (Note that this condition holds if

max[ |a1(k)|, |a2(k)| ] ≤ 1

for all but a finite number of values of k.) Of course the condition (#) is not necessary. Consider

x(k+1) = [ 0  4 ; 1/9  0 ] x(k)

The eigenvalues are ±2/3, so the state equation is uniformly stable, but clearly (#) fails.

Solution 24.5 Following the hint, set r(ko) = 0 and

r(k) = Σ_{j=ko}^{k−1} ν(j) φ(j) , k ≥ ko+1

and write the given inequality as

φ(k) ≤ ψ(k) + η(k) r(k) , k ≥ ko+1  (*)

Then, using nonnegativity of ν(k),

r(k+1) = r(k) + ν(k) φ(k) ≤ [ 1 + ν(k) η(k) ] r(k) + ν(k) ψ(k) , k ≥ ko+1

Since 1 + η(k) ν(k) ≥ 1, k ≥ ko,

r(k+1) Π_{j=ko}^{k} [ 1 + η(j) ν(j) ]^{−1} ≤ r(k) Π_{j=ko}^{k−1} [ 1 + η(j) ν(j) ]^{−1} + ν(k) ψ(k) Π_{j=ko}^{k} [ 1 + η(j) ν(j) ]^{−1} , k ≥ ko+1

Iterating this inequality gives

r(k) ≤ Σ_{j=ko}^{k−1} ν(j) ψ(j) Π_{i=j+1}^{k−1} [ 1 + η(i) ν(i) ] , k ≥ ko+1

and substituting this into (*) yields the result.

Solution 24.7 By assumption ‖ΦA(k, j)‖ ≤ γ for k ≥ j. Treating f(k, z(k)) as an input, the complete solution formula is

z(k) = ΦA(k, ko) z(ko) + Σ_{j=ko}^{k−1} ΦA(k, j+1) f(j, z(j)) , k ≥ ko+1

This gives

‖z(k)‖ ≤ γ ‖z(ko)‖ + Σ_{j=ko}^{k−1} γ ‖f(j, z(j))‖
       ≤ γ ‖z(ko)‖ + Σ_{j=ko}^{k−1} γ αj ‖z(j)‖ , k ≥ ko+1

Applying Lemma 24.5,

‖z(k)‖ ≤ γ ‖z(ko)‖ exp[ γ Σ_{j=ko}^{k−1} αj ] ≤ γ ‖z(ko)‖ exp[ γ Σ_{j=ko}^{∞} αj ] ≤ γ e^{γα} ‖z(ko)‖ , k ≥ ko

This implies uniform stability.

For the scalar example

A(k) = 1/2 , f(k, z(k)) = { z(k), k < 0 ; 0, k ≥ 0 } , αk = { 1, k < 0 ; 0, k ≥ 0 }

we have

Σ_{j=k}^{∞} αj = { |k|, k < 0 ; 0, k ≥ 0 }

which is bounded for each k. But for ko < 0, the solution of this state equation yields

z(0) = (3/2)^{−ko} zo = (3/2)^{|ko|} zo

Clearly any candidate bound γ can be violated by choosing |ko| sufficiently large, so the state equation is not uniformly stable.

CHAPTER 25

Solution 25.1 If M(ko, kf) is not invertible, then there exists a nonzero, n × 1 vector xa such that

0 = xa^T M(ko, kf) xa = Σ_{j=ko}^{kf−1} xa^T Φ^T(j, ko) C^T(j) C(j) Φ(j, ko) xa = Σ_{j=ko}^{kf−1} ‖C(j) Φ(j, ko) xa‖²

This implies

C(j) Φ(j, ko) xa = 0 , j = ko, . . . , kf−1

which shows that the nonzero initial state xa yields the same output on the interval as does the zero initial state. Therefore the state equation is not observable.

On the other hand, for any initial state xo we can write, just as in the proof of Theorem 25.9,

M(ko, kf) xo = O^T(ko, kf) [ y(ko) ; ⋮ ; y(kf−1) ]

If M(ko, kf) is invertible, then the initial state is uniquely determined by

xo = M^{−1}(ko, kf) O^T(ko, kf) [ y(ko) ; ⋮ ; y(kf−1) ]

Solution 25.2 In general the claim is false. If A(k) is zero, then

W(0, kf) = Σ_{j=0}^{kf−1} Φ(kf, j+1) b(j) b^T(j) Φ^T(kf, j+1) = b(kf−1) b^T(kf−1)

This W(0, kf) has rank at most 1, and if n ≥ 2 the state equation is not reachable on [0, kf].

The claim is true if A(k) is invertible at each k. Let kf = n so that

W(0, n) = Σ_{j=0}^{n−1} Φ(n, j+1) b(j) b^T(j) Φ^T(n, j+1)

Since Φ(n, j+1) is invertible for j = 0, . . . , n−1, let

b(k) = Φ^{−1}(n, k+1) e_{k+1} , k = 0, . . . , n−1

where e_k is the k th column of I_n. Then

W(0, n) = Σ_{j=0}^{n−1} e_{j+1} e_{j+1}^T = I_n

and the state equation is reachable on [0, n].
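The construction b(k) = Φ^{−1}(n, k+1) e_{k+1} can be exercised numerically for an arbitrary invertible A(k) (the near-identity sample matrices below are assumptions, not exercise data):

```python
import numpy as np

n = 3
rng = np.random.default_rng(2)
As = [np.eye(n) + 0.4 * rng.standard_normal((n, n)) for _ in range(n)]  # sample data

def phi(k, j):
    """Φ(k, j) = A(k-1)···A(j)."""
    P = np.eye(n)
    for i in range(j, k):
        P = As[i] @ P
    return P

# b(k) = Φ^{-1}(n, k+1) e_{k+1}, so each Gramian term contributes e e^T
bs = [np.linalg.solve(phi(n, k + 1), np.eye(n)[:, k]) for k in range(n)]
W = sum(phi(n, j + 1) @ np.outer(bs[j], bs[j]) @ phi(n, j + 1).T for j in range(n))
assert np.allclose(W, np.eye(n))  # reachability Gramian W(0, n) = I_n
```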

Solution 25.7 Suppose WO(ko, kf) is invertible. Given a p × 1 vector yf, let

u(k) = B^T(k) Φ^T(kf, k+1) C^T(kf) WO^{−1}(ko, kf) yf , k = ko, . . . , kf−1

and let u(k) = 0 for other values of k. Then it is easy to show that the zero-state response to this input yields y(kf) = yf. Thus the state equation is output reachable on [ko, kf].

Conversely, suppose the state equation is output reachable on [ko, kf]. If WO(ko, kf) is not invertible, then there exists a nonzero p × 1 vector ya such that

0 = ya^T WO(ko, kf) ya = Σ_{j=ko}^{kf−1} ya^T C(kf) Φ(kf, j+1) B(j) B^T(j) Φ^T(kf, j+1) C^T(kf) ya = Σ_{j=ko}^{kf−1} ‖ya^T C(kf) Φ(kf, j+1) B(j)‖²

Therefore

ya^T C(kf) Φ(kf, j+1) B(j) = 0 , j = ko, . . . , kf−1

But by output reachability, with yf = ya, there exists an input ua(k) such that

ya = Σ_{j=ko}^{kf−1} C(kf) Φ(kf, j+1) B(j) ua(j)

Thus

ya^T ya = Σ_{j=ko}^{kf−1} ya^T C(kf) Φ(kf, j+1) B(j) ua(j) = 0

and this implies ya = 0. This contradiction shows that WO(ko, kf) must be invertible.

Note that if rank C(kf) < p, then WO(ko, kf) cannot be invertible, and the state equation cannot be output reachable. If m = p = 1, then

WO(ko, kf) = Σ_{j=ko}^{kf−1} G²(kf, j)

Thus the state equation is output reachable on [ko, kf] if and only if G(kf, j) ≠ 0 for some j = ko, . . . , kf−1.


Solution 25.13 We will prove that the state equation is reconstructible if and only if

[ C ; CA ; ⋮ ; CA^{n−1} ] z = 0 implies A^n z = 0  (*)

That is, if and only if the null space of the observability matrix is contained in the null space of A^n.

First, suppose the state equation is not reconstructible. Then there exist n × 1 vectors xa and xb such that xa ≠ xb and

[ C ; ⋮ ; CA^{n−1} ] xa = [ C ; ⋮ ; CA^{n−1} ] xb , A^n xa ≠ A^n xb

That is

[ C ; ⋮ ; CA^{n−1} ] (xa − xb) = 0 , A^n (xa − xb) ≠ 0

Thus the condition (*) fails.

Now suppose the condition (*) fails and z is such that

[ C ; ⋮ ; CA^{n−1} ] z = 0 and A^n z ≠ 0

Obviously z ≠ 0. Then for x(0) = z the zero-input response is

y(k) = 0 , k = 0, . . . , n−1  (+)

and x(n) ≠ 0. But the same output sequence is produced by x(0) = 0, and for this initial state x(n) = 0. Thus we cannot determine from the output (+) whether x(n) = A^n z or x(n) = 0, which implies the state equation is not reconstructible.

CHAPTER 26

Solution 26.2 For the linear state equation

x(k+1) = [1 1; k 1] x(k) + [1; 0] u(k)

easy computations give

R2(k) = [ B(k)  Φ(k+1, k)B(k−1) ] = [1 1; 0 k]

and

R3(k) = [ B(k)  Φ(k+1, k)B(k−1)  Φ(k+1, k−1)B(k−2) ] = [1 1 k; 0 k 2k−1]

From the respective ranks (R2(0) has rank one, while R3(k) has rank two for every k) the state equation is 3-step reachable, but not 2-step reachable.
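The rank claims can be checked by brute force. A short sketch using the matrices as reconstructed above (Φ(k+1, k) = A(k) and Φ(k+1, k−1) = A(k)A(k−1), with B constant):

```python
import numpy as np

A = lambda k: np.array([[1.0, 1.0], [float(k), 1.0]])
B = np.array([[1.0], [0.0]])

def R2(k):
    # [ B(k)  Phi(k+1,k)B(k-1) ]
    return np.hstack([B, A(k) @ B])

def R3(k):
    # [ B(k)  Phi(k+1,k)B(k-1)  Phi(k+1,k-1)B(k-2) ]
    return np.hstack([B, A(k) @ B, A(k) @ A(k - 1) @ B])

rank = np.linalg.matrix_rank
assert rank(R2(0)) == 1                        # rank drops at k = 0
assert all(rank(R3(k)) == 2 for k in range(-3, 4))
```

R2(0) collapses to [1 1; 0 0], while the third column of R3(k) rescues the rank at k = 0.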

Solution 26.4 The (n+1)-dimensional state equation

z(k+1) = [A 0; c 0] z(k) + [b; d] u(k)

y(k) = [0_{1×n} 1] z(k) − u(k)

has the transfer function

H(z) = [0_{1×n} 1] [zI−A 0; −c z]^{−1} [b; d] − 1

     = [0_{1×n} 1] [(zI−A)^{−1} 0; z^{−1}c(zI−A)^{−1} z^{−1}] [b; d] − 1

     = z^{−1} c(zI−A)^{−1} b + z^{−1} d − 1

     = z^{−1} G(z) − 1
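The identity H(z) = z^{−1}G(z) − 1 can be verified numerically for the augmented realization. A sketch with arbitrary random test data (the dimensions and values are my choices):

```python
import numpy as np

# Check H(z) = z^{-1} G(z) - 1 for
#   z(k+1) = [[A, 0],[c, 0]] z(k) + [b; d] u(k),  y(k) = [0 ... 0 1] z(k) - u(k).
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
b = rng.standard_normal((n, 1))
c = rng.standard_normal((1, n))
d = 0.7

F = np.block([[A, np.zeros((n, 1))], [c, np.zeros((1, 1))]])
g = np.vstack([b, [[d]]])
h = np.hstack([np.zeros((1, n)), [[1.0]]])

for z in (2.0, -1.5, 3.3):
    G = (c @ np.linalg.solve(z * np.eye(n) - A, b)).item() + d
    H = (h @ np.linalg.solve(z * np.eye(n + 1) - F, g)).item() - 1.0
    assert np.isclose(H, G / z - 1.0)
```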

Solution 26.6 By Theorem 26.8, G(z) is realizable if and only if it is a matrix of (real-coefficient) strictly-proper rational functions. By partial fraction expansion of G(z)/z we can write G(z) in the form


G(z) = Σ_{l=1}^{m} Σ_{r=1}^{σl} Glr z / (z−λl)^r

Here λ1, . . . , λm are distinct complex numbers such that if λL is complex, then λM = conj(λL) for some M. Furthermore the p × m complex matrices satisfy GMr = conj(GLr) for r = 1, . . . , σL. From Table 1.10 the corresponding unit pulse response is

G(k) = Σ_{l=1}^{m} Σ_{r=1}^{σl} Glr (k choose r−1) λl^{k+1−r}    (#)

Thus we can state that a unit pulse response G(k) is realizable if and only if
(a) there exist positive integers m, σ1, . . . , σm, distinct complex numbers λ1, . . . , λm, and σ1 + . . . + σm complex p × m matrices Glr such that (#) holds for all k ≥ 1, and
(b) if λL is complex, then λM = conj(λL) for some M, and the p × m complex matrices satisfy GMr = conj(GLr) for r = 1, . . . , σL.

Solution 26.8 Suppose the given state equation is minimal and of dimension n. We can write its (strictly-proper, rational) transfer function as

G(z) = c · adj(zI−A) · b / det(zI−A)

where the polynomial det(zI−A) has degree n. If the numerator and denominator polynomials have a common root, then this root can be canceled without changing the inverse z-transform of G(z). Therefore, following Example 26.10, we can write by inspection a dimension-(n−1) realization of the unit pulse response of the original state equation. This contradicts the assumed minimality, and the contradiction gives that the two polynomials cannot have a common root.

Now suppose the polynomials det(zI−A) and c · adj(zI−A) · b have no common root, but that the given state equation is not minimal. Then there is a minimal realization

z(k+1) = F z(k) + g u(k)

y(k) = h z(k)

and we then have

c · adj(zI−A) · b / det(zI−A) = h · adj(zI−F) · g / det(zI−F)

where the polynomial det(zI−F) has degree no larger than n−1. This implies that the polynomials det(zI−A) and c · adj(zI−A) · b have a common root, which is a contradiction. Therefore the given state equation is minimal.

Solution 26.11 This is essentially the same as Solution 11.12.

Solution 26.12 Either by writing a minimal realization of G(z) in the form of Example 26.10 and computing cA^k b, k = 0, . . . , 4, or by long division of G(z), it is easy to verify the first 5 Markov parameters.

For the second part we can either work with an assumed transfer function, or assume a dimension-2 state equation of the form

x(k+1) = [0 1; −a0 −a1] x(k) + [0; 1] u(k)

y(k) = [c0 c1] x(k)


From the latter approach, setting

cb = 0 ,  cAb = 1 ,  cA²b = 1/2 ,  cA³b = 1/2

easily yields c1 = 0, c0 = 1, a0 = 1/4, a1 = 1/2.
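The Markov-parameter fit can be checked numerically. Note an assumption in the sketch below: it takes the bottom row of A as [a0 a1] with plus signs, a convention under which the quoted values a0 = 1/4, a1 = 1/2 reproduce all four parameters (with minus signs in the bottom row, the same fit would instead give a0 = −1/4, a1 = −1/2).

```python
import numpy as np

# Markov parameters cA^k b, k = 0..3, for the fitted dimension-2 realization.
# Convention assumed here: A = [[0, 1], [a0, a1]] (plus signs in the bottom row).
a0, a1 = 0.25, 0.5
A = np.array([[0.0, 1.0], [a0, a1]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])           # c0 = 1, c1 = 0

markov = [(c @ np.linalg.matrix_power(A, k) @ b).item() for k in range(4)]
assert np.allclose(markov, [0.0, 1.0, 0.5, 0.5])
```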


CHAPTER 27

Solution 27.1 Similar to Solution 12.1.

Solution 27.4 Suppose the entry Gij(z) has one pole at z = 1, that is

Gij(z) = Nij(z) / [(z−1) Dij(z)]

where all roots of the polynomial Dij(z) have magnitude less than unity (so Dij(1) ≠ 0), and the polynomial Nij(z) satisfies Nij(1) ≠ 0. Suppose that the m × 1 input U(z) has all components zero except for Uj(z) = z/(z−1). Then the i-th component of the output is given by

Yi(z) = z Nij(z) / [(z−1)² Dij(z)]

By partial fraction expansion yi(k) includes decaying exponential terms, possibly a constant term, and the term

[Nij(1) / Dij(1)] k ,  k ≥ 0

Since this term is unbounded, every realization of G(z) fails to be uniform bounded-input, bounded-output stable.
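A minimal illustration of the ramp term: take G(z) = 1/(z−1) (my simplest example with Nij = Dij = 1), so a unit step input makes Y(z) have a double pole at z = 1.

```python
# G(z) = 1/(z-1) realized as x(k+1) = x(k) + u(k), y(k) = x(k),
# driven by the unit step u(k) = 1: the output is the ramp y(k) = k.
N = 50
x, y = 0.0, []
for k in range(N):
    y.append(x)
    x = x + 1.0          # u(k) = 1

assert y[10] == 10.0 and y[49] == 49.0   # bounded input, unbounded (ramp) output
```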

Solution 27.7 The claim is not true in the time-varying case. Consider the scalar state equation

x(k+1) = x(k) + δ(k) u(k)

y(k) = x(k)

where δ(k) is the unit pulse. The zero-state response to any input is

y(k) = u(0) for k ≥ 1 if ko ≤ 0, and y(k) = 0 for k ≥ ko if ko > 0

Thus the state equation is uniform bounded-input, bounded-output stable with η = 1. However for ko = 0 and u(k) = (1/2)^k we have u(k) → 0 as k → ∞, but y(k) = 1 for all k ≥ 1.

For the time-invariant case the claim can be proved as follows. Assume u(k) → 0 as k → ∞. Given ε > 0 we will find a K such that ||y(k)|| ≤ ε for k ≥ K, which shows that y(k) → 0 as k → ∞. With

y(k) = Σ_{j=0}^{k} G(k−j) u(j)

and an input signal u(k) such that u(k) → 0 as k → ∞, let


µ = sup_{k ≥ 0} ||u(k)|| ,  η = Σ_{k=0}^{∞} ||G(k)||

The first constant is finite for a well-defined sequence that goes to zero, and the second is finite by uniform bounded-input, bounded-output stability. Then there is a positive integer K1 such that

||u(k)|| ≤ ε/(2η) for k ≥ K1 ,  Σ_{k=K1}^{∞} ||G(k)|| ≤ ε/(2µ)

Let K = 2K1. Then for k ≥ K we have

||y(k)|| ≤ µ Σ_{j=0}^{K1−1} ||G(k−j)|| + (ε/(2η)) Σ_{j=K1}^{k} ||G(k−j)||

        ≤ µ Σ_{q=k−K1}^{k} ||G(q)|| + (ε/(2η)) Σ_{q=0}^{k−K1} ||G(q)||

        ≤ µ (ε/(2µ)) + (ε/(2η)) η = ε

Solution 27.8 Similar to Solution 12.12.


CHAPTER 28

Solution 28.2 Lemma 16.18 gives that if V11 and V are invertible, then

V^{−1} = [V11 V12; V21 V22]^{−1}

       = [V11^{−1} + V11^{−1} V12 Va^{−1} V21 V11^{−1}   −V11^{−1} V12 Va^{−1} ; −Va^{−1} V21 V11^{−1}   Va^{−1}]

where Va = V22 − V21 V11^{−1} V12. From the expression V V^{−1} = I, written as

[V11 V12; V21 V22] [W11 W12; W21 W22] = I

we obtain

V11 W11 + V12 W21 = I

V21 W11 + V22 W21 = 0

Under the assumption that V11 and V22 are invertible these imply

W11 = V11^{−1} − V11^{−1} V12 W21 ,  W21 = −V22^{−1} V21 W11

Solving for W11 gives

W11 = (V11 − V12 V22^{−1} V21)^{−1}

and comparing this with the 1,1-block of V^{−1} from Lemma 16.18 gives

(V11 − V12 V22^{−1} V21)^{−1} = V11^{−1} + V11^{−1} V12 (V22 − V21 V11^{−1} V12)^{−1} V21 V11^{−1}

Solution 28.3 Given α > 1 consider

z(k+1) = (αA) z(k) + (αB) u(k)

It is easy to see that reachability is preserved under this scaling, and if we choose K such that

z(k+1) = (αA + αBK) z(k) = α(A + BK) z(k)

is uniformly exponentially stable, then by Lemma 28.7 we have that

x(k+1) = (A + BK) x(k)

is uniformly exponentially stable with rate α. So choose, by Theorem 28.9,


K = −(αB)^T((αA)^T)^n [ Σ_{k=0}^{n} (αA)^k (αB)(αB)^T ((αA)^T)^k ]^{−1} (αA)^{n+1}

  = −B^T(A^T)^n [ Σ_{k=0}^{n} α^{−2(n−k)} A^k B B^T (A^T)^k ]^{−1} A^{n+1}
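The final gain formula can be tried on a small example. The reachable pair (A, B) and α = √2 below are arbitrary illustrative choices; the closed-loop eigenvalue magnitudes come out below 1/α, so x(k+1) = (A+BK)x(k) decays at least as fast as α^{−k}.

```python
import numpy as np

# K = -B^T (A^T)^n [ sum_{k=0}^{n} alpha^{-2(n-k)} A^k B B^T (A^T)^k ]^{-1} A^{n+1}
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
n = A.shape[0]
alpha = np.sqrt(2.0)

mp = np.linalg.matrix_power
W = sum(alpha ** (-2 * (n - k)) * mp(A, k) @ B @ B.T @ mp(A.T, k)
        for k in range(n + 1))
K = -B.T @ mp(A.T, n) @ np.linalg.inv(W) @ mp(A, n + 1)

# Rate check: every eigenvalue of A + BK has magnitude below 1/alpha.
rho = max(abs(np.linalg.eigvals(A + B @ K)))
assert rho < 1.0 / alpha
```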

Solution 28.4 Similar to Solution 13.11. However for the time-invariant case the reachability matrix rank test can be used, rather than the eigenvector test, by writing

[ B  (A+BK)B  (A+BK)²B  . . . ] = [ B  AB  A²B  . . . ] [ I  KB  KAB+(KB)²  . . . ; 0  I  KB  . . . ; 0  0  I  . . . ; . . . ]

The right-hand factor is block upper triangular with identity diagonal blocks, hence invertible, so the two reachability matrices have the same rank.
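The factorization can be verified for the three-block case with random test data (dimensions and seed are my choices):

```python
import numpy as np

# [B, (A+BK)B, (A+BK)^2 B] = [B, AB, A^2 B] T with T block upper triangular,
# unit block diagonal; hence the two reachability matrices have equal rank.
rng = np.random.default_rng(2)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
K = rng.standard_normal((m, n))

Acl = A + B @ K
R_open = np.hstack([B, A @ B, A @ A @ B])
R_closed = np.hstack([B, Acl @ B, Acl @ Acl @ B])

I, Z = np.eye(m), np.zeros((m, m))
T = np.block([[I, K @ B, K @ A @ B + (K @ B) @ (K @ B)],
              [Z, I,     K @ B],
              [Z, Z,     I]])
assert np.allclose(R_closed, R_open @ T)
assert np.linalg.matrix_rank(R_open) == np.linalg.matrix_rank(R_closed)
```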

Solution 28.6 Similar to Solution 2.8.

Solution 28.8 Supposing that the linear state equation is reachable, there existsK such that all eigenvalues ofA+BK have magnitude less than unity. Therefore (I−A−BK) is invertible, and if we suppose

�� C

A−I0B �

is invertible, thenC (I−A−BK)−1B is invertible from Exercise 28.6. Then given any diagonal,m × m matrix Λ, wecan choose

N = [C (I−A−BK)−1B]−1Λ

to obtainG(1) = Λ. For this closed-loop system, anyx (0) and any constant inputR (k) = ro yields

k → ∞lim y (k) = Λro

by the final value theorem. That is, the steady-state value of the response to constant inputs is ‘noninteracting.’(For finite time values, or other inputs, interaction typically occurs.)
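The steady-state decoupling can be simulated. A sketch with a simple stable test system of my choosing (K = 0 already stabilizes it, so the construction reduces to picking N):

```python
import numpy as np

# With N = [C (I - A - BK)^{-1} B]^{-1} Lambda and a stable closed loop,
# the constant input R(k) = ro drives y(k) -> Lambda ro.
A = np.array([[0.5, 0.2], [0.0, 0.3]])
B = np.eye(2)
C = np.eye(2)
K = np.zeros((2, 2))
Lam = np.diag([2.0, -1.0])

G1 = C @ np.linalg.inv(np.eye(2) - A - B @ K) @ B   # closed-loop DC gain
N = np.linalg.inv(G1) @ Lam

ro = np.array([1.0, 3.0])
x = np.zeros(2)
for _ in range(200):
    x = (A + B @ K) @ x + B @ N @ ro
y = C @ x
assert np.allclose(y, Lam @ ro, atol=1e-8)   # steady state Lambda ro
```

Each output channel settles to its own scaled reference component, independent of the other channel.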


CHAPTER 29

Solution 29.1 The error eb(k) satisfies

eb(k+1) = z(k+1) − Pb(k+1) x(k+1)

        = F(k) z(k) + [Gb(k)C(k) − Pb(k+1)A(k)] x(k) + [Ga(k) − Pb(k+1)B(k)] u(k)

        = F(k) z(k) − F(k) Pb(k) x(k)

        = F(k) eb(k)

Therefore eb(k) → 0 exponentially as k → ∞. Now

e(k) ≜ x(k) − x̂(k)

     = x(k) − H(k)C(k)x(k) − J(k)z(k)

     = −J(k) eb(k) + [I − H(k)C(k) − J(k)Pb(k)] x(k)

     = −J(k) eb(k)

Therefore if J(k) is bounded, that is, ||J(k)|| ≤ α < ∞ for all k, then eb(k) → 0 implies e(k) → 0 as k → ∞, and x̂(k) is an asymptotic estimate of x(k).

Solution 29.2 The plant is

[xa(k+1); xb(k+1)] = [F11(k) F12(k); F21(k) F22(k)] [xa(k); xb(k)] + [G1(k); G2(k)] u(k)

y(k) = [Ip 0] [xa(k); xb(k)]

With

Pb(k) = [ −H(k)  I_{n−p} ]

we have

[C(k); Pb(k)]^{−1} = [Ip 0; −H(k) I_{n−p}]^{−1} = [Ip 0; H(k) I_{n−p}]

Then the equations in Exercise 29.1 give


F(k) = F22(k) − H(k+1) F12(k)

Gb(k) = F(k) H(k) − H(k+1) F11(k) + F21(k)

Ga(k) = −H(k+1) G1(k) + G2(k)

The observer estimate is

x̂(k) = [Ip; H(k)] y(k) + [0; I_{n−p}] z(k)

     = [Ip; H(k)] [Ip 0] [xa(k); xb(k)] + [0; I_{n−p}] z(k)

     = [xa(k); H(k) xa(k) + z(k)]

Therefore

x̂a(k) = xa(k)

x̂b(k) = H(k) xa(k) + z(k)

where

z(k+1) = F(k) z(k) + Ga(k) u(k) + Gb(k) y(k)

This is exactly the same as the reduced-dimension observer in the text.
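The gain formulas can be checked against the observer conditions of Exercise 29.1, namely Pb(k+1)A(k) − Gb(k)C(k) = F(k)Pb(k) and Ga(k) = Pb(k+1)B(k). A sketch with random blocks standing in for the time-varying partitions (all values are illustrative):

```python
import numpy as np

# Verify the reduced-order observer gains satisfy the Exercise 29.1 conditions.
rng = np.random.default_rng(3)
p, q = 2, 3                       # q = n - p
F11, F12 = rng.standard_normal((p, p)), rng.standard_normal((p, q))
F21, F22 = rng.standard_normal((q, p)), rng.standard_normal((q, q))
G1, G2 = rng.standard_normal((p, 1)), rng.standard_normal((q, 1))
Hk, Hk1 = rng.standard_normal((q, p)), rng.standard_normal((q, p))  # H(k), H(k+1)

Ablk = np.block([[F11, F12], [F21, F22]])
Bblk = np.vstack([G1, G2])
C = np.hstack([np.eye(p), np.zeros((p, q))])
Pb_k = np.hstack([-Hk, np.eye(q)])         # Pb(k)
Pb_k1 = np.hstack([-Hk1, np.eye(q)])       # Pb(k+1)

F = F22 - Hk1 @ F12
Gb = F @ Hk - Hk1 @ F11 + F21
Ga = -Hk1 @ G1 + G2

assert np.allclose(Pb_k1 @ Ablk - Gb @ C, F @ Pb_k)
assert np.allclose(Ga, Pb_k1 @ Bblk)
```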

Solution 29.5 Similar to Solution 15.6.

Solution 29.6 Similar to Solution 15.10.
