
ENGI 7825: Control Systems II

Instructor: Dr. Andrew Vardy

Adapted from the notes of Gabriel Oliver Codina

[Block diagram of the state-space model: u(t) feeds B and D; B u(t) and A x(t) sum to ẋ(t), which is integrated (with initial state x0) to give x(t); C x(t) and D u(t) sum to give the output y(t).]

State-Space Fundamentals: Part 2: The Matrix Exponential

A “Matrixy” Integrating Factor

► Go back through the solution to the scalar system and see what operations are performed using the integrating factor:

■ Multiplication on both sides
■ Differentiation
■ Multiplication on both sides by the “inverse” e^{a(t-t_0)}

► We need some kind of matrix entity that can do all of the above in the same way as e^{-a(t-t_0)}.

► What is special about differentiation of exponential functions of the form e^{at}?

► We need something with the same property in order to solve an SS system for x(t).
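
For reference, here is a minimal sketch of the scalar derivation being mirrored (assuming the first-order system \dot{x}(t) = a\,x(t) + b\,u(t) with x(t_0) = x_0; this notation is not from the slides). Multiplying both sides by e^{-a(t-t_0)} and recognizing a product-rule derivative:

e^{-a(t-t_0)}\dot{x}(t) - a\,e^{-a(t-t_0)}x(t) = e^{-a(t-t_0)}\,b\,u(t)
\quad\Longrightarrow\quad
\frac{d}{dt}\left[ e^{-a(t-t_0)}\,x(t) \right] = e^{-a(t-t_0)}\,b\,u(t)

Integrating from t_0 to t and then multiplying through by the "inverse" e^{a(t-t_0)}:

x(t) = e^{a(t-t_0)}\,x_0 + \int_{t_0}^{t} e^{a(t-\tau)}\,b\,u(\tau)\,d\tau

The matrix exponential is constructed so that exactly these steps carry over to \dot{x} = Ax + Bu.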

The Matrix Exponential

► The scalar exponential is defined by the infinite power series:

e^{at} = 1 + at + \frac{1}{2}a^2 t^2 + \frac{1}{6}a^3 t^3 + \cdots = \sum_{k=0}^{\infty} \frac{1}{k!} a^k t^k

► We define the matrix exponential by replacing a with the matrix A:

e^{At} = I + At + \frac{1}{2}A^2 t^2 + \frac{1}{6}A^3 t^3 + \cdots = \sum_{k=0}^{\infty} \frac{1}{k!} A^k t^k

► Although the notation is surprising, e^{At} is really just a matrix, defined to behave in a similar way to the scalar exponential.

The following derivation shows where this series comes from as the solution of the homogeneous state equation. Seeking a power-series solution X(t) = \sum_{k=0}^{\infty} X_k (t - t_0)^k to the matrix differential equation (2.6), differentiating term by term and shifting the summation index gives

\sum_{k=0}^{\infty} (k + 1) X_{k+1} (t - t_0)^k = A \left( \sum_{k=0}^{\infty} X_k (t - t_0)^k \right) = \sum_{k=0}^{\infty} A X_k (t - t_0)^k

By equating like powers of t - t_0, we obtain the recursive relationship

X_{k+1} = \frac{1}{k + 1} A X_k, \qquad k \ge 0

which, when initialized with X_0 = I, leads to

X_k = \frac{1}{k!} A^k, \qquad k \ge 0

Substituting this result into the power series (2.7) yields

X(t) = \sum_{k=0}^{\infty} \frac{1}{k!} A^k (t - t_0)^k

We note here that the infinite power series (2.7) has the requisite convergence properties so that the infinite power series resulting from term-by-term differentiation converges to X(t), and Equation (2.6) is satisfied.

Recall that the scalar exponential function is defined by the following infinite power series

e^{at} = 1 + at + \frac{1}{2} a^2 t^2 + \frac{1}{6} a^3 t^3 + \cdots = \sum_{k=0}^{\infty} \frac{1}{k!} a^k t^k

Motivated by this, we define the so-called matrix exponential via

e^{At} = I + At + \frac{1}{2} A^2 t^2 + \frac{1}{6} A^3 t^3 + \cdots = \sum_{k=0}^{\infty} \frac{1}{k!} A^k t^k \qquad (2.8)
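
As a quick numerical illustration (a sketch assuming NumPy and SciPy are available; the 2×2 matrix below is an arbitrary example, not from the notes), the truncated defining series can be compared against SciPy's built-in matrix exponential:

    import numpy as np
    from scipy.linalg import expm

    def expm_series(A, t, terms=30):
        """Approximate e^{At} by truncating the defining power series."""
        n = A.shape[0]
        total = np.eye(n)
        term = np.eye(n)
        for k in range(1, terms):
            term = term @ (A * t) / k      # builds (1/k!) (At)^k recursively
            total = total + term
        return total

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    t = 1.5
    print(np.allclose(expm_series(A, t), expm(A * t)))   # expected: True

For large ||At|| the raw truncated series becomes numerically poor, which is part of why these notes later turn to the Laplace-transform route for computing e^{At}.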


► Once again, e^{At} is just notation used to represent a power series. In general, the matrix exponential does not equal the matrix of scalar exponentials of the elements of A:

e^{At} \neq \left[ e^{a_{ij} t} \right]

► Example 1: Consider the following 4×4 matrix:

A = \begin{bmatrix} 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \end{bmatrix}

Let's obtain the first few terms of the power series:

A^2 = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}, \qquad
A^3 = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{bmatrix}, \qquad
A^4 = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \qquad
A^k = 0 \;\; \forall k \ge 4

The power series contains only a finite number of nonzero terms:

e^{At} = I + At + \frac{1}{2}A^2 t^2 + \frac{1}{6}A^3 t^3 =
\begin{bmatrix} 1 & 0 & 0 & 0 \\ -t & 1 & 0 & 0 \\ \frac{1}{2}t^2 & -t & 1 & 0 \\ -\frac{1}{6}t^3 & \frac{1}{2}t^2 & -t & 1 \end{bmatrix}
\;\neq\;
\left[ e^{a_{ij} t} \right] =
\begin{bmatrix} 0 & 0 & 0 & 0 \\ e^{-t} & 0 & 0 & 0 \\ 0 & e^{-t} & 0 & 0 \\ 0 & 0 & e^{-t} & 0 \end{bmatrix}
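
A short check of Example 1 (a sketch assuming NumPy/SciPy; the time value is arbitrary):

    import numpy as np
    from scipy.linalg import expm

    # A is nilpotent (A^4 = 0), so the series terminates after the A^3 term.
    A = np.zeros((4, 4))
    A[1, 0] = A[2, 1] = A[3, 2] = -1.0

    t = 0.7
    closed_form = np.array([
        [1.0,        0.0,       0.0, 0.0],
        [-t,         1.0,       0.0, 0.0],
        [t**2 / 2,  -t,         1.0, 0.0],
        [-t**3 / 6,  t**2 / 2, -t,   1.0],
    ])
    print(np.allclose(expm(A * t), closed_form))   # expected: True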


► Example 2: For a diagonal matrix, this equality is satisfied:

e^{At} = \left[ e^{a_{ij} t} \right]

Consider the diagonal n×n matrix A:

A = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 & 0 \\ 0 & \lambda_2 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & \lambda_{n-1} & 0 \\ 0 & 0 & \cdots & 0 & \lambda_n \end{bmatrix}
\qquad \Rightarrow \qquad
A^k = \begin{bmatrix} \lambda_1^k & 0 & \cdots & 0 & 0 \\ 0 & \lambda_2^k & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & \lambda_{n-1}^k & 0 \\ 0 & 0 & \cdots & 0 & \lambda_n^k \end{bmatrix}

The power series contains an infinite number of terms:

e^{At} = \sum_{k=0}^{\infty} \frac{1}{k!} \begin{bmatrix} \lambda_1^k & 0 & \cdots & 0 \\ 0 & \lambda_2^k & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^k \end{bmatrix} t^k
= \begin{bmatrix} \sum_{k=0}^{\infty} \frac{1}{k!} \lambda_1^k t^k & 0 & \cdots & 0 \\ 0 & \sum_{k=0}^{\infty} \frac{1}{k!} \lambda_2^k t^k & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sum_{k=0}^{\infty} \frac{1}{k!} \lambda_n^k t^k \end{bmatrix}

Thus:

e^{At} = \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix}
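
And a corresponding check of Example 2 (a sketch with arbitrary eigenvalues, assuming NumPy/SciPy):

    import numpy as np
    from scipy.linalg import expm

    lam = np.array([-1.0, 0.5, 2.0])        # arbitrary diagonal entries
    A = np.diag(lam)
    t = 0.3
    # For diagonal A, e^{At} is diagonal with entries e^{lambda_i t}.
    print(np.allclose(expm(A * t), np.diag(np.exp(lam * t))))   # expected: True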

Properties of the matrix exponential

► For any real n×n matrix A, the matrix exponential e^{At} satisfies:

1. e^{At} is the unique matrix for which
\frac{d}{dt} e^{At} = A e^{At} = e^{At} A, \qquad e^{A(0)} = I_{n \times n}

2. For any t_1 and t_2:
e^{A(t_1 + t_2)} = e^{At_1} e^{At_2}
As a consequence,
e^{A(0)} = e^{A(t - t)} = e^{At} e^{-At} = I
Thus, e^{At} is invertible for all t, with inverse
\left[ e^{At} \right]^{-1} = e^{-At}

3. For all t, A and e^{At} commute with respect to the matrix product:
A e^{At} = e^{At} A

4. For all t:
\left[ e^{At} \right]^{T} = e^{A^{T} t}

5. For any real n×n matrix B, e^{(A+B)t} = e^{At} e^{Bt} for all t if and only if AB = BA.

6. Finally, a useful property of the matrix exponential is that it can be reduced to a finite power series involving n scalar analytic functions \alpha_k(t):
e^{At} = \sum_{k=0}^{n-1} \alpha_k(t) A^k
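
These properties are easy to spot-check numerically (a sketch with an arbitrary test matrix, assuming NumPy/SciPy):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # arbitrary test matrix
    B = np.array([[1.0, 0.0], [0.0, 2.0]])       # does not commute with A
    t, t1, t2 = 0.8, 0.4, 1.1

    E = expm(A * t)
    print(np.allclose(expm(A * (t1 + t2)), expm(A * t1) @ expm(A * t2)))  # property 2
    print(np.allclose(np.linalg.inv(E), expm(-A * t)))                    # inverse of e^{At}
    print(np.allclose(A @ E, E @ A))                                      # property 3
    print(np.allclose(E.T, expm(A.T * t)))                                # property 4
    print(np.allclose(expm((A + B) * t), expm(A * t) @ expm(B * t)))      # property 5: False here, since AB != BA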

State Equation Solution

► The solution of the state differential equation is found to be:

x(t) = e^{A(t - t_0)} x_0 + \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau) \, d\tau

where the matrix exponential is needed:

e^{A(t - t_0)} = \sum_{k=0}^{\infty} \frac{1}{k!} A^k (t - t_0)^k = I + A(t - t_0) + \frac{1}{2} A^2 (t - t_0)^2 + \frac{1}{6} A^3 (t - t_0)^3 + \cdots

► The matrix exponential is sometimes referred to as the state-transition matrix and denoted by \Phi(t, t_0):

\Phi(t, t_0) = e^{A(t - t_0)}

► Using this notation, the solution of the state differential equation can be written as:

x(t) = \Phi(t, t_0) x_0 + \int_{t_0}^{t} \Phi(t, \tau) B u(\tau) \, d\tau

The first term is the zero-input response x_{zi}(t); the second term is the zero-state response x_{zs}(t).
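
To make the formula concrete, here is a sketch of evaluating x(t) directly from this solution (zero-input term plus the convolution integral) and cross-checking against direct numerical integration of the state equation. The system, input and times below are arbitrary placeholders, and NumPy/SciPy are assumed:

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import quad_vec, solve_ivp

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    x0 = np.array([[1.0], [0.0]])
    u = lambda tau: 1.0                      # unit-step input
    t0, t = 0.0, 2.0

    # x(t) = e^{A(t - t0)} x0 + integral of e^{A(t - tau)} B u(tau) dtau
    x_zi = expm(A * (t - t0)) @ x0
    x_zs, _ = quad_vec(lambda tau: expm(A * (t - tau)) @ B * u(tau), t0, t)
    x_closed = x_zi + x_zs

    # Cross-check: integrate xdot = A x + B u directly.
    f = lambda tt, xx: (A @ xx.reshape(-1, 1) + B * u(tt)).ravel()
    sol = solve_ivp(f, (t0, t), x0.ravel(), rtol=1e-9, atol=1e-12)
    print(np.allclose(x_closed.ravel(), sol.y[:, -1], atol=1e-6))   # expected: True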

Output Equation Solution

► Having the solution for the complete state response, a solution for the complete output equation can be obtained as:

y(t) = C e^{A(t - t_0)} x_0 + \int_{t_0}^{t} C e^{A(t - \tau)} B u(\tau) \, d\tau + D u(t)

The first term is the zero-input output y_{zi}(t); the remaining terms form the zero-state output y_{zs}(t).

► Any issues with these solutions for x(t) and y(t)?

► Yes! How can we evaluate e^{A(t - t_0)} when it's defined by an infinite series?

► Solution: We will look to the frequency domain for the solution of the zero-state response, which will lead to another way of evaluating e^{A(t - t_0)}.


► The solution to the unforced system (u = 0) is simply x(t) = e^{A(t - t_0)} x_0, since the convolution term in

x(t) = e^{A(t - t_0)} x_0 + \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau) \, d\tau

vanishes. Written out component-wise, with t_0 = 0:

\begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix} =
\begin{bmatrix}
\phi_{11}(t) & \phi_{12}(t) & \cdots & \phi_{1n}(t) \\
\phi_{21}(t) & \phi_{22}(t) & \cdots & \phi_{2n}(t) \\
\vdots & & \ddots & \vdots \\
\phi_{n1}(t) & \phi_{n2}(t) & \cdots & \phi_{nn}(t)
\end{bmatrix}
\begin{bmatrix} x_1(0) \\ x_2(0) \\ \vdots \\ x_n(0) \end{bmatrix}

therefore, the term \phi_{ij}(t) can be interpreted (and determined) as the response of the ith state variable due to an initial condition on the jth state variable when there are zero initial conditions on all other states.

Frequency Domain Solution

► The solution for x(t) proposed so far is based on the mysterious matrix exponential. Ignoring the weirdness of e^{At}, the bigger problem is how to compute it.

► We will pursue an alternate solution in the frequency domain, which will actually lead to another way of computing e^{At}. Taking the Laplace transform of the state equation (with initial time t_0 = 0):

\mathcal{L}\left[ \dot{x}(t) \right] = \mathcal{L}\left[ A x(t) + B u(t) \right] = \mathcal{L}\left[ A x(t) \right] + \mathcal{L}\left[ B u(t) \right]

s X(s) - x_0 = A X(s) + B U(s)

X(s) = (sI - A)^{-1} x_0 + (sI - A)^{-1} B U(s)

The first term is the zero-input response X_{zi}(s); the second term is the zero-state response X_{zs}(s).

► This frequency-domain solution bears a strong resemblance to our previous time-domain solution:

x(t) = e^{A(t - t_0)} x_0 + \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau) \, d\tau
\qquad \longleftrightarrow \qquad
X(s) = (sI - A)^{-1} x_0 + (sI - A)^{-1} B U(s)

► In fact, we can conclude that the matrix exponential e^{At} and (sI - A)^{-1} form a Laplace transform pair:

\mathcal{L}\left[ e^{At} \right] = (sI - A)^{-1}, \qquad e^{At} = \mathcal{L}^{-1}\left[ (sI - A)^{-1} \right]

(for t_0 \neq 0, simply replace t with t - t_0 in the resulting e^{At}).

► We should also finish the frequency-domain solution by obtaining the output:

Impulse Response

► We can divide the output into zero-input and zero-state components:

Y(s) = C X(s) + D U(s) = C (sI - A)^{-1} x_0 + \left[ C (sI - A)^{-1} B + D \right] U(s)

The first term is the zero-input output Y_{zi}(s); the second is the zero-state output Y_{zs}(s).

► Consider just Y_{zs}(s) and recall that multiplication in the frequency domain is convolution in the time domain.

► An impulse in the time domain is just a 1 in the frequency domain. So the impulse response in the frequency domain (i.e. the transfer function) is easily found:

H(s) = C (sI - A)^{-1} B + D
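
As an aside, SciPy can convert a state-space model to this transfer function directly; a sketch with an arbitrary SISO system (not the course example):

    import numpy as np
    from scipy.signal import ss2tf

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.0]])

    # H(s) = C (sI - A)^{-1} B + D
    num, den = ss2tf(A, B, C, D)
    print(num)   # numerator coefficients, highest power of s first
    print(den)   # denominator coefficients: [1, 3, 2]  ->  s^2 + 3s + 2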


State-Space Fundamentals: exercise

► Solve the following linear second-order ordinary differential equation:

\ddot{y}(t) + 7\dot{y}(t) + 12 y(t) = u(t)

Consider the input u(t) to be a step of magnitude 3 and the initial conditions

y(0) = 0.10, \qquad \dot{y}(0) = 0.05

First choose state variables in controller canonical form:

x_1 = y, \qquad x_2 = \dot{y} = \dot{x}_1, \qquad \dot{x}_2 = \ddot{y} = u - 12 x_1 - 7 x_2

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} =
\begin{bmatrix} 0 & 1 \\ -12 & -7 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} +
\begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t),
\qquad
\begin{bmatrix} x_1(0) \\ x_2(0) \end{bmatrix} = \begin{bmatrix} 0.10 \\ 0.05 \end{bmatrix}

y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \left[ 0 \right] u(t)

so that

A = \begin{bmatrix} 0 & 1 \\ -12 & -7 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \qquad D = 0

The powers of A never become zero (A is not nilpotent), so obtaining the state-transition matrix as a power series is impractical.

► The matrix (sI - A)^{-1} is required:

► Look back on your old linear algebra notes for a quick refresher on matrix inversion. Another source is Williams and Lawrence, Appendix A.

sI - A = \begin{bmatrix} s & -1 \\ 12 & s + 7 \end{bmatrix}

\det(sI - A) = \left| sI - A \right| = s^2 + 7s + 12

(sI - A)^{-1} = \frac{1}{s^2 + 7s + 12} \begin{bmatrix} s + 7 & 1 \\ -12 & s \end{bmatrix}

X(s) = (sI - A)^{-1} x_0 + (sI - A)^{-1} B U(s), \qquad A = \begin{bmatrix} 0 & 1 \\ -12 & -7 \end{bmatrix}
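
The symbolic inversion (and, further down, the inverse Laplace transform) can also be checked with SymPy; a sketch assuming SymPy is installed (the transform result may carry Heaviside(t) factors):

    import sympy as sp

    s, t = sp.symbols('s t', positive=True)
    A = sp.Matrix([[0, 1], [-12, -7]])

    M = (s * sp.eye(2) - A).inv()
    print(sp.simplify(M))        # 1/(s**2 + 7*s + 12) * [[s + 7, 1], [-12, s]]

    # Entrywise inverse Laplace transform recovers e^{At}.
    eAt = M.applyfunc(lambda f: sp.inverse_laplace_transform(f, s, t))
    print(sp.simplify(eAt))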

► Thus, from

(sI - A)^{-1} = \frac{1}{s^2 + 7s + 12} \begin{bmatrix} s + 7 & 1 \\ -12 & s \end{bmatrix}
\qquad \text{and} \qquad
X(s) = (sI - A)^{-1} x_0 + (sI - A)^{-1} B U(s)

we obtain

X(s) = \frac{1}{s^2 + 7s + 12} \begin{bmatrix} s + 7 & 1 \\ -12 & s \end{bmatrix} \begin{bmatrix} 0.10 \\ 0.05 \end{bmatrix}
+ \frac{1}{s^2 + 7s + 12} \begin{bmatrix} s + 7 & 1 \\ -12 & s \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} \frac{3}{s}

= \frac{1}{s^2 + 7s + 12} \begin{bmatrix} 0.1 s + 0.75 + \dfrac{3}{s} \\[4pt] 0.05 s + 1.8 \end{bmatrix}
= \begin{bmatrix} \dfrac{0.1 s^2 + 0.75 s + 3}{s (s + 3)(s + 4)} \\[6pt] \dfrac{0.05 s + 1.8}{(s + 3)(s + 4)} \end{bmatrix}
= \begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix}

Partial-fraction expansion gives

X_1(s) = \frac{0.1 s^2 + 0.75 s + 3}{s (s + 3)(s + 4)} = \frac{0.25}{s} - \frac{0.55}{s + 3} + \frac{0.4}{s + 4}

X_2(s) = \frac{0.05 s + 1.8}{(s + 3)(s + 4)} = \frac{1.65}{s + 3} - \frac{1.60}{s + 4}

so that

y(t) = \mathcal{L}^{-1}\left[ X_1(s) \right] = 0.25 - 0.55\, e^{-3t} + 0.4\, e^{-4t}

\dot{y}(t) = \mathcal{L}^{-1}\left[ X_2(s) \right] = 1.65\, e^{-3t} - 1.60\, e^{-4t}

(Take the derivative of y(t) to confirm.)

As a check, y(0) = 0.25 - 0.55 + 0.40 = 0.10 and \dot{y}(0) = 1.65 - 1.60 = 0.05, matching the given initial conditions.

► The general expression of the state-transition matrix can be obtained from

e^{At} = \mathcal{L}^{-1}\left[ (sI - A)^{-1} \right]
= \mathcal{L}^{-1}\left[ \frac{1}{s^2 + 7s + 12} \begin{bmatrix} s + 7 & 1 \\ -12 & s \end{bmatrix} \right]
= \begin{bmatrix} 4 e^{-3t} - 3 e^{-4t} & e^{-3t} - e^{-4t} \\ -12 e^{-3t} + 12 e^{-4t} & -3 e^{-3t} + 4 e^{-4t} \end{bmatrix}

► But what if t_0 \neq 0? Then t is just replaced with t - t_0:

e^{A(t - t_0)} =
\begin{bmatrix} 4 e^{-3(t - t_0)} - 3 e^{-4(t - t_0)} & e^{-3(t - t_0)} - e^{-4(t - t_0)} \\ -12 e^{-3(t - t_0)} + 12 e^{-4(t - t_0)} & -3 e^{-3(t - t_0)} + 4 e^{-4(t - t_0)} \end{bmatrix}
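
Finally, a numerical cross-check of the exercise result (a sketch assuming NumPy/SciPy): simulate the state-space model with the step input of magnitude 3 and the given initial conditions, and compare against the closed-form y(t).

    import numpy as np
    from scipy.signal import StateSpace, lsim

    A = [[0.0, 1.0], [-12.0, -7.0]]
    B = [[0.0], [1.0]]
    C = [[1.0, 0.0]]
    D = [[0.0]]

    t = np.linspace(0.0, 3.0, 301)
    u = 3.0 * np.ones_like(t)                # step of magnitude 3
    x0 = [0.10, 0.05]                        # y(0) = 0.10, ydot(0) = 0.05

    _, y_num, _ = lsim(StateSpace(A, B, C, D), U=u, T=t, X0=x0)
    y_closed = 0.25 - 0.55 * np.exp(-3.0 * t) + 0.40 * np.exp(-4.0 * t)
    print(np.allclose(y_num, y_closed, atol=1e-4))   # expected: True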