Introduction to Linear Systems and State Space
7/30/2019 Introduction to Linear Systems and State Space
4246 Linear System, xu-week1.docx
1. Introduction and Motivation
In this week, we will review classical control and highlight some of the shortcomings of the classical approach, which motivate the study of state-space models. To prepare for further study, we will also review vectors and matrices.
1.1 Introduction
Linear systems are usually described mathematically in one of two domains: the time domain or the frequency domain. The classical frequency-domain approach results in a system representation in the form of a transfer function, where the initial conditions are assumed to be zero. In the time-domain approach, the system representation is in the form of a differential equation. However, rather than working with a high-order differential equation, we will use a set of 1st-order differential equations to describe the equivalent high-order differential equation. Of course, if the original differential equation is of 4th order, then we will need four 1st-order differential equations to describe the same system. Using this 1st-order representation (essentially state equations), we will see that it provides a convenient time-domain representation that is the same for systems of any order. Furthermore, state-space descriptions do not assume zero initial conditions, and they allow for the analysis and design of system characteristics that are not possible with frequency-domain representations.
A control system is a system in which some physical quantities are controlled by regulating certain energy inputs.

A system is a group of physical components assembled to perform a specific function. It may be electrical, mechanical, thermal, biomedical, chemical, pneumatic or any combination of the above.

A physical quantity may be temperature, pressure, electrical voltage, mechanical position or velocity, liquid level, etc.
Some applications of automatic control:
- Space vehicle systems
- Missile guidance systems
- Aircraft autopilot systems
- Robotic systems
- Weapon control systems
- Power systems
- Home appliances: air-conditioners, refrigerators, dryers, microwave ovens, etc.
- Automatic assembly lines
- Chemical processes: controlling pressures, temperatures, humidity, pH values, etc.
- Manufacturing systems: CNC machines, machine tool control, EDM control, etc.
- Quality control and inspection of manufactured parts
Classical Control vs. Modern Control:
The following are some of the features of Classical Control Theory:
-
7/30/2019 Introduction to Linear Systems and State Space
2/81
4246 Linear System
xu-week1.docx
2
1. It uses the Laplace operator extensively in the description of a system.
2. It is based on an input/output relationship called the transfer function. In computing the transfer function of a system, all initial conditions are set to zero.
3. It was primarily developed for single-input-single-output systems, although extension to multi-input and multi-output systems is now possible.
4. It can only describe linear time-invariant differential equation systems.
5. It is based on methods like frequency response, root locus, etc.
6. Its formulation is not very well suited for digital computer simulation.
The following are some of the features of Modern Control Theory:
1. It uses the time-domain representation of the system known as the state-space representation.
2. It can be used to describe multi-input-multi-output systems. Extensions to linear time-varying, nonlinear and distributed systems are straightforward.
3. The formulation incorporates initial conditions into the description.
4. It is based on a time-domain approach and is therefore very well suited for computer simulations.
5. It provides a complete description of the system.
Open-loop control systems: typically represented in block diagram form. The output has no effect on the control action. Examples: washing machines, traffic lights, microwave ovens, dryers, toasters, etc.
Closed-loop control systems: the actuating signal, and hence the output, depends on the input and the current state of the output.
-
7/30/2019 Introduction to Linear Systems and State Space
3/81
4246 Linear System
xu-week1.docx
3
Examples are servo-controlled motors, missiles, robots, etc.
Laplace transform:

Given a function f(t) defined for t ≥ 0, let s be the complex variable s = σ + jω, where σ and ω are real variables. Then the Laplace transform of f(t), denoted F(s), is

F(s) = L[f(t)] = ∫_0^∞ f(t) e^{-st} dt
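As a quick numerical sanity check of this definition (a sketch using NumPy; the test function f(t) = e^{-2t}, whose transform is 1/(s + 2), is chosen only for illustration, and the infinite upper limit is truncated at t = 50):

```python
import numpy as np

# Numerical check of F(s) = ∫_0^∞ f(t) e^{-st} dt for f(t) = e^{-2t},
# whose Laplace transform is known to be 1/(s + 2).
def trapezoid(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

t = np.linspace(0.0, 50.0, 200001)   # truncation of the infinite upper limit
f = np.exp(-2.0 * t)

F = {s: trapezoid(f * np.exp(-s * t), t) for s in (1.0, 3.0, 5.0)}
for s, value in F.items():
    print(s, value, 1.0 / (s + 2.0))   # numeric vs. exact
```

The truncation is harmless here because the integrand decays exponentially.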
Transfer Function:
Consider the system given by

a_0 d^n c(t)/dt^n + a_1 d^{n-1} c(t)/dt^{n-1} + ... + a_{n-1} dc(t)/dt + a_n c(t)
    = b_0 d^m r(t)/dt^m + b_1 d^{m-1} r(t)/dt^{m-1} + ... + b_{m-1} dr(t)/dt + b_m r(t)

with n ≥ m.
The transfer function of a linear, time-invariant, differential-equation system is defined as

G(s) = C(s)/R(s) = L[output]/L[input]
     = (b_0 s^m + b_1 s^{m-1} + ... + b_{m-1} s + b_m) / (a_0 s^n + a_1 s^{n-1} + ... + a_{n-1} s + a_n)
Poles of a system: The roots of the denominator of the transfer function G(s) arecalled the poles of the system.
e.g.,

G(s) = (s^2 + 2s + 2) / [s (s^2 + 3s + 2)]

The poles of the system are s = 0, s = -1 and s = -2.
When is a SISO system stable? Ans: it depends on the definition of stability.

Simplest answer: all poles have negative real parts. However, this leads to some problems, which we will see below.
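Numerically, the poles of the example above come straight from the roots of its denominator polynomial (a small NumPy sketch):

```python
import numpy as np

# Poles of G(s) = the roots of its denominator.  For the example above the
# denominator is s (s^2 + 3s + 2) = s^3 + 3s^2 + 2s.
den = [1.0, 3.0, 2.0, 0.0]            # coefficients, highest power first
poles = np.roots(den)
print(np.round(np.sort(poles.real), 6))   # poles at -2, -1 and 0
```

Note the pole at s = 0: by the "negative real parts" test, this system is not stable.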
1.2 Definitions
Dynamic System: A dynamic system (or a system with memory) is one whose output depends on its own values at earlier points in time. A system whose output depends only on the current time and the current input is memoryless.
Dynamic systems most often occur as differential equations (continuous-time) or as difference equations (discrete-time), because closed-form solutions of such systems require integration or
summation of a quantity over past time. The system equation will be algebraic if the system is memoryless.
Causality: A system is said to be causal if the value of the output at time t0 depends on the values of the input and output for all time t up to and including t0 (i.e., for t ≤ t0).
Systems that are not causal are called anticipatory, because they depend on future values of the input for use at the current time, which is impossible in real time. Physical systems obviously must be causal, but non-causal systems are often used in data filtering and image processing applications. In those cases, an entire data set is first acquired, so that at any given processing step, data "ahead of" the present point is already available.
If a system is causal, then its transfer function will be proper, i.e. the degree of its numerator polynomial must be no greater than that of its denominator polynomial.
Time-invariance: A time-invariant system is one whose output depends only on the difference between the initial time and the present time, i.e. y = y(t - t0), where t is the present time and t0 is the initial time. Otherwise, the system is time-varying.

Time-varying systems are typically systems in which time appears as an explicit variable in the differential, difference or algebraic equations that describe the system. A time-invariant differential equation must, by necessity, be one with constant coefficients.
Example: For a system with output y and input u,

y'' + 6y' + 8y = 2u'' + 6u' + u   is time-invariant

y'' + ay' + by = cu'' + du' + u   is time-invariant, provided a, b, c and d are constants

y'' + a(t)y' + b(t)y = cu'' + du' + u   is time-varying
Linearity: A linear system is one that satisfies the homogeneity and additivity conditions. A homogeneous system is one for which, given y(t) = S(u(t)), S(au(t)) = aS(u(t)) for all a and u; an additive system is one for which S(u1 + u2) = S(u1) + S(u2).

Linear systems are thus systems for which the principle of superposition holds, i.e. the effect of each input can be considered independently of the others. In systems with memory, the system is linear if the nth derivative depends in a linear way on each of the lower derivatives, and also in a linear way on the input, if any. If the coefficients of the differential equation are functions of the dependent variable, then a nonlinear differential equation results.
Example:

y'' + ay' + by = cu'' + du' + u         is linear time-invariant (LTI)

y'' + a(t)y' + b(t)y = cu'' + du' + u   is linear time-varying

y'' + 2yy' + b(t)y = cu'' + du' + u     is nonlinear, time-varying
1.3 Motivation for State-Space model
Consider an unstable plant G(s) = 1/(s - 1). In order to design a compensator for such a plant, one might use the compensator H(s) = (s - 1)/(s + 1), as shown in the block diagram of Figure 2.1:

V(s) -> [H(s)] -> U(s) -> [G(s)] -> Y(s)

so that, from V(s) to Y(s),

GH(s) = G(s)H(s) = 1/(s + 1)
Now, as far as the input/output behaviour is concerned, the transfer function from V(s) to Y(s) is stable. In fact, the solution is

y(t) = ∫_0^t e^{-(t - τ)} v(τ) dτ = e^{-t} * v(t)

which looks completely stable.
But this controller will not work, because the system will tend to saturate or burn out. To see this, we need to analyze the complete representation of the cascaded system, not just the input and output. In fact, the actual system consists of the two cascaded subsystems shown below:
The plant: Y(s)/U(s) = 1/(s - 1), i.e.

sY(s) - Y(s) = U(s)
y'(t) - y(t) = u(t)

Let x2(t) = y(t); therefore we get

x2'(t) = x2(t) + u(t)    (2.1)
Consider the compensator:
U(s) = [(s - 1)/(s + 1)] V(s) = V(s) - [2/(s + 1)] V(s)

Let X1(s) = [2/(s + 1)] V(s), so that u(t) = v(t) - x1(t), and

x1'(t) = -x1(t) + 2v(t)    (2.2)
Multiplying throughout by e^t, we get

d/dt [e^t x1(t)] = 2 e^t v(t)
e^t x1(t) - x1(0) = 2 ∫_0^t e^τ v(τ) dτ
x1(t) = 2 e^{-t} ∫_0^t e^τ v(τ) dτ + e^{-t} x1(0)
x1(t) = 2 ∫_0^t e^{-(t - τ)} v(τ) dτ + e^{-t} x1(0)
x1(t) = 2 e^{-t} * v(t) + e^{-t} x1(0)    (2.3)
From (2.1), x2'(t) = x2(t) + u(t).

The solution is given by multiplying by e^{-t}:

d/dt [e^{-t} x2(t)] = e^{-t} x2'(t) - e^{-t} x2(t) = e^{-t} u(t)

e^{-t} x2(t) - x2(0) = ∫_0^t e^{-τ} u(τ) dτ
x2(t) = e^t ∫_0^t e^{-τ} u(τ) dτ + e^t x2(0)
x2(t) = e^t ∫_0^t e^{-τ} [v(τ) - x1(τ)] dτ + e^t x2(0)

If you work this through, you will get

x2(t) = e^t x2(0) - (1/2)(e^t - e^{-t}) x1(0) + e^{-t} * v(t)    (2.4)
To recap how x1 was defined: from U(s)/V(s) = (s - 1)/(s + 1), let

X1(s) = [2/(s + 1)] V(s)

so that U(s) = V(s) - X1(s). Now sX1(s) = -X1(s) + 2V(s), i.e.

x1'(t) = -x1(t) + 2v(t)

and, multiplying by e^t,

d/dt [e^t x1(t)] = 2 e^t v(t)
Therefore, because of the term e^t, unless the initial conditions can be guaranteed to be zero, or x1(0) = 2 x2(0), the output will grow without bound! In practice, it is impossible to guarantee such initial conditions, as noise invariably exists in all physical systems, and any small noise will cause the system to go unstable.
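A short simulation makes this concrete (a sketch: forward-Euler integration of the two state equations above, with an arbitrarily chosen tiny initial disturbance standing in for noise):

```python
import numpy as np

# Forward-Euler simulation of the cascaded system's internal states
#   x1' = -x1 + 2 v,   x2' = x2 + v - x1,   y = x2.
# The input/output map V -> Y is the stable 1/(s+1), yet a tiny nonzero
# initial condition on x2 (modelling noise) excites the unstable e^t mode.
dt, T = 1e-3, 20.0
x1, x2 = 0.0, 1e-6        # x2(0) = 1e-6 plays the role of a small disturbance
v = 1.0                   # constant input
for _ in range(int(T / dt)):
    x1, x2 = x1 + dt * (-x1 + 2.0 * v), x2 + dt * (x2 + v - x1)
print(x2)   # dominated by roughly 1e-6 * e^20, far from the "stable" answer 1
```

With x2(0) = 0 exactly, the output would settle at 1; the 10^-6 disturbance instead produces an output in the hundreds by t = 20.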
If you had used Laplace transforms to solve the equations, things would be much easier! From (2.2),
x1'(t) = -x1(t) + 2v(t)
Taking the Laplace Transform, we get,
sX1(s) - x1(0) = -X1(s) + 2V(s)

i.e.

X1(s) = x1(0)/(s + 1) + [2/(s + 1)] V(s)    (2.5)
Also, x2'(t) = x2(t) + u(t), so that

x2'(t) = -x1(t) + x2(t) + v(t)

Taking the Laplace transform,

sX2(s) - x2(0) = -X1(s) + X2(s) + V(s)
(s - 1) X2(s) = -X1(s) + V(s) + x2(0)
Substituting (2.5),

(s - 1) X2(s) = x2(0) - x1(0)/(s + 1) - [2/(s + 1)] V(s) + V(s)
             = x2(0) - x1(0)/(s + 1) + [(s - 1)/(s + 1)] V(s)

so that

X2(s) = x2(0)/(s - 1) - x1(0)/[(s + 1)(s - 1)] + [1/(s + 1)] V(s)    (2.6)
From the inverse Laplace table, the solution is

x2(t) = e^t x2(0) - (1/2)(e^t - e^{-t}) x1(0) + e^{-t} * v(t)
Q: What went wrong in the previous analysis, which concluded that the system is stable? Ans: Only the input/output was considered, not the whole system (i.e., the internal behavior).
We are able to see why the solution is unbounded because we write the equations as

x'(t) = f(x, v)

This is called a state-space equation, which is a system of 1st-order differential equations. Note that the variables x and v are written in bold to signify that they are vectors, i.e.
x = [x1; x2; ...; xn]

where x1, x2, ..., xn are state variables.
In the example above, we see that we cannot just look at the external (input/output) behavior of the system alone; we need to know the internal behavior as well. Only by knowing the internal behavior will we be able to understand why the system goes unstable. This is the advantage of the state-space approach: it gives us the internal behavior of the system. Compared to the transfer function approach, however, it is more difficult to design and implement. Note that any controller that is obtained using state space can also be found using the transfer function approach, and usually more easily too. However, state space gives us insight into why a controller is designed as it is, which we may not know if we only use the transfer function approach.
1.4 Review of vector and matrix properties
Linear Independence of Vectors: A set of vectors x1, x2, ..., xm ∈ R^n is said to be linearly dependent if and only if there exist scalars ci, not all zero, such that

c1 x1 + c2 x2 + ... + cm xm = 0

If the only set of ci for which the above equation holds is c1 = c2 = ... = cm = 0, then the set of vectors x1, x2, ..., xm ∈ R^n is said to be linearly independent.

Alternatively, a set of vectors is linearly dependent if and only if some xi can be expressed as a linear combination of the xj (j = 1, 2, ..., m, j ≠ i).
Example:

c1 [1; 2] + c2 [2; 1] = [0; 0]  ⟹  c1 = c2 = 0

so these two vectors are linearly independent.
We introduce the notation

[x1 x2 ... xm] [c1; c2; ...; cm] = c1 x1 + c2 x2 + ... + cm xm = 0
Hence, if x1, ..., xm are linearly independent, then c1 = c2 = ... = cm = 0.
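In computation, this test reduces to a rank check on the matrix whose columns are the vectors (a NumPy sketch; the particular vectors are chosen only for illustration):

```python
import numpy as np

# x1, ..., xm are linearly independent exactly when the matrix [x1 x2 ... xm]
# has rank m, i.e. X c = 0 forces c = 0.
X_indep = np.array([[1.0, 2.0],
                    [2.0, 1.0]])    # columns [1; 2] and [2; 1]
X_dep = np.array([[1.0, 2.0],
                  [2.0, 4.0]])      # second column = 2 * first column
print(np.linalg.matrix_rank(X_indep))   # 2 -> independent
print(np.linalg.matrix_rank(X_dep))     # 1 -> dependent
```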
Matrices:
Transpose of a matrix A:

If A ∈ R^{m×n} is given by

A = [a11 a12 ... a1n; a21 a22 ... a2n; ... ; am1 am2 ... amn]

then the transpose of A, denoted A^T ∈ R^{n×m}, is given by

A^T = [a11 a21 ... am1; a12 a22 ... am2; ... ; a1n a2n ... amn]

Note that (A + B)^T = A^T + B^T and (AB)^T = B^T A^T.
Symmetric Matrix: A matrix A is said to be symmetric if A = A^T.

Skew-symmetric Matrix: A matrix A is said to be skew-symmetric if
A = -A^T.

Remarks:

(i) Given any square matrix A, A + A^T is symmetric and A - A^T is skew-symmetric.

(ii) For any matrix A, B = A^T A is a symmetric matrix.
Properties of the determinant of a square matrix:

For A = [a1 a2; b1 b2]:

det(A) = a1 b2 - a2 b1

For a 3×3 matrix:

det [a1 a2 a3; b1 b2 b3; c1 c2 c3]
    = a1 det [b2 b3; c2 c3] - a2 det [b1 b3; c1 c3] + a3 det [b1 b2; c1 c2]
1. If any two rows or any two columns of a square matrix A are linearly dependent, then det(A) = 0.
2. If det(A) = 0, A is a singular matrix (its inverse does not exist); otherwise, it is nonsingular.
3. det(A) = det(A^T).
4. det(AB) = det(A) det(B) if A and B are both square matrices.
5. det(A) = λ1 λ2 ... λn, where the λi are the eigenvalues of A.
6. If a matrix is singular, at least one of its eigenvalues is zero. (Why?)
7. det(A^{-1}) = 1/det(A).
8. If two rows (or two columns) of the determinant are interchanged, only the sign of the determinant changes.
9. If a row (or a column) is multiplied by a scalar k, then the determinant is multiplied by k.
10. If A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{m×n} and D ∈ R^{m×m}, then
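Several of these properties are easy to spot-check numerically (a sketch with random matrices; the seed and sizes are arbitrary):

```python
import numpy as np

# Numerical spot-checks of the determinant properties above.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Bm = rng.standard_normal((3, 3))

ok_transpose = np.isclose(np.linalg.det(A), np.linalg.det(A.T))
ok_product = np.isclose(np.linalg.det(A @ Bm),
                        np.linalg.det(A) * np.linalg.det(Bm))
ok_eigs = np.isclose(np.linalg.det(A), np.prod(np.linalg.eigvals(A)).real)

# Block upper-triangular matrix [A B; 0 D]
D = rng.standard_normal((2, 2))
Bblk = rng.standard_normal((3, 2))
M = np.block([[A, Bblk], [np.zeros((2, 3)), D]])
ok_block = np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(D))

print(ok_transpose, ok_product, ok_eigs, ok_block)   # all True
```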
det [A B; 0 D] = det [A 0; C D] = det(A) det(D)
Rank of a matrix:

The rank of A is the maximum number of linearly independent columns or rows of A. If A ∈ R^{m×n}, then

rank(A) ≤ min(m, n)
rank(AB) ≤ min(rank(A), rank(B))
Remarks:
(i) If rank(A) is equal to the number of columns or the number of rows of A, then A is said to be of full rank.

(ii) If A is square and of full rank, then A is nonsingular.
Derivatives:

1. If x ∈ R^n, then

d/dt x(t) = [dx1(t)/dt; dx2(t)/dt; ...; dxn(t)/dt]

2. dA/dt = [da_ij/dt] and ∫ A dt = [∫ a_ij dt], element by element.

3. If J(x) is a scalar function of x ∈ R^n, then
∂J/∂x = [∂J/∂x1; ...; ∂J/∂xn]

and the second-derivative (Hessian) matrix is

∂²J/∂x² = [∂²J/∂x1² ... ∂²J/∂x1∂xn; ... ; ∂²J/∂xn∂x1 ... ∂²J/∂xn²]
Trace:
If A ∈ R^{n×n}, then Tr(A) = trace of A = Σ_{i=1}^n a_ii.

Tr(A + B) = Tr(A) + Tr(B)
Tr(AB) = Tr(B^T A^T) = Tr(BA) = Tr(A^T B^T)
Eigenvalues and Eigenvectors:
A scalar λ (which can be complex) is called an eigenvalue of A if there exists a nonzero vector x (which can be complex) such that Ax = λx. Any nonzero vector x satisfying Ax = λx is called an eigenvector of A associated with the eigenvalue λ.

If A ∈ R^{n×n}, the determinant

det(λI - A)

is called the characteristic polynomial of A. It is an nth-degree polynomial in λ. The characteristic equation is given by

det(λI - A) = 0

If the determinant det(λI - A) is expanded, the characteristic equation becomes

det(λI - A) = λ^n + a1 λ^{n-1} + ... + an = 0

The n roots of the characteristic equation are called the eigenvalues of A.

Note: eigenvectors are unique only up to a nonzero scalar multiple.
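A small numerical illustration (the matrix is an arbitrary companion-form example):

```python
import numpy as np

# For A below, det(λI - A) = λ^2 + 3λ + 2, so the eigenvalues are -1 and -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
char_poly = np.poly(A)            # coefficients of det(λI - A), highest first
eigs = np.sort(np.linalg.eigvals(A).real)
print(np.round(char_poly, 6))     # coefficients 1, 3, 2
print(np.round(eigs, 6))          # eigenvalues -2, -1
```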
Cayley-Hamilton Theorem:
Suppose A ∈ R^{n×n} and det(λI - A) = λ^n + a1 λ^{n-1} + ... + an = 0 is the characteristic equation of A. Then

A^n + a1 A^{n-1} + ... + an I = 0
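The theorem is easy to verify numerically on a small example (an arbitrary 2×2 matrix):

```python
import numpy as np

# Cayley-Hamilton check: A below has characteristic polynomial λ^2 - 5λ - 2
# (trace 5, determinant -2), so it must satisfy A^2 - 5A - 2I = 0.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
residual = A @ A - 5.0 * A - 2.0 * np.eye(2)
print(residual)   # the zero matrix
```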
Positive and Nonnegative definite matrices.
A square matrix A ∈ R^{n×n} is said to be positive definite if for all x ∈ R^n, x ≠ 0, x^T A x > 0.

A square matrix A ∈ R^{n×n} is said to be non-negative definite (or positive semi-definite) if for all x ∈ R^n, x ≠ 0, x^T A x ≥ 0.
Remarks:

(i) For A to be positive definite, all leading principal minors must be positive; i.e.,

a11 > 0,  det [a11 a12; a21 a22] > 0,  det [a11 a12 a13; a21 a22 a23; a31 a32 a33] > 0, ...

(ii) A symmetric matrix A is positive definite if and only if its eigenvalues are positive. It is positive semi-definite if all its eigenvalues are non-negative.

(iii) If D ∈ R^{n×m}, then A = D D^T is always positive semi-definite. Furthermore, it is positive definite if and only if D has full rank n.
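The two tests in remarks (i) and (ii), and the D D^T construction in (iii), can be checked numerically (a sketch with arbitrarily chosen matrices):

```python
import numpy as np

# Positive-definiteness checks on a symmetric example matrix.
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

minors_positive = bool(A[0, 0] > 0 and np.linalg.det(A) > 0)  # leading minors
eigs_positive = bool(np.all(np.linalg.eigvalsh(A) > 0))       # eigenvalue test
print(minors_positive, eigs_positive)   # both True

# D D^T is always positive semi-definite (remark (iii))
D = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
psd = bool(np.all(np.linalg.eigvalsh(D @ D.T) >= -1e-12))
print(psd)   # True
```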
Negative and Non-positive Definite Matrices:

A square matrix A ∈ R^{n×n} is said to be negative definite if for all x ∈ R^n, x ≠ 0, x^T A x < 0.
xu-week2
2 System Modelling
In this chapter, we will look at the physical modeling of systems, i.e. using known laws of physics to derive the underlying state equations that describe the system. Whether the system is mechanical, electrical, chemical or biological, a mathematical description should be written in a unified manner, so that a single theory of stability, control or analysis can be applied to the model. The mathematical description that we will use is a set of equations known as the state equations.
2.1 Modeling of dynamical system
Example 2.1 Mechanical System Equations
Derive the equations of motion for the spring-mass-damper system shown in Figure 2.1. In the system, the mass is M, the spring constant is k and the damping constant is B. An external force f is applied to the mass.
Solution:

Applying Newton's law, F = ma, we get

M d²z(t)/dt² = f(t) - B dz(t)/dt - k z(t)
This is a second-order ODE. If we want to write it as a set of 1st-order ODEs, we can choose

x1 = z,  x2 = z'

then

x1' = z' = x2
x2' = z'' = (1/M)(f(t) - B x2 - k x1)

i.e.

x1' = x2
x2' = -(k/M) x1 - (B/M) x2 + (1/M) f
or, in matrix form,

[x1'; x2'] = [0 1; -k/M -B/M] [x1; x2] + [0; 1/M] f    (3.1)
If we are interested in finding the position of the mass, then the output y(t) is y(t) = x1(t), or
(Figure: the spring-mass-damper system, with mass M, spring constant k, damper B, displacement z and applied force f.)
y = [1 0] [x1; x2]

(Q: what is the physical meaning of x2?)
More compactly, we can write

x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)    (3.2)
where

A = [0 1; -k/M -B/M],  B = [0; 1/M],  C = [1 0],  D = 0

Equation (3.2) is known as a state-space equation, or simply the state equations, and the variables x1 and x2 are known as state variables. In the equation, A is known as the state matrix, B the input matrix, C the output matrix and D the feedthrough matrix.
Definition: State Equation: The state equations of a system are the set of n first-orderdifferential equations, where n is the number of independent states.
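As a numerical sketch of (3.1)-(3.2), we can build the matrices and step the state equation forward (the parameter values M = 1, k = 2 and damping 0.5 are assumed purely for illustration; the damping constant is named B_d below to avoid clashing with the input matrix B):

```python
import numpy as np

# Spring-mass-damper state equations with assumed illustrative values.
M_, k_, B_d = 1.0, 2.0, 0.5
A = np.array([[0.0, 1.0],
              [-k_ / M_, -B_d / M_]])
B = np.array([[0.0], [1.0 / M_]])
C = np.array([[1.0, 0.0]])           # y = x1 = position z

# Forward-Euler response to a unit step force f = 1
dt, x = 1e-3, np.zeros((2, 1))
for _ in range(int(20.0 / dt)):
    x = x + dt * (A @ x + B * 1.0)
y = (C @ x).item()
print(y)   # settles near the static deflection f/k = 0.5
```

The steady state follows from setting x' = 0 in (3.1): the spring alone balances the applied force.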
If we are interested in the force acting on the spring, then the output y(t) becomes

y(t) = k x1(t) = [k 0] [x1; x2] + 0·f

If we are interested in both the force acting on the dashpot and the force acting on the spring, then our output becomes

y1(t) = B x2(t) = [0 B] [x1; x2] + 0·f
y2(t) = k x1(t) = [k 0] [x1; x2] + 0·f

or

y(t) = [y1(t); y2(t)] = [0 B; k 0] [x1; x2] + [0; 0] f

or, if we want to know the total force acting on the mass, then the output is

y(t) = [-k -B] [x1; x2] + (1) f
Example 2.2: Electrical System Equations: Series RLC Circuit

(Figure: series RLC circuit with input voltage vi(t), resistance R, inductance L, capacitance C, capacitor voltage v(t) and loop current i(t).)

It will be shown later that the choice of state variables is not unique. In general, the number of state variables will depend on the number of energy-storage elements in the system. Only independent state variables can be chosen. In this example, there are two energy-storage elements in the circuit, the inductor and the capacitor. Therefore we can choose i(t) and v(t) as our state variables.
The circuit equations are

L di/dt + R i + v = vi
C dv/dt = i

or, rearranging,

[di/dt; dv/dt] = [-R/L -1/L; 1/C 0] [i(t); v(t)] + [1/L; 0] vi
If we now let x1 = i, x2 = v and u = vi, then we can write the state equation as

[x1'; x2'] = [-R/L -1/L; 1/C 0] [x1; x2] + [1/L; 0] u
or, in a more compact form,

x' = A x + b u

where x is an n×1 state vector, A is an n×n matrix and b is an n×1 vector.
What about the output quantity? If, in this circuit example, we are interested in the output voltage across the capacitor, then we let the output y(t) be

y(t) = [0 1] [x1; x2] = x2 = v
again, in a more compact form, we get

y = c^T x
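A quick numerical sketch of the RLC state equations (component values R = 1, L = 0.5, C = 0.25 are assumed only for illustration; the capacitance is named Cap to avoid clashing with an output matrix C):

```python
import numpy as np

# Series RLC state matrices with x1 = i, x2 = v and u = vi.
R, L, Cap = 1.0, 0.5, 0.25
A = np.array([[-R / L, -1.0 / L],
              [1.0 / Cap, 0.0]])
b = np.array([[1.0 / L], [0.0]])
c = np.array([[0.0, 1.0]])          # y = x2 = capacitor voltage

# DC sanity check: with a constant input u = 1, the steady state A x + b u = 0
# gives zero inductor current and capacitor voltage equal to the source.
x_ss = np.linalg.solve(A, -b * 1.0)
print(x_ss.flatten())   # i = 0, v = 1
```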
Example 2.3: DC Motor

On the electrical side, by KVL,

e - R i = vb          (KVL)
τ = ki i              (torque-current relation)
vb = k2 ω             (back emf, where ω = dθ/dt)

On the mechanical side, using Newton's law,

J dω/dt = τ
Hence, we have

J dω/dt = ki i = (ki/R)(e - vb) = (ki/R)(e - k2 ω)

i.e.

J dω/dt = -(ki k2 / R) ω + (ki / R) e

or

dω/dt = -[ki k2 / (J R)] ω + [ki / (J R)] e

Since ω = dθ/dt, we therefore have

θ'' = -[ki k2 / (J R)] θ' + [ki / (J R)] e
or

(Figure: DC motor schematic, with source voltage e, armature resistance R, back emf vb and inertia J.)
[θ'; ω'] = [0 1; 0 -ki k2/(J R)] [θ; ω] + [0; ki/(J R)] e
The above examples illustrate the form of the final expressions for a system, i.e. the state-space equations. Note that the general form is
x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)
where A ∈ R^{n×n}, B ∈ R^{n×r}, C ∈ R^{m×n}, D ∈ R^{m×r}, and x ∈ R^{n×1}, u ∈ R^{r×1}.
Both x and u are functions of time, i.e. we should write x(t) and u(t); however, for the sake of notational simplicity, we normally write them as x and u. In the above cases, A, B, C and D are constant matrices, and the state equation describes a linear time-invariant (LTI) system.
If A, B, C, D are themselves time-varying, then the system is known as a linear time-varying system, and the state equations are given by

x'(t) = A(t) x(t) + B(t) u(t)
y(t) = C(t) x(t) + D(t) u(t)
A more general form of the above is

x'(t) = f(x, u, t) = [f1(x, u, t); ...; fn(x, u, t)]
y(t) = g(x, u, t)

This represents a class of nonlinear time-varying state equations.
2.2 State Concepts
State: The state of a system is a mathematical structure containing a set of n variables x1(t), x2(t), ..., xn(t), called the state variables, such that knowledge of these variables at t = t0, together with the input for t > t0, completely determines the behavior of the system for any time t > t0.

There is a minimum set of state variables required to represent the system accurately. The initial starting time is normally taken as zero.

Note: state variables need not be physically observable or measurable quantities (in fact, in most cases they are not).
Example:

dx1/dt = -a x1 + a u(t),  x1(t0) = x10

x1(t) = e^{-a(t - t0)} x10 + a ∫_{t0}^t e^{-a(t - τ)} u(τ) dτ    (2.3)
Here x1(t) is the state variable; given the input u(t) and the initial value x1(t0), Eq. (2.3) completely defines the trajectory of x1(t) for all time t > t0.
State Variable: The state variables of a dynamical system are the variables making up the smallest set of variables that determine the state of the system. If at least n variables x1, ..., xn are needed to completely describe the behavior of a dynamical system (so that once the input is given for t ≥ t0 and the initial state at t = t0 is specified, the future state of the system is completely determined), then those n variables are a set of state variables. As noted above, the state variables need not be physically measurable or observable quantities. Variables that do not represent physical quantities, and those that are neither measurable nor observable, can be chosen as state variables.
State Vector: If n state variables xi(t), i = 1, ..., n, are needed to completely describe the behavior of a given system, then these n state variables can be considered as the elements or components of an n-dimensional vector x(t). Such a vector x(t) is called a state vector.
State Space: State space is defined as the n-dimensional space in which the components of the state vector (i.e., the state variables) represent its coordinate axes. E.g. the phase plane.
State Trajectory: The path produced in the state space by the state vector x(t) as it changes with the passage of time. E.g. a phase trajectory.
The first step in applying these definitions is the selection of the state variable, given a physicalsystem. NOTE that the state variables are NOT unique.
What is the relationship between representations of the same system that use different sets of state variables?
Now, the state of a system at any given time can be seen as a point in the state space. Suppose we set up three coordinate systems (U, I, II). Let a1, a2, a3 ∈ R^3 be the unit vectors of the axes of coordinate system I, and b1, b2, b3 ∈ R^3 be the unit vectors of coordinate system II.

(Figure: a point P expressed in coordinate frames I and II, with coordinates (x1, x2, x3) and (x̄1, x̄2, x̄3) respectively.)

Then the point P can be described using coordinate system I as
O1P = x1 a1 + x2 a2 + x3 a3 = [a1 a2 a3] [x1; x2; x3]
Alternatively, P can also be described using coordinate system II as

O1P = x̄1 b1 + x̄2 b2 + x̄3 b3 = [b1 b2 b3] [x̄1; x̄2; x̄3]
The above equations show that

[a1 a2 a3] [x1; x2; x3] = [b1 b2 b3] [x̄1; x̄2; x̄3]

or

[x̄1; x̄2; x̄3] = [b1 b2 b3]^{-1} [a1 a2 a3] [x1; x2; x3] = T [x1; x2; x3]

provided that the inverse exists. The equation shows that the coordinates of a point in one coordinate system can always be obtained from those in another through a nonsingular square matrix T! Extending the argument to n dimensions,

x̄ = T x

relates the coordinates of a point in one coordinate system to those in another. We now look at the effect of a coordinate change when applied to the state-space model.
x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

Since x = T^{-1} x̄,

T^{-1} x̄'(t) = A T^{-1} x̄(t) + B u(t)
y(t) = C T^{-1} x̄(t) + D u(t)

or

x̄'(t) = T A T^{-1} x̄(t) + T B u(t)
y(t) = C T^{-1} x̄(t) + D u(t)
Note that these two sets of equations are state-space equations of the same system! In general, one can define as many different coordinate systems as desired by specifying different matrices T (as long as each is nonsingular), so it is obvious that there are many different state-space representations of the same
system. In summary, the state-space representation of a system is non-unique, and we have an infinite choice of state vectors!
Example:

[x1'; x2'] = [-1 2; 0 -2] [x1; x2] + [0; 1] u

y = [1 0] [x1; x2]
Let

[x̄1; x̄2] = [2 0; 0 5] [x1; x2]
.
x
x
x
x
x
x
1
2
1
2
1
2
1
0 2
0
5
05 0
45
u
y
Since there are infinitely many representations of the same physical system, a natural question is whether some representations are more useful or more insightful than others. It turns out that there are: e.g. controllable canonical forms, observable canonical forms and diagonal forms, as we shall see later.
2.3 Selection of state variables
Consider the following SISO nth-order system:

y^(n) + a_n y^(n-1) + ... + a_2 y' + a_1 y = u

Now, if we know the values of all the initial conditions for y and its derivatives, then we can find the future behaviour of the system completely. (Recall the definition of state!) Hence, we can take y and all its derivatives as the state variables. We therefore define the state variables as

x1 = y
x2 = y'
...
xn = y^(n-1)

Then we can rewrite the equation as:
x1' = x2
x2' = x3
...
x_{n-1}' = xn
xn' = -a1 x1 - a2 x2 - ... - an xn + u

In matrix form, we get x'(t) = A x(t) + B u(t), where

A = [0 1 0 ... 0;
     0 0 1 ... 0;
     ...;
     0 0 0 ... 1;
     -a1 -a2 -a3 ... -an],   B = [0; 0; ...; 0; 1]

The output is

y = [1 0 0 ... 0] x,  or  y = c^T x

Example (from Ogata): For

y''' + 6y'' + 11y' + 6y = 6u

choose

x1 = y,  x2 = y',  x3 = y''

then

x1' = x2
x2' = x3
x3' = -6x1 - 11x2 - 6x3 + 6u

i.e.
[x1'; x2'; x3'] = [0 1 0; 0 0 1; -6 -11 -6] [x1; x2; x3] + [0; 0; 6] u

y = [1 0 0] [x1; x2; x3]    (3.4)
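As a numerical check, the companion matrix of (3.4) should carry the characteristic roots of the original ODE (a small NumPy sketch):

```python
import numpy as np

# Companion-form matrices of (3.4) for y''' + 6y'' + 11y' + 6y = 6u.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
B = np.array([[0.0], [0.0], [6.0]])
C = np.array([[1.0, 0.0, 0.0]])

# The eigenvalues of A are the roots of s^3 + 6s^2 + 11s + 6 = (s+1)(s+2)(s+3).
eigs = np.sort(np.linalg.eigvals(A).real)
print(np.round(eigs, 6))   # -3, -2, -1
```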
Since state variables are not unique, someone else using a different representation can come up with different state variables that describe the same system. Of course, the solution will be the same.
Example: For the same system above, we can choose

x1 = y,  x2 = y',  x3 = y'' + y

so that

x1' = x2
x2' = -x1 + x3
x3' = -10x2 - 6x3 + 6u

or

[x1'; x2'; x3'] = [0 1 0; -1 0 1; 0 -10 -6] [x1; x2; x3] + [0; 0; 6] u

y = [1 0 0] [x1; x2; x3]    (3.5)
Although equations (3.4) and (3.5) do not look the same, the solution is the same.
For the general differential equation

d^n y/dt^n + a_{n-1} d^{n-1} y/dt^{n-1} + ... + a1 dy/dt + a0 y(t)
    = bn d^n u/dt^n + ... + b1 du/dt + b0 u(t)

if we follow the same method to get the state equations, we will find that we need the derivatives of u(t) in the state equations. However, this will then not be in the standard format of (3.2). Instead, there are other commonly used formulations of the state equations. In such cases, the state variables are chosen so that the matrices A, B, C and D have particular forms. We call such standard forms canonical forms. We illustrate one such canonical form using an example.
Example:
Given a system described by the following differential equation, find an appropriate state spaceformulation.
d²y(t)/dt² + 2 dy(t)/dt + y(t) = 3 du(t)/dt + 2 u(t)    (3.6)
If we take the Laplace transform of (3.6), assuming zero initial conditions, we get

s² Y(s) + 2s Y(s) + Y(s) = 3s U(s) + 2 U(s)
so that the transfer function from U(s) to Y(s) is

G(s) = Y(s)/U(s) = (3s + 2)/(s² + 2s + 1)
We could rewrite this as

Y(s) = [(3s + 2)/(s² + 2s + 1)] U(s)

(this gives us back the nice form of the previous example).
Let

X(s) = [1/(s² + 2s + 1)] U(s),  so that  Y(s) = (3s + 2) X(s)

Then we get

s² X(s) + 2s X(s) + X(s) = U(s),  i.e.  x'' + 2x' + x = u

Choosing x1 = x and x2 = x', the state equations are

x1' = x2
x2' = -x1 - 2x2 + u

Also, the output is given by

y(t) = 3 x'(t) + 2 x(t) = 3 x2 + 2 x1
[x1'; x2'] = [0 1; -1 -2] [x1; x2] + [0; 1] u

y = [2 3] [x1; x2]
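This realization can be verified against the transfer function by evaluating C (sI - A)^{-1} B at a few test points (a NumPy sketch; D = 0 here):

```python
import numpy as np

# Check that the realization above reproduces G(s) = (3s + 2)/(s^2 + 2s + 1).
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[2.0, 3.0]])

agree = all(
    np.isclose((C @ np.linalg.inv(s * np.eye(2) - A) @ B).item(),
               (3 * s + 2) / (s ** 2 + 2 * s + 1))
    for s in (1.0 + 0j, 2j, -0.5 + 1j)
)
print(agree)   # True
```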
In general, for the differential equation

d^n y/dt^n + a_{n-1} d^{n-1} y/dt^{n-1} + ... + a1 dy/dt + a0 y(t)
    = bn d^n u/dt^n + ... + b1 du/dt + b0 u(t)

(Q: why is the right-hand side of order at most n?)
the transfer function is

Y(s)/U(s) = (bn s^n + b_{n-1} s^{n-1} + ... + b1 s + b0) / (s^n + a_{n-1} s^{n-1} + ... + a1 s + a0)
          = bn + [(b_{n-1} - a_{n-1} bn) s^{n-1} + ... + (b1 - a1 bn) s + (b0 - a0 bn)] / (s^n + a_{n-1} s^{n-1} + ... + a1 s + a0)
(Simulation diagram for the above realization: a chain of n integrators generates the states xn, ..., x1; the integrator outputs are fed back through gains a_{n-1}, a_{n-2}, ..., a0 and fed forward through gains bn, b_{n-1}, ..., b1, b0 to form the output y(t) from the input u(t).)
and a representation of the state equation is
x' = [0 1 0 ... 0;
      0 0 1 ... 0;
      ...;
      0 0 0 ... 1;
      -a0 -a1 -a2 ... -a_{n-1}] x + [0; 0; ...; 0; 1] u

y = [c0 c1 ... c_{n-1}] x + bn u
where c0 = b0 - a0 bn, c1 = b1 - a1 bn, ..., and c_{n-1} = b_{n-1} - a_{n-1} bn.
This is known as the controller canonical form. In this form, the feedback coefficients ai appear only in the final state equation. The significance of the controller canonical form will be made known later, when you study the controllability of a system. A simulation diagram of the controller canonical form is given in Figure 2.1. (Sometimes this diagram is preferable to the one given earlier, because here the output is taken directly from the states, while earlier there is a contribution from x_n', which has to be calculated.)
Figure 2.1: Simulation diagram for the controllable canonical form (a chain of n integrators with feedback gains a_{n-1}, a_{n-2}, ..., a0; the output y(t) is formed from the states through the gains c_{n-1}, ..., c1, c0, plus the direct term bn u(t).)
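A small helper makes the construction of the controller canonical form concrete (a sketch; the function name and coefficient ordering a = [a0, ..., a_{n-1}], b = [b0, ..., bn] are conventions chosen here, not notation from the notes):

```python
import numpy as np

def controller_canonical(a, b):
    """Controller canonical form for
    y^(n) + a[n-1] y^(n-1) + ... + a[0] y = b[n] u^(n) + ... + b[0] u,
    with a = [a0, ..., a_{n-1}] and b = [b0, ..., bn]."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)            # ones on the superdiagonal
    A[-1, :] = -np.asarray(a)             # last row: -a0, -a1, ..., -a_{n-1}
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0
    bn = b[n]
    C = np.array([[b[i] - a[i] * bn for i in range(n)]])   # ci = bi - ai bn
    D = np.array([[bn]])
    return A, B, C, D

# Example (3.6): y'' + 2y' + y = 3u' + 2u, i.e. a = [1, 2], b = [2, 3, 0]
A, B, C, D = controller_canonical([1.0, 2.0], [2.0, 3.0, 0.0])
print(A[-1])   # [-1. -2.]
print(C)       # [[2. 3.]]  (since bn = 0, ci = bi)
```

This reproduces the matrices found for example (3.6) above.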
If we define

Ā = A^T,  B̄ = C^T,  C̄ = B^T,  D̄ = D^T

then we get another realization, given by

x̄'(t) = Ā x̄(t) + B̄ u(t)
y(t) = C̄ x̄(t) + D̄ u(t)

This is the observable canonical form below:
x̄' = [0 0 ... 0 -a0;
      1 0 ... 0 -a1;
      0 1 ... 0 -a2;
      ...;
      0 0 ... 1 -a_{n-1}] x̄ + [c0; c1; ...; c_{n-1}] u

y = [0 0 ... 0 1] x̄ + bn u
The simulation diagram for the observable canonical form is shown in Figure 2.6.

Figure 2.6 Simulation diagram for the observable canonical form (integrator chain with input gains c0, c1, ..., c_{n-1}, bn and feedback gains -a0, -a1, ..., -a_{n-1}; diagram omitted here)
An important realization is the diagonal form. It can be shown that the diagonal form is best from the viewpoint of sensitivity, i.e. it is less sensitive to round-off errors, truncation errors and the finite word length of a computer.
The analog simulation for the diagonal realization follows from the partial-fraction expansion

    Y(s) = [B(s)/A(s)] U(s) = Σ_{i=1}^{n} [g_i / (s - λ_i)] U(s),   g_i = b_i c_i

(the corresponding diagram is a parallel bank of first-order blocks 1/(s - λ_i), each with gain c_i b_i, summed to give y; diagram omitted here)
Diagonal forms are useful from the viewpoint of sensitivity and also in solving the equations. The difficulty in solving state equations is the coupling between the state variables; in diagonal form we have a set of independent 1st-order differential equations, which can be solved easily. However, it is not always possible to obtain a diagonal form: it is possible if and only if A has n linearly independent eigenvectors.
2.4 Relation between Transfer Function and State Equation representations
It would be reasonable to ask how the state space description, in time domain, is related to thetransfer function representation of a system. In this section, we will show the relationship
between the system matrices (A,B, C, D) and the transfer function. Since the transfer functiontypically refers to a single-input-single output system, we will consider state-space representations
that are single-input-single-output, i.e. for the case where B ∈ R^{n×1}, C ∈ R^{1×n}, D ∈ R^{1×1}.
Given a system with transfer function given by

    G(s) = Y(s)/U(s)

and the state-space description given by

    ẋ(t) = A x(t) + B u(t)
    y(t) = C x(t) + D u(t)        (11.1)

From the state equation (11.1), taking the Laplace transform, we get

    sX(s) - x(0) = A X(s) + B U(s)
    Y(s) = C X(s) + D U(s)        (11.2)

Since the transfer function is obtained with zero initial conditions, setting x(0) = 0 and rearranging, (11.2) becomes

    X(s) = (sI - A)^{-1} B U(s)
    and
    Y(s) = [C (sI - A)^{-1} B + D] U(s)        (11.3)

Comparing (11.3) with Y(s) = G(s) U(s), we get

    G(s) = C (sI - A)^{-1} B + D        (11.4)

Note that since B ∈ R^{n×1}, C ∈ R^{1×n}, A ∈ R^{n×n}, the product C (sI - A)^{-1} B ∈ R^{1×1} is a scalar.

We now take a closer look at equation (11.4). Since, in general, the inverse of a matrix H can be expressed as
    H^{-1} = adj(H) / det(H)

where adj(H) refers to the adjoint of H, the inverse of (sI - A) is given by

    (sI - A)^{-1} = adj(sI - A) / det(sI - A)

From (11.4), we get

    G(s) = [C adj(sI - A) B + det(sI - A) D] / det(sI - A)

In the case of a SISO system, the expression C adj(sI - A) B + det(sI - A) D is a polynomial in s. Also, det(sI - A) is a polynomial, and so we get

    G(s) = N(s) / det(sI - A) = N(s) / D(s)
From classical control, the characteristic roots (or poles) of the system are the values of s such that D(s) = 0. Clearly these are also the values of s such that det(sI - A) = 0. From this, we can see that the eigenvalues of A are the poles of the system G(s)! (Q: what does that imply in terms of stability of the system?)
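This fact is easy to check numerically. A minimal sketch, assuming NumPy is available (the matrix is the example used later in this section):

```python
import numpy as np

# A from the state-space example below; det(sI - A) = s^2 + 3s + 2
A = np.array([[0.0, 1.0], [-2.0, -3.0]])

eigvals = np.sort(np.linalg.eigvals(A))
poles = np.sort(np.roots([1, 3, 2]))   # roots of the characteristic polynomial

assert np.allclose(eigvals, poles)     # both are {-2, -1}
```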
Example: From the earlier example, we have the state equations:

    [ẋ1; ẋ2] = [0 1; -k/M -B/M] [x1; x2] + [0; 1/M] f

    y = [1 0] [x1; x2]

Using G(s) = C (sI - A)^{-1} B + D, we get

    sI - A = [s 0; 0 s] - [0 1; -k/M -B/M] = [s -1; k/M s + B/M]

    (sI - A)^{-1} = [s + B/M 1; -k/M s] / (s² + (B/M)s + k/M)

    G(s) = [1 0] (sI - A)^{-1} [0; 1/M] = (1/M) / (s² + (B/M)s + k/M)
Example: Given the state equations

    ẋ(t) = A x(t) + B u(t),   where A = [0 1; -2 -3] and B = [0; 1]

and

    y(t) = [3 1] x(t)

then using G(s) = C (sI - A)^{-1} B + D, we get

    G(s) = [3 1] [ (s+3)/((s+1)(s+2))  1/((s+1)(s+2)) ; -2/((s+1)(s+2))  s/((s+1)(s+2)) ] [0; 1]

         = (s + 3) / ((s+1)(s+2))
As an exercise, check that the state equation obtained from this transfer function is the same as above. Note that the above is in controller canonical form. Write out the observable canonical form of the transfer function, and compute the related transfer function. It should be the same.
2.5 Notes: The matrix (sI - A) is sometimes called the characteristic matrix of A. Its determinant,

    a(s) = det(sI - A) = s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0 = (s - λ1)(s - λ2)...(s - λn)

is known as the characteristic polynomial of A, and its roots [λi] are the eigenvalues or characteristic values of A (recall that they are the poles of the system). An important property of a(s) is that

    a(A) = A^n + a_{n-1} A^{n-1} + ... + a_1 A + a_0 I = 0

This is called the Cayley-Hamilton theorem, which we will need to use later.

Example:
Given the A matrix A = [1 2; 0 3], show that it satisfies the Cayley-Hamilton theorem.
    sI - A = [s 0; 0 s] - [1 2; 0 3] = [s-1 -2; 0 s-3]

    a(s) = det(sI - A) = (s-1)(s-3) = s² - 4s + 3

and so

    a(A) = A² - 4A + 3I
         = [1 8; 0 9] - 4[1 2; 0 3] + 3[1 0; 0 1]
         = [1 8; 0 9] - [4 8; 0 12] + [3 0; 0 3]
         = [0 0; 0 0]
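The example can also be verified numerically; `np.poly` of a square matrix returns its characteristic polynomial coefficients:

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 3.0]])

# characteristic polynomial a(s) = det(sI - A) = s^2 - 4s + 3
coeffs = np.poly(A)
assert np.allclose(coeffs, [1, -4, 3])

# Cayley-Hamilton: a(A) = A^2 - 4A + 3I must be the zero matrix
aA = A @ A - 4 * A + 3 * np.eye(2)
assert np.allclose(aA, np.zeros((2, 2)))
```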
We have stated earlier that state-space equations are not unique, and so there are infinitely many state-space representations of a physical system. However, there is only one transfer function representation of the same system. Hence in the next section we will show that no matter which state-space form describing a system is found, the eigenvalues of the state-space equations will be the same. We call this the invariance of eigenvalues under similarity transform.
2.6 Invariance of eigenvalues under similarity transform

Given ẋ(t) = A x(t) + B u(t)

Let x̄ = P x (similarity transform). Now for P non-singular, we have

    P^{-1} dx̄/dt = A P^{-1} x̄ + B u
    i.e.,  dx̄/dt = P A P^{-1} x̄ + P B u

To show invariance of eigenvalues (i.e. that for different state-space representations of the same system, the eigenvalues are the same), we need to show that the characteristic polynomials of A and PAP^{-1} are identical. To do that, we consider

    |λI - A|  and  |λI - PAP^{-1}|

    |λI - PAP^{-1}| = |λPP^{-1} - PAP^{-1}|
                    = |P(λI - A)P^{-1}|
                    = |P| |λI - A| |P^{-1}|
                    = |λI - A|        (since |P||P^{-1}| = |PP^{-1}| = 1)
Therefore, the eigenvalues are invariant under a similarity transform.
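A small numeric sketch of this invariance (the particular matrix and the random P are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
P = rng.standard_normal((2, 2)) + 2 * np.eye(2)   # almost surely nonsingular

A_bar = P @ A @ np.linalg.inv(P)                  # similarity transform

eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_Abar = np.sort_complex(np.linalg.eigvals(A_bar))
assert np.allclose(eig_A, eig_Abar)               # eigenvalues unchanged
```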
Appendix: Inverse of Matrix
(the following taken from www.mathwords.com)
Cofactor
The determinant obtained by deleting the row and column of a given element of a matrix
or determinant. The cofactor is preceded by a + or − sign depending on whether the element
is in a + or − position.
Cofactor Matrix - Matrix of Cofactors
A matrix with elements that are the cofactors, term-by-term, of a given square matrix.
Adjoint
The matrix formed by taking the transpose of the cofactor matrix of a given original
matrix. The adjoint of matrix A is often written adj A.
Example: Find the adjoint of the following matrix:
Solution: First find the cofactor of each element.
As a result the cofactor matrix of A is
Finally the adjoint of A is the transpose of the cofactor matrix:
Inverse of matrix
A^{-1} = (1/det A)·(adjoint of A),  or equivalently  A^{-1} = (1/det A)·(cofactor matrix of A)^T
Example: The following steps result in A-1 for .
The cofactor matrix for A is , so the adjoint is
. Since det A = 22, we get
3 Solutions to State Space Models
In this week we will discuss the methods to solve linear systems represented using state-space models. We will also address the issue of handling nonlinearity in modeling.
3.1 Linearization
The utility of the linear time-invariant (LTI) model goes beyond systems that are described by linear models. Although most physical systems encountered in the real world are nonlinear, linear techniques can still be used to analyze the local behaviour of a nonlinear system for small deviations about an operating point. While this may appear restrictive, in practice it is not too bad, as most control systems are designed to maintain variables close to a particular operating point anyway. In order to obtain a linear model from a nonlinear system, we introduce a linearization technique based on the Taylor series expansion of a function.
Specifically, if a nonlinear time-invariant system is given by

    ẋ = f(x, u)

we can linearize about any given operating point using a Taylor series expansion. Suppose we wish to find the linear equation about the operating point (x0, u0); then at the operating point, we get

    ẋ0 = f(x0, u0)        (3.1.1)

Consider

    x = x0 + δx
    u = u0 + δu

then

    ẋ = ẋ0 + δẋ = f(x0 + δx, u0 + δu)

Expanding using the Taylor series expansion, we get

    ẋ0 + δẋ ≈ f(x0, u0) + (∂f/∂x)|_{x0,u0} δx + (∂f/∂u)|_{x0,u0} δu

and so

    δẋ = (∂f/∂x)|_{x0,u0} δx + (∂f/∂u)|_{x0,u0} δu        (3.1.2)
Equation (3.1.2) is the state-space equation with A corresponding to (∂f/∂x)|_{x0,u0}, i.e.
    A = (∂f/∂x)|_{x0,u0} = [∂f1/∂x1 ∂f1/∂x2 ... ∂f1/∂xn; ∂f2/∂x1 ∂f2/∂x2 ... ∂f2/∂xn; ...; ∂fn/∂x1 ∂fn/∂x2 ... ∂fn/∂xn], evaluated at (x0, u0)

and

    B = (∂f/∂u)|_{x0,u0} = [∂f1/∂u1 ∂f1/∂u2 ... ∂f1/∂um; ∂f2/∂u1 ∂f2/∂u2 ... ∂f2/∂um; ...; ∂fn/∂u1 ∂fn/∂u2 ... ∂fn/∂um], evaluated at (x0, u0)
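Where hand differentiation is tedious, these Jacobians can also be estimated numerically. The helper and toy system below are our own illustration (not part of the notes), using central differences:

```python
import numpy as np

def jacobians(f, x0, u0, eps=1e-6):
    """Estimate A = df/dx and B = df/du at (x0, u0) by central differences."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# toy nonlinear system: x1' = x2, x2' = -sin(x1) + u (pendulum-like)
f = lambda x, u: np.array([x[1], -np.sin(x[0]) + u[0]])
A, B = jacobians(f, np.zeros(2), np.zeros(1))

# about the origin sin(x1) ~ x1, so A ~ [0 1; -1 0], B ~ [0; 1]
assert np.allclose(A, [[0, 1], [-1, 0]], atol=1e-4)
assert np.allclose(B, [[0], [1]], atol=1e-4)
```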
Example: Inverted Pendulum
As an example, consider the classical inverted pendulum problem. The system consists of a cartwith an inverted pendulum attached. This system is also commonly known as pole-balancerbecause the problem of applying a force u(t) to keep the pendulum upright is similar to that ofbalancing a pole on one's hand. It has more practical interpretation in terms of launching of arocket but we will look at this system because it can illustrate certain control concepts very
clearly.
Figure: Pole-balancing cart system (cart of mass M, pendulum of mass m and length l pivoted at point P, input force u(t); diagram omitted here)

Define the center of gravity of the mass as (xG, yG); then

    xG = x + l sin θ
    yG = l cos θ

Applying Newton's law in the x-direction, we get

    M d²x/dt² + m d²xG/dt² = u

i.e.
    M d²x/dt² + m d²(x + l sin θ)/dt² = u        (3.1.3)
Since

    d(sin θ)/dt = (cos θ) θ̇
    d²(sin θ)/dt² = -(sin θ) θ̇² + (cos θ) θ̈
    d(cos θ)/dt = -(sin θ) θ̇
    d²(cos θ)/dt² = -(cos θ) θ̇² - (sin θ) θ̈
Eq. (3.1.3) can be written as

    (M + m) ẍ - m l (sin θ) θ̇² + m l (cos θ) θ̈ = u        (3.1.4)
The equation of motion of the mass m in the y direction would also contain terms related to the motion of the mass in the x direction. Instead, we consider the rotational motion of the mass m around point P. Applying Newton's second law to the rotational motion, we get

    m (d²xG/dt²) l cos θ - m (d²yG/dt²) l sin θ = m g l sin θ        (3.1.5)
Substituting for xG and yG, we get

    m [d²(x + l sin θ)/dt²] l cos θ - m [d²(l cos θ)/dt²] l sin θ = m g l sin θ

    m [ẍ - l (sin θ) θ̇² + l (cos θ) θ̈] l cos θ - m [-l (cos θ) θ̇² - l (sin θ) θ̈] l sin θ = m g l sin θ

Simplifying by using cos²θ + sin²θ = 1 (the θ̇² cross terms cancel), we get

    m ẍ l cos θ + m l² θ̈ cos²θ + m l² θ̈ sin²θ = m g l sin θ

or

    m ẍ cos θ + m l θ̈ = m g sin θ        (3.1.6)
The nonlinear equations of motion are then given by Eqs. (3.1.4) and (3.1.6), i.e.

    (M + m) ẍ - m l (sin θ) θ̇² + m l (cos θ) θ̈ = u
    m ẍ cos θ + m l θ̈ = m g sin θ
Linearizing about the upright equilibrium (θ = 0, θ̇ = 0), with sin θ ≈ θ, cos θ ≈ 1 and the θ̇² term neglected, and taking the state vector as (x, ẋ, θ, θ̇), the linearized state equation is

    [ẋ; ẍ; θ̇; θ̈] = [0 1 0 0; 0 0 -mg/M 0; 0 0 0 1; 0 0 (M+m)g/(Ml) 0] [x; ẋ; θ; θ̇] + [0; 1/M; 0; -1/(Ml)] u
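A quick numerical look at this model (the parameter values are our own example choices; the state ordering follows the matrix above). The positive eigenvalue √((M+m)g/(Ml)) shows that the upright equilibrium is unstable without feedback:

```python
import numpy as np

M, m, l, g = 2.0, 0.1, 0.5, 9.8     # example parameter values (ours)

A = np.array([[0, 1, 0, 0],
              [0, 0, -m * g / M, 0],
              [0, 0, 0, 1],
              [0, 0, (M + m) * g / (M * l), 0]])
B = np.array([[0], [1 / M], [0], [-1 / (M * l)]])

eigs = np.linalg.eigvals(A)
# eigenvalues are {0, 0, +lam, -lam} with lam = sqrt((M+m)g/(Ml));
# the positive one makes the upright position unstable
lam = np.sqrt((M + m) * g / (M * l))
assert np.isclose(max(eigs.real), lam)
assert lam > 0
```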
3.2 Solution of time-invariant state equation

Consider the scalar homogeneous differential equation

    ẋ = a x        (7.1)

Assume a solution of the form

    x(t) = b0 + b1 t + b2 t² + ... + bk t^k + ...

Substitute into (7.1) to get

    b1 + 2 b2 t + ... + k bk t^{k-1} + ... = a (b0 + b1 t + b2 t² + ... + bk t^k + ...)

Now, this must hold for all t, so equating the coefficients, we get

    b1 = a b0
    b2 = (1/2) a b1 = (1/2) a² b0
    ...
    bk = (1/k!) a^k b0

At t = 0, x(0) = b0, so that we get

    x(t) = (1 + a t + (1/2!) a² t² + ... + (1/k!) a^k t^k + ...) x(0) = e^{at} x(0)

As an exercise, substitute this back into (7.1) to check that it is indeed the solution.

Now, we will use a similar method to try to solve the state equation. Consider

    ẋ(t) = A x(t),   where x: n-dimensional vector, A: n×n matrix

By analogy with the scalar case, we can assume a solution in the form of a vector power series in t, or

    x(t) = b0 + b1 t + b2 t² + ... + bk t^k + ...

By substituting for x(t) in the state equation, we get

    b1 + 2 b2 t + 3 b3 t² + ... + k bk t^{k-1} + ... = A (b0 + b1 t + b2 t² + ... + bk t^k + ...)

This must hold for all t, so
    b1 = A b0
    b2 = (1/2) A b1 = (1/2) A² b0
    ...
    bk = (1/k!) A^k b0

At t = 0, x(0) = b0, so

    x(t) = (I + A t + (1/2!) A² t² + ... + (1/k!) A^k t^k + ...) x(0)

We call the power series in the parenthesis the matrix exponential because of its similarity to the infinite power series for a scalar exponential, i.e.

    I + A t + (1/2!) A² t² + ... + (1/k!) A^k t^k + ... = e^{At}

Therefore, the solution to the state equation is

    x(t) = e^{At} x(0)

where e^{At} is also known as the state transition matrix.
3.3 Properties of State Transition Matrix

1. The series

       e^{At} = lim_{N→∞} [I + A t + (1/2!) A² t² + ... + (1/N!) A^N t^N]

   converges for any finite t. That is, e^{At} always exists.

2.  d(e^{At})/dt = A e^{At} = e^{At} A

Proof:
    d(e^{At})/dt = d/dt [I + A t + (1/2!) A² t² + (1/3!) A³ t³ + ...]
                 = 0 + A + A² t + (1/2!) A³ t² + ...
                 = A [I + A t + (1/2!) A² t² + ...]
                 = A e^{At}
                 = [I + A t + (1/2!) A² t² + ...] A = e^{At} A
3.  e^{A(t+s)} = e^{At} e^{As}

Proof:

    e^{At} e^{As} = (Σ_{k=0}^{∞} A^k t^k / k!)(Σ_{k=0}^{∞} A^k s^k / k!)

By direct expansion, we get

    e^{At} e^{As} = (I + A t + (1/2!) A² t² + ...)(I + A s + (1/2!) A² s² + ...)
                  = I + A(t + s) + A²(t²/2! + t s + s²/2!) + ...
                  = I + A(t + s) + (1/2!) A² (t + s)² + ...
                  = e^{A(t+s)}
More rigorous derivation:

    e^{At} e^{As} = (Σ_{k=0}^{∞} A^k t^k / k!)(Σ_{k=0}^{∞} A^k s^k / k!)
                  = Σ_{k=0}^{∞} A^k Σ_{i=0}^{k} t^i s^{k-i} / (i! (k-i)!)
                  = Σ_{k=0}^{∞} A^k (t + s)^k / k!
                  = e^{A(t+s)}
4.  e^{At} is nonsingular and [e^{At}]^{-1} = e^{-At}

Proof:

Since e^{A(t1+t2)} = e^{At1} e^{At2}, if we let t1 = t and t2 = -t, then
    e^{A(t-t)} = e^{At} e^{-At}
    i.e.  I = e^{At} e^{-At}
    i.e.  [e^{At}]^{-1} = e^{-At}

Since e^{At} always exists, the inverse e^{-At} always exists, and so e^{At} is nonsingular.
5.  e^{(A+B)t} = e^{At} e^{Bt} if AB = BA;   e^{(A+B)t} ≠ e^{At} e^{Bt} if AB ≠ BA

Proof:

    e^{(A+B)t} = I + (A + B) t + (1/2!)(A + B)² t² + (1/3!)(A + B)³ t³ + ...        (4.3.1)

    e^{At} e^{Bt} = (I + A t + (1/2!) A² t² + ...)(I + B t + (1/2!) B² t² + ...)
                  = I + (A + B) t + (1/2!)(A² + 2AB + B²) t² + ...        (4.3.2)

Subtracting (4.3.2) from (4.3.1), and noting (A + B)² = A² + AB + BA + B², we get

    e^{(A+B)t} - e^{At} e^{Bt} = (1/2!)(BA - AB) t² + (similar commutator terms) t³/3! + ...

so that if AB = BA (A and B are said to commute), then the right-hand terms vanish and so

    e^{(A+B)t} = e^{At} e^{Bt}
3.4 Solution of inhomogeneous state equation (i.e. with input)

Consider the scalar case,

    ẋ = a x + b u
    ẋ - a x = b u
    e^{-at} [ẋ - a x] = e^{-at} b u
    i.e.  d/dt [e^{-at} x(t)] = e^{-at} b u

Integrating with respect to t, we get
    e^{-at} x(t) - x(0) = ∫_0^t e^{-aτ} b u(τ) dτ

    x(t) = e^{at} x(0) + ∫_0^t e^{a(t-τ)} b u(τ) dτ
The former term is the free response (zero-input response) and the latter term is known as the forced response (zero-state response).
We will again extend the result to the matrix case. Consider the state equation,

    ẋ(t) = A x(t) + B u(t)

where x: n-vector, u: m-vector, A: n×n constant matrix, B: n×m constant matrix.

Following the scalar case, we get

    ẋ - A x = B u
    e^{-At} [ẋ - A x] = e^{-At} B u
    i.e.  d/dt [e^{-At} x(t)] = e^{-At} B u

Integrating from 0 to t,

    e^{-At} x(t) - x(0) = ∫_0^t e^{-Aτ} B u(τ) dτ

    x(t) = e^{At} x(0) + ∫_0^t e^{A(t-τ)} B u(τ) dτ        (4.4.1)
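Formula (4.4.1) can be checked against an ODE solver. This is our own sketch (assuming NumPy/SciPy), evaluating the convolution integral with a simple trapezoidal rule for a step input on a second-order system:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
x0 = np.zeros(2)
u = lambda t: 1.0                       # unit-step input

def x_formula(t, n=400):
    # x(t) = e^{At} x(0) + integral_0^t e^{A(t-tau)} B u(tau) dtau,
    # with the integral evaluated by the trapezoidal rule
    taus = np.linspace(0.0, t, n)
    dt = taus[1] - taus[0]
    vals = np.array([(expm(A * (t - tau)) @ B)[:, 0] * u(tau) for tau in taus])
    integral = dt * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)
    return expm(A * t) @ x0 + integral

# cross-check against a high-accuracy numerical solve of xdot = Ax + Bu
sol = solve_ivp(lambda t, x: A @ x + B[:, 0] * u(t), (0.0, 2.0), x0,
                rtol=1e-9, atol=1e-12)
assert np.allclose(x_formula(2.0), sol.y[:, -1], atol=1e-4)
```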
3.5 How to find e^{At}?

Computing e^{At} by summing up infinitely many terms is impossible, so we discuss how to compute it more conveniently. In this section, we illustrate the method using the inverse Laplace transform. Again we will look at the scalar case to get an idea of what to do. Consider the scalar 1st-order differential equation

    ẋ = a x
Taking the Laplace transform, we get

    sX(s) - x(0) = a X(s)
    X(s) = x(0)/(s - a) = (s - a)^{-1} x(0)

Taking the inverse Laplace transform, we get

    x(t) = L^{-1}{(s - a)^{-1}} x(0)

Compare with the solution we obtained earlier, i.e.

    x(t) = e^{at} x(0)

Hence, we get

    e^{at} = L^{-1}{(s - a)^{-1}}

Similarly, for the matrix case,

    X(s) = (sI - A)^{-1} x(0)
    x(t) = L^{-1}{(sI - A)^{-1}} x(0)

It can be shown that

    (sI - A)^{-1} = I/s + A/s² + A²/s³ + ...

    L^{-1}{(sI - A)^{-1}} = I + A t + (1/2!) A² t² + ... = e^{At}

Hence, we have just shown that

    L^{-1}{(sI - A)^{-1}} = e^{At}

The importance of this equation is that it provides a convenient means to compute the matrix exponential: compute (sI - A)^{-1} first, then take the inverse Laplace transform (of each of its entries).

For the case of the state equation with input, we get
From the previous result for e^{At}, we get

    X(s) = (sI - A)^{-1} x(0) + (sI - A)^{-1} B U(s) = L{e^{At}} x(0) + L{e^{At}} B U(s)

Taking the inverse Laplace transform (the product in the second term becomes a convolution),

    x(t) = e^{At} x(0) + ∫_0^t e^{A(t-τ)} B u(τ) dτ        (4.4.2)
If the initial time is not zero but t0, then the solution becomes

    x(t) = e^{A(t-t0)} x(t0) + ∫_{t0}^t e^{A(t-τ)} B u(τ) dτ        (4.4.3)
We can see this because, from (4.4.2), the solution at time t0 is given by

    x(t0) = e^{At0} x(0) + ∫_0^{t0} e^{A(t0-τ)} B u(τ) dτ

so that

    x(0) = e^{-At0} x(t0) - e^{-At0} ∫_0^{t0} e^{A(t0-τ)} B u(τ) dτ

But the solution at time t is given by (4.4.1)

    x(t) = e^{At} x(0) + ∫_0^t e^{A(t-τ)} B u(τ) dτ

Therefore, substituting for x(0), we get

    x(t) = e^{At} [e^{-At0} x(t0) - e^{-At0} ∫_0^{t0} e^{A(t0-τ)} B u(τ) dτ] + ∫_0^t e^{A(t-τ)} B u(τ) dτ
         = e^{A(t-t0)} x(t0) - ∫_0^{t0} e^{A(t-τ)} B u(τ) dτ + ∫_0^t e^{A(t-τ)} B u(τ) dτ
         = e^{A(t-t0)} x(t0) + ∫_{t0}^t e^{A(t-τ)} B u(τ) dτ
Another way to think of it is this: if the initial time is t0, consider someone else (observer 2) whose watch starts at zero. From the viewpoint of observer 2, the initial value is x(t0), and the system evolves for a time t - t0. Solving the equation of observer 2, one gets (4.4.3). Notice that the system is independent of the observer.
(In summary, the Laplace-domain steps used above are:)

    ẋ(t) = A x(t) + B u(t)
    sX(s) - x(0) = A X(s) + B U(s)
    (sI - A) X(s) = x(0) + B U(s)
    X(s) = (sI - A)^{-1} x(0) + (sI - A)^{-1} B U(s)
Example:

Solve

    [ẋ1; ẋ2] = [0 1; -1 -2] [x1; x2] + [0; 1] u

Solution:

    sI - A = [s 0; 0 s] - [0 1; -1 -2] = [s -1; 1 s+2]

Hence we get

    (sI - A)^{-1} = [s+2 1; -1 s] / (s² + 2s + 1)
                  = [ (s+2)/(s+1)²   1/(s+1)² ; -1/(s+1)²   s/(s+1)² ]

Therefore, using the inverse Laplace table,

    e^{At} = L^{-1}{(sI - A)^{-1}} = [ (1+t)e^{-t}   t e^{-t} ; -t e^{-t}   (1-t)e^{-t} ]

and so

    x(t) = e^{At} x(0) + ∫_0^t [ (t-τ) e^{-(t-τ)} ; (1-(t-τ)) e^{-(t-τ)} ] u(τ) dτ

(the column in the integrand is e^{A(t-τ)} B).
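The closed-form e^{At} just obtained can be checked against `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, -2.0]])

def eAt_closed_form(t):
    # entries obtained above via the inverse Laplace transform of (sI - A)^{-1}
    e = np.exp(-t)
    return np.array([[(1 + t) * e, t * e],
                     [-t * e, (1 - t) * e]])

for t in [0.0, 0.5, 1.7]:
    assert np.allclose(eAt_closed_form(t), expm(A * t))
```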
3.6 Other methods to compute e^{At}

It was shown previously that e^{At} can be obtained using the inverse Laplace transform. This is the most common way to find e^{At} in closed form, i.e.

    e^{At} = L^{-1}{(sI - A)^{-1}}

Other methods are as follows:

3.6.1 Series expansion
Since e^{At} was defined as an infinite series, that series can be used to compute e^{At} to any degree of accuracy required; this is what most computer code (MATLAB, for example) builds on. Recall that

    e^{At} = I + A t + (1/2!) A² t² + ... + (1/N!) A^N t^N + ...

Since the series converges, e^{At} can be approximated by evaluating up to N terms only.
3.6.2 Using similarity transform

If we can transform the matrix A into either a diagonal or Jordan (discussed later) canonical form, then e^{At} can be easily evaluated. We shall first consider the case where A has only distinct eigenvalues; in that case, A can be transformed into a diagonal matrix using a similarity transform. If A has repeated eigenvalues, then it may not always be diagonalizable, but it can be transformed into a form called the Jordan canonical form using a similarity transform.
Case 1: Matrix A is diagonalizable

Consider a diagonal matrix Λ where

    Λ = [λ1 0 ... 0; 0 λ2 ... 0; ...; 0 0 ... λn]
We will now show that it is possible to obtain Λ from the matrix A by a similarity transform, i.e. we will show that it is possible to get Λ = P A P^{-1}.

Similarity transform:

Let Λ = P A P^{-1}; then premultiply by P^{-1} (assuming that the inverse exists) on both sides to get

    P^{-1} Λ = A P^{-1}

Writing P^{-1} = [c1 c2 ... cn], where the ci are the columns of P^{-1}, this reads

    [c1 c2 ... cn] Λ = A [c1 c2 ... cn]
    i.e.  [λ1 c1  λ2 c2  ...  λn cn] = [A c1  A c2  ...  A cn]

It is easy to see that this is equivalent to the following set of equations:

    A c1 = λ1 c1
    A c2 = λ2 c2        (4.7.2.1)

and so on.

But recall the definition of an eigenvector: any nonzero vector x satisfying A x = λ x is called an eigenvector of A associated with the eigenvalue λ. Hence, from the equations above, we can see that c1 is the eigenvector associated with eigenvalue λ1, c2 is the eigenvector associated with eigenvalue λ2, and in general ci is the eigenvector associated with eigenvalue λi. This shows that, by selecting the eigenvectors as columns of the P^{-1} matrix, and if the n eigenvectors are independent (which will be the case if the eigenvalues are distinct), it is possible to diagonalize the matrix A.
We can use the similarity transform to find e^{At}, as we will now show. Consider the state-space equation

    ẋ = A x + B u
    y = C x + D u

where the solution is given by

    x(t) = e^{At} x(0) + ∫_0^t e^{A(t-τ)} B u(τ) dτ        (4.7.2.2)

Using the P defined earlier, if we let x̄ = P x, then

    dx̄/dt = P A P^{-1} x̄ + P B u
    y = C P^{-1} x̄ + D u

We have shown earlier that Λ = P A P^{-1}, and so, writing B̄ = P B, we get

    dx̄/dt = Λ x̄ + B̄ u
    y = C P^{-1} x̄ + D u

The solution to this state equation would be

    x̄(t) = e^{Λt} x̄(0) + ∫_0^t e^{Λ(t-τ)} B̄ u(τ) dτ        (4.7.2.3)

and using x̄ = P x, we get

    x(t) = P^{-1} e^{Λt} P x(0) + ∫_0^t P^{-1} e^{Λ(t-τ)} B̄ u(τ) dτ        (4.7.2.4)

Comparing (4.7.2.4) with (4.7.2.2), we can see that

    e^{At} = P^{-1} e^{Λt} P
Since Λ is diagonal, i.e.

    Λ = [λ1 0 ... 0; 0 λ2 ... 0; ...; 0 0 ... λn]

we have

    e^{Λt} = [e^{λ1 t} 0 ... 0; 0 e^{λ2 t} ... 0; ...; 0 0 ... e^{λn t}]

and e^{At} can be easily evaluated as

    e^{At} = P^{-1} [e^{λ1 t} 0 ... 0; 0 e^{λ2 t} ... 0; ...; 0 0 ... e^{λn t}] P
Note: From (4.7.2.4), if we assume that the input u(t) is zero, then we get

    x(t) = P^{-1} e^{Λt} P x(0)

or, writing x̄(0) = P x(0),

    x(t) = P^{-1} e^{Λt} x̄(0)

so that

    x(t) = [c1 c2 ... cn] [e^{λ1 t} x̄1(0); e^{λ2 t} x̄2(0); ...; e^{λn t} x̄n(0)]

i.e.

    x(t) = c1 e^{λ1 t} x̄1(0) + c2 e^{λ2 t} x̄2(0) + ... + cn e^{λn t} x̄n(0)

This shows that the solution x(t) is a combination of the responses ci e^{λi t} x̄i(0), and hence is a combination (superposition) of the eigenmodes e^{λi t}. We will discuss this later in this chapter.
Example:
Given

    A = [0 1; 0 -2]

find e^{At} using the diagonalization method.

Solution:

The first thing we need to do is to find the eigenvalues. These can be found by solving the characteristic polynomial det(sI - A) = 0:

    det(sI - A) = det([s -1; 0 s+2]) = s(s + 2) = 0

hence the eigenvalues are s = 0, -2.

Next, we need to find the eigenvectors associated with the eigenvalues. Considering s = 0, we seek a solution x such that

    (0·I - A) x = 0,   i.e.  [0 -1; 0 2] [x1; x2] = 0

and so x1 = k, where k is a constant, and x2 = 0. Normalizing, we get the eigenvector associated with s = 0 as x = [1 0]^T.

Similarly for the other eigenvalue,

    (-2I - A) x = 0,   i.e.  [-2 -1; 0 0] [x1; x2] = 0

so that 2x1 + x2 = 0, or x2 = -2x1. Hence the eigenvector associated with s = -2 is given by x = [1 -2]^T, or, normalized,

    x = [1/√5; -2/√5]

Hence, using the eigenvectors as columns of the matrix P^{-1}, we get

    P^{-1} = [1 1/√5; 0 -2/√5]

and

    P = [1 1/2; 0 -√5/2]
therefore, since e^{At} = P^{-1} e^{Λt} P, we get

    e^{At} = [1 1/√5; 0 -2/√5] [e^{0·t} 0; 0 e^{-2t}] [1 1/2; 0 -√5/2]

    e^{At} = [1  (1 - e^{-2t})/2; 0  e^{-2t}]

end
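The diagonalization route can be reproduced with `np.linalg.eig`, whose eigenvector matrix plays the role of P^{-1} here (a sketch assuming NumPy/SciPy):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, -2.0]])

# columns of Pinv are eigenvectors of A; P = inv(Pinv)
lam, Pinv = np.linalg.eig(A)            # eigenvalues {0, -2}
P = np.linalg.inv(Pinv)

t = 0.8
eAt = Pinv @ np.diag(np.exp(lam * t)) @ P     # e^{At} = P^{-1} e^{Lambda t} P

expected = np.array([[1.0, (1 - np.exp(-2 * t)) / 2],
                     [0.0, np.exp(-2 * t)]])
assert np.allclose(eAt, expected)
assert np.allclose(eAt, expm(A * t))
```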
Example 2:

Given ẋ = A x where

    A = [0 -2; 1 -3]

find x(t) for the initial conditions [1 1]^T and [2 1]^T respectively.

Solution:

The eigenvalues of the A matrix are s = -2, -1 and the associated eigenvectors are (not normalized)

    c1 = [1; 1]  and  c2 = [2; 1]

(notice that these coincide with the initial conditions). Hence we form

    P^{-1} = [1 2; 1 1]

and so

    P = [-1 2; 1 -1]

Since x(t) = P^{-1} e^{Λt} P x(0), or x(t) = c1 e^{λ1 t} x̄1(0) + c2 e^{λ2 t} x̄2(0), we get

    x(t) = [1; 1] e^{-2t} x̄1(0) + [2; 1] e^{-t} x̄2(0)
If x(0) = [1; 1], then

    x̄(0) = P x(0) = [-1 2; 1 -1] [1; 1] = [1; 0]

and so

    x(t) = [1; 1] e^{-2t}

If x(0) = [2; 1], then

    x̄(0) = [-1 2; 1 -1] [2; 1] = [0; 1]

and so

    x(t) = [2; 1] e^{-t}

Hence we can conclude that if the initial point lies on an eigenvector, it will remain on that line forever. A phase-plane plot (x2 vs x1) for the two initial conditions, and a series of phase plots for arbitrary initial points, can be sketched accordingly (plots through the points (1,1) and (2,1) omitted here).
Example 3:

Given

    A = [-2 1; 2 -3]

find the phase portrait.

Solution (as an exercise, please fill in the details):

The eigenvalues of the A matrix are s = -1, -4 and the associated eigenvectors are (not normalized)

    c1 = [1; 1]  and  c2 = [1; -2]

and the corresponding phase-plane plot follows the same pattern as above (plot omitted here).
Case 2: Matrix A is not diagonalizable

We will not be going into the details of this case for this course. Suffice it to say that, in the case where A has repeated eigenvalues, A may or may not be diagonalizable. In such cases, it can be shown that the best one can do is to reduce the matrix A to a form known as the Jordan canonical form, which has the following structure:
    J = [λ1 1 0 ... 0; 0 λ1 1 ... 0; ...; 0 0 ... λ1 1; 0 0 ... 0 λ1]

The above shows the Jordan form for the case where the eigenvalue λ1 is repeated n times. The matrix e^{Jt} is then given by

    e^{Jt} = [ e^{λ1 t}  t e^{λ1 t}  (t²/2!) e^{λ1 t}  ...  (t^{n-1}/(n-1)!) e^{λ1 t} ;
               0  e^{λ1 t}  t e^{λ1 t}  ...  (t^{n-2}/(n-2)!) e^{λ1 t} ;
               ... ;
               0  0  0  ...  e^{λ1 t} ]
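A sketch verifying the e^{Jt} formula for a single 3×3 Jordan block (our own check, assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import expm

lam, t = -1.0, 0.7
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])        # single 3x3 Jordan block

# closed form: upper triangular, with (t^k / k!) e^{lam t} on the k-th superdiagonal
e = np.exp(lam * t)
eJt = np.array([[e, t * e, t**2 / 2 * e],
                [0, e, t * e],
                [0, 0, e]])
assert np.allclose(eJt, expm(J * t))
```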
{ Review:

Examples of eigenvalues and eigenvectors

Find the eigenvalues and eigenvectors of the matrix

    A = [3 2; 1 4]

Solution:

Using the determinant rule,

    det(sI - A) = det([s-3 -2; -1 s-4]) = (s-3)(s-4) - 2 = s² - 7s + 10

so that det(sI - A) = 0 gives the eigenvalues as s1 = 2 and s2 = 5.

To find the eigenvectors associated with the eigenvalues, first consider the eigenvalue s1 = 2. To determine the eigenvector, we must find x such that

    A x = s1 x,   i.e.  (s1 I - A) x = [-1 -2; -1 -2] x = 0
and therefore

    x = α [-2; 1]

is a solution for any scalar α. We can choose the solution with α = 1, or the normalized solution

    x = [-2/√5; 1/√5]

Similarly, the eigenvector associated with s2 = 5 is given by

    x = α [1; 1]

for any scalar α. We can choose the solution with α = 1, or the normalized solution

    x = [1/√2; 1/√2]

It is understood that any scalar multiple of such normalized eigenvectors will also be an eigenvector.
}
Example:

Consider the matrix

    A = [1 3 2; 0 4 2; 0 -3 -1]

Using the determinant rule,

    det(sI - A) = det([s-1 -3 -2; 0 s-4 -2; 0 3 s+1])
                = (s-1)[(s-4)(s+1) + 6]
                = (s-1)(s² - 3s + 2)
                = (s-1)²(s-2)

so that det(sI - A) = 0 gives the eigenvalues as s1 = 1 (repeated) and s2 = 2.

To find the eigenvectors associated with the eigenvalues, first consider the eigenvalue s1 = 1. To determine the eigenvector, we must find x such that A x = x, i.e.

    (s1 I - A) x = [0 -3 -2; 0 -3 -2; 0 3 2] x = 0
i.e.

    3x2 + 2x3 = 0

We get 1 equation (i.e. 1 constraint) with 3 unknowns, and so we can get 2 linearly independent eigenvectors, such as

    [1 0 0]^T  and  [0 2 -3]^T

For the eigenvalue s2 = 2, we get the eigenvector [1 1 -1]^T, and so the matrix can be diagonalized. (Prove this!)
Example:

Consider the matrix

    A = [2 1 2; 0 2 1; 0 0 1]

The eigenvalues are s1 = 1 and s2 = 2 (repeated), from solving (s - 2)²(s - 1) = 0.

The eigenvector associated with s1 = 1 is given by

    (s1 I - A) x = [-1 -1 -2; 0 -1 -1; 0 0 0] x = 0

so that the eigenvector is [1 1 -1]^T.

The eigenvector associated with s2 = 2 is given by

    (s2 I - A) x = [0 -1 -2; 0 0 -1; 0 0 1] x = 0

so that the eigenvector is [1 0 0]^T.

For this example, there is only one linearly independent eigenvector for the repeated eigenvalue s2 = 2; hence A is not diagonalizable. Instead, we can only convert it into the Jordan canonical form

    [2 1 0; 0 2 0; 0 0 1]
Example:

Consider the matrix

    A = [3 -2; 4 -1]

The characteristic equation is given by

    det(sI - A) = det([s-3 2; -4 s+1]) = s² - 2s + 5

so that det(sI - A) = 0 gives the eigenvalues as

    s1,2 = (2 ± √(4 - 20))/2 = 1 ± 2i

The eigenvector associated with the eigenvalue s1 = 1 + 2i is given by

    (s1 I - A) x = [-2+2i 2; -4 2+2i] x = 0

or

    (-2 + 2i) x1 + 2 x2 = 0
    -4 x1 + (2 + 2i) x2 = 0

Note that the two equations are the same, since (-2 + 2i)(2 + 2i) = -8. So the eigenvector is given by

    x = [1; 1 - i]

The other eigenvector is

    x = [1; 1 + i]
3.7 Modes of the System

Consider the time-invariant linear system

    ẋ = A x,   x(0) = x0

where x ∈ R^n and A ∈ R^{n×n}.

For simplicity, let us assume that all the eigenvalues of A are distinct. Let the eigenvectors of A be p1, ..., pn, associated with the eigenvalues λ1, ..., λn. As these eigenvectors are linearly independent, they form a basis.

Let us define the nonsingular matrix P ∈ R^{n×n} via
    P^{-1} = [p1 p2 ... pn]

where the columns of P^{-1} are the eigenvectors of A. Let us also denote

    P = [q1; q2; ...; qn]

where q1, ..., qn are the row vectors of P. Therefore,

    qi pj = 1 if i = j,  and 0 if i ≠ j

We know that

    x(t) = e^{At} x0,  for t ≥ 0,   and   x(t) = P^{-1} e^{Λt} P x0,  for t ≥ 0,

where Λ = diag(λ1, λ2, ..., λn). Hence, the solution x(t) can be rewritten as

    x(t) = Σ_{i=1}^{n} e^{λi t} pi (qi x0),  for t ≥ 0,

or

    x(t) = Σ_{i=1}^{n} ξi e^{λi t} pi        (5.1)

where ξi = qi x0, i = 1, ..., n.
(1) The above (5.1) shows that the response of the system is a composition of motions along the eigenvectors of A. We call such a motion a mode or eigenmode of the system. A particular mode of the system can be excited by choosing the initial state x0 as a component along the corresponding eigenvector.

(2) Each mode of the system varies in time according to the exponential function e^{λi t}.

(3) An arbitrary initial condition will in general excite all n modes of the system. The amount of excitation of each mode depends on the value of ξi = qi x0.

(4) For example, suppose x0 = k p1, where k is a constant; then x(t) = k e^{λ1 t} p1. This implies that the solution x(t) consists of the response of the first mode only. In state space, this response moves along the eigendirection of p1. The other modes are suppressed.
Example:

Given a system

    ẋ = [0 1; -4 -5] x
The eigenvalues of A are -1 and -4 respectively. The eigenvectors are

    p1 = [1; -1]  and  p2 = [1; -4]

Therefore,

    P^{-1} = [1 1; -1 -4]

and

    P = [q1; q2] = [4/3 1/3; -1/3 -1/3]

Therefore, x(t) = ξ1 e^{-t} p1 + ξ2 e^{-4t} p2, where

    ξ1 = q1 x0 = (4/3) x10 + (1/3) x20
    ξ2 = q2 x0 = -(1/3) x10 - (1/3) x20

Thus

    x1(t) = ((4/3) x10 + (1/3) x20) e^{-t} - ((1/3) x10 + (1/3) x20) e^{-4t}

and

    x2(t) = -((4/3) x10 + (1/3) x20) e^{-t} + 4 ((1/3) x10 + (1/3) x20) e^{-4t}
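The modal decomposition (5.1) can be checked numerically for an example of this kind. A sketch assuming NumPy/SciPy (the matrix is the one from Example 2 of the previous section):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -2.0], [1.0, -3.0]])
p1, p2 = np.array([1.0, 1.0]), np.array([2.0, 1.0])   # eigenvectors (lam = -2, -1)
Pinv = np.column_stack([p1, p2])
q1, q2 = np.linalg.inv(Pinv)            # rows of P

x0 = np.array([2.0, 1.0])
xi1, xi2 = q1 @ x0, q2 @ x0             # modal coordinates xi_i = q_i x0

t = 0.5
x_modes = xi1 * np.exp(-2 * t) * p1 + xi2 * np.exp(-1 * t) * p2
assert np.allclose(x_modes, expm(A * t) @ x0)   # matches e^{At} x0
```

Since x0 here lies on the second eigenvector, xi1 = 0 and only one mode is excited, as remark (4) predicts.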
4 Controllability and Observability
In this week, we are interested in qualitative features of the state-space representation of a system. Specifically, we will be discussing two ideas known as controllability and observability of a system.
4.1 Introduction and Motivation
While many features in the state-space analysis of systems have their parallels in frequency-domain or classical analysis, controllability and observability are unique features of state-space analysis. These ideas were first introduced by E. G. Gilbert and R. E. Kalman in the early 1960s. They give a clear explanation as to why cancellation of unstable poles is undesirable even if perfect cancellation is possible (c.f. the very first example given in the introduction to state-space systems). Using these ideas, it can be shown that the overall system is unstable although the transfer function of the system is stable. Hence, controllability and observability are important concepts in the state-space analysis of a system. Before we proceed with the definitions of controllability and observability, let us look at some motivating examples.
Example 1: Consider

    ẋ1 = u
    ẋ2 = u

If x1(0) = x2(0), then x1(t) = x2(t) for all time t and all controls u. So there are no possible u and t that make, say, x1(t) - x2(t) = 1.

Example 2: Consider

    ẋ1 = x1 + x2
    ẋ2 = 0
    y = x2

The above shows that y(t) will always be a constant. Hence, observing y(t) does not tell us what x1 is doing.

A more interesting example is given by the following diagram. It consists of two carts coupled by a passive spring. In addition to the spring force, an active control force f is provided by some means within the system, so that f acts on both cart 1 and cart 2.

Example 3:

Figure: Example of a two-cart system (carts of mass m1 and m2 at positions x1 and x2, coupled by a spring of stiffness k, with the internal force f acting in opposite directions on the two carts; diagram omitted here)
The equations of motion of the system can be shown to be
The state equation using the state vector is given by
00//
00//
1000
0100
22
11
mkmk
mkmkA ,
2
1
/1
/1
0
0
m
mB
It is well known from the laws of physics that the force can change the relative distance
between the two carts, i.e., x2 - x1, but it cannot change the variables x1 or x2
independently. However, if only the matrices A and B are given, based on what we have
learnt so far, we have no way of knowing that such a constraint exists. The problem is
even more pronounced since the state-space representation of a system is not unique. The
question is: is it possible to identify such constraints based on the matrices A and B?
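Whether such a constraint can be identified from A and B alone is exactly what the controllability test of Theorem 1 below answers. As a sketch, take illustrative values m1 = m2 = 1 and k = 1, with the sign convention (an assumption consistent with the physics of an internal force) that f acts oppositely on the two carts:

```python
import numpy as np

# Two-cart system of Example 3 with illustrative values m1 = m2 = 1, k = 1.
# Assumed sign convention: the internal force f pushes the carts in
# opposite directions (+f/m1 on cart 1, -f/m2 on cart 2).
k, m1, m2 = 1.0, 1.0, 1.0
A = np.array([[0.0,    0.0,  1.0, 0.0],
              [0.0,    0.0,  0.0, 1.0],
              [-k/m1,  k/m1, 0.0, 0.0],
              [k/m2,  -k/m2, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0/m1], [-1.0/m2]])

# Controllability matrix U = [B, AB, A^2 B, A^3 B]
U = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(4)])
rank_U = np.linalg.matrix_rank(U)
print(rank_U)  # 2 < 4: the constraint is indeed visible from A and B alone
```

The rank deficit (2 instead of 4) reflects the fact that the relative coordinate can be steered but the centre of mass cannot.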
Example 4

Consider the system given below:

Figure: An uncontrollable system (a network with voltage source V, resistors R1, R2, R3,
and capacitors C1, C2 carrying voltages v1 and v2)
By physics, we know

    C1 dv1/dt = (V - v1)/R1 + (v2 - v1)/R3,
    C2 dv2/dt = (V - v2)/R2 + (v1 - v2)/R3.
Hence, the differential equations can be written as

    dv1/dt = -(1/(C1R1) + 1/(C1R3)) v1 + (1/(C1R3)) v2 + (1/(C1R1)) V,
    dv2/dt = (1/(C2R3)) v1 - (1/(C2R2) + 1/(C2R3)) v2 + (1/(C2R2)) V.
Consider the voltage across R3, v = v1 - v2. Then

    dv/dt = -(1/(C1R1) + 1/(C1R3) + 1/(C2R3)) v1
            + (1/(C1R3) + 1/(C2R2) + 1/(C2R3)) v2
            + (1/(C1R1) - 1/(C2R2)) V.

If R1C1 = R2C2, then the coefficient of V vanishes, the coefficients of v1 and v2 become
equal and opposite, and

    dv/dt = -((R1 + R2 + R3)/(C1R1R3)) v.
This implies that v is not influenced by V, and the voltage v decays from whatever initial
value to zero; i.e., v is not controllable.
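This conclusion can be double-checked numerically. The sketch below uses illustrative component values chosen so that R1C1 = R2C2 and applies the rank test of Theorem 1 below to the pair (A, B):

```python
import numpy as np

# Example 4 with illustrative component values chosen so that R1*C1 = R2*C2:
R1, C1 = 1.0, 1.0   # R1*C1 = 1
R2, C2 = 2.0, 0.5   # R2*C2 = 1
R3 = 1.0

# State x = [v1, v2], input V (from the node equations above).
A = np.array([[-(1/(C1*R1) + 1/(C1*R3)),  1/(C1*R3)],
              [1/(C2*R3), -(1/(C2*R2) + 1/(C2*R3))]])
B = np.array([[1/(C1*R1)],
              [1/(C2*R2)]])

U = np.hstack([B, A @ B])
rank_U = np.linalg.matrix_rank(U)
print(rank_U)  # 1 < 2: V cannot steer v1 and v2 independently
```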
The above examples illustrate the concept of controllability of a system.

Practical examples of unobservability are also easy to find. Typical examples of
unobservability occur when two systems are connected in tandem: if the output
consists of the output from the first system only, then the states of the second system are
unobservable.
Controllability and observability provide analysis tools for the above situations.
For a given system of the form

    dx/dt = Ax + Bu
    y = Cx + Du                (1)

where A ∈ R^(n×n), B ∈ R^(n×r), C ∈ R^(m×n) and D ∈ R^(m×r) are constant matrices, the
concepts of controllability and observability are useful in answering the following questions.

Can we drive x(t) wherever we want using some u?
Can we cause x(t) to follow a given path?
Can the measurement of y(t) tell us what x(t0) was?
Can we track x(t) by observing y(t)?

The concepts of controllability and observability are defined under the assumption that
we have complete knowledge of the dynamical equation; i.e., the matrices A, B, C and D are
known.
4.2 Controllability
VRC
vRC
vRRC
v11
2
31
1
311
1
11111
Definition: A linear time-invariant system is said to be controllable if there exists an
input u(t), 0 ≤ t ≤ t1, which drives the system from any initial state x(0) = x0 to any
other state x(t1) = x1 in a finite time t1.

The key to the above definition lies in the words "any" and "finite". If the
input can only drive the system from some states to some other states, the
system is not controllable. Moreover, if it takes an infinite amount of time to go from an
arbitrary initial state to an arbitrary final state, the system is likewise uncontrollable.
It is possible to extend the concept of controllability to accommodate time-
varying systems, in which case the possibility of reaching x1 may depend
on the initial time t0 (so that t0 may no longer be taken as 0). For our case, we will
confine our study to time-invariant systems; then there is no loss of generality in
taking the initial time t0 to be zero.
Theorem 1 (Controllability). The n-dimensional linear time-invariant state equation in (1)
is controllable if and only if the following condition is satisfied:

1) The n × nr controllability matrix

    U = [B  AB  A^2 B  ...  A^(n-1) B]

is full row rank (i.e., rank(U) = n).
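The theorem translates directly into a short numerical test. In the sketch below, the helper names `ctrb` and `is_controllable` are our own, not a standard API; real tools (for example the python-control package) provide similar functions:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix U = [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    """Rank test of Theorem 1: full row rank n means controllable."""
    return bool(np.linalg.matrix_rank(ctrb(A, B)) == A.shape[0])

# A controllable pair (double integrator driven by a force) ...
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
B1 = np.array([[0.0], [1.0]])
# ... and the uncontrollable pair from Example 1.
A2 = np.zeros((2, 2))
B2 = np.ones((2, 1))

print(is_controllable(A1, B1), is_controllable(A2, B2))  # True False
```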
Proof: There are various ways of showing this. The following is a more direct proof showing
the connection to the Cayley-Hamilton theorem.
For the system

    dx/dt = Ax + Bu
    y = Cx + Du                (2)

the time solution of the system given in (2) at time t1 is

    x(t1) = e^{A t1} x0 + ∫_0^{t1} e^{A(t1 - τ)} B u(τ) dτ        (3)
or

    e^{-A t1} x(t1) - x0 = ∫_0^{t1} e^{-A τ} B u(τ) dτ        (4)
Note that (from the Cayley-Hamilton theorem and the infinite series expansion of e^{-A τ})

    e^{-A τ} = Σ_{k=0}^{n-1} α_k(τ) A^k        (5)
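Equation (5) can be verified numerically for a small example. The sketch below picks an illustrative 2×2 matrix with distinct eigenvalues, fixes τ = 1 (so e^{-Aτ} is replaced by e^{A} purely for convenience; the principle is identical), computes the matrix exponential by eigendecomposition, and recovers the coefficients α_k by solving a Vandermonde system:

```python
import numpy as np

# Check of the Cayley-Hamilton idea behind (5) for n = 2: the matrix
# exponential is a polynomial in A of degree < n.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])    # eigenvalues -1 and -2 (distinct)

lam, V = np.linalg.eig(A)
expA = V @ np.diag(np.exp(lam)) @ np.linalg.inv(V)   # e^{A} via eigendecomposition

# Coefficients alpha_0, alpha_1 from e^{lam_i} = alpha_0 + alpha_1 * lam_i
# (a Vandermonde system in the eigenvalues).
alpha = np.linalg.solve(np.vander(lam, increasing=True), np.exp(lam))
poly = alpha[0] * np.eye(2) + alpha[1] * A

err = float(np.max(np.abs(expA - poly)))
print(err < 1e-10)  # True: no infinite series is needed
```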
Substituting (5) into (4) gives

    e^{-A t1} x(t1) - x0 = Σ_{k=0}^{n-1} A^k B ∫_0^{t1} α_k(τ) u(τ) dτ        (6)
If we denote

    β_k = ∫_0^{t1} α_k(τ) u(τ) dτ,
then

    e^{-A t1} x(t1) - x0 = Σ_{k=0}^{n-1} A^k B β_k
                        = [B  AB  ...  A^(n-1) B] [β_0  β_1  ...  β_{n-1}]^T        (7)
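Equation (7) shows that the reachable increments e^{-A t1} x(t1) - x0 lie in the column space of [B  AB  ...  A^(n-1) B]. A minimal sketch with an illustrative controllable pair (a double integrator) confirms that any target increment then admits a solution β:

```python
import numpy as np

# Illustrative controllable pair: a double integrator.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
U = np.hstack([B, A @ B])      # n = 2, so U = [B, AB]; here rank(U) = 2

# Equation (7): any increment w = e^{-A t1} x(t1) - x0 must equal U @ beta
# for some beta.  Full rank means every w is attainable.
w = np.array([3.0, -1.0])      # an arbitrary target increment
beta = np.linalg.solve(U, w)
print(np.allclose(U @ beta, w))  # True
```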
For the system to be controllable, it means that