Control Crash Course


Transcript of Control Crash Course


    Astro 250

    Crash course on Control Systems

    Part I, March 3, 2003

    Andy Packard, Zachary Jarvis-Wloszek, Weehong Tan, Eric Wemhoff

    [email protected]


    Feedback Systems: Motivation

    [Block diagram: process to be controlled, P, with control input u, disturbances d1 and d2, and output y]

    Goal: regulate y as desired, by freely manipulating u

    Problem: The effect of u on y is partially unknown:

    External disturbances d1 and d2 act on the process.

    Process behavior P is somewhat unknown, and may drift/change with time.

    Note: arrows indicate cause/effect relationships, not necessarily power,

    force, flow, etc.

    Open-loop regulation: Make H the inverse of P

    [Block diagram: open-loop compensation, ydes → H → u → P → y, with disturbances d1 and d2]

    If the unknown effects (d1, d2, and the uncertainty in P) are small, this calibration strategy may work. We'll not focus on this.


    Feedback Systems: Motivation

    Feedback regulation:

    [Block diagram: controller C computes u from ydes and the measurement of y (through sensor F); process P with disturbances d1 and d2 and output y]

    Benefits of feedback

    1. Strategy C turns ydes and ymeas into u, so u depends on d1, d2 (and the measurement noise). Automatic compensation for the unknowns occurs, but it is corrupted by the noise.

    2. If C is properly designed, the feedback mechanism yields several benefits:

    (a) The effects of d1 and d2 on y are reduced, and are modestly insensitive to P's behavior.

    (b) The output y closely follows the desired trajectory, ydes, perhaps responding faster than the process naturally does on its own.

    (c) If the process P is inherently unstable, the feedback provides constant, corrective inputs u to stabilize the process.


    Feedback Systems: Motivation/Nomenclature

    [Block diagram: the same feedback loop, with controller C, process P, disturbances d1 and d2, sensor F, and signals ydes, u, y]

    Drawbacks of using feedback

    1. A feedback loop requires a sensing element, F, which may be costly or imperfect.

    2. Measurements potentially introduce additional noise into the process.

    3. System performance or even stability can be degraded if strategy C is not appropriate for P.

    Nomenclature

    If keeping the mapping from d1 and d2 to y small is the focus, then the problem is a disturbance rejection problem.

    If keeping the mapping from r to y approximately unity is the focus, then the problem is a reference tracking problem.

    In either case, the ability of C to augment the system's performance depends on the dynamics of F, the noise level, and the uncertainty in the process behavior, P.


    Feedback Loops: Arithmetic

    Many principles of feedback control derive from the arithmetic relations (and their sensitivities) implied by the figure below.

    [Block diagram: controller C, process G (to be controlled) with disturbance d entering through H, sensor S with noise n, and filter F; signals r, e, u, y, ym, yf]

    Analysis is oversimplified, not realistic, but relevant.

    Lines represent variables, arrows give cause/effect direction, rectangular blocks are multiplication operators. Finally, circles are summing junctions (with subtraction explicitly denoted).

    (r, d, n) are independent variables; (e, u, y, ym, yf) are dependent, being generated (caused) by specific values of (r, d, n).

    Writing each operation in the loop gives

        e  = r - yf        (generate the regulation error)
        u  = C e           (control strategy)
        y  = G u + H d     (process behavior)
        ym = S y + n       (sensor behavior)
        yf = F ym          (filtering the measurement)
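    As a quick check of this arithmetic (not part of the original slides), the loop equations can be solved symbolically for y; the sympy sketch below treats every block as a scalar gain, which is how the figure uses them.

    import sympy as sp

    # Loop variables and block gains, all treated as scalars.
    r, d, n = sp.symbols('r d n')
    C, G, H, S, F = sp.symbols('C G H S F')
    e, u, y, ym, yf = sp.symbols('e u y ym yf')

    loop = [
        sp.Eq(e, r - yf),         # regulation error
        sp.Eq(u, C * e),          # control strategy
        sp.Eq(y, G * u + H * d),  # process behavior
        sp.Eq(ym, S * y + n),     # sensor behavior
        sp.Eq(yf, F * ym),        # filtered measurement
    ]

    sol = sp.solve(loop, [e, u, y, ym, yf], dict=True)[0]
    print(sp.simplify(sol[y]))
    # Collecting terms gives  y = (G*C*r + H*d - G*C*F*n) / (1 + G*C*F*S),
    # i.e. (r -> y) = GC/(1+GCFS), (d -> y) = H/(1+GCFS), (n -> y) = -GCF/(1+GCFS),
    # which is the arithmetic used on the following pages.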


    Goals: Implications

    The first two goals are:

    1. Make the magnitude of (d → y)_CL significantly smaller than the uncontrolled effect that d has on y, which is H.

    2. Make the magnitude of (n → y)_CL small, relative to 1/S.

    Implications

    Goal 1 implies |H/(1 + GCFS)| << |H|, i.e. |1 + GCFS| >> 1.

    Goal 2 implies that any noise injected at the sensor output should be significantly attenuated at the process output y (with proper accounting for unit changes by S). This requires |GCF/(1 + GCFS)| << 1/S, i.e. |GCFS/(1 + GCFS)| << 1, which holds when |GCFS| << 1.


    Conflict: Impact on achieving Goal 3

    3. Make the (r → y)_CL response approximately equal to 1.

    Depending on which of Goal 1 or Goal 2 is followed, Goal 3 is accomplished in different manners. By itself, Goal 3 requires

        GC/(1 + GCFS) ≈ 1.

    If Goal 1 is satisfied, then |GCFS| is large (relative to 1), so

        GC/(1 + GCFS) ≈ GC/(GCFS) = 1/(FS).

    Therefore, the requirement of Goal 3 is that 1/(FS) ≈ 1, with |GC| >> 1.

    On the other hand, if Goal 2 is satisfied, then |GCFS| is small (relative to 1), so

        GC/(1 + GCFS) ≈ GC.

    Therefore, the requirement of Goal 3 is that GC ≈ 1 (and hence, to keep |GCFS| small, |FS| << 1).


    Tradeoffs: Arithmetic

    Let T(G, C, F, S) denote the factor that relates r to y:

        T(G, C, F, S) = GC/(1 + GCFS).

    [Block diagram: the same loop, with blocks C, G, H, S, F and signals r, d, y, n]

    Use T to denote T(G,C,F,S) for short, and consider two sensitivities:

    sensitivity of T to G, and sensitivity of T to the product F S.

    Obtain

        S^T_G = 1/(1 + GCFS),    S^T_FS = -GCFS/(1 + GCFS).

    Note that (always)

        S^T_G = 1 + S^T_FS.

    Hence, if one of the sensitivity measures is very small, then the other sensitivity measure will be approximately 1. So, if T is insensitive to G, it will be sensitive to FS, and vice versa.

    Defn: For a function F of many variables (say two), the sensitivity of F to x is defined as the percentage change in F due to a percentage change in x, denoted S^F_x. For infinitesimal changes in x, the sensitivity is

        S^F_x = (x / F(x, y)) ∂F/∂x.

    Other more interesting conservation laws hold.
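    The identity S^T_G = 1 + S^T_FS above can be checked directly from this definition; a small sympy sketch (my own verification, not from the slides):

    import sympy as sp

    G, C, FS = sp.symbols('G C FS', positive=True)   # FS stands for the product F*S
    T = G * C / (1 + G * C * FS)

    def sensitivity(expr, x):
        # S^expr_x = (x / expr) * d(expr)/dx, per the definition above
        return sp.simplify(x / expr * sp.diff(expr, x))

    S_T_G  = sensitivity(T, G)     #  1/(1 + GC*FS)
    S_T_FS = sensitivity(T, FS)    # -GC*FS/(1 + GC*FS)

    print(sp.simplify(S_T_G - (1 + S_T_FS)))   # prints 0: S^T_G = 1 + S^T_FS always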


    Systems: time, signals

    Time: the dominant (only) independent variable

    usual notation: t (and τ, etc., for dummy time variables)

    a real number, so t ∈ R; often time starts from 0, so t ∈ R+

    Signals: real-valued functions of the time variable

    usual notation: u, y, x, w, v

    explicit example: u(t) = e^{-3t} sin 4t for all t ∈ R+

    Systems: mappings from signal to signal (often called operators)

    usual notation: L for the mapping, and Lu for L acting on u

    explicit example: (Lu)(t) := ∫_0^t u²(τ) dτ

    A system L is linear if for all signals u and v, and all scalars α,

        L(αu + v) = αLu + Lv


    Linear systems: Examples/Non-Examples

    Examples

        (Lw)(t) = ∫_0^t [ e^{-2(t-τ)} w(τ) - (1/(τ² + 1)) w(τ - 4) ] dτ

        (Lw)(t) = 5t w(t)

        (Lw)(t) = ∫_0^t w(τ) dτ

        (Lw)(t) = 3 w(t - 4)

    Non-Examples

        (Lw)(t) = w²(t) + 1

        (Lw)(t) = ∫_0^t sin(w(τ)) dτ

        (Lw)(t) = 3 e^{|w(t-4)|}
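    Superposition is easy to test numerically on discretized signals; a sketch (mine, not from the slides) for one example and one non-example:

    import numpy as np

    # Check L(a*u + v) ?= a*L(u) + L(v) for the linear example (Lw)(t) = 3 w(t-4)
    # and the non-example (Lw)(t) = 3 exp(|w(t-4)|).
    t = np.linspace(0, 20, 2001)
    dt = t[1] - t[0]
    shift = int(round(4 / dt))            # samples in a 4-second delay

    def delay4(w):
        out = np.zeros_like(w)
        out[shift:] = w[:-shift]          # w(t-4), taken as 0 for t < 4
        return out

    L_lin    = lambda w: 3 * delay4(w)
    L_nonlin = lambda w: 3 * np.exp(np.abs(delay4(w)))

    u, v, a = np.sin(t), np.exp(-0.3 * t), 2.5
    for name, L in [("3 w(t-4)", L_lin), ("3 exp(|w(t-4)|)", L_nonlin)]:
        gap = np.max(np.abs(L(a * u + v) - (a * L(u) + L(v))))
        print(f"{name:16s} superposition gap = {gap:.3e}")
    # The first gap is 0 (linear); the second is far from 0 (nonlinear).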

    A lot is learned by considering feedback configurations of linear systems, and then studying how non-ideal aspects affect the conclusions. Direct consideration of nonlinear systems is also possible, but is not how we structure this crash course.


    Causality, Time Invariance

    A system L is causal if for any two inputs u and v,

        u(t) = v(t) for all t ≤ T

    implies that

        (Lu)(t) = (Lv)(t) for all t ≤ T,

    i.e. the output at time t only depends on past values of the input.

    Anything operating in real-time produces outputs that are causally related to its inputs.

    Off-line filtering (i.e., what is the best estimate of what was happening at t = 2.3 given the data on the window [0, 10]) is not necessarily causal.

    A system L is time-invariant if the system's input/output behavior is not explicitly changing/varying with time.


    3 Representations: Linear, Time-Invariant, Causal Systems

    Convolution: given a function g (on time), define a system (a relationship between input u and output y) as

        y(t) = ∫_0^t g(t - τ) u(τ) dτ

    g can also be a matrix-valued function of time, with u and y vector-valued signals. g is called the convolution kernel.

    Linear Ordinary Differential Equations: Given constants a_i and b_i, define a system (a relationship between input u and output y) as

        y^[n](t) + a_1 y^[n-1](t) + ... + a_{n-1} y^[1](t) + a_n y(t)
          = b_0 u^[n](t) + b_1 u^[n-1](t) + ... + b_{n-1} u^[1](t) + b_n u(t)

    with given initial conditions on y and its derivatives.

    State-Space (coupled, first-order LODEs): Given matrices A, B, C, D of appropriate dimensions, define a system (a relationship between input u, output y, and internal state x) as

        ẋ(t) = A x(t) + B u(t)
        y(t) = C x(t) + D u(t)

    with given initial conditions on x. We'll focus on these types of descriptions on Wednesday.
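    For a simple first-order system the three descriptions coincide, and the translation can be checked numerically. A sketch (my own construction, not from the slides) for ẏ + a y = b u, whose convolution kernel is g(t) = b e^{-at} and whose state-space data is A = -a, B = b, C = 1, D = 0:

    import numpy as np

    a, b = 2.0, 3.0
    dt = 1e-3
    t = np.arange(0.0, 5.0, dt)
    u = np.ones_like(t)                       # unit-step input

    # ODE / state-space form (identical here since C = 1, D = 0), forward Euler
    y_ode = np.zeros_like(t)
    for k in range(len(t) - 1):
        y_ode[k + 1] = y_ode[k] + dt * (-a * y_ode[k] + b * u[k])

    # convolution form with kernel g(t) = b*exp(-a*t), as a Riemann sum
    g = b * np.exp(-a * t)
    y_conv = np.convolve(g, u)[:len(t)] * dt

    print("max difference between the two responses:", np.max(np.abs(y_ode - y_conv)))
    print("final value:", y_ode[-1], " (exact steady state b/a =", b / a, ")")
    # The difference is at the level of the discretization error.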


    Linear, Time-Invariance and Convolution: Equivalence

    Fact: Given a linear, time-invariant system. If for all T > 0 there is a number M_T such that

        max_{0≤t≤T} |u(t)| ≤ 1  implies  max_{0≤t≤T} |y(t)| ≤ M_T,

    then the system can be represented as a convolution system, with

        ∫_0^T |g(τ)| dτ < ∞

    for all T.

    If the convolution kernel g is a finite sum of exponentially weighted sines, cosines, and polynomials in t, then it can also come from a linear ODE, or a system of coupled, 1st-order linear ODEs.

    Translation between the representations, when possible, is easy...


    Stability: Things to know

    A system L is BIBO (Bounded-Input, Bounded-Output) stable if there is a number M < ∞ such that

        max_t |y(t)| ≤ M max_t |u(t)|

    for all possible input signals u, starting from zero initial conditions.

    A system L is internally stable if all homogeneous solutions (i.e., u ≡ 0, nonzero initial conditions) decay to zero as t → ∞.

    Ignoring mathematically relevant, but physically artificial, situations, these are the same, and are equivalent to:

    for a convolution description: ∫_0^∞ |g(τ)| dτ < ∞

    for a LODE description: all roots of

        λ^n + a_1 λ^{n-1} + ... + a_{n-1} λ + a_n = 0

    (which may be complex) have negative real part

    for a state-space description: all eigenvalues of A have negative real part
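    These tests are one-liners numerically; a sketch (mine, not from the slides) for the LODE and state-space conditions:

    import numpy as np

    # LODE test: roots of lambda^n + a1*lambda^(n-1) + ... + an
    coeffs = [1.0, 3.0, 3.0, 1.0]             # lambda^3 + 3 lambda^2 + 3 lambda + 1
    roots = np.roots(coeffs)
    print("roots:", roots, " stable:", bool(np.all(roots.real < 0)))

    # State-space test: eigenvalues of A
    A = np.array([[0.0, 1.0],
                  [-2.0, -0.5]])
    eigs = np.linalg.eigvals(A)
    print("eigenvalues of A:", eigs, " stable:", bool(np.all(eigs.real < 0)))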


    Simple tools

    Frequency response for stable systems, derived from a model and/or obtained from experiment

    Behavior (model) of an interconnection of a collection of linear systems, from the individual behaviors

    Quantitative, qualitative reasoning about 1st, 2nd, and 3rd order linear differential equations

    Decrease in sensitivity and linearizing effect of feedback

    Destabilizing potential of time-delays in the feedback path

    A few relevant architectures for control of simple dynamic processes


    Frequency Response: Convolution

    Assume the convolution system is BIBO stable,

        ∫_0^∞ |g(τ)| dτ < ∞.

    The tail of the integral satisfies

        lim_{t→∞} ∫_t^∞ |g(τ)| dτ = 0,

    and for all ω ∈ R,

        G(ω) := ∫_0^∞ g(t) e^{-jωt} dt

    is well defined. Let ω ∈ R, and ū ∈ C. Apply the complex sinusoidal input u(t) = ū e^{jωt}. The output is

        y(t) = ∫_0^t g(t - τ) u(τ) dτ
             = ∫_0^t g(t - τ) ū e^{jωτ} dτ
             = ∫_0^t g(σ) e^{jω(t-σ)} dσ · ū            (using σ := t - τ)
             = e^{jωt} ∫_0^t g(σ) e^{-jωσ} dσ · ū
             = e^{jωt} [ ∫_0^∞ g(σ) e^{-jωσ} dσ - ∫_t^∞ g(σ) e^{-jωσ} dσ ] ū
             = G(ω) ū e^{jωt} + y_d(t),   where y_d(t) := -e^{jωt} ∫_t^∞ g(σ) e^{-jωσ} dσ · ū.

    Clearly lim_{t→∞} y_d(t) = 0, and the response tends to a complex sinusoid at the same frequency as the input. For stable, linear time-invariant systems,

        u(t) = e^{jωt}  implies  y_ss(t) = H(ω) e^{jωt}.
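    A numerical illustration (mine, not from the slides) with the BIBO kernel g(t) = e^{-t}, for which G(ω) = 1/(1 + jω):

    import numpy as np

    w = 2.0
    dt = 1e-3
    t = np.arange(0.0, 30.0, dt)
    g = np.exp(-t)                      # convolution kernel
    u = np.exp(1j * w * t)              # complex sinusoidal input

    # y(t_k) ~ sum_i g(t_k - t_i) u(t_i) dt  (causal discrete convolution)
    y = np.convolve(g, u)[:len(t)] * dt

    G = 1.0 / (1.0 + 1j * w)            # frequency response at w
    y_ss = G * np.exp(1j * w * t)

    late = t > 20.0                     # after the transient has died out
    print("steady-state mismatch:", np.max(np.abs(y[late] - y_ss[late])))
    # The mismatch is at the level of the discretization error.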


    Frequency Response: Other representations

    If the system is given in convolution form,

        y(t) = ∫_0^t g(t - τ) u(τ) dτ,

    then H(ω) = G(ω).

    If the system is given in linear ODE form,

        y^[n](t) + a_1 y^[n-1](t) + ... + a_{n-1} y^[1](t) + a_n y(t)
          = b_0 u^[n](t) + b_1 u^[n-1](t) + ... + b_{n-1} u^[1](t) + b_n u(t),

    then

        H(ω) = [ b_0 (jω)^n + b_1 (jω)^{n-1} + ... + b_{n-1} (jω) + b_n ] / [ (jω)^n + a_1 (jω)^{n-1} + ... + a_{n-1} (jω) + a_n ].

    Finally, if the system is given in 1st-order (state-space) form,

        ẋ(t) = A x(t) + B u(t),    y(t) = C x(t) + D u(t),

    then

        H(ω) = D + C (jωI - A)^{-1} B.
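    The formulas agree when they describe the same system; a small numerical sketch (mine) for ÿ + 3ẏ + 2y = u, with state-space data A = [[0, 1], [-2, -3]], B = [0, 1]ᵀ, C = [1, 0], D = 0:

    import numpy as np

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.0]])

    def H_statespace(w):
        return (D + C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B)[0, 0]

    def H_ode(w):
        s = 1j * w
        return 1.0 / (s**2 + 3.0 * s + 2.0)     # ratio of the two (jw)-polynomials

    for w in [0.1, 1.0, 10.0]:
        print(w, H_statespace(w), H_ode(w))     # the two columns agree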


    Complex Arithmetic: Review

    Suppose G ∈ C is not equal to zero. The magnitude of G is denoted |G| and defined as

        |G| := ( [Re G]² + [Im G]² )^{1/2}.

    The quantity ∠G is a real number, unique to within an additive 2π, which has the properties

        cos ∠G = Re G / |G|,    sin ∠G = Im G / |G|.

    Then, for any real θ,

        Re[G e^{jθ}] = Re[ (G_R + j G_I)(cos θ + j sin θ) ]
                     = G_R cos θ - G_I sin θ
                     = |G| [ (G_R/|G|) cos θ - (G_I/|G|) sin θ ]
                     = |G| [ cos ∠G cos θ - sin ∠G sin θ ]
                     = |G| cos(θ + ∠G),

        Im[G e^{jθ}] = |G| sin(θ + ∠G).


    Complex Arithmetic: Real-valued Interpretation

    g is real, u is complex, so

        y(t) := ∫_0^t g(t - τ) u(τ) dτ

    is complex. The linearity, obvious from the integral form, implies that the real part of u leads to (causes) the real part of y, and the imaginary part of u leads to the imaginary part of y; namely

        y(t) = ∫_0^t g(t - τ) u(τ) dτ  implies  y_R(t) = ∫_0^t g(t - τ) u_R(τ) dτ,   y_I(t) = ∫_0^t g(t - τ) u_I(τ) dτ.

    In steady-state (after transients decay), we saw

        u(t) = e^{jωt}  implies  y(t) = G(ω) e^{jωt}.

    The real and imaginary parts mean

        u(t) = cos ωt  implies  y(t) = |G(ω)| cos(ωt + ∠G(ω))
        u(t) = sin ωt  implies  y(t) = |G(ω)| sin(ωt + ∠G(ω))


    A most important feedback loop: feedback around an integrator

    Diagram and equations:

    [Block diagram: integrator with gain β in a unity-feedback loop; reference r, disturbance d, noise n, state x, output y]

        ẋ(t) = β ( r(t) - y(t) - n(t) )
        y(t) = d(t) + x(t)

    Eliminate x to yield

        ẏ(t) + β y(t) = β r(t) + ḋ(t) - β n(t)

    Frequency Response Functions (r → y, d → y, and n → y):

        G_RY(ω) = -G_NY(ω) = β/(jω + β),    G_DY(ω) = jω/(jω + β)

    Properties:

    If r(t) ≡ r̄, d(t) ≡ d̄, then y(t) = r̄ + e^{-βt}(y0 - r̄), where y0 = x0 + d̄.

    ḋ, not d itself, affects y. A slowly varying d has little effect on y.

    The bandwidth of the closed-loop system is β; the time constant is 1/β.

    r and n, though interpreted differently, enter in essentially the same manner; the feedback loop and integrator combine into a low-pass filter to y.

    Apparent from time simulations and frequency response function plots.
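    A short simulation (mine, not from the slides) makes the low-pass behavior from r and the washout behavior from d visible:

    import numpy as np

    # xdot = beta*(r - y - n),  y = d + x, with a step reference and a slow disturbance.
    beta = 1.0
    dt = 1e-3
    t = np.arange(0.0, 12.0 / beta, dt)
    r = np.ones_like(t)                        # step reference
    d = 0.5 * np.sin(0.05 * beta * t)          # slowly varying disturbance
    x = np.zeros_like(t)
    y = np.zeros_like(t)

    for k in range(len(t) - 1):
        y[k] = d[k] + x[k]
        x[k + 1] = x[k] + dt * beta * (r[k] - y[k])   # n = 0 here
    y[-1] = d[-1] + x[-1]

    print("y at t = 5/beta:", y[int(5.0 / (beta * dt))])   # close to r = 1
    print("y at the end   :", y[-1])                       # the slow d barely shows up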


    MIFL: Step/Frequency Responses

    [Block diagram: the same integrator loop with gain β]

        ẏ(t) + β y(t) = β r(t) + ḋ(t) - β n(t)

        G_RY(ω) = -G_NY(ω) = β/(jω + β),    G_DY(ω) = jω/(jω + β)

    Time Responses: [plots of the reference r and output y, the disturbance d, and the state x versus time, with the time axis in units of 1/β]

    Frequency Responses: [magnitude and phase plots of the r → y response and the magnitude plot of the d → y response, with the frequency axis in units of β]


    MIFL Application: Integral Control

    Process model:

        y(t) = H u(t) + d(t)

    with H an uncertain gain of the process, and d an exogenous disturbance. Goal: Regulate y to a given value r, even in the presence of slowly-varying d and measurement noise n.

    A Solution: Integral control action:

        u(t) = K_I ∫_0^t e(τ) dτ

    (equivalently: u(0) = 0, u̇(t) = K_I e(t)).

    [Block diagram: integral controller K_I/s driving the process gain H; signals r, e, x, u, d, y, n]

    After K_I is chosen, certain properties of the closed-loop system are insensitive to H; others are still 1-1 sensitive to H...

        Description                                  Value          Sensitivity to H
        Time constant                                1/(H K_I)      -1
        Time-delay (in the loop) for instability     π/(2 H K_I)    -1
        (r - y)_ss for r(t) ≡ r̄, d(t) ≡ d̄            0              0
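    A short simulation (mine) of this loop with a few values of H shows the pattern in the table: the steady-state tracking is exact for every H, while the time constant scales with 1/(H K_I).

    import numpy as np

    # Integral control of the static process y = H*u + d:
    #   udot = KI*e,  e = r - y  (no measurement noise here), r = 1, d = -0.3.
    KI, r, d = 2.0, 1.0, -0.3
    dt = 1e-3
    steps = int(30.0 / dt)

    for H in [0.5, 1.0, 2.0]:
        u = 0.0
        for _ in range(steps):
            y = H * u + d
            u += dt * KI * (r - y)
        print(f"H = {H:3.1f}:  final y = {y:.4f}   time constant 1/(H*KI) = {1/(H*KI):.2f}")
    # final y is 1.0000 in every case; only the speed of convergence changes.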


    MIFL Application: Some plots

    [Plot: several experimental process data runs in the (u, y) plane]

    Shown are several plots of the (u, y) relationship y = H u + d for different values of H and d. Fix these, and try the integral control solution to regulate y.

    [Plot: closed-loop reference r and output y versus time, in units of 1/K_I]

    Closed-loop time responses of y(t) for a staircase r: note that the time constant is affected by the variability in H, but the steady-state tracking (y = r) is not.

    [Plot: control action u (scaled by 1/K_I) versus time]

    Corresponding value of the control input u: even though the regulatory strategy is fixed, namely u(t) = K_I ∫_0^t e(τ) dτ, the value clearly depends on the specific d and H.


    What limits bandwidth? Discussion

    Without loss of generality:

    Take H_nominal := 1.

    Drop r from the discussion (r = 0, or recenter variables around the value of r).

    Give the sensor a model, S.

    Then, the closed loop (nominal) appears as

    [Block diagram: integral controller K_I/s, process H = 1 with disturbance d, sensor S with noise n; signals e, x, u, y]

    Here, K_I sets the bandwidth of the system. What limits our choice?

    Time-delay in the feedback path.

    Tradeoff between the effects of d and n on y.

    H may actually not have constant gain at all frequencies. Either:

    we know this, and use a more complex corrective strategy (Wednesday), or

    we don't know this, or choose not to figure out (for instance, too difficult and/or unreliable to predict) H's behavior at high frequencies.


    What limits bandwidth? Case 1: Lag/Delay in Feedback Path

    If the feedback signal is subject to a time delay of magnitude T, some of the properties are adversely affected.

    Diagram and equations:

    [Block diagram: the integrator loop with gain β and a delay of T in the feedback path]

        ẋ(t) = β ( r(t) - y(t - T) )
        y(t) = d(t) + x(t)

    Eliminate x to yield

        ẏ(t) = ḋ(t) + β [ r(t) - y(t - T) ],

    or

        ẏ(t) + β y(t - T) = β r(t) + ḋ(t).

    Properties: If 0 ≤ βT < π/2, then the system is stable. Time responses for βT = {0, 0.1, 0.3, 0.5, 0.7, 0.9} × π/2 are shown below. Time delay in the feedback loop degrades the system's ability to reject a rapidly changing disturbance.

    [Plots: reference r and output y versus time (in units of 1/β) for the listed values of βT]
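    A rough simulation (mine, not from the slides) of ẏ(t) + βy(t-T) = βr(t) with a unit step in r illustrates the βT < π/2 stability limit:

    import numpy as np

    beta = 1.0
    dt = 1e-3
    t = np.arange(0.0, 60.0 / beta, dt)
    r = np.ones_like(t)

    for frac in [0.0, 0.5, 0.9, 1.2]:              # fraction of the limit pi/2
        T = frac * (np.pi / 2) / beta
        delay = int(round(T / dt))
        y = np.zeros_like(t)
        for k in range(len(t) - 1):
            y_fb = y[k - delay] if k >= delay else 0.0     # delayed feedback signal
            y[k + 1] = y[k] + dt * beta * (r[k] - y_fb)
        tail = slice(3 * len(t) // 4, None)
        print(f"beta*T = {frac:.1f}*(pi/2):  peak |y - r| in the final quarter = "
              f"{np.max(np.abs(y[tail] - r[tail])):.3f}")
    # The first three cases settle toward r; the last (beta*T > pi/2) keeps growing.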


    What limits bandwidth? Case 2: Sensor Noise

    [Block diagram: integral controller K_I/s, process H = 1 with disturbance d, sensor S with noise n; signals e, x, u, y]

    Consider given power spectral densities for independent d and n:

        Φ_d(ω) = α²/ω²,    Φ_n(ω) = γ².

    With integral feedback, the PSD of y and the variance of a weighted (by a scalar q) multiple of y are

        Φ_y(ω) = (α² + K_I² γ²) / (ω² + K_I² S²),    E[(q y(t))²] = q² (α² + K_I² γ²) / (2 K_I S).

    The integral gain which minimizes the variance is K_I = α/γ, leading to a closed-loop bandwidth of BW = αS/γ, and variance

        E[(q y)²](t) = q² α γ / S.

    A specification imposes a lower bound on bandwidth. If E[(q y)²](t) ≤ M is a requirement, then we must have

        γ ≤ M S / (q² α),

    and relating this to bandwidth gives

        BW ≥ q² α² / M.

    This has implications on how much the actual process H can deviate from its idealized model within the frequency range [0, BW].
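    The minimizer is easy to confirm numerically from the variance formula above; a sketch (mine), with the disturbance and noise intensities written as α and γ as on this page:

    import numpy as np

    alpha, gamma, S, q = 1.0, 0.5, 1.0, 1.0

    def variance(KI):
        # E[(q*y)^2] = q^2*(alpha^2 + KI^2*gamma^2)/(2*KI*S), from the PSD above
        return q**2 * (alpha**2 + KI**2 * gamma**2) / (2.0 * KI * S)

    KI_grid = np.linspace(0.05, 20.0, 200000)
    KI_best = KI_grid[np.argmin(variance(KI_grid))]

    print("numerical minimizer:", KI_best, "   formula alpha/gamma =", alpha / gamma)
    print("minimum variance   :", variance(KI_best),
          "   formula q^2*alpha*gamma/S =", q**2 * alpha * gamma / S)
    print("resulting bandwidth KI*S =", KI_best * S,
          "   formula alpha*S/gamma =", alpha * S / gamma)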


    What limits bandwidth? Case 3: Process Uncertainty

    [Two block diagrams: the loop as perceived (process gain H = 1) and the loop in reality (actual process H̃), each with integral controller K_I/s, sensor S, disturbance d, and noise n]

    How much can the process H change if

        G(ω) := G_DY(ω) = 1 / (1 + H K_I S/(jω)) = jω / (jω + H K_I S)

    is not to degrade significantly? Let BW denote the bandwidth here, BW = H K_I S. Pick R > 0. It is easy to show that for all stable H̃ satisfying

        |H̃(jω) - H| / |H|  ≤  ( R/(1 + R) ) · |jω + BW| / BW,

    it follows that

        |G̃(ω)| ≤ (1 + R) |G(ω)|.

    [Plot: the allowed percentage variation versus frequency (in units of BW) for R = 0.1, 1, and 10]

    Take R = 0.1 (for example). For a guarantee of no surprises (at most 10% degradation across frequency), one should be able to say that |H̃(jω) - H| stays below the R = 0.1 bound (roughly 0.09|H| at low frequency, approaching |H| near 10·BW) for ω ∈ [0, 10·BW].

    The statement above is general, and tight, in that no stronger statement can be made.

    There probably is a better way to get the take-home message across...


    Effect of Process/Model Mismatch: Toy Example

    Take α = 1, S = 1, q = 1, and let γ range from 0.5 to 3, giving

        BW = αS/γ = 2 down to 1/3,    E[(q y)²](t) = q² α γ / S = 0.5 up to 3.

    Suppose that the (u, d) → y relationship is not simply y(t) = u(t) + d(t), but rather y(t) = f(t) + d(t), where f behaves as

        f̈(t) + 2 ζ ω_n ḟ(t) + ω_n² f(t) = ω_n² u(t)

    with ω_n = 10 and ζ = 0.1.

    [Plot: magnitude and phase frequency responses of H (= 1) and H̃. They are similar over [0, 1], differ by about 10% at ω = 3, and by 100% at ω = 8.]

    As γ decreases (and this is exploited by increasing the bandwidth), the output variance decreases. But for large bandwidths (about 1.4 and higher), the performance actually degrades as one attempts to exploit the sensor quality.

    [Plots: output variance (expected versus actual, with the noise level) as the bandwidth is increased; percentage mismatch |H̃ - H|/|H| versus frequency, compared with the R = 5 bounds]


    Multi-Input, Multi-Output MIFL

    [Block diagram: MIMO loop with controller C, sensor S, and weighting Q; signals e, u, d, y, z, n]

    Many control inputs, many disturbances, many sensors (which are not the regulated variables).

    Process, Measurement, Error criterion:

        y(t) = u(t) + d(t)
        ym(t) = S y(t) + n(t)
        z(t) = Q y(t)

    For now, assume all are of the same dimension, so S is a square matrix.

    Statistical descriptions of d and n: say, for instance,

        Φ_d(ω) = (1/ω²) Σ Σᵀ,    Φ_n(ω) = N Nᵀ.

    Note that everything is a matrix: Σ, N, S, Q. Each component of the problem has its own preferred directions, and these will interact...

    Goal: Find the best feedback strategy, min_C E[zᵀ(t) z(t)].

    Solution: Easy to use the singular value decomposition to reduce to many scalar problems (exercise)... Facts:

    The optimal control is integral control, u(t) = K_I x(t), ẋ(t) = e(t).

    K_I depends in a complicated way on the directionality/magnitudes of the matrices Σ, S, N (though not on Q).

    The feedback loop has many bandwidths (the eigenvalues of the matrix S K_I).


    Linear Algebra: Singular Value Decomposition (SVD)

    Theorem: Given M ∈ F^{n×m}. Then there exist

        U ∈ F^{n×n}, with U*U = I_n,
        V ∈ F^{m×m}, with V*V = I_m,
        an integer 0 ≤ k ≤ min(n, m), and
        real numbers σ_1 ≥ σ_2 ≥ ... ≥ σ_k > 0

    such that

        M = U [ Σ  0 ; 0  0 ] V*

    where Σ ∈ R^{k×k} is Σ = diag(σ_1, σ_2, ..., σ_k).

    We need to apply it to real, square, invertible matrices...
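    For the real, square, invertible matrices used below, numpy's SVD returns exactly this factorization (with k = n = m); a quick sketch (mine):

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((3, 3))          # real, square, (almost surely) invertible

    U, sigma, Vt = np.linalg.svd(M)          # M = U @ diag(sigma) @ Vt
    print("singular values (ordered, positive):", sigma)
    print("U orthogonal :", np.allclose(U @ U.T, np.eye(3)))
    print("V orthogonal :", np.allclose(Vt @ Vt.T, np.eye(3)))
    print("reconstruction error:", np.max(np.abs(U @ np.diag(sigma) @ Vt - M)))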


    Multi-Input, Multi-Output MIFL: Solution

    [Block diagram: the same MIMO loop with controller C, sensor S, and weighting Q]

    Process, Measurement, Error criterion:

        y(t) = u(t) + d(t)
        ym(t) = S y(t) + n(t)
        z(t) = Q y(t)

    Statistical descriptions of d and n: say, for instance,

        Φ_d(ω) = (1/ω²) Σ Σᵀ,    Φ_n(ω) = N Nᵀ.

    Solution:

    1. Calculate the SVD  Σ =: U_Σ Σ̄ V_Σᵀ

    2. Calculate the SVD  N =: U_N Σ_N V_Nᵀ

    3. Calculate the SVD  Σ_N⁻¹ U_Nᵀ S U_Σ Σ̄ =: U Σ̂ Vᵀ

    4. Define  K_I := U_Σ Σ̄ V Uᵀ Σ_N⁻¹ U_Nᵀ

    This is a special case of the LQG problem.


    Linear-Quadratic Gaussian: Problem Statement

    General dynamical system setup for the process:

        ẋ(t) = A x(t) + B1 d(t) + B2 u(t)
        e(t) = C1 x(t)            + D12 u(t)
        y(t) = C2 x(t) + D21 d(t) + D22 u(t)

    Assumptions:

    All matrices are known.

    d is zero mean, Φ_d(ω) = I (absorb the actual PSD into the process model).

    Measure y, manipulate u.

    Goal: Find the best dynamic, linear control strategy (matrices F, G, H, L)

        ξ̇(t) = F ξ(t) + G y(t)
        u(t) = H ξ(t) + L y(t)

    to minimize E[eᵀ(t) e(t)].

    Solution: Well-known since the 1960s (Kalman, Bucy, Kushner, Wonham, Fleming, and others).

    Computation: Easy to compute the controller matrices, by solving 2 quadratic matrix equations. The ordered Schur decomposition is the main tool.

    Issues (1978): There are no guarantees as to how sensitive the achieved closed-loop performance is to variations in the process behavior [Doyle, IEEE TAC].

    Robust Control (1978-199X): Tempering the optimization based on a description of what is possibly unreliable in the process model.


    2nd MIFL: Controlling the position of an inertia

    Diagram and equations:

    [Block diagram: PID-style loop with gains K_P, K_I, K_D around the inertia 1/m; signals r, u, x, ẋ, d, n1, n2]

        y2 = ẋ + n2,    y1 = x + n1,    m ẍ(t) = u(t) + d(t)

    Controller equations:

        e(t) = r(t) - y1(t)
        ż(t) = e(t)
        u(t) = K_P e(t) + K_I z(t) - K_D y2(t)

    Eliminating z and u:

        m x^[3](t) + K_D ẍ(t) + K_P ẋ(t) + K_I x(t)
          = K_I r(t) + K_P ṙ(t) + ḋ(t) - K_P ṅ1(t) - K_I n1(t) - K_D ṅ2(t)

    Facts:

    Knowledge of m implies the characteristic polynomial can be set as desired.

    With n_i ≡ 0, r(t) ≡ r̄, d(t) ≡ d̄: x(t) → r̄.

    K_P gives the initial control reaction to error.

    K_I keeps fighting low-frequency biases.

    K_D adds damping.


    2nd MIFL: Design Equations

    The characteristic polynomial is

        p(λ) = λ³ + (K_D/m) λ² + (K_P/m) λ + K_I/m.

    Parametrize the roots with positive real numbers ζ, ω_n, γ:

        -ζω_n ± jω_n √(1 - ζ²),    -γω_n,

    which implies

        p(λ) = (λ² + 2ζω_n λ + ω_n²)(λ + γω_n) = λ³ + ω_n(2ζ + γ)λ² + ω_n²(2ζγ + 1)λ + γω_n³.

    Matching coefficients yields the design equations

        K_I = m γ ω_n³
        K_P = m ω_n² (2ζγ + 1)
        K_D = m ω_n (2ζ + γ)
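    A quick sanity check of the design equations (mine, not from the slides): pick m, ζ, ω_n, γ, compute the gains, and confirm the characteristic polynomial has the intended roots.

    import numpy as np

    m, zeta, wn, gamma = 2.0, 0.707, 1.5, 0.4

    KI = m * gamma * wn**3
    KP = m * wn**2 * (2 * zeta * gamma + 1)
    KD = m * wn * (2 * zeta + gamma)

    roots = np.roots([1.0, KD / m, KP / m, KI / m])       # roots of p(lambda)
    target = np.array([-zeta * wn + 1j * wn * np.sqrt(1 - zeta**2),
                       -zeta * wn - 1j * wn * np.sqrt(1 - zeta**2),
                       -gamma * wn])
    print("closed-loop roots:", np.sort_complex(roots))
    print("intended roots   :", np.sort_complex(target))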

    Look at results for ζ = 0.707 and γ = 0, 0.4, 2.5. Start with a robust stability calculation: how much variation can be tolerated in the process behavior, which nominally is m ẍ(t) = u(t)?

    [Plot: the maximum allowable percentage variation in U → X (described in terms of frequency response) for which closed-loop stability is guaranteed to be maintained, versus frequency in units of ω_n]


    Results: Frequency Response Functions

    [Plot: magnitude of the frequency response from R → X,

        ( K_P(jω) + K_I ) / ( m(jω)^3 + K_D(jω)^2 + K_P(jω) + K_I ) ]

    [Plot: phase of the frequency response from R → X]

    [Plot: magnitude of the frequency response from D → X (normalized)]

    [Plot: magnitude of the frequency responses from N1, N2 → X]


    Results: Time Responses

    [Plots versus normalized time: the applied reference signal r, the applied disturbance signal d, the output (x) response, and the control action u]


    Reduction in sensitivity from feedback

    [Block diagram: gain L in a unity-feedback loop; signals r, e, d, y]

    The constraints are

        y = d + L e,    e = r - y.

    Eliminating e (for instance) gives

        y = [ L/(1 + L) ] r + [ 1/(1 + L) ] d  =:  T(L) r + S(L) d.

    Obviously, L > 0 (which is negative feedback) means 1/(1 + L) < 1. Suppose L changes to L + Δ. Obviously T changes as well. Compare the percentage change in T to the percentage change in L:

        (% change in T) / (% change in L) = [ (T(L+Δ) - T(L)) / T(L) ] / [ ((L+Δ) - L) / L ]
                                          = [ (T(L+Δ) - T(L)) / Δ ] · [ L / T(L) ].

    Compute for differential changes in L, so take the limit Δ → 0, giving

        (% change in T) / (% change in L) = (dT/dL) · L / T(L) = [ 1/(1 + L)² ] · L / T(L) = 1/(1 + L) = S.

    This is why Bode called S the sensitivity function.
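    The differential computation is a one-line check in sympy (my own verification, not from the slides):

    import sympy as sp

    L = sp.symbols('L', positive=True)
    T = L / (1 + L)
    S = 1 / (1 + L)

    ratio = sp.simplify(sp.diff(T, L) * L / T)    # (dT/dL) * L / T(L)
    print(ratio)                                  # 1/(L + 1)
    print(sp.simplify(ratio - S))                 # 0: the ratio equals S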


    Linearizing effect of Feedback

    [Block diagram: gain K followed by a static nonlinearity Φ(·) in a unity-feedback loop; signals r, e, d, y]

    The constraints are

        y = d + Φ(e),    e = K(r - y)   (equivalently, y = r - e/K),

    whose solution (for certain Φ) implicitly defines a function y(r, d). The chain rule implies

        ∂y/∂r = K Φ'(e) (1 - ∂y/∂r),    ∂y/∂d = 1 - K Φ'(e) ∂y/∂d,

    where e = K(r - y(r, d)). Rearranging gives

        ∂y/∂r = K Φ'(e) / (K Φ'(e) + 1),    ∂y/∂d = 1 / (K Φ'(e) + 1).

    Note: if K Φ' >> 1 everywhere, then the function y is more linear in r than Φ, and nearly unaffected by d.

    [Plot: graphical solution of y = Φ(e) against the line y = r - e/K, showing the resulting y(r)]


    Linearizing effect of Feedback: Dynamic Example

    [Block diagram: integrator followed by the static nonlinearity Φ(·) in a unity-feedback loop; signals r, e, d, y]

    Replace K by an integrator and inject a sine wave, r = 10 sin 0.01t. At ω = 0.01, the gain from the integrator is 100.
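    A simulation sketch (mine, not from the slides) of this loop with nonlinear function 1, Φ(x) = x + 0.01x³:

    import numpy as np

    # edot(t) = r(t) - y(t),  y(t) = phi(e(t)),  r(t) = 10*sin(0.01*t), d = 0.
    phi = lambda x: x + 0.01 * x**3

    dt = 0.05
    t = np.arange(0.0, 2000.0, dt)            # a bit over three periods of r
    r = 10.0 * np.sin(0.01 * t)

    e = 0.0
    y = np.zeros_like(t)
    for k in range(len(t)):
        y[k] = phi(e)
        e += dt * (r[k] - y[k])

    last_period = t > 2000.0 - 2 * np.pi / 0.01
    print("peak |y - r| over the last period:", np.max(np.abs(y - r)[last_period]))
    # Roughly 0.1, about 1% of the reference amplitude: the loop output tracks r
    # closely even though phi is noticeably nonlinear over this range.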

    [Plots of the two static nonlinearities: function 1, y = x + 0.01x³; function 2, y = 2e for e < 0 and y = 4e for e ≥ 0]


    Linearizing effect of Feedback: Dynamic Example (cont'd)

    [Block diagram: the same integrator plus static nonlinearity loop]

    However, if r = 10sin0.1t, the gain from the integrator is 10, and the

    time responses for this system with the same nonlinear functions are

    shown.

    [Plots: time responses of the reference, the closed-loop output, and the open-loop output for nonlinear functions 1 and 2; scatter plots of the reference r versus the input e, compared to the inverses of the nonlinear functions]

    Notice that the output, y, does not track the reference, r, as well as when r = 10 sin 0.01t. Also, the scatter plots of r vs. e have more dispersion, and indicate that e does not invert Φ(·) as well as in the previous case.

    This example shows that feedback can have a linearizing effect when the gain is large enough.