Proceedings of the 35th Conference on Decision and Control, Kobe, Japan, December 1996. WP18 5:50

Identification of Linear Parameter-Varying Systems via LFTs

Lawton H. Lee and Kameshwar Poolla
Department of Mechanical Engineering, University of California, Berkeley CA 94720

Abstract

This paper considers the identification of Linear Parameter-Varying (LPV) systems having linear-fractional parameter dependence. We present a natural prediction error method, using gradient- and Hessian-based nonlinear optimization algorithms to minimize the cost function. Computing the gradients and (approximate) Hessians is shown to reduce to simulating LPV systems and computing inner products. Issues relating to initialization and identifiability are discussed. The algorithms are demonstrated on a numerical example.

1 Introduction

This paper considers the identification of Linear Parameter-Varying (LPV) systems. LPV models have received a great deal of recent attention (for example, see [1, 4, 13, 16] and the references therein) in the context of developing systematic techniques for designing gain-scheduled controllers.

An LPV system is a linear system that depends on one or more time-varying parameters and hence represents a family of LTV systems (one for each parameter trajectory). The parameters are considered measurable in real time but not known in advance. Such models have been used effectively in missile [2, 14], aircraft [15], robotic [3], and process [5] control problems.

We consider here parametric identification methods for LPV systems; as is common [1, 13], we deal with the case where the time-varying parameters enter in linear-fractional fashion. Nemani et al. [1] have been able to reduce simplified forms of the problem (e.g., exact or noisy state measurement, equation error noise, scalar time-varying parameter block) to least squares. Still, investigation of this problem is in its relative infancy.

The approach considered here, standard in parametric LTI system identification (Box-Jenkins and output-error models, for example [9]), consists of three steps:

1. Form a cost function based on prediction error.

2. Generate an initial parameter estimate.

3. Minimize the cost function via gradient- and Hessian-based nonlinear programming. These algorithms are sensitive to initial seeds, so the preceding step merits considerable attention.
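As a point of reference, this three-step recipe can be sketched in a few lines of Python. The sketch below uses an off-the-shelf nonlinear least-squares solver as a stand-in for the custom gradient/Gauss-Newton iteration developed in Section 4, and `predict_error` is a hypothetical routine implementing the LPV predictor simulation of Section 3; both names are ours, not the paper's.

```python
from scipy.optimize import least_squares

def identify_lpv(data, predict_error, theta0):
    """Three-step prediction-error identification skeleton:
       1. cost: the residual is the stacked prediction-error sequence e(theta);
       2. seed: theta0 is the initial parameter estimate (see Section 5);
       3. minimize: an off-the-shelf nonlinear least-squares solver stands in
          for the gradient/Gauss-Newton iterations developed in Section 4.
       `predict_error(theta, data)` is a hypothetical LPV predictor simulation
       returning an (L, p) array of prediction errors."""
    residuals = lambda theta: predict_error(theta, data).ravel()
    sol = least_squares(residuals, theta0)
    return sol.x
```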

The remainder of this paper is organized as follows. Section 2 establishes some notation. In Section 3 we formulate the identification problem and form the cost function. In Section 4 we present nonlinear optimization algorithms for parameter estimation and provide formulas for the gradient and (approximate) Hessian of the cost. In Section 5 we discuss identifiability and the generation of initial seeds. Section 6 gives a numerical example. Conclusions are given in Section 7.

2 Notation

Denote a discrete-time linear (LTI/LTV/LPV) system

$$\begin{aligned}
x(t+1) &= A(t)\,x(t) + B(t)\,u(t)\\
y(t)   &= C(t)\,x(t) + D(t)\,u(t)
\end{aligned}$$

by $M$, and let $S(M) := \begin{bmatrix} A & B\\ C & D\end{bmatrix}$ denote the corresponding state-space matrix. For properly partitioned matrices or systems $M = \begin{bmatrix} M_{11} & M_{12}\\ M_{21} & M_{22}\end{bmatrix}$ and $\Delta$, let $F_u(M, \Delta)$ denote the familiar LFT interconnection

$$F_u(M, \Delta) = M_{22} + M_{21}\,\Delta\,(I - M_{11}\Delta)^{-1} M_{12}.$$

Given finite sequences $\{u(t), v(t) \in \mathbb{R}^n\}_{t=0}^{L-1}$, let $\langle u, v\rangle_L$ denote the inner product

$$\langle u, v\rangle_L := \frac{1}{L}\sum_{t=0}^{L-1} u^T(t)\,v(t).$$

Given matrices $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{p\times q}$, let $A \otimes B \in \mathbb{R}^{mp\times nq}$ denote the Kronecker product.
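This notation translates directly into code. The following minimal numpy sketch (function names and the convention that the (1,1) block of $M$ is sized to match $\Delta$ are our assumptions) evaluates the upper LFT and the inner product; the Kronecker product is already provided by `np.kron`.

```python
import numpy as np

def upper_lft(M, Delta):
    """F_u(M, Delta) = M22 + M21 Delta (I - M11 Delta)^{-1} M12, with the
    (1,1) block of M sized to match the square parameter block Delta."""
    r = Delta.shape[0]
    M11, M12 = M[:r, :r], M[:r, r:]
    M21, M22 = M[r:, :r], M[r:, r:]
    return M22 + M21 @ Delta @ np.linalg.solve(np.eye(r) - M11 @ Delta, M12)

def inner_product(u, v):
    """<u, v>_L = (1/L) sum_t u(t)^T v(t), for sequences stored as (L, n) arrays."""
    return np.einsum('ti,ti->', u, v) / u.shape[0]

# The Kronecker product A (x) B of this section is numpy's np.kron(A, B).
```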

Finally, we should keep in mind that in this paper two different entities are referred to as "parameters": the time-varying parameters or scheduling variables that are measured, and the unknown structural parameters that are estimated. It should be clear from the context which of the two is meant.

3 Problem Formulation

Consider the $n$th-order discrete-time LPV plant $P$, where $\delta \in \mathbb{R}^s$ is the time-varying, measured parameter and $\theta \in \mathbb{R}^N$ is the fixed, unknown parameter. Assume that $P = F_u(M(\theta), \Delta)$, a feedback interconnection of the $n$th-order LTI system $M(\theta)$ and the $r \times r$ time-varying parameter block

$$\Delta := \mathrm{diag}(\delta_1 I_{r_1}, \ldots, \delta_s I_{r_s}) \qquad (3.3)$$

where $r := r_1 + \cdots + r_s$. Fig. 1 depicts the system, along with the measured output $y \in \mathbb{R}^p$, measured input $u \in \mathbb{R}^m$, and noise $e \in \mathbb{R}^p$.
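A minimal simulation sketch of such an LFT-structured LPV plant is given below. The partitioning of the state-space matrix of $M$ into channels (rows: $x^+, z, y$; columns: $x, w, u$), the omission of the noise channel $e$, and all function names are our assumptions; well-posedness of the $\Delta$ loop is taken for granted.

```python
import numpy as np

def simulate_lpv_lft(S, n, r_blocks, delta_traj, u_traj):
    """Simulate P = F_u(M, Delta(t)) with Delta(t) = diag(delta_1(t) I_{r_1}, ...),
    given the state-space matrix S of the LTI part M.  Assumed partition
    (rows: x+, z, y / cols: x, w, u); the noise channel e is omitted here.
    delta_traj: (L, s) scheduling trajectory; u_traj: (L, m) input trajectory."""
    r = sum(r_blocks)
    A,  Bw,  Bu  = S[:n, :n],    S[:n, n:n+r],    S[:n, n+r:]
    Cz, Dzw, Dzu = S[n:n+r, :n], S[n:n+r, n:n+r], S[n:n+r, n+r:]
    Cy, Dyw, Dyu = S[n+r:, :n],  S[n+r:, n:n+r],  S[n+r:, n+r:]

    L, p = u_traj.shape[0], S.shape[0] - n - r
    x, y = np.zeros(n), np.zeros((L, p))
    for t in range(L):
        Delta = np.diag(np.repeat(delta_traj[t], r_blocks))  # diag(delta_i I_{r_i})
        # Close the loop w = Delta z (assumes I - Dzw Delta is invertible).
        z = np.linalg.solve(np.eye(r) - Dzw @ Delta, Cz @ x + Dzu @ u_traj[t])
        w = Delta @ z
        y[t] = Cy @ x + Dyw @ w + Dyu @ u_traj[t]
        x = A @ x + Bw @ w + Bu @ u_traj[t]
    return y
```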


Figure 1: The plant

As is standard [9], we assume the noise model to be stably invertible (hence $D_{12}$ is nonsingular). Inverting the noise model produces the LPV predictor $\hat{P} = F_u(\hat{M}(\theta), \Delta)$, shown in Fig. 2. Here $e$ is a (weighted) one-step-ahead prediction error and $\hat{M}(\theta)$ is obtained from $M(\theta)$ by this inversion.

The following result shows the equivalence of the model sets described by $P$ and $\hat{P}$ via $M(\theta)$ and $\hat{M}(\theta)$.

Lemma 3.5 The state-space data $S(M)$ and $S(\hat{M})$ are related by a bijection $\mathcal{T}$.


    Figure 2: The predictor and prediction error

The associated model sets are therefore equivalent.

The identification problem is now the following: given measurements $\{y(t), u(t), \delta(t)\}_{t=0}^{L-1}$, find $\theta$ to minimize the mean-square prediction error as represented by the cost function

$$J(\theta) = \tfrac{1}{2}\,\langle e(\theta),\, e(\theta)\rangle_L \qquad (3.6)$$

where $e(\theta)$ is as shown in Fig. 2. In general, such prediction error methods may produce biased estimates [9]. As is well known, this can be remedied by maximum-likelihood estimation [6]. In our case, these two estimates coincide and are asymptotically unbiased if and only if $D_{12}(\theta)$ is known (i.e., independent of $\theta$).
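For concreteness, the cost (3.6) can be evaluated as below once a predictor simulation is available; `predictor` is a hypothetical stand-in for the LPV predictor of Fig. 2, and the $1/L$ scaling follows the mean-square reading of the inner product.

```python
import numpy as np

def prediction_error_cost(theta, data, predictor):
    """J(theta) = 0.5 <e(theta), e(theta)>_L as in (3.6).  `predictor(theta, data)`
    is a hypothetical stand-in for the LPV predictor of Fig. 2, returning the
    (L, p) prediction-error sequence e(theta)."""
    e = predictor(theta, data)
    return 0.5 * np.sum(e * e) / e.shape[0]
```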

4 Gradient-Based Parameter Estimation

In this section we present an iterative gradient-based scheme for minimizing the cost function $J(\theta)$. These techniques draw from existing identification methods for LTI systems (represented by the special case $r = 0$). The gradient $g(\theta) \in \mathbb{R}^N$ and [the positive semi-definite approximation of] the Hessian $H(\theta) \in \mathbb{R}^{N\times N}$ of $J(\theta)$ are given by

$$g_k(\theta) = \Bigl\langle \frac{\partial e}{\partial \theta_k}(\theta),\ e(\theta) \Bigr\rangle_L \qquad (4.1)$$

$$H_{ij}(\theta) = \Bigl\langle \frac{\partial e}{\partial \theta_i}(\theta),\ \frac{\partial e}{\partial \theta_j}(\theta) \Bigr\rangle_L \qquad (4.2)$$

Denoting the Jacobian of $e(\theta)$ by $E(\theta)$, we can write $g(\theta)$ and $H(\theta)$ as

$$g(\theta) = \frac{1}{L}\sum_{t=0}^{L-1} E^T(t)\,e(t), \qquad H(\theta) = \frac{1}{L}\sum_{t=0}^{L-1} E^T(t)\,E(t). \qquad (4.3),\ (4.4)$$
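A direct numpy rendering of these inner-product formulas is shown below, with the error sequence stored as an $(L, p)$ array and its Jacobian as an $(L, p, N)$ array; the array layout is our choice.

```python
import numpy as np

def gradient_and_gn_hessian(E, e):
    """g_k  = < de/dtheta_k , e >_L           (cf. (4.1))
       H_ij = < de/dtheta_i , de/dtheta_j >_L (cf. (4.2), the PSD Gauss-Newton
       approximation).  e: (L, p) error sequence; E: (L, p, N) Jacobian sequence."""
    L = e.shape[0]
    g = np.einsum('tpk,tp->k', E, e) / L
    H = np.einsum('tpi,tpj->ij', E, E) / L
    return g, H
```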


Given $g(\theta)$ and $H(\theta)$, we can estimate $\theta$ using the steepest-descent algorithm

$$\hat\theta(k+1) = \hat\theta(k) - \mu(k)\,g(\hat\theta(k)) \qquad (4.5)$$

or the Gauss-Newton algorithm

$$\hat\theta(k+1) = \hat\theta(k) - \mu(k)\,H^{-1}(\hat\theta(k))\,g(\hat\theta(k)) \qquad (4.6)$$

The damping factor $\mu(k)$ can be held constant or chosen by a line search along the step direction.
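A minimal sketch of the two updates is given below; the ridge term guarding against a singular $H$ and the backtracking rule for $\mu(k)$ are our additions, standing in for whatever line search the authors actually use.

```python
import numpy as np

def gauss_newton_step(theta, g, H, mu=1.0, ridge=1e-8):
    """One damped Gauss-Newton update (4.6); setting H = I recovers the
    steepest-descent update (4.5).  The ridge term, our addition, guards
    against a nearly singular H."""
    step = np.linalg.solve(H + ridge * np.eye(H.shape[0]), g)
    return theta - mu * step

def backtracking_mu(J, theta, step_dir, mu0=1.0, beta=0.5, max_tries=20):
    """One simple way to choose the damping factor mu(k) by line search:
    shrink mu until the cost decreases along the step direction."""
    J0, mu = J(theta), mu0
    for _ in range(max_tries):
        if J(theta - mu * step_dir) < J0:
            return mu
        mu *= beta
    return mu
```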

4.1 Computing Gradients & Hessians

For notational ease, we will hereafter suppress the dependence on $\theta$ where it can be readily inferred. The following result gives formulas for the sensitivity functions $\partial e/\partial\theta_k$ ($k = 1, \ldots, N$), from which the gradient and approximate Hessian can be obtained directly via the inner-product formulas (4.1)-(4.2) or (4.3)-(4.4).

Lemma 4.7 Define the LPV systems

$$Q_1 := F_u(\,\cdot\,, \Delta), \qquad Q_2 := F_u(\,\cdot\,, \Delta).$$

Given $\{u(t), y(t), \delta(t)\}_{t=0}^{L-1}$ and $\theta$, obtain sequences $\{x(t) \in \mathbb{R}^n,\ w(t) \in \mathbb{R}^r,\ e(t) \in \mathbb{R}^p\}_{t=0}^{L-1}$ via the LPV simulation (4.8).

Each sensitivity function $\partial e/\partial\theta_k$ of the error signal $e$ can then be expressed as

$$\frac{\partial e}{\partial \theta_k} = Q_2\,\frac{\partial S(\hat M)}{\partial \theta_k}\,v \qquad (4.9)$$

where $v := [x^T\ w^T\ u^T\ y^T]^T$. The Jacobian $E$ can be obtained either by executing (4.9) for each $k = 1, \ldots, N$ or via the more efficient batch simulation

$$\begin{aligned}
X(t+1) &= \hat A\,X(t) + \hat B_w\,W(t) + F_X(t) \qquad &(4.10)\\
Z(t)   &= \hat C_z\,X(t) + \hat D_{zw}\,W(t) + F_Z(t) \qquad &(4.11)\\
E(t)   &= \hat C_e\,X(t) + \hat D_{ew}\,W(t) + F_E(t) \qquad &(4.12)\\
W(t)   &= \Delta(t)\,Z(t) \qquad &(4.13)
\end{aligned}$$

where

$$\begin{bmatrix} F_X(t)\\ F_Z(t)\\ F_E(t)\end{bmatrix} = \begin{bmatrix}\dfrac{\partial S(\hat M)}{\partial \theta_1}\,v(t) & \cdots & \dfrac{\partial S(\hat M)}{\partial \theta_N}\,v(t)\end{bmatrix} \qquad (4.14)$$
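The batch simulation (4.10)-(4.14) can be sketched as follows, under the same partitioning and well-posedness assumptions as before; the routine returns the Jacobian sequence $E(t)$ from which (4.1)-(4.2) or (4.3)-(4.4) are evaluated. Matrix and function names are ours.

```python
import numpy as np

def batch_sensitivity_simulation(S_hat, dS_dtheta, v_traj, Delta_traj, n, r):
    """Batch simulation (4.10)-(4.14): propagate the sensitivity state X(t) (n x N)
    and return the error Jacobian E(t) (p x N) for all t.  S_hat is the state-space
    matrix of the predictor M_hat with assumed partition (rows: x+, z, e / cols:
    x, w, u, y); dS_dtheta is a list of the N derivative matrices dS_hat/dtheta_k;
    v_traj[t] = [x(t); w(t); u(t); y(t)]; Delta_traj[t] is the r x r diag(delta_i I_{r_i})."""
    A,  Bw  = S_hat[:n, :n],    S_hat[:n, n:n+r]
    Cz, Dzw = S_hat[n:n+r, :n], S_hat[n:n+r, n:n+r]
    Ce, Dew = S_hat[n+r:, :n],  S_hat[n+r:, n:n+r]

    L, p, N = v_traj.shape[0], S_hat.shape[0] - n - r, len(dS_dtheta)
    X, E_jac = np.zeros((n, N)), np.zeros((L, p, N))
    for t in range(L):
        # Driver (4.14): k-th column is dS_hat/dtheta_k times v(t), split by block rows.
        F = np.column_stack([dS_k @ v_traj[t] for dS_k in dS_dtheta])
        FX, FZ, FE = F[:n], F[n:n+r], F[n+r:]
        Delta = Delta_traj[t]
        # (4.11)/(4.13): close the loop W = Delta Z (assumes I - Dzw Delta invertible).
        Z = np.linalg.solve(np.eye(r) - Dzw @ Delta, Cz @ X + FZ)
        W = Delta @ Z
        E_jac[t] = Ce @ X + Dew @ W + FE        # (4.12)
        X = A @ X + Bw @ W + FX                 # (4.10)
    return E_jac
```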

Calculating the gradient and (approximate) Hessian of $J(\theta)$ therefore reduces to $N+1$ LPV simulations and $N(N+1)/2 + N$ inner products, where $N$ is the number of unknown parameters. Let us mention two examples of special interest.

Example 4.15 Choose the vector $\theta$ of estimated parameters to be the elements of the state-space matrix $S(\hat M)$ ordered by columns (left to right). With this parameterization, (4.14) assumes the form

$$\begin{bmatrix} F_X(t)\\ F_Z(t)\\ F_E(t)\end{bmatrix} = v^T(t) \otimes I_{n+r+p}.$$
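The identity behind this example is easy to verify numerically: for a parameter vector that collects the entries of a $q \times c$ matrix column by column, the stacked driver of (4.14) is exactly $v^T(t) \otimes I_q$. A small self-contained check, with illustrative dimensions of our choosing, is below.

```python
import numpy as np

# Check: for theta = the entries of a q x c matrix S ordered column by column,
# the stacked driver [dS/dtheta_1 v, ..., dS/dtheta_N v] equals v^T (x) I_q.
q, c = 4, 6                       # stands for q = n + r + p, c = n + r + m + p
v = np.random.randn(c)

cols = []
for j in range(c):                # columns of S, left to right
    for i in range(q):
        dS = np.zeros((q, c)); dS[i, j] = 1.0   # dS/dtheta_k is a single-entry matrix
        cols.append(dS @ v)
F = np.column_stack(cols)         # q x (q*c)

assert np.allclose(F, np.kron(v[None, :], np.eye(q)))
```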

Example 4.16 Choose the vector $\theta$ of estimated parameters to be the elements of the state-space matrix $S(\hat M)$ ordered by rows (top to bottom). Then (4.14) assumes the form

$$\begin{bmatrix} F_X(t)\\ F_Z(t)\\ F_E(t)\end{bmatrix} = I_{n+r+p} \otimes v^T(t)$$

where $v := [x^T\ w^T\ u^T\ y^T]^T$.

4.2 Gradient Computation Using Adjoints

We begin with a well-known result on adjoints of LTI, LTV, or LPV systems.

Lemma 4.17 Consider the discrete-time linear system $G$ governed by

$$\begin{aligned}
x(t+1) &= A(t)\,x(t) + B(t)\,u(t)\\
y(t)   &= C(t)\,x(t) + D(t)\,u(t)
\end{aligned}$$

for $t = 0, \ldots, L-1$ and $x(0) = 0$. The adjoint $G^*$ of $G$ satisfies

$$\langle G u,\, z\rangle_L = \langle u,\, G^* z\rangle_L$$

for all sequences $\{u(t), z(t)\}_{t=0}^{L-1}$ and is governed by

$$\begin{aligned}
\xi(t-1) &= A^T(t)\,\xi(t) + C^T(t)\,z(t)\\
u_*(t)   &= B^T(t)\,\xi(t) + D^T(t)\,z(t)
\end{aligned}$$

for $t = L-1, \ldots, 0$ and $\xi(L-1) = 0$.

Suppose we only need to compute the gradient $g$ of $J$ (e.g., for the steepest-descent estimation algorithm (4.5)). Substituting (4.9) into (4.1) and using the adjoint property, we can express each element of $g$ as

$$g_k = \Bigl\langle \frac{\partial S(\hat M)}{\partial \theta_k}\,v,\ Q_2^*\,e \Bigr\rangle_L$$

where the adjoint operator $Q_2^*$ is as in Lemma 4.17.
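A sketch of this adjoint-based computation is below: a backward-in-time pass implementing Lemma 4.17 for the closed-loop system $Q_2$, followed by the inner products that assemble $g$. Representing $Q_2$ by callables returning its time-varying matrices (its $\Delta$ loop closed at each $t$), and all names, are our assumptions.

```python
import numpy as np

def adjoint_simulate(A, B, C, D, z_traj):
    """Backward-in-time adjoint simulation of Lemma 4.17 applied to Q_2:
         xi(t-1) = A(t)^T xi(t) + C(t)^T z(t)
         u*(t)   = B(t)^T xi(t) + D(t)^T z(t)
       for t = L-1, ..., 0 with xi(L-1) = 0.  A, B, C, D are callables returning
       the time-varying matrices of Q_2 (its Delta loop closed at each t).
       With z_traj = e, the returned sequence u* is the signal Q_2^* e."""
    L = z_traj.shape[0]
    xi = np.zeros(A(0).shape[0])
    u_star = np.zeros((L, B(0).shape[1]))
    for t in reversed(range(L)):
        u_star[t] = B(t).T @ xi + D(t).T @ z_traj[t]
        xi = A(t).T @ xi + C(t).T @ z_traj[t]
    return u_star

def gradient_via_adjoint(dS_dtheta, v_traj, xi_traj):
    """g_k = < (dS_hat/dtheta_k) v , xi >_L, with xi = Q_2^* e from the backward
    pass above: one forward and one backward LPV simulation, then N inner products."""
    L = v_traj.shape[0]
    return np.array([np.sum(xi_traj * (v_traj @ dS_k.T)) / L for dS_k in dS_dtheta])
```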

This clearly reduces the computation of $g$ to only two LPV simulations (one forward in time, one backward) and $N$ inner products. The following example illustrates this point.

Example 4.19 Suppose the parameter vector consists of the elements of $S(\hat M)$. Given $\theta \in \mathbb{R}^N$ and measurements $\{u(t), y(t), \delta(t)\}_{t=0}^{L-1}$, obtain $\{x(t), w(t), e(t)\}_{t=0}^{L-1}$ via (4.8). Also compute the signals $\xi = Q_2^*\,e$ and $v = [x^T\ w^T\ u^T\ y^T]^T$. Then the gradient vector is determined by the entries of $\tfrac{1}{L}\,\Xi\,V^T$, where $\Xi = [\xi(0)\ \cdots\ \xi(L-1)]$ and $V = [v(0)\ \cdots\ v(L-1)]$.

4.3 Recursive Algorithms

The Gauss-Newton algorithm (4.6) has a natural re