In general, no experimental measurements are available for the responses of such complex dynamical systems. Consequently, the parameter uncertainties and the model uncertainties have to be taken into account in the computational model in order to improve the predictability and the robustness of the predictions. We then need to model the uncertainties in the dynamical system. It is known that several approaches can be used to take into account the uncertainties in the computational models of complex dynamical systems in the low- and medium-frequency ranges (method of intervals, fuzzy sets, probabilistic approach, etc.). The most efficient and most powerful mathematical tool adapted to the modeling of uncertainties is the probabilistic approach, as soon as probability theory can be used.

The parametric probabilistic approach, which includes the stochastic finite element method, is the most efficient method to address parameter uncertainties in predictive models. Such an approach consists in modeling the vector-valued parameter by a vector-valued random variable for which the probability distribution must be constructed using information theory [56] and the available information (for instance, consisting of given algebraic properties and of a given mean value). If a lot of experimental data are available, then the Bayesian approach can be used to update such a probability distribution (see for instance [64,66]). However, the parametric probabilistic approach cannot address model uncertainties, as is demonstrated, for example, in [61,62]. In addition, the use of the parametric probabilistic approach of the parameter uncertainties for the analysis of a complex dynamical system generally requires the introduction of a very large number of random variables. This is due to the fact that the dynamical responses can be very sensitive to many parameters relative to the boundary conditions or to the geometry (such as the thicknesses of plates or the curvatures of panels), etc. Typically, several hundred thousand random variables can be necessary to take parameter uncertainties into account in a complex dynamical system. It should be noted that the construction of the probabilistic model for a large number of random parameters is not so easy to carry out. In addition, for such a complex dynamical system there are generally no measurements available. If very few measurements are available, then the identification of a large number of probability distributions using experimental data (using the Bayesian approach, the maximum likelihood method or another approach leading to an optimization problem to be solved) can become unrealistic.

In this context, for which no measurements or very few measurements are available, it can be concluded that very little can be learnt from the experimental data. The idea is then to develop an approach for which the uncertain computational model can mainly learn from the available information (such as algebraic properties and given mean values) rather than from experimental data. The nonparametric probabilistic approach [57–63], recently proposed and based on the use of random matrix theory, is a way to address both model uncertainties and parameter uncertainties without using experimental data. This nonparametric approach introduces a very small number of parameters (typically seven parameters for a structural-acoustic system) which allows the level of uncertainties to be controlled. In the particular case for which very few measurements are available, the identification of these parameters using experimental data is realistic and can be performed with adapted mathematical statistical tools such as the mean-square method or the maximum likelihood method.

The objective of this paper is to present formulations for the experimental validation of complex uncertain dynamical systems for which uncertainties are taken into account by using the nonparametric probabilistic approach and for which very few measurements are available. Consequently, the optimization problem which has to be solved in order to identify the unknown identification parameters from experiments remains feasible for very large computational dynamical models. It should be noted that, once the parameters of this nonparametric probabilistic model have been identified for one complex dynamical system belonging to a large class of dynamical systems representing many different configurations, this probabilistic model can be reused to analyze or to optimize another design belonging to this large class without needing experimental data. This important property is due to the fact that the nonparametric probabilistic approach consists in constructing a probabilistic model of the operators of the problem using intrinsic available information relative to the operators of the dynamical system, such as the mass, damping, stiffness and structural-acoustic coupling operators.

Three formulations are proposed. The first one is based on the usual mean-square method with a differentiable objective function. The second one introduces a new, unusual non-differentiable objective function which is particularly well adapted to the responses of dynamical systems formulated in the frequency domain. The last one is a new methodology, very efficient for dynamical systems formulated in the frequency domain, based on the use of the maximum likelihood method coupled with a statistical reduction in the frequency domain, which leads to a considerable improvement of the method. Three applications with experimental validations will be presented in the area of structural vibrations and vibroacoustics to illustrate the different methods proposed.

    2. Reduced mean computational model

In this paper, we consider a linear elastodynamic problem (structural dynamics) or a linear elastoacoustic problem (structural acoustics of a structure coupled with a bounded internal acoustic cavity). It is assumed that the computational model is constructed using the finite element method and is formulated in the frequency domain $\omega$ (angular frequency). A reduced mean computational model is constructed with the usual method of projection on subspaces spanned by a set of elastic modes of the structure in vacuo and/or a set of acoustic modes of the acoustic cavity with rigid walls. This means that the domain of validity of such a reduced mean computational model is the low- and medium-frequency ranges, and the frequency band of analysis is denoted by $B = [\omega_{\min}, \omega_{\max}]$ with $0 \leq \omega_{\min} < \omega_{\max}$. The reduced mean computational model is then written as

$$v(r;\omega) = [\Phi(r_0)]\, q(r;\omega), \qquad (1)$$
$$[a(r;\omega)]\, q(r;\omega) = [\Phi(r_0)]^{T} f(r;\omega), \qquad (2)$$
$$w(r;\omega) = h(\omega, v(r;\omega)). \qquad (3)$$

In Eqs. (1)–(3), the parameter $r$ belongs to an admissible set $\mathcal{R}$ which is a subset of $\mathbb{R}^{n_r}$ (with $n_r \geq 1$). Parameter $r$ represents either an updating parameter or a design parameter which has to be identified and which will be called the identification parameter of the mean computational model. The parameter $r_0$ is a given nominal value of $r$. For all $r$ fixed in $\mathcal{R}$ and for all $\omega$ fixed in $B$:

(i) $v(r;\omega)$ is the complex vector in $\mathbb{C}^{n_v}$ of the degrees of freedom (structural displacements and/or acoustic pressures) of the mean computational model, with $n_v > 1$.

(ii) $f(r;\omega)$ is a complex vector in $\mathbb{C}^{n_v}$ which represents the loads due to forces applied to the structure and/or to the internal acoustic sources.

(iii) $w(r;\omega)$ is a real vector in $\mathbb{R}^{n}$ with $n \geq 1$ and represents the real-valued observation of the mean computational model. For instance, we will have $w_j(r;\omega) = 20\log_{10}(\omega^2 |v_{k_j}(r;\omega)|)$ in which $v_{k_j}(r;\omega)$ will be a structural displacement, or $w_j(r;\omega) = 20\log_{10}(|p_{k_j}(r;\omega)|)$ in which $p_{k_j}(r;\omega)$ will be an acoustic pressure. Such equations define the mapping $v \mapsto w = h(\omega, v)$ from $\mathbb{C}^{n_v}$ into $\mathbb{R}^{n}$.


(iv) $[\Phi(r_0)]$ is an $(n_v \times N)$ real matrix whose columns are made up of elastic modes of the structure in vacuo and/or acoustic modes of the acoustic cavity with rigid walls, computed for the nominal value $r_0$ of the updating parameter $r$. The matrix $[\Phi(r_0)]^{T}$ is the transpose of $[\Phi(r_0)]$.

(v) $q(r;\omega)$ is the complex vector in $\mathbb{C}^{N}$ of the generalized coordinates of the reduced mean computational model.

(vi) $[a(r;\omega)]$ is the $(N \times N)$ complex matrix representing the generalized dynamical stiffness matrix.

It is assumed that, for all $r$ in $\mathcal{R}$ and for all $\omega$ in $B$, the dimension $N$ of the reduced computational model is sufficiently large (but $N \ll n_v$) to reach a reasonable convergence.
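
To make Eqs. (1)–(3) concrete, the following sketch evaluates the reduced mean model at a single frequency. It assumes the classical structural-dynamics form $[a(r;\omega)] = -\omega^2 [M] + i\omega [D] + [K]$ for the generalized dynamical stiffness in terms of reduced mass, damping and stiffness matrices, and the acceleration observation of item (iii); this form, the function name and the data layout are illustrative assumptions, not prescribed by the excerpt.

```python
import numpy as np

def reduced_mean_response(M_red, D_red, K_red, Phi, F, omega, obs_dofs):
    """Evaluate Eqs. (1)-(3) at one frequency omega and return the observation w(r; omega).

    M_red, D_red, K_red : (N, N) reduced mass, damping and stiffness matrices
                          (assumed form of the generalized dynamical stiffness [a(r; omega)])
    Phi                 : (n_v, N) real matrix of modes [Phi(r0)]
    F                   : (n_v,) complex load vector f(r; omega)
    obs_dofs            : indices k_j of the observed structural degrees of freedom
    """
    # assumed classical form [a(r; omega)] = -omega^2 [M] + i omega [D] + [K]
    a = -omega**2 * M_red + 1j * omega * D_red + K_red
    q = np.linalg.solve(a, Phi.T @ F)        # Eq. (2): generalized coordinates q(r; omega)
    v = Phi @ q                              # Eq. (1): physical degrees of freedom v(r; omega)
    # Eq. (3): observation of item (iii), an acceleration level in dB
    return 20.0 * np.log10(omega**2 * np.abs(v[obs_dofs]))
```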

3. Construction of the stochastic computational model to take into account uncertainties

It is assumed that there are uncertainties in the mean computational model. There are two main types of uncertainties: parameter uncertainties and model uncertainties.

Usually, parameter uncertainties are taken into account by using a parametric probabilistic approach allowing uncertain parameters of the computational model to be modeled by random variables, random fields, etc. Such a parametric probabilistic approach is the most efficient method to address parameter uncertainties in a computational model (see for instance [52–54]). In particular, the stochastic finite element method is a powerful tool to analyze the propagation of parameter uncertainties through the computational model associated with a boundary value problem (see for instance [24,25]).

The mathematical-mechanical modeling process of the designed system used to construct the computational model introduces model uncertainties which cannot be addressed by the parametric probabilistic approach (see [61]). In this paper, it is assumed that both parameter uncertainties and model uncertainties exist in the computational model. Consequently, we propose to use the nonparametric probabilistic approach of both parameter uncertainties and model uncertainties recently introduced (see [57–63]), constructed by using the maximum entropy principle [31,56], and for which experimental validations can be found in [8–10,18,19]. The maximum entropy principle was introduced by Shannon [56] in the construction of information theory. This principle consists in constructing the probability density function which maximizes the entropy (that is, the uncertainties) under the constraints defined by the available information. Applying the nonparametric probabilistic approach to Eqs. (1)–(3) yields the following stochastic reduced model such that, for all $\omega$ in $B$ and for all $r$ in $\mathcal{R}$:

$$V(r,\delta;\omega) = [\Phi(r_0)]\, Q(r,\delta;\omega), \qquad (4)$$
$$[A(r,\delta;\omega)]\, Q(r,\delta;\omega) = [\Phi(r_0)]^{T} f(r;\omega), \qquad (5)$$
$$W(r,\delta;\omega) = h(\omega, V(r,\delta;\omega)) \qquad (6)$$

in which all the random quantities are defined on a probability space $(\Theta, \mathcal{T}, P)$. The random matrix $[A(r,\delta;\omega)]$ is an $(N \times N)$ complex random matrix for which the probability distribution is known and which is such that

$$E\{[A(r,\delta;\omega)]\} = [a(r;\omega)], \qquad (7)$$

where $E$ is the mathematical expectation. This random matrix depends on the random matrix germs modeling the uncertainties, whose probability distributions depend on a vector parameter $\delta$ belonging to a subset $\mathcal{D}$ of $\mathbb{R}^{n_\delta}$. The probability model of the random matrix germs and the corresponding generator of independent realizations can be found in Refs. [57–63] and, in particular, in Ref. [62]. Some additional explanations are given in Section 6, devoted to the applications. Parameter $\delta$ allows the dispersion induced by uncertainties to be controlled [62]. We will give the construction of such a probability model in the applications presented in Section 6. The probability model used is such that, for all $r$ in $\mathcal{R}$, for all $\delta$ in $\mathcal{D}$ and for all $\omega$ in $B$, the random equation (5) has a unique second-order random solution, i.e., $E\{\|Q(r,\delta;\omega)\|_N^2\} < +\infty$, in which $\|z\|_N^2 = |z_1|^2 + \cdots + |z_N|^2$.
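
Refs. [57–63] (in particular Ref. [62]) define the probability model of the random germs and its generator of independent realizations; the excerpt does not reproduce them. The sketch below is a non-authoritative reconstruction of such a generator for a symmetric positive-definite germ $[G]$ with $E\{[G]\} = [I_N]$ and dispersion controlled by a scalar $\delta$; in applications of the nonparametric approach such germs are typically applied to the reduced mass, damping and stiffness matrices (whose mean values are positive definite), so that the resulting random matrix $[A(r,\delta;\omega)]$ satisfies Eq. (7). The admissible range of $\delta$ and the exact normalization must be checked against Ref. [62].

```python
import numpy as np

def random_germ(n, delta, rng):
    """One realization of a random germ [G] with E{[G]} = [I_n] and dispersion
    controlled by delta (sketch of the positive-definite random matrix ensemble
    used by the nonparametric approach; see Refs. [57-63], in particular [62],
    for the exact construction and the admissible range of delta)."""
    sigma = delta / np.sqrt(n + 1.0)
    L = np.zeros((n, n))
    for j in range(1, n + 1):
        # diagonal term: sigma * sqrt(2 V_j), with V_j Gamma-distributed
        a_j = (n + 1.0) / (2.0 * delta**2) + (1.0 - j) / 2.0   # must stay > 0 (delta small enough)
        L[j - 1, j - 1] = sigma * np.sqrt(2.0 * rng.gamma(shape=a_j))
        # strictly upper-triangular terms: independent centered Gaussians
        L[j - 1, j:] = sigma * rng.standard_normal(n - j)
    return L.T @ L                                             # symmetric positive definite, mean = identity

def random_matrix_with_mean(mean_mat, delta, rng):
    """Random matrix with prescribed mean E{.} = mean_mat (mean_mat must be
    symmetric positive definite, e.g. a reduced mass, damping or stiffness matrix)."""
    Lc = np.linalg.cholesky(mean_mat)                          # mean_mat = Lc @ Lc.T
    return Lc @ random_germ(mean_mat.shape[0], delta, rng) @ Lc.T

# usage sketch: rng = np.random.default_rng(0); K_rand = random_matrix_with_mean(K_red, 0.2, rng)
```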

    4. Definition of the identification parameters

In this paper, the identification problem which has to be solved is

(i) either the identification of the parameter $\delta \in \mathcal{D} \subset \mathbb{R}^{n_\delta}$ of the probability model of uncertainties for a fixed value of the parameter $r \in \mathcal{R}$ of the mean computational model, using experimental data. In this case, the vector-valued random observation $W(r,\delta;\omega)$ is rewritten as $W(s;\omega)$, in which the dependence on $r$ is removed and where $s$ belongs to the subset $\mathcal{S}$ of $\mathbb{R}^{n_s}$ with $s = \delta$, $\mathcal{S} = \mathcal{D}$ and $n_s = n_\delta$;

(ii) or the identification of the parameter $r \in \mathcal{R} \subset \mathbb{R}^{n_r}$ of the mean computational model and of the parameter $\delta \in \mathcal{D} \subset \mathbb{R}^{n_\delta}$ of the probability model of uncertainties, using experimental data. In this case, the vector-valued random observation $W(r,\delta;\omega)$ is also rewritten as $W(s;\omega)$, in which $s = (r,\delta) \in \mathcal{S} = \mathcal{R} \times \mathcal{D} \subset \mathbb{R}^{n_s} = \mathbb{R}^{n_r} \times \mathbb{R}^{n_\delta}$ with $n_s = n_r + n_\delta$.

Consequently, we will define $s \in \mathcal{S} \subset \mathbb{R}^{n_s}$ as the identification parameter (the parameter which has to be identified using experimental measurements of the vector-valued observation).

5. Identification method of the identification parameter using experimental observations of the system

    5.1. Setting the problem

Let $\omega \mapsto W(s,\omega)$ be the second-order stochastic process defined on $(\Theta, \mathcal{T}, P)$, indexed by $B$, with values in $\mathbb{R}^{n}$ and depending on the identification parameter $s \in \mathcal{S} \subset \mathbb{R}^{n_s}$. Let $\{\omega_1,\ldots,\omega_m\} \subset B$ be a sampling of the frequency band $B$, and let $\mu_1,\ldots,\mu_m$ be the integration weights associated with $\{\omega_1,\ldots,\omega_m\}$ such that

$$\int_B \mathrm{d}\omega \simeq \sum_{k=1}^{m} \mu_k, \qquad \int_B f(\omega)\,\mathrm{d}\omega \simeq \sum_{k=1}^{m} \mu_k\, f(\omega_k). \qquad (8)$$
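
The excerpt does not specify how the sampling $\{\omega_1,\ldots,\omega_m\}$ and the weights $\mu_k$ are chosen; a uniform sampling with trapezoidal weights is one simple choice satisfying Eq. (8), as sketched below (the band limits and $m$ are illustrative values, not taken from the paper).

```python
import numpy as np

# Uniform sampling of B = [omega_min, omega_max] with trapezoidal weights mu_k
# (one simple choice satisfying Eq. (8)).
omega_min, omega_max, m = 2.0 * np.pi * 100.0, 2.0 * np.pi * 500.0, 401
omegas = np.linspace(omega_min, omega_max, m)
dw = omegas[1] - omegas[0]
mu = np.full(m, dw)
mu[0] = mu[-1] = 0.5 * dw          # sum(mu) = omega_max - omega_min = integral of 1 over B

# Eq. (8): int_B f(omega) d(omega)  ~  sum_k mu_k f(omega_k)
f = lambda w: np.sin(w) ** 2
approx = np.sum(mu * f(omegas))
```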

For all $s$ fixed in $\mathcal{S}$, let $W(s,\omega_1),\ldots,W(s,\omega_m)$ be the finite family of the $\mathbb{R}^{n}$-valued random variables associated with the uncountable family $\{W(s,\omega), \omega \in B\}$. In general, the random variables $W(s,\omega_1),\ldots,W(s,\omega_m)$ are statistically dependent. Let $P_{W(s,\omega_1),\ldots,W(s,\omega_m)}(\mathrm{d}w^1,\ldots,\mathrm{d}w^m; s)$ be the joint probability distribution on $\mathbb{R}^{n}\times\cdots\times\mathbb{R}^{n}$ ($m$ times) $= \mathbb{R}^{nm}$ of the random variables $W(s,\omega_1),\ldots,W(s,\omega_m)$, depending on the parameter $s$, in which $\mathrm{d}w^k = \mathrm{d}w^k_1\cdots\mathrm{d}w^k_n$ is the Lebesgue measure on $\mathbb{R}^{n}$ for all $k = 1,\ldots,m$. For all $s$ fixed in $\mathcal{S}$, this joint probability distribution is assumed to be written as

$$P_{W(s,\omega_1),\ldots,W(s,\omega_m)}(\mathrm{d}w^1,\ldots,\mathrm{d}w^m; s) = p(w^1,\ldots,w^m; s)\,\mathrm{d}w^1\cdots\mathrm{d}w^m \qquad (9)$$

in which $p(w^1,\ldots,w^m; s)$ is the probability density function on $\mathbb{R}^{n}\times\cdots\times\mathbb{R}^{n}$ ($m$ times) $= \mathbb{R}^{nm}$ with respect to $\mathrm{d}w^1\cdots\mathrm{d}w^m$ and where $w^k = (w^k_1,\ldots,w^k_n)$. For all $s$ fixed in $\mathcal{S}$ and for all vectors $w^1,\ldots,w^m$ given in $\mathbb{R}^{n}$, the estimation of $p(w^1,\ldots,w^m; s)$ is performed by using the stochastic computational model and the Monte Carlo method as stochastic solver, with $\nu$ independent realizations $\{W(s,\omega_1;\theta_1),\ldots,W(s,\omega_m;\theta_1)\},\ldots,\{W(s,\omega_1;\theta_\nu),\ldots,W(s,\omega_m;\theta_\nu)\}$ of the $\mathbb{R}^{nm}$-valued random observation $\{W(s,\omega_1),\ldots,W(s,\omega_m)\}$, with $\theta_1,\ldots,\theta_\nu$ in $\Theta$. This means that:

(1) Any independent realization $[A(s,\omega_k;\theta)]$ of the random matrix $[A(s,\omega_k)]$ is constructed using the probability model introduced in Section 3. This probability model is completely defined and there exists an associated generator of independent realizations (see the given references and also the applications presented in Section 6).

(2) The corresponding realization $W(s,\omega_k;\theta)$ is calculated using the stochastic computational model defined by Eqs. (4)–(6), that is to say, is given by $W(s,\omega_k;\theta) = h(\omega_k, V(s,\omega_k;\theta))$ with $V(s,\omega_k;\theta) = [\Phi(r_0)]\, Q(s,\omega_k;\theta)$, in which the deterministic vector $Q(s,\omega_k;\theta)$ is the solution of the deterministic matrix equation $[A(s,\omega_k;\theta)]\, Q(s,\omega_k;\theta) = [\Phi(r_0)]^{T} f(r;\omega_k)$.

(3) Finally, the probability density function $p(w^1,\ldots,w^m; s)$ or any of its moments are estimated using the appropriate estimators given by mathematical statistics, as sketched below.
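
A minimal sketch of this Monte Carlo loop, steps (1)–(3), follows. The callables `draw_A` and `solve_obs` are placeholders for the germ generator of Section 3 and for the reduced solve of Eqs. (4)–(6), and the sample mean and quantile envelopes are only examples of the estimators mentioned in step (3).

```python
import numpy as np

def monte_carlo_observations(draw_A, solve_obs, omegas, nu, rng):
    """Steps (1)-(3): for nu independent realizations theta_1..theta_nu, draw
    [A(s, omega_k; theta)], solve Eqs. (4)-(6) and store W(s, omega_k; theta).

    draw_A(omega, rng)  -> one realization of the random reduced matrix [A(s, omega)]
    solve_obs(A, omega) -> the observation h(omega, [Phi(r0)] Q) with A Q = [Phi(r0)]^T f(r; omega)
    Returns W of shape (nu, m, n): realizations x frequencies x observed components.
    """
    W = np.array([[solve_obs(draw_A(w, rng), w) for w in omegas]   # steps (1)-(2)
                  for _ in range(nu)])
    # step (3): examples of statistical estimators over the realizations
    mean_W = W.mean(axis=0)                                        # estimate of E{W(s, omega_k)}
    w_low, w_up = np.quantile(W, [0.025, 0.975], axis=0)           # e.g. 95% confidence envelopes
    return W, mean_W, (w_low, w_up)
```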

Concerning the experimental observations, we consider $\nu_{\mathrm{exp}}$ manufactured systems $\eta_1,\ldots,\eta_{\nu_{\mathrm{exp}}}$ with $\nu_{\mathrm{exp}} \geq 1$. In general, $\nu_{\mathrm{exp}}$ is small (one unit or a few units). These $\nu_{\mathrm{exp}}$ manufactured systems are considered as $\nu_{\mathrm{exp}}$ independent realizations of a random manufactured system. We then denote by $\{W^{\mathrm{exp}}(\omega), \omega \in B\}$ the random observation, defined on a probability space $(\Theta^{\mathrm{exp}}, \mathcal{T}^{\mathrm{exp}}, P^{\mathrm{exp}})$, of the random manufactured system corresponding to the random observation $\{W(\omega), \omega \in B\}$ of the stochastic computational model. By construction, we then have $\eta_1,\ldots,\eta_{\nu_{\mathrm{exp}}}$ in $\Theta^{\mathrm{exp}}$. The experimental observation of the manufactured system $\eta_\ell$ corresponds to the realization $\{W^{\mathrm{exp}}(\omega;\eta_\ell), \omega \in B\}$ of the random experimental observation $\{W^{\mathrm{exp}}(\omega), \omega \in B\}$ of the random manufactured system. In practice, the sampling $\{\omega_1,\ldots,\omega_m\}$ of $B$ is used and the experimental observation of the manufactured system $\eta_\ell$ is made up of the finite family of $\mathbb{R}^{n}$-valued experimental observations $\{W^{\mathrm{exp}}(\omega_1;\eta_\ell),\ldots,W^{\mathrm{exp}}(\omega_m;\eta_\ell)\}$. Consequently, for $s$ fixed in $\mathcal{S}$, the experimental observations corresponding to the random observation $\{W(s,\omega_1),\ldots,W(s,\omega_m)\}$ of the stochastic computational model are made up of

$$\{W^{\mathrm{exp}}(\omega_1;\eta_1),\ldots,W^{\mathrm{exp}}(\omega_m;\eta_1)\},\;\ldots,\;\{W^{\mathrm{exp}}(\omega_1;\eta_{\nu_{\mathrm{exp}}}),\ldots,W^{\mathrm{exp}}(\omega_m;\eta_{\nu_{\mathrm{exp}}})\}, \qquad (10)$$

which are $\nu_{\mathrm{exp}}$ independent realizations of the random experimental observation $\{W^{\mathrm{exp}}(\omega_1),\ldots,W^{\mathrm{exp}}(\omega_m)\}$. The problem to be solved is then the identification of the optimal value $s^{\mathrm{opt}}$ of the identification parameter $s$ of the stochastic computational model using the experimental values defined by Eq. (10). Below, we propose to use two main methods: (1) the mean-square method, for differentiable and non-differentiable objective functions, and (2) the maximum likelihood method.

    5.2. Mean-square method

This section deals with the usual mean-square method [64,66]. Two cases will be considered: the case of a differentiable objective function and the case of a non-differentiable objective function. These two cases will be used in the applications presented in Section 6.

We begin by introducing an inner product which is adapted to the frequency-sampled random observations $\{W(s,\omega_1),\ldots,W(s,\omega_m)\}$ of the stochastic computational model and to its corresponding sampled random experimental observation $\{W^{\mathrm{exp}}(\omega_1),\ldots,W^{\mathrm{exp}}(\omega_m)\}$.

Let $X = (X^1,\ldots,X^m)$ and $Y = (Y^1,\ldots,Y^m)$ be two random variables with values in $\mathbb{R}^{n}\times\cdots\times\mathbb{R}^{n}$ ($m$ times) $= \mathbb{R}^{nm}$, in which, for all $k = 1,\ldots,m$, $X^k$ and $Y^k$ are random variables with values in $\mathbb{R}^{n}$. This algebraic structure of the random vectors $X$ and $Y$ is adapted to the frequency-sampled random observations over the frequency band $B$. We then introduce the inner product $\langle\langle X, Y\rangle\rangle$ of $X$ and $Y$ such that

$$\langle\langle X, Y\rangle\rangle = E\{\langle X, Y\rangle_B\}, \qquad \langle X, Y\rangle_B = \sum_{k=1}^{m} \mu_k\, \langle X^k, Y^k\rangle_n \qquad (11)$$

in which $\langle X^k, Y^k\rangle_n = X^k_1 Y^k_1 + \cdots + X^k_n Y^k_n$ and where the weights $\mu_1,\ldots,\mu_m$ are defined by Eq. (8). The associated norm $|||X|||$ of $X$ is then such that

$$|||X|||^2 = E\{\|X\|_B^2\}, \qquad \|X\|_B^2 = \sum_{k=1}^{m} \mu_k\, \|X^k\|_n^2 \qquad (12)$$

in which $\|X^k\|_n^2 = (X^k_1)^2 + \cdots + (X^k_n)^2$. For all $s$ fixed in $\mathcal{S}$, we introduce the random vectors $W(s)$ and $W^{\mathrm{exp}}$ such that

$$W(s) = (W(s,\omega_1),\ldots,W(s,\omega_m)), \qquad W^{\mathrm{exp}} = (W^{\mathrm{exp}}(\omega_1),\ldots,W^{\mathrm{exp}}(\omega_m)). \qquad (13)$$
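
In computations, the inner product and norms of Eqs. (11) and (12) reduce to weighted sums over the frequency samples, with the expectation replaced by a Monte Carlo average over the realizations; a minimal sketch, assuming the realizations of $W(s)$ are stored as an array of shape $(\nu, m, n)$, is the following.

```python
import numpy as np

def inner_B(X, Y, mu):
    """<X, Y>_B of Eq. (11) for deterministic X, Y of shape (m, n): sum_k mu_k <X^k, Y^k>_n."""
    return float(np.sum(mu[:, None] * X * Y))

def norm2_B(X, mu):
    """||X||_B^2 of Eq. (12)."""
    return inner_B(X, X, mu)

def triple_norm2(W_real, mu):
    """|||X|||^2 = E{||X||_B^2} of Eq. (12), estimated over nu independent realizations
    W_real of shape (nu, m, n) by the empirical mean (Monte Carlo estimator)."""
    return float(np.mean([norm2_B(W_real[l], mu) for l in range(W_real.shape[0])]))
```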

Then the real vector $\widehat{W}_{\nu_{\mathrm{exp}}}$ related to the experimental observations is introduced such that

$$\widehat{W}_{\nu_{\mathrm{exp}}} = \frac{1}{\nu_{\mathrm{exp}}} \sum_{\ell=1}^{\nu_{\mathrm{exp}}} W^{\mathrm{exp}}(\eta_\ell). \qquad (14)$$

On the other hand, the mean values $m(s)$ and $m^{\mathrm{exp}}$ of the random vectors $W(s)$ and $W^{\mathrm{exp}}$ are introduced such that

$$m(s) = E\{W(s)\}, \qquad m^{\mathrm{exp}} = E\{W^{\mathrm{exp}}\}. \qquad (15)$$

Eq. (14) defines an estimation of $m^{\mathrm{exp}}$ and, if $\nu_{\mathrm{exp}}$ goes to infinity, then $\widehat{W}_{\nu_{\mathrm{exp}}}$ tends to $m^{\mathrm{exp}}$:

$$\lim_{\nu_{\mathrm{exp}} \to +\infty} \widehat{W}_{\nu_{\mathrm{exp}}} = m^{\mathrm{exp}}. \qquad (16)$$

For all $s$ fixed in $\mathcal{S}$, we introduce the following objective function $s \mapsto \widetilde{J}_{\nu_{\mathrm{exp}}}(s)$ from $\mathcal{S}$ into $\mathbb{R}^{+} = [0, +\infty[$, such that

$$\widetilde{J}_{\nu_{\mathrm{exp}}}(s) = |||W(s) - W^{\mathrm{exp}}|||^2. \qquad (17)$$

The problem is to find $s$ in $\mathcal{S}$ which minimizes $\widetilde{J}_{\nu_{\mathrm{exp}}}$. Often, the number $\nu_{\mathrm{exp}}$ of experiments is not sufficiently large to be able to write $m^{\mathrm{exp}} = \widehat{W}_{\nu_{\mathrm{exp}}}$ and, consequently, in such a case we have

$$E\{W^{\mathrm{exp}}\} - \widehat{W}_{\nu_{\mathrm{exp}}} = b_{\nu_{\mathrm{exp}}} \neq 0. \qquad (18)$$

Clearly, the vector $b_{\nu_{\mathrm{exp}}} \to 0$ when $\nu_{\mathrm{exp}} \to +\infty$. We then consider that Eq. (18) holds, i.e., $b_{\nu_{\mathrm{exp}}} \neq 0$, for the given value of $\nu_{\mathrm{exp}}$. Therefore, Eq. (17) can be rewritten as

$$\widetilde{J}_{\nu_{\mathrm{exp}}}(s) = |||\,(W(s) - m(s)) - (W^{\mathrm{exp}} - \widehat{W}_{\nu_{\mathrm{exp}}}) + (m(s) - \widehat{W}_{\nu_{\mathrm{exp}}})\,|||^2. \qquad (19)$$

Developing the right-hand side of Eq. (19), taking into account that the random variables $W(s) - m(s)$ and $W^{\mathrm{exp}} - \widehat{W}_{\nu_{\mathrm{exp}}}$ are independent, and removing the term $|||W^{\mathrm{exp}} - \widehat{W}_{\nu_{\mathrm{exp}}}|||^2$ which is independent of $s$, yields

$$\widetilde{J}_{\nu_{\mathrm{exp}}}(s) = |||W(s) - m(s)|||^2 + \|m(s) - \widehat{W}_{\nu_{\mathrm{exp}}}\|_B^2 - 2\,\langle\langle W^{\mathrm{exp}} - \widehat{W}_{\nu_{\mathrm{exp}}},\; m(s) - \widehat{W}_{\nu_{\mathrm{exp}}}\rangle\rangle. \qquad (20)$$

It should be noted that the third term in the right-hand side of Eq. (20) can be rewritten as $-2\,\langle b_{\nu_{\mathrm{exp}}},\; m(s) - \widehat{W}_{\nu_{\mathrm{exp}}}\rangle_B$. Consequently, from Eqs. (18) and (20), it can be deduced that

$$\lim_{\nu_{\mathrm{exp}} \to +\infty} \widetilde{J}_{\nu_{\mathrm{exp}}}(s) = \widetilde{J}(s) \qquad (21)$$

in which the objective function $s \mapsto \widetilde{J}(s)$ from $\mathcal{S}$ into $\mathbb{R}^{+}$ is defined, for $\nu_{\mathrm{exp}}$ sufficiently large (equivalently, for $\nu_{\mathrm{exp}} \to +\infty$), by

$$\widetilde{J}(s) = |||W(s) - m(s)|||^2 + \|m(s) - \widehat{W}_{\nu_{\mathrm{exp}}}\|_B^2. \qquad (22)$$


Let us consider the identification of the parameter $r$ in $\mathcal{R} \subset \mathbb{R}^{n_r}$ with the mean computational model, i.e., with the stochastic computational model for $\delta = 0$. We then have $s_0 = (r, 0) \in \mathcal{S}$ and the optimization problem is written as

$$s_0^{\mathrm{opt}} = \arg\min_{s_0 = (r,0) \in \mathcal{R}\times\mathcal{D}} \widetilde{J}(s_0), \qquad \widetilde{J}(s_0) = \|m(s_0) - \widehat{W}_{\nu_{\mathrm{exp}}}\|_B^2 \qquad (23)$$

in which $s_0^{\mathrm{opt}} = (r^{\mathrm{opt}}, 0)$. If $\widetilde{J}(s_0^{\mathrm{opt}}) = 0$, then $m(s_0^{\mathrm{opt}}) - \widehat{W}_{\nu_{\mathrm{exp}}} = 0$ and, in the mean, there are neither parameter uncertainties nor model uncertainties (note that, with such an identified deterministic computational model, the dispersion of the experimental data induced by the variability of the real system is not taken into account). In general, $\widetilde{J}(s_0^{\mathrm{opt}})$ is not equal to zero, which means that there are data and model uncertainties which have to be taken into account by taking $\delta \neq 0$ and, therefore, the stochastic computational model must be used.

Considering the construction of the objective function, it is not necessary to use Eq. (20) or Eq. (22), and other objective functions can be derived. This is the way which is followed below.

5.2.1. Case of a differentiable objective function and optimization problem

Even if $\nu_{\mathrm{exp}}$ is not sufficiently large, an objective function can be derived from Eq. (22) by replacing $\widetilde{J}$ with the following function $s \mapsto J^{D}_{\gamma}(s)$ from $\mathcal{S}$ into $\mathbb{R}^{+}$, such that

$$J^{D}_{\gamma}(s) = 2(1-\gamma)\,|||W(s) - m(s)|||^2 + 2\gamma\,\|m(s) - \widehat{W}_{\nu_{\mathrm{exp}}}\|_B^2 \qquad (24)$$

in which $\gamma$ is fixed in $]0,1]$. In the right-hand side of Eq. (24), the first term is related to the variance of the stochastic computational model and the second one is related to the bias between the mean model and the mean value of the experiments. The parameter $\gamma$ allows the balance between these two terms to be chosen and fixed. It should be noted that, for all $s$ in $\mathcal{S}$, $J^{D}_{1/2}(s) = \widetilde{J}(s)$. For a fixed value of $\gamma$, the identification of the parameter $s$ in $\mathcal{S}$ is then the optimal value $s^{\mathrm{opt}}$ in $\mathcal{S}$ such that

$$s^{\mathrm{opt}} = \arg\min_{s \in \mathcal{S}} J^{D}_{\gamma}(s). \qquad (25)$$

For $\gamma = 0$, we have $J^{D}_{0}(s) = 2\,|||W(s) - m(s)|||^2$ and, consequently, the optimization problem defined by Eq. (25) yields $s^{\mathrm{opt}} = (r^{\mathrm{opt}}, \delta^{\mathrm{opt}})$ with $\delta^{\mathrm{opt}} = 0$ and then $J^{D}_{0}(s^{\mathrm{opt}}) = 0$. Such a case is not coherent with the explanation given above and, therefore, cannot be considered. This is the reason why the value $\gamma = 0$ has to be removed from the admissible set $]0,1]$ of values for $\gamma$.

For $\gamma = 1$, we have $J^{D}_{1}(s) = 2\,\|m(s) - \widehat{W}_{\nu_{\mathrm{exp}}}\|_B^2$ and the optimization problem defined by Eq. (25) yields an optimal value $s^{\mathrm{opt}}$ which only minimizes the bias. In this case, the variance can take very high values and the robustness of the prediction with the stochastic computational model decreases. Since the objective is to identify a stochastic computational model which is sufficiently robust with respect to uncertainties, $\|\delta^{\mathrm{opt}}\|$ must not be too large and, thus, the weight $2(1-\gamma)$ of the variance term in the objective function has to be increased with respect to the weight $2\gamma$ of the bias term. Such a condition is satisfied by choosing $\gamma$ in $]0, \tfrac{1}{2}]$.
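
The following sketch evaluates the objective function $J^{D}_{\gamma}(s)$ of Eq. (24) from Monte Carlo realizations and solves Eq. (25) by a simple exhaustive search over trial values of $s$; the propagation routine, the search strategy and all names are illustrative assumptions, since the excerpt does not prescribe a stochastic solver or an optimization algorithm, and $\widehat{W}_{\nu_{\mathrm{exp}}}$ is the empirical mean of Eq. (14).

```python
import numpy as np

def J_D_gamma(s, gamma, W_hat_exp, mu, propagate):
    """Monte Carlo estimate of the objective function J^D_gamma(s) of Eq. (24).

    propagate(s) -> array of shape (nu, m, n) of realizations of W(s)
                    (e.g. the Monte Carlo loop sketched in Section 5.1)
    W_hat_exp    -> empirical mean of the experiments, Eq. (14), shape (m, n)
    """
    norm2_B = lambda X: float(np.sum(mu[:, None] * X * X))        # ||.||_B^2 of Eq. (12)
    W = propagate(s)
    m_s = W.mean(axis=0)                                          # estimate of m(s), Eq. (15)
    var_term = np.mean([norm2_B(W[l] - m_s) for l in range(W.shape[0])])   # |||W(s) - m(s)|||^2
    bias_term = norm2_B(m_s - W_hat_exp)                          # ||m(s) - W_hat_exp||_B^2
    return 2.0 * (1.0 - gamma) * var_term + 2.0 * gamma * bias_term

def identify(candidates, gamma, W_hat_exp, mu, propagate):
    """Eq. (25) over a finite list of trial values of s (simple exhaustive search)."""
    costs = [J_D_gamma(s, gamma, W_hat_exp, mu, propagate) for s in candidates]
    return candidates[int(np.argmin(costs))]
```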

In analyzing the objective function defined by Eq. (24), it can be seen that $s^{\mathrm{opt}}$ calculated with Eq. (25) does not take into account the dispersion of the experimental observations induced by the variability of the real system, but only uses the mean value of the experimental data. Therefore, if the dispersion of the experimental data around its mean value is important, the objective function defined by Eq. (24) is not the most effective one. In this case, another objective function can be constructed in the context of the mean-square method; this question is treated in the following subsection by introducing a non-differentiable objective function.

5.2.2. Case of a non-differentiable objective function and optimization problem

As explained in the previous subsection, this case is useful when the dispersion of the experimental observations is important. The criterion used to define the objective function is to write that, for all $s$ in $\mathcal{S}$ and for all $j$ in $\{1,\ldots,n\}$, the experimental observations $\{W^{\mathrm{exp}}_j(\omega;\eta_\ell), \omega \in B\}$, for all $\ell$ in $\{1,\ldots,\nu_{\mathrm{exp}}\}$, belong to the confidence region of the stochastic process $\{W_j(s,\omega), \omega \in B\}$ associated with a probability level $P_c \in\; ]0,1[$. Let $\{w^{+}_j(s,\omega), \omega \in B\}$ and $\{w^{-}_j(s,\omega), \omega \in B\}$ be the upper and lower envelopes which define the confidence region of the stochastic process $\{W_j(s,\omega), \omega \in B\}$ and which are such that, for all $\omega$ fixed in $B$, $\mathrm{Proba}\{w^{-}_j(s,\omega) < W_j(s,\omega) \leq w^{+}_j(s,\omega)\} \geq P_c$.
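
The excerpt ends before the construction of the envelopes $w^{+}_j$ and $w^{-}_j$ is specified; one standard, non-authoritative way to estimate them from the $\nu$ Monte Carlo realizations of Section 5.1 is by symmetric sample quantiles, as sketched below.

```python
import numpy as np

def confidence_envelopes(W, Pc):
    """Estimate the lower/upper envelopes w_j^-(s, omega_k) and w_j^+(s, omega_k) of the
    confidence region with probability level Pc from the nu Monte Carlo realizations W
    of shape (nu, m, n), using symmetric sample quantiles (one standard construction;
    the excerpt stops before specifying the estimator used in the paper)."""
    alpha = 0.5 * (1.0 - Pc)
    w_minus = np.quantile(W, alpha, axis=0)          # lower envelope, shape (m, n)
    w_plus = np.quantile(W, 1.0 - alpha, axis=0)     # upper envelope, shape (m, n)
    return w_minus, w_plus
```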