
Dusanka Zupanski CIRA/Colorado State University

Fort Collins, Colorado

Ensemble Kalman Filter

Guest Lecture at AT 753: Atmospheric Water Cycle 21 April 2006, CSU/ATS Dept., Fort Collins, CO

Dusanka Zupanski, CIRA/CSU (Zupanski@CIRA.colostate.edu)

Acknowledgements: M. Zupanski, C. Kummerow, S. Denning, and M. Uliasz (CSU); A. Hou and S. Zhang (NASA/GMAO)

OUTLINE

Why Ensemble Data Assimilation?

Kalman filter and Ensemble Kalman filter

Maximum Likelihood Ensemble Filter (MLEF)

Examples of MLEF applications

Future research directions

Why Ensemble Data Assimilation?


Three main reasons:

The need for an optimal estimate of the atmospheric state, together with a verifiable uncertainty of this estimate;

the need for a flow-dependent forecast error covariance matrix; and

the requirement that the above be applicable to the most complex atmospheric models (e.g., non-hydrostatic, cloud-resolving, LES).

Example 1: Fronts

Example 2: Hurricanes

(From Whitaker et al., THORPEX web-page)

Benefits of Flow-Dependent Background Errors

Are there alternatives?


Two good candidates: The 4d-var method employs a flow-dependent forecast error covariance, but does not propagate it in time. The Kalman filter (KF) does propagate a flow-dependent forecast error covariance in time, but is too expensive for application to complex atmospheric models.

EnKF is a practical alternative to KF, applicable to most complex atmospheric models.

A bonus benefit: EnKF does not use adjoint models!

Typical EnKF

[Schematic: Observations and a first guess (Tb, ub, vb, fb, ...) enter DATA ASSIMILATION together with the forecast error covariance Pf (ensemble subspace), producing the optimal solution for the model state x = (T, u, v, f, ...) and the analysis error covariance Pa (ensemble subspace). ENSEMBLE FORECASTING propagates the analysis ensemble and closes the loop; the ensemble also feeds INFORMATION CONTENT ANALYSIS. The Maximum Likelihood Ensemble Filter adds Hessian preconditioning and allows non-Gaussian PDFs.]


Data Assimilation Equations

GOAL: Combine model and data to obtain an optimal estimate of the dynamical state x.

Equations in model space:

x_n = M_{n,n-1}(x_{n-1}) + G(x_{n-1}) w_{n-1}

Prior (forecast) error covariance of x (assumed known):

P_f = E{ [x - E(x)] [x - E(x)]^T }

Model error covariance (assumed known):

Q = E(w w^T)

Equations in data space:

y_n = H_n(x_n) + ε_n

Observation error covariance, which also includes representativeness error (assumed known):

R = E(ε ε^T)

Notation: M - dynamical model for the model state evolution (e.g., an NWP model); x - model state vector of dim Nstate; w - model error vector of dim Nstate; G - dynamical model for state-dependent model error; y - observations vector of dim Nobs; H - observation operator; ε - observation error; E - mathematical expectation; n - time step index.
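The model-space and data-space equations above can be sketched directly in code. This is a minimal toy setup (the linear damping model, the identity-like observation operator, and all covariance values are assumptions for illustration, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

Nstate, Nobs = 4, 2
M = 0.95 * np.eye(Nstate)          # dynamical model (toy: simple damping)
H = np.eye(Nobs, Nstate)           # observation operator (observe first 2 states)
Q = 0.01 * np.eye(Nstate)          # model error covariance (assumed known)
R = 0.25 * np.eye(Nobs)            # observation error covariance (assumed known)

def forecast(x, rng):
    """x_n = M(x_{n-1}) + w_{n-1}, with w ~ N(0, Q)."""
    w = rng.multivariate_normal(np.zeros(Nstate), Q)
    return M @ x + w

def observe(x, rng):
    """y_n = H(x_n) + eps_n, with eps ~ N(0, R)."""
    eps = rng.multivariate_normal(np.zeros(Nobs), R)
    return H @ x + eps

x = np.ones(Nstate)
x = forecast(x, rng)
y = observe(x, rng)
```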

Data Assimilation Equations


n - Time step index (denoting observation times)

Data assimilation should combine model and data in an optimal way. The optimal solution z can be defined in terms of optimal initial conditions x^a (the analysis), the model error w, and empirical parameters.

Approach 1: Optimal solution (e.g., analysis x^a) = minimum variance estimate, or conditional mean of the Bayesian posterior probability density function (PDF) (e.g., Kalman filter; Extended Kalman filter; EnKF):

x^a = E(x|y) = ∫ x p(x|y) dx = ∫ x [ p(y|x) p(x) / p(y) ] dx,   p - PDF


How can we obtain the optimal solution? Two approaches are used most often:

For non-linear M or H the solution can be obtained employing the Extended Kalman filter or the Ensemble Kalman filter.

Assuming linear M and H and independent Gaussian PDFs leads to the Kalman filter solution (e.g., Jazwinski 1970).

x^a is defined as the mathematical expectation (i.e., the mean) of the conditional posterior p(x|y), given observations y and prior p(x).


Approach 2: Optimal solution (e.g., analysis x^a) = maximum likelihood estimate, or conditional mode of the Bayesian posterior p(x|y) (e.g., variational methods; MLEF).

For independent Gaussian PDFs, this is equivalent to minimizing the cost function J:

J = (1/2) [x - x_b]^T P_f^{-1} [x - x_b] + (1/2) [H(M(x)) - y]^T R^{-1} [H(M(x)) - y]

The solution can be obtained (with ideal preconditioning) in one iteration for linear H and M. Iterative solution for non-linear H and M:

x_{k+1} = x_k - H^{-1} ∇J_k

H^{-1} - preconditioning matrix = inverse Hessian of J

x^a = maximum of the posterior p(x|y), given observations and prior p(x):

x^a = arg max p(x|y) = arg max [ p(y|x) p(x) / p(y) ] = arg min { -ln p(x|y) }
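For linear H and M the cost function J is quadratic, so a single Newton step with the exact inverse Hessian lands on the minimum, as claimed above. A toy sketch (matrices Pf, R, A and the starting point are assumptions for illustration):

```python
import numpy as np

Pf = np.array([[1.0, 0.5], [0.5, 2.0]])   # prior (forecast) error covariance
R  = np.array([[0.25]])                   # observation error covariance
A  = np.array([[1.0, 1.0]])               # linear combined operator H∘M
xb = np.array([0.0, 0.0])                 # background
y  = np.array([1.0])                      # observation

Pf_inv, R_inv = np.linalg.inv(Pf), np.linalg.inv(R)

def grad_J(x):
    # Gradient of J = 1/2 (x-xb)^T Pf^{-1} (x-xb) + 1/2 (Ax-y)^T R^{-1} (Ax-y)
    return Pf_inv @ (x - xb) + A.T @ R_inv @ (A @ x - y)

hess = Pf_inv + A.T @ R_inv @ A           # Hessian of J (constant: J is quadratic)

x0 = np.array([5.0, -3.0])                # arbitrary starting point
x1 = x0 - np.linalg.solve(hess, grad_J(x0))   # one Newton (ideally preconditioned) step
```

After the single step, grad_J(x1) vanishes to machine precision, i.e., x1 is the analysis.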

[Figure: cost-function contours J = const in physical space (x) and in preconditioning space (ζ), with the gradients -g_x and -g_ζ leading from the starting points x_0, ζ_0 to the minima x_min, ζ_min.]

IMPACT OF MATRIX C IN HESSIAN PRECONDITIONING

[Figure: cost function (log scale, 1.0E-01 to 1.0E+02) vs. number of iterations (1 to 51) for the VARIATIONAL and MLEF preconditionings.]

H^{-1} = (∂²J/∂x²)^{-1} = P_f^{1/2} (I + C)^{-1} P_f^{T/2}

x_{k+1} = x_k - P^{-1} g_k, with

P_MLEF^{-1} = P_f^{1/2} (I + C)^{-1} P_f^{T/2};   P_VAR^{-1} = P_f
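The impact of preconditioning on iteration count can be illustrated on a toy quadratic (the diagonal matrix, step size, and tolerance are assumptions, not the lecture's experiment): un-preconditioned steepest descent on an ill-conditioned problem needs many iterations, while ideal Hessian preconditioning converges in one.

```python
import numpy as np

A = np.diag([1.0, 100.0])                 # Hessian of J = 1/2 x^T A x; condition number 100

def grad(x):
    return A @ x

def iterations(precond, tol=1e-8, max_iter=10_000):
    """Count iterations of x <- x - P^{-1} g until the gradient is negligible."""
    x = np.array([1.0, 1.0])
    step = 1.0 / 100.0                    # stable fixed step for plain steepest descent
    for k in range(1, max_iter + 1):
        if precond is None:
            x = x - step * grad(x)        # un-preconditioned descent
        else:
            x = x - precond @ grad(x)     # preconditioned (Newton-like) step
        if np.linalg.norm(grad(x)) < tol:
            return k
    return max_iter

n_plain = iterations(None)                # slow: governed by the condition number
n_ideal = iterations(np.linalg.inv(A))    # ideal preconditioning: one iteration
```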

Milija Zupanski, CIRA/CSU (ZupanskiM@CIRA.colostate.edu)

Ideal Hessian Preconditioning


MEAN vs. MODE

[Figure: a non-Gaussian PDF p(x), for which x_mode ≠ x_mean, and a Gaussian PDF, for which x_mode = x_mean.]

For Gaussian PDFs and linear H and M, the results of all methods [KF, EnKF (with enough ensemble members), and variational] should be identical, assuming the same P_f, R, and y are used in all methods.

Minimum variance estimate= Maximum likelihood estimate!

KF, EnKF, 4d-var ... all created equal?

Does this really happen?!?
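For the linear Gaussian case the equivalence can be checked numerically. A toy sketch (all matrices and values are assumptions for illustration): the Kalman-filter (minimum variance) analysis coincides with the variational (maximum likelihood) minimizer of J, given the same P_f, R, and y.

```python
import numpy as np

Pf = np.array([[1.0, 0.3], [0.3, 0.5]])   # forecast error covariance
R  = np.array([[0.2]])                    # observation error covariance
H  = np.array([[1.0, 0.0]])               # linear observation operator
xb = np.array([1.0, 2.0])                 # background
y  = np.array([1.8])                      # observation

# Minimum variance: Kalman filter analysis
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
xa_kf = xb + K @ (y - H @ xb)

# Maximum likelihood: the minimizer of J, i.e., the root of grad J(x) = 0
hess = np.linalg.inv(Pf) + H.T @ np.linalg.inv(R) @ H
xa_var = xb + np.linalg.solve(hess, H.T @ np.linalg.inv(R) @ (y - H @ xb))

print(np.allclose(xa_kf, xa_var))         # True: both estimates coincide
```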

TEST RESULTS EMPLOYING A LINEAR MODEL AND GAUSSIAN PDFs

[Figures courtesy of M. Uliasz and D. Zupanski.]

Kalman filter solution

Analysis step:

x^a = x^b + P_f H^T (H P_f H^T + R)^{-1} [ y - H(x^b) ]

P_a = [ I - P_f H^T (H P_f H^T + R)^{-1} H ] P_f = (I - K H) P_f

Forecast step:

x_0 = x^a;   x_n = M_{n,n-1}(x_{n-1}) + G(x_{n-1}) w_{n-1}   (the model error term is often neglected)

P_f = M P_a M^T + G Q G^T   - update of forecast error covariance

Notation: x^a - optimal estimate of x (analysis); x^b - background (prior) estimate of x; P_a - analysis (posterior) error covariance matrix (Nstate x Nstate); K - Kalman gain matrix (Nstate x Nobs).
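The analysis and forecast steps above can be transcribed directly. A minimal numpy sketch with a toy linear model (the matrices M, H, Q, R and all values are assumptions; G is taken as the identity):

```python
import numpy as np

def kf_cycle(xb, Pf, y, H, R, M, Q):
    """One Kalman-filter cycle: analysis step followed by forecast step."""
    # Analysis step
    S  = H @ Pf @ H.T + R                 # innovation covariance
    K  = Pf @ H.T @ np.linalg.inv(S)      # Kalman gain (Nstate x Nobs)
    xa = xb + K @ (y - H @ xb)            # analysis
    Pa = (np.eye(len(xb)) - K @ H) @ Pf   # analysis error covariance
    # Forecast step (model error term GQG^T kept, not neglected; G = I here)
    xf = M @ xa
    Pf_new = M @ Pa @ M.T + Q             # update of forecast error covariance
    return xa, Pa, xf, Pf_new

Nstate = 3
M  = 0.9 * np.eye(Nstate)
H  = np.eye(1, Nstate)
Q  = 0.01 * np.eye(Nstate)
R  = np.array([[0.1]])
xb = np.zeros(Nstate)
Pf = np.eye(Nstate)

xa, Pa, xf, Pf_new = kf_cycle(xb, Pf, np.array([1.0]), H, R, M, Q)
```

Assimilating the observation reduces the total analysis variance relative to the prior.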

Ensemble Kalman Filter (EnKF) solution

EnKF was first introduced by Evensen (1994) as a Monte Carlo filter. Equations given here follow Evensen (2003).

Analysis step:

Analysis solution defined for each ensemble member i:

x^a_i = x^b_i + P_f^e H^T (H P_f^e H^T + R^e)^{-1} ( y_i - H(x^b_i) )

Mean analysis solution:

x^a = x^b + P_f^e H^T (H P_f^e H^T + R^e)^{-1} ( y - H(x^b) )

Analysis ensemble perturbations:

b^a_i = x^a_i - x^a

Analysis error covariance in ensemble subspace:

(P_a^e)^{1/2} = [ b^a_1  b^a_2  ...  b^a_Nens ]   (an Nstate x Nens matrix whose columns are the analysis perturbations)

P_a^e = 1/(Nens - 1) (P_a^e)^{1/2} [ (P_a^e)^{1/2} ]^T   - sample analysis covariance
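The perturbed-observation analysis step above can be sketched as follows. This is a toy illustration (dimensions, the linear H, and the use of the exact R in place of an ensemble R^e are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
Nstate, Nobs, Nens = 5, 2, 20

H = np.eye(Nobs, Nstate)                          # linear observation operator (toy)
R = 0.1 * np.eye(Nobs)                            # observation error covariance
y = rng.normal(size=Nobs)                         # observations

Xb = rng.normal(size=(Nstate, Nens))              # forecast (background) ensemble
xb = Xb.mean(axis=1, keepdims=True)
Bf = Xb - xb                                      # forecast perturbations b_i^f
Pfe = Bf @ Bf.T / (Nens - 1)                      # sample forecast covariance P_f^e

K = Pfe @ H.T @ np.linalg.inv(H @ Pfe @ H.T + R)  # ensemble Kalman gain

# Each member is updated with its own perturbed observation y_i = y + eps_i
Y  = y[:, None] + rng.multivariate_normal(np.zeros(Nobs), R, size=Nens).T
Xa = Xb + K @ (Y - H @ Xb)                        # per-member analysis x_i^a

xa = Xa.mean(axis=1)                              # mean analysis solution
```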

Ensemble Kalman Filter (EnKF)

Forecast step:

Ensemble forecasts employing a non-linear model M:

x^j_n = M_{n,n-1}(x^j_{n-1}) + G(x^j_{n-1}) w^j_{n-1}

Non-linear forecast perturbations:

b^f_i = M(x^a_i) - M(x^a)

Forecast error covariance calculated using ensemble perturbations:

(P_f^e)^{1/2} = [ b^f_1  b^f_2  ...  b^f_Nens ]   (an Nstate x Nens matrix whose columns are the forecast perturbations)

P_f^e = 1/(Nens - 1) (P_f^e)^{1/2} [ (P_f^e)^{1/2} ]^T   - sample forecast covariance
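The forecast step above is a direct ensemble propagation. A minimal sketch with a toy non-linear model (the model, dimensions, and the omission of the stochastic model error term are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
Nstate, Nens = 4, 30

def M(x):
    return 0.9 * x + 0.05 * x**2          # toy non-linear model (assumption)

Xa = rng.normal(size=(Nstate, Nens))      # analysis ensemble
xa = Xa.mean(axis=1, keepdims=True)       # analysis mean

Bf = M(Xa) - M(xa)                        # non-linear forecast perturbations b_i^f
Pfe_sqrt = Bf / np.sqrt(Nens - 1)         # (P_f^e)^{1/2}, an Nstate x Nens matrix
Pfe = Pfe_sqrt @ Pfe_sqrt.T               # P_f^e = (P_f^e)^{1/2} [(P_f^e)^{1/2}]^T
```

By construction P_f^e is symmetric and positive semi-definite, with rank at most Nens.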

There are many different versions of EnKF:

Monte Carlo EnKF (Evensen 1994; 2003)

EnKF (Houtekamer et al. 1995; 2005; first operational version)

Hybrid EnKF (Hamill and Snyder 2000)

EAKF (Anderson 2001)

ETKF (Bishop et al. 2001)

EnSRF (Whitaker and Hamill 2002)

LEKF (Ott et al. 2004)

All of the above are minimum variance solutions.

MLEF (Zupanski 2005; Zupanski and Zupanski 2006) is a maximum likelihood solution.

Why a maximum likelihood solution? It is more adequate for employing non-Gaussian PDFs (e.g., Fletcher and Zupanski 2006).

Current status of EnKF applications

EnKF has been operational in Canada since January 2005 (Houtekamer et al.), with results comparable to 4d-var.

EnKF is better than 3d-var (experiments with the NCEP T62 GFS; Whitaker et al., THORPEX presentation).

Very encouraging results of EnKF in application to non-hydrostatic, cloud resolving models (Zhang et al., Xue et al.).

Very encouraging results of EnKF for ocean (Evensen et al.), climate (Anderson et al.), and soil hydrology models (Reichle et al.).

Theoretical advantages of ensemble-based DA methods are getting confirmed in an increasing number of practical applications.

Examples of MLEF applications


Maximum Likelihood Ensemble Filter
(Zupanski 2005; Zupanski and Zupanski 2006)

x_n = M_{n,n-1}(x_{n-1}, b_{n-1}, γ_{n-1})   - dynamical model for the standard model state x

b_n = G_{n,n-1}(b_{n-1})   - dynamical model for the model error (bias) b

γ_n = S_{n,n-1}(γ_{n-1})   - dynamical model for the empirical parameters γ

Define the augmented state vector z_n = (x_n, b_n, γ_n) and the augmented dynamical model F:

z_n = F_{n,n-1}(z_{n-1})

Find the optimal solution (augmented analysis) z^a by minimizing J (MLEF method):

J = (1/2) [z - z_b]^T P_f^{-1} [z - z_b] + (1/2) [H(F(z)) - y_obs]^T R^{-1} [H(F(z)) - y_obs] = min
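State augmentation as described above can be sketched in a few lines. This is a minimal toy illustration (the state model and the persistence models for b and γ are assumptions, not the lecture's choices):

```python
import numpy as np

Nx, Nb, Ng = 6, 6, 2                            # dims of state, bias, parameters

def M(x, b, g):
    return 0.9 * x + b + g[0]                    # toy state model using bias and a parameter

def G(b):
    return b                                     # bias persistence model (assumption)

def S(g):
    return g                                     # parameter persistence model (assumption)

def F(z):
    """Augmented model: z_n = F(z_{n-1}) with z = (x, b, gamma)."""
    x, b, g = z[:Nx], z[Nx:Nx + Nb], z[Nx + Nb:]
    return np.concatenate([M(x, b, g), G(b), S(g)])

z = np.concatenate([np.ones(Nx), 0.1 * np.ones(Nb), np.array([0.5, -0.5])])
z_next = F(z)                                    # one step of the augmented dynamics
```

The filter then treats z exactly like an ordinary state vector when minimizing J, so bias and parameters are estimated jointly with x.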

Bias estimation: respiration bias R, using the LPDM carbon transport model (Nstate = 1800, Nobs = 1200, DA interval = 10 days).

[Figure: true R and MLEF estimates with 40 and 100 ensemble members at cycles 1, 3, and 7; the domain with larger bias is typically land, the domain with smaller bias typically ocean.]

Both the magnitude and the spatial patterns of the true bias are successfully captured by the MLEF.


Information measures in ensemble subspace
(Bishop et al. 2001; Wei et al. 2005; Zupanski et al. 2006, subm. to MWR)

Degrees of freedom (DOF) for signal (Rodgers 2000):

d_s = tr[ C (I + C)^{-1} ] = Σ_i λ_i² / (1 + λ_i²)

Shannon information content, or entropy reduction:

h = (1/2) Σ_i ln(1 + λ_i²)

where

C = Z^T Z   - information matrix in ensemble subspace of dim Nens x Nens

λ_i²   - eigenvalues of C

x - x_b = P_f^{1/2} (I + C)^{-1/2} ζ

ζ   - control vector in ensemble space of dim Nens

z_i = R^{-1/2} H[M(x + p^a_i)] - R^{-1/2} H[M(x)] ≈ R^{-1/2} H P_f^{1/2}   - columns of Z (the approximation holds for linear H and M)

x   - model state vector of dim Nstate >> Nens

Errors are assumed Gaussian in these measures.
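These two measures are cheap to evaluate once Z is available, since C lives in the small Nens x Nens ensemble subspace. A toy sketch (the random Z stands in for the scaled observation-space perturbations; dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
Nobs, Nens = 40, 10

Z = rng.normal(size=(Nobs, Nens))     # columns play the role of z_i = R^{-1/2} H P_f^{1/2}
C = Z.T @ Z                           # information matrix, Nens x Nens

lam2 = np.linalg.eigvalsh(C)          # eigenvalues lambda_i^2 >= 0
ds = np.sum(lam2 / (1.0 + lam2))      # DOF for signal: tr[C (I + C)^{-1}]
h  = 0.5 * np.sum(np.log(1.0 + lam2)) # Shannon information content / entropy reduction
```

Note that d_s is bounded by the ensemble size Nens, which is one reason the measure is sensitive to the number of members (as in the GEOS-5 results below).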


GEOS-5 Single Column Model: DOF for signal (Nstate = 80; Nobs = 80; seventy 6-h DA cycles; assimilation of simulated T, q observations)

DOF for signal varies from one analysis cycle to another due to changes in atmospheric conditions. 3d-var approach does not capture this variability.

[Figure: DOF for signal d_s (0 to 45) vs. analysis cycle (1 to 70) for ensemble sizes 10, 20, 40, and 80, and for the 80-member KF and 3d-var runs.]


A small ensemble (10 members), even though not perfect, captures the main data signals.

[Figure: assimilated T observations (K) and q observations (g kg^-1) as functions of vertical level and data assimilation cycle.]

RMS analysis errors for T, q:
10 ens ~ 0.45 K; 0.377 g/kg
20 ens ~ 0.28 K; 0.265 g/kg
40 ens ~ 0.23 K; 0.226 g/kg
80 ens ~ 0.21 K; 0.204 g/kg
No_obs ~ 0.82 K; 0.656 g/kg

d_s = Σ_i λ_i² / (1 + λ_i²)

Non-Gaussian (lognormal) MLEF framework: CSU SWM (Randall et al.)

Cost function derived from the posterior PDF (x - Gaussian, y - lognormal):

J(x) = (1/2) [x - x_b]^T P_f^{-1} [x - x_b] + (1/2) [ln(H(x)/y_obs)]^T R^{-1} [ln(H(x)/y_obs)] + Σ_{i=1}^{N} [ln(H(x)/y_obs)]_i

The first two terms have the normal (Gaussian) form; the final sum is the additional nonlinear term arising from the lognormal observation PDF.

Beneficial impact of the correct PDF assumption - practical advantages.

(Courtesy of M. Zupanski)

[Figure: height RMS error (m, 0 to 16) vs. analysis cycle (1 to 36) for the Gaussian MLEF and the lognormal MLEF (SW model, 520 ensembles, height obs 3_1).]

LognormalMLEF

The nonlinear observation operator: H(x) = exp[ (x - x_ref) / b ]

Future Research Directions

Covariance inflation and localization need further investigation: are these techniques necessary?

Model error and parameter estimation need further attention: do we have sufficient information in the observations to estimate complex model errors?

Information content analysis might shed some light on the DOF of model error, and also on the necessary ensemble size.

Non-Gaussian PDFs have to be included in DA (especially for cloud variables).

Error covariances for cloud variables need to be characterized.

Representativeness error must be accounted for.


References for further reading

Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129, 2884-2903.

Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99 (C5), 10143-10162.

Evensen, G., 2003: The ensemble Kalman filter: theoretical formulation and practical implementation. Ocean Dynamics, 53, 343-367.

Fletcher, S. J., and M. Zupanski, 2006: A data assimilation method for lognormally distributed observational errors. Q. J. Roy. Meteor. Soc. (in press).

Hamill, T. M., and C. Snyder, 2000: A hybrid ensemble Kalman filter/3D-variational analysis scheme. Mon. Wea. Rev., 128, 2905-2919.

Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev., 126, 796-811.

Houtekamer, P. L., H. L. Mitchell, G. Pellerin, M. Buehner, M. Charron, L. Spacek, and B. Hansen, 2005: Atmospheric data assimilation with an ensemble Kalman filter: Results with real observations. Mon. Wea. Rev., 133, 604-620.

Ott, E., and Coauthors, 2004: A local ensemble Kalman filter for atmospheric data assimilation. Tellus, 56A, 415-428.

Tippett, M. K., J. L. Anderson, C. H. Bishop, T. M. Hamill, and J. S. Whitaker, 2003: Ensemble square root filters. Mon. Wea. Rev., 131, 1485-1490.

Whitaker, J. S., and T. M. Hamill, 2002: Ensemble data assimilation without perturbed observations. Mon. Wea. Rev., 130, 1913-1924.

Zupanski, D., and M. Zupanski, 2006: Model error estimation employing an ensemble data assimilation approach. Mon. Wea. Rev., 134, 1337-1354.

Zupanski, M., 2005: Maximum likelihood ensemble filter: Theoretical aspects. Mon. Wea. Rev., 133, 1710-1726.


Thank you.