Talk litvinenko prior_cov


Transcript of Talk litvinenko prior_cov

Page 1: Talk litvinenko prior_cov

Initial covariance matrix for the Kalman Filter

Alexander Litvinenko, Group of Raul Tempone, SRI UQ, and Group of David Keyes,

Extreme Computing Research Center, KAUST


http://sri-uq.kaust.edu.sa/

Page 2: Talk litvinenko prior_cov


Two variants

Either we assume that the matrix of snapshots is given,

[q(x, θ_1), ..., q(x, θ_{n_q})],

or we assume that the covariance function is of a certain type. The Matérn class of covariance functions is defined as

C(r) := C_{ν,ℓ}(r) = (2^{1−ν} σ² / Γ(ν)) (√(2ν) r / ℓ)^ν K_ν(√(2ν) r / ℓ),   (1)

where K_ν is the modified Bessel function of the second kind.
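A minimal sketch (not from the talk) of how (1) can be evaluated numerically, assuming the √(2ν) r/ℓ scaling written above; it uses SciPy's modified Bessel function of the second kind, and the helper name matern_cov is illustrative:

```python
import numpy as np
from scipy.special import gamma, kv  # kv: modified Bessel function of the second kind

def matern_cov(r, sigma=0.5, ell=0.5, nu=1.0):
    """Matern covariance C_{nu,ell}(r) as in Eq. (1), sqrt(2*nu)*r/ell scaling."""
    r = np.atleast_1d(np.abs(np.asarray(r, dtype=float)))
    s = np.sqrt(2.0 * nu) * r / ell
    c = np.full_like(s, sigma**2)          # C(0) = sigma^2 (limit for r -> 0)
    pos = s > 0.0
    c[pos] = sigma**2 * 2.0**(1.0 - nu) / gamma(nu) * s[pos]**nu * kv(nu, s[pos])
    return c

# one of the curves from the figure on the next slide: sigma = 0.5, ell = 0.5, nu = 1
r = np.linspace(-2.0, 2.0, 9)
print(matern_cov(r))   # peaks at C(0) = sigma^2 = 0.25
```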


Page 3: Talk litvinenko prior_cov

[Figure: Matérn covariance functions plotted for r ∈ [−2, 2]. Left panel: ν = 1 with σ = 0.5 and ℓ ∈ {0.5, 0.3, 0.2, 0.1}. Right panel: varying smoothness ν ∈ {0.15, 0.3, 0.5, 1, 2, 30}.]

Figure: Matérn function for different parameters (computed in sglib).


Page 4: Talk litvinenko prior_cov


Types of Matérn covariance

For half-integer smoothness, ν = p + 1/2, the Matérn covariance (with σ = 1) reduces to

C_{ν=p+1/2}(r) = exp(−√(2ν) r / ℓ) · (Γ(p+1) / Γ(2p+1)) · Σ_{i=0}^{p} ((p+i)! / (i!(p−i)!)) (√(8ν) r / ℓ)^{p−i}.   (2)

The most interesting cases are ν = 3/2:

C_{ν=3/2}(r) = (1 + √3 r / ℓ) exp(−√3 r / ℓ)   (3)

and ν = 5/2, for which

C_{ν=5/2}(r) = (1 + √5 r / ℓ + 5r² / (3ℓ²)) exp(−√5 r / ℓ)   (4)
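As a quick consistency check (not part of the talk), the closed forms (3) and (4) can be compared numerically against the general expression (1) with σ = 1; a short sketch with illustrative names:

```python
import numpy as np
from scipy.special import gamma, kv

def matern_general(r, ell, nu):
    """General Matern (1) with sigma = 1, evaluated for r > 0."""
    s = np.sqrt(2.0 * nu) * np.abs(r) / ell
    return 2.0**(1.0 - nu) / gamma(nu) * s**nu * kv(nu, s)

def matern_32(r, ell):                       # closed form (3)
    s = np.sqrt(3.0) * np.abs(r) / ell
    return (1.0 + s) * np.exp(-s)

def matern_52(r, ell):                       # closed form (4)
    r = np.abs(r)
    s = np.sqrt(5.0) * r / ell
    return (1.0 + s + 5.0 * r**2 / (3.0 * ell**2)) * np.exp(-s)

r = np.linspace(1e-6, 2.0, 200)
print(np.max(np.abs(matern_32(r, 0.5) - matern_general(r, 0.5, 1.5))))  # ~ machine precision
print(np.max(np.abs(matern_52(r, 0.5) - matern_general(r, 0.5, 2.5))))  # ~ machine precision
```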


Page 5: Talk litvinenko prior_cov


Comparison

[q(x, θ_1), ..., q(x, θ_{n_q})] ≈ A Bᵀ.
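A minimal sketch of one way such a rank-k factorization A Bᵀ could be obtained from a snapshot matrix, using a truncated SVD on synthetic data; this is only an illustration, not the H-matrix arithmetic behind the table below:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic snapshot matrix Q = [q(x, theta_1), ..., q(x, theta_nq)], size n x nq
n, nq, k = 1000, 200, 10
Q = rng.standard_normal((n, k)) @ rng.standard_normal((k, nq)) \
    + 1e-6 * rng.standard_normal((n, nq))    # almost exactly of rank k

# truncated SVD gives a rank-k factorization Q ~= A @ B.T
U, s, Vt = np.linalg.svd(Q, full_matrices=False)
A = U[:, :k] * s[:k]        # n  x k
B = Vt[:k, :].T             # nq x k

rel_err = np.linalg.norm(Q - A @ B.T, 2) / np.linalg.norm(Q, 2)
print(f"relative spectral-norm error of the rank-{k} factorization: {rel_err:.1e}")
```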

n          rank k   size, MB      t, sec.        ε         max_{i=1..10} |λ_i − λ̃_i|   ε₂
4.0·10³    10       48 / 3        0.8 / 0.08     7·10⁻³    7.0·10⁻² (i = 9)             2.0·10⁻⁴
1.05·10⁴   18       439 / 19      7.0 / 0.4      7·10⁻⁴    5.5·10⁻² (i = 2)             1.0·10⁻⁴
2.1·10⁴    25       2054 / 64     45.0 / 1.4     1·10⁻⁵    5.0·10⁻² (i = 9)             4.4·10⁻⁶

Table: Accuracy of the H-matrix approximation, l1 = l3 = 0.1, l2 = 0.5.


Page 6: Talk litvinenko prior_cov


Storage cost and computing time

(left) l1 = 0.1, l2 = 0.5, n = 2.3·10⁵:

k   size, MB   t, sec.
1   1548       33
2   1865       42
3   2181       50
4   2497       59
6   nem        -

(right) l1 = 0.1, l2 = 0.5, l3 = 0.1, n = 4.61·10⁵:

k    size, MB   t, sec.
4    463        11
8    850        22
12   1236       32
16   1623       43
20   nem        -

Table: Dependence of the computing time and storage requirement on the H-matrix rank k.


Page 7: Talk litvinenko prior_cov


Examples of H-matrix approximation

[Figure: two examples of the H-matrix block structure used to approximate the covariance matrix; the number printed in each block is the rank or size of that block.]


Page 8: Talk litvinenko prior_cov


Kullback-Leibler divergence (KLD)

D_KL(P‖Q) is a measure of the information lost when the distribution Q is used to approximate P:

D_KL(P‖Q) = Σ_i P(i) ln(P(i) / Q(i)),      D_KL(P‖Q) = ∫_{−∞}^{∞} p(x) ln(p(x) / q(x)) dx,

where p and q are the densities of P and Q. For multivariate normal distributions N_0 = N(µ_0, Σ_0) and N_1 = N(µ_1, Σ_1),

2 D_KL(N_0‖N_1) = tr(Σ_1⁻¹ Σ_0) + (µ_1 − µ_0)ᵀ Σ_1⁻¹ (µ_1 − µ_0) − k − ln(det Σ_0 / det Σ_1),

where k is the dimension.
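A small sketch of the Gaussian KLD formula above, using NumPy; all names are illustrative:

```python
import numpy as np

def kld_gaussians(mu0, Sigma0, mu1, Sigma1):
    """D_KL( N(mu0, Sigma0) || N(mu1, Sigma1) ) via the formula above."""
    k = mu0.size
    diff = mu1 - mu0
    trace_term = np.trace(np.linalg.solve(Sigma1, Sigma0))     # tr(Sigma1^{-1} Sigma0)
    maha_term = diff @ np.linalg.solve(Sigma1, diff)           # (mu1-mu0)^T Sigma1^{-1} (mu1-mu0)
    logdet_term = np.linalg.slogdet(Sigma0)[1] - np.linalg.slogdet(Sigma1)[1]
    return 0.5 * (trace_term + maha_term - k - logdet_term)

# sanity check: identical Gaussians give zero divergence
mu = np.zeros(3)
S = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 1.5]])
print(kld_gaussians(mu, S, mu, S))   # ~ 0
```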


Page 9: Talk litvinenko prior_cov


Convergence of the KLD with increasing rank k

k    KLD                    ‖C − C_H‖₂             ‖C (C_H)⁻¹ − I‖₂
     L = 0.25   L = 0.75    L = 0.25   L = 0.75    L = 0.25   L = 0.75
5    0.51       2.3         4.0e-2     0.1         4.8        63
6    0.34       1.6         9.4e-3     0.02        3.4        22
8    5.3e-2     0.4         1.9e-3     0.003       1.2        8
10   2.6e-3     0.2         7.7e-4     7.0e-4      6.0e-2     3.1
12   5.0e-4     2e-2        9.7e-5     5.6e-5      1.6e-2     0.5
15   1.0e-5     9e-4        2.0e-5     1.1e-5      8.0e-4     0.02
20   4.5e-7     4.8e-5      6.5e-7     2.8e-7      2.1e-5     1.2e-3
50   3.4e-13    5e-12       2.0e-13    2.4e-13     4e-11      2.7e-9

Table: Dependence of the KLD on the H-matrix approximation rank k, Matérn covariance with parameters L = {0.25, 0.75} and ν = 0.5, domain G = [0, 1]², ‖C(L=0.25, 0.75)‖₂ = {212, 568}.


Page 10: Talk litvinenko prior_cov


Convergence of the KLD with increasing rank k

k    KLD                    ‖C − C_H‖₂             ‖C (C_H)⁻¹ − I‖₂
     L = 0.25   L = 0.75    L = 0.25   L = 0.75    L = 0.25   L = 0.75
5    nan        nan         0.05       6e-2        2.1e+13    1e+28
10   10         10e+17      4e-4       5.5e-4      276        1e+19
15   3.7        1.8         1.1e-5     3e-6        112        4e+3
18   1.2        2.7         1.2e-6     7.4e-7      31         5e+2
20   0.12       2.7         5.3e-7     2e-7        4.5        72
30   3.2e-5     0.4         1.3e-9     5e-10       4.8e-3     20
40   6.5e-8     1e-2        1.5e-11    8e-12       7.4e-6     0.5
50   8.3e-10    3e-3        2.0e-13    1.5e-13     1.5e-7     0.1

Table: Dependence of the KLD on the H-matrix approximation rank k, Matérn covariance with parameters L = {0.25, 0.75} and ν = 1.5, domain G = [0, 1]², ‖C(L=0.25, 0.75)‖₂ = {720, 1068}.


Page 11: Talk litvinenko prior_cov


Application of large covariance matrices

1. Kriging estimate: ŝ := C_sy C_yy⁻¹ y (see the sketch after this list).

2. Estimation of the variance σ̂, given by the diagonal of the conditional covariance matrix C_ss|y: σ̂ = diag(C_ss − C_sy C_yy⁻¹ C_ys).

3. Geostatistical optimal design: φ_A := n⁻¹ trace(C_ss|y), φ_C := cᵀ (C_ss − C_sy C_yy⁻¹ C_ys) c.
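A minimal 1-D sketch of items 1–3, using a simple exponential (Matérn ν = 1/2) covariance on a small grid and dense linear algebra instead of the H-matrix arithmetic discussed earlier; the locations, data, and nugget term are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

xs = np.linspace(0.0, 1.0, 200)            # prediction locations s
xy = np.array([0.1, 0.35, 0.6, 0.9])       # measurement locations
y = np.sin(2.0 * np.pi * xy) + 0.05 * rng.standard_normal(xy.size)

def exp_cov(a, b, ell=0.2, sigma=1.0):
    """Exponential (Matern nu = 1/2) covariance between point sets a and b."""
    return sigma**2 * np.exp(-np.abs(a[:, None] - b[None, :]) / ell)

Css = exp_cov(xs, xs)
Csy = exp_cov(xs, xy)
Cyy = exp_cov(xy, xy) + 1e-4 * np.eye(xy.size)   # small nugget keeps the solve stable

# 1. kriging estimate  s_hat = Csy Cyy^{-1} y
s_hat = Csy @ np.linalg.solve(Cyy, y)

# 2. conditional variance  diag(Css - Csy Cyy^{-1} Cys)
cond_var = np.diag(Css - Csy @ np.linalg.solve(Cyy, Csy.T))

# 3. design criterion  phi_A = n^{-1} trace(C_ss|y)
phi_A = cond_var.sum() / xs.size
print(s_hat[:3], cond_var[:3], phi_A)
```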


Page 12: Talk litvinenko prior_cov


Mean and variance in the rank-k format

ū := (1/Z) Σ_{i=1}^{Z} u_i = (1/Z) Σ_{i=1}^{Z} A b_i = A b̄,   where b̄ := (1/Z) Σ_{i=1}^{Z} b_i.   (5)

Cost is O(k(Z + n)).

C = (1/(Z−1)) W_c W_cᵀ ≈ (1/(Z−1)) U_k Σ_k Σ_kᵀ U_kᵀ.   (6)

Cost is O(k²(Z + n)).

Lemma: Let ‖W − W_k‖₂ ≤ ε, and let ū_k and C_k be the rank-k approximations of the mean ū and the covariance C. Then
a) ‖ū − ū_k‖ ≤ ε / √Z,
b) ‖C − C_k‖ ≤ ε² / (Z − 1).
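A short synthetic sketch of the rank-k computations (5)–(6), with a numerical check of bound b) of the lemma for an SVD-truncated centered ensemble; the ensemble itself is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, Z, k = 500, 100, 10

# synthetic ensemble W = [u_1, ..., u_Z], approximately of rank k
W = rng.standard_normal((n, k)) @ rng.standard_normal((k, Z)) \
    + 1e-3 * rng.standard_normal((n, Z))

u_bar = W.mean(axis=1)                      # mean, Eq. (5)
Wc = W - u_bar[:, None]                     # centered ensemble W_c
C = Wc @ Wc.T / (Z - 1)                     # sample covariance, Eq. (6)

# rank-k representation via a truncated SVD of the centered ensemble
U, s, _ = np.linalg.svd(Wc, full_matrices=False)
Uk, sk = U[:, :k], s[:k]
Ck = (Uk * sk**2) @ Uk.T / (Z - 1)          # = U_k Sigma_k Sigma_k^T U_k^T / (Z - 1)

eps = s[k]                                  # = ||Wc - (Wc)_k||_2 for the SVD truncation
print(np.linalg.norm(C - Ck, 2), eps**2 / (Z - 1))   # bound b) holds with equality here
```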
