Transcript of "Nonlinear model reduction" (Sandia National Laboratories)
Source: ktcarlb/presentations/kc_16_04_1_plenary_morml.pdf

Page 1

Nonlinear model reduction: discrete optimality, h-adaptivity, and error surrogates

Kevin Carlberg

Sandia National Laboratories

MORML, April 1, 2016

Page 2

Computational barrier at Sandia

CFD model

100 million cells, 200,000 time steps

High simulation costs

6 weeks on 5,000 cores; 6 runs maxes out Cielo

Barrier

Fast-turnaround design
Uncertainty quantification

Objective: break barrier via nonlinear model reduction

Page 3

ROM: state of the art [Benner et al., 2015]

Linear time-invariant systems: mature [Antoulas, 2005]

Balanced truncation [Moore, 1981]

Empirical balanced truncation [Willcox and Peraire, 2002, Rowley, 2005]

Moment matching [Bai, 2002, Freund, 2003, Gallivan et al., 2004, Baur et al., 2011]

Loewner framework [Lefteriu and Antoulas, 2010, Ionita and Antoulas, 2014]

+ Reliable: guaranteed stability, a priori error bounds
+ Certified: sharp, computable a posteriori error bounds

Elliptic/parabolic PDEs (FEM): mature [Rozza et al., 2008]

Reduced-basis method [Prud'Homme et al., 2001, Veroy et al., 2003, Barrault et al., 2004]

Subsystem-based reduced-basis method [Maday and Rønquist, 2002, Phuong Huynh et al., 2013, Eftang and Patera, 2013]

+ Reliable: a priori error bounds
+ Certified: sharp, computable a posteriori error bounds

Nonlinear dynamical systems: unproven

Proper orthogonal decomposition (POD)–Galerkin

- Not reliable: stability and accuracy not guaranteed
- Not certified: error bounds not sharp

Page 4

Our research goal

Nonlinear model-reduction methods that are accurate, low cost, certified, and reliable.

+ Accuracy

Improve projection technique
Preserve problem structure

+ Low cost

Sample-mesh approach
Leverage time-domain data

+ Certification

Error bounds
Statistical error modeling

+ Reliability

A posteriori h-refinement

Page 5

Our research goal

Nonlinear model-reduction methods that are accurate, low cost, certified, and reliable.

+ Accuracy

Improve projection technique [C. et al., 2011a, C. et al., 2015a]
Preserve problem structure

+ Low cost

Sample-mesh approach
Leverage time-domain data

+ Certification

Error bounds [C. et al., 2015a]
Statistical error modeling

+ Reliability

A posteriori h-refinement

Collaborators: M. Barone (Sandia), H. Antil (GMU)

Page 6

POD–Galerkin: offline data collection

dx/dt = f(x; t, µ); x(0, µ) = x0(µ), t ∈ [0, T], µ ∈ D

1 Collect ‘snapshots’ of the state

Page 7

POD–Galerkin: offline data collection

2 Data compression

Compute SVD: [X1 X2 X3] = U Σ V^T

Truncate: Φ = [u1 · · · up]
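A minimal sketch of this compression step in Python/NumPy, assuming the snapshot blocks X1, X2, X3 have already been collected and stacked column-wise; the function name and truncation rule are illustrative, not the code used for the results in this talk.

```python
import numpy as np

def pod_basis(snapshots, p):
    """Compute a p-dimensional POD basis from a snapshot matrix (N x n_snapshots)."""
    # Thin SVD of the stacked snapshots: X = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    # Truncate: keep the p dominant left singular vectors
    return U[:, :p]

# e.g., Phi = pod_basis(np.hstack([X1, X2, X3]), p=204)
```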

Page 8

POD–Galerkin: online projection

Full-order model: dx/dt = f(x; t, µ), x(0, µ) = x0(µ)

1 x(t) ≈ x̃(t) = Φ x̂(t)

2 Φ^T (f(x̃; t, µ) − dx̃/dt) = 0

Galerkin ROM: dx̂/dt = Φ^T f(Φx̂; t, µ), x̂(0, µ) = Φ^T x0(µ)
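A hedged sketch of the online stage: integrate the Galerkin ROM ODE above with an off-the-shelf integrator. The velocity f, basis Phi, and initial state x0 are assumed to be supplied by the FOM code; this is illustrative glue, not the AERO-F implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def galerkin_rom(f, Phi, x0, t_span, mu):
    """Integrate dxhat/dt = Phi^T f(Phi xhat; t, mu) with xhat(0) = Phi^T x0."""
    xhat0 = Phi.T @ x0                                  # project the initial condition
    rhs = lambda t, xhat: Phi.T @ f(Phi @ xhat, t, mu)  # reduced velocity
    sol = solve_ivp(rhs, t_span, xhat0)                 # any time discretization could be used
    return sol.t, Phi @ sol.y                           # reconstructed approximate states
```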

Page 9

Cavity-flow problem. Collaborator: M. Barone (SNL)

Unsteady Navier–Stokes

DES turbulence model

1.2 million degrees of freedom

Re = 6.3 × 10^6

M∞ = 0.6

CFD code: AERO-F [Farhat et al., 2003]

Page 10

Full-order model responses

vorticity field

pressure field

Page 11

POD–Galerkin failure

[Figure: ROM pressure responses vs. time for time steps ∆t from 9.375e-05 to 0.024, compared with the FOM at ∆t = 0.0015, over two time intervals.]

- Galerkin ROMs unstable

Page 12

How to construct a ROM for nonlinear dynamical systems?

Optimize then discretize? (Galerkin)

Discretize then optimize? (Least-squares Petrov–Galerkin)

Galerkin path: Full-order model ODE → (optimal projection) → Galerkin ROM ODE → (time discretization) → Galerkin ROM O∆E

LSPG path: Full-order model ODE → (time discretization) → Full-order model O∆E → (optimal projection) → LSPG ROM O∆E

Outstanding questions:

1 Which notion of optimality is better in practice?
2 Discrete-time error bounds?
3 Time step selection?

Page 13

Full-order model ODE → (time discretization) → Full-order model O∆E

Page 14

Full-order model (FOM) ODE: time continuous

dx/dt = f(x, t), x(0) = x0, t ∈ [0, T]

O∆E, linear multistep schemes: r^n(x^n) = 0, n = 1, ..., N

r^n(x) := α_0 x − ∆t β_0 f(x, t^n) + ∑_{j=1}^{k} α_j x^{n−j} − ∆t ∑_{j=1}^{k} β_j f(x^{n−j}, t^{n−j})

x^n = x̃^n (explicit state update)

O∆E, Runge–Kutta schemes: r^n_i(x̃^n_1, ..., x̃^n_s) = 0, i = 1, ..., s

r^n_i(x̃_1, ..., x̃_s) := x̃_i − f(x^{n−1} + ∆t ∑_{j=1}^{s} a_ij x̃_j, t^{n−1} + c_i ∆t)

x^n = x^{n−1} + ∆t ∑_{i=1}^{s} b_i x̃^n_i (explicit state update)

This talk focuses on linear multistep schemes.
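For concreteness, a small sketch of the residual for the simplest linear-multistep member, backward Euler (k = 1, α_0 = 1, α_1 = −1, β_0 = 1, β_1 = 0); this is an assumed illustration, not code from the talk.

```python
import numpy as np

def residual_backward_euler(x, x_prev, t, dt, f, mu):
    """Backward-Euler O∆E residual: r^n(x) = x - x^{n-1} - dt * f(x, t^n)."""
    return x - x_prev - dt * f(x, t, mu)
```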

Page 15

Galerkin ROM: first optimize

Full-order model ODE → (Galerkin projection) → Galerkin ROM ODE

Full-order model ODE → (time discretization) → Full-order model O∆E

Page 16

Galerkin: first optimize, then discretize

Full-order model ODE → (Galerkin projection) → Galerkin ROM ODE → (time discretization) → Galerkin ROM O∆E

Full-order model ODE → (time discretization) → Full-order model O∆E

Page 17

Galerkin ROM ODE:

dx̂/dt = Φ^T f(Φx̂, t), x̂(0) = Φ^T x0, t ∈ [0, T]

+ Continuous velocity dx̂/dt is optimal

Theorem (Galerkin ROM: continuous optimality)

The Galerkin ROM velocity minimizes the time-continuous FOM residual:

dx̃/dt(x̃, t) = arg min_{v ∈ range(Φ)} ‖v − f(x̃, t)‖_2^2

O∆E: r̂^n(x̂^n) = 0, n = 1, ..., N

r̂^n(x̂) := α_0 x̂ − ∆t β_0 Φ^T f(Φx̂, t^n) + ∑_{j=1}^{k} α_j x̂^{n−j} − ∆t ∑_{j=1}^{k} β_j Φ^T f(Φx̂^{n−j}, t^{n−j})

- Discrete state x̂^n is not generally optimal

Can we fix this? Will doing so help?

Page 18

LSPG ROM: first discretize, then optimize

Full-order model ODE → (Galerkin projection) → Galerkin ROM ODE → (time discretization) → Galerkin ROM O∆E

Full-order model ODE → (time discretization) → Full-order model O∆E → (Galerkin projection) → Galerkin ROM O∆E, or → (Petrov–Galerkin projection) → LSPG ROM O∆E

Page 19

LSPG ROM

FOM O∆E: r^n(x^n) = 0, n = 1, ..., N

LSPG ROM O∆E:

x̂^n = arg min_{z ∈ R^p} ‖A r^n(Φz)‖_2^2

⇕

Ψ^n(x̂^n)^T r^n(Φx̂^n) = 0, with Ψ^n(x̂) := A^T A (∂r^n/∂x)(Φx̂) Φ

A = I : LSPG [LeGresley, 2006, Bui-Thanh et al., 2008, C. et al., 2011a]

+ Discrete solution is optimal
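A minimal Gauss–Newton sketch of one LSPG step, i.e. solving x̂^n = argmin_z ‖A r^n(Φz)‖_2. The residual and Jacobian callables (e.g., the backward-Euler residual shown earlier) and the weighting A are assumptions supplied by the caller; A = None recovers A = I.

```python
import numpy as np

def lspg_step(res, jac, Phi, z0, A=None, max_iter=20, tol=1e-10):
    """Minimize ||A r(Phi z)||_2 over z by Gauss-Newton."""
    z = z0.copy()
    for _ in range(max_iter):
        r = res(Phi @ z)                  # full-order residual at the reconstructed state
        J = jac(Phi @ z) @ Phi            # (dr/dx)(Phi z) Phi, the LSPG test-basis ingredient
        if A is not None:
            r, J = A @ r, A @ J
        dz, *_ = np.linalg.lstsq(J, -r, rcond=None)   # least-squares search direction
        z = z + dz
        if np.linalg.norm(dz) < tol:
            break
    return z
```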

Page 20

Does the LSPG ROM have a time-continuous representation?

Full-order model ODE → (?) → LSPG ROM ODE? → (time discretization) → LSPG ROM O∆E

Full-order model ODE → (Galerkin projection) → Galerkin ROM ODE → (time discretization) → Galerkin ROM O∆E

Full-order model ODE → (time discretization) → Full-order model O∆E → (Galerkin projection) → Galerkin ROM O∆E, or → (Petrov–Galerkin projection) → LSPG ROM O∆E

Page 21

Does the LSPG ROM have a time-continuous representation?

Sometimes.

Full-order model ODE → (Petrov–Galerkin projection) → LSPG ROM ODE → (time discretization) → LSPG ROM O∆E

Full-order model ODE → (Galerkin projection) → Galerkin ROM ODE → (time discretization) → Galerkin ROM O∆E

Full-order model ODE → (time discretization) → Full-order model O∆E → (Galerkin projection) → Galerkin ROM O∆E, or → (Petrov–Galerkin projection) → LSPG ROM O∆E

Page 22

LSPG ROM: continuous representation

Theorem

The LSPG ROM is equivalent to applying a Petrov–Galerkin projection to the FOM ODE with test basis

Ψ(x̂, t) = A^T A (α_0 I − ∆t β_0 (∂f/∂x)(x0 + Φx̂, t)) Φ

if

1 βj = 0, j ≥ 1 (e.g., a single-step method),

2 the velocity f is linear in the state, or

3 β0 = 0 (i.e., explicit schemes).

Time-continuous test basis depends on time-discretization parameters!

Page 23

Are the two approaches ever equivalent?

Galerkin: ΦT rn (Φxn) = 0

LSPG: Ψn(xn)T rn (Φxn) = 0

Does Ψn(xn) = Φ ever?

Yes.

Ψ^n(x̂) := A^T A (∂r^n/∂x)(Φx̂) Φ = A^T A (α_0 I − ∆t β_0 (∂f/∂x)(Φx̂, t^n)) Φ

Theorem

The two approaches are equivalent (Ψn(x) = Φ)

1 in the limit of ∆t → 0 with A = 1/√α0I ,

2 if the scheme is explicit (β0 = 0) with A = 1/√α0I , or

3 if ∂r^n/∂x is positive definite with [∂r^n/∂x]^{−1} = A^T A.

Page 24

Runge–Kutta

Ψ^n_ij(x̂_1, ..., x̂_s) := A_i^T A_i (∂r^n_i/∂x̂_j)(x̂_1, ..., x̂_s)

= A_i^T A_i (I δ_ij − ∆t a_ij (∂f/∂x)(x^{n−1} + ∆t Φ ∑_{k=1}^{s} a_ik x̂_k, t^{n−1} + c_i ∆t)) Φ

Corollary

Galerkin projection is discrete-optimal (i.e., Ψ^n(x̂) = Φ) for Runge–Kutta schemes with A_i = I in the limit of ∆t → 0.

Page 25

Discrete-time error bound

Theorem

If the following conditions hold:

1 f (·, t) is Lipschitz continuous with Lipschitz constant κ, and

2 ∆t is such that 0 < h := |α0| − |β0|κ∆t,

then

‖δx^n_G‖ ≤ (∆t/h) ∑_{ℓ=0}^{k} |β_ℓ| ‖(I − V) f(x0 + Φx̂^{n−ℓ}_G)‖ + (1/h) ∑_{ℓ=1}^{k} (|β_ℓ| κ ∆t + |α_ℓ|) ‖δx^{n−ℓ}_G‖

‖δx^n_L‖ ≤ (∆t/h) ∑_{ℓ=0}^{k} |β_ℓ| ‖(I − P^n) f(x0 + Φx̂^{n−ℓ}_L)‖ + (1/h) ∑_{ℓ=1}^{k} (|β_ℓ| κ ∆t + |α_ℓ|) ‖δx^{n−ℓ}_L‖,

with

δx^n_G := x^n_⋆ − Φx̂^n_G, δx^n_L := x^n_⋆ − Φx̂^n_L, V := ΦΦ^T, P^n := Φ((Ψ^n)^T Φ)^{−1} (Ψ^n)^T

Page 26

LSPG ROM yields a smaller error bound

Theorem (Backward Euler)

If conditions (1) and (2) hold, then

‖δx^n_G‖ ≤ ∆t ∑_{j=0}^{n−1} (1/h^{j+1}) ‖(I − V) f(x0 + Φx̂^{n−j}_G)‖, with summands denoted ε^{n−j}_G

‖δx^n_L‖ ≤ ∆t ∑_{j=0}^{n−1} (1/h^{j+1}) ‖(I − P^{n−j}) f(x0 + Φx̂^{n−j}_L)‖, with summands denoted ε^{n−j}_L

ε^k_G = ‖Φx̂^k_G − ∆t f(x0 + Φx̂^k_G) − Φx̂^{k−1}_G‖

ε^k_L = ‖Φx̂^k_L − ∆t f(x0 + Φx̂^k_L) − Φx̂^{k−1}_L‖ = min_y ‖Φy − ∆t f(x0 + Φy) − Φx̂^{k−1}_L‖

Corollary (LSPG smaller error bound)

If x̂^{k−1}_L = x̂^{k−1}_G, then ε^k_L ≤ ε^k_G.

Page 27

LSPG ROM has an interesting time-step dependence

Corollary (Backward Euler)

Define

∆x̂^j_L := x̂^j_L − x̂^{j−1}_L, and

∆x^j: full-space solution increment from x^{j−1}_L.

Then, the LSPG error can also be bounded as

‖δx^n_L‖ ≤ ∆t (1 + κ∆t) ∑_{j=0}^{n−1} (µ^{n−j}/h^{j+1}) ‖f(x^{j−1}_L + ∆x^{n−j})‖

with µ^j := ‖Φ∆x̂^j_L − ∆x^j‖ / ‖∆x^j‖.

Effect of decreasing ∆t:

+ The terms ∆t(1 + κ∆t) and 1/h^{j+1} decrease

- The number of total time instances n increases

? The term µ^{n−j} may increase or decrease, depending on the spectral content of the basis Φ

Page 28

Galerkin and LSPG responses for basis dimension p = 204

[Figure: pressure vs. time for (a) Galerkin and (b) LSPG ROMs with time steps ∆t from 9.375e-05 to 0.024, compared with the FOM at ∆t = 0.0015.]

- Galerkin ROMs unstable for long time intervals

+ LSPG ROMs accurate and stable (most time steps)

Page 29

LSPG ROM: superior performance

[Figure: error in time-averaged pressure vs. ∆t for the Galerkin and minimum-residual (LSPG) ROMs over three time intervals: (c) 0 ≤ t ≤ 0.55, (d) 0 ≤ t ≤ 1.1, (e) 0 ≤ t ≤ 1.54.]

X LSPG ROM yields a smaller error for all time intervals and time steps.

Page 30

LSPG performance (t ≤ 12.5 sec)

[Figure: error in time-averaged pressure and simulation time (seconds) vs. ∆t for basis dimensions p = 204, 368, and 564.]

X An intermediate ∆t produces the lowest error and better speedup.

p = 564 case:

∆t = 1.875 × 10^{−4} s: relative error = 1.40%, time = 289 hrs

∆t = 1.5 × 10^{−3} s: relative error = 0.095%, time = 35.8 hrs

Page 31

Summary: Improve projection technique

Discrete optimality (LSPG) outperforms continuous optimality (Galerkin) in practice

Equivalence conditions

1 Limit of ∆t → 0
2 Explicit schemes
3 Positive-definite residual Jacobians

Discrete-time error bounds

LSPG ROM yields a smaller error bound than Galerkin
Ambiguous role of the time step ∆t

Numerical experiments

LSPG ROM yields a smaller error than Galerkin
Equivalent as ∆t → 0
Error minimized for intermediate ∆t

Reference: C., Barone, and Antil. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction. arXiv e-print (1504.03749), 2015.

Page 32

Our research goal

Nonlinear model-reduction methods that are accurate, low cost, certified, and reliable.

+ Accuracy

Improve projection technique
Preserve problem structure

+ Low cost

Sample-mesh approach [C. et al., 2011b, C. et al., 2013]
Leverage time-domain data

+ Certification

Error bounds
Statistical error modeling

+ Reliability

A posteriori h-refinement

Collaborators: C. Farhat, J. Cortial (Stanford)

Page 33

LSPG performance (t ≤ 2.5 sec)

[Figure: error in time-averaged pressure and simulation time (seconds) vs. ∆t for basis dimensions p = 204, 368, and 564.]

+ Always sub-3% errors

- More expensive than the FOM

FOM simulation: 1 hour, 48 CPU
LSPG ROM simulation (fastest): 1.3 hours, 48 CPU

Page 34

Hyper-reduction via Gappy POD [Everson and Sirovich, 1995]

x̂^n = arg min_{z ∈ R^p} ‖A r^n(Φz)‖_2^2.

Can we select A to make this inexpensive?

1. r^n(x) ≈ r̃^n(x) = Φ_R r̂^n(x)

2. r̂^n(x) = arg min_r ‖PΦ_R r − P r^n(x)‖_2

x̂^n = arg min_{z ∈ R^p} ‖r̃^n(Φz)‖_2^2 = arg min_{z ∈ R^p} ‖Φ_R r̂^n(Φz)‖_2^2 = arg min_{z ∈ R^p} ‖r̂^n(Φz)‖_2^2

= arg min_{z ∈ R^p} ‖(PΦ_R)^+ P r^n(Φz)‖_2^2, i.e., A = (PΦ_R)^+ P

+ GNAT: A = (PΦ_R)^+ P leads to low cost

Offline: Construct ΦR (POD) and P (greedy method)
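A sketch of forming the operator A = (PΦ_R)^+ P, assuming the residual basis Φ_R and the sampled row indices (the action of P) are already available; the greedy selection of the samples is not shown.

```python
import numpy as np

def gnat_operator(Phi_r, sample_idx):
    """Return a function applying A = (P Phi_r)^+ P to sampled residual entries.

    Phi_r: residual POD basis (N x m); sample_idx: row indices chosen by the greedy method.
    """
    PPhi_r_pinv = np.linalg.pinv(Phi_r[sample_idx, :])     # (P Phi_r)^+
    return lambda r_sampled: PPhi_r_pinv @ r_sampled       # only sampled entries are needed

# Online, the LSPG objective uses gnat(r[sample_idx]) in place of the full residual r.
```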

Page 35

Sample mesh: HPC implementation

x̂^n = arg min_{z ∈ R^p} ‖(PΦ_R)^+ P r^n(Φz)‖_2^2

Key: GNAT samples only a few entries of the residual, P r^n

Idea: Extract a minimal subset of the mesh

Related: subgrid [Haasdonk et al., 2008], reduced integration domain [Ryckelynck, 2005]

Sample mesh: 4.1% nodes, 3.0% cells

+ Small problem size: can run on many fewer cores

Page 36

GNAT performance (t ≤ 12.5 sec)

[Figure: vorticity and pressure fields computed by the GNAT ROM and the FOM.]

+ < 1% error in time-averaged drag

+ 229x CPU-hour savings

FOM: 5 hours × 48 CPU
GNAT ROM: 32 min × 2 CPU

Page 37

Our research goal

Nonlinear model-reduction methods that are accurate, low cost, certified, and reliable.

+ Accuracy

Improve projection technique
Preserve problem structure

+ Low cost

Sample-mesh approach
Leverage time-domain data [C. et al., 2015b]

+ Certification

Error bounds
Statistical error modeling

+ Reliability

A posteriori h-refinement

Collaborators: L. Brencher, B. Haasdonk, A. Barth (U Stuttgart)

Page 38

Our research goal

Nonlinear model-reduction methods that are accurate, low cost, certified, and reliable.

+ Accuracy

Improve projection technique
Preserve problem structure

+ Low cost

Sample-mesh approach
Leverage time-domain data

+ Certification

Error bounds
Statistical error modeling

+ Reliability

+ Reliability

A posteriori h-refinement [C., 2015]

Page 39

GNAT performance (t ≤ 12.5 sec)

[Figure: vorticity and pressure fields computed by the GNAT ROM and the FOM.]

+ < 1% error in time-averaged drag

+ 229x CPU-hour savings

FOM: 5 hours × 48 CPU
GNAT ROM: 32 min × 2 CPU

- However, ROMs are not guaranteed to be accurate.

ROM accuracy is limited by the information in Φ.

Page 40

Example: inviscid Burgers equation [Rewienski, 2003]

∂u(x, τ)/∂τ + (1/2) ∂(u^2(x, τ))/∂x = 0.02 e^{0.02x}

u(0, τ) = 3, ∀τ > 0

u(x, 0) = 1, ∀x ∈ [0, 100],

Discretization: Godunov’s scheme

Simulate τ ∈ [0, 50]

FOM: 250 degrees of freedom

ROM: 150 degrees of freedom

Φ constructed via POD using snapshots in τtrain ∈ [0, 2.5]
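A small sketch of such a FOM (first-order finite-volume discretization with the Godunov flux, explicit time stepping, and the source and boundary data above); the grid size matches the 250-DOF setup, while the time step is an illustrative assumption chosen only to keep the example self-contained.

```python
import numpy as np

def godunov_flux(u_left, u_right):
    """Godunov flux for the convex flux f(u) = u^2 / 2 (sonic point at u = 0)."""
    f_left = 0.5 * np.maximum(u_left, 0.0) ** 2
    f_right = 0.5 * np.minimum(u_right, 0.0) ** 2
    return np.maximum(f_left, f_right)

def burgers_fom(n=250, length=100.0, t_final=50.0, dt=0.05):
    """Explicit Godunov FOM for u_t + (u^2/2)_x = 0.02 exp(0.02 x), u(0,.)=3, u(.,0)=1."""
    dx = length / n
    x = (np.arange(n) + 0.5) * dx
    u = np.ones(n)                                  # initial condition u(x, 0) = 1
    source = 0.02 * np.exp(0.02 * x)
    snapshots = [u.copy()]
    for _ in range(int(t_final / dt)):
        left = np.concatenate(([3.0], u))           # inflow boundary u(0, tau) = 3
        right = np.concatenate((u, [u[-1]]))        # simple outflow extrapolation
        flux = godunov_flux(left, right)            # n + 1 interface fluxes
        u = u + dt * (-(flux[1:] - flux[:-1]) / dx + source)
        snapshots.append(u.copy())
    return x, np.column_stack(snapshots)
```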

Page 41

ROM accuracy limited by relevance of training data

[Figure: FOM and ROM solutions u vs. x; the ROM solution deviates visibly from the FOM.]

- ROM inaccurate when outside predictive domain of Φ

Page 42

Existing ROM adaptation methods

A priori adaptation: unique ROM for separate regions of the

input space [Amsallem and Farhat, 2008, Amsallem et al., 2009, Eftang et al., 2010, Eftang et al., 2011, Haasdonk et al., 2011, Drohmann et al., 2011, Peherstorfer et al., 2014]

time domain [Drohmann et al., 2011, Dihlmann et al., 2011]

state space [Amsallem et al., 2012, Washabaugh et al., 2012, Peherstorfer et al., 2014]

+ Reduces the dimension of the ROM
- No mechanism to improve the ROM a posteriori

A posteriori adaptation

Revert to the FOM, solve it, and add the solution to the basis [Eldred et al., 2009, Arian et al., 2000, Ryckelynck, 2005]

+ Improves the ROM a posteriori
- Incurs large-scale operations

Goal: Cheap, a posteriori improvement of the ROM

Page 43

Main idea: ROM analog to mesh-adaptive h-refinement

‘Split’ basis vectors

[Figure: analogy between finite-element h-refinement and ROM h-refinement.]

Generate hierarchical subspaces

ROM converges to the FOM

Page 44

h-refinement ingredients

1 Adaptive algorithm

2 Refinement

[Figure: finite-element h-refinement vs. ROM h-refinement (basis-vector splitting).]

3 Error indicators

[Figure: dual solve and prolongation in finite-element h-refinement vs. ROM h-refinement.]

Page 45

Ingredient 1: Adaptive algorithm

1 Adaptive algorithm

2 Refinement

[Figure: finite-element h-refinement vs. ROM h-refinement (basis-vector splitting).]

3 Error indicators

[Figure: dual solve and prolongation in finite-element h-refinement vs. ROM h-refinement.]

Page 46

Adaptive algorithm

Algorithm 1 Outer loop

Input: time step n, current basis Φ
Output: updated basis Φ, generalized state x̂^n

1: Solve Φ^T r^n(Φx̂^n; µ) = 0 for the current ROM solution x̂^n.
2: if the estimate of the output error δs is 'too large' then
3:   Refine basis: Φ ← Refine(Φ, x̂^n).
4:   Return to Step 1.
5: end if
6: if mod(n, n_reset) = 0 then
7:   Reset basis: Φ ← Φ^(0).
8: end if

Page 47

Adaptive algorithm

Algorithm 2 Refine

Input: initial basis Φ, reduced solution x̂
Output: refined basis Φ

1: Compute the prolongation operator I_H^h and fine basis Φ_h (Ingredient 2)
2: Solve: Compute the coarse adjoint solution (Ingredient 3)
3: Estimate: Compute fine error indicators (Ingredient 3)
4: Mark: Identify the set I of basis vectors to refine
5: for i ∈ I do
6:   Refine: Split φ_i into child vectors
7: end for
8: Compute a QR factorization with column pivoting: ΦΠ = QR
9: Ensure a full-rank matrix: Φ ← Φ[π_1 ··· π_r]
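Steps 8–9 can be sketched with SciPy's pivoted QR; the rank tolerance is an assumption, and the routine simply keeps the leading pivoted columns so the refined basis stays full rank.

```python
import numpy as np
from scipy.linalg import qr

def ensure_full_rank(Phi, tol=1e-10):
    """Discard linearly dependent columns via QR with column pivoting."""
    _, R, piv = qr(Phi, mode='economic', pivoting=True)
    diag = np.abs(np.diag(R))
    rank = int(np.sum(diag > tol * diag[0]))        # numerical rank estimate
    return Phi[:, piv[:rank]]
```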

Page 48

Ingredient 2: Refinement

1 Adaptive algorithm

2 Refinement

[Figure: finite-element h-refinement vs. ROM h-refinement (basis-vector splitting).]

3 Error indicators

[Figure: dual solve and prolongation in finite-element h-refinement vs. ROM h-refinement.]

Page 49

Tree data structure

[Figure: example tree with ten nodes d = 1, ..., 10; each node d lists its children C(d) and element set E(d) (see the N = 6 example on the next slide).]

Tree data structure with m nodes
child function C : N(m) → P(N(m))
element function E : N(m) → P(N(N))

Requirements

1 Root node includes all elements E(1)
2 Each element has a single leaf node
3 Disjoint support of children: E(j) ∩ E(k) = ∅, ∀ j ≠ k ∈ C(i)
4 ∪_{j ∈ C(i)} E(j) = E(i)

+ 1–2 ensure the ROM converges to the FOM
+ 4 ensures hierarchical refined subspaces

Page 50

Tree example: N = 6

d = 1: C(1) = {2, 3}, E(1) = {1, ..., 6}
d = 2: C(2) = {4, 5, 6}, E(2) = {1, 3, 4}
d = 3: C(3) = {7, 8}, E(3) = {2, 5, 6}
d = 4: C(4) = ∅, E(4) = {1}
d = 5: C(5) = ∅, E(5) = {3}
d = 6: C(6) = ∅, E(6) = {4}
d = 7: C(7) = ∅, E(7) = {2}
d = 8: C(8) = {9, 10}, E(8) = {5, 6}
d = 9: C(9) = ∅, E(9) = {5}
d = 10: C(10) = ∅, E(10) = {6}

Φ^(0) = [φ^(0)_11  φ^(0)_21  φ^(0)_31  φ^(0)_41  φ^(0)_51  φ^(0)_61]^T  →

Φ =
[ φ^(0)_11    0          0          0
  0          φ^(0)_21    0          0
  φ^(0)_31    0          0          0
  φ^(0)_41    0          0          0
  0          0          φ^(0)_51    0
  0          0          0          φ^(0)_61 ]
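A sketch of the splitting itself, assuming each tree node stores the (0-based) state-variable indices in its element set E(·): splitting a vector against a node produces one child vector per child node, each supported only on that child's elements.

```python
import numpy as np

def split_vector(phi, child_element_sets):
    """Split basis vector phi into child vectors with disjoint supports.

    child_element_sets: list of index arrays E(j) for the children j in C(d).
    """
    children = []
    for elements in child_element_sets:
        child = np.zeros_like(phi)
        child[elements] = phi[elements]      # keep phi only on this child's element set
        children.append(child)
    return np.column_stack(children)

# Slide example (0-based): splitting the single vector of Phi^(0) against the element
# sets {0, 2, 3}, {1}, {4}, {5} produces the 6 x 4 matrix Phi shown above.
```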

Page 51

Tree construction: state variables that are strongly correlated or anticorrelated should reside in the same tree node.

1 Normalize the state-variable observation history
2 If the first observation is negative, flip over the origin
3 Recursively apply k-means clustering

[Figure: state-variable observation history (variable index labeled), (j) before and (k) after modification.]

Page 52

Refinement machinery

Φ_H = Φ_h I_H^h

coarse basis Φ_H ∈ R^{N×p}
fine basis Φ_h ∈ R^{N×q} with q = ∑_{i=1}^{p} |C(d_i)|
prolongation operator I_H^h ∈ {0, 1}^{q×p}
prolongated generalized coordinates x̂_H^h = I_H^h x̂_H
restriction operator I_h^H = (I_H^h)^+

Page 53

Ingredient 3: Error indicators

1 Adaptive algorithm

2 Refinement

[Figure: finite-element h-refinement vs. ROM h-refinement (basis-vector splitting).]

3 Error indicators: a) dual solve (coarse), b) prolongation (fine)

[Figure: dual solve and prolongation in finite-element h-refinement vs. ROM h-refinement.]

Page 54

Dual-weighted residual error indicators

Goal-oriented: reduce the error in output g(x)

Analogous to duality-based error control for

differential equations [Estep, 1995, Pierce and Giles, 2000]

finite elements [Babuska and Miller, 1984, Becker and Rannacher, 1996, Rannacher, 1999, Bangerth and Rannacher, 1999, Becker and Rannacher, 2001, Bangerth and Rannacher, 2003]

finite volumes [Venditti and Darmofal, 2000, Venditti and Darmofal, 2002, Park, 2004, Nemec and Aftosmis, 2007]

discontinuous Galerkin methods [Lu, 2005, Fidkowski, 2007]

Page 55

Dual-weighted residual error indicators

Approximate the fine output:

g(Φ_h x̂_h) ≈ g(Φ_H x̂_H) + (∂g/∂x)(Φ_H x̂_H) Φ_h (x̂_h − I_H^h x̂_H)   (1)

Approximate the fine residual:

0 = (Φ_h)^T r(Φ_h x̂_h) ≈ (Φ_h)^T r(Φ_H x̂_H) + (Φ_h)^T (∂r/∂x)(Φ_H x̂_H) Φ_h (x̂_h − I_H^h x̂_H)

Solve for the error:

(x̂_h − I_H^h x̂_H) ≈ −[(Φ_h)^T (∂r/∂x)(Φ_H x̂_H) Φ_h]^{−1} (Φ_h)^T r(Φ_H x̂_H)   (2)

Substitute (2) in (1):

g(Φ_h x̂_h) − g(Φ_H x̂_H) ≈ −(ŷ_h)^T (Φ_h)^T r(Φ_H x̂_H),

with the fine adjoint solution ŷ_h ∈ R^q satisfying

(Φ_h)^T (∂r/∂x)(Φ_H x̂_H)^T Φ_h ŷ_h = (Φ_h)^T (∂g/∂x)(Φ_H x̂_H)^T.

Page 56

Dual-weighted residual error indicators

g(Φ_h x̂_h) − g(Φ_H x̂_H) ≈ −(ŷ_h)^T (Φ_h)^T r(Φ_H x̂_H)   (3)

(Φ_h)^T (∂r/∂x)(Φ_H x̂_H)^T Φ_h ŷ_h = (Φ_h)^T (∂g/∂x)(Φ_H x̂_H)^T   (4)

We want to avoid the fine solve (4), so approximate ŷ_h as

ŷ_H^h = I_H^h ŷ_H,

where ŷ_H is the coarse adjoint solution to

(Φ_H)^T (∂r/∂x)(Φ_H x̂_H)^T Φ_H ŷ_H = (Φ_H)^T (∂g/∂x)(Φ_H x̂_H)^T.

Substituting ŷ_H^h for ŷ_h in (3) yields the cheaply computable estimate

g(Φ_h x̂_h) − g(Φ_H x̂_H) ≈ −(ŷ_H^h)^T (Φ_h)^T r(Φ_H x̂_H).

The right-hand side can be bounded by cheaply computable error indicators:

|(ŷ_H^h)^T (Φ_h)^T r(Φ_H x̂_H)| ≤ ∑_{i=1}^{q} δ_i^h,   δ_i^h = |[ŷ_H^h]_i (φ_i^h)^T r(Φ_H x̂_H)|.
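A compact sketch of the indicator computation just derived, with the residual r, Jacobian ∂r/∂x, and output gradient ∂g/∂x passed in as callables; this is illustrative glue code, not the reference implementation.

```python
import numpy as np

def dwr_indicators(Phi_h, Phi_H, I_hH, res, jac, grad_g, xhat_H):
    """Dual-weighted residual indicators delta_i^h for each fine basis vector."""
    x_H = Phi_H @ xhat_H
    # Coarse adjoint solve: (Phi_H^T (dr/dx)^T Phi_H) yhat_H = Phi_H^T (dg/dx)^T
    lhs = Phi_H.T @ jac(x_H).T @ Phi_H
    yhat_H = np.linalg.solve(lhs, Phi_H.T @ grad_g(x_H))
    yhat_hH = I_hH @ yhat_H                     # prolongate the coarse adjoint
    fine_residual = Phi_h.T @ res(x_H)          # (phi_i^h)^T r(Phi_H xhat_H) for each i
    return np.abs(yhat_hH * fine_residual)      # delta_i^h
```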

Page 57

Previous example

[Figure: FOM and ROM solutions u vs. x; the ROM solution deviates visibly from the FOM.]

Generated by Φ ∈ R^{250×150} using τ_train ∈ [0, 2.5]

Now try h-adaptivity with Φ(0) ∈ R250×10, τtrain ∈ [0, 2.5].

Page 58

Previous example with h-adaptivity

[Figure: FOM and h-adaptive ROM solutions u vs. x.]

dim Φ(0) = 10

mean_k(dim Φ^(k)) = 44.3

h-adaptation allows the ROM to capture phenomena not present in the training data!

Page 59

h-adaptivity enables error control

[Figure: FOM and h-adaptive ROM solutions u vs. x for error tolerances (a) ε = 0.35, (b) ε = 0.05, and (c) ε = 0.01.]

                                              ε = 0.35   ε = 0.05   ε = 0.01
average basis dimension per Newton iteration    33.6      44.2507     53.9
relative error (%)                              12.2      0.51        0.078
online time (seconds)                           4.61      4.63        7.64

Page 60

Summary: a posteriori h-refinement

Adaptive h-refinement via splitting

+ Incrementally improves the ROM
+ Does not require large-scale operations
+ Enables error control
+ Extends utility of ROMs to hyperbolic PDEs

Reference: C., Adaptive h-refinement for reduced-order models. International Journal for Numerical Methods in Engineering, 102(5):1192–1210, 2015.

Page 61

Our research goal

Nonlinear model-reduction methods that are accurate, low cost, certified, and reliable.

+ Accuracy

Improve projection technique
Preserve problem structure

+ Low cost

Sample-mesh approach
Leverage time-domain data

+ Certification

Error bounds
Statistical error modeling [Drohmann and C., 2015]

+ Reliability

A posteriori h-refinement

Collaborator: M. Drohmann (Sandia)

Page 62

Strategies for ROM error quantification

1 Rigorous error bounds

+ independent of input-space dimension
- high effectivity (overestimation), especially for nonlinear, time-dependent problems
- improving effectivity incurs high costs [Huynh et al., 2010, Wirtz et al., 2012] or intrusive reformulation of the discretization [Yano et al., 2012]
- not amenable to statistics

2 Multifidelity correction [Eldred et al., 2004]

[Figure: multifidelity-correction concept (low-fidelity outputs vs. inputs) and error vs. residual for (i) kernel-method and (ii) RVM surrogates, with training points, mean, 95% confidence, and 'uncertainty of mean'.]

model the low-fidelity error as a function of the inputs; 'correct' low-fidelity outputs with the model

+ amenable to statistics
- curse of dimensionality
- ROM errors highly oscillatory in the input space

[Ng and Eldred, 2012]

Page 63

Data-driven observation

[Figure: state-space error (energy norm) vs. residual norm and vs. error bound.]

ROMs generate error indicators that correlate with the error

Main idea: map error indicators to a distribution over the true error using Gaussian process regression

+ independent of input-space dimension

Reduced-order model error surrogates

Page 64

ROMES

[Figure: error vs. error indicator for (i) the kernel method and (ii) the RVM, with training points, mean, 95% confidence, 'uncertainty of mean', and the transformed error d(·).]

Approximate the deterministic map µ ↦ d(δ) by a stochastic map ρ(µ) ↦ d̂

d: invertible transformation function
d̂: random variable for the transformed error
δ̂ := d^{−1}(d̂): random variable for the error

ROMES ingredients:
1 error indicators ρ
2 transformation function d
3 statistical model: Gaussian process

Desired conditions:
1 indicators ρ(µ) are low dimensional and cheaply computable
2 distribution of the random variable δ̂ has low variance
3 statistical model is validated

Page 65

Normed errors: δ(µ) = ‖x(µ) − Φx̂(µ)‖

ROMs often equipped with bounds ∆(µ) ≥ δ(µ) ≥ 0

Bound effectivity η(µ) := ∆(µ)/δ(µ) ≥ 1 often lies in a small range:

η_1 ≤ η(µ) ≤ η_2, ∀µ
log ∆(µ) − log η_1 ≥ log δ(µ) ≥ log ∆(µ) − log η_2, ∀µ

Ingredient 1: indicator ρ = log ∆

Also consider the cheaper ρ = log ‖r(Φx̂(µ); µ)‖_2 because

∆_x^µ(µ) := ‖r(Φx̂(µ); µ)‖_2 / √(α_LB(µ)) ≥ |||x(µ) − Φx̂(µ)|||

∆_x(µ) := ‖r(Φx̂(µ); µ)‖_2 / α_LB(µ) ≥ ‖x(µ) − Φx̂(µ)‖_{X_h}

and α_LB(µ) is costly to compute.

Ingredient 2: transformation function d = log

Page 66

General errors: δ(µ) = g(x(µ)) − g(Φx̂(µ))

Recall from before that dual-weighted residuals lead to

g(x) − g(Φx̂) ≈ −y^T r(Φx̂),

with the adjoint solution satisfying

(∂r/∂x)(Φx̂)^T y = (∂g/∂x)(Φx̂)^T

To avoid this costly solve, we approximate it as y ≈ Y ŷ with

Y^T (∂r/∂x)(Φx̂)^T Y ŷ = Y^T (∂g/∂x)(Φx̂)^T

such that

g(x) − g(Φx̂) ≈ −ŷ^T Y^T r(Φx̂).

Ingredient 1: indicator ρ = ŷ^T Y^T r(Φx̂)
+ Uncertainty control: can add columns to the dual basis Y

Ingredient 2: transformation function d = id

Page 67

Ingredient 3: Gaussian process [Rasmussen and Williams, 2006]

Definition (Gaussian process)

A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution.

d(ρ) ∼ GP(m(ρ), k(ρ, ρ′))

mean function m(ρ); covariance function k(ρ, ρ′)

given a training set (d(δi ), ρi ), can infer m(ρ) and k(ρ, ρ′)

Consider kernel regression [Rasmussen and Williams, 2006]

Page 68

Kernel regression [Rasmussen and Williams, 2006]

[Figure: functions drawn from a GP (d) prior and (e) posterior conditioned on noise-free observations, with the pointwise mean ± two standard deviations (95% confidence region); Figure 2.2 of C. E. Rasmussen & C. K. I. Williams, Gaussian Processes for Machine Learning, MIT Press, 2006, www.GaussianProcess.org/gpml.]

prior: d̂(ρ) ∼ N(0, K(ρ, ρ) + σ^2 I)

k(ρ_i, ρ_j) = exp(−‖ρ_i − ρ_j‖^2 / (2r^2)) is a positive-definite kernel

ρ := [ρ_train  ρ_predict]^T

posterior: d̂(ρ_predict) ∼ N(m(ρ_predict), cov(ρ_predict))
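A minimal NumPy sketch of exactly this conditioning with the squared-exponential kernel; the length scale and noise level are illustrative hyperparameters, and in the ROMES setting the indicator is, e.g., the log residual norm and the target is the log error.

```python
import numpy as np

def sq_exp_kernel(a, b, length=1.0):
    """k(rho_i, rho_j) = exp(-(rho_i - rho_j)^2 / (2 length^2)) for scalar indicators."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * length ** 2))

def gp_posterior(rho_train, d_train, rho_pred, length=1.0, sigma=1e-3):
    """Posterior mean and covariance of d(rho_pred) given training pairs (rho, d)."""
    K = sq_exp_kernel(rho_train, rho_train, length) + sigma**2 * np.eye(len(rho_train))
    K_star = sq_exp_kernel(rho_pred, rho_train, length)
    mean = K_star @ np.linalg.solve(K, d_train)
    cov = sq_exp_kernel(rho_pred, rho_pred, length) - K_star @ np.linalg.solve(K, K_star.T)
    return mean, cov
```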

Page 69

ROMES Algorithm

Offline

1 Populate the ROMES database {(d(δ(µ)), ρ̄(µ)) | µ ∈ D}, where ρ̄ denotes candidate indicators.

2 Identify a few error indicators ρ ⊂ ρ̄ that lead to a low-variance GP.

3 Construct the Gaussian process d̂(ρ) ∼ GP(m(ρ), k(ρ, ρ′)) by Bayesian inference.

Online (for any µ⋆ ∈ D)

1 compute the ROM solution

2 compute error indicators ρ(µ⋆)

3 obtain d̂(ρ(µ⋆)) ∼ N(m(ρ(µ⋆)), k(ρ(µ⋆), ρ(µ⋆)))

4 obtain the random variable for the error δ̂(µ⋆) = d^{−1}(d̂(µ⋆))

5 correct the ROM solution

Page 70

Thermal block

[Figure: thermal-block domain with nine subdomains (1–9) and boundaries Γ_D, Γ_N0, Γ_N1.]

∇ · (c(x; µ) ∇u(x; µ)) = 0 in Ω,   u(µ) = 0 on Γ_D

c(µ) ∇u(µ) · n = 0 on Γ_N0,   c(µ) ∇u(µ) · n = 1 on Γ_N1

Inputs µ ∈ [0.1, 10]^9 define the diffusivity c in the subdomains

ROM constructed via RB–Greedy [Patera and Rozza, 2006]

Page 71

Error #1: energy norm of the error δ(µ) = |||x(µ) − Φx̂(µ)|||

[Figure: relationship between RB error bounds, residual norms, and true state-space errors (energy norm), visualized at 200 random input-space sample points (Figure 2 of Drohmann & Carlberg).]

+ Residual norm and error bound correlate with error

[Figure: ROMES surrogates (δ = |||δu|||, ρ = log r, d = log) computed from N = 100 training points using (i) the GP kernel method and (ii) the RVM, with mean and 95%-confidence intervals (Figure 4 of Drohmann & Carlberg).]

+ ROMES (ρ = log ‖r(Φx(µ);µ)‖2, d = log) promising

Page 72

Gaussian process validated

[Figure: Gaussian-process validation for the ROMES surrogate (GP kernel, δ = |||δu|||, ρ = log r, d = log) for varying numbers of training points N; histograms of the deviation from the GP mean compared with the inferred pdf N(0, σ²).]

Validation frequency ω_validation(ω):

predicted ω   N = 10   N = 35   N = 65   N = 95
0.80          0.49     0.71     0.76     0.78
0.90          0.59     0.82     0.87     0.88
0.95          0.68     0.89     0.92     0.93
0.98          0.76     0.93     0.95     0.96
0.99          0.80     0.94     0.96     0.97

The table reports how often the actual error lies in the inferred confidence intervals.

Page 73

Error #2: compliant-output error δ(µ) = |y(µ) − y_red(µ)|

104 103 102 101

104

103

102

Residual r/error bound

outp

ut

erro

r s

(i) Output error v. ROMES indicators

0 5 10

104

103

102

Parameters (µ1, µ2)

outp

ut

bia

s s

(ii) Output error v. system inputs

(r; s)

(s; s)

(µ1; s(µ))

(µ2; s(µ))

Figure 6. Relationship between (i) ROMES error indicators and the compliant-output error and (ii) thefirst two parameter components and the (compliant) output error, visualized by evaluation of 200 randomsample points in the input space. Clearly, the observed structure in the former relationship is more amenableto constructing a Gaussian process.

If this value is less than one, then the exected corrected output sred +e is more accurate thanthe ROM output sred itself for point µ 2 P, i.e., the additive error surrogate improves theprediction of the ROM. On the other hand, values above one indicate that the error surrogateworsens the ROM prediction.

Figure 7 reports the mean, median, standard deviation, and extrema of the expected improvement (5.8) evaluated for all validation points Pvalidation and a varying number of training points. Here, we also compare with the performance of the error surrogate guni, which is defined as a uniform distribution on the interval [Δs^LB(µ), Δs(µ)], where Δs^LB(µ) and Δs(µ) are the lower and upper bounds for the output error, respectively. Note that guni does not require training data, as it is based purely on error bounds.
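A sketch of this training-free baseline (the bound values in the usage line are placeholders, not data from the paper):

```python
import numpy as np

def g_uni(lower_bound, upper_bound, size=1000, seed=0):
    """Baseline error surrogate: a uniform distribution on the interval spanned
    by the reduced-basis lower and upper error bounds. Its center,
    (lower_bound + upper_bound) / 2, is the natural point estimate."""
    rng = np.random.default_rng(seed)
    return rng.uniform(lower_bound, upper_bound, size=size)

samples = g_uni(1e-4, 1e-2)   # placeholder bound values
print(samples.mean())         # roughly the interval center, about 5e-3
```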

The expected improvement for the ROMES output-error surrogate I(ẽs, µ) as depicted in Figure 7(i) is approximately 0.2 on average, which constitutes an improvement of nearly an order of magnitude. Further, the maximum expected improvement almost always remains below 1; this implies that the corrected ROM output is almost always more accurate than the ROM output alone.

On the other hand, the expected improvement generated by the error surrogate guni is always greater than one, which means that its correction always increases the error. This arises from the fact that the center of the interval [Δs^LB(µ), Δs(µ)] is a poor approximation for the true error.

In addition, Figure 7(ii) shows that the expected improvement produced by the multifidelity-correction surrogate I(δ̃s,MF, µ) is often far greater than one. This shows that the multifidelity-correction approach is not well suited for this problem. Presumably, with (far) more training points, these results would improve.

Again, we can validate the Gaussian-process assumptions underlying the error surrogates. For N = 100 training points, Figure 8 compares a histogram of the deviation of the true error from the surrogate mean to the inferred probability density function. The associated table reports how often the validation data lie in the inferred confidence intervals.

+ ROMES: residual and error bound correlate with error

- Multifidelity correction: inputs are poor indicators

Nonlinear model reduction Kevin Carlberg 73

Page 74: Nonlinear model reduction - Sandia National Laboratories

Gaussian-process validation

as the number of training points increases, as this effectively decreases the uncertainty due to a lack of training. In addition, only a moderate number of training points (around 20) is required to generate a reasonably converged ROMES surrogate. On the other hand, the multifidelity-correction surrogate exhibits no such convergence when fewer than 100 training points are used.

[Figure 9: Validation frequency ω_validation(ω) versus the number of training points N at confidence levels 80%, 90%, 95%, 98%, and 99%: (i) ROMES; (ii) multifidelity correction. Gaussian-process validation for the ROMES surrogate (GP kernel, compliant δ = δs, ρ = log ‖r‖, d = log) and the multifidelity-correction surrogate (GP kernel, compliant δ = δs, ρ = µ, and d = idR) for a varying number of training points N. The plots depict how often the actual error lies in the inferred confidence intervals.]

5.4. Reduced-basis error bounds. In this section, we compare the reduced-basis error bound Δu(µ) (S1.14) with the probabilistically rigorous ROMES surrogates δ̃c (2.7) with rigor values of c = 0.5 and c = 0.9, as introduced in Section 3.3. The ROMES surrogate is constructed with the GP kernel method and ingredients δ = |||δu|||, ρ = log ‖r‖, and d = log. As discussed in Section 2.3, the error-bound effectivity (2.7) is important to quantify the performance of these bounds; a value of 1 is optimal, as it implies no over-estimation.
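The rigor-c surrogates can be sketched as a quantile shift of the GP posterior in the transformed (log) space. This is a hypothetical construction consistent with footnote 5 below (c = 0.5 leaves the surrogate unchanged), not necessarily the paper's exact Eqs. (3.17)–(3.19); the function name and inputs are illustrative.

```python
import numpy as np
from scipy.stats import norm

def rigor_c_estimate(gp_mean_log, gp_std_log, c):
    """Hypothetical sketch: shift the GP posterior by the c-quantile of N(0, 1)
    in log space, so the returned error estimate over-predicts the true error
    with probability roughly c. For c = 0.5 the shift vanishes
    (norm.ppf(0.5) = 0) and the original ROMES estimate is recovered."""
    return np.exp(np.asarray(gp_mean_log) + norm.ppf(c) * np.asarray(gp_std_log))
```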

As the probabilistically rigorous ROMES surrogates δ̃c are stochastic processes, we can measure their (most common) effectivity as

(5.9)    η(c, µ) := mode(δ̃c(ρ(µ))) / |||δu(µ)|||.

The top plots of Figure 10 report the mean, median, standard deviation, and extrema of the effectivities η(0.5, µ) and η(0.9, µ) for all validation points µ ∈ Pvalidation. Again, we compare with guni, which is a uniform distribution on an interval whose endpoints correspond to the lower and upper bounds for the error |||δu(µ)|||. We also compare with the corresponding

⁵ Note that c = 0.5 implies no modification to the original ROMES surrogate, as δ̃0.5 = δ̃ (see Eqs. (3.17)–(3.19)).

+ ROMES: confidence intervals converge

- Multifidelity correction: confidence intervals do not converge

Nonlinear model reduction Kevin Carlberg 74

Page 75: Nonlinear model reduction - Sandia National Laboratories

Error reduction

expected improvement = |s(µ) − sred(µ) − mode(δ̃s(µ))| / |s(µ) − sred(µ)|

[Figure 7: Expected improvement I(ẽ, µ) (mean ± std, median, minimum, maximum) versus the number of training points N: (i) ROMES (GP kernel, compliant δ = δs, ρ = log ‖r‖, d = log), together with the uniform distribution guni based on reduced-basis error bounds, and (ii) multifidelity correction (GP kernel, compliant δ = δs, ρ = µ, and d = idR). (1: no improvement, > 1: error worsened, < 1: error improved.)]

[Figure 8: Gaussian-process validation for the ROMES surrogate (GP kernel, compliant δ = δs, ρ = log ‖r‖, d = log) and the multifidelity-correction surrogate (GP kernel, compliant δ = δs, ρ = µ, and d = idR) using N = 100 training points: (i) ROMES; (ii) multifidelity correction; normalized rate of occurrence versus deviation from the GP mean. The histogram corresponds to samples of D(µ) and the red curve depicts the probability distribution function N(0, σ²). Clearly, this validation test fails for the multifidelity-correction surrogate.]

Validation frequency ω_validation(ω):

predicted ω    (i) ROMES    (ii) Multifidelity correction
0.80           0.78         0.98
0.90           0.89         1.00
0.95           0.93         1.00
0.98           0.96         1.00
0.99           0.97         1.00

The table reports how often the actual error lies in the inferred confidence intervals.

The associated table reports how often the validation data lie in the inferred confidence intervals. We observe that the confidence intervals are valid for the ROMES surrogate, but not for the multifidelity-correction surrogate, as the bins do not align with the inferred distribution. Figure 9 depicts the convergence of these confidence-interval validation metrics as the number of training points increases. As expected (see Remark 4.1), the ROMES observed confidence intervals more closely align with the confidence intervals arising from the inherent uncertainty (i.e., σ²) as the number of training points increases.
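The confidence-interval validation frequency reported in these tables can be sketched as follows; the function name and inputs are illustrative, with the inputs being the GP posterior mean and standard deviation of the transformed error at each validation point.

```python
import numpy as np
from scipy.stats import norm

def validation_frequency(d_error, gp_mean, gp_std, omega):
    """Fraction of validation points whose true transformed error d(delta) falls
    inside the centered omega-confidence interval of the inferred Gaussian."""
    half_width = norm.ppf(0.5 + omega / 2.0) * np.asarray(gp_std)
    inside = np.asarray(np.abs(np.asarray(d_error) - np.asarray(gp_mean)) <= half_width)
    return inside.mean()
```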

+ ROMES: reduces error by roughly an order of magnitude

- Multifidelity correction: often increases the error

Nonlinear model reduction Kevin Carlberg 75

Page 76: Nonlinear model reduction - Sandia National Laboratories

Error #3: error in general output

[Figure 15: Relationship between dual-weighted-residual indicators ρ1 = yred,1(µ)ᵀ r(ured; µ) and errors in the (non-compliant) first output δs1, plotted for dual reduced-basis sizes py = 10, 15, and 20 (panels (i)–(iii)).]

5.6. Multiple and non-compliant outputs. Finally, we assess the performance of ROMES on a model with multiple and non-compliant output functionals as discussed in Section 3.2.2. For this experiment, we set two outputs to be temperature measurements at points x1 and x2:

(5.11)    si(µ) := gi(u(µ)) = ∫ δDirac(x − xi) u(x; µ) dx = u(xi; µ), i = 1, 2,

where δDirac denotes the Dirac delta function. In this case, we construct a separate ROMES surrogate for each output error δ̃s1 and δ̃s2. As previously discussed, we use dual-weighted residuals as indicators ρi(µ) = yred,i(µ)ᵀ r(ured; µ), i = 1, 2, and no transformation, d = idR. This necessitates the computation of approximate dual solutions, for which dual reduced-basis spaces must be generated in the offline stage. The corresponding finite-element problem can be found in Eq. (S1.28), where Eq. (5.11) above provides the right-hand sides. The algebraic problems can be inferred from Eq. (S1.29), where the discrete right-hand sides are canonical unit vectors because the points x1 and x2 coincide with nodes of the finite-element mesh.
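For a linear full-order model with residual r(u; µ) = b(µ) − A(µ)u, these indicators reduce to inner products of the approximate dual solutions with the residual evaluated at the ROM solution. A minimal sketch (matrix and vector names are placeholders, not the paper's code):

```python
import numpy as np

def dwr_indicators(Y_dual_red, A, b, u_red):
    """Dual-weighted-residual indicators rho_i = y_red,i^T r(u_red; mu) for a
    linear full-order residual r(u) = b - A u. Each column of Y_dual_red holds
    one approximate (reduced) dual solution, one column per output functional."""
    residual = b - A @ u_red
    return Y_dual_red.T @ residual
```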

Like the primal reduced basis, the dual counterpart can be generated with a greedy algorithm that minimizes the approximation error for the reduced dual solutions.

To assess the ability for uncertainty control with the dual-weighted-residual indicators (see Remark 3.2), we generate three dual reduced bases of increasing fidelity: 1) error tolerance of 1 (basis sizes py of 10 and 11), 2) error tolerance of 0.5 (basis sizes py of 15 and 17), 3) error tolerance of 0.1 (basis sizes py of 20 and 23).

To train the surrogates, we compute δs1(µ), δs2(µ), ρ1(µ) (of varying fidelity), and ρ2(µ) (of varying fidelity) for µ ∈ P̄ ⊂ P with card(P̄) = 500. The first T = 100 points define the training set Plearn ⊂ P̄ and the following 400 points constitute the validation set Pvalidation ⊂ P̄.

Figure 15 depicts the observed relationship between indicators ρ1(µ) (of different fidelity) and the error in the first output δs1(µ). Note that as the dual-basis size py increases, the output error exhibits a nearly exact linear dependence on the dual-weighted residuals. This is expected, as the residual operator is linear in the state. Therefore, the RVM with a linear polynomial basis produces the best (i.e., minimum variance) results for the ROMES surrogates in this case.

+ Dual-weighted residuals correlate with error

+ Uncertainty control: less variance as columns added to Y


Figure 16 reflects the necessity of employing a large enough dual reduced basis to compute the dual-weighted-residual error indicators. For a small dual reduced basis, there is almost no improvement in the mean, and only a slight improvement in the median; in some cases, the 'corrected' outputs are actually less accurate. However, the most accurate dual solutions yield a mean and median error improvement of two orders of magnitude. This illustrates the ability and utility of uncertainty control when dual-weighted residuals are used as error indicators.

[Figure 16: Expected improvement I(ẽ, µ) for the ROMES surrogate (RVM, δ = δs, ρi = yred,i(µ)ᵀ r(ured; µ), i = 1, 2, d = idR) for a varying number of training points and different dual reduced-basis-space dimensions (mean ± std, median, minimum, maximum). Panels (i)–(iii): improvement in the first output for dual RB sizes py = 10, 15, 20; panels (iv)–(vi): improvement in the second output for py = 11, 17, 23. Compare with Figure 7 (1: no improvement, > 1: error worsened, < 1: error improved).]

Table 17 reports validation results for the inferred confidence intervals. While the validation results are quite good (and appear to be converging to the correct values), they are not as accurate as those obtained for the compliant output.

6. Conclusions and outlook. This work presented the ROMES method for statistically modeling reduced-order-model errors. In contrast to rigorous error bounds, such statistical models are useful for tasks in uncertainty quantification. The method employs supervised machine learning methods to construct a mapping from existing, cheaply computable ROM error indicators to a distribution over the true error. This distribution reflects the epistemic uncertainty introduced by the ROM. We proposed ROMES ingredients (supervised-learning method, error indicators, and transformation function) that yield low-variance, numerically validated models for different types of ROM errors.

+ Uncertainty control: error reduces as columns added to Y

Nonlinear model reduction Kevin Carlberg 76

Page 77: Nonlinear model reduction - Sandia National Laboratories

Summary: statistical error modeling

ROMES

+ Uses cheap error indicators to statistically quantify ROM error
+ Outperforms multifidelity correction (inputs = poor indicators)
+ Uncertainty control for general errors

Related follow-on work

Reduction error models (REM) [Manzoni et al., 2016]

Our current work

Apply to nonlinear, time-dependent problems
Collaborators: Trehan, Durlofsky (Stanford)

Reference: Drohmann, C. The ROMES method for statistical modeling of reduced-order-model error. SIAM/ASA Journal on Uncertainty Quantification, 3(1):116–145, 2015.

Nonlinear model reduction Kevin Carlberg 77

Page 78: Nonlinear model reduction - Sandia National Laboratories

Questions?

[Recap figures: schematic of the ROM mapping inputs µ to outputs y with the associated output error, and ROMES surrogates mapping the error indicator (residual ‖r‖) to the transformed error d(δ), here the energy-norm error |||u_h − u_red|||: (i) Kernel method; (ii) RVM. Each panel shows training points, the mean, the 95% confidence interval, and the “uncertainty of mean”.]

Nonlinear model reduction Kevin Carlberg 78

Page 79: Nonlinear model reduction - Sandia National Laboratories

Acknowledgments

This research was supported in part by an appointment to the Sandia National Laboratories Truman Fellowship in National Security Science and Engineering, sponsored by Sandia Corporation (a wholly owned subsidiary of Lockheed Martin Corporation) as Operator of Sandia National Laboratories under its U.S. Department of Energy Contract No. DE-AC04-94AL85000.

Nonlinear model reduction Kevin Carlberg 79

Page 80: Nonlinear model reduction - Sandia National Laboratories

Amsallem, D., Cortial, J., C., K., and Farhat, C. (2009). A method for interpolating on manifolds structural dynamics reduced-order models. International Journal for Numerical Methods in Engineering, 80(9):1241–1258.

Amsallem, D. and Farhat, C. (2008). An interpolation method for adapting reduced-order models and application to aeroelasticity. AIAA Journal, 46(7):1803–1813.

Amsallem, D., Zahr, M. J., and Farhat, C. (2012). Nonlinear model order reduction based on local reduced-order bases. International Journal for Numerical Methods in Engineering, 92(10):891–916.

Antoulas, A. C. (2005). Approximation of Large-Scale Dynamical Systems.

Nonlinear model reduction Kevin Carlberg 79

Page 81: Nonlinear model reduction - Sandia National Laboratories

Society for Industrial and Applied Mathematics, Philadelphia, PA.

Arian, E., Fahl, M., and Sachs, E. W. (2000). Trust-region proper orthogonal decomposition for flow control. Technical Report 25, ICASE.

Babuska, I. and Miller, A. (1984). The post-processing approach in the finite element method—part 1: Calculation of displacements, stresses and other higher derivatives of the displacements. International Journal for Numerical Methods in Engineering, 20(6):1085–1109.

Bai, Z. (2002). Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems. Applied Numerical Mathematics, 43(1):9–44.

Bangerth, W. and Rannacher, R. (1999).

Nonlinear model reduction Kevin Carlberg 79

Page 82: Nonlinear model reduction - Sandia National Laboratories

Finite element approximation of the acoustic wave equation: Error control and mesh adaptation. East West Journal of Numerical Mathematics, 7(4):263–282.

Bangerth, W. and Rannacher, R. (2003). Adaptive finite element methods for differential equations. Springer.

Barrault, M., Maday, Y., Nguyen, N. C., and Patera, A. T. (2004). An ‘empirical interpolation’ method: application to efficient reduced-basis discretization of partial differential equations. Comptes Rendus Mathematique Academie des Sciences, 339(9):667–672.

Baur, U., Beattie, C., Benner, P., and Gugercin, S. (2011). Interpolatory projection methods for parameterized model reduction. SIAM Journal on Scientific Computing, 33(5):2489–2518.

Becker, R. and Rannacher, R. (1996).

Nonlinear model reduction Kevin Carlberg 79

Page 83: Nonlinear model reduction - Sandia National Laboratories

Weighted a posteriori error control in finite element methods, volume preprint no. 96-1. Universitat Heidelberg.

Becker, R. and Rannacher, R. (2001). An optimal control approach to a posteriori error estimation in finite element methods. Acta Numerica 2001, 10:1–102.

Benner, P., Gugercin, S., and Willcox, K. (2015). A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Review, 57(4):483–531.

Bui-Thanh, T., Willcox, K., and Ghattas, O. (2008). Model reduction for large-scale systems with high-dimensional parametric input space. SIAM Journal on Scientific Computing, 30(6):3270–3288.

C., K. (2015). Adaptive h-refinement for reduced-order models.

Nonlinear model reduction Kevin Carlberg 79

Page 84: Nonlinear model reduction - Sandia National Laboratories

International Journal for Numerical Methods in Engineering, 102(5):1192–1210.

C., K., Barone, M., and Antil, H. (2015a). Galerkin v. discrete-optimal projection in nonlinear model reduction. arXiv e-print, (1504.03749).

C., K., Bou-Mosleh, C., and Farhat, C. (2011a). Efficient non-linear model reduction via a least-squares Petrov–Galerkin projection and compressive tensor approximations. International Journal for Numerical Methods in Engineering, 86(2):155–181.

C., K., Cortial, J., Amsallem, D., Zahr, M., and Farhat, C. (2011b). The GNAT nonlinear model reduction method and its application to fluid dynamics problems.

Nonlinear model reduction Kevin Carlberg 79

Page 85: Nonlinear model reduction - Sandia National Laboratories

AIAA Paper 2011-3112, 6th AIAA Theoretical Fluid Mechanics Conference, Honolulu, HI.

C., K., Farhat, C., Cortial, J., and Amsallem, D. (2013). The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows. Journal of Computational Physics, 242:623–647.

C., K., Ray, J., and van Bloemen Waanders, B. (2015b). Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting. Computer Methods in Applied Mechanics and Engineering, 289:79–103.

Dihlmann, M., Drohmann, M., and Haasdonk, B. (2011). Model reduction of parametrized evolution problems using the reduced basis method with adaptive time partitioning.

Drohmann, M. and C., K. (2015).

Nonlinear model reduction Kevin Carlberg 79

Page 86: Nonlinear model reduction - Sandia National Laboratories

The ROMES method for reduced-order-model uncertainty quantification. SIAM/ASA Journal on Uncertainty Quantification, 3(1):116–145.

Drohmann, M., Haasdonk, B., and Ohlberger, M. (2011). Adaptive reduced basis methods for nonlinear convection–diffusion equations. In Finite Volumes for Complex Applications VI Problems & Perspectives, pages 369–377. Springer.

Eftang, J. L., Knezevic, D. J., and Patera, A. T. (2011). An hp certified reduced basis method for parametrized parabolic partial differential equations. Mathematical and Computer Modelling of Dynamical Systems, 17(4):395–422.

Eftang, J. L. and Patera, A. T. (2013). Port reduction in parametrized component static condensation: approximation and a posteriori error estimation.

Nonlinear model reduction Kevin Carlberg 79

Page 87: Nonlinear model reduction - Sandia National Laboratories

International Journal for Numerical Methods in Engineering, 96(5):269–302.

Eftang, J. L., Patera, A. T., and Rønquist, E. M. (2010). An ‘hp’ certified reduced basis method for parametrized elliptic partial differential equations. SIAM Journal on Scientific Computing, 32(6):3170–3200.

Eldred, M. S., Giunta, A. A., Collis, S. S., Alexandrov, N. A., and Lewis, R. M. (2004). Second-order corrections for surrogate-based optimization with model hierarchies. In Proceedings of the 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Albany, NY, number AIAA Paper 2004-4457.

Eldred, M. S., Weickum, G., and Maute, K. (2009). A multi-point reduced-order modeling approach of transient structural dynamics with application to robust design optimization.

Nonlinear model reduction Kevin Carlberg 79

Page 88: Nonlinear model reduction - Sandia National Laboratories

Structural and Multidisciplinary Optimization, 38(6):599–611.

Estep, D. (1995). A posteriori error bounds and global error control for approximation of ordinary differential equations. SIAM Journal on Numerical Analysis, 32(1):1–48.

Everson, R. and Sirovich, L. (1995). Karhunen–Loeve procedure for gappy data. Journal of the Optical Society of America A, 12(8):1657–1664.

Farhat, C., Geuzaine, P., and Brown, G. (2003). Application of a three-field nonlinear fluid-structure formulation to the prediction of the aeroelastic parameters of an F-16 fighter. Computers & Fluids, 32(1):3–29.

Fidkowski, K. J. (2007). A simplex cut-cell adaptive method for high-order discretizations of the compressible Navier-Stokes equations. PhD thesis, Massachusetts Institute of Technology.

Nonlinear model reduction Kevin Carlberg 79

Page 89: Nonlinear model reduction - Sandia National Laboratories

Freund, R. (2003). Model reduction methods based on Krylov subspaces. Acta Numerica, 12:267–319.

Gallivan, K., Vandendorpe, A., and Van Dooren, P. (2004). Model reduction of MIMO systems via tangential interpolation. SIAM Journal on Matrix Analysis and Applications, 26(2):328–349.

Haasdonk, B., Dihlmann, M., and Ohlberger, M. (2011). A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space. Mathematical and Computer Modelling of Dynamical Systems, 17(4):423–442.

Haasdonk, B., Ohlberger, M., and Rozza, G. (2008). A reduced basis method for evolution schemes with parameter-dependent explicit operators. Electronic Transactions on Numerical Analysis, 32:145–161.

Nonlinear model reduction Kevin Carlberg 79

Page 90: Nonlinear model reduction - Sandia National Laboratories

Huynh, D. B. P., Knezevic, D. J., Chen, Y., Hesthaven, J. S., and Patera, A. T. (2010). A natural-norm successive constraint method for inf-sup lower bounds. Comput. Methods Appl. Mech. Engrg., 199:1963–1975.

Ionita, A. and Antoulas, A. (2014). Data-driven parametrized model reduction in the Loewner framework. SIAM Journal on Scientific Computing, 36(3):A984–A1007.

Lefteriu, S. and Antoulas, A. C. (2010). A new approach to modeling multiport systems from frequency-domain data. Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, 29(1):14–27.

LeGresley, P. A. (2006). Application of Proper Orthogonal Decomposition (POD) to Design Decomposition Methods.

Nonlinear model reduction Kevin Carlberg 79

Page 91: Nonlinear model reduction - Sandia National Laboratories

PhD thesis, Stanford University.

Lu, J. C.-C. (2005). An a posteriori error control framework for adaptive precision optimization using discontinuous Galerkin finite element method. PhD thesis, Massachusetts Institute of Technology.

Maday, Y. and Rønquist, E. M. (2002). A reduced-basis element method. J. Sci Comput, 17(1-4):447–459.

Manzoni, A., Pagani, S., and Lassila, T. (2016). Accurate solution of Bayesian inverse uncertainty quantification problems using model and error reduction methods.

Moore, B. (1981). Principal component analysis in linear systems: Controllability, observability, and model reduction. Automatic Control, IEEE Transactions on, 26(1):17–32.

Nemec, M. and Aftosmis, M. (2007).

Nonlinear model reduction Kevin Carlberg 79

Page 92: Nonlinear model reduction - Sandia National Laboratories

Adjoint error estimation and adaptive refinement for embedded-boundary Cartesian meshes. In 18th AIAA CFD Conference, Miami. Paper AIAA-2007-4187.

Ng, L. and Eldred, M. S. (2012). Multifidelity uncertainty quantification using non-intrusive polynomial chaos and stochastic collocation. In AIAA 2012-1852.

Park, M. A. (2004). Adjoint-based, three-dimensional error prediction and grid adaptation. AIAA Journal, 42(9):1854–1862.

Patera, A. T. and Rozza, G. (2006). Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations. MIT.

Nonlinear model reduction Kevin Carlberg 79

Page 93: Nonlinear model reduction - Sandia National Laboratories

Peherstorfer, B., Butnaru, D., Willcox, K., and Bungartz, H.-J. (2014). Localized discrete empirical interpolation method. SIAM Journal on Scientific Computing, 36(1):A168–A192.

Phuong Huynh, D. B., Knezevic, D. J., and Patera, A. T. (2013). A static condensation reduced basis element method: approximation and a posteriori error estimation. ESAIM: Mathematical Modelling and Numerical Analysis, 47(01):213–251.

Pierce, N. A. and Giles, M. B. (2000). Adjoint recovery of superconvergent functionals from PDE approximations. SIAM Review, 42(2):247–264.

Prud’Homme, C., Rovas, D. V., Veroy, K., Machiels, L., Maday, Y., Patera, A. T., Turinici, G., et al. (2001).

Nonlinear model reduction Kevin Carlberg 79

Page 94: Nonlinear model reduction - Sandia National Laboratories

Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods. Journal of Fluids Engineering, 124(1):70–80.

Rannacher, R. (1999). The dual-weighted-residual method for error control and mesh adaptation in finite element methods. MAFELAP, 99:97–115.

Rasmussen, C. and Williams, C. (2006). Gaussian Processes for Machine Learning. MIT Press.

Rewienski, M. J. (2003). A Trajectory Piecewise-Linear Approach to Model Order Reduction of Nonlinear Dynamical Systems. PhD thesis, Massachusetts Institute of Technology.

Rowley, C. W. (2005). Model reduction for fluids, using balanced proper orthogonal decomposition.

Nonlinear model reduction Kevin Carlberg 79

Page 95: Nonlinear model reduction - Sandia National Laboratories

Int. J. on Bifurcation and Chaos, 15(3):997–1013.

Rozza, G., Huynh, D., and Patera, A. T. (2008). Reduced basis approximation and a posteriori error estimation for affinely parametrized elliptic coercive partial differential equations. Archives of Computational Methods in Engineering, 15(3):229–275.

Ryckelynck, D. (2005). A priori hyperreduction method: an adaptive approach. Journal of Computational Physics, 202(1):346–366.

Venditti, D. and Darmofal, D. (2000). Adjoint error estimation and grid adaptation for functional outputs: Application to quasi-one-dimensional flow. Journal of Computational Physics, 164(1):204–227.

Venditti, D. A. and Darmofal, D. L. (2002). Grid adaptation for functional outputs: application to two-dimensional inviscid flows.

Nonlinear model reduction Kevin Carlberg 79

Page 96: Nonlinear model reduction - Sandia National Laboratories

Journal of Computational Physics, 176(1):40–69.

Veroy, K., Prud’homme, C., Rovas, D. V., and Patera, A. T. (2003). A posteriori error bounds for reduced-basis approximation of parametrized noncoercive and nonlinear elliptic partial differential equations. AIAA Paper 2003-3847, 16th AIAA Computational Fluid Dynamics Conference, Orlando, FL.

Washabaugh, K., Amsallem, D., Zahr, M., and Farhat, C. (2012). Nonlinear model reduction for CFD problems using local reduced-order bases. In 42nd AIAA Fluid Dynamics Conference and Exhibit, Fluid Dynamics and Co-located Conferences, AIAA Paper, volume 2686.

Willcox, K. and Peraire, J. (2002). Balanced model reduction via the proper orthogonal decomposition.

Nonlinear model reduction Kevin Carlberg 79

Page 97: Nonlinear model reduction - Sandia National Laboratories

AIAA Journal, 40(11):2323–2330.

Wirtz, D., Sorensen, D. C., and Haasdonk, B. (2012). A-posteriori error estimation for DEIM reduced nonlinear dynamical systems. Preprint Series, Stuttgart Research Centre for Simulation Technology.

Yano, M., Patera, A. T., and Urban, K. (2012). A space-time certified reduced-basis method for Burgers’ equation. Math. Mod. Meth. Appl. S., submitted.

Nonlinear model reduction Kevin Carlberg 79