Abstracts - SIAM

Abstracts

Abstracts are printed as submitted by the authors.

UQ16 Abstracts

IP1

Prediction, State Estimation, and Uncertainty Quantification for Complex Turbulent Systems

Complex turbulent systems such as those in climate science and engineering turbulence are a grand challenge for prediction, state estimation, and uncertainty quantification (UQ). Such turbulent dynamical systems have an enormous phase space with a large dimension of instability and crucial extreme events with major societal impact. Monte Carlo statistical predictions for complex turbulent dynamical systems are hampered by severe model error due to the curse of small ensemble size with the overwhelming expense of the forecast model and also due to lack of physical understanding. This lecture surveys recent strategies for prediction, state estimation, and UQ for such complex systems and illustrates them on prototype examples. The novel methods include physics-constrained nonlinear regression strategies for low-order models, calibration strategies for imperfect models combining information theory and statistical response theory, and novel state estimation algorithms.

Andrew Majda
Courant Institute
[email protected]

IP2

Sparse Grid Methods in Uncertainty Quantification

In this presentation, we give an overview of generalized sparse grid methods for stochastic and parametric partial differential equations as they arise in various forms in uncertainty quantification. We focus on the efficient approximation and treatment of the stochastic/parametric variables and discuss both the case of finite and of infinite/parametric stochastic dimension. Moreover, we deal with optimal numerical schemes based on sparse grids where also the product between the spatial and temporal variables and the stochastic/parametric variables is collectively taken into account. Overall, we obtain approximation schemes whose cost complexities resemble just the cost of the numerical solution of a constant number of plain partial differential equations in space (and time), i.e. without any stochastic/parametric variable. Here, this constant number depends only on the covariance decay of the stochastic fields of the input data of the overall problem.

Michael Griebel
Institut für Numerische Simulation, Universität Bonn
and Fraunhofer-Institut für Algorithmen [email protected]
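As a minimal illustration of why sparse grids help (this is the classical isotropic Smolyak-type index set, not the generalized construction of the talk, which additionally balances spatial/temporal and stochastic discretizations), one can compare the size of a total-degree multi-index set with the full tensor-product set it replaces:

```python
from itertools import product

def sparse_index_set(dim, level):
    """Isotropic Smolyak-type multi-index set {i in N^d : |i|_1 <= level}."""
    return [i for i in product(range(level + 1), repeat=dim) if sum(i) <= level]

def tensor_count(dim, level):
    """Size of the full tensor-product index set {i in N^d : max(i) <= level}."""
    return (level + 1) ** dim

# the sparse set grows polynomially in dim, the tensor set exponentially
for d in (2, 5, 8):
    print(d, len(sparse_index_set(d, 4)), tensor_count(d, 4))
```

The sparse count is binomial(d + 4, 4), e.g. 495 versus 390625 at d = 8; anisotropic and generalized variants shrink the set further by weighting directions, e.g. according to the covariance decay mentioned in the abstract.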

IP3

Covariance Functions for Space-Time Processes and Computer Experiments: Some Commonalities and Some Differences

Gaussian processes are commonly used to model both natural processes, for which the indices are space and/or time, and computer experiments, for which the indices are often parameters of the computer model. In both settings, the modeling of the covariance function of the process is critical. I will review various approaches to modeling covariance functions for natural processes, with a focus on the space-time setting. I will then consider what lessons, if any, can be drawn from this work for modeling covariance functions for computer experiments.

Michael Stein
University of [email protected]

IP4

Reduction of Epistemic Uncertainty in Multifidelity Simulation-Based Multidisciplinary Design

Epistemic model uncertainty is a significant source of uncertainty that affects the prediction of a multidisciplinary system using multifidelity analyses. Uncertainty reduction can be achieved by gathering additional experimental and simulation data; however, resource allocation for multidisciplinary design optimization (MDO) and analysis remains a challenging task due to the complex structure of a multidisciplinary system and the dynamic nature of decision making. We will present a novel approach that integrates multidisciplinary uncertainty analysis (MUA) and multidisciplinary statistical sensitivity analysis (MSSA) to answer the questions of where (sampling locations), what (disciplinary responses), and which (simulations versus experiments) when allocating more resources. The proposed approach strategically breaks resource allocation into a sequential process; the decision making is hence much more tractable. Meanwhile, the method is efficient for complex multidisciplinary analysis by employing inexpensive Spatial Random Process (SRP) emulators and analytical formulas of MUA and MSSA.

Wei Chen
Northwestern [email protected]

IP5

Multi-fidelity Approaches to UQ for PDEs

We discuss the use of a set of multi-fidelity computational physical models for reducing the costs of obtaining statistical information about PDE outputs of interest. Multilevel Monte Carlo and multilevel stochastic collocation methods use a hierarchy of successively coarser discretizations, in physical space, of the parent method. Reduced-order models of different dimension can also be used, as can surrogates built from data. More generally, a management strategy for the use of combinations of these and other computational models is discussed.

Max Gunzburger
Florida State University
School for Computational [email protected]

IP6

Uncertainty Quantification in Weather Forecasting

A major desire of all humankind is to make predictions for an uncertain future. Clearly then, forecasts ought to be probabilistic in nature, taking the form of probability distributions over future quantities or events. Over the past decades, the meteorological community has been taking massive steps in a reorientation towards probabilistic weather forecasts, serving to quantify the uncertainty in the predictions. This is typically done by using a numerical model, perturbing the inputs to the model (initial conditions, physics parameters) in suitable ways, and running the model for each perturbed set of inputs. The result is then viewed as an ensemble of forecasts. However, forecast ensembles typically are biased and uncalibrated. These shortcomings can be addressed by statistical postprocessing, using methods such as distributional regression and Bayesian model averaging.

Tilmann Gneiting
Heidelberg Institute for Theoretical Studies, [email protected]

IP7

Uncertainty Quantification and Numerical Analysis: Interactions and Synergies

The computational costs of uncertainty quantification can be challenging, in particular when the problems are large or real-time solutions are needed. Numerical methods, appropriately modified, can turn into powerful and efficient tools for uncertainty quantification. Conversely, state-of-the-art numerical algorithms reinterpreted from the perspective of uncertainty quantification can become much more powerful. This presentation will highlight the natural connections between numerical analysis and uncertainty quantification and illustrate the advantages of re-framing classical numerical analysis in a probabilistic setting. In particular, we will show that using preconditioning as a vehicle for coupling hierarchical models with iterative linear solvers becomes a means not only to assess the reliability of the solutions, but also to achieve super-resolution, as illustrated with an application to magneto-encephalography.

Daniela Calvetti
Case Western Reserve Univ
Department of Mathematics, Applied Mathematics [email protected]

IP8

Multilevel Monte Carlo Methods

Monte Carlo methods are a standard approach for the estimation of the expected value of functions of random input parameters. However, to achieve improved accuracy often requires more expensive sampling (such as a finer timestep discretisation of a stochastic differential equation) in addition to more samples. Multilevel Monte Carlo methods aim to avoid this by combining simulations with different levels of accuracy. In the best cases, the average cost of each sample is independent of the overall target accuracy, leading to very large computational savings. This lecture will introduce the key ideas, and survey the progress in the area.

Mike Giles
University of Oxford, United [email protected]
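The telescoping-sum idea can be sketched on a toy problem. The sketch below estimates E[S_T] for geometric Brownian motion under Euler-Maruyama; all parameter values are illustrative, and a full MLMC code would additionally choose the number of samples per level adaptively from estimated variances:

```python
import numpy as np

rng = np.random.default_rng(0)

def level_estimator(T, l, n_paths, mu=0.05, sigma=0.2, s0=1.0):
    """MLMC level-l term for E[S_T] under an Euler-Maruyama GBM discretisation.
    Level 0 returns a plain coarse estimate; level l > 0 returns the mean of
    P_l - P_{l-1}, with fine and coarse paths driven by the SAME Brownian
    increments so the correction term has small variance."""
    nf = 2 ** l
    dtf = T / nf
    dwf = rng.normal(0.0, np.sqrt(dtf), (n_paths, nf))
    sf = np.full(n_paths, s0)
    for i in range(nf):
        sf += mu * sf * dtf + sigma * sf * dwf[:, i]
    if l == 0:
        return sf.mean()
    # coarse path: pairwise-summed increments of the same Brownian motion
    dwc = dwf[:, 0::2] + dwf[:, 1::2]
    dtc = 2.0 * dtf
    sc = np.full(n_paths, s0)
    for i in range(nf // 2):
        sc += mu * sc * dtc + sigma * sc * dwc[:, i]
    return (sf - sc).mean()

def mlmc(T=1.0, max_level=5, n_paths=40000):
    """Telescoping sum E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}]."""
    return sum(level_estimator(T, l, n_paths) for l in range(max_level + 1))

print(mlmc())  # statistically close to the exact value E[S_1] = exp(mu)
```

Coupling the fine and coarse paths through shared increments is what makes the per-level corrections cheap to estimate accurately, which is the source of the cost savings described in the abstract.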

CP1

An Adaptive Algorithm for Stochastic Optimal Control Problems Based on Intrusive Polynomial Chaos

We develop an adaptive optimization scheme for uncertain optimal control problems constrained by nonlinear ODEs or differential algebraic equations. The algorithm solves the deterministic surrogate obtained from the intrusive polynomial chaos method by an SQP-based direct optimal control method while simultaneously refining the approximation order using suitable error estimates. The goal is to obtain sufficiently precise estimates of complex quantities such as higher moments while keeping the computational burden feasible for optimal control problems.

Lilli Bergner
Interdisciplinary Center for Scientific Computing (IWR)
Heidelberg [email protected]

Christian Kirches
Interdisciplinary Center for Scientific Computing
University of Heidelberg, [email protected]

CP1

Uncertainty Quantification of High-Dimensional Stochastic Systems Using Two-Level Domain Decomposition Algorithms

The scalabilities of intrusive polynomial chaos expansion based two-level domain decomposition (DD) algorithms for SPDEs are demonstrated by Subber (PhD thesis, Carleton University (CU), 2012) for high-resolution finite element (FE) meshes in the cases of a few random variables. These DD algorithms will be extended here to concurrently handle both high-resolution FE meshes and a large number of random variables using the codes developed by the authors.

Abhijit Sarkar
Associate Professor
Carleton University, Ottawa, Canada
abhijit [email protected]

Ajit Desai
Carleton University, [email protected]

Mohammad Khalil
Carleton University, Canada
(Currently at Sandia National Laboratories, USA)
[email protected]

Chris Pettit
United States Naval Academy, [email protected]

Dominique Poirel
Royal Military College of Canada, [email protected]

CP1

On Efficient Construction of Stochastic Moment Matrices

We consider the construction of stochastic moment matrices that appear in the model elliptic diffusion problem considered in the setting of the stochastic Galerkin finite element method (sGFEM). Algorithms for the efficient construction of the stochastic moment matrices are presented for certain combinations of affine/non-affine diffusion coefficients and multivariate polynomial spaces. We report the performance of various standard polynomial spaces for three different non-affine diffusion coefficients in a one-dimensional spatial setting.

Harri H. Hakula
Helsinki University of [email protected]

Matti Leinonen
Aalto [email protected]

CP1

Iterative Solution of the Random Eigenvalue Problem in an SSFEM Framework

This work is aimed at reducing the computational cost of solving the random eigenvalue problem in the weak formulation of the spectral stochastic finite element method. This formulation leads to a system of deterministic nonlinear equations, solved using the Newton-Raphson method. The main contribution lies in developing a set of scalable preconditioners for accelerating the computation in a parallel matrix-free implementation. A detailed numerical study is conducted for testing the accuracy and efficiency.

Soumyadipta Sarkar
Indian Institute of Science, [email protected]

Debraj Ghosh
Indian Institute of [email protected]

CP2

Selection and Validation of Models of Tumor Growth in the Presence of Uncertainty

In recent years, advances have been made in the development of multiscale models of tumor growth. Of overriding importance is the predictive power of these models, particularly in the presence of uncertainties. This presentation describes new adaptive algorithms for model selection and model validation embodied in the Occam Plausibility Algorithm (OPAL), which brings together model calibration, determination of sensitivities of outputs to parameter variances, and calculation of model plausibilities for model selection.

Ernesto A B F Lima
ICES - UT [email protected]

Marissa N Rylander, Amir Shahmoradi
The University of Texas at [email protected], [email protected]

Danial Faghihi
Institute for Computational Engineering and Sciences
University of Texas at [email protected]

J Tinsley Oden
The University of Texas at [email protected]

CP2

DNA Pattern Recognition Using Canonical Correlation Analysis

Canonical correlation analysis (CCA) is used as an unsupervised statistical tool to identify patterns. We consider two different semantic objects of DNA sequences as training and test sets. CCA checks whether the correlation at the point of interest is well above the correlations found elsewhere. As a case study, CCA is applied to investigate HIV-1 preferred integration sites, where the left and right flanking regions of the integration site are taken as the two views.

Bimal K. Sarkar
Galgotias [email protected]
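For readers unfamiliar with the tool, canonical correlations can be computed as the singular values of the whitened cross-covariance matrix between the two views. The sketch below uses synthetic numeric features sharing a latent signal, a hypothetical stand-in for encoded sequence views, not the authors' HIV-1 data:

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-8):
    """Canonical correlations between data matrices X (n x p) and Y (n x q):
    singular values of Cxx^{-1/2} Cxy Cyy^{-1/2}, via Cholesky whitening.
    A small ridge `reg` keeps the covariances invertible."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / (n - 1)
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T  # Lx^{-1} Cxy Ly^{-T}
    return np.linalg.svd(M, compute_uv=False)

rng = np.random.default_rng(0)
z = rng.normal(size=500)                      # shared latent signal
X = np.column_stack([z + 0.3 * rng.normal(size=500), rng.normal(size=500)])
Y = np.column_stack([z + 0.3 * rng.normal(size=500), rng.normal(size=500)])
print(canonical_correlations(X, Y))           # leading correlation is large
```

The "well above the correlations found elsewhere" test in the abstract amounts to comparing the leading canonical correlation at the point of interest against a background of such values computed at other positions.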

CP2

Modeling the Expected Time to Reach the Recognition Element in Nanopore DNA Sequencing

We examine the functionality of a new approach to next-generation DNA sequencing, where single nucleobases of a DNA oligomer are captured and identified in a modified alpha-hemolysin nanopore. The main uncertainty is when the isolated nucleobases reach a certain location inside the nanopore. We present our model equations and numerical results based on the self-consistent drift-diffusion-Stokes-Poisson system, directed Brownian motion, and a stochastic exit-time problem.

Benjamin Stadlbauer, Andreas Buttinger-Kreuzhuber, Gregor Mitscha-Eibl, Clemens Heitzinger
Vienna University of [email protected],
[email protected], [email protected], [email protected]

CP2

Statistical Assessment and Calibration of ECG Models

We address the problem of calibrating some functional output of a parametrised numerical model to a dataset of real, high-dimensional observations. We propose a novel method aiming at reproducing targeted quantiles of the reference data by minimising a suitable probabilistic cost functional defining their statistical spatial quantiles. We apply it to the calibration of models for the simulation of human ECGs (ODE and PDE based) through a dataset of real functional traces.

Nicholas Tarabelloni
Lab. MOX, Dept. Mathematics, Politecnico di [email protected]

Elisa Schenone
University of [email protected]

Annabelle Collin
Univ. Bordeaux and Bordeaux INP, IMB, Talence, France
INRIA Bordeaux-Sud-Ouest, Talence, [email protected]

Francesca Ieva
Dept. Mathematics, University of [email protected]

Anna Paganoni
Lab. MOX, Dept. Mathematics, Politecnico di [email protected]

Jean-Frederic Gerbeau
INRIA Paris-Rocquencourt
Sorbonne Universites, UPMC Universite Paris [email protected]

CP3

Simplification of Uncertain Systems to Their Approximate Models

Mathematical modeling of physically available modules results in higher-order systems, sometimes with uncertainty within, making their study and analysis difficult. A solution to this exists: model order reduction. An effective procedure to obtain a reduced model for an uncertain system using the Routh approximant is discussed in this paper. The proposed methodology is an extension of an existing technique from the continuous-time domain to the discrete-time domain. The algorithm is justified and strengthened by various available examples from the literature.

Amit Kumar Choudhary
Indian Institute of Technology (Banaras Hindu University)
[email protected]

Shyam Krishna Nagar
Dept. of Electrical Engineering
IIT (BHU) Varanasi, U.P. [email protected]

CP3

Comparison of Surrogate-Based Uncertainty Quantification Methods for Computationally Expensive Simulators

Polynomial chaos (PC) and Gaussian process (GP) emulation are popular surrogate modelling techniques, developed independently over the last 25 years. Despite tackling similar problems in the field, there has been little comparison of the two methods in the literature. In this work, we build PC and GP surrogates for two black-box simulators used in industry. We assess their comparative performance with changes in design size and type using a number of validation metrics.

Nathan Owen, Peter Challenor, Prathyush Menon
College of Engineering, Mathematics and Physical Sciences
University of [email protected], [email protected], [email protected]

CP3

Randomized Cross Validation

We propose a new randomized cross validation (CV) criterion offering improved performance for model choice in regression problems when the design is not space filling. The proposed criterion is of special interest in environmental and biomedical observation, where strong constraints on placement of measuring points are enforced. The paper presents the new criterion, and demonstrates its superiority with respect to usual CV techniques, considering several alternative regression models as well as simulated and realistic data.

Maria-Joao Rendas
[email protected]

Li You
I3S CNRS [email protected]
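For context, the usual CV criterion that such proposals are benchmarked against fits in a few lines; the randomized criterion of the abstract modifies how validation points are drawn, and is not reproduced here. The linear model and data are illustrative:

```python
import numpy as np

def kfold_cv_mse(X, y, k=5, seed=0):
    """Plain K-fold cross-validation score (mean squared prediction error)
    for a least-squares linear model -- the standard criterion that
    randomized-CV variants aim to improve on non-space-filling designs."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)            # hold out one fold at a time
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((X[fold] @ beta - y[fold]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
print(kfold_cv_mse(X, y))  # approaches the noise variance 0.01
```

When the design is clustered rather than space filling, the random fold assignment above can leave whole regions unvalidated, which is the failure mode the proposed randomized criterion targets.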

CP5

Uncertainty Quantification and Sensitivity Analysis for Functional System Response, with an Application to Flyer Plate Experiments

We are usually interested in the functional system response when investigating the shock ignition of explosives. The variation in functional response involves both phase and amplitude, and confounding these two may lead to some problems. In this paper, we derive that the total uncertainty is the sum of the phase and amplitude uncertainty, and give the sensitivity analysis based on this decomposition. Then we apply our method to numerical simulations of a flyer plate experiment.

Hua Chen, Haibing Zhou, Shudao Zhang
Institute of Applied Physics and Computational Mathematics
chen [email protected], zhou [email protected], zhang [email protected]

CP5

Sobol’ Indices for Problems Defined in Non-Rectangular Domains

Uncertainty and sensitivity analysis has been recognized as an essential part of model applications. Global sensitivity analysis based on Sobol' sensitivity indices (SI) offers a comprehensive approach to model analysis by quantifying the relative importance of each input parameter in determining the value of model output. We have developed a novel method for the estimation of Sobol' SI for models in which inputs are confined to a non-rectangular domain (e.g., in the presence of constraints).

Oleksiy V. Klymenko, Sergei Kucherenko, Cleo Kontoravdi
Imperial College [email protected], [email protected], [email protected]
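For reference, the standard pick-freeze estimator on the rectangular setting (independent uniform inputs on the unit hypercube) that constrained-domain methods generalize looks as follows; the additive model is illustrative, not one from the paper:

```python
import numpy as np

def first_order_sobol(model, dim, n=100000, seed=0):
    """Saltelli-style pick-freeze estimates of first-order Sobol' indices
    S_i = Var(E[f|x_i]) / Var(f) for independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, dim)), rng.random((n, dim))
    fA, fB = model(A), model(B)
    var = fA.var()
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # freeze x_i from B, keep the rest from A
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

f = lambda x: x[:, 0] + 2.0 * x[:, 1]   # analytic indices: 0.2, 0.8, 0.0
print(first_order_sobol(f, dim=3))
```

The pick-freeze trick relies on sampling the frozen input independently of the rest, which is exactly what breaks down when constraints correlate the inputs and confine them to a non-rectangular domain.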

CP5

Sensitivity Analysis with Dependence Measures for Spatio-Temporal Numerical Simulators

The numerical simulators used in environmental applications often deal with complex outputs such as spatio-temporal fields, take several uncertain parameters as inputs, and can be time expensive. To perform the global sensitivity analysis of such simulators, we propose to use new dependence measures (HSIC) based on reproducing kernel Hilbert spaces. Spatio-temporal HSIC indices and their aggregated versions are thus obtained, yielding the global and local influence of each input variable and its evolution over time.

Amandine Marrel, Matthias De Lozzo
CEA Cadarache, [email protected], [email protected]

Fadji Hassane Mamadou
Universite de Strasbourg, [email protected]

Olivier Bildstein
CEA Cadarache, [email protected]

Philippe Ackerer
Universite de Strasbourg, [email protected]

CP6

Multidimensional Time Model for Probability Cumulative Function

The new method is based on changes of the Cumulative Distribution Function in relation to time change in sampling patterns. The Multidimensional Time Model for Probability Cumulative Function can be reduced to a finite-dimensional time model with two ordinal numbers: 4 for the summation and multiplication over events and their probabilities, and ordinal number 17 for the fractal-dimensional time arising from alike supersymmetrical properties of probability.

Michael Fundator
Sackler Colloquium, National Academy of Sciences [email protected]

CP6

A Bayesian Inference and Importance Sampling Approach to Propagation of Uncertain Probability Distributions

Bayesian inference is used to quantify the uncertainty associated with a distribution derived from data. Yet, in the almost universal case of lack of complete data, the derived distribution cannot be precisely specified. We propose a novel approach, based on Importance Sampling, for propagating uncertain probability distributions. The method identifies an optimal sampling distribution that is representative of the possible range of distributions and adaptively reweights the samples to simultaneously propagate the full range.

Michael D. Shields, Jiaxin Zhang
Johns Hopkins [email protected], [email protected]
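The reweighting principle behind such an approach can be illustrated with one set of samples reused across a whole family of candidate distributions. The Gaussian family and the broad sampling density below are purely illustrative; the optimal sampling distribution of the abstract is not reproduced:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
q_mu, q_sigma = 0.0, 2.0                       # broad sampling density q
x = rng.normal(q_mu, q_sigma, 200000)          # drawn once, reused below
qx = normal_pdf(x, q_mu, q_sigma)

def expectation_under(mu, sigma, g=np.square):
    """E_p[g(X)] for a candidate p = N(mu, sigma), reusing the q-samples via
    self-normalised importance weights w proportional to p(x)/q(x)."""
    w = normal_pdf(x, mu, sigma) / qx
    w = w / w.sum()
    return float(np.sum(w * g(x)))

# second moment E[X^2] = mu^2 + sigma^2 for each candidate, from the SAME samples
for mu, sigma in [(0.0, 1.0), (0.5, 1.0), (0.0, 1.5)]:
    print(mu, sigma, expectation_under(mu, sigma))
```

No new model runs are needed as the candidate distribution varies, which is what makes reweighting attractive when each sample is expensive; the quality of the estimates degrades if a candidate p places mass where q has few samples, hence the abstract's emphasis on choosing q well.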

CP7

Some a Priori Error Estimates of Stochastic Galerkin Approximations for Randomly Parameterized ODEs

In this work we focus on generalized polynomial chaos (gPC) based stochastic Galerkin approximations of random ODEs. We provide a priori error estimates for gPC approximations of different classes of first- and second-order random ODEs assuming stochastic regularity conditions for the exact solution. The upper bounds are provided in terms of truncation errors and temporal discretization errors corresponding to different time-stepping schemes. Numerical studies are provided to illustrate some of the theoretical results.

Christophe Audouze
University of Toronto
Institute for Aerospace Studies
[email protected]

Prasanth B. Nair
University of [email protected]
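As a worked miniature of gPC approximation for a random ODE (a pseudo-spectral projection rather than the Galerkin formulation analysed in the talk; the decay-rate parameters are illustrative), consider y' = -(1 + 0.5ξ)y with ξ ~ U(-1,1), whose exact solution y(T) = exp(-(1 + 0.5ξ)T) is projected onto Legendre chaos:

```python
import numpy as np
from numpy.polynomial import legendre

def gpc_moments(T=1.0, degree=8, nquad=32):
    """Legendre-chaos coefficients of y(T) = exp(-(1 + 0.5*xi)*T), xi ~ U(-1,1),
    via Gauss-Legendre quadrature; returns the implied mean and variance of y(T)."""
    nodes, weights = legendre.leggauss(nquad)
    yT = np.exp(-(1.0 + 0.5 * nodes) * T)
    c = np.array([
        # c_n = E[y P_n] / E[P_n^2]; U(-1,1) density is 1/2, E[P_n^2] = 1/(2n+1)
        0.5 * np.sum(weights * yT * legendre.Legendre.basis(n)(nodes)) * (2 * n + 1)
        for n in range(degree + 1)
    ])
    mean = c[0]
    var = np.sum(c[1:] ** 2 / (2.0 * np.arange(1, degree + 1) + 1.0))
    return mean, var

print(gpc_moments())  # spectrally accurate for this smooth solution
```

Truncating at a fixed degree and replacing the exact solution map by a time-stepping scheme introduce exactly the two error sources (truncation and temporal discretization) that the a priori estimates of the abstract bound.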

CP7

Generalised ANOVA for the Solution of Stochastic Partial Differential Equations

This paper presents a novel approach for the solution of stochastic partial differential equations (SPDEs). The proposed approach couples Galerkin projection with the framework of polynomial correlated function expansion, which is a generalisation of the ANOVA decomposition. The framework presented decouples the SPDE into a set of PDEs. The coupled set of PDEs obtained is solved using the finite difference method and a homotopy algorithm. Performance of the proposed approach has been illustrated with two PDEs.

Souvik Chakraborty
Research Scholar, Department of Civil Engineering
IIT [email protected]

Rajib Chowdhury
Department of Civil Engineering
IIT [email protected]

CP7

Propagation of Uncertainties for Hyperbolic Equations

Modeling uncertainty propagation in hyperbolic equations and kinetic equations is extremely demanding in terms of robustness and preservation of the maximum properties. This presentation will review new families of such systems with proved BV bounds and maximum-preserving estimates. The design of these new systems of PDEs shows connection with L1 minimization techniques and optimal control of polynomial systems.

Bruno Despres
University Paris VI
Laboratory [email protected]

CP7

Level Set Methods for Polynomial Chaos Expansion of Stochastic PDEs Outputs

We propose a new method to approximate stochastic solutions of uncertain PDEs using Polynomial Chaos expansions of their level sets. The method is non-intrusive and targets solutions with steep gradients at random locations. An adaptive choice of the level set is used to control the approximation error, ensuring high accuracy at a significantly lower cost compared to the classical non-intrusive projection approach. We apply and validate the method on subsurface flows exhibiting steep fronts.

Pierre [email protected]

Olivier P. Le Maître
[email protected]

CP8

Bayesian Parameter Inference with Stochastic Differential Equation Models

Inferring parametric uncertainty for stochastic differential equation (SDE) models is a computationally hard problem due to the high-dimensional integrals that have to be calculated. Here, we consider the generic problem of calibrating a one-dimensional (1D) SDE model to time series and quantifying the ensuing parametric uncertainty. We re-interpret this problem as the problem of simulating the dynamics of a 1D statistical mechanical system and employ a Hamiltonian Monte Carlo algorithm together with multiple time scale integration to solve it efficiently. A generic re-parametrization allows us to solve the dynamics of the fast modes in between measurement points analytically.

Carlo Albert, Simone [email protected], [email protected]

CP8

Residuals in Inverse Problems and Model Form Uncertainty Quantification

Mathematical models often differ in their predictions from experimental observations. Residual errors between a best-fit model and experimental data can be used to investigate missing dynamics. One of the challenges in analyzing residuals is separating deterministic mismatch from variability. We consider in this talk a Bayesian framework for covariance modeling of residuals, an approach that can lead to improved understanding of the uncertainty in model predictions. We illustrate these techniques on a vibration control example.

Ben G. Fitzpatrick
Tempest [email protected]

CP8

Shape-Constrained Uncertainty Quantification in Unfolding Elementary Particle Spectra at the Large Hadron Collider

The particle spectrum unfolding problem is a statistical inverse problem central to making inferences at the Large Hadron Collider at CERN. Existing methods for constructing confidence intervals in this application can have markedly lower coverage than expected, even for realistic spectra. We present a novel approach that guarantees conservative finite-sample coverage, but still produces usefully tight confidence bounds by imposing physically motivated shape constraints on the particle spectrum using convex optimization.

Mikael Kuusela
Ecole Polytechnique Federale de Lausanne (EPFL)
[email protected]

Philip B. Stark
Department of Statistics
University of California, Berkeley
[email protected]

CP8

Empirical Evolution Equations

Evolution equations are differential equations that describe the evolution of a system depending on a continuous time variable t. The equation takes the form

γ̇ = v(γ)

where γ(t) ∈ X is the state of the system at time t, the dot denotes a time derivative, and v is a vector field on X. The space X represents the state space of the system. When X is finite dimensional, the evolution equation reduces to a system of ordinary differential equations. When X is infinite dimensional, the equation can be regarded as a partial differential equation. The classical treatment of evolution equations supposes the vector field v is known and concerns issues including the existence and uniqueness of solutions, the stability of solutions, and the asymptotic behavior of solutions as t → ∞. In this paper we bring a statistical perspective to the study of evolution equations. Our primary interest is in performing statistical inference for the solution of the evolution equation when v is estimated, based on a sample of data, by its empirical version vn.

Susan Wei
University of [email protected]

Victor M. Panaretos
Section de [email protected]
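The plug-in idea, solving the equation with the estimated field vn in place of v, can be sketched in one dimension. The linear true field and the least-squares estimator below are illustrative choices, not the estimators studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# "data": noisy evaluations of the (here linear) true field v(x) = -x
xs = rng.uniform(-2.0, 2.0, 200)
vs = -xs + 0.05 * rng.normal(size=200)

# empirical vector field v_n: least-squares fit of a linear model through 0
slope = float(np.sum(xs * vs) / np.sum(xs * xs))
v_n = lambda x: slope * x

def solve(v, gamma0=1.0, t_end=1.0, dt=1e-3):
    """Forward-Euler solution of the evolution equation dgamma/dt = v(gamma)."""
    g = gamma0
    for _ in range(int(round(t_end / dt))):
        g += dt * v(g)
    return g

print(solve(v_n), np.exp(-1.0))  # plug-in solution vs exact solution for v(x) = -x
```

The statistical question raised in the abstract is how the sampling error in vn propagates through the solution map, i.e. how far the plug-in trajectory deviates from the true one as the sample size grows.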

CP9

Quantifying the Degradation in Thermally Treated Ceramic Matrix Composites

Reflectance spectroscopy obtained from ceramic matrix composites is used to quantify the products of oxidation. The data collection will be described in detail in order to point out the potential biasing present in the data processing. A probability distribution is imposed on select model parameters, and then non-parametrically estimated, and a pointwise asymptotic confidence band is constructed. An increase in the SiO2 present was detected in samples as the length of heating was increased.

Jared Catenacci, H. T. Banks
North Carolina State [email protected], [email protected]

CP9

A Bayesian Inversion Approach to Controlled-Source Electromagnetic Imaging

In the Bayesian approach to controlled-source electromagnetic imaging, the objective is to infer the conductivity random field based on finite, noisy measurements of the electromagnetic field. We present a methodology to solve the pertinent non-linear, high-dimensional Bayesian inverse problem by addressing the numerical solution of the parametric, deterministic Maxwell's equations in combination with adaptive sparse-grid interpolation and model order reduction.

Dimitris Kamilis
Institute for Digital Communications
School of Engineering, University of [email protected]

Nick Polydorides
Institute for Digital Communications
University of [email protected]

CP9

Spatial Prediction for Quantification of Radar Cross Section Measurement Uncertainty

Uncertainty quantification is an important issue in Radar Cross Section (RCS) measurement, which quantifies the scattering power of an object. At high frequency, the positioning error is a dominant source, difficult to introduce in the uncertainty budget. In our original approach, we show that the RCS uncertainty quantification actually leads to a spatial statistics problem, involving kriging or other spatial prediction methods. The key point is then the prior probabilistic knowledge of the RCS variability.

Pierre Minvielle-Larrousse
CEA, DAM, [email protected]

Jean [email protected]

Bruno Stupfel
CEA/[email protected]

CP9

Characterizing Uncertainties in Photoacoustic Tomography

For inverse problems in photoacoustic tomography (PAT), some coefficients of the PDE model must be assumed known in order to perform the reconstruction of other PDE coefficients of greater interest. As these fixed parameters can only be known up to a certain accuracy in practice, this introduces error in the reconstructed images. In this work we quantify the errors in the reconstructed optical parameters of interest caused by model uncertainties in PAT and fluorescence PAT.

Sarah Vallelian
Statistical and Applied Mathematical Sciences Institute &
North Carolina State [email protected]

Kui Ren
University of Texas at [email protected]

CP10

Performance Tuning of Next-Generation Sequencing Assembly Via Gaussian Process Model with Branching and Nested Factors

For next-generation sequencing data, the selection of assembly tools and the corresponding parameters has a great impact on the quality of de novo assembly. Among assembly tools, there are some shared factors, and some factors are specific to particular tools. Due to complex structures, a tuning procedure is proposed based on the Gaussian process model with branching and nested factors. The performance of the proposed procedure is demonstrated via numerical experiments and a real example.

Ray-Bing Chen, Xinjie Chen, Shuen-Lin Jeng
Department of Statistics
National Cheng Kung [email protected], [email protected], [email protected]

CP10

Attitude Determination and Uncertainty Analysis in Eclipse of Small LEO Satellites

An attitude determination concept and uncertainty analysis for a hypothetical small LEO satellite, with specific emphasis on eclipse operation, are presented. Sensor configurations are used in a single-frame method. Common sensors for nanosatellites are selected in order to analyze the uncertainty in the sense of covariance results. For this purpose, covariance and RMS errors with respect to the actual values are presented. In and out of the eclipse period, changing configurations provide better attitude estimation results for filtering techniques.

Demet Cilden, Chingiz Hajiyev
Istanbul Technical [email protected], [email protected]

CP10

Quantifying Sources of Variability in Planing Hull Experiments

We will present a methodological framework to appropriately handle experimental time history data collected across different study conditions, e.g. variable speeds, size of hull, instrumentation, and facility. We propose a modeling approach that allows for quantification of the different sources of variability inherent to these types of experiments. The experiments relate to an effort to resolve the discrepancy between experimental measurements and simulation-based predictions for planing boats traveling at target speeds through calm water.

Margaret C. Nikolov
United States Naval [email protected]

Carolyn Judge
United States Naval Academy
Naval Architecture and Ocean Engineering [email protected]

CP10

Study of Detonation Computer Model Calibrationwith Approximate Bayesian Computing

This paper considers a flyer plate experiment driving bydetonation. In this experiment the velocity of free surfacewas measured. Numerical simulation of the experimentwas done by a computer code. In order to determine theparameter of computer model including JWL EOS’s pa-rameters, approximate Bayesian computing is used. Andthen according the post PDF of these parameters, this pa-per studies the uncertainty quantification of the computermodel.

Yan-Jin Wang
Institute of Applied Physics and Computational Mathematics
wang [email protected]

CP11

Stochastic Modeling of Dopant Atoms in Nanoscale Transistors Using Multi-Level Monte Carlo

We consider the system of stochastic drift-diffusion-Poisson equations to model nanoscale devices such as FinFETs and nanowire sensors. We developed a multi-level Monte-Carlo finite-element method to obtain the current-voltage characteristics as functions of the inherently random doping concentrations. Permittivity, mobility, and forcing terms are random. Finally, we discuss the effect of random-dopant fluctuations on transistor performance, which is the main limiting factor in today's semiconductor technology.
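For orientation, the multi-level Monte-Carlo idea writes the finest-level expectation as a telescoping sum, E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}], so most samples are taken on cheap coarse levels. A generic sketch with a toy level solver standing in for the finite-element code (the integrand and sample counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def level_solver(x, level):
    """Toy stand-in for a PDE solve on mesh level `level`: midpoint-rule
    quadrature of exp(x*t) on [0,1] with 2**(level+1) cells."""
    n = 2 ** (level + 1)
    t = (np.arange(n) + 0.5) / n
    return np.exp(x * t).mean()

def mlmc(max_level, samples_per_level):
    """Telescoping MLMC estimate of E[Q] over a random input x ~ U(0,1)."""
    estimate = 0.0
    for level in range(max_level + 1):
        n = samples_per_level[level]
        x = rng.uniform(0.0, 1.0, n)
        fine = np.array([level_solver(xi, level) for xi in x])
        if level == 0:
            estimate += fine.mean()
        else:
            coarse = np.array([level_solver(xi, level - 1) for xi in x])
            estimate += (fine - coarse).mean()  # low-variance correction term
    return estimate

# Few samples on fine (expensive) levels, many on coarse ones
est = mlmc(4, [4000, 2000, 1000, 500, 250])
```

The correction terms have small variance because adjacent levels are strongly correlated, which is what buys the cost savings over plain Monte Carlo.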

Amirreza Khodadadian
Technical University Vienna (TU Vienna)
[email protected]

Leila Taghizadeh, Clemens Heitzinger
Vienna University of Technology
[email protected], [email protected]

CP11

Non-Intrusive Stochastic Galerkin Method for the Stochastic Nonlinear Poisson-Boltzmann Equation

The stochastic nonlinear Poisson-Boltzmann equation describes the electrostatic potential in the presence of free charges and has applications in many fields. Here we present a non-intrusive Galerkin method for this nonlinear model equation. It is non-intrusive in the sense that solvers and preconditioners for the deterministic equation can be used as they are. By comparing the non-intrusive stochastic Galerkin method and a stochastic collocation method, it is found that the Galerkin method is a viable option with comparable computational effort.

Gudmund Pammer, Clemens Heitzinger
Vienna University of Technology
[email protected], [email protected]

CP11

Optimal Method for Calculating Solutions of the Stochastic Drift-Diffusion-Poisson System

The stochastic drift-diffusion-Poisson system has been of great importance for the development of efficient uncertainty-quantification tools for charge transport. We present a rigorous work-and-error analysis of a multi-level estimator for this problem with random permittivities and random forcing terms. We optimize the number of Monte-Carlo samples and the fineness of the mesh in the finite-element method to find the most efficient method to solve this problem. Numerical results support this claim.

Leila Taghizadeh, Clemens Heitzinger
Vienna University of Technology
[email protected], [email protected]

CP12

A Low Rank Kriging for Large Spatial Data Sets

Providing a best linear unbiased predictor (BLUP) is always a challenge for non-repetitive, irregularly spaced, spatial data. The estimation process as well as spatial prediction involves inverting an n × n covariance matrix, which typically requires computation time of order n³. Studies showed that oftentimes the complete covariance matrix can be additively decomposed into two matrices: one from the measurement error modeled as white noise, and the other due to the observed process, which can be nonstationary. The nonstationary component, if needed, is often assumed to be of low rank. The method of fixed rank kriging (FRK), developed in Cressie and Johannesson (2008), uses the benefit of smaller rank to improve the computation time to order nr², where r is the rank of the nonstationary covariance matrix. In this work, we consider a Cholesky decomposition for FRK and use a group-wise penalized likelihood where each row of the lower triangular matrix is penalized. More precisely, we present a two-step approach using a group LASSO type shrinkage estimation technique for estimating the rank of the covariance matrix and finally the matrix itself. We implement the block coordinate descent algorithm to obtain the local optimizer of our likelihood problem. We investigate our findings over a set of simulation studies and finally apply the method to rainfall data obtained in Colorado, US.

Siddhartha Nandy
Department of Statistics and Probability
Michigan State University
[email protected]

CP12

The ”Spatial Boxplot”: a Comprehensive Exploratory Tool for Spatial Data

We introduce a versatile exploratory tool that may be used to describe and visualize various distributional characteristics for data with complex spatial dependencies. We present a flexible mathematical framework for modeling spatial random fields, give possible extensions to space-time data, and show mapping hot zones of batting ability in baseball as an illustration.

Dana Sylvan
Department of Mathematics
California State University Dominguez Hills
[email protected]

CP12

Using Stochastic Estimation for Selective Inversion of Sparse Matrices

The computation of the trace of inverse matrices is a common problem in several applications, from risk analysis to reservoir characterization, requiring the use of selective inversion techniques. In the field of Uncertainty Quantification, both the estimation of the diagonal via unbiased stochastic estimators and the inverse covariance approximation are suitable approaches to this problem. The two methods can be combined in order to obtain an efficient selective inversion framework, with good potential for scaling.
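The unbiased stochastic estimators mentioned here are commonly of Hutchinson type: for probe vectors z with E[zzᵀ] = I, one has E[zᵀA⁻¹z] = tr(A⁻¹). A minimal sketch (using a dense solve in place of the sparse selective-inversion machinery, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def hutchinson_trace_inv(A, n_probes=2000):
    """Unbiased estimate of trace(A^{-1}) via Rademacher probe vectors:
    E[z^T A^{-1} z] = trace(A^{-1}) when E[z z^T] = I."""
    n = A.shape[0]
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe
        total += z @ np.linalg.solve(A, z)   # z^T A^{-1} z
    return total / n_probes

# Small SPD test matrix with a known inverse trace of 24/18
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
est = hutchinson_trace_inv(A)
```

In practice each solve would use a sparse factorization rather than a dense one; the estimator itself is unchanged.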

Fabio Verbosio
USI Università della Svizzera italiana
[email protected]

Matthias Bollhöfer
TU Braunschweig
[email protected]

Olaf Schenk
USI - Università della Svizzera italiana
Institute of Computational Science
[email protected]

Drosos Kourounis
USI - Università della Svizzera italiana
Institute of Computational Science
[email protected]

CP12

The Utility of Quantile Regression in Large Scale Disaster Research

Following disaster, population-based screening programs are routinely established to assess physical and psychological consequences of exposure. These datasets are highly skewed as only a small percentage of trauma-exposed individuals develop health issues. We evaluate the utility of quantile regression in disaster research in contrast to the commonly used population-averaged models, with focus on the distribution of risk factors for posttraumatic stress symptoms across quantiles.
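As background, quantile regression replaces the squared-error loss of mean regression with the asymmetric check (pinball) loss, whose minimizer is the conditional τ-quantile; on skewed or heteroscedastic data the fitted slopes therefore differ across quantiles, which is the effect exploited in such analyses. A generic sketch on synthetic data (not the authors' screening data):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def pinball_loss(beta, X, y, tau):
    """Check loss; its minimizer is the tau-th conditional quantile."""
    r = y - X @ beta
    return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

def quantile_regression(X, y, tau):
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS warm start
    res = minimize(pinball_loss, beta0, args=(X, y, tau),
                   method="Nelder-Mead",
                   options={"maxiter": 5000, "maxfev": 5000})
    return res.x

# Heteroscedastic toy data: spread grows with x, so slopes differ by quantile
n = 2000
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + x * rng.normal(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
b_median = quantile_regression(X, y, 0.5)  # slope near 2
b_upper = quantile_regression(X, y, 0.9)   # steeper slope in the upper tail
```

The upper-quantile slope exceeds the median slope because the noise scale grows with x, the kind of distributional information a population-averaged model would miss.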

Katarzyna Wyka
City University of New York School of Public Health
[email protected]

Dana Sylvan
Department of Mathematics
California State University Dominguez Hills
[email protected]

JoAnn Difede
Psychiatry Department
Weill Cornell Medical College
[email protected]

CP13

Estimating Uncertainty in Computer Vision Algorithms

We introduce a statistical framework for estimating the uncertainty given by standard algorithms in computer vision for segmenting an image, by generating realizations of an actual image, which is taken to represent a sample from some distribution of possible observable images. With estimates of segmentation uncertainty for a given image and algorithm, uncertainty can be propagated to measures of an image's characteristics. We show the application to quantitative analysis of materials in scanning electron micrographs.

James Gattiker
Los Alamos National Laboratory
[email protected]

CP13

Estimation of the Super-Quantile for Optimization - Application in Thermal Engineering

Lifetime improvement of an electronic component for thermal design can be performed by minimizing a quantile. It is well known that the super-quantile of a distribution is a much more robust quantity. Nevertheless, its estimation can be too costly, and the optimization framework implies that this estimation has to be realized on many different samples. In this work, we introduce an efficient estimation of the super-quantile based on a scalar minimization problem and an importance sampling method that allows the use of such minimization. This technique is applied on a thermal use case coming from the TOICA European project (Thermal Overall Integrated Conception of Aircraft).
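The scalar minimization underlying super-quantile estimation is commonly the Rockafellar-Uryasev formulation, CVaR_α(X) = min_t { t + E[(X − t)₊]/(1 − α) }. A plain Monte Carlo sketch of that formulation (without the importance-sampling refinement the abstract introduces):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

def superquantile(samples, alpha):
    """Rockafellar-Uryasev: CVaR_alpha = min_t t + E[(X - t)_+] / (1 - alpha).
    The minimizing t is the alpha-quantile; the minimum is the super-quantile."""
    def objective(t):
        return t + np.maximum(samples - t, 0.0).mean() / (1.0 - alpha)
    res = minimize_scalar(objective,
                          bounds=(samples.min(), samples.max()),
                          method="bounded")
    return res.fun

x = rng.normal(0.0, 1.0, 200_000)
sq = superquantile(x, 0.95)
# For a standard normal, CVaR_0.95 = phi(z_0.95) / 0.05, about 2.06
```

The objective is convex in t, so the one-dimensional minimization is cheap; the costly part, which importance sampling addresses, is generating enough tail samples.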

Jonathan Guerra
ONERA / Epsilon
[email protected]

Fabrice Gamboa
Laboratory of Statistics and Probability
University of Toulouse
[email protected]

Patrick Cattiaux
Institut de Mathématiques de Toulouse
[email protected]

Patricia Klotz
ONERA
[email protected]

Nicolas Dolin
Epsilon
[email protected]

CP13

Surrogate Based Inference of Atomic Diffusivity in Metallic Multilayers

The atomic diffusivity in reactive multilayers is characterized by two Arrhenius branches, above and below the melting point of one of its constituents. Observations of homogeneous ignition and of self-propagating fronts are used to infer the corresponding parameters. Posterior distributions are inferred using polynomial chaos surrogates of the observables. Results reveal a large sensitivity to the activation energy and highly correlated posteriors. The potential use of PC surrogates in optimal experimental design is also illustrated.

Manav Vohra
Duke University
Corning
[email protected]

Omar M. Knio
Duke University
[email protected]

CP14

Uncertainty Quantification Approaches for River Hydraulics Modelling

We compare different computational approaches for uncertainty quantification via Monte Carlo and Quasi Monte Carlo simulation in fixed bed river hydraulics modelling. We assess the sensitivity of the model results with respect to different model parameters and boundary conditions, and we study empirically the statistical convergence of the output pdfs. Accurate and efficient semi-implicit models are also proposed, which allow reducing the computational cost of probabilistic open channel flow simulations.

Lea Boittin
MOX - Dipartimento di Matematica - Politecnico di Milano
[email protected]

Luca Bonaventura
Politecnico di Milano
MOX, Dipartimento di Matematica F. Brioschi
[email protected]

Alessio Radice
Dipartimento di Ingegneria Civile e Ambientale
[email protected]

CP14

Probabilistic Model Identification for the Simulation of Turbulent Flow over Porous Media

This contribution concerns the application of porous material on the trailing edge of an airfoil for noise reduction. The focus is on identifying coefficients of a model that simulates the turbulent flow over the porous media by solving the inverse problem in a probabilistic setting. The coefficients of the volume- and Reynolds-averaged Navier-Stokes equations are modeled as random variables with distributions defined by prior expert knowledge, and then improved by inferring from DNS data of the velocity field and the Reynolds stresses.

Noemi Friedman
Institute of Scientific Computing
Technische Universität Braunschweig
[email protected]

Elmar Zander
TU Braunschweig
Inst. of Scientific Computing
[email protected]

Pradeep Kumar
Institute of Fluid Mechanics
Technische Universität Braunschweig
[email protected]

Hermann G. Matthies
Institute of Scientific Computing
Technische Universität Braunschweig
[email protected]

CP14

Drop Spreading with Random Viscosity

Motivated by an application in respiratory mechanics, we examine the spreading of a viscous drop over a film, assuming the liquid's viscosity is regulated by the concentration of a solute with an initially heterogeneous distribution with prescribed statistical features. We asymptotically reduce the governing nonlinear PDEs to a set of ODEs, allowing stochastic effects to be examined efficiently. In some instances, the variability in the drop's spreading rate can be understood in terms of the spatial sampling of the solute field when the drop is initially deposited on the film.

Oliver E. Jensen
University of Manchester, United Kingdom
[email protected]

Feng Xu
University of Manchester
[email protected]

CP14

Calibration of a Set of Wall Functions for Turbulent Flows

Wall functions are of significant importance for the analysis and simulation of wall-bounded turbulent flows. However, contrary to general belief, their model parameters appear not to be universal. In the present study these parameters are estimated using UQ techniques applied to publicly available numerical and experimental datasets for the canonical flows. Furthermore, an algorithm consisting of a one-dimensional ODE is presented to estimate mean quantities of turbulent channel flow using the resulting wall functions. Sensitivity analysis of this model with respect to various parameters is also addressed.

Saleh Rezaeiravesh
PhD Student, Division of Scientific Computing, Uppsala University
[email protected]

Mattias Liefvendahl
Division of Scientific Computing, Uppsala University, Sweden
Swedish Defense Research Agency, FOI, Sweden
[email protected]

Per Lötstedt
Uppsala University
[email protected]

CP15

Total Error: Propagation and Partitioning in a Lidar Driven Forest Growth Simulator

The US Forest Service FVS (Forest Vegetation Simulator) is a very widely used growth model developed for projecting individual trees and forest development through time. FVS is now being used to evaluate a variety of global change scenarios as they relate to forest health, carbon life cycle analysis, sustainability, wildlife habitat, wildland fires, etc. In this paper, the total error in the form of an uncertainty budget is developed for FVS projections, where initial model inputs are spatially explicit single-tree stem maps developed with small-footprint airborne lidar (Laser Imaging Detection and Ranging). An uncertainty budget shows the overall precision of estimates/predictions made with a system, partitioned according to different types of uncertainty sources within and outside of the system. In a comprehensive fashion, sources of uncertainties due to measurements, classification, sampling error, and model parameter estimates are accounted for in the lidar derived stem maps and within the FVS system. Spatially identifying the sources of uncertainties in time, modeling their propagation and accumulation, and finally, quantifying them locally on a tree basis and globally on a forest level are presented. Uncertainties in future forest responses due to uncertainties in projected global climatic change predictions that will also drive this type of forest model will also be discussed.

George Z. Gertner
University of Illinois, Urbana-Champaign
[email protected]

CP15

Uncertainty Propagation Through Multidisciplinary Black-Box Co-Simulation Systems

Modern multidisciplinary systems (systems of systems, e.g. future energy grids) are too complex to be modelled by a small number of people. Instead, component models developed in different institutes may be coupled in a modular manner (co-simulation). Due to imperfect knowledge of these component models, they have to be treated as black boxes. We present a modular, non-intrusive, ensemble-based uncertainty quantification framework for Smart Grid co-simulation systems considering epistemic as well as aleatory uncertainty.

Cornelius Steinbrink
University Oldenburg, Germany
[email protected]

Sebastian [email protected]

MS1

ASKIT: An Efficient Algorithm for Approximating Large Kernel Matrices

Kernel-based methods are a powerful tool in many inference and learning problems. A key bottleneck in these methods is computations involving the Gram matrix of all pairwise kernel interactions. We describe ASKIT, a scalable, kernel-independent method for approximately evaluating kernel matrix-vector products. ASKIT generalizes the Fast Multipole Method to high-dimensional problems and is based on a randomized method for efficiently factoring off-diagonal blocks. We describe the method and give results on accuracy and scalability.

William March
UT Austin
[email protected]

MS1

Fast Algorithms for Generating Realisations from High Dimensional Distributions

Generating realisations from high dimensional distributions is computationally expensive, typically scaling as O(N³), where N is the number of dimensions. The talk discusses a fast direct way of generating realisations from such high dimensional distributions whose complexity scales almost linearly in finite arithmetic.

Sivaram Ambikasaran
Department of Mathematics
Courant Institute of Mathematical Sciences
[email protected]

Krithika Narayanaswamy
Indian Institute of Technology
[email protected]

MS1

Kernel Methods for Large Scale Data Analysis

In this talk, we focus on kernel techniques for applications in data analysis. The first major step in a practical application is concerned with the modeling. Usually, there is some prior knowledge (such as distances) on the data which needs to be reflected in the reproducing kernel Hilbert space in order to yield a problem-adapted reconstruction scheme. We present a general framework which allows the systematic construction of such smoothness spaces. A technical difficulty which can arise in this procedure is that the kernel itself needs to be numerically computed, and hence this numerical error has to be taken care of. To this end, we will give a priori error bounds for the reconstruction error using such manually designed kernels. The second step is on a computational level. Usually in kernel-based methods, one faces large and densely populated ill-conditioned matrices. It is a common observation that some of these numerical problems are due to the chosen basis representation. We will report on recent works to handle the computational issues by problem-adapted basis representations. This is partly based on joint work with M. Griebel, B. Zwicknagl (both Bonn University) and Peter Zaspel (Heidelberg University).

Christian Rieger
Universität Bonn
Collaborative Research Centre
[email protected]

MS1

A Robust Parallel Direct Inverse Approximation for Kernel Machine with Applications

We present a parallel direct approximation for fast kernel machine inversion in high dimension, a common problem in data analysis and computational statistics. Fast kernel machine inversion can be viewed as approximation schemes for dense kernel matrix inversion that exploit the hierarchical low-rank structure of the matrix with the help of spatial data structures, typically trees. We introduce a novel parallel inverse approximation for the ASKIT matrix, derive complexity estimates, and apply it to the kernel classification problem.

Chenhan Yu
University of Texas at Austin
[email protected]

MS2

Accelerating the Calibration of High-Resolution Climate Models

Climate models contain numerous parameters associated with aerosols, clouds, and other small-scale physical phenomena. The parameter values are a function of model resolution and must be adjusted for each resolution under consideration. Objective methods for parameter estimation are computationally prohibitive at high resolutions, but can be accelerated by leveraging low resolution simulations that approximate high resolution responses. We illustrate calibration acceleration using a statistical framework that blends climate model ensembles at multiple resolutions. Prepared by LLNL under Contract DE-AC52-07NA27344.

Donald Lucas, Vera Bulaevskaya, Gemma J. Anderson
Lawrence Livermore National Laboratory
[email protected], [email protected], [email protected]

MS2

Towards Uncertainty Quantification in 21st Century Sea-Level Rise Predictions: PDE Constrained Optimization as a First Step for Accelerating Forward Propagation

As a first step to quantify the uncertainty of sea level rise predictions due to uncertainty in the basal friction coefficient of the Greenland ice sheet, we invert for the basal friction field by solving a large-scale PDE constrained optimization problem, minimizing the mismatch with the observations. Then, with the reduced Hessian, we compute an approximate Gaussian distribution for the basal friction field, to be used as a prior for Bayesian calibration or sampled for Uncertainty Propagation.

Mauro Perego
CSRI, Sandia National Laboratories
[email protected]

Stephen Price
Los Alamos National Laboratory
[email protected]

Andrew Salinger
CSRI
Sandia National Laboratories
[email protected]

Irina K. Tezaur
Sandia National Laboratories
[email protected]

John D. Jakeman
Sandia National Laboratories
[email protected]

Michael S. Eldred
Sandia National Laboratories
Optimization and Uncertainty Quantification
[email protected]

MS2

Quantifying the Impacts of Parametric Uncertainty on Biogeochemistry in the ACME Land Model

Large uncertainties remain in climate predictions, many of which originate from uncertainties in land-surface processes. In particular, uncertainties in land-atmosphere fluxes of carbon dioxide and energy are driven by incomplete knowledge about model parameters and their variation over space and time. Using the ACME land model, we perform uncertainty decomposition based on global Polynomial Chaos (PC) surrogate construction, using the Bayesian Compressive Sensing (BCS) sparse learning technique.

Daniel Ricciuto
Oak Ridge National Laboratory
[email protected]

Khachik Sargsyan
Sandia National Laboratories
[email protected]

Peter Thornton
Oak Ridge National Laboratory
[email protected]

MS2

Towards Uncertainty Quantification in 21st Century Sea-Level Rise Predictions: Efficient Methods for Forward Propagation of Uncertainty Through Land-Ice Models

This talk will present the evolution of our approach for quantifying uncertainty in anticipated sea-level rise due to melting of the polar ice-sheets. Specifically, we will discuss approaches for propagating an uncertain, spatially distributed basal friction through a transient ice-sheet model. The run time and high dimensionality of the transient model pose numerous challenges to most UQ methods. In this talk we will present an initial study that highlights these challenges and discuss avenues for improvement.

Irina K. Tezaur
Sandia National Laboratories
[email protected]

John D. Jakeman
Sandia National Laboratories
[email protected]

Michael S. Eldred
Sandia National Laboratories
Optimization and Uncertainty Quantification
[email protected]

Mauro Perego
CSRI, Sandia National Laboratories
[email protected]

Andrew Salinger
CSRI
Sandia National Laboratories
[email protected]

Stephen Price
Los Alamos National Laboratory
[email protected]

MS3

Hyperbolic Stochastic Galerkin Methods for Nonlinear Systems of Hyperbolic Conservation Laws

When applying the generalized polynomial chaos based stochastic Galerkin method (gPC-SG) to nonlinear systems of hyperbolic conservation laws, one generates a system which is not necessarily hyperbolic. In this talk we will introduce a general framework which avoids this difficulty. We will discuss the general idea, its applications to the compressible Euler and shallow-water equations, and open questions and challenges.

Shi Jin
Shanghai Jiao Tong University, China and the University of Wisconsin-Madison
[email protected]

MS3

MLMC Methods for Computing Measure Valued and Statistical Solutions of Systems of Conservation Laws

We briefly review the theory of entropy measure valued solutions (EMVS), and show how we can compute statistics of multi-dimensional systems of conservation laws using entropy preserving schemes and stochastic sampling. The numerical framework can be improved by Multi-level Monte-Carlo (MLMC) approximations, obtaining a speedup compared to the ordinary Monte-Carlo sampling procedure, even in the setting where we do not have convergence of single samples. We also investigate the computation of statistical solutions related to EMVS.

Kjetil O. Lye
ETH Zürich
[email protected]

MS3

Robust Boundary Conditions for Stochastic Incompletely Parabolic and Hyperbolic Systems of Equations

We study hyperbolic and incompletely parabolic systems in three space dimensions with stochastic boundary and initial data. The goal is to show how the variance of the solution depends on the boundary conditions imposed. Estimates of the variance of the solution are presented both analytically and numerically. The technique is used on both a simple model problem as well as the one-dimensional Euler and Navier-Stokes equations.

Markus K. Wahlsten
Department of Mathematics
Linköping University, SE-581 83 Linköping, Sweden
[email protected]

Jan Nordström
Mathematics
Linköping University
[email protected]

MS3

A Dynamically Bi-Orthogonal Method for Time-Dependent Stochastic Partial Differential Equations

We propose a dynamically bi-orthogonal method (DyBO) to study time dependent stochastic partial differential equations (SPDEs). The objective of our method is to exploit some intrinsic sparse structure in the stochastic solution by constructing the sparsest representation of the stochastic solution via a bi-orthogonal basis. In this talk, we derive an equivalent system that governs the evolution of the spatial and stochastic basis in the KL expansion. Several numerical experiments will be provided to demonstrate the effectiveness of the DyBO method.

Zhiwen Zhang
Department of Mathematics, The University of Hong Kong
[email protected]

MS4

Recursive Estimation Procedure of Sobol Indices Based on Replicated Designs

In the context of global sensitivity analysis, the estimation procedure based on replicated designs, denoted the replication procedure, allows Sobol' indices to be estimated at an efficient cost. However, this method still requires a large number of model evaluations. Here, we consider the possibility of increasing the number of evaluation points, and thus the accuracy of the estimates, by rendering the replication procedure recursive. The key feature of this approach is the construction of structured space-filling designs. For the estimation of first-order indices, we exploit a nested Latin Hypercube already introduced in the literature. For the estimation of closed second-order indices, we propose an iterative construction of an orthogonal array of strength two. Various space-filling criteria are used to evaluate our designs.
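For orientation, the classical pick-freeze estimator that replicated designs make more economical computes S_i = Cov(f(A), f(B_i)) / Var(f(A)), where B_i is an independent design whose i-th column is frozen to that of A. A generic sketch on a toy additive model (plain random designs here, not the nested structured designs of the talk):

```python
import numpy as np

rng = np.random.default_rng(6)

def first_order_sobol(model, dim, n):
    """Pick-freeze estimate of first-order Sobol' indices:
    S_i = Cov(f(A), f(B_i)) / Var(f(A)), with column i of B_i frozen to A."""
    A = rng.uniform(0.0, 1.0, (n, dim))
    B = rng.uniform(0.0, 1.0, (n, dim))
    yA = model(A)
    var = yA.var()
    indices = []
    for i in range(dim):
        Bi = B.copy()
        Bi[:, i] = A[:, i]  # freeze the i-th input
        yBi = model(Bi)
        indices.append(np.mean(yA * yBi) - yA.mean() * yBi.mean())
    return np.array(indices) / var

# Additive test model with known indices: S_1 = 0.8, S_2 = 0.2, S_3 = 0
def model(x):
    return 2.0 * x[:, 0] + x[:, 1]

s = first_order_sobol(model, 3, 200_000)
```

Each index costs one extra batch of model runs here; the replication procedure estimates all first-order indices from just two cleverly permuted designs.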

Laurent Gilquin
Université Grenoble Alpes, Inria Grenoble Rhône-Alpes
[email protected]

Elise Arnaud
LJK / Université Joseph Fourier
[email protected]

Hervé Monod
INRA
[email protected]

Clémentine Prieur
LJK / Université Joseph Fourier Grenoble (France)
INRIA Rhône-Alpes
[email protected]

MS4

Generation of Tailor-Made Regular Factorial Designs

Many factorial designs used in practice belong to the class of regular designs, whose construction is based on properties of finite abelian groups. Regular factorial designs can be adapted to a large diversity of situations with respect to the input factors. I will talk about the automatic generation of regular designs based on practical user specifications, and discuss how it can be applied to the design of computer experiments.

Hervé Monod
INRA
[email protected]

MS4

Some New Space-Filling Designs for Computer Experiments

Orthogonal arrays provide an attractive class of space-filling designs for computer experiments. In this talk, I will present several new classes of space-filling designs, which enjoy better space-filling properties than ordinary orthogonal arrays. The main focus is on strong orthogonal arrays and mappable nearly orthogonal arrays.

Boxin Tang
Department of Statistics and Actuarial Science, SFU
[email protected]

MS4

Experimental Design Principles in Quadrature for Uncertainty Quantification

The sparse quadrature grids at which functions are evaluated for quadrature in UQ can be considered as experimental design points. An example is in estimating the coefficients of a polynomial chaos expansion. This prompts the use, and combination, of several areas of experimental design in UQ: (i) algebraic statistics (using Groebner bases) applied to the problem of which multivariate moments can be estimated; (ii) optimal and space-filling designs for computer experiments.
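A concrete instance of quadrature nodes acting as design points: estimating Legendre polynomial chaos coefficients by Gauss-Legendre quadrature, c_k = (2k+1)/2 ∫ f(x) P_k(x) dx, for a uniform input on [-1, 1]. A minimal one-dimensional sketch (the function and degrees are illustrative):

```python
import numpy as np

def pce_coefficients(f, degree, n_nodes):
    """Project f onto Legendre polynomials P_0..P_degree using Gauss-Legendre
    quadrature; the nodes are the 'design points' at which f is evaluated."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    fx = f(nodes)
    coeffs = []
    for k in range(degree + 1):
        Pk = np.polynomial.legendre.Legendre.basis(k)(nodes)
        # (2k+1)/2 normalizes the (orthogonal, not orthonormal) basis
        coeffs.append((2 * k + 1) / 2.0 * np.sum(weights * fx * Pk))
    return np.array(coeffs)

f = lambda x: x**2
c = pce_coefficients(f, 3, 8)
# x^2 = (1/3) P_0 + (2/3) P_2 exactly, so c should be [1/3, 0, 2/3, 0]
```

With 8 Gauss-Legendre nodes, the projection integrals of a quadratic are computed exactly, so the recovered coefficients match the closed-form expansion up to rounding.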

Henry P. Wynn
London School of Economics
[email protected]

MS5

MAP Estimators and Their Consistency in Bayesian Inverse Problems for Functions

We consider the inverse problem of estimating an unknown function from noisy measurements of a known, possibly nonlinear, map of the unknown function. Under certain conditions, the Bayesian approach to this problem results in a well-defined posterior measure. Under these conditions, we show that the maximum a posteriori (MAP) estimator is characterised as the minimiser of an Onsager-Machlup functional on an appropriate function space depending on the prior. We then establish a form of Bayesian posterior consistency for the MAP estimator.

Masoumeh Dashti
University of Sussex
[email protected]

MS5

The Bayesian Formulation of EIT: Analysis and Algorithms

We provide a rigorous Bayesian formulation of the EIT problem in an infinite dimensional setting, leading to well-posedness in the Hellinger metric with respect to the data. We focus particularly on the reconstruction of binary fields where the interface between different media is the primary unknown. We consider three different prior models - log-Gaussian, star-shaped and level set. Numerical simulations based on the implementation of MCMC are performed, illustrating the advantages and disadvantages of each type of prior in the reconstruction, in the case where the true conductivity is a binary field, and exhibiting the properties of the resulting posterior distribution.

Matthew M. Dunlop
University of Warwick
[email protected]

Andrew Stuart
Mathematics Institute, University of Warwick
[email protected]

MS5

Maximum a Posteriori Estimates in Bayesian Inverse Problems

We discuss the maximum a posteriori (MAP) estimate and its definition for infinite-dimensional non-Gaussian Bayesian inverse problems. Moreover, we consider how the Bregman distance can be used to characterize the MAP estimate.

Tapio Helin
University of Helsinki
[email protected]

Martin Burger
University of Münster
Münster, Germany
[email protected]

MS5

Approximations of Bayesian Inverse Problems Using Gaussian Process Emulators

A major challenge in the application of Markov chain Monte Carlo methods to large scale inverse problems is the high computational cost associated with solving the forward model for a given set of input parameters. To overcome this difficulty, we consider using a surrogate model that approximates the solution of the forward model at a much lower computational cost. We focus in particular on Gaussian process emulators, and analyse the error in the posterior distribution resulting from this approximation.
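For context, a Gaussian process emulator replaces the forward model with the GP regression posterior conditioned on a handful of training runs, after which MCMC proposals can be scored cheaply. A minimal sketch with a squared-exponential kernel and a toy forward model (kernel, length-scale, and model all illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def gp_emulator(X_train, y_train, X_test, length=0.3, jitter=1e-6):
    """GP regression posterior mean/variance with a squared-exponential kernel,
    used as a cheap surrogate for an expensive forward model."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * d**2 / length**2)
    K = k(X_train, X_train) + jitter * np.eye(len(X_train))
    Ks = k(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    # posterior variance: prior variance minus explained part
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, var

forward = lambda x: np.sin(3.0 * x)   # stand-in "expensive" forward model
X_train = np.linspace(0.0, 1.0, 12)   # a dozen training runs
y_train = forward(X_train)
X_test = rng.uniform(0.0, 1.0, 50)
mean, var = gp_emulator(X_train, y_train, X_test)
```

The posterior variance gives exactly the emulator-error handle needed to analyse how the surrogate perturbs the Bayesian posterior.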

Aretha L. Teckentrup
University of Warwick
[email protected]

Andrew Stuart
Mathematics Institute, University of Warwick
[email protected]

MS6

Stability of and Preconditioners for the Pressure and Velocity Reconstruction Based on Coarse and Noisy Velocity Measurement Such As 4D Phase-Contrast Magnetic Resonance Imaging

Modern imaging such as Phase-Contrast Magnetic Resonance Imaging provides 4D images of blood flow in, for example, cerebral aneurysms. The problem, however, is that the resolution in both space and time is coarse and subject to noise. This makes important quantities like pressure or wall-shear stress difficult to compute by local considerations. Therefore, we will in this talk consider the subject of reconstructing high-resolution velocities and pressure by solving optimal control problems subject to PDEs for the fluid flow.

Kent-Andre Mardal
Simula Research Laboratory and Department of Informatics
University of Oslo
[email protected]

Magne Nordaas, Simon W. Funke
Center for Biomedical Computing
Simula Research Laboratory
[email protected], [email protected]

MS6

Statistical Inverse Problems with Application to Neurosciences

Spatial regression with differential regularization is a novel class of models for the accurate estimation of surfaces and spatial fields that merges advanced statistical methodology and scientific computing techniques. The proposed models efficiently handle data distributed over irregularly shaped domains and manifold domains, and allow for a very flexible modeling of the space variation. Full uncertainty quantification is available via classical inferential tools. The method is illustrated via application to neuroimaging data.

Laura M. Sangalli
MOX, Department of Mathematics
Politecnico di Milano, Italy
[email protected]

MS6

Understanding the Effects of Internal Carotid Artery Flow Uncertainty on Intracranial Aneurysm Haemodynamics: an ”In Silico” Study on a Virtual Population

Inter-subject variations of systemic flow waveforms could influence vascular wall response in intracranial aneurysms and consequently the progression of the disease. In this work, flow waveform variability is quantified using a virtual waveform population generated by a Gaussian process model. Computational fluid dynamics simulations map this virtual population onto the aneurysmal wall shear stress space. Surrogate models and sparse grid sampling are used to explore the flow uncertainty related to distal outlet branch resistances and compliances.

Ali Sarrami-ForoushaniCISTIB - University of Sheffield, [email protected]

Toni LassilaCISTIB, University of [email protected]

Luigi Y. Di MarcoCISTIB, University of Sheffield, [email protected]

Arjan J. GeersCentre for Cardiovascular ScienceUniversity of Endinburgh, [email protected]

Ali Gooya, Alejandro F. Frangi
CISTIB, University of Sheffield, UK
[email protected], [email protected]

MS6

Impact of Material Parameter Uncertainty on Stress in Patient Specific Models of the Heart

In this talk we study the impact of material parameter uncertainty on stress computations in patient specific models of the heart. Stress is a driver of pathological remodeling, and stress estimates may therefore have substantial clinical value. The computations rely on parameters that must be estimated from medical image data, and which typically contain a large degree of uncertainty. We present methods for parameter estimation, and study the impact of parameter variation on myocardial stress.

Joakim Sundnes
Simula Research Laboratory
[email protected]

MS7

Statistical Stopping Rules for Iterative Inverse Solvers

Iterative linear system solvers have been shown to be regularization methods when equipped with a suitable stopping rule. In this talk we propose new statistical stopping criteria that provide a measure of the uncertainty in the solution.

Daniela Calvetti
Case Western Reserve University
Department of Mathematics, Applied Mathematics and Statistics
[email protected]

MS7

Stochastic Boundary Maps in EIT

Electrical impedance tomography (EIT) is considered in a setting where the computational domain with an unknown conductivity distribution comprises only a portion of the whole conducting body, and a boundary condition along the artificial boundary needs to be set in order to minimally disturb the estimate in the domain of interest. The boundary map at the artificial boundary is an additional unknown that needs to be estimated along with the conductivity of interest.

Jari Kaipio
Department of Mathematics
University of Auckland
[email protected]

MS7

Inverse Boundary Value Problem of Thermal Tomography

We consider the inverse parabolic boundary value problem of thermal imaging. The corresponding forward problem is first solved by using the stochastic finite element method. The heat capacity and the thermal conductivity are then reconstructed simultaneously by minimizing a chosen Tikhonov-type functional. Only polynomial evaluation and differentiation are required during the inversion, which makes the method fast. Numerical examples are presented in two spatial dimensions with an uncertain boundary curve.

Lauri Mustonen
Aalto University
Department of Mathematics and Systems Analysis
[email protected]
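The Tikhonov-type minimization underlying such inversions can be illustrated on a generic linear inverse problem. The sketch below (plain Python; the operator, data, and weight alpha are illustrative, not the stochastic-FEM thermal setup of the talk) solves min_x ||Ax - y||^2 + alpha ||x||^2 via the regularized normal equations:

```python
# Minimal Tikhonov-regularized least squares for a linear inverse problem
# y = A x + noise: minimize ||A x - y||^2 + alpha * ||x||^2.
# The operator, data, and alpha are illustrative stand-ins.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def solve(M, b):
    # Gaussian elimination with partial pivoting (small dense systems only).
    n = len(M)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def tikhonov(A, y, alpha):
    # Normal equations: (A^T A + alpha I) x = A^T y.
    At = transpose(A)
    n = len(At)
    AtA = [[sum(At[i][k] * A[k][j] for k in range(len(A)))
            for j in range(n)] for i in range(n)]
    for i in range(n):
        AtA[i][i] += alpha
    return solve(AtA, matvec(At, y))
```

Increasing alpha trades data fit for stability, which is the essential dial in any Tikhonov-type functional.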

MS7

Statistical Methods for Pricing and Risk Management of CoCo-Bonds

Substantial amounts of contingent convertible (CoCo) bonds have recently been issued by banks to guarantee the minimum capital requirements of the Basel III regulations. Mark-to-model pricing of such instruments poses challenges from modeling and calibration perspectives. We present a smile-conform generalization of the Black and Scholes equity derivatives modeling approach. We formulate the corresponding calibration problem as a statistical inverse problem and discuss numerical aspects as well as the usefulness of such an approach with regard to risk management.

Martin Simon, Tobias Schoden
Deka Investment GmbH
[email protected], [email protected]

MS8

Review of UQ in Turbulence Modelling to Date

We give an overview of stochastic techniques in turbulence modeling, outlining the major motivations, issues, techniques and applications. We aim to provide a context within which the following talks can be understood.

Paola Cinnella
Laboratoire DynFluid, Arts et Métiers ParisTech
and Università del Salento
[email protected]

Richard P. Dwight
Delft University of Technology
[email protected]

Wouter Edeling
TU Delft
[email protected]

MS8

Quantifying Model-Form Uncertainties Using Field Inversion and Machine Learning

The dominant source of error in turbulence models is structural inadequacy of the model. In this work, full-field inversion is used to obtain corrective, spatially distributed functional forms of model discrepancies in a wide class of problems. Once the inference has been performed over a number of problems that are representative of the deficient physics in the closure model, machine learning techniques are used to reconstruct the model corrections in terms of variables that are accessible in the closure model. The problem is cast in a Bayesian setting and is shown to effectively represent model-form errors in a predictive setting.

Karthik Duraisamy
[email protected]

MS8

A Practical Method for Estimating Uncertainty Due to RANS Modeling

Bayesian Model Scenario Averaging (BMSA) combines multiple RANS turbulence models and posterior closure coefficient distributions to obtain a stochastic estimate of a Quantity of Interest for an unmeasured prediction scenario. However, the full BMSA approach requires many forward samples, which can prohibit its application to complex flow topologies. Therefore, we present a simplified version of BMSA based on Maximum a Posteriori estimates, and apply it to a transonic flow over a 3D wing.

Wouter Edeling
TU Delft
[email protected]

Richard P. Dwight
Delft University of Technology
[email protected]

Paola Cinnella
Laboratoire DynFluid, Arts et Métiers ParisTech
and Università del Salento
[email protected]

MS8

A Stochastic Model Inadequacy Representation for a Reynolds-Averaged Burgers' Equation K-Epsilon Model

We apply Reynolds averaging to a random Burgers' equation and close the equation with a k-epsilon eddy viscosity model. A stochastic model inadequacy formulation is developed in which a stochastic PDE governs the error in the computed Reynolds stress. This formulation is based on an exact equation for the Reynolds stress but includes random forcing to represent uncertainty due to the turbulence model. The inadequacy model is calibrated with Bayesian methods and compared to data.

Bryan W. Reuter
University of Texas at Austin
[email protected]

Todd Oliver
PECOS/ICES, The University of Texas at Austin
[email protected]

Robert D. Moser
University of Texas at Austin
[email protected]

MS9

Data Assimilation, Parameter Estimation and ROM for Cardiovascular Flows

We present a complete framework combining data assimilation from real medical (patient-specific) data, numerical simulation carried out on high performance infrastructure, and reduced order modelling methodologies in order to guarantee competitive computational costs and real-time computing. Parameter estimation is another important goal, as well as parametric sensitivity analysis in the worst medical scenario. Several examples are discussed to introduce recent advances in reduced order methodology and its properties.

Gianluigi Rozza
SISSA, International School for Advanced Studies
Trieste, Italy
[email protected]

Francesco Ballarin
SISSA, International School for Advanced Studies
[email protected]

MS9

Space-Time Adaptive Approach to Variational Data Assimilation Using Wavelets

This talk focuses on one of the main challenges of 4-dimensional variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by considering time in the same fashion as the space variables, reformulating the mathematical model in the entire space-time domain, and solving the problem on a near optimal computational mesh that automatically adapts to spatio-temporal structures of the solution. The compressed form of the solution eliminates the need to save or recompute data for every time slice as is typically done in traditional time marching approaches to 4-dimensional variational data assimilation. The reduction of the required computational degrees of freedom is achieved using the compression properties of multi-dimensional second generation wavelets. The simultaneous space-time discretization of both the forward and the adjoint models makes it possible to solve both models either concurrently or sequentially. In addition, the grid adaptation reduces the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The proposed approach is demonstrated for the advection-diffusion problem in two space-time dimensions.

M. Yousuff Hussaini
Department of Mathematics
Florida State University
[email protected]

Innocent Souopgui
Department of Marine Science
The University of Southern Mississippi
[email protected]

Scott Wieland
Department of Mechanical Engineering
University of Colorado, Boulder
[email protected]

Oleg V. Vasilyev
Department of Mechanical Engineering
University of Colorado Boulder
[email protected]

MS9

Non-Intrusive Polynomial Chaos Using Model Reduction

This work proposes a new efficient method for estimating uncertainty propagation. We investigate the time-dependent viscous Burgers equation. The viscosity parameter is assumed to be an uncertain quantity with a normal distribution of known mean and variance. A stochastic Galerkin projection is performed on a truncated polynomial chaos (PC) expansion of Hermite polynomials. The Smolyak sparse interpolation method and first and second order adjoint techniques are applied to accelerate the spectral collocation PC. In addition, a dictionary of local parametric reduced order models with known error estimates is used to increase the method's efficiency, since multiple solutions of the original (deterministic) model are required. Finally we compare results and CPU load against Monte Carlo outcomes.

Ionel M. Navon
Florida State University
Department of Scientific Computing
[email protected]

Razvan Stefanescu
North Carolina State University
[email protected]
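The Hermite PC expansion at the heart of such methods can be illustrated in one stochastic dimension. The sketch below (plain Python; plain Monte Carlo projection stands in for the Smolyak/adjoint acceleration of the talk) estimates the coefficients of u(xi) = exp(xi), xi ~ N(0,1), in probabilists' Hermite polynomials He_k, for which the exact coefficients c_k = exp(1/2)/k! are known:

```python
import math
import random

# Non-intrusive projection onto probabilists' Hermite polynomials:
# c_k = E[u(xi) He_k(xi)] / k!, since E[He_j He_k] = k! delta_jk.
# Toy response u(xi) = exp(xi); exact c_k = exp(1/2) / k!.

def hermite_e(k, x):
    # He_0 = 1, He_1 = x, He_{n+1} = x He_n - n He_{n-1}
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0
    return h1

def pc_coefficients(u, order, n_samples, seed=0):
    rng = random.Random(seed)
    sums = [0.0] * (order + 1)
    for _ in range(n_samples):
        xi = rng.gauss(0.0, 1.0)
        val = u(xi)
        for k in range(order + 1):
            sums[k] += val * hermite_e(k, xi)
    return [s / n_samples / math.factorial(k) for k, s in enumerate(sums)]
```

Replacing the Monte Carlo loop with sparse (Smolyak) quadrature nodes and weights is exactly the acceleration step the abstract describes.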

MS9

Non-smooth Optimization Techniques for Solving a Radiation Data Assimilation Problem

Detection of radiation release events in an urban environment requires efficient strategies to accurately identify and track the threat to the urban population in the proximity of the event. This work proposes to estimate the location and intensity of a simulated radiation source located in a Washington DC neighborhood. Due to the domain geometry and the source-to-sensor radiation model employed in this study, the quadratic cost function associated with this problem presents multiple discontinuities and local minima. Non-smooth optimization techniques including Implicit Filtering, Nelder-Mead, Genetic Algorithms and Simulated Annealing have the potential to deal with such difficulties, and their performance is compared for computational efficiency.

Razvan Stefanescu
North Carolina State University
[email protected]

Ralph C. Smith
North Carolina State University
Dept of Mathematics
[email protected]
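One of the compared techniques, simulated annealing, can be sketched on a toy discontinuous cost (plain Python; the cost function, cooling schedule, and step size are illustrative, not the radiation source model of the talk):

```python
import math
import random

# Toy simulated annealing on a discontinuous, multimodal 1D cost,
# illustrating why gradient-free, non-smooth methods are needed.

def cost(x):
    # Jump discontinuity at x = 1; global minimum near x ~ 1.9.
    base = (x - 2.0) ** 2 + 0.5 * math.cos(5.0 * x)
    return base + (3.0 if x < 1.0 else 0.0)

def anneal(f, x0, n_iter=20000, t0=2.0, step=0.5, seed=1):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(n_iter):
        t = t0 * (1.0 - i / n_iter) + 1e-6      # linear cooling
        y = x + rng.uniform(-step, step)        # random proposal
        fy = f(y)
        # Always accept improvements; accept uphill moves with
        # probability exp(-(fy - fx) / t), which lets the chain cross
        # the discontinuity while the temperature is high.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest
```

Started behind the jump (x0 = -3), the chain must cross a cost barrier of 3 before it can find the global basin, which is exactly what defeats smooth descent methods on this class of problems.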

MS10

Fracture Length and Aperture Correlations: Implications for Transport Simulations in Discrete Fracture Networks

Abstract not available.

Jeffrey Hyman
Los Alamos National Laboratory
Earth and Environmental Sciences Division
[email protected]

MS10

Decision-Oriented Optimal Experimental Design

Abstract not available.

Dan O'Malley
Los Alamos National Laboratory
[email protected]

MS10

Using Level-Set Methods to Generate Random Conductivity Fields Conditioned on Data and Desired Statistical and Geometric Characteristics

Abstract not available.

Larrabee Winter
University of Arizona
Department of Hydrology and Water Resources
[email protected]

MS10

Inversion by Conditioning

Abstract not available.

J. Jaime Gomez-Hernandez
Research Institute of Water and Environmental Engineering
Universitat Politecnica de Valencia, Spain
[email protected]

Teng Xu
Institute for Water and Environmental Engineering
Universitat Politecnica de Valencia
[email protected]

MS11

Ensemble Grouping Strategies for Embedded Stochastic Collocation Methods Applied to Anisotropic Diffusion Problems

In this presentation we propose a method for the solution of a stochastic elliptic PDE with an uncertain diffusion parameter featuring high anisotropy. In the context of stochastic collocation finite element methods we use a local hierarchical polynomial approximation with respect to the uncertain parameters in the sample space. The deterministic PDEs are solved by propagating ensembles of samples in an embedded fashion; we explore different sample-grouping techniques and compare their performance.

Marta D'Elia, H. Carter Edwards
Sandia National Laboratories
[email protected], [email protected]

Jonathan J. Hu
Sandia National Laboratories
Livermore, CA
[email protected]

Eric Phipps
Sandia National Laboratories
Optimization and Uncertainty Quantification
[email protected]

Siva Rajamanickam
Sandia National Laboratories
[email protected]

MS11

A Multilevel Stochastic Collocation Method for PDEs with Random Inputs

Abstract not available.

Max Gunzburger
Florida State University
School for Computational Sciences

[email protected]

MS11

Embedded Ensemble Propagation for Improving Performance, Portability and Scalability of Uncertainty Quantification on Emerging Computational Architectures

Typical approaches for forward uncertainty propagation involve sampling of computational simulations over the range of uncertain input data. Often simulation processes from sample to sample are similar. We explore a rearrangement of sampling methods to simultaneously propagate ensembles of samples in an embedded fashion. We demonstrate this enables reuse between samples, reduces computation and communication costs, and improves opportunities for fine-grained parallelization, resulting in improved performance on a variety of contemporary computer architectures.

Eric Phipps
Sandia National Laboratories
Optimization and Uncertainty Quantification
[email protected]

Marta D'Elia, H. Carter Edwards, Mark Hoemmen
Sandia National Laboratories
[email protected], [email protected], [email protected]

Jonathan J. Hu
Sandia National Laboratories
Livermore, CA
[email protected]

Siva Rajamanickam
Sandia National Laboratories
[email protected]
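The embedded rearrangement can be sketched with a toy explicit diffusion update (plain Python; the 1D heat equation and data layout are illustrative stand-ins, not the Sandia implementation). Instead of running the solver once per sample, every grid-point update is applied to all ensemble members at once, so the mesh traversal and stencil data are shared across samples:

```python
# Embedded ensemble propagation sketch: the ensemble index is the
# innermost loop, so one mesh traversal updates every sample.
# u[i][s]: solution at grid point i for ensemble member s.
# kappa[s]: per-sample diffusivity (the uncertain input).

def step_ensemble(u, kappa, dt, dx):
    n, s = len(u), len(u[0])
    new = [row[:] for row in u]
    for i in range(1, n - 1):          # one stencil traversal ...
        for m in range(s):             # ... amortized over all samples
            new[i][m] = u[i][m] + kappa[m] * dt / dx ** 2 * (
                u[i + 1][m] - 2.0 * u[i][m] + u[i - 1][m])
    return new

def propagate(u0, kappa, dt, dx, n_steps):
    # Replicate the initial state across the ensemble, then march in time.
    u = [[x for _ in kappa] for x in u0]
    for _ in range(n_steps):
        u = step_ensemble(u, kappa, dt, dx)
    return u
```

In a production setting the inner ensemble loop becomes a SIMD/thread dimension, which is the source of the reuse and fine-grained parallelism the abstract describes.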

MS11

A Reduced-Basis Multilevel Method for High-Dimensional Stochastic Systems

We combine the multi-level Monte Carlo (MLMC) sampling strategy with complexity reduction techniques, POD and reduced basis. Reduced models dramatically lower the computational cost of PDE realizations, but often suffer from limited accuracy. MLMC offers a robust framework for exploiting multi-fidelity models; however, thus far, the application of MLMC has been restricted to various discretization schemes. Our approach combines model reduction techniques with MLMC, so that arbitrary accuracy can be achieved at a very low computational cost.

Miroslav Stoyanov
Oak Ridge National Laboratory
[email protected]

Guannan Zhang, Clayton G. Webster
Oak Ridge National Laboratory
[email protected], [email protected]
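The MLMC telescoping estimator at the core of such methods can be sketched with an analytic stand-in for the model hierarchy (plain Python; the toy model with a bias decaying like 2^(-level) replaces the reduced-order/high-fidelity PDE hierarchy of the talk):

```python
import random

# Multilevel Monte Carlo telescoping estimator
#   E[Q_L] = E[Q_0] + sum_{l=1..L} E[Q_l - Q_{l-1}],
# where coupled fine/coarse evaluations share the same sample so the
# level differences have small variance and need few samples.

def model(level, xi):
    # Toy model: Q_l(xi) = xi^2 plus a "discretization" bias 2^{-level}.
    return xi * xi + 2.0 ** (-level)

def mlmc(n_per_level, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for level, n in enumerate(n_per_level):
        acc = 0.0
        for _ in range(n):
            xi = rng.gauss(0.0, 1.0)          # same sample on both levels
            fine = model(level, xi)
            coarse = model(level - 1, xi) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total
```

Because the xi^2 part cancels exactly in the coupled differences here, the coarse level carries all the sampling cost, which is the mechanism MLMC (and its reduced-basis variants) exploit.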

MS12

Infinite-Dimensional Compressed Sensing and Function Interpolation

We introduce a framework for function interpolation using compressed sensing with weighted l1 minimization. Unlike previous results, our recovery guarantees are sharp for large classes of functions regardless of the weights used. They also allow one to determine an optimal weighting strategy in the case of multivariate polynomial approximations in lower sets. Finally, we use these guarantees to establish the benefits of a well-known weighting strategy where the weights are chosen based on prior support information.

Ben Adcock
Simon Fraser University
ben [email protected]

MS12

Gradient-Enhanced l1-Minimization for Stochastic Collocation Approximations

This work is concerned with stochastic collocation methods via gradient-enhanced l1-minimization, where the derivative information is used to identify the polynomial chaos expansion coefficients. With an appropriate preconditioned matrix and normalization, we show that the inclusion of derivative information leads to a successful solution recovery, both for bounded and unbounded domains. Numerical examples are provided to compare the computational performance between standard l1-minimization and gradient-enhanced l1-minimization. Numerical results suggest that including derivative information can accelerate the recovery of the PCE coefficients.

Ling Guo
Department of Mathematics, Shanghai Normal University, China
[email protected]

Tao Zhou
Institute of Computational Mathematics, AMSS
Chinese Academy of Sciences
[email protected]

Akil Narayan, Dongbin Xiu
University of Utah
[email protected], [email protected]

MS12

Stability and Accuracy of the Discrete Least-Squares Approximation on Multivariate Polynomial Spaces

We review the main results achieved in the analysis of the stability and accuracy of the discrete least-squares approximation on multivariate polynomial spaces, with noiseless evaluations at random points, noiseless evaluations at low-discrepancy point sets, and noisy evaluations at random points.

Giovanni Migliorati
Université Pierre et Marie Curie
[email protected]
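The noiseless-evaluations-at-random-points setting can be illustrated in one dimension (plain Python; the target function, monomial basis, and sample size are illustrative choices, kept minimal rather than matching the multivariate analysis of the talk):

```python
import random

# Discrete least-squares polynomial approximation from noiseless
# evaluations at uniform random points on [-1, 1]: solve the normal
# equations G c = r with G_jk = sum_i x_i^{j+k}, r_j = sum_i f(x_i) x_i^j.

def lstsq_poly(f, degree, n_points, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n_points)]
    m = degree + 1
    G = [[sum(x ** (j + k) for x in xs) for k in range(m)] for j in range(m)]
    r = [sum(f(x) * x ** j for x in xs) for j in range(m)]
    # Gaussian elimination with partial pivoting (small dense system).
    for k in range(m):
        p = max(range(k, m), key=lambda i: abs(G[i][k]))
        G[k], G[p] = G[p], G[k]
        r[k], r[p] = r[p], r[k]
        for i in range(k + 1, m):
            fac = G[i][k] / G[k][k]
            for j in range(k, m):
                G[i][j] -= fac * G[k][j]
            r[i] -= fac * r[k]
    c = [0.0] * m
    for i in reversed(range(m)):
        c[i] = (r[i] - sum(G[i][j] * c[j] for j in range(i + 1, m))) / G[i][i]
    return c  # coefficients of c[0] + c[1] x + c[2] x^2 + ...
```

As the number of random points grows, the discrete least-squares fit converges to the continuous L2-best polynomial approximation, which is the regime the stability results address.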

MS12

Stochastic Collocation with Multi-Fidelity Models

We shall discuss a numerical approach for the stochastic collocation method with multi-fidelity simulation models. The method combines the computational efficiency of low-fidelity models with the high accuracy of high-fidelity models. Incorporating the idea of control variates, we exploit the statistical correlation between the low-fidelity and high-fidelity models to accelerate the convergence of the standard multi-fidelity approximation. We shall present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.

Xueyu Zhu
Scientific Computing and Imaging Institute
University of Utah
[email protected]

Dongbin Xiu, Akil Narayan
University of Utah
[email protected], [email protected]
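The control-variate idea can be sketched with an analytic model pair (plain Python; the "expensive" exponential model, its Taylor surrogate, and the choice alpha = 1 are illustrative stand-ins for the high-/low-fidelity simulations of the talk):

```python
import math
import random

# Control-variate multi-fidelity estimator: a few high-fidelity samples,
# corrected using many cheap low-fidelity samples,
#   mu_hat = mean_n(f_hi) + mean_N(f_lo) - mean_n(f_lo),
# i.e. alpha = 1; the correction removes the small-sample error that the
# two correlated models share.

def estimate(n_hi, n_lo, seed=0):
    rng = random.Random(seed)
    f_hi = lambda x: math.exp(x)              # "expensive" model
    f_lo = lambda x: 1.0 + x + 0.5 * x * x    # cheap surrogate (Taylor)
    xs_hi = [rng.gauss(0.0, 1.0) for _ in range(n_hi)]
    xs_lo = [rng.gauss(0.0, 1.0) for _ in range(n_lo)]
    mean = lambda v: sum(v) / len(v)
    return (mean([f_hi(x) for x in xs_hi])
            + mean([f_lo(x) for x in xs_lo])
            - mean([f_lo(x) for x in xs_hi]))
```

The remaining variance is governed by Var(f_hi - f_lo), so the better the low-fidelity model tracks the high-fidelity one, the fewer expensive samples are needed.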

MS13

Sparse Grid, Reduced Basis Bayesian Inversion

In this talk, we present a computational reduction framework for the efficient and accurate solution of Bayesian inverse problems that commonly face the curse of dimensionality and large-scale computation. For the approximation of high or infinite dimensional integration, we take advantage of the sparsity that can be automatically detected by a dimension-adaptive sparse grid construction. In harnessing the large-scale computation, we exploit the reducibility of high-fidelity approximation and propose a goal-oriented reduced basis method. This framework features considerable reduction of the computational cost for solving Bayesian inverse problems for a large range of parametric operator equations.

Peng Chen
UT Austin
[email protected]

Christoph Schwab
ETH Zürich
[email protected]

MS13

Goal-Oriented Nonlinear Model Reduction for Fast Bayesian Inference

This work describes a novel approach for inverting the true characterization of a large-scale and non-Gaussian random field from noisy measurements, with the help of verified and validated simulations of the underlying physical processes. We will investigate innovative goal-oriented nonlinear dimension reduction methods based on kernel principal component analysis as well as adjoint methods for fast Bayesian inference. We will demonstrate the approach by applying it to recovering the probabilistic distribution of a channelized permeability field.

Xiao Chen
Lawrence Livermore National Laboratory
Center for Applied Scientific Computing
[email protected]

Charles Tong, Christina Morency, Joshua White
Lawrence Livermore National Laboratory
[email protected], [email protected], [email protected]

MS13

Nonlinear Reduced-Order Models for Unsteady Compressible Flows

There are many theoretical and practical challenges in developing accurate and efficient reduced-order models for nonlinear compressible flow. This talk will describe ideas developed to meet these challenges and their implementation in a Sandia in-house CFD solver. This work focuses on projection-based Galerkin and Petrov-Galerkin model reduction methods with hyper-reduction, with the goal of employing these ROMs for uncertainty quantification of unsteady compressible flows.

Jeffrey Fike, Irina K. Tezaur, Matthew Barone, Kevin T. Carlberg, Micah Howard, Srinivasan Arunajatesan
Sandia National Laboratories
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

MS13

Control for Systems with Uncertain Parameters

We consider PDE systems that depend on a parameter (e.g., viscosity) that is critical for the dynamics and stability of the plant. By using a mixture of classical optimal control techniques combined with data-driven reduced order model updates, we obtain accurate online models for control. We present a strategy where the control action can respond to failure scenarios through significant updates to the feedback gains.

Boris Kramer
Virginia Tech
[email protected]

Karen E. Willcox
Massachusetts Institute of Technology
[email protected]

Benjamin Peherstorfer
ACDL, Department of Aeronautics & Astronautics
Massachusetts Institute of Technology
[email protected]

MS14

Low-Rank Tensor Approximations in Reduced Basis Methods and Uncertainty Quantification

We propose a novel combination of the reduced basis method with low-rank tensor techniques for the efficient solution of parameter-dependent linear systems in the case of several parameters, as they arise, for example, in uncertainty quantification. This combination consists of three ingredients. First, the underlying parameter-dependent operator is approximated by an explicit affine representation in a low-rank tensor format. Second, a standard greedy strategy is used to construct a problem-dependent reduced basis. Third, the associated reduced parametric system is solved for all parameter values on a tensor grid simultaneously via a low-rank approach. This allows us to explicitly represent and store an approximate solution for all parameter values at a time. Once this approximation is available, the computation of output functionals and the evaluation of statistics of the solution become a cheap online task, without requiring the solution of a linear system.

Jonas Ballani
EPF Lausanne

[email protected]

Daniel Kressner
EPFL Lausanne
[email protected]

MS14

Block-Diagonal Preconditioning for Optimal Control Problems Constrained by PDEs with Uncertain Inputs

We present an efficient approach for simulating optimization problems governed by partial differential equations involving random coefficients. This class of problems leads to prohibitively high dimensional saddle point systems with Kronecker product structure, especially when discretized with the stochastic Galerkin finite element method. Here, we derive and analyze robust Schur complement-based block diagonal preconditioners for solving the resulting stochastic Galerkin systems with low-rank iterative solvers. Finally, we illustrate the effectiveness of our solvers with numerical experiments.

Akwum Onwunta
Max Planck Institute, Magdeburg, Germany
[email protected]

MS14

Fast Solvers for Parameter-Dependent PDEs

The first talk of this two-part mini-symposium is intended to be an introduction. I will give an overview of the computational challenges associated with solving the linear systems of equations associated with various numerical methods for discretizing parameter-dependent PDEs and/or PDEs with random inputs. The talk is intended to set up the other seven in the mini-symposium by covering concepts like low-rank and reduced basis iterative solvers, preconditioning for matrices with Kronecker product structure, and optimal stopping criteria.

Catherine Powell
School of Mathematics
University of Manchester
[email protected]

MS14

An Efficient Reduced Basis Solver for Stochastic Galerkin Matrix Equations

Stochastic Galerkin FE approximation of PDEs with random inputs leads to linear systems whose coefficient matrices have characteristic Kronecker product structure. By reformulating the systems as multi-term linear matrix equations, we develop an efficient solution algorithm which generalizes ideas from rational Krylov subspace projection methods. Its convergence rate is independent of the spatial approximation, while its memory requirements are far lower than those of the preconditioned conjugate gradient method applied to the Kronecker formulation of the systems.

David Silvester
University of Manchester
[email protected]

Catherine Powell
School of Mathematics
University of Manchester
[email protected]

Valeria Simoncini
Università di Bologna
[email protected]

MS15

Adaptive Methods for PDE Constrained Optimization with Uncertain Data

I present an approach to solve risk averse PDE constrained optimization problems with uncertain data that uses different PDE model fidelities and adaptive sampling to substantially reduce the overall number of costly, high fidelity PDE solves. The approximation quality of the optimization subproblems due to sampling and lower fidelity PDE models is adjusted to the progress of the overall optimization algorithm.

Matthias Heinkenschloss
Department of Computational and Applied Mathematics
Rice University
[email protected]

MS15

Stochastic Collocation for Optimal Control with Stochastic PDE Constraints

We discuss the use of stochastic collocation for the solution of optimal control problems which are constrained by stochastic partial differential equations (SPDEs). Thereby the constraining SPDE depends on data which is not deterministic but random. Assuming a deterministic control, randomness within the states of the input data will propagate to the states of the system. For the solution of SPDEs there has recently been an increasing effort in the development of efficient numerical schemes based upon the mathematical concept of generalized Polynomial Chaos. Modal-based stochastic Galerkin and nodal-based stochastic collocation versions of this methodology exist, both of which rely on a certain level of smoothness of the solution in the random space to yield accelerated convergence rates. In the talk we show how to apply the stochastic collocation method in a gradient descent method as well as a sequential quadratic program (SQP) for the minimization of objective functions constrained by an SPDE. The stochastic function involves several higher-order moments of the random states of the system as well as classical regularization of the control. In particular we discuss several objective functions of tracking type. To demonstrate the performance of the method we show numerical test examples and we give an outlook to a real-world problem from medical treatment planning.

Robert Kirby
University of Utah
School of Computing
[email protected]

Hanne Tiesler
Fraunhofer MEVIS
[email protected]

Dongbin Xiu
University of Utah
[email protected]

Tobias Preusser
Fraunhofer MEVIS, Bremen
[email protected]

MS15

Mean-Variance Risk-Averse Optimal Control of Systems Governed by PDEs with Random Parameter Fields Using Quadratic Approximations

We present a method for risk-averse optimal control of systems governed by PDEs with uncertain parameter fields. To make the problem computationally tractable, we invoke a quadratic Taylor series approximation of the control objective with respect to the uncertain parameter field. This enables deriving explicit expressions for the mean and variance of the control objective in terms of its gradients and Hessians with respect to the uncertain parameter. The proposed approach is illustrated using an elliptic PDE control problem with an uncertain coefficient, and we present a comprehensive numerical study.

Alen Alexanderian
NC State University
[email protected]

Noemi Petra
University of California, Merced
[email protected]

Georg Stadler
Courant Institute of Mathematical Sciences
New York University
[email protected]

Omar Ghattas
The University of Texas at Austin
[email protected]
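For a Gaussian parameter, the quadratic Taylor approximation yields the closed-form moments alluded to above. As a sketch (assuming m ~ N(m-bar, C), with g and H the gradient and Hessian of the objective J at m-bar; these are the standard Gaussian quadratic-form identities, not necessarily the exact expressions used in the talk):

```latex
J(m) \approx J(\bar m) + g^{T}(m - \bar m)
          + \tfrac{1}{2}\,(m - \bar m)^{T} H\, (m - \bar m),
\qquad m \sim \mathcal{N}(\bar m, \mathcal{C}),
\]
\[
\mathbb{E}[J(m)] \approx J(\bar m) + \tfrac{1}{2}\,\operatorname{tr}(H\mathcal{C}),
\qquad
\operatorname{Var}[J(m)] \approx g^{T}\mathcal{C}\,g
          + \tfrac{1}{2}\,\operatorname{tr}\!\big((H\mathcal{C})^{2}\big).
```

The linear and quadratic terms are uncorrelated under the Gaussian law (odd moments vanish), which is why the variance splits into the two displayed contributions.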

MS15

Low-Rank Solvers for an Unsteady Stokes-Brinkman Optimal Control Problem with Random Data

We consider the simulation of an optimal control problem constrained by the unsteady Stokes-Brinkman equation involving random data. We consider a generalized polynomial chaos approximation of these random functions in the stochastic Galerkin finite element method. The discrete problem yields a prohibitively high dimensional saddle point system with Kronecker product structure. We discuss how such systems can be solved using tensor-based methods. The performance of our approach is illustrated with extensive numerical experiments.

Martin Stoll
Max Planck Institute, Magdeburg
[email protected]

Sergey Dolgov
Max Planck Institute for Dynamics of Complex Technical Systems
[email protected]

Peter Benner, Akwum Onwunta
Max Planck Institute, Magdeburg, Germany
[email protected], [email protected]

MS16

Generalized Rybicki Press Algorithm in Higher Dimensions

Covariance matrices whose elements are sums of exponentials are frequently encountered in computational statistics in the context of continuous-time autoregressive moving-average (CARMA) models. The talk discusses a more general, numerically stable, linear complexity Rybicki Press algorithm for such matrices. The algorithm, which enables inverting and computing determinants of covariance matrices, scales as O(N^{(3d-2)/d}) in exact arithmetic. In finite arithmetic, the algorithm scales as O(N) and offers more compressibility than typical fast direct solvers. Detailed benchmarks illustrate the scaling of the algorithm in various dimensions.

Sivaram Ambikasaran
Department of Mathematics
Courant Institute of Mathematical Sciences
[email protected]
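The structure exploited here can be illustrated in the simplest case of a single exponential in one dimension, K_ij = exp(-|t_i - t_j|/tau): the underlying process is Markov, so K^{-1} is exactly tridiagonal and can be applied in O(N). The sketch below (plain Python) is a toy instance of the semiseparable structure behind Rybicki-Press-type solvers, not the higher-dimensional generalized algorithm of the talk:

```python
import math

# O(N) application of K^{-1} for K_ij = exp(-|t_i - t_j| / tau) on a
# sorted grid t.  With gap correlations r_i = exp(-(t_{i+1} - t_i)/tau),
# the Gauss-Markov factorization gives a tridiagonal precision matrix:
#   diag_i  = 1 + r_{i-1}^2/(1 - r_{i-1}^2) + r_i^2/(1 - r_i^2)
#   offdiag_i = -r_i / (1 - r_i^2)
# (boundary terms dropped at the ends).

def apply_inverse(t, tau, b):
    n = len(t)
    r = [math.exp(-(t[i + 1] - t[i]) / tau) for i in range(n - 1)]
    d = [1.0] * n
    for i in range(n):
        if i > 0:
            d[i] += r[i - 1] ** 2 / (1.0 - r[i - 1] ** 2)
        if i < n - 1:
            d[i] += r[i] ** 2 / (1.0 - r[i] ** 2)
    e = [-r[i] / (1.0 - r[i] ** 2) for i in range(n - 1)]
    # Tridiagonal matrix-vector product y = K^{-1} b in O(N).
    y = [0.0] * n
    for i in range(n):
        y[i] = d[i] * b[i]
        if i > 0:
            y[i] += e[i - 1] * b[i - 1]
        if i < n - 1:
            y[i] += e[i] * b[i + 1]
    return y
```

The same Markov/semiseparable structure is what lets the generalized algorithm handle sums of exponentials with a banded (rather than dense) factorization.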

MS16

Hierarchically Compositional Kernels for Scalable Nonparametric Learning

We propose a novel class of kernels to alleviate the high computational cost of large-scale nonparametric learning with kernel methods. The proposed kernel marries the Nyström method with a locally lossless approximation in a hierarchical fashion and maintains positive-definiteness. We present comprehensive experiments to show its effectiveness. In particular, on a laptop and a dataset with approximately half a million training examples, the proposed kernel achieves performance close to that of the non-approximate kernel using less than two minutes of training.

Jie Chen
IBM Thomas J. Watson Research Center
[email protected]

MS16

Fitting Gaussian Processes in Ensemble Kalman Filters

Ensemble Kalman filters have become popular for data assimilation in high-dimensional non-linear systems. The methods rely on propagating Monte Carlo samples, and constructing an empirical Gaussian approximation when integrating (dynamic) data. Because of the empirical approximations, samples get coupled over integration steps, and the approach often underestimates the prediction variance. Several approaches have been suggested to mitigate this challenge: localization, inflation, dimension reduction, etc. We here suggest to fit an approximation by estimating the covariance parameters of a Gaussian process. We compare various methods using spatial blocking schemes and non-stationary models. We show applications from Earth sciences.

Jo Eidsvik
NTNU, Norway
[email protected]

MS16

Fast Spatial Gaussian Process Maximum Likelihood Estimation Via Skeletonization Factorizations

We present a procedure for sets of n unstructured 2D observations for evaluation of the kernelized Gaussian process log-likelihood and gradient in O(n^{3/2}) time and O(n log n) storage. Our method uses the skeletonization procedure described by Martinsson & Rokhlin (2005) in the form of the recursive skeletonization factorization of Ho & Ying (2015). We combine this with the matrix peeling algorithm of Lin et al. (2011) to compute maximum-likelihood estimates using black-box optimization packages.

Victor Minden
Stanford University
[email protected]

Anil Damle
Stanford University
[email protected]

Ken Ho
TSMC Technology
[email protected]

Lexing Ying
Stanford University
[email protected]

MS17

Dealing with Uncertainties in Global Decadal Ocean State Estimation

Over the last 1.5 decades the consortium "Estimating the Circulation and Climate of the Ocean" (ECCO) has developed a framework for fitting a state-of-the-art global ocean (and sea ice) general circulation model (GCM) to much of the available diverse streams of satellite and in situ observations via a deterministic least-squares approach. A key ingredient is the availability of an adjoint model of the time-evolving GCM to invert for uncertain initial and surface boundary conditions, as well as internal model parameters. With increasing maturity of the framework and the decadal global state estimates produced, increased attention is warranted to assess the fidelity of prior errors assigned to the observations and the inversion (control) variables, as well as a rigorous assessment of posterior uncertainties. The latter is a key open question, both for the provision of realistic uncertainties in climate reconstructions and for the implications for forecasting. Ongoing work to tackle these problems in the context of ECCO will be described.

Patrick Heimbach
University of Texas at Austin
[email protected]

MS17

UQ for the Global Atmosphere Using Concurrent Samples

A universal problem facing all scientific computing applications is how to achieve efficient calculations on emerging architectures. Uncertainty quantification imposes the need for ensembles of simulations. It also provides a potential source of additional computational work on each compute node by computing these ensemble samples concurrently, i.e. as an inner loop rather than an outer loop. We present results for this approach for a global atmosphere model being developed at Sandia National Laboratories.

Bill Spotz, Jeffrey Fike, Irina K. Tezaur
Sandia National Laboratories
[email protected], [email protected], [email protected]

Andrew Salinger
CSRI
Sandia National Laboratories
[email protected]

Eric Phipps
Sandia National Laboratories
Optimization and Uncertainty Quantification Department
[email protected]

MS17

The Importance of Being Uncertain: The Influence of Initial Conditions in Ocean Models

This paper describes how we might quantify the initial contributions to overall model uncertainty. We test our assumptions about the uncertainty of the initial conditions by designing a robust sampling scheme for the inputs, and use selected inputs to run a set of global Community Earth System Model (CESM) simulations. Using Bayesian statistics, the selected inputs and associated outputs are used to interpolate outcomes and produce a broad distribution of uncertainty in a given metric.

Robin Tokmakian
Naval Postgraduate School
[email protected]

Peter Challenor
National Oceanography Centre, Southampton, UK
[email protected]

MS18

Asymptotic-Preserving Stochastic Galerkin Schemes for the Boltzmann Equation with Uncertainty

We develop a stochastic Galerkin method for the nonlinear Boltzmann equation with uncertainty. The method is based on the generalized polynomial chaos (gPC) and can handle random inputs from the collision kernel, initial data or boundary data. We show that a simple singular value decomposition of the gPC-related coefficients, combined with the Fourier-spectral method (in velocity space), allows one to compute the collision operator efficiently. When the Knudsen number is small, we propose a new technique to overcome the stiffness. The resulting scheme is uniformly stable in both the kinetic and fluid regimes, which offers a possibility of solving the compressible Euler equation with random inputs.

Jingwei Hu
Department of Mathematics, Purdue University
[email protected]

MS18

An Adaptive Hybrid Stochastic Galerkin Method for Uncertainty Quantification for Hyperbolic Problems with Several Random Variables

The continuous sedimentation process in a clarifier-thickener can be described by a scalar non-linear conservation law. Applying this model in a realistic setting requires dealing with several uncertain parameters, so the numerical simulation of this problem calls for an efficient uncertainty quantification method. The presented hybrid stochastic Galerkin (HSG) method extends the classical polynomial chaos approach by a multiresolution discretization in the stochastic space. The HSG approach leads to a partially decoupled deterministic hyperbolic system, which allows an efficient parallel numerical simulation. The numerical approximation of the solution of the resulting high-dimensional hyperbolic system is provided by an adapted finite-volume scheme. Multi-wavelet-based stochastic adaptivity provides a further reduction of the problem complexity. Several numerical experiments show the application of the HSG method to the clarifier-thickener problem with several random parameters and demonstrate the advantages of the method.

Ilja Kroeker
University of Stuttgart
[email protected]

MS18

Control Problems in Socio-Economic Models with Random Inputs

Abstract not available.

Lorenzo Pareschi
Department of Mathematics
University of Ferrara
[email protected]

MS18

Stochastic Regularity of Observables for High Frequency Waves

We consider high frequency waves satisfying the scalar wave equation with highly oscillatory initial data. The speed of propagation of the medium, as well as the phase and amplitude of the initial data, is assumed to be uncertain, described by a finite number of independent random variables with known probability distributions. We introduce quantities of interest (QoIs) as local averages of the squared modulus of the wave solution, or of its derivatives. In the talk we will discuss the regularity of these QoIs in terms of the input random parameters and the wavelength. The regularity is important for uncertainty quantification methods based on interpolation in the stochastic space. In particular, the stochastic regularity should, in an appropriate sense, be independent of the wavelength. We show that the QoIs can indeed have this property, despite the highly oscillatory character of the waves.

Olof Runborg
KTH, Stockholm, Sweden
[email protected]

MS19

Inference for Multi-Model Ensembles: An Application in Glaciology

Computational models are frequently used to explore physical systems. Often, there are several computer models available and also a limited number of observations from the physical system. Here, new methodology is proposed for combining multiple computational models and field observations to make predictions for the system with uncertainty. We do not choose a best model, but instead use a spatially varying convex combination of models. The methodology is motivated by an application in glaciology.

Derek Bingham
Department of Statistics and Actuarial Science
Simon Fraser University
[email protected]

Gwenn Flowers
Department of Earth Sciences
Simon Fraser University
[email protected]

Ofir Harari
Department of Statistics and Actuarial Science
Simon Fraser University
[email protected]

MS19

Optimal Design for Correlated Processes with Input-Dependent Noise

We present a design criterion for parameter estimation in Gaussian process regression models with input-dependent noise. The motivation stems from the area of computer experiments, where computationally demanding simulators are approximated using Gaussian process emulators. Designs are proposed with the aim of minimising the variance of heteroscedastic Gaussian process parameter estimation. A Gaussian process model is presented which allows for an experimental design technique based on an extension of Fisher information to heteroscedastic models.

Alexis Boukouvalas
University of Manchester, UK
[email protected]

MS19

Combination of Sequential Design and Variable Screening. Application to Uncertainties in the Source of Tsunami Simulations

There is a computational advantage in building a design of computer experiments solely on a subset of active variables. However, this prior selection inflates the limited computational budget. We propose to interweave the recent sequential design strategy MICE (Mutual Information for Computer Experiments) with a screening algorithm to improve the overall efficiency of building an emulator. This approach allows us to assess future tsunami risk for complex earthquake sources over Cascadia using the simulator VOLNA.

Joakim Beck
University College London (UCL)
[email protected]

Serge Guillas, Simon Day
University College London
[email protected], [email protected]

MS19

Design for Calibration and History Matching for Complex Simulators

Design for building emulators of computer experiments usually focuses on how to build accurate global approximations. However, if our aim is calibration, then this can be wasteful. In this talk I will discuss design strategies that can be used for history matching and calibration, in both a deterministic and a stochastic setting. Entropy-based designs that minimize the uncertainty in the classification surface are particularly useful, and we will show how these can be efficiently computed.

Richard D. Wilkinson
University of Sheffield
[email protected]

MS20

Optimal Experimental Design for Large-Scale PDE-Constrained Nonlinear Bayesian Inverse Problems

We address the problem of optimal experimental design (OED) for infinite-dimensional nonlinear Bayesian inverse problems. We seek an A-optimal design, i.e., we aim to minimize the average variance of a Gaussian approximation to the inversion parameters at the MAP point. The OED problem includes as constraints the optimality condition PDEs defining the MAP point as well as the PDEs describing the action of the posterior covariance. We provide numerical results for the inference of the permeability field in a porous medium flow problem.

Alen Alexanderian
NC State University
[email protected]

Omar Ghattas
The University of Texas at Austin
[email protected]

Noemi Petra
University of California, Merced
[email protected]

Georg Stadler
Courant Institute of Mathematical Sciences
New York University
[email protected]

MS20

Advances in Dimension-independent Markov Chain Monte Carlo Methods

This talk concerns Markov chain Monte Carlo (MCMC) algorithms whose performance is independent of the dimension of the underlying state-space, in an appropriate sense. The first part of the talk concerns function-space MCMC algorithms for sampling the posterior distribution arising from Bayesian inverse problems. A general class of operator-weighted proposal distributions is introduced; these are dimension- (and covariance-) independent and may include local gradient information. The proposals utilize Hessian information to identify a subspace in which the posterior measure concentrates, and adaptively scale the proposal distribution according to the posterior covariance in this space. The high level of efficiency of these algorithms is demonstrated numerically on examples. Time permitting, the second part of the talk will describe recent advances, exploiting many-core architectures combined with GPU accelerators, which illustrate the potential for gradient-free adaptive MCMC algorithms to efficiently explore targets over high-dimensional state spaces which may not arise from the discretization of a function space.

Kody Law
Oak Ridge National Laboratory
[email protected]

Youssef M. Marzouk, Tiangang Cui
Massachusetts Institute of Technology
[email protected], [email protected]

David E. Keyes
KAUST
[email protected]

Hatem Ltaief
King Abdullah University of Science & Technology (KAUST)
[email protected]

Yuxin [email protected]

MS20

Multilevel Sampling Techniques for Bayesian Inference

Multilevel sampling methods, for example within a Metropolis-Hastings algorithm or as ratio estimators for Bayes' formula, are among the most efficient methods for large-scale PDE-constrained Bayesian inference. Through a clever use of model hierarchies, naturally available in the numerical approximation of PDEs, they offer the accuracy of "gold-standard" classical MCMC estimators at a fraction of the cost, avoiding dimension truncation, Gaussian approximations or linearisations. We present theory and numerical experiments.
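The telescoping idea behind such model hierarchies can be illustrated in the simplest, plain Monte Carlo setting (a sketch only; the abstract's setting is multilevel MCMC and ratio estimators for PDE posteriors, and the "fine"/"coarse" models below are toy stand-ins):

```python
import numpy as np

def two_level_mc(fine, coarse, n_coarse, n_fine, rng):
    """Two-level estimator of E[fine(Z)]: many cheap coarse samples
    plus a small number of coupled fine-minus-coarse corrections."""
    coarse_mean = coarse(rng.standard_normal(n_coarse)).mean()
    z = rng.standard_normal(n_fine)          # same inputs couple the two levels
    correction = (fine(z) - coarse(z)).mean()
    return coarse_mean + correction

rng = np.random.default_rng(3)
fine = lambda z: np.sin(z)                   # "expensive" model
coarse = lambda z: z - z**3 / 6              # cheap Taylor surrogate of sin
est = two_level_mc(fine, coarse, n_coarse=20000, n_fine=200, rng=rng)
```

Because the correction term has small variance, only a few fine-level evaluations are needed, which is the source of the cost savings the abstract describes.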

Robert Scheichl
University of Bath
[email protected]

Aretha L. Teckentrup
University of Warwick
[email protected]

Tim Dodwell
University of Exeter
[email protected]

Andrew Stuart
Mathematics Institute, University of Warwick
[email protected]

MS20

Metropolis-Hastings Algorithms in Function Space for Bayesian Inference of Groundwater Flow

We consider a class of Metropolis-Hastings algorithms suited for general Hilbert spaces and their application to Bayesian inference in PDE models. In particular, we generalize the existing pCN proposal in order to allow for adaptation to the posterior covariance. This improves the statistical efficiency, and numerical experiments indicate that the new method performs independently of dimension and noise variance. We further provide a convergence result for the proposed MH algorithms in terms of spectral gaps.
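A minimal sketch of the basic (non-adapted) pCN proposal that such work generalizes, written for a standard Gaussian prior; the toy likelihood and tuning values are illustrative assumptions:

```python
import numpy as np

def pcn_mcmc(log_lik, u0, n_steps, beta=0.2, rng=None):
    """pCN Metropolis-Hastings: the proposal sqrt(1-beta^2)*u + beta*xi
    preserves the N(0, I) prior, so the accept ratio involves only the
    likelihood -- the property behind dimension-robust performance."""
    rng = rng or np.random.default_rng()
    u, lu = u0, log_lik(u0)
    chain = []
    for _ in range(n_steps):
        v = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(u.shape)
        lv = log_lik(v)
        if np.log(rng.uniform()) < lv - lu:   # likelihood-only accept ratio
            u, lu = v, lv
        chain.append(u)
    return np.array(chain)

rng = np.random.default_rng(2)
# Toy Gaussian likelihood centered at 1 with variance 0.5, in 10 dimensions;
# the posterior mean per coordinate is then 2/3
chain = pcn_mcmc(lambda u: -((u - 1.0) ** 2).sum() / (2 * 0.5),
                 np.zeros(10), 2000, rng=rng)
```

The adapted variants in the abstract replace the isotropic innovation by one shaped to the posterior covariance.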

Bjorn Sprungk
TU Chemnitz
Department of Mathematics
[email protected]

Daniel Rudolf
Institute for Mathematics
University of [email protected]

Oliver G. Ernst
TU Chemnitz
Department of Mathematics
[email protected]

MS21

A Reduced-Order Strategy for Efficient State/Parameter Identification in Cardiac Electrophysiology

A reduced basis (RB) ensemble Kalman filter is proposed for efficiently solving inverse UQ problems in cardiac electrophysiology. Our goal is to identify the location and shape of ischemic regions, which affect the coefficients of a nonlinear unsteady PDE modeling the evolution of the cardiac potential. Combining a RB method for state reduction with a reduction error model for the sake of reliability, we can speed up both the sampling and the updating of parameter probability distributions.

Stefano Pagani
Politecnico di Milano
[email protected]

Andrea Manzoni
EPFL, [email protected]

Alfio Quarteroni
Ecole Pol. Fed. de Lausanne
[email protected]

MS21

A Moment-Matching Method to Study the Variability of Phenomena Described by Complex Models

A stochastic inverse problem is proposed, aiming at studying the observed variability of phenomena. Given an experimental setup and a parametric model, the probability density of the parameters is estimated by imposing that the statistics of the model outputs match the measured ones. An entropy regularisation formulation of the problem is adopted and numerically solved by a stochastic collocation approach. The method is tested on various test cases, including experiments in cardiac electrophysiology.

Eliott Tixier
INRIA, Paris Rocquencourt, France
[email protected]

Jean-Frederic Gerbeau
INRIA Paris-Rocquencourt
Sorbonne Universites, UPMC Universite Paris 6
[email protected]

Damiano Lombardi
INRIA [email protected]

MS21

Data Assimilation and Uncertainty Quantification for Surgery Planning of Single-Ventricle Pathophysiology

Blood flow simulations for surgery planning are typically based on preoperative patient data. Thus the first step consists in performing data assimilation on an appropriate reduced model which represents the circulation outside of the 3D domain of interest, and which is coupled to it. Through cases of single-ventricle pathophysiology, we will present two different strategies to estimate such parameters and take into account the uncertainties in the measurements.

Irene Vignon-Clementel
INRIA [email protected]

MS21

Bayesian Multi-Fidelity Monte Carlo for UQ with Large-Scale Models

We propose a novel Bayesian multi-fidelity uncertainty quantification scheme for complex systems with high stochastic dimension, which achieves unrivalled efficiency through rigorous incorporation of information from low-fidelity models. These need not be accurate in a deterministic sense; merely a similar stochastic structure needs to be retained by the low-fidelity model. We demonstrate the capabilities of our approach through its application to complex, large-scale biomechanical models of abdominal aortic aneurysms and the human respiratory system.

Wolfgang A. Wall
Institute for Computational Mathematics
Technical University of Munich, Germany
[email protected]

Jonas Biehler
Technische Universitat Munchen
[email protected]

MS22

Novel Uncertainty Measures for Inverse Problems

Uncertainty quantification is measured by variances or mean-squared errors, which makes perfect sense in a Hilbert space framework, respectively for Bayesian posteriors close to Gaussians. For underlying geometries far from Hilbert spaces and posteriors far from Gaussian, the meaning of such measures is questionable and often not even defined in an infinite-dimensional limit. In this talk we therefore consider different measures for uncertainty quantification based on Banach spaces, their duals, and Bregman distances related to the norm. We discuss their properties and their potential use in uncertainty quantification, in particular related to variational methods in inverse problems and imaging.

Martin Burger
University of Muenster
Muenster, Germany
[email protected]

MS22

Stochastic Inverse Problems with Impulsive Noise

Impulsive noise models such as salt-and-pepper noise are frequently used in mathematical image restoration, but are usually restricted to the discrete setting. We introduce an infinite-dimensional stochastic impulsive noise model based on marked Poisson point processes, discuss its properties, and show regularization properties of inverse problems subject to such noise. We also treat the conforming discretization of such noise models and compare this to the classical discrete impulsive noise.

Christian Clason
University of Duisburg-Essen
Faculty of Mathematics
[email protected]

MS22

Stochastic Modulus of a Quadrilateral As a Benchmark Problem for Uncertain Domains

We consider the stochastic version of the classical capacity problem for uncertain domains. For this problem, there exists a natural error estimate for any realization of any random boundary. This so-called reciprocal error estimate can be used in the error analysis of statistical quantities related to the numerical solution of the model problem by a sparse grid stochastic collocation method, where the error estimate is applied to each individual solution within the non-intrusive process.

Vesa Kaarnioja
Aalto University
Department of Mathematics and Systems Analysis
[email protected]

Harri H. Hakula
Helsinki University of Technology
[email protected]

MS22

Convergence Rates for Bayesian Inversion

Let us consider an indirect noisy measurement M of a physical quantity U:

M = AU + δE,

where the measurement M, the noise E and the unknown U are treated as random variables, and δ models the noise amplitude. We are interested in what happens to the approximate solution of the above when δ → 0. The analysis of the small noise limit, also known as the theory of posterior consistency, has attracted a lot of interest in the last decade; however, much remains to be done. Developing a comprehensive theory is important, since posterior consistency justifies the use of the Bayesian approach in the same way as convergence results do the use of regularisation techniques. This is joint work with Matti Lassas and Samuli Siltanen (University of Helsinki).

Hanne Kekkonen
University of Helsinki
Department of Mathematics and Statistics
[email protected]

MS23

Quantifying Turbulence Model Form Uncertainty in RANS Simulations of Complex Flow and Scalar Transport

Reynolds-averaged Navier-Stokes (RANS) simulations offer a practical approach for engineering analysis of turbulent flows, but the uncertainty related to the turbulence model should be quantified when using the results for design decisions. Previously we demonstrated promising capabilities of an approach that quantifies the uncertainty in turbulence and turbulent scalar flux models by perturbing the modeled quantities in the governing equations. This presentation will discuss recent findings from applying the methodology to different complex flow configurations.

Catherine Gorle, Z. Hou
Columbia University
[email protected], [email protected]

Stephanie Zeoli, Laurent Bricteux
University of Mons, Belgium
n/a, n/a

MS23

Machine Learning for Uncertainty Quantification in Turbulent Flow Simulations

Because Reynolds Averaged Navier Stokes (RANS) simulations rely on turbulence model closures, the simulation predictions are prone to increased uncertainty in flows where the underlying model assumptions are invalid. Machine learning methods, in combination with the big data sets generated by high fidelity simulations, can be leveraged to detect when these underlying assumptions are violated. This presentation will discuss feature selection methods, machine learning algorithm performance, extrapolation metrics, and rule extraction techniques in the context of RANS model form uncertainty quantification.

Julia Ling
Sandia National Laboratories
[email protected]

Jeremy Templeton
Sandia National Laboratories
[email protected]

MS23

Uncertainty Quantification and Sensitivity Analysis in Large-Eddy Simulations

The assessment of the quality and reliability of large-eddy simulation (LES) results is difficult, because different sources of uncertainty/errors, such as modeling or numerical discretization, may have comparable effects. We use UQ to address the sensitivity of LES results to grid resolution and SGS modeling, by treating the uncertain parameters as input random variables. Continuous response surfaces in the parameter space are built from a small number of LES simulations by using a surrogate model, e.g. the gPC approach.

Lorenzo Siconolfi, Alessandro Mariotti
University of Pisa
[email protected], [email protected]

Maria Vittoria Salvetti
University of Pisa
[email protected]

MS23

Combining Approximate and Exact Bayesian Algorithms to Quantify Model-Form Uncertainties in RANS Simulations

Properly quantifying model-form uncertainties in RANS simulations is important yet challenging. In this work, we propose a physics-informed Bayesian framework to address this problem. Uncertainties are introduced directly to the Reynolds stresses and are parameterized accounting for empirical prior knowledge and physical constraints. A hybrid inversion scheme is adopted based on the ensemble Kalman method and the MCMC method, where a surrogate model is employed to reduce the computational cost. Two test cases are performed to evaluate the proposed framework.

Jianxun Wang
Dept. of Aerospace and Ocean Engineering, Virginia Tech
[email protected]

Yang [email protected]

Jinlong Wu, Heng Xiao
Dept. of Aerospace and Ocean Engineering, Virginia Tech
[email protected], [email protected]

MS24

Use of Difference-Based Methods to Determine Statistical Models in Inverse Problems

We propose the use of difference-based methods with pseudo measurement errors to determine directly from data (without IP calculations and residual plots) an appropriate statistical model, and then to subsequently investigate whether there is a mathematical model misspecification or discrepancy. In the presence of mathematical model misspecification, we also propose to use the information provided by pseudo measurement errors to quantify uncertainty in parameter estimation by bootstrapping methods.

H. T. Banks
North Carolina State University
[email protected]

MS24

Estimation of Covariance Matrices in Ensemble Kalman Filtering

This talk discusses efficient implementations of the ensemble Kalman filter (EnKF) based on shrinkage covariance estimation and on modified Cholesky decomposition. The proposed EnKF implementations exploit the conditional correlations of model components in order to obtain sparse estimators of the background error covariance matrix or its inverse. The forecast ensemble members at each step are used to estimate the background error covariance matrix via the Rao-Blackwell Ledoit and Wolf estimator, which has been specifically developed to approximate high-dimensional covariance matrices using a small number of samples. The estimation of the inverse background error covariance is performed via the modified Cholesky decomposition. Different implementations are proposed in order to face typical scenarios in operational data assimilation. Numerical results are presented to illustrate how these estimates of background error correlations can increase the accuracy of EnKF analyses.
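The shrinkage idea can be sketched with a fixed intensity as follows (the Rao-Blackwellized Ledoit-Wolf estimator referred to above chooses the intensity from the ensemble itself; the constant `alpha` here is an illustrative assumption):

```python
import numpy as np

def shrinkage_cov(X, alpha):
    """Shrink the (rank-deficient) sample covariance toward a scaled
    identity target, yielding a well-conditioned positive definite
    estimate even when members are few and the dimension is large."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / (n - 1)          # sample covariance, rank <= n-1
    mu = np.trace(S) / p             # scale of the identity target
    return (1.0 - alpha) * S + alpha * mu * np.eye(p)

rng = np.random.default_rng(4)
X = rng.standard_normal((15, 40))    # 15 ensemble members, 40 state variables
C = shrinkage_cov(X, alpha=0.5)
```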

Adrian Sandu
Virginia Polytechnic Institute and State University
[email protected]

Elias Nino
Virginia Tech
Computational Science Laboratory
[email protected]

Xinwei Deng
Virginia Tech
[email protected]

MS24

The Intrinsic Dimension of Importance Sampling

We study importance sampling and particle filters in high dimensions, and link the collapse of importance sampling to results about absolute continuity of measures in Hilbert spaces and the notion of effective dimension.
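The collapse phenomenon is easy to reproduce numerically: the effective sample size of normalized importance weights degrades rapidly with dimension (a toy Gaussian example; the shift and sample sizes are illustrative):

```python
import numpy as np

def effective_sample_size(log_w):
    """ESS = 1 / sum(w_i^2) for normalized weights: n for perfectly
    balanced weights, near 1 when a single weight dominates."""
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w**2)

rng = np.random.default_rng(5)
n, ess = 5000, {}
for d in (1, 20, 400):
    x = rng.standard_normal((n, d))          # proposal N(0, I_d)
    log_w = (0.5 * x - 0.125).sum(axis=1)    # log weights for target N(0.5, I_d)
    ess[d] = effective_sample_size(log_w)    # shrinks sharply as d grows
```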

Daniel Sanz-Alonso
Mathematics and Statistics
University of Warwick
[email protected]

Sergios Agapiou
University of Cyprus
[email protected]

Omiros Papaspiliopoulos
Department of Economics
Universitat Pompeu Fabra
[email protected]

Andrew Stuart
Mathematics Institute, University of Warwick
[email protected]

MS24

A Limited Memory Multiple Shooting Approach for Weakly Constrained Data Assimilation

We present a variational limited memory method for state estimation of hidden Markov models. Our method is based on a multiple shooting approach that improves stability and a recursion to enforce optimality. The matching of states at checkpoints is imposed as constraints of the optimization problem, which is solved with an augmented Lagrangian method. We prove that for nonlinear systems within a certain regime, the condition number of the augmented Lagrangian function is bounded above with respect to the number of shooting intervals. Hence the method is stable for an increasing time horizon. We demonstrate numerically the improved stability with Burgers' equation.

Wanting Xu
University of [email protected]

Mihai Anitescu
Argonne National Laboratory
Mathematics and Computer Science Division
[email protected]

MS25

Prediction Uncertainty for Solute Transport in Fractured Rock: From Theory to Experiments

Predictive modelling of solute transport in consolidated materials (rocks) is relevant in various engineering applications, from water supply to ecological risk assessment, from construction to waste disposal. Although basic processes are for the most part well understood, heterogeneity over the entire range of scales still poses a major challenge for predictive modelling. We present the theoretical basis for transport modelling and illustrate basic sensitivities to controlling mechanisms. Prediction uncertainty is computed for generic cases and then verified with experimental field data for three different configurations which combine inverse and forward modelling. The experiments selected are from the nuclear waste disposal programs in Sweden carried out as part of crystalline rock site investigations. It is shown that tracer transport can be predicted with reasonable confidence provided that the underlying physical mechanisms are well understood and suitably parametrized. We also present various strategies and open issues for improving confidence in predictive modelling of solute transport in fractured rock.

Vladimir Cvetkovic
Royal Institute of Technology, Stockholm, Sweden
[email protected]

MS25

Preliminary Analysis of Sampling Methods with Geological Prior Models for Solving Inverse Problems

In hydrogeology, the characterization of reservoirs by physical parameters (e.g. hydraulic conductivity, porosity, etc.) is required to understand and predict groundwater behavior. Inverse problems consist in identifying the parameter fields that allow field observations of state variables to be reproduced. In general, this constitutes an ill-posed problem. Whereas optimization techniques (e.g. maximum likelihood methods) show very good performance when assuming that the parameter fields can be modeled using Gaussian processes, sampling methods are needed for a reasonable uncertainty quantification in the posterior space if a more general prior model is used. In this work, we consider prior models that are generated with Multiple Point Statistics (MPS). MPS is a non-parametric approach allowing one to generate realistic geological fields that reproduce spatial features present in a training data set and honour conditioning data. To sample the posterior space, MPS simulations are integrated in an iterative approach that consists in selecting suitable hard conditioning data at specific locations. We show that this approach is promising and accelerates the rate of convergence.

Christoph Jaeggli, Julien Straubhaar, Philippe Renard
Universite de Neuchatel
[email protected], [email protected], [email protected]

MS25

Data Dimension Reduction for Seismic Inversion

Abstract not available.

Gowri Srinivasan
T-5, Theoretical Division, Los Alamos National Laboratory
[email protected]

MS25

Determining the Mechanisms That Result in Hydraulic Fracturing Production Decline Using Inverse Methods and Uncertainty Quantification

Abstract not available.

Hari Viswanathan
Los Alamos National Laboratory
Earth and Environmental Sciences Division
[email protected]

MS26

Impact of Spectral Sampling Techniques on Surrogate Modeling

Many science and engineering applications explore high-dimensional parameter spaces by creating initial random samplings to build surrogate models. However, it is unclear how the sampling properties of this sample impact regression performance. We generalize spectral sampling techniques to high dimensions, introduce a novel sample synthesis algorithm, and present systematic comparisons with classical techniques. Preliminary results show that approaches such as Latin hypercube designs produce significantly inferior samplings, i.e., require more points to produce equivalent regression errors.
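For reference, the classical Latin hypercube construction being compared against can be sketched as follows (a standard textbook construction, not the authors' spectral method):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Classical LHS: in every dimension, each of the n equal-width
    strata of [0, 1] receives exactly one point; strata are paired
    across dimensions by independent random permutations."""
    u = rng.uniform(size=(n, d))                       # jitter within strata
    cells = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (cells + u) / n

rng = np.random.default_rng(6)
X = latin_hypercube(100, 5, rng)   # 100 points in a 5-dimensional unit cube
```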

Bhavya Kailkhura
Syracuse University, NY, USA
[email protected]

Jayaraman J. Thiagarajan
Lawrence Livermore National Laboratory
[email protected]

Peer-Timo Bremer
Lawrence Livermore National Laboratory
[email protected]

Pramod Varshney
Syracuse University
[email protected]

MS26

Experimental Design-based Methods for Simulating Probability Distributions

In this talk, I will explain an experimental design-based method for simulating complex probability distributions. The method requires fewer evaluations of the probability distribution and therefore has an advantage over the usual Monte Carlo or Markov chain Monte Carlo methods when the distribution is expensive to evaluate. I will explain the application of the proposed method in Bayesian computational problems, optimization, and uncertainty quantification.

Roshan Vengazhiyil
Georgia Institute of Technology
[email protected]

MS26

Kennedy-O'Hagan Model with Calibration Parameters: Parameter Identifiability, Estimation Consistency and Efficiency

Abstract not available.

C. F. Jeff Wu
Georgia Institute of Technology
[email protected]

Rui Tuo
Oak Ridge National Laboratory
[email protected]

MS26

An Empirical Interpolation Method for a Class of Quasilinear PDEs with Random Input Data

We aim at approximating a class of quasi-linear parameterized PDEs by solving the corresponding SDEs with reduced-basis methods. The empirical interpolation method is utilized in our numerical schemes for SDEs to obtain the offline-online cost decomposition. The main feature of the SDE approach is that we do not need to solve linear or nonlinear systems when using implicit time-stepping schemes, leading to a significant cost reduction by avoiding approximating the Jacobian of the nonlinear operator.

Guannan Zhang
Oak Ridge National Laboratory
[email protected]

Weidong Zhao
Shandong University
[email protected]

MS27

Error Bounds for Regularized Least-squares Regression on Finite-dimensional Function Spaces

We present a framework to deduce bounds on the error of least-squares regression methods with smoothness constraints on finite-dimensional spaces. To this end, we rely on certain Jackson- and Bernstein-type inequalities in these spaces. We point out similarities and differences to the results in the infinite-dimensional setting, e.g. [Cucker & Zhou 2007], and results in the finite-dimensional setting without additional smoothness constraints, e.g. [Chkifa et al. 2015]. Finally, we apply our results to a sparse grid-based regression algorithm.

Bastian Bohn
Universitat Bonn
[email protected]

Michael Griebel
Institut fur Numerische Simulation, Universitat Bonn and
Fraunhofer-Institut fur Algorithmen und Wissenschaftliches Rechnen (SCAI)
[email protected]

MS27

Sparse Polynomial Chaos Expansions Via Weighted L1-Minimization

This talk is concerned with the solution of PDEs with random inputs where the solution-dependent quantity of interest (QOI) admits a sparse polynomial chaos (PC) expansion. As an extension of the standard l1-minimization, a widely used method for sparse approximation, we consider a weighted l1-minimization that relies on a priori knowledge of the PC coefficients to improve the solution recovery. We present theoretical and empirical results to demonstrate different aspects of this sparse approximation technique.

Alireza Doostan
Department of Aerospace Engineering Sciences
University of Colorado, Boulder
[email protected]

Ji Peng
CU Boulder
[email protected]

Jerrad Hampton
University of Colorado, Boulder
[email protected]

MS27

Estimate of the Lebesgue Constant of Weighted Leja Sequence on Unbounded Domain

The Leja points are a sequence of nested interpolation points in one dimension, which show promise as a foundation for high-dimensional approximation methods. For the problem of weighted interpolation on the real line, we analyze the Lebesgue constant for a sequence of weighted Leja points. As in the unweighted case of interpolation on a bounded domain, we use results from potential theory to show that the Lebesgue constant for the Leja points grows subexponentially.
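The greedy construction is simple to state; a sketch of the unweighted version on [-1, 1] follows (the weighted, unbounded-domain case analyzed in the talk replaces the distance product by a weighted one):

```python
import numpy as np

def leja_points(n, grid=None):
    """Greedy (unweighted) Leja sequence on [-1, 1]: each new point
    maximizes the product of distances to the points chosen so far,
    here approximated over a fine candidate grid."""
    grid = np.linspace(-1.0, 1.0, 10001) if grid is None else grid
    pts = [1.0]                                    # conventional first point
    for _ in range(n - 1):
        obj = np.prod(np.abs(grid[:, None] - np.array(pts)), axis=1)
        pts.append(grid[np.argmax(obj)])
    return np.array(pts)

pts = leja_points(5)    # begins 1, -1, 0, then +/- 1/sqrt(3)
```

Nestedness is automatic: the first k points of the sequence are themselves a valid Leja set, which is what makes the points attractive for sparse-grid-style constructions.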

Peter Jantsch
University of [email protected]

MS27

Quasi-Optimal Point Selection for Radial Basis Function Interpolation and Its Application to UQ

Radial basis functions (RBF) provide powerful meshfree methods for multivariate interpolation of scattered data. The approximation quality and stability depend on the distribution of the center set, yet the optimal placement of centers in the RBF method remains unclear. In this talk, we present an efficient data-independent algorithm to select the distribution of centers. We then apply the method to parametric uncertainty quantification. Various numerical tests are provided to examine the performance of the methods.
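For reference, a basic Gaussian RBF interpolant (not the authors' center-selection algorithm, which is the subject of the talk) can be sketched as:

```python
import numpy as np

def rbf_interpolate(centers, values, eps=1.0):
    """Gaussian RBF interpolant s(x) = sum_j c_j exp(-(eps*(x - x_j))^2).
    Returns a callable; shown in 1-D for brevity, though RBF methods
    extend directly to scattered multivariate data."""
    centers = np.asarray(centers, dtype=float)
    # kernel (collocation) matrix on the center set
    K = np.exp(-(eps * (centers[:, None] - centers[None, :])) ** 2)
    coeffs = np.linalg.solve(K, np.asarray(values, dtype=float))

    def s(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        return np.exp(-(eps * (x[:, None] - centers[None, :])) ** 2) @ coeffs

    return s

centers = np.linspace(-1.0, 1.0, 9)
f = lambda x: np.sin(np.pi * x)
s = rbf_interpolate(centers, f(centers), eps=2.0)
```

The conditioning of the kernel matrix K depends strongly on where the centers sit, which is exactly why the placement question the abstract raises matters in practice.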

Liang Yan
Department of Mathematics, Southeast University, Nanjing, China

[email protected]

Akil Narayan
University of [email protected]

Tao Zhou
Institute of Computational Mathematics, AMSS
Chinese Academy of [email protected]

MS28

Weakly Intrusive Low-rank Methods for Model Order Reduction

A strategy is proposed for computing a low-rank approximation of a nonlinear parameter-dependent equation. For an iteration of a Newton algorithm, the residual and the preconditioner are approximated with weakly intrusive variants of the empirical interpolation method. The increment is then computed with a greedy rank-one algorithm, and the iterate is compressed in low-rank format. The efficiency of the proposed algorithm is illustrated with numerical examples.

Loic Giraldi
KAUST, Kingdom of Saudi [email protected]

Anthony Nouy
Ecole Centrale Nantes, [email protected]

MS28

Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Problem

We consider Bayesian inference for ice sheet inverse problems, where exploration of the posterior suffers from the twin difficulties of high dimensionality of the uncertain parameters and computationally expensive forward models. We present a data-informed approach that jointly identifies intrinsic dimensionality in both parameter space and state space by exploiting the interplay between noisy observations and the physical model. We show that, using only a limited number of model simulations, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.

Noemi Petra
University of California, [email protected]

Tiangang Cui
Massachusetts Institute of [email protected]

Omar Ghattas
The University of Texas at [email protected]

Youssef M. Marzouk
Massachusetts Institute of [email protected]

Benjamin Peherstorfer
ACDL, Department of Aeronautics & Astronautics


Massachusetts Institute of [email protected]

Karen E. Willcox
Massachusetts Institute of [email protected]

MS28

ROM for UQ Control Problems

We focus on reduced order modelling adaptive techniques to deal with UQ problems in the optimal control framework. A special focus is on multi-level sampling for the greedy algorithm, combined with sparse-grid sampling. Approximation stability is discussed as well. Examples are taken from viscous flows.

Gianluigi Rozza
SISSA, International School for Advanced Studies
Trieste, [email protected]

Peng Chen
UT [email protected]

MS28

Projection-Based Model Order Reduction for the Estimation of Vector-Valued Variables of Interest

We present a projection-based model order reduction method for the estimation of vector-valued (or functional-valued) variables of interest associated with parameter-dependent equations. We first present an extension of the standard primal-dual approach. Then, we propose a new projection method based on a saddle-point problem which involves three reduced spaces: the approximation and test spaces associated with the primal variable, and the approximation space associated with the dual variable. In the spirit of the Reduced Basis method, we propose greedy algorithms for the construction of these reduced spaces.

Olivier Zahm, Marie Billaud-Friess
LUNAM Universite, Ecole Centrale Nantes, CNRS, [email protected], [email protected]

Anthony Nouy
Ecole Centrale Nantes, [email protected]

MS29

Explicit Cost Bounds of Stochastic Galerkin Approximations for Parameterized PDEs with Random Coefficients

We will present explicit bounds on the overall computational complexity of the stochastic Galerkin finite element method (SGFEM) for approximating the solution of parameterized elliptic partial differential equations with both affine and non-affine random coefficients. These bounds account for the sparsity of the SGFEM system that results from an orthogonal expansion of the random coefficient. We also present computational evidence complementing our theoretical estimates and a comparison with the stochastic collocation finite element method.

Nick Dexter
University of Tennessee

[email protected]

Clayton G. Webster, Guannan Zhang
Oak Ridge National [email protected], [email protected]

MS29

Balanced Iterative Solvers for Linear Systems Arising from Finite Element Approximation of PDEs

This talk discusses the design of efficient solution algorithms for symmetric and nonsymmetric linear systems associated with FEM approximation of PDEs. The novel feature of our preconditioned MINRES and GMRES solvers is the incorporation of error control in the natural norm (associated with the specific approximation space) in combination with a reliable and efficient a posteriori estimator for the PDE approximation error. Balancing the algebraic and the approximation error leads to a robust and balanced inbuilt stopping criterion.

Pranjal Pranjal
School of Mathematics
University of [email protected]

MS29

Hydrodynamic Stability and UQ

This talk highlights some recent developments in the design of robust solution methods for the Navier–Stokes equations modelling incompressible fluid flow. Our focus is on uncertainty quantification. We discuss stochastic collocation approximation of critical eigenvalues of the linearised operator associated with the transition from steady flow in a channel to vortex shedding behind an obstacle. Our computational results confirm that classical linear stability analysis is an effective way of assessing the stability of such a flow.

David Silvester
School of Mathematics
University of [email protected]

MS29

Solving Log-Transformed Random Diffusion Problems by Stochastic Galerkin Mixed Finite Element Methods

We study mixed formulations of second-order elliptic problems where the diffusion coefficient is the exponential of a random field. We build on the previous work [E. Ullmann, H. C. Elman, O. G. Ernst, SIAM J. Sci. Comput., 34] and reformulate the PDE as a first-order system in which the logarithm of the diffusion coefficient appears on the left-hand side. This gives a sparse, but non-symmetric Galerkin matrix. We discuss block triangular and block diagonal preconditioners for the Galerkin matrix and analyze a practical approximation to its Schur complement.

Elisabeth Ullmann
TU [email protected]

Catherine Powell
School of Mathematics


University of [email protected]

MS30

Nonparametric Calibration of Levy Processes via Kolmogorov's Forward and Backward Equation

We present an optimal control approach to the problem of model calibration for Levy processes based on a nonparametric estimation procedure. The calibration problem is of considerable interest in mathematical finance and beyond. Calibration of Levy processes is particularly challenging as the jump distribution is given by an arbitrary Levy measure, which forms an infinite-dimensional space. In this work, we follow an approach which is related to the maximum likelihood theory of sieves. The sampling of the Levy process is modelled as independent observations of the stochastic process at some terminal time T. We use a generic spline discretization of the Levy jump measure and select an adequate size of the spline basis using the Akaike Information Criterion (AIC). The numerical solution of the Levy calibration problem requires efficient optimization of the log-likelihood functional in high-dimensional parameter spaces. We provide this by the optimal control of Kolmogorov's forward equation for the probability density function (Fokker-Planck equation). The first-order optimality conditions are derived based on the Lagrange multiplier technique in a functional space. The resulting Partial Integro-Differential Equations (PIDEs) are discretized, numerically solved and controlled using a scheme composed of Chang-Cooper, BDF2 and direct quadrature methods. The talk is based on joint work with Mario Annunziato.

Hanno Gottschalk
University of Wuppertal
FBC - AG [email protected]

MS30

Shape Optimization for Quadratic Functionals and States with Random Right-Hand Sides

In this talk, we investigate a particular class of shape optimization problems under uncertainties on the input parameters. More precisely, we are interested in the minimization of the expectation of a quadratic objective in a situation where the state function depends linearly on a random input parameter. This framework covers important objectives such as tracking-type functionals for elliptic second order partial differential equations and the compliance in linear elasticity. We show that the robust objective and its gradient are completely and explicitly determined by low-order moments of the random input. We then derive a cheap, deterministic algorithm to minimize this objective and present model cases in structural optimization.
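A worked identity behind the "low-order moments" claim, written in our own notation rather than the authors' (A the deterministic state operator, B the symmetric operator of the quadratic objective, f the random load): since the state u = A^{-1} f depends linearly on f, setting \bar f = E[f] and C_f = Cov(f) gives

```latex
\mathbb{E}\bigl[\langle B u, u\rangle\bigr]
  \;=\; \bigl\langle B A^{-1}\bar f,\; A^{-1}\bar f \bigr\rangle
  \;+\; \operatorname{tr}\!\bigl(A^{-\top} B A^{-1} C_f\bigr),
```

so the robust objective is fully determined by the mean and covariance of the random input, and no sampling over the randomness is needed during optimization.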

Charles Dapogny
Universite Joseph Fourier
Laboratoire Jean [email protected]

Marc Dambrine
Department of Mathematics
Universite de Pau et des Pays de [email protected]

Helmut Harbrecht
Universitaet Basel
Departement of Mathematics and Computer Science

[email protected]

MS30

Shape Optimization with Uncertain Data

Since shapes are typically optimized only once and then used frequently and in many unforeseen circumstances, uncertainty quantification is of high importance in the field of shape optimization. This talk discusses numerical aspects, mainly in the practical application of aerodynamic shape optimization under uncertain loads and uncertainties with respect to the shape itself.

Volker H. Schulz
University of Trier
Department of [email protected]

Claudia Schillings
University of Warwick, [email protected]

MS31

Uncertainty Quantification for Multiscale Systems Using Deep Gaussian Processes

We present a novel approach for uncertainty quantification in multiscale models using deep Gaussian processes. Deep GPs are suitable for deep learning in high-dimensional complex systems consisting of several hidden layers connected with non-linear mappings. The dimensionality reduction technique required to develop this model is based on the Gaussian process latent variable model, which provides a flexible technique for non-linear dimensionality reduction. The method is applied to a number of stochastic multiscale PDE problems.

Alireza Daneshkhah
University of [email protected]

Nicholas Zabaras
The University of Warwick
Warwick Centre for Predictive Modelling (WCPM)
[email protected]

MS31

Model Error Quantification for High-Dimensional Bayesian Inverse Problems

While a parametric fit can almost always be achieved with various model calibration tools, if the underlying model is incorrect, it will lead to erroneous predictions. The traditional approach of using an additional regression model (e.g. GPs) to account for model errors is practically infeasible in high dimensions, can violate physical constraints and does not provide the requisite insight. In this work, conservation & constitutive laws are unfolded and probabilistic estimates for model discrepancies are computed.

Isabell Franck
Technische Universitat Munchen, [email protected]

Phaedon S. Koutsourelakis
Technical University Munich


[email protected]

MS31

Sequential Monte Carlo in Truly High-dimensional Models

It is widely believed that for sequential Monte Carlo methods to work in high-dimensional models, the effective dimensionality of the system needs to be low. Against this belief, we will discuss a class of local algorithms that can exploit decay of correlations in truly high-dimensional models and yield error bounds that are uniform both in time and space.

Patrick Rebeschini
Yale [email protected]

Ramon Van Handel
Princeton [email protected]

MS31

On the Low Dimensional Structure of Bayesian Inference via Measure Transport

A recent approach to non-Gaussian Bayesian inference seeks a deterministic transport map that pushes forward a reference density to the posterior. In this talk, we address the computation of transport maps in high dimensions. In particular, we show how the Markov structure of the posterior density induces low-dimensional parameterizations of the transport map. Topics include the sparsity of inverse triangular transports, the ordering of the Knothe-Rosenblatt rearrangement, decompositions of transports, and other related ideas in dimensionality reduction for Bayesian inversion.

Alessio Spantini, Youssef M. Marzouk
Massachusetts Institute of [email protected], [email protected]

MS32

Quantifying the Effect of Parameter Uncertainties on the Simulation of Drought in the Community Atmosphere Model

We analyze a perturbed physics ensemble performed with the Community Atmosphere Model to gain insights into causes of the 1998-2002 North American drought. The ensemble was constructed by varying the values of input parameters related to physical parameterizations over allowable ranges of uncertainty. We perform a sensitivity analysis to identify the key parameters influencing drought-related metrics. We also identify parameter values that yield a better fit to data, resulting in a better simulation of drought.

Gemma J. Anderson, Don Lucas, Celine Bonfils, Ben Santer
Lawrence Livermore National [email protected], [email protected], [email protected], [email protected]

MS32

Estimation of the Coefficients in a Coulomb Friction Boundary Condition for Ice Sheet Sliding on Bedrocks

When modeling the sliding of ice sheets over a solid bedrock, one of the most difficult aspects is the treatment of the boundary condition at the ice-bedrock interface. In this presentation we consider a regularized Coulomb sliding condition, where the friction coefficient depends on the ice velocity. Our focus will be on the estimation of the parameters in the sliding condition, with the goal of finding a general and portable expression for the friction coefficient.

Luca Bertagna
Florida State [email protected]

Max Gunzburger
Florida State University
School for Computational [email protected]

Mauro Perego
CSRI Sandia National [email protected]

MS32

Emulation of Future Climate Uncertainties Arising from Uncertain Parametrizations

The Whole Atmosphere Community Climate Model (WACCM) is a comprehensive numerical model. Many parameterizations of physical processes have to be set, resulting in potential concerns about error growth. To assess future climate uncertainties, we need to develop statistical models that can reproduce the outputs, and retrieve the uncertainties about the parameters and outputs in past and present climates. We perform uncertainty analysis and emulation by calibrating the gravity wave parameters and accelerating the algorithm.

Kai-Lan [email protected]

Serge Guillas
University College [email protected]

Hanli [email protected]

MS32

Uncertainty Quantification for NASA's Orbiting Carbon Observatory 2 (OCO-2) Mission

OCO-2 measurements of atmospheric carbon dioxide are inferred from observed radiance spectra. The inference uses Bayes' Theorem: a "retrieval" algorithm computes the posterior mean and covariance matrix of atmospheric state vectors given the radiances. However, there are additional sources of uncertainty, including computational artifacts and uncertain algorithm parameters. Here we describe our approach to quantifying these uncertainties in aggregate in order to provide more realistic characterizations of atmospheric CO2 globally.

Jonathan Hobbs, Amy Braverman
Jet Propulsion Laboratory


California Institute of [email protected], [email protected]

MS33

A Multi-Order Monte Carlo Discontinuous Galerkin Method for Wave Equations with Uncertainty

We show how recently developed arbitrary order accurate (in space and time) discontinuous Galerkin methods for wave equations in second order form can be used to design multi-order Monte Carlo methods. We compare the complexity to the multilevel Monte Carlo approach as well as to methods that are fixed order in time and arbitrary order in space. We also present experiments illustrating the properties of the method.

Daniel Appelo
University of New [email protected]

Mohammed Motamed
Department of Mathematics and Statistics
The University of New [email protected]

MS33

Numerical Methods for Stochastic Conservation Laws

Stochastic conservation laws (SCL) provide a connection between conservation laws and Hamilton–Jacobi equations and are used in models of mean field games. An impressive collection of theoretical results has been developed for SCL in recent years. In this talk we present numerical methods for strong and weak solutions of SCL with Gaussian randomness in rough fluxes and forcing terms.

Hakon Hoel
KTH, Stockholm
School of Computer Science and [email protected]

Kenneth Karlsen
Centre of Mathematics for Applications
Department of Mathematics, University of [email protected]

Nils Henrik Risebro
Department of Mathematics
University of [email protected]

Erlend Storrøsten
University of Oslo
Department of [email protected]

MS33

Multilevel Monte Carlo Approximation of Statistical Solutions of Incompressible Flow

We present a finite-difference (multilevel) Monte Carlo algorithm to efficiently compute statistical solutions of the two-dimensional Navier-Stokes equations, with periodic boundary conditions and for arbitrarily high Reynolds number. We propose a reformulation of statistical solutions in the vorticity-stream function form. The vorticity-stream function formulation is discretized with a finite difference scheme, and we obtain a convergence rate error estimate for this approximation. We also prove convergence and complexity estimates for the (multilevel) Monte Carlo finite-difference algorithm to compute statistical solutions. Numerical experiments illustrating the validity of our estimates are presented. They show that the multilevel Monte Carlo algorithm significantly accelerates the computation of statistical solutions, even for very high Reynolds numbers. Supported by ERC StG NN 306279 SPARCCLE (to S. Mishra), and by the Swiss National Supercomputing Centre CSCS Grant s590.
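The multilevel Monte Carlo mechanism referred to above can be sketched on a toy SDE; the geometric Brownian motion model, sample sizes and coupling below are illustrative assumptions, not the authors' Navier-Stokes setting:

```python
import numpy as np

def mlmc_gbm_mean(rng, L=3, N=(4000, 1000, 400, 200),
                  mu=0.05, sig=0.2, T=1.0, S0=1.0):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian
    motion dS = mu*S dt + sig*S dW, discretized by Euler-Maruyama with
    step h_l = T * 2^{-l}.  The level-l correction couples fine and
    coarse paths through the same Brownian increments (the standard
    MLMC variance-reduction trick)."""
    est = 0.0
    for l in range(L + 1):
        n_fine = 2 ** l
        h_f = T / n_fine
        dW = rng.standard_normal((N[l], n_fine)) * np.sqrt(h_f)
        # fine path on level l
        Sf = np.full(N[l], S0)
        for k in range(n_fine):
            Sf = Sf * (1 + mu * h_f + sig * dW[:, k])
        if l == 0:
            est += Sf.mean()                      # base level: plain MC
        else:
            # coarse path driven by summed fine increments
            h_c = 2 * h_f
            Sc = np.full(N[l], S0)
            for k in range(n_fine // 2):
                dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]
                Sc = Sc * (1 + mu * h_c + sig * dWc)
            est += (Sf - Sc).mean()               # telescoping correction
    return est

rng = np.random.default_rng(1)
estimate = mlmc_gbm_mean(rng)   # exact value is exp(mu*T) = exp(0.05)
```

Because the level corrections have small variance, far fewer samples are needed on the expensive fine levels, which is the source of the acceleration the abstract reports.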

Christoph Schwab
ETH [email protected]

Siddhartha Mishra
ETH Zurich
Zurich, [email protected]

Fillippo Leonardi
SAM ETH [email protected]

MS33

A Sparse Stochastic Collocation Technique for High Frequency Wave Propagation with Uncertainty

We consider the wave equation with highly oscillatory initial data, where there is uncertainty in the wave speed, initial phase and/or initial amplitude. To estimate quantities of interest related to the solution and their statistics, we combine a high-frequency method based on Gaussian beams with sparse stochastic collocation. Although the wave solution, uε, is highly oscillatory in both physical and stochastic spaces, we provide theoretical arguments and numerical evidence that quantities of interest based on local averages of |uε|2 are smooth, with derivatives in the stochastic space uniformly bounded in ε, where ε denotes the short wavelength. This observable-related regularity makes the sparse stochastic collocation approach more efficient than Monte Carlo methods. We present numerical tests that demonstrate this advantage.

Gabriela Malenova
KTH, Stockholm, [email protected]

Mohammad Motamed
University of New [email protected]

Olof Runborg
KTH, Stockholm, [email protected]

Raul F. Tempone
Mathematics, Computational Sciences & Engineering
King Abdullah University of Science and [email protected]

MS34

A Bayesian Level-Set Approach for Geometric Inverse Problems

We introduce a Bayesian level-set approach for PDE-constrained inverse problems where the unknown is a geometric feature of the underlying PDE forward model. By means of state-of-the-art MCMC methods, we describe numerical experiments that explore the posterior distribution that arises from (i) inverse source detection, (ii) the identification of geologic structures in groundwater flow, and (iii) the estimation of electric conductivity in the complete electrode model for EIT.

Marco Iglesias
University of [email protected]

MS34

Reversible Proposal MCMC in High Dimension

The Metropolis-Hastings (MH) algorithm with a reversible proposal transition kernel includes many useful MCMC algorithms such as random-walk Metropolis, independence-type MH and pCN algorithms. These MCMC algorithms are easy to design, free from gradient calculation, and easy to incorporate statistical information. In this talk, we analyse the high-dimensional asymptotic properties of those methods, especially for heavy-tailed target distributions. Also, we discuss the application to statistical inference for high-dimensional stochastic diffusion models.
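A minimal sketch of the pCN algorithm mentioned above, on a scalar toy posterior with a unit Gaussian prior (the target, tuning, and all names are illustrative assumptions):

```python
import numpy as np

def pcn(phi, n_steps, beta=0.5, x0=0.0, rng=None):
    """Preconditioned Crank-Nicolson MCMC for a target
    pi(x) ∝ exp(-phi(x)) * N(x; 0, 1)   (unit Gaussian prior).
    The proposal x' = sqrt(1-beta^2)*x + beta*xi leaves the prior
    invariant, so the accept ratio involves only the likelihood
    potential phi; no gradient of phi is needed."""
    rng = rng or np.random.default_rng()
    x, px = x0, phi(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        xp = np.sqrt(1 - beta**2) * x + beta * rng.standard_normal()
        pxp = phi(xp)
        if np.log(rng.random()) < px - pxp:   # accept with prob min(1, e^{phi(x)-phi(x')})
            x, px = xp, pxp
        chain[i] = x
    return chain

# toy Bayesian inverse problem: observe y = x + N(0,1) noise, with y = 1
y = 1.0
phi = lambda x: 0.5 * (y - x) ** 2            # negative log-likelihood
chain = pcn(phi, 20000, rng=np.random.default_rng(2))
# analytic posterior for this conjugate toy problem: N(0.5, 0.5)
```

The prior-preserving proposal is what gives pCN its dimension-robust acceptance behaviour in function-space settings.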

Kengo Kamatani
Osaka University, JST [email protected]

MS34

Adaptive Random Scan Gibbs Samplers

Is the uniform coordinate choice optimal for the Random Scan Gibbs Sampler? I will present an adaptive Random Scan Gibbs Sampler that, guided by approximating the optimal L2 convergence on a Gaussian target, improves on the uniform selection probabilities, directing computational effort towards difficult coordinates. The improvement will be illustrated on computational examples, and ergodicity of this non-Markovian adaptive MCMC algorithm will be discussed.
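A (non-adaptive) random-scan Gibbs sampler with fixed nonuniform selection probabilities can be sketched as follows; the bivariate Gaussian target and the chosen probabilities are illustrative, and the adaptation rule of the talk is not reproduced here:

```python
import numpy as np

def random_scan_gibbs(n_steps, probs=(0.5, 0.5), rho=0.9, rng=None):
    """Random-scan Gibbs sampler for a bivariate Gaussian with
    correlation rho: at each step pick coordinate i with probability
    probs[i] and resample it from its full conditional
    x_i | x_j  ~  N(rho * x_j, 1 - rho^2).
    Nonuniform probs direct computational effort toward a coordinate."""
    rng = rng or np.random.default_rng()
    x = np.zeros(2)
    out = np.empty((n_steps, 2))
    sd = np.sqrt(1 - rho**2)
    for t in range(n_steps):
        i = rng.choice(2, p=probs)
        x[i] = rho * x[1 - i] + sd * rng.standard_normal()
        out[t] = x
    return out

samples = random_scan_gibbs(50000, probs=(0.7, 0.3),
                            rng=np.random.default_rng(3))
```

Any fixed probability vector with all entries positive leaves the target invariant; the adaptive scheme of the talk instead tunes these probabilities on the fly, which is why its ergodicity requires a separate argument.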

Krys [email protected]

Gareth O. Roberts
Warwick [email protected]

Cyril [email protected]

MS34

Title Not Available

Abstract not available.

Lawrence Murray
Oxford

[email protected]

MS35

Artificial Boundary Conditions and Domain Truncation in Electrical Impedance Tomography

Electrical impedance tomography (EIT) is an imaging modality in which an unknown conductivity distribution is reconstructed from surface voltage measurements. The solution is based on a mathematical model (the complete electrode model) which comprises the diffusion equation and related boundary conditions. In principle, the accurate solution of the problem would require that the boundary conditions, such as currents through the boundaries of the computational domain, are known. In practice this would mean that all currents through the boundary should happen only on the electrodes. However, in some applications, such as in geological imaging, this would require the computational domain to be very large and the computational problem can be exceedingly heavy. In this work we consider an approach which allows truncation of the computational domain without significant loss in the accuracy of the solution due to unknown boundary conditions (boundary data). We formulate the truncated problem using the so-called Dirichlet-to-Neumann (DtN) map, which connects the voltages on the artificial boundary to the current flows through the boundary. In the inverse problem, the DtN map is treated as an unknown random quantity which is estimated from measurements along with the conductivity distribution. This is possible due to a low-dimensional parametrization of the DtN map using a PCA basis. This is joint work with J.P. Kaipio, P. Hadwin, D. Calvetti, and E. Somersalo.

Janne M. Huttunen
Department of Applied Physics
University of Eastern [email protected]

Jari Kaipio
Department of Mathematics
University of [email protected]

Erkki Somersalo
Case Western Reserve [email protected]

Daniela Calvetti
Case Western Reserve Univ
Department of Mathematics, Applied Mathematics [email protected]

Paul Hadwin
Department of Mathematics
The University of [email protected]

MS35

Iterative Ensemble Smoothers for Calibration and Uncertainty Quantification of Large Petroleum Reservoir Models

In this talk, I will describe some characteristics of the petroleum history matching problem that make characterization of uncertainty difficult. In particular, I will comment on the prior distribution on model variables, characteristics of the data, and typical calibration parameters. Iterative ensemble smoothers have been developed for solving these high-dimensional problems, and an application will be shown.

Dean S. Oliver
University of [email protected]

MS35

Polynomial Chaos Based Bayesian Identification Procedures

The need to identify the descriptive parametric elements of some model by comparison with measurement is mostly resolved in a Bayesian framework with the help of computational algorithms based on sampling approaches. The goal of this work is to present a new kind of Bayesian estimate through a sampling-free method based on spectral representations. In this way, low-rank approximation procedures can be easily used to reduce the computational time of the posterior.

Bojana V. Rosic
Institute of Scientific Computing
TU [email protected]

MS35

Shape Uncertainty Quantification for Scattering Transmission Problems

The aim is to quantify how uncertain shape variations affect the optical response of a nano-device to some electromagnetic excitation. The 2D Helmholtz transmission problem for a particle in air is considered. We model the shape using a high-dimensional parameter, and adopt a domain mapping approach to obtain a variational formulation on a parameter-independent configuration. We present results obtained with high-order quadrature methods, and discuss nonsmooth parameter dependence cases, where only low-order methods (MLMC) converge.

Laura Scarabosio
ETH [email protected]

Ralf Hiptmair
Seminar for Applied Mathematics, ETH [email protected]

Claudia Schillings
University of Warwick, [email protected]

Christoph Schwab
ETH [email protected]

MS36

Sensitivity Analysis of Pulse Wave Propagation in Human Arterial Network

The effect of uncertainties on pulse wave propagation in the human one-dimensional arterial network is investigated. The uncertainties are either issued from the cardiac output or from local parameters in the network, like the arterial stiffness of given arteries (the aorta for instance). The study is based on the use of a stochastic sparse pseudospectral method and expresses the sensitivity of the solution with relevant factors such as pulse pressure at various locations.

Laurent Dumas
Universite de [email protected]

Antoine Brault

Didier Lucor
Sorbonne Univ, UPMC Univ Paris 06
CNRS, UMR 7190, Inst Jean Le Rond d'[email protected]

MS36

Posterior Adaptation of Polynomial Surrogate for Bayesian Inference in Arterial Flow Networks

Bayesian inference is a robust but costly framework for solving hemodynamic inverse problems, i.e., inferring mechanical properties and parameters of an arterial system from simulated and measured blood pressure and flow. We propose to adaptively construct a polynomial surrogate that focuses accuracy over the targeted parametric region of interest. The method is tested on the inference of vessel properties in reduced models of pulse waves in arterial networks, with measurements from actual patients.

Didier Lucor
Sorbonne Univ, UPMC Univ Paris 06
CNRS, UMR 7190, Inst Jean Le Rond d'[email protected]

Lionel Mathelin
LIMSI - CNRS, [email protected]

Olivier P. Le [email protected]

MS36

Assimilation and Propagation of Clinical Data Uncertainty in Cardiovascular Modeling

Availability, consistency and variability of clinical measurements affect the predictive performance of cardiovascular models, suggesting the need to include such uncertainty in data assimilation practices. Starting from an analysis of the structural and practical, local and global parameter identifiability in 0D circulation models with applications in pediatric and adult disease monitoring, we combine multi-level Bayesian estimation, adaptive MCMC and multi-resolution uncertainty propagation to learn model parameters and quantify confidence in numerical predictions.

Daniele E. Schiavazzi
Institute for Computational and Mathematical Engineering
Stanford University, [email protected]

Alison Marsden


School of Medicine
Stanford University, [email protected]

MS36

Integrating Clinical Data Uncertainty in Cardiac Model Personalisation

For patient-specific models of the heart to translate into clinical decisions, a measure of confidence in their predictions is needed. This requires moving from deterministic approaches to probabilistic ones, with challenges both methodologically, as deriving stochastic models of such complex phenomena is difficult, and computationally, as such approaches are much more demanding. I will present methods using uncertainty quantification and machine learning in order to integrate data uncertainty in clinical applications of cardiac models.

Maxime [email protected]

MS37

On Dynamic Nearest-Neighbor Gaussian Process Models for High-Dimensional Spatiotemporal Datasets

With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatial-temporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatial-temporal models often involves expensive matrix computations with complexity increasing in cubic order in the number of spatial locations and temporal points. This renders such models unfeasible for large data sets. In this talk, I will present two approaches for constructing well-defined spatial-temporal stochastic processes that accrue substantial computational savings. Both these processes can be used as "priors" for spatial-temporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that can be exploited as a dimension-reducing prior embedded within a rich and flexible hierarchical modeling framework to deliver exact Bayesian inference. Both these approaches lead to Markov chain Monte Carlo algorithms with floating point operations (flops) that are linear in the number of spatial locations (per iteration). We compare these methods and demonstrate their use in inferring the spatial-temporal distribution of ambient air pollution in continental Europe using spatial-temporal regression models with chemistry transport models.

Abhirup Datta
University of Minnesota, Twin [email protected]

Sudipto Banerjee
Department of [email protected]

Andrew Finley

Departments of Forestry and Geography
Michigan State [email protected]

MS37

Modelling Extremes on River Networks

Max-stable processes are the natural extension of the classical extreme-value distributions to the functional setting, and they are increasingly widely used to estimate probabilities of complex extreme events. In this talk I shall discuss how to broaden them from the usual setting, in which dependence varies according to functions of Euclidean distance, to the situation in which extreme river discharges at two locations on a river network may be dependent, either because the locations are flow-connected or because of common meteorological events. In the former case dependence depends on river distance, and in the second it depends on the hydrological distance between the locations, either of which may be very different from their Euclidean distance. Inference for the model parameters is performed using a multivariate threshold likelihood, which is shown by simulation to work well. The ideas are illustrated with data from the upper Danube basin. The work is joint with Peiman Asadi and Sebastian Engelke.

Anthony Davison
EPFL
Ecole Polytechnique Federale de [email protected]

MS37

Investigating Nested Geographic Structure in Consumer Purchases: A Bayesian Dynamic Multi-Scale Spatiotemporal Modeling Approach

Abstract not available.

Dipak Dey
Department of Statistics
University of Connecticut
[email protected]

MS37

Predictive Model Assessment in Spatio-temporal Models for Infectious Disease Spread

Routine public health surveillance of notifiable infectious diseases gives rise to daily or weekly counts of reported cases stratified by region, age group and possibly gender. Probabilistic forecasts of infectious disease spread are central for outbreak prediction, but need to take into account temporal dependencies inherent to communicable diseases, spatial dynamics through human travel, and social contact patterns between age groups and gender. We describe a multivariate time series model for weekly surveillance counts on noroviral gastroenteritis from the 12 city districts of Berlin in five age groups from 2011 to 2014. Spatial dispersal is captured by a power-law formulation based on the order of adjacency between districts, while social contact data from the POLYMOD study was used to describe disease spread between age groups. Calibration and sharpness of the probabilistic one-step-ahead and long-term forecasts obtained with this model are assessed using proper scoring rules.
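Forecast calibration and sharpness are typically assessed with proper scoring rules; the sketch below (illustrative Poisson forecasts and made-up counts, not the surveillance model from the talk) computes two common ones, the logarithmic score and the Dawid-Sebastiani score, where lower mean scores indicate better probabilistic forecasts:

```python
import math

def log_score(mu, y):
    """Negative log predictive density of a Poisson(mu) forecast at count y
    (lower is better; a strictly proper scoring rule)."""
    return -(y * math.log(mu) - mu - math.lgamma(y + 1))

def dss(mu, y):
    """Dawid-Sebastiani score using the Poisson mean and variance (both mu)."""
    return (y - mu) ** 2 / mu + math.log(mu)

# compare a sharp, calibrated forecaster with a flat one on the same counts
counts = [3, 5, 2, 7, 4]
sharp = [3.2, 4.8, 2.1, 6.5, 4.3]
flat = [4.3] * 5
mean_ls_sharp = sum(log_score(m, y) for m, y in zip(sharp, counts)) / 5
mean_ls_flat = sum(log_score(m, y) for m, y in zip(flat, counts)) / 5
print(mean_ls_sharp, mean_ls_flat)  # the sharper forecasts score lower
```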

Leonhard Held
Epidemiology, Biostatistics and Prevention Institute
[email protected]

Sebastian Meyer
Epidemiology, Biostatistics and Prevention Institute
[email protected]

MS38

Quantifying Uncertainty in the Prediction of Pollutant Dispersion

The transport of pollutants in urban environments is influenced by turbulent wind flows that are governed by the large-scale variability of the atmospheric boundary layer. To improve the predictive capabilities of CFD simulations of pollutant dispersion, the uncertainty related to this variability and to the turbulence model should be quantified. In this talk we will propose a framework to achieve this goal and present results obtained for simulations of the Joint Urban 2003 field experiment.

Clara García-Sánchez, Catherine Gorlé
Columbia University
[email protected], [email protected]

MS38

Physics-Derived Model Form Uncertainty in Turbulent Combustion

The major challenge in turbulent combustion modeling is the sheer number of models that are invoked, each of which involves its own set of assumptions and subsequent model error. In this work, two methods are presented for deriving model errors directly from physics: hierarchical models and peer models. Hierarchical models use a high-fidelity model to estimate the uncertainty in a lower-fidelity model, and peer models use models with differing assumptions to derive an uncertainty estimate.

Michael E. Mueller
Princeton University
[email protected]

Venkat Raman
University of Michigan
[email protected]

MS38

Bayesian Optimal Experimental Design for Estimating Parameters of Turbulence Models

Velocity and Reynolds stress sensor configurations extracting the most useful information for estimating the turbulence model parameters in flow models are obtained by optimizing an expected utility associated with either the information entropy or the Kullback-Leibler divergence. Bayesian asymptotic approximations drastically simplify the computational burden due to the excessive number of flow model simulations. Stochastic optimization algorithms such as CMA-ES are necessary to avoid premature convergence to local optima.

Dimitrios Papadimitriou, Costas Argyris, Costas Papadimitriou
University of Thessaly
Dept. of Mechanical Engineering

[email protected], [email protected], [email protected]

MS38

Preliminary Study of a Stochastic RANS Model Exploiting Local Model Inadequacy Indicators

The lack of a universal RANS model creates the need for quantifying local modeling uncertainty to assess the reliability of a simulation without experimental validation data. In this contribution, we apply state-of-the-art methods for local error detection (Gorle et al., 2011; Ling and Templeton, 2015) based on physical reasoning and machine learning algorithms for the estimation of local failure of a RANS turbulence model. This information is then used to stochastically perturb the model locally.

Martin Schmelzer
TU Delft
[email protected]

Richard P. Dwight
Delft University of Technology
[email protected]

Paola Cinnella
Laboratoire DynFluid, Arts et Metiers ParisTech
and Universita del Salento
[email protected]

MS39

Methods for Electronics Problems with Uncertain Parameters: An Overview

In electronic engineering applications, the variability of parameters has to be included in the design cycle to achieve reliable products. We focus on the modeling of uncertainties by random variables or random processes. An overview is given of the relevant problems as well as methods. The numerical techniques are based on sampling methods, stochastic collocation schemes or a stochastic Galerkin approach. We identify challenges in modeling and simulation to quantify uncertainties for the electronics industry.

Roland Pulch
University of Greifswald
[email protected]

MS39

Stochastic Inverse Problem of Material Parameter Reconstruction in a Passive Semiconductor Structure

We focus on the reconstruction of material parameters under uncertainties in a model of a passive semiconductor device. For the UQ propagation in a 3D model, governed by Maxwell's equations, the Spectral Collocation Method (SCM) is used. The inverse problem deals with the identification of real functions, aiming to iteratively reduce a random-dependent regularized cost functional that consists of the weighted mean and standard deviation values. The gradient directions are computed using the Adjoint Variable Method and Tellegen's theorem combined with the SCM. Simulations involving the reconstruction based on measurements are performed, showing the robustness and efficiency of the proposed algorithm.

Piotr A. Putek
Bergische Universitat Wuppertal
[email protected]

Wim Schoenmaker
MAGWEL NV, Leuven, Belgium
[email protected]

E. Jan W. ter Maten
NXP Semiconductors, Corp IT, Eindhoven
[email protected]

Michael Guenther
Bergische Universitaet Wuppertal
[email protected]

Pascal Reynier
ACCO Semiconductor, Louveciennes, France
[email protected]

Tomas Gotthans
Vysoke uceni technicke v Brne, Czech Republic
[email protected]

MS39

Efficient Evaluation of Bond Wire Fusing Probabilities

Integrated circuits are often connected to their packaging using bond wires. During operation the system can heat up, leading to a fusing of the wires. The probability of fusing (failure) in the presence of manufacturing imperfections is computed using surrogate-based sampling techniques available in the literature. Both numerical results and a convergence analysis for the scheme are provided in the context of a coupled electrothermal simulation of the device.
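The surrogate-based sampling idea can be sketched generically (the response function, fusing threshold, and manufacturing scatter below are invented stand-ins, not the coupled electrothermal model of the talk): a handful of expensive solver runs fit a cheap surrogate, which is then sampled heavily to estimate the failure probability:

```python
import random

# Hypothetical smooth response: peak wire temperature [K] as a function of
# bond wire radius r [um]; a stand-in for an expensive electrothermal solve.
def expensive_model(r):
    return 300.0 + 9000.0 / r**2   # thinner wires heat up more

# Step 1: build a cheap surrogate from a handful of solver runs
nodes = [20.0, 25.0, 30.0, 35.0, 40.0]
vals = [expensive_model(r) for r in nodes]

def surrogate(r):
    # Lagrange interpolation through the solver samples
    total = 0.0
    for i, (ri, vi) in enumerate(zip(nodes, vals)):
        w = 1.0
        for j, rj in enumerate(nodes):
            if j != i:
                w *= (r - rj) / (ri - rj)
        total += w * vi
    return total

# Step 2: Monte Carlo over manufacturing scatter using only the surrogate
random.seed(1)
T_FUSE = 315.0  # assumed fusing threshold
N = 20000
fails = sum(surrogate(random.gauss(30.0, 3.0)) > T_FUSE for _ in range(N))
print("estimated fusing probability:", fails / N)
```

Only five expensive evaluations are spent; the 20000 Monte Carlo samples hit the surrogate alone.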

Thorben Casper
GSCE, Technische Universitat Darmstadt
[email protected]

Ulrich Romer
Institut fuer Theorie Elektromagnetischer Felder, TEMF
Technische Universitaet Darmstadt
[email protected]

Sebastian Schops
Theorie Elektromagnetischer Felder (TEMF) and GSCE
Technische Universitat Darmstadt
[email protected]

MS39

Parametric Model Order Reduction for Efficient Uncertainty Quantification of Nanoelectronic Devices

In robust design of nanoelectronic devices, uncertainty quantification (UQ) is indispensable. However, UQ for large-scale dynamical systems is computationally expensive: e.g., a non-intrusive method requires simulating the large-scale system at various parameter values. Therefore, parametric model order reduction (PMOR), which can efficiently build a small-scale model that is accurate over a specified parameter range, serves as a powerful tool for acceleration. Numerical results for several nanoelectronic devices validate the efficiency of PMOR for UQ.

Yao Yue
Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg
[email protected]

Lihong Feng
Max Planck Institute, Magdeburg, Germany
[email protected]

Wim Schoenmaker
MAGWEL NV, Leuven, Belgium
[email protected]

Peter Meuris
Magwel NV, Belgium
[email protected]

Peter Benner
Max Planck Institute, Magdeburg, Germany
[email protected]

MS40

Inverse Modeling of Geochemical and Mechanical Compaction in Sedimentary Basins Through Polynomial Chaos Expansion

We illustrate a procedure to perform an uncertainty analysis for overpressure development in sedimentary basins. We consider compaction and fluid flow in stratified sedimentary basins subject to mechanical and geochemical processes. We perform (i) model reduction based on a generalized Polynomial Chaos Expansion (gPCE), and (ii) maximum likelihood calibration of model parameters. We discuss the impact of the availability of different types of data, e.g., pressure or porosity distributions, on parameter estimation results.

Ivo Colombo
Politecnico di Milano
[email protected]

Giovanni Porta
D.I.I.A.R., Sezione Idraulica
Politecnico di Milano
[email protected]

Lorenzo Tamellini
EPF-Lausanne, Switzerland
[email protected]

Monica Riva
Politecnico di Milano
[email protected]

Alberto Guadagnini
Politecnico di Milano, Italy
[email protected]

Anna Scotti
MOX, Department of Mathematics, Politecnico di Milano
[email protected]

MS40

Towards Geological Realism in Predicting Uncertainty in Reservoir Inverse Modelling: Geostatistics and Machine Learning

Use of informative priors in inverse reservoir modelling results in more accurate prediction of uncertainty based only on geologically realistic models, which are consistent with the prior knowledge and initial assumptions about the geological interpretation. Proper geological priors in the form of multi-dimensional parameter relations are elicited from natural analogue information using a machine learning approach. Furthermore, use of machine learning in geomodelling can facilitate integration of key reservoir uncertainties, such as multiple geological interpretations and relevant spatial scale representations.

Vasily Demyanov, Daniel Arnold, Alexandra Kuznetsova, Temistocles S. Rojas
Heriot-Watt University
[email protected], [email protected], [email protected], [email protected]

Mike Christie
Heriot-Watt University, UK
[email protected]

MS40

Stochastic Identification of Contaminant Sources

When a contaminant is detected at a drinking well, its source could be uncertain. We propose a stochastic technique to identify when and where such a contamination entered the aquifer, with a quantification of the uncertainty associated with such identification.

J. Jaime Gomez-Hernandez
Research Institute of Water and Environmental Engineering
Universitat Politecnica de Valencia - Spain
[email protected]

Teng Xu
Institute for Water and Environmental Engineering
Universitat Politecnica de Valencia
[email protected]

MS40

Accounting for Known Sources of Model Error During Bayesian Posterior Sampling of Geophysical Inverse Problems

A methodology is presented to account for model errors resulting from inaccurate forward process representations in Bayesian solutions to near-surface geophysical inverse problems. We show how principal component analysis combined with an informal likelihood function based on the L2 norm can be used within MCMC to account for realistic model error in the corresponding posterior samples. We show the application of the method to ground-penetrating radar monitoring of infiltration to estimate unsaturated hydraulic properties.

James Irving, Corinna Koepke
Universite de Lausanne
[email protected], [email protected]

Delphine Roubinet
University of Lausanne
[email protected]

MS41

MUQ (MIT Uncertainty Quantification): A Flexible Software Framework for Algorithms and Applications

MUQ has many capabilities for defining and solving forward and inverse uncertainty quantification problems. In this talk, we will focus on our Markov chain Monte Carlo (MCMC) module and how appropriate software abstractions can accelerate MCMC algorithm development and testing. We then illustrate MUQ's capabilities by creating a new surrogate-exploiting transport-map MCMC algorithm and solving a prototypical Bayesian inverse problem. We conclude by discussing how these extensible and reusable software frameworks facilitate reproducible science.

Matthew Parno
Massachusetts Institute of Technology
[email protected]

Andrew D. [email protected]

Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]

MS41

UQTk: A C++/Python Toolkit for Uncertainty Quantification

The UQ Toolkit (UQTk) is a collection of libraries, tools and apps for the quantification of uncertainty in numerical model predictions. UQTk offers intrusive and non-intrusive methods for forward uncertainty propagation, tools for sensitivity analysis, sparse surrogate construction, and Bayesian inference. The core libraries are implemented in C++ but a Python interface is available for easy prototyping and incorporation in UQ workflows. We will present the key UQTk capabilities and illustrate a typical UQ workflow.

Bert J. Debusschere
Energy Transportation Center
Sandia National Laboratories, Livermore CA
[email protected]

Khachik Sargsyan, Cosmin Safta, Kenny Chowdhary
Sandia National Laboratories
[email protected], [email protected], [email protected]

MS41

OpenCossan: An Open Matlab Tool for Dealing with Randomness, Imprecision and Vagueness

OpenCossan is an open and collaborative software for uncertainty quantification and management. OpenCossan aims to promote learning and understanding of non-deterministic analysis through the distribution of an intuitive, flexible, powerful and open computational toolbox. Released under the LGPL license, OpenCossan can be used, modified and freely redistributed. The quantification of uncertainties and risks is a key requirement and challenge across various disciplines in order to operate systems of diverse nature safely under the changing state of inputs and boundary conditions. Randomness is represented mathematically by random quantities and by suitable probability distributions. However, many of the uncertain phenomena are non-repeatable events, and the understanding of the underlying physics can also be limited or vague. A key feature of the software is its capability to deal with different representations of uncertainty, adopting concepts of imprecise probability. This allows a rational treatment of information of possibly different forms without ignoring significant information, and without introducing unwarranted assumptions. This software implements efficient simulation and parallelization strategies, allowing a significant reduction of the computational costs of non-deterministic analyses.

Edoardo Patelli, Matteo Broggi, Marco de Angelis
University of Liverpool
[email protected], [email protected], [email protected]

MS41

Adaptive Sparse Grids for UQ with SG++

Adaptive sparse grids provide a flexible and versatile way to represent higher-dimensional dependencies. For UQ, they can be employed for surrogate-based forward propagation as well as for input modeling and model calibration via density estimation. We will show both use cases for a benchmark problem in CO2 storage. We demonstrate the use of the multi-platform sparse grid toolkit SG++, which provides efficient implementations of adaptive sparse grids in different flavors.

Dirk Pfluger, Fabian Franzelin
University of Stuttgart
[email protected], [email protected]

MS42

Sample-Efficient, Basis-Adaptive Polynomial Chaos Approximation

Methods for computing accurate approximations of a Quantity of Interest (QoI) via Polynomial Chaos and l1-regression with low-coherence sampling have shown promising numerical and theoretical results. We present an iterative method which at each iteration identifies a basis set and corresponding coefficients that adapt to the QoI as a function. Further, previously generated low-coherence samples are adapted for computations with subsequent bases by generating samples accounting for coherence adjustments arising from the iteratively adapting bases.
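A minimal, non-adaptive version of l1-regularized Polynomial Chaos regression can be sketched as follows (fixed Legendre basis, plain iterative soft thresholding, and a toy QoI with a sparse expansion; the basis adaptation and coherence-adjusted sampling of the talk are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy QoI with a sparse Legendre expansion: f = 1.0*P0 + 0.5*P2
def qoi(x):
    return 1.0 + 0.5 * 0.5 * (3 * x**2 - 1)

P = 8            # basis size
n = 20           # number of samples
x = rng.uniform(-1, 1, n)
# measurement matrix of Legendre polynomials evaluated at the samples
A = np.polynomial.legendre.legvander(x, P - 1)
b = qoi(x)

# ISTA: minimize 0.5*||A c - b||^2 + lam*||c||_1
lam = 1e-3
c = np.zeros(P)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / largest-singular-value^2
for _ in range(5000):
    g = c - step * A.T @ (A @ c - b)                       # gradient step
    c = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
print(np.round(c, 3))  # dominated by coefficients 0 and 2
```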

Jerrad Hampton
University of Colorado, Boulder
[email protected]

Alireza Doostan
Department of Aerospace Engineering Sciences
University of Colorado, Boulder
[email protected]

MS42

A Dictionary Learning Strategy for Bayesian Inference

To reduce the cost of evaluating the posterior in Bayesian inference, polynomial surrogates are typically used. We introduce a novel approach which does not rely on an a priori choice of basis for approximating the parameter field. From a prior set of realizations of the parameter field, one seeks a basis (dictionary) such that each realization is likely to admit a sparse representation. As demonstrated with our example, this approach can then accommodate scarce observations.

Lionel Mathelin
LIMSI - CNRS
[email protected]

Alessio Spantini, Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected], [email protected]

MS42

Sparse Approximation Using L1-L2 Minimization

We propose a sparse approximation method using ℓ1−ℓ2 minimization. We present several theoretical estimates regarding its recoverability for both sparse and non-sparse signals. Then we apply the method to sparse orthogonal polynomial approximations for stochastic collocation. Various numerical examples are presented to verify the theoretical findings. We observe that ℓ1−ℓ2 minimization seems to produce results that are consistently better than ℓ1 minimization.
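A common way to handle the nonconvex ℓ1−ℓ2 objective is a difference-of-convex iteration that linearizes the concave −||x||_2 term; the sketch below (random Gaussian measurements and a proximal-gradient inner loop; an illustration of the objective, not necessarily the authors' algorithm) recovers a sparse vector from underdetermined data:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 10, 30, 3               # underdetermined: 10 measurements, 30 unknowns
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 2.0
b = A @ x_true

def l1_minus_l2(A, b, lam=1e-3, outer=15, inner=2000):
    """DCA-style solver for 0.5*||Ax-b||^2 + lam*(||x||_1 - ||x||_2):
    linearize the concave -||x||_2 term at the current iterate, then run
    proximal gradient (soft thresholding) on the resulting convex problem."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(outer):
        nrm = np.linalg.norm(x)
        v = x / nrm if nrm > 0 else np.zeros_like(x)   # subgradient of ||x||_2
        for _ in range(inner):
            g = x - step * (A.T @ (A @ x - b) - lam * v)
            x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return x

x_hat = l1_minus_l2(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```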

Yeonjong Shin
Scientific Computing and Imaging Institute
University of Utah
[email protected]

Liang Yan
Dpt Mathematics, Southeast University, Nanjing, China
[email protected]

Dongbin Xiu
University of Utah
[email protected]

MS42

Interpolation Via Weighted L1 Minimization

Abstract not available.

Rachel Ward
Department of Mathematics
University of Texas at Austin
[email protected]

Holger Rauhut
RWTH Aachen University
Lehrstuhl C fur Mathematik (Analysis)
[email protected]

Giang [email protected]

MS43

The Probability Density Evolution Method for Multi-Dimensional Nonlinear Stochastic Dynamical Systems

Multi-dimensional nonlinear stochastic dynamical systems are encountered in various scientific and engineering disciplines. For instance, in earthquake engineering and wind engineering, engineering structures themselves or the attached energy dissipation subsystems may exhibit strongly nonlinear behaviors under severe earthquakes or strong winds. Besides, randomness may occur in both structural parameters and excitations (inputs). Generally, great challenges exist in the analysis of such systems because: (1) these systems are multi-dimensional systems with large numbers of degrees of freedom, on the order of hundreds or even millions, but on the other hand they are nonlinear systems, and thus cannot be reduced or decomposed, say by the modal superposition method; (2) the randomness is involved and coupled with the nonlinearity, which leads to coupled problems either at the level of moments (e.g., the closure problem) or of probability densities (e.g., the coupled multi-dimensional FPK equation for white noise excitations); and (3) the randomness involved might be non-stationary and non-Gaussian. In the present paper the probability density evolution method will be outlined. In this method, the principle of preservation of probability is invoked to tackle the system by explicitly introducing the sources of randomness and the embedded physics/dynamics mechanism. A state-variable decoupled generalized density evolution equation is then established. In contrast to classical equations such as the FPK equation, this equation can be of arbitrary dimension rather than the dimension of the original dynamical system. Numerical methods will be discussed. A nonlinear structure subjected to seismic excitations will be studied.

Jianbin Chen, Junyi Yang, Jie Li
Tongji University, China
[email protected], [email protected], [email protected]

MS43

Uncertainty Propagation Across Distinct PDF Systems and Stochastic Spectral Methods

Propagation of uncertainty across heterogeneous stochastic systems is an important research area that arises often in multi-fidelity modeling. The stochastic models can be classified from PDF models that contain the full statistical description of the stochastic solution, to surrogate models that provide the moments of the solution up to a certain order. In this talk, we propose interface methods between these distinct stochastic systems, in particular concerning the response-excitation PDF system. While this approach generalizes the classical PDF models to consider non-Gaussian colored noise, it involves an extended solution space from the excitation. We employ PDF transformation techniques to impose the boundary condition between classical and response-excitation PDF systems. In addition, we couple the PDF models to stochastic spectral models by representing the random boundary in terms of an orthogonal polynomial basis according to the probability measure provided by the PDF system. The effectiveness of our approach is demonstrated on a stochastic advection-reaction equation.

Heyrim Cho
Department of Mathematics, University of Maryland
[email protected]

MS43

Method of Distributions for Uncertainty Quantification

Methods of distributions designate a class of approaches based on the derivation of deterministic partial differential equations for either a probability density function or a cumulative distribution function of a state variable. In this talk, we will discuss the mathematical foundation of such approaches as well as their promises and limitations. Several applications will be discussed.

Pierre Gremaud
Department of Mathematics
North Carolina State University
[email protected]

Daniel M. Tartakovsky
University of California, San Diego
[email protected]

MS43

Probabilistic Solutions to Random Differential Equations under Colored Excitation Using the Response-Excitation Approach

Equations for the evolution of the response-excitation (RE) pdf of non-Markovian responses of nonlinear random differential equations (RDEs) under colored excitation are non-closed in general or exhibit limitations due to high dimensionality. We present a closure scheme for the generalized RE Liouville equation (Athanassoulis, Tsantili, Kapelonis, 2015, Proc. R. Soc. A, Vol. 471, p. 20150501) for the long-time limit state of a scalar RDE that collects missing information through local linearized versions of the nonlinear RDEs.

Ivi C. Tsantili
National Technical University of Athens, Greece
[email protected], [email protected]

Gerassimos A. Athanassoulis
National Technical University of Athens
ITMO University
[email protected]; [email protected]

Zacharias G. Kapelonis
National Technical University of Athens
[email protected]; [email protected]

MS44

Set Oriented Numerical Methods for Uncertainty Analysis

In this talk we will introduce a numerical method which allows one to perform a global uncertainty analysis for dynamical systems. The corresponding uncertainty attractors are computed by a sequence of nested, increasingly refined outer approximations. Thus, the numerical approach is inherently set oriented, and therefore the method does not rely on long-term simulations of the underlying system.

Michael Dellnitz
University of Paderborn, Germany
[email protected]

Karin Mora
University of Paderborn
[email protected]

Tuhin Sahai
United Technologies
[email protected]

MS44

ADMM for 2-Stage Stochastic Quadratic Programs

We present a scenario-decomposition based Alternating Direction Method of Multipliers (ADMM) algorithm for the solution of scenario-based 2-stage stochastic Quadratic Programs. The decomposition alternates between: (i) an equality-constrained quadratic program that is decoupled by scenarios, and (ii) a projection onto a set of bounds and the non-anticipativity constraints. We show that the projection problem scales linearly in the number of scenarios. Further, the decomposition allows using different values of the parameter for each of the scenarios. We provide convergence analysis and derive the optimal setting for the ADMM parameter in each scenario. We show that the proposed approach outperforms the non-decomposed ADMM approach and compares favorably with Gurobi, a commercial QP solver, on a number of test problems.
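The alternation between scenario-decoupled QPs and a non-anticipativity projection can be shown on a toy scalar problem (invented probabilities and targets; bounds and second-stage structure omitted), where the projection onto the non-anticipativity set reduces to averaging the scenario copies:

```python
# Toy consensus ADMM for a scenario-based 2-stage problem:
#   min over x_s:  sum_s p_s * 0.5 * (x_s - c_s)**2
#   s.t. non-anticipativity: x_1 = x_2 = ... = x_S  (shared first stage)
# The closed-form optimum is the probability-weighted mean of the c_s.

S = 4
p = [0.1, 0.2, 0.3, 0.4]          # scenario probabilities
c = [1.0, 2.0, 3.0, 4.0]          # per-scenario targets
rho = 1.0                          # ADMM parameter

x = [0.0] * S                      # per-scenario copies
z = 0.0                            # consensus (first-stage) variable
u = [0.0] * S                      # scaled dual variables

for _ in range(500):
    # (i) scenario-decoupled quadratic minimizations (closed form here)
    x = [(p[s] * c[s] + rho * (z - u[s])) / (p[s] + rho) for s in range(S)]
    # (ii) projection onto the non-anticipativity set = averaging
    z = sum(x[s] + u[s] for s in range(S)) / S
    u = [u[s] + x[s] - z for s in range(S)]

print(z)  # -> close to sum(p_s * c_s) = 3.0
```

Step (i) is embarrassingly parallel across scenarios, which is what makes the decomposition attractive at scale.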

Arvind Raghunathan
Mitsubishi Electric Research Laboratories
[email protected]

MS44

A Chaotic Dynamical System That Samples

In this work, we construct a chaotic dynamical system that samples user-prescribed distributions. This dynamical system is chaotic in the sense that although the trajectories are sensitive to initial conditions, the same statistical properties are recovered every time the system is simulated (independent of the initial conditions). Combining ideas from ergodic theory and control theory, we construct a novel approach for building chaotic dynamical systems with predetermined statistical properties. On complicated distributions, we demonstrate that this algorithm provides significant acceleration and higher accuracy than competing methods (such as Hamiltonian MC, slice sampling and Metropolis-Hastings) for Markov Chain Monte Carlo (MCMC). We also demonstrate our novel algorithm by reproducing paintings and photographs, treating them as the target distribution. If one makes the spatial distribution of colors in the picture the desired distribution, akin to a human, the algorithm first captures large-scale features and then goes on to refine small-scale features. The equations are easy to construct and can be simulated using standard Euler or Runge-Kutta integration schemes.

Tuhin Sahai
United Technologies
[email protected]

George Mathew
iRhythm Technologies
[email protected]

Amit Surana
System Dynamics and Optimization
United Technologies Research Center (UTRC)
[email protected]

MS44

Koopman Operator Based Nonlinear Estimation

We describe a new approach for observer design for nonlinear systems based on the Koopman operator theoretic framework. The Koopman operator is a linear but infinite-dimensional operator that governs the time evolution of system outputs in a linear fashion. We exploit this property to synthesize an observer form which enables the use of Luenberger/Kalman-like linear observers for nonlinear estimation. We describe a numerical procedure to construct such an observer form and demonstrate it on several examples.
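The lifting idea behind such observers can be sketched with extended dynamic mode decomposition (EDMD) on a toy map that has an exact finite-dimensional Koopman closure (the map, observables, and fitting procedure below are illustrative, not the authors' construction):

```python
import numpy as np

# Nonlinear map with an exact finite Koopman closure:
#   x1+ = a*x1,   x2+ = b*x2 + c*x1**2
# On the observables psi = (x1, x2, x1**2) the dynamics are exactly linear,
# so a Luenberger-style *linear* observer could be built in lifted space.
a, b, c = 0.9, 0.5, 1.0

def f(x):
    return np.array([a * x[0], b * x[1] + c * x[0] ** 2])

def psi(x):
    return np.array([x[0], x[1], x[0] ** 2])

# EDMD: fit K from snapshot pairs by least squares  psi(f(x)) ~ K psi(x)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
Psi = np.array([psi(x) for x in X])
PsiP = np.array([psi(f(x)) for x in X])
K, *_ = np.linalg.lstsq(Psi, PsiP, rcond=None)
K = K.T   # so that psi(x+) = K @ psi(x)

# the learned K reproduces the lifted dynamics exactly (up to round-off)
x0 = np.array([0.7, -0.3])
print(np.allclose(K @ psi(x0), psi(f(x0))))
```

For general systems the closure is only approximate and the dictionary of observables must be chosen with care; this toy is the rare case where three observables suffice.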

Amit Surana
System Dynamics and Optimization
United Technologies Research Center (UTRC)
[email protected]

Andrzej Banaszuk
United Technologies Research Center
[email protected]

MS45

Mild Stochastic Calculus in Infinite Dimensions

This talk presents a certain class of stochastic processes, which we suggest to call mild Ito processes, and a new, somehow mild, Ito-type formula for such processes. We will use the mild Ito formula to establish essentially sharp weak convergence rates for numerical approximations of different types of stochastic partial differential equations, including nonlinear stochastic heat equations, stochastic Burgers equations, stochastic Cahn-Hilliard-Cook type equations, and stochastic wave equations.

Arnulf Jentzen
ETH Zurich
[email protected]

MS45

Numerical Approximation of Stochastic Differential Equations

In this talk we review some recent results on the numerical approximation of stochastic differential equations. We primarily focus on the mean-square error and show optimal convergence rates for the backward Euler-Maruyama method under a global monotonicity condition. Then we discuss some extensions of our results to the BDF2-Maruyama two-step scheme as well as to stochastic partial differential equations.
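The backward (drift-implicit) Euler-Maruyama scheme referenced here can be sketched for a scalar SDE with one-sided Lipschitz drift, dX = -X^3 dt + dW (an illustrative test equation, not from the talk); each step solves a scalar nonlinear equation by Newton iteration:

```python
import math, random

# Drift-implicit (backward) Euler-Maruyama for dX = -X**3 dt + dW:
# each step solves  x = x_prev - h*x**3 + dW, i.e. g(x) = x + h*x**3 - rhs = 0.
# The monotone drift -x**3 is exactly the setting where the explicit scheme
# can be unstable for large states while the implicit one stays stable.

def backward_euler_maruyama(x0, h, n_steps, rng):
    x = x0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(h))
        rhs = x + dW
        # Newton iteration; g'(y) = 1 + 3*h*y**2 >= 1, so it is well-posed
        y = x
        for _ in range(50):
            y -= (y + h * y**3 - rhs) / (1 + 3 * h * y**2)
        x = y
    return x

rng = random.Random(42)
samples = [backward_euler_maruyama(2.0, 0.01, 500, rng) for _ in range(200)]
print(sum(s * s for s in samples) / len(samples))  # bounded second moment
```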

Adam Andersson
Technische Universitat Berlin
[email protected]

Raphael Kruse
TU Berlin
[email protected]

MS45

SPDEs with Levy Noise and Weak Convergence

In this talk, I will present an abstract framework to study the weak convergence of numerical approximations of linear stochastic partial differential equations of evolutionary type driven by additive Levy noise. The central result is a general representation formula for the error, which is applied to study space-time discretizations of the stochastic heat equation, the wave equation, and a Volterra-type integro-differential equation as examples. For twice continuously differentiable test functions with bounded second derivative (with an additional condition on the second derivative for the wave equation), the weak rate of convergence is found to be twice the strong rate. The talk is based on joint work with Mihaly Kovacs and Rene Schilling.

Felix Lindner
Technische Universitat Kaiserslautern
[email protected]

MS45

Computations of Waves in a Neural Model

Abstract not available.

Gabriel J. Lord
Heriot-Watt University
[email protected]

MS46

Scalable Parameterized Surrogates Based on Low Rank Tensor Approximations for Large-Scale Bayesian Inverse Problems

Hessian operators (of the negative log posterior) have played an important role in high-(infinite-)dimensional Bayesian inverse problems, from characterizing the (inverse of the) posterior covariance under the Gaussian approximation, to accelerating MCMC sampling methods by providing information on the local curvature in parameter space. The key to making computations with them tractable is a low-rank approximation of the (prior-preconditioned) data misfit component of the Hessian. Here we consider the role of higher derivative operators in Bayesian inverse problems and whether scalable low-rank approximations can be constructed.

Nick Alger
The University of Texas at Austin
Center for Computational Geosciences and Optimization
[email protected]

Tan Bui-Thanh, Omar Ghattas
The University of Texas at Austin
[email protected], [email protected]

MS46

Accelerating MCMC with Active Subspaces

Active subspaces are an emerging set of tools for dimension reduction. When applied to MCMC, the active subspace separates a low-dimensional subspace that is informed by the data from its orthogonal complement that is constrained by the prior. With this information, one can run the sequential MCMC on the active variables while sampling independently according to the prior on the inactive variables.
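The active subspace itself is usually estimated from gradient samples via the eigendecomposition of C = E[grad f grad f^T]; the sketch below (a toy ridge function f(x) = sin(w.x), not an inverse-problem posterior) shows the data-informed direction emerging as the dominant eigenvector:

```python
import math
import numpy as np

# Estimate an active subspace from gradient samples:
#   C = (1/N) * sum_i grad_f(x_i) grad_f(x_i)^T
# and keep the eigenvectors with large eigenvalues.  Here f(x) = sin(w.x)
# depends on a single direction w, so the active subspace is one-dimensional.
rng = np.random.default_rng(0)
d = 5
w = np.array([1.0, 2.0, 0.0, 0.0, 0.0])
w /= np.linalg.norm(w)

def grad_f(x):
    # gradient of f(x) = sin(w.x) is cos(w.x) * w
    return math.cos(w @ x) * w

G = np.array([grad_f(x) for x in rng.uniform(-1, 1, (500, d))])
C = G.T @ G / len(G)
eigval, eigvec = np.linalg.eigh(C)   # eigenvalues in ascending order
print(abs(eigvec[:, -1] @ w))        # leading eigenvector aligns with w
```

In the MCMC setting f would be the log-likelihood, and the chain would explore only the few active coordinates.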

Paul Constantine
Colorado School of Mines
Applied Mathematics and Statistics
[email protected]

Tan Bui-Thanh
The University of Texas at Austin
[email protected]

Carson Kent
Colorado School of Mines
Applied Mathematics and Statistics
[email protected]

MS46

Subspace Acceleration Strategies for Sampling on Function Space

Many inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. Sampling posteriors defined on function space poses a significant challenge. By identifying and blending the global and local geometries informed by noisy observations, we present a suite of subspace-accelerated schemes to tackle this challenge. Examples involving a nonlinear inverse problem and a conditioned diffusion process are used to demonstrate the efficiency of our sampling schemes.

Tiangang Cui
Massachusetts Institute of Technology
[email protected]

K [email protected]

Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]

MS46

Efficient Computations for Large-Scale Inverse Problems with Bayesian Sampling

The U.S. Department of Energy certifies weapons through modeling and simulation, which is informed by the results of experimentation, wherein the results are delivered to the physics modelers with statistically valid estimates of uncertainty. While Bayesian sampling techniques are widely used for quantifying such uncertainty in signal and image reconstruction, they are computationally prohibitive in large-scale applications. We extend these approaches by implementing Monte Carlo schemes and matrix-free linear system solvers for reconstructions at very large scales, and demonstrate their viability on applications in quantitative radiographic data analysis.

Eric Machorro
National Security Technologies
[email protected]

Jesse Adams
The University of Arizona
[email protected]

Kevin Joyce
University of Montana
Department of Mathematical Sciences
[email protected]

MS47

Bayesian Parameter Inference Across Scales

In multi-scale models, representations at different scales may depend on parameters that are not directly related. We propose a Bayesian approach for estimating fine-scale parameters from the solution of the inverse problem at a coarser scale. Computed examples illustrate the viability of the approach.

Margaret Callahan
Case Western Reserve University
[email protected]

Daniela Calvetti
Case Western Reserve Univ
Department of Mathematics, Applied Mathematics and Statistics
[email protected]

MS47

Bayesian Sampling Using a Wishart Prior in Radiography Image Reconstruction

Quantitative X-ray radiography hinges on the ability to solve inverse problems and provide uncertainties for image reconstructions. The Bayesian framework naturally lends itself to quantifying uncertainties, and we have developed a hierarchical Bayesian method where we place a Wishart distribution on the prior precision matrix to provide enhanced edge information. This allows the data to drive the localization of features in the reconstruction. In this work we focus on Abel inversion to compute volumetric object densities from X-ray radiographs and to quantify uncertainties in the reconstruction. Results are presented for both synthetic signals and real data from a U.S. Department of Energy X-ray imaging facility.

Marylesa Howard
National Security Technologies, LLC
[email protected]

MS47

Expectation Propagation for Electrical Impedance Tomography

Electrical impedance tomography is a noninvasive imaging technique for determining the internal conductivity distribution from boundary voltage measurements. In this talk, we discuss a variational approach for electrical impedance tomography based on expectation propagation, which at each iteration involves numerical quadrature. Some numerical results for real data will be presented.

Bangti JinDepartment of Computer ScienceUniversity College [email protected]

MS47

Prior and Posterior Convergence for Some Finite Element Approximations

Finite element methods (FEM) are powerful tools for numerical approximation of partial differential equations. The rigorous mathematical background of FEM arises from thorough convergence studies. We will discuss applying the well-known convergence results for FEMs within Bayesian statistical inverse problems, especially in the case of electrical impedance tomography.

Sari Lasanen
Department of Mathematical Sciences
University of Oulu
[email protected]

MS48

Multilevel Monte Carlo for Convective Transport in Inhomogeneous Media

In the last few years efficient algorithms have been proposed for the propagation of uncertainties through numerical codes. Recently, the multilevel Monte Carlo method emerged as a novel technique able to retain the robustness of the Monte Carlo method while also increasing its efficiency. We study the performance of the MLMC approach in simulation of particle-laden turbulent flow subject to thermal radiation. We also investigate strategies to further accelerate the approach using variance reduction techniques.
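To fix ideas, here is a minimal sketch of the multilevel Monte Carlo telescoping estimator on a hypothetical toy problem (estimating E[S_T] for geometric Brownian motion with coupled Euler-Maruyama levels; the model, parameters, and sample-size schedule below are illustrative and not from the talk):

```python
import math, random

def gbm_euler(dws, s0, mu, sigma, dt):
    """Euler-Maruyama path of geometric Brownian motion, returning S_T."""
    s = s0
    for dw in dws:
        s += mu * s * dt + sigma * s * dw
    return s

def mlmc(levels=5, n0=20000, T=1.0, s0=1.0, mu=0.05, sigma=0.2, seed=42):
    """Multilevel Monte Carlo telescoping estimator of E[S_T]:
        E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}],
    where level l uses 2^l time steps and the coarse path in each
    correction term reuses the fine path's Brownian increments, which
    keeps Var[P_l - P_{l-1}] small. Sample sizes follow a fixed halving
    schedule (a stand-in for the optimal variance-based allocation)."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(levels + 1):
        n = max(n0 // 2 ** l, 200)
        nf = 2 ** l
        dtf = T / nf
        acc = 0.0
        for _ in range(n):
            dws = [rng.gauss(0.0, math.sqrt(dtf)) for _ in range(nf)]
            pf = gbm_euler(dws, s0, mu, sigma, dtf)
            if l == 0:
                acc += pf
            else:
                # coarse path: pairwise-summed increments, doubled step
                dwc = [dws[i] + dws[i + 1] for i in range(0, nf, 2)]
                acc += pf - gbm_euler(dwc, s0, mu, sigma, 2 * dtf)
        est += acc / n
    return est
```

The exact answer for this toy problem is E[S_T] = exp(mu*T), which the estimator should reproduce to a few digits at a fraction of the cost of a single-level Monte Carlo run on the finest grid.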

Gianluca Iaccarino
Stanford University
Mechanical Engineering
[email protected]

Alireza Doostan
Department of Aerospace Engineering Sciences
University of Colorado, Boulder
[email protected]

Gianluca Geraci
University of [email protected]

MS48

Fast Bayesian Optimal Experimental Design for Seismic Source Inversion

We develop a fast method for optimally designing experiments in the context of statistical seismic source inversion. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by elastodynamic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the "true" parameters, and we can employ the Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix.
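The Laplace-based acceleration mentioned above can be summarized schematically (with H(θ) the Hessian of the negative log-posterior at the MAP point and π the prior density; this is the standard form of the Laplace-approximated expected information gain, stated here for orientation rather than quoted from the abstract):

```latex
% Laplace approximation: posterior \approx N(\hat\theta(y), H(\hat\theta)^{-1}),
% so the expected Kullback-Leibler divergence (expected information gain) becomes
\mathbb{E}\left[D_{\mathrm{KL}}\right] \approx
  \int_{\Theta} \left[ \frac{1}{2}\log\det\!\left(\frac{H(\theta)}{2\pi e}\right)
  - \log \pi(\theta) \right] \pi(\theta)\,\mathrm{d}\theta .
```

The integral over the prior is then estimated by sampling, which avoids the nested Monte Carlo loop of the direct information-gain estimator.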

Quan Long
United Technologies Research Center, [email protected]

Mohammad Motamed
University of New Mexico
[email protected]

Raul F. Tempone
Mathematics, Computational Sciences & Engineering
King Abdullah University of Science and Technology
[email protected]

MS48

Computing Measure Valued and Statistical Solutions of Hyperbolic Systems of Conservation Laws

We consider UQ for nonlinear hyperbolic systems of conservation laws within the frameworks of measure valued and statistical solutions and present a novel algorithm to compute these solutions. The algorithm, based on sampling and ensemble averaging, is shown to converge provided that the underlying space-time discretizations satisfy certain criteria. We show convergence of interesting observables such as moments, correlations, structure functions and multi-point statistics. Numerical experiments demonstrating the validity of this framework for unstable and turbulent flows are presented. More efficient variants such as MLMC methods and a coarse-graining algorithm are also considered.

Siddhartha Mishra
ETH Zurich
Zurich, Switzerland
[email protected]

MS48

Continuation Multi Level Monte Carlo for Uncertainty Quantification in Compressible Aerodynamics

The aim of this work is to address the challenge of efficiently propagating geometrical and operating uncertainties in compressible aerodynamics numerical simulations with a Continuation Multi Level Monte Carlo algorithm. We will present numerical test cases (a nozzle and the RAE2822 airfoil) of interest in compressible aerodynamics and show the efficiency of the proposed CMLMC in terms of accuracy versus overall computational cost, compared to a standard MC method, and its robustness on these applications.

Michele Pisaroni
Ecole polytechnique federale de Lausanne
[email protected]

Fabio Nobile
EPFL, [email protected]

Penelope Leyland
Ecole polytechnique federale de Lausanne
[email protected]

MS49

A Piecewise Deterministic Markov Process for Efficient Sampling in R^n

In recent work (Bierkens, Roberts 2015) we discovered an elementary piecewise deterministic Markov process (zig zag process) which can be extended to have a general (absolutely continuous) invariant probability distribution in R^n. We develop MCMC based on the zig zag process. The sample paths of the zig zag process can be efficiently simulated by rejection sampling of the switching times. We compare the resulting algorithm to other important sampling methods. It turns out that the zig zag process is particularly efficient at sampling from heavy tailed probability distributions.
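As a concrete illustration of the mechanism described above, here is a minimal 1D sketch (not the authors' implementation) of a zig zag sampler targeting a standard Gaussian; for simplicity it replaces the exact rejection sampling of switching times with a small-time-step approximation:

```python
import math, random

def zigzag_gaussian(n_steps=200_000, dt=0.01, seed=0):
    """Time-discretized 1D zig zag process targeting N(0,1)
    (U(x) = x^2/2, so U'(x) = x). The velocity theta in {-1,+1} flips
    with rate lambda(x, theta) = max(0, theta*U'(x)); between flips the
    position moves deterministically at speed theta. The exact algorithm
    draws the switching times by rejection sampling (thinning); here we
    approximate a flip in each interval dt with probability lambda*dt."""
    rng = random.Random(seed)
    x, theta = 0.0, 1.0
    samples = []
    for _ in range(n_steps):
        if rng.random() < max(0.0, theta * x) * dt:
            theta = -theta          # direction switch
        x += theta * dt             # deterministic linear motion
        samples.append(x)
    return samples
```

With a long enough trajectory the empirical mean and variance of the samples approach 0 and 1, the moments of the target distribution.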

Joris [email protected]

Gareth O. Roberts
Warwick University
[email protected]

MS49

Improving the Performance of Overdamped Langevin Samplers by Breaking Detailed Balance

MCMC provides a powerful and general approach for generating samples from a high-dimensional target probability distribution, known up to a normalizing constant. There are infinitely many Markov processes whose invariant measure equals a given target distribution. The natural question is whether a Markov process can be chosen to generate samples of the target distribution as efficiently as possible. It has been previously observed that adding an appropriate nonreversible perturbation to a reversible Markov process will improve its performance. In this talk, I will describe some recent results which characterise the effect of adding a nonreversible drift to an overdamped Langevin sampler, in terms of rate of convergence to equilibrium and asymptotic variance. Finally, I will detail how such nonreversible Langevin samplers can be used in practice by describing a discretisation scheme which inherits the good performance of the underlying process.
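A minimal sketch of the kind of nonreversible perturbation discussed above (a toy example of ours, not the speaker's scheme): an Euler-Maruyama discretization of overdamped Langevin dynamics for a 2D Gaussian target, with a skew-symmetric drift added on top of the gradient flow:

```python
import math, random

def nonreversible_langevin(gamma=1.0, n_steps=100_000, dt=0.01, seed=1):
    """Euler-Maruyama discretization of
        dX = -(I + gamma*J) grad V(X) dt + sqrt(2) dW,  J = [[0,-1],[1,0]],
    targeting V(x) = |x|^2 / 2 (standard 2D Gaussian). The extra drift
    gamma*J*grad V is divergence-free and orthogonal to grad V, so it
    leaves the invariant measure exp(-V) unchanged while breaking
    detailed balance."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    out = []
    for _ in range(n_steps):
        gx, gy = x, y                       # grad V = (x, y)
        # reversible part plus nonreversible rotation gamma * J @ grad V
        dx = -(gx - gamma * gy) * dt
        dy = -(gy + gamma * gx) * dt
        x += dx + math.sqrt(2 * dt) * rng.gauss(0.0, 1.0)
        y += dy + math.sqrt(2 * dt) * rng.gauss(0.0, 1.0)
        out.append((x, y))
    return out
```

Setting gamma=0 recovers the plain (reversible) Langevin sampler; both variants leave the same Gaussian invariant, which can be checked on the empirical marginal moments.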

Andrew [email protected]

Greg Pavliotis
Imperial College London
[email protected]

Tony Lelievre
Ecole des Ponts ParisTech
[email protected]

MS49

The Backward SDE Filter

A novel adaptive meshfree forward backward doubly stochastic differential equation approach is presented for the nonlinear filtering problem. The algorithm is based on the fact that the solution is the unnormalized filtering density required in the nonlinear filtering problem. Adaptive space points are constructed in a stochastic manner to improve the efficiency of the algorithm. We also present numerical results for (i) a double well potential problem with oscillations and (ii) a multi-target tracking problem.

Vasileios Maroulas
University of Tennessee, Knoxville
[email protected]

Bao Feng
Oak Ridge National Laboratory
[email protected]

MS49

Gibbs-Langevin Samplers

Abstract not available.

Omiros Papaspiliopoulos
Department of Economics
Universitat Pompeu Fabra
[email protected]

MS50

Empirical Bayes Approach to Climate Model Calibration

One challenge for Bayesian calibration of climate models is the question of how to assign weights to various observational targets, many of which are highly correlated through the physics of the atmosphere. We will discuss the use of 'hyper-parameters' and how they can be enhanced by additional information from a set of perfect modeling experiments in order to provide an indication of the significance of gaps between a model and observations.

Charles Jackson
UTIG
University of Texas at Austin
[email protected]

Gabriel Huerta
Indiana University
[email protected]

MS50

Greater Accuracy with Reduced Precision: A Stochastic Paradigm for Weather and Climate Prediction

Weather and climate models have become increasingly important tools for making society more resilient to extremes of weather and for helping society plan for possible changes in future climate. These models are based on known laws of physics but, because of computing constraints, are solved over a considerably reduced range of scales compared with the mathematical expression of these laws. This generically leads to systematic errors when models are compared with observations. A new paradigm is proposed for solving these equations which sacrifices precision and determinism for small-scale motions. It is suggested that this sacrifice may allow the truncation scale of weather and climate models to extend down to cloud scales in the coming years, leading to more accurate predictions of future weather and climate.

Tim Palmer
University of Oxford, [email protected]

MS50

Probabilistic Regional Ocean Predictions

This presentation is concerned with the prediction of the probability distribution function (pdf) of large nonlinear dynamical systems using stochastic partial differential equations (S-PDEs), with a focus on regional ocean dynamics. The stochastic dynamically orthogonal (DO) PDEs for a primitive equation ocean model with nonlinear free surface are derived and numerical schemes for their space-time integration are obtained. Examples are provided for a set of idealized multiscale non-hydrostatic and hydrostatic flows as well as for more realistic regional ocean dynamics. Results are compared to more classic ensemble approaches. Such comparisons utilize stochastic predictive skill metrics developed for real-time ocean pdf forecasts issued with the Error Subspace Statistical Estimation (ESSE) method during Aug-Sep 2009 for the two-month QPE IOP09 experiment off the coast of Taiwan.

Deepak Subramani, Pierre [email protected], [email protected]

MS50

A Parametrization of Ocean Mesoscale Eddies: Stochasticity and Non-viscous Stresses

Ocean mesoscale eddies strongly affect the strength and variability of large-scale flows. Parametrizing eddy-mean flow processes in climate simulations remains a key challenge, especially quantifying the uncertainty associated with parametrized eddies. We show that a successful closure for turbulent mesoscale eddies can be obtained using resolved non-Newtonian stresses. The uncertainty associated with the parametrization can be quantified using a stochastic model which depends only on the coarse resolution, wind stress and stratification.

Laure Zanna
Oxford University
[email protected]

MS51

Tensor Train Approximation of the First Moment Equation for Elliptic Problems with Lognormal Coefficients

We study the elliptic equation with coefficient modeled as a lognormal random field. A perturbation approach is adopted, expanding the solution in Taylor series. The resulting recursive deterministic problem satisfied by the expected value of the solution (first moment equation) is discretized with full tensor product finite elements. We develop an algorithm for solving the recursive high dimensional first moment equation in a low-rank format (Tensor Train) and show its effectiveness with numerical examples.

Francesca Bonizzoni
Faculty of Mathematics
University of [email protected]

Fabio Nobile
EPFL, [email protected]

Daniel Kressner
EPFL [email protected]

MS51

Blending Finite Dimensional Manifold Samplers with Dimension-Independent MCMC

Bayesian inverse problems involve sampling probability distributions on functions. Traditional MCMC algorithms fail under mesh-refinement. Recently, a variety of dimension-independent MCMC methods have emerged, but few of them take the geometry of the posterior into account. In this work, we blend finite dimensional manifold samplers with dimension-independent MCMC to enjoy the benefit of geometry, whilst remaining robust under mesh-refinement. The key idea is to employ the manifold methods on an appropriately chosen finite dimensional subspace.

Shiwei Lan
University of [email protected]

MS51

Dynamical Low Rank Approximation of Incompressible Navier Stokes Equations with Random Parameters

We investigate the Dynamically Orthogonal approximation of time dependent incompressible Navier Stokes equations with random parameters. The approximate solution is sought in the low dimensional manifold of functions with fixed rank, written in separable form, and it is obtained by performing a Galerkin projection of the governing equations onto the time-dependent tangent space of the approximation manifold along the solution trajectory. Numerical tests at moderate Reynolds number will be presented, with emphasis on the case of stochastic boundary conditions.

Eleonora Musharbash
Ecole Polytechnique Federale de Lausanne
[email protected]

Fabio Nobile
EPFL, [email protected]

MS51

Solution of Stochastic PDEs Via Low-Rank Separated Representation: A Randomized Alternating Least Squares Approach

Finding solutions, in separated form, to stochastic PDEs via fixed point iteration requires a reduction algorithm, typically alternating least squares (ALS), to reduce the separation rank after each iteration. Conditioning of least squares matrices in ALS iterations affects the convergence of fixed point iterations. We propose a randomized variation of ALS to improve conditioning. Our numerical examples illustrate how better conditioning improves the performance of the fixed point iteration.

Matt Reynolds
University of Colorado, Boulder
[email protected]

Alireza Doostan
Department of Aerospace Engineering Sciences
University of Colorado, Boulder
[email protected]

Gregory Beylkin
University of Colorado
[email protected]

MS52

Functional Car Models for Large Spatially Correlated Functional Datasets

Abstract not available.

Veera Baladandayuthapani
Department of Biostatistics
MD [email protected]

MS52

Tukey g-and-h Random Fields

We propose a new class of trans-Gaussian random fields named Tukey g-and-h (TGH) random fields to model non-Gaussian spatial data. The proposed TGH random fields have extremely flexible marginal distributions, possibly skewed and/or heavy-tailed, and, therefore, have a wide range of applications. The special formulation of the TGH random field enables an automatic search for the most suitable transformation for the dataset of interest while estimating model parameters. An efficient estimation procedure, based on maximum approximated likelihood, is proposed and an extreme spatial outlier detection algorithm is formulated. The probabilistic properties of the TGH random fields, such as second-order moments, are investigated. Kriging and probabilistic prediction with TGH random fields are developed along with prediction confidence intervals. The predictive performance of TGH random fields is demonstrated through extensive simulation studies and an application to a dataset of total precipitation in the south east of the United States. The talk is based on joint work with Ganggang Xu.
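For reference, the marginal Tukey g-and-h transformation underlying the TGH construction applies the following map to a standard normal variate (this is the standard g-and-h formula; the spatial-field machinery of the talk is not reproduced here):

```python
import math

def tukey_gh(z, g=0.5, h=0.1):
    """Tukey g-and-h transform of a standard normal variate z:
        tau(z) = g^{-1} (exp(g*z) - 1) * exp(h*z^2 / 2),
    where g controls skewness and h >= 0 controls tail heaviness.
    For g = 0 the skewing factor degenerates to z (its g -> 0 limit),
    and g = h = 0 recovers the identity, i.e. a Gaussian marginal."""
    skew = z if g == 0 else (math.exp(g * z) - 1.0) / g
    return skew * math.exp(h * z * z / 2.0)
```

Since the map is strictly increasing in z, quantiles of the transformed variable are just transformed Gaussian quantiles, which is what makes likelihood-based estimation of (g, h) tractable.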

Marc Genton
KAUST
Saudi Arabia
[email protected]

MS52

Fusing Multiple Existing Space-Time Land Cover Products

The assessment, monitoring, and characterization of land cover (LC) is essential for global change research as it is a critical variable driving many environmental processes. Most efforts have focused either on improving the accuracy of LC products from remote sensing data or on the assimilation and synthesis of existing LC datasets using statistical interpolation techniques. We develop a methodology for fusing multiple existing LC products to produce a single LC record over a long time period and a large spatial domain. We must first reconcile the thematic map classifications of the different LC products and then interpolate LC in space and time on a specified domain. Our probabilistic interpolation approach includes a direct estimation of the uncertainty associated with the spatio-temporal prediction of LC, and we provide an illustration using six LC products over the Rocky Mountain region of the United States.

Amanda Hering
Colorado School of Mines
[email protected]

MS52

Testing for Spatial and Spatio-Temporal Stationarity

Abstract not available.

Suhasini Subba Rao
Department of Statistics
Texas A&M University
[email protected]

MS53

Uncertainty in Massflow Measurements in Pipes Due to Bends

The dominant source of uncertainty in flow-rate measurements in pipes (with e.g. ultrasonic flow meters) is the lack of knowledge of the true velocity profile. In particular, a relatively narrow spectrum of velocity profiles can appear downstream of a bend. This study aims at quantifying the uncertainty in massflow measurements due to bends, and at designing sensor locations and multi-sensor systems that minimize this uncertainty.

Zeno Belligoli, Richard P. Dwight
Delft University of Technology
[email protected], [email protected]

MS53

Surrogate-Based Robust Airfoil Optimization Considering Geometrical Uncertainties

We will present surrogate-based robust shape optimization of transonic airfoils. A large number of geometrical uncertainties on the airfoil is modeled by a truncated Karhunen-Loève expansion to achieve a dramatic reduction of the large number of parameters. The combination of quasi-Monte Carlo sampling and gradient-enhanced Kriging enables us to efficiently calculate statistics of the aerodynamic coefficients, which are used to evaluate the objective function to be optimized in our robust design optimization framework.

Daigo Maruyama
DLR, [email protected]

Dishi Liu
C2A2S2E, German Aerospace Center (DLR)
[email protected]

Stefan Gortz
DLR [email protected]

MS53

An Investigation of Uncertainty Effects in Mixed Hyperbolic-Parabolic Problems Due to Stochastically Varying Geometry

We study mixed hyperbolic parabolic problems with uncertain stochastically varying geometries. The numerical solution is computed using a finite difference formulation on summation-by-parts form with weak boundary conditions. We prove that the continuous problem is well posed and that the semi-discrete problem is stable. The statistics are computed non-intrusively using quadrature rules given by the probability density function of the random variable. Numerical calculations are performed and the statistical properties of the solution are discussed.

Jan Nordstrom
Mathematics
Linkoping University
[email protected]

Markus K. Wahlsten
Department of Mathematics
Linkoping University
SE 581 83 Linkoping, Sweden
[email protected]

MS53

Robust Design Optimization of a Supersonic Natural Laminar Flow Wing-Body

A robust aerodynamic shape design problem related to an aerodynamic configuration of a supersonic business jet wing-body is illustrated here. The baseline configuration was designed within the SUPERTRACEU project, and the aim of the present work is to improve the robustness and reliability of the baseline in the presence of perturbations and uncertainties in operating conditions and in wing shape. The robust optimization approach adopted is based on the usage of value-at-risk and conditional value-at-risk.

Domenico Quagliarella, Emiliano Iuliano
Centro Italiano Ricerche Aerospaziali
[email protected], [email protected]

MS54

Causality Or Correlation? Multiscale Inference and An Application to Geoscience

One of the challenges in the analysis of multiscale processes is to learn about the causality relations in the considered systems on a certain level of resolution - and to distinguish true causality from simple statistical correlations. Proper inference of such causality relations, besides giving an additional insight into such processes, can allow improving the respective mathematical and computational models. However, inferring such relations directly from equations/models is hampered by the multiscale character of the underlying processes and the presence of latent/unresolved/subgrid scales. Implications of missing/unresolved scales for this problem will be discussed and an overview of methods for data-driven causality inference will be given. A recently-introduced data-driven multiscale causality inference framework for Boolean data will be explained and illustrated on analysis of historical climate teleconnection series and on inference of their mutual influences on a monthly scale. It will also be shown how the obtained causality networks can be used for the network-driven regularization of the ill-posed data analysis problems for noisy data.

Illia Horenko
Dept. Mathematics
Free University of [email protected]

MS54

Rigorous Intermittency in Turbulent Diffusion Models with a Mean Gradient

Intermittency of a passive tracer can be described as large spikes randomly occurring in the time sequence or exponential-like fat tails in the probability density function. This type of intermittency is subtle and occurs without any positive Lyapunov exponents in the system. By exploiting an intrinsic conditional Gaussian structure, the enormous fluctuation in conditional variance of the passive tracer is found to be the source of intermittency in these models. An intuitive physical interpretation of such enormous fluctuation can be described through the random resonance between Fourier modes of the turbulent velocity field and the passive tracer. This intuition can be rigorously proved in a long time slow varying limit, where the limiting distribution of the passive tracer is computed through an integral formula.

Xin T. [email protected]

Andrew Majda
Courant Institute [email protected]

MS54

A Probabilistic Decomposition-Synthesis Method for the Quantification of Rare Events

We consider the problem of probabilistic quantification of dynamical systems that have heavy-tailed characteristics. These heavy-tailed features are associated with rare transient responses due to the occurrence of internal instabilities. Systems with these characteristics can be found in a variety of areas including mechanics, fluids, and waves. Here we develop a computational method, a probabilistic decomposition-synthesis technique, that takes into account the nature of internal instabilities to inexpensively determine the non-Gaussian probability density function for a quantity of interest. Our approach relies on the decomposition of the statistics into a non-extreme core, typically Gaussian, and a heavy-tailed component. We demonstrate the probabilistic decomposition-synthesis method for rare events in two dynamical systems exhibiting extreme events: a two-degree-of-freedom system of nonlinearly coupled oscillators, and a nonlinear envelope equation characterizing the propagation of unidirectional water waves.

Mustafa [email protected]

Themistoklis Sapsis
Massachusetts Institute of Technology
[email protected]

MS54

Improving Prediction Skill of Imperfect Turbulent Models Through Statistical Response and Information Theory

A recent mathematical strategy for calibrating imperfect models in a training phase and accurately predicting the response by combining information theory and linear statistical response theory is developed in a systematic fashion for both full and reduced order models. A systematic hierarchy of simple statistical imperfect closure schemes is designed and tested, built through new local and global statistical energy conservation principles combined with statistical equilibrium fidelity. As a further development, we use the stochastic models to predict intermittency in turbulent diffusion of passive tracer systems. Empirical information theory is used to measure the autocorrelation function under spectral representation. The optimal model improves the prediction skill uniformly in accurately capturing the crucial tracer statistical features.

Di Qi
New York University
[email protected]

MS55

Making Uncertainty Quantification of Multi-Component Reactive Transport Manageable

Simulating multi-component (bio)reactive transport in the subsurface is computationally very expensive, so that direct application of Monte-Carlo techniques for uncertainty quantification (UQ) is prohibitive. Linearized UQ is hampered by nonlinearity, but often the problems can be simplified by identifying conservative components, which are transported linearly, and restricting the reactions to primary controls. We present examples of estimating pdfs of reactive-constituent concentrations in heterogeneous media under conditions controlled by mixing or by electron-donor supply.

Olaf A. Cirpka
University of [email protected]

Matthias Loschko
University of [email protected]

Thomas Wohling
Technical University of [email protected]

MS55

Concentration Statistics for Transport in Heterogeneous Media: Self-Averaging, Mixing Models and the Evolution of Uncertainty

We study the PDF of concentration point values in heterogeneous porous media. We contrast the concentration PDFs obtained through spatial sampling and stochastic sampling, and discuss them in terms of the mean and mean squared concentrations. Specifically, we focus on the role of medium heterogeneity and the dynamics of mixing in the evolution of the concentration PDF, which we quantify in terms of the Lagrangian fluid deformation. We discuss mixing models that reflect these dynamics in an effective way and allow us to quantify the evolution of the concentration PDF and thus concentration uncertainty. These properties depend on the mixing state of the system and the efficiency of local mass transfer properties in homogenizing the transport system. This has an impact also on the self-averaging properties of observables such as the effective dispersion coefficients.

Marco Dentz
IDAEA, Spanish National Research Council (CSIC)
[email protected]

Tanguy Le Borgne
University of Rennes, France
[email protected]

MS55

Subsurface Flow Simulations in Stochastic Discrete Fracture Networks

We focus on the Discrete Fracture Network model, in which the medium is modeled as a 3D set of intersecting polygons resembling fractures. Within this framework, we consider several sources of uncertainty, involving both the network geometry and hydrogeological properties (e.g., fracture transmissivities, orientation, dimensions, ...). We address the problem of quantifying the influence of these stochastic parameters on the output of DFN models, pursuing this target by applying modern UQ techniques to several stochastic configurations.

Stefano Berrone
Politecnico di Torino
Dipartimento di Scienze Matematiche "G.L. Lagrange"
[email protected]

Claudio Canuto
Politecnico di Torino
[email protected]

Sandra Pieraccini
Dipartimento di Scienze Matematiche
Politecnico di Torino, [email protected]

Stefano Scialo'
Politecnico di Torino
Dipartimento di Scienze Matematiche
[email protected]

MS55

Sparse Grid and Monte Carlo Methods for Groundwater Transport Problems

We focus on groundwater transport problems and consider arrival times related to particle trajectories subject to molecular diffusion and driven by a stochastic Darcy velocity obtained from log-normally distributed permeabilities. The goal is to efficiently compute statistics of such arrival times, e.g. their mean or the probability of exiting the physical domain in a given time horizon. We discuss several scenarios and propose both adaptive sparse grid stochastic collocation and Monte Carlo type schemes.
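A minimal sketch of the particle-based arrival-time computation in its simplest form (a hypothetical 1D toy with a constant drift standing in for the random Darcy velocity, not the authors' setup):

```python
import math, random

def exit_times(n_particles=2000, v=1.0, D=0.01, L=1.0, dt=1e-3, seed=7):
    """Monte Carlo arrival times for a 1D toy: particles released at
    x = 0 follow the Euler-Maruyama update
        x <- x + v*dt + sqrt(2*D*dt)*xi,   xi ~ N(0,1),
    (advection at a constant 'Darcy' velocity v plus molecular
    diffusion D) until they first pass x = L. For Brownian motion with
    drift, the mean first-passage time to level L is exactly L/v."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_particles):
        x, t = 0.0, 0.0
        while x < L:
            x += v * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
            t += dt
        times.append(t)
    return times
```

In the setting of the abstract, v would itself be a random field resampled per realization, so the outer loop doubles as the sampling loop over permeabilities.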

Francesco Tesei
ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE
[email protected]

Fabio Nobile
EPFL, [email protected]

Sebastian Krumscheid
Ecole Polytechnique Federale de Lausanne (EPFL)
[email protected]

MS56

Recent Advances in Dakota UQ

Sandia's Dakota software facilitates advanced exploration of simulations with various algorithms, including sensitivity analysis, optimization, and UQ. This presentation will survey Dakota's capabilities, interfaces, and algorithms. It will then highlight recent developments, including architecture improvements and algorithm R&D that aim to make uncertainty quantification practical for complex science and engineering models. Growth directions such as architecture modularity, graphical user interfaces, and improved algorithms for sensitivity analysis, surrogate modelling, and Bayesian inference will be reviewed.

Brian M. Adams
Sandia National Laboratories
Optimization/Uncertainty [email protected]

Patricia D. Hough
Sandia National Laboratories
[email protected]

MS56

The Parallel C++ Statistical Library QUESO: Quantification of Uncertainty for Estimation, Simulation and Optimization

QUESO is a tool for quantifying uncertainty for a specific forward problem. QUESO solves Bayesian statistical inverse problems, which pose a posterior distribution in terms of prior and likelihood distributions. QUESO executes MCMC, an algorithm well suited to evaluating moments of high-dimensional distributions. While many libraries exist that solve Bayesian inference problems, QUESO is specialized software designed to solve such problems by utilizing parallel environments demanded by large-scale forward problems. QUESO is written in C++.

Damon McDougall
Institute for Computational Engineering and Sciences
The University of Texas at Austin
[email protected]

MS56

Handling Large-Scale Uncertainty Quantification with SmartUQ

Uncertainty quantification methods are widely used in engineering and science. However, uncertainty quantification for large-scale problems poses great challenges. Through a combination of novel sampling and prediction techniques, several of these challenges have been overcome. An overview of these new techniques is presented along with examples of the application of UQ methods using these techniques as implemented in the SmartUQ software package.

Peter Qian
University of Wisconsin - Madison
[email protected]

MS56

Chaospy: A Modular Implementation of Polynomial Chaos Expansions and Monte Carlo Methods

Chaospy is a Python toolbox specifically developed to implement polynomial chaos expansions and advanced Monte Carlo methods. The toolbox is highly modular with a programming syntax close to the mathematical theory. This talk will consist of an implementation walk-through with theoretical review and practical examples. The focus will be on (non-intrusive) methods where the software for the underlying model can be reused without modifications.

Simen [email protected]

MS57

Efficient Sampling Schemes for Recovering Sparse PCE

ℓ1-minimization is an efficient technique for estimating the coefficients of a Polynomial Chaos Expansion (PCE) from a limited number of model simulations. In this talk I will present a generalized sampling and preconditioning scheme to accurately approximate high-dimensional models from limited data using any orthonormal PCE basis. The efficacy of this method will be demonstrated with various numerical and theoretical results.
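To illustrate the general idea of ℓ1-regularized coefficient estimation for a PCE (a generic sketch, not the generalized sampling and preconditioning scheme of the talk), here is a toy example using iterative soft thresholding on an orthonormal Legendre basis; the model, sizes and tuning constants are all illustrative:

```python
import math, random

def legendre_row(x, order):
    """Orthonormal Legendre basis [phi_0(x), ..., phi_order(x)] for the
    uniform measure on [-1, 1]: phi_j = sqrt(2j+1) * P_j, built with the
    three-term recurrence (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    p = [1.0, x]
    for n in range(1, order):
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return [math.sqrt(2 * j + 1) * p[j] for j in range(order + 1)]

def ista_pce(m=20, order=9, lam=0.005, iters=4000, seed=3):
    """Estimate sparse PCE coefficients by minimizing
        0.5 * ||A c - y||^2 + lam * ||c||_1
    with ISTA (iterative soft thresholding), a basic l1 solver.
    Toy data: y(x) = phi_2(x) + 0.5 * phi_5(x) at m random points."""
    rng = random.Random(seed)
    A = [legendre_row(rng.uniform(-1, 1), order) for _ in range(m)]
    y = [row[2] + 0.5 * row[5] for row in A]
    d = order + 1
    # crude step size from the bound ||A^T A||_2 <= ||A||_F^2
    step = 1.0 / sum(a * a for row in A for a in row)
    c = [0.0] * d
    for _ in range(iters):
        r = [sum(A[i][j] * c[j] for j in range(d)) - y[i] for i in range(m)]
        grad = [sum(A[i][j] * r[i] for i in range(m)) for j in range(d)]
        z = [cj - step * gj for cj, gj in zip(c, grad)]
        c = [math.copysign(max(abs(zj) - step * lam, 0.0), zj) for zj in z]
    return c
```

The sparsity-promoting soft-threshold step keeps the coefficients off the true support near zero; in the genuinely underdetermined (compressive) regime the same iteration applies, with recovery guarantees depending on the sampling and preconditioning of A.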

John D. Jakeman
Sandia National Laboratories
[email protected]

Akil Narayan
University of Utah
[email protected]

MS57

Germ-Transformed Polynomial Chaos Expansions

When confronted with configurations where the number of available code evaluations is low compared to the number of input variables, only a low maximal polynomial order can be used, and the relevance of the associated polynomial chaos expansion (PCE) can be limited. To circumvent this problem, we propose, first, to transform the input variables by a parametrized function, and then to carry out a PCE on these modified input variables.

Guillaume [email protected]

Christian Soize
University of Paris-Est
MSME UMR 8208 [email protected]

MS57

Mixing Auto-Regressive Models and Sparse Polynomial Chaos Expansions for Time-Variant Problems

Pure vanilla polynomial chaos expansions are known to fail at representing the time-varying output of computational models with random input parameters, especially over long time horizons. Such problems appear e.g. in earthquake engineering, where the displacement history of a structure is of interest. In this talk we introduce autoregressive exogenous (ARX) models in conjunction with sparse polynomial chaos expansions to provide efficient time-dependent surrogates, and we show their accuracy in selected applications.

Bruno Sudret, Chu Mai

ETH [email protected], [email protected]

MS57

Iteration Method for Enhancing the Sparsity

The combination of generalized polynomial chaos (gPC) and compressive sensing is a useful tool for uncertainty quantification when the available data is limited. We propose a method to detect important subspaces through iterations, hence the sparsity of the representation of the system is enhanced. With this enhancement, the efficiency of the compressive sensing algorithm can be improved, and a more accurate gPC approximation is available. We use PDEs to demonstrate the efficiency of this method.

Xiu Yang, Huan Lei, Nathan Baker
Pacific Northwest National Laboratory
[email protected], [email protected], [email protected]

Guang Lin
Purdue University
[email protected]

MS58

Probabilistic Active Subspaces: Learning High-Dimensional Noisy Functions Without Gradients

We develop a probabilistic version of active subspaces (AS) which is gradient-free and robust to observational noise, relying on a novel Gaussian process regression with built-in dimensionality reduction. We demonstrate that our method is able to discover the same AS as the traditional gradient-based approach. An added benefit of our approach is the determination of the dimensionality of the AS using Bayesian model selection. We use our model to propagate geometric/material uncertainties through high-dimensional granular crystals.

Ilias Bilionis, Rohit Tripathy, Marcial Gonzalez
Purdue University
[email protected], [email protected], [email protected]

MS58

Optimization Under Uncertainty of High-Dimensional, Sloppy Models

This paper is concerned with the optimization of high-dimensional, complex models in the presence of uncertainty. This is hindered by the usual difficulties encountered in UQ tasks, but also by the need to solve a nonlinear optimization problem involving large numbers of design variables. We recast the problem as one of probabilistic inference and employ a Variational Bayesian (VB) formulation that also reveals a low-dimensional set of most sensitive directions in the design variable space.

Phaedon S. Koutsourelakis
Technical University [email protected]

MS58

Recursive Cokriging Models for Global Sensitivity Analysis of Multi-Fidelity Computer Codes

Complex computer codes are widely used in science and engineering to model physical phenomena. Global sensitivity analysis aims to identify the input parameters which have the most important impact on the code output. Sobol indices are a popular tool for performing such analysis. However, their estimation requires a large number of simulations and often cannot be completed under reasonable time constraints. To handle this problem, we consider in this presentation the problem of building a fast-running approximation, also called a surrogate model, of a complex computer code. The cokriging-based surrogate model is a promising tool to build such an approximation when the complex computer code can be run at different levels of accuracy. We present here an original approach to performing multifidelity cokriging which is based on a recursive formulation. This approach yields original results. First, closed-form formulas for the universal cokriging predictive mean and variance can be derived. Second, it has a reduced computational complexity compared to previous multifidelity models. This makes it easy to provide Sobol index estimates that take the metamodel error into account.

Loic Le Gratiet
EDF R & D
Universite Nice Sofia [email protected]

Josselin Garnier
Universite Paris [email protected]

MS58

Multi-Fidelity Information Fusion Algorithms for High Dimensional Systems and Massive Data-Sets

We construct response surfaces of complex deterministic and stochastic dynamical systems by blending variable information sources through Gaussian processes and auto-regressive stochastic schemes. Hierarchical functional decompositions and local projections encode structure in GP priors, and enable the decomposition of the global learning problem into a series of low-dimensional tasks. These developments lead to linear complexity algorithms, as demonstrated in benchmark problems involving deterministic and stochastic fields in up to 10^5 dimensions and with up to 10^5 training points.

Paris Perdikaris
Massachusetts Institute of Technology
[email protected]

Daniele Venturi
University of California, Santa Cruz
[email protected]

George Em Karniadakis
Brown University
george [email protected]

MS59

Optimal L2-Norm Empirical Importance Weights for the Change of Probability Measure

We propose an optimization formulation to determine a set of empirical importance weights to achieve a change of probability measure, in the case of random samples generated from an unknown distribution. The importance weights associated with the random samples are computed such that they minimize the L2-norm between the weighted empirical distribution function and the desired distribution function. The resulting optimization problem is solved efficiently at scale. A variety of test cases are presented.

Sergio Amaral
Department of Aeronautics & Astronautics
Massachusetts Institute of Technology
[email protected]

Douglas Allaire
Texas A&M University
[email protected]

Karen E. Willcox
Massachusetts Institute of Technology
[email protected]
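The optimization described above can be sketched in a toy setting (an illustrative reconstruction, not the authors' solver or test cases): minimize the squared L2 mismatch between the weighted empirical CDF and a target CDF at the sample points, over the probability simplex, here by projected gradient descent:

```python
import numpy as np
from math import erf, sqrt

# Reweight samples drawn from N(0,1) so that their weighted empirical CDF
# matches the CDF of a target measure N(0.5,1) in a least-squares sense.

def target_cdf(t):                      # CDF of the desired N(0.5, 1) law
    return 0.5 * (1.0 + erf((t - 0.5) / sqrt(2.0)))

def project_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - 1.0) / np.arange(1, v.size + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

rng = np.random.default_rng(1)
x = np.sort(rng.normal(0.0, 1.0, size=30))     # samples from the "wrong" law
M = (x[None, :] <= x[:, None]).astype(float)   # M[j, i] = 1{x_i <= x_j}
F = np.array([target_cdf(t) for t in x])       # desired CDF at those points

w = np.full(x.size, 1.0 / x.size)              # start from uniform weights
step = 0.5 / np.linalg.norm(M, 2) ** 2         # safe step for this quadratic
loss = lambda v: np.sum((M @ v - F) ** 2)
loss0 = loss(w)                                # mismatch of uniform weights
for _ in range(5000):
    w = project_simplex(w - step * 2.0 * M.T @ (M @ w - F))
loss1 = loss(w)                                # mismatch of optimized weights
```

The weights remain a probability vector by construction, and the weighted empirical CDF fits the target CDF much more closely than uniform weights do.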

MS59

Zero-Variance Approaches in Static Reliability Problems

Computing reliability metrics on complex systems is done either by considering stochastic processes modeling the evolution of the system with time, or by taking a static view where the system and its components have a random but fixed state (the usual one being binary: either they work or they don't). The latter are sometimes called static models, because time is not an explicit parameter to deal with. Numerical evaluation of these metrics is hard or impossible in many cases because of the model's size. Monte Carlo estimation is a possible solution, but then the rare event problem strikes. For dealing with it, one of the most promising approaches today is the "zero-variance" idea, established when the model is a stochastic process. In this talk we show how to adapt the idea to a static setting, and we illustrate it with numerical results showing that it can be extremely powerful for this class of models as well.

Gerardo Rubino
INRIA [email protected]
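The flavor of the zero-variance idea can be conveyed with a textbook importance-sampling toy (an illustration of the underlying rare-event technique, not the talk's actual static-model scheme). A parallel system of k components fails only if all k components fail; each fails independently with probability eps, so the failure probability eps**k is hopeless for crude Monte Carlo. The zero-variance sampler would draw component states from their law conditioned on system failure; here we imitate it by inflating the per-component failure probability and reweighting:

```python
import numpy as np

rng = np.random.default_rng(2)
k, eps, q, n = 5, 0.01, 0.5, 10_000
p_true = eps ** k                           # 1e-10, far too rare for crude MC

states = rng.random((n, k)) < q             # component failures under IS law
fail = states.all(axis=1)                   # system failure indicator
n_fail = states.sum(axis=1)
# likelihood ratio: prod_i (eps/q)^{x_i} ((1-eps)/(1-q))^{1-x_i}
lr = (eps / q) ** n_fail * ((1 - eps) / (1 - q)) ** (k - n_fail)
p_is = np.mean(fail * lr)                   # unbiased IS estimate

crude = np.mean(np.all(rng.random((n, k)) < eps, axis=1))  # almost surely 0
```

With the same budget, crude sampling essentially never sees a failure, while the reweighted estimator has a small relative error; the true zero-variance change of measure would drive that error to zero.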

MS59

From Probability-Boxes to Imprecise Failure Probabilities Using Meta-Models

A common situation in engineering practice is to have a small set of measurements which is insufficient to characterize a probabilistic model. Probability-boxes then provide a framework to describe the inherent uncertainty more adequately. The propagation of probability-boxes through a computational model, however, is computationally expensive, and especially in the case of rare events may become intractable. The use of meta-models reduces the computational effort and allows for an efficient estimation of failure probabilities.

Roland Schobi, Bruno Sudret
ETH Zurich
[email protected], [email protected]

MS59

Speeding Up Monte Carlo Simulations by Using an Emulator

Monte Carlo simulation is used in integrated circuit design for yield optimization, statistical variability analysis and predicting failure probability. Monte Carlo is time consuming but independent of the number of statistical parameters. Therefore, speedup is crucial and a new approach is needed. We introduce an emulator technique based on an accept-reject algorithm for accelerating the Monte Carlo simulations, obtaining a speedup of 10x.

Anuj Kumar Tyagi
Technical University Eindhoven, [email protected]

Wil Schilders
Eindhoven University of Technology
[email protected]

Xavier Jonsson
Mentor Graphics, Grenoble
xavier [email protected]

Theo Beelen
Technical University Eindhoven, [email protected]

MS60

H-Matrix Accelerated Second Moment Analysis for Potentials with Rough Correlation

The efficient solution of operator equations with random right-hand side is considered. The solution's two-point correlation is well understood if the two-point correlation of the right-hand side is known and sufficiently smooth. Unfortunately, the problem becomes more involved in the case of rough data. However, the rough data and also the inverse operators can be efficiently approximated by means of H-matrices. This enables us to solve the problem in almost linear time.

Juergen Doelz, Helmut Harbrecht
Universitaet Basel
Departement of Mathematics and Computer [email protected], [email protected]

Michael Peters
University of Basel
[email protected]

Christoph Schwab
ETH Zurich
[email protected]

MS60

Lattice Test Systems for QMC Integration

We discuss a number of lattice systems which are relevant for models in elementary particle physics. These systems can serve as benchmark models to test the feasibility of QMC and iterated numerical integration methods. In particular, it will be most interesting to see whether these methods can outperform the standard MC techniques presently used in particle physics computations.

Karl Jansen
Deutsches Elektronen-Synchrotron
[email protected]

MS60

Very High Dimensional Integration Problems in Quantum Lattice Gauge Theory

We consider very high d-dimensional integration problems arising in the field of lattice gauge theory. The integration problems range from the scalar case (integration over [0,1]^d or R^d) to integration over the d-fold product of spheres or even the d-fold product of more complex manifolds like SU(3). We report improvements over traditional MCMC methods for one-dimensional physical systems with very high d, and present problems from general models in two and three physical dimensions.

Hernan Leovey
Humboldt University, [email protected]

MS60

A Practical Multilevel Higher Order Quasi-Monte Carlo Method for Simulating Elliptic PDEs with Random Coefficients

We present a Multilevel Quasi-Monte Carlo method, based on rank-1 lattice rules, and demonstrate its performance for solving PDEs with random coefficients. We numerically show that the cost of the method is inversely proportional to the requested tolerance on the error. Next, we present the extension of the multilevel idea to higher order digital nets, and discuss the corresponding computational complexity.

Dirk Nuyens
Department of Computer Science
KU Leuven
[email protected]

Pieterjan Robbe
Department of Computer Science, KU Leuven
[email protected]

Stefan Vandewalle
Department of Computer Science
Katholieke Universiteit Leuven
[email protected]
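The rank-1 lattice rules underlying the method can be sketched in a few lines (a plain Fibonacci lattice in 2D with an illustrative smooth integrand; the generating vector and test function are not taken from the talk, and the multilevel and higher-order constructions are not shown):

```python
import numpy as np

# Rank-1 lattice rule: x_i = frac(i * z / n), then an equal-weight average.
n = 610                                      # number of points (Fibonacci F_15)
z = np.array([1, 377])                       # generating vector (1, F_14)
pts = (np.arange(n)[:, None] * z) % n / n    # all n lattice points in [0,1)^2

def f(x):                                    # smooth test integrand whose
    return np.prod(1.0 + (x - 0.5), axis=1)  # exact integral over [0,1]^2 is 1

qmc_est = f(pts).mean()                      # equal-weight lattice cubature
err = abs(qmc_est - 1.0)                     # far smaller than plain MC noise
```

For this smooth integrand the n-point lattice rule is far more accurate than n random Monte Carlo samples, which is the effect the multilevel scheme exploits level by level.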

MS61

Several Visualization Issues in Uncertainty and Sensitivity Analysis of Model Outputs

While exploring a simulation computer code, one can be faced with the complexity of its input and/or output variables. For temporal or spatial phenomena, tools have to be adapted to the functional and uncertain nature of the data in order to understand and synthesize its behavior. We will illustrate several practical issues coming from various applications, as well as applications of new visualization methods. We will focus on the issue of uncertainty analysis and sensitivity analysis of model outputs.

Bertrand Iooss, Anne-Laure Popelin
EDF R&D
[email protected], [email protected]

MS61

Functional Data Visualization: An Extension of the Highest Density Regions Boxplot

In this work, we propose a visualization method adapted from the Highest Density Region boxplot, a functional version of the boxplot. This tool is especially aimed at visualizing multivariate functional data simultaneously. It relies on the study of the coefficients of the data on the first few components of a decomposition basis. The plotted descriptive statistics are the so-called multivariate functional median, envelopes of the most central regions, and outlying curves.

Simon Nanty
Commissariat a l’energie atomique
St Paul lez Durance (France)
[email protected]

Celine Helbert
Institut Camille Jordan
[email protected]

Amandine Marrel
CEA Cadarache, [email protected]

Nadia Perot
Commissariat a l’energie atomique
St Paul lez Durance (France)
[email protected]

Clementine Prieur
LJK / Universite Joseph Fourier Grenoble (France)
INRIA Rhone Alpes, [email protected]

MS61

Visualizing the Uncertainty Represented by Vector-Valued Ensemble Fields

In this talk I will discuss visualization techniques for analysing the uncertainty that is represented by an ensemble of vector fields, to determine the major trends and outliers in the shape and spatial location of relevant field structures. I will focus on the problem of how to generate and visualize distributions for streamlines in vector fields, based on the clustering of similar shapes and by using visual abstractions for the streamline distributions in these clusters.

Rudiger Westermann
Technische Universitat [email protected]

MS61

Analysis and Visualization of Ensembles of Shapes

The visualization of stochastic or uncertain data is typically referred to as "uncertainty visualization". However, this terminology implies an associated set of assumptions about the paradigm for visualization, which is typically to display an answer that has been modulated or augmented by an associated uncertainty. This, however, asserts the existence of a renderable answer, which defies one of the underlying goals or principles of visualization: the exploration of data to obtain a holistic understanding or to discover properties that have no associated a priori hypothesis. An alternative paradigm is "variability visualization", where the goal of the visualization is to explore or better understand the set of possible outcomes, or the probability distribution, associated with a set of data. One example of such an approach is the method of contour boxplots, which relies on a generalization of data depth, from descriptive statistics, to render the variability of solutions in an ensemble of isocontours.

Ross Whitaker
School of Computing
University of Utah
[email protected]

MS62

Efficiently Computing Covariance of Parameter Estimates in Inverse Problems

Uncertainty quantification in inverse problems presents distinct opportunities and challenges relative to UQ in forward models. A formal solution for the a posteriori probability distribution (PPD) for inverse problems is often readily available, but is difficult to work with practically. For unimodal posteriors, the PPD may be approximately characterized by its mode and covariance about the mode. Computing these quantities can present formidable challenges. In practice, finding the MAP estimate may require minimizing a very high dimensional function (O(10^5-10^6) parameters) with complex (e.g. discretized nonlinear) PDE constraints. For such a problem, merely storing the final covariance may require tens of terabytes. We show that the structure of inverse problems leads to a sparse update on a predictable and a priori known covariance operator. This leads to an efficient sparse basis in which to compute and store the covariance. We show how the covariance can be accurately computed with relatively little computation over and above that used in finding the MAP estimate. This method can thus be used to compute the uncertainty in an inverse problem solution relatively efficiently. We describe the approach formally in a relatively general setting, and demonstrate it with examples in inverse diffusion and inverse elasticity.

Paul Barbone
Department of Aerospace & Mechanical Engineering
Boston University
[email protected]

MS62

Estimating Large-Scale Chaotic Dynamics

Obvious likelihood approaches are not available for chaotic systems, due to the sensitivity of the trajectories to any perturbations in the calculations. For large systems, such as those used for weather prediction, Ensemble Prediction Systems (EPS) are used to quantify the uncertainty. Here we extend EPS, with essentially no additional CPU cost, to online estimation by perturbing the model parameters as well. The estimation can be performed either by a covariance update process using importance weights, or by employing evolutionary (DE) optimisation. Here we emphasize the use of DE for multiple cost function situations.

Heikki Haario
Lappeenranta University of Technology
Department of Mathematics and Physics
[email protected]

MS62

Dynamic Mutual Information Fields and Adaptive Sampling

We present a novel methodology for predicting mutual information fields and for adaptive sampling in high-dimensional fields. The methodology exploits the governing nonlinear dynamics and captures the non-Gaussian structures. The spatially and temporally varying mutual information field in general nonlinear dynamical systems is efficiently quantified. Optimal observation locations are determined by maximizing the mutual information between the candidate observations and the variables of interest. The globally optimal sequence of future sampling locations is rigorously determined by dynamic programming approaches that combine mutual information fields with the predictions of the forward reachable set. All the results are exemplified and their performance is quantitatively assessed using a variety of simulated fluid and ocean flows.

Tapovan Lolla, Pierre [email protected], [email protected]

MS62

Convolved Hidden Markov Models Applied to Geophysical Inversion

Consider a sequence of categorical facies classes along a subsurface well profile. Convolved geophysical observations are available. Focus is on assessing the categorical profile given the geophysical observations. A Bayesian inversion, with convolved likelihood and Markov chain prior, is defined. The posterior is not easily assessable due to extensive coupling. An approximate k-factorial posterior is defined and exactly assessed by the extended forward-backward algorithm. The exact posterior is assessed by MCMC sampling using this approximate posterior.

Henning Omre, Torstein Fjeldstad
Norwegian University of Science and Technology
[email protected], [email protected]

MS63

A TV-Gaussian Prior for Infinite Dimensional Bayesian Inverse Problems

Many scientific and engineering problems require us to perform Bayesian inference in function spaces, in which the unknowns are of infinite dimension. In such problems, choosing an appropriate prior distribution is an important task. In particular, we consider problems where the function to infer is subject to sharp jumps, which render the commonly used Gaussian measures unsuitable. On the other hand, the so-called total variation (TV) prior can only be defined in a finite dimensional setting, and does not lead to a well-defined posterior measure in function spaces. In this work we present a TV-Gaussian (TG) prior to address such problems, where the TV term is used to detect sharp jumps of the function, and the Gaussian distribution is used as a reference measure so that it results in a well-defined posterior measure in the function space. We also present an efficient Markov Chain Monte Carlo (MCMC) algorithm to draw samples from the posterior distribution of the TG prior. With numerical examples we demonstrate the performance of the TG prior and the efficiency of the proposed MCMC algorithm.

Jinglai Li
Shanghai JiaoTong University, [email protected]

MS63

High-Dimensional Bayesian Inversion with Priors Far from Gaussians

We discuss Bayesian inversion with priors that strongly deviate from a Gaussian structure, motivated by sparsity and edge preserving approaches in imaging. A major challenge is the highly anisotropic structure of the priors, which calls for evaluations with different measures than variance or mean square risks. We present some approaches based on Bregman risks, which also provide novel viewpoints on MAP estimates. Moreover, we comment on some pitfalls and the appropriate modelling of prior knowledge in such situations.

Martin Burger
University of Muenster
Muenster, [email protected]

Felix Lucka
University College London
[email protected]

MS63

Besov Space Priors and X-Ray Tomography

Consider an indirect measurement m = Au + e, where u is a function and e is random error. The inverse problem is "given m, find an approximation of u in a noise-robust way". In practical Bayesian inversion one models m, u and e as random variables and constructs a finite-dimensional computational model of the measurement. The data m is complemented with a priori information using Bayes' formula. Approximate reconstructions of u can then be achieved as point estimates from the posterior distribution. Discretization-invariance means that if the computational model is refined, then the Bayesian estimates and probability distributions converge towards infinite-dimensional limiting objects. Computational examples are presented demonstrating the properties of wavelet-based Besov space priors, which are the first non-Gaussian priors known to be discretization-invariant. Furthermore, we present a theoretical approach for resolving the white noise paradox appearing as the discretisation of the measurement is refined arbitrarily. Namely, realisations of infinite-dimensional white noise are square-integrable only with probability zero, which motivates a modification of the classical inversion approach.

Samuli Siltanen
University of Helsinki
[email protected]

MS63

Efficient Bayesian Estimation Using Conditional Expectations

In this talk we will show a novel method of computing approximations to the minimum mean square error (MMSE) estimator, which is equivalent to the conditional expectation and as such a kind of Bayesian estimator, and its use for assimilation of measured data into surrogate models of stochastic responses. The proposed method can be seen as a generalisation of the Kalman filter to general distributions and non-linear measurement operators. An application to the identification of the log-conductivity of a subsurface Darcy flow by measurements of a pollutant will be shown.

Elmar Zander
TU Braunschweig
Inst. of Scientific Computing
[email protected]

Hermann G. Matthies
Institute of Scientific Computing
Technische Universitat Braunschweig
[email protected]

MS64

Sparse Identification of Nonlinear Dynamics: Governing Equations from Data

Abstract not available.

Steven Brunton
University of Washington
[email protected]

MS64

Low-Dimensional Modeling and Control of Nonlinear Dynamics Using Cluster Analysis

Abstract not available.

Eurika Kaiser
Institute PPRIME, [email protected]

MS64

Bayesian Inference of Biogeochemical-Physical Dynamical Models

Abstract not available.

Pierre [email protected]

MS64

Structure and Resilience of Two-Dimensional Fluid Flow Networks

Abstract not available.

Kunihiko Taira
Florida State University
Mechanical [email protected]

MS65

Title Not Available

Abstract not available.

Ben Calderhead
Imperial College London
[email protected]

MS65

Parallel Adaptive Importance Sampling

Abstract not available.

Simon Cotter
University of Oxford
Mathematical [email protected]

MS65

Incremental Local Approximations for Computationally Intensive MCMC on Targeted Marginals

We introduce approximate MCMC methods to characterize selected marginals of high-dimensional probability distributions, in a setting where density evaluations are computationally taxing. Our approach uses importance sampling to estimate the targeted marginal density, as in pseudo-marginal MCMC, but combines noisy estimates via local polynomial or GP approximations to capture the smooth underlying marginal density. The approach is incremental in that local approximations are refined as MCMC sampling proceeds, yielding an asymptotically exact low-dimensional chain.

Andrew D. [email protected]

Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]

MS65

Variance Estimation and Allocation in the Particle Filter

We introduce estimators of the variance and weakly consistent estimators of the asymptotic variance of particle filter approximations. These estimators are defined using only a single realization of a particle filter, and can therefore be used to report estimates of Monte Carlo error together with such approximations. We also provide weakly consistent estimators of individual terms appearing in the asymptotic variance, again using a single realization of a particle filter. When the number of particles in the particle filter is allowed to vary over time, the latter permits approximation of their asymptotically optimal allocation. Some of the estimators are unbiased, and hence generalize the i.i.d. sample variance to the non-i.i.d. particle filter setting.

Anthony [email protected]

Nick [email protected]

MS66

Polynomial Chaos-Based Bayesian Inference of K-Profile Parametrization in a General Circulation Model of the Tropical Pacific

We present a Polynomial Chaos (PC)-based Bayesian inference method for quantifying the uncertainties of the K-Profile Parametrization (KPP) model in the MIT General Circulation Model (MITgcm). The inference of the uncertain parameters is based on a Markov Chain Monte Carlo (MCMC) scheme that utilizes a newly formulated test statistic taking into account the different components representing the structures of turbulent mixing on both daily and seasonal timescales in addition to the data quality, and filters for the effects of parameter perturbations over those due to changes in the wind. To avoid the prohibitive computational cost of integrating the MITgcm model at each MCMC iteration, we build a surrogate model for the test statistic using the PC method. The traditional spectral projection method for finding the PC coefficients suffered from convergence issues due to the internal noise in the model predictions. Instead, a Basis-Pursuit-DeNoising (BPDN) compressed sensing approach was employed that filtered out the noise and determined the PC coefficients of a representative surrogate model. The PC surrogate is then used to evaluate the test statistic in the MCMC step for sampling the posterior of the uncertain parameters. We present results of the posteriors that indicate a good agreement with the default values for two parameters of the KPP model, namely the critical bulk and gradient Richardson numbers, while the posteriors of the remaining parameters were hardly informative.

Ihab Sraj
Duke University
[email protected]

Sarah Zedler
University of Texas, [email protected]

Charles Jackson
UTIG
University of Texas at Austin
[email protected]

Omar M. Knio
Duke University
[email protected]

Ibrahim Hoteit
King Abdullah University of Science and Technology (KAUST)
[email protected]

MS66

Field Sensitivity Analysis of the Circulation in the Gulf of Mexico Using a PC Representation

A global sensitivity analysis is performed of the combined impact of uncertainties in initial conditions and wind forcing on the circulation in the Gulf of Mexico. To this end, polynomial chaos surrogates are constructed using a basis pursuit denoising methodology. The computations reveal that whereas local quantities of interest can exhibit complex behavior that necessitates a large number of realizations, the modal analysis of field sensitivities can be suitably achieved with a moderate size ensemble.

Guotu [email protected]

Mohamed Iskandarani
Rosenstiel School of Marine and Atmospheric Sciences
University of Miami
[email protected]

Matthieu Le Henaff
University of Miami
matthieu le henaff <[email protected]>

Justin Winokur
Duke University
[email protected]

Olivier P. Le [email protected]

Omar M. Knio
Duke University
[email protected]

MS66

Model Reduction for Climate Model Structural Uncertainty Quantification

A significant source of uncertainty in climate model predictions is the structural uncertainty arising from modeler choices in numerical approximations and parameterizations, which results in a multi-model ensemble approach to prediction. We examine the use of reduced models to emulate the behavior of more complex numerical simulations in order to efficiently explore the uncertainties arising from different model structures.

Nathan Urban, Cristina Garcia-Cardona, Terry Haut, Alexandra Jonko
Los Alamos National Laboratory
[email protected], [email protected], [email protected], [email protected]

Balu Nadiga
Los [email protected]

Juan Saenz
Los Alamos National Laboratory
[email protected]

MS66

Ensemble Variational Assimilation

Ensemble Variational Assimilation (EnsVAR) consists of perturbing the data to be assimilated according to their error probability distribution, and then performing variational assimilation on the perturbed data. EnsVAR achieves exact Bayesian estimation in the linear Gaussian case. EnsVAR is implemented on small-dimension chaotic systems. It achieves a high degree of Bayesianity (as measured by statistical reliability). Its performance compares favourably with that of other ensemble assimilation algorithms, such as the Ensemble Kalman Filter and the Particle Filter.

Olivier Talagrand
Laboratoire de Meteorologie Dynamique
Ecole Normale Superieure
[email protected]


Mohamed Jardak
Laboratoire de Meteorologie Dynamique, Ecole Normale Superieure
Paris [email protected]
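In the linear Gaussian case, where EnsVAR is exact, each ensemble member perturbs the background and the observation according to their error laws and then minimizes the usual variational cost J; a minimal scalar sketch (all numbers illustrative, with the minimizer available in closed form):

```python
import numpy as np

rng = np.random.default_rng(3)

x_b, B = 0.0, 1.0          # background (prior) mean and variance
y, R, H = 1.0, 0.5, 1.0    # observation, obs-error variance, linear operator

N = 20_000
xb_pert = x_b + np.sqrt(B) * rng.standard_normal(N)   # perturbed backgrounds
y_pert = y + np.sqrt(R) * rng.standard_normal(N)      # perturbed observations

# each member minimizes J(x) = (x - xb')^2/(2B) + (y' - H x)^2/(2R)
precision = 1.0 / B + H ** 2 / R
members = (xb_pert / B + H * y_pert / R) / precision

post_mean = (x_b / B + H * y / R) / precision          # exact posterior mean
post_var = 1.0 / precision                             # exact posterior variance
```

The empirical mean and variance of the ensemble of minimizers reproduce the exact Bayesian posterior, which is the sense in which EnsVAR is exact here.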

MS67

Exploiting Tensor Structure in Bayesian Inverse Problems

We discuss how to exploit low-rank tensor structure in high-dimensional inference and data assimilation problems, treated in a Bayesian setting. In particular, we show how one can utilize low-rank representations of the likelihood and dynamics to yield efficient representations of the posterior density. We also use a recently developed continuous extension to the tensor-train decomposition in order to obtain efficient sampling in areas of large posterior probability.

Alex A. Gorodetsky
Massachusetts Institute of Technology
[email protected]

Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]

MS67

Bayesian Inversion Using Hierarchical Tensor Approximations

Recovering equation parameters from real-world measurements is a challenging task that has raised interest in different areas of applied mathematics and engineering. When relying on Bayes' theorem, the high dimensionality of the involved integrals is often tackled by sampling methods, which usually exhibit only slow convergence. We present a novel approach for Bayesian inversion which employs a recently developed efficient hierarchical tensor representation of the solution of the forward problem. In the talk, we outline the advantages of the employed TT-decomposition and present comparisons with some well-known methods.

Manuel Marschall
TU [email protected]

Martin Eigel
WIAS Berlin
[email protected]

Reinhold Schneider
Technische Universitat Berlin
[email protected]

MS67

Spectral Likelihood Expansions and Nonparametric Posterior Surrogates

Abstract not available.

Joseph Nagel
ETH Zurich
[email protected]

MS67

Rapid Bayesian Inference on Bayesian Cyclic Networks

A new framework is developed for rapid Bayesian inference, involving probabilistic mappings between continuous and discrete data and model spaces on a quaternary Bayesian cyclic network. In this method, a continuous data space is mapped to a discrete data space (order reduction), and thence to a discrete model space (Bayesian updating). This is inverted to infer the continuous model space. The method facilitates rapid Bayesian inference for high-volume, real-time applications, including turbulent flow control.

Robert K. Niven
The University of New South Wales, Australia
[email protected]

Bernd R. Noack
Institut PPRIME
[email protected]

Eurika Kaiser
Institute PPRIME
[email protected]

Louis N. Cattafesta
FCAAP, Florida State University
[email protected]

Laurent Cordier
Institute PPRIME
[email protected]

Markus Abel
Universitat Potsdam
[email protected]

MS67

Stochastic Collocation Methods for Nonlinear Probabilistic Problems in Solid Mechanics

Abstract not available.

Joachim Rang
TU Braunschweig, Institute of Scientific Computing
[email protected]

Frederik Fahrendorf
TU Braunschweig, Institute for Applied Mechanics
[email protected]

Laura De Lorenzis
TU Braunschweig, Institute for Applied Mechanics
[email protected]

Hermann G. Matthies
TU Braunschweig, Institute of Scientific Computing
[email protected]

MS68

Bayesian Models for Uncertainty Quantification of Antarctic Ice Shelves

The challenges in quantifying uncertainty in ice modeling have their roots in the intricacy of the equations governing ice dynamics: so much computational power is required for the accurate simulation of coupled ice sheet/shelf complexes that only a few model runs can be accomplished, which in turn renders the uncertainty sampling too sparse. We present a statistical approach with which current (hypothetical or former) ice shelves can be modeled and reconstructed. By combining spatial processes at various scales and approximations of break-off processes in a Bayesian hierarchical model, we are able to quantify uncertainties of ice sheet processes within a straightforward simulation setting. We illustrate our approach with various contemporary Antarctic ice shelves.

Reinhard Furrer
University of Zurich
[email protected]

David Masson
University of Zurich
[email protected]

MS68

An Adaptive Spatial Model for Precipitation Data from Multiple Satellites over Large Regions

Satellite measurements have of late become an important source of information for climate features such as precipitation due to their near-global coverage. We look at a precipitation dataset during a 3-hour window over tropical South America that has information from two satellites. We develop a flexible hierarchical model to combine instantaneous rain rate measurements from those satellites while accounting for their potential heterogeneity. Conceptually, we envision an underlying precipitation surface that influences the observed rain as well as the absence of it. The surface is specified using a mean function centered at a set of knot locations, to capture the local patterns in the rain rate, combined with a residual Gaussian process to account for global correlation across sites. To improve over the commonly used pre-fixed knot choices, an efficient reversible jump scheme is used to allow the number of such knots as well as the order and support of associated polynomial terms to be chosen adaptively. To facilitate computation over a large region, a reduced rank approximation for the parent Gaussian process is employed.

Bani Mallick
Department of Statistics
Texas A&M University
[email protected]

MS68

Full Scale Multi-Output Spatial Temporal Gaussian Process Emulator with Non-Separable Auto-Covariance Function

Gaussian process emulators with separable covariance functions have been utilized extensively in modeling large computer model outputs. The assumption of separability imposes constraints on the emulator and may negatively affect its performance in some applications where separability may not hold. We propose a multi-output Gaussian process emulator with a nonseparable auto-covariance function to avoid the limitations of using separable emulators. In addition, to facilitate the computation of the nonseparable emulator, we introduce a new computational method, referred to as the Full-Scale approximation method with block modulating function (FSA-Block) approach. The FSA-Block is an effective and accurate covariance approximation method to reduce computations for Gaussian process models, which applies to both nonseparable and partially separable covariance models. We illustrate the effectiveness of our method through simulation studies and compare it with emulators with separable covariances. We also apply our method to a real computer code of the carbon capture system.

Huiyan Sang
Texas A&M University
[email protected]

MS68

Fused Adaptive Lasso for Spatial and Temporal Quantile Function Estimation

Quantile functions are important in characterizing the entire probability distribution of a random variable, especially when the tail of a skewed distribution is of interest. This article introduces new quantile function estimators for spatial and temporal data with a fused adaptive Lasso penalty to accommodate the dependence in space and time. This method penalizes the difference among neighboring quantiles, hence it is desirable for applications with features ordered in time or space without replicated observations. The theoretical properties are investigated and the performance of the proposed methods is evaluated by simulations. The proposed method is applied to particulate matter (PM) data from the Community Multiscale Air Quality (CMAQ) model to characterize the upper quantiles, which are crucial for studying the spatial association between PM concentrations and adverse human health effects.
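A toy sketch of the objective being described (the data, the one-observation-per-site layout, and the penalty weight are invented here; the actual estimator uses adaptive weights and spatio-temporal neighborhoods): the pinball loss measures quantile fit, and the fused term penalizes differences between neighboring sites:

```python
import numpy as np

def pinball(u, tau):
    # Quantile ("check") loss: rho_tau(u) = u * (tau - 1{u < 0}).
    return u * (tau - (u < 0))

def fused_objective(q, y, tau, lam):
    # q: candidate quantile value at each of S ordered sites (one obs per site);
    # the fused term penalizes differences between neighboring sites.
    return pinball(y - q, tau).sum() + lam * np.abs(np.diff(q)).sum()

# With a large penalty, a constant profile at the pooled empirical quantile
# beats a rough profile that interpolates the noisy data exactly.
rng = np.random.default_rng(0)
y = rng.normal(size=20) + np.linspace(0.0, 0.1, 20)
tau, lam = 0.9, 5.0
q_rough = y.copy()                          # zero fit loss, large fused penalty
q_flat = np.full(20, np.quantile(y, tau))   # zero fused penalty
print(fused_objective(q_rough, y, tau, lam),
      fused_objective(q_flat, y, tau, lam))
```

The trade-off between the two terms is what lets the estimator borrow strength across space and time without replicated observations.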

Ying Sun
KAUST, Saudi Arabia
[email protected]

Huixia Wang
George Washington University
[email protected]

Montserrat Fuentes
North Carolina State University, Statistics Department
[email protected]

MS69

Innovative Methodologies for Robust Design Optimization with a Large Number of Uncertainties

This paper describes the methodologies developed by ESTECO in the UMRIDA project and implemented in the modeFRONTIER software. An adaptive regression methodology to reduce the number of Polynomial Chaos terms is proposed for uncertainty quantification. For Robust Design Optimization, a new approach is proposed, based on a min-max definition of the objectives and on the application of Polynomial Chaos for an accurate definition of percentiles (reliability-based robust design optimization). Aeronautical CFD test cases are proposed for validation.

Alberto Clarich, Rosario Russo
Esteco SpA
[email protected], [email protected]

MS69

Manufacturing Tolerances in Industrial Turbo-Machinery Design

Abstract not available.

Remy Nigro
University of Mons, Fluids-Machines Department, Mons, Belgium
[email protected]

Dirk Wunsch
NUMECA International
[email protected]

Gregory Coussement
University of Mons, Fluids-Machines Department, Mons, Belgium
[email protected]

Charles Hirsch
NUMECA International
[email protected]

MS69

Non-Adaptive Construction of Sparse Polynomial Surrogates in Computational Aerodynamics

A core ingredient for the construction of polynomial surrogates of complex systems is the choice of a dedicated sampling strategy, so as to define the most significant scenarios to be considered for the construction of such metamodels. Observing that system outputs may depend only weakly on the cross-interactions between the variable inputs, one may argue that only low-order polynomials significantly contribute to these surrogates. This feature prompts the use of reconstruction techniques benefiting from the sparsity of the outputs, such as compressed sensing. The results obtained with aerodynamic computations involving complex fluid flows corroborate this expected trend. In the meantime, we show how to compute arbitrary moments of orthogonal polynomials for the statistical analysis of the outputs from the surrogates. This research has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no ACP3-GA-2013-60503.

Eric M. Savin
ONERA, France
[email protected]

MS69

Efficient Usage of 2nd Order Sensitivity for Uncertainty Quantification

We propose a method based on a nodal representation of the geometrical uncertainty field. Such a discretization leads to a large set of correlated uncertainties that is too big to be analyzed with existing methods in reasonable time. The proposed approach is based on the idea of reducing the number of analyzed uncertainties. In order to do this, one needs to compute directional 2nd order derivatives. This approach is expected to give a significant gain in computational time.

Lukas
[email protected]

Marcin Wyrozebski
Warsaw University of Technology, Poland
[email protected]

MS70

Dynamic Patterns in the Brain

There is a broad need in neuroscience to understand and visualize large-scale recordings of neural activity, big data acquired by tens or hundreds of electrodes recording dynamic brain activity over minutes to hours. Such datasets are characterized by coherent patterns across both space and time, where many transient events are present at different temporal scales. I will talk about our work applying dynamic mode decomposition (DMD) to large-scale neural recordings. As a specific example, we combined spatio-temporal coherent patterns extracted by DMD with unsupervised machine learning to identify and characterize networks of sleep spindles. We uncovered several distinct sleep spindle networks identifiable by their stereotypical cortical distribution patterns, frequency, and duration.
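For readers unfamiliar with DMD, here is a minimal "exact DMD" in the standard SVD-based formulation (a generic sketch, not the speaker's analysis pipeline; the traveling-wave test data are fabricated):

```python
import numpy as np

def dmd(X, r):
    # Exact DMD: fit a best-fit linear map x_{k+1} ~= A x_k from snapshot
    # pairs, project onto the leading r POD modes, and return eigenvalues
    # (temporal growth/frequency) and the corresponding DMD modes.
    X0, X1 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X0, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
    Atilde = U.conj().T @ X1 @ Vt.conj().T / s   # r x r projected operator
    evals, W = np.linalg.eig(Atilde)
    modes = X1 @ Vt.conj().T / s @ W             # exact DMD modes
    return evals, modes

# Fabricated signal: a traveling wave sin(x + 0.3 t) is an exact rank-2
# linear system, so DMD should find |lambda| = 1 at frequency 0.3 rad/step.
t = np.arange(50.0)
x = np.linspace(0.0, np.pi, 32)
X = np.sin(x[:, None] + 0.3 * t[None, :])

evals, modes = dmd(X, r=2)
print(np.abs(evals), np.angle(evals))
```

On real recordings the eigenvalue phases give oscillation frequencies (e.g. spindle bands) and the modes give their spatial distribution over electrodes.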

Bing W. Brunton
University of Washington
[email protected]

MS70

Multiscale Relations Measure for Categorical Data and Application to Genomics

Inference of relations between categorical data sets is an important problem in many areas of computational biology and bioinformatics. One of the essential problems hereby emerging - and frequently leading to strongly-biased and even completely wrong results and interpretations - is induced by the multiscale character of the biological systems, manifested in a presence of latent/unresolved variables, as well as in the issue of model error - resulting from (possibly wrong) a priori mathematical assumptions about the underlying processes. Combining tools and concepts from information theory with the exact law of total probability, we derive a multiscale relation measure that - in terms of the underlying mathematical assumptions - is less restrictive and allows one to infer an eventual impact from the latent variables. At the same time, it has the same leading order of computational complexity (linear in the size of the data statistics) as the standard relation measures. Application of this measure to the analysis of single nucleotide polymorphisms (SNPs) in a part of the human genome reveals more complex relation patterns than implied by standard measures - thereby providing an indication that many of the interpretations deduced from genomic data may be resulting from the bias induced by too restrictive underlying mathematical assumptions of the standard measures. As demonstrated on the test data, allowing for a systematic distinction between different dependence and independence combinations of category relations in the data, the proposed measure provides new insights into populational differences between SNP relations in the Alzheimer-relevant gene regions and opens new possibilities for Genome-Wide Association Studies (GWAS).

Susanne Gerber
Universita della Svizzera italiana
[email protected]

MS70

Multi-Resolution Dynamic Mode Decomposition and Koopman Theory

We integrate the dynamic mode decomposition (DMD) with a multi-resolution analysis, which allows for a decomposition method capable of robustly separating complex systems into a hierarchy of multi-resolution time-scale components. A one-level separation allows for background (low-rank) and foreground (sparse) separation of dynamical data, or robust principal component analysis. The multi-resolution dynamic mode decomposition is capable of characterizing nonlinear dynamical systems in an equation-free manner by recursively decomposing the state of the system into low-rank terms whose temporal coefficients are known. It also has a strong connection to Koopman theory, which allows for projecting the nonlinear dynamics onto an infinite-dimensional linear operator. DMD modes with temporal frequencies near the origin (zero-modes) are interpreted as background (low-rank) portions of the given dynamics, and the terms with temporal frequencies bounded away from the origin are their sparse counterparts. The multi-resolution dynamic mode decomposition (mrDMD) method is demonstrated on several examples involving multi-scale dynamical data, showing excellent decomposition results, including sifting the El Nino mode from ocean temperature data. It is further applied to decompose a video data set into separate objects moving at different rates against a slowly varying background. These examples show that the decomposition is an effective dynamical systems tool for data-driven discovery.

J. Nathan Kutz
University of Washington
Dept of Applied Mathematics
[email protected]

MS70

Reduced Order Precursors of Rare Events in Unidirectional Nonlinear Water Waves

Abstract not available.

Themistoklis Sapsis
Massachusetts Institute of Technology
[email protected]

Will Cousins
[email protected]

MS70

A Transfer Operator Approach to the Prediction of Atmospheric Regime Transition

The existence of persistent midlatitude atmospheric flow regimes with time-scales larger than 5-10 days and indications of preferred transition paths between them motivates the development of early warning indicators for such regime transitions. While ergodic theory provides a valuable framework for prediction and uncertainty quantification in chaotic or stochastic systems, only recently has a reduction method been proposed to approximate the transfer operators governing the evolution of statistics. This method is applied to a hemispheric barotropic model to develop an early warning indicator of the zonal to blocked flow transition in this model. It is shown that the spectrum of the transfer operators can be used to study the slow dynamics of the flow as well as the non-Markovian character of the reduction. The slowest motions are thereby found to have time scales of three to six weeks and to be associated with meta-stable regimes (and their transitions). These regimes can be detected as almost-invariant sets of the transfer operators and are associated with regions of low kinetic energy. Furthermore, the reduced operators are used to design an early warning indicator of transition which is statistical in nature, allowing one to assess forecast uncertainty. Even though the model is highly simplified, the skill of the early warning indicator is promising, suggesting that the transfer operator approach can be used in parallel to an operational deterministic model for stochastic prediction or to assess forecast uncertainty.

Alexis Tantet
IMAU
University of Utrecht
[email protected]

Henk A. Dijkstra
IMAU, Utrecht University, Utrecht, the Netherlands
[email protected]

MS71

The Role of Global Sensitivity Analysis in Interpreting Subsurface Processes under Uncertainty

Global sensitivity analysis (GSA) projects uncertainty of input parameters onto the variance of model responses, leading to improved characterization of complex systems and assisting design of measurement networks. GSA is compatible with model discrimination criteria and allows assessing contributions of model uncertainty to the overall system variability. Examples are presented to showcase the role of GSA in interpreting groundwater flow/transport processes, where quantification of uncertainty associated with model predictions is critical to design management strategies.

Valentina Ciriello
Universita di Bologna
[email protected]

MS71

Estimating Flow Parameters for the Unsaturated Zone Using Data Assimilation

Prediction of soil moisture is an important component in many land surface models. The Ensemble Kalman filter has become a popular method to integrate time series of observations into predictions. Updating hydraulic parameters in the filter is often done using an augmented state approach, as parameters might not be known. We discuss the feasibility of parameter estimation for soils in these settings with a focus on the presence of model errors, and compare different filters.

Insa Neuweiler
Institute for Fluid Mechanics and Environmental Physics in Civil Engineering
University of Hannover
[email protected]

Daniel Erdal
University of Tubingen, Germany
[email protected]

Natascha Lange
University of Hannover, Germany
[email protected]

MS71

An Adaptive Sparse Grid Algorithm for Darcy Problems with Log-normal Permeability

In this talk we propose a modified version of the classical adaptive sparse grid algorithm that handles non-nested collocation points and an infinite number of input parameters with unbounded support. We then use it to solve the Darcy equation with a lognormal permeability random field that has a Matern covariance function. In the case of rough permeability fields, we propose to use the adaptive sparse grid as a control variate for a Monte Carlo estimator.
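The control-variate idea in the last sentence can be sketched in a few lines (an illustrative stand-in: a cheap polynomial surrogate g plays the role of the sparse-grid interpolant, and the integrand, sample size, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

f = lambda y: np.exp(0.5 * y)                    # "expensive" quantity of interest
g = lambda y: 1 + 0.5 * y + (0.5 * y) ** 2 / 2   # cheap surrogate (Taylor sketch)
Eg = 1.125                                       # E[g(Y)] for Y ~ N(0,1), exact

y = rng.standard_normal(2000)
mc = f(y).mean()                        # plain Monte Carlo estimate of E[f]
cv = Eg + (f(y) - g(y)).mean()          # surrogate used as a control variate

exact = np.exp(0.125)                   # E[exp(Y/2)] = e^{1/8}
print(abs(mc - exact), abs(cv - exact)) # cv error is typically much smaller
```

The estimator stays unbiased because E[g] is known (here analytically; in the talk, from the sparse-grid quadrature), while the sampled correction f - g has far smaller variance than f itself.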

Lorenzo Tamellini
EPF-Lausanne, Switzerland
[email protected]

Fabio Nobile
EPFL, Switzerland
[email protected]

Raul F. Tempone
Mathematics, Computational Sciences & Engineering
King Abdullah University of Science and Technology
[email protected]

Francesco Tesei
Ecole Polytechnique Federale de Lausanne
[email protected]

MS72

Uncertainty Quantification Python Laboratory (UQ-PyL): A GUI for Parametric Uncertainty Analysis of Large Complex Dynamical Models

This paper describes the newly developed Uncertainty Quantification Python Laboratory (UQ-PyL), a software platform designed to quantify parametric uncertainty of complex dynamical models. UQ-PyL integrates different UQ methods, including experimental design, statistical analysis, sensitivity analysis, surrogate modeling and parameter optimization. It is written in Python with a graphical user interface (GUI) and runs on all common operating systems. In this presentation, we illustrate the different functions of UQ-PyL through some case studies.

Qingyun Duan
Beijing Normal University
College of Global Change and Earth System Science
[email protected]

Chen Wang
Beijing Normal University
[email protected]

Charles Tong
Lawrence Livermore National Laboratory
[email protected]

MS72

Advancements in the UQLab Framework for Uncertainty Quantification

The UQLab software framework is a Matlab-based modular platform that allows users from different fields and without extensive programming background to access and develop state-of-the-art algorithms for uncertainty quantification. Since the public beta testing phase started in July 2015, a number of improvements and new features have been added, making UQLab appealing to many different applied research fields. A general overview of the current features of the framework and selected case studies are presented.

Stefano Marelli
Chair of Risk, Safety and Uncertainty Quantification
Institute of Structural Engineering, ETH Zurich
[email protected]

Christos Lataniotis, Bruno Sudret
ETH Zurich
[email protected], [email protected]

MS72

PSUADE: A Software Toolkit for Uncertainty Quantification

PSUADE is a software system for analyzing the relationships between the inputs and outputs of general multi-physics simulation models for the purpose of performing uncertainty and sensitivity analyses. In this talk the general capabilities of PSUADE will be highlighted and its design will be discussed. Several applications will also be given to illustrate its capabilities.

Charles Tong
Lawrence Livermore National Laboratory
[email protected]

MS73

A Multi-Level Compressed Sensing Petrov-Galerkin Method for the Approximation of Parametric PDEs

In this talk, we review the use of compressed sensing and its weighted version in the context of high dimensional parametric and stochastic PDEs. We see that under some rather weak summability and ellipticity assumptions, the polynomial chaos expansion of the solution map shows some compressibility property. We further derive a multi-level scheme to speed up the calculations, leading to a method that has a computational cost on the order of a single PDE solve at the finest level of approximation and still has reliable guarantees.

Jean-Luc Bouchot
RWTH Aachen
[email protected]

Benjamin Bykowski
RWTH Aachen University
[email protected]

Holger Rauhut
RWTH Aachen University
Lehrstuhl C fur Mathematik (Analysis)
[email protected]

Christoph Schwab
ETH Zurich
[email protected]

MS73

CORSING: Sparse Approximation of PDEs Based on Compressed Sensing

Establishing an analogy between the sampling of a signal and the Petrov-Galerkin discretization of a PDE, the CORSING method can recover the best s-term approximation to the solution with respect to N suitable trial functions, with s ≪ N, by evaluating the bilinear form associated with the PDE against a randomized choice of m ≪ N test functions. This yields an underdetermined m × N linear system, which is solved by means of sparse optimization techniques.
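The final recovery step can be imitated with generic compressed sensing ingredients (a hedged sketch: a random Gaussian m × N matrix and orthogonal matching pursuit stand in for the PDE-derived CORSING system and its solver; all sizes and the sparse support are invented):

```python
import numpy as np

def omp(A, b, s):
    # Orthogonal matching pursuit: greedily select s columns of A by residual
    # correlation, refitting the coefficients by least squares at each step.
    r, idx = b.copy(), []
    for _ in range(s):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, idx], b, rcond=None)
        r = b - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(0)
N, m, s = 200, 50, 3                      # N trial functions, m << N test functions
A = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N)
x_true[[5, 70, 123]] = [1.0, -2.0, 0.5]   # an s-sparse coefficient vector
b = A @ x_true                            # "measurements" of the solution

x_hat = omp(A, b, s)
print(np.max(np.abs(x_hat - x_true)))     # near machine precision on success
```

The underdetermined system has infinitely many solutions; sparsity is what singles out the right one, exactly as in signal recovery.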

Simone Brugiapaglia, Stefano Micheletti
Politecnico di Milano
[email protected], [email protected]

Fabio Nobile
EPFL, Switzerland
[email protected]

Simona Perotto
MOX - Modeling and Scientific Computing
Dipartimento di Matematica
[email protected]

MS73

Multilevel Monte-Carlo Hybrids Exploiting Multifidelity Modeling and Sparse Polynomial Chaos

In this talk, we present recent experience in developing multilevel Monte Carlo (MLMC) methods that relax the direct tie to a sequence of discretization levels for a particular model form. Of particular interest are (1) extension of MLMC methods to the general multifidelity setting with an ensemble of distinct model forms, and (2) hybridization with polynomial chaos expansion methods based on sparse recovery in order to increase statistical estimator performance within the MLMC framework. Numerical results for multiple discretization levels and multiple model forms will be presented.

Michael S. Eldred
Sandia National Laboratories
Optimization and Uncertainty Quantification
[email protected]

Gianluca Geraci
University of [email protected]

John D. Jakeman
Sandia National Laboratories
[email protected]

MS73

Posterior Concentration and High Order Quasi-Monte Carlo for Bayesian Inverse Problems

We analyze combined Quasi-Monte Carlo quadrature and Finite Element approximations in Bayesian estimation of solutions to countably-parametric operator equations with holomorphic dependence on the parameters considered in [Cl. Schillings and Ch. Schwab: Sparsity in Bayesian Inversion of Parametric Operator Equations. Inverse Problems, 30, (2014)]. Such problems arise in numerical uncertainty quantification and in Bayesian inversion of operator equations with distributed uncertain inputs, such as uncertain coefficients, uncertain domains or uncertain source terms and boundary data. It implies, in particular, regularity of the parametric solution and of the countably-parametric Bayesian posterior density in SPOD weighted spaces. This, in turn, implies that the Quasi-Monte Carlo quadrature methods in [J. Dick, F.Y. Kuo, Q.T. Le Gia, D. Nuyens, Ch. Schwab, Higher order QMC Galerkin discretization for parametric operator equations, SINUM (2014)] are applicable to these problem classes, with dimension-independent convergence rates. We address the impact of posterior concentration, due to small observation noise covariance, on the error bounds, and present numerical results.

Christoph Schwab
ETH Zurich
[email protected]

Josef Dick
School of Mathematics and Statistics
The University of New South Wales
[email protected]

Thong Le Gia
University of New South Wales
[email protected]

Robert N. Gantner
ETH Zurich
[email protected]

MS74

Model Adaptivity in Bayesian Inverse Problems

Many Bayesian inverse problems involve computationally expensive forward models. When the inference parameters implicitly define the forward model, an additional complexity arises in choosing appropriate discretizations of the model parameters and state; this choice affects both the accuracy of the posterior and the convergence rates of posterior sampling schemes. The inference of uncertain parameters in PDEs is one such example. We explore this problem by developing posterior-focused error estimates and adaptivity criteria that combine adaptive procedures for deterministic inverse problems with Markov chain Monte Carlo sampling.

Jayanth Jagalur-Mohan, Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected], [email protected]

MS74

A Multifidelity Stochastic CFD Framework for Robust Simulation with Gappy Data

We develop a general CFD framework based on multifidelity simulations with gappy domains, called gappy simulation. We combine approximation theory, domain decomposition, and patch dynamics together with machine learning techniques, e.g., coKriging, to estimate the local boundaries of non-overlapped patches. From the gaps between individual patches, we observe and analyze uncertainty quantification on two different benchmark problems with general finite difference methods. Finally, we set the foundations for a new class of robust algorithms for stochastic CFD.

Seungjoon Lee
Division of Applied Mathematics, Brown University
seungjoon [email protected]

Ioannis Kevrekidis
Princeton University
[email protected]

George Em Karniadakis
Brown University
george [email protected]

MS74

Tree-Structured Expectation Propagation for Stochastic Multiscale Differential Equations

We develop a hierarchical conditional random field model using tree-structured Expectation Propagation (EP) as a surrogate for multiscale stochastic PDEs. Hidden variables are introduced to capture coarse graining effects and are then integrated out for inference. EP can give predictions on new inputs as well as quantify uncertainties in the output whilst dealing with the local correlations within the cliques. We present applications of the model on stochastic ODEs and PDEs.

Demetris Marnerides, Nicholas Zabaras
The University of Warwick
Warwick Centre for Predictive Modelling (WCPM)
[email protected], [email protected]

MS74

Multi-Fidelity Simulation of Random Fields by Multi-Variate Gaussian Process Regression

In this talk we outline a general procedure that employs recursive Bayesian network techniques and multiple information sources with different levels of fidelity to predict the statistical properties of random fluid systems. In particular, we address the problem of how to construct optimal predictors for quantities of interest such as the temperature field in stochastic Rayleigh-Benard convection. The effectiveness of the new algorithms is demonstrated in numerical examples involving prototype stochastic PDEs.

Lucia Parussini
Universita degli Studi di Trieste
[email protected]

Daniele Venturi
University of California, Santa Cruz
[email protected]

MS75

Subset Simulation: Strategies to Enhance the Performance of the Method

Subset Simulation is an adaptive Monte Carlo method that is efficient for estimating small probabilities in high dimensional problems. We present strategies to improve the MCMC sampling in Subset Simulation. This includes a simple yet efficient proposal distribution, as well as an algorithm to adaptively learn the spread of the proposal distribution. Furthermore, we investigate the influence of dependent MCMC seeds on the variability of the computed probability.
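For orientation, a bare-bones subset simulation (the generic textbook scheme with a fixed random-walk Metropolis proposal; the talk's contribution is precisely better proposals than this one, and all tuning constants here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def subset_sim(g, d, t, n=2000, p0=0.1):
    # Estimate p = P(g(Z) > t), Z ~ N(0,1)^d, via adaptive intermediate levels.
    z = rng.standard_normal((n, d))
    y = g(z)
    p = 1.0
    for _ in range(20):                     # cap on the number of levels
        thr = np.quantile(y, 1 - p0)
        if thr >= t:
            return p * np.mean(y > t)
        p *= p0
        seeds = z[y > thr]                  # samples already in {g > thr}
        z = np.repeat(seeds, n // len(seeds) + 1, axis=0)[:n]
        y = g(z)
        for _ in range(5):                  # short conditional Metropolis runs
            zc = z + 0.5 * rng.standard_normal(z.shape)
            yc = g(zc)
            accept = rng.random(n) < np.exp(0.5 * ((z**2).sum(1) - (zc**2).sum(1)))
            move = accept & (yc > thr)
            z[move], y[move] = zc[move], yc[move]
    return p * np.mean(y > t)

g = lambda z: z.sum(axis=1) / np.sqrt(z.shape[1])   # so g(Z) ~ N(0,1) exactly
p_hat = subset_sim(g, d=10, t=3.5)
print(p_hat)   # reference: P(N(0,1) > 3.5) is about 2.3e-4
```

The quality of the estimate hinges on the conditional MCMC step, which is exactly where an adaptive proposal spread and careful seed handling pay off.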

Wolfgang Betz, Iason Papaioannou, Daniel Straub
Engineering Risk Analysis Group
TU Munchen
[email protected], [email protected], [email protected]

MS75

Markov Chain Monte Carlo for Rare Event Reliability Analysis with Nonlinear Finite Elements

Many structural design problems are highly nonlinear and are often required to have very small probabilities of failure. Nonlinear finite element analysis can be combined with Subset Simulation for efficient calculation of rare event probabilities in physical models with spatially autocorrelated material parameters. Reliability analysis for nonlinear problems provides distinct challenges in addition to those encountered in linear problems. These challenges are discussed with reference to a practical test case, a footing on elastoplastic soil.

David K. Green
University of New South Wales
[email protected]

MS75

Multilevel Monte Carlo Methods for Computing Failure Probability of Porous Media Flow Systems

We study improvements of Monte Carlo (MC) methods for point evaluation of the cumulative distribution function of quantities from porous media two-phase flow models with uncertain permeability. We consider indicators for the capacity of CO2 storage systems, e.g. sweep efficiency. We quantify the computational gains using recent MC improvements: selective refinement and multilevel MC. Compared to MC, the computational effort can be reduced one order of magnitude for typical error tolerances.

Fritjof Fagerlund
Department of Earth Sciences
Uppsala University
[email protected]

Fredrik Hellman
Department of Information Technology
Uppsala University
[email protected]

Axel Malqvist
Department of Mathematical Sciences
Chalmers University of Technology
[email protected]

Auli Niemi
Department of Earth Sciences
Uppsala University
[email protected]

MS75

Bayesian Updating of Rare Events: The BUS Approach

We present an efficient framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, combined with established methods for estimating probabilities of rare events. These methods include the First-Order Reliability Method (FORM), importance sampling methods as well as subset simulation. We showcase the applicability of the BUS approach with numerical examples involving stochastic PDEs in one- and two-dimensional physical space.
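The rejection-sampling view that BUS reinterprets can be stated in a few lines (an illustrative 1-D conjugate example with invented numbers; BUS itself replaces this brute-force acceptance test with structural reliability methods, since the acceptance event is typically rare):

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior theta ~ N(0,1); one observation y_obs ~ N(theta, sigma^2).
y_obs, sigma = 1.0, 0.5
L = lambda th: np.exp(-0.5 * ((y_obs - th) / sigma) ** 2)  # likelihood, max <= 1
c = 1.0                                                    # constant with c >= max L

theta = rng.standard_normal(200_000)   # samples from the prior
u = rng.random(200_000)
post = theta[u < L(theta) / c]         # accepted samples follow the posterior

# Conjugate reference: the posterior mean is y_obs / (1 + sigma**2) = 0.8.
print(post.mean(), post.size)
```

The accepted fraction plays the role of a (possibly very small) failure probability, which is what lets FORM, importance sampling, or subset simulation take over when direct rejection becomes infeasible.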

Iason Papaioannou, Wolfgang Betz, Daniel Straub
Engineering Risk Analysis Group
TU Munchen
[email protected], [email protected], [email protected]

MS76

Numerical Integration of Piecewise Smooth Systems

Many applications involve functions that are piecewise smooth, with kinks or jumps at the interfaces. Provided the functions are given by evaluation procedures, these discontinuities can be located quite accurately, and the numerical purpose, be it high dimensional quadrature or, more generally, the numerical integration of dynamical systems, can sometimes be achieved without a significant loss of accuracy or efficiency compared to the smooth case. We discuss suitable scenarios for this to be the case.
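One concrete way to locate a kink (a generic bisection on the sign of a difference-quotient slope; the talk's setting exploits the evaluation procedure itself, e.g. the arguments of abs/min/max calls, which this sketch does not do, and the test function is invented):

```python
import numpy as np

def find_kink(f, a, b, tol=1e-10, h=1e-7):
    # Bisection assuming a single kink in [a, b] at which the slope of f
    # changes sign (true for the example below, not in general).
    while b - a > tol:
        m = 0.5 * (a + b)
        slope = (f(m + h) - f(m - h)) / (2 * h)
        if slope < 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

f = lambda x: abs(x - 0.3) + 0.1 * x * x   # smooth part plus a kink at x = 0.3
c = find_kink(f, 0.0, 1.0)
print(c)   # close to 0.3 (accuracy limited by the finite-difference width h)
```

Once the interface location is known this accurately, a quadrature or time integrator can split the domain at the kink and retain its smooth-case order of accuracy on each piece.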

Andreas Griewank
HU Berlin, Germany
[email protected]

MS76

GPU Acceleration of the Stochastic Grid Bundling Method for Early-Exercise Options

Pricing early-exercise options under multi-dimensional stochastic processes is a major challenge in the financial sector. In this work, a parallel GPU version of the Monte Carlo based Stochastic Grid Bundling Method (SGBM) [Shashi Jain and Cornelis W. Oosterlee. The Stochastic Grid Bundling Method: Efficient pricing of Bermudan options and their Greeks. Applied Mathematics and Computation, 269: 412-431, 2015] for pricing multi-dimensional Bermudan options is presented. To extend the method's applicability, the problem dimensionality will be increased drastically. This makes SGBM very expensive in terms of computational cost. A parallelization strategy for the method is developed by employing the GPGPU paradigm.

Alvaro Leitao
CWI Amsterdam
[email protected]

MS76

On Tensor Product Approximation of Analytic Functions

We discuss tensor products of certain univariate approximation schemes for multivariate functions that are analytic in certain polydiscs. In the finite-variate setting we prove exponential error bounds, while for the infinite-variate case we prove algebraic and sub-exponential bounds.

Jens Oettershagen
University of Bonn
[email protected]

MS76

Numerical Solution of Elliptic Diffusion Problems on Random Domains

In this talk, we provide regularity results for the solution to elliptic diffusion problems on random domains. In particular, based on the decay of the Karhunen-Loeve expansion of the domain perturbation field, we establish rates of decay of the solution's derivatives. This anisotropy can be exploited in a sparse quadrature to compute quantities of interest of the solution. In combination with parametric finite elements, we end up with a non-intrusive and efficient method.
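The decay being exploited can be seen in a discrete Karhunen-Loeve computation (illustrative only: the covariance kernel, correlation length, and 95% threshold are invented, and the talk concerns domain perturbation fields rather than this generic 1-D field):

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # exponential covariance

lam, phi = np.linalg.eigh(C)         # eigenvalues in ascending order
lam, phi = lam[::-1], phi[:, ::-1]   # sort descending

# Number of KL modes needed to capture 95% of the total variance.
m = int(np.searchsorted(np.cumsum(lam), 0.95 * lam.sum())) + 1
print(m)                             # far fewer than n

# One sample of the truncated field: sum_k sqrt(lam_k) * phi_k * xi_k.
rng = np.random.default_rng(0)
field = phi[:, :m] @ (np.sqrt(lam[:m]) * rng.standard_normal(m))
```

The faster the eigenvalues decay, the stronger the anisotropy across the parametric dimensions, which is what makes the sparse quadrature efficient.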

Michael Peters
University of Basel
[email protected]

Helmut Harbrecht
Universitaet Basel
Department of Mathematics and Computer Science
[email protected]

Markus Siebenmorgen
University of Basel
[email protected]

MS77

Uncertainty Quantification of a Tsunami Model with Uncertain Bathymetry Using Statistical Emulation

VOLNA, a nonlinear shallow water equations solver, produces high resolution simulations of earthquake-generated tsunamis. However, the uncertainties in the bathymetry (seafloor elevation) have an impact on tsunami waves. We quantify the bathymetry as random fields which are parametrised as inputs of VOLNA. Because of the high dimensionality of these inputs, we propose a joint framework of statistical emulation with a dimension reduction procedure for uncertainty quantification, to obtain a reliable probabilistic assessment of tsunami hazard.

Xiaoyu Liu, Serge Guillas
University College [email protected], [email protected]

MS77

A Measure-Theoretic Approach to Parameter Estimation

In this talk, we will discuss the application of a new measure-theoretic approach to parameter estimation. The measure-theoretic approach is based on computing the probability of events in parameter space, given the probability of measurable events in data space. We will describe the application of this methodology to two complex problems: the estimation of parameter fields within a coastal ocean model, and the estimation of transport parameters in a groundwater contamination model.

Clint Dawson
Institute for Computational Engineering and Sciences
University of Texas at [email protected]

Troy Butler
University of Colorado [email protected]

Steven Mattis
University of Texas at [email protected]

Lindley C. Graham
University of Texas at Austin
Institute for Computational Engineering and Sciences (ICES)
[email protected]

MS77

An Overview of Uncertainty Quantification in Geophysical Hazard Analyses

Simulation based studies (and often surrogates to make UQ tractable) are essential in geophysical hazard analyses given the rare nature of devastating hazards and limited observational data. I will give an overview of approaches, methodologies, and challenges that arise in quantifying uncertainty in simulation-aided geophysical hazard assessment.

Elaine Spiller
Marquette [email protected]

MS77

Uncertainty Quantification About Dynamic Flow, Frequency, and Initiation of Pyroclastic Flows

Decade-long observations of initiation angles for Pyroclastic Flows at the Soufriere Hills Volcano in Montserrat show little departure from a uniform distribution on the circle, but short-term studies tell a different story: angles are similar for consecutive groups of smaller eruptions between major dome-collapse events. In the hope of making more accurate short-term hazard assessments and forecasts, we build and fit dynamic models of flow, frequency, and initiation angles for Pyroclastic Flows.

Robert Wolpert
Duke University
[email protected]

MS78

Title Not Available

Abstract not available.

Eldad Haber
Department of Mathematics
The University of British Columbia
[email protected]

MS78

Uncertainty Quantification in 4D Seismic

In seismic imaging, multiple large data sets are often collected to monitor a producing reservoir. These data are then analyzed to image changes in the subsurface. Because of the inherent non-uniqueness of the seismic inverse problem, it is difficult to determine which changes are real. This presentation will detail how we can use the multiple data sets available in this case to efficiently compute a confidence map and relate it to the inherent uncertainties.

Alison Malcolm
Memorial University of [email protected]

Di [email protected]

Michael Fehler
Massachusetts Institute of [email protected]

Maria Kotsi
Memorial University of [email protected]

MS78

A Randomized Misfit Approach for Data Reduction in Large-Scale Inverse Problems

We present a randomized misfit approach (RMA) for efficient data reduction in large-scale inverse problems. The method is a random transformation approach that generates reduced data by randomly combining the original ones. The main idea is to first randomize the misfit and then use the sample average approximation to solve the resulting stochastic optimization problem. At the heart of our approach is the blending of the stochastic programming and random projection theories, which brings together the advances from both sides. One of the main results of the paper is the interplay between the Johnson-Lindenstrauss lemma and large deviation theory. A tight connection between the Morozov discrepancy principle and the Johnson-Lindenstrauss lemma is presented. Various numerical results to motivate and to verify our theoretical findings are presented for inverse problems governed by elliptic partial differential equations in one, two, and three dimensions.
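The random-projection idea behind this data reduction can be illustrated with a minimal NumPy sketch (an editorial example under assumed toy data, not the authors' RMA implementation): a Gaussian sketching matrix S randomly combines the n original data into k << n reduced data, and the least-squares solution is nearly unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem with n data points: d = F m + noise.
n, p = 2000, 10
F = rng.standard_normal((n, p))
m_true = rng.standard_normal(p)
d = F @ m_true + 0.01 * rng.standard_normal(n)

# Randomized misfit: replace ||F m - d||^2 by ||S (F m - d)||^2, where S is a
# k x n Gaussian sketching matrix (a Johnson-Lindenstrauss-type embedding).
k = 100
S = rng.standard_normal((k, n)) / np.sqrt(k)

m_full, *_ = np.linalg.lstsq(F, d, rcond=None)         # full-data solution
m_red, *_ = np.linalg.lstsq(S @ F, S @ d, rcond=None)  # reduced-data solution

# The two estimates agree closely although the reduced problem has only k data.
```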

Tan Bui-Thanh
The University of Texas at [email protected]

Ellen Le, Aaron Myers
University of Texas at Austin - [email protected], [email protected]

MS78

Sub-sampled Newton Methods and Modern Big Data Problems

Many data analysis applications require the solution of optimization problems involving a sum of a large number of functions. We consider the problem of minimizing a sum of n functions over a convex constraint set. Algorithms that carefully sub-sample to reduce n can improve the computational efficiency, while maintaining the original convergence properties. For second-order methods, we give quantitative convergence results for variants of Newton's method where the Hessian or the gradient is uniformly sub-sampled.
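A hedged sketch of the Hessian sub-sampling variant described above, on an assumed toy logistic-regression problem (the model, sizes, and batch size are illustrative choices, not the speakers' setup): the gradient uses all n terms while the Hessian is formed from a uniform sub-sample.

```python
import numpy as np

rng = np.random.default_rng(1)

# Logistic regression: minimize (1/n) * sum_i log(1 + exp(-y_i * x_i . w)).
n, p = 5000, 5
X = rng.standard_normal((n, p))
w_true = rng.standard_normal(p)
y = np.sign(X @ w_true + rng.standard_normal(n))  # noisy labels keep the MLE finite

def full_gradient(w):
    z = y * (X @ w)
    return -((y / (1.0 + np.exp(z))) @ X) / n

def subsampled_hessian(w, idx):
    z = y[idx] * (X[idx] @ w)
    s = np.exp(z) / (1.0 + np.exp(z)) ** 2        # second derivative of the loss
    return (X[idx] * s[:, None]).T @ X[idx] / len(idx)

w = np.zeros(p)
for _ in range(20):
    batch = rng.choice(n, size=500, replace=False)  # uniform Hessian sub-sample
    H = subsampled_hessian(w, batch) + 1e-8 * np.eye(p)
    w = w - np.linalg.solve(H, full_gradient(w))

cos = w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true))
# cos is close to 1: the sub-sampled Newton iterate aligns with w_true
```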

Farbod Roosta-Khorasani, Michael Mahoney
UC [email protected], [email protected]

MS79

Improved Dynamic Mode Decomposition Algorithms for Noisy Data

The dynamic mode decomposition (DMD) algorithm gives a means to extract dynamical information and models directly from data. The practical utility of DMD, however, can be hindered by a sensitivity to noisy data. We show that DMD is biased by measurement noise, and discuss a number of noise-robust modifications to the algorithm that remove this bias. We further demonstrate how these approaches can be extended to improve related data-driven modeling algorithms.
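For reference, the baseline "exact DMD" that the noise-robust variants modify can be sketched in a few lines (an editorial example on an assumed toy linear system, not the speakers' code); on clean data the DMD eigenvalues reproduce the true spectrum, while additive sensor noise on the snapshots biases them toward the origin, which motivates the talk.

```python
import numpy as np

# Snapshots of a linear system x_{k+1} = A x_k (a decaying oscillation).
A = np.array([[0.9, -0.2],
              [0.2,  0.9]])
X = np.empty((2, 51))
X[:, 0] = [1.0, 0.0]
for k in range(50):
    X[:, k + 1] = A @ X[:, k]

# Exact DMD: best-fit linear operator mapping snapshot matrix X1 to X2.
X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)

dmd_eigs = np.sort_complex(np.linalg.eigvals(Atilde))
true_eigs = np.sort_complex(np.linalg.eigvals(A))
# on clean data dmd_eigs matches true_eigs to machine precision
```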

Scott Dawson
Department of Mechanical and Aerospace Engineering
Princeton [email protected]

Maziar Hemati
Department of Aerospace Engineering and Mechanics
University of [email protected]

Matthew Williams
United Technologies Research [email protected]

Clarence Rowley
Princeton University
Department of Mechanical and Aerospace [email protected]

MS79

Temporal Compressive Sensing of Strain Histories of Compliant Wings for Classification of Aerodynamic Regimes

Mechanosensory flight control in insects is moderated by campaniform sensilla, or strain sensors along wings that collect information from the aerodynamic environment. We speculate that sparse strain measurements that encode aerodynamic loading are sufficient to detect the presence of fluid instabilities and robustly differentiate aerodynamic disturbances. To confirm this, strain data is obtained from a numerical fluid-structure model of a compliant flapping hawkmoth wing. A comparison of dominant features of spatial and temporal strain signatures reveals that temporal strain histories are crucial for distinguishing different aerodynamic environments. Next, regime identification is recast as a classification problem of fitting a temporal strain history into a rich matrix library of frequency features obtained from different aerodynamical regimes. By integrating additive sensor noise, we demonstrate robust classification of new strain histories into the library using ℓ1 minimization to promote sparsity in the discriminating coefficients. Finally, we examine classification accuracy for frequency windows that best separate the signals by integrating compressive sensing and bandlimited sampling of strain frequency features.

Krithika Manohar
University of Washington, USA
n/a

MS79

Uncertainty Analysis - An Operator Theoretic Approach

Abstract not available.

Igor Mezic
University of California, Santa [email protected]

MS79

Discovering Dynamics from Measurements, Inputs, and Delays

Abstract not available.

Joshua L. Proctor
Institute for Disease [email protected]

MS80

A Constructive Method for Generating Near Minimax Distance Designs

We propose a new class of space-filling designs called rotated sphere packing designs. The approach starts from the asymptotically optimal positioning of identical balls that covers the unit cube. Properly scaled, rotated, translated and extracted, such designs are near optimal in the minimax distance criterion, low in discrepancy, good in projective uniformity, and thus useful for both prediction and numerical integration purposes. The proposed designs can be constructed for any given numbers of dimensions and points.

Xu He
Chinese Academy of [email protected]

MS80

Smoothed Brownian Kriging Models for Computer Simulations

Gaussian processes have become a common framework for modeling deterministic computer simulations and producing predictions of the response surface. This talk discusses why the ubiquitous radial covariance assumption is limiting in terms of the long range behavior of the response surface. To avoid this problem, this talk proposes a new, non-radial covariance function. Twelve examples indicate that the use of this covariance can result in better predictions on realistic problems.

Matthew Plumlee
University of Michigan
[email protected]

MS80

A Submodular Criterion for Space-Filling Design

We propose a new criterion for space-filling design in computer experiments, based on the notion of covering measure, indexed by a nonnegative scalar q. It is submodular and is related to the minimax-distance criterion for large q. Its submodularity implies that a simple greedy optimization yields a design sequence that satisfies guaranteed efficiency bounds. Various examples compare the performance achieved in terms of minimax optimality for moderate input dimension d to existing space-filling methods.
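The flavor of such greedy design construction can be sketched with a generic farthest-point rule (an editorial illustration of greedy space-filling selection, not the authors' covering-measure criterion; the candidate pool and design size are assumptions): each step adds the candidate farthest from the current design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Greedy space-filling construction on [0,1]^2: starting from one point,
# repeatedly add the candidate that is farthest from the current design.
cand = rng.random((2000, 2))          # candidate pool of 2000 random points
design = [cand[0]]
for _ in range(9):
    dmin = np.min(
        np.linalg.norm(cand[:, None, :] - np.asarray(design)[None, :, :], axis=2),
        axis=1,
    )                                  # distance of each candidate to the design
    design.append(cand[np.argmax(dmin)])
design = np.asarray(design)

# The 10 selected points are well spread: every pair is far apart.
pair = np.linalg.norm(design[:, None, :] - design[None, :, :], axis=2)
min_sep = np.min(pair[np.triu_indices(10, k=1)])
```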

Luc Pronzato, Maria-Joao [email protected], [email protected]

MS80

An Iterative Procedure for Large-scale Computer Experiments

In this paper we propose an iterative procedure to fit large-scale computer experiments. This procedure avoids the computation of matrix inverses, and thus yields a class of computationally cheaper and more stable emulators. In addition, with the interpolation property in each iteration, our procedure can improve a cheap linear emulator in one or several iterations. Numerical examples are presented to illustrate our method. (Joint work with Peter Z G Qian.)

Shifeng Xiong
Chinese Academy of [email protected]

MS81

A Hybrid Ensemble Transform Filter for High Dimensional Dynamical Systems

Data assimilation is the task of combining evolution models and observational data in order to produce reliable predictions. In this talk, we focus on ensemble-based recursive data assimilation problems. Our main contribution is a hybrid filter that allows one to adaptively bridge between ensemble Kalman and particle filters. While ensemble Kalman filters are robust and applicable to high dimensional problems, particle filters are asymptotically consistent in the large ensemble size limit. We demonstrate numerically that our hybrid approach can improve the performance of both Kalman and particle filters at moderate ensemble sizes. We also show how to implement the concept of localisation in a hybrid filter, which is key to its applicability to high dimensional problems.
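One endpoint of the bridge, a stochastic ensemble Kalman analysis step, can be sketched in a few lines (an editorial example for a directly observed scalar state with assumed numbers, not the speakers' hybrid filter):

```python
import numpy as np

rng = np.random.default_rng(4)

# One stochastic EnKF analysis step for a directly observed scalar state.
ens = rng.normal(0.0, 2.0, size=100)   # forecast ensemble
y, r = 1.0, 0.5 ** 2                   # observation and observation-error variance

P = np.var(ens, ddof=1)                # ensemble forecast variance
K = P / (P + r)                        # Kalman gain
y_pert = y + rng.normal(0.0, 0.5, size=ens.size)  # perturbed observations
analysis = ens + K * (y_pert - ens)

# The analysis ensemble is pulled toward the observation and its spread shrinks.
```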

Walter Acevedo
Institut fur Mathematik, Universitat [email protected]

Sebastian Reich
Universitat [email protected]

Maria Reinhardt
Institut fur Mathematik, Universitat Potsdam
reinhardt [email protected]

MS81

Bayesian Smoothing and Learning for Multiscale Ocean Flows

We illustrate a Bayesian inference methodology that allows the joint inference of the state, equations, geometry, boundary conditions and initial conditions of dynamical models. The learning methodology combines the adaptive reduced-order Dynamically-Orthogonal (DO) stochastic partial differential equations with Gaussian Mixture Models (GMMs). To learn backward in time, we utilize a novel Bayesian smoother for high-dimensional continuous stochastic fields governed by general nonlinear dynamics, the GMM-DO smoother. Examples are provided for time-dependent fluid and ocean flows, including Strait flows with jets and eddies, and multiscale bottom gravity currents. This is joint work with our MSEAS group at MIT.

Jing Lin, Tapovan Lolla, Pierre [email protected], [email protected], [email protected]

MS81

Stability of Ensemble Kalman Filters

The ensemble Kalman filters are data assimilation methods for high dimensional, nonlinear dynamical models. Despite their widespread usage, very little is known about their long-time dynamical behavior. In this talk, we discuss criteria that guarantee the filter ensemble to be time uniformly bounded and geometrically ergodic. Violation of these criteria may lead to catastrophic filter divergence, which we illustrate through a concrete example. Finally, we show that a simple adaptive covariance inflation scheme can guarantee filter stability.

Xin T. [email protected]

Andrew Majda
Courant Institute [email protected]

David Kelly
University of [email protected]

MS81

Uncertainty Propagation for Bilinear PDEs: The Minimax Approach

Abstract not available.

Sergiy Zhuk, Tigran Tchrakian
IBM Research - [email protected], [email protected]

MS82

Low-rank Approximation Method for the Solution of Dynamical Systems with Uncertain Parameters

Low-rank approximation methods are receiving growing attention for solving multi-parametric problems, especially arising in the context of uncertainty quantification. Here, we propose a low-rank approximation method, with a subspace point of view, for the solution of nonlinear dynamical systems with uncertain parameters. The method relies on the construction of an increasing sequence of time-dependent reduced spaces in which the full model is projected to derive the reduced dynamical system. The basis of the reduced spaces is selected in a greedy fashion among samples of the solution trajectories using efficient a posteriori error estimates.

Marie Billaud-Friess
LUNAM Universite, Ecole Centrale Nantes, CNRS, [email protected]

Anthony Nouy
Ecole Centrale Nantes, [email protected]

MS82

Alternating Iteration for Low-Rank Solution of the Inverse Stochastic Stokes Equation

High-dimensional partial differential equations arise naturally if spatial coordinates are extended by time and auxiliary variables. The latter may account for uncertain inputs or design parameters. After discretization, the number of degrees of freedom grows exponentially with the number of parameters. To reduce the complexity, we can employ the separation of variables and approximate large vectors and matrices by a polylinear combination of factors, each of which depends on only one variable. Among the most powerful combinations are the Tensor Train and Hierarchical Tucker decompositions. A workhorse method to compute the factors directly is the Alternating Least Squares iteration and its extensions. However, it was too difficult to treat matrices of a saddle point structure via existing alternating schemes. Such matrices appear in an optimal control problem, where a convex optimization, constrained by a high-dimensional PDE, is solved via the Lagrangian approach. In this talk, we show how to preserve the saddle-point structure of the linear system during the alternating iteration and solve it efficiently. We demonstrate numerical examples of the inverse Stokes-Brinkman and Navier-Stokes problems.

Sergey Dolgov
Max Planck Institute for Dynamics of Complex Technical [email protected]

Peter Benner, Akwum Onwunta
Max Planck Institute, Magdeburg, [email protected], [email protected]

Martin Stoll
Max Planck Institute, [email protected]

MS82

On the Convergence of Alternating Least Squares Optimisation in Tensor Format Representations

The approximation of tensors is important for the efficient numerical treatment of high dimensional problems, but it remains an extremely challenging task. One of the most popular approaches to tensor approximation is the alternating least squares method. In our study, the convergence of the alternating least squares algorithm is considered. The analysis is done for arbitrary tensor format representations and is based on the multilinearity of the tensor format. In tensor format representation techniques, tensors are approximated by multilinear combinations of objects of lower dimensionality. The resulting reduction of dimensionality not only reduces the amount of required storage but also the computational effort.
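The alternating least squares idea can be sketched for the simplest case, a rank-1 CP approximation of a 3-way tensor (an editorial toy example under assumed sizes, not the speakers' general tensor-format analysis): each factor update, with the other factors fixed, is a linear least-squares problem with a closed-form solution.

```python
import numpy as np

rng = np.random.default_rng(5)

# Alternating least squares for a rank-1 approximation a (x) b (x) c of a
# 3-way tensor: updating one factor at a time is a linear problem.
a0 = rng.standard_normal(4)
b0 = rng.standard_normal(5)
c0 = rng.standard_normal(6)
T = np.einsum('i,j,k->ijk', a0, b0, c0)        # exactly rank-1 test tensor

a, b, c = np.ones(4), np.ones(5), np.ones(6)
for _ in range(30):
    a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
    b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
    c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))

err = np.linalg.norm(T - np.einsum('i,j,k->ijk', a, b, c))
# err is numerically zero: ALS recovers the rank-1 tensor
```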

Mike Espig
RWTH Aachen [email protected]

W. Hackbusch

Aram Khachatryan
Max-Planck Institute for Mathematics in the [email protected]

MS82

Parallel Low Rank Tensor Arithmetics for Extreme Scale UQ

Solutions of high-dimensional problems can be approximated as tensors in the Hierarchical Tucker Format, if the dependency on the parameters fulfills a low rank property. Our goal is to perform arithmetics for tensors in the Hierarchical Format, where a tensor is supposed to be distributed among several compute nodes, as it may arise from a parallel approximation algorithm. This allows for parallelization of the arithmetic operations, which are e.g. the truncation of a tensor to a certain rank or the application of an operator to a tensor. When the operator as well as the right-hand side of a PDE are given in the Hierarchical Format, one can compute the residual of a solution directly in the Hierarchical Format.

Christian Loebbert
RWTH [email protected]

MS83

A Partial Domain Inversion Approach for Large-scale Bayesian Inverse Problems in High Dimensional Parameter Spaces

While Bayesian inference is a systematic approach to account for uncertainties, it is often prohibitively expensive for inverse problems with a large-scale forward equation in a high dimensional parameter space. We shall develop a partial domain inversion strategy to only invert for distributed parameters that are well-informed by the data. This is an efficient data-driven reduction method for both the forward equation and the parameter space. This approach induces several advantages over existing methods. First, depending on the size of the truncated domain, solving the truncated forward equation could be much less computationally demanding. Consequently, the adjoint (and possibly incremental forward and adjoint equations, if a Newton-like method is used) is much less computationally intensive. Second, since the parameter to be inverted for is now restricted, the curse of dimensionality encountered when exploring the parameter spaces (to compute statistics of the posterior density, for example) is mitigated. We present both deterministic inversion and statistical inversion with Monte Carlo approaches.

Vishwas Rao
ICES, UT [email protected]


Tan Bui-Thanh
The University of Texas at [email protected]

MS83

Higher-Order Quasi-Monte Carlo Integration in Bayesian Inversion

An important problem in uncertainty quantification, in particular when considering the Bayesian approach to inverse problems, is the approximation of high-dimensional integrals. We consider novel quasi-Monte Carlo (QMC) methods for tackling such problems, which aim to outperform existing methods by achieving higher orders of convergence. The methods presented in this talk are based on interlaced polynomial lattice rules and allow, for certain integrands, a convergence behavior of the quadrature error of O(N^{-α}) with α ≥ 2 [Dick, Kuo, Le Gia, Nuyens, Schwab. Higher order QMC Petrov-Galerkin discretization for affine parametric operator equations with random field inputs. SIAM J. Numerical Analysis 52(6) (2014)], [Gantner, Schwab. Computational Higher Order Quasi-Monte Carlo Integration (2014)]. We mention the assumptions on the forward model required to obtain these better rates, and give some results of both forward and Bayesian inverse UQ of certain partial differential equation (PDE) models [Dick, Le Gia, Schwab. Higher-order quasi-Monte Carlo integration for holomorphic, parametric operator equations. Tech. Rep. 2014-23, Seminar for Applied Mathematics, ETH Zurich (2014)].
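The lattice-rule building block behind these methods can be illustrated with a first-order rank-1 lattice (an editorial sketch; the talk's interlaced polynomial lattice rules are higher-order variants, and the Fibonacci choice below is an assumption for the 2D toy case): a lattice rule integrates a Fourier mode exp(2πi h·x) exactly whenever h·z is not a multiple of N.

```python
import numpy as np

# Rank-1 lattice rule in d = 2: the Fibonacci lattice with N = 987 points and
# generating vector z = (1, 610) (two consecutive Fibonacci numbers).
N = 987
z = np.array([1, 610])
pts = (np.arange(N)[:, None] * z % N) / N   # N equal-weight QMC nodes in [0,1)^2

# Trigonometric test integrand with exact integral 1 over [0,1]^2; all of its
# frequencies h satisfy h.z != 0 (mod N), so the rule integrates it exactly.
x1, x2 = pts[:, 0], pts[:, 1]
f = 1.0 + np.cos(2*np.pi*x1) + np.cos(2*np.pi*x2) + np.cos(2*np.pi*(x1 + x2))
estimate = f.mean()                          # equals 1 up to rounding error
```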

Robert N. Gantner
ETH [email protected]

Christoph Schwab
ETH [email protected]

Josef Dick
School of Mathematics and Statistics
The University of New South [email protected]

Thong Le Gia
University of New South [email protected]

MS83

Hierarchical Bayesian Level Set Inversion

The level set approach has proven widely successful in the study of inverse problems, since its systematic development in the 1990s. Recently it has been employed in the context of Bayesian inversion, allowing for the quantification of uncertainty within reconstruction methods. However the Bayesian approach is very sensitive to the length and amplitude scales encoded in the prior probabilistic model. This paper demonstrates how the scale-sensitivity can be circumvented by means of a hierarchical approach, using a single scalar parameter. Together with careful consideration of the development of algorithms which encode probability measure equivalences as the hierarchical parameter is varied, this leads to well-defined Gibbs based MCMC methods found by alternating Metropolis-Hastings updates of the level set function and the hierarchical parameter. These methods are demonstrated to outperform non-hierarchical Bayesian level set methods.

Matthew M. Dunlop
University of [email protected]

Marco Iglesias
University of [email protected]

Andrew Stuart
Mathematics Institute, University of [email protected]

MS83

Discrete Monge-Kantorovich Approach for Sampling Bayesian Posteriors

Sampling techniques are important for large-scale, high dimensional Bayesian inference. However, general-purpose techniques such as Markov chain Monte Carlo are often intractable. We present an ensemble transform algorithm that is rooted in a discretization of the Monge-Kantorovich transport problem. The method transforms the prior ensemble to the posterior one via a sparse optimization. We develop methods to accelerate the computation of the transformation. Numerical results for large-scale Bayesian inverse problems governed by PDEs will be presented.

Aaron Myers
University of Texas at Austin - [email protected]

Kainan [email protected]

Alexandre Thiery
National University of [email protected]

Tan Bui-Thanh
The University of Texas at [email protected]

MS84

Uncertainty Quantification for Modeling Night-Time Ventilation in Stanford’s Y2E2 Building

A variety of models, ranging from box models that solve for the volume-averaged air and thermal mass temperatures to detailed CFD models, can be used to predict natural ventilation performance. The simplifications and assumptions introduced in these models can result in significant uncertainty in the results. This study investigates the predictive capability of a box model and a CFD model for night-flush ventilation in the Y2E2 building, and compares the results with available temperature measurements.

Catherine Gorle, Giacomo Lamberti
Columbia [email protected], [email protected]

MS84

Epistemic Uncertainty Quantification of the Second Moment Closure Turbulence Modeling Framework

Predictive turbulence calculations require that the uncertainty in various constituent closures is quantified. We propose that uncertainty quantification must commence at the Reynolds stress closure level. The Reynolds stress tensor provides an insufficient basis to describe the internal structure of a turbulent field, expressly its dimensionality. It is demonstrated that this leads to an inherent degree of uncertainty in classical models for turbulent flows. We quantify the propagation of this epistemic uncertainty for different regimes of mean flow. It is exhibited that the magnitude of this uncertainty depends not just upon the dimensionality of the turbulent field, but to a greater degree upon the nature of the mean flow. Finally, we analyze the qualitative and quantitative effects of the non-linear component of pressure on this systemic uncertainty.

Aashwin Mishra
Texas A&M University, College Station, [email protected]

Sharath Girimaji
Aerospace Engineering [email protected]

MS84

Reducing Epistemic Uncertainty of Fluid-Structure Instabilities

Aeroelastic problems contain a computationally intensive aerodynamic part, and a relatively cheap structural part. Two reduced-order modeling strategies based on system identification, namely AutoRegression with eXogenous inputs (ARX), and a Linear Parameter Varying (LPV) model, are employed to build a reduced aerodynamic solver. This model is then coupled with the full-order structural solver in order to update the aeroelastic solution. Propagating uncertainty on structural parameters becomes cheap, and Bayesian identification of stability boundaries is performed.

Rakesh Sarma
TU [email protected]

Richard P. Dwight
Delft University of [email protected]

MS84

Quantifying and Reducing Model-Form Uncertainties in Reynolds Averaged Navier-Stokes Simulations: An Open-Box, Physics-Based, Bayesian Approach

RANS models are the workhorse tools for turbulent flow simulations in engineering design and optimization. For many flows the turbulence models are by far the most important source of uncertainty in RANS simulations. In this work we develop an open-box, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data. The framework is a major improvement over existing black-box methods.

Heng Xiao, Jinlong Wu, Jianxun Wang
Dept. of Aerospace and Ocean Engineering, Virginia [email protected], [email protected], [email protected]

Rui Sun, Chris J. Roy
Virginia [email protected], [email protected]

MS85

Calibration of Velocity Moments from Experimental and LES Simulation Data

Reynolds Averaged Navier-Stokes (RANS) models are the workhorse of turbulence simulations, but the parameters vary from problem to problem and require setting from external sources. A key parameter is the turbulent kinetic energy, TKE, a second moment of the turbulent velocity statistics. Here we study verification, validation and uncertainty quantification for this quantity, while addressing three very distinct flows, related to inertial confinement fusion, solar energy collectors and meteorological wind currents.

James G. Glimm
Stony Brook University
Brookhaven National [email protected] or [email protected]

MS85

Uncertainty Quantification for the Simulation of Bubbly Flows

The direct numerical simulation method for bubbly flows based on the front tracking technique has been used for simulations of the propagation of linear and shock waves. The uncertainties in the quantities of interest (QoI), such as the peak pressures and the collapse time, depending on the initial conditions are studied. The goal is to reduce the uncertainties in the QoI based on the collection of initial conditions and perform efficient simulations to predict the collapse of cavitation bubbles.

Tulin Kaman, Remi Abgrall
University of [email protected], [email protected]

MS85

Model Validation for Porous Media Flows

We present a new Bayesian framework for the validation of subsurface flow models. We use a compositional model to simulate CO2 injection in a core, and compare simulated to observed saturations. We first present computational experiments involving a synthetic permeability field. They show that the framework captures almost all the information about the heterogeneity of the permeability field of the core. We then apply the framework to real cores, using data measured in the laboratory.

Felipe Pereira
Department of Mathematical Sciences
University of Texas at [email protected]

MS85

Recent Developments in High-Order Stochastic Finite Volume Method for the Uncertainty Quantification in CFD Problems

We develop the Stochastic Finite Volume (SFV) method for uncertainty quantification based on the parametrization of the probability space and the numerical solution of an equivalent high-dimensional deterministic problem. To this end, standard numerical approaches like finite volume or discontinuous Galerkin can be used to approximate the numerical solution of the parametrized equations, thus allowing for a natural way to achieve a high order of accuracy and an easy calculation of the statistical quantities of interest at the postprocessing stage. We derive error estimates for the mean and variance resulting from the SFVM and show that the convergence rates of the statistical quantities are equivalent to the convergence rates of the deterministic solution. We have also proposed an anisotropic discretization of the stochastic space which increases the computational efficiency of the SFV method. The resolution capabilities of the SFV method are compared to the Multi-Level Monte Carlo method. We finally generalize the SFVM approach and apply the DG discretization on unstructured triangular (in 2D) or tetrahedral (in 3D) grids in the physical space. We demonstrate the efficiency of the implemented method on various numerical tests.

Svetlana Tokareva
University of [email protected]

Christoph Schwab
ETH [email protected]

Siddhartha Mishra
ETH Zurich
Zurich, [email protected]

Remi Abgrall
University of [email protected]

MS86

Machine Learning Assisted Sampling Algorithm for Inverse Uncertainty Quantification

An efficient Bayesian sampling algorithm for the calibration of subsurface reservoirs is proposed. It builds on the nested sampling (NS) algorithm [Skilling, 2006] for obtaining samples from the posterior distribution of the model parameters. The main idea of nested sampling is to reformulate the posterior sampling problem into a sequence of likelihood-constrained prior sampling sub-problems with increasing complexity. Initially, the likelihood constraint is set to a low value and the prior constrained sampling is relatively easy to perform. At later stages of the NS algorithm, the likelihood constraint is set to a higher value and the constrained sampling is much harder to perform. In the current work, we combine nested sampling with a machine learning based classification algorithm to accelerate the constrained sampling step. Further, we apply an annealing technique for problems with very localized regions of attraction. This results in a two-way splitting of the posterior sampling problem, both in the likelihood value (using nested sampling) and in the likelihood surface complexity (using an annealing schedule). Numerical evaluation is presented for calibration and prior model selection of nonlinear problems representing the two-phase flow in subsurface reservoirs.
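The nested sampling loop described above can be sketched on an assumed 1D toy problem (plain rejection sampling stands in for the constrained sampling step that the classifier is meant to accelerate; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Minimal nested sampling: uniform prior on [0,1], narrow Gaussian likelihood.
# Each iteration discards the worst live point and replaces it with a prior
# draw satisfying the current likelihood constraint.
def L(t):
    return np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)

n_live = 100
live = rng.random(n_live)
Z, X = 0.0, 1.0                        # evidence accumulator, prior volume
for _ in range(800):
    i = np.argmin(L(live))
    L_min = L(live[i])
    X_new = X * n_live / (n_live + 1)  # expected shrinkage of the prior volume
    Z += L_min * (X - X_new)
    X = X_new
    while True:                        # likelihood-constrained prior sampling
        t = rng.random()
        if L(t) > L_min:
            live[i] = t
            break
Z += L(live).mean() * X                # remaining live-point contribution
# Z approximates the true evidence 0.05 * sqrt(2*pi) ~ 0.125
```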

Ahmed H. ElSheikh
Institute of Petroleum Engineering
Heriot-Watt University, Edinburgh, [email protected]

MS86

Application of Multilevel Concepts for Uncertainty Quantification in Reservoir Simulation

Uncertainty quantification is an important task in reservoir simulation. The main idea of uncertainty quantification is to compute the distribution of a quantity of interest, for example, the field oil production rate. That uncertainty then feeds into the decision making process. A statistically valid way of quantifying the uncertainty is a Markov Chain Monte Carlo (MCMC) method, such as Metropolis-Hastings (MH). MH can be prohibitively expensive when the function evaluations take a long time, as is the case in reservoir simulation. There are different techniques to accelerate the convergence of MH, e.g., Hamiltonian Monte Carlo (HMC) and Multilevel Markov Chain Monte Carlo (MLMCMC). MLMCMC is based on using the multilevel concept to accelerate the MH convergence. In this paper, we show how to use the multilevel concept in different scenarios for quantifying uncertainty in reservoir simulation. We discuss the performance of the new techniques based on results for a real field in the central Gulf of Mexico.

Doaa Elsakout
Heriot-Watt [email protected]

Mike Christie
Heriot-Watt University, [email protected]

Gabriel J. Lord
Heriot-Watt [email protected]

MS86

Accelerating Uncertainty Quantification with Proxy and Error Models

To reduce the computational cost of stochastic approaches in hydrogeology, one often resorts to approximate flow solvers (or proxies). Error models are then necessary to correct proxy responses for inference purposes. We propose a new methodology based on machine learning tools and we employ an iterative scheme to construct an error model, which allows us to improve and evaluate its performance. The results are illustrated with a nested sampling example.

Laureline Josset, Ivan Lunati
University of Lausanne
[email protected], [email protected]

MS86

Bayesian Hierarchical Model in Spatial Inverse Problems

We consider a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a random spatial field. The Bayesian approach contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources and provides a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. A Karhunen-Loeve expansion is used for dimension reduction of the random spatial field. Furthermore, we use a hierarchical Bayes model to inject multiscale data into the modeling framework. In this Bayesian framework, we show that this inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in the total variation norm. Computational challenges in this construction arise from the need for repeated evaluations of the forward model (e.g. in the context of MCMC) and are compounded by the high dimensionality of the posterior. We develop a two-stage reversible jump MCMC which has the ability to screen out bad proposals in the first, inexpensive stage. Numerical results are presented by analyzing simulated as well as real data from a hydrocarbon reservoir.

Anirban Mondal
Case Western Reserve University
Department of Mathematics, Applied Mathematics and Statistics
[email protected]

MS87

Multi-Index Stochastic Collocation

We present the novel Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique using mixed differences of spatial approximations and quadratures over the random data. We provide (i) an optimal selection of the most effective mixed differences to include in MISC, (ii) a complexity analysis and (iii) a numerical study showing its effectiveness, comparing it with other related methods available in the literature.

Raul F. Tempone
Mathematics, Computational Sciences & Engineering
King Abdullah University of Science and Technology
[email protected]

Abdul Lateef Haji Ali
King Abdullah University for Science and Technology
[email protected]

Fabio Nobile
EPFL, Switzerland
[email protected]

Lorenzo Tamellini
EPF-Lausanne, Switzerland
[email protected]

MS87

Best S-Term Polynomial Approximations Via Compressed Sensing for High-Dimensional Parameterized PDEs

In this talk we present a weighted ℓ1 minimization, with a novel choice of weights, and a new iterative hard thresholding method that both exploit the structure of the best s-term polynomial approximation of parameterized PDEs. In addition, we extend the compressed sensing (CS) approach to multidimensional Hilbert-valued signals. Finally, the recovery of our proposed methods is established through an improved estimate of the restricted isometry property (RIP), whose proof will also be discussed. Numerical tests will be provided for supporting the theory and demonstrating the efficiency of our new CS methods compared to those in the literature.

Clayton G. Webster
Oak Ridge National Laboratory
[email protected]

Hoang A. Tran
Oak Ridge National Laboratory
Computer Science and Mathematics Division
[email protected]

Abdellah Chkifa
Oak Ridge National Laboratory
[email protected]

MS87

Domain Uncertainty for Navier-Stokes Equations

We analyze sparsity and regularity of viscous, incompressible flow under uncertainty of the domain of definition. Various versions of domain variation are considered, and the continuous dependence as well as the regularity of the flow field with respect to variations in the domain are investigated. Sparsity results for the flow field with respect to a countable family of parameters for the domain are established.

Jakob Zech
ETH Zurich
[email protected]

Christoph Schwab
ETH Zurich
[email protected]

Albert Cohen
Universite Pierre et Marie Curie, Paris, France
[email protected]

MS87

Gradient-Enhanced Stochastic Collocation Approximations for Uncertainty Quantification

The talk is concerned with stochastic collocation methods with gradient information; namely, we consider the case where both function values and derivative information are available. Particular attention will be given to discrete least-squares and compressive sampling methods. Stability results for these approaches will be presented.

Tao Zhou
Institute of Computational Mathematics, AMSS
Chinese Academy of Sciences
[email protected]

MS88

Sensitivity Analysis Methods for Uncertainty Budgeting in System Design

This talk presents a new approach to defining and managing budgets on the acceptable levels of uncertainty in design quantities of interest. Examples of budgets are the allowable risk in not meeting a critical design constraint and the allowable deviation in a system performance metric. A sensitivity-based method analyzes the effects of design decisions on satisfying those budgets, and a multi-objective optimization formulation permits the designer to explore the tradespace of uncertainty reduction activities while also accounting for a cost budget.

Max Opgenoord, Karen E. Willcox
Massachusetts Institute of Technology
[email protected], [email protected]

MS88

A Full-Space Approach to Stochastic Optimization with Simulation Constraints

Full-space methods are an attractive alternative to reduced-space approaches when dealing with nonlinear or rank-deficient simulation constraints in engineering applications. We analyze the theoretical and computational challenges that arise when the simulation constraints include random inputs. For problems constrained by PDEs we discuss several discretization strategies, and present scalable solvers for the optimality systems arising in a composite-step sequential quadratic programming scheme. Numerical examples include risk-averse control of thermal fluids.

Denis Ridzal
Sandia National Laboratories
[email protected]

Drew P. Kouri
Optimization and Uncertainty Quantification
Sandia National Laboratories
[email protected]

MS88

Uncertainty Quantification for Subsurface Flow Problems

The Ensemble Kalman filter (EnKF) has had enormous impact on the applied sciences since its introduction in the 1990s by Evensen and coworkers. In this talk, we will discuss an analysis of the EnKF based on continuous-time scaling limits, which allows one to study the properties of the EnKF for fixed ensemble size. The theoretical considerations give useful insights into the properties of the method and provide tools for its systematic development and improvement. Results from realistic petroleum engineering problems with real datasets supporting the theoretical findings will be presented.

Claudia Schillings
University of Warwick, UK
[email protected]

Andrew Stuart
Mathematics Institute, University of Warwick
[email protected]

MS88

Risk Averse Optimization for Engineering Applications

Engineering applications often require solutions to multiple optimization problems, including inversion, control, and design, in addition to addressing a variety of uncertainties. We demonstrate a strategy that couples uncertainties between inversion and control under uncertainty using risk measures. We demonstrate both risk-neutral and CVaR measures in the context of finding a robust control strategy for the management of atmospheric trace gases.

Bart van Bloemen Waanders
Sandia National Laboratories
[email protected]

Harriet [email protected]

Tim Wildey
Optimization and Uncertainty Quantification Dept.
Sandia National Laboratories
[email protected]

MS89

Integrating Reduced-order Models in Bayesian Inference via Error Surrogates

Sampling methods (e.g., MCMC) for statistical inversion can require hundreds of ‘forward solves’ with a computational model. For tractability, high-fidelity models are often replaced with inexpensive surrogates, e.g., reduced-order models (ROMs). However, the epistemic uncertainty introduced by such surrogates is commonly ignored, which introduces errors into the posterior distribution. In this work, we show that employing reduced-order model error surrogates (ROMES) allows ROMs to be employed for rapid yet rigorous statistical inversion.

Kevin T. Carlberg
Sandia National Laboratories
[email protected]

Matthias Morzfeld, Fei Lu
Department of Mathematics
Lawrence Berkeley National Laboratory
[email protected], [email protected]

MS89

EIM/GEIM and PBDW Approaches with Noisy Information

The empirical interpolation method (EIM) and its generalization (GEIM), together with the parametrized-background data-weak (PBDW) approach to variational data assimilation, allow one to reconstruct states from data in cases where one has some information on the process under consideration. This information is stated through a manifold of the states we are interested in. This manifold can be, e.g., the set of all solutions to some parameter-dependent PDE. Data are generally noisy and the reconstruction has to take this fact into account. In this talk we are interested in the situation where we want to optimally place the different devices for data acquisition, those possibly i) having a different price, ii) related to a different precision in the measurement and iii) measuring different quantities. The challenge is then, within a given budget, to choose and place the devices so that the knowledge of the state will be the most accurate.

Yvon Maday
Universite Pierre et Marie Curie
and Brown University
[email protected]

MS89

Model and Error Reduction for Inverse UQ Problems Governed by Unsteady Nonlinear PDEs

We propose a new Reduced Basis Ensemble Kalman Filter (RB-EKF) for the efficient solution of inverse UQ problems governed by unsteady nonlinear PDEs. Exploiting a Bayesian framework, we combine the RB method for the efficient approximation of the forward problem, an ensemble Kalman filter, and suitable reduced error models to keep the propagation of reduction errors under control. We apply the resulting RB-EKF to identifying both state and parameters/parametric fields in some examples of interest.

Andrea Manzoni
EPFL, Switzerland
[email protected]

Stefano Pagani
Politecnico di Milano
[email protected]

Alfio Quarteroni
Ecole Pol. Fed. de Lausanne
[email protected]

MS89

Goal-oriented Error Estimation

We consider a quantity of interest (QoI), given as a functional of the solution of a large system of equations whose numerical solution is computationally costly. Metamodelling techniques yield a surrogate QoI, which is faster to compute but can deviate from the original QoI. We apply deterministic (dual-based) and probabilistic methods to derive an error bound between these two quantities of interest. This explicitly computable bound makes few assumptions about the QoI and the metamodelling technique.

Clementine Prieur
LJK / Universite Joseph Fourier Grenoble (France)
INRIA Rhone Alpes, France
[email protected]

Alexandre Janon
Universite [email protected]

Maelle Nodet
LJK / Universite Joseph Fourier Grenoble (France)
INRIA Rhone Alpes, France
[email protected]

Christophe Prieur
CNRS [email protected]

MS90

Adaptive Simulation for Large Deviations Conditioning

Abstract not available.

Florian Angeletti
National Institute for Theoretical Physics
South Africa
[email protected]

MS90

Rare Event and Subset Simulation for Discontinuous Random Variables

Subset Simulation (SS) aims at estimating extreme probabilities of the form P[f(U) > q], where f is the real-valued output of a computationally expensive code and U is a random vector. Discontinuities in the cumulative distribution function of f(U) can make the usual SS estimator inconsistent. We present a new consistent version that can be applied without any hypothesis on the discontinuities and that has the same properties as the usual estimator in the absence of discontinuities.

Clement Walter
Commissariat a l'Energie Atomique et aux Energies Alternatives
Universite Paris Diderot
[email protected]
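The standard SS estimator (for a continuous f(U), i.e. before the consistency fix the abstract introduces) expresses P[f(U) > q] as a product of conditional probabilities, each near p0, estimated with Metropolis moves restricted to the current intermediate level. A toy sketch with an assumed scalar f and a standard normal U:

```python
import math
import random

random.seed(2)

def f(u):
    # Toy performance function; a real study would run the expensive code here.
    return u

def subset_simulation(q=3.0, n=2000, p0=0.1):
    """Estimate P[f(U) > q] for U ~ N(0,1) as a product of conditional probabilities."""
    samples = [random.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    for _ in range(20):                            # cap the number of levels
        vals = sorted((f(u) for u in samples), reverse=True)
        level = vals[int(p0 * n) - 1]              # intermediate threshold
        if level >= q:                             # final level reached
            return prob * sum(1 for u in samples if f(u) > q) / n
        prob *= p0
        seeds = [u for u in samples if f(u) > level]
        samples = []
        while len(samples) < n:                    # Metropolis moves within {f > level}
            u = random.choice(seeds)
            cand = u + random.gauss(0.0, 1.0)
            # accept with the N(0,1) Metropolis ratio, subject to the constraint
            if f(cand) > level and math.log(random.random()) < 0.5 * (u * u - cand * cand):
                u = cand
            samples.append(u)
    return prob

p_hat = subset_simulation()
exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))      # 1 - Phi(3)
```

When f(U) has atoms in its distribution, the intermediate quantile `level` can sit exactly on a discontinuity, which is the failure mode the new estimator in the abstract is designed to avoid.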

MS90

A Surrogate Accelerated Multicanonical Monte Carlo Method for Uncertainty Quantification

In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over the standard Monte Carlo method.

Keyi Wu, Jinglai Li
Shanghai Jiaotong University
[email protected], [email protected]

MS90

Uniformly Efficient Simulation for the Supremum of Gaussian Random Fields

In this talk, we consider rare-event simulation of the tail probabilities of Gaussian random fields. In particular, we design importance sampling estimators that are uniformly efficient for a family of Gaussian random fields with different mean and variance functions.

Gongjun Xu
School of Statistics
University of Minnesota
[email protected]

MS91

OpenTURNS for Uncertainty Quantification

OpenTURNS is a scientific C++ library which aims at modeling the uncertainty of the inputs of a deterministic simulation and propagating the uncertainties through the model. This library is developed under the open source LGPL license by four partners: EDF, Airbus Group, Phimeca Engineering and IMACS. Also provided as a Python module, it gathers a growing community of users from various industries.

Michael Baudin
EDF R&D
[email protected]

Anne Dutfoy
EDF, Clamart, France
[email protected]

Anne-Laure Popelin
EDF R&D
[email protected]

MS91

Uranie: the Uncertainty and Optimization Platform

URANIE, developed by CEA, is a framework for uncertainty and sensitivity analysis, code calibration and design optimization. URANIE is a free and open source multi-platform framework (Linux and Windows) based on the data analysis framework ROOT, an object-oriented petaflopic computing system developed by CERN. This framework offers several useful functionalities such as advanced visualization, powerful data storage and access in several formats (ASCII, binary, SQL) and a C++ interpreter.

Fabrice Gaudier, Jean-Marc [email protected], [email protected]

Gilles Arnaud
CEA Saclay, France
[email protected]

MS91

’Mystic’: Highly-constrained Non-convex Optimization and Uncertainty Quantification

Highly-constrained, large-dimensional, and nonlinear optimizations are at the root of many difficult problems in uncertainty quantification (UQ), risk, operations research, and other predictive sciences. The prevalence of parallel computing has stimulated a shift from reduced-dimensional models toward more brute-force methods for solving high-dimensional nonlinear optimization problems. The ‘mystic’ software enables the solving of difficult UQ problems as embarrassingly parallel non-convex optimizations and, with the OUQ algorithm, can provide validation and certification of standard UQ approaches.

Michael McKerns
California Institute of Technology
[email protected]

MS91

Promethee Environment for Computer Code Inversion

The Promethee project aims at spreading the design of computer experiments into common engineering practice in many fields of application. This full-stack implementation embeds a distributed computing service, input/output coupling with the targeted computer code and a graphical user interface including the design algorithm. The case study focuses on the Stepwise Uncertainty Reduction algorithm for the determination of safety limits in neutronics criticality assessment.

Yann Richet
Institut de Radioprotection et de Surete Nucleaire
[email protected]

Gregory [email protected]

MS92

Robust Gaussian Stochastic Process Emulation

We consider parameter estimation problems in Gaussian Stochastic Processes (GaSP), which are widely used in computer model emulation. We obtain theoretical results for estimating parameters in a robust way through posterior modes. These results are applicable to many correlation functions, e.g. power exponential, Matern, rational quadratic and spherical correlations, along with general designs. Numerical results are presented to demonstrate better performance than some other methods and R packages.

Mengyang Gu, James Berger
Duke University
[email protected], [email protected]

MS92

Bayesian Inference of Manning's N Coefficient of a Storm Surge Model: An Ensemble Kalman Filter vs. a Polynomial Chaos-MCMC

Bayesian inversion is commonly used to quantify and reduce the uncertainties in coastal ocean models, especially in the framework of parameter estimation. The posterior of the parameters to be estimated is then computed conditioned on available data, either directly using an MCMC method, or by sequentially processing the data following a data assimilation method. The latter approach is heavily exploited in large-dimensional state estimation problems and has the advantage of accommodating a large number of parameters without, presumably, an increase in computational cost. However, only approximate posteriors may be obtained by this approach due to the restrictive Gaussian prior and noise assumptions that are generally imposed. This contribution aims at evaluating the effectiveness of an ensemble Kalman-based assimilation method for parameter estimation of a realistic coastal model against an adaptive MCMC Polynomial Chaos (PC)-based approach. We focus on quantifying the uncertainties of the ADCIRC community model with respect to the Manning coefficient. We test both constant and variable Manning coefficients. Using realistic OSSEs, we apply an ensemble Kalman filter and the MCMC method based on a surrogate of ADCIRC constructed by a non-intrusive PC expansion, and compare both approaches under identical scenarios. We further study the sensitivity of the estimated posteriors with respect to the inversion method parameters, including ensemble size, inflation factor, and PC order.

Adil Sripitana
King Abdullah University of Science and Technology
[email protected]

Talea Mayo
Princeton University
[email protected]

Ihab Sraj
King Abdullah University of Science and Technology
[email protected]

Clint Dawson
Institute for Computational Engineering and Sciences
University of Texas at Austin
[email protected]

Omar Knio
King Abdullah University of Science and Technology (KAUST)
[email protected]

Olivier Le Maitre
Laboratoire d'Informatique pour la Mecanique et les Sciences de l'Ingenieur
[email protected]

Ibrahim Hoteit
King Abdullah University of Science and Technology (KAUST)
[email protected]

MS92

Adjoint-Based Methods for Numerical Error Estimation and Surrogate Construction for UQ and Inverse Problems

In this talk, we will summarize recent work on using adjoint-based numerical error estimation of PDE solvers to inform surrogate construction and use in uncertainty quantification. While it is common to assume that such error is small and does not affect UQ procedures, it is often the case that for complex problems and large ensembles (especially for extreme values of parameters) some members of the ensemble can have very large numerical errors. We will show that systematic use of well-established numerical error estimates can inform surrogate construction, ameliorating these problems.

Abani K. Patra
SUNY at Buffalo
Dept of Mechanical Engineering
[email protected]

Hossein Aghakhani
University at Buffalo, SUNY
[email protected]

Elaine Spiller
Marquette University
[email protected]

MS92

Source Parameter Estimation for Volcanic Ash Transport and Dispersion Using Polynomial Chaos Type Surrogate Models

We present a framework for probabilistic volcanic ash plume forecasting using an ensemble of simulations, guided by the Conjugate Unscented Transform (CUT) method for evaluating integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply to either provide a forecast or, combined with a minimum variance condition, to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery.

E. Bruce Pitman
Dept. of Mathematics
Univ at Buffalo
[email protected]

Abani K. Patra
SUNY at Buffalo
Dept of Mechanical Engineering
[email protected]

Puneet Singla
Mechanical & Aerospace Engineering
University at Buffalo
[email protected]

Tarunraj Singh
Dept of Mechanical Engineering
SUNY at Buffalo
[email protected]

Reza Madankan
University at Buffalo
[email protected]

Peter Webley
University of Alaska Fairbanks
[email protected]

MS93

Learning About Physical Parameters: the Importance of Model Discrepancy

We illustrate through a simple example that calibration without accounting for model discrepancy (model error) may lead to biased and over-confident parameter estimates and predictions. Incorporating model discrepancy is difficult since it is confounded with the calibration parameters, which will only be resolved with meaningful priors. For our simple example, we model the model discrepancy via a Gaussian process and demonstrate that our prediction within the range of the data is correct, but only with realistic priors on the model discrepancy do we uncover the true parameter values.

Jenny Brynjarsdottir
Case Western Reserve University
Dept. of Mathematics, Applied Mathematics and Statistics
[email protected]

Anthony O'Hagan
Univ. Sheffield, UK
[email protected]

MS93

Bayesian Estimates of Model Inadequacy in Reynolds-Averaged Navier-Stokes

Errors due to turbulence modeling are the only uncontrolled source of error in RANS simulations of perfect gases. We compare two recently proposed methods for estimating turbulence modeling errors, based on model-coefficient and model-form uncertainty. We examine their accuracy in several high Reynolds number flows, and compare the resulting error estimates to discretization error. The goal is a better understanding of mesh-resolution requirements in RANS simulations.

Richard P. Dwight
Delft University of Technology
[email protected]

Wouter Edeling
TU Delft
[email protected]

Paola Cinnella
Laboratoire DynFluid, Arts et Metiers ParisTech
and Universita del Salento
[email protected]

Martin Schmelzer
TU Delft
[email protected]

MS93

A Bayesian Viewpoint to Understanding Model Error

When combining massive data sets with complex physical models, one has to take into account "model error". We consider the problem of state prediction under an imperfect model, using a Bayesian approach. The underlying process is an advection-diffusion equation. Our results show that the prediction ability of a simple working model is increased significantly when accounting for model error, even though the right dynamics are not captured.

Mark Berliner
The Ohio State University
[email protected]

Radu Herbei
Ohio State University
[email protected]

MS93

Density Estimation Framework for Model Error Quantification

The importance of model error assessment during physical model calibration is widely recognized. We highlight the challenges arising in conventional statistical methods accounting for model error, and develop a density estimation framework to quantify and propagate uncertainties due to model errors. The reformulated calibration problem is then tackled with Bayesian techniques. We demonstrate the key strengths of the method on both synthetic cases and on a few practical applications.

Khachik Sargsyan
Sandia National Laboratories
[email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA, USA
[email protected]

Jason Bender
Department of Aerospace Engineering and Mechanics
University of Minnesota
[email protected]

Chi Feng, Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected], [email protected]

MS94

Multilevel Drift-Implicit Tau Leap

In biochemical reactive systems with small copy numbers of one or more reactant molecules, stochastic effects are often dominant. In such systems, those characterized by having simultaneously fast and slow time scales, the existing discrete state-space stochastic path simulation methods, such as the stochastic simulation algorithm (SSA) and the explicit tau-leap method, can be very slow. In this work, we propose a fast and efficient multilevel Monte Carlo method that uses drift-implicit tau-leap approximations.

Chiheb BenHammouda
King Abdullah University for Science and Technology
[email protected]

MS94

MLMC for Multi-Dimensional Reflected Diffusions

The talk will discuss the combination of adaptive time stepping with multilevel Monte Carlo (MLMC) applied to multi-dimensional reflected diffusions. By refining the timestep as the path approaches the boundary, the accuracy is significantly improved for a given computational cost per path. Overall, it is possible to achieve a root-mean-square accuracy of ε for a computational cost which is O(ε^-2), and this could potentially be further improved by using multilevel quasi-Monte Carlo methods. Both numerical results and supporting numerical analysis will be presented.

Mike Giles
University of Oxford, United Kingdom
[email protected]

Kavita Ramanan
Brown University
kavita [email protected]
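The O(ε^-2) cost claim rests on the standard MLMC telescoping sum E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], with fine and coarse paths on each level driven by the same Brownian increments. The sketch below illustrates this for a plain (non-reflected) Ornstein-Uhlenbeck diffusion with uniform time steps; the adaptive, reflected setting of the talk is not reproduced, and all numerical choices here are illustrative assumptions.

```python
import math
import random

random.seed(3)

def euler_pair(level, m0=2):
    """Coupled fine/coarse Euler paths of dX = -X dt + dW on [0,1], X_0 = 1,
    sharing the same Brownian increments (the key to small level variances)."""
    n_f = m0 ** level
    dt_f = 1.0 / n_f
    xf = xc = 1.0
    dw_c = 0.0
    for i in range(n_f):
        dw = random.gauss(0.0, math.sqrt(dt_f))
        xf += -xf * dt_f + dw
        dw_c += dw
        if (i + 1) % m0 == 0 and level > 0:     # coarse step uses summed increments
            xc += -xc * (m0 * dt_f) + dw_c
            dw_c = 0.0
    return xf, xc

def mlmc(L=6, n_l=(40000, 20000, 10000, 5000, 2500, 1250, 625)):
    """Telescoping-sum estimator of E[X_1^2]; fewer samples on finer levels."""
    est = 0.0
    for level in range(L + 1):
        acc = 0.0
        for _ in range(n_l[level]):
            xf, xc = euler_pair(level)
            acc += xf * xf - (xc * xc if level > 0 else 0.0)
        est += acc / n_l[level]
    return est

est = mlmc()
exact = math.exp(-2.0) + 0.5 * (1.0 - math.exp(-2.0))   # E[X_1^2] for this OU process
```

Note how the per-level sample counts decrease geometrically while the per-path cost grows geometrically, keeping every level's cost comparable; the refinement near a reflecting boundary in the talk is a path-dependent version of the same budget-balancing idea.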

MS94

Monte Carlo Methods and Mean Field Approximation for Stochastic Particle Systems

I discuss using single-level and multilevel Monte Carlo (MLMC) methods to compute quantities of interest of a stochastic particle system in the mean field. In this context, the stochastic particles follow a coupled system of Ito stochastic differential equations (SDEs). Moreover, this stochastic particle system converges to a stochastic mean-field limit as the number of particles tends to infinity. In this talk, I start by recalling the results that first appeared in (Haji-Ali, 2012), where I developed different versions of MLMC for particle systems, both with respect to time steps and number of particles, and proposed using particle antithetic estimators for MLMC. In that work, I showed moderate savings of MLMC compared to Monte Carlo. Next, I expand on these results by proposing the use of our recent Multi-index Monte Carlo method to obtain improved convergence rates.

Abdul Lateef Haji Ali
King Abdullah University for Science and Technology
[email protected]

Raul F. Tempone
Mathematics, Computational Sciences & Engineering
King Abdullah University of Science and Technology
[email protected]

MS94

Multilevel Monte Carlo Approximation of Quantiles

The multilevel Monte Carlo method has been established as a computationally efficient sampling method for approximating moments of uncertain system outputs. Its use for statistical objects that cannot be expressed as moments, however, has not yet received much attention. An example of such an object is the quantile, which has become an integral part of many applications, such as robust optimization. In this talk we discuss multilevel Monte Carlo techniques allowing quantile estimation.

Sebastian Krumscheid
Ecole Polytechnique Federale de Lausanne (EPFL)
[email protected]

Fabio Nobile
EPFL, Switzerland
[email protected]

Michele Pisaroni
Ecole polytechnique federale de Lausanne
[email protected]

MS95

High Dimensional Bayesian Optimization Via Random Embeddings Applied to Automotive Design

Bayesian optimization using random embeddings has been shown to efficiently deal with a high number of variables under the assumption that only a few of them are actually influential. We discuss in particular the choice of the random matrix defining the linear embedding and of the covariance kernel for the Gaussian process surrogate model. We further adapted the method to deal with a test case in car crash-worthiness in constrained and multi-objective optimization setups.

Mickael Binois
Mines Saint-Etienne and Renault
[email protected]

David Ginsbourger
Department of Mathematics and Statistics, University of Bern
[email protected]

Roustant Olivier
Mines Saint-Etienne
[email protected]
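The random-embedding idea itself is easy to sketch: search in a low-dimensional space z and evaluate the objective at the lifted, box-clipped point A z. The toy below replaces the Gaussian-process acquisition loop with plain random search, so it illustrates only the embedding, not Bayesian optimization proper; the dimensions and objective are invented for illustration.

```python
import random

random.seed(4)

D, d_low = 100, 4        # ambient dimension, embedding dimension

def f(x):
    # Only the first two of the 100 coordinates matter (unknown to the optimizer).
    return (x[0] - 0.2) ** 2 + (x[1] + 0.3) ** 2

# Random Gaussian embedding matrix A (D x d_low), drawn once.
A = [[random.gauss(0.0, 1.0) for _ in range(d_low)] for _ in range(D)]

def lift(z):
    # Map low-dimensional z into the ambient box [-1, 1]^D via x = clip(A z).
    return [max(-1.0, min(1.0, sum(A[i][j] * z[j] for j in range(d_low))))
            for i in range(D)]

best = float("inf")
for _ in range(2000):                       # random search stands in for the
    z = [random.uniform(-1.0, 1.0) for _ in range(d_low)]   # GP acquisition step
    best = min(best, f(lift(z)))
```

Because the effective subspace is low-dimensional, a good optimum is reachable through almost any random A of sufficient width, which is the property the abstract's choice-of-matrix discussion refines.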

MS95

Multi-Objective Bayesian Optimization: Calibrating a Tight Gas Condensate Well in Western Canada

The history matching process of physical high-dimensional models is a costly task. We apply Gaussian Processes for both single-objective and multi-objective calibration using real multi-stage horizontal gas condensate well data from the Montney Formation in Western Canada. The results show a quick convergence in less than 100 simulations for this 20-dimensional problem. The resulting Pareto front showed a quick convergence with a good match quality in less than 200 simulations.

Ivo Couckuyt
University of Ghent
[email protected]

Hamidreza Hamdi
University of Calgary
[email protected]

Tom Dhaene
Department of Information Technology (INTEC)
Ghent University-iMinds, Ghent, Belgium
[email protected]

MS95

Scalable Bayesian Optimization

Bayesian optimization is an effective approach to find the global extremum of functions that are expensive to evaluate. In this work we propose a scalable parallelization of this technique that outperforms, in wall-clock time, previous results in the literature. A new open source Python framework, GPyOpt, with an efficient implementation of this and other Bayesian optimization techniques, is also presented as a complement to our work.

Javier Gonzalez, Zhenwen Dai
University of Sheffield
[email protected], [email protected]

Philipp Hennig
Max Planck Institute for Intelligent Systems
[email protected]

Neil Lawrence
University of Sheffield
[email protected]

MS96

A Variational Bayesian Sequential Monte Carlo Filter for Large-Dimensional Systems

This work considers the Bayesian filtering problem in high-dimensional nonlinear state-space systems, for which classical Particle Filters (PFs) are impractical due to the prohibitive number of particles required to obtain reasonable performance. To deal with the curse of dimensionality, we resort to the variational Bayesian approach to split the state-space into low-dimensional subspaces, and subsequently apply one PF to each subspace. The propagation of each particle in the resulting filter requires generating one particle only, in contrast with the existing multiple PFs for which a set of (children) particles needs to be generated. After describing the proposed filter, we present numerical results to evaluate its behavior and to compare its performance against the standard PF and a multiple PF.

Boujemaa [email protected]


Ibrahim Hoteit
King Abdullah University of Science and Technology (KAUST)
[email protected]

MS96

Forward Backward Doubly Stochastic Differential Equations and the Optimal Filtering of Diffusion Processes

Abstract not available.

Feng Bao, Clayton G. Webster
Oak Ridge National Laboratory
[email protected], [email protected]

MS96

Filtering Compressible and Incompressible Turbulent Flows with Noisy Lagrangian Tracers

Abstract not available.

Nan Chen
New York University
[email protected]

Andrew Majda
Courant Institute
[email protected]

Xin T. [email protected]

MS96

Efficient Assimilation of Observations Via a Localized Particle Filter in High-Dimensional Geophysical Systems

We introduce a localized particle filter (PF) that has potential for high-dimensional data assimilation applications in geoscience. Unlike standard PFs, the local PF uses a tapering function to reduce the influence of observations on the weights needed for approximating posterior probabilities, thus decreasing the number of particles required to prevent filter divergence. The local PF and ensemble Kalman filters are compared in atmospheric models to demonstrate the benefits of the new method over linear filters.

Jonathan [email protected]

Jeffrey Anderson
National Center for Atmospheric Research
Institute for Math Applied to Geosciences
[email protected]
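The tapering of observation influence can be sketched as location-dependent particle weights: each observation enters the log-weight scaled by a compactly supported taper of its distance to the analysis location, instead of every observation contributing fully to a single global weight. The linear taper and toy model below are illustrative assumptions, not the authors' scheme.

```python
import math
import random

random.seed(5)

n_x, n_p, obs_sd = 20, 200, 0.5          # grid size, particles, obs noise std
truth = [math.sin(0.3 * i) for i in range(n_x)]
obs = [t + random.gauss(0.0, obs_sd) for t in truth]
# prior ensemble: particles drawn from N(0,1) at every grid point
particles = [[random.gauss(0.0, 1.0) for _ in range(n_x)] for _ in range(n_p)]

def local_weights(grid_i, radius=2.0):
    """Normalized particle weights at one grid point; distant observations
    are down-weighted by a compactly supported linear taper."""
    logw = []
    for p in particles:
        lw = 0.0
        for j, y in enumerate(obs):
            rho = max(0.0, 1.0 - abs(grid_i - j) / radius)   # taper function
            lw += -0.5 * rho * ((y - p[j]) / obs_sd) ** 2
        logw.append(lw)
    m = max(logw)
    w = [math.exp(l - m) for l in logw]
    s = sum(w)
    return [x / s for x in w]

# location-wise weighted posterior-mean estimate
post = [sum(w * p[i] for w, p in zip(local_weights(i), particles))
        for i in range(n_x)]
err_prior = sum(t ** 2 for t in truth) / n_x            # prior mean is zero
err_post = sum((a - t) ** 2 for a, t in zip(post, truth)) / n_x
```

With a global weight, the same 200 particles would collapse onto a handful of members once all 20 observations are multiplied together; the taper keeps the effective ensemble size manageable at each location, which is the divergence-prevention mechanism the abstract describes.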

MS97

Ordering Heuristics for Minimal Rank Approximations in Tensor-Train Format

The efficiency of the tensor-train (TT) decomposition of tensors resulting from the discretization of multi-dimensional functions relies on the existence of low-rank approximations of different unfoldings. The order in which the associated dimensions are considered can lead to dramatically different TT-ranks, however, and thus deteriorate the compression level (which scales quadratically with the ranks). We will present heuristics which can be used to find improved orderings for TT approximation, at the expense of some limited pre-processing costs.

Daniele Bigoni, Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected], [email protected]

MS97

Adapted Tensor Train Approximations of Multivariate Functions Using Least-Squares Methods

We propose low-rank tensor train (TT) approximation methods using least squares for the approximation of a multivariate function from random, noise-free observations. Here we present different algorithms for the construction of the tensor train approximations, based either on density matrix renormalization group (DMRG) or alternating least squares (ALS) methods, that are designed so as to provide adaptation of the rank and adaptation of the univariate bases.

Mathilde Chevreuil
LUNAM Universite, Universite de Nantes, CNRS
[email protected]

Loic Giraldi
KAUST, Kingdom of Saudi Arabia
[email protected]

Anthony Nouy
Ecole Centrale Nantes
[email protected]

Prashant [email protected]

MS97

Low-Rank Tensor Approximations for the Computation of Rare Event Probabilities

Rare-event simulation with complex high-dimensional computational models remains a challenging problem. This work demonstrates that meta-models belonging to the class of low-rank tensor approximations can provide an accurate representation of the PDF of the model response at the tails, thus leading to efficient estimation of rare-event probabilities. Because the number of unknowns in such meta-models grows only linearly with the input dimension, high-dimensional problems are addressed by relying on relatively small experimental designs.

Katerina Konakli, Bruno Sudret
ETH Zurich
[email protected], [email protected]

MS97

Low Rank Approximation-Based Quadrature for Fast Evaluation of Quantum Chemistry Integrals

A new method based on low rank tensor decomposition for fast evaluation of high dimensional integrals in quantum chemistry is proposed. The integrand is first approximated in a suitable low rank canonical tensor subset with only a few integrand evaluations using classical alternating least squares. This allows representation of a high dimensional integrand function as a sum of products of low dimensional functions. In the second step, a low dimensional Gauss-Hermite quadrature rule is used to integrate this low rank representation, thus avoiding the curse of dimensionality. Numerical tests on a water molecule demonstrate the accuracy of this method using very few samples as compared to Monte Carlo and its variants.

Prashant [email protected]

Khachik Sargsyan
Sandia National Laboratories
[email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA
[email protected]

Matthew Hermes, So Hirata
University of Illinois Urbana-Champaign
[email protected], [email protected]
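The two-step scheme — a canonical low-rank form of the integrand, then one-dimensional Gauss-Hermite rules applied factor by factor — can be sketched as follows. The rank-2 integrand below is a hand-picked stand-in; in the method described above the univariate factors would come from the alternating-least-squares fit.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Canonical rank-R form f(x) = sum_r prod_d g_{r,d}(x_d); here a
# hand-picked rank-2 example in d dimensions stands in for the output
# of the alternating-least-squares step.
d = 4
terms = [
    [lambda x: np.exp(-x**2)] * d,   # term 1: exp(-||x||^2)
    [lambda x: x**2] * d,            # term 2: prod_d x_d^2
]

# Probabilists' Gauss-Hermite rule, normalized to the N(0,1) weight.
nodes, weights = hermegauss(20)
weights = weights / np.sqrt(2.0 * np.pi)

# E[f(X)] for X ~ N(0, I_d): one 1-D quadrature per univariate factor,
# cost O(R * d * n) instead of the full tensor grid's O(n^d).
estimate = sum(
    np.prod([weights @ g(nodes) for g in term]) for term in terms
)

exact = 3.0 ** (-d / 2) + 1.0   # E[e^{-X^2}] = 3^{-1/2} per dim; E[X^2] = 1
```

Because every factor is one-dimensional, the quadrature cost grows linearly in the dimension — the curse of dimensionality is carried entirely by the (low) canonical rank.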

MS98

Multilevel Sequential Monte Carlo Samplers

Sequential Monte Carlo (SMC) samplers are a popular and versatile class of algorithms for computationally intensive inference problems. One constructs a sequence of intermediate distributions and associated Markov chain kernels such that sequential importance sampling and resampling can guide the samples to the regions of high probability, hence avoiding degeneracy of the weights and explosion of the variance. For problems which admit a hierarchy of approximation levels, the multilevel Monte Carlo (MLMC) sampling framework enables a reduction of cost by leveraging a telescopic sum of increment estimators with vanishing variance. These ideas can be combined in a very natural way to yield the MLSMC sampling algorithm for Bayesian inverse problems. The theoretical results will be illustrated by a numerical example of permeability inversion through an elliptic PDE given observations of the pressure.

Kody Law
Oak Ridge National Laboratory
[email protected]

Alexandros Beskos
Department of Statistical Science
University College London
[email protected]
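The telescoping identity underlying the multilevel framework, E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}], can be sketched on a toy problem (the quadrature-based level hierarchy below is an illustrative stand-in for a PDE solve, not the MLSMC algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(2)

def P(x, level):
    """Level-l approximation of g(x) = integral_0^1 cos(x t) dt via the
    midpoint rule with 2**level points -- a cheap stand-in for a PDE
    solve on mesh level l."""
    n = 2 ** level
    t = (np.arange(n) + 0.5) / n
    return np.cos(np.outer(x, t)).mean(axis=1)

def mlmc(L, N):
    """Telescoping estimate of E[g(X)], X ~ N(0,1):
    E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}]."""
    est = 0.0
    for level in range(L + 1):
        x = rng.normal(size=N[level])
        if level == 0:
            est += P(x, 0).mean()
        else:
            # Coupled increments share the same samples x, so their
            # variance vanishes as the levels get finer.
            est += (P(x, level) - P(x, level - 1)).mean()
    return est

# Many samples on cheap coarse levels, few on expensive fine ones.
estimate = mlmc(L=4, N=[100000, 20000, 5000, 1000, 500])
# Reference: E[g(X)] = integral_0^1 exp(-t^2/2) dt, roughly 0.8556.
```

Because the increments have vanishing variance, the fine (expensive) levels need only a handful of samples — the mechanism MLSMC transfers to the SMC setting.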

MS98

Metropolis-Hastings Algorithms in Function Space for Bayesian Inverse Problems

We consider Markov Chain Monte Carlo methods adapted to a Hilbert space setting. Such algorithms occur in Bayesian inverse problems where the solution is a probability measure on a function space according to which one would like to integrate or sample. We focus on Metropolis-Hastings algorithms and, in particular, we introduce and analyze a generalization of the existing pCN proposal. This new proposal allows one to exploit the geometry or anisotropy of the target measure, which in turn might improve the statistical efficiency of the corresponding MCMC method. In numerical experiments the resulting methods displayed superior statistical efficiency to pCN and behaved robustly with respect to dimension and noise variance.

Oliver G. Ernst, Bjorn Sprungk
TU Chemnitz
Department of Mathematics
[email protected], [email protected]

Daniel Rudolf
Institute for Mathematics
University of Jena
[email protected]
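For reference, the standard pCN proposal that the talk generalizes can be sketched in a few lines (the toy observation model, noise variance, and step size below are illustrative assumptions): because the proposal is reversible with respect to the Gaussian prior, the accept ratio involves only the likelihood, and the acceptance rate does not degenerate as the discretization dimension grows.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy discretized function-space setting: N(0, I) prior on `dim`
# coefficients, and one noisy scalar observation of their mean.
dim = 50
y, noise_var = 1.5, 0.1

def log_lik(u):
    return -0.5 * (y - u.mean()) ** 2 / noise_var

def pcn(n_steps, beta=0.2):
    """Preconditioned Crank-Nicolson Metropolis-Hastings. The proposal
    v = sqrt(1 - beta^2) * u + beta * xi, xi ~ N(0, I), is reversible
    w.r.t. the prior, so the accept ratio uses only the likelihood."""
    u = rng.normal(size=dim)
    chain = np.empty((n_steps, dim))
    accepted = 0
    for n in range(n_steps):
        v = np.sqrt(1.0 - beta**2) * u + beta * rng.normal(size=dim)
        if np.log(rng.uniform()) < log_lik(v) - log_lik(u):
            u, accepted = v, accepted + 1
        chain[n] = u
    return chain, accepted / n_steps

chain, acc_rate = pcn(5000)
```

The generalization described above replaces the isotropic proposal covariance with one adapted to the geometry of the target measure.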

MS98

Multilevel Markov Chain Monte Carlo Method for Bayesian Inverse Problems

Abstract not available.

Viet Ha Hoang
Nanyang Tech. University
[email protected]

Christoph Schwab
ETH Zurich
[email protected]

Andrew Stuart
Mathematics Institute, University of Warwick
[email protected]

MS98

Accelerating Metropolis Algorithms Using Transport Maps

We present a new approach for efficiently characterizing complex probability distributions, using a combination of optimal transport maps and Metropolis corrections. We use continuous transportation to transform typical Metropolis proposal mechanisms into non-Gaussian proposal distributions. Our approach adaptively constructs a Knothe-Rosenblatt rearrangement using information from previous MCMC states, via the solution of convex and separable optimization problems. We then discuss key issues underlying the construction of transport maps in high dimensions, and present ways of exploiting sparsity and low-rank structure.

Matthew Parno, Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected], [email protected]

MS99

Solver- vs. Grid-Hierarchies for Multi-Level Monte Carlo Sampling of Subsurface Flow and Transport

For fast sampling in the context of PDEs, multilevel Monte Carlo (MLMC) traditionally combines a multigrid technique with Monte Carlo (MC) sampling. In our work, we apply, instead of grids of different resolution, a hierarchy of solution methods of different accuracy. In the context of two-phase flow and transport, we demonstrate that the resulting solver MLMC leads, like traditional MLMC, to significant speedups while offering greater flexibility in certain applications.

Florian Muller
Institute of Fluid Dynamics
ETH Zurich
[email protected]

Daniel W. Meyer
Institute of Fluid Dynamics
[email protected]

Patrick Jenny
Institute of Fluid Dynamics
ETH Zurich
[email protected]

MS99

Reduced Hessian Method for Uncertainty Quantification in Ocean State Estimation

Reduced rank Hessian-based method for quantifying uncertainty covariance in ocean state estimation is implemented by direct second order Algorithmic Differentiation of a nonlinear General Circulation Model. Dimensionality of the least-squares misfit Hessian inversion is reduced by omitting its nullspace, as an alternative to suppressing it by regularization, excluding from the computation the uncertainty subspace unconstrained by observations. Inverse and forward uncertainty propagation schemes are designed for assimilating observation uncertainty and evolving it through the model.

Alex Kalmikov
Massachusetts Institute of Technology
Cambridge (MA)
[email protected]

Patrick Heimbach
EAPS, Massachusetts Institute of Technology, and
ICES, University of Texas at Austin
[email protected]

Carl [email protected]

MS99

Detecting Low Dimensional Structure in Uncertainty Quantification Problems

Detecting low dimensional structure is useful for the uncertainty quantification of complex systems as it captures the main properties of the system. We propose a method to achieve this goal by combining the active subspace and compressive sensing methods. With several iterations, this approach is able to detect important dimensions which are linear combinations of the original parameters. We use PDEs and a biomolecular system to demonstrate the efficiency of this method.

Xiu Yang, Huan Lei, Nathan Baker
Pacific Northwest National Laboratory
[email protected], [email protected], [email protected]

Guang Lin
Purdue University
[email protected]
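The active-subspace ingredient of such an approach can be sketched as follows (the ridge test function and sample sizes below are illustrative assumptions): eigenvectors of the sampled gradient covariance reveal the important directions — linear combinations of the original parameters — and a large spectral gap flags how many of them matter.

```python
import numpy as np

rng = np.random.default_rng(4)

# Ridge test function f(x) = sin(w^T x): the gradient always points
# along w, so the active subspace is one-dimensional.
dim = 10
w = rng.normal(size=dim)
w /= np.linalg.norm(w)

def grad_f(x):
    # grad f(x) = cos(w^T x) * w, evaluated row-wise for samples x.
    return np.cos(x @ w)[:, None] * w[None, :]

# Active subspace: eigendecomposition of the sampled gradient
# covariance C = E[grad f grad f^T].
X = rng.normal(size=(2000, dim))
G = grad_f(X)
C = G.T @ G / len(X)
eigval, eigvec = np.linalg.eigh(C)          # ascending order

# A large spectral gap indicates a 1-D active subspace; the leading
# eigenvector recovers the ridge direction w up to sign.
gap = eigval[-1] / max(eigval[-2], 1e-300)
direction = eigvec[:, -1]
```

In the method described above, compressive sensing would then be used on the reduced coordinates, iterating to refine the detected directions.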

MS99

Speeding Up Variational Inference

The four-dimensional variational (4D-Var) approach formulates data assimilation as an optimization problem and computes a maximum a posteriori estimate of the initial condition and parameters of the dynamical system under consideration. The solution of large 4D-Var problems poses considerable challenges: the construction and validation of an adjoint model is an extremely labor-intensive process; strong-constraint 4D-Var is computationally expensive; and 4D-Var does not include posterior uncertainty estimates. In this talk we present several ideas to tackle these challenges: performing efficient computations using reduced order model surrogates, exploiting time parallelism, and constructing smoothers that sample directly from the posterior distribution.

Adrian Sandu
Virginia Polytechnic Institute and State University
[email protected]

Vishwas Rao
ICES, UT Austin
[email protected]

MS100

Bayesian Filtering in Cardiovascular Models

Uncertainty quantification plays an important role in modeling cardiovascular dynamics, especially in rendering physiological models patient-specific for use in clinical decision-making, where it is essential to determine output uncertainty. Given experimental data and models with numerous sources of uncertainty, such as the underlying, perhaps time-varying system parameters, we show how Bayesian filtering techniques can be employed in this setting to estimate unknown model states and parameters, providing a natural measure of uncertainty in the estimation.

Andrea Arnold
North Carolina State University
[email protected]

Brian Carlson
University of Michigan, Ann Arbor
[email protected]

Mette S. Olufsen
Department of Mathematics
North Carolina State University
[email protected]

MS100

Sparse Mode Decomposition for High Dimensional Random Fields, and Its Application to SPDEs

Inspired by the recent developments in data sciences, we introduce an intrinsic sparse mode decomposition method for high dimensional random fields. This sparse representation of the random field allows us to break a high dimensional stochastic field into many spatially localized modes with low stochastic dimension locally. Such a decomposition enables us to break the curse of dimensionality in our local solvers. To obtain such a representation, we first decompose the covariance function into a low-rank part plus sparse parts. We then extract the spatially localized modes from the sparse part by minimizing a surrogate of the total area of the support. Moreover, we provide an efficient algorithm to solve it. As an application, we apply our method to solve elliptic PDEs with random media having high stochastic dimension. Using this localized representation, we propose various combinations of local and global solvers that achieve different levels of accuracy and efficiency. At the end of the talk, I will also discuss other applications of the intrinsic sparse mode extraction. This work is in collaboration with Thomas Y. Hou and Pengchuan Zhang.

Qin Li
University of Wisconsin-Madison
[email protected]

Thomas Hou
California Institute of Technology
[email protected]

Pengchuan Zhang
[email protected]

MS100

Efficient Evaluation of Hessian Operators Arising in Large-Scale Inverse Transport Problems

Recent developments in infinite-dimensional Bayesian inverse problems with PDE constraints have demonstrated that an efficient evaluation of the Hessian operator is critical. Sampling the associated high-dimensional probability density functions has only recently become feasible. Hessian-based proposals can be used to guide the sampler to regions with higher acceptance probability. Applications that drive our research are in medical imaging with a number of unknowns that, upon discretization, is in the millions or billions. Explicitly constructing the Hessian is therefore out of the question, not only due to memory requirements, but also because it requires one expensive forward and adjoint PDE solve per unknown. Our solver operates in a deterministic setting; it provides a MAP point estimate. We will describe efficient ways for evaluating the Hessian operator and discuss ideas for its approximation based on problem-specific preconditioners for the associated reduced space KKT system. We will see that the optimality systems are multiphysics systems of nonlinear, multicomponent PDEs, which are challenging to solve in an efficient way. We will showcase convergence and scalability results of our solver for 3D synthetic and real-world problems of varying complexity.

Andreas Mang
The University of Texas at Austin
The Institute for Computational Engineering and Sciences
[email protected]

George Biros
University of Texas at Austin
[email protected]

MS100

A Hierarchical Bayesian Approach to Pharmacokinetics Study

Pharmacokinetic models consist of a large set of ordinary differential equations with many model parameters and large uncertainties. We introduce a hierarchical Bayesian framework to account for the model parameter uncertainty across patients using a case study of brain tumor treatment. This uncertainty is explicitly separated from the typical additive noise that represents other model uncertainties and measurement noise. We present an efficient approximation method to handle the intensive computations required by the hierarchical model.

Stephen [email protected]

Panagiotis Angelikopoulos
Computational Science and Engineering Lab
Institute for Computational Science DMAVT, ETH Zurich, Switzerland
[email protected]

Petros Koumoutsakos
Computational Science, ETH Zurich, Switzerland
[email protected]

MS101

Uncertainty Quantification for Complex Multiscale Systems

Multiscale modeling efforts must adequately quantify the effect of both parameter uncertainty and model discrepancy across scales. Advancements in uncertainty quantification using Bayesian calibration are described: a dynamic discrepancy approach to upscale uncertainty, functional inputs and extrapolation uncertainty, and a large parameter space. For emulation and discrepancy modeling, a Bayesian Smoothing Spline ANOVA (BSS-ANOVA) approach is utilized. These approaches are applied here to applications in chemical kinetics and carbon capture technology, with wide-ranging impact.

Sham Bhat
Statistical Sciences Group
Los Alamos National Laboratory
[email protected]

MS101

Solving Stochastic Elliptic Boundary Value Problems with a Multi-Scale Domain Decomposition Method

In subsurface flow models, quantifying uncertainty of coefficients which have multiple scales of heterogeneity presents a challenge. Typically, Monte Carlo methods have been used to model this uncertainty. Separately, multiscale finite element methods have been developed to accurately capture the effects of spatial heterogeneity. Since the heterogeneity of individual realizations can differ drastically, a direct use of multiscale methods in Monte Carlo simulations is problematic. Furthermore, Monte Carlo methods are known to be very expensive, as a lot of samples are required to adequately characterize the random component of the solution. In this study, we utilize a stochastic representation method that exploits the solution structure of the random process in order to construct a problem dependent stochastic basis. Using this stochastic basis representation, a set of coupled yet deterministic equations can be constructed. To reduce the computational cost of solving the coupled system, we develop a Multi-Scale Domain Decomposition method which utilizes Robin transmission conditions. In the developed Multi-Scale method, enrichment of the Multi-Scale solution space can be performed at multiple levels, offering a balance between computational cost and accuracy of the found solution.

Bradley Mccaskill, Prosper Torsu
University of Wyoming
[email protected], [email protected]

Victor E. Ginting
Department of Mathematics
University of Wyoming
[email protected]

MS101

Balancing Sampling and Discretization Error in the Context of Multi-Level Monte Carlo

Multilevel Monte Carlo (MLMC) involves two types of errors: a sampling and a discretization error. In this work, we present an error balancing strategy that follows an optimal total error vs. total work decay. We estimate the two errors on the fly based on power law assumptions. For testing, we consider two-phase flow and transport in a heterogeneous porous medium and find growing performance gains for an increasing number of MLMC levels.

Florian Muller
Institute of Fluid Dynamics
ETH Zurich
[email protected]

Daniel W. Meyer
Institute of Fluid Dynamics
[email protected]

Patrick Jenny
Institute of Fluid Dynamics
ETH Zurich
[email protected]
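The balancing of bias and sampling error under power-law assumptions can be sketched as follows (the exponents and constants below are illustrative placeholders, not fitted on-the-fly estimates; the budget split follows the usual MLMC analysis):

```python
import numpy as np

# Power-law assumptions of the standard MLMC analysis (constants are
# illustrative placeholders, not fitted values):
#   bias           |E[P - P_L]| ~ c_a * 2**(-alpha * L)
#   level variance  V_l         ~ c_b * 2**(-beta * l)
#   level cost      C_l         ~ c_g * 2**( gamma * l)
alpha, beta, gamma = 2.0, 2.0, 1.0
c_a, c_b, c_g = 1.0, 1.0, 1.0

def plan(eps):
    """Choose the number of levels and per-level sample counts so the
    discretization (bias) and sampling (variance) errors are balanced,
    each taking half of the squared RMSE budget eps**2."""
    # Bias constraint: c_a * 2**(-alpha * L) <= eps / sqrt(2).
    L = int(np.ceil(np.log2(np.sqrt(2.0) * c_a / eps) / alpha))
    V = c_b * 2.0 ** (-beta * np.arange(L + 1))
    C = c_g * 2.0 ** (gamma * np.arange(L + 1))
    # Optimal allocation N_l ~ sqrt(V_l / C_l), scaled so the total
    # sampling variance sum_l V_l / N_l meets eps**2 / 2.
    N = np.ceil(np.sqrt(V / C) * np.sum(np.sqrt(V * C)) / (eps**2 / 2.0))
    return L, N.astype(int)

L, N = plan(eps=1e-3)
```

In an on-the-fly strategy like the one described above, the constants and exponents would be re-estimated from the running level statistics rather than fixed in advance.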

MS101

Uncertainty Quantification in Cloud Cavitation Collapse Using Multi-Level Monte Carlo

Cloud cavitation collapse pertains to the erosion of liquid-fuel injectors, hydropower turbines, and ship propellers, and is harnessed in shock wave lithotripsy. To assess the development of extreme pressure spots, we performed numerical two-phase flow simulations of cloud cavitation collapse containing 50,000 bubbles using 1 trillion cells. To quantify the uncertainties in the collapsing clouds under random initial conditions, we investigate a novel non-intrusive multi-level Monte Carlo methodology, aiming to accelerate plain Monte Carlo sampling.

Jonas [email protected]

Panagiotis Hadjidoukas
Computational Science
ETH Zurich, Switzerland
[email protected]

Diego Rossinelli
ETH Zurich
[email protected]

Babak Hejazialhosseini, Ursula Rasthofer, Fabian Wermelinger
Computational Science
ETH Zurich, Switzerland
[email protected], [email protected], [email protected]

Petros Koumoutsakos
Computational Science, ETH Zurich, Switzerland
[email protected]

MS102

Sparse Approximation of Elliptic PDEs with Lognormal Coefficients

We consider elliptic partial differential equations with diffusion coefficients of lognormal form. For such problems, we study the ℓp-summability properties of the Hermite polynomial expansion of the solution in terms of the countably many scalar parameters in the representation of the random field. Our estimates significantly improve on known results, with direct implications on the approximation rates of best n-term Hermite expansions.

Markus Bachmayr
Universite Pierre et Marie Curie
[email protected]

Albert Cohen
Universite Pierre et Marie Curie, Paris, France
[email protected]

Ronald DeVore
Department of Mathematics
Texas A&M University
[email protected]

Giovanni Migliorati
Universite Pierre et Marie Curie
[email protected]

MS102

High-Dimensional Approximation Using Equilibrium Measures

We consider the problem of approximating solutions to parameter-dependent partial differential equations. Standard approaches frequently involve collecting PDE solutions computed at a judiciously-chosen finite set of parametric samples, and using these to predict the solution manifold behavior for all parameter values. In this talk we consider choosing parameter samples according to the pluripotential equilibrium measure. We also show that such an approach typically yields very stable, high-order computational procedures for parametrized PDE approximation.

Akil Narayan
University of Utah
[email protected]


John D. Jakeman
Sandia National Laboratories
[email protected]

Tao Zhou
Institute of Computational Mathematics, AMSS
Chinese Academy of Sciences
[email protected]

MS102

Smoothing by Integration and ANOVA Decomposition of High-Dimensional Functions

In recent joint papers with Michael Griebel and Frances Kuo we have shown that the lower ANOVA terms of a non-smooth function can be smooth, because of the smoothing effect of integration. The problem settings have included both the unit cube and the Euclidean space, and in the latter case even infinite-dimensional integration. The talk will survey this work, and report on recent developments.

Ian H. Sloan
University of New South Wales
School of Mathematics and Statistics
[email protected]

Frances Y. Kuo
School of Mathematics and Statistics
University of New South Wales
[email protected]

Andreas Griewank
HU Berlin, Germany
[email protected]

Hernan Leovey
Humboldt University, Berlin
[email protected]

MS102

Title Not Available

Abstract not available.

Hoang A. Tran
Oak Ridge National Laboratory
Computer Science and Mathematics Division
[email protected]

Clayton G. Webster
Oak Ridge National Laboratory
[email protected]

MS103

A Data-Driven Approach to PDE-Constrained Optimization Under Uncertainty

PDE-constrained optimization problems often possess uncertain coefficients, boundary data, etc. In many applications, the probabilistic characterization of these uncertainties is unknown and must be estimated using data. I present an approach to incorporate data in such problems using distributionally robust optimization. First, I discuss theoretical properties concerning distributionally robust PDE-optimization. Then, I introduce a novel data-driven discretization with rigorous error bounds for the unknown probability measure. I conclude with numerical results.

Drew P. Kouri
Optimization and Uncertainty Quantification
Sandia National Laboratories
[email protected]

MS103

Multidimensional Buffered Probability of Exceedance and Applications to Distribution Approximation

Probability of exceedance (POE) is widely used but has major drawbacks. Buffered probability of exceedance (bPOE) is a conservative approximation of POE, eliminating some of its drawbacks. We suggest a new multidimensional generalization of bPOE (M-bPOE) to control exceedances for several random variables simultaneously. Distribution approximation problems with M-bPOE constraints are studied. Entropy maximization with M-bPOE dominance, which unifies convex-linear second order and lift zonoid dominances, and variance constraints leads to a maximum-of-Gaussians form of the optimal solution.

Aleksandr Mafusalov, Stan Uryasev
University of Florida
[email protected], [email protected]

MS103

Buffered Probability of Exceedance, Support Vector Machines, and Robust Optimization

We first present a new probabilistic characteristic, called Buffered Probability of Exceedance (bPOE), and show that bPOE acts as a natural counterpart to Probability of Exceedance (POE), but with superior mathematical properties, in particular leading to convex optimization problems. Turning to applications of bPOE, we then show that the famous Support Vector Machines (SVMs) used for binary classification are equivalent to simple bPOE minimization, providing a new purely probabilistic interpretation of SVMs. We then show that this relationship reveals a connection between bPOE minimization and Robust Optimization (i.e., deterministic optimization over fixed uncertainty sets).

Matthew Norton, Aleksandr Mafusalov, Stan Uryasev
University of Florida
[email protected], [email protected], [email protected]
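A sample-based bPOE computation can be sketched via the one-dimensional minimization formula bPOE_z(X) = min_{a>=0} E[max(0, a*(X - z) + 1)] (the grid search below is an illustrative simplification; the objective is convex in a, so in practice any 1-D convex solver would be used):

```python
import numpy as np

rng = np.random.default_rng(5)

def bpoe(samples, z, a_grid=np.linspace(0.0, 25.0, 1001)):
    """Sample-based buffered probability of exceedance at threshold z,
    minimized over a grid of the auxiliary variable a (illustrative;
    the objective is convex in a, so any 1-D solver would do)."""
    return min(np.maximum(0.0, a * (samples - z) + 1.0).mean() for a in a_grid)

x = rng.normal(size=100000)
z = 1.0
poe = (x > z).mean()      # plain probability of exceedance
buffered = bpoe(x, z)     # conservative: bPOE >= POE always
```

The objective is at least POE for every feasible a (the piecewise-linear integrand dominates the exceedance indicator), which makes bPOE the conservative, convex counterpart described in the abstract.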

MS103

Gas Transportation under Stochastic Uncertainty: Nomination Validation and Beyond

For steady-state gas networks with uncertain outlets, probabilities of validity are computed for balanced nominations. The mathematical modeling follows Kirchhoff's Laws. Placing the accent on passive networks, the role of fundamental cycles is highlighted from analytic as well as algebraic perspectives. Structural information acquired this way turns out to be beneficial when applying Quasi-Monte Carlo sampling for the calculation of the desired probabilities.

Rudiger Schultz, Claudia Gotzes
University of Duisburg-Essen
[email protected], [email protected]

Holger Heitsch, Rene Henrion
Weierstrass Institute Berlin, Germany
[email protected], [email protected]

MS104

Combined Reduction for Uncertainty Quantification

Often, uncertainty quantification tasks practically amount to a many-query setting, like model-constrained optimization; and realistic input-output models are of large scale and contain many uncertain parameters. Hence, a swift repeated evaluation usually requires model order reduction. Combined reduction is a parametric model reduction technique that concurrently approximates the dominant state- and parameter-space. The reduced order model then accelerates the single simulation, and the low-rank approximation to the original high-dimensional parameter-space the overall task.

Christian Himpe
Institute for Computational and Applied Mathematics
University of Muenster
[email protected]

Mario Ohlberger
Universitat Munster
Institut fur Numerische und Angewandte Mathematik
[email protected]

MS104

Multifidelity Methods for Uncertainty Quantification

Replacing high-fidelity models with low-cost surrogate models to accelerate sampling-based UQ usually leads to biased estimates, because the surrogate model outputs are only approximations of the high-fidelity model outputs. We introduce multifidelity methods that combine, instead of replace, the high-fidelity model with surrogate models. An optimization problem balances the number of model evaluations among the models to leverage surrogate models for speedup. Recourse to the high-fidelity model provides guarantees of unbiased estimates in the statistics of interest.

Benjamin Peherstorfer
ACDL, Department of Aeronautics & Astronautics
Massachusetts Institute of Technology
[email protected]

Karen E. Willcox
Massachusetts Institute of Technology
[email protected]

Max Gunzburger
Florida State University
School for Computational Science
[email protected]
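The combine-instead-of-replace idea can be sketched as a control-variate estimator (the toy models, sample sizes, and coefficient estimate below are illustrative assumptions, not the authors' method): the surrogate enters only through a difference of two sample means, so its model bias cancels and it serves purely to reduce variance.

```python
import numpy as np

rng = np.random.default_rng(6)

f_hi = lambda x: np.exp(x)              # stand-in "expensive" model
f_lo = lambda x: 1.0 + x + 0.5 * x**2   # cheap, correlated surrogate

n_hi, n_lo = 100, 100000
x_hi = rng.normal(size=n_hi)            # few high-fidelity evaluations
x_lo = rng.normal(size=n_lo)            # many cheap surrogate evaluations

# Control-variate coefficient estimated from the shared small sample.
a, b = f_hi(x_hi), f_lo(x_hi)
alpha = np.cov(a, b)[0, 1] / np.var(b, ddof=1)

# The surrogate appears only as a *difference* of two sample means, so
# its own model bias cancels and does not contaminate the estimate of
# E[f_hi(X)]; it serves purely to reduce variance.
mf_estimate = a.mean() + alpha * (f_lo(x_lo).mean() - b.mean())
# Reference value: E[exp(X)] = exp(1/2) for X ~ N(0, 1).
```

Balancing n_hi against n_lo given the models' costs and correlations is the optimization problem mentioned in the abstract.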

MS104

Error Estimates for Reduced-Order Models Using Statistical Learning

We present a statistical learning framework for estimating the error introduced by reduced-order models (ROMs) applied to nonlinear dynamical systems. We use statistical techniques for high-dimensional regression to map a large set of inexpensively-computed quantities (generated by the time-dependent ROM) to errors in the output quantities of interest. We demonstrate the procedure on a trajectory piecewise linear (TPWL) ROM applied to subsurface oil-water flow problems with varying well-control parameters.

Sumeet Trehan
Department of Energy Resources Engineering
Stanford University
[email protected]

Kevin T. Carlberg
Sandia National Laboratories
[email protected]

Louis Durlofsky
Stanford University
[email protected]

MS104

Adaptive Stochastic Collocation for PDE-Constrained Optimization under Uncertainty Using Sparse Grids and Model Reduction

This work introduces a framework for accelerating optimization problems governed by partial differential equations with random coefficients by leveraging adaptive sparse grids and model reduction. Adaptive sparse grids perform efficient integration approximation in a high-dimensional stochastic space, and reduced-order models (with statistically quantified errors) reduce the cost of objective-function and gradient queries by decreasing the complexity of primal and adjoint PDE solves. A globally-convergent trust-region framework accounts for inexactness in the objective and gradient.

Matthew J. Zahr
Stanford University
[email protected]

Kevin T. Carlberg
Sandia National Laboratories
[email protected]

Drew P. Kouri
Optimization and Uncertainty Quantification
Sandia National Laboratories
[email protected]

MS105

Rare Event Computation and Large Deviation for Geophysical Turbulent Flows and Climate Applications

Abstract not available.

Freddy Bouchet
Laboratoire de Physique
ENS de Lyon
[email protected]

MS105

Singularities, Instantons and Turbulence

It is evident that coherent nearly singular structures play a dominant role in understanding the anomalous scaling behavior in turbulent systems. We ask the question which role these singular structures play in turbulence statistics. More than 15 years ago, for certain turbulent systems the door for attacking this issue was opened by getting access to the probability density function of rare and strong fluctuations via the instanton approach. We address the question whether one can identify instantons in direct numerical simulations of the stochastically driven Burgers equation. For this purpose, we first solve the instanton equations using the Chernykh-Stepanov method [2001]. These results are then compared to direct numerical simulations by introducing a filtering technique to extract prescribed rare events from massive data sets of realizations. In addition, we resolve the issue why earlier simulations by Gotoh [1999] were in disagreement with the asymptotic prediction of the instanton method and demonstrate that this approach is capable of describing the probability distribution of velocity differences for various Reynolds numbers. Finally, we will present and discuss first results on the instanton solution for vorticity in 3D Navier-Stokes turbulence.

Rainer Grauer
Theoretische Physik I, Ruhr-Universitat Bochum
[email protected]

MS105

A Minimum Action Method for Navier-Stokes Equations Based on a Divergence-Free Basis

Abstract not available.

Xiaoliang Wan
Louisiana State University
Department of Mathematics
[email protected]

MS105

Gentlest Ascent Dynamics for Locating Transition States

The transition state is of critical importance for describing the rare event of escaping an attractor. Most of the time, these states are of saddle point type. In this talk, we review numerical methods for locating the transition state or saddle point, both on an energy landscape and for non-gradient dynamics. These include joint work with Weinan E (Peking University and Princeton University) and with Weiguo Gao and Jing Leng (Fudan University).

Xiang Zhou
Department of Mathematics
City University of Hong Kong
[email protected]

MS106

Probability Models for Discretization Uncertainty and Adaptive Mesh Selection

Abstract not available.

Oksana A. Chkrebtii
Ohio State University
[email protected]

MS106

Uncertainty Quantification with Gaussian Process Latent Variable Models

Stochastic analysis of multiscale problems requires a realistic data-driven stochastic model of the material property variations. In this work, we use the Gaussian Process latent variable model (GP-LVM), a kernel based method that provides a probabilistic mapping from the latent space to the physical property space. We examine the performance of the method in dimensionality reduction of multiscale properties and use it with a multi-output sparse GP model for uncertainty propagation in multiphase flows.

Charles Gadd
Warwick Centre for Predictive Modelling (WCPM)
University of Warwick
[email protected]

MS106

Lightweight Error Estimation, Using the Probabilistic Interpretation of Classic Numerical Methods

Certain classic numerical methods for integration, linear algebra, optimization, and the solution of differential equations construct least-squares approximations for non-analytic quantities from computable ones, and can thus be directly interpreted as Bayesian maximum a posteriori (mean) estimators under a number of structured Gaussian priors. With a particular focus on solvers for ordinary differential equations, I will discuss approaches for the calibration of the covariance of the associated Gaussian posterior at runtime, which endow these classic methods with error estimates expressed as probability measures. The emphasis will be on computationally lightweight implementations of this type of error estimation. If time permits, I will also discuss some recent applications.

Philipp Hennig, Michael Schober, Hans Kersting
Max Planck Institute for Intelligent Systems
[email protected], [email protected], [email protected]

MS107

Structural Discrepancy Analysis for Natural Hazards Models

Structural discrepancy refers to the differences between a computer simulator and the physical system that the simulator purports to represent, even when the simulator is evaluated at appropriate choices of input parameters. Careful assessment of structural discrepancy is essential when considering the relevance of a simulator for decisions concerning the physical system. We will discuss how such discrepancy may be assessed and utilised, paying particular attention to features related to internal discrepancy, i.e. those features of structural discrepancy that may be assessed directly by means of experiments on the computer simulator. The approach will be illustrated by examples in flood modelling.

Michael Goldstein, Nathan Huntley
Durham University
[email protected], [email protected]

MS107

Coupling Computer Models Through Linking Their Statistical Emulators

Direct coupling of computer models is often difficult for computational and logistical reasons. We propose coupling two computer models by linking independently developed Gaussian process emulators (GaSPs) of these models. The linked emulator results in a smaller epistemic uncertainty than a direct GaSP of the coupled computer model would have (if such a model were available). This feature is illustrated via simulations. The application of the methodology to complex computer models is demonstrated as well.

Ksenia N. Kyzyurova
Department of Statistical Science
Duke University
[email protected]

James Berger, Robert L. Wolpert
Duke University
[email protected], [email protected]

MS107

Multi-Objective Sequential Design for Computer Experiments

Using a combination of statistical emulators and a modified version of the EGO (Efficient Global Optimization) algorithm, we conduct a search of a state space for design points that are of relevance for emulators of simulator output at multiple physical locations simultaneously. This sequential design methodology reduces the number of expensive model runs to be made and is used to generate probabilistic hazard maps of inundation due to volcanic landslides (pyroclastic flows).

Regis Rutarindwa, Elaine Spiller
Marquette University
[email protected], [email protected]

Marcus Bursik
Dept. of Geology
SUNY at Buffalo
[email protected]

MS107

Developing a Shallow-Layer Model of Lahars for Hazard Assessment

Lahars are highly concentrated mixtures of mobilized volcanic material that are potentially extremely destructive to populations living around volcanoes. We present a new predictive model of lahars that can describe a variety of flow regimes from concentrated streamflows to lubricated granular flows. We explore different parameterisations for erosion/deposition and basal drag, compare their uncertainty with that in model forcing, and demonstrate the use of emulation to facilitate efficient model evaluations and therefore rapid hazard assessment.

Mark Woodhouse
University of Bristol
[email protected]

Eliza Calder
University of Edinburgh
[email protected]

Andrew Hogg

University of Bristol
[email protected]

Chris Johnson
University of Manchester
[email protected]

Jerry Phillips
University of Bristol
[email protected]

Elaine Spiller
Marquette University
[email protected]

MS108

Uncertainty in Modeling Turbulence: Data Inference and Physics Constraints

Turbulence models are commonly used in simulations of engineering flows. We investigate the effect of modeling assumptions built into the models using perturbation analysis applied to the Reynolds stress definition. Furthermore, we use machine learning techniques to identify the impact of specific closures on the predictions. Applications include aerodynamic flows in internal passages.

Karthikeyan Duraisamy
University of Michigan
[email protected]

Gianluca Iaccarino
Stanford University
Mechanical Engineering
[email protected]

Juan J. Alonso
Department of Aeronautics and Astronautics
Stanford University
[email protected]

MS108

Multi-Response Approach to Improving Identifiability in Model Calibration

One of the main challenges in model calibration is the difficulty in distinguishing between the effects of calibration parameters versus model discrepancy. A multi-response approach is developed to improve identifiability by using multiple responses that share a mutual dependence on a common set of calibration parameters. To address the issue of how to select the most appropriate set of responses to measure experimentally to best enhance identifiability, a preposterior analysis approach is developed and a surrogate preposterior analysis is employed to enhance the computational efficiency of the approach.

Zhen Jiang, Dan Apley, Wei Chen
Northwestern University
[email protected], [email protected], [email protected]

MS108

Assessing Model Error in Reduced Chemical Mechanisms

In the field of combustion, reduced chemical mechanisms are often used because the detailed model is unknown or too computationally expensive. However, introducing a reduced model can be a significant source of model error. We represent this error with an operator that is both probabilistic and physically meaningful. The distributions of the operator are calibrated using high-dimensional hierarchical Bayesian modeling. We apply the stochastic operator method to various combustion mechanisms and investigate its effects on a one-dimensional laminar flame.

Rebecca Morrison
UT Austin
[email protected]

Robert D. Moser
University of Texas at Austin
[email protected]

MS108

Uncertainty Due to Inadequate Models of Scalar Dispersion in Porous Media

We present recent developments on the uncertainty quantification of models. Models are an imperfect representation of complex physical processes, hence exploring representations of the model inadequacies is crucial. We introduce a novel Bayesian framework for capturing the inadequacy in the context of a linear operator acting on the model solution. This framework is applied to predicting breakthrough curves with uncertainty in the context of scalar dispersion in porous media, but is applicable to other models.

Joyce RigeloUniversity of Texas at [email protected]

Teresa PortoneICESUT [email protected]

Damon McDougallInstitute for Computational Engineering and SciencesThe University of Texas at [email protected]

Robert D. MoserUniversity of Texas at [email protected]

MS109

An Optimized Multilevel Monte Carlo Method

In many textbooks one of the first simple applications of the classical Monte Carlo method is Buffon's needle problem, where for a given length of a needle one is interested in the probability of the event that when thrown the needle hits one of some parallel lines. If one interprets the length of the needle as a free parameter one can also apply a multilevel Monte Carlo algorithm in order to approximate the hitting probabilities for all admissible parameter values simultaneously. It turns out that for this problem the exact solution, the statistical bias, the computational effort and all variances can be computed explicitly. This allows us to give sharp lower and upper estimates of the complexity of the multilevel Monte Carlo algorithm for this problem.
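
For reference, the single-level building block of such a scheme is the crude Monte Carlo estimator of the hitting probability, which for a short needle (length at most the line spacing d) equals 2*length/(pi*d). The following Python sketch is illustrative only (function and variable names are ours), not the authors' multilevel algorithm:

```python
import numpy as np

def buffon_hit_probability(length, spacing, n_samples, rng):
    """Crude Monte Carlo estimate of the probability that a randomly
    dropped needle crosses one of a family of parallel lines that are
    `spacing` apart.  For length <= spacing the exact answer is
    2 * length / (pi * spacing)."""
    # Distance from the needle's midpoint to the nearest line.
    y = rng.uniform(0.0, spacing / 2.0, n_samples)
    # Acute angle between the needle and the lines.
    theta = rng.uniform(0.0, np.pi / 2.0, n_samples)
    # The needle crosses a line iff its half-projection exceeds y.
    hits = y <= (length / 2.0) * np.sin(theta)
    return hits.mean()

rng = np.random.default_rng(0)
est = buffon_hit_probability(1.0, 2.0, 200_000, rng)
exact = 2 * 1.0 / (np.pi * 2.0)   # about 0.3183
```

The multilevel variant discussed in the talk treats the needle length as a free parameter and combines such estimates across levels; only the single-level kernel is shown here.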

Andrea Barth

Uni [email protected]

Raphael Kruse
TU Berlin
[email protected]

MS109

Multilevel Subset Simulation for Rare Events

In subset simulation (Au & Beck, 2001), rare-event probabilities are expressed as products of larger conditional probabilities of nested failure sets. In our new multilevel approach, the conditional probabilities are estimated using different model resolutions. Most samples are computed on the coarsest grid, leading to efficiency gains of more than an order of magnitude. The key is a posteriori error estimation, which guarantees the critical subset property that may be violated when changing model resolution between failure sets.
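
The single-resolution subset simulation estimator that the multilevel method builds on can be sketched as follows. This is a minimal illustration of the Au & Beck idea, not the authors' multilevel code; the function name, the random-walk step size, and the level cap are our own choices:

```python
import numpy as np

def subset_simulation(g, d, u, n=1000, p0=0.1, rng=None):
    """Estimate P(g(X) > u) for X ~ N(0, I_d) as a product of larger
    conditional probabilities P(g > u_1) * P(g > u_2 | g > u_1) * ...
    with intermediate thresholds u_k chosen as empirical quantiles."""
    rng = rng or np.random.default_rng()
    x = rng.standard_normal((n, d))
    gx = np.array([g(xi) for xi in x])
    prob = 1.0
    for _ in range(20):                      # safety cap on the number of levels
        level = np.quantile(gx, 1.0 - p0)    # intermediate threshold
        if level >= u:                       # final level reached
            return prob * np.mean(gx > u)
        prob *= p0
        seeds = x[gx > level]
        # Repopulate the conditional set with a random-walk Metropolis
        # chain started from each seed.
        x_new, g_new = [], []
        per_seed = int(np.ceil(n / len(seeds)))
        for s in seeds:
            cur, gcur = s, g(s)
            for _ in range(per_seed):
                cand = cur + 0.7 * rng.standard_normal(d)
                # Accept w.r.t. the standard normal target, restricted
                # to the current failure set {g > level}.
                log_a = -0.5 * (cand @ cand - cur @ cur)
                if np.log(rng.uniform()) < log_a and g(cand) > level:
                    cur, gcur = cand, g(cand)
                x_new.append(cur.copy()); g_new.append(gcur)
        x, gx = np.array(x_new[:n]), np.array(g_new[:n])
    return prob * np.mean(gx > u)

rng = np.random.default_rng(0)
p_hat = subset_simulation(lambda x: x[0], d=2, u=2.5, n=2000, rng=rng)
# exact reference: P(N(0,1) > 2.5) is about 6.2e-3
```

The multilevel variant of the talk additionally moves most of these samples to coarse model resolutions, which is where the reported efficiency gains come from.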

Daniel Elfverson
Uppsala University (Sweden), Div. Scientific Computing
Dept. Information Technology
[email protected]

Robert Scheichl
University of Bath
Department of Mathematical Sciences
[email protected]

MS109

Mean Square Adaptive MLMC for SDEs

We present a formal mean square error (MSE) expansion for numerical solutions of stochastic differential equations (SDE). The expansion is used to construct an adaptive Euler–Maruyama algorithm for SDEs. The resulting algorithm is incorporated into a multilevel Monte Carlo (MLMC) algorithm. This novel algorithm is particularly efficient in solving low-regularity problems, outperforming uniform time-stepping MLMC and producing errors bounded with high probability by TOL at a cost rate O(TOL^-2 log(TOL)^4).
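
The uniform time-stepping MLMC baseline that the adaptive algorithm is compared against can be sketched for a geometric Brownian motion. This is an illustrative sketch with names and level counts of our choosing, not the authors' adaptive scheme:

```python
import numpy as np

def mlmc_gbm_mean(x0, mu, sigma, T, L, n_per_level, rng):
    """Multilevel Monte Carlo estimate of E[X_T] for the GBM
    dX = mu*X dt + sigma*X dW using Euler-Maruyama with uniform time
    steps h_l = T / 2**l.  Levels are coupled by summing pairs of
    fine-level Brownian increments for the coarse path."""
    est = 0.0
    for l in range(L + 1):
        n, nf = n_per_level[l], 2 ** l
        hf = T / nf
        dW = rng.normal(0.0, np.sqrt(hf), size=(n, nf))
        xf = np.full(n, x0, dtype=float)
        for k in range(nf):                     # fine Euler-Maruyama path
            xf += mu * xf * hf + sigma * xf * dW[:, k]
        if l == 0:
            est += xf.mean()                    # coarsest-level mean
        else:
            hc = T / (nf // 2)
            xc = np.full(n, x0, dtype=float)
            for k in range(nf // 2):            # coarse path, same noise
                dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]
                xc += mu * xc * hc + sigma * xc * dWc
            est += (xf - xc).mean()             # level-l correction
    return est

rng = np.random.default_rng(1)
est = mlmc_gbm_mean(1.0, 0.05, 0.2, 1.0, L=5,
                    n_per_level=[40000, 20000, 10000, 5000, 2500, 1250],
                    rng=rng)
# exact reference: E[X_1] = exp(0.05), about 1.0513
```

The adaptive method of the talk replaces the uniform steps h_l with steps refined where the MSE expansion indicates low regularity.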

Juho Happola
King Abdullah University for Science and Technology
[email protected]

Raul F. Tempone
Mathematics, Computational Sciences & Engineering
King Abdullah University of Science and Technology
[email protected]

Hakon Hoel
University of Oslo
[email protected]

MS110

Adaptive Design of Experiments for Conservative Estimation of Excursion Sets

We consider a Gaussian process model trained on few evaluations of an expensive-to-evaluate deterministic function and we study the problem of estimating a fixed excursion set of this function. We focus on conservative estimates of the excursion set and we present a method based on Vorob'ev quantiles, that sequentially selects new evaluations of the function in order to reduce the uncertainty on the estimate. The method is then applied to a test case.

Dario Azzimonti
University of Bern
[email protected]

David Ginsbourger
Department of Mathematics and Statistics, University of Bern
[email protected]

Clement Chevalier
Universite de Neuchatel
[email protected]

Julien Bect
SUPELEC, Gif-sur-Yvette, France
[email protected]

Yann Richet
Institut de Radioprotection et de Sûreté Nucléaire
[email protected]

MS110

Bias and Variance in the Bayesian Subset Simulation Algorithm

The Bayesian Subset Simulation (BSS) algorithm is a recently proposed approach, based on Sequential Monte Carlo simulation and Gaussian process modeling, for the estimation of the probability that f(X) exceeds some threshold u when f is expensive to evaluate and P(f(X) > u) is small. We discuss in this talk the bias and variance of the BSS algorithm, and propose a variant where the bias-variance trade-off is automatically tuned.

Julien Bect
Centrale Supelec
[email protected]

Roman Sueur
EDF R&D
[email protected]

Emmanuel Vazquez
[email protected]

MS110

Stepwise Uncertainty Reduction Strategies for Inversion of Expensive Functions with Nuisance Parameters

We deal with sequential evaluation strategies of real-valued, expensive functions f in the context where the input parameters consist of controlled parameters and some nuisance, uncontrolled, parameters. Mathematically, we identify all controlled parameters {x_c : for all x_n, f(x_c, x_n) < T}, i.e. those for which f does not have any failure whatever the value of the nuisance parameters. A sampling criterion relying on recent kriging update formulas is used and applied to a nuclear safety test case.

Clement Chevalier
Universite de Neuchatel
[email protected]

David Ginsbourger

Department of Mathematics and Statistics, University of Bern
[email protected]

Yann Richet, Gregory Caplin
Institut de Radioprotection et de Sûreté Nucléaire
[email protected], [email protected]

MS110

Sequential Design for the Estimation of High-Variation Regions Using Differentiable Non-Stationary Gaussian Process Models

Gaussian process models have proven useful for sequential design of computer experiments. Here we focus on the case of functions possessing heterogeneous variations across the input space. We propose two sequential design approaches dedicated to learning high-variation regions: devising derivative-based infill criteria or using standard ones under ad hoc non-stationary GP models. We compare the advantages of both methods based on a benchmark of simulated functions and on the motivating mechanical engineering test case.

Sebastien Marmin
IRSN Cadarache/Centrale Marseille/University of Bern
[email protected]

David Ginsbourger
Department of Mathematics and Statistics, University of Bern
[email protected]

Jean Baccou
IRSN Cadarache
[email protected]

Frederic Perales
Institut de Radioprotection et de Sûreté Nucléaire
[email protected]

Jacques Liandrat
Centrale Marseille - I2M
[email protected]

MS111

On Probabilistic Transformations and Optimal Sampling Rules for Emulator-Based Inverse UQ Problems

The efficiency of all generalized polynomial chaos expansion based methods decreases significantly if the model inputs are dependent random variables. This talk will discuss probabilistic transformations such as the Rosenblatt and the Nataf transformation that decorrelate random variables for improving the efficiency of Bayesian inverse problems. We will present numerical studies that demonstrate the effectiveness of these transformations and present some alternatives such as Leja sequences in combination with arbitrary polynomial chaos expansion.
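
The Gaussian core of such transformations is a Cholesky-based decorrelation; the full Nataf transformation would first map each non-Gaussian marginal through its inverse CDF. A minimal sketch (function name ours, not the authors' implementation):

```python
import numpy as np

def nataf_decorrelate(x, cov):
    """Map correlated Gaussian inputs to independent standard normals
    (the Gaussian core of the Nataf transformation; non-Gaussian
    marginals would first be mapped component-wise to normals)."""
    L = np.linalg.cholesky(cov)
    # z = L^{-1} x, so Cov(z) = L^{-1} cov L^{-T} = I.
    return np.linalg.solve(L, x.T).T

rng = np.random.default_rng(2)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=50_000)
z = nataf_decorrelate(x, cov)
# the empirical covariance of z is close to the 2x2 identity
```

After this step the decorrelated variables z can be fed to standard (tensorized) polynomial chaos constructions that assume independent inputs.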

Fabian Franzelin, Dirk Pfluger
University of Stuttgart
[email protected], [email protected]

John D. Jakeman
Sandia National Labs
[email protected]

MS111

Probabilistic Inference of Model Parameters and Missing High-Dimensional Data Based on Summary Statistics

We present a method based on Maximum Entropy and Approximate Bayesian Computation for probabilistic inference of combustion system parameters from indirect summary statistics. The approach relies on generating high-dimensional data consistently with the given statistics. Importance sampling and Gauss-Hermite quadrature are used for parameter inference. Nuisance parameters with prescribed marginal posterior probability density are addressed. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given the consistent data sets.

Mohammad Khalil
Combustion Research Facility
Sandia National Laboratories
[email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA, USA
[email protected]

Kenny Chowdhary, Cosmin Safta, Khachik Sargsyan
Sandia National Laboratories
[email protected], [email protected], [email protected]

MS111

Incomplete Statistical Information Limits the Utility of Higher-Order Polynomial Chaos Expansions

Polynomial chaos expansion is a well-established massive stochastic model reduction technique that approximates the dependence of model output on uncertain input parameters. In practice, only incomplete and inaccurate statistical knowledge on uncertain input parameters is available. Observation of the expansion convergence when statistical input information is incomplete demonstrates that higher-order expansions without adequate data support are useless. We offer a simple relation that helps to align available input statistical data with an adequate expansion order.
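
For a one-dimensional illustration of what "data support" means here, polynomial chaos coefficients can be fitted by regression on sampled model evaluations; with fewer samples than basis functions the higher-order coefficients are not identifiable. A sketch under our own naming, not the authors' method:

```python
import numpy as np

def fit_hermite_pce(xs, ys, order):
    """Least-squares fit of a 1-D polynomial chaos expansion in
    probabilists' Hermite polynomials He_k to samples (xs, ys) of a
    model with a standard-normal input."""
    # Recurrence: He_0 = 1, He_1 = x, He_{k+1} = x He_k - k He_{k-1}.
    H = [np.ones_like(xs), xs]
    for k in range(1, order):
        H.append(xs * H[k] - k * H[k - 1])
    A = np.stack(H[: order + 1], axis=1)     # design matrix
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

rng = np.random.default_rng(3)
xs = rng.standard_normal(500)
ys = xs ** 2                       # exact expansion: He_0 + He_2
coef = fit_hermite_pce(xs, ys, order=4)
# coef is close to [1, 0, 1, 0, 0]
```

Refitting with, say, three samples and order 4 makes the least-squares system underdetermined, which is the regime the abstract warns against.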

Sergey Oladyshkin, Wolfgang Nowak
Institute for Modelling Hydraulic and Environmental Systems
University of Stuttgart
[email protected], [email protected]

MS111

Adaptive Response Surface Approximations for Bayesian Inference

While the most popular technique for stochastic inference is the statistical Bayesian approach, a new approach based on measure-theoretic principles has emerged in recent years. In this presentation, we discuss how Bayesian concepts can be applied within the measure-theoretic framework to develop a new approach for Bayesian inference. We then show how adaptive surrogate models can be employed with the specific objective of adapting the response surface to accurately perform stochastic inversion.

Tim Wildey
Optimization and Uncertainty Quantification Dept.
Sandia National Laboratories
[email protected]

John D. Jakeman
Sandia National Laboratories
[email protected]

Troy Butler
University of Colorado Denver
[email protected]

MS112

Deep Learning for Multiscale Inverse Problems

A Bayesian computational approach is developed to estimate spatially varying parameters in a multiscale inverse problem. The spatial field is re-parameterised using a sparse adaptive wavelet representation, forming a deep network between the wavelet coefficients at different levels. A sequential Monte Carlo sampler is used to explore the posterior distribution of the wavelet coefficients at each level. The method is demonstrated with the estimation of a multiscale permeability field in fluid flows through porous media.

Louis Ellam
University of Warwick
[email protected]

Nicholas Zabaras
The University of Warwick
Warwick Centre for Predictive Modelling (WCPM)
[email protected]

MS112

ANOVA-Based Reduced Basis Methods for Partial Differential Equations with High-Dimensional Random Inputs

Abstract not available.

Qifeng Liao
ShanghaiTech University
[email protected]

Lin [email protected]

MS112

Parallel Implementation of Non-Intrusive Stochastic Galerkin Method for Solving Stochastic PDEs

In multi-parametric equations - stochastic equations are a special case - one may want to approximate the solution such that it is easy to evaluate its dependence on the parameters. Very often one has a "solver" for the equation for a given parameter value - this may be a software component or a program - and it is evident that this can independently solve for the parameter values to be interpolated. Such uncoupled methods which allow the use of the original solver are classed as "non-intrusive". By extension, all other methods which produce some kind of coupled system are often - in our view prematurely - classed as "intrusive". We showed, for simple Galerkin formulations of the multi-parametric problem, how one may compute the approximation in a non-intrusive way. Now we extend these results by implementing a parallel version of the numerical methods.

Alexander Litvinenko
SRI-UQ and ECRC Centers, KAUST
[email protected]

Hermann Matthies
Technische Universitaet Braunschweig
[email protected]

Elmar Zander
TU Braunschweig
Inst. of Scientific Computing
[email protected]

Raul F. Tempone
Mathematics, Computational Sciences & Engineering
King Abdullah University of Science and Technology
[email protected]

David E. Keyes
[email protected]

MS113

Affine Invariant Samplers for Bayesian Inverse Problems

Abstract not available.

Charlie Mathews
University of Chicago
[email protected]

Jonathan Weare
University of Chicago
Department of Statistics and the James Franck Institute
[email protected]

Ben Leimkuhler
University of Edinburgh
School of Mathematics
[email protected]

MS113

Asymptotic Analysis of Random-Walk Metropolis on Ridged Densities

The asymptotic behavior of local-move MCMC algorithms in high dimensions is by now well understood and the emergence of diffusion processes as trajectory limits has been proved in many such contexts. The results obtained so far involve mainly i.i.d. scenarios, though there are now a number of generalizations in high/infinite dimensions. We adopt a different point of view and look at cases when the target distributions tend to exhibit 'ridges' along directions of the state space. Such contexts could arise for instance in classes of models when data arrive with small noises or when there are non-identifiable subsets of parameters. In an asymptotic context all probability mass will concentrate on a manifold. We show that diffusion limits (on a manifold) abound also in this set-up of ridged densities and are particularly useful for identifying computational costs or providing optimality criteria.

Alexandre Thiery
National University of Singapore
[email protected]

MS113

Analysis and Optimization of Stratified Sampling

Stratified sampling methods compute averages over a target distribution by partitioning the sample space into subregions, sampling conditional distributions confined to the subregions, and computing averages over the target distribution from the conditional averages. Such methods may be used to compute both averages with respect to the ergodic measure of a process and also dynamical quantities, such as reaction rates, of interest in molecular simulation. We analyze the benefits of stratified sampling, and we discuss some techniques for optimizing the parameters involved in the method.
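
The partition-sample-recombine structure described above can be sketched in its simplest form, a one-dimensional uniform target with equal-probability strata. This is an illustration with names of our choosing, not the authors' molecular-simulation setting:

```python
import numpy as np

def stratified_mean(g, n_strata, n_per, rng):
    """Stratified Monte Carlo estimate of E[g(U)], U ~ Uniform(0,1):
    partition (0,1) into equal strata, sample the conditional
    (uniform-on-stratum) distributions, and average the strata means.
    With equal-probability strata the recombination weights are equal."""
    edges = np.linspace(0.0, 1.0, n_strata + 1)
    means = []
    for a, b in zip(edges[:-1], edges[1:]):
        u = rng.uniform(a, b, n_per)     # conditional sample in stratum
        means.append(g(u).mean())
    return np.mean(means)

rng = np.random.default_rng(4)
est = stratified_mean(np.exp, 100, 10, rng)
# exact reference: integral of e^u over (0,1) = e - 1, about 1.71828
```

The variance now comes only from within-stratum fluctuations, which is the source of the gains the talk analyzes; the optimization questions concern the choice of strata and the per-stratum sample allocation.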

Brian Van Koten
University of Chicago
[email protected]

Aaron Dinner
James Franck Institute
University of Chicago
[email protected]

Jeremy Tempkin, Erik Thiede
Chemistry
The University of Chicago
[email protected], [email protected]

Jonathan Weare
University of Chicago
Department of Statistics and the James Franck Institute
[email protected]

MS113

An Analytical Technique for Forward and Inverse Propagation of Uncertainty

Sampling methods for forward (Monte Carlo) and inverse (Markov Chain Monte Carlo) propagation of uncertainty involving complex models remain a computationally extremely challenging task. In this talk, we consider a quadratic approximation of the quantity of interest with respect to the stochastic parameter, for which, assuming a Gaussian prior distribution of the parameter, an analytical expression for the expected value is known. We then exploit such quadratic approximation as a variance reduction technique to accelerate the convergence of Monte Carlo and Markov Chain Monte Carlo sampling methods.
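
In one dimension the idea reduces to using the second-order Taylor polynomial of the quantity of interest as a control variate, since Gaussian moments give its expectation in closed form. A minimal sketch under our own naming, not the authors' implementation:

```python
import numpy as np

def mc_with_quadratic_cv(f, df, d2f, n, rng):
    """Monte Carlo estimate of E[f(theta)] for theta ~ N(0,1), using the
    second-order Taylor polynomial q of f at 0 as a control variate:
    E[q] = f(0) + d2f(0)/2 is known analytically (E[theta] = 0,
    E[theta^2] = 1), so only the small remainder f - q is sampled."""
    th = rng.standard_normal(n)
    q = f(0.0) + df(0.0) * th + 0.5 * d2f(0.0) * th ** 2
    eq = f(0.0) + 0.5 * d2f(0.0)          # closed-form Gaussian expectation
    return eq + np.mean(f(th) - q)

rng = np.random.default_rng(5)
est = mc_with_quadratic_cv(np.exp, np.exp, np.exp, 20_000, rng)
# exact reference: E[exp(theta)] = exp(1/2)
```

Because the sampled remainder has much smaller variance than f itself, the same accuracy is reached with far fewer model evaluations, which is the acceleration mechanism exploited in the talk.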

Umberto Villa
ICES, UT Austin
[email protected]

Omar Ghattas
The University of Texas at Austin
[email protected]

MS114

Stochastic Representation of Model Inadequacy for Supercapacitors

A probabilistic formulation of model inadequacy is developed for a reduced (upscaled) supercapacitor model. The errors introduced by modeling assumptions during upscaling are accounted for by injecting stochasticity into the reduced model equations. The mathematical formulation of the injected stochasticity is informed by spatio-temporal error estimates using residual information, which bypasses the need to solve higher-fidelity equations. The reliability of these stochastic representations is then examined using deterministic observations from a higher-fidelity supercapacitor model.

Leen Alawieh
University of Texas at Austin
[email protected]

Damien Lebrun-Grandie
Oak Ridge National Laboratory
[email protected]

Damon McDougall
Institute for Computational Engineering and Sciences
The University of Texas at Austin
[email protected]

Todd Oliver
PECOS/ICES, The University of Texas at Austin
[email protected]

Robert D. Moser
University of Texas at Austin
[email protected]

MS114

Hierarchical Optimization for Neutron Scattering Problems

We present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.

Feng Bao
Oak Ridge National Laboratory
[email protected]

Richard Archibald
Computational Mathematics Group
Oak Ridge National Laboratory
[email protected]

MS114

Uncertainty Quantification in Diffuse Optical Tomography

We consider efficient Bayesian inversion and uncertainty quantification for parameterized PDE-based inverse problems, such as Diffuse Optical Tomography. Although the parameterization reduces costs substantially, the spaces to be sampled remain high dimensional, say, hundreds to a thousand parameters. Hence, convergence of the posterior typically requires many samples, where each sample requires a large number of three-dimensional PDE solves. We discuss methods that reduce both the number of sample points as well as the computational effort per sample point in a complementary fashion.

Eric De Sturler
Virginia Tech
[email protected]

Misha E. Kilmer
Tufts University
[email protected]

Arvind Saibaba
Department of Electrical and Computer Engineering
Tufts University
[email protected]

Eric Miller
Tufts University
[email protected]

MS114

Reduction of Chemical Models under Uncertainty

We discuss recent developments for dynamical analysis and reduction of chemical models under uncertainty. We rely on computational singular perturbation analysis, allowing for uncertainties in reaction rate parameters. We outline a construction for representation of uncertain reduced chemical models, and estimation of probabilities for inclusion of sets of reactions in the reduced model. We demonstrate the approach in the context of homogeneous ignition of a hydrocarbon fuel-air mixture.

Habib N. Najm
Sandia National Laboratories
Livermore, CA, USA
[email protected]

Mauro Valorani
Dept. Meccanica e Aeronautica
Univ. Rome La Sapienza
[email protected]

Riccardo Malpica Galassi
Dept. of Mechanical and Aerospace Engineering
Univ. Rome La Sapienza
[email protected]

Cosmin Safta
Sandia National Laboratories
[email protected]

Mohammad Khalil
Combustion Research Facility
Sandia National Laboratories
[email protected]

Pietro Paolo Ciottoli
Dept. Mechanical and Aerospace Engineering
Univ. Rome La Sapienza
[email protected]

MS114

Recursive k-d Darts Sampling for Exploring High-Dimensional Spaces

We introduce Recursive k-d Darts: a new hyperplane sampling algorithm to estimate statistical metrics (e.g., mean and tail probability) of an underlying black-box high-dimensional function. Our method decomposes the high-dimensional problem into a set of 1-dimensional problems. This approach enables efficient handling of discontinuities and higher-order estimates in smooth regions. We quantify estimation errors by comparing local and global surrogate models, built on the fly as sampling proceeds, and use that to guide future samples.

Ahmad A. Rushdi
University of
[email protected]

Mohamed S. Ebeida
Sandia National Laboratories
[email protected]

Laura Swiler
Sandia National Laboratories
Albuquerque, New Mexico USA
[email protected]

Scott A. Mitchell
Sandia National Laboratories
[email protected]

Anjul Patney, John Owens
University of California, Davis
[email protected], [email protected]

MS115

Scalable Subsurface Characterization with Multiphysics Models and Massive Data Using Principal Component Geostatistical Approach

The Bayesian geostatistical approach is widely used for inverse problems in geosciences. However, the Jacobian matrix needs to be computed from O(min(m,n)) forward runs for m unknowns and n observations, which can be prohibitive when m and n become large with the advent of the era of "big" data. We present a scalable Jacobian-free inversion method and show the efficiency of the proposed method with arsenic-bearing mineral identification and massive MRI tracer data inversion examples.

Jonghyun Harry Lee, Peter K Kitanidis
Stanford University
[email protected], [email protected]

MS115

Bayesian Reconstruction of Catalytic Properties of Thermal Protection Materials for Atmospheric Reentry

Simulating the atmospheric entry of a spacecraft is a challenging problem involving complex physical phenomena, including the response of heatshield materials to extreme aerothermal conditions. Among others, the catalycity of the thermal protection material is extremely important since it quantifies the recombination of atoms at the surface, hence the heating transferred to the spacecraft. Using data from plasma testing, sources of uncertainty in the rebuilding of catalycity are identified and propagated through a Bayesian solver.

Francois Sanson
Purdue University
[email protected]

Pietro Marco Congedo
[email protected]

Thierry Magin
von Karman Institute, Belgium
[email protected]

Francesco Panerai
[email protected]

Alessandro Turchi
von Karman Institute, Belgium
[email protected]

MS116

Uncertainty Quantification for an Industrial Application

This contribution presents the application of uncertainty quantification to an electric drive from series production. The forward propagation is based on collocation methods using generalized polynomial chaos, and the stochastic solution is verified by a test bench. Furthermore, the setup can be used for estimation of parameter distributions which are not known in general. The aim is to make uncertainty quantification applicable for a real use case in an electrical and mechanical engineering context.

Philipp Glaser, Kosmas Petridis
Robert Bosch GmbH
Stuttgart, Germany
[email protected], [email protected]

Vincent Heuveline
Engineering Mathematics and Computing Lab (EMCL)
University Heidelberg, Germany
[email protected]

MS116

Numerical Methods for Uncertainty Propagation in Hybrid-Dynamical Systems

Hybrid-dynamical systems often arise in the context of mechatronic systems, whose parameters typically exhibit some level of uncertainty due to, for example, incomplete knowledge. In this talk we evaluate the feasibility of the stochastic Galerkin projection using Polynomial Chaos for uncertainty propagation in a hybrid-dynamical benchmark problem with uncertain model parameters. A characteristic function approach is used to allow for simultaneous computations of different states, alleviating the need for globally defined chaos polynomials.

Michael Schick
Robert Bosch GmbH
[email protected]

Antoine Vandamme
Robert Bosch GmbH
Mechatronic Engineering CR/
[email protected]

Kosmas Petridis
Robert Bosch GmbH
Stuttgart, Germany
[email protected]

Vincent Heuveline
Engineering Mathematics and Computing Lab (EMCL)
University Heidelberg, Germany
[email protected]

MS116

Uncertainty Quantification for a Biomedical Blood Pump Application

The Finite Element Method (FEM) has been extensively employed for the development of mechanical devices. A huge number of devices are today realized owing to numerical simulation by finite elements. This also holds for our studied application, the ventricular assist device, which is one of the most common therapeutic instruments for cardiac insufficiency. The functional reliability of biomedical devices remains one of the main concerns for the utilization in different clinical areas. This especially holds for our blood pump application. Computational Fluid Dynamics could help to simulate the basic functionality of the device. However, there are still uncontrollable aspects, which cannot be determined by classical numerical methods, due to noisy data or the lack of knowledge about physical phenomena. Here, Uncertainty Quantification (UQ) fills the gap. In this work, we model a blood pump device with consideration of the uncertainty. We study three uncertain parameters (kinematic viscosity, angular velocity and inflow amplitude), which are the most important parametric variations in the model, in order to analyze their influence on the numerical solution. Concerning the rotation of the blood pump, we employ the Sliding Mesh method, which gives good results for symmetric rotating systems. We also apply the Variational Multiscale method to stabilize the incompressible Navier-Stokes equations. In our talk, we will present our latest research results.

Chen Song
HITS gGmbH, Heidelberg
[email protected]

Peter Zaspel
EMCL, IWR, University of Heidelberg
[email protected]

Vincent Heuveline
Engineering Mathematics and Computing Lab (EMCL)
University Heidelberg, Germany
[email protected]

MS117

Title Not Available

Abstract not available.

Abdellah Chkifa, Clayton G. Webster
Oak Ridge National Laboratory

[email protected], [email protected]

MS117

Quasi-Monte Carlo and FFT Sampling for Elliptic PDEs with Lognormal Random Coefficients

We generalise and study the convergence analysis of a method which was computationally presented by the authors in J. Comp. Phys. in 2011, where a quasi-Monte Carlo method combined with a circulant embedding/FFT method was used to sample from a lognormal random field. The method allows one to sample from very rough random fields without the need to truncate a KL series expansion.
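
The circulant embedding/FFT ingredient can be sketched in one dimension: the Toeplitz covariance matrix of a stationary field on a uniform grid is embedded in a circulant matrix, which the FFT diagonalises exactly. This is our own illustrative sketch (pseudo-random rather than quasi-Monte Carlo points, and 1-D rather than the paper's setting):

```python
import numpy as np

def circulant_embedding_sample(cov_row, rng):
    """Draw one sample of a stationary Gaussian process on a uniform
    1-D grid, given the first row of its (Toeplitz) covariance matrix,
    by embedding it in a circulant matrix diagonalised by the FFT."""
    n = len(cov_row)
    # Even circular extension: c_0..c_{n-1}, then the mirrored interior.
    circ = np.concatenate([cov_row, cov_row[-2:0:-1]])
    lam = np.fft.fft(circ).real          # eigenvalues of the circulant
    if lam.min() < -1e-10:
        raise ValueError("embedding not positive definite; pad further")
    m = len(circ)
    xi = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    z = np.fft.fft(np.sqrt(np.maximum(lam, 0.0) / m) * xi)
    # Real and imaginary parts are two independent samples; keep one,
    # restricted to the first n grid points.
    return z.real[:n]

rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 101)
cov_row = np.exp(-5.0 * x)               # exponential covariance, first row
sample = circulant_embedding_sample(cov_row, rng)
```

Exponentiating such Gaussian samples yields lognormal coefficient fields, with no Karhunen-Loeve truncation involved.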

Dirk Nuyens
Department of Computer Science
KU Leuven
[email protected]

Ivan G. Graham
University of Bath
[email protected]

Frances Y. Kuo
School of Mathematics and Statistics
University of New South Wales
[email protected]

Robert Scheichl
University of Bath
[email protected]

Ian H. Sloan
University of New South Wales
School of Mathematics and Statistics
[email protected]

MS117

Gradient-Free Active Subspace Construction via Adaptive Morris Indices

In this presentation, we present a gradient-free algorithm for constructing active subspaces for physical and biological applications having high-dimensional input spaces. The objective of this approach is two-fold: determine low-dimensional active input spaces for high-dimensional problems and construct surrogate models based on these reduced spaces. Whereas there exists a variety of algorithms to construct low-dimensional subspaces when gradients are available, a number of production codes do not provide gradient or adjoint capabilities. To accommodate these applications, we consider a gradient-free approach in which adaptive Morris indices are used to construct the reduced input space. In this approach, steps are adapted to dominant directions in the parameter space to improve the accuracy and efficiency of algorithms. Attributes of the method are demonstrated for examples arising in aerodynamics and nuclear power plant design.
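
The non-adaptive core of a Morris screening, which the talk's adaptive indices refine, can be sketched as follows (an illustration with our own naming and step rule, not the authors' algorithm):

```python
import numpy as np

def morris_elementary_effects(f, d, r, delta, rng):
    """Gradient-free sensitivity screening: for r random base points in
    [0,1]^d, perturb one coordinate at a time by delta and record the
    elementary effect (a finite difference) per input.  The mean absolute
    effect mu* ranks input importance without gradient/adjoint code."""
    ee = np.zeros((r, d))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=d)   # keep x + delta in the box
        fx = f(x)
        for j in range(d):
            xp = x.copy()
            xp[j] += delta
            ee[i, j] = (f(xp) - fx) / delta
    return np.abs(ee).mean(axis=0)      # mu*: mean absolute elementary effect

rng = np.random.default_rng(7)
f = lambda x: 5.0 * x[0] + 0.1 * x[1] + 0.0 * x[2]
mu_star = morris_elementary_effects(f, 3, 30, 0.1, rng)
# mu_star is close to [5.0, 0.1, 0.0]: x0 dominates, x2 is inactive
```

The adaptive variant in the talk aligns subsequent perturbation steps with the dominant directions identified so far, rather than keeping them axis-aligned with fixed delta.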

Allison Lewis
North Carolina State University
[email protected]

Ralph C. Smith
North Carolina State Univ
Dept of Mathematics, CRSC
[email protected]

Brian Williams
Los Alamos National Laboratory
[email protected]

Bassam A. Khuwaileh
North Carolina State University
[email protected]

Max D. Morris
Iowa State University
[email protected]

MS117

Cross Validation for Function Approximation: Quantitative Guarantees and Error Estimates

Using tools from compressive sensing and matrix concentration, we provide quantitative guarantees for the accuracy of leave-p-out cross validation towards model selection and error estimation in function interpolation and approximation problems. The guarantees are 'with high probability' with respect to the stochastic sampling points.

Rachel WardDepartment of MathematicsUniversity of Texas at [email protected]

MS118

A Space-time Fractional Optimal Control Problem: Analysis and Discretization

We study a linear-quadratic optimal control problem involving a parabolic equation with fractional diffusion and Caputo fractional time derivative of orders s ∈ (0, 1) and γ ∈ (0, 1], respectively. The spatial fractional diffusion is realized as the Dirichlet-to-Neumann map for a nonuniformly elliptic operator. Thus, we consider an equivalent formulation with a quasi-stationary elliptic problem with a dynamic boundary condition as state equation. The rapid decay of the solution to this problem suggests a truncation that is suitable for numerical approximation. We consider a fully-discrete scheme: piecewise constant functions for the control and, for the state, first-degree tensor product finite elements in space and a finite difference discretization in time. For s ∈ (0, 1) and γ < 1 we show convergence of this scheme. Moreover, under additional regularity, we derive a priori error estimates for the case s ∈ (0, 1) and γ = 1.

Harbir Antil
George Mason University
Fairfax, [email protected]

MS118

Length Scale and Manufacturability of Topology Optimized Designs

Topology optimization as a design methodology is widely accepted in industry. The designs are often considered conceptual due to loosely defined topologies and the need for post-processing. Amendments can affect the performance and in many cases can completely destroy the optimality of the solution. Therefore, the goal of this work is to present recent advancements in obtaining manufacturable designs with a clearly defined minimum length scale and performance robust with respect to production uncertainties.

Boyan S. Lazarov
Department of Mechanical Engineering
Technical University of [email protected]

MS118

Managing Uncertainty in PDE-Constrained Optimization Using Risk Measures

We consider PDE-constrained optimization problems under uncertainty. In order to manage the risk due to the implicit dependence of the state on the random variables, we make use of coherent risk measures, which yield risk-averse optimal solutions. First-order optimality conditions for the resulting non-smooth problems are derived via an epi-regularization technique that is directly linked to a numerical method. The results are illustrated by several numerical examples.

Thomas M. Surowiec
Department of Mathematics
Humboldt University of [email protected]

Drew P. Kouri
Optimization and Uncertainty Quantification
Sandia National [email protected]

MS118

Constrained Optimization with Low-Rank Tensors and Applications to Problems with PDEs under Uncertainty

We present Newton-type methods for inequality-constrained nonlinear optimization using low-rank tensors and apply them to variational inequalities with several uncertain parameters and to optimal control problems with PDEs under uncertainty. The developed methods are tailored to the use of low-rank tensor arithmetic, which offers only a limited set of operations and requires truncation (rounding) in between. We show that they can solve huge-scale optimization problems with trillions of unknowns to good accuracy.

Michael Ulbrich
Technische Universitaet Muenchen
Chair of Mathematical [email protected]

Sebastian Garreis
TU Muenchen, [email protected]

MS119

The Discrete Empirical Interpolation Method for the Steady-State Navier-Stokes Equations

We examine the discrete empirical interpolation method (DEIM) for solving the discrete steady-state Navier-Stokes equations where the viscosity is a random field. We explore the impact of the number of interpolation points on the accuracy of DEIM solutions and show that essentially full accuracy is obtained with fewer than 100 points, a number typically much smaller than the dimension of the reduced model. In addition, we develop preconditioning operators for solving the algebraic equations that arise from DEIM and demonstrate that preconditioned solvers are effective when the size of the parameter space is large.

Howard C. Elman
University of Maryland, College [email protected]

Virginia Forstall
University of Maryland at College [email protected]

MS119

Reduced-Order Modeling of Hidden Dynamics

The objective of this paper is to investigate how noisy and incomplete observations can be integrated in the process of building a reduced-order model. This problem arises in many scientific domains where there is a need for accurate low-order descriptions of highly complex phenomena that cannot be directly and/or deterministically observed. Within this context, the paper proposes a probabilistic framework for the construction of "POD-Galerkin" reduced-order models. Assuming a hidden Markov chain, the inference integrates the uncertainty of the hidden states relying on their posterior distribution. Simulations show the benefits obtained by exploiting the proposed framework.

Patrick Heas, Cedric Herzet
Inria, Center of [email protected], [email protected]

MS119

A Triple Model Reduction for Data-Driven Large-Scale Inverse Problems in High Dimensional Parameter Spaces

We present an approach to address the challenge of data-driven large-scale inverse problems in high dimensional parameter spaces. The idea is to combine a goal-oriented model reduction approach for the state, data-informed reduction for the parameter, and a randomized misfit approach for data reduction. The method is designed to mitigate the bottlenecks of large-scale PDE solves, of high dimensional parameter space exploration, and of the ever-increasing volume of data. Various numerical results are presented to support the proposed approach.

Ellen Le, Aaron Myers
University of Texas at Austin - [email protected], [email protected]

Tan Bui-Thanh
The University of Texas at [email protected]

MS119

POD-Galerkin Modeling with Adaptive Finite Elements for Stochastic Sampling

We study surrogate models for time-dependent PDEs with uncertain data. To include spatial adaptivity, we generalize proper orthogonal decomposition (POD) based reduced-order modeling to snapshots from different finite element spaces. We analyze how this generalization affects sampling-based uncertainty quantification and comment on the implementation for nested spatial grids. A numerical test case involving a 2d Burgers equation with uncertain initial data confirms the theoretical findings.

Sebastian Ullmann
TU [email protected]

Jens Lang
Department of Mathematics
Technische Universitat [email protected]

MS120

Local Optimal Rotation for Min-mode Calculation in Computing Transition State

Abstract not available.

Weiguo Gao
Fudan University, [email protected]

MS120

A Convex Splitting Scheme for the Saddle Point Search in Phase Field Models

The convex splitting method has been very successful in maintaining unconditional energy stability when evolving the steepest descent dynamics for phase field models, which is of particular use for searching local minima of the energy landscape since large time steps are allowed. In this work, we generalize this idea to the challenging problem of searching for transition states, i.e., index-1 saddle points. By using the Iterative Minimization Formulation, the saddle point problem is progressively solved by a series of minimization problems. We show how the convex splitting method still plays a key role in searching for saddle points through numerical results for the Allen-Cahn equation, the Cahn-Hilliard equation and the McKean-Vlasov equation.

Shuting Gu
City University of Hong [email protected]

MS120

The String Method for the Study of Rare Events

The string method is an effective numerical tool for computing minimum energy paths between two metastable states. It evolves a string in the path space by the steepest descent dynamics. In this talk, we show how the string method can be used to compute saddle points (transition states) for a given minimum of the potential or free energy. These saddle points act as bottlenecks for barrier-crossing events. An application to the wetting transition on patterned surfaces will be presented.
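As a concrete illustration (not the speaker's code), here is a bare-bones zero-temperature string method on the double-well potential V(x, y) = (x² − 1)² + y², whose two minima (±1, 0) are connected through the saddle at the origin; the image count and step size are arbitrary choices.

```python
import math

def V(p):
    """Double-well potential: minima at (-1, 0) and (1, 0), saddle at (0, 0)."""
    x, y = p
    return (x * x - 1.0) ** 2 + y * y

def grad_V(p):
    x, y = p
    return (4.0 * x * (x * x - 1.0), 2.0 * y)

def reparametrize(path):
    """Redistribute the images to equal arc-length spacing along the string."""
    n = len(path)
    s = [0.0]
    for a, b in zip(path, path[1:]):
        s.append(s[-1] + math.hypot(b[0] - a[0], b[1] - a[1]))
    new, j = [path[0]], 0
    for k in range(1, n - 1):
        t = s[-1] * k / (n - 1)
        while s[j + 1] < t:
            j += 1
        w = (t - s[j]) / max(s[j + 1] - s[j], 1e-12)
        a, b = path[j], path[j + 1]
        new.append((a[0] + w * (b[0] - a[0]), a[1] + w * (b[1] - a[1])))
    new.append(path[-1])
    return new

def string_method(n=21, steps=2000, dt=0.01):
    """Steepest-descent update of each interior image, then reparametrization."""
    xs = [-1.0 + 2.0 * k / (n - 1) for k in range(n)]
    path = [(x, 0.5 * (1.0 - x * x)) for x in xs]  # initial arc between minima
    for _ in range(steps):
        moved = [path[0]]  # endpoints stay at the two minima
        for p in path[1:-1]:
            g = grad_V(p)
            moved.append((p[0] - dt * g[0], p[1] - dt * g[1]))
        moved.append(path[-1])
        path = reparametrize(moved)
    return path
```

The highest-energy image on the converged string approximates the transition state, here (0, 0) with barrier height V = 1.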

Weiqing Ren
National University of Singapore and IHPC, [email protected]

MS120

Quantifying Transition Dynamics of Nanoscale Complex Fluid via Mesoscopic Modeling

Abstract not available.

Lei Wu
School of Mathematical Sciences
Peking University, [email protected]

MS121

Learning to Optimize with Confidence

Abstract not available.

Andreas Krause
ETH [email protected]

MS121

Stratified Bayesian Optimization

Abstract not available.

S. T. Palmerin
n/a
n/a

MS121

Empirical Orthogonal Function Calibration with Simulator Uncertainty

Abstract not available.

Matthew T. Pratola
The Ohio State [email protected]

MS121

Variational Bayesian Inference Using Basis Adaptation in Homogeneous Chaos Surrogates

The problem of subsurface characterization in reservoir modeling under uncertainty poses several computational challenges, two of them typically being the high dimensionality of the random fields involved in the description of the rock properties and the expensive reservoir models involved in uncertainty propagation. We present a novel dimension reduction method that is applicable when the true models are substituted by Wiener Polynomial Chaos surrogates. The method involves adapting the basis of the underlying Gaussian Hilbert space such that the probability measure of the original Chaos expansion is concentrated in a lower dimensional space of the new adapted expansion. The attractiveness of our methodology is highlighted when applied as a dimensionality reduction technique within a Variational Bayesian inference (VI) framework in order to efficiently solve inverse problems.

Panagiotis Tsilifis
University of Southern [email protected]

Roger Ghanem
University of Southern California
Aerospace and Mechanical Engineering and Civil Engineering
[email protected]

MS122

Marginal Then Conditional Sampling for Hierarchical Models

Independent sampling in the linear-Gaussian inverse problem may be performed for less computational cost than regularized inversion, when selection of the regularizing parameter is taken into account. We demonstrate a sequence of increasingly efficient sampling algorithms: block Gibbs, one-block, and the 'marginal-then-conditional' sampler with an efficient method for evaluating the ratio of determinants required for MCMC. This last method scales well with problem and data size, allowing inference over function space models with no approximation.

Colin Fox
University of Otago, New [email protected]

Richard A. Norton
University of Otago
Department of [email protected]

J. Andres Christen
CIMAT, [email protected]

MS122

Sampling Gaussian Distributions in High Dimensional Statistical Inference

Direct sampling of a multivariate Gaussian distribution is traditionally performed using a Cholesky factorization of the covariance/precision matrix. In this talk, we recall existing techniques to avoid the Cholesky factorization and describe a new method that uses the reversible jump MCMC framework to derive a convergent Gaussian sampler involving approximate resolution of a linear system. We also present an unsupervised adaptive scaling of the proposed sampler to minimize the computation cost per effective sample.
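As background to the first sentence, a textbook Cholesky sampler can be sketched as follows (a minimal illustration for small dense matrices; the talk's contribution is precisely about avoiding this factorization at scale):

```python
import math
import random

def cholesky(a):
    """Lower-triangular L with L L^T = A, for a small SPD matrix A."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def sample_gaussian(mu, cov, rng):
    """Draw x ~ N(mu, cov) as x = mu + L z with z i.i.d. standard normal."""
    L = cholesky(cov)
    z = [rng.gauss(0.0, 1.0) for _ in mu]
    return [m + sum(L[i][k] * z[k] for k in range(i + 1))
            for i, m in enumerate(mu)]
```

The factorization costs O(n³) and its triangular solve O(n²) per draw, which is what motivates the linear-system-based samplers of the talk in high dimensions.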

Saïd Moussaoui
Ecole Centrale de Nantes
IRCCyN, [email protected]

Jerome Idier
IRCCYN
Nantes, France

MS122

Fast Sampling for Inverse Equilibrium Problems

We present efficient numerical methods for posterior exploration by Metropolized approximate Gibbs sampling of ATCA systems that occur throughout science and engineering. The presented methods cover efficient simulation of the system class as well as the numerically cheap evaluation of low rank updates and approximate conditional sampling. In numerical tests, sampling from this nonlinear inverse problem incurs little more cost than the best samplers for linear-Gaussian inverse problems.

Markus [email protected]

Colin Fox
University of Otago, New [email protected]

MS122

Fitting Lateral Transfer: MCMC for a Phylogenetic Likelihood Obtained from a Massive Linear ODE System

We give a Bayesian sample-based inference procedure for reconstructing the phylogenies of taxa from evolutionary traits when diversification is not strictly tree-like, but may include lateral transfer. The likelihood for a tree with L leaves is determined from the solution of a sequence of sparse linear ODEs of dimension up to 2^L − 1. We propose an exact approach exploiting symmetries in Green's functions for small L and MCMC methods for larger values of L.

Geoff Nicholls
Oxford [email protected]

Luke Kelly
Department of Statistics
Oxford [email protected]

MS123

Nonlinear Statistical Analysis Based on Imprecise Data

Abstract not available.

Thomas Augustin
Institut für Statistik [email protected]

MS123

Modelling with Random Sets in Engineering

When assessing the response of a mechanical structure, loads, material and geometric parameters enter as inputs. As a rule, knowledge about these parameters is incomplete. The theory of random sets makes it possible to combine interval methods and stochastics for quantifying the uncertainties. The talk addresses modelling with random sets combined with random fields, simulating the uncertain structural response, sensitivity analysis, and selected applications in elastostatics and structural dynamics.

Michael Oberguggenberger
University of Innsbruck, [email protected]

MS123

Dealing with Limited and Scarce Information in Complex Systems and Critical Infrastructures

Critical infrastructures are ensembles of assets, organizations and complex network layers (e.g. power network, information network, etc.) whose reliability is of major concern for people's safety and well-being. In some cases, system data might be unavailable, or parts of the information may be lost or modified when transferred or processed. In order to deal with uncertainty arising from scarce or inconsistent data, classical probabilistic approaches are sometimes used, although they require assumptions to model imprecision that are sometimes hard to justify. Unwarranted hypotheses can alter the apparent level of uncertainty and system reliability. Imprecise probabilistic methods offer a solid grounding for the rational treatment of uncertainty due to scarce and imprecise data, providing powerful frameworks which can be coupled to traditional methods to better understand the true quality of the information and the effective system reliability.

Roberto Rocchetta, Edoardo Patelli
University of [email protected],[email protected]

MS123

Nonparametric Bayesian Estimation of System Reliability with Imprecise Prior Information on Component Lifetimes

Estimation of reliability functions for complex systems is usually based on component test data. As data is often scarce, expert information must be included, which is often also vague and imprecise. Such uncertain but influential expert information can be combined with test data using a flexible imprecise Bayesian nonparametric approach, producing sets of posterior predictive system reliability functions that reflect uncertainty in expert information and the amount of data, and provide prior-data conflict sensitivity.

Gero Walter
Eindhoven University of [email protected]

Louis Aslett
University of [email protected]

Frank Coolen
Durham [email protected]

MS124

Approximation of Probability Density Functions by the Multilevel Monte Carlo Maximum Entropy Method

We develop a complete convergence theory for the Maximum Entropy method based on moment matching for a sequence of approximate statistical moments estimated by the Multilevel Monte Carlo method. Under appropriate regularity assumptions on the target probability density function, the proposed method is superior to the Maximum Entropy method with moments estimated by the Monte Carlo method. The new theoretical results are illustrated in numerical examples. We compare our approach with an alternative MLMC-based method by M. Giles, T. Nagapetyan and K. Ritter (2015) and comment on the benefits and drawbacks.

Alexey Chernov, Claudio Bierig
Institut für Mathematik
Carl von Ossietzky Universität Oldenburg
[email protected], [email protected]

MS124

Efficient Coupling of the Multi-Level Monte Carlo Method with Multigrid

We study the benefits of combining multigrid with multilevel Monte Carlo (MLMC). We consider groundwater flow using an elliptic PDE with lognormal random hydraulic conductivity fields. The solution is based on a finite volume discretization that is solved with a cell-centered multigrid (CCMG) method. We show through numerical experiments how we can achieve greater computational savings by using the same set of levels for MLMC and the multigrid solvers.
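The telescoping identity behind MLMC, E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], can be sketched as follows; the level hierarchy here is a toy scalar model with an artificial O(2^−l) bias, not the multigrid PDE solver of the talk.

```python
import random

def mlmc(estimand, n_levels, n_samples, rng):
    """Telescoping MLMC estimator of E[P_{L-1}] with coupled level pairs."""
    total = 0.0
    for l in range(n_levels):
        acc = 0.0
        for _ in range(n_samples[l]):
            x = rng.gauss(0.0, 1.0)  # shared randomness couples P_l and P_{l-1}
            fine = estimand(l, x)
            acc += fine if l == 0 else fine - estimand(l - 1, x)
        total += acc / n_samples[l]
    return total

def p_level(l, x):
    """Toy level-l approximation of x**2 with deterministic O(2**-l) bias."""
    return x * x + 2.0 ** (-l)
```

With four levels the estimator targets E[x²] + 2⁻³ = 1.125; because the toy bias is deterministic, the level corrections here have zero variance, which is the ideal case that MLMC sample allocation exploits by putting fewer samples on the expensive fine levels.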

Richard P. Dwight
Delft University of [email protected]

Prashant [email protected]

Kees Oosterlee
CWI - Center for Mathematics and Computer [email protected]

MS124

MLMC with Control Variate for Lognormal Diffusion

We consider the stochastic Darcy problem with lognormally distributed permeability field and propose a novel MLMC approach with a control variate variance reduction technique on each level. We model the log-permeability as a stationary Gaussian random field with Matern covariance. The control variate is obtained by solving an auxiliary problem with a smoothed permeability field, and its expectation is effectively computed with a Stochastic Collocation method on the finest level in which the control variate is applied.
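The control-variate identity used on each level can be sketched generically (with a toy integrand, not the Darcy solver): for a cheap correlated quantity g with known mean E[g], estimate E[f] by mean(f) − β (mean(g) − E[g]).

```python
import random

def cv_estimate(f, g, eg, n, rng):
    """Control-variate Monte Carlo: mean(f) - beta * (mean(g) - E[g]),
    with the (near-)optimal beta fitted from the sample covariance."""
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    fs = [f(x) for x in xs]
    gs = [g(x) for x in xs]
    fbar = sum(fs) / n
    gbar = sum(gs) / n
    cov = sum((a - fbar) * (b - gbar) for a, b in zip(fs, gs)) / (n - 1)
    var = sum((b - gbar) ** 2 for b in gs) / (n - 1)
    beta = cov / var
    return fbar - beta * (gbar - eg)
```

In the abstract's setting, g plays the role of the smoothed-permeability solution, whose mean is supplied by the Stochastic Collocation solve; the closer g tracks f, the larger the variance reduction per level.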

Fabio Nobile
EPFL, [email protected]

Francesco Tesei
ECOLE POLYTECHNIQUE FEDERALE [email protected]

MS124

A Multiscale Model and Variance Reduction Method for Wave Propagation in Heterogeneous Materials

We present a model and variance reduction method for computing statistical outputs of stochastic wave propagation problems through heterogeneous materials. We combine the multiscale continuous Galerkin (MSCG) discretization and the reduced basis method for the Helmholtz equation with a multilevel variance reduction strategy, exploiting the statistical correlation among the MSCG and reduced basis approximations to accelerate the convergence of Monte Carlo simulations. We analyze the effect of manufacturing errors on waveguiding through photonic crystal structures.

Ferran Vidal-Codina, Cuong Nguyen, Jaime Peraire
Massachusetts Institute of [email protected], [email protected], [email protected]

MS125

Optimal Experimental Design for ODE Models

Recovering parameters from ODE models can be inherently hard. Due to the dimensionality, complexity, and sensitivity of the model states and parameters, robust parameter estimation methods are needed. We will present robust parameter estimation approaches that will guide optimal experimental design methods towards accurate recovery of model parameters.

Matthias Chung, Justin Krueger
Department of Mathematics
Virginia [email protected], [email protected]

Mihai Pop
University of [email protected]

MS125

A Layered Multiple Importance Sampling Scheme for Focused Optimal Bayesian Experimental Design

The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are costly. We propose an information theoretic framework for focused experimental design with simulation-based models, with the goal of maximizing information gain in targeted subsets of model parameters. A novel layered multiple importance sampling technique is used to efficiently evaluate the expected information gain to make optimization feasible for computationally intensive and high-dimensional problems.
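For context, the standard double-loop nested Monte Carlo estimator of expected information gain (the baseline that layered multiple importance sampling is designed to improve upon) looks as follows for a scalar linear-Gaussian toy model, where EIG(d) = ½ ln(1 + d²/σ²) is known analytically; all names and sample sizes are illustrative.

```python
import math
import random

def eig_nested(d, sigma, n_out=500, n_in=500, seed=1):
    """Double-loop nested Monte Carlo estimate of expected information gain
    for the toy design problem y = d*theta + noise, theta ~ N(0, 1)."""
    rng = random.Random(seed)

    def loglik(y, theta):
        r = (y - d * theta) / sigma
        return -0.5 * r * r - math.log(sigma * math.sqrt(2.0 * math.pi))

    acc = 0.0
    for _ in range(n_out):
        theta = rng.gauss(0.0, 1.0)
        y = d * theta + rng.gauss(0.0, sigma)
        # inner loop: crude Monte Carlo estimate of the evidence p(y)
        ev = sum(math.exp(loglik(y, rng.gauss(0.0, 1.0)))
                 for _ in range(n_in)) / n_in
        acc += loglik(y, theta) - math.log(ev)
    return acc / n_out
```

The inner evidence estimate is the expensive, biased part of this baseline; better importance sampling of the inner loop is exactly where schemes like the one in the talk gain efficiency.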

Chi Feng, Youssef M. Marzouk
Massachusetts Institute of [email protected], [email protected]

MS125

Variance Reduction Methods for Efficient Bayesian Experimental Design

We propose a framework where a lower bound of the expected information gain is used as an alternative design criterion in order to compute Bayesian optimal experimental designs. In addition to alleviating the computational burden, this also addresses issues concerning estimation bias. The validity of our approach is demonstrated in parameter inference problems concerning flow and transport in porous media, where Polynomial Chaos approximations of the forward models are constructed that further accelerate the objective function evaluations.

Roger Ghanem
University of Southern California
Aerospace and Mechanical Engineering and [email protected]

Panagiotis Tsilifis
University of Southern [email protected]

MS125

Efficient Computation of Bayesian Optimal Designs

In Bayesian inference, fast and reliable estimation and maximization of the expected information gain with respect to alternative setups is crucial to provide a valuable orientation toward the planning of future experiments. This talk focuses on efficient methods for obtaining approximations to the posterior distribution in Bayesian design problems. Examples related to inverse heat conduction problems will illustrate the effectiveness of such approaches in order to recommend to the practitioner the best allocation of resources.

Quan Long
United Technologies Research Center, [email protected]

Marco Scavino
Universidad de la Republica
Montevideo, [email protected]

Raul F. Tempone
Mathematics, Computational Sciences & Engineering
King Abdullah University of Science and [email protected]

MS126

Stochastic Collocation for Differential Algebraic Equations Arising from Gas Network Simulation

We use the method of stochastic collocation to quantify the uncertainties arising from the stochastic parameters of the differential algebraic equations. Due to the method's non-intrusiveness, we can apply a commercial gas network simulator to compute a stationary solution for a given parameter set. Since in DAEs only derivatives of a part of the variables are incorporated, the solution is typically non-differentiable. We present a method to overcome the existing kinks and show some numerical convergence results.
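Non-intrusiveness means only point evaluations of the simulator are needed, e.g. with a 3-point Gauss-Hermite rule for a single standard-normal parameter (a generic sketch, not tied to the gas network simulator of the talk):

```python
import math

# 3-point Gauss-Hermite rule for a standard normal parameter
# (probabilists' nodes and weights), exact for polynomials up to degree 5.
NODES = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
WEIGHTS = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def collocate(solver):
    """Non-intrusive stochastic collocation: run the black-box solver at each
    node and combine with quadrature weights to get the output mean/variance."""
    vals = [solver(z) for z in NODES]
    mean = sum(w * v for w, v in zip(WEIGHTS, vals))
    second = sum(w * v * v for w, v in zip(WEIGHTS, vals))
    return mean, second - mean * mean
```

The solver is called once per node, exactly as a commercial simulator would be; smoothness of the solver output in the parameter governs the convergence rate, which is why the kinks of DAE solutions need the special treatment the abstract mentions.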

Barbara Fuchs
Institute for Numerical Simulations
University of [email protected]

Jochen Garcke
University of [email protected]

MS126

Parametric Uncertainty in Macroscopic Traffic Flow Model Calibration from GPS Data

Facing the problem of macroscopic traffic flow model calibration with Floating Car Data from GPS devices, we propose to introduce the dependence on stochastic parameters in the mean velocity closure equation and the initial density profile. We use a semi-intrusive deterministic approach to quantify uncertainty propagation in traffic density evolution and travel-time estimation. Numerical results are presented. The approach is then validated on processed real data from a stretch of highway in South-East France.

Enrico Bertino
Politecnico di [email protected]

Regis Duvigneau
INRIA Sophia [email protected]

Paola Goatin
Team [email protected]

MS126

On a Data Assimilation Method Coupling Kalman Filtering, the MCRE Concept, and PGD Model Reduction for Real-Time Updating of Structural Mechanics Models

This work introduces a new strategy for real-time parameter identification or updating in structural mechanics models, defined as dynamical systems. The main idea is to use Unscented Kalman filtering, which is a Bayesian data assimilation method, in conjunction with an energetic method for inverse problems, the modified Constitutive Relation Error. A PGD model reduction is also introduced to perform real-time computations. The proposed approach is illustrated, and compared to classical approaches, on several applications.

Basile Marchand
LMT, ENS Cachan, CNRS, Universite [email protected]

Ludovic Chamoin
ENS [email protected]

Christian Rey
Safran [email protected]

MS126

Sequential Design of Computer Experiments forthe Solution of Bayesian Inverse Problems

We present an adaptive sampling method for the solution of Bayesian parameter inference problems. The model function is assumed to be computationally expensive, so the goal is to approximate the posterior with as few function evaluations as possible. To this end, the a priori unknown model function is described by a process emulator, and in each iteration a new point is selected such that the estimated impact on the accuracy of the posterior distribution is maximized.

Michael Sinsbeck, Wolfgang Nowak
Institute for Modelling Hydraulic and Environmental Systems
University of [email protected],[email protected]

MS127

Separating Discretization Error and Parametric Uncertainty in Goal-Oriented Error Estimation for PDEs with Uncertainty

We revisit our earlier work on goal-oriented error estimation for partial differential equations with uncertain data, where error estimates were decomposed into contributions from the physical discretization and the approximation in parameter space. We provide a detailed examination of the error estimation and decomposition process for a nonlinear model problem; effectivity indices, error indicators, and adaptive strategies are discussed. Finally, we present the application of the framework to Bayesian model selection for turbulence modeling.

Corey M. Bryant
ICES
The University of Texas at [email protected]

Serge Prudhomme
École Polytechnique de Montréal
Montréal, Québec, [email protected]

MS127

Error Estimation and Adaptivity for PGD Reduced Models

A prominent model reduction technique based on the low-rank canonical format, referred to as Proper Generalized Decomposition (PGD), was recently introduced for the numerical solution of high-dimensional problems. In the PGD framework, an a posteriori error estimate based on dual analysis was proposed, enabling assessment of the contributions of various error sources. In the present work, we address new advances in this verification method, considering parameters of all kinds, and nonlinear problems.

Ludovic Chamoin, Pierre Ladeveze, Pierre-Eric Allier
ENS [email protected], [email protected], [email protected]

MS127

Adaptive Stochastic Galerkin FEM with Hierarchical Tensor Representation

PDEs with stochastic data usually lead to very high-dimensional algebraic problems which easily become infeasible for numerical computations because of the coupling structure of the discretised stochastic operator. Recently, an adaptive stochastic Galerkin FEM based on a residual a posteriori error estimator was presented and the convergence of the adaptive algorithm was shown. While this approach leads to a drastic reduction of the complexity of the problem due to the iterative discovery of the sparsity of the solution, the problem size and structure are still rather limited. To allow for larger and more general problems, we exploit the tensor structure of the parametric problem by representing operator and solution iterates in the tensor train (TT) format. The (successive) compression carried out with these representations can be seen as a generalisation of other model reduction techniques, e.g. the reduced basis method. We show that this approach facilitates the efficient computation of different error indicators related to the computational mesh, the active polynomial chaos index set, and the TT rank. In particular, the curse of dimension is avoided.

Martin Eigel
WIAS [email protected]

MS127

A Posteriori Error Estimates for Navier-Stokes Equations with Small Uncertainties

We perform an a posteriori error analysis for the steady-state incompressible Navier-Stokes equations defined on random domains. We use the so-called domain mapping method that yields PDEs on a fixed domain with random coefficients. With a perturbation approach, we then expand the exact solution with respect to a parameter ε that controls the amount of uncertainty. We derive error estimates accounting for the two sources of error and give numerical examples to illustrate the theoretical results.

Diane Guignard
Ecole Polytechnique Federale de [email protected]

Fabio Nobile
EPFL, [email protected]

Marco Picasso
EPFL, [email protected]

MS128

History Matching with Structural Reliability Methods

History matching is an inverse problem methodology that identifies subsets of the input space of an expensive computer model such that the output is likely to have a good fit with observed data. Its efficiency depends on the evaluation of an implausibility function and on the iterative sampling of the reduced input space. We present a sampling strategy based on Subset Simulation, an efficient technique developed and used to solve structural reliability problems.
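The implausibility function at the heart of history matching can be sketched as follows (a generic illustration with hypothetical emulator interface, not the talk's Subset Simulation scheme): I(x) = |E[f(x)] − z| / sqrt(total variance), and inputs with I(x) below a cutoff (commonly 3) survive to the next wave.

```python
import math

def implausibility(pred_mean, pred_var, obs, obs_var):
    """I(x) = |E[f(x)] - z| / sqrt(emulator variance + observation variance)."""
    return abs(pred_mean - obs) / math.sqrt(pred_var + obs_var)

def not_implausible(candidates, emulator, obs, obs_var, cutoff=3.0):
    """Keep the candidate inputs whose implausibility is below the cutoff."""
    keep = []
    for x in candidates:
        m, v = emulator(x)  # emulator returns (mean, variance) at x
        if implausibility(m, v, obs, obs_var) < cutoff:
            keep.append(x)
    return keep
```

Each wave shrinks the not-implausible region, which is exactly the sequence of nested failure-like sets that makes Subset Simulation a natural sampler for it.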

Alejandro Diaz
University of [email protected]

MS128

Efficient Inverse Problems with Gaussian Process Surrogates Via Entropy Search

Scientists often express models through computer simulators, and Gaussian process emulation of these simulators provides an effective way to reduce computation when interrogating such a model. Meanwhile, Bayesian optimisation has gained popularity, where a Gaussian process emulator is used to provide information-theoretic guidance for finding extrema of functions. Here, we combine these ideas, proposing a method for solving inverse problems guided by entropic search. The method is demonstrated on a simple model of climate.

James Hensman
University of Lancaster
[email protected]

MS128

Bayesian Inference for Linear Parabolic PDEs with Noisy Boundary Conditions

Many real problems require inferring the coefficients of linear parabolic PDEs under the assumption that noisy measurements are available in the interior of a domain of interest and for the unknown boundary conditions. This talk will focus on solving the inverse problem by using hierarchical Bayesian techniques based on the marginalization of the contribution of the boundary parameters. Inverse heat conduction problems for simulated and real experiments under different setups will illustrate the proposed approach.

Zaid Sawlan
King Abdullah University of Science and Technology (KAUST)
[email protected]

MS128

Iterative History Matching for Computationally Intensive Inverse Problems

Iterative history matching using Bayes Linear statistics has been shown to be a very successful methodology for solving inverse problems for computationally intensive models. We will discuss recent advances in this area, including nested emulator approximations and fast output scans for efficient input space reduction. We will describe how history matching provides a natural framework for designing future experiments tailored to resolving specific uncertainties uncovered in the inverse calculation, with a systems biology application.

Ian Vernon
University of [email protected]

MS129

Regression-Based Adaptive Sparse Polynomial Dimensional Decomposition for Sensitivity Analysis in Fluids Simulation

Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate structure between PDD and Analysis-of-Variance, PDD is able to provide a simpler and more direct evaluation of the Sobol sensitivity indices, when compared to polynomial chaos (PC). This work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse-PDD with PDD coefficients computed by regression. The size of the final sparse-PDD representation is much smaller than that of the full PDD, since only significant terms are eventually retained.
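For comparison, the plain Monte Carlo pick-freeze estimator of first-order Sobol indices, which metamodel-based approaches such as sparse-PDD are meant to make affordable, can be sketched as follows (names and the linear test model are illustrative):

```python
import random

def sobol_first_order(f, dim, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices:
    S_i = Cov(f(A), f(B_i)) / Var(f), where row B_i copies input i from A."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fa = [f(a) for a in A]
    mu = sum(fa) / n
    var = sum((v - mu) ** 2 for v in fa) / n
    indices = []
    for i in range(dim):
        fb = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(va * vb for va, vb in zip(fa, fb)) / n - mu * (sum(fb) / n)
        indices.append(cov / var)
    return indices
```

The cost grows as (dim + 1) · n model evaluations, which is exactly what becomes prohibitive for expensive fluid simulations and motivates evaluating the indices analytically from a sparse-PDD surrogate instead.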

Pietro Marco [email protected]

Kunkun Tang
XPACC - Coordinated Science Lab
University of Illinois [email protected]

Remi Abgrall
University of [email protected]

Pietro M. Congedo
INRIA Bordeaux Sud-Ouest (FRANCE)
[email protected]

MS129

Effective Improvement of Aerodynamic Perfor-mance over a Parameter Interval

Accounting for uncertainties is now a requirement in air-craft design. However, classical robust optimization meth-ods, based on statistical criteria (mean, variance, etc), can-not guarantee an effective improvement of the aerodynamicperformance over the whole interval of variation of uncer-tain parameters. We propose here an extension of thesteepest-descent method, that allows to define a descentdirection common to a large set of functionals parame-terized by the uncertain parameters. Illustrations will in-clude uncertainties arising from flow conditions and time-dependency.

Regis Duvigneau
INRIA Sophia Antipolis
[email protected]

Jean-Antoine Desideri
INRIA
[email protected]

MS129

Adaptive Surrogate Modeling Strategies for Efficient Bayesian Inference

Polynomial surrogates have recently been proposed to reduce the computational cost of Bayesian inference. In regions of low posterior probability, the construction of accurate surrogates is however wasteful. We propose a novel adaptive construction of the surrogate where additional model evaluations in high posterior regions are taken into account to progressively improve the approximation. The method is tested on several academic examples to illustrate its effectiveness.

Olivier P. Le Maître, Didier Lucor
[email protected], [email protected]

Lionel Mathelin
LIMSI - CNRS
[email protected]

MS129

Time-Optimal Path Planning in Uncertain Flow Field Using Ensemble Method

We focus on time-optimal path planning in uncertain flows. Uncertainty is represented in terms of a finite-size ensemble, and for each ensemble member a deterministic time-optimal path is predicted. This enables us to perform a statistical analysis of travel times, and consequently develop a path planning approach that accounts for these statistics. The performance of the ensemble representation is analyzed, and the simulations are used to develop insight into extensions dealing with general circulation forecasts.

Tong [email protected]

Olivier P. Le [email protected]

Ibrahim Hoteit, Omar M. Knio
King Abdullah University of Science and Technology (KAUST)
[email protected], [email protected]

MS130

Bayesian Inference and Model Selection in Molecular Dynamics Simulations

Abstract not available.

Panagiotis Angelikopoulos
Computational Science, ETH Zurich, Switzerland
[email protected]

Panagiotis Angelikopoulos
Computational Science and Engineering Lab, Switzerland
Institute for Computational Science, DMAVT, ETH Zurich
[email protected]

MS130

Path-Space Information Criteria and UQ Bounds for the Long-Time Behavior of Extended Stochastic Systems

Abstract not available.

Markos A. Katsoulakis
University of Massachusetts, Amherst
Dept of Mathematics and Statistics
[email protected]

MS130

No Equations, No Variables: Data, and the Computational Modeling of Complex Systems

Abstract not available.

I. G. Kevrekidis
Princeton University
[email protected]

MS130

Coarse-Graining of Stochastic Dynamics

Abstract not available.

Tony Lelievre
Ecole des Ponts ParisTech
[email protected]

Frederic Legoll
LAMI, ENPC
[email protected]

MS131

Successive Approximation of Nonlinear Confidence Regions (SANCR)

In parameter estimation problems an important issue is the approximation of confidence regions. These can be nonlinear and, in case of bad conditioning of the estimation problem, strongly elongated. Therefore, their approximation can be computationally highly expensive, especially for differential equations. To combine high accuracy and low computational costs, we have developed a method that uses only successive linearizations in the vicinity of an estimator. We show the potential of this method by numerical examples.

Thomas Carraro
Heidelberg University
[email protected]

MS131

Uncertainty Quantification for Crosswind Stability of Vehicles

The objective of uncertainty quantification for crosswind stability is the computation of failure probabilities for a non-stationary realistic gust model with random amplitude and duration. In addition, the strongly nonlinear aerodynamic coefficients, which highly depend on the vehicle shape as well as the relative wind angle, are modeled as random variables. Failure probabilities can be obtained either conditioned on a certain mean wind speed and direction or a certain physical scenario. They are based on limit states for the safe operation of the vehicle.

Carsten Proppe
Karlsruhe Institute of Technology
[email protected]
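
[Editorial illustration of the limit-state idea above: a failure probability is P[g(X) < 0] for a limit state g evaluated at random inputs. The gust amplitude/duration distributions and the limit state below are invented placeholders, not the vehicle model of the talk.]

```python
# Toy Monte Carlo estimate of a failure probability P[g(X) < 0].
# Amplitude ~ N(10, 2), duration ~ U(1, 3), and the threshold 25.0
# are all made-up illustration values.
import random

random.seed(0)

def limit_state(amplitude, duration):
    # toy safe-operation margin: failure when the gust "impulse"
    # amplitude * duration exceeds a fixed threshold
    return 25.0 - amplitude * duration

n = 100_000
failures = sum(
    1 for _ in range(n)
    if limit_state(random.gauss(10.0, 2.0), random.uniform(1.0, 3.0)) < 0
)
print("estimated failure probability:", failures / n)
```

Conditioning on a mean wind speed, as in the abstract, would simply fix part of the input distribution before sampling.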

MS131

Towards Meshless Uncertainty Quantification in Medical Imaging on GPUs

In cancer diagnosis, radiological imaging is one of the only non-invasive ways to decide whether tissue is subject to cancer or not. However, radiological images are subject to high variabilities due to imaging errors, patient movement, etc. In the recent years, the extraction of imaging biomarkers (e.g. features or meta-data) from radiological images is a growing research topic. Imaging biomarkers shall be reliable diagnosis indicators for cancer. However, these biomarkers are assumed to be subject to a high variability based on the underlying image variability. The purpose of our ongoing research endeavour is to find ways to overcome this variability and thereby to make the overall biomarker extraction pipeline more reliable. Here, one crucial part is the appropriate quantification of stochastic moments of biomarkers and images. In the recent years, we have introduced the meshless kernel-based stochastic collocation using radial basis functions. This non-intrusive UQ method is designed to solve large-scale problems in a high-order convergent and scaling fashion. We are about to integrate our method into the appropriate imaging tool pipeline of radiologists. Moreover, we aim at performing our full analysis on graphics processing units, to be able to integrate the software in imaging devices, in the long run. In our presentation, we will give the latest results on the described developments.

Peter Zaspel
EMCL, IWR, University of Heidelberg
[email protected]

MS132

Uncertainty Propagation Estimates with Semidefinite Optimization

We show how tight estimates on the support of the push-forward of a probability measure through a non-linear semi-algebraic mapping can be obtained by moment-sum-of-squares hierarchies and convex semidefinite programming.

Didier Henrion
University of Toulouse, LAAS-CNRS
[email protected]
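
[Editorial note, in our own notation rather than the speaker's: for a polynomial map f and input measure mu, the pushforward nu = f#mu is characterized by its moments, and the degree-d relaxation constrains the associated moment (Hankel) matrix to be positive semidefinite:]

```latex
y_k = \int f(x)^k \,\mathrm{d}\mu(x), \qquad k = 0, 1, \dots, 2d,
\qquad
M_d(y) = \bigl[\, y_{i+j} \,\bigr]_{0 \le i, j \le d} \succeq 0 .
```

Outer approximations of the support of the pushforward then follow from semidefinite programming over such moment constraints, tightening as d grows.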

MS132

Global Sensitivity Analysis for the Boundary Control of an Open Channel

We consider the boundary control problem of an open-water channel, whose boundary conditions are defined by the position of a downstream overflow gate and an upstream underflow gate. Water depth and velocity are described by Shallow-Water equations. As physical parameters are unknown, a stabilizing boundary control is computed for their "nominal" values, and then a sensitivity analysis is performed to measure the impact of uncertainty in parameters on a given to-be-controlled output.

Alexandre Janon
Universite Paris-Sud
[email protected]

MS132

Worst-Case Robustness Analysis and Synthesis in Control

We give an overview of key ideas for handling parametric and non-parametric uncertainties in control. A main emphasis will be laid on robust stability and performance analysis techniques that are based on dissipation arguments and dedicated tools from robust semi-definite programming. We highlight the difficulties that arise in designing robust controllers and address paths by which they can be overcome. If time permits, we point out relations to stochastic uncertainty descriptions.

Carsten W. Scherer
SRC SimTech Chair Mathematical Systems Theory
University Stuttgart
[email protected]

MS132

Uncertainty Randomization in Control Systems

It is well-known that uncertainty leads to a significant performance degradation of control systems. At the beginning of this lecture, we illustrate this fact using a simple example and discuss two alternative approaches: robust and stochastic. Subsequently, we provide a perspective of randomized techniques to handle uncertain stochastic systems. In particular, we introduce the notion of sample complexity, demonstrate its key role in feedback systems, and study related probabilistic bounds. Regarding control design, we formulate a robust optimization approach for both convex and nonconvex problems. Finally, connections with statistical learning theory will be outlined.

Roberto Tempo
CNR-IEIIT, Politecnico di Torino
[email protected]

MS133

Renewable Energy Integration in the Central Western European System

This paper presents a detailed model of the current market coupling mechanism in the Central Western European system. Market coupling is compared against deterministic and stochastic unit commitment under current renewable energy integration levels. Efficiency losses of the market coupling model with respect to deterministic unit commitment are estimated to range between 2.8%-6.0%. Efficiency gains of stochastic with respect to deterministic unit commitment are estimated at 0.9%.

Ignacio Aravena
Universite catholique de Louvain
[email protected]

Anthony Papavasiliou
Universite catholique de Louvain
[email protected]

MS133

Multi-Timescale Stochastic Power Systems Operation with High Wind Energy Penetration

We present a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at four distinct but interconnected operational timescales. Comparative case studies with deterministic approaches are conducted in low wind and high wind penetration scenarios to highlight the advantages of the proposed methodology. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.

Hongyu Wu, Ibrahim Krad
National Renewable Energy Laboratory
[email protected], [email protected]

Erik Ela
Electric Power Research Institute
[email protected]

Anthony Florita, Jie Zhang, Eduardo Ibanez, Bri-Mathias Hodge
National Renewable Energy Laboratory
[email protected], [email protected], [email protected], [email protected]

MS133

Introducing Uncertainty to Transmission Expansion Planning Models: A North Sea Case Study

Grid investments are sunk costs with a long lifetime, and the market fundamentals for cost recovery are experiencing an increased share of intermittent power production. To cope with the changing market environments and alternative flexibility options, it is of great importance to improve TEP models to account for both uncertainty and enhanced modelling of market operation. The progressive hedging algorithm is applied to a MILP for transmission expansion planning, as a first stage towards a stochastic model.

Martin Kristiansen, Magnus Korpas
Norwegian University of Science and Technology
[email protected], [email protected]

MS133

Adaptive Sparse Quadrature for Stochastic Optimization

Monte Carlo (MC) techniques are commonly used to estimate statistics, e.g. expectations, employed in stochastic optimization models. While generally robust, the MC approach exhibits poor convergence properties. Here we consider an alternate approach, modeling uncertain parameters as random variables and employing Polynomial Chaos expansions (PCEs) to efficiently propagate uncertainties from model inputs to outputs. These PCEs are constructed adaptively via sparse quadrature. The performance of this approach is compared with results based on classical formulations.

Cosmin Safta
Sandia National Laboratories
[email protected]

Richard L. Chen
Sandia National Laboratories, Livermore, CA
[email protected]

Jianqiang Cheng
Sandia National Laboratories
[email protected]

Habib N. Najm
Sandia National Laboratories, Livermore, CA, USA
[email protected]

Ali Pinar
Sandia National Laboratories
[email protected]

Jean-Paul Watson
Sandia National Laboratories, Discrete Math and Complex Systems
[email protected]
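
[Editorial illustration of the quadrature-vs-MC contrast above: a deterministic rule matched to the input density converges far faster than sampling for smooth integrands. The 3-node rule and test integrand below are a toy stand-in for the adaptive sparse quadrature and PCE machinery of the talk.]

```python
# Compare Monte Carlo with a Gauss-Hermite rule for E[g(X)], X ~ N(0,1).
# The 3-node (probabilists') rule is exact for polynomials up to degree 5,
# while the MC error decays only like n^{-1/2}.
import math
import random

def g(x):
    return x * x  # exact expectation under N(0,1) is 1

# 3-point Gauss-Hermite rule for the standard normal weight
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]
quad = sum(w * g(x) for w, x in zip(weights, nodes))

random.seed(1)
mc = sum(g(random.gauss(0.0, 1.0)) for _ in range(1000)) / 1000

print("quadrature:", quad)   # 1.0 up to rounding
print("monte carlo:", mc)    # roughly 1, with sampling error
```

Sparse (Smolyak-type) quadrature extends this idea to many input dimensions by combining low-order univariate rules.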

MS134

Non-Degenerate Particle Filters in High-Dimensional Systems

The general solution of the data assimilation problem can be found using Bayes' theorem. In practice, however, it is virtually impossible to evolve probability density functions in time and update them with the likelihood of observations. Particle filters rely on samples to estimate the posterior probability density function. Traditional particle filters, however, can suffer from degeneracy when the system size (actually the number of independent observations) is not small. In this work we present a formulation that a) explores regions of high probability making use of future observations and b) does not become degenerate by selecting updating positions of the particles considering the statistical characteristics of the whole ensemble (and not just the individual particle being updated).

Javier Amezcua
University of Reading, Department of Meteorology
[email protected]

Peter Jan van Leeuwen, Mengbin Zhu
University of Reading
[email protected], [email protected]
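
[Editorial illustration of the degeneracy mentioned above, measured by the standard effective sample size ESS = 1 / sum(w_i^2). The Gaussian likelihood and all numbers are invented; this is not the authors' proposed scheme, only the failure mode it addresses.]

```python
# Weight degeneracy in a bootstrap particle filter: as the number of
# independent observations grows, a few particles take nearly all the
# weight and the effective sample size collapses.
import math
import random

random.seed(2)
particles = [random.gauss(0.0, 1.0) for _ in range(100)]

def ess(n_obs):
    # each particle scored against n_obs independent unit-noise
    # observations of the value 0 (toy Gaussian log-likelihood)
    logw = [-0.5 * n_obs * p * p for p in particles]
    m = max(logw)                       # subtract max for stability
    w = [math.exp(lw - m) for lw in logw]
    s = sum(w)
    w = [wi / s for wi in w]
    return 1.0 / sum(wi * wi for wi in w)

for n_obs in (1, 10, 100):
    print(n_obs, ess(n_obs))  # ESS shrinks as n_obs grows
```

The abstract's point is that clever proposal and update choices can keep the ESS from collapsing even when the number of independent observations is large.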

MS134

Hybrid Particle - Ensemble Kalman Filter for Lagrangian Data Assimilation

Lagrangian data such as those collected by gliders or floats in the ocean are highly nonlinear and their assimilation into high-dimensional models of the velocity field is a challenging task. We present a hybrid filter that applies the EnKF update to velocity variables and the PF update to drifter variables. We present some results from this hybrid filter applied to a high-dimensional quasi-geostrophic ocean model and compare those to results from a standard ensemble Kalman filter and ensemble runs without assimilation.

Amit Apte
TIFR Centre for Applicable Mathematics, Bangalore, India
[email protected]

Elaine Spiller
Marquette University
[email protected]

Laura Slivinski
Woods Hole Oceanographic Institution
[email protected]

MS134

Accuracy and Stability of Nonlinear Filters

Abstract not available.

David Kelly
University of Warwick
[email protected]

MS134

Sequential Implicit Sampling Methods for Bayesian Inverse Problems

The solution to inverse problems, under the Bayesian framework, is given by a posterior probability density. For large scale problems, sampling the posterior can be an extremely challenging task. Markov Chain Monte Carlo (MCMC) provides a general way for sampling but it can be computationally expensive. Gaussian type methods, such as the Ensemble Kalman Filter (EnKF), make Gaussian assumptions even for possibly non-Gaussian posteriors, which may lead to inaccuracy. In this talk, the implicit sampling method and the newly proposed sequential implicit sampling methods are investigated for inverse problems involving time-dependent partial differential equations (PDEs). The sequential implicit sampling method combines the ideas of the EnKF and implicit sampling and it is particularly suitable for time-dependent problems. Moreover, the new method is capable of reducing the computational cost of the optimization, which is a necessary and most expensive step in the implicit sampling method. The sequential implicit sampling method has been tested on a seismic wave inversion. The numerical experiments show its efficiency by comparison with MCMC and some Gaussian approximation methods.

Xuemin Tu
University of Kansas
[email protected]

MS135

Spectral Analysis of Stochastic Networks

Stochastic networks with pairwise transition rates of the form Lij = exp(−Uij/T), where T is a small parameter (the absolute temperature in the physical context), arise in modeling of rare events in complex physical systems. I will introduce methods for finding the spectral decomposition of the generator matrices of such networks in the limit T → 0, finite temperature continuation techniques, and an application to the problem of the Lennard-Jones-75 cluster rearrangement.

Maria K. Cameron
University of Maryland
[email protected]
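
[Editorial illustration of the rate form above in the simplest possible case: for a two-state network with rates L12 = exp(−U12/T) and L21 = exp(−U21/T), the generator has eigenvalues 0 and −(L12 + L21), so as T → 0 the nonzero eigenvalue is dominated by the smaller barrier. The barrier values are invented; the full spectral machinery of the talk concerns much larger networks.]

```python
# Two-state generator L = [[-l12, l12], [l21, -l21]]: eigenvalues are 0
# and -(l12 + l21). With l_ij = exp(-U_ij / T), the relaxation rate is
# exp(-U_min / T) to leading order as T -> 0.
import math

def eigenvalues_2state(u12, u21, T):
    l12 = math.exp(-u12 / T)
    l21 = math.exp(-u21 / T)
    return 0.0, -(l12 + l21)

u12, u21 = 3.0, 1.0   # asymmetric (made-up) barriers
for T in (1.0, 0.5, 0.25):
    _, lam = eigenvalues_2state(u12, u21, T)
    # the smaller barrier u21 dominates the nonzero eigenvalue at low T
    print(T, lam, -math.exp(-u21 / T))
```

Already here the eigenvalues separate onto exponentially different scales as T shrinks, which is what makes direct numerical diagonalization hard and motivates the T → 0 asymptotics.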

MS135

Vortices in the Stochastic Parabolic Ginzburg-Landau Equation

We consider the variant of a stochastic parabolic Ginzburg-Landau equation that allows for the formation of point defects of the solution. We show that the family of the Jacobians associated to the solution is tight on a suitable space of measures. Its limit points are concentrated on finite sums of delta measures with integer weights. This provides the definition of the stochastic Ginzburg-Landau vortices. We shall also discuss the possibility to study, within our framework, the noise-induced nucleation of vortices.

Olga Chugreeva
Aachen University, Germany
n/a

MS135

Competition Between Energy and Entropy in the Stochastic Allen-Cahn Equation

We will comment on the competition between energy and entropy in stochastically perturbed systems. We focus on the example of the stochastic Allen-Cahn equation, for which we have analyzed the invariant measure in joint work with Felix Otto and Hendrik Weber. While for small noise strength a transition between the favored states is energetically penalized, a large system size means that there are many places at which such a rare event may take place. Our first result quantifies this competition.

Felix Otto
Max Planck Institute for Mathematics in the Sciences
[email protected]

Hendrik Weber
Mathematics Institute
University of Warwick
[email protected]

Maria G. Westdickenberg
RWTH Aachen University
[email protected]

MS135

Packing and Unpacking

Abstract not available.

Matthieu Wyart
Institute of Theoretical Physics
Ecole Polytechnique Federale de Lausanne, Switzerland
[email protected]

MS136

VPS: Voronoi Piecewise Surrogates for High-Dimensional Data

We introduce VPS: a new method to construct credible global high-dimensional surrogates, using an implicit Voronoi tessellation around black-box function evaluations. The neighborhood network between cells (approximate Delaunay graph) enables each cell to build its own local piece of the global surrogate taking advantage of intrusively generated information, e.g., derivatives. Due to its piecewise nature, VPS accurately handles smooth functions with high curvature as well as functions with discontinuities, and can adopt a parallel implementation.

Mohamed S. Ebeida
Sandia National Laboratories
[email protected]

Ahmad A. Rushdi
University of Texas at Austin
[email protected]

Laura Swiler
Sandia National Laboratories
Albuquerque, New Mexico USA
[email protected]

Scott A. Mitchell
Sandia National Laboratories
[email protected]

Eric Phipps
Sandia National Laboratories
Optimization and Uncertainty Quantification Department
[email protected]
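
[Editorial illustration of the piecewise-surrogate idea above: each evaluated point implicitly owns the Voronoi cell of query points nearest to it, and predicts with its own local model. The seed data, the target function f(x, y) = x*y, and the simple value-plus-gradient local model are all invented; this is a simplified stand-in for the VPS construction, not the authors' algorithm.]

```python
# Piecewise surrogate on an implicit Voronoi tessellation: locate the
# nearest seed (its Voronoi cell) and evaluate that cell's local
# first-order model f(s) + grad(s) . (q - s).
import math

# seed points, values, and gradients of f(x, y) = x*y (made-up samples)
seeds = [((0.0, 0.0), 0.0, (0.0, 0.0)),
         ((1.0, 0.0), 0.0, (0.0, 1.0)),
         ((0.0, 1.0), 0.0, (1.0, 0.0)),
         ((1.0, 1.0), 1.0, (1.0, 1.0))]

def predict(q):
    # the Voronoi cell containing q is owned by the nearest seed
    s, f, g = min(seeds, key=lambda t: math.dist(q, t[0]))
    # local linear model using the intrusively generated gradient
    return f + sum(gi * (qi - si) for gi, qi, si in zip(g, q, s))

print(predict((0.9, 0.9)))  # cell of (1,1): 1 + (-0.1) + (-0.1) = 0.8
print(predict((0.1, 0.2)))  # cell of (0,0): locally flat, gives 0.0
```

Because each cell's model is independent, discontinuities of the underlying function stay confined to cell boundaries, which is the property the abstract highlights.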

MS136

A Hybrid Approach to Tackle Resiliency in Extreme Scale Computations Using a Domain Decomposition Solver for Uncertain Elliptic PDEs

One challenging aspect of extreme scale computing concerns combining UQ methods with resilient PDE solvers. We present a resilient polynomial chaos solver for uncertain elliptic PDEs. The solver involves domain decomposition for tackling extreme scale problems, robust regression techniques to provide resilience, and spectral methods for the UQ. Our hybrid UQ approach mixes a resilient non-intrusive approximation of stochastic Dirichlet-to-Dirichlet maps and a Galerkin projection to determine the solution at the subdomain interfaces.

Paul Mycek, Andres Contreras
Duke University
[email protected], [email protected]

Olivier P. Le [email protected]

Khachik Sargsyan, Francesco Rizzi, Karla Morris, Cosmin Safta
Sandia National Laboratories
[email protected], [email protected], [email protected], [email protected]

Bert J. Debusschere
Energy Transportation Center
Sandia National Laboratories, Livermore CA
[email protected]

Omar M. Knio
Duke University
[email protected]

MS136

A Soft and Hard Faults Resilient Solver for 2D Uncertain Elliptic PDEs via Fault-tolerant MPI Server-client-based Implementation

We discuss an approach to solve 2D elliptic PDEs with uncertain coefficients that incorporates resiliency to soft and hard faults. Resiliency to soft faults is obtained at the algorithm level by recasting the PDE as a sampling problem followed by a resilient manipulation of the data to obtain a new solution. Resiliency to hard faults is achieved at the implementation level by leveraging a fault-tolerant MPI implementation (ULFM-MPI) within a task-based server-client model.

Francesco Rizzi, Karla Morris
Sandia National Laboratories
[email protected], [email protected]

Paul Mycek
Duke University
[email protected]

Khachik Sargsyan
Sandia National Laboratories
[email protected]

Olivier P. Le [email protected]

Omar M. Knio, Andres Contreras
Duke University
[email protected], [email protected]

Cosmin Safta
Sandia National Laboratories
[email protected]

Bert J. Debusschere
Energy Transportation Center
Sandia National Laboratories, Livermore CA
[email protected]

MS137

Dimension Reduction for UQ in Satellite Retrieval Methods

Many satellite instruments observe reflected or transmitted solar or even stellar radiation. Retrieving information of vertical gas density profiles of atmospheric constituents, such as ozone, is an ill-posed inverse problem and a priori information must be incorporated to solve it. Using sampling-based estimation and uncertainty quantification with dimension reduction, we can solve the problem directly in a suitable low dimensional subspace of the full solution. In this presentation, we discuss dimension and model reduction for selected satellite instruments and retrieval algorithms. We utilize likelihood-informed dimension reduction with adaptive MCMC to retrieve profile information matching the degrees of freedom in the observations.

Marko Laine
Finnish Meteorological Institute
[email protected]

MS137

Efficient Scalable Variational Bayesian Approximation Methods for Inverse Problems

When using hierarchical prior models for Bayesian inference for inverse problems, the expression of the joint posterior law is complex and analytical computations are often impossible. MCMC methods are then common tools, but unfortunately they are not scalable. Variational Bayesian Approximation methods are alternatives which can efficiently be used. In this work we show a few implementations of these methods for Image Reconstruction in Computed Tomography.

Ali [email protected]

MS137

Optimization-Based Samplers using the Framework of Transport Maps

Markov chain Monte Carlo (MCMC) relies on efficient proposals to sample from a target distribution of interest. Recent optimization-based MCMC algorithms for Bayesian inference, for example randomize-then-optimize (RTO), repeatedly solve optimization problems to obtain proposal samples. We analyze RTO using a new interpretation that describes each optimization as a projection and as the action of an approximate transport map. From this analysis, several new variants of RTO — adaptive RTO, mixtures of RTO proposals, and transformed random walks — follow naturally.

Zheng Wang
Massachusetts Institute of Technology, USA
zheng [email protected]

Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]

MS137

Quantifying Model Error in Bayesian Parameter Estimation

In complex Bayesian hierarchical models, the posterior distribution of interest is often replaced by a computationally efficient approximation to make Markov chain Monte Carlo algorithms feasible. In this work, we propose statistical methodology that utilizes high performance computing and allows one to quantify the model error, defined as the discrepancy between the target and approximate posterior distributions. This provides a structure to analyze model approximations with regard to the impact on inference and computational efficiency.

Staci White
Wake Forest University
[email protected]

Radu Herbei
Ohio State University
[email protected]

MS138

Probabilistic Formulations of Elementary Algorithms in Linear Algebra and Optimization

Broad practical success of error estimation in numerical methods hinges on computational cost. Ideally, adding a well-calibrated error estimate to a classic numerical point estimate should have no, or only very limited, computational overhead. This talk will start with the observation that several basic numerical methods can directly be interpreted as maximum a-posteriori estimates under certain families of Gaussian priors (the focus will be on algorithms for linear algebra and optimization). By itself, however, this connection does not yield a unique calibration for the posterior's width. I will point out technical challenges and opportunities for such calibration at runtime, which give varying degrees of error calibration in return for a correspondingly varying amount of computational effort.

Philipp Hennig, Maren Mahsereci, Simon Bartels
Max Planck Institute for Intelligent Systems
[email protected], [email protected], [email protected]

MS138

Bayesian Inference for Lambda-coalescents: Failures and Fixes

Lambda-coalescents arise as models of the genetic ancestry of populations with reproductive skew. In this talk I show that naively sampling DNA from such a population and applying Bayesian inference with a Lambda-coalescent prior leads to inconsistent inference with high sensitivity to the observation and to hyperparameters. These issues can be resolved by temporally structured sampling and carefully chosen inference algorithms, which are able to yield provably consistent and robust inference.

Jere Koskela
University of Warwick
[email protected]

MS138

Opportunity for Uncertainty in Cardiac Modelling

Biophysical computational models of individual patients' hearts represent a novel framework for understanding the mechanisms behind disease, selecting patients and optimising treatments. Personalising computational models from noisy and sparse clinical data increasingly represents the limiting step in simulation workflows, while propagation of that uncertainty through to model predictions is routinely absent. This presentation describes the problem and outlines our progress in engaging uncertainty quantification in the development and simulation of cardiac models.

Steven Niederer
King's College London
[email protected]

MS138

Brittleness and Robustness of Bayesian Inference

It is natural to investigate the accuracy of Bayesian procedures from several perspectives: e.g., frequentist questions of well-specification and consistency, or numerical analysis questions of stability and well-posedness with respect to perturbations of the prior, the likelihood, or the data. This talk will outline positive and negative results (both classical ones from the literature and new ones due to the authors and others) on the accuracy of Bayesian inference in these senses.

Tim Sullivan
Free University of Berlin / Zuse Institute Berlin
[email protected]

Houman Owhadi
Applied Mathematics, Caltech
[email protected]

Clint Scovel
Los Alamos National Laboratory
[email protected]

MS139

A Gaussian Process Trust Region Method for Stochastic Constrained Derivative-Free Optimization

In this talk we present the algorithm (S)NOWPAC for stochastic constrained derivative-free optimization. The method uses a generalized trust region approach that accounts for noisy evaluations of the objective and constraints. To reduce the impact of noise, we fit Gaussian process models to past evaluations. Our approach incorporates a wide variety of probabilistic risk or deviation measures in both the objective and the constraints. We demonstrate the efficiency of the approach via several numerical comparisons.

Florian Augustin
Massachusetts Institute of Technology
[email protected]

MS139

Should You Derive or Let the Data Drive? A Framework for Hybrid Data-Driven First Principle Model Correction

Mathematical models are employed ubiquitously for description, prediction and decision making. In addressing end-goal objectives, great care needs to be devoted to attainment of appropriate balance of inexactness throughout the various stages of the underlying process. In this talk, we shall describe the interplay between model reduction and model mis-specification mitigation and provide a generic infrastructure for model re-specification based upon a hybrid first principles and data-driven approach.

Lior Horesh
IBM Research
[email protected]

Remi [email protected]

Haim Avron
Business Analytics & Mathematical Sciences
IBM T.J. Watson Research Center
[email protected]

Karen E. Willcox
Massachusetts Institute of Technology
[email protected]

MS139

Distribution Shaping and Scenario Bundling for Stochastic Programs with Endogenous Uncertainty

We present a new approach to handle endogenous uncertainty in stochastic programs, which allows an efficient polyhedral characterization of decision-dependent probability measures and thus a reformulation of the original nonlinear stochastic MINLP as a stochastic MIP. The effectiveness of the approach will be demonstrated for stochastic network design problems, where the links are subject to random failures, as well as for stochastic project planning.

Marco Laumanns
IBM Research - Zurich
[email protected]

Steven [email protected]@cs.ucc.ie

Ban Kawas, Bruno Flach
IBM Research
[email protected], [email protected]

MS139

Taming Unidentifiability of Ill-Posed Inverse Problems Through Randomized-Regularized Decision Analyses

Properties controlling groundwater flow and contaminant transport (e.g. permeability, reduction capacity, etc.) are often highly heterogeneous (spatially and temporally). Characterization of these heterogeneities using inverse methods is challenging, often producing ill-posed problems with multiple plausible solutions. We have developed an approach to decision analysis that is capable of accounting for the unidentifiability of the inverse model. We present a series of synthetic decision analyses that are consistent with actual site problems.

Velimir V. Vesselinov
Los Alamos National Laboratory
[email protected]

MS140

Numerical Approaches for Sequential Bayesian Optimal Experimental Design

How does one select a sequence of experiments that maximizes the collective value of the resulting data? We formulate this optimal sequential experimental design problem by maximizing expected information gain under continuous parameter, design, and observation spaces using dynamic programming. We find an approximate optimal policy via approximate value iteration, and utilize transport maps to represent non-Gaussian posteriors and to enable fast approximate Bayesian inference. Results are demonstrated on a dynamic sensor steering problem for source inversion.

Xun Huan
Sandia National Laboratories
[email protected]

Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]

MS140

Online Optimum Experimental Design for Control Systems under Uncertainties

We have developed efficient optimal control methods for the determination of one, or several complementary, optimal experiments, which maximize the information gain about parameters subject to constraints such as experimental costs and feasibility or the range of model validity. Special emphasis is placed on the evaluation of robust optimal experiments by online design of experiments, in which the information about parameters obtained from online parameter estimation is used for online re-design of the running experiment.

Gregor Kriwet
Fachbereich Mathematik und Informatik
Philipps-Universitaet Marburg
[email protected]

Ekaterina Kostina
Heidelberg University
[email protected]

Hans Georg Bock
IWR, University of Heidelberg
[email protected]

MS140

Bayesian Optimal Design for Ordinary Differential Equation Models

Methodology is described for minimising the expected loss function characterising the Bayesian optimal design problem. This methodology relies on using a statistical emulator to approximate the expected loss. An extension, to consider optimal design for physical models derived from the (intractable) solution to a system of ordinary differential equations (ODEs), will also be described and illustrated on examples from the biological sciences.

Antony Overstall
University of Glasgow
[email protected]

David Woods, Benjamin Parker
University of Southampton
[email protected], [email protected]

MS140

Gaussian Process based Closed-loop Bayesian Experimental Design of Computer Experiments

Using Gaussian processes as adaptive emulators for expensive computer models, a sequential optimization of the emulator (i.e. with respect to utility functions like minimized prediction uncertainty) through a Bayesian experimental design approach for simulation parameter selection is investigated. The performance of several widely used utility functions and the influence of different look-ahead strategies in the Bayesian experimental design are discussed, using a plasma-wall-interaction code as a real-world example.

Udo von Toussaint
Max Planck Institute for Plasma Physics
[email protected]

Roland Preuss
Max Planck Institute for Plasmaphysik
[email protected]

MS141

Scalable Methods for Optimal Experimental Design for Systems Governed by PDEs with Uncertain Parameter Fields

We formulate an A-optimal experimental design (OED) criterion for the optimal placement of sensors in infinite-dimensional nonlinear Bayesian inverse problems. We discuss simplifications required to make the OED problem computationally feasible, and present a formulation as a bi-level optimization problem whose constraints are the optimality condition PDEs defining the MAP point and the PDEs describing the action of the posterior covariance. We provide numerical results for inversion of the permeability in a groundwater flow problem.

Alen Alexanderian
NC State [email protected]

Noemi Petra
University of California, [email protected]

Georg Stadler
Courant Institute for Mathematical Sciences
New York [email protected]

Omar Ghattas
The University of Texas at [email protected]

MS141

Efficient Particle Filtering for Stochastic Korteweg-De Vries Equations

We propose an efficient algorithm to perform nonlinear data assimilation for Korteweg-de Vries solitons. In particular we develop a reduced particle filtering method to reduce the dimension of the problem. The method decomposes a solitonic pulse into a clean soliton and small radiative noise, and instead of inferring the complete pulse profile, we only infer the two soliton parameters with a particle filter. Numerical examples are provided to demonstrate that the proposed method can provide rather accurate results, while being much more computationally affordable than a standard particle filter.
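The dimension-reduction idea above — infer only the two soliton parameters rather than the full pulse profile — can be illustrated with a generic bootstrap particle-filter weighting/resampling step. This is a hedged sketch with a synthetic sech²-type profile and made-up noise levels, not the authors' reduced filter:

```python
import numpy as np

rng = np.random.default_rng(0)

def soliton(x, a, c):
    """Illustrative sech^2 soliton profile with amplitude a and centre c."""
    return 0.5 * a / np.cosh(0.5 * np.sqrt(a) * (x - c)) ** 2

# Synthetic observation: clean soliton plus small radiative noise.
x = np.linspace(-20.0, 20.0, 200)
a_true, c_true = 2.0, 1.0
sigma = 0.05
obs = soliton(x, a_true, c_true) + sigma * rng.standard_normal(x.size)

# Particles live in the 2-D parameter space (a, c), not in profile space.
n = 2000
particles = np.column_stack([rng.uniform(0.5, 4.0, n),    # amplitude a
                             rng.uniform(-5.0, 5.0, n)])  # centre c

# Weight each particle by the Gaussian observation likelihood.
log_w = np.array([-0.5 * np.sum((obs - soliton(x, a, c)) ** 2) / sigma ** 2
                  for a, c in particles])
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Resample and take the posterior mean of the two soliton parameters.
idx = rng.choice(n, n, p=w)
a_est, c_est = particles[idx].mean(axis=0)
print(a_est, c_est)  # should lie near the truth (2.0, 1.0)
```

Because the state is 2-dimensional, a few thousand particles suffice, which is the computational point of the reduction.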

Feng Bao
Oak Ridge National [email protected]

Yanzhao Cao
Department of Mathematics & Statistics
Auburn [email protected]

Xiaoying Han
Auburn [email protected]

Jinglai Li
Shanghai JiaoTong University, [email protected]

MS141

Data-Driven Spectral Decomposition and Forecasting of Ergodic Dynamical Systems

We discuss a framework for dimension reduction, mode decomposition, and nonparametric forecasting of data generated by ergodic dynamical systems. This framework is based on a representation of the Koopman operators in a smooth orthonormal basis acquired from time-ordered data through the diffusion maps algorithm. Using this representation, we compute Koopman eigenfunctions through a regularized advection-diffusion operator, and employ these eigenfunctions for dimension reduction and time integration of forecast probability densities.

Dimitrios Giannakis
Courant Institute of Mathematical Sciences
New York [email protected]

MS141

Fast Data Assimilation for High Dimensional Nonlinear Dynamical Systems

A general and fast computational framework is developed for data assimilation in high dimensional nonlinear dynamical systems. This is achieved by developing a robust probabilistic model to approximate the Bayesian inference problem and obtain samples from high dimensional posterior distributions. In addition, the proposed approach addresses one of the main challenges in uncertainty quantification, namely dealing with modeling errors and their effect on inference and prediction.

Gabriel Terejanu
University of South [email protected]

MS142

Adaptive Mesh Relocation for Randomly Fluctuating Material Fields

Stochastic models with random material fields require adaptive meshing to properly resolve the features of each (deterministic) instance at a minimum computational cost. This is a challenging issue because, to be used as a practical tool, the cost of the error indicators and the remeshing algorithms must be very small. This discards using any a posteriori error assessment technique and precludes remeshing from scratch at each instance. We present an r-adaptivity approach for boundary value problems with randomly fluctuating material parameters solved through the Monte Carlo or stochastic collocation methods. This approach tailors a specific mesh for each sample of the problem. It only requires the computation of the solution of a single deterministic problem with the same geometry and the average parameter, whose numerical cost becomes marginal for a large number of samples. Starting from the mesh used to solve that deterministic problem, the nodes are moved depending on the particular sample of the mechanical parameter field. The reduction in the error is small for each sample but sums up to reduce the overall bias on the statistics estimated through the Monte Carlo scheme. Several numerical examples in 2D are presented.

Pedro [email protected]

Regis Cottereau
CNRS, CentraleSupelec, Universite [email protected]

MS142

A Multilevel Control Variate Method Based on Low-Rank Approximation

Multilevel Monte Carlo (MLMC) methods are now widely used for uncertainty quantification of PDE systems with high-dimensional random data. We present a variation of MLMC, dubbed Multilevel Control Variate (MLCV), where we rely on a low-rank approximation of fine solutions from the samples of coarse solutions to construct control variates for approximating the expectations involved in MLMC. We present cost estimates as well as numerical examples demonstrating the advantage of this new MLCV approach.
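The low-rank construction in the abstract is problem-specific, but the control-variate mechanism it builds on can be sketched in a single-level toy setting, where a cheap surrogate g with known mean stands in for the coarse-solution approximation. All functions and constants below are illustrative, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: f(x) plays the role of the expensive "fine" output and
# g(x) the cheap correlated surrogate built from coarse information.
f = lambda x: np.exp(x)              # fine model output (stand-in)
g = lambda x: 1.0 + x + 0.5 * x**2   # cheap Taylor-like surrogate

x = rng.standard_normal(100_000)
fx, gx = f(x), g(x)

# Control-variate coefficient that minimises the estimator variance.
beta = np.cov(fx, gx)[0, 1] / np.var(gx)

# Known surrogate mean: E[1 + X + X^2/2] = 1.5 for X ~ N(0, 1).
mu_g = 1.5
cv_estimate = np.mean(fx - beta * (gx - mu_g))

# Standard errors with and without the control variate.
plain = np.std(fx) / np.sqrt(x.size)
cv_sd = np.std(fx - beta * gx) / np.sqrt(x.size)
print(cv_estimate, plain, cv_sd)  # estimate near E[exp(X)] = exp(0.5)
```

The variance reduction factor is 1 − ρ², where ρ is the correlation between f and g; the MLCV idea is to make ρ large cheaply via the low-rank coarse-to-fine map.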

Hillary Fairbanks
Department of Applied Mathematics
University of Colorado, [email protected]

Alireza Doostan
Department of Aerospace Engineering Sciences
University of Colorado, [email protected]

Christian Ketelsen
University of Colorado at Boulder
Dept. of Applied [email protected]

MS142

Adaptive SDE Based Sampling for Random PDE

Abstract not available.

Johannes Neumann
WIAS [email protected]

MS142

Computational Error Estimates for Monte Carlo Finite Element Approximation with Rough Log-Normal Diffusion Coefficients

Abstract not available.

Mattias Sandberg
Department of Mathematics
Royal Institute of Technology, [email protected]

MS143

Detecting Periodicities with Gaussian Processes

Finding an appropriate covariance function is a key issue in Gaussian process regression modelling. We discuss in this talk the construction of periodic and aperiodic covariance functions based on the RKHS framework. Furthermore, we derive a periodicity ratio using analysis of variance methods. Finally, the interest of the method is illustrated on a biology application where we identify, amongst the entire genome, the genes with a periodic activity.

Nicolas Durrande
Mines [email protected]

MS143

Incorporating and Learning Functional Properties Through Covariance Kernels

In Gaussian Process Modelling, a number of functional properties of the objective function can be encoded through covariance kernels. While Hilbert space expansions lead to additive decompositions of covariance kernels, we consider the issue of identifying and extracting the most influential terms of such decompositions using GP models whose kernels write as sums of kernels. In particular, we focus on maximum likelihood, for which a nice property involving convexity arguments enables efficient estimation of kernel weights.

David Ginsbourger
Department of Mathematics and Statistics, University [email protected]

Olivier Roustant
Mines Saint-Etienne
[email protected]

MS143

Jointly Informative Feature Selection

We address the problem of selecting jointly informative features in the context of classification tasks. We propose several novel criteria which combine a Gaussian modelling of the features with derived bounds on their mutual information with the class. Algorithmic implementations are presented which significantly reduce computational complexity, showing that feature selection using joint mutual information is tractable. In extensive empirical evaluations these methods outperformed the state of the art, both in terms of speed and accuracy.

Leonidas Lefakis
Zalando SE, [email protected]

Francois Fleuret
Idiap Research [email protected]

MS143

Gaussian Process Models for Predicting on Circular Domains

We consider the framework of Gaussian process regression (or Kriging) of variables defined on circular domains. In some situations, such domains correspond to technological or physical processes involving rotations or diffusions. We introduce so-called polar Gaussian processes defined on the cylindrical space of polar coordinates, and new designs of experiments on the disk. The usefulness of the approach is demonstrated on applications in microelectronics and environments.

Olivier Roustant
Mines [email protected]

Esperan [email protected]

MS144

Uncertainty Propagation in Dynamical Systems with a Novel Intrusive Method Based on Polynomial Algebra

The paper presents a novel intrusive method for propagation of uncertainties in dynamical systems. The method is based on an expansion of the uncertain quantities in a polynomial series and propagation through the dynamics using a multivariate polynomial algebra. Tchebycheff and Newton bases have been used for comparison. The first one offers fast uniform convergence, the second one a lower computational complexity. The paper details the proposed generalized algebra and illustrates its applicability.

Annalisa Riccardi, Carlos Ortega Absil, Chiara Tardioli, Massimiliano Vasile
University of [email protected], [email protected], [email protected], [email protected]

MS144

Multi-Fidelity Models for Flow Past an Airfoil with Operative and Geometric Uncertainties

In this work the flow past an airfoil is analysed in the presence of uncertain operative conditions and geometric uncertainties with Beta-distributed perturbations. The statistics of aerodynamic performance are obtained by numerical integration. In order to reduce the computational cost of the statistical moments we use a recursive cokriging model, which exploits multi-fidelity responses, as a surrogate of aerodynamic performance on which we can perform the numerical integration.

Valentino Pediroda
Universita di [email protected]

Lucia Parussini
Universita degli Studi di [email protected]

Carlo Poloni
Universita di [email protected]

Alberto Clarich, Rosario Russo
Esteco [email protected], [email protected]

MS144

Sensitivity and Uncertainty Analysis in CFD

Abstract not available.

Dominique Pelletier
Ecole Polytechnique [email protected]

MS145

Data-Driven Bayesian Uncertainty Quantification for Large-Scale Problems

Abstract not available.

Lina Kulakova
CSE-lab, Institute for Computational Science, D-MAVT, ETH Zurich, [email protected]

MS145

Image-Based Modeling of Tumor Growth in Patients with Glioma

Abstract not available.

Bjoern H. Menze
Technical University of Munich
[email protected]

MS145

Title Not Available

Abstract not available.

Christof Schuette
University of [email protected]

MS145

Predictive Coarse-Graining

We present a Bayesian formulation of coarse-graining of atomistic systems using generative probabilistic models, which allows us to address the question of quantifying epistemic uncertainty. Apart from the usual coarse-grained potential that approximates the exact free energy, the formulation is augmented with a probabilistic mapping from coarse to fine, which enables the prediction of properties at the fine scale. The formulation allows for significant flexibility, and high-dimensional parameters can be learned by sparsity-enforcing priors.

Markus Schoberl
Technical University [email protected]

Nicholas Zabaras
The University of Warwick
Warwick Centre for Predictive Modelling (WCPM)
[email protected]

Phaedon S. Koutsourelakis
Technische Universitat Munchen, [email protected]

MS146

Nonlinear Filtering with MCMC Optimization Method

Abstract not available.

Evangelos Evangelou
University of [email protected]

MS146

Small Data Driven Algorithms for Solving Large-Dimensional BSDEs

We aim at solving a dynamic programming equation (inspired by BSDEs) associated to a Markov chain X, using a regression-based Monte Carlo algorithm. In order to compute the projection of the value function on a function basis, we generate a learning sample of paths of X which is a suitable transformation of a single data set (say M historical data of X). Finally we design a data-driven resampling scheme that allows us to solve the dynamic programming equation (possibly in large dimension) using only a relatively small set of M historical paths. To assess the accuracy of the algorithm, we establish non-asymptotic error estimates. Joint work with Gang Liu and Jorge Zubelli.

Emmanuel Gobet
Ecole Polytechnique

[email protected]

Gang Liu
Ecole [email protected]

Jorge P. [email protected]

MS146

Split-step Milstein Methods for Stiff Stochastic Differential Systems with Multiple Jumps

We consider stiff stochastic differential systems with Levy noise including multiple jump-diffusion processes. Such systems arise in biochemical networks that involve reactions at different time scales. They are inherently stiff and change their stiffness with uncertainty. To resolve this issue, we utilize split-step methods and investigate their convergence and stability properties. Numerical examples demonstrate the effectiveness of these methods and their ability to handle stiffness due to multiple jumps with varying intensity.
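As a minimal illustration of why split-step (drift-implicit) schemes help with stiffness — this is a generic sketch on a linear test SDE without jumps, not the jump-adapted Milstein methods of the talk — consider dX = λX dt + σX dW with λ dt chosen well outside the explicit Euler mean-square stability region:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stiff linear test SDE dX = lam*X dt + sig*X dW, X(0) = 1.
# Here lam*dt = -2.5, so explicit Euler is unstable.
lam, sig, T, N = -50.0, 0.5, 1.0, 20
dt = T / N

def paths(n_paths, implicit=True):
    x = np.ones(n_paths)
    for _ in range(N):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        if implicit:
            # Split step 1: solve y = x + lam*y*dt (implicit drift),
            # i.e. y = x / (1 - lam*dt).
            y = x / (1.0 - lam * dt)
            # Split step 2: explicit diffusion increment.
            x = y + sig * y * dw
        else:
            # Plain explicit Euler, for comparison.
            x = x + lam * x * dt + sig * x * dw
    return x

stable = paths(2000, implicit=True)      # decays toward 0
unstable = paths(2000, implicit=False)   # oscillates and blows up
print(np.abs(stable).max(), np.abs(unstable).mean())
```

Treating the stiff drift implicitly and the noise explicitly keeps the scheme mean-square stable at step sizes where the fully explicit scheme diverges; jump terms would be handled in a further splitting stage.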

Abdul Khaliq, Viktor Reshniak
Middle Tennessee State [email protected], [email protected]

Guannan Zhang
Oak Ridge National [email protected]

David A. Voss
Western Illinois [email protected]

MS146

Highly Accurate Methods for Coupled FBSDEs with Applications

In this talk, we will introduce accurate numerical methods, including two-step methods, multistep methods, and stochastic deferred correction methods, for solving coupled nonlinear forward backward stochastic differential equations (FBSDEs). Their high accuracy is numerically shown by their applications in solving FBSDEs, second-order FBSDEs, fully nonlinear second-order parabolic partial differential equations, and stochastic optimal control problems.

Weidong Zhao
Shandong University, Jinan, China
School of Mathematics and System [email protected]

Tao Zhou
Institute of Computational Mathematics, AMSS
Chinese Academy of [email protected]

MS147

Parameter Estimation and Uncertainty Quantification in Turbulent Combustion Computations

Bayesian inference with a model-error representation is employed to calibrate a simple chemical kinetics model for oxidation of hydrocarbon fuels in air. The model uncertainties are subsequently propagated in simulations of turbulent combustion to quantify their impact on autoignition predictions at realistic Diesel engine operating conditions. The forward propagation is performed via a non-intrusive spectral projection by computing the polynomial chaos expansion of the simulation output of interest.

Layal Hakim, Guilhem Lacaze
Sandia National [email protected], [email protected]

Mohammad Khalil
Combustion Research Facility
Sandia National [email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA, [email protected]

Joseph C. Oefelein
Sandia National [email protected]

MS147

Uncertainty Inclusion and Characterization in Nonlocal Theories for Materials Modeling

Nonlocal models have been proposed in recent years to describe processes that classical (local) models based on partial differential equations find challenging. Examples of this include peridynamics and nonlocal diffusion models, based on integro-differential equations, which allow the representation of evolving discontinuities in materials and of anomalous diffusion processes, respectively. These models, however, rarely account for uncertainties. Uncertainty is ubiquitous in nature. In materials modeling, uncertainty can be present in constitutive relations, material microstructure, and source terms, as well as in boundary and/or initial conditions. In this work, we propose a generalization of nonlocal models to include uncertainty in their constitutive relations and governing equations. We explore methods to quantify the effects of different types of uncertainty on the material response, for various problems of interest, and demonstrate these methods through numerical examples.

Pablo Seleson
Oak Ridge National [email protected]

Miroslav Stoyanov
Oak Ridge National [email protected]

Clayton G. Webster, Guannan Zhang
Oak Ridge National [email protected], [email protected]

MS147

Title Not Available

Abstract not available.

Dongbin Xiu

University of [email protected]

MS147

Uncertainty Propagation Across Domains with Vastly Different Correlation Lengths

We develop different transmission boundary conditions that preserve the global statistical properties of the solution across domains characterized by vastly different correlation lengths. Our key idea relies on combining local reduced-order representations of random fields with multi-level domain decomposition by enforcing the continuity of the conditional mean and variance of the solution across adjacent subdomains. The effectiveness and convergence properties of this algorithm are illustrated in numerical examples in both 1D and 2D.

Dongkun Zhang
Brown University
dongkun [email protected]

Heyrim Cho
University of [email protected]

George E. Karniadakis
Brown University
Division of Applied Mathematics
george [email protected]

MS148

An Uncertainty Reduction Technique for Short-term Transmission Expansion Planning Based on Line Benefits

In this article we provide a novel uncertainty reduction technique to deal with uncertainties associated with Renewable Energy Sources (RES) in Transmission Expansion Planning (TEP). For the sake of simplicity, we assume the inherent uncertainty of wind and solar output is already captured in the variability of their hourly load profile. Each hourly operational state (or snapshot) of the year is assumed to represent both a realization of the uncertain RES outputs and the given time period of the year. Therefore, we handle uncertainty reduction by means of snapshot selection. Instead of taking into account all the possible operational states and their associated optimal power flow, we want to select a reduced group of them that are representative of all the ones that should have an influence on investment decisions. This reduced group of operational states should be selected to lead to the same investment decisions as if we were considering all snapshots in the target year. The reduction achieved in the size of the TEP problem should allow the user to compute an accurate enough TEP solution in a much smaller amount of time, or, alternatively, compute the expansion of the network considering more accurately other aspects of the TEP problem. Original operational states are compared in the space of candidate line marginal benefits, which are relevant drivers for line investment decisions. Marginal benefits of reinforcements are computed from the nodal prices resulting from the dispatch. Principal Component Analysis (PCA) is applied to cope with the high dimension of the line marginal benefit space. Lastly, a clustering algorithm is used to group operational states with similar marginal benefits together. Our algorithm has been tested on a modified version of the standard IEEE 24 bus system.
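The PCA-plus-clustering pipeline described above can be sketched as follows. The data are a synthetic stand-in for the line marginal-benefit matrix (real values would come from nodal prices of the dispatch), and the component and cluster counts are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: 8760 hourly snapshots, each described by the
# marginal benefits of 40 candidate lines.
snapshots = rng.standard_normal((8760, 40)) @ rng.standard_normal((40, 40))

# PCA via SVD to cope with the dimension of the marginal-benefit space.
centered = snapshots - snapshots.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:5].T          # keep 5 principal components

# Plain Lloyd k-means: group snapshots with similar marginal benefits.
k = 20
centers = scores[rng.choice(len(scores), k, replace=False)]
for _ in range(25):
    labels = ((scores[:, None, :] - centers) ** 2).sum(-1).argmin(1)
    # Keep a center unchanged if its cluster happens to be empty.
    centers = np.array([scores[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])

# One representative snapshot per cluster is retained for the TEP problem.
representatives = [int(((scores - c) ** 2).sum(-1).argmin()) for c in centers]
```

The reduced TEP problem would then be solved over the 20 representative snapshots (possibly weighted by cluster sizes) instead of all 8760.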


Quentin Ploussard
Universidad Pontificia [email protected]

Andres Ramos
Universidad Pontificia Comillas, [email protected]

Luis Olmos
Universidad Pontificia [email protected]

MS148

Quasi-Monte Carlo Methods Applied to Stochastic Energy Optimization Models

We consider two-stage stochastic unit commitment models for electricity production and trading and study the use of Quasi-Monte Carlo (QMC) methods for scenario generation. We provide conditions on the optimization model and the underlying probability distribution implying that QMC error analysis applies approximately to two-stage mixed-integer integrands. Hence, convergence rates close to the optimal O(1/n) may be achieved. Numerical results are presented for two particular randomized QMC methods that show their superiority to classical Monte Carlo.
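As a generic illustration of randomized QMC (not the specific scenario-generation construction of this talk), a randomly shifted rank-1 lattice rule can be compared with plain Monte Carlo on a smooth product integrand. The generating vector below is an ad hoc choice, not a CBC-optimized one:

```python
import numpy as np

rng = np.random.default_rng(6)

d, n = 4, 2**10
z = np.array([1, 182, 449, 429])  # ad hoc rank-1 generating vector
# Smooth product integrand over [0,1]^d with exact integral 1.
f = lambda u: np.prod(1 + 0.5 * (u - 0.5), axis=1)

def lattice_estimate():
    """One randomly shifted rank-1 lattice estimate with n points."""
    shift = rng.random(d)
    pts = np.mod(np.outer(np.arange(n), z) / n + shift, 1.0)
    return f(pts).mean()

# Average 10 independent shifts (unbiased; also gives an error estimate).
qmc = np.mean([lattice_estimate() for _ in range(10)])
mc = f(rng.random((10 * n, d))).mean()
print(qmc, mc)  # both near the exact value 1.0; QMC is much closer
```

The random shift keeps the estimator unbiased, which is what lets the superiority over plain Monte Carlo be measured empirically, as in the talk's numerical results.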

Werner Roemisch
Humboldt-University Berlin
Institute of [email protected]

Hernan Leovey
Humboldt-University Berlin, Institute of [email protected]

MS148

Scalable Algorithms for Large-Scale Mixed-Integer Optimization Problems in AC Power Systems

We propose a method that solves constrained optimization problems over AC power distribution systems with a mixed set of variables, where discrete variables model the state of power transformers or of tap changers. Although these problems are mixed-integer non-linear programs, they can be solved over radial networks by a fully polynomial-time approximation scheme. The reason is that our method takes advantage of the radial structure of the network and outputs an approximate optimal solution in time that is polynomial both in the network size and in the approximation factor. Our solution technique is based on the Belief-Propagation (BP) algorithm that has been successfully applied in the past to solve discrete problems in statistical physics, machine learning and coding theory. The main challenge that we need to overcome to apply BP techniques lies in extending them to problems with continuous variables, and specifically to voltage and flow variables in the power flow equations. We also discuss possible extensions of the BP techniques, through construction of a hierarchy of BP-based LP relaxations, to related transmission-level problems over loopy networks.

Marc Vuffray
Los Alamos National [email protected]

Daniel Bienstock
Columbia University IEOR and APAM Departments
IEOR [email protected]

Misha Chertkov
Theoretical Division, and CNLS
Los Alamos National [email protected]

Krishnamurthy Dvijotham
University of [email protected]

Miles Lubin
Operations Research Center
Massachusetts Institute of [email protected]

Sidhant Misra
Los Alamos National [email protected]

MS148

Probabilistic Forecasting of Wind and Solar Power Using Epi-Splines

We introduce a non-parametric density estimation method based on recently introduced epi-spline bases, which we use to characterize forecast error distributions for real-world wind and solar power production in electric power systems. We discuss scenario generation methods based on these density estimates, and characterize both the mean and probabilistic forecast accuracy on real-world data sets from the Bonneville Power Administration and the California Independent System Operator.

Jean-Paul Watson
Sandia National Laboratories
Discrete Math and Complex [email protected]

David Woodruff
University of California, [email protected]

MS149

Model Order Reduction of Linear Systems Driven by Levy Noise

We discuss model order reduction for linear dynamical systems driven by Levy processes. In particular, we investigate balanced truncation and singular perturbation approximation for these systems. These methods are known to come with computable error bounds and to preserve stability in the deterministic setting. We will discuss how far these properties carry over to the stochastic case, and we will derive H2-type error bounds. Numerical experiments illustrate our findings.

Peter Benner, Martin Redmann
Max Planck Institute, Magdeburg, [email protected], [email protected]

MS149

ADM-CLE Approach for Detecting Slow Variables in Continuous Time Markov Chains and Dynamic Data

A method for detecting intrinsic slow variables in high-dimensional stochastic chemical reaction networks is developed and analyzed. It combines anisotropic diffusion maps (ADM) with approximations based on the chemical Langevin equation (CLE). The resulting approach, called ADM-CLE, has the potential of being more efficient than the ADM method for a large class of chemical reaction systems. The ADM-CLE approach can be used to estimate the stationary distribution of the detected slow variable, without any a-priori knowledge of it.

Mihai [email protected]

Radek Erban
University of Oxford
Mathematical [email protected]

MS149

Balanced Truncation of Stochastic Linear Control Systems

We consider two approaches to balanced truncation of stochastic linear systems, which follow from different generalizations of the reachability Gramian of deterministic systems. Both preserve mean-square asymptotic stability, but only the second leads to a stochastic H∞-type bound for the approximation error of the truncated system.

Tobias Damm
TU [email protected]

MS150

On a Connection Between Adaptive Multilevel Splitting and Stochastic Waves

Adaptive Multilevel Splitting (AMS for short) is a general Monte-Carlo method to simulate and estimate rare events. In the framework of molecular dynamics, this technique can for example be used to generate reactive trajectories, namely equilibrium trajectories leaving a metastable state and ending in another one. In this talk, we will present a connection between AMS and stochastic waves, i.e. the transformation of a Markov process by a random time change. In particular, this connection allows us to analyze AMS as a Fleming-Viot type particle system.

Arnaud Guyader
Sorbonne Universites
Laboratoire de Statistique Theorique et [email protected]

MS150

Pseudogenerators for Under-Resolved Molecular Dynamics

Many features of molecules which are of physical interest (e.g. molecular conformations, reaction rates) are described in terms of their dynamics in configuration space. We consider the projection of molecular dynamics (governed by a stochastic Langevin equation) in phase space onto configuration space. We review the Smoluchowski equation as the overdamped limit, and show that for small times a Smoluchowski equation, scaled non-linearly in time, governs the evolution of the configurational coordinate even for general damping.

Peter Koltai
Institute of Mathematics
Freie Universitat [email protected]

Andreas Bittracher
Center for Mathematics
TU [email protected]

Carsten Hartmann
FU [email protected]

Oliver Junge
Center for Mathematics
Technische Universitat Munchen, [email protected]

MS150

The Parallel Replica Algorithm: Mathematical Foundations and Recent Developments

I will present the parallel replica algorithm, which is an accelerated dynamics method proposed by A.F. Voter in 1998. The aim of this technique is to efficiently generate trajectories of a metastable stochastic process. Recently, we proposed a mathematical framework to understand the efficiency and the error associated with this technique. Generalizations of the original method in order to widen its applicability have also been proposed. References: D. Aristoff, T. Lelievre and G. Simpson, The parallel replica method for simulating long trajectories of Markov chains, AMRX, 2, 332-352, (2014); A. Binder, T. Lelievre and G. Simpson, A Generalized Parallel Replica Dynamics, Journal of Computational Physics, 284, 595-616, (2015); C. Le Bris, T. Lelievre, M. Luskin and D. Perez, A mathematical formalization of the parallel replica dynamics, Monte Carlo Methods and Applications, 18(2), 119-146, (2012).

Tony Lelievre
Ecole des Ponts [email protected]

MS150

Generalizing Adaptive Multilevel Splitting

(Adaptive) Multilevel Splitting methods are standard rare event simulation techniques, considered here for events e.g. of the form {τA ≤ τB}, where τA and τB are hitting times of some given simulable random process. The method is basically a multiple-replicas method where (i) replicas' paths are stored; (ii) some replicas are duplicated depending on some given level function; and (iii) new replicas' dynamics are re-simulated with the original true dynamics, starting from the state associated with the considered reached level. In this talk, we will give a theoretical framework giving conditions under which this type of method (with possible generalizations and variants) is indeed consistent (i.e. here yielding an unbiased estimator of the associated rare event probability).
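The three steps (i)-(iii) can be sketched for a 1D drifted diffusion, estimating P(hit B before A) with the running maximum as level function. The discretization, drift, and replica count below are illustrative choices, and this basic variant ignores the generalizations discussed in the talk:

```python
import numpy as np

rng = np.random.default_rng(4)

A, B, DT, DRIFT = -1.0, 1.0, 0.01, -1.0

def extend(x0, prefix=None):
    """Simulate dX = DRIFT dt + dW from x0 until hitting A or B,
    appended to an optional stored path prefix (step (i): paths kept)."""
    xs = [] if prefix is None else list(prefix)
    x = x0
    xs.append(x)
    for _ in range(20_000):
        x += DRIFT * DT + np.sqrt(DT) * rng.standard_normal()
        xs.append(x)
        if x <= A or x >= B:
            break
    return np.array(xs)

def ams(n_rep=100):
    """Estimate p = P(hit B before A from 0): repeatedly kill the replica
    with the lowest running maximum and branch a survivor (steps ii-iii)."""
    reps = [extend(0.0) for _ in range(n_rep)]
    p = 1.0
    while True:
        levels = np.array([r.max() for r in reps])
        worst = int(levels.argmin())
        z = levels[worst]
        if z >= B:               # every replica reaches B: done
            return p
        p *= 1.0 - 1.0 / n_rep   # unbiasedness factor per killed replica
        donor = reps[rng.choice([i for i in range(n_rep) if i != worst])]
        k = int(np.argmax(donor >= z))   # donor's first crossing of level z
        reps[worst] = extend(donor[k], prefix=donor[:k])

p_hat = ams()
# Exact continuous-time answer: (1 - e^-2) / (e^2 - e^-2) ≈ 0.119;
# the discretized estimate should land in the same neighbourhood.
print(p_hat)
```

Each kill multiplies the estimate by (1 − 1/n), which is what makes the basic estimator consistent under the conditions the talk addresses.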

Mathias Rousset
INRIA Rocquencourt, France.
[email protected]

MS151

Linear Collective Approximations for Parametric and Stochastic Elliptic PDEs

Consider the elliptic problem −div(a(y)∇u(y)) = f in D ⊂ R^m, parametrized by y ∈ I^∞ := [−1, 1]^∞, with u(y)|∂D = 0 and affine parametric dependence of a(y). If there is a sequence of approximations in H^1_0(D) to the solution u(y0) at every y0 ∈ I^∞, with an error convergence rate, we prove by linear collective methods that it induces a sequence of approximations to the solution u(y), as a mapping from I^∞ to H^1_0(D), in the norm of L^∞(I^∞, H^1_0(D)) with the same error convergence rate.

Dinh Dung
Vietnam National [email protected]

MS151

Applying Quasi-Monte Carlo to an Eigenproblem with a Random Coefficient

In this talk we look at applying finite elements (FE) and quasi-Monte Carlo (QMC) methods to an elliptic eigenproblem with a random coefficient, where the aim is to approximate the expected value of the principal eigenvalue and linear functionals of the corresponding eigenfunctions. By formulating the expected values as integrals we are able to apply QMC quadrature to the FE approximations. We show that the principal eigenvalue and linear functionals of the corresponding eigenfunction, as well as their respective FE approximations, belong to the spaces required for the QMC theory, and provide numerical results.

Alexander Gilbert
School of Mathematics and Statistics, Uni of New [email protected]

Ivan G. Graham
University of [email protected]

Frances Y. Kuo
School of Mathematics and Statistics
University of New South [email protected]

Robert Scheichl
University of [email protected]

Ian H. Sloan
University of New South Wales
School of Mathematics and [email protected]

MS151

Novel Monte Carlo Approaches for Uncertainty Quantification of the Neutron Transport Equation

Neutron Transport is an important problem in reactor physics: a six-dimensional integro-differential equation that is extremely costly to solve. Moreover, the geometry and the coefficients, known as cross-sections, are typically uncertain. Quantifying these uncertainties is a formidable task even in simple toy problems. Using novel ideas, such as lattice rules (QMC) and multilevel simulation, Monte Carlo methods can be sped up significantly. We confirm this numerically and by exploring the background theory.

Ivan G. Graham
University of [email protected]

Matthew Parkinson
Department of Mathematical Sciences, University of [email protected]

Robert Scheichl
University of [email protected]

MS151

Sparse-Grid Quadrature for Elliptic PDEs with Log-Normal Diffusion

Elliptic boundary value problems with log-normally distributed diffusion coefficients arise, for example, in the modelling of subsurface flows in porous media and can be written in divergence form

−div(a(x,ω)∇u(x,ω)) = f(x).

The method of choice to deal with these equations mainly depends on the quantities of interest, which usually arise as high-dimensional integrals. In many cases, the integrand does not depend equally on each dimension. This anisotropy can be exploited to construct sparse-grid quadrature methods. In order to analyze the convergence of these methods, it is important to establish sharp bounds on the number of indices contained in anisotropic index sets.

Markus Siebenmogen
University of [email protected]

MS152

Optimal Bayesian Design of the Oral Glucose Tolerance Test Using an ODE Model

The Oral Glucose Tolerance Test (OGTT) is routinely performed in health units around the world as a tool to diagnose Diabetes-related conditions. The test is done fasting after a night's sleep and blood glucose is measured at arrival. The patient is then asked to drink a sugar solution and thereafter blood glucose is measured at 1, 1.5 and/or 2h depending on local practices. The profile at which blood glucose increases, and whether in the course of the test it is controlled to return (or not return) to normal levels, should provide health specialists with a diagnosis. However, current data analyses are limited and a better approach is required to obtain greater value from the OGTT. We have developed a model based on a system of ODEs which fits the OGTT data well in many cases. Bayesian inference is used to estimate three parameters that are directly related to the Glucose/Insulin system and provide a basis for a more comprehensive diagnosis. In this talk we discuss a further problem of designing the blood sampling times to maximize the information obtained by the OGTT with the same number of samples, or with 1 or 2 more samples, to provide a far more reliable result while adding only marginal costs. We maximize the expected gain in information from the data using an MCMC estimator embedded in a stochastic optimization procedure.

J. Andres Christen, Nicolas Kuschinski
CIMAT, [email protected], [email protected]

Marcos A. Capistran
CIMAT [email protected]

MS152

Efficiency and Computability of MCMC with Autoregressive Proposals

We analyse the computational efficiency of Metropolis-Hastings algorithms with stochastic AR(1) process proposals. These proposals include, as a subclass, discretized Langevin diffusion (e.g. MALA) and discretized Hamiltonian dynamics (e.g. HMC). We extend existing results about MALA and HMC to target distributions that are absolutely continuous with respect to a Gaussian, where the covariance of the Gaussian is allowed to have off-diagonal terms.
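A stochastic AR(1) proposal of the kind analysed here can be sketched in its simplest (preconditioned Crank-Nicolson) form for a 1D standard-Gaussian reference measure. The change of measure Φ below is an illustrative choice; because the AR(1) proposal is reversible with respect to the Gaussian, the Metropolis-Hastings ratio involves only Φ:

```python
import numpy as np

rng = np.random.default_rng(5)

# Target pi(x) ∝ exp(-Phi(x)) N(x; 0, 1), absolutely continuous with
# respect to the standard Gaussian reference measure.
Phi = lambda x: 0.5 * x**4   # illustrative non-Gaussian change of measure

# AR(1) proposal y = rho*x + sqrt(1 - rho^2)*xi, xi ~ N(0, 1), which
# preserves (and is reversible w.r.t.) the N(0, 1) reference.
rho = 0.9
x, accept, samples = 0.0, 0, []
for _ in range(50_000):
    y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    # Accept with probability min(1, exp(Phi(x) - Phi(y))).
    if np.log(rng.random()) < Phi(x) - Phi(y):
        x = y
        accept += 1
    samples.append(x)

samples = np.array(samples[5_000:])   # discard burn-in
print(accept / 50_000, samples.mean())  # symmetric target: mean near 0
```

The Gaussian-reference reversibility is what makes this proposal well defined even when the reference measure is infinite-dimensional, which is the regime the efficiency analysis targets.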

Richard A. Norton
University of Otago
Department of [email protected]

MS152

Fast Solutions to Large Linear Bayesian Inverse Problems

Matrix splittings and polynomials, well-established iterative techniques from numerical linear algebra, can optimally accelerate the geometric convergence rate of Gibbs samplers used to solve large Bayesian linear inverse problems. This approach is used to solve a linear model of the heights of pathogenic biofilms on medical devices, before and after anti-microbicide treatments, from 3-D movies generated by a confocal scanning laser microscope. Limitations of the linear approach are overcome by solving a pixel-based Bayesian non-linear model.
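The matrix-splitting view of Gaussian sampling can be sketched in a few lines (in the spirit of the talk, not its biofilm application): with A = M - N and Gauss-Seidel splitting M = D + L, the stochastic iteration with noise covariance M + M^T - A = D reproduces the classical component-sweep Gibbs sampler for N(A^{-1}b, A^{-1}); the test matrix below is an arbitrary illustration:

```python
import numpy as np

def split_gibbs(A, b, n_iter=2000, rng=None):
    # Matrix-splitting sampler targeting N(A^{-1} b, A^{-1}) for SPD A.
    #   x_{k+1} = M^{-1} (N x_k + b + c_k),  c_k ~ N(0, M + M^T - A).
    # With the Gauss-Seidel choice M = D + L the noise covariance is D,
    # and the iteration is exactly the sequential Gibbs sweep.
    if rng is None:
        rng = np.random.default_rng(1)
    d = len(b)
    M = np.tril(A)                       # D + L
    N_ = M - A                           # minus the strict upper triangle
    D = np.diag(A)
    x = np.zeros(d)
    samples = []
    for _ in range(n_iter):
        c = np.sqrt(D) * rng.standard_normal(d)
        x = np.linalg.solve(M, N_ @ x + b + c)
        samples.append(x.copy())
    return np.array(samples)

A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 0.0])
S = split_gibbs(A, b, 4000)              # sample mean approaches A^{-1} b
```

Other splittings (SSOR, polynomial-accelerated variants) plug into the same iteration and change only the convergence rate.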

Albert E. Parker
Montana State University
Center for Biofilm [email protected]

MS153

Probability Measures for Numerical Solutions of Deterministic Differential Equations

Abstract not available.

Mark Girolami
University of [email protected]

MS153

Correction of Model Reduction Errors in Simulations

This talk presents an approximation error approach for correction of the approximation errors in reduced simulation models. Here, the approximation error is modelled as an additive error and approximated using a low-cost predictor model which predicts the approximation error for given simulation input parameters. The approximation error approach is tested with different simulation models, and the accuracies and computation times of the models with and without the approximation error correction are compared.

Ville P. Kolehmainen
Department of Applied Physics
University of Eastern [email protected]

Jari Kaipio
Department of Mathematics
University of [email protected], [email protected]

MS153

Confidence in Bayesian Nonparametrics? Some Mathematical (Frequentist) Facts

There has been substantial progress in the last few years about when posterior-based inference has objective justifications in situations where the Bayesian model sits on an infinite-dimensional parameter (such as a regression function or density). I will discuss some of the main findings, ideas, conclusions and perspectives.

Richard Nickl
University of [email protected]

MS153

The von Neumann-Morgenstern Approach to Choice Under Ambiguity: Updating

A choice problem is risky (ambiguous) if the decision maker is choosing between probability distributions (sets of probability distributions) over utility-relevant consequences. Continuous linear preferences over sets of probabilities extend expected utility preferences and deliver: first- and second-order dominance for ambiguous problems; complete separations of attitudes toward risk and ambiguity; new classes of preferences that allow decreasing relative ambiguity aversion; and a complete and dynamically consistent theory of updating convex sets of priors.

Maxwell Stinchcombe
UT [email protected]

MS154

Compositional Uncertainty Quantification for Coupled Multiphysics Systems

In goal-oriented design and analysis processes for coupled multiphysics systems, we often have available several information sources that vary in fidelity. To optimally exploit these sources of information, we develop a compositional offline/online uncertainty quantification methodology using a combination of ensemble Gibbs sampling and sequential importance resampling. This approach enables the efficient evaluation of information source discrepancy sensitivities associated with system-level quantities of interest. We demonstrate our methodology on a coupled aero-structural system.

Douglas Allaire
Massachusetts Institute of [email protected]

MS154

Multifidelity Uncertainty Propagation in Multidisciplinary Systems

The objective of this work is to tackle the complexities involved with uncertainty propagation in feedback-coupled multidisciplinary systems. We present a multifidelity approach with adaptive sampling for updating low-fidelity models to achieve efficient uncertainty propagation that exploits the structure of the multidisciplinary system. The low-fidelity prediction and its uncertainty are used to select additional designs to evaluate, so as to selectively refine each of the disciplinary low-fidelity models.

Anirban [email protected]

Karen E. Willcox
Massachusetts Institute of [email protected]

MS154

Model Selection Under Uncertainty: Coarse-Graining Atomistic Models

Issues in modeling and simulation of reduced-order models include model selection, calibration, validation, and error estimation, all in the presence of uncertainties in the data, prior information, and the model itself. These issues are addressed via the Occam-Plausibility ALgorithm (OPAL), which chooses, among a large set of models, the simplest valid model that may be used to predict chosen quantities of interest. An illustrative example application to coarse-grained models of atomistic polyethylene is given.

Kathryn Farrell
Institute for Computational Engineering and Sciences
University of Texas at [email protected]

J. Tinsley Oden
The University of Texas at [email protected]

Danial Faghihi
Institute for Computational Engineering and Sciences
University of Texas at [email protected]

MS154

Data-Driven Distributionally Robust Optimization Using the Wasserstein Metric

We consider stochastic programs where the distribution of the uncertainties is only observable through a training dataset. Using the Wasserstein metric, we construct a ball in the space of probability distributions around the empirical distribution, and we seek decisions that perform best in view of the worst-case distribution within this ball. We demonstrate that the resulting distributionally robust optimization problems can be reformulated as finite convex programs and that their solutions enjoy powerful finite-sample guarantees.
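For a one-dimensional piecewise affine loss with support on all of R, the finite convex reformulation collapses to a closed form: the worst-case expectation over a type-1 Wasserstein ball equals the empirical mean of the loss plus the radius times the loss's Lipschitz constant. A sketch (the loss, samples, and radius are arbitrary illustrations):

```python
import numpy as np

def dro_worst_case(xi, a, b, eps):
    # Worst-case expectation of l(x) = max_k (a[k]*x + b[k]) over the
    # 1-Wasserstein ball of radius eps around the empirical distribution
    # of the samples xi, with support equal to the whole real line:
    #   sup = mean_i l(xi_i) + eps * max_k |a[k]|,
    # where max_k |a[k]| is the Lipschitz constant of the loss.
    l = np.max(np.outer(xi, a) + b, axis=1)   # loss at each sample
    return l.mean() + eps * np.max(np.abs(a))

xi = np.array([0.0, 1.0, 2.0, 3.0])
val = dro_worst_case(xi, a=np.array([1.0, -0.5]), b=np.array([0.0, 1.0]), eps=0.1)
```

With `eps = 0` this recovers the plain sample-average approximation; the radius trades statistical robustness against conservatism.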

Peyman Mohajerin Esfahani
Swiss Federal Institute of Technology (ETH) in [email protected]

Daniel Kuhn
Ecole Polytechnique Federale de [email protected]

MS155

Optimal Experimental Design Using Multi-Level Monte Carlo with Application in Composite Material Damage Detection

Composite materials can be produced by a number of techniques, which aim to combine plies of fiber in different directions. Composites can be degraded by a number of mechanisms. Detection of defects in composites, such as delamination, can be addressed by Electrical Impedance Tomography (EIT). The electrical impedance problem is to reconstruct the conductivity field in the material based on the voltage measurements collected on the electrodes placed on the boundary. In a Bayesian framework, we infer the presence of delamination based on the posterior conductivity field. We present a Multi-Level Monte Carlo (MLMC) approach for the integration of the posterior conductivity field and perform a comparative analysis with some accelerative methods such as the Laplace method.
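The generic MLMC estimator underlying such an approach sums the sample means of level-wise corrections; the toy "level" model below stands in for the EIT forward solves and is purely illustrative:

```python
import numpy as np

def mlmc(sampler, n_levels, n_samples):
    # Multi-level Monte Carlo: E[P] is approximated by summing the mean of
    # the correction Y_l = P_l - P_{l-1} over levels, with Y_0 = P_0.
    # sampler(l, n) must return n iid draws of Y_l; most samples sit on
    # the cheap coarse levels, few on the expensive fine ones.
    return sum(sampler(l, n_samples[l]).mean() for l in range(n_levels))

# Toy problem: estimate E[X^2], X ~ N(0, 1), with a fake "discretization"
# P_l = X^2 + 2^{-l} standing in for a forward solve at mesh level l.
rng = np.random.default_rng(0)

def sampler(l, n):
    x = rng.standard_normal(n)
    fine = x**2 + 2.0**-l
    return fine if l == 0 else fine - (x**2 + 2.0**-(l - 1))

est = mlmc(sampler, 6, [4000, 2000, 1000, 500, 250, 125])
```

In the toy the corrections have zero variance, which caricatures the variance decay across levels that makes MLMC cheaper than single-level Monte Carlo.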

Ben Mansour Dia, Luis Espath
King Abdullah University of Science and [email protected], [email protected]

Quan Long
United Technologies Research Center, [email protected]

Raul F. Tempone
Mathematics, Computational Sciences & Engineering
King Abdullah University of Science and [email protected]

MS155

Optimal Experimental Design for Geophysical Imaging of Flow in Porous Media

Designing experiments for imaging fluid flow requires the integration of the dynamical system describing the flow, the geophysical imaging technique, and historic data. In this talk we explore optimal experimental design methods for such problems, and demonstrate the applicability of the techniques for the problem of imaging subsurface flow using seismic methods.

Jennifer Fohring
University of British [email protected]

Eldad Haber
Department of Mathematics
The University of British [email protected]

MS155

Optimal Electrode Positions in Electrical Impedance Tomography

The aim of electrical impedance tomography is to recover information about the conductivity inside a physical body from boundary measurements of current and voltage. In practice, such measurements are performed with a finite number of electrodes. This work considers finding optimal positions for the electrodes within the Bayesian paradigm based on available prior information; the goal is to place the electrodes so that the posterior density of the (discretized) conductivity is as localized as possible.

Nuutti Hyvonen
Aalto University
Department of Mathematics and Systems [email protected]

Aku Seppanen
Department of Applied Physics
University of Eastern [email protected]

Stratos Staboulis
Technical University of [email protected]

MS155

Using Topological Sensitivity and Reachability Analysis in Large Model Spaces

It is a common fallacy to assume that models that fit data and our prior expectations must be good. Here we show, focussing on dynamical systems, how we can start to assess how many models can fit a given data set. We will discuss Bayesian and control engineering approaches to (i) determine experimental conditions that are likely to discriminate between such plausible models; or (ii) allow us to analyze ensembles of models jointly.

Eszter Lakatos, Ann C. Babtie
Imperial College London
Theoretical Systems [email protected], [email protected]

Paul Kirk
MRC Biostatistics [email protected]

Michael Stumpf
Imperial College London
Theoretical Systems [email protected]

MS156

Adaptive Data Assimilation Scheme for Shallow Water Simulation

Data assimilation is the process of optimally combining the estimated forecast states and the observations to obtain a better representation of the system states. In most DA methods, the assimilation occurs whenever the observations become available. For many high-dimensional and nonlinear forecast problems, this has become inefficient. We present an adaptive data assimilation scheme to control the assimilation frequency. The control criteria will be discussed together with numerical tests on the shallow water equations.

Haiyan Cheng, Tyler Welch, Carl Rodriguez
Willamette [email protected], [email protected], [email protected]

MS156

A Statistical Analysis of the Kalman Filter

Kalman filters present several methodological problems because analytical results focus on their exponential convergence. Therefore, such methods have difficulties in estimating static parameters, in assimilating general data models, and in finding utility in resource-constrained scenarios. In this work, we replace the deterministic assumptions in previous analyses with more natural probabilistic ones. Consequently, (1) we demonstrate that the Kalman Filter converges for a robust choice of tuning parameters and (2) we present a new theoretical foundation for stopping criteria for a prescribed precision level. A static parameter algorithm is developed, and extensions to low-memory constraints, rank-deficient problems, and general data models are suggested. The concepts are demonstrated on a large data set from the Center of Medicare and Medicaid.
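The static-parameter setting highlighted above is the textbook filter with identity dynamics and zero process noise, where it reduces to recursive least squares and the covariance shrinks like 1/k; the dimensions and noise levels below are illustrative:

```python
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    # One predict/update cycle of the standard (linear-Gaussian) Kalman filter.
    m_pred = A @ m                        # forecast mean
    P_pred = A @ P @ A.T + Q              # forecast covariance
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

# Static parameter estimation: A = I, Q = 0, noisy direct observations of
# a scalar truth theta = 2.  The posterior variance after k steps is
# exactly 1 / (k + 1) here, a natural basis for a stopping criterion.
rng = np.random.default_rng(0)
m, P = np.zeros(1), np.eye(1)
A, Q, H, R = np.eye(1), np.zeros((1, 1)), np.eye(1), np.eye(1)
for _ in range(200):
    y = np.array([2.0]) + rng.standard_normal(1)
    m, P = kalman_step(m, P, y, A, Q, H, R)
```

Stopping once `P` falls below a prescribed precision level is the kind of criterion the probabilistic analysis is meant to justify.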

Vivak R. Patel
Student [email protected]

MS156

Dynamic Parameter and State Estimation for Power Grid Systems

We consider the problem of dynamic state and parameter estimation for large-scale power grid systems. The goal is to determine the most likely parameters and/or dynamic states based on fast-rate measurements available from phasor measurement units (PMUs). We will present the inversion for generators' inertia under transients caused by large disturbances in the load. For this we developed an optimization-based framework that uses state-of-the-art parallel forward and adjoint numerical integration.

Cosmin G. Petra
Argonne National Laboratory
Mathematics and Computer Science [email protected]

Noemi Petra
University of California, [email protected]

Zheng Zhang
MIT
[email protected]

Emil M. Constantinescu
Argonne National Laboratory
Mathematics and Computer Science [email protected]

MS157

Adaptive Algorithms Driven by A Posteriori Estimates of Error Reduction for PDEs with Random Data

We present an efficient adaptive algorithm for computing stochastic Galerkin finite element approximations of elliptic PDEs with random data. Our adaptive strategy is based on computing two error estimators associated with two distinct sources of discretisation error. These estimators provide effective estimates of the error reduction for enhanced approximations. The algorithm adaptively 'builds' a polynomial space over a low-dimensional manifold of the parameter space so that the total discretisation error is reduced in an optimal manner.

Alex Bespalov
University of Birmingham, United [email protected]

David Silvester
School of Mathematics
University of [email protected]

MS157

Optimal Approach for the Calculation of Stochastically Homogenized Coefficients of Elliptic PDEs

The calculation of effective coefficients of stochastic elliptic PDEs involving multiple length scales is a computationally intensive task. In this talk, we give error estimates for the three types of error (discretization, statistical, finite domain) and model the computational work. Then we minimize the work for a given error. This approach allows us to find an optimal balance between the fineness of the spatial discretization, the number of samples, and the size of the spatial domain.

Caroline Geiersbach
Vienna University of [email protected]

Gerhard Tulzer
TU Vienna, [email protected]

Clemens Heitzinger
Vienna University of [email protected]

MS157

Goal-Oriented Error Control in Low-Rank Approximations

Abstract not available.

Serge Prudhomme
École Polytechnique de Montréal
Montréal, Québec, [email protected]

MS157

Goal-Based Anisotropic Adaptive Control of Stochastic and Deterministic Errors

A strategy for goal-oriented error estimation of the coupled deterministic and stochastic approximation errors is proposed in this paper. Furthermore, an adaptive anisotropic method is proposed to enhance the quality of a functional of interest obtained by the coupled stochastic-deterministic solution of a parametrised system. The anisotropy in the deterministic space is controlled for a functional of interest via a Riemannian metric (Hessian) based method, and a similar approach is extended to stochastic errors. The proposed approach is applied to computational fluid dynamics problems modeled by the Euler and Navier-Stokes equations, where we considered up to two uncertain parameters.

Jan W. Van Langenhove, Anca Belme, Didier Lucor
Sorbonne Univ, UPMC Univ Paris 06
CNRS, UMR 7190, Inst Jean Le Rond d'[email protected], [email protected], [email protected]

MS158

Bayesian Screening Through Gaussian Processes Hyper-Parameter Sampling

Gaussian processes have proven to be powerful emulators of expensive numerical models. Kernels with automatic relevance determination use the Gaussian process' hyper-parameters for screening input variables. This is commonly done through their maximum likelihood estimates, which might be prone to underestimating the complexity of the correlation structure. In this work, we propose a Bayesian scheme akin to sequential sampling for a probabilistic estimation of the maximum a posteriori candidates.
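The screening mechanism can be sketched directly: with a squared-exponential ARD kernel, a large lengthscale marks an input as inactive, and the marginal likelihood is what either maximum likelihood or a Bayesian scheme explores; the data, lengthscales, and noise level below are illustrative assumptions:

```python
import numpy as np

def ard_kernel(X1, X2, ls, sf=1.0):
    # Squared-exponential kernel with automatic relevance determination:
    # one lengthscale per input dimension.
    d = (X1[:, None, :] - X2[None, :, :]) / ls
    return sf**2 * np.exp(-0.5 * np.sum(d**2, axis=-1))

def log_marginal(X, y, ls, noise=1e-2):
    # GP log marginal likelihood (dropping the constant 2*pi term).
    K = ard_kernel(X, X, ls) + noise * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.sum(np.log(np.diag(L)))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))
y = np.sin(3 * X[:, 0])                               # depends on input 0 only

active = log_marginal(X, y, np.array([0.5, 100.0]))   # input 1 screened out
wrong = log_marginal(X, y, np.array([100.0, 0.5]))    # relevance swapped
```

The correct relevance pattern attains a far higher marginal likelihood; sampling the lengthscale posterior, as the abstract proposes, quantifies how sharply the data pin this down.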

Alfredo Garbuno-Inigo, Francisco Díaz De la O
University of [email protected], [email protected]

Konstantin Zuev
California Institute of [email protected]

MS158

Combining Feature Mapping and Gaussian Process Modelling in the Context of Uncertainty Quantification

Uncertainty Quantification (UQ) and Machine Learning (ML) are two rapidly growing research topics that aim at the solution of similar problems from different perspectives and methodologies. This contribution explores the potential of using Kriging-based surrogate models as advanced learners, combined with unsupervised learning algorithms based on the deep learning paradigm. The effectiveness of the approach is evaluated on several benchmark applications in both UQ and ML.

Christos Lataniotis
ETH [email protected]

Stefano Marelli
Chair of Risk, Safety and Uncertainty Quantification
Institute of Structural Engineering, ETH [email protected]

Bruno Sudret
ETH [email protected]

MS158

Single Nugget Kriging: Better Predictions at the Extreme Values

We propose a method with better predictions at extreme values than the standard method of Kriging. We construct our predictor in two ways: by penalizing the mean squared error through conditional bias and by penalizing the conditional likelihood at the target function value. Our prediction exhibits robustness to model mismatch in the covariance parameters, a desirable feature for computer simulations with a restricted number of data points.

Minyong Lee, Art Owen
Stanford [email protected], [email protected]

MS158

Modelling Faces in the Wild with Gaussian Processes

Face verification has been studied extensively and has become one of the most active research topics in computer vision. However, various visual complications remain challenging for robust face verification when face images are taken in the wild. On the other hand, Gaussian process models, as representatives of Bayesian nonparametric models, can cover the intrinsically complex variations. In this talk, we present how to model faces in the wild using Gaussian processes.

Chaochao Lu
The Chinese University of Hong [email protected]

MS159

An Adaptive Strategy on the Error of the Objective Functions for Uncertainty-Based Derivative-Free Optimization

In this work, a strategy is developed to deal with errors affecting the objective functions in uncertainty-based optimization. It relies on the exchange of information between the outer loop, based on the optimization algorithm, and the inner uncertainty quantification loop. In the inner loop, a control is performed to decide whether a refinement for the current design is appropriate or not. Then, an accurate estimate of the objective function is computed only for non-dominated solutions.

Francesca Fusi
Politecnico di [email protected]

Pietro Marco [email protected]

MS159

Parameter Calibration and Optimal Experimental Design for Shock Tube Experiments

A Bayesian framework for parameter inference and optimal experimental design is developed. The formalism explicitly accounts for the impact of uncertainty in experimental operating conditions during which data is collected, and of measurement errors. Implementation of the framework is illustrated in light of applications to (i) the inference of reaction rates and the parametrization of rate laws based on time-resolved, noisy measurements of species concentrations, and (ii) the optimal design of shock tube experiments.

Daesang Kim, Iman Gharamti
KAUST
[email protected], [email protected]

Quan Long
United Technologies Research Center, [email protected]

Fabrizio Bisetti, Ahmed Elwardani, Aamir Farooq, Raul Tempone, Omar [email protected], [email protected], [email protected], [email protected], [email protected]

MS159

Coordinate Transformation and Polynomial Chaos for the Bayesian Inference of a Gaussian Process with Parametrized Prior Covariance Function

This work concerns the Bayesian inference of fields from Gaussian priors with parametrized covariance. The problem is classically made finite-dimensional by Karhunen-Loeve decompositions over covariance-parameter-dependent bases. We propose instead to use a fixed basis and account for covariance-parameter dependences through the prior on the coordinates. Sampling the posterior is also accelerated by constructing polynomial surrogates in the global coordinates. The approach is demonstrated and illustrated on the inference of spatially-varying log-diffusivity fields from noisy data.
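The classical construction that the fixed-basis approach improves on can be sketched as a discrete Karhunen-Loeve expansion; the grid, kernel, and truncation below are illustrative assumptions, and note how the basis depends on the covariance parameter:

```python
import numpy as np

def kl_modes(x, corr_len, n_modes):
    # Discrete Karhunen-Loeve modes of a squared-exponential covariance on
    # the grid x.  The basis depends on corr_len, so changing the
    # covariance parameter changes the basis -- the coupling the abstract's
    # fixed-basis formulation removes.
    C = np.exp(-0.5 * ((x[:, None] - x[None, :]) / corr_len) ** 2)
    lam, phi = np.linalg.eigh(C)                      # ascending order
    lam, phi = lam[::-1][:n_modes], phi[:, ::-1][:, :n_modes]
    return np.maximum(lam, 0.0), phi

def sample_field(x, corr_len, n_modes, xi):
    # field(x) = sum_k sqrt(lambda_k) phi_k(x) xi_k,  xi_k ~ N(0, 1)
    lam, phi = kl_modes(x, corr_len, n_modes)
    return phi @ (np.sqrt(lam) * xi)

x = np.linspace(0.0, 1.0, 50)
lam, _ = kl_modes(x, 0.2, 10)
field = sample_field(x, 0.2, 10, np.ones(10))
```

For a smooth kernel a handful of modes captures nearly all of the variance (the trace of C), which is what makes the truncation, and hence the finite-dimensional posterior, tractable.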

Ihab Sraj
King Abdullah University of Science and [email protected]

Olivier P. Le [email protected]

Ibrahim Hoteit
King Abdullah University of Science and Technology (KAUST)
[email protected]

Omar M. Knio
Duke [email protected]

MS159

Representation of Model Uncertainty in Space Debris Orbit Determination with Sparse Measurements

This paper presents a new method to capture unmodelled components in orbital mechanics with application to the orbit determination of space debris. The approach proposed in this work uses a polynomial expansion in the state space with stochastic coefficients. The distribution of the coefficients is recovered from the minimisation of an uncertainty measure in the space of the stochastic coefficients so that the resulting flow is contained in the confidence intervals provided by the measurements.

Chiara Tardioli, Massimilian Vasile, Annalisa Riccardi
University of [email protected], [email protected], [email protected]

MS160

Statistical Approaches to Forcefield Calibration and Prediction Uncertainty in Molecular Simulation

The important development of molecular simulation in the past decades has made it a very attractive tool for the study of condensed matter. It is now commonly used to predict thermophysical properties of fluids. The use of molecular simulation as a predictive tool requires estimating the uncertainty associated with the predicted value of a property. The uncertainty in simulation results arising from the forcefield has long been ignored. The forcefield contains all the information about the potential energy of a system, coming from interatomic interactions, which are encoded in parameters commonly calibrated in order to reproduce some experimental data. It is only very recently that careful investigations of the effect of forcefield parameter uncertainties have been undertaken [Rizzi et al, MMS 2012; Angelikopoulos et al, JCP 2012; Cailliez and Pernot, JCP 2011]. This is mainly due to the difficulty of estimating forcefield parameter uncertainties, which necessitates an extensive exploration of parameter space, incompatible with the computer time of molecular simulations. In recent years, we have explored various calibration strategies and calibration models within the Bayesian framework [Kennedy and O'Hagan, Roy. Stat. Soc. B, 2001]. We have studied a two-parameter Lennard-Jones potential for Argon, for which the calibration can be done using cheap analytical expressions. We have shown that prediction uncertainty is larger than the characteristic statistical simulation uncertainty [Cailliez and Pernot, JCP 2011]. For more complex systems, more parameters have to be calibrated and, in the absence of analytical models, the calibration process requires running long molecular simulations. In order to face these issues, we have used kriging metamodels and optimal infilling strategies to limit the number of molecular simulations to be performed during the calibration process. We have benchmarked this methodology on the water TIP4P forcefield [Cailliez et al, JCC 2014].

Fabien Cailliez
LCP, Universite [email protected]

Pascal Pernot
LCP, Universite Paris-Sud and [email protected]

MS160

Mapping the Structural and Alchemical Landscape of Materials, from Molecules to the Condensed Phase

Abstract not available.

Michele [email protected]

MS160

Modeling Material Stress Using Integrals of Gaussian Markov Random Fields

Material scientists are interested in the variability of stress conditions as compressive forces are applied throughout a volume. Sophisticated computer codes are used to simulate von Mises stress-fields, and it is often observed that the internal grain boundary structure is important. We describe the stress-field within a realized cube of tantalum (having tens of grains and >570,000 computational elements) with a model featuring integrals of GMRFs on second- and third-order grain boundaries.

Peter W. Marcy, Scott Vander Wiel
Los Alamos National [email protected], [email protected]

Curtis Storlie
Mayo [email protected]

Curt Bronkhorst
Los Alamos National [email protected]

MS160

Multiscale Modeling with Quantified Uncertainties and Cloud Computing: Towards Computational Materials Design

Abstract not available.

Alejandro Strachan
Purdue [email protected]

MS161

Gradient Projection Method for Stochastic Optimal Control in Finance

Abstract not available.

Bo Gong
Hong Kong Baptist [email protected]

MS161

Multi-symplectic Methods for Stochastic Maxwell Equations

Abstract not available.

Jialin Hong
Chinese Academy of [email protected]

MS161

Error Estimates of the Crank-Nicolson Scheme for Solving Decoupled Forward Backward Stochastic Differential Equations

Abstract not available.

Yang Li
University of Shanghai for Science and Technology
[email protected]

MS161

Stochastic Optimization with BSDEs

Abstract not available.

Mohamed Mnif
University of Tunis El [email protected]

MS162

Active Subspaces and Reduced-Order Models for High-Dimensional Uncertainty Propagation

Recent work from our team on the application of active subspaces and reduced-order models to the problem of forward propagation of uncertainties in systems governed by the solution of partial differential equations is discussed. The methods enable us to discover low-dimensional manifolds that, together with other advanced UQ techniques, result in greatly improved capabilities. Examples from an aerothermal nozzle problem are presented and discussed.
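The low-dimensional discovery step can be sketched with the standard active-subspace construction: eigendecompose the gradient second-moment matrix and keep the leading eigenvectors; the test function below is an illustrative ridge, not the nozzle problem:

```python
import numpy as np

def active_subspace(grads, k):
    # Estimate a k-dimensional active subspace from gradient samples by
    # eigendecomposing C = E[grad f grad f^T] and keeping the k leading
    # eigenvectors; a spectral gap after the k-th eigenvalue signals a
    # genuinely low-dimensional input manifold.
    C = grads.T @ grads / len(grads)
    lam, W = np.linalg.eigh(C)            # ascending order
    return lam[::-1], W[:, ::-1][:, :k]

# A ridge function f(x) = sin(w . x) varies only along w, so its active
# subspace is exactly one-dimensional and aligned with w.
rng = np.random.default_rng(0)
w = np.array([3.0, 4.0]) / 5.0
X = rng.standard_normal((500, 2))
grads = np.cos(X @ w)[:, None] * w        # gradient of sin(w . x)
lam, W = active_subspace(grads, 1)
```

Forward UQ then samples only along the recovered direction(s), with a reduced-order model supplying the cheap evaluations.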

Juan J. Alonso
Department of Aeronautics and Astronautics
Stanford [email protected]

MS162

Title Not Available

Abstract not available.

Paul Dupuis
Division of Applied Mathematics
Brown [email protected]

MS162

Global Sensitivity Analysis with Correlated Variables

This work will highlight challenges posed by input correlation in traditional variance-based global sensitivity analysis. We will present semi-analytical approaches to compute global sensitivity metrics in the presence of input correlation. The proposed methods will be compared on jet engine applications where sensitivities of both uncalibrated and calibrated models are desired with respect to prior and posterior distributions, respectively. Finally, we will also present the use of novel visualization techniques to understand structural and correlative sensitivity in high dimensions.

Ankur Srivastava
GE Global [email protected]

Isaac Asher
University of [email protected]

Arun K. Subramanian
GE Global [email protected]

Liping Wang
GE Global [email protected]

MS162

Overview of New Probabilistic Characteristics: CVaR Norm and Buffered Probability of Exceedance (bPOE)

This paper overviews two new probabilistic characteristics: 1) the Stochastic CVaR Norm, and 2) the Buffered Probability of Exceedance (bPOE). The Stochastic CVaR norm is a parametric function measuring distance between two random values. We have used this norm for building new algorithms for approximating distributions (e.g., a discrete distribution by some other discrete distribution). bPOE counts tail outcomes averaging to some specific threshold value. Minimization of bPOE can be reduced to convex and linear programming.
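For discrete samples the CVaR/bPOE pair is easy to sketch, and the two are inverse to each other: CVaR at level alpha averages the worst (1 - alpha) fraction of outcomes, while bPOE at threshold z is the probability mass of the largest tail whose average still reaches z. A minimal discrete-sample sketch (the data are arbitrary):

```python
import numpy as np

def cvar(x, alpha):
    # CVaR_alpha: average of the worst (1 - alpha) fraction of outcomes.
    tail = np.sort(x)[int(np.ceil(alpha * len(x))):]
    return tail.mean()

def bpoe(x, z):
    # Buffered probability of exceedance: the mass of the largest tail
    # whose average still reaches z, so that CVaR_{1 - bPOE}(x) = z.
    s = np.sort(x)[::-1]                              # descending
    tail_means = np.cumsum(s) / np.arange(1, len(x) + 1)
    k = np.sum(tail_means >= z)                       # largest qualifying tail
    return k / len(x)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
```

Here `cvar(x, 0.6)` averages the two worst outcomes, and `bpoe(x, cvar(x, 0.6))` returns the matching tail mass 0.4, illustrating the inverse relationship the minimization reformulations exploit.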

Stan Uryasev, Matthew Norton, Alexander Mafusalov, Konstantin Pavlikov
University of [email protected], [email protected], [email protected], [email protected]

MS163

Surrogate-Based Approach for Optimization Problems under Uncertainty

We propose a general surrogate model approach to tackle optimization problems under uncertainty. We then show that the proposed surrogate models are equivalent to both robust and distributionally robust optimization models, provided an appropriate selection of model parameters. Finally, we present computational experiments that highlight the efficiency of the surrogate model approach.

Jianqiang Cheng
Sandia National [email protected]

Richard L. Chen
Sandia National Laboratories
Livermore, [email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA, [email protected]

Ali Pinar
Sandia National [email protected]

Cosmin Safta
Sandia National [email protected]

Jean-Paul Watson
Sandia National Laboratories
Discrete Math and Complex [email protected]

MS163

Comparison of Stochastic Programming and Robust Optimization Approaches for Risk Management in Energy Production

In this talk, we compare three optimization methods to solve an operational decision-making problem. The methods involve stochastic programming, robust optimization, and a third hybrid method. The uncertainty is represented by a finite number of discrete scenarios and convex uncertainty sets, while the risk management is addressed using CVaR and a budget uncertainty constraint. The methods use an efficient parallelization of Benders decomposition. We compare the algorithmic implementation of the methods, their performance, and operational results.

Ricardo Lima, Sabique [email protected], [email protected]

Ibrahim Hoteit
King Abdullah University of Science and Technology (KAUST)
[email protected]

Omar M. Knio
Duke [email protected]

Antonio Conejo
Ohio State [email protected]

MS163

Security and Chance Constrained Unit Commitment with Wind Uncertainty

With increasing wind energy penetration, power system operators are interested in cost-efficient operation of transmission grids and securing them against random component failures. Moreover, the presence of stochastic wind generation alters the traditional methods for solving the day-ahead unit commitment. Hence, we formulate the stochastic multi-stage Security and Chance Constrained Unit Commitment problem under wind uncertainty, discuss methods to reformulate it into a tractable form, and further propose decomposition algorithms to obtain optimal solutions for relatively large problems.

Kaarthik Sundar
Texas A&M [email protected]

Harsha Nagarajan
Los Alamos National [email protected]

Miles Lubin
Operations Research Center
Massachusetts Institute of [email protected]

Line Roald
ETH [email protected]

Russell Bent, Sidhant Misra
Los Alamos National [email protected], [email protected]

Daniel Bienstock
Columbia University, IEOR and APAM Departments
IEOR [email protected]

MS163

Spatio-Temporal Hydro Forecasting of Multireservoir Inflows for Hydro-Thermal Scheduling

Hydro-thermal scheduling is the problem of finding an optimal dispatch of power plants in a system containing both hydro and thermal plants. Since hydro plants are able to store water over long time periods, and since future inflows are uncertain due to precipitation, the resulting multi-stage stochastic optimization problem becomes challenging to solve. Several solution methods have been developed over the past few decades to compute practically useful operation policies. One of these methods is stochastic dual dynamic programming (SDDP). SDDP poses strong restrictions on the forecasting method generating the necessary inflow scenarios. In this context, the current state-of-the-art in forecasting are periodic autoregressive (PAR) models. We present a new forecasting model for hydro inflows that incorporates spatial information, i.e., inflow information from neighboring reservoirs of the system, and that also satisfies the restrictions posed by SDDP. We benchmark our model against a PAR model that is similar to the one currently used in Brazil. Three multi-reservoir basins in Brazil serve as a case study for the comparison. We show that our approach outperforms the benchmark PAR model and present the root mean squared error (RMSE) as well as the seasonally adjusted coefficient of efficiency (SACE) for each reservoir modeled. The overall decrease in RMSE is 8.29% using our approach for one-month-ahead forecasts. The decrease in RMSE is achieved without additional data collection while only adding 11.8% more state variables for the SDDP algorithm.
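The benchmark PAR model can be sketched as one least-squares regression per calendar month; the lag order, series, and coefficients below are illustrative assumptions (the spatial variant in the talk would add neighboring reservoirs' lagged inflows as extra regressors):

```python
import numpy as np

def fit_par1(y, n_periods=12):
    # Periodic AR(1): a separate least-squares fit of
    #   y_t = a_m * y_{t-1} + b_m + noise,   m = t mod n_periods,
    # for each calendar month m.
    a, b = np.zeros(n_periods), np.zeros(n_periods)
    t = np.arange(1, len(y))
    for m in range(n_periods):
        sel = t[t % n_periods == m]
        X = np.stack([y[sel - 1], np.ones(len(sel))], axis=1)
        coef, *_ = np.linalg.lstsq(X, y[sel], rcond=None)
        a[m], b[m] = coef
    return a, b

# Synthetic non-seasonal inflow series: every month should recover a ~ 0.5.
rng = np.random.default_rng(0)
y = np.zeros(6000)
for t in range(1, 6000):
    y[t] = 0.5 * y[t - 1] + 1.0 + rng.standard_normal()
a, b = fit_par1(y)
```

The linear-in-lagged-inflow structure is exactly what keeps the scenario generator compatible with SDDP's stagewise convexity requirements.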

Timo Lohmann, Amanda Hering, Steffen Rebennack
Colorado School of [email protected], [email protected], [email protected]

MS164

Theory and Applications of Random Poincare Maps

For deterministic ODEs, Poincare maps (also called return maps) are useful to find periodic orbits, determine their stability, and analyse their bifurcations. The concept of a Poincare map naturally extends to the case of irreversible stochastic differential equations, where it takes the form of a discrete-time, continuous-space Markov chain. We present spectral-theoretic results allowing us to quantify the metastable long-term dynamics of these processes. Applications include the description of the interspike interval for the stochastic FitzHugh-Nagumo equation and the asymptotic computation of the distribution of first exits through an unstable periodic orbit.

Nils [email protected]
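The random Poincaré map described above can be sketched numerically: integrate a noisy planar oscillator by Euler–Maruyama and record the state each time the trajectory crosses a fixed section. The sequence of recorded states is exactly a discrete-time, continuous-space Markov chain. The oscillator and all parameters here are illustrative, not the systems studied in the talk.

```python
import math
import random

def random_poincare_map(r0, sigma=0.1, dt=0.01, n_returns=200, seed=1):
    """Sample the random Poincare (return) map of a noisy planar oscillator.
    Radial SDE: dr = r(1 - r^2) dt + sigma dW; the angle advances at unit
    speed, and the section is theta = 0 (mod 2*pi). The radius recorded at
    each full revolution forms a continuous-space Markov chain."""
    random.seed(seed)
    r, theta = r0, 0.0
    returns = []
    while len(returns) < n_returns:
        # Euler-Maruyama step for the radial SDE
        r += r * (1 - r * r) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        theta += dt
        if theta >= 2 * math.pi:          # crossed the Poincare section
            theta -= 2 * math.pi
            returns.append(r)
    return returns

chain = random_poincare_map(0.2)
mean_r = sum(chain) / len(chain)
print("mean return radius:", round(mean_r, 2))  # concentrates near the limit cycle r = 1
```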

MS164

Dynamic Model Reduction for Stochastic Chemical Reaction Systems with Stiffness

Research on reduction methods is driven by the complexity of chemical reaction systems: one seeks simplified systems, involving a smaller set of species and reactions, that can approximate the original detailed systems in the prediction of specific quantities of interest. The existence of such reduced systems frequently hinges on the existence of a lower-dimensional, attracting, and invariant manifold characterizing the long-term dynamics. The Computational Singular Perturbation (CSP) method provides a general framework for analysis and reduction of chemical reaction systems. In this work we propose an algorithm, based on the theory of stochastic singular perturbation, that can be applied to stiff stochastic reaction systems to obtain their random low-dimensional manifolds.

Xiaoying Han
Auburn University
[email protected]

Habib N. Najm
Sandia National Laboratories
Livermore, CA
[email protected]

MS164

Stochastic Effects and Numerical Artefacts in Pure and Mixed Models in Biology

Biological systems consist of coupled processes occurring at multiple time and space scales. Choosing a suitable pure model at a fixed scale, coupling different pure models to form a mixed (hybrid) model, and choosing numerical methods for such models are generally not straightforward tasks. In this talk, stochastic effects and numerical artefacts, which play an important role in model building, are analysed using numerical analysis, dynamical systems theory, chemical reaction theory, and perturbation theory.

Tomislav Plesa
University of Oxford
[email protected]

MS165

Probabilistic Analysis of a Rock Salt Cavern with Application to Energy Storage Systems

This study provides a probabilistic analysis of a typical renewable energy storage cavern in rock salt. An elasto-viscoplastic-creep constitutive model is applied within a numerical model to assess its mechanical behavior. Sensitivity measures of the different variables are computed by global sensitivity analysis. The propagation of parameter uncertainties and the failure probability are evaluated by utilizing a Monte Carlo based simulation. A surrogate modeling technique is applied to reduce the computational time and effort required for the probabilistic analysis.

Raoul Holter, Elham Mahmoudi, Tom Schanz
Ruhr-Universität Bochum
[email protected], [email protected], [email protected]
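The failure-probability step described above can be sketched in a few lines: draw input samples, evaluate a limit-state function g (capacity minus demand), and count the fraction with g < 0. The limit state and lognormal input distributions below are hypothetical stand-ins, not the geomechanical model of the study.

```python
import random

def failure_probability(limit_state, sampler, n=100_000, seed=42):
    """Crude Monte Carlo estimate of P[g(X) < 0] for a limit-state function g."""
    random.seed(seed)
    fails = sum(1 for _ in range(n) if limit_state(sampler()) < 0)
    return fails / n

def sample_inputs():
    # hypothetical uncertain inputs: material strength vs. in-situ stress
    strength = random.lognormvariate(3.0, 0.2)
    stress = random.lognormvariate(2.5, 0.3)
    return strength, stress

def g(x):
    """Limit state: failure when demand (stress) exceeds capacity (strength)."""
    strength, stress = x
    return strength - stress

pf = failure_probability(g, sample_inputs)
print("estimated failure probability:", pf)
```

In practice a surrogate (as in the abstract) replaces the expensive forward model inside the sampling loop; the Monte Carlo logic is unchanged.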

MS165

Efficient Estimation of Saturation Distribution in Stochastic Two-Phase Flow Problems

We introduce an efficient distribution method to estimate the full distribution of saturation in two-phase flow problems with random (uncertain) permeability and porosity fields. We demonstrate how accurate and inexpensive the estimation is compared to direct Monte Carlo simulation, even with large variances or small correlation lengths. Finally, we discuss the notion of quantiles for the saturation and illustrate how the distribution method directly leads to this comprehensive way of managing flow scenarios under uncertain conditions.

Fayadhoi Ibrahima
Stanford University
[email protected]

Hamdi Tchelepi
Stanford University
Energy Resources Engineering Department
[email protected]

Daniel W. Meyer
Institute of Fluid Dynamics
[email protected]

MS165

Tracer Dispersion in a Realistic Field Case with the Polar Markovian Velocity Process (PMVP) Model

For the quantification of tracer dispersion uncertainty, the recently developed PMVP model employs a Markov process for the Lagrangian velocity of tracer particles. In this work, we evaluate the performance of the PMVP model for a realistic test case that is defined by the well-known MADE-2 benchmark. Confirming our previous results, we find accurate model predictions and analyze in great detail the remaining differences between the reference data and our PMVP predictions.

Daniel W. Meyer
Institute of Fluid Dynamics
[email protected]

Simon Dunser
Institute of Fluid Dynamics, ETH Zurich
[email protected]

MS165

Uncertain CO2 Transport in Large-Scale Carbon Capture and Storage

Numerical simulation of the migration of CO2 in subsurface storage reservoirs is computationally demanding due to the large ranges of spatial and temporal scales, as well as the uncertainty in the physical input parameters. Efficient representation of the uncertainties is challenging due to the hyperbolic nature of the governing equations. A reduced-order stochastic Galerkin method, obtained by locally discarding insignificant chaos modes, is presented together with test cases from the North Sea.

Per Pettersson
Uni Research, University of Bergen
[email protected]

PP1

Ridge-SCAD Quantile Regression Model for Big Data

Extraction of useful information from massive data sets is a big challenge for statisticians and researchers. Moreover, sample correlation becomes very high when the number of variables is huge. The usual linear regression and quantile regression models fail to resolve these issues. Therefore, the Ridge-SCAD quantile variable selection technique is developed and its model selection consistency is established. The proposed model shows better performance compared to the available models in simulation studies.

Muhammad Amin, Lixin Song, Milton Abdul Thorlie, Xiaoguang Wang
School of Mathematical Sciences, Dalian University of Technology, China
[email protected], [email protected], [email protected], [email protected]

PP1

A Bayesian View of Doubly Robust Causal Inference

In causal inference, propensity score methods rely on modelling the assignment to treatment. Such an approach is difficult to justify from a Bayesian standpoint, since the treatment assignment model can play no part in likelihood-based inference for the outcome. We propose a Bayesian posterior predictive approach in the framework of misspecified models to construct doubly robust estimation procedures, which require only one of the exposure and outcome models to be correctly specified.

Olli Saarela
University of Toronto
[email protected]

Leo [email protected]

David A. Stephens
McGill University
[email protected]

PP1

Padé Approximation for the Helmholtz Equation with Parametric or Stochastic Wavenumber

We study the Helmholtz problem with parametric or stochastic wavenumber k², varying in K ⊂ R⁺ (the interval of interest). We introduce the solution map S, which to k² ∈ K associates u(k², x), the solution of the Helmholtz problem. We extend S to C⁺ := R>0 + iR, obtaining a meromorphic map with a simple pole at each λ ∈ Λ, Λ being the set of eigenvalues of the Laplacian. A rational Padé approximant of S is constructed, and upper bounds for the approximation error are derived.

Francesca Bonizzoni
Faculty of Mathematics
University of Vienna
[email protected]

Fabio Nobile
EPFL, Switzerland
[email protected]

Ilaria Perugia
University of Vienna
Faculty of Mathematics
[email protected]

PP1

Effective Emulators for Flood Forecasting and Real-Time Water Management

We present the development of a Gaussian process (GP) based mechanistic emulator of a nonlinear model used for the control of water in urban drainage networks. The GP's covariance function corresponds to a linear model: an efficient heuristic proxy for the nonlinear dynamics. The mapping between the parameters of the linear and nonlinear models is learned from data generated by the latter. Small generalization errors of the emulator confirm the adequacy of the data-driven proxy.

Juan Pablo Carbajal
Swiss Federal Institute of Aquatic Science and Technology
Dübendorf, Switzerland
[email protected]

David Machac, Jorg Rieckermann, Carlo Albert
Eawag, Swiss Federal Institute of Aquatic Science and Technology
n/a, n/a, n/a

PP1

Identification of Physical Parameters Using Gaussian Process Change-Point Kernels

Various physical systems are approximated by a linear domain near the equilibrium and a non-linear domain where the approximations wear off. Gaussian processes are non-parametric Bayesian models giving the relationship between measured points in terms of a mean and a variance. In this poster we apply a change-point kernel to parametrize the Gaussian process and learn the values of the required physical parameters by maximizing the marginal likelihood. Our method will be used to learn Young's modulus from experimental data.

Ankit Chiplunkar
Airbus
[email protected]

Emmanuel [email protected]

Michele [email protected]

Joseph [email protected]
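A change-point kernel of the kind mentioned above can be sketched as a sigmoid-weighted blend of two covariance functions, one per regime. The specific kernels, lengthscales, and change-point location below are illustrative assumptions, not the authors' parametrization.

```python
import math

def rbf(x, y, length):
    """Squared-exponential covariance with the given lengthscale."""
    return math.exp(-0.5 * ((x - y) / length) ** 2)

def sigmoid(x, x0, steep):
    return 1.0 / (1.0 + math.exp(-steep * (x - x0)))

def changepoint_kernel(x, y, x0=0.0, steep=10.0, len_a=5.0, len_b=0.5):
    """Change-point covariance: a smooth long-lengthscale kernel (the
    near-equilibrium 'linear' regime) is blended with a short-lengthscale
    kernel (the nonlinear regime), switching near x0. The blend keeps the
    kernel symmetric and positive semi-definite."""
    s_x, s_y = sigmoid(x, x0, steep), sigmoid(y, x0, steep)
    return (1 - s_x) * (1 - s_y) * rbf(x, y, len_a) + s_x * s_y * rbf(x, y, len_b)

# symmetry, and near-unit variance at coincident points far from the change point
print(changepoint_kernel(-3.0, -2.5), changepoint_kernel(2.5, 3.0))
```

In a GP identification setting, x0 (and any physical parameter entering the mean or kernel) would be learned by maximizing the marginal likelihood.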

PP1

Quantifying the Effect of Parameter Uncertainty in the Output of a Microsimulation Model of Cancer: The Case of Overdiagnosis in Prostate Cancer

Estimates resulting from microsimulation models of cancer have substantial parameter uncertainty. We analyze its effect with probabilistic sensitivity analyses. Doing this directly is not feasible, since the number of parameters is large and the running time is high. We minimize model runs by running the model at a limited number of points, and use the resulting data to fit a Gaussian process emulator. Even for a computationally expensive model, we show it is possible to analyze parametric uncertainty.

Tiago M. De Carvalho
Erasmus Medical Center, Rotterdam, The Netherlands
[email protected]

PP1

Uncertainty Quantification from a Hierarchy of Models

There are different ways to classify uncertainty or its sources. The most common distinction is to divide them into two types: random (aleatory) uncertainty and epistemic uncertainty. The first is considered non-reducible, due to the natural variability of a random phenomenon. The second is caused by a lack of knowledge, which can be reduced by acquiring more information. In this paper we argue that the parameters involved in the physical equations and their numerical implementation cannot be considered random quantities. Our approach is to closely examine the use of classification techniques in a deterministic uncertainty quantification method. Each output obtained with fixed parameters is assimilated to a response of the model. The set of parameters and the response obtained using this set are both referred to as a "candidate". The idea is then to sort the candidates according to a level of adequacy, named a "score", with respect to a set of preset measurements. This approach can be seen as an attempt to produce a score associated with every theoretical model (candidate), or as a protocol to refute evaluations or experimental measurements. Once the outputs are scored, the candidates are weighted according to their score value to estimate the covariance matrix.

Pierre Dossantos-Uzarralde
Commissariat à l'Énergie Atomique
[email protected]

PP1

Demonstration of Uncertainty Quantification Python Laboratory (UQ-PyL)

Uncertainty Quantification Python Laboratory (UQ-PyL) is a software platform that integrates many kinds of uncertainty quantification (UQ) methods, including experimental design, statistical analysis, sensitivity analysis, surrogate modeling, and parameter optimization. It is designed for uncertainty analysis and parameter optimization of large complex dynamical models. This poster describes the design structure and illustrates the main features of the software, which can be downloaded from http://uq-pyl.com.

Chen Wang
Beijing Normal University
[email protected]

Qingyun Duan
Beijing Normal University
College of Global Change and Earth System Science
[email protected]

Charles Tong
Lawrence Livermore National Laboratory
[email protected]

PP1

Uncertainty Propagation in Flooding Using Non-Intrusive Polynomial Chaos

We study uncertainty propagation of initial conditions and atmospheric forcings using the coupled hydrologic/hydrodynamic tRIBS model suite. A polynomial chaos expansion with non-intrusive spectral projection is used for uncertainty propagation and global sensitivity analysis, to quantify the influence of input parameters and forcings on flooding extent in test cases and experimental setups for rainfall-runoff processes.

Chase Dwelle, Jongho Kim, Valeriy Ivanov
University of Michigan
[email protected], [email protected], [email protected]
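Non-intrusive spectral projection, the core step above, can be sketched for a single uniform input: evaluate the model at quadrature nodes and project onto Legendre polynomials to get chaos coefficients, from which mean and variance follow. The toy model f(ξ) = ξ² is purely illustrative; it stands in for an expensive hydrologic simulation.

```python
# Non-intrusive spectral projection: Legendre-chaos coefficients of a model
# output f(xi), with xi ~ Uniform(-1, 1), via 3-point Gauss-Legendre quadrature.
nodes = [-(3.0 / 5.0) ** 0.5, 0.0, (3.0 / 5.0) ** 0.5]
weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

legendre = [
    lambda x: 1.0,
    lambda x: x,
    lambda x: 0.5 * (3 * x * x - 1),
]

def pce_coefficients(f):
    """c_k = E[f P_k] / E[P_k^2], with E[P_k^2] = 1/(2k+1) under U(-1,1).
    The factor 0.5 converts the [-1,1] integral into an expectation."""
    coeffs = []
    for k, P in enumerate(legendre):
        ef_pk = 0.5 * sum(w * f(x) * P(x) for x, w in zip(nodes, weights))
        coeffs.append(ef_pk * (2 * k + 1))
    return coeffs

# toy "model": f(xi) = xi^2, whose exact expansion is (1/3)P0 + (2/3)P2
c = pce_coefficients(lambda x: x * x)
mean = c[0]
variance = sum(c[k] ** 2 / (2 * k + 1) for k in range(1, 3))
print(c, mean, variance)   # mean 1/3, variance 4/45
```

Sobol' indices for global sensitivity analysis come essentially for free from such coefficients, since each variance contribution is attached to known input combinations.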

PP1

Estimation of Smooth Functions with an Application to Climate Extremes

We develop a new and general statistical methodology for estimating smooth functions. The optimization is based on a reliable numerical procedure which is fast, stable, and efficient. The approach deals with fitting flexible models of random variables that are independent but not identically distributed. This flexibility accommodates any prior information depending on covariates. For illustration, we apply the method to a non-stationary generalized extreme value distribution, while incorporating a smooth structure using generalized additive models.

Yousra El Bachir
Ecole Polytechnique Federale de Lausanne (EPFL)
[email protected]

PP1

High-Dimensional Uncertainty Quantification of Fluid-Structure Interaction

This contribution discusses uncertainty quantification problems in fluid-structure interaction, using the pseudo-spectral approach, while focusing on high-dimensional stochastic problems. Addressing the complexity posed by high-dimensional uncertainty quantification is a challenging task, which is resolved using an approach comprising sparse grid interpolation and numerical quadrature. In post-processing, besides estimating the statistics, a Sobol' indices based sensitivity analysis is performed at no additional computational cost, using the already computed coefficients of the spectral expansion approximation.

Ionut-Gabriel Farcas
Technische Universitaet Muenchen
Department of Scientific Computing
[email protected]

Benjamin Uekermann, Tobias Neckel
Technische Universitat Munchen
[email protected], [email protected]

Hans-Joachim Bungartz
Technische Universitat Munchen, Department of Informatics
Chair of Scientific Computing in Computer Science
[email protected]

PP1

Algebraic Method for the Construction of a Series of Uniform Computer Designs

The incomplete block designs constructed from subspaces of dimension mj, j = 1, ..., k (k < m), of a projective geometry PG(m, pⁿ) provide symmetric residual designs if mj = m − j. Their union at the ith step generates repeated resolvable designs. More economical designs are deduced. A judicious juxtaposition of these designs engenders n-ary designs and nested uniform computer designs using the "RBIBD-UD" algorithm.

Zebida Gheribi-Aoulmi
Department of Mathematics, University of the Brothers Mentouri
Constantine, Algeria
[email protected]

Mohamed Laib
Faculty of Mathematics, USTHB, Algiers, Algeria
[email protected]

Imane Rezgui
Department of Mathematics, University of the Brothers Mentouri
Constantine, Algeria
Rezgui [email protected]

PP1

A Framework for Variational Data Assimilation with Superparameterization

Superparameterization (SP) is a multiscale computational approach wherein a large-scale atmosphere or ocean model is coupled to an array of simulations of small-scale dynamics on periodic domains embedded in the computational grid of the large-scale model. SP has been successfully developed in global atmosphere and climate models, and is a promising approach for new applications, but there is currently no practical data assimilation framework that can be used with these models. The authors develop a 3D-Var variational data assimilation framework for use with SP; the relatively low cost and simplicity of 3D-Var in comparison with ensemble approaches make it a natural fit for relatively expensive multiscale SP models. To demonstrate the assimilation framework in a simple model, the authors develop a new system of ordinary differential equations similar to the two-scale Lorenz-'96 model. The system has one set of variables denoted {Yi}, with large- and small-scale parts, and the SP approximation to the system is straightforward. With the new assimilation framework, the SP model approximates the large-scale dynamics of the true system accurately.

Ian Grooms
New York University
[email protected]
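The 3D-Var update at the heart of the framework above has a closed form in the scalar case, which makes the cost-function logic easy to see: the analysis blends the background state and the observation in proportion to their error variances. The numbers below are illustrative, not taken from the paper.

```python
# Minimal scalar 3D-Var: minimize J(x) = (x - xb)^2/(2B) + (y - H x)^2/(2R).
# The minimizer has the closed form xa = xb + K (y - H xb), with gain
# K = B H / (H B H + R).
def threedvar_scalar(xb, B, y, H, R):
    K = B * H / (H * B * H + R)   # optimal gain
    xa = xb + K * (y - H * xb)    # analysis state
    return xa, K

xb = 1.0    # background (model) state
B = 0.5     # background error variance
y = 2.0     # observation
H = 1.0     # observation operator
R = 0.5     # observation error variance

xa, K = threedvar_scalar(xb, B, y, H, R)
print("gain:", K, "analysis:", xa)   # K = 0.5, analysis halfway at 1.5
```

In the vector case K = B Hᵀ (H B Hᵀ + R)⁻¹, and in practice J is minimized iteratively rather than inverted; the scalar formula is the same calculation in miniature.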

PP1

Identification of Aleatory Uncertainties in Material Properties

Recent works devoted to the identification of aleatory uncertainties either assume a known type of probability distribution or model those uncertainties by a polynomial chaos expansion. While the former approach leads to identification of (typically only a few) statistical moments, the latter allows one to identify more complex distributions defined by a limited number of polynomial chaos coefficients. Here we aim to relax these assumptions and identify the distribution of aleatory uncertainties using a Markov chain Monte Carlo method.

Eliska Janouchova
Czech Technical University in Prague
[email protected]

Anna Kucerova
Faculty of Civil Engineering, Prague, Czech Republic
[email protected]

Jan Sykora
Faculty of Civil Engineering
Czech Technical University in Prague
[email protected]
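The MCMC step named above can be sketched with a random-walk Metropolis sampler that identifies the parameters of an aleatory-uncertainty distribution from measured samples. The Gaussian measurement model, flat priors, and all numbers are illustrative assumptions, not the authors' setup.

```python
import math
import random

def log_post(params, data):
    """Log-posterior for (mu, log_sigma) with flat priors and a Gaussian
    aleatory-uncertainty model for the observed material property."""
    mu, log_sigma = params
    sigma = math.exp(log_sigma)
    return sum(-0.5 * ((d - mu) / sigma) ** 2 - math.log(sigma) for d in data)

def metropolis(data, n_iter=10000, step=0.05, seed=3):
    """Random-walk Metropolis over (mu, log_sigma)."""
    random.seed(seed)
    x = [0.0, 0.0]
    lp = log_post(x, data)
    chain = []
    for _ in range(n_iter):
        prop = [xi + random.gauss(0, step) for xi in x]
        lp_prop = log_post(prop, data)
        if math.log(random.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain.append(x[:])
    return chain

random.seed(7)
data = [random.gauss(1.0, 0.2) for _ in range(200)]   # synthetic "measurements"
chain = metropolis(data)
post_mu = sum(s[0] for s in chain[3000:]) / len(chain[3000:])
print("posterior mean of mu:", round(post_mu, 2))
```

Replacing (mu, sigma) by, e.g., polynomial chaos coefficients of the uncertain property changes only `log_post`, not the sampler.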

PP1

Hybrid Reduced Basis Method and Generalized Polynomial Chaos for Stochastic Partial Differential Equations

The generalized polynomial chaos (gPC) method is a popular method for solving partial differential equations with random parameters. However, when the probability space has high dimensionality, the solution ensemble size required for an accurate gPC approximation can be large. We show that this process can be made more efficient by closely hybridizing gPC with reduced basis methods (RBM). Furthermore, we demonstrate through numerical experiments that this is achieved with essentially no loss of accuracy.

Jiahua Jiang
University of Massachusetts, Dartmouth
[email protected]

PP1

Using Training Realizations to Characterize, Identify, and Remove Model Errors in Bayesian Geophysical Inversion

MCMC sampling of the Bayesian posterior distribution is a growing means of performing UQ for geophysical problems. To improve computational tractability, simplified numerical models are often utilized, which can lead to strong posterior bias. With application to crosshole georadar tomography, we demonstrate how a suite of training realizations can be used to generate a sparse basis for known sources of model error, which can be used to identify and remove these errors in MCMC.

Corinna Koepke, James Irving
Université de Lausanne
[email protected], [email protected]

PP1

Robust Experiment Design Based on Sobol Indices

Current advanced techniques for designing laboratory experiments use local sensitivities, which are merged into some optimality criterion that is further optimised by a robust optimisation algorithm. The use of local sensitivities, however, requires a good prior guess of the parameters to be identified from the experiment, but such a guess is typically not available. The aim of the presented contribution is to overcome this difficulty by introducing global sensitivities based on Sobol indices.

Anna Kucerova
Faculty of Civil Engineering, Prague, Czech Republic
[email protected]

Jan Sykora
Faculty of Civil Engineering
Czech Technical University in Prague
[email protected]

Eliska Janouchova
Czech Technical University in Prague
[email protected]
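A first-order Sobol index of the kind proposed above can be estimated with a standard pick-freeze Monte Carlo scheme: correlate model outputs on two input samples that share only the coordinate of interest. The additive test function below is illustrative (its exact indices are 1/5 and 4/5), not an experiment-design model.

```python
import random

def sobol_first_order(f, dim, i, n=50000, seed=11):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index
    S_i = Var(E[Y | X_i]) / Var(Y), computed as Cov(f(A), f(AB_i)) / Var(f(A)),
    where AB_i equals sample B except that coordinate i is taken from A."""
    random.seed(seed)
    ya, yab = [], []
    for _ in range(n):
        a = [random.random() for _ in range(dim)]
        b = [random.random() for _ in range(dim)]
        ab = b[:]
        ab[i] = a[i]            # "freeze" coordinate i
        ya.append(f(a))
        yab.append(f(ab))
    mean = sum(ya) / n
    var = sum((y - mean) ** 2 for y in ya) / n
    cov = sum((y1 - mean) * (y2 - mean) for y1, y2 in zip(ya, yab)) / n
    return cov / var

# additive test function: Y = X0 + 2*X1, so S0 = 1/5 and S1 = 4/5 exactly
f = lambda x: x[0] + 2 * x[1]
print(round(sobol_first_order(f, 2, 0), 2), round(sobol_first_order(f, 2, 1), 2))
```

Unlike local sensitivities, these indices require no prior parameter guess: they average over the whole prior range of the inputs.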

PP1

Π4U: An HPC Framework for Bayesian Uncertainty Quantification of Large-Scale Computational Models

Π4U is a task-parallel framework for non-intrusive Bayesian uncertainty quantification of computationally demanding physical models on massively parallel computing architectures. The framework incorporates asymptotic approximations and multiple sampling algorithms. Task-based parallelism is exploited with a platform-agnostic adaptive load balancing library that orchestrates the scheduling of multiple physical model evaluations on computing platforms ranging from multicore systems to hybrid GPU clusters. Experimental results using representative applications demonstrate the flexibility and excellent scalability of the proposed framework.

Lina Kulakova
CSE-lab, Institute for Computational Science, D-MAVT
ETH Zurich, Switzerland
[email protected]

Panagiotis Hadjidoukas
Computational Science
ETH Zurich, Switzerland
[email protected]

Panagiotis Angelikopoulos
Computational Science and Engineering Lab
Institute for Computational Science, D-MAVT, ETH Zurich, Switzerland
[email protected]

Costas Papadimitriou
University of Thessaly
Dept. of Mechanical Engineering
[email protected]

Dmitry Alexeev, Diego Rossinelli
ETH Zurich
[email protected], [email protected]

Petros Koumoutsakos
Computational Science, ETH Zurich, Switzerland
[email protected]

PP1

Optimization of Expensive Black-Box Models Using Information Gain

Optimization often requires numerous evaluations of a quantity of interest through a potentially expensive model. To alleviate the cost of optimization, this work presents a strategy to adaptively construct and exploit a cheap surrogate for the optimization of expensive black-box models. This is achieved by defining a utility function that quantifies the information gained about the model in regions of interest. The next design to evaluate is chosen to maximize that utility function.

Remi [email protected]

Karen E. Willcox
Massachusetts Institute of Technology
[email protected]

PP1

Approximating Uncertain Dynamical Systems Using Time-Dependent Orthogonal Bases

We use time-dependent orthogonal bases (TDOB) to approximate stochastic dynamical systems. The objective of this method is to track the stochastic response surface with quasi-optimal bases. The bases are adaptively modified so as to maintain a good approximation of the stochastic response surface as time progresses. The bases are assumed to be orthonormal at all times. Under these assumptions, we derive the equations of motion for the time-dependent orthogonal bases based on the variational principle.

Jinchun Lan, Sha Wei, Zhike Peng
Shanghai Jiao Tong University
[email protected], [email protected], [email protected]

PP1

On Low-Rank Entropy Maximization of Covariance Matrices

We consider entropy maximization of covariance matrices regularized with the nuclear norm. The motivation stems from applications where we seek low-dimensional stochastic inputs to linear systems given partial data on state covariance matrices. This inverse problem also arises in the estimation of precision matrices in machine learning. We solve this constrained optimization problem by solving a sequence of linear equations. Based on a nullspace parameterization, we exploit special structure of the Hessian matrices that makes matrix-vector multiplication efficient.

Fu Lin
Mathematics and Computer Science Division
Argonne National Laboratory
[email protected]

Jie Chen
IBM Thomas J. Watson Research Center
[email protected]

PP1

Comparing Networks and Graphical Models at Different Scales

We consider the problem of local and global testing of dependent multiple hypotheses where the dependency is represented by complex networks. By combining complex graph theory and statistical testing, we propose different kinds of tests that assess the deviance between the data structure and null models, or between groups of networks. We also show how to exploit the data structure and prior information on the dependency to derive hierarchical multiple testing procedures, without having to rely on strong assumptions.

Djalel-Eddine Meskaldji
Ecole Polytechnique Federale de Lausanne
[email protected]

Stephan Morgenthaler
Ecole Polytechnique Federale de Lausanne, Switzerland
[email protected]

PP1

Sensitivity Analysis of the Uncertainty in Rapid Pressure Strain Correlation Models

We explicate the dynamics due to linear pressure in a variety of two- and three-dimensional mean flows. Our focus is to establish the importance of the wave-space (or dimensionality) information, and to quantify the degree of the epistemic uncertainty due to its absence in classical rapid pressure strain correlation models. We carry out sensitivity analysis to show that the prediction intervals are highly dependent on the mean flow under consideration and, to a lesser extent, on the state of the Reynolds stress tensor. Such analysis and understanding is critical for formulating improved pressure strain correlation models.

Aashwin Mishra
Texas A&M University, College Station
[email protected]

Sharath Girimaji
Aerospace Engineering
[email protected]

PP1

Uncertainty Classification for Strategic Energy Planning

Various countries and communities are defining strategic energy plans driven by concerns related to climate change and security of energy supply. The long time horizon inherent to strategic energy planning requires uncertainty to be accounted for. Uncertainty classification consists in defining the type of uncertainty involved and quantifying it. It is needed as input for uncertainty and sensitivity analyses, and for optimization under uncertainty applications. In this work we define a methodology for uncertainty classification for a typical strategic energy planning problem. As an example, the methodology is applied to some representative parameters.

Stefano Moret, Michel Bierlaire, Francois Marechal
Ecole Polytechnique Federale de Lausanne (EPFL)
[email protected], [email protected], [email protected]

PP1

Uncertainty Quantification in Large Civil Infrastructure

Replacing all aging civil infrastructure is not sustainable. Reserve capacity estimation of existing structures is typically performed through structural identification of various degrees of sophistication. In civil infrastructure, uncertainty is seldom Gaussian, often systematic, and the correlation model is rarely known. An error-domain model-falsification methodology is presented, with Markov chain Monte Carlo sampling used for exploring solution spaces, leading to robust identification and prediction for non-Gaussian and systematic uncertainty forms.

Sai Ganesh S. Pai
Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne
[email protected]

Ian [email protected]

PP1

Data Assimilation for Uncertainty Reduction in Forecasting Cholera Epidemics: An Application to the Haiti Outbreak

Spatially explicit models for the simulation of waterborne diseases must take into account uncertainties arising from the atmospheric forcing and from epidemiological and environmental parameters. The real-time assimilation of the recorded infected cases is an indispensable requirement to reduce the uncertainties in the model forecast and, thus, make these tools operative in the management of an emergency. Here we show the efficacy of advanced DA schemes in improving the forecast of the Haiti cholera outbreak of 2010.

Damiano Pasetto
Department of Mathematics
University of [email protected]

Flavio Finger, Enrico Bertuzzo, Andrea
[email protected], [email protected], [email protected]

PP1

State-Dependent Model Error Characterization for Data Assimilation

Many existing methods for model error characterization in data assimilation focus only on estimating the first and second moments. We propose a framework to estimate the full distributional form based on model residuals. Conditional density estimation methods are used to develop the error distribution conditioned on model states for each assimilation time step. Errors in observations are simultaneously considered through deconvolution. Methods for transferring errors in observed variables to hidden states are also discussed.

Sahani Pathiraja, Lucy Marshall, Ashish Sharma
University of New South Wales
[email protected], [email protected], [email protected]

Hamid Moradkhani
Portland State University
[email protected]

PP1

A Simple Alarm for Early Detection of Epidemics Over Networks

Forecasting the beginning of outbreaks of infectious diseases is well studied for the case of simple disease-generating models. However, epidemics over real-world, heterogeneous networks tend to exhibit more complex behavior. We propose a simple monitoring system suitable for non-homogeneous populations, and derive its statistical properties over simulated networks. We then illustrate its use on observed rates of influenza-like illness from the French Sentinelles network.

Razvan Romanescu
University of [email protected]

Rob Deardon
Department of Mathematics & Statistics
University of [email protected]

PP1

Adaptive Optimal Designs for Dose-Finding Studies with Time-to-Event Outcomes on Continuous Dose Space

Many clinical trials use time-to-event outcomes as primary measures of efficacy or safety. In cancer trials the goals may be to estimate a dose-response relationship and a dose level that yields the longest progression-free survival for testing in subsequent studies. Finding efficient designs for such trials may be complicated due to uncertainty about the dose-response model, model parameters, and censored observations. We develop optimal and adaptive designs for such trials.

Yevgen Ryeznik
Department of Mathematics
Uppsala University
[email protected]

Andrew Hooker
Department of Pharmaceutical Biosciences
Uppsala University
[email protected]

Oleksandr Sverdlov
R&D Global Biostatistics
EMD Serono, Inc.
[email protected]

PP1

A New Algorithm for Sensor-Location Problems in the Sense of Optimal Design of Experiments for PDEs

We state an algorithm for the computation of a sparse optimal approximate design consisting of at most n(n+1)/2 points (Dirac deltas) in an experimental domain Ω, based on a combined conditional and proximal gradient method with an additional post-processing step. The algorithm is able to add/remove sensors to/from multiple points in Ω and ensures that each iterate consists of at most n(n+1)/2 points. Weak∗-convergence of the iterates in the space of regular Borel measures is shown.

Daniel Walter
Technische Universität München
[email protected]

PP1

A Fast Computational Method for Tracking the Evolving Spatial-Temporal Gene Networks Based on Topological Information

The modeling and identification of spatial-temporal gene networks at the transcriptome scale is a challenging problem. To investigate the molecular mechanisms of genes in a dynamic and systematic fashion, we develop a novel method for tracking the spatial-temporal modules of gene networks reconstructed from brain development gene expression data. A fast computational method for topological graph matching is proposed to track the evolving gene networks. We apply this method to brain spatial-temporal networks and provide new insights into the molecular mechanisms of brain development as well as the functions of schizophrenia-associated genes.

Lin Wan
Chinese Academy of Sciences
[email protected]

PP1

Robust Optimization for Silicon Photonics Process Variations

Robust design under manufacturing process variations is critical in electrical engineering. The problem becomes intractable when the dimensionality increases. In this work, we develop an efficient simulation technique to solve high-dimensional robust optimization problems based on stochastic spectral methods and compressed sensing. The technique is applied to silicon photonic devices. Results show that the variance of the bandwidth in a coupled ring filter example is 31% smaller when using our technique.

Tsui-Wei Weng
MIT
Research Lab of Electronics
[email protected]

Luca Daniel
M.I.T.
Research Lab in Electronics
[email protected]

PP1

Parameterization-Induced Model Discrepancy

Parameterization is a critical component of groundwater modeling because practitioners must balance computational burden with the need to express uncertainty in spatially-distributed properties and time-varying boundary conditions. Using FOSM theory, we have derived exact expressions for the penalty (e.g. the increase in forecast uncertainty) for not casting uncertain model inputs as parameters; the penalty expresses the induced model discrepancy arising from parameter compensation. We show that choices made during the parameterization process can greatly affect the resulting forecast uncertainty.

Jeremy White
U.S. Geological Survey
Florida Water Science Center
[email protected]

PP1

Efficient gPCE-Based Rock Characterization for the Analysis of a Hydrocarbon Reservoir

This contribution focuses on a gPCE-based rock characterization, inferring the parameters of a geomechanical model from seafloor surface measurements. The deformations are due to the pressure depletion within the subsurface rock formation caused by the extraction of gas from an offshore hydrocarbon reservoir. It is shown how prior expert knowledge of the physical properties of the system is updated by incorporating the measurement data via Bayesian inference, using a surrogate for the forward model. The reservoir characterization allows for predicting the propagation of further deformations and the long-term behaviour of the offshore platform.

Claudia Zoccarato
Dept. ICEA - University of Padova
[email protected]

Noemi Friedman
Institute of Scientific Computing, Technische Universitat Braunschweig
[email protected]

Elmar Zander
TU Braunschweig, Inst. of Scientific Computing
[email protected]

PP101

Active Subspaces: Theory and Practice

Active subspaces are an emerging set of tools for dimension reduction in parameter studies. This poster will outline the essential elements of the theory and practice of active subspaces.
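The core computation — estimating the dominant eigenspace of the gradient covariance — can be sketched as follows. The test function, dimension, and sample count are invented for illustration, not taken from the poster.

```python
import numpy as np

# Active subspace of f(x) = exp(a^T x): all variation lies along a, so the
# gradient covariance C = E[grad f grad f^T] has (numerical) rank one.
rng = np.random.default_rng(1)
dim, n_samples = 10, 2000
a = rng.standard_normal(dim)

def grad_f(x):
    return np.exp(a @ x) * a   # gradient of exp(a^T x)

X = rng.uniform(-1.0, 1.0, (n_samples, dim))   # samples on the hypercube
grads = np.array([grad_f(x) for x in X])
C = grads.T @ grads / n_samples                # Monte Carlo estimate of C

eigvals, eigvecs = np.linalg.eigh(C)           # eigh returns ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

# A large gap after the first eigenvalue reveals a 1-D active subspace;
# its basis vector aligns with a (up to sign).
w1 = eigvecs[:, 0]
alignment = abs(w1 @ a) / np.linalg.norm(a)
print(eigvals[:3], alignment)
```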

Paul Constantine
Colorado School of Mines, Applied Mathematics and Statistics
[email protected]

PP101

Active Subspaces: Deriving Metrics for Sensitivity Analysis

Active subspaces are an emerging set of tools for parameter studies. Sensitivity analysis helps us understand output variation corresponding to parameters of a model. We propose two new sensitivity metrics constructed from active subspaces and relate them to existing global sensitivity metrics such as the Sobol' index and derivative-based sensitivity metrics. In addition, we analyze two mathematical models to compare results of each metric.
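One metric of this kind — weighting squared eigenvector components by the eigenvalues of the gradient covariance, often called an activity score — can be sketched as below. The quadratic test function is an invented example, not one of the models analyzed in the poster.

```python
import numpy as np

def activity_scores(grads, n=1):
    """Eigenvalue-weighted squared eigenvector components of the gradient
    covariance C = E[grad f grad f^T], summed over the first n eigenpairs."""
    C = grads.T @ grads / grads.shape[0]
    lam, W = np.linalg.eigh(C)
    lam, W = lam[::-1], W[:, ::-1]      # reorder to descending eigenvalues
    return (lam[:n] * W[:, :n] ** 2).sum(axis=1)

# Quadratic test function f(x) = 4*x0**2 + x1**2, with x2 inert:
# the score should rank x0 above x1 and give x2 a near-zero value.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (4000, 3))
grads = np.column_stack([8 * X[:, 0], 2 * X[:, 1], np.zeros(len(X))])
scores = activity_scores(grads, n=2)
print(scores)
```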

Paul M. Diaz
Colorado School of Mines
[email protected]

Paul Constantine
Colorado School of Mines, Applied Mathematics and Statistics
[email protected]

PP101

Active Subspaces: Quantifying Errors in Surrogate Models of Failure Probability

Application of active subspaces to aid in the estimation of probabilities of rare events.

Andrew Glaws
Colorado School of Mines
[email protected]

PP101

Active Subspaces: Application to Gas Turbine Design and Optimization

Application of the active subspace dimensionality reduction methods to approximate complex gas turbine airfoil design spaces to motivate optimization and stochastic solutions.

Zach Grey
Colorado School of Mines
[email protected]

Paul Constantine
Colorado School of Mines, Applied Mathematics and Statistics
[email protected]

PP101

Active Subspaces: Hypercube Domains and Zonotopes

Motivated by applications of active subspaces to integration on a hypercube domain in high dimension, we detail a probabilistic algorithm for enumerating the vertices of a zonotope.
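The probabilistic idea can be sketched in a few lines: a zonotope is the image of the hypercube under a linear map, each vertex maximizes some direction, and the maximizer over the hypercube is a sign vector — so random directions enumerate vertices with high probability. The generator matrix and draw count below are illustrative, not the authors' algorithm verbatim.

```python
import numpy as np

# Zonotope Z = { W^T x : x in [-1,1]^m } for an m x n matrix W (n < m),
# e.g. the range of active-subspace coordinates y = W1^T x. The hypercube
# point maximizing direction d is x = sign(W d), giving vertex W^T x.
rng = np.random.default_rng(3)
m, n = 6, 2
W = rng.standard_normal((m, n))

def sample_vertices(W, n_draws=500, rng=rng):
    verts = set()
    for _ in range(n_draws):
        d = rng.standard_normal(W.shape[1])
        x = np.sign(W @ d)                 # hypercube corner maximizing d
        verts.add(tuple(np.round(W.T @ x, 10)))
    return verts

verts = sample_vertices(W)
# A generic 2-D zonotope built from m generators has 2*m vertices.
print(len(verts))
```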

Kerrek Stinson
Colorado School of Mines
[email protected]

Paul Constantine
Colorado School of Mines, Applied Mathematics and Statistics
[email protected]

PP102

MUQ (MIT Uncertainty Quantification): Algorithms and Interfaces for Solving Forward and Inverse Uncertainty Quantification Problems

Standard forward and inverse uncertainty quantification algorithms, such as polynomial chaos and Markov chain Monte Carlo, share many common building blocks. We describe how the MIT Uncertainty Quantification (MUQ) library implements these building blocks and how our implementation leads to extensible and easy-to-use software. In particular, we illustrate MUQ's user interfaces for defining new algorithms and implementing new models, and how appropriate software abstractions are used to create these interfaces.

Matthew Parno
Massachusetts Institute of Technology
[email protected]

Andrew D. Davis
Massachusetts Institute of Technology
[email protected]

Youssef M. Marzouk
Massachusetts Institute of Technology
[email protected]

PP102

UQ with the Spatially Adaptive Sparse Grid Toolkit SG++

SG++ is a multi-platform toolkit that excels with fast and efficient algorithms for spatially adaptive sparse grids. It provides advanced higher-order basis functions, such as B-splines, which can significantly accelerate higher-dimensional stochastic collocation. SG++ supplies sparse grid operations to compute Jacobian and Hessian matrices and Sobol' indices for sensitivity analysis, or the Rosenblatt transformation for estimated sparse grid densities. We show use cases and its convenient use via its rich interfaces to Python and Matlab.

Dirk Pfluger, Fabian Franzelin, Julian Valentin
University of Stuttgart
[email protected], [email protected], [email protected]

PP102

Practical Use of Chaospy for Pedestrian Traffic Simulations

We applied Chaospy for forward UQ simulations of pedestrian evacuation scenarios. Chaospy proved to be very flexible and extensible when coupled to a black-box solver. Additional functionality concerning pre-/postprocessing of data, as well as the parallel execution of the traffic simulations on a cluster, can easily be realised directly in Python. The poster shows the whole simulation pipeline, from the setup to the visualisation of the uncertainty, which the application scientists can easily interpret.

Florian Kunzner
Technical University Munich (TUM)
[email protected]

Tobias Neckel
Technische Universitat Munchen
[email protected]

Isabella von Sivers
Munich University of Applied Sciences
[email protected]

PP102

Implementation of UQ Workflows with the C++/Python UQTk Toolkit

The UQ Toolkit (UQTk) is a collection of libraries, tools and apps for uncertainty quantification in computational models. UQTk offers intrusive and non-intrusive methods for forward uncertainty propagation and sensitivity analysis, as well as inverse modeling via Bayesian inference. The Python interface allows easy prototyping and development of common UQ workflows that rely on the C++ libraries. The poster will demonstrate details of such workflows to highlight key UQTk capabilities in practical settings.

Cosmin Safta, Khachik Sargsyan, Kenny Chowdhary
Sandia National Laboratories
[email protected], [email protected], [email protected]

Bert J. Debusschere
Energy Transportation Center, Sandia National Laboratories, Livermore, CA
[email protected]

PP102

Dakota: Algorithms for Design Exploration and Simulation Credibility

The Dakota toolkit provides a flexible interface between simulations and iterative analysis methods including optimization, uncertainty quantification, parameter estimation, and sensitivity analysis. Methods may be used individually or as components within advanced strategies to address questions like "What is the best design?", "How safe is it?", and "How much confidence do I have in my answer?". We present a use case to illustrate how Dakota informs engineering decisions, highlighting emerging activity-based support mechanisms intended to facilitate non-expert use.

Patricia D. Hough, Adam Stephens
Sandia National Laboratories
[email protected], [email protected]

PP102

UQLab: A General-Purpose MATLAB-Based Platform for Uncertainty Quantification

The UQLab project, developed at the Chair of Risk, Safety and Uncertainty Quantification at ETH Zurich, offers a modular MATLAB-based software framework that enables both industrial and academic users to easily deploy and develop advanced uncertainty quantification algorithms. Its flexible, black-box-oriented and easy-to-extend modular design is aimed at scientists without an extensive IT background. This contribution gives a general overview of the current platform capabilities.

Stefano Marelli
Chair of Risk, Safety and Uncertainty Quantification
Institute of Structural Engineering, ETH Zurich
[email protected]

Bruno Sudret
ETH Zurich
[email protected]

PP102

COSSAN-X: General-Purpose Software for Uncertainty Quantification and Risk Management

COSSAN-X is a software package developed to make the concepts and technologies of uncertainty quantification and risk analysis available for anyone to use. COSSAN-X is a general-purpose software tool that can be used to solve a wide range of engineering and scientific problems (e.g. uncertainty quantification, reliability analysis, sensitivity optimisation, robust design and life-cycle management). The software package is based on the most recent and advanced algorithms for the rational quantification and propagation of uncertainties. Its modular structure also allows for the easy extension and implementation of new toolkits.

Edoardo Patelli, Marco de Angelis, Raneesha Manoharan, Matteo Broggi
University of Liverpool
[email protected], [email protected], [email protected], [email protected]