SIAM/ASA Journal on Uncertainty Quantification
Journal Prestige (SJR): 0.543; Citation Impact (CiteScore): 1; Number of Followers: 3; ISSN (Print): 2166-2525; Published by the Society for Industrial and Applied Mathematics
- Robust Kalman and Bayesian Set-Valued Filtering and Model Validation for Linear Stochastic Systems
Authors: Adrian N. Bishop, Pierre Del Moral
Pages: 389 - 425
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 2, Page 389-425, June 2023.
Consider a linear stochastic filtering problem in which the probability measure specifying all randomness is only partially known. The deviation between the real and assumed probability models is constrained by a divergence bound between the respective probability measures under which the models are defined. This bound defines a so-called uncertainty set. A recursive set-valued filtering characterization is derived and is guaranteed (with probability one) to contain the true conditional posterior of the unknown, real-world filtering problem when the real-world measure is within this uncertainty set. Some filtering approximations and related results are given. The set-valued characterization is related to the problem of robust model validation and model goodness-of-fit statistical hypothesis testing. It is shown how relevant terms involving the innovation sequence (re)appear in multiple settings, from set-valued filtering to statistical model evaluation.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-04-25T07:00:00Z
DOI: 10.1137/22M1481270
Issue No: Vol. 11, No. 2 (2023)
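The innovation terms highlighted in this abstract are the workhorse of the nominal Kalman recursion. For reference only, here is a minimal NumPy sketch of that nominal recursion (toy shapes; the paper's divergence-bounded set-valued construction is not implemented here):

```python
import numpy as np

def kalman_step(m, P, y, A, C, Q, R):
    """One predict/update step of the nominal (assumed-model) Kalman filter."""
    # Predict under the assumed linear-Gaussian model.
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Innovation and its covariance: the terms the paper reuses for
    # set-valued filtering and model-validation tests.
    v = y - C @ m_pred
    S = C @ P_pred @ C.T + R
    # Standard gain and update.
    K = P_pred @ C.T @ np.linalg.inv(S)
    return m_pred + K @ v, P_pred - K @ S @ K.T, v, S
```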
- Gaussian Process Regression on Nested Spaces
Authors: Christophette Blanchet-Scalliet, Bruno Demory, Thierry Gonon, Céline Helbert
Pages: 426 - 451
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 2, Page 426-451, June 2023.
Because computer codes simulate complex physical phenomena, they involve a very large number of variables. To save time, industrial experts build metamodels on a restricted set of variables, the most influential ones, while the others are fixed. The set of variables is then enlarged progressively to improve knowledge of the studied output. Several designs of experiments are generated, belonging to nested subspaces of increasing dimension. The goal of this paper is to create a metamodel adapted to this inefficient design process, one that exploits the structure of all previous runs. An approach based on Gaussian process regression, called seqGPR (sequential Gaussian process regression), is introduced. At each new step of the study (when new variables are released), the output is assumed to be the realization of the sum of two independent Gaussian processes. The first one models the output at the previous step. The second one is a correction term which must be null on the subspace studied at the previous step, that is, null on a continuum of points. First, some candidate Gaussian processes for the correction terms are suggested. Then, an EM (expectation-maximization) algorithm is implemented to estimate the parameters of the processes. Finally, the seqGPR metamodel is compared to a standard kriging metamodel on three test cases and gives better results.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-04-25T07:00:00Z
DOI: 10.1137/21M1445053
Issue No: Vol. 11, No. 2 (2023)
- Nonparametric Posterior Learning for Emission Tomography
Authors: Fedor Goncharov, Éric Barat, Thomas Dautremer
Pages: 452 - 479
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 2, Page 452-479, June 2023.
We continue studies of the uncertainty quantification problem in emission tomography, such as positron emission tomography (PET) or single photon emission computed tomography (SPECT), when additional multimodal data (anatomical magnetic resonance imaging (MRI) images) are available. To solve this problem we adapt the recently proposed nonparametric posterior learning technique to the context of Poisson-type data in emission tomography. Using this approach we derive sampling algorithms which are trivially parallelizable, scalable, and very easy to implement. In addition, we prove conditional consistency and tightness for the distribution of produced samples in the small noise limit (i.e., when the acquisition time tends to infinity) and derive a new geometric necessary condition on how MRI images must be used. This condition arises naturally in the context of the identifiability problem for misspecified generalized Poisson models with wrong design. We also contrast our approach with Bayesian Markov chain Monte Carlo sampling based on a data augmentation scheme that is very popular in the context of expectation-maximization algorithms for PET or SPECT. We show theoretically and numerically that such data augmentation significantly increases the mixing times of the Markov chain. In view of this, our algorithms appear to give a reasonable trade-off between design complexity, scalability, numerical load, and uncertainty assessment.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-05-11T07:00:00Z
DOI: 10.1137/21M1463367
Issue No: Vol. 11, No. 2 (2023)
- Convergence Rates for Learning Linear Operators from Noisy Data
Authors: Maarten V. de Hoop, Nikola B. Kovachki, Nicholas H. Nelsen, Andrew M. Stuart
Pages: 480 - 513
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 2, Page 480-513, June 2023.
This paper studies the learning of linear operators between infinite-dimensional Hilbert spaces. The training data comprises pairs of random input vectors in a Hilbert space and their noisy images under an unknown self-adjoint linear operator. Assuming that the operator is diagonalizable in a known basis, this work solves the equivalent inverse problem of estimating the operator’s eigenvalues given the data. Adopting a Bayesian approach, the theoretical analysis establishes posterior contraction rates in the infinite data limit with Gaussian priors that are not directly linked to the forward map of the inverse problem. The main results also include learning-theoretic generalization error guarantees for a wide range of distribution shifts. These convergence rates quantify the effects of data smoothness and true eigenvalue decay or growth, for compact or unbounded operators, respectively, on sample complexity. Numerical evidence supports the theory in diagonal and nondiagonal settings.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-05-11T07:00:00Z
DOI: 10.1137/21M1442942
Issue No: Vol. 11, No. 2 (2023)
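A toy version of the diagonal setting described above fits in a few lines: with a known eigenbasis, each eigenvalue receives an independent conjugate Gaussian update. All dimensions, the decaying spectrum, and the prior variance below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 50, 200, 0.1
true_eigs = 1.0 / (1 + np.arange(d)) ** 2            # assumed decaying spectrum
X = rng.normal(size=(n, d))                          # random inputs, eigenbasis coords
Y = X * true_eigs + sigma * rng.normal(size=(n, d))  # noisy images under the operator

tau2 = 1.0                                           # Gaussian prior variance
post_var = 1.0 / ((X**2).sum(axis=0) / sigma**2 + 1.0 / tau2)
post_mean = post_var * (X * Y).sum(axis=0) / sigma**2
print(np.abs(post_mean - true_eigs).max())           # error contracts as n grows
```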
- Multifidelity Surrogate Modeling for Time-Series Outputs
Authors: Baptiste Kerleguer
Pages: 514 - 539
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 2, Page 514-539, June 2023.
This paper considers the surrogate modeling of a complex numerical code in a multifidelity framework when the code output is a time series and two code levels are available: a high-fidelity and expensive code level and a low-fidelity and cheap code level. The goal is to emulate a fast-running approximation of the high-fidelity code level. An original Gaussian process regression method is proposed that uses an experimental design of the low- and high-fidelity code levels. The code output is expanded on a basis built from the experimental design. The first coefficients of the expansion of the code output are processed by a cokriging approach. The last coefficients are processed by a kriging approach with covariance tensorization. The resulting surrogate model provides a predictive mean and a predictive variance of the output of the high-fidelity code level. It is shown to have better performance in terms of prediction errors than standard dimension reduction techniques.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-05-12T07:00:00Z
DOI: 10.1137/20M1386694
Issue No: Vol. 11, No. 2 (2023)
- The Zero Problem: Gaussian Process Emulators for Range-Constrained Computer Models
Authors: Elaine T. Spiller, Robert L. Wolpert, Pablo Tierz, Taylor G. Asher
Pages: 540 - 566
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 2, Page 540-566, June 2023.
We introduce a zero-censored Gaussian process as a systematic, model-based approach to building Gaussian process emulators for range-constrained simulator output. This approach avoids many pitfalls associated with modeling range-constrained data with Gaussian processes. Further, it is flexible enough to be used in conjunction with statistical emulator advancements such as emulators that model high-dimensional vector-valued simulator output. The zero-censored Gaussian process is then applied to two examples of geophysical flow inundation which have the constraint of nonnegativity.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-05-17T07:00:00Z
DOI: 10.1137/21M1467420
Issue No: Vol. 11, No. 2 (2023)
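For intuition, a generative caricature of a zero-censored GP (assuming the simulator output behaves like a latent GP clipped at zero) can be sampled directly; the kernel and length scale below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
# Squared-exponential kernel with an arbitrary length scale of 0.05.
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.05**2)
f = rng.multivariate_normal(np.zeros_like(x), K + 1e-10 * np.eye(x.size))
y = np.maximum(f, 0.0)  # censoring at zero: only the nonnegative part is observed
# Fitting an ordinary GP to y would smooth across the flat zero set;
# the zero-censored model instead infers the latent f and censors it.
```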
- Wavenumber-Explicit Parametric Holomorphy of Helmholtz Solutions in the Context of Uncertainty Quantification
Authors: E. A. Spence, J. Wunsch
Pages: 567 - 590
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 2, Page 567-590, June 2023.
A crucial role in the theory of uncertainty quantification (UQ) of PDEs is played by the regularity of the solution with respect to the stochastic parameters; indeed, a key property one seeks to establish is that the solution is holomorphic with respect to (the complex extensions of) the parameters. In the context of UQ for the high-frequency Helmholtz equation, a natural question is therefore: how does this parametric holomorphy depend on the wavenumber [math]? The recent paper [] showed for a particular nontrapping variable-coefficient Helmholtz problem with affine dependence of the coefficients on the stochastic parameters that the solution operator can be analytically continued a distance [math] into the complex plane. In this paper, we generalize the result in [] about [math]-explicit parametric holomorphy to a much wider class of Helmholtz problems with arbitrary (holomorphic) dependence on the stochastic parameters; we show that in all cases the region of parametric holomorphy decreases with [math] and show how the rate of decrease with [math] is dictated by whether the unperturbed Helmholtz problem is trapping or nontrapping. We then give examples of both trapping and nontrapping problems where these bounds on the rate of decrease with [math] of the region of parametric holomorphy are sharp, with the trapping examples coming from the recent results of []. An immediate implication of these results is that the [math]-dependent restrictions imposed on the randomness in the analysis of quasi-Monte Carlo methods in [] arise from a genuine feature of the Helmholtz equation with [math] large (and not, for example, a suboptimal bound).
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-05-18T07:00:00Z
DOI: 10.1137/22M1486170
Issue No: Vol. 11, No. 2 (2023)
- Noise Level Free Regularization of General Linear Inverse Problems under Unconstrained White Noise
Authors: Tim Jahn
Pages: 591 - 615
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 2, Page 591-615, June 2023.
In this note we solve a general statistical inverse problem in the absence of knowledge of both the noise level and the noise distribution via application of the (modified) heuristic discrepancy principle. Here the unbounded (non-Gaussian) noise is controlled by introducing an auxiliary discretization dimension and choosing it adaptively. We first show convergence for a completely arbitrary compact forward operator and ground solution. Then the uncertainty of reaching the optimal convergence rate is quantified in a specific Bayesian-like environment. We conclude with numerical experiments.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-05-25T07:00:00Z
DOI: 10.1137/22M1506675
Issue No: Vol. 11, No. 2 (2023)
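The heuristic discrepancy idea can be illustrated with the classical Hanke–Raus-type rule for Tikhonov regularization, which picks the parameter without knowing the noise level; the diagonal toy problem below is an assumption for illustration, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
s = 1.0 / (1 + np.arange(n))**2             # decaying singular values (ill-posed)
x_true = 1.0 / (1 + np.arange(n))
y = s * x_true + 1e-3 * rng.normal(size=n)  # noisy data; noise level treated as unknown

def residual(a):
    x_a = s * y / (s**2 + a)                # Tikhonov solution in SVD coordinates
    return np.linalg.norm(s * x_a - y)

alphas = np.logspace(-10, 0, 60)
# Heuristic rule: minimize residual(a)/sqrt(a), no noise level required.
a_star = min(alphas, key=lambda a: residual(a) / np.sqrt(a))
```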
- On Unbiased Estimation for Discretized Models
Authors: Jeremy Heng, Ajay Jasra, Kody J. H. Law, Alexander Tarakanov
Pages: 616 - 645
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 2, Page 616-645, June 2023.
In this article, we consider computing expectations w.r.t. probability measures which are subject to discretization error. Examples include partially observed diffusion processes or inverse problems, where one may have to discretize time and/or space in order to practically work with the probability of interest. Given access only to these discretizations, we consider the construction of unbiased Monte Carlo estimators of expectations w.r.t. such target probability distributions. It is shown how to obtain such estimators using a novel adaptation of randomization schemes and Markov simulation methods. Under appropriate assumptions, these estimators possess finite variance and finite expected cost. There are two important consequences of this approach: (i) unbiased inference is achieved at the canonical complexity rate, and (ii) the resulting estimators can be generated independently, thereby allowing strong scaling to arbitrarily many parallel processors. Several algorithms are presented and applied to some examples of Bayesian inference problems with both simulated and real observed data.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-05-26T07:00:00Z
DOI: 10.1137/21M1460788
Issue No: Vol. 11, No. 2 (2023)
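A minimal sketch of the randomization idea, in the Rhee–Glynn single-term style: draw a discretization level at random and reweight the level increment so the estimator is unbiased for the limiting expectation. The toy functional below stands in for a discretized model solve, and the level cap is a safety device a rigorous implementation would avoid:

```python
import numpy as np

rng = np.random.default_rng(3)

def phi(l):
    """Hypothetical level-l approximation of E[g(X)] on a 2**l-point grid."""
    t = (np.arange(2**l) + 0.5) / 2**l
    return np.mean(np.sin(np.pi * t))        # stands in for a discretized model solve

def single_term(p=0.6, max_l=25):
    l = int(rng.geometric(p)) - 1            # P(L = l) = p * (1 - p)**l
    l = min(l, max_l)                        # cap for safety only
    increment = phi(l) - (phi(l - 1) if l > 0 else 0.0)
    return increment / (p * (1 - p)**l)      # reweighting removes the bias

estimate = np.mean([single_term() for _ in range(10_000)])  # ≈ 2/pi
```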
- A Continuation Method in Bayesian Inference
Authors: Ben Mansour Dia
Pages: 646 - 681
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 2, Page 646-681, June 2023.
We present a continuation method that entails generating a sequence of transition probability density functions from the prior to the posterior in the context of Bayesian inference for parameter estimation problems. The characterization of transition distributions, by tempering the likelihood function, results in a homogeneous nonlinear partial integro-differential equation for which existence and uniqueness of solutions are addressed. The posterior probability distribution is obtained as the interpretation of the final state of a path of transition distributions. A computationally stable scaling domain for the likelihood is explored for approximation of the expected deviance, where we restrict the evaluations of the forward predictive model to the prior stage. To obtain a solution formulation for the expected deviance, we derive a partial differential equation governing the moment-generating function of the log-likelihood. We also show that a spectral formulation of the expected deviance can be obtained for low-dimensional problems under certain conditions. The effectiveness of the proposed method is demonstrated using four numerical examples. These focus on analyzing the computational bias generated by the method, assessing its use in Bayesian inference with non-Gaussian noise, evaluating its ability to invert a multimodal parameter of interest, and quantifying its performance in terms of computational cost.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-05-31T07:00:00Z
DOI: 10.1137/19M130251X
Issue No: Vol. 11, No. 2 (2023)
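Likelihood tempering, which this paper formalizes via a partial integro-differential equation for the transition densities, can be caricatured with importance weights on a prior sample; the Gaussian likelihood below is an assumption, and a full sampler would resample and move particles between temperatures:

```python
import numpy as np

rng = np.random.default_rng(4)

def log_like(theta, y=1.2, s=0.3):           # assumed toy Gaussian likelihood
    return -0.5 * ((y - theta) / s) ** 2

theta = rng.normal(size=5000)                # particles from a standard-normal prior
logw = np.zeros_like(theta)
temps = np.linspace(0.0, 1.0, 11)            # tempering path from prior to posterior
for t0, t1 in zip(temps[:-1], temps[1:]):
    logw += (t1 - t0) * log_like(theta)      # increment the likelihood exponent
    # (a full SMC implementation would resample/move the particles here)
w = np.exp(logw - logw.max()); w /= w.sum()
posterior_mean = float(np.sum(w * theta))
```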
- Scalable Physics-Based Maximum Likelihood Estimation Using Hierarchical Matrices
Authors: Yian Chen, Mihai Anitescu
Pages: 682 - 725
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 2, Page 682-725, June 2023.
Physics-based covariance models provide a systematic way to construct covariance models that are consistent with the underlying physical laws in Gaussian process analysis. The unknown parameters in the covariance models can be estimated using maximum likelihood estimation, but direct construction of the covariance matrix and classical strategies of computing with it require [math] physical model runs, [math] storage complexity, and [math] computational complexity. To address such challenges, we propose to approximate the discretized covariance function using hierarchical matrices. By utilizing randomized range sketching for individual off-diagonal blocks, the construction process of the hierarchical covariance approximation requires [math] physical model applications, and the maximum likelihood computations require [math] effort per iteration. We propose a new approach to compute exactly the trace of products of hierarchical matrices, which results in the expected Fisher information matrix being computable in [math] as well. The construction is entirely matrix-free, and the derivatives of the covariance matrix can then be approximated in the same hierarchical structure by differentiating the whole process. Numerical results are provided to demonstrate the effectiveness, accuracy, and efficiency of the proposed method for parameter estimation and uncertainty quantification.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-06-05T07:00:00Z
DOI: 10.1137/21M1458880
Issue No: Vol. 11, No. 2 (2023)
- Multilevel Delayed Acceptance MCMC (Open Access Article)
Authors: M. B. Lykkegaard, T. J. Dodwell, C. Fox, G. Mingas, R. Scheichl
Pages: 1 - 30
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 1, Page 1-30, March 2023.
We develop a novel Markov chain Monte Carlo (MCMC) method that exploits a hierarchy of models of increasing complexity to efficiently generate samples from an unnormalized target distribution. Broadly, the method rewrites the multilevel MCMC approach of Dodwell et al. [SIAM/ASA J. Uncertain. Quantif., 3 (2015), pp. 1075–1108] in terms of the delayed acceptance MCMC of Christen and Fox [J. Comput. Graph. Statist., 14 (2005), pp. 795–810]. In particular, delayed acceptance is extended to use a hierarchy of models of arbitrary depth and allow subchains of arbitrary length. We show that the algorithm satisfies detailed balance and hence is ergodic for the target distribution. Furthermore, multilevel variance reduction is derived that exploits the multiple levels and subchains, and an adaptive multilevel correction to coarse-level biases is developed. Three numerical examples of Bayesian inverse problems are presented that demonstrate the advantages of these novel methods. The software and examples are available in PyMC3.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-01-25T08:00:00Z
DOI: 10.1137/22M1476770
Issue No: Vol. 11, No. 1 (2023)
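The delayed-acceptance mechanism at the heart of the method, here in its basic two-level Christen–Fox form (the paper extends it to hierarchies of arbitrary depth with subchains); both log-densities below are toys:

```python
import numpy as np

rng = np.random.default_rng(5)

def log_post_coarse(x):                      # cheap approximate log-density (toy)
    return -0.5 * x**2

def log_post_fine(x):                        # expensive target log-density (toy)
    return -0.5 * x**2 - 0.1 * np.cos(3 * x)

x, chain = 0.0, []
for _ in range(5000):
    xp = x + rng.normal()                    # symmetric random-walk proposal
    # Stage 1: screen the proposal with the coarse model only.
    if rng.uniform() < min(1.0, np.exp(log_post_coarse(xp) - log_post_coarse(x))):
        # Stage 2: fine-model correction that restores detailed balance.
        a2 = np.exp(log_post_fine(xp) - log_post_fine(x)
                    + log_post_coarse(x) - log_post_coarse(xp))
        if rng.uniform() < min(1.0, a2):
            x = xp
    chain.append(x)
```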
- Uncertainty Quantification of Inclusion Boundaries in the Context of X-Ray Tomography
Authors: Babak Maboudi Afkham, Yiqiu Dong, Per Christian Hansen
Pages: 31 - 61
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 1, Page 31-61, March 2023.
In this work, we describe a Bayesian framework for reconstructing the boundaries of piecewise smooth regions in the X-ray computed tomography (CT) problem in an infinite-dimensional setting. In addition to the reconstruction, we quantify the uncertainty of the predicted boundaries. Our approach is goal-oriented, meaning that we directly detect the discontinuities from the data instead of reconstructing the entire image. This drastically reduces the dimension of the problem, which makes the application of Markov chain Monte Carlo (MCMC) methods feasible. We show that our method provides an excellent platform for challenging X-ray CT scenarios (e.g., noisy data, limited-angle imaging, or sparse-angle imaging). We investigate the performance and accuracy of our method on synthetic as well as real-world data. The numerical results indicate that our method accurately detects the boundaries of piecewise smooth regions and quantifies the uncertainty in the prediction.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-01-25T08:00:00Z
DOI: 10.1137/21M1433782
Issue No: Vol. 11, No. 1 (2023)
- On the Deep Active-Subspace Method
Authors: Wouter Edeling
Pages: 62 - 90
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 1, Page 62-90, March 2023.
The deep active-subspace method is a neural-network-based tool for the propagation of uncertainty through computational models with high-dimensional input spaces. Unlike the original active-subspace method, it does not require access to the gradient of the model. It relies on an orthogonal projection matrix constructed with Gram–Schmidt orthogonalization to reduce the input dimensionality. This matrix is incorporated into a neural network as the weight matrix of the first hidden layer (acting as an orthogonal encoder) and optimized using backpropagation to identify the active subspace of the input. We propose several theoretical extensions, starting with a new analytic relation for the derivatives of Gram–Schmidt vectors, which are required for backpropagation. We also study the use of vector-valued model outputs, which is difficult in the case of the original active-subspace method. Additionally, we investigate an alternative neural network with an encoder without embedded orthonormality, which performs as well as the deep active-subspace method. Two epidemiological models are considered as applications, one of which requires supercomputer access to generate the training data.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-02-02T08:00:00Z
DOI: 10.1137/21M1463240
Issue No: Vol. 11, No. 1 (2023)
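The orthogonal-encoder construction can be sketched directly: Gram–Schmidt maps an unconstrained weight matrix to an orthonormal basis of a candidate active subspace (the paper additionally derives the derivatives of this map for backpropagation); the sizes below are arbitrary:

```python
import numpy as np

def gram_schmidt(W):
    """Orthonormalize the columns of W (assumed linearly independent)."""
    Q = np.zeros_like(W, dtype=float)
    for j in range(W.shape[1]):
        # Subtract projections onto the already-orthonormalized columns.
        v = W[:, j] - Q[:, :j] @ (Q[:, :j].T @ W[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

rng = np.random.default_rng(6)
W = rng.normal(size=(10, 3))   # 10-dimensional input, 3-dimensional candidate subspace
Q = gram_schmidt(W)            # orthonormal encoder weights
z = Q.T @ rng.normal(size=10)  # reduced coordinates fed to the rest of the network
```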
- Analysis of a Class of Multilevel Markov Chain Monte Carlo Algorithms Based on Independent Metropolis–Hastings
Authors: Juan P. Madrigal-Cianci, Fabio Nobile, Raúl Tempone
Pages: 91 - 138
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 1, Page 91-138, March 2023.
In this work, we present, analyze, and implement a class of multilevel Markov chain Monte Carlo (ML-MCMC) algorithms based on independent Metropolis–Hastings proposals for Bayesian inverse problems. In this context, the likelihood function involves solving a complex differential model, which is then approximated on a sequence of increasingly accurate discretizations. The key point of this algorithm is to construct highly coupled Markov chains together with the standard multilevel Monte Carlo argument to obtain a better cost-tolerance complexity than a single-level MCMC algorithm. Our method extends the ideas of Dodwell et al. [SIAM/ASA J. Uncertain. Quantif., 3 (2015), pp. 1075–1108] to a wider range of proposal distributions. We present a thorough convergence analysis of the proposed ML-MCMC method and show, in particular, that (i) under some mild conditions on the (independent) proposals and the family of posteriors, there exists a unique invariant probability measure for the coupled chains generated by our method, and (ii) such coupled chains are uniformly ergodic. We also generalize the cost-tolerance theorem of Dodwell et al. to our wider class of ML-MCMC algorithms. Finally, we propose a self-tuning continuation-type ML-MCMC algorithm. The presented method is tested on an array of academic examples, where some of our theoretical results are numerically verified. These numerical experiments show that our extended ML-MCMC method is robust when targeting pathological posteriors for which some previously proposed ML-MCMC algorithms fail.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-03-03T08:00:00Z
DOI: 10.1137/21M1420927
Issue No: Vol. 11, No. 1 (2023)
- On the Generalized Langevin Equation for Simulated Annealing
Authors: Martin Chak, Nikolas Kantas, Grigorios A. Pavliotis
Pages: 139 - 167
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 1, Page 139-167, March 2023.
In this paper, we consider the generalized (higher-order) Langevin equation for the purpose of simulated annealing and optimization of nonconvex functions. Our approach modifies the underdamped Langevin equation by replacing the Brownian noise with an appropriate Ornstein–Uhlenbeck process to account for memory in the system. Under reasonable conditions on the loss function and the annealing schedule, we establish convergence of the continuous-time dynamics to a global minimum. In addition, we investigate the performance numerically and show better performance and greater exploration of the state space compared to the underdamped Langevin dynamics with the same annealing schedule.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-03-03T08:00:00Z
DOI: 10.1137/21M1462970
Issue No: Vol. 11, No. 1 (2023)
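A hedged Euler–Maruyama sketch of a quasi-Markovian form of such dynamics, with the Brownian force on the momentum replaced by a coupled Ornstein–Uhlenbeck variable and a growing inverse temperature; the loss, coupling constants, and schedule are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(7)

def grad_U(q):                       # gradient of the toy double well (q**2 - 1)**2
    return 4.0 * q * (q**2 - 1.0)

q, p, z = 2.0, 0.0, 0.0              # position, momentum, OU memory variable
lam, alpha, dt = 1.0, 1.0, 1e-3
for k in range(100_000):
    beta = np.log(2.0 + k * dt)      # slowly growing inverse temperature (assumed)
    q += p * dt
    p += (-grad_U(q) + lam * z) * dt
    z += (-alpha * z - lam * p) * dt + np.sqrt(2.0 * alpha / beta * dt) * rng.normal()
# As beta grows, q should concentrate near a global minimizer (here q = ±1).
```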
- Uncertainty Quantification and Experimental Design for Large-Scale Linear Inverse Problems under Gaussian Process Priors
Authors: Cédric Travelletti, David Ginsbourger, Niklas Linde
Pages: 168 - 198
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 1, Page 168-198, March 2023.
We consider the use of Gaussian process (GP) priors for solving inverse problems in a Bayesian framework. As is well known, the computational complexity of GPs scales cubically in the number of data points. Here we show that, in the context of inverse problems involving integral operators, one faces additional difficulties that hinder inversion on large grids. Furthermore, in that context, covariance matrices can become too large to be stored. By leveraging recent results about sequential disintegrations of Gaussian measures, we introduce an implicit representation of posterior covariance matrices that reduces the memory footprint by storing only low-rank intermediate matrices, while allowing individual elements to be accessed on the fly without needing to build full posterior covariance matrices. Moreover, it allows for fast sequential inclusion of new observations. These features are crucial when considering sequential experimental design tasks. We demonstrate our approach by computing sequential data collection plans for excursion set recovery for a gravimetric inverse problem, where the goal is to provide fine-resolution estimates of high-density regions inside the Stromboli volcano, Italy. Sequential data collection plans are computed by extending the weighted integrated variance reduction (wIVR) criterion to inverse problems. Our results show that this criterion is able to significantly reduce the uncertainty on the excursion volume, reaching close to minimal levels of residual uncertainty. Overall, our techniques allow the advantages of probabilistic models to be brought to bear on large-scale inverse problems arising in the natural sciences. In particular, applying the latest developments in Bayesian sequential experimental design to realistic large-scale problems opens new avenues of research at a crossroads between mathematical modelling of natural phenomena, statistical data science, and active learning.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-03-03T08:00:00Z
DOI: 10.1137/21M1445028
Issue No: Vol. 11, No. 1 (2023)
- Deep Learning in High Dimension: Neural Network Expression Rates for Analytic Functions in [math]
Authors: Christoph Schwab, Jakob Zech
Pages: 199 - 234
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 1, Page 199-234, March 2023.
For artificial deep neural networks, we prove expression rates for analytic functions [math] in the norm of [math] where [math]. Here [math] denotes the Gaussian product probability measure on [math]. We consider in particular [math] and [math] activations for integer [math]. For [math], we show exponential convergence rates in [math]. In case [math], under suitable smoothness and sparsity assumptions on [math], with [math] denoting an infinite (Gaussian) product measure on [math], we prove dimension-independent expression rate bounds in the norm of [math]. The rates only depend on quantified holomorphy of (an analytic continuation of) the map [math] to a product of strips in [math] (in [math] for [math], respectively). As an application, we prove expression rate bounds of deep [math]-NNs for response surfaces of elliptic PDEs with log-Gaussian random field inputs.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-03-03T08:00:00Z
DOI: 10.1137/21M1462738
Issue No: Vol. 11, No. 1 (2023)
- A Fast and Scalable Computational Framework for Large-Scale High-Dimensional Bayesian Optimal Experimental Design
Authors: Keyi Wu, Peng Chen, Omar Ghattas
Pages: 235 - 261
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 1, Page 235-261, March 2023.
We develop a fast and scalable computational framework to solve Bayesian optimal experimental design problems governed by partial differential equations (PDEs), with application to optimal sensor placement by maximizing the expected information gain (EIG). Such problems are particularly challenging due to the curse of dimensionality for high-dimensional parameters and the expensive solution of large-scale PDEs. To address these challenges, we exploit two fundamental properties: (1) the low-rank structure of the Jacobian of the parameter-to-observable map, to extract the intrinsically low-dimensional data-informed subspace, and (2) a series of approximations of the EIG that reduce the number of PDE solves while retaining high correlation with the true EIG. Based on these properties, we propose an efficient offline-online decomposition for the optimization problem. The offline stage dominates the cost and entails precomputing all components that require PDE solves. The online stage optimizes sensor placement and does not require any PDE solves. For the online stage, we propose a new greedy algorithm that first places an initial set of sensors using leverage scores and then swaps the selected sensors with other candidates until certain convergence criteria are met, which we call a swapping greedy algorithm. We demonstrate the efficiency and scalability of the proposed method on both linear and nonlinear inverse problems. In particular, we show that the number of required PDE solves is small, independent of the parameter dimension, and only weakly dependent on the data dimension for both problems.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-03-03T08:00:00Z
DOI: 10.1137/21M1466499
Issue No: Vol. 11, No. 1 (2023)
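The swapping-greedy pattern described above, in miniature: leverage scores of a toy Jacobian seed the placement, and pairwise swaps are accepted while a D-optimal log-determinant surrogate of the EIG improves. Everything here (the matrix J, the criterion) is a stand-in for the paper's PDE-based quantities:

```python
import numpy as np

rng = np.random.default_rng(8)
J = rng.normal(size=(100, 8))        # rows: candidate sensors; columns: parameter modes
k = 5                                # number of sensors to place

def crit(S):                         # log-det (D-optimal) information surrogate
    JS = J[sorted(S)]
    return np.linalg.slogdet(np.eye(8) + JS.T @ JS)[1]

# Initial placement from leverage scores of J.
lev = np.einsum('ij,ij->i', J @ np.linalg.pinv(J.T @ J), J)
S = set(int(i) for i in np.argsort(lev)[-k:])

improved = True
while improved:                      # swap until no exchange improves the criterion
    improved = False
    for i in sorted(S):
        for j in set(range(100)) - S:
            T = (S - {i}) | {j}
            if crit(T) > crit(S) + 1e-12:
                S, improved = T, True
                break                # rescan from scratch after an accepted swap
        if improved:
            break
```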
- Generalized Sparse Bayesian Learning and Application to Image Reconstruction
Authors: Jan Glaubitz, Anne Gelb, Guohui Song
Pages: 262 - 284
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 1, Page 262-284, March 2023.
Image reconstruction based on indirect, noisy, or incomplete data remains an important yet challenging task. While methods such as compressive sensing have demonstrated high-resolution image recovery in various settings, there remain issues of robustness due to parameter tuning. Moreover, since the recovery is limited to a point estimate, it is impossible to quantify the uncertainty, which is often desirable. Due to these inherent limitations, a sparse Bayesian learning approach is sometimes adopted to recover a posterior distribution of the unknown. Sparse Bayesian learning assumes that some linear transformation of the unknown is sparse. However, most of the methods developed are tailored to specific problems, with particular forward models and priors. Here, we present a generalized approach to sparse Bayesian learning. It has the advantage that it can be used for various types of data acquisitions and prior information. Some preliminary results on image reconstruction/recovery indicate its potential use for denoising, deblurring, and magnetic resonance imaging.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-03-03T08:00:00Z
DOI: 10.1137/22M147236X
Issue No: Vol. 11, No. 1 (2023)
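For orientation, the classical sparse Bayesian learning iteration (Tipping-style type-II maximum likelihood) that such generalizations build on; the problem sizes and the known noise precision below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)
n, d = 60, 100
A = rng.normal(size=(n, d))
x_true = np.zeros(d); x_true[[5, 37, 80]] = [3.0, -2.0, 1.5]
y = A @ x_true + 0.05 * rng.normal(size=n)

alpha = np.ones(d)                   # per-coefficient prior precisions (ARD)
beta = 1.0 / 0.05**2                 # noise precision, assumed known here
for _ in range(50):
    Sigma = np.linalg.inv(np.diag(alpha) + beta * A.T @ A)  # posterior covariance
    mu = beta * Sigma @ A.T @ y                             # posterior mean
    gamma = 1.0 - alpha * np.diag(Sigma)                    # effective dof per coefficient
    alpha = gamma / (mu**2 + 1e-12)                         # evidence update
# Large alpha prunes a coefficient to zero; mu recovers the sparse x_true.
```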
- Context-Aware Surrogate Modeling for Balancing Approximation and Sampling Costs in Multifidelity Importance Sampling and Bayesian Inverse Problems
Authors: Terrence Alsup, Benjamin Peherstorfer
Pages: 285 - 319
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 1, Page 285-319, March 2023.
Multifidelity methods leverage low-cost surrogate models to speed up computations and make occasional recourse to expensive high-fidelity models to establish accuracy guarantees. Because surrogate and high-fidelity models are used together, poor predictions by surrogate models can be compensated for with frequent recourse to high-fidelity models. Thus, there is a trade-off between investing computational resources to improve the accuracy of surrogate models and simply making more frequent recourse to expensive high-fidelity models; this trade-off is ignored by traditional modeling methods, which construct surrogate models that are meant to replace high-fidelity models rather than to be used together with them. This work considers multifidelity importance sampling and theoretically and computationally trades off increasing the fidelity of surrogate models, for constructing more accurate biasing densities, against the number of samples required from the high-fidelity models to compensate for poor biasing densities. Numerical examples demonstrate that such context-aware surrogate models for multifidelity importance sampling have lower fidelity than what is typically set as the tolerance in traditional model reduction, leading to runtime speedups of up to one order of magnitude in the presented examples.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-03-10T08:00:00Z
DOI: 10.1137/21M1445594
Issue No: Vol. 11, No. 1 (2023)
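Multifidelity importance sampling in miniature, assuming SciPy is available: a cheap surrogate shapes the biasing density, and a few reweighted high-fidelity evaluations keep the estimate unbiased; both "models" below are toy indicators:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

def high_fidelity(x):                 # expensive failure indicator (toy): P(X > 3)
    return (x > 3.0).astype(float)

def surrogate(x):                     # crude low-fidelity stand-in
    return (x > 2.8).astype(float)

# Cheap surrogate calls shape a Gaussian biasing density.
pilot = rng.normal(size=200_000)
hits = pilot[surrogate(pilot) > 0]
mu, sd = hits.mean(), hits.std() + 0.1
# A small number of reweighted high-fidelity evaluations gives an
# unbiased estimate even though the surrogate is crude.
xs = rng.normal(mu, sd, size=2_000)
w = stats.norm.pdf(xs) / stats.norm.pdf(xs, mu, sd)
p_hat = float(np.mean(high_fidelity(xs) * w))   # ≈ 1.35e-3 with far fewer HF calls
```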
- Complete Deterministic Dynamics and Spectral Decomposition of the Linear Ensemble Kalman Inversion
Authors: Leon Bungert, Philipp Wacker
Pages: 320 - 357
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 1, Page 320-357, March 2023.
The ensemble Kalman inversion (EKI) for the solution of Bayesian inverse problems of type [math], with [math] being an unknown parameter, [math] a given datum, and [math] measurement noise, is a powerful tool usually derived from a sequential Monte Carlo point of view. It describes the dynamics of an ensemble of particles [math], whose initial empirical measure is sampled from the prior, evolving over an artificial time [math] toward an approximate solution of the inverse problem, with [math] emulating the posterior and [math] corresponding to the underregularized minimum-norm solution of the inverse problem. Using spectral techniques, we provide a complete description of the deterministic dynamics of EKI and its asymptotic behavior in parameter space. In particular, we analyze the dynamics of naive EKI and mean-field EKI with a special focus on their time-asymptotic behavior. Furthermore, we show that, even in the deterministic case, residuals in parameter space do not decrease monotonically in the Euclidean norm, and we suggest a problem-adapted norm in which monotonicity can be proved. Finally, we derive a system of ordinary differential equations governing the spectrum and eigenvectors of the covariance matrix. While the analysis is aimed at the EKI, we believe that it can be applied to understand more general particle-based dynamical systems.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-03-15T07:00:00Z
DOI: 10.1137/21M1429461
Issue No: Vol. 11, No. 1 (2023)
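One discrete-time step of the deterministic (unperturbed-observation) EKI for a linear forward map, the setting analyzed spectrally above; the operator, noise covariance, and step size below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)
d, m, J = 4, 3, 50                   # parameter dim, data dim, ensemble size
G = rng.normal(size=(m, d))          # toy linear forward operator
Gamma = 0.1 * np.eye(m)              # observation noise covariance
y = G @ np.ones(d)                   # datum (noise-free for illustration)

U = rng.normal(size=(J, d))          # ensemble drawn from the prior
h = 0.1                              # artificial time step
for _ in range(200):
    Uc = U - U.mean(axis=0)          # centered particles
    GU = U @ G.T
    GUc = GU - GU.mean(axis=0)
    Cug = Uc.T @ GUc / J             # empirical cross-covariance
    Cgg = GUc.T @ GUc / J            # empirical data-space covariance
    K = Cug @ np.linalg.inv(Cgg + Gamma / h)
    U = U + (y - GU) @ K.T           # deterministic update, no perturbed observations
```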
- Certified Dimension Reduction for Bayesian Updating with the Cross-Entropy Method
Authors: Max Ehre, Rafael Flock, Martin Fußeder, Iason Papaioannou, Daniel Straub
Pages: 358 - 388
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 1, Page 358-388, March 2023.
In inverse problems, the parameters of a model are estimated based on observations of the model response. The Bayesian approach is powerful for solving such problems; one formulates a prior distribution for the parameter state that is updated with the observations to compute the posterior parameter distribution. Solving for the posterior distribution can be challenging when, e.g., prior and posterior significantly differ from one another and/or the parameter space is high-dimensional. We use a sequence of importance sampling measures that arise by tempering the likelihood to approach inverse problems exhibiting a significant distance between prior and posterior. Each importance sampling measure is identified by cross-entropy minimization as proposed in the context of Bayesian inverse problems in Engel et al. [J. Comput. Phys., 473 (2023), 111746]. To efficiently address problems with high-dimensional parameter spaces, we set up the minimization procedure in a low-dimensional subspace of the original parameter space. The principal idea is to analyze the spectrum of the second-moment matrix of the gradient of the log-likelihood function to identify a suitable subspace. Following Zahm et al. [Math. Comp., 91 (2022), pp. 1789–1835], an upper bound on the Kullback–Leibler divergence between full-dimensional and subspace posterior is provided, which can be utilized to determine the effective dimension of the inverse problem corresponding to a prescribed approximation error bound. We suggest heuristic criteria for optimally selecting the number of model and model gradient evaluations in each iteration of the importance sampling sequence. We investigate the performance of this approach using examples from engineering mechanics set in various parameter space dimensions.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2023-03-15T07:00:00Z
DOI: 10.1137/22M1484031
Issue No: Vol. 11, No. 1 (2023)
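The dimension-reduction step can be sketched as follows: eigendecompose the second-moment matrix of the log-likelihood gradient over prior samples and retain the dominant eigenvectors as the subspace for the cross-entropy updates. The toy Gaussian likelihood with three informed directions is an assumption:

```python
import numpy as np

rng = np.random.default_rng(12)
d = 40
W = np.zeros((3, d))                             # only 3 informed directions (assumed)
W[0, 0], W[1, 1], W[2, 2] = 2.0, -1.0, 0.5

def grad_log_like(x, y=np.ones(3), s=0.2):       # toy Gaussian log-likelihood gradient
    return W.T @ (y - W @ x) / s**2

X = rng.normal(size=(2000, d))                   # samples from a standard-normal prior
Gm = np.array([grad_log_like(x) for x in X])
H = Gm.T @ Gm / len(X)                           # second moment of the gradient
eigval, eigvec = np.linalg.eigh(H)
frac = np.cumsum(eigval[::-1]) / eigval.sum()    # energy captured by leading directions
r = int(np.searchsorted(frac, 0.99)) + 1         # effective dimension for 99% energy
V = eigvec[:, ::-1][:, :r]                       # subspace basis (here r recovers 3)
```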