SIAM/ASA Journal on Uncertainty Quantification. Journal Prestige (SJR): 0.543. Citation Impact (CiteScore): 1. Hybrid journal (may contain Open Access articles). ISSN (Print): 2166-2525. Published by the Society for Industrial and Applied Mathematics.
• Computer Model Calibration with Time Series Data Using Deep Learning and
Quantile Regression

Authors: Saumya Bhatnagar, Won Chang, Seonjin Kim, Jiali Wang
Pages: 1 - 26
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 1-26, January 2022.
Computer models play a key role in many scientific and engineering problems. One major source of uncertainty in computer model experiments is input parameter uncertainty. Computer model calibration is a formal statistical procedure to infer input parameters by combining information from model runs and observational data. The existing standard calibration framework suffers from inferential issues when the model output and observational data are high-dimensional dependent data, such as large time series, due to the difficulty in building an emulator and the nonidentifiability between effects from input parameters and data-model discrepancy. To overcome these challenges, we propose a new calibration framework based on a deep neural network (DNN) with long short-term memory layers that directly emulates the inverse relationship between the model output and input parameters. Adopting the “learning with noise” idea, we train our DNN model to filter out the effects from data-model discrepancy on input parameter inference. We also formulate a new way to construct interval predictions for DNN using quantile regression to quantify the uncertainty in input parameter estimates. Through a simulation study and real data application with the Weather Research and Forecasting Model Hydrological modeling system (WRF-Hydro), we show our approach can yield accurate point estimates and well-calibrated interval estimates for input parameters.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-01-05T08:00:00Z
DOI: 10.1137/20M1382581
Issue No: Vol. 10, No. 1 (2022)
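The interval construction above rests on quantile regression, i.e., minimizing the pinball loss at two quantile levels. As a minimal illustration (not the authors' LSTM emulator), the sketch below fits scalar quantiles by subgradient descent on that loss; the function names, step size, and iteration count are illustrative choices:

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Average quantile (pinball) loss of a scalar predictor q at level tau."""
    r = y - q
    return np.mean(np.maximum(tau * r, (tau - 1) * r))

def fit_constant_quantile(y, tau, lr=0.5, steps=3000):
    """Fit a scalar tau-quantile by subgradient descent on the pinball loss."""
    q = float(np.mean(y))
    for _ in range(steps):
        grad = np.mean(np.where(y > q, -tau, 1.0 - tau))  # pinball subgradient
        q -= lr * grad
    return q

# An interval estimate is just a pair of fitted quantiles.
y = np.arange(101.0)
interval = (fit_constant_quantile(y, 0.05), fit_constant_quantile(y, 0.95))
```

The minimizer of the pinball loss is the empirical quantile, which is why fitting it at levels 0.05 and 0.95 yields a 90% interval.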

• A Generalized Kernel Method for Global Sensitivity Analysis

Authors: John Barr, Herschel Rabitz
Pages: 27 - 54
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 27-54, January 2022.
Global sensitivity analysis (GSA) is frequently used to analyze how the uncertainty in input parameters of computational models or in experimental setups influences the uncertainty of an output. Here we describe a class of GSA measures based on the embedding of the multiple output's joint probability distribution into a reproducing kernel Hilbert space (RKHS). In particular, the distance between embeddings is measured utilizing the maximum mean discrepancy, which has several key advantages over many common sensitivity measures. First, the proposed methodology defines measures for an arbitrary type of output, while maintaining easy computability for high-dimensional outputs. Second, by utilizing different kernels, or RKHSs, one can determine how the input parameters influence different features of the output distribution. This new class of sensitivity analysis measures, encapsulated into what are called $\beta^k$-indicators, is shown to contain both moment-independent and moment-based measures as special cases; the specific $\beta^k$-indicator arises from the particular choice of kernel. The analysis includes deriving new GSA measures as well as showing that certain previously proposed GSA measures, such as the variance-based indicators, are special cases of the $\beta^k$-indicators. Some basic test cases showcase that the $\beta^k$-indicators derived from kernel-based GSA provide flexible tools capable of assessing a broad range of applications.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-01-10T08:00:00Z
DOI: 10.1137/20M1354829
Issue No: Vol. 10, No. 1 (2022)
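The maximum mean discrepancy (MMD) at the heart of the $\beta^k$-indicators compares two samples through their kernel embeddings. Below is a minimal numpy sketch with a Gaussian RBF kernel, applied to a toy sensitivity comparison: fix one input and measure how far the conditional output distribution moves from the marginal one. The toy model and all parameter values are illustrative:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of the squared maximum mean discrepancy."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

# Toy GSA: Y = X1 + 0.1*X2. Fixing an input and comparing the conditional
# output sample to the marginal one reveals how influential that input is.
rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y_marg = (x1 + 0.1 * x2).reshape(-1, 1)
y_fix1 = (1.5 + 0.1 * rng.standard_normal(n)).reshape(-1, 1)  # X1 fixed at 1.5
y_fix2 = (rng.standard_normal(n) + 0.15).reshape(-1, 1)       # X2 fixed at 1.5
s1, s2 = mmd2(y_marg, y_fix1), mmd2(y_marg, y_fix2)
```

Fixing the dominant input shifts the output distribution far from the marginal, giving a much larger MMD than fixing the weak input.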

• Joint Online Parameter Estimation and Optimal Sensor Placement for the
Partially Observed Stochastic Advection-Diffusion Equation

Authors: Louis Sharrock, Nikolas Kantas
Pages: 55 - 95
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 55-95, January 2022.
In this paper, we consider the problem of jointly performing online parameter estimation and optimal sensor placement for a partially observed infinite-dimensional linear diffusion process. We present a novel solution to this problem in the form of a continuous-time, two-timescale stochastic gradient descent algorithm, which recursively seeks to maximize the asymptotic log-likelihood of the observations with respect to the unknown model parameters and to minimize the expected mean squared error of the hidden state estimate with respect to the sensor locations. We also provide extensive numerical results illustrating the performance of the proposed approach in the case that the hidden signal is governed by the two-dimensional stochastic advection-diffusion equation.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-01-10T08:00:00Z
DOI: 10.1137/20M1375073
Issue No: Vol. 10, No. 1 (2022)
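A two-timescale stochastic gradient scheme runs two coupled recursions with step sizes decaying at different rates. The toy below (a finite-dimensional stand-in for the paper's infinite-dimensional setting) estimates a scalar parameter on the fast timescale while a "sensor location" tracks the estimate on the slow one; the model and the step-size exponents are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0
theta, s = 0.0, -1.0

for n in range(1, 20001):
    y = theta_true + rng.standard_normal()  # one noisy observation
    a = n ** -0.6                           # fast step size: parameter estimate
    b = n ** -0.9                           # slow step size: sensor location
    theta += a * (y - theta)                # ascent on the log-likelihood
    s += b * (theta - s)                    # descent on the placement loss (s - theta)^2
```

Because `b` decays faster than `a`, the slow iterate effectively sees the fast one as already converged, which is the mechanism the two-timescale analysis formalizes.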

• Finite Sample Approximations of Exact and Entropic Wasserstein Distances
Between Covariance Operators and Gaussian Processes

Authors: Hà Quang Minh
Pages: 96 - 124
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 96-124, January 2022.
This work studies finite sample approximations of the exact and entropic regularized Wasserstein distances between centered Gaussian processes and, more generally, covariance operators of functional random processes. We first show that these distances/divergences are fully represented by reproducing kernel Hilbert space (RKHS) covariance and cross-covariance operators associated with the corresponding covariance functions. Using this representation, we show that the Sinkhorn divergence between two centered Gaussian processes can be consistently and efficiently estimated from the divergence between their corresponding normalized finite-dimensional covariance matrices or, alternatively, their sample covariance operators. Consequently, this leads to a consistent and efficient algorithm for estimating the Sinkhorn divergence from finite samples generated by the two processes. For a fixed regularization parameter, the convergence rates are dimension-independent and of the same order as those for the Hilbert--Schmidt distance. If at least one of the RKHSs is finite-dimensional, we obtain a dimension-dependent sample complexity for the exact Wasserstein distance between the Gaussian processes.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-01-10T08:00:00Z
DOI: 10.1137/21M1410488
Issue No: Vol. 10, No. 1 (2022)
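For reference, the exact 2-Wasserstein distance between two Gaussians, whose finite-sample behavior the paper studies, has a well-known closed form built on the Bures metric between covariances. A sketch using `scipy.linalg.sqrtm`:

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, C1, m2, C2):
    """Exact 2-Wasserstein distance between N(m1, C1) and N(m2, C2)."""
    s1 = sqrtm(C1)
    cross = sqrtm(s1 @ C2 @ s1)              # (C1^{1/2} C2 C1^{1/2})^{1/2}
    bures2 = np.trace(C1 + C2 - 2 * cross).real
    return float(np.sqrt(np.sum((m1 - m2) ** 2) + max(bures2, 0.0)))
```

The `max(..., 0.0)` guards against tiny negative round-off from the matrix square roots; the paper's contribution concerns how well such quantities are approximated from finite covariance matrices and samples.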

• Landmark-Warped Emulators for Models with Misaligned Functional Response

Authors: Devin Francom, Bruno Sansó, Ana Kupresanin
Pages: 125 - 150
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 125-150, January 2022.
Many computer models output functional data, and in some cases, these functional data have similar, but misaligned, shape characteristics. We introduce a general approach for building emulators for computer models that output misaligned functional data when key values in the functional response (landmarks) can be easily identified. This approach has two main parts: modeling the aligned (using the landmarks) functional data, and modeling the functions that map the misaligned data to the aligned space (warping functions). As the warping functions are required to be monotonic, we give special attention to modeling monotonic functional response data. We discuss how our approach can be easily applied for a variety of typical emulators, such as Gaussian processes, Bayesian multivariate adaptive regression splines, and Bayesian additive regression trees, and how sensitivity analysis can be performed. We demonstrate our approach by building emulators for two applications: (1) a high-energy-density physics computer model used to simulate inertial confinement fusion ignition experiments, where model outputs are highly misaligned, and (2) a multiphysics continuum hydrocode used to simulate high-velocity impact experiments, where model outputs are only slightly misaligned. In case (1) traditional methods cannot be applied, while in (2) they can be applied, but the proposed method performs significantly better.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-01-11T08:00:00Z
DOI: 10.1137/20M135279X
Issue No: Vol. 10, No. 1 (2022)
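The warping idea can be illustrated with the simplest monotone map: a piecewise-linear warp pinned at the landmarks. The sketch below (a toy stand-in for the paper's monotone functional models) aligns a curve so its peak lands at a reference location; the function name and the example curve are illustrative:

```python
import numpy as np

def landmark_warp(t, y, landmarks, ref_landmarks):
    """Align y(t) so its landmarks land on reference positions.

    Builds a monotone piecewise-linear time warp w with w(ref) = landmarks
    (endpoints pinned) and returns the aligned curve y(w(t)).
    """
    knots_in = np.concatenate(([t[0]], ref_landmarks, [t[-1]]))
    knots_out = np.concatenate(([t[0]], landmarks, [t[-1]]))
    w = np.interp(t, knots_in, knots_out)   # monotone if landmarks are sorted
    return np.interp(w, t, y)

t = np.linspace(0.0, 1.0, 101)
y = np.exp(-((t - 0.3) / 0.05) ** 2)        # peak (landmark) at t = 0.3
aligned = landmark_warp(t, y, np.array([0.3]), np.array([0.5]))
```

Emulation then proceeds on the aligned curves plus the (monotone) warps, which is what makes misaligned outputs tractable.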

• Bayesian Inference of an Uncertain Generalized Diffusion Operator

Authors: Teresa Portone, Robert D. Moser
Pages: 151 - 178
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 151-178, January 2022.
This paper defines a novel Bayesian inverse problem to infer an infinite-dimensional uncertain operator appearing in a differential equation, whose action on an observable state variable affects its dynamics. Inference is made tractable by parametrizing the operator using its eigendecomposition. The plausibility of operator inference in the sparse data regime is explored in terms of an uncertain, generalized diffusion operator appearing in an evolution equation for a contaminant's transport through a heterogeneous porous medium. Sparse data are augmented with prior information through the imposition of deterministic constraints on the eigendecomposition and the use of qualitative information about the system in the definition of the prior distribution. Limited observations of the state variable's evolution are used as data for inference, and the dependence of the inverse problem's solution on the frequency of observations is studied, as well as on whether the data are collected as a spatial or a time series.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-01-13T08:00:00Z
DOI: 10.1137/21M141659X
Issue No: Vol. 10, No. 1 (2022)

• Parameter Estimation in an SPDE Model for Cell Repolarization

Authors: Randolf Altmeyer, Till Bretschneider, Josef Janák, Markus Reiß
Pages: 179 - 199
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 179-199, February 2022.
As a concrete setting where stochastic partial differential equations (SPDEs) are able to model real phenomena, we propose a stochastic Meinhardt model for cell repolarization and study how parameter estimation techniques developed for simple linear SPDE models apply in this situation. We establish the existence of mild SPDE solutions, and we investigate the impact of the driving noise process on pattern formation in the solution. We then pursue estimation of the diffusion term and show asymptotic normality for our estimator as the space resolution becomes finer. The finite sample performance is investigated for synthetic and real data.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-02-07T08:00:00Z
DOI: 10.1137/20M1373347
Issue No: Vol. 10, No. 1 (2022)

• Analysis of Nested Multilevel Monte Carlo Using Approximate Normal Random
Variables

Authors: Michael Giles, Oliver Sheridan-Methven
Pages: 200 - 226
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 200-226, March 2022.
The multilevel Monte Carlo (MLMC) method has been used for a wide variety of stochastic applications. In this paper we consider its use in situations in which input random variables can be replaced by similar approximate random variables which can be computed much more cheaply. A nested MLMC approach is adopted in which a two-level treatment of the approximated random variables is embedded within a standard MLMC application. We analyze the resulting nested MLMC variance in the specific context of an SDE discretization in which normal random variables can be replaced by approximately normal random variables, and we provide numerical results to support the analysis.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-02-08T08:00:00Z
DOI: 10.1137/21M1399385
Issue No: Vol. 10, No. 1 (2022)
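The standard MLMC telescoping estimator, inside which the paper nests a two-level treatment of approximate random variables, can be sketched for a geometric Brownian motion with coupled coarse/fine Euler paths; all model parameters and sample sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlmc_level(l, n, mu=0.05, sig=0.2, s0=1.0, T=1.0):
    """Samples of the MLMC correction P_l - P_{l-1} (or P_0 at level 0).

    P is the terminal value S_T of a GBM discretized by Euler with 2^l steps;
    coarse and fine paths share Brownian increments for variance reduction.
    """
    m = 2 ** l
    dt = T / m
    dW = rng.standard_normal((n, m)) * np.sqrt(dt)
    Sf = np.full(n, s0)
    for j in range(m):
        Sf = Sf * (1 + mu * dt + sig * dW[:, j])
    if l == 0:
        return Sf
    Sc = np.full(n, s0)
    dWc = dW[:, 0::2] + dW[:, 1::2]   # pair fine increments for the coarse path
    for j in range(m // 2):
        Sc = Sc * (1 + mu * 2 * dt + sig * dWc[:, j])
    return Sf - Sc

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
estimate = sum(mlmc_level(l, 40000).mean() for l in range(5))
```

The coupling makes the correction variances decay with level, so most samples are spent on the cheap coarse levels; the paper's nested scheme adds a further two-level split at each level for the approximate normal inputs.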

• Nonlinear Reduced Models for State and Parameter Estimation

Authors: Albert Cohen, Wolfgang Dahmen, Olga Mula, James Nichols
Pages: 227 - 267
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 227-267, March 2022.
State estimation aims at approximately reconstructing the solution $u$ to a parametrized partial differential equation from $m$ linear measurements when the parameter vector $y$ is unknown. Fast numerical recovery methods have been proposed in Maday et al. [Internat. J. Numer. Methods Engrg., 102 (2015), pp. 933--965] based on reduced models which are linear spaces of moderate dimension $n$ that are tailored to approximate the solution manifold ${\cal M}$ where the solution sits. These methods can be viewed as deterministic counterparts to Bayesian estimation approaches and are proved to be optimal when the prior is expressed by approximability of the solution with respect to the reduced model [P. Binev et al., SIAM/ASA J. Uncertain. Quantif., 5 (2017), pp. 1--29]. However, they are inherently limited by their linear nature, which bounds from below their best possible performance by the Kolmogorov width $d_m({\cal M})$ of the solution manifold. In this paper, we propose to break this barrier by using simple nonlinear reduced models that consist of a finite union of linear spaces $V_k$, each having dimension at most $m$ and leading to different estimators $u_k^*$. A model selection mechanism based on minimizing the PDE residual over the parameter space is used to select from this collection the final estimator $u^*$. Our analysis shows that $u^*$ meets optimal recovery benchmarks that are inherent to the solution manifold and not tied to its Kolmogorov width. The residual minimization procedure is computationally simple in the relevant case of affine parameter dependence in the PDE. In addition, it results in an estimator $y^*$ for the unknown parameter vector. In this setting, we also discuss an alternating minimization (coordinate descent) algorithm for joint state and parameter estimation that potentially improves the quality of both estimators.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-02-08T08:00:00Z
DOI: 10.1137/20M1380818
Issue No: Vol. 10, No. 1 (2022)
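The selection mechanism can be illustrated in a toy linear-algebra setting: reconstruct a state from a few linear measurements using each candidate subspace, then keep the candidate with the smallest residual. Here a measurement residual stands in for the paper's PDE residual, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 10
A = rng.standard_normal((m, n))      # m linear measurements of the state

V1 = rng.standard_normal((n, 3))     # candidate reduced space 1
V2 = rng.standard_normal((n, 3))     # candidate reduced space 2 (the right one)
u_true = V2 @ np.array([1.0, -2.0, 0.5])
w = A @ u_true                       # observed measurements

def estimate(V):
    """Least-squares reconstruction of the state constrained to span(V)."""
    c, *_ = np.linalg.lstsq(A @ V, w, rcond=None)
    return V @ c

candidates = [estimate(V) for V in (V1, V2)]
residuals = [np.linalg.norm(A @ u - w) for u in candidates]
u_star = candidates[int(np.argmin(residuals))]   # model selection by residual
```

Because each candidate space is low-dimensional, the union of spaces can approximate the solution manifold far better than any single linear space of the same dimension, which is the barrier-breaking point of the paper.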

• Intermediate Variable Emulation: Using Internal Processes in Simulators to
Build More Informative Emulators

Authors: Rachel H. Oughton, Michael Goldstein, John C. P. Hemmings
Pages: 268 - 293
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 268-293, March 2022.
Complex systems are often modeled by intricate and intensive computer simulators. This makes their behavior difficult to study, and so a statistical representation of the simulator is often used, known as an emulator, to enable users to explore the space more thoroughly. These have the disadvantage that they do not allow one to learn about the simulator's behavior beyond its role as a function from input to output variables. We take a new approach by involving the internal processes modeled within the simulator in our emulator. We introduce a new technique, intermediate variable emulation, which enables a simulator to be understood in terms of the processes it models. This leads to advantages in simulator improvement and in calibration, as the simulator can be scrutinized in more detail and the physical processes can be used to refine the input space. The intermediate variable emulator also allows one to represent more complicated relationships within the simulator, as we show with a simple example. We demonstrate the method using a simulator of the ocean carbon cycle. Using an intermediate variable emulator we are able to discover unrealistic behavior in the simulator that would not be noticeable using a standard input to output emulator and reduce the input space accordingly. We also learn about the subprocesses that drive the output and about the input variables driving each subprocess.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-02-28T08:00:00Z
DOI: 10.1137/20M1370902
Issue No: Vol. 10, No. 1 (2022)

• Cross-Validation-Based Adaptive Sampling for Gaussian Process Models

Authors: Hossein Mohammadi, Peter Challenor, Daniel Williamson, Marc Goodfellow
Pages: 294 - 316
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 294-316, March 2022.
In many real-world applications, we are interested in approximating black-box, costly functions as accurately as possible with the smallest number of function evaluations. A complex computer code is an example of such a function. In this work, a Gaussian process (GP) emulator is used to approximate the output of complex computer code. We consider the problem of extending an initial experiment (set of model runs) sequentially to improve the emulator. A sequential sampling approach based on leave-one-out (LOO) cross-validation is proposed that can be easily extended to a batch mode. This is a desirable property since it saves the user time when parallel computing is available. After fitting a GP to training data points, the expected squared LOO (ES-LOO) error is calculated at each design point. ES-LOO is used as a measure to identify important data points. More precisely, when this quantity is large at a point it means that the quality of prediction depends a great deal on that point and adding more samples nearby could improve the accuracy of the GP. As a result, it is reasonable to select the next sample where ES-LOO is maximized. However, ES-LOO is only known at the experimental design and needs to be estimated at unobserved points. To do this, a second GP is fitted to the ES-LOO errors, and the next sample is chosen where the maximum of the modified expected improvement (EI) criterion occurs. EI is a popular acquisition function in Bayesian optimization and is used to trade off between local and global search. However, it has a tendency towards exploitation, meaning that its maximum is close to the (current) "best" sample. To avoid clustering, a modified version of EI, called pseudoexpected improvement, is employed which is more explorative than EI yet allows us to discover unexplored regions. Our results show that the proposed sampling method is promising.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-02-28T08:00:00Z
DOI: 10.1137/21M1404260
Issue No: Vol. 10, No. 1 (2022)
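The LOO errors driving ES-LOO need not be computed by refitting the GP $n$ times: for a zero-mean GP interpolator they follow from the closed-form identity $e_i = (K^{-1}y)_i / (K^{-1})_{ii}$. A minimal sketch (the RBF kernel and nugget value are illustrative choices):

```python
import numpy as np

def gp_loo_errors(X, y, gamma=1.0, nugget=1e-6):
    """Leave-one-out residuals of a zero-mean GP with RBF kernel, no refits.

    Uses the closed-form identity e_i = (K^{-1} y)_i / (K^{-1})_{ii}.
    """
    d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * d2) + nugget * np.eye(len(y))
    Kinv = np.linalg.inv(K)
    return (Kinv @ y) / np.diag(Kinv)
```

Squaring these residuals gives the quantity that the method smooths with a second GP to decide where to sample next.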

• The Ensemble Kalman Filter for Rare Event Estimation

Authors: Fabian Wagner, Iason Papaioannou, Elisabeth Ullmann
Pages: 317 - 349
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 317-349, March 2022.
We present a novel sampling-based method for estimating probabilities of rare or failure events. Our approach is founded on the ensemble Kalman filter (EnKF) for inverse problems. Therefore, we reformulate the rare event problem as an inverse problem and apply the EnKF to generate failure samples. To estimate the probability of failure, we use the final EnKF samples to fit a distribution model and apply importance sampling with respect to the fitted distribution. This leads to an unbiased estimator if the density of the fitted distribution admits positive values within the whole failure domain. To handle multimodal failure domains, we localize the covariance matrices in the EnKF update step around each particle and fit a mixture distribution model in the importance sampling step. For affine linear limit-state functions, we investigate the continuous time limit and large time properties of the EnKF update. We prove that the mean of the particles converges to a convex combination of the most likely failure point and the mean of the optimal importance sampling density if the EnKF is applied without noise. We provide numerical experiments to compare the performance of the EnKF with sequential importance sampling.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-02-28T08:00:00Z
DOI: 10.1137/21M1404119
Issue No: Vol. 10, No. 1 (2022)
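A single stochastic EnKF analysis step with perturbed observations, the building block the paper iterates toward the failure domain, can be sketched as follows (a generic textbook update, not the paper's localized variant):

```python
import numpy as np

def enkf_update(particles, H, y, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    particles: (n, d) ensemble; H: (p, d) observation operator; y: (p,) data;
    R: (p, p) observation noise covariance.
    """
    n = particles.shape[0]
    X = particles - particles.mean(axis=0)
    C = X.T @ X / (n - 1)                           # ensemble covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)    # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n)
    return particles + (y_pert - particles @ H.T) @ K.T
```

In a linear Gaussian setting this update reproduces the Bayesian posterior as the ensemble grows, which is what makes the final ensemble a useful basis for fitting an importance sampling density.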

• Varying Coefficient Models and Design Choice for Bayes Linear Emulation of
Complex Computer Models with Limited Model Evaluations

Authors: Amy L. Wilson, Michael Goldstein, Chris J. Dent
Pages: 350 - 378
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 350-378, March 2022.
Computer models are widely used to help make decisions about real-world systems. As computer models of large and complex systems can have long run-times and high-dimensional input spaces, it is often necessary to use emulation to assess uncertainties in computer model output. This paper presents methodology for emulation of complex computer models motivated by a real-world example in energy policy. The computer model studied is an economic model of investment in electricity generation in Great Britain. The computer model was used to select parameters in a government policy designed to incentivize investment in renewable technologies to meet government targets. Limited computing time meant that few runs of the computer model were available to fit an emulator. The statistical methodology developed was therefore focused on accurately capturing the uncertainty in computer model output arising from the small number of available model runs. A varying coefficient emulator is proposed to model uncertainty in model output when extrapolating away from model runs. To maximize use of the small number of runs available, this varying coefficient emulator is paired with a criterion-based procedure for design selection.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-03-07T08:00:00Z
DOI: 10.1137/20M1318560
Issue No: Vol. 10, No. 1 (2022)

• Block-Diagonal Covariance Estimation and Application to the Shapley
Effects in Sensitivity Analysis

Authors: Baptiste Broto, François Bachoc, Laura Clouvel, Jean-Marc Martinez
Pages: 379 - 403
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 379-403, March 2022.
This paper deals with the estimation of sensitivity indices called “Shapley effects” when the model is linear and when the input vector is high dimensional with a Gaussian distribution. The computation cost of the Shapley effects makes it necessary to focus on the case where the input vector has a block-diagonal covariance matrix. First, we estimate a block-diagonal covariance matrix from Gaussian variables in high dimension. We prove that, under some mild assumptions, we find the block-diagonal structure of the matrix with probability that goes to one. We deduce an estimator of the covariance matrix that is as accurate as if the block-diagonal structure was known, with numerical applications. We also prove the asymptotic efficiency of a similar estimator in fixed dimension. Then, we apply this estimator for the estimation of the Shapley effects, in the Gaussian linear framework. We derive an estimator of the Shapley effects in high dimension with a relative error that converges to 0 at the parametric rate, up to a logarithmic factor. Finally, we apply the Shapley effects estimator on nuclear data.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-03-22T07:00:00Z
DOI: 10.1137/20M1358839
Issue No: Vol. 10, No. 1 (2022)
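For small input dimension, the Shapley effects in the Gaussian linear setting can be computed exactly by subset enumeration, since $\mathrm{Var}(\mathbb{E}[Y \mid X_u])$ has a closed form. The sketch below is a brute-force reference (exponential in dimension), not the paper's high-dimensional estimator:

```python
import numpy as np
from itertools import combinations
from math import factorial

def var_cond_mean(beta, Sigma, u):
    """Var(E[Y | X_u]) for the linear model Y = beta.X with X ~ N(0, Sigma)."""
    if len(u) == 0:
        return 0.0
    u = list(u)
    v = [j for j in range(len(beta)) if j not in u]
    Suu = Sigma[np.ix_(u, u)]
    c = beta[u] + np.linalg.solve(Suu, Sigma[np.ix_(u, v)] @ beta[v])
    return float(c @ Suu @ c)

def shapley_effects(beta, Sigma):
    """Exact normalized Shapley effects by subset enumeration (small d only)."""
    d = len(beta)
    total = float(beta @ Sigma @ beta)              # Var(Y)
    sh = np.zeros(d)
    for i in range(d):
        rest = [j for j in range(d) if j != i]
        for k in range(d):
            w = factorial(k) * factorial(d - k - 1) / factorial(d)
            for u in combinations(rest, k):
                sh[i] += w * (var_cond_mean(beta, Sigma, list(u) + [i])
                              - var_cond_mean(beta, Sigma, u))
    return sh / total
```

For independent inputs the Shapley effects reduce to the usual Sobol indices, and by the efficiency axiom they always sum to one, even under correlation.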

• A Spline Dimensional Decomposition for Uncertainty Quantification in High
Dimensions

Authors: Sharif Rahman, Ramin Jahanbin
Pages: 404 - 438
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 404-438, March 2022.
This study debuts a new spline dimensional decomposition (SDD) for uncertainty quantification analysis of high-dimensional functions, including those endowed with high nonlinearity and nonsmoothness, if they exist, in a proficient manner. The decomposition creates a hierarchical expansion for an output random variable of interest with respect to measure-consistent orthonormalized basis splines (B-splines) in independent input random variables. A dimensionwise decomposition of a spline space into orthogonal subspaces, each spanned by a reduced set of such orthonormal splines, results in SDD. Exploiting the modulus of smoothness, the SDD approximation is shown to converge in mean-square to the correct limit. The computational complexity of the SDD method is polynomial, as opposed to exponential, thus alleviating the curse of dimensionality to the extent possible. Analytical formulae are proposed to calculate the second-moment properties of a truncated SDD approximation for a general output random variable in terms of the expansion coefficients involved. Numerical results indicate that a low-order SDD approximation of nonsmooth functions calculates the probabilistic characteristics of an output variable with an accuracy matching or surpassing those obtained by high-order approximations from several existing methods. Finally, a 34-dimensional random eigenvalue analysis demonstrates the utility of SDD in solving practical problems.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-03-22T07:00:00Z
DOI: 10.1137/20M1364175
Issue No: Vol. 10, No. 1 (2022)
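The B-spline bases underlying SDD can be evaluated with the Cox-de Boor recursion; a minimal sketch for a single evaluation point (clamped knot vectors and half-open intervals assumed, so the right endpoint needs separate handling):

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """All B-spline basis values at a point x via the Cox-de Boor recursion.

    Uses half-open intervals, so x must lie in [knots[degree], knots[-degree-1]).
    """
    t = np.asarray(knots, dtype=float)
    B = np.array([1.0 if t[i] <= x < t[i + 1] else 0.0
                  for i in range(len(t) - 1)])
    for p in range(1, degree + 1):
        Bp = np.zeros(len(t) - p - 1)
        for i in range(len(t) - p - 1):
            left = 0.0 if t[i + p] == t[i] else \
                (x - t[i]) / (t[i + p] - t[i]) * B[i]
            right = 0.0 if t[i + p + 1] == t[i + 1] else \
                (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) * B[i + 1]
            Bp[i] = left + right
        B = Bp
    return B
```

The bases are nonnegative and sum to one at every interior point (partition of unity), the property that makes them convenient building blocks for orthonormalization in SDD.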

• Effective Generation of Compressed Stationary Gaussian Fields

Authors: Robert Sawko, Małgorzata J. Zimoń
Pages: 439 - 452
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 439-452, March 2022.
In this paper, we present a novel approach to compression of two-dimensional Gaussian random fields. We build on a circulant embedding method to effectively decompose and generate sample realizations. By employing the structure of the resulting circulant matrix, we propose a truncation algorithm that controls energy through rank and values of retained spectral components. In contrast with naive truncation, such construction ensures that the covariance matrix remains realizable. We discuss the properties and efficiency of the algorithm with numerical examples.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-03-22T07:00:00Z
DOI: 10.1137/20M1375541
Issue No: Vol. 10, No. 1 (2022)
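The core of circulant embedding is that a circulant covariance diagonalizes in the Fourier basis, so samples come from an FFT of independent complex normals scaled by the square-root eigenvalues. A minimal 1D sketch (the wrapped-exponential covariance on a ring is an illustrative choice):

```python
import numpy as np

def sample_circulant_gaussian(cov_row, n_samples, rng):
    """Draw stationary Gaussian vectors with a circulant covariance via FFT.

    The eigenvalues of a circulant covariance are the FFT of its first row;
    they must be nonnegative, so zeroing small ones (compression) keeps the
    matrix realizable, unlike truncating covariance entries directly.
    """
    n = len(cov_row)
    lam = np.fft.fft(cov_row).real
    lam = np.clip(lam, 0.0, None)           # drop tiny negative round-off
    z = (rng.standard_normal((n_samples, n))
         + 1j * rng.standard_normal((n_samples, n)))
    return np.fft.fft(np.sqrt(lam / n) * z).real

# Wrapped exponential covariance on a ring of 64 points (illustrative).
n = 64
d = np.minimum(np.arange(n), n - np.arange(n))
cov_row = np.exp(-d / 10.0)
```

Because compression acts on the spectral components rather than the covariance entries, the retained-rank approximation remains a valid (positive semidefinite) covariance, which is the point the paper emphasizes over naive truncation.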

• Strong Rates of Convergence of a Splitting Scheme for Schrödinger
Equations with Nonlocal Interaction Cubic Nonlinearity and White Noise
Dispersion

Authors: Charles-Edouard Bréhier, David Cohen
Pages: 453 - 480
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 453-480, March 2022.
We analyze a splitting integrator for the time discretization of the Schrödinger equation with nonlocal interaction cubic nonlinearity and white noise dispersion. We prove that this time integrator has order of convergence one in the $p$th mean sense, for any $p\geq1$, in some Sobolev spaces. We prove that the splitting scheme preserves the $L^2$-norm, which is a crucial property for the proof of the strong convergence result. Finally, numerical experiments illustrate the performance of the proposed numerical scheme.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-03-29T07:00:00Z
DOI: 10.1137/20M1378168
Issue No: Vol. 10, No. 1 (2022)

• APIK: Active Physics-Informed Kriging Model with Partial Differential
Equations

Authors: Jialei Chen, Zhehui Chen, Chuck Zhang, C. F. Jeff Wu
Pages: 481 - 506
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 481-506, March 2022.
Kriging (or Gaussian process regression) has become a popular machine learning method owing to its flexibility and closed-form prediction expressions. However, one of the key challenges in applying kriging to engineering systems is that the available measurement data are scarce due to measurement limitations or high sensing costs. On the other hand, physical knowledge of the engineering system is often available and represented in the form of partial differential equations (PDEs). We present in this paper a PDE-informed Kriging model (PIK) that introduces PDE information via a set of PDE points and conducts posterior prediction similar to the standard kriging method. The proposed PIK model can incorporate physical knowledge from both linear and nonlinear PDEs. To further improve learning performance, we propose an active PIK framework (APIK) that designs PDE points to leverage the PDE information based on the PIK model and measurement data. The selected PDE points not only explore the whole input space but also exploit the locations where the PDE information is critical in reducing predictive uncertainty. Finally, an expectation-maximization algorithm is developed for parameter estimation. We demonstrate the effectiveness of APIK in two synthetic examples: a shock wave case study and a laser heating case study.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-03-29T07:00:00Z
DOI: 10.1137/20M1389285
Issue No: Vol. 10, No. 1 (2022)

• A Stochastic Levenberg--Marquardt Method Using Random Models with
Complexity Results

Authors: El Houcine Bergou, Youssef Diouane, Vyacheslav Kungurtsev, Clément W. Royer
Pages: 507 - 536
Abstract: SIAM/ASA Journal on Uncertainty Quantification, Volume 10, Issue 1, Page 507-536, March 2022.
Globally convergent variants of the Gauss--Newton algorithm are often the methods of choice to tackle nonlinear least-squares problems. Among such frameworks, Levenberg--Marquardt and trust-region methods are two well-established, similar paradigms. Both schemes have been studied when the Gauss--Newton model is replaced by a random model that is only accurate with a given probability. Trust-region schemes have also been applied to problems where the objective value is subject to noise: this setting is of particular interest in fields such as data assimilation, where efficient methods that can adapt to noise are needed to account for the intrinsic uncertainty in the input data. In this paper, we describe a stochastic Levenberg--Marquardt algorithm that handles noisy objective function values and random models, provided sufficient accuracy is achieved in probability. Our method relies on a specific scaling of the regularization parameter that allows us to leverage existing results for trust-region algorithms. Moreover, we exploit the structure of our objective through the use of a family of stationarity criteria tailored to least-squares problems. Provided the probability of accurate function estimates and models is sufficiently large, we bound the expected number of iterations needed to reach an approximate stationary point, which generalizes results based on using deterministic models or noiseless function values. We illustrate the link between our approach and several applications related to inverse problems and machine learning.
Citation: SIAM/ASA Journal on Uncertainty Quantification
PubDate: 2022-03-29T07:00:00Z
DOI: 10.1137/20M1366253
Issue No: Vol. 10, No. 1 (2022)

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762