Authors: Martin Benning; Martin Burger
Pages: 1–111
Abstract: Regularization methods are a key tool in the solution of inverse problems. They are used to introduce prior knowledge and allow a robust approximation of ill-posed (pseudo-)inverses. In the last two decades interest has shifted from linear to nonlinear regularization methods, even for linear inverse problems. The aim of this paper is to provide a reasonably comprehensive overview of this shift towards modern nonlinear regularization methods, including their analysis, applications and issues for future research. In particular we will discuss variational methods and techniques derived from them, since they have attracted much recent interest and link to other fields, such as image processing and compressed sensing. We further point to developments related to statistical inverse problems, multiscale decompositions and learning theory.
PubDate: 2018-05-01
DOI: 10.1017/S0962492918000016
Issue No: Vol. 27 (2018)
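The shift the survey describes, from linear to nonlinear regularization, can be made concrete in a few lines: classical Tikhonov regularization is a linear map of the data, while the soft-thresholding operator at the heart of sparsity-promoting ($\ell_1$) variational methods is not. A minimal sketch; the toy problem, parameter values and function names below are our own illustration, not taken from the paper:

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Classical linear (Tikhonov) regularization:
    x = argmin ||Ax - y||^2 + alpha * ||x||^2, solved via normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def soft_threshold(z, t):
    """Proximal map of t * ||x||_1: the elementary *nonlinear* step behind
    sparsity-promoting variational regularization (e.g. inside ISTA)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Toy linear inverse problem with a sparse ground truth (illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[3] = 1.0
y = A @ x_true + 1e-3 * rng.standard_normal(30)
x_tik = tikhonov(A, y, alpha=1e-2)
```

The Tikhonov solution depends linearly on `y`; soft-thresholding does not, which is the elementary sense in which such methods are nonlinear even for a linear forward operator `A`.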

Authors: Nawaf Bou-Rabee; J. M. Sanz-Serna
Pages: 113–206
Abstract: This paper surveys in detail the relations between numerical integration and the Hamiltonian (or hybrid) Monte Carlo method (HMC). Since the computational cost of HMC mainly lies in the numerical integrations, these should be performed as efficiently as possible. However, HMC requires methods that have the geometric properties of being volume-preserving and reversible, and this limits the number of integrators that may be used. On the other hand, these geometric properties have important quantitative implications for the integration error, which in turn have an impact on the acceptance rate of the proposal. While at present the velocity Verlet algorithm is the method of choice for good reasons, we argue that Verlet can be improved upon. We also discuss in detail the behaviour of HMC as the dimensionality of the target distribution increases.
PubDate: 2018-05-01
DOI: 10.1017/S0962492917000101
Issue No: Vol. 27 (2018)
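The geometric requirements named above, volume preservation and reversibility, are exactly what the velocity Verlet (leapfrog) scheme supplies. A minimal sketch of the integrator on a standard Gaussian target; the step size and test values are illustrative, not from the survey:

```python
import numpy as np

def leapfrog(q, p, grad_U, step, n_steps):
    """Velocity Verlet (leapfrog) integration of Hamiltonian dynamics.
    Volume-preserving and reversible, as HMC requires; grad_U is the
    gradient of the potential energy (negative log target density)."""
    q, p = q.copy(), p.copy()
    p -= 0.5 * step * grad_U(q)        # initial half step in momentum
    for _ in range(n_steps - 1):
        q += step * p                  # full step in position
        p -= step * grad_U(q)          # full step in momentum
    q += step * p
    p -= 0.5 * step * grad_U(q)        # final half step in momentum
    return q, p

# Standard Gaussian target: U(q) = q^2 / 2, so grad_U(q) = q.
q0, p0 = np.array([1.0]), np.array([0.5])
q1, p1 = leapfrog(q0, p0, lambda q: q, step=0.1, n_steps=10)
# The energy H = U + p^2/2 is nearly conserved, so the HMC
# accept/reject step based on H(q1, p1) - H(q0, p0) rarely rejects.
H0 = 0.5 * q0**2 + 0.5 * p0**2
H1 = 0.5 * q1**2 + 0.5 * p1**2
```

Reversibility means that integrating back from `(q1, -p1)` returns to the starting point, which is what makes the HMC proposal a valid Metropolis move.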

Authors: C. T. Kelley
Pages: 207–287
Abstract: This article is about numerical methods for the solution of nonlinear equations. We consider both the fixed-point form $\mathbf{x}=\mathbf{G}(\mathbf{x})$ and the equations form $\mathbf{F}(\mathbf{x})=0$, and explain why both versions are necessary to understand the solvers. We include the classical methods to make the presentation complete, and discuss less familiar topics such as Anderson acceleration, the semismooth Newton method, and pseudo-arclength and pseudo-transient continuation methods.
PubDate: 2018-05-01
DOI: 10.1017/S0962492917000113
Issue No: Vol. 27 (2018)
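The two formulations mentioned in the abstract can be set side by side in code: Newton's method for $\mathbf{F}(\mathbf{x})=0$ and plain iteration for $\mathbf{x}=\mathbf{G}(\mathbf{x})$. A scalar sketch; the example problem $x^2 = 2$ and the tolerances are our own, not the article's:

```python
def newton(F, dF, x0, tol=1e-10, max_iter=50):
    """Classical Newton iteration for the equations form F(x) = 0."""
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        x -= fx / dF(x)               # Newton step: x <- x - F(x)/F'(x)
    return x

def fixed_point(G, x0, tol=1e-10, max_iter=200):
    """Plain iteration for the fixed-point form x = G(x)."""
    x = x0
    for _ in range(max_iter):
        x_new = G(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# F(x) = x^2 - 2, and the equivalent fixed-point map G(x) = (x + 2/x)/2
root_newton = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
root_fp = fixed_point(lambda x: 0.5 * (x + 2.0 / x), 1.0)
```

Here `G` happens to be Newton's own map for this `F`, so both iterations converge to $\sqrt{2}$; for a general `G`, convergence requires `G` to be a contraction near the fixed point, which is one reason both viewpoints are needed.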

Authors: Alexander Kurganov
Pages: 289–351
Abstract: Shallow-water equations are widely used to model water flow in rivers, lakes, reservoirs, coastal areas, and other situations in which the water depth is much smaller than the horizontal length scale of motion. The classical shallow-water equations, the Saint-Venant system, were originally proposed about 150 years ago and are still used in a variety of applications. For many practical purposes, it is extremely important to have an accurate, efficient and robust numerical solver for the Saint-Venant system and related models. As their solutions are typically non-smooth and even discontinuous, finite-volume schemes are among the most popular tools. In this paper, we review such schemes and focus on one of the simplest (yet highly accurate and robust) methods: central-upwind schemes. These schemes belong to the family of Godunov-type Riemann-problem-solver-free central schemes, but incorporate some upwinding information about the local speeds of propagation, which helps to reduce the excessive numerical diffusion typically present in classical (staggered) non-oscillatory central schemes. Besides the classical one- and two-dimensional Saint-Venant systems, we will consider the shallow-water equations with friction terms, models with moving bottom topography, the two-layer shallow-water system, as well as general non-conservative hyperbolic systems.
PubDate: 2018-05-01
DOI: 10.1017/S0962492918000028
Issue No: Vol. 27 (2018)
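A first-order, flat-bottom sketch of the central-upwind flux for the 1D Saint-Venant system may make the construction concrete. The grid, initial dam-break data and time step below are illustrative; practical solvers add piecewise-linear reconstruction, well-balancing and positivity-preserving fixes that this sketch omits:

```python
import numpy as np

g = 9.81  # gravitational acceleration

def physical_flux(h, hu):
    """Flux of the 1D Saint-Venant system, U = (h, hu)."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def central_upwind_flux(UL, UR):
    """First-order central-upwind numerical flux. The one-sided local
    speeds a+ and a- replace an exact Riemann solver."""
    hL, huL = UL
    hR, huR = UR
    uL, uR = huL / hL, huR / hR
    cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
    ap = max(uL + cL, uR + cR, 0.0)   # rightmost local speed of propagation
    am = min(uL - cL, uR - cR, 0.0)   # leftmost local speed of propagation
    FL, FR = physical_flux(hL, huL), physical_flux(hR, huR)
    if ap - am < 1e-12:               # degenerate case: no wave fan
        return 0.5 * (FL + FR)
    return ((ap * FL - am * FR) / (ap - am)
            + (ap * am) / (ap - am) * (np.array(UR) - np.array(UL)))

# One forward-Euler step on a tiny dam-break problem (fixed boundary cells)
x = np.linspace(0.0, 1.0, 101)
h = np.where(x < 0.5, 2.0, 1.0)
hu = np.zeros_like(h)
dx, dt = x[1] - x[0], 1e-3            # CFL ~ 0.44 for these data
mass0 = h.sum() * dx                  # total water volume, to check conservation
fluxes = [central_upwind_flux((h[j], hu[j]), (h[j + 1], hu[j + 1]))
          for j in range(len(x) - 1)]
for j in range(1, len(x) - 1):
    dU = -(dt / dx) * (fluxes[j] - fluxes[j - 1])
    h[j] += dU[0]
    hu[j] += dU[1]
```

The second term of the flux is the built-in numerical diffusion; because it is scaled by the local speeds `ap` and `am` rather than by a global bound, it is smaller than in classical staggered central schemes, which is the upwinding advantage the abstract refers to.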

Authors: J. Tinsley Oden
Pages: 353–450
Abstract: The use of computational models and simulations to predict events that take place in our physical universe, or to predict the behaviour of engineered systems, has significantly advanced the pace of scientific discovery and the creation of new technologies for the benefit of humankind over recent decades, at least up to a point. That 'point' in recent history occurred around the time that the scientific community began to realize that true predictive science must deal with many formidable obstacles, including the determination of the reliability of the models in the presence of many uncertainties. To develop meaningful predictions one needs relevant data, itself possessing uncertainty due to experimental noise; in addition, one must determine model parameters, and concomitantly, there is the overriding need to select and validate models given the data and the goals of the simulation. This article provides a broad overview of predictive computational science within the framework of what is often called the science of uncertainty quantification. The exposition is divided into three major parts. In Part 1, philosophical and statistical foundations of predictive science are developed within a Bayesian framework. There the case is made that the Bayesian framework provides, perhaps, a unique setting for handling all of the uncertainties encountered in scientific prediction. In Part 2, general frameworks and procedures for the calculation and validation of mathematical models of physical realities are given, all in a Bayesian setting. But beyond Bayes, an introduction to information theory, the maximum entropy principle, model sensitivity analysis and sampling methods such as MCMC are presented. In Part 3, the central problem of predictive computational science is addressed: the selection, adaptive control and validation of mathematical and computational models of complex systems. The Occam Plausibility Algorithm, OPAL, is introduced as a framework for model selection, calibration and validation. Applications to complex models of tumour growth are discussed.
PubDate: 2018-05-01
DOI: 10.1017/S096249291800003X
Issue No: Vol. 27 (2018)
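Among the sampling methods the article surveys, random-walk Metropolis is the simplest MCMC scheme used for Bayesian model calibration. A generic sketch of calibrating one parameter against noisy data; the Gaussian toy problem, priors and tuning constants below are our own illustration, not OPAL or anything specific to the article:

```python
import numpy as np

def metropolis(log_post, theta0, n_samples, prop_scale, rng=None):
    """Random-walk Metropolis sampler targeting exp(log_post)."""
    rng = np.random.default_rng(0) if rng is None else rng
    theta = float(theta0)
    lp = log_post(theta)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        prop = theta + prop_scale * rng.standard_normal()  # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:           # accept/reject
            theta, lp = prop, lp_prop
        samples[i] = theta
    return samples

# Calibrate the mean of a Gaussian model against noisy synthetic 'data',
# with a broad Gaussian prior (variance 100) on the parameter.
rng = np.random.default_rng(1)
data = 3.0 + rng.standard_normal(50)
log_post = lambda m: -0.5 * np.sum((data - m) ** 2) - 0.5 * m ** 2 / 100.0
samples = metropolis(log_post, 0.0, 5000, 0.5)
posterior_mean = samples[2000:].mean()   # discard burn-in
```

With abundant data and a weak prior, the posterior mean sits close to the sample mean of the data; model *selection* in the article's sense then compares competing models by their posterior plausibilities, not just their calibrated fits.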