Authors:Chen Lai; Susan E. Minkoff Pages: 359 - 372 Abstract: Acoustic imaging and sensor modeling are processes that require repeated solution of the acoustic wave equation. Solution of the wave equation can be computationally expensive and memory intensive for large simulation domains. One scheme for speeding up solution of the wave equation is the operator-based upscaling method. The algorithm proceeds in two steps. First, the wave equation is solved in parallel for fine grid unknowns internal to coarse blocks, assuming the coarse blocks do not need to communicate with neighboring blocks. Second, these fine grid solutions are used to form a new problem which is solved on the coarse grid. Accurate and efficient wave propagation schemes must also avoid artificial reflections off the edges of the computational domain. One popular method for preventing artificial reflections is the nearly perfectly matched layer (NPML) method. In this paper, we discuss applying NPML to operator upscaling for the wave equation. We show that although we only apply NPML to the first step of this two-step algorithm (directly affecting the fine grid unknowns only), we still see a significant reduction of reflections back into the domain. We describe three numerical experiments (one homogeneous medium experiment and two heterogeneous media examples) in which we validate that the solution of the wave equation decays exponentially in the NPML regions. Numerical experiments of acoustic wave propagation in two dimensions with a reasonable absorbing layer thickness resulted in a maximum pressure reflection of 3–8%. While the coarse grid acceleration is not explicitly damped in our algorithm, the tight coupling between the two steps of the algorithm results in only 0.1–1% of the acceleration reflecting back into the computational domain. PubDate: 2017-06-01 DOI: 10.1007/s10596-017-9616-5 Issue No:Vol. 21, No. 3 (2017)
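As an illustrative aside, the exponential decay validated in the NPML regions can be caricatured with a simple polynomial damping profile accumulated along the propagation path. NPML itself damps auxiliary field variables rather than applying a plain amplitude factor, so the profile shape, `sigma_max`, and grid below are purely hypothetical stand-ins:

```python
import numpy as np

def damping_profile(x, layer_start, layer_width, sigma_max, order=2):
    """Polynomial damping: zero in the interior, rising to sigma_max
    at the outer edge of the absorbing layer."""
    depth = np.clip((x - layer_start) / layer_width, 0.0, 1.0)
    return sigma_max * depth**order

x = np.linspace(0.0, 1.0, 101)
sigma = damping_profile(x, layer_start=0.8, layer_width=0.2, sigma_max=50.0)

# A wave entering the layer is attenuated roughly like
# exp(-integral of sigma along the path).
attenuation = np.exp(-np.cumsum(sigma) * (x[1] - x[0]))

print(attenuation[0])          # interior: no damping -> 1.0
print(attenuation[-1] < 0.05)  # outer edge: strong decay -> True
```

Thicker layers or larger `sigma_max` increase the accumulated integral, which is why a "reasonable absorbing layer thickness" trades memory for lower reflected amplitude.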

Authors:Roger Clavera-Gispert; Òscar Gratacós; Ana Carmona; Raimon Tolosana-Delgado Pages: 373 - 391 Abstract: Nowadays, numerical modeling is a common tool in the study of sedimentary basins, since it allows the simulated processes to be quantified and the interactions among them to be determined. One such program is SIMSAFADIM-CLASTIC, a 3D forward-model process-based code that simulates sedimentation in a marine basin at a geological time scale. It models fluid flow, siliciclastic transport and sedimentation, and carbonate production. In this article, we present the latest improvements to the carbonate production model, in particular the use of generalized Lotka-Volterra equations that include logistic growth and interaction among species. Logistic growth is constrained by environmental parameters such as water depth, energy of the medium, and depositional profile. The environmental parameters are converted to factors and combined into a single environmental value to model the evolution of species. The interaction among species is quantified using the community matrix, which captures the beneficial or detrimental effects of the presence of each species on the others. A theoretical example of a carbonate ramp is computed to show the interaction between carbonate and siliciclastic sediment, the effect of environmental parameters on the modeled species associations, and the interaction among these species associations. The distribution of the modeled species associations in the theoretical example is compared with the carbonate Oligocene-Miocene Asmari Formation in Iran and the Miocene Ragusa Platform in Italy. PubDate: 2017-06-01 DOI: 10.1007/s10596-017-9617-4 Issue No:Vol. 21, No. 3 (2017)
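As an aside, the generalized Lotka-Volterra dynamics described above (logistic growth coupled through a community matrix) can be sketched in a few lines. The growth rates, carrying capacities, and interaction coefficients below are hypothetical stand-ins for the environmentally derived values SIMSAFADIM-CLASTIC actually computes:

```python
import numpy as np

def glv_step(N, r, K, A, dt):
    """Explicit Euler step of the generalized Lotka-Volterra system
    dN_i/dt = r_i N_i (1 - (A @ N)_i / K_i), with logistic growth on
    the diagonal of A and interspecies interaction off the diagonal."""
    return np.maximum(N + dt * r * N * (1.0 - (A @ N) / K), 0.0)

r = np.array([1.0, 0.8])            # intrinsic growth rates
K = np.array([1.0, 1.0])            # carrying capacities (environmental value)
A = np.array([[1.0, 0.5],           # community matrix: A[i, j] is the
              [0.4, 1.0]])          # effect of species j on species i
N = np.array([0.1, 0.1])
for _ in range(5000):
    N = glv_step(N, r, K, A, dt=0.01)

# Weak mutual competition: stable coexistence at the state solving A @ N = K.
print(np.allclose(A @ N, K, atol=1e-3))  # -> True
```

With stronger off-diagonal (detrimental) entries the same iteration drives one species association to exclusion, which is the qualitative behaviour the community matrix is meant to control.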

Authors:Lasse Hjuler Christiansen; Andrea Capolei; John Bagterp Jørgensen Pages: 411 - 426 Abstract: The uncertainties related to long-term forecasts of oil prices impose significant financial risk on ventures of oil production. To minimize risk, oil companies are inclined to maximize profit over short-term horizons ranging from months to a few years. In contrast, conventional production optimization maximizes long-term profits over horizons that span more than a decade. To address this challenge, the oil literature has introduced short-term versus long-term optimization. Ideally, this problem is solved by a posteriori multi-objective optimization methods that generate an approximation to the Pareto front of optimal short-term and long-term trade-offs. However, such methods rely on a large number of reservoir simulations and scale poorly with the number of objectives subject to optimization. Consequently, the large-scale nature of production optimization severely limits applications to real-life scenarios. More practical alternatives include ad hoc hierarchical switching schemes. As a drawback, such methods lack robustness due to unclear convergence properties and do not naturally generalize to cases of more than two objectives. Also, as this paper shows, the hierarchical formulation may skew the balance between the objectives, leaving an unfulfilled potential to increase profits. To promote efficient and reliable short-term versus long-term optimization, this paper introduces a natural way to characterize desirable Pareto points and proposes a novel least squares (LS) method. Unlike hierarchical approaches, the method is guaranteed to converge to a Pareto optimal point. Also, the LS method is designed to properly balance multiple objectives, independently of the Pareto front's shape. As such, the method poses a practical alternative to a posteriori methods in situations where the frontier is intractable to generate. 
PubDate: 2017-06-01 DOI: 10.1007/s10596-017-9620-9 Issue No:Vol. 21, No. 3 (2017)
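The least-squares idea of steering the objective vector toward a characterized target point can be illustrated on a toy bi-objective problem. This is only a sketch of the scalarization principle, with a hypothetical utopia-point target and grid search; it is not the paper's LS formulation, which works with reservoir-simulation objectives and gradient-based optimizers:

```python
import numpy as np

def ls_objective(x, objectives, target):
    """Least-squares distance from the objective vector to a target
    point (here the utopia point) characterizing a desirable trade-off."""
    f = np.array([obj(x) for obj in objectives])
    return float(np.sum((f - target) ** 2))

# Toy stand-ins for short-term and long-term objectives (to minimize).
f_short = lambda x: x ** 2               # best at x = 0
f_long = lambda x: (x - 1.0) ** 2        # best at x = 1
target = np.array([0.0, 0.0])            # utopia point, below the front

# Crude grid minimization; a field-scale implementation would use an
# adjoint-gradient optimizer over the well controls.
grid = np.linspace(-0.5, 1.5, 2001)
vals = [ls_objective(x, [f_short, f_long], target) for x in grid]
x_star = float(grid[int(np.argmin(vals))])

print(round(x_star, 3))  # balanced Pareto-optimal compromise -> 0.5
```

The minimizer lands midway between the two single-objective optima, which is the "properly balanced" behaviour claimed for the LS method, here independent of how the front is shaped between the endpoints.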

Authors:A. Kushnir; A. Varypaev Pages: 459 - 477 Abstract: Here, we present a statistical simulation study of several array data processing techniques used for the location of sources of elastic waves. The source location problem arises in such engineering applications as radio communications, acoustics, sonar, meteorology, astrophysics, seismology, etc. Recently, this problem has become important for applied geophysics in connection with the modern technology of hydrocarbon production based on hydraulic fracturing of rock layers in the Earth's subsurface. These techniques are required for the monitoring of hydraulic fracturing through the location of sources of the numerous microearthquakes that occur during cracking of crustal rocks. We have established theoretical connections among several location algorithms. In particular, two different theoretical interpretations are proposed for the phase alignment location algorithm, recently proposed for seismic array data processing. The robustness of the phase alignment location algorithm and of the adaptive maximum likelihood location algorithm is demonstrated through Monte Carlo simulation with respect to variations of the noise statistics typical of microseismic monitoring at hydrocarbon production sites. Within the Monte Carlo framework, we also compare these two location algorithms with the so-called emission tomography algorithm developed for the location of microseismic sources during the monitoring of hydraulic fracturing. PubDate: 2017-06-01 DOI: 10.1007/s10596-017-9623-6 Issue No:Vol. 21, No. 3 (2017)
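The location algorithms compared here share a grid-search, align-and-stack structure. A minimal delay-and-sum sketch conveys the idea; the geometry, noise-free Gaussian pulses, and velocity are all hypothetical, and the actual phase alignment algorithm aligns spectral phases rather than stacking amplitudes:

```python
import numpy as np

def locate_by_stack(traces, dt, sensors, candidates, velocity):
    """Grid-search location: for each candidate source point, shift the
    traces back by their theoretical travel times and pick the point
    whose aligned stack carries maximum energy (delay-and-sum)."""
    n = traces.shape[1]
    t = np.arange(n) * dt
    best, best_energy = None, -np.inf
    for c in candidates:
        delays = np.linalg.norm(sensors - c, axis=1) / velocity
        stack = np.zeros(n)
        for trace, d in zip(traces, delays):
            stack += np.interp(t, t - d, trace, left=0.0, right=0.0)
        energy = np.sum(stack**2)
        if energy > best_energy:
            best, best_energy = c, energy
    return best

# Four sensors at the corners of a unit square; a Gaussian pulse is
# emitted at (0.5, 0.5) at origin time 0.5.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_src, v, dt = np.array([0.5, 0.5]), 1.0, 0.01
t = np.arange(300) * dt
traces = np.array(
    [np.exp(-200.0 * (t - (np.linalg.norm(s - true_src) / v + 0.5))**2)
     for s in sensors])
candidates = [np.array([x, y]) for x in np.linspace(0, 1, 11)
                               for y in np.linspace(0, 1, 11)]
est = locate_by_stack(traces, dt, sensors, candidates, v)
print(est)  # -> [0.5 0.5]
```

The robustness questions studied in the paper arise precisely because, with real microseismic noise, the stacked energy surface becomes rough and its global maximum can drift off the true source.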

Authors:Laura Donatella Campisi Pages: 519 - 531 Abstract: A method using multilayer perceptrons for analysing diffusion profiles and sketching the temperature history of geological samples is explored. Users of this method can intuitively test and compare results by thinking in terms of analytical solutions of the diffusion equation, whilst the bulk of the work is done computationally. Being neither completely analytical nor numerical, the method is a hybrid and represents an ideal man-machine interaction. The approach presented in this paper should be preferred when the retrieval of the diffusion coefficients from concentration profiles using dimensionless parameters is not possible and/or there is more than one unknown parameter in the analytical solution of the diffusion equation. Its versatility is a key factor for extending the potential of Dodson's formulation. The case of a species produced by a radiogenic source and diffusing in a cooling system is therefore discussed. Both the classical change of variable for diffusion coefficients depending on time and an alternative approach decomposing the overall effect of diffusion into a sum of effects due to smaller events can be used to tackle this problem. As multilayer perceptrons can approximate any function, none of the assumptions originally stated by Dodson are necessary. PubDate: 2017-06-01 DOI: 10.1007/s10596-017-9628-1 Issue No:Vol. 21, No. 3 (2017)
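The classical change of variable mentioned above replaces time with an effective time tau equal to the integral of D(t) dt, and the additivity of that integral is what allows the overall diffusion effect to be decomposed into a sum of smaller events. A minimal sketch, with a hypothetical Arrhenius law and linear cooling history standing in for a real sample:

```python
import numpy as np

def trap(y, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hypothetical Arrhenius diffusivity along a linear cooling path.
D0, E, R = 1.0e-6, 2.0e5, 8.314
T = lambda t: 900.0 - 10.0 * t          # temperature (K) versus time
D = lambda t: D0 * np.exp(-E / (R * T(t)))

t = np.linspace(0.0, 20.0, 20001)
tau_total = trap(D(t), t)               # effective time, full history

# Splitting the history into two events and summing their effective
# times reproduces the total: the basis of the decomposition approach.
first, second = slice(0, 10001), slice(10000, None)
tau_parts = trap(D(t[first]), t[first]) + trap(D(t[second]), t[second])

print(tau_total > 0.0)                   # -> True
print(np.isclose(tau_parts, tau_total))  # -> True
```

A constant-diffusivity analytical profile evaluated at `tau_total` then reproduces the time-dependent result, which is what lets the perceptron work with analytical solutions throughout.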

Authors:Pål Næverlid Sævik; Martha Lien; Inga Berre Pages: 553 - 565 Abstract: Ensemble- and optimization-based parameter estimation is commonly used to calibrate simulation models of fractured reservoirs to measured data. Traditionally, statistical data on small-scale fractures are upscaled to a dual continuum model in a single step, and the subsequent history matching procedure makes adjustments to the upscaled parameters. In this paper, we show that the resulting reservoir models may be inconsistent with the initial fracture description, meaning that the reservoir parameters do not correspond to a physically valid combination of fracture parameters. A number of numerical examples are provided, which illustrate why and when the problem occurs. We utilize an invertible analytical fracture upscaling method, and deviations from the fracture model can thus be quantified in each case. We show that consistency with the fracture model is preserved if fracture parameters are history matched directly, if the relation between inversion variables and fracture parameters is linear, or if an unbiased Bayesian sampling method is used. We also show that preserving consistency is less important if the uncertainty of the fracture upscaling method is large. PubDate: 2017-06-01 DOI: 10.1007/s10596-017-9632-5 Issue No:Vol. 21, No. 3 (2017)
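The notion of an invertible analytical upscaling map can be illustrated with the textbook parallel-plate (cubic-law) model for a set of parallel fractures. The paper's upscaling method is richer than this, and the numbers below are hypothetical:

```python
def fracture_permeability(aperture, spacing):
    """Parallel-plate (cubic-law) effective permeability of a set of
    parallel fractures with the given aperture and spacing (SI units)."""
    return aperture**3 / (12.0 * spacing)

def invert_aperture(permeability, spacing):
    """Invert the upscaling: recover the aperture that produces the
    given effective permeability for a known spacing."""
    return (12.0 * spacing * permeability) ** (1.0 / 3.0)

a, s = 1e-4, 1.0          # 0.1 mm aperture, 1 m spacing (hypothetical)
k = fracture_permeability(a, s)
a_back = invert_aperture(k, s)
print(abs(a_back - a) < 1e-12)  # round trip recovers the aperture -> True
```

A history-matched effective permeability that cannot be mapped back to physically admissible fracture parameters (for instance, a combination outside the range any aperture/spacing pair can produce) is exactly the kind of inconsistency the paper quantifies.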

Authors:Nisael A. Solano; Mohammad Soroush; Chris R. Clarkson; Federico F. Krause; Jerry L. Jensen Pages: 567 - 593 Abstract: A high-resolution simulation model of a heterogeneous low-permeability rock sample is used to investigate the effects of physical and biogenic sedimentary structures on scaling and anisotropy of absolute permeability at the core scale. Several simulation sub-samples with random locations and volumes were also selected for evaluation of the effects of scale and lithological composition on the calculated permeability. Vertical and horizontal permeability values (from whole core simulation) are in good agreement with routine core analysis (RCA) measurements from offsetting cores. Despite relatively good reservoir quality associated with geobodies of biogenic and relic bedding structures, results from the full diameter core simulation demonstrate that their limited volumetric abundance and restricted connectivity prevent these features from controlling fluid flow in these rocks. In fact, permeability seems to be dominated by the tighter encasing matrix, which exhibits average permeability values very close to those reported from RCA. Geometric averaging offers a better representation for the upscaling of horizontal permeability datasets; whereas, both geometric and harmonic averaging work similarly well for the vertical measurements. The methodology used in this work is particularly applicable to the detailed characterization of reservoir rocks with a high degree of heterogeneity caused by biological reworking and diagenesis. PubDate: 2017-06-01 DOI: 10.1007/s10596-017-9635-2 Issue No:Vol. 21, No. 3 (2017)
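The averaging rules compared in the study follow the classic layered-medium intuition. A minimal sketch, with hypothetical permeabilities:

```python
import numpy as np

def arithmetic_mean(k): return float(np.mean(k))
def geometric_mean(k):  return float(np.exp(np.mean(np.log(k))))
def harmonic_mean(k):   return float(1.0 / np.mean(1.0 / k))

# Flow parallel to layers is governed by the arithmetic mean, flow
# across layers by the harmonic mean; the geometric mean often suits
# statistically heterogeneous media and always lies in between.
k = np.array([1.0, 10.0, 100.0])   # hypothetical permeabilities (mD)
print(arithmetic_mean(k))          # -> 37.0
print(round(geometric_mean(k), 3)) # -> 10.0
print(round(harmonic_mean(k), 3))  # -> 2.703
```

The spread between the three means grows with heterogeneity, which is why the choice of averaging rule matters most in strongly bioturbated or diagenetically altered intervals.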

Authors:Stein Krogstad; Knut-Andreas Lie; Halvor Møll Nilsen; Carl Fredrik Berg; Vegard Kippe Abstract: Flow diagnostics refers to a family of numerical methods that within a few seconds can compute visually intuitive quantities illuminating flow patterns and well connections for full 3D reservoir models. The starting point is a flow field, extracted from a previous multiphase simulation or computed by solving a simplified pressure equation with fixed mobilities. Time-of-flight (TOF) and stationary tracer equations are then solved to determine approximate time lines and influence regions. From these, one can derive sweep or drainage regions, injector–producer regions, and well allocation factors, as well as dynamic heterogeneity measures that characterize sweep and displacement efficiency and correlate (surprisingly) well with oil recovery from waterflooding processes. This work extends flow diagnostics to polymer flooding. Our aim is to develop inexpensive flow proxies that can be used to optimize well placement, drilling sequence, and injection strategies. In particular, we seek proxies that can distinguish the effects of improved microscopic and macroscopic displacement. To account for the macroscopic effect of polymer injection, representative flow fields are computed by solving the reservoir equations with linearized flux functions. Although this linearization has a pronounced smearing effect on water and polymer fronts, we show that the heterogeneity of the total flux field is adequately represented. Subsequently, we transform the flow equations to streamline coordinates, map saturations from physical coordinates to time-of-flight, and (re)solve a representative 1D flow problem for each well-pair region. A recovery proxy is then obtained by accumulating each 1D solution weighted by a distribution function that measures the variation in residence times for all flow paths inside each well-pair region. 
We apply our new approach to 2D and 3D reservoir simulation models, and observe close agreement between the suggested approximations and results obtained from full multiphase simulations. Furthermore, we demonstrate how two different versions of the proxy can be utilized to differentiate between macroscopic and microscopic sweep improvements resulting from polymer injection. For the examples considered, we demonstrate that macroscopic sweep improvements alone correlate better with measures for heterogeneity than the combined improvements. PubDate: 2017-07-05 DOI: 10.1007/s10596-017-9681-9
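One of the standard dynamic heterogeneity measures referred to above is the Lorenz coefficient computed from time-of-flight. A common textbook sketch (not the polymer proxy itself), with made-up cell data:

```python
import numpy as np

def lorenz_coefficient(pore_volume, tof_total):
    """Lorenz coefficient from flow diagnostics: order cells from fast
    to slow (pore volume over total residence time), accumulate the
    flow-capacity/storage-capacity (F-Phi) curve, and return
    Lc = 2 * (area under F(Phi) - 1/2).  Lc = 0 means a perfectly
    homogeneous displacement; Lc near 1 means severe channeling."""
    speed = pore_volume / tof_total
    order = np.argsort(-speed)
    phi = np.concatenate([[0.0], np.cumsum(pore_volume[order]) / pore_volume.sum()])
    F = np.concatenate([[0.0], np.cumsum(speed[order]) / speed.sum()])
    area = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(phi))
    return 2.0 * (area - 0.5)

pv = np.ones(100)                       # equal cell pore volumes
uniform = np.ones(100)                  # equal residence times
fast10 = np.concatenate([np.full(10, 0.01), np.full(90, 1.0)])

print(abs(lorenz_coefficient(pv, uniform)) < 1e-9)  # homogeneous -> True
print(lorenz_coefficient(pv, fast10) > 0.5)         # channelized -> True
```

Measures of this type are what the paper correlates against recovery, and what the linearized flux fields must preserve for the polymer proxy to be trustworthy.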

Authors:Dean S. Oliver; Miguel Alfonzo Abstract: The problem of assimilating biased and inaccurate observations into inadequate models of the physical systems from which the observations were taken is common in the petroleum and groundwater fields. When large amounts of data are assimilated without accounting for model error and observation bias, predictions tend to be both overconfident and incorrect. In this paper, we propose a workflow for calibration of imperfect models to biased observations that involves model construction, model calibration, model criticism and model improvement. Model criticism is based on computation of model diagnostics which provide an indication of the validity of assumptions. During the model improvement step, we advocate identification of additional physically motivated parameters based on examination of data mismatch after calibration and addition of bias correction terms. If model diagnostics indicate the presence of residual model error after parameters have been added, then we advocate estimation of a “total” observation error covariance matrix, whose purpose is to reduce weighting of observations that cannot be matched because of deficiency of the model. Although the target applications of this methodology are in the subsurface, we illustrate the approach with two simplified examples involving prediction of the future velocity of fall of a sphere from models calibrated to a short time series of biased measurements with independent additive random noise. The models into which the data are assimilated contain model errors due to neglect of physical processes and neglect of uncertainty in parameters. In every case, the estimated total error covariance is larger than the true observation covariance, implying that the observations need not be matched to the accuracy of the measuring instrument. Predictions are much improved when all model improvement steps are taken. PubDate: 2017-06-27 DOI: 10.1007/s10596-017-9678-4
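The effect described here, residual model error inflating the estimated total observation error beyond instrument accuracy, can be reproduced in miniature by fitting a deliberately wrong (linear) model to data generated from quadratic free fall. All numbers are illustrative, not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(1)

# Truth: quadratic fall distance.  Imperfect model: linear in time.
t = np.linspace(0.0, 2.0, 50)
truth = 4.9 * t**2
sigma_obs = 0.05                          # true instrument noise level
d = truth + rng.normal(0.0, sigma_obs, t.size)

# Calibrate the imperfect linear model by least squares.
A = np.column_stack([t, np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, d, rcond=None)
residuals = d - A @ coef

# The residual spread reflects instrument noise plus model error, so a
# "total" observation error estimated from it exceeds the instrument
# accuracy: the data should not be matched to instrument precision.
sigma_total = float(residuals.std())
print(sigma_total > sigma_obs)  # -> True
```

Down-weighting observations with this inflated covariance is what prevents the calibration from contorting the imperfect model to chase unmatchable data.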

Abstract: When injecting CO2 or other fluids into a geological formation, pressure plays an important role both as a driver of flow and as a risk factor for mechanical integrity. The full effect of geomechanics on aquifer flow can only be captured using a coupled flow-geomechanics model. In order to solve this computationally expensive system, various strategies have been put forward over the years, with some of the best current methods based on sequential splitting. In the present work, we seek to approximate the full geomechanical effect on flow without needing to couple with a geomechanics solver during simulation, and at a computational cost comparable to that of an uncoupled model. We do this by means of precomputed pressure response functions. At grid model generation time, a geomechanics solver is used to compute the mechanical response of the aquifer for a set of pressure fields. The relevant information from these responses is then stored in a compact form and embedded with the grid model. We test the accuracy and computational performance of our approach on a simple 2D and a more complex 3D model, and compare the results with those produced by a fully coupled approach as well as from a simple decoupled method based on Geertsma’s uniaxial expansion coefficient. PubDate: 2017-06-24 DOI: 10.1007/s10596-017-9674-8
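The precomputation idea rests on linearity of the mechanical response: responses to a set of basis pressure fields, computed once offline, combine linearly at run time. The sketch below uses a random matrix as a stand-in for the geomechanics solver and plain basis vectors in place of the paper's compact storage; both are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Offline: a (hypothetical) geomechanics solver mapping a pressure field
# to a pore-volume change.  For linear elasticity this map is linear, so
# responses to basis pressure fields can be precomputed once.
n_cells = 50
G = rng.normal(size=(n_cells, n_cells)) * 1e-3   # stand-in linear solver
geomech_response = lambda p: G @ p

basis = np.eye(n_cells)[:, :10]                  # 10 basis pressure fields
precomputed = np.column_stack([geomech_response(b) for b in basis.T])

# Online: any pressure field in the span of the basis gets its
# mechanical response by cheap linear combination, no geomechanics solve.
coeffs = rng.normal(size=10)
p = basis @ coeffs
fast = precomputed @ coeffs
exact = geomech_response(p)
print(np.allclose(fast, exact))  # -> True
```

The approximation quality for pressure fields outside the span of the chosen basis is precisely what the 2D and 3D comparisons against the fully coupled solution assess.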

Authors:A. J. Hong; R. B. Bratvold; G. Nævdal Abstract: Many model-based techniques for optimizing hydrocarbon production, especially robust optimization (RO), carry prohibitive computational cost. Ensemble-based optimization (EnOpt) is a promising RO method but is computationally intensive when based on rich grid-based reservoir models with hundreds of realizations. We present a proxy-model workflow where a grid-based model is supplemented by a useful yet tractable proxy model. A capacitance-resistance model (CRM) can be a proxy model for waterflooding systems. We illustrate the use of CRM-based models and investigate their pros and cons using synthetic 2D and 3D models. A selected proxy model is embedded into the proxy-model workflow. The results obtained from the proxy-model and traditional workflows are compared. The impact of any differences is assessed by considering a relevant decision-making context. The main contributions are (1) a general RO workflow that embeds proxy models, (2) a discussion of the desiderata of proxy models, (3) illustration and discussion of the use of CRM-based models in the proxy-model workflow, and (4) a discussion of the impact of using a proxy model for production optimization in a decision-making context. Based on our study, we conclude that CRM-based models have high potential to serve as cogent proxy models in waterflooding-related decision-making contexts and that the proxy-model workflow, leveraging a faster but still relevant production model, significantly speeds up the optimization yet gives robust results that lead to a near-optimal solution. PubDate: 2017-06-24 DOI: 10.1007/s10596-017-9666-8
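A single-producer capacitance-resistance model of the kind used as a proxy can be written down in a few lines. The time constant and connectivity gains below are hypothetical; a real CRM study estimates them from production history:

```python
import numpy as np

def crm_rate(q0, injection, tau, gains, dt):
    """Capacitance-resistance model (CRM) producer rate: exponential
    decay of the initial rate plus the weighted injector contribution,
    q_{k+1} = q_k e^{-dt/tau} + (1 - e^{-dt/tau}) * sum_i f_i I_{i,k}."""
    decay = np.exp(-dt / tau)
    q = [q0]
    for inj in injection:              # inj: all injector rates at step k
        q.append(q[-1] * decay + (1.0 - decay) * np.dot(gains, inj))
    return np.array(q)

# One producer fed by two injectors with connectivity gains 0.6 and 0.3.
inj = np.tile([100.0, 50.0], (200, 1))            # constant injection
q = crm_rate(q0=10.0, injection=inj, tau=5.0, gains=[0.6, 0.3], dt=1.0)

# With constant injection the rate approaches the steady state
# sum_i f_i I_i = 0.6*100 + 0.3*50 = 75.
print(round(float(q[-1]), 2))  # -> 75.0
```

Evaluating such a model costs microseconds per control candidate, which is what makes embedding it inside an EnOpt loop with hundreds of realizations tractable.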

Authors:Véronique Gervais; Mickaële Le Ravalec Abstract: Numerical representations of a target reservoir can help to assess the potential of different development plans. To be as predictive as possible, these representations or models must reproduce the data (static, dynamic) collected on the field. However, constraining reservoir models to dynamic data – the history-matching process – can be very time consuming. Many uncertain parameters need to be taken into account, such as the spatial distribution of petrophysical properties. This distribution is mostly unknown and usually represented by millions of values populating the reservoir grid. Dedicated parameterization techniques make it possible to investigate many spatial distributions from a small number of parameters. The efficiency of the matching process can be improved from the perturbation of specific regions of the reservoir. Distinct approaches can be considered to define such regions. For instance, one can refer to streamlines. The leading idea is to identify areas that influence the production behavior where the data are poorly reproduced. Here, we propose alternative methods based on connectivity analysis to easily provide approximate influence areas for any fluid-flow simulation. The reservoir is viewed as a set of nodes connected by weighted links that characterize the distance between two nodes. The path between nodes (or grid blocks) with the lowest cumulative weight yields an approximate flow path used to define influence areas. The potential of the approach is demonstrated on the basis of 2D synthetic cases for the joint integration of production and 4D saturation data, considering several formulations for the weights attributed to the links. PubDate: 2017-06-21 DOI: 10.1007/s10596-017-9663-y
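The lowest-cumulative-weight path between grid blocks is a textbook shortest-path computation. A Dijkstra sketch on a small grid, with link weights taken as the mean of two hypothetical cell weights, standing in for whichever of the paper's weight formulations is chosen:

```python
import heapq

def min_weight_path(cell_weight, source, target):
    """Dijkstra on a grid graph: nodes are grid blocks, links join
    4-neighbours, and the link weight (mean of the two cell weights,
    e.g. an inverse-transmissibility measure) plays the role of
    distance.  Returns the lowest cumulative weight, which traces an
    approximate flow path between the two blocks."""
    n_rows, n_cols = len(cell_weight), len(cell_weight[0])
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == target:
            return d
        if d > dist.get((i, j), float("inf")):
            continue                       # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n_rows and 0 <= nj < n_cols:
                w = 0.5 * (cell_weight[i][j] + cell_weight[ni][nj])
                if d + w < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = d + w
                    heapq.heappush(heap, (d + w, (ni, nj)))
    return float("inf")

# A high-weight (low-permeability) barrier in the middle row forces the
# injector-producer path around it, just as flow would divert.
grid = [[1.0, 1.0, 1.0],
        [9.0, 9.0, 1.0],
        [1.0, 1.0, 1.0]]
print(min_weight_path(grid, (0, 0), (2, 0)))  # -> 6.0
```

Cells near the returned path would then be flagged as the influence area to perturb during history matching.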

Authors:Savithru Jayasinghe; David L. Darmofal; Nicholas K. Burgess; Marshall C. Galbraith; Steven R. Allmaras Abstract: This paper presents a space-time adaptive framework for solving porous media flow problems, with specific application to reservoir simulation. A fully unstructured mesh discretization of space and time is used instead of a conventional time-marching approach. A space-time discontinuous Galerkin finite element method is employed to achieve a high-order discretization on the anisotropic, unstructured meshes. Anisotropic mesh adaptation is performed to reduce the error of a specified output of interest, by using a posteriori error estimates from the dual-weighted residual method to drive a metric-based mesh optimization algorithm. The space-time adaptive method is tested on a one-dimensional two-phase flow problem, and is found to be more efficient in terms of computational cost (degrees-of-freedom and total runtime) required to achieve a specified output error level, when compared to a conventional first-order time-marching finite volume method and the space-time discontinuous Galerkin method on structured meshes. PubDate: 2017-06-20 DOI: 10.1007/s10596-017-9673-9

Authors:M. Baumann; R. Astudillo; Y. Qiu; E. Y. M. Ang; M. B. van Gijzen; R.-É. Plessix Abstract: In this work, we present a new numerical framework for the efficient solution of the time-harmonic elastic wave equation at multiple frequencies. We show that multiple frequencies (and multiple right-hand sides) can be incorporated when the discretized problem is written as a matrix equation. This matrix equation can be solved efficiently using the preconditioned IDR(s) method. We present an efficient and robust way to apply a single preconditioner using MSSS matrix computations. For 3D problems, we present a memory-efficient implementation that exploits the solution of a sequence of 2D problems. Realistic examples in two and three spatial dimensions demonstrate the performance of the new algorithm. PubDate: 2017-06-20 DOI: 10.1007/s10596-017-9667-7

Authors:Halvor Møll Nilsen; Idar Larsen; Xavier Raynaud Abstract: Simulation of fracturing processes in porous rocks can be divided into two main branches: (i) modeling the rock as a continuum enhanced with special features to account for fractures or (ii) modeling the rock by a discrete (or discontinuous) approach that describes the material directly as a collection of separate blocks or particles, e.g., as in the discrete element method (DEM). In the modified discrete element (MDEM) method, the effective forces between virtual particles are modified so that they reproduce the discretization of a first-order finite element method (FEM) for linear elasticity. This provides an expression of the virtual forces in terms of general Hooke's macro-parameters. Previously, MDEM has been formulated through an analogy with linear elements for FEM. We show the connection between MDEM and the virtual element method (VEM), which is a generalization of FEM to polyhedral grids. Unlike standard FEM, which computes strain-states in a reference space, MDEM and VEM compute stress-states directly in real space. This connection leads us to a new derivation of the MDEM method. Moreover, it enables a direct coupling between (M)DEM and domains modeled by a grid made of polyhedral cells. Thus, this approach makes it possible to combine fine-scale (M)DEM behavior near the fracturing region with linear elasticity on complex reservoir grids in the far-field region without regridding. To demonstrate the simulation of hydraulic fracturing, the coupled (M)DEM-VEM method is implemented using the Matlab Reservoir Simulation Toolbox (MRST) and linked to an industry-standard reservoir simulator. Similar approaches have been presented previously using standard FEM, but due to the similarities in the approaches of VEM and MDEM, our work provides a more uniform approach and extends these previous works to general polyhedral grids for the non-fracturing domain. 
PubDate: 2017-06-19 DOI: 10.1007/s10596-017-9668-6

Authors:Sujit K Bose Abstract: Free surface flow of an incompressible fluid over a shallow plane/undulating horizontal bed is characteristically turbulent due to disturbances generated by the bed resistance and other causes. The governing equations of such flows in one dimension, for finite amplitude of surface elevation over the bed, are the continuity equation and a highly nonlinear momentum equation of order three. The method developed in this paper introduces the “discharge” variable q = η U, where η = elevation of the free surface above the bed level, and U = average stream-wise forward velocity. By this substitution, the continuity equation becomes a linear first-order PDE and the momentum equation is transformed after introduction of a small approximation in the fifth term. Next, it is shown by an invertibility argument that q can be a function of η: q = F(η), rendering the momentum equation a first-order, second-degree ODE for F(η) that can be integrated by the Runge-Kutta method. The continuity equation then takes the form of a first-order evolutionary PDE that can be integrated by a Lax-Wendroff type of scheme for the temporal evolution of the surface elevation η. The method is implemented for two particular cases: when the initial elevation is triangular with vertical angle of 120° and when it has a sinusoidal form. The computations exhibit the physically interesting feature that the frontal portion of the propagating wave undergoes a sharp jump followed by tumbling over as a breaker. Compared to other discretization methods, the application of the Runge-Kutta and an extended version of the Lax-Wendroff scheme is much easier. PubDate: 2017-06-15 DOI: 10.1007/s10596-017-9671-y
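The temporal update described above is an instance of a Lax-Wendroff scheme for a scalar conservation law u_t + f(u)_x = 0, with f playing the role of F(η). The generic step is sketched below and sanity-checked on linear advection with a periodic domain rather than on the shallow-water problem itself, which needs the ODE-derived flux and non-periodic boundaries:

```python
import numpy as np

def lax_wendroff_step(u, flux, dfdu, dx, dt):
    """One Lax-Wendroff step for u_t + f(u)_x = 0 on a periodic grid,
    with the wave speed f'(u) evaluated at the cell interfaces."""
    f, a = flux(u), dfdu(u)
    fp, fm = np.roll(f, -1), np.roll(f, 1)   # f_{i+1}, f_{i-1}
    ap = 0.5 * (a + np.roll(a, -1))          # speed at i+1/2
    am = 0.5 * (a + np.roll(a, 1))           # speed at i-1/2
    r = dt / dx
    return (u - 0.5 * r * (fp - fm)
              + 0.5 * r**2 * (ap * (fp - f) - am * (f - fm)))

# Sanity check on linear advection f(u) = c u: after one full period a
# smooth profile returns almost unchanged (the scheme is second order).
c, n = 1.0, 200
dx = 1.0 / n
dt = 0.4 * dx / c                            # CFL number 0.4
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.sin(2.0 * np.pi * x)
u = u0.copy()
for _ in range(500):                         # 500 * dt = one period
    u = lax_wendroff_step(u, lambda v: c * v,
                          lambda v: c * np.ones_like(v), dx, dt)

print(np.max(np.abs(u - u0)) < 0.01)         # -> True
```

Near the sharp jump at the breaker front, a plain Lax-Wendroff scheme produces dispersive oscillations, which is presumably why the paper uses an extended version of the scheme.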

Authors:Aline C. Rocha; Marcio A. Murad; Tien D. Le Abstract: In this work, we construct a new coupled Multiscale/Discrete Fracture Model for compressible flow in a multiporosity shale gas reservoir containing networks of natural and hydraulic fractures. The geological formation is characterized by four distinct length scales and levels of porosity. The window of observation at the finest scale (the nanoscale) portrays the nanopores within organic matter containing adsorbed gas. At the microscale, the medium is formed by two solid phases: organic, composed of kerogen aggregates, and inorganic (clay, quartz, calcite). Such phases are separated by the network of partially saturated interparticle pores where microscopic free gas flow influenced by Knudsen effects and gas diffusion in the immobile water phase occur simultaneously. The upscaling of the local flow to the mesoscale gives rise to a nonlinear homogenized pressure equation in the shale matrix which lies adjacent to the system of natural fractures. Homogenization of the coupled matrix/preexisting fractures to the macroscale leads to a microstructural model of dual porosity type. Such homogenized model is subsequently coupled with the hydrodynamics in the network of induced fractures which, in the context of discrete fracture modeling, are treated as (n − 1)-dimensional objects (n = 2, 3). In order to handle numerically the nonlinear interaction between the different flow equations, we adopt a superposition argument, first proposed by Arbogast (1996), in each iteration of a fixed-point algorithm. The resultant governing equations are discretized by the finite element method and numerical simulations of gas production in stratified arrangements of the fracture networks are presented to illustrate the potential of the multiscale approach. PubDate: 2017-06-10 DOI: 10.1007/s10596-017-9665-9

Authors:Andreas S. Stordal; Geir Nævdal Abstract: Randomized maximum likelihood is known in the petroleum reservoir community as a Bayesian history matching technique that minimizes a stochastic quadratic objective function. The algorithm is well established and has shown promising results in several applications. For linear models with a linear observation operator, the algorithm samples the posterior density accurately. To improve the sampling for nonlinear models, we introduce a generalized version in its simplest form by re-weighting the prior. The weight term is motivated by a sufficiency condition on the expected gradient of the objective function. Recently, an ensemble version of the algorithm was proposed which can be implemented with any simulator. Unfortunately, the method has some practical implementation issues due to computation of low-rank pseudo-inverse matrices, and in practice only the data mismatch part of the objective function is maintained. Here, we take advantage of the fact that the measurement space is often much smaller than the parameter space and project the prior uncertainty from the parameter space to the measurement space to avoid overfitting of data. The proposed algorithms show good performance on synthetic test cases including a 2D reservoir model. PubDate: 2017-06-08 DOI: 10.1007/s10596-017-9664-x
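In the linear-Gaussian setting where randomized maximum likelihood samples the posterior exactly, the whole scheme fits in a few lines: perturb both the data and the prior mean, then minimize the resulting quadratic. A scalar toy problem (all values hypothetical) checks the sampled mean and variance against the analytic posterior N(0.5, 0.5):

```python
import numpy as np

# Scalar linear-Gaussian problem: G = 1, C_d = C_m = 1, prior mean 0,
# observation d = 1.  The exact posterior is N(0.5, 0.5).
rng = np.random.default_rng(0)
d_obs, m_prior, n = 1.0, 0.0, 200_000

eps = rng.standard_normal(n)     # data perturbations  ~ N(0, C_d)
eta = rng.standard_normal(n)     # prior perturbations ~ N(0, C_m)

# Each RML sample minimizes the stochastic quadratic objective
# (m - (d_obs + eps))^2 + (m - (m_prior + eta))^2, whose minimizer is:
samples = 0.5 * ((d_obs + eps) + (m_prior + eta))

print(abs(samples.mean() - 0.5) < 0.02)  # -> True
print(abs(samples.var() - 0.5) < 0.02)   # -> True
```

For nonlinear forward models each minimization requires iterative optimization and the samples are no longer exact, which is the regime the prior re-weighting introduced in the paper targets.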

Authors:Emmanuel Cocher; Hervé Chauris; René-Édouard Plessix Abstract: The objective of seismic imaging is to recover properties of the Earth from surface measurements recorded during active seismic surveys. Migration Velocity Analysis techniques aim at determining a background velocity model (smooth part of the pressure wave velocity model) using the redundancy of seismic data and consist of solving a nested optimisation problem. In the inner loop, an extended reflectivity model (detailed part of the model) is determined from recorded primary reflections through a data-fitting procedure depending on a given background model. In the outer loop, a coherency criterion defined on the extended reflectivity assesses the quality of the background model. The inner problem is usually solved with a single iteration of gradient optimisation, leading to artefacts in the velocity updates. We study the benefits of further iterating on the reflectivity in the inner loop, which also allows the introduction of multiple reflections in the procedure. We propose two strategies for the computation of the gradient of the outer objective function. In the first case, we compute the exact numerical gradient by taking care of the background dependency of all inner iterations. In the second case, we derive an approximate gradient by assuming the optimal reflectivity has been obtained. Both methods are compared on their computational merits and through simple numerical examples on 2D synthetic data sets. The examples illustrate that regularisation of the inner problem is essential to obtain coherent velocity updates. The second approach displays a smaller sensitivity to regularisation and is simpler to implement. PubDate: 2017-06-03 DOI: 10.1007/s10596-017-9656-x