Authors:Ilya M. Indrupskiy; Olga A. Lobanova; Vadim R. Zubov Abstract: Numerical models widely used for hydrocarbon phase behavior and compositional flow simulations are based on the assumption of thermodynamic equilibrium. However, it is not uncommon for oil and gas-condensate reservoirs to exhibit essentially non-equilibrium phase behavior, e.g., in the processes of secondary recovery after pressure depletion below saturation pressure, during gas injection, or during condensate evaporation at low pressures. In many cases, the ability to match field data with an equilibrium model depends on the simulation scale. The only method to account for non-equilibrium phase behavior adopted by the majority of flow simulators is the option of a limited rate of gas dissolution (condensate evaporation) in black-oil models. For compositional simulations, no practical yet thermodynamically consistent method has been presented so far, except for some upscaling techniques in gas injection problems. Previously reported academic non-equilibrium formulations have a common drawback of doubling the number of flow equations and unknowns compared with the equilibrium formulation. In this paper, a unified thermodynamically consistent formulation for compositional flow simulations with a non-equilibrium phase behavior model is presented. The same formulation and a special scale-up technique can be used for upscaling an equilibrium or non-equilibrium model to a coarse-scale non-equilibrium model. A number of test cases for real oil and gas-condensate mixtures are given. Model implementation specifics in a flow simulator are discussed and illustrated with test simulations. A non-equilibrium constant volume depletion algorithm is presented to simulate condensate recovery at low pressures in gas-condensate reservoirs. Results of satisfactory model matching to field data are reported and discussed. PubDate: 2017-04-19 DOI: 10.1007/s10596-017-9648-x

Authors:Luz Angélica Caudillo-Mata; Eldad Haber; Christoph Schwarzbach Abstract: In order to reduce the computational cost of the simulation of electromagnetic responses in geophysical settings that involve highly heterogeneous media, we develop a multiscale finite volume method with oversampling for the quasi-static Maxwell’s equations in the frequency domain. We assume a coarse mesh nested within a fine mesh that accurately discretizes the problem. For each coarse cell, we independently solve a local version of the original Maxwell’s system subject to linear boundary conditions on an extended domain, which includes the coarse cell and a neighborhood of fine cells around it. The local Maxwell’s system is solved using the fine mesh contained in the extended domain and the mimetic finite volume method. Next, these local solutions (basis functions) together with a weak-continuity condition are used to construct a coarse-mesh version of the global problem. The basis functions can be used to obtain the fine-mesh details from the solution of the coarse-mesh problem. Our approach leads to a significant reduction in the size of the final system of equations and the computational time, while accurately approximating the behavior of the fine-mesh solutions. We demonstrate the performance of our method using two 3D synthetic models: one with a mineral deposit in a geologically complex medium and one with random isotropic heterogeneous media. Both models are discretized using an adaptive mesh refinement technique. PubDate: 2017-04-18 DOI: 10.1007/s10596-017-9647-y

Authors:J. Franc; L. Jeannin; G. Debenest; R. Masson Abstract: The present paper proposes a new family of multiscale finite volume methods. These methods usually deal with a dual mesh resolution, where the pressure field is solved on a coarse mesh, while the saturation fields, which may have discontinuities, are solved on a finer reservoir grid, on which petrophysical heterogeneities are defined. Unfortunately, the efficiency of dual mesh methods is strongly related to the definition of the up-gridding and down-gridding steps, which must allow pressure and saturation fields to be defined accurately on both fine and coarse meshes, and to the ability of the approach to be parallelized. In the new dual mesh formulation we developed, the pressure is solved on a coarse grid using a new hybrid formulation of the parabolic problem. This type of multiscale method for the pressure equation, called the multiscale hybrid-mixed method (MHMM), has recently been proposed for finite element and mixed finite element approaches (Harder et al. 2013). We extend here the MHMM to a finite volume discretization, in order to deal with large multiphase reservoir models. The pressure solution is obtained by solving a hybrid form of the pressure problem on the coarse mesh, for which the unknowns are fluxes defined on the coarse mesh faces. Basis flux functions are defined through the resolution of a local finite volume problem, which accounts for local heterogeneity, whereas pressure continuity between cells is weakly imposed through the flux basis functions, regarded as Lagrange multipliers. Such an approach is conservative on both the coarse and local scales and can be easily parallelized, which is an advantage compared to other existing finite volume multiscale approaches. It also offers high flexibility: the coarse discretization can be refined simply by refining the Lagrange multiplier space defined on the coarse faces, without changing either the coarse or the fine mesh. This refinement can also be done adaptively with respect to a posteriori error estimators. The method is applied to single-phase (well-testing) and multiphase flow in heterogeneous porous media. PubDate: 2017-04-12 DOI: 10.1007/s10596-017-9644-1

Authors:Hadi Fattahi Abstract: The uniaxial compressive strength (UCS) of rock is widely used in designing underground and surface rock structures. The testing procedure for this rock strength is expensive and time consuming, and it requires well-prepared rock cores. Therefore, indirect tests, such as the Schmidt hammer test, are often used to estimate the UCS. This test is very easy to carry out because it requires little or no sample preparation, the testing equipment is less sophisticated, and it can be used easily in the field. As a result, compared with the uniaxial compression test, indirect tests are simpler, faster, and more economical. In this paper, the application of two soft computing methods, support vector regression (SVR) optimized by the artificial bee colony (ABC) algorithm and an adaptive neuro-fuzzy inference system with subtractive clustering (ANFIS-SCM), to estimate the UCS of rocks from Schmidt hammer rebound values is demonstrated. The estimation abilities of SVR-ABC and ANFIS-SCM were assessed using experimental data from the open literature. In these models, the Schmidt hammer rebound values (T1–T3, R1–R4) were used as the input parameters, while the UCS was the output parameter. Various statistical performance indexes were used to compare the performance of the estimation models. The results indicate that the ANFIS-SCM model has strong potential for indirect estimation of the UCS of rocks from Schmidt hammer rebound values with a high degree of accuracy and robustness. PubDate: 2017-04-11 DOI: 10.1007/s10596-017-9642-3
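The SVR-ABC and ANFIS-SCM machinery of the paper cannot be reproduced from the abstract alone, but the underlying idea, regressing UCS on Schmidt hammer rebound, can be illustrated with a much simpler stand-in. The sketch below fits a hypothetical exponential correlation UCS = a·exp(b·R) by least squares on log-transformed synthetic data; the functional form, coefficients, and data are illustrative assumptions, not values or methods from the paper.

```python
import math

def fit_exponential(rebound, ucs):
    # Fit UCS = a * exp(b * R) by linear least squares on
    # log(UCS) = ln(a) + b * R  (a classical indirect-correlation form;
    # the paper itself uses SVR and ANFIS models instead).
    n = len(rebound)
    sx = sum(rebound)
    sy = sum(math.log(u) for u in ucs)
    sxx = sum(r * r for r in rebound)
    sxy = sum(r * math.log(u) for r, u in zip(rebound, ucs))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    ln_a = (sy - b * sx) / n
    return math.exp(ln_a), b

# Synthetic rebound/UCS pairs generated from a known law UCS = 5 * exp(0.06 * R):
R = [20.0, 30.0, 40.0, 50.0, 60.0]
ucs = [5.0 * math.exp(0.06 * r) for r in R]
a, b = fit_exponential(R, ucs)
```

On noise-free synthetic data the fit recovers the generating coefficients exactly, which is a useful sanity check before applying any such correlation to real rebound measurements.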

Authors:Edwin Insuasty; Paul M. J. Van den Hof; Siep Weiland; Jan-Dirk Jansen Abstract: In reservoir engineering, it is attractive to characterize the difference between reservoir models in metrics that relate to the economic performance of the reservoir as well as to the underlying geological structure. In this paper, we develop a dissimilarity measure that is based on reservoir flow patterns under a particular operational strategy. To this end, a spatial-temporal tensor representation of the reservoir flow patterns is used, while retaining the spatial structure of the flow variables. This allows reduced-order tensor representations of the dominating patterns and simple computation of a flow-induced dissimilarity measure between models. The developed tensor techniques are applied to cluster model realizations in an ensemble, based on similarity of flow characteristics. PubDate: 2017-04-08 DOI: 10.1007/s10596-017-9641-4

Authors:Loïc Giraldi; Olivier P. Le Maître; Kyle T. Mandli; Clint N. Dawson; Ibrahim Hoteit; Omar M. Knio Abstract: This work addresses the estimation of the parameters of an earthquake model from the resulting tsunami, with an application to the Chile 2010 event. We are particularly interested in the Bayesian inference of the location, the orientation, and the slip of an Okada-based model of the earthquake ocean floor displacement. The tsunami numerical model is based on the GeoClaw software, while the observational data are provided by a single DARTⓇ buoy. We propose in this paper a methodology based on polynomial chaos expansion to construct a surrogate model of the wave height at the buoy location. A correlated noise model is first proposed to represent the discrepancy between the computational model and the data. This step is necessary, as a classical independent Gaussian noise is shown to be unsuitable for modeling the error and to prevent convergence of the Markov chain Monte Carlo sampler. Second, the polynomial chaos model is improved to handle the variability of the arrival time of the wave, using a preconditioned non-intrusive spectral method. Finally, the construction of a reduced model dedicated to Bayesian inference is proposed. Numerical results are presented and discussed. PubDate: 2017-04-07 DOI: 10.1007/s10596-017-9646-z
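As a rough illustration of the non-intrusive spectral projection idea behind polynomial chaos surrogates like the one above, the sketch below projects a toy model output onto Legendre polynomials by quadrature. The target function, polynomial degree, and midpoint quadrature rule are illustrative assumptions, not the GeoClaw-based wave-height surrogate of the paper.

```python
def legendre(k, x):
    # Legendre polynomials P_k(x), orthogonal on [-1, 1], via Bonnet's recursion.
    if k == 0:
        return 1.0
    if k == 1:
        return x
    p0, p1 = 1.0, x
    for j in range(2, k + 1):
        p0, p1 = p1, ((2 * j - 1) * x * p1 - (j - 1) * p0) / j
    return p1

def pce_coefficients(f, degree, nquad=200):
    # Non-intrusive projection: c_k = (2k+1)/2 * integral of f(x) * P_k(x)
    # over [-1, 1], here with a simple midpoint rule standing in for the
    # Gauss quadrature a production implementation would use.
    coeffs = []
    for k in range(degree + 1):
        acc = 0.0
        for i in range(nquad):
            x = -1.0 + (2.0 * i + 1.0) / nquad
            acc += f(x) * legendre(k, x) * (2.0 / nquad)
        coeffs.append((2 * k + 1) / 2.0 * acc)
    return coeffs

# Toy "model output": 3x^2 - 1 = 2 * P_2(x), so the exact expansion is
# c = [0, 0, 2, 0]; the quadrature should recover this closely.
surrogate_coeffs = pce_coefficients(lambda x: 3 * x * x - 1, degree=3)
```

Once the coefficients are known, evaluating the surrogate is a cheap polynomial sum, which is what makes Markov chain Monte Carlo sampling over the surrogate affordable.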

Authors:Sean McGovern; Stefan Kollet; Claudius M. Bürger; Ronnie L. Schwede; Olaf G. Podlaha Abstract: As sedimentation progresses in the formation and evolution of a depositional geologic basin, the rock strata are subject to various stresses. With increasing lithostatic pressure, compressional forces act to compact the porous rock matrix, leading to overpressure buildup, changes in the fluid pore pressure and fluid flow. In the context of petroleum systems modelling, the present study concerns the geometry changes that a compacting basin experiences subject to deposition. The purpose is to track the positions of the rock layer interfaces as compaction occurs. To handle the challenge of potentially large geometry deformations, a new modelling concept is proposed that couples the pore pressure equation with a level set method to determine the movement of lithostratigraphic interfaces. The level set method propagates an interface according to a prescribed speed. The coupling term for the pore pressure and level set equations consists of this speed function, which is dependent on the compaction law. The two primary features of this approach are the simplicity of the grid and the flexibility of the speed function. A first evaluation of the model concept is presented based on an implementation for one spatial dimension accounting for vertical effective stress. Isothermal conditions with a constant fluid density and viscosity were assumed. The accuracy of the implemented numerical solution for the case of a single stratigraphic unit with a linear compaction law was compared to the available analytical solution [38]. The multi-layer setup and the nonlinear case were tested for plausibility. PubDate: 2017-04-06 DOI: 10.1007/s10596-017-9643-2
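The level set mechanics behind this interface tracking can be sketched in one dimension: a signed-distance function is advected with a prescribed speed, and its zero level set marks the lithostratigraphic interface. The constant speed and first-order upwind scheme below are illustrative simplifications of the compaction-law-dependent speed function in the paper.

```python
def advance_interface(phi, speed, dz, dt, steps):
    # 1-D level set update for phi_t + speed * phi_z = 0, first-order
    # upwind in z (valid for speed > 0, CFL = speed*dt/dz <= 1).
    phi = list(phi)
    for _ in range(steps):
        new = phi[:]
        for i in range(1, len(phi)):
            new[i] = phi[i] - dt * speed * (phi[i] - phi[i - 1]) / dz
        phi = new
    return phi

# Interface = zero level set of a signed-distance function on z in [0, 1].
dz = 0.01
grid = [i * dz for i in range(101)]
phi0 = [z - 0.30 for z in grid]   # interface initially at z = 0.30
# Propagate at speed 0.5 for total time 0.4: the zero crossing should
# move by 0.5 * 0.4 = 0.20, i.e. to z = 0.50.
phi = advance_interface(phi0, speed=0.5, dz=dz, dt=0.01, steps=40)
```

In the paper's setting the speed would vary in space and time through the compaction law, but the interface-as-zero-level-set bookkeeping is exactly this.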

Authors:S. Sadeghnejad; M. Masihi Abstract: The effectiveness of secondary recovery methods in reservoir development studies depends on knowledge of how fluid-carrying regions (i.e. good-quality rock types) are connected between injection and production wells. To estimate reservoir performance uncertainty, comprehensive simulations on many reservoir model realisations are necessary, which is very CPU- and time-intensive. Alternatively, we can use much simpler, physically based methods such as the percolation approach. Classic percolation assumes connectivity between opposite 2-D faces of a 3-D system, whereas hydrocarbon production is achieved through active wells that are one-dimensional lines (e.g. vertical, horizontal or deviated wells). The main contribution of this study is to analyse the percolation properties of 3-D continuum percolation models with more realistic well representations during secondary recovery. In particular, the connection of randomly distributed sands (i.e. good-quality rock types) between two lines (representing two wells) located at two corners of the system is modelled by Monte Carlo simulations. Subsequently, the connectivity and conductivity of this line-to-line well representation are compared with those of the face-to-face well representations in previously published results. The critical percolation properties of those systems, as well as the universality concept, are also investigated. Because 3-D models offer many possible connection paths, we found that the principal percolation properties are not altered significantly when the problem with a face-to-face connection is transformed to a line-to-line connection model. PubDate: 2017-04-05 DOI: 10.1007/s10596-017-9640-5
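The paper's experiments are three-dimensional and continuum, but the core Monte Carlo procedure, sampling random sand maps and testing connectivity between two boundaries, can be shown with a minimal discrete analogue: face-to-face site percolation on a 2-D lattice. Grid size, occupancy fractions, and trial counts below are arbitrary illustrative choices.

```python
import random
from collections import deque

def spans(grid):
    # True if occupied sites connect the left face (column 0) to the right
    # face (last column) via nearest-neighbour steps (site percolation).
    rows, cols = len(grid), len(grid[0])
    seen = set((r, 0) for r in range(rows) if grid[r][0])
    q = deque(seen)
    while q:
        r, c = q.popleft()
        if c == cols - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                q.append((nr, nc))
    return False

def connection_probability(p, size, trials, rng):
    # Monte Carlo estimate of the face-to-face connection probability at
    # occupancy fraction p (the sand fraction in the percolation analogy).
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(size)] for _ in range(size)]
        hits += spans(grid)
    return hits / trials

rng = random.Random(0)
low = connection_probability(0.3, 20, 50, rng)   # well below the 2-D site threshold ~0.593
high = connection_probability(0.8, 20, 50, rng)  # well above it
```

The sharp jump of the connection probability across the threshold is the percolation transition; the paper's question is how this picture changes when the "faces" become one-dimensional well lines in 3-D.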

Authors:Lihua Zhang; Baozhi Pan; Gangyi Shan; Sihui Liu; Yuhang Guo; Chunhui Fang Abstract: Due to the strong anisotropy of the physical properties of volcanic reservoirs, their component minerals and pore configurations are very complex, rendering fluid identification very difficult. This paper first computed the cementation exponent based on the triple-porosity model, then used the varied matrix density and matrix neutron values to compute the porosity, and finally combined these with the resistivity well log to build a P1/2 probability distribution curve. The fluid properties were predicted from the shape of the P1/2 probability distribution curve. Good results were achieved when these methods were applied to the volcanic reservoir of the Wangfu fault depression, which indicates that these methods can be used for fluid property identification in volcanic reservoirs and can also serve as a reference for reservoirs of other lithologies. PubDate: 2017-04-01 DOI: 10.1007/s10596-016-9608-x

Authors:Gabriela B. Savioli; Juan E. Santos; José M. Carcione; Davide Gei Abstract: The main objective of this paper is to use a flow simulator to represent the CO2 storage and combine it with a wave propagation simulator in order to obtain synthetic seismograms qualitatively matching real time-lapse field data. The procedure is applied to the Utsira formation at Sleipner field. The field data available to us at the site are a collection of seismic sections (time-lapse seismics) used to monitor the CO2 storage. An estimate of the CO2 injection rate and the location of the injection point are known. Using these data, we build a geological model, including intramudstone layers with openings, whose coordinates are defined by performing a qualitative match of the field seismic data. The flow simulator parameters and the petrophysical properties are updated to obtain CO2 saturation maps, including CO2 plumes, so that the synthetic seismic images resemble the real data. The geological model is based on a porous-media constitutive equation. It considers a poroelastic description of the Utsira formation (a shaly sandstone), based on porosity and clay content, and takes into account the variation of the properties with pore pressure and fluid saturation. Moreover, the model considers the geometrical features of the formations, including the presence of shale seals and fractures. We also assume fractal variations of the petrophysical properties. The numerical simulation of the CO2-brine flow is based on the Black-Oil formulation, which uses the pressure-volume-temperature (PVT) behavior as a simplified thermodynamic model. The corresponding equations are solved using a finite difference IMPES formulation. Using the resulting saturation and pore-pressure maps, we determine an equivalent viscoelastic medium at the macroscale, formulated in the space-frequency domain. 
Wave attenuation and velocity dispersion, caused by heterogeneities formed of gas patches, are described with White’s mesoscopic model. The viscoelastic wave equation is solved in the space-frequency domain for a collection of frequencies of interest using a finite-element iterative domain decomposition algorithm. The space-time solution is recovered by a discrete inverse Fourier transform, allowing us to obtain our synthetic seismograms. In the numerical examples, we determine a set of flow and petrophysical parameters allowing us to obtain synthetic seismograms resembling actual field data. In particular, this approach yields CO2 accumulations below the mudstone layers and synthetic seismograms which successfully reproduce the typical pushdown effect. PubDate: 2017-04-01 DOI: 10.1007/s10596-016-9607-y

Authors:Sylvain Weill; Frederick Delay; Yi Pan; Philippe Ackerer Abstract: A low-dimensional model that describes both saturated and unsaturated flow processes in a single equation is presented. Subsurface flow processes in the groundwater, the vadose zone, and the capillary fringe are accounted for through the computation of aggregated hydrodynamic parameters that result from the integration of the governing flow equations from the bedrock to the land surface. The three-dimensional subsurface flow dynamics are thus described by a two-dimensional equation, allowing for a drastic reduction of model unknowns and simplification of the model parameterizations. This approach is compared with a full resolution of the Richards equation in different synthetic test cases. Because the model reduction stems from the vertical integration of the flow equations, the test cases all use different configurations of heterogeneity for vertical cross-sections of a soil-aquifer system. The low-dimensional flow model shows strong consistency with results from a complete resolution of the Richards equation for both the water table and fluxes. The proposed approach is therefore well suited to the accurate reproduction of complex subsurface flow processes. PubDate: 2017-04-01 DOI: 10.1007/s10596-017-9613-8

Authors:Xiaodong Luo; Tuhin Bhakta Abstract: Estimating the observation error covariance matrix properly is a key step towards successful seismic history matching. Typically, observation errors of seismic data are spatially correlated; therefore, the observation error covariance matrix is non-diagonal. Estimating such a non-diagonal covariance matrix is the focus of the current study. We decompose the estimation into two steps: (1) estimate the observation errors and (2) construct the covariance matrix based on the estimated observation errors. Our focus is on step (1), whereas at step (2) we use a procedure similar to that in Aanonsen et al. (2003). In Aanonsen et al. (2003), step (1) is carried out using a local moving average algorithm. By treating seismic data as an image, this algorithm can be interpreted as a discrete convolution between an image and a rectangular window function. Following the perspective of image processing, we consider three types of image denoising methods, namely, local moving average with different window functions (as an extension of the method in Aanonsen et al. 2003), non-local means denoising, and wavelet denoising. The performance of these three algorithms is compared using both synthetic and field seismic data. It is found that, in our investigated cases, the wavelet denoising method leads to the best performance most of the time. PubDate: 2017-04-01 DOI: 10.1007/s10596-016-9605-0
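Wavelet denoising as compared above follows a three-step recipe: transform, threshold the detail coefficients, invert. A minimal one-level Haar version of that recipe (a didactic stand-in, not the authors' implementation, which would use a full wavelet library and multiple decomposition levels) looks like:

```python
def haar_denoise(signal, threshold):
    # One-level Haar transform: averages (approximation) and half-differences
    # (detail), hard-threshold the detail coefficients, then invert.
    a = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    d = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    d = [x if abs(x) > threshold else 0.0 for x in d]
    out = []
    for ai, di in zip(a, d):
        out.extend([ai + di, ai - di])
    return out

# Piecewise-constant "seismic attribute" profile with small oscillatory noise:
clean = [1.0] * 8 + [3.0] * 8
noisy = [c + (0.2 if i % 2 == 0 else -0.2) for i, c in enumerate(clean)]
denoised = haar_denoise(noisy, threshold=0.3)
```

Because the noise here lives entirely in the detail coefficients while the blocky signal lives in the averages, thresholding removes the noise without blurring the jump; that separation is why wavelet denoising suits spatially correlated seismic errors.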

Authors:Mohammad J. Abdollahifard; Behrooz Nasiri Abstract: Multiple-point geostatistics (MPS) has recently attracted significant attention for the characterization of environmental variables. Such methods proceed by searching a large database of patterns obtained from a training image to find a match for a given data-event. The template-matching phase is usually the most time-consuming part of an MPS method. Linear transformations like the discrete cosine transform or the wavelet transform are capable of representing image patches with a few nonzero coefficients. This sparsifying capability can be exploited to speed up template matching by up to hundreds of times, since only the nonzero coefficients need to be multiplied. The method is only applicable to rectangular data-events, because it is impossible to represent an odd-shaped data-event in a transform domain. In this paper, the method is applied to speed up the image quilting (IQ) method. The experiments show that the proposed method is capable of accelerating the IQ method tens of times without appreciable degradation of the simulation results. The method has the potential to be employed for accelerating optimization-based and raster-scan patch-based MPS algorithms. PubDate: 2017-04-01 DOI: 10.1007/s10596-016-9612-1
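The sparsifying behaviour the speed-up relies on can be seen with a plain orthonormal DCT-II: a smooth patch concentrates almost all of its energy in a few leading coefficients, and because the transform preserves inner products (Parseval), patch distances can be approximated by multiplying only those few coefficients. The 1-D patch below is a synthetic stand-in for a training-image patch.

```python
import math

def dct2_orthonormal(x):
    # Orthonormal DCT-II (same scaling as scipy's dct(..., norm='ortho')):
    # energy-preserving, and concentrates smooth signals into leading terms.
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

patch = [math.sin(0.2 * i) for i in range(16)]   # a smooth synthetic "patch"
coeffs = dct2_orthonormal(patch)
energy = sum(c * c for c in coeffs)
leading = sum(c * c for c in coeffs[:4])          # energy in first 4 of 16 coefficients
```

Truncating each patch to its handful of significant coefficients is what turns the template-matching inner products into short sparse multiplications.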

Abstract: The problem of multiphase flow in heterogeneous subsurface porous media is one involving many uncertainties. In particular, the permeability of the medium is an important aspect of the model that is inherently uncertain. Properly quantifying these uncertainties is essential in order to make reliable probabilistic predictions and future decisions. In this work, a measure-theoretic framework is employed to quantify uncertainties in a two-phase subsurface flow model in high-contrast media. Given uncertain saturation data from observation wells, the stochastic inverse problem is solved numerically in order to obtain a probability measure on the space of unknown permeability parameters characterizing the two-phase flow. As solving the stochastic inverse problem requires a number of forward model solves, we also incorporate the use of a conservative version of the generalized multiscale finite element method for added efficiency. The parameter-space probability measure is used in order to make predictions of saturation values where measurements are not available, and to validate the effectiveness of the proposed approach in the context of fine and coarse model solves. A number of numerical examples are offered to illustrate the measure-theoretic methodology for solving the stochastic inverse problem using both fine and coarse solution schemes. PubDate: 2017-04-01 DOI: 10.1007/s10596-016-9603-2

Authors:Addy Satija; Celine Scheidt; Lewis Li; Jef Caers Abstract: The conventional paradigm for predicting future reservoir performance from existing production data involves the construction of reservoir models that match the historical data through iterative history matching. This is generally an expensive and difficult task and often results in models that do not accurately assess the uncertainty of the forecast. We propose an alternative re-formulation of the problem, in which the role of the reservoir model is reconsidered. Instead of using the model to match the historical production and then forecasting, the model is used in combination with Monte Carlo sampling to establish a statistical relationship between the historical and forecast variables. The estimated relationship is then used in conjunction with the actual production data to produce a statistical forecast. This allows quantifying posterior uncertainty on the forecast variable without explicit inversion or history matching. The main rationale behind this is that the reservoir model, although highly complex, still remains a simplified representation of the actual subsurface. As statistical relationships can generally only be constructed in low dimensions, compression and dimension reduction of the reservoir models themselves would result in further oversimplification. Conversely, production data and forecast variables are time series data, which are simpler and much more amenable to dimension reduction techniques. We present a dimension reduction approach based on functional data analysis (FDA) and mixed principal component analysis (mixed PCA), followed by canonical correlation analysis (CCA) to maximize the linear correlation between the forecast and production variables. Using these transformed variables, it is then possible to apply linear Gaussian regression and estimate the statistical relationship between the forecast and historical variables. This relationship is used in combination with the actual observed historical data to estimate the posterior distribution of the forecast variable. Sampling from this posterior and reconstructing the corresponding forecast time series allows assessing uncertainty on the forecast. This workflow is demonstrated on a case based on a Libyan reservoir and compared with traditional history matching. PubDate: 2017-04-01 DOI: 10.1007/s10596-017-9614-7
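The final linear Gaussian regression step of a workflow like this can be sketched in one reduced dimension: given ensemble pairs of historical and forecast scores (here standing in for the FDA/PCA/CCA-transformed variables), fit a line and read off the forecast mean and a Gaussian spread at the observed historical score. The data below are synthetic and noise-free, purely to show the mechanics.

```python
def linear_gaussian_forecast(h, f, h_obs):
    # Fit f = a*h + b by least squares over the Monte Carlo ensemble and
    # evaluate at the observed historical score h_obs.  The residual
    # standard deviation plays the role of the Gaussian forecast spread.
    n = len(h)
    hbar = sum(h) / n
    fbar = sum(f) / n
    a = (sum((x - hbar) * (y - fbar) for x, y in zip(h, f))
         / sum((x - hbar) ** 2 for x in h))
    b = fbar - a * hbar
    resid = [y - (a * x + b) for x, y in zip(h, f)]
    sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5
    return a * h_obs + b, sigma

# Synthetic, noise-free ensemble where forecast = 2 * history + 1:
h_scores = [0.0, 1.0, 2.0, 3.0, 4.0]
f_scores = [2.0 * x + 1.0 for x in h_scores]
mean, spread = linear_gaussian_forecast(h_scores, f_scores, 2.5)
```

In the actual workflow, sampling from this Gaussian posterior in the reduced coordinates and back-transforming each sample yields the ensemble of forecast time series whose spread quantifies the uncertainty.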

Authors:Nadav Sorek; Eduardo Gildin; Fani Boukouvala; Burcu Beykal; Christodoulos A. Floudas Abstract: The objective of this paper is to introduce a novel paradigm that reduces the computational effort in waterflooding global optimization problems while realizing smooth well-control trajectories amenable to practical deployment in the field. In order to overcome the problems of slow convergence and non-smooth, impractical control strategies often associated with gradient-free optimization (GFO) methods, we introduce a generalized approach that represents the controls by smooth polynomial approximations, either by a polynomial function or by a piecewise polynomial interpolation, which we denote the function control method (FCM) and the interpolation control method (ICM), respectively. Using these approaches, we optimize the coefficients of the selected functions or the interpolation points in order to represent the well-control trajectories along a time horizon. Our results demonstrate significant computational savings, due to a substantial reduction in the number of control parameters, as we seek the optimal polynomial coefficients or interpolation points describing the control trajectories instead of directly searching for the optimal control values (bottom-hole pressure) at each time interval. We demonstrate the efficiency of the method on two- and three-dimensional models, where we found the optimal variables using a parallel dynamic-neighborhood particle swarm optimization (PSO). We compared FCM-PSO and ICM-PSO to the traditional formulation solved by both gradient-free and gradient-based methods. In all comparisons, both FCM and ICM show very good to superior performance. PubDate: 2017-04-01 DOI: 10.1007/s10596-016-9610-3
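The parameterization idea behind FCM is simply that a handful of polynomial coefficients replace one optimization variable per control step, and the resulting trajectory is smooth by construction. A sketch, with hypothetical coefficient values standing in for the PSO's decision variables:

```python
def polynomial_control(coeffs, t):
    # Evaluate a smooth well-control trajectory c(t) = sum_k a_k * t**k
    # for normalized time t in [0, 1].
    return sum(a * t ** k for k, a in enumerate(coeffs))

# Three polynomial coefficients (the optimizer's variables under FCM) stand
# in for, say, 60 per-interval bottom-hole pressure values; the numbers are
# illustrative, not from the paper.
coeffs = [250.0, -40.0, 10.0]
schedule = [polynomial_control(coeffs, t / 59.0) for t in range(60)]
```

The optimizer thus searches a 3-dimensional space instead of a 60-dimensional one, and any candidate it proposes is automatically a smooth, field-deployable schedule.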

Authors:Hamidreza Hamdi; Ivo Couckuyt; Mario Costa Sousa; Tom Dhaene Abstract: The process of reservoir history matching is a costly task. Many available history-matching algorithms either fail to perform the task or require a large number of simulation runs. To overcome these difficulties, we apply the Gaussian process (GP) modeling technique to approximate the costly objective functions and to expedite finding the global optima. A GP model is a proxy, employed to model the input-output relationship by assuming a multi-Gaussian distribution on the output values. An infill criterion is used in conjunction with the GP model to sequentially add samples with potentially lower outputs. The IC fault model is used to compare the efficiency of the GP-based optimization method with that of other typical optimization methods for minimizing the objective function. In this paper, we present the applicability of the GP modeling approach to reservoir history-matching problems, exemplified by numerical analysis of production data from a horizontal multi-stage fractured tight gas condensate well. The results for the case studied here show quick convergence to the lowest objective values in fewer than 100 simulations for this 20-dimensional problem. This amounts to an almost tenfold speed-up compared to the differential evolution (DE) algorithm, which is also known to be a powerful optimization technique. Sensitivity analyses are conducted to explain the performance of the GP-based optimization technique with various correlation functions. PubDate: 2017-04-01 DOI: 10.1007/s10596-016-9611-2
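A GP proxy in its simplest form is kernel interpolation of the sampled objective values; the infill criterion then exploits the cheap posterior instead of the expensive simulator. The sketch below computes only the posterior mean with a squared-exponential kernel on a toy 1-D misfit; the paper's problem is 20-dimensional and adds an infill criterion and noise/correlation-function tuning on top of this, and the kernel length scale and sample points here are arbitrary illustrative choices.

```python
import math

def rbf(a, b, ell=0.5):
    # Squared-exponential (RBF) correlation function with length scale ell.
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, y):
    # Dense Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior_mean(xs, ys, xq, noise=1e-8):
    # GP regression posterior mean: m(xq) = k(xq, X) (K + noise*I)^{-1} y.
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(xq, x) * w for x, w in zip(xs, alpha))

# Proxy for an "expensive" misfit, evaluated at a few sampled points:
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [(x - 0.6) ** 2 for x in xs]
m = gp_posterior_mean(xs, ys, 0.6)   # cheap prediction near the true minimum
```

An infill criterion such as expected improvement would query this posterior (mean and variance) over many candidate points and send only the most promising one to the simulator, which is where the run-count savings come from.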

Authors:Laura Donatella Campisi Abstract: A method using multilayer perceptrons for analysing diffusion profiles and sketching the temperature history of geological samples is explored. Users of this method can intuitively test and compare results by thinking in terms of analytical solutions of the diffusion equation, whilst the bulk of the work is done computationally. Being neither completely analytical nor numerical, the method is a hybrid and represents an ideal man-machine interaction. The approach presented in this paper should be preferred when the retrieval of diffusion coefficients from concentration profiles using dimensionless parameters is not possible and/or there is more than one unknown parameter in the analytical solution of the diffusion equation. Its versatility is a key factor in extending the potential of Dodson's formulation. The case of a species produced by a radiogenic source and diffusing in a cooling system is therefore discussed. Both the classical change of variable for time-dependent diffusion coefficients and an alternative approach decomposing the overall effect of diffusion into a sum of effects due to smaller events can be used to tackle this problem. As multilayer perceptrons can approximate any function, none of the assumptions originally stated by Dodson are necessary. PubDate: 2017-03-30 DOI: 10.1007/s10596-017-9628-1

Authors:Kai Bao; Alexandre Lavrov; Halvor Møll Nilsen Abstract: Non-Newtonian fluids having Bingham or power-law rheology are common in many applications within drilling and reservoir engineering. Examples of such fluids are drilling muds, foams, heavy oil, hydraulic-fracturing and other stimulation fluids, and cement slurries. Despite the importance of non-Newtonian rheology, it is rarely used in reservoir simulators and fracture flow simulations. We study two types of non-Newtonian rheology: the truncated power-law (Ostwald-de Waele) fluid and the Bingham fluid. For either of the two types of non-Newtonian rheology, we construct relationships between the superficial fluid velocity and the pressure gradient in fractures and porous media. The Bingham fluid is regularized by means of Papanastasiou-type regularization for porous media and by means of a simple hyperbolic function for fracture flow. Approximation by Taylor expansion is used to evaluate the fluid velocity for small pressure gradients to reduce rounding errors. We report simulations of flow in rough-walled fractures for different rheologies and study the effect of fluid parameters on the flow channelization in rough-walled fractures. This effect is known from previous studies. We demonstrate how rheologies on different domains can be included in a fully unstructured reservoir simulation that incorporates discrete fracture modeling (DFM). The above formulation was implemented in the open-source MATLAB Reservoir Simulation Toolbox (MRST), which uses fully implicit discretization on general polyhedral grids, including industry standard grids with DFM. This robust implementation is an important step towards hydro-mechanically coupled simulation of hydraulic fracturing with realistic non-Newtonian fluid rheology since most hydraulic fracturing models implemented so far make use of oversimplified rheological models (e.g., Newtonian or pure power-law). PubDate: 2017-03-29 DOI: 10.1007/s10596-017-9639-y
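For a (non-truncated) power-law fluid, the velocity/pressure-gradient relationship in a smooth slot has a classical closed form, which is the kind of relation the paper constructs before adding truncation and wall-roughness effects. For plane Poiseuille flow of a fluid with consistency K and flow index n through an aperture w under pressure gradient G, the mean velocity is u = n/(2n+1) · (G/K)^(1/n) · (w/2)^((n+1)/n); at n = 1 this reduces to the Newtonian cubic law u = G·w²/(12·μ). The sketch below evaluates this textbook relation with illustrative parameter values (not the paper's regularized or truncated model).

```python
def slot_velocity(grad_p, aperture, K, n):
    # Mean (superficial) velocity of a power-law fluid, tau = K * (shear rate)**n,
    # in plane Poiseuille flow through a smooth slot of the given aperture:
    #   u = n/(2n+1) * (grad_p / K)**(1/n) * (aperture/2)**((n+1)/n)
    h = aperture / 2.0
    return n / (2.0 * n + 1.0) * (grad_p / K) ** (1.0 / n) * h ** ((n + 1.0) / n)

# Illustrative values: 10 kPa/m gradient, 1 mm aperture, K = 1 mPa*s-equivalent.
u_newtonian = slot_velocity(grad_p=1.0e4, aperture=1e-3, K=1.0e-3, n=1.0)
u_thinning = slot_velocity(grad_p=1.0e4, aperture=1e-3, K=1.0e-3, n=0.6)
```

The strong sensitivity of u to n at a fixed pressure gradient is one driver of the flow channelization in rough-walled fractures that the paper studies: shear-thinning concentrates flow in the widest channels even more than the Newtonian cubic law does.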