Authors: Melania Carfagna; Alfio Grillo Abstract: Nowadays, the description of complex physical systems, such as biological tissues, calls for highly detailed and accurate mathematical models. These, in turn, necessitate increasingly elaborate numerical methods as well as dedicated algorithms capable of resolving every detail that they account for. Especially when commercial software is used, the performance of the algorithms coded by the user must be tested and carefully assessed. In Computational Biomechanics, the Spherical Design Algorithm (SDA) is widely used to model biological tissues that, like articular cartilage, are described as composites reinforced by statistically oriented collagen fibres. The purpose of the present work is to analyse the performance of the SDA, which we implement in commercial software for several sets of integration points (referred to as “spherical designs”), and to compare the results with those determined by using an appropriate set of points proposed in this manuscript. As terms of comparison we take the results obtained by employing the integration scheme Integral, available in Matlab\(^{\textregistered}\). For the numerical simulations, we study a well-documented benchmark test on articular cartilage, known as the ‘unconfined compression test’. The reported numerical results highlight the influence of the fibres on the elasticity and permeability of this tissue. Moreover, some technical issues of the SDA (such as the choice of the quadrature points and their position in the integration domain) are raised and discussed. PubDate: 2017-04-21 DOI: 10.1007/s00791-017-0278-6
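To illustrate the idea behind the SDA — equal-weight quadrature at a spherical design, i.e. a point set on the unit sphere whose average reproduces the sphere integral exactly for all polynomials up to a given degree — here is a minimal sketch, not the authors' implementation; the octahedron-vertex design used below is our own choice of example:

```python
import numpy as np

# Vertices of the regular octahedron form a spherical 3-design:
# equal-weight quadrature at these six points is exact for all
# polynomials of degree <= 3 on the unit sphere.
octahedron = np.array([
    [ 1.0, 0.0, 0.0], [-1.0, 0.0, 0.0],
    [ 0.0, 1.0, 0.0], [ 0.0, -1.0, 0.0],
    [ 0.0, 0.0, 1.0], [ 0.0, 0.0, -1.0],
])

def sphere_integral(f, design):
    """Approximate the integral of f over the unit sphere S^2 by the
    equal-weight average over a spherical design, scaled by |S^2| = 4*pi."""
    return 4.0 * np.pi * np.mean([f(p) for p in design])

# Example: the integral of x^2 over S^2 equals 4*pi/3 by symmetry,
# and a 3-design reproduces this degree-2 integrand exactly.
approx = sphere_integral(lambda p: p[0] ** 2, octahedron)
```

In the SDA, such a design average replaces the directional integral over the statistically oriented fibre distribution; finer designs raise the polynomial degree that is integrated exactly.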

Authors: Klaus-Peter Kröhn Abstract: The underground Hard Rock Laboratory (HRL) at Äspö, Sweden, is located in granitic rock and dedicated to investigations concerning deep geological radioactive waste disposal. Several in-situ experiments have been performed in the HRL with respect to geotechnical barriers which are intended to protect the waste canisters against the prevailing groundwater. Among them are the recent Buffer-Rock Interaction Experiment (BRIE) and, on a much larger scale, the long-term Prototype Repository (PR) experiment. Modelling of the surrounding groundwater flow systems has been performed with the code \(\hbox {d}^{3}\hbox {f}\) using an approach in which large fractures are represented discretely in the model while the remaining set of smaller fractures—also called “background fractures”—is assumed to act like an additional homogeneous continuum. The approach was first applied to the BRIE in a cube-like domain of 40 m side length. Calibration of the model resulted in a considerable increase of the matrix permeability due to the influence of the background fractures. To check the validity of the approach, the calibrated data for the BRIE were applied to a model of the much larger PR, which is also located in the HRL but at quite some distance from the BRIE. Only moderate modifications of the initially used permeabilities sufficed to fit the numerous outflow data for the PR tunnel as well as for the six “deposition boreholes”. The approach chosen for the BRIE can thus be considered to have been successfully transferred to the PR, building considerable confidence in the conceptual approach. PubDate: 2017-04-12 DOI: 10.1007/s00791-017-0279-5

Authors:L. John; P. Pustějovská; O. Steinbach Abstract: Abstract Hemodynamic indicators such as the averaged wall shear stress (AWSS) and the oscillatory shear index (OSI) are well established to characterize areas of arterial walls with respect to the formation and progression of aneurysms. Here, we study two different forms for the wall shear stress vector from which AWSS and OSI are computed. One is commonly used as a generalization from the two-dimensional setting, the latter is derived from the full decomposition of the wall traction force given by the Cauchy stress tensor. We compare the influence of both approaches on hemodynamic indicators by numerical simulations under different computational settings. Namely, different (real and artificial) vessel geometries, and the influence of a physiological periodic inflow profile. The blood is modeled either as a Newtonian fluid or as a generalized Newtonian fluid with a shear rate dependent viscosity. Numerical results are obtained by using a stabilized finite element method. We observe profound differences in hemodynamic indicators computed by these two approaches, mainly at critical areas of the arterial wall. PubDate: 2017-04-12 DOI: 10.1007/s00791-017-0277-7
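Whichever form of the wall shear stress vector is used, the two indicators are then computed from its time series over a cardiac cycle. A minimal sketch of the standard indicator formulas (our own illustration, not the paper's code; uniform time sampling is assumed):

```python
import numpy as np

def hemodynamic_indicators(tau):
    """Compute AWSS and OSI from a time series of wall shear stress
    vectors tau, shape (n_timesteps, 3), sampled uniformly over one
    cardiac cycle.

    AWSS is the time average of |tau|;
    OSI = 0.5 * (1 - |time average of tau| / time average of |tau|)
    ranges from 0 (unidirectional WSS) to 0.5 (purely oscillatory WSS).
    """
    mean_of_magnitudes = np.linalg.norm(tau, axis=1).mean()
    magnitude_of_mean = np.linalg.norm(tau.mean(axis=0))
    awss = mean_of_magnitudes
    osi = 0.5 * (1.0 - magnitude_of_mean / mean_of_magnitudes)
    return awss, osi
```

Because OSI divides the magnitude of the averaged vector by the average of the magnitudes, any change in the definition of the WSS vector propagates directly into both indicators.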

Authors: Randolph E. Bank; Chris Deotte Abstract: This paper discusses the effects that partitioning has on the convergence rate of Domain Decomposition. When Finite Elements are employed to solve a second-order elliptic partial differential equation with strong convection and/or anisotropic diffusion, the shape and alignment of a partition’s parts significantly affect the Domain Decomposition convergence rate. Given a PDE, let b be the direction of convection or the prominent direction of anisotropic diffusion; if one considers traversing the domain in the direction of b, partitions having fewer parts to traverse in this direction converge faster, while partitions having more converge more slowly. PubDate: 2017-01-20 DOI: 10.1007/s00791-016-0271-5

Authors: Rolf Krause; Alessandro Rigazzi; Johannes Steiner Pages: 1 - 15 Abstract: The parallel solution of constrained minimization problems requires special care to be taken with respect to the information transfer between the different subproblems. Here, we present a nonlinear decomposition approach which employs an additional nonlinear correction step along the processor interfaces. Our approach is generic in the sense that it can be applied to a wide class of minimization problems with strongly local nonlinearities, including even nonsmooth minimization problems. We also describe the implementation of our nonlinear decomposition method in the object-oriented library ObsLib++. The flexibility of our approach and its implementation is demonstrated on different problem classes such as obstacle problems, frictional contact problems and biomechanical applications. For the same examples, the number of iterations, computation time, and parallelization speedup are measured, and the results demonstrate that the implementation scales reasonably well up to 4096 processors. PubDate: 2016-02-01 DOI: 10.1007/s00791-016-0267-1 Issue No: Vol. 18, No. 1 (2016)

Authors: Peter Frolkovič; Michael Lampe; Gabriel Wittum Pages: 17 - 29 Abstract: When a realistic modelling of radioactive contaminant transport in flowing groundwater is required, very large systems of coupled partial and ordinary differential equations can arise that have to be solved numerically. For that purpose, the software package \(r^3t\) has been developed, in which several advanced numerical methods are implemented to solve such models efficiently and accurately. Using the software tools of \(r^3t\), one can successfully treat nontrivial mathematical problems such as advection-dominated systems with different retardation of transport for each component and with nonlinear Freundlich sorption and/or precipitation. Additionally, long-time simulations on complex 3D geological domains using unstructured grids can be realized. In this paper we introduce and summarize the most important and novel features of numerical simulation for radioactive contaminant transport in porous media when using \(r^3t\). PubDate: 2016-02-01 DOI: 10.1007/s00791-016-0268-0 Issue No: Vol. 18, No. 1 (2016)

Authors: Peter Frolkovič; Dmitriy Logashenko; Christian Wehner Pages: 31 - 52 Abstract: In this paper we deal with the application of the flux-based level set method to moving interface computations on unstructured grids. The focus lies on overcoming the known difficulties of level set methods, e.g. accurate computation of important geometric properties, reliable and precise reinitialization of the level set function, and the adaptation of standard discretization methods to the moving boundary case. The basic building block of our approach is the high-resolution flux-based level set method for the general advection equation (Frolkovič and Mikula in SIAM J Sci Comput 29(2):579–597, 2007, Frolkovič and Wehner in Comput Vis Sci 12(6):626–650, 2009). We extend this method to the problem of reinitialization of the level set function on unstructured grids by using quadratic interpolation to compute distances for nodes close to the interface. To realize numerical simulations for some applications with moving boundaries, we adapt the ghost fluid method approach (Gibou and Fedkiw in J Comput Phys 202:577–601, 2005) to unstructured grids. The idea is to describe the development of the moving boundary with a level set formulation while the computational grid remains fixed and the boundary conditions are enforced using some extrapolation. Our main motivation is the numerical solution of two-phase incompressible flow problems. In addition to the previously mentioned steps, we introduce further numerical schemes in the framework of finite volume discretization for the flow. Possible jumps of the pressure and of the directional derivative of the velocity at the interface are modeled directly within the method using the approach of extended approximation spaces. Besides that, an algorithm for the computation of the curvature is considered that exhibits second-order accuracy for some examples. Numerical experiments are provided for the presented methods. PubDate: 2016-02-01 DOI: 10.1007/s00791-016-0269-z Issue No: Vol. 18, No. 1 (2016)
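The idea of reinitialization by quadratic interpolation near the interface can be illustrated in one dimension: wherever the level set function changes sign between two nodes, the interface position is taken as the root of a local quadratic, and the adjacent nodes are reset to their signed distances. This is a sketch of the idea only, not the authors' unstructured-grid algorithm:

```python
import numpy as np

def reinit_near_interface(x, phi):
    """1-D sketch: wherever phi changes sign between two grid nodes,
    locate the interface as the root of the quadratic through three
    nearby nodes and reset both adjacent nodes to their signed
    distances to that root. Nodes away from the interface are left
    untouched (a full reinitialization would propagate distances
    outward from these corrected values)."""
    d = phi.copy()
    n = len(x)
    for i in range(n - 1):
        if phi[i] * phi[i + 1] < 0.0:
            j = min(max(i, 1), n - 2)  # centre of a valid 3-node stencil
            coef = np.polyfit(x[j - 1:j + 2], phi[j - 1:j + 2], 2)
            roots = np.roots(coef)
            roots = roots[np.isreal(roots)].real
            # keep the root lying inside the sign-change cell
            root = roots[(roots >= x[i]) & (roots <= x[i + 1])][0]
            d[i] = np.sign(phi[i]) * abs(x[i] - root)
            d[i + 1] = np.sign(phi[i + 1]) * abs(x[i + 1] - root)
    return d
```

For a level set function that is locally quadratic, the interface location — and hence the distances — is recovered exactly, which is the gain over linear interpolation.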

Authors: Tim Schenk; Albert B. Gilg; Monika Mühlbauer; Roland Rosen; Jan C. Wehrstedt Pages: 167 - 183 Abstract: Modeling and simulation is an established scientific and industrial method to support engineers in their work in all lifecycle phases—from first concepts or tender to operation and service—of a technical system. Due to the increasing complexity of such systems, e.g. plants, cyber-physical systems and infrastructures, system simulation is rapidly gaining impact. In this paper, a simulation architecture is presented and discussed for three different industrial applications; it offers a client–server concept to master the challenges of a lifecycle-spanning simulation framework. Looking ahead, open software concepts for modeling, simulation and optimization will be required to cover new co-simulation techniques and to realize distributed, e.g. web-based, simulation environments and tools. PubDate: 2016-01-27 DOI: 10.1007/s00791-015-0256-9 Issue No: Vol. 17, No. 4 (2016)

Authors: Stiene Riemer; Christian Wagner Pages: 203 - 216 Abstract: The quantitative assessment of several types of risks, e.g., in rating methods for cash-flow-driven projects, can be reduced to determining the probability that a random variable, for instance representing a cash-flow, drops below a given threshold. That probability can be derived in analytic closed form if the underlying distribution is not too complex. However, in practice there is often a reserve account in place, which saves excess cash to reduce the volatility of the cash-flow available for debt service. Due to the reserve account, the derivation of a solution in analytic closed form is not feasible even in the case of rather simple underlying distributions, e.g., independent Gaussian distributions. In this paper, we present two very efficient approximation methodologies for calculating the probability that a random variable falls below a threshold, allowing for the presence of a reserve account. The first proposed approach is derived using transition probabilities. The resulting recursive scheme can be implemented easily and yields fast and stable results even in the case of dependent cash-flows. The second methodology uses the similarity of the considered stochastic processes to convection–diffusion processes and combines the stochastic transition probabilities with the finite volume method, which is well known for solving partial differential equations. We present numerical results for some realistic test problems, demonstrating convergence of order h for the transition-probability-based approach and of order \(h^2\) for the combination with the finite volume method for sufficiently smooth probability distributions. PubDate: 2016-01-11 DOI: 10.1007/s00791-015-0258-7 Issue No: Vol. 17, No. 4 (2016)
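The quantity both methodologies approximate — the probability that the cash available for debt service ever falls below the payment, in the presence of a capped reserve account — can also be estimated by brute force. The following Monte Carlo sketch is our own baseline illustration under an i.i.d. Gaussian assumption, not either of the paper's two methods:

```python
import numpy as np

def shortfall_probability(mu, sigma, debt, cap, periods,
                          n_paths=200_000, seed=0):
    """Brute-force Monte Carlo estimate of the probability that the
    cash available for debt service (period cash-flow plus reserve)
    ever drops below the debt payment. Excess cash above the payment
    is saved in a reserve account, capped at `cap`. Cash-flows are
    assumed i.i.d. Gaussian with mean mu and std sigma."""
    rng = np.random.default_rng(seed)
    reserve = np.zeros(n_paths)
    shortfall = np.zeros(n_paths, dtype=bool)
    for _ in range(periods):
        available = rng.normal(mu, sigma, n_paths) + reserve
        shortfall |= available < debt
        reserve = np.clip(available - debt, 0.0, cap)
    return shortfall.mean()
```

Such a sampler converges only at the slow Monte Carlo rate, which is exactly why the deterministic order-h and order-\(h^2\) schemes of the paper are attractive; it is, however, a convenient reference for validating them.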

Authors: Randolph E. Bank; Chris Deotte Abstract: In this work, we compare and contrast a few finite element h-adaptive and hp-adaptive algorithms. We test these schemes on three example PDE problems, and we utilize and evaluate an a posteriori error estimate. In the process, we introduce a new framework to study adaptive algorithms and a posteriori error estimators. Our innovative environment begins with a solution u and then uses interpolation to simulate solving a corresponding PDE. As a result, we always know the exact error and we avoid the noise associated with solving. Using an effort indicator, we evaluate the relationship between accuracy and computational work. We report the order of convergence of the different approaches, and we evaluate the accuracy and effectiveness of an a posteriori error estimator. PubDate: 2016-12-26 DOI: 10.1007/s00791-016-0272-4
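The interpolation-based testing environment described above can be mimicked in one dimension: interpolate a known solution u on a grid, measure the exact error against a fine reference grid, and read off the convergence order without any solver noise. A minimal sketch of the idea, not the authors' finite element framework:

```python
import numpy as np

def interpolation_error(u, n):
    """Simulate 'solving' on an n-node uniform grid on [0, 1] by
    piecewise-linear interpolation of the known solution u, and return
    the exact max-norm error measured on a much finer reference grid.
    No PDE is solved, so the reported error contains no solver noise."""
    nodes = np.linspace(0.0, 1.0, n)
    ref = np.linspace(0.0, 1.0, 20 * n)
    return np.max(np.abs(u(ref) - np.interp(ref, nodes, u(nodes))))

# Halving h should quarter the error for a smooth u (order h^2).
e_coarse = interpolation_error(np.sin, 33)
e_fine = interpolation_error(np.sin, 65)
```

With the exact error always available, an a posteriori estimator can be judged directly against the truth rather than against another approximation.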

Authors: Tao Cui; Jinchao Xu; Chen-Song Zhang Abstract: Due to the increasing complexity of supercomputers, hard and soft errors are causing more and more problems in high-performance scientific and engineering computation. In order to improve the reliability (increase the mean time to failure) of computing systems, considerable effort has been devoted to developing techniques to forecast, prevent, and recover from errors at different levels, including architecture, application, and algorithm. In this paper, we focus on algorithmically error-resilient iterative solvers and introduce a redundant subspace correction method. Using a general framework of redundant subspace corrections, we construct iterative methods which have the following properties: (1) they maintain convergence when an error occurs, assuming it is detectable; (2) they introduce low computational overhead when no error occurs; (3) they require only a small amount of point-to-point communication compared to traditional methods and maintain good load balance; (4) they improve the mean time to failure. Preliminary numerical experiments demonstrate the efficiency and effectiveness of the new subspace correction method. For simplicity, the main ideas of the proposed framework are demonstrated using Schwarz methods without a coarse space, which do not scale well in practice. PubDate: 2016-12-22 DOI: 10.1007/s00791-016-0270-6

Authors: Zheng Li; Shuhong Wu; Chen-Song Zhang; Jinchao Xu; Chunsheng Feng; Xiaozhe Hu Abstract: Numerical simulation based on fine-scale reservoir models helps petroleum engineers in understanding fluid flow in porous media and achieving a higher recovery ratio. Fine-scale models give rise to large-scale linear systems and thus require effective solvers to finish a simulation in reasonable turn-around time. In this paper, we study the convergence, robustness, and efficiency of a class of multi-stage preconditioners accelerated by Krylov subspace methods for solving Jacobian systems from a fully implicit discretization. We compare components of these preconditioners, including decoupling and sub-problem solvers, for fine-scale reservoir simulation. Several benchmark and real-world problems, including a ten-million-cell reservoir problem, were simulated on a desktop computer. Numerical tests show that the combination of the alternating block factorization method and a multi-stage subspace correction preconditioner gives a robust and memory-efficient solver for fine-scale reservoir simulation. PubDate: 2016-12-21 DOI: 10.1007/s00791-016-0273-3

Authors: E. Carlini; R. Ferretti Abstract: We propose a Semi-Lagrangian scheme coupled with Radial Basis Function interpolation for approximating a curvature-related level set model, which was proposed by Zhao et al. (Comput Vis Image Underst 80:295–319, 2000) to reconstruct unknown surfaces from sparse data sets. The main advantages of the proposed scheme are the possibility of solving the level set method on unstructured grids and of concentrating the reconstruction points in the neighbourhood of the data set, with a consequent reduction of the computational effort. Moreover, the scheme is explicit. Numerical tests show the accuracy and robustness of our approach in reconstructing curves and surfaces from relatively sparse data sets. PubDate: 2016-12-21 DOI: 10.1007/s00791-016-0274-2
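The Radial Basis Function interpolation used as a building block can be sketched in a few lines: a kernel-based interpolant through scattered points that needs no grid, which is what frees the scheme from structured meshes. An illustration only — the Gaussian kernel and the shape parameter eps below are our assumptions, not the paper's choices:

```python
import numpy as np

def rbf_interpolant(centers, values, eps=4.0):
    """Build a Gaussian RBF interpolant
    s(x) = sum_j w_j * exp(-(eps * |x - x_j|)^2)
    through scattered data (centers, values). The weights solve the
    symmetric collocation system; works in any spatial dimension."""
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    weights = np.linalg.solve(np.exp(-(eps * r) ** 2), values)

    def s(x):
        d = np.linalg.norm(np.asarray(x) - centers, axis=-1)
        return np.exp(-(eps * d) ** 2) @ weights

    return s
```

In the Semi-Lagrangian setting, such an interpolant evaluates the level set function at the feet of characteristics, which generally do not coincide with any node.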

Authors: Sambasiva Rao Chinnamsetty; Mike Espig; Wolfgang Hackbusch Pages: 267 - 275 Abstract: The computation of a six-dimensional density matrix is the crucial step in the evaluation of the kinetic energy in electronic structure calculations. For molecules with heavy nuclei, one has to consider a very refined mesh in order to deal with the nuclear cusps. This leads to high computational time and requires huge amounts of memory for the computation of the density matrix. To reduce the computational complexity and avoid discretization errors in the approximation, we use mesh-free canonical tensor products in electronic structure calculations. In this paper, we approximate the six-dimensional density matrix in an efficient way and then compute the kinetic energy. Accuracy is examined by comparing our computed kinetic energy with the exact kinetic energy. PubDate: 2015-12-01 DOI: 10.1007/s00791-016-0263-5 Issue No: Vol. 17, No. 6 (2015)

Authors: Felix Henneke; Manfred Liebmann Pages: 277 - 293 Abstract: A generalized Suzuki–Trotter (GST) method for the solution of an optimal control problem for quantum molecular systems is presented in this work. The control of such systems gives rise to a minimization problem with constraints given by a system of coupled Schrödinger equations. The computational bottleneck of the corresponding minimization methods is the solution of time-dependent Schrödinger equations. To solve the Schrödinger equations we use the GST framework to obtain an explicit polynomial approximation of the matrix exponential function. The GST method almost exclusively uses the action of the Hamiltonian and is therefore efficient and easy to implement for a variety of quantum systems. Following a first-discretize-then-optimize approach, we derive the correct discrete representation of the gradient and the Hessian. The derivatives can naturally be expressed in the GST framework and can therefore be computed efficiently. By recomputing the solutions of the Schrödinger equations instead of saving the whole time evolution, we are able to significantly reduce the memory requirements of the method at the cost of additional computations. This makes first- and second-order optimization methods viable for large-scale problems. In numerical experiments we compare the performance of different first- and second-order optimization methods using the GST method. We observe fast local convergence of the second-order methods. PubDate: 2015-12-01 DOI: 10.1007/s00791-016-0266-2 Issue No: Vol. 17, No. 6 (2015)
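The key computational pattern — approximating the action of the matrix exponential on a state using only products with the Hamiltonian — can be illustrated generically. The sketch below uses a truncated Taylor polynomial; the paper's GST scheme is a different, Suzuki–Trotter-based polynomial approximation, so this is an illustration of matrix-free propagation only:

```python
import numpy as np

def propagate(H, psi, dt, terms=30):
    """Approximate exp(-1j * dt * H) @ psi with a truncated Taylor
    polynomial, using only matrix-vector products with H. No dense
    exponential is ever formed, so H may be a large sparse operator.
    (Illustrative only: the GST method of the paper constructs a
    different polynomial with better stability properties.)"""
    result = psi.astype(complex).copy()
    term = psi.astype(complex).copy()
    for k in range(1, terms):
        term = (-1j * dt / k) * (H @ term)  # next Taylor term via one H-action
        result = result + term
    return result
```

Because only H-actions are needed, the same routine applies unchanged whether H is a dense matrix, a sparse matrix, or a matrix-free operator, which is the practical appeal of polynomial propagators.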

Authors: A. Hauser; G. Wittum Pages: 295 - 304 Abstract: Several a posteriori indicators in the framework of local grid adaptation and large eddy simulation (LES) are evaluated. In LES, indicators must be capable of bounding not only the discretisation error but also the modeling error. Moreover, the numerical method must be able to adapt the computational grid dynamically, as the regions requiring different resolution are not static. The performance of the different indicators is evaluated in two flow configurations. It turns out that the classic residual-based error indicator and the newly introduced heuristic indicator perform best. PubDate: 2015-12-01 DOI: 10.1007/s00791-016-0265-3 Issue No: Vol. 17, No. 6 (2015)

Authors: Ulrich Langer; Ioannis Toulopoulos Pages: 217 - 233 Abstract: In this work, we study the approximation properties of multipatch dG-IgA methods, which apply the multipatch Isogeometric Analysis discretization concept and the discontinuous Galerkin technique on the interfaces between the patches, for solving linear diffusion problems with diffusion coefficients that may be discontinuous across the patch interfaces. The computational domain is divided into non-overlapping subdomains, called patches in IgA, on which B-spline or NURBS approximation spaces are constructed. The solution of the problem is approximated in every subdomain without imposing any matching grid conditions and without any continuity requirements for the discrete solution across the interfaces. Numerical fluxes with interior penalty jump terms are applied in order to treat the discontinuities of the discrete solution on the interfaces. We provide a rigorous a priori discretization error analysis for diffusion problems in two- and three-dimensional domains, where the solutions patchwise belong to \(W^{l,p}\), with some \(l\ge 2\) and \(p\in ({2d}/{(d+2(l-1))},2]\). In all cases, we show optimal convergence rates of the discretization with respect to the dG-norm. PubDate: 2015-10-01 DOI: 10.1007/s00791-016-0262-6 Issue No: Vol. 17, No. 5 (2015)

Authors: Markus M. Knodel; Arne Nägel; Sebastian Reiter; Martin Rupp; Andreas Vogel; Paul Targett-Adams; Eva Herrmann; Gabriel Wittum Pages: 235 - 253 Abstract: Viruses are a major challenge to human health and prosperity. This holds true for various viruses which are either threatening Europe (like Dengue and Yellow fever) or which are currently causing big health problems, like the hepatitis C virus (HCV). HCV causes chronic liver diseases like cirrhosis and cancer and is the main reason for liver transplantations. Exploring biophysical properties of virus-encoded components and the viral life cycle is an exciting new area of current virological research. In this context, spatial resolution is an aspect that has not yet received much attention, despite strong biological evidence suggesting that intracellular spatial dependence is a crucial factor in the viral replication process. We are developing first spatio-temporally resolved models which mimic the behavior of the important components of virus replication within single liver cells. HCV replication is strongly associated with the intracellular Endoplasmatic Reticulum (ER) network. Here, we present the computational basis for the estimation of the diffusion constant of a central component of HCV genome (viral RNA) replication, namely the NS5a protein, on the surface of realistically reconstructed ER geometries. The basic surface partial differential equation (sPDE) evaluations are performed with UG4 using fast massively parallel multigrid solvers. The numerics of the simulations are studied in detail. Integrated concentrations within special subdomains correspond to experimental FRAP time series. In particular, we analyze the refinement stability in time and space for these integrated concentrations based on diffusion sPDEs upon large unstructured surface grids, using heuristic values for the NS5a diffusion constant. This builds a solid basis for future research not included in this presentation; e.g., the presented refinement stability analysis of the single sPDEs allows for parameter estimation of the NS5a diffusion constant. Our advanced Finite Volume/multigrid techniques could also be applied to study the life cycles of other viruses. PubDate: 2015-10-01 DOI: 10.1007/s00791-016-0261-7 Issue No: Vol. 17, No. 5 (2015)

Authors: P. Luo; C. Rodrigo; F. J. Gaspar; C. W. Oosterlee Pages: 255 - 265 Abstract: In this study, a nonlinear multigrid method is applied to solving the system of incompressible poroelasticity equations with nonlinear hydraulic conductivity. For the unsteady problem, an additional artificial term is utilized to stabilize the solutions when the equations are discretized on collocated grids. We employ two nonlinear multigrid methods, i.e. the “full approximation scheme” and “Newton multigrid”, for solving the corresponding system of equations arising after discretization. For the steady case, both homogeneous and heterogeneous cases are solved, and two different smoothers are examined in the search for an efficient multigrid method. Numerical results show good convergence performance for all the strategies. PubDate: 2015-10-01 DOI: 10.1007/s00791-016-0260-8 Issue No: Vol. 17, No. 5 (2015)

Authors: Christian V. Hansen; Hans J. Schroll; Daniel Wüstner Pages: 151 - 166 Abstract: Fluorescence loss in photobleaching (FLIP) is a modern microscopy method for the visualization of transport processes in living cells. Although FLIP is widespread, a reliable automated analysis of image data is still lacking. This paper presents a framework for the modeling and simulation of FLIP sequences as reaction–diffusion systems on segmented cell images. The cell geometry is extracted from microscopy images using the Chan–Vese active contours algorithm (IEEE Trans Image Process 10(2):266–277, 2001). The PDE model is subsequently solved by the automated Finite Element software package FEniCS (Logg et al. in Automated solution of differential equations by the finite element method. Springer, Heidelberg, 2012). The flexibility of FEniCS allows for spatially resolved reaction–diffusion coefficients in two (or more) spatial dimensions. PubDate: 2015-12-31 DOI: 10.1007/s00791-015-0259-6 Issue No: Vol. 17, No. 4 (2015)