Authors: Markus M. Knodel; Babett Lemke; Michael Lampe; Michael Hoffer; Clarissa Gillmann; Michael Uder; Jens Hillengaß; Gabriel Wittum; Tobias Bäuerle
Pages: 203 - 212
Abstract: Radiologic evaluation of images from computed tomography (CT) or magnetic resonance imaging for diagnostic purposes is based on the analysis of single slices, occasionally supplementing this information with 3D reconstructions as well as surface- or volume-rendered images. However, due to the complexity of anatomical or pathological structures in biomedical imaging, innovative visualization techniques are required to display morphological characteristics three-dimensionally. Virtual reality is a modern tool for representing visual data: the observer has the impression of being “inside” a virtual surrounding, which is referred to as immersive imaging. Such techniques are currently used in technical applications, e.g. in the automobile industry. Our aim is to introduce a workflow, realized within one simple program, that processes common image stacks from CT, produces 3D volume and surface reconstruction and rendering, and finally transfers the data to a virtual reality device equipped with a motion-head-tracking cave automatic virtual environment system. Such techniques have the potential to augment the possibilities of non-invasive medical imaging, e.g. for surgical planning or educational purposes, adding another dimension for advanced understanding of complex anatomical and pathological structures. To this end, the reconstructions are based on advanced mathematical techniques, and the corresponding grids which we can export are intended to form the basis for simulations of mathematical models of the pathogenesis of different diseases.
PubDate: 2018-03-01
DOI: 10.1007/s00791-018-0292-3
Issue No: Vol. 18, No. 6 (2018)

Authors: Andrea Bonito; Juan Pablo Borthagaray; Ricardo H. Nochetto; Enrique Otárola; Abner J. Salgado
Abstract: We present three schemes for the numerical approximation of fractional diffusion, which build on different definitions of such a non-local process. The first method is a PDE approach that applies to the spectral definition and exploits the extension to one higher dimension. The second method is the integral formulation and deals with singular non-integrable kernels. The third method is a discretization of the Dunford–Taylor formula. We discuss pros and cons of each method, error estimates, and document their performance with a few numerical experiments.
PubDate: 2018-03-07
DOI: 10.1007/s00791-018-0289-y
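A minimal 1D sketch of the spectral definition mentioned above: discretize the Laplacian with the standard 3-point stencil under homogeneous Dirichlet conditions, diagonalize it, and apply fractional powers of the eigenvalues. The grid size and the choice s = 1/2 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def spectral_fractional_laplacian(f_vals, s, h):
    """Apply (-Delta)^s via the spectral definition: diagonalize the
    3-point finite-difference Laplacian A = V diag(lam) V^T and use
    A^s = V diag(lam**s) V^T (homogeneous Dirichlet conditions)."""
    n = len(f_vals)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    lam, V = np.linalg.eigh(A)
    return V @ (lam**s * (V.T @ f_vals))

# s = 1/2 applied to the first Dirichlet eigenvector sin(pi x):
# the result is sqrt(lambda_1) * sin(pi x) for the discrete lambda_1
n, h = 63, 1.0 / 64
x = np.linspace(h, 1.0 - h, n)
u = np.sin(np.pi * x)
v = spectral_fractional_laplacian(u, 0.5, h)
```

Since sin(pi x) is an exact eigenvector of the discrete operator, the output can be checked against the closed-form eigenvalue (2 - 2 cos(pi h))/h².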

Authors: Vadym Aizinger; Leon Bungert; Michael Fried
Abstract: Based on the local discontinuous Galerkin method, two substantially different mixed formulations for the subjective surfaces problem are compared using a number of numerical tests of various types. The work also performs the energy stability analysis for both schemes.
PubDate: 2018-02-19
DOI: 10.1007/s00791-018-0291-4

Authors: Volker John; Petr Knobloch; Julia Novo
Abstract: The content of this paper is twofold. First, important recent results concerning finite element methods for convection-dominated problems and incompressible flow problems are described to illustrate the activity in these topics. Second, a number of, in our opinion, important open problems in these fields are discussed. The exposition concentrates on \(H^1\)-conforming finite elements.
PubDate: 2018-02-05
DOI: 10.1007/s00791-018-0290-5

Authors: Daniel Ganellari; Gundolf Haase; Gerhard Zumbusch
Abstract: Algorithms for the numerical solution of the Eikonal equation discretized with tetrahedra are discussed. Several massively parallel algorithms for GPU computing are developed. This includes domain decomposition concepts for tracking the moving wave fronts in sub-domains and over the sub-domain boundaries. Furthermore, a low-memory-footprint implementation of the solver is introduced, which reduces the number of arithmetic operations and enables improved memory access schemes. Numerical tests for different meshes originating from the geometry of a human heart document the decreased runtime of the new algorithms.
PubDate: 2018-02-02
DOI: 10.1007/s00791-018-0288-z
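The local Eikonal update at the heart of such solvers can be illustrated on a uniform Cartesian grid with the classical fast sweeping iteration; this is a simplified serial stand-in for the tetrahedral-mesh, GPU-parallel solvers of the paper, with grid size and source location chosen for illustration.

```python
import numpy as np

def fast_sweeping_eikonal(slowness, h, sources, n_sweeps=8):
    """Solve |grad u| = f (f = slowness) on a uniform 2D grid with
    Gauss-Seidel sweeps in four alternating orderings (fast sweeping)."""
    ny, nx = slowness.shape
    u = np.full((ny, nx), np.inf)
    for i, j in sources:
        u[i, j] = 0.0
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    a = min(u[i - 1, j] if i > 0 else np.inf,
                            u[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(u[i, j - 1] if j > 0 else np.inf,
                            u[i, j + 1] if j < nx - 1 else np.inf)
                    if np.isinf(a) and np.isinf(b):
                        continue  # wave front has not arrived here yet
                    fh = slowness[i, j] * h
                    if abs(a - b) >= fh:   # causal one-sided update
                        cand = min(a, b) + fh
                    else:                  # two-sided (quadratic) update
                        cand = 0.5 * (a + b + np.sqrt(2.0 * fh * fh
                                                      - (a - b) ** 2))
                    u[i, j] = min(u[i, j], cand)
    return u

# unit speed, point source in the centre: u approximates the distance
u = fast_sweeping_eikonal(np.ones((21, 21)), 0.05, [(10, 10)])
```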

Authors: Huda Ibeid; Rio Yokota; Jennifer Pestana; David Keyes
Abstract: Among optimal hierarchical algorithms for the computational solution of elliptic problems, the fast multipole method (FMM) stands out for its adaptability to emerging architectures, having high arithmetic intensity, tunable accuracy, and relaxable global synchronization requirements. We demonstrate that, beyond its traditional use as a solver in problems for which explicit free-space kernel representations are available, the FMM has applicability as a preconditioner in finite-domain elliptic boundary value problems, by equipping it with boundary integral capability for satisfying conditions at finite boundaries and by wrapping it in a Krylov method for extensibility to more general operators. Here, we do not discuss the well-developed applications of FMM to implement matrix–vector multiplications within Krylov solvers of boundary element methods. Instead, we propose using FMM for the volume-to-volume contribution of inhomogeneous Poisson-like problems, where the boundary integral is a small part of the overall computation. Our method may be used to precondition sparse matrices arising from finite difference/element discretizations, and can handle a broader range of scientific applications. It is capable of algebraic convergence rates down to the truncation error of the discretized PDE, comparable to those of multigrid methods, and it offers potentially superior multicore and distributed-memory scalability properties on commodity-architecture supercomputers. Compared with other methods exploiting the low-rank character of off-diagonal blocks of the dense resolvent operator, FMM-preconditioned Krylov iteration may reduce the amount of communication because it is matrix-free and exploits the tree structure of FMM. We describe our tests in reproducible detail with freely available codes and outline directions for further extensibility.
PubDate: 2017-11-09
DOI: 10.1007/s00791-017-0287-5
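The "wrap it in a Krylov method" pattern can be sketched with SciPy's matrix-free LinearOperator interface on a finite-difference Poisson matrix. Implementing an actual FMM is beyond a short example, so an incomplete LU factorization stands in for the FMM kernel evaluation; only the wrapping pattern, not the preconditioner itself, reflects the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# standard 5-point finite-difference Poisson matrix on an n x n grid
n = 32
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = ((sp.kron(I, T) + sp.kron(T, I)) * (n + 1) ** 2).tocsc()
b = np.ones(n * n)

# matrix-free preconditioner wrapped as a LinearOperator for CG;
# here an ILU factorization plays the role of the FMM evaluation
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

iterations = []
x, info = spla.cg(A, b, M=M, callback=lambda xk: iterations.append(1))
```

Any operator that approximates the inverse action, including an FMM evaluation, can be dropped into the same `matvec` slot without changing the Krylov driver.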

Authors: Sriparna Saha; Monalisa Pal; Amit Konar
Abstract: A novel approach to distinguishing 25 body gestures indicative of physical disorders in young and elderly individuals is presented. The well-known Kinect sensor is used, which approximates the human body by 20 body joints and produces a data stream from which the skeleton of the human body is traced. The sampling rate of the data stream is 30 frames per second, where every frame represents a body gesture. The overall system is divided into two parts. The offline part calculates 19 features from each frame representing a diseased gesture. These features are angle and distance information between the 20 body joints, and they correspond to a definite pattern for a specific body gesture. In the online part, a triangular-fuzzy-matching-based algorithm detects real-time gestures with 90.57% accuracy. To achieve better accuracy, a decision tree is employed to separate sitting and standing body gestures. The proposed approach is observed to outperform several contemporary approaches in terms of accuracy while presenting a simple system which is based on medical knowledge and is capable of distinguishing as many as 25 gestures.
PubDate: 2017-10-07
DOI: 10.1007/s00791-017-0281-y
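The angle-and-distance feature extraction can be sketched as follows; the index triples and pairs are illustrative choices, not the paper's exact 19 features.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (in degrees) at joint b formed by segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def frame_features(joints, angle_triples, distance_pairs):
    """Feature vector of joint angles and inter-joint distances for one
    skeleton frame (joints: array of 3D joint coordinates)."""
    feats = [joint_angle(joints[i], joints[j], joints[k])
             for i, j, k in angle_triples]
    feats += [float(np.linalg.norm(joints[i] - joints[j]))
              for i, j in distance_pairs]
    return np.array(feats)
```

One such vector per 30 fps frame would feed the matching or decision-tree stage described above.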

Authors: Sagar Adatrao; Mayank Mittal
Abstract: Detecting the size and/or location of circular objects in images has applications in many areas, such as flow diagnostics, biomedical engineering, and computer vision. The detection accuracy for circular objects largely depends on the accuracy of the centroiding algorithm and the image preprocessing technique. In the present work, an error analysis is performed in determining the centroids of circular objects using synthetic images with eight different signal-to-noise ratios ranging from 2.7 to 17.8. In the first stage, four different centroiding algorithms, namely Center of Mass, Weighted Center of Mass, the Späth algorithm, and the Hough transform, are studied and compared. The error analysis shows that the Späth algorithm performs better than the other algorithms. In the second stage, various image preprocessing techniques, consisting of two filters (Median and Wiener) and five image segmentation methods (Sobel, Prewitt, the Laplacian of Gaussian (LoG) edge detector, basic global thresholding, and Otsu’s global thresholding), are compared for determining the centroids of circular objects using the Späth algorithm. It is found that the Wiener filter plus the LoG edge detector performs better than the other preprocessing techniques. Real images of a calibration target (typical in flow diagnostics) and of the secondary atomization of water droplets are then considered for centroid detection. These two images are preprocessed using the Wiener filter plus the LoG edge detector and then processed using the Späth algorithm to detect the centroids of circular objects. It is observed that the results for the real image of the calibration target and for the synthetic images are comparable. Also, based on visual inspection, the centroids detected in the real image of water droplets are found to be reasonably accurate.
PubDate: 2017-10-06
DOI: 10.1007/s00791-017-0286-6
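A minimal sketch of the simplest of the four compared algorithms, the (weighted) Center of Mass centroid, on a synthetic bright spot; the Späth circle fit and Hough transform are more involved and not reproduced here.

```python
import numpy as np

def weighted_center_of_mass(img, threshold=0.0):
    """Intensity-weighted mean pixel position (row, col) of a spot:
    pixels above the threshold contribute with their intensity as weight."""
    w = np.where(img > threshold, img, 0.0).astype(float)
    rows, cols = np.indices(img.shape)
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total

# synthetic Gaussian spot centred at row 12.0, column 20.0
rows, cols = np.indices((32, 48))
img = np.exp(-((rows - 12.0) ** 2 + (cols - 20.0) ** 2) / 8.0)
cy, cx = weighted_center_of_mass(img)
```

For a noise-free symmetric spot the weighted mean recovers the true sub-pixel centre; the paper's error analysis concerns how this degrades with noise and preprocessing.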

Authors: R. D. Falgout; S. Friedhoff; Tz. V. Kolev; S. P. MacLachlan; J. B. Schroder; S. Vandewalle
Abstract: We consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.
PubDate: 2017-10-06
DOI: 10.1007/s00791-017-0283-9

Authors: Yangang Chen; Justin W. L. Wan
Abstract: We propose multigrid methods for a convergent mixed finite difference discretization of the two-dimensional Monge–Ampère equation. We apply a mixed standard 7-point-stencil and semi-Lagrangian wide-stencil discretization, such that the numerical solution is guaranteed to converge to the viscosity solution of the Monge–Ampère equation. We investigate multigrid methods for two scenarios. The first scenario applies the standard 7-point-stencil discretization on the entire computational domain; we use a full approximation scheme with four-directional alternating line smoothers. The second scenario considers the more general mixed-stencil discretization and is used for the linearized problem. We propose a coarsening strategy where wide-stencil points are set as coarse grid points. Linear interpolation is applied on the entire computational domain. At wide-stencil points, injection as the restriction yields a good coarse grid correction. Numerical experiments show that the convergence rates of the proposed multigrid methods are mesh-independent.
PubDate: 2017-10-05
DOI: 10.1007/s00791-017-0284-8

Authors: Duncan Kioi Gathungu; Alfio Borzì
Abstract: The fast multigrid solution of an optimal control problem governed by a convection–diffusion partial integro-differential equation is investigated. This optimization problem considers a cost functional of tracking type and a constrained distributed control. The optimal control sought is characterized by the solution to the corresponding optimality system, which is approximated by finite volume and quadrature discretization schemes and solved by multigrid techniques. The proposed multigrid approach combines a multigrid method for the governing model with a fast multigrid integration method. The convergence of this solution procedure is analyzed by local Fourier analysis and validated by results of numerical experiments.
PubDate: 2017-10-03
DOI: 10.1007/s00791-017-0285-7

Authors: Rajib Sarkar; Soumya Kanti Naskar; Sanjoy Kumar Saha
Abstract: Classification of music signals is a fundamental step for organized archival of music collections and fast retrieval thereafter. For Indian classical music, the raga is the basic melodic framework. Manual identification of a raga demands high expertise, which is not easily available; thus an automated system for raga identification is of great importance. In this work, we study the basic properties of the ragas in North Indian (Hindusthani) classical music and design features to capture them. A pitch-based swara (note) profile is formed, and the occurrence and energy distribution of notes generated from the profile are used as features. The note sequence plays an important role in raga composition; the proposed note co-occurrence matrix summarizes this aspect. An audio clip is represented by these features, which have a strong correlation with the properties of the raga. A support vector machine is used for classification. Experiments are conducted on a diversified dataset, and the performance of the proposed work is compared with two other systems. It is observed that the proposed methodology performs better.
PubDate: 2017-09-23
DOI: 10.1007/s00791-017-0282-x
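The note co-occurrence idea can be sketched as a transition-count matrix over the 12 swaras; the symbol set and the normalisation below are illustrative assumptions, not the paper's exact feature definition.

```python
import numpy as np

# the 12 swaras of an octave (illustrative symbol set)
SWARAS = ["S", "r", "R", "g", "G", "m", "M", "P", "d", "D", "n", "N"]

def note_cooccurrence(note_sequence):
    """Build a 12 x 12 matrix counting how often swara j directly
    follows swara i in a note sequence, normalised so that all
    entries sum to one (to make clips of different length comparable)."""
    idx = {s: k for k, s in enumerate(SWARAS)}
    C = np.zeros((12, 12))
    for a, b in zip(note_sequence, note_sequence[1:]):
        C[idx[a], idx[b]] += 1
    return C / max(C.sum(), 1)

# an ascending-descending phrase: 6 transitions in total
C = note_cooccurrence(["S", "R", "G", "m", "G", "R", "S"])
```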

Authors: Randolph E. Bank
Abstract: Higher-order finite elements present certain challenges for multilevel methods. Such matrices have more nonzero elements and special block structure. In the case of \(h{-}p\) adaptive methods, the block structure is more complicated. In this work we present a simple two-level solver for such systems that exploits these special properties. The convergence rate is (empirically) multigrid-like, at least up to piecewise polynomials of degree nine. Numerical illustrations demonstrate its robustness on a wide variety of problems, including convection–diffusion and Helmholtz equations.
PubDate: 2017-05-25
DOI: 10.1007/s00791-017-0280-z

Authors: Melania Carfagna; Alfio Grillo
Abstract: Nowadays, the description of complex physical systems, such as biological tissues, calls for highly detailed and accurate mathematical models. These, in turn, necessitate increasingly elaborate numerical methods as well as dedicated algorithms capable of resolving each detail they account for. Especially when commercial software is used, the performance of the algorithms coded by the user must be tested and carefully assessed. In Computational Biomechanics, the Spherical Design Algorithm (SDA) is a widely used algorithm to model biological tissues that, like articular cartilage, are described as composites reinforced by statistically oriented collagen fibres. The purpose of the present work is to analyse the performance of the SDA, which we implement in a commercial software package for several sets of integration points (referred to as “spherical designs”), and to compare the results with those determined by using an appropriate set of points proposed in this manuscript. As terms of comparison we take the results obtained by employing the integration scheme Integral, available in Matlab \(^{{\textregistered }}\). For the numerical simulations, we study a well-documented benchmark test on articular cartilage, known as the ‘unconfined compression test’. The reported numerical results highlight the influence of the fibres on the elasticity and permeability of this tissue. Moreover, some technical issues of the SDA (such as the choice of the quadrature points and their position in the integration domain) are proposed and discussed.
PubDate: 2017-04-21
DOI: 10.1007/s00791-017-0278-6
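The SDA's core operation, integrating over fibre orientations with an equal-weight point set on the sphere, can be sketched with a tiny spherical design: the octahedron vertices, which integrate polynomials up to degree 3 exactly. Real spherical designs used in such simulations are much larger; this point set is purely illustrative.

```python
import numpy as np

# vertices of the octahedron: a small spherical design (equal-weight
# quadrature, exact for polynomials up to degree 3 on the unit sphere)
OCTA = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)

def sphere_integral(f, points=OCTA):
    """Approximate the integral of f over the unit sphere as the
    equal-weight average over the design points times 4*pi."""
    return 4.0 * np.pi * np.mean([f(p) for p in points])

# example: the integral of x^2 over the unit sphere equals 4*pi/3,
# and the octahedron design reproduces it exactly
val = sphere_integral(lambda p: p[0] ** 2)
```

The quality of a design, i.e. the highest polynomial degree it integrates exactly for a given number of points, is precisely the kind of technical issue the paper examines.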

Authors: Klaus-Peter Kröhn
Abstract: The underground Hard Rock Laboratory (HRL) at Äspö, Sweden, is located in granitic rock and dedicated to investigations concerning deep geological radioactive waste disposal. Several in-situ experiments have been performed in the HRL with respect to geotechnical barriers which are intended to protect the waste canisters against the prevailing groundwater. Among them are the recent Buffer-Rock Interaction Experiment (BRIE) and, on a much larger scale, the long-term Prototype Repository (PR) experiment. Modelling of the surrounding groundwater flow systems has been performed with the code \(\hbox {d}^{3}\hbox {f}\) using an approach where large fractures are represented discretely in the model while the remaining set of smaller fractures (also called “background fractures”) is assumed to act like an additional homogeneous continuum. It was first applied to the BRIE in a cube-like domain of 40 m side length. Calibration of the model resulted in a considerable increase of matrix permeability due to the influence of the background fractures. To check the validity of the approach, the calibrated data for the BRIE were applied to a model of the much larger PR, which is also located in the HRL but at quite some distance from the BRIE. Only moderate modifications of the initially used permeabilities sufficed to fit the numerous outflow data for the PR tunnel as well as for the six “deposition boreholes”. The chosen approach for the BRIE can thus be considered successfully transferred to the PR, building considerable confidence in the conceptual approach.
PubDate: 2017-04-12
DOI: 10.1007/s00791-017-0279-5

Authors: L. John; P. Pustějovská; O. Steinbach
Abstract: Hemodynamic indicators such as the averaged wall shear stress (AWSS) and the oscillatory shear index (OSI) are well established for characterizing areas of arterial walls with respect to the formation and progression of aneurysms. Here, we study two different forms of the wall shear stress vector from which AWSS and OSI are computed. One is commonly used as a generalization from the two-dimensional setting; the other is derived from the full decomposition of the wall traction force given by the Cauchy stress tensor. We compare the influence of both approaches on the hemodynamic indicators by numerical simulations under different computational settings, namely different (real and artificial) vessel geometries and the influence of a physiological periodic inflow profile. The blood is modeled either as a Newtonian fluid or as a generalized Newtonian fluid with a shear-rate-dependent viscosity. Numerical results are obtained by using a stabilized finite element method. We observe profound differences in the hemodynamic indicators computed by the two approaches, mainly at critical areas of the arterial wall.
PubDate: 2017-04-12
DOI: 10.1007/s00791-017-0277-7
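The full-traction decomposition can be sketched directly: take the traction t = σn from the Cauchy stress tensor at a wall point and subtract its normal component, leaving the wall shear stress vector. The simple-shear numbers below are illustrative, not from the paper.

```python
import numpy as np

def wall_shear_stress(sigma, n):
    """Wall shear stress vector at a wall point with unit outward
    normal n: the traction t = sigma @ n minus its normal component."""
    t = sigma @ n                    # traction vector from Cauchy stress
    return t - np.dot(t, n) * n     # tangential part = WSS vector

# simple shear: only sigma_xy = sigma_yx = mu * gamma_dot is nonzero
mu_gamma = 0.5
sigma = np.array([[0.0, mu_gamma, 0.0],
                  [mu_gamma, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
n = np.array([0.0, 1.0, 0.0])       # wall normal pointing in y
tau = wall_shear_stress(sigma, n)
```

By construction the result is orthogonal to the normal; AWSS and OSI are then time averages of such vectors over a cardiac cycle.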

Authors: Randolph E. Bank; Chris Deotte
Abstract: This paper discusses the effects that partitioning has on the convergence rate of domain decomposition. When finite elements are employed to solve a second-order elliptic partial differential equation with strong convection and/or anisotropic diffusion, the shape and alignment of a partition's parts significantly affect the domain decomposition convergence rate. Given a PDE, let b be the direction of convection or the prominent direction of anisotropic diffusion. Considering a traversal of the domain in the direction of b, partitions having fewer parts to traverse in this direction converge faster, while partitions having more parts converge more slowly.
PubDate: 2017-01-20
DOI: 10.1007/s00791-016-0271-5

Authors: Tao Cui; Jinchao Xu; Chen-Song Zhang
Abstract: Due to the increasing complexity of supercomputers, hard and soft errors are causing more and more problems in high-performance scientific and engineering computation. In order to improve the reliability (increase the mean time to failure) of computing systems, much effort has been devoted to developing techniques to forecast, prevent, and recover from errors at different levels, including the architecture, application, and algorithm levels. In this paper, we focus on algorithmically error-resilient iterative solvers and introduce a redundant subspace correction method. Using a general framework of redundant subspace corrections, we construct iterative methods which have the following properties: (1) they maintain convergence when an error occurs, assuming it is detectable; (2) they introduce low computational overhead when no error occurs; (3) they require only a small amount of point-to-point communication compared to traditional methods and maintain good load balance; (4) they improve the mean time to failure. Preliminary numerical experiments demonstrate the efficiency and effectiveness of the new subspace correction method. For simplicity, the main ideas of the proposed framework are demonstrated using Schwarz methods without a coarse space, which do not scale well in practice.
PubDate: 2016-12-22
DOI: 10.1007/s00791-016-0270-6

Authors: Zheng Li; Shuhong Wu; Chen-Song Zhang; Jinchao Xu; Chunsheng Feng; Xiaozhe Hu
Abstract: Numerical simulation based on fine-scale reservoir models helps petroleum engineers in understanding fluid flow in porous media and achieving a higher recovery ratio. Fine-scale models give rise to large-scale linear systems and thus require effective solvers for these systems to finish a simulation in reasonable turn-around time. In this paper, we study the convergence, robustness, and efficiency of a class of multi-stage preconditioners accelerated by Krylov subspace methods for solving Jacobian systems from a fully implicit discretization. We compare components of these preconditioners, including decoupling and sub-problem solvers, for fine-scale reservoir simulation. Several benchmark and real-world problems, including a ten-million-cell reservoir problem, were simulated on a desktop computer. Numerical tests show that the combination of the alternating block factorization method and a multi-stage subspace correction preconditioner gives a robust and memory-efficient solver for fine-scale reservoir simulation.
PubDate: 2016-12-21
DOI: 10.1007/s00791-016-0273-3

Authors: E. Carlini; R. Ferretti
Abstract: We propose a Semi-Lagrangian scheme coupled with Radial Basis Function interpolation for approximating a curvature-related level set model, which has been proposed by Zhao et al. (Comput Vis Image Underst 80:295–319, 2000) to reconstruct unknown surfaces from sparse data sets. The main advantages of the proposed scheme are the possibility to solve the level set method on unstructured grids, as well as to concentrate the reconstruction points in the neighbourhood of the data set, with a consequent reduction of the computational effort. Moreover, the scheme is explicit. Numerical tests show the accuracy and robustness of our approach to reconstruct curves and surfaces from relatively sparse data sets.
PubDate: 2016-12-21
DOI: 10.1007/s00791-016-0274-2
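The Radial Basis Function interpolation step can be sketched with SciPy's RBFInterpolator on scattered 2D data; the kernel choice, point count, and test function below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# scattered samples of a smooth test function on the unit square
rng = np.random.default_rng(0)
pts = rng.random((200, 2))
vals = np.sin(2 * np.pi * pts[:, 0]) * np.cos(2 * np.pi * pts[:, 1])

# thin-plate-spline RBF interpolant evaluated away from the data sites,
# the kind of meshfree interpolation a Semi-Lagrangian step relies on
interp = RBFInterpolator(pts, vals, kernel="thin_plate_spline")
query = np.array([[0.3, 0.4], [0.7, 0.2]])
approx = interp(query)
exact = np.sin(2 * np.pi * query[:, 0]) * np.cos(2 * np.pi * query[:, 1])
```

Because the interpolant needs no grid, the evaluation points can be concentrated near the data set, which is exactly the advantage the abstract highlights.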