Authors:Liang Meng; Piotr Breitkopf; Guénhaël Le Quilliec; Balaji Raghavan; Pierre Villon Pages: 1 - 21 Abstract: In this paper, we present the concept of a “shape manifold” designed for the reduced-order representation of complex “shapes” encountered in mechanical problems such as design optimization, springback analysis or image correlation. The overall idea is to define the shape space within which the boundary of the structure evolves. The reduced representation is obtained by determining the intrinsic dimensionality of the problem, independently of the original design parameters, and by approximating a hypersurface, i.e. a shape manifold, connecting all admissible shapes represented using level-set functions. In addition, an optimal parameterization may be obtained for arbitrary shapes, where the parameters have to be defined a posteriori. We also develop predictor–corrector “manifold walking” optimization algorithms in the reduced shape space that guarantee the admissibility of the solution without additional constraints. We illustrate the approach on three diverse examples drawn from the field of computational and applied mechanics. PubDate: 2018-01-01 DOI: 10.1007/s11831-016-9189-9 Issue No:Vol. 25, No. 1 (2018)

Authors:T. Taddei; J. D. Penn; M. Yano; A. T. Patera Pages: 23 - 45 Abstract: We present a model-order-reduction approach to simulation-based classification, with particular application to structural health monitoring. The approach exploits (1) synthetic results obtained by repeated solution of a parametrized mathematical model for different values of the parameters, (2) machine-learning algorithms to generate a classifier that monitors the damage state of the system, and (3) a reduced basis method to reduce the computational burden associated with the model evaluations. Furthermore, we propose a mathematical formulation which integrates the partial differential equation model within the classification framework and clarifies the influence of model error on classification performance. We illustrate our approach and we demonstrate its effectiveness through the vehicle of a particular physical companion experiment, a harmonically excited microtruss. PubDate: 2018-01-01 DOI: 10.1007/s11831-016-9185-0 Issue No:Vol. 25, No. 1 (2018)

Authors:Rubén Ibañez; Emmanuelle Abisset-Chavanne; Jose Vicente Aguado; David Gonzalez; Elias Cueto; Francisco Chinesta Pages: 47 - 57 Abstract: Standard simulation in classical mechanics is based on the use of two very different types of equations. The first, of axiomatic character, is related to balance laws (momentum, mass, energy, ...), whereas the second consists of models that scientists have extracted from collected natural or synthetic data. Even if one can be confident in the first type of equations, the second type contains modeling errors. Moreover, this second type of equations remains too particular and often fails to describe new experimental results. The vast majority of existing models lack generality, and therefore must be constantly adapted or enriched to describe new experimental findings. In this work we propose a new method able to directly link data to computers in order to perform numerical simulations. These simulations employ axiomatic, universal laws while minimizing the need for explicit, often phenomenological, models. The technique is based on the use of manifold learning methodologies, which allow the relevant information to be extracted from large experimental datasets. PubDate: 2018-01-01 DOI: 10.1007/s11831-016-9197-9 Issue No:Vol. 25, No. 1 (2018)

Authors:E. Lopez; D. Gonzalez; J. V. Aguado; E. Abisset-Chavanne; E. Cueto; C. Binetruy; F. Chinesta Pages: 59 - 68 Abstract: Image-based simulation is becoming an appealing technique for homogenizing the properties of real microstructures of heterogeneous materials. However, fast computation techniques are needed to take decisions in a limited time-scale. Techniques based on standard computational homogenization are seriously compromised by the real-time constraint. The combination of model reduction techniques and high performance computing helps to alleviate this constraint, but the amount of computation remains excessive in many cases. In this paper we consider an alternative route that makes use of techniques traditionally considered for machine learning purposes in order to extract the manifold in which data and fields can be interpolated accurately, in real time, and with a minimal amount of online computation. Locally Linear Embedding is considered in this work for the real-time thermal homogenization of heterogeneous microstructures. PubDate: 2018-01-01 DOI: 10.1007/s11831-016-9172-5 Issue No:Vol. 25, No. 1 (2018)

Authors:D. González; J. V. Aguado; E. Cueto; E. Abisset-Chavanne; F. Chinesta Pages: 69 - 86 Abstract: Parametric solutions make possible fast and reliable real-time simulations which, in turn, allow real-time optimization, simulation-based control and uncertainty propagation. This opens unprecedented possibilities for robust and efficient design and real-time decision making. The construction of such parametric solutions was addressed in our former works in the context of models whose parameters were easily identified and known in advance. In this work we address more complex scenarios in which the parameters do not appear explicitly in the model—complex microstructures, for instance. In these circumstances the parametric model solution requires combining a technique to find the relevant model parameters with a solution procedure able to cope with high-dimensional models, avoiding the well-known curse of dimensionality. In this work, kPCA (kernel Principal Component Analysis) is used for extracting the hidden model parameters, whereas the PGD (Proper Generalized Decomposition) is used for calculating the resulting parametric solution. PubDate: 2018-01-01 DOI: 10.1007/s11831-016-9173-4 Issue No:Vol. 25, No. 1 (2018)
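The kPCA step can be sketched in a few lines: build an RBF Gram matrix, centre it in feature space, and read the hidden coordinates off its leading eigenvectors. The example below is a generic illustration under assumed data (two concentric rings, an assumed kernel width), not the paper's microstructure application:

```python
import numpy as np

def kpca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: the eigen-decomposition of the centred
    Gram matrix reveals nonlinear 'hidden' coordinates of the data."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)                  # RBF Gram matrix
    J = np.eye(n) - np.ones((n, n)) / n      # centring in feature space
    vals, vecs = np.linalg.eigh(J @ K @ J)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# two concentric rings: not linearly separable in the input coordinates,
# but kPCA separates them along its leading component
rng = np.random.default_rng(0)
th = rng.uniform(0.0, 2.0 * np.pi, 100)
radius = np.r_[np.ones(50), 3.0 * np.ones(50)]
X = np.c_[radius * np.cos(th), radius * np.sin(th)]
Z = kpca(X, n_components=2, gamma=1.0)
```

The leading kPCA coordinate plays the role of the "hidden parameter"; in the paper it would then feed the PGD-based parametric solver.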

Authors:Patrick Héas; Cédric Herzet Pages: 87 - 101 Abstract: This paper deals with model order reduction of parametrical dynamical systems. We consider the specific setup where the distribution of the system’s trajectories is unknown, but the following two sources of information are available: (i) some “rough” prior knowledge on the system’s realisations; (ii) a set of “incomplete” observations of the system’s trajectories. We propose a Bayesian methodological framework to build reduced-order models (ROMs) by exploiting these two sources of information. We emphasise that complementing the prior knowledge with the collected data provably enhances the knowledge of the distribution of the system’s trajectories. We then propose an implementation of the proposed methodology based on Monte-Carlo methods. In this context, we show that standard ROM learning techniques, such as, e.g., proper orthogonal decomposition or dynamic mode decomposition, can be revisited and recast within the probabilistic framework considered in this paper. We illustrate the performance of the proposed approach by numerical results obtained for a standard geophysical model. PubDate: 2018-01-01 DOI: 10.1007/s11831-017-9229-0 Issue No:Vol. 25, No. 1 (2018)
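The proper orthogonal decomposition that the paper revisits probabilistically is, in its standard deterministic form, a truncated SVD of a snapshot matrix. A minimal sketch (synthetic snapshot data assumed for illustration):

```python
import numpy as np

def pod(snapshots, tol=1e-8):
    """Proper orthogonal decomposition of a snapshot matrix (one column per
    observed state) via the thin SVD; the rank is chosen from the energy content."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r], s[:r]

# snapshots of a field that is an exact superposition of three spatial modes
x = np.linspace(0.0, 1.0, 200)
S = np.column_stack([np.sin(2*np.pi*(x - c)) +
                     0.3*np.sin(6*np.pi*x)*np.cos(4*np.pi*c)
                     for c in np.linspace(0.0, 1.0, 50)])
basis, sv = pod(S)
```

Here the snapshot set has exact rank three, so the energy criterion retains exactly three modes; the paper's contribution is to replace such point estimates by a posterior distribution informed by both the prior and incomplete observations.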

Authors:Lionel Mathelin; Kévin Kasper; Hisham Abou-Kandil Pages: 103 - 120 Abstract: This paper introduces a method for efficiently inferring a high-dimensional distributed quantity from a few observations. The quantity of interest (QoI) is approximated in a basis (dictionary) learned from a training set. The coefficients associated with the approximation of the QoI in the basis are determined by minimizing the misfit with the observations. To obtain a probabilistic estimate of the quantity of interest, a Bayesian approach is employed. The QoI is treated as a random field endowed with a hierarchical prior distribution so that closed-form expressions can be obtained for the posterior distribution. The main contribution of the present work lies in the derivation of a representation basis consistent with the observation chain used to infer the associated coefficients. The resulting dictionary is then tailored to be both observable by the sensors and accurate in approximating the posterior mean. An algorithm for deriving such an observable dictionary is presented. The method is illustrated with the estimation of the velocity field of an open cavity flow from a handful of wall-mounted point sensors. Comparison with standard estimation approaches relying on Principal Component Analysis and K-SVD dictionaries is provided and illustrates the superior performance of the present approach. PubDate: 2018-01-01 DOI: 10.1007/s11831-017-9219-2 Issue No:Vol. 25, No. 1 (2018)
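The generic two-step structure behind such methods — learn a basis offline from a training set, then fit its coefficients online to a handful of point observations — can be sketched as follows. Note this sketch uses a plain POD basis and hypothetical sensor locations; the paper's contribution is precisely a dictionary tailored to the sensors instead:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 300)

# training set: random superpositions of three smooth modes (synthetic)
modes = np.c_[np.sin(np.pi*x), np.sin(2*np.pi*x), np.sin(3*np.pi*x)]
train = modes @ rng.normal(size=(3, 400))           # (300, 400) snapshot matrix

# offline: learn the dictionary from the training set (plain POD here)
U, s, _ = np.linalg.svd(train, full_matrices=False)
Phi = U[:, :3]

# online: infer the full field from a handful of point sensors
sensors = np.array([30, 110, 215])                  # hypothetical sensor indices
truth = modes @ np.array([1.0, -0.5, 0.25])
y = truth[sensors]                                  # the few observations
coef, *_ = np.linalg.lstsq(Phi[sensors, :], y, rcond=None)
estimate = Phi @ coef
```

Because the truth lies in the span of the learned dictionary and the sensor rows are well conditioned, the misfit minimization recovers the full field from only three point values.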

Authors:Seunghye Lee; Jingwan Ha; Mehriniso Zokhirova; Hyeonjoon Moon; Jaehong Lee Pages: 121 - 129 Abstract: Since the first journal article on structural engineering applications of neural networks (NN) was published, a large number of articles have appeared on structural analysis and design problems using machine learning techniques. However, due to a fundamental limitation of traditional methods, attempts to apply the artificial NN concept to structural analysis problems have declined significantly over the last decade. Recent advances in deep learning techniques can provide a more suitable solution to those problems. In this study, background information is presented on topics such as methods for alleviating overfitting through the choice of hyper-parameters. A well-known ten-bar truss example is presented to illustrate suitable network conditions and the role of hyper-parameters in structural problems. PubDate: 2018-01-01 DOI: 10.1007/s11831-017-9237-0 Issue No:Vol. 25, No. 1 (2018)
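The role of an overfitting-related hyper-parameter can be made concrete with a self-contained network. The NumPy sketch below (illustrative only — not the study's actual architecture or data) trains a one-hidden-layer network by gradient descent with an L2 weight-decay term `lam`, one of the hyper-parameters that limits overfitting:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 20)[:, None]
y = np.sin(np.pi * X) + 0.1 * rng.normal(size=X.shape)   # noisy targets

# one hidden layer with tanh activation; lam is the L2 (weight-decay)
# hyper-parameter penalising large weights
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lam, lr = 1e-3, 0.05

losses = []
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                 # forward pass
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float((err**2).mean() + lam * ((W1**2).sum() + (W2**2).sum())))
    g_pred = 2.0 * err / len(X)              # backpropagation
    gW2 = H.T @ g_pred + 2.0 * lam * W2
    gb2 = g_pred.sum(0)
    gZ = (g_pred @ W2.T) * (1.0 - H**2)      # tanh derivative
    gW1 = X.T @ gZ + 2.0 * lam * W1
    gb1 = gZ.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```

Raising `lam` trades training fit for smoother predictions, which is the overfitting/regularization balance the study discusses.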

Authors:Jan Neggers; Olivier Allix; François Hild; Stéphane Roux Pages: 143 - 164 Abstract: Since the turn of the century, experimental solid mechanics has undergone major changes with the generalized use of images. The amount of acquired data has literally exploded, and one of today’s challenges is the saturation of mining procedures by such big data sets. With respect to digital image/volume correlation, one of tomorrow’s pathways is to better control and master this data flow with procedures that are optimized for extracting the sought information with minimum uncertainties and maximum robustness. In this paper emphasis is put on various hierarchical identification procedures. Based on such structures, a posteriori model/data reductions are performed in order to make the exploitation of the experimental information far easier and more efficient. Some possibilities related to other model order reduction techniques like the proper generalized decomposition are discussed and new opportunities are sketched. PubDate: 2018-01-01 DOI: 10.1007/s11831-017-9234-3 Issue No:Vol. 25, No. 1 (2018)

Authors:Mar Miñano; Francisco J. Montáns Pages: 165 - 193 Abstract: The conservative elastic behavior of soft materials is characterized by a stored energy function whose shape is usually specified a priori, except for some material parameters. There are hundreds of proposed stored energies in the literature for different materials. The stored energy function may change under loading due to damage effects, but it may be considered constant during unloading–reloading. The two dominant approaches in the literature to model this damage effect are based either on the Continuum Damage Mechanics framework or on the Pseudoelasticity framework. In both cases, additional assumed evolution functions, with their associated material parameters, are proposed. These proposals are semi-inverse, semi-analytical, model-driven and data-adjusted ones. We propose an alternative which may be considered a non-inverse, numerical, model-free, data-driven approach. We call this approach WYPiWYG constitutive modeling. We assume neither global functions nor material parameters, but simply solve numerically the differential equations of a set of tests that completely defines the behavior of the solid under the given assumptions. In this work we extend the approach to model isotropic and anisotropic damage in soft materials. We obtain numerically the damage evolution from experimental tests. The theory can be used for both hard and soft materials, and the infinitesimal formulation is naturally recovered for infinitesimal strains. In fact, we motivate the formulation in a one-dimensional infinitesimal framework and we show that the concepts are immediately applicable to soft materials. PubDate: 2018-01-01 DOI: 10.1007/s11831-017-9233-4 Issue No:Vol. 25, No. 1 (2018)

Authors:Tobias Gleim; Detlef Kuhl Abstract: The current paper establishes different axisymmetric and two-dimensional models for electrostatic, magnetostatic and electromagnetic induction processes. Therein, the Maxwell equations are combined in a monolithic solution strategy. A higher-order finite element discretization using Galerkin’s method in space as well as in time is developed for the electromagnetic approach. In addition, time integration procedures of the Runge–Kutta family are developed. Furthermore, the residual error is introduced to open an alternative way for a numerically efficient estimation of the time integration accuracy of the Galerkin time integration method. The Runge–Kutta methods are enriched by the embedded error estimate. A family of electrostatic, magnetostatic and electromagneto-dynamic boundary and initial boundary value problems with existing analytical solutions is introduced, which will serve as benchmark examples for numerical solution procedures. PubDate: 2018-01-05 DOI: 10.1007/s11831-017-9249-9
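The idea of an embedded error estimate can be illustrated with the smallest possible Runge–Kutta pair: Heun's method (order 2) with an embedded forward-Euler solution (order 1), whose difference estimates the local error. This is a generic sketch on an assumed test problem, not the paper's Maxwell setting:

```python
import math

def heun_embedded(f, t, y, h):
    """One step of Heun's method (order 2) with an embedded forward-Euler
    solution (order 1); their difference estimates the local error."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_low = y + h * k1                    # embedded Euler result (order 1)
    y_high = y + 0.5 * h * (k1 + k2)      # Heun result (order 2)
    return y_high, abs(y_high - y_low)    # step result + local error estimate

# decay problem y' = -y, y(0) = 1, exact solution exp(-t)
f = lambda t, y: -y
y, t, h = 1.0, 0.0, 0.01
errs = []
while t < 1.0 - 1e-12:
    y, e = heun_embedded(f, t, y, h)
    errs.append(e)
    t += h
```

In an adaptive scheme the estimate `e` would control the step size `h`, which is exactly the role of the embedded estimates the paper attaches to its Runge–Kutta integrators.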

Authors:Walter Boscheri Pages: 751 - 801 Abstract: In this work we develop a new class of high order accurate Arbitrary-Lagrangian–Eulerian (ALE) one-step finite volume schemes for the solution of nonlinear systems of conservative and non-conservative hyperbolic partial differential equations. The numerical algorithm is designed for two and three space dimensions, considering moving unstructured triangular and tetrahedral meshes, respectively. As usual for finite volume schemes, data are represented within each control volume by piecewise constant values that evolve in time, hence implying the use of some strategies to improve the order of accuracy of the algorithm. In our approach high order of accuracy in space is obtained by adopting a WENO reconstruction technique, which produces piecewise polynomials of higher degree starting from the known cell averages. Such spatial high order accurate reconstruction is then employed to achieve high order of accuracy also in time using an element-local space–time finite element predictor, which performs a one-step time discretization. Specifically, we adopt a discontinuous Galerkin predictor which can handle stiff source terms that might produce jumps in the local space–time solution. Since we are dealing with moving meshes the elements deform while the solution is evolving in time, hence making the use of a reference system very convenient. Therefore, within the space–time predictor, the physical element is mapped onto a reference element using a high order isoparametric approach, where the space–time basis and test functions are given by the Lagrange interpolation polynomials passing through a predefined set of space–time nodes. The computational mesh continuously changes its configuration in time, following as closely as possible the flow motion. The entire mesh motion procedure is composed of three main steps, namely the Lagrangian step, the rezoning step and the relaxation step. 
In order to obtain a continuous mesh configuration at any time level, the mesh motion is evaluated by assigning to each node of the computational mesh a unique velocity vector at each timestep. The nodal solver algorithm performs the Lagrangian stage, while we rely on a rezoning algorithm to improve the mesh quality when the flow motion becomes very complex, hence producing highly deformed computational elements. A so-called relaxation algorithm is finally employed to partially recover the optimal Lagrangian accuracy where the computational elements are not distorted too much. We underline that our scheme is designed to be an ALE algorithm, where the local mesh velocity can be chosen independently from the local fluid velocity. Once the vertex velocity and thus the new node location has been determined, the old element configuration at time \(t^n\) is connected with the new one at time \(t^{n+1}\) with straight edges to represent the local mesh motion, in order to maintain algorithmic simplicity. The final ALE finite volume scheme is based directly on a space–time conservation formulation of the governing system of hyperbolic balance laws. The nonlinear system is reformulated more compactly using a space–time divergence operator and is then integrated on a moving space–time control volume. We adopt a linear parametrization of the space–time element boundaries and Gaussian quadrature rules of suitable order of accuracy to compute the integrals. We apply the new high order direct ALE finite volume schemes to several hyperbolic systems, namely the multidimensional Euler equations of compressible gas dynamics, the ideal classical magneto-hydrodynamics equations and the non-conservative seven-equation Baer–Nunziato model of compressible multi-phase flows with stiff relaxation source terms. Numerical convergence studies as well as several classical test problems will be shown to assess the accuracy and the robustness of our schemes. 
Finally, we briefly present some variants of the algorithm that aim at improving the overall computational efficiency. PubDate: 2017-11-01 DOI: 10.1007/s11831-016-9188-x Issue No:Vol. 24, No. 4 (2017)
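The core direct-ALE update — fluxes evaluated on moving interfaces with the velocity relative to the mesh, and cell averages updated on the deforming cells — can be illustrated far more simply than in the paper. The sketch below is a deliberately first-order, one-dimensional reduction for linear advection with an assumed uniform mesh velocity (the paper's schemes are high-order WENO/space–time DG on unstructured moving meshes):

```python
import numpy as np

def ale_upwind_step(x, u, a, w, dt):
    """One first-order direct-ALE finite-volume step for u_t + a u_x = 0.
    x: node positions, u: cell averages, w: prescribed node velocities.
    The upwind flux uses the flow velocity relative to the moving mesh."""
    h_old = np.diff(x)
    x_new = x + dt * w                      # mesh motion step
    h_new = np.diff(x_new)
    w_face = w[1:-1]                        # interior interface velocities
    F = (a - w_face) * u[:-1]               # upwind state, assuming a - w > 0
    unew = u.copy()                         # boundary cells are left frozen
    unew[1:-1] = (h_old[1:-1] * u[1:-1] - dt * (F[1:] - F[:-1])) / h_new[1:-1]
    return x_new, unew

x = np.linspace(0.0, 10.0, 201)             # 200 cells
xc = 0.5 * (x[1:] + x[:-1])
u = np.exp(-(xc - 3.0) ** 2)                # compact initial profile
a, w0, dt = 1.0, 0.4, 0.02                  # flow and mesh velocities
mass0 = float(np.sum(np.diff(x) * u))
for _ in range(200):                        # integrate to t = 4
    x, u = ale_upwind_step(x, u, a, np.full_like(x, w0), dt)
mass1 = float(np.sum(np.diff(x) * u))
```

The profile travels at the physical speed `a` while the mesh drifts at `w0`, and the cell-average update conserves the total mass on the moving mesh up to the frozen boundary cells.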

Authors:S. Ivvan Valdez; Salvador Botello; Miguel A. Ochoa; José L. Marroquín; Victor Cardoso Pages: 803 - 839 Abstract: This article proposes a benchmark set of problems for fixed-mesh topology optimization in two dimensions. We have established the problems based on an analysis of more than 100 articles from the specialized topology optimization literature, gathering the most common dimensions, loads and fixed regions used by researchers. Most of the problems reported in the specialized literature present differences in specifications such as lengths, units and materials, among others. For instance, some articles propose the same proportions and geometrical shapes but different dimensions. Hence, the purpose of this benchmark is to unify geometrical and mechanical characteristics and load conditions, considering that the proposed problems must be realistic, in the sense that the units are in the International System and real-world materials and load conditions are used. The final benchmark integrates 13 problems for plane stress using ASTM A-36 steel. Additionally, we report approximations to the optimum solutions for both compliance and volume minimization problems using the Solid Isotropic Material with Penalization (SIMP) method and a novel version of SIMP proposed in this article. PubDate: 2017-11-01 DOI: 10.1007/s11831-016-9190-3 Issue No:Vol. 24, No. 4 (2017)
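The two ingredients of a standard SIMP loop are the material interpolation and the density update. The sketch below shows both in their textbook form (the Young's modulus corresponds to steel, but the sensitivities are synthetic placeholders, not a finite-element compliance analysis, and this is not the article's modified SIMP):

```python
import numpy as np

def simp_young(rho, E0=210e9, Emin=1e-9 * 210e9, p=3.0):
    """SIMP interpolation: intermediate densities are penalised (p > 1) so the
    optimizer is driven towards solid (rho = 1) or void (rho = 0)."""
    return Emin + rho**p * (E0 - Emin)

def oc_update(rho, dc, dv, volfrac, move=0.2):
    """Classic optimality-criteria update used with SIMP: bisection on the
    Lagrange multiplier of the volume constraint."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lam = 0.5 * (l1 + l2)
        rho_new = np.clip(rho * np.sqrt(np.maximum(-dc, 0.0) / (lam * dv)),
                          np.maximum(rho - move, 0.0),
                          np.minimum(rho + move, 1.0))
        if rho_new.mean() > volfrac:
            l1 = lam
        else:
            l2 = lam
    return rho_new

# synthetic compliance sensitivities: more negative means "more useful material"
rho = np.full(100, 0.5)
dc = -np.linspace(2.0, 0.1, 100)
dv = np.ones(100)
rho_new = oc_update(rho, dc, dv, volfrac=0.5)
```

The bisection enforces the target volume fraction while the move limit keeps the update stable; elements with stronger sensitivities receive more material.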

Authors:Mohammad Junaid Khan; Lini Mathew Pages: 855 - 867 Abstract: In recent years, rising environmental concerns such as energy cost and greenhouse gas emissions have motivated new research into alternative methods of electrical power generation, and a great deal of new research and development has gone into renewable photovoltaic (PV) energy systems. The PV module offers a non-polluting, renewable source of energy, and new inventions are under development to improve solar cells, increase their efficiency and reduce the cost of power per peak watt. An analysis of the different kinds of control methods for PV systems reported in previous studies shows that hybrid techniques are the most useful compared with other maximum power point tracking (MPPT) control methods. MPPT control methods are used to optimize the output of a solar PV system under variable inputs such as solar radiation and temperature. An MPPT implementation may involve different DC–DC converters as well as different MPPT algorithms, such as current-based MPPT. Multi-input energy systems for hybrid wind/solar energy systems also need to be developed. PubDate: 2017-11-01 DOI: 10.1007/s11831-016-9192-1 Issue No:Vol. 24, No. 4 (2017)
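Among the classical MPPT algorithms the review covers, Perturb & Observe is the simplest to state: nudge the operating voltage, keep the perturbation direction while power increases, reverse it when power drops. A minimal sketch on a hypothetical concave power–voltage curve (the curve and all parameters are assumptions for illustration):

```python
def mppt_po(power, v0=30.0, dv=0.5, steps=60):
    """Perturb & Observe MPPT: nudge the operating voltage and keep the
    perturbation direction while the extracted power keeps increasing."""
    v, direction = v0, +1
    p_prev = power(v)
    for _ in range(steps):
        v += direction * dv
        p = power(v)
        if p < p_prev:
            direction = -direction   # power dropped: reverse the perturbation
        p_prev = p
    return v

# hypothetical concave P-V characteristic with its maximum power point at 17.5 V
power = lambda v: 100.0 - (v - 17.5) ** 2
v = mppt_po(power)
```

The tracker converges to the maximum power point and then oscillates within one perturbation step of it; the hybrid methods favoured by the review aim to remove exactly this steady-state oscillation.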

Authors:H. Zakeri; Fereidoon Moghadas Nejad; Ahmad Fahimifar Pages: 935 - 977 Abstract: Pavement condition information is a significant component of Pavement Management Systems. The labeling and quantification of the type, severity, and extent of surface cracking is a challenging area in the assessment of asphalt pavements. This paper presents a wide-ranging review of the various platforms and image processing approaches for asphalt surface interpretation. The main part of this study presents a comprehensive overview of the state of the art in image-processing-based crack interpretation for asphalt pavements. An attempt is made to study the existing methodologies from different points of view, accompanied by extensive comparisons across the three stages of such methods—distress detection, classification, and quantification—to facilitate further research studies. This paper presents a survey of the pavement inspection systems developed to date. Additionally, emerging and evolving technologies considered for automating these processes are discussed. PubDate: 2017-11-01 DOI: 10.1007/s11831-016-9194-z Issue No:Vol. 24, No. 4 (2017)

Authors:G. Houzeaux; J. C. Cajas; M. Discacciati; B. Eguzkitza; A. Gargallo-Peiró; M. Rivero; M. Vázquez Pages: 1033 - 1070 Abstract: Domain composition methods (DCM) consist in obtaining a solution to a problem from the formulations of the same problem expressed on various subdomains. These methods therefore have the opposite objective to domain decomposition methods (DDM). Indeed, in contrast to DCM, the latter techniques are usually applied to matching meshes, as their purpose consists mainly in distributing the work in parallel environments. However, they are sometimes based on the same methodology, as after decomposing, DDM have to recompose. As a consequence, in the literature, the term DDM has often been substituted for DCM. DCM are powerful techniques that can be used for different purposes: to simplify the meshing of a complex geometry by decomposing it into different meshable pieces; to perform local refinement to adapt to local mesh requirements; to treat subdomains in relative motion (Chimera, sliding mesh); to solve multiphysics or multiscale problems, etc. The term DCM is generic and does not give any clue about how the fragmented solutions on the different subdomains are composed into a global one. In the literature, many methodologies have been proposed: they are mesh-based, equation-based, or algebraic-based. In mesh-based formulations, the coupling is achieved at the mesh level, before the governing equations are assembled into an algebraic system (mesh conforming, Shear-Slip Mesh Update, HERMESH). The equation-based counterpart recomposes the solution from the strong or weak formulation itself, and is implemented during the assembly of the algebraic system on the subdomain meshes. The different coupling techniques can be formulated for the strong formulation at the continuous level, and for the weak formulation either at the continuous or at the discrete level (iteration-by-subdomains, mortar element, mesh free interpolation). 
Although the different methods usually lead to the same solutions at the continuous level, which usually coincide with the solution of the problem on the original domain, they have very different behaviors at the discrete level and can be implemented in many different ways. Eventually, algebraic-based formulations treat the composition of the solutions directly on the matrix and right-hand side of the individual subdomain algebraic systems. The present work introduces mesh-based, equation-based and algebraic-based DCM. It focusses, however, on algebraic-based domain composition methods, which have many advantages with respect to the others: they are relatively problem independent; their implicit implementation can be hidden in the iterative solver operations, which enables one to avoid intensive code rewriting; and they can be implemented in a multi-code environment. PubDate: 2017-11-01 DOI: 10.1007/s11831-016-9198-8 Issue No:Vol. 24, No. 4 (2017)
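The iteration-by-subdomains coupling mentioned above has a classical one-dimensional prototype: alternating Schwarz on two overlapping subdomains, each solved with Dirichlet traces taken from the other. A minimal sketch (the Poisson test problem, grid and overlap are assumptions for illustration):

```python
import numpy as np

def solve_poisson(n, h, left, right, f=1.0):
    """Direct solve of -u'' = f on n interior points with Dirichlet end values."""
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    b = np.full(n, f)
    b[0] += left / h**2          # fold the known boundary values into the RHS
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

# -u'' = 1 on (0,1), u(0) = u(1) = 0; exact solution u = x(1-x)/2
N = 99
h = 1.0 / (N + 1)
x = np.arange(1, N + 1) * h
m1, m2 = 60, 40                  # subdomain 1: points 0..59, subdomain 2: 40..98
u = np.zeros(N)
for _ in range(30):              # alternating Schwarz (iteration-by-subdomains)
    u[:m1] = solve_poisson(m1, h, 0.0, u[m1])           # right trace from sub 2
    u[m2:] = solve_poisson(N - m2, h, u[m2 - 1], 0.0)   # left trace from sub 1
exact = 0.5 * x * (1.0 - x)
```

Each sweep contracts the interface error geometrically, with a rate set by the overlap width, so the composed solution converges to the single-domain solution.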

Authors:Benjamin Urick; Travis M. Sanders; Shaolie S. Hossain; Yongjie J. Zhang; Thomas J. R. Hughes Abstract: We review the literature on patient-specific vascular modeling, with particular attention paid to three-dimensional arterial networks. Patient-specific vascular modeling typically involves three main steps: image processing, analysis-suitable model generation, and computational analysis. The analysis-suitable model generation techniques currently utilized suffer from several difficulties and complications, which often necessitate manual intervention and crude approximations. Because the modeling pipeline spans multiple disciplines, the benefits of integrating a computer-aided design (CAD) component for the geometric modeling tasks have been largely overlooked. Upon completion of our review, we adopt this philosophy and present a CAD-integrated template-based modeling framework that streamlines the construction of solid non-uniform rational B-spline vascular models for performing isogeometric finite element analysis. Examples of arterial models for mouse and human circles of Willis and a porcine coronary tree are presented. PubDate: 2017-11-28 DOI: 10.1007/s11831-017-9246-z

Authors:A. Hanif Halim; I. Ismail Abstract: The Travelling Salesman Problem (TSP) is an NP-hard problem with a large number of possible solutions; the complexity grows with the factorial of the number of nodes in each specific problem. Meta-heuristic algorithms are optimization algorithms able to drive the TSP towards a satisfactory solution. To date, many meta-heuristic algorithms have been introduced in the literature, embodying different philosophies of intensification and diversification. This paper focuses on six heuristic algorithms: Nearest Neighbor, Genetic Algorithm, Simulated Annealing, Tabu Search, Ant Colony Optimization and Tree Physiology Optimization. The study in this paper includes a comparison of computation time, accuracy and convergence. PubDate: 2017-11-20 DOI: 10.1007/s11831-017-9247-y
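The simplest of the compared heuristics, Nearest Neighbor, can be stated in a few lines: starting from any city, always visit the closest unvisited one. A self-contained sketch on an assumed instance (eight cities on a circle, where walking around the circle is the optimal tour):

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Nearest Neighbor heuristic: always visit the closest unvisited city."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(cities[last], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(cities, tour):
    """Closed-tour length, including the edge back to the start."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# eight cities on a circle: the optimal tour just walks around the circle
cities = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
          for k in range(8)]
tour = nearest_neighbor_tour(cities)
```

On this symmetric instance the greedy choice happens to recover the optimal tour; on general instances Nearest Neighbor is only a fast constructive baseline, which is why the paper compares it against the meta-heuristics.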

Authors:Alex Jarauta; Pavel Ryzhakov Abstract: An excess of liquid water in the gas channels of polymer electrolyte fuel cells is responsible for the malfunctioning of these devices. Not only does it decrease their efficiency via partial blockage of reactants and pressure drop, but it can also lead to irreversible damage due to oxygen starvation in the case of complete channel flooding or full coverage of the gas diffusion layer by a liquid film. Liquid water evacuation is carried out via the airflow in the gas channels. Several experimental and computational techniques have been applied to date to the analysis of the coupled airflow–water behavior in order to understand the impact of fuel cell design and operating regimes on liquid water accumulation. Considerable progress has been achieved with the development of sophisticated computational fluid dynamics (CFD) tools. Nevertheless, the complexity of the problem under consideration leaves several issues unresolved. In this paper, analysis techniques applied to liquid water–airflow transport in fuel cell gas channels are reviewed and the most important results are summarized. Computationally efficient, yet strongly simplified, analytical models are discussed. Afterwards, CFD approaches, including the conventional fixed-grid (Eulerian) and the novel embedded Eulerian–Lagrangian models, are described. A critical comparative assessment of the existing methods is provided at the end of the paper and the unresolved issues are highlighted. PubDate: 2017-11-18 DOI: 10.1007/s11831-017-9243-2

Authors:Patrick Gallinari; Yvon Maday; Maxime Sangnier; Olivier Schwander; Tommaso Taddei Abstract: Reduced basis methods for the approximation of parameter-dependent partial differential equations are now well developed and are starting to be used for industrial applications. The classical implementation of the reduced basis method goes through two stages: in the first, offline and time-consuming stage, a reduced basis is constructed from standard approximation methods; then in a second, online and very cheap stage, a small problem, of the size of the reduced basis, is solved. The offline stage is a learning stage from which the online stage can proceed efficiently. In this paper we propose to exploit machine learning procedures in both the offline and online stages, either to tackle different classes of problems or to increase the speed-up during the online stage. The method is presented through a simple flow problem—a flow past a backward-facing step governed by the Navier–Stokes equations—which nonetheless exhibits interesting features. PubDate: 2017-08-05 DOI: 10.1007/s11831-017-9238-z
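The offline/online split at the heart of the reduced basis method can be sketched on a small parametrized linear system with an affine decomposition A(μ) = A0 + μA1. Everything below (the system, parameter range, basis size) is an assumed toy setup, not the paper's Navier–Stokes example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
M = rng.normal(size=(n, n))
A0 = M @ M.T + n * np.eye(n)             # parameter-independent part (SPD)
A1 = np.diag(np.linspace(1.0, 2.0, n))   # parameter-dependent part
b = rng.normal(size=n)
solve_full = lambda mu: np.linalg.solve(A0 + mu * A1, b)

# offline (expensive): snapshots over a parameter sample, compressed by SVD
S = np.column_stack([solve_full(mu) for mu in np.linspace(0.0, 10.0, 20)])
V = np.linalg.svd(S, full_matrices=False)[0][:, :5]   # reduced basis
Ar0, Ar1, br = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ b    # affine precomputation

# online (cheap): only a 5x5 solve per new parameter value
rb_solve = lambda mu: V @ np.linalg.solve(Ar0 + mu * Ar1, br)

mu_test = 3.3
err = (np.linalg.norm(rb_solve(mu_test) - solve_full(mu_test))
       / np.linalg.norm(solve_full(mu_test)))
```

The affine precomputation of `Ar0`, `Ar1` and `br` is what makes the online stage independent of the full dimension n; the paper's proposal is to insert machine learning into both of these stages.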