Authors: Liang Meng; Piotr Breitkopf; Guénhaël Le Quilliec; Balaji Raghavan; Pierre Villon
Pages: 1-21
Abstract: In this paper, we present the concept of a “shape manifold” designed for reduced-order representation of complex “shapes” encountered in mechanical problems, such as design optimization, springback or image correlation. The overall idea is to define the shape space within which the boundary of the structure evolves. The reduced representation is obtained by determining the intrinsic dimensionality of the problem, independently of the original design parameters, and by approximating a hypersurface, i.e. a shape manifold, connecting all admissible shapes represented using level set functions. An optimal parameterization may also be obtained for arbitrary shapes, where the parameters have to be defined a posteriori. We also develop predictor-corrector optimization and manifold-walking algorithms in the reduced shape space that guarantee the admissibility of the solution with no additional constraints. We illustrate the approach on three diverse examples drawn from the field of computational and applied mechanics.
PubDate: 2018-01-01
DOI: 10.1007/s11831-016-9189-9
Issue No: Vol. 25, No. 1 (2018)

Authors: T. Taddei; J. D. Penn; M. Yano; A. T. Patera
Pages: 23-45
Abstract: We present a model-order-reduction approach to simulation-based classification, with particular application to structural health monitoring. The approach exploits (1) synthetic results obtained by repeated solution of a parametrized mathematical model for different values of the parameters, (2) machine-learning algorithms to generate a classifier that monitors the damage state of the system, and (3) a reduced basis method to reduce the computational burden associated with the model evaluations. Furthermore, we propose a mathematical formulation which integrates the partial differential equation model within the classification framework and clarifies the influence of model error on classification performance. We illustrate our approach and we demonstrate its effectiveness through the vehicle of a particular physical companion experiment, a harmonically excited microtruss.
PubDate: 2018-01-01
DOI: 10.1007/s11831-016-9185-0
Issue No: Vol. 25, No. 1 (2018)

Authors: Rubén Ibañez; Emmanuelle Abisset-Chavanne; Jose Vicente Aguado; David Gonzalez; Elias Cueto; Francisco Chinesta
Pages: 47-57
Abstract: Standard simulation in classical mechanics is based on the use of two very different types of equations. The first type, of axiomatic character, is related to balance laws (momentum, mass, energy, ...), whereas the second type consists of models that scientists have extracted from collected natural or synthetic data. Even if one can be confident in the first type of equations, the second type contains modeling errors. Moreover, this second type of equations remains too particular and often fails to describe new experimental results. The vast majority of existing models lack generality, and therefore must be constantly adapted or enriched to describe new experimental findings. In this work we propose a new method, able to directly link data to computers in order to perform numerical simulations. These simulations employ axiomatic, universal laws while minimizing the need for explicit, often phenomenological, models. The technique is based on manifold learning methodologies, which extract the relevant information from large experimental datasets.
PubDate: 2018-01-01
DOI: 10.1007/s11831-016-9197-9
Issue No: Vol. 25, No. 1 (2018)

Authors: E. Lopez; D. Gonzalez; J. V. Aguado; E. Abisset-Chavanne; E. Cueto; C. Binetruy; F. Chinesta
Pages: 59-68
Abstract: Image-based simulation is becoming an appealing technique for homogenizing properties of real microstructures of heterogeneous materials. However, fast computation techniques are needed to make decisions within a limited time-scale. Techniques based on standard computational homogenization are seriously compromised by the real-time constraint. The combination of model reduction techniques and high-performance computing helps to alleviate this constraint, but the amount of computation remains excessive in many cases. In this paper we consider an alternative route that uses techniques traditionally considered for machine learning purposes in order to extract the manifold in which data and fields can be interpolated accurately, in real time, and with a minimal amount of online computation. Locally Linear Embedding is considered in this work for the real-time thermal homogenization of heterogeneous microstructures.
PubDate: 2018-01-01
DOI: 10.1007/s11831-016-9172-5
Issue No: Vol. 25, No. 1 (2018)
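The Locally Linear Embedding technique named in the abstract admits a compact self-contained sketch. The toy data, function name and parameter values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def lle(X, n_neighbors=5, n_components=2, reg=1e-3):
    """Minimal Locally Linear Embedding sketch (illustrative only)."""
    n = X.shape[0]
    # k nearest neighbours of each point (excluding the point itself)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[idx[i]] - X[i]                          # neighbours centred on x_i
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularised local Gram matrix
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, idx[i]] = w / w.sum()                    # reconstruction weights sum to one
    # embedding: bottom eigenvectors of (I - W)^T (I - W), skipping the constant one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

# toy stand-in for microstructure descriptors: a noisy closed curve in 3-D
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 60)
X = np.c_[np.cos(t), np.sin(t), 0.1 * rng.standard_normal(60)]
Y = lle(X, n_neighbors=6, n_components=2)
print(Y.shape)  # (60, 2)
```

Once such an embedding is available, new fields can be interpolated in the low-dimensional coordinates rather than in the full-order space, which is the source of the online speed-up the abstract describes.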

Authors: D. González; J. V. Aguado; E. Cueto; E. Abisset-Chavanne; F. Chinesta
Pages: 69-86
Abstract: Parametric solutions make possible fast and reliable real-time simulations which, in turn, allow real-time optimization, simulation-based control and uncertainty propagation. This opens unprecedented possibilities for robust and efficient design and real-time decision making. The construction of such parametric solutions was addressed in our former works in the context of models whose parameters were easily identified and known in advance. In this work we address more complex scenarios in which the parameters do not appear explicitly in the model (complex microstructures, for instance). In these circumstances the parametric model solution requires combining a technique to find the relevant model parameters with a solution procedure able to cope with high-dimensional models, avoiding the well-known curse of dimensionality. In this work, kernel Principal Component Analysis (kPCA) is used for extracting the hidden model parameters, whereas the Proper Generalized Decomposition (PGD) is used for calculating the resulting parametric solution.
PubDate: 2018-01-01
DOI: 10.1007/s11831-016-9173-4
Issue No: Vol. 25, No. 1 (2018)
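The kPCA step used for extracting hidden parameters can be sketched in a few lines. The RBF kernel, its width, and the two-ring toy data below are illustrative assumptions standing in for the paper's microstructure data:

```python
import numpy as np

def kpca(X, n_components=2, gamma=2.0):
    """Minimal kernel PCA sketch with an RBF kernel (illustrative stand-in)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)                  # RBF kernel matrix
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    Kc = J @ K @ J                           # centre the kernel in feature space
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    # coordinates of the training points along the leading nonlinear components
    return vecs[:, order] * np.sqrt(np.abs(vals[order]))

# toy data with a hidden nonlinear parameter: two noisy concentric rings
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 80)
r = np.r_[np.ones(40), 3.0 * np.ones(40)]
X = np.c_[r * np.cos(t), r * np.sin(t)] + 0.05 * rng.standard_normal((80, 2))
Z = kpca(X, n_components=2)
print(Z.shape)  # (80, 2)
```

In the workflow the abstract describes, coordinates such as `Z` would play the role of the hidden parameters that the PGD solution is then built upon.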

Authors: Patrick Héas; Cédric Herzet
Pages: 87-101
Abstract: This paper deals with model order reduction of parametric dynamical systems. We consider the specific setup where the distribution of the system’s trajectories is unknown but the following two sources of information are available: (i) some “rough” prior knowledge of the system’s realisations; (ii) a set of “incomplete” observations of the system’s trajectories. We propose a Bayesian methodological framework to build reduced-order models (ROMs) by exploiting these two sources of information. We emphasise that complementing the prior knowledge with the collected data provably enhances the knowledge of the distribution of the system’s trajectories. We then propose an implementation of the proposed methodology based on Monte-Carlo methods. In this context, we show that standard ROM learning techniques, such as proper orthogonal decomposition or dynamic mode decomposition, can be revisited and recast within the probabilistic framework considered in this paper. We illustrate the performance of the proposed approach by numerical results obtained for a standard geophysical model.
PubDate: 2018-01-01
DOI: 10.1007/s11831-017-9229-0
Issue No: Vol. 25, No. 1 (2018)
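For reference, the deterministic baseline that the probabilistic framework revisits, snapshot proper orthogonal decomposition, reduces to a thin SVD with an energy-based truncation. The synthetic rank-two snapshot matrix below is an assumption for illustration:

```python
import numpy as np

# synthetic snapshots: columns are states of a toy space-time field,
# built as a sum of two separable modes (so the matrix has rank two)
t = np.linspace(0.0, 1.0, 50)
x = np.linspace(0.0, 1.0, 200)
S = (np.sin(np.pi * x)[:, None] * np.cos(2.0 * np.pi * t)[None, :]
     + 0.5 * np.sin(2.0 * np.pi * x)[:, None] * np.sin(2.0 * np.pi * t)[None, :])

# snapshot POD: thin SVD, then truncate by an energy criterion
U, s, Vt = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # smallest r capturing 99.9% energy
basis = U[:, :r]
residual = np.linalg.norm(S - basis @ (basis.T @ S)) / np.linalg.norm(S)
print(r, residual < 1e-10)  # the rank-two field needs exactly two modes
```

The Bayesian viewpoint of the paper replaces this single deterministic basis with a distribution informed by both the prior and the incomplete observations.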

Authors: Lionel Mathelin; Kévin Kasper; Hisham Abou-Kandil
Pages: 103-120
Abstract: This paper introduces a method for efficiently inferring a high-dimensional distributed quantity from a few observations. The quantity of interest (QoI) is approximated in a basis (dictionary) learned from a training set. The coefficients associated with the approximation of the QoI in the basis are determined by minimizing the misfit with the observations. To obtain a probabilistic estimate of the quantity of interest, a Bayesian approach is employed. The QoI is treated as a random field endowed with a hierarchical prior distribution so that closed-form expressions can be obtained for the posterior distribution. The main contribution of the present work lies in the derivation of a representation basis consistent with the observation chain used to infer the associated coefficients. The resulting dictionary is then tailored to be both observable by the sensors and accurate in approximating the posterior mean. An algorithm for deriving such an observable dictionary is presented. The method is illustrated with the estimation of the velocity field of an open cavity flow from a handful of wall-mounted point sensors. Comparison with standard estimation approaches relying on Principal Component Analysis and K-SVD dictionaries is provided and illustrates the superior performance of the present approach.
PubDate: 2018-01-01
DOI: 10.1007/s11831-017-9219-2
Issue No: Vol. 25, No. 1 (2018)
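The core estimation step, fitting dictionary coefficients to a handful of point observations, can be sketched with ordinary least squares. The sinusoidal dictionary, sensor locations and coefficients below are invented for illustration, and the hierarchical Bayesian prior of the paper is omitted:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 300)
# hypothetical learned dictionary: five smooth modes over the domain
D = np.stack([np.sin((k + 1) * np.pi * x) for k in range(5)], axis=1)  # (300, 5)
true_c = np.array([1.0, -0.5, 0.3, 0.0, 0.2])
field = D @ true_c                                   # full quantity of interest

sensors = np.array([10, 60, 120, 180, 230, 280])     # handful of point sensors
obs = field[sensors]                                 # noise-free point measurements
# infer the coefficients from the sensor rows of the dictionary alone
c_hat, *_ = np.linalg.lstsq(D[sensors], obs, rcond=None)
estimate = D @ c_hat                                 # reconstructed full field
print(np.allclose(c_hat, true_c))  # exact recovery in the noise-free case
```

The paper's contribution is precisely to choose the dictionary `D` so that its sensor rows remain well conditioned, i.e. the dictionary is "observable" by the sensing chain.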

Authors: Seunghye Lee; Jingwan Ha; Mehriniso Zokhirova; Hyeonjoon Moon; Jaehong Lee
Pages: 121-129
Abstract: Since the first journal article on structural engineering applications of neural networks (NN) was published, a large number of articles have addressed structural analysis and design problems using machine learning techniques. However, due to a fundamental limitation of the traditional methods, attempts to apply the artificial NN concept to structural analysis problems declined significantly over the last decade. Recent advances in deep learning techniques can provide a more suitable solution to those problems. In this study, versatile background information is presented, such as methods for alleviating overfitting by tuning hyper-parameters. A well-known ten-bar truss example is presented to show the conditioning of neural networks and the role of hyper-parameters in structural problems.
PubDate: 2018-01-01
DOI: 10.1007/s11831-017-9237-0
Issue No: Vol. 25, No. 1 (2018)

Authors: Jan Neggers; Olivier Allix; François Hild; Stéphane Roux
Pages: 143-164
Abstract: Since the turn of the century, experimental solid mechanics has undergone major changes with the generalized use of images. The amount of acquired data has exploded, and one of today’s challenges is the saturation of mining procedures by such big data sets. With respect to digital image/volume correlation, one of tomorrow’s pathways is to better control and master this data flow with procedures that are optimized for extracting the sought information with minimum uncertainty and maximum robustness. In this paper emphasis is put on various hierarchical identification procedures. Based on such structures, a posteriori model/data reductions are performed in order to make the exploitation of the experimental information far easier and more efficient. Some possibilities related to other model order reduction techniques, such as the proper generalized decomposition, are discussed and new opportunities are sketched.
PubDate: 2018-01-01
DOI: 10.1007/s11831-017-9234-3
Issue No: Vol. 25, No. 1 (2018)

Authors: Mar Miñano; Francisco J. Montáns
Pages: 165-193
Abstract: The conservative elastic behavior of soft materials is characterized by a stored energy function whose shape is usually specified a priori, except for some material parameters. There are hundreds of stored energy functions proposed in the literature for different materials. The stored energy function may change under loading due to damage effects, but it may be considered constant during unloading–reloading. The two dominant approaches in the literature to model this damage effect are based either on the Continuum Damage Mechanics framework or on the Pseudoelasticity framework. In both cases, additional assumed evolution functions, with their associated material parameters, are proposed. These proposals are semi-inverse, semi-analytical, model-driven and data-adjusted. We propose an alternative which may be considered a non-inverse, numerical, model-free, data-driven approach. We call this approach WYPiWYG constitutive modeling. We assume neither global functions nor material parameters, but simply solve numerically the differential equations of a set of tests that completely define the behavior of the solid under the given assumptions. In this work we extend the approach to model isotropic and anisotropic damage in soft materials. We obtain the damage evolution numerically from experimental tests. The theory can be used for both hard and soft materials, and the infinitesimal formulation is naturally recovered for infinitesimal strains. In fact, we motivate the formulation in a one-dimensional infinitesimal framework and show that the concepts are immediately applicable to soft materials.
PubDate: 2018-01-01
DOI: 10.1007/s11831-017-9233-4
Issue No: Vol. 25, No. 1 (2018)

Authors: Siddharth Singh Chouhan; Ajay Kaul; Uday Pratap Singh
Abstract: Image segmentation is a pre-processing phase of nearly all computer vision schemes, used to extract meaningful information for analysing the objects within an image. It is one of the most common scientific problems, an essential technology and a critical prerequisite for image analysis and processing. A great deal of research has produced emerging algorithms and approaches for segmentation, yet no single standard technique has been established. Existing methodologies are broadly divided into two classes: traditional approaches and soft computing (SC), or computational intelligence (CI), approaches. In this article, our emphasis is on the SC techniques that have been adopted for segmenting an image. SC and CI are nowadays used frequently in information technology and computer technology. Working synergistically, SC approaches provide flexible information-processing capability for handling real-life ambiguous situations. Their aim is to exploit a tolerance for uncertainty, imprecision, approximate reasoning and partial truth in order to achieve tractability, robustness and low-cost solutions. Neural networks (NN), fuzzy logic (FL) and genetic algorithms (GA) are the fundamental SC disciplines. SC approaches have been broadly implemented and studied in a number of applications, including scientific analysis, medicine, engineering, management, the humanities, etc. The paper introduces the various SC methodologies and presents numerous applications in image segmentation, with the aim of demonstrating the potential of applying computational intelligence to the segmentation of an image. The available articles on the use of SC in segmentation are investigated, focusing especially on the core approaches of FL, NN and GA; newer techniques are also incorporated, such as Fuzzy C-Means from the FL family and deep neural networks or convolutional neural networks from the NN family. The intent of this work is to summarize the core SC methodologies, along with related terminology such as evaluation parameters, tools, databases and noise models, which can be advantageous for researchers. This study also identifies the SC approaches being used, often in combination, to resolve the distinctive difficulties of image segmentation, concluding with a general discussion of methodologies and applications followed by proposed work.
PubDate: 2018-02-07
DOI: 10.1007/s11831-018-9257-4

Authors: Bibin John; P. Senthilkumar; Sreeja Sadasivan
Abstract: This paper documents the important works in the field of conjugate heat transfer. Theoretical and applied aspects of conjugate heat transfer analysis are reviewed and summarized in light of the available literature in this field. Over the years, conjugate heat transfer analysis has evolved into the most effective method of heat transfer study. In this approach, the mutual effects of thermal conduction in the solid and convection in the fluid are considered together in the analysis. Various analytical and computational studies have been reported in this field; a comprehension of both will help researchers and scientists working in this area to progress in their research, and that is the focus of this review. Early analytical studies related to conjugate heat transfer are reviewed and summarised in the first part of this paper, and the background of the theoretical studies is discussed briefly. More importance is given to summarising the computational studies in this field: the different coupling techniques proposed to date are presented in great detail. Important studies describing the applications of conjugate heat transfer analysis are also discussed under separate headings. Hence the present paper gives the complete theoretical background of conjugate heat transfer, along with directions toward its application envelope.
PubDate: 2018-01-27
DOI: 10.1007/s11831-018-9252-9

Authors: Zhen Yang; Jing Lian; Yanan Guo; Shouliang Li; Deyuan Wang; Wenhao Sun; Yide Ma
Abstract: In this paper, recent developments of the pulse coupled neural network (PCNN) model, especially in fields related to image processing, are surveyed. Our review aims to provide a comprehensive and systematic analysis of selected research from the past few decades, giving readers effective means to survey the area of study. All selected references are categorized into three groups on the basis of neuron structure, parameter setting, and the inherent characteristics of the PCNN. Various applications of these models are mentioned, and the underlying difficulties, limitations, merits and disadvantages encountered in applying them are discussed. Researchers will find this helpful for choosing and using the appropriate model for a given application.
PubDate: 2018-01-24
DOI: 10.1007/s11831-018-9253-8

Authors: Reginald Dewil; İlker Küçükoğlu; Corrinne Luteyn; Dirk Cattrysse
Abstract: Hole drilling is one of the major basic operations in part manufacturing. It follows without surprise, then, that the optimization of this process is of great importance when trying to minimize the total financial and environmental cost of part manufacturing. In multi-hole drilling, 70% of the total process time is spent in tool movement and tool switching. Therefore, toolpath optimization in particular has attracted significant attention in cost minimization. This paper critically reviews research publications on drilling path optimization. In particular, this review focuses on three aspects: problem modeling, objective functions, and optimization algorithms. We conclude that most hole drilling problems in published papers are simply basic Traveling Salesman Problems (TSPs), for which extremely powerful heuristics exist and for which source code is readily available. Therefore, it is remarkable that many researchers continue developing “novel” metaheuristics for hole drilling without properly situating those approaches in the larger TSP literature. Consequently, more challenging hole drilling applications, such as those modeled by the Precedence Constrained TSP or hole drilling with sequence-dependent drilling times, do not receive much research focus. Sadly, the many low-quality hole drilling publications drown out the occasional high-quality papers that describe specific problematic problem constraints or objective functions. It is our hope that through this review paper researchers’ efforts can be refocused on these problem aspects in order to minimize production costs in the general sense.
PubDate: 2018-01-22
DOI: 10.1007/s11831-018-9251-x
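The baseline this review points to, treating multi-hole drilling as a plain TSP, is easy to reproduce with textbook heuristics: nearest-neighbour construction followed by 2-opt improvement. The random hole coordinates below are a made-up instance, not data from any reviewed paper:

```python
import math
import random

def tour_length(pts, tour):
    """Closed-tour length over the given visiting order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(pts):
    """Greedy construction: always drill the closest unvisited hole next."""
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(pts, tour):
    """Local improvement: reverse segments while any reversal shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(pts, cand) < tour_length(pts, tour) - 1e-12:
                    tour, improved = cand, True
    return tour

random.seed(3)
holes = [(random.random(), random.random()) for _ in range(30)]
greedy = nearest_neighbor(holes)
final = two_opt(holes, greedy)
print(tour_length(holes, final) <= tour_length(holes, greedy))  # True
```

Far stronger TSP solvers exist off the shelf; the point of the review is that a "novel" metaheuristic for hole drilling should at least be benchmarked against baselines like this one.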

Authors: Sukhvir Kaur; Shreelekha Pandey; Shivani Goel
Abstract: The symptoms of plant diseases are evident in different parts of a plant; however, leaves are the most commonly observed part for detecting an infection. Researchers have thus attempted to automate the process of plant disease detection and classification using leaf images. Several works have utilized computer vision technologies effectively and contributed significantly to this domain. This manuscript summarizes the pros and cons of all such studies to throw light on various important research aspects. A discussion of commonly studied infections and of the research scenario in the different phases of a disease detection system is presented. The performance of state-of-the-art techniques is analyzed to identify those that seem to work well across several crops or crop categories. Having identified a set of acceptable techniques, the manuscript highlights several points of consideration along with future research directions. The survey will help researchers gain an understanding of computer vision applications in plant disease detection.
PubDate: 2018-01-19
DOI: 10.1007/s11831-018-9255-6

Authors: Muhammad Arif Abdullah; Mohd Fadzil Faisae Ab Rashid; Zakri Ghazalli
Abstract: Assembly sequence planning (ASP) is an NP-hard problem that involves finding the optimal sequence in which to assemble a product. For a complex mechanical product, the set of potential assembly sequences is too large to be handled effectively using traditional approaches. Because of this complexity, efficient computational optimization approaches are required to determine the best assembly sequence. The topic has attracted many researchers from computer science, engineering, and mathematics backgrounds. This paper presents a review of the research that has used soft computing approaches to solve and optimize the ASP problem, which is important for future researchers wishing to contribute to ASP. The literature review was conducted by finding related published research papers, specifically on ASP, that used soft computing approaches; it focuses on ASP modeling approaches, optimization algorithms and optimization objectives. Based on the review, several future research directions are drawn. In terms of problem modeling, future research should emphasize modeling flexible parts in ASP. Besides this, the consideration of sustainable manufacturing and ergonomic factors in ASP will also be a new direction in ASP research. In addition, further study of new optimization algorithms is suggested in order to obtain an optimal solution in reasonable computational time.
PubDate: 2018-01-17
DOI: 10.1007/s11831-018-9250-y

Authors: Tobias Gleim; Detlef Kuhl
Abstract: The current paper establishes different axisymmetric and two-dimensional models for electrostatic, magnetostatic and electromagnetic induction processes. Therein, the Maxwell equations are combined in a monolithic solution strategy. A higher-order finite element discretization using Galerkin’s method in space as well as in time is developed for the electromagnetic approach. In addition, time integration procedures of the Runge–Kutta family are developed. Furthermore, the residual error is introduced to open an alternative way toward a numerically efficient estimation of the time integration accuracy of the Galerkin time integration method. The Runge–Kutta methods are enriched by an embedded error estimate. A family of electrostatic, magnetostatic and electromagnetodynamic boundary and initial boundary value problems with existing analytical solutions is introduced, which will serve as benchmark examples for numerical solution procedures.
PubDate: 2018-01-05
DOI: 10.1007/s11831-017-9249-9
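The embedded error estimate mentioned above pairs two Runge-Kutta solutions of neighbouring order and uses their difference as a local error indicator. The sketch below uses the simplest such pair, Heun's method with an embedded explicit Euler step, on a scalar test equation; it is an illustrative stand-in for the higher-order pairs developed in the paper:

```python
import math

def heun_step_with_error(f, t, y, h):
    """One Heun (second-order) step plus an embedded Euler (first-order) solution;
    their difference estimates the local truncation error."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + 0.5 * h * (k1 + k2)    # second-order update
    y_low = y + h * k1                  # embedded first-order update
    return y_high, abs(y_high - y_low)  # error estimate, O(h^2) per step

# scalar test problem with a known analytical solution: y' = -y, y(0) = 1
f = lambda t, y: -y
y, h = 1.0, 0.01
for n in range(100):                    # integrate to t = 1
    y, err = heun_step_with_error(f, n * h, y, h)
global_error = abs(y - math.exp(-1.0))
print(global_error < 1e-4)  # True: the scheme is second order
```

In an adaptive code, `err` would drive the step-size controller, exactly the role the embedded estimates play in the benchmark problems with analytical solutions.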

Authors: Benjamin Urick; Travis M. Sanders; Shaolie S. Hossain; Yongjie J. Zhang; Thomas J. R. Hughes
Abstract: We review the literature on patient-specific vascular modeling, with particular attention paid to three-dimensional arterial networks. Patient-specific vascular modeling typically involves three main steps: image processing, analysis-suitable model generation, and computational analysis. The analysis-suitable model generation techniques currently utilized suffer from several difficulties and complications, which often necessitate manual intervention and crude approximations. Because the modeling pipeline spans multiple disciplines, the benefits of integrating a computer-aided design (CAD) component for the geometric modeling tasks have been largely overlooked. Upon completion of our review, we adopt this philosophy and present a CAD-integrated, template-based modeling framework that streamlines the construction of solid non-uniform rational B-spline vascular models for performing isogeometric finite element analysis. Examples of arterial models for mouse and human circles of Willis and a porcine coronary tree are presented.
PubDate: 2017-11-28
DOI: 10.1007/s11831-017-9246-z

Authors: Patrick Gallinari; Yvon Maday; Maxime Sangnier; Olivier Schwander; Tommaso Taddei
Abstract: Reduced basis methods for the approximation of parameter-dependent partial differential equations are now well developed and are starting to be used for industrial applications. The classical implementation of the reduced basis method goes through two stages: in the first, offline and time-consuming, a reduced basis is constructed from standard approximation methods; then in the second, online and very cheap, a small problem, of the size of the reduced basis, is solved. The offline stage is a learning stage from which the online stage can proceed efficiently. In this paper we propose to exploit machine learning procedures in both the offline and online stages, either to tackle different classes of problems or to increase the speed-up of the online stage. The method is presented through a simple flow problem, a flow past a backward-facing step governed by the Navier-Stokes equations, which nevertheless shows interesting features.
PubDate: 2017-08-05
DOI: 10.1007/s11831-017-9238-z
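The two-stage structure described above can be sketched for an affinely parametrized linear system: an offline stage that assembles snapshots, compresses them with POD and precomputes projected operators, and an online stage that solves only a small reduced system. The tridiagonal operator, parameter range and basis size are illustrative assumptions, far simpler than the Navier-Stokes problem of the paper:

```python
import numpy as np

# hypothetical affinely parametrized system: (A0 + mu * A1) u = b
n = 200
A0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Laplacian stencil
A1 = np.diag(np.linspace(0.0, 1.0, n))                   # parameter-dependent reaction
b = np.ones(n)

# --- offline stage (expensive, done once): snapshots, POD basis, projected operators
mus_train = np.linspace(0.1, 10.0, 20)
S = np.stack([np.linalg.solve(A0 + mu * A1, b) for mu in mus_train], axis=1)
U, _, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :5]                       # reduced basis of dimension 5
Ar0, Ar1, br = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ b

# --- online stage (cheap, per parameter): assemble and solve a 5x5 system only
def rb_solve(mu):
    return V @ np.linalg.solve(Ar0 + mu * Ar1, br)

mu_test = 3.3
u_full = np.linalg.solve(A0 + mu_test * A1, b)
rel_err = np.linalg.norm(rb_solve(mu_test) - u_full) / np.linalg.norm(u_full)
print(rel_err < 1e-2)  # True: five modes suffice on this smooth solution manifold
```

The affine parameter dependence is what lets `Ar0`, `Ar1` and `br` be precomputed offline; the machine learning procedures proposed in the paper act on top of this offline/online split.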