Authors:Liang Meng; Piotr Breitkopf; Guénhaël Le Quilliec; Balaji Raghavan; Pierre Villon Pages: 1 - 21 Abstract: In this paper, we present the concept of a “shape manifold” designed for reduced order representation of complex “shapes” encountered in mechanical problems, such as design optimization, springback or image correlation. The overall idea is to define the shape space within which the boundary of the structure evolves. The reduced representation is obtained by determining the intrinsic dimensionality of the problem, independently of the original design parameters, and by approximating a hypersurface, i.e., a shape manifold, connecting all admissible shapes represented using level set functions. An optimal parameterization may also be obtained for arbitrary shapes, where the parameters have to be defined a posteriori. We also develop predictor–corrector “manifold walking” optimization algorithms in a reduced shape space that guarantee the admissibility of the solution without additional constraints. We illustrate the approach on three diverse examples drawn from the field of computational and applied mechanics. PubDate: 2018-01-01 DOI: 10.1007/s11831-016-9189-9 Issue No:Vol. 25, No. 1 (2018)

Authors:T. Taddei; J. D. Penn; M. Yano; A. T. Patera Pages: 23 - 45 Abstract: We present a model-order-reduction approach to simulation-based classification, with particular application to structural health monitoring. The approach exploits (1) synthetic results obtained by repeated solution of a parametrized mathematical model for different values of the parameters, (2) machine-learning algorithms to generate a classifier that monitors the damage state of the system, and (3) a reduced basis method to reduce the computational burden associated with the model evaluations. Furthermore, we propose a mathematical formulation which integrates the partial differential equation model within the classification framework and clarifies the influence of model error on classification performance. We illustrate our approach and we demonstrate its effectiveness through the vehicle of a particular physical companion experiment, a harmonically excited microtruss. PubDate: 2018-01-01 DOI: 10.1007/s11831-016-9185-0 Issue No:Vol. 25, No. 1 (2018)
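The offline/online split described in this abstract can be sketched with a toy stand-in for the parametrized PDE model: here a damped oscillator whose stiffness plays the role of the damage parameter, with frequency-response amplitudes as features. The response function, parameter ranges and nearest-centroid classifier are all illustrative assumptions, not the paper's actual model, reduced basis method or learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_response(stiffness, freqs):
    # Toy parametrized "model": amplitude of a damped SDOF oscillator
    # at a few excitation frequencies (stands in for the repeated
    # parametrized PDE solves of the offline stage).
    wn = np.sqrt(stiffness)
    return 1.0 / np.sqrt((wn**2 - freqs**2)**2 + (0.1 * wn * freqs)**2)

freqs = np.array([0.5, 1.0, 1.5])

# Offline stage: synthetic training data for two damage states.
k_healthy = rng.uniform(0.9, 1.1, 200)   # nominal stiffness
k_damaged = rng.uniform(0.5, 0.7, 200)   # reduced stiffness = damage
X = np.vstack([[model_response(k, freqs) for k in k_healthy],
               [model_response(k, freqs) for k in k_damaged]])
y = np.array([0] * 200 + [1] * 200)

# Train a nearest-centroid classifier on the synthetic features.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(features):
    d = np.linalg.norm(centroids - features, axis=1)
    return int(np.argmin(d))

# Online stage: classify a new, noisy observation of a damaged system.
obs = model_response(0.6, freqs) + 0.01 * rng.standard_normal(3)
print(classify(obs))
```

The paper's point about model error carries over directly: if `model_response` is biased relative to the true physics, the centroids shift and the classification boundary degrades.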

Authors:D. González; J. V. Aguado; E. Cueto; E. Abisset-Chavanne; F. Chinesta Pages: 69 - 86 Abstract: Parametric solutions make possible fast and reliable real-time simulations which, in turn, allow real-time optimization, simulation-based control and uncertainty propagation. This opens unprecedented possibilities for robust and efficient design and real-time decision making. The construction of such parametric solutions was addressed in our former works in the context of models whose parameters were easily identified and known in advance. In this work we address more complex scenarios in which the parameters do not appear explicitly in the model—complex microstructures, for instance. In these circumstances the parametric model solution requires combining a technique to find the relevant model parameters and a solution procedure able to cope with high-dimensional models, avoiding the well-known curse of dimensionality. In this work, kPCA (kernel Principal Component Analysis) is used for extracting the hidden model parameters, whereas the PGD (Proper Generalized Decomposition) is used for calculating the resulting parametric solution. PubDate: 2018-01-01 DOI: 10.1007/s11831-016-9173-4 Issue No:Vol. 25, No. 1 (2018)
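The hidden-parameter extraction step can be illustrated with a minimal kernel PCA sketch, assuming an RBF kernel and synthetic data on a curve generated by one hidden parameter; the data, kernel choice and `gamma` value are illustrative, and the PGD stage is not shown.

```python
import numpy as np

def kpca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel (numpy only)."""
    # Pairwise squared distances and kernel matrix.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center the kernel matrix in feature space.
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition; keep the leading components.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Embeddings of the training points along the principal axes.
    return vecs * np.sqrt(np.maximum(vals, 0.0))

# A hidden 1-D parameter t generates points on a curve in 3-D space,
# mimicking a microstructure family governed by one latent parameter.
t = np.linspace(0, 1, 100)
X = np.column_stack([np.cos(3 * t), np.sin(3 * t), t])
Z = kpca(X, n_components=1, gamma=2.0)

# The leading kPCA coordinate should recover the hidden parameter
# up to sign and scaling (high absolute correlation with t).
corr = abs(np.corrcoef(Z[:, 0], t)[0, 1])
print(round(float(corr), 2))
```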

Authors:Lionel Mathelin; Kévin Kasper; Hisham Abou-Kandil Pages: 103 - 120 Abstract: This paper introduces a method for efficiently inferring a high-dimensional distributed quantity from a few observations. The quantity of interest (QoI) is approximated in a basis (dictionary) learned from a training set. The coefficients associated with the approximation of the QoI in the basis are determined by minimizing the misfit with the observations. To obtain a probabilistic estimate of the quantity of interest, a Bayesian approach is employed. The QoI is treated as a random field endowed with a hierarchical prior distribution so that closed-form expressions can be obtained for the posterior distribution. The main contribution of the present work lies in the derivation of a representation basis consistent with the observation chain used to infer the associated coefficients. The resulting dictionary is then tailored to be both observable by the sensors and accurate in approximating the posterior mean. An algorithm for deriving such an observable dictionary is presented. The method is illustrated with the estimation of the velocity field of an open cavity flow from a handful of wall-mounted point sensors. Comparison with standard estimation approaches relying on Principal Component Analysis and K-SVD dictionaries is provided and illustrates the superior performance of the present approach. PubDate: 2018-01-01 DOI: 10.1007/s11831-017-9219-2 Issue No:Vol. 25, No. 1 (2018)
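The deterministic core of this estimation chain, a field approximated in a learned basis with coefficients fitted to a few point sensors, can be sketched as follows. This uses a plain POD/SVD basis and least squares as stand-ins for the paper's observable dictionary and Bayesian posterior; the field family, sensor locations and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)

# Training snapshots of a parametrized field and a POD-type basis (SVD).
train = np.array([np.sin(np.pi * a * x) + 0.5 * np.cos(2 * np.pi * a * x)
                  for a in rng.uniform(0.5, 2.0, 50)])
U, s, Vt = np.linalg.svd(train, full_matrices=False)
basis = Vt[:5]                        # 5 dominant spatial modes

# Unknown field to infer, observed only at a handful of "sensor" points.
a_true = 1.3
f_true = np.sin(np.pi * a_true * x) + 0.5 * np.cos(2 * np.pi * a_true * x)
sensors = [5, 25, 45, 65, 85, 105, 125, 145, 165, 185]
obs = f_true[sensors] + 0.005 * rng.standard_normal(len(sensors))

# Least-squares fit of the basis coefficients to the sparse observations.
A = basis[:, sensors].T               # (n_sensors, n_modes)
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
f_est = coef @ basis                  # reconstruct the full field

err = np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true)
print("relative error:", round(float(err), 3))
```

The paper's contribution is precisely that a generic basis like the one above need not be well observable by the sensors; their dictionary is built so that the matrix `A` stays well conditioned.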

Authors:Seunghye Lee; Jingwan Ha; Mehriniso Zokhirova; Hyeonjoon Moon; Jaehong Lee Pages: 121 - 129 Abstract: Since the first journal article on structural engineering applications of neural networks (NN) was published, a large number of articles about structural analysis and design problems using machine learning techniques have appeared. However, due to a fundamental limitation of traditional methods, attempts to apply the artificial NN concept to structural analysis problems decreased significantly over the last decade. Recent advances in deep learning techniques can provide a more suitable solution to those problems. In this study, versatile background information, such as methods for alleviating overfitting via hyper-parameters, is presented. A well-known ten-bar truss example is presented to show the conditions for neural networks and the role of hyper-parameters in structural problems. PubDate: 2018-01-01 DOI: 10.1007/s11831-017-9237-0 Issue No:Vol. 25, No. 1 (2018)
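The role of a regularization hyper-parameter in controlling overfitting, a central theme of this abstract, can be shown with a deliberately minimal surrogate: a high-degree polynomial fit with an L2 penalty instead of a neural network. The function, noise level and penalty value are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Few noisy samples of a smooth function: a small training set that
# invites overfitting when the model has too much capacity.
x_tr = np.linspace(-1, 1, 12)
y_tr = np.sin(np.pi * x_tr) + 0.15 * rng.standard_normal(12)
x_va = np.linspace(-1, 1, 100)
y_va = np.sin(np.pi * x_va)

def fit_ridge(x, y, degree, lam):
    """Polynomial least squares with L2 penalty lam (the hyper-parameter)."""
    X = np.vander(x, degree + 1)
    return np.linalg.solve(X.T @ X + lam * np.eye(degree + 1), X.T @ y)

def val_error(w, degree):
    return float(np.mean((np.vander(x_va, degree + 1) @ w - y_va) ** 2))

deg = 11                              # enough capacity to interpolate noise
err_plain = val_error(fit_ridge(x_tr, y_tr, deg, 0.0), deg)
err_reg = val_error(fit_ridge(x_tr, y_tr, deg, 1e-3), deg)
print(err_reg < err_plain)            # regularization lowers validation error
```

The same train/validation split logic carries over unchanged to tuning dropout rates or weight decay in a deep network for the truss problem.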

Authors:Mar Miñano; Francisco J. Montáns Pages: 165 - 193 Abstract: The conservative elastic behavior of soft materials is characterized by a stored energy function whose shape is usually specified a priori, except for some material parameters. There are hundreds of proposed stored energies in the literature for different materials. The stored energy function may change under loading due to damage effects, but it may be considered constant during unloading–reloading. The two dominant approaches in the literature to model this damage effect are based either on the Continuum Damage Mechanics framework or on the Pseudoelasticity framework. In both cases, additional assumed evolution functions, with their associated material parameters, are proposed. These proposals are semi-inverse, semi-analytical, model-driven and data-adjusted ones. We propose an alternative which may be considered a non-inverse, numerical, model-free, data-driven approach. We call this approach WYPiWYG constitutive modeling. We do not assume global functions or material parameters, but simply solve numerically the differential equations of a set of tests that completely define the behavior of the solid under the given assumptions. In this work we extend the approach to model isotropic and anisotropic damage in soft materials. We obtain numerically the damage evolution from experimental tests. The theory can be used for both hard and soft materials, and the infinitesimal formulation is naturally recovered for infinitesimal strains. In fact, we motivate the formulation in a one-dimensional infinitesimal framework and we show that the concepts are immediately applicable to soft materials. PubDate: 2018-01-01 DOI: 10.1007/s11831-017-9233-4 Issue No:Vol. 25, No. 1 (2018)

Authors:Gianmarco Mengaldo; Andrzej Wyszogrodzki; Michail Diamantakis; Sarah-Jane Lock; Francis X. Giraldo; Nils P. Wedi Abstract: The continuous partial differential equations governing a given physical phenomenon, such as the Navier–Stokes equations describing the fluid motion, must be numerically discretized in space and time in order to obtain a solution otherwise not readily available in closed (i.e., analytic) form. While the overall numerical discretization plays an essential role in the algorithmic efficiency and physically-faithful representation of the solution, the time-integration strategy commonly is one of the main drivers in terms of cost-to-solution (e.g., time- or energy-to-solution), accuracy and numerical stability, thus constituting one of the key building blocks of the computational model. This is especially true in time-critical applications, including numerical weather prediction (NWP), climate simulations and engineering. This review provides a comprehensive overview of the existing and emerging time-integration (also referred to as time-stepping) practices used in the operational global NWP and climate industry, where global refers to weather and climate simulations performed on the entire globe. While there are many flavors of time-integration strategies, in this review we focus on the most widely adopted in NWP and climate centers and we emphasize the reasons why such numerical solutions were adopted. This allows us to make some considerations on future trends in the field such as the need to balance accuracy in time with substantially enhanced time-to-solution and associated implications on energy consumption and running costs. 
In addition, the potential for the co-design of time-stepping algorithms and underlying high performance computing hardware, a keystone to accelerate the computational performance of future NWP and climate services, is also discussed in the context of the demanding operational requirements of the weather and climate industry. PubDate: 2018-02-23 DOI: 10.1007/s11831-018-9261-8
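The stability/cost trade-off that drives the time-integration choices surveyed above can be made concrete with the classic stiff linear test problem. This is a textbook illustration, not an operational NWP scheme: fast linear modes (like the acoustic and gravity waves that motivate semi-implicit schemes) force explicit methods onto tiny steps, while implicit treatment stays stable at large steps.

```python
import numpy as np

# Stiff linear test problem y' = lam * y, y(0) = 1, with a fast mode.
lam, y0, T, dt = -50.0, 1.0, 1.0, 0.05   # dt is far above the explicit limit

n = int(T / dt)
y_exp, y_imp = y0, y0
for _ in range(n):
    y_exp = y_exp + dt * lam * y_exp      # explicit (forward) Euler
    y_imp = y_imp / (1.0 - dt * lam)      # implicit (backward) Euler

exact = np.exp(lam * T)
# Explicit Euler diverges here since |1 + dt*lam| = 1.5 > 1;
# implicit Euler remains bounded and decays like the exact solution.
print(abs(y_exp) > 1e3, abs(y_imp) < 1.0, exact < 1e-20)
```

The operational trade-off is that each implicit step requires a (possibly global) solve, which is exactly where the co-design of algorithms and hardware discussed above enters.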

Authors:Zhao Huan; Gao Zhenghong; Xu Fang; Zhang Yidian Abstract: The ever-increasing demands for risk-free, resource-efficient and environment-friendly air vehicles motivate the development of advanced design methodologies. As a particularly promising methodology for design under uncertainty, robust aerodynamic design optimization (RADO) is capable of providing robust and reliable aerodynamic configurations and reducing cost under the probable uncertainties of the flight envelope and the whole life cycle of an air vehicle. However, major challenges, including the high computational cost that grows with the dimensionality of the uncertainty and the complexity of the RADO procedure, hinder the wider application of RADO. In this paper, the complete RADO procedure, i.e., uncertainty modeling, establishment of an uncertainty quantification approach, and robust optimization subject to reliability constraints under uncertainty, is elaborated. Systematic reviews of RADO methodology, including uncertainty modeling methods, comprehensive uncertainty quantification approaches, and robust optimization methods, are provided. Further, this paper presents a brief survey of the main applications of RADO in the aerodynamic design of transonic and natural-laminar-flow configurations, and discusses the application prospects of RADO methodology for air vehicles. The intention of the paper is threefold: to present the state of the art in RADO methodology, to highlight the key techniques and primary challenges in RADO, and to provide beneficial directions for future research. PubDate: 2018-02-19 DOI: 10.1007/s11831-018-9259-2
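The difference between deterministic and robust optima can be sketched with a toy scalar objective; the "drag" function, the Gaussian uncertainty model and the mean-plus-two-sigma robustness measure are all illustrative assumptions standing in for a real uncertainty quantification loop around a CFD solver.

```python
import numpy as np

rng = np.random.default_rng(0)

def drag(x, xi):
    """Toy 'drag': design variable x, uncertain parameter xi.
    The deterministic optimum (x = 1) sits in a region that is highly
    sensitive to xi; designs away from it are slightly worse on
    average but insensitive to the uncertainty."""
    return (x - 1.0) ** 2 + 5.0 * xi**2 * np.exp(-10 * (x - 1.0) ** 2)

xi_samples = rng.normal(0.0, 0.3, 2000)      # sampled uncertainty model
designs = np.linspace(-0.5, 2.0, 251)

# Robust objective: mean + 2 * std over the uncertainty (Monte Carlo).
stats = []
for x in designs:
    vals = drag(x, xi_samples)
    stats.append(vals.mean() + 2.0 * vals.std())
x_robust = designs[int(np.argmin(stats))]

# The deterministic optimum ignores xi entirely.
x_det = designs[int(np.argmin(drag(designs, 0.0)))]
print(round(float(x_det), 2), round(float(x_robust), 2))
```

The robust design lands away from the sharp deterministic optimum, which is exactly the behavior RADO formalizes; the computational-cost challenge in the abstract comes from replacing the 2000 cheap evaluations here with expensive flow solves.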

Authors:Manuel Caicedo; Javier L. Mroginski; Sebastian Toro; Marcelo Raschi; Alfredo Huespe; Javier Oliver Abstract: A High-Performance Reduced-Order Model (HPROM) technique, previously presented by the authors in the context of hierarchical multiscale models for nonlinear materials undergoing infinitesimal strains, is generalized to deal with large deformation elasto-plastic problems. The proposed HPROM technique uses a Proper Orthogonal Decomposition procedure to build a reduced basis of the primary kinematical variable of the micro-scale problem, defined in terms of the micro-deformation gradient fluctuations. Then a Galerkin projection onto this reduced basis is utilized to reduce the dimensionality of the micro-force balance equation, the stress homogenization equation and the effective macro-constitutive tangent tensor equation. Finally, a reduced goal-oriented quadrature rule is introduced to compute the non-affine terms of these equations. Particular importance in this paper is given to the numerical assessment of the developed HPROM technique. The numerical experiments are performed on a micro-cell simulating a randomly distributed set of elastic inclusions embedded in an elasto-plastic matrix. This micro-structure is representative of a typical ductile metallic alloy. The HPROM technique applied to this type of problem displays high computational speed-ups, increasing with the complexity of the finite element model. From these results, we conclude that the proposed HPROM technique is an effective computational tool for modeling, with very large speed-ups and acceptable accuracy levels with respect to the high-fidelity case, the multiscale behavior of heterogeneous materials subjected to large deformations involving two well-separated scales of length. PubDate: 2018-02-17 DOI: 10.1007/s11831-018-9258-3
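The POD-plus-Galerkin-projection backbone of such a technique can be sketched in a few lines; the snapshot family below is an illustrative scalar stand-in for the micro-deformation fluctuation fields, and the goal-oriented quadrature (hyper-reduction) step is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 300)

# Snapshot matrix: solution fields for several parameter values
# (stand-ins for micro-scale fluctuation fields).
params = rng.uniform(1.0, 3.0, 40)
S = np.array([np.exp(-p * x) * np.sin(2 * np.pi * x) for p in params])

# POD: singular value decomposition of the snapshot matrix; keep
# enough modes to capture nearly all of the snapshot "energy".
U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
basis = Vt[:r]                        # reduced basis of r spatial modes

# Galerkin-style projection of an unseen snapshot onto the basis.
f = np.exp(-2.2 * x) * np.sin(2 * np.pi * x)
f_red = (basis @ f) @ basis           # project, then reconstruct

rel_err = np.linalg.norm(f - f_red) / np.linalg.norm(f)
print(r, "modes, relative error", round(float(rel_err), 4))
```

The speed-up claim in the abstract comes from solving balance equations of size `r` instead of the full discretization, with the non-affine (plasticity) terms handled by the reduced quadrature rule.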

Authors:M. Abedini; Azrul A. Mutalib; Sudharshan N. Raman; R. Alipour; E. Akhlaghi Abstract: In recent years, many studies have been conducted by governmental and nongovernmental organizations across the world in an attempt to better understand the effect of blast loads on structures, in order to design against specific threats. The Pressure–Impulse (P–I) diagram is one of the simplest methods for describing a structure's response to blast load. Therefore, this paper presents a comprehensive overview of P–I diagrams for RC structures under blast loads. The effects of different parameters on the P–I diagram are examined, and three major methods to develop P–I diagrams for various damage criteria are discussed. Analytical methods are simple to use but are limited in the kinds of failure modes they can capture, and are unsuitable for complex geometries and irregularly shaped pulse loads. Experimental methods are a good way to study the structural response to blast loads; however, they require special and expensive instrumentation and are not feasible in many cases due to safety and environmental considerations. Numerical methods, in contrast, are capable of incorporating complex features of the material behaviour, geometry and boundary conditions. Hence, the numerical method is suggested for developing P–I diagrams for new structural elements. PubDate: 2018-02-13 DOI: 10.1007/s11831-018-9260-9
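A single point of a P–I curve can be computed with the single-degree-of-freedom (SDOF) idealization that underlies the analytical methods mentioned above: for a given pulse duration, bisect for the pressure that just reaches a damage threshold. The elastic SDOF, rectangular pulse and numerical values below are illustrative assumptions, not an RC member model.

```python
# Elastic SDOF under a rectangular pressure pulse:
#   m u'' + k u = p(t),  p(t) = P for t < td, else 0.
m, k = 1.0, 100.0            # mass and stiffness (natural period ~0.63 s)
u_crit = 0.05                # "damage" threshold on peak displacement

def peak_response(P, td, dt=1e-4, T=2.0):
    u, v, umax = 0.0, 0.0, 0.0
    for i in range(int(T / dt)):
        p = P if i * dt < td else 0.0
        a = (p - k * u) / m              # acceleration
        v += a * dt                      # semi-implicit Euler update
        u += v * dt
        umax = max(umax, abs(u))
    return umax

def threshold_pressure(td):
    """Bisect for the pressure that just reaches u_crit: one P-I point."""
    lo, hi = 0.0, 100.0
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        if peak_response(mid, td) > u_crit:
            hi = mid
        else:
            lo = mid
    return hi

# Long pulse: quasi-static asymptote P ~ k * u_crit / 2.
# Short pulse: far higher pressure needed (impulsive asymptote).
P_long, P_short = threshold_pressure(5.0), threshold_pressure(0.01)
print(round(P_long, 2), round(P_short, 1))
```

Sweeping `td` and plotting `(P * td, P)` pairs traces the familiar hyperbola-like P–I curve between the two asymptotes.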

Authors:S. Krishna Addepalli; J. M. Mallikarjuna Abstract: This paper presents an objective classification of mixture distribution in the combustion chamber of a gasoline direct injection (GDI) engine into homogeneous and non-homogeneous types. The non-homogeneous mixture distribution is further classified as properly stratified, improperly stratified and mal-distributed types. Based on this classification, four types of properly stratified mixture distributions, viz., random, linear, Gaussian and parabolic, are virtually simulated in the combustion chamber of a GDI engine using computational fluid dynamics to identify the mixture that results in maximum indicated mean effective pressure (IMEP). It is found that the IMEP is highest for the parabolic mixture distribution, followed by the Gaussian, linear and random types. The performance and emission characteristics of the virtual mixture distributions are compared with a late fuel injection case at different overall equivalence ratios ranging from 0.3 to 0.7. Then the variation of mixture equivalence ratio with the distance from the spark plug is parametrized for the different virtual mixture distribution cases and expressed using a parameter called the “stratification index”. It is found that the stratification index based on Gaussian variation gives maximum information about the mixture distribution in the combustion chamber. Finally, the stratification index of the different virtual mixture distributions is compared with the late fuel injection case at various overall equivalence ratios. It is found that the late fuel injection case tends to produce the highest IMEP when the stratification index is close to unity. PubDate: 2018-02-12 DOI: 10.1007/s11831-018-9262-7

Authors:Siddharth Singh Chouhan; Ajay Kaul; Uday Pratap Singh Abstract: Image segmentation is a pre-processing phase in nearly all computer vision schemes, used to extract more meaningful and useful information for analysing the objects within an image. Segmentation of an image is one of the most common scientific problems, essential technologies and critical prerequisites for image analysis and processing. A great deal of research has gone into emerging algorithms and approaches for segmentation, but even at present no single standard technique has been established. The available methodologies are broadly classified into two classes: traditional approaches and soft computing, or Computational Intelligence (CI), approaches. In this article, our emphasis is on the Soft Computing (SC) techniques that have been adopted for segmenting an image. SC and CI are nowadays used frequently in information technology and computer technology. SC approaches working synergistically provide flexible information-processing capability for handling real-life ambiguous situations. The aim of these methodologies is to exploit the tolerance for uncertainty, imprecision, approximate reasoning and partial truth in order to achieve tractability, robustness and low-cost solutions. Neural Networks (NNs), Fuzzy Logic (FL) and Genetic Algorithms (GAs) are the fundamental approaches of the SC discipline. SC approaches have been broadly implemented and studied in a number of applications, including scientific analysis, medicine, engineering, management, the humanities, etc. The paper focuses on introducing the various SC methodologies and presenting numerous applications in image segmentation. The aim is to demonstrate the possibilities of applying computational intelligence to the segmentation of an image.
The available articles on the use of SC in segmentation are investigated, focusing especially on the core approaches of FL, NN and GA; effort has also been made to incorporate newer techniques, such as Fuzzy C-Means from the FL family and Deep Neural Networks or Convolutional Neural Networks from the NN family. The idea behind this work is to summarize the core Soft Computing methodologies, along with related material such as evaluation parameters, tools, databases and noise models, which can be advantageous for researchers. This study also identifies the SC approaches being used, often in combination, to resolve the distinctive difficulties of image segmentation, concluding with a general discussion of methodologies and applications, followed by proposed work. PubDate: 2018-02-07 DOI: 10.1007/s11831-018-9257-4
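Fuzzy C-Means, named above as a core FL-family technique, can be sketched in numpy. The example clusters a 1-D "image" of pixel intensities into object and background; the intensity distributions and fuzzifier value `m = 2` are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Basic Fuzzy C-Means: soft memberships U, cluster centers V."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]           # weighted centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))       # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)
    return U, V

# Toy "image": pixel intensities from two populations (object/background).
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.2, 0.05, 300),
                         rng.normal(0.8, 0.05, 300)])[:, None]
U, V = fuzzy_c_means(pixels, c=2)
labels = np.argmax(U, axis=1)          # hard segmentation from soft U
print(np.sort(V.ravel()).round(2))     # recovered cluster centers
```

Unlike hard k-means, the membership matrix `U` retains per-pixel ambiguity, which is exactly the "tolerance for imprecision" the SC literature emphasizes.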

Authors:Bibin John; P. Senthilkumar; Sreeja Sadasivan Abstract: This paper documents the important works in the field of conjugate heat transfer. Theoretical and applied aspects of conjugate heat transfer analysis are reviewed and summarized in light of the available literature in this field. Over the years, conjugate heat transfer analysis has evolved into one of the most effective methods of heat transfer study. In this approach, the mutual effects of thermal conduction in the solid and convection in the fluid are considered together in the analysis. Various analytical and computational studies have been reported in this field, and an overview of both will help researchers and scientists working in this area to progress in their research; that is the focus of this review. Early analytical studies related to conjugate heat transfer are reviewed and summarized in the first part of the paper, and the background of theoretical studies is discussed briefly. More importance is given to summarizing the computational studies in this field: the different coupling techniques proposed to date are presented in detail. Important studies describing the applications of conjugate heat transfer analysis are also discussed under separate headings. Hence, the present paper gives a complete theoretical background of conjugate heat transfer along with directions toward its application envelope. PubDate: 2018-01-27 DOI: 10.1007/s11831-018-9252-9

Authors:Zhen Yang; Jing Lian; Yanan Guo; Shouliang Li; Deyuan Wang; Wenhao Sun; Yide Ma Abstract: In this paper, recent developments in pulse-coupled neural network (PCNN) models, especially in fields related to image processing, are surveyed. Our review aims to provide a comprehensive and systematic analysis of selected research from the past few decades. All selected references are categorized into three groups on the basis of neuron structure, parameter setting, and the inherent characteristics of the PCNN. Various applications of these models are described, and the underlying difficulties, limitations, merits and disadvantages of applying them are discussed. Researchers will find this survey helpful in choosing and using the appropriate model for a given application. PubDate: 2018-01-24 DOI: 10.1007/s11831-018-9253-8
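A heavily simplified PCNN, one neuron per pixel with a feeding input, linking from neighbours and a decaying threshold that jumps after firing, can be sketched as follows. The parameter values are illustrative, leaky integration of the feeding and linking channels is omitted, and the neighbourhood wraps at the borders via `np.roll` (acceptable for this toy image).

```python
import numpy as np

def pcnn_fire_times(img, iters=10, beta=0.5, alpha=0.2, vtheta=5.0):
    """Simplified pulse-coupled neural network over an image.
    Returns, per pixel, the iteration at which the neuron first fired."""
    h, w = img.shape
    Y = np.zeros((h, w))                  # pulse outputs
    theta = np.ones((h, w))               # dynamic firing thresholds
    fire_time = np.full((h, w), -1)       # first-firing iteration
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    for t in range(iters):
        # Linking input: sum of the 8 neighbouring pulses.
        L = np.zeros((h, w))
        for di, dj in neighbours:
            L += np.roll(np.roll(Y, di, axis=0), dj, axis=1)
        U = img * (1.0 + beta * L)        # internal activity (modulation)
        Y = (U > theta).astype(float)     # neurons fire above threshold
        first = (fire_time < 0) & (Y > 0)
        fire_time[first] = t
        theta = theta * np.exp(-alpha) + vtheta * Y   # decay, jump on fire
    return fire_time

# Bright square on a dark background: bright pixels fire earlier, and
# the linking term pulls similar neighbours into firing together, which
# is what makes PCNNs useful for segmentation.
img = np.full((16, 16), 0.3)
img[4:12, 4:12] = 0.9
times = pcnn_fire_times(img)
print(times[8, 8], times[0, 0])
```

Grouping pixels by first-firing time yields a crude segmentation of the image into intensity-coherent regions.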

Authors:Reginald Dewil; İlker Küçükoğlu; Corrinne Luteyn; Dirk Cattrysse Abstract: Hole drilling is one of the major basic operations in part manufacturing. It follows without surprise then that the optimization of this process is of great importance when trying to minimize the total financial and environmental cost of part manufacturing. In multi-hole drilling, 70% of the total process time is spent in tool movement and tool switching. Therefore, toolpath optimization in particular has attracted significant attention in cost minimization. This paper critically reviews research publications on drilling path optimization. In particular, this review focuses on three aspects: problem modeling, objective functions, and optimization algorithms. We conclude that most papers published on hole drilling simply treat it as a basic Traveling Salesman Problem (TSP), for which extremely powerful heuristics exist and for which source code is readily available. It is therefore remarkable that many researchers continue developing “novel” metaheuristics for hole drilling without properly situating those approaches in the larger TSP literature. Consequently, more challenging hole drilling applications, such as those modeled by the Precedence Constrained TSP or hole drilling with sequence-dependent drilling times, do not receive much research focus. Unfortunately, the many low-quality hole drilling publications drown out the occasional high-quality papers that describe genuinely problematic constraints or objective functions. It is our hope that through this review paper researchers' efforts can be refocused on these problem aspects in order to minimize production costs in the general sense. PubDate: 2018-01-22 DOI: 10.1007/s11831-018-9251-x
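The point about readily available TSP machinery can be illustrated with the most basic pairing of heuristics: nearest-neighbour construction followed by 2-opt improvement, here on hypothetical hole coordinates and treating the drill path as an open tour. Production-grade heuristics (e.g. Lin-Kernighan variants) are far stronger; this is only the baseline the review argues new metaheuristics should be compared against.

```python
import numpy as np

rng = np.random.default_rng(0)
holes = rng.random((40, 2)) * 100      # hypothetical hole coordinates (mm)

def tour_length(order):
    pts = holes[order]
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def nearest_neighbour(start=0):
    """Greedy construction: always drill the closest unvisited hole next."""
    unvisited = set(range(len(holes)))
    order = [start]
    unvisited.remove(start)
    while unvisited:
        last = holes[order[-1]]
        nxt = min(unvisited, key=lambda j: float(np.linalg.norm(holes[j] - last)))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def two_opt(order):
    """Local improvement: reverse a segment whenever it shortens the path."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 2):
            for j in range(i + 1, len(order) - 1):
                cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if tour_length(cand) < tour_length(order) - 1e-9:
                    order, improved = cand, True
    return order

nn = nearest_neighbour()
opt = two_opt(nn)
print(round(tour_length(nn), 1), "->", round(tour_length(opt), 1))
```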

Authors:Sukhvir Kaur; Shreelekha Pandey; Shivani Goel Abstract: The symptoms of plant diseases are evident in different parts of a plant; however, leaves are the most commonly observed part for detecting an infection. Researchers have thus attempted to automate the process of plant disease detection and classification using leaf images. Several works have utilized computer vision technologies effectively and contributed substantially to this domain. This manuscript summarizes the pros and cons of all such studies to throw light on various important research aspects. A discussion on commonly studied infections and the research scenario in different phases of a disease detection system is presented. The performance of state-of-the-art techniques is analyzed to identify those that seem to work well across several crops or crop categories. Having identified a set of acceptable techniques, the manuscript highlights several points of consideration along with future research directions. The survey will help researchers to gain an understanding of computer vision applications in plant disease detection. PubDate: 2018-01-19 DOI: 10.1007/s11831-018-9255-6

Authors:Muhammad Arif Abdullah; Mohd Fadzil Faisae Ab Rashid; Zakri Ghazalli Abstract: Assembly sequence planning (ASP) is an NP-hard problem that involves finding the optimal sequence in which to assemble a product. For complex mechanical products, the space of potential assembly sequences is too large to be handled effectively using traditional approaches. Because of this complexity, an efficient computational approach to ASP optimization is required to determine the best assembly sequence. The topic has attracted many researchers from computer science, engineering, and mathematics backgrounds. This paper presents a review of research that used soft computing approaches to solve and optimize the ASP problem. A review of this topic is important for future researchers wishing to contribute to ASP. The literature review was conducted by collecting published research papers specifically on ASP that used soft computing approaches, and focused on ASP modeling approaches, optimization algorithms and optimization objectives. Based on the conducted review, several future research directions were drawn. In terms of problem modeling, future research should emphasize the modeling of flexible parts in ASP. Besides, the consideration of sustainable manufacturing and ergonomic factors in ASP will also be a new direction in ASP research. In addition, further study of new optimization algorithms is suggested in order to obtain optimal solutions in reasonable computational time. PubDate: 2018-01-17 DOI: 10.1007/s11831-018-9250-y
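The structure of the ASP problem, precedence constraints plus a sequence-dependent cost, can be shown on a tiny hypothetical example. Here the cost is the number of tool switches, and exhaustive enumeration is used purely for illustration; at realistic sizes this is exactly where the soft computing methods reviewed above (GA, ant colony, etc.) take over.

```python
import itertools

# Hypothetical 6-part assembly: (a, b) means "part a before part b",
# and each part needs one of two tools; cost = number of tool switches.
precedence = [(0, 2), (1, 2), (2, 3), (2, 4), (4, 5)]
tool = {0: "A", 1: "B", 2: "A", 3: "B", 4: "A", 5: "B"}

def feasible(seq):
    pos = {p: i for i, p in enumerate(seq)}
    return all(pos[a] < pos[b] for a, b in precedence)

def tool_switches(seq):
    return sum(tool[seq[i]] != tool[seq[i + 1]] for i in range(len(seq) - 1))

# Exhaustive search over all 720 permutations; fine at this toy size.
best = min((s for s in itertools.permutations(range(6)) if feasible(s)),
           key=tool_switches)
print(best, tool_switches(best))
```

Note that the precedence constraints rule out grouping all same-tool parts into two contiguous blocks here, so the optimum is two switches rather than one; metaheuristics for ASP must navigate exactly this kind of interaction between feasibility and cost.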

Authors:Tobias Gleim; Detlef Kuhl Abstract: The current paper establishes different axisymmetric and two-dimensional models for electrostatic, magnetostatic and electromagnetic induction processes. Therein, the Maxwell equations are combined in a monolithic solution strategy. A higher-order finite element discretization using Galerkin's method in space as well as in time is developed for the electromagnetic approach. In addition, time integration procedures of the Runge–Kutta family are developed. Furthermore, the residual error is introduced to open an alternative way for a numerically efficient estimation of the time integration accuracy of the Galerkin time integration method, and the Runge–Kutta methods are enriched by their embedded error estimates. A family of electrostatic, magnetostatic and electromagnetodynamic boundary and initial boundary value problems with existing analytical solutions is introduced, which will serve as benchmark examples for numerical solution procedures. PubDate: 2018-01-05 DOI: 10.1007/s11831-017-9249-9
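The embedded-error-estimate idea can be shown with the smallest possible Runge–Kutta pair: a second-order Heun step whose first stage doubles as an embedded first-order Euler solution, so the difference between the two is a free local error estimate. This is a generic textbook pair, not the specific higher-order schemes developed in the paper.

```python
import numpy as np

def heun_embedded(f, t, y, dt):
    """One Heun (2nd-order) step with an embedded Euler (1st-order)
    solution; their difference estimates the local error for free."""
    k1 = f(t, y)
    k2 = f(t + dt, y + dt * k1)
    y_low = y + dt * k1                      # embedded Euler solution
    y_high = y + 0.5 * dt * (k1 + k2)        # Heun solution
    return y_high, abs(y_high - y_low)       # step result, error estimate

# Test problem y' = -y, y(0) = 1, with exact solution e^{-t}.
f = lambda t, y: -y

# The estimate scales like O(dt^2), the local error of the embedded
# first-order method: halving dt should cut it by a factor of ~4.
_, e1 = heun_embedded(f, 0.0, 1.0, 0.1)
_, e2 = heun_embedded(f, 0.0, 1.0, 0.05)
print(round(e1 / e2, 1))
```

In an adaptive solver, this estimate is compared against a tolerance to accept or reject the step and to choose the next `dt`, which is exactly the role the embedded estimates play in the Runge–Kutta methods of the paper.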

Authors:Patrick Gallinari; Yvon Maday; Maxime Sangnier; Olivier Schwander; Tommaso Taddei Abstract: Reduced basis methods for the approximation of parameter-dependent partial differential equations are now well developed and are starting to be used for industrial applications. The classical implementation of the reduced basis method goes through two stages: in the first one, offline and time consuming, a reduced basis is constructed from standard approximation methods; then in a second stage, online and very cheap, a small problem, of the size of the reduced basis, is solved. The offline stage is a learning one from which the online stage can proceed efficiently. In this paper we propose to exploit machine learning procedures in both offline and online stages to either tackle different classes of problems or increase the speed-up during the online stage. The method is presented through a simple flow problem—a flow past a backward step governed by the Navier–Stokes equations—which nonetheless exhibits interesting features. PubDate: 2017-08-05 DOI: 10.1007/s11831-017-9238-z
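The classical offline/online split described above can be sketched on an affinely parametrized linear system standing in for the discretized PDE: expensive full solves build a snapshot basis offline, and online each new parameter only requires a tiny projected system. The matrices, parameter range and basis size are illustrative assumptions, not the paper's Navier–Stokes setup.

```python
import numpy as np

n = 400                                   # "full" discretization size

# Affinely parametrized system: A(mu) = A0 + mu * A1, right-hand side b.
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A0 = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # diffusion-like
A1 = np.diag(np.linspace(0.0, 1.0, n))                    # reaction-like
b = np.ones(n)

def solve_full(mu):
    return np.linalg.solve(A0 + mu * A1, b)

# Offline stage (expensive): snapshots at a few parameters -> basis.
snapshots = np.array([solve_full(mu) for mu in np.linspace(0.1, 2.0, 8)]).T
Q, _ = np.linalg.qr(snapshots)            # orthonormal reduced basis (n x 8)

def solve_reduced(mu):
    # Online stage (cheap): Galerkin projection onto the 8-dim basis.
    # (In practice Q.T @ A0 @ Q and Q.T @ A1 @ Q are precomputed offline
    # so the online cost is independent of n.)
    Ar = Q.T @ (A0 + mu * A1) @ Q         # 8 x 8 system
    return Q @ np.linalg.solve(Ar, Q.T @ b)

mu_test = 0.77                            # parameter not in the snapshot set
err = (np.linalg.norm(solve_reduced(mu_test) - solve_full(mu_test))
       / np.linalg.norm(solve_full(mu_test)))
print("relative error:", err)
```

The paper's proposal is to insert machine learning into this pipeline, for instance to learn the online map from `mu` to the reduced coefficients directly, bypassing even the small projected solve.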