Authors: Das, Saptarshi; Hobson, Michael P.; Feroz, Farhan; Chen, Xi; Phadke, Suhas; Goudswaard, Jeroen; Hohl, Detlef
First page: 1
Abstract: In passive seismic and microseismic monitoring, identifying and characterizing events against a strong noise background is a challenging task. Most established methods for geophysical inversion are likely to yield many false event detections, and the most advanced of these schemes require thousands of computationally demanding forward elastic-wave propagation simulations. Here we train and use an ensemble of Gaussian process surrogate meta-models, or proxy emulators, to accelerate the generation of accurate template seismograms from random microseismic event locations. When multiple microseismic events occur at different spatial locations with arbitrary amplitudes and origin times, and in the presence of noise, an inference algorithm must navigate an objective function or likelihood landscape of highly complex shape, possibly with multiple modes and narrow curving degeneracies. This is a challenging computational task even for state-of-the-art Bayesian sampling algorithms. In this paper, we propose a novel method for detecting multiple microseismic events in a strong noise background using Bayesian inference, in particular the Multimodal Nested Sampling (MultiNest) algorithm. By inverting the seismic traces recorded at multiple surface receivers, the method not only provides posterior samples for the 5D spatio-temporal-amplitude inference of the real microseismic events, but also computes the Bayesian evidence, or marginal likelihood, which permits hypothesis testing for discriminating true versus false event detections.
PubDate: 2021-02-26
DOI: 10.1017/dce.2021.1
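The evidence-based discrimination of true versus false events can be sketched in a few lines of numpy. This is a toy stand-in, not the authors' MultiNest pipeline: the Gaussian pulse template, the grid-based prior averaging, and the noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic trace: one Gaussian-pulse "event" buried in Gaussian noise.
t = np.linspace(0.0, 10.0, 200)
sigma_n = 0.5

def template(t0, A):
    return A * np.exp(-0.5 * ((t - t0) / 0.3) ** 2)

y = template(4.0, 1.0) + rng.normal(0.0, sigma_n, t.size)

def log_like(model):
    return -0.5 * np.sum((y - model) ** 2) / sigma_n**2

# H1: an event with unknown origin time and amplitude (uniform priors on a grid).
t0s = np.linspace(0.0, 10.0, 101)
As = np.linspace(0.1, 2.0, 40)
ll = np.array([[log_like(template(t0, A)) for A in As] for t0 in t0s])
# Evidence Z1 = prior-averaged likelihood (log-sum-exp for numerical stability).
log_Z1 = np.logaddexp.reduce(ll.ravel()) - np.log(ll.size)

# H0: noise only (no free parameters).
log_Z0 = log_like(np.zeros_like(t))

log_bayes_factor = log_Z1 - log_Z0   # > 0 favours a real event
```

A log Bayes factor well above zero favours the event hypothesis; running the same comparison on a pure-noise trace typically drives it negative, which is the hypothesis test the abstract refers to.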

Authors: D’Alessio, Giuseppe; Cuoci, Alberto; Parente, Alessandro
First page: 2
Abstract: The integration of Artificial Neural Networks (ANNs) and Feature Extraction (FE) within the Sample-Partitioning Adaptive Reduced Chemistry approach was investigated in this work, to increase the on-the-fly classification accuracy for very large thermochemical states. The proposed methodology was first compared with an on-the-fly classifier based on the Principal Component Analysis reconstruction error, as well as with a standard ANN (s-ANN) classifier operating on the full thermochemical space, for the adaptive simulation of a steady laminar flame fed with a nitrogen-diluted stream of n-heptane in air. The numerical simulations were carried out with a kinetic mechanism accounting for 172 species and 6,067 reactions, which includes the chemistry of Polycyclic Aromatic Hydrocarbons (PAHs) up to C. Among all the aforementioned classifiers, the one exploiting the combination of an FE step with an ANN proved to be the most efficient for the classification of high-dimensional spaces, leading to a higher speed-up factor and a higher accuracy of the adaptive simulation in the description of the PAH and soot-precursor chemistry. Finally, the investigation of the classifier’s performance was extended to flames with boundary conditions different from the training one, obtained by imposing a higher Reynolds number or time-dependent sinusoidal perturbations. Satisfactory results were observed on all the test flames.
PubDate: 2021-04-12
DOI: 10.1017/dce.2021.2

Authors: Davis, Timothy Peter
First page: 3
Abstract: We explore the concept of parameter design applied to the production of glass beads in the manufacture of metal-encapsulated transistors. The main motivation is to complete the analysis hinted at in the original publication by Jim Morrison in 1957, which was an early example of discussing the idea of transmitted variation in engineering design and an influential paper in the development of analytic parameter design as a data-centric engineering activity. Parameter design is a secondary design activity focused on selecting the nominals of the design variables to achieve the required target performance while simultaneously reducing the variance around the target. Although the 1957 paper is not recent, its approach to engineering design is modern.
PubDate: 2021-05-07
DOI: 10.1017/dce.2021.3
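Morrison's transmitted-variation idea can be illustrated numerically: to first order, sd(y) ≈ |f′(x₀)| · sd(x), so the nominal x₀ should be placed where the gradient of the response is small. The response y = sin(x) below is a hypothetical stand-in, not the glass-bead model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy response surface (an assumed stand-in): y = sin(x).
f = lambda x: np.sin(x)
dfdx = lambda x: np.cos(x)
sd_x = 0.05                      # manufacturing variation around the nominal

def transmitted_sd(x0, n=100_000):
    """Linearised and Monte Carlo estimates of the sd transmitted to y."""
    linear = abs(dfdx(x0)) * sd_x
    monte = f(x0 + rng.normal(0.0, sd_x, n)).std()
    return linear, monte

lin_a, mc_a = transmitted_sd(0.0)          # steep part of the response
lin_b, mc_b = transmitted_sd(np.pi / 2)    # flat part: gradient is zero
# Moving the nominal to where the gradient vanishes shrinks the output
# variance to second order, the essence of analytic parameter design.
```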

Authors: Carter, Douglas W.; De Voogt, Francis; Soares, Renan; Ganapathisubramani, Bharathram
First page: 5
Abstract: Recent work has demonstrated the use of sparse sensors in combination with the proper orthogonal decomposition (POD) to produce data-driven reconstructions of the full velocity fields in a variety of flows. The present work investigates the fidelity of such techniques applied to a stalled NACA 0012 aerofoil at an angle of attack, as measured experimentally using planar time-resolved particle image velocimetry. In contrast to many previous studies, the flow lacks any dominant shedding frequency and exhibits a broad range of singular values due to the turbulence in the separated region. Several reconstruction methodologies for linear state estimation based on classical compressed sensing and extended POD methodologies are presented, as well as nonlinear refinement through the use of a shallow neural network (SNN). It is found that the linear reconstructions inspired by the extended POD are inferior to the compressed sensing approach, provided that the sparse sensors avoid regions of the flow with small variance across the global POD basis. Regardless of the linear method used, the nonlinear SNN gives strikingly similar performance in its refinement of the reconstructions. The capability of sparse sensors to reconstruct separated turbulent flow measurements is further discussed, and directions for future work are suggested.
PubDate: 2021-05-31
DOI: 10.1017/dce.2021.5
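A minimal sketch of the linear state estimation idea: build a POD basis from snapshots, then estimate the modal coefficients of a new field by least squares on a handful of point sensors. The synthetic sine-mode "flow", the sensor locations, and the noise level are assumptions for illustration; this is not the PIV data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic snapshots: three spatial modes with random temporal coefficients.
n_space, n_snap = 200, 100
x = np.linspace(0.0, 2.0 * np.pi, n_space)
modes_true = np.stack([np.sin(k * x) for k in (1, 2, 3)])       # (3, n_space)
coeffs = rng.normal(size=(n_snap, 3))
X = coeffs @ modes_true + 0.01 * rng.normal(size=(n_snap, n_space))

# POD basis from the SVD of the mean-subtracted snapshot matrix.
Xm = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - Xm, full_matrices=False)
Phi = Vt[:3]                                                    # (r, n_space)

# Sparse sensors: a few point measurements of an unseen snapshot.
sensors = np.array([10, 60, 110, 160, 190])
a_new = rng.normal(size=3)
u_true = a_new @ modes_true
y = u_true[sensors]

# Least-squares POD coefficients from the sensors, then full reconstruction.
C = Phi[:, sensors].T                                           # (n_sensors, r)
b, *_ = np.linalg.lstsq(C, y - Xm[sensors], rcond=None)
u_rec = Xm + b @ Phi

err = np.linalg.norm(u_rec - u_true) / np.linalg.norm(u_true)
```

The least-squares fit is well posed only when the sensors carry variance across the POD basis, mirroring the abstract's observation that sensors must avoid low-variance regions of the flow.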

Authors: Özbay, Ali Girayhan; Hamzehloo, Arash; Laizet, Sylvain; Tzirakis, Panagiotis; Rizos, Georgios; Schuller, Björn
First page: 6
Abstract: The Poisson equation is commonly encountered in engineering, for instance in computational fluid dynamics (CFD), where it is needed to compute corrections to the pressure field that ensure the incompressibility of the velocity field. In the present work, we propose a novel fully convolutional neural network (CNN) architecture to infer the solution of the Poisson equation on a 2D Cartesian grid with different resolutions, given the right-hand-side term, arbitrary boundary conditions, and grid parameters. It provides unprecedented versatility for a CNN approach dealing with partial differential equations. The boundary conditions are handled using a novel approach: the original Poisson problem is decomposed into a homogeneous Poisson problem plus four inhomogeneous Laplace subproblems. The model is trained using a novel loss function approximating the continuous norm between the prediction and the target. Even when predicting on grids denser than previously encountered, our model demonstrates encouraging capacity to reproduce the correct solution profile. The proposed model, which outperforms well-known neural network models, can be included in a CFD solver to help with solving the Poisson equation. Analytical test cases indicate that our CNN architecture is capable of predicting the correct solution of a Poisson problem with mean percentage errors below 10%, an improvement over the first step of conventional iterative methods. Predictions from our model, used as the initial guess for iterative algorithms such as Multigrid, can reduce the root mean square error after a single iteration by more than 90% compared with a zero initial guess.
PubDate: 2021-06-29
DOI: 10.1017/dce.2021.7
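The boundary-condition decomposition described above rests on linear superposition, which can be checked directly with a small finite-difference solver. This sketch verifies the decomposition only; it contains no CNN, and the grid size and random data are arbitrary.

```python
import numpy as np

n, h = 12, 1.0 / 13              # interior n x n grid on the unit square

def solve(f, top, bottom, left, right):
    """5-point finite-difference Dirichlet solve of  -lap(u) = f."""
    N = n * n
    A = np.zeros((N, N))
    b = h**2 * f.ravel().astype(float)
    for i in range(n):           # row (y index)
        for j in range(n):       # column (x index)
            k = i * n + j
            A[k, k] = 4.0
            for di, dj, bc in ((-1, 0, bottom), (1, 0, top),
                               (0, -1, left), (0, 1, right)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, ii * n + jj] = -1.0
                else:            # boundary neighbour: move its value to the RHS
                    b[k] += bc[jj if di != 0 else ii]
    return np.linalg.solve(A, b).reshape(n, n)

rng = np.random.default_rng(3)
f = rng.normal(size=(n, n))
g = [rng.normal(size=n) for _ in range(4)]   # top, bottom, left, right data
zero = np.zeros(n)

full = solve(f, *g)
# Homogeneous Poisson problem plus four inhomogeneous Laplace subproblems.
parts = solve(f, zero, zero, zero, zero)
for k in range(4):
    bcs = [zero] * 4
    bcs[k] = g[k]
    parts = parts + solve(np.zeros((n, n)), *bcs)

max_diff = np.abs(full - parts).max()
```

Because the discrete operator is linear, the sum of the five subproblem solutions matches the directly solved field to machine precision, which is what makes the per-edge handling of boundary conditions legitimate.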

Authors: Murray, Lawrence M.; Singh, Sumeetpal S.; Lee, Anthony
First page: 7
Abstract: Monte Carlo algorithms simulate some prescribed number of samples, taking some random real time to complete the computations necessary. This work considers the converse: to impose a real-time budget on the computation, so that the number of samples simulated becomes random. To complicate matters, the real time taken for each simulation may depend on the sample produced, so that the samples themselves are not independent of their number, and a length bias with respect to compute time is apparent. This is especially problematic when a Markov chain Monte Carlo (MCMC) algorithm is used and the final state of the Markov chain, rather than an average over all states, is required, as is the case in parallel tempering implementations of MCMC; the length bias does not diminish with the compute budget in this case. It also occurs in sequential Monte Carlo (SMC) algorithms, which are the focus of this paper. We propose an anytime framework to address the concern, using a continuous-time Markov jump process to study the progress of the computation in real time. We first show that, for any MCMC algorithm, the length bias of the final state’s distribution due to the imposed real-time computing budget can be eliminated by using a multiple-chain construction. The utility of this construction is then demonstrated on a large-scale SMC implementation, using four billion particles distributed across a cluster of 128 graphics processing units on the Amazon EC2 service. The anytime framework imposes a real-time budget on the MCMC move steps within the SMC algorithm, ensuring that all processors are simultaneously ready for the resampling step, demonstrably reducing idleness due to waiting times and providing substantial control over the total compute budget.
PubDate: 2021-06-29
DOI: 10.1017/dce.2021.6
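The length bias is easy to reproduce in a toy simulation: a sampler whose two equally probable states take different real times to simulate is, at a real-time deadline, interrupted in the slow state with probability proportional to its compute time. The i.i.d. "chain" (standing in for a fast-mixing MCMC sampler) and the deterministic hold times are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two equally probable target states; simulating state 0 takes 2 time units,
# state 1 takes 1.
hold = {0: 2.0, 1: 1.0}
budget = 50.0

def state_at_deadline():
    t, x = 0.0, int(rng.integers(2))
    while t + hold[x] <= budget:
        t += hold[x]
        x = int(rng.integers(2))     # i.i.d. draws from the uniform target
    return x     # the state being simulated when the budget expires

runs = np.array([state_at_deadline() for _ in range(20_000)])
freq_slow = (runs == 0).mean()
# Renewal theory gives P(interrupted in state 0) = tau_0/(tau_0 + tau_1) = 2/3,
# not the target probability 1/2: the slow state is over-represented.
```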

Authors: Qiu, Zhiping; Wu, Han; Elishakoff, Isaac; Liu, Dongliang
First page: 8
Abstract: This paper studies the data-based polyhedron model and its application to uncertain linear optimization of engineering structures, especially when information is available neither on probabilistic properties nor on the membership functions of a fuzzy-sets-based approach; in such situations it is more appropriate to quantify the uncertainties by convex polyhedra. First, we introduce the uncertainty quantification method of the convex polyhedron approach and the model modification method based on the Chebyshev inequality. Second, the characteristics of the optimal solution of convex polyhedron linear programming are investigated. The vertex solution of convex polyhedron linear programming is then presented and proven. Next, the application of convex polyhedron linear programming to the static load-bearing capacity problem is introduced. Finally, the effectiveness of the vertex solution is verified by an example of a plane truss bearing problem, and its efficiency is verified by a load-bearing problem for stiffened composite plates.
PubDate: 2021-06-29
DOI: 10.1017/dce.2021.8
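The vertex-solution property rests on the fact that a linear objective over a convex polyhedron attains its optimum at a vertex, so only the vertices need to be checked. A quick numerical illustration on an assumed 2D polygon:

```python
import numpy as np

rng = np.random.default_rng(5)

# A 2D convex polyhedron (polygon) given by its vertices, and a linear objective.
verts = np.array([[0.0, 0.0], [4.0, 0.0], [5.0, 2.0], [2.0, 4.0], [0.0, 3.0]])
c = np.array([1.0, 2.0])

best_vertex_value = (verts @ c).max()    # optimum over the vertices only

# Random interior points (convex combinations of the vertices) never beat
# the best vertex: any convex combination of objective values is bounded
# by their maximum.
w = rng.dirichlet(np.ones(len(verts)), size=10_000)
best_interior_value = (w @ verts @ c).max()
```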

Authors: Zeraatpisheh, Milad; Bordas, Stephane P.A.; Beex, Lars A.A.
First page: 9
Abstract: Patient-specific surgical simulations require the patient-specific identification of the constitutive parameters. The sparsity of the experimental data and the substantial noise in the data (e.g., recovered during surgery) cause considerable uncertainty in the identification. In this exploratory work, parameter uncertainty for incompressible hyperelasticity, often used for soft tissues, is addressed by a probabilistic identification approach based on Bayesian inference. Our study particularly focuses on the uncertainty of the model: we investigate how the identified uncertainties of the constitutive parameters behave when different forms of model uncertainty are considered. The model uncertainty formulations range from uninformative ones to more accurate ones that incorporate more detailed extensions of incompressible hyperelasticity. The study shows that incorporating model uncertainty may improve the results, but this is not guaranteed.
PubDate: 2021-07-13
DOI: 10.1017/dce.2021.9

Authors: Sancarlos, Abel; Cameron, Morgan; Le Peuvedic, Jean-Marc; Groulier, Juliette; Duval, Jean-Louis; Cueto, Elias; Chinesta, Francisco
First page: 10
Abstract: The concept of the “hybrid twin” (HT) has recently received growing interest thanks to the availability of powerful machine learning techniques. The twin concept combines physics-based models, within a model order reduction framework to obtain real-time feedback rates, with data science. Thus, the main idea of the HT is to develop on-the-fly data-driven models to correct possible deviations between measurements and physics-based model predictions. This paper focuses on the computation of stable, fast, and accurate corrections in the HT framework. Furthermore, regarding the delicate and important problem of stability, a new approach is proposed, introducing several subvariants and guaranteeing a low computational cost as well as a stable time-integration.
PubDate: 2021-08-27
DOI: 10.1017/dce.2021.16

Authors: Tsialiamanis, George; Wagg, David J.; Dervilis, Nikolaos; Worden, Keith
First page: 11
Abstract: A framework is proposed for generative models as a basis for digital twins or mirrors of structures. The proposal is based on the premise that deterministic models cannot account for the uncertainty present in most structural modeling applications. Two different types of generative models are considered here. The first is a physics-based model based on the stochastic finite element (SFE) method, which is widely used when modeling structures that have material and loading uncertainties imposed. Such models can be calibrated according to data from the structure and would be expected to outperform any other model if the modeling accurately captures the true underlying physics of the structure. The potential use of SFE models as digital mirrors is illustrated via application to a linear structure with stochastic material properties. For situations where the physical formulation of such models does not suffice, a data-driven framework is proposed, using machine learning and conditional generative adversarial networks (cGANs). The latter algorithm is used to learn the distribution of the quantity of interest in a structure with material nonlinearities and uncertainties. For the examples considered in this work, the data-driven cGAN model outperforms the physics-based approach. Finally, an example is shown where the two methods are coupled such that a hybrid model approach is demonstrated.
PubDate: 2021-08-31
DOI: 10.1017/dce.2021.13

Authors: Svalova, Aleksandra; Helm, Peter; Prangle, Dennis; Rouainia, Mohamed; Glendinning, Stephanie; Wilkinson, Darren J.
First page: 12
Abstract: We propose using fully Bayesian Gaussian process emulation (GPE) as a surrogate for expensive computer experiments of transport infrastructure cut slopes in high-plasticity clay soils that are associated with an increased risk of failure. Our deterioration experiments simulate the dissipation of excess pore water pressure and seasonal pore water pressure cycles to determine slope failure time. It is impractical to perform the number of computer simulations that would be sufficient to make slope stability predictions over a meaningful range of geometries and strength parameters. Therefore, a GPE is used as an interpolator over a set of optimally spaced simulator runs, modeling the time to slope failure as a function of geometry, strength, and permeability. Bayesian inference and Markov chain Monte Carlo simulation are used to obtain posterior estimates of the GPE parameters. For the experiments that do not reach failure within the model time of 184 years, the time to failure is stochastically imputed by the Bayesian model. The trained GPE has the potential to inform infrastructure slope design, management, and maintenance. The reduction in computational cost compared with the original simulator makes it a highly attractive tool that can be applied to the different spatio-temporal scales of transport networks.
PubDate: 2021-09-06
DOI: 10.1017/dce.2021.14
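A fully Bayesian treatment is beyond a few lines, but the core of a GP emulator (interpolating expensive simulator runs with an RBF kernel) can be sketched with numpy. The "simulator" below is an assumed analytic stand-in, not the slope-deterioration model, and the kernel hyperparameters are fixed rather than inferred by MCMC as in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def rbf(A, B, ls=0.4, var=1.0):
    """Squared-exponential kernel between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

# "Simulator": cheap analytic stand-in for the expensive computer experiment.
simulator = lambda X: np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2

# Design of simulator runs (random here; the paper uses an optimal design).
X_train = rng.uniform(size=(40, 2))
y_train = simulator(X_train)

# GP emulator: constant mean, tiny jitter for numerical stability.
ymean = y_train.mean()
K = rbf(X_train, X_train) + 1e-8 * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train - ymean)

def emulate(X_new):
    return ymean + rbf(X_new, X_train) @ alpha

X_test = rng.uniform(size=(200, 2))
rmse = np.sqrt(np.mean((emulate(X_test) - simulator(X_test)) ** 2))
```

The emulator reproduces the training runs exactly (up to jitter) and interpolates between them at a cost that is negligible compared with rerunning the simulator, which is the source of the computational saving the abstract describes.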

Authors: Zhuang, Qinyu; Lorenzi, Juan Manuel; Bungartz, Hans-Joachim; Hartmann, Dirk
First page: 13
Abstract: Model order reduction (MOR) methods enable the generation of real-time-capable digital twins, with the potential to unlock various novel value streams in industry. While traditional projection-based methods are robust and accurate for linear problems, incorporating machine learning to deal with nonlinearity has become a popular choice for reducing complex problems. These methods are independent of the numerical solver for the full-order model and preserve the nonintrusiveness of the whole workflow. Such methods usually consist of two steps: dimension reduction by a projection-based method, followed by model reconstruction by a neural network (NN). In this work, we apply modifications to each step and investigate their impact by testing on three different simulation models. In all cases, proper orthogonal decomposition (POD) is used for dimension reduction; for this step, the effect of generating the snapshot database with constant input parameters is compared with that of time-dependent input parameters. For the model reconstruction step, three types of NN architecture are compared: the multilayer perceptron (MLP), the explicit Euler NN (EENN), and the Runge–Kutta NN (RKNN). MLPs learn the system state directly, whereas EENNs and RKNNs learn the derivative of the system state and predict the new state as a numerical integrator would. In the tests, RKNNs show their advantage as a network architecture informed by a higher-order numerical strategy.
PubDate: 2021-09-08
DOI: 10.1017/dce.2021.15
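The EENN structure (learn the derivative of the system state, then advance it with an explicit Euler step) can be illustrated by replacing the neural network with a linear least-squares fit of the derivative from snapshot data; the linear stand-in and the toy two-state system are assumptions made for brevity.

```python
import numpy as np

# Ground-truth linear dynamics dx/dt = A x (a stand-in full-order model).
A_true = np.array([[0.0, 1.0], [-4.0, -0.4]])
dt = 0.01

# Snapshot trajectory generated by explicit Euler integration.
x = np.array([1.0, 0.0])
snaps = [x]
for _ in range(500):
    x = x + dt * (A_true @ x)
    snaps.append(x)
snaps = np.array(snaps)

# "Learn" the derivative map from the snapshots (least squares in place of
# the neural network): dX ≈ snaps[:-1] @ A_fit.
dX = (snaps[1:] - snaps[:-1]) / dt
A_fit, *_ = np.linalg.lstsq(snaps[:-1], dX, rcond=None)

# Roll out the learned derivative inside the same explicit Euler integrator.
x = snaps[0]
for _ in range(500):
    x = x + dt * (x @ A_fit)
err = np.linalg.norm(x - snaps[-1])
```

An RKNN follows the same pattern but embeds the learned derivative in a higher-order Runge–Kutta stencil, which is the "architecture informed by a higher-order numerical strategy" mentioned above.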

Authors: Akroyd, Jethro; Mosbach, Sebastian; Bhave, Amit; Kraft, Markus
First page: 14
Abstract: This paper introduces a dynamic knowledge-graph approach for digital twins and illustrates how this approach is by design naturally suited to realizing the vision of a Universal Digital Twin. The dynamic knowledge graph is implemented using technologies from the Semantic Web. It is composed of concepts and instances that are defined using ontologies, and of computational agents that operate on both the concepts and instances to update the dynamic knowledge graph. By construction, it is distributed, supports cross-domain interoperability, and ensures that data are connected, portable, discoverable, and queryable via a uniform interface. The knowledge graph includes the notions of a “base world” that describes the real world and that is maintained by agents that incorporate real-time data, and of “parallel worlds” that support the intelligent exploration of alternative designs without affecting the base world. Use cases are presented that demonstrate the ability of the dynamic knowledge graph to host geospatial and chemical data, control chemistry experiments, perform cross-domain simulations, and perform scenario analysis. The questions of how to make intelligent suggestions for alternative scenarios and how to ensure alignment between the scenarios considered by the knowledge graph and the goals of society are considered. Work to extend the dynamic knowledge graph to develop a digital twin of the UK to support the decarbonization of the energy system is discussed. Important directions for future research are highlighted.
PubDate: 2021-09-06
DOI: 10.1017/dce.2021.10

Authors: Ward, Rebecca; Choudhary, Ruchi; Gregory, Alastair; Jans-Singh, Melanie; Girolami, Mark
First page: 15
Abstract: Assimilation of continuously streamed monitored data is an essential component of a digital twin; the assimilated data are used to ensure the digital twin represents the monitored system as accurately as possible. One way this is achieved is by calibration of simulation models, whether data-derived or physics-based, or a combination of both. Traditional manual calibration is not possible in this context; hence, new methods are required for continuous calibration. In this paper, a particle filter methodology for continuous calibration of the physics-based model element of a digital twin is presented and applied to an example of an underground farm. The methodology is applied to a synthetic problem with known calibration parameter values prior to being used in conjunction with monitored data. The proposed methodology is compared against static and sequential Bayesian calibration approaches and compares favourably in terms of determination of the distribution of parameter values and analysis run times, both essential requirements. The methodology is shown to be potentially useful as a means to ensure continuing model fidelity.
PubDate: 2021-09-30
DOI: 10.1017/dce.2021.12
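A bootstrap particle filter for continuous calibration can be sketched in a few lines: particles over the calibration parameter are reweighted by the likelihood of each newly streamed observation, then resampled. The scalar model, prior range, and jitter scale below are illustrative assumptions, not the underground-farm model of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stream of monitored data from a "true" system y_t = theta* + noise.
theta_true, noise_sd = 2.0, 0.5
stream = theta_true + rng.normal(0.0, noise_sd, 50)

# Particle filter over the static calibration parameter theta.
n_p = 2000
particles = rng.uniform(0.0, 5.0, n_p)       # draws from an assumed flat prior
for y in stream:
    # Weight each particle by the likelihood of the new observation.
    logw = -0.5 * ((y - particles) / noise_sd) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample, with a small jitter to fight particle degeneracy.
    idx = rng.choice(n_p, n_p, p=w)
    particles = particles[idx] + rng.normal(0.0, 0.02, n_p)

posterior_mean = particles.mean()
```

Because each assimilation step touches only the newest observation, the filter runs continuously as data stream in, which is the practical advantage over static recalibration noted above.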

Authors: Lesjak, Mathias; Doan, Nguyen Anh Khoa
First page: 16
Abstract: We explore the possibility of combining a knowledge-based reduced order model (ROM) with a reservoir computing approach to learn and predict the dynamics of chaotic systems. The ROM is based on proper orthogonal decomposition (POD) with Galerkin projection to capture the essential dynamics of the chaotic system, while the reservoir computing approach used is based on echo state networks (ESNs). Two different hybrid approaches are explored: one where the ESN corrects the modal coefficients of the ROM (hybrid-ESN-A) and one where the ESN uses and corrects the ROM prediction in full state space (hybrid-ESN-B). These approaches are applied to two chaotic systems, the Charney–DeVore system and the Kuramoto–Sivashinsky equation, and are compared to the ROM obtained using POD/Galerkin projection and to the data-only approach based solely on the ESN. The hybrid-ESN-B approach is seen to provide the best prediction accuracy, outperforming the other hybrid approach, the POD/Galerkin projection ROM, and the data-only ESN, especially when using ESNs with a small number of neurons. In addition, the influence of the accuracy of the ROM on the overall prediction accuracy of the hybrid-ESN-B is assessed rigorously by considering ROMs composed of different numbers of POD modes. Further analysis on how hybrid-ESN-B blends the prediction from the ROM and the ESN to predict the evolution of the system is also provided.
PubDate: 2021-10-13
DOI: 10.1017/dce.2021.17
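The hybrid-ESN-B idea (the ESN receives the ROM's prediction and corrects it in state space) can be sketched with a small numpy echo state network. The scalar map, the constant-bias "ROM", and all network sizes are illustrative assumptions, not the systems studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# True scalar dynamics and an imperfect "ROM" carrying a constant bias.
step = lambda u: 0.95 * np.sin(3.0 * u)

truth = [0.3]
for _ in range(600):
    truth.append(step(truth[-1]))
truth = np.array(truth)
rom_pred = step(truth[:-1]) + 0.2          # biased one-step ROM predictions

# Echo state network: fixed random reservoir, ridge-regression readout.
n_res = 100
Win = rng.uniform(-0.5, 0.5, (n_res, 2))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.8

r, states = np.zeros(n_res), []
for u, p in zip(truth[:-1], rom_pred):
    # Input: current state and the ROM's prediction of the next state.
    r = np.tanh(Win @ np.array([u, p]) + W @ r)
    states.append(r)
states = np.array(states)

# Train the readout (after a washout) to output the *true* next state.
washout, lam = 50, 1e-6
S, y = states[washout:500], truth[washout + 1:501]
Wout = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ y)

esn_err = np.mean((states[500:] @ Wout - truth[501:]) ** 2)
rom_err = np.mean((rom_pred[500:] - truth[501:]) ** 2)   # = 0.2**2 by design
```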

Authors: Zafar, Muhammad I.; Choudhari, Meelan M.; Paredes, Pedro; Xiao, Heng
First page: 17
Abstract: Accurate prediction of laminar-turbulent transition is a critical element of computational fluid dynamics simulations for aerodynamic design across multiple flow regimes. Traditional methods of transition prediction cannot be easily extended to flow configurations where the transition process depends on a large set of parameters. In comparison, neural network methods allow higher-dimensional input features to be considered without compromising the efficiency and accuracy of the traditional data-driven models. Neural network methods proposed earlier follow a cumbersome methodology of predicting instability growth rates over a broad range of frequencies, which are then processed to obtain the N-factor envelope and, from it, the transition location based on the correlating N-factor. This paper presents an end-to-end transition model based on a recurrent neural network, which sequentially processes the mean boundary-layer profiles along the surface of the aerodynamic body to directly predict the N-factor envelope and the transition locations over a two-dimensional airfoil. The proposed transition model has been developed and assessed using a large database of 53 airfoils over a wide range of chord Reynolds numbers and angles of attack. The large universe of airfoils encountered in various applications causes additional difficulties; as such, we provide further insights on selecting training datasets from large amounts of available data. Although the proposed model has been analyzed for two-dimensional boundary layers in this paper, it can be easily generalized to other flows due to the embedded feature-extraction capability of the convolutional neural network in the model.
PubDate: 2021-10-19
DOI: 10.1017/dce.2021.11

Authors: Di Francesco, Domenic; Chryssanthopoulos, Marios; Faber, Michael Havbro; Bharadwaj, Ujjwal
First page: 18
Abstract: Attempts to formalize inspection and monitoring strategies in industry have struggled to combine evidence from multiple sources (including subject matter expertise) in a mathematically coherent way. The perceived requirement for large amounts of data is often cited as the reason that quantitative risk-based inspection is incompatible with the sparse and imperfect information that is typically available to structural integrity engineers. Current industrial guidance is also limited in its methods of distinguishing the quality of inspections, as this is typically based on simplified (qualitative) heuristics. In this paper, Bayesian multi-level (partial pooling) models are proposed as a flexible and transparent method of combining imperfect and incomplete information, to support decision-making regarding the integrity management of in-service structures. This work builds on the established theoretical framework for computing the expected value of information by allowing for partial pooling between inspection measurements (or groups of measurements). The method is demonstrated for a simulated example of a structure with active corrosion in multiple locations, which acknowledges that the data will be associated with some precision, bias, and reliability. Quantifying the extent to which an inspection of one location can reduce uncertainty in damage models at remote locations is shown to influence many aspects of the expected value of an inspection. These results are considered in the context of the current challenges in risk-based structural integrity management.
PubDate: 2021-11-10
DOI: 10.1017/dce.2021.18
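Partial pooling can be illustrated with the standard normal hierarchical model: each location's estimate is shrunk toward the grand mean by a factor set by the within- versus between-location variance. The corrosion numbers below are invented for illustration, and an empirical-Bayes shrinkage factor stands in for the full Bayesian multi-level model of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Wall-loss measurements at several locations: few, noisy readings each
# (numbers invented for illustration).
true_site_means = np.array([1.0, 1.4, 0.8, 1.2])     # mm
meas_sd, n_obs = 0.3, 4
data = true_site_means[:, None] + rng.normal(0.0, meas_sd, (4, n_obs))

site_means = data.mean(axis=1)       # no pooling: each location on its own
grand_mean = site_means.mean()       # complete pooling: one shared mean

# Empirical-Bayes partial pooling: shrink each site mean toward the grand
# mean by a factor set by within-site vs between-site variance.
within_var = meas_sd**2 / n_obs
between_var = max(site_means.var(ddof=1) - within_var, 1e-9)
shrink = within_var / (within_var + between_var)
pooled = shrink * grand_mean + (1.0 - shrink) * site_means
```

Each pooled estimate lies between its raw site mean and the grand mean: an inspection at one location informs the damage model at the others, which is what drives the value-of-information results described above.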

Authors: Melia, Hannah R.; Muckley, Eric S.; Saal, James E.
First page: 19
Abstract: The development of transformative technologies for mitigating our global environmental and technological challenges will require significant innovation in the design, development, and manufacturing of advanced materials and chemicals. To achieve this innovation faster than what is possible by traditional human intuition-guided scientific methods, we must transition to a materials informatics-centered paradigm, in which synergies between data science, materials science, and artificial intelligence are leveraged to enable transformative, data-driven discoveries faster than ever before through the use of predictive models and digital twins. While materials informatics is experiencing rapidly increasing use across the materials and chemicals industries, broad adoption is hindered by barriers such as skill gaps, cultural resistance, and data sparsity. We discuss the importance of materials informatics for accelerating technological innovation, describe current barriers and examples of good practices, and offer suggestions for how researchers, funding agencies, and educational institutions can help accelerate the adoption of urgently needed informatics-based toolsets for science in the 21st century.
PubDate: 2021-11-15
DOI: 10.1017/dce.2021.19

Authors: Papadimas, Nikolaos; Dodwell, Timothy
First page: 20
Abstract: This article recasts the traditional challenge of calibrating a material constitutive model into a hierarchical probabilistic framework. We consider a Bayesian framework in which material parameters are assigned distributions, which are then updated given experimental data. Importantly, in a true engineering setting, we are not interested in inferring the parameters for a single experiment, but rather in inferring the model parameters over the population of possible experimental samples. In doing so, we seek to capture the inherent variability of the material from coupon to coupon, as well as uncertainties around the repeatability of the test. We address this problem using a hierarchical Bayesian model. However, a vanilla computational approach is prohibitively expensive. Our strategy marginalizes over each individual experiment, decreasing the dimension of the inference problem to only the hyperparameters (those parameters describing the population statistics of the material model). This marginalization step requires us to derive an approximate likelihood, for which we exploit an emulator (built offline prior to sampling) and Bayesian quadrature, allowing us to capture the uncertainty in the numerical approximation. Importantly, our approach renders hierarchical Bayesian calibration of material models computationally feasible. The approach is tested on two different examples. The first is a compression test of a simple spring model using synthetic data; the second, a more complex example, uses real experimental data to fit a stochastic elastoplastic model for 3D-printed steel.
PubDate: 2021-12-17
DOI: 10.1017/dce.2021.20
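The marginalization step can be illustrated for a conjugate toy model, in which the per-coupon parameter is integrated out by Gauss–Hermite quadrature (a deterministic stand-in for the Bayesian quadrature used in the paper) and checked against the closed-form marginal likelihood. All numerical values are assumptions for illustration.

```python
import numpy as np

# Hierarchy: per-coupon parameter theta ~ N(mu, tau^2);
# measurement of that coupon y ~ N(theta, s^2).
mu, tau, s = 10.0, 1.0, 1.0
y = 11.2                        # one coupon's measurement

def normal_pdf(x, m, sd):
    return np.exp(-0.5 * ((x - m) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Approximate likelihood p(y | mu, tau) = ∫ N(y; theta, s) N(theta; mu, tau) dtheta
# via Gauss-Hermite quadrature with the change of variables
# theta = mu + sqrt(2) * tau * x.
nodes, weights = np.polynomial.hermite.hermgauss(40)
theta = mu + np.sqrt(2.0) * tau * nodes
approx_lik = np.sum(weights * normal_pdf(y, theta, s)) / np.sqrt(np.pi)

# The conjugate toy model has a closed-form marginal to check against:
# y | mu, tau ~ N(mu, tau^2 + s^2).
exact_lik = normal_pdf(y, mu, np.sqrt(tau**2 + s**2))
```

With the per-experiment parameter integrated out, only the hyperparameters (mu, tau) remain as inference targets, which is what collapses the dimension of the sampling problem.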