Structural and Multidisciplinary Optimization
Journal Prestige (SJR): 1.458
Citation Impact (citeScore): 3
Number of Followers: 12  
 
  Hybrid journal (may contain Open Access articles)
ISSN (Print) 1615-147X - ISSN (Online) 1615-1488
Published by Springer-Verlag
  • Bi-fidelity conditional value-at-risk estimation by dimensionally
           decomposed generalized polynomial chaos expansion

      Abstract: Digital twin models allow us to continuously assess the possible risk of damage and failure of a complex system. Yet high-fidelity digital twin models can be computationally expensive, making quick-turnaround assessment challenging. To address this challenge, this article proposes a novel bi-fidelity method for estimating the conditional value-at-risk (CVaR) for nonlinear systems subject to dependent and high-dimensional inputs. For models that can be evaluated quickly, a method that integrates the dimensionally decomposed generalized polynomial chaos expansion (DD-GPCE) approximation with standard sampling-based CVaR estimation is proposed. For expensive-to-evaluate models, a new bi-fidelity method is proposed that couples the DD-GPCE with a Fourier-polynomial expansion of the mapping between the stochastic low-fidelity and high-fidelity output data to ensure computational efficiency. The method employs measure-consistent orthonormal polynomials in the random variable of the low-fidelity output to approximate the high-fidelity output. Numerical results for a structural mechanics truss with 36-dimensional (dependent random variable) inputs indicate that the DD-GPCE method provides very accurate CVaR estimates at much lower computational effort than standard GPCE approximations. A second example considers the realistic problem of estimating the risk of damage to a fiber-reinforced composite laminate. The high-fidelity model is a finite element simulation that is prohibitively expensive for risk analysis, such as CVaR computation. Here, the novel bi-fidelity method can accurately estimate CVaR because it includes low-fidelity models in the estimation procedure and uses only a few high-fidelity model evaluations to significantly increase accuracy.
      PubDate: 2023-02-01
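For a concrete reference point, the sampling-based CVaR estimator that the surrogate feeds into can be sketched in a few lines. This is a generic textbook estimator with illustrative names, not the authors' DD-GPCE implementation:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Sampling-based conditional value-at-risk: the mean loss over the
    worst (1 - alpha) tail of the empirical distribution."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)     # value-at-risk at level alpha
    return losses[losses >= var].mean()  # average of the tail losses

# Cheap surrogate outputs (e.g. from a DD-GPCE approximation) stand in for
# expensive high-fidelity model evaluations.
rng = np.random.default_rng(0)
samples = rng.standard_normal(100_000)
print(round(float(cvar(samples)), 2))    # close to 2.06 for a standard normal
```

In a bi-fidelity setting, the `losses` array would come from cheap low-fidelity evaluations corrected by a mapping learned from a handful of high-fidelity runs.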
       
  • Topology optimization of piezoelectric actuators using moving morphable
           void method

      Abstract: The piezoelectric actuator is a widely applied device owing to its appealing properties. The moving morphable void (MMV) method is an explicit topology optimization method that allows the designer to obtain exact geometric information about the topology. This paper proposes a two-phase MMV method for designing an in-plane piezoelectric actuator that considers both the piezoelectric material distribution and the polarization direction. The first MMV phase decides the layout of the piezoelectric material, and the second phase distinguishes the positive and negative polarizations. The objective is to maximize the displacement at a prescribed output port under a material volume constraint. The sensitivities of the objective function and constraint with respect to the design variables are derived, and the method of moving asymptotes (MMA) algorithm is used to update these variables. Several numerical examples demonstrate the effectiveness and adaptability of the proposed method.
      PubDate: 2023-01-25
       
  • Stacking sequence optimisation of an aircraft wing skin

      Abstract: This paper demonstrates a stacking sequence optimisation process for a composite aircraft wing skin. A two-stage approach is employed to satisfy all sizing requirements of this industrial-sized, medium-altitude, long-endurance drone. In the first stage of the optimisation, generic stacks are used to describe the thickness and stiffness properties of the structure while considering both structural requirements and discrete guidelines such as blending. In the second stage, mathematical programming is used to solve a mixed integer linear programming formulation of the stacking sequence optimisation. The proposed approach is suitable for real-world thick structures composed of multiple patches. Different thickness discretisation strategies are examined for the retrieval of the discrete stacking sequences, each having a different influence on the satisfaction of the structural constraints across the various sub-components of the wing. The weight penalty introduced between the continuous and final discrete designs of the proposed approach is negligible.
      PubDate: 2023-01-18
       
  • A proportional expected improvement criterion-based multi-fidelity
           sequential optimization method

      Abstract: Multi-fidelity surrogate models that fuse data from systems of different fidelity can significantly reduce computational cost while preserving model accuracy. The focus of this paper is sequential design for multi-fidelity models of expensive black-box problems. A Co-kriging-based multi-fidelity sequential optimization method named proportional expected improvement (PEI) is proposed, with the objectives of being more efficient for global optimization and of evaluating more reasonably the costs and benefits of candidate points from different levels of fidelity. The PEI method is an extension of expected improvement (EI) and uses an integrated criterion to determine both the location and the fidelity level of the subsequent sample point. In the integrated criterion, a proportional factor that is adaptively adjusted according to the sample density is added to EI to balance exploration and exploitation during the search. Meanwhile, the Kullback–Leibler divergence is used to measure the credibility of a point from systems of different fidelity, and the cost and constraints of the different fidelities are also considered. The effectiveness and advantage of the proposed method were demonstrated on seven analytical functions, and the method was then applied to the aerodynamic shape optimization of a NACA0012 airfoil. Experiments show that the proportional factor helps the proposed algorithm search for the global optimum more effectively, and that the KL divergence captures the relationship between high and low fidelity more faithfully.
      PubDate: 2023-01-17
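The expected improvement criterion that PEI extends can be written down compactly. The sketch below is the standard EI for minimization under a Gaussian surrogate; PEI additionally multiplies this by an adaptive proportional factor and weighs fidelity cost via KL divergence, neither of which is reproduced here:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Standard EI acquisition for minimization: the expected amount by which
    a Gaussian surrogate prediction (mean mu, std sigma) improves on the
    incumbent best observation f_best."""
    if sigma <= 0:
        return max(f_best - mu, 0.0)     # no predictive uncertainty left
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return (f_best - mu) * cdf + sigma * pdf

# At the incumbent itself (mu == f_best) EI reduces to sigma * pdf(0).
print(round(expected_improvement(0.0, 1.0, 0.0), 4))   # 0.3989
```

The two terms trade off exploitation (a low predicted mean) against exploration (high predictive uncertainty), which is exactly the tendency PEI's proportional factor re-balances.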
       
  • A novel data-driven sparse polynomial chaos expansion for high-dimensional
           problems based on active subspace and sparse Bayesian learning

      Abstract: Polynomial chaos expansion (PCE) has recently drawn growing attention in the community of stochastic uncertainty quantification (UQ). However, the curse of dimensionality limits its application to complex and large-scale structures, and PCE construction needs complete knowledge of the probability distributions of the input variables, which may be impractical for real-world problems. To overcome these difficulties, this study proposes an active learning active subspace-based data-driven sparse PCE method (AL-AS-DDSPCE). First, we use active subspace (AS) theory to reduce the dimension of the original input space, and establish measure-consistent data-driven polynomial chaos bases in the reduced input space based on samples of the original input random variables. Subsequently, to bypass the gradient calculation in the traditional AS method, we combine sparse Bayesian learning with manifold learning theory and propose an active learning AS method to obtain the subspace mapping matrix. The proposed AL-AS-DDSPCE finds the low-dimensional subspace of the original input space with an elaborate active learning algorithm that does not require the probability distributions of the input variables but is instead driven by sample data of the input random variables and the response data of the design samples, and it constructs the PCE model efficiently and accurately. We verify the proposed method using two classical high-dimensional numerical examples, a 200-bar truss without an explicit expression, and a practical engineering problem. The results show that AL-AS-DDSPCE is a good choice for solving high-dimensional UQ problems.
      PubDate: 2023-01-14
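The regression-based PCE construction at the core of such methods can be illustrated in one dimension. This toy sketch fits probabilists'-Hermite chaos coefficients by least squares; the paper's AL-AS-DDSPCE adds active subspaces and sparse Bayesian learning on top, none of which appears here:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Samples of a standard normal input and a stand-in model response.
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
y = x**2 + 0.5 * x

degree = 4
Psi = hermevander(x, degree)         # basis matrix [He_0(x) .. He_4(x)]
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Since He_2(x) = x^2 - 1, the exact expansion is He_0 + 0.5*He_1 + He_2.
print(np.round(coef, 3))             # approximately [1, 0.5, 1, 0, 0]
```

With orthonormal (measure-consistent) bases, the leading coefficient directly gives the response mean and the remaining coefficients give the variance decomposition, which is what makes PCE attractive for UQ.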
       
  • Non-probabilistic uncertain design for spaceborne membrane microstrip
           reflectarray antenna by using topology optimization

      Abstract: The spaceborne large-aperture membrane microstrip reflectarray antenna offers high gain, light weight, and small stowed volume, making it attractive for future space missions. However, two main issues restrict its application. First, traditional dimensional optimization cannot effectively affect the distribution of prestress in the membrane reflector, so too much mass must be added to achieve the desired stiffness improvement; second, the low stiffness makes the membrane reflector more sensitive to various uncertainties. In view of this, this paper proposes a method that shapes the prestress distribution by bonding an irregularly shaped additional layer, together with a non-probabilistic uncertain topology optimization method to design the shape of that layer. The effectiveness of the proposed methods is verified by numerical examples.
      PubDate: 2023-01-13
       
  • The connection between digital-twin model and physical space for rotating
           blade

      Abstract: Digital twins, which show great potential in many fields, may serve as the enabling technology for the health monitoring of aero-engine blades. However, due to the harsh conditions inside the aero-engine, one of the most challenging issues in implementing digital-twin-based blade health monitoring is the lack of an accurate method for connecting the digital-twin model with the physical entity of the rotating blade. The key is how to measure blade data accurately. The emerging blade tip timing (BTT) technique, an effective non-contact measurement method for blades, has received extensive attention recently. However, because only a limited number of probes can be installed on the engine casing, the BTT signal is generally incomplete and under-sampled, which makes it very difficult to reconstruct the blade vibration parameters from the measured data. In this study, a novel paradigm for super-resolution reconstruction of blade vibration parameters from the undersampled BTT signal is proposed based on atomic norm soft thresholding (AST), which may offer accurate blade vibration information for constructing and updating the blade digital-twin model. Unlike conventional reconstruction methods, which generally need the signal of interest to be sparse under a finite discrete dictionary for successful reconstruction, the proposed AST-based method can recover parameters taking any continuous value in the frequency domain from measurement data with fewer samples and a higher under-sampling rate. Both numerical simulation and experimental verification are used to validate the proposed method. The comparative results indicate that the proposed method is robust to incomplete data and outperforms state-of-the-art methods when few data are available.
      PubDate: 2023-01-11
       
  • Surrogate-based integrated design of component layout and structural
           topology for multi-component structures

      Abstract: In the structural and multiphysical design of engineering structures, various functional components with fixed shapes are embedded in a host structure, which poses considerable difficulties in obtaining optimal structural performance through the simultaneous design of the component layout and the structural topology. This study proposes a surrogate-based optimization strategy for the integrated design of component layout and structural topology. In the proposed framework, the multi-component layout is described with a movable material field function through several positional parameters, and the host structure topology is represented by another material field function. The dimension of the topology optimization problem (i.e., the number of design variables) is drastically reduced with the material field series-expansion method, while a clear and smooth structural boundary description is retained. A multi-material interpolation model is then suggested to couple the host structure and the functional components. To avoid deriving sensitivities for complex structural responses and to alleviate the solution difficulty caused by the problem's multiple local optima, a surrogate-based algorithm built on a sequential Kriging model is employed to solve the optimization problem. Several numerical examples, including mechanics and electromagnetics design problems, are presented to verify the validity and efficiency of the proposed method.
      PubDate: 2023-01-09
       
  • Multi-domain acoustic topology optimization based on the BESO approach:
           applications on the design of multi-phase material mufflers

      Abstract: Since the early 1920s, the design of mufflers has been an influential topic of study among engineers, owing to their ability to reduce noise from industrial machinery, combustion engines, refrigerators, and so on. However, since muffler applications depend strongly on the target frequencies and the adopted geometries, efficient muffler design methods are still under investigation to this day. With that in mind, this paper presents a multi-domain acoustic topology optimization methodology applied to the design of reactive and dissipative expansion chamber mufflers. Based on the bi-directional evolutionary structural optimization (BESO) algorithm, the proposed approach also uses a novel material interpolation scheme that considers acoustic, porous, and rigid domains during the optimization process, hence configuring a multi-phase procedure. Porous materials are simulated with the Johnson–Champoux–Allard (JCA) formulation, while the numerical solution is obtained by the finite element method. The objective function is defined as the mean value of the sound transmission loss (TL) obtained along one, two, or three frequency bands, and the proposed multi-domain BESO (mdBESO) algorithm is applied to the design of single- and multi-chamber mufflers. Here, more than one muffler per BESO iteration is considered, and it is also possible to optimize for specific frequency bands in predefined chambers. The effectiveness of both the novel material interpolation scheme and the mdBESO algorithm is highlighted, showing considerable TL enhancements across the broad range of frequencies chosen, while also producing clearly optimized partitions.
      PubDate: 2023-01-09
       
  • Wheel impact test by deep learning: prediction of location and magnitude
           of maximum stress

      Abstract: To ensure vehicle safety, the impact performance of wheels must be verified during wheel development through a wheel impact test. However, manufacturing and testing a real wheel requires significant time and money, because developing an optimal wheel design involves numerous iterations to modify the design and verify its safety performance. Accordingly, wheel impact tests have been replaced by computer simulations such as finite element analysis (FEA); however, FEA still incurs high computational costs for modeling and analysis and requires FEA experts. In this study, we present a deep-learning-based model for predicting the impact performance of aluminum road wheels that replaces computationally expensive and time-consuming 3D FEA. For this purpose, 2D disk-view wheel image data, 3D wheel voxel data, and the barrier mass values used in the wheel impact test serve as inputs to predict the magnitude of the maximum von Mises stress, its location, and the stress distribution over the 2D disk view. The input data are first compressed into a latent space with a 3D convolutional variational autoencoder (cVAE) and a 2D convolutional autoencoder (cAE). Subsequently, fully connected layers are used to predict the impact performance, and a decoder is used to predict the stress-distribution heatmap of the 2D disk view. The proposed model can replace the impact test in the early wheel-development stage by predicting the impact performance in real time and can be used without domain knowledge, thereby reducing the time required for the wheel development process.
      PubDate: 2023-01-08
       
  • Correction: A comprehensive review of digital twin—part 2: roles of
           uncertainty quantification and optimization, a battery digital twin, and
           perspectives

      PubDate: 2023-01-07
       
  • Adaptive kriging model-based structural reliability analysis under
           interval uncertainty with incomplete data

      Abstract: Uncertainty in the quantitative models of the input variables and in the computational model inevitably causes uncertainty in the structural response and the structural reliability. Hence, structural reliability analysis requires a precise input uncertainty model and a highly accurate solution model. However, not every uncertain input variable can be described by an explicit quantitative model in practical engineering; often, only incomplete samples of some variables are available. Although Monte Carlo simulation (MCS) has been used to solve this problem, the resulting two-layer MCS is prohibitively expensive, and approximate methods for reliability assessment raise questions about the confidence of the resulting reliability estimates. To address these challenges, an adaptive Kriging (AK) model-based approach is proposed that divides the two-layer MCS into two AK models. A new quantitative model for interval variables is developed to handle the input uncertainty, and a novel learning function improved from the H learning function (the IH function) is developed with a weight function to make the construction of the Kriging models more efficient. The IH function not only considers design sites with large uncertainty but also actively searches around the limit-state function (LSF) by assigning different weights to design points. In the proposed approach, the first AK model is constructed for reliability prediction; the relationship between the parameters of the input models and the reliability is then built by a second Kriging model using the first one, and credibility assessment is carried out with the second model. Since only the first AK model needs time-consuming finite element (FE) calculations, the proposed approach can significantly improve the efficiency of confidential reliability analysis without losing accuracy. Several numerical examples demonstrate the feasibility and effectiveness of the proposed model.
      PubDate: 2023-01-07
       
  • Topology optimization with advanced CNN using mapped physics-based data

      Abstract: This research proposes a new framework for developing an accurate machine-learning-based surrogate model that predicts optimum topological structures using advanced encoder–decoder networks, Unet and Unet++. The trained surrogate model predicts the optimum structural layout directly from the results of an initial static analysis, without any iterative optimization calculations. Input and output data are generated using the commercial finite element analysis package Abaqus/Standard and the optimization package Abaqus/Tosca, and data augmentation is applied to increase the amount of data without additional calculations. This research primarily focuses on overcoming a weakness of previous studies, namely that the trained network is applicable only to limited geometry variations and requires an organized rectangular grid mesh. This study therefore suggests a mapping process that converts the analysis data on any type of mesh element to a tensor form, which enables training and deploying the network. To increase prediction accuracy, the network is trained with labeled optimum material data using a binary segmented output that represents the structure and void regions in the domain. Finally, the trained networks are evaluated using intersection over union (IoU) scores, which measure classification accuracy. The best-performing network provides highly accurate results, with an average IoU score of 90.0%, a maximum of 99.8%, and a standard deviation of 7.1%. The model is also applied to local-global structural optimization problems, reducing the overall calculation time by 98%.
      PubDate: 2023-01-06
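The IoU score used to grade the predicted layouts has a simple definition for binary structure/void masks. A minimal sketch (generic metric, not tied to the paper's pipeline):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary masks (structure vs. void)."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0                       # both masks empty: perfect agreement
    return np.logical_and(pred, target).sum() / union

a = np.array([[1, 1], [0, 0]])           # predicted layout
b = np.array([[1, 0], [0, 0]])           # reference optimum layout
print(iou(a, b))                         # 0.5
```

An IoU of 1.0 means the predicted and reference topologies coincide exactly; the 90% average reported above therefore indicates near-complete overlap on most test cases.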
       
  • Deep neural networks for parameterized homogenization in concurrent
           multiscale structural optimization

      Abstract: Concurrent multiscale structural optimization is concerned with improving macroscale structural performance through the design of microscale architectures. The multiscale design space must consider variables at both scales, so design restrictions are often necessary for feasible optimization. This work targets such design restrictions, aiming to increase microstructure complexity through deep learning models. A deep neural network (DNN) is implemented as a model for both microscale structural properties and material shape derivatives (shape sensitivities). The DNN's profound advantage is its capacity to distill complex, multidimensional functions into explicit, efficient, and differentiable models. Compared with traditional methods for parameterized optimization, the DNN achieves sufficient accuracy and stability in a structural optimization framework. Through comparison with interface-aware finite element methods, it is shown that sufficiently accurate DNNs converge to a stable approximation of the shape sensitivity through backpropagation. A variety of optimization problems are considered to directly compare DNN-based microscale design with that of the interface-enriched generalized finite element method (IGFEM). Using these developments, DNNs are trained to learn the numerical homogenization of microstructures in two and three dimensions with up to 30 geometric parameters. The accelerated performance of the DNN affords an increased design complexity that is used to design bio-inspired microarchitectures in 3D structural optimization. With numerous benchmark design examples, the presented framework is shown to be an effective surrogate for numerical homogenization in structural optimization, addressing the gap between pure material design and structural optimization.
      PubDate: 2023-01-05
       
  • Moving morphable curved components framework of topology optimization
           based on the concept of time series

      Abstract: Topology optimization provides a powerful approach to structural design through its capability to find the optimal topology automatically. However, the optimal topologies obtained with traditional density-based methods are difficult to manufacture. Recently, explicit descriptions, such as the moving morphable components (MMC) approach, have been introduced to narrow the gap between design and manufacture. However, since only components with simple geometry are considered, the geometric arbitrariness of those studies remains unsatisfactory. Here we demonstrate a new topology optimization approach that originates from the MMC framework and replaces the straight components with curved ones to enhance geometric arbitrariness. The skeleton of the modified component is described by a non-uniform rational B-spline (NURBS) curve, and the concept of time series is proposed to generate the curved component directly from the 1D skeleton curve. It is shown that varying the width is much easier with the new strategy than with other curve descriptions. Numerical examples demonstrate the effectiveness and robustness of the proposed approach based on moving morphable curved components.
      PubDate: 2023-01-04
       
  • A high-dimensional optimization method combining projection
           correlation-based Kriging and multimodal parallel computing

      Abstract: In surrogate-based optimization (SBO), the recognized issues with high-dimensional surrogate models are prohibitive computational cost and low model accuracy, and effective remedies for this 'curse of dimensionality' are still lacking. In this paper, we propose a novel Kriging metamodel to remedy this deficiency. The Kriging model based on projection correlation (KPC) introduces the projection correlation into the Kriging modeling process as prior information, taking into account the nature of the hyperparameters. The effectiveness and accuracy of the KPC are illustrated on 10- to 70-dimensional numerical examples. Furthermore, a parallel computing strategy that combines the multi-peak characteristics of expected improvement and minimizing prediction (MEI&MP) is proposed to further improve high-dimensional optimization efficiency and potential. The global performance and optimization efficiency of our method are validated on typical test functions and structural optimization problems.
      PubDate: 2022-12-31
       
  • Topology optimization under microscale uncertainty using stochastic
           gradients

      Abstract: This paper considers the design of structures made of engineered materials, accounting for uncertainty in material properties. We present a topology optimization approach that optimizes the structural shape and topology at the macroscale assuming design-independent uncertain microstructures. The structural geometry at the macroscale is described by an explicit level set approach, and the macroscopic structural response is predicted by the extended finite element method (XFEM). We describe the microscopic layout either by an analytic geometric model with uncertain parameters or by a level cut of a Gaussian random field, and the macroscale properties of the microstructured material are predicted by homogenization. Considering the large number of possible microscale configurations, one of the main challenges in solving such topology optimization problems is the computational cost of estimating the statistical moments of the cost and constraint functions and of their gradients with respect to the design variables. Methods for predicting these moments, such as Monte Carlo sampling, Taylor series, and polynomial chaos expansions, often require a large number of random samples, making the computation impractical. To reduce this cost, we propose an approach wherein, at every design iteration, we use only a small number of microstructure configurations to generate an independent, stochastic approximation of the gradients. These gradients are then used either with a gradient descent algorithm, namely adaptive moment estimation (Adam), or with the globally convergent method of moving asymptotes (GCMMA). Three numerical examples from structural mechanics show that the proposed approach provides a computationally efficient way to perform macroscale topology optimization in the presence of microstructural uncertainty and enables designers to consider a class of problems that is out of reach today with conventional tools.
      PubDate: 2022-12-30
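The idea of driving a design with noisy, few-sample gradient estimates can be illustrated with the Adam update on a toy scalar problem. This is the textbook Adam rule, not the paper's level-set/XFEM pipeline; the quadratic objective and noise level are assumptions:

```python
import numpy as np

def adam_step(theta, grad, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update driven by a (possibly noisy) stochastic gradient."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad                  # first-moment estimate
    v = b2 * v + (1 - b2) * grad**2               # second-moment estimate
    m_hat = m / (1 - b1**t)                       # bias corrections
    v_hat = v / (1 - b2**t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), (m, v, t)

# Minimize E[(theta - 1)^2] from noisy gradients, mimicking gradient
# estimates assembled from only a few microstructure samples per iteration.
rng = np.random.default_rng(2)
theta, state = 5.0, (0.0, 0.0, 0)
for _ in range(2000):
    grad = 2.0 * (theta - 1.0) + rng.normal(scale=0.5)
    theta, state = adam_step(theta, grad, state)
print(round(float(theta), 1))                     # settles near 1.0
```

The moment averaging is what lets Adam tolerate the sampling noise in the gradient, which is precisely why it pairs well with the small-sample stochastic gradients proposed above.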
       
  • A reinforcement learning hyper-heuristic in multi-objective optimization
           with application to structural damage identification

      Abstract: Multi-objective optimization satisfies multiple decision criteria concurrently and generally yields multiple solutions. It has the potential to be applied to structural damage identification problems, which are often under-determined. Achieving high-quality solutions in terms of accuracy, diversity, and completeness is a challenging research subject, and the solution techniques and parameter selections are believed to be problem specific. In this research, we formulate a reinforcement learning hyper-heuristic scheme that works coherently with the single-point search algorithm MOSA/R (multi-objective simulated annealing algorithm based on re-seeding). The four low-level heuristics proposed can meet various optimization requirements adaptively and autonomously using domination amount, crowding distance, and hypervolume calculations. The new approach exhibits improved and more robust performance than AMOSA, NSGA-II, and MOEA/D when applied to benchmark test cases. It is then applied to an active damage interrogation scheme for structural damage identification, where solution diversity, completeness, and accuracy are critically important. Results show that this approach can successfully include the true damage scenario in the identified solution set. The outcome of this research can potentially be extended to a variety of applications.
      PubDate: 2022-12-28
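Of the three selection metrics named above, the crowding distance is the simplest to write down. The sketch below is the standard NSGA-II-style calculation over a set of objective vectors (a generic implementation, not the authors' hyper-heuristic code):

```python
import numpy as np

def crowding_distance(F):
    """NSGA-II-style crowding distance for objective vectors F
    (n_points x n_objectives); boundary points get infinite distance."""
    F = np.asarray(F, float)
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        d[order[0]] = d[order[-1]] = np.inf       # preserve extreme points
        span = F[order[-1], j] - F[order[0], j]
        if span == 0:
            continue                              # degenerate objective
        d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return d

front = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(crowding_distance(front))   # boundary points inf, middle point 2.0
```

Points with large crowding distance sit in sparsely populated regions of the front, so favoring them promotes exactly the solution diversity the abstract emphasizes.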
       
  • Variable functioning and its application to large scale steel frame design
           optimization

      Abstract: To solve complex real-world problems, heuristics and concept-based approaches can be used to incorporate information into the problem. In this study, a concept-based approach called variable functioning (Fx) is introduced to reduce the number of optimization variables and narrow the search space. In this method, the relationships among one or more subsets of variables are defined with functions using information available prior to optimization; the function variables are then optimized instead of the original variables during the search. Using a problem-structure analysis technique and engineering expert knowledge, the Fx method is applied to enhance the steel frame design optimization process, a complex real-world problem. The proposed approach is coupled with particle swarm optimization and differential evolution algorithms and applied to three case studies, in which the algorithms optimize the designs by exploiting the relationships among the column cross-section areas. The results show that Fx can significantly improve both the convergence rate and the final design of a frame structure, even when it is used only for seeding.
      PubDate: 2022-12-27
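The variable-functioning idea can be sketched in a few lines: instead of exposing every column area to the optimizer, a small set of function variables generates them all. The linear taper and the parameter names below are illustrative assumptions, not the paper's actual functions:

```python
import numpy as np

def expand(params, n=10):
    """Variable functioning sketch: n column cross-section areas are generated
    from two function variables (base area a, taper rate b) instead of being
    optimized individually."""
    a, b = params
    return a - b * np.arange(n)          # larger sections at lower storeys

# Any optimizer (e.g. PSO or DE) now searches a 2-D space instead of a 10-D one.
areas = expand((12.0, 1.0))
print(areas[0], areas[-1])               # 12.0 3.0
```

The optimizer evaluates candidate `(a, b)` pairs through `expand` and the structural analysis, so the known monotone relationship among the column areas is enforced by construction rather than discovered by search.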
       
  • Experimental verification: a multi-objective optimization method for
           inversion technology of hydrodynamic journal bearings

      Abstract: Targeting the key design variables of journal bearings, a novel optimization scheme is proposed to minimize oil leakage and power loss. For the first time, inversion technology is introduced into a multi-objective optimization genetic algorithm under thermohydrodynamic conditions. Using a hybrid optimization method (sequential quadratic programming combined with the multi-objective optimization genetic algorithm) and the Pareto optimal frontier method, journal bearing models under oil-groove (Model A) and oil-hole (Model B) supply conditions are optimized. Moreover, the oil leakage (QL) formula is derived in detail, and good predictions are obtained for data from the literature. The optimization tests show that, compared with the maximum errors of 13% and 25% for the power loss and leakage flow predictions reported in the literature, the maximum errors of this prediction model are 8% and 14%, respectively. In addition, under the inversion technology the Pareto optimal frontier proves more advantageous than the hybrid optimization method, although both give good predictions. The accuracy of the model is confirmed by comparison with experimental data from the literature.
      PubDate: 2022-12-27
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 



JournalTOCs © 2009-