Abstract: In this article, peridynamic and finite strip sub-regions are coupled for the first time. Accordingly, the peridynamic theory is used in areas where non-local effects are important, while the finite strip method, an efficient method for solving plate problems, is applied elsewhere. Static cases with and without cracks are investigated using the coupling approach, and the results are compared with those available in the literature. A comprehensive parametric study is performed to investigate the effect of various parameters, such as the grid size, the horizon value, the number of strips, and the series functions used in the finite strip method. Finally, examples involving non-uniform load conditions, a plate with a hole, and crack propagation are studied. PubDate: 2022-05-10
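The abstract does not state the governing equations; as background, the bond-based peridynamic equation of motion that such couplings typically build on replaces the divergence of stress with an integral over a neighborhood of each material point (the horizon \(H_{\mathbf{x}}\)):

\[
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t) \;=\; \int_{H_{\mathbf{x}}} \mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\; \mathbf{x}'-\mathbf{x}\big)\, dV_{\mathbf{x}'} \;+\; \mathbf{b}(\mathbf{x},t),
\]

where \(\mathbf{f}\) is the pairwise bond force and the horizon radius is the non-locality length scale whose value the parametric study varies.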
Abstract: The correct function of many organs depends on proper lumen morphogenesis, which requires the orchestration of both biological and mechanical factors. However, how these factors coordinate is not yet fully understood. Here, we focus on the development of a mechanistic model for computationally simulating lumen morphogenesis. In particular, we consider the hydrostatic pressure generated by the cells’ fluid secretion as the driving force and the density of the extracellular matrix as a regulator of the process. For this purpose, we develop a 3D agent-based model for lumen morphogenesis that includes the cells’ fluid secretion and the density of the extracellular matrix. Moreover, this computational model considers the variation in the biological behavior of cells in response to the mechanical forces that they sense. We then study the formation of the lumen under different mechanical scenarios and conclude that an increase in matrix density reduces the lumen volume and hinders lumen morphogenesis. Finally, we show that the model successfully predicts normal lumen morphogenesis when the matrix density is physiological and aberrant multilumen formation when the matrix density is excessive. PubDate: 2022-05-06
Abstract: Volume decomposition is a technique for decomposing a computer-aided design (CAD) model into subvolumes to improve the types of meshes that can be generated and enhance the accuracy of finite element analysis. Protrusions frequently occur on thin-shell CAD models for functional and structural purposes. Automatic decomposition of such features is difficult due to the complexity and variation of shapes. In this study, a method was proposed for decomposing protrusions on thin-shell CAD models. A feature recognition algorithm was first employed to recognize four types of protrusions on a boundary representation (B-rep) model: tubes, columns, ribs, and symmetric extrusions. A specific volume decomposition algorithm was then developed for each type of protrusion. A protrusion is divided into sweepable subvolumes, with each subvolume represented by a pair of main contours and several side contours that connect to both main contours simultaneously. The contours of all subvolumes are tightly adjacent to each other to preserve the entire volume of the feature. Realistic CAD models and analysis results are presented to demonstrate the feasibility of the proposed protrusion decomposition method. The integration of the proposed algorithm with the decomposition of thin shells is also discussed. PubDate: 2022-05-06
Abstract: This paper presents an algorithm to generate a new kind of polygonal mesh obtained from triangulations. Each polygon is built from a terminal-edge region surrounded by edges that are not the longest edge of either of the two triangles that share them. The algorithm, termed Polylla, is divided into three phases. The first phase labels each edge of the input triangulation according to its length; the second phase builds polygons (simple or not) from terminal-edge regions using the label system; and the third phase transforms each non-simple polygon into simple ones. The final mesh contains both convex and non-convex polygons. Since Voronoi-based meshes are currently the most widely used polygonal meshes, we compare some geometric properties of our meshes against constrained Voronoi meshes. Several experiments were run to compare the shape and size of the polygons as well as the number of final mesh points and polygons. For the same input, Polylla meshes contain fewer polygons than Voronoi meshes, and the algorithm is simpler and faster than the algorithm for generating constrained Voronoi meshes. Finally, we have validated Polylla meshes by solving the Laplace equation on an L-shaped domain using the virtual element method (VEM), showing that the numerical performance of the VEM on Polylla meshes and on Voronoi meshes is similar. PubDate: 2022-05-03
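The abstract outlines the phases but not their mechanics; the following minimal Python sketch illustrates the kind of longest-edge labeling the first phase performs on a triangulation (the region growing and polygon repair of phases two and three are not shown, and this is an illustration, not the paper's code).

```python
from itertools import combinations
import numpy as np
from scipy.spatial import Delaunay

def label_frontier_edges(points, triangles):
    """Label each interior edge as 'frontier' if it is not the
    longest edge of either triangle sharing it (phase-one idea)."""
    longest = {}   # edge -> True if it is the longest edge of some triangle
    owners = {}    # edge -> number of triangles sharing it
    for tri in triangles:
        edges = [tuple(sorted(e)) for e in combinations(tri, 2)]
        lengths = [np.linalg.norm(points[a] - points[b]) for a, b in edges]
        longest_edge = edges[int(np.argmax(lengths))]
        for e in edges:
            owners[e] = owners.get(e, 0) + 1
            longest[e] = longest.get(e, False) or (e == longest_edge)
    # frontier edges bound the terminal-edge regions that become polygons
    return [e for e in owners if owners[e] == 2 and not longest[e]]

pts = np.random.rand(30, 2)
tri = Delaunay(pts)
print(len(label_frontier_edges(pts, tri.simplices)))
```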
Abstract: This paper proposes a novel cascadic multilevel optimization framework for fiber-reinforced composite structures, inspired by the character of the non-uniform rational B-spline (NURBS) surface, to control the structural topology and fiber angle distribution and to improve computational efficiency. The NURBS surface is not only used for the calculation of the structural response and the geometric modeling of the design but is also introduced to construct the hierarchy of the parameterization of the design variables. The optimization problem is formulated and solved successively from a coarse mesh level to the finest mesh level, with the initial design of each fine level computed from the solution of the coarser level. The number of meshes and design variables is gradually increased, while the design freedom and the resolution of the parameterization remain the same as in an optimization carried out directly at the finest mesh level. Because there are fewer design variables and meshes at the coarse levels and the finest level is only used to find an accurate solution, the approach efficiently reduces the computational cost of the optimization. Meanwhile, the local support character of the NURBS surface avoids the checkerboard phenomenon and improves the continuity of the local fiber angle. Several numerical examples of compliance minimization are presented to verify the effectiveness of the proposed method. PubDate: 2022-04-27
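The local support that the abstract credits with suppressing checkerboard patterns comes from the underlying B-spline basis; as a minimal illustration (not code from the paper), the recursive Cox-de Boor evaluation below makes it visible that each basis function \(N_{i,p}\) is non-zero only over a few neighboring knot spans.

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline
    basis function at parameter t for the given knot vector."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    den = knots[i + p] - knots[i]
    if den > 0.0:
        left = (t - knots[i]) / den * bspline_basis(i, p - 1, t, knots)
    den = knots[i + p + 1] - knots[i + 1]
    if den > 0.0:
        right = (knots[i + p + 1] - t) / den * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

# quadratic basis on an open uniform knot vector: at any parameter value
# only a handful of basis functions (hence design variables) are active
knots = [0, 0, 0, 1, 2, 3, 3, 3]
print([round(bspline_basis(i, 2, 1.5, knots), 3) for i in range(5)])
```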
Abstract: The development of refrigeration systems using thermo-acoustic technology is a novel route to environmentally friendly refrigerators. A fully transient CFD method is introduced here that can reproduce the complete thermo-acoustic phenomenon together with its different governing physics. The working fluid contributes critically to a thermo-acoustic refrigerator’s cooling performance. In this paper, unlike previous studies, all possible combinations of noble gases are considered, and the performance of the refrigerator is investigated from the two aspects of cooling temperature and \({\mathrm{COP}}_{\mathrm{R}}\) (relative coefficient of performance) to determine the optimal gas mixture among all combinations. For this purpose, the effects of sound intensity and the fluid’s Prandtl number, two key factors, on the refrigeration performance are investigated. A 2D-axisymmetric computational geometry resembling the real device is used to obtain results that are as reliable as possible, and COMSOL software is used to perform the simulations. It is concluded that, in terms of cooling temperature, the sample with the highest sound intensity (the pure He sample in this research) is the best, whereas in terms of a higher \({\mathrm{COP}}_{\mathrm{R}}\), the sample with the lowest Prandtl number (the 72%He–28%Xe sample in this research) is the best. The lowest cooling temperature, achieved by the pure He sample, was about 273 K, and the highest \({\mathrm{COP}}_{\mathrm{R}}\), belonging to the 72%He–28%Xe sample, was approximately 0.335. PubDate: 2022-04-27
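The role of the Prandtl number can be made explicit; for any working gas it is the ratio of momentum to thermal diffusivity,

\[
\mathrm{Pr} = \frac{\mu\, c_p}{k},
\]

and binary mixtures of a light and a heavy noble gas such as He–Xe are a standard way to push \(\mathrm{Pr}\) below the value of any pure monatomic gas (\(\mathrm{Pr} \approx 2/3\)), reducing viscous losses relative to heat transport in the stack. The \({\mathrm{COP}}_{\mathrm{R}}\) quoted above is commonly defined as the COP normalized by the Carnot COP over the same temperature lift.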
Abstract: This paper introduces the F\(_3\)ORNITS non-iterative co-simulation algorithm, in which F\(_3\) stands for the 3 flexible aspects of the method: flexible polynomial-order representation of the coupling variables, a flexible time-stepper applying variable co-simulation step size rules on the subsystems that allow it, and a flexible scheduler orchestrating the meeting times among the subsystems, capable of asynchronousness when subsystems’ constraints require it. The motivation of the F\(_3\)ORNITS method is to accept any kind of co-simulation model as long as it represents a circuit (0D models, such as ODEs or DAEs), including any kind of subsystem (open circuits), regardless of their available capabilities. Indeed, one of the major problems in industry is that the subsystems usually have constraints or lack advanced capabilities, making it impossible to implement most of the advanced co-simulation algorithms on them. The method makes it possible to preserve the dynamics of the coupling constraints when necessary, to avoid breaking \(C^1\) smoothness at communication times, and to adapt the co-simulation step size in a way that is robust both to zero-crossing variables (contrary to classical relative error-based criteria) and to jumps. Five test cases are presented to illustrate the robustness of the F\(_3\)ORNITS method as well as its higher accuracy than the non-iterative Jacobi coupling algorithm (the most commonly used method in industry) with a smaller number of co-simulation steps. PubDate: 2022-04-26
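For context, the non-iterative Jacobi baseline mentioned at the end works as follows: all subsystems advance one macro-step in parallel using held (typically zero-order) extrapolations of each other's outputs, and data are exchanged only at communication points. Below is a minimal sketch with two made-up scalar subsystems standing in for black-boxed simulators; the dynamics and step size are illustrative only.

```python
import numpy as np

def jacobi_cosim(f1, f2, x1, x2, t_end, h):
    """Non-iterative Jacobi coupling: both subsystems integrate each
    macro-step with the other's output frozen (zero-order hold), then
    exchange values. One explicit Euler micro-step per macro-step
    keeps the sketch short."""
    t, out = 0.0, [(0.0, x1, x2)]
    while t < t_end:
        u1, u2 = x2, x1              # held coupling inputs for this step
        x1 = x1 + h * f1(x1, u1)     # both advance in parallel...
        x2 = x2 + h * f2(x2, u2)     # ...using stale inputs
        t += h
        out.append((t, x1, x2))      # exchange happens here
    return np.array(out)

# two weakly coupled decaying states (illustrative only)
res = jacobi_cosim(lambda x, u: -x + 0.5 * u,
                   lambda x, u: -2 * x + 0.3 * u,
                   1.0, -1.0, t_end=5.0, h=0.1)
print(res[-1])
```

F\(_3\)ORNITS improves on this baseline by raising the polynomial order of the held inputs, adapting the step size per subsystem, and desynchronizing the communication points when subsystem constraints require it.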
Abstract: This paper presents a novel method for solving partial differential equations on three-dimensional CAD geometries by means of immersed isogeometric discretizations that do not require quadrature schemes. It relies on a newly developed technique for the evaluation of polynomial integrals over spline boundary representations that is exclusively based on analytical computations. First, through a consistent polynomial approximation step, the finite element operators of the Galerkin method are transformed into integrals involving only polynomial integrands. Then, by successive applications of the divergence theorem, those integrals over B-reps are transformed first into surface integrals and then into line integrals with polynomial integrands. Eventually, these line integrals are evaluated analytically with machine-precision accuracy. The performance of the proposed method is demonstrated by means of numerical experiments in the context of 2D and 3D elliptic problems, retrieving optimal error convergence orders in all cases. Finally, the methodology is illustrated for 3D CAD models with an industrial level of complexity. PubDate: 2022-04-25
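The dimension-reduction step can be illustrated on a single monomial: choosing an antiderivative in one coordinate and applying the divergence theorem turns a volume integral into a boundary integral whose integrand is still polynomial,

\[
\int_{\Omega} x^{a} y^{b} z^{c} \, d\Omega \;=\; \int_{\partial\Omega} \frac{x^{a+1}}{a+1}\, y^{b} z^{c}\; n_x \, d\Gamma ,
\]

since the left-hand integrand is the divergence of \(\big(x^{a+1} y^{b} z^{c}/(a+1),\, 0,\, 0\big)\). Repeating the same construction on each surface patch reduces the surface integrals to line integrals, which admit closed-form evaluation over spline boundary representations.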
Abstract: Due to its capacity to evolve in a large solution space, the Simulated Annealing (SA) algorithm has shown very promising results for the Reverse Engineering of editable CAD geometries, including parametric 2D sketches, 3D CAD parts, and assemblies. However, parameter setting is a key factor for its performance, and it is also a tedious task. This paper addresses the way a SA-based Reverse Engineering technique can be enhanced by identifying its optimal default setting parameters for the fitting of CAD geometries to point clouds of digitized parts. The method integrates a sensitivity analysis to characterize the impact of variations in the parameters of a CAD model on the evolution of the deviation between the CAD model itself and the point cloud to be fitted. The principles underpinning the adopted fitting algorithm are briefly recalled. A framework that uses design of experiments (DOE) is introduced to identify and save in a database the best setting parameter values for given CAD models. This database is then exploited when considering the fitting of a new CAD model: using similarity assessment, it is possible to reuse the best setting parameter values of the most similar CAD model found in the database. The applied sensitivity analysis is described together with a comparison of the resulting sensitivity evolution curves with the changes in the CAD model parameters imposed by the SA algorithm. Possible improvements suggested by the analysis are implemented to enhance the efficiency of SA-based fitting. The overall approach is illustrated on the fitting of single mechanical parts, but it can be directly extended to the fitting of parts’ assemblies. It is particularly interesting in the context of Industry 4.0 for updating and maintaining the coherence of digital twins with respect to the evolution of the associated physical products and systems. PubDate: 2022-04-22
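As a reminder of the mechanics being tuned, simulated annealing perturbs the CAD parameter vector and accepts worse fits with a temperature-dependent probability; the setting parameters the paper's DOE-based database is meant to choose (initial temperature, cooling rate, step sizes, and so on) appear explicitly in a sketch like the one below. The deviation function and values are placeholders, not the paper's implementation.

```python
import math
import random

def simulated_annealing(deviation, params, step, t0=1.0, cooling=0.95,
                        n_iter=2000):
    """Minimize deviation(params) between a parametric CAD model and a
    point cloud. t0, cooling and step are exactly the kind of setting
    parameters whose default values the paper seeks to identify."""
    best = cur = list(params)
    best_val = cur_val = deviation(cur)
    temp = t0
    for _ in range(n_iter):
        cand = [p + random.gauss(0.0, s) for p, s in zip(cur, step)]
        val = deviation(cand)
        # accept improvements always, deteriorations with Boltzmann probability
        if val < cur_val or random.random() < math.exp((cur_val - val) / temp):
            cur, cur_val = cand, val
            if val < best_val:
                best, best_val = cand, val
        temp *= cooling
    return best, best_val

# toy stand-in for "deviation between point cloud and CAD model"
dev = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
print(simulated_annealing(dev, [0.0, 0.0], step=[0.5, 0.5]))
```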
Abstract: In the target matrix optimization paradigm (TMOP), it has long been understood that one must create a set of target matrices before the mesh can be optimized, but there is still no general method to create correct, effective targets in response to a specific mesh quality improvement goal. The TMOP literature describes how certain sets of target matrices can be used to control the shape or size of mesh elements, but those examples address only a fraction of the problems that can occur in mesh quality improvement and were not derived from a general framework for target matrix construction. In this work, a general method of target construction is introduced based on an independent set of geometric parameters that are intrinsic to the Jacobian matrices upon which TMOP is based. The parameters enable a systematic approach to target definition and construction. The approach entails two parts. The first part defines correspondences between available primary data (information about the mesh and/or the physical solution) and secondary data (e.g., a field of error estimates). Once the correspondences are established, the primary data are processed into intermediate field data existing on mesh sample points. The second part creates a model that represents the values of the geometric target parameters as functions of the secondary data. The model is then tested numerically to establish model constants and effectiveness. This systematic approach to target construction is illustrated in a set of examples to show how it can be applied to common problems in mesh optimization such as equalization of geometric properties, preservation of existing good quality, and adaptation of the mesh to the physical solution. The result is a systematic method of target construction for TMOP that can be applied to a wide variety of planar and volume mesh quality improvement tasks. PubDate: 2022-04-20
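The "independent set of geometric parameters" can be pictured through a factorization of the target Jacobian; one common choice in 2D (shown here as an illustration, not necessarily the exact parameterization the paper adopts) separates size, orientation, skewness, and aspect ratio:

\[
W \;=\; \zeta \, R(\theta)\, Q(\phi)\, D(r), \qquad
R = \begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix},\quad
Q = \frac{1}{\sqrt{\sin\phi}}\begin{pmatrix}1 & \cos\phi\\ 0 & \sin\phi\end{pmatrix},\quad
D = \begin{pmatrix}\sqrt{r} & 0\\ 0 & 1/\sqrt{r}\end{pmatrix},
\]

with \(\zeta>0\) the size, \(\theta\) the orientation angle, \(\phi\) the skew angle, and \(r\) the aspect ratio. Since \(\det Q = \det D = 1\), the element area is controlled by \(\zeta\) alone, which is what makes such parameters independently assignable from primary data such as error estimates.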
Abstract: The ‘Grey-Box-Processing’ method presented in this article allows for the integration of simulated and experimental data sets with the overall objective of a comprehensive validation of simulation methods and models. This integration leads to so-called hybrid data sets, which allow for a spatially and temporally resolved identification and quantitative assessment of deviations between experimental observations and the results of corresponding finite element simulations in the field of vehicle safety. This is achieved by the iterative generation of a synthetic, dynamic solution corridor in the finite element domain, which is deduced from experimental observations and restricts the freedom of movement of a virtually analyzed structure. The hybrid data sets thus contain physically based information about the interaction (e.g. acting forces) between the solution corridor and the virtually analyzed structure. An additional result of the ‘Grey-Box-Processing’ is the complemented three-dimensional reconstruction of incomplete experimental observations (e.g. two-dimensional X-ray movies). The extensive data sets can be used not only for assessing the similarity between experiment and simulation, but also for efficiently deriving improvement measures to increase the predictive power of the model or method used, if necessary. In this study, the approach is presented in detail. Simulation-based investigations are conducted using generic test setups as well as realistic pedestrian safety test cases. These investigations show the general applicability of the method as well as the significant informative value and interpretability of the generated hybrid data sets. PubDate: 2022-04-18
Abstract: In this paper, an efficient method is presented for the bending, buckling, and vibration analysis of functionally graded (FG) nanobeams based on nonlocal elasticity theory and layerwise theory. The present method takes into account the transverse shear and normal strains of the nanobeam as well as the small-scale effect in modeling the mechanical behavior of nanobeams. The mechanical properties are assumed to vary continuously through the thickness of the nanobeam. The equations of motion are derived according to Eringen’s nonlocal elasticity theory and Hamilton’s principle. An analytical solution is presented for the bending, vibration, and buckling analysis of FG nanobeams under various boundary conditions. The results predicted by the proposed theory are validated by comparison with the results of other theories available in the literature. Numerical results are presented for the bending, natural frequencies, and buckling loads of functionally graded nanobeams. In addition to flexural vibration modes, the thickness modes and their natural frequencies are also predicted by the present theory. The effects of parameters such as the length-to-thickness ratio, FG power-law index, nonlocal parameter, boundary conditions, and number of numerical layers on the bending, natural frequency, and critical buckling load are investigated. The present theory is seen to be an efficient and accurate method for predicting the vibration, buckling, and bending of nanobeams. PubDate: 2022-04-16
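For reference, the small-scale effect enters through Eringen’s differential form of the nonlocal constitutive law, which for the axial stress of a beam reads

\[
\sigma_{xx} - (e_0 a)^2 \,\frac{\partial^2 \sigma_{xx}}{\partial x^2} \;=\; E\,\varepsilon_{xx},
\]

where \(e_0 a\) is the nonlocal parameter whose influence on the frequencies and buckling loads the parametric study tracks; setting \(e_0 a = 0\) recovers local elasticity.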
Abstract: Identification and fitting are important tasks in reverse engineering and virtual/augmented reality. Compared to traditional approaches, carrying out such tasks with deep learning-based methods offers much room for exploitation. This paper presents SMA-Net (Spatial Merge Attention Network), a novel deep learning-based end-to-end bottom-up architecture, specifically focused on the fast identification and fitting of CAD models from point clouds. The network is composed of three parts whose strengths are clearly highlighted: a voxel-based multi-resolution feature extractor, a spatial merge attention mechanism, and a multi-task head. It is trained with both virtually generated point clouds and as-scanned ones created from multiple instances of CAD models, themselves obtained with randomly generated parameter values. Using this data generation pipeline, the proposed approach is validated on two different data sets that have been made publicly available: a robot data set for Industry 4.0 applications and a furniture data set for virtual/augmented reality. Experiments show that this reconstruction strategy achieves compelling and accurate results at very high speed, and that it is very robust on real data obtained, for instance, by laser scanner and Kinect. PubDate: 2022-04-13
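A voxel-based extractor at the front of such a pipeline starts from a plain occupancy grid; below is a minimal sketch of that preprocessing step (the grid resolution and normalization are illustrative choices, not SMA-Net's published settings).

```python
import numpy as np

def voxelize(points, res=32):
    """Convert an (N, 3) point cloud into a res^3 occupancy grid,
    after normalizing the cloud into the unit cube."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scaled = (points - mins) / (maxs - mins).max()      # keep aspect ratio
    idx = np.clip((scaled * (res - 1)).astype(int), 0, res - 1)
    grid = np.zeros((res, res, res), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

cloud = np.random.rand(5000, 3)                          # stand-in scan
occupancy = voxelize(cloud)
print(occupancy.shape, occupancy.sum())
```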
Abstract: This study proposes a new metaheuristic algorithm called sand cat swarm optimization (SCSO), which mimics the survival behavior of sand cats in nature. These cats are able to detect low frequencies below 2 kHz and also have an incredible ability to dig for prey. The proposed algorithm, inspired by these two features, consists of two main phases (search and attack). The algorithm controls the transitions between the exploration and exploitation phases in a balanced manner and performs well in finding good solutions with fewer parameters and operations. This is accomplished by finding the direction and speed of the appropriate movements with the defined adaptive strategy. The SCSO algorithm is tested on 20 well-known test functions along with the 10 modern complex test functions of the CEC2019 benchmark, and the obtained results are compared with those of famous metaheuristic algorithms. According to the results, SCSO found the best solution in 63.3% of the test functions. Moreover, the SCSO algorithm is applied to seven challenging engineering design problems: welded beam design, tension/compression spring design, pressure vessel design, piston lever, speed reducer design, three-bar truss design, and cantilever beam design. The obtained results show that SCSO performs successfully in terms of convergence rate and in locating all or most of the local/global optima, and outperforms the other compared methods. PubDate: 2022-04-11
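Schematically, the two phases alternate under a control parameter that decays over the iterations (the paper links this to the cats' 2 kHz hearing threshold); the sketch below shows that search/attack structure with simplified update rules that merely stand in for the paper's exact equations.

```python
import numpy as np

def scso_like(obj, dim, n_cats=30, n_iter=200, lb=-10.0, ub=10.0, seed=0):
    """Two-phase swarm sketch: a sensitivity-style parameter decays from
    2 to 0; large values trigger exploration (search), small ones
    exploitation (attack toward the best cat). Rules simplified."""
    rng = np.random.default_rng(seed)
    cats = rng.uniform(lb, ub, (n_cats, dim))
    best = min(cats, key=obj).copy()
    for t in range(n_iter):
        r_g = 2.0 - 2.0 * t / n_iter                    # decaying sensitivity range
        for i in range(n_cats):
            r = r_g * rng.random()
            if abs(2 * r_g * rng.random() - r_g) > 1:   # search phase
                mate = cats[rng.integers(n_cats)]
                cats[i] = np.clip(r * (mate - rng.random() * cats[i]), lb, ub)
            else:                                       # attack phase
                cats[i] = np.clip(best - r * rng.random() * (best - cats[i]),
                                  lb, ub)
            if obj(cats[i]) < obj(best):
                best = cats[i].copy()
    return best, obj(best)

sphere = lambda x: float(np.sum(x ** 2))
print(scso_like(sphere, dim=5))
```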
Abstract: Understanding the propagation of waves and their scattering characteristics is critical in various scientific and engineering domains. While the majority of existing work is based on numerical approaches, their high computational cost and discontinuity within the overall engineering workflow raise the need to overcome these obstacles so that the methods can be fully utilized in an interactive and end-to-end manner. In this study, we propose a deep learning approach that can simulate wave propagation and scattering phenomena precisely and efficiently. In particular, we present methods of incorporating physics-based knowledge into the deep learning framework to give the learning process strong inductive biases regarding wave propagation and scattering behaviors. We demonstrate that the proposed method can successfully produce physically valid wave field trajectories induced by random scattering objects. We show through quantitative and qualitative evaluation from various angles that the proposed physics-informed strategy yields significantly better predictions than purely data-driven methods. Subsequently, we assess the computational efficiency of the proposed method as a neural engine, showing that it can significantly accelerate the scientific simulation process compared to the numerical method. Our study demonstrates the potential of the proposed physics-informed approach to be utilized for real-time, accurate, and interactive scientific analyses in a wide variety of engineering and application disciplines. PubDate: 2022-04-09
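One standard way to inject such an inductive bias (shown here generically; the paper's exact formulation may differ) is to penalize the residual of the governing wave equation \(u_{tt} = c^2 \nabla^2 u\) on the network's predicted field, added to the usual data loss so that predicted trajectories stay physically valid away from the training samples:

```python
import numpy as np

def wave_residual_loss(u, dx, dt, c=1.0):
    """Physics-informed penalty: mean squared residual of
    u_tt - c^2 (u_xx + u_yy) for a predicted field u of shape (T, X, Y),
    computed with second-order central differences on interior points."""
    u_tt = (u[2:, 1:-1, 1:-1] - 2 * u[1:-1, 1:-1, 1:-1]
            + u[:-2, 1:-1, 1:-1]) / dt**2
    u_xx = (u[1:-1, 2:, 1:-1] - 2 * u[1:-1, 1:-1, 1:-1]
            + u[1:-1, :-2, 1:-1]) / dx**2
    u_yy = (u[1:-1, 1:-1, 2:] - 2 * u[1:-1, 1:-1, 1:-1]
            + u[1:-1, 1:-1, :-2]) / dx**2
    return float(np.mean((u_tt - c**2 * (u_xx + u_yy)) ** 2))

# an exact plane-wave solution gives a near-zero residual
dx = dt = 0.01
t, x, y = np.meshgrid(np.arange(0, 1, dt), np.arange(0, 1, dx),
                      np.arange(0, 1, dx), indexing="ij")
u = np.sin(2 * np.pi * (x - t))        # satisfies u_tt = u_xx + u_yy
print(wave_residual_loss(u, dx, dt))
```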
Abstract: In this paper, an efficient deep unsupervised learning (DUL)-based framework is proposed to directly perform the design optimization of truss structures under multiple constraints for the first time. Herein, the members’ cross-sectional areas are parameterized using a deep neural network (DNN) with the middle spatial coordinates of the truss elements as input data. The parameters of the network, including weights and biases, are regarded as the decision variables of the structural optimization problem, instead of the members’ cross-sectional areas as in traditional optimization algorithms. A new loss function for the network model is constructed to minimize the total structural weight while satisfying all constraints of the optimization problem via unsupervised learning. To achieve the optimal parameters, the proposed model is trained to minimize the loss function by a combination of a standard gradient optimizer and the backpropagation algorithm. As soon as the learning process ends, the optimum weight of the truss structure is obtained without resorting to any time-consuming metaheuristic algorithms. Several illustrative examples are investigated to demonstrate that the proposed framework requires a much lower computational cost than other conventional methods while still providing high-quality optimal solutions. PubDate: 2022-04-08
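The key construction is the loss: the DNN's outputs are the member areas, and total weight plus penalized constraint violations is minimized directly by gradient descent. Below is a minimal sketch under stated assumptions; the network size, penalty form, and stubbed constraint function are illustrative, not the paper's exact choices.

```python
import torch

# element midpoint coordinates (input) and member lengths of a toy truss
coords = torch.rand(10, 3)            # 10 members, 3D midpoints
lengths = torch.rand(10) + 1.0
rho, penalty = 1.0, 1e3

net = torch.nn.Sequential(            # maps coordinates -> cross-sectional areas
    torch.nn.Linear(3, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1), torch.nn.Softplus())   # Softplus keeps areas > 0

def constraint_violations(areas):
    """Stub: returns g(areas) > 0 where constraints (stress,
    displacement, ...) are violated. A real model would run a
    structural analysis here."""
    return torch.relu(0.5 - areas)    # placeholder: minimum-area constraint

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(500):
    areas = net(coords).squeeze(-1)
    weight = (rho * areas * lengths).sum()          # objective: total weight
    loss = weight + penalty * constraint_violations(areas).pow(2).sum()
    opt.zero_grad(); loss.backward(); opt.step()    # backprop trains the design

print(net(coords).squeeze(-1).detach())
```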
Abstract: Co-simulation is a widely used solution to enable the global simulation of a modular system via the composition of black-boxed simulators. Among co-simulation methods, the IFOSMONDI implicit iterative algorithm, previously introduced by the authors, enables us to solve the non-linear coupling function while keeping the smoothness of the interfaces without introducing a delay, and it automatically adapts the size of the steps between data exchanges among the subsystems according to the difficulty of solving the coupling constraint. This constraint was previously solved by a fixed-point algorithm, whereas this paper introduces the Jacobian-free methods version. Most implementations of Newton-like methods require a Jacobian matrix which, except in the zero-order-hold case, can be difficult to compute in the co-simulation context. As the IFOSMONDI coupling algorithm uses Hermite interpolation for smoothness enhancement, we propose a new formulation of the non-linear coupling function that includes both the values and the time-derivatives of the coupling variables. This formulation is well suited to solving the coupling through Jacobian-free Newton-type methods. Consequently, successive function evaluations consist of multiple simulations of the systems over a co-simulation time-step using rollback. The orchestrator-workers structure of the algorithm enables us to combine the PETSc framework on the orchestrator side for the non-linear Newton-type solvers with parallel integrations of the systems on the workers’ side thanks to MPI processes. Different non-linear methods are compared to one another and to the original fixed-point implementation on a newly proposed two-system academic test case with direct feedthrough on both sides. An industrial model is also considered to investigate the performance of the method. PubDate: 2022-04-08
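The Jacobian-free idea can be sketched with SciPy's Newton-Krylov solver: the residual of the coupling constraint is evaluated by re-running (rolling back) each subsystem over the current co-simulation step, and the Krylov solver only ever needs residual evaluations, never an explicit Jacobian. The two scalar "subsystems" below are made-up stand-ins for black-boxed simulators with direct feedthrough, not the paper's test case.

```python
import numpy as np
from scipy.optimize import newton_krylov

# toy subsystems with direct feedthrough: output depends on the input
def subsystem1(u):   # stands in for a rollback-capable simulator step
    return 2.0 - 0.5 * u

def subsystem2(u):
    return 1.0 + 0.25 * u

def coupling_residual(z):
    """z = (y1, y2): proposed outputs at the end of the co-sim step.
    Each residual evaluation replays both subsystems with the other's
    proposed output as input (the rollback of the iterative scheme)."""
    y1, y2 = z
    return np.array([subsystem1(y2) - y1,
                     subsystem2(y1) - y2])

# Jacobian-free Newton-Krylov: only residual evaluations are required
z0 = np.array([0.0, 0.0])
print(newton_krylov(coupling_residual, z0, f_tol=1e-12))
```

In the actual method the residual formulation also involves the time-derivatives of the coupling variables, so that the Hermite interpolation preserves \(C^1\) smoothness across communication times.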
Abstract: This paper presents virtual element method (VEM) applications for the topology optimization of non-Newtonian fluid-flow problems in arbitrary two-dimensional domains. The objective is to design an optimal layout for the incompressible non-Newtonian fluid flow, governed by the Navier–Stokes–Brinkman equations, to minimize the viscous drag. The porosity approach is used in the topology optimization formulation. The VEM is used to solve the governing boundary value problem. The key feature distinguishing the VEM from the classical finite element method is that the local basis functions in the VEM are only implicitly known. Instead, the VEM uses local projection operators to describe each element’s rigid body motion and constant strain components. Therefore, the VEM can handle meshes with arbitrarily shaped elements. Several numerical examples are provided to demonstrate the efficacy and efficiency of the VEM for the topology optimization of fluid-flow problems. A MATLAB code for reproducing the results provided in this paper is freely available at https://github.com/mampueros/VEM_TopOpt_FluidFlow. PubDate: 2022-04-07
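In the porosity approach the abstract refers to, a design field \(\gamma \in [0,1]\) penalizes velocity in "solid" regions through a Brinkman term. A common statement of the governing equations, following Borrvall and Petersson's interpolation (which may differ in detail from the paper's), is

\[
\rho\,(\mathbf{u}\cdot\nabla)\mathbf{u} \;-\; \nabla\cdot\big(2\mu\,\varepsilon(\mathbf{u})\big) \;+\; \alpha(\gamma)\,\mathbf{u} \;+\; \nabla p = \mathbf{0}, \qquad \nabla\cdot\mathbf{u} = 0,
\]
\[
\alpha(\gamma) = \alpha_{\max} + (\alpha_{\min} - \alpha_{\max})\,\gamma\,\frac{1+q}{\gamma+q},
\]

where \(\alpha\) is large in solid regions (\(\gamma = 0\)) and vanishes in fluid regions (\(\gamma = 1\)), and \(q\) controls the convexity of the interpolation; in the non-Newtonian case \(\mu\) additionally depends on the shear rate.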