Authors: Xu H; Zhao G; Liu Y; et al.
First page: 061001
Abstract: Aiming at the problem of smoothness in B-spline curve interpolation, an improved parameterized interpolation method based on a modified chord length is proposed. We construct a series of interpolation arcs using the relationship between the chord length and chord angle of the given data points and then calculate the global knot parameters by replacing the chord length with the arc length. In addition, we propose a curve smoothness index based on the relationship between the radius of curvature and the cumulative curve length, and use it to compare the proposed method with other classical methods for constructing cubic B-spline curves in the tests; the deviation error is also used to evaluate the swing of the curve. Furthermore, two sets of point cloud data are used to test surface interpolation under the different parameterization methods, and Gauss curvature maps are used to evaluate the smoothness of the interpolated surfaces. The results show that the proposed method performs better than the other methods, and the constructed curves and surfaces maintain good quality.
PubDate: Tue, 10 May 2022 00:00:00 GMT
DOI: 10.1115/1.4054089
Issue No: Vol. 22, No. 6 (2022)
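The parameterization step described in this abstract can be sketched in plain Python. This is a minimal, illustrative approximation, not the authors' exact method: it computes standard cumulative chord-length knot parameters, then applies a chord-to-arc correction `arc = c * theta / sin(theta)` (the arc length of a circular arc with chord `c` and half central angle `theta`), where the local angle is estimated from the turning angle at neighboring data points. The function names and the heuristic choice of `theta` are assumptions for illustration only.

```python
import math


def chord_lengths(pts):
    # Euclidean length of each chord between consecutive data points
    return [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]


def turning_angle(a, b, c):
    # Angle between the directions a->b and b->c
    v1 = (b[0] - a[0], b[1] - a[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0.0 or n2 == 0.0:
        return 0.0
    cosang = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    return math.acos(max(-1.0, min(1.0, cosang)))


def modified_chord_parameters(pts):
    """Normalized knot parameters from arc-length-corrected chords (illustrative)."""
    chords = chord_lengths(pts)
    lengths = []
    for i, c in enumerate(chords):
        # Estimate half the central angle of a local circular arc from
        # the turning angles at the chord's endpoints (a heuristic choice).
        th = 0.0
        if i > 0:
            th = max(th, turning_angle(pts[i - 1], pts[i], pts[i + 1]) / 2)
        if i + 2 < len(pts):
            th = max(th, turning_angle(pts[i], pts[i + 1], pts[i + 2]) / 2)
        # Replace the chord length with the arc length of that local arc.
        lengths.append(c * th / math.sin(th) if th > 1e-9 else c)
    total = sum(lengths)
    t, params = 0.0, [0.0]
    for seg in lengths:
        t += seg / total
        params.append(t)
    return params
```

For points sampled on a circle, the corrected lengths all grow by the same factor, so the parameters stay close to the plain chord-length ones; the correction matters where chord angles vary sharply between segments.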
Authors: Tong Y; Liang Y; Spasic I; et al.
First page: 061002
Abstract: User experience (UX) analysis is essential for designers and companies when optimizing products or services, as it can help designers uncover valuable information, such as the hedonic and pragmatic qualities of a UX. While previous research has described conventional methods of UX analysis, such as surveys or subjective determination, this paper proposes a data-driven methodology to automatically integrate hedonic and pragmatic qualities of UX from online customer reviews. The proposed methodology comprises the following steps. First, we combined a corpus-based approach, a dictionary-based approach, and word embeddings to generate a lexicon of hedonic and pragmatic qualities. Second, we filtered out the sentences that contained no hedonic or pragmatic information and classified the remaining review sentences. Third, we extracted and clustered the UX elements (such as product features, context information, and context clustering). Finally, we scored each UX element based on its hedonic or pragmatic qualities and compared the results against previous UX modeling. This study integrates hedonic and pragmatic qualities to enrich UX modeling. For a product designer, the UX analysis results may highlight a requirement to optimize product design. They may also reveal a potential market opportunity in a UX state that customers perceive as unmet by most current products. This research also examines the valuable relationship between UX and online customer reviews to support the prospective planning of customer strategy and design activities.
PubDate: Tue, 10 May 2022 00:00:00 GMT
DOI: 10.1115/1.4054155
Issue No:Vol. 22, No. 6 (2022)
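The filtering and classification steps described in this abstract can be sketched with a toy lexicon lookup. The mini-lexicons, function names, and tie-breaking rule below are hypothetical stand-ins: the paper builds its lexicon from corpus-based, dictionary-based, and word-embedding methods, whereas this sketch just hard-codes a few seed words to show the pipeline shape.

```python
# Hypothetical mini-lexicons; the paper derives these automatically
# from corpora, dictionaries, and word embeddings.
HEDONIC = {"fun", "beautiful", "exciting", "stylish"}
PRAGMATIC = {"reliable", "fast", "easy", "durable"}


def classify_sentence(sentence):
    """Return 'hedonic', 'pragmatic', or None (no UX-quality information)."""
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    h = len(words & HEDONIC)
    p = len(words & PRAGMATIC)
    if h == 0 and p == 0:
        return None  # filtered out, as in the paper's second step
    return "hedonic" if h >= p else "pragmatic"


def score_reviews(sentences):
    """Keep classifiable sentences and tally hedonic/pragmatic counts."""
    kept = [(s, c) for s in sentences if (c := classify_sentence(s))]
    counts = {"hedonic": 0, "pragmatic": 0}
    for _, c in kept:
        counts[c] += 1
    return kept, counts
```

A real system would score clustered UX elements rather than raw sentences, but the filter-then-classify flow is the same.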
Authors: Feng SC; Lu Y; Jones AT; et al.
First page: 061003
Abstract: Recently, the number and types of measurement devices that collect data used to monitor laser-based powder bed fusion of metals and to inspect additively manufactured metal parts have increased rapidly. Each measurement device generates data in its own coordinate system and format. Data alignment is the process of spatially aligning different datasets to a single coordinate system; it is part of a broader process called "data registration." This paper provides a data registration procedure and includes an example of aligning data to a single reference coordinate system. Such a reference coordinate system is needed for downstream applications, including data analytics, artificial intelligence, and part qualification.
PubDate: Tue, 10 May 2022 00:00:00 GMT
DOI: 10.1115/1.4054202
Issue No: Vol. 22, No. 6 (2022)
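The core alignment operation described here, mapping each device's data into one reference coordinate system, is a rigid transform. The minimal 2D sketch below assumes the rotation angle and translation for a given device are already known (e.g., from fiducial calibration); the paper's procedure covers the broader registration workflow, and the function name is an assumption for illustration.

```python
import math


def to_reference(points, theta, tx, ty):
    """Map 2D points from a device coordinate system into the reference
    frame via a rigid transform: rotate by theta, then translate by (tx, ty).
    The transform parameters are assumed known from calibration."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]
```

With each device's dataset pushed through its own `to_reference` transform, all measurements land in the single reference frame that downstream analytics expect.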
Authors: Qiu Y; Jin Y.
First page: 061004
Abstract: In this study, extractive summarization using sentence embeddings generated by finetuned Bidirectional Encoder Representations from Transformers (BERT) models and the k-means clustering method is investigated. To show how a BERT model can capture knowledge in specific domains, such as engineering design, and what it can produce after being finetuned on domain-specific datasets, several BERT models are trained, and the sentence embeddings extracted from the finetuned models are used to generate summaries of a set of papers. Different evaluation methods are then applied to measure the quality of the summarization results. Both the machine evaluation method Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and a human-based evaluation method are used in the comparison study. The results indicate that a BERT model finetuned with a larger dataset can generate summaries with more domain terminology than the pretrained BERT model. Moreover, the summaries generated by BERT models have more content overlapping with the original documents than those obtained through other popular non-BERT-based models. The experimental results indicate that the BERT-based method can provide better and more informative summaries to engineers. They also demonstrate that the contextualized representations generated by BERT-based models can capture information in text and perform better in applications such as text summarization after being trained on domain-specific datasets.
PubDate: Tue, 10 May 2022 00:00:00 GMT
DOI: 10.1115/1.4054203
Issue No: Vol. 22, No. 6 (2022)
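The extractive step in this abstract, clustering sentence embeddings with k-means and picking the sentence nearest each centroid, can be sketched without any BERT dependency. The tiny 2D vectors below stand in for real sentence embeddings, and the deterministic first-k centroid initialization is a simplification chosen for reproducibility, not what a production k-means would use.

```python
import math


def kmeans(vecs, k, iters=20):
    """Plain k-means; deterministic init using the first k vectors (toy choice)."""
    cents = [list(v) for v in vecs[:k]]
    assign = [0] * len(vecs)
    for _ in range(iters):
        # Assign each vector to its nearest centroid.
        assign = [min(range(k), key=lambda j: math.dist(v, cents[j])) for v in vecs]
        # Move each centroid to the mean of its members.
        for j in range(k):
            members = [v for v, a in zip(vecs, assign) if a == j]
            if members:
                cents[j] = [sum(d) / len(members) for d in zip(*members)]
    return cents, assign


def extract_summary(sentences, embeddings, k):
    """Pick the sentence closest to each cluster centroid, in document order."""
    cents, assign = kmeans(embeddings, k)
    picks = []
    for j in range(k):
        idxs = [i for i, a in enumerate(assign) if a == j]
        if idxs:
            picks.append(min(idxs, key=lambda i: math.dist(embeddings[i], cents[j])))
    return [sentences[i] for i in sorted(picks)]
```

In the paper's pipeline, `embeddings` would come from a finetuned BERT model, so sentences carrying similar domain content cluster together and one representative per cluster forms the summary.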
Authors: Tran A; Wildey T; Sun J; et al.
First page: 061005
Abstract: Integrated computational materials engineering (ICME) models have been a crucial building block for modern materials development, relieving the heavy reliance on experiments and significantly accelerating the materials design process. However, ICME models are also computationally expensive, particularly with respect to time integration for dynamics, which hinders the ability to study statistical ensembles and thermodynamic properties of large systems over long time scales. To alleviate this computational bottleneck, we propose to model the evolution of statistical microstructure descriptors as a continuous-time stochastic process using a nonlinear Langevin equation, where the probability density function (PDF) of the statistical microstructure descriptors, which are also the quantities of interest (QoIs), is modeled by the Fokker–Planck equation. We discuss how to calibrate the drift and diffusion terms of the Fokker–Planck equation from both theoretical and computational perspectives. The calibrated Fokker–Planck equation can then be used as a stochastic reduced-order model to simulate the evolution of the statistical microstructure descriptors' PDF. Considering the statistical microstructure descriptors in the microstructure evolution as QoIs, we demonstrate the proposed methodology in three ICME models: kinetic Monte Carlo, phase field, and molecular dynamics simulations.
PubDate: Tue, 10 May 2022 00:00:00 GMT
DOI: 10.1115/1.4054237
Issue No: Vol. 22, No. 6 (2022)
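The reduced-order simulation described in this abstract can be sketched with an Euler–Maruyama integrator for a Langevin equation dX = f(X) dt + g(X) dW, whose PDF evolves under the corresponding Fokker–Planck equation. The drift and diffusion below (an Ornstein–Uhlenbeck-like relaxation toward a fixed point) are toy stand-ins for the paper's calibrated terms, used only to show how cheap sampling from the surrogate is once f and g are known.

```python
import math
import random


def euler_maruyama(drift, diffusion, x0, dt, steps, seed=0):
    """Simulate dX = drift(X) dt + diffusion(X) dW with Euler-Maruyama.
    Once drift and diffusion are calibrated, paths of a statistical
    microstructure descriptor can be sampled at negligible cost
    compared with a full ICME simulation (illustrative)."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment
        x = x + drift(x) * dt + diffusion(x) * dw
        path.append(x)
    return path


# Toy "calibrated" terms: relaxation toward 1.0 with small constant noise.
path = euler_maruyama(lambda x: -(x - 1.0), lambda x: 0.1,
                      x0=5.0, dt=0.01, steps=2000)
```

Averaging many such paths (or solving the Fokker–Planck equation directly) gives the descriptor's PDF over time without rerunning the kinetic Monte Carlo, phase field, or molecular dynamics model.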